All Your Them Are One Model To Us Learn

Roger Critchlow-2
Apropos my babbling about all patterns being patterns, and all the mechanisms that recognize patterns being an incomprehensible jumble of mechanisms, Google has shared "One Model to Learn Them All" (https://arxiv.org/abs/1706.05137), in which it turns out that throwing the architectural elements from all kinds of deep learning into a single model ends up working pretty well.  Adding the sequence-processing elements used to parse language never hurts, and mostly improves, the performance of image classifiers and object recognizers.  Go figure.
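
To make the "throw everything in" idea concrete, here is a toy sketch (illustrative only, with invented shapes and hyperparameters; it is nothing like the paper's actual MultiModel) of bolting a sequence-style attention block onto an ordinary convolutional image classifier, in PyTorch:

import torch
import torch.nn as nn

# Toy "kitchen sink" classifier: convolutional features get treated as a
# token sequence and run through self-attention, the way language models do.
class KitchenSinkClassifier(nn.Module):
    def __init__(self, num_classes=10, channels=64):
        super().__init__()
        # vision ingredient: plain convolutional feature extractor
        self.conv = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
        )
        # language ingredient: self-attention over the spatial positions
        # (batch_first needs a reasonably recent PyTorch)
        self.attn = nn.MultiheadAttention(embed_dim=channels, num_heads=4,
                                          batch_first=True)
        self.head = nn.Linear(channels, num_classes)

    def forward(self, x):                       # x: (batch, 3, H, W)
        f = self.conv(x)                        # (batch, C, H, W)
        tokens = f.flatten(2).transpose(1, 2)   # (batch, H*W, C)
        attended, _ = self.attn(tokens, tokens, tokens)
        return self.head(attended.mean(dim=1))  # (batch, num_classes)

# e.g. logits = KitchenSinkClassifier()(torch.randn(8, 3, 32, 32))

The only point of the sketch is the shape of the idea: spatial positions become a token sequence, so the same attention machinery built for language can sit on top of the convolutional features without breaking anything.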

-- rec --


Re: All Your Them Are One Model To Us Learn

Marcus G. Daniels

Figure 1:  The machines are coming!  

 

Re: All Your Them Are One Model To Us Learn

Merle Lefkoff-2
In reply to this post by Roger Critchlow-2
Wow, Roger.  I don't get all this, but tell me---does this change everything for modelers?

--
Merle Lefkoff, Ph.D.
President, Center for Emergent Diplomacy
emergentdiplomacy.org
Santa Fe, New Mexico, USA

Visiting Professor in Integrative Peacebuilding
Saint Paul University
Ottawa, Ontario, Canada

[hidden email]
mobile:  (303) 859-5609
skype:  merle.lelfkoff2

Re: All Your Them Are One Model To Us Learn

Marcus G. Daniels

Table 3 suggests that there are general information-processing features that transfer across domains.  Table 4 suggests that becoming an expert in many things doesn't make you (much) worse an expert in any one of them.  Don't pull your kid out of liberal arts college just yet?  And weirdly different domains, too.  Perhaps some natural modularity emerges from the contrasting training sets, even without the attention mechanism?
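
One way to caricature that joint-training setup in code (a hedged sketch, not the authors' implementation; the trunk, heads, task names, and sizes are all invented): one shared body, a small head per task, and batches interleaved across tasks so every task's gradients pass through the same trunk.

import torch
import torch.nn as nn

# One shared trunk, one small head per task; training interleaves batches
# from the different tasks, so all of them shape the same shared weights.
trunk = nn.Sequential(nn.Linear(128, 256), nn.ReLU(),
                      nn.Linear(256, 256), nn.ReLU())
heads = nn.ModuleDict({
    "translate": nn.Linear(256, 32000),   # vocabulary-sized output (invented)
    "imagenet":  nn.Linear(256, 1000),    # 1000 image classes
    "parse":     nn.Linear(256, 64),      # some small tag set
})
opt = torch.optim.Adam(list(trunk.parameters()) + list(heads.parameters()), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(task, features, labels):
    """One interleaved step: this task's gradients flow through the shared trunk."""
    opt.zero_grad()
    loss = loss_fn(heads[task](trunk(features)), labels)
    loss.backward()
    opt.step()
    return loss.item()

# e.g. train_step("imagenet", torch.randn(16, 128), torch.randint(0, 1000, (16,)))

In terms of the sketch, Table 4's result would read: the loss on any single task barely suffers from all the other tasks sharing the trunk.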

 

Re: All Your Them Are One Model To Us Learn

Roger Critchlow-2
In reply to this post by Merle Lefkoff-2
Different kind of model, Merle: for a deep neural net, the model is the architecture of untrained layers and the connections between them.  Then you train the model on a discrimination task, like distinguishing labradoodles from fried chicken.  So this doesn't mean much to simulation modelers for the moment.
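
A minimal sketch of that distinction, assuming PyTorch, with random noise standing in for the labradoodle and fried-chicken photos:

import torch
import torch.nn as nn

model = nn.Sequential(                        # the "model": untrained layers + connections
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),                         # 2 classes: labradoodle, fried chicken
)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(32, 3, 64, 64)           # stand-in images
labels = torch.randint(0, 2, (32,))           # stand-in labels

for step in range(100):                       # training: fit the weights to the task
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()

The nn.Sequential object above the loop is the model in this sense; the loop is the training that turns it into a labradoodle/fried-chicken discriminator.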

-- rec --

[inline image]

Re: All Your Them Are One Model To Us Learn

Marcus G. Daniels

The last thing you want is your personal assistant robot putting your Labradoodle in the refrigerator.

 

Re: All Your Them Are One Model To Us Learn

gepr
On 06/20/2017 11:15 AM, Marcus Daniels wrote:
> The last thing you want is your personal assistant robot putting your Labradoodle in the refrigerator.

But what if you want to eat the other half later?

--
☣ glen

Re: All Your Them Are One Model To Us Learn

Marcus G. Daniels
In reply to this post by Roger Critchlow-2

Roger writes:

 

“So this doesn't mean much to simulation modelers for the moment.”

 

Consider predicting a frame of 24-bit color values instead of a binary scalar (Labradoodle vs. Chicken).  With such an encoding, one could possibly 'find' the equations of motion in the learned neural net by watching a Labradoodle run.  In that sense I think it is of interest to simulation modelers.  Agent rules for free.
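
A rough sketch of what that could look like (illustrative code only, not from the paper; the network, loss, and the stand-in "video" tensor are all invented): train a small convolutional net to map frame t to frame t+1, then go hunting for the dynamics inside the learned map.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Next-frame prediction: frame_t -> predicted frame_{t+1}, RGB in [0, 1].
next_frame = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=3, padding=1), nn.Sigmoid(),
)
opt = torch.optim.Adam(next_frame.parameters(), lr=1e-3)

video = torch.rand(100, 3, 64, 64)            # stand-in for a labradoodle video
for t in range(99):
    opt.zero_grad()
    pred = next_frame(video[t:t+1])           # predict frame t+1 from frame t
    loss = F.mse_loss(pred, video[t+1:t+2])
    loss.backward()
    opt.step()
# Probing how the prediction shifts as the input frame is perturbed is one
# hand-wavy way to go looking for the dynamics the net has absorbed.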

 

Marcus

 

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove