This just turned up on hacker news:
[...] To this end we use unsupervised learning to train a deep contextual language model on 86 billion amino acids across 250 million sequences spanning evolutionary diversity. The resulting model maps raw sequences to representations of biological properties without labels or prior domain knowledge. The learned representation space organizes sequences at multiple levels of biological granularity from the biochemical to proteomic levels. [...]

Don't know if I have the energy to plow through the text.

-- rec --

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives back to 2003: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
Cool! “For synthetic biology, iteratively querying a model of the mutational fitness landscape could help efficiently guide the introduction of mutations to enhance protein function (Romero & Arnold, 2009), inform protein design using a combination of activating mutants (Hu et al., 2018), and make rational substitutions to optimize protein properties such as substrate specificity (Packer et al., 2017), stability (Tan et al., 2014), and binding (Ricatti et al., 2019).”

Get a few billion people to get full genome sequencing, and let the TPUs discover how we work! Everyone gets a custom cocktail to improve stamina, fight off cancer, etc. etc.

Marcus
I did have some energy and it was a pretty entertaining read.

So 7/8ths of the authors for this paper are at Facebook's AI group, though one gives an email address @gmail.com. The group that won the CASP13 (Critical Assessment of Structure Prediction) competition in December was from Google/DeepMind, as memorialized by https://moalquraishi.wordpress.com/2018/12/09/alphafold-casp13-what-just-happened/. The DeepMind model, called AlphaFold, did supervised learning of 3D structure coordinates from amino acid sequences. DeepMind has yet to publish a paper detailing the methods used by AlphaFold.

This model does unsupervised learning to predict a missing amino acid given the rest of the sequence: you plug in a new protein sequence of N amino acids and it spits out an amino acid probability distribution for each of the N positions, an N*25 dimensional vector that represents everything it learned from the training set. They report a series of tests that appear to support their claims; there doesn't appear to be any major cherry picking or data censoring involved in the tests. I'm not sure how they're encoding 25 amino acids, since Wikipedia is pretty sure that 22 is all there are in proteins.

But they don't actually extract the levels of organization from the model. They take the levels of organization as known facts and construct observations of the model that make predictions consistent with the levels. So if there are levels of organization as yet unidentified, they are at least as obscure in the model as they are in reality. And to claim that the levels of organization emerge from the model sort of ignores how much work went into constructing the observations.

On the other hand, one might be surprised that all these levels are implicit in the amino acid sequences, but life knew that already; that's why it only remembers the sequences.
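The interface Roger describes (sequence in, per-position distributions out) can be sketched in a few lines. This is a hypothetical toy, not the paper's model: the 25-symbol vocabulary is my guess (20 standard amino acids plus rare/ambiguity symbols), and the "prediction" here is just smoothed unigram counts, standing in for a large trained network.

```python
import numpy as np

# Hypothetical 25-symbol vocabulary: 20 standard amino acids plus
# X, B, Z, U, O (ambiguity/rare codes) -- a guess at how one reaches 25.
VOCAB = list("ACDEFGHIKLMNPQRSTVWYXBZUO")
IDX = {aa: i for i, aa in enumerate(VOCAB)}

def masked_predictions(seq, train_seqs):
    """Toy stand-in for the trained model: for each of the N positions,
    return a probability distribution over the 25 symbols. The real model
    conditions on the rest of the sequence; this toy just uses smoothed
    unigram counts from a tiny 'training set' to show the output shape."""
    counts = np.ones(len(VOCAB))  # add-one smoothing
    for s in train_seqs:
        for aa in s:
            counts[IDX[aa]] += 1
    unigram = counts / counts.sum()
    # N x 25 matrix: one distribution per position
    return np.tile(unigram, (len(seq), 1))

probs = masked_predictions("MKTAYIAK", ["MKKT", "AYIA", "MKAY"])
assert probs.shape == (8, 25)            # N positions x 25 symbols
assert np.allclose(probs.sum(axis=1), 1.0)  # each row is a distribution
```

The N*25 vector Roger mentions is just this matrix flattened; everything the model "knows" about a new sequence is encoded in those per-position distributions.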
The most complex model they fit learned 700 million parameters, and it wasn't overfit, so they're presumably gearing up to fit a series of bigger models to that exponentially growing database of known protein sequences. AlphaFold, meanwhile, is stuck working with the more slowly growing database of known protein 3D structures.

-- rec --
I can imagine Facebook friends sharing their Ancestry.com data. Facebook compiles all of that and sells services to insurance companies so that they can anticipate risk.
There’s no bound on the stupidity of Facebook users.

Marcus
Thanks VERY much for posting some digested material from the paper. What you say below seems to hearken back to what JonZ (or maybe JohnK?) said a while back, ... paraphrasing: that he would be hard-pressed to find something that organisms can do that can't be duplicated by a sequential machine.
That type of statement and yours below do not *imply* that an effect was NOT generated by a (semi)hierarchical structure. It merely implies something like the parallelism theorem: that anything a (semi)hierarchical system can do, a "flat" one can do (though perhaps with extra space or time costs). Am I reading your statement right?

On 5/2/19 12:02 PM, Roger Critchlow wrote:
> But they don't actually extract the levels of organization from the model. They take the levels of organization as known facts and construct observations of the model that make predictions consistent with the levels. [...]
> On the other hand, one might be surprised that all these levels are implicit in the amino acid sequences, but life knew that already, that's why it only remembers the sequences.

-- ☣ uǝlƃ
uǝʃƃ ⊥ glen
On the bounds of stupidity, there's at least a sucker born every minute, a large proportion of whom apparently benefit not at all from any kind of education.

A theoretical sequential machine, perhaps, that might melt a hole through the earth while simulating a cell.

The hierarchy in this case looks like linguistic compression to me, a way of summarizing results; the system is not depending on the levels of organization to work, we find levels convenient for explanations of how the system works.

-- rec --
About levels. I tried to post this but ran into the size problem.

-----------------------------------
Frank Wimberly
My memoir: https://www.amazon.com/author/frankwimberly
My scientific publications: https://www.researchgate.net/profile/Frank_Wimberly2
Phone (505) 670-9918
I tried to copy this mail that had the file attached:

We used Hearsay-II extensively as a model for how to do parallel, distributed applications in the Robotics Institute at Carnegie Mellon. It makes use of levels and communication among them: up, down, and within a level. Applications included factory automation, job shop scheduling, and others. As a speech-understanding system it was replaced by Harpy, which was faster.

Some will remember several other times that I have promoted this. I'm just trying to help.

-----------------------------------
Frank Wimberly
Phone (505) 670-9918
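The blackboard style Frank describes can be caricatured in a few lines. This is a hypothetical toy of my own, nothing like the real Hearsay-II machinery: hypotheses live on named levels, and knowledge sources may read and write any level, up, down, or within.

```python
# Toy blackboard: three levels, each holding hypothesis strings.
blackboard = {"phoneme": [], "word": [], "phrase": []}

def ks_phonemes_to_word(bb):
    # Bottom-up knowledge source: group phoneme hypotheses into a word.
    if bb["phoneme"]:
        bb["word"].append("".join(bb["phoneme"]))

def ks_phrase_verifies_phoneme(bb):
    # Top-down knowledge source: a phrase-level result disambiguates a
    # phoneme -- the level-hopping "Verify" behaviour discussed here.
    if "stop" in bb["phrase"] and "?" in bb["phoneme"]:
        bb["phoneme"][bb["phoneme"].index("?")] = "t"

blackboard["phoneme"] = ["s", "?", "o", "p"]   # one phoneme ambiguous
blackboard["phrase"] = ["stop"]                # phrase-level hypothesis
ks_phrase_verifies_phoneme(blackboard)         # top-down fix
ks_phonemes_to_word(blackboard)                # bottom-up grouping
assert blackboard["phoneme"] == ["s", "t", "o", "p"]
assert blackboard["word"] == ["stop"]
```

The point of the sketch is only that nothing forces a knowledge source to talk to adjacent levels; the "hierarchy" is a convention of the data layout, not a constraint on control flow.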
Excellent! Thanks. I think I've found my new .signature tagline: Exploring the bounds of stupidity since 1966! 8^)
On 5/2/19 4:23 PM, Roger Critchlow wrote:
> On the bounds of stupidity, there's at least a sucker born every minute, a large proportion of whom apparently benefit not at all from any kind of education. [...]
uǝʃƃ ⊥ glen
I remember skimming that paper before. The interesting question is the strictness (or "looseness") of the hierarchy. Figure 2 implies (e.g., "Verify") the ability to hop over entire levels. So the question boils down to whether or not it's really a hierarchy or something else, something like the subset of a power set of the primitives. I'm loosely analogizing with Koza's automatically defined functions (ADFs), where the operators can work over both the primitives and the "macros".
On 5/2/19 6:36 PM, Frank Wimberly wrote:
> We used the Hearsay-II extensively as a model for how to do parallel, distributed applications in the Robotics Institute at Carnegie Mellon. It makes use of levels and communication among them, up, down and within a level. [...]
uǝʃƃ ⊥ glen
WTH are you doing up at this hour?
WTH am I doing up at this hour? Hope you're back to sleep.

N

Nicholas S. Thompson
Emeritus Professor of Psychology and Biology
Clark University
http://home.earthlink.net/~nickthompson/naturaldesigns/
Roger writes:
"The hierarchy in this case looks like linguistic compression to me, a way of summarizing results, the system is not depending on the levels of organization to work, we find levels convenient for explanations of how the system works." Glen writes: "I'm loosely analogizing with Koza's automatically defined functions (ADFs) where the operators can work over both the primitives and the "macros". Considering how we go back-and-forth in long threads here, trying to sort out semantics, I don't see why it is reasonable to expect a generative learning approach to employ internal neurons that have easily interpretable meanings. Same with ADF: "Here's a program written in a private language, how does it work?" Like an Egyptologist trying to figure out hieroglyphics and understand the ancient culture. Marcus ============================================================ FRIAM Applied Complexity Group listserv Meets Fridays 9a-11:30 at cafe at St. John's College to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com archives back to 2003: http://friam.471366.n2.nabble.com/ FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove |
Right. But that's the point, I think. To what extent are semantics invariant across these supposed "levels"? My argument is that "levels" are figments of our imagination. The best we can say is that iteration constructs something that we find convenient to name: "level". But what reality is actually doing is mere aggregation and the meanings of the primitives are no different from the meanings of the aggregates.
These private algorithms are a fiction, allowed (in some abstracted idealism) by our general purpose computers. Constructive logic/math disallows, to some extent, that complete abstraction. Every constructive method will, at some point, puncture the "levels". More to the point, *every* model that includes (a priori or not) "levels" is false and will eventually be falsified.

You're asking something like "who cares?" or "does it matter?". Maybe not. But if we're trying to talk about consciousness and what a Turing machine knows or the difference between specific and general intelligence, it *is* reasonable, maybe even necessary.

On 5/4/19 10:50 AM, Marcus Daniels wrote:
> Considering how we go back-and-forth in long threads here, trying to sort out semantics, I don't see why it is reasonable to expect a generative learning approach to employ internal neurons that have easily interpretable meanings. [...]
uǝʃƃ ⊥ glen
On Sat, May 04, 2019 at 05:25:54PM -0700, glen∈ℂ wrote:
> Right. But that's the point, I think. To what extent are semantics invariant across these supposed "levels"? My argument is that "levels" are figments of our imagination. The best we can say is that iteration constructs something that we find convenient to name: "level". [...]

I don't think levels are just figments of imagination. Compression algorithms replace explicit descriptions with generative algorithms (like procedures or functions) that when called with appropriate parameters reproduce the original data. These generative descriptions have a tree-like structure, which is exactly the hierarchical structure you're after.

Obviously, there is no unique compression algorithm, nor even a unique best algorithm. But I suspect that the best compression algorithms will probably agree up to an isomorphism on the hierarchical structure for most compressible data sets (note that this is already a set of measure zero in the space of all data sets :). I don't have any data for my hunch, though.

--
----------------------------------------------------------------------------
Dr Russell Standish                     Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellow         [hidden email]
Economics, Kingston University          http://www.hpcoders.com.au
----------------------------------------------------------------------------
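Russell's point about compression inducing hierarchy can be illustrated with a byte-pair-style toy (my example, not anything from the thread): repeatedly merge the most frequent adjacent pair into a new symbol, and the resulting rules expand into other rules, a tree grown purely from regularity in the data.

```python
from collections import Counter

def bpe_grammar(seq, n_merges):
    """Toy byte-pair-style compression: replace the most frequent adjacent
    pair with a fresh symbol, n_merges times. Each rule expands into two
    symbols, which may themselves be rules -- a generative, tree-like
    description of the original data."""
    seq = list(seq)
    rules = {}
    for k in range(n_merges):
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        new = f"R{k}"
        rules[new] = (a, b)
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
                out.append(new); i += 2
            else:
                out.append(seq[i]); i += 1
        seq = out
    return seq, rules

compressed, rules = bpe_grammar("abababab", 2)
# "ab" becomes R0, then "R0 R0" becomes R1:
assert compressed == ["R1", "R1"]
assert rules == {"R0": ("a", "b"), "R1": ("R0", "R0")}
```

Expanding R1 gives a two-level tree over the primitives, and a different merge order would give an isomorphic structure for data this regular, which is roughly the hunch stated above.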
In reply to this post by gepr
If you're moving up from the phoneme to the word to the phrase level and some result in the last disambiguates between two phonemes, what difference does it make that you've violated some strict hierarchy?

The case can be made that Raj Reddy won the Turing Award based partly on his leadership of the Hearsay project. I think they were trying to get speech understanding to work more than they were trying to understand how humans do it.

Frank

-----------------------------------
Frank Wimberly
Phone (505) 670-9918
Turing Award bio (Raj Reddy):

[...] Pradesh, India

EDUCATION: B.S., Civil Engineering, Guindy College of Engineering, Madras (now Anna University, Chennai), India, 1958; MTech, University of New South Wales, Sydney, Australia, 1960; PhD, Stanford University, 1966.

EXPERIENCE: Applied Science Representative, IBM (Australia), Sydney, Australia, 1960-1963; Assistant Professor of Computer Science, Stanford University, 1966-1969; Associate Professor of Computer Science, Carnegie Mellon University, 1969-1973; Professor of Computer Science, Carnegie Mellon University, 1973-1984; University Professor of Computer Science, Carnegie Mellon University, 1984-present; Founding Director, Robotics Institute, Carnegie Mellon University, 1980-1992; Dean, School of Computer Science, Carnegie Mellon University, 1991-1999; Herbert A. Simon University Professor of Computer Science and Robotics, Carnegie Mellon University, 1992-2005; Founding Director, Carnegie Mellon University West Coast Campus, Mountain View, California, 2001-2004; Mozah Bint Nasser University Professor of Computer Science and Robotics, Carnegie Mellon University, 2005-present.

HONORS AND AWARDS: Fellow, Acoustical Society of America; Fellow, Institute of Electrical and Electronics Engineers, Inc. (IEEE); Founding Fellow of the American Association for Artificial Intelligence (now called the Association for the Advancement of Artificial Intelligence, AAAI); Foreign Member, Chinese Academy of Engineering; Foreign Fellow, Indian National Science Academy (INSA); Foreign Fellow, Indian National Academy of Engineering (INAE); Recipient, Legion d'Honneur, presented by President Mitterrand of France (1984); Member of the National Academy of Engineering (1984); President, American Association for Artificial Intelligence (1987-1989); IBM Research Ralph Gomory Visiting Scholar Award (1991); Co-Recipient, Association for Computing Machinery Turing Award (jointly with Ed Feigenbaum) (1994); Member of the American Academy of Arts and Sciences (1995); Recipient, Padma Bhushan Award, presented by the President of India (2001); Okawa Prize (2004); Honda Prize (2005); IJCAI Donald E. Walker Distinguished Service Award (2005); Vannevar Bush Award (2006); IEEE James L. Flanagan Speech and Audio Processing Award (2008); inducted into IEEE Intelligent Systems' AI Hall of Fame for "significant contributions to the field of AI and intelligent systems" (2011).

Honorary Doctorates: Sri Venkateswara University, Henri Poincaré University, University of New South Wales, Jawaharlal Nehru Technology University, University of Massachusetts, University of Warwick, Anna University, Indian Institute for Information Technology (Allahabad), Andhra University, IIT Kharagpur, and Hong Kong University of Science and Technology.
This is a great point. But these compressions work by establishing *regularity* in the self-evident/raw/explicit primitives they reproduce. And it's that regularity that provides for iteration. The hierarchies you're talking about work because each vertex in the branching structure (not always a tree) has something about it that's similar to some other vertex. A fully recursive system requires all the vertices to be the same in some sense, to have an invariant meaning no matter which "level" that vertex might be at.
As I tried to make clear in my response to Eric's digestion of the Bokov paper, I'm not suggesting that structures like DAGs are figments of our imagination, only the levels we impute onto them. I tried to make a similar argument a long time ago that "order" is a better term than "level". For example, if you group a set of primitives into tuples (1-tuples, 2-tuples, 3-tuples, ...), you *can*, if you choose, say all the 3-tuples form a level ... the 2nd level up (the 0th level being the 1-tuples, the primitives, the 1st being the 2-tuples, etc.). But why? What power/usefulness is brought to the table by thinking of them as levels? What's wrong with the more accurate conception of "groupings of 3"?

On 5/4/19 5:51 PM, Russell Standish wrote:
> I don't think levels are just figments of imagination. Compression algorithms replace explicit descriptions with generative algorithms (like procedures of functions) that when called with appropriate parameters reproduce the original data. These generative descriptions have a tree-like structure, which is exactly the heirarchical structure you're after. [...]
uǝʃƃ ⊥ glen
I posit that all strict hierarchies *must* be violated for any[†] work to be done[‡]. In other words, strict hierarchies are fictions. They don't exist except in our imagination. So, the difference the violation makes is: Of course it violates a strict hierarchy; otherwise it wouldn't have worked. 8^)
[†] Well, any *significant* work to be done. The idea that an organism is more complex than a machine seems to be simply the qualifier that the work it does is somehow meaningful ... not merely rote.

[‡] I further posit that this is the separation between specific and general intelligence. The reason humans are capable of executing tasks that are difficult to automate is because "we are large, we contain multitudes" (bastardizing Whitman).

On 5/4/19 5:51 PM, Frank Wimberly wrote:
> If you're moving up from the phoneme to the word to the phrase level and some result in the last disambiguates between two phonemes, what difference does it make that you've violated some strict hierarchy?
uǝʃƃ ⊥ glen