============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
The good news here is that Neil Gershenfeld is leading the effort. Very down to earth, lots of street cred, and a mensch besides.

One serious problem could be the proof that some languages are not Turing-recognizable. In computer-speak, a language is a set of strings, and any algorithm has an associated set of strings: the algorithm's solutions. See Sipser p. 178, Ch. 4, Decidability. All this translates to the simpler statement that computers cannot solve all problems.

Note: the proof simply shows that the set of all sets of strings (languages) is uncountable, while the set of algorithms is countable.

So the key formal question the Mind Machine Project, or MMP, must start with is whether or not the scope of their research is within the scope of algorithms. I hope it is!

-- Owen

On Dec 11, 2009, at 9:51 AM, Mikhail Gorelkin wrote:
> http://web.mit.edu/newsoffice/2009/ai-overview.html
>
> --Mikhail
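Owen's countability note can be made concrete with a sketch of the diagonalization step. The three sample languages below are arbitrary stand-ins for an enumeration L_0, L_1, L_2, ... of all algorithmically decidable languages; the diagonal language disagrees with the i-th language on the i-th string, so it cannot appear anywhere in the enumeration.

```python
from itertools import count, islice

def strings():
    """Enumerate all binary strings in length order: '', '0', '1', '00', ..."""
    yield ""
    for n in count(1):
        for i in range(2 ** n):
            yield format(i, "0{}b".format(n))

# Stand-ins for an enumeration L_0, L_1, L_2, ... of languages (i.e. the
# languages decided by an enumeration of all algorithms). These three
# particular languages are arbitrary choices for the demo.
langs = [
    lambda s: len(s) % 2 == 0,   # even-length strings
    lambda s: s.endswith("1"),   # strings ending in 1
    lambda s: "00" in s,         # strings containing 00
]

# Diagonalization: build a language D with  s_i in D  <=>  s_i not in L_i.
sample = list(islice(strings(), len(langs)))   # s_0, s_1, s_2
in_D = [not L(s) for L, s in zip(langs, sample)]

# D differs from every L_i on string s_i, so D appears nowhere in the
# list; the same argument defeats any countable list of algorithms.
for i, L in enumerate(langs):
    assert in_D[i] != L(sample[i])
```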
Quoting Owen Densmore circa 09-12-11 09:58 AM:
> All this translates to the simpler statement that computers cannot
> solve all problems.
>
> Note: the proof simply shows that the set of all sets of strings
> (languages) is uncountable, while the set of algorithms is countable.
>
> So the key formal question the Mind Machine Project, or MMP, must start
> with is whether or not the scope of their research is within the scope
> of algorithms.

I have to claim that the scope of their research is definitely NOT within the scope of algorithms. And I have to argue with your simplification a bit. Just because a single language cannot "solve" some problem doesn't mean a composition of languages can't "solve" that problem. Further, computers are (luckily) unfaithful instantiations of languages, in spite of all the effort we morlocks put into them. So, just because a language implemented by a computer cannot "solve" a given problem does not mean that the computer can't "solve" that problem.

For me, the key formal question is whether they will come up with useful methods that go beyond algorithms (and even languages), because I believe that's necessary for the more interesting problems in AI.

-- glen e. p. ropella, 971-222-9095, http://agent-based-modeling.com
> For me, the key formal question is whether they will come up with useful
> methods that go beyond algorithms (and even languages), because I
> believe that's necessary for the more interesting problems in AI.

I think one way to do it is to go to... Quantum Probability Theory and Quantum Logic: http://plato.stanford.edu/entries/qt-quantlog/

--Mikhail
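For what it's worth, the non-classical flavor of the quantum logic that the SEP entry covers can be shown in a few lines: the lattice of subspaces of a Hilbert space is not distributive. A minimal sketch using lines in R^2 (a standard textbook example, not taken from the entry itself):

```python
# Distributivity fails in the lattice of subspaces of R^2:
# A ∧ (B ∨ C) can differ from (A ∧ B) ∨ (A ∧ C).

def rank2(vectors):
    """Rank (0, 1, or 2) of a list of integer vectors in R^2."""
    vs = [v for v in vectors if v != (0, 0)]
    if not vs:
        return 0
    x0, y0 = vs[0]
    # Rank 2 iff some pair has a nonzero 2x2 determinant.
    return 2 if any(x0 * y - y0 * x != 0 for x, y in vs[1:]) else 1

def meet_dim(U, V):
    """dim(U ∩ V) = dim U + dim V - dim(U ∨ V) for subspaces of R^2."""
    return rank2(U) + rank2(V) - rank2(U + V)

A = [(1, 0)]   # x-axis
B = [(0, 1)]   # y-axis
C = [(1, 1)]   # diagonal line

# B ∨ C is all of R^2, so A ∧ (B ∨ C) = A, a line (dimension 1)...
lhs = meet_dim(A, B + C)
assert lhs == 1
# ...but A ∧ B and A ∧ C are each the zero subspace, so their join is
# zero-dimensional: distributivity fails.
assert meet_dim(A, B) == 0 and meet_dim(A, C) == 0
```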
Most of early AI was heuristics, not algorithms. Some algorithms were incorporated into expert systems, in the belief that if an algorithm could solve the problem, fine; if not, heuristics might. But it was always *might*. True, computers can't solve all problems; neither can humans.

P.

"A cold coming we had of it,
Just the worst time of the year
For a journey, and such a long journey;
The ways deep and the weather sharp.
The very dead of winter."
   T.S. Eliot
Reunion of AI pioneers:
http://www.nytimes.com/2009/12/08/science/08sail.html?_r=1&scp=1&sq=markoff%20%22artificial%20intelligence%22&st=cse

-tj

On Fri, Dec 11, 2009 at 4:16 PM, Pamela McCorduck <[hidden email]> wrote:

--
==========================================
J. T. Johnson
Institute for Analytic Journalism -- Santa Fe, NM USA
www.analyticjournalism.com
505.577.6482(c)                   505.473.9646(h)
http://www.jtjohnson.com         [hidden email]
"Be Your Own Publisher" http://indiepubwest.com
==========================================
In reply to this post by Pamela McCorduck
Didn't it take an algorithm (an Inference Engine) to process the
heuristics? Also show me some silicon that doesn't use an algorithm
somewhere. So do you suppose the Mind Machine Project is a way to
break free of this computing/algorithmic model?
Robert C

Pamela McCorduck wrote:
> Most of early AI was heuristics, not algorithms. Some algorithms were
> incorporated into expert systems, in the belief that if an algorithm
> could solve the problem, fine; if not, heuristics might. But it was
> always *might*. True, computers can't solve all problems; neither can
> humans.
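Robert's point, that the heuristics were still driven by an algorithm, is easy to see in miniature. Below is a toy forward-chaining engine; the rules and facts are invented for illustration, and real expert-system shells were far richer, but the driving loop (match, fire, repeat to fixpoint) is a plain algorithm even though each rule encodes a heuristic.

```python
# Toy forward-chaining inference engine. Rules and facts are invented
# for illustration only.
rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),   # heuristic rule
    ({"suspect_measles"}, "recommend_isolation"),
]

def forward_chain(facts, rules):
    """Fire every rule whose premises all hold, until nothing changes."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain({"has_fever", "has_rash"}, rules)))
# → ['has_fever', 'has_rash', 'recommend_isolation', 'suspect_measles']
```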
In reply to this post by Mikhail Gorelkin
Wow. Wish I could get in on that action.
~~James

On Fri, Dec 11, 2009 at 11:51 AM, Mikhail Gorelkin <[hidden email]> wrote:
> http://web.mit.edu/newsoffice/2009/ai-overview.html
>
> --Mikhail
In reply to this post by Robert J. Cordingley
More on non-algorithmic computing from Penrose:
http://en.wikipedia.org/wiki/The_Emperor's_New_Mind

but I don't see how the brain can use quantum mechanics since it's biochemical and operates on a different scale. Has anyone read Penrose's book and can recommend it or not (even tho' it was awarded a prize by the Royal Society)?

Robert C

Robert J. Cordingley wrote:
> Didn't it take an algorithm (an Inference Engine) to process the
> heuristics? Also show me some silicon that doesn't use an algorithm
> somewhere. So do you suppose the Mind Machine Project is a way to
> break free of this computing/algorithmic model?
Penrose's book was an odd fish indeed. I liked the physics. I utterly did not understand what he was talking about when he talked about AI. Neither did anyone else, so far as I heard. I think he had a prejudice against AI, and used this flimflammery to pretend he had scientific reasons for being against it.
On Dec 16, 2009, at 5:43 PM, Robert J. Cordingley wrote:
> More on non-algorithmic computing from Penrose:

"A cold coming we had of it,
Just the worst time of the year
For a journey, and such a long journey;
The ways deep and the weather sharp.
The very dead of winter."
   T.S. Eliot
In reply to this post by Robert J. Cordingley
> More on non-algorithmic computing from Penrose:

I read this when it was published. I was interested because I had a dog (more of a lame coyote) in the fight. I was not very impressed. And subsequent work in the area did not improve the situation: Shadows of the Mind (Penrose).

Now for my 15 milliseconds of fame: I introduced Penrose and Hameroff in 1984. Hameroff and I wrote a paper together: Smith S, Watt RC, Hameroff SR. Cellular automata in cytoskeletal lattice proteins. Physica D, 1984; 10:168-174. I wrote the cellular automata model of the cytoskeletal lattice (microtubulin) and did the parameter studies. It was an interesting (and unique) CA model. Hameroff was postulating that consciousness was (at least partly) a property of information processing in microtubules (read the paper for the argument; some of it IMO was well enough motivated, if a bit of a stretch).

Penrose wrote me to argue the point that consciousness was a nonlinear phenomenon and that (perhaps) a key might not be the unique configuration of the geometry/topology of networks of microtubules within nerve cells, but rather (no explanation given) non-repeating patterns such as one would have if they implemented a "Penrose Cellular Automaton". Unfortunately I did not keep the original letter, and I did not respond, much less do the (obvious in retrospect) thing of throwing myself at the great man's feet and immediately writing what would have been "Penrose Life". Instead I introduced him to Hameroff (both seemed kind of balmy to me in my youthful confidence) and moved on.

The two immediately bonded over this and went on to develop Orch-OR, a theory of consciousness they call Orchestrated Objective Reduction. I have never really been able to penetrate it beyond a superficial level. I am, nevertheless, interested in anyone's insight into this whole topic... while I don't give it credence, I haven't been able to dismiss it exactly either.

- Steve
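For readers curious what a CA model even looks like, here is a generic 1-D cellular automaton sketch. It is only meant to suggest the flavor; the lattice geometry and update rules of the actual Physica D microtubule model are not reproduced here, and rule 110 is an arbitrary illustrative choice.

```python
# Generic 1-D binary cellular automaton (NOT the 1984 microtubule model;
# rule 110 is an arbitrary illustrative choice).
def step(cells, rule=110):
    """One synchronous update with wraparound boundary conditions.

    Each cell's new state is the bit of `rule` indexed by the 3-bit
    neighborhood (left, self, right), Wolfram-style.
    """
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2)
                      | (cells[i] << 1)
                      | cells[(i + 1) % n])) & 1
            for i in range(n)]

# Run a single seed for a few generations and print the space-time diagram.
cells = [0] * 15 + [1] + [0] * 15
for _ in range(5):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```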
In reply to this post by Robert J. Cordingley
As long as we're on AI and Math (when were we not?), recall that the hard problems in AI are less matters of chess and more those of the first five years of development. Here are some mathematicians discussing same. Interesting to see how the conversation unfolds... got some Category Theory in thar too!

http://golem.ph.utexas.edu/category/2009/12/can_fiveyearolds_compute_copro.html

Maybe we should be asking Miles and Reed about AI fundamentals.

BTW, does anyone have a copy of Drew McDermott's Critique of Pure Reason (i.e., a crisp semantics does not a logic make), which is in part a critique of the Naive Physics Manifesto?

carl

Robert J. Cordingley wrote:
> More on non-algorithmic computing from Penrose:
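For anyone following the category-theory link: in the category of sets, the coproduct the blog post asks five-year-olds to compute is just the disjoint union. A small sketch (the tags and names are invented here, not from the post):

```python
# Coproduct in Set = disjoint union: tag each element with the summand
# it came from, so overlapping elements don't collapse.
def coproduct(A, B):
    return {("inl", a) for a in A} | {("inr", b) for b in B}

def copair(f, g):
    """Universal property: maps f: A -> X and g: B -> X induce a unique
    map [f, g] from the coproduct to X."""
    return lambda t: f(t[1]) if t[0] == "inl" else g(t[1])

A, B = {1, 2}, {2, 3}
AB = coproduct(A, B)
assert len(AB) == 4      # the shared element 2 appears twice, tagged

h = copair(lambda a: a * 10, lambda b: b + 1)
assert h(("inl", 2)) == 20 and h(("inr", 2)) == 3
```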
Well, since you asked, though I must say that my only qualification in traditional AI is one course in my master's program, so I'm not sure why you'd care... :)

I think the more you look into classic AI, the less impressive it is from an explanatory POV. A lot of it seems to me like a kind of OR for cognition, AFAICT. So yeah, I'd agree that to equate the sophistication and beauty of the human experience encountering, playing and mastering chess with that of a glorified tree search -- no matter how finely tuned or dressed up -- is to completely misinterpret the whole point of intelligence. It's a fundamental confusion between awareness and basic curiosity on the one hand and quickness and large memory on the other.

I'm really convinced by the pattern-over-symbols arguments, and in particular I think it's really telling that soft logic approaches seem to fall apart under longer inference chains.

All of this is not to say that classic AI approaches aren't insightful and cool, but that their relationship to intelligence is glancing and superficial. And at the same time, I think the AI label actually limits the appreciation of all of the cool things that we *can* do with symbolic logic techniques. It's as if everyone decided to call the Wright brothers' flyer an "artificial bird" and the name stuck. And now we see all of this coming back in the guise of exabyte science or whatever they're calling it. "Hey, we have lots of data! We'll get answers for free!!"

On Dec 16, 2009, at 9:47 PM, Carl Tollander wrote: