“AI needs enough time to perform experiments to learn the consequences and meaning of different patterns of numbers”
Abused metaphor alert! Haven’t you “personalized” AI? It would seem to me that the one thing you are NOT allowed to do with AI is personalize it.
N
Nicholas Thompson
Emeritus Professor of Ethology and Psychology
Clark University
https://wordpress.clarku.edu/nthompson/
-----Original Message-----
From: Friam <[hidden email]> On Behalf Of Marcus Daniels
Sent: Tuesday, December 1, 2020 12:13 PM
To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Subject: Re: [FRIAM] New ways of understanding the world
Map Nick's list of numbers to a spatiotemporal snapshot of the physical world. The dog and the human have both learned how to learn about it. Whether that took 1 year, 8,000 years, or 2.7 billion years doesn't much matter to the argument, except that the new AI needs enough time to perform experiments to learn the consequences and meaning of different patterns of numbers. If the list of numbers describes every possible action the AI could take and how each particular path would be recorded, then any given experiment could in principle be encapsulated in a single set of numbers; it is just a matter of which cells in the hyperspace the AI decides to look at.
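As a toy illustration of that last sentence (my sketch, not anything from the thread — the names `outcome_table` and `experiment` are invented, and the "world" here is trivially small): if every possible action path already maps to the numbers it would record, then "running an experiment" reduces to deciding which cell of the table to look at.

```python
# Toy sketch: a world in which every possible action path maps to the
# list of numbers the agent would record by taking it. All names and
# the 2-bit "hyperspace" are illustrative assumptions.
import itertools

# Hyperspace: every 2-step binary action path -> recorded outcome vector
# (here, simply the counts of 1-actions and 0-actions along the path).
outcome_table = {
    path: [sum(path), len(path) - sum(path)]
    for path in itertools.product([0, 1], repeat=2)
}

def experiment(path):
    """An 'experiment' is just a lookup: which cell of the hyperspace
    the agent decides to look at."""
    return outcome_table[path]

# The agent "learns the meaning" of patterns by sampling cells.
observations = {path: experiment(path) for path in outcome_table}
```

The point of the sketch is only that, once the table exists, experimentation is navigation, not construction — though in any realistic setting the table is astronomically large and must be sampled, which is where the "enough time" caveat bites.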
-----Original Message-----
From: Friam <[hidden email]> On Behalf Of uǝlƃ ☣
Sent: Tuesday, December 1, 2020 9:54 AM
To: [hidden email]
Subject: Re: [FRIAM] New ways of understanding the world
Well, as I've tried to make clear, machines can *accrete* their machinery. I think this is essentially arguing for "genetic memory", the idea that there's a balance between scales of learning rates. What your dog learns after its birth is different from what it "knew" at its birth. I'm fine with tossing the word "theory" for this accreted build-up of inferences/outcomes/state. But it's as good a word as any other.
I suspect that there are some animals, like humans, born with FPGA-like learning structures so that their machinery accretes more after birth than other animals. And that there are some animals born with more of that machinery already built-in. And it's not a simple topic. Things like retractable claws are peculiar machinery that kindasorta *requires* one to think in terms of clawing, whereas our more rounded fingernails facilitate both clawing and, say, unscrewing flat head screws.
But this accreted machinery is *there*, no matter how much we want to argue where it came from. And it will be there for any given AI as well. Call it whatever you feel comfortable with.
On 12/1/20 9:39 AM, Marcus Daniels wrote:
> Dogs and humans share 84% of their DNA, so that almost sounds plausible on its face. However, humans have about 16 billion neurons in the cerebral cortex, while the whole human genome is only about 3 billion base pairs, and only about 30 million of those code for proteins. This seems to me to say that learning is more important than inheritance of "theories," if you must insist on using that word.
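The arithmetic behind that point can be checked in a few lines. The figures are the ones quoted in the message; the 2-bits-per-base-pair encoding is a standard assumption I've added, not something from the thread:

```python
# Back-of-envelope check of the numbers quoted above.
neurons = 16e9     # cortical neurons (human), as quoted
genome_bp = 3e9    # base pairs in the human genome, as quoted
coding_bp = 30e6   # protein-coding base pairs, as quoted

# A base pair carries at most 2 bits (4 possible bases) -- assumed encoding.
genome_bits = genome_bp * 2

# Neurons per protein-coding base pair: the genome cannot be
# individually specifying each neuron, let alone each connection.
neurons_per_coding_bp = neurons / coding_bp  # ~533
```

Even the whole genome, coding or not, holds at most ~6 billion bits, far short of what per-neuron (never mind per-synapse) specification would require — which is the information-theoretic core of the "learning over inheritance" argument.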
--
↙↙↙ uǝlƃ
- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6 bit.ly/virtualfriam un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/