This is probably the most interesting tech article I have read in weeks.
https://mobile.nytimes.com/2017/09/16/technology/chips-off-the-old-block-computers-are-taking-design-cues-from-human-brains.html?emc=edit_th_20170917&nl=todaysheadlines&nlid=58593627&referer=
[mixing threads]
Mermin’s “Shut up and calculate” view seems to me like agreeing to be blind because there is Braille. It has the same feel, to me, as agreeing that 'real' is whatever “a community of inquiry” says it is. How can one generate hypotheses in a productive way without any intuition or metaphysical foundation? Why would anyone want to? It seems to me that doing theory this way is something a computer might as well do. I _believe_ something because I can manipulate it, visualize it, and anticipate a certain kind of result, not because it is written in a textbook or because a prediction pops out of a supercomputer. That formality is added value to the intuition, not a substitute for it.
Suppose (and it is not just hypothetical) that a machine learning algorithm could suggest how to design a battery with maximum capacity, develop recipes that extend life, find computationally efficient solutions to the evolution of quantum systems, or answer any number of hard scientific questions or solve any number of relevant engineering problems. Suppose it was completely mysterious to humans (at first) how it worked, but it worked perfectly: the systems never failed and the predictions were always spot-on. Has something 'real' been found? The “shut up and calculate” approach seems to say yes. Why should I prefer to read papers or textbooks describing human experiences? Instead, perhaps find ways to unpack and rationalize the machine representations (e.g. neural nets, rule-based systems, whatever).
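One concrete way to start that unpacking is distillation: re-fit a small, legible model to mimic the opaque one and read the surrogate instead. A minimal sketch of the idea, assuming Python with scikit-learn; the MLP "black box" and the synthetic data are stand-ins, not any particular system:

# Hedged, illustrative sketch -- not anyone's actual method. We treat a
# small MLP as the "mysterious oracle", then distill it into a shallow
# decision tree whose rules a human can read.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

# 1. The opaque-but-accurate oracle.
black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                          random_state=0).fit(X, y)

# 2. Re-label the inputs with the oracle's own predictions ...
y_oracle = black_box.predict(X)

# 3. ... and fit a small, legible model that mimics it.
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, y_oracle)

# The tree's rules are a human-readable rationalization of the black box;
# 'fidelity' measures how faithfully the story matches the oracle.
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(8)]))
print("fidelity:", surrogate.score(X, y_oracle))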
Marcus
Something like what's discussed in the nytimes article *must* obtain for computers to ever be as embedded as the human brain. We can make an analogy that helps explain why RussA's reified ideas argument is (slightly) flawed but satisfices for a seemingly large number of tasks. The analogy being CPU ⇔ thoughts. As the nytimes article points out, the centralization of the computer's "thoughts" into the CPU has taken us really far, as has (perhaps) centralization-friendly philosophy like we got from Plato. But CPUs and the thoughts of philosophers have *never* really been disembodied. RussA's idea (contra Hoffman, I think) that there is a strong correlation between the world and thoughts, strong enough to imply that we can share/communicate ideas, relies on the hidden assumption that the communicating processes have the same embedding (eyeballs, fingers, ears, etc. for brains and disks, GPUs, RAM, etc. for CPUs).
The shared embedding is the source of the shared semantics ... It is the reason we (are tricked into thinking we can) share ideas. This is also true for computational infrastructure like ANNs or GAs trained on particular data or in a particular context. Making sense of the final configuration that seems to handle the I/O relation the way it "should" consists largely of studying the embedding of the configuration. The meaning comes from the interaction with what's out there, not some decoupled internal structure. I think this is at least part of why QM is appealing to philosophers and vice versa, because (e.g.) entanglement is a (very particular) type of environmental coupling. What information is closed under which operations? And what information is sensitive to couplings under which operations?

-- ☣ gⅼеɳ
Glen writes:
"Making sense of the final configuration that seems to handle the I/O relation the way it "should", consists largely of studying the embedding of the configuration. The meaning comes from the interaction with what's out there, not some decoupled internal structure."
To the extent there is compression or partitioning/expansion of the I/O relation, it might give a 'story' with regard to what's out there. How do progressively higher levels in a neural net selectively combine signals into mappings? My dog isn't going to tell me how she selects an item to steal & march around with, but if I could probe neurons in her brain I might find one that fires for large but lightweight soft things like pillows, paper towels, and so on.
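In silicon, at least, that kind of probe is cheap to run. A minimal sketch, assuming Python with scikit-learn and NumPy; the tiny network, the synthetic data, and the "pillow class" notion are all hypothetical stand-ins for the electrode experiment:

# Hedged, illustrative sketch of 'probing a neuron': train a tiny net,
# then ask which hidden unit responds most selectively to one class.
# Nothing here models a real brain; it just makes the question concrete.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=10, n_classes=3,
                           n_informative=5, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16,), activation='relu',
                    max_iter=800, random_state=0).fit(X, y)

# Recompute first-layer activations by hand (sklearn doesn't expose them).
hidden = np.maximum(0.0, X @ net.coefs_[0] + net.intercepts_[0])

# The 'electrode': mean activation of each hidden unit per class. A unit
# whose response to one class dwarfs its response to the others is class-
# selective -- the analogue of the neuron firing for light, soft objects.
per_class = np.array([hidden[y == c].mean(axis=0) for c in range(3)])
selectivity = per_class.max(axis=0) / (per_class.sum(axis=0) + 1e-9)
print("most selective hidden unit:", selectivity.argmax())
print("its per-class mean activations:", per_class[:, selectivity.argmax()].round(3))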
Marcus
On 09/19/2017 01:54 PM, Marcus Daniels wrote:
> To the extent there is compression or partitioning/expansion of the I/O relation, it might give a 'story' with regard to what's out there.

Yes, that's a fantastic point ... a bit like the holographic principle, I suppose.

> How do progressively higher levels in a neural net selectively combine signals into mappings? My dog isn't going to tell me how she selects an item to steal & march around with, but if I could probe neurons in her brain I might find one that fires for large but lightweight soft things like pillows, paper towels, and so on.

I agree. But I think it's important to emphasize that those neurons are an integral part of the sensorimotor complex. It's a bit of a false dichotomy to distinguish "thoughts" from teeth and tongue.

-- ☣ gⅼеɳ
> How do progressively higher levels in a neural net selectively combine signals into mappings? My dog isn't going to tell me how she selects an item to steal & march around with, but if I could probe neurons in her brain I might find one that fires for large but lightweight soft things like pillows, paper towels, and so on.
Glen writes: "I agree. But I think it's important to emphasize that those neurons are an integral part of the sensorimotor complex. It's a bit of a false dichotomy to distinguish "thoughts" from teeth and tongue." On the other hand, she could choose to push over the container with her food in it or grab the bag of treats. The preferred soft objects are apparently for entertainment and social purposes, which is distinct and more abstract than mastication and satiation. But yes, they are something she can be agile in manipulating. She can jump over the couch with a roll of paper towels in her mouth. Not so with a coffee cup or heavy bone. What she prefers is constrained by her physical strength, and potential skeletal and tissue vulnerabilities. Marcus ============================================================ FRIAM Applied Complexity Group listserv Meets Fridays 9a-11:30 at cafe at St. John's College to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove |
On 09/20/2017 08:44 AM, Marcus Daniels wrote:
> What she prefers is constrained by her physical strength and potential skeletal and tissue vulnerabilities.

Right. But my argument (here ... I'm not necessarily committed to this) is that what she prefers is not *merely* constrained by the extensional parts of her self, but that her self is *defined* and determined by the extensional parts.

I'm willing to admit some wiggle room, e.g. dreaming. When my cats dream, their whiskers twitch, they chatter, and their claws go in and out. If they didn't show that behavior, I'd have zero evidence that they dreamed at all. So, even dreams are defined and determined by their extensions.

-- ☣ gⅼеɳ
Maybe, but if I could run 40 miles per hour (https://www.usatoday.com/story/life/2015/11/02/mark-wahlberg-six-billion-dollar-man-2017/75047226/) or began to develop an electric organ, I'm pretty sure I'd start to exercise those capabilities. And if she could jump 10 feet in the air instead of 4, she'd soon be doing it. [Hmm, maybe I should get a trampoline?]

Marcus
On 09/20/2017 09:10 AM, Marcus Daniels wrote:
> Maybe, but if I could run 40 miles per hour (https://www.usatoday.com/story/life/2015/11/02/mark-wahlberg-six-billion-dollar-man-2017/75047226/) or began to develop an electric organ, I'm pretty sure I'd start to exercise those capabilities. And if she could jump 10 feet in the air instead of 4, she'd soon be doing it. [Hmm, maybe I should get a trampoline?]

Another good point. But it's explained by imperfect and/or exploratory control over one's extensions. I often meet people who see what I do to work out (including hanging upside down from a bar and some weird wrist-breaking exercises) and they respond like "Well, that's great, but I would/could never do such a thing." They have various reasons. But when/if I get a chance to show them how to ease into weird things *safely*, they soon learn that, YES, their legs will bend that way, too. They just have to *try*. The same is true of my cats. I'm constantly showing the unathletic pudgy one that she, too, can balance on that skinny limb like the others do naturally. Some of us are just more exploratory with our extensions. I suspect that's a "unit" of selection as well.

-- ☣ gⅼеɳ
> I often meet people who see what I do to work out (including hanging upside down from a bar and some weird wrist-breaking exercises) and they respond like "Well, that's great, but I would/could never do such a thing."
Yep, like the distinction between low-calorie diets vs. intense exercise. Putting aside the draining effects of chemotherapy or other debilitating illnesses, some relatively healthy people just have no idea, and will never have an idea, how dramatically their body and metabolism can change with sustained exercise. That is not a behavior they will ever really investigate.

Marcus
And to go back to the topic, many have no idea how much their *thinking* does change with intense exercise or intense nutrition changes. All this argues directly against RussA's reified-ideas argument. And it relates back to the article Alfredo posted, too. Our intelligence doesn't reside in our brains and, therefore, it's reasonable to think that an artificial intelligence's intelligence will not reside in some sort of CPU.
On 09/20/2017 09:33 AM, Marcus Daniels wrote:
> Yep, like the distinction between low-calorie diets vs. intense exercise. Putting aside the draining effects of chemotherapy or other debilitating illnesses, some relatively healthy people just have no idea, and will never have an idea, how dramatically their body and metabolism can change with sustained exercise. That is not a behavior they will ever really investigate.

-- ☣ gⅼеɳ
I think the spirit of the NY Times article, and current trends, is _not_ to reify.
Graphics processors, tensor processors, FPGAs, spiking systems, quantum annealers, etc. are by and large tackling machine learning, not engineered intelligence (classical AI) or even (necessarily) supervised learning. We are _blinded_ by what we think we know.

Marcus
On 09/20/2017 10:08 AM, Marcus Daniels wrote:
> I think the spirit of the NY Times article, and current trends, is _not_ to reify.

Right. That's what I was saying. 8^) But my guess is RussA isn't seeing this conversation.

> Graphics processors, tensor processors, FPGAs, spiking systems, quantum annealers, etc. are by and large tackling machine learning, not engineered intelligence (classical AI) or even (necessarily) supervised learning. We are _blinded_ by what we think we know.

And the further point is that general intelligence simply does. not. exist. Like the self, it's trickery ... an ephemeral binding or syncopation of our various particular intelligences. By this reasoning, one day we'll simply wake up and notice that our car, with all its little pieces of machine learning, has resulted in accidentally/stigmergically engineered intelligence.

-- ☣ gⅼеɳ
Great phrase/takeaway from this thread: "Syncopated Intelligence"!
I've already reprogrammed the Bluetooth mic/speaker in my truck to say (in a somber, sotto voce, male voice) "What are you doing, Steve?" in place of the tiny accented female Asian voice that used to say "Powah Onha," and "I can't let you do that, Steve" in place of "Bluetooth Connectedah." I'm afraid to say "Open the pod bay door, HAL" for fear it might actually manage to open the driver's door and roll me out into traffic. I think I've watched/read too much science fiction in my life ... or the engineers of our time have?

-Stig Mergy

On 9/20/17 11:16 AM, gⅼеɳ ☣ wrote:
> And the further point is that general intelligence simply does. not. exist. Like the self, it's trickery ... an ephemeral binding or syncopation of our various particular intelligences.