WAS: ∄ meaning, only text: IS: Simulations as constructed metaphors


thompnickson2

Dear Frennemies,

 

I have had my ears boxed so often for dragging threads into my metaphor den that I thought I ought to rethread this.  But the paper Glen posted and Russ applauded is really interesting, describing the manner in which implicit assumptions built into our AI can lead it wildly astray: “There’s more than one way to [see] a cat.”

 

The article had an additional lesson for me.  To the extent that you-folks will permit me to think of simulations as contrived metaphors, as opposed to natural metaphors – i.e., objects that are built solely for the purpose of being metaphors, as opposed to objects that are found in the world and appropriated for that purpose – I am reminded of a book by Evelyn Fox Keller which argues that a model (i.e., a scientific metaphor) can only be useful if it is more easily understood than the thing it models.  Don’t use chimpanzees as models if you are interested in mice.

 

Simulations would seem to me to have the same obligation.  If you write a simulation of a process that you don’t understand any better than the thing you are simulating, then you have gotten nowhere, right?  So if you are publishing papers in which you investigate what your AI is doing, has not the contrivance process gone astray?

 

What further interested me about these models that the AI provided was that they were in part natural and in part contrived.  The contrived part is where the investigators mimicked the hierarchical construction of the visual system in setting up the AI; the natural part is the focus on texture by the resulting simulation.  So, in the end, the metaphor generated by the AI turned out to be a bad one – heuristic, perhaps, but not apt.

 

Nicholas Thompson

Emeritus Professor of Ethology and Psychology

Clark University

[hidden email]

https://wordpress.clarku.edu/nthompson/

 

 

From: Friam <[hidden email]> On Behalf Of Russ Abbott
Sent: Monday, August 10, 2020 11:04 AM
To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Subject: Re: [FRIAM] meaning, only text

 

Independent of Kavanaugh, that was a great article. That's the first I have heard of this work. It begins to explain a lot about deep learning and its literal and figurative superficiality.

 

-- Russ Abbott                                      
Professor, Computer Science
California State University, Los Angeles

 

 

On Mon, Aug 10, 2020 at 7:02 AM uǝlƃ ↙↙↙ <[hidden email]> wrote:

And to round out another thread, wherein I proposed Brett Kavanaugh *is* Artificial Intelligence, this article pops up:

  Where We See Shapes, AI Sees Textures
  Jordana Cepelewicz
  https://www.quantamagazine.org/where-we-see-shapes-ai-sees-textures-20190701/

In the context of "originalism" and reading *through* the text, the question is: Why does Brett *seem* intelligent [‽] in a different way than your average zero-shot AI? I like Nick's argument that meaning is higher-order pattern. The results Cepelewicz cites validate that argument [†]. But if we continue, we'll fall back into the argument about high-order Markovity, free will, and steganographic [de]coding. And (worse) it dovetails with No Free Lunch and whether strict potentialists are well-justified in using higher order operators. Multi-objective constraint solving (aka parallax) seems to cut a compromise through the whole meta-thread. But, as always, the tricks lie in composition and modularity. How do the constraints compose? Which problems can be teased apart from which other problems to create cliques in the graph or even repurposable anatomical modules? How do we construct structured memory for saving snapshots of swapped out partial solutions? Etc.


[‽] If you can't tell, I'm really enjoying using a frat boy political operative who *pretends* to be a SCOTUS justice in the argument for strong AI. To use an actual justice like Gorsuch as such just isn't satisfying.

[†] Of course, we don't learn from confirmation. We only learn from critical objection. And the 2nd half of the article does that well enough, I think.

--
↙↙↙ uǝlƃ

- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC
http://friam-comic.blogspot.com/



Re: WAS: ∄ meaning, only text: IS: Simulations as constructed metaphors

gepr
I see 2 questions: 1) are obtuse simulations useful? And 2) are all simulations naturfacts?

My answers are Yes and Yes. The easiest way to see (1) is that incomprehensible simulations are useful if they speed up time, shrink space, and/or predict accurately. In essence, what makes them useful in these circumstances is *manipulationist*. Even if you don't understand what you're manipulating, having something *to* manipulate helps. The canonical ALife example is that we only have 1 fundamental type of life. So, we have limited ways in which we can manipulate it. Simulated life helps us think clearly about *how* we might manipulate living systems that both do and do not exist.

Re: (2) - It should be clear that *all* simulations are part artifact and part natural object. But because it's so obvious to me, I'm having trouble coming up with any way in which one might have a *pure* artifact. I suppose the closest we can get is a virtual machine, an emulator for another piece of hardware/software. Then whatever's executing inside the VM might be said to be purely artificial. But any simulation that runs on "bare metal" is a naturfact already. Then go a step further and argue that any simulation must, somehow, *simulate* its referent. And that means the behavior of the computation will be artifice made to look like some (presumably natural) referent. I.e. the requirements for the computation are inferred from behavior in the world. If we regard behavior as natural, then any such simulation will be a naturfact. This fields the question re: behavioral analogy.

But your question is more about structural analogy. To what extent must the structure of a computation mirror the structure of a referent for us to call it a naturfact? And it's that question that distinguishes mechanistic modeling from predictive modeling. I'm agnostic on this [⛧]. Although I'm a mechanistic modeler, I'm perfectly happy with pure behavioral analogies where the structure is unrelated to that of its referent.


[⛧] Well, I'm actually very opinionated on it. But those fine-grained opinions are irrelevant at this point. If/when we start arguing about "levels", then my fine-grained opinions will burst out like so many ants from a kicked bed.


On 8/10/20 11:20 AM, [hidden email] wrote:
> The article had an additional lesson for me.  To the extent that you-folks will permit me to think of simulations as contrived metaphors, as opposed to Natural metaphors – ie., objects that are built solely for the purpose of being metaphors, as opposed to objects that are found in the world and appropriated for that purpose, then that reminds me of a book by Evelyn Fox Keller which argues that a model (i.e., a scientific metaphor) can only be useful if it is  more easily understood than the thing it models.  Don’t use chimpanzees as models if you are interested in mice.   
>
>  
>
> Simulations would seem to me to have the same obligation.  If you write a simulation of a process that you don’t understand any better than the thing you are simulating, then you have gotten nowhere, right?  So If you are publishing papers in which you investigate what your AI is doing, has not the contrivance process gone astray?
>
>  
>
> What further interested me about these models that the AI provided was that they were in part natural and in part contrived.  So the contrived part is where the investigators mimicked the hierarchical construction of the visual system in setting up the AI; the natural part is the focus on texture by the resulting simulation.  So, in the end, the metaphor generated by the AI turned out to be a bad one – heuristic, perhaps, but not apt.


--
↙↙↙ uǝlƃ


Re: WAS: ∄ meaning, only text: IS: Simulations as constructed metaphors

Frank Wimberly-2
When I was a grad student at Pitt in about 1974, the project I was working on (Project Solo, NSF funded, to get high school students to write programs to solve problems in many application areas, thereby requiring that they understand those areas, be they physics and chemistry or set design) was given a high-quality flight simulator based on ball/disk integrators.  I'm wondering whether it was obtuse.  I had a private pilot's license, and the simulator was certified by the FAA in a way that let me log time toward my Instrument Rating by using it.  I used it for dozens of hours but I didn't log the time.  I did become very confident about controlling an airplane at night or in instrument conditions (IMC, if I recall correctly).  Would you call that a naturfact?

(I did find myself in an unforecast cloud on a night flight back from Williamsburg to Pittsburgh.  I called the control tower and filed an IFR flight plan to descend through the clouds, which I did with no problem.)
---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM


Re: WAS: ∄ meaning, only text: IS: Simulations as constructed metaphors

gepr
It's difficult to tell whether it's obtuse. That depends a bit on who's trying to understand the model. But, yes, it's definitely a naturfact. I think it would be safe to argue that all analog computers are further along toward the natural object end of the spectrum than digital computers.

On 8/10/20 12:09 PM, Frank Wimberly wrote:
> high quality flight simulator based on ball/disk integrators.  I'm wondering whether it was obtuse.  I had a private pilot's license and it was certified by the FAA in a way that I could log time toward my Instrument Rating by using it.  I used it for dozens of hours but I didn't log the time.  I did became very confident about controlling and airplane at night or in instrument conditions (IMC if I recall correctly).  Would you call that a naturfact?

--
↙↙↙ uǝlƃ


Re: WAS: ∄ meaning, only text: IS: Simulations as constructed metaphors

Frank Wimberly-2
I agree, Glen.  Regarding obscurity, with enough study, which some of the high school students attempted, you could see how the change in heading is the integral of the turn rate, which is controlled by the rudder and yoke, etc.
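Frank's observation, that the change in heading is the time integral of the turn rate, is easy to sketch digitally. This is a hypothetical toy, not the Project Solo simulator; the mechanical version computed the same integral with a ball and disk:

```python
# Hypothetical toy integrator: heading (degrees) as the running integral
# of turn rate (degrees per second), stepped with simple Euler updates.

def simulate_heading(turn_rates, dt=1.0, heading0=0.0):
    """Return the heading after each sample of turn rate."""
    heading = heading0
    headings = []
    for rate in turn_rates:
        heading = (heading + rate * dt) % 360.0  # d(heading)/dt = turn rate
        headings.append(heading)
    return headings

# A standard-rate turn (3 deg/s) held for one minute reverses course:
print(simulate_heading([3.0] * 60)[-1])  # 180.0
```

The same chaining the students could trace in the hardware (rudder and yoke set the turn rate, turn rate integrates to heading) falls out of the one-line update.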

---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM


Re: WAS: ∄ meaning, only text: IS: Simulations as constructed metaphors

thompnickson2
In reply to this post by gepr
Glen,

I can see your epitaph-- ITAL==>Glen Ropella, Coiner of Terms<==ital

"Naturfact," like "steelman," is doomed to be a keeper.  So let me try to understand it better.  The question in my [so-called] mind is: can artifacts generate naturfacts?  Let's say we design a robot so as to avoid objects it "sees" at a 45-degree angle.  Sitting on the laboratory floor before us, that robot is an "artifact".  Now, we place that robot in a rectangular room with bare walls and a bunch of Styrofoam cubes, and we turn it on.  It sets about herding the cubes into piles.  Has the artifact produced a naturfact?  Or, analogous to the intelligent design discussion, do we need to invent a concept of "derived artifactuality"?
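A minimal sketch of this thought experiment, using the classic "termite" pick-up/drop rule as a stand-in for the 45-degree avoidance sensor. All names and parameters here are illustrative, not a description of any actual robot:

```python
import random

def herd(cubes, steps=20000, size=20, rng=random):
    """Random-walk a robot on a size x size grid. Empty-handed, it picks up
    any cube it bumps into; carrying one, it drops it on a free cell that
    touches another cube. No rule mentions piles, yet piles tend to form."""
    carrying = False
    x = y = 0
    for _ in range(steps):
        x = (x + rng.choice((-1, 0, 1))) % size
        y = (y + rng.choice((-1, 0, 1))) % size
        if not carrying and (x, y) in cubes:
            cubes.discard((x, y))          # pick up a cube we bumped into
            carrying = True
        elif carrying and (x, y) not in cubes and any(
            (x + dx, y + dy) in cubes
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        ):
            cubes.add((x, y))              # drop it next to a neighbor
            carrying = False
    if carrying:                           # don't walk off with the last cube
        while (x, y) in cubes:             # scan for a free cell
            x = (x + 1) % size
            if x == 0:
                y = (y + 1) % size
        cubes.add((x, y))
    return cubes
```

Run long enough from a random scatter, the cubes tend to clump; whether those clumps are artifact, naturfact, or "derived artifactuality" is exactly the question above.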

It's possible you don't give a damn, but it is your term and best you guard it or it will be kidnapped.

Nick



Nicholas Thompson
Emeritus Professor of Ethology and Psychology
Clark University
[hidden email]
https://wordpress.clarku.edu/nthompson/
 



Re: WAS: ∄ meaning, only text: IS: Simulations as constructed metaphors

gepr
Ha! Like "steelman", I did not coin it, nor is it "my term". I've forgotten where I learned it, probably from the diffusion of innovations work I did a long time ago. In museology, it seems to mean "a product of natural processes". But I use it more to mean "objects one finds lying around on the ground". I use it that way because the museology definition seems rife with causa prima problems ... much like the one you're trying to get at below. (E.g. are humans natural? If so, are human constructions natural? Nonsense.)

As always, it's a spectrum, not a strict disjunction. YMMV (your mileage may vary). Feel free to kidnap or abuse the term in any way you see fit.


On 8/10/20 1:32 PM, [hidden email] wrote:
> I can see your epitaph-- ITAL==>Glen Ropella, Coiner of Terms<==ital
>
> "Naturfact," like "Steelman" is doomed to be a keeper.  So let me try to understand it, better.  The question in my [so-called] mind is, Can artifacts generate naturfacts? Let's say we design a robot so as to avoid objects it "sees" at a 45 degree angle.  Sitting on the laboratory floor before us, that robot is an "artifact".  Now, we place that robot in a rectangular room with bare walls and a bunch of Styrofoam cubes, and we turn it on.  It sets about herding the cubes into piles.  Has the artifact produced a naturfact?  Or, analogous to the intelligent design discussion, do we need to invent a concept of "derived artifactuality".  
>
> It's possible you don't give a damn, but it is your term and best you guard it or it will be kidnapped.

--
↙↙↙ uǝlƃ


Re: WAS: ∄ meaning, only text: IS: Simulations as constructed metaphors

Prof David West
In reply to this post by thompnickson2
Nick,

You have inadvertently hit upon a pet peeve.

For fifty-plus years, Software Engineering — both theory and practice — has engaged in building models/simulacra that are orders of magnitude more complicated, and those same orders of magnitude less understandable, than the business systems they ostensibly model/simulate.

The ultimate absurdity is that business continues to pay for what it KNOWS will not work.

davew


