Posted by thompnickson2 on Aug 10, 2020; 8:32pm
URL: http://friam.383.s1.nabble.com/WAS-meaning-only-text-IS-Simulations-as-constructed-metaphors-tp7598246p7598255.html
Glen,
I can see your epitaph: *Glen Ropella, Coiner of Terms*
"Naturfact," like "Steelman" is doomed to be a keeper. So let me try to understand it, better. The question in my [so-called] mind is, Can artifacts generate naturfacts? Let's say we design a robot so as to avoid objects it "sees" at a 45 degree angle. Sitting on the laboratory floor before us, that robot is an "artifact". Now, we place that robot in a rectangular room with bare walls and a bunch of Styrofoam cubes, and we turn it on. It sets about herding the cubes into piles. Has the artifact produced a naturfact? Or, analogous to the intelligent design discussion, do we need to invent a concept of "derived artifactuality".
It's possible you don't give a damn, but it is your term and best you guard it or it will be kidnapped.
Nick
Nicholas Thompson
Emeritus Professor of Ethology and Psychology
Clark University
[hidden email]
https://wordpress.clarku.edu/nthompson/
-----Original Message-----
From: Friam <[hidden email]> On Behalf Of uǝlƃ
Sent: Monday, August 10, 2020 12:46 PM
To: FriAM <[hidden email]>
Subject: Re: [FRIAM] WAS: ∄ meaning, only text: IS: Simulations as constructed metaphors
I see 2 questions: 1) are obtuse simulations useful? And 2) are all simulations naturfacts?
My answers are Yes and Yes. The easiest way to see (1) is that incomprehensible simulations are useful if they speed up time, shrink space, and/or predict accurately. In essence, what makes them useful in these circumstances is *manipulationist*. Even if you don't understand what you're manipulating, having something *to* manipulate helps. The canonical ALife example is that we only have one fundamental type of life. So, we have limited ways in which we can manipulate it. Simulated life helps us think clearly about *how* we might manipulate living systems that both do and do not exist.
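To make the manipulationist point concrete, here is a minimal sketch in which the simulation is treated as an opaque black box; the function, its internals, and its single knob are hypothetical stand-ins, not any particular simulation:

    def opaque_sim(growth_rate, steps=100):
        """Stand-in for an incomprehensible simulation: we see only the
        knob (growth_rate) and the output, never the mechanism."""
        x = 1.0
        for _ in range(steps):
            x += growth_rate * x * (1 - x / 50.0)  # hidden internals
        return x

    # Manipulating the knob and watching the output is informative even if
    # the internals stay incomprehensible -- enough to find regimes worth study.
    for r in [0.1, 0.5, 1.0, 2.0, 2.9]:
        print(f"growth_rate={r:.1f} -> final value {opaque_sim(r):.2f}")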
Re: (2) - It should be clear that *all* simulations are part artifact and part natural object. But because it's so obvious to me, I'm having trouble coming up with any way in which one might have a *pure* artifact. I suppose the closest we can get is a virtual machine, an emulator for another piece of hardware/software. Then whatever's executing inside the VM might be said to be purely artificial. But any simulation that runs on "bare metal" is a naturfact already. Then go a step further and argue that any simulation must, somehow, *simulate* its referent. And that means the behavior of the computation will be artifice made to look like some (presumably natural) referent. I.e. the requirements for the computation are inferred from behavior in the world. If we regard behavior as natural, then any such simulation will be a naturfact. This fields the question re: behavioral analogy.
But your question is more about structural analogy. To what extent must the structure of a computation mirror the structure of a referent for us to call it a naturfact? And it's that question that distinguishes mechanistic modeling from predictive modeling. I'm agnostic on this [⛧]. Although I'm a mechanistic modeler, I'm perfectly happy with pure behavioral analogies where the structure is unrelated to that of its referent.
[⛧] Well, I'm actually very opinionated on it. But those fine-grained opinions are irrelevant at this point. If/when we start arguing about "levels", then my fine-grained opinions will burst out like so many ants from a kicked bed.
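A toy illustration of the behavioral/structural split. Both "models" below are stand-ins invented for the contrast, not anyone's actual models: one mirrors a pretend referent's structure with reproducing individuals, the other reproduces only its behavior with a curve:

    import math

    # Pretend referent behavior: a population that doubles each tick.
    observed = [2 ** t for t in range(10)]

    # Mechanistic model: structural analogy -- discrete individuals,
    # each producing one offspring per tick.
    def mechanistic(ticks):
        agents = [object()]                               # one founder
        for _ in range(ticks):
            agents = agents + [object() for _ in agents]  # everyone reproduces
        return len(agents)

    # Behavioral model: no individuals anywhere, just a curve of the right shape.
    def behavioral(ticks):
        return math.exp(ticks * math.log(2))

    for t in range(10):
        assert mechanistic(t) == observed[t]
        assert round(behavioral(t)) == observed[t]
    # Identical behavior, entirely different structure: a purely behavioral
    # analogy cannot tell these two apart.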
On 8/10/20 11:20 AM, [hidden email] wrote:
> The article had an additional lesson for me. To the extent that you-folks will permit me to think of simulations as contrived metaphors (i.e., objects that are built solely for the purpose of being metaphors) as opposed to natural metaphors (i.e., objects that are found in the world and appropriated for that purpose), I am reminded of a book by Evelyn Fox Keller which argues that a model (i.e., a scientific metaphor) can only be useful if it is more easily understood than the thing it models. Don't use chimpanzees as models if you are interested in mice.
>
> Simulations would seem to me to have the same obligation. If you write a simulation of a process that you don't understand any better than the thing you are simulating, then you have gotten nowhere, right? So if you are publishing papers in which you investigate what your AI is doing, has not the contrivance process gone astray?
>
> What further interested me about these models that the AI provided was that they were in part natural and in part contrived. The contrived part is where the investigators mimicked the hierarchical construction of the visual system in setting up the AI; the natural part is the focus on texture by the resulting simulation. So, in the end, the metaphor generated by the AI turned out to be a bad one -- heuristic, perhaps, but not apt.
--
↙↙↙ uǝlƃ
- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/