Abduction and Introspection


Abduction and Introspection

thompnickson2

Thanks, Pieter,

 

Interesting comments. To be 1000 percent honest, this was sent to the list by mistake. It was meant to go to my collaborator, Eric Charles, as a new way to organize a paper we are writing. But after reading your comments, I was glad I had made the mistake.

 

I was weaned on Popper. Falsification played an important role in my thinking about how to do science. But then I met Peirce, who has no qualms about the logic of affirmation.

 

Abduction : Peirce :: Bold Conjectures : Popper

 

I prefer Peirce's account because, in general, I don't think of inference as necessarily conscious. When I hear a coyote howl in the night, I am not conscious of making the inference that there is a coyote outside my house, but inference it is, nonetheless. Thanks for your comments.

 

Anybody else?

 

Off to the weekly service!

 

Nick

 

Nicholas Thompson

Emeritus Professor of Ethology and Psychology

Clark University

[hidden email]

https://wordpress.clarku.edu/nthompson/

 

 

From: Friam <[hidden email]> On Behalf Of Pieter Steenekamp
Sent: Thursday, January 23, 2020 11:05 PM
To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Subject: Re: [FRIAM] Your worst nightmare

 

To put a Popper-inspired philosophy-of-science hat on this topic: the key is in the process of falsification and good explanations. Conjectures form in a human's mind without our consciously knowing where they come from. Trying to use introspection to understand the roots of a conjecture is fruitless. A process of cognitive falsification then takes the conjecture further. The first stages might be very informal: without putting it in so many words, the mind asks, "I have this idea; why might it be false?" If the idea passes the first stages, a good explanation for the conjecture is developed and it can be put out there in the world. What started as a conjecture now develops into knowledge while remaining continuously open to falsification as better explanations are developed. No knowledge is immune to falsification, and attempts to hamper the falsification process limit the growth of knowledge.
I think this is a different paradigm in support of Nick's point that too strong an emphasis on introspection shuts down, rather than inspires, inquiry.

 

On Fri, 24 Jan 2020 at 00:38, <[hidden email]> wrote:

New Abstract:

 

As psychologists in the behaviorist tradition, we have long had misgivings about the concept of introspection. The metaphor behind the concept is misleading, and despite the wide use of the concept in both vernacular and professional settings, we doubt that anybody has ever resorted to introspection in the sense in which the concept is usually understood. Additional misgivings arise from the study of the philosophy of C. S. Peirce. Peirce's Pragmaticism, one of the foundations of modern behaviorism, rejects the Cartesian notion that all knowledge first arises from direct knowledge of one's own mind, i.e., from introspection. Peirce declares that all knowledge arises from inference. He even reverses the flow, declaring that self-knowledge is largely inference from what we do and what happens to us. The logical operation by which we infer our selves is the one Peirce called "abduction." When we engage in abduction, we use one or more properties of an individual event or object to infer its membership in a class of events or objects that share these properties with our initial event or object. Abductions have potential heuristic power because, when we infer what class an individual event belongs to, we may infer by deduction other properties that this individual may have. However, abductions vary tremendously in their heuristic power, ranging from highly useful and testable expectations to implications that are merely vacuous or misleading. We argue that the manner in which "introspection" is understood in psychology abuses the logic of abduction, prematurely shutting down, rather than inspiring, inquiry.

Nicholas Thompson

Emeritus Professor of Ethology and Psychology

Clark University

[hidden email]

https://wordpress.clarku.edu/nthompson/

 

 

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives back to 2003: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove



Re: Abduction and Introspection

gepr

Well, your abstract seems to assume something akin to coherence, the idea that whatever's doing the introspection is a whole/atomic thing perceiving that whole/atomic thing. I think we know that established types of self-perception (proprio-, entero-) consist of one sub-component monitoring another sub-component. It's not clear to me whether you intend to address that part-whole aspect of self-perception or not. But if you don't address it, *I* won't be satisfied with whatever you write. 8^)



--
☣ uǝlƃ

Re: Abduction and Introspection

thompnickson2

Hi, Glen,

 

At FRIAM today, some of us were talking with wonder and gratitude about your extraordinary ability to read and comment on what others write. I wish you would come here some day so we can buy you coffee. Also, FWIW, let me say, in this public forum, that I owe you commentary on any writing you are doing that needs commentary.

 

As to the issue of inter-component monitoring, I am not sure we'll get into it much in this article, because the monitoring of one component by another seems to me to be "other-perception," as I understand it. Here is how I made the argument some years back, in https://www.researchgate.net/publication/311349078_The_many_perils_of_ejective_anthropomorphism, p. 87.

I have always longed to know what an actual computer scientist would say about this inexpert speculation. How WOULD you wire a computer to assess its own "state"?

 

Nick

Nicholas Thompson

Emeritus Professor of Ethology and Psychology

Clark University

[hidden email]

https://wordpress.clarku.edu/nthompson/

 

 


Re: Abduction and Introspection

gepr
I'm sure you're being generous by *not* calling me argumentative or contrarian, or any number of other words. 8^) But I'll take it, anyway.

In the text you attached, you talk about that other module and privileged access. As far as how I think many *others* talk about self-perception, I have no problem with what you've written. But what the entire discussion, including the text you attached, ignores is that privileged access involves *manipulation* as well as observation: two necessary types of interaction.

The idea is one I've lobbed at you before re: feedback. We can consider your example of putting your leg down to get out of bed in the morning. My assertion was that my doubt that the floor is there manifests itself as very FAST feedback (proprioception) regarding the movement of my leg toward the floor and if the distance seems too great, my manipulation of my leg rapidly compensates. So, if I forgot that I'm sleeping in a hotel with a thicker mattress, I quickly *remember* that because of this privileged introspection (manipulate, observe, manipulate, observe, ...).

To couch this in terms of one sub-component extrocepting another (with which I don't disagree, in gist), it's the *speed* of the feedback between the two components that gives the impression that the 2 components are tightly coupled and can be considered one component "me introspecting" or "me propriocepting".

This sort of reasoning underlies (I think) Buzsáki's "Rhythms of the Brain" and the concept of "neurodynamic binding". Any discussion of self-perception must surely talk at least a little bit about that, right?

To sum up, I think your discussion should include 2 things: 1) manipulation and observation, and 2) feedback between sub-components of the "self". If you adopted those, then you could easily dovetail into "abduction" (intra-self inference by action) and even "falsification" (intra-self trial and error). It wouldn't take much of a mention in your text to satisfy me ... just some hand wavy stuff telling me you've thought about (1) and (2) in the context of your criticism of the way "introspection" is used in psych literature.
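
A minimal sketch of that manipulate-observe loop, in Python (the numbers and names are invented for illustration, not anything Glen specified):

    # Fast feedback: manipulate the leg, observe via proprioception, compare
    # against the expectation, and compensate. All values are made up.
    def lower_leg(expected_floor: float, actual_floor: float) -> float:
        position = 0.0   # current leg extension, arbitrary units
        step = 0.1       # how far each "manipulation" moves the leg
        while position < actual_floor:
            predicted = expected_floor - position  # what the model expects to remain
            observed = actual_floor - position     # what proprioception reports
            surprise = observed - predicted
            if abs(surprise) > step:
                expected_floor += surprise  # rapid compensation: revise the model
            position += step                # manipulate again
        return expected_floor               # the revised expectation

    # Forgot you are sleeping on a thicker hotel mattress? The loop "remembers".
    print(lower_leg(expected_floor=1.0, actual_floor=1.5))  # -> 1.5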



--
☣ uǝlƃ


Re: Abduction and Introspection

Marcus G. Daniels
In reply to this post by thompnickson2

Nick writes (about Glen):

 

“At FRIAM today, some of us  were talking with wonder and gratitude about your extra-ordinary ability to read and comment on what others write.”

 

I think he must just not be distracted by Slack.  

But seriously, Glen is fast!

 

Marcus

 



Re: Abduction and Introspection

Steve Smith
In reply to this post by gepr


Indulging in a "pre-buttal", are we? ;^)

- ⏰





Re: Abduction and Introspection

thompnickson2

Hi, Steve,

 

Nicholas Thompson

Emeritus Professor of Ethology and Psychology

Clark University

[hidden email]

https://wordpress.clarku.edu/nthompson/

 

 


Re: Abduction and Introspection

thompnickson2
In reply to this post by thompnickson2

Steve Smith wrote,

“Indulging in a "pre-buttal", are we? ;^)”

Commenting on Glen's worries about "self-perception" (quoted in his post above).

 

NST has three further comments here:

First: I don't recognize the smiley. It looks as if Steve's nose is out of joint.

Second: Better watch out, fella. Glen and I actually agree on this one. Can you imagine what your rhetorical life would be like if Glen and I ganged up on you?

Third: PLEASE, PLEASE, PLEASE would everybody take seriously the question:

 

As software engineers, what conditions would a program have to fulfill for us to say that a computer was monitoring "itself"? Is it sufficient to say that one of the components is monitoring one of the other components? Is the light that comes on when you switch on the computer an instance of the computer monitoring "the computer," or is it just an instance of the light monitoring the power supply? What does "self-perception" mean to a computer engineer? When Kellyanne Conway speaks for the Trump Administration, is that the Trump Administration speaking for itself? I think we might say yes, because Kellyanne is designated, designed, sent (etc.) to speak for the administration. So when a component of a computer that has been designed to speak for that computer speaks, is that the computer reporting on itself, or just the "reporting monitor" speaking on its view of the system of which it is a part? How do we recognize design? Ask the programmer what he intended? What if he screwed up the design? If Kellyanne's sensors were, in fact, wrongly placed on Trump's tummy so that all she heard was bowel sounds, would she still speak for The Administration?

 

I don't think there is a truth of this matter, but I really, REALLY want to know how you think about it. Please see my original post on introspection earlier in this thread.

 

Nick

Nicholas Thompson

Emeritus Professor of Ethology and Psychology

Clark University

[hidden email]

https://wordpress.clarku.edu/nthompson/

 

 


Re: Abduction and Introspection

jon zingale
In reply to this post by thompnickson2
Perhaps, along with "manipulate" and "observe", there could be "predict".
Presently, I am making my way through two books on predictive processing.

1) Surfing Uncertainty, by Andy Clark
2) Extended Consciousness and Predictive Processing, by M. Kirchhoff and J. Kiverstein

Phenomenal consciousness either arising from, or being supported by, top-down prediction of incoming sensory inputs is a theme common to both texts. In Glen's example, an individual gets out of an unfamiliar bed in the morning. If I am reading Clark correctly, he would assert that the surprise Glen feels comes from Glen's nervous system incorrectly predicting its own incoming sensory inputs. Perhaps this sort of thing can be construed as an observation, but it seems a bit more: there appears to be an anticipation present as well.
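
A toy version of that top-down loop, in Python (my sketch; the update rule is generic prediction-error correction, not anything specific to either book):

    # A prediction is corrected by the error between it and the incoming
    # sensory signal; a large error is the "surprise". Purely illustrative.
    def predictive_loop(sensory_inputs, prediction=0.0, rate=0.5):
        for observed in sensory_inputs:
            error = observed - prediction   # prediction error ("surprise")
            prediction += rate * error      # top-down model update
        return prediction

    # An unfamiliar bed: the signal jumps, the error spikes, the model adapts.
    print(predictive_loop([1.0, 1.0, 1.5, 1.5, 1.5]))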

Jonathan Zingale


Re: Abduction and Introspection

jon zingale
In reply to this post by thompnickson2
As an addendum to my previous comment, I suppose introspection can be understood in terms of querying one's own nervous system. Perhaps to introspect is to attempt to simulate patterns of sensory input, stimulating the nervous system into returning its predictions.
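
On that reading, introspection might be sketched as querying one's own predictive machinery with simulated rather than actual input (all names here are invented for illustration):

    # Introspection as self-query: run the same predictive machinery, but
    # feed it an imagined sensory pattern instead of a real one.
    class PredictiveSystem:
        def __init__(self):
            self.expectations = {"howl in the night": "coyote outside"}

        def predict(self, sensory_pattern):
            # What the system expects, given this input.
            return self.expectations.get(sensory_pattern, "no prediction")

        def introspect(self, imagined_pattern):
            # Same machinery, driven by simulated input.
            return self.predict(imagined_pattern)

    nervous_system = PredictiveSystem()
    print(nervous_system.introspect("howl in the night"))  # -> "coyote outside"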


Re: Abduction and Introspection

Marcus G. Daniels
In reply to this post by thompnickson2

Nick writes:

 

"As software engineers, what conditions would a program have to fulfill to say that a computer was monitoring 'itself'?"

 

It is common for codes that calculate things to periodically test invariants that should hold. For example, a physics code might test for conservation of mass or energy. A conversion from a data structure with one index scheme to another is often followed by a check to ensure that the total number of records did not change, or, if it did change, that it changed by an expected amount. It is also possible, but less common, to write code so that proofs are constructed by virtue of the code being compilable against a set of types. The types describe all of the conditions that must hold regarding the behavior of a function. In that case it is not necessary to detect whether something has gone haywire at runtime, because it is simply not possible for anything to go haywire. (A computer could still miscalculate due to a cosmic ray or some other physical interruption, but assuming that did not happen, a complete proof-carrying code would not fail within its specifications.)

A weaker form of self-monitoring is to periodically check memory or disk usage and raise an alarm if either is unexpectedly high or low. Such an alarm might trigger cleanup of old results otherwise kept around for convenience.
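
A minimal sketch of these two kinds of self-check, in Python (the threshold and names are invented, and the resource module is Unix-only):

    import resource

    # Invariant check: a conversion between index schemes must preserve
    # the total number of records.
    def reindex(records):
        converted = list(records.items())
        assert len(converted) == len(records), "record count changed"
        return converted

    # Weaker self-monitoring: raise an alarm on unexpected resource usage.
    def check_memory(limit_kb=500_000):
        used_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss  # KB on Linux
        if used_kb > limit_kb:
            print("alarm: memory unexpectedly high; clean up old results?")

    reindex({"a": 1, "b": 2})
    check_memory()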

 

Marcus

 



Re: Abduction and Introspection

thompnickson2

Thanks, Marcus,

 

Am I correct that all of your examples fall within this frame?

 


I keep expecting you guys to scream at me, "Of course, you idiot, self-perception is partial and subject to error! HTF could it be otherwise?" I would love that. I would record it and play it on a loop for half my colleagues in psychology departments around the world.

 

Nick

Nicholas Thompson

Emeritus Professor of Ethology and Psychology

Clark University

[hidden email]

https://wordpress.clarku.edu/nthompson/

 

 



Re: Abduction and Introspection

Marcus G. Daniels

I would say the problem of debugging (or introspection, if you insist) is like finding yourself in some random place, never seen before, with the task of developing a map and learning the local language and customs. If one is given the job of law enforcement (debugging violations of law), it is necessary to collect quite a bit of information, e.g., the laws of the jurisdiction, the sensitivities and conflicts in the area, and detailed geography. In haphazardly developed software, learning about one part of the city teaches you nothing about another part of the city. In well-designed software, one can orient oneself quickly because there are many easily learnable conventions to follow. I would say the distinction between the modeler and the modeled is not that helpful. To really avoid bugs, one wants metaphorical citizens that are genetically incapable of breaking laws. Privileged access is kind of beside the point, because in practice software is often far too big to fully rationalize.

 


Re: Abduction and Introspection

Pieter Steenekamp
I would go along with Joshua Epstein's "If you didn't grow it, you didn't explain it." Keep in mind that this motto applies to problems involving emergence. So what I'm saying is that it is in many cases futile to apply logical reasoning to find answers, and I refer here to the emergent properties of the human brain as well as to ABM (agent-based modeling) software. But even if the problem involves emergence, it is easy for both humans and computers to apply validation logic. As with the P=NP problem*, it is difficult to find a solution but easy to verify one.

So my answer to "As software engineers, what conditions would a program have to fulfill to say that a computer was monitoring 'itself'?" is simply: explicitly verify the results. There are many approaches to this verification: applying logic, checking against measured actual data, checking for violations of physics, etc.

*I know you all know it; this is just a refresher. The P=NP problem is one of the biggest unsolved problems in computer science. There is a class of problems that are very difficult to solve and a class that are very easy to verify. The P=NP question asks: if you have a problem that is difficult to solve but easy to verify, is it always possible to find an algorithm that solves it reasonably easily? "Reasonably easy" means solvable in polynomial time. The algorithms currently known take exponential time, and even for a moderately sized problem that means more time than the age of the universe on a supercomputer.
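
Subset sum makes the asymmetry concrete (a sketch of mine, not Pieter's; standard library only): verifying a proposed subset is a membership check plus one sum, while the naive solver tries every one of the 2^n subsets.

    from itertools import combinations

    def verify(numbers, subset, target):
        # Polynomial time: membership check plus one sum.
        return all(x in numbers for x in subset) and sum(subset) == target

    def solve(numbers, target):
        # Exponential time: enumerate every subset.
        for r in range(len(numbers) + 1):
            for subset in combinations(numbers, r):
                if sum(subset) == target:
                    return list(subset)
        return None

    nums = [3, 34, 4, 12, 5, 2]
    answer = solve(nums, 9)                 # slow in general
    print(answer, verify(nums, answer, 9))  # -> [4, 5] True; checking is instant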

Pieter


Re: Abduction and Introspection

Prof David West
In reply to this post by thompnickson2
Nick,
re: "self-aware computers."

When I am programming objects, I make frequent use of "reflection", which has some things in common with self-awareness, at least at a superficial level.

I can send a message to an object asking it to do something, like tell me its name. I get a name back from the object.

I can also send a message to an object asking if it knows how to tell me its name. I get back a Boolean.

If the Boolean was True, then I can ask the object to tell me its name.  If the Boolean was False then I go and find some other more talkative object.

The computer is an object. So I could ask it if it contains a disk drive, or a Java Virtual Machine, and only if I get a True in response will I actually ask the computer to do something.

Reflection requires the object to "look at itself" (e.g., at the list of messages it can respond to) and respond appropriately to questions about that list (e.g., is 'name' included?). I can also ask about an object's state, e.g., "Are you busy right now, or can I send you a message?" The object will reply appropriately.

I can even ask more complicated questions, like "When will you be done with what you are doing, so I can send you a message?"

All of this might appear to require self-awareness, but it is just a feature of some programming languages.
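
Dave's examples read as Smalltalk-style reflection; here is a loose Python translation of the same moves (my sketch, not his code):

    class Widget:
        def name(self):
            return "widget-42"

    obj = Widget()

    # Ask the object whether it knows how to tell me its name (a Boolean).
    knows_name = callable(getattr(obj, "name", None))

    # Only if True do I actually send the message.
    if knows_name:
        print(obj.name())  # -> "widget-42"

    # Ask the object which messages it responds to (its message list).
    print([m for m in dir(obj) if not m.startswith("_")])  # -> ['name']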

The light you mention is a part (lamp) monitoring a part (current flowing out of power supply).

A closer metaphor for a computer monitoring itself is the bootstrap program: when the computer determines that its operating system is ready, it displays a command prompt or a desktop.

"Asking about the design" is exactly what reflection is used for — trying to find out what some other programmer designed/build into the objects they created.

All of this uses language that parallels the language we use for self-awareness / self-perception. Intentionally, because it is a metaphor.

That does not mean that the computer or the object is, in some metaphysical /epistemological sense self-anything.

davew


On Sat, Jan 25, 2020, at 7:30 PM, [hidden email] wrote:

Steve Smith wrote,

“Indulging in a "pre-buttal", are we? ;^)”

Commenting on Glen’s worries about “self perception”.

Well, your abstract seems to assume something akin to coherence, the idea that whatever's doing the introspection is a whole/atomic thing perceiving that whole/atomic thing. I think we know that established types of self-perception (proprio-, entero-) consist of one sub-component monitoring another sub-component. It's not clear to me whether you intend to address that part-whole aspect of self-perception or not. But if you don't address it, *I* won't be satisfied with whatever you write. 8^)

 

NST has two further comments, here: 

First is, I don’t recognize the smiley.  It looks like Steve’s  nose is out of joint. 

Second is, Better watch out, fella.  Glen and I actually agree on this one.  Can you imagine what your rhetorical life would be like if Glen and I ganged up on you? 

Third, is PLEASE, PLEASE, èPLEASEç would everybody take seriously the question:

 

              As software engineers, what conditions would a program have to fulfill to say that a computer was monitoring “itself”?  Is it sufficient to say that one of the components is monitoring one of the other components.  Is the light that comes on when you switch on the computer monitoring “the computer”  an instance of the computer monitoring the computer, is it just an instance of the light monitoring the power supply.  What does “self perception” mean to a computer engineer.  When Kelly Ann Connelly speaks for the Trump Administration, is that the Trump administration speaking for itself? I think we might say yes, because Kelly is designated, designed, sent (etc) to speak for the administration.  So when a component of a computer that has been designed to speak for that computer speaks, is that the computer reporting on itself, or just the “reporting monitor” speaking on it view of the system of which it is part.   How do we recognize design?  Ask the programmer what he intended?  What if he screwed up the design.  If Kelly Anne’s sensors were, in fact, wrongly  placed on Trump’s tummy so all she heard was bowel sounds, would she still speak for The Administration? 

 

I don’t think there is a truth of this matter, but I really, è REALLYç want to know how you think about it.  Please see my original post on introspection below.

 

Nick

Nicholas Thompson

Emeritus Professor of Ethology and Psychology

Clark University

[hidden email]

https://wordpress.clarku.edu/nthompson/


 

 

Sent: Friday, January 24, 2020 1:46 PM
To: 'The Friday Morning Applied Complexity Coffee Group' <[hidden email]>
Subject: RE: [FRIAM] Abduction and Introspection

 

Hi, Glen,

 

At FRIAM today, some of us  were talking with wonder and gratitude about your extra-ordinary ability to read and comment on what others write.  I wish you would come here some day so we can buy you coffee. Also, fwiw, let me say, in this public forum, that I owe you commentary on any writing you are doing that you need commentary on.

 

As to the issue of inter-component monitoring, I am  not sure we'll get into it much in this article because the monitoring of one component by another seems to me "other-perception", as I understand it.  Here is how I made the argument some years back, in https://www.researchgate.net/publication/311349078_The_many_perils_of_ejective_anthropomorphism

 

p. 87. 

 


 

I have always longed to know that an actual computer scientist would say about this inexpert speculation.  How WOULD you wire a computer to assess its own “state”. 

 

Nick

Nicholas Thompson

Emeritus Professor of Ethology and Psychology

Clark University

[hidden email]

https://wordpress.clarku.edu/nthompson/

 

 

 

-----Original Message-----
From: Friam <[hidden email]> On Behalf Of u?l? ?
Sent: Friday, January 24, 2020 12:48 PM
To: FriAM <[hidden email]>
Subject: Re: [FRIAM] Abduction and Introspection

 

 

Well, your abstract seems to assume something akin to coherence, the idea that whatever's doing the introspection is a whole/atomic thing perceiving that whole/atomic thing. I think we know that established types of self-perception (proprio-, entero-) consist of one sub-component monitoring another sub-component. It's not clear to me whether you intend to address that part-whole aspect of self-perception or not. But if you don't address it, *I* won't be satisfied with whatever you write. 8^)

 

 

On 1/24/20 7:20 AM, [hidden email] wrote:

> Anybody else?


--

uǝlƃ

Re: Abduction and Introspection

Frank Wimberly-2
In reply to this post by Pieter Steenekamp
Validation:  Bill Reynolds wrote a paper about inferring causal models from observational data to validate, for example, agent-based models.  My original idea was that if the same causal edges emerge as are observed in the modeled system, then that helps validate the model.  Bill thought it was better to compare the causal model with experts' opinions regarding causation in the modeled system.  Good enough.

Reynolds, W. N., & Wimberly, F. C. (2011). Simulation Validation Using Causal Inference Theory with Morphological Constraints. Proceedings of the 2011 Winter Simulation Conference, Arizona Grand Resort, December 2011.
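
To make the edge-comparison step concrete, here is a toy sketch (not from the paper; the variables and edges are invented) of checking model-inferred causal edges against expert-asserted ones in Python:

# Hypothetical causal edges inferred from model output vs. asserted by experts.
model_edges = {("rainfall", "crop_yield"), ("crop_yield", "migration"),
               ("price", "migration")}
expert_edges = {("rainfall", "crop_yield"), ("crop_yield", "migration"),
                ("policy", "migration")}

agreed = model_edges & expert_edges    # edges both sources assert
spurious = model_edges - expert_edges  # model asserts, experts do not
missing = expert_edges - model_edges   # experts assert, model missed

# One crude validation score: Jaccard similarity of the two edge sets.
score = len(agreed) / len(model_edges | expert_edges)
print(f"agreement={score:.2f} spurious={spurious} missing={missing}")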

I mentioned this here recently and Glen asked for a copy so I sent it to him.  I look forward to his comments.

Frank

-----------------------------------
Frank Wimberly

My memoir:
https://www.amazon.com/author/frankwimberly

My scientific publications:
https://www.researchgate.net/profile/Frank_Wimberly2

Phone (505) 670-9918

On Sat, Jan 25, 2020, 11:48 PM Pieter Steenekamp <[hidden email]> wrote:
I would go along with Joshua Epstein's "if you did not grow it, you did not explain it". Keep in mind that this motto applies to problems involving emergence. So what I'm saying is that it's in many cases futile to apply logical reasoning to find answers - and I refer to the emergent properties of the human brain as well as to ABM (agent-based modeling) software. But even if the problem involves emergence, it's easy for both humans and computers to apply validation logic. Similar to the P=NP problem*, it can be difficult to find the solution but easy to verify it.

So my answer to "As software engineers, what conditions would a program have to fulfill to say that a computer was monitoring “itself" is simply: explicitly verify the results. There are many approaches to do this verification; applying logic, checking against measured actual data, checking for violations of physics, etc.   

*I know you all know it, just a refresher: the P=NP problem is one of the biggest unsolved problems in computer science. There is a class of problems that are very difficult to solve and a class that are very easy to verify. The P=NP problem asks: if you have a problem that is difficult to solve but easy to verify, is it possible to find an algorithm that solves it reasonably easily? "Reasonably easy" means solvable in polynomial time. The current algorithms take exponential time, and even for a moderate-size problem that means more time than the age of the universe on a supercomputer.
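
As a concrete illustration of that solve/verify asymmetry, here is a small Python sketch using subset sum (my example; the numbers are arbitrary). The solver enumerates up to 2^n subsets, while verification is a single pass:

from collections import Counter
from itertools import combinations

def solve_subset_sum(numbers, target):
    """Brute force: examines up to 2^n candidate subsets."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

def verify_subset_sum(numbers, target, subset):
    """Checking a proposed answer: one multiset test and one sum."""
    return not (Counter(subset) - Counter(numbers)) and sum(subset) == target

nums = [3, 34, 4, 12, 5, 2]
answer = solve_subset_sum(nums, 9)                  # the slow direction
print(answer, verify_subset_sum(nums, 9, answer))   # the fast direction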

Pieter

On Sat, 25 Jan 2020 at 23:04, Marcus Daniels <[hidden email]> wrote:

I would say the problem of debugging (or introspection if you insist) is like finding yourself at some random place, never seen before, with the task of developing a map and learning the local language and customs.  If one is given the job of law enforcement (debugging violations of law), it is necessary to collect quite a bit of information, e.g. the laws of the jurisdiction, the sensitivities and conflicts in the area, and detailed geography.  In haphazardly-developed software, learning about one part of a city teaches you nothing about another part of the city.  In well-designed software, one can orient oneself quickly because there are many easily-learnable conventions to follow.  I would say this distinction between the modeler and the modeled is not that helpful.  To really avoid bugs, one wants metaphorical citizens that are genetically incapable of breaking laws.  Privileged access is kind of beside the point because in practice software is often far too big to fully rationalize.

 

From: Friam <[hidden email]> on behalf of "[hidden email]" <[hidden email]>
Reply-To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Date: Saturday, January 25, 2020 at 11:57 AM
To: 'The Friday Morning Applied Complexity Coffee Group' <[hidden email]>
Subject: Re: [FRIAM] Abduction and Introspection

 

Thanks, Marcus,

 

Am I correct that all of your examples fall within this frame:

 


I keep expecting you guys to scream at me, “Of course, you idiot, self-perception is partial and subject to error!  HTF could it be otherwise?”   I would love that.  I would record it and put it on loop for half my colleagues in psychology departments around the world. 

 

Nick

Nicholas Thompson

Emeritus Professor of Ethology and Psychology

Clark University

[hidden email]

https://wordpress.clarku.edu/nthompson/

 

 

From: Friam <[hidden email]> On Behalf Of Marcus Daniels
Sent: Saturday, January 25, 2020 12:16 PM
To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Subject: Re: [FRIAM] Abduction and Introspection

 

Nick writes:

 

As software engineers, what conditions would a program have to fulfill to say that a computer was monitoring “itself”?

 

It is common for codes that calculate things to periodically test invariants that should hold.  For example, a physics code might test for conservation of mass or energy.  A conversion of a data structure from one index scheme to another is often followed by a check to ensure the total number of records did not change, or if it did change, that it changed by an expected amount.  It is also possible, but less common, to write a code so that proofs are constructed by virtue of the code being compilable against a set of types.  The types describe all of the conditions that must hold regarding the behavior of a function.  In that case it is not necessary to detect whether something goes haywire at runtime, because it is simply not possible for something to go haywire.  (A computer could still miscalculate due to a cosmic ray, or some other physical interruption, but assuming that did not happen, a complete proof-carrying code would not fail within its specifications.)
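
A minimal sketch of such invariant checks (the names, the toy "integrator", and the tolerance are assumptions of mine, not from any particular code):

def reindex(records_by_id):
    """Convert a dict-of-lists index scheme into one flat list."""
    flat = [rec for recs in records_by_id.values() for rec in recs]
    # Invariant: the total number of records must survive the conversion.
    expected = sum(len(recs) for recs in records_by_id.values())
    assert len(flat) == expected, "record count changed during reindex"
    return flat

def total_energy(velocities):
    return sum(0.5 * v * v for v in velocities)  # unit masses, kinetic only

def step(velocities):
    """One 'physics' step, with a conservation check afterwards."""
    before = total_energy(velocities)
    new = [-v for v in velocities]  # trivial update that should conserve energy
    after = total_energy(new)
    if abs(after - before) > 1e-9 * max(abs(before), 1.0):
        raise RuntimeError("energy drifted beyond tolerance")
    return new

print(len(reindex({"a": [1, 2], "b": [3]})), step([1.0, -2.0]))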

A weaker form of self-monitoring is to periodically check for memory or disk usage, and to raise an alarm if they are unexpectedly high or low.   Such an alarm might trigger cleanups of old results, otherwise kept around for convenience. 
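
And a sketch of that weaker, resource-level self-monitoring (the scratch path, threshold, and polling interval below are hypothetical):

import os
import shutil
import time

SCRATCH = "/tmp/myjob"   # hypothetical directory of old results
LIMIT = 0.90             # raise the alarm when the disk is 90% full

def disk_fraction_used(path="/tmp"):
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

def cleanup_old_results(directory, keep_latest=5):
    """Delete all but the newest few files kept around for convenience."""
    if not os.path.isdir(directory):
        return
    paths = sorted((os.path.join(directory, f) for f in os.listdir(directory)),
                   key=os.path.getmtime)
    for path in paths[:-keep_latest]:
        os.remove(path)

def watchdog(poll_seconds=60):
    """Periodic self-check; in a real code this would run in its own thread."""
    while True:
        if disk_fraction_used() > LIMIT:
            cleanup_old_results(SCRATCH)
        time.sleep(poll_seconds)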

 

Marcus

 


Re: Abduction and Introspection

Frank Wimberly-2
Introspection: it would be possible to insert a causal inference module into an agent-based modeling program. The ABM could examine the causal conclusions emerging from the data it's generating. That seems close to what "introspection" means to me.
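
A toy sketch of the idea (a real causal inference module, e.g. PC-style discovery, would replace the lagged-correlation stand-in below; the model and variable names are invented):

import random

def run_abm(steps=200):
    """A toy two-variable model with a built-in wealth -> trades dependency."""
    wealth = 1.0
    log = {"wealth": [], "trades": []}
    for _ in range(steps):
        wealth += random.gauss(0, 0.1)
        trades = 0.8 * wealth + random.gauss(0, 0.1)
        log["wealth"].append(wealth)
        log["trades"].append(trades)
    return log

def lagged_correlation(x, y, lag=1):
    """Correlation of x[t] with y[t+lag] -- a crude directionality hint only."""
    x, y = x[:-lag], y[lag:]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

# The model examines the "causal conclusions" emerging from its own log.
log = run_abm()
print("wealth -> trades?",
      round(lagged_correlation(log["wealth"], log["trades"]), 2))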

Frank
-----------------------------------
Frank Wimberly

My memoir:
https://www.amazon.com/author/frankwimberly

My scientific publications:
https://www.researchgate.net/profile/Frank_Wimberly2

Phone (505) 670-9918


Re: Abduction and Introspection

Marcus G. Daniels
In reply to this post by Pieter Steenekamp

I thought the question was about software engineering, not about predicting emergent behavior?  Detecting undesirable behaviors is easier than predicting all behaviors.

 


Re: Abduction and Introspection

Frank Wimberly-2
Man is Half Blind
Delmore Schwartz

Hope is not a certainty;
Guess is not a certainty.
So are perception and prediction.
Intuition is an illusion.
Yet they are the only aids
With which man walks, half blind.

-----------------------------------
Frank Wimberly

My memoir:
https://www.amazon.com/author/frankwimberly

My scientific publications:
https://www.researchgate.net/profile/Frank_Wimberly2

Phone (505) 670-9918


Re: Abduction and Introspection

thompnickson2

Oh, Frank!

 

That’s really good!

 

That’s the paradox of pragmatism.  Some things are NOT random; we just never know for sure which ones those are.

 

N

 

Nicholas Thompson

Emeritus Professor of Ethology and Psychology

Clark University

[hidden email]

https://wordpress.clarku.edu/nthompson/

 

 

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives back to 2003: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove