Intentionality is the mark of the vital

classic Classic list List threaded Threaded
8 messages Options
Reply | Threaded
Open this post in threaded view
|

Intentionality is the mark of the vital

Nick Thompson
Jochen,

Thanks for your kind response.

Your question churns my head.  I was keen to argue that intentionality is a
property not only of thinking things but of any biological thing.  But I
never imagined that intentionality could be used as a criterion of
vitality.  I do believe that every living system displays intentionality,
but I now have to think about whether I think that all intentional systems
are living.   I guess NOT.  However, my reasons for holding this belief are
probably robotophobic.  

Nick


PS  My first response to your  question was to write the following 100
words of baffle-gab, like the good academic I am.  It might be marginally
interesting in and off itself, but it didnt seem to answer your question.
I had put too much effort into it to throw it away, so I stuck it below.
Feel free to ignore it.  

BEGIN BAFFLEGAB =======================================================

Intentionality is one of those words that leads to endless confusion.  It
can refer to having an intention or it can refer to a peculiar propert to
assertions containing verbs of mentation, wanting, thinking, feeling, etc.
The sentence, "Jones's intention was that the books be placed on the table"
is intentional in both senses: intentional in sense one because it tells us
something about what Jones is up to, and intentional in the second sense
because it displays the odd property of referential opacity.    Unlike the
statement "the books are on the table" , the statement about Jones's
intentions cannot be verified nor disconfirmed by gathering information
about the location of the books.  

 The two are intimately connected.  Any statement one makes about the
intentions of others in sense one is inevitably an intensional utterance in
sense two because the truth value of the statement lies in the organization
of Jones's behavior, rather than whether Jones's intention is ever
fulfilled.  

It was in this second, perhaps strained, philosophic sense, that I think
the cue relation is necessarily intentional.  When we say that C is a cue
to X, we mean that from the point of view of the system we are interested
in, C stands in for X.   ("In the Human respiratory system, Blood acidity
is a cue for blood oxygenation")  To the extent that robots use cues, they
MUST be intentional in this sense.  

===========================================================
end  BAFFLEGAB.  

Nicholas Thompson
nickthompson at earthlink.net
http://home.earthlink.net/~nickthompson


> [Original Message]
> From: <friam-request at redfish.com>
> To: <friam at redfish.com>
> Date: 7/14/2006 12:00:29 PM
> Subject: Friam Digest, Vol 37, Issue 17
>
> Send Friam mailing list submissions to
> friam at redfish.com
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://redfish.com/mailman/listinfo/friam_redfish.com
> or, via email, send a message with subject or body 'help' to
> friam-request at redfish.com
>
> You can reach the person managing the list at
> friam-owner at redfish.com
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Friam digest..."
>
>
> Today's Topics:
>
>    1. Re: 100 billion neurons (George Duncan)
>    2. Re: 100 billion neurons (Jim Rutt)
>    3. Re: 100 billion neurons (Frank Wimberly)
>    4. Intentionality - the mark of the vital (Jochen Fromm)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Thu, 13 Jul 2006 10:38:47 -0600
> From: "George Duncan" <gd17 at andrew.cmu.edu>
> Subject: Re: [FRIAM] 100 billion neurons
> To: "The Friday Morning Applied Complexity Coffee Group"
> <friam at redfish.com>
> Message-ID:
> <b9b019d10607130938n1e6e953etaa4bbd1409d24ffb at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Shall this conversation be neuronic rather than neurotic?
>
> Or try this
> http://www.technologyreview.com/read_article.aspx?id=17164&ch=infotech
>
>
> On 7/13/06, Giles Bowkett <gilesb at gmail.com> wrote:
> >
> > I'm inclined to agree. The model I use is nonlinear fluid dynamics.
> > Say you've got a thought which you began thinking when you were young.
> > That thought is a fluid in motion. Over the course of your life you
> > revisit certain ideas and revise certain opinions. The motion
> > continues for decades. The way you think is like an information
> > processing system which evolves over the course of your life, and it's
> > true enough to call that software, not hardware, but the flow of data
> > through that system is entirely organic, and creating an exact copy of
> > a given flow in nonlinear fluid dynamics is impossible. The structure
> > of your mode of thinking -- your "software" -- is shaped tremendously
> > by the things that you think about; therefore replicating the
> > processor without replicating the data can only be of partial
> > usefulness, if the processor is shaped by and for the data. It's like
> > copying a river by duplicating exactly every last rock and pebble, but
> > leaving out the water.
> >
> > On 7/10/06, Frank Wimberly <wimberly3 at earthlink.net> wrote:
> > > Back in the 1980's Hans and I had offices next to each other in the
> > > Robotics Institute at Carnegie Mellon.  Over a period of a couple of
> > > years we had numerous arguments about whether machines could realize
> > > consciousness; whether a human mind could be transferred to a machine,
> > > etc.  I remember saying that if somehow my "mind" were transferred
from
> > > my body to some robot--which I felt was impossible--it might be that
> > > everyone else would agree that it was a remarkable likeness but that I
> > > would be gone.  Hans replied that I undervalued myself--that I am
> > > software not hardware.  After many arguments along these lines I said,
> > > "Hans, I now understand why you don't understand what I am saying
about
> > > consciousness--you don't have it."  This was all in good humor and
later
> > > when I was teaching a course in AI to MBA students I invited Hans to
> > > continue our debate in class.  A good time was had by all, I hope.
> > >
> > > Frank
> > >
> > > ---
> > > Frank C. Wimberly
> > > 140 Calle Ojo Feliz              (505) 995-8715 or (505) 670-9918
(cell)

> > > Santa Fe, NM 87505           wimberly3 at earthlink.net
> > >
> > > -----Original Message-----
> > > From: friam-bounces at redfish.com [mailto:friam-bounces at redfish.com] On
> > > Behalf Of Martin C. Martin
> > > Sent: Saturday, July 08, 2006 7:16 PM
> > > To: The Friday Morning Applied Complexity Coffee Group
> > > Subject: Re: [FRIAM] 100 billion neurons
> > >
> > > I suspect you'd like Hans Moravec's books:
> > >
> > > http://www.amazon.com/gp/product/0674576187
> > > http://www.amazon.com/gp/product/0195136306
> > >
> > > He uses Moore's law and estimates of the brain's computing power to
> > > calculate when we'll have human equivalence in "a computer."  I forget
> > > the date, but it's not far.  He also talks about a number of very
> > > interesting consequences of this.
> > >
> > > - Martin
> > >
> > > ============================================================
> > > FRIAM Applied Complexity Group listserv
> > > Meets Fridays 9a-11:30 at cafe at St. John's College
> > > lectures, archives, unsubscribe, maps at http://www.friam.org
> > >
> > >
> > > ============================================================
> > > FRIAM Applied Complexity Group listserv
> > > Meets Fridays 9a-11:30 at cafe at St. John's College
> > > lectures, archives, unsubscribe, maps at http://www.friam.org
> > >
> >
> >
> > --
> > Giles Bowkett
> > http://www.gilesgoatboy.org
> >
> > ============================================================
> > FRIAM Applied Complexity Group listserv
> > Meets Fridays 9a-11:30 at cafe at St. John's College
> > lectures, archives, unsubscribe, maps at http://www.friam.org
> >
>
>
>
> --
> George T. Duncan
> Professor of Statistics
> Heinz School of Public Policy and Management
> Carnegie Mellon University
> Pittsburgh, PA 15213
> (412) 268-2172
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
/pipermail/friam_redfish.com/attachments/20060713/bcb8105c/attachment-0001.h
tml

>
> ------------------------------
>
> Message: 2
> Date: Thu, 13 Jul 2006 16:57:49 -0600
> From: Jim Rutt <jim at jimrutt.com>
> Subject: Re: [FRIAM] 100 billion neurons
> To: The Friday Morning Applied Complexity Coffee Group
> <friam at redfish.com>
> Message-ID: <6.2.0.14.2.20060713165629.045705c8 at mail.jimrutt.com>
> Content-Type: text/plain; charset="us-ascii"
>
> as an interesting argument that the old hardware/software argument about
> consciousness is often malformed, take a look see at:
>
>
>
> Damasio, Antonio: _The Feeling of What Happens: Body and Emotion in the
> Making of
> Consciousness_
>
>
>
>
> At 07:30 AM 7/10/2006, you wrote:
> >Back in the 1980's Hans and I had offices next to each other in the
> >Robotics Institute at Carnegie Mellon.  Over a period of a couple of
> >years we had numerous arguments about whether machines could realize
> >consciousness; whether a human mind could be transferred to a machine,
> >etc.  I remember saying that if somehow my "mind" were transferred from
> >my body to some robot--which I felt was impossible--it might be that
> >everyone else would agree that it was a remarkable likeness but that I
> >would be gone.  Hans replied that I undervalued myself--that I am
> >software not hardware.  After many arguments along these lines I said,
> >"Hans, I now understand why you don't understand what I am saying about
> >consciousness--you don't have it."  This was all in good humor and later
> >when I was teaching a course in AI to MBA students I invited Hans to
> >continue our debate in class.  A good time was had by all, I hope.
> >
> >Frank
> >
> >---
> >Frank C. Wimberly
> >140 Calle Ojo Feliz              (505) 995-8715 or (505) 670-9918 (cell)
> >Santa Fe, NM 87505           wimberly3 at earthlink.net
> >
> >-----Original Message-----
> >From: friam-bounces at redfish.com [mailto:friam-bounces at redfish.com] On
> >Behalf Of Martin C. Martin
> >Sent: Saturday, July 08, 2006 7:16 PM
> >To: The Friday Morning Applied Complexity Coffee Group
> >Subject: Re: [FRIAM] 100 billion neurons
> >
> >I suspect you'd like Hans Moravec's books:
> >
> >http://www.amazon.com/gp/product/0674576187
> >http://www.amazon.com/gp/product/0195136306
> >
> >He uses Moore's law and estimates of the brain's computing power to
> >calculate when we'll have human equivalence in "a computer."  I forget
> >the date, but it's not far.  He also talks about a number of very
> >interesting consequences of this.
> >
> >- Martin
> >
> >============================================================
> >FRIAM Applied Complexity Group listserv
> >Meets Fridays 9a-11:30 at cafe at St. John's College
> >lectures, archives, unsubscribe, maps at http://www.friam.org
> >
> >
> >============================================================
> >FRIAM Applied Complexity Group listserv
> >Meets Fridays 9a-11:30 at cafe at St. John's College
> >lectures, archives, unsubscribe, maps at http://www.friam.org
>
> ===================================
> Jim Rutt
> voice:  505-989-1115
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
/pipermail/friam_redfish.com/attachments/20060713/3f05e21d/attachment-0001.h
tml

>
> ------------------------------
>
> Message: 3
> Date: Thu, 13 Jul 2006 18:59:23 -0600
> From: "Frank Wimberly" <wimberly3 at earthlink.net>
> Subject: Re: [FRIAM] 100 billion neurons
> To: "'The Friday Morning Applied Complexity Coffee Group'"
> <friam at redfish.com>
> Message-ID: <021801c6a6e0$c072c530$0300a8c0 at franknotebook>
> Content-Type: text/plain; charset="iso-8859-1"
>
> At the risk of being neurotic, here is link to a review of Damasio's
> book:
>
> http://dir.salon.com/story/books/review/1999/09/21/damasio/index.html
>
>
> Frank
>
> ---
> Frank C. Wimberly
> 140 Calle Ojo Feliz??????????????(505) 995-8715 or (505) 670-9918 (cell)
> Santa Fe, NM 87505???????????wimberly3 at earthlink.net
> -----Original Message-----
> From: friam-bounces at redfish.com [mailto:friam-bounces at redfish.com] On
> Behalf Of Jim Rutt
> Sent: Thursday, July 13, 2006 4:58 PM
> To: The Friday Morning Applied Complexity Coffee Group
> Subject: Re: [FRIAM] 100 billion neurons
>
> as an interesting argument that the old hardware/software argument about
> consciousness is often malformed, take a look see at:
>
>
>
> Damasio, Antonio: _The Feeling of What Happens: Body and Emotion in the
> Making of
> Consciousness_
>
> ?
>
>
> At 07:30 AM 7/10/2006, you wrote:
>
> Back in the 1980's Hans and I had offices next to each other in the
> Robotics Institute at Carnegie Mellon.? Over a period of a couple of
> years we had numerous arguments about whether machines could realize
> consciousness; whether a human mind could be transferred to a machine,
> etc.? I remember saying that if somehow my "mind" were transferred from
> my body to some robot--which I felt was impossible--it might be that
> everyone else would agree that it was a remarkable likeness but that I
> would be gone.? Hans replied that I undervalued myself--that I am
> software not hardware.? After many arguments along these lines I said,
> "Hans, I now understand why you don't understand what I am saying about
> consciousness--you don't have it."? This was all in good humor and later
> when I was teaching a course in AI to MBA students I invited Hans to
> continue our debate in class.? A good time was had by all, I hope.
>
> Frank
>
> ---
> Frank C. Wimberly
> 140 Calle Ojo Feliz????????????? (505) 995-8715 or (505) 670-9918 (cell)
> Santa Fe, NM 87505?????????? wimberly3 at earthlink.net
>
> -----Original Message-----
> From: friam-bounces at redfish.com [ mailto:friam-bounces at redfish.com] On
> Behalf Of Martin C. Martin
> Sent: Saturday, July 08, 2006 7:16 PM
> To: The Friday Morning Applied Complexity Coffee Group
> Subject: Re: [FRIAM] 100 billion neurons
>
> I suspect you'd like Hans Moravec's books:
>
> http://www.amazon.com/gp/product/0674576187
> http://www.amazon.com/gp/product/0195136306
>
> He uses Moore's law and estimates of the brain's computing power to
> calculate when we'll have human equivalence in "a computer."? I forget
> the date, but it's not far.? He also talks about a number of very
> interesting consequences of this.
>
> - Martin
>
> ============================================================
> FRIAM Applied Complexity Group listserv
> Meets Fridays 9a-11:30 at cafe at St. John's College
> lectures, archives, unsubscribe, maps at http://www.friam.org 
>
>
> ============================================================
> FRIAM Applied Complexity Group listserv
> Meets Fridays 9a-11:30 at cafe at St. John's College
> lectures, archives, unsubscribe, maps at http://www.friam.org 
> ===================================
> Jim Rutt
> voice:? 505-989-1115??
>
>
>
>
> ------------------------------
>
> Message: 4
> Date: Fri, 14 Jul 2006 09:42:14 +0200
> From: "Jochen Fromm" <fromm at vs.uni-kassel.de>
> Subject: [FRIAM] Intentionality - the mark of the vital
> To: "'The Friday Morning Applied Complexity Coffee Group'"
> <friam at redfish.com>
> Message-ID: <000001c6a719$092071a0$5fda338d at Toshiba>
> Content-Type: text/plain; charset="us-ascii"
>
>  
> I have finally read the article "Intentionality is
> the mark of the vital". It contains interesting
> remarks about the mind/body problem, about the
> relationship between mental and material "substance",
> and nice illustrations (for example about lions and gnus).
> Well written.
>
> If "intentionality is the mark of the vital",
> are artificial agents with intentions the first
> step towards vital, living systems ? Agents are
> of course used in artificial life, but in the
> context of the article the question seems to
> gain new importance.
>
> -J.
> ________________________________
>
> From: Nicholas Thompson
> Sent: Monday, June 26, 2006 3:20 AM
> To: friam at redfish.com
> Subject: [FRIAM] self-consciousness
>
> For those rare few of you that are INTENSELY interested by the recent
> discussion on self consciousness, here is a paper on the subject  which
> asserts that every organism must have a point of view.  
>  
> http://home.earthlink.net/~nickthompson/id14.html
>  
>
>
>
>
> ------------------------------
>
> _______________________________________________
> Friam mailing list
> Friam at redfish.com
> http://redfish.com/mailman/listinfo/friam_redfish.com
>
>
> End of Friam Digest, Vol 37, Issue 17
> *************************************




Reply | Threaded
Open this post in threaded view
|

Intentionality is the mark of the vital

Russell Standish
I think that intentionality is a modelling property - something has
intentionality because it is useful to model a given system as if it
had a mind like ours, more useful than any other model we might have.

So we can say a computer has intentionality, if it is useful model the
machine as having a mind. This obviously depends on the software
application, and how technical the person is (someone who programs a
computer - like me - is more likely to have a machine model, rather
than mind model of a computer).

So - in terms of answering your question about whether intentionality
is the mark of the vital, I would have to answer no. I do not see much
intentional behaviour amongst simple animals (eg insects) or plants -
rather I tend to think of these as complex machine. On the contrary, to a
well designed artificial human (as in a computer game character) I will assign
intentionality, even though I know they're only the outputs of algorithms.

Cheers

On Fri, Jul 14, 2006 at 09:36:54PM -0400, Nicholas Thompson wrote:

> Jochen,
>
> Thanks for your kind response.
>
> Your question churns my head.  I was keen to argue that intentionality is a
> property not only of thinking things but of any biological thing.  But I
> never imagined that intentionality could be used as a criterion of
> vitality.  I do believe that every living system displays intentionality,
> but I now have to think about whether I think that all intentional systems
> are living.   I guess NOT.  However, my reasons for holding this belief are
> probably robotophobic.  
>
> Nick
>
>
> PS  My first response to your  question was to write the following 100
> words of baffle-gab, like the good academic I am.  It might be marginally
> interesting in and off itself, but it didnt seem to answer your question.
> I had put too much effort into it to throw it away, so I stuck it below.
> Feel free to ignore it.  
>
> BEGIN BAFFLEGAB =======================================================
>
> Intentionality is one of those words that leads to endless confusion.  It
> can refer to having an intention or it can refer to a peculiar propert to
> assertions containing verbs of mentation, wanting, thinking, feeling, etc.
> The sentence, "Jones's intention was that the books be placed on the table"
> is intentional in both senses: intentional in sense one because it tells us
> something about what Jones is up to, and intentional in the second sense
> because it displays the odd property of referential opacity.    Unlike the
> statement "the books are on the table" , the statement about Jones's
> intentions cannot be verified nor disconfirmed by gathering information
> about the location of the books.  
>
>  The two are intimately connected.  Any statement one makes about the
> intentions of others in sense one is inevitably an intensional utterance in
> sense two because the truth value of the statement lies in the organization
> of Jones's behavior, rather than whether Jones's intention is ever
> fulfilled.  
>
> It was in this second, perhaps strained, philosophic sense, that I think
> the cue relation is necessarily intentional.  When we say that C is a cue
> to X, we mean that from the point of view of the system we are interested
> in, C stands in for X.   ("In the Human respiratory system, Blood acidity
> is a cue for blood oxygenation")  To the extent that robots use cues, they
> MUST be intentional in this sense.  
>
> ===========================================================
> end  BAFFLEGAB.  
>
> Nicholas Thompson
> nickthompson at earthlink.net
> http://home.earthlink.net/~nickthompson
>
>
> > [Original Message]
> > From: <friam-request at redfish.com>
> > To: <friam at redfish.com>
> > Date: 7/14/2006 12:00:29 PM
> > Subject: Friam Digest, Vol 37, Issue 17
> >
> > Send Friam mailing list submissions to
> > friam at redfish.com
> >
> > To subscribe or unsubscribe via the World Wide Web, visit
> > http://redfish.com/mailman/listinfo/friam_redfish.com
> > or, via email, send a message with subject or body 'help' to
> > friam-request at redfish.com
> >
> > You can reach the person managing the list at
> > friam-owner at redfish.com
> >
> > When replying, please edit your Subject line so it is more specific
> > than "Re: Contents of Friam digest..."
> >
> >
> > Today's Topics:
> >
> >    1. Re: 100 billion neurons (George Duncan)
> >    2. Re: 100 billion neurons (Jim Rutt)
> >    3. Re: 100 billion neurons (Frank Wimberly)
> >    4. Intentionality - the mark of the vital (Jochen Fromm)
> >
> >
> > ----------------------------------------------------------------------
> >
> > Message: 1
> > Date: Thu, 13 Jul 2006 10:38:47 -0600
> > From: "George Duncan" <gd17 at andrew.cmu.edu>
> > Subject: Re: [FRIAM] 100 billion neurons
> > To: "The Friday Morning Applied Complexity Coffee Group"
> > <friam at redfish.com>
> > Message-ID:
> > <b9b019d10607130938n1e6e953etaa4bbd1409d24ffb at mail.gmail.com>
> > Content-Type: text/plain; charset="iso-8859-1"
> >
> > Shall this conversation be neuronic rather than neurotic?
> >
> > Or try this
> > http://www.technologyreview.com/read_article.aspx?id=17164&ch=infotech
> >
> >
> > On 7/13/06, Giles Bowkett <gilesb at gmail.com> wrote:
> > >
> > > I'm inclined to agree. The model I use is nonlinear fluid dynamics.
> > > Say you've got a thought which you began thinking when you were young.
> > > That thought is a fluid in motion. Over the course of your life you
> > > revisit certain ideas and revise certain opinions. The motion
> > > continues for decades. The way you think is like an information
> > > processing system which evolves over the course of your life, and it's
> > > true enough to call that software, not hardware, but the flow of data
> > > through that system is entirely organic, and creating an exact copy of
> > > a given flow in nonlinear fluid dynamics is impossible. The structure
> > > of your mode of thinking -- your "software" -- is shaped tremendously
> > > by the things that you think about; therefore replicating the
> > > processor without replicating the data can only be of partial
> > > usefulness, if the processor is shaped by and for the data. It's like
> > > copying a river by duplicating exactly every last rock and pebble, but
> > > leaving out the water.
> > >
> > > On 7/10/06, Frank Wimberly <wimberly3 at earthlink.net> wrote:
> > > > Back in the 1980's Hans and I had offices next to each other in the
> > > > Robotics Institute at Carnegie Mellon.  Over a period of a couple of
> > > > years we had numerous arguments about whether machines could realize
> > > > consciousness; whether a human mind could be transferred to a machine,
> > > > etc.  I remember saying that if somehow my "mind" were transferred
> from
> > > > my body to some robot--which I felt was impossible--it might be that
> > > > everyone else would agree that it was a remarkable likeness but that I
> > > > would be gone.  Hans replied that I undervalued myself--that I am
> > > > software not hardware.  After many arguments along these lines I said,
> > > > "Hans, I now understand why you don't understand what I am saying
> about
> > > > consciousness--you don't have it."  This was all in good humor and
> later
> > > > when I was teaching a course in AI to MBA students I invited Hans to
> > > > continue our debate in class.  A good time was had by all, I hope.
> > > >
> > > > Frank
> > > >
> > > > ---
> > > > Frank C. Wimberly
> > > > 140 Calle Ojo Feliz              (505) 995-8715 or (505) 670-9918
> (cell)
> > > > Santa Fe, NM 87505           wimberly3 at earthlink.net
> > > >
> > > > -----Original Message-----
> > > > From: friam-bounces at redfish.com [mailto:friam-bounces at redfish.com] On
> > > > Behalf Of Martin C. Martin
> > > > Sent: Saturday, July 08, 2006 7:16 PM
> > > > To: The Friday Morning Applied Complexity Coffee Group
> > > > Subject: Re: [FRIAM] 100 billion neurons
> > > >
> > > > I suspect you'd like Hans Moravec's books:
> > > >
> > > > http://www.amazon.com/gp/product/0674576187
> > > > http://www.amazon.com/gp/product/0195136306
> > > >
> > > > He uses Moore's law and estimates of the brain's computing power to
> > > > calculate when we'll have human equivalence in "a computer."  I forget
> > > > the date, but it's not far.  He also talks about a number of very
> > > > interesting consequences of this.
> > > >
> > > > - Martin
> > > >
> > > > ============================================================
> > > > FRIAM Applied Complexity Group listserv
> > > > Meets Fridays 9a-11:30 at cafe at St. John's College
> > > > lectures, archives, unsubscribe, maps at http://www.friam.org
> > > >
> > > >
> > > > ============================================================
> > > > FRIAM Applied Complexity Group listserv
> > > > Meets Fridays 9a-11:30 at cafe at St. John's College
> > > > lectures, archives, unsubscribe, maps at http://www.friam.org
> > > >
> > >
> > >
> > > --
> > > Giles Bowkett
> > > http://www.gilesgoatboy.org
> > >
> > > ============================================================
> > > FRIAM Applied Complexity Group listserv
> > > Meets Fridays 9a-11:30 at cafe at St. John's College
> > > lectures, archives, unsubscribe, maps at http://www.friam.org
> > >
> >
> >
> >
> > --
> > George T. Duncan
> > Professor of Statistics
> > Heinz School of Public Policy and Management
> > Carnegie Mellon University
> > Pittsburgh, PA 15213
> > (412) 268-2172
> > -------------- next part --------------
> > An HTML attachment was scrubbed...
> > URL:
> /pipermail/friam_redfish.com/attachments/20060713/bcb8105c/attachment-0001.h
> tml
> >
> > ------------------------------
> >
> > Message: 2
> > Date: Thu, 13 Jul 2006 16:57:49 -0600
> > From: Jim Rutt <jim at jimrutt.com>
> > Subject: Re: [FRIAM] 100 billion neurons
> > To: The Friday Morning Applied Complexity Coffee Group
> > <friam at redfish.com>
> > Message-ID: <6.2.0.14.2.20060713165629.045705c8 at mail.jimrutt.com>
> > Content-Type: text/plain; charset="us-ascii"
> >
> > as an interesting argument that the old hardware/software argument about
> > consciousness is often malformed, take a look see at:
> >
> >
> >
> > Damasio, Antonio: _The Feeling of What Happens: Body and Emotion in the
> > Making of
> > Consciousness_
> >
> >
> >
> >
> > At 07:30 AM 7/10/2006, you wrote:
> > >Back in the 1980's Hans and I had offices next to each other in the
> > >Robotics Institute at Carnegie Mellon.  Over a period of a couple of
> > >years we had numerous arguments about whether machines could realize
> > >consciousness; whether a human mind could be transferred to a machine,
> > >etc.  I remember saying that if somehow my "mind" were transferred from
> > >my body to some robot--which I felt was impossible--it might be that
> > >everyone else would agree that it was a remarkable likeness but that I
> > >would be gone.  Hans replied that I undervalued myself--that I am
> > >software not hardware.  After many arguments along these lines I said,
> > >"Hans, I now understand why you don't understand what I am saying about
> > >consciousness--you don't have it."  This was all in good humor and later
> > >when I was teaching a course in AI to MBA students I invited Hans to
> > >continue our debate in class.  A good time was had by all, I hope.
> > >
> > >Frank
> > >
> > >---
> > >Frank C. Wimberly
> > >140 Calle Ojo Feliz              (505) 995-8715 or (505) 670-9918 (cell)
> > >Santa Fe, NM 87505           wimberly3 at earthlink.net
> > >
> > >-----Original Message-----
> > >From: friam-bounces at redfish.com [mailto:friam-bounces at redfish.com] On
> > >Behalf Of Martin C. Martin
> > >Sent: Saturday, July 08, 2006 7:16 PM
> > >To: The Friday Morning Applied Complexity Coffee Group
> > >Subject: Re: [FRIAM] 100 billion neurons
> > >
> > >I suspect you'd like Hans Moravec's books:
> > >
> > >http://www.amazon.com/gp/product/0674576187
> > >http://www.amazon.com/gp/product/0195136306
> > >
> > >He uses Moore's law and estimates of the brain's computing power to
> > >calculate when we'll have human equivalence in "a computer."  I forget
> > >the date, but it's not far.  He also talks about a number of very
> > >interesting consequences of this.
> > >
> > >- Martin
> > >
> > >============================================================
> > >FRIAM Applied Complexity Group listserv
> > >Meets Fridays 9a-11:30 at cafe at St. John's College
> > >lectures, archives, unsubscribe, maps at http://www.friam.org
> > >
> > >
> > >============================================================
> > >FRIAM Applied Complexity Group listserv
> > >Meets Fridays 9a-11:30 at cafe at St. John's College
> > >lectures, archives, unsubscribe, maps at http://www.friam.org
> >
> > ===================================
> > Jim Rutt
> > voice:  505-989-1115
> >
> > -------------- next part --------------
> > An HTML attachment was scrubbed...
> > URL:
> /pipermail/friam_redfish.com/attachments/20060713/3f05e21d/attachment-0001.h
> tml
> >
> > ------------------------------
> >
> > Message: 3
> > Date: Thu, 13 Jul 2006 18:59:23 -0600
> > From: "Frank Wimberly" <wimberly3 at earthlink.net>
> > Subject: Re: [FRIAM] 100 billion neurons
> > To: "'The Friday Morning Applied Complexity Coffee Group'"
> > <friam at redfish.com>
> > Message-ID: <021801c6a6e0$c072c530$0300a8c0 at franknotebook>
> > Content-Type: text/plain; charset="iso-8859-1"
> >
> > At the risk of being neurotic, here is link to a review of Damasio's
> > book:
> >
> > http://dir.salon.com/story/books/review/1999/09/21/damasio/index.html
> >
> >
> > Frank
> >
> > ---
> > Frank C. Wimberly
> > 140 Calle Ojo Feliz??????????????(505) 995-8715 or (505) 670-9918 (cell)
> > Santa Fe, NM 87505???????????wimberly3 at earthlink.net
> > -----Original Message-----
> > From: friam-bounces at redfish.com [mailto:friam-bounces at redfish.com] On
> > Behalf Of Jim Rutt
> > Sent: Thursday, July 13, 2006 4:58 PM
> > To: The Friday Morning Applied Complexity Coffee Group
> > Subject: Re: [FRIAM] 100 billion neurons
> >
> > as an interesting argument that the old hardware/software argument about
> > consciousness is often malformed, take a look see at:
> >
> >
> >
> > Damasio, Antonio: _The Feeling of What Happens: Body and Emotion in the
> > Making of
> > Consciousness_
> >
> > ?
> >
> >
> > At 07:30 AM 7/10/2006, you wrote:
> >
> > Back in the 1980's Hans and I had offices next to each other in the
> > Robotics Institute at Carnegie Mellon.? Over a period of a couple of
> > years we had numerous arguments about whether machines could realize
> > consciousness; whether a human mind could be transferred to a machine,
> > etc.? I remember saying that if somehow my "mind" were transferred from
> > my body to some robot--which I felt was impossible--it might be that
> > everyone else would agree that it was a remarkable likeness but that I
> > would be gone.? Hans replied that I undervalued myself--that I am
> > software not hardware.? After many arguments along these lines I said,
> > "Hans, I now understand why you don't understand what I am saying about
> > consciousness--you don't have it."? This was all in good humor and later
> > when I was teaching a course in AI to MBA students I invited Hans to
> > continue our debate in class.? A good time was had by all, I hope.
> >
> > Frank
> >
> > ---
> > Frank C. Wimberly
> > 140 Calle Ojo Feliz????????????? (505) 995-8715 or (505) 670-9918 (cell)
> > Santa Fe, NM 87505?????????? wimberly3 at earthlink.net
> >
> > -----Original Message-----
> > From: friam-bounces at redfish.com [ mailto:friam-bounces at redfish.com] On
> > Behalf Of Martin C. Martin
> > Sent: Saturday, July 08, 2006 7:16 PM
> > To: The Friday Morning Applied Complexity Coffee Group
> > Subject: Re: [FRIAM] 100 billion neurons
> >
> > I suspect you'd like Hans Moravec's books:
> >
> > http://www.amazon.com/gp/product/0674576187
> > http://www.amazon.com/gp/product/0195136306
> >
> > He uses Moore's law and estimates of the brain's computing power to
> > calculate when we'll have human equivalence in "a computer."? I forget
> > the date, but it's not far.? He also talks about a number of very
> > interesting consequences of this.
> >
> > - Martin
> >
> > ============================================================
> > FRIAM Applied Complexity Group listserv
> > Meets Fridays 9a-11:30 at cafe at St. John's College
> > lectures, archives, unsubscribe, maps at http://www.friam.org 
> >
> >
> > ============================================================
> > FRIAM Applied Complexity Group listserv
> > Meets Fridays 9a-11:30 at cafe at St. John's College
> > lectures, archives, unsubscribe, maps at http://www.friam.org 
> > ===================================
> > Jim Rutt
> > voice:? 505-989-1115??
> >
> >
> >
> >
> > ------------------------------
> >
> > Message: 4
> > Date: Fri, 14 Jul 2006 09:42:14 +0200
> > From: "Jochen Fromm" <fromm at vs.uni-kassel.de>
> > Subject: [FRIAM] Intentionality - the mark of the vital
> > To: "'The Friday Morning Applied Complexity Coffee Group'"
> > <friam at redfish.com>
> > Message-ID: <000001c6a719$092071a0$5fda338d at Toshiba>
> > Content-Type: text/plain; charset="us-ascii"
> >
> >  
> > I have finally read the article "Intentionality is
> > the mark of the vital". It contains interesting
> > remarks about the mind/body problem, about the
> > relationship between mental and material "substance",
> > and nice illustrations (for example about lions and gnus).
> > Well written.
> >
> > If "intentionality is the mark of the vital",
> > are artificial agents with intentions the first
> > step towards vital, living systems ? Agents are
> > of course used in artificial life, but in the
> > context of the article the question seems to
> > gain new importance.
> >
> > -J.
> > ________________________________
> >
> > From: Nicholas Thompson
> > Sent: Monday, June 26, 2006 3:20 AM
> > To: friam at redfish.com
> > Subject: [FRIAM] self-consciousness
> >
> > For those rare few of you that are INTENSELY interested by the recent
> > discussion on self consciousness, here is a paper on the subject  which
> > asserts that every organism must have a point of view.  
> >  
> > http://home.earthlink.net/~nickthompson/id14.html
> >  
> >
> >
> >
> >
> > ------------------------------
> >
> > _______________________________________________
> > Friam mailing list
> > Friam at redfish.com
> > http://redfish.com/mailman/listinfo/friam_redfish.com
> >
> >
> > End of Friam Digest, Vol 37, Issue 17
> > *************************************
>
>
>
> ============================================================
> FRIAM Applied Complexity Group listserv
> Meets Fridays 9a-11:30 at cafe at St. John's College
> lectures, archives, unsubscribe, maps at http://www.friam.org

--
*PS: A number of people ask me about the attachment to my email, which
is of type "application/pgp-signature". Don't worry, it is not a
virus. It is an electronic signature, that may be used to verify this
email came from me if you have PGP or GPG installed. Otherwise, you
may safely ignore this attachment.

----------------------------------------------------------------------------
A/Prof Russell Standish                  Phone 8308 3119 (mobile)
Mathematics                               0425 253119 (")
UNSW SYDNEY 2052                 R.Standish at unsw.edu.au            
Australia                                http://parallel.hpc.unsw.edu.au/rks
            International prefix  +612, Interstate prefix 02
----------------------------------------------------------------------------



Reply | Threaded
Open this post in threaded view
|

Intentionality is the mark of the vital

Phil Henshaw-2
In reply to this post by Nick Thompson
Can we agree that trying to describe the characteristics of whole system
behavior, relating the simple to the complex, is a difficult challenge
that prompts each of us to stretch the meanings of words in ways others
feel uncomfortable with?   I think the simplest property all ordinary
whole systems have, air currents to orangutans, is loops of organization
which give operational meaning to inside & outside.  I know how to find
lots of them, but can't figure out what to call them.


Phil Henshaw                       ????.?? ? `?.????
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
680 Ft. Washington Ave
NY NY 10040                      
tel: 212-795-4844                
e-mail: pfh at synapse9.com          
explorations: www.synapse9.com    


> -----Original Message-----
> From: friam-bounces at redfish.com
> [mailto:friam-bounces at redfish.com] On Behalf Of Nicholas Thompson
> Sent: Friday, July 14, 2006 9:37 PM
> To: friam at redfish.com
> Cc: echarles
> Subject: [FRIAM] Intentionality is the mark of the vital
>
>
> Jochen,
>
> Thanks for your kind response.
>
> Your question churns my head.  I was keen to argue that
> intentionality is a property not only of thinking things but
> of any biological thing.  But I never imagined that
> intentionality could be used as a criterion of vitality.  I
> do believe that every living system displays intentionality,
> but I now have to think about whether I think that all
> intentional systems
> are living.   I guess NOT.  However, my reasons for holding
> this belief are
> probably robotophobic.  
>
> Nick
>
>
> PS  My first response to your  question was to write the
> following 100 words of baffle-gab, like the good academic I
> am.  It might be marginally interesting in and off itself,
> but it didnt seem to answer your question.
> I had put too much effort into it to throw it away, so I
> stuck it below.
> Feel free to ignore it.  
>
> BEGIN BAFFLEGAB
> =======================================================
>
> Intentionality is one of those words that leads to endless
> confusion.  It can refer to having an intention or it can
> refer to a peculiar propert to assertions containing verbs of
> mentation, wanting, thinking, feeling, etc.
> The sentence, "Jones's intention was that the books be placed
> on the table" is intentional in both senses: intentional in
> sense one because it tells us something about what Jones is
> up to, and intentional in the second sense
> because it displays the odd property of referential opacity.  
>   Unlike the
> statement "the books are on the table" , the statement about
> Jones's intentions cannot be verified nor disconfirmed by
> gathering information about the location of the books.  
>
>  The two are intimately connected.  Any statement one makes
> about the intentions of others in sense one is inevitably an
> intensional utterance in sense two because the truth value of
> the statement lies in the organization of Jones's behavior,
> rather than whether Jones's intention is ever fulfilled.  
>
> It was in this second, perhaps strained, philosophic sense,
> that I think the cue relation is necessarily intentional.  
> When we say that C is a cue to X, we mean that from the point
> of view of the system we are interested
> in, C stands in for X.   ("In the Human respiratory system,
> Blood acidity
> is a cue for blood oxygenation")  To the extent that robots
> use cues, they MUST be intentional in this sense.  
>
> ===========================================================
> end  BAFFLEGAB.  
>
> Nicholas Thompson
> nickthompson at earthlink.net http://home.earthlink.net/~nickthompson
>
>
> > [Original Message]
> > From: <friam-request at redfish.com>
> > To: <friam at redfish.com>
> > Date: 7/14/2006 12:00:29 PM
> > Subject: Friam Digest, Vol 37, Issue 17
> >
> > Send Friam mailing list submissions to
> > friam at redfish.com
> >
> > To subscribe or unsubscribe via the World Wide Web, visit
> > http://redfish.com/mailman/listinfo/friam_redfish.com
> > or, via email, send a message with subject or body 'help' to
> > friam-request at redfish.com
> >
> > You can reach the person managing the list at
> > friam-owner at redfish.com
> >
> > When replying, please edit your Subject line so it is more specific
> > than "Re: Contents of Friam digest..."
> >
> >
> > Today's Topics:
> >
> >    1. Re: 100 billion neurons (George Duncan)
> >    2. Re: 100 billion neurons (Jim Rutt)
> >    3. Re: 100 billion neurons (Frank Wimberly)
> >    4. Intentionality - the mark of the vital (Jochen Fromm)
> >
> >
> >
> ----------------------------------------------------------------------
> >
> > Message: 1
> > Date: Thu, 13 Jul 2006 10:38:47 -0600
> > From: "George Duncan" <gd17 at andrew.cmu.edu>
> > Subject: Re: [FRIAM] 100 billion neurons
> > To: "The Friday Morning Applied Complexity Coffee Group"
> > <friam at redfish.com>
> > Message-ID:
> > <b9b019d10607130938n1e6e953etaa4bbd1409d24ffb at mail.gmail.com>
> > Content-Type: text/plain; charset="iso-8859-1"
> >
> > Shall this conversation be neuronic rather than neurotic?
> >
> > Or try this
> >
> http://www.technologyreview.com/read_article.aspx?id=17164&ch=infotech
> >
> >
> > On 7/13/06, Giles Bowkett <gilesb at gmail.com> wrote:
> > >
> > > I'm inclined to agree. The model I use is nonlinear fluid
> dynamics.
> > > Say you've got a thought which you began thinking when you were
> > > young. That thought is a fluid in motion. Over the course of your
> > > life you revisit certain ideas and revise certain opinions. The
> > > motion continues for decades. The way you think is like an
> > > information processing system which evolves over the
> course of your
> > > life, and it's true enough to call that software, not
> hardware, but
> > > the flow of data through that system is entirely organic, and
> > > creating an exact copy of a given flow in nonlinear fluid
> dynamics
> > > is impossible. The structure of your mode of thinking -- your
> > > "software" -- is shaped tremendously by the things that you think
> > > about; therefore replicating the processor without
> replicating the
> > > data can only be of partial usefulness, if the processor
> is shaped
> > > by and for the data. It's like copying a river by duplicating
> > > exactly every last rock and pebble, but leaving out the water.
> > >
> > > On 7/10/06, Frank Wimberly <wimberly3 at earthlink.net> wrote:
> > > > Back in the 1980's Hans and I had offices next to each other in
> > > > the Robotics Institute at Carnegie Mellon.  Over a period of a
> > > > couple of years we had numerous arguments about whether
> machines
> > > > could realize consciousness; whether a human mind could be
> > > > transferred to a machine, etc.  I remember saying that
> if somehow
> > > > my "mind" were transferred
> from
> > > > my body to some robot--which I felt was impossible--it might be
> > > > that everyone else would agree that it was a remarkable
> likeness
> > > > but that I would be gone.  Hans replied that I undervalued
> > > > myself--that I am software not hardware.  After many arguments
> > > > along these lines I said, "Hans, I now understand why you don't
> > > > understand what I am saying
> about
> > > > consciousness--you don't have it."  This was all in
> good humor and
> later
> > > > when I was teaching a course in AI to MBA students I
> invited Hans
> > > > to continue our debate in class.  A good time was had by all, I
> > > > hope.
> > > >
> > > > Frank
> > > >
> > > > ---
> > > > Frank C. Wimberly
> > > > 140 Calle Ojo Feliz              (505) 995-8715 or
> (505) 670-9918
> (cell)
> > > > Santa Fe, NM 87505           wimberly3 at earthlink.net
> > > >
> > > > -----Original Message-----
> > > > From: friam-bounces at redfish.com
> [mailto:friam-bounces at redfish.com]
> > > > On Behalf Of Martin C. Martin
> > > > Sent: Saturday, July 08, 2006 7:16 PM
> > > > To: The Friday Morning Applied Complexity Coffee Group
> > > > Subject: Re: [FRIAM] 100 billion neurons
> > > >
> > > > I suspect you'd like Hans Moravec's books:
> > > >
> > > > http://www.amazon.com/gp/product/0674576187
> > > > http://www.amazon.com/gp/product/0195136306
> > > >
> > > > He uses Moore's law and estimates of the brain's
> computing power
> > > > to calculate when we'll have human equivalence in "a
> computer."  I
> > > > forget the date, but it's not far.  He also talks about
> a number
> > > > of very interesting consequences of this.
> > > >
> > > > - Martin
> > > >
> > > > ============================================================
> > > > FRIAM Applied Complexity Group listserv
> > > > Meets Fridays 9a-11:30 at cafe at St. John's College lectures,
> > > > archives, unsubscribe, maps at http://www.friam.org
> > > >
> > > >
> > > > ============================================================
> > > > FRIAM Applied Complexity Group listserv
> > > > Meets Fridays 9a-11:30 at cafe at St. John's College lectures,
> > > > archives, unsubscribe, maps at http://www.friam.org
> > > >
> > >
> > >
> > > --
> > > Giles Bowkett
> > > http://www.gilesgoatboy.org
> > >
> > > ============================================================
> > > FRIAM Applied Complexity Group listserv
> > > Meets Fridays 9a-11:30 at cafe at St. John's College lectures,
> > > archives, unsubscribe, maps at http://www.friam.org
> > >
> >
> >
> >
> > --
> > George T. Duncan
> > Professor of Statistics
> > Heinz School of Public Policy and Management
> > Carnegie Mellon University
> > Pittsburgh, PA 15213
> > (412) 268-2172
> > -------------- next part --------------
> > An HTML attachment was scrubbed...
> > URL:
> /pipermail/friam_redfish.com/attachments/20060713/bcb8105c/att
> achment-0001.h
> tml
> >
> > ------------------------------
> >
> > Message: 2
> > Date: Thu, 13 Jul 2006 16:57:49 -0600
> > From: Jim Rutt <jim at jimrutt.com>
> > Subject: Re: [FRIAM] 100 billion neurons
> > To: The Friday Morning Applied Complexity Coffee Group
> > <friam at redfish.com>
> > Message-ID: <6.2.0.14.2.20060713165629.045705c8 at mail.jimrutt.com>
> > Content-Type: text/plain; charset="us-ascii"
> >
> > as an interesting argument that the old hardware/software argument
> > about
> > consciousness is often malformed, take a look see at:
> >
> >
> >
> > Damasio, Antonio: _The Feeling of What Happens: Body and Emotion in
> > the
> > Making of
> > Consciousness_
> >
> >
> >
> >
> > At 07:30 AM 7/10/2006, you wrote:
> > >Back in the 1980's Hans and I had offices next to each
> other in the
> > >Robotics Institute at Carnegie Mellon.  Over a period of a
> couple of
> > >years we had numerous arguments about whether machines
> could realize
> > >consciousness; whether a human mind could be transferred to a
> > >machine, etc.  I remember saying that if somehow my "mind" were
> > >transferred from my body to some robot--which I felt was
> > >impossible--it might be that everyone else would agree
> that it was a
> > >remarkable likeness but that I would be gone.  Hans replied that I
> > >undervalued myself--that I am software not hardware.  After many
> > >arguments along these lines I said, "Hans, I now
> understand why you
> > >don't understand what I am saying about consciousness--you
> don't have
> > >it."  This was all in good humor and later when I was teaching a
> > >course in AI to MBA students I invited Hans to continue
> our debate in
> > >class.  A good time was had by all, I hope.
> > >
> > >Frank
> > >
> > >---
> > >Frank C. Wimberly
> > >140 Calle Ojo Feliz              (505) 995-8715 or (505)
> 670-9918 (cell)
> > >Santa Fe, NM 87505           wimberly3 at earthlink.net
> > >
> > >-----Original Message-----
> > >From: friam-bounces at redfish.com
> [mailto:friam-bounces at redfish.com] On
> > >Behalf Of Martin C. Martin
> > >Sent: Saturday, July 08, 2006 7:16 PM
> > >To: The Friday Morning Applied Complexity Coffee Group
> > >Subject: Re: [FRIAM] 100 billion neurons
> > >
> > >I suspect you'd like Hans Moravec's books:
> > >
> > >http://www.amazon.com/gp/product/0674576187
> > >http://www.amazon.com/gp/product/0195136306
> > >
> > >He uses Moore's law and estimates of the brain's computing
> power to
> > >calculate when we'll have human equivalence in "a computer."  I
> > >forget the date, but it's not far.  He also talks about a
> number of
> > >very interesting consequences of this.
> > >
> > >- Martin
> > >
> > >============================================================
> > >FRIAM Applied Complexity Group listserv
> > >Meets Fridays 9a-11:30 at cafe at St. John's College lectures,
> > >archives, unsubscribe, maps at http://www.friam.org
> > >
> > >
> > >============================================================
> > >FRIAM Applied Complexity Group listserv
> > >Meets Fridays 9a-11:30 at cafe at St. John's College lectures,
> > >archives, unsubscribe, maps at http://www.friam.org
> >
> > ===================================
> > Jim Rutt
> > voice:  505-989-1115
> >
> > -------------- next part --------------
> > An HTML attachment was scrubbed...
> > URL:
> /pipermail/friam_redfish.com/attachments/20060713/3f05e21d/att
> achment-0001.h
> tml
> >
> > ------------------------------
> >
> > Message: 3
> > Date: Thu, 13 Jul 2006 18:59:23 -0600
> > From: "Frank Wimberly" <wimberly3 at earthlink.net>
> > Subject: Re: [FRIAM] 100 billion neurons
> > To: "'The Friday Morning Applied Complexity Coffee Group'"
> > <friam at redfish.com>
> > Message-ID: <021801c6a6e0$c072c530$0300a8c0 at franknotebook>
> > Content-Type: text/plain; charset="iso-8859-1"
> >
> > At the risk of being neurotic, here is link to a review of Damasio's
> > book:
> >
> >
> http://dir.salon.com/story/books/review/1999/09/21/damasio/index.html
> >
> >
> > Frank
> >
> > ---
> > Frank C. Wimberly
> > 140 Calle Ojo Feliz??????????????(505) 995-8715 or (505) 670-9918
> > (cell) Santa Fe, NM 87505???????????wimberly3 at earthlink.net
> > -----Original Message-----
> > From: friam-bounces at redfish.com
> [mailto:friam-bounces at redfish.com] On
> > Behalf Of Jim Rutt
> > Sent: Thursday, July 13, 2006 4:58 PM
> > To: The Friday Morning Applied Complexity Coffee Group
> > Subject: Re: [FRIAM] 100 billion neurons
> >
> > as an interesting argument that the old hardware/software argument
> > about consciousness is often malformed, take a look see at:
> >
> >
> >
> > Damasio, Antonio: _The Feeling of What Happens: Body and Emotion in
> > the Making of Consciousness_
> >
> > ?
> >
> >
> > At 07:30 AM 7/10/2006, you wrote:
> >
> > Back in the 1980's Hans and I had offices next to each other in the
> > Robotics Institute at Carnegie Mellon.? Over a period of a
> couple of
> > years we had numerous arguments about whether machines
> could realize
> > consciousness; whether a human mind could be transferred to
> a machine,
> > etc.? I remember saying that if somehow my "mind" were transferred
> > from my body to some robot--which I felt was impossible--it
> might be
> > that everyone else would agree that it was a remarkable
> likeness but
> > that I would be gone.? Hans replied that I undervalued
> myself--that I
> > am software not hardware.? After many arguments along these lines I
> > said, "Hans, I now understand why you don't understand what I am
> > saying about consciousness--you don't have it."? This was
> all in good
> > humor and later when I was teaching a course in AI to MBA
> students I
> > invited Hans to continue our debate in class.? A good time
> was had by
> > all, I hope.
> >
> > Frank
> >
> > ---
> > Frank C. Wimberly
> > 140 Calle Ojo Feliz????????????? (505) 995-8715 or (505) 670-9918
> > (cell) Santa Fe, NM 87505?????????? wimberly3 at earthlink.net
> >
> > -----Original Message-----
> > From: friam-bounces at redfish.com [
> mailto:friam-bounces at redfish.com] On
> > Behalf Of Martin C. Martin
> > Sent: Saturday, July 08, 2006 7:16 PM
> > To: The Friday Morning Applied Complexity Coffee Group
> > Subject: Re: [FRIAM] 100 billion neurons
> >
> > I suspect you'd like Hans Moravec's books:
> >
> > http://www.amazon.com/gp/product/0674576187
> > http://www.amazon.com/gp/product/0195136306
> >
> > He uses Moore's law and estimates of the brain's computing power to
> > calculate when we'll have human equivalence in "a
> computer."? I forget
> > the date, but it's not far.? He also talks about a number of very
> > interesting consequences of this.
> >
> > - Martin
> >
> > ============================================================
> > FRIAM Applied Complexity Group listserv
> > Meets Fridays 9a-11:30 at cafe at St. John's College lectures,
> > archives, unsubscribe, maps at http://www.friam.org
> >
> >
> > ============================================================
> > FRIAM Applied Complexity Group listserv
> > Meets Fridays 9a-11:30 at cafe at St. John's College lectures,
> > archives, unsubscribe, maps at http://www.friam.org 
> > ===================================
> > Jim Rutt
> > voice:? 505-989-1115??
> >
> >
> >
> >
> > ------------------------------
> >
> > Message: 4
> > Date: Fri, 14 Jul 2006 09:42:14 +0200
> > From: "Jochen Fromm" <fromm at vs.uni-kassel.de>
> > Subject: [FRIAM] Intentionality - the mark of the vital
> > To: "'The Friday Morning Applied Complexity Coffee Group'"
> > <friam at redfish.com>
> > Message-ID: <000001c6a719$092071a0$5fda338d at Toshiba>
> > Content-Type: text/plain; charset="us-ascii"
> >
> >  
> > I have finally read the article "Intentionality is
> > the mark of the vital". It contains interesting
> > remarks about the mind/body problem, about the
> > relationship between mental and material "substance",
> > and nice illustrations (for example about lions and gnus).
> > Well written.
> >
> > If "intentionality is the mark of the vital",
> > are artificial agents with intentions the first
> > step towards vital, living systems ? Agents are
> > of course used in artificial life, but in the
> > context of the article the question seems to
> > gain new importance.
> >
> > -J.
> > ________________________________
> >
> > From: Nicholas Thompson
> > Sent: Monday, June 26, 2006 3:20 AM
> > To: friam at redfish.com
> > Subject: [FRIAM] self-consciousness
> >
> > For those rare few of you that are INTENSELY interested by
> the recent
> > discussion on self consciousness, here is a paper on the subject  
> > which asserts that every organism must have a point of view.
> >  
> > http://home.earthlink.net/~nickthompson/id14.html
> >  
> >
> >
> >
> >
> > ------------------------------
> >
> > _______________________________________________
> > Friam mailing list
> > Friam at redfish.com
> > http://redfish.com/mailman/listinfo/friam_redfish.com
> >
> >
> > End of Friam Digest, Vol 37, Issue 17
> > *************************************
>
>
>
> ============================================================
> FRIAM Applied Complexity Group listserv
> Meets Fridays 9a-11:30 at cafe at St. John's College
> lectures, archives, unsubscribe, maps at http://www.friam.org
>
>




Reply | Threaded
Open this post in threaded view
|

Intentionality is the mark of the vital

Phil Henshaw-2
In reply to this post by Russell Standish
Russ,

I think switching to a subject that's simpler to model, (in this case
studying computers as a stand-in for studying brains) works well for
lots of things, when the new subject is an approximation of the
original.  Are brains and computers comparable subjects?  I think we
don't model brains because the problem can't be defined yet, not because
computers are a good analog.

Nick's proposition [paraphrasing] was doesn't 'intentionality' in living
systems (homing in on goal) mean we're no different from thermostats,
and mine was sort of the same thing backwards, asking if the biggest
step to 'consciousness' isn't actually for a system to act as a whole
simply for having internal loops.  The basic language abuse required for
both is both a valid complaint and key to appreciating the difficulty of
the problem.  The exercise is to try to frame a scale across all forms
of systems that has a place for everything.

Computers are chock full of homing devices and loops, but I don't think
they make it onto the 'vitality' scale.  Some of the basic differences
from 'wet ware' seem like they might be overcome using with virtual
models so that it appears that a whole environment is full of
independent actors.  That means comparing physical systems to mental
categories with lists of quite certain mathematical properties.  

What I see is that natural systems are nature's work-around for
absolutely massive improbability.  One thing that shows it is how living
things seem to be modeled on free enterprise (absent a few human
adaptations).  In living systems independent parts connect to each other
by free exchange, in open environments, through the bloodstream, within
cellular fluid, etc., communicating by nothing more reliable than
sending messages in a bottle.  I think clearly that's why we abandoned
the real thing for a virtual model, because modeling the former is just
nowhere near the fun.   Apparently the most efficient way for a liver
cell to talk to a hair cell is for them both to dump whatever they're
done with and grab whatever comes along...!  :)



On Behalf Of Russell Standish

> Sent: Thursday, July 13, 2006 3:12 PM
> To: nickthompson at earthlink.net; The Friday Morning Applied
> Complexity Coffee Group
> Cc: echarles
> Subject: Re: [FRIAM] Intentionality is the mark of the vital
>
>
> I think that intentionality is a modelling property -
> something has intentionality because it is useful to model a
> given system as if it had a mind like ours, more useful than
> any other model we might have.
>
> So we can say a computer has intentionality, if it is useful
> model the machine as having a mind. This obviously depends on
> the software application, and how technical the person is
> (someone who programs a computer - like me - is more likely
> to have a machine model, rather than mind model of a computer).
>
> So - in terms of answering your question about whether
> intentionality is the mark of the vital, I would have to
> answer no. I do not see much intentional behaviour amongst
> simple animals (eg insects) or plants - rather I tend to
> think of these as complex machine. On the contrary, to a well
> designed artificial human (as in a computer game character) I
> will assign intentionality, even though I know they're only
> the outputs of algorithms.
>
> Cheers
>
> On Fri, Jul 14, 2006 at 09:36:54PM -0400, Nicholas Thompson wrote:
> > Jochen,
> >
> > Thanks for your kind response.
> >
> > Your question churns my head.  I was keen to argue that
> intentionality
> > is a property not only of thinking things but of any
> biological thing.  
> > But I never imagined that intentionality could be used as a
> criterion
> > of vitality.  I do believe that every living system displays
> > intentionality, but I now have to think about whether I
> think that all intentional systems
> > are living.   I guess NOT.  However, my reasons for holding
> this belief are
> > probably robotophobic.
> >
> > Nick
> >
> >
> > PS  My first response to your  question was to write the
> following 100
> > words of baffle-gab, like the good academic I am.  It might be
> > marginally interesting in and off itself, but it didnt seem
> to answer
> > your question. I had put too much effort into it to throw
> it away, so
> > I stuck it below. Feel free to ignore it.
> >
> > BEGIN BAFFLEGAB
> > =======================================================
> >
> > Intentionality is one of those words that leads to endless
> confusion.  
> > It can refer to having an intention or it can refer to a peculiar
> > propert to assertions containing verbs of mentation, wanting,
> > thinking, feeling, etc. The sentence, "Jones's intention
> was that the
> > books be placed on the table" is intentional in both senses:
> > intentional in sense one because it tells us something
> about what Jones is up to, and intentional in the second sense
> > because it displays the odd property of referential
> opacity.    Unlike the
> > statement "the books are on the table" , the statement
> about Jones's
> > intentions cannot be verified nor disconfirmed by gathering
> > information about the location of the books.
> >
> >  The two are intimately connected.  Any statement one makes
> about the
> > intentions of others in sense one is inevitably an intensional
> > utterance in sense two because the truth value of the
> statement lies
> > in the organization of Jones's behavior, rather than
> whether Jones's
> > intention is ever fulfilled.
> >
> > It was in this second, perhaps strained, philosophic sense, that I
> > think the cue relation is necessarily intentional.  When we
> say that C
> > is a cue to X, we mean that from the point of view of the
> system we are interested
> > in, C stands in for X.   ("In the Human respiratory system,
> Blood acidity
> > is a cue for blood oxygenation")  To the extent that robots
> use cues,
> > they MUST be intentional in this sense.
> >
> > ===========================================================
> > end  BAFFLEGAB.
> >
> > Nicholas Thompson
> > nickthompson at earthlink.net http://home.earthlink.net/~nickthompson
> >
> >
> > > [Original Message]
> > > From: <friam-request at redfish.com>
> > > To: <friam at redfish.com>
> > > Date: 7/14/2006 12:00:29 PM
> > > Subject: Friam Digest, Vol 37, Issue 17
> > >
> > > Send Friam mailing list submissions to
> > > friam at redfish.com
> > >
> > > To subscribe or unsubscribe via the World Wide Web, visit
> > > http://redfish.com/mailman/listinfo/friam_redfish.com
> > > or, via email, send a message with subject or body 'help' to
> > > friam-request at redfish.com
> > >
> > > You can reach the person managing the list at
> > > friam-owner at redfish.com
> > >
> > > When replying, please edit your Subject line so it is
> more specific
> > > than "Re: Contents of Friam digest..."
> > >
> > >
> > > Today's Topics:
> > >
> > >    1. Re: 100 billion neurons (George Duncan)
> > >    2. Re: 100 billion neurons (Jim Rutt)
> > >    3. Re: 100 billion neurons (Frank Wimberly)
> > >    4. Intentionality - the mark of the vital (Jochen Fromm)
> > >
> > >
> > >
> --------------------------------------------------------------------
> > > --
> > >
> > > Message: 1
> > > Date: Thu, 13 Jul 2006 10:38:47 -0600
> > > From: "George Duncan" <gd17 at andrew.cmu.edu>
> > > Subject: Re: [FRIAM] 100 billion neurons
> > > To: "The Friday Morning Applied Complexity Coffee Group"
> > > <friam at redfish.com>
> > > Message-ID:
> > > <b9b019d10607130938n1e6e953etaa4bbd1409d24ffb at mail.gmail.com>
> > > Content-Type: text/plain; charset="iso-8859-1"
> > >
> > > Shall this conversation be neuronic rather than neurotic?
> > >
> > > Or try this
> > >
> http://www.technologyreview.com/read_article.aspx?id=17164&ch=infote
> > > ch
> > >
> > >
> > > On 7/13/06, Giles Bowkett <gilesb at gmail.com> wrote:
> > > >
> > > > I'm inclined to agree. The model I use is nonlinear fluid
> > > > dynamics. Say you've got a thought which you began
> thinking when
> > > > you were young. That thought is a fluid in motion. Over
> the course
> > > > of your life you revisit certain ideas and revise certain
> > > > opinions. The motion continues for decades. The way you
> think is
> > > > like an information processing system which evolves over the
> > > > course of your life, and it's true enough to call that
> software,
> > > > not hardware, but the flow of data through that system
> is entirely
> > > > organic, and creating an exact copy of a given flow in
> nonlinear
> > > > fluid dynamics is impossible. The structure of your mode of
> > > > thinking -- your "software" -- is shaped tremendously by the
> > > > things that you think about; therefore replicating the
> processor
> > > > without replicating the data can only be of partial
> usefulness, if
> > > > the processor is shaped by and for the data. It's like
> copying a
> > > > river by duplicating exactly every last rock and pebble, but
> > > > leaving out the water.
> > > >
> > > > On 7/10/06, Frank Wimberly <wimberly3 at earthlink.net> wrote:
> > > > > Back in the 1980's Hans and I had offices next to
> each other in
> > > > > the Robotics Institute at Carnegie Mellon.  Over a
> period of a
> > > > > couple of years we had numerous arguments about
> whether machines
> > > > > could realize consciousness; whether a human mind could be
> > > > > transferred to a machine, etc.  I remember saying that if
> > > > > somehow my "mind" were transferred
> > from
> > > > > my body to some robot--which I felt was
> impossible--it might be
> > > > > that everyone else would agree that it was a
> remarkable likeness
> > > > > but that I would be gone.  Hans replied that I undervalued
> > > > > myself--that I am software not hardware.  After many
> arguments
> > > > > along these lines I said, "Hans, I now understand why
> you don't
> > > > > understand what I am saying
> > about
> > > > > consciousness--you don't have it."  This was all in
> good humor
> > > > > and
> > later
> > > > > when I was teaching a course in AI to MBA students I invited
> > > > > Hans to continue our debate in class.  A good time was had by
> > > > > all, I hope.
> > > > >
> > > > > Frank
> > > > >
> > > > > ---
> > > > > Frank C. Wimberly
> > > > > 140 Calle Ojo Feliz              (505) 995-8715 or
> (505) 670-9918
> > (cell)
> > > > > Santa Fe, NM 87505           wimberly3 at earthlink.net
> > > > >
> > > > > -----Original Message-----
> > > > > From: friam-bounces at redfish.com
> > > > > [mailto:friam-bounces at redfish.com] On Behalf Of
> Martin C. Martin
> > > > > Sent: Saturday, July 08, 2006 7:16 PM
> > > > > To: The Friday Morning Applied Complexity Coffee Group
> > > > > Subject: Re: [FRIAM] 100 billion neurons
> > > > >
> > > > > I suspect you'd like Hans Moravec's books:
> > > > >
> > > > > http://www.amazon.com/gp/product/0674576187
> > > > > http://www.amazon.com/gp/product/0195136306
> > > > >
> > > > > He uses Moore's law and estimates of the brain's
> computing power
> > > > > to calculate when we'll have human equivalence in "a
> computer."  
> > > > > I forget the date, but it's not far.  He also talks about a
> > > > > number of very interesting consequences of this.
> > > > >
> > > > > - Martin
> > > > >
> > > > > ============================================================
> > > > > FRIAM Applied Complexity Group listserv
> > > > > Meets Fridays 9a-11:30 at cafe at St. John's College
> lectures,
> > > > > archives, unsubscribe, maps at http://www.friam.org
> > > > >
> > > > >
> > > > > ============================================================
> > > > > FRIAM Applied Complexity Group listserv
> > > > > Meets Fridays 9a-11:30 at cafe at St. John's College
> lectures,
> > > > > archives, unsubscribe, maps at http://www.friam.org
> > > > >
> > > >
> > > >
> > > > --
> > > > Giles Bowkett
> > > > http://www.gilesgoatboy.org
> > > >
> > > > ============================================================
> > > > FRIAM Applied Complexity Group listserv
> > > > Meets Fridays 9a-11:30 at cafe at St. John's College lectures,
> > > > archives, unsubscribe, maps at http://www.friam.org
> > > >
> > >
> > >
> > >
> > > --
> > > George T. Duncan
> > > Professor of Statistics
> > > Heinz School of Public Policy and Management
> > > Carnegie Mellon University
> > > Pittsburgh, PA 15213
> > > (412) 268-2172
> > > -------------- next part --------------
> > > An HTML attachment was scrubbed...
> > > URL:
> >
> /pipermail/friam_redfish.com/attachments/20060713/bcb8105c/attachment-
> > 0001.h
> > tml
> > >
> > > ------------------------------
> > >
> > > Message: 2
> > > Date: Thu, 13 Jul 2006 16:57:49 -0600
> > > From: Jim Rutt <jim at jimrutt.com>
> > > Subject: Re: [FRIAM] 100 billion neurons
> > > To: The Friday Morning Applied Complexity Coffee Group
> > > <friam at redfish.com>
> > > Message-ID: <6.2.0.14.2.20060713165629.045705c8 at mail.jimrutt.com>
> > > Content-Type: text/plain; charset="us-ascii"
> > >
> > > as an interesting argument that the old hardware/software
> argument
> > > about
> > > consciousness is often malformed, take a look see at:
> > >
> > >
> > >
> > > Damasio, Antonio: _The Feeling of What Happens: Body and
> Emotion in
> > > the
> > > Making of
> > > Consciousness_
> > >
> > >
> > >
> > >
> > > At 07:30 AM 7/10/2006, you wrote:
> > > >Back in the 1980's Hans and I had offices next to each
> other in the
> > > >Robotics Institute at Carnegie Mellon.  Over a period of
> a couple
> > > >of years we had numerous arguments about whether machines could
> > > >realize consciousness; whether a human mind could be
> transferred to
> > > >a machine, etc.  I remember saying that if somehow my
> "mind" were
> > > >transferred from my body to some robot--which I felt was
> > > >impossible--it might be that everyone else would agree
> that it was
> > > >a remarkable likeness but that I would be gone.  Hans
> replied that
> > > >I undervalued myself--that I am software not hardware.  
> After many
> > > >arguments along these lines I said, "Hans, I now
> understand why you
> > > >don't understand what I am saying about consciousness--you don't
> > > >have it."  This was all in good humor and later when I
> was teaching
> > > >a course in AI to MBA students I invited Hans to continue our
> > > >debate in class.  A good time was had by all, I hope.
> > > >
> > > >Frank
> > > >
> > > >---
> > > >Frank C. Wimberly
> > > >140 Calle Ojo Feliz              (505) 995-8715 or (505)
> 670-9918 (cell)
> > > >Santa Fe, NM 87505           wimberly3 at earthlink.net
> > > >
> > > >-----Original Message-----
> > > >From: friam-bounces at redfish.com
> [mailto:friam-bounces at redfish.com]
> > > >On Behalf Of Martin C. Martin
> > > >Sent: Saturday, July 08, 2006 7:16 PM
> > > >To: The Friday Morning Applied Complexity Coffee Group
> > > >Subject: Re: [FRIAM] 100 billion neurons
> > > >
> > > >I suspect you'd like Hans Moravec's books:
> > > >
> > > >http://www.amazon.com/gp/product/0674576187
> > > >http://www.amazon.com/gp/product/0195136306
> > > >
> > > >He uses Moore's law and estimates of the brain's
> computing power to
> > > >calculate when we'll have human equivalence in "a computer."  I
> > > >forget the date, but it's not far.  He also talks about
> a number of
> > > >very interesting consequences of this.
> > > >
> > > >- Martin
> > > >
> > > >============================================================
> > > >FRIAM Applied Complexity Group listserv
> > > >Meets Fridays 9a-11:30 at cafe at St. John's College lectures,
> > > >archives, unsubscribe, maps at http://www.friam.org
> > > >
> > > >
> > > >============================================================
> > > >FRIAM Applied Complexity Group listserv
> > > >Meets Fridays 9a-11:30 at cafe at St. John's College lectures,
> > > >archives, unsubscribe, maps at http://www.friam.org
> > >
> > > ===================================
> > > Jim Rutt
> > > voice:  505-989-1115
> > >
> > > -------------- next part --------------
> > > An HTML attachment was scrubbed...
> > > URL:
> >
> /pipermail/friam_redfish.com/attachments/20060713/3f05e21d/attachment-
> > 0001.h
> > tml
> > >
> > > ------------------------------
> > >
> > > Message: 3
> > > Date: Thu, 13 Jul 2006 18:59:23 -0600
> > > From: "Frank Wimberly" <wimberly3 at earthlink.net>
> > > Subject: Re: [FRIAM] 100 billion neurons
> > > To: "'The Friday Morning Applied Complexity Coffee Group'"
> > > <friam at redfish.com>
> > > Message-ID: <021801c6a6e0$c072c530$0300a8c0 at franknotebook>
> > > Content-Type: text/plain; charset="iso-8859-1"
> > >
> > > At the risk of being neurotic, here is link to a review
> of Damasio's
> > > book:
> > >
> > >
> http://dir.salon.com/story/books/review/1999/09/21/damasio/index.htm
> > > l
> > >
> > >
> > > Frank
> > >
> > > ---
> > > Frank C. Wimberly
> > > 140 Calle Ojo Feliz??????????????(505) 995-8715 or (505) 670-9918
> > > (cell) Santa Fe, NM 87505???????????wimberly3 at earthlink.net
> > > -----Original Message-----
> > > From: friam-bounces at redfish.com
> [mailto:friam-bounces at redfish.com]
> > > On Behalf Of Jim Rutt
> > > Sent: Thursday, July 13, 2006 4:58 PM
> > > To: The Friday Morning Applied Complexity Coffee Group
> > > Subject: Re: [FRIAM] 100 billion neurons
> > >
> > > as an interesting argument that the old hardware/software
> argument
> > > about consciousness is often malformed, take a look see at:
> > >
> > >
> > >
> > > Damasio, Antonio: _The Feeling of What Happens: Body and
> Emotion in
> > > the Making of Consciousness_
> > >
> > > ?
> > >
> > >
> > > At 07:30 AM 7/10/2006, you wrote:
> > >
> > > Back in the 1980's Hans and I had offices next to each
> other in the
> > > Robotics Institute at Carnegie Mellon.? Over a period of
> a couple of
> > > years we had numerous arguments about whether machines
> could realize
> > > consciousness; whether a human mind could be transferred to a
> > > machine, etc.? I remember saying that if somehow my "mind" were
> > > transferred from my body to some robot--which I felt was
> > > impossible--it might be that everyone else would agree
> that it was a
> > > remarkable likeness but that I would be gone.? Hans
> replied that I
> > > undervalued myself--that I am software not hardware.? After many
> > > arguments along these lines I said, "Hans, I now
> understand why you
> > > don't understand what I am saying about consciousness--you don't
> > > have it."? This was all in good humor and later when I
> was teaching
> > > a course in AI to MBA students I invited Hans to continue
> our debate
> > > in class.? A good time was had by all, I hope.
> > >
> > > Frank
> > >
> > > ---
> > > Frank C. Wimberly
> > > 140 Calle Ojo Feliz????????????? (505) 995-8715 or (505) 670-9918
> > > (cell) Santa Fe, NM 87505?????????? wimberly3 at earthlink.net
> > >
> > > -----Original Message-----
> > > From: friam-bounces at redfish.com [
> mailto:friam-bounces at redfish.com]
> > > On Behalf Of Martin C. Martin
> > > Sent: Saturday, July 08, 2006 7:16 PM
> > > To: The Friday Morning Applied Complexity Coffee Group
> > > Subject: Re: [FRIAM] 100 billion neurons
> > >
> > > I suspect you'd like Hans Moravec's books:
> > >
> > > http://www.amazon.com/gp/product/0674576187
> > > http://www.amazon.com/gp/product/0195136306
> > >
> > > He uses Moore's law and estimates of the brain's
> computing power to
> > > calculate when we'll have human equivalence in "a
> computer."? I forget
> > > the date, but it's not far.? He also talks about a number of very
> > > interesting consequences of this.
> > >
> > > - Martin
> > >
> > > ============================================================
> > > FRIAM Applied Complexity Group listserv
> > > Meets Fridays 9a-11:30 at cafe at St. John's College lectures,
> > > archives, unsubscribe, maps at http://www.friam.org
> > >
> > >
> > > ============================================================
> > > FRIAM Applied Complexity Group listserv
> > > Meets Fridays 9a-11:30 at cafe at St. John's College lectures,
> > > archives, unsubscribe, maps at http://www.friam.org 
> > > ===================================
> > > Jim Rutt
> > > voice:? 505-989-1115??
> > >
> > >
> > >
> > >
> > > ------------------------------
> > >
> > > Message: 4
> > > Date: Fri, 14 Jul 2006 09:42:14 +0200
> > > From: "Jochen Fromm" <fromm at vs.uni-kassel.de>
> > > Subject: [FRIAM] Intentionality - the mark of the vital
> > > To: "'The Friday Morning Applied Complexity Coffee Group'"
> > > <friam at redfish.com>
> > > Message-ID: <000001c6a719$092071a0$5fda338d at Toshiba>
> > > Content-Type: text/plain; charset="us-ascii"
> > >
> > >  
> > > I have finally read the article "Intentionality is
> > > the mark of the vital". It contains interesting
> > > remarks about the mind/body problem, about the
> > > relationship between mental and material "substance",
> > > and nice illustrations (for example about lions and gnus).
> > > Well written.
> > >
> > > If "intentionality is the mark of the vital",
> > > are artificial agents with intentions the first
> > > step towards vital, living systems ? Agents are
> > > of course used in artificial life, but in the
> > > context of the article the question seems to
> > > gain new importance.
> > >
> > > -J.
> > > ________________________________
> > >
> > > From: Nicholas Thompson
> > > Sent: Monday, June 26, 2006 3:20 AM
> > > To: friam at redfish.com
> > > Subject: [FRIAM] self-consciousness
> > >
> > > For those rare few of you that are INTENSELY interested by the
> > > recent discussion on self consciousness, here is a paper on the
> > > subject  which asserts that every organism must have a
> point of view.
> > >  
> > > http://home.earthlink.net/~nickthompson/id14.html
> > >  
> > >
> > >
> > >
> > >
> > > ------------------------------
> > >
> > > _______________________________________________
> > > Friam mailing list
> > > Friam at redfish.com
> > > http://redfish.com/mailman/listinfo/friam_redfish.com
> > >
> > >
> > > End of Friam Digest, Vol 37, Issue 17
> > > *************************************
> >
> >
> >
> > ============================================================
> > FRIAM Applied Complexity Group listserv
> > Meets Fridays 9a-11:30 at cafe at St. John's College lectures,
> > archives, unsubscribe, maps at http://www.friam.org
>
> --
> *PS: A number of people ask me about the attachment to my
> email, which is of type "application/pgp-signature". Don't
> worry, it is not a virus. It is an electronic signature, that
> may be used to verify this email came from me if you have PGP
> or GPG installed. Otherwise, you may safely ignore this attachment.
>
> --------------------------------------------------------------
> --------------
> A/Prof Russell Standish                  Phone 8308 3119 (mobile)
> Mathematics                               0425 253119 (")
> UNSW SYDNEY 2052                 R.Standish at unsw.edu.au
>            
> Australia                                
> http://parallel.hpc.unsw.edu.au/rks
>             International
> prefix  +612, Interstate prefix 02
> --------------------------------------------------------------
> --------------
>
>
> ============================================================
> FRIAM Applied Complexity Group listserv
> Meets Fridays 9a-11:30 at cafe at St. John's College
> lectures, archives, unsubscribe, maps at http://www.friam.org
>
>




Reply | Threaded
Open this post in threaded view
|

Intentionality is the mark of the vital

Jochen Fromm-3

I must admit I was not fully aware of the
philosophical background for "intentionality"
http://plato.stanford.edu/entries/intentionality/
Maybe I confused "intentionality" with intentions.
I am not sure what "intentionality" really means.
Nevertheless, the aspect of "intentionality" as
world-directedness seems to be interesting (lat.
"intendere" means being directed towards some goal
or thing, to aim in a particular direction):
"intentionality" as existence of a "force" which
directs evolution and constraints a sequence of
events (incl. possible behaviors and actions). Systems
with "intentions" can be considered as organizers,
they try to organize things by imposing their order
specified in internal plans or schemas on the sequence of
external events.

Liveless, physical material has no intentions, even
if it is subject to evolution (evolution as gradual
development through time, lat. "evolvere" means to
unfold, unroll), it evolves usually towards a more
uniform, disordered state. Evolution in general
is not directed into a particular direction. It
has no direction, no plan and no goal. A closed
system without non-living elements evolves always
towards greater disorder, to a more equally
distributed state.

-J.



Reply | Threaded
Open this post in threaded view
|

Intentionality is the mark of the vital

Roger Critchlow-2
This discussion has sent me back to the Gerald Edelman paper that I
posted several months ago:

   http://www.pnas.org/cgi/content/full/102/6/2111

As the abstract says:  Analyzing neural dynamics underlying complex
behavior is a major challenge in systems neurobiology.

-- rec --


Reply | Threaded
Open this post in threaded view
|

Intentionality is the mark of the vital

Carlos Gershenson
In reply to this post by Russell Standish
> So - in terms of answering your question about whether intentionality
> is the mark of the vital, I would have to answer no. I do not see much
> intentional behaviour amongst simple animals (eg insects) or plants -
> rather I tend to think of these as complex machine. On the  
> contrary, to a
> well designed artificial human (as in a computer game character) I  
> will assign
> intentionality, even though I know they're only the outputs of  
> algorithms.

I agree with Russell's answer, but I would go even further.  
Intentionality, just like intelligence and cognition, is a property  
described by an observer. I can say that a tree is intentional  
because it wants to blossom when spring comes, or even that a rock is  
intentional because when I drop it it wants to go down. But really  
the question is not wether the rock is "really" intentional or not,  
because intentionality cannot be objective. The question is how  
useful it is for us to describe a rock as intentional...

If anybody's interested in a more complete exposition of this  
argument, I exposed it for cognition (but you can just change the  
word for intentionality or intelligence) in the following paper:
Cognitive Paradigms: Which One is the Best? Cognitive Systems  
Research 5(2):135-156, June 2004.
http://dx.doi.org/10.1016/j.cogsys.2003.10.002

Best regards,

     Carlos Gershenson...
     Centrum Leo Apostel, Vrije Universiteit Brussel
     Krijgskundestraat 33. B-1160 Brussels, Belgium
     http://homepages.vub.ac.be/~cgershen/

   ?Tendencies tend to change...?




Reply | Threaded
Open this post in threaded view
|

Intentionality is the mark of the vital

Phil Henshaw-2
In reply to this post by Jochen Fromm-3
That 'intention' normally includes referring to an inner map or image of
an objective is part of what I meant by our 'abuse' of language.   I
thought it was a productive question, perhaps, but the word didn't work
that well for describing the whole range of apparent system goal seeking
tendencies.  There clearly are a variety of systems that have and follow
internal images, but you can see it in their behavior and find it in the
physical structures they do it with.

Maybe I can mention the closely related problem that first set me off
looking for other kinds of answers.   The structure of natural systems
organized around resource pools, building the network of relationships
loops through free exchange, points the arrow of causation backwards.
It means that it's the consumer that determines whether a product is a
waste or resource, not the producer.   Natural systems involve
properties of things that are discovered, not predetermined,
organization built and fed by opportunity, not driven by necessity!
Natural systems are built around a combination of 'push' and 'pull'
connections.   In system models built in computers I believe tables of
properties are set only by the sending rather than receiving end of a
connection.  I can't say how to flip that to make market connections
work like free exchange pools do in nature, just that it's a basic
problem for comparing computer models to natural systems.

As a suggestion, it means something like hanging a new picture on the
wall of your mind, call it a window even though it may look blank at
first, through which you learn how to see the parts of the world where
causation works backwards.   I find there's good technique and lots to
find.



Phil Henshaw                       ????.?? ? `?.????
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
680 Ft. Washington Ave
NY NY 10040                      
tel: 212-795-4844                
e-mail: pfh at synapse9.com          
explorations: www.synapse9.com    


> -----Original Message-----
> From: friam-bounces at redfish.com
> [mailto:friam-bounces at redfish.com] On Behalf Of Jochen Fromm
> Sent: Tuesday, July 18, 2006 7:57 AM
> To: 'The Friday Morning Applied Complexity Coffee Group'
> Subject: Re: [FRIAM] Intentionality is the mark of the vital
>
>
>
> I must admit I was not fully aware of the
> philosophical background for "intentionality"
> http://plato.stanford.edu/entries/intentionali> ty/
> Maybe I
> confused "intentionality" with intentions.
> I am
> not sure what "intentionality" really means.
> Nevertheless, the aspect of "intentionality" as
> world-directedness seems to be interesting (lat.
> "intendere" means being directed towards some goal
> or thing, to aim in a particular direction):
> "intentionality" as existence of a "force" which
> directs evolution and constraints a sequence of
> events (incl. possible behaviors and actions). Systems
> with "intentions" can be considered as organizers,
> they try to organize things by imposing their order
> specified in internal plans or schemas on the sequence of
> external events.
>
> Liveless, physical material has no intentions, even
> if it is subject to evolution (evolution as gradual
> development through time, lat. "evolvere" means to
> unfold, unroll), it evolves usually towards a more
> uniform, disordered state. Evolution in general
> is not directed into a particular direction. It
> has no direction, no plan and no goal. A closed
> system without non-living elements evolves always
> towards greater disorder, to a more equally
> distributed state.
>
> -J.
>
>
> ============================================================
> FRIAM Applied Complexity Group listserv
> Meets Fridays 9a-11:30 at cafe at St. John's College
> lectures, archives, unsubscribe, maps at http://www.friam.org
>
>