http://friam.383.s1.nabble.com/Intentionality-is-the-mark-of-the-vital-tp522122p522123.html
which give operational meaning to inside & outside. I know how to find
lots of them, but can't figure out what to call them.
680 Ft. Washington Ave
> -----Original Message-----
> From: friam-bounces at redfish.com
> [mailto:friam-bounces at redfish.com] On Behalf Of Nicholas Thompson
> Sent: Friday, July 14, 2006 9:37 PM
> To: friam at redfish.com
> Cc: echarles
> Subject: [FRIAM] Intentionality is the mark of the vital
>
>
> Jochen,
>
> Thanks for your kind response.
>
> Your question churns my head. I was keen to argue that
> intentionality is a property not only of thinking things but
> of any biological thing. But I never imagined that
> intentionality could be used as a criterion of vitality. I
> do believe that every living system displays intentionality,
> but I now have to think about whether I think that all
> intentional systems
> are living. I guess NOT. However, my reasons for holding
> this belief are
> probably robotophobic.
>
> Nick
>
>
> PS My first response to your question was to write the
> following 100 words of baffle-gab, like the good academic I
> am.  It might be marginally interesting in and of itself,
> but it didn't seem to answer your question.
> I had put too much effort into it to throw it away, so I
> stuck it below.
> Feel free to ignore it.
>
> BEGIN BAFFLEGAB
> =======================================================
>
> Intentionality is one of those words that leads to endless
> confusion. It can refer to having an intention or it can
> refer to a peculiar property of assertions containing verbs of
> mentation, wanting, thinking, feeling, etc.
> The sentence, "Jones's intention was that the books be placed
> on the table" is intentional in both senses: intentional in
> sense one because it tells us something about what Jones is
> up to, and intentional in the second sense
> because it displays the odd property of referential opacity.
> Unlike the
> statement "the books are on the table" , the statement about
> Jones's intentions cannot be verified nor disconfirmed by
> gathering information about the location of the books.
>
> The two are intimately connected. Any statement one makes
> about the intentions of others in sense one is inevitably an
> intensional utterance in sense two because the truth value of
> the statement lies in the organization of Jones's behavior,
> rather than in whether Jones's intention is ever fulfilled.
>
> It is in this second, perhaps strained, philosophic sense
> that I think the cue relation is necessarily intentional.
> When we say that C is a cue to X, we mean that from the point
> of view of the system we are interested
> in, C stands in for X.  ("In the human respiratory system,
> blood acidity is a cue for blood oxygenation.")  To the extent that robots
> use cues, they MUST be intentional in this sense.
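>
> A minimal sketch of what I mean, in hypothetical Python (the names,
> numbers, and thresholds below are invented purely for illustration):
> the controller never measures oxygenation directly; it acts on blood
> acidity, which stands in for it.
>
>     # Acidity (pH) is the cue; oxygenation is what the system "cares" about.
>     PH_SETPOINT = 7.4  # illustrative resting value
>
>     def breathing_rate(measured_ph, baseline=12.0, gain=40.0):
>         """Breaths per minute, driven only by the cue (pH).
>
>         Low pH (high acidity) stands in for low oxygenation, so the
>         controller breathes faster -- whether or not oxygenation is
>         actually low.  The cue, not the thing cued, does the work.
>         """
>         error = PH_SETPOINT - measured_ph
>         return max(baseline, baseline + gain * error)
>
>     print(breathing_rate(7.40))  # ~12 breaths/min at the setpoint
>     print(breathing_rate(7.25))  # elevated, regardless of true oxygenation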
>
> ===========================================================
> end BAFFLEGAB.
>
> Nicholas Thompson
> nickthompson at earthlink.net
> http://home.earthlink.net/~nickthompson
>
> > [Original Message]
> > From: <friam-request at redfish.com>
> > To: <friam at redfish.com>
> > Date: 7/14/2006 12:00:29 PM
> > Subject: Friam Digest, Vol 37, Issue 17
> >
> > Send Friam mailing list submissions to
> > friam at redfish.com
> >
> > To subscribe or unsubscribe via the World Wide Web, visit
> >
> >     http://redfish.com/mailman/listinfo/friam_redfish.com
> > or, via email, send a message with subject or body 'help' to
> > friam-request at redfish.com
> >
> > You can reach the person managing the list at
> > friam-owner at redfish.com
> >
> > When replying, please edit your Subject line so it is more specific
> > than "Re: Contents of Friam digest..."
> >
> >
> > Today's Topics:
> >
> > 1. Re: 100 billion neurons (George Duncan)
> > 2. Re: 100 billion neurons (Jim Rutt)
> > 3. Re: 100 billion neurons (Frank Wimberly)
> > 4. Intentionality - the mark of the vital (Jochen Fromm)
> >
> >
> >
> ----------------------------------------------------------------------
> >
> > Message: 1
> > Date: Thu, 13 Jul 2006 10:38:47 -0600
> > From: "George Duncan" <gd17 at andrew.cmu.edu>
> > Subject: Re: [FRIAM] 100 billion neurons
> > To: "The Friday Morning Applied Complexity Coffee Group"
> > <friam at redfish.com>
> > Message-ID:
> > <b9b019d10607130938n1e6e953etaa4bbd1409d24ffb at mail.gmail.com>
> > Content-Type: text/plain; charset="iso-8859-1"
> >
> > Shall this conversation be neuronic rather than neurotic?
> >
> > Or try this
> >
> > http://www.technologyreview.com/read_article.aspx?id=17164&ch=infotech
> >
> >
> > On 7/13/06, Giles Bowkett <gilesb at gmail.com> wrote:
> > >
> > > I'm inclined to agree. The model I use is nonlinear fluid dynamics.
> > > Say you've got a thought which you began thinking when you were
> > > young. That thought is a fluid in motion. Over the course of your
> > > life you revisit certain ideas and revise certain opinions. The
> > > motion continues for decades. The way you think is like an
> > > information processing system which evolves over the course of your
> > > life, and it's true enough to call that software, not hardware, but
> > > the flow of data through that system is entirely organic, and
> > > creating an exact copy of a given flow in nonlinear fluid dynamics
> > > is impossible. The structure of your mode of thinking -- your
> > > "software" -- is shaped tremendously by the things that you think
> > > about; therefore replicating the processor without replicating the
> > > data can only be of partial usefulness, if the processor is shaped
> > > by and for the data. It's like copying a river by duplicating
> > > exactly every last rock and pebble, but leaving out the water.
> > >
> > > On 7/10/06, Frank Wimberly <wimberly3 at earthlink.net> wrote:
> > > > Back in the 1980's Hans and I had offices next to each other in
> > > > the Robotics Institute at Carnegie Mellon.  Over a period of a
> > > > couple of years we had numerous arguments about whether machines
> > > > could realize consciousness; whether a human mind could be
> > > > transferred to a machine, etc.  I remember saying that if somehow
> > > > my "mind" were transferred from my body to some robot--which I
> > > > felt was impossible--it might be that everyone else would agree
> > > > that it was a remarkable likeness but that I would be gone.  Hans
> > > > replied that I undervalued myself--that I am software not
> > > > hardware.  After many arguments along these lines I said, "Hans,
> > > > I now understand why you don't understand what I am saying about
> > > > consciousness--you don't have it."  This was all in good humor
> > > > and later when I was teaching a course in AI to MBA students I
> > > > invited Hans to continue our debate in class.  A good time was
> > > > had by all, I hope.
> > > >
> > > > Frank
> > > >
> > > > ---
> > > > Frank C. Wimberly
> > > > 140 Calle Ojo Feliz              (505) 995-8715 or (505) 670-9918 (cell)
> > > > Santa Fe, NM 87505               wimberly3 at earthlink.net
> > > >
> > > > -----Original Message-----
> > > > From: friam-bounces at redfish.com [mailto:friam-bounces at redfish.com]
> > > > On Behalf Of Martin C. Martin
> > > > Sent: Saturday, July 08, 2006 7:16 PM
> > > > To: The Friday Morning Applied Complexity Coffee Group
> > > > Subject: Re: [FRIAM] 100 billion neurons
> > > >
> > > > I suspect you'd like Hans Moravec's books:
> > > >
> > > >
> > > > http://www.amazon.com/gp/product/0674576187
> > > > http://www.amazon.com/gp/product/0195136306
> > > > He uses Moore's law and estimates of the brain's computing power
> > > > to calculate when we'll have human equivalence in "a computer."
> > > > I forget the date, but it's not far.  He also talks about a
> > > > number of very interesting consequences of this.
> > > >
> > > > - Martin
> > > >
> > >
> > >
> > > --
> > > Giles Bowkett
> > >
> > > http://www.gilesgoatboy.org
> >
> >
> >
> > --
> > George T. Duncan
> > Professor of Statistics
> > Heinz School of Public Policy and Management
> > Carnegie Mellon University
> > Pittsburgh, PA 15213
> > (412) 268-2172
> >
> > ------------------------------
> >
> > Message: 2
> > Date: Thu, 13 Jul 2006 16:57:49 -0600
> > From: Jim Rutt <jim at jimrutt.com>
> > Subject: Re: [FRIAM] 100 billion neurons
> > To: The Friday Morning Applied Complexity Coffee Group
> > <friam at redfish.com>
> > Message-ID: <6.2.0.14.2.20060713165629.045705c8 at mail.jimrutt.com>
> > Content-Type: text/plain; charset="us-ascii"
> >
> > As an interesting argument that the old hardware/software debate
> > about consciousness is often malformed, take a look see at:
> >
> >
> >
> > Damasio, Antonio: _The Feeling of What Happens: Body and Emotion in
> > the Making of Consciousness_
> >
> >
> >
> >
> > At 07:30 AM 7/10/2006, you wrote:
> > >Back in the 1980's Hans and I had offices next to each other in the
> > >Robotics Institute at Carnegie Mellon.  Over a period of a couple of
> > >years we had numerous arguments about whether machines could realize
> > >consciousness; whether a human mind could be transferred to a
> > >machine, etc.  I remember saying that if somehow my "mind" were
> > >transferred from my body to some robot--which I felt was
> > >impossible--it might be that everyone else would agree that it was a
> > >remarkable likeness but that I would be gone.  Hans replied that I
> > >undervalued myself--that I am software not hardware.  After many
> > >arguments along these lines I said, "Hans, I now understand why you
> > >don't understand what I am saying about consciousness--you don't have
> > >it."  This was all in good humor and later when I was teaching a
> > >course in AI to MBA students I invited Hans to continue our debate in
> > >class.  A good time was had by all, I hope.
> > >
> > >Frank
> > >
> > >---
> > >Frank C. Wimberly
> > >140 Calle Ojo Feliz              (505) 995-8715 or (505) 670-9918 (cell)
> > >Santa Fe, NM 87505 wimberly3 at earthlink.net
> > >
> > >-----Original Message-----
> > >From: friam-bounces at redfish.com [mailto:friam-bounces at redfish.com]
> > >On Behalf Of Martin C. Martin
> > >Sent: Saturday, July 08, 2006 7:16 PM
> > >To: The Friday Morning Applied Complexity Coffee Group
> > >Subject: Re: [FRIAM] 100 billion neurons
> > >
> > >I suspect you'd like Hans Moravec's books:
> > >
> > >
> > >http://www.amazon.com/gp/product/0674576187
> > >http://www.amazon.com/gp/product/0195136306
> > >He uses Moore's law and estimates of the brain's computing power to
> > >calculate when we'll have human equivalence in "a computer."  I
> > >forget the date, but it's not far.  He also talks about a number of
> > >very interesting consequences of this.
> > >
> > >- Martin
> > >
> > ===================================
> > Jim Rutt
> > voice: 505-989-1115
> >
> >
> > ------------------------------
> >
> > Message: 3
> > Date: Thu, 13 Jul 2006 18:59:23 -0600
> > From: "Frank Wimberly" <wimberly3 at earthlink.net>
> > Subject: Re: [FRIAM] 100 billion neurons
> > To: "'The Friday Morning Applied Complexity Coffee Group'"
> > <friam at redfish.com>
> > Message-ID: <021801c6a6e0$c072c530$0300a8c0 at franknotebook>
> > Content-Type: text/plain; charset="iso-8859-1"
> >
> > At the risk of being neurotic, here is a link to a review of Damasio's
> > book:
> >
> >
> > http://dir.salon.com/story/books/review/1999/09/21/damasio/index.html
> >
> > Frank
> >
> > ---
> > Frank C. Wimberly
> > 140 Calle Ojo Feliz              (505) 995-8715 or (505) 670-9918 (cell)
> > Santa Fe, NM 87505               wimberly3 at earthlink.net
> > -----Original Message-----
> > From: friam-bounces at redfish.com [mailto:friam-bounces at redfish.com]
> > On Behalf Of Jim Rutt
> > Sent: Thursday, July 13, 2006 4:58 PM
> > To: The Friday Morning Applied Complexity Coffee Group
> > Subject: Re: [FRIAM] 100 billion neurons
> >
> > As an interesting argument that the old hardware/software debate
> > about consciousness is often malformed, take a look see at:
> >
> >
> >
> > Damasio, Antonio: _The Feeling of What Happens: Body and Emotion in
> > the Making of Consciousness_
> >
> >
> >
> >
> > At 07:30 AM 7/10/2006, you wrote:
> >
> > Back in the 1980's Hans and I had offices next to each other in the
> > Robotics Institute at Carnegie Mellon.  Over a period of a couple of
> > years we had numerous arguments about whether machines could realize
> > consciousness; whether a human mind could be transferred to a
> > machine, etc.  I remember saying that if somehow my "mind" were
> > transferred from my body to some robot--which I felt was
> > impossible--it might be that everyone else would agree that it was a
> > remarkable likeness but that I would be gone.  Hans replied that I
> > undervalued myself--that I am software not hardware.  After many
> > arguments along these lines I said, "Hans, I now understand why you
> > don't understand what I am saying about consciousness--you don't have
> > it."  This was all in good humor and later when I was teaching a
> > course in AI to MBA students I invited Hans to continue our debate in
> > class.  A good time was had by all, I hope.
> >
> > Frank
> >
> > ---
> > Frank C. Wimberly
> > 140 Calle Ojo Feliz              (505) 995-8715 or (505) 670-9918 (cell)
> > Santa Fe, NM 87505               wimberly3 at earthlink.net
> >
> > -----Original Message-----
> > From: friam-bounces at redfish.com [mailto:friam-bounces at redfish.com]
> > On Behalf Of Martin C. Martin
> > Sent: Saturday, July 08, 2006 7:16 PM
> > To: The Friday Morning Applied Complexity Coffee Group
> > Subject: Re: [FRIAM] 100 billion neurons
> >
> > I suspect you'd like Hans Moravec's books:
> >
> >
> > http://www.amazon.com/gp/product/0674576187
> > http://www.amazon.com/gp/product/0195136306
> > He uses Moore's law and estimates of the brain's computing power to
> > calculate when we'll have human equivalence in "a computer."  I forget
> > the date, but it's not far.  He also talks about a number of very
> > interesting consequences of this.
> >
> > - Martin
> >
> > ===================================
> > Jim Rutt
> > voice:  505-989-1115
> >
> >
> >
> >
> > ------------------------------
> >
> > Message: 4
> > Date: Fri, 14 Jul 2006 09:42:14 +0200
> > From: "Jochen Fromm" <fromm at vs.uni-kassel.de>
> > Subject: [FRIAM] Intentionality - the mark of the vital
> > To: "'The Friday Morning Applied Complexity Coffee Group'"
> > <friam at redfish.com>
> > Message-ID: <000001c6a719$092071a0$5fda338d at Toshiba>
> > Content-Type: text/plain; charset="us-ascii"
> >
> >
> > I have finally read the article "Intentionality is
> > the mark of the vital". It contains interesting
> > remarks about the mind/body problem, about the
> > relationship between mental and material "substance",
> > and nice illustrations (for example about lions and gnus).
> > Well written.
> >
> > If "intentionality is the mark of the vital",
> > are artificial agents with intentions the first
> > step towards vital, living systems?  Agents are
> > of course used in artificial life, but in the
> > context of the article the question seems to
> > gain new importance.
> >
> > -J.
> > ________________________________
> >
> > From: Nicholas Thompson
> > Sent: Monday, June 26, 2006 3:20 AM
> > To: friam at redfish.com
> > Subject: [FRIAM] self-consciousness
> >
> > For those rare few of you who are INTENSELY interested in the recent
> > discussion on self-consciousness, here is a paper on the subject
> > which asserts that every organism must have a point of view.
> >
> >
> > http://home.earthlink.net/~nickthompson/id14.html
> >
> >
> >
> >
> > ------------------------------
> >
> > _______________________________________________
> > Friam mailing list
> > Friam at redfish.com
> >
> > http://redfish.com/mailman/listinfo/friam_redfish.com
> >
> > End of Friam Digest, Vol 37, Issue 17
> > *************************************
>
>
>
> ============================================================
> FRIAM Applied Complexity Group listserv
> Meets Fridays 9a-11:30 at cafe at St. John's College
> lectures, archives, unsubscribe, maps at http://www.friam.org
>