A question for tomorrow


Re: A Question For Tomorrow

Frank Wimberly-2
Jon,

How about "experiences consciousness" in place of "has consciousness"?

Frank

-----------------------------------
Frank Wimberly

My memoir:
https://www.amazon.com/author/frankwimberly

My scientific publications:
https://www.researchgate.net/profile/Frank_Wimberly2

Phone (505) 670-9918

On Sat, Apr 27, 2019, 11:03 AM Jon Zingale <[hidden email]> wrote:
Nick,

I love that the title of this thread is 'A question for tomorrow'.
My position continues to be that the label `conscious` is meaningful,
though along with you, I am not sure what language to use around it.
For instance, can something have consciousness? That said, a
conservative scoping of the phenomena I would wish to describe
with consciousness language begins with granting consciousness
to more than 7 billion things on this planet alone. Presently, for those
that agree thus far, it appears that the only way to synthesize new things
with consciousness is to have sex (up to some crude equivalence).
This constraint seems an unreasonable limitation and so the problem
of synthesizing consciousness strikes me as reasonably near, i.e.
`a question for tomorrow` and not some distant future.

You begin by asking about the Turing machine, an abstraction which
summarizes what we can say about processing information. Here,
I am going to extend Lee's comment and ask that we consider
particular implementations or, better, particular embodiments.

Hopefully said without too much hubris, given enough time and
memory, I can compute anything that a Turing machine can compute.
The games `Magic: The Gathering` and `Minecraft` are Turing
complete. I would suspect that under some characterization, the
Mississippi River is Turing complete. It would be a real challenge
for me to state what abstractions like `Minecraft` experience, but
sometimes I can speak to my own experience. Oscar Hammerstein
mused about what Old Man River knows.
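
(To make the abstraction concrete: below is a minimal sketch, in Python, of a one-tape Turing machine stepping over a transition table. The particular machine, a toy unary incrementer, and every name in it are invented purely for illustration.)

from collections import defaultdict

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    """rules maps (state, symbol) -> (new_state, write_symbol, move), move in {-1, 0, +1}."""
    cells = defaultdict(lambda: blank, enumerate(tape))  # sparse, unbounded tape
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        state, cells[head], move = rules[(state, cells[head])]
        head += move
    return "".join(cells[i] for i in range(min(cells), max(cells) + 1))

# A toy machine: scan right over a block of 1s and append one more 1.
rules = {
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("halt", "1", 0),
}
print(run_turing_machine(rules, "111"))  # -> "1111"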

Naively, it seems to me that some kind of information processing,
though not sufficient, is necessary for experience and for a foundation
for consciousness. Whether the information processor needs to be
Turing complete is not immediately obvious to me; perhaps a finite-
state machine will do. Still, I do not think that a complete description of
consciousness (or whatever it means to experience) can exist without
speaking to how it is that a thing comes to sense its world.

For instance, in the heyday of analogue synthesizers, musicians
would slog these machines from city to city, altitude to altitude,
desert to rain-forested coast, and these machines would notoriously
respond in kind. Their finicky capacitors would experience the
change, and changes in microfarads would ensue. What does an
analogue synthesizer know?

Cheers,
Jonathan Zingale

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives back to 2003: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove


Re: A question for tomorrow

Nick Thompson
In reply to this post by Frank Wimberly-2

Indeed, Frank.

 

We behaviorists call that abDUCKtion.

 

Nick

 

 

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/

 

From: Friam [mailto:[hidden email]] On Behalf Of Frank Wimberly
Sent: Saturday, April 27, 2019 11:29 AM
To: [hidden email]; The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Subject: Re: [FRIAM] A question for tomorrow

 

 

Lee, Surely someone has developed probabilistic Turing Machines which can, very rarely, make errors.  I am ignorant of the field since 1972 when I took a course which used Hopcroft and Ullman as a text.

 

Nick, I agree that your questions are charming.  Your humanity is clearly seen.  By the way, it occurred to me this morning that the motto of behaviorists should be, "If it talks like a duck🦆...etc"

 

Frank

-----------------------------------
Frank Wimberly

My memoir:
https://www.amazon.com/author/frankwimberly

My scientific publications:
https://www.researchgate.net/profile/Frank_Wimberly2

Phone (505) 670-9918

 

On Sat, Apr 27, 2019, 10:59 AM Russ Abbott <[hidden email]> wrote:

Nick,

 

One of the most attractive things about your posts is how charming they are. They are so well written! Thank you for keeping the discussion at such a civilized and enjoyable level -- even when I don't agree with you.

 

-- Russ Abbott                                      
Professor, Computer Science
California State University, Los Angeles

 

 

On Sat, Apr 27, 2019 at 9:44 AM <[hidden email]> wrote:

Frank writes:
> I would hate to have to demonstrate that a modern computer is an instance
> of a Turing Machine.  Among other things they usually have multiple
> processors as well as memory hierarchies.  But I suppose it could be done,
> theoretically.

First a passage from a chapter I contributed to a book edited by a
graduate student Nick knows (Zack Beckstead); I have cut out a bit in the
middle which aims at a different point not under consideration here.
===begin===
If talk of “machines” in the context of the human sciences seems out of
place, note that Turing (1936) actually introduces his “automatic machine”
as a formalization (thoroughly mathematical, though described in
suggestive mechanistic terms like “tape” and “scanning”) of “an idealized
*human* calculating agent” (Soare, 1996, p. 291; italics in the original),
called by Turing a “computer”. [...] As Turing remarks, “It is always
possible for the computer to break off from his work, to go away and
forget all about it, and later to come back and go on with it” (1936, p.
253). It seems to me that then it must also be “always possible for the
computer to break off” and never “come back” (in fact, this often happens
in the lives, and invariably upon the deaths, of non-idealized human
calculating agents).
===end===
Of course Turing's idealization of "an idealized *human* calculating
agent" also idealizes away the fact that human computers sometimes make
errors. A Turing machine doesn't make errors.  But both the processors and
the memory of a modern computer can, and *must* make errors (however
rarely, and however good the error-detection).  To at least that extent,
then, they are not *perfect* instantiations of Turing machines.  On the
other hand, that very fact about them makes them (in some sense) *more*
like (actual) human calculating agents.
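
(A toy illustration of the error-detection Lee mentions, my own example rather than anything from the chapter: a single even-parity bit detects any one flipped bit in a stored word, though it cannot locate or correct it.)

def with_parity(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(word):
    """True if the word still has even parity, i.e. no single-bit error is detected."""
    return sum(word) % 2 == 0

word = with_parity([1, 0, 1, 1])
assert parity_ok(word)        # stored correctly
word[2] ^= 1                  # one bit flips in memory
assert not parity_ok(word)    # the flip is detected, though not located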

So, Nick, why are you asking what Turing machines think, instead of what
modern computers think?  (Be careful how you answer that...)



Re: A Question For Tomorrow

Marcus G. Daniels
In reply to this post by jon zingale

Jon writes:

 

< For instance, in the heyday of analogue synthesizers, musicians
would slog these machines from city to city, altitude to altitude,
desert to rain-forested coast, and these machines would notoriously
respond in kind. Their finicky capacitors would experience the
change, and changes in microfarads would ensue. What does an
analogue synthesizer know? >

 

Knowing must involve a stable representation, e.g. to facilitate reasoning, but it also must be informed by a large network of relations.

Digital computers are really good at providing a stable representation.   With an extensive sensor network and an ability to engage in an environment, it seems reasonable to me to say an autonomous vehicle would know something about driving.  It has to pass a Turing test.   But I wonder to what extent humans benefit from their physical vulnerabilities to know things? 

 

One example that comes to mind with quantum computing is that a SQUID can be used to implement a qubit (a somewhat stable representation), but it can also be used as an exquisitely sensitive sensor (low-field MRI).   The non-digital aspects of an analog computer, e.g. entanglement with the environment, could be used to both sense and compute at once.

 

Marcus

 



Re: A question for tomorrow

Nick Thompson
In reply to this post by lrudolph

So, Lee, you ask:

 

So, Nick, why are you asking what Turing machines think, instead of what modern computers think?  (Be careful how you answer that...)

 

So, I am trying to think like an honest monist.  It seems to me that a Turing Machine is a monist event processing system.  All you got is marks on the tape, right?

 

Nick

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/

 

 


Re: A question for tomorrow

Marcus G. Daniels
In reply to this post by Frank Wimberly-2

Or implement a Mersenne Twister with a period of 2^19937 − 1 and inject some conditionals in the machine to make the 'mistakes'.  That's a distinction without a difference to a behaviorist.
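
(A minimal sketch of what Marcus describes, with the error rate and the toy adder invented for illustration; Python's random module happens to be exactly such a Mersenne Twister, MT19937, with period 2^19937 − 1.)

import random

rng = random.Random(42)  # seeded, so the rare 'mistakes' are reproducible

def flaky_add(a, b, error_rate=1e-6):
    """Add two integers, but with tiny probability return an off-by-one 'mistake'."""
    result = a + b
    if rng.random() < error_rate:     # the injected conditional
        result += rng.choice([-1, 1])
    return result

print(flaky_add(2, 2))  # almost always 4; behaviorally hard to tell from a genuinely error-prone adder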


Re: A question for tomorrow

Nick Thompson
In reply to this post by Russ Abbott

Gosh, Russ; thanks.

 

Really!  It does help to be ignorant. 

 

Talking to you guys is like wandering in a field of wonders.  (Or is that wondering in a field of wanders?  I can never tell.)

 

Nick

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/

 


Re: A Question For Tomorrow

Nick Thompson
In reply to this post by jon zingale

Thanks, Jon, for that thoughtful post.  Mostly I hope that others will comment on it. 

 

I guess it comes down to two questions:  Grant, for a moment, that knowing is a relation between two entities.  Then, we can ask: What is the knowing relation?  And, what sorts of competencies are required for an entity to engage in such a relation?  And how many entities do you need before you have an instance of “knowing”?

 

Let’s take a dog as an entity and “time to take a walk” as another entity and the dog’s owner as a third entity.  I would say that “the time to take a walk” is an entity they both know, although I don’t think they know it by the same description.  As we talk here, I am beginning to wonder if the minimal conditions for “knowing” require coordination between two organisms.

 

Nick

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/

 


Re: A Question For Tomorrow

Nick Thompson
In reply to this post by Frank Wimberly-2

Hi Frank,

 

The problem is that one has immediately to ask, what is the contrast class of experiencing consciousness?  Experiencing non-consciousness?  I think for your line of thinking, where consciousness is direct, that’s an oxymoron.  For my line of thinking, when I woke up from my surgery and 24 hours had passed, I had a powerful experience of my non-consciousness. 

 

Nick

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/

 


Re: A Question For Tomorrow

Frank Wimberly-2
Yes, you were unconscious.  As you know, I had that experience a few days ago.

Frank

-----------------------------------
Frank Wimberly

My memoir:
https://www.amazon.com/author/frankwimberly

My scientific publications:
https://www.researchgate.net/profile/Frank_Wimberly2

Phone (505) 670-9918


Re: A Question For Tomorrow

Nick Thompson

Oh, yes.  We agree that I was unconscious.  And if you had been there, you would have experienced my unconsciousness.  But did I?  I think a person who adopts your position has to say, “No.”

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/

 


Re: A Question For Tomorrow

Frank Wimberly-2
No.  But people who are under light anesthesia such as during a colonoscopy sometimes talk.  I don't think they remember that.

-----------------------------------
Frank Wimberly

My memoir:
https://www.amazon.com/author/frankwimberly

My scientific publications:
https://www.researchgate.net/profile/Frank_Wimberly2

Phone (505) 670-9918


Re: A question for tomorrow

Russ Abbott
In reply to this post by Nick Thompson
Nick: "It’s not that I think that self-conscious (etc.) doesn’t exist; it’s that I think of it as a material relation."

What do you mean by a material relation?

Nick: "anywhere, anytime, etc., that material relation can be generated, there consciousness exists."

Is there something to that statement other than its tautological interpretation: whenever something can be brought into existence, it exists?


-- Russ Abbott                                      
Professor, Computer Science
California State University, Los Angeles


On Fri, Apr 26, 2019 at 11:02 PM Nick Thompson <[hidden email]> wrote:

Russ,

 

Thanks for stating the issues so precisely. 

 

You perhaps put my side of the argument a tad too strongly.  It’s not that I think that self-conscious (etc.) doesn’t exist; it’s that I think of it as a material relation.  So anywhere, anytime, etc., that material relation can be generated, there consciousness exists.  (It’s sort of like what Christ said: “wherever any number shall come together in my name, there shall I be.” Sorry, I am probably being silly there, but I just love that quote.)

 

Nick

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/

 

From: Friam [mailto:[hidden email]] On Behalf Of Russ Abbott
Sent: Friday, April 26, 2019 10:44 PM
To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Subject: Re: [FRIAM] A question for tomorrow

 

Good to talk to you again also, Nick.

 

You characterized me as saying, "yours is an in principle argument against any claim that machines and humans are ever doing the same thing, right?" 

I wouldn't go that far. One might argue that as physical beings, we are machines of a sort, so there's not such a clear line between machines and humans. One of our current scientific challenges is to figure out how to characterize that line and how to push entities across it.

 

But moving to shallower water, consider this example. Presumably, no one would say that a standard washing machine knows how to clean clothes. A washing machine is built to control the flow of water in and out of its tank, to rotate its agitator for given periods of time, etc. We then informally say that the washing machine is cleaning the clothes. But it's not. It is just performing mechanical actions that result in what we think of as clean clothes.

 

Suppose we made the washing machine smarter. Suppose it had sensors that could sense the chemicals that we consider "dirt," and selected actions from its repertoire of actions that reduced the level of those chemicals below some minimal threshold. Would one say that it then knows how to clean clothes? I would say that it doesn't--except in an informal way of talking. The washing machine is built of physical components, sensors, etc. along with algorithms that (again) produce what we think of as clean clothes. But the washing machine doesn't think of them as clean clothes. It doesn't think of anything. It just does what it does.
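
(A sketch of that hypothetical smarter machine, with the sensor, the single rinse action, and all the numbers made up for illustration: a sense-act loop driven by a threshold, which just does what it does.)

def wash(read_dirt_level, actions, threshold=0.05, max_cycles=20):
    """Apply actions until the sensed 'dirt' falls below the threshold."""
    for cycle in range(max_cycles):
        if read_dirt_level() < threshold:
            return cycle                    # "clean" by our description, not the machine's
        actions[cycle % len(actions)]()     # agitate, rinse, spin, ...
    return max_cycles

state = {"dirt": 0.4}                       # a pretend chemical sensor reading
def read_dirt_level():
    return state["dirt"]
def rinse():
    state["dirt"] *= 0.5                    # each rinse halves the sensed dirt (made up)

print(wash(read_dirt_level, [rinse]))       # cycles needed to fall below the threshold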

 

Is there anything one might add to our washing machine so that we would want to say that it knows how to clean clothes? I can't think of any incremental steps. For me to credit the washing machine with knowing how to clean clothes, I would insist that it have consciousness and subjective experience. I know that's a big jump; it's the line between machines and humans that I would draw. I'm now recalling, Nick, that you don't believe in consciousness and subjective experience. Right? So we are probably at an impasse since we no longer have a common vocabulary. But even if the position I'm assuming you hold on consciousness and subjective experience were not a problem, I'd still be stuck. I have no idea how to build consciousness and subjective experience into a washing machine. This is probably where we got stuck the last time we talked about this. I guess we drifted back out to the deeper water anyway. Oh, well. Perhaps it was worth reviewing the issue. Perhaps not.

 

-- Russ 

 

On Fri, Apr 26, 2019 at 8:55 PM Nick Thompson <[hidden email]> wrote:

Larding below.

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/

 

From: Friam [mailto:[hidden email]] On Behalf Of Frank Wimberly
Sent: Friday, April 26, 2019 8:19 PM
To: [hidden email]; The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Subject: Re: [FRIAM] A question for tomorrow

 

On the way to Friam I said to Nick: Turing Machines don't know anything.  They may store representations of knowledge. [NST==>Frank: This is how I understand you.  The relation between a Turing Machine and knowledge is like the relation between Mathematics and the events or processes it models.  All the knowledge is in the interpretations that translate “life” into something that the Math or Machine can compute and that translate the results of the computation back into life.  Let’s see.  What am I accusing you of here?  OH.  I have it.  I am accusing you of a mathematician’s understanding of computation.  Is that understanding of that relation canonical?   <==nst]  I further said that a photograph also represents knowledge.  For example, the number of floors of a given building.  Most people would be puzzled by the question, "What does a photo know?" [NST==>I think the metaphor is unfair.  Nobody has ever accused a photograph of being able to play chess, or to engage in other tasks which are broadly seen (at least by defrocked English majors) as cognitive.  <==nst]

 

There were multiple parallel conversations after we arrived.  I don't recall additional discussions about what Turing Machines know.

[NST==>Except at the very end, after 3 hours of discussing other things.  By that time I was exhausted, and I don’t remember what we said.  We spent a lot of time exploring our attractions to unorthodox scientific opinion in such matters as MSG and headaches, auras, pigeon navigation, and even, by implication, the tin-hat stuff.  It’s a question I would love to poll the FRIAM list on:  How many of you engage in unproven health practices of various sorts, even though “science” tells you they are worthless?  Why, exactly?  How is that consistent with your criticisms of climate science deniers?  <==nst]

Gotta go,

Thanks everybody,

 

N

-----------------------------------
Frank Wimberly

My memoir:
https://www.amazon.com/author/frankwimberly

My scientific publications:
https://www.researchgate.net/profile/Frank_Wimberly2

Phone (505) 670-9918

 

On Fri, Apr 26, 2019, 8:06 PM Russ Abbott <[hidden email]> wrote:

Nick, I can't believe you are asking such a question -- unless by "know" you mean something very different from the common understanding. No computer knows anything, although it may have lots of stored information. (Information is meant in the Shannon sense.) 

 

For example, Oxford defines knowledge as "Facts, information, and skills acquired through experience or education; the theoretical or practical understanding of a subject." This is distinct from, for example, having access to an encyclopedia--or even having memorized the contents of one. Turing machines, and computers in general, do not have an understanding of anything--even though they may have lots of Shannon-style information (which we understand as) related to some subject.

 

(Like Glen, though, I am interested in the results, if any, of this morning's meeting.)

 

-- Russ Abbott                                      
Professor, Computer Science
California State University, Los Angeles

 

 

On Fri, Apr 26, 2019 at 2:38 PM uǝlƃ <[hidden email]> wrote:

What was the result of this morning's conversation?

On 4/25/19 10:50 PM, Nick Thompson wrote:
> What does a Turing Machine know?


--
uǝlƃ


Re: A Question For Tomorrow

Russ Abbott
In reply to this post by Frank Wimberly-2
I remember part of mine. The anesthesia was a bit too light. At one point I felt the instrument in me. I opened my eyes and grunted. They gave me a bit more anesthesia.


On Sat, Apr 27, 2019 at 11:35 AM Frank Wimberly <[hidden email]> wrote:
No.  But people who are under light anesthesia, such as during a colonoscopy, sometimes talk.  I don't think they remember that.

-----------------------------------
Frank Wimberly

My memoir:
https://www.amazon.com/author/frankwimberly

My scientific publications:
https://www.researchgate.net/profile/Frank_Wimberly2

Phone (505) 670-9918

On Sat, Apr 27, 2019, 12:32 PM Nick Thompson <[hidden email]> wrote:

Oh, yes.  We agree that I was unconscious.  And if you had been there, you would have experienced my unconsciousness.  But did I?  I think a person who adopts your position has to say, “No.”

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/

 

From: Friam [mailto:[hidden email]] On Behalf Of Frank Wimberly
Sent: Saturday, April 27, 2019 12:16 PM
To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Subject: Re: [FRIAM] A Question For Tomorrow

 

Yes, you were unconscious.  As you know, I had that experience a few days ago.

 

Frank

-----------------------------------
Frank Wimberly

My memoir:
https://www.amazon.com/author/frankwimberly

My scientific publications:
https://www.researchgate.net/profile/Frank_Wimberly2

Phone (505) 670-9918

 

On Sat, Apr 27, 2019, 12:13 PM Nick Thompson <[hidden email]> wrote:

Hi Frank,

 

The problem is that one has immediately to ask, what is the contrast class of experiencing consciousness?  Experiencing non-consciousness?  I think for your line of thinking, where consciousness is direct, that’s an oxymoron.  For my line of thinking, when I woke up from my surgery and 24 hours had passed, I had a powerful experience of my non-consciousness. 

 

Nick

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/

 

From: Friam [mailto:[hidden email]] On Behalf Of Frank Wimberly
Sent: Saturday, April 27, 2019 11:33 AM
To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Subject: Re: [FRIAM] A Question For Tomorrow

 

Jon,

 

How about "experiences consciousness" in place of has consciousness.

 

Frsnk

 

-----------------------------------
Frank Wimberly

My memoir:
https://www.amazon.com/author/frankwimberly

My scientific publications:
https://www.researchgate.net/profile/Frank_Wimberly2

Phone (505) 670-9918

 

On Sat, Apr 27, 2019, 11:03 AM Jon Zingale <[hidden email]> wrote:

Nick,

 

I love that the title of this thread is 'A question for tomorrow'.

My position continues to be that the label `conscious` is meaningful,

though along with you, I am not sure what language to use around it.

For instance, can something have consciousness? That said, a

conservative scoping of the phenomena I would wish to describe

with consciousness language begins with granting consciousness

to more than 7 billion things on this planet alone. Presently, for those

that agree thus far, it appears that the only way to synthesize new things

with consciousness is to have sex (up to some crude equivalence).

This constraint seems an unreasonable limitation and so the problem

of synthesizing consciousness strikes me as reasonably near, ie.

 `a question for tomorrow` and not some distant future.

 

You begin by asking about the Turing machine, an abstraction which

summarizes what we can say about processing information. Here,

I am going to extend Lee's comment and ask that we consider

particular implementations or better particular embodiments.

 

Hopefully said without too much hubris, given enough time and

memory, I can compute anything that a Turing machine can compute.

The games `Magic the Gathering` and `Mine Craft` are Turing

complete. I would suspect that under some characterization, the

Mississippi river is Turing complete. It would be a real challenge

for me state what abstractions like `Mine Craft` experience, but

sometimes I can speak to my own experience. Oscar Hammerstein

mused about what Old Man River knows.

 

Naively, it seems to me that some kind of information processing,

though not sufficient, is necessary for experience and for a foundations

for consciousness. Whether the information processor needs to be

Turing complete is not immediately obvious to me, perhaps a finite-

state machine will do. Still, I do not think that a complete description of

consciousness (or whatever it means to experience) can exist without

speaking to how it is that a thing comes to sense its world.

 

For instance, in the heyday of analogue synthesizers,  musicians

would slog these machines from city to city, altitude to altitude,

desert to rain-forested coast and these machines would notoriously

respond in kind. Their finicky capacitors would experience the

change and changes in micro-farads would ensue. What does an

analogue synthesizer know?

 

Cheers,

Jonathan Zingale

 

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives back to 2003: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove


Re: A question for tomorrow

Russell Standish-2
In reply to this post by Frank Wimberly-2
On Sat, Apr 27, 2019 at 11:28:41AM -0600, Frank Wimberly wrote:

>
> Lee, Surely someone has developed probabilistic Turing Machines which can, very
> rarely, make errors.  I am ignorant of the field since 1972 when I took a
> course which used Hopcroft and Ullman as a text.
>
> Nick, I agree that your questions are charming.  Your humanity is clearly
> seen.  By the way, it occurred to me this morning that the motto of
> behaviorists should be, "If it talks like a duck🦆...etc"
>
> Frank
>

There is a small amount of literature on probabilistic Turing
machines, which tends to go under the name "Turing machine with random
oracle".

The first result was an early one of Shannon's, showing that adding
a random oracle does not increase the set of functions that are
computable.
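
Not from the thread, but a toy Python sketch of why the coin flips change
nothing about computability: if a randomized decision procedure uses at
most k coin flips, a deterministic machine can simply enumerate all 2^k
coin strings and inspect the outcomes.  A one-sided Fermat compositeness
test stands in for the oracle machine here, and all of the names are made
up for illustration.

    # Toy illustration only: derandomizing a coin-flipping decision procedure
    # by enumerating every coin string it could possibly see.
    from itertools import product

    def fermat_says_composite(n, coins):
        # Interpret the coin string as a base a in [2, n-2] and report
        # "composite" if a^(n-1) != 1 (mod n).  One-sided error: a prime
        # is never reported composite.
        a = 2 + int("".join(map(str, coins)), 2) % max(n - 3, 1)
        return pow(a, n - 1, n) != 1

    def composite_by_enumeration(n, k=8):
        # Run the randomized test on all 2^k coin strings; if any run says
        # "composite", n certainly is.  Exponential in k, which is the price
        # of throwing the coins away.
        return any(fermat_says_composite(n, c) for c in product((0, 1), repeat=k))

    print(composite_by_enumeration(91), composite_by_enumeration(97))  # True False

The enumeration is exponential in the number of coin flips, which is why
the interesting questions are about complexity rather than computability.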

However, conversely, there appear to be interesting results indicating
that P=NP for random oracle machines. There is some controversy over
this, though, and personally I've never been able to follow the proofs
in the area :).

If true, it meshes well with the idea that evolutionary algorithms
exploit the obvious random oracle of "variation" to effectively solve
some very hard NP problems.
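
To give that last remark a concrete (if trivial) shape, here is a
throwaway (1+1)-style evolutionary loop on a small number-partitioning
instance, an NP-hard problem, with random bit flips playing the role of
the "variation" oracle.  It is my own sketch, not anything from the
literature Russell mentions, and it says nothing about worst-case
behaviour.

    # Toy (1+1) evolutionary algorithm for number partitioning.
    import random

    random.seed(1)
    weights = [random.randint(1, 10_000) for _ in range(40)]

    def imbalance(mask):
        # Objective: absolute difference between the two sides of the partition.
        side = sum(w for w, bit in zip(weights, mask) if bit)
        return abs(sum(weights) - 2 * side)

    current = [random.randint(0, 1) for _ in weights]
    for _ in range(20_000):
        # Variation: flip each bit independently with probability 1/n.
        child = [bit ^ (random.random() < 1 / len(weights)) for bit in current]
        if imbalance(child) <= imbalance(current):
            current = child

    print(imbalance(current))  # typically tiny after a few thousand steps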


--

----------------------------------------------------------------------------
Dr Russell Standish                    Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellow        [hidden email]
Economics, Kingston University         http://www.hpcoders.com.au
----------------------------------------------------------------------------

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives back to 2003: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove

Re: A question for tomorrow

Marcus G. Daniels
Russell writes:

< However, conversely, there appear to interesting results that indicate P=NP for random oracle machines. There is some controversy over this, though, and personally, I've never been able to follow the proofs in the area :). >

Minimally, why is LaTeX the preferred format and not, say, Mathematica?   At least the latter makes it complete and computable.

Marcus
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives back to 2003: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove

Re: A question for tomorrow

Frank Wimberly-2
I'm not following.  What has LaTeX vs Mathematica got to do with the proofs in question?

-----------------------------------
Frank Wimberly

My memoir:
https://www.amazon.com/author/frankwimberly

My scientific publications:
https://www.researchgate.net/profile/Frank_Wimberly2

Phone (505) 670-9918

On Sat, Apr 27, 2019, 6:52 PM Marcus Daniels <[hidden email]> wrote:
Russell writes:

< However, conversely, there appear to interesting results that indicate P=NP for random oracle machines. There is some controversy over this, though, and personally, I've never been able to follow the proofs in the area :). >

Minimally, why is LaTeX the preferred format and not, say, Mathematica?   At least the latter makes it complete and computable.

Marcus
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives back to 2003: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove


Re: A question for tomorrow

Marcus G. Daniels

One reason it can be hard to follow something is that an implication is just not there, or notation is used in a contradictory fashion.  Those are things a computer just won't tolerate.  At least convince a computer that the conclusions follow from the premises, and then I'll bother to spend hours on it.  A proof is just a best effort, so use machines to make it as good as it can be. 
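
For what it is worth, the following is roughly what "convincing a
computer that conclusions follow from premises" looks like in a free
proof assistant (Lean 4 here, chosen purely for illustration; Marcus's
suggestion was Mathematica, and this snippet is mine, not his).

    -- A minimal machine-checked argument: the checker refuses the theorem
    -- unless every step really follows from the stated premises.
    theorem modus_ponens_chain (p q r : Prop)
        (hpq : p → q) (hqr : q → r) (hp : p) : r :=
      hqr (hpq hp)

The same discipline scales up: a gap or a contradictory use of notation
simply fails to check.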

 

From: Friam <[hidden email]> On Behalf Of Frank Wimberly
Sent: Saturday, April 27, 2019 6:55 PM
To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Subject: Re: [FRIAM] A question for tomorrow

 

I'm not following.  What has LaTex vs Mathematica got to do with the proofs in question?

 

-----------------------------------
Frank Wimberly

My memoir:
https://www.amazon.com/author/frankwimberly

My scientific publications:
https://www.researchgate.net/profile/Frank_Wimberly2

Phone (505) 670-9918

 

On Sat, Apr 27, 2019, 6:52 PM Marcus Daniels <[hidden email]> wrote:

Russell writes:

< However, conversely, there appear to interesting results that indicate P=NP for random oracle machines. There is some controversy over this, though, and personally, I've never been able to follow the proofs in the area :). >

Minimally, why is LaTeX the preferred format and not, say, Mathematica?   At least the latter makes it complete and computable.

Marcus
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives back to 2003: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove



Re: A question for tomorrow

Frank Wimberly-2
Thanks, Marcus.

How often are proofs with errors published in refereed articles or textbooks?

Hywel told me about a case in which Lincoln Wolfenstein got the sign wrong in the conclusion of a long article about neutrinos.  A result was that his article was cited much more than a typical one in physics.

Totally non sequitur: my daughter's best friend in high school was Wolfenstein's daughter.

Frank

-----------------------------------
Frank Wimberly

My memoir:
https://www.amazon.com/author/frankwimberly

My scientific publications:
https://www.researchgate.net/profile/Frank_Wimberly2

Phone (505) 670-9918

On Sat, Apr 27, 2019, 7:04 PM Marcus Daniels <[hidden email]> wrote:

One reason it could be hard to follow something is because an implication is just not there, or notation is used in a contradictory fashion.   These are that a computer just won’t tolerate.   At least convince a computer that conclusions follow from premises and then I’ll bother to spend hours on it.   A proof is just a best effort, so use machines to make it as good as it can be. 

 

From: Friam <[hidden email]> On Behalf Of Frank Wimberly
Sent: Saturday, April 27, 2019 6:55 PM
To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Subject: Re: [FRIAM] A question for tomorrow

 

I'm not following.  What has LaTex vs Mathematica got to do with the proofs in question?

 

-----------------------------------
Frank Wimberly

My memoir:
https://www.amazon.com/author/frankwimberly

My scientific publications:
https://www.researchgate.net/profile/Frank_Wimberly2

Phone (505) 670-9918

 

On Sat, Apr 27, 2019, 6:52 PM Marcus Daniels <[hidden email]> wrote:

Russell writes:

< However, conversely, there appear to interesting results that indicate P=NP for random oracle machines. There is some controversy over this, though, and personally, I've never been able to follow the proofs in the area :). >

Minimally, why is LaTeX the preferred format and not, say, Mathematica?   At least the latter makes it complete and computable.

Marcus
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives back to 2003: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove


Re: A question for tomorrow

lrudolph
In reply to this post by Marcus G. Daniels
> Russell writes:
>
> < However, conversely, there appear to interesting results that indicate
> P=NP for random oracle machines. There is some controversy over this,
> though, and personally, I've never been able to follow the proofs in the
> area :). >
>
> Minimally, why is LaTeX the preferred format and not, say, Mathematica?
> At least the latter makes it complete and computable.

Not everyone can afford Mathematica.  I can, but am not motivated enough
both to pay for it and to learn to use it well, given that very little of
the mathematics I want to do is very amenable to what it seems designed to
be best at.  Clark's mathematics department *couldn't* afford it while I
was there--Matlab was apparently enough cheaper, or perhaps more
appropriate to the research interests of the most likely user.  For the
occasional investigation of some example or other that comes up in my
work, the free wxMaxima has mostly been adequate (but I could never
persuade any of the undergraduate math majors who were working with me
and one of our CS faculty on some geometric problems in motion planning
that it was worth *their* trouble to learn in the hopes of furthering
our research: I have no evangelical talents to be applied to those who
have not already been touched by the appropriate version of the Holy
Spirit).


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives back to 2003: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove

Re: A question for tomorrow

Russell Standish-2
In reply to this post by Marcus G. Daniels
On Sun, Apr 28, 2019 at 12:52:02AM +0000, Marcus Daniels wrote:
> Russell writes:
>
> < However, conversely, there appear to interesting results that indicate P=NP for random oracle machines. There is some controversy over this, though, and personally, I've never been able to follow the proofs in the area :). >
>
> Minimally, why is LaTeX the preferred format and not, say, Mathematica?   At least the latter makes it complete and computable.


Convince Stephen Wolfram to open source Mathematica (or at least the
typesetting bits of it), then there might be some chance of
this. Otherwise, not so much.

LaTeX got its head start not only by being superior to its
competition, but also by being open source from the get-go (unusual
for the time). When LaTeX came out, the only things better (at least
according to some people) were incredibly expensive desktop publishing
packages costing $10K or more (back when $10K was worth more than
double what it is now).


--

----------------------------------------------------------------------------
Dr Russell Standish                    Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellow        [hidden email]
Economics, Kingston University         http://www.hpcoders.com.au
----------------------------------------------------------------------------

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives back to 2003: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove