Tautologies and other forms of circular reasoning.

Re: Systems, State, Recursion, Iteration.

Sarbajit Roy (testing)
Dear Russ

I've read your paper on how the Fed can fix the economy.

You've programmed the states of the economy and frozen the Fed's response in terms of those states, like traffic lights. It reminds me of classical control theory: pure and immediate "P"roportional control of a single variable. Are there any "I"s and "D"s which are time/rate dependent, or is that left up to the Fed?
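
(Aside, for readers unfamiliar with the control jargon: a rough sketch of one discrete PID step, in the same language as the Haskell example later in the thread. The gains kp, ki, kd, the error signal, and the numbers are invented for illustration; nothing below comes from Russ's paper.)

-- Hypothetical discrete PID step: the "P" term reacts to the current
-- error, the "I" term to error accumulated over time, the "D" term to
-- the error's rate of change.
pidStep :: Double -> Double -> Double      -- kp, ki, kd (assumed gains)
        -> (Double, Double)                -- (accumulated integral, previous error)
        -> Double                          -- current error
        -> (Double, (Double, Double))      -- (control output, updated accumulator)
pidStep kp ki kd (integral, prevErr) err =
  let integral' = integral + err
      deriv     = err - prevErr
      output    = kp * err + ki * integral' + kd * deriv
  in  (output, (integral', err))

main :: IO ()
main = print (pidStep 0.5 0.1 0.05 (0, 0) 2.0)

The "I" and "D" terms are the time/rate-dependent pieces asked about above; the question is whether the Fed rule has anything playing their role.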

On Sun, Apr 14, 2013 at 9:33 AM, Russ Abbott <[hidden email]> wrote:
Never beaten over the head with “hypothetical construct” or “intervening variable”. My notion of state is basic theoretical computer science. How an automaton (a formally defined mechanism such as a Turing Machine, Finite Automaton, etc.) reacts to its input depends on its state. This isn't intended to be particularly sophisticated. It's just a technique used when specifying how things interact with their environments. 

When a traffic light that controls a crosswalk is in the green state (in your direction) and you press the cross button, it ignores that input. When it's in its red state (in your direction) and you press the cross button, it starts counting down to turning green. How long the countdown will be depends on another element of its state: how much time has passed since the most recent green.

 
-- Russ Abbott
_____________________________________________
  Professor, Computer Science
  California State University, Los Angeles

  My paper on how the Fed can fix the economy: ssrn.com/abstract=1977688
  Google voice: 747-999-5105
  CS Wiki and the courses I teach
_____________________________________________ 



On Sat, Apr 13, 2013 at 8:48 PM, Nicholas Thompson <[hidden email]> wrote:

Thanks, Steve.  Will ponder all of this.  Nick

 

From: Friam [mailto:[hidden email]] On Behalf Of Steve Smith
Sent: Saturday, April 13, 2013 8:47 PM


To: The Friday Morning Applied Complexity Coffee Group
Subject: [FRIAM] Systems, State, Recursion, Iteration.

 

Nick -

It would be difficult to explain this (Marcus' definition of iteration vs recursion) to you without teaching you several key computer science concepts which are not necessarily difficult but are very *specific*.

The first step would be to answer your question of days ago about what a "System" is.   Physicists define System the same way Biologists (or even Social Scientists) do, just using different components and processes.   It involves the relationship between the "thing" itself (a subset of the universe) and a model that represents it. 

Therein lie two lossy compressions:  1) Reductionism is at best a convenient approximation... no subset or subsystem is completely isolated (unless perhaps somehow what is inside a black hole is isolated from what is outside, but that might be an uninteresting, degenerate case?);  2) The model is not the thing...   we've been all over this, right?  Another lossy compression/projection of reality. Oh, and a *third*: 3) We can only measure these quantities to some degree of precision.

In a system, a simultaneous measure of every quantity of every aspect of the system is its "state".  In practice, we can only measure some of the quantities of some of the aspects, and only to some precision, and in fact, that is pretty much what modeling is about... choosing that subset according to various limiting qualities such as what we *can* measure and with what level of precision, with a goal in mind of answering specific questions with said model.

At this point, we are confronted with "what means State?"

Your preference for "Analytical Output" vs "State" I think reflects your attempt to think in terms of the implementation of a model (in a computer program, or human-executed logic/algorithm).  The problems with "Analytical Output" in this context arise from both "Analytical" and "Output".   "Analytical" implies that the only or main value of the "state" is to do analysis on it.  In Marcus's example, its main use is to feed it right back into an iterated model... no human may ever look at this "state".  "Output" suggests (also) that the state is visible *outside* the system.   While (for analytical purposes) we might choose to capture a snapshot of the state, it is not an "output", it is just the STATE of the system (see above).

Marcus's point was that in a recursive *program* (roughly, a deterministic implementation, rooted in formal symbol processing, of a model of some "system"), the "system" is nominally subdivided into physical or logical subsets or "subsystems" and executed *recursively* (to wit, by subdividing again until an answer can be obtained without further subdivision).  In an iterative *program*, the entire (sub)system model is executed with initial conditions (state) one time, then the resulting state of that iteration is used as the initial conditions for the *next* iteration until some convergence criterion (the state of the system ceases to change by more than some epsilon) is met.
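
(A toy illustration of the two shapes described above, in the same language as Marcus's example further down. The "model" here — one averaging step that happens to converge to the square root of 2 — and the list-summing problem are invented; only the shapes of the two programs matter.)

-- Iterative shape: feed the whole state back into the model until the
-- change falls below some epsilon. (In Haskell the "loop" is itself
-- written as a tail call, which is part of Marcus's point.)
iterateUntilConverged :: Double -> Double -> Double
iterateUntilConverged eps x
  | abs (x' - x) < eps = x'
  | otherwise          = iterateUntilConverged eps x'
  where x' = (x + 2 / x) / 2

-- Recursive shape: subdivide until an answer needs no further
-- subdivision, here summing a list by halves.
sumByHalves :: [Double] -> Double
sumByHalves []  = 0
sumByHalves [x] = x
sumByHalves xs  = sumByHalves front + sumByHalves back
  where (front, back) = splitAt (length xs `div` 2) xs

main :: IO ()
main = do
  print (iterateUntilConverged 1e-9 1.0)   -- ~1.4142135623730951
  print (sumByHalves [1 .. 10])            -- 55.0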

I hope this helps...  and doesn't muddy the water yet more?

- Steve

I don't know, I don't speak Haskell.

 

--Doug

On Sat, Apr 13, 2013 at 3:29 PM, Nicholas Thompson <[hidden email]> wrote:

Could be!

 

Ok.  Now that that is behind us, what did the message mean?

 

N

 

From: Friam [mailto:[hidden email]] On Behalf Of Douglas Roberts
Sent: Saturday, April 13, 2013 3:02 PM


To: The Friday Morning Applied Complexity Coffee Group

Subject: Re: [FRIAM] Tautologies and other forms of circular reasoning.

 

Nick,

 

I'm surprised that you are not more conversant in computer languages.  You're always, well, niggling about the meaning of this word or that one in the context of this or that conversation.

 

With computer languages, there are very few ambiguities, contextual or otherwise. Kind of like mathematics. For one as worried as you often appear to be about the true meaning of the written word, I would have thought that you would positively revel in the ability to express yourself with nearly absolute crystal clarity, no ambiguities whatsoever.

 

Could it be that you seek out the ambiguities that are ever present  in human languages to give yourself something to pounce upon and worry over, and to provide the opportunity to engage in nearly endless conversations?

 

--Doug

On Sat, Apr 13, 2013 at 2:05 PM, Nicholas Thompson <[hidden email]> wrote:

Can anybody translate this for a non-programmer person?

N


-----Original Message-----
From: Friam [mailto:[hidden email]] On Behalf Of Marcus G.
Daniels
Sent: Saturday, April 13, 2013 1:10 PM
To: [hidden email]
Subject: Re: [FRIAM] Tautologies and other forms of circular reasoning.

On 4/12/13 5:40 PM, glen wrote:
> Iteration is most aligned with stateful repetition. Recursion is most
> aligned with stateless repetition.
Purely functional constructs can capture iteration, though.

$ cat foo.hs
import Control.Monad.State
import Control.Monad.Loops

inc :: State Int Bool
inc = do i <- get
         put (i + 1)
         return (i < 10)

main = do
  putStrLn (show (runState (whileM inc get) 5))
$ ghc --make foo.hs
$ ./foo
([6,7,8,9,10],11)

Re: Systems, State, Recursion, Iteration.

lrudolph
In reply to this post by Nick Thompson
Nick,

> I guess I would call this a functional state.   Or perhaps a disposition.  

You could also (and equally well--or equally badly) use Lewin's phrase "the field at the
present time".  Or rather, since we do want to talk about existents that persist in time but
may have different states at different times (or the same state at different times), "the
field at some time".  Or just "the field".

My vote on my first parenthesis, by the way, is "equally badly".

Lee

Re: Systems, State, Recursion, Iteration.

Nick Thompson
In reply to this post by Russ Abbott

Russ,

 

Ok.  So, the question is, what is the “value added” of saying “green state” rather than just saying that “the light is green”?  Something comes through the gate with the camel, so to speak.  What is it?  N

 

Re: Systems, State, Recursion, Iteration.

Russ Abbott
Sarbajit, Thanks for your comments/questions about my Fed suggestion. I'm not an engineer and wasn't thinking about a PID control mechanism. (In fact, I had to look it up!) I was leaving the decision about how to move the levers/dials to Fed personnel and wasn't thinking about whether there would be a nice algorithm that gave correction values in terms of current error, past accumulated error, and predicted future error.  I suspect that macroeconomics is not yet up to that.  But perhaps I underestimate it. I would like to see a futures market in predicted corrections, which might do a good job.

Nick, Saying that the light is green is reporting an observation. Saying the light is in a green state is making a statement about the light as a mechanism. As I said, I think of the notion of state as identifying a collection of functionalities. So attributing a "green state" to the light implies that when in that state it has a specific set of functional attributes. Saying that the light is green doesn't say anything like that.

So I'd say that there is a big difference. "The light is green" is an observation; "the light is in a green state" claims that when in that state the light acts and is capable of acting in certain ways that may be distinct from how it acts and is capable of acting when in other states. (I say "may be" because a mechanism may have two distinct states that are indistinguishable wrt their operational characteristics. Having two such states would suggest design redundancy, but I can't deny the possibility. In software one generally doesn't want such states. In engineering systems such redundancy is often created for the sake of safety.)
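
(A minimal sketch of the crosswalk light as an automaton, to make the "collection of functionalities" reading concrete. The type names, the 30-second figure, and the countdown rule below are invented for illustration, not taken from any real controller.)

-- The "green state" / "red state" of the crosswalk light, plus the other
-- element of state mentioned earlier: time since the most recent green.
data Colour = Green | Red deriving Show

data LightState = LightState
  { colour         :: Colour
  , secsSinceGreen :: Int
  } deriving Show

-- How the automaton reacts to the same input (a button press) depends on
-- its state: ignored when green, a countdown when red.
pressButton :: LightState -> (String, LightState)
pressButton s = case colour s of
  Green -> ("ignored", s)
  Red   -> let wait = max 5 (30 - secsSinceGreen s)   -- made-up rule
           in  ("countdown " ++ show wait ++ "s to green", s)

main :: IO ()
main = do
  print (pressButton (LightState Green 0))
  print (pressButton (LightState Red 12))

"The light is green" reports only the visible colour; "the light is in the Green state" is a claim about which branch of pressButton the mechanism will take on its next input.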

 
-- Russ Abbott
_____________________________________________
  Professor, Computer Science
  California State University, Los Angeles

  My paper on how the Fed can fix the economy: ssrn.com/abstract=1977688
  Google voice: 747-999-5105
  CS Wiki and the courses I teach
_____________________________________________ 



Re: Systems, State, Recursion, Iteration.

Steve Smith
butting in here...

Nick, Saying that the light is green is reporting an observation. Saying the light is in a green state is making a statement about the light as a mechanism. As I said, I think of the notion of state as identifying a collection of functionalities. So attributing a "green state" to the light implies that when in that state it has a specific set of functional attributes. Saying that the light is green doesn't say anything like that.
So, if the light is operated by relays, we can check the position of the relays to determine the "state" of the green light; but if the light *bulb* behind the green lens is burned out, is the light in the green state or not?  The relay powering it is.  Or do we simply make up another bit of state for whether the light bulb is operational... and then another for whether the lens is covered in snow to the point of being blanked?

For the control-systems engineer designing/building/operating the light system, the first is paramount, the second is valuable, but the third is probably out of scope?  To the driver (or the traffic police) all three sum to one simple point: "what color is the light?" (and on a good day, "can I trust it?").

- Steve


Re: Tautologies and other forms of circular reasoning.

glen ropella
In reply to this post by Marcus G. Daniels
Marcus G. Daniels wrote at 04/13/2013 07:42 PM:
> Iteration is a special case of recursion

Well, more generically, they're duals, meaning that either can be
(thought of as) a special case of the other.

But, more to the point, this goes back to the original discussion of
circular reasoning, but not in the merely syntactic sense of using terms
in their own definition.  It goes very deep into the foundations of how
we think about the ambience around us.  It seems to me there are only 2
ways: 1) an absolute concept of size vs. 2) a partial (staged?,
progressive?) concept of composition.  (1) is more state oriented and
provides the context for questions like "How many units do I see?"  (2)
is more process oriented and provides the context for questions like
"How do these things fit/work together to produce what I see?"

They handle circularity differently.  It seems to me that (1) handles
circularity with ambiguity or paradox.  Conversations containing things
like "this sentence is false" and the various ways of escaping such seem
to assume concepts like state, accretion, stigmergy, etc.  In contrast,
it seems like (2) handles circularity in a very relative or relational
way.  Cycles are implicit and, perhaps, ubiquitous.  This seems to
provide the right context for those endless conversations with people
who never seem to say anything with any finality ... just round and
round it goes, sometimes seeming like it's going somewhere only to
realize you're back where you started.

--
--
=><= glen e. p. ropella
But gladder to burn it all away


Re: Tautologies and other forms of circular reasoning.

Marcus G. Daniels
On 4/16/13 12:41 PM, glen wrote:
> But, more to the point, this goes back to the original discussion of
> circular reasoning, but not in the merely syntactic sense of using
> terms in their own definition. It goes very deep into the foundations
> of how we think about the ambience around us. It seems to me there are
> only 2 ways: 1) an absolute concept of size vs. 2) a partial (staged?,
> progressive?) concept of composition.
The iteration vs. recursion distinction is a side issue.  A more
important issue is whether a model has referential transparency. Are all
the possible ways an object can change or reveal state made evident, or
are they hidden away in obscure ways due to implementation issues?

One way state transitions can be obscured is the use of iteration, where
an ill-defined object gets some sequence of actions applied to it, but
there are no constraints on how that can happen -- or even a record once
the actions have occurred.  In contrast, a strongly-typed functional
program is in some sense an experimental protocol by itself.  Meanwhile,
a statically typed framework can tolerate ambiguity.  It can be made
dynamic with some tags.  If the modeler wants some physical intuition in
the composition of objects and their behaviors, then there's nothing
that prevents that either.   The issue is whether a modeler is prepared
to put all of the degrees of freedom on the table and find and remove
those that are not essential, or imagine that 1 piece on each of 100
tables is somehow different from the same 100 pieces on 1 table.
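
(One made-up contrast in the spirit of the paragraph above: a counter whose state transitions hide behind an innocent-looking call, versus one whose only transition is spelled out in its type. The names mkHiddenCounter and step are invented.)

import Data.IORef

-- Opaque: each call silently mutates a counter, so the "same" call
-- returns different values and nothing in the type says so.
mkHiddenCounter :: IO (IO Int)
mkHiddenCounter = do
  ref <- newIORef 0
  return (modifyIORef ref (+ 1) >> readIORef ref)

-- Evident: the state transition is on the table; the only way the count
-- changes is the one written here.
step :: Int -> (Int, Int)        -- old count -> (value observed, new count)
step n = (n + 1, n + 1)

main :: IO ()
main = do
  tick <- mkHiddenCounter
  a <- tick
  b <- tick
  print (a, b)                   -- (1,2): same expression, two results
  print (step 0)                 -- (1,1): no hidden degrees of freedom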

Maybe we aren't talking about the same thing.  I'm not sure what you
mean by "size" above.  I think you might mean that "All eventualities
must be covered by top-down analysis."   I think you might mean that not
having to make types fit together means there are more ways to entertain
the parts and pieces.   If so, I don't see it that way.   If there are
paths a computation can take which will result in failure, it's better
to know sooner than later about them.  If certain state configurations
require logic, generics, or big union types, to do nothing but something
benign -- until the appropriate treatment is identified -- being
confronted with those configurations as classes (at compile time) is
better than hitting the edge cases one by one at runtime.

Marcus

Re: Tautologies and other forms of circular reasoning.

glen ropella
Marcus G. Daniels wrote at 04/16/2013 07:55 PM:
> A more important issue is whether a model has referential
> transparency. Are all the possible ways an object can change or
> reveal state made evident, or are they hidden away in obscure ways
> due to implementation issues?
>
> [...] The issue is whether a modeler is prepared to put all of the
> degrees of freedom on the table and find and remove those that are
> not essential, or imagine that 1 piece on each of 100 tables is
> somehow different from the same 100 pieces on 1 table.

Yes, exactly.  The conversation Nick started regarding tautologies is
fundamentally about separating [non-]essential, or in the extreme case,
no-ops.  I (think I intellectually, if not behaviorally) share your
preference for functional computation because it helps force me to be
more rigorous in my intent.  I'm as lazy as they come, though, and when
given too many bells and whistles, my product tends to be sloppy.  But I
tend to also argue that, sometimes, depending on the requirements set
out by the task, the sloppiness is not bad but merely a trivial side-effect.

But this might be where we're talking about different things, below...

> Maybe we aren't talking about the same thing.  I'm not sure what you
> mean by "size" above.  I think you might mean that "All eventualities
> must be covered by top-down analysis."   I think you might mean that not
> having to make types fit together means there are more ways entertain
> the parts and pieces.

Sorry, I was being obtuse.  I meant it in the sense of set measures, or
perhaps counting the members of a state space. In general, when we look
around us at the world, we tend to focus, to slice off a subset.  Then
we go about justifying that the focal subset is "smaller" than the
ambience from which we sliced it.  There seem to be 2 ways to do that,
by measuring the size of sets vs. iteratively, i.e. showing how various
subsets can be composed (unioned, accumulated) to construct various sets.

It's not entirely clear to me where "type" fits (at least not the
specific sense of "type" we use in programming).  But it seems to be
synonymous with the predicate that defines the set.  "Type" seems like a
state-oriented conception, whereas "predicate" seems like a
process-oriented conception.  We talk about things being "of a type".
But we talk about "satisfying a predicate".  I could easily be wrong in
my intuition, there.
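
(A tiny, made-up illustration of the two framings: a predicate a value satisfies when asked, versus a type a value is "of" once a constructor has certified it. Positive, isPositive, and mkPositive are invented names.)

newtype Positive = Positive Double deriving Show

-- Predicate: a question we put to a value.
isPositive :: Double -> Bool
isPositive x = x > 0

-- Type: membership certified once, at construction; afterwards every
-- Positive simply *is* of the type.
mkPositive :: Double -> Maybe Positive
mkPositive x
  | isPositive x = Just (Positive x)
  | otherwise    = Nothing

main :: IO ()
main = do
  print (isPositive 3.2)      -- True: 3.2 satisfies the predicate
  print (mkPositive 3.2)      -- Just (Positive 3.2): 3.2 is "of the type"
  print (mkPositive (-1))     -- Nothing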

>   If so, I don't see it that way.   If there are
> paths a computation can take which will result in failure, it's better
> to know sooner than later about them.  If certain state configurations
> require logic, generics, or big union types, to do nothing but something
> benign -- until the appropriate treatment is identified -- being
> confronted with those configurations as classes (at compile time) is
> better than hitting the edge cases one by one at runtime.

Well, to go back to my defense of my sloppiness.  Sometimes the
sloppiness is not bad or merely ignorable.  Sometimes, it's crucial to
re-use (or, more appropriately [mis|ab]use).  This is the concept I was
trying to get at earlier when I misspoke and claimed that iteration is
more open-ended than recursion.  It's not, since they're duals.  But
iteration, being state-oriented rather than process-oriented seems more
amenable to sloppiness.  When we finite-minded, hyper-focusing, pattern
recognizers wander around in the ambience, trying to "do stuff", we face
a kind of action threshold, a hurdle we have to get over in order to get
anything done.

When we try to be as rigorous as possible and put all our DoF on the
table, so to speak, that raises the threshold and makes action more
difficult.  Granted, it also might make the eventual action more
effective or powerful, but it does make it more difficult.

Given the variety of types of people out there, we end up with a nice
spread of people, those who would prefer to "just do it" versus those
who feel they should think long and hard before they do anything.  My
speculation is that it's easier for the sloppy people to "grab onto"
whatever they slice out of the ambience if they use a state-oriented
world view.  It seems very difficult to be a purely Taoistic floating
process, continuously, sloppily transforming/filtering things from birth
till death.

--
=><= glen e. p. ropella
This body of mine, man I don't wanna turn android


Re: Tautologies and other forms of circular reasoning.

Nick Thompson

In my (leetle) world, referential opacity refers to ambiguities that arise in intentional utterances ... utterances of the form, "Jones believes (wants, thinks, hopes, etc.) that X is the case."  They are opaque in that they tell us nothing about the truth of X.  So, for instance, "Jones believes that there are unicorns in Central Park" tells us neither that such a thing as a horse with a horn in its forehead exists (because Jones may confuse unicorns with squirrels) nor that there are any "unicorns" in Central Park, whatever Jones may conceive them to be (because Jones may be misinformed).

 

What does the computer community think "referential opacity" means?  Are there statements in computer code that take the form, "from the point of view of circuit A, switch S has value V"?  And do we have to worry that somewhere later in the program some other circuit, circuit B, will encounter switch S and take it to have the value V?

 

Nick

 

Re: Tautologies and other forms of circular reasoning.

glen ep ropella

I'm not a good example of the computer community.  But I can suggest
that the concept is related, but not identical to yours.  To me,
referential opacity would imply a loss of control over what happened
when a reference was used (accessed or modified).  It's a kind of "fire
and forget" operation.  There are various "computer" context where the
implications of such would mean something practical.

In the context of concurrency, that would imply that you don't know
whether the reference had or still has the same value it had when
you accessed it and you don't know when the value will percolate out to
whoever else depends on it when you modify it.

In the context of object orientation, it implies "encapsulation", the
separation of what you see on the outside from what actually goes on
inside an object.

In the context of our iteration vs. recursion discussion, it implies
that there are (may be) hidden states that are modified by accessing or
assigning values to the reference.

I'm sure there are more.
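
(A toy version of Nick's circuit A / switch S worry, in the concurrency sense above. The name switchS and the one-character "values" are invented; the delay only makes it likely, not guaranteed, that the second read sees the other thread's write.)

import Control.Concurrent
import Data.IORef

main :: IO ()
main = do
  switchS <- newIORef 'V'                -- shared, mutable "switch S"
  a <- readIORef switchS                 -- "circuit A" takes S to have value 'V'
  _ <- forkIO (writeIORef switchS 'W')   -- elsewhere, something flips the switch
  threadDelay 10000                      -- let the other thread run (10 ms)
  b <- readIORef switchS                 -- "circuit B" now sees a different value
  print (a, b)                           -- most likely ('V','W')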



--
glen e. p. ropella, 971-255-2847, http://tempusdictum.com
Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety. -- Benjamin Franklin


Re: Tautologies and other forms of circular reasoning.

Steve Smith
In reply to this post by Nick Thompson
Nick, Glen, Marcus, et al -

The stew is getting nicely rich here.   While I wanted to dismiss Owen's original question regarding isomorphisms between computing (language/concepts/models?) and philosophy as naive, I know it isn't entirely so, and the somewhat parallel conversation that started with circular reasoning has brought this out nicely (IMO).

This particular subthread on referential opacity is a nice example.  I would now try to frame (not dismiss or belittle) Owen's question with this particular exchange (Nick, Glen, Marcus) in mind.  In the world of natural language speaking, we expect that all things are ultimately translatable and that anyone from any language/culture can learn the language/culture of any other with enough time and diligence.   Aptitude, age and other circumstances make this harder or easier, but in principle, we tend to believe that there is really only one natural human language and learning the various examples (after the first) is doable.

On the other hand, we also have experiences that suggest just the opposite.  For example, I used to think that (American) English *must* be the simplest/most concise language, because when you read the instructions for a consumer product in several languages, somehow the English always seems to be the shortest or have the most white-space.   I'm pretty sure now that in these cases it is because the original instructions were in English and the others were translations, which ended up being more wordy than the original because that is the nature of (simple) translation.   A single word, chosen for a specific reason in one language, often needs a phrase or at least a modifier to make the translation more like the original.   While it is possible that one of the languages being translated INTO has a more precise word that could replace a phrase, or a noun with an adjective, it is unlikely that a simple translation is going to hit upon it.

So it is between computer science/engineering and Philosophy.   The words, while sometimes the same, have subtly but importantly different meanings.   In this case, Referential Opacity.    I have coded on projects involving evidence theory where we in fact modeled evidentiary processes and circumstances such as Nick's Jones, Unicorns, Squirrels and Central Park, and I can attest that this ideal of referential opacity is NOT the same as in code.   Ideas with elaborate semantics (Unicorns, Horses, Horns, Magic, Locations) can be *referenced* by simple names ("Unicorns" ...) in a similar manner to the natural language discussion between Nick and Jones, and while I believe the invention of OO programming was intended to align the act of programming more closely with natural language, it only does so, *at best*, for the lexicons and dictionaries we design a set of class libraries for and build programs with.   It doesn't improve the alignment between the language *of* philosophy (or psychology or physics) and the language of programming.  If anything, it sharpens the distinction.

If you look at a library, in particular a specialized library like a Law Library, you notice it grows and grows and grows.   One might say the law started with the golden rule... but somehow that wasn't enough, so somebody had to go meet Jehovah on a mountain behind a burning bush to receive 10 commandments... which were nearly as self-evident as the Golden Rule, but somehow spelled things out "just a little better".   The US Declaration of Independence starts right in with "We hold these truths to be self-evident", and for the most part US Citizens for a couple hundred years (and I presume scholars from other nations and cultures) have read it over and over and nodded when they read those words.   There may be a few alternate world views where they shake their head and grimace at what we call "self-evident"...  but I think in general this large document is roughly as self-evident as 10 commandments or a golden rule (or pick your own culture's equivalent)...   but we are compelled to split these hairs, to elaborate (said the man whose e-mails here are way too long) on most anything.

Computer science *adopted* or *inherited* its terms from mathematics and logic, which share their own with Philosophy, just as English inherited many words from Latin and Greek and Gaelic, handed down through intermediate languages... and sometimes the words are dead-on the same between languages and other times they are anything but.    Referential Opacity in Computer Programming means something very precise (if context dependent), as Glen so eloquently described just now.  It has a vague resemblance to what Nick means, due to its heritage, but to demand (or even wish for) the two to become an isomorphism lames one or both domains.

One of our biggest limitations in this culture (American, European, Western), in my opinion, is that we were mostly raised up and trained up under the metaphors of factories, cities, and a zero-sum (scarcity vs abundance) economy.   If we were to (try to) constrain Philosophy, for example, to fit within the (much more constrained and specialized) metaphors of Computer Science, it would at *best* be as bad as raising and educating our children on a factory model: they may all be churned out with some degree of regularity and functionality, but they are also taught (by their circumstance) that they are interchangeable, replaceable parts.  Later in life they end up having to sign up with a union (the Brotherhood of Gears and Levers) to keep from being abused (and then simply replaced if they go out of tolerance), a union which is *also* built on the same metaphors.

So, no, Nick... a programmer is not questioning whether she can know what a function (or object) means, or to what degree of confidence or accuracy you can believe what it reports when you read out the value of a variable it has returned to you (Object Jones, return the state of your "Most Interesting Thing I Saw Today" variable).  A programmer is saying... "can I know anything more than what Jones tells me?"   In a very loose sense, you can draw a parallel...  Jones may know he did not see a unicorn, but it is in his programming to always make up something fanciful (squirrels or horses or babies in prams are unicorns in his strange or obtuse lexicon) when asked.   As with Glen's concurrency example, between the time you ask Jones what was the most interesting thing he saw in Central Park and the time he answers honestly ("I saw a Unicorn!"), he may in fact see something more interesting, or recognize that there was no horn, and his *internal* state would shift from "Unicorn" to "Horse".  In programming, it is as likely as not that Object Jones would not have been designed to recognize the import of the question and quickly volunteer, "wait... hold the phone... I just realized it was not a Unicorn, it was a SQUIRREL!"  There are programming models that *do* attempt this kind of stuff, but that is because the standard models tend not to do this obviously or easily.   User Interface and parallel, distributed models (e.g. federated models) *do* have mechanisms for this... but well... a whole 'nother story.

So, in summary, I feel like I'm among Russian and Polynesian speakers (well, maybe just French and Spanish, or Dutch and English) arguing over the meaning of words that sound the same in each language.  At best the two might be able to suss out the etymology and roots of the same words in a mother tongue that they share or were informed by... but it would be silly to go back and forth arguing that one is *more right* than the other.   I know Nick is being genuinely curious, and I think Owen is being (stubbornly) idealistic... but the translations here are going to be on the order of translating between (or learning) a foreign language, and one that might not have more than a passing relationship to the other (via mathematical logic).


- Steve

In my (leetle) world, referential opacity refers to ambiguities that arise in intentional utterances ... utterances of the form, "Jones believes (wants, thinks, hopes, etc.) that X is the case."  They are opaque in that they tell us nothing about the truth of X.  So, for instance, "Jones believes that there are unicorns in central park" tells us neither that such a thing as a horse with a horn in its forehead exists (because Jones may confuse unicorns with squirrels) nor that there are any "unicorns" in central park, whatever Jones may conceive them to be (because Jones may be misinformed). 

 

What does the computer community think "referential opacity" means?  Are there statements in computer code that take the form, "from the point of view of circuit A, switch S has value V"?  And do we then have to worry that somewhere, later in the program, some other circuit, circuit B, will encounter switch S and take it to have the value V? 
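As a hedged sketch of exactly that worry (switch_S, circuit_A and circuit_B are hypothetical names, not anyone's real code): when two readers share one mutable variable, whether B sees the value A saw depends entirely on whether anything wrote to it in between.

    switch_S = {"value": "V"}          # one shared switch, visible to everyone

    def circuit_A():
        return switch_S["value"]       # A's "point of view" is just the current value

    def circuit_B():
        return switch_S["value"]       # and so is B's

    saw_A = circuit_A()                # 'V'
    switch_S["value"] = "W"            # something else flips the switch in between
    saw_B = circuit_B()                # 'W' -- B does *not* take it to have the value V
    print(saw_A, saw_B)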

 

Nick

 

-----Original Message-----
From: Friam [[hidden email]] On Behalf Of glen
Sent: Wednesday, April 17, 2013 10:52 AM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] Tautologies and other forms of circular reasoning.

 

Marcus G. Daniels wrote at 04/16/2013 07:55 PM:

> A more important issue is whether a model has referential

> transparency. Are all the possible ways an object can change or reveal

> state made evident, or are they hidden away in obscure ways due to

> implementation issues?

>

> [...] The issue is whether a modeler is prepared to put all of the

> degrees of freedom on the table and find and remove those that are not

> essential, or imagine that 1 piece on each of 100 tables is somehow

> different from the same 100 pieces on 1 table.

 

Yes, exactly.  The conversation Nick started regarding tautologies is fundamentally about separating [non-]essential, or in the extreme case, no-ops.  I (think I intellectually, if not behaviorally) share your preference for functional computation because it helps force me to be more rigorous in my intent.  I'm as lazy as they come, though, and when given too many bells and whistles, my product tends to be sloppy.  But I tend to also argue that, sometimes, depending on the requirements set out by the task, the sloppiness is not bad but merely a trivial side-effect.
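A small sketch of the "separating out the no-ops" point (the function names are made up for illustration): a repeated call is a genuine no-op only when the expression is referentially transparent; once there is a hidden side effect, the sloppiness stops being trivially removable.

    def area(r):                  # referentially transparent: same r, same answer
        return 3.14159 * r * r

    log = []

    def area_logged(r):           # the same arithmetic, plus a hidden side effect
        log.append(r)
        return 3.14159 * r * r

    a = area(2.0)
    a = area(2.0)                 # duplicate call: a true no-op, safe to delete

    b = area_logged(2.0)
    b = area_logged(2.0)          # *not* a no-op: deleting it would change `log`
    print(a, b, log)              # the two log entries betray the side effect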

 

But this might be where we're talking about different things, below...

 

> Maybe we aren't talking about the same thing.  I'm not sure what you

> mean by "size" above.  I think you might mean that "All eventualities

> must be covered by top-down analysis."   I think you might mean that not

> having to make types fit together means there are more ways to entertain

> the parts and pieces.

 

Sorry, I was being obtuse.  I meant it in the sense of set measures, or perhaps counting the members of a state space. In general, when we look around us at the world, we tend to focus, to slice off a subset.  Then we go about justifying that the focal subset is "smaller" than the ambience from which we sliced it.  There seem to be two ways to do that: by measuring the size of sets vs. iteratively, i.e. showing how various subsets can be composed (unioned, accumulated) to construct various sets.
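A toy illustration of those two routes, with arbitrarily chosen sets: compare the measures directly, or show that the ambience can be composed from the focal slice plus what is left over.

    ambience = set(range(10))          # the "ambient" set we sliced from
    focus = {2, 3, 5, 7}               # the focal slice
    rest = ambience - focus

    # 1) by measure: compare the sizes of the sets
    print(len(focus) < len(ambience))      # True

    # 2) iteratively: show the ambience is composed (unioned) from subsets
    print(focus | rest == ambience)        # True, so the slice is a proper part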

 

It's not entirely clear to me where "type" fits (at least not the specific sense of "type" we use in programming).  But it seems to be synonymous with the predicate that defines the set.  "Type" seems like a state-oriented conception, whereas "predicate" seems like a process-oriented conception.  We talk about things being "of a type".

But we talk about "satisfying a predicate".  I could easily be wrong in my intuition, there.
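One way to see the state-oriented vs. process-oriented intuition in code (is_even is an invented example): a type is something a value simply *is*, a predicate is something you *run*, and the set they carve out can be the same.

    def is_even(n: int) -> bool:      # process-oriented: a predicate you evaluate
        return n % 2 == 0

    x = 4
    print(isinstance(x, int))         # state-oriented: "of a type"
    print(is_even(x))                 # process-oriented: "satisfies a predicate"

    # The predicate *defines* a set, much as a type does:
    evens_up_to_10 = {n for n in range(11) if is_even(n)}
    print(evens_up_to_10)             # {0, 2, 4, 6, 8, 10}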

 

>   If so, I don't see it that way.   If there are

> paths a computation can take which will result in failure, it's better

> to know sooner than later about them.  If certain state configurations

> require logic, generics, or big union types, to do nothing but

> something benign -- until the appropriate treatment is identified --

> being confronted with those configurations as classes (at compile

> time) is better than hitting the edge cases one by one at runtime.
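A hedged Python sketch of the point quoted above (the Ok/Err/Pending variants are invented purely for illustration): naming all of the possible state configurations as one union up front means the forgotten case shows up in a single place, rather than being discovered one edge case at a time at runtime; in a statically checked setting, or under a checker such as mypy, the omission can be flagged before the program ever runs.

    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Ok:
        value: float

    @dataclass
    class Err:
        reason: str

    @dataclass
    class Pending:                     # the configuration it is tempting to forget
        pass

    Result = Union[Ok, Err, Pending]   # every configuration named up front

    def handle(r: Result) -> str:
        if isinstance(r, Ok):
            return f"got {r.value}"
        if isinstance(r, Err):
            return f"failed: {r.reason}"
        # Confronted with the class we forgot, rather than silently doing something benign:
        raise TypeError(f"unhandled configuration: {r!r}")

    print(handle(Ok(3.0)), handle(Err("timeout")))
    # handle(Pending()) raises immediately, making the missing branch obvious.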

 

Well, to go back to my defense of my sloppiness.  Sometimes the sloppiness is not bad or merely ignorable.  Sometimes, it's crucial to re-use (or, more appropriately, [mis|ab]use).  This is the concept I was trying to get at earlier when I misspoke and claimed that iteration is more open-ended than recursion.  It's not, since they're duals.  But iteration, being state-oriented rather than process-oriented, seems more amenable to sloppiness.  When we finite-minded, hyper-focusing pattern recognizers wander around in the ambience, trying to "do stuff", we face a kind of action threshold, a hurdle we have to get over in order to get anything done.

 

When we try to be as rigorous as possible and put all our DoF on the table, so to speak, that raises the threshold and makes action more difficult.  Granted, it also might make the eventual action more effective or powerful, but it does make it more difficult.

 

Given the variety of types of people out there, we end up with a nice spread of people, those who would prefer to "just do it" versus those who feel they should think long and hard before they do anything.  My speculation is that it's easier for the sloppy people to "grab onto"

whatever they slice out of the ambience if they use a state-oriented world view.  It seems very difficult to be a purely Taoistic floating process, continuously, sloppily transforming/filtering things from birth till death.

 

--

=><= glen e. p. ropella

This body of mine, man I don't wanna turn android

 

 



============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com

Re: Isomorphism between computation and philosophy

glen ropella

Well said, Steve!  Mostly, what's kept me from commenting on the
"isomorphism" thread is ... well, the word "isomorphism". [grin]

I spend _all_ my time... seriously ... arguing against the "Grand
Unified Model" (GUM).  For some reason, everyone seems so certain,
convicted, that there exists the One True Truth (and they usually think
Cthulhu whispers in their ear about it).  Even those of us who admit that
it may not exist, claim it's a Worthy Goal and we should all toe the line.

I do not believe there exists a single isomorphism between computing and
philosophy.  If _any_ exist at all, there are many. [*] And if I believe
that, then I have to consider the efficacy of my spending time figuring
out a single isomorphism.  Yes, to show that one exists would be
interesting.  But all it would achieve is continual and annoying
[mis]citation of that one demonstration, giving ammo to the GUM crowd.

Not only is that not in my ideological best interests, it's not even in
my practical best interests.  It would be a result analogous to Goedel's
Incompleteness Theorems, where everyone from postmodern Eddington
typewriters to serious people would jump in and muddy the waters.
Practically, all I want to do is find ways to get my work done and
finding/demonstrating a single isomorphism won't help me do that ...

UNLESS we could demonstrate there are _multiple_ isomorphisms.  Or
better yet, draw up a rough characterization of the distribution of all
morphisms, including multiple iso-s.

In the interests of problem solving, perhaps we could break down the
task and, rather than searching for an isomorphism, we could just lay
out one example morphism in some practical detail?  I think we could
mine the IACAP crowd for examples: http://www.iacap.org/  I had a lot of
fun at the one meeting of theirs I managed to attend.

[*] I'll leave the parentheticals alone and avoid trying to explain how
there can be multiple isomorphisms between any 2 particular things. ;-)

Steve Smith wrote at 04/17/2013 12:18 PM:
> The stew is getting nicely rich here.   While I wanted to ignore Owen's
> original question regarding isomorphisms between computing
> (language/concepts/models?) and philosophy as being naive, I know it
> isn't totally and the somewhat parallel conversation that has been
> continuing that started with circular reasoning has brought this out
> nicely (IMO).



--
=><= glen e. p. ropella
And I know I ain't digging on your lies


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com

Re: Isomorphism between computation and philosophy

Owen Densmore
Administrator
Er, of course there are many, right?  With two finite sets of size N there are N! 1-1, onto unique mappings, I believe.
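That count checks out for bijections; a quick verification in Python (the particular sets are arbitrary): the number of 1-1, onto mappings between two 3-element sets is 3! = 6.

    from itertools import permutations

    A, B = [0, 1, 2], ["ooga", "booga", "slooga"]
    bijections = [dict(zip(A, p)) for p in permutations(B)]
    print(len(bijections))   # 6 == 3!, and N! in general for two size-N sets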

But relax.  I went off the deep end with examples of things like decidability.

All I'm curious about is whether or not it is possible to somehow make philosophy, or simply intellectual conversation a bit more concrete.  Wouldn't you think computation and algorithms could express at least an interesting subset of intellectual discourse?

I remember being driven to watching Michael Sandel's great "What Is The Right Thing To Do" Harvard Justice lectures by Nick's vocabulary and style.  I found it a thrilling series and am glad it's now part of a MOOC.  I'll probably watch more of a similar nature.  Exciting!

Unfortunately, some of the philosophic conversations I hear are poorly motivated and lack MS's great skill at driving people towards wanting understanding.

   -- Owen




============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com

Re: Isomorphism between computation and philosophy

glen ropella
Owen Densmore wrote at 04/17/2013 01:53 PM:
> Er,, of course there are many, right?  With two finite sets of size N there
> are N! 1-1, onto unique mappings, I believe.

Heh, there are way more than that!  What I meant was that there can exist
more than one morphism that results in the same snapshot of the mapping.  E.g.

{0, 1, 2} -> {ooga, booga, slooga} via

   0 -> ooga
   1 -> booga
   2 -> slooga

But there can be any number of meanings inside the "->".  All that's
being represented by the morphism is that one goes to the other.  The
"going" is opaque, c.f. the other part of our conversation.  (I think
it's funny that we use this word "morphism" so often without remembering
the "to morph" part of it.)

> All I'm curious about is whether or not it is possible to somehow make
> philosophy, or simply intellectual conversation a bit more concrete.

Hm. I'm actually on Nick's side of that discussion.  Philosophy is
_more_ concrete than computing.  Even when it's abstract, it relies on
the thoughts and actions of people (or animals or inanimate objects).
Computing is, like mathematics, more symbolic.

Perhaps the word you're looking for is _definite_?

>  Wouldn't you think computation and algorithms could express at least an
> interesting subset of intellectual discourse?

Not really.  Like I was trying to address in the other thread on
iteration vs. recursion, discourse (including intellectual) is messy,
which is whence it derives its usefulness.  The same can be said of
things like jury trials.  The interestingness doesn't lie in the
abstract "law" as defined for the average (or median or whatever) human.
 The interestingness lies in the special cases.  Although much
philosophy pretends that it's trying to find some normative basis for
thought, what I see, mostly, is humans trying to be human ... aka messy.

> Unfortunately, some of the philosophic conversations I hear are poorly
> motivated and lack MS's great skill at driving people towards wanting
> understanding.

Sturgeon's quote comes to mind: Ninety percent of science fiction is
crud, but that's because ninety percent of everything is crud.

--
=><= glen e. p. ropella
In this world where I am king


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com

Re: Tautologies and other forms of circular reasoning.

Nick Thompson
In reply to this post by Steve Smith

Thanks, Steve.  It made clear to me some of the terms of this argument that I was having a lot of trouble grasping. 

 

I do want to make sure we are straight on one point.  In my example, opacity refers to the INABILITY to make inferences on the basis of intentional utterances.  I don’t know that people use the term referential transparency, but if they do, it refers to the ability to make inferences in “extensional” utterances.  If we take as true Jones's belief that there are unicorns in Central Park, then we can infer that there are horses with horns in their foreheads in Central Park.  Truth is preserved in substitution.  The point is that science works with these sorts of utterances, where truth is preserved in the substitution of terms.  In fact, science often works through the substitution of ever more precise terms to describe a phenomenon.   We start out with “the elderly lady in Bodung Province in China was killed by the new flu virus” and when we culture her virus and discover that it was H4N3, or whatever, we can substitute with truth and issue a press release saying that “the elderly lady in Bodung Province in China was killed by the H4N3 virus.”  This is referential transparency.  The same substitution cannot be made in the sentence, “The Premier of China knows that the elderly lady’s death was caused by the H4N3 virus.” 
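A toy rendering of that substitution test (the facts and the coreference table below are fabricated for illustration): swapping co-referring terms preserves truth in the extensional report, but the same swap is not licensed inside the "knows" context.

    coreferent = {"the new flu virus": "H4N3"}       # two terms, one referent

    def substitute(sentence: tuple) -> tuple:
        # Replace each term by its more precise co-referring term, if there is one.
        return tuple(coreferent.get(term, term) for term in sentence)

    # Extensional context: what actually killed her.  Substituting the precise term
    # yields a sentence reporting the very same fact (the truth-preserving press release).
    killed_by = ("the elderly lady", "was killed by", "the new flu virus")
    print(substitute(killed_by))   # ('the elderly lady', 'was killed by', 'H4N3')

    # Intensional context: the sentences the Premier would assent to, under the
    # descriptions he actually has in mind.  Substitution inside "knows" is not licensed.
    premier_knows = {("the elderly lady", "was killed by", "the new flu virus")}
    print(killed_by in premier_knows)               # True, under his own description
    print(substitute(killed_by) in premier_knows)   # False: no inference follows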

 

In short, referential opacity BAD; referential transparency GOOD.   Are we on the same page here, or are the values flipped in compsci? 

 

Nick   

 

From: Friam [mailto:[hidden email]] On Behalf Of Steve Smith
Sent: Wednesday, April 17, 2013 1:18 PM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] Tautologies and other forms of circular reasoning.

 

Nick, Glen, Marcus, et al -

The stew is getting nicely rich here.   While I wanted to ignore Owen's original question regarding isomorphisms between computing (language/concepts/models?) and philosophy as being naive, I know it isn't totally and the somewhat parallel conversation that has been continuing that started with circular reasoning has brought this out nicely (IMO).

This particular subthread on referential opacity is a nice example.  I would now try to frame (not dismiss or belittle) Owen's question with this particular exchange (Nick, Glen, Marcus) in mind.  In the world of natural language speaking, we expect that all things are ultimately translatable and that anyone from any language/culture can learn the language/culture of any other with enough time and diligence.   Aptitude, age and other circumstances make this harder or easier, but in principle, we tend to believe that there is really only one natural human language and learning the various examples (after the first) is doable.

On the other hand, we also have experiences that suggest just the opposite.  For example, I used to think that (American) English *must* be the simplest/most-concise language, because when you read instructions in a consumer product in several languages, somehow the English always seems to be the shortest or have the most white-space.   I'm pretty sure now that, in these cases, it is because the original instructions were in English and the others were translations, which ended up being more wordy than the original because that is the nature of (simple) translation.   A single word, chosen for a specific reason in one language, often needs a phrase or at least a modifier to make it more like the original one.   While it is possible that one of the languages being translated INTO has a more precise word that could replace a phrase or a noun with an adjective, it is unlikely that a simple translation is going to hit upon it. 

Thus so between computer science/engineering and Philosophy.   The words, while sometimes the same, have subtly but importantly different meanings.   In this case, Referential Opacity.    I have coded on projects involving evidence theory where we in fact modeled evidentiary processes and circumstances such as Nick's Jones, Unicorns, Squirrels and Central Park, and I can attest that this ideal of referential opacity is NOT the same as in code.   Ideas with elaborate semantics (Unicorns, Horses, Horns, Magic, Locations) can be *referenced* by simple names ("Unicorns" ...) in a similar manner to the natural language discussion between Nick and Jones, and while I believe the invention of OO programming was intended to align the act of programming more closely with natural language, it only does so, *at best*, for the lexicons and dictionaries we design a set of class libraries for and build programs with.   It doesn't improve the alignment between the language *of* philosophy (or psychology or physics) and the language of programming.  If anything, it sharpens the distinction.

- Steve

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com

Re: Isomorphism between computation and philosophy

Nick Thompson
In reply to this post by glen ropella
Glen,

I liked this, particularly its Bayesian conclusion (!?), but I won't be able
to comment thoughtfully on it for several hours.

Thanks, though,

N



============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com

Re: Tautologies and other forms of circular reasoning.

glen ropella
In reply to this post by Nick Thompson
Nicholas Thompson wrote at 04/17/2013 02:22 PM:
> In short, referential opacity BAD; referential transparency GOOD.   Are we
> on the same page here, or are the values flipped in compsci.  

Depends on who "we" is.  In some contexts, it's bad.  In some, it's good.

--
=><= glen e. p. ropella
Neolithic fear is such a motivating factory.


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com

Re: Tautologies and other forms of circular reasoning.

Nick Thompson
It's an example of referential opacity.  

-----Original Message-----
From: Friam [mailto:[hidden email]] On Behalf Of glen
Sent: Wednesday, April 17, 2013 3:33 PM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] Tautologies and other forms of circular reasoning.

Nicholas Thompson wrote at 04/17/2013 02:22 PM:
> In short, referential opacity BAD; referential transparency GOOD.   Are we
> on the same page here, or are the values flipped in compsci.  

Depends on who "we" is.  In some contexts, it's bad.  In some, it's good.

--
=><= glen e. p. ropella
Neolithic fear is such a motivating factory.




============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com

Re: Tautologies and other forms of circular reasoning.

glen ropella
Nicholas Thompson wrote at 04/17/2013 02:51 PM:
> It's an example of referential opacity.  

No, I think it's an example of a response to an incomplete question, or
trying to evaluate a function without binding all the variables.  If you
leave out too much context, the answer is always "it depends."  Mu!
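A throwaway illustration of a "function without all its variables bound" (the names are invented): until the remaining argument is supplied, the only honest answer the expression can give is, in effect, "it depends".

    from functools import partial

    def good_or_bad(feature, context):
        return "good" if feature in context else "bad"

    verdict_on_opacity = partial(good_or_bad, "opacity")   # context still unbound

    # verdict_on_opacity()  -> TypeError: the question is incomplete ("Mu!")
    print(verdict_on_opacity(context={"opacity", "encapsulation"}))   # good
    print(verdict_on_opacity(context={"transparency", "purity"}))     # bad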

--
=><= glen e. p. ropella
Holding hands with Lucifer is never that enlightening.


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com

Re: Tautologies and other forms of circular reasoning.

Steve Smith
In reply to this post by Nick Thompson
Nick -

Thanks, steve.  It made clear to me some of the terms of this argument that I was having a lot of trouble grasping. 

I'm glad that helps... I often worry that my own idiosyncratic perspective, or my expression of it, is better at muddying than clarifying.

I do want to make sure we are straight on one point.  In my example, opacity refers to the INABILITY to make inferences on the basis of intentional utterances.

In your case, it means that the circumstances don't support it, but in computer science, it means that the mechanisms *deliberately and unequivocally* disallow it.

  I don’t know that people use the term referential transparency, but if they do it refers to the ability to make inferences in “extentional” utterances.  If we take as true Jones belief that there are unicorns in Central Park, then we can infer that there are horses with horns in the foreheads in Central Park.  Truth is preserved in substitution.  The point is that science works with these sorts of utterances where truth is preserved in the substitution of terms.  In fact, science often works through the substitution of ever more precise terms to describe a phenomenon.   We start out with “the elderly lady in Bodung Province in China was killed by the new flu virus” and when we culture her virus and discover that it was H4N3, or whatever, we can substitute with truth and issue a press release saying that “the elderly lady in Bodung Province in China was killed by the H4N3 virus.”  This is referential transparency.  The same substitution cannot be made in the sentence, “The Premier of China knows that the elderly lady’s death was caused by the h4N3 virus.” 

 

In short, referential opacity BAD; referential transparency GOOD.   Are we on the same page here, or are the values flipped in compsci. 

Again, they are roughly related but far from isomorphic... as their *context* is not the same.  I think Glen is better at articulating these things than I am...  though I'm not sure about that either.

In compsci (computer language/compiler theory?) referential opacity and transparency (if the latter means anything more than the negation of the former?) are about mechanism and utility (avoiding/allowing side-effects, avoiding hidden state, modularity, etc.).   In general, I think the conclusion is roughly the *opposite*: opacity Good, transparency Bad... though as Glen reminds us, the answer is naturally "it depends".
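For concreteness, a minimal sketch of what the compsci usage turns on (both functions are illustrative, not from any particular library): a referentially transparent expression can be replaced by its value anywhere without changing the program, while one that consults hidden mutable state cannot; which of those you *want* is the "it depends" part.

    def double(x):                  # referentially transparent: double(3) can be
        return 2 * x                # replaced by 6 anywhere -- memoized, reordered, cached

    counter = {"n": 0}

    def next_ticket():              # referentially opaque: the value depends on hidden
        counter["n"] += 1           # state, so two calls are not interchangeable
        return counter["n"]

    print(double(3), double(3))            # 6 6  -- substitution is safe
    print(next_ticket(), next_ticket())    # 1 2  -- substitution would change the meaning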

- Steve

 



============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com