Steve writes:
“To the extent that the only and precise goal is to efficiently, unambiguously, and accurately serialize the contents of one's mind and transmit it to another mind which de-serializes with the goal of synchronizing the internal states of Bob's mind to that of Alice's, perhaps what you say is spot on.”
The _From Other Tongues_ sketch is good. Both what is heard and what is said could be modeled as a closure over some subjective representation. Most computer programs have one representation (a single or separable-module architecture, not many competing points of view), and closures, if used at all, are over some (often small) subset of it. Agent-based models, in contrast, usually have many representations, so there is the possibility of well-defined types (and closures that use those types) for the clauses in the blue and orange captions. The squiggles suggest that the types are not yet shared among the agents. I’m not sure I agree about the value of the interpolations and extrapolations of ontologies. It sounds too much like “agree to disagree”. Progress, I think, requires aggressively creating and destroying types, constantly, by negotiation and empirical validation. Many “interpretations” just put off getting to the bottom of things. Keep the interpretations around long enough to get parallax on a better interpretation, then press Delete.
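To make the closure point concrete, here is a minimal sketch (all names and the mapping structure are my own hypothetical choices, not anything from the sketch itself): each agent's “say” and “hear” are closures over that agent's private representation, and an utterance built from a type the hearer does not share comes through uninterpreted, like the squiggles.

```python
def make_agent(representation):
    """Return say/hear closures over this agent's private representation."""
    def say(concept):
        # Serialize a concept into the agent's own vocabulary; fall back
        # to the raw concept when the agent has no private term for it.
        return representation.get(concept, concept)

    def hear(utterance):
        # De-serialize by inverting the agent's own mapping; an utterance
        # outside the mapping stays uninterpreted (a "squiggle").
        inverse = {v: k for k, v in representation.items()}
        return inverse.get(utterance, utterance)

    return say, hear

# Two agents with partially overlapping, partially divergent representations.
alice_say, alice_hear = make_agent({"dog": "hound", "home": "hearth"})
bob_say, bob_hear = make_agent({"dog": "hound", "home": "haus"})

print(bob_hear(alice_say("dog")))   # shared type: round-trips to "dog"
print(bob_hear(alice_say("home")))  # unshared type: stays "hearth", a squiggle to Bob
```

Negotiation, in this toy picture, would be the agents mutating their mappings until round-trips succeed; the “press Delete” step is dropping an entry that empirical validation keeps failing.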
Marcus