Is FRIAM meeting this morning?
On Wednesday, January 2, 2019, 3:55:03 PM MST, <[hidden email]> wrote:
Today's Topics:

 1. Re: Was: Abduction; Is Now: Dionysian and Apollonian Lives (uǝlƃ ☣)
 2. Re: Statistical poser (aka fact checking is hard) (Steven A Smith)
 3. Re: Abduction (Nick Thompson)
 4. Re: Abduction (uǝlƃ ☣)
 5. Re: Statistical poser (aka fact checking is hard) (Marcus Daniels)
 6. Re: Was: Abduction; Is Now: Dionysian and Apollonian Lives ([hidden email])
 7. Re: Was: Abduction; Is Now: Dionysian and Apollonian Lives (Nick Thompson)
 8. Re: Was: Abduction; Is Now: Dionysian and Apollonian Lives (uǝlƃ ☣)
 9. Learning curves (was, Abduction) ([hidden email])
10. Re: Learning curves (was, Abduction) (Marcus Daniels)
11. Re: Abduction (Prof David West)

Since one of my dead horses is artificial discretization, I've always wondered what it's like to work in many-valued logics. So proof by contradiction would change from [not-true => false] to [not-0 => {1,2,...,n}], assuming a discretized set of values {0..n}. But is there a continuous "many valued" logic, where any proposition can be evaluated to take on some sub-region of a continuous set? So proof by contradiction would become something like [not∈{-∞,0} => ∈{0+ε,∞}]?

On 1/2/19 11:23 AM, Frank Wimberly wrote:
> p.s. Dropping the law of the excluded middle required giving up proof by contradiction.

-- ☣ uǝlƃ
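A minimal sketch (mine, not part of the thread) of the discretized case described above, assuming truth values {0..n} with 0 playing the role of "false": refuting 0 only tells you a proposition lands somewhere in {1,...,n}, not which flavor of not-false it takes.

# Sketch only: glen's [not-0 => {1,...,n}] with n = 4; the labels are nominal,
# and nothing below depends on their order or arithmetic.
N = 4
VALUES = set(range(N + 1))      # truth values {0, 1, 2, 3, 4}
FALSE = {0}                     # the lone "false" value

def refute_falsity(possible_values):
    """Rule out 0; all that remains known is membership in {1, ..., n}."""
    return possible_values - FALSE

p = VALUES                      # a proposition about which nothing is known yet
print(refute_falsity(p))        # -> {1, 2, 3, 4}: a set of values, not a verdict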
"neurodiverse" rather than "mentally ill". Your definitions here
respond more to my idea of "sociopathy". I don't think of
sociopaths as being mentally ill, just not good members of the
society they find themselves in. Most *L*ibertarians I know seem
to be on the verge of sociopathy as a matter of honor. There has been a move afoot to recognize the selection value of neurodiversity in a group and to de-stigmatize or de-pathologize what was previously considered dis-ease or dys-function. https://www.nytimes.com/2015/08/23/books/review/neurotribes-by-steve-silberman.html On 1/2/19 12:33 PM, Marcus Daniels
wrote:
Well, right, Steve. Is it fair to say that, to some extent, you have "cultivated" dreaming?
I guess that's all I mean to say. I decided not to dream much.
By the way, may I unfairly take you to task about one thing you said? And I quote:

> rational/linear modes of thinking/being

There is nothing linear about rational thought. It is intensely hierarchical. It is its hierarchical nature, not its linearity, that leads it astray. Because one is working in one compartment, one misses things that would be obvious to people working in a less compartmentalized way. This reminds me of the misuse of the "learning curve" metaphor. People speak of a steep learning curve as something to be feared. In fact, people who learn quickly have a steep learning curve.
Your friendly metaphor police at your service,
Nick
Nicholas S. Thompson
Emeritus Professor of Psychology and Biology
Clark University
http://home.earthlink.net/~nickthompson/naturaldesigns/
-----Original Message-----
I have spent my life cultivating hypnopompic and hypnogogic states... this, which supports lucid dreaming, is my best way to access mystical states... mindfulness meditation, as I practice it, can lapse into these states if I allow it.
I was put off by the drug culture of my peers in the 60's/70's for many reasons; one might have been a strong steeping in rational/linear modes of thinking/being, in spite of an early discovery of and indulgence in lucid dreaming.
I know many who identify as "evening" or "morning" people, but there is evidence that before the industrial revolution brought ubiquitous artificial light (city gas or kerosene lamps, then electric lights, now flickering TV/computer/phone screens), "segmented sleep" was the standard. It was common (almost ubiquitous?) for people to go to sleep soon after dark and then wake in the middle of the night for an hour or two of wakefulness, referred to as "dorveille" in French-speaking cultures, or "wake-sleep": a somewhat hypnotic state (perhaps a slow slide from hypnopompia to hypnagogia and back again?).
Hot climates/cultures have an alternative "segmented sleep" wherein the heat of the day is reserved for a "siesta" with both evening and early morning reserved for taking care of business when it is cooler. I think of a siesta as being somewhat lighter and more lucid-dream conducive than "night sleep".
- Steve
On 1/2/19 10:07 AM, Marcus Daniels wrote:
> There's also this thing one can do called `sleeping in', which tends to increase the probability of dream memory and/or lucid dreaming, at least for me. A built-in neuroplasticity mechanism complete with psychedelic phenomena and a safety mechanism of motor system deactivation. (
>
> On 1/2/19, 10:03 AM, "Friam on behalf of Nick Thompson" <[hidden email]> wrote:
> > For instance, I have never dreamed about what mushrooms might do for me. Is that a fair statement of a difference between us?
I claim the answer to your 2 questions is yes. As Marcus (with the usage classes) and Steve (with behavioral "drugs") point out, the reason people engage in such things is to make their lives *better* (according to some definition of "better"). To think anything else is to risk the madness of morons like Nancy Reagan or those who think alcoholics suffer from a moral failing, rather than a physiochemical one. You want your insulin pump to make your life better than it would be without it. Simple. Rational. As Dave pointed out, though, we have some very promising therapeutic agents that we've ignored because we've been hoodwinked by the moral proselytizing of anti-science nutbags who think like Scientologists -- Clear Body, Clear Mind and all that.

On 1/2/19 11:33 AM, Nick Thompson wrote:
> So is THAT the spirit in which people take psilocybin? Is that the spirit in which people welcome the legalization of LSD? I fear I may have wronged them horribly. To be so far from a moderately happy life as to want to derange one's entire experience for even only a few hours seems like a terrible thing to me. I regard sanity as an achievement, not a state of affairs into which life naturally folds. I would no more take LSD than crumple up a piece of paper before I put it in the printer.

-- ☣ uǝlƃ

Depression, bipolar disorder, and OCD are examples of the kind of mental illnesses I had in mind. They make life hard for those who have them. More downsides than upsides. As for sociopathy, for most people, just being too damned irritating will eventually create a cost for them too. Others become the president, at least for a while.
I'm not sure what you're buying with your move to "continuous" rather than (merely) "infinite-valued". I mean, though your discretized values {0..n} are integers, they are (in my small experience of many-valued logics, which does not include any actually *working* with them as logics) merely nominal labels -- the order, and the arithmetic for that matter, are irrelevant semantically: the flavors 1, 2, 3 of not-true aren't such that 2 is more not-true than 1 but less not-true than 3, and certainly aren't such that 2 is exactly half-way from 1 to 3 in not-trueness.

And, from another point of view, contrary to most people's "intuition" (as formed by what turns out to be bad pedagogy, not anything in the foundation of either physics or mathematics), "continuity" doesn't require infinitude. Way back in the early 1960s a couple of mathematicians independently (Bob Stong was one of them; I forget the other) noticed that all the algebraic topology that can be done with (finite) "simplicial complexes" (e.g., polyhedra) in Euclidean space (so, in particular, all the algebraic topology of compact differentiable manifolds) can be faithfully rephrased in terms of *finite* topological spaces (I mean literally finite: only finitely many points, where in particular a one-element set does not have to be closed), if you don't insist that the topology be Hausdorff (but do impose one very weak "separation property" which I'm currently blanking on). Much more recently, a pair of Argentinians, J. Barmak & E. Minian, have published a series of papers (all available at the arXiv) extending and clarifying that. Logics with *that* kind of a continuum of values have, I think, already been done (the finite topological spaces in question can be reinterpreted as finite posets / finite lattices / etc., and at least "lattice-valued logics" has a familiar sound to me; but, again, I'm blanking on any details).

> Since one of my dead horses is artificial discretization, I've always wondered what it's like to work in many-valued logics. So proof by contradiction would change from [not-true => false] to [not-0 => {1,2,...,n}], assuming a discretized set of values {0..n}. But is there a continuous "many valued" logic, where any proposition can be evaluated to take on some sub-region of a continuous set? So proof by contradiction would become something like [not∈{-∞,0} => ∈{0+ε,∞}]?
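For concreteness, here is a toy version (mine, not Lee's) of the "lattice-valued" idea he gestures at, using the four-element diamond lattice {0, a, b, 1} with two incomparable intermediate values; reading conjunction as lattice meet and disjunction as join is an assumption of the sketch, not something claimed in the post.

# Toy lattice-valued logic on the diamond lattice 0 <= a, b <= 1, with a and b
# incomparable -- so, as Lee says, the intermediate values are not "more" or
# "less" not-true than one another.
ORDER = {            # x <= y  iff  y in ORDER[x]
    "0": {"0", "a", "b", "1"},
    "a": {"a", "1"},
    "b": {"b", "1"},
    "1": {"1"},
}

def leq(x, y):
    return y in ORDER[x]

def meet(x, y):      # greatest lower bound, read as AND
    lower = [z for z in ORDER if leq(z, x) and leq(z, y)]
    return next(z for z in lower if all(leq(w, z) for w in lower))

def join(x, y):      # least upper bound, read as OR
    upper = [z for z in ORDER if leq(x, z) and leq(y, z)]
    return next(z for z in upper if all(leq(z, w) for w in upper))

print(meet("a", "b"))   # -> 0: two incomparable partial truths conjoin to false
print(join("a", "b"))   # -> 1: yet they disjoin to true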
Lee, I think you got your threads seriously tangled.

N

Nicholas S. Thompson
Emeritus Professor of Psychology and Biology
Clark University
http://home.earthlink.net/~nickthompson/naturaldesigns/

Blame Frank! 8^) Or blame yourself for artificially discretizing humans into Dionysian vs. Apollonian.

Thanks, Lee. I doubt I have the ability to parse the Barmak and Minian work. But I appreciate your skepticism. My intention was to vaguely hand-wave at something about closed and open topologies and, perhaps, imply something about analytical balls of radius epsilon as the truth that's preserved by deduction. I still think there's something that could be said about the rational numbers as possible truth values, as opposed to a dense infinity. But like my worry that all directed cyclic graphs can be reduced to DAGs, you've made me just as worried about the necessity of dense sets.

On 1/2/19 1:18 PM, Nick Thompson wrote:
> Lee, I think you got your threads seriously tangled.

-- ☣ uǝlƃ

Nick wrote, in relevant part:
> This reminds me of the misuse of the "learning curve" metaphor. People speak of a steep learning curve as something to be feared. In fact, people who learn quickly have a steep learning curve.
Behold, complete with ASCII art (so be ready to view this in a monospaced font, or forever hold your peace), an ancient USENET post of mine to alt.usage.english, from 1995 (!):

===begin===
Robert L Rosenberg ([hidden email]):
>: A learning curve should be the graph of a non-decreasing function (time
>: on the horizontal axis, knowledge of the topic on the vertical axis). A
>: fast learner would have a generally steeper learning curve than a slow
>: learner. At least that's the way I've always pictured it.

[hidden email] (Keith Ivey) writes:
>I agree that this makes sense, but it doesn't seem to correspond with
>the way the phrase is used. In my experience, something that is hard
>to learn is said to have a steep learning curve.

Rosenberg's explanation not only makes sense, it accords with the original use by rat-runners and other operant conditioners (cf., e.g., _Psychology_ by James D. Laird and Nicholas S. Thompson, p. 164: "The ... steeper the curve, the faster the animal is learning"). More precisely, *during an interval of time where the curve is steep, the animal is learning quickly*.

The present use is muddled; as Ivey points out, "something that is hard to learn is said to have a steep learning curve." Here's how I unmuddle it (but I don't know what, if anything, is going on in the heads of most people who use the phrase): by the Mean Value Theorem, or common intuition, if a (smooth) nondecreasing function f(t) with f(0)=0 and f(1)=1 is "steep" (has large derivative) somewhere, then it MUST be "flat" (have small derivative) somewhere else. Typical learning curves (I gather from the illustrations in Laird and Thompson) look either like Figure A or like Figure B:

[Figure A: the curve rises steeply at first, then plateaus. Figure B: the curve stays nearly flat, then rises steeply near the end.]

In the first case, you learn almost everything in a short period of time near the beginning of the training, then reach a plateau and learn the rest very slowly. In the second case, you learn very slowly for a long time, then take off near the end of the training. So the question is reduced to another one: which of Figures A and B is a "steep" curve to the average speaker?

Lee Rudolph
===end===
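A one-line gloss (mine, not part of Lee's post) on the trade-off invoked above, for a smooth nondecreasing f with f(0)=0 and f(1)=1:

\[
  \int_0^1 f'(t)\,dt \;=\; f(1) - f(0) \;=\; 1 ,
\]

so the average slope over the whole interval is exactly 1; by the Mean Value Theorem there is some c with f'(c) = 1, and any stretch where f' is much larger than 1 must be balanced by a stretch where it is much smaller.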
Figure B is how R&D works, and Figure A describes a good student.
Well, did you pay your tithes last month????
It is really kind of silly to think that one can either characterize oneself, or be characterized by others, as Dionysian or Apollonian. The concept has become so mucked up since Nietzsche used the notions to define tragedy (a folly of his youth). The absurd overlay / infusion of Islamo-Judeo-Christo morality delivered a death blow to the whole idea.
In Greek philosophy one's behavior (and thoughts, if you want to allow such) was grounded in a complex blend of the two traits; consequently everyone was "ambiguous" with regard to them. The intolerance of ambiguity among the People of the Book and most of Western culture keeps trying to push for a two-valued logic, which is not useful.
If you want to use the terms as metaphors, Apollonian vs. Dionysian could correspond to 1) cortex vs. amygdala; or 2) left-brain vs. right-brain. Everyone knows that any behavior is simultaneously grounded in both elements, but to an observer, including an internal one, any given behavior might seem to be predominantly influenced by one or the other.
My claim to "Apollonian" is grounded in a long ago commitment to following the precepts of Jinyana (Jnana) Yoga. first in Vedic literature, the Buddhism and Taoism — Ch'an Buddhism --> Zen. I strive to make all of my behavior deliberate and intentional within a meta-rational and meta-logical context, utilizing the cortex / left-brain as a filter. If you ever read Korzibski, there echos in my head of his "cortico-thalamic pause."
Feel free to reduce the preceding mumbo-jumbo to: its all behavior, and each behavior is grounded in the complexity of the whole organism.
davew
On Wed, Jan 2, 2019, at 11:18 AM, Nick Thompson wrote: