In my mind, the distinction between an assertion failing subjectively and failing objectively is important. An assertion could fail for sound reasons in a subjective way, yet not be transparent. A Trump voter who wants to cause harm to Washington might have some private theory of how the harm would unfold and why it would be a Good Thing. Alternatively, they could just be acting in some vague emotional way, based on feelings of alienation or humiliation or fear. In contrast, an assertion could fail outside of the monad, among a set of types shared by many agents. And by virtue of being instances of shared types, the utterances are, at some level, all mutually consistent. I am skeptical that a point of view can be turned into an artifact and shared in all cases. It's a best-effort thing even among willing participants, and many participants (maybe all) will not be able to accurately reflect on themselves.
From: Friam <[hidden email]> on behalf of Marcus Daniels <[hidden email]>
Reply-To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Date: Wednesday, January 9, 2019 at 2:10 PM
To: The Friday Morning Applied Complexity Coffee Group <[hidden email]>
Subject: Re: [FRIAM] Motives - Was Abduction
Nick writes:
< One solution I am exploring is trying to make every assertion that something is real into a three valued assertion including point of view. >
Confounding variables, like your example with Simpson's Paradox. In functional programming, the life history of said person's evolving point of view might live in a monad (a big object). Every assertion could bind inside the monad and access private information. Sometimes the assertions would fail, but they would fail in a subjective way.
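To make that concrete, here is a rough sketch in Python (not real Haskell monads) of what I have in mind. The `Subjective` class, the `assert_that` helper, and the two agents' private histories are all hypothetical names invented for illustration: each agent carries private state, assertions bind inside that context, and a failure is relative to the agent's own point of view rather than visible in any shared type.

```python
class Subjective:
    """A minimal Reader/Maybe-like monad over an agent's private state."""
    def __init__(self, run):
        self.run = run  # run: private_state -> value, or None on failure

    @staticmethod
    def unit(value):
        # Lift a plain value into the monad, ignoring the private state.
        return Subjective(lambda _state: value)

    def bind(self, f):
        # Sequence a computation; a None (failed assertion) propagates.
        def run(state):
            value = self.run(state)
            return None if value is None else f(value).run(state)
        return Subjective(run)

def assert_that(predicate):
    # An assertion that reads the private state; it fails subjectively,
    # i.e. only relative to the agent whose history it runs against.
    return Subjective(lambda state: True if predicate(state) else None)

# Hypothetical private histories for two agents.
alice = {"trusts_institutions": True}
bob = {"trusts_institutions": False}

claim = assert_that(lambda s: s["trusts_institutions"]).bind(
    lambda _ok: Subjective.unit("the policy is legitimate"))

print(claim.run(alice))  # -> "the policy is legitimate"
print(claim.run(bob))    # -> None: the same assertion fails, but
                         #    only inside Bob's monad
```

The same `claim` object succeeds or fails depending on which private history it is run against, which is the sense in which the failure is subjective rather than a property of the shared types.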
Marcus