I ran across this paper when I typed the subject into Google:
Animal rights, animal minds, and human mindreading
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2563326/

I thought I'd troll with it, here, since we've had so many discussions of monism and behaviorism. The question came up in this:

Sam Harris & Jordan Peterson - Vancouver - 1
https://www.youtube.com/watch?v=jey_CzIOfYE

I don't know where the question came up in their discussion. But it's clearly relevant for evolutionary psychology. If we could prove that non-human animals don't psychologize, then many of Peterson's arguments might hold some water. (Especially in light of what they're calling "metaphorical truth" ... e.g. "cargo cults".) Personally, it seems to me the idea that they *don't* psychologize is preposterous. Even without assuming a fine-grained spectrum between humans and our nearest non-human relatives, it seems reasonable that our "mind reading" is simply a more reflective (deeper) algorithm for the prediction of the behavior of others (or ourselves in counterfactual situations).

--
☣ uǝlƃ

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
uǝʃƃ ⊥ glen
How did they forget to invite a nihilist to that Harris/Peterson panel?
A nihilist might observe that a multi-cellular organism can have billions of states, and the space of interactions between billions of different organisms is exponentially larger still. There's no reason to think that the evolution that led humans to this point has tested all possible ways for groups to form and dissolve, or even sparsely sampled the possibilities. To Peterson, that God is the wisdom of humankind (and mostly men, it seems) is just confusing the samples that have been seen so far (and captured in some stupid volumes) with the samples that could be made if we were all Free to be You and Me. But the samples cannot even be taken in a socially conservative regime, because the regime prevents it.

I have no idea what Harris is talking about with things being obviously good or bad. First-world problems can be pretty horrific w.r.t. addiction, suicide, and inequality. Sit outside at a McDonald's in most any city for a half hour or so and you'll eventually notice someone fishing out garbage for their dinner.

So there,

Marcus

On 9/13/18, 4:04 PM, "Friam on behalf of uǝlƃ ☣" <[hidden email] on behalf of [hidden email]> wrote:
> I ran across this paper when I typed the subject into Google: Animal rights, animal minds, and human mindreading https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2563326/ [...]
In reply to this post by gepr
Hi, glen,

I have written a LOT about this: three papers stand out.

The first argues that the notion of ejective anthropomorphism -- the idea that we enter the minds of others (including other animals) through direct knowledge of our own minds -- is absurd, requiring four premises, all of which we know to be false. (Please, could a few people who are NOT members of RG try the link; I don't trust ResearchGate. You don't actually have to read the article. But please let me know whether RG actually delivers it to you as they promise they will, or whether they put you through some sort of registration hell.)

The second article argues that our emotions, being relations between our behavior and our circumstances, are inherently public events. The dog can see them as clearly as he sees the bowl of kibble in your hands.

The third article argues that the capacity to access the motivations and emotions of others reaches deep into our evolutionary history, being an essential tool in the repertoire of many predators (wolves, lions, etc.).

Glen has already read and commented on this stuff, a kindness for which I am eternally grateful. If you are interested in this topic, I would love to hear from more of you.

By the way, FWIW, I will be back in Santa Fe, ready to meet with the Home Church Congregation, at the first meeting in October. Have your homilies ready.

Nick

Nicholas S. Thompson
Emeritus Professor of Psychology and Biology
Clark University
http://home.earthlink.net/~nickthompson/naturaldesigns/

-----Original Message-----
> I ran across this paper when I typed the subject into Google: Animal rights, animal minds, and human mindreading https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2563326/ [...]
In reply to this post by Marcus G. Daniels
I've always wondered why Peterson equates moral relativism with nihilism. The two seem fundamentally different to me. I'm re-reading this:
What is complexity?
https://www.ncbi.nlm.nih.gov/pubmed/12447974

wherein Chris makes the (kindasorta obvious) point that meaning is always relative to some reference structure. So... it makes sense that nihilism might be the admission that all reference structures are arbitrary or even fictitious. And moral relativism is the admission that morals only make sense when/if the relation and the referent exist and are identified. Nihilism, then, is a discrete thing. You're either a nihilist or you're not. Moral relativism seems to span the gamut *between* nihilism and realism. A moral relativist may well think that some reference structures are just stupid, others are questionable, and others are fairly reliable. So, is his (false) equivalence just more of Peterson's disingenuousness, in an attempt to pose as the "wise father figure"? Or have I missed some bit of reasoning that shows they're equivalent?

As for Harris' argument, he's relying on a common trope amongst people like him (including Pinker's recent book, Shermer's standard presentation, etc.). As horrific as the local 7-Eleven parking lot might be, it doesn't compare to what's happening in places like Yemen or Syria. Subjective horror shouldn't be discounted, of course. But those who claim to be realists will probably argue that objective horror is worse. A dualist could easily argue that what we should be optimizing are *qualia*, which may be unbound from their binding context, yet remain meaningful and real. So these "objective" numbers cited by Pinker would be largely irrelevant. Horror is invariant across the landscape. I know people who truly believe their lives are horrible, despite having their own car, a place to live, a big-screen TV, beautiful children, etc. [sigh]

On 09/13/2018 05:45 PM, Marcus Daniels wrote:
> How did they forget to invite a nihilist to that Harris/Peterson panel? [...]
uǝʃƃ ⊥ glen
Glen writes:

< As for Harris' argument, he's relying on a common trope amongst people like him (including Pinker's recent book, Shermer's standard presentation, etc.). As horrific as the local 7-Eleven parking lot might be, it doesn't compare to what's happening in places like Yemen or Syria. >

Maybe. I think you could make the case that ISIS terrorists are terrorists because it has given them something to believe in and something to do with their lives. It is only with the application of a prevalent value system that we equate terrorists with badness. Many junkies outside 7-Elevens are lost souls and will have abbreviated lives. They are unable to thrive. In contrast, a military commander in Hamas living in the Gaza Strip may have miserable conditions to cope with, but they are respected by a group of people and aren't depressed. This was sort of Ted Kaczynski's point: that technology raises the bar to the point that many people can't function any more. Other examples are the stories of (U.S.) soldiers who live in terrible conditions but bond tightly with their peers, people they might never be close to in civilian life. Objectively they are in danger every day, but psychologically they crave the bond and the engagement in the fight.

Both moral relativists and full-on nihilists see that threads of subjective reality can and sometimes should be independent. I would argue that is useful on average at a universal level, because it expands understanding rather than being prescriptive. Peterson's own arguments about how men rise to greatness in organizations admit that things can take care of themselves.
Marcus
In reply to this post by Nick Thompson
I'm not sure I've read all 3. I will, though. I *tried* to read this:
Animal Sentience: The other-minds problem
https://animalstudiesrepository.org/animsent/vol1/iss1/1/

And I've read some of Harnad's other work and came away impressed. But his characterization of the "hard problem" seems fundamentally *off* to me. But who knows. He's smarter than I am.

In any case, I found this response interesting:

Nonhuman mind-reading ability
https://animalstudiesrepository.org/animsent/vol1/iss1/2/

On 09/13/2018 06:34 PM, Nick Thompson wrote:
> Glen has already read and commented on this stuff, a kindness for which I am eternally grateful. If you are interested in this topic, I would love to hear from more of you.
uǝʃƃ ⊥ glen
In reply to this post by Marcus G. Daniels
I make a similar argument about gun control. Most of my friends are advocates of stronger regulation. They *think* I'm also an advocate of such. And, objectively, I am, because I sometimes parrot a subset of their arguments. E.g. I argue that there are multiple types of cause (perhaps 4: formal, final, efficient, material). And, yes, the gun nuts hinge their arguments on efficient cause, which is fine. But it's irresponsible to ignore the material cause: guns.

But those who know me know I'm an inherent supporter of weapon freedom. Anyone ought to be able to own (and use) pretty much any weapon they want. I don't vote that way, though. And most of my acquaintances don't know that about me. My 2-faced position depends fundamentally on my belief that human life just isn't that important. I think, say, cougars[†] and bacteria have just as much right to life as humans. And, to some extent, humans are destroying the ecosystem. So, it's difficult for me to keep a straight face and claim that human life is somehow sacred. (It's even easier now that I have cancer.) So, yeah, more guns = more dead people. Personally, that's OK with me. Politically, however, it's a reality, and if we all *understand* that more guns means more dead people ... and we don't want more dead people, then the only rational thing is to more strictly regulate (or eliminate) guns.
[†] https://www.oregonlive.com/pacific-northwest-news/index.ssf/2018/09/hunt_for_killer_cougar_in_oreg.html

The Peterson/Harris argument is mostly about dogma. But if we munge the words/concepts a bit, we could just as easily make it about schema, where some of the variables are bound and others are free. I think if we did that, it would be trivial to admit that this weaker form of dogma (arrived at by bio- or cultural evolution) does not disallow the rationalist the freedom to update the schema whenever some multi-objective optimization algorithm suggests it needs updating. I think the problem with the 2nd video (their 2nd night of discussion) was that they just danced around our tendency to dichotomize *everything*, always. It's just another example of how artificial discretization prevents people who agree on 90% of everything from codifying where they disagree.

On 09/14/2018 10:03 AM, Marcus Daniels wrote:
> I think you could make the case that ISIS terrorists are terrorists because it has given them something to believe in and something to do with their lives. [...]
uǝʃƃ ⊥ glen
In reply to this post by gepr
Glen,
That's extraordinarily kind of you, Glen. I would look forward to any responses you had. For most academic writers ... except the Dennetts and the Pinkers and the Dawkinses ... writing is like dropping gold pieces down an infinitely deep wishing well. You never even hear them hit the bottom, let alone get your wishes.

I share your misgivings about Harnad. To his credit, Harnad published a lot of my commentaries while disagreeing vociferously with my perspective whenever I approached him directly. He is a dyed-in-the-wool Cartesian who has never doubted for a minute that all experience begins with knowledge of one's own mind. There are mountains of evidence to suggest that, on the contrary, experience of one's own mind is derived from experience of the world, and is as much, or more, of an inference than our inference of the minds of others. So the entire project laid out in his abstract is, to me, patently wrong-headed.

Thanks, again, Glen.

Nick

Nicholas S. Thompson
Emeritus Professor of Psychology and Biology
Clark University
http://home.earthlink.net/~nickthompson/naturaldesigns/

-----Original Message-----
From: Friam [mailto:[hidden email]] On Behalf Of uǝlƃ ☣
Sent: Friday, September 14, 2018 1:29 PM
To: FriAM <[hidden email]>
Subject: Re: [FRIAM] do animals psychologize?

> I'm not sure I've read all 3. I will, though. I *tried* to read this: Animal Sentience: The other-minds problem https://animalstudiesrepository.org/animsent/vol1/iss1/1/ [...]
In reply to this post by gepr
< But it's irresponsible to ignore the material cause: guns. But those who know me, know I'm an inherent supporter of weapon freedom. Anyone ought to be able to own (and use) pretty much any weapon they want. >
Without getting into the usual arguments of that discussion, my 2-faced position (not really) comes with the recent printable-gun controversy. If someone wants to share a file, who is to say they can't do it? And sooner or later there will be the possibility of programmable fabricated nano-machines or organisms. The gun thing is just the tip of the iceberg. The possibility of genocide-scale violence initiated by individuals or small groups will come with it. What then?

Marcus
"What then?" Well, my answer is we'll either be aware of our own ability to destroy the world[†] ... or we won't. Anyone who is not already aware of our ability to destroy the world is handicapped. So, those people are in sore need of an educational transformation. So what if such an education kills those of us who already know it? I'm willing to die so that the ignorant among us will learn their own powers.
[†] As usual, "world" is ambiguous.

On 09/14/2018 01:30 PM, Marcus Daniels wrote:
> < But it's irresponsible to ignore the material cause: guns. But those who know me, know I'm an inherent supporter of weapon freedom. Anyone ought to be able to own (and use) pretty much any weapon they want. >
>
> Without getting into the usual arguments of that discussion, my 2-faced position (not really) comes with the recent printable gun controversy. [...]
uǝʃƃ ⊥ glen
Good science fiction would develop characters like David Morse's character in 12 Monkeys as protagonists.
On 9/14/18, 4:16 PM, "Friam on behalf of uǝlƃ ☣" <[hidden email] on behalf of [hidden email]> wrote:
> "What then?" Well, my answer is we'll either be aware of our own ability to destroy the world ... or we won't. [...]
Ha! When I was in college, I distracted myself from my failings at electrical engineering ... and my failure to grok topology by reading the University of Chicago's Ethics: https://www.journals.uchicago.edu/toc/et/current. One issue had an essay on "Killing versus Letting Die". I remember reading it while walking to "Electronic Properties of Materials", a 1st world privilege if there ever was one. I forget the issue and the author ... and even the content. But I can say I think stealing and releasing a deadly virus is somehow different from letting people kill themselves with their own stupidity. >8^D
On 09/14/2018 03:38 PM, Marcus Daniels wrote:
> Good science fiction would develop characters like David Morse's character in 12 Monkeys as protagonists.
uǝʃƃ ⊥ glen
Wouldn't a social person aim to mitigate the stupidity, along the lines of the resistance within the White House?
Or is there some reason it is best to bore on ahead with the stupidity? Because it is undemocratic, or just because it is a bother?

On 9/14/18, 4:53 PM, "Friam on behalf of uǝlƃ ☣" <[hidden email] on behalf of [hidden email]> wrote:
> Ha! When I was in college, I distracted myself from my failings at electrical engineering ... and my failure to grok topology by reading the University of Chicago's Ethics: https://www.journals.uchicago.edu/toc/et/current. [...]
Well, sure. If you intend to intervene, you have to choose the type of intervention. So, the "Killing versus Letting Die" dilemma is a special form of "intervene or don't". But if you choose to intervene, you have to also choose how to intervene. One might choose to kill by positively reinforcing extant bad habits ... i.e. open a gun store or invest in Smith & Wesson. That's an intervention of a sort. Another intervention might be to encourage one's mutual fund to disinvest in S&W or donate to a gun control PAC. But none of these deny the more fundamental choice of whether to intervene or not.
A *social* person may well choose to exacerbate it, thinking that it'll weed out the people with property XYZ (violent, ignorant, poor, whatever), while "civilized" people will inherently avoid killing each other with guns. Of course, such a person is probably ignorant of the full connectedness of the gene-space (which generates the phenomenon of gun violence). And if we allow for kin selection of any sort, then it's plausible that such sociality is the problem, not the cure. ... So, "no", it's not a slam dunk to assume that a social person would aim to mitigate the stupidity, at least at various scopes.

On 09/14/2018 04:05 PM, Marcus Daniels wrote:
> Wouldn't a social person aim to mitigate the stupidity, along the lines of the resistance within the White House?
> Or is there some reason it is best to bore on ahead with the stupidity? Because it is undemocratic, or just because it is a bother?
uǝʃƃ ⊥ glen
I was thinking more of the 12 Monkeys example than the current phenomenon of gun violence. If any dogmatic group can kill us all by downloading a nanotech kit, shouldn't either 1) they be educated, isolated, or eliminated with haste, or 2) there be strong controls on distributing some kinds of information? It seems to me #2 is unacceptable, but the more likely outcome.
On 9/14/18, 5:20 PM, "Friam on behalf of uǝlƃ ☣" <[hidden email] on behalf of [hidden email]> wrote:
> Well, sure. If you intend to intervene, you have to choose the type of intervention. So, the "Killing versus Letting Die" dilemma is a special form of "intervene or don't". [...]
Hm. As usual, it depends on what you want to have happen, I suppose. Educating a zealot who wants to kill everyone will only make them more capable of killing everyone. If your desire is to avoid killing everyone, then the dogmatic group needs to be isolated or eliminated. But my guess is that your (1) and (2) are never disjoint. The isolation/elimination of the zealots is achieved, in part, through strong controls on the distribution of some kinds of info. We do this already, in almost every arena ... even to the extent of putting good scientific content behind paywalls and/or restrictions on exporting "munitions" like encryption algorithms.

The choice is still one of intervene or don't. I, perhaps sadly, can't shake my libertarianism, which tells me to avoid intervention where possible. There is no worse crime against the world than over-intervention.

On 09/14/2018 04:36 PM, Marcus Daniels wrote:
> I was thinking more of the 12 Monkeys example than of the current phenomenon of gun violence. If any dogmatic group can kill us all by downloading a nanotech kit, shouldn't either 1) they be educated, isolated, or eliminated with haste, or 2) there be strong controls on distributing some kinds of information? It seems to me #2 is unacceptable, but the more likely outcome.

--
☣ uǝlƃ
Out of curiosity, does the over-intervention concern apply only to government behavior? One could imagine the same technology trends empowering many groups and individuals.
Evidently we already know how to create organisms from a file:

https://www.ted.com/talks/dan_gibson_how_to_build_synthetic_dna_and_send_it_across_the_internet/transcript?language=en

They did a flu virus vaccine - but it looks like you could just as easily print up some anthrax. (I don't know if this is so; it seems like it from what I have read, but I am not a biologist.) Scariest part: it appears that the "printer" is Internet accessible. But, of course, it is "secured." And again, of course, what one can build, so can another.

If you want some nice science fiction focused on potentially dire (even existential) consequences of existing tech (hacking/games, drones, CRISPR), try Daniel Suarez' Daemon and Freedom; Change Agent; and Kill Decision.

davew

On Fri, Sep 14, 2018, at 2:30 PM, Marcus Daniels wrote:
> < But it's irresponsible to ignore the material cause: guns. But those who know me know I'm an inherent supporter of weapon freedom. Anyone ought to be able to own (and use) pretty much any weapon they want. >
>
> Without getting into the usual arguments of that discussion, my 2-faced position (not really) comes with the recent printable gun controversy. If someone wants to share a file, who is to say they can't do it? And sooner or later there will be the possibility of programmable fabricated nano-machines or organisms. The gun thing is just the tip of the iceberg. The possibility of genocide-scale violence initiated by individuals or small groups will come with it. What then?
>
> Marcus
Polio was synthesized 16 years ago. Today, there are even compiler toolchains to do it, and initiatives to improve the speed and reliability of synthesis techniques.

Suppose one could, through data mining of full genome sequences, find a genomic signature for a predisposition to some objectionable thing. Then one could conditionally activate gene edits:

    if (signature_found(genome, FEATURE)) {
        genome = perform_edit(genome, downregulate_progesterone);
    }

FEATURE would vary by interest group, religion, and political party, e.g. "likely to have same sex attractions" or "prone to tribalism". It would all look like an accident. I think it is hard to see how something like this
won’t happen.

Marcus
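[The conditional-edit pseudocode above can be turned into a minimal, runnable sketch. Everything here is invented for illustration - `signature_found`, `perform_edit`, and the motif strings are hypothetical stand-ins, not any real genomics API; a genome is modeled as a plain string.]

```python
# Minimal sketch of the conditional gene-edit idea: only genomes carrying a
# (hypothetical) signature motif get edited, so the change would correlate
# with the targeted trait rather than announce itself as an intervention.

def signature_found(genome: str, feature_motif: str) -> bool:
    """Return True if the hypothetical signature motif occurs in the genome."""
    return feature_motif in genome

def perform_edit(genome: str, site: str, replacement: str) -> str:
    """Replace the first occurrence of `site` with `replacement`."""
    return genome.replace(site, replacement, 1)

def conditional_edit(genome: str, feature_motif: str,
                     site: str, replacement: str) -> str:
    """Apply the edit only when the signature is present."""
    if signature_found(genome, feature_motif):
        return perform_edit(genome, site, replacement)
    return genome

if __name__ == "__main__":
    genome = "ATGCCGTTAGGA"
    print(conditional_edit(genome, "CCGT", "TTAG", "TTGG"))  # ATGCCGTTGGGA
    print(conditional_edit("AAAATTTT", "CCGT", "TTAG", "TTGG"))  # AAAATTTT
```

The second call shows the point of the conditional: a genome without the signature passes through untouched, which is what would make the aggregate effect "look like an accident."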
On my Tuscany vacation this year I read, among other books, Michael White's biography of Leonardo da Vinci. He writes (on p. 130) that Leonardo was a vegetarian 500 years before such a lifestyle became common, and explains his reason: "He believed that anything capable of movement was also capable of pain and came to the conclusion that he would therefore eat only plants because they did not move." Remarkable for a man 500 years ago, isn't it?

-Jochen

Sent from my Tricorder

-------- Original message --------
From: uǝlƃ ☣ <[hidden email]>
Date: 9/14/18 00:03 (GMT+01:00)
To: FriAM <[hidden email]>
Subject: [FRIAM] do animals psychologize?