I have created a second agent-based model for my book. At least now people cannot say there is no "meat" in it :-) It is a very simple model in Python where deceptions between agents lead to a decrease in trust. The agents interact with each other, and each time an agent lies to or betrays its opponent, it loses a bit of trust. Trust is simply defined as the fraction of honest interactions, i.e. one minus the number of deceptions divided by the number of interactions. In addition we can model that (see the sketch after this list)
* punishment of sinners and imprisonment of criminals increase trust
* retaliation and lack of forgiveness decrease trust
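Here is a rough, self-contained sketch of the idea (not the notebook's actual code; the punish_prob/retaliate knobs and the numeric factors are illustrative assumptions of mine, not the model's real parameters):

import random

class Agent:
    def __init__(self, deceive_prob):
        self.deceive_prob = deceive_prob   # chance the agent lies in an interaction
        self.interactions = 0
        self.deceptions = 0

    @property
    def trust(self):
        # trust = fraction of honest interactions
        if self.interactions == 0:
            return 1.0
        return 1.0 - self.deceptions / self.interactions

def interact(a, b, punish_prob=0.0, retaliate=False):
    # one pairwise encounter; both agents act and the counters are updated
    for agent, other in ((a, b), (b, a)):
        agent.interactions += 1
        if random.random() < agent.deceive_prob:
            agent.deceptions += 1
            if random.random() < punish_prob:
                # punishing the sinner deters future deceptions, so trust recovers
                agent.deceive_prob *= 0.5
            elif retaliate:
                # an unforgiving victim lies back, dragging trust further down
                other.deceive_prob = min(1.0, other.deceive_prob * 1.5)

random.seed(42)
agents = [Agent(deceive_prob=random.uniform(0.0, 0.4)) for _ in range(50)]
for _ in range(2000):
    interact(*random.sample(agents, 2), punish_prob=0.3)
print("mean trust:", sum(a.trust for a in agents) / len(agents))

In this sketch, punish_prob > 0 makes mean trust drift back up over time, while retaliate=True (with no punishment) makes it collapse, which is exactly the effect the two bullets describe.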
Results can be found in this Jupyter notebook (I am exporting the Matplotlib images to SVG and then converting them to WMF to include them in the book):
https://github.com/JochenFromm/SwarmIntelligence/blob/master/notebooks/Trust.ipynb
As inspiration I have used the two classic books by Robert Axelrod ("The Evolution of Cooperation" and "The Complexity of Cooperation"). There is a nice Python repository for his models too:
https://github.com/Axelrod-Python/Axelrod
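For example, a match between two of the classic strategies looks like this (based on the library's documented Match API; details may vary by version):

import axelrod as axl

players = (axl.TitForTat(), axl.Defector())   # two classic tournament strategies
match = axl.Match(players, turns=6)
print(match.play())          # the per-turn (C, D) moves of both players
print(match.final_score())   # cumulative payoff for each player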
-J.