Recently, there was some discussion of linear logic that sent me down a hole
where I found this lecture:

https://www.youtube.com/watch?v=IW4LjjAWrO4&ab_channel=DanielMurfet

I am really enjoying it. What is the derivative of a Turing machine?

I haven't had time to think about any of this or read the paper, but here are the notes I took during the video, simply to express that I am (lazily) interested.
• The idea of an infinitesimal binary delta *toward* another binary number reminded me of Conway's surreals. So rather than propagating uncertainty, could we propagate "lefts" and "rights" (or "ups" and "downs")? (A rough sketch of this follows below.)

• Keeping track of how many times you "use" a hypothesis is the functional equivalent of consuming data.

• Introduction and elimination allow for degenerate loops. If all hypotheses are "promoted", is Dereliction moot? I.e. how do we talk about infinite streams as input data?

How that relates to the idea of collections of locally coned machines churning out reality will have to wait for more relaxed times.

On 10/12/20 8:03 PM, jon zingale wrote:
> Recently, there was some discussion of linear logic that sent me down a hole
> where I found this lecture:
> https://www.youtube.com/watch?v=IW4LjjAWrO4&ab_channel=DanielMurfet
>
> I am really enjoying it. What is the derivative of a Turing machine?

--
↙↙↙ uǝlƃ
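To make the first bullet a bit more concrete, here is a rough Haskell sketch of propagating a surreal-style direction of approach instead of an uncertainty. Everything in it (the Nudged type, the monotone/antitone split) is my own illustrative naming, not anything taken from the lecture or the paper:

module Nudge where

-- The direction of an infinitesimal step, in the spirit of Conway's
-- {L | R} construction: we record only which side of the current value
-- the perturbation approaches from, not how far it moves.
data Move = L | R deriving (Show, Eq)

-- A value together with the moves that point "toward" a neighbouring
-- value, the analogue of carrying a derivative alongside a result.
data Nudged a = Nudged { value :: a, toward :: [Move] }
  deriving Show

-- Reversing orientation swaps left and right.
flipMove :: Move -> Move
flipMove L = R
flipMove R = L

-- Pushing a nudged value through a monotone function preserves the
-- direction of approach; an antitone (order-reversing) one flips it.
mapMonotone :: (a -> b) -> Nudged a -> Nudged b
mapMonotone f (Nudged x ms) = Nudged (f x) ms

mapAntitone :: (a -> b) -> Nudged a -> Nudged b
mapAntitone f (Nudged x ms) = Nudged (f x) (map flipMove ms)

-- Example: 5 nudged one step to the right (toward 6), pushed through
-- negation, comes out as -5 nudged one step to the left (toward -6).
example :: Nudged Int
example = mapAntitone negate (Nudged 5 [R])

The only point of the sketch is that a derivative-like annotation need not be a magnitude: it can be a record of which side a perturbation comes from, which is the surreal flavour of the bullet above.
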
A very brief search recovered this likely connection between surreal numbers
and linear logic via combinatorial games and compact closed categories, respectively:

http://pages.cpsc.ucalgary.ca/~gscruttw/publications/CGC.pdf

To contribute to your bullet points, I also wonder about an interpretation of other intensional/complexity domains like algorithmic stability. In particular, is there a nice extension of the theory to account for robustness[♠] à la David Ackley? For example, where the lambda calculus is traditionally ill-equipped to distinguish the value of Qsort over Bubblesort in time, it is equally ill-equipped to distinguish the value of Bubblesort over Qsort in robustness (a toy illustration of this follows below).

[♠] https://www.cs.unm.edu/~ackley/be-201301131528.pdf
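To pin down that last point, here is a toy Haskell illustration; the particular implementations and names are mine, chosen only to make the contrast visible. Any purely extensional semantics, the plain lambda calculus included, identifies the two programs as the same function, even though they differ in running time and, plausibly, in Ackley-style robustness:

module TwoSorts where

import Data.List (partition)

-- O(n^2) comparisons, but each pass depends only on local, adjacent
-- comparisons.
bubbleSort :: Ord a => [a] -> [a]
bubbleSort xs = foldr (const pass) xs xs  -- one bubbling pass per element
  where
    pass (a:b:rest)
      | a > b     = b : pass (a : rest)
      | otherwise = a : pass (b : rest)
    pass short = short

-- O(n log n) on average, but sensitive to pivot choice and adversarial
-- input.
quickSort :: Ord a => [a] -> [a]
quickSort []     = []
quickSort (p:xs) = quickSort smaller ++ [p] ++ quickSort larger
  where (smaller, larger) = partition (< p) xs

-- Extensionally the two agree: for every finite list of ordered values,
-- bubbleSort gives the same result as quickSort. The differences that
-- matter here, running time and behaviour under perturbation, live in
-- the intension, which the usual lambda-calculus semantics throws away.
agree :: [Int] -> Bool
agree xs = bubbleSort xs == quickSort xs

Whatever a robustness-aware extension of the theory looks like, it would have to keep track of exactly the intensional structure that this extensional equality erases.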