Posted by gepr on Aug 03, 2020; 7:50pm
URL: http://friam.383.s1.nabble.com/actual-vs-potential-tp7598086p7598088.html
To be a little clearer on my hand-wringing, here is a section where Bringsjord et al argue that belief in the Singularity is not rational:
> A
> (P1) There will be AI (created by HI).
> (P2) If there is AI, there will be AI+ (created by AI).
> (P3) If there is AI+, there will be AI++ (created by AI+).
> ∴ There will be AI++ (= S will occur).
> [...]
> Our certainty in
> the lack of certainty here can be established by showing, formally, that the denial
> of (P1) is consistent, since if not-(P1) is consistent, it follows that (P1) doesn’t
> follow from any of the axioms of classical logic and mathematics (for example,
> from a standard axiomatic set theory, such as ZF). How then do we show that not-
> (P1) is consistent? We derive it from a set of premises which are themselves
> consistent. To do this, suppose that human persons are information-processing
> machines more powerful than standard Turing machines, for instance the infinite-
> time Turing machines specified and explored by Hamkins and Lewis (2000), that
> AI (as referred to in A) is based on standard Turing-level information processing,
> and that the process of creating the artificial intelligent machines is itself at the
> level of Turing-computable functions. Under these jointly consistent mathematical
> suppositions, it can be easily proved that AI can never reach the level of human
> persons (and motivated readers with a modicum of understanding of the
> mathematics of computer science are encouraged to carry out the proof). So, we know
> that (P1) isn’t certain.
Note the "for instance" of the ∞ time Turing machines, which itself seems to refer to a stable output in the long run that is taken as a non-halting output ... maybe kindasorta like the decimal format of 1/7 ... or Nick's conception of reality 8^D.
I keep thinking, with no decision in sight so far, that Wolpert and Benford's attempt to resolve Roko's Basilisk is related, that there's some underlying set-up that makes the whole controversy dissolve. You'll note the higher-order nature of AI+ and AI++. And if there are some higher-order operators that simply don't operate over potential infinities, what are they? And can we simply define our way out of it, as in defining 1/0 ≡ 0?
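And on the "define our way out" option, that's literally what some of the provers do. In Lean, for example, Nat division is total and 1/0 is just defined to be 0; the usual division laws are only stated for nonzero divisors, so nothing breaks. The Hillel Wayne link quoted below is about exactly this. A tiny check:

    -- In Lean 4, Nat division is total; division by zero is defined to be 0.
    #eval (1 : Nat) / 0                            -- 0
    example : (1 : Nat) / 0 = 0 := Nat.div_zero 1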
On 8/3/20 10:02 AM, uǝlƃ ↙↙↙ wrote:
>
> I know I've posted this before. I don't remember it getting any traction with y'all. But it's relevant to my struggles with beliefs in potential vs actual infinity:
>
> Belief in the Singularity is Fideistic
>
> https://link.springer.com/chapter/10.1007%2F978-3-642-32560-1_19
> Not unrelated, I've often been a fan of trying to identify *where* an argument goes wrong. And because this post mentions not only 1/0, but also Isabelle, Coq, Idris, and Agda, I figured it might be a good follow-up to our modeling discussion on Friday, including my predisposition against upper ontologies.
>
> 1/0 = 0
>
> https://www.hillelwayne.com/post/divide-by-zero/
--
↙↙↙ uǝlƃ
- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .