It's not about people's belief that they understand the software. And I can't speak to Dave's motivation for believing it's the Best Way. But I can describe ways in which this style of design produces better software than other styles.
We've talked on list quite a bit about agency, hallmarks of living systems, complexity buzzwords like attractors and far-from-equilibrium, etc. Treating software components as Kantian ends, rather than means, helps ensure *distribution* of computational effort/cost. Wrapping each component in its own ball of responsibility/duty/self-interest helps the designer play the long-term game of "logic, logic, where is the logic?". The traditional systems engineering approach attempts to distribute logic by planning the whole thing out in a modernist way: a waterfall process where you spend lots of upstream time planning, get a blueprint, parcel out effort to subcontractors, then verify, test, deploy, maintain. This works, but not for long, and not for massive heterogeneous systems. The newer approaches like "agile", "continuous delivery", "devops", "code as data", etc. are evolutionary steps away from waterfall in the right direction. But the limit point is to eventually have every unit of computation carry with it its own context, its own "closure". Such "personification" is an effective heuristic for doing (and remembering to do) that.

On 12/1/20 12:31 PM, [hidden email] wrote:
>
> If you are saying that the more AI acts like a person, the more people will believe they understand it, I totally agree. Whether they believe truthfully is a whole 'nother matter. If ever there were a cradle for manipulation, AI is it.
>
> On 12/1/20 12:01 PM, Prof David West wrote:
>>
>> Everything I do in software is grounded in personification / anthropomorphization of objects - small bits of software. I would contend that this is the best way to understand and design such software. So I see no reason to avoid personification of AI software and would, in fact, argue that current approaches to designing an AI will fail precisely because they do not take that perspective.

--
glen ep ropella 971-599-3737
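A minimal sketch of what that limit point might look like, in Python (the thread itself contains no code; the account example and all names here are invented for illustration): a closure carries its own state and enforces its own small duty, so the logic travels with the unit of computation instead of being parceled out by a central plan.

    def make_account(balance):
        """Return a 'personified' account: a closure that owns its
        own context (balance) and enforces its own duty (no overdrafts)."""
        def withdraw(amount):
            nonlocal balance
            if amount > balance:      # the component polices itself
                raise ValueError("insufficient funds")
            balance -= amount
            return balance
        return withdraw

    withdraw = make_account(100)
    print(withdraw(30))   # 70 -- state travels with the unit of computation
    print(withdraw(30))   # 40 -- no external coordinator holds the logic

Each call to make_account yields an independent "agent" with its own responsibility; no blueprint outside the closure needs to know how, or whether, it guards its balance.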
Glen writes:
< We've talked on list quite a bit about agency, hallmarks of living systems, complexity buzzwords like attractors and far-from-equilibrium, etc. Treating software components as Kantian ends, rather than means, helps ensure *distribution* of computational effort/cost. Wrapping each component in its own ball of responsibility/duty/self-interest helps the designer play the long-term game of "logic, logic, where is the logic?". The traditional systems engineering approach attempts to distribute logic by planning the whole thing out in a modernist way: a waterfall process where you spend lots of upstream time planning, get a blueprint, parcel out effort to subcontractors, then verify, test, deploy, maintain. This works, but not for long, and not for massive heterogeneous systems. >

I think I'd prefer the analogy of a choir or an automated subway to boring division of labor. Nonetheless, Agile is a way to get a lot of (often junior) people involved in a project with bounded responsibility and to give them a sense of mastery. The emphasis is on simple, short (sub)contracts with transparency to the customer. There isn't much in the way of technical mastery, though, because the tasks to be accomplished are close to robotic and there is no Big Picture, just periodic frantic consensus building -- consensus building for its own sake, I would argue -- in contrast to the Kantian sort. Agile is one of those approaches that especially social people love, because it is communication-intensive, proliferates "Scrum masters" and other middle managers, and mitigates the power of gurus. When the designer only has to consider local optimization to play the game, and takes different roles in turn, it is not surprising that trust, global objectives, and efficiency become harder to achieve. One gets a set of independent components that can be aggregated, but the aggregate may not have predictable properties.

I too would ridicule waterfall-type approaches, but for a different reason. Waterfall-type approaches put pompous know-it-alls in charge and delegate the "uninteresting" stuff to subcontractors. Or worse, they bring several over-the-hill bosses into a room to rule by committee. Because such individuals' technical knowledge and prowess are often dated, they make uninformed decisions, and the delegated work has to be redelegated over and over while they figure out why their Grand Plan didn't work out so well. IMO, systems start to degrade when designers can't hold the whole design and implementation in their heads and become reliant on other people for technical grounding. That's why government often doesn't work, and why we need AI.

Marcus