Safe Artificial General Intelligence via Distributed Ledger Technology

Kristen W. Carlson

Background. Expert observers and metrics of artificial intelligence (AI) progress indicate that AI will exceed human intelligence within a few decades. Whether artificial general intelligence (AGI) that exceeds human capabilities will be the single greatest boon in history or a disaster is unknown: no proof exists that AGI will benefit humans, or that it will not harm or eliminate them.

Objective. I propose a set of logically distinct conceptual components that are jointly necessary and sufficient to 1) ensure that most known AGI scenarios will not harm humanity and 2) robustly align AGI values and goals with human values.

Methods. By systematically addressing each category of pathway to malevolent AI, we can induce the methods and axioms required to redress that category.

Results and Discussion. Distributed ledger technology (DLT, blockchain) is integral to this proposal: it reduces the probability of hacking; it provides an audit trail with which to detect and correct errors, to identify components causing vulnerability or failure, and to replace or shut down those components remotely and/or automatically; and it separates and balances key AGI components via decentralized applications (dApps). Smart contracts based on DLT are necessary because AI will evolve too fast for human monitoring and intervention.

The proposed axioms are: 1) Access to technology by market license. 2) Transparent ethics embodied in DLT. 3) Morality encrypted via DLT. 4) A behavior-control structure with values (ethics) at its roots. 5) Individual bar-code identification of all critical components. 6) Configuration items (from business continuity/disaster recovery planning). 7) Identity verification secured via DLT. 8) Smart automated contracts based on DLT. 9) Decentralized applications: AI software code modules encrypted via DLT. 10) An audit trail of component usage stored via DLT. 11) Social ostracism (denial of societal resources) augmented by DLT petitions.
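The tamper-evidence that underlies several of these axioms (notably the audit trail of axiom 10) comes from hash-chaining: each log entry commits to the hash of its predecessor, so any retroactive edit invalidates every later entry. The following is a minimal single-node sketch of that idea, not the paper's implementation; the class and field names (`AuditTrail`, `component_id`, `action`) are illustrative, and a real DLT would additionally replicate and reach consensus on the chain across nodes.

```python
import hashlib
import json


def _hash(record: dict) -> str:
    """Deterministic SHA-256 hash of a record's canonical JSON form."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()


class AuditTrail:
    """Append-only, hash-chained log of component usage.

    A single-process stand-in for a distributed ledger: tamper-evident,
    but without the replication/consensus a real DLT provides.
    """

    GENESIS = "0" * 64  # placeholder predecessor hash for the first entry

    def __init__(self):
        self.entries = []

    def record(self, component_id: str, action: str) -> dict:
        """Append one usage event, linked to the previous entry's hash."""
        prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
        entry = {"component_id": component_id, "action": action, "prev_hash": prev}
        entry["hash"] = _hash({k: v for k, v in entry.items() if k != "hash"})
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Detect tampering: every entry must hash correctly and must
        point at its predecessor's hash."""
        prev = self.GENESIS
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev or _hash(body) != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

For example, altering any recorded `action` after the fact makes `verify()` return False, which is what lets faulty or compromised components be identified from the trail:

```python
trail = AuditTrail()
trail.record("component-0042", "inference_call")
trail.record("component-0042", "weight_update")
assert trail.verify()
trail.entries[0]["action"] = "tampered"
assert not trail.verify()
```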
