How to mitigate AI-driven power concentration
We’re offering up to $4M to support projects that work to mitigate the dangers of AI-driven power concentration and move towards a better world of meaningful human agency.
It’s not like I haven’t been trying to head this problem off for over 3 decades now
(if not longer)
And it isn’t like I haven’t approached related institutes (Lifeboat, Metaculus, Hansen, etc.), as well as dissident-right influencers like Murray, Derbyshire and Sailer – all of whom should have been eager – about the relevance of prize awards for the Algorithmic Information Criterion in macrosocial dynamics model selection (as during the pandemic).
My cynicism reaches inexhaustible extremes at this mismatch between “influence” and competence – to the point that I seriously want to tear the whole edifice down to some point in the 1800s, if not the 1700s, and start over.
But, OK, let’s say I can set aside my attitude and entertain the possibility that the Future of Life Institute is, like many others, starting to realize that the existing structures have screwed the pooch – not least the Jewish elite that took over from the old WASP elite without really understanding what it took to build that society in the first place, and that now finds itself caught between calling those it wants to take over “Nazis” and being called “Nazis” itself. There is some evidence (i.e., Trump) of an increasing “Come to Jesus” moment among the elites. And I predicted this clear back in the early 1990s, when my proposal to replace the 16th Amendment with a net asset tax funding the replacement of the welfare state with a citizen’s dividend was met with intense Jewish hostility – precisely because it would have accomplished what the Future of Life Institute wants in the present RFP.
But it is clearly too much to ask people to take in more than one policy change at a time, let alone three (repeal the 16th Amendment, institute a net asset tax, replace the welfare state with a citizen’s dividend), in order to get to a new stable state.
sigh
OK, so here’s what GPT-4 did with my Hume’s Guillotine writeup when asked to reformat it as a response to the FLI’s RFP. Whaddya think? I’ve got better things to do with my life than beg influencers who don’t want to be influenced because they haven’t yet reached their “Come to Jesus” moment:
Proposal for Prize Awards in Macrosocial Dynamical Modeling
Hume’s Guillotine
Cynic: A blackguard whose faulty vision sees things as they are, not as they ought to be. — Ambrose Bierce’s Devil’s Dictionary
Introduction
This proposal outlines a series of competitions aimed at fostering rigorous ethics in Artificial General Intelligence (AGI) based on Hume’s Guillotine, which separates what is from what ought to be. Our focus is on macrosocial dynamical modeling using the Algorithmic Information Criterion (AIC) as a fundamental tool for understanding and mitigating AI-driven power concentration. The foundation of this approach is that to achieve any objective, including mitigating power concentration, a robust model of macrosocial dynamics is essential.
Background
Artificial General Intelligence (AGI) unifies is with ought. In Marcus Hutter’s AGI theory, AIXI, Algorithmic Information Theory provides the is through scientific knowledge, while Sequential Decision Theory provides the ought by applying this knowledge to decision-making. The Algorithmic Information Criterion (AIC) stands out among information criteria for model selection as it focuses on lossless compression of data, thereby avoiding confirmation bias and ensuring the integrity of scientific discovery.
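To make the AIC idea concrete (this sketch is illustrative and not part of the original proposal), the snippet below compares two candidate models of a toy dataset by the total length of a two-part code: the compressed model description plus the compressed residuals. True algorithmic information is uncomputable, so zlib compression serves here as a crude, computable stand-in; all model names and data are invented for the example.

```python
import zlib

def two_part_code_length(model_desc: bytes, residuals: bytes) -> int:
    # Total code length: compressed model description plus compressed
    # residuals. Shorter is better under an AIC/MDL-style criterion.
    # zlib is only a rough proxy for true algorithmic information.
    return len(zlib.compress(model_desc)) + len(zlib.compress(residuals))

# Toy data: a linear trend plus a regular "noise" pattern (i % 3).
data = [2 * i + (i % 3) for i in range(100)]

# Candidate A: constant (mean) model.
mean = sum(data) // len(data)
model_a = f"constant {mean}".encode()
resid_a = bytes((d - mean) % 256 for d in data)

# Candidate B: linear model with slope 2.
model_b = b"linear slope=2 intercept=0"
resid_b = bytes((d - 2 * i) % 256 for i, d in enumerate(data))

len_a = two_part_code_length(model_a, resid_a)
len_b = two_part_code_length(model_b, resid_b)

# The linear model leaves the highly regular residual stream
# 0, 1, 2, 0, 1, 2, ... which compresses far better than the
# constant model's residuals.
best = "linear" if len_b < len_a else "constant"
```

Note that the residuals are retained in full: what a conventional criterion might model away as noise or measurement error is kept losslessly, which is the point of the lossless-compression requirement.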
Objectives
Our proposal aligns with the Future of Life Institute’s (FLI) request for proposals aimed at mitigating the risks of AI-driven power concentration. We emphasize that achieving these objectives necessitates a sound understanding of macrosocial dynamics through rigorous modeling. Our approach involves competitions that drive the development and validation of these models, ensuring they are robust, unbiased, and capable of informing effective interventions.
Methodology
Competitions
The proposal includes a series of competitions centered on:
- Data Collection: Curating comprehensive datasets that encompass diverse aspects of social dynamics.
- Model Development: Utilizing the AIC to develop models that accurately represent macrosocial dynamics without omitting critical data, including what might traditionally be considered noise or measurement errors.
- Model Evaluation: Employing objective scoring based on the AIC to evaluate the models’ predictive power and alignment with observed data.
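The model-evaluation step above could be operationalized along the following lines. This is a hypothetical scoring harness, not something specified in the proposal; the function names, submissions, and data are invented for illustration. Each submission is scored by the compressed length of its model source plus the compressed length of its residuals, so the observed data remains exactly recoverable from the scored artifacts.

```python
import zlib

def aic_score(model_source: str, predictions: list[int], observed: list[int]) -> int:
    # Two-part code: compressed model source plus compressed residuals.
    # Because observed == predictions + residuals, the data is exactly
    # recoverable -- nothing is silently discarded as "noise".
    residuals = [o - p for o, p in zip(observed, predictions)]
    blob = ",".join(str(r) for r in residuals).encode()
    return len(zlib.compress(model_source.encode())) + len(zlib.compress(blob))

def rank_submissions(submissions, observed):
    # Order submission names best-first: shortest total code length wins.
    return sorted(submissions, key=lambda name: aic_score(*submissions[name], observed))

# Toy competition: predict a quadratic series.
observed = [i * i for i in range(50)]
submissions = {
    "linear":    ("lambda i: 49 * i", [49 * i for i in range(50)]),
    "quadratic": ("lambda i: i * i",  [i * i for i in range(50)]),
}
leaderboard = rank_submissions(submissions, observed)
# The quadratic model reproduces the data exactly, so its residual
# stream is all zeros and it tops the leaderboard.
```

The objectivity claimed for the AIC comes from this arithmetic: the score is a byte count, leaving no judgment call about which deviations count as error.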
Impact on AI-Driven Power Concentration
Effective mitigation of AI-driven power concentration requires an in-depth understanding of the underlying social structures and dynamics. By leveraging the AIC, our approach ensures that models are not only scientifically robust but also capable of identifying and predicting biases and power imbalances within social systems. This knowledge is crucial for designing policies and interventions that promote equitable distribution of AI benefits and prevent the concentration of power.
Alignment with FLI’s Vision
Our proposal directly addresses the topical focus areas outlined by FLI, including:
- Public AI Development: Proposing models and frameworks for AI that operate outside traditional corporate structures, ensuring greater public control and accountability.
- Safe Decentralization: Developing governance models for AI that prevent the proliferation of high-risk systems while promoting decentralized control.
- Effectiveness of Open Source: Evaluating how open-source AI initiatives can mitigate power concentration and proposing strategies for their effective implementation.
- Income Redistribution and Incentive Design: Exploring mechanisms to ensure equitable distribution of AI benefits and designing incentive structures that prioritize social good over profit maximization.
Expected Outcomes
- Enhanced Understanding of Social Dynamics: Providing robust models that elucidate the complexities of macrosocial interactions and power structures.
- Informed Policy Development: Offering insights that guide the creation of policies aimed at preventing AI-driven power concentration and promoting equitable distribution of AI benefits.
- Increased Public Engagement: Encouraging public involvement in AI development and governance through transparent and accountable modeling practices.
Conclusion
This proposal underscores the necessity of robust macrosocial dynamical modeling for any initiative aimed at mitigating AI-driven power concentration. By leveraging the Algorithmic Information Criterion, we ensure that our models are scientifically rigorous, unbiased, and capable of informing effective policies and interventions. We invite FLI to support this initiative, driving forward the development of equitable and sustainable AI systems.