This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.
Accepted at the AI Governance Hackathon research sprint on July 21, 2023

Whose Morals Should AI Have?

This report delves into the crucial question of whose morals AI should follow, investigating the challenges and potential solutions for aligning AI systems with diverse human values and preferences. The report highlights the importance of considering a variety of perspectives and cultural backgrounds in AI alignment, offering insights into research directions and potential applications for public administrations.

Content Outline:

Introduction
- Overview of the AI moral alignment problem
- Importance of diverse values and preferences in AI alignment

Reward Modeling and Preference Aggregation
- Reward modeling as a method for capturing user intentions (see the first sketch below)
- Challenges in aggregating diverse human preferences (see the second sketch below)

Addressing Impossibility Results in Social Choice Theory
- AI alignment as a unique opportunity to work around impossibility results
- Developing AI for public administrations and decision-making processes

Challenges and Research Directions in AI Alignment
- Scaling reward modeling to complex problems
- Research avenues for increasing trust in AI agents

Conclusion
- The collective responsibility to ensure AI alignment with diverse values and preferences
- The importance of ongoing research, collaboration, and open dialogue
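The reward modeling idea in the outline can be made concrete with a minimal sketch. The version below assumes a Bradley-Terry preference model, where the probability that a rater prefers outcome A over outcome B is sigmoid(r(A) - r(B)), and a toy linear reward over feature vectors; the feature setup, simulated rater, and all names here are illustrative assumptions, not the report's method.

```
# Minimal reward-modeling sketch: fit a reward function from pairwise
# human preferences under a Bradley-Terry model. Everything below
# (features, the simulated rater, hyperparameters) is a toy assumption.
import numpy as np

rng = np.random.default_rng(0)

dim = 4
true_w = rng.normal(size=dim)  # hidden "true" reward weights

# Toy outcomes: each is a feature vector; reward is linear in features.
A = rng.normal(size=(500, dim))
B = rng.normal(size=(500, dim))

# Simulate noisy preference labels: P(A preferred) = sigmoid(r(A) - r(B)).
p_prefer_A = 1.0 / (1.0 + np.exp(-(A @ true_w - B @ true_w)))
labels = rng.random(500) < p_prefer_A

# Fit reward weights by gradient ascent on the Bradley-Terry log-likelihood.
w = np.zeros(dim)
lr = 0.1
for _ in range(200):
    logits = (A - B) @ w
    probs = 1.0 / (1.0 + np.exp(-logits))
    w += lr * (A - B).T @ (labels - probs) / len(labels)

print("recovered/true weight correlation:", np.corrcoef(w, true_w)[0, 1])
```

The learned weights recover the hidden reward up to scale, which is the sense in which reward modeling "captures user intentions" from comparisons rather than from hand-written rules.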
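On the aggregation side, a toy Condorcet cycle illustrates the kind of impossibility result from social choice theory that the report refers to: three groups with individually consistent rankings whose pairwise majority vote is cyclic, so no single "majority preference" ordering exists for an AI to adopt. The three-group setup below is an assumed illustration, not data from the report.

```
# Toy Condorcet paradox: individually consistent rankings, cyclic majority.
from itertools import combinations

# Each ranking lists options from most to least preferred.
rankings = [
    ["a", "b", "c"],  # group 1
    ["b", "c", "a"],  # group 2
    ["c", "a", "b"],  # group 3
]

def majority_prefers(x, y):
    # True if a majority of rankings place x above y.
    votes = sum(r.index(x) < r.index(y) for r in rankings)
    return votes > len(rankings) / 2

for x, y in combinations("abc", 2):
    winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
    print(f"majority prefers {winner} over {loser}")
# Prints a > b, b > c, and c > a: no option beats all others,
# so there is no coherent aggregate ranking to align the AI to.
```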

By Nishit