Predicting what future people value: A terse introduction to Axiological Futurism — Jim Buhler [Mar, 2023] https://forum.effectivealtruism.org/s/wmqLbtMMraAv5Gyqn/p/FCkchmXcSCQtJ9PZA
KKirdan on August 5, 2025 12:08 am

> Humanity might develop artificial general intelligence (AGI), colonize space, and create astronomical amounts of things in the future ([Bostrom 2003](https://nickbostrom.com/astronomical/waste); [MacAskill 2022](https://whatweowethefuture.com/uk/); [Althaus and Gloor 2016](https://longtermrisk.org/reducing-risks-of-astronomical-suffering-a-neglected-priority/)). But what things? How (dis)valuable? And how does this compare with things [grabby aliens](https://grabbyaliens.com/) would eventually create if they colonize our corner of the universe? What does this imply for our work aimed at impacting the long-term future?
>
> While this depends on many factors, a crucial one will likely be *the values of our successors*.