Technology VS. Society & Survival: Artificial Intelligence, Social Media & Deepfakes | #5



Some of our new digital technologies are helping to tear our politics apart, make us angrier and more hateful, and even make it impossible to know what’s true. Facebook gets a lot of the negative attention (rightfully), but the problem is much bigger. Our society is clearly not dealing with these technologies effectively. Basically, people are allowed to invent and release into the world whatever they want, no matter how harmful – and there are no rules. Then, on top of this very shaky foundation, people are creating artificial intelligence that will be profoundly more powerful than anything humanity has created before.


Computing is advancing at an exponential rate. Things are already moving faster than we’ve been able to keep up with, and it’s about to go a LOT faster.


Should we keep idolizing technology, and let tech companies and inventors do whatever they want? Or should we responsibly manage these transformative changes to our society, with political and economic systems that encourage safety?


Resources:


Deep Fakes:

~ Bloomberg News, 9/27/18 It's Getting Harder to Spot a Deep Fake Video https://www.youtube.com/watch?v=gLoI9hAX9dw

~ Radiolab, July 2017 Breaking News http://futureoffakenews.com/videos.html (go here to see videos of researchers developing deep fake software)

~ 80,000 Hours Podcast, 4/6/21 Nina Schick on Disinformation and the Rise of Synthetic Media https://80000hours.org/podcast/episodes/nina-schick-disinformation-synthetic-media/


Social Media:

~ New York Times, 10/15/18 A Genocide Incited on Facebook, With Posts From Myanmar’s Military https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html

~ BBC News, 9/12/18 The country where Facebook posts whipped up hate https://www.bbc.com/news/blogs-trending-45449938

~ MIT Technology Review, 3/11/21 How Facebook got addicted to spreading misinformation https://www.technologyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation/

~ Center for Humane Technology https://www.humanetech.com/


Artificial Intelligence:

~ The Independent, 5/1/14 Stephen Hawking: Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough? https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html

~ CNBC, 3/13/18 Elon Musk: ‘Mark my words — A.I. is far more dangerous than nukes’ https://www.cnbc.com/2018/03/13/elon-musk-at-sxsw-a-i-is-more-dangerous-than-nuclear-weapons.html

~ SingularityHub, 7/15/18 Why Most of Us Fail to Grasp Coming Exponential Gains in AI https://singularityhub.com/2018/07/15/why-most-of-us-fail-to-grasp-coming-exponential-gains-in-ai/

~ AlphaZero (chess-playing artificial intelligence) https://en.wikipedia.org/wiki/AlphaZero

~ MuZero (chess- and Atari-playing artificial intelligence) https://en.wikipedia.org/wiki/MuZero

~ SingularityHub, 6/18/20 OpenAI’s New Text Generator Writes Even More Like a Human https://singularityhub.com/2020/06/18/openais-new-text-generator-writes-even-more-like-a-human/

~ SingularityHub, 8/2/20 This AI Could Bring Us Computers That Can Write Their Own Software https://singularityhub.com/2020/08/02/this-ai-could-bring-us-computers-that-can-write-software/

~ SingularityHub, 5/31/17 Google’s AI-Building AI Is a Step Toward Self-Improving AI https://singularityhub.com/2017/05/31/googles-ai-building-ai-is-a-step-toward-self-improving-ai/

~ Science, 4/13/20 Artificial intelligence is evolving all by itself https://www.sciencemag.org/news/2020/04/artificial-intelligence-evolving-all-itself

~ Future of Life Institute - Benefits & Risks of Artificial Intelligence https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/

~ Future of Life Institute Podcast, 3/19/21 Roman Yampolskiy on the Uncontrollability, Incomprehensibility, and Unexplainability of AI https://futureoflife.org/2021/03/19/roman-yampolskiy-on-the-uncontrollability-incomprehensibility-and-unexplainability-of-ai/

~ AI Research Considerations for Human Existential Safety (ARCHES) by Andrew Critch & David Krueger, 6/11/20 https://arxiv.org/abs/2006.04948 (This academic paper is a long read, but I highly recommend it. It’s understandable and well-written. It does an excellent job of explaining why AI safety is quite difficult because of complex interactions between multiple people and organizations, and multiple AI systems.)


Lethal Autonomous Weapons:

~ Campaign to Stop Killer Robots https://www.stopkillerrobots.org/

~ Lethal Autonomous Weapons Systems https://autonomousweapons.org/


Efforts to regulate artificial intelligence:

~ International Congress for the Governance of AI https://www.icgai.org/

~ Future of Life Institute https://futureoflife.org/policy-work/

~ Centre for the Governance of AI / Future of Humanity Institute at University of Oxford https://www.fhi.ox.ac.uk/govai/

~ Center for AI and Digital Policy https://caidp.dukakis.org/

~ Global Partnership on Artificial Intelligence https://gpai.ai/