Robert Miles has made a series of videos explaining issues in artificial intelligence, and in particular the ways AI can pose a danger. His launching point is a 2016 paper:
It’s been nearly two years since researchers from Google, Stanford, UC Berkeley, and OpenAI released the paper, “Concrete Problems in AI Safety,” yet it’s still one of the most important pieces on AI safety.
That’s from a 2018 post on the Future of Life Institute blog, and it remains relevant today:
On a shorter and more specific note, in this video he argues that GitHub’s Copilot is dangerous in exactly this sense: it can write good code or bad code, and it is equally happy to do either.