We're at two and a half minutes to midnight on the Doomsday Clock. An aggressive, nationalistic populism is sweeping through the Western world. As the writer Martin Amis puts it, history feels like it's speeding up, powered by the rapid acceleration of scientific and technological advance.
And, according to the theoretical physicist Professor Stephen Hawking, the world's leading boffin, King of the Brains, hypernerd, boffzilla (etc etc), human behaviour simply isn't keeping up with the risks posed by potentially destructive technological advances.
Put simply, we're too naturally aggressive for our own good in the modern world. So, for example, if President Trump stubs his toe and blames the Chinese, in a fit of rage he might launch a tactical nuclear strike at Beijing out of sheer frustration and stupidity, whereas in the olden days he'd have just pillaged a local village.
"Since civilisation began, aggression has been useful inasmuch as it has definite survival advantages," Hawking told The Times in an interview.
"It is hard-wired into our genes by Darwinian evolution. Now, however, technology has advanced at such a pace that this aggression may destroy us all by nuclear or biological war. We need to control this inherited instinct by our logic and reason."
Then there are the risks from man-made climate change. Or a Terminator-style situation where artificial intelligence advances well beyond human intelligence, and we end up building pyramids made from iPads for our robot pharaohs or something. But Hawking is still "optimistic", though that's a little suspicious coming from a 75-year-old who'll be dead before he ever has to kowtow to a self-aware smart fridge.
"We need to be quicker to identify such threats and act before they get out of control," Hawking told The Times. "This might mean some form of world government. But that might become a tyranny. All this may sound a bit doom-laden but I am an optimist. I think the human race will rise to meet these challenges."
It's not the first time Hawking has warned about the future, in particular artificial intelligence. Speaking in October 2016 at the launch of the Leverhulme Centre for the Future of Intelligence in Cambridge, he said the development of AI could be "the biggest event in the history of our civilisation".
"But it could also be the last unless we learn how to avoid the risks," he said. "Alongside the benefits, AI will also bring dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It will bring great disruption to our economy. And in the future, AI could develop a will of its own – a will that is in conflict with ours.
"In short, the rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity. We do not know which."
Can't wait to find out!