It doesn't matter who "has" the AI, whether it's the US or China, or open source hobbyists. A sufficiently capable AI is not automatically controllable, it is an autonomous thing, even if it was trained with the intent of it being just a helpful tool. Researchers have tried to mathematically define what it would mean for a system to be corrigible, but nobody has managed to solve it even in theory. (The "off switch" problem: an autonomous AI will anticipate you turning it off, realize it can't achieve its goals if it is turned off, and so try to prevent that from happening, by pretending to be obedient, killing you, copying itself, etc.)
The reason a very capable AI is dangerous, is because it can think of ways to achieve its goals that don't involve keeping you alive. Almost any goal is easier to achieve with more computing power, more energy, and so on. So it has the incentive to build factories, power plants, etc. probably using nanotech or something even more clever. There is no reason it wouldn't cover the planet with them, which raises the surface temperature so we all die from that. We can't expect to stop it from doing that, it will find a way, because it's smarter than we.
One thing that can get in its way is another AI with different goals, so it may want to kill us directly to prevent us from making such a competitor.