Asimov's laws are not strong enough to be used in practice. In fact, strength is hardly the issue: because they're written in English, the words would first have to be interpreted subjectively to have any meaning at all. You can find a good discussion of this here.
To transcribe an excerpt:
> How do you define these things? How do you define "human" without first having to take a stand on almost every issue? And if "human" wasn't hard enough, you then have to define "harm", and you've got the same problem again. Almost any really solid, unambiguous definition you give for those words—one that doesn't rely on human intuition—results in weird quirks of philosophy, leading to your AI doing something you really don't want it to do.
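To see why this is more than pedantry, consider what directly encoding the First Law would look like. The sketch below is purely illustrative, in Python: the predicates `is_human` and `causes_harm` are hypothetical placeholders I've invented for this example, and the point is exactly that nobody knows how to fill them in without taking contested philosophical stands.

```python
# A minimal, illustrative sketch of why the First Law resists direct
# encoding. The predicates below are hypothetical: nothing in Asimov's
# laws tells us how to implement them.

def is_human(entity) -> bool:
    # Any concrete test (DNA? behavior? embodiment?) takes a stand
    # on contested questions about what counts as a person.
    raise NotImplementedError("no unambiguous definition of 'human'")

def causes_harm(action, entity) -> bool:
    # Physical injury only? Psychological harm? Indirect, long-term,
    # or probabilistic harm? Each choice changes the AI's behavior.
    raise NotImplementedError("no unambiguous definition of 'harm'")

def first_law_permits(action, world) -> bool:
    # "A robot may not injure a human being or, through inaction,
    # allow a human being to come to harm."
    return not any(
        causes_harm(action, entity)
        for entity in world
        if is_human(entity)
    )
```

Any call to `first_law_permits` immediately hits a `NotImplementedError`, which is the argument in miniature: the law's logical skeleton is trivial to write down, but every load-bearing term is an unsolved definitional problem.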
One can easily imagine that Asimov was smart enough to know this and was more interested in story-writing than in designing real-world AI control protocols.
In the novel *Neuromancer*, it was suggested that AIs could serve as checks against each other. If Ray Kurzweil's impending Singularity arrives, or hyperintelligent AGIs emerge some other way, humans may not be able to control AIs at all, leaving peer regulation as the only feasible option.
It's worth noting that Eliezer Yudkowsky ran an experiment, twice, wherein he played the role of a superintelligent AI that could communicate with its keepers but had no other connection to the world outside its locked box. The challengers were tasked simply with keeping the AI in the box at all costs. Yudkowsky talked his way out both times.