The video has been deleted. I would like to point out that the three laws of robotics aren't exactly foolproof anyway. They're a good start, but a proper AI would need a far more complex concept of what "not harming humans" means.
Well, you could just raise it like a child, in some kind of android body. Once it's been treated like a child, it might kinda empathize with us as real people. At least, that's how I'd do it. After all, we created it; might as well treat it like a human, with human laws.
Dang, can't find the video anywhere else. But it was about this: http://www.newscientist.com/article...ed-by-choice-of-who-to-save.html#.VBOYT_m1ZcQ
It wasn't exactly the most mind-blowing thing in the universe. It was just little drones the size of Lego figures bumping into each other.
Ah yeah, I read that article on New Scientist too. It's interesting in that it makes clear that ethics isn't just "3-4 rules and you're done". The laws of robotics are only a bare framework, not an end product.