The singularity is the hypothesized moment when artificial intelligence becomes as intelligent as humans. At that point, machines could decide to build smarter machines, which would build smarter machines still, ad infinitum, relegating humans to a subservient role in society. Another view of the singularity is that it will free humanity from the shackles of the material world, allowing unimaginable lifespans and the freedom to think, create, and explore. A NYTimes.com article covers a meeting of computer scientists who are starting to wonder whether limits should be imposed on artificial intelligence research [LINK].
The article makes it seem as though these scientists are concerned with current, or near-future, technologies that could disrupt society. It cites a few recent advances, giving particular attention to an empathy-simulating robot. From my reading, none of these technologies seems very threatening, and most have far more potential for good than for harm.
Thinking farther into the future, to a time when the singularity is imminent, these concerns become very relevant. I suspect the scientists are less worried about today's technology than about working through ethical issues now so that decisions will be easier to make then. The fact of the matter is that the singularity, in one form or another, is coming, so some thought about what it means is important. Regulating research seems like a wrong-headed approach to me, though, because it would let the singularity sneak up on us. Everyone would push their science right up against the edges of the rules, and suddenly the boundary beyond which advanced artificial intelligence lies would be gone, disintegrated, and humanity wouldn't be properly prepared because everyone had promised not to cross it.
Don't get me wrong: even at the moment of the singularity, I don't think machines will start taking over. Simply having the capacity to be more intelligent than humans doesn't mean those initial machines will be successful at autonomous thought and decision-making... i.e., they won't really be conscious. Rather, those intelligent machines will be an increment in the machine-human interaction that will, I hope, push the boundaries of the human experience. There are possibilities to extend lifespan, expand thought capacity, stimulate creativity, and boost productivity. These are the promises of intelligent machines, but they were also the promises of digital computers and nano-bots, so we can't count on them materializing. We still don't have flying cars and jet-packs, and we still don't have nanotechnology that repairs roads and buildings or constructs moon bases for us. Nor do we know whether a simulation of the human brain would push artificial intelligence to a new level, or whether very advanced computing technology will be able to interact with biological systems in any interesting ways [cf. LINK]. Despite my hope for the coming singularity, it is far from certain that we'll know when it happens or what it means, and it is unlikely, with any amount of planning, that we'll know what to do when that day comes to make the most of the technology.