2010-08-26
That is essentially the question we ended the lunchtime meeting with today.
What is a big question that needs to be answered definitively by climate science? The idea is to provoke a community-wide effort, funneling creativity, effort, and resources toward one big problem. The example in particle physics was the Higgs boson and the construction of the Large Hadron Collider. Is there something equivalent in climate research?
2009-09-08
2009-09-08
Hatoyama says emission cuts are coming, maybe
So Japan's PM says that the country is going to cut carbon emissions to 75% of 1990 levels by 2020 [LINK], but he is requiring other countries to come along for the ride.
First off, great! It is terrific to see a world leader take a stand and give a real goal... dare I say a target.
Second, this could be a genius move on Hatoyama's part. Japan is pretty amazing when it comes to designing and building stuff, and it has a strong track record of taking ideas and concepts that originated elsewhere and making them more usable, streamlined, and efficient (cars, VCRs, etc.). So my first take on this is that Japanese companies like Toyota and Mitsubishi are going to have an obvious target for building efficient things (things of all kinds!). These companies already have a head start down this path, and having a huge economy obligated to reduce emissions means there is an economic incentive to ramp up R&D.
If these companies, which are already leading the world, now accelerate their R&D, they will be selling their wares to the rest of the world shortly. This will be especially true if Hatoyama gets his way and other countries also vow to reduce emissions. If Japanese companies can do for power generation what they have done for other industries, then the whole world could be buying Honda wind turbines and Sony solar panels (hypothetical examples, of course) in no time. What a boost to the Japanese economy! Wish the USA could have thought of that.
2009-07-26
Should we prepare for the singularity?
The singularity is the hypothesized moment when artificial intelligence becomes as intelligent as humans. At that point, machines might have the ability to decide to make smarter machines, which will make smarter machines, ad infinitum, relegating humans to a subservient role in society. Another view of the singularity is that it will free humanity from the shackles of the material world, allowing unimaginable lifespan and freedom to think, create, and explore. A NYTimes.com article covers a meeting of computer scientists who are starting to wonder whether limits on artificial intelligence research should be imposed [LINK].
The article makes it seem as though these scientists are concerned with current, or near-future, technologies that could disrupt society. It cites a few recent advances, and it especially plays up an empathy-simulating robot. From my reading, none of these technologies seems very threatening, and most have far more potential for good than for harm.
Thinking farther into the future, to a time when the singularity is imminent, these concerns become very relevant. I suspect the scientists are more interested in working through ethical issues now so that they can inform decision-making then. The fact of the matter is that the singularity, in one form or another, is coming, so some thought about what it means is important. Regulating research seems like a wrong-headed direction to me, though, because it would mean the singularity sneaks up on us. Everyone will push their science to bump against the edges of the rules, and suddenly that surface beyond which advanced artificial intelligence lies will be gone, disintegrated, and humanity won't be properly prepared because everyone promised they weren't going to cross that boundary.
Don't get me wrong: even at the moment of the singularity, I don't think machines will start taking over. Simply having the capacity to be more intelligent than humans doesn't mean those initial machines will be capable of autonomous thought and decision-making... i.e., they won't really be conscious. Rather, those intelligent machines will be an increment in the machine-human interaction that will, I hope, push the boundaries of the human experience. There are possibilities to extend lifespan, expand thought capacity, stimulate creativity, and boost productivity. These are the promises of intelligent machines, but they were also the promises of digital computers and nano-bots, so we can't count on them being fulfilled. We still don't have flying cars and jet-packs, and we still don't have nanotechnology that repairs roads and buildings or constructs moon bases for us. Nor do we know whether a simulation of the human brain would push artificial intelligence to a new level, or whether very advanced computing technology will be able to interact with biological systems in any interesting ways [cf. LINK]. Despite my hope for the coming singularity, it is far from certain that we'll know when it happens or what it means, and it is unlikely, no matter how much we plan, that we'll know what to do when that day comes to make the most of the technology.