
Hawking's Latest Speech (2017)

Posted: 2017-05-18 12:38:51


  On April 27, the renowned physicist Stephen Hawking delivered a video address to the Global Mobile Internet Conference (GMIC) in Beijing. In the speech, Hawking reiterated that the rise of artificial intelligence will be either the best or the worst thing ever to happen to humanity, and argued that humans must be alert to the threat its development poses: once AI breaks free of its constraints and redesigns itself at an ever-accelerating rate, humans, limited by slow biological evolution, will be unable to compete and will be superseded. The full text of the speech and the Q&A follows.


  Over my lifetime, I have seen very significant societal changes. Probably one of the most significant, and one that is increasingly concerning people today, is the rise of artificial intelligence.

  In short, I believe that the rise of powerful AI will be either the best thing, or the worst, ever to happen to humanity.

  I have to say now that we do not yet know which. But we should do all we can to ensure that its future development benefits us, and our environment. We have no other option. I see the development of AI as a trend with its own problems that we know must be dealt with, now and into the future.

  The progress in AI research and development is swift. And perhaps we should all stop for a moment, and focus our research, not only on making AI more capable, but on maximizing its societal benefit.

  Such considerations motivated the American Association for Artificial Intelligence's 2008–2009 Presidential Panel on Long-Term AI Futures, which until recently had focused largely on techniques that are neutral with respect to purpose.

  But our AI systems must do what we want them to do. Inter-disciplinary research can be a way forward: ranging from economics, law, and philosophy, to computer security, formal methods, and of course various branches of AI itself.

  Everything that civilization has to offer is a product of human intelligence, and I believe there is no real difference between what can be achieved by a biological brain and what can be achieved by a computer.

  It therefore follows that computers can, in theory, emulate human intelligence, and exceed it. But we don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.

  Indeed, we have concerns that clever machines will be capable of undertaking work currently done by humans, and swiftly destroy millions of jobs.

  While the primitive forms of artificial intelligence developed so far have proved very useful, I fear the consequences of creating something that can match or surpass humans. AI would take off on its own, and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded. It will bring great disruption to our economy.

  And in the future, AI could develop a will of its own, a will that is in conflict with ours. Although I am well known as an optimist regarding the human race, others believe that humans can control the pace of technology for a decently long time, and that the potential of AI to solve many of the world's problems will be realised. I am not so sure.

  In January 2015, I, along with the technological entrepreneur, Elon Musk, and many other AI experts, signed an open letter on artificial intelligence, calling for serious research on its impact on society.

  In the past, Elon Musk has warned that superhuman artificial intelligence is capable of providing incalculable benefits, but if deployed incautiously, will have an adverse effect on the human race.

  He and I sit on the scientific advisory board for the Future of Life Institute, an organization working to mitigate existential risks facing humanity, and which drafted the open letter. The letter called for concrete research on how we could prevent potential problems while also reaping the potential benefits AI offers us, and is designed to get AI researchers and developers to pay more attention to AI safety.

  In addition, for policymakers and the general public, the letter is meant to be informative, but not alarmist. We think it is very important that everybody knows that AI researchers are seriously thinking about these concerns and ethical issues.

  For example, AI has the potential to eradicate disease and poverty, but researchers must work to create AI that can be controlled. The four-paragraph letter, titled “Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter”, lays out detailed research priorities in the accompanying twelve-page document.

  For the last 20 years or so, AI has been focused on the problems surrounding the construction of intelligent agents: systems that perceive and act in some environment. In this context, intelligence is related to statistical and economic notions of rationality; colloquially, the ability to make good decisions, plans, or inferences.

  As a result of this recent work, there has been a large degree of integration and cross-fertilisation among AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks, such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems.

  As development in these areas and others moves from laboratory research to economically valuable technologies, a virtuous cycle evolves, whereby even small improvements in performance are worth large sums of money, prompting further and greater investments in research.

  There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide.

  But, as I have said, the eradication of disease and poverty is not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

  Artificial intelligence research is now progressing rapidly, and this research can be discussed in terms of short-term and long-term concerns. Some short-term concerns relate to autonomous vehicles, from civilian drones to self-driving cars. For example, a self-driving car may, in an emergency, have to decide between a small risk of a major accident and a large probability of a small accident.

  Other concerns relate to lethal intelligent autonomous weapons. Should they be banned? If so, how should autonomy be precisely defined? If not, how should culpability for any misuse or malfunction be apportioned? Other issues include privacy concerns, as AI becomes increasingly able to interpret large surveillance datasets, and how best to manage the economic impact of jobs displaced by AI.

  Long-term concerns consist primarily of the potential loss of control of AI systems, via the rise of super-intelligences that do not act in accordance with human wishes, and the fear that such powerful systems would threaten humanity. Are such dystopias possible?

  If so, how might these situations arise? What kind of investments in research should be made to better understand and to address the possibility of the rise of a dangerous super-intelligence, or the occurrence of an intelligence explosion?

  Existing tools for harnessing AI, such as reinforcement learning and simple utility functions, are inadequate to solve this. Therefore more research is necessary to find and validate a robust solution to the control problem.

  Recent landmarks, such as the self-driving cars already mentioned, or a computer winning at the game of Go, are signs of what is to come. Enormous levels of investment are pouring into this technology.

  The achievements we have seen so far, will surely pale against what the coming decades will bring, and we cannot predict what we might achieve, when our own minds are amplified by AI.

  Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one, industrialisation. Every aspect of our lives will be transformed. In short, success in creating AI could be the biggest event in the history of our civilisation.

  But it could also be the last, unless we learn how to avoid the risks. I have said in the past that the development of full AI could spell the end of the human race, for instance through the ultimate use of powerful autonomous weapons. Earlier this year, I, along with other international scientists, supported the United Nations convention to negotiate a ban on nuclear weapons.

  We await the outcome with nervous anticipation. Currently, nine nuclear powers have access to roughly 14,000 nuclear weapons, any one of which can obliterate cities, contaminate wide swathes of land with radioactive fall-out, and the most horrible hazard of all, cause a nuclear-induced winter, in which the fires and smoke might trigger a global mini-ice age.

  The result is a complete collapse of the global food system, and apocalyptic unrest, potentially killing most people on earth. We scientists bear a special responsibility for nuclear weapons, since it was scientists who invented them, and discovered that their effects are even more horrific than first thought.

  At this stage, I may have frightened you all here today with talk of doom. I apologise. But it is important that you, as attendees at today's conference, recognise the position you hold in influencing the future research and development of today's technology.

  I believe that we should join together to call for support of international treaties, or for the signing of letters presented to individual governmental powers. Technology leaders and scientists are doing what they can to obviate the rise of uncontrollable AI.

  In October last year, I opened a new centre in Cambridge, England, which will attempt to tackle some of the open-ended questions raised by the rapid pace of development in AI research. The Leverhulme Centre for the Future of Intelligence is a multi-disciplinary institute, dedicated to researching the future of intelligence as crucial to the future of our civilisation and our species. We spend a great deal of time studying history, which, let's face it, is mostly the history of stupidity.

  So it is a welcome change that people are instead studying the future of intelligence. We are aware of the potential dangers, but I am at heart an optimist, and believe that the potential benefits of creating intelligence are huge. Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by industrialisation.

  Every aspect of our lives will be transformed. My colleague at the institute, Huw Price, has acknowledged that the centre came about partially as a result of the university's Centre for the Study of Existential Risk. That institute examines a wider range of potential problems for humanity, while the Leverhulme Centre has a narrower focus.

  Recent developments in the advancement of AI include a call by the European Parliament for the drafting of a set of regulations to govern the use and creation of robots and AI. Somewhat surprisingly, this includes a form of electronic personhood, to ensure the rights and responsibilities of the most capable and advanced AI.

  A European Parliament spokesman has commented that, as a growing number of areas in our daily lives are increasingly affected by robots, we need to ensure that robots are, and will remain, in the service of humans.

  The report as presented to MEPs makes it clear that it believes the world is on the cusp of a new industrial robot revolution. It examines whether or not providing legal rights for robots as electronic persons, on a par with the legal definition of corporate personhood, would be permissible.

  But it stresses that, at all times, researchers and designers should ensure all robotic design incorporates a kill switch. This didn't help the scientists on board the spaceship with HAL, the malfunctioning robotic computer in Kubrick's 2001: A Space Odyssey, but that was fiction. We deal with fact. Lorna Brazell, a partner at the multinational law firm Osborne Clarke, says in the report that we don't give whales and gorillas personhood, so there is no need to jump at robotic personhood.

  But the wariness is there. The report acknowledges the possibility that, within the space of a few decades, AI could surpass human intellectual capacity and challenge the human-robot relationship. Finally, the report calls for the creation of a European agency for robotics and AI that can provide technical, ethical, and regulatory expertise. If MEPs vote in favour of legislation, the report will go to the European Commission, which has three months to decide what legislative steps it will take.

  We too, have a role to play in making sure the next generation has not just the opportunity, but the determination, to engage fully with the study of science at an early level, so that they can go on to fulfil their potential, and create a better world for the whole human race.

  This is what I meant, when I was talking to you just now about the importance of learning and education. We need to take this beyond a theoretical discussion of how things should be, and take action, to make sure they have the opportunity to get on board. We stand on the threshold of a brave new world. It is an exciting, if precarious place to be, and you are the pioneers. I wish you well.

  Chinese technology leaders, scientists, investors and web users put questions to Prof. Hawking

  Professor Hawking, we have learned so much from your insight.

  Next I’m going to ask some questions. These are from Chinese scientists and entrepreneurs.

  Kai-Fu Lee, CEO of Sinovation Ventures:

  "The large internet companies have access to massive databases, which allows them to make huge strides in AI by violating user's privacy. These companies can’t truly discipline themselves as they are lured by huge economic interests. This vastly disproportionate access to data could cause small companies and startups to fail to innovate. You have mentioned numerous times that we should restrain artificial intelligence, but it’s much harder to restrain humans. What do you think we can do to restrain the large internet companies?"

  As I understand it, the companies are using the data only for statistical purposes, but use of any personal information should be banned. It would help privacy if all material on the internet were encrypted by quantum cryptography, with a code that the internet companies could not break in a reasonable time. But the security services would object.

  Professor, the second question is from Fu Sheng, CEO, Cheetah Mobile:

  “Does the human soul exist as a quantum form, or in another form in higher-dimensional space?”

  I believe that recent advances in AI, such as computers winning at chess and Go, show that there is no essential difference between the human brain and a computer, contrary to the opinion of my colleague Roger Penrose. Would one say a computer has a soul? In my opinion, the notion of an individual human soul is a Christian concept, linked to the afterlife, which I consider to be a fairy story.

  Professor, the third question is from Ya-Qin Zhang, President, Baidu:

  “The way human beings observe and abstract the universe is constantly evolving, from observation and estimation to Newton's laws and Einstein's equations, and now to data-driven computation and AI. What is next?”

  We need a new quantum theory, which unifies gravity with the other forces of nature. Many people claim that it is string theory, but I have my doubts. So far about the only prediction is that space-time has ten dimensions.

  Professor, the fourth question is from Zhang Shoucheng, Professor of Physics, Stanford University:

  “If you were to tell aliens about the highest achievements of our human civilization on the back of one envelope, what would you write ?”

  It is no good telling aliens about beauty or any other possible art form that we might consider to be the highest artistic achievement, because these are very human-specific. Instead I would write about Gödel's Incompleteness Theorems and Fermat's Last Theorem. These are things aliens would understand.

  The next question is from myself:

  “We wish to promote the scientific spirit at all 9 GMIC conferences globally. What three books do you recommend technology leaders read to better understand the coming future and the science that is driving it?”

  They should be writing books not reading them. One fully understands something only when one has written a book about it.

  The next question is from a Weibo user:

  “What is the one thing we should never do in life, and the one thing we should all do?”

  We should never give up, and we should all strive to understand as much as we can.

  The next question is also from a Weibo user:

  “Human beings have experienced many evolutions, for example from the Stone Age and the age of steam to the age of electricity. What do you think will drive the next evolution?”

  Advances in computer science, including artificial intelligence and quantum computing. Technology already forms a major part of our lives, but in the coming decades it will permeate every aspect of our society, intelligently supporting and advising us in many areas, including healthcare, work, education and science. But we must make sure that we control AI, not that it controls us.

  Professor Hawking, the last question is from Hai Quan, musician and VC:

  “If the technology for interstellar migration is not yet mature, do human beings face unsolvable challenges that could lead to human extinction, apart from external catastrophes like an asteroid hitting Earth?”

  Yes: over-population, disease, war, famine, climate change and lack of water. It is within the power of man to solve these crises, but unfortunately they remain serious threats to our continued presence on Earth. These are all solvable, but so far have not been solved.