Will Killer Robots Become Terminators?

2015/08/11

  August 11 interpretation article: Will killer robots become Terminators?

  "I've seen things you people wouldn't believe," says the villain played by Rutger Hauer at the end of the film Blade Runner, after he has hauled the character played by Harrison Ford back onto the rooftop and spared his life. "People" is the operative word in that line, because Roy Batty is not a person but an android, one who has escaped to Earth from a space colony to take revenge on his creator, the Tyrell Corporation.

  That is what I call a "killer robot": a being that can hold a highly intelligent conversation with you before wiping you out. Blade Runner was adapted from Philip K. Dick's dystopian fantasy novel Do Androids Dream of Electric Sheep? When it came out in 1982 it was still science fiction; now it seems faintly plausible, at least plausible enough for artificial intelligence researchers, who recently warned of the dangers of an autonomous weapons arms race.

  The killer machines feared by experts such as Elon Musk, founder of Tesla Motors, and the theoretical physicist Stephen Hawking are only crude terminators compared with the Nexus replicants of Blade Runner. When Rachael in Blade Runner shoots dead Harrison Ford's enemy, the hero falls in love with her: a female android who does not realise she is a replicant. But when an armed quadcopter takes out enemy soldiers, no one will fall in love with it.

  Robots can kill us, but they cannot understand us. Autonomous killing machines are becoming a reality: Israel already has the Harpy anti-radar drone, which loiters in the sky before choosing targets on its own and destroying them. But a sentient, sophisticated machine with common sense, able to grasp human emotions and predict human behaviour, remains a distant prospect.

  In theory, such a robot can be built. Artificial intelligence researchers see no barrier in principle to robots developing higher reasoning powers, or the kind of physical dexterity humans possess. The only workers left on car assembly lines are those who can attach screws nimbly and those who can reach inside the body shells to fit electrical wiring; for now, robots still cannot beat them.

  Machines also hold certain advantages. They do not have to squeeze their processing units into a skull, and they do not need a supply of oxygen, an energy-hungry technology. Nor is their reproduction bound by the rules of evolution; they can simply keep getting cleverer.

  Yet despite rapid advances in machine learning, visual and voice recognition, and neural network processing (the elements now transforming the potential of artificial intelligence), robots still cannot understand humans. Computers can beat humans easily at chess, but they cannot play poker at the highest level, because they would need to see through their opponents' bluffs.

  "Computers are becoming better and better at perception tasks," says Fei-Fei Li, director of Stanford University's artificial intelligence laboratory. "Algorithms can already identify thousands of types of cars, while I can only recognise three. But at the cognitive, empathetic and emotional level, machines are nowhere near humans."

  I too have experienced something you people would not believe: Google's self-driving car. What struck me as it toured Mountain View, California recently was that it felt human. It accelerated from junctions confidently, even assertively, closing the gap with the vehicle in front so that other cars could not cut in. If every driver were this calm and rational, our lives would be safer.

  Inside the Google self-driving car, you can see how it perceives the world through its sensors and rooftop radar. The outlines of surrounding objects, including pedestrians, buses and other cars, appear as hollow, moving shapes on the screen of a laptop held by a Google engineer. The objects are categorised by colour, so the car knows it should react to them and how far to steer clear.

  In other words, if you mounted a missile launcher on the roof of a self-driving car and fitted machine guns to its sides (not that Google would do such a thing, of course), it could become a perfectly capable killer robot. It could cruise calmly through cities, scanning for warm, slow-moving, pink-coloured targets to destroy.

  So it is not scaremongering for scientists to warn that artificial intelligence research risks being tainted by association with autonomous weapons. The internet itself grew out of research funded by the US Department of Defense in the 1960s, and military and space programmes have the deepest pockets and the keenest interest in developing cutting-edge technology. What would be foolish is to think that the advent of killer robots means machines are about to take over the world.

  Destroying things is easier than understanding or creating them. Artificial intelligence, the ability to scan, process and analyse large data sets, is not the same as artificial general intelligence, which has the capacity to perform most human tasks.

  Even those who warn that machines will take jobs now done by humans accept that managerial, professional and artistic work remains safe, because it demands high-level reasoning, empathy and creativity. A robot's abilities are still very limited: it can identify a woman by scanning a set of features, but it cannot grasp her mood, nor use common sense to solve an unexpected puzzle.

  Roy Batty has defeated the human bounty hunter in combat, yet reaches out to save him as he falls from the rooftop. "Quite an experience to live in fear, isn't it?" he tells the bounty hunter. "That's what it's like to be a slave." Let us never allow ourselves to become slaves.
