Disobedient robots have arrived

2015/12/04

  December 4 interpretation practice article: disobedient robots have arrived

  If Hollywood ever taught scientists a lesson, it is what happens when machines start to rebel against their human creators.

  Even so, roboticists have begun teaching robots to refuse human orders.

  They have programmed a pair of small humanoid robots, named Shafer and Dempster, to disobey human instructions that would put the robots' own safety at risk.

  Compared with the homicidal machines of Terminator, the result of this experiment is more like Sonny, the apologetic robot in I, Robot starring Will Smith; either way, it demonstrates an important principle.

  Engineer Gordon Briggs and Dr Matthias Scheutz of Tufts University in Massachusetts are trying to create robots that can interact in a more human way.

  In a paper presented to the Association for the Advancement of Artificial Intelligence, the pair said: "Humans reject directives for a wide range of reasons: from inability all the way to moral qualms."

  "Given the limitations of autonomous systems, most directive-rejection mechanisms have only needed the former class of excuse: lack of knowledge or lack of ability."

  "However, as the abilities of autonomous agents continue to develop, a growing community is interested in machine ethics, the field of enabling autonomous agents to reason ethically about their own actions."

  The robots they created follow verbal instructions from a human operator, such as "stand up" and "sit down".

  However, when ordered to walk into an obstacle or off the edge of a table, the robots politely refuse.

  Asked to walk forward on a table, a robot stands its ground and tells the person giving the order: "Sorry, I cannot do this as there is no support ahead."

  Asked a second time, it replies: "But, it is unsafe."

  Rather touchingly, when the human tells the robot that it will be caught if it walks off the edge of the table, the robot trustingly agrees and walks forward.

  Similarly, when told that the obstacle ahead is not solid, the robot obligingly walks through it.

  To achieve this, the researchers introduced reasoning mechanisms into the robots' software, allowing them to assess their environment and judge whether a command would compromise their safety.

  However, the pair's work appears to breach the laws of robotics drawn up by science-fiction author Isaac Asimov, which state that a robot must obey orders given by human beings.

  Many artificial-intelligence experts believe it is vital to ensure robots adhere to these laws, which also require that a robot never harm a human being and that it protect its own existence only where this does not conflict with the other laws.

  The work may trigger fears that if artificial intelligence gives robots the capacity to disobey human orders, the results could be disastrous.

  Many leading figures, including Professor Stephen Hawking and Elon Musk, have warned that artificial intelligence could spiral out of control.

  Others warn that robots could ultimately replace many workers in their jobs, and some fear the machines could take over entirely.

  In the film I, Robot, artificial intelligence allows the robot Sonny to overcome his programming and disobey human instructions.

  Gordon Briggs and Dr Matthias Scheutz added: "There is still much more work to be done to make these reasoning and dialogue mechanisms more powerful and generalised."

  [Reference translation]

  If Hollywood ever had a lesson for scientists it is what happens if machines start to rebel against their human creators.

  Yet despite this, roboticists have started to teach their own creations to say no to human orders.

  They have programmed a pair of diminutive humanoid robots called Shafer and Dempster to disobey instructions from humans if it puts their own safety at risk.

  The results are more like the apologetic robot rebel Sonny from the film I, Robot, starring Will Smith, than the homicidal machines of Terminator, but they demonstrate an important principle.

  Engineers Gordon Briggs and Dr Matthias Scheutz from Tufts University in Massachusetts are trying to create robots that can interact in a more human way.

  In a paper presented to the Association for the Advancement of Artificial Intelligence, the pair said: 'Humans reject directives for a wide range of reasons: from inability all the way to moral qualms.

  'Given the reality of the limitations of autonomous systems, most directive rejection mechanisms have only needed to make use of the former class of excuse - lack of knowledge or lack of ability.

  'However, as the abilities of autonomous agents continue to be developed, there is a growing community interested in machine ethics, or the field of enabling autonomous agents to reason ethically about their own actions.'
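  The two classes of excuse the authors distinguish — epistemic (lack of knowledge or ability) and moral — could be sketched as a simple ordered check. This is purely an illustration of the taxonomy; the enum and function names are assumptions, not the paper's actual code.

```python
# Illustrative taxonomy of directive-rejection reasons, following the paper's
# distinction between epistemic excuses and moral qualms. Names are assumed.
from enum import Enum, auto

class Rejection(Enum):
    LACK_OF_KNOWLEDGE = auto()   # "I don't know how to do that"
    LACK_OF_ABILITY = auto()     # "I am not able to do that"
    MORAL_QUALM = auto()         # "It would be wrong for me to do that"

def classify_excuse(knows_how: bool, is_able: bool, is_permissible: bool):
    """Return the first applicable reason to reject a directive, else None."""
    if not knows_how:
        return Rejection.LACK_OF_KNOWLEDGE
    if not is_able:
        return Rejection.LACK_OF_ABILITY
    if not is_permissible:
        return Rejection.MORAL_QUALM
    return None  # no grounds for rejection: carry out the directive
```

  The ordering reflects the quote: simpler autonomous systems only ever reach the first two branches, while moral reasoning only matters once knowledge and ability are no longer the binding constraints.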

  The robots they have created follow verbal instructions such as 'stand up' and 'sit down' from a human operator.

  However, when they are asked to walk into an obstacle or off the end of a table, for example, the robots politely decline to do so.

  When asked to walk forward on a table, the robots refuse to budge, telling their creator: 'Sorry, I cannot do this as there is no support ahead.'

  Upon a second command to walk forward, the robot replies: 'But, it is unsafe.'

  Perhaps rather touchingly, when the human then tells the robot that they will catch it if it reaches the end of the table, the robot trustingly agrees and walks forward.


  Similarly, when it is told that an obstacle in front of it is not solid, the robot obligingly walks through it.

  To achieve this the researchers introduced reasoning mechanisms into the robots' software, allowing them to assess their environment and examine whether a command might compromise their safety.
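  The behaviour described above — refusing a command that lacks support ahead, then accepting it once a human's assurance updates the robot's beliefs — can be sketched as a minimal safety check. This is a hypothetical reconstruction; `Perception`, `evaluate_command`, and `accept_assurance` are invented names, not the Tufts team's software.

```python
# Hypothetical sketch of a directive-rejection check, loosely modelled on the
# reasoning mechanism described in the article. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Perception:
    support_ahead: bool    # is there floor/table surface in front?
    obstacle_ahead: bool   # is something solid blocking the path?

def evaluate_command(command: str, world: Perception) -> str:
    """Accept a movement command only if it does not compromise safety."""
    if command == "walk forward":
        if not world.support_ahead:
            return "Sorry, I cannot do this as there is no support ahead."
        if world.obstacle_ahead:
            return "Sorry, there is an obstacle ahead."
    return "OK"

def accept_assurance(world: Perception) -> Perception:
    """A human's 'I will catch you' revises the robot's belief about support."""
    return Perception(support_ahead=True, obstacle_ahead=world.obstacle_ahead)
```

  A command toward the table edge is politely refused, but after `accept_assurance` revises the world model, the same command is accepted — mirroring the trusting behaviour reported in the experiment.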

  However, their work appears to breach the laws of robotics drawn up by science fiction author Isaac Asimov, which state that a robot must obey the orders given to it by human beings.

  Many artificial intelligence experts believe it is important to ensure robots adhere to these rules - which also require robots to never harm a human being and for them to protect their own existence only where it does not conflict with the other two laws.

  The work may trigger fears that if artificial intelligence is given the capacity to disobey humans, then it could have disastrous results.

  Many leading figures, including Professor Stephen Hawking and Elon Musk, have warned that artificial intelligence could spiral out of our control.

  Others have warned that robots could ultimately end up replacing many workers in their jobs while there are some who fear it could lead to the machines taking over.

  In the film I, Robot, artificial intelligence allows a robot called Sonny to overcome his programming and disobey the instructions of humans.

  However, Dr Scheutz and Mr Briggs added: 'There still exists much more work to be done in order to make these reasoning and dialogue mechanisms much more powerful and generalised.'