警惕人工智能的失控風險

As an experiment, Tunde Olanrewaju messed around one day with the Wikipedia entry of his employer, McKinsey. He edited the page to say that he had founded the consultancy firm. A friend took a screenshot to preserve the revised record.

作爲一項實驗,通德·奧蘭雷瓦朱(Tunde Olanrewaju)有一天給維基百科(Wikipedia)關於他僱主的條目——麥肯錫(McKinsey)——搗了點亂。他編輯了該頁面,說自己創辦了這家諮詢公司。一位朋友將修改後的記錄截圖保存了。

Within minutes, Mr Olanrewaju received an email from Wikipedia saying that his edit had been rejected and that the true founder’s name had been restored. Almost certainly, one of Wikipedia’s computer bots that police the site’s 40m articles had spotted, checked and corrected his entry.

幾分鐘內,奧蘭雷瓦朱收到來自維基百科的電子郵件,告知他的編輯操作被拒絕,麥肯錫真正創始人的名字已被恢復。幾乎可以肯定,管理維基百科網站上4000萬篇條目的機器人(bot)之一已發現、覈對並糾正了被他編輯的條目。

It is reassuring to know that an army of such clever algorithms is patrolling the frontline of truthfulness — and can outsmart a senior partner in McKinsey’s digital practice. In 2014, bots were responsible for about 15 per cent of all edits made on Wikipedia.

我們非常欣慰地得知,大量如此聰明的算法正巡邏在保衛真實性的前線——並且可以比麥肯錫旗下數字業務的資深合夥人更聰明。2014年,維基百科上約15%的編輯量是由機器人完成的。
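The patrol-and-revert behaviour described above can be sketched as a toy rule-based bot. Everything here is hypothetical — the fact table, the function name, the single trusted entry; real Wikipedia bots work through the MediaWiki API with far richer heuristics:

```python
# Toy sketch of a patrol bot: it keeps a record of trusted facts and
# reverts any edit that contradicts them. Hypothetical names throughout;
# not how Wikipedia's actual bot framework is implemented.

TRUSTED_FACTS = {"McKinsey": {"founder": "James O. McKinsey"}}

def review_edit(article, field, new_value):
    """Return the value that should stand after the bot's check."""
    trusted = TRUSTED_FACTS.get(article, {})
    if field in trusted and new_value != trusted[field]:
        return trusted[field]   # revert: restore the trusted value
    return new_value            # accept the edit (no trusted fact to check against)

print(review_edit("McKinsey", "founder", "Tunde Olanrewaju"))
# → James O. McKinsey
```

The edit in the anecdote is rejected because it contradicts the stored fact; edits on fields the bot knows nothing about pass through untouched.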

But, as is the way of the world, algos can be used for offence as well as defence. And sometimes they can interact with each other in unintended and unpredictable ways. The need to understand such interactions is becoming ever more urgent as algorithms become so central in areas as varied as social media, financial markets, cyber security, autonomous weapons systems and networks of self-driving cars.

但是,世上的事情都是如此:算法既可以用於攻擊,也可以用於防禦。有時,算法之間會以意想不到、不可預測的方式相互作用。隨着算法在社交媒體、金融市場、網絡安全、自主武器系統和自動駕駛汽車網絡等不同領域發揮如此核心的作用,人類越來越迫切地需要理解這種相互作用。

A study published last month in the research journal Plos One, analysing the use of bots on Wikipedia over a decade, found that even those designed for wholly benign purposes could spend years duelling with each other.

今年2月,研究期刊《公共科學圖書館·綜合》(PLoS ONE)發表的一篇論文發現,即使那些出於完全善良意願而設計的機器人,也可能會花費數年時間彼此爭鬥。這篇論文分析了十年來維基百科上機器人的使用情況。

In one such battle, Xqbot and Darknessbot disputed 3,629 entries, undoing and correcting the other’s edits on subjects ranging from Alexander the Great to Aston Villa football club.

在一次這樣的爭鬥中,Xqbot和Darknessbot在3629個條目——從亞歷山大大帝(Alexander the Great)到阿斯頓維拉(Aston Villa)足球俱樂部——上發生了衝突,反覆撤消和更正對方的編輯結果。
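A duel like the Xqbot/Darknessbot one can be reproduced with a minimal simulation (hypothetical, not the actual bots' code): two well-intentioned bots each "correct" an entry to their own preferred form, so neither version ever sticks.

```python
# Minimal simulation of a bot-on-bot edit war: each bot reverts the entry
# to its own preferred value, so the two revert each other indefinitely.
# Illustrative only; the real bots disputed 3,629 entries over years.

def run_bots(initial, bot_a_pref, bot_b_pref, rounds):
    """Count how many reverts two duelling bots make over `rounds` passes."""
    value, reverts = initial, 0
    for _ in range(rounds):
        for pref in (bot_a_pref, bot_b_pref):
            if value != pref:
                value = pref     # the bot "fixes" the entry to its preference
                reverts += 1
    return reverts

# Two bots disagreeing over one entry's form never converge:
print(run_bots("Aston Villa F.C.", "Aston Villa", "Aston Villa F.C.", rounds=100))
# → 200
```

One hundred rounds produce two hundred reverts: each bot undoes the other's edit every round, and the loop only stops because we cap it.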

The authors, from the Oxford Internet Institute and the Alan Turing Institute, were surprised by the findings, concluding that we need to pay far more attention to these bot-on-bot interactions. “We know very little about the life and evolution of our digital minions.”

來自牛津互聯網研究院(Oxford Internet Institute)和艾倫·圖靈研究所(Alan Turing Institute)的幾位論文作者對這些發現感到吃驚。他們得出結論,我們需要對這些機器人之間的相互作用給予更多關注。“我們對我們的數字小黃人的生活和進化知之甚少。”

Wikipedia’s bot ecosystem is gated and monitored. But that is not the case in many other reaches of the internet where malevolent bots, often working in collaborative botnets, can run wild.

維基百科的機器人生態系統有門禁,受到監控。但在互聯網所觸及的許多其他領域,情況並非如此:惡意機器人——通常結成協作的殭屍網絡(botnet)來工作——可能會失控。

The authors highlighted the dangers of such bots mimicking humans on social media to “spread political propaganda or influence public discourse”. Such is the threat of digital manipulation that a group of European experts has even questioned whether democracy can survive the era of Big Data and Artificial Intelligence.

幾位作者突出強調了這種機器人模仿人類、在社交媒體上“傳播政治宣傳言論或影響公共話語”的危險。數字操縱的威脅如此嚴峻,以致一羣歐洲專家甚至質疑民主在大數據和人工智能時代還有沒有活路。

It may not be too much of an exaggeration to say we are reaching a critical juncture. Is truth, in some senses, being electronically determined? Are we, as the European academics fear, becoming the “digital slaves” of our one-time “digital minions”? The scale, speed and efficiency of some of these algorithmic interactions are reaching a level of complexity beyond human comprehension.

要說我們正在逼近一個緊要關頭,也許不太誇張。在某種意義上,真理是否正由電子手段確定?我們是否正如歐洲學者所害怕的那樣,正成爲曾經聽命於我們的“數字小黃人”的“數字奴隸”?一些算法之間交互作用的規模、速度和效率開始達到人類無法理解的複雜程度。

If you really want to scare yourself on a dark winter’s night you should read Susan Blackmore on the subject. The psychologist has argued that, by creating such computer algorithms we may have inadvertently unleashed a “third replicator”, which she originally called a teme, later modified to treme.

如果你真的想在冬日暗夜裏嚇唬自己的話,你應該讀一讀蘇珊·布萊克莫爾(Susan Blackmore)有關這個題材的作品。這位心理學家認爲,通過創建這樣的計算機算法,我們也許已不經意地釋放出一個“第三複制因子”——她最初稱之爲技因(teme),後來改稱爲treme。

The first replicators were genes that determined our biological evolution. The second were human memes, such as language, writing and money, that accelerated cultural evolution. But now, she believes, our memes are being superseded by non-human tremes, which fit her definition of a replicator as being “information that can be copied with variation and selection”.

第一類複製因子是決定我們生物進化的基因。第二類是人類的迷因(meme)——如語言、文字和貨幣——它們加速了文化演變。但她認爲,現在我們的迷因正在被非人類的treme取代;treme符合她對複製因子的定義,即“可以帶着變異和選擇被複製的信息”。
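Blackmore's definition of a replicator — information copied with variation and selection — can be made concrete with a toy evolutionary loop. The target-driven fitness function below is our own illustrative assumption, not part of her theory:

```python
import random

# Toy "copy with variation and selection" loop: strings are copied with
# random miscopies (variation), and the copy closest to a target survives
# to be copied again (selection). Purely illustrative.

random.seed(0)
ALPHABET = "abcdefghijklmnopqrstuvwxyz "
TARGET = "digital minions"

def mutate(s, rate=0.1):
    """Copy with variation: each character may be miscopied."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def fitness(s):
    """Selection criterion: how many characters match the target."""
    return sum(a == b for a, b in zip(s, TARGET))

population = ["x" * len(TARGET)] * 50
for generation in range(200):
    best = max(population, key=fitness)             # selection
    population = [mutate(best) for _ in range(50)]  # copying with variation
    if fitness(best) == len(TARGET):
        break

print(max(population, key=fitness))
```

Nothing in the loop "knows" the answer; repeated imperfect copying plus selection is enough for the information to converge — which is exactly what makes replicators, in her framing, such a potent force.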

“We humans are being transformed by new technologies,” she said in a recent lecture. “We have let loose the most phenomenal power.”

“我們人類正在被新技術所改造,”她在最近的一次講座中說,“我們把一種最驚人的力量放出來了。”

For the moment, Prof Blackmore’s theory remains on the fringes of academic debate. Tremes may be an interesting concept, says Stephen Roberts, professor of machine learning at the University of Oxford, but he does not think we have lost control.

目前,布萊克莫爾教授的理論仍遊離於學術辯論的邊緣。牛津大學(University of Oxford)機器學習教授斯蒂芬·羅伯茨(Stephen Roberts)說,Treme或許是個有趣的概念,但他認爲,我們並未失去控制權。

“There would be a lot of negative consequences of AI algos getting out of hand,” he says. “But we are a long way from that right now.”

“人工智能(AI)算法失控將產生很多負面後果,”他說,“但現在,我們距離這個局面還有很遠的距離。”

The more immediate concern is that political and commercial interests have learnt to “hack society”, as he puts it. “Falsehoods can be replicated as easily as truth. We can be manipulated as individuals and groups.”

更緊迫的問題是,用他的話說,政治和商業利益集團已學會了“侵入社會”。“謊言可以像真理一樣輕易地複製。我們作爲個人和團體,都可能被操縱。”

His solution? To establish the knowledge equivalent of the Millennium Seed Bank, which aims to preserve plant life at risk from extinction.

他的解決方案是什麼?爲知識建立類似千年種子銀行(Millennium Seed Bank)那樣的保護計劃。千年種子銀行旨在保護瀕危植物免於滅絕。

“As we de-speciate the world we are trying to preserve these species’ DNA. As truth becomes endangered we have the same obligation to record facts.”

“隨着人類讓這個世界上的物種減少,我們在試圖保護這些物種的DNA。隨着真相變得瀕危,我們有同樣的義務記錄下事實。”

But, as we have seen with Wikipedia, that is not always such a simple task.

但是,正如我們在維基百科中所看到的情況那樣,這並不總是那麼簡單的任務。
