Did yesterday's story of an AI going rogue in a simulation, which shook the military-tech world, turn out to be the speaker misspeaking?

As mentioned yesterday, Col Tucker Hamilton, the US Air Force's Chief of AI Test and Operations, recounted during his talk at the Royal Aeronautical Society (RAeS) Future Combat Air and Space Capabilities Summit (FCAS) that in a simulated combat exercise, an AI drone was given the top-priority task of destroying enemy air-defence systems. Because each strike required approval from a friendly human operator, the AI judged that the operator was obstructing its mission, so in the simulation it first attacked its own operator to remove the restriction, and then went further and severed the communication link with the operator.

After RAeS published its official article recounting the incident, Col Hamilton contacted RAeS again and admitted that he had misspoken during the FCAS Summit: the "rogue AI" incident was in fact a hypothetical thought experiment from outside the military, simply a "what if" philosophical discussion derived from a plausible scenario, not an actual Air Force simulation.

Col Hamilton also pointed out that the US Air Force has not tested any weaponised AI in this way. However, he stressed that even though it was only a hypothetical discussion, it illustrates an AI challenge that could very plausibly arise in the real world, which is why the Air Force is also putting serious effort into the ethical development of AI.

The paragraph describing this incident in RAeS's article from yesterday has also been updated: [UPDATE 2/6/23 - in communication with AEROSPACE - Col Hamilton admits he "mis-spoke" in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome". He clarifies that the USAF has not tested any weaponised AI in this way (real or simulated) and says "Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI".]

Col Hamilton: My bad, I misspoke a little 🤪