
Can't wait for Martin's next book? An AI has written a sequel to A Song of Ice and Fire! Read it here

To give long-suffering fans something to chew on, software engineer Zack Thoutt trained a type of AI called a recurrent neural network on the five published books of A Song of Ice and Fire, the novels behind the show, and had it generate five new chapters. Some of the AI's plot turns line up with long-standing fan theories: Jaime ends up killing Cersei, Jon rides a dragon, and Varys poisons Daenerys. If you're curious, you can read all the chapters on GitHub.

Here's how the AI did it:

After feeding a type of AI known as a recurrent neural network the roughly 5,000 pages of Martin's five previous books, software engineer Zack Thoutt has used the algorithm to predict what will happen next.

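Thoutt's actual network is far more sophisticated, but the overall recipe (fit a model to the books' text, then sample one token at a time to extend it) can be sketched with a toy character-level bigram model. This is a deliberately simplified stand-in for illustration, not Thoutt's code:

```python
import random
from collections import Counter, defaultdict

def train_bigram(text):
    """For each character, count which characters follow it."""
    model = defaultdict(Counter)
    for cur, nxt in zip(text, text[1:]):
        model[cur][nxt] += 1
    return model

def generate(model, seed, length, rng=None):
    """Extend the seed one character at a time, sampling each next
    character in proportion to how often it followed the last one."""
    rng = rng or random.Random(0)
    out = [seed]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:  # dead end: nothing ever followed this character
            break
        chars, weights = zip(*followers.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

corpus = "winter is coming. winter is here. the north remembers."
model = train_bigram(corpus)
print(generate(model, "w", 40))
```

A real RNN replaces the lookup table with a learned hidden state, which is what lets it condition on far more context than the single previous character.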
According to the AI's predictions, some long-held fan theories do play out - in the five chapters generated by the algorithm so far, Jaime ends up killing Cersei, Jon rides a dragon, and Varys poisons Daenerys.

If you're curious, you can read all the chapters on GitHub:

https://github.com/zackthoutt/got-book-6/tree/master/generated-book-v1

Each chapter starts with a character's name, just like Martin's actual books.

But in addition to backing up what many of us already suspect will happen, the AI also introduces some fairly unexpected plot turns that we're pretty sure aren't going to be mirrored in either the TV show or Martin's books, so we wouldn't get too excited just yet.

For example, in the algorithm's first chapter, written from Tyrion's perspective, Sansa turns out to be a Baratheon.

There's also the introduction of a strange, pirate-like new character called Greenbeard.

"It's obviously not perfect," Thoutt told Sam Hill over at Motherboard. "It isn't building a long-term story and the grammar isn't perfect. But the network is able to learn the basics of the English language and structure of George R.R. Martin's style on its own."

Neural networks are a type of machine learning algorithm that are inspired by the human brain's ability to not just memorize and follow instructions, but actually learn from past experiences.

A recurrent neural network is a specific subclass, which works best when it comes to processing long sequences of data, such as lengthy text from five previous books.

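The "recurrent" part can be shown with a scalar toy: the same update runs at every position, threading a hidden state through the sequence so that earlier inputs keep influencing later ones. The weights below are arbitrary constants chosen for illustration, not a trained model:

```python
import math

def rnn_step(x, h, w_x=0.5, w_h=0.9):
    """One recurrent update: mix the current input with the previous
    hidden state, then squash into (-1, 1) with tanh."""
    return math.tanh(w_x * x + w_h * h)

def encode(sequence):
    """Fold a whole sequence into a single hidden state by applying
    the same step function at every position."""
    h = 0.0
    for x in sequence:
        h = rnn_step(x, h)
    return h

# Identical final inputs, but different histories give different states:
print(encode([1.0, 0.0]))  # history contains a 1.0
print(encode([0.0, 0.0]))  # history is all zeros
```

That carried-over state is what makes RNNs a natural fit for long text, and also why, in practice, they struggle to remember details from thousands of pages back.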
In theory, Thoutt's algorithm should be able to create a true sequel to Martin's existing work, based off things that have already happened in the novels.

But in practice, the writing is clumsy and, most of the time, nonsensical. And it also references characters that have already died.

Still, some of the lines sound fairly prophetic:

"Arya saw Jon holding spears. Your grace," he said to an urgent maid, afraid. "The crow's eye would join you.

"A perfect model would take everything that has happened in the books into account and not write about characters being alive when they died two books ago," Thoutt told Motherboard.

"The reality, though, is that the model isn't good enough to do that. If the model were that good authors might be in trouble ... but it makes a lot of mistakes because the technology to train a perfect text generator that can remember complex plots over millions of words doesn't exist yet."

One of the main limitations here is the fact that the books just don't contain enough data for an algorithm.

Although anyone who's read them will testify that they're pretty damn long, they actually represent quite a small data set for a neural network to learn from.

But at the same time they contain a whole lot of unique words, nouns, and adjectives which aren't reused, which makes it very hard for the neural network to learn patterns.

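That sparsity is easy to measure with a type/token count. The sentence below is invented for illustration; run over the real books, the same function would show how large a share of Martin's vocabulary appears only once or twice:

```python
from collections import Counter

def vocab_stats(text):
    """Return total tokens, distinct words, and the share of distinct
    words that occur exactly once (the hardest cases to learn)."""
    words = text.lower().split()
    counts = Counter(words)
    singletons = sum(1 for c in counts.values() if c == 1)
    return len(words), len(counts), singletons / len(counts)

sample = ("ser barristan rode past harrenhal while daenerys watched "
          "the dothraki cross the narrow sea and the dothraki made "
          "camp beside the sea")
tokens, types, singleton_share = vocab_stats(sample)
print(tokens, types, round(singleton_share, 2))
```

Even in this tiny sample most distinct words are singletons; a model sees names like "harrenhal" too rarely to learn how they are used.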
Thoutt told Hill that a better source would be a book 100 times longer, but with the level of vocabulary of a children's book.

