When AI Begins to Create: The Application and Impact of Big Data in the Arts

Once upon a time... Robin Williams starred in the film Bicentennial Man. In one key scene, the robot he plays, Andrew, is playing with a glass horse belonging to Amanda, the younger daughter of the household, and accidentally drops and shatters it, upsetting her. Andrew then pores over books, teaches himself the craft of carving, and produces a lifelike wooden horse, which he quietly gives to Amanda. She is delighted, but her father is astonished: the family's robot seems to possess a "creativity" and "autonomy" that other robots lack.

Indeed, building human-like machines has long been one of humanity's dreams, but the goal has mostly been to replace human labor and put machines to work for us, not to endow them with "creativity" or "autonomy." The now-familiar word "robot" was coined by the Czech writer Karel Čapek in his 1920 science-fiction play Rossum's Universal Robots; in Czech, "robot" means slave or laborer.

Perhaps for that very reason, the question of whether future robots will have "creativity" and "autonomy" involves at least two dimensions: ethics and technology. The ethical side is more complicated and more contentious; for now, few of us seem psychologically prepared for it, and it is hard to say whether creative, autonomous robots would be a blessing or a curse for humanity. The technical side is comparatively simple: it comes down to whether our technology can actually build such machines, and judging from the pace of technological progress, that is only a matter of time.

Last year was a year of great leaps forward for artificial intelligence (AI) in the realm of "creativity":

February: Google and the Gray Area Foundation for the Arts jointly held an exhibition in San Francisco of abstract works created by the AI "Deep Dream"; its 29 pieces sold for a total of US$97,600 (about NT$3.148 million).

March: Google's AI "AlphaGo" beat the South Korean 9-dan Go player Lee Sedol 4:1 in their five-game match. In Japan, the results of the Nikkei "Hoshi Shinichi Award" literary competition were announced: of four short stories written with AI and submitted by labs at Future University Hakodate and the University of Tokyo, one passed the first round of judging.

April: The Amsterdam-based financial group ING and the advertising agency J. Walter Thompson, with technical support from Microsoft Azure, reproduced Rembrandt van Rijn's style and brushwork to create a new "Rembrandt" portrait.

June: Google released an 80-second piano piece composed by its AI "Magenta." More recently, the company has fed 2,865 romance novels into its AI engine for training and is preparing to publish AI-written romance fiction.

All of these projects were carried out or completed with the help of researchers, using big-data analysis combined with deep learning, yet they already show AI's future potential for "creativity." "Autonomy" is still some way off, but there is more than enough here to start a conversation about the world to come.

The Next Rembrandt project used 3D scanners and software to analyze 346 works by Rembrandt van Rijn. (Image courtesy of www.nextrembrandt.com)

Technology as an aid to human creation

The development of modern science and technology has mattered greatly to the history of modern art. "Science" and "art" may look like two parallel lines with little intersection, the recent rise of new media art aside, yet looking back through art history, science has deeply shaped art's development: Renaissance artists used optical instruments to help them paint, and the analysis of the light spectrum played a key role in the emergence of Impressionism. Blaise Agüera y Arcas, an AI researcher at Google, has said: "We believe artificial intelligence technology will have a profound influence on innovation in art."

Put simply, in the past technology was a tool that assisted human creation. Now, with human help, AI technology is beginning to create in simple ways, even imitating the style of an individual master to produce new works.

Big-data analysis and deep learning

Take the Dutch ING …


AML Optimization – Do More with Less

With AML fines and regulator demands growing by the day, the stakes for AML teams have never been higher. Every sign of potential money-laundering activity has to be monitored, which puts a massive burden on investigation teams. But traditional approaches to AML transaction monitoring are rigid, prone to false alarms, and liable to miss true incidents of money laundering. AML optimization efforts tend to be expensive and are often manual service engagements.

Unlike entrenched AML transaction monitoring solutions, our solution was built from the ground up to use the power of unsupervised machine learning to drive superior AML optimization. Rules-based and supervised machine learning systems require constant tuning as fraudsters discover new ways to evade them. Every false positive means wasted investigation cycles. Every false negative is an existential risk to your business. The good news is that we can help.

Unsupervised Machine Learning for AML Optimization

Reduced False Positives
Deep visibility into the subtle relationships that traditional systems miss translates into better efficiency.
Reduced False Negatives
We don’t rely on prior knowledge of specific money-laundering patterns, which increases effectiveness.
Easy, Automatic Upgrades
Modern deployment means you don’t have to deal with painful 3-5 year upgrade cycles; get the newest capabilities the minute they are ready.
Standalone or Add-on Deployment
There is no need to replace an existing AML transaction monitoring solution until you are ready.
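
As a rough illustration of the general approach (not our production system), the sketch below scores a hypothetical transaction feed with an off-the-shelf unsupervised model; the file name and feature columns are placeholders.

```python
# Minimal sketch of unsupervised anomaly scoring on a transaction feed.
# Illustrative only -- not our production system; the file and column
# names ("amount", "hour", "country_risk") are placeholders.
import pandas as pd
from sklearn.ensemble import IsolationForest

transactions = pd.read_csv("transactions.csv")
features = transactions[["amount", "hour", "country_risk"]]

# No labelled laundering cases are needed: the model learns what normal
# activity looks like and flags whatever deviates from it.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(features)

# Lower scores are more anomalous; route the worst cases to investigators.
transactions["anomaly_score"] = model.score_samples(features)
alerts = transactions.nsmallest(100, "anomaly_score")
print(alerts[["amount", "anomaly_score"]].head())
```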

Generating Magic cards using deep, recurrent neural networks

Hello! I’m a PhD candidate in computer science at the University of Alabama at Birmingham, and I’ve been investigating the use of deep neural networks for classification and problem-solving tasks. I saw a fun article about training a neural network on arbitrary data to generate novel sequences. For example, you force the network to read Shakespeare over and over, and eventually it can write its own texts in the style of Shakespeare. I saw that and thought: hey, why not Magic cards instead of Shakespeare? So I downloaded the source code (here) and a JSON corpus of Magic cards (here). I decided to feed a deep neural network all of the Magic cards ever made in the hopes that it might be able to conjure up some new ones. The network was relatively simple (I can give you the details if you’d like, but that gets technical). I would have built a more robust and complex network, but it would have taken far too long to do the computations; I’m waiting on some new GPU hardware to come in to speed up the process.
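
For anyone curious about the data prep, here is roughly the kind of script that flattens the corpus into one big text file for a character-level network. The field names assume an mtgjson-style dump, so treat it as a sketch rather than my exact code.

```python
# Rough sketch of flattening the card corpus into plain text for a
# character-level RNN. Field names assume an mtgjson-style JSON dump
# ("AllCards.json" is a hypothetical filename); adjust to your corpus.
import json

with open("AllCards.json", encoding="utf-8") as f:
    cards = json.load(f)

blocks = []
for card in cards.values():
    name = card.get("name", "")
    text = card.get("text", "")
    if name:
        # Optionally swap the card's own name for a placeholder like $THIS
        # so the network learns self-reference instead of memorizing names.
        text = text.replace(name, "$THIS")
    lines = [name, card.get("manaCost", ""), card.get("type", ""), text]
    if "power" in card:
        lines.append("{}/{}".format(card["power"], card.get("toughness", "")))
    blocks.append("\n".join(line for line in lines if line))

# One card per block, blank lines as separators, ready for char-rnn-style training.
with open("input.txt", "w", encoding="utf-8") as f:
    f.write("\n\n".join(blocks))
```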

Anyway, here are a few example cards produced by my network about two hours into the training process. The results produced by the recurrent neural network (RNN) early on were either verbose garbage or sensible-sounding cards that did absolutely nonsensical things. Keep in mind that the RNN has no prior knowledge of what Magic even is, let alone English, so it’s interesting that the results were even vaguely intelligible.

Amarogge Warfos
2U
Artifact Creature – Kavu Shaman
Morph B(B/B)(G/W) (You may cast this card from your graveyard to the battlefield, you may pay 1. If an enchantment card, then put the rest of target creature card from your graveyard for its flashback cost. If exile is you sacrifice it unless you pay 1G. If you do, put a 3/1 green Soldier creature token onto the battlefield. Put it into your graveyard.)
1/1
#I’m tickled by the absurd reminder text. The RNN knows that keyword abilities often come with reminder text, but it has no idea what “morph” means, so it just makes up stuff to put there. Oh, and the morph cost has a hybrid black/black mana symbol in it.

Slidshocking Krow
U
Creature – Dragon
Tromple,Mointainspalk
4/2
#Slidshocking Krow is ridiculously overpowered. A 4/2 for 1? In blue? With Mointainspalk AND Tromple? I see power creep is alive and well.

Grirerer Knishing
4G
Instant – Arcane
Exile target creature you control.
#The price is a little steep on this one, but maybe it’s worth it for the synergy with other Arcane spells…

Fransunn’s Ent
1B
Sorcery
Counter target spell with five toughness 2 or greater.
#Almost a meaningful conditional counterspell. Almost, but not quite.

Adiswen Agenter
1U
Enchantment – Aura
Enchant creature
At the beginning of each player’s upkeep, sacrifice a white Zombie creature.
#Although very bizarre, it is a “legal” card.

Skengi Hellldadietsn
1BU
Creature – Zombie
: Add G to your mana pool. If you do, put a -1/-1 counter on Skengi Hellldadietsn.
3/4
#Notice that it picked a creature type that actually matched the colors.

——–

I decided to let the training process continue over night. When I came back, I found that the text was starting to make a little more sense (not always, but more so than before). I noticed that the network, now more fully trained, could generate meaningful, novel cards. However, it also had a knack for generating profoundly useless cards. Here are a few snippets from the output:

* When $THIS enters the battlefield, each creature you control loses trample until end of turn.
* Whenever another creature enters the battlefield, you may tap two untapped Mountains you control.
* 3, : Add 2 to your mana pool.
* Legendary creatures can’t attack unless its controller pays 2 for each Zombie you control.

Other times it would start with an idea, like giving a creature kicker, but then forget to include an “if kicked” clause, or it would put X in a card’s mana cost and then do nothing with the X. Also, the network gets planeswalkers confused with level-up creatures (admittedly, they do look very similar), which often results in messy combinations of the two. For planeswalkers, the problem is that, unlike run-of-the-mill creatures, they are few and far between, so there aren’t many examples for the network to learn from. In any case, here are some of the typical examples I found the network churning out this morning:

——–

Tenjer Desineer
1
Artifact – Equipment
Equipped creature has fuseback.
Equip 1
#The RNN likes to make up new keywords. This one is a portmanteau of flashback and fuse. What it does for a creature, who knows? The RNN certainly has no idea.

Gravimite
1(G/W)(G/W)
Creature – Dryad
1(G/W): Regenerate $THIS.
When Gravimite enters the battlefield, draw a card.
2/3
#I think this is a reinterpretation of Carven Caryatid.

Light of the Bild
2WW
Creature – Spirit
Flying
Whenever Light of the Bild blocks, you may put a 1/1 green Angel creature token with flying onto the battlefield.
2/2

Horror
B
Horror deals 3 damage to target creature or player.
#A colorshifted Lightning Bolt. I find the name to be simple and evocative!

Mided Hied Parira’s Scepter
2
Artifact
3, T: Put a 1/1 white Advisor creature token onto the battlefield.

Shring the Artist
2BB
Legendary Creature – Cat
Flying
Whenever you cast a spell, you may return target creature card from your graveyard to your hand.
2/2

——–

In conclusion, I’ve learned quite a bit from this process. Originally, I designed the network to avoid overfitting because I feared it would generate cards that were mere carbon copies of the ones it had seen. However, I made the network too conservative, and as a result it’s unwilling to experiment with multi-part abilities like kicker. It’s also worth exploring whether I can improve training on scarcely seen cards like planeswalkers, planes, schemes, and so on; one cheap idea is sketched below. With any luck, I should be able to come up with a generative model for Magic cards that produces more robust and complex output.
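
To make that last idea concrete, here is the sort of thing I have in mind (just a sketch, not something I’ve run; the repeat factor is a guess):

```python
# Sketch: oversample scarce card types (planeswalkers, planes, schemes)
# so the network sees them more often during training. Illustrative only;
# the repeat factor and type markers would need tuning.
import json
import random

RARE_MARKERS = ("Planeswalker", "Scheme", "Plane ")  # "Plane " avoids matching "Planeswalker"
REPEAT = 10

with open("AllCards.json", encoding="utf-8") as f:   # hypothetical corpus dump
    cards = list(json.load(f).values())

boosted = []
for card in cards:
    is_rare = any(marker in card.get("type", "") for marker in RARE_MARKERS)
    boosted.extend([card] * (REPEAT if is_rare else 1))

random.shuffle(boosted)  # avoid long runs of duplicated cards in the training text
print("{} cards expanded to {} training examples".format(len(cards), len(boosted)))
```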

Let me know what you think!

I decided to run one more test last night before I committed to gutting the network code and making it parameterizable for further testing. This time I ran it on the art from the set of all Magic cards. The results are not bad, actually, with one small problem: every creature is composed of textureless gray blobs. I’ve enlarged one of the images to give you a clear idea of what I’m talking about.

I think it’s supposed to be a Rhox-like creature. You can see the eyes, and what look to be horns set atop a big head, but the body is a featureless gray mass. There are several reasons why this might be happening. For one, I lumped all the images into a single category of object, so this may be a case of the network generalizing over many diverse creatures and settling on an average texture. That and the network that I trained with wasn’t very big, so it might not have the capacity to learn different textures for different subjects.

I’m also on the lookout for instances of overfitting. For example, does anyone recognize the other art I’ve attached? Looks like a man taking a discard/mill spell to the face. While the generator has never actually seen real Magic art, it may have stumbled upon a blob that looked like a silhouette and reshaped it according to the responses of the discriminator.
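
For anyone who hasn’t played with these adversarial setups, the sketch below gives the rough shape of the generator/discriminator pairing I’m describing. It’s PyTorch, it’s illustrative only (tiny 32x32 images, placeholder layer sizes), and it is not the code I actually trained with.

```python
# Bare-bones sketch of a GAN training step on card art. Placeholder sizes;
# real images are expected as tensors shaped (N, 3, 32, 32) in [-1, 1].
import torch
import torch.nn as nn

latent_dim = 100

# Generator: latent noise -> 3x32x32 image.
generator = nn.Sequential(
    nn.ConvTranspose2d(latent_dim, 128, 4, 1, 0), nn.ReLU(),  # 4x4
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),          # 8x8
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),           # 16x16
    nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),            # 32x32
)

# Discriminator: image -> probability that it is real card art.
discriminator = nn.Sequential(
    nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),             # 16x16
    nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),            # 8x8
    nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),           # 4x4
    nn.Conv2d(128, 1, 4, 1, 0), nn.Flatten(), nn.Sigmoid(),   # 1x1 -> prob
)

criterion = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))

def train_step(real_images):
    """One GAN update on a batch of real card-art tensors."""
    n = real_images.size(0)
    real, fake = torch.ones(n, 1), torch.zeros(n, 1)

    # Discriminator: learn to tell real art from generated blobs.
    noise = torch.randn(n, latent_dim, 1, 1)
    generated = generator(noise)
    d_loss = criterion(discriminator(real_images), real) + \
             criterion(discriminator(generated.detach()), fake)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to fool the discriminator into calling its output real.
    g_loss = criterion(discriminator(generated), real)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```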

While we’re on the subject of convolutional neural networks, I took the opportunity to try out BlindTool Beta (after I saw this video that was posted to the /r/machinelearning subreddit). It’s a free Android app that uses convnets to identify objects that are in the view of the phone’s camera, and tells you what it thinks it sees. It’s not the brightest network out there, but it can distinguish between a thousand different commonly encountered classes of objects. Before I left for work this morning, I went around my apartment testing it out, and it performed very well.

Of course, it’s easy to fool it when it has to reason about things it has never seen before. I took the card Mesa Falcon and set it on the table, and it told me it was a book cover. But when I picked up the card and tapped on the body text with my thumb, it told me it was an iPod or a personal hand-held computer. Don’t get me wrong, it’s clever that it can take advantage of contextual clues like that; it’s just fun to push it to its limits.
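
If you want to poke at the same idea without the app, something like this approximates it: push a snapshot through a convnet pretrained on the 1,000 ImageNet classes. ResNet-18 here is purely a stand-in (I have no idea what model BlindTool actually uses), and the photo filename is made up.

```python
# Sketch: classify a photo with a convnet pretrained on the 1,000 ImageNet
# classes, roughly what an object-recognition app does under the hood.
# ResNet-18 is a stand-in model; the photo filename is hypothetical.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet18_Weights.IMAGENET1K_V1
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()          # resizing + normalization pipeline

image = Image.open("mesa_falcon_on_table.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)     # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

top = probs.argmax().item()
print(weights.meta["categories"][top], round(float(probs[top]), 3))
```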