Hello! I’m a PhD candidate in computer science at the University of Alabama at Birmingham who has been investigating the use of deep neural networks for classification and problem-solving tasks. I saw a fun article about training a neural network on arbitrary data to generate novel sequences. For example, you force the network to read Shakespeare over and over, and eventually it can write its own texts in the style of Shakespeare. I saw that and thought: hey, why not Magic cards instead of Shakespeare? So I downloaded the source code (here) and a JSON corpus of Magic cards (here). I decided to feed a deep neural network all of the Magic cards ever made in the hopes that it might be able to conjure up some new ones. The network was relatively simple (I can give you the details if you’d like, but that gets technical). I would have used a more robust and complex network, but the computations would have taken far too long; I’m waiting on some new GPU hardware to come in to speed up the process.
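For anyone curious about the preprocessing side, the gist is to flatten each card's JSON record into a delimited line of text and concatenate everything into one training stream for the character-level network. Here is a rough sketch with a made-up two-card corpus; the field names are in the style of the JSON dump but the real file has many more fields per card:

```python
import json

# Hypothetical two-card mini-corpus standing in for the full JSON dump.
corpus_json = """
{
  "Llanowar Elves": {"name": "Llanowar Elves", "manaCost": "{G}",
                     "type": "Creature - Elf Druid",
                     "text": "{T}: Add {G} to your mana pool.",
                     "power": "1", "toughness": "1"},
  "Lightning Bolt": {"name": "Lightning Bolt", "manaCost": "{R}",
                     "type": "Instant",
                     "text": "Lightning Bolt deals 3 damage to target creature or player."}
}
"""

def card_to_text(card):
    """Flatten one card record into a single delimited line of training text."""
    fields = [card.get("name", ""), card.get("manaCost", ""),
              card.get("type", ""), card.get("text", ""),
              card.get("power", ""), card.get("toughness", "")]
    return "|".join(fields)

cards = json.loads(corpus_json)
training_text = "\n".join(card_to_text(c) for c in cards.values())
print(training_text)
```

The choice of delimiter and field order matters more than you might expect, since the network has to learn the card "grammar" purely from these repeated positional cues.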
Anyway, here are a few example cards produced by my network about two hours into the training process. The results produced by the recurrent neural network (RNN) early on were either verbose garbage or sensible-sounding cards that did absolutely nonsensical things. Keep in mind that the RNN has no prior knowledge of what Magic even is, let alone English, so it’s interesting that the results were even vaguely intelligible.
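One knob that shapes how garbled the generated text looks is the sampling temperature used when drawing each next character from the network's output distribution. This is not the actual generator code, just a self-contained sketch of temperature sampling over a toy four-character vocabulary:

```python
import numpy as np

def sample_char(logits, temperature=1.0, rng=None):
    """Sample the next character index from unnormalized scores.

    High temperature flattens the distribution (more verbose garbage);
    low temperature concentrates it on the most likely characters.
    """
    rng = rng or np.random.default_rng(0)
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                      # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

# Toy vocabulary scores with one clearly preferred character (index 0).
logits = [2.0, 0.5, 0.1, -1.0]
low_t = [sample_char(logits, 0.2, np.random.default_rng(i)) for i in range(50)]
high_t = [sample_char(logits, 5.0, np.random.default_rng(i)) for i in range(50)]
print(low_t.count(0), high_t.count(0))
```

With an undertrained model the distribution itself is bad, so no temperature setting helps much; it only trades one flavor of nonsense for another.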
Artifact Creature – Kavu Shaman
Morph (You may cast this card from your graveyard to the battlefield, you may pay . If an enchantment card, then put the rest of target creature card from your graveyard for its flashback cost. If exile is you sacrifice it unless you pay . If you do, put a 3/1 green Soldier creature token onto the battlefield. Put it into your graveyard.)
#I’m tickled by the absurd reminder text. The RNN knows that keyword abilities often come with reminder text, but it has no idea what “morph” means, so it just makes up stuff to put there. Oh, and the morph cost has a hybrid black/black mana symbol in it.
Creature – Dragon
#Slidshocking Krow is ridiculously overpowered. A 4/2 for 1? In blue? With Mointainspalk AND Tromple? I see power creep is alive and well.
Instant – Arcane
Exile target creature you control.
#The price is a little steep on this one, but maybe it’s worth it for the synergy with other Arcane spells…
Counter target spell with five toughness 2 or greater.
#Almost a meaningful conditional counterspell. Almost, but not quite.
Enchantment – Aura
At the beginning of each player’s upkeep, sacrifice a white Zombie creature.
#Although very bizarre, it is a “legal” card.
Creature – Zombie
: Add to your mana pool. If you do, put a -1/-1 counter on Skengi Hellldadietsn.
#Notice that it picked a creature type that actually matched the colors.
I decided to let the training process continue overnight. When I came back, I found that the text was starting to make a little more sense (not always, but more often than before). I noticed that the network, now more fully trained, could generate meaningful, novel cards. However, it also had a knack for generating profoundly useless cards. Here are a few snippets from the output:
* When $THIS enters the battlefield, each creature you control loses trample until end of turn.
* Whenever another creature enters the battlefield, you may tap two untapped Mountains you control.
* , : Add to your mana pool.
* Legendary creatures can’t attack unless its controller pays for each Zombie you control.
Other times it would start with an idea, like giving a creature kicker, but then forget to include an “if kicked” clause, or it would put an X in the mana cost and then do nothing with the X. The network also gets planeswalkers confused with level-up creatures (admittedly, they do look very similar), which often results in messy combinations of the two. For planeswalkers, the problem is that, unlike run-of-the-mill creatures, they are few and far between, so there aren’t many examples for the network to learn from. In any case, here are some of the typical examples I found the network churning out this morning:
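One standard remedy for that scarcity is to oversample the rare card types so the network sees them more often during training. A minimal sketch, with made-up counts that only mirror how creatures dwarf planeswalkers and schemes in the real card pool:

```python
import random
from collections import Counter

# Hypothetical card-type distribution (counts are illustrative only).
corpus = ["creature"] * 950 + ["planeswalker"] * 10 + ["scheme"] * 5

def oversample(cards, target_count, seed=0):
    """Duplicate examples of rare classes until each reaches target_count."""
    rng = random.Random(seed)
    by_type = {}
    for c in cards:
        by_type.setdefault(c, []).append(c)
    balanced = []
    for ctype, examples in by_type.items():
        balanced.extend(examples)
        # Pad rare classes with random duplicates; common classes are left alone.
        for _ in range(max(0, target_count - len(examples))):
            balanced.append(rng.choice(examples))
    return balanced

counts = Counter(oversample(corpus, 100))
print(counts)
```

The obvious caveat is that duplicating a handful of planeswalkers many times invites the network to memorize them verbatim, which is exactly the overfitting I was trying to avoid.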
Artifact – Equipment
Equipped creature has fuseback.
#The RNN likes to make up new keywords. This one is a portmanteau of flashback and fuse. What it does for a creature, who knows? The RNN certainly has no idea.
Creature – Dryad
: Regenerate $THIS.
When Gravimite enters the battlefield, draw a card.
#I think this is a reinterpretation of Carven Caryatid.
Light of the Bild
Creature – Spirit
Whenever Light of the Bild blocks, you may put a 1/1 green Angel creature token with flying onto the battlefield.
Horror deals 3 damage to target creature or player.
#A colorshifted Lightning Bolt. I find the name to be simple and evocative!
Mided Hied Parira’s Scepter
, T: Put a 1/1 white Advisor creature token onto the battlefield.
Shring the Artist
Legendary Creature – Cat
Whenever you cast a spell, you may return target creature card from your graveyard to your hand.
In conclusion, I’ve learned quite a bit from this process. Originally, I designed the network to avoid overfitting because I feared it would generate cards that were mere carbon copies of the ones it had seen. However, I made the network too conservative, and as a result it’s unwilling to experiment with multi-part abilities like kicker. It’s also worth exploring whether I can improve training on rarely seen card types like planeswalkers, planes, and schemes. With any luck, I should be able to come up with a generative model for Magic cards that produces more robust and complex output.
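For reference, the usual knob behind that overfitting/conservatism trade-off is a regularizer such as dropout; turned up too far, it yields exactly this kind of timid model. This is not my actual network code, just a sketch of inverted dropout on a vector of activations:

```python
import numpy as np

def dropout(activations, rate, rng):
    """Inverted dropout: zero out a fraction of units and rescale the rest.

    `rate` is the probability of *dropping* a unit; rescaling by 1/keep
    preserves the expected activation so no change is needed at test time.
    """
    if rate <= 0.0:
        return activations
    keep = 1.0 - rate
    mask = rng.random(activations.shape) < keep
    return activations * mask / keep

rng = np.random.default_rng(0)
acts = np.ones((1000,))
mild = dropout(acts, 0.2, rng)
harsh = dropout(acts, 0.8, rng)
print((mild == 0).mean(), (harsh == 0).mean())
```

The mild setting leaves most of the signal intact; the harsh setting silences most units every step, which is one way a network ends up too conservative to learn multi-part structures.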
Let me know what you think!
I decided to run one more test last night before I committed to gutting the network code and making it parameterizable for further testing. This time I trained on the card art from the set of all Magic cards. The results are not bad, actually, with one small problem: every creature is composed of textureless gray blobs. I’ve enlarged one of the images to give you a clear idea of what I’m talking about.
I think it’s supposed to be a Rhox-like creature. You can see the eyes, and what look to be horns set atop a big head, but the body is a featureless gray mass. There are several reasons why this might be happening. For one, I lumped all the images into a single category of object, so this may be a case of the network generalizing over many diverse creatures and settling on an average texture. On top of that, the network I trained wasn’t very big, so it might not have the capacity to learn different textures for different subjects.
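The averaging hypothesis is easy to demonstrate numerically: the pixel-wise mean of many unrelated high-contrast textures is a flat, low-contrast mid-gray. A toy sketch with random stand-in "textures" (not real card art):

```python
import numpy as np

# 500 random 16x16 grayscale "textures" standing in for diverse creature art.
rng = np.random.default_rng(0)
textures = rng.integers(0, 256, size=(500, 16, 16)).astype(float)

# Averaging across all textures collapses the contrast toward flat gray.
mean_image = textures.mean(axis=0)
print(textures.std(), mean_image.std())  # high per-texture contrast vs. nearly none
```

If this is what's going on, splitting the art into per-subject categories (or conditioning the generator on a class label) should restore some texture.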
I’m also on the lookout for instances of overfitting. For example, does anyone recognize the other art I’ve attached? Looks like a man taking a discard/mill spell to the face. While the generator has never actually seen real Magic art, it may have stumbled upon a blob that looked like a silhouette and reshaped it according to the responses of the discriminator.
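A cheap way to check for this kind of memorization is a nearest-neighbor search from each generated image back into the training set; a suspiciously small minimum distance flags a copied example. A sketch with random stand-in images (the real check would use the actual training art):

```python
import numpy as np

def nearest_training_distance(generated, training_set):
    """Smallest L2 distance from a generated image to any training image."""
    diffs = training_set - generated           # broadcasts over the first axis
    dists = np.sqrt((diffs ** 2).sum(axis=(1, 2)))
    return dists.min(), int(dists.argmin())

rng = np.random.default_rng(0)
training = rng.random((100, 8, 8))
# A "memorized" sample: training image 42 plus a little noise.
memorized = training[42] + rng.normal(0, 0.01, (8, 8))
novel = rng.random((8, 8))

d_mem, idx = nearest_training_distance(memorized, training)
d_novel, _ = nearest_training_distance(novel, training)
print(round(d_mem, 3), idx, round(d_novel, 3))
```

Pixel-space L2 is a blunt instrument (it misses shifted or recolored copies), but it catches the most blatant cases cheaply.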
While we’re on the subject of convolutional neural networks, I took the opportunity to try out BlindTool Beta (after I saw this video that was posted to the /r/machinelearning subreddit). It’s a free Android app that uses convnets to identify objects that are in the view of the phone’s camera, and tells you what it thinks it sees. It’s not the brightest network out there, but it can distinguish between a thousand different commonly encountered classes of objects. Before I left for work this morning, I went around my apartment testing it out, and it performed very well.
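Under the hood, an app like that presumably ends with a score for each of its thousand classes, and the prediction step is just softmax plus top-k. A sketch of that final step (the labels below are illustrative, not BlindTool's actual class list):

```python
import numpy as np

def top_k(logits, labels, k=5):
    """Return the k most likely labels with their softmax probabilities."""
    logits = np.asarray(logits, dtype=float)
    probs = np.exp(logits - logits.max())       # stable softmax
    probs /= probs.sum()
    order = np.argsort(probs)[::-1][:k]
    return [(labels[i], float(probs[i])) for i in order]

# Toy 4-class example standing in for the app's thousand object classes.
labels = ["book jacket", "iPod", "hand-held computer", "envelope"]
logits = [1.2, 3.1, 2.7, 0.3]
result = top_k(logits, labels, k=2)
print(result)
```

Apps typically report the top few guesses rather than only the winner, which is why borderline objects flip between plausible neighbors like "iPod" and "hand-held computer."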
Of course, it’s easy to fool it when it has to reason about things it has never seen before. I took the card Mesa Falcon and set it on the table. It told me it was a book cover. But when I picked up the card and tapped on the body text with my thumb, it told me it was an iPod or a personal hand-held computer. Don’t get me wrong, it’s clever that it can take advantage of contextual clues like that; it’s just fun to push it to its limits.