Posts

Showing posts from 2017

Narrative constriction, and the zero sum game of complexity and meaning

I've finally reached a decent enough point in outlining the second draft of my WiP. I'll spare you the details of that process; it's a muddle of moving stuff around, throwing stuff out, putting new stuff in. But if any pattern emerges from this chaos of decisions, it's something I'd call, for lack of a better term, narrative constriction. You know it well; you've seen it many times in stories. The friendly ranger turns out to be a lost king. The sleuth takes the case because it will avenge her former partner. And the big evil dude in a black mask and cape turns out to be the protagonist's father. There are a few things to say about this. First, the world we know doesn't work that way. I'm not complaining about the improbability of coincidences here. It makes sense, in each of those stories' universes, for the coincidence to happen; indeed, everything is arranged such that it wouldn't make sense, were the coincidence to be ...

Throwing the hero/ine into the quest

NaNoWriMo is upon us, again. I won't participate this year, but I'm taking its start as the signal to begin writing the next draft of my WiP. I have it all nicely summarized, except for one trifle: how to lay out the stakes before my MC (and the reader). This is the "Call to Adventure", as it is sometimes known, or the Inciting Incident. The Call should happen reasonably early in the story. It's the moment when the reader gets to know the main conflict (or something that is a plausible main conflict until something even bigger shows up). The reader also gets to know the stakes. The hero/ine must prevail, or else ... and whatever the "else" is, hopefully it gets the reader to care about the narrative proceedings. I decided to have a look at some Calls to Adventure from recently published first-time novels (with a couple of examples from more established authors thrown in as well), just to see what "the proper ways" to do this may be. But first, let's look at...

Machine Learning and the value of (human) knowledge

Recently Google DeepMind announced in a paper in Nature that it had produced an even better version of its Go-playing AI, and that this time the AI, pretty much, taught itself. It was told the rules of the game, of course, but after that it simply played against itself millions of times and reached a level of play that surpasses anything else that came before it. Let's go to the original paper for a discussion of what this might mean for the future: "... a pure reinforcement learning approach requires just a few more hours to train, and achieves much better asymptotic performance, compared to training on human expert data. Using this approach, AlphaGo Zero defeated the strongest previous versions of AlphaGo, which were trained from human data using handcrafted features, by a large margin. Humankind has accumulated Go knowledge from millions of games played over thousands of years, collectively distilled into patterns, proverbs and books. In the space of a few days, start...
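To make "learning by self-play" a bit more concrete, here's a toy sketch in Python. This is emphatically not AlphaGo Zero's actual algorithm (which combines Monte Carlo tree search with deep networks); it's just the same idea in miniature: an agent that is told only the rules of a trivial game (Nim: take 1–3 stones, whoever takes the last stone wins), plays against itself many times, and updates a shared value table from the outcomes. All the names and parameters here are my own illustrative choices.

```python
import random

def self_play_nim(heap=10, episodes=20000, alpha=0.5, eps=0.2, seed=0):
    """Learn Nim by pure self-play: both 'players' share one table of
    action values, updated with Monte Carlo returns from finished games.
    No human game records, no handcrafted features -- only the rules."""
    rng = random.Random(seed)
    Q = {}  # (stones_remaining, stones_taken) -> value for the mover
    for _ in range(episodes):
        s = heap
        trajectory = []  # moves made, in order, by alternating players
        while s > 0:
            moves = [m for m in (1, 2, 3) if m <= s]
            if rng.random() < eps:  # explore occasionally
                m = rng.choice(moves)
            else:  # otherwise play greedily against yourself
                m = max(moves, key=lambda m: Q.get((s, m), 0.0))
            trajectory.append((s, m))
            s -= m
        # The player who took the last stone won: propagate +1 to their
        # moves and -1 to the loser's, alternating backwards.
        r = 1.0
        for (st, mv) in reversed(trajectory):
            old = Q.get((st, mv), 0.0)
            Q[(st, mv)] = old + alpha * (r - old)
            r = -r
    return Q

Q = self_play_nim()
# In Nim, leaving a multiple of 4 is winning; from 5 stones, take 1.
best = max((1, 2, 3), key=lambda m: Q.get((5, m), 0.0))
```

After enough games the table encodes the well-known Nim strategy (leave your opponent a multiple of four) without ever having seen a human play, which is the small-scale analogue of what the paper is describing.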