Horror apologetics

What place for horror? This article does a great job of listing possible reasons why some people enjoy the genre, and it even does so from a non-horror-fan perspective. But of course, as a horror fan, I feel like I need to comment :) There's something that, I think, Rick's article doesn't emphasize enough, something paradoxically sympathetic to his sensibilities that he misses about Horror, because the genre's odor is, to be fair, rather strong.

But before I get to the deep stuff I must also acknowledge that Horror is trashy in a way other genres don't seem to be. Sturgeon's Law is universal: 90%+ of everything is crud, but somehow the stereotypical Bad Movie is a Horror one. Indeed, it may well be that it was around Horror, or around Horror-infused fare like certain SF films of the 50s, that the "So Bad It's Good" style of movie enjoyment sprang up.

There's material enough in the "So Bad It's Good" concept for another artic…

Dark Magics to avoid

If there's anything true of art in general, it's that any rule beginning with "don't" should be broken on occasion. Art should engage the soul, the conscious soul even, and nothing is better for that than the occasional thing that's just that bit out of place, that bit unexpected. Conform to every rule you've heard and what you produce can be consumed automatically. It's when the rules fail that consciousness kicks in.

So in that spirit, treat the following "don'ts" as guidelines, and even break them -- but make sure you know what you're doing ;) For what follows is a list of magical powers which, if let loose on a plot, stand a good chance of rendering it hole-y. And the list is by no means exhaustive: feel free to argue for more items to be put on it.

To give a flavor of the disruptive nature of magic, let's look at an example that will NOT kill a story ... but which, the character who suggested it argued, would kill society.

Invisibilit…

Narrative constriction, and the zero sum game of complexity and meaning

I've finally reached a decent enough point in outlining my second draft of my WiP. I'll spare you the details of that process; it's a muddle of moving stuff around, throwing stuff out, putting new stuff in. But if there's any pattern that emerges from this chaos of decisions, it's something I'd call, for lack of a better term, narrative constriction.

You know it well; you've seen it many times in stories. The friendly ranger turns out to be a lost king. The sleuth takes the case because it will avenge her former partner. And the big evil dude in a black mask and cape turns out to be the protagonist's father. There are a few things to say about this.

First, the world we know doesn't work that way. I'm not complaining about the improbability of coincidences here. It makes sense, in each of those stories' universes, for the coincidence to happen; indeed, everything is arranged such that it wouldn't make sense, were the coincidence to be avo…

Throwing the hero/ine into the quest

NaNoWriMo is upon us again. I won't participate this year, but I'm taking its start as my cue to begin writing the next draft of my WiP. I have it all nicely summarized, except for one trifle: how to lay out the stakes before my MC (and the reader). The "Call to Adventure", as it is sometimes known, or Inciting Incident.

The Call should happen reasonably early in the story. It's the moment when the reader gets to know the main conflict (or something that is a plausible main conflict until something even bigger shows up). Also, the reader gets to know the stakes. The hero/ine must prevail, or else ... and whatever the "else" is, hopefully it gets the reader to care about the narrative proceedings.

I decided to have a look at some "Calls to Adventure" from recently published first-time novels (with a couple of examples from more established authors thrown in as well), just to see what "the proper ways" to do this may be. But first, let's look at an…

Machine Learning and the value of (human) knowledge

Recently Google DeepMind announced in a paper in Nature that it has produced an even better version of its Go-playing AI, and that this time the AI pretty much taught itself. It was told the rules of the game, of course, but after that it simply played against itself millions of times and reached a level of play that surpasses anything that came before it. Let's go to the original paper for a discussion of what this might mean for the future ...
a pure reinforcement learning approach requires just a few more hours to train, and achieves much better asymptotic performance, compared to training on human expert data. Using this approach, AlphaGo Zero defeated the strongest previous versions of AlphaGo, which were trained from human data using handcrafted features, by a large margin. Humankind has accumulated Go knowledge from millions of games played over thousands of years, collectively distilled into patterns, proverbs and books. In the space of a few days, starting tabu…

Counter-state machines, continued

In a previous post I illustrated a way to count how many strings there are in a formally defined "regular language". One question was: supposing the state machine that recognizes the language has states from which no accepting state is reachable, is its observability matrix still full rank? The answer is no, and here's an example of that, and how to work around it.

First, the regular language. Define the alphabet of symbols as "0" and "1"; the grammar rule is this: a finite string is in the language if and only if it contains no "11" substring. From here on, I'll use the notions introduced in that previous post. The resulting state machine is given below (initial state omitted), where the "_0" state corresponds to a string ending with "0" and "_1" corresponds to a string ending with "01". "meh" is a rejecting state, meaning the string contains a "11" substring.
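To make the counting concrete, here's a small sketch in Python. The state names (`_0`, `_1`, `meh`, plus the omitted initial state, which I call `start`) follow the post, but the transition table is my reading of the description, not the post's own code. It counts length-n strings with no "11" substring by pushing path counts through the machine:

```python
# Transition table for the "no 11 substring" machine described above.
# State names follow the post; "start" stands in for the omitted
# initial state, and the exact transitions are my reconstruction.
TRANS = {
    "start": {"0": "_0", "1": "_1"},
    "_0":    {"0": "_0", "1": "_1"},
    "_1":    {"0": "_0", "1": "meh"},  # a "1" after a "1" is fatal
    "meh":   {"0": "meh", "1": "meh"},  # rejecting sink: stays rejected
}
ACCEPT = {"start", "_0", "_1"}  # every state except the sink accepts

def count_accepted(n):
    """Count length-n binary strings with no "11" substring,
    by propagating path counts through the state machine."""
    counts = {s: 0 for s in TRANS}
    counts["start"] = 1
    for _ in range(n):
        nxt = {s: 0 for s in TRANS}
        for state, c in counts.items():
            for sym, dest in TRANS[state].items():
                nxt[dest] += c
        counts = nxt
    return sum(c for s, c in counts.items() if s in ACCEPT)

print([count_accepted(n) for n in range(7)])  # [1, 2, 3, 5, 8, 13, 21]
```

The counts grow as a Fibonacci-like sequence, which is the classic result for binary strings avoiding "11"; note that the `meh` sink is exactly the kind of state from which no accepting state is reachable.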
I'll list …

Hangman, entropy, and the robustness of stupid

One of the "procrastination tools" on Critique Circle is a little game of hangman, and I sure sank a lot of time guessing (and sometimes failing to guess) words there. The rules are fairly simple and you probably know them. You start by knowing how many letters the word has. You guess a letter; if the guess is right, you get to see where the letter appears in the word. If you guess wrong, something is added to the hangman drawing, and if the drawing is completed because you've made too many wrong guesses, you lose. If instead you find all the letters before that happens, you win.

A simple, fun game, but it got me thinking that it's a neat illustration of the concept of information-theoretic entropy.

Let's imagine then that we have a list of all possible words, and we know (this is VERY important) that the word we have to guess is chosen from this list AND all words are equally likely.

Let's ignore the rules of hangman for a while, and ask a question about how we may be able to select a wo…