Machine Learning and the value of (human) knowledge

Recently, Google DeepMind announced in a Nature paper that it has produced an even better version of its Go-playing AI, and that this time the AI pretty much taught itself. It was told the rules of the game, of course, but after that it simply played against itself millions of times and reached a level of play that surpasses anything that came before it. Let's go to the original paper for a discussion of what this might mean for the future ...

a pure reinforcement learning approach requires just a few more hours to train, and achieves much better asymptotic performance, compared to training on human expert data. Using this approach, AlphaGo Zero defeated the strongest previous versions of AlphaGo, which were trained from human data using handcrafted features, by a large margin.
Humankind has accumulated Go knowledge from millions of games played over thousands of years, collectively distilled into patterns, proverbs and books. In the space of a few days, starting tabula rasa, AlphaGo Zero was able to rediscover much of this Go knowledge, as well as novel strategies that provide new insights into the oldest of games.
Here's some more context, taken from a Verge article on this paper (and at least this article should not be behind a paywall):

“By not using human data — by not using human expertise in any fashion — we’ve actually removed the constraints of human knowledge,” said AlphaGo Zero’s lead programmer, David Silver, at a press conference. “It’s therefore able to create knowledge itself from first principles; from a blank slate [...] This enables it to be much more powerful than previous versions.”
[...]
In the case of AlphaGo Zero, what is particularly clever is the removal of any need for human expertise in the system.
[...]
What are the applications for these sorts of algorithms? According to DeepMind co-founder Demis Hassabis, they can provide society with something akin to a thinking engine for scientific research.
[...]
Hassabis suggests that a descendant of AlphaGo Zero could be used to search for a room temperature superconductor

Oh dear.

To summarize the above (and admittedly put a doomy spin on it): human knowledge is worthless. Why bother parsing those thousands of years of human experience at Go if a machine can replicate them within a couple of days of number crunching? In fact, human knowledge is seen as worse than worthless: it is a constraint. It's a liability to depend on it. Better to let machines do the acquisition and learning.

All of which should make you rather worried and/or disgusted at such prospects, at least if you're the kind of human being who values (human) competence and knowledge. I'm recoiling at the prospect myself. But wishing to turn back the clock is, if history has taught us anything, fruitless. Throw all the sabots you want at a loom; the loom wins in the end (as long as something much, much worse doesn't happen).

So then. Let's count to ten, get over the disgust, and take a guess at what the future may hold. It will likely be a guess that seems very quaint in ten years or fewer, but failing to plan is planning to fail, and all that.

The thing to note here is the chief necessary condition behind the success of AlphaGo Zero: cheap, reproducible experiments. AlphaGo Zero had everything it needed to run millions of experiments (in this case, games of Go) to learn the lay of the land, and a game of Go is a sequence of moves that you can replay whenever you want. In the end, it's simple "economics": if it's cheaper and easier to do the experiments yourself than to track down whoever else did them and learn from their results, you'll just do those experiments yourself.
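
To make this concrete, here is a deliberately tiny self-play sketch. Everything in it is a stand-in: the game is one-heap Nim rather than Go, and a lookup table replaces AlphaGo Zero's deep network and Monte Carlo tree search. What survives the simplification is the point above-- the agent generates its own training data, and each game is a cheap, perfectly reproducible experiment.

```python
import random
from collections import defaultdict

# Toy game: one-heap Nim. Take 1-3 stones; whoever takes the last stone wins.
MOVES = (1, 2, 3)

def self_play_train(heap_size=15, num_games=50_000, eps=0.1, lr=0.1):
    value = defaultdict(float)  # state -> estimated value for the player to move
    for _ in range(num_games):
        stones, history = heap_size, []
        while stones > 0:
            legal = [m for m in MOVES if m <= stones]
            if random.random() < eps:
                move = random.choice(legal)  # explore
            else:
                # exploit: move to the state that is worst for the opponent
                move = min(legal, key=lambda m: value[stones - m])
            history.append(stones)
            stones -= move
        # The game's outcome is the only teaching signal: no human data.
        reward = 1.0  # the player who took the last stone won
        for state in reversed(history):
            value[state] += lr * (reward - value[state])
            reward = -reward  # alternate perspectives, ply by ply
    return value

if __name__ == "__main__":
    values = self_play_train()
    print({s: round(values[s], 2) for s in range(1, 16)})
```

Run it and the table converges on the classic result that heaps divisible by four are bad news for the player to move-- "rediscovered" knowledge, with no human data in sight.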

What various people in comments sections are keen to point out is that Go is a perfectly known system, at least in principle. It's a game made up by humans, so we know its rules, and as far as Go is concerned, all consequences follow from those rules. I agree this is an important factor, but mainly as support for the "cheap experiments" condition.

In principle, I don't see a reason why a domain like superconductors couldn't be attacked by an AI similar to AlphaGo Zero-- and we didn't make up the rules of superconductors this time; we had to discover the rules of quantum mechanics. But judging by theory-experiment agreement, we seem to have done a pretty good job of that, and with a good computational chemistry package (which may already be available), we could take the AI approach. (There's a question here: if those simulators exist, why aren't we brute-force searching the problem right now? We would, but brute-forcing such a large search space won't get us anywhere, and the learning in AlphaGo Zero isn't quite brute force: it would give a significant gain to search.)

There is another very important condition behind the success of AlphaGo Zero: a clear success metric for an experiment. It's clear who wins a game of Go; indeed, one can even say "by how much", which is a nice feature when one does learning. The superconductor search would also have a nice metric-- how hot can a material get and still superconduct (given certain limits on external magnetic field strength, etc.).
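
Here's a second toy sketch putting the two conditions together. The "simulator" below is a made-up scoring function, not real physics, and plain hill-climbing stands in for the far more sophisticated learned search in AlphaGo Zero; the point is only that a scalar metric-- good or bad, and by how much-- lets a search rank candidate moves, which blind brute force over an enormous space cannot exploit.

```python
import random

KNOBS = 8           # composition parameters per candidate "material"
LEVELS = range(10)  # each knob takes one of 10 discrete values

def simulated_tc(candidate):
    # Hypothetical simulator returning a "critical temperature"; a made-up,
    # smooth-ish landscape, so that local information is informative.
    return sum(10 - abs(k - 7) * (i % 3 + 1) for i, k in enumerate(candidate))

def guided_search(steps=2_000):
    # Hill-climbing on the scalar metric: a random tweak is kept only if the
    # simulator says it is at least as good as what we have.
    candidate = [random.choice(LEVELS) for _ in range(KNOBS)]
    for _ in range(steps):
        trial = candidate.copy()
        trial[random.randrange(KNOBS)] = random.choice(LEVELS)
        if simulated_tc(trial) >= simulated_tc(candidate):
            candidate = trial
    return candidate

if __name__ == "__main__":
    print(f"brute force would need {len(LEVELS) ** KNOBS:,} evaluations")
    best = guided_search()
    print(f"guided search: Tc = {simulated_tc(best)} after ~2,000 evaluations")
```

Two thousand simulator calls instead of a hundred million is the kind of "significant gain to search" meant above, and a learned evaluation function can in principle do far better than blind hill-climbing.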

So, for knowledge one can obtain from cheap, reproducible experiments with clear success metrics, it is likely that we will soon have automatic, AI-powered "scientists" to acquire it. In itself this is not a surprising development-- we don't need to do mental arithmetic now that silicon chips can do billions of computations each second. It seems, though, that not only can the nitty-gritty details of scientific knowledge acquisition be automated, but that it becomes unnecessary for humans to even know what those details are.

On the one hand, there's nothing actually new here. Very few people are interested in science and its workings anyway, and those who are often get labeled as nerds and shunned. The kinds of people who like knowledge for its own sake will still do so in a post-automated-science world, and pursue it in much the same way that enthusiasts of the Middle Ages still practice sword techniques.

On the other hand, there is something new. One could always make a case, however tenuous, that a person possessing scientific knowledge is a valuable economic asset-- precisely because such people are rare and their knowledge useful. When an AI with an appropriate simulator or laboratory for the domain at hand is available, this is no longer the case. Hooray for science as a hobby, I guess, but science as a job is definitely going to change.

OK, so is there any kind of knowledge other than that which can be obtained from cheap, reproducible experiments with clearly defined success metrics?

In its own coverage of AlphaGo Zero, Slate expresses the hope that these human-independent AIs would learn and make decisions in a way untainted by biases such as racism. Funny, because here, where knowledge makes contact with the social and political worlds, is where we find what AlphaGo Zero hasn't (yet) rendered worthless.

The social world is what it is. You can simulate it, but it's a moving target: attitudes change, politics change. Say we want an AI to decide whether someone is a likely recidivist and what to do with them. We can't have that AI perform millions of reproducible experiments relevant to that decision. It needs the social data we have already acquired-- "expensive" data, because it cannot be reproduced in any lab at any time, and data that may be fraught with biases we may wish to avoid. Figuring out what those biases are, and how to counteract them, means we need to know and understand what's going on both in society at large and in the AI that learns about it and makes decisions about it.

We may, at some point, delegate more and more decisions to our AIs. But as the friendly AI movement is keen to remind people, one needs to be very clear about what exactly one wants the AI to do. For society there is no clearly defined success metric-- and if there is one today, it may not be the accepted one tomorrow. I'd argue we must never delegate politics to AI, but that's another post for another time. In any case, such delegation won't be possible for a while yet. We're more than one AlphaGo Zero away from that.

And science as a whole, even the exact, rigorous science of the kind we have good computational models for (and therefore, in principle at least, good simulators for), still contains knowledge that is valuable for humans to possess. It may become a quaint eccentricity to want to know the details of how to design an antenna. But someone still needs to decide that we want an antenna, or a rocket engine, or a room-temperature superconductor-- and what we want them for. This is not a question about which one can run millions of cheap, reproducible experiments. It's a question about a vision: articulating it, detailing it to the level where its required component parts can be optimized by an AI, and convincing others that this vision is necessary, or just plain cool. Engineering and science are political too.

There are "interesting" times ahead, for sure. Automation threatens to make many human activities redundant, and nerds seem quite happy with making themselves obsolete. It's easy to see before us a Huxleyan Brave New World: a world of immortal humans kept in idiotic bliss, deskilled by the very technology their ancestors once created, small incurious souls stripped of all necessity by the tools that master them. Knowledge is useless. Knowledge is a liability. For humans at least. Don't bother with it. Don't bother with anything. Everything is fine. You are happy. You don't need to bother with anything. You are happy.

Opposite that pessimistic scenario, here is the optimistic one. Yes, a lot of knowledge will be devalued. But in truth, who cares about the reams of specialist scientific literature published anyway? Most of it is useless by any metric. The future does not belong to the human specialist. The future belongs to the encyclopedic human-- one who knows a little about a lot, so as to know a lot about the big picture. A human who knows the classical subjects: philosophy, politics, even rhetoric-- for the knowledge that matters is the knowledge that is debated. The knowledge that is fluid. The knowledge that is not specialized but specific, linked to a real, unique time and place. The knowledge that is general, about knowledge itself, about what can be achieved, about what we may want to achieve.

Both scenarios above err too much in their own direction. The future, as always, is likely to be quite a bit more prosaic and, in its own new way, not that unlike the past. But of this I am quite sure: specialist knowledge will increasingly become the province of machines, and at most a quaint hobby for humans. And this, I hope, will remain true: brush up on philosophy and politics, brush up on general knowledge; they are likely to matter more and more.

And beware the day when the machines come for our politics. That is the day when the human soul will really be in danger.

Cheers.

Comments

  1. Fascinating!

    Bland, your "cheap reproducible experiment" approach is illuminating. One thing it brings out is that there's a difference between testing a fully abstract system like Go, and testing a real world where results can still surprise you. If our theories about superconductors or antibiotics, say, are sound -- and, as you say, they seem to be pretty good -- then we might use this kind of expert system to design a conducting material or antigen. But we won't find out whether our theories are sound, or complete, until we put them up against reality.

    And it's also a vital point that an expert system can't jump the fact-value distinction just by number-crunching. What *ought* to be done is a different *kind* of knowledge than what *is* happening. We could use a good deal more attention to the former (while not shortchanging the latter!).

    Rick

    Replies
    1. The, let's say, simulator verification is indeed a wrinkle for an AI expert. A simulator is not the real thing, so without testing, its results cannot be trusted.

      The solution: as long as the simulator is "good enough", it can significantly whittle down what you need to test in the real world (a sketch follows below).

      The solution, take 2: the simulator itself is "trainable", which may be possible in some domains where we can do cheap, reproducible experiments on the real thing (various parts of chemistry may fit here once labs-on-a-chip take off).

      Notice the big fudge factor: the "good enough" simulator. I won't struggle to define it now. Let's say that for quantum mechanics we might have good enough ones. For robots operating in a household environment (even when humans aren't around), somehow we don't-- to the frustration of a few robotics project ideas I've encountered, and the motivation of another.
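
      Here is a sketch of the "whittle down" loop, with made-up stand-ins throughout: real_experiment is the slow, expensive ground truth, cheap_simulator is fast but systematically off, and only the simulator's top picks get tested for real. The last two lines gesture at take 2-- recalibrating the simulator from the few real measurements we do collect.

```python
import random

def real_experiment(x):
    # Expensive ground truth: imagine days of lab time per call.
    return -(x - 3.0) ** 2 + 10.0

SIM_BIAS = 5.0  # systematic error, unknown to us in advance

def cheap_simulator(x, correction=0.0):
    # Fast but imperfect: offset plus noise, minus any learned correction.
    return real_experiment(x) + SIM_BIAS - correction + random.gauss(0, 0.1)

candidates = [random.uniform(-10.0, 10.0) for _ in range(100_000)]

# Step 1: the simulator screens everything; keep only the top five.
shortlist = sorted(candidates, key=cheap_simulator, reverse=True)[:5]

# Step 2: five real experiments instead of a hundred thousand.
results = {x: real_experiment(x) for x in shortlist}
print("best candidate:", round(max(results, key=results.get), 3))

# Take 2: "train" the simulator on the real measurements we now have.
correction = sum(cheap_simulator(x) - y for x, y in results.items()) / len(results)
print("learned correction:", round(correction, 2))  # roughly SIM_BIAS
```

      (A constant offset conveniently never changes the ranking; a real mismatch would, which is exactly why "good enough" is doing so much work above.)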

      Cheers.

