Discussion about this post

Jesse:

Hey JVL, long time first time here. Got me a master's in AI (fancy!). The thing about showing why a highly multivariate system did a particular thing is not so much that it's not possible, as that it's a huge pain in the ass.

Imagine if you decided to make a decision using a giant Pachinko machine with a million balls and a million pegs. You'd say, if more than twenty balls end up in this particular slot at the bottom, then a switch gets thrown and something gets fired off. Then you pour in your million balls and for good measure you videotape the whole thing to review later. The slot in question ends up with a hundred balls in it and whatever happens happens.

Now a reporter comes up and asks you why a particular ball ended up in that slot. You can go back and watch the video and see every peg that ball hit and every other ball that jostled it, and you can do that for every ball that ended up in that slot, but jeez man, what a ton of work that is, and at the end of the day, did you really even answer the question satisfactorily? You can sit and watch that video of all the balls pinging around, and strictly speaking, 100% of the information you want is right there. But it still just looks like a bunch of balls pinging around.
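
To make the analogy concrete, here is a rough sketch of a Pachinko-style decision modeled as a simple random walk. This isn't anyone's real system: the ball count is scaled down from the million in the comment, and the rows, watched slot, and threshold are made-up stand-ins.

```python
import random

# Each ball bounces left or right at every row of pegs, and the "decision"
# fires if enough balls land in one watched slot.

N_BALLS = 100_000    # scaled down from the comment's million for speed
N_ROWS = 20          # rows of pegs each ball passes
TARGET_SLOT = 10     # the slot the decision rule watches (the center)
THRESHOLD = 20       # "if more than twenty balls end up in this slot..."

def drop_ball(rng):
    """Return (slot, path): slot is where the ball lands, path is every bounce."""
    bounces = [rng.choice("LR") for _ in range(N_ROWS)]
    slot = sum(1 for b in bounces if b == "R")   # slot = number of rightward bounces
    return slot, bounces

rng = random.Random(0)
drops = [drop_ball(rng) for _ in range(N_BALLS)]
in_slot = [path for slot, path in drops if slot == TARGET_SLOT]

print(f"balls in slot {TARGET_SLOT}: {len(in_slot)}")
print("decision fired:", len(in_slot) > THRESHOLD)

# The "video replay" for one ball: strictly speaking, 100% of the mechanical
# information is here, and it still reads as a bunch of balls pinging around.
print("one ball's complete trace:", "".join(in_slot[0]))
```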

Now suppose that on our Pachinko board there's an element just above the slot in question that acts as a funnel. Now the answer looks a lot simpler! The slot filled up because there's a funnel above it. A funnel is a concept that people understand, that you can tell a story about.

AI systems and other highly multivariate decision engines are like this: an AI researcher will talk to you about _features_ and _contours_ and _trends_, and you may hear all of that, and your eyes start to glaze over, and then you say "yes, yes, but how did it make the decision?" Then the researcher sighs and shows you the balls pinging off their pegs, and you are understandably dazed by the spectacle and come away thinking the system is chaos and nobody really understands it. But really what's happening is that you're failing to appreciate that they were answering your question the first time around. The human-comprehensible part is not in the mechanical end of things; it's in the design. Or: if I blow out a candle and you ask which molecule of air extinguished the flame, you're asking a question that is fundamentally wrong. Air molecules and the physics that dictate their interactions are very much involved, but that's not where the story is. The misunderstanding comes from failures of communication between experts and lay folk about issues of granularity and scale.
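
And a sketch of that granularity point: the same toy decision explained two ways, once as the full arithmetic trace (the balls and pegs) and once at the level of its design (the funnel). The feature names, weights, and applicant values are invented purely for illustration.

```python
# A tiny hand-set linear scorer, not a real model.
FEATURES = ["income", "debt_ratio", "late_payments", "account_age"]
WEIGHTS  = [0.9, -1.4, -2.1, 0.3]    # the "funnel": late payments dominate by design
BIAS     = 0.5

applicant = [0.4, 0.7, 1.0, 0.2]     # one hypothetical input

# Mechanical answer: every multiply and add, complete and unilluminating.
trace = [(name, x, w, x * w) for name, x, w in zip(FEATURES, applicant, WEIGHTS)]
score = BIAS + sum(t[-1] for t in trace)
for name, x, w, contrib in trace:
    print(f"{name:>15}: {x:+.2f} * {w:+.2f} = {contrib:+.2f}")
print(f"score = {score:+.2f} -> decision: {'deny' if score < 0 else 'approve'}")

# Design-level answer: which part of the design acted as the funnel this time.
funnel = max(trace, key=lambda t: abs(t[-1]))
print(f"dominant factor: {funnel[0]} (contribution {funnel[-1]:+.2f})")
```

The trace is complete, but the "dominant factor" line is the funnel: the part you can actually tell a story about.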

Kevin Bowe:

I'm so out of the mainstream thinking about what counts as "important" political debate. Silly me for thinking that the impact accelerating waves of new technology are having on society--this AI among them--is among the most important topics we need widespread public debate on.

Where would we be today if, in 2010, four years after Twitter was launched, we had started talking about the complex, wide-ranging issues and implications of having a private company own the public square online? But we didn't have that debate until circumstances forced us to, and only AFTER the genie was out of the bottle.

We will indeed be having huge debates on the manipulation of AI code. But we won't have that debate now. We'll wait until the pot boils over.
