AI isn't any more "magic" than the weather or the economy, or for that matter, society.
When engineers tell you they don't know "why" an AI made the decision it made, you have to put that in context. You are talking to people who are accustomed to being able to comment in detail on the behavior of systems precisely designed to behave in strict accordance with certain algorithms – algorithms that mimic the logic of our conscious minds, i.e. the ways in which we have already learned to understand, from a high-level perspective, our own ability to reason. We can "explain" their behavior because the kind of high-level explanation we'd be looking for was the very basis of their design, and was directly implemented.
AI is different – it actually works by simulating (or attempting to simulate) the fuzzy, more intuitive processes of our own minds, which operate more through association and pattern matching than strict logic. Our intuitive minds are far from perfect, but often enough they produce results that we are ultimately capable of rationally defending, even if we can't supply an airtight proof of their correctness. They work via the interplay of a subconscious set of heuristics, constituting a sort of "flawed logic" that works well enough because it has been configured through experience (i.e. our lifetime's worth of "training data"). Think of it as an indirect way of arriving at logical decisions: at the cost of overcomplicating simple logical tasks, it simplifies (approximately and imperfectly) tasks that would otherwise require an enormous volume of precision logic.
Sometimes, as with Deep Blue, these heuristics are explicitly designed for a specific purpose, so that a designer could, for example, point to a specific parameter and say: this is how much weight is given to the position of the king, the queen, or another piece in a certain kind of scenario. Other times the models are more generic, and a designer would have to look at how the parameters have shaken out from the training data in order to say: OK, it looks like maybe this cluster of nodes (artificial neurons) ends up being responsible for reasoning about this particular class of information, and so on.
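To make "explicitly designed" concrete, here is a toy evaluation function in the spirit of Deep Blue's approach. To be clear, the features and weights below are invented for illustration – they are not Deep Blue's actual parameters. The point is that every number is a decision a human made and can point to:

```python
# A toy, hand-designed position evaluator. Each weight was chosen by a
# human designer, so each term of the score has a ready-made explanation.
from dataclasses import dataclass

# Classic material values (illustrative; real engines tune these further).
PIECE_VALUES = {"pawn": 1.0, "knight": 3.0, "bishop": 3.1, "rook": 5.0, "queen": 9.0}
KING_SAFETY_WEIGHT = 0.5   # explicitly chosen: how much king shelter matters

@dataclass
class Position:
    material: dict       # net piece counts, ours minus theirs, e.g. {"rook": 1}
    king_shelter: float  # hypothetical 0..1 measure of pawn cover near our king

def evaluate(pos: Position) -> float:
    """Score = weighted sum of human-chosen features; every term is explainable."""
    material_score = sum(PIECE_VALUES[k] * n for k, n in pos.material.items())
    return material_score + KING_SAFETY_WEIGHT * pos.king_shelter

# Usage: up a rook, with decent cover around the king.
print(evaluate(Position(material={"rook": 1}, king_shelter=0.8)))  # 5.4
```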
(Now, one difference from the human brain is that computers can also avail themselves of raw, brute-force computational power beyond the capacity of our minds. Besides allowing them to quickly incorporate enormous corpora of training data that dwarf what an individual person can experience in a lifetime, that power can augment these pre-configured heuristics with a real-time boost to an AI's ability to "think on its feet". Ultimately, these advantages may simply compensate for all the ways in which our modeling of human intuition falls short of how our minds actually operate.)
Now, compare this to analyzing the economy. We understand the basics of what money is, how it represents abstract "value", and how individual transactions work. From there we attempt to divine the higher-level "logic" of how markets operate, such as the law of supply and demand – which forms a basis for understanding how lots of individual transactions collectively determine a general "market price". The law doesn't give us perfect predictability because it operates under various simplifying assumptions, but it works well enough to let us make a good deal of sense out of an otherwise random-seeming interplay of countless interactions. From there we make further observations about high-level patterns of behavior, allowing us to describe and understand the economy according to a simplified (yet imperfect) "logical" model.
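Here's a toy simulation of that idea – every number in it is invented purely for illustration. No one sets the "market price" directly: a thousand traders with random private valuations buy when the price looks cheap and sell when it looks dear, and a stable price simply emerges from their interactions.

```python
# A toy market: the "price" emerges bottom-up from many individual decisions.
import random

random.seed(1)
price = 50.0                                       # arbitrary starting price
for day in range(300):
    # Each trader has a private valuation; they buy if the price is below it.
    valuations = [random.gauss(40, 10) for _ in range(1000)]
    buyers = sum(v > price for v in valuations)    # demand at this price
    sellers = len(valuations) - buyers             # supply at this price
    # Excess demand nudges the price up; excess supply nudges it down.
    price *= 1 + 0.05 * (buyers - sellers) / len(valuations)

print(round(price, 2))   # settles near the traders' median valuation (~40)
```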
The point is, traditionally engineered systems begin with this high-level logical understanding and are then implemented accordingly. They're top-down designs. AI, in contrast, is bottom-up: it begins with a low-level understanding of a system's basic dynamics, which designers hope will produce (again, imperfectly), through carefully coaxed and tuned volume and complexity, something resembling high-level "logical" patterns of behavior. Understanding how AI does what it does is therefore no more "impossible" than understanding the weather, the economy, the human body, biological evolution, or human society – it would just take a lot of careful analysis and observation, the kind we engage in when we "do science".
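For contrast with the hand-designed evaluator above, here is a minimal sketch of the bottom-up alternative – a generic toy network in Python/NumPy, not any particular real system. A tiny two-layer network learns the XOR function from four examples. The trained weights do solve the task, but no individual weight was designed to mean anything; any "explanation" of what they do has to be reverse-engineered after the fact, which is exactly the kind of empirical analysis described here.

```python
# A tiny network learns XOR from data. The weights that result were never
# "designed"; they simply shake out of gradient descent on the examples.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)    # XOR truth table

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)     # hidden layer, 4 units
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)     # output layer

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(20000):                             # plain gradient descent
    h = sigmoid(X @ W1 + b1)                       # hidden activations
    out = sigmoid(h @ W2 + b2)                     # network's predictions
    d_out = (out - y) * out * (1 - out)            # backprop: output layer
    d_h = (d_out @ W2.T) * h * (1 - h)             # backprop: hidden layer
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())   # should approach [0, 1, 1, 0]: it works...
print(W1.round(2))            # ...but these numbers carry no designed meaning
```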
Here it's important to understand that there are the more precise sciences (physics, chemistry, biology), and then the more general sciences (economics, meteorology, sociology). The latter operate with the understanding that they are attempting to divine an approximate understanding of a complex system – a system whose low-level mechanics can be described very precisely by a science of the former type, but which ultimately wasn't created to mimic an easily describable high-level logic. This is the kind of science that would be necessary to "understand" modern AI.
Which is possible, because it's not magic.