I started following Richard Hanania thanks to this post. Yikes. I had to almost immediately unfollow thanks to the tweet below. Is there such a thing as an anti-Trump righty troll?
His tweet: "I think people are overestimating how much drugs to treat obesity will work. The problem with fat people is they don't have the intelligence, self-control to put some things in their mouths and not others. Many will never show the initiative to get the pill or forget to take it."
#3: cf Arthur C. Clarke
Thanks for asking the question about which influences conservative culture more: out-group-near/out-group-far or oppositional defiance. In family members' posts on Griner/Whelan, they are angry that one American was chosen over another. The posts used images of Griner that evoke black Americans in prison or in court, but the images don't note that the court is Russian. The image of Whelan is not of a man who stole Social Security numbers & defrauded others and was convicted by a military court; it is an image of the strait-laced, medaled Marine. Biden chose the American who looks least like them (Griner) over the American who looks most like my family (Whelan). At a deep level, these family members are mourning that their identity is no longer highly prized by their own children & America generally. This family interprets their own children coming out as gay as a rejection of their way of life as well.
It's more than simple oppositional defiance or out-group-near/out-group-far, I suspect. It is a fear that they themselves are no longer valued in the same way by the youth in their own lives. They can't even influence their own children to choose their lifestyle & become straight. The generational shifts run deep & they are vast.
Excellent post. The AI newsletters look like gold. Thank you again for finding such gems and telling us about them.
Don't get me wrong: Google is the nearest thing to Star Fleet the world has seen. It's benevolent, powerful, and really does mean to organize the world's information in service to humanity.
But it has a blind spot, in that it doesn't know (or can't act as though) most people are not "smart." And they're not. *Most* people -- meaning more than half -- are significantly ignorant. Google Search, for example, will always try to return some kind of result, even if a better, even necessary response would be more like, "Your query is laden with assumptions and unspecified context. Please ask a better question."
Providing a result for wildly diffuse queries tends to validate those queries. It's expected that the user, upon receiving a sub-optimal response, will naturally improve his query before taking further action. That's a politely unrealistic expectation.
LQ
Please read the first comment in Hanania’s article. It is absolutely incredible and the greatest diagnosis I’ve read of the Republican Party.
I want to drill down on this AI thing a bit deeper.
But first, I want everyone to think of the dimmest person you know. I'm not being mean, I just want you to think about their lived experience. Think about the shrinking opportunities for people without significant intellectual capacity. Hell, think about yourself, doing your job after a severe stroke or accident. This world we are creating is going to be impossible for those kinds of people. Our society says that person has little market value. Now, with the advent of ChatGPT, we are all headed for that same place. We are all going to be in that same boat.
Tell me, how does that society "work?" And I don't just mean work as in labor, I mean how is that society organized? I surely don't know, but a couple of things are striking to me. One: my man John Maynard Keynes predicted "that technological change and productivity improvements would eventually lead to a 15-hour workweek." (Note: I tried to find the exact quote, but the original lecture...oy...) But we aren't shifting those productivity gains to the workers, are we? Nope, we are sending them straight to the shareholders and oligarchs. Two: every time anyone attempts to discuss these very real problems, you essentially get called a socialist by about 40% of the country.
Our political system will need significant changes to cope with the new reality of the coming AI disruption. We still haven't sorted out the information warfare taking place on social media. So many significant problems getting worse, so little coherence in our national dialogue. I may just have to build the manifesto cabin in the mountains and retreat...
The political right is great at inventing, redefining, and co-opting language to suit their purposes. Back in George HW Bush's time they were so successful at turning "liberal" into a dirty word that liberals including Hillary Clinton started calling themselves "progressives." More recently they invented "RINO" (Republicans In Name Only) to isolate and punish Republicans who didn't support Trumpism. The same people who ban books and campaign against any mention of racial or LGBT issues in our schools have managed to associate "cancel culture" with "leftists." Likewise, "woke" and "wokeness," originally used by black folk to describe the process of becoming aware of racism and white privilege, are now used to denigrate and excoriate people for being ridiculous and excessively concerned about political correctness.
So why does our side of the political fence (currently a coalition of sane liberals, moderates, and traditional conservatives) still continue to politely - and incorrectly - refer to radical right-wing extremists who are edging toward fascism as "conservatives"? Donald Trump, his enablers, his base, and the host of imitators and wannabe autocrats like Ron DeSantis are not conservatives - they do not subscribe to conservative political philosophy. I am a liberal but I have more respect for true conservatives than to lump them in with those who are running today's GOP.
It's past time to stop calling the radical right "conservative" - maybe hold a national contest to invent a catchy name or acronym that reflects the reality of who and what they really are.
They are neo-fascists. And it's about time that we start calling them that.
That first AI-generated photo of you looks like one of these Robert Zemeckis uncanny-valley CGI movies.
To paraphrase Arthur C. Clarke, any sufficiently advanced and complex technology can be both magical and frightening.
I have mixed feelings on Richard Hanania. When I saw his name was the section heading, I expected it to be about something dumb he said on Twitter, because he does that regularly. He also bounces between behaving ignorantly and making good points about the idiocy of the Trump Right. I basically read him with one eyebrow raised at all times.
AI isn't any more "magic" than the weather or the economy, or for that matter, society.
When engineers tell you they don't know "why" an AI made the decision it made, you have to put that in context. You are talking to people who are accustomed to being able to comment in detail on the behavior of systems precisely designed to behave in strict accordance with certain algorithms - algorithms which mimic the logic of our conscious minds, i.e. the ways in which we have already learned to understand, from a high-level perspective, our ability to reason. We can "explain" their behavior because the kind of high-level explanation we'd be looking for was the very basis of their design, and was directly implemented.
AI is different - it actually works by simulating (or attempting to simulate) the fuzzy, more intuitive processes of our own minds – which operate more through association and pattern filtering than strict logic. Our intuitive minds are far from perfect, but they often enough produce results that we are ultimately capable of rationally defending, if not giving a precisely impenetrable proof of veracity. They work via the interplay of a subconscious set of heuristics, constituting a sort of "flawed logic" which works well enough because it's been configured through experience (i.e. our lifetime's worth of "training data"). Think of it as an indirect way of yielding logical decisions which, at the cost of overly complicating simple logical tasks, simplifies (approximately and imperfectly) tasks which would otherwise require a very high volume of precision logic.
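To make that trade-off concrete, here's a purely illustrative toy (not how any real model is built): a task that would otherwise need a large pile of hand-written rules gets handled instead by a crude heuristic whose scores are "configured" entirely from example messages, standing in for training data.

```python
# Toy sketch: a "spam or not" heuristic learned from labeled examples.
# The messages and scoring scheme are made up purely for illustration.
from collections import Counter

def train_word_scores(spam_msgs, ham_msgs):
    """Derive a per-word score from labeled examples (the 'training data')."""
    spam_counts = Counter(w for m in spam_msgs for w in m.lower().split())
    ham_counts = Counter(w for m in ham_msgs for w in m.lower().split())
    words = set(spam_counts) | set(ham_counts)
    # Positive score = seen more often in spam; this is the learned heuristic.
    return {w: spam_counts[w] - ham_counts[w] for w in words}

def looks_like_spam(message, word_scores):
    """No explicit rules about spam; just sum the learned word scores."""
    return sum(word_scores.get(w, 0) for w in message.lower().split()) > 0

scores = train_word_scores(
    spam_msgs=["free money now", "claim your free prize"],
    ham_msgs=["lunch at noon", "see you at the meeting"],
)
print(looks_like_spam("free prize money", scores))  # True
print(looks_like_spam("meeting at noon", scores))   # False
```

It wildly over-complicates anything a one-line rule could handle, but the same machinery scales to fuzzy tasks nobody could enumerate rules for - which is exactly the trade-off described above.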
Sometimes, as with Deep Blue, these heuristics are explicitly designed for a specific purpose, so that a designer could, for example, point to a specific parameter and say, this is how much weight is given to the position of the king or the queen or another piece in a certain kind of scenario. Other times the models are more generic and a designer would have to look at how the parameters have shaken out from the training data in order to say, ok, it looks like maybe this cluster of nodes (synapses) ends up being responsible for reasoning on this particular class of information, etc.
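A rough sketch of that contrast (the piece values and "learned" weights below are invented for illustration; they are not Deep Blue's actual parameters):

```python
import random

# Hand-designed heuristic: a designer can point at each number and explain it.
PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def handcrafted_eval(my_pieces, opponent_pieces):
    """Material balance: positive means 'my' side is ahead."""
    return (sum(PIECE_VALUES[p] for p in my_pieces)
            - sum(PIECE_VALUES[p] for p in opponent_pieces))

# Learned model: the weights fall out of training, and "explaining" them means
# inspecting, after the fact, what clusters of numbers ended up doing.
learned_weights = [random.uniform(-1, 1) for _ in range(1000)]  # stand-in for trained parameters

def learned_eval(board_features):
    """Weighted sum over opaque features; no single weight maps to 'queen'."""
    return sum(w * f for w, f in zip(learned_weights, board_features))
```

In the first case the explanation was written down before the code; in the second it has to be reverse-engineered from the numbers, which is the whole "we don't know why it did that" problem.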
(Now, the difference with the human brain is that computers can also avail themselves of raw, brute-force computational power beyond the capacity of our minds. In addition to allowing them to quickly incorporate enormous corpuses of training data which dwarf what an individual person can experience in a lifetime, it can augment these pre-configured heuristics with a real-time boost to an AI's ability to "think on its feet". Ultimately, these advantages may simply compensate for all the ways in which our modeling of human intuition falls short of how our minds actually operate.)
Now, compare this to analyzing the economy. We understand the basics of what money is, how it represents abstract "value", and how individual transactions work. From there we attempt to divine more high-level "logic" of how markets operate, such as the law of supply and demand – which forms a basis for understanding how lots of individual transactions collectively determine a general "market price". The law doesn't give us perfect predictability because it operates under various simplifying assumptions, but it works well enough that it allows us to make a good deal of sense out of an otherwise random seeming interplay of countless interactions. From there we try to make other observations regarding high-level patterns of behavior, allowing us to describe and understand the economy according to a more simplified (yet imperfect) "logical" model.
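For instance, the textbook version of that law boils down to a couple of made-up linear curves and a solved-for equilibrium price - a drastically simplified model of countless individual transactions:

```python
# Toy supply-and-demand model; the coefficients are invented for illustration.
def demand(price):
    return max(0.0, 100 - 2 * price)  # buyers want less as the price rises

def supply(price):
    return max(0.0, 10 + 4 * price)   # sellers offer more as the price rises

# Equilibrium: quantity demanded equals quantity supplied.
# 100 - 2p = 10 + 4p  ->  p = 15, quantity = 70
equilibrium_price = (100 - 10) / (2 + 4)
print(equilibrium_price, demand(equilibrium_price), supply(equilibrium_price))
# 15.0 70.0 70.0
```

Nothing about any individual buyer or seller appears in it, yet it makes the aggregate behavior legible - the same kind of high-level, approximate "logic" we'd have to develop for AI.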
The point is, traditionally engineered systems begin with this high-level logical understanding and then are implemented accordingly. They're top-down designs. In contrast, AI is more bottom-up; it begins with a low-level understanding of the basic dynamics of a system and hopes to produce (again, imperfectly), through carefully coaxed and tuned volume and complexity, something resembling high-level "logical" patterns of behavior. Therefore, understanding how AI does what it does isn't any more "impossible" than understanding the weather or the economy or the human body or biological evolution or human society – it would just take a lot of careful analysis and observation, the kind we engage in when we "do science".
Here it's important to understand that there are more precise sciences (like physics, chemistry, biology), and then more general sciences (economics, meteorology, sociology). The latter operate with the understanding that they are attempting to divine an approximate understanding of a complex system - a system whose low-level mechanics can be very precisely described by a science of the former type, but which ultimately wasn't specifically created to mimic an easily describable high-level logic. This is the kind of science which would be necessary to "understand" modern AI.
Which is possible, because it's not magic.
I paid for a subscription to this site because I am interested in conservative points of view that go beyond "democrats are evil". I wanted access to JVL and normally appreciate the links to other writers in his post. But I have to say the link to the Richard Hanania article was disappointing. He makes his point about scams by belittling and disparaging the civil rights movement, the names some Black parents choose for their children (as if Black people are the only group to move away from pre-1950 common names), unions, seniors, and who knows who else, as I had to stop reading.
I know (hope) that The Bulwark writers aren't the only conservative writers who can make points without punching down. Was there no one else who has examined the right-wing scam phenomenon?
JVL. Offspring of Hemingway and his cat. According to AI.
Yay!
AI as magic is an interesting metaphor. A large part of magic is knowing the correct words and proper names of entities to get the results you want. In many ways AI is the same. Summon the entity you want with the correct words and you can manifest the seeming miraculous. Summon the wrong entity or get the spell slightly wrong and disaster awaits.
Currently that means your image won't be what you wanted, but imagine these "entities" hooked up to more powerful systems like electrical grids, financial systems, or military control systems.
You might even get a religion/magic dichotomy as the technicians (priests) trained on the system can summon it for proper uses, but hackers (dark magicians) can summon the system with adversarial prompts, or for unintended uses for their own gain.