Sunday, April 7, 2024

How Liars Create the Illusion of Truth


Controlling the public mind was prioritized and operationalized scientifically in the 20th century:

Edward Bernays (1928) Propaganda:

“The conscious and intelligent manipulation of the organized habits and opinions of the masses is an important element in a democratic society . . . . Those who manipulate this unseen mechanism of society constitute an invisible government which is the true ruling power of our country”

"In absence of trusted leader, group relies upon 'clichés, pat words, or images' .... Clichés can 'automatically' condition public emotion" (p. 74)

One of the most important lessons from the 20th century for conditioning social cognition is the power of repetition:

Tom Stafford (2016, October 26). How liars create the ‘illusion of truth’. BBC Future. http://www.bbc.com/future/story/20161026-how-liars-create-the-illusion-of-truth

“Repeat a lie often enough and it becomes the truth” is a law of propaganda often attributed to the Nazi Joseph Goebbels. Among psychologists something like this is known as the "illusion of truth" effect. Here's how a typical experiment on the effect works: participants rate how true trivia items are, things like "A prune is a dried plum". Sometimes these items are true (like that one), but sometimes participants see a parallel version which isn't true (something like "A date is a dried plum").

After a break – of minutes or even weeks – the participants do the procedure again, but this time some of the items they rate are new, and some they saw before in the first phase. The key finding is that people tend to rate items they've seen before as more likely to be true, regardless of whether they are true or not, and seemingly for the sole reason that they are more familiar.
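The familiarity effect described above can be illustrated with a toy simulation. Everything below (the 1–7 rating scale, the size of the familiarity boost, the noise level) is an illustrative assumption, not a value from any actual study:

```python
import random

# Hypothetical parameters -- illustrative, not drawn from the research.
BASE_RATING = 3.5        # mean truth rating for unseen items on a 1-7 scale
FAMILIARITY_BOOST = 0.6  # assumed bump for items seen in an earlier phase

def rate(seen_before, rng):
    """Simulate one participant's truth rating for a trivia item."""
    rating = BASE_RATING + rng.gauss(0, 0.5)  # noisy baseline judgment
    if seen_before:
        rating += FAMILIARITY_BOOST  # familiarity raises perceived truth
    return min(7.0, max(1.0, rating))  # clamp to the rating scale

rng = random.Random(42)

# Phase 2 of the experiment: some items were seen in phase 1, some are new.
# The boost applies whether or not the item is actually true.
old = [rate(True, rng) for _ in range(1000)]   # repeated items
new = [rate(False, rng) for _ in range(1000)]  # novel items

print(f"mean rating, repeated items: {sum(old)/len(old):.2f}")
print(f"mean rating, new items:      {sum(new)/len(new):.2f}")
```

The point of the sketch is only that the boost is attached to familiarity, not to truth, which is why repeated falsehoods drift upward in rated truthfulness.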
Today, AI offers a powerful tool for those seeking to shape the public's social imaginary, as explained in this excellent synopsis:
Johann Eddebo via Off-Guardian.org (2024, February 14). AI & The New Kind Of Propaganda. ZeroHedge. https://www.zerohedge.com/geopolitical/ai-new-kind-propaganda

...So the idea is to use massive data collection and AI pattern recognition to preemptively disrupt the formation of behaviourally significant narratives, discourses or patterns of information.

With these tools of “early diagnosis” of information that potentially could disrupt the power structure and its objectives, it then becomes possible to nip it in the bud incredibly early on, way before such information has even coalesced into something like coherent narratives or meaningful models for explanation or further (precarious) conclusions....

...What this ingenious AI propaganda system then does, is to automatically cordon off this statement [identified as problematic] by shadowbanning, downranking and other forms of concealment in the information flows....

What’s added on top of this is the seeding of counter-narratives, and the two obvious ways that this can be effected is by situating the relevant statement in a context of contrasting or discordant information so that both the “disruptive agent” and other recipients of the information get a clear message that this piece of information is both contested AND an off-key minority perspective fit for social ostracization.

This can be further supported by counterintuitively promoting the Facebook post or forum message in the flows of networks of singled-out users that have been identified as “loyalist” proponents of the preferred views (the volunteer thought police corps) so as to provoke their criticism and rejection of the message.

Another potential aspect of this proactive seeding of counter-narratives is to employ bots. Fictitious users, that through generative language models provide targeted responses to potentially disruptive pieces of information. Another interesting possibility is to generate fake, short messages by actual users in your social network to produce these targeted responses – messages which can’t be seen by themselves and so will generate no interaction, and which mimic their style and tenor...
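The flow the synopsis describes — flag a statement, conceal it, then seed counter-narratives around it — can be sketched schematically. Every name, threshold, and number here is an assumption made for exposition; no real system is being described:

```python
from dataclasses import dataclass, field

# Schematic sketch of the three-step flow described in the quote above.
# All thresholds and multipliers are invented for illustration.

@dataclass
class Post:
    text: str
    risk: float                 # 0..1 score from a hypothetical classifier
    rank: float = 1.0           # feed-ranking weight
    routed_to: list = field(default_factory=list)

RISK_THRESHOLD = 0.7  # assumed cutoff for "potentially disruptive"

def moderate(post, loyalist_network):
    """Apply the concealment and counter-narrative steps to one post."""
    if post.risk < RISK_THRESHOLD:
        return post  # below threshold: leave untouched
    # 1. Conceal: downrank so the post rarely surfaces in feeds.
    post.rank *= 0.1
    # 2. Seed counter-narratives: queue contrasting content alongside it.
    post.routed_to.append("counter-narrative slot")
    # 3. Counterintuitive promotion: surface the post to "loyalist" users
    #    likely to criticize and reject it.
    post.routed_to.extend(loyalist_network)
    return post

p = moderate(Post("disruptive claim", risk=0.9), ["loyalist_feed_1"])
q = moderate(Post("benign remark", risk=0.1), ["loyalist_feed_1"])
print(p.rank, p.routed_to)
print(q.rank, q.routed_to)
```

The sketch makes the asymmetry visible: the flagged post is simultaneously hidden from general feeds and promoted into hostile ones, while unflagged content passes through unchanged.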

We live in a time when "hyperreality" prevails: communications are no longer tied to the reality principle but are instead organized by abstracted, circulating images and narratives, whose prioritization is increasingly driven by AI.

The reality principle, the Enlightenment faith in the capacity for certain forms of human reasoning, is losing a privileged epistemological stance. Jean Baudrillard (1994) denoted modern communications as “hyperreal” due to their increasing detachment from the reality principle and their tendency toward self-referentiality.

Accordingly, following Baudrillard, Jayson Harsin (2018) uses “post-truth” to describe contemporary public communications. Post-truth does not imply a simple dualism between truth and ideology, but rather encapsulates the public’s loss of trust in institutions, resulting in increased public confusion and suspicion. According to Harsin, post-truth addresses “discord, confusion, polarized views, and understanding, well-and-misinformed convictions, and elite attempts to produce and manage these ‘truth markets’ or competitions” (p. 3).

The proliferation of significations and narratives is unmoored from classical logics of empirical verification and structured argumentation, described, for example, by Stephen Edelston Toulmin (1969) in The Uses of Argument.

The unmooring of abstracted symbols, together with the absence of any demand for verification, enables elites who seek to control media messaging to monopolize the public mind by offering the public a binary information/dis-information code as a tool for simplifying symbolic clutter and breaking through the noise of conflicting messages.

Despite the lack of institutional trust, the public grasps this binary information/dis-information code to anchor understandings of concepts and accounts that they cannot empirically verify in their everyday lives, including foreign events, abstract hazards (such as radiation or genetic engineering), and technological promises.

Liars today create the illusion of truth by coding information as "mis-," "dis-," and "mal-" information, with little-to-no demand for verification beyond their own self-referentiality.

 



