In the light of the new worldwide wave of memetic-narrative disinformation (misdirecting false statements) and misinformation (partial truths with fiction admixed) – both generally based on mixing encodings of real information with pseudo-information (real signals with fictions) – the relationship between logical deductive inference, induction, knowledge, and information theory remains an urgent matter. Arguably the urgency grows in proportion to the speed of social media communications and to the historically unprecedented pace and scope of group-knowledge updating.
Noam Chomsky recently reposted some of his historical observations about postmodern anti-truth-ism: the phenomenon more recently labelled 'post-truth' by political commentators. In the video he notes the pernicious impact, and the importance, of the rejection of truth (with a careful caveat regarding individual beliefs):
For the philosophically and formal-logically uninitiated, the kind of rejection of truth Chomsky is referring to is not denial of specific truths or facts - although that can certainly follow logically and is generally part of the outcome. It is the rejection of the entire concept of truth as something that exists in what philosophers call a classical bivalent sense: true or false and nothing in between (the principle of bivalence, closely linked to the law of excluded middle), where truth has both a syntactic (formal symbols) and a semantic (meaning) component, and the meaning component is grounded in objective reality.
The syntactic concept of bivalence and truth is formal, and is about mathematical logical symbols arranged into formulae. The linked semantic concept is about a true statement in natural language - or its formal logical equivalent - objectively corresponding to, or picking out, some real thing, event, fact, or phenomenon in the world. In simple terms, a statement that makes an assertion - or expresses a proposition - is true if it refers to something objectively real which makes the statement true. Philosophers usually refer to such ideas under the heading of correspondence theories of truth. Versions of such theories date back to Plato, but fact-based versions did not emerge until the 20th century with the work of Bertrand Russell and his contemporaries. (Philosophers and logicians also argue about what constitutes a fact, which is linked with the theory of knowledge.)
Chomsky’s most famous work concerns epistemic innatism about language, or linguistic innatism. The idea is that there is not enough information in the developmental, psycho-social environment of children to support their linguistic development (the poverty-of-the-stimulus argument). Therefore, there must be innate (genetic) informational resources in the brains of human beings that facilitate their development of natural language faculties. Subsequent related debates about the possibility of innate mental language forms – called mentalese or the language of thought – have preoccupied psychologists and cognitive scientists for decades (Barbieri, 2010; Carruthers et al., 2007; Chomsky, 1975; Fodor, 2005; Lundie, 2019; Machery, 2007; Pinker, 2005; Rice, 2011). No solution is yet available, and it is expected that neuroscience will have the final say.
Given the importance of theories of truth to the concept of knowledge – and to its study (epistemology) – and given the importance of innate knowledge to Chomsky’s theory of linguistic faculties, it is not hard to see why theories of truth are regarded as important as a basis of the semantic content (meaning) of both thoughts and sentences.
That is just one set of theoretic considerations. Few people today are entirely unaware of the relevance of truth, versus post-truth, to political narrative construction.
Post-truth in History - Sort Of
Post-truth is literally a rejection of the entire idea that anything is true - versus a sole other option of false - in any concrete, immutable, objective sense. It is not an entirely new idea, historically speaking, but its degree of popularity, and its basis in recent Continental philosophy, are. Related ideas in ethics and moral philosophy - like moral relativism and moral individualism - have been around much longer.
Theoretically speaking, post-truth is closest to – in fact largely equivalent to – nihilism about truth. Nihilism about X means that, whatever X is, there is no X. To get an idea of how difficult it is to really be a nihilist about anything, consider the famous 19th century philosopher Friedrich Nietzsche. Many popular commentators refer to Nietzsche as a nihilist about truth. However, this is hard to support from his texts, because Nietzsche continually argued for the truth of various positions – such as that the Christian megacult did not have the truth in its doctrines. He presented positive theses about what it was to live a worthwhile life:
Hence, Nietzsche believes only a return of the Dionysian art impulse can save modern society from sterility and nihilism. (https://plato.stanford.edu/entries/postmodernism/#1).
These are not the behaviours and ideas of a nihilist about X, where X=truth as usually construed. However, Nietzsche did famously challenge the duality of good and evil as epistemically viable: as able to be used for gaining moral knowledge. Yet, the roots of the post-truth concept are even older than Nietzsche. The problem with moral knowledge is that, unlike bananas, proteins, and go-karts, it isn't at all clear what the term and concept 'moral' even refers to: Ideas? Some kind of strange Platonic thing? Religious principles? Conventions? Rules? Feelings? Reasons? Cultural tropes and taboos? The opinion of the King?
The 18th century empiricist philosopher David Hume is famous for (among other things) his theory of moral sentimentalism, according to which moral truth depends on people’s sentiments – and the emotional strength of those sentiments – about what they feel is right or wrong. Put simply, according to Hume something is moral or immoral to the extent that one feels – emotionally and sentimentally – like saying either ‘yay’ or ‘boo’ to it (or perhaps some mixture of the two). Reason, as construed by Hume’s contemporaries, is not much involved, said Hume. Hume is also famous for his simultaneous rejection of religious belief and narratives, of unemotional reason, and of realism about causality. Such ideas are not remotely comfortable for the ideologically conservative.
Hume's life provides an object lesson in the historical popularity of post-truth types of ideas about moral truth. As a consequence of both his rejection of religious ideas of truth and his sentimentalist theory of morals and right actions (which philosophers also refer to as a theory of the good), Hume was unpopular enough as a philosopher never to secure a university professorship, despite being one of the brightest minds in the history of philosophy.
Yet Hume was also an early champion of one version of the correspondence theory of truth: essentially the conception of truth that Chomsky endorses. Hume evidently believed in the objective truth of his negative theses about religious truth, causality (or causal realism), and morals (sentimentalism). Thus, in accordance with a penchant for correspondence theory, he was not anti-truth in the sense of the more (purportedly) nihilistic post-truth approach to truth.
In fact, dear reader, pedantic philosophers will point out that even post-truth is not necessarily nihilism about truth, since it tends to rely upon the truth of such ideas as the deconstruction of binaries (more on this below), and subjective relativism about truth, including moral truths. It is perhaps somewhat more difficult than both analytical and postmodernist philosophers suspect to get by without any belief in truth that refers to the world at all. Psychologists and political scientists aren't faring any better in this connexion.
Postmodernism is the philosophical disposition and movement associated with post-truth. Postmodernism has its place in academia, and is aesthetically pleasing when applied in literature, cinema, and the arts. It is an important French Continental-philosophical call to arms against memetic-narrative control. Just, perhaps, what one would expect from a nation that spawned one of the first modern revolutions to depose a monarchy.
It is somewhat ironic that post-truth allegedly evolved out of the work of French philosophers like Jean-François Lyotard, Michel Foucault and Jean Baudrillard. These are philosophers who were interested in identifying harmful grand narratives and hyperrealities that invoked (in Baudrillard's famous terms) simulacra and 'the desert of the real', thus disconnecting people from reality in a harmful and disempowering way.
Essentially, Foucault and Baudrillard were opposing not only very powerful, socially constructed memetic propaganda and spin, but also the inadvertently constructed, misleading, disempowering, overarching, influential narratives that grow out of the epistemic commitments of influential social agents and institutions like science, religion, political movements, and psychiatry (a favoured target of Foucault).
Postmodern conceptions of logic and truth are often what is called deconstructive, after the later theories of Jacques Derrida and Gilles Deleuze. That is to say: they eschew the objective true-false binary associated with bivalent logic as applied to human experience. Historically this happened because the bivalent true-false binary was seen by postmodern philosophers and critical theorists as the basis for the kind of religious, abusive scientific, political, and cultural narratives that contributed to instantiating the grand narratives and the hyperreal desert of the real that Foucault and Baudrillard, respectively, attacked. This includes the kind of memetic narratives infamous in WW2 propaganda (Adolf Hitler's Big Lie) and the Cold War (McCarthyism).
The binaries that accompany grand control narratives are regarded by postmodern theorists as overly simplistic, anti-humanistic, and oppressive: man-woman, right-wrong, black-white, gay-straight, worthy-unworthy, good-evil, righteous-unrighteous, godly-ungodly, sanctified-unsanctified.
Scientific-unscientific. Natural-unnatural. Liberal-illiberal. Red-blue. Patriot-traitor.
Postmodern theorists regard the imposition of such binaries upon messy, complex, psychologically variable beings – but especially upon people vulnerable to linked asymmetric power relations - as destructive, oppressive, and as the root of much suffering. The general idea is that if you are a person who gets subjected to the logic of a binary opposition, and on the other side of the binary is a powerful, venal, insensitive institution, idea, or group: then you will likely get abused and hurt.
One can see why this idea has traction. For the same reason as the French deposed their monarchs, perhaps?
It is not easy to deny the reality of harmful binaries, or - at the very least - the very real perception of the reality thereof. Eschewing - if not eliminating from consideration - correspondence-type theories of truth is perhaps a desperate approach that follows from deconstruction of discrete, bivalent, nothing-in-between theories of true versus false (patriotism versus not). However desperate it is, it is nothing if not innovative. If someone is abusing one with a narrative-constructive tool: take away the narratological tool. Discredit it. Damage it. Remove its power to harm.
Remove its memetic influence.
The approach is not nihilism, but does not embody logical bivalence and alethism (alethism is truth aptness, or the ability to have the property of being true). Scholars who are old hands at feminist theory know very well the associated significance of logic that is not - in more Continental philosophical terms - binary, digital, and oppositional. Feminists might - not unfairly - use the term 'phallic' to describe such logic (albeit metaphorically). Manly versus not manly? "Put your erect binary digit away", they intimate. Understandably.
(Don't get me wrong. There are very capable women scientists and logicians who embrace correspondence theories of truth too.)
Deconstruction Meets Logic, and the Philosophy of Science
To understand why and how post-truth and its Chomskyan correctives are salient to knowledge and memetic narrative construction, and the deep associated problems, it is perhaps helpful to approach the issue laterally from the perspective of a swathe of related disciplines that have worked on the technical roots of the same problems: analytic philosophy, cognitive science, logic, and the philosophy of science and information.
From the 1970s onward, Jaakko Hintikka confronted the problem of trying to model logical deductive inference (premises leading to a conclusion via an argument) in terms of information flow. To make it clear how difficult this is: it is far from clear what even comes first - logic of some kind, or information somehow construed apart from logic. It is not even clear if this is the correct distinction, or the right question. If logic and information are the basis of narratives, then what exactly are they?
Philosophers care about such stuff, but so do psychologists and philosophers of science. Hard scientists are somewhat preoccupied with it too, since many of their theories and ideas must be communicated with natural language, and since the explanations associated with theories involve a kind of narrative (albeit a very rigorous narrative in the case of theories like quantum field theory – arguably the most predictively reliable theory in history). This is one thing that makes the philosophy of science and metascience an ongoing intellectual and scientific concern.
It has been thought since the late 20th century that the information age, and the intimate relationship between communication theory, data theory, information science, and computer science, demanded an explanation of how logic – especially the Boolean logic of digital computers – relates to the nature of information and of information flow (which is not necessarily the same thing as information transmission). In fact, the relationship between logic and these sciences seemed to demand their integration in a convincing and formal manner.
This imperative has been regarded as even more pressing with the development of AI and social media, and especially of AI applied within social media systems. Hintikka’s problem of reconciling deductive inference with information flow – the observation that valid deduction apparently yields no new information, since the conclusion is already contained in the premises, so that deduction sits awkwardly with information theory and the concept of information flow – has come to be called ‘the scandal of deduction’.
The main cause of Hintikka’s ‘scandal’ is that the hard science of mathematical communication theory demonstrates that no signal goes anywhere without loss of structure and content due to noise. Information transmission is necessary any time signals or messages (or structures, symbols, or emissions) go from one spatiotemporal location to another – as they do with every thought a human brain processes – and it necessarily involves information loss. Thus, in trying to marry deductive inference in logic to scientific information theory, one must try to ground alethic, all-or-nothing conceptions of truth-transmission qua classical, bivalent, formal logic in an applied science which tells one that lossless information transmission is impossible, and which characterises information transmission both statistically and in terms of discretisation to bivalent binary digits (bits).
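To make the lossiness concrete, here is a minimal sketch (my own illustration, using the standard binary symmetric channel model rather than anything taken from the texts cited here) of how much of one bit of source information survives transmission through noise: any non-zero flip probability leaves the receiver with strictly less than the full bit.

```python
import math

def h2(p):
    """Binary entropy in bits (0 by convention when p is 0 or 1)."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def mutual_information_bsc(flip_prob, p_source=0.5):
    """I(X;Y) across a binary symmetric channel with the given bit-flip
    probability, for a Bernoulli(p_source) input: H(Y) - H(Y|X)."""
    p_y1 = p_source * (1 - flip_prob) + (1 - p_source) * flip_prob
    return h2(p_y1) - h2(flip_prob)

if __name__ == "__main__":
    source_entropy = h2(0.5)  # the source emits 1 bit per symbol
    for noise in (0.0, 0.01, 0.1, 0.25, 0.5):
        survives = mutual_information_bsc(noise)
        print(f"flip probability {noise:0.2f}: {survives:0.3f} of {source_entropy:0.0f} bit received")
```

At a flip probability of 0.5 the channel passes nothing at all; in between, only a fraction of a bit gets through, which is precisely the statistical, proportional picture of transmission that sits so awkwardly beside all-or-nothing bivalent truth.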
Or should we instead ground the concept of information in some kind of formal logic? Or is this the wrong question? I can guarantee you that the brightest of minds in philosophy of science are not certain. If any say they are certain, they’re going to get an argument of some kind from the brightest of minds in logic (who are also busy arguing about bivalence and pluralism in logic).
It is little wonder that memetic narratives are often not easy for media consumers to decode and evaluate.
Memetic Narratives, Information, and Knowledge
Epistemic content – or knowledge – and theories of how it is realised in the brain-minds of individuals, how it is encoded into memetic narratives, and how it is transmitted in groups, are critical to political science, cognitive science, linguistics, and metascience: to every field of human intellectual endeavour. As mentioned in the discussion of Chomsky’s linguistic innatist theory above, concepts and theories of truth are critical to the concept of knowledge and knowledge acquisition. Just as it has been vexingly difficult to merge deductive logical inference and information theory, it has been notoriously difficult to make informational theories of knowledge, or epistemic theories, work.
The informational, process-reliabilist (based upon reliably reproducible causal processes) epistemic theory of Fred Dretske provides an instructive example. Dretske proposed what is known as the veridicality thesis – that information must necessarily be alethic and true – in order to overcome the problem of knowledge transmission, which rests upon meaning transmission, which rests upon information transmission, which rests upon lossy signal transmission. As for misinformation: false information is information like a decoy duck is a duck, said Dretske.
The associated technical issues surrounding the veridicality thesis are not for the faint of heart (Deacon, 2010; Floridi, 2011; Lundgren, 2019; Stegmann, 2015). Dretske developed a kind of cheat in his conception of information flow to make his reliabilist epistemology work: the requirement that the signal carry its content with a conditional probability of 1, or certainty. This is because Dretskian information flow relies upon his Xerox Principle:
Dretske's Xerox Principle: If C carries (encodes) the information that B, and B carries (encodes) the information that A, then C carries the information that A.
One cannot have only part of the information moving from A to B, and then from B to C, because if each link preserved anything less than all of the information – and any truth associated with it – then as it flowed on through D, E, F, G and so on, it would eventually degrade to zero. The problem is that such 100% fidelity never happens in practice in transmission, due to physical nomic constraints (natural laws), yet 100% fidelity of truth qua information is necessary for bivalence in classical logic, and for the correspondence theory of truth.
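A toy numerical sketch (my illustration, not Dretske's own formalism) shows why he needed that certainty requirement: if each hop of a relay chain preserves content with probability less than 1, the chance that the original content survives decays geometrically with the number of hops.

```python
# Toy model: independent per-hop fidelity, so survival over n hops is p**n.
def survival_probability(per_hop_fidelity: float, hops: int) -> float:
    """Probability that a message survives `hops` relays intact,
    assuming each hop independently preserves it with the given fidelity."""
    return per_hop_fidelity ** hops

if __name__ == "__main__":
    for p in (1.0, 0.99, 0.9):
        chain = [round(survival_probability(p, n), 3) for n in (1, 5, 20, 100)]
        print(f"per-hop fidelity {p}: survival over 1, 5, 20, 100 hops = {chain}")
```

Only a per-hop fidelity of exactly 1 keeps the Xerox Principle intact over arbitrarily long chains; anything less eventually erodes the information away.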
Dretske's theory also tends to force a subjectivist (person-subject-centric) element into his otherwise objectivist conception of information, which conception literally states that information is an "objective commodity" (Dretske, 1981, 1983; Floridi, 2004). The problem arises because, when it comes to passing knowledge between agents, the receiving agent already needs to know something about the source and its codes. That's not very objective. This also creates further difficulties for marrying information flow to logical deductive inference (this premise, plus that premise, gives certain conclusion ‘C’), the latter being very much supposed to be objective.
Cognitive science offers up even more problems. Psychologists and neuroscientists know that cognitive information processing is very prone to loss, noise, confabulation, decay, and to flaws in metacognitive and cognitive processes (although there are some surprises). Human brains can be quite flaky information processing systems.
All the more reason to have some tools for discerning when narratives are based upon real information.
Decoy ducks and mental confabulation aside, there are even more vexing related issues about the nature of information itself: whether and how information is meaningful, whether information must necessarily be true, or whether instead it is not even alethic (Allo, 2011; Brady, 2016; Demir, 2014; Sequoiah-Grayson, 2012).
My own view is that information is a truthmaker, and not a truthbearer, and so not alethic. Information is not, in fact, something that is true or false. Information is information (involving causally-induced configurations of structure). One has some information – as a causally affected source structure or signal – or one does not. If you have it, and encode it (using some kind of natural or artefactual rules) into messages or representations, then it is those representations and messages that are alethic (able to be true or false). If you have a message or a representation that seems to encode real information, but does not, then you have pseudo-information.
However, information is also intrinsically meaningful due to various kinds of causal indication. If you get a visual signal of a duck, but are not sure if it is a duck or a decoy duck, then you at least have a local existential indication that it is possible there is a real duck within visual distance, as opposed to there definitely not being any duck. Unless you're hallucinating ducks, or have a duck-delusion, of course. Then all bets are off.
The above issues make it clear that the concepts of deductive inference and truth are not as straightforward as they seem – certainly not in information-theoretic terms (D’Agostino & Floridi, 2009). Happily, not all of the news is necessarily bad.
In a striking convergence of conceptual and theoretic trajectories, analytic philosophy and formal logic have in many ways come close to offering a solution to these problems – especially to Hintikka’s ‘scandal of deduction’ – which parallels deconstructive theory in important ways. Paraconsistent and dialetheic logics – along with other non-classical, non-monotonic, and many-valued logics – have been suggested as the best solution for making deductive inference (premises -> argument -> conclusions) and information flow work together.
The idea is that logics that accord truth in a proportional manner – allowing value between true and false in various ways - will best match information transmission and flow which is proportional and lossy. These kinds of non-classical logics are important in this sense because they often have two critical relevant properties:
1. Nonmonotonicity
2. Polyvalence
(1) means that inference is defeasible: a conclusion does not follow from the premises with fixed, permanent certainty, and may be retracted when new premises are added. (2) means that the logic is not bivalent or two-valued, but many-valued: the law of excluded middle is excluded. This allows values in between true and false, and even some level of contradiction. It is certainly not the exact same concept as postmodern deconstruction, but the reader should be able to see that it is much closer to the postmodern conception of truth than that which is associated with bivalent classical logic and the correspondence theory of truth.
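As a minimal illustration of polyvalence (a standard strong Kleene three-valued logic, chosen by me as a simple example rather than any specific system endorsed in the literature cited here), the sketch below adds a third value U between true and false, and the law of excluded middle duly fails:

```python
from enum import Enum

class V(Enum):
    F = 0.0   # false
    U = 0.5   # undetermined: the value classical bivalent logic excludes
    T = 1.0   # true

def neg(a):
    return V(1.0 - a.value)

def conj(a, b):
    return V(min(a.value, b.value))

def disj(a, b):
    return V(max(a.value, b.value))

if __name__ == "__main__":
    # In classical logic, p OR not-p is always true (excluded middle),
    # and not(p AND not-p) is always true (non-contradiction).
    for p in V:
        lem = disj(p, neg(p))
        lnc = neg(conj(p, neg(p)))
        print(f"p = {p.name}: p OR not-p = {lem.name}, not(p AND not-p) = {lnc.name}")
```

When p is undetermined, so are both classical laws. Nothing here is postmodern deconstruction, but it is a formally respectable logic in which the true-false binary is no longer exhaustive.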
What to do? The Illusion of Consensus and Basic Source Discrimination.
In the light of these problems, if we want to respond to the problem of memetic control narratives effectively, it is likely that we need to develop new tools for doing so. We must at least identify how to better deploy our individual and group intuitions to account for the associated problems. One of the most pressing skills to learn is that of identifying when information is real, and when we can have any knowledge based upon it.
To do so we must be able to overcome what has been called ‘the illusion of consensus’ (Yousif et al., 2019). The illusion of consensus is the illusion that the set of information encodings (messages, representations, narratives, reports) one is presented with are intersupporting: that they increase the fidelity of the overall message and increase the amount of truth being received, or else increase the likelihood that true sources are at the other end of the transmissions and information flow. It is an illusion because people are generally bad at judging this. Why? Because we're bad at determining when sources and messages are really independent and heterogeneous, and when this matters.
The shortcut trick to better informational powers of discernment is this: it (source type-heterogeneity) always matters. There is a corresponding recipe – a simple tool – that anyone can apply in order to get a start in the right epistemic direction. This tool involves a measure of careful scepticism about apparent information, and requires the application of three principles:
i. Check that information sources are independent, disparate, and type-heterogeneous (of completely different kinds)
ii. Check that the set of information sources are not all fed from the same (non-primary) upstream information source
iii. The more sources that obey i. and ii., the better
Type-heterogeneity of sources is an important concept that means sources in a set of sources must be fundamentally and materially different in nature. It is important for the same reasons that forensic evidence is superior to eyewitness testimony in legal settings, and why intersupporting combinations of both are better still.
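The three principles can even be sketched mechanically. The following is a hypothetical toy model (the Source data structure and its field names are my own invention, not an established tool) that checks a set of sources for type-heterogeneity (i), for a single shared upstream origin (ii), and counts roughly how many genuinely independent sources remain (iii).

```python
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    kind: str        # e.g. "testimony", "document", "sensor", "video"
    upstream: str    # where this source ultimately got its content

def consensus_strength(sources):
    """Rough checks corresponding to principles i.-iii. above."""
    kinds = {s.kind for s in sources}
    upstreams = {s.upstream for s in sources}
    return {
        "type_heterogeneous (i)": len(kinds) > 1,
        "not_all_one_upstream (ii)": len(upstreams) > 1,
        "independent_source_count (iii)": min(len(kinds), len(upstreams)),
    }

if __name__ == "__main__":
    reports = [
        Source("witness A", "testimony", "local congregation"),
        Source("witness B", "testimony", "local congregation"),
        Source("leaked memo", "document", "government archive"),
        Source("satellite image", "sensor", "commercial imaging provider"),
    ]
    print(consensus_strength(reports))
```

The two witnesses sharing one upstream congregation count, on this rough measure, as a single independent source; the document and the sensor data add genuine heterogeneity.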
Type-heterogeneity applies in many ways. Take the example of the situation in Xinjiang, China:
The sources used to determine whether there is genocide and abuse should not all be people giving testimony – especially if those people share the same belief biases, locally biased knowledge, and cultural biases and bigotries: if, for example, all of the sources come from a group of people whose epistemic updating is based upon the edicts and claims of the same local religious institution.
Local knowledge may be important, but it needs to come from other sources too: candid security cameras, smartphone capture, government documents that are clearly authentic, and reports from those not ideologically aligned with all of the other witnesses.
Type-heterogeneity of sources is undermined by common, upstream, root, or originary, sources. This is part of how the World War II Nazi regime’s ‘Big Lie’ strategy worked, and is the basis for most fundamentalist religious-megacult grand memetic narratives (Biblical ‘scholars’ know very well that original sources for their doctrines contravene i. - iii.)
Information from sources encoded into narratives (themselves reducible to sets of interrelated sources) and messages must be properly tested. This is the only way to properly estimate the proportion of real information, versus noise and pseudo-information, encoded into them. Most people are not currently equipped to do this, but principles i. to iii. above are an easy way to win back some informational power.
References
Allo, P. (2011). The logic of “being informed” revisited and revised. Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition, 153(3), 417–434.
Barbieri, M. (2010). On the origin of language. Biosemiotics. https://doi.org/10.1007/s12304-010-9088-7
Brady, R. T. (2016). Comparing Contents with Information. In Outstanding Contributions to Logic. https://doi.org/10.1007/978-3-319-29300-4_9
Carruthers, P., Laurence, S., & Stich, S. (2007). The Innate Mind: Structure and Contents. In The Innate Mind: Structure and Contents. https://doi.org/10.1093/acprof:oso/9780195179675.001.0001
Chomsky, N. (1975). Reflections on language. Random House.
D’Agostino, M., & Floridi, L. (2009). The enduring scandal of deduction. Synthese. https://doi.org/10.1007/s11229-008-9409-4
Deacon, T. W. (2010). What is missing from theories of information? In Information and the Nature of Reality: From Physics to Metaphysics. https://doi.org/10.1017/CBO9780511778759.008
Demir, H. (2014). Taking stock: Arguments for the veridicality thesis. Logique et Analyse. https://doi.org/10.2143/LEA.226.0.3032651
Dretske, F. (1981). Knowledge and the flow of information. Blackwell.
Dretske, F. (1983). Précis of Knowledge and the Flow of Information. Behavioral and Brain Sciences, 6(1), 55–63.
Floridi, L. (2004). Outline of a Theory of Strongly Semantic Information. Minds and Machines. https://doi.org/10.1023/B:MIND.0000021684.50925.c9
Floridi, L. (2011). Semantic information and the veridicality thesis. Oxford University Press.
Fodor, J. (2005). Reply to Steven Pinker “So How Does The Mind Work?” Mind and Language. https://doi.org/10.1111/j.0268-1064.2005.00275.x
Lundgren, B. (2019). Does semantic information need to be truthful? Synthese. https://doi.org/10.1007/s11229-017-1587-5
Lundie, M. (2019). Systemic functional adaptedness and domain-general cognition: broadening the scope of evolutionary psychology. 34(1), 1–26. https://doi.org/10.1007/s10539-019-9670-6
Machery, E. (2007). Massive modularity and brain evolution. Philosophy of Science, 74(5), 825–838. https://doi.org/10.1086/525624
Pinker, S. (2005). A Reply to Jerry Fodor on How the Mind Works. Mind and Language. https://doi.org/10.1111/j.0268-1064.2005.00276.x
Rice, C. (2011). Massive modularity, content integration, and language. Philosophy of Science, 78(5), 800–812. https://doi.org/10.1086/662259
Sequoiah-Grayson, S. (2012). Giovanni Sommaruga (ed): Formal Theories of Information: From Shannon to Semantic Information Theory and General Concepts of Information. Minds and Machines. https://doi.org/10.1007/s11023-011-9250-2
Stegmann, U. (2015). Prospects for Probabilistic Theories of Natural Information. Erkenntnis, 80(4), 869–893. https://doi.org/10.1007/s10670-014-9679-9
Yousif, S. R., Aboody, R., & Keil, F. C. (2019). The Illusion of Consensus: A Failure to Distinguish Between True and False Consensus. Psychological Science. https://doi.org/10.1177/0956797619856844