(If you'd like to comment on the problem outlined below, I would only ask that you take the following lines at face value, even though skepticism and reason might require more substantial input. If that bothers you and you are still interested, please consider this a speculative or hypothetical question.)
-----
More than a decade ago, and purely by happenstance, I chanced upon a curious insight. Back then, I had already relinquished my personal, albeit entirely naïve, endeavor to comprehend both myself and the world. I had come to realize that unless I could adequately demonstrate the workings of my own reasoning, I might never fathom how we, as a society in particular or a species in general, collectively construct meaning.
In the following years, I developed this "insight" into my "own" reasoning further, arriving at a mathematical framework able to represent the ways in which we, individually as well as collectively, (cognitively) construct and (semantically) shape meaning. Although it remains somewhat unfinished (see below), it also seems to provide an understanding of how and why this system probably evolved and developed, regardless of any specific language.
At that juncture, my knowledge of the history of epistemology, the philosophy of language, the science of linguistics and the historical aspects of any of these was limited, verging on non-existent. Despite my intense curiosity about the subjects I happened to study, philosophy and linguistics were never central to my academic pursuits, save for some courses on Derrida and Wittgenstein, along with some very basic courses on political philosophy.
As I refined this idea to a point where it transcended mere philosophy of reason/language and evolved into a broader framework for the evolution of information, I grew increasingly wary. Since then, I have been constantly on edge about whether this knowledge should ever become common knowledge.
Don't get me wrong. I fully acknowledge the curiosity, effort and dedication that generations of philosophers, linguists, logicians and scientists in general have invested, and are still investing, in comprehending the fundamentals of cognitive/linguistic meaning and reason, let alone in devising a mathematical system to describe the language used to construct and convey either concept. I recognize that their desire to explore these matters is equal to my own.
Nonetheless, I am wary, as I am both awed by and cautious of the power of computation/information. I have always been a fervent enthusiast of science fiction and an ardent fan of Asimov, Follett and Gallagher. Maybe because of these influences, I found myself already growing uneasy around the turn of the 2010s and increasingly uncomfortable as the 2020s arrived.
Ten years ago, I already deemed such knowledge, without constraints, to be dubious, if not perilous. The race towards AI was already well underway, and the development of neural networks had progressed significantly. While I do not pretend to comprehend a fraction of the intricate programming required for current Large Language Models, I suspect that whereas existing models operate primarily with language, an artificial reasoning machine based on this insight could become part of any language. Without ethical limitations on how such a model would participate in the further development of any language, we run the risk of losing a fundamental aspect of what unites us, and any semblance of control over what divides us.
Leibniz famously dreamed of a calculus ratiocinator - however you may spell it. He never had to envision a world beyond speculation. Breaking meaning up into mathematical abstractions may not be as explosively violent as splitting the atom. But whilst we have thus far survived the nuclear age, are we prepared for a paradigm shift that might shatter individually and socially held belief systems into mere - if unconscious - calculations?
As far as my comprehension of the - less than straightforward - processes that led to this insight allows me to judge, I surmise (without passing judgement) that scientific and philosophical studies are presently moving away from the possibility of someone else serendipitously stumbling upon this insight, let alone of any program or computer designed within current paradigms being able to resolve the fundamental problems surrounding it, at least in the foreseeable future. As far as analogies go, the prevalent paradigms of linguistics and the philosophy of language/reason are comparable to the theory of evolution prior to the discoveries of Friedrich Miescher.
So the question - somewhat obfuscated in the paragraphs above - remains thus: Do you think that this level of individual and social/species awareness could have dangerous consequences, or might even be outright dangerous in its own right? And, depending on your stance on that particular question, how would you proceed, considering that the paradigmatic chasm between the insight outlined above and every current theory of language, linguistics and the philosophy of either is as deep as the roots of modern philosophical inquiry?
(Disclaimer: This text has been partially rewritten using an LLM. But I am neither machine nor AI. Seriously, I promise. That is... I am certain.)