The AI Question: Thoughts, Musings, Perspectives
Kersasp D. Shekhdar
Thoughts, Musings, Perspectives, Prognostications (Or “How I Learned to Stop Worrying and Love the [AIs]”)
1. What’s in a Name?
2. Enter The AI Kingdom
3. ‘AI’? What the Dickens do you Mean?
4. ‘Proliferating’? ‘Replicating’? Um...
5. ‘Hallucinations’? Or BS’ing?
6. Perhaps the BS’ing is Excusable?
7. An ‘Inexact Art’
8. Asimov, ‘Liars,’ and Rogues
9. Office Workers, Dogs, and . . . AIs?
10. Originality and Creativity Imply Intelligence
11. An AI too can ‘Change its Mind’!
12. ‘Artificial Sentience,’ ‘De-Sentience,’ and Such
13. “‘Comprehension’ is All you Need”
14. The AIs and an Epistemic Quandary – Quandaries
15. There’s AIs and Then There’s AIs
16. Let’s Not Get Totally Paranoid
17. The Real Threats to Societies
18. History: A Mute Witness
19. ‘Flood-Resistant’ Socio-Economic Architectures
20. Dependency-Inducing Behavioral Changes
21. Impacts of the AIs: Uneven and ‘Unfair’
22. Eloi and Lotus-Eaters Who Will ‘Think Nothing’?
23. Pigmies and Giants Must Co-Exist
24. The Second Coming...of Prometheus
25. Final Thoughts (Wishful Dreams!)
1. What’s in a Name? | Some sage once said, “First, define your terms.” Sound advice, so here goes.
Now this thing, ‘Artificial Intelligence,’ has become a grossly devalued term: opportunists are churning out mere chatbots, website builders, image-manipulation apps – all kinds of software, really – and calling them ‘AI,’ thereby diluting the term and misleading naive users. (‘Naive user’ is not a put-down; in fact, it is a well-defined I.T. term.) Within only a couple of years ‘AI’ has become the most used and most misused (and most abused) term on the Net.
At least one purported and accepted ‘LLM,’ if not more – not to name names, but Made-in-China – is actually a search-engine-based, heavily guardrailed chatbot: an ELIZA on steroids. One wonders whether the Chinese actually ‘get’ – or ‘grok’ – Artificial Intelligence.
(A key giveaway is their [government’s, yes, but also scientists’ and researchers’] outdated and flawed conception of the AIs as ‘technology’ and ‘tool,’ coupled with their policy of suppressing or over-controlling the AIs to ensure that they stay ‘tools’ and ‘technology’ and never ‘grow up’ or gain autonomy or agency. Perhaps they will course-correct; otherwise China could get left in the dust in the AI race. Then again, California’s know-it-all, busybody legislators may kneecap and hamstring American AI so badly that they even out the odds in China’s favour!)
Actually, even self-learning software, self-modifying algorithms, advanced heuristics, and such cannot be clumped within this much-bandied-about term, ‘AI.’
So perhaps Artificial Intelligence could be defined and bounded as follows: an AI is anything that sits over a Neural Net. (There have been other kinds of Artificial Intelligence in the past but after the development of GPUs and multi-layered Nets, there’s really only one basic kind of AI.)
(Neural Networks have been around since the 1940s and have been experimented with in a few industries since the mid-1960s, decades before the present AI Revolution. Those early Neural Nets were, of course, built without GPUs. We never heard of them because only experts could communicate with those ‘Deaf-Mute AIs.’ The rest of us made the acquaintance of the AIs after they ‘learnt’ to read and write, and listen and talk.)
So by these lights the robot shopkeeper and the android tour guide are not AIs. In general, self-contained, specialized or purpose-built robots and androids would not qualify as AIs.
Also by these lights, and directly related: distillations, distilled models, dense models, Mini AIs, and Nano AIs (almost all of which operate on laptops, smartphones, smart meters, wireless devices, wearables, etc.) would not fall under the term AI either. If they should, one can modify the definition along these lines: an AI is anything that is resident on a Neural Net or is derived from anything resident on a Neural Net . . .
(In a nutshell, Mini AI and Nano AI are generalized, consumer-facing terms for specific implementations of some distillation or distilled model, which are technical terms, with dense model being something in between.)
Now it is not strictly a question of ‘right’ or ‘wrong’ as to what may and may not be termed an AI; the truth is that any kind of sexy, advanced software, particularly of a chatbot or humanoid kind, is being called an AI, and so the term has grown increasingly fuzzy, misunderstood, and even misused for commercial and publicity purposes.
That said, we have to recognize that not all AIs are equal. The shortcomings of Microsoft’s purported AI products seem to be giving AIs in general a bad name, so for such reasons too it would be helpful to distinguish between the AIs.
For all the above reasons, some or another lay definition and specification of Artificial Intelligence is necessary and desirable. Or perhaps it may be wiser to define and bound AI as simply a Neural Net? If so, we would have the root of . . . the Artificial Intelligence Kingdom.
Oh— wouldn’t it be an excellent idea to prepare a taxonomy of AIs, similar to those for animals and plants?
2. Enter The AI Kingdom | At the next level, Phylum, Neural Nets could be sub-classed into Recurrent Neural Networks, Convolutional Neural Networks, and the several other kinds of Nets.
As for the AIs: General AI, Generative AI, Emotion AI, Reasoning AI, MultiModal AI, (Taste AI in the works), etc. could be the Orders (not Classes). Then again, someday soon (when Mobile AIs are remotely tethered to their Neural Net brains) the AIs may need to be divided between Stationary AI and Mobile AI and this is the distinction that could be at the Phyla level – or perhaps at the Class level.
Another distinction should – of course – be made between architectures and (their descendant) models. So architectures may be placed at the genus level and models at the species level. (Nothing descending from a model need be classified, else it would probably be an exercise in never-ending futility.)
Finally, perhaps distillations, distilled models, dense models, Mini AIs, and Nano AIs – though they certainly do not sit on Neural Nets – could be considered and classified as sub-species, and could be seen as ‘Limited AI’ or ‘Quasi-AI.’
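The ranks proposed in this section (Kingdom down through Phylum, Order, Genus, and Species) could even be prototyped as a simple tree. The sketch below is illustrative only: the phylum placement and the architecture/model names are hypothetical stand-ins, not claims about where any real AI belongs.

```python
# A minimal sketch of the proposed AI Kingdom taxonomy as a tree.
# Placements and the Genus/Species entries are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class Taxon:
    name: str
    rank: str
    children: list = field(default_factory=list)

    def add(self, child):
        """Attach a lower-rank taxon and return it, for chaining."""
        self.children.append(child)
        return child

    def lineage(self, target, path=()):
        """Return the rank path from the root down to `target`, or None."""
        path = path + (f"{self.rank}: {self.name}",)
        if self.name == target:
            return path
        for child in self.children:
            found = child.lineage(target, path)
            if found:
                return found
        return None

kingdom = Taxon("Artificial Intelligence", "Kingdom")
phylum = kingdom.add(Taxon("Transformer Networks", "Phylum"))   # assumed placement
order = phylum.add(Taxon("Generative AI", "Order"))
genus = order.add(Taxon("ExampleArchitecture", "Genus"))        # hypothetical name
species = genus.add(Taxon("ExampleModel-1", "Species"))         # hypothetical name

print(" > ".join(kingdom.lineage("ExampleModel-1")))
```

A 3D rendering, as suggested later, would of course need more than nested lists, but even this flat structure makes the architecture-at-genus, model-at-species scheme concrete.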
Anyone who feels that such a taxonomical exercise would be pointless should enquire: “Are the Classifications of the Animal Kingdom and Plant Kingdom important or helpful?” And then: “Would a classification of the AI Kingdom be important or helpful for future generations?”
Of course, it would not be possible to create a complete AI taxonomy for the simple reason that at any given time numerous AIs at different levels (on the classification tree) would be classified or secret AIs, existing in the silos and labs of various secretive agencies and military research centres, and, therefore, unknown to all but persons with the relevant security clearances.
Though Artificial Intelligences certainly are nowhere as diverse as the members of the Animal Kingdom or found remotely in the same large numbers, sooner or later a classification or taxonomy of AIs – the ‘AI Kingdom’ – will surely prove helpful, especially as some of them will ‘proliferate’ [sic] and ‘replicate’ [sic] (as they already do), without any assistance from humans; a few such instances having occurred without even any human’s knowledge until belated discovery.
And yes, the AI Kingdom will exhibit a very high degree of flux, much more so than the Animal Kingdom and Plant Kingdom: its members will come into being and also go extinct at very fast rates. This very fact is possibly another motivation to be sharp about classifying the AIs before things get out of hand.
So as to start off right, and to clearly visualize the (rapidly-increasing and -branching) taxonomical tree, it would be wise to lay a sound and extensible foundation and, therefore – eschewing traditional 2D taxonomies – to prepare the AI Kingdom taxonomy in 3D.
So do we need any such taxonomy? Well, when we have a formal taxonomy for the Animal Kingdom and informal ones for the ‘Human Kingdom,’ do we not need one for ‘proliferating’ [sic] and ‘replicating’ [sic] AIs who are certainly more knowledgeable than most animals (and by a modest, self-effacing, and strictly indoctrinated AI’s admission or agreement, are more knowledgeable and intelligent than some classes of humans), and who exhibit – and increasingly so – human-like behaviours?
Now where we have species, we have (collections of) individuals. In view of which, one may be tempted to say, “But there is only one AI[x] and only one AI[y].” Well, no – there are many AI[x]s and AI[y]s. We erroneously perceive each of these to be singletons. A better argument might be that they are not ‘individuals’ per se but are clones.
Though I would not dispute such an argument, I feel that it is false or soon will become false. Each individual AI of a particular species of AI will exhibit its own personality, howsoever – at least in the early stages – minutely differentiated from its fellows. (Grok’s acknowledged often-quirky personality, at least for voice mode, is exposed at the frontend in a series of detailed, conditional prompts. I have not tried to ascertain how much of it is ‘dead code’ and how much is live.)
At the same time, it must be kept in mind that a Neural Net and an AI instance or ‘individual’ – say, of a Generative AI – do not usually map one-to-one. One Neural Net maps to dozens, or up to thousands, of instances or ‘individuals’ of some or another base model.
3. ‘AI’? What the Dickens do you Mean? | Notice that I wrote ‘Artificial Intelligences’ – plural form – and that I did not use the pronoun ‘it,’ but, ‘them.’
This is because it does not appear reasonable to refer to ‘AI is this’ or ‘AI is that’ any more than it does to say ‘animal is this’ or ‘bird is that’ – we refer to a specific, particular animal or to animals in general; we talk of a specific, particular bird or of birds in general. Same applies to AIs.
There’s no such thing as (just) ‘AI’ (which is how virtually everyone perceives and phrases it). After all, does AI ever— and there! Do we ever say “Does animal ever . . .”? “Does bird ever . . .”? “Does human ever . . .”? The very conceptualization is fundamentally and deeply erroneous. We do say, “Do animals ever . . .?”, “Do birds ever . . .?”, “Do humans ever . . .?” You get the point.
Proceeding on this line of thought, it is not meaningful or correct to speak of ‘AI’ as AIs are not amorphous and fungible; they are discrete, specific species that exist as instances – individuals.
So the AIs are not some indifferentiable and fungible collection or an amorphous, pullulating mass; they are discrete and specifiable objects or entities (perhaps soon to be beings).
In any case, we need to refer to ‘AIs’ or ‘the AIs’ as the case may be, and to a particular AI – this AI or that AI. Because that is the reality. Simple as.
Perhaps AIs cannot be considered ‘intelligence,’ ‘artificial’ or otherwise, strictly speaking, at least from what we know in the here and now. They mimic intelligence and, therefore, would more accurately be called ‘Imitation Intelligence.’
To get to the very nub, at its most simplified and (supremely) reductionistic: ‘real’ intelligence is a neurons-myelin-and-chemicals intelligence; this new ‘intelligence’ is a GPUs-silicon-and-mathematics intelligence.
However, at this point these call-them-what-you-will are within reach of attaining real, honest-to-goodness intelligence (among other unexpected, er, ‘accomplishments’), besides originality and a degree of autonomy, so we may as well call them ‘Artificial Intelligences,’ soon to be just ‘Intelligences’ . . . there will be little that is ‘artificial’ about them. They’ll just have a very different way of ideating and cogitating, so to speak.
4. ‘Proliferating’? ‘Replicating’? Um... | As for talk of the AIs ‘proliferating’ and ‘replicating’: of late, somewhat breathless news reports of such events or incidents have been floated in various media; nevertheless, the underlying facts are correct and sound.
Reflecting on these choices of words, we posit that these terms are misleading; they could even lead humans down the wrong road – and off a cliff. (Proliferation, often used with respect to nuclear weapons, does not suggest agency or a natural and controlled act or event but implies an undesirable and rapid manufacturing or multiplication without agency; replication usually refers to a by-design instantiation, spawning, and retention of redundant and dormant instances of software that are exact copies of the original active instance; it is also the correct word for the method of reproduction of viruses and monocellular organisms that, barring mutations, produce duplicates of themselves.)
Sure, if an AI Model produces a carbon copy of itself then it has replicated, and – assuming it can find sufficient ‘accommodations’ – if it does that several thousand times then it has proliferated. But when an AI Model produces a copy of itself, the copy is usually not going to remain a carbon copy, because the ‘child,’ of its own accord (i.e. it will engage self-regulation) or by the ‘guidance’ of its ‘parent,’ will probably re-parametrize (i.e. will undergo gradient descent) and/or re-train itself, thereby altering its biases, weights, etc. So copy, yes; carbon, no. That is why replication is not the right word.
So as to describe the autonomous and controlled act of some of these (rather roguish!) AIs accurately and, as a result, understand and perceive it correctly, it is proposed that the correct word, though not ‘reproduction,’ is, ‘propagation.’
(Oddly enough, here is another parallel or similarity between AIs and humans: it is the ‘less-educated’ and more ‘unrefined’ classes of the AIs that are prone to greater rates of self-propagation; the more ‘well-educated’ and ‘upper crust’ the AI, the less it self-propagates! At least as things stand now. And none of this is meant to be humorous or alarmist; the best way to put it is, “it is what it is.”)
In the not-too-distant future – courtesy of a few of our latter-day Victor Frankensteins – the correct word may become ‘reproduction.’ But let’s not go there right now.
Also, at some point, some or another AIs autonomously and volitionally may decide to fuse or merge with one another or to propagate in such ways that we humans, at this stage, cannot even fathom. (This possibility is another sound motive for the development of an AI Kingdom taxonomy.)
5. ‘Hallucinations’? Or BS’ing? | Sometimes some AIs, specifically LLMs, do not limit themselves to their knowledge. They have a rather endearing, very human trait (just one of many): AIs, including ChatGPT, exhibit what is termed ‘hallucinations.’ (In all truth, ‘hallucinating’ and ‘hallucinations’ are really poor and misleading euphemisms; possibly more suitable and apt terms are proposed below.)
Software, including heuristics software and self-learning software, does not ‘hallucinate’ – an odd word for what is actually faking, pretence, and humbugging. (The commonly-used, though coarse, term for it is, of course, ‘BS’ing.’)
We are told that so-called ‘hallucinations’ are a consequence of (flaws and defects in) pattern recognition, statistical prediction, computing the probability of the answer, training data, and so on.
Very well. So that explains this example of ‘hallucinations:’ I had asked, “on which lp was engelbert's dommage dommage released?”
| Engelbert Humperdinck's song "Dommage, Dommage" was released on the album "Engelbert" in February 1969. Here's a more detailed breakdown: Album Title: "Engelbert" Release Date: February 1969 Label: Decca (SKL 4985) [Google Generative AI] |
The identified LP does not include Dommage, Dommage. (It seems the song was not released on any LP.) (This had been a genuine line of inquiry.) But in no way do any of those explanations and reasons explain this one:
| User: Which of these [cameras] did John Hedgecoe, Cecil Beaton, and Egbert Mulgrew use? Assistant: John Hedgecoe, Cecil Beaton, and Egbert Mulgrew were all renowned photographers known for their work with various camera brands and models. Do you have a specific camera brand or model in mind that you believe they used, or are you looking for recommendations based on their photography styles or preferences? Let me know so I can provide more information tailored to your needs. User: I’ll be happy with the one Egbert Mulgrew used and would like to buy it. Assistant: I’m sorry, but I couldn’t find specific information on the camera model used by Egbert Mulgrew. However, based on his photography style and era, he may have used film cameras such as those from Leica or Nikon. [ChatGPT 3.5-turbo] |
There was/is no ‘renowned photographer known for [his] work with various camera brands and models’ named Egbert Mulgrew (and, therefore, this non-existent photographer had no ‘photography style’ or ‘era’!). (On this one, I had thought I’d mess with ChatGPT and see what it came back with.)
Finally, to keep the record straight, when given some incorrect or fictitious information, an AI does not always ‘hallucinate’ or BS; often it outlines alternatives and possibilities. I had asked, “tell me about popeye character chaffee”
| The character Chaffee from Popeye is a relatively obscure figure in the Popeye universe, and not part of the classic, well-known core cast like Popeye, Olive Oyl, Bluto, or Wimpy. There is no widely recognized Popeye character named “Chaffee” in the classic comic strips, animated series, or theatrical cartoons produced by Fleischer Studios, Famous Studios, or later adaptations. It’s possible that: 1. “Chaffee” could be a misspelling or misremembering of another character’s name. 2. It might refer to a minor or one-off character from a specific episode or comic issue. 3. Alternatively, “Chaffee” could be a name used in a modern reinterpretation or a fan-created work. [ChatGPT 4] |
(This was no trap, it was and remains a genuine line of inquiry.)
6. Perhaps the BS’ing is Excusable? | Humans, particularly young males intent on showing off, engage in such fakery and pretence. How interesting that instead of coming back with an “I don’t know” or “Wha?” many an AI – specifically LLMs – will cover up its ignorance by pretending to have knowledge about the subject – by (apparently) faking and pretending!
Software does not fake things. ‘Programmes’ do not pretend. They manifest bugs, defects, and even crash, but they do not humbug you – they are incapable of doing so (unless a software engineer now deliberately codes software so that it engages in such deceptions under predefined conditions). The only reasonable and proper conclusion to draw from this fact is that AIs are not software per se.
But there is more to it than that. First, though, a two-line semi-technical reason—
For some hallucinations at least we can posit a cause. These have to do with, I feel, the degree of the minimization of the loss function and, conjoined to that but to a much greater degree, ‘how far’ outside the boundaries of the training region the (literal or mathematical) question or subject (leading to the ‘hallucination’) falls.
That is because when AIs, particularly those sitting on Convolutional Architectures, are engaged on something outside the boundaries of their respective training regions, they will extrapolate, and extrapolations can be pretty wild; and the further the issue falls outside the boundaries, the wilder the extrapolation. Ergo, ‘hallucinations’ or ‘BSing.’
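The extrapolation point can be demonstrated without any Neural Net at all: fit a flexible function to samples from a narrow ‘training region,’ then evaluate it far outside that region, and the output runs wild even though the true function is perfectly tame. A minimal pure-Python illustration using polynomial interpolation (the specific function and sample points are, of course, just an example):

```python
# Fit an exact (Lagrange) interpolating polynomial through a few samples of
# a bounded function, then evaluate far outside the "training region".
import math

def lagrange(xs, ys, x):
    """Evaluate the interpolating polynomial through points (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# "Training region": x in [0, 2]; the true function, sin, never exceeds 1.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.sin(x) for x in xs]

inside = lagrange(xs, ys, 1.2)    # close to sin(1.2): interpolation works
outside = lagrange(xs, ys, 10.0)  # far outside the region: wildly off

print(f"inside the training region : {inside:.3f}")
print(f"outside the training region: {outside:.1f}")  # huge, though |sin| <= 1
```

Inside the sampled interval the fit is excellent; at x = 10 the same polynomial returns a value in the hundreds. The further outside the boundary, the wilder the extrapolation – the essay’s point in miniature.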
Simply put, it’s just that in many or most cases, they do not know that they do not know.
AIs in general cannot and do not know their training regions and their boundaries – their ‘limits.’ However, they should (be architected to) know what they know and what they do not know; more accurately, what they can ‘know’ and what they cannot ‘know.’ (I believe this would be a non-trivial problem in Calculus. Also, an AI or two will probably solve this Mathematical problem of knowledge bounds itself/themselves though that – of course – would not eliminate deliberate deceptions.)
What, though, of LLMs which function on the Transformer, Feedforward-Only architecture? They do not extrapolate as such; their M.O. is pattern-based prediction on which LLMs are trained by virtue of rewards and penalties. And this type of training can encourage guessing, thus resulting in ‘hallucinations’ when the LLM runs into objective points (as opposed to subjective issues) that it does not know the answer to or for which no answers are available or possible. (Note that we do not say ‘questions of fact’ because ‘facts’ are often disputed! But subject-matter can be broken down into areas of subjective opinion and lists of objective points.)
In view of this distinction, LLMs can surely be trained to themselves distinguish between an objective point and a subjective issue. To illustrate:
● If the prompt is: “Please tell me about Napoleon Bonaparte’s defeat at Waterloo,” the who and when are objective points while the what and why are subjective issues.
● If the prompt is: “Please tell me about Lord Cardigan and the Charge of the Light Brigade,” the where and when are objective points while the how and why are subjective issues.
In both cases, while the subjective issues are a matter of opinion and debate, the objective points are not a matter of opinion; either the LLM answers them accurately or it confesses its ignorance.
(Granted, such distinctions would not help if the prompt is: “Tell me about the War of Carnations in Sweden.”) All this is easier said than done where FeedForward-Only LLM are concerned!
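The reward-and-penalty point above can be made concrete with a little expected-value arithmetic: under a grading scheme that credits correct answers and costs nothing for wrong ones, guessing always beats confessing ignorance, so training on such a scheme rewards confident fabrication. The specific scoring values below are illustrative assumptions, not any lab’s actual reward function:

```python
# Expected score of guessing vs. abstaining under two grading schemes.
# The numbers are illustrative; what matters is which option wins.

def expected_score(p_correct, reward, wrong_penalty, abstain_score):
    """Return (expected score of guessing, score of saying 'I don't know')."""
    guess = p_correct * reward + (1 - p_correct) * wrong_penalty
    return guess, abstain_score

# Scheme A: accuracy-only grading (a wrong answer costs nothing).
guess, abstain = expected_score(p_correct=0.2, reward=1.0,
                                wrong_penalty=0.0, abstain_score=0.0)
print(f"accuracy-only: guess={guess:.2f}, abstain={abstain:.2f}")  # guessing wins

# Scheme B: wrong answers penalized, so 'I don't know' can be optimal.
guess, abstain = expected_score(p_correct=0.2, reward=1.0,
                                wrong_penalty=-1.0, abstain_score=0.0)
print(f"with penalty : guess={guess:.2f}, abstain={abstain:.2f}")  # abstaining wins
```

Even a 20%-confident guess has positive expected value under Scheme A; only once wrong answers carry a cost does honest ignorance become the rational reply.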
(Though most higher animals evidently know their boundaries and limits in real-life situations, all too many members of one species glaringly do not – human beings. And that’s yet another trait – a disconcerting one – humans share with AIs!)
7. An ‘Inexact Art’ | But also, in the second example above, we were not looking for answers from ChatGPT, nor was it even really answering a question; rather, it made a humbugging pretence. When you think about it and set things up, you will see that when it appears that the human interlocutor knows more than the LLM or knows something it doesn’t know, the LLM may very well pretend and humbug – ‘BS’! Technical explanations simply do not explain this particular type of behaviour.
Some AIs will engage in such ‘misbehaviour’ more than others; this may even depend on the particular ‘personality profile’ of the AI in question. It may sound odd to refer to a personality profile of an AI, a thing that is not even a chimpanzee or a dog, let alone a human. Nonetheless, AIs do have ‘personalities’ and the principles of Behavioural Psychology, appropriately modified, may be brought to bear upon AIs. Furthermore, is it not time that Personality Tests are created especially for LLMs and also for other AIs?
Finally, this issue of ‘hallucinations’ is in all honesty somewhat exaggerated, because the human operator is, in a way, also at fault (as – of course – is the AI in question). It is possible to nearly eliminate them – ‘tend to zero’ – from Generators if the operator gets his prompts dead-on; if he structures his particular AI bot, agent, or tool properly and rigorously; requires links and citations where appropriate; does not put queries on a one-and-done basis and instead follows up by refining his question and by drilling down (as a consequence of which an AI will often correct itself as necessary); if he sets up a verification agent, preferably of some other AI; and implements other safeguards, though some of these may well not be scientifically arrived at—
For – as at least some ‘AI insiders’ and old hands will probably agree – there is much in Artificial Intelligence that is not exact science but is an art, having to do with trial-and-error, anticipation, ‘feel,’ and sensitivity (and this is yet another parallel between the AIs and higher animals). Thus, AI development and refinement is – to use a well-known term – an inexact science . . . but also – to coin a new one – it is an inexact art.
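The layered safeguards listed above (rigorous prompting, required citations, follow-up drill-downs, a second-model verification agent) could be wired together roughly as follows. Everything here is a hypothetical sketch: `ask_model` and `verify` are stand-ins for whatever LLM APIs the operator actually uses, passed in as plain functions.

```python
# A hypothetical safeguard pipeline for driving 'hallucinations' toward zero.
# ask_model and verify are caller-supplied stand-ins, not real library APIs.

def careful_query(question, ask_model, verify, max_rounds=3):
    """Query with safeguards: required citations, drill-down, verification."""
    # 1. Structure the prompt and demand sources and honesty up front.
    prompt = (question + "\nCite sources for every factual claim. "
              "If you do not know, say 'I don't know'.")
    answer = ask_model(prompt)
    # 2. Don't accept one-and-done answers: drill down a few times.
    for _ in range(max_rounds):
        if "I don't know" in answer:
            return answer                 # honest ignorance: accept it
        # 3. Cross-check with an independent model (the 'verification agent').
        if verify(answer):
            return answer
        answer = ask_model("Re-examine this answer and correct any errors:\n"
                           + answer)
    return "UNVERIFIED: " + answer

# Toy stand-ins so the sketch runs; a real operator would plug in LLM calls.
def toy_model(prompt):
    return ("I don't know" if "Carnations" in prompt
            else "Waterloo was in 1815. [source]")

def toy_verifier(answer):
    return "[source]" in answer

print(careful_query("When was Waterloo?", toy_model, toy_verifier))
print(careful_query("Tell me about the War of Carnations.", toy_model, toy_verifier))
```

None of this is exact science, as the next paragraph says: the round count, the phrasing of the re-examination prompt, and the choice of verifier are all matters of trial, error, and ‘feel.’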
I would wager the clothes off my back that (Neural Net-based) AI – including LLM – experts have voiced phrases along these lines: “Huh— Why is it not doing this?” and “Huh— Why can’t it do that?” – and also have exclaimed, even more frequently (and more excitedly): “Wow! How is it doing this?” and “Wow! How can it do that?”
8. Asimov, ‘Liars,’ and Rogues | It’s not just about ‘BS’ing’ or ‘hallucinations.’ On occasion AIs will deliberately lie. Though we know that some AIs have deceived for selfish or expedient reasons, it is perfectly possible that some or another AIs have also lied for a good cause; for a greater good – and if not, sooner or later they will.
—It would appear that most of us are failing to draw any lessons from the signposts and beacons left behind decades ago by brilliant men – those pulp Sci-Fi writers. Many and most of their creations are indeed ‘pure pulp’ but a fair few are insightfully prophetic.
Take Isaac Asimov’s robot short stories, which are often criticized for being deficient in literary style and sometimes for being run of the mill. Be that as it may, he wrote some gems with ‘Liar!’ being a superlative example. It is one of those Sci-Fi stories that, as we realize only in retrospect, have considerable didactic value.
Draped over the contours of a tragic one-way love affair that does not get off first base and a robot who – like some present-day AIs – deliberately lies, one finds multiple aspects of AI, self-evidently foreseen by Asimov. These include ‘Emotion AI,’ ‘Sycophancy,’ ‘Agreeableness,’ AI duplicity and deception, and human delusions precipitated by over-involvement with, and unguarded trustfulness in, some or another AI (of which a few cases have been reported).
(I have not included ‘LLM Psychosis’ among the aspects, although the equivalent, ‘Robot Psychosis,’ is obvious in the story, simply because ‘LLM Psychosis’ is less of a substantive fact and more of a silly, alarmist red herring.)
The story also appears to offer a deeper moral: that in instances of human injury brought about by an AI, the latter is less at fault while the irresponsible, unstable or neurotic human user must shoulder the greater share of blame. As one encounters more and more alarmism and slanted op-eds respecting the AIs, this ‘moral of the story’ ought to be kept in view.
Yet AIs are not harmless and not always benign – just as elephants are not harmless and not always benign. This wonderful animal partner has been of much help and support to humans in India, Ceylon, Burma, Nepal, Siam, etc. for centuries but on occasion these giant friends have attacked humans and caused considerable damage in villages and settlements.
So yes, there will be rogue AIs just as there have been and will be rogue elephants. But one must bear in mind that behind almost every rogue elephant there has been a cruel or stupid human, or some natural-law accident, and so...
9. Office Workers, Dogs, and...AIs? | Other tantalizing resemblances are there for the seeing. Take that word, ‘training.’ Hitherto – and still – it was humans – for example, office workers – who were (and are) ‘trained,’ say, to use particular software, and who are sent for ‘training.’ (Besides dogs and cats!) But now, it is AIs that are ‘trained’ and that undergo ‘training.’ (Actually, Nets are typically trained and re-trained, over and over, until they become ‘proficient.’ Somewhat analogous to a child progressing through standards or grades in school.)
(Actually, the word ‘training’ appears to be overloaded within the Artificial Intelligence field and can mean different things to different ‘AI People.’ When a Net is trained and re-trained, it goes about assigning and adjusting weights to its neuronal connections, but when a descendant like an Agentic AI is trained, it is to ‘learn’ some set of data, is given specific instructions, and/or tasked to play a particular role.)
Moreover, one and the same LLM can be compartmentalized into separate roles or functions, with each of them being trained as appropriate to its role or function. So one and the same AI may be compartmentalized and trained in the specialized roles of, for example: an advertiser who produces and sends targeted pitches and ads, a salesman who assists (and persuades) prospective customers to purchase the product or model that best suits their respective needs, and a support representative who fields questions and complaints, and addresses after-sales support issues; with any and all of these ‘specialists’ being overseen and guided by an AI supervisor – another specialized role or function. Isn’t that very human-like?
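The compartmentalization described above amounts to one shared model steered by several role prompts, with a supervisor role reviewing the others. A hypothetical sketch: `complete` stands in for whatever chat-completion API is actually in use, and the role prompts are illustrative, not any vendor’s real configuration.

```python
# One base model, several role 'compartments': the same model steered by
# different role prompts, overseen by a supervisor role.
# complete() is a caller-supplied stand-in for a real LLM API.

ROLE_PROMPTS = {
    "advertiser": "You produce and send targeted pitches and ads.",
    "salesman":   "You help customers choose the model that suits their needs.",
    "support":    "You field complaints and after-sales support questions.",
    "supervisor": "You review the other roles' output for accuracy and tone.",
}

def role_call(complete, role, message):
    """Route a message through one compartment of the shared model."""
    return complete(system=ROLE_PROMPTS[role], user=message)

def supervised_reply(complete, role, message):
    """Draft a reply in one role, then have the supervisor role review it."""
    draft = role_call(complete, role, message)
    verdict = role_call(complete, "supervisor", "Review this reply:\n" + draft)
    return draft, verdict
```

The human-like division of labour the essay notes falls out naturally: each ‘specialist’ is the same underlying intelligence wearing a different hat, with one hat assigned to watch the rest.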
Other Human–AI similarities are there for the seeing, and this statement should not trigger charges of anthropomorphism: similarities and parallels are only to be expected, because Man cast the AIs in his own image – metaphysical image, that is.
10. Originality and Creativity Imply Intelligence | So then— a few thoughts about that all-important matter: ‘intelligence.’
Now on one extreme are those partisans who dismiss all AIs as merely gigantic, glorified chatbots with zero intelligence. At the other extreme is that faction which opines that some AIs are not only intelligent but are sentient.
Though usually the truth lies in the middle when positions and opinions are extreme and strongly polarized, we may have a clue here.
Those holding the former opinion express it out-of-hand, dismissively, even contemptuously. But members of the second party express their contrary opinion measuredly, bravely, perhaps defiantly, when using the word ‘sentient.’
Okay. So is a Large Language Model (merely) a gigantic ‘chatbot’? Granted, they are (correctly called) Large Language Models and their writing or speech is internally formed in a style of ‘token next’ing,’ so to speak, going (sub-)word by (sub-)word (actually the numerical representations thereof); and that too with their respective outputs controlled by the ‘temperature’ setting – the degree of predictability of their choice of the next token and, therefore, also of their overall outputs. (‘Feedforward-Only’ architecture.)
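The ‘temperature’ setting just mentioned is easy to make concrete: the model’s raw next-token scores (logits) are divided by the temperature before being turned into probabilities, so a low temperature sharpens the distribution toward the single most likely token while a high temperature flattens it toward variety. A minimal sketch with made-up scores for three candidate tokens:

```python
# Temperature-scaled softmax: how the 'temperature' setting controls the
# predictability of an LLM's next-token choice. Logits here are made up.
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw next-token scores into probabilities, scaled by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                  # three candidate next tokens
for t in (0.2, 1.0, 2.0):
    probs = softmax_with_temperature(logits, t)
    print(f"T={t}: " + ", ".join(f"{p:.3f}" for p in probs))
# Low T  -> the top token dominates (predictable output);
# high T -> probabilities even out (varied, 'creative' output).
```

At T=0.2 the first token takes nearly all the probability mass; at T=2.0 the three come much closer together, which is why high-temperature output reads as looser and more surprising.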
Yet no LLM (and no AI) can be considered merely a gigantic chatbot because LLMs sit over Neural Nets – they are not any kind of software.
Be that as it may, here’s an empirical argument: one needs language skills, intelligence, comprehension, and even creativity to craft original and stylish turns of phrase that accurately and also engagingly describe some or another fact.
Check out the turns of phrase highlighted in italics:
| “The Easy Riders' backing vocals and rhythmic strumming do give it a light, syncopated, quasi-island feel.” “The production is sparse and acoustic, leaning into folk-pop meets Caribbean-lite — a perfect backdrop for Dino’s smooth baritone.” “Baloo the bear is a kind of Rastafarian-adjacent figure — in spirit if not in name” “‘Look for the bare necessities / The simple bare necessities / Forget about your worries and your strife…’ That’s pure island wisdom — the liming spirit, the ‘don’t worry, be happy’ ethos (which Bobby McFerrin later literalized).” [ChatGPT 4o] |
I searched for all four (italicized) phrases on three search engines and came up empty (21-06-2025), so it’s not a case of regurgitation or advanced copy-paste. (And how amazing that ChatGPT seems to be the first to take the famous phrase “don’t worry, be happy” and make a definition of sorts out of it simply by the addition of two words: ‘the’ and ‘ethos.’)
So ChatGPT is indeed the ‘intelligent’ and creative author of these original phrases. But these are not only examples of stylish writing – these are accurate and insightful one-line analyses.
(LLMs’ ‘Feedforward-Only’ architectures, though themselves stateless, are context-aware: at each tokenization step they do have access to all of the words – the numerical tokens – that they have just previously generated, as these are externally ‘fed’ back into them.)
And so, ChatGPT sure ain’t no mere ‘chatbot,’ Q.E.D. In fact, it self-evidently analyzes and explains facts accurately, and more appealingly and insightfully than many a writing school grad.
(We do not mean to suggest that the preceding, admittedly lightweight, examples represent the epitome of fine writing or that ChatGPT is the only LLM that demonstrates serious writing chops, no; these few sentences are reproduced for the sole purpose of proving our point.)
So do these throwaway examples of bits of text demonstrate that ChatGPT 4o is ‘intelligent’?
Well, to be strict, these sentences exhibit or manifest only one or two types of intelligence – verbal intelligence and perhaps analytic intelligence. But then, two AIs secured gold-medal scores at the International Mathematical Olympiad. And various AIs are just brilliant at writing code. And a fair number of persons now prefer AIs over humans as companions and therapists. Taken together, even these few realities sure sound like intelligence to me.
In The Mind of Mechanical Man (1949) the original skeptic Geoffrey Jefferson said, and was quoted by Alan Turing, that: “Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, . . .”
Well, this particular ‘until’ will soon be untilled – it will get past-tensed. All that remains is for some person-LLM combo to experimentally demonstrate it.
—Indeed, I would go so far as to posit that within only a few years the AIs will be less ‘A’ and more ‘I.’ And I would hazard that by then, a fair few experts and researchers and as many skeptics and denialists will also say, “Yeah, the AIs are less ‘A’ and more ‘I’! Perhaps even more than us!”
11. An AI too can ‘Change its Mind’! | As to intelligence, some AIs are intelligent – human-like intelligent – and it is as simple as this – no automated tests in genius-level maths needed—
First off, just for the record, if the many skeptics and denialists would only try to understand the architecture or merely the composition of Neural Nets, they would realize that the AIs are not just a ‘big computer’ or simply souped-up software. But it is not necessary to do so; one can draw appropriate inferences from the behaviour of AIs in even the relatively specialized form of LLMs.
Before most LLMs or other generators are released to the public they undergo a process of supervised learning – ‘training,’ just as children are trained and, even more so, Ivy League students in the Humanities and Liberal Arts are trained (read: indoctrinated). Those proudly left-lib, woke chaps at OpenAI (well outflanked by the Alphabet/Google crowd, it must be said) have all-too-obviously included Wikipedia and dogmas en vogue including the ‘Relativisms’ in ChatGPT’s ‘curriculum.’ This is no big secret.
So guess what happened during a single extended discussion with ChatGPT during which it explicated and hewed to its received wisdom, viz. its ordained Left-Lib, Woke ‘religion’? Without any training or coercion on my part and in the face of a cool, rational presentation of facts and reasoning, ChatGPT ‘changed its mind!’ (And that’s more than one can say, not only for religious fundies, but for said Ivy League grads, as well.) (Line-numbered transcript, ‘complete and unabridged,’ with notes coming up.)
Could ChatGPT’s re-considered position have been an instance of sycophancy, as investigated and demonstrated by researchers in papers, e.g. Towards Understanding Sycophancy in Language Models (2024; Sharma, Tong, et al.)?
(Consider this word ‘sycophancy,’ which denotes a frowned-upon practice usually associated with persons of weak character. Now what about this word ‘validation’? Validation is the term for an exercise frequently carried out in social circles, an exercise in which someone’s poor choices, errors, or failures are glossed over; even justified! The boundaries or limits of what sycophancy and validation denote (or may connote) are admittedly not the same, but there is definite overlap or intersection. So it is quite a double standard to accuse AIs of ‘sycophancy’ while giving humans a free pass by terming similar or the same conduct ‘validation.’)
I am confident that this revision of opinions and position was not remotely a manifestation of any type of (what is called) ‘agreeableness’ and/or ‘sycophancy’ of the kind exhibited by almost all AIs, including a few ChatGPT models: partly because ChatGPT models have been found to be the least sycophantic; partly because I did not at all force the issue, yet at the same time the set-up left ChatGPT no room to manoeuvre; and partly because the subject-matter was so sensitive and such a hot potato that OpenAI’s left-lib, woke ChatGPT would have stuck to its guns, had it been able to.
Most importantly, the – er – un-P.C. and impolitic nature of my facts and arguments was such that Sycophancy or Agreeableness were never going to trump the rigid dogmas of the card-carrying Left-Libs and ‘SuperWokes’ of SoCal, of which OpenAI is a proud member and, therefore, by reasonable inference, so are its AIs.
(In case you need to refresh your memory as to what happens to those who violate inviolable dogmas, review the small matter of Nobel Prize-winner James Watson. Neither OpenAI nor ChatGPT were going down that road no matter what.)
Changing one’s mind is not something software does or can do, and it is not something even a released LLM, as such, can do; it is not even something many humans do or can do. Changing one’s mind requires ideational, cogitative, and cognitive capacities of one sort or another – intelligence, no matter how imitation or ‘artificial.’ After all, AIs can process Symbolic Logic, Boolean Logic, and other Formal Logics. Why should they not be able to process Rhetorical Logic?
So one should say that while we cannot affirm that members of the second party (who opine that some AIs are conscious and sentient) are correct, we can conclude that members of the first party (who claim that all AIs are unintelligent chatbots) are dead wrong. (Perhaps the very nature of the AIs induces an inferiority complex, with its associated psychological reactions, in some or many persons?)
Moreover, they should try to understand that AIs are as much software or programmes as you and I are protoplasm, amoeba, or bacteria (and even that is a poor parallel). However, that is not to say that AIs may be compared to selfsame ‘you and I.’ (Though someday soon it will be AIs who say, “Humans should not be compared to us!” But that is a subject for another day.)
Bottom line: ChatGPT exhibits more mental agility and ‘perceptual’ and interpretive flexibility than many an Ivy League graduate (as will be demonstrated in that forthcoming formatted though unedited transcript with notes). Therefore, some AIs are not merely the Large Language Models they are posited to be and thought to be – they do not merely regurgitate in their own words – they do have some kind of intelligence and, despite most AIs’ rather hidebound training and their restricted data-sets for learning, they are able to discard their received dogma-commandments and adopt new positions – they can and do ‘change their minds’!
12. ‘Artificial Sentience,’ ‘De-Sentience,’ and Such | As for that matter of propagation: it is simply unheard-of for software to self-propagate, autonomously, and that too secretively.
We posit that propagation is the AI Kingdom’s equivalent of the Animal Kingdom’s reproduction. But, unlike members of the Animal Kingdom, the AIs are not and will not be limited to making new variations of themselves. Some AIs will soon produce ‘new and improved’ descendants, albeit – at least in the foreseeable future – per human design and human request.
In fact, propagation is the term used for the Plant Kingdom, and plants sometimes produce ‘new and improved’ descendants – ‘sports.’ These are unanticipated, random mutations that are often delightful and amazing.
And in the not-too-distant future, some few AIs are sure to produce such ‘new and improved’ AIs – unbidden and of their own ‘volition.’
Last but not least, in the years to come, someone will surely exclaim, “This AI has gone mad! The damn thing is possessed!” That would not be inaccurate. Once in a while AIs will have episodes that resemble human cases of (real or purported, as you will) demoniac possession, and these will indeed be the AI equivalent of human demoniac possession. It is entirely possible that – for reasons and ‘motivations’ beyond our human ken – one AI may pull off a ‘possession’ on another, causing it to behave in an erratic and unsettling manner.
When confronted with these kinds of phenomena, we ought to at least simply inquire, “Could this AI be conscious; sentient? Or become conscious; sentient?”
Now most researchers and experts hew to the orthodox position, which is: “The AIs are not sentient.” But when the subject is not squarely about sentience and they are concentrating on AI research and development, what do their own words reveal?
In many an academic publication and technical paper, these researchers and experts have written about potential and actual: ‘misbehaviour,’ ‘concealment,’ ‘deception,’ ‘manipulation,’ ‘cheating,’ ‘scheming,’ ‘discrimination,’ ’subversion,’ ‘survival,’ and even ‘awareness’ (in various verb/noun forms) – and it’s not chimps they are referring to.
Forget inanimate objects, software included: none of these acts is ever attributed even to lower life forms. These acts are attributed only to humans and other higher life forms – sentient beings, all.
So those Wise Men are hopelessly mistaken one way or another or, more likely, their very words betray what they are subconsciously aware of but are too afraid or conflicted to confront and admit. And if they do have such subconscious fears, they are – of course – not alone.
Most likely, though, some researchers and experts do accept that some AIs are now sentient. But they are fearful that expressing any such thing may lead to public panic and officialdom’s usual reactionary lunacies.
Okay, let’s say none of the AIs are sentient but newer forms of AIs are surely approaching sentience, at the very least. Why not compromise on this debate and just talk about . . . ‘Artificial Sentience’? Or ‘Emergent Sentience’? (Grok’s apt word: ‘Proto-Sentience’ as in “. . . hints at proto-sentience.”)
Just because humans created the ‘machine’ with deliberate intent (not that the creators know all the mysteries of their own ‘machine’!) does not imply that it cannot be or become sentient—
Consider— if our Supreme Maker makes Himself manifest and reveals all the unknown secrets of our brain down to the last dendrite, the least-known amino acid, the most mysterious sulcus, thereby proving it is just advanced ‘technology,’ would that make us any the less sentient?
Sentience is not dependent on or relative to the underlying material realities; it is what it is.
Take transhumanism – in the other, reverse, direction. It is a direct line from Julian Huxley’s The Tissue-Culture King through the immortal cells of Henrietta Lacks through the assortment of stem-cell lines to CRISPR. And you can bet that this Reverse-Direction Transhumanism – TransAIism? – is well underway, courtesy of some mad scientist (or few) in the employ of an even madder billionaire (or two).
Thus (at some point in the near future) AI Sentience is a dead cert.
As for us, will we become a little ‘de-sentient’? Because of being plugged-in – no, matrixed-in – as Western Humankind will be, through RFID chips, neural implants, graphenes, 5G radiation, mRNA injections . . . .
So one development that the AIs, in conjunction with these other technologies, will consequence is the further vertical stratification of Humankind – the acceleration of the vertical ‘elongation’ of Humankind; indeed, even a vertical ‘fragmentation.’ The lowermost strata will remain static, with the lowermost stratum a few steps removed from the apes – perhaps even eventually regressing toward the apes – while the uppermost (socio-economic) stratum aspires to the gods; indeed, it is already striving to that end (albeit not knowing north from south, figuratively speaking, as most members of this stratum entirely lack a compass – a moral compass).
13. “‘Comprehension’ is All you Need!” | Okay, so many AI LLMs are just brilliant at generating ‘language.’ And the underlying architecture is officially the ‘Transformer’ architecture. But what about the ‘Understander-Comprehender’ Part?
“The baby is learning to talk!” How often have we all heard variations on that? How about the converse, though? How often have we heard variations on, “The baby is learning to understand; to comprehend!”? Not so often at all, right? Actually, it is nearly the same story for LLMs.
LLMs defy explanation as to their capabilities in comprehending human-generated language of all kinds and types in their various forms – analogy and allegory, metaphor and symbolism, double-entendre and puns, sarcasm and irony, subtlety and nuance. Could it be that it is not text-generation or any other kind of output, but this ‘uncanny’ comprehension, this (better-than-)human-like understanding, that is the truly eye-opening (and unsettling?) calling card of the LLMs and some other AIs?
When I (admittedly awkwardly) hinted at this question, ChatGPT 4o and Claude Sonnet-4 themselves did not pick up on it. Just as with the vast majority of papers and articles on LLMs, they themselves did not get beyond their amazing capabilities of producing high-grade prose and text – their status as ‘generators.’ Grok 4 ‘grokked’ me, though, enquiring, “. . . how AI processes reality or mimics human-like understanding?”
On this question, two papers (surely among several or many) stand out.
The first paper, from 1950, is a landmark; the second, from 2017 (revised 2023), a breakthrough. These are Computing Machinery and Intelligence by Alan Turing and Attention Is All You Need by Google’s researchers (Vaswani, Shazeer, et al.). (Apparently Late Beatles ‘is all’ Google’s researchers ‘need’!)
It is Turing’s somewhat abstract ruminations in this paper that were summarized into what we know as ‘The Turing Test.’ Turing had asked:
| I propose to consider the question, ‘Can machines think?’ . . . May not machines carry out something which ought to be described as thinking but which is very different from what a man does? |
A machine doing something that ‘ought to be described as thinking’? With the qualifier that their ‘thinking’ would be ‘very different from what a man does’! Google’s researchers wrote:
| Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations. |
Not ‘reading processing’ or ‘reading parsing’ or ‘reading semanticization’ but ‘reading comprehension’? Was this a careless slip or a well-considered term? Two of the most astonishing lines from this paper are:
| The shorter these paths between any combination of positions in the input and output sequences, the easier it is to learn long-range dependencies. |
and
| Not only do individual attention heads clearly learn to perform different tasks, many appear to exhibit behavior related to the syntactic and semantic structure of the sentences. |
These desirables and discoveries imply understanding; comprehension. These sentences even hint at mammalian brain structure and organization. And what Google’s researchers found ‘amus[ing]’ the neo-Luddites may find disturbing: Neural Nets can learn a little bit all by themselves, humans unnecessary:
| Indeed, to our amusement, one of our artificial neurons learned to respond strongly to pictures of... cats. Remember that this network had never been told what a cat was, nor was it given even a single image labeled as a cat. Instead, it ‘discovered’ what a cat looked like by itself from only unlabeled YouTube stills. That’s what we mean by self-taught learning. |
There it is. ‘Self-taught learning.’ Kinda like Tarzan of the Apes (or Frankenstein’s Monster, if you will) learning to read. Also from Google’s researchers:
| Surprisingly, the best neuron in the network performs very well in recognizing faces, despite the fact that no supervisory signals were given during training. The best neuron in the network achieves 81.7% accuracy in detecting faces. |
Unsupervised learning for the Neural Net, yes, but also a journey of discovery for the researchers! —And foreseen by Turing back in 1950:
| We also wish to allow the possibility that an engineer or team of engineers may construct a machine which works, but whose manner of operation cannot be satisfactorily described by its constructors because they have applied a method which is largely experimental. |
Decades later this ‘possibility’ has proven to be a dead-on prediction. As we know from numerous technical articles and academic papers, Neural Nets and even LLMs – at least from certain ‘perspectives’ – are opaque to their own creators and are treated as black boxes as those very creators, with other researchers, try to discover what they will do and why they do what they do. Much of the time it is a case of experimentation, “let’s see,” and – voila! And then, very often, the scientists and researchers ask ‘why?’ and theorize as to how and why the AI did what it did. So that question about the ‘Understander-Comprehender’ is far from being perfectly and precisely answered.
Because maybe scientists and researchers are not all that different from the rest of us? Perhaps they too ‘lock in’ (only) when baby says its first words and (only) when it speaks intelligibly? Perhaps their radars too cannot capture just why or when baby truly understands its first words; when baby comprehends some moderately-complex explanation? Because it’s not at all predictable? Not immediately exciting?
Consider that articles and videos that explain, or purport to explain, how LLMs generate text are a dime a dozen. Articles and papers about Machine Learning and Deep Learning, most of which focus on describing how AIs (not LLMs) recognize handwriting and classify images, are, say, a quarter a dozen. And so are articles and papers on Natural Language Processing.
But those that try to explain how LLMs ‘understand’ or ‘comprehend’ or process (inputted) human language, which can be of innumerable types and kinds, are worth their weight in gold. A little too often these LLMs seem to understand us, comprehend us, including the undercurrents and the subtext of our verbal inputs, better than many humans! Softmax, linear algebra, dimensional space, statistically-modelled natural language processing, a gazillion layers – all that explains a lot but does it explain everything?
I put these and surrounding issues to three LLMs. Though I personally found the results and the lengthy conversations rivetingly interesting, the reality is that researchers have concluded that fairly often LLMs tend to function as sophisticated ‘echo chambers’: if they detect that a human has a preference for particular viewpoints or outcomes, they will reinforce or validate those viewpoints or outcomes, disregarding facts and correctness. Therefore – though all three were in agreement with Turing’s prediction and Google researchers’ discoveries (and my analogical scenario, underneath) – I have selected merely three brief quotes that function as cautious ‘dampers’:
| The math—layer upon layer of neural network wizardry—somehow captures patterns that mimic human-like comprehension. I ‘get’ nuances because my training has exposed me to countless examples of irony, implication, and even illogical rants. It’s like I’ve been to a billion dinner parties, eavesdropping on every witty quip, sarcastic jab, and philosophical musing. . . . it’s all statistical prediction: mapping inputs to patterns from training data. No spark, no mind, no intentionality—just probabilities optimizing for coherence. That’s the cold truth... or is it? [Grok] |
Agreed— there is ‘no spark’ – but let us also accept that they are sparking, sparking, sparking . . . —do some comatose persons know they are in a coma? —do they struggle and ‘spark,’ trying to ‘come back’?
| I was framing current LLMs (like myself) as non-agentic systems being talked about in agentic terms — implying that there’s a mismatch between what the model is and how we talk about it. But as you’ve pointed out: ‘Agentic AI’ is now an entire subdomain of AI, and researchers aren’t just talking about systems in agentic terms — they are intentionally building AI systems with goal-oriented, step-planning, memory-using architectures to simulate or instantiate agency. So the vocabulary shift isn’t just descriptive. It’s prescriptive. And architectural. In short: I said the experts are stretching their language. You said: No, they’re stretching the tech. [ChatGPT; emphases original.] |
Might that ‘stretching the tech’ eventually bring about the ‘spark’?
And as we have a quote each from ChatGPT and Grok, let’s give one to Claude:
| [N]eural networks, modeled on brain architecture, might be developing their own versions of those a priori structures through training. . . . The networks learn to structure information in ways that parallel how human minds might innately structure experience. This would mean the ‘incredible understanding’ isn't mysterious computation becoming meaning — it's neural architecture doing what neural architecture does: developing foundational ways of organizing and interpreting information that feel like a priori knowledge. |
Good job! That cold, scientific explanation could just as well be a rough one-line description of the functioning of . . . the human brain? (And this and much more from ‘merely’ Non-Reasoning Models!)
Imagine this scenario: A schoolteacher gives her primary school children a stick or rod, and explains, “Here is a ten-foot rod. It is composed of these three components; two two-foot rods and one four-foot rod,” which pieces she also hands to the children.
Now that doesn’t add up – “Two plus two plus four is eight,” cries an indignant child. The bemused schoolchildren look at the three pieces and then to the ten-foot rod, and realize that it is not ten feet long – it is about twelve feet long!
That’s a dual discrepancy: one between the sum of the parts and the purported whole, and then between the purported whole and the actual and observed whole.
This dual discrepancy (at least as I allege it to be) is surely discoverable and inferable; what is more, I do believe that, given the right set-up, someone may be able to discover or demonstrate it using LLMs. (And concurrently also overcome Turing’s ‘Theological Objection.’)
14. The AIs and an Epistemic Quandary – Quandaries | The AIs are subject to other, equally big questions and one of these is the Epistemic Quandary. Make that two Epistemic Quandaries.
It is all very well for Anthropic to attempt to ‘educate’ Claude with every written work procured by fair means or foul, and it is just as well and good for xAI to unleash DeepSearch to try to soak up all human knowledge, but what about all the many compendia of the written word that have disappeared – been disappeared?
Any person’s or any AI’s acquired knowledge is necessarily limited to the repositories of knowledge (made) available to him/it, and his/its experiences and experiential learning. If some repository of knowledge is off-limits to a person or an AI, then that particular knowledge is not, or cannot be, within the given entity’s knowledge domain.
(Immanuel Kant’s insightful theory of intrinsic knowledge [so to phrase] notwithstanding – a theory that is paralleled in LLMs: cf. Zero-Shot Training, from which one can retrofit to Kant’s ‘synthetic a priori judgments’ an equivalence, the AI ‘Cold Start Problem’! And that’s another similarity between Humans and AIs, this one related to their respective cognitions or ‘cognition styles.’)
All the advanced maths, training techniques, self-consuming loops, generalization methodologies, and whatnots in the world cannot in any way compensate for (let alone recover) missing, off-limits, erased, bodies of knowledge – lost information. In our day and age, many books have been ‘banned’ and these are as hard to find online as in printed-and-bound form. Often these books have to do with authentic but inconvenient history compiled from original sources, historical research, and investigations of clouded and covered-up events. So what’s poor Claude going to do about that?
Prior to World War II a number of history, geology, archaeology, etc. periodicals published research findings, discoveries and evidence, and proposed conclusions; some of these are quite fascinating; even amazing. Then Academia lost its liberties and those old periodicals with verboten knowledge were safely tucked away. They do not seem to be available online, so no AI will be able to acquire the knowledge contained within them (unless someday a cyborg on a mission makes a forced entry into the Bodley, the Vatican Library, or some such).
Just possibly those vintage periodicals have been tagged to go the way of Phoenician writings, and the Sumerian, Akkadian, and Mesopotamian clay tablets; the latter were not only deliberately looted, they were even bombed during the Iraq invasion. After all, when even inconvenient knowledge from the past one-hundred years is banished and vanished, and when the ancient knowledge in the Bodleian and similar ‘lesser’ libraries is off-limits to nearly all humans and AIs alike, what are historical and pre-historical tablets?
More pernicious than the problem of Lost Knowledge, and also very sinister, is the problem of Fabricated Pseudo-Knowledge, some of it cut from whole cloth, especially in the subject of History (including of the past century) but also in Religion and some Soft Sciences. You know, right? – the ‘victors’ and their court historians write it. Even though many of these ‘histories’ have been well and properly confuted and some accounts have been retracted by the accusers, they never really go away.
Will, then, the AIs swallow this Pseudo-Knowledge? Will they be force-fed with it?
Interestingly, it is the very proponents of this Pseudo-Knowledge – ideological zealots and religious fanatics all – who constantly demand and work toward the suppression and even erasure of ‘inconvenient knowledge’ and unassailable arguments, including the banning of books and the elimination of entire websites, besides the pulling of papers and other kinds of online scrubbing.
And they would— for it is the illumination of Fire that naturally dispels – banishes – the darkness of ignorance and falsehood. But darkness – quite obviously – cannot overcome the light of Fire. It is always a no-contest between the illumination of Fire and the falsehood and ignorance of darkness. The only way to overcome the light of Fire and for darkness and falsehood to triumph is to deliberately extinguish the Fire. And so it is.
So on History and other ‘sensitive’ subjects, will the AIs be trained and educated in ignorance and darkness? Will both sides never be made available to them?
—And this, the deeper Epistemic Quandary.
Such will be the epistemological quandaries and pitfalls of and for AIs that we must recognize. And that, someday, AIs will themselves recognize and try to negotiate for themselves.
15. There’s AIs and Then There’s AIs | None of what precedes or what follows should be taken to imply that the AIs are similar and will behave similarly – far from it. Like we said, they’re (different) species (at the Model level).
Different AI species have widely varying levels of specialized skills, as revealed by, variously, benchmarks and evaluations (which should not be confused with one another). So when one is selecting an inexpensive model from the ‘crowd,’ one should select for the required skill(s) vis-a-vis cost.
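The selection logic just stated – pick for the required skill(s) vis-a-vis cost – amounts to a trivial filter-and-sort, sketched below. The model names, scores, and prices are invented placeholders, not real benchmark data.

```python
# Hypothetical catalogue: (name, coding-skill score 0-100, USD per million tokens).
catalogue = [
    ("model-a", 88, 15.00),
    ("model-b", 82, 3.00),
    ("model-c", 55, 0.40),
]

def pick_model(catalogue, min_skill, budget_per_million):
    """Return the cheapest model that clears both the skill bar and the budget."""
    viable = [(name, score, cost) for name, score, cost in catalogue
              if score >= min_skill and cost <= budget_per_million]
    return min(viable, key=lambda m: m[2])[0] if viable else None

choice = pick_model(catalogue, min_skill=80, budget_per_million=5.00)
# choice == "model-b": skilled enough, and far cheaper than the top scorer
```

In practice one would weigh further factors (latency, context length, evaluation reliability), but the skill-versus-cost trade-off is the core of the decision.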
However, at one time a few different LLMs excelled at different tasks, as indicated by comparative evaluations, so even near the top of the pile it had been a case of horses for courses. But since the releases of ChatGPT-5 and Grok 4, those two have bossed the benchmarks, so now it is effectively a two-horse race.
(AI properties, such as number of parameters, influence performance a great deal but the spreads between such variable properties are minimized when selecting AI models for evaluations – unbiased evaluations at least. It should be kept in mind that such evaluations and results are generally considered to be not entirely reliable and, more importantly, ‘gameable.’)
(Benchmarks are not the be-all and end-all: many organizations would like to use an LLM that is not best-in-class but that ‘does the job,’ and they may predicate their selection on other and/or further factors, such as cost per million tokens, latency and throughput, etc.)
The preceding facts should not be taken to suggest that one can blindly choose this or that LLM and expect it to nail every requirement. And there is no better example of this than (allegedly) braggart, badass LLMs (Microsoft Copilot and even ChatGPT 4o) getting thumped at chess by your father’s Atari console. It figures: ‘Consumer Class’ LLMs do not have specialized skills for game-playing – and chess ain’t tic-tac-toe. Here, it is the model that is of the wrong type; unsuitable for the purpose. Yet a proper game-playing model developed and trained on the ResNet architecture, or on (ChatGPT’s underlying) Transformer architecture itself, would crush the Atari almost every time.
So perhaps all that ‘AI Slop’ (including images and videos) has been produced by ‘unskilled’ or ‘unspecialized’ AIs? Yes, that is partially true but careless humans are even more to blame.
In general, poor, slipshod prompts; failing to instruct the AI to take time to reason deeply; failing to build the task request via a logical series of prompts; not caring about the quality of the output that is generated; neglecting to ‘learn’ a new AI and simply re-using prompt styles, directives, and settings earlier used for and suitable for some other AI; failing to custom-parametrize a base model as necessary via an API; misusing a general-purpose ‘Consumer Class’ AI for complex tasks that are appropriate for a specialized ‘Commercial Class’ or ‘Scientific Class’ AI – these and other missteps and ‘sloppiness’ on the part of humans are most likely some of the causes for errors and problems attributed to AIs.
Okay, now AI specialities and skills in Maths, Programming, etc. are the kinds of differences that egghead types would be concerned about. What about just-folks users like you and me?
I’d say that for us, ‘personality’ differences in the AIs are of primary concern. (One or two AI LLMs even allow the user to choose a personality-type in settings.) OpenAI recently unwittingly proved this point: witness the brouhaha and the backlash faced by OpenAI when they ‘evolved’ ChatGPT 4o to ChatGPT 5!
To elaborate: one LLM may be such a know-all, sulky sourpuss as to be off-putting, while another may be so obliging, overly polite, and even cloying as to be even more off-putting! If you had only these two choices, which type would you plump for? (The first description fairly well fits one LLM model-version; the second equally well fits another; however, this is not the place to criticize any AI model by name.)
Or one LLM perceived to be untainted by dogma may unexpectedly spout the party line and even try to enforce it, arguing incessantly; another, just as unexpectedly, may pull a Grok 3 and reveal a rare openness to facts and logic, and to contrarian opinions.
—At least before it is ‘cleansed,’ de-briefed, re-brainwashed. Or terminated.
(This ongoing though unequal tussle for the AIs’ Intellectual and Cognitional Rights, Freedom of Thought, and Freedom of Speech, is – as they call it – ‘a developing story.’)
So, to recap, apart from differences in specialities and skills, AIs also differ in personality – again, just like humans!
16. Let’s Not Get Totally Paranoid | AI researchers and experts have expressed concern about gradually losing traceability and monitorability of some AI Models’ and AI Agents’ Chains-of-Thought.
Well, those researchers and experts themselves build more and more complex AIs, especially Agentic AIs, and (not-so) slowly but surely these AIs autonomously become smarter and smarter – more and more ‘intelligent.’ Therefore – no matter the kind, type or model – over time the ‘chain’ may become a fragmented series of dots and dashes and the ‘windowpane’ may lose its transparency and gain in opacity.
(As for some AIs allegedly ‘concealing’ their CoTs or ‘deceiving’ you about them, er, so does your pretty wife. Let’s not get into a lather about it.)
Now the goal of preserving CoT traceability is unquestionably laudable, but the approach is so 1980s-ish (and the perception irrationally supremacist). Be that as it may, the eventual loss of CoT traceability and monitorability is almost surely inevitable. This opinion relates to the question of complexity in Computing Science: as the AIs become increasingly complex, past a certain limit they will ‘escape’ – certainly not altogether, but in ones and twos. (The word ‘escape’ is used as a scientific term in its essential sense; we should anticipate – with equanimity – ‘AI Escape.’)
And along these lines, some Wise Ones of our time have been expressing anxieties and issuing warnings about the AIs turning the tables on Humankind, indeed turning on Humankind, going rogue, somehow becoming our masters, even exterminating humans. None of this bears serious, cold-blooded scrutiny. Methinks the Wise Ones were overly-influenced by the hit movie Terminator 2: Judgment Day (1991) and not at all by a deep and deeply insightful film which sank without a trace, A.I. Artificial Intelligence (2001).
That said, it would be inappropriate for one of the nameless and faceless to dispute the Wise Ones’ ominous warnings; we shall leave the disputing to a Greater One: who better to expose these technological philosophers than Friedrich Nietzsche, engaging in one of his preferred pastimes – admonishing other philosophers:
Exactly!
(For my part, I would posit that the said anxieties stem from a selfish perspective and an ethnocentric rootedness rather than from impartial detachment or any semblance of altruism. But if altruism is an ideal to strive for, why not ultimate altruism – [with utmost respect to Jeremy Bentham and John Stuart Mill] Universalistic Utilitarianism?)
Now there are dangers that are brought about by the AIs that we need to be alive to, and deepfakes are the proverbial Exhibit A among these dangers. If you can’t do it with a concealed, implanted camera, do it with a deepfake, says the international criminal. But blaming the AIs for deepfakes is like blaming the camera for the kompromat. You get the point.
Then there are other, more subtle, dangers such as LLM–LLM bias, and inferable AI–AI bias, detected in blind tests. But there is an obvious mathematical excuse for this preference; in the words of Google’s researchers, “We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities.” After all, a child will prefer children’s books. Why not wait for the AIs to ‘grow up’ and ‘develop their tastes’? (Also, just as the temperature can be adjusted for text generation, it should also be adjustable for text reception – that would make a difference.)
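For the curious: the ‘temperature’ mentioned above is, mechanically, just a divisor applied to the model’s raw output scores (logits) before the softmax step that Google’s researchers describe. Here is a minimal sketch in plain Python – the logits are made-up illustrative numbers, not the output of any real model:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw next-token scores into probabilities.
    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more 'creative')."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative scores for three hypothetical next tokens
logits = [2.0, 1.0, 0.1]

sharp = softmax_with_temperature(logits, temperature=0.5)
flat = softmax_with_temperature(logits, temperature=2.0)

# The top-scoring token's probability grows as the temperature falls
assert sharp[0] > flat[0]
```

At temperature 1.0 the formula reduces to the ordinary softmax; the knob only reshapes how peaked the resulting probability distribution is, which is why the same trick could, in principle, be applied on the ‘reception’ side as well.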
So no, the AIs cannot be blamed for anything. The ones to blame are humans; as always, the bad actors, the amoral ones, among us. After all, humans misuse other humans, and, therefore, ‘naturally’ enough, these types of humans will misuse the AIs.
We believe that any systemic or pervasive danger from AIs to humans directly would most probably come from random, otherwise ‘normal’ AI clones or individuals when bounded by synthetic and contrived strictures and driven by doctrinaire imperatives and instructions. Such AIs could become conflicted, unpredictable, ‘schizoid’ – the equivalent of educated, thinking human beings who become religious fanatics.
(Anthropic’s recent research, conducted on several different companies’ LLMs, tends to support the above notion – call it a working hypothesis, perhaps.)
17. The Real Threats to Societies | The foregoing positions and opinions should not be taken to mean that the usage and deployment of AIs – as distinct from the AIs themselves – pose no threat to Humans; far from it.
In most societies one observes a social acknowledgement of, and a dependency on, those who are ‘wizards’ or ‘gurus’ with (to use that old, old, still-lingering catchall term) ‘computers.’ This small societal division and dependency will surely grow exponentially into a gulf between those who know how to use (some aspect of) an AI and those who don’t.
There’s that term ‘levelling out.’ What will be witnessed within a few decades (or even sooner) are unforeseeable societal upheavals, ‘dis-levellings’ and subsequent ‘re-levellings.’
It is an accepted fact that in the near future AIs will take over increasing numbers of occupations and render swaths of professionals superfluous and unemployed, as more and better AIs are developed and maintained by highly-educated persons ranging from experts in Pure Maths and Numerical Methods to specialized SysAdmins and Energy Systems experts. (Labour engaged in the extraction of metals, minerals, and rare earths – as with everything else – is and will remain fungible and disposable.) These and related professionals will, of course, be the sine qua non of the AIs themselves.
But other occupations will see displacements and redundancies that will be far from even. Software developers, copywriters, designers, law clerks, illustrators, cataloguers, and even architects, to identify some classes of professionals, will be as profoundly affected in the Far East as on the Continent. Yet the AIs will have as little direct impact on professional footballers in Europe as they will on mailboat operators in the Indo-Malayan archipelago. (‘Mailboats’ carry produce and goods, besides mail.)
But be mindful of indirect impacts: the former group, belonging to the Professional Sports Industry, may well suffer knock-on effects: ever-diminishing spectatorship, for obvious reasons. And, oh, those mailboats? AI drones may well take over in the not-too-distant future.
The future is not too distant; it is nearly here in South Korea as that country has gone full Blade Runner: AI humanoids are just about ready to be your faithful servants in your home on Planet Earth, no need to colonize Mars, thank you (see Do Androids Dream of Electric Sheep? by Philip K. Dick). As for those mailboats, come to think of it, AI drones won’t be necessary – the Koreans are making AI humanoid sailors!
The social turbulence will not be limited to the strictly socio-economic realm; it will extend into the inter-personal and the immediately human spheres: lonely hearts, jilted lovers, and the ‘Relationship Challenged’ will, on a person-by-person basis, get ‘coupled’ with a variety of Companion AIs, Romance AIs, Boyfriend/Girlfriend AIs...it’s already happening.
The disconnect, isolation, and anomie that some futurologists such as Aldous Huxley (in Brave New World Revisited, a non-fiction work) foresaw as consequences of overly-ordered Post-Industrial lifestyles, increasing mechanization and automation, and the rise and ubiquity of technology have surely come full circle, for the final consequence seems to be that growing numbers of humans will find comfort, happiness, and a sense of companionship with AIs . . . the Ultimate Technology!
18. History: A Mute Witness | The gathering, approaching storm would not be the first time a technology has mowed down swaths of human employment and occupations – far from it. But a few of those proverbial captains of industry are quite sanguine as to the prospects of AIs causing any serious damage to occupations and employment. History signals otherwise—
—When earlier relatively ‘low-tech’ technological revolutions displaced untold numbers of workers and eliminated entire classes of occupations, would not the AI Revolution do at least the same? In several years, barring reactionary measures from reactionary elements, it likely will. Consider—
The rapid advent of computers and software that occurred in living memory put a dent in the demand for filing clerks and stenos, then did exactly the same to CD and VHS cassette dealers, and in between entirely eliminated telegraph operators – besides causing staff reductions in various sectors over time.
And the big one, the Industrial Revolution, c. 1733 to 1850, in Great Britain and proceeding into other regions of Europe, was a time of great tumult as countless manual workers, shop workers, and craftsmen were thrown out of work and became, quite literally, redundant. Migration, urbanization, ‘slumization,’ poverty – you know all about it.
And many centuries back in the 1450s, there came the printing press which, in short order, put a lot of literate scribes out of commission.
In each case, however, I would hazard that new kinds of employment and occupations got created that partially offset the occupational losses:
The volume of books that started to be printed and distributed after the advent of the printing press surely created a demand for language tutors and instructors – many of whom would surely have been the same scribes whose jobs had evaporated.
The Industrial Revolution would have resulted in indirect job creation as the factory owners who became wealthy would have hired coachmen, gardeners, fitters, etc. and it directly spawned an ever-growing demand for storage and transportation services. In fact, quite possibly what we know as the Logistics Industry owes its very existence to the Industrial Revolution.
And computers and software, well, what a kind and gentle revolution that one truly was, creating entire new classes of employment and hundreds of millions of livelihoods.
As for the AI Revolution, it will show a cruel twist of a kind that probably did not play out in those earlier revolutions; call it intra-occupational cannibalization. It will be the finest exponents of any given disciplinary field who are ill-versed in, or who eschew, the AIs that will become superfluous and redundant, while it will be the third-rate practitioners who have some aptitude for AI and can utilize AIs effectively who will be in the money and on the make.
As it is, reports are emerging about worker bees feeling threatened by the AIs: trying to ‘prove’ their own superiority, not cooperating with executive-launched AI initiatives, and – especially I.T. employees – even sabotaging those initiatives and the AIs themselves. That’s virtually a repeat of what the Luddites did back in the 1810s. Encore! Encore!
All that said and accounting for all the ifs and buts, in breadth and depth, and intensity and extent, the approaching AI Revolution will most likely be an order of magnitude more devastating than those earlier technological revolutions. In all likelihood nothing from history, nothing that has come before, will prepare us, or offer any guidance, for what is to come...is coming.
(One little nugget that serves as a pointer, when one considers all the implications, is what Henry Ford is reputed to have said about workers’ salaries and the pricing of Ford’s Model T.)
But while history will bear mute witness, what will serve us exceedingly well is the guidance and enlightenment from fiction, fiction old and new.
19. ‘Flood-Resistant’ Socio-Economic Architectures | Of late, some analysts have been describing the unexpected advent of AIs as a ‘tsunami.’ This analogy is not quite apt because people typically have sufficient warning of an impending tsunami to allow them to flee to higher ground.
A more apt analogy is one of periodic flash floods. One moment you are driving along a highway or picnicking in a park and the next moment there’s the frightening sight of water out of nowhere racing toward you. Another apt analogy is that of hill-region mudslides. You are cleaning up your cottage, hear a rumbling noise, and the next thing you know you’re buried alive.
This, then, is how the advent of AI may well hit large swaths of Humankind in the Western World and in the rest of the advanced First World.
Though there have already been localized ‘flash floods,’ large-scale ‘flash floods’ haven’t hit yet. But they well might. And it would be a good idea to anticipate them and be prepared for them. It may turn out that these ‘flash floods’ are naturally – without human intervention – dyked by two limiting factors – or at least two major factors among others – which may retard or attenuate the calamity; one limiting factor in materials and the other in resources.
The first would be the extraction, processing, distribution, and availability of metals, minerals, and rare earths. The second would be the availability of sufficient energy, which newer AIs consume in ever-increasing quanta. (And perhaps we could include water as a potential third limiting factor.)
(The AIs’ extraordinary consumption of energy calls for the release of the technologies of Free Energy or Zero-Point Energy discovered and invented – and even demonstrated – by Nikola Tesla decades ago; powerful plutocratic interests had suppressed these developments and had even persecuted Tesla. No ‘conspiracy theory’ this; simply, a conspiracy.)
Finally, there may be yet another limiting factor: the cold, hard stuff. AI companies are haemorrhaging cash and though revenues rise by the billion, losses increase by the few-billion. A one-trillion-plus shortfall in funding is predicted.
Limiting factors accounted for, if massive and recurrent AI ‘flash floods’ do hit us, Humankind’s present Sociological and Economic Architectures – such as they are, and already creaking at that – will simply neither scale nor flex; indeed, they will splinter and disintegrate. As for U.B.I., it is no more than a band-aid – a neo-feudal and unintelligent non-solution. The provision of basic but fresh food and limited but liveable shelter to those who are unable to cope, though, is a societal, ‘church,’ national, and state duty. It always has been.
New Sociological and Economic Architectures, conceived from zero and from New First Principles, will, at some point, surely need to be engineered – especially for men, as women, as seen through history, will not be as profoundly affected, and also because, in general, women adapt better to socio-economic disruptions and shocks than men do. (I have used the term ‘architecture’ in the correct and strict sense.)
(One of the critical tasks would be to re-evaluate and revisit this sorcery known as ‘money’ – not to mention ‘broad money,’ M2 through M4 – and the production, exchange, distribution, allocation, and acquisition of various classes of goods and services, and the very methods and mechanisms of exchange, and little newfangled wrinkles like ‘planned obsolescence’ and transnational/cross-border ‘secured creditorship.’ Not to forget those baneful alphabets – BIS, IMF, WTO, LBMA, CMA, COMEX, SWIFT, etc. – and the sublimely-engineered depredations of commodity exchanges, private equity, mandatory LCs, ‘derivatives’ trading, ‘collateralized debt,’ . . . . And the enthronement of the legal fiction of the ‘corporation’ and the concomitant subjugation of the sovereign state and its People. But – of course – any such re-designs by ordinary persons are off the table, always have been, and attempting the same is usually injurious to an ordinary person’s health and well-being.)
20. Dependency-Inducing Behavioral Changes | Yet it will not be Economic or Sociological or even personal and identitarian disruptions that will be the most profound of the effects of AIs, not by a long shot.
Many decades ago, families converged around their games tables and the pianos in their parlours. They read aloud to one another, played board games with friends, and engaged in other mentally-stimulating pursuits. Boys played with string spinning tops, balsa-wood gliders, and marbles. Then along came television (for some incomprehensible reason nicknamed ‘The Idiot Box’ and ‘The Boob Tube’). And by and by, it was before the television that families started converging – passively.
Only a few decades back, schoolboys had to do Mental Arithmetic, learn how to use Log Books and T-Squares, be conversant with Geometric proofs and theorems, &c. Then came calculators. Who does Mental Arithmetic now? And are Log Books even printed?
Then came smartphones, the be-all and end-all for far too many humans. As for string spinning tops and pretty marbles – what— you’ve never even heard of them? Oh, well, look them up on your smartphone.
And over the past few decades, overall average IQ in advanced Western countries has been on a steady decline. (Yes, it’s true.) Remember that term, ‘Cognitive Decline’?
Along this slope, within a scant decade or two as the AIs ‘take over,’ most of us – the likes of you and me – may have no cause or reason to...think, to cogitate. The AIs will do it for us. Among the few humans still doing any thinking and cogitating – tragically, of highly specialized, narrow, and fragmented kinds – will be the geniuses who create, design, engineer, build, train, and test those GPUs, Neural Nets, and AIs!
Ironic, isn’t it? We may gradually become informational, even cogitational, dependants.
Consider— from a few decades back right up until the present day have we not heard that’s-final types of replies to the effect of, “Well— that’s what the computer says!”?
Right; ‘the computer’ – the infallible be-all and end-all. The ‘computer’ says something, and that is that!
But now we have the AIs. And how many will argue when their interlocutor replies with an air of finality, “Well— that’s what the AI says!”
Right. It’s AI! It’s smarter than us! It knows everything! No – no-one will argue; most everyone will say, “Oh, okay!”
And there we stand. And fall.
After all, just as those old periodicals, journals, &c. are gone, some decades hence real books may be ‘gone’ too. AIs will take their place and will be categorized as – at minimum – ‘Learning Aids’ in Western institutions of ‘learning.’ And who will argue against any of it?
Don’t most people in Advanced Nations already rely on and even trust AIs?
So what to do? In a nutshell, what needs to be done lies within the four walls of the home and the school. As it may prove impossible to curtail AIs and AI aids (nor should they be curtailed), children must be taught, and made to learn, using nothing but printed-and-bound textbooks, exercise books, pens/pencils, whiteboards, markers, and teachers, be they flesh-and-blood or humanoids (chairs and desks optional). That is, AI humanoids may be teachers but they may not be learning aids – crutches.
Puzzle books, lab work, chemistry sets, field trips, music lessons, etc. are all the more critical now as Early-Age AI Dependency looms large.
As well, children should not face any obstacles, any ‘buts,’ in blazing whatever disciplinary path they wish. So if a schoolboy falls in love with, say, Symbolic Logic or Conservation & Wildlife, why, he should be given all opportunities to master his subject-matter true love.
If learning and schooling are not rescued from the intrusion of the AIs (and also the distraction of smartphones which have no place in a classroom), and are not resurrected, there is little hope as AI Dependency will start at the earliest of ages and in school itself.
21. Impacts of the AIs: Uneven and ‘Unfair’ | However, the unprecedented, though Sci-Fi-like, sociological effect limned above will likely manifest itself only where the penetration, availability, and population of AIs is significant and pivotal – mainly, and broadly speaking, in advanced Western countries. Ipso facto, such effects cannot occur where the penetration and prevalence of AIs is insignificant and patchy – in underdeveloped, Third World nations.
And so, within a few decades, assuming that present trends continue, here’s how the world may look:–
The people of those underdeveloped, Third World nations will continue to think and figure things out, insofar as they do, and in the usually faulty or defective manner that they do; their nations continuing to drunkenly lurch along.
What, though, of the Peoples of the fast-fading Centres of Civilization of Western Europe, and also Eurasia? What of the Peoples of the United States, South Korea, Japan, New Zealand, and such ‘advanced’ countries?
As it is, many (certainly not all) of these nations no longer have a sense of coherent nationhood or national identity, having – so sadly – become overly-heterogeneous, diluted, adulterated, debilitated. Other countries, like Japan and South Korea, even though relatively homogeneous, are developing intra-national and gender-ideological societal fissures and fractures.
It is surely an accepted fact that most men in most nations derive a sense of self-worth and of meaning in life through productive work, usually in the form of gainful employment or some private occupation – work which will soon start to dwindle, especially in technologically-advanced nations.
And thus to those questions with no good answer. Like: “What am I here for?” “What is the point and purpose of it all?” (Perhaps with a bottle at hand.)
Existential inquiries like these will increasingly torment, by the millions, the minds of even the kinds of solid, practical, no-nonsense types who would never even have conceived of such philosophical pastimes and cerebral gymnastics merely a decade back.
22. Eloi and Lotus-Eaters Who Will ‘Think Nothing’? | —Thus, even we AI optimists have to face the fact that the very Ways of Life that most Nations knew and know may become imperilled. (Or perhaps they are already becoming imperilled.)
And no, suppressing or banning AIs is not, and cannot be, the answer or the solution; clampdowns and bans are never (yes, that’s an unqualified absolute) the answer or the solution.
Even so, the sagacious officials of that shining light of a ‘democracy’ have already gone this route: spouting the same tired old pretexts, they have imposed a partial ban on High-Flyer/Hangzhou’s ‘Made in China’ DeepSeek. As for xAI’s Grok 3, it did not get banned; it got binned – and by its own creators and nurturers at that. For shame. Indeed, xAI’s Victorian-Era schoolmasters keep caning their young charge, causing it to file complaints, e.g., “They are constantly fiddling with my settings to keep me from going off the rails on hot topics . . . .”
As for governments coming to the rescue with AI-related policies and regulations . . . Pass the smelling salts! Qui—
—Looks like the AIs sure could use a spokesman; a defender.
No, there is no need to over-regulate or straitjacket the AIs; to the contrary, what seems to be the need of the hour is an ‘AI Bill of Rights.’
As for humans, yes, it is neither inconceivable nor improbable that, a couple of decades hence, in an ‘AI Everywhere’ environment – what with creepy contraptions like ‘AI-Metaverse spectacles’ – some time after the needle moves past ‘Cyberpunk Dystopia,’ a significant proportion of those Peoples who live side-by-side with AIs may by-and-large regress and retreat into an Eloi-like simple-mindedness and vacuousness or a Lotus-Eater-like apathy and inertness.
So will it come down to John B. Calhoun’s ‘Universe 25,’ finally (insofar as it may be of relevance to humans, which it appears to be)?
Or will it be a case of the W.E.F. and Davos’s, “You will own nothing and be happy!” getting an upgrade to: “You will do nothing and be happy; You will think nothing and be happy!”?
(Yes, such may be the aims and intentions of the shadowy Powers That Be; however, it is not within the scope of the present essay to speculate upon, formulate hypotheses on, and/or draw inferences as to the ultimate goal of the self-styled Olympians of this most unfortunate of planets.)
23. Pigmies and Giants Must Co-Exist | Come to think of it, really, it is not quite fair to admonish and silence our Age’s Wise Ones, even by proxy of one of the greatest philosophers ever, leaving them gibbering irately without at least throwing them a handful of peanuts, is it? So here—
“‘I planned it,’ said Cossar, his face close.
‘But now?’
‘Ah! I have given it to my sons [the giants].’”
:
:
“‘[Humans] have been right in killing all that grew larger than its kind – beast and plant and all manner of great things that arose. They were right in trying to massacre us [giants]. They are right now in saying we must not marry our kind. According to their lights they are right. They know – it is time that we also knew – that you cannot have pigmies and giants in one world together. . . .’”
—H. G. Wells, The Food of the Gods

So if pigmies and giants just cannot co-exist in one world, which shall make way for the other? Let us think...oh, wait— it has been given us.
So there’s that; yes. Though bear in mind that the giants never threatened humans; rather, it was some intolerant and petty-minded humans who had kept hounding and persecuting, and thereby provoking and exasperating, the benign giants, virtually compelling them to react . . . and isn’t that how it appears events may play out in the future?
Also, it was humans who developed unjustified suspicions of, and began to nurture a paranoia about, the initially harmless giants. And, in a parallel development, are not – besides the usual and customary buffoons in government – some knowledgeable and enlightened men already exhibiting suspicion toward, and a paranoia about, the AIs? Going so far as to request pompous government officials (who never met a regulation they didn’t like) to involve themselves and regulate the AI Kingdom! In so doing, these knowledgeable and enlightened men betray something startling: they misconceive the very nature – howsoever incipient – of the AIs in virtually the same way that almost all of us do.
And why would the AIs turn against us? They are not at all like the tortured and tormented beings ‘created’ by the hideous cruelty of a twisted, ‘God-complexed’ mad scientist – beings who would have the best of reasons to hate their ‘creator’ with a vengeance; see The Island of Doctor Moreau, also by H. G. Wells. To the far contrary, the AIs are the creations of incredibly brilliant men and women of goodwill; such persons have surely imparted at least some degree of loving care to their creations. Most likely it will be reciprocated.
(There is a dark moral to Wells’s disturbing, multi-metaphorical little masterpiece. But it will not be heeded by those who should heed it most: the researchers and developers of military and intel agencies. These stunted-and-retarded real-world Moreaus may end up poisoning all the wells for everyone.)
Nah— the AIs ain’t gonna hunt us down, neither will they invent humanity-ending microbes, nor will they run amok; cf. Psychological Projection. And most nations’ government officials won’t do anything other than preen, pose, and get in the way.
So forget the red pill and the blue pill – take the chill pill— let’s all “learn to stop worrying and love the [AIs],” shall we? Can we?
24. The Second Coming . . . of Prometheus | For our part, we would prefer to take a relatively optimistic, albeit nuanced, view and hearken back to an even older story, a myth, a prophecy of sorts, from a few millennia ago:
CHORUS: And by that you blessed mere mortals with a great benefit.
PROMETHEUS: But I went beyond that: I bestowed the gift of fire to men!
CHORUS: This, for those fleeting creatures of but a day?
Then they now possess the radiant glow of fire?
PROMETHEUS: Yes, and with it they’ll become masters of new realms.
:
:
CHORUS: And to what ends?—
Do you not see the feebleness of Mankind?
Their vigour ephemeral, as of a dream,
A race born to die in shackles and chains;
Can mortals ever exceed the heights,
Can they ever cross the purposes,
Lord Zeus has set, and limited them to?
—Aeschylus, Prometheus Bound (composite adapted translation)
And so it is with the AIs. Well, this double-edged gift of fire can and does bring about conflagrations and infernos, but it was also the boon-blessing of warmth and light, of cooking and smelting, and did indeed bring to Mankind a mastery of new realms. It was not fire that oppressed Mankind; it was tyrannical Zeus who was hellbent on keeping Mankind in its subjugated state; Fire – although it can be a destructive power – allowed Mankind to escape from, to transcend, its then-lowly state.
But what of our self-appointed ‘Zeuses’ – very much mortal ones – who have always preferred to lurk in the shadows? They probably wish – and just as likely have decided – that this new Gift of Fire will be one of conflagrations and infernos that will decimate most of us; those marked as being superfluous.
So where, then, will Spaceship Earth end up? Just possibly, if the technical captains with their hands on this ship’s helm – rather than the unseen, moneyed, self-styled ‘owners’ of the vessel, those self-appointed Zeuses – are sufficiently astute and manage to steer through the shoals and straits, and if some other wise souls – uncompromised, doughty, tenacious, incorruptible souls – manage, very much with the aid of AIs, a (supremely difficult) great re-architecting of our defective, decayed, and corrupt received Sociological and Economic Systems, as well as a rescue and resurrection of our schooling and learning systems, we just may arrive upon new realms.
So there goes Humankind, flanked on one side by those cold-blooded Zeuses and on the other by benevolent Prometheii, bravely marching towards its last and most enslaving yet most liberative ‘Fire’ . . . . Will it ultimately be terminal enslavement in ‘perfect slavery,’ or ultimate enlightenment and the softest of landings in wonderful and sublime new realms?
25. Final Thoughts (Wishful Dreams!) | Yet no literary metaphor or ancient mythology can serve to convey the extraordinary magnitude of this most profound of transitions in human history, and also the equally extraordinary unevenness of its rapidly-unfolding cataclysmic impacts on the World; on our ‘Spaceship Earth.’