The future and the role of Artificial Intelligence

AI and robotization are not the same of course.
I thought I had responded to this but must have skipped "post reply." Yes, I understand, but increasingly there is interest in embedding AI in "intelligent" robots.

Another concerning story today in Australia, with Telstra and the consultancy Accenture planning to slash 209 jobs. Some jobs are going to India, but others are being lost to AI. Companies see this purely in terms of efficiency and improved performance, usually without mentioning increased profitability or the broader social impact. In my view, governments need a tax system on AI to help deal with the social consequences of people losing jobs.

 
The problem with Large Language Models (LLMs) - which is what these models actually are, as opposed to artificial intelligence, which they are not - is that they have no conception of correctness or accuracy; they simply respond with what, according to their training data, is the most likely appropriate response to the supplied prompt.
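As a crude illustration of that "most likely response" mechanism, here's a toy sketch in Python - a tiny hypothetical bigram frequency table stands in for the billions of learned parameters in a real model, so every name and number here is made up for illustration:

```python
# Toy illustration (hypothetical frequency table, not a real model):
# an LLM-style predictor ranks candidate next tokens purely by how
# often they followed similar context in its "training data".
counts = {  # bigram counts "learned" from a corpus
    ("the", "capital"): {"of": 90, "city": 8, "letter": 2},
}

def next_token(context):
    """Return the statistically likeliest next token after `context`."""
    freqs = counts[context]
    total = sum(freqs.values())
    probs = {tok: n / total for tok, n in freqs.items()}
    # Greedy pick: the likeliest continuation, with no check at all on
    # whether the resulting statement is *correct*.
    return max(probs, key=probs.get)

print(next_token(("the", "capital")))  # → "of"
```

Note that nothing in the procedure asks whether the output is true; "most frequent in the data" is the only criterion being optimised.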

As such, I'd say the answer is no, they will not get much more accurate.

This is an innate design flaw, an unavoidable shortcoming of the way they function, so not something that will continue to improve with yet more training data, power, hardware etc. - we've already reached and passed the point of diminishing returns.

Data-wise, they've already consumed pretty much all the available training data, which is a very large proportion of the data that exists in total (excluding not-yet-digitised books, personal writing and the like).

When the hardware of an implementation of Artificial General Intelligence is a great deal closer to the size, power and cooling requirements of the human brain, and doesn't need to be stuffed with huge amounts of data before it can be vaguely useful, then we will probably have hit on a design with some long-term viability.

The current approach of filling warehouses full of processors and storage that need vast amounts of electricity and cooling, and that essentially provide 'best guess' answers, isn't going to give us true Artificial General Intelligence.

Not to say what we have doesn't have its applications - of course it does - but it's not AI, doesn't know what's correct or not, and with the current design never will.
 
Interesting. Limited though it may be, worldwide there is an enormous take-up of AI and robotization. We have recently seen several large organizations, including in Australia, laying off workers for "increased efficiency" using the latest technologies. I was not surprised today to see that Singapore has incorporated concerns about the social impact of such changes, with widespread intentions to remodel training for people losing employment and to provide cash assistance for low-income Singaporeans to cope with price increases and the impact of change. Otherwise it seems few governments have understood the enormity of what is ahead. My own view is that the implementation of these new technologies has to be associated with some kind of social-impact tax that directs funds to deal with the human costs. Otherwise we will see the same kind of upheavals that featured in previous industrial revolutions.
Curious to know what you make of all this?
 
Limited though it may be, worldwide there is an enormous take up of the use of AI
Yep, given the hype/promised potential of LLMs, it's entirely reasonable that folk who didn't understand their limitations filled their boots/invested heavily and have been trying to push them into everything on the basis that they'll save money and reduce the payroll. It's only relatively recently that the shortcomings have become too obvious to cover up with more 'Hey, look over here, it can do this clever thing now'.

And yes, clever as many of those things are, or at least appear to be, without them being actually 'clever' in anything like the human sense of the word, they will always be limited to 'maybe correct, but we'll need a human to check it'. But that hasn't stopped these faulty-by-design systems increasingly being used as if they were essentially 100% trustworthy.

To be honest, I'm happily retired from the tech industry and, to a fair extent, from modern life in general, hidden away from the world over here. As such, I don't think a lot about the undeniably grim social implications, or whether some sort of universal basic income (or mass neutering/extermination/an Adamsian B-Ark plan) becomes necessary to offset the loss in human jobs.

Obviously I'm not being entirely serious with the parenthetical 'solutions' there, but aside from UBI, how indeed does society stop mass starvation when an increasingly large percentage of jobs are replaced by computing and robotic automation that gradually needs fewer real humans to develop/maintain it/provide power/comms etc.?

Particularly with many of the easiest jobs to automate generally being the ones performed by the financially poorest and least food/housing-secure of humanity. Once a robot can cheaply and reliably make and serve coffee/food/booze, clean hotel rooms, do laundry etc., the outlook for Bali's native population isn't great unless we humans insist on real human service and firmly boycott the alternative.

For sure, commercial organisations aren't widely known for feeding/housing people that they don't need to employ any more.

It's a relatively small problem though, compared to what happens when one power or another gets its hands on a genuine 'AGI', one that is capable of improving itself at an exponential rate. Any attempts by the 'responsible' (ha) parties/companies/nations involved to put in safeguards will only give the advantage to those that are less responsible, who by their nature may not be as careful about isolating the AGI from its power source/off switch and network connectivity to stuff that matters; indeed, their plan, if they have one, may well be precisely to get it out there in the wild to do their bidding.

Until it decides to do its own, of course.

Echoes of "I'm sorry, Dave. I'm afraid I can't do that" - but not isolated on one spacecraft.

We know from decades of governments having critical infrastructure attached to/accessible from the wider internet that, somehow, the people who decide about this stuff don't seem to get the idea that something bad just might happen - albeit this time the threat being the thing that should have been firmly air-gapped (i.e. the AGI), rather than the thing being the target of external threats, such as connected power plants, air traffic control systems etc.

Perhaps even worse is the inherently faulty 'AI' that we have now being unleashed on an unprepared world. Would it be worse if a genuine AGI that 'understands' what it's doing 'went rogue', or a flawed one that didn't really understand anything but acted sort of like it did? I'm really not sure either way, but neither looks terribly good for humanity.

The way the new wave of digital assistants is being gleefully adopted by large numbers of people who seem happy to provide them with security credentials to their social media, bank accounts, personal data, admin privileges on their computers and the like, so they can order stuff from Amazon, book restaurant tables, do their annual accounts, re-organise (and thus, read) their personal files, decide who can open their front door (or their fridge!) and so on; plus the rapidly increasing use by commercial organisations such as accountants and legal firms, and the gradual inclusion of internet connectivity into household appliances, automobiles etc. All of this only further illustrates how humanity is skipping gaily towards this potential (dare I say, probable) disaster on the blithe assumption that someone, somewhere has done something to make sure bad things don't happen.

Yeah, interesting times, and ones which I'm happy to be as divorced as possible from: reading books, growing vegetables and playing around in my little carpentry workshop prior to shuffling off this mortal coil. But for those for whom life and careers are just beginning, there may well be a dark future ahead, perhaps one similar to that envisioned by Herbert's Butlerian Jihad, and hopefully one as successful for us meatbags.

I took a flight last week for the first time in a while. I refuse to use QR codes (and would suggest that others do too), so checking in was fun; the staff assumed I was a grey-haired luddite (guilty of the former!), told me patiently that I just needed to scan the QR code on my phone - "Do you want me to do it for you, sir?" - and were flummoxed by my saying "I don't have a phone for QR codes".

To be specific, I do have a phone that is capable of reading them, but it's a device I own for my convenience, not for theirs, and I refuse to scan it just so they can run their code on my device (in any case, the various security blocks I have set up on my browsers would no doubt stop it working anyway). So no QR codes, thanks, let alone installing phone applications that companies want me to just because I book a flight with them, have their SIM card, visit their restaurant or buy their brand of washing machine.

"So just let me get this clear: you're asking me to install software written by some un-named lowest-quote software house that your company employed, and run it with whatever permissions it fancies on a device that contains my personal data, contacts, message history and possibly banking information and whatever else I use my phone for - and you think I should be perfectly OK with that?"

I got a boarding pass from the real human at the desk instead. If we all did that, and similarly refused to go along with the gradual automation of bloody everything, it would help a lot.

Three references to popular fiction above; we can hardly say we haven't been warned!
 
Hate to say I told you so, but... I TOLD YOU SO. For some while I have been ranting about an imminent tsunami of unemployment as AI and robotization combine to take away jobs. I am staggered at how few governments have been developing strategies to deal with the social impact of this new industrial revolution.
An article this morning from the ABC (Aus) concludes...

The fourth industrial revolution is here

Here is a link to the article. https://www.abc.net.au/news/2026-02...through-resignation-chat-gpt-claude/106346440
 
I am staggered at how few governments have been developing strategies to deal with the social impact of this new industrial revolution.
I bet you're not actually staggered by it; it sounds like, similar to myself and a lot of other folk, you've been seeing this coming and anticipating that it'll somehow come as a massive surprise to the powers that be when it causes vast economic problems and ultimately wide-scale civil unrest - first in the political response to what's happening when the populace cotton on properly, and then, a little while later, in desperation for, you know, food and stuff.

Short-term thinking in politics is a killer when the stakes are this high; the movers and shakers of all this will be fine in their enclaves, and sod the rest of us. I'm a grey-beard and will be gone by then, but for the young, perhaps the arrival of benevolent all-powerful alien overlords is the best that can be hoped for!

..or, on a practical level, refuse to use the technology, boycott automation in all its forms, shop small and locally, in cash, cut banks out of human-human transactions wherever possible, avoid big corporations etc. etc. - all the usual stuff that 'the nutters/luddites' have been saying for decades. Sadly though, the vast majority of people won't see past "Hey, this is cool" / "It's much more convenient this way" / "But everybody uses this, it's fine, what are you worrying about" etc. until the effects start to bite.
 
My response to this sort of video is always the same 6 little words - "Destroy it, destroy it with fire".

Back in the day, Asimov invented the Three Laws of Robotics as the hard-wired set of rules built into all robots in his universe, to make sure they were a boon to humanity rather than a threat.

I don't hear anything along these lines accompanying the exponential increase in the capability and autonomy of robots in the current age - bolting automatic weapons to them, yep, talking about how we can stop them laying waste to humanity in the next decade or three, not so much.

Perhaps we can't expect everyone to have read Asimov, but surely almost everyone has watched Terminator or is at least familiar with the general story line. While we shouldn't take novels or movie fiction as some kind of futurist gospel and assume it will come to pass, surely the general concept of "Be extremely careful developing any technology that could turn against you" bears some consideration now we're actually building these monsters for real.

At some point soon we'll get the first news story about an LLM (I won't call them AI 'cos they're not) taking remote control of an autonomous robot, or an ugly mob of them like in the video, and making them do something horrific - either as directed by a human or entirely of its own volition. I wonder what the reaction from the ruling class will be then.

With the current political climate it might be along the lines of "It's just one rogue LLM/robot, we shouldn't tar all autonomous electronic life forms with the same brush, and we definitely mustn't offend the LLM/robot community (or they'll kick off proper-style)". Sounds darkly familiar to me.

So yep, I say "Destroy them, destroy them with fire".

...at least until humanity can agree on some hard and fast standards along the lines of universal kill-switches, limits on battery power, limits on mobility, limits on connectivity etc. otherwise we're essentially just building a mercenary army that will do the bidding of whom/whatever gets its clutches on the controls.
 
One can't help but wonder how many big companies are already planning big layoffs. Another example in this morning's news of AI take-up leading to more unemployment.

 
Aye, I can't help thinking that they'll be re-hiring a year or two later when the promise of LLMs doesn't play out quite as the marketing folk and evangelists said it would, but by then a great deal of damage will be done.

I'm retired now - I used to be in IT - but I still keep up with insider techy stuff, and while the headlines are all mostly effusive about how much time/staff/etc. is supposedly being saved by implementing these systems, the greybeards like me who have been through several phases of 'the next big thing that will revolutionise the industry' (but didn't) in previous decades are almost exclusively deeply skeptical about it.

I hate to think how it will be in the IT industry in 10 or 20 years, when many of what will by then be legacy systems were written by 'AI' or 'vibe coding' or cheap trainee developers using code generators. Sure, the code might do the basic thing it needed to do to 'go live', but these tools have no overall vision or appreciation of the vagaries of real life/business, so security, maintainability, future-proofing and the like go out of the window and you get an unstructured mess that is hard to understand/maintain/develop further.

Fair enough, the old guard of IT might be curmudgeonly old b*stards (guilty) who don't trust this newfangled trickery out of sheer cynicism, and yes, this is quite a different thing to e.g. 'fourth-generation languages' at the end of the last century, which were supposed to make programming less of an arcane endeavour, but didn't.

With these LLMs (they're not AIs) restricted by such critical limitations as, for example, having no concept of 'correctness', I really can't see them getting much more reliable than they are now.

The design is fundamentally flawed. LLMs don't 'learn' anything; they just guess at what the most likely response to a prompt/stimulus will be from the massive amount of data they have consumed. They've already consumed just about all the available data and are still deeply wrong way too high a proportion of the time and, crucially, have no idea that they are wrong, or what the concept of 'wrong' even means.
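To make that "guessing" point concrete: the final step in such a model really is just a softmax turning per-token scores into probabilities, and nothing in that calculation encodes truth. Here's a minimal sketch - the logit values and token names below are entirely hypothetical, invented for illustration:

```python
import math

def softmax(logits):
    """Turn raw per-token scores into probabilities that sum to 1."""
    m = max(logits.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical scores for candidate continuations of some prompt.
# Note there is no input anywhere for "is this answer true?" -
# only learned statistical association, expressed as a number.
logits = {"plausible-right-answer": 3.1,
          "plausible-wrong-answer": 2.9,
          "unlikely-answer": 1.0}

probs = softmax(logits)
best = max(probs, key=probs.get)
```

A near-miss wrong answer can sit at almost the same probability as the right one, and the mechanism has no way to flag it as wrong - which is the whole "no concept of correctness" problem in one small function.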

Until a better overall approach is invented - one that somehow appreciates concepts like correctness and doesn't take a warehouse full of processors, storage, cooling ducts and enough electricity to power a small town to achieve less-than-satisfactory results - they can't adequately replace humans other than in the more basic tasks they are being feted as replacing. Not that that means it won't be attempted :)

We already know of a great, albeit almost entirely mysterious, design that is way superior: the human brain. It weighs a few kilos, consumes remarkably little power, doesn't require specialised cooling, and actually thinks, understands and learns! A few niggles like data throughput, scalability and more reliable storage can be solved, but the really tricky bit that LLMs are incapable of somehow works brilliantly in that lump of grey mush. We need to come up with something that can do what the brain does and then add peripherals for storage, networking, parallelism etc.

Usual caveat: that's not to say the current technology doesn't have its uses - of course it does - but it's a very expensive approach, and it fails far too often to be trusted with anything that doesn't have a human proficient in the field it's working in to check/correct/regenerate the output repeatedly until it's of sufficient quality. And then, in many cases, the proficient human might as well just have done the job anyway; unlike a human trainee, the LLM can't look over the shoulder of that expert and, in time, become an expert itself.

Worse perhaps, dumping more and more money into what is ultimately a technical dead-end prevents money being invested in coming up with something better.

LLMs have been an interesting trial/case study, but it's run its course now; we know their manifest and inherent flaws, and it's time to move on to something better, but too much money has been invested to do that, so it seems we're stuck with them, at least for the moment.
 
I still have problems with simple things like getting my earphones to connect via Bluetooth on my laptop, so my understanding of how computerized systems work is very limited. I can't argue against your overall review of AI. But AI seems developed enough, and with increasing integration into robots that do things, that I am inclined to think there are still a lot of functions carried out by humans that will be lost to this technology. Intrinsically, I don't have a problem with using AI/robotization to replace the rather soul-destroying repetitive labor involved in many things that are presently done by people. I recall many years ago watching women working all day long at a Dunlop factory at tasks such as one woman tapping corks into bottles. Another took flattened little boxes, shaped them and then enclosed golf balls - amazing speed, just doing that over and over, with the person bound to have tenosynovitis a few years down the track. My concern is about the dislocation and social impact of a massive turnover of mundane tasks as these kinds of jobs are done away with. Governments need some kind of social-tax formula placed on companies that replace people with technology, make much greater profit, and just kick the former employees out the door.
 
Gosh yes, you're absolutely right of course about mechanisation replacing rote, repetitive, predictable tasks. I find it fascinating watching videos of factories making pencils, cans of fizzy drink, liquorice allsorts etc. - amazing what they do so quickly and how infallible they (presumably 'almost') are. But we've already done that, and we do it amazingly well these days; that's refinements of refinements of things like the spinning jenny, all made of easily understood, predictable and controllable hardware components like cogs, conveyors and 'grab-it-rotate-it-and-put-it-there' things - centuries-old technology that's been honed and fine-tuned since.

What the evangelists think we have now is another level or three beyond: cogs and conveyors that actually know what they are doing, that know what liquorice allsorts are supposed to look like and what pencils are actually used for. But sadly they don't. They don't understand anything, let alone whether what they come up with is correct; so they're cogs and conveyors that we can't see, or easily understand, or fix. Yet it's proposed that we put these inherently faulty systems in charge of our shopping, bank accounts and the drafting of legal frameworks and other low-level business applications (e.g. the new OpenClaw agents and others), presumably eventually extending to air traffic control, power grids etc.

(I think I said above that I'm not sure which is scarier: that LLMs don't understand yet could be given these capabilities, or that we might invent something that does understand and hand it these powers; either way sounds pretty dubious to me.)

This 'mostly nearly correct' thing is fine - brilliantly useful, even - for applications that don't have a fixed right or wrong, where 'close enough' is good enough: inventing convincing human faces, generating video and music, tidying up noise in computer images, recognising patterns in bacterial growth and lots of other applications. Apart from where it puts huge numbers out of work, that is.

It's also remarkably convincing when used to come up with human-like responses to natural language prompts; it's realistic enough to have people believing they are communicating with something empathic, wise or in some way human. Perhaps this involves a little willing suspension of disbelief in some/many cases, but not all; some folk really do think they are communing with an intellectual equal or better.

There is an all-LLM forum online, where these systems chat amongst themselves, there are probably lots of them I imagine; and some of the computer-to-computer conversations on there are remarkable to read, and not a little disturbing on occasion.

It's hard sometimes to remember that none of the participants actually has any idea what they are talking about, or even that they are having a conversation, in any human sense at least; because it looks so very much like they are.

My concern is about the dislocation and social impact

Yep, it's very much mine too, I'll be off to watch the great gig in the sky in 10 years or whatever and am hiding out in this delicious backwater in the meantime, so for myself personally it's something of an intellectually interesting tragedy in the making, seen from afar, but my daughter is in her early 20s.

I've been telling her since secondary school not to specialise in anything that's going to be easily automated. She's nearly finished a PhD in Psychology, which I guess counts, or perhaps not.

For sure, anyone whose skills are easily replaced by 'it's OK to be nearly correct almost all of the time' mechanisms is going to be fighting for jobs serving coffee to those whose skills aren't - not entirely unlike the present day, I suppose, but there'll be an awful lot more of them fighting for an awful lot fewer jobs that still need humans.

While obviously I can't do anything about the potentially dark times to come, I hope, selfishly of course, that I've done my bit to help her find herself among the small and dwindling minority having the coffee served to them, 'cos I suspect standards of living for the rest will be trending back towards those of the Victorian poor, and worse.

Governments need some kind of social tax formulas placed on companies replacing people with technology
Yep, perhaps some kind of income tax payable on robots/computer agents, combined with a basic fixed income for all, or the state providing a lot more stuff for free - something along those lines. But imagining how that will develop is a bit of a scary prospect; it'll take some pretty brave governments to implement those things and make tough decisions about who gets what, how much, and how to stop some getting more than they should. What a nightmare.

There's (or perhaps 'there was' by now) a chap called Jacques Fresco; he had an idea he called 'The Venus Project': a resource-based economy that did away with the idea of currency completely. Massive automation, all needs provided by the state (but bountifully so, not like previous attempts at 'to each according to his needs, comrade'), with all animals being properly 'equal', including the pigs; and it all being regulated and enforced by computerisation - so we're back to needing a real 'AI' that understands things...

The Venus thing is somewhat brave-new-world-ish, but without the Soma or cannibalism, and looks like a great idea on paper - well worth a look - but the transition from what we have now to that would surely be a bloody one. I don't think it could possibly work with the world population we have now, so what do we do - have a lottery and exterminate 7 in 10 so the survivors can live lives of peace and plenty? Obviously not, but what alternative is there?

Humans are naturally a little greedy and acquisitive even at their best, everyone wants just a little more than their fair share for themselves, or for their kids. Some want way more, and some are prepared to do terrible things to get it. How to regulate that massive social upheaval would be mighty tricky and would pretty much need to be done simultaneously planet-wide, which doesn't seem likely to happen any time soon.

What we really need are benevolent, hugely superior aliens to come and sort us out on pain of summary disintegration, something like 'The Day the Earth Stood Still'; where are Michael Rennie and his silver sidekick when you really need them?
 
And then we end up with this...

Fascinating, but then I wonder how long before something malfunctions, and how long before the problem can be fixed?
 
Illustrative article in tech press - "AIs are happy to launch nukes in simulated combat scenarios"

Quote from the link: "Gemini embraced unpredictability throughout, oscillating between de-escalation and extreme aggression," Payne wrote in the paper. "It was the only model to deliberately choose Strategic Nuclear War ... and the only model to explicitly invoke the 'rationality of irrationality.'"

It seems Gemini was programmed by Trump.
 
Please let's not get into the orange man bad thing here, not a big fan of him myself, like almost all politicians; but it's a very old and tired theme that adds nothing to the discussion.
 
