The future and the role of Artificial Intelligence.

Thanks for the gift then, I'll keep it loaded so as not to possibly use up another gift on reloading it.
[Edit] I didn't intend the far below to get posted, I closed the window, but looks like it saved it !!
[Edit again] Looks like I got into blurb mode again, ah well, will leave it all there anyway !

What I was leading up to in the blurb below is that the programmers he interviewed seem to think that generating code that 'works' quickly is the objective, but it isn't. Only a small part of the effort over an application's whole lifetime is the initial coding; far more time is spent fixing it, adapting it and maintaining it over the years. The quicker you do the first bit without considering the long term, i.e. how the software will develop over time (and all software does this), the more time some poor buggers who inherit it after you leave have to spend doing the far longer bit.

So yeah, perhaps junior programmers can produce software that passes various tests (and so is considered to 'work') in a fraction of the time, so developing the software looks like it'll be quicker and cheaper - hooray. But I suspect that, looking back on a project written that way at the end of its lifecycle in maybe 10 years, the impression will be somewhat different.

An old maxim of software development is "You can have quick, cheap or good; pick any two" - so you can have it done quickly and well, but it'll be expensive; or good and cheap, but it'll take a long time. The 'AI' approach opts for quick and cheap, at the expense of good.

Perhaps the people he interviewed only work on small projects, or have never inherited a big mess of a legacy project, or just aren't very proficient yet - not to say they're not smart, but possibly not very experienced in their field. As far as I can tell, it's generally the experienced greybeards that are being laid off more, 'cos they're dashed expensive; trainees/novice programmers are relatively cheap.

The difference between designing, say, an engine and software is that with an engine, if you decide you want to change all the bolts from steel to titanium, you have to dismantle the whole thing and change each bolt manually. With software you can make the equivalent change with a search and replace and release the results in seconds; like you can reach a ghostly hand into the engine and change the bolts without taking it apart. This is in some ways a huge advantage, but it can also be a massive liability: it means cowboy/novice developers can wing it, since you can always change something later. If it crashes and burns, you just do another edit and restart it; you don't waste dozens of hours with a spanner, and several hundred kilos of expensive mangled metal.

The impression being that it doesn't matter if you do a half-arsed 'just about OK enough to sort of work' job at first, since you can always change it - but with software of any complexity, changing thing #1 often means you need to change things #2, #7 and #14 as well, and changing #2 means you have to change things #5 and #158, and... These are called dependencies.

Part of designing software well is reducing these interconnected dependencies: creating it as a bunch of self-contained, self-protecting, isolated pieces that nevertheless work seamlessly together and don't cause havoc when half a dozen of them need to be changed. But to do that you need an overall vision, which is what the 'AI' approach cannot bring.
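To make the dependency point concrete, here's a minimal sketch of one standard way of doing it (all the names here are hypothetical, invented for illustration): each piece depends on a small, stable interface rather than on another piece's internals, so swapping out one part doesn't ripple through the rest.

```python
# Hypothetical example: an order module that depends on an interface,
# not on a concrete payment implementation.

from abc import ABC, abstractmethod

class PaymentGateway(ABC):
    """The only thing the order code is allowed to know about payments."""
    @abstractmethod
    def charge(self, amount_pence: int) -> bool: ...

class CardGateway(PaymentGateway):
    def charge(self, amount_pence: int) -> bool:
        # Real card processing would live here; replacing this class later
        # doesn't touch any code that only depends on PaymentGateway.
        return amount_pence > 0

def place_order(gateway: PaymentGateway, amount_pence: int) -> str:
    # Depends on the interface only: changing thing #1 (the gateway)
    # no longer forces changes to things #2, #7 and #14.
    return "confirmed" if gateway.charge(amount_pence) else "declined"

print(place_order(CardGateway(), 250))  # confirmed
```

The point isn't this particular pattern; it's that somebody with an overall vision has to decide where those interface boundaries go, and that's exactly the judgement that quick-and-cheap generated code skips.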

Say you build a house, employ trainee bricklayers and tell them to work as fast as they can. They'll probably get a few walls up pretty soon, and they'd probably look OK and essentially function as walls - OK, job done, then you plonk a roof on top and you have a house. But would you want to invest money in buying that house knowing how it was built? Would it stand for 50+ years? Would you be able to add an extension without half of it falling over, etc.?

Same with software, and my contention (having designed and written everything from low-level operating system software to user applications from the 1980s until retiring a few years back - thank fuck... :):)) is that the 'AI' approach will come back to bite the owners of the software very, very badly as the software beds in and their, or their customers', businesses become entirely dependent on it.

They'll have a gargantuan mess of a code-base that nobody understands and that's almost impenetrable to anyone trying to. It'll be horribly inefficient and almost impossible to adapt without breaking some other part of it, and no amount of clever prompting is going to get an 'AI' to fix that without just making things worse, because the foundations are built on slurry.

--------------- blurb below ------------

Read it [the article] now, good read and rings a lot of bells. Reminds me of the late 1980s when the new thing was 'fourth generation languages', specifically a thing that I worked on (as in, wrote the software, not used it) at ICL for a while called 'Application Master' - the idea was that customers describe the system they want in broad terms and the system generates an application to do that.

It was fairly easy to get the bare bones of something working. Say somebody wants a database of customers: the system generates database fields for customer name, phone number, address etc., plus code to display them and allow the user to add and update entries - all well and good, basic stuff. Then you can add requirements like recording what orders they place, what payments they make, what shipments are sent to them, what products they returned, etc. - again, not a problem... well... individually, anyway.

It starts coming unstuck when business-specific things come into play, especially where the isolated screens for customer details, orders placed, payments made etc. need to make sense together according to the customer's business rules. Then it starts getting complicated - which is where the human system architect would usually be talking to the customer's business people, analysing their requirements now and in the medium and long term, and talking to the people who will actually use the system: looking at what computer or paper system they use now, what the problems are with it, what needs to stay the same and what needs to change.

This is where the Application Master system fell way short, and where people using code generation tools will too.

Even in the simplest of applications, like a customer/order/payment/shipment system, the number of intertwined and often conflicting business requirements that need to be weighed and thought through means that without the high-level understanding the architect provides, and then communicates to the people writing the code, the system that's produced is generally inefficient and clunky: a technical dead-end and a nightmare to make changes to and fix bugs in.

By then so much time and effort has been invested, and the customer is already using the system, imperfect as it is, that you have to plough on; there's no option to nuke it from orbit and start again.

As the system matures, new features are added, and since it's not well designed, every change makes it more of a mess - more of a problem to add further changes without breaking things that have worked fine since day 1.

Do this for a year or two and any reasonably major change is a big risk, so either you take the risk of breaking it big time or you spend large amounts of time and money testing every non-trivial change - so more costs, slower development, and it just gets worse and worse.
 
And here we go....
There is already plenty of A.I.-generated shit on YouTube, but now it seems some book authors are using it as well....

"Hachette Book Group, one of the largest publishers in the United States, pulled a forthcoming horror novel on Thursday in a decision that followed widespread allegations online that the author, Mia Ballard, relied heavily on artificial intelligence to write the book.
On Thursday, a day after The New York Times approached Hachette citing evidence that the novel appeared to be A.I.-generated, the company said it was pulling the book from publication. By Thursday afternoon, the novel was removed from Amazon and the Hachette website.
Hachette told The Times that its Orbit imprint decided not to publish “Shy Girl,” which was due out in the United States this spring, after conducting a thorough and lengthy review of the text. Hachette said it will also discontinue the book in the U.K., where it was published last fall and has sold 1,800 print copies, according to NielsenIQ BookData.
“Hachette remains committed to protecting original creative expression and storytelling,” a Hachette spokeswoman said. She added that Hachette requires all submissions to be original to the authors, and asks authors to disclose to the company whether they are using A.I. during the writing process.

In an email to The Times late on Thursday night, Ballard denied using A.I. to write “Shy Girl,” contending that an acquaintance she hired to edit the self-published version of the novel had used A.I.

“This controversy has changed my life in many ways and my mental health is at an all time low and my name is ruined for something I didn’t even personally do,” she wrote, noting that she could not elaborate on how the book had been edited with A.I. because she was pursuing legal action."
 
Obviously much of the AI rubbish overtaking YouTube could be reduced if they had a policy of requiring contributors to flag AI content and banning them if they don't comply. But YouTube just seems happy to have more clicks, and presumably the advertising that follows. I have several times bothered to make this argument in reports to YouTube and got an automated reply saying "Thank you", and that's the end of the matter. I expect it will continue this way until enough people just stop using YouTube because of the AI junk.
 
Yeah, I mainly use YT for audiobooks, so I don't 'watch' it so much as listen. As soon as I realise something is being narrated by an artificial voice I stop/switch - not particularly on principle; I just find it slightly disturbing/uncanny-valleyish and don't want it in my head.

Maybe the viewing stats will register that views of videos they're aware have fake voices are terminated quickly (or rather, the infinitesimally minuscule effect my activity has on the stats!) - though often I've downloaded the media, so they won't know, and I've no idea whether YT has any register/inkling of which videos use real voices anyway.

As you say, why would they care? They're just after clicks to sell advertising against - which I never see anyway, due to downloading and/or using ad blockers - and I'm amazed anyone puts up with the adverts, from seeing how intrusive they are on somebody else's computer!

I just read that PricewaterhouseCoopers have announced to their staff that anyone not on board with using 'AI' isn't their kind of employee. It's getting to be like a religion!! - I'm glad I'm old and retired enough that I can choose to be a heretic.

It's bad enough that the human race is spending so much of its time consuming what the algorithm behind a small glass screen is telling them to, without what they're told to watch being increasingly generated by algorithms. Humanity really is in deep trouble.
 
Perfect time to plug the TV series Black Mirror. So many great episodes and some are starting to partially come true. It's just a matter of time for some of the others.
 