The future and the role of artificial intelligence.

Thanks for the gift then, I'll keep it loaded so as not to use up another gift on reloading it.
[Edit] I didn't intend the stuff far below to get posted, I closed the window, but it looks like it saved anyway!!
[Edit again] Looks like I got into blurb mode again, ah well, I'll leave it all there anyway!

What I was leading up to in the blurb below is that the programmers he interviewed seem to think that generating code that 'works' quickly is the objective. It isn't: only a small part of the effort over an application's whole lifetime is the initial coding; far more time is spent fixing it, adapting it and maintaining it over the years. The quicker you do the first bit without considering the long term, i.e. how the software will develop over time (and all software does this), the more time the poor buggers who inherit it after you leave have to spend on the far longer bit.

So yeah, perhaps junior programmers can produce software that passes various tests (and so is considered to 'work') in a fraction of the time, and so developing the software looks like it'll be quicker and cheaper - hooray. I suspect that when looking back on a project written that way at the end of its lifecycle, in maybe 10 years, the impression will be somewhat different.

An old maxim of software development is "You can have quick, cheap or good; pick any two" - so you can have it done quickly and well, but it'll be expensive; or good and cheap, but it'll take a long time. The 'AI' approach opts for quick and cheap, at the expense of good.

Perhaps the people he interviewed only work on small projects, or have never inherited a big mess of a legacy project, or just aren't very proficient yet - not to say they're not smart, but possibly not very experienced in their field. As far as I can tell, it's generally the experienced greybeards that are being laid off most, 'cos they're dashed expensive; trainee/novice programmers are relatively cheap.

The difference between designing, say, an engine and designing software is that with an engine, if you decide you want to change all the bolts from steel to titanium, you have to dismantle the whole thing and change each bolt manually. With software you can make the equivalent change with a search and replace and release the results in seconds - as if you could reach a ghostly hand into the engine and change the bolts without taking it apart. This is in some ways a huge advantage but can also be a massive liability: it means you can wing it, since you can always change something later, and if it crashes and burns you just do another edit and restart it; you don't waste dozens of hours with a spanner and several hundred kilos of expensive mangled metal.

The impression being that it doesn't matter if you do a half-arsed 'just about OK enough to sort of work' job at first, because you can always change it - but in software of any complexity, changing thing #1 often means you need to change things #2, #7 and #14 as well, and changing #2 means you have to change #5 and #158, and so on. These are called dependencies.

Part of designing software well is in reducing these interconnected dependencies, creating it as a bunch of self-contained, self-protecting, isolated things that nevertheless work seamlessly together and don't cause havoc when half a dozen of them need to be changed - but to do that you need an overall vision, which is what the 'AI' approach cannot bring.
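As a toy illustration of that isolation (a minimal Python sketch; the class and field names here are entirely made up, not any particular methodology): in the first version, the invoice code reaches straight into the order store's internals, so changing how orders are stored drags the invoice code along with it; in the second, the store hides its internals behind one narrow method, and the ripple stops at that boundary.

```python
# Hypothetical example -- all names invented for illustration.

# Tightly coupled: this class pokes at another module's internal 'rows'
# list, so changing thing #1 (the storage layout) forces a change here too.
class CoupledInvoice:
    def total(self, order_store):
        return sum(item["price"] * item["qty"] for item in order_store.rows)

# Decoupled: callers see only a small, stable interface; the storage
# details below it are free to change without breaking anyone.
class OrderStore:
    def __init__(self):
        self._rows = []          # internal detail, free to change later

    def add(self, price, qty):
        self._rows.append({"price": price, "qty": qty})

    def line_totals(self):       # the narrow, stable contract
        return [r["price"] * r["qty"] for r in self._rows]

class Invoice:
    def __init__(self, store):
        self._store = store

    def total(self):
        # Depends only on line_totals(), not on how orders are stored.
        return sum(self._store.line_totals())

store = OrderStore()
store.add(price=10.0, qty=3)
store.add(price=2.5, qty=4)
print(Invoice(store).total())   # 40.0
```

The point isn't the ten lines themselves; it's that deciding where those narrow boundaries go, across a whole system, is exactly the overall-vision work.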

Say you build a house, employ trainee bricklayers and tell them to work as fast as they can. They'll probably get a few walls up pretty soon, and those walls will probably look OK and essentially function as walls - OK, job done, plonk a roof on top and you have a house. But would you want to invest money in buying that house knowing how it was built? Would it stand for 50+ years? Would you be able to add an extension without half of it falling over, etc.?

Same with software, and my contention (having designed and written everything from low-level operating system software to user applications, from the 1980s until retiring a few years back, thank fuck... :):))) is that the 'AI' approach will come back to bite the owners of the software badly in 2, 5 or 10 years. They'll have a gargantuan mess of a code-base that's horribly inefficient and almost impossible to adapt without breaking some other part of it, and no amount of clever prompting is going to get an 'AI' to fix it without just making things worse, because the foundations are built on slurry.

--------------- blurb below ------------

Read it [the article] now, good read and rings a lot of bells. Reminds me of the late 1980s when the new thing was 'fourth generation languages', specifically a thing that I worked on (as in, wrote the software, not used it) at ICL for a while called 'Application Master' - the idea was that customers describe the system they want in broad terms and the system generates an application to do that.

It was fairly easy to get the bare bones of something working. Say somebody wants a database of customers: the system generates database fields for customer name, phone number, address etc., and code to display them and let the user add and update entries - all well and good, basic stuff. Then you can add requirements like recording what orders they place, what payments they make, what shipments are sent to them, what products they return etc. - again, not a problem... well, not individually anyway.
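For a flavour of what 'bare bones' means, here's a hypothetical sketch of the sort of CRUD skeleton such a tool spits out - in Python purely for illustration (Application Master generated nothing of the sort, of course), with invented field names:

```python
# Hypothetical generated CRUD skeleton -- names are made up for illustration.
customers = {}       # customer id -> record
_next_id = 1

def add_customer(name, phone, address):
    """Create a customer record and return its id."""
    global _next_id
    customers[_next_id] = {"name": name, "phone": phone, "address": address}
    _next_id += 1
    return _next_id - 1

def update_customer(cid, **fields):
    """Update the given fields on an existing customer record."""
    customers[cid].update(fields)

cid = add_customer("Acme Ltd", "0123 456789", "1 High St")
update_customer(cid, phone="0987 654321")
print(customers[cid]["phone"])   # 0987 654321
```

Each such screen or table is trivial on its own; the trouble, as below, starts when they all have to agree with each other.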

It starts coming unstuck when business-specific things come into play, especially where the isolated screens for customer details, orders placed, payments made etc. need to make sense together according to the customer's business rules. That's where it gets complicated, and where the human system architect would usually be talking to the customer's business people, analysing their requirements now and in the medium and long term, and talking to the people who will actually use the system: looking at what computer or paper system they use now, what the problems are with it, what needs to stay the same and what needs to change.

This is where the Application Master system fell way short, and where people using code generation tools will too.

Even in the simplest of applications, like a customer/order/payment/shipment system, the number of intertwined and often conflicting business requirements that need to be weighed and thought through means that, without the high-level understanding the architect provides and then communicates to the people writing the code, the system produced is generally inefficient and clunky: a technical dead-end, and a nightmare to make changes to and fix bugs in.

By then, so much time and effort has been invested, and the customer is already using the system, imperfect as it is, that you have to plough on; there's no option to nuke it from orbit and start again.

As the system matures, new features are added, and since it's not well designed, every change makes it more of a mess - more of a problem to add further changes without breaking things that have worked fine since day 1.

Do this for a year or two and any reasonably major change is a big risk, so either you take that risk of breaking it big time, or you spend large amounts of time and money testing every non-trivial change - so more costs, slower development, and it just gets worse and worse.
 