As we roll out more generative AI and agents, it should change the way our work is done.
We expect that this will reduce our total corporate workforce.
Are we done for?
Time for the AI teams to suddenly have tech issues.
“Sorry, the whole codebase is just gone! We have no idea what happened!”
“Must have been the S3 storage”
Stupid question, but what is stopping the software engineers from poisoning the well?
Inserting malicious code or self-destructing functions, having entire batches of code lost or corrupted, hardware damaged, etc.?
Great question. I agree with other responses - it happens, and there’s motive to hush it up, so we tend not to hear about it.
It’s also just really hard to tell the difference after the fact between “Dave sabotaged us” and “no one knows how to do what Dave did”.
But I’ll add - there’s currently little motive to sabotage AI implementations. Current-generation AI is largely unable to deliver on what is promised, in a business sense. It does cool but useless things, like quickly generating low-maturity code, and writing a summary any seven-year-old could have written.
Current-generation AI adds very little business value while creating substantial risks. Never mind that no one knows how Dave worked; now no one knows how our AI works, and it’s so eager to please everyone that it lies at critical moments.
Companies playing around with current-generation AI to boost next quarter’s stock will hit plenty of “find out” soon enough, with nothing behind it beyond the natural consequences of ignoring their own engineers’ advice.
All that to say - if we see what looks like sabotage, it may well just be the natural consequences of stupidity.
A company with a fuck-off amount of legal power?
Small acts of sabotage are easy to write off as coincidence, if well planned.