Something, Something, AI
I have seen myriad threads and posts about AI and its impact on the future of software engineering. Maybe if I had more energy or desire, I could write about it more elegantly. But I simply can't right now.
There is a crowd that sees AI's ability to quickly iterate on and deploy web applications and proclaims software engineering dead. They do not realize that the complexity of software engineering explodes the moment your application leaves its LAN and speaks across networks. At that point the software engineer must suddenly handle basic security, authentication and authorization, user input validation, and so on. We have entire teams dedicated to security, infrastructure, and coding itself, yet there are security incidents and network-caused outages anyway. At scale, AI becomes a cognitive tool rather than an 'agent' (there is no word I hate more nowadays).
So, yes: software engineering is dead if you never needed to engineer software. If you want to build small tools at home, and you care only about having something that works, not about learning or total control, use AI. It is amazing for small things.
There are software developers who look at AI and believe it is garbage and will only ever be garbage. This could not be further from the truth. As with any information revolution, such as the rise of the internet, cognitive outliers benefit disproportionately. AI enables the rapid gathering of information across disparate fields: one can learn the fundamentals of cosmology, distributed consensus, and the Rust borrow checker in the span of an hour. For those already gifted at information absorption and pattern matching, AI as a learning tool is an incredible accelerator.
The implication of this knowledge accelerator is that the gap between exceptional engineers and average ones widens further. I would not be surprised if the above-average become more valuable and the average less so. The top 1% accelerate massively, while everyone else, at best, stagnates. This is not a statement about meaning or moral worth: the vast majority of companies do not need outliers and would probably be happier with stable, adequate employees.
One complication: there already are not enough jobs for excellent engineers. The unlucky outliers will find themselves in roles poorly suited to their capabilities.
The problem is that most AI models tend to agree with your reasoning, and even when they disagree, the counterarguments are often weak. It is incredibly easy to be led down dumb, energy-consuming paths. In this sense, junior engineers are much more screwed than senior engineers. They lack the epistemic foundation to discern what is truly good from what merely appears to be, and their reasoning itself gets crippled by AI because they were never allowed to strengthen it.
The real problem with AI is the delusion of the executive and investor classes. Billions of dollars invested in a circular fashion, with hardly any return. But these companies operate in fear of missing out on something revolutionary, and there is nothing they cling to more than their own fantasy projection: the replacement of workers.
Re-reading the above paragraph, I know it sounds like left-wing populist talking points. Yet I was in a company meeting recently, and my CEO could not stop talking about the rise of agentic AI. To quote him, "by the end of this year you will have agentic coworkers." A blatant fantasy about replacing employees. Further, for some reason he shared an interview he did with the CMO of a massive technology company (you have used their products); to quote that CMO, "I am excited for the future of AI that enables labor for pennies on the dollar."
I want to burn it all down.
I wrote a sentence about the impossibility of convincing white-collar workers to unionize, but I deleted it: how much does it matter if almost every state is at-will?