Danielle Heberling

Tech Thoughts

I'm Worried About Generative AI

May 04, 2024


Photo by Ksenia Makagonova on Unsplash

I’m worried about generative AI. I say this as someone who uses AI daily to help with work tasks. I believe AI can do some genuinely useful things, and it has real potential as the technology improves, but I have concerns.

No, this post isn’t about the common objections I’ve been seeing in the discourse, such as bias in the models, hallucinations, and people’s jobs being replaced. Those are very important things to monitor, but they aren’t my focus here.

I’m concerned with AI monopolizing the focus of companies.

Take the big tech companies. Reports of multiple rounds of layoffs are everywhere, alongside news of record levels of investment (both money and people) in AI. These companies run huge ecosystems that many folks depend on. If AI is now the main focus, will their non-AI products and services receive less attention? Will service reliability degrade over time because the engineers who previously responded to outages have been moved to AI teams or laid off?

On the other end of the spectrum are tech startups. I’m seeing many chatbots being built, and their marketing tells me each one is the solution to a problem I didn’t know I had. I’m excited that this opens up opportunities for new ideas and businesses to take shape, but when the core of a product is essentially a black box that can be very wrong sometimes, it doesn’t feel like a sound business model until the tech matures. My overall impression is that “AI” is a magic word these companies use to attract customers who like shiny new things, and to attract venture capital funding.

I’ve also seen plenty of companies throwing AI at problems that didn’t need it. One example: a company that will remain nameless was demoing a new tool that uses AI to help users troubleshoot error messages that surface in its app. A developer watching the demo called out, “Or you could write better error messages that users will understand instead of bringing AI into this.”

These opinions are based on observation of the world around me; I don’t have data to present as evidence. It’s all vibes. I sure hope my vibes are wrong, that I’m overreacting to all of this, and that I should take off my tin foil hat.

Time will tell.

AI has promise, and it has the potential to help with specific types of tasks, but let’s not forget to solve the problems we already have before bringing in tools that could create even more of them.

What are your thoughts?