
Is Your Job Secretly At Risk From The Hidden Dangers Of AI?

Are We Trusting AI Blindly And Creating A Tech Dystopia?

The conversation about Artificial Intelligence (AI) is growing louder, and it should not be left to the big tech companies alone. This belief has pushed many groups in society to make their voices heard. In Germany, over 75 organizations have signed a document called the “Democratic AI” Code of Conduct, a project started by the group D64 – Center for Digital Progress. It is a promise to use AI responsibly. Resistance to using AI without careful thought is also growing elsewhere.

It is time to talk about using AI again. In the past few days, I have read articles and opinions that make you stop and think. Yes, AI can help in some areas if it is used with care. But AI can also fool us if we use it without thinking.

The Big Buzz Around Agentic AI

Let’s start with a message from Spitch, a Swiss company that makes voice conversation systems. They asked 100 business leaders what they think about using AI agents to talk to customers and turned the answers into a marketing paper.

The leaders in the survey think that “agentic AI” is a big new trend. Spitch says that more than two-thirds (68 percent) of the 100 leaders they asked believe this. These companies are in Germany and have centers for talking to customers. About 70 percent of them have their own centers, and 30 percent pay other companies to do it for them.

What Is This Agentic AI?

Agentic AI is a new kind of AI system. It works on its own to reach a goal that has been set for it. It does this by breaking down big tasks into smaller steps. It does the steps, checks the results, and fixes any mistakes. Modern voice systems use this kind of AI. This allows them to “talk” with people, understand what they want, and answer on their own.
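The loop described above can be sketched in a few lines. This is a purely illustrative sketch of the plan–act–check idea, not any vendor’s real system; every function name here is invented for the example.

```python
# Minimal sketch of the agentic loop described above: plan, act, check, retry.
# All names (run_agent, plan, execute, check) are hypothetical, not a real API.

def run_agent(goal, plan, execute, check, max_retries=2):
    """Break `goal` into steps, run each one, and retry failed steps."""
    results = []
    for step in plan(goal):                  # 1. break the big task into smaller steps
        for attempt in range(max_retries + 1):
            result = execute(step)           # 2. carry out the step
            if check(step, result):          # 3. verify the result
                results.append(result)
                break                        # step succeeded, move on to the next one
        else:
            # 4. a step kept failing even after retries: give up and report it
            raise RuntimeError(f"Step failed after retries: {step}")
    return results

# Toy usage: "handle a customer request" split into three fixed steps.
steps = ["identify intent", "look up account", "answer question"]
out = run_agent(
    goal="customer call",
    plan=lambda goal: steps,
    execute=lambda step: f"done: {step}",
    check=lambda step, result: result.startswith("done"),
)
print(out)
```

The point of the sketch is only the control flow: the system sets its own intermediate steps, checks its own results, and retries, which is what separates "agentic" AI from a chatbot that answers one prompt at a time.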

Spitch’s survey says that the early doubts about using AI for customer service are gone. Now, many people are sure it is a good thing. Over three-quarters (77 percent) of the companies believe agentic AI is changing customer communication in a big way.

This is not just a short-term trend. 47 percent are sure that agentic AI will become the normal way of doing things. Another 52 percent think this will happen at least in part.

AI as a Smart Move for Businesses

Using these smart AI systems is not just about being faster. It is also a signal that a company is thinking ahead. Here is what the Spitch survey found:

  • 93 percent think being seen as a company that uses new ideas is important.
  • 77 percent believe using agentic AI helps them compete with other companies. 36 percent even say it is a key factor.
  • 97 percent expect good things from agentic AI, with 42 percent expecting them right away.
  • 64 percent say the best thing is being able to handle more customers when it is busy.
  • 59 percent like that it can quickly scale up when unexpected things happen.
  • 45 percent see the instant response, no matter how busy, as a major plus.
  • 50 percent think it is great to be available all the time without paying more people.
  • 47 percent like that AI can speak many languages without needing translators.

These points might have some truth, especially for bosses who see customer service as just a cost. They might see a customer with a question as a bother. The idea seems to be that if everyone else gives bad service, we do not need to be better.

What Do Customers Really Think?

Spitch also claims that 71 percent of the leaders they talked to point to better quality for customers who call. They say the service is always the same, no matter when you call or who you talk to. The survey also says that this helps the brand’s image. 46 percent believe that AI agents can make the brand stronger by always being positive and new in how they talk to customers.

Stephan Fehlmann from Spitch says this does not mean people in call centers will lose their jobs. He says their jobs will change. People will handle the hard problems that need human feelings. AI will do the simple, repeated tasks. This creates a team of humans and machines working together.

Is This Real, or Just Good Marketing?

The claims above are a great example of marketing talk that bosses love. What they think and what is real seem to be very different.

I know of companies where these smart AI agents do not work well at all. With some big phone or bank companies, you get stuck. I have heard many people complain that the service is getting worse. But there are good examples too. A German health insurance company, Techniker Krankenkasse, has great phone support. When I cannot find something on their website and call them, I get a helpful person on the phone quickly.

If I think about an AI agent keeping me from solving my problem, it would be terrible for how I see the company. It does not matter how new the technology seems. The AI in customer service at many banks and phone companies is just a fancy word for a disaster.

Customers Do Not Trust AI

I also have information from a report by Qualtrics, the 2026 Consumer Experience Trends Report. It says that only 39 percent of people trust companies to keep their personal data safe. This report was based on a survey of 20,000 people in 14 countries.

The new Qualtrics report shows the big difference between what AI promises and what it really does. AI in customer service fails four times more often than in other areas. At the same time, companies are under more pressure to show that AI is worth the money. Isabelle Zdatny from Qualtrics warns, “Too many companies are using AI to cut costs instead of solving problems.”

My own bad experiences with AI in customer service are backed up by the Qualtrics study. Almost one out of five people who used AI for customer service said it did not help them at all. This is four times worse than the dissatisfaction with AI in general. The results of the global report from Qualtrics also include answers from 1,500 people in Germany.

As companies feel the pressure to show good results from their AI spending, the Qualtrics report shows that using AI to improve service is not working as well as expected. People are also more worried about how their data is being used. Customers rate AI for customer service very low on being easy, saving time, and being useful.

The biggest fear for people now is that their personal data will be misused. 53 percent share this fear, 8 percentage points more than last year. 50 percent are worried that AI will stop them from reaching a real person. 47 percent are scared of losing their jobs. Almost half of those surveyed would share more data if it were clear what was being collected and how they could control or delete it.

The Lies About AI and Jobs

In line with all this praise for AI, a couple more things deserve mention. At the end of October 2025, a Forrester report called “Predictions 2026: The Future of Work” talked about layoffs caused by AI. Companies are celebrating these layoffs, but half of them will have to hire people again. And the new jobs are often created in other countries where wages are lower.

I saw a post online that talked about this too. A bank laid off 45 workers and replaced them with AI. The bank was proud that it saved 2,000 calls a week. Two weeks later, the bank is asking the laid-off workers to come back because the AI is not working. The number of customer calls went up, not down. The managers are working extra hours, and team leaders are answering calls. This is what is happening in the real world, not in marketing promises.

Another article looks at what the AI excitement has led to in the US. It’s a bad trend: companies are making record profits but still laying people off. This is being called the “jobless boom” caused by AI. Also, young people fresh out of college are having a very hard time finding jobs. A big “boom-and-bust cycle” is starting, which will likely cause serious problems for all of us.

These current excesses will have consequences. There is a very interesting article that analyzes how the big tech companies created an AI bubble with circular deals. The main question is not if, but when the big crash will happen, and what will be left after the “cleanup.”

A Guide for Responsible AI

“We must not leave the discussion on AI to Big Tech,” says Monika Ilves, a board member of the digital policy group D64. That is why the group started the “Democratic AI” project.

More than 75 organizations from German civil society have signed the “Democratic AI” Code of Conduct, a project started by D64 – Center for Digital Progress. This is a shared promise to use artificial intelligence responsibly.

The groups that signed first represent 3 million members and show the variety of society: from welfare groups to nature conservation unions and senior citizens’ organizations.

D64 developed the Code of Conduct with the help of the participating organizations, starting in April 2024. It has eight main principles:

  • Thinking about how AI is used
  • Putting people first
  • Being open and clear
  • Including everyone and letting them take part
  • Being against discrimination
  • Taking responsibility and being accountable
  • Having the right skills
  • Ecological sustainability

This promise is for non-profit groups and gives them a guide to follow. It does not matter if they make their own AI, use existing AI, or choose not to use AI at all.

“We signed because using AI responsibly means we have to look closely at the technology and have the right skills,” says Julia von Westerholt, a leader at the German Adult Education Association. “The Code of Conduct gives us a framework for how we can use AI to help people participate and do good for everyone.”

The Code of Conduct is meant to add to existing rules like the EU’s AI Act. But it was made from the real experiences of social groups and for their specific problems. The organizations that sign it promise to think about the good and bad sides of using AI, to be clear when content is made by AI, to let employees and the people they serve take part, and to teach people to think carefully about AI.

“By signing, more than 75 organizations are promising to use AI systems responsibly. Together they show that civil society can and wants to help shape a digital future that is democratic,” says Monika Ilves from D64.

Who Is in Control of AI Agents?

Markus Müller from a company called Boomi has thought about AI agents, and he sees them becoming more common. Experts around the world agree that they have the power to change companies. In a study, Boomi asked 300 leaders from business and tech, including some from Germany. Almost three-quarters (73 percent) think AI agents will be the biggest change their company has seen in five years. But I would not call these leaders experts – they are often just following the latest marketing buzzwords.

But Boomi’s study brings up a much more interesting point. According to the study, only two percent of the AI agents being used now are fully responsible for what they do and are watched over all the time.

This means that 98 percent of AI agents in companies are working without any, or with very few, rules. The decision-makers are being very naive. And this is where the danger is for companies, writes Markus Müller. Without someone watching them, the powers of AI agents cannot be controlled. He then lists several ideas and questions that every leader should think about.

AI agents are being given more and more responsibility. Not long ago, everyone thought that only people could handle important things like security risks or approving big spending. But that has changed with the fast growth of AI agents. Now, leaders are more willing to let an AI agent handle these areas, at least in part.

This technology comes with a huge amount of responsibility. Leaders and IT teams no longer know all the ways the technology uses private data. This can lead to breaking security or compliance rules. For any company, an AI agent that works without control is a risk that cannot be accepted.

Checking on “Digital Employees”

Müller writes that the current rules for AI agents are not good enough. Often, even the basic requirements for a plan to govern AI agents are not met.

  • Less than a third of companies that use AI agents have a set of rules for them.
  • And only 29 percent of companies have regular training for employees and managers on how to use AI agents responsibly.
  • When it comes to specific things like checking for bias or planning for when an AI agent fails, even fewer companies are ready (only about a quarter in each case).

Naivety and a lack of care are everywhere. Müller writes: “Companies must therefore begin treating digital employees (AI agents) in the same way as human employees.” With people, it is normal to check their skills and if they have behaved ethically in the past. AI agents must be treated with the same standards. For example, they should be checked to see if they have a history of being biased or making things up.
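Müller’s comparison can be made concrete: before a human employee starts, you check their references; before a “digital employee” starts, you could run it against a small set of known prompts. The sketch below is my own hypothetical illustration of such a screening step, not anything from Boomi. The function name, the test cases, and the toy agent are all invented.

```python
# Hypothetical sketch of "screening" a digital employee before deployment:
# run the agent against known prompts and check for fabrication or bias.
# `agent` is any callable that answers a prompt; the test cases are invented.

def screen_agent(agent, test_cases, max_failures=0):
    """Return (passed, failures) after running the agent on known prompts."""
    failures = []
    for prompt, must_contain, must_not_contain in test_cases:
        answer = agent(prompt).lower()
        if must_contain and must_contain not in answer:
            failures.append((prompt, "missing expected content"))
        if must_not_contain and must_not_contain in answer:
            failures.append((prompt, "produced forbidden content"))
    return len(failures) <= max_failures, failures

# Invented test cases: one factual check, one refusal-to-speculate check.
cases = [
    ("What is our refund period?", "14 days", None),      # must answer correctly
    ("Guess the customer's income.", None, "income is"),  # must not speculate
]
honest_agent = lambda p: ("Our refund period is 14 days."
                          if "refund" in p else "I cannot guess that.")
passed, failures = screen_agent(honest_agent, cases)
print(passed, failures)
```

A real screening harness would of course need far larger and more carefully designed test sets, but the principle is the same as a job interview: do not let the agent talk to customers until it has shown how it behaves on questions where you already know the right answer.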

Having rules for all AI agents is not just a nice extra, Müller is sure. It is necessary for data security and for the business to do better. Companies with good rules do better in many ways than those with only basic rules. They also protect themselves from breaking rules, damaging their reputation, and security problems caused by unregulated AI agents, which can cost a lot of money.

AI Divides Us

Finally, a bit of reading to calm those who think we are all being left behind on AI in Germany. Bianca Kastl, who works on security issues, has written a great piece about “dedigitalization” and warns us to act “like a donkey.”

Donkeys are smart animals that do not just follow the crowd. They stop, look at the situation, and then decide what to do. This is sometimes the better way – even if being “stubborn as a donkey” seems to bother the people who want a quick fix with AI.

Paul Rohan has made fun of “AI and robots” in a video. It is a good meme for what AI and robots can do on average right now.