In 2022, something truly seismic happened: GPT flipped the table. AI, once a niche domain of researchers and scientists, suddenly entered the mainstream.

At xfive, we’ve been tracking these AI developments closely, learning which tools deliver real value and how to apply them responsibly across client projects.
While tools like ChatGPT weren’t “AI” in the classic sci-fi sense – they relied on statistical models rather than conscious agents – they nevertheless opened the door to a broader cultural and commercial reckoning. Industries and consumers alike realized that these models weren’t just clever toys; they were transformational tools. What started as curiosity quickly became serious business.
Early Wins and Real Impact
Industry quickly realized that the choice was between allying with AI and being left behind. According to McKinsey, in 2022, around 50% of companies reported using at least one AI-based function in their business activity.
By the end of 2024, this figure had climbed to almost 80% and, apparently, was still growing. The appeal was easy to see:
- ChatGPT could generate texts,
- DALL-E or MidJourney could create images,
- Chatbots could provide 24/7 automated customer service,
- Perplexity AI could search the web and synthesize information into sourced answers,
- GitHub Copilot could write code snippets, suggest improvements, and automate repetitive programming tasks.
Management was delighted. People whose jobs were apparently being rendered redundant – not so much.
Not Everything That Shines Is Gold
However, concerns quickly arose: Is this responsible? Is it trustworthy? Is it safe? The consequences of using AI, especially the far-reaching ones, are still only vaguely understood, although we can already observe some unwelcome repercussions, such as the shrinking number of junior positions in the job market.
In terms of trust, AI quickly proved prone to so-called "hallucinations" – simply put, making things up – because its main imperative is to provide an answer, not necessarily a truthful one. This makes it necessary to meticulously review a model's output before using it in a business context.
Safety is also an issue: the safety of the data the AI works with and of the people chatting with it to solve their problems. In the worst-case scenarios, a chat with an AI may encourage a person to do something utterly harmful, which, unfortunately, is no longer a hypothetical possibility.
We are talking about security breaches, unintended actions taken by agents leading to loss of revenue, and poor-quality code, as reported by IBM, TechRadar, and companies operating in the security sector. Unfortunately, there have also been cases with profoundly tragic consequences caused by a chatbot's reckless responses.
With great power comes great responsibility – as a certain web-related superhero was once advised...
Towards a Balanced Approach – An AI Lab Built for Questions, Not Just Answers
Due to the factors mentioned above, xfive decided to establish an AI Lab – an internal think tank tasked with maintaining clear guidelines for the responsible, trustworthy, and safe implementation of LLMs, both in internal company processes and in the services we offer our partners.
The past few years of AI's presence on the market have proved that it is not a temporary fad but a change that is here to stay. Internally, we aim to leverage AI's power to:
- deliver our work faster and better,
- deliver our work without compromising the quality of our services,
- deliver our work without compromising the culture that is written in the DNA of our company.
For our partners, on the other hand, we are ready to implement AI-based solutions wherever it is reasonable and beneficial. We advise on when and how AI is the way to go, and when it is better to stick to good old habits.
The AI Lab is – paradoxically – all about people: a group of enthusiasts from various company departments united by the idea of incorporating AI into xfive's bloodstream responsibly and wisely. It is an interdisciplinary endeavor because the smartest concepts are born from a clash of different perspectives.
That's why AI Lab gathers people from:
- management,
- marketing,
- and development.
Combining these points of view also helps us spot risks and dangers that may be imperceptible to specialists who are too focused on their own domain.
From Quick Fixes to Thoughtful Creation
After the initial euphoria, industries are beginning to realize that not everything labeled "AI" is actually beneficial and profitable.
– Tomasz Nowak, WordPress Developer & AI Lab Member at xfive
And there are reasons to think this way. One example is "vibe coding," a term coined to describe building a digital product (like a website or an app) solely by giving instructions to an AI agent, with minimal or no oversight or manual intervention from a human developer.
The internet was full of stories about "how I made an app in 2 hours," which might have created the impression that two hours is all it takes these days to deliver a digital product (and that development companies' days are basically over).
Today, vibe coding understood in this manner is fading, as more and more studies indicate that the overall quality of code produced since the introduction of AI agents has decreased, and that maintaining and bug-fixing such creations actually takes more effort than developing them manually.
Instead, a more self-aware and intentional use of AI agent assistance is taking over. We call it "vibe engineering," and we consider it a new, more mature, and refined approach to using AI in delivering digital products.
This is just one example of how a trend that both the industry and clients were initially enthusiastic about quickly began to fade in favor of a more responsible, trustworthy, and safe approach.
A Journey from Curiosity to Confidence, Safely with xfive
We believe that distinguishing ephemeral AI fads from serious game-changers is both a challenge and an opportunity. Whether it's integrating AI into existing products or exploring entirely new solutions, we support our partners every step of the way, making sure innovation is both safe and impactful.
With the AI Lab, we are confident we can navigate past the reefs of risk towards the safe waters of sustainable and prosperous AI implementations.
Feel like that's a direction you'd be happy to head in? Let us guide you.



