
The search for transparency and reliability in the AI era


Questions are being raised regarding how safe sensitive internal and user data is from unauthorized access.

Opinion by Michael Hanratty, published 3 December 2025

Transparent and robust data residency ensures the confidentiality of clients’ data.




Generative AI is taking organizations to new realms of efficiency, innovation, and productivity. Just like the technological innovations that came before it – from the industrial revolution to the rise of the internet – the AI era will see businesses continue to adapt in order to capitalize on the most efficient processes possible.

Michael Hanratty

Chief Technology and Information Officer at HGS UK.

If a company’s data is unknowingly passed to third or even fourth parties following the use of AI tools, the fallout could not only compromise client trust but also weaken the company’s competitiveness.


The data security issue

The business world is now firmly in the age of AI, where companies are undoubtedly seizing the tangible benefits of the technology. Nevertheless, firms are also facing the significant risks associated with misusing this technology. Increasingly, there have been incidents of AI providers misleading clients on how their data is used.

For example, OpenAI was fined €15 million for deceptively processing European users’ data when training its AI model, while the SEC penalized investment firm Delphia for misleading clients by falsely claiming its AI used their data to create an ‘unfair investing advantage’.

These recent high-profile breaches of trust are raising alarm bells among businesses, fueling growing fears that AI enterprises are acting deceptively.

As a result, potential clients are reconsidering their use of AI and are hesitant to share personal data with providers. In fact, some companies are reluctant to invest in AI tools altogether.


According to KPMG’s global study from earlier this year, more than half of people are unwilling to trust AI tools – torn between the technology’s clear advantages and its perceived dangers, such as concerns over where their data resides.

This poses a significant question for AI providers: how can they raise trust surrounding AI and data security?

The path to trust: data residency and transparency

For AI providers, honesty translates to transparency – a crucial first step to rebuilding trust. Being upfront about who data is shared with and what it is being used for informs individuals before they entrust AI applications with their valuable information.


This is essential regardless of whether the client agrees or disagrees with the policy.

Providing businesses with a transparent overview extends to clarity in data residency. Displaying the physical or geographical location where data is stored and processed removes the uncertainty and speculation linked to AI.

If clients are given visibility into their data usage, their fear of the unknown diminishes, bringing the ‘invisible’ space into view.
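To make that visibility concrete, a provider could publish a machine-readable residency disclosure for each client. The sketch below is purely illustrative – every field name and value is hypothetical rather than any particular provider’s schema – but it shows how storage location, subprocessors and training use can be stated explicitly instead of being buried in a policy document.

# Hypothetical per-client data residency disclosure; field names and values
# are illustrative only, not a real provider's schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class ResidencyDisclosure:
    storage_region: str            # where the client's data physically lives
    processing_regions: list       # where it may be processed
    subprocessors: list            # third (and fourth) parties it is shared with
    retention_days: int            # how long it is kept
    used_for_model_training: bool  # whether it feeds model training

disclosure = ResidencyDisclosure(
    storage_region="eu-west-2 (London)",
    processing_regions=["eu-west-2"],
    subprocessors=["ExampleCloud Ltd (hosting)"],
    retention_days=90,
    used_for_model_training=False,
)

# Publish the disclosure in a form clients can audit and monitor over time.
print(json.dumps(asdict(disclosure), indent=2))

A record like this, kept alongside the contract, gives clients something they can verify and track over time rather than relying on assurances alone.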

A combination of transparency and residency does more than rebuild trust. From a compliance perspective, for instance, it puts providers in a stronger position.

The highly anticipated Data (Use and Access) Bill aims to make the disclosure of data sources used by AI mandatory. By refining these procedures before such laws come into force, providers can position themselves to benefit from any future policy changes.

By implementing these practices, providers give clients confidence that their data is protected against the risk of fraudulent activity. Nevertheless, providers must also ensure that this data is secure from other threats, too.

Ensuring data security

Transparency helps to build trust between organizations and their clients, but this is only a first step. Another element of maintaining trust is data security – where cybersecurity has a crucial role to play.

A combination of outdated IT infrastructure, inadequate cybersecurity funding, and the hoarding of valuable data is actively fueling many of today’s cyberattacks.

In order to show clients that unauthorized access to their data is not an option, AI providers must revamp their security systems. This includes implementing security measures like multi-factor authentication (MFA) and data encryption, which prevent illicit access to vital customer databases.
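On the encryption side, a minimal sketch is shown below, assuming the widely used open-source ‘cryptography’ Python package; a production system would fetch keys from a managed key-management service rather than generating them in application code.

# Minimal sketch: symmetric encryption of a customer record at rest.
# Requires the third-party 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()             # in practice, retrieved from a KMS/HSM, never stored with the data
cipher = Fernet(key)

record = b'{"client": "example", "email": "user@example.com"}'
encrypted = cipher.encrypt(record)      # this ciphertext is what the database stores
decrypted = cipher.decrypt(encrypted)   # only possible with access to the key

assert decrypted == record

The point is less the specific library than the design choice: if the database is breached, what leaks is ciphertext, and access to readable data then hinges on the key and on the authentication (MFA included) protecting it.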

Moreover, regularly updating and patching security systems prevents threat actors from identifying and exploiting potential vulnerabilities.

Naturally, businesses want to take advantage of AI's unparalleled capabilities to enhance operational efficiency. However, the use of AI will decline if users cannot rely on providers to protect their data – no matter the transparency of their use cases.

Building responsible AI ecosystems

Whilst the capabilities of AI evolve and become more integral to everyday business operations, the responsibilities placed on AI providers continue to rise. If they neglect their duty to keep customer data safe – whether through malpractice or external threat actors – a vital element of trust between parties will be broken.

Establishing client trust requires AI providers to significantly improve data residency and transparency, as this demonstrates a serious commitment to the highest ethical standards for both current and future clients.

It also ensures that enhanced security protocols are clearly perceived as foundational to all operations and data protection efforts. This commitment ultimately strengthens organizational trust.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
