What is a Chief AI Officer anyway…?

Che Kulhan
5 min read · May 15, 2023


Artificial Intelligence is now permeating businesses and consumer life on a global scale. Everyday users have access to remarkable applications that let them do everything from searching the web, translating between images and text, and creating presentations, to possibly even writing this very article. Programmers, meanwhile, have powerful tools at their fingertips, not only for using AI to assist in software development, but also for creating their own machine and deep learning models, courtesy of powerful and accessible cloud-based computing resources.

Image courtesy of https://s3-prod.adage.com/

According to a UBS study, ChatGPT is “estimated to have reached 100 million monthly active users in January, just two months after launch, making it the fastest-growing consumer application in history.”

Google Trends results for the web search term “Chat GPT”

What is the problem that needs to be solved by a Chief AI Officer?

With so many users within organisations experimenting with AI, myself included, companies will eventually end up with people in different departments, business areas or even countries using different applications, tools, APIs and data, sometimes under the regulation of different laws. What this requires is an AI strategy: an overall plan to identify, communicate and capitalise on the strategic opportunities that arise, while simultaneously managing the risks associated with their use.

Imagine a situation where a programmer, having access to a few interesting datasets, starts to build her own model using tools such as scikit-learn. At the same time, another programmer on a big data system begins to create similar models, using the same or even different datasets, with Apache Spark’s machine learning library. Simultaneously, some non-technical end-users are asking ChatGPT questions to solve business problems, while others have just come on board with Google’s Bard, asking similar questions and, unbeknownst to them, feeding these AI systems potentially confidential company information. Finally, these users, from the same company with offices within the European Union, are unaware of recent regulatory developments affecting AI systems.
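To make the duplication concrete, here is a minimal, purely illustrative sketch, assuming a hypothetical churn-style prediction task with made-up data and column names, of how two teams might end up building essentially the same model on different stacks: scikit-learn on one side, Spark MLlib on the other.

```python
# Team A: a data scientist working locally with scikit-learn.
# (Hypothetical data; nothing here is specific to any real organisation.)
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(1_000, 3))          # stand-ins for tenure, spend, support tickets
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in for a churn label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model_a = LogisticRegression().fit(X_train, y_train)
print("Team A accuracy:", model_a.score(X_test, y_test))

# Team B: a data engineer building the "same" model with Spark MLlib.
# (Requires a running Spark installation; shown commented out, for comparison only.)
# from pyspark.sql import SparkSession
# from pyspark.ml.classification import LogisticRegression as SparkLogReg
# from pyspark.ml.feature import VectorAssembler
#
# spark = SparkSession.builder.getOrCreate()
# df = spark.createDataFrame(
#     [(float(a), float(b), float(c), int(l)) for (a, b, c), l in zip(X, y)],
#     ["tenure", "spend", "tickets", "label"],
# )
# features = VectorAssembler(inputCols=["tenure", "spend", "tickets"],
#                            outputCol="features").transform(df)
# model_b = SparkLogReg(featuresCol="features", labelCol="label").fit(features)
```

Neither team is doing anything wrong; the point is simply that, without coordination, the organisation ends up with two overlapping models, two toolchains and two sets of assumptions about the same business problem.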

One can see from this simple example how AI has the potential to reach all areas of a business and create a melting pot of tools, needs, use cases, opportunities and risks. These opportunities and risks need to be managed with an overall vision of how AI fits within the organisation, promoting clarity and transparency and outlining a medium to long term strategic roadmap, both internally and externally.

Areas of Influence

Strategic Business Opportunities

A quick look at online AI tools through websites such as https://www.futuretools.io shows just how many applications are out there, how quickly the situation described above could become complicated, and how the problems could expand exponentially. Anyone in an organisation, especially non-technical staff, can now instantly write code, produce artwork, translate documentation, or draft presentations and emails, to name just a few examples.

futuretools.io

Given that AI is now effectively in the hands of the consumer, business opportunities exist in abundance for organisations to capitalise on. However, strategic plans need to be in place so that AI initiatives are aligned with the organisation, its employees, stakeholders and clients. Business opportunities may include enhancements to existing products and features, improvements in processes, new business services and entirely new products.

Companies with proprietary data, in-house AI development skills, domain knowledge and proven innovation systems are those that are in the best position to excel in new business opportunities.

In addition, new roles are appearing, such as the “prompt engineer” — a professional who specialises in developing, refining and optimising text prompts for AI systems, to ensure the generated outputs are accurate, engaging and relevant for various applications.

Risk Management

Risk frameworks are developed within organisations to mitigate financial losses, reputational damage, loss of trust, regulatory breaches, security breaches, fraud and so on. Allowing employees access to AI tools without a risk management framework and continuous training increases an organisation’s exposure to these risks. Furthermore, if employees or customers base their decisions on the outputs of these tools, are they aware of the risks, such as poor data quality, model bias, lack of transparency and limited explainability? These are just a few of the areas where AI can be dangerous, misused or misunderstood.

A few examples where risk management is required:

data quality — the amount of data and its quality influence the outcomes of models. Consider invalid or missing data, gaps, timeframes, etc. In addition, models need to be maintained and updated, hence the surging interest in MLOps, or machine learning operations. (A minimal sketch of the kind of data-quality check this implies follows the list below.)

model bias — the health care sector provides many examples of racial or social discrimination when applying AI, negatively impacting minority groups. For example, facial recognition software may have been trained predominantly on faces from certain ethnic backgrounds, thereby biasing the results.

ethics — imagine a situation where a restaurant app uses an AI model to suggest dishes to a diner, but the app hasn’t fully taken the diner’s allergies into consideration. How will the company respond to the negative health consequences of the AI’s advice?

regulatory frameworks — there are many highly regulated industries, such as banking, finance and insurance. However, even these industries may not yet have regulations specific to AI. At the time of writing, the European Parliament is finalising an AI Act.
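As promised above, here is a minimal, hedged sketch of the kind of automated data-quality check a CAIO might mandate before a dataset is allowed to feed a model. The column names, thresholds and pandas-based approach are illustrative assumptions, not a prescribed standard.

```python
# A tiny, illustrative data-quality gate: flag duplicate rows and columns
# with too many missing values before a dataset reaches any model.
import pandas as pd


def basic_quality_report(df: pd.DataFrame, max_missing_ratio: float = 0.05) -> dict:
    """Summarise duplicates and columns whose missing-value ratio exceeds a threshold."""
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "columns_over_missing_threshold": [
            col for col in df.columns
            if df[col].isna().mean() > max_missing_ratio
        ],
    }


if __name__ == "__main__":
    # Hypothetical sample with a duplicate row and missing values.
    sample = pd.DataFrame({
        "customer_id": [1, 2, 2, 4],
        "spend": [120.0, None, None, 95.5],
        "signup_date": ["2023-01-04", "2023-02-11", "2023-02-11", None],
    })
    print(basic_quality_report(sample))
```

In practice, checks like this would be wired into the MLOps pipeline mentioned above, and run automatically whenever data or models are updated, rather than executed by hand.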

Why is this role needed?

As can be seen, there is currently a gap in many organisations that a Chief Artificial Intelligence Officer (CAIO) is needed to fill. It is his or her responsibility to collaborate with stakeholders, users, software developers and clients to provide an overall vision and roadmap of how AI will be used within the company. Seizing strategic opportunities and managing risks are the areas of influence that stand to benefit most from such an integral role.

Author’s warning: no AI tools were used in the writing of this article ;)

Resources

The Baker McKenzie survey, Risky Business: Identifying Blind Spots in Corporate Oversight of Artificial Intelligence, March 2022. https://www.bakermckenzie.com/-/media/files/aisurveypptfinalmarch20222.pdf?sc_lang=en&hash=E7BCD28D19AF8D726596FC648F27C9F2

Prompt Engineers. https://resources.workable.com/prompt-engineer-job-description

Andrew Ng, Hiring Your First Chief AI Officer. Harvard Business Review, 2016. https://hbr.org/2016/11/hiring-your-first-chief-ai-officer

appliedAI Initiative (UnternehmerTUM), Applying AI: How to find and apply AI use cases. https://aai.frb.io/assets/files/AppliedAI_Whitepaper_UseCase_Webansicht.pdf
