
Seven questions on AI investments that shareholders and board must ask the CEO

  • July 4, 2024

Early this June, Salil Parekh, CEO, Infosys, announced that the IT major was a GenAI-first company. Shortly after, N Chandrasekaran, chairman, Tata Sons, announced TCS’s aspiration to become a GenAI-first company. These two cases highlight the pivotal shifts in corporate artificial intelligence (AI) strategies.

Entities in India spent USD1.7038 billion on AI in 2023, according to an Intel-IDC report. Banking, financial services, insurance, manufacturing, healthcare, telecommunications, and retail were the sectors with the highest spending during the year. The study projects that AI expenditure in India will grow at a compound annual growth rate of 31.5% from 2023, reaching USD5.1 billion by 2027.

Here is a question for Indian shareholders (and boards representing shareholders): How do they know all the proposed expenditure is directed towards increasing shareholder value rather than being spent on another botched IT project?

Here are seven key questions that investors and company boards must consider before approving such spending.

 

Question #1: Is it AI at all?

Investors need to be certain about the exact technology that is being applied. Many companies try to pass off conventional automation projects as AI, or overstate AI capabilities, to get projects approved. This practice is pejoratively called ‘AI washing’: old IT projects are funded under catchy AI or GenAI labels.

The board and the shareholders can focus on the following key areas:

1. Clear mention of an increase in revenue and/or a reduction in costs. How will the insights from the investment change decision-making processes? Without monetary goals, the project will have no clear direction.

2. The kind of machine-learning (ML) system that is being used. What are the data predictions, classifications, and anomalies that the AI engine is throwing up?

3. The ML model needs copious amounts of data. What data sets are being used for training the ML model?
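To make the second check concrete, here is a minimal, hypothetical sketch of one kind of output an ML system produces – flagging anomalies, in this case in daily transaction totals. The data and threshold are illustrative assumptions, not drawn from any real project.

```python
# Toy anomaly detection: flag values far from the mean (z-score rule).
# The data and the 2-sigma threshold are hypothetical illustrations.
from statistics import mean, stdev

def find_anomalies(values, threshold=2.0):
    """Return values lying more than `threshold` standard deviations from the mean."""
    mu = mean(values)
    sigma = stdev(values)
    return [v for v in values if abs(v - mu) > threshold * sigma]

daily_totals = [102, 98, 101, 97, 103, 99, 100, 540]  # 540 is the odd one out
print(find_anomalies(daily_totals))  # → [540]
```

A production system would use far richer models, but the board-level question stays the same: what predictions, classifications, or anomalies does the engine actually surface, and who acts on them?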

 

Question #2: Does the AI expenditure align with the business goals?

The key thing is to examine whether the proposed AI initiative aligns with the revenue, positioning, and market-share objectives of the company.

Around four years ago, one of India’s largest private banks faced issues with the user interface and user experience of its mobile app and website, which made it challenging for customers to navigate both platforms. Consequently, the RBI prohibited the bank from onboarding new customers. Today, the bank is using its customer relationship management (CRM) data to feed its AI-driven touchpoints across customer channels.

 

Question #3: What is the technical strategy?

The organisation should outline the technical strategy for implementing AI. One of the critical questions is that of the existing technology system, ‘stack’, if you will. Investors must ask the management how it plans to integrate AI technologies with the existing systems and infrastructure. The management must ensure that the AI infrastructure is scalable and capable of handling increased computational demands as the initiative grows. There should be clarity on steps to be taken to optimise the performance and efficiency of AI models.

 

Question #4: Does the company have the right data to train the AI system?

It is important to assess the quality and quantity of the data available for ML models. Typically, ML models deal with structured data, which necessitates the involvement of data scientists.

If the data is insufficient, the result is ‘underfitting’, which means the model is too simplistic – the machine equivalent of ‘jumping to conclusions’.

Conversely, if the model hews too closely to its training data, the result is ‘overfitting’, where the ML engine becomes too specific in its conclusions – the machine equivalent of ‘missing the woods for the trees’.
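The two failure modes can be shown with a deliberately toy sketch (all numbers hypothetical): an ‘underfit’ model that ignores the input entirely, and an ‘overfit’ model that memorises the training points and cannot generalise.

```python
# Underfitting vs overfitting on a toy dataset (roughly y = 2x with noise).
train = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]
test  = [(5, 10.1), (6, 11.9)]

train_mean = sum(y for _, y in train) / len(train)

def underfit(x):
    return train_mean                    # too simple: ignores x entirely

memory = dict(train)
def overfit(x):
    return memory.get(x, train_mean)     # memorises training points only

def mse(model, data):
    """Mean squared error of a model over a dataset."""
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

print(mse(underfit, train), mse(underfit, test))  # high error everywhere
print(mse(overfit, train), mse(overfit, test))    # perfect on train, poor on test
```

The underfit model is wrong even on the data it has seen, while the overfit model looks flawless on its training data and fails on anything new – which is why test-set performance, not training performance, is the figure to ask for.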

Therefore, it is crucial to get the right amount of the right data. Companies unable to afford in-house data scientists should consider accessing these skills externally. However, good companies committed to their objectives retain such talent in-house. This serves as a clear indicator that shareholders can monitor: they can examine the staffing pattern of their company’s AI projects.

 

Question #5: How can you convince the organisation of the “value capture” from AI?

Value capture, simply put, is how the project will increase revenues (and ultimately market value) or decrease costs significantly enough to justify the AI expenditure. Before a company gets there, a key part that the CEO and the top team must play is taking the organisation along. People fear AI, and it falls upon senior management to demonstrate how its integration will drive growth and deliver benefits. This is the only way the team will choose disruption over the status quo.

Let’s consider the following examples.

1. A consulting/advisory firm has a repository of a million documents created over its existence, including white papers, industry reports, and market data. It now aims to explore the feasibility of automating the rapid generation of fresh proposals in the FMCG (fast-moving consumer goods) sector, where it has worked with over 50 clients. The objective is to build a platform capable of creating such proposals significantly faster – potentially 6x-9x faster than it currently takes.

Imagine the effect on the topline!

2. An insurance company aims to predict consumer behaviour across several metrics. Who will not file a claim? Who will not pay the premium and let the policy lapse? Who will take car insurance from the company and switch out the following year – and can this be understood before it happens? The company has been training its ML model on data from hundreds of policyholders over decades. It seeks to predict these behaviours before they occur, enhancing its ability to optimise operations.
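As an illustration of the second example, here is a deliberately simplified lapse-risk scorer. A real insurer would learn such weights from decades of policy data; the feature names and weights below are assumptions for illustration only, standing in for a trained ML classifier.

```python
# Hypothetical sketch: score the risk that a policy lapses. The features
# and hand-set weights are illustrative stand-ins for a trained model.
def lapse_risk(missed_payments, years_as_customer, claims_filed):
    score = 0.4 * missed_payments - 0.1 * years_as_customer + 0.2 * claims_filed
    return max(0.0, min(1.0, score))     # clamp to a 0..1 risk score

# A long-standing customer with no missed payments scores low...
loyal = lapse_risk(missed_payments=0, years_as_customer=8, claims_filed=1)
# ...while a new customer with repeated missed payments scores high.
risky = lapse_risk(missed_payments=2, years_as_customer=1, claims_filed=0)
print(loyal, risky)
```

The board-level value-capture question attaches directly to such a score: what retention action does a high score trigger, and what is that action worth in saved premiums?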

 

Question #6: What data-governance framework does the company have in place?

What measures are being taken to ensure the quality, integrity, and security of the data used for training AI models? How do you plan to address concerns related to data privacy and compliance with relevant regulations (e.g., GDPR, CCPA)?

Data privacy: When a company uses OpenAI’s platform today, it interacts with a cloud-hosted service through API calls. This means the company’s data and queries travel over the internet to the platform, so there are data privacy risks inherent in this design.
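One common mitigation for this risk – shown here as a hedged sketch, not a complete solution – is to redact obvious personally identifiable information before any text leaves the company’s network. The two patterns below (email address, 10-digit Indian mobile number) are illustrative assumptions; real deployments use far broader PII detection.

```python
# Minimal PII redaction before text is sent to an external API.
# The regex patterns are illustrative, not exhaustive.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b[6-9]\d{9}\b"),   # 10-digit Indian mobile number
}

def redact(text):
    """Replace matched PII with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

query = "Customer ravi@example.com (mobile 9876543210) asked about his premium."
print(redact(query))
# → Customer <EMAIL> (mobile <PHONE>) asked about his premium.
```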

Ethical considerations: What are the objectives, and how can they be balanced against the human cost of achieving them? Suppose a company, through its AI system, is planning to downsize its MIS (management information systems) team by more than 50% over the next three to four quarters. How would the management handle this situation, keeping in mind the importance of ‘taking the organisation along’ mentioned earlier?

Cyber security landscape: How do you deal with new-generation threats, such as those targeting AI infrastructure? There are instances of ‘model poisoning’, where the training data or the model itself is sabotaged and its inferences are corrupted – which, obviously, leads to wrong decisions.

 

Question #7: How can algorithmic transparency be maintained, and bias avoided?

Large language models (LLMs), such as Google Gemini, have been accused of gender bias in their output. Each company setting up AI platforms will have to be cautious about this aspect. How does the company plan to ensure transparency and mitigate bias in its AI systems? What mechanisms will be in place to monitor and address any biases that arise in AI algorithms? Remember, there is a risk of huge reputational damage if this is mishandled. A company cannot be too cautious when addressing this aspect of its AI systems.
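One simple monitoring mechanism, sketched here with hypothetical data, is to track the gap in positive-outcome rates between groups (sometimes called the demographic parity difference): a persistent gap is a signal to investigate, not proof of bias by itself.

```python
# Demographic parity difference: gap in positive-outcome rates between
# two groups. All decision data below is hypothetical.
def positive_rate(decisions):
    return sum(decisions) / len(decisions)

def parity_gap(group_a, group_b):
    """Absolute difference in positive-outcome rates (0.0 means parity)."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# 1 = application approved, 0 = rejected, per applicant in each group
approvals_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% approved
approvals_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% approved

print(parity_gap(approvals_a, approvals_b))  # → 0.375, wide enough to warrant review
```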

 

SOURCE : ET
