
Banking Giants Warn of New AI Risks

Goldman Sachs, Citigroup, JPMorgan Chase, and other Wall Street firms have warned investors about new risks associated with scaling up their use of artificial intelligence, including software hallucinations, employee-morale issues, the use of AI by cybercriminals, and the impact of changing laws around the world.


Machine intelligence is a source of tremendous power: artificial intelligence is the most advanced technology of the current stage of humanity's material and digital evolution. Like most technologies, AI can serve constructive goals but can also be turned to destructive ends. The fact that cybercriminals leverage machine intelligence in no way negates its significant beneficial potential for humanity, but it does signal the need to respond to threats and actively counter the relevant sources of danger.

The annual reports of financial institutions now mention new dangers associated with artificial intelligence, with banks paying special attention to flawed or unreliable AI models. The reports also cite increased competition and new rules that limit the space for using artificial intelligence.

JPMorgan Chase said machine intelligence could cause workforce displacement, a process that could hurt employee morale and retention. The bank's report also highlighted that this tendency may intensify competition for hiring employees with the necessary technological skills.

Over the past two years, financial institutions have repeatedly warned that risks are intensifying as artificial intelligence develops and its use expands. Cybercriminals have seized on machine intelligence as a source of new opportunities for their activities. At the same time, the use of artificial intelligence is scaling up across the financial sector, where companies either build their own software or adopt products from third-party developers. Against this background, new security concerns are taking shape.

Banks' annual reports often state that financial institutions that fail to keep up with the latest developments in the artificial intelligence industry face a realistic risk of losing business and customers. Media reports also point to a recent tendency among consumers of financial services to pay increasing attention, when choosing a provider, to the measures firms take to ensure security.

At the same time, despite the various risks, rejecting artificial intelligence at the present stage of technological evolution would amount to a kind of mental atavism with negative practical implications. AI has already proven its effectiveness as a working tool in several industries: it can process huge amounts of information and generate original content. Some forecasts suggest that at some point in its development, digital intelligence will surpass the human mind in cognitive ability, and, in a loosely philosophical sense, machine intelligence may become something like a new form of thinking with a higher level of capabilities. In the banking sector, artificial intelligence can perform many tasks, especially automating routine processes and working with data. Moreover, AI can itself be used as a tool to detect and prevent the activity of cybercriminals who leverage digital intelligence.

This potential is still no reason to forget the significant harm the risks can cause. Banks' annual reports mention that scaling up the global use of artificial intelligence increases the risk not only of cyberattacks but also of misuse.

Ben Shorten, Accenture Plc's lead for finance, risk, and compliance for banking and capital markets in North America, told media representatives that having the right governance mechanisms in place to ensure machine intelligence is deployed in a way that is safe, fair, and secure simply cannot be overlooked. He also noted that this is not a plug-and-play technology.

Currently, there is a risk that banks, as part of their operations, may use technologies built on outdated, biased, or inaccurate sets of financial data. JPMorgan Chase's annual report noted that developing and maintaining artificial intelligence models that ensure the highest level of data quality is fraught with dangers.
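The data-quality risk the reports describe can be made concrete with a small sketch. The following is a hypothetical pre-training screen for exactly the flaws named above: stale, incomplete, or out-of-range records. The field names, cutoff date, and rate bounds are illustrative assumptions, not any bank's actual pipeline.

```python
from datetime import date

# Illustrative thresholds (assumptions, not real policy values)
STALENESS_CUTOFF = date(2023, 1, 1)   # records dated earlier count as "outdated"
VALID_RATE_RANGE = (0.0, 0.25)        # plausible bounds for an interest rate

def audit_record(record: dict) -> list[str]:
    """Return a list of data-quality issues found in one record."""
    issues = []
    as_of = record.get("as_of")
    if as_of is None or as_of < STALENESS_CUTOFF:
        issues.append("stale_or_missing_date")
    rate = record.get("rate")
    if rate is None:
        issues.append("missing_rate")
    elif not VALID_RATE_RANGE[0] <= rate <= VALID_RATE_RANGE[1]:
        issues.append("rate_out_of_range")
    return issues

def audit_dataset(records: list[dict]) -> dict[int, list[str]]:
    """Map record index -> issues, keeping only flawed records."""
    report = {}
    for i, rec in enumerate(records):
        issues = audit_record(rec)
        if issues:
            report[i] = issues
    return report
```

A screen of this kind only catches mechanical defects; it does not address subtler bias in how the data was collected, which is part of why the reports treat data quality as an ongoing risk rather than a solved problem.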

Citigroup said that as it rolls out generative machine intelligence in select parts of the bank, there is a risk of ineffective, inadequate, or faulty results being produced for its analysts. The data may also be incomplete, biased, or inaccurate, which could harm its reputation, customers, clients, business, or results of operations and financial condition. The corresponding statement is contained in Citigroup's report for 2024.

Goldman Sachs's latest annual report noted that while the financial institution has increased its investment in digital assets, blockchain, and artificial intelligence, growing competition creates the risk of failing to integrate AI technologies quickly enough to boost productivity, reduce costs, and provide customers with better transactions, products, and services. The report also highlighted that this could affect the bank's ability to gain and retain clients.

Ben Shorten stated that financial companies face the challenge of maintaining data privacy and regulatory compliance in an environment that is less certain and evolving rapidly. Last year, the Artificial Intelligence Act came into force in the European Union, formulating rules for the use of machine intelligence systems in a region where many US financial institutions operate. According to Mr. Shorten, the act establishes rules for placing on the market, putting into service, and using a wide range of artificial intelligence systems in the European Union, while the outlook for the United States and the US market is less clear.

Financial institutions now commonly use a combination of their own artificial intelligence tools and tools acquired from outside providers. Citigroup is rolling out a suite of tools that can synthesize key information from public filings. AI @ Morgan Stanley Debrief performs rote tasks with a ChatGPT-like interface.

Goldman Sachs's private-wealth unit uses artificial intelligence to evaluate portfolios and analyze dozens of underlying positions, the bank's chief information officer Marco Argenti told media representatives. He also stated that it is important to take a responsible approach and apply controls to protect against potential inaccuracies and hallucinations.

JPMorgan Chase chief executive officer Jamie Dimon said artificial intelligence is perhaps the biggest issue the financial institution he heads is grappling with. In his annual letter to shareholders, Mr. Dimon compared the potential impact of machine intelligence to that of the steam engine. The head of JPMorgan Chase, the largest US financial institution by assets, also noted that artificial intelligence could augment virtually every job.

When it comes to commenting on the risks of scaling up the use of machine intelligence, banks limit themselves to the statements contained in their annual reports. According to journalists who sought substantive conversations on the topic with banks' employees, representatives of the financial institutions declined to comment in more detail.

As financial institutions increasingly use artificial intelligence, cybercriminals are also expanding their use of AI, which makes their activities more sophisticated. One tool for countering this threat is personal awareness: an Internet search query such as "how to know if my camera is hacked" will show anyone the signs of unauthorized access to a device. Digital literacy is an effective tool against cybercrime, but such knowledge needs to be updated periodically, because cybercriminals seek to use advanced technologies that are in a state of permanent development. The main goal of these criminals is to steal victims' money, though they sometimes also aim at publicly discrediting individuals. In such activities, artificial intelligence is a tool that raises their operational efficiency.

The results of a global Accenture survey of 600 cybersecurity executives in the banking sector indicate that security teams are struggling to keep up with their organizations' AI adoption efforts. Also, 80% of respondents said they believe generative artificial intelligence is empowering criminals faster than banks can respond.

Morgan Stanley stated in its latest annual report that generative machine intelligence, remote work, and the integration of third-party technologies could pose a threat to data privacy. Ben Shorten noted that the risks of leveraging artificial intelligence while working from home will require companies to set up rules to avoid problems, and that these measures are only going to grow in criticality. He stated that attackers are being enabled by AI technology faster than financial institutions are able to respond.

Last year, artificial intelligence was used in cyberattacks involving ransomware, zero-day exploits, and supply chain attacks.

Michael Shearer, chief solutions officer at Hawk, told media representatives that countering threats in the cyber environment is an adversarial game: criminals seek to make money, and the business community needs to curtail their activities. He stated that, as a result of recent changes, both sides of this contest are now armed with genuinely impressive technologies.

In a certain sense, the contest between cybercriminals and their opponents is a race: criminals strive to adopt advanced technological solutions as quickly as possible, while banks and companies specializing in security tools for the virtual space try either to get ahead of illegal efforts or at least not to react too late. In all likelihood, this race will continue in the long run. That does not mean consumers are hopelessly vulnerable, but it is a signal that the fight is not over and still requires significant effort from those who protect the safety of funds and confidential information.

The actions of cybercriminals nonetheless face counteraction. Many companies are developing solutions aimed at detecting and suppressing destructive activity in the virtual space. For example, Amazon Web Services (AWS) is working to combat artificial intelligence hallucinations using automated reasoning, a method rooted in centuries-old principles of logic. AWS Director of Product Management Mike Miller told media representatives that the technique is a major leap in making artificial intelligence outputs more reliable, noting that this is particularly valuable for highly regulated industries such as finance and healthcare.
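The core idea behind automated reasoning can be sketched in a few lines: instead of trusting a model's answer outright, the answer is checked against explicit, hand-written logical rules, and any violation is surfaced. The rules and field names below are invented for illustration; this is a minimal sketch of the general approach, not AWS's actual implementation.

```python
# Hypothetical policy rules a model's claimed answer must satisfy.
# Each rule pairs a human-readable description with a predicate.
RULES = [
    ("loan amount must be positive",
     lambda claim: claim["amount"] > 0),
    ("rate must not exceed the illustrative cap of 10%",
     lambda claim: claim["rate"] <= 0.10),
    ("term in months must be one of the offered products",
     lambda claim: claim["term_months"] in {12, 24, 36, 60}),
]

def verify_claim(claim: dict) -> list[str]:
    """Return the description of every rule the model's claim violates.

    An empty list means the claim is consistent with all stated rules;
    a non-empty list flags a possible hallucination for human review.
    """
    return [desc for desc, rule in RULES if not rule(claim)]
```

The value of the approach is that a rule violation is a definite, explainable finding, which suits regulated settings where "the model said so" is not an acceptable justification.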

Cybercrime has ceased to be a phenomenon of limited scale. It is now a global threat that generates greater risks every year, not only for banking sectors but for the economic systems of entire countries. Cybersecurity Ventures projected that global cybercrime damage would reach $9.5 trillion in 2024, and experts predict the indicator will keep growing in the coming years.

Serhii Mikhailov


Serhii's track record of study and work spans six years at the Faculty of Philology and eight years in the media, during which he has developed a deep understanding of various aspects of the industry and honed his writing skills. His areas of expertise include fintech, payments, cryptocurrency, and financial services. He constantly keeps a close eye on the latest developments and innovations in these fields, believing they will have a significant impact on the future direction of the economy as a whole.