Federal Law "On Personal Data", or industry initiatives, such as the AI ​​code of ethics from the AI ​​Alliance.

tanjimajuha20
Posts: 464
Joined: Thu Jan 02, 2025 7:52 am

Federal Law "On Personal Data", or industry initiatives, such as the AI ​​code of ethics from the AI ​​Alliance.

Post by tanjimajuha20 »

The FinTech Association (AFT) surveyed its members on the most appropriate level of AI regulation for the Russian financial market. The data show that 14% of fintech companies are categorically against any form of regulation, and these are the market leaders. Maxim Grigoriev explains their position by the concern that regulation would become a factor holding back the development of AI in Russia.

“The more mature companies are in working with AI, the less they support initiatives to regulate AI,” noted analysts from the FinTech Association.

According to the study, the most popular approach to regulating AI was a moderate one: 26% of survey participants voted for it. Another 23% chose an approach that involves developing special regulation, 20% of respondents favored an advisory format, and 17% spoke in favor of expanded special regulation of AI-based solutions in the financial market.

The study covered 75% of the top 20 largest banks in the Russian Federation: the FinTech Association conducted 45 in-depth interviews and examined more than 100 cases of AI application in fintech. Representatives of Sber, Alfa-Bank, Tinkoff Bank, and Otkritie Bank declined to comment further on the results of the study when approached by a ComNews correspondent.

Grigory Gryaznov, head of the Analytical Services division at DOM.RF JSC, reported that under a trilateral agreement with Yandex, arranged on the FinTech Association's platform, a pilot project is under way to introduce YandexGPT for internal tasks. The project runs in a so-called sandbox, where the service's capabilities can be tested on individual requests. However, according to Grigory Gryaznov, given the current level of regulation in the AI sphere, not all of the company's data can be transferred outside it to third-party services, including for testing AI tools.

"Dom.RF" consists of two blocks - a development institute in the housing sector and a large mortgage and retail bank, so we have a large volume of data on both 3,800 developers and mortgage clients. We also securitize all mortgage transactions. The data array is large, and this is what is needed to develop neural networks. But we cannot give all this data to third-party companies, so we are trying to develop AI products within the company," added the representative of the press service of "Dom.RF".

Marina Dorokhova, project manager at Yakov & Partners LLC (a strategic consulting company), also noted that fintech companies may face difficulties using data, specifically because of the need to mask it during any interaction with an AI model: training, transmitting prompts, and receiving results. According to her, this limits the ability to exchange data with AI technology providers and potentially affects the quality of the output produced by the AI.

According to Marina Dorokhova, this problem could be solved by accredited companies offering tools for masking and encrypting confidential data that can be quickly integrated into generative AI systems.
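To make the masking idea concrete, below is a minimal illustrative sketch of how confidential values could be replaced with placeholders before a prompt leaves a bank's perimeter and then restored in the model's reply. This is not any vendor's actual tooling: the regular expressions, the send_to_external_llm stub, and all names are assumptions for demonstration only.

```python
import re

# Illustrative patterns for confidential values (assumptions, not a complete PII list):
ACCOUNT_RE = re.compile(r"\b\d{20}\b")          # Russian bank account numbers are 20 digits
PASSPORT_RE = re.compile(r"\b\d{4}\s?\d{6}\b")  # internal passport: 4-digit series + 6-digit number

def mask(text: str) -> tuple[str, dict[str, str]]:
    """Replace confidential values with placeholder tokens and remember the mapping."""
    mapping: dict[str, str] = {}

    def substitute(pattern: re.Pattern, label: str, s: str) -> str:
        def repl(m: re.Match) -> str:
            token = f"<{label}_{len(mapping)}>"
            mapping[token] = m.group(0)
            return token
        return pattern.sub(repl, s)

    text = substitute(ACCOUNT_RE, "ACCOUNT", text)
    text = substitute(PASSPORT_RE, "PASSPORT", text)
    return text, mapping

def unmask(text: str, mapping: dict[str, str]) -> str:
    """Restore the original values in the model's answer, inside the perimeter."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

def send_to_external_llm(prompt: str) -> str:
    """Hypothetical third-party generative AI call; echoes the prompt for demonstration."""
    return f"Summary of request: {prompt}"

if __name__ == "__main__":
    raw = "Client passport 4509 123456, account 40817810099910004312, asks to refinance the mortgage."
    safe_prompt, mapping = mask(raw)
    answer = send_to_external_llm(safe_prompt)  # only masked text leaves the company
    print(unmask(answer, mapping))              # placeholders restored locally
```

The point of such a design is that only placeholder tokens ever reach the external provider, while the mapping needed to restore the original values never leaves the company.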

Marina Dorokhova believes that creating a regulatory framework is necessary to respect the rights of fintech clients and to improve the quality of models, and that at the current stage it is important to strike a balance between regulation and creating conditions for the development of AI. "It is possible to use elements of 'soft regulation': for example, general requirements for compliance with laws and rights, continuous improvement of models, and labeling of generative AI output; industry-level measures and requirements for the use and quality of AI models applied in credit scoring and brokerage operations; and individual requirements for working with banking secrecy," she added.

Olga Makhova, Director of Change, Innovation and Data Management at Rosbank (part of the Interros Group), believes that regulation in the field of AI is needed first and foremost to ensure security, since the spread of AI creates potential threats such as cyberattacks and misuse of the technology. According to her, it is also important to settle questions of liability when autonomous AI systems make decisions.

Alexander Krushinsky, director of the voice digital technologies department at BSS, believes, on the contrary, that there is no point in regulating the AI market: at the moment the risks of AI are not obvious, and once regulation begins there will inevitably be a temptation to take the easiest path and introduce prohibitive measures against the risks that seem obvious from today's vantage point.

"Restrictive measures hinder the development of the industry. The same attempt to regulate the collection of biometric data, despite its undoubted benefit from the point of view of data leakage security, has definitely slowed down the use of biometric voice technologies for several years," he adde
Post Reply