An overwhelming 91% of financial services industry (FSI) companies are either assessing artificial intelligence or already using it as a tool to drive innovation, improve operational efficiency and enhance customer experiences.
Generative AI — powered by NVIDIA NIM microservices and accelerated computing — can help organizations improve portfolio optimization, fraud detection, customer service and risk management.
Among the companies harnessing these technologies to boost financial services applications are Ntropy, Contextual AI and NayaOne — all members of the NVIDIA Inception program for cutting-edge startups.
And Silicon Valley-based startup Securiti, which offers a centralized, intelligent platform for the safe use of data and generative AI, is using NVIDIA NIM to build an AI-powered copilot for financial services.
At Money20/20, a leading fintech conference running this week in Las Vegas, the companies will demonstrate how their technologies can turn disparate, often complex FSI data into actionable insights and advanced innovation opportunities for banks, fintechs, payment providers and other organizations.
New York-based Ntropy is helping remove various states of entropy — disorder, randomness or uncertainty — from financial services workflows.
“Whenever money is moved from point A to point B, text is left in bank statements, PDF receipts and other forms of transaction history,” said Naré Vardanyan, cofounder and CEO of Ntropy. “Traditionally, that unstructured data has been very hard to clean up and use for financial applications.”
The company’s transaction enrichment application programming interface (API) standardizes financial data from across different sources and geographies, acting as a common language for financial services applications. It can interpret any transaction with humanlike accuracy in just milliseconds, at 10,000x lower cost than traditional methods.
It’s built on the Llama 3 NVIDIA NIM microservice and NVIDIA Triton Inference Server running on NVIDIA H100 Tensor Core GPUs. Using the Llama 3 NIM microservice, Ntropy achieved up to 20x better utilization and throughput for its large language models (LLMs) compared with running the native models.
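As a rough illustration of the pattern (a language model turning a raw transaction string into structured fields), the sketch below calls a locally hosted Llama 3 NIM microservice through its OpenAI-compatible endpoint. The endpoint address, prompt and output fields are illustrative assumptions, not Ntropy’s production API.

```python
# Minimal sketch: enrich a raw bank transaction string with a Llama 3 NIM microservice.
# Assumes a NIM container running locally and exposing its OpenAI-compatible endpoint
# at http://localhost:8000/v1; the prompt and JSON fields are illustrative, not Ntropy's API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")

raw_transaction = "POS DEBIT 4829 AMZN MKTP US*2Y4T87 SEATTLE WA"

response = client.chat.completions.create(
    model="meta/llama3-8b-instruct",  # model name as served by the NIM container
    messages=[
        {
            "role": "system",
            "content": "Extract merchant, category and location from the raw "
                       "bank transaction string. Reply as compact JSON.",
        },
        {"role": "user", "content": raw_transaction},
    ],
    temperature=0.0,
)

print(response.choices[0].message.content)
# e.g. {"merchant": "Amazon Marketplace", "category": "Online retail", "location": "Seattle, WA"}
```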
Airbase, a leading procure-to-pay software platform provider, boosts transaction authorization processes using LLMs and the Ntropy data enricher.
At Money20/20, Ntropy will discuss how its API can be used to clean up customers’ merchant data, which boosts fraud detection by improving the accuracy of risk-detection models. This in turn reduces both false transaction declines and revenue loss.
Another demo will highlight how an automated loan agent taps into the Ntropy API to analyze information on a bank’s website and generate a relevant investment report, speeding loan disbursement and decision-making for users.
Contextual AI — based in Mountain View, California — offers a production-grade AI platform, powered by retrieval-augmented generation (RAG) and ideal for building enterprise AI applications in knowledge-intensive FSI use cases.
“RAG is the answer to delivering enterprise AI into production,” said Douwe Kiela, CEO and cofounder of Contextual AI. “Tapping into NVIDIA technologies and large language models, the Contextual AI RAG 2.0 platform can bring accurate, auditable AI to FSI enterprises looking to optimize operations and offer new generative AI-powered products.”
The Contextual AI platform integrates the entire RAG pipeline — including extraction, retrieval, reranking and generation — into a single optimized system that can be deployed in minutes, and further tuned and specialized based on customer needs, delivering much greater accuracy in context-dependent tasks.
HSBC plans to use Contextual AI to provide research insights and process guidance support through retrieving and synthesizing relevant market outlooks, financial news and operational documents. Other financial organizations are also harnessing Contextual AI’s pre-built applications, including for financial analysis, policy-compliance report generation, financial advice query resolution and more.
For example, a user could ask, “What’s our forecast for central bank rates by Q4 2025?” The Contextual AI platform would provide a brief explanation and an accurate answer grounded in factual documents, including citations to specific sections in the source.
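The sketch below shows the general shape of such a grounded question-answering flow: retrieve candidate passages, then generate an answer that cites them. It is a toy illustration under assumed names and endpoints, not Contextual AI’s RAG 2.0 API, and the reranking step is folded into the toy retriever for brevity.

```python
# Illustrative retrieval-augmented generation (RAG) flow: retrieve passages, then
# generate an answer grounded in them with citations. The documents, retriever and
# LLM endpoint are toy stand-ins, not Contextual AI's platform.
from openai import OpenAI

documents = [
    {"id": "outlook-2025.pdf#s3", "text": "We expect central bank rates to ease toward 3.5% by Q4 2025."},
    {"id": "fx-note.pdf#s1", "text": "Currency volatility remains elevated across emerging markets."},
]

def retrieve(query: str, docs: list[dict], k: int = 2) -> list[dict]:
    """Toy lexical retriever: rank documents by word overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(terms & set(d["text"].lower().split())))
    return scored[:k]

def generate(query: str, passages: list[dict]) -> str:
    """Ask the LLM to answer only from the retrieved passages, citing their IDs."""
    context = "\n".join(f"[{p['id']}] {p['text']}" for p in passages)
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")
    response = client.chat.completions.create(
        model="meta/llama3-8b-instruct",
        messages=[
            {"role": "system", "content": "Answer using only the provided passages and cite their IDs."},
            {"role": "user", "content": f"Passages:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return response.choices[0].message.content

question = "What's our forecast for central bank rates by Q4 2025?"
print(generate(question, retrieve(question, documents)))
```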
Contextual AI uses NVIDIA Triton Inference Server and the open-source NVIDIA TensorRT-LLM library for accelerating and optimizing LLM inference performance.
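For reference, a minimal Triton HTTP client call looks roughly like the following. The model name and tensor names are placeholders that depend on the deployed model configuration; this is a generic sketch rather than Contextual AI’s setup.

```python
# Generic sketch of querying a model hosted on NVIDIA Triton Inference Server over HTTP.
# "llm_ensemble", "text_input" and "text_output" are placeholder names; actual names
# depend on how the model repository is configured.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# String tensors are passed as object-dtype NumPy arrays with the BYTES datatype.
prompt = np.array([["Summarize the latest central bank rate outlook."]], dtype=object)
text_input = httpclient.InferInput("text_input", [1, 1], "BYTES")
text_input.set_data_from_numpy(prompt)

result = client.infer(model_name="llm_ensemble", inputs=[text_input])
print(result.as_numpy("text_output"))
```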
London-based NayaOne offers an AI sandbox that allows customers to securely test and validate AI applications prior to commercial deployment. Its technology platform gives financial institutions the ability to create synthetic data and access a marketplace of hundreds of fintechs.
Customers can use the digital sandbox to benchmark applications for fairness, transparency, accuracy and other compliance measures and to better ensure top performance and successful integration.
“The demand for AI-driven solutions in financial services is accelerating, and our collaboration with NVIDIA allows institutions to harness the power of generative AI in a controlled, secure environment,” said Karan Jain, CEO of NayaOne. “We’re creating an ecosystem where financial institutions can prototype faster and more effectively, leading to real business transformation and growth initiatives.”
Using NVIDIA NIM microservices, NayaOne’s AI Sandbox lets customers explore and experiment with optimized AI models and take them to deployment more easily. With NVIDIA accelerated computing, NayaOne achieves up to 10x faster processing for the large datasets used in its fraud detection models, at up to 40% lower infrastructure costs compared with CPU-based processing.
The digital sandbox also uses the open-source NVIDIA RAPIDS set of data science and AI libraries to accelerate fraud detection and prevention capabilities in money movement applications. The company will demonstrate its digital sandbox at the NVIDIA AI Pavilion at Money20/20.
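As a simplified illustration of the kind of workflow RAPIDS accelerates, the sketch below trains a GPU-based classifier on synthetic transaction features with cuDF and cuML. The data and features are placeholders, not NayaOne’s models.

```python
# Toy sketch of GPU-accelerated fraud scoring with RAPIDS (cuDF + cuML).
# The features and records are synthetic placeholders used only to show the workflow.
import cudf
from cuml.ensemble import RandomForestClassifier

# Synthetic transaction features: amount, merchant risk score and a fraud label.
transactions = cudf.DataFrame({
    "amount": [12.5, 980.0, 45.3, 2300.0, 19.99, 750.0],
    "merchant_risk": [0.1, 0.8, 0.2, 0.9, 0.1, 0.7],
    "is_fraud": [0, 1, 0, 1, 0, 1],
})

X = transactions[["amount", "merchant_risk"]].astype("float32")
y = transactions["is_fraud"].astype("int32")

# Train a GPU random forest and score new transactions entirely on the GPU.
model = RandomForestClassifier(n_estimators=50, max_depth=8)
model.fit(X, y)

new_txns = cudf.DataFrame({"amount": [15.0, 1999.0], "merchant_risk": [0.15, 0.85]}).astype("float32")
print(model.predict(new_txns))
```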
Securiti’s flexible Data+AI platform powers a broad range of generative AI applications, including enterprise AI copilots and LLM training and tuning, and lets users build safe, end-to-end enterprise AI systems.
The company is now building an NVIDIA NIM-powered financial planning assistant. The copilot chatbot accesses diverse financial data while adhering to privacy and entitlement policies to provide context-aware responses to users’ finance-related questions.
“Banks struggle to provide personalized financial advice at scale while maintaining data security, privacy and compliance with regulations,” said Jack Berkowitz, chief data officer at Securiti. “With robust data protection and role-based access for secure, scalable support, Securiti helps build safe AI copilots that offer personalized financial advice tailored to individual goals.”
The chatbot retrieves data from a variety of sources, such as earnings transcripts, client profiles, account balances and investment research documents. Securiti’s solution safely ingests and prepares the data for use with high-performance, NVIDIA-powered LLMs, preserving controls such as access entitlements, and then delivers customized responses to users through a simple consumer interface.
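The core pattern, retrieval filtered by a user’s entitlements before anything reaches the LLM, can be sketched as follows. The roles, documents and endpoint are hypothetical placeholders rather than Securiti’s implementation.

```python
# Illustrative sketch of entitlement-aware retrieval for a financial copilot:
# documents are filtered by the requesting user's role before any of them reach the LLM.
# Roles, documents and the endpoint are hypothetical placeholders.
from openai import OpenAI

DOCUMENTS = [
    {"text": "Q3 earnings call transcript ...", "allowed_roles": {"advisor", "analyst"}},
    {"text": "Client A portfolio balances ...", "allowed_roles": {"advisor"}},
    {"text": "Public investment research note ...", "allowed_roles": {"advisor", "analyst", "client"}},
]

def entitled_context(role: str) -> str:
    """Return only the documents the given role is entitled to see."""
    return "\n".join(d["text"] for d in DOCUMENTS if role in d["allowed_roles"])

def answer(question: str, role: str) -> str:
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")
    response = client.chat.completions.create(
        model="meta/llama3-70b-instruct",  # e.g. a Llama 3 70B-Instruct NIM endpoint
        messages=[
            {"role": "system", "content": "Answer only from the provided context."},
            {"role": "user", "content": f"Context:\n{entitled_context(role)}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer("How is Client A's portfolio positioned?", role="advisor"))
```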
Using the Llama 3 70B-Instruct NIM microservice, Securiti optimized the performance of the LLM, while ensuring the safe use of data. The company will demonstrate its generative AI solution at Money20/20.
NIM microservices and Triton Inference Server are available through the NVIDIA AI Enterprise software platform.
Learn more about AI for financial services by joining NVIDIA at Money20/20, running through Wednesday, Oct. 30.
Explore a new NVIDIA AI workflow for fraud detection.