
Keep up with the FSI giants: 4 steps to fast, modular AI experimentation for smaller FSIs

Smaller FSIs have been at a disadvantage against larger peers who can build complex AI models in-house. With four simple steps, they can quickly deploy safe and secure AI environments from existing off-the-shelf models, while creating value from their own data.


It’s time to turn the tables on the financial services industry’s AI leaders.

For years, smaller FSI companies have had to sit back and watch as the industry giants flexed their muscles and invested in building large, complex in-house AI models.

But as AI has matured, the landscape is changing. AI technologies are now available out of the box. And they’re enabling smaller and slower-moving firms to jump the line, skip difficult model training projects, and start experimenting with business use cases for AI right now.

Today, smaller FSI firms—including regional banks and hedge funds, as well as fintech, insurance and capital markets companies—can join a level playing field for AI innovation.

By following these four simple steps, you can quickly deploy safe and secure AI environments that create value from your own proprietary data using the latest open-source models.

And the only hardware and software you need are commercially available solutions from vendors like NVIDIA and HPE.

Step 1: A little preparation

Your AI projects can move faster if you leverage your existing data and other resources. So a good place to start is with an assessment of your data, production systems, available data center capacity, and AI experimentation team.

You can use this assessment to produce your business and technical requirements and assess pre-built AI solutions.

Here are some of the questions you should be asking:

  • What are the biggest datasets you’ll use with AI? Can your AI compute access this data easily in your data center, private cloud, or public cloud?
  • Do you want to own your models and the content they create?
  • Do you need your data and user activity to stay private?
  • Which kinds of AI models do you want to use? It’s helpful to have access to the latest models without getting bogged down by security re-verifications.
  • Do you want to customize your models to fit your business problems, or even license them to others in your industry? Some open-source models allow you to do this.
  • Who will be running your AI experiments? Will they be managed and funded centrally, or by individual business units?
  • Which stage of the AI journey are you at now, and where do you want to be in one, two, and five years’ time?

You can get help answering all of these questions, and addressing the technical aspects of your AI plans, with complimentary workshops and advice from HPE.

Step 2: Select your "MVP" AI infrastructure and software stack

Once you’ve discovered your requirements, you’re ready to choose the infrastructure, software stack, and AI models that can be the “most valuable players” in your AI experiments.

Get everything in one scalable solution

Big FSI institutions that adopted AI early had to deploy their solutions the hard way: testing components one by one, troubleshooting complex applications, and applying updates manually. Hundreds of promising proofs of concept (POCs) never reached production because they got stuck at various stages of development.

But it’s now possible to get all of the hardware, key software components, and foundation models you need in an easy-to-use package. Look for solutions with components that are modular and widely compatible, so you’re ready to scale and adapt your applications. Avoid becoming locked into proprietary technologies that can limit your options in the future.

Centralize for the fastest time to production

To reach production faster, best practice is to centralize deployment and support with a few strategic vendors who can cover your entire AI platform.

NVIDIA AI Computing by HPE builds in components that give enterprises of all sizes a fast, flexible, simple path to deployment:

  • Pre-built containers called NIMs (NVIDIA inference microservices) package a leading open-source model, such as Meta’s Llama 3, together with all of the supporting components needed to deploy the model as a microservice. They’re optimized for specific hardware accelerators, and you can deploy entire environments ready for use within minutes (the sketch after this list shows how an application might call one).
  • You can deploy NIMs in as few as three clicks with HPE Machine Learning Inference Software (MLIS); for example, those three clicks take you from an open-source model to an enterprise-grade chatbot. In the AI lifecycle, model deployment is ultimately where you realize value.
  • HPE and NVIDIA co-developed the Private Cloud AI portfolio, a curated stack of AI and data infrastructure and software tools that helps clients privately deploy and serve very large models, such as LLMs, to their users while maintaining compliance, efficiency, and scalability.
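
As an illustration, here is a minimal sketch of how an application might call a deployed NIM, which exposes an OpenAI-compatible API. The hostname, port, model name, and prompt are illustrative assumptions, not details of any specific HPE or NVIDIA deployment:

```python
# Minimal sketch: query a locally deployed NIM through its
# OpenAI-compatible API. The hostname, port, and model name are
# illustrative; match them to your own deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # local NIM endpoint (assumed)
    api_key="not-used",  # local deployments may not need a real key
)

response = client.chat.completions.create(
    model="meta/llama3-8b-instruct",  # the model this NIM serves (assumed)
    messages=[
        {"role": "system", "content": "You are a helpful banking assistant."},
        {"role": "user", "content": "Summarize the key risks in this loan application: ..."},
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```

Because the interface is OpenAI-compatible, applications written against it can later swap in bigger or newer models without code changes.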

Decide what you are willing and able to do yourself, and rely on trusted experts and partners to deliver the rest.

Step 3: Iterate and expand

With your platform and models deployed, it’s time to learn about prompt engineering and to let your AI team and subject matter experts go to work.

Experiment to find valuable use cases

Start by experimenting to find the most valuable use cases for your business. Public-facing AI apps may generate the most attention, but the use cases with the fastest business impact are often employee-facing ones in the middle and back offices. These might include automating repetitive tasks so employees can focus on duties that add the most value.

Try a wide variety of use cases at a shallow level. Then, continue developing the most promising and successful experiments.
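
For instance, a first middle- or back-office experiment might be as simple as routing incoming customer messages to the right team. This sketch reuses the locally hosted model from the example above; the endpoint, model name, and routing categories are illustrative assumptions:

```python
# Minimal sketch of a back-office experiment: route incoming customer
# messages to the right queue with a locally hosted model. The endpoint,
# model name, and categories are illustrative assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")

CATEGORIES = ["fraud", "payments", "account-changes", "general"]

def route_message(message: str) -> str:
    """Ask the model to pick exactly one routing category."""
    prompt = (
        f"Classify this customer message into one of {CATEGORIES}. "
        f"Reply with the category only.\n\nMessage: {message}"
    )
    resp = client.chat.completions.create(
        model="meta/llama3-8b-instruct",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=10,
        temperature=0.0,  # deterministic output suits routing
    )
    answer = resp.choices[0].message.content.strip().lower()
    return answer if answer in CATEGORIES else "general"  # safe fallback

print(route_message("I don't recognize a charge on my card from yesterday."))
```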

In this phase, inferencing needs tend to grow rapidly. Expect successful experiments to expand into larger models (or combinations of models) and wider business applications as more users onboard. When your first use cases reach production, celebrate the wins, then make sure you can keep up with demand as everyone in the organization wants in.

Add more AI capabilities as demand grows

As demand for AI resources within your firm grows over time, you’ll need to add new capabilities and resources such as a wider range of AI models that serve different business needs.

Choosing a modular platform will make this easier. For example, HPE’s Machine Learning platform can help you build pipelines and governance that make interactions between your data and AI models more efficient and secure. The platform also helps with fine-tuning, retrieval-augmented generation (RAG; see below), experiment tracking and collaboration, and managing your AI infrastructure.
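
The specifics of HPE’s platform are beyond a short sketch, but the underlying idea of experiment tracking is easy to show. As a generic illustration (using the open-source MLflow library, not the HPE platform itself), logging each run’s settings and results keeps experiments comparable and auditable; all names and values here are made up:

```python
# Generic illustration of experiment tracking with the open-source
# MLflow library (not the HPE platform itself). All parameter names,
# metric names, and values are made-up examples.
import mlflow

with mlflow.start_run(run_name="llama3-rag-chatbot-v2"):
    # Record what was tried ...
    mlflow.log_param("base_model", "meta/llama3-8b-instruct")
    mlflow.log_param("retrieval_top_k", 2)
    mlflow.log_param("temperature", 0.2)
    # ... and how well it worked, so teams can compare runs later.
    mlflow.log_metric("answer_accuracy", 0.91)
    mlflow.log_metric("p95_latency_ms", 420.0)
```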

Easily transition to bigger use cases

When you hit the limits of model performance in a particular domain, you may want to deploy a bigger model and compare results. Or you may want to improve model accuracy and reliability by having models reference facts from trusted internal knowledge bases, using a process called retrieval-augmented generation (RAG).
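
Conceptually, RAG retrieves the most relevant passages from your knowledge base and places them in the prompt so the model answers from trusted facts. Here is a minimal sketch, assuming a small local embedding model and the same illustrative local endpoint as earlier; a production system would use a vector database rather than an in-memory list:

```python
# Minimal RAG sketch: embed internal documents, retrieve the passages
# most similar to the question, and answer from them. The embedding
# model, endpoint, model name, and documents are illustrative.
import numpy as np
from openai import OpenAI
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")

# A real system would use a vector database; a list suffices for a sketch.
knowledge_base = [
    "Wire transfers over $10,000 require a second approver.",
    "Standard settlement for domestic ACH is two business days.",
    "Customers can dispute card charges within 60 days of the statement.",
]
doc_vectors = embedder.encode(knowledge_base)

def answer(question: str, top_k: int = 2) -> str:
    """Retrieve the top_k most similar passages and answer from them."""
    q_vec = embedder.encode([question])[0]
    # Cosine similarity between the question and each document.
    sims = doc_vectors @ q_vec / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vec)
    )
    context = "\n".join(knowledge_base[i] for i in np.argsort(sims)[-top_k:])
    resp = client.chat.completions.create(
        model="meta/llama3-8b-instruct",
        messages=[{
            "role": "user",
            "content": f"Answer using only this context:\n{context}\n\nQuestion: {question}",
        }],
        max_tokens=128,
    )
    return resp.choices[0].message.content

print(answer("How long do customers have to dispute a card charge?"))
```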

To improve accuracy even further, you can try fine-tuning, where you retrain an open-source foundation model with your own specialized domain data. With fine-tuning, you can outperform the biggest cloud-based commercial models (which may contain trillions of parameters) for your specific tasks. And you can do it with a much smaller, more efficient model, hosted locally or in your private cloud.
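
A common, resource-efficient way to fine-tune is with parameter-efficient methods such as LoRA, which train small adapter matrices instead of all of the model’s weights. The sketch below uses the open-source Hugging Face libraries; the model name, data file, and hyperparameters are illustrative assumptions, and a real run needs GPU capacity sized to the chosen model:

```python
# Minimal sketch of parameter-efficient fine-tuning (LoRA) on an
# open-source foundation model using your own domain text. Model name,
# data file, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "meta-llama/Meta-Llama-3-8B"  # assumes you have license access
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without one

model = AutoModelForCausalLM.from_pretrained(model_name)
# LoRA trains small adapter matrices instead of all weights, which keeps
# the job feasible on far less hardware than full retraining.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))

# Proprietary domain text: one {"text": "..."} record per JSONL line (assumed).
dataset = load_dataset("json", data_files="domain_corpus.jsonl")["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=dataset,
    # mlm=False gives the standard causal-LM objective with padded batches.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("ft-out")  # adapters can be merged or served separately
```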

And because you placed your AI compute close to your data, you can avoid the greater costs, latency, and security risks associated with moving big data frequently.

Step 4: Don't let the future sneak up on you

As you find more ways to create value with AI, you’ll need to run more POCs, with more users and bigger models. Your production deployments will grow bigger and become mission critical, calling for redundancy and effective disaster recovery (DR) solutions. Eventually your data and compute needs will outgrow your existing facilities.

Don’t allow these changes to surprise you, or to slow down your innovation. Strike early and create growth forecasts, which will help you:

  • Scale infrastructure for growing production deployments, and the amount of structured and unstructured data you’ll collect.
  • Scale up governance, controls, security, auditing, and monitoring as you move projects into production and expand into customer-facing use cases.
  • Make plans for when you outgrow existing facilities. These plans could include co-location facilities and high-performance computing systems such as those from HPE Cray. Work with partners who can meet your current and future needs.
  • Monitor your power consumption and costs, and run your AI workloads on the cheapest, cleanest power available to you. At a certain scale, adopting innovations like direct liquid cooling (pioneered by HPE) can pay for itself in power savings and advance sustainability goals.
  • Make financial plans that work with your forecasted scale and capital allocation, for example by considering consumption-based pricing models like HPE GreenLake.

Can we help with your next steps?

AI technologies have matured to a point where it’s become much easier to deploy AI infrastructure, experiment with use cases, and move into production. It’s an exciting time for smaller FSI firms who want to keep up with bigger competitors.

But there are still difficult decisions to make, and a wide array of technologies to assess, which is why consulting the partners and vendors you trust is so helpful.

Start now by learning more about HPE Private Cloud AI.


Meet Joe Fuchs, HPE AI Lead for Financial Services

Joe is a financial services and IT consulting professional with several years of experience in financial management and accounting, financial planning and modeling, financial analysis, system implementations, business operations, process reengineering, and technology initiatives. He has extensive knowledge of and hands-on practice with data integration and transformation, and serves as the AI lead for the financial services industry at HPE. Connect with Joe on LinkedIn

 

 

About the Author

HPE_Experts

Our team of Hewlett Packard Enterprise experts helps you learn more about technology topics related to key industries and workloads.