QA teams are struggling to balance Time to Market with First Time Right. Time windows for QA are shrinking as release cycles become more frequent and on demand, and the move towards digital transformation is making this even more acute. Enter risk-based testing.

The idea of risk-based testing is to focus the testing effort, and spend more time, on the most critical functions. By combining this focused process with metrics, it is possible to manage the test process through intelligent assessment and to communicate the expected consequences of the decisions taken. Most projects run under extreme pressure and tight timescales, often on a risky project foundation. With all these limitations, there is simply no room to compromise on quality and stability in today’s challenging world, especially for highly critical applications. So, instead of doing more with less and risking late projects, increased costs, or low quality, we need to find ways to achieve better results with less. The focus of testing must be placed on the aspects of the software that matter most, to reduce the risk of failure as well as ensure the quality and stability of business applications. This is what risk-based testing achieves. The pressure to deliver may override the pressure to get it right, so the testers of modern systems face many challenges. They are required to:

  1. Calculate software product risks. Identify and calculate, through consultation, the major product risks of concern and propose tests to address those risks.
  2. Plan and judge the overall test effort. Judge, based on the nature and scope of the proposed tests and experience, how expensive and time-consuming the testing will be.
  3. Obtain consensus on the amount of testing. Achieve, through consensus, the right coverage, balance, and emphasis on testing.
  4. Supply information for a risk-based decision on release. Perhaps the most important task of all is to provide information as the major deliverable of all testing.

The Association between Testing and Risk

There are three types of software risk:
  1. Product risk – a product risk is the chance that the product fails to deliver the expected outcome. These risks relate to the product definition, the product’s complexity, unstable requirements, and the defect-proneness of the technology involved, any of which can cause requirements to be missed. Product risk is the tester’s main concern.
  2. Process risk – a process risk is the potential loss resulting from improper execution of the processes and procedures used to run the work day to day. These risks relate primarily to internal aspects of the project, including its planning and monitoring. Typically, risks in this area involve the testers underestimating the complexity of the project and therefore not applying the effort or expertise needed. The project’s internal management, including efficient planning, controlling, and progress monitoring, is a project management concern.
  3. Project risk – a project risk is an uncertain event that may or may not occur during a project. Contrary to our everyday idea of what “risk” means, a project risk could have either a negative or a positive effect on progress towards project objectives. These risks relate to the context of the project as a whole.

The purpose of structured test methodologies tailored to the development activities in risk-based testing is to reduce risk by detecting faults in project deliverables as early as possible. Finding faults early, rather than late, in a project reduces the reworking necessary, costs, and amount of time lost.

Risk-based Testing Strategy

Risk-based testing – Objectives
  • To provide relevant evidence that the business benefits required from the system can be achieved.
  • To give relevant data about the potential risks involved in the release (and use) of the system under test.
  • To find defects in the software products (software as well as documentation) to make necessary corrections.
  • To build confidence that the stated (as well as unstated) needs have been successfully met.

Risk-based test process – Stages

Stage 1: Risk Identification

Risk identification is the activity that examines each element of the program to identify the associated root causes of failure. Candidate risks are derived from existing checklists of failure modes (most commonly) and generic risk lists that can be used to seed the discussions in a risk workshop. Developers, users, technical support staff, and testers are probably best placed to generate the initial list of failure modes. The tester should compile the inventory of risks gathered from these practitioners, schedule the risk workshop, and circulate the risk inventory to the attendees. Ensuring that adequate and timely risk identification is performed is the responsibility of the test manager or product owner.

Stage 2: Risk Analysis

Define levels of uncertainty. Once you have identified the potential sources of risk, the next step is to understand how much uncertainty surrounds each one. At this stage, the risk workshop is convened. It should involve representatives from the business, development, technical support, and testing communities, as well as some more senior managers who can see the bigger picture. Ideally, the project manager, development manager, and business manager should be present.
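
One simple way to make this analysis concrete is to score each candidate risk. The short Python sketch below ranks risks by exposure (likelihood multiplied by impact) so that test effort can be assigned to the highest-exposure items first; the risk items and the 1-to-5 scales are illustrative assumptions, not a prescribed standard.

    from dataclasses import dataclass

    @dataclass
    class Risk:
        description: str
        likelihood: int   # 1 (rare) .. 5 (almost certain), agreed in the risk workshop
        impact: int       # 1 (negligible) .. 5 (severe), agreed with business stakeholders

        @property
        def exposure(self) -> int:
            # Classic risk exposure: likelihood x impact
            return self.likelihood * self.impact

    risks = [
        Risk("Payment engine miscalculates interest", likelihood=3, impact=5),
        Risk("Report footer shows wrong date format", likelihood=4, impact=1),
        Risk("Login fails under peak load", likelihood=2, impact=4),
    ]

    # Highest exposure first: these items get the earliest and deepest test coverage.
    for risk in sorted(risks, key=lambda r: r.exposure, reverse=True):
        print(f"{risk.exposure:>2}  {risk.description}")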

Stage 3: Risk Response

Risk response planning involves determining ways to reduce or eliminate threats to the project, and ways to increase the impact of opportunities. When the candidate risks have been agreed on and the workshop is over, the tester takes each risk in turn and considers whether it is testable. If it is, the tester then specifies a test activity or technique that should meet the test objective. Typical techniques include requirements or design reviews, inspections or static analysis of code or components, and integration, system, or acceptance tests.

Stage 4: Test Scoping

A test scope shows the software testing teams the exact paths they need to cover while performing their application testing operations. Scoping the test process is a review activity that requires the involvement of all stakeholders. At this point, the major decisions about what is in and out of scope for testing are made; it is, therefore, essential that the staff in the meeting have the authority to make these decisions on behalf of the business, the project management, and technical support.

Stage 5: Test Process

Testing is the process of evaluating a product by learning about it through experiencing, exploring, and experimenting, which includes, to some degree, questioning, study, modeling, observation, and inference. At this point, the scope of the testing has been agreed on, with test objectives, responsibilities, and stages in the overall project plan decided. It is now possible to compile the test objectives, assumptions, dependencies, and estimates for each test stage and publish a definition for each stage in the test process.

Conclusion

When done effectively, risk-based assessment and testing can quickly deliver important outcomes for an organization. Because skilled specialists assess risk at each stage of delivery, quality is built into the deliverables starting with the requirements.

Know how Magic FinServ can help you or reach out to us at mail@magicfinserv.com.

All of a sudden, there is an increasing consensus that wealth management advisory services are something we all need – not just for utilizing our corpus better, but also for gaining more accurate insights about what to do with our money, now that there are so many options available. This is partly due to the proliferation of platforms, including robo-advisory services, that deliver financial information on the fly, and partly due to psychological reasons. We have all heard stories of how investing “smart” in stocks, bonds, and securities resulted in a financial windfall and ludicrous amounts of wealth for the lucky ones, while with our fixed income and assets we only ended up with steady gains over the years. So yes, we all want to be that “lucky one” and want our money to be invested better!

Carrying out the Fiduciary Duties!

But this blog is not about how to invest “smart.” Rather the focus is on wealth managers, asset managers, brokers, Registered Investment Advisors (RIA), etc., and the challenges they face while executing their fiduciary duties.

As per the Standard of Conduct for Investment Advisers, there are certain fiduciary duties that financial advisors/investment advisors are obligated to adhere to. For example, there is the Duty of Care, which makes it obligatory for investment advisors to act in the best interests of the client and:

  • Provide advice that is in the clients’ best interests
  • Seek best execution
  • Provide advice and monitoring over the course of the relationship

However, multiple challenges – primarily related to the assimilation of data – make it difficult to fulfil these fiduciary obligations. The question then is how wealth managers can successfully operate in complex situations, and with clients with large portfolios, while retaining the personal touch.

The challenges enroute

Investors today desire omnichannel access, integration of banking and wealth management services, and personalized offerings, and they are looking for wealth advisors who can deliver all three. In fact, fully 50 percent of high-net-worth (HNW) and affluent clients say their primary wealth manager should improve digital capabilities across the board. (Source: McKinsey)

Lack of integration between different systems: The lack of integration between different systems is a major roadblock for the wealth manager, as is the lack of appropriate tools for cleaning and structuring data. As a result, wealth management and advisory end up generating a lot of noise for the client.

Multiple assets and lack of visibility: As a financial advisor, the client’s best interests are paramount, and visibility into the various assets the client possesses is essential. But what if the advisor does not see everything? The client holds multiple assets – retirement plans, stock and bond allocations, insurance policies, private equity investments, hedge funds, and others – and without visibility into all of them, how can you execute your fiduciary duties to the best of your ability?

Data existing in silos: The problem of data existing in silos is a huge problem in the financial services sector. Wealth managers, asset managers, banks, and the RIAs require a consolidated position of the clients’ portfolios, so that no matter the type of asset class, the data is continually updated and made available. Let’s take the example of the 401K – the most popular retirement plan in America. Ideally, all the retirement plan accounts should be integrated. However, when this is not the case, it becomes difficult to take care of the client’s best interests.

Delivering personalized experience: One of the imperatives when it comes to financial advice is to ensure that insights or conversations are customized as per the customer’s requirements. While someone might desire inputs in a pie chart form, others might require inputs in text form. So apart from analyzing and visualizing portfolio data, and communicating relevant insights, it is also essential to personalize reporting so that there is less noise.

Understanding of the customer’s risk appetite: A comprehensive and complete view of the client’s wealth – which includes the multiple asset classes in the portfolio – fixed income, alternative, equity, real assets, directly owned, is essential for an understanding of the risk appetite.

The epicenter of the problem is, of course, poor-quality data. Poor-quality or incomplete data, or data existing in silos and not aggregated, is the reason why wealth advisory firms falter when it comes to delivering sound fiduciary advice. They are unable to ascertain the risk appetite, manage fixed income, or assess the risk profile of the basket (for portfolio trading). More importantly, they are unable to retain the customer, and that is a huge loss. Not to mention the woeful loss of resources and money when, instead of acquiring new customers or advising clients, highly paid professionals spend their time on time-intensive portfolio management and compliance tasks, downloading tons of data in multiple formats for aggregation, analytics, and wealth management.

Smart Wealth Management = Data Consolidation and Aggregation + Analytics for Smart Reporting

That data consolidation and aggregation is at the heart of wealth management practice is undeniable.

  • A complete view of all the customer’s assets is essential – retirement plan, stock and bond allocations, insurance policy, private equity investments, hedge funds, and others.
  • Aggregate all the assets, bringing together all the data sources/custodians involved (a minimal consolidation sketch follows this list)
  • Automate the data aggregation and verification in the back office. Build the client relationships instead of manually going through data
  • Support in-trend trading such as portfolio trading, wherein a bundle of bonds of varying duration and credit quality is traded in one transaction; it requires sophisticated tools to assess the risk profile of the whole basket (in the portfolio trade) (Source: Euromoney)
  • Ensure enhanced reporting, sharing the data in the form the customer requires – pie charts, text, etc. – using a combination of business intelligence and analytics tools for an uplifting client experience.
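
To make the aggregation step concrete, here is a minimal Python sketch. It assumes each custodian delivers a positions file with its own column names; the file names and field mappings are hypothetical, and a production feed would add validation and security-identifier matching.

    import pandas as pd

    # Hypothetical custodian feeds, each with its own column naming convention.
    CUSTODIAN_FIELD_MAPS = {
        "custodian_a.csv": {"Ticker": "symbol", "Qty": "quantity", "MktVal": "market_value"},
        "custodian_b.csv": {"Instrument": "symbol", "Units": "quantity", "Value": "market_value"},
    }

    frames = []
    for path, field_map in CUSTODIAN_FIELD_MAPS.items():
        df = pd.read_csv(path).rename(columns=field_map)
        df["source"] = path  # keep lineage so every figure can be traced back
        frames.append(df[["symbol", "quantity", "market_value", "source"]])

    # One consolidated view of the client's holdings across all custodians.
    positions = pd.concat(frames, ignore_index=True)
    consolidated = positions.groupby("symbol", as_index=False)[["quantity", "market_value"]].sum()
    print(consolidated)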

How can we help?

Leverage Magic DeepSightTM for data aggregation and empower your customers with insightful information

Magic FinServ’s AI Optimization framework, utilizing structured and unstructured data, builds tailored solutions for every kind of financial institution delivering investment advice – banks, wealth managers, brokers, RIAs, etc.

Here’s one example of how our bespoke tool can accelerate and elevate the client experience.

Data aggregation: Earlier we talked about data consolidation and aggregation. Here is an example of how we deliver when it comes to clarity, speed, and meaningful insights from data. Every fund is obligated to publish its investment strategy quarterly, and Magic FinServ’s AI optimization framework can read these details from public websites. Our bespoke technology – DeepSightTM – has proven capability to extract insights from public sources such as 401K and 10-K documents, as well as from unstructured sources such as emails. It brings data together from these disparate sources and data stores and aggregates it to ensure a single source of truth, which provides the intelligence and insights needed for portfolio trading and rebalancing, scenario analysis, and forecasting, among others.

Business Intelligence: Our expertise in building digital solutions that leverage content digitization and unstructured/alternative data, using automation frameworks and tools, improves trading outcomes in the financial services industry.

DCAM authorized partners: As DCAM authorized partners, we leverage best-in-class data management practices for evaluating and assessing data management programs, based on core data management principles.

Keeping up with the times:

The traditional world of wealth management firms is going through a sea change – partly due to the emergence of tech-savvy high-net-worth individuals (HNWI) who demand more in terms of content, and partly due to the increasing role played by Artificial Intelligence, Machine Learning, and natural language processing. Though it is still early days for AI, it is evident that in wealth management, technology is taking on a larger role in delivering content to the client while taking care of aspects like cybersecurity, costs, back-office efficiency and automation, data analysis and personalized insights, forecasting, and improving the overall customer experience.

To know more about how Magic FinServ can amplify your client experience, you can write to us at mail@magicfinserv.com.

Jim Cramer famously predicted, “Bear Stearns is fine. Do not take your money out.”

He said this on an episode of Mad Money on 11 March 2008.

The stock was then trading at $62 per share.

Five days later, on 16 March 2008, Bear Stearns collapsed. JPMorgan bailed the bank out for a paltry $2 per share.

This collapse was one of the biggest financial debacles in American history. Surprisingly, nobody saw it coming (except Peter, the viewer whose concerns prompted that now-infamous Mad Money reply). Bear Stearns was sold at a fraction of what it was worth – from a $20 billion capitalization to an all-stock deal valued at $236 million, approximately 1% of its earlier worth – and there are many lessons from its fall from grace.

Learnings from Bear Stearns and Lehman Brothers debacle

Bear Stearns did not fold up in a day. Sadly, the build-up to the catastrophic event began much earlier in 2007. But no one heeded the warning signs. Not the Bear Stearns Fund Managers, not Jim Cramer.

Had the Bear Stearns fund managers ensured ample liquidity to cover their debt obligations, and had they been a little more careful and able to accurately predict how the subprime bond market would behave under extreme circumstances as homeowner delinquencies increased, they could have saved the company from being sold for a pittance.

Or was this – and indeed the entire economic crisis of 2008 – the rarest of rare events, beyond the scope of human prediction: a Black Swan event, an event characterized by rarity, extreme impact, and retrospective predictability (Nassim Nicholas Taleb)?

What are the chances of the occurrence of another Black Swan event now that powerful recommendation engines, predictive analytics algorithms, and AI and ML parse through data?

In 2008, the cloud was still in its infancy.

Today, cloud computing is a powerful technology with an infinite capacity to make information available and accessible to all.

Not just the cloud: financial organizations are also using powerful recommendation engines and analytical models to predict market tailwinds. Hence, the likelihood of a Black Swan event like the fall of Bear Stearns and Lehman Brothers seems remote.

But faulty predictions and errors of judgment are not impossible.

Given the human preoccupation with minutiae instead of with possible significant large deviations – even when they are out there like an eyesore – black swan events remain possible (the Ukraine war and the subsequent disruption of the supply chain were unthinkable before the pandemic).

Hence the focus on acing the data game.

Focus on data (structured and unstructured) before analytics and recommendation engines

  • The focus is on staying sharp with data – structured and unstructured.
  • Also, the focal point should be on aggregating and consolidating data and ensuring high-level data maturity.
  • Ensuring availability and accessibility of the “right” or clean data.
  • Feeding the “right” data into the powerful AI, ML, and NLP-powered engines.
  • Using analytics tools and AI and ML for better quality data.

Data Governance and maturity

Ultimately, financial forecasting – traditional or rolling – is all about data: from annual reports, 10-K reports, financial reports, emails, online transactions, contracts, and financials. As a financial institution, you must ensure a high level of data maturity and governance within the organization. For eliciting that kind of change, you must first build a robust data foundation for financial processes, as the advanced algorithmic models and analytics tools that organizations use for prediction and forecasting require high-quality data.

Garbage in would only result in Garbage out.

Consolidating data – Creating a Single Source of Truth

Source: Deloitte
  • The data used for financial forecasting comes primarily from three sources:
    • Data embedded within the organization – historical data, customer data, alternative data – or data from emails and operational processes
    • External: external sources and benchmarks and market dynamics
    • Third-party data: from ratings, scores, and benchmarks
  • This data must be clean and high-quality to ensure accurate results downstream (a minimal quality-check sketch follows this list).
  • Collecting data from all the disparate sources, cleaning it up, and keeping it in a single location, such as a cloud data warehouse or lake house – or ensuring a single source of truth for integration with downstream elements.
  • As underlined earlier, bad-quality data impairs the learning of even the most powerful of recommendation engines, and a robust data management strategy is a must.
  • Analytics capabilities are enhanced when data is categorized, named, tagged, and managed
  • Collating data from different sources – what it was and what it is – enables historical trend analysis.
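
As a minimal illustration of the “clean data first” point above, the Python sketch below runs a few basic fitness checks before data is handed to any forecasting model. The table path, column names, and one-day staleness threshold are assumptions for the example, not a prescribed standard.

    import pandas as pd

    def quality_report(df: pd.DataFrame) -> dict:
        """Basic fitness checks before the data feeds any forecasting model."""
        price_age = pd.Timestamp.now(tz="UTC") - pd.to_datetime(df["price_date"], utc=True)
        return {
            "rows": len(df),
            "duplicate_rows": int(df.duplicated().sum()),
            "missing_by_column": df.isna().sum().to_dict(),
            "stale_prices": int((price_age > pd.Timedelta(days=1)).sum()),
        }

    # Hypothetical single-source-of-truth table in the warehouse / lake house.
    df = pd.read_parquet("warehouse/positions.parquet")
    report = quality_report(df)
    if report["duplicate_rows"] or any(report["missing_by_column"].values()):
        raise ValueError(f"Data is not yet fit for forecasting models: {report}")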

Opportunities lost and penalties incurred when data is not of high quality or consolidated

Liquidity assumption:

As an investment house, manager, or custodian, it is mandatory to maintain a certain level of liquidity for regulatory compliance. However, due to a lack of data, a lack of consolidated data, or a lack of analytics and forecasting, organizations end up making assumptions about liquidity.

Let’s take the example of a bank that uses multiple systems for different portfolio segments or asset classes. Now consider a scenario where these systems are not integrated. What happens? Because the organization fails to get a holistic view of its current position, it simply assumes the liquidity requirement. Sometimes it places more money than required for liquidity, and the opportunity is lost; other times it places less and becomes liable for penalties.

If we combine the costs of the opportunity lost and the penalties, the organization would have been better off investing in better data management and analytics.

Net Asset Value (NAV) estimation:

Now let’s consider another scenario – NAV estimation. Net Asset Value is the net value of an investment fund’s assets less its liabilities. NAV is the price at which the shares of funds registered with the U.S. Securities and Exchange Commission (SEC) are traded. To calculate the month-end NAV, the organization requires the sum of all expenses. Unfortunately, when not all expenses incurred are declared on time, only a NAV estimate can be provided. Later, after a month or two, once all the inputs regarding expenses are available, the organization restates the NAV. This is not only embarrassing for the organization, which has to issue a lengthy explanation of what went wrong, but it is also liable for penalties – not to mention the loss of credibility when investors lose money because the share price was incorrectly stated.
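
The arithmetic behind a restatement is simple to illustrate. The short Python sketch below uses entirely hypothetical figures: a late-arriving expense changes the liabilities, so the published per-share NAV has to be restated.

    def nav_per_share(assets: float, liabilities: float, shares_outstanding: float) -> float:
        # Net Asset Value per share = (assets - liabilities) / shares outstanding
        return (assets - liabilities) / shares_outstanding

    assets, shares = 505_000_000.0, 10_000_000.0
    declared_expenses = 4_800_000.0
    late_expenses = 350_000.0  # surfaces a month or two later

    estimated = nav_per_share(assets, declared_expenses, shares)                  # 50.0200
    restated = nav_per_share(assets, declared_expenses + late_expenses, shares)   # 49.9850
    print(f"Estimated NAV: {estimated:.4f}, restated NAV: {restated:.4f}")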

DCAM Strategy and DeepSightTM Strategy – making up for lost time

Even today, when we have extremely intelligent new-age technologies at our disposal, incorrect predictions are not unusual – largely because large swathes of data are extremely difficult to process, especially if you aim to do it manually, lack data maturity, or have not invested in robust data governance practices.

But you can make up for lost time. You can rely on Magic FinServ to facilitate highly accurate and incisive forecasts by regulating the data pool. With our DCAM strategy and our bespoke tool – DeepSightTM – you can get better and better at predicting market outcomes and making timely adjustments.

Here’s our DCAM strategy for it:

  • Ensure data is clean and consolidated
  • Use APIs and ensure that data is consolidated in one common source – key to our DCAM strategy
  • Supplement structured data with alternative data sources
  • Ensure that data is available for slicing and dicing

To conclude, the revenue and profits of the organization and associated customers depend on accurate predictions. And if predictions or forecasts go wrong, there is an unavoidable domino effect. Investors lose money, share value slumps, hiring freezes, people lose jobs, and willingness to trust the organization goes for a nosedive.

So, invest wisely and get your data in shape. For more information about what we do, email us at mail@magicfinserv.com

APIs are driving innovation and change in the Fintech landscape, with Plaid, Circle, Stripe, and Marqeta facilitating cheaper, faster, and more accessible financial services for the customer. However, while APIs are the driving force in the fintech economy, there is not much relief for software developers and quality analysts (QAs): their workloads are not automated, and there is increasing pressure to release products to the market. Experts like Tyler Jewell, managing director of Dell Technologies Capital, have predicted that there will soon be a trillion programmable endpoints. It would be inconceivable then to carry out manual testing of APIs, as most organizations do today. An API conundrum will be inevitable: organizations will be forced to choose between quick releases and complete testing of APIs. If you choose a quick release, you may have to deal with technical debt and rework later. Failure to launch a product in time could lead to a loss of business value.

It does not have to be this way. For business-critical APIs that demand quick releases and foolproof testing, automation saves time and money and ensures quicker releases. To know more, read on.

What are APIs and the importance of API testing

API is the acronym for Application Programming Interface, which is a software intermediary that allows two applications to talk to each other. APIs lie between the application and the web server, acting as an intermediary layer that processes data transfer between systems.

Visual representation of API orientation

Is manual testing of APIs enough? API performance challenges

With the rise in cloud applications and interconnected platforms, there’s a huge surge in the API-driven economy.

Today, many of the services that are being used daily rely on hundreds and thousands of different interconnected APIs – as discussed earlier, APIs occupy a unique space between core application microservices and the underlying infrastructure.

If any of these APIs fails, the entire service can be rendered ineffective. Therefore, API testing is mandatory. When testing APIs, the key tests are as depicted in the graphic below:

So, we must make sure that API tests are comprehensive and inclusive enough to measure the quality and viability of the business applications – which is not possible manually.

The API performance challenges stem primarily from the following factors:

  • Non-functional requirements during the dev stage quite often do not incorporate the API payload parameters
  • Performance testing for APIs happens only towards the end of the development cycle
  • Adding more infrastructure resources like more CPU or Memory will help, but will not solve the root cause

The answer then is automation.

Hence the case for automating API testing early in the development lifecycle and including it in the DevSecOps pipeline. The application development and testing teams must also make an effort to monitor API performance the way they monitor the application (from Postman and ManageEngine right up to AppDynamics), and to design the core applications and services with API performance in mind – questioning how much historical data a request carries and whether the data sources are monolithic or federated.
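
As an illustration of what such an automated check can look like, here is a minimal sketch using pytest conventions and the requests library. The endpoint, fields, and latency threshold are hypothetical; a real suite would add authentication, schema validation, and negative cases, and would run on every build in the CI/CD pipeline.

    import requests

    BASE_URL = "https://api.example.com"  # hypothetical service under test

    def test_accounts_endpoint_contract():
        response = requests.get(f"{BASE_URL}/accounts/12345", timeout=5)

        # Functional check: the endpoint answers successfully and quickly enough.
        assert response.status_code == 200
        assert response.elapsed.total_seconds() < 1.0

        # Lightweight contract check: the payload carries the fields downstream systems rely on.
        body = response.json()
        for field in ("account_id", "currency", "balance"):
            assert field in body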

Automation of APIs – A new approach to API testing

Eases the workload: As the number of programmable endpoints reaches a trillion (in the near future), the complexity of API testing would grow astronomically. Manually testing APIs using home-grown scripts and tools and open-source testing tools would be a mammoth exercise. Automation of APIs then would be the only answer.

Ensures true AGILE and DevOps enablement: Today, AGILE and the ‘Shift Left’ approach have become synonymous with a changing organizational culture that focuses on quality and security. For true DevOps enablement, CI/CD integration, and AGILE delivery, an automation framework that can quickly configure and test APIs is preferable to manual testing of APIs.

Automation simplifies testing: While defining and executing a test scenario, the developer or tester must keep in mind the protocols, the technology used, and the layers involved in a single business transaction. Generally, several APIs work behind an application, which increases the complexity of testing. With automation, even complex testing can be carried out easily.

Detects bugs and flaws earlier in the SDLC: Automation reduces technical work and associated costs by identifying vulnerabilities and flaws quickly saving monetary losses, rework, and embarrassment.

Decreases the scope of security lapses: Manual testing increases the risk of bugs going undetected and of security lapses occurring every time the application is updated. With automation, it is easier to validate whether a software update elicits a change in the critical business layer.

Win-win solution for developers and business leaders: Automation expedites the release to market, as API tests can validate business logic and functioning even before the complete application is ready with the UI – thereby resolving the API conundrum.

Magic FinServ’s experience in API engineering, monitoring, and automated QA

Magic FinServ team with its capital markets domain knowledge and QA automation expertise along with industry experience helps its clients with:

  • Extraction of data from various crypto exchanges using open-source APIs into a common unified data model covering the attributes of various blockchains, which helps in:
    • Improved stability of the downstream applications and data warehouses
    • Eliminating the need for web scraping of inconsistent/protected data – web scraping is often blocked by 2FA
    • Use of a monitored API platform improved data access and throughput and enabled the client to emerge as a key competitor in the crypto asset data-mart space
  • Extraction of data from various types of documents using Machine/AI learning algorithms and exposing this data to various downstream systems via a monitored and managed API platform
  • Use of AI to automate Smart Contract based interfaces and then later repurpose these capabilities to build an Automated API test bed and reusable framework
We also have other engineering capabilities, such as:
  • New-generation platforms for availability, scalability, and reliability across various stacks (Java/.NET/Python/JS) using microservices and Kubernetes
    • Our products are built using the latest technology stack in the industry – SPA (Single Page Application), automated pipelines, Kubernetes clusters, Ingress controllers, Azure cloud hosting, etc.
  • Full-stack products delivered in a fully managed capacity, covering all aspects of the product (BA/Development/QA)

APIs are the future, API testing must be future-ready

There’s an app for that – Apple

APIs are decidedly the future of the financial ecosystem. Businesses are coming up with innovative ideas to ease payments, banking, and other financial transactions. For banks and FinTechs, API tests are not mere tests; they are an important value add, as they bring business and instill customer confidence by ensuring the desired outcomes every time.

In this blog, part 1 in a series of blogs on automation in API testing, we have detailed the importance of automation in API testing. In the blogs that follow, we will give a comprehensive account of how to carry out the tests, along with customer success stories where Magic FinServ’s API Automation Suite has delivered superlative results. Keep looking out in this space for more! You can also write to us at mail@magicfinserv.com.

“The unknown can be exciting and full of opportunity, but you have to be involved and you have to be able to evolve.”

-Alice Bag

When it comes to hosting a website or application, banks and financial institutions, particularly medium-sized nimble hedge funds and fintechs, have multiple options. Two of the most frequently used options are commercial shared hosting and cloud hosting. While shared hosting relies on a single physical server or a set of distributed physical servers, cloud hosting draws on the power of the cloud: multiple virtual interconnected servers spread across disparate geographical locations. In shared hosting, multiple users agree to share the resources (space, bandwidth, memory) of a server in accordance with a fair use policy. Cloud hosting is more modern and technologically superior; as a result, it is increasingly sought by modern financial institutions as they navigate rapidly changing customer preferences, disruptive market forces, and escalating geopolitical rivalries to ensure seamless delivery of services every time.

Key factors to keep in mind while deciding between cloud and shared hosting

We have enumerated a few factors which will make it easier for you to decide between the two.

Performance: Website and application performance is a critical requirement. No business today would like to lose customers due to deteriorating site speed, hence website owners must consider the performance criteria while choosing the hosting. So, it is critical to question:

  • Does the website and application performance degrade during peak hours?
  • Does the site speed slow down and then it takes ages to get it running again?
  • What is the volume of traffic expected?
  • Would the volume of traffic be consistent all through or would there be peaks and valleys?
  • How resource-intensive would the website/application be? Depending on how important site performance is for your business or product, you can choose between the two.
  • Do I get real time and flexible performance analytics?

Reliability: Another key requirement is reliability. Business-critical processes cannot afford downtime. Downtime translates into a total loss for the business for that period: transactions and revenue earned are zero. It is also responsible for loss of brand value, and some studies point out that downtime results in client abandonment. Considering the amount of time and effort it takes to acquire a customer, banks and financial institutions are wary of unplanned downtime.

So, it is advisable to question how your regular hosting might perform – will it snap under the weight of an increased workload? It also makes sense to know beforehand how many resources would be permanently allocated to the site (in case it is shared hosting that you have chosen), because a stalled website or application can snowball into a huge embarrassment or disruption.

Security: The security of data is of paramount importance for any organization. Data must be kept safe from breaches and cyber-attacks regardless of the costs. You must be extremely careful when you choose shared hosting, because when multiple websites have the same IP address, their vulnerability to attacks increases. It becomes inevitable then for the provider to monitor closely and upgrade the latest security patches as needed. The other option is cloud hosting.

Scalability: What if your site picks up speed or you want to scale your online presence? What then? Can the provider meet the demand for on-demand scalability? Will the website be ready for the unexpected? What if there is a jump in workload (this depends on how many resources are permanently allocated to the site)? With cloud hosting, the biggest advantage is scalability: the cloud lets you anticipate demand and auto-scale manyfold, both in theory and in practice.

Traffic Analytics: The cloud allows you to do traffic analytics and predict which segment of your target market, or which geography, is attracting more eyeballs for your offerings. You can customize analytics to suit your marketing requirements and micro-position your business. This is not possible with shared hosting or other conventional hosting options.

Budget: Budget is another key differentiator for organizations, as they have to keep their businesses running while investing in technology. Cloud hosting is undoubtedly more expensive than vanilla shared hosting. But while shared hosting looks deceptively affordable, enterprise-grade shared hosting can also be quite expensive when features and functionalities are compared side by side. The cloud also offers advantages in the long term from a Total Cost of Operations perspective, along with several enterprise-grade features that do not come with vanilla shared hosting.

Ease of management: The key question here is – who will take care of the upkeep and maintenance costs? With organizations focusing on their core activities, who will be responsible for security and upgrade? What would happen in the case of any emergency – how safe would the data be? This has to be accounted for as well, as no one would want key information to fall into the wrong hands.

Business-criticality: Lastly, if it is an intensive, business-critical process, shared hosting is not an option, because business-critical processes cannot afford disruption. If an organization is planning a new product launch, or a website that interfaces with the customer directly, the business cannot afford to go wrong. Hence the cloud is the preferable option.

Shared or cloud hosting?

When it comes to choosing between the two, shared hosting is certainly economical at a base level. It is the most affordable way to kickstart a project online. But if the project is demanding, resource-intensive, and business-critical, you need to look beyond shared hosting, even as a small or medium enterprise.

So, when we weigh all the factors underlined earlier, the cloud undeniably has advantages. It is a preferable option for banks and financial institutions that must ensure data security at all costs while also providing a rich user experience to their customers.

Advantage Cloud: Cloud Hosting benefits decoded by Magic FinServ’s Cloud team

  1. Cloud is far superior in terms of technology and innovation

Whether you are a FinTech raring to go in the extremely volatile and regulations-driven financial services ecosystem or a reputed bank or financial services company with years of experience and a worldwide userbase, there are many benefits when you choose cloud.

The cloud is one of the fastest-growing technological trends and is synonymous with speed, security, and performance.

There is so much more that an organization can do with the cloud. The advancements that have been made in the cloud, including cloud automation, enable efficiency and cost reduction. Whether it is an open-source or paid-for resource, these can be acquired by organizations with ease.

All the major cloud service providers – AWS, Microsoft Azure, and Google – offer tremendous opportunities for businesses as they become more technologically advanced with each passing day. Cloud service providers have also developed native services that customers can use to solve key concerns; these are wide-ranging, from warehouses such as Redshift on AWS to managed Kubernetes on Azure. Magic FinServ’s team of engineers helps you realize the full potential of the cloud, with deep knowledge of AWS and Azure native services and serverless computing.

  2. Security is less of a concern when you choose the cloud

Security is less of a concern compared to shared hosting. In shared hosting, a security breach can impact all websites. In cloud hosting, the levels of security are higher and there are multiple levels of protection such as firewalls, SSL certificates, data encryption, login security etc., to keep the data safe.

Magic FinServ’s team understands that security is a foundational element of modern tech architecture. Our engineers and cloud architects are well acquainted with the concept of DevSecOps, where security is a shared responsibility ingrained throughout the IT lifecycle, not bolted on at the end.

  3. Cloud offers more benefits in the longer term

Though in terms of pricing, shared hosting seems more affordable, there are several disadvantages:

  • The amount of hosting space for websites/applications is extremely limited as you rent only a piece of the server space.
  • The costs are lower upfront, but you lose the scalability associated with the cloud.
  • Performance and security also suffer.
  • For an agile FinTech, faster go-to-market is key; the cloud offers a platform where you can release products to the market significantly faster.

For more on how you can evolve with the cloud, we have a diverse team comprising cloud application architects, Infrastructure engineers, DevOps professionals, Data migration specialists, Machine learning engineers, and Cloud operations specialists who will guide you through the cloud journey with minimum hassle.

  4. High availability and scalability

When it comes to cloud hosting, the biggest advantage is scalability. With lean and agile practices driving change in the business world, cloud hosting enables organizations to optimize resources as per need, with multiple machines/servers acting as one system. Secondly, in the case of an emergency, cloud hosting ensures high availability of data through data mirroring: if one server is disabled, others spread across disparate geographical locations keep your data safe and ensure that processes are not disrupted.

Magic FinServ has consistently built systems with over four-nines availability, used by financial institutions, with provisions for both planned and unplanned downtime – ensuring high availability so that your business does not suffer even under the most exacting circumstances.

  5. Checking potential threats – Magic FinServ’s way

Our processes are robust and include a business impact analysis to understand the potential threat to the business due to data loss. There are two key considerations we take into account: the Recovery Time Objective (RTO), which is essentially the window allowed for restoring systems and data, and the Recovery Point Objective (RPO), which is the maximum tolerable period during which data might be lost. Keeping these two major metrics in mind, our team builds a robust data replication and recovery strategy aligned with the business requirement.
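
As a minimal illustration of how these two metrics can be turned into automated checks, the Python sketch below compares a measured replication lag and a restore-drill time against assumed RPO/RTO thresholds; the threshold values and function names are hypothetical.

    from datetime import timedelta

    RPO = timedelta(minutes=15)   # maximum tolerable data-loss window (assumed value)
    RTO = timedelta(hours=1)      # maximum tolerable time to restore service (assumed value)

    def replication_within_rpo(replication_lag: timedelta) -> bool:
        # If the replica lags by more than the RPO, a failover could lose too much data.
        return replication_lag <= RPO

    def restore_drill_within_rto(measured_recovery_time: timedelta) -> bool:
        # The recovery time measured in the latest disaster-recovery drill must beat the RTO.
        return measured_recovery_time <= RTO

    assert replication_within_rpo(timedelta(minutes=4))
    assert restore_drill_within_rto(timedelta(minutes=42))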

  6. Effective monitoring mechanism for increasing uptime

We have built a robust monitoring and alert system to ensure minimal downtime. We bring in specialists with diverse technological backgrounds to build an effective and automated monitoring solution that increases system uptime while keeping the cost of monitoring in check.

  7. Better cost control with shared hosting

When organizations choose shared hosting, they have tighter control of costs, principally because only specific people can commission additional resources. However, this is inflexible. The cloud allows greater autonomy for the Dev Pods of today, letting people spin up resources easily; on the flip side, there are instances where people forget to decommission these resources when they are no longer required, escalating costs needlessly. With shared hosting, the costs are predictable and definite.

  8. Fail fast and fail forward – smarter and quicker learning

Lastly, as a nimble FinTech of tomorrow, you want to test new products quickly and discard unviable ideas equally fast. The cloud allows product and engineering teams to traverse the idea-to-production cycle faster, and lets fail-fast, fail-forward concepts work smoothly for the product and dev pods of tomorrow. Go-to-market becomes faster, and CI/CD and containers on the cloud allow new features to be introduced weekly or even more often. Organizations thus benefit significantly from smarter and quicker learning.

Big and Small evolve with the Cloud: Why get left behind?

In the last couple of years, we have been seeing a trend where some of the biggest names in the business are tiptoeing into the future with cloud-based services. Accenture has also forecasted that in the next couple of years Banks in North America are going to double the number of tasks that are on the cloud (currently 12 percent of tasks are handled in the cloud). Bank of America, for example, has built its own cloud and is saving billions in the process. Wells Fargo also plans to move to data centers owned by Microsoft and Google, and Goldman Sachs says that it will team up with AWS to give its clients access to financial data and analytical tools. Capital One, one of the largest U.S. banks, managed to reduce development environment build time from several months to a couple of minutes after migrating to the cloud.

With all the big names increasingly adopting the cloud, it makes no sense to get left behind.

Make up your mind today!

If you are still undecided on how to proceed, we will help you make up your mind. The one-size-fits-all approach to technology implementation is no longer applicable for banks and financial institutions today: the nature of operations has diversified, and what is ideal for one is not necessarily good for another. But when you have to keep a leash on costs while ensuring a rich and tactile user experience, without disruption to the business, the cloud is ideal.

With a partner like Magic FinServ, the cloud transition is smoother and faster. We ensure peace of mind and maximize returns. With our robust failover designs that ensure maximum availability and a monitoring mechanism that increases uptime and reduces downtime, we help you take the leap into the future. For more, write to us at mail@magicfinserv.com.

Any talk about Data Governance is incomplete without Data Onboarding. Data onboarding is the process of uploading a customer’s data to a SaaS product, often involving ad hoc manual data processes, and it is one of the best use cases for Intelligent Automation (IA).

If done correctly, data onboarding can result in a high-quality data fabric (the golden key, or single source of truth (SSOT)) for use across the back, middle, and front office – improving organizational performance, meeting regulatory compliance, and ensuring real-time, accurate, and consistent data for trading.

Data Onboarding is critical for Data Governance. But what happens when Data Onboarding goes wrong?

  • Many firms struggle to automate data onboarding and continue with conventional means such as manual data entry, spreadsheets, and explainer documents. In such a scenario, the benefits are not visible. Worse, inconsistencies during data onboarding result in erroneous reporting, leading to non-compliance.
  • Poor quality data onboarding could also be responsible for reputational damage, heavy penalties, loss of customers, etc., when systemic failures become evident.
  • Further, we cannot ignore that a tectonic shift is underway in the capital markets – trading bots and cryptocurrency trading are becoming more common, and they require accurate and reliable data. Any inconsistency during data onboarding can have far-reaching consequences for the hedge fund or asset manager.
  • From the customer’s perspective, the longer onboarding takes, the more frustrating it becomes, as they cannot realize the benefits until the data is fully onboarded. The end result: customer dissatisfaction! Prolonged onboarding is also a loss for the vendor, as they cannot initiate the revenue cycle until all data is onboarded – a needless revenue loss, as they wait for months before receiving revenue from new customers.

Given the consequences of Data Onboarding going wrong, it is important to understand why data onboarding is so difficult and how it can be simplified with proper use cases.

Why is Data Onboarding so difficult?

When we talk about Data Governance, we are not simply talking about Data Quality Management; we are also talking about Reference and Master Data Management, Data Security Management, Data Development, and Document and Content Management. In each of these instances, data onboarding poses a challenge because of messy data, clerical errors, duplication of data, and the dynamic nature of data exchanges.

Data onboarding is all about collecting, validating, uploading, consolidating, cleansing, modeling, updating, and transforming data so that it meets the collective needs of the business – in our case the asset manager, fintech, bank, FI, or hedge fund engaged in trading and portfolio investment.

Some of the typical challenges faced during data acquisition, data loading, and data transformation are underlined below:

Data Acquisition and Extraction

  • Constraints in extracting heavy datasets, availability of good APIs
  • Suboptimal solutions like dynamic scraping when APIs are not easily accessible
  • Delay in source data delivery from vendor/client
  • Receiving revised data sets and resolving data discrepancies across different versions
  • Formatting variations across source files like missing/ additional rows and columns
  • Missing important fields/ corrupt data
  • Filename changes

There are different formats in which data is shared – CSV files, ADI files, spreadsheets. It is cumbersome to onboard data in these varied formats.
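
Here is a small Python sketch of how such mixed delivery formats can be funneled through one loader; the folder name is hypothetical, and the vendor-specific ADI format is left as a placeholder since its parser would be proprietary.

    from pathlib import Path

    import pandas as pd

    def load_source_file(path: Path) -> pd.DataFrame:
        """Normalize each inbound file into a DataFrame regardless of delivery format."""
        suffix = path.suffix.lower()
        if suffix == ".csv":
            return pd.read_csv(path)
        if suffix in (".xls", ".xlsx"):
            return pd.read_excel(path)
        # Vendor-specific formats (e.g. ADI) would need their own parser registered here.
        raise ValueError(f"No reader registered for {suffix}")

    frames = [load_source_file(p) for p in Path("inbound").glob("*.*")]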

Data Transformation

Converting data into a form that can be easily integrated with a workflow or pipeline can be a time-consuming exercise in the absence of a standard taxonomy. There is also the issue of creating a unique identifier for securities from among multiple identifiers (CUSIP, ISIN, etc.). In many instances, developers end up cleaning messy files, which is not a worthwhile use of their time.

Data Mapping

With data structures and formats differing between source and target systems, data onboarding becomes difficult: data mapping – mapping the incoming data to the relevant fields in the target system – poses a huge challenge for organizations.
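
A minimal sketch of such a mapping step is shown below. The source field names, target schema, and identifier preference order are assumptions for illustration; a production mapper would be driven by configuration and validated against reference data.

    # Hypothetical source-to-target field map and identifier preference order.
    FIELD_MAP = {"SecName": "security_name", "ISIN_CD": "isin", "CUSIP_CD": "cusip", "Qty": "quantity"}
    IDENTIFIER_PREFERENCE = ("isin", "cusip")  # the first identifier present becomes the unique key

    def map_record(source_row: dict) -> dict:
        """Rename source fields to the target schema and pick a unique security identifier."""
        target = {FIELD_MAP[k]: v for k, v in source_row.items() if k in FIELD_MAP}
        target["security_id"] = next(
            (target[f] for f in IDENTIFIER_PREFERENCE if target.get(f)), None
        )
        return target

    print(map_record({"SecName": "ACME 5.5% 2030", "ISIN_CD": "US0000000000", "Qty": 1500}))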

Data Distribution/Loading

With many firms resorting to the use of spreadsheets and explainer documents, data uploading is not as seamless as it could be. File formatting discrepancies with the downstream systems and data reconciliation issues between different systems could easily be avoided with Intelligent Automation or Administrative AI.

Data Onboarding builds a bridge for better Data Governance

“Without a data infrastructure of well-understood, high-quality, well-modeled, secure, and accessible data, there is little chance for BI success.” Hugh J Watson

When we talk about a business-driven approach to Data Governance, the importance of early wins cannot be overstated – hence the need to streamline Data Onboarding with the right tools and technologies to ensure scalability, accuracy, and transparency, while keeping affordability in mind.

As the volume of data grows, data onboarding challenges will persist, unless a cohesive approach that relies on people, technology, and data is employed. We have provided here two use cases where businesses were able to mitigate their data onboarding challenges with Magic FinServ’s solutions:

After all, comprehensive Data Governance requires crisper Data Onboarding.

Case 1: Investment monitoring platform data onboarding – enabling real-time view of positions data

The Investment Monitoring Platform automates and simplifies shareholder disclosure, sensitive-industries and position-limit monitoring, and acts as a notification system for filing-threshold violations based on market-enriched customer holding, security, portfolio, and trade files. Whenever a new client is onboarded into the application, the client’s implementation team takes care of the initiation, planning, analysis, implementation, and testing of regulatory filings. We analyzed the customer’s data during the planning phase: the fund and reporting structure, holdings, trading regimes, asset types, etc., were examined from the reference data perspective. As part of the solution, after the analysis, the reference data was set up and the source data loaded with the requisite transformations, followed by quality vetting and a completeness check. As a result, our client gained a real-time view of positions data, which keeps flowing into the application in real time.

Case 2: Optimizing product capabilities with streamlined onboarding for regulatory filings

The requirement was for process improvement in configuring jurisdiction rules in the application. The client was also facing challenges with the report analysis that their own clients required for comparing regulatory filings. Streamlining the product and optimizing its performance required a partner with know-how in collecting, uploading, matching, and validating customer data. Magic FinServ’s solution consisted of updating the product data point document – referred to by clients for field definitions, multiple field mappings, translations, code definitions, report requirements, etc. This paved the way for vastly improved data reconciliation between different systems.

The client’s application had features for loading different data files related to securities, positions, transactions, etc., for customizing regulatory rule configuration, pre-processing data files, creating customized compliance warnings, and handling direct or indirect jurisdiction filings. We maximized productivity by streamlining these complex features and documenting them. By enabling the sharing of valuable inputs across teams, errors and omissions in customer data were minimized while the product’s capabilities were enhanced manifold.

The importance of Data Governance and Management can be ascertained from the success stories of hedge funds like Bridgewater Associates, Jana Partners, and Tiger Global. By implementing a robust Data Governance approach, they have been able to direct their focus on high-value stocks (as is the case with Jana Partners) or ensure high capitalization (Tiger Global).

So, it’s your turn now to strategize and revamp your data onboarding!

Paying heed to data onboarding pays enormous dividends

If you have not revamped your data onboarding strategy, it is time to do so now. As a critical element of the Data Governance approach, data onboarding should be done properly, without needless human intervention, and in the shortest span of time to meet the competitive needs of capital markets. Magic FinServ, with its expertise in client data processing/onboarding and proficiency in data acquisition, cleansing, transformation, modeling, and distribution, can guide you through the journey. Professionally and systematically supervised data onboarding results in detailed documentation of data lineage – something very critical during data governance audits and subsequent changes. What better way to prevent data problems from cascading into a major event than doing data onboarding right? A stitch in time, after all, saves nine!

For more information about how we can be of help, write to us at mail@magicfinserv.com

“Noise in machine learning just means errors in the data, or random events that you cannot predict.”

Pedro Domingos

“Noise” – the quantum of which has grown over the years in loan processing – is one of the main reasons why bankers have been rooting for automation of loan processing for some time now. The other reason is data integrity, which gets compromised when low-end manual labor is employed in loan processing. In a poll conducted by Moody’s Analytics, when questioned about the challenges they faced in the initiation of loan processing, 56% of the bankers surveyed answered that manual collection of data was the biggest problem.

Manual processing of loan documents involves:

  • Routing documents/data to the right queue
  • Categorizing/classifying the documents based on type of instruction
  • Extracting information – relevant data points vary by classification and relevant business rules
  • Feeding the extracted information into the ERP, BPM, RPA
  • Checking for soundness of information
  • Ensuring the highest level of security and transparency via an audit trail.

“There’s never time to do it right. There’s always time to do it over.”

With data no longer remaining consistent, aggregating and consolidating dynamic data (from sources such as emails, web downloads, industry websites, etc.) has become a humongous task. Even for static data, the sources and formats have multiplied over the years, so manually extracting, classifying, tagging, cleaning, validating, and uploading the relevant data elements – currency, transaction type, counterparty, signatory, product type, total amount, transaction account, maturity date, effective date, etc. – is no longer a viable option. Adding to the complexity is the lack of standardization in the taxonomy, with each lender and borrower using different terms for the same data element.

Hence the need for automation and integration of the multiple workflows used in loan origination – from the input pipeline and the OCR pipeline, through the pre- and post-processing pipelines, to the output pipeline for dissemination of data downstream – with the added advantage of achieving a standard Taxonomy, at least within your own shop.
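To illustrate what a standard taxonomy can look like in practice, here is a minimal Python sketch. The field names and mappings are purely illustrative (not drawn from any particular lender's format); it simply renames lender-specific labels to canonical data elements before records enter the downstream pipeline.

    # Hypothetical mapping of lender-specific labels to canonical data elements.
    CANONICAL_FIELDS = {
        "ccy": "currency",
        "curr": "currency",
        "cpty": "counterparty",
        "counter_party": "counterparty",
        "txn_type": "transaction_type",
        "trans_type": "transaction_type",
        "mat_dt": "maturity_date",
        "maturity": "maturity_date",
        "eff_dt": "effective_date",
        "amt": "total_amount",
        "total_amt": "total_amount",
    }

    def normalize_record(raw: dict) -> dict:
        """Rename lender-specific keys to the shop-wide taxonomy; keep unknown keys as-is."""
        return {CANONICAL_FIELDS.get(k.lower(), k.lower()): v for k, v in raw.items()}

    # Example: two lenders describing the same trade differently
    print(normalize_record({"Ccy": "USD", "Cpty": "Acme Capital", "Mat_Dt": "2026-03-31"}))
    print(normalize_record({"CURR": "USD", "Counter_Party": "Acme Capital", "Maturity": "2026-03-31"}))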

The benefits of automating certain low-end, repetitive, and mundane data extraction activities

Reducing loan processing time from weeks to days: When the integrity of data is certain, and when all data exchanges are consolidated and centralized in one place instead of existing in silos across the back, middle, and front offices, only then can bankers reduce loan processing time from months and weeks to days.

That is what JPMorgan Chase achieved with COIN. They saved an estimated 360k hours, or 15k days’ worth, of manual effort with their automated contract management platform. It is not hard to imagine the kind of impact this had on the customer experience (CX)!

More time for proper risk assessment: Less time is wasted in keying and rekeying data. With machines taking over from non-technical staff, the AI (Artificial Intelligence) pipelines are not compromised by erroneous or duplicate data stored in sub-optimal systems. With administrative processes streamlined, there is time for high-end functions such as reconciliation of portfolio data, thorough risk assessment, etc.

Timely action is possible: Had banks relied solely on manual processes, it would have taken ages to validate a client, and by then it could have been too late to act.

Ensuring compliance: By automating the extraction of data from the scores of documents that banks are inundated with during loan processing, and by combining the multiple pipelines in which data is extracted, transformed, cleaned, validated against a suitable business rules engine, and then loaded downstream, banks are also able to ensure robust governance and control for meeting regulatory and compliance needs.

Enhances the CX: Automation has a positive impact on CX. Bankers also save dollars in compensation, equipment, staff, and sundry production expenses.

Doing it Right!

One of Magic FinServ’s success stories is a solution for banking and financial services companies that allows them to optimize the extraction of critical data elements (CDEs) from emails and attachments using Magic’s bespoke tool – DeepSightTM for Transaction Processing – and accelerator services.

The problem:

Banks in the syndicated lending business receive a large volume of emails and other documented inputs for processing daily. The key data is embedded in the email message or in the attachment. The documents come in PDF, TIF, DOCX, MSG, XLS, and similar formats. Typically, the client’s team would manually go through each email or attachment containing different Loan Instructions. Thereafter, the critical elements are entered into a spreadsheet and then uploaded and saved in the bank’s commercial loan system.

As is evident, there are multiple pipelines for the input, pre-processing, extraction, and finally output of data, which leads to duplication of effort, is time-consuming, and results in false alerts.

What does the Magic solution do to optimize processing time, effort, and spend?

  • Input Pipeline: Integrates directly with an email box or a secured folder location and executes processing in batches.
  • OCR Pipeline: Images or image-based documents are first corrected and enhanced (OCR pre-processing) before being fed to an OCR system, so that the best possible output is obtained. DeepSightTM can integrate with any commercial or publicly available OCR engine.
  • Data Pre-Processing Pipeline: Pre-processing involves data massaging using several techniques such as cleaning, sentence tokenization, and lemmatization, to feed the data as required by the optimally selected AI models.
  • Extraction Pipeline: DeepSight’s accelerator units accurately recognize the layout, region of interest, and context to auto-classify the documents and extract the information embedded in tables, sentences, or key-value pairs.
  • Post-Processing Pipeline: The post-processing pipeline applies reverse lookup mappings, business rules, etc., to further fine-tune accuracy.
  • Output Storage: Any third-party or in-house downstream or data warehouse system can be integrated to enable straight-through processing.
  • Output: The output format can be provided according to specific needs. DeepSightTM provides data in Excel, delimited, PDF, JSON, or any other commonly used format. Data can also be made available through APIs. Any exceptions or notifications can be routed through emails as well.
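To make the flow above concrete, here is a minimal orchestration sketch in Python. It is not DeepSightTM code – the stage functions are simple stand-ins for the pipelines described above – but it shows how the stages chain together for straight-through processing.

    from dataclasses import dataclass, field

    @dataclass
    class Document:
        """A unit of work moving through the pipelines (fields are illustrative)."""
        source: str                      # e.g. email attachment path or folder location
        text: str = ""
        fields: dict = field(default_factory=dict)

    def input_stage(batch):              # Input pipeline: pull from mailbox / secured folder
        return [Document(source=path) for path in batch]

    def ocr_stage(doc):                  # OCR pipeline: image correction + OCR (stubbed here)
        doc.text = "PLACEHOLDER OCR TEXT"
        return doc

    def preprocess_stage(doc):           # Pre-processing: cleaning, tokenization, etc.
        doc.text = " ".join(doc.text.split()).lower()
        return doc

    def extract_stage(doc):              # Extraction: layout/context-aware field extraction
        doc.fields = {"transaction_type": None, "total_amount": None}
        return doc

    def postprocess_stage(doc):          # Post-processing: reverse lookups, business rules
        doc.fields = {k: v for k, v in doc.fields.items()}
        return doc

    def output_stage(doc):               # Output: JSON/Excel/API push to downstream systems
        return {"source": doc.source, **doc.fields}

    def run(batch):
        results = []
        for doc in input_stage(batch):
            for stage in (ocr_stage, preprocess_stage, extract_stage, postprocess_stage):
                doc = stage(doc)
            results.append(output_stage(doc))
        return results

    print(run(["loan_instruction_001.pdf", "loan_instruction_002.tif"]))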

Technologies in use

Natural language processing (NLP): for carrying out context-specific searches across emails and attachments in varied formats and extracting the relevant data from them.

Traditional OCR: for recognizing key characters (text) scattered anywhere in an unstructured document; it is made much smarter by overlaying an AI capability.

Intelligent RPA: is used to consolidate data from various other sources, such as ledgers, to enrich the data extracted from the documents. Finally, all of this is brought together by a Rules Engine that captures the organization’s policies and processes. With Machine Learning (ML) and a human-in-the-loop approach to truth monitoring, the tool becomes more proficient and accurate with every passing day.
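As a simplified illustration of the kind of context-specific extraction described above – using plain regular expressions rather than a trained NLP model, and an invented instruction text – consider the following sketch.

    import re

    # Toy loan-instruction text; the patterns below are illustrative, not DeepSight's models.
    email_body = """
    Please process a drawdown of USD 2,500,000.00 under the Acme Capital facility,
    effective date 15-Mar-2026, maturity date 15-Sep-2026.
    """

    AMOUNT = re.compile(r"\b(?P<ccy>USD|EUR|GBP)\s(?P<amount>[\d,]+(?:\.\d{2})?)")
    DATE = re.compile(r"\b\d{1,2}-[A-Z][a-z]{2}-\d{4}\b")

    print(AMOUNT.search(email_body).groupdict())   # {'ccy': 'USD', 'amount': '2,500,000.00'}
    print(DATE.findall(email_body))                # ['15-Mar-2026', '15-Sep-2026']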

Multi-level Hierarchy: This is critical for eliminating false positives and negatives, since payment instructions could comprise varying CDEs. The benefits that the customer gets are:

  • Improved precision on Critical Data Elements (CDEs) such as Amounts, Rates, and Dates
  • Containment of false positives and negatives, reducing manual intervention

Taxonomy: Training the AI engine on the taxonomy is important because:

  • It improves precision and the context-specific data extraction and classification mechanism
  • The accuracy of data elements that map to multiple CDEs improves, e.g., Transaction Type, Dates, and Amounts

Human-eye parser: For documents that contain multiple pages and lengthy preambles, you need to delimit tabular content from free-flowing text. The benefits are as follows:

  • Extraction of tabular data, formulas, and instructions with multiple transaction types all require this component for seamless pre- and post-processing

Validation & Normalization: Reduces the manual intervention needed for the exception queue:

  • An extensive business rule engine that leverages existing data will significantly reduce manual effort and create an effective feedback loop for continuous learning
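A minimal sketch of such a rule-driven validation step is shown below, assuming a handful of illustrative CDEs and hard-coded rules (a production rules engine would load these from configuration, not hard-code them).

    from datetime import datetime

    # Illustrative validation rules for extracted critical data elements (CDEs).
    RULES = {
        "currency": lambda v: v in {"USD", "EUR", "GBP", "JPY"},
        "total_amount": lambda v: v is not None and float(str(v).replace(",", "")) > 0,
        "effective_date": lambda v: bool(datetime.strptime(v, "%d-%b-%Y")),
    }

    def validate(record: dict) -> list:
        """Return the list of failed CDEs so only true exceptions hit the manual queue."""
        failures = []
        for cde, rule in RULES.items():
            try:
                ok = rule(record.get(cde))
            except (TypeError, ValueError):
                ok = False
            if not ok:
                failures.append(cde)
        return failures

    print(validate({"currency": "USD", "total_amount": "2,500,000.00", "effective_date": "15-Mar-2026"}))  # []
    print(validate({"currency": "US DOLLAR", "total_amount": "-5", "effective_date": "2026/03/15"}))       # all three fail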

OCR Assembling: Essential for image processing of vintage contracts and documents with low image quality (e.g., vintage ISDAs):

  • Optimize time, cost and effort with the correct OCR solution that delivers maximum accuracy.

Conclusion

Spurred by competition from FinTechs and challenger banks that are using APIs, AI, and ML to maximize the efficiency of loan processing, the onus is on incumbent banks to do the same. The first step is ensuring data integrity with the use of intelligent tools and business-rules engines that make it easier to validate data. It is, after all, much easier to pursue innovation and ensure that SLAs are met when workflows are automated, cohesive, and less dependent on human intervention. So, if you wish to get started and would like more information on how we can help, write to us at mail@magicfinserv.com.

“Wealth managers are standing at the epicenter of a tectonic shift, as the balance of power between offerings and demand undergoes a dramatic upheaval. Regulators are pushing toward a ‘constrained offering’ norm while private clients and independent advisors demand a more proactive role.” – Paolo Sironi, FinTech Innovation

Artificial Intelligence, Machine Learning-based analytics, recommendation engines, next best action engines, etc., are powering the financial landscape today. Concepts like robo-advisory (a $135 Billion market by 2026) for end-to-end self-service investing, risk profiling, and portfolio selection, Virtual Reality / Augmented Reality or Metaverse for Banking and Financial trading (Citi plans to use holographic workstations for financial trading) are creating waves but will take time to reach critical value.

In the meantime, there’s no denying that FinTechs and Financial Institutions must clean their processes first – by organizing and streamlining back, middle, and front office operations with the most modern means available, such as artificial intelligence, machine learning, RPA, and the cloud. Hence the clarion call for making the back, middle, and front office administrative processes of financial institutions the hub for change with administrative AI.

What is administrative AI?

Administrative AI is quite simply the use of Artificial Intelligence-based tools to simplify and streamline administrative processes such as loan processing, expense management, KYC, Client Lifecycle Management / Onboarding, data extraction from industry sources such as SEC and Muni websites, contract management, etc.

Administrative AI signals a paradigm shift in approach – which is taking care of the basics and the less exciting first. It has assumed greater importance due to the following reasons:

  1. Legacy systems make administrative processes chaotic and unwieldy and result in duplication of effort and rework:

Back- and middle-office administrative processes are cumbersome, repetitive, and sometimes unwieldy – but they are crucial for business. For example, if fund managers spend their working hours extracting data and cleaning Excel sheets of errors, there will be little use for an expensive AI engine that predicts risks in investment portfolios or models alternative scenarios in real time. With AI, life becomes easier.

  2. Administrative AI increases workforce productivity and reduces error rates, resulting in enhanced customer satisfaction:

AI is best suited to processes that are high volume and where the incidence of error is high, such as business contracts management, regulatory compliance, payments processing, onboarding, loan processing, etc. An example of how Administrative AI reduces turnaround time and costs is COIN – the contract intelligence platform developed by JPMorgan Chase that reviews loan agreements in record time.

  3. Administrative costs are running sky-high: In 2019, as per a Forbes article, banks spent an estimated $67 billion on technology. The spending on administrative processes is still humongous. As per a McKinsey analysis, around 70% of IT spend goes toward running IT and servicing the technical debt that results from unwieldy processes and silos.
  4. Without reaching a critical mass of process automation, analytics, and high-quality data fabric, organizations risk ending up paralyzed.

And lastly, even for the moonshot project, you’ll need to clean up your core processes first. The focus on financial performance does not mean that you sacrifice research and growth. However, if processes that need cleaning and automation are not cleaned and automated, then the business could be saddled with “expensive start-up partnerships, impenetrable black-box systems, cumbersome cloud computational clusters, and open-source toolkits without programmers to write code for them.” (Source: Harvard Business Review)

So, if businesses do not wish to squander the opportunities, they must be practical with their approach. Administrative AI for Fintechs and FIs is the way forward.

Making a difference with Magic DeepSightTM Solution Accelerator

Administrative AI is certainly a great way to achieve cost reduction with a little help from the cloud, machine learning, and API-based AI systems. In our experience, solutions for such administrative tasks provide significant benefits in terms of productivity, time, and accuracy, while improving the quality of the work environment for middle- and back-office staff. For banks, capital markets, global fund managers, promising FinTechs, and others, a bespoke solution that can be adapted for every unique need, like DeepSightTM, can make all the difference.

“Magic DeepSightTM is an accelerator-driven solution for comprehensive extraction, transformation, and delivery of data from a wide range of structured, semi-structured, and unstructured data sources, leveraging the cognitive technologies of AI/ML along with other methodologies to provide a holistic last-mile solution.”

Success Stories with DeepSightTM

Client onboarding/KYC

  • Extract and process a wide set of structured/unstructured documents (e.g., tax documents, bank statements, driver’s licenses, etc.)
  • From diverse data sources (email, pdf, spreadsheet, web downloads, etc.)
  • Posts fixed-format output to several third-party and internal case-management applications such as Nice Actimize

Trade/Loan Operations

  • Trade and loan operation instructions are often received as emails and attachments to emails.
  • DeepSightTM intelligently automates identifying the emails, classifying and segregating them in folders.
  • The relevant instructions are then extracted from emails and documents to ingest the output into order/loan management platforms.

Expense Management

  • Invoices and expense details are often received as PDFs or Spreadsheets attached to emails
  • DeepSightTM identifies the type of invoice – e.g., deal-related, non-deal-related, or related to a business function such as Legal or HR.
  • Applies business rules on the extracted output to generate general ledger codes and item lines to be input in third-party applications (e.g., Coupa, SAP Concur).

Website Data Extraction

  • Several processes require data from third party websites e.g., SEC Edgar, Muni Data.
  • This data is typically extracted manually resulting in delays.
  • DeepSightTM can be configured to access websites, identify relevant documents, download the same and extract information.
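A rough sketch of scheduled website data extraction is shown below; the index URL and filing-type filter are placeholders for illustration only, not a documented configuration.

    import requests

    # Illustrative sketch of scheduled website data extraction.
    INDEX_URL = "https://www.example.com/filings/index.json"   # placeholder index endpoint
    WANTED_TYPES = {"10-K", "10-Q"}

    def fetch_relevant_documents():
        index = requests.get(INDEX_URL, timeout=30).json()
        return [
            entry["url"]
            for entry in index.get("filings", [])
            if entry.get("type") in WANTED_TYPES
        ]

    for url in fetch_relevant_documents():
        print("download and extract:", url)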

Contracts Data Extraction

  • Contract/Service/Credit agreements are complex and voluminous text-wise. Also, there are multiple changes in the form of renewals and addendums.
  • Therefore, managing contracts is a complex task and requires highly skilled professionals.
  • DeepSightTM provides a configured solution that simplifies buy-side contract/service management.
  • Combined with Magic FinServ’s advisory services, the buy-side firm’s analyst gets the benefits of a virtual assistant.
  • Not only are the errors and omissions that are typical in human-centric processing reduced significantly, but our solution also ensures that processing becomes more streamlined as documents are categorized according to type of service, and for each service provider, only relevant content is identified and extracted.
  • Identifies and segregates different documents and also files all documents for a particular service provider in the same folder to enable ease of access and retrieval.
  • A powerful business rules engine is at work in the configuration, tagging, and extraction of data.
  • Lastly, a single window display ensures better readability and analysis.

Learning from failures!

Before we conclude, consider the example of a challenger bank that let customers set up an account within 10 minutes and gave them access to money management features and a contactless debit card in record time – proof of how investor preferences are changing. It was once a success story that every fintech wanted to emulate. Today, it is being investigated by the Financial Conduct Authority (FCA) over potential breaches of financial crime regulations (Source: BBC), and there have been reports of several accounts being frozen on account of suspicious activity. The bank has also posted losses amounting to £115 million ($142 million) in 2020/21, and its accountants have raised concerns about the “material uncertainty” of its future.

Had they taken care of their administrative processes, particularly those dealing with AML and KYC? We may never know. But what we do know is that it is critical to make administrative processes cleaner and more automated.

Not just promising FinTechs, every business needs to clean up its administrative processes with AI:

Today’s business demands last-mile process automation, integrated processes, and a cleaner data fabric that democratizes data access and use across a broad spectrum of financial institutions such as Asset Managers, Hedge Funds, Banks, FinTechs, Challengers, etc. Magic FinServ’s team not only provides advisory services; we also get to the heart of the matter. Our hands-on approach, leveraging Magic FinServ’s FinTech Accelerator Program, helps FinTechs and FIs modernize their platforms to meet emerging market needs.

For more information about the Magic Accelerator, write to us at mail@magicfinserv.com or visit our website: www.magicfinserv.com

The Buy-Side and Investment Managers thrive on data – amongst the financial services players, they are probably the most data-intensive. However, while some have reaped the benefits of a well-designed and structured data strategy, most firms struggle to realize the intended benefits, primarily because of challenges in the consolidation and aggregation of data. To be fair, these Data Consolidation and Aggregation challenges are mostly due to gaps in their data strategy and architecture.

Financial firms’ core Operational and Transactional processes, and the follow-on Middle Office and Back Office activities such as reconciliation, settlements, regulatory compliance, transaction monitoring, and more, depend on high-quality data. However, if data aggregation and consolidation are less than adequate, the results are skewed. As a result, investment managers, wealth managers, and service providers are unable to generate accurate and reliable insights on Holdings, Positions, Securities, Transactions, etc., which is bad for trade and shakes investor confidence. Recent reports of a leading Custodian’s errors in account setup due to faulty data, resulting in less-than-eligible Margin Trading Limits, are a classic example of this problem.

In our experience of working with many buy-side firms and financial institutions, the data consolidation and aggregation challenges are largely due to:

Exponential increase in data in the last couple of years: Data from online and offline sources must both be aggregated and consolidated before being fed into the downstream pipeline in a standard format for further processing.

Online data primarily comes from these three sources:

  • Market and Reference Data providers
  • Exchanges which are the source of streaming data
  • Transaction data from in-house Order Management Systems or from prime brokers and custodians; this often arrives in different file formats, types, and taxonomies, thereby compounding the problem.

Offline data also comes through emails for clarifications and reconciliation – with the data embedded in email bodies, attachments such as PDFs, web downloads, etc. – which too must be extracted, consolidated, and aggregated before being fed into the downstream pipeline.

Consolidating multiple taxonomies and file types of data into one: The data that is generated either offline or online comes in multiple taxonomies and file types, all of which must be consolidated into one single format before being fed into the downstream pipeline. Several trade organizations have invested heavily to create Common Domain Models for a standard Taxonomy; however, this is not yet available across the entire breadth of asset and transaction types.

Lack of real-time information and analytics: Investors today demand real-time information and analytics, but given the increasing complexity of the business landscape and an exponential increase in the volume of data, it is difficult to keep up with these rising expectations. From onboarding and integrating content to ensuring that investor and regulatory requirements are met, many firms may be running out of time unless they revise their data management strategy.

Existing engines or architecture are not designed for effective data consolidation: Data is seen as critical for survival in a dynamic and competitive market – and firms need to get it right. However, most of the home-grown solutions or engines are not designed for effective consolidation and aggregation of data into the downstream pipeline leading to delays and lack of critical business intelligence.

Magic FinServ’s focused solution for data consolidation and integration

Not anymore! Magic FinServ’s Buy-Side and Capital Markets focused solutions, leveraging new-age technology like AI (Artificial Intelligence), ML (Machine Learning), and the Cloud, enable you to consolidate and aggregate your data from several disparate sources, enrich your data fabric from static data repositories, and thereby provide the base for real-time analytics. Our end-to-end solution starts by identifying where your processes are deficient and what is required for true digital transformation.

It begins with an understanding of where you stand as far as data consolidation and aggregation are concerned. Magic FinServ is EDMC’s DCAM Authorized Partner (DAP). This industry-standard framework for Data Management (DCAM), curated and evolved from the synthesis of research and analysis of Data Practitioners across the industry, provides an industrialized process for analyzing and assessing your Data Architecture and overall Data Management Program. Once the assessment is done, specific remediation steps, coupled with the right technology components, help resolve the problem.

Some of the typical constraints or data impediments that prevent financial firms from drawing business intelligence for transaction monitoring, regulatory compliance, and reconciliation in real time are as follows:

Data Acquisition / Extraction

  • Constraints in extracting heavy datasets and the availability of good APIs
  • Suboptimal solutions like dynamic scraping when APIs are not easily accessible
  • Delay in source data delivery from vendor/client
  • Receiving revised data sets and resolving data discrepancies across different versions
  • Formatting variations across source files, like missing/additional rows and columns
  • Missing important fields / Corrupt data
  • Filename changes

Data Transformation

  • Absence of a standard Taxonomy
  • Creating a unique identifier for securities amongst multiple identifiers (CUSIP, ISIN, etc.)
  • Data arbitrage issues due to multiple data sources
  • Agility of Data Output for upstream and downstream system variations
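As a simple illustration of the identifier problem noted above, the following Python sketch (the precedence order is an assumption, not an industry standard) derives one internal identifier from whichever external identifiers are present on a record.

    # Illustrative consolidation of multiple security identifiers into one internal ID.
    ID_PRECEDENCE = ("isin", "cusip", "sedol", "ticker")

    def internal_security_id(record: dict) -> str:
        """Pick the highest-precedence identifier present and prefix it with its scheme."""
        for scheme in ID_PRECEDENCE:
            value = record.get(scheme)
            if value:
                return f"{scheme.upper()}:{value.strip().upper()}"
        raise ValueError("No recognised identifier on record")

    print(internal_security_id({"cusip": "037833100", "ticker": "AAPL"}))        # CUSIP:037833100
    print(internal_security_id({"isin": "us0378331005", "cusip": "037833100"}))  # ISIN:US0378331005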

Data Distribution/Loading

  • File formatting discrepancies with the downstream systems
  • Data Reconciliation issues between different systems

How do we do it?

Client Success Stories: Why partner with Magic FinServ

Case Study 1: For one of our clients, we optimized data processing timelines and reduced time and effort by 50% by cutting down the number of manual overrides needed to identify the asset type of new securities. We did this by analyzing the data, identifying patterns, extracting the security issuer, and conceptualizing rule-based logic to generate the required data. Consequently, in the first iteration itself, manual intervention was required for only 5% of the records that had earlier been updated manually.

In another instance, we enabled the transition from manual data extraction across multiple worksheets to a more streamlined and efficient process. We created a macro that selects multiple source files and uploads their data in one go – saving time, resources, and dollars. The macro fetched the complete data in the source files even when filters had (accidentally) been left applied to the data. The tool was scalable, so it could easily be reused for similar process optimizations. Overall, it reduced data extraction effort by 30-40%.
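The client tool was an Excel macro; a rough Python equivalent using pandas (file paths and sheet layout here are illustrative, not the client's actual workbook structure) could look like this.

    import glob
    import pandas as pd

    def consolidate(source_glob: str) -> pd.DataFrame:
        """Read every sheet of every matching workbook and stack them into one table."""
        frames = []
        for path in glob.glob(source_glob):
            # read_excel reads the underlying data, so any filters a user left applied
            # in the workbook UI do not hide rows from the extraction.
            for sheet_name, frame in pd.read_excel(path, sheet_name=None).items():
                frame["source_file"] = path
                frame["source_sheet"] = sheet_name
                frames.append(frame)
        return pd.concat(frames, ignore_index=True)

    # consolidated = consolidate("incoming/*.xlsx")
    # consolidated.to_csv("consolidated_extract.csv", index=False)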

Case Study 2: We have worked extensively in optimizing reference data. For one prominent client, we helped onboard the latest Bloomberg industry classification, and updated data acquisition and model rules. We also worked with downstream teams to accommodate the changes.

The complete process of setting up a new security – from data acquisition to distribution to downstream systems – took around 90 minutes, and users had to wait that long before they could trade the security. We conceptualized and created a new workflow for creating a skeleton security (a security with only the mandatory fields), which can be pushed to downstream systems in 15 minutes. When a security is created in skeleton mode, only the mandatory data sets/tables are updated and subsequently processed. Identifying those database tables was the main challenge, as no documentation was available.
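As an illustration of the skeleton-versus-enriched split, the field lists below are assumptions; the actual mandatory set depends on the client's security master.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SkeletonSecurity:
        # Mandatory fields: enough to push a tradable "skeleton" downstream quickly.
        internal_id: str
        identifier: str          # e.g. ISIN or CUSIP
        asset_type: str
        currency: str

    @dataclass
    class EnrichedSecurity(SkeletonSecurity):
        # Enrichment fields completed later, once the full setup finishes.
        issuer: Optional[str] = None
        industry_classification: Optional[str] = None
        coupon: Optional[float] = None
        maturity_date: Optional[str] = None

    skeleton = SkeletonSecurity("SEC-000123", "US0378331005", "Equity", "USD")
    print(skeleton)   # pushed downstream in minutes; enrichment follows asynchronously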

Beyond the above, we have worked extensively with financial firms to keep them up to date with the latest – whether it be regulatory data processing, extraction of data from multiple exchanges, investment monitoring platform data onboarding, or crypto market data processing. So, if you want to know more, visit our website, or write to us at mail@magicfinserv.com.

Money laundering is a crime, a fraudulent activity to cleanse “dirty” money by moving it in and out of the financial system without getting detected. This takes a big toll on banks and financial institutions as they end up paying hefty fines and penalties for anti-money laundering breaches.

Often changes in regulations or sanctions convert otherwise legal money into “dirty” money requiring banks and FIs to report deposits and transactions and also freeze them. Inadvertently releasing these funds could also result in regulatory action.

Constantly changing AML rules require retraining of staff and changes to workflows and case tools. Until the staff becomes adept at the new rules, errors and omissions are a huge risk.

A typical money laundering scheme looks something like below.

  • Collecting and depositing dirty money in a legal account.
  • Since banks in the US must report cash deposits above the $10,000 threshold, scammers deposit smaller amounts to avoid detection, using false invoices, made-up names, etc.
  • Afterwards, they take out the dirty money via purchases of property and other luxury items through shell companies.
  • Through this process, the money appears legitimate and can be taken out of the system.
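A toy structuring check along these lines is sketched below; the near-threshold fraction, window, and minimum count are illustrative parameters, not regulatory values.

    from collections import defaultdict
    from datetime import date, timedelta

    # Flag accounts with several deposits just under the reporting threshold in a short window.
    THRESHOLD = 10_000
    NEAR_FRACTION = 0.9         # "just under" = within 90-100% of the threshold
    WINDOW = timedelta(days=7)
    MIN_HITS = 3

    def flag_structuring(deposits):
        """deposits: iterable of (account, date, amount) tuples."""
        by_account = defaultdict(list)
        for account, day, amount in deposits:
            if THRESHOLD * NEAR_FRACTION <= amount < THRESHOLD:
                by_account[account].append(day)
        flagged = set()
        for account, days in by_account.items():
            days.sort()
            for i in range(len(days)):
                if sum(1 for d in days if days[i] <= d <= days[i] + WINDOW) >= MIN_HITS:
                    flagged.add(account)
                    break
        return flagged

    deposits = [
        ("A-1", date(2024, 1, 2), 9_800),
        ("A-1", date(2024, 1, 3), 9_500),
        ("A-1", date(2024, 1, 5), 9_900),
        ("B-2", date(2024, 1, 4), 4_000),
    ]
    print(flag_structuring(deposits))   # {'A-1'}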

With regulators across the world coming down heavily on any financial institution found negligent in AML compliance, many banks and financial institutions are turning to machine learning, big data, AI, and analytics to ensure regulatory compliance and save themselves from hefty penalties and fines, or from being named a defaulter. They are also preventing the disruption to services that follows when costly investigations ensue because of flaws or breaches in AML. Though AML compliance and processing can seem like a gigantic exercise, it is primarily about collating data and drawing meaningful insights using advanced rules and machine learning.

Quality of data is either an impediment or an asset

Whether it is investigating anomalies, raising the red flag in time, or ensuring accurate customer profiling (watchlist or sanctions screening), the quality of data is of paramount importance. It is either an impediment that throws up false positives, or an asset that streamlines processes and delivers cost-effectiveness and efficiency while ensuring compliance.

So, before you proceed with automating AML processing through the use of automation tools and machine learning, you need to ask –

Is my data clean?

While machine learning has multiple benefits, implementing it is not easy.

  1. As underlined earlier, data today is like a many-headed hydra – emanating from many sources and in multiple formats: PDFs, invoices, emails, scanned text, spool files, etc.
  2. Good data is an asset; bad data is an impediment that results in poor decisions.
  3. Most machine learning technology is about identifying just the relevant data from terabytes of available data and self-learning over time to become more efficient. However, it needs to be coupled with other technologies to help cleanse the data – and if you are not efficient at cleaning it, you will never get the desired results.

Unfortunately, most data-related work even today is primarily the responsibility of the back-end staff of banks and FIs. The manual process makes it expensive and time-consuming. Not just that, human capability limits the amount of data that can be optimally processed and hence results in potential errors and exposure.

The result – delays and late filing of suspicious activity reports (SARs); time, resources, and money wasted in investigations; poor customer experience (duplication of effort during Know Your Customer (KYC) and onboarding); politically exposed persons (PEPs), offenders, and others on the watchlist evading detection; and so on. When you fail to spot a suspicious transaction in time, or to scale up as needed, you end up bearing the burden of costly fines later.

Magic’s DeepSightTM Solution raising the bar in fighting money laundering

AI- and Machine Learning-aided solutions help in finding patterns of unlawful movement of money, like layering and structuring, deciphering suspicious activities in time, accurately identifying customers on the sanctions list, transaction monitoring, risk-based monitoring, investigations, and enterprise-wide reporting of suspicious activities. However, the efficiency of these tools is limited by the amount of clean data available. Enter Magic DeepSightTM, a tool leveraging AI, ML, and a host of other automation technologies, embedded with rules engines and workflows, to deliver extensive amounts of clean data.

Reading like a human but faster: Magic FinServ’s OCR technology and form-parsing intelligence use advanced technologies like natural language processing (NLP), computer vision, and neural network algorithms to read like humans, only much faster. From tons of unstructured data in the form of text, characters, and images, it figures out the relevant fields with ease. What is time-consuming and tedious for the average staff is made easy with Magic DeepSightTM.

Scaling data cleansing effort exponentially: The importance of cleaning data at scale can be realized from the fact that if it is not done at an exponential pace, machines will end up learning from untrustworthy data. Magic DeepSightTM leverages RPA, APIs, and workflows to extract data from various sources, and to compare and resolve errors and omissions.

Keeping track of changing rules: AML rules change frequently, and the lists of sanctioned people and entities keep changing too. In a manual operation, this is bound to cause problems. Magic DeepSight™ leverages rules engines in which rule changes can be updated to ensure uniform and complete adherence to the new rules.

Identifying customers accurately even when information changes: Digitalization has amplified the effort that firms must put in to ensure AML compliance. Customers move, change names, addresses, and other information that sets them apart. Keeping up to date is a tedious and time-consuming affair. Magic DeepSightTM resolves entities and identifies customers accurately.
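A simplified illustration of name screening when customer details drift is shown below; the watchlist entries and similarity threshold are invented for the example, and DeepSightTM's entity resolution is far richer than this.

    from difflib import SequenceMatcher

    # Screen a customer name against a watchlist despite reordered names and spelling drift.
    WATCHLIST = ["John A. Doe", "Acme Shell Holdings Ltd"]

    def normalize(name: str) -> str:
        cleaned = name.lower().replace(".", "").replace(",", "")
        return " ".join(sorted(cleaned.split()))

    def screen(customer_name: str, threshold: float = 0.85):
        hits = []
        for listed in WATCHLIST:
            score = SequenceMatcher(None, normalize(customer_name), normalize(listed)).ratio()
            if score >= threshold:
                hits.append((listed, round(score, 2)))
        return hits

    print(screen("Doe, John A"))   # [('John A. Doe', 1.0)]  -- same person, reordered
    print(screen("Jon A Doe"))     # still matched despite the spelling drift
    print(screen("Jane Smith"))    # []  -- no watchlist hit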

Keeping pace with sophisticated transaction monitoring: Transaction monitoring is at the heart of anti-money laundering, and the sophisticated means adopted by fraudsters require more than manual effort to ensure timely detection. Establishing a clear lineage of the data source is one of the foremost challenges that enterprises face today. Magic DeepSightTM can read transactions from the source, create a client profile, and look for patterns that satisfy the money laundering rules.

Act Now! Fight Fraud and Money Laundering Activities

The time to act is now. You can prevent money launderers from having their way by investing in the right tools – tools that extract data efficiently at half the time and cost, and that can be integrated seamlessly into your AML workflows.

Our research data indicates that the 45% of businesses that invested in more AI/ML deployments and had clearer data and technology strategies have fared relatively better in garnering a competitive advantage than the remaining 55% that are still stuck in the experimental phase. Do not take the risk of falling further behind. Download our brochure on AML compliance to know more about our offerings, or write to us at mail@magicfinserv.com.
