A New World of Banking

The rise of fintech over the last half decade or so has taken the financial world by storm. Research suggests that there are now more than 7,500 fintech firms around the world, which have collectively raised nearly USD 109 billion in investment. The sector raked in a record-breaking USD 54 billion of investment in 2018 and USD 10 billion within the first quarter of 2019.

Clearly, the hunger for fintech is growing, and with it, the fear among banks and traditional financial businesses about potentially lost revenue and customers. The fact that customers increasingly prefer these non-traditional competitors does little to calm the uncertainty.

As established players in the financial services industry wake up to this new business dynamic, the majority are attempting to collaborate with fintech: to leverage its ever-expanding ecosystem, turn the innovation to their advantage, and address the risk to their existing business. Research reveals that as many as 82% of incumbents in the financial industry expect to expand their partnerships with fintech players going forward.

Fintech – A Force to Reckon With

Fintech can rightly be characterized as a movement that has brought disruptive and transformative innovation to financial services through cutting-edge technology. Unlike traditional financial institutions, fintech startups have the advantage of not being burdened by age-old regulatory constraints, legacy systems and processes. This has allowed them to move faster and come up with solutions that compete directly with conventional ways of delivering financial services.

Another aspect that has fuelled the rapid progression of fintech is an entirely new generation of well-informed and connected mobile consumers who continue to reshape financial service requirements. With time, fintech companies have managed to rope in these digital natives with smart banking platforms. This has given them a head start in the race to capitalize on the 1.7 billion adults who, according to the World Bank's Global Findex Database 2017, are naturally inclined towards smart fintech services.

On the other hand, major players in the financial services sector and capital market incumbents have failed to keep pace on this front. Burdened with massive structural costs, hefty capital charges, and stagnant revenues, this sector continues to score low on the innovation index. Additionally, the relentless pressure to stay compliant and adhere to regulatory guidelines leaves organizations short of the bandwidth to invest time and resources in initiatives that can improve margins.

There's no denying that in the digital age, customer experience (CX) is the final battleground for businesses, and here fintech has a natural advantage. By placing CX above everything else, fintech offerings have been able to provide their users with a steady stream of benefits. For instance, by leveraging smart application programming interfaces (APIs), fintech companies are able to nurture a healthy community of third-party partners around their native software platform. Open APIs allow fintech players to expand their customer services by enabling third-party partners and developers to create their own apps and layers on top of the middleware.

Apart from this, the algorithmic design and data-rich environment of this sector have proven ideal for machine learning (ML), artificial intelligence (AI) and blockchain-driven product deployments. Developers today are able to leverage these technologies to simplify and optimize cumbersome, effort-intensive processes such as compliance, credit checks, risk management, and P2P payments.

But there's good news. These technologies can yield similar results in capital markets as well, provided they are strategically implemented in the right areas. For instance, process automation with Robotic Process Automation (RPA) can help organizations working in the capital markets space replace manual legacy systems, make their systems compliant with Know Your Customer (KYC), Anti-Money Laundering (AML) and other regulations, reconcile reports, and connect middle- and back-office functions. Meanwhile, more contemporary technologies like AI can simplify cumbersome processes such as trade settlement, compliance reporting, contract management and accounts payable.

Blockchain is another area that promises to yield unprecedented gains for capital market players; no wonder the financial services industry has witnessed some of the biggest use cases of this technology. For example, in digital trading, blockchain is helping organizations reduce settlement times. In the current trading architecture, a single transaction can take days to settle, while a blockchain-based settlement solution significantly curbs this turnaround time: a cryptocurrency token that serves as a proxy for a particular transaction is immediately transferred to the wallet of the beneficiary, confirming completion of the settlement and the ledger update.

Extrapolating into the Future of the Financial Services Sector

With the gradual implementation of next-generation technologies like ML, neural networks with long short-term memory, blockchain, AI and robo-advisors, fintech will continue to gain trust and popularity among customers. Some 73% of millennials are eager to shift to a new financial paradigm in which service products from technology companies like Google, Apple, PayPal, and Amazon are more exciting, intuitive, and CX-friendly than anything traditional financial players currently provide.

The times are clearly changing. Fintechs are fast opening the virtual vault doors to innovation in the once impenetrable banking and financial services sector. Can traditional players take the bold steps necessary to match the frictionless experience that is the new norm, or will they eventually lose ground to the new entrants? Only time will tell.


Security Token Offerings (STOs) seem to be the next wave of hype, using blockchain concepts to transform current security instruments (equity, debt, derivatives, etc.) into digitized securities. STOs have gained popularity and momentum recently because of the lack of regulation in the ICO world, which produced a large number of outliers among the ICOs of 2018. Many startups are building STO platforms on programmable blockchains like Ethereum, and recent developments in STO platforms suggest the space is moving towards an ecosystem with defined standards as well. Seeing the traction on GitHub around standards such as ERC-1400, ERC-1410, ERC-1594, ERC-1643 and ERC-1644 gave us the opportunity to think about how a technology company like ours (specialized in ensuring quality standards for blockchain-based applications on Ethereum) can contribute. We started our journey by defining the complete STO processing cycle in the context of real-world usage (from a functional perspective) on the underlying Ethereum platform (from a technology perspective).

Before actually defining the STO lifecycle, it is important to list the major participants and their roles:

  1. Issuers – Legal entities that develop, register and sell a security to raise funds for their business expansion.
  2. Investors – Entities that invest in securities in expectation of financial returns.
  3. Legal & Compliance Delegates – Entities that ensure all participants and processes comply with the rules and regulations defined by the jurisdiction.
  4. KYC/AML service providers – Entities that provide KYC/AML checks for the required participants.
  5. Smart contract development communities – developers, smart contract auditors and QA engineers.

Most of the companies claiming to provide STO platforms use Ethereum as the underlying programmable blockchain, with a few exceptions. The rationale for using Ethereum as the first choice is that it is a Turing-complete platform for building complex decentralized applications, with the logic defined inside smart contracts (Solidity being the most popular programming language among developer communities). In parallel, Ethereum continues to mature, becoming more secure and improving in performance and scalability as new features and improvements are introduced. Very few companies use platforms other than Ethereum to build their STO processing platforms, and some are trying to build a completely new blockchain designed specifically for STOs. This last approach seems overly optimistic: it could take years to build such a system, while the current momentum around STOs is unlikely to wait that long.

Based on the above, we can now define the generic STO lifecycle from a functional standpoint in two phases:

  1. Primary Market – issuance of the STO by the issuer, and investment in the STO by investors
  2. Secondary Market – trading of the security token, either on exchanges or over the counter (OTC)

Primary Market

  1. Issuance of the STO by the issuer –
    1. Registration of the issuer
    2. Creation of the STO token
    3. Approval of the STO by Legal & Compliance
    4. Issuance of the STO after Legal & Compliance approval
  2. Investment in the STO by the investor –
    1. Registration of the investor
    2. KYC/AML checks for the investor
    3. Whitelisting of the investor for STOs after KYC/AML
    4. Investment in permitted STOs, based on the whitelisting for the corresponding STO

Before we get into the technical details of the underlying blockchain technology, we need to define the STO platform's technical architecture from a user perspective. Every STO platform in existence today, in whatever state, has –

  1. A Presentation Layer (user interface built with any chosen front-end technology, plus integration with wallets)
  2. A Business Layer (JavaScript libraries that provide an interface to interact with smart contracts)
  3. A Data Layer (Ethereum data stored in blocks in the form of key-value storage)

Now let's define a high-level overview from a technical standpoint, assuming that the STO platform uses Ethereum as the underlying blockchain and that the backend layer has already been set up (a minimal code sketch follows the list) –

  1. Creation of an external account for every participant, to bring everyone onto the Ethereum blockchain
  2. Definition of off-chain and on-chain transactions for all the activities defined for issuers and investors
  3. Merging of off-chain data with on-chain data
  4. Development of smart contracts
    1. A standard smart contract for each STO, depending on the jurisdiction, covering generic processes among the required participants
    2. An STO-specific smart contract implementing business and regulatory rules
    3. Smart contracts containing all business logic, especially for transaction processing
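To make the first three steps concrete, here is a minimal Python sketch using web3.py and eth-account. The participant, the approval message and the way its hash would later be anchored on-chain are purely illustrative assumptions, not a production flow:

from web3 import Web3
from eth_account import Account
from eth_account.messages import encode_defunct

# 1. Create an externally owned account for a new participant (here, an investor).
investor = Account.create()
print("Investor address:", investor.address)

# 2. Off-chain activity: the investor signs an approval message (e.g. acceptance of STO terms).
terms = "Investor agrees to STO-XYZ terms v1"
signature = investor.sign_message(encode_defunct(text=terms))

# 3. Before the result is merged on-chain, anyone can verify who signed it; the
#    keccak256 hash of the message is what a smart contract would store on-chain.
recovered = Account.recover_message(encode_defunct(text=terms), signature=signature.signature)
assert recovered == investor.address
print("Approval hash to anchor on-chain:", Web3.keccak(text=terms).hex())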

Based on the expertise of our group, Magic FinServ can contribute significantly to the development of smart contracts (written in Solidity) as well as to the auditing of those contracts.
For more details visit https://www.magicblockchainqa.com/our-services/#smart-contract-testing

In the next part, we will expand on the high-level technical overview above, together with the high-level functional overview, and then provide more insight into each of these functional and technical flows.

When the ERC-20 standard came into existence, it simplified ICO token interoperability across wallets and crypto exchanges for all ERC-20-compliant tokens. Having standards for any process not only drives broader acceptance but also improves the interoperability needed to build an ecosystem. Being a technology-obsessed firm, we have always encouraged standards to be put in place. An accepted standard not only helps developers (among the strongest stakeholders in the ecosystem, responsible for providing workable solutions with the available technology) build that ecosystem, it also minimizes the changes needed to implement interoperability. No system in existence today is free of errors or failures in real-world usage, and using global standards brings another vital advantage: resolutions for such errors and failures are easier to find, because similar cases have usually already been resolved by the wider technology community.

Today, it is of utmost importance to have standards that can not only integrate multiple systems (STO platforms, wallets and exchanges) with minimal changes but also make security tokens easily interoperable across wallets and exchanges. Security Token Offerings cannot be an exception: they represent one of the biggest and most complicated technological advances, transforming the existing world of securities into digitized securities with automated processing on top of blockchain technology.

The recent traction on ERC-1400 (now moved to ERC-1411) has helped define standard libraries for the complete STO lifecycle, especially for on-chain/off-chain transactions. This compilation of requirements, gathered from the various participants involved along with probable interfaces, has excited technologists globally, because it has the potential to simplify the entire STO lifecycle. On GitHub, for instance, many active developers are participating in the discussions and sharing their experiences.

Ethereum Standards (ERC, an abbreviation of Ethereum Request for Comments) related to regulated tokens

The standards below are worth a read to understand in depth the rationale behind targeting more regulated transactions based on Ethereum tokens –

  1. ERC-1404 : Simple Restricted Token Standard
  2. ERC-1462 : Base Security Token
  3. ERC-884 : Delaware General Corporation Law (DGCL) compatible share token

ConsenSys claims to have implemented ERC-1400 in a GitHub repository and has named the solution the Dauriel Network. GitHub says, “Dauriel Network is an advanced institutional technology platform for issuing and exchanging tokenized financial assets, powered by the Ethereum blockchain.”

ERC-1400 (Renamed to ERC-1411) Overview

Smart contracts will eventually control all activities automatically: the security issuance process, the trading lifecycle from an issuer and investor perspective, and the event processing related to the security token. Let's try to understand the ERC-1400 standard libraries with respect to each activity in the STO lifecycle:

  1. ERC-20: Token Standard
  2. ERC-777: A New Advanced Token Standard
  3. ERC-1410: Partially Fungible Token Standard
  4. ERC-1594: Core Security Token Standard
  5. ERC-1643: Document Management Standard
  6. ERC-1644: Controller Token Operation Standard
  7. ERC-1066: Standard way to design Ethereum Status Codes (ESC)
The methods defined inside each standard (Solidity smart contract interfaces) can be mapped, at the activity level, to the three stages – Pre-Market, Primary Market and Secondary Market – and represented pictorially.

Before defining the mapping between the standard library methods and the activities across all three stages, it is important to distinguish off-chain and on-chain activities from those that will be processed outside the STO platform altogether. Off-chain activities can be carried out outside the main chain of the underlying blockchain platform and then merged. However, integration will be needed for all activities performed outside an STO platform, and here several standards (e.g. ERC-725 and ERC-735, defined for identity management) play an important role.

All pre-market activities are expected to happen outside the STO platform, since they revolve around documentation: structuring the offering and preparing the required documents with all internal and external stakeholders, including the legal team, to ensure regulatory compliance. To bring a reference to all pre-market documentation onto the STO platform, a cryptographic representation of each document can be used effectively, as sketched below.
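As a minimal Python illustration, assuming an ERC-1643-style setDocument(name, uri, hash) interface and illustrative file names, the document itself stays off-chain and only its fingerprint (plus a URI) needs to live on-chain:

from web3 import Web3

with open("private_placement_memorandum.pdf", "rb") as f:
    doc_hash = Web3.keccak(f.read())          # cryptographic representation of the document

doc_name = Web3.keccak(text="PPM-v1")         # bytes32 key identifying the document
doc_uri = "ipfs://.../ppm-v1.pdf"             # off-chain storage location (placeholder)

# An ERC-1643-style token contract would store these via setDocument(name, uri, hash),
# letting any participant later re-hash the file and compare it with the on-chain value.
print(doc_name.hex(), doc_uri, doc_hash.hex())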

Similarly, the KYC/AML process can happen off-chain, provided it is properly integrated with the STO platform through proper identity management (standards around identity management such as ERC-725 and ERC-735).

ERC-1400 (now also known as ERC-1411) covers all activities related to the primary and secondary markets, with proper integration of all off-chain data, which brings all related documentation and identity onto the underlying blockchain platform on which the STO is built.

Magic and its approach to defining the ERC-1411 mapping

Team Magic is working continuously to define the mapping between all the defined methods and the real-world activities of the primary and secondary markets. A key part of our strategy is to collect requirements from the various stakeholders: securities lawyers, exchange operators, KYC providers, custodians, business owners, regulators and legal advisors. Once all requirements are collected, our experienced business analyst teams (experts in pricing, corporate actions and risk assessment) take over and reconcile them with the ERC-1400 standards, not only mapping each requirement but also identifying gaps in the standards. After this, our technology team prepares the implementation strategy for those standards by developing smart contracts in Solidity. Having an in-house smart contract for a specific case study (provided by our business analyst team) also helps us define the auditing of ERC-1400-specific smart contracts and the testing strategy for each contract.

The original promise of blockchain technology was security. However, it might not be as invulnerable as initially thought. Smart contracts, the protocols which govern blockchain transactions, have yielded under targeted attacks in the past.

The intricacies of these protocols let programmers implement anything the core system allows, including inserting loops in the code. The more options programmers are given, the more the code needs to be structured, and the more likely it becomes for security vulnerabilities to enter blockchain-based environments.

The Attacks that Plague Blockchain

Faulty blockchain coding can give rise to several vulnerabilities. For instance, during Ethereum's Constantinople upgrade in January 2019, reentrancy attacks became a cause for concern. These are possibly the most notorious of all blockchain attacks. A smart contract may interface with an external smart contract by 'calling' it – an external call. Reentrancy attacks exploit malicious code in the external contract to withdraw money from the original smart contract. A similar flaw was first revealed during the 2016 DAO attack, where hackers drained $50 million from a Decentralized Autonomous Organization (DAO). Consider the following token contract by programmer Peter Borah, which appears to be a solid attempt at condition-oriented programming:

contract TokenWithInvariants {
  mapping(address => uint) public balanceOf;
  uint public totalSupply;

  modifier checkInvariants {
    _
    if (this.balance < totalSupply) throw;
  }

  function deposit(uint amount) checkInvariants {
    balanceOf[msg.sender] += amount;
    totalSupply += amount;
  }

  function transfer(address to, uint value) checkInvariants {
    if (balanceOf[msg.sender] >= value) {
      balanceOf[to] += value;
      balanceOf[msg.sender] -= value;
    }
  }

  function withdraw() checkInvariants {
    uint balance = balanceOf[msg.sender];
    // external call is made before the state updates below
    if (msg.sender.call.value(balance)()) {
      totalSupply -= balance;
      balanceOf[msg.sender] = 0;
    }
  }
}
The above contract executes state-changing operations after an external call. It neither places the external call at the end of the function nor uses a mutex to prevent reentrant calls. The code does do some things well, such as checking a global invariant: the contract balance (this.balance) should never drop below what the contract believes it holds (totalSupply). However, that invariant is only checked inside a function modifier when each function finishes, so it is treated as a per-function post-condition rather than a property that holds at all times – in particular, it does not hold while control has been handed to external code in the middle of withdraw. The deposit function is also flawed, since it credits the caller with the user-supplied amount argument instead of the ether actually sent with the call (msg.value).

Finally, the seventh line has a bug in it. Instead of,

if (this.balance < totalSupply) throw;

It should be,

if (this.balance != totalSupply) throw;

This is because the original check enforces only the weaker condition that the contract's actual balance is not lower than what it thinks it holds; it silently tolerates the balance being higher than totalSupply, whereas the inequality check insists that the two always agree exactly.

These issues enable the contract to hold more money than it should. An attacker can potentially withdraw more than their share, heightening the danger of reentrancy even when the rest of the contract code looks watertight.

Overflows and underflows are also significant vulnerabilities that can be used as a Trojan horse by malicious actors. An overflow error occurs when a number is incremented above its maximum value – think of a car odometer that resets to zero after surpassing, say, 999,999 km. If we declare a uint8 variable, which holds 8 bits, it can store decimal values between 0 and 2^8 - 1 = 255. Now if we write: uint8 a = 255; a++;

Then this will lead to an overflow error since a’s maximum value is 255.

At the other end, underflow errors affect smart contracts in the exact opposite direction. Taking a uint8 variable again: uint8 a = 0; a--;

Now we have caused an underflow, which wraps a around to its maximum value of 255.

Underflow errors are more probable, since users are less likely to possess a very large quantity of tokens. The Proof of Weak Hands Coin (POWH) scheme run by 4chan's business and finance imageboard /biz/ suffered an $800k loss overnight in March 2018 because of an underflow attack. Building and auditing secure mathematical libraries that replace the customary arithmetic operators is a sensible defense against these attacks.
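As a minimal sketch of that idea, transliterated into Python for readability: the EVM wraps silently, whereas checked helpers like the ones below fail loudly instead, which is what SafeMath does in Solidity.

UINT8_MAX = 2**8 - 1

def checked_add_uint8(a: int, b: int) -> int:
    result = a + b
    if result > UINT8_MAX:
        raise OverflowError(f"uint8 overflow: {a} + {b}")
    return result

def checked_sub_uint8(a: int, b: int) -> int:
    if b > a:
        raise OverflowError(f"uint8 underflow: {a} - {b}")
    return a - b

# Unchecked EVM-style arithmetic wraps modulo 2**8 ...
print((255 + 1) % (UINT8_MAX + 1))   # 0: overflow wraps to zero
print((0 - 1) % (UINT8_MAX + 1))     # 255: underflow wraps to the maximum

# ... whereas the checked versions surface the bug immediately.
try:
    checked_add_uint8(255, 1)
except OverflowError as error:
    print(error)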

The 51% attack is also well known in the world of cryptocurrency: a group of miners controlling more than 50% of the network's mining hashrate can decide which new transactions are confirmed. Similarly, external contract referencing exploits Ethereum's ability to reuse code from, and interact with, existing contracts by hiding malicious actors behind these interactions.

Smart contract auditing that combines the attention of manual code analysis with the efficiency of automated analysis is indispensable in preventing such attacks.

Solving the Conundrum

Fixes to such security risks in blockchain-based environments are very much possible. A process-oriented approach, backed by agile quality assurance (QA) models, is a must. Robust automation frameworks are also crucial in weeding out coding errors and thereby strengthening smart contracts.

In the case of reentrancy attacks, avoiding external calls is a good first step. So is inserting a mutex – a state variable that locks the contract during code execution – which blocks reentrant calls. All logic that changes state variables should run before any external call. Correct auditing will ensure these steps are followed. In the case of overflow and underflow attacks, the right auditing tools will build mathematical libraries for safe math operations; the SafeMath library in Solidity is a good example.
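These two defenses are language-agnostic patterns; the Python sketch below (illustrative only, not Solidity) shows a mutex plus the "effects before interactions" ordering that the paragraph above describes:

class Vault:
    def __init__(self):
        self.balances = {}
        self._locked = False                 # the mutex / reentrancy guard

    def deposit(self, user: str, amount: int) -> None:
        self.balances[user] = self.balances.get(user, 0) + amount

    def withdraw(self, user: str, external_call) -> None:
        if self._locked:
            raise RuntimeError("reentrant call rejected")
        self._locked = True
        try:
            amount = self.balances.get(user, 0)
            self.balances[user] = 0          # effects first: update state ...
            external_call(amount)            # ... interaction with untrusted code last
        finally:
            self._locked = False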

To prevent external contract referencing, even something as simple as using the 'new' keyword to create contracts may be overlooked in the absence of proper auditing. Yet this one step ensures that an instance of the referenced contract is created at execution time, so an attacker cannot substitute a different contract without changing the calling smart contract itself.

Magic BlockchainQA's pioneering QA model has created industry-leading service level agreements (SLAs). Our portfolio of auditing services leverages our expertise in independently verifying blockchain platforms. This reduces losses on investments for fintech firms, along with providing end-to-end integration, security, and performance – and, crucially, it helps usher in widespread acceptance of blockchain-based platforms. As blockchain-based environments keep evolving, we evolve with them, tackling new challenges and threats while ensuring that our tools can audit these contracts impeccably.

Blockchain technology first came with the promise of unprecedented security. Through correct auditing practices, we can fulfill that original promise. At Magic BlockchainQA, we aim to take that promise to its completion every single time.

Predictive Analysis – What Is It?

Whenever you hear the term “Predictive Analysis”, a question pops up in your mind: “Can we predict the future?” The answer is “no” – the future is still a beautiful mystery, as it should be. However, predictive analysis does forecast the likelihood of a future event, with an acceptable margin of deviation. In business terms, predictive analysis examines historical data and interprets risks and opportunities for the business by recognizing trends and behavioral patterns.

Predictive analysis is one of the three forms of data analysis, the other two being descriptive analysis and prescriptive analysis. Descriptive analysis examines historical data and evaluates current metrics to tell whether the business is doing well; predictive analysis predicts future trends; prescriptive analysis proposes viable solutions to a problem and their impact on the future. In simpler words, descriptive analysis identifies the problem or scenario, predictive analysis defines the likelihood of that problem or scenario and why it could happen, and prescriptive analysis explores the possible solutions and consequences for the betterment of the business.

Predictive Analysis process

Predictive analysis uses multiple variables to define the likelihood of a future event with an acceptable level of reliability. Let's have a look at the underlying process of predictive analysis:

Requirement – Identify what needs to be achieved

This is the preparatory step of the process, in which we identify what needs to be achieved (the requirement), since it paves the way for data exploration – the building block of predictive analysis. It spells out what the business needs to do, relative to what it does today, to become more valuable and enhance its brand. This step also defines what type of data is required for the analysis; the analyst can take the help of domain experts to determine the data and its sources.

  1. Clearly state the requirement, goals, and objective.
  2. Identify the constraints and restrictions.
  3. Identify the data set and scope.

Data Collection – Ask the right question

Once you know the sources, the next step is to collect the data, and to do so you must ask the right questions. For example, to build a predictive model for stock analysis, the historical data must contain prices, volumes, and so on, but you should also consider how useful social network analysis would be for discovering behavioral and sentiment patterns.

Data Cleaning – Ensure Consistency

Data may be fetched from multiple sources. Before it can be used, it needs to be normalized into a consistent format. Data cleaning normally includes the following steps (a short pandas sketch follows the list) –

  1. Normalization
    a. Convert data into a consistent format
  2. Selection
    a. Search for outliers and anomalies
  3. Pre-processing
    a. Search for relationships between variables
    b. Generalize the data to form groups and/or structures
  4. Transformation
    a. Fill in missing values
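As a minimal illustration of these steps, using pandas with made-up column names and values:

import pandas as pd

df = pd.DataFrame({
    "trade_date": ["2019-01-02", "2019-01-03", None],
    "price": [101.5, 99.8, 5000.0],          # 5000.0 is outside the expected band
    "volume": [1200, None, 900],
})

# Normalization: one consistent date format.
df["trade_date"] = pd.to_datetime(df["trade_date"], errors="coerce")

# Selection: flag prices that fall outside an expected band as outliers.
df["price_outlier"] = ~df["price"].between(50, 500)

# Transformation: fill in missing values with a sensible default.
df["volume"] = df["volume"].fillna(df["volume"].median())

print(df)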

Data cleaning removes errors and ensures the consistency of data. If the data is of high quality – clean and relevant – the results will be sound; this is the classic case of “garbage in, garbage out”. Data cleaning supports better analytics as well as all-round business intelligence, which in turn facilitates better decision making and execution.

Data collection and cleaning, as described above, depend on asking the right questions. Volume and variety are two words that describe the result of data collection, but there is a third aspect to focus on: data velocity. Data must not only be acquired quickly, it must also be processed at a good rate for faster results. Some data has a limited lifetime and will not serve its purpose for long, so any delay in processing would require acquiring new data.

Analyze the data – Use the correct model

Once we have the data, we need to analyze it to find hidden patterns and forecast results. The data should be structured in a way that lets us recognize patterns and identify future trends.

Predictive analytics encompasses a variety of statistical techniques, from traditional methods such as data mining and statistics to advanced methods such as machine learning and artificial intelligence, all of which analyze current and historical data to put a numerical value on the likelihood of a scenario. Traditional methods are normally used where the number of variables is manageable; AI and machine learning are used to tackle situations with a very large number of variables. Over the years, organizations' computing power has increased many-fold, which has shifted the focus towards machine learning and artificial intelligence.

Traditional Methods:

  1. Regression Techniques: Regression is a mathematical technique used to estimate the cause and effect relationship among variables.

In business, key performance indicators (KPIs) are the measure of the business, and regression techniques can be used to establish the relationship between a KPI and explanatory variables, e.g. economic or internal parameters. Two types of regression are normally used to estimate the likelihood of an event occurring (a short scikit-learn sketch follows the list below):

  1. Linear Regression
  2. Logistic Regression
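As a minimal sketch of both techniques on made-up data – linear regression estimates a numeric KPI, while logistic regression estimates the probability of a binary outcome (here, whether a customer churns):

import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])   # explanatory variable
kpi = np.array([2.1, 3.9, 6.2, 8.1, 9.8])           # numeric KPI -> linear regression
churned = np.array([0, 0, 0, 1, 1])                 # binary outcome -> logistic regression

lin = LinearRegression().fit(X, kpi)
log = LogisticRegression().fit(X, churned)

print("Predicted KPI at x=6:", lin.predict([[6.0]])[0])
print("P(churn) at x=6:", log.predict_proba([[6.0]])[0, 1])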

Time Series Analysis

A time series is a series of data points indexed (or listed, or graphed) in time order; analyzing such a series helps forecast how a metric will evolve over time.

Decision Tree

Decision Trees are used to solve classification problems. A Decision Tree determines the predictive value based on a series of questions and conditions.
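As a minimal sketch, assuming two made-up features (income and existing debt) and a yes/no target (approve the loan?):

from sklearn.tree import DecisionTreeClassifier, export_text

X = [[60, 10], [35, 30], [80, 5], [20, 25], [50, 40]]   # [income, debt] in thousands
y = [1, 0, 1, 0, 0]                                     # 1 = approve, 0 = decline

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["income", "debt"]))   # the questions/conditions learned
print("New applicant [55, 12] ->", tree.predict([[55, 12]])[0])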

Advanced Methods – Artificial Intelligence / Machine Learning

Special Purpose Libraries

Nowadays a lot of open frameworks and special-purpose libraries are available that can be used to develop a model. Users can use them to perform mathematical computations and inspect data-flow graphs. These libraries can handle everything from pattern recognition to image and video processing, and can run on a wide range of hardware. They can help in:

  1. Natural Language Processing (NLP). Natural language refers to how humans communicate with each other in day-to-day activities – in words, signs, or e-data such as emails and social media activity. NLP refers to analyzing this unstructured or semi-structured data (a short sketch follows this list).
  2. Computer Vision
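As a minimal sketch of the first NLP step – turning raw text (e.g. customer emails) into numeric features a model can learn from – using scikit-learn and made-up messages:

from sklearn.feature_extraction.text import CountVectorizer

emails = [
    "Please reverse the duplicate charge on my account",
    "Great service, the new app is fast",
    "My card was charged twice, very unhappy",
]
vectorizer = CountVectorizer(stop_words="english")
features = vectorizer.fit_transform(emails)
print(vectorizer.get_feature_names_out())   # the learned vocabulary
print(features.toarray())                   # word counts per email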

Algorithms

Several algorithms used in machine learning include:

1. Random Forest

Random Forest is a popular ensemble machine learning method. It uses a combination of several decision trees as its base and aggregates their results; each of these trees uses one or more distinct factors to predict the output.
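As a minimal sketch of the ensemble idea – many shallow trees, each trained on a random slice of synthetic data, voting on the final prediction:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=100, max_depth=4, random_state=0)
forest.fit(X_train, y_train)
print("Held-out accuracy:", forest.score(X_test, y_test))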

2. Neural Networks (NN)

Neural networks aim to have machines solve problems in a way loosely modeled on the human brain. NNs are widely used in speech recognition, medical diagnosis, pattern recognition, spell checking, paraphrase detection, and more.

3. K-Means

K-Means solves the clustering problem: it finds a fixed number (k) of clusters in a set of data. It is an unsupervised learning algorithm, meaning it works on its own, without labeled examples or specific supervision.
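As a minimal sketch, grouping customers into k = 3 clusters from two made-up features (average monthly balance and number of transactions), with no labels involved:

import numpy as np
from sklearn.cluster import KMeans

customers = np.array([
    [500, 4], [520, 6], [15000, 2],
    [14500, 3], [2500, 40], [2700, 45],
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customers)
print("Cluster per customer:", kmeans.labels_)
print("Cluster centres:", kmeans.cluster_centers_)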

Interpret result and decide

Once the data has been extracted, cleaned and checked, it's time to interpret the results. Predictive analytics has come a long way and goes beyond simply presenting predictions: it provides the decision-maker with an answer to the question “why will this happen?”.

A few use cases where predictive analysis could be useful for a FinTech business

Compliance – Predictive analysis can be used to detect and prevent trading errors and system oversights. The data can be analyzed to monitor behavioral patterns and prevent fraud. Predictive analytics can help companies conduct better internal audits, stay on top of rules and regulations, and improve the accuracy of audit selection, thus reducing fraudulent activity.

Risk Mitigation – Firms can monitor and analyze operational data to detect error-prone areas, reduce outages, and avoid reacting late to events, thus improving efficiency.

Improving customer service – Customers have always been at the center of the business. Online reviews, sentiment analysis, and social media data analysis can help a business understand customer behavior and re-engineer its products with tailored offerings.

Being able to predict how customers, industries, markets, and the economy will behave in certain situations can be incredibly useful for a business. Success depends on choosing the right data set, with quality data, and defining good models in which the algorithms explore the relationships between different data sets to identify patterns and associations. However, FinTech firms have their own challenges in managing data, caused by data silos and incompatible systems; data sets are growing so large that analyzing them for patterns, and managing risk and return, is becoming difficult.

Predictive Analysis Challenges

Data Quality / Inaccessible Data

Data Quality is still the foremost challenge faced by the predictive analyst. Poor data will lead to poor results. Good data will help to shape major decision making.

Data Volume / Variety / Velocity

Many problems in predictive analytics belong to the big data category. The volume of data generated by users can run into petabytes and can strain existing computing power. With increasing internet penetration and autonomous data capture, the velocity of data is also rising quickly. As volume and velocity grow, traditional methods such as regression models can become unstable for analysis.

Correct Model

Defining the correct model can be tricky, especially when much is expected from it, and the same model may be asked to serve different purposes. Sometimes it does not make sense to create one large, complex model; rather than a single model that covers everything, the solution can consist of many smaller models that together deliver better understanding and better predictions.

The right set of people

Data analytics is not a “one-man army” show. It requires the right blend of domain knowledge and data science knowledge: the data scientist should be able to ask the right what-if questions of domain experts, and domain experts should be able to verify the model against appropriate findings. This is where we at Magic FinServ can bring value to your business. At Magic FinServ we have the right blend of domain expertise and data science experts to deliver intelligence and insights from data using predictive analytics.

Magic FinServ – Value we bring using Predictive Analysis

Magic FinServ has therefore designed a set of offerings specifically aimed at solving the unstructured and semi-structured data problem for the financial services industry.

Market Information – Research reports, News, Business and Financial Journals & websites providing Market Information generate massive unstructured data. Magic FinServ provides products & services to tag meta data and extracts valuable and accurate information to help our clients make timely, accurate and informed decisions.

Trade – Trading generates structured data, however, there is huge potential to optimize operations and make automated decisions. Magic FinServ has created tools, using Machine Learning & NLP, to automate several process areas, like trade reconciliations, to help improve the quality of decision making and reduce effort. We estimate that almost 33% effort can be reduced in almost every business process in this space.

Reference data – Reference data is structured and standardized, however, it tends to generate several exceptions that require proactive management. Organizations spend millions every year to run reference data operations. Magic FinServ uses Machine Learning tools to help the operations team reduce the effort in exception management, improve the quality of decision making and create a clean audit trail.

Client/Employee data – Organizations often do not realize how much client-sensitive data resides on desktops & laptops. Recent regulations like GDPR make it now binding to check this menace. Most of this data is semi-structured and resides in excels, word documents & PDFs. Magic FinServ offers products & services that help organizations identify the quantum of this risk and then take remedial actions.

Gartner Says By 2020, a Corporate “No-Cloud” Policy Will Be as Rare as a “No-Internet” Policy Is Today.

With cloud adoption rates increasing, it has become essential for organizations to build a robust cloud migration strategy.

As per the Commvault report, cloud Fear of Missing Out (FOMO) is driving business leaders to move full speed ahead towards the cloud.

Many organizations are already moving part of their applications to the cloud, or planning to move all of them. Apart from the reliability, scalability, cost optimization and security benefits, recent disruption in cognitive technologies like AI, ML and blockchain is one of the driving factors for embracing the cloud as an important IT strategy. Most cloud providers now offer attractive, easy-to-implement AI/ML platforms along with other multidimensional benefits.

However, there are many cloud providers and many cloud services available in the market.

Which is the best service provider? Which service model fits your organization?

The answer is not a single word or a list of words; it is a process designed around your business goals.

Hence, choose a cloud service provider who can work as a partner, not as a vendor.

This is a journey through the learning curve for both partners.

I am highlighting a few aspects which must be considered when selecting a cloud partner:

Define your migration strategy – IaaS vs PaaS vs SaaS. You need to select the right partner for platform, infrastructure and application services. Sometimes you may need to work with multiple providers for different services, or you can have one combined managed-services partner.

Gartner's shared-responsibility diagram for the various cloud service models shows how ownership is split between customer and provider. For example, if you have a best-in-class application services team, you can procure infrastructure or platform services from a provider and align the internal team to run the cloud service. This will require extensive cloud training for the existing team, hiring some cloud experts to build in-house capability, and a robust service-management process to coordinate among different vendors; your cloud partner should take on the role of training partner in such cases. In a SaaS-based model, it is essential that the cloud partner knows your business and industry well, because the cloud service will ultimately be fully integrated with the business model, so the SaaS provider must be fully aligned with your business needs. Overall selection criteria can be designed by analyzing and comparing the factors below –

The provider must be knowledgeable about your application, data, interfaces, compliance, security, BCP/DR and other business requirements. Critical success factors are defined in various models; however, based on our study, we have listed seven key success factors for cloud computing –

Cloud Partner – The most important step towards success. Choose the right cloud partner, one who will help you in your journey towards success.

 Cloud Strategy –

  • Create a plan & solution architecture
  • Define the cloud applications and services
  • Prepare the service catalogue
  • Build the capability and processes

Cost & performance – one of the most important success criteria.

  • Plan cost and ROI
  • Benchmark the performance
  • Proactive monitoring
  • Capacity planning
  • Right-sizing & optimization

Security –

  • Build the security strategy – secure all the layers and components
  • Automation, tooling and proactive monitoring
  • Plan the audit, compliance reporting & certification

Contract & SLA –

  • Incorporate all the aspects of the contract carefully with the legal help
  • Build customer and suppliers terms properly
  • Define Service SLA & service credits
  • Manage the contract (an ongoing process)

Automation –

  • Have an automation strategy
  • From Infrastructure to Application – automate the repetitive work
  • Improve response and resolution times
  • Reduce human error

Manage the stakeholders –

  • Cloud adoption changes the organizational structure and IT landscape drastically.
  • Manage your stakeholders throughout the journey.
  • Assess the impact of positive and negative stakeholders on the project.

A managed service provider is the ideal solution in today's complex world. At MagicFinServ, we are helping global FinTech companies build successful SaaS models. Our highly skilled cloud team can align all the moving parts, from architecture to implementation, and deliver a production-ready solution. To know more about our financial-services-focused cloud solutions, please contact us at mail@magicfinserv.com

Machine learning is one of those technologies that is always around us, often without us even realizing it. For instance, machine learning is used to decide whether an email we receive is spam or genuine, to let cars drive themselves, and to predict what product someone is likely to purchase. Every day we see these sorts of machine learning solutions in action: machine learning is at work when an incoming mail is automatically scanned and, if it is spam, moved to the spam folder. For the past few years, Google, Tesla, and others have been building self-driving systems that may soon augment or replace the human driver. And data giants like Google and Amazon use your search history to predict which things you are looking to buy and make sure you see ads for those things on each webpage you visit. All of this useful, and sometimes annoying, behavior is the result of artificial intelligence.

This brings up the key characteristic of machine learning: the system figures out how to solve the problem from example data, instead of us writing specific logic. That is a notable departure from how most programming is done. In more traditional programming, we deliberately analyze the problem and write code.

That code reads in data and uses its predefined logic to determine the right branches to execute, which then produce the correct result.

Machine Learning and Conventional Programming

With conventional programming, we use constructs like if statements, switch-case statements, and control loops implemented with while, for and do statements. Every one of these statements has conditions that must be defined, and the dynamic data typical of machine learning problems can make defining those conditions very difficult. With machine learning, in contrast, we do not write the logic that produces the results. Instead, we gather the data we need and reshape it into a form machine learning can use, and then pass this data to an algorithm. The algorithm analyses the data and creates a model that implements the solution to the problem, based on that data.

Machine Learning: High-Level View

We start with lots of data – data that contains patterns. That data is fed into the machine learning logic and algorithm, which finds one or several patterns. The outcome of the machine learning process is a predictive model: in effect, the business logic that identifies the learned patterns in new data. An application then supplies new data to the model to see whether the model recognizes the known pattern in it. In the example we took, the new data could be more transactions, and the model's job is to predict whether those transactions are fraudulent; a minimal sketch of this flow follows.
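As a minimal end-to-end sketch of this flow, using scikit-learn on synthetic transaction features (the feature names and values are made up purely for illustration):

import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical transactions: [amount, foreign_country, night_time]; label 1 = fraud.
X_history = np.array([
    [25, 0, 0], [40, 0, 1], [3000, 1, 1],
    [15, 0, 0], [2500, 1, 0], [60, 0, 0],
])
y_history = np.array([0, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X_history, y_history)   # the "learning" step produces the model

# New, unseen transactions flow through the trained model.
new_transactions = np.array([[30, 0, 0], [2800, 1, 1]])
print("Fraud probability:", model.predict_proba(new_transactions)[:, 1])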

Machine Learning and FinTech
FinTech is one of the industries that could be hugely impacted by machine learning, and it can leverage machine learning technologies to get better predictions and risk analysis in finance applications. The following are five areas where machine learning could impact finance applications, helping financial technologies become smarter at tasks such as fraud detection, algorithmic trading and portfolio management.

Risk Management
Applying a predictive analysis model to huge amounts of real-time data gives the machine learning algorithm command over numerous data points. The traditional method of risk management relied on analyzing structured data against fixed rules and was constrained to structured data only, yet more than 90% of data is unstructured. Deep learning technology can process unstructured data and does not depend solely on static information coming from loan applications or other financial reports. Predictive analysis can even anticipate how a loan applicant's financial status may be affected by current market trends.

Internet Banking Fraud
Another example is detecting internet banking fraud. If fraud is repeatedly occurring in funds transfers via internet banking and we have the complete data, we can find the pattern involved and identify the loopholes or hack-prone areas of the application. So it is all about patterns, and predicting results and the future based on those patterns. Machine learning plays an important role in data mining, image processing, and language processing. It cannot always provide a correct analysis or a perfectly accurate result, but it gives a predictive model, based on historical data, to support decisions. The more data there is, the more result-oriented the predictions that can be made.

Sentiment Analysis
One of the areas where machine learning can play an important role is sentiment analysis or news analysis. Futuristic machine learning applications can no longer depend only on data coming from trades and stock prices. Traditionally, human intuition about financial activity has relied on trade and stock data to discover new trends, but machine learning technology can evolve to understand social media trends and other information and news trends for sentiment or news analysis. The algorithms can computationally identify and categorize the opinions expressed by users to feed predictive analysis. The more data there is, the more accurate the predictions will be.

Robo-Advisors
Robo-advisors are digital platforms that calibrate a financial portfolio, providing planning services with minimal manual or human intervention. Users furnish details like their age, current income and financial status, and expect the robo-advisor to suggest the kind of investments they should make, given current and projected market trends, to meet their retirement goals. The advisor processes this request by spreading the investments across financial instruments and asset classes to match the user's goals. The system responds in real time to changes in the user's goals and in market trends, and performs predictive analysis to find the best match for the user's investments. Robo-advisors may, in future, largely displace the human advisors who make money from these services.

Security
The highest concern for banks and other financial institutions is the security of users and their details, which, if leaked, can lead to hacking and eventually to financial losses. Traditionally, the system works by giving the user a username and password for secure access and, in case of a lost password or account recovery, a few security questions or mobile number validation. Using AI, one could in future develop an anomaly-detection application that uses biometric data such as facial recognition, voice recognition or retina scans. This becomes possible by applying predictive analysis over huge amounts of biometric data and refining the models iteratively to make more accurate predictions.

How Can Magic FinServ help?

Magic FinServ is working aggressively on visual analytics and artificial intelligence, leveraging the concepts of machine learning and translating them into technology to solve business problems like financial analysis, portfolio management, and risk management. As a financial services provider, Magic FinServ can foresee the impact of machine learning and predictive analysis on financial services and financial technologies. The technology business unit of Magic uses technologies like Python, Big Data and Azure Cognitive Services to develop and provide innovative solutions. Data scientists and technical architects at Magic work hand in hand to provide consulting and financial technology development services with a futuristic approach.

Evolution of RPA

IT outsourcing took off in the early '90s with broadening globalization, driven primarily by labor arbitrage. This was followed by the BPO outsourcing wave in the early 2000s.

The initial wave of outsourcing delivered over 35% cost savings on average, but remained inefficient due to low productivity and the massive, constant training demanded by attrition.

As labor arbitrage became less lucrative with increasing wage & operational cost, automation looked to be a viable alternative for IT & BPO service providers to improve efficiency. This automation was mostly incremental. At the same time, high-cost locations had to compete against their low-cost counterparts and realized that the only way to stay ahead in this race was to reduce human effort.

Robotic Process Automation (RPA) was therefore born with the culmination of these two needs.

What is RPA?

RPA is software that automates high volumes of repetitive manual tasks. It increases operational efficiency and productivity and reduces cost. RPA enables businesses to configure their own software robots (RPA bots) that can work 24x7 with high precision and accuracy.

The first generation of RPA started with Programmable RPA solutions, called Doers.

Programmable RPA tools are programmed to work with various systems via screen scraping and integration. They take input from other systems and determine the decisions that drive action. The most repetitive processes are the ones automated by programmable RPA.

However, programmable RPA works only with structured data and legacy systems. It is highly rule-based, without any learning capability.

Cognitive automation is an emerging field that provides a way to overcome the limitations of first-generation RPA. Cognitive automation is also referred to as the “decision-maker” or “intelligent automation”.

A useful diagram published by the Everest Group illustrates the power of AI/ML layered onto a traditional RPA framework.

Cognitive automation uses artificial intelligence (AI) capabilities like optical character recognition (OCR) and natural language processing (NLP) alongside RPA tools to provide end-to-end automation solutions. It deals with both structured and unstructured data, including text-heavy reports. It is probabilistic in nature, but it can learn the system's behavior over time and converge on deterministic outcomes.
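As a minimal sketch of the OCR-plus-extraction step, assuming the pytesseract and Pillow packages (and a local Tesseract installation) are available; the invoice file name and the regular expression are illustrative only:

import re
from PIL import Image
import pytesseract

# OCR: turn a scanned, unstructured document into raw text.
text = pytesseract.image_to_string(Image.open("scanned_invoice.png"))

# A simple rule then turns that raw text into a structured field
# that a downstream RPA bot can post into the target system.
match = re.search(r"Total\s*[:\-]?\s*\$?([\d,]+\.\d{2})", text)
invoice_total = match.group(1) if match else None
print("Extracted invoice total:", invoice_total)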

There is another type of RPA solution – “Self-learning solutions” called “Learners”.

Programmable RPA solutions need significant programming effort and technique to enable the interaction with other systems. Self-learning solutions program themselves.

There are various learning methods adopted by RPA tools:

  • Some tools use historical data (when available) together with current data: they monitor employee activity over time to understand the tasks, and start completing them once they have gained enough confidence in the process.
  • Other tools learn tasks as they are performed manually: they learn the activities that make up each task and start automating them, and their capabilities are enhanced by feedback from the operations team, which raises their automation levels.
  • Increasing complexity in the business is driving the shift from rule-based processing to a data-driven strategy. Cognitive solutions are helping businesses manage both known and unknown areas, take complex decisions and identify risk.

As per HfS Research, RPA software and services are expected to grow to $1.2 billion by 2021, at a compound annual growth rate of 36%.

Chatbots, human agents, agent-assist tools, RPA robots, cognitive robots – RPA combined with ML and AI creates a smart digital workforce and unleashes the power of digital transformation.

The focus has shifted from efficiency to intelligence in business process operations.

Cognitive solutions are the future of automation, and data is the key driving factor in this journey.

We, at MagicFinServ, have developed several solutions to help our clients make more out of structured & unstructured data. Our endeavor is to use modern technology stack & frameworks using Blockchain & Machine Learning to deliver higher value out of structured & unstructured data to Enterprise Data Management firms, FinTech & large Buy & sell-side corporations.

Understanding of data and domain is crucial in this process. MagicFinServ has built a strong domain-centric team who understands the complex data of the Capital Markets industry.

The innovative cognitive ecosystem of MagicFinServ is solving real-world problems.

Want to talk about our solution? Please contact us at https://www.magicfinserv.com/.

Before we get into the details of how digitalization has contributed to unstructured data, we need to understand what is meant by the terms digitalization and unstructured data.

Digitization is the process of converting information into a digital format. In this format, information is organized into discrete units of data (called bits) that can be separately addressed (usually in multiple-bit groups called bytes).

Unstructured Data: is information that either does not have a pre-defined data model or is not organized in a pre-defined manner. Unstructured information is typically text-heavy but may contain data such as dates, numbers, and facts as well. This results in irregularities and ambiguities that make it difficult to understand using traditional programs as compared to data stored in fielded form in databases or annotated (semantically tagged) in documents. (Source: https://en.wikipedia.org/wiki/Unstructured_data)

Now, to establish the connection between the two, I begin with a point: every day there is new evolution in the technology space, and alongside it the desire to digitalize everything around us is also gaining momentum.

However, we have not really asked whether this process will solve our problems, or whether it will lead to a bigger problem that is common across all current verticals and the new verticals of the future.

If we think deeply about this, we will realize that instead of creating a solution for the digital world or the digitized economy, we have actually paved the path for data that is unstructured – or, at best, semi-/quasi-structured – and this pile of unstructured data is growing day by day.

A natural question is which factors are contributing to the unstructured data pile. Some of them are mentioned below:

  1. The rapid growth of the internet, leading to a data explosion and massive information generation.
  2. Data that has been digitized but given only partial structure.
  3. The free availability of, and easy access to, various tools that help in the digitization of data.

The other crucial question about unstructured data is how we manage it.

Some insights and facts around the unstructured data problem that stress how serious an affair it is:

  • According to projections from Gartner, white-collar workers will spend anywhere from 30 to 40 percent of their time this year managing documents, up from 20 percent of their time in 1997
  • Merrill Lynch estimates that more than 85 percent of all business information exists as unstructured data – commonly appearing in e-mails, memos, notes from call centers and support operations, news, user groups, chats, reports, letters, surveys, white papers, marketing material, research, presentations, and Web pages

(Source – http://soquelgroup.com/wp-content/uploads/2010/01/dmreview_0203_problem.pdf)

  • Nearly 80% of enterprises have very little visibility into what’s happening across their unstructured data, let alone how to manage it.

(Source – https://www.forbes.com/sites/forbestechcouncil/2017/06/05/the-big-unstructured-data-problem/2/#5d1cf31660e0)

Is there a solution to this?

To answer that question: data (information) in today's world is power, and unstructured data is tremendous power, because its potential is still untapped; when realized effectively and judiciously, it can turn fortunes for organizations.

Organizations and businesses that manage to extract meaning from this chaotic mess will be well positioned to gain a competitive edge over their peers.

The areas to focus on in addressing the unstructured data problem are:

  1. Raising awareness of the problem.
  2. Identifying where unstructured data resides in the organization.
  3. Ensuring the information is searchable (see the sketch after this list).
  4. Making the content context- and search-friendly.
  5. Building intelligent content.
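As a small illustration of point 3, the sketch below (using two hypothetical documents) shows the simplest possible way to make previously opaque text searchable: build a tiny inverted index mapping each term to the documents that contain it. Production systems go far beyond this, but the principle is the same.

```python
# A minimal sketch of making unstructured content searchable by building a tiny
# inverted index: term -> set of document ids. Documents here are invented.
from collections import defaultdict
import re

documents = {
    "doc1": "Quarterly research note on European equities and FX volatility.",
    "doc2": "Client email regarding settlement of the bond trade and FX exposure.",
}

index = defaultdict(set)
for doc_id, text in documents.items():
    for token in re.findall(r"[a-z]+", text.lower()):
        index[token].add(doc_id)

# Once indexed, the documents become searchable by keyword.
print(sorted(index["fx"]))  # -> ['doc1', 'doc2']
```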

The good news is that we at Magic recognized the scale of this challenge some time back and have built a set of offerings specifically designed to solve the unstructured and semi-structured data problem for the financial services industry.

Magic FinServ focuses on four primary data entities that financial services firms regularly deal with:

Market Information – Research reports, news, business and financial journals, and websites providing market information generate massive amounts of unstructured data. Magic FinServ provides products and services that tag metadata and extract valuable, accurate information to help our clients make timely, accurate, and informed decisions.
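As a hedged illustration (not Magic FinServ's actual product, and the headline is invented), metadata tagging of a news snippet can be as simple as pulling out companies, amounts, percentages, and reporting periods so the item can be filtered and routed downstream:

```python
# Illustrative metadata tagging of a news headline with simple patterns.
import re

headline = "ACME Corp beats Q2 estimates; shares jump 7% as revenue hits USD 1.2 billion."

tags = {
    "companies":   re.findall(r"\b[A-Z][A-Za-z]+ Corp\b", headline),
    "percentages": re.findall(r"\d+(?:\.\d+)?%", headline),
    "amounts":     re.findall(r"USD \d+(?:\.\d+)? (?:million|billion)", headline),
    "period":      re.findall(r"\bQ[1-4]\b", headline),
}
print(tags)
```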

Trade – Trading generates structured data; however, there is huge potential to optimize operations and automate decisions. Magic FinServ has created tools, using Machine Learning and NLP, to automate several process areas, such as trade reconciliation, to help improve the quality of decision making and reduce effort. We estimate that effort can be reduced by roughly a third in almost every business process in this space.
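To show what the reconciliation step involves, here is a simplified sketch with invented records (not the actual tool): match internal trades against counterparty confirmations on key fields, auto-confirm clean matches, and flag breaks beyond a tolerance for an analyst.

```python
# Simplified trade reconciliation: match on trade id, then compare quantity and amount.
internal = [
    {"trade_id": "T-1001", "isin": "US0378331005", "qty": 100, "amount": 25000.00},
    {"trade_id": "T-1002", "isin": "US5949181045", "qty": 50,  "amount": 7100.00},
]
confirmations = [
    {"trade_id": "T-1001", "isin": "US0378331005", "qty": 100, "amount": 25000.00},
    {"trade_id": "T-1002", "isin": "US5949181045", "qty": 50,  "amount": 7150.00},
]

TOLERANCE = 1.00  # currency units

confirmed = {c["trade_id"]: c for c in confirmations}
for trade in internal:
    match = confirmed.get(trade["trade_id"])
    if match is None:
        print(trade["trade_id"], "-> unmatched")
    elif trade["qty"] != match["qty"] or abs(trade["amount"] - match["amount"]) > TOLERANCE:
        print(trade["trade_id"], "-> exception (break), route to an analyst")
    else:
        print(trade["trade_id"], "-> matched automatically")
```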

Reference data – Reference data is structured and standardized; however, it tends to generate numerous exceptions that require proactive management. Organizations spend millions every year running reference data operations. Magic FinServ uses Machine Learning tools to help operations teams reduce the effort spent on exception management, improve the quality of decision making, and create a clean audit trail.
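One common pattern for ML-assisted exception management, sketched below with invented features and labels, is to train a classifier on historical exceptions so that routine ones can be auto-closed and only the risky ones reach an analyst. This is a generic illustration, not a description of the specific models used.

```python
# Hedged sketch: classify reference-data exceptions as auto-close vs. escalate.
from sklearn.ensemble import RandomForestClassifier

# Features: [price_deviation_pct, days_since_last_update, source_disagreement_count]
X_history = [[0.1, 1, 0], [5.0, 30, 2], [0.2, 2, 1], [8.5, 45, 3]]
y_history = [0, 1, 0, 1]  # 0 = auto-close, 1 = escalate to analyst

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_history, y_history)

new_exceptions = [[0.3, 3, 0], [6.2, 20, 2]]
print(model.predict(new_exceptions))  # e.g. [0 1]
```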

Client/Employee data – Organizations often do not realize how much client-sensitive data resides on desktops and laptops. Recent regulations such as GDPR now make it binding to address this risk. Most of this data is semi-structured and resides in Excel files, Word documents, and PDFs. Magic FinServ offers products and services that help organizations quantify this risk and take remedial action.
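A first pass at quantifying that risk can look like the sketch below: sweep local files for patterns that suggest sensitive values, such as e-mail addresses or card-like numbers. The patterns, file types, and path are illustrative assumptions; a real tool would also parse Excel, Word, and PDF content.

```python
# Simplified sweep of local text files for potentially sensitive values.
import os
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_like_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan(root):
    findings = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.lower().endswith((".txt", ".csv")):
                continue  # a real tool would also handle Excel, Word, and PDF files
            path = os.path.join(dirpath, name)
            with open(path, errors="ignore") as f:
                text = f.read()
            for label, pattern in PATTERNS.items():
                findings.extend((path, label) for _ in pattern.finditer(text))
    return findings

print(scan("."))  # list of (file, finding-type) pairs to size the risk
```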

People often confuse the terms visual analytics and visual representation, taking both to mean the same thing: presenting a set of data as graphs that look good to the naked eye. Ask an analyst, however, and they will tell you that visual representation and visual analytics are two different arts.

Visual representation is used to present data that has already been analyzed. The representations simply show the output of the analysis and do little to drive the decision; the decision is already known once the analytics have been performed on the data.

Visual analytics, on the other hand, is an integrated approach that combines visualization, human factors, and data analysis. It allows humans to interact directly with the tool to produce insights and transform raw data into actionable knowledge that supports decision- and policy-making. Off-the-shelf tools can produce representations, but not the custom-made interactive visualizations that visual analytics requires. Visual analytics capitalizes on the combined strengths of human and machine analysis (computer graphics, machine learning) to provide a tool where either humans or machines alone have fallen short.

The Process

The enormous amount of data comes with plenty of quality issues, since it is of different types and from various sources. In fact, the focus is now shifting from structured data towards semi-structured and unstructured data. Visual analytics combines the visual and cognitive intelligence of human analysts, such as pattern recognition or semantic interpretation, with machine intelligence, such as data transformation or rendering, to perform analytic tasks iteratively.

The first step involves integrating and cleansing this heterogeneous data. The second step involves extracting valuable data from the raw data. Next comes the most important part: developing a user interface, grounded in human knowledge, for performing the analysis; it uses artificial intelligence as a feedback loop and helps the analyst reach a conclusion and, eventually, a decision.
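A minimal sketch of the first two steps, assuming two hypothetical position feeds with inconsistent column names: integrate and cleanse them, then extract the aggregate an analyst would actually want to explore in the interactive layer.

```python
# Step 1: integration and cleansing; Step 2: extraction of the useful view.
import pandas as pd

feed_a = pd.DataFrame({"ISIN": ["US0378331005", None], "Qty": [100, 50], "Desk": ["EQ", "EQ"]})
feed_b = pd.DataFrame({"isin": ["US5949181045"], "quantity": [75], "desk": ["FI"]})

# Align schemas, drop incomplete rows, remove duplicates.
feed_b = feed_b.rename(columns={"isin": "ISIN", "quantity": "Qty", "desk": "Desk"})
combined = pd.concat([feed_a, feed_b], ignore_index=True).dropna().drop_duplicates()

# Extract the aggregate that would feed the interactive visual layer.
by_desk = combined.groupby("Desk")["Qty"].sum()
print(by_desk)
```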

If the methods used to reach a conclusion are not sound, the decisions emerging from the analysis will not be fruitful. Visual analytics takes a leap here by providing methods and user interfaces for examining those procedures through the feedback loop.

In general, the following paradigm is used to process the data:

Analyze First – Show the Important – Zoom, Filter and Analyze Further – Details on Demand (from: Keim D. A., Mansmann F., Schneidewind J., Thomas J., Ziegler H.: Visual Analytics: Scope and Challenges. In: Visual Data Mining, 2008, p. 82.)
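The paradigm can be illustrated with a toy example (the trade table below is invented): aggregate everything first, surface only the most important result, zoom in by filtering, and then pull up the underlying details on demand.

```python
# "Analyze First - Show the Important - Zoom, Filter - Details on Demand" on toy data.
import pandas as pd

trades = pd.DataFrame({
    "desk":   ["EQ", "EQ", "FI", "FX", "FX", "FX"],
    "ticket": ["T1", "T2", "T3", "T4", "T5", "T6"],
    "pnl":    [120, -450, 80, -30, 900, -15],
})

# Analyze first: aggregate everything up front.
by_desk = trades.groupby("desk")["pnl"].sum().sort_values()

# Show the important: surface the worst-performing desk only.
worst_desk = by_desk.index[0]
print("Worst desk:", worst_desk, by_desk.iloc[0])

# Zoom and filter: narrow the view to that desk.
zoomed = trades[trades["desk"] == worst_desk]

# Details on demand: drill into the individual tickets behind the number.
print(zoomed.sort_values("pnl").head())
```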

Areas of Application

Visual analytics can be used in many domains. Its more prominent uses can be seen in:

  1. Financial Analysis
  2. Physics and Astronomy
  3. Environment and Climate Change
  4. Retail Industry
  5. Network Security
  6. Document analysis
  7. Molecular Biology

The greatest challenge of today's era is handling massive data collections from different sources. This data can run into thousands of terabytes, or even petabytes and exabytes. Most of it is in semi-structured or unstructured form, which makes it very difficult for a human alone, or a computer algorithm alone, to analyze.

For example, in the financial industry a lot of data (mostly unstructured) is generated every day, and many qualitative and quantitative measures can be observed in it. Making sense of this data is complex because of the numerous sources and the volume of ever-changing incoming data. Automated text analysis can be coupled with human interaction and domain-specific knowledge to analyze this enormous amount of data and reduce the noise within the datasets. Analyzing stock behavior based on news and its relation to world events is one of the prominent behavioral-science application areas. Tracking the buy-sell mechanism of stocks, including options trading, in which the temporal context plays an important role, can provide insight into future trends. By combining interaction with a visual mapping of automatically processed world events, the system can support the user in analyzing the ever-growing text corpus.
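The coupling of automated text analysis with market data can be sketched very simply (toy headlines, toy word lists, and toy returns below are all invented): score news sentiment per day and place it next to the stock's daily return, leaving the judgment about whether the two move together to the human analyst.

```python
# Toy news-sentiment score placed alongside daily returns for human inspection.
POSITIVE = {"beats", "upgrade", "record", "growth"}
NEGATIVE = {"misses", "downgrade", "probe", "lawsuit"}

headlines = {
    "2019-03-11": ["ACME beats estimates", "Analyst upgrade lifts ACME"],
    "2019-03-12": ["Regulator opens probe into ACME"],
}
daily_return_pct = {"2019-03-11": 2.4, "2019-03-12": -3.1}

def score(text):
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

for day, items in headlines.items():
    sentiment = sum(score(h) for h in items)
    print(day, "sentiment:", sentiment, "return:", daily_return_pct[day])
```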

Another example where visual analytics can be fruitful is monitoring the flow of information between the various systems used by financial firms. These products are domain-specific and perform particular tasks within the organization, but they all require input data to work, and this data flows between different products (from the same vendor or different vendors) through integration files. Sometimes it becomes cumbersome for an organization to replace an old system with a new one because of these integration dependencies. Visual analytics tools can show the current state of the flow and help detect the changes that would be required when replacing the old system. They can also help identify which systems would be impacted most, based on the volume and type of data being integrated, reducing errors and minimizing administrative and development expenses.
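Under the hood, such a view is essentially a graph of data flows. The sketch below (system names and volumes are invented) shows how a flow map might be modeled so the impact of replacing one system is easy to read off; an interactive layer would sit on top of this.

```python
# Model integration-file data flows as a directed graph and size the impact
# of replacing one system.
import networkx as nx

flows = [
    ("OrderMgmt", "RiskEngine", 120_000),   # records per day via integration files
    ("OrderMgmt", "Settlement", 80_000),
    ("RiskEngine", "Reporting", 40_000),
    ("Settlement", "Reporting", 60_000),
]

G = nx.DiGraph()
for src, dst, volume in flows:
    G.add_edge(src, dst, volume=volume)

# Which systems would feel it most if "OrderMgmt" were replaced?
impacted = list(G.successors("OrderMgmt"))
total_volume = sum(G["OrderMgmt"][d]["volume"] for d in impacted)
print("Directly impacted:", impacted, "| records/day affected:", total_volume)
```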

Visual analytics tools and techniques create an interactive view of data that reveals the patterns within it, enabling analysts to draw conclusions. At Magic FinServ, we deliver intelligence and insights from data and strengthen decision making. Magic's data services team can create more value for your organization by improving decision making through a range of innovative tools and approaches.

Magic also partners with top data solution vendors to ensure that your business gets the solution that fits your requirements; in this way we combine technical expertise with business domain expertise to deliver greater value to your business. Contact us today and our team will be happy to answer any queries.
