Artificial Intelligence | February 12, 2020

AyasdiAI Model Accelerator recognized as Prime Example of AI Governance by Singapore Government at Davos 2020

BY Simon Moss

On January 22, 2020, Singapore launched the second edition of its Model AI Governance Framework at the World Economic Forum in Davos, at a joint press conference with the WEF Centre for the Fourth Industrial Revolution (WEF C4IR). The framework’s “unique contribution to the global discourse on AI ethics lies in translating ethical principles into practical recommendations that organizations could readily adopt to deploy AI responsibly.” The Model Framework focuses primarily on four broad areas: internal governance structures and measures, human involvement in AI-augmented decision-making, operations management, and stakeholder interaction and communication.

The report includes Symphony AyasdiAI’s Model Accelerator as a use case illustrating its Operations Management principle, which states that models should be repeatable, auditable, traceable, explainable, reproducible, and robust. Model Accelerator includes all of these features, creating AI models that can adequately forecast revenues and the capital reserves required to absorb losses under stressed economic conditions, with justification for regulatory bodies in the United States. In this case, the deployment focused on global liquidity optimization, which not only added several hundred million dollars of improved liquidity for a global institution but also fully justified the improvements to global regulators in simple, clear, and understandable explanations.

Stephen Moody, Head of Customer Operations and Success at AyasdiAI, commented, “AI has the power to improve our lives but only if we adopt safe, fair and transparent processes around its use. At AyasdiAI we have developed methods for explainability in the domain of complex, noisy and biased data – i.e. the real world. These capabilities align well with Singapore’s Model AI Framework and such transparency will enable trust in the next generation of AI solutions. It’s an excellent starting point for organizations to define their AI governance process.”

This framework echoes AyasdiAI’s principles of ensuring that “AI decision-making processes are explainable, transparent and fair, while AI solutions should be human-centric,” always keeping the end user in mind. Mr. S Iswaran, Singapore’s Minister for Communications and Information, puts it nicely: “The steps we take today will leave an indelible imprint on our collective future. The Model Framework has been recognized as a firm foundation for the responsible use of AI and its future evolution. We will build on this momentum to advance a human-centric approach to AI – one that facilitates innovation and safeguards public trust – to ensure AI’s positive impact on the world for generations to come.”

Drop us a line if you want to know more about how this is making a massive difference in liquidity optimization. Here is the link to the full REPORT and a summary.
