Notes from AIWorld Congress 2022, London, UK

27 October 2022

Unless otherwise mentioned, all quotes and positions reflect those of the speakers, not of Data Merit. My notes are most likely incomplete.

I was not familiar with the AIWorld Congress before I attended this year's edition. This event is a no-nonsense, content-first type of event, without a technology vendor marketplace or vendors trying to get your attention. The speaker line-up is international and spans various sectors, including academia and thought leaders. A broad variety of topics is tackled, making it a great place to take the temperature of the AI field. The format consists of two days of fairly high-paced short speeches and keynotes, packing the available time with content. And guess what, nobody mentioned the data mesh...

Scaling AI in the Enterprise

Javier Campos, SVP/General Manager, UK&I Data Labs, Experian

Cross-company adoption. 
There is no right answer to the question of where to put the AI team; it depends on maturity. If maturity is high enough and the volume is sufficient, the Centre of Excellence can be spread across the business units. Middle management is essential: it is the level where fear needs to be alleviated.

Governance & responsible AI. 
Governance is essential and a single person should be responsible.

Skills & diversity. 
You will not make it with data scientists alone. Translators, IT infrastructure specialists and ML engineers need to work hand in hand with the data scientists.

Infrastructure. 
Data, cloud and APIs address the issue that data often sits in legacy systems, while much AI processing happens in cloud infrastructure.

Research & Development.
AI cannot be measured against perfection, as human intelligence is not perfect either.

Why AI cannot debias recruitment

Dr. Eleanor Drage, University of Cambridge

24% of businesses have already implemented AI for recruitment purposes and 56% plan to adopt it in the next year.

The study looked at the following tools: MyInterview, Retorio and HireVue.

The questions to ask are:

1. Can AI really create a representative workforce?
2. What about their claim that race and gender can be stripped from a candidate?
3. What does inclusive hiring actually mean?
4. Is it a race of human versus AI?

Example: one of the tools uses the big five personality trait test to bypass culture, race or gender. They assume that personality is a universal concept, not influenced by other factors. Micro-expressions and online presence are included in the analysis. Is this accurate and ethical? The idea is that race and gender need to be eradicated from the recruitment process and that this would lead to a diverse workforce. How can the “right” fit between a candidate and a company be quantified? It cannot, and it should not even be the purpose to only have “perfect fit” candidates.

As an example: it has been proven that these tools can be misled by increasing the brightness and saturation of the camera.

No tool will solve the structural issue of under-representation of certain categories in most companies’ workforces. Techno-solutionism is the belief that technology alone can solve complex problems.


Panel: Exploring emerging AI market trends

Dr. Eleanor Drage, University of Cambridge
Javier Campos, SVP/General Manager, UK&I Data Labs, Experian
Henri Kivioja, CEO, Lempea Oy
El Bachir Boukherouaa, Division Chief, Information Technology Department, International Monetary Fund

Which emerging trends should we be wary of? We should be careful with technologies that claim to detect things that are not detectable, such as sexuality, authenticity, etc.

What are the game changers? Large language models are generating a lot of impact. Speech technologies will get a new boost, and that in turn opens a lot of new consumer-oriented applications.

We need to pay attention to the effect of Covid on breaking almost all existing time series. Covid itself has accelerated the appetite for AI.

It is concerning that society and regulation cannot at all keep up with the technological evolutions.

In the financial sector, explainability remains crucially important.  Trust is essential and care for AI ethics is the enabler to protect trust.

We are already too connected to our phones and therefore to artificial intelligence. From that point of view, we are already living in a "MetaVerse". The MetaVerse will develop, although the format is still very unclear. We should be particularly worried about indicators of mental health, especially in the population of teenage girls. We need to make sure that "the good" traverses from the MetaVerse to the real world. We need to care and actively promote this; we cannot just pretend that "whatever happens in the MetaVerse will stay in the MetaVerse".


The EU AI Act, a game changer?

Jared Browne, Head of Data Privacy, Fexco

The act aims to ensure excellence in artificial intelligence, support the EU internal market, ensure AI tools are safe and guarantee that the rights of individuals are respected.

For AI providers, it’s comply or die. AI providers will not be able to sell high-risk AI tools in the EU, unless they comply.

What problem is it trying to solve?
Opaqueness of AI, complexity, dependence on data and data quality, autonomy, bias risk, discrimination risk, not ethical by design and, finally, too big to fail.

The AIA defines AI broadly, as a suite of software development frameworks that encompass machine learning, expert and logic systems and Bayesian or statistical approaches. The geographical scope of the act is global: as soon as your output is used in the EU, you need to comply. It includes all stand-alone AI tools and all existing products subject to the EU CE scheme. The act will be enforced by new AI regulators and the European Artificial Intelligence Board. The pain is not the fines, it’s the CE scheme, because there is no business to be done without compliance to that.

The four categories of risk are:
1. Minimal risk: transparency rules apply
2. Limited risk: transparency rules apply
3. High risk (ca. 35%): subject to the EU CE scheme
4. Unacceptable risk: banned

Unacceptable systems include state-level social credit scoring, real-time remote biometrics to identify people in public spaces, and the use of subliminal techniques to manipulate a person’s behavior.

High risk sectors include:
Employment, product safety, systems that interact with children, education, critical infrastructure, core public and private services, police and justice, biometric systems, migration and border control, and systems that influence democratic processes. The high-risk requirements are quite elaborate.

Example:  China has developed an AI prosecutor, called system 606, that automatically files charges against individuals.

Points of contention are:
The definition of AI, data accuracy, explainability, threshold of high risk, definition of high risk categories, IP concerns, class-action suits, metaverse and general purpose technologies.

General purpose AI is being discussed; it was first out of scope but is now back in.

Timing:
The best-case prediction of the timeline would be to make it law by end 2023 and applicable by end 2024. Do not wait to start!


Building the future talent for Ethical AI

Fabio Fulci, VP of Impact Development, Omdena Inc.

Ethical AI is not about bringing together data scientists with good intentions.

What needs to be included: government regulation, bias detection tools and diverse AI teams.

The three principles for building ethical AI are:
Collaboration, Compassion and Consciousness. We have to collaborate with compassion and consciousness. We have to leverage crowd wisdom and build AI through the power of diverse teams.

We should democratize knowledge and education by enabling millions of people from all over the world to contribute to the AI revolution.

The AI journey: From research to real world solutions

Dr. Ariel Ruiz-Garcia, Machine Learning Architect, SeeChange Technologies Limited

Clearly, AI is a rapidly growing field. NeurIPS is a popular AI conference, currently attracting 9634 submissions, up from fewer than 2000 in 2015. Biased data leads to biased models, so we need new policies to tackle data misuse.

Issues in Academic Research
AI in academic research – and often in industry – is based on well-known, well-understood and controlled data sources. There is often a lack of reproducibility and transparency. Code quality is often poor. Everybody claims to be state of the art, but state of the art does not translate easily to the real world. Architectures are very complex and resource-hungry.

Issues in Industry
Business is solution- and monetization-oriented. Everybody wants 99% or 100% accurate models, without really knowing what that means. Models need to be optimized to increase ROI, but at what cost?
Deep learning should not be seen as the answer to everything. There is often a lack of data and model understanding. Teams are often not diverse enough in terms of expertise.

Ethical AI
Does your model respect the user and the user’s privacy? Does the model discriminate or manipulate? Is it biased?

Powering the Digital Economy: opportunities and risks of artificial intelligence in finance

El Bachir Boukherouaa, Division Chief, Information Technology Department, International Monetary Fund

Technological advances are facilitating rapid AI/ML deployment in a wide range of sectors, including finance. AI/ML systems bring in important capabilities.

A majority of financial institutions expect AI/ML to play a bigger role after the pandemic. In particular, it enables the substantial increase in online economic activities, customer relationship management and risk management. AI/ML deployments have been concentrated largely in advanced economies and a few emerging markets. That means that LDCs lack the necessary investments to train skilled people, which may exacerbate their fate in the future.

AI/ML raises new risks to financial integrity and stability. Explainability, privacy and cybersecurity need to get attention. For the financial system as a whole, robustness, interconnectedness, procyclicality and the rise of new systemically important providers need to be tackled.

Typical (very high level) applications are forecasting and predictive analytics, investment services, risk and compliance management, prudential supervision and central banking.

Embedded bias, explainability and complexity need to be tackled.

Forecasting the Market for Artificial Intelligence

Jim Morrish, Founding Partner, Transforma Insights

The forecast starts from the type of environments in which AI can surface: IoT, edge infrastructure, PC/tablets/handhelds or cloud.

Refer to: Transforma Insights, 2022



Why rigorous value management is essential to sustainable data and AI transformations

Jo Coutuer, Founder of Data Merit

Note taking during my presentation was hard, but just reach out to me and I'll be happy to bring you closer to a sustainable value generation from data and AI.



Human Centred Computing with Machine Intelligence

Professor Yonghong Peng, Professor of Artificial Intelligence, Manchester Metropolitan University

There will be ten times more corporate AI in 10 years compared to now. Monetisation tactics vary widely for different market participants. The opportunity for AI is very high: AI could boost the UK economy by 22% by 2030. However, it is a very competitive race. 80% of CEOs think AI will significantly change the way they do business in the next 5 years, but 56% don't feel their teams have the skills to manage such a change. The fundamental challenge is the gap and inequality between human intelligence and machine intelligence.

The paradigm of data-driven AI is to combine big data and AI to generate productivity and innovation by detecting patterns that were otherwise not visible. The next-level approach is to train a neural model, test it and, by testing it, train it further. However, AI should understand inputs, manage knowledge, learn and execute tasks. In the traditional view, we tend to look only at the training. Humans need to be actively involved in the daily training of AI. AI needs to be trained not just by data, but also by computer simulation and domain knowledge.

We need to go from striving for machine intelligence to HAI, Human-Machine Cooperative Intelligence. With that, we can make AI trustworthy to humans and make humans able to work with AI. That helps to remove the gap between human and machine, enabling a trustful collaboration. If we combine the inspiring capabilities of human intelligence with the machine capabilities of understanding, we can come to a true cooperation between humans and machines.

Never start AI with data or technology

Dr Anandhi Vivek Dhukaram, Founder & CEO, Esdha

How many AI-enabled devices have received FDA regulatory approval since 1997? 343 AI/ML-enabled devices have received regulatory approval, of which 64 actually held algorithms. Why is there this discrepancy? Most approvals went to small businesses, not to the large corporations.

Why do AI projects fail (85% of projects fail, 49% exceed deadline, 43% exceed budgets)?

1. Embarking on technology with ill-defined problems
2. Neglecting organizational change and lack of collaboration
3. Neglecting people, process and technology and supporting infrastructure
4. Failure to experiment
5. Poor data quality, privacy, ethical and regulatory issues

To achieve true impact, break down a challenge from its purpose, across values and priorities, purpose-related functions, object-related processes and ultimately physical objects. Understand the socio-technical systems. The road to success is based on the composition of the multi-disciplinary team, the relevance of the use cases and the alignment with strategy and organisation. Research-based frameworks help empower AI teams.

Quote from Andy Grove, CEO of Intel: "We assumed that just because it could be done technically, there would be high demand. We were wrong."

Beyond the Hype and Spin.  Can Big Data & AI add value for investors?

Armando Gonzalez, Co-Founder & CEO, Ravenpack

Various signals from various data sources – investment reports, the timing of result disclosures, audio from investment calls, media coverage, etc. – are collected and processed. Natural language processing is then used to identify earnings-related events, for which a taxonomy of investment events is used. Various signals from various sources are combined into a strategy that tends to outperform normal market performance over a longer period.
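In very simplified form, the taxonomy-based event identification described above can be pictured as matching phrases from an event taxonomy against incoming headlines. The mini-taxonomy and `tag_events` function below are hypothetical illustrations for this note, not RavenPack's actual method:

```python
# Hypothetical mini-taxonomy of investment events (illustrative only,
# not RavenPack's taxonomy).
EVENT_TAXONOMY = {
    "earnings-beat": ["beats estimates", "tops forecasts"],
    "earnings-miss": ["misses estimates", "falls short of forecasts"],
    "guidance-cut": ["cuts guidance", "lowers outlook"],
}

def tag_events(headline: str) -> list[str]:
    """Return the taxonomy event types whose phrases occur in the headline."""
    text = headline.lower()
    return [event for event, phrases in EVENT_TAXONOMY.items()
            if any(p in text for p in phrases)]

print(tag_events("Acme Corp beats estimates but cuts guidance for 2023"))
```

A production system would of course use trained NLP models rather than literal phrase matching, but the output – headlines labelled with taxonomy events – is the kind of signal that then feeds the strategy.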

As it has become impossible for any single human analyst to understand the complexity of markets, Big Data and AI are required to combine the massive amounts of information and help analysts achieve sustainably good performance.

Panel discussion: Emerging opportunities and challenges in the AI Industry.

Armando Gonzalez, Co-Founder & CEO, Ravenpack
Professor Yonghong Peng, Professor of Artificial Intelligence, Manchester Metropolitan University
Chris Eastham, Partner, Fieldfisher
Jo Coutuer, Founder of Data Merit

Various topics were discussed but note taking was hard, being in the panel...

Taming AI/Data Science quality and data quality

Per Myrseth, Data Strategist, DNV AS

The company is active in the assurance and risk management arena. More and more equipment is filled with sensors that generate data, which in turn are used in all kinds of models. The question that can be raised is: is there a need for data science quality diagnostic methods and tools?

We need to distinguish between AI use cases of low consequence and high consequence. Playing chess is low consequence; navigating a ship is high consequence.

Data quality, and the lack of best practices and tools to document the quality of production-grade data, are key factors in the failure of AI projects. We need to be conscious that the use of the data science model itself impacts the real world, which in turn generates new data. In all steps of the value chain (real world, data sets, model, use) there needs to be trust. Any deficiency in trust along the chain invalidates the final outcome. As such, the trust chain points in the opposite direction of the production value chain of a data science solution.

The organisation that built the data science model is also a key factor in the overall trust equation.  

ISO 8000 is an international standard on measuring data quality, and this field is actively growing. There is a lot of proven interaction between the quality of data and the quality of models. For instance, completeness in data is often strongly related to accuracy in models.
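As a minimal sketch of what measuring one data quality dimension in the spirit of ISO 8000 can look like, here is a completeness ratio over a record set; the sensor records and field names are illustrative assumptions, not taken from the standard:

```python
def completeness(records: list[dict], fields: list[str]) -> float:
    """Fraction of (record, field) cells that are present and non-null."""
    total = len(records) * len(fields)
    if total == 0:
        return 1.0  # an empty record set is vacuously complete
    filled = sum(1 for r in records for f in fields
                 if r.get(f) is not None)
    return filled / total

# Hypothetical sensor readings with some missing values.
sensors = [
    {"temp": 21.5, "pressure": 1.02},
    {"temp": None, "pressure": 0.99},
    {"temp": 22.1, "pressure": None},
]
print(completeness(sensors, ["temp", "pressure"]))  # 4 of 6 cells present
```

Tracking a metric like this over production data is one concrete way to document the link the speaker mentions between data completeness and model accuracy.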

To trust data science, you must believe in the partner, the quality of the data science and the fit for use within acceptable risk and cost. Effective use of data science models requires attention to trust, bias, ethics, fairness, security, scalability, accuracy, relevance, timeliness, robustness, governance and explainability. The data science model itself can be subject to confusion, imprecision, feature drift, sub-model inaccuracy, ... And finally, this generates a set of requirements on your data quality.

It has been shown that less data is required to build an accurate model if the data quality is higher and less noise is present in the data (Andrew Ng).

A documented approach to data science quality can be relevant for buyers, to compare vendors; for vendors, to gain trust from buyers through certification; and for users, to reassure them with evidence of quality.

AI and the Project Manager

Peter Taylor, VP Global PMO, Ceridian

AI is slowly breaking into the profession of project management; see Peter's book "AI and the Project Manager: How the Rise of AI Will Change Your World".

Gartner quote: By 2030, 80% of the work of today's project management discipline will be eliminated as AI takes on functions such as data collection, tracking and reporting.

Most project managers (63%) are excited rather than worried (2%). The most valuable application areas to be supported by AI would be scheduling, budgeting and resource planning. Most current solutions are very niche, but major players are expected to enter the market soon.

The big questions are:

1. Can ANYONE do project management?
2. Do we care about certifications?
3. Is methodology history?
4. Is it the end of bodies of knowledge?
5. Will program managers become devalued?

A mindshift will be required. So far, AI cannot be "human". This will push project management practices back to the core of "people management": communicate, argue, persuade, ...

AI is coming, will stay and will change the PM role significantly. We should embrace it. It will aid project managers in a variety of ways. The profession will become more about people, less about methodology. Project managers will become more people managers, empowered by AI.


Rethinking AI

Daniel Hulme, CEO of Satalia

Future AI systems need to be adaptive.  None of the systems in production are auto-adaptive so far.

Neural networks that model the brain are good at recognition, but reasoning and decision making require adaptiveness. There are a lot of AI taxonomies out there. However, the main criterion for AI should be adaptiveness.

We need to frame the application and not the technology.  So what do we see?
1. Task automation
2. Generation of images, video and text
3. Human representation
4. Extracting complex insights and predictions
5. Complex decision making
6. Extending the abilities of humans in the physical or digital worlds

The intent behind an AI application needs to be scrutinized. AI in itself is neither ethical nor unethical.

The PESTEL framework for singularity:
1. Political: when we no longer know what is true
2. Economic: when we automate the majority of human labor (book: The Economic Singularity)
3. Social: when we cure death
4. Technological: when we create a superintelligence
5. Environmental: uncontrollable ecological collapse
6. Legal: when surveillance becomes ubiquitous


5G and Edge Powered AI

Dario Betti, CEO of Mobile Ecosystem Forum

5G enables more devices, more sensors and thus more data, with near-instant response. Edge computing brings computation closer to the sources of data; it helps to reduce response times and save bandwidth.

Do telcos use AI today? Yes, for applications such as capacity planning, servicing, network planning and optimisation, real-time pricing for data traffic between data centres, predictive analysis for marketing targeting, personalisation, chatbots, anti-fraud, …

So what about AI at the 5G edge? Edge computing is not really standardised, so scalability is a challenge.

Potential future applications are: AI as a service, real-time data processing at the edge, sensing IoT, smarter bandwidth management and network orchestration.