November 23, 2017 @ 6:58 am

The Future of AI in Europe

Should AI be regulated in Europe, beyond the legal framework that the GDPR sets out? Is the current state of the art in AI, with the ever-increasing use of big data and algorithms in our daily lives, fit for purpose?

When?   Wednesday 29 November 2017, 15:00 to 17:30, followed by a cocktail reception

Where? Clifford Chance LLP, Avenue Louise 65, Box 2, 1050 Brussels

The European Committee of Interoperable Systems (“ECIS”) invites you to its traditional end of year debate on technology policy.

The aim is to throw more light on the key challenges with respect to AI and big data. We will hear different perspectives on the matter from academia, the EU institutions and business.

Networking cocktail: Please join us afterwards for a cocktail and canapé networking reception, providing an opportunity to speak with some of our speakers about issues raised during the debate.

Please note that the event is by invitation only, and places are limited. 

Please RSVP to info@ecis.eu or telephone Elena Bek at +32 2 533 5964.
ECIS website: http://www.ecis.eu/


ECIS Press Statement

The following is a statement by ECIS commenting on the European Commission’s Communication on Resilience, Deterrence and Defence: Building strong cybersecurity for the EU, published on 13 September 2017.

Brussels – 13 September 2017 – ECIS welcomes the Communication on Resilience, Deterrence and Defence: Building strong cybersecurity for the EU, in particular the European Commission’s renewed focus on education and training and other initiatives aimed at improving cyber-resilience, which ECIS has long argued should be one of the top priorities.  Indeed, attacks such as Dyn and WannaCry have demonstrated that one of the greatest vulnerabilities in the Internet of Things (“IoT”) ecosystem is human error – whether clicking on phishing links, failing to update or patch systems, or neglecting basic practices of cyber hygiene.

Whilst it is critical that regulators across the European Union act in a co-ordinated fashion to enhance and promote a regional cyber-secure ecosystem, it is also important that they do not advance policies which could inadvertently stifle innovation and thereby undermine cyber resilience.  The vast potential of the IoT for the European Union will be realised only in an EU policy climate which focuses on managing risk and empowering innovators to be sufficiently dynamic, adaptive, and responsive to an ever changing cyber threat landscape.

ECIS is of the opinion that the proposed certification and labelling framework is an inadequate solution to the challenges cyber security poses.  It will add complexity and costs for providers without adequately protecting end users.  In addition, ECIS fears certification could instil a false sense of security in users.

International standards alignment must remain a priority.  In addition, there is a need for guidance on cyber security risk management frameworks.  This guidance should include information as to how the respective standards address security requirements laid down in EU regulation.  Where gaps in standards and certification schemes are identified, these should be addressed via the normal EU standardisation system.

About ECIS

ECIS is an international non-profit association founded in 1989 that endeavours to promote a favourable environment for interoperable ICT solutions.  It has actively represented its members regarding issues related to interoperability and competition before European, international and national fora, including the EU institutions and WIPO.  ECIS’ members include large and smaller information and communications technology hardware and software providers.  The association strives to promote market conditions in the ICT sector that ensure there is vigorous competition on the merits and a diversity of consumer choice.

ECIS’ member companies have a long and established track record of providing resources and operational expertise to EU authorities so as to effectively address cyber security and related policy questions.  In particular, ECIS has always been an advocate of a co-operative regulation model for cyber security.


Artificial Intelligence (“AI”) is not a new phenomenon; it has been around for at least 50 years as possibly the grandest of all challenges for computer science. Recent developments have led to AI systems providing remarkable levels of progress and value in different areas – from robotics in manufacturing and supply chains, to social networks and e-commerce, and systems that underpin society such as health diagnostics. As with any technology, there is an initial period of hype with excessive expectations, followed by a period of reality and measurable results – we are at the beginning of such a period right now. Our comments below reflect our initial thinking on these issues, and concern machine learning systems that are trained with data sets and algorithms, not so-called Artificial General Intelligence.

As with similar technological developments in the past, it is important that the industry is left free to develop, and that the technology evolves over time according to the needs of businesses and consumers. Intervening at such an early stage would have a detrimental impact on the evolution of this technology, and should therefore be avoided. Nonetheless, there is a need for a strong ethical approach to how AI should be applied, which should be properly set out at EU level (if not at global level).

For the development of AI technology, we consider it essential that the algorithms used for AI purposes are transparent (in the sense of Recital 71 of the General Data Protection Regulation (“GDPR”)), which means that where there is unintended bias, that bias can be addressed. Moreover, it is appropriate that these systems are subjected to extensive testing on the basis of appropriate data sets, as such systems need to be “trained” to achieve equivalence to human decision making.
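
By way of illustration only, the following minimal Python sketch (toy data and a generic model, not any particular AI system) shows the most basic form such testing can take: the trained system is evaluated on held-out data it never saw during training, rather than being trusted on its training-set behaviour alone.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Toy data standing in for an "appropriate data set".
    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

    # Hold out 30% of the data; the system never sees it while training.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))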

With regard to liability, it is important, among other aspects, to consider the complex supply chain in AI services. The software algorithms and the data sets used to train the software are important elements, but other aspects also need to be considered, such as the purpose of the AI application and the sector-specific norms that are in place. In addition, algorithms should be open to inspection at a technical level, so that the reasons for malfunctions can be established.

Finally, we would also like to refer you to the following academic papers, which do not necessarily reflect our position on the relevant matters, but which we consider useful and interesting background in the context of your ongoing fact-finding study on AI:

 


1. Introduction

Issues about data – for example customer lists and purchase histories – have been relevant to competition authorities for decades.  But various technological developments have led to rather revolutionary changes in the amount and kinds of data that can be gathered and the ways in which that data can be analyzed and used.  So the era of so-called Big Data is likely to present new competition law issues.  But we’re fairly close to the beginning of the road of assessing how Big Data may raise competition concerns.  Having massive quantities of data and even achieving a great advantage in data scale does not inherently yield dominance or give rise to incentives to preserve that dominance through anti-competitive conduct.  Nor would the acquisition of a “data rich company” necessarily lead to dominance or significantly impede competition.

A number of instances might arise where data and scale will give rise to competition problems, but every situation will have to be addressed on its own facts.  There are three areas where data and competition law may intersect: (i) merger control and data; (ii) dominance cases involving data; and (iii) the use of data and concerted practices.  Each will be addressed in turn.

2. Merger control and data

There is a need to distinguish between two issues: (i) whether the existing Merger Regulation thresholds should be modified to require notification of data-driven acquisitions that are currently not subject to review by the European Commission (“EC”), and (ii) the assessment of potential harm to competition in cases that are subject to merger control (either under the existing thresholds or some new ones meant to capture more data-related deals).

At the beginning of October the EC launched a public consultation on the possibility of changing the existing Merger Regulation’s purely turnover-based notification thresholds.  Some have suggested that purely turnover-based thresholds do not capture certain transactions that could raise serious competition concerns.  For example, an innovative target with little or no income could be a tempting acquisition, especially if the acquirer can combine the target’s assets (perhaps including data) with its existing assets in a way that yields a competitive advantage, or if, by acquiring the target, an existing large market player can avoid disruption of the market by an innovative new competitor.

Potentially, the acquisition by an existing large player of another company with little current revenue but large quantities of unique, not easily replicable data could create barriers to entry and establish or maintain a dominant position.

In the pharmaceutical sector, an existing large player might be after a target’s pipeline; the target might have developed a promising new drug that is not yet approved for sale (and which might compete with the acquirer’s existing products).

In theory, acquiring a company with these kinds of assets might result in “a significant impediment to effective competition”, even though the company’s turnover might not be high enough to meet the Merger Regulation thresholds.  Nevertheless, one should be cautious about making changes to the merger notification thresholds.  It is important to find the right balance, so as to cover only mergers that could have potentially negative effects on competition, without making life harder for innovative startups.

One key question is whether there really is any evidence that potentially anti-competitive transactions are falling through the cracks.  Some suggest that Facebook’s acquisition of WhatsApp in 2014 presented an example of a gap in the EU Merger Regulation.  Facebook paid USD 19 billion for a company with 600 million users, but the merger did not need to be notified to the EC because WhatsApp’s turnover was too low.  Facebook/WhatsApp is not necessarily the best example of a case proving a need to change the EU thresholds, because the case was ultimately referred to the EC by the national competition authorities of three Member States; beyond it, there seem to have been no cases suggesting there is such a need.  As such, there does not seem to be any concrete experience demonstrating that the jurisdictional rules should be changed in order to look beyond turnover as a means of identifying whether or not a merger should be notified.

It should be noted that Germany is moving on this front.  The German government has published a proposed amendment to the German antitrust law introducing a new merger control notification threshold based on transaction value.  The German proposal resembles the size-of-transaction test used in the United States.  One question that arises in this context is this: if a size-of-transaction test works in the U.S., why would it be so bad for one to be adopted by Germany or the EC?

3. Dominance and data

Second, it is worth considering data in the behavioural context.  There are two aspects to it:

(i) Can controlling data and having a large advantage in data scale give rise to dominance?

(ii) To what extent can a dominant company’s conduct related to data restrict competition?

  • Dominance

Controlling data and having a large data scale advantage does not inherently yield dominance.  Nevertheless, achieving scale in data may create barriers to entry and can be a means to establish or strengthen market power.

Search and search advertising is one area with which the EC has become very familiar over the years (going back at least to DoubleClick).  Thus, search and search advertising are riper than any other areas for addressing the consequences of data and scale, and may well present differences from how scale and data are or should be analyzed in other areas.

In search, having a huge scale advantage in query-related data will yield dominance, and a dominant search company’s interest in retaining its data scale advantage, buttressing barriers to entry, and monetizing its data in ever-expanding ways will give rise to incentives to engage in conduct that might be deemed unlawful under the antitrust laws.

In relation to search engines specifically, it is query scale – not technology – that is the primary driver of search engine profitability and competitiveness, due in particular to machine-learning aspects of search.  Search algorithms learn from user queries and how users interact with search results, and the greater the scale advantage with respect to number of queries (in particular so-called “long tail” queries), the more relevant results the search engine will be able to show to users.

This helps explain why barriers to entry can be so high in a search engine market where one company has an overwhelming advantage in scale of queries.
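
To make that feedback loop concrete, here is a deliberately simplified Python sketch; the query, documents and click probabilities are invented, and no real search engine works this crudely.  A ranker that learns click-through rates from a query log produces noisy estimates when the log is small and reliable ones when it is large, which is the scale advantage in miniature, and is most pronounced for long-tail queries that each appear only rarely.

    import random
    from collections import defaultdict

    def learn_ctr(log):
        # log: iterable of (query, doc, clicked) records from past traffic.
        clicks, views = defaultdict(int), defaultdict(int)
        for query, doc, clicked in log:
            views[(query, doc)] += 1
            clicks[(query, doc)] += int(clicked)
        # Smoothed click-through rate; unseen pairs default to 0.5 in rank().
        return {k: (clicks[k] + 1) / (views[k] + 2) for k in views}

    def rank(query, docs, ctr):
        return sorted(docs, key=lambda d: ctr.get((query, d), 0.5), reverse=True)

    # Document "b" is truly more relevant to this long-tail query: users
    # click it 70% of the time, versus 30% for "a".
    random.seed(0)
    def simulate(n_impressions):
        return [("obscure query", doc, random.random() < p)
                for _ in range(n_impressions)
                for doc, p in (("a", 0.3), ("b", 0.7))]

    for n in (3, 3000):  # tiny query log versus large query log
        print(n, rank("obscure query", ["a", "b"], learn_ctr(simulate(n))))
    # With 3 impressions the estimates are noise; with 3000 they converge
    # on the true relevance ordering.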

  • Conduct

Having considered dominance, it is also necessary to examine to what extent a dominant company’s conduct related to data can actually restrict competition.

Can foreclosing competitors’ access to data constitute an abuse?  Many argue that this depends on the type of data, and its replicability.  This has already been analyzed in the merger context: if the data acquired by the acquiring company from the target is easy to obtain from other sources, then there may be no anti-competitive effects.

What if a dominant company changes its privacy practices to combine sources of data in a way that gives it the ability to monetize that data in ways no competitor can match?  Might this be an exploitative abuse under Article 102?  And if it forecloses competition by other advertising competitors, might it be an exclusionary abuse?

4. Concerted practices and data

A third example of potentially infringing data-related conduct arises in the context of potential concerted practices.  What if competitors on a market used artificial intelligence tools to determine prices?  Let’s say those tools determined that the most profitable way to respond to a competitor’s price increase would be to increase one’s own price (rather than keeping one’s price down to capture a higher volume share).  In principle there would be no agreement between the companies themselves, yet the result for consumers would be an increase in prices.
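
The following deliberately simplified Python sketch illustrates that scenario; the demand model and all figures are invented for illustration.  Each firm’s pricing tool independently picks whatever own-price maximises its profit given the rival’s last price, and under these assumed numbers matching a rival’s increase beats undercutting, so prices rise with no agreement between the companies.

    # Invented demand model: 100 units of total demand; equal prices split
    # the market 50/50, while the cheaper seller captures 54% (customers
    # here are only mildly price-sensitive).
    UNITS, EQUAL_SHARE, CHEAPER_SHARE = 100, 0.50, 0.54

    def profit(own_price, rival_price):
        if own_price < rival_price:
            share = CHEAPER_SHARE
        elif own_price > rival_price:
            share = 1 - CHEAPER_SHARE
        else:
            share = EQUAL_SHARE
        return own_price * share * UNITS

    def best_response(rival_price, candidate_prices):
        # The pricing tool simply picks the most profitable own-price.
        return max(candidate_prices, key=lambda p: profit(p, rival_price))

    # The rival has just raised its price from 8 to 10.
    print(profit(8, 10))                  # hold at 8:   432.0
    print(profit(10, 10))                 # match at 10: 500.0
    print(best_response(10, [8, 9, 10]))  # -> 10: matching the rise pays
    # Both firms running this logic settle on the higher price without
    # ever communicating, let alone agreeing.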

5. Conclusion

Many data-related antitrust questions and interesting debates are still to come.  Some instances might arise where data and scale will give rise to competition problems, but every situation will have to be addressed on its own facts.

Author – session moderator (more detailed report by the subject matter expert, Adrian Weller, to follow)

Artificial Intelligence (“AI”) is not a new phenomenon; it has been around for at least 50 years as possibly the grandest of all challenges for computer science.  Recent developments have led to AI systems providing remarkable levels of progress and value in different areas – from robotics in manufacturing and supply chains, to social networks and e-commerce, and systems that underpin society such as health diagnostics.  As with any technology, there is an initial period of hype with excessive expectations, followed by a period of reality and measurable results – we are at the beginning of such a period right now.[1]  Recent progress has been enabled by three factors:

  • Advances in algorithmic theory;
  • Greater computational power; and
  • Availability of data.

Our session focused on three questions: (i) how do we deal with liability; (ii) how can privacy be respected; and (iii) how can we achieve a greater degree of transparency and accountability in algorithms and avoid discrimination?  Algorithms arguably need to attain a “higher” standard than humans: they need to be more transparent than human decision making, and while they may follow the same logic as humans, they should be held to account.

First, with regard to liability, it is important, among other aspects, to consider the complex supply chain in AI services.  The self-driving car is a good example: in the case of an accident, who is at fault – the software developer, the manufacturer or the driver?  AI agents are rather like children in that they are strongly influenced by what they learn from data – outputs can change substantially depending on the data set used.  An example is the Tay chatbot, which emitted hate speech after being exposed to Twitter.  It is therefore important to consider what standards of behavior are appropriate and what actual normative standards should be in place.  In addition, algorithms should be open to inspection at a technical level, so that the reasons for malfunctions can be established.  From a legal point of view, a key consideration is whether algorithms should have legal personhood.

Second, with regard to privacy, it can be said that we live in an age where “tracking” and various kinds of surveillance have become commonplace.  Health apps are widely used, and our phones collect and transmit large amounts of data about us.  Giving up personal privacy, while perhaps not such an issue for today’s millennials, can clearly have significant consequences down the road (think Facebook).  The ability to manage one’s data, and third-party data ownership, are important factors in this regard.  Who actually owns your data?  For instance, if you allow a smart meter to be installed in your household, who owns the data collected about your energy usage and patterns, and who is given access to it?  Is the data retained by the primary business group or passed on to third parties?  If data is “tweaked”, how does this affect ownership, given that it has been transformed?  It should be noted that tweaked data can be used to train algorithms.

What about the “right to be forgotten”, as set out in data protection laws including the General Data Protection Regulation (“GDPR”)?  And if your data has been used to train an algorithm, what happens when your data is erased?  Does the erasure influence the output?
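
As a purely illustrative answer to that last question, the Python sketch below (toy data and a generic model, not any real system) honours an erasure request in the bluntest possible way, by retraining from scratch without the erased record, and then compares the model’s outputs before and after.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))                  # 200 people, 3 features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy outcome

    full = LogisticRegression().fit(X, y)

    # Honour an erasure request by dropping the person's record and
    # retraining from scratch, removing their influence on the model.
    X2, y2 = np.delete(X, 0, axis=0), np.delete(y, 0)
    retrained = LogisticRegression().fit(X2, y2)

    probe = X[:5]                                  # compare some outputs
    print(full.predict_proba(probe)[:, 1].round(3))
    print(retrained.predict_proba(probe)[:, 1].round(3))
    # With 200 records, one deletion barely moves the outputs; on small
    # or skewed data sets a single erased record can shift them visibly.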

Another growing area of interest is the concept of “Differential Privacy” and the potential to mask certain data fields.  This will be covered in a follow-up paper.
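
Pending that paper, a minimal sketch of the core idea may still be useful: the classic Laplace mechanism releases an approximate statistic whose noise hides any single individual’s contribution.  The records and epsilon values below are invented for illustration.

    import numpy as np

    def dp_count(values, predicate, epsilon, rng):
        # A counting query has sensitivity 1 (adding or removing one
        # person changes the count by at most 1), so Laplace noise of
        # scale 1/epsilon yields epsilon-differential privacy.
        true_count = sum(predicate(v) for v in values)
        return true_count + rng.laplace(scale=1.0 / epsilon)

    rng = np.random.default_rng(42)
    ages = [23, 35, 41, 58, 62, 29, 44, 51]   # toy records
    for eps in (0.1, 1.0, 10.0):              # stronger -> weaker privacy
        noisy = dp_count(ages, lambda a: a > 40, eps, rng)
        print(f"epsilon={eps}: noisy count {noisy:.1f} (true count 5)")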

Third, with regard to transparency, a key concern is to ascertain “how well” an algorithm is working, set against its original objectives.  Can a system be trusted?  Why was a specific recommendation made in one decision instance?  Is the algorithm “fair”?  What about instances of bias in the system, which may be influenced by certain attributes such as gender or ethnic origin?  A good example is an online application for a bank loan, where the system may refuse credit to minority groups and cause follow-on adverse impacts on an individual’s ability to obtain credit.  Another example is the COMPAS recidivism risk prediction system used in the US to help decide whether parole can be granted,[2] which has been claimed to be biased in certain ways in terms of race.  How do you remove bias from such systems to avoid discrimination?  Is the right approach to delete certain data fields, or to add other data fields to make the system fairer?  Humans are arguably biased by nature, so is it possible for AI systems to function better?
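
On the question of deleting data fields, the Python sketch below (entirely invented data) illustrates one reason it may not be enough: a correlated “proxy” field, such as a postcode, can leak the deleted attribute, so the gap in approval rates between groups persists even though the sensitive field was never given to the model.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 2000
    group = rng.integers(0, 2, n)              # sensitive attribute (0/1)
    proxy = group + rng.normal(0, 0.3, n)      # e.g. postcode tracks group
    income = rng.normal(0, 1, n)
    approved = (income + 0.8 * group + rng.normal(0, 0.5, n) > 0).astype(int)

    # Train WITHOUT the sensitive field: only the proxy and income remain.
    X = np.column_stack([proxy, income])
    pred = LogisticRegression().fit(X, approved).predict(X)

    # Demographic parity gap: difference in approval rates between groups.
    gap = pred[group == 1].mean() - pred[group == 0].mean()
    print(f"approval-rate gap with the group field deleted: {gap:.2f}")
    # The gap stays large because the proxy leaks the deleted attribute:
    # removing a data field does not by itself make a system fair.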

In the follow-up discussion we also considered the European regulatory context.  A key consideration here is that AI is changing very rapidly while policy actions tend to lag behind.  As such, there is a risk that regulatory action by the EU, as in other technology areas, may fall behind the curve of technological development and deployment.  It is therefore difficult to imagine how such regulatory intervention in the EU would usefully guide the development and deployment of AI systems in a meaningful and timely way.

On the other hand, key issues underlying AI, such as data protection, safety and security, liability, and accountability, are already covered, to a greater extent than is immediately obvious, by current and soon-to-be-implemented legal frameworks.  A good example of this is the GDPR, which will take effect as law across the EU in 2018 and will restrict automated individual decision-making (that is, algorithms that make decisions based on user-level predictors) which “significantly affects” users.  The law will effectively create a so-called “right to explanation”, whereby a user can ask for an explanation of an algorithmic decision that was made about them (see Goodman and Flaxman).[3]
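
What such an explanation might look like is still debated; as a toy illustration only (our sketch, with invented feature names, and not anything prescribed by the GDPR), a linear model’s decision can at least be broken down into per-feature contributions:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    features = ["income", "debt", "years_at_address"]  # invented names
    X = rng.normal(size=(500, 3))
    y = (X @ np.array([1.0, -1.5, 0.5]) > 0).astype(int)
    model = LogisticRegression().fit(X, y)

    applicant = X[0]
    decision = model.predict(applicant.reshape(1, -1))[0]
    contributions = model.coef_[0] * applicant         # weight * value
    print("decision:", "approved" if decision == 1 else "refused")
    for name, c in sorted(zip(features, contributions),
                          key=lambda t: -abs(t[1])):
        print(f"  {name}: {c:+.2f}")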

Policy action should make sure that society can take full advantage of the capabilities of AI systems while minimizing the possible undesired consequences for people.  Safety is very important, as are fairness, inclusiveness, and equality.  These and other properties should be assured in AI systems, or at least we should be able to assess the limits of an intelligent machine, so that we do not “overtrust” it.

It is therefore of the utmost importance for societal well-being that policies and regulations help society use AI in the best way.  Ethical issues, including safety constraints, are essential in this respect, since an AI system that behaves according to our ethical principles and moral values would allow humans to interact with it in a safe and meaningful way.

While it is clear that a complete lack of regulation would open the way to unsafe developments, this may not be the case in the EU, given the strong legal framework that will govern the development and deployment of AI.

[1] http://www.nextgov.com/emerging-tech/2016/12/microsofts-new-plan-flood-your-entire-life-artificial-intelligence/133908/?oref=nextgov_today_nl

[2] https://www.nytimes.com/2014/02/17/opinion/new-yorks-broken-parole-system.html?_r=0

[3] https://pdfs.semanticscholar.org/25d7/d975a12fd657b4743acd262cbdfe2dc2e6e9.pdf

