
Disruptive technologies are developing faster than legislation. Artificial intelligence (AI) and machine learning will push technology even further, and nobody knows the full extent of their capabilities.

Ageing legislators in government offices are certainly not in touch with digital technologies. Yet as more businesses invest in AI, Big Data and machine learning, the technological evolution progresses without legal restrictions and regulations.

The promise of AI is to deliver enhanced experiences, improve the quality of life and enable businesses to make better decisions for the benefit of their customers and stakeholders.

The emergence of AI has undoubtedly created uncharted territory, raising critical ethical, social and legal issues for consumers, businesses and lawmakers.

The latter group is already looking for solutions to ensure AI is used responsibly in government programs and business services. The protection of consumers and end-users is of utmost importance.

However, as we have seen in the past, the promise of new technologies is not always fulfilled. And more often than not, it is the consumer that is the victim.

Privacy Concerns

AI and the Internet of Things (IoT) rely on the continuous interaction between devices. But to connect personal devices, consumers are required to hand over personal data.

Privacy is already a hot topic for discussion. Tech giants including Google and Facebook have already dented the public trust by violating consumer privacy.

To date, multi-billion-dollar companies are fined a fraction of the money they make, and subsequent laws ultimately work in their favour whilst hindering SMEs. GDPR is a case in point.

In October 2016, the House of Commons in Britain published a report on robotics and artificial intelligence. The paper primarily addresses privacy and accountability but raises more questions than it answers.

Accountability

AI arguably complicates the privacy issue even further. Lawmakers responsible for updating legislation that puts controls and limitations on how businesses use data must iron out who is accountable for protecting personal data.

The key points to focus on are the ethics of AI, its development and deployment, and the fundamental rights of consumers. Many of these regulations already exist in relation to digital technology but are foggy, to say the least.

With the emergence of AI-powered IoT systems, there will be an increasing number of cases where more than one party is involved in the handling of consumer data.

The volume and relevance of data will need to be scrutinised under legal regulations. But where does that leave businesses in what they can and can’t do? What should and should not be deemed appropriate, and what will the ramifications be for companies that breach consumer rights?

For example, to use IoT systems, consumers will have to hand over their phone number and/or email address. This data is then shared amongst three different companies for IoT to run smoothly. Will the consumer receive advertising spam from three different companies?

Even more complicated is the question of causality. For example, if a robot makes an error that results in a financial loss, is the company legally responsible for damages? Can a driverless car be accused of causing an accident?

Legislators for AI have to dig deep. The scope of the legislation should address programming errors, whether the statistical chances of glitches raise safety issues, whether testing protocols are sufficient and much more.

A concern for companies will be whether manufacturers are given the freedom to pass on the responsibility to their customers and if so, where does that leave businesses?

The legal complications around AI are a minefield. So far, lawmakers have failed to protect regular business owners and consumers. Before you invest in AI, speak to legal experts with experience in AI, Big Data and machine learning laws.