Joshua Ellul on technologies

Words by Dr. Joshua Ellul, Chairman of the MDIA and Director of DLT at the University of Malta. Catch up with him later this year in the Autumn edition of Block Magazine.

Over the past few years, we have seen a surge in interest (and hype) surrounding Artificial Intelligence (AI), Blockchain, Cloud computing, Data science, and Quantum computing and communication. These (and other) technologies are helping to shape a new world: one which could allow for more automation of tasks that typically require humans; provide more transparency and guarantees; do so with minimal upfront costs, across various domains, and at unprecedented speeds and distances; and ultimately benefit you.

Such benefits are frequently publicised, yet your place in this new world is not discussed enough. In what follows, a number of issues that will impact society are discussed.

If you are a techie, you probably have a good grasp of what's coming. You may be working on core development of such technologies, which requires undertaking research to familiarise yourself with their theoretical underpinnings. Or, if you will be developing applications using these technologies, you are probably waiting for (or already using) new frameworks and libraries that hide away their intricacies, allowing you to focus on the application at hand.

However, one thing techies will increasingly be required to do is engage with professionals from other disciplines: discussing specific problems pertaining to the respective business domain; debating technical architectural options and decisions with legal professionals, where various regulatory and compliance requirements may need to be followed (or challenged); or working with ethicists and other stakeholders on implementation options in application domains that inherently require ethical considerations.

Besides the techies, who are endlessly working on implementing solutions that automate and enhance various tasks and provide new types of service, there are the various other professionals. Such professionals play a critical role in ensuring that such products and services are sound, relevant to the various stakeholders, and actually marketable and above board.

Whether you work directly with an innovative technology-related company or in any other sector, such emerging technologies will have an impact on professionals across the board. You may require training to understand the products and services your company offers in order to apply your skills effectively; or, even if your job is totally unrelated, at some point these technologies will change the processes you follow.

Consider: AI tools implementing a first line of customer support; processes made completely transparent using Blockchain; a shift to Cloud services that provide ease of access and use; or computation and communication making a Quantum leap in speed.

The above highlights the need for multidisciplinarity: professionals from various backgrounds who are specialised in their own discipline yet knowledgeable, or at least literate, in the others, allowing them to engage in discussion and critical thought across the various perspectives, including technical, legal, business and ethical ones.

It is not only those working directly or indirectly with emerging technologies who will be affected. These technologies bring further digitisation and automation, allowing many tasks to be undertaken more efficiently and effectively than before. This means that certain types of jobs may be automated and no longer require a human fully in the loop. Some tasks may be fully automatable, whilst others may still require human oversight to some extent.

Indeed, this means that some jobs which currently play an essential role within various industries may no longer be required. We must consider how this will impact those affected and society at large, and what policies can be put in place to minimise the negative effects on the individual and on society. Will those affected be able to find similar jobs elsewhere? Will they require upskilling or reskilling? What responsibilities should employers bear? What about governments? And even individuals?

Let's assume for a second that, through automation, the cumulative amount of work required within a society can be minimised. Would society be able to benefit directly from this, allowing everyone more time off? Or will this only benefit those at the top of the pyramid? History since the automation efforts of the industrial revolution suggests the former is unlikely. In fact, certain types of tasks that can be automated using AI and data science require images and other types of data to be labelled or verified by human operators. This need has given rise to a new sector, AI labelling, in which many 'AI farms' have been sprouting up around the world. It could very well be that former manufacturing quality-assurance staff could be reskilled to work within the AI labelling industry, which some may consider an opportunity to improve working conditions for such staff (from manufacturing-floor jobs to office jobs).

Blockchain, Distributed Ledger Technology (DLT), and Smart Contracts allow for disintermediation — removing ‘middlemen’ from various services and processes, which also brings with it greater transparency, guarantees and tamperproof mechanisms that ensure participants in the services cannot cheat.
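
The tamper-evident property mentioned above can be sketched with a minimal hash chain, where each record stores the hash of its predecessor, so altering any earlier record invalidates everything after it. This is an illustrative simplification, not any real blockchain's API; all names here are hypothetical.

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    # Hash the record together with the previous block's hash, so
    # changing any earlier record invalidates every later hash.
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64  # placeholder "genesis" hash
    for rec in records:
        h = block_hash(rec, prev)
        chain.append({"record": rec, "prev_hash": prev, "hash": h})
        prev = h
    return chain

def verify_chain(chain) -> bool:
    prev = "0" * 64
    for blk in chain:
        if blk["prev_hash"] != prev or block_hash(blk["record"], prev) != blk["hash"]:
            return False
        prev = blk["hash"]
    return True

chain = build_chain([{"worker": "A1", "hours": 8}, {"worker": "A2", "hours": 7}])
assert verify_chain(chain)
chain[0]["record"]["hours"] = 12   # tamper with an early record
assert not verify_chain(chain)     # the chain no longer verifies
```

Real blockchains add consensus and replication on top of this idea, which is what makes the history hard to rewrite rather than merely easy to check.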

However, these guarantees only exist for the services and processes encoded within the Blockchain itself; a problem arises whenever a service requires external input. For example, suppose Blockchain is used to provide transparency within a supply chain to ensure that no slave labour is used. Whilst all data pertaining to the staff working within the supply chain, and their productivity, is available for all to see and cannot be manipulated, someone still has to input that data.

If staff are provided with unique credentials (or rather private/public key pairs), we can ensure that it is indeed the particular individual inputting data. However, could they be coerced into inputting incorrect data, for example the amount of work undertaken? Could some of their productivity go unregistered within the system? Whilst one can optimise the process to be as transparent and foolproof as possible, ultimately it is reliant on the data input, and the various stakeholders inputting data are deemed to be trusted parties on whose correct reporting the overall system depends.
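
The "who input this?" guarantee can be sketched as follows. Real systems would use asymmetric key-pair signatures; as a standard-library stand-in, this sketch uses HMAC tags with per-staff secret keys (the keys and entry format are invented for illustration). Note what it proves and what it doesn't: the tag shows the key holder produced the entry, not that the entry is true.

```python
import hmac
import hashlib

# Hypothetical per-staff secret keys; a real deployment would use
# private/public key-pair signatures rather than shared-secret HMACs.
keys = {"staff_17": b"s3cret-key-17"}

def sign_entry(staff_id: str, entry: str) -> str:
    # Produce an authentication tag binding this staff member to the entry.
    return hmac.new(keys[staff_id], entry.encode(), hashlib.sha256).hexdigest()

def verify_entry(staff_id: str, entry: str, tag: str) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(sign_entry(staff_id, entry), tag)

entry = "2024-05-01,units_produced=40"
tag = sign_entry("staff_17", entry)
assert verify_entry("staff_17", entry, tag)            # authentic entry
assert not verify_entry("staff_17", entry + "0", tag)  # altered data fails
```

A coerced staff member can still sign incorrect figures, which is exactly the trust gap the surrounding paragraphs describe.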

Can the staff be trusted? Can their managers? Can the company policy? In such cases, new types of physical audit/verification jobs are being created. We may not be able to trust the company itself, but a third-party independent auditor may well be trusted. As we can see, such innovative technologies are changing the workplace and modes of operation, potentially removing some types of jobs, but also creating new ones.

It's not just the technology and jobs that are affected. The introduction of these emerging technologies raises ethical dilemmas (beyond those of automating jobs). One such cliché is: if an automated vehicle ends up in a situation where it has no way out but to hit one of two people, which one should it choose? When automated vehicles become ubiquitous, we may have more involvement in this decision than we would like. Will our automated vehicles learn some of their driving skills or patterns from how we sometimes drive in manual mode? Will they learn how other drivers in the region drive, and/or how pedestrians behave in such situations, so as to develop a strategy in case this scenario becomes a reality? Or will the car demand the driver's immediate attention to make a decision, and/or ask each passenger to vote on the strategy it should take? If so, in such a scenario, or in completely different ones, will we be equipped to make such hard decisions?

Indeed, talk and policy surrounding AI ethical frameworks have received extensive coverage over the past few years. But really, it's not about AI, and discussions should not be focused (only) on AI-ethical frameworks. It may be more about automation, which may or may not be AI-enabled. In fact, such ethical debate should not even be automation-focused or software-focused. Rather, we should focus on the core ethical issues that are independent of the technology.

Another cliché here is: should we allow AI to decide who is entitled to an insurance policy and/or bank account based on demographics and historical data for those with similar traits? Well, the fact that this decision may be made by AI-enabled software is irrelevant. The question that remains is whether demographics is an ethical inclusion/exclusion criterion. Would such a decision be any different if it were taken by a human? Of course not.

That being said, it is common practice that such decisions are based on the risk appetite and discretion of the institution itself, which decides whether to onboard or provide a service to a client. How is this any different? Therefore, it is important that as a society we focus less on the specific emerging technologies and more on the policies, ethics and regulation underpinning such decisions (unless, of course, the technology itself poses direct risks). Once policy, regulation and ethical guidelines are clear, it is important that they are followed, whether implemented manually or through technology.

Perhaps the only truly AI-specific ethical consideration we should be thinking about is how to handle sentient AI. You may worry about AI taking over the world; however, advancements towards such a reality have not been made. There are two generalisations of AI: Artificial General Intelligence (AGI), capable of doing 'anything' (the type of AI depicted in movies), and Artificial Narrow Intelligence (ANI), capable of doing or learning one task really well (the type of AI we can actually implement today). Whilst AGI does not seem anywhere close (though it could very well be that a single breakthrough brings it about), perhaps the one AI-specific ethical principle we should be promoting is this: if you manage to discover how to implement AGI, refrain from deploying it until society and the world have figured out how best to regulate it.

Quantum computing, on the other hand, once further developed, could pose cybersecurity risks. Our existing Internet infrastructure, and really all modern-day infrastructure built on top of it, relies on cryptography: a mechanism that ensures secure communication can take place and that secret data cannot be seen by prying eyes. This is achieved using algorithms designed so that breaking them is impossible, or rather infeasible: an attack would take billions to trillions of years to undertake.
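
A rough back-of-envelope calculation makes "infeasible" concrete. The figures below are illustrative assumptions (a trillion guesses per second is not a benchmark of any real machine), and the quadratic speedup shown for Grover's algorithm is only one quantum threat; Shor's algorithm attacks public-key schemes such as RSA far more drastically.

```python
# Back-of-envelope arithmetic: brute-forcing a 128-bit symmetric key
# classically, versus the quadratic speedup Grover's algorithm would
# offer a sufficiently large quantum computer.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

keyspace = 2 ** 128   # number of possible 128-bit keys
rate = 10 ** 12       # assumed: a trillion guesses per second

classical_years = keyspace / rate / SECONDS_PER_YEAR
grover_years = (keyspace ** 0.5) / rate / SECONDS_PER_YEAR  # ~sqrt(N) queries

print(f"classical brute force: ~{classical_years:.1e} years")
print(f"with Grover's speedup: ~{grover_years:.1e} years")
```

Under these assumptions the classical attack takes on the order of 10^19 years, while the square-root speedup collapses it to under a year, which is why key sizes (and, for public-key cryptography, the algorithms themselves) need rethinking in a post-quantum world.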

That, however, is only because our computers are only so fast. If tomorrow a quantum computer were made available with much (exponentially) higher speeds than what we have today, some aspects of our Internet infrastructure and systems may be susceptible to attacks that would take not billions to trillions of years, but potentially minutes to hours. Therefore, much like the treaties put in place for the development of nuclear weapons, it is important that as a society and as a world we put similar agreements in place, to ensure that when such computational power becomes available it will not be abused by any nation. Then again, should we not already have such cyber-warfare agreements in place?

Whether you will be developing, or interacting directly or indirectly with, such emerging technologies, or even if you are completely disconnected from them, it is clear that they are changing, and will continue to change, our society and world substantially. Therefore, it is pertinent that our educational system prepares new generations for this exciting new world: able not only to work within a specific discipline, but to appreciate the various disciplines and perspectives that will overlap in many ways, and to think critically from a sound, principled foundation that advances our species both technologically and ethically, striving towards the common good.

