AI expert: Education on AI ethics and regulation needed
A global expert on artificial intelligence and cybersecurity said people must be educated on the ethical use of technology if society is to counter the malicious threats posed by AI.
Dr. Mohamed El-Guindy, Cybersecurity Expert at the UN Office on Drugs and Crime and ICT Consultant at UNESCO, spoke at the webinar, “Malicious Use of Artificial Intelligence: Legal and Ethical Implications,” held last October, the first in a series on the topic, sponsored by the Asian Media Information and Communication Centre (AMIC).
AI, the science that enables machines to reason and perform tasks in complex environments without constant human supervision or explicit programming, is reshaping our lives and the global economy, he said.
These computer systems are fed with data, which are then stored in large databases controlled by the private sector, used to train AI, linked to identities, user IP addresses, and devices, and sold to data brokers, advertisers, governments, friends and foes.
Data is the “new oil” that fuels the digital economy, he said.
AI has been used for good—in education, business, medicine, communication, transportation, crime prevention, and almost all facets of modern life.
Yet AI has also raised new issues of privacy, security, social bias, social equality, and the integrity of mediated information.
“Privacy is an issue now in the cyber world. Crime is on the rise in the cyber world, because people are not aware” of what these systems can do to them.
“From the security perspective, these apps can steal your info, can be used in accessing sensitive data on your device,” he said.
Balancing pre-crime detection with the human rights of suspects is another issue, because the programmers of these computer systems can be biased against certain groups.
“We are not guaranteed that these AI are giving us the right direction…And this is important because we are dealing with human rights… the privacy of people, human dignity itself.”
He observed that AI will radically change media: movies using AI instead of actors, TV shows presented by AI robots instead of human anchors, and media being used to spread fake information.
He said young people are most vulnerable to fake information because of their heavy dependence on social media rather than traditional media for their information needs.
Technology is changing media theories, as machines can now deliver content depicting things that do not exist, he said.
So we need media scholars and students to study the effect of AI on the media industry, to train the people who design the algorithms, and to audit how those algorithms are designed and used, he added.
Colleges and universities need to teach the legal and ethical aspects of AI, and not just the technology itself, he said.
In addition, the regulation of AI remains a subject of intense debate: AI is being designed by the private sector, while governments lack the power to intervene because they do not sufficiently understand the technology.
Some governments favor regulation; the UK and the European Commission, for example, have enacted laws on AI. Others, like the United States, argue against it, saying regulation stifles innovation and creativity.
The European Commission has identified seven principles for ethical AI systems: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability.
The webinar was attended by 180 participants from nine countries.