Regulating Artificial Intelligence (AI)

Context: The CEO of OpenAI, during his visit to India, emphasized the need to regulate AI.

More on the News

  • The CEO, during his testimony before the US Senate, had urged that a new agency be formed to license AI companies. He also identified three specific areas of concern.
    • First, artificial intelligence (AI) could go wrong. AI tools such as ChatGPT, for instance, often provide inaccurate or wrong answers to queries.
    • Second, AI will replace some jobs, leading to layoffs in certain fields; that impact will need to be mitigated.
    • Third, AI could be used to spread targeted misinformation, especially during elections.

What is Artificial Intelligence (AI)?

  • Artificial Intelligence is a branch of Computer Science concerned with building systems that perform tasks commonly associated with human cognitive functions.
    • These functions include interpreting speech, playing games, identifying patterns, generating art, etc. (a minimal illustration of pattern identification follows the applications list below).
  • Applications of AI:
    • Healthcare: It helps provide personalized medicine and X-ray readings. It can better analyse reports and make more accurate diagnoses.
    • Transport: Driverless vehicles are being developed using AI.
    • Banking and finance: AI bots, digital payment advisers and biometric fraud detection are some of the applications in banking and finance.
    • Security: AI facial recognition tools may be used for surveillance and security purposes.
    • Education: AI can be used to develop educational content that conveys knowledge more effectively.
    • Robotics: AI can help robots learn processes and perform tasks with complete autonomy, without any human intervention.
    • Agriculture: AI can analyse crop health from images of crops, suggest appropriate amounts of fertilizer and water, and predict yields.
    • E-commerce and social media: AI can provide personalized content to users based on their previous usage patterns. Apart from targeted sales, it also helps in targeted advertising.
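
The pattern-identification capability referred to in the definition above, and applications such as reading medical images, can be illustrated with a short, self-contained sketch. The example below uses the scikit-learn library purely for illustration; the dataset, model and parameters are assumptions chosen for brevity, not a description of any system mentioned in this article.

# A classifier "learns the pattern" mapping example images to labels,
# then applies it to images it has never seen.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# 8x8 grayscale images of handwritten digits (0-9) and their labels.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# Fit a simple model on the training images.
model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)

# The learned pattern generalises to unseen images.
predictions = model.predict(X_test)
print(f"Accuracy on unseen digits: {accuracy_score(y_test, predictions):.2f}")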

Need for Regulations

  • Chances of misuse: AI has been used in weapons technology to guide weapons and drones. These non-conventional weapons have the potential to cause chaos.
    • AI can be used to develop unique chemical compounds and microbes, which may be used in chemical and biological warfare.
  • Accessibility: Currently, access to AI tools is largely unrestricted: anyone, from a student to a terrorist, can use them. This increases the chances of misuse.
  • Misinformation: AI models can imitate real people and produce content that is difficult to distinguish from the truth. There have been instances where speeches were tweaked and images morphed to create unrest.
  • Job losses: AI may replace some human jobs in the future. Adopting AI without drastic effects on employment is the need of the hour.
  • Ethical issues: Currently, AI-powered weapons and vehicles retain some degree of human control. This could change in the future, when decision-making may rest entirely with machines.

Efforts to Regulate AI

  • The US: The US has launched an initiative to promote international cooperation on the responsible use of AI and autonomous weapons by militaries.
    • Currently, the US has no federal legislation governing the use of AI, nor is there any substantial state legislation in force to regulate it.
  • Canada: Canada introduced the Artificial Intelligence and Data Act (AIDA) in 2022 to regulate companies that use AI, with a modified risk-based approach.
  • India: India currently has no specific regulatory framework for AI systems. However, NITI Aayog has published some guidelines that indicate the government’s intention to move forward with AI regulation.
  • China: The Chinese State Council released the “Next Generation Artificial Intelligence Development Plan” in 2017. In 2021, China published ethical guidelines for dealing with AI.
  • EU: The European Commission proposed draft rules for an AI Act in 2021.
    • The draft Artificial Intelligence Act (AI Act) plans to classify AI tools based on their perceived level of risk, from low to unacceptable, as the sketch below illustrates.
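
As a rough illustration of how such a risk-based classification might be expressed in practice, the sketch below defines hypothetical risk tiers and maps example use-cases to obligations. The tier names, use-cases and consequences are illustrative assumptions, not the legal text of the draft Act.

# Hypothetical risk tiers and example use-cases; not the Act's legal definitions.
from enum import Enum

class RiskTier(Enum):
    MINIMAL = 1       # e.g. spam filters
    LIMITED = 2       # e.g. chatbots
    HIGH = 3          # e.g. recruitment screening
    UNACCEPTABLE = 4  # e.g. social scoring

# Illustrative mapping from AI use-cases to risk tiers.
USE_CASE_TIERS = {
    "spam_filter": RiskTier.MINIMAL,
    "customer_chatbot": RiskTier.LIMITED,
    "cv_screening": RiskTier.HIGH,
    "social_scoring": RiskTier.UNACCEPTABLE,
}

def obligations(use_case: str) -> str:
    """Return the illustrative regulatory consequence for a use-case."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)  # unknown: treat cautiously
    return {
        RiskTier.MINIMAL: "no specific obligations",
        RiskTier.LIMITED: "transparency obligations (disclose that AI is used)",
        RiskTier.HIGH: "risk management, audits and human oversight required",
        RiskTier.UNACCEPTABLE: "deployment prohibited",
    }[tier]

for case in USE_CASE_TIERS:
    print(f"{case}: {obligations(case)}")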

Way forward

  • Regulation has serious implications for constitutional rights such as privacy, equality, liberty and livelihood. It can set off a debate between state intrusion and the privacy claims of the individual.
  • A regulation that enables AI to be used in ways that benefit society, while preventing its misuse, would be the best way forward for this dual-use technology.
  • In this regard, producer responsibility will be central to regulating AI: companies must take active measures to prevent misuse of their products.
  • Regular audits of AI systems must be conducted to ensure that they remain aligned with ethical principles and values, as sketched below.
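
One way such audits can be operationalised is through automated checks on a system's outputs. The sketch below is a minimal illustration under assumed data: it computes approval rates per group from hypothetical AI-made loan decisions and flags any group whose rate falls below four-fifths of the best rate, a common rule of thumb rather than a prescribed legal standard.

# Minimal group-disparity audit over hypothetical decisions.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def audit(decisions, min_ratio=0.8):
    """Flag groups whose approval rate falls below min_ratio of the best rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r / best < min_ratio for g, r in rates.items()}

# Hypothetical loan-approval decisions produced by an AI system.
sample = (
    [("group_a", True)] * 80 + [("group_a", False)] * 20
    + [("group_b", True)] * 50 + [("group_b", False)] * 50
)
print(audit(sample))  # {'group_a': False, 'group_b': True} -> group_b flagged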

Responsible AI in the Military Domain (REAIM)

  • The first REAIM summit was held in The Hague, the Netherlands. It brought together governments, corporations, academia, startups and civil society to raise awareness, discuss issues, and agree on common principles regarding the use of AI in armed conflicts.
  • Objectives:
    • To put the topic of ‘responsible AI in the military domain’ higher on the political agenda;
    • To bring together a wide group of stakeholders to contribute to concrete next steps;
    • To enhance knowledge by sharing experiences, best practices and solutions.
