
AI advancements and human dominance in the workforce

It is common for people to regard Artificial Intelligence (AI) with apprehension. Yet in everyday marketing, the accessibility and ease of use of AI are consistent taglines, with a strong emphasis on the convenience and efficiency it can provide as a time saver. Simple examples include virtual assistants like Apple’s Siri and Amazon’s Alexa.

In the tech industry, the term AI has become one of the growing buzzwords used to market robust tools that enhance workflow functionality. There is a high emphasis on scalability (the volume of tasks an AI can perform) and detailed specifications on its compatibility with existing programmes.

For both groups, the ability of AI to perform an increasing number of tasks is a major highlight, as it provides increased accessibility to many types of technology. Both the public and developers are attracted to the latest advancements: the public is interested in how AI can bring futuristic features into their daily lives, while developers are keen to work with state-of-the-art tools and frameworks.


For illustrators, the term AI has recently taken on a highly negative connotation due to the influx of image generation software that can create paintings through simple prompts using diffusion models. This software can learn to mimic and recreate drawings by being repeatedly exposed to millions of illustrations. Because the images used to train these models are often sourced without consent, various lawsuits have been launched against these AI companies for copyright infringement.

In January 2023, Getty Images, a popular website for stock images, filed a lawsuit against Stability AI for training its image generator on stock images without procuring a licence. According to Getty, the unauthorised action violated the licensing agreements that govern the use of its extensive library of photographs and visual content. Getty Images argued that the alleged misuse of its images not only harmed its own business model but also devalued the work of photographers and content creators who depend on Getty for distribution and licensing. While the company supports technological advancements, it insisted that these innovations must respect existing intellectual property laws.

In September 2023, OpenAI, the maker of ChatGPT, found itself on the receiving end of a lawsuit filed by the Authors Guild on behalf of authors including acclaimed Game of Thrones writer George R. R. Martin, over allegations that their books were used to train its chatbot. The Authors Guild alleged that the lack of transparency in the sourcing of training data was a major concern and showed a lack of regard for the intellectual property rights of authors.

For many people, however, Deepfakes are the most immediately concerning development in the AI landscape. These synthetic media are used to spread false information, exacerbating the already critical problem of fake news. AI-generated videos can depict people saying or doing things they never did, making it challenging to distinguish truth from fiction. This undermines public trust in media and institutions, leading to widespread confusion and scepticism. Individuals can be targeted with Deepfakes that place them in compromising or defamatory situations, causing severe personal and professional damage.

The intersection of Deepfakes with social media has added another dimension to contend with: the rapid spread of misinformation. Deepfakes can reinforce false narratives and conspiracy theories, making it difficult for users to discern factual information from fabrications.

These legal battles highlight how divisive the rapid development of generative AI has been across different industries, as these programmes often narrow the range of roles that human creators can fill. There is a growing sentiment that AI companies are attempting to push humans out of key industries. Concerns range from data collection leading to loss of privacy and increased surveillance to a widespread belief that these companies are using AI to collect and analyse personal data without individuals’ explicit consent. At the core of this issue is a feeling of lost control: asking for forgiveness is treated as preferable to asking for permission, and this approach has largely shaped how AI companies develop their programmes.

One of the most effective ways individuals can regain control is by improving their digital literacy. Understanding how AI works, including its capabilities and limitations, empowers people to make informed decisions about their interactions with technology. Regularly reviewing and managing privacy settings on social media and other online platforms can limit the amount of personal information available to AI systems. Supporting policies and regulations that require companies to disclose AI algorithms and data usage practices can promote accountability and protect users from misuse.

While there is not much that can be done to prevent the continuous encroachment of AI into every aspect of work, it is crucial to highlight the areas in which humans can still maintain dominance in the workforce. AI is still in its infancy, and while recent breakthroughs have improved its functionality, AI systems are primarily built for generalisation. For instance, ChatGPT still faces challenges in identifying context clues and may occasionally provide incorrect or fabricated answers to questions. Therefore, while AI technology offers numerous benefits, it is essential for individuals and society to take proactive steps to maintain control and ensure these systems serve human interests.

 

Ibrahim wrote from Abuja via [email protected]

 
