Will the advancement of AI lower human economic value contribution, and therefore deemphasise human rights, in the future?
Let's explore that together...
Experts and n00bs alike are raising concerns about artificial intelligence (AI) at the moment. I was going to hyperlink that, but a quick Google Search will show you all that’s out there.
But I received an email from The Batch this week that raised a new concern I hadn’t thought of before, namely that the advancement of AI could lower the economic value humans contribute in the future and, consequently, disincentivise the protection of human rights:
This… freaks me out a little.
The problem statement
I’ve long understood that menial tasks are a thing of the past with AI development. That’s a no-brainer: AI can do basic things faster and better than the average person. Everyone now has the ability to be generically good, relatively cheaply.
But the flip side of that coin is what the above quote speaks to: if everyone can be generically good, what happens to the large groups of disempowered people who still sit a few steps behind everyone else in technological advancement?
The experiment
I decided to plug all of this into ChatGPT and see if I could come up with some ways we can start working against this dystopian future today.
Here are my findings:
The method
Here’s the prompt I used in ChatGPT 4:
Act like an AI strategist, anthropologist, and futurist. Your task is to read the following article, and suggest some ways in which humans can counter the prediction that human economic value will wane. First, think about the impacts of what a reduction of human economic value contribution might have. Then, think through the main areas in which this will happen. Finally, list 5 highly impactful ways that humans can start today to counter that in the future.
[inserted article from TheBatch.ai]
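For anyone who wants to reproduce the experiment outside the ChatGPT interface, here’s a minimal sketch using the OpenAI Python client (v1.x). The model name, the local file holding the article text, and the single-message structure are my assumptions for illustration; the original run was done in the ChatGPT UI.

```python
# Minimal sketch: run the same prompt against the OpenAI API instead of the
# ChatGPT interface. Assumes the `openai` package (v1.x) is installed,
# OPENAI_API_KEY is set in the environment, and the article from The Batch
# has been saved to a local text file -- the file name is a placeholder.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

PROMPT = (
    "Act like an AI strategist, anthropologist, and futurist. Your task is to read "
    "the following article, and suggest some ways in which humans can counter the "
    "prediction that human economic value will wane. First, think about the impacts "
    "of what a reduction of human economic value contribution might have. Then, "
    "think through the main areas in which this will happen. Finally, list 5 highly "
    "impactful ways that humans can start today to counter that in the future."
)

with open("the_batch_article.txt", encoding="utf-8") as f:  # placeholder path
    article = f.read()

response = client.chat.completions.create(
    model="gpt-4",  # rough stand-in for running the prompt in ChatGPT 4
    messages=[{"role": "user", "content": f"{PROMPT}\n\n{article}"}],
)

print(response.choices[0].message.content)
```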
The results
Risk: Devaluation of Human Work
Countermeasure: Lifelong Learning
Actionable steps: Governments and organisations can invest in making education more accessible to people of all ages, including online learning platforms, free or low-cost courses, and educational grants. They could also promote learning cultures within organisations, encouraging employees to continuously learn and adapt.
It's also important to emphasise the development of "soft" skills such as creativity, critical thinking, and emotional intelligence, which are areas where humans excel over AI.
Critique of countermeasure: While encouraging continuous education and upskilling is a solid strategy, it assumes that all individuals have equal access to resources for learning and the capacity to continually adapt. This might not be true for everyone, particularly older workers, people with disabilities, or those in low-income situations. It also puts the onus on individuals to respond to a systemic issue.
Furthermore, there's a risk that AI might eventually outpace human ability to learn and adapt, especially in areas like creativity that may seem unique to humans today but end up being something AI learns to do 100x better than humans.
Risk: Unemployment and Inequality
Countermeasure: Promote Entrepreneurship
Actionable steps: Governments could provide more support to start-ups, including tax breaks, grants, and resources for networking and mentorship. Educational institutions could incorporate entrepreneurship training into their curricula, helping to develop future entrepreneurs from a young age.
In addition, fostering an entrepreneurial mindset, which values creativity, innovation, and problem-solving, can be beneficial even for those who do not start their own businesses.
Critique of countermeasure: Although entrepreneurship can create new jobs and opportunities, it’s not a guaranteed path to success and can carry significant financial risk. Not everyone has the skills, resources, or inclination to be an entrepreneur.
Over-reliance on entrepreneurship could also lead to an unstable economy with high levels of inequality between successful and unsuccessful ventures.
Risk: Poverty and Inequality
Countermeasure: Implement Universal Basic Income (UBI)
Actionable steps: Some governments could start with pilot programs to test the effectiveness and impacts of UBI in a controlled setting before nationwide implementation. Another approach might be to implement a negative income tax, where people earning below a certain threshold receive supplemental pay from the government (a small worked sketch of this follows the critique below).
Critique of countermeasure: The idea of UBI is still controversial and unproven at large scales. Critics argue that it could create disincentives to work, potentially leading to economic stagnation. Additionally, it might be challenging to fund UBI without significant tax increases or spending cuts in other areas.
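To make the negative income tax mechanism mentioned above concrete, here is a minimal illustrative sketch in Python. The threshold and phase-out rate are made-up numbers chosen purely for illustration, not figures from the article or any actual policy.

```python
# Illustrative negative income tax: anyone earning below a threshold
# receives a top-up equal to a fraction of the shortfall.
# THRESHOLD and RATE are made-up example values, not a policy proposal.
THRESHOLD = 30_000  # annual income below which support kicks in
RATE = 0.5          # fraction of the shortfall paid out as a supplement


def supplement(income: float) -> float:
    """Return the supplemental payment owed for a given annual income."""
    shortfall = max(THRESHOLD - income, 0)
    return RATE * shortfall


for income in (0, 15_000, 30_000, 45_000):
    print(f"income {income:>6,} -> supplement {supplement(income):>9,.0f}")
```

Under these example numbers, someone with no income receives 15,000 and the supplement tapers to zero at the 30,000 threshold, which is the kind of scheme the critique above is questioning at national scale.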
Risk: Concentration of Power and Inequality
Countermeasure: Regulate AI
Actionable steps: Governments can start by creating a regulatory framework that prevents the concentration of AI power. This could include measures to promote competition, prevent monopolies, and protect data privacy. International cooperation is also necessary to ensure global standards and prevent the exploitation of weaker regulatory environments.
Critique of countermeasure: Regulation is necessary to prevent abuse of power, but over-regulation can stifle innovation, and striking that balance is difficult. Moreover, with the current pace of technological change, regulations often lag, which can lead to a reactive rather than proactive regulatory environment. There’s also the difficulty of securing international cooperation on AI regulation.
Risk: Replacement of Human Labour
Countermeasure: Human-centric AI Development
Actionable steps: Businesses and AI researchers can emphasise the development of AI systems that augment human abilities rather than replace them. For instance, AI could be used to automate mundane aspects of a job while leaving the decision-making and creative aspects to humans. Industry leaders and policymakers can also promote ethical AI development guidelines that prioritise human value and job creation.
Critique of countermeasure: While this is an admirable goal, determining what constitutes "augmenting human abilities" versus "replacing them" can be subjective and complex. Moreover, businesses motivated by profit might still opt for AI solutions that replace human labour if they are more cost-effective, regardless of the broader societal implications.