Is it Ethical to Use GPT Tools for Nonprofits?


We are steered by our media consumption, built-in biases, filter bubbles, and communities when it comes to adopting new technology and ideas. Trying to understand and acknowledge this bias in all sources of information is an important practice, especially as it applies to the adoption of new technology built on vast amounts of largely unknown data (LLMs) that now also threatens the work of many people, with and without power. This article is built around the eight ethical GPT objections below.

The following GPT-trained bots will try to convince you either to use GPT tools or not - your choice :)


Convince me NOT to use GPT tools for my nonprofit
Convince me that I should use GPT tools for my nonprofit

Objection 1: Bias In, Bias Out

One of the most significant concerns with GPT models is the potential for them to reflect and perpetuate biases present in the data used to train them. For example, if the training data contains a disproportionate amount of text from a particular demographic group (read: cis white men), the model may struggle to understand or generate text related to other groups. This can lead to biased outputs and reinforce existing inequalities.

Rebuttal: While it is true that GPT models can reflect and perpetuate biases present in the data used to train them, there are approaches that can mitigate this issue. For example, researchers can use data augmentation techniques to balance the training data and ensure that the model is exposed to a diverse range of perspectives. There are also efforts underway to develop methods for identifying and mitigating bias in GPT models, including techniques such as debiasing and adversarial training, which aim to reduce the impact of biased data on the model's output. In the best cases, this kind of attention to removing bias may actually outperform human judgment, but automated vetting tools like resume screeners still carry real risk and should be implemented carefully.
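To make the data-balancing idea concrete, here is a toy Python sketch of one augmentation approach: oversampling underrepresented groups until group counts match. The "group" field and tiny corpus are hypothetical, and real LLM training pipelines are far more sophisticated; this only illustrates the principle.

```python
# A toy sketch of rebalancing training data by oversampling smaller groups.
# The "group" field and the tiny corpus below are hypothetical.
import random
from collections import defaultdict

def oversample_balance(examples, group_key):
    """Duplicate examples from smaller groups until every group matches the largest."""
    groups = defaultdict(list)
    for ex in examples:
        groups[ex[group_key]].append(ex)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Randomly duplicate members to close the gap with the largest group.
        balanced.extend(random.choices(members, k=target - len(members)))
    random.shuffle(balanced)
    return balanced

corpus = [
    {"text": "example a1", "group": "A"},
    {"text": "example a2", "group": "A"},
    {"text": "example a3", "group": "A"},
    {"text": "example b1", "group": "B"},
]
print(len(oversample_balance(corpus, "group")))  # 6 - group B oversampled from 1 to 3
```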

Objection 2: Lack of Transparency

Another objection to GPT models is the lack of transparency in how they arrive at their outputs. Because these models can be complex and difficult to interpret, it can be challenging to understand how they generate their text. We need a better understanding of the "black box": an explanation of how a GPT arrived at an answer, especially if that answer is deemed harmful.

Rebuttal: While it is true that GPT models can be complex and difficult to interpret, efforts are being made to improve transparency and interpretability. For example, researchers have developed methods for visualizing the attention patterns of GPT models, which can provide insight into how the model is processing and generating text. Additionally, there are ongoing efforts to develop more transparent and interpretable AI models, including GPT models.
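As a rough illustration of what attention visualization looks like in practice, here is a minimal sketch using the open GPT-2 model via the Hugging Face transformers library. The model and library are our choices for illustration, not a claim about how any particular vendor inspects its systems.

```python
# A minimal sketch of attention visualization: inspect which tokens a
# transformer attends to. GPT-2 is used here because its weights are open.
import torch
from transformers import GPT2Model, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2", output_attentions=True)
model.eval()

text = "Nonprofits can use AI responsibly."
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    out = model(**enc)

# out.attentions holds one tensor per layer, shape (batch, heads, seq, seq).
# Average the last layer's heads to see where each token's attention lands.
attn = out.attentions[-1].mean(dim=1)[0]
tokens = tokenizer.convert_ids_to_tokens(enc.input_ids[0])
for i, tok in enumerate(tokens):
    j = int(attn[i].argmax())
    print(f"{tok:>12} attends most to {tokens[j]}")
```

Researchers build far richer tooling on top of exactly this kind of signal, but even this simple view starts to open the "black box."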

Objection 3: Misuse

There is a risk that GPT models can be misused or abused for malicious purposes, such as generating fake news or impersonating individuals online. The power of bad actors to 100x their negative impact on society should make us stop the development of these tools.

Rebuttal: While it is true that GPT models can be misused or abused, this is not a fundamental flaw with the technology itself. Rather, it is a concern that applies to any technology that can be used for both good and bad purposes. There are ongoing efforts to develop approaches for identifying and mitigating the misuse of GPT models, including techniques for detecting fake news and identifying instances of impersonation. If anything, this should increase the imperative for social impact organizations to learn how to fight fire with fire.

Objection 4: Complete Human Dependence

Some critics argue that the use of GPT models can lead to a dependence on technology and a lack of human creativity and innovation. This may be especially true for a rising generation of writers in school or early career stages.

Rebuttal: While it is true that there is a risk of dependence on technology, this is not a fundamental flaw with GPT models specifically. Rather, it is a concern that applies to any technology designed to automate or augment human tasks. Additionally, there is evidence to suggest that the use of GPT models can actually stimulate human creativity and innovation by providing new tools and approaches for generating and exploring ideas. It should also be noted that similar critiques were leveled at the calculator when it was introduced.

Objection 5: Environmental Impact

The energy consumption of GPT models can vary depending on a number of factors, such as the size of the model, the hardware used to train and run the model, and the specific task the model is being used for. However, it is generally accepted that GPT models require a significant amount of computing power and energy.

For example, one widely cited estimate puts the training run for OpenAI's GPT-3 model, which has 175 billion parameters, at roughly 1,287 megawatt-hours of electricity - on the order of what more than a hundred average US homes use in a year. Training a model like GPT-3 can also produce a large amount of carbon emissions (roughly 550 tonnes of CO2e by the same estimate), which contributes to climate change.
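For readers who want to sanity-check claims like these, here is a back-of-envelope Python calculation using the widely cited Patterson et al. (2021) estimate for GPT-3's training run. The grid-intensity and household-usage constants are rough US averages we are assuming for illustration, not authoritative figures.

```python
# Back-of-envelope carbon math for a large training run.
# All constants are rough, illustrative assumptions.
TRAINING_ENERGY_MWH = 1287    # widely cited estimate for GPT-3's training run
GRID_KG_CO2_PER_MWH = 429     # approximate US average grid intensity
US_HOME_MWH_PER_YEAR = 10.6   # approximate average US household electricity use

emissions_tonnes = TRAINING_ENERGY_MWH * GRID_KG_CO2_PER_MWH / 1000
home_years = TRAINING_ENERGY_MWH / US_HOME_MWH_PER_YEAR

print(f"~{emissions_tonnes:.0f} tonnes CO2e")           # ~552 tonnes
print(f"~{home_years:.0f} home-years of electricity")   # ~121 homes for a year
```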

Rebuttal: While it is true that the energy consumption and carbon emissions associated with GPT models are a concern, efforts are being made to develop more energy-efficient approaches to training and running these models. For example, researchers are exploring techniques like model distillation, which can reduce the energy consumption of GPT models by compressing them into smaller, more efficient models. Additionally, there are ongoing efforts to develop more sustainable computing technologies, including renewable energy sources and more efficient hardware.
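To give a flavor of the distillation technique mentioned above, here is a minimal sketch of the classic knowledge-distillation loss (Hinton et al., 2015), in which a small "student" model is trained to match a large "teacher" model's softened output distribution. The tensor shapes and temperature are illustrative assumptions, not any lab's actual training recipe.

```python
# A minimal sketch of the knowledge-distillation loss: the student learns to
# match the teacher's softened probability distribution over outputs.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    # Scale by t^2 to keep gradient magnitudes comparable (Hinton et al., 2015).
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * t * t

# Toy example: a batch of 4 examples over a 10-token vocabulary.
teacher = torch.randn(4, 10)
student = torch.randn(4, 10, requires_grad=True)
loss = distillation_loss(student, teacher)
loss.backward()
print(float(loss))
```

The payoff is that the distilled student is far smaller and cheaper to run, which is where the energy savings come from at inference time.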

Objection 6: Exploitative Labor Practices

This objection centers on exploitative labor practices in outsourced content moderation and data cleanup. In January 2023, Time Magazine and other outlets reported ("OpenAI Used Kenyan Workers on Less Than $2 Per Hour") that OpenAI used the outsourcing firm Sama, a B Corp that hires workers in places like Kenya. The net pay for removing violent and disturbing content from the training set was reported to be less than $2 per hour.

Rebuttal: As a B Corp, Sama claims to have helped lift over 50,000 people out of poverty in places like Nairobi, where the reported average wage is $1.29 per hour. Beyond this, other companies like Facebook have relied on the services Sama provides to keep user content safe. In fact, Sama stated it would cancel its Facebook (Meta) contract amid allegations of union-busting tactics.

Objection 7: Generic Outputs

Initially, what appears to be a magical output of original content is actually just a fancy autocomplete. In a YouTube critique of AI, comedian Adam Conover makes the point that claims about what this AI can do are driven by companies' need to pump up their share price rather than by reality. The risk here is that organizations assume they are getting something unique, but when they post it publicly, their content ends up looking generic and very similar to everyone else's.

Tools like GPTZero.me and CauseWriter Detect AI can quickly reveal such content using perplexity scores.
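For the curious, here is a minimal sketch of how a perplexity-based detector works, using the open GPT-2 model via Hugging Face. The model choice and threshold are illustrative assumptions, not the actual internals of GPTZero or any other tool. The intuition: lower perplexity means the text is more predictable to a language model, which detectors treat as a weak signal that it may be machine-generated.

```python
# A minimal sketch of perplexity-based AI-text detection.
# Model and threshold are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity = exp(average negative log-likelihood per token)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the mean cross-entropy loss.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

AI_LIKELY_THRESHOLD = 30.0  # illustrative, not a calibrated value

score = perplexity("The quick brown fox jumps over the lazy dog.")
verdict = "possibly AI" if score < AI_LIKELY_THRESHOLD else "likely human"
print(f"perplexity={score:.1f} -> {verdict}")
```

Real detectors combine this with other signals (such as "burstiness," the variation in perplexity across sentences), and all of them produce false positives, so treat scores as hints rather than proof.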

Rebuttal: Whole Whale has framed this as the ‘Grey Jacket Problem’ and we think it is real. There is a level of learning that staff and organizations need to invest in before just using off-the-shelf AI tools. An AI Prompt architect/engineer mindset will be needed for organizations to build out unique outputs.

Objection 8: Stolen Data

There is a real concern that the large language models behind major tools from OpenAI, Google, Bing, Midjourney, Stable Diffusion, and surely others to come were partially built on "stolen" data. An April 2023 article by The Washington Post revealed that Google is training on web data it doesn't have permission to use. What does it mean that companies can now capture and monetize technology built on the work of millions of uncompensated creators, and provide tools that can mimic that work?

Should your organization use a tool that was built in this way?

Rebuttal: Follow the prevailing laws of your country while keeping your eyes open. This issue is far from settled, and countries like Italy took steps to block the use of GPT tools in Q1 of 2023. The rules are still being written, and unfortunately this kind of practice falls into "hold-your-nose" territory for users when it comes to large tech companies making people the product. It is heartening to see some groups like Stability AI (maker of Stable Diffusion) acknowledge the issue and allow artists to opt out of the training database.

In these types of decisions, consider whether your own position of power affords you or your organization the luxury of declining a tool with this kind of power to increase capacity.

Selections of AI Coverage

Here is a selection of media narratives on GPT tools to inform your thinking about the issues surrounding these generative AIs.

The AI Dilemma

Narrative Take-down of AI

Sam Altman on GPT

Last Week Tonight on AI

AI Tools & Resources

PetitionGPT

The following is just one of the tools that subscribing members of CauseWriter.ai have greater access to. Join the waiting list.

Bite-sized AI learning via Email

Learn AI prompting and fundamentals in 3 weeks.
