Ethical Aspects of AI in the Legal Sector

Zahed Ashkara · AI & Legal Expert
8 min read · AI & Ethics · February 28, 2025

Artificial intelligence (AI) is increasingly finding its way into the legal sector. From searching case law to drafting contracts, generative AI promises to help lawyers and other knowledge workers work more efficiently. Research shows that over 60% of lawyers have already used AI applications in their work [1]. At the same time, this rise brings new ethical questions. In this blog, we discuss four core topics: privacy, intellectual property (IP), confidentiality, and hallucinations (fabricated output) by AI language models. For each theme, we highlight the risks and considerations without adopting a moralizing tone. The goal is a nuanced picture that aligns with the daily practice of lawyers and other knowledge workers.

Privacy – AI and Legal Data

Privacy is an essential consideration when AI is deployed on legal material. Files often contain sensitive personal data, from names and addresses to medical or financial information. As soon as this data is processed by AI, the question arises of how that information is protected and used.

Generative AI models like ChatGPT are trained on enormous amounts of text, often sourced from the internet. As a result, they can reproduce surprisingly large amounts of information, much like a search engine. Professor James Grimmelmann compares such AI to a very good search engine: models like ChatGPT are trained on almost the entire web and pose privacy dangers similar to those of Google Search [2].

This means that a model can produce personal information about people that exists somewhere online, without those people having any control over it. For lawyers, this means that AI may surface data about clients or opposing parties that can be found in public sources. This raises concerns under privacy legislation such as the GDPR. In Europe, it has been suggested that the right to be forgotten and other GDPR rules should also apply to generative AI [2]. Technically, however, this is difficult to enforce: how does a model "forget" specific personal data that was in its training data? Currently, there is no conclusive solution for this [2].

Privacy and Prompt Storage

Privacy also plays a role at another level: if a lawyer enters confidential information into an AI tool, what happens to it? Many AI services store entered prompts and may use them to improve the model [3]. OpenAI itself advises users not to share sensitive details in ChatGPT prompts [3]. The risk is that otherwise confidential data could surface in answers to other users, a potential data breach. This has not remained theoretical: in 2023, the Italian supervisory authority and companies such as Samsung had to intervene when privacy-sensitive data threatened to become public via ChatGPT [3].

Practical Privacy Measures

| Consideration | Recommendation |
| --- | --- |
| Data Minimization | Anonymize data where possible |
| Tool Selection | Choose business AI services that explicitly do not use user data for training [4] |
| Contractual Protection | Enter into a Data Processing Agreement (DPA) with the AI provider [4] |
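
To make the first row concrete, below is a minimal sketch of what data minimization can look like before text is sent to an external AI service. The regex patterns and placeholder labels are illustrative assumptions only; a real matter calls for a proper anonymization tool and human review, since a few regexes will miss identifiers such as personal names.

```python
import re

# Illustrative patterns only; they deliberately cover just a few
# obvious identifier types and will miss e.g. personal names.
PATTERNS = {
    "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "[IBAN]": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
    "[PHONE]": re.compile(r"\+?\d[\d \-]{8,}\d"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with neutral placeholders
    before the text leaves your own environment."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

prompt = ("Client J. Jansen (j.jansen@example.com, +31 6 12345678) "
          "disputes a debit from NL91ABNA0417164300.")
print(redact(prompt))
# Client J. Jansen ([EMAIL], [PHONE]) disputes a debit from [IBAN].
# Note: the name survives, which is exactly why regexes alone are not enough.
```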

Intellectual Property – Who Owns AI-Generated Content?

Intellectual property (IP) is a complex ethical theme around AI in legal practice. Two questions are relevant: (1) what is the copyright status of the data with which AI is trained, and (2) who is the owner or author of texts and documents generated by AI?

Training Data and Copyright

Generative AI is fed with existing texts, such as books, articles, case law, and websites, many of which are under copyright. During training, protected works are copied and analyzed. This has led to lawsuits from authors and content creators who believe their material has been used unlawfully.

Professor Grimmelmann emphasizes that copies of works are made at every level of the AI "supply chain," from data collection to output, making each stage subject to copyright [2]. There are now multiple lawsuits against AI companies over the use of protected material in training data [2]. So far, the first rulings seem relatively favorable for AI makers: judges look primarily at whether specific AI output constitutes infringement, rather than treating the entire training process as infringing [2]. But this area of law is still developing.

Ownership of AI Output

Equally important is the question of who holds the rights to texts written by AI. Suppose a lawyer has an AI formulate a contract clause: who owns that text? Traditionally, the person who drafts a text holds the copyright, but with AI, the human creativity is missing. In the US, it was recently confirmed that fully AI-generated works are not eligible for copyright [1]. Only works with human authorship are protectable. In Europe, there is no explicit legislation on this yet, but there too the principle is that a work must be a "personal creation" to attract copyright.

AI service providers try to provide clarity through their terms of service. OpenAI, for example, states that the user owns the output the model generates [4]. In other words, a lawyer retains the rights to the text that ChatGPT produces for them. However, this contractual arrangement does not change copyright law itself: if the output largely consists of existing texts, rights holders can still assert claims. OpenAI itself warns that answers are not unique and may partly contain protected material [4].

A lawyer must therefore be careful not to adopt entire chunks of generated text unchanged in official documents, especially if they appear to be verbatim reproductions of existing works. It is important to always edit and carefully check AI output, to ensure that it is sufficiently original and does not infringe on someone else's copyright.

Confidentiality – Using AI Without Leaking Secrets

Lawyers have a strict duty of confidentiality. Sensitive client information must not fall into the wrong hands. The use of AI raises the question: does the information entered into a tool stay within the firm's walls?

When you enter a prompt into an AI service, that prompt is often stored on the provider's servers. This can conflict with legal professional privilege if third parties can access that data. An incident at Samsung illustrated the danger: employees pasted source code and meeting minutes into ChatGPT, and this confidential information ended up on external servers [3]. Lawyers also discovered that Microsoft's Azure OpenAI service keeps certain prompts for 30 days and has them reviewed by employees if they contain sensitive content [5]. Such a "backdoor" for content control is understandable from a moderation perspective, but it poses a potential leak for legal confidentiality.

Practical Guidelines for Safe AI Use

| Guideline | Example | Explanation |
| --- | --- | --- |
| Do not enter recognizable client information | Keep prompts general | Use "Analyze this anonymized contract text" instead of "Analyze the contract between X Corp and Y B.V." (see the sketch below) |
| Use secure AI environments | Check the terms | Choose enterprise versions or specialized legal AI tools with good data isolation |
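
As a minimal sketch of the first guideline, a prompt can be pseudonymized locally before it is sent, with the real names restored only in the received answer. The party names and the ask_model() call below are hypothetical placeholders, not the interface of any particular AI service.

```python
# Hypothetical alias table: real party names never leave your machine.
ALIASES = {
    "X Corp": "Party A",
    "Y B.V.": "Party B",
}

def pseudonymize(text: str) -> str:
    """Swap real party names for neutral aliases before sending."""
    for real, alias in ALIASES.items():
        text = text.replace(real, alias)
    return text

def restore(text: str) -> str:
    """Put the real names back into the model's answer, locally."""
    for real, alias in ALIASES.items():
        text = text.replace(alias, real)
    return text

prompt = pseudonymize("Analyze the contract between X Corp and Y B.V.")
print(prompt)  # Analyze the contract between Party A and Party B
# answer = ask_model(prompt)   # hypothetical call to your firm's AI service
# print(restore(answer))       # real names reappear only on your own system
```

This round trip keeps identifying details out of the provider's logs, while the lawyer still reads a fully named answer.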

In short, AI can be used even within the scope of professional privilege, provided the lawyer takes the necessary precautions. In practice, this often means investing in a business or internal AI solution instead of a free public chatbot, a necessary step to maintain client trust.

Hallucinations – AI and Fabricated Legal Information

A known risk of advanced language models is hallucination: the model generates plausible-sounding but incorrect or completely fabricated answers. In legal practice, this can be disastrous.

A now-famous case involved lawyers who were reprimanded and fined by the court for citing fake case law that had been invented by ChatGPT [1]. Such examples illustrate how dangerous blind trust in AI can be in law.

Hallucinations occur because an AI has no concept of "truth": the model predicts the most likely continuation of a text based on its training data, without fact-checking [1]. A language model can therefore fabricate a non-existent Supreme Court ruling that is grammatically and stylistically perfect, simply because it fits the patterns the model has learned. The model is not being deliberately deceptive; it simply has no mechanism to distinguish fact from fiction.

For lawyers, this means that AI answers must always be checked. The American Bar Association has warned lawyers that they remain responsible for the accuracy of their work, even if an error comes from an AI tool [1]. In other words, using AI does not absolve a lawyer from the duty to verify every reference and every fact.

Developments in Legal AI

| Development | Details |
| --- | --- |
| Specialized legal AI tools | Westlaw and LexisNexis build AI on their own databases, so answers are linked to legal sources |
| Error reduction | Fewer errors than general models, but 17-34% of answers still contain incorrect information [6] |

Practical Advice

Never blindly trust output from an AI in a legal context. Use it as a tool to save time, but check the facts. Is a judgment or legal article mentioned? Look it up in the official source; a minimal sketch of such a check follows below. Keep thinking critically: if an answer seems illogical or too good to be true, ask for more details or check with a colleague. Finally, make sure you and your team understand how AI works, and invest in AI literacy. Many incidents arise from a lack of knowledge on the user's part, not purely from the AI itself [1].
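
As an illustration of checking every reference, the sketch below extracts ECLI case-law identifiers from an AI answer and turns them into a manual verification checklist. It deliberately does not "verify" anything automatically: looking each identifier up in the official register is exactly the step that must stay human. The sample answer is fabricated on purpose.

```python
import re

# ECLI identifiers follow a fixed shape: ECLI:<country>:<court>:<year>:<ordinal>.
ECLI_PATTERN = re.compile(
    r"ECLI:[A-Z]{2}:[A-Z0-9]+:\d{4}:[A-Z0-9]+(?:\.[A-Z0-9]+)*"
)

def citation_checklist(ai_answer):
    """Return every cited ECLI identifier, deduplicated, in order."""
    seen = []
    for ecli in ECLI_PATTERN.findall(ai_answer):
        if ecli not in seen:
            seen.append(ecli)
    return seen

# Fabricated sample answer; the cited rulings may or may not exist.
answer = (
    "Under ECLI:NL:HR:2021:1234 the Supreme Court held that ..., "
    "as confirmed in ECLI:NL:HR:2021:1234 and ECLI:EU:C:2020:559."
)
for ecli in citation_checklist(answer):
    print(f"[ ] look up {ecli} in the official register")
```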

Conclusion

AI can significantly improve legal practice, but the ethical aspects discussed – privacy, intellectual property, confidentiality, and the reliability of information – require continued vigilance. Lawyers and knowledge workers can safely embrace AI by building in clear boundaries and controls:

  • Conscious Data Use: Handle personal data carefully
  • Respect for IP: Respect the rights of third parties
  • Ensuring Confidentiality: Protect confidential information
  • Critical Assessment: Always critically evaluate AI answers

This way, humans remain at the helm, and the sector benefits from the advantages of AI without losing sight of the core values of the law.
