January 31, 2025
5 min read

The Fight for AI Responsibility is Just Beginning

“Ethics is knowing the difference between what you have a right to do and what is right to do.” – Potter Stewart

(before every post, fyi) I'm coming from a background in data and numbers, so my subjective opinion =/= endorsement, but my justifications are usually objective. It's really up to you to decide. It's your mind, after all.

Update: Case Dismissed, But the Fight for AI Responsibility is Just Beginning

After filing a proposed class-action lawsuit against LinkedIn, I made the decision to voluntarily dismiss the case following the company's assertion that no wrongdoing had occurred. LinkedIn provided evidence that private messages were never used for AI training, and based on this information, I withdrew the lawsuit. While this legal case has come to an end, the concerns it raised about AI transparency, corporate accountability, and data privacy remain more relevant than ever.

The lawsuit was initially filed because of growing concerns about how LinkedIn and other major platforms handle user data in an era when artificial intelligence is reshaping nearly every aspect of our digital lives. The rapid acceleration of AI means companies must be held to higher standards of transparency and responsibility. While LinkedIn denied the specific claims in this lawsuit, the broader question remains: how do we ensure that tech companies handle user data ethically, especially as AI becomes increasingly integrated into our everyday experiences?

This issue is not just about one platform. AI now makes hiring decisions, influences credit approvals, determines who gets access to healthcare, and even shapes law enforcement practices. Without clear regulations and ethical oversight, we risk allowing AI to reinforce biases, invade privacy, and operate without meaningful checks and balances. There have already been cases where AI-driven facial recognition disproportionately misidentified people of color, leading to wrongful arrests. Hiring algorithms have been found to favor certain demographics over others, embedding systemic biases into automated decision-making. And AI-generated content is increasingly trained on personal user data without clear consent, raising serious questions about privacy and intellectual property rights.

While LinkedIn's privacy policies were updated to give users more control over data usage, this case reinforced that privacy should not be an afterthought or a reaction to public pressure; it should be a fundamental principle guiding all AI development. The reality is that most users do not have the time or expertise to navigate complex privacy settings or read between the lines of corporate policy updates. Companies must take proactive steps to communicate clearly, ensure informed consent, and implement safeguards that prioritize user trust.

The rapid pace of AI innovation makes this issue more pressing than ever. We are entering an era in which artificial intelligence will play an even greater role in shaping economic opportunities, social interactions, and political discourse. If we do not establish strong ethical guardrails now, we risk creating a future where privacy is a luxury rather than a right, where AI systems operate in ways that are unaccountable and opaque, and where users are left with little recourse when their data is exploited.

While this particular lawsuit has concluded, the broader conversation about AI ethics and accountability is just beginning. Ensuring AI is developed responsibly does not rest on one legal challenge; it requires ongoing advocacy, regulatory action, and public awareness. Companies must be pushed to uphold higher standards, lawmakers must act to strengthen data privacy protections, and consumers must remain vigilant about how their personal information is used. My commitment to ethical AI and data transparency remains steadfast. The fight for accountability in artificial intelligence is far from over. If we want a future where technology serves humanity rather than exploits it, we must continue demanding greater transparency, stronger protections, and ethical leadership in AI development. Now more than ever, ensuring AI is used responsibly is not just a question of policy; it is a question of the kind of world we want to build.

Original Post:

In today’s digital age, data has become one of the most valuable resources. But with that value comes great responsibility—responsibility that companies often fail to uphold. My journey as a data engineer and advocate for ethical AI has been about harnessing technology to empower people, not exploit them. Yet, time and again, I’ve seen how the misuse of AI and personal data undermines trust, privacy, and even democracy. This is why I’ve taken a stand by leading a class-action lawsuit against LinkedIn, a platform that violated its users’ trust by disclosing private messages to train generative AI models without consent.

Recent history is littered with examples of how AI has been abused to the detriment of individuals and society. Take the Cambridge Analytica scandal, where personal data from millions of Facebook users was harvested without consent and used to manipulate elections. Or the instances of facial recognition technology disproportionately misidentifying people of color, leading to wrongful arrests and discrimination. Even the explosion of generative AI tools like ChatGPT has raised ethical concerns, with reports of companies secretly feeding sensitive user data into these models without clear permissions.

LinkedIn’s actions fit squarely into this troubling pattern. Between 2021 and 2024, as a Premium subscriber, I used the platform to share private, sensitive communications—business strategies, job searches, and professional opportunities. Unbeknownst to me, LinkedIn disclosed these messages to third parties to train generative AI systems. This was not only a breach of contract but a fundamental violation of privacy. Worse still, LinkedIn quietly updated its privacy policy only after being caught, offering users an “opt-out” option that didn’t undo the harm already done.

What makes this particularly concerning is the lasting impact of such disclosures. Once personal data is embedded into AI models, it cannot be fully extracted. This means sensitive communications—whether about employment negotiations, intellectual property, or personal matters—may forever inform AI outputs, potentially surfacing in other Microsoft products or even falling into the hands of third-party developers. It’s a permanent violation, and LinkedIn has shown no intention of addressing this through meaningful action, such as retraining its AI models without user data.

This lawsuit is about more than just LinkedIn. It’s about accountability in an industry that increasingly views privacy as an inconvenience rather than a right. If left unchecked, these practices will continue to proliferate, eroding public trust in technology and paving the way for even more egregious abuses. As we embrace AI’s potential, we must also ensure its development is ethical and transparent, prioritizing user consent and equity.

My journey in data engineering and AI has shown me the transformative power of technology when wielded responsibly. From founding Scholarcash, a platform that helped thousands of students access scholarships, to guiding businesses and communities through my work at Buildify and Project Ozone, I’ve always believed in using technology to uplift and empower. However, my work has also shown me the darker side of the tech industry—the ways in which private data can be exploited for profit, often at the expense of those least equipped to fight back.

This case is a chance to draw a line in the sand. It’s a call for greater transparency, stronger privacy protections, and a commitment to ethical AI practices. By holding LinkedIn accountable, I hope to set a precedent that will reverberate across the tech industry, ensuring that innovation is not built on the backs of exploited users.

Together, we can create a digital future where technology serves people—not the other way around.

Disclaimer: The image included in this blog post is utilized under Creative Commons licensing. Any inquiries regarding the case or related legal matters should be directed to cgivens@edelson.com, counsel of record in this matter.

