Now Accepting Nominations!
Join a transformative cohort of changemakers in our Spring 2026 Ethical Tech Project Fellows Program
The Ethical Tech Project is seeking outstanding early- to mid-career tech builders (designers, product managers, engineers, architects, digital marketers, etc.) to join its Spring 2026 cohort of Ethical Tech Fellows.
Successful nominees will participate in a structured 10-week curriculum designed to contextualize the social impact of the decisions tech teams make when bringing digital and AI-driven products and services to market. Fellows will develop both leadership and practical skills they can apply to their day-to-day work, enabling them to produce better outcomes for individuals, businesses, and society. At the end of the program, participants will have the opportunity to present a mini-capstone or project idea to a panel of industry luminaries and receive valuable, actionable feedback.
The Ethical Tech curriculum has been designed by subject matter experts in conjunction with the Digital Futures Institute at Columbia University and is moderated by Jennie Baird, Nancy Green Saraisky and Robert Levitan along with special guests from the tech community.
Sessions will be held on Thursday evenings from April 9, 2026, through June 11, 2026. Though a few sessions will be remote, the majority will meet in person in New York City.
This is a unique learning, leadership, and networking opportunity for motivated and passionate individuals who want their tech and AI-oriented work to support human flourishing and who believe companies can do well by doing good.
There is no cost to participate in the program, but admission is competitive: acceptance to previous fellowship cohorts has been highly selective.
Nominate yourself or someone else you think would make a great ETP Fellow. Applications for the Fellows program close March 9, 2026.
Fill out your application here.
Here’s what we’ve been reading and listening to; we think you should, too:
🤔 Attitudes on AI
TIME: The People vs. AI. Across red states and blue, a grassroots movement is pushing back on the unchecked growth of the artificial intelligence industry.
The New York Times: Opinion | Where Is A.I. Taking Us? Eight Leading Thinkers Share Their Visions. Experts share their thoughts on the future of A.I. and how it will reshape society in the coming years.
The Boston Globe: AI answering systems are ‘saving the day’ for New England pizzerias. Customers aren’t so sure. The artificial receptionists, used to take orders and field calls, have met resistance from some customers who say they can’t get the service they’re used to.
💻 Jobs
The Atlantic: The Worst-Case Future for White-Collar Workers. The well-off have no experience with the job market that might be coming.
CNBC: Accenture tells senior staff to use AI tools or risk losing out on leadership promotions. Accenture started tracking how often senior staff are logging in to its AI tools this month, saying AI adoption will be a “visible input to talent discussions.”
🧠 Mental Health
Ohio Capital Journal: Ohio bill would prevent people from creating AI models that encourage users to engage in self-harm. Ohio lawmakers have introduced a bipartisan bill that would prevent anyone from creating an AI model in Ohio that encourages users to engage in self-harm or harm another person.
🌎 Environment
WIRED: Could AI Data Centers Be Moved to Outer Space? Massive data centers for generative AI are bad for the Earth. How about launching them into orbit?
📃 Copyright
Deadline: Actors And Musicians Help Launch “Stealing Isn’t Innovation” Campaign To Protest Big Tech’s Use Of Copyrighted Works In AI Models. The campaign is a protest against the unauthorized use of copyrighted works to train AI models.
BBC: What is Seedance? The Chinese AI app sending Hollywood into a panic. Clips of Deadpool and other film characters have sparked alarm within Hollywood over copyright infringement.
🗳️ Democracy
WIRED: AI-Powered Disinformation Swarms Are Coming for Democracy. Advances in artificial intelligence are creating a perfect storm for those seeking to spread disinformation at unprecedented speed and scale. And it’s virtually impossible to detect.
🍎 Education
The New York Times: Opinion | A.I. Companies Are Eating Higher Education. Human intelligence — the thing we as educators are duty bound to defend and advance — is under attack.
Fortune: Law school admissions expert sees ‘dangerous one-two punch’ as Gen Z seeks shelter from the AI hiring storm in 6-figure debt and JD lifeboat. ChatGPT can pass the bar, but just as in 2008 and during the pandemic, law schools are overwhelmed with applicants.
NPR: In China, AI is no longer optional for some kids. It’s part of the curriculum. While debate rages in the U.S. about the merits and risks of AI in schools, it’s become a state-mandated part of the curriculum in China, as the authorities try to create a pool of AI-savvy professionals.
🩺 Medicine
The Guardian: Google AI Overviews cite YouTube more than any medical site for health queries, study suggests. German research into responses to health queries raises fresh questions about summaries seen by 2bn people a month.
🎓 Research and Resources
TechEquity: How Californians Feel About AI – Findings From the 2025 AI Compass. TechEquity surveyed over a thousand Californians to understand how they feel about AI and its impact on their lives.
MIT Sloan: Agentic AI, explained. The age of agentic AI — systems that are semi- or fully autonomous and can act on their own — has arrived. Here’s what you need to know, according to MIT experts.
The Guardian: ‘Deepfakes spreading and more AI companions’: seven takeaways from the latest artificial intelligence safety report. Annual review highlights growing capabilities of AI models, while examining issues from cyber-attacks to job disruption.


