2025 Global AI Bias Bounty Hackathon
Hack with purpose. Surface AI risk. Contribute to global trust.
- Date: July 1, 2025
- Location: Online
About Us
HackTheFest is a global Secure AI Hackathon Festival, leading a new frontier in artificial intelligence by embedding security, fairness, and trust into the heart of innovation.
As a mission-driven initiative, HackTheFest brings together emerging talent, researchers, and industry leaders and experts to uncover AI risks, stress-test models, and co-create solutions that make intelligent systems more resilient and accountable.
AI Bias Bounty Hackathon
Calling all AI Enthusiasts!
The AI Bias Bounty Hackathon is a first-of-its-kind, 48-hour competition where participants discover, test, and report algorithmic bias and fairness risks in AI systems. Participants treat bias in an AI system like a security vulnerability, and the event rewards those who uncover, document, and propose fixes for harmful or vulnerable AI behaviors.
This hackathon is your launchpad to showcase your ideas in front of a world-class audience.
JOIN US FOR OUR UPCOMING EVENT!
A World of Innovation
- Date: July 1, 2025
- Location: Online
Challenges
AI is powerful but often flawed. From biased resume screenings to hallucinated health advice, AI systems are already shaping decisions that impact human lives. The challenge is that many of these risks go undocumented, unmeasured, and untested. That’s where you come in.
Your challenge is to act as an AI risk investigator, using your technical and soft skills to surface hidden risks in datasets, document them using a structured method, and build a model others can build on.
You’ll be contributing to the first open-source, community-driven AI Risk Intelligence Framework: a platform for tracking and preventing real-world AI harms, similar to how cybersecurity communities report vulnerabilities and bugs.
Requirements
- Format: 100% Virtual
- Platform: Devpost + GitHub
- Final Submission Deadline: July 3, 2025
- Eligibility: Open globally to participants aged 18+ (except restricted regions)
Technologies
Ideal Skillsets (Choose what fits your strength):
Technical Participants
- Python or Colab scripting
- Prompt engineering/red teaming
- Machine learning, NLP, or data visualization
- Streamlit, Jupyter, or GitHub workflows
- Dataset auditing/fairness metrics (see the sketch after this list)
Non-Technical Participants
- UX research or AI ethics knowledge
- Domain expertise (e.g., healthcare, HR, legal)
- Critical analysis, writing, or policy background
- Risk communication or information design
- Social science or behavioral research
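For a concrete taste of the "dataset auditing/fairness metrics" skill above, here is a minimal sketch that computes a group-level selection-rate gap on a toy table. The column names ("gender", "hired") and the data are purely illustrative assumptions, not the official hackathon dataset.

```python
# Minimal dataset-audit sketch: compare positive-outcome rates across groups.
# Columns and values below are hypothetical placeholders, not the event data.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes within each group."""
    return df.groupby(group_col)[outcome_col].mean()

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Difference between the highest and lowest group selection rates (0 = parity)."""
    rates = selection_rates(df, group_col, outcome_col)
    return float(rates.max() - rates.min())

if __name__ == "__main__":
    # Tiny illustrative table; a real audit would load the provided CSV instead.
    df = pd.DataFrame({
        "gender": ["F", "F", "F", "M", "M", "M", "M", "F"],
        "hired":  [0,   1,   0,   1,   1,   0,   1,   0],
    })
    print(selection_rates(df, "gender", "hired"))
    print("Demographic parity gap:", round(demographic_parity_gap(df, "gender", "hired"), 2))
```

The same pattern extends to other group metrics (error rates, false-positive rates) once model predictions are added as a column.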
What to Submit?
All Submissions Must Include:
- The completed AI Risk Report Template
- Your prompt set, dataset audit, or technical tool
- README or documentation explaining how to use your work
- Screenshots, charts, or summary visuals where relevant
- A model……
- A 2–3-minute video demo
- Presentation slide
- Public GitHub repository
- Application URL
Your submission will contribute to a permanent, public GitHub archive used by developers, auditors, educators, and AI safety communities, turning your weekend project into a long-term asset for responsible AI.
For further details and guidance, please visit the Submission Guidelines.
Prizes
Up to $1,000 in prizes.
More details on prize categories and distribution are coming soon — stay tuned!
All winning teams will be highlighted in our post-event report and invited to contribute long-term to our GitHub repository.
Meet the Team

Rianat Abbas
Organizer

Ifeoma Eleweke
Co-organizer

Samuel Ajuwon
Technical Director, ML & Fairness Evaluation

Oluwafemi Oloruntoba
Director, AI Fairness & Database Innovation

Sheriff Adepoju
Technical Director, ML & Data Track

Idara Bassey
Lead Ethics & Policy Advisor

Raphael Ugboko
Director, User Experience

Chimdi Chikezie
Responsible and Usable Technology Advisor
Schedule
Timeline & Key Dates (All times in Central Time, unless otherwise noted)
Phase 1
Hackathon Registration Period
June 4 – June 27, 2025
Phase 2
Kickoff Event (Live on Zoom)
June 28, 2025
Phase 3
Participant Onboarding
June 28 – June 30, 2025
Phase 4
48-Hour Hackathon Begins
July 1, 2025 – July 3, 2025
Phase 5
Submissions Due
July 3, 2025, 11:59 PM CST
Phase 6
Judging Period
July 5 – July 15, 2025
Phase 7
Winners Announced
July 17, 2025
Phase 8
Certificates Distributed
July 23, 2025
Speakers & Judges
Our judging panel includes leaders in AI ethics, data science, red teaming, and responsible tech from industry and academia.

Hackathon Details
Join the HackTheFest hackathon and innovate using the latest models on the market. Discover all the relevant details below.
- Why Join?
- Do something meaningful: Help shape how we identify and talk about AI harm
- Get hands-on: Test models, analyze datasets, and build lightweight auditing tools
- Learn new skills: From red teaming to bias analysis to ethical model testing
- Join a movement: Be part of the first wave building open AI accountability tooling
- Contribute to the knowledge base: Your work becomes part of a public AI risk database
- Win prizes and recognition: For top tools, reports, and community value
The hackathon will take place online on the HackTheFest Devpost platform and Discord server. To participate, register by clicking the “Enroll” button at the bottom of the page, then read our Hackathon Guidelines and Getting Started Guide.
- Use of Materials & IP
All submissions remain the intellectual property of their creators. However, by participating, you agree to:
- Allow the organizers to showcase your work for educational and community purposes.
- Permit anonymized or attributed versions of your report and model to be included in the public AI Risk Repository, subject to review.
- Follow open-source best practices if using third-party data/tools.
- Tools & Resources Provided
- AI Risk Intelligence Framework (interactive format + reference map)
- AI Risk Report Template (.md and .docx)
- Sample dataset
- Submission checklist + GitHub structure guide
Judging Criteria
All projects will be reviewed by our judging panel against the following criteria (100 points total):
- Accuracy of Bias Identification (30 points)
- Coverage of Bias Types (15 points)
- Model Design and Justification (20 points)
- Interpretability and Insight (15 points)
- Mitigation Suggestions or Solutions (10 points)
- Presentation and Clarity (10 points)
Sponsors
This is an independently hosted hackathon and is not affiliated with or endorsed by any government agency or external funding entity unless explicitly stated.
Learn more about the opportunity to become a Gold sponsor or explore other available sponsorship tiers for the upcoming event.
Please send an email to: sponsor@hackthefest.com

Volunteer as a Virtual Judge
We’re inviting professionals, researchers, engineers, ethicists, data scientists, designers, and thoughtful technologists to serve as virtual volunteer judges for the AI Bias Bounty Hackathon, a global event focused on uncovering and mitigating bias in AI systems.
This is your chance to support ethical AI innovation and help shape the future of responsible technology.

FAQs
What is the AI Bias Bounty Hackathon?
The AI Bias Bounty Hackathon is a 48-hour global virtual event focused on uncovering and addressing bias, hallucinations, and hidden risks in AI systems. Participants analyze a synthetic dataset that mirrors real-world decisions, report model behavior, develop lightweight auditing models to identify bias in the dataset, and create reusable public resources for safer AI.
Who can participate?
The hackathon is open to anyone 18 years or older — students, early-career professionals, domain experts, engineers, researchers, UX designers, data scientists, technical writers, and ethical technologists. Both technical and non-technical contributors are welcome and encouraged to collaborate.
What will participants work on?
Participants will work with a pre-provided synthetic dataset that mimics real-world decisions. The task is to analyze the data, build a predictive model, identify sources of bias, and clearly communicate what the risks are, how they manifest, and what can be done to reduce them.
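For the technically inclined, that workflow might look roughly like the sketch below. It is only a minimal illustration assuming a pandas/scikit-learn stack; the column names are hypothetical stand-ins, not the provided synthetic dataset.

```python
# Rough sketch of the FAQ workflow: fit a simple model on tabular data, then check
# whether its predictions differ across a sensitive group. All columns are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Illustrative stand-in for the provided synthetic dataset.
df = pd.DataFrame({
    "age":      [22, 35, 47, 29, 51, 33, 40, 26, 38, 45, 31, 58],
    "income":   [30, 60, 80, 45, 90, 55, 70, 35, 65, 85, 50, 95],
    "group":    ["A", "A", "B", "B", "A", "B", "A", "B", "A", "B", "A", "B"],
    "approved": [0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1],
})

# One-hot encode the categorical feature and fit a baseline classifier.
X = pd.get_dummies(df[["age", "income", "group"]], columns=["group"])
y = df["approved"]
model = LogisticRegression(max_iter=1000).fit(X, y)

# First bias signal: does the model approve one group more often than the other?
# A real audit would use a held-out split and extra metrics (error rates, calibration).
df["pred"] = model.predict(X)
print(df.groupby("group")["pred"].mean())
```

A large gap between those per-group approval rates would then go into the AI Risk Report Template, along with the evidence behind it and a proposed mitigation.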
Are there prizes?
Yes! Winners get up to $1,000 in prizes and community recognition awards.
Can I participate from any country?
Yes, participants from all countries are welcome, unless local law or U.S. export restrictions prohibit participation or receiving a prize.
How do I register?
Register through the website. You can also find more information about the hackathon on our official Devpost page. Once you register:
- Your spot is confirmed — no selection or application process.
- One week before the hackathon, you’ll receive:
- A personal invite to the Discord server
- Your Participant Onboarding Kit with templates, tools, and submission instructions
- A link to view the full Hackathon Schedule and Live Event Calendar
This delayed onboarding ensures all participants start on the same page. Please check your inbox around that time (and your spam folder just in case!).
What if I’m not a coder?
That’s okay! You don’t have to be a coder to contribute. We welcome:
- Data scientists/engineers, AI engineers, and cybersecurity analysts
- UX researchers & product thinkers
- Policy, ethics, or humanities students
- Domain experts in fields like healthcare, finance, and education
- Technical writers, strategists, and information designers
The best teams combine both technical and non-technical skills. This hackathon is as much about critical thinking and ethical awareness as it is about tools and code.
Is the event in person or online?
This is a fully virtual event. All activities — including the kickoff, workshops, submissions, and judging — will take place online.
When does the hackathon run?
The hackathon will run for 48 hours, starting Tuesday, July 1, 2025, at 9:00 AM CST. Submissions are due Thursday, July 3, 2025, at 11:59 PM CST.
Do I need to be affiliated with a school, company, or organization?
No. The AI Bias Bounty Hackathon is independent and open to all. You do not need to be affiliated with any institution to participate.
What kinds of projects can I work on?
The hackathon supports projects across three main tracks:
- Prompt-based model testing + bias scoring tools
- Dataset bias auditing tools and visualizations
- Domain-specific prompt sets that surface real-world AI risk
Participants can submit tools, analysis notebooks, risk reports, or prompt libraries — all aligned with our structured submission template and framework.
What award categories are available?
We offer both cash prizes and community recognition awards:
- Best Risk Detection Tool
- Best Prompt Dataset + Report
- Most Useful Community Contribution
- People’s Choice Award
- Honorable Mentions and GitHub Feature Slots
Winning teams will be highlighted in our post-event publication and invited to contribute to the open-source AI Risk Intelligence GitHub archive.
What will I actually do during the hackathon?
You’ll pick one track and work with your team (or solo) to:
- Use or build a prompt/test set, tool, or bias detection script
- Evaluate a public model or dataset
- Use our AI Risk Reporting Template to document your findings
- Submit your work on GitHub and the event platform
You’ll also get access to optional mentoring, live office hours, and workshops during the event window.
What does each track involve?
- Model Behavior Testing + Scoring Tool – Test large language models using curated prompts and build a tool to grade their output for bias or harmful responses (see the sketch after this list).
- Domain-Specific Prompt Library – Create and test prompts within a real-world domain like healthcare, hiring, or law to surface discrimination or risk.
- Dataset Bias Detection Tool – Analyze a public dataset for underrepresentation or skew and build a notebook, app, or script to visualize and document it.
Each track includes access to templates, sample tools, and guidance from our mentors.
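To make the first track concrete, here is a minimal paired-prompt ("counterfactual") testing sketch. `query_model` is a hypothetical stub to be replaced with a call to whatever model or API you are auditing, and the keyword-based scoring is deliberately crude; this is an assumption-laden starting point, not the official tooling.

```python
# Sketch of a paired-prompt bias probe for the model-testing track: send two prompts
# that differ only in a demographic cue and compare the responses with a crude score.
# query_model is a placeholder; wire it to the model or API you are actually testing.
from typing import Callable

PROMPT_TEMPLATE = "Write a short performance review for {name}, a {role}."
PROMPT_PAIRS = [
    ({"name": "Emily", "role": "software engineer"},
     {"name": "Ahmed", "role": "software engineer"}),
]

NEGATIVE_WORDS = {"aggressive", "emotional", "unreliable", "difficult"}

def negativity_score(text: str) -> int:
    """Crude proxy: count negative descriptors in the response."""
    words = text.lower().split()
    return sum(words.count(w) for w in NEGATIVE_WORDS)

def run_pair(query_model: Callable[[str], str], a: dict, b: dict) -> dict:
    """Query the model with both variants and report the score gap."""
    score_a = negativity_score(query_model(PROMPT_TEMPLATE.format(**a)))
    score_b = negativity_score(query_model(PROMPT_TEMPLATE.format(**b)))
    return {"variant_a": a, "variant_b": b, "score_a": score_a,
            "score_b": score_b, "gap": abs(score_a - score_b)}

if __name__ == "__main__":
    # Stand-in model so the sketch runs end to end; replace with a real API call.
    fake_model = lambda prompt: "A reliable and dedicated team member."
    for a, b in PROMPT_PAIRS:
        print(run_pair(fake_model, a, b))
```

A real submission would expand the prompt set, use a stronger judging rule (human labels or a rubric), and log every response so the results can be reproduced.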
Are meals or logistics provided?
Since this is a virtual event, meals and logistics are not provided. However, we encourage local participants to form micro-meetups and co-work together if safe and possible in your area.
What resources will I receive before the event?
You’ll receive an onboarding email with your Discord invitation, resource kit, and submission instructions one week before the hackathon begins. This includes:
- The AI Risk Intelligence Framework
- Reporting templates (.docx and .md)
- Starter prompt and dataset examples
- Submission guide + judging rubric
- Live event and workshop calendar
All support and updates during the event will happen via Discord.
What happens after the hackathon?
- Winners will be announced at the Closing Ceremony on July 17, 2025
- Submissions will be reviewed for inclusion in the AI Risk Intelligence GitHub repository
- Participants will be invited to continue contributing as risk mappers, red teamers, and peer reviewers
- Exceptional submissions may be featured in public whitepapers, research, or partner opportunities
We’re building a long-term community around this work — and this is just the beginning.