


FactBeacon - Google Chrome Built-in AI Challenge 2025 Submission

FactBeacon is a news analysis dashboard that uses Chrome's built-in AI (Gemini Nano via the LanguageModel API) to provide credibility scores, neutral summaries, and real-time reliability insights, all while preserving user privacy.

  • Deployed Demo Link: https://factbeacon.onrender.com/
  • Demo Video Link: https://youtu.be/oAVxNbsZOaA

⚠️ Important Note for Judges

This application utilizes on-device AI (Gemini Nano), which has specific hardware and browser setup requirements.

To see the AI features (scores > 0%, AI summaries, translation) work, you MUST run the deployed demo on a machine that meets these requirements:

  • Browser: Google Chrome (Version 139+).
  • RAM: At least 16GB of RAM.
  • OS: Windows 10/11, macOS 13+, or a compatible Linux/ChromeOS.
  • Browser Flags: You must manually enable specific Chrome Flags and verify the model is downloaded. Please see the "How to Run Locally / Enable AI" section below for the exact steps.

If tested on a machine with less than 16GB of RAM or without the correct flags enabled, the application will correctly enter a "fallback mode": the AI will not load, scores will remain at 0%, and only the raw article snippets will be shown.
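The detection logic behind this fallback mode can be sketched roughly as follows (helper names are illustrative assumptions; the actual logic lives in script.js and may differ):

```javascript
// Sketch of the fallback-mode check (hypothetical helper names;
// the real implementation in script.js may differ).

// Map the availability string returned by LanguageModel.availability()
// to an application mode.
function pickMode(availability) {
  // "available" means Gemini Nano is downloaded and ready to use.
  return availability === "available" ? "ai" : "fallback";
}

async function detectMode() {
  // On unsupported browsers the LanguageModel global does not exist.
  if (typeof LanguageModel === "undefined") return "fallback";
  return pickMode(await LanguageModel.availability());
}

// In fallback mode the app shows raw article snippets and keeps scores at 0%.
```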


The Problem Solved

In the constant flood of information, quickly evaluating source credibility is a major challenge. Existing solutions often lack transparency or require sending user data to external servers. FactBeacon offers a private, fast, and integrated alternative by harnessing the power of AI directly within the Chrome browser.


Features

  • Dynamic Timeline UI: A zoomable and pannable chronological visualization of articles.
  • Secure News Fetching: Retrieves articles via the Google Custom Search API, secured by a backend proxy hosted on Render.
  • On-Device AI Analysis (LanguageModel API):
    • Credibility Score: (Requires AI) A 0-100 rating based on the article snippet.
    • Neutral Summaries: (Requires AI) Generates a concise, neutral summary (in English) for each article.
    • Date Extraction (best-effort): (Requires AI) Attempts to identify the publication date from the snippet text.
    • Global Verdict: Classifies the overall topic as "Credible", "Mixed", or "Suspect".
  • Instant On-Device Translation (LanguageModel API):
    • (Requires AI) Translates the AI-generated summaries on-the-fly using a second, ad-hoc prompt, matching the user's selected language.
  • Filtering & History: Advanced options to filter results by date/type and review past analyses.
  • Export: Generates PDF, CSV, or JPG reports of the current view.

APIs Used (Contest Requirement)

  1. LanguageModel API (Built-in AI / Gemini Nano): This is the core engine of FactBeacon, used for two distinct tasks from a single session:

    • Core analysis: A complex prompt asks the model to analyze snippets and return a structured JSON object containing the score, English summary, extracted date, and global consensus.
    • On-the-fly translation: A second, simpler prompt translates the base English summary into the selected interface language (e.g., French, Arabic, Chinese).
  2. Google Custom Search (CSE) API (External API via Proxy):

    • Used to fetch articles. The call is made via a Node.js Web Service hosted on Render (server.js), which stores the API keys securely as environment variables and adds the &sort=date parameter to the request.
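The two prompt tasks above can be sketched like this. The JSON field names (score, summary, date, consensus) and helper names are illustrative assumptions, not necessarily the exact ones used in script.js:

```javascript
// Task 1: parse the structured JSON the analysis prompt asks the model
// to return (field names are assumptions for illustration).
function parseAnalysis(raw) {
  try {
    const { score = 0, summary = "", date = null, consensus = "Mixed" } =
      JSON.parse(raw);
    return { score, summary, date, consensus };
  } catch {
    // Malformed model output falls back to a neutral result.
    return { score: 0, summary: "", date: null, consensus: "Mixed" };
  }
}

// Task 2: build the simpler follow-up prompt that translates the
// base English summary into the selected interface language.
function buildTranslationPrompt(summary, language) {
  return `Translate the following summary into ${language}. ` +
         `Reply with the translation only:\n${summary}`;
}

// Both prompts would go through one session, e.g.:
//   const session = await LanguageModel.create();
//   const analysis = parseAnalysis(await session.prompt(analysisPrompt));
//   const fr = await session.prompt(buildTranslationPrompt(analysis.summary, "French"));
```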

Architecture

  • Frontend: A static site (index.html, styles.css, script.js) hosted on Render Static Site. It runs the LanguageModel API locally via the Origin Trial token.
  • Backend: A Node.js/Express Web Service (server.js) hosted on Render Web Service. This acts as a secure proxy to handle the Google CSE API calls, protecting the secret API keys.

How to Run Locally / Enable AI (For Judges/Testing)

This is the most critical section. The AI will not work (even on the deployed demo) unless you follow these steps on your test machine.

Prerequisites: Chrome 139+, 16GB+ RAM, Node.js (for local server).

Step 1: Enable the Correct Chrome Flags

  1. Go to chrome://flags in your address bar.
  2. Find and Enable this flag:
    • #prompt-api-for-gemini-nano -> Enabled
  3. Find and Enable this second, required flag. Select the "Bypass" option:
    • #optimization-guide-on-device-model -> Enabled BypassPerfRequirement
  4. Click the "Relaunch" button to restart Chrome.

Step 2: Verify the AI Model is Downloaded

  1. After Chrome restarts, go to chrome://components in your address bar.
  2. Find "Optimization Guide On Device Model" in the list.
  3. Check its "Version". If it is 0.0.0.0, click the "Check for update" button.
  4. Wait a few minutes for the model to download. Once it shows a real version number, the AI is ready.
  5. You can verify this by opening the console on the deployed app and typing await LanguageModel.availability(). It should return "available".

Step 3: Run the Project Locally (Optional)

  1. Clone the Repository:
    git clone https://github.com/chaibi-mustapha/FactBeacon
    cd FactBeacon 
  2. Install Dependencies:
    npm install 
  3. Configure Local Keys:
    • Create a .env file in the root directory.
    • Add your Google CSE keys:
      GOOGLE_CSE_KEY=YOUR_CSE_KEY_HERE
      GOOGLE_CSE_ID=YOUR_CSE_ID_HERE
      
  4. Run the Backend (Proxy): (In a new terminal)
    node server.js 
  5. Run the Frontend (Static Server): (In a second terminal)
    npx serve -l 3000 . 
  6. Open the App: Go to http://localhost:3000 in your flag-enabled, model-downloaded Chrome browser.

Experience Report & Known Issues

  • What Went Well: The LanguageModel API is extremely versatile. We successfully used a single session for both complex, structured JSON generation (for analysis) and ad-hoc multilingual translation, which is a powerful combination. The on-device execution is fantastic for privacy.
  • Challenges: The 16GB RAM hardware requirement is a significant barrier for development and testing. The setup process, requiring two specific flags and a component download check, is not obvious and was a major debugging hurdle.
  • Known (Harmless) Warning: In the developer console, you will see a warning: No output language was specified in a LanguageModel API request.... This warning is expected and intentional. Our application is designed to use a single AI session for multiple language outputs (English for analysis, but also French, Arabic, etc., for translation). Specifying a single output language would break this advanced feature. The app functions perfectly with this warning present.
