
---
title: Localized_lang_Translator
app_file: app.py
sdk: gradio
sdk_version: 5.49.1
---

NLLB Language Translator (Apple Silicon Optimized)

This project is a fork of Rohan Bagulwar’s Quantized Language Translator, originally designed for quantized CPU usage.

This modified version has been updated to run efficiently on Apple Silicon Macs (M1–M4) using MPS (Metal Performance Shaders) for GPU acceleration, and it no longer relies on CUDA or quantization.


🔧 Key Changes

  • ✅ Removed bitsandbytes and 8-bit quantization (not compatible with macOS without CoreML).
  • ✅ Switched to Apple's MPS backend for GPU acceleration on Mac.
  • ✅ Compatible with MacBook Air/Pro (M1/M2/M3/M4).
  • ✅ Updated app.py and requirements.txt to reflect Apple Silicon optimizations.
  • Note on the Model: This app uses Meta’s official facebook/nllb-200-distilled-600M model, running in full precision (float32) on Apple Silicon via PyTorch’s MPS backend.
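The MPS switch described above comes down to a one-line device check. A minimal sketch (assuming PyTorch ≥ 1.12, the first release with MPS support):

```python
import torch

# Prefer Apple's Metal (MPS) backend when available, otherwise fall back to CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
print(f"Using device: {device}")
```

On non-Apple hardware (or older PyTorch builds) this cleanly falls back to CPU, so the same code runs everywhere.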

🚀 Features

  • Apple Silicon Optimized: Runs on Mac GPU using PyTorch MPS backend.
  • Multi-language Support: Translate between many language pairs using NLLB.
  • Fast Inference on Mac: Leveraging the GPU gives significant speedup over CPU.
  • Gradio Interface: Easy-to-use web UI for testing and demo.
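The translation flow can be sketched roughly as follows. This is a minimal sketch, not the exact code in app.py; the function name, default language codes, and generation parameters are assumptions:

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "facebook/nllb-200-distilled-600M"

def translate(text: str, src_lang: str = "eng_Latn", tgt_lang: str = "fra_Latn") -> str:
    # Prefer the MPS backend on Apple Silicon, fall back to CPU elsewhere.
    device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, src_lang=src_lang)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME).to(device)
    inputs = tokenizer(text, return_tensors="pt").to(device)
    # Force the decoder to start generating in the target language.
    generated = model.generate(
        **inputs,
        forced_bos_token_id=tokenizer.convert_tokens_to_ids(tgt_lang),
        max_length=256,
    )
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]

if __name__ == "__main__":
    print(translate("Hello, world!"))
```

The model runs in full precision (float32), matching the note above; no quantization step is involved.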

✅ Setup Instructions

1. Clone the repository

git clone https://github.com/sibi-seeni/nllb_localized_lang_translator.git
cd nllb_localized_lang_translator

2. Create and activate a virtual environment

python3 -m venv myenv
source myenv/bin/activate

3. Install dependencies

pip install -r requirements.txt

Make sure you're using Python ≥3.8 and a recent version of pip.

4. Run the Gradio app:

python app.py

🌍 Supported Languages

Languages include (but are not limited to):

  • Tamil 🇮🇳
  • Hindi 🇮🇳
  • French 🇫🇷
  • Spanish 🇪🇸
  • German 🇩🇪
  • Arabic 🇸🇦

A complete list of supported languages is available in the web app's dropdown menus.
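Internally, NLLB identifies languages by FLORES-200 codes rather than display names. A hypothetical mapping for the languages listed above (the dictionary name is an assumption; the codes themselves are NLLB's standard FLORES-200 codes):

```python
# Display names (as shown in the dropdowns) mapped to FLORES-200 codes,
# which the NLLB tokenizer uses to select source and target languages.
LANG_CODES = {
    "Tamil": "tam_Taml",
    "Hindi": "hin_Deva",
    "French": "fra_Latn",
    "Spanish": "spa_Latn",
    "German": "deu_Latn",
    "Arabic": "arb_Arab",
}
```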

Acknowledgements

Thanks to the Hugging Face and Meta AI teams for their amazing models and tooling.

About

An implementation of Meta's NLLB translator, modified to run on Apple Silicon Macs (M1–M4) using GPU acceleration via PyTorch MPS. No CUDA or quantization required. Fast, offline language translation with a Gradio UI. This is part of our AI Ethics coursework and paper.
