NΞWЯΛLΛB

NewraLab Mission
Building multimodal, low-resource AI systems for real-world impact in emerging regions.

NewraLab (苏州拟界智能科技有限公司) is an R&D laboratory based in Suzhou, China, dedicated to advancing AI for low-resource and underrepresented environments. Our research operates at the intersection of computer vision, NLP, and multimodal learning, specifically targeting domains where conventional AI struggles due to data scarcity or limited infrastructure.


We prioritize the development of efficient alternatives to Transformer architectures, utilizing state-space systems and physically grounded dynamical models. We are committed to open-source contributions that democratize high-performance AI, ensuring robust intelligence is accessible regardless of compute constraints.

Latest News

Oct 19, 2025
🏆 Featured Oral Presentation: Our paper "Tiny-vGamba: Distilling Large Vision-(Language) Knowledge from CLIP into a Lightweight vGamba Network" was selected for an Oral Presentation at the ICCV 2025 ECLR Workshop. Only 3 papers were chosen.
June 22, 2025
🎉 Major Milestone! Our first work, Tiny-vGamba, has been accepted at the ICCV 2025 Workshop on Efficient Computing under Limited Resources: Visual Computing (ECLR). This marks a significant step in our research on linear-complexity architectures.

Projects

Open-Source Beta Testing

Gamba-Vision Suite

A collection of lightweight visual backbone models designed for deployment on resource-constrained devices in regions with limited cloud access.

Research Focus

Physically-Grounded AI

We investigate state-space and dynamical-system–based alternatives to Transformer architectures, modeling dependencies with linear computational complexity.
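The linear-complexity claim can be illustrated with a minimal sketch (not NewraLab code; all names and values here are hypothetical): a discrete linear state-space model carries a fixed-size state through the sequence, so one update per timestep gives O(T) cost in sequence length T, versus the O(T²) pairwise interactions of full self-attention.

```python
import numpy as np

def ssm_scan(A, B, C, u):
    """Linear state-space recurrence: x_{t+1} = A x_t + B u_t,  y_t = C x_t.

    One O(d^2) state update per timestep, so total cost is linear in len(u).
    """
    d_state = A.shape[0]
    x = np.zeros(d_state)          # fixed-size hidden state
    ys = []
    for u_t in u:                  # single pass over the sequence
        x = A @ x + B * u_t        # state transition driven by the input
        ys.append(C @ x)           # readout
    return np.array(ys)

# Toy run: a stable 2-state system filtering a scalar input sequence.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([1.0, 0.5])
C = np.array([1.0, -1.0])
u = np.sin(np.linspace(0.0, 3.0, 16))
y = ssm_scan(A, B, C, u)
print(y.shape)  # one output per input step
```

The state matrices here are arbitrary stable values chosen only to make the recurrence concrete; practical state-space layers learn (structured) A, B, C and replace the Python loop with a parallel scan.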

Unified Dynamical Fields

Modeling multimodal data as interacting dynamical fields within a unified system for robust learning under noisy or sparse conditions.

Global AI Democratization

Optimizing high-performance systems for deployment in emerging regions with limited compute and data infrastructure.

Publications

Tiny-vGamba: Distilling Large Vision-(Language) Knowledge from CLIP into a Lightweight vGamba Network

Yunusa H., et al. | ICCV 2025, ECLR Workshop

Distilling cross-modal capabilities into linear-complexity architectures for edge devices.

Tiny-vGamba Architecture
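The distillation idea behind this line of work can be sketched in a few lines (an illustrative example, not the Tiny-vGamba training code; the logits and temperature below are made up): the small student is trained to match the large teacher's temperature-softened class probabilities via a KL-divergence term.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax, shifted for numerical stability."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(teacher_logits, student_logits, T=4.0):
    """KL(teacher || student) on softened distributions, scaled by T^2."""
    p = softmax(teacher_logits, T)            # soft targets from the teacher
    q = softmax(student_logits, T)            # student's softened prediction
    return float(np.sum(p * (np.log(p) - np.log(q))) * T * T)

teacher = np.array([2.0, 0.5, -1.0])          # e.g. a large vision-language teacher
student = np.array([1.5, 0.7, -0.8])          # e.g. a lightweight student head
loss = distill_loss(teacher, student)
print(loss > 0.0)
```

In practice this term is combined with a standard cross-entropy loss on ground-truth labels; the T² factor keeps gradient magnitudes comparable across temperatures.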

iiANET: Inception-Inspired Attention Hybrid Network for Efficient Long-Range Dependency

Yunusa H., Jane Doe | Transactions on Machine Learning Research (TMLR)

Physically-motivated segmentation models for low-compute environments.

iiANET Architecture

Our Team

Yunusa Haruna

Founder & Lead Researcher

Computer Vision, Deep Learning; Entrepreneur

Adamu Lawan

Researcher & Developer

NLP, Sentiment Analysis

Amir Hashim

Business Lead

Contact Us