hama on-device NLP

Installation

hama publishes two packages: hama (Python) and hama-js (TypeScript). Both ship with embedded ONNX assets, so no follow-up downloads are required.

Subpath exports: hama-js/g2p, hama-js/asr, hama-js/g2p/browser, hama-js/asr/browser, hama-js/browser, hama-js/jamo, and hama-js/tokenizer.

Install from a package registry

uv pip install hama
# or
pip install hama

Verify the Python install with python -c "from hama import G2PModel; print(G2PModel().predict('안녕하세요').ipa)". For Node/Bun, run a short script that imports G2PNodeModel from hama-js/g2p or ASRNodeModel from hama-js/asr. For browser setups, import G2PBrowserModel or ASRBrowserModel from their published browser exports.

For live microphone ASR examples in Python, install the optional extra with uv pip install 'hama[live]'.

Local development

git clone https://github.com/hamanlp/hama.git
cd hama/python
uv sync --extra test
uv run pytest

Notes

Python 3.9+ and Node 18+ (or Bun 1.1+) are recommended. Python installs include the ONNX weights inside the wheel; TypeScript installs ship the same assets in dist/.

G2P uses split assets by default (encoder.onnx + decoder_step.onnx). Single-file ONNX remains a fallback when you pass model_path or modelPath explicitly.

Versioning

hama and hama-js are versioned and released together. Each coordinated release should publish both packages and tag the commit of the shared asset bundle they embed.