
Tether has launched a new AI training framework under its QVAC platform, enabling large language models to be fine-tuned on consumer hardware including smartphones and non-Nvidia GPUs.
The system combines Microsoft’s BitNet architecture with LoRA (low-rank adaptation) fine-tuning to reduce memory and compute requirements, lowering both the cost of and the barriers to AI development.
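As a rough illustration of how these two techniques fit together, the sketch below quantizes a frozen base weight to ternary values (in the spirit of BitNet b1.58) and trains only small low-rank LoRA matrices on top of it. It is plain PyTorch written for this article, not Tether’s QVAC code or Microsoft’s BitNet implementation, and the layer sizes and hyperparameters are arbitrary.

```python
# Illustrative only: a frozen ternary-quantized base layer plus trainable LoRA matrices.
import torch
import torch.nn as nn

class TernaryLinear(nn.Module):
    """Frozen linear layer whose weights are quantized to {-1, 0, +1} with a
    per-tensor scale, loosely mimicking BitNet-style ternary weights."""
    def __init__(self, weight: torch.Tensor):
        super().__init__()
        scale = weight.abs().mean()
        self.register_buffer("scale", scale)
        self.register_buffer("w_ternary", (weight / (scale + 1e-8)).round().clamp(-1, 1))

    def forward(self, x):
        return x @ (self.w_ternary * self.scale).t()

class LoRALinear(nn.Module):
    """Quantized frozen base weight plus a small trainable low-rank update B @ A."""
    def __init__(self, weight: torch.Tensor, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        out_f, in_f = weight.shape
        self.base = TernaryLinear(weight)                      # frozen, low-memory base
        self.lora_a = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_f, rank))   # zero init: no change at step 0
        self.scaling = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.lora_a.t() @ self.lora_b.t()) * self.scaling

# Usage: only the small LoRA matrices receive gradients, so optimizer state stays tiny.
layer = LoRALinear(torch.randn(256, 512), rank=8)
opt = torch.optim.AdamW([p for p in layer.parameters() if p.requires_grad], lr=1e-4)
loss = layer(torch.randn(4, 512)).sum()
loss.backward()
opt.step()
```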
The framework supports a wide range of hardware, including AMD, Intel, Apple Silicon and mobile GPUs, allowing cross-platform training and inference.
Tether said models with up to one billion parameters can be fine-tuned on smartphones in under two hours, smaller models can be processed in minutes, and larger models are also supported on mobile devices.
The BitNet-based approach reduces VRAM requirements by up to 77.8%, enabling more efficient model training and faster inference on limited hardware.
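To see why low-bit weights shrink the memory footprint, the back-of-the-envelope calculation below compares raw weight storage for a one-billion-parameter model in 16-bit precision versus a ternary (roughly 1.58-bit) encoding. The assumptions are ours for illustration; the 77.8% figure above is Tether’s own end-to-end measurement, which also reflects activations, gradients and optimizer state.

```python
# Back-of-the-envelope weight-memory comparison (illustrative assumptions, not Tether's figures).
params = 1_000_000_000                      # a 1B-parameter model, per the article

fp16_bytes = params * 2                     # 16 bits per weight
ternary_bytes = params * 1.58 / 8           # BitNet b1.58-style ternary weights

print(f"FP16 weights:    {fp16_bytes / 1e9:.2f} GB")      # ~2.00 GB
print(f"Ternary weights: {ternary_bytes / 1e9:.2f} GB")    # ~0.20 GB
print(f"Weight-only reduction: {100 * (1 - ternary_bytes / fp16_bytes):.1f}%")
# Total training memory also includes activations, gradients and optimizer state,
# which is why the end-to-end saving reported by Tether differs from this ratio.
```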
The company highlighted use cases such as on-device training and federated learning, which reduce reliance on centralised cloud infrastructure.
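For context on the federated-learning use case, the minimal sketch below shows generic federated averaging: each device fine-tunes locally and only its small parameter updates (for example, LoRA matrices) are sent back and averaged, so raw training data never leaves the device. This is a textbook FedAvg illustration, not Tether’s protocol.

```python
# Generic federated averaging of per-device parameter updates (illustrative, not Tether's design).
import torch

def fed_avg(client_updates: list[dict[str, torch.Tensor]]) -> dict[str, torch.Tensor]:
    """Average parameter tensors (e.g. LoRA matrices) contributed by several devices."""
    keys = client_updates[0].keys()
    return {k: torch.stack([u[k] for u in client_updates]).mean(dim=0) for k in keys}

# Usage: three hypothetical devices return their locally trained LoRA weights.
updates = [{"lora_a": torch.randn(8, 512), "lora_b": torch.randn(256, 8)} for _ in range(3)]
merged = fed_avg(updates)
print({k: tuple(v.shape) for k, v in merged.items()})
```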
The launch reflects a broader trend of crypto firms expanding into AI infrastructure, alongside investments from mining companies and partnerships involving major tech and financial firms.
Industry activity includes Bitcoin miners pivoting toward AI data centres and crypto platforms building tools for autonomous AI agents and onchain interactions.
At the time of reporting, the Bitcoin price was $74,193.31.