Energy-efficient AI inference framework & kernels for phones & AI-native hardware.
Budget and mid-range phones account for over 70% of the market, yet today's frameworks optimise for high-end phones.
Cactus is designed from the ground up, with no dependencies, for all mobile devices.
Example (CPU-only):
- Model: Qwen3-600m-INT8
- File size: 370-420 MB
- 16-20 t/s on Pixel 6a, Galaxy S21, iPhone 11 Pro
- 50-70 t/s on Pixel 9, Galaxy S25, iPhone 16
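The small file sizes above follow roughly from 8-bit weight storage. As an illustration only (not the actual Cactus kernel code), symmetric per-tensor INT8 quantization stores one int8 per weight plus a single float scale, about a 4x reduction versus FP32:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Symmetric per-tensor INT8 quantization: each weight becomes one int8,
// and a single float scale maps int8 values back to real values.
struct QuantizedTensor {
    std::vector<int8_t> values;
    float scale;  // dequantized value = values[i] * scale
};

QuantizedTensor quantize_int8(const std::vector<float>& weights) {
    float max_abs = 0.0f;
    for (float w : weights) max_abs = std::max(max_abs, std::fabs(w));
    QuantizedTensor q;
    q.scale = max_abs > 0.0f ? max_abs / 127.0f : 1.0f;
    q.values.reserve(weights.size());
    for (float w : weights)
        q.values.push_back(static_cast<int8_t>(std::lround(w / q.scale)));
    return q;
}

float dequantize(const QuantizedTensor& q, size_t i) {
    return q.values[i] * q.scale;
}
```

Each weight now costs 1 byte instead of 4, at the price of a small rounding error bounded by half the scale.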
Cactus exposes four levels of abstraction:
┌─────────────────┐
│ Cactus FFI │ ←── OpenAI compatible C API for integration
└─────────────────┘
│
┌─────────────────┐
│ Cactus Engine │ ←── High-level transformer engine
└─────────────────┘
│
┌─────────────────┐
│ Cactus Graph │ ←── Unified zero-copy computation graph
└─────────────────┘
│
┌─────────────────┐
│ Cactus Kernels │ ←── Low-level ARM-specific SIMD operations
└─────────────────┘
Cactus Graph is a general-purpose numerical computing framework that runs on Cactus Kernels.
It is great for implementing custom models and scientific computing: think JAX, but for phones.
#include "cactus.h"
CactusGraph graph;
auto a = graph.input({2, 3}, Precision::FP16);
auto b = graph.input({3, 4}, Precision::INT8);
auto x1 = graph.matmul(a, b, false);
auto x2 = graph.transpose(x1);
auto result = graph.matmul(b, x2, true);
float a_data[6] = {1.1f, 2.3f, 3.4f, 4.2f, 5.7f, 6.8f};
float b_data[12] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12};
graph.set_input(a, a_data, Precision::FP16);
graph.set_input(b, b_data, Precision::INT8);
graph.execute();
void* output_data = graph.get_output(result);
graph.hard_reset();
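For intuition, the deferred-execution style above can be sketched in plain C++. This is a toy illustration only; the real Cactus Graph adds precisions, zero-copy buffers, and SIMD kernels:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <map>
#include <vector>

// A toy deferred-execution graph: each node records an operation and the
// ids of its inputs; nothing is computed until execute() walks the nodes.
struct Tensor {
    std::vector<size_t> shape;
    std::vector<float> data;
};

class ToyGraph {
public:
    size_t input(std::vector<size_t> shape) {
        nodes_.push_back([shape](std::vector<Tensor>& t, size_t id) {
            t[id].shape = shape;  // data is supplied later via set_input()
        });
        return next_id_++;
    }
    size_t matmul(size_t a, size_t b) {
        nodes_.push_back([a, b](std::vector<Tensor>& t, size_t id) {
            size_t m = t[a].shape[0], k = t[a].shape[1], n = t[b].shape[1];
            t[id].shape = {m, n};
            t[id].data.assign(m * n, 0.0f);
            for (size_t i = 0; i < m; ++i)
                for (size_t j = 0; j < n; ++j)
                    for (size_t p = 0; p < k; ++p)
                        t[id].data[i * n + j] +=
                            t[a].data[i * k + p] * t[b].data[p * n + j];
        });
        return next_id_++;
    }
    size_t transpose(size_t a) {
        nodes_.push_back([a](std::vector<Tensor>& t, size_t id) {
            size_t r = t[a].shape[0], c = t[a].shape[1];
            t[id].shape = {c, r};
            t[id].data.resize(r * c);
            for (size_t i = 0; i < r; ++i)
                for (size_t j = 0; j < c; ++j)
                    t[id].data[j * r + i] = t[a].data[i * c + j];
        });
        return next_id_++;
    }
    void set_input(size_t id, std::vector<float> data) {
        pending_[id] = std::move(data);
    }
    void execute() {
        tensors_.assign(nodes_.size(), Tensor{});
        for (size_t id = 0; id < nodes_.size(); ++id) {
            nodes_[id](tensors_, id);
            if (pending_.count(id)) tensors_[id].data = pending_[id];
        }
    }
    const Tensor& output(size_t id) const { return tensors_[id]; }

private:
    size_t next_id_ = 0;
    std::vector<std::function<void(std::vector<Tensor>&, size_t)>> nodes_;
    std::vector<Tensor> tensors_;
    std::map<size_t, std::vector<float>> pending_;
};
```

Because node ids increase in creation order, walking them in order is already a topological traversal of the graph.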
Cactus Engine is a transformer inference engine built on top of Cactus Graph.
It is exposed through the Cactus Foreign Function Interface (FFI).
#include "cactus.h"
const char* model_path = "path/to/weight/folder";
cactus_model_t model = cactus_init(model_path, 2048);
const char* messages = R"([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "/nothink My name is Henry Ndubuaku"}
])";
const char* options = R"({
    "temperature": 0.1,
    "top_p": 0.95,
    "top_k": 20,
    "max_tokens": 50,
    "stop_sequences": [""]
})";
char response[1024];
int result = cactus_complete(model, messages, response, sizeof(response), options, nullptr, nullptr, nullptr);
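The options above are standard sampling controls: temperature rescales the logits, while top_k and top_p prune the candidate tokens before one is drawn. A minimal sketch of building such a distribution (illustrative only, not Cactus's actual sampler):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Turn raw logits into a sampling distribution using temperature,
// top-k, and top-p (nucleus) filtering.
std::vector<float> sampling_probs(std::vector<float> logits,
                                  float temperature, int top_k, float top_p) {
    // Temperature: divide logits before softmax; low values sharpen.
    for (float& l : logits) l /= temperature;

    // Numerically stable softmax.
    float max_l = *std::max_element(logits.begin(), logits.end());
    std::vector<float> probs(logits.size());
    float sum = 0.0f;
    for (size_t i = 0; i < logits.size(); ++i) {
        probs[i] = std::exp(logits[i] - max_l);
        sum += probs[i];
    }
    for (float& p : probs) p /= sum;

    // Rank tokens by probability, descending.
    std::vector<size_t> order(probs.size());
    for (size_t i = 0; i < order.size(); ++i) order[i] = i;
    std::sort(order.begin(), order.end(),
              [&](size_t x, size_t y) { return probs[x] > probs[y]; });

    // Keep at most top_k tokens, stopping once cumulative mass reaches top_p.
    std::vector<float> kept(probs.size(), 0.0f);
    float cum = 0.0f, kept_sum = 0.0f;
    for (int r = 0; r < (int)order.size() && r < top_k && cum < top_p; ++r) {
        cum += probs[order[r]];
        kept[order[r]] = probs[order[r]];
        kept_sum += probs[order[r]];
    }
    for (float& p : kept) p /= kept_sum;  // renormalize the survivors
    return kept;
}
```

With the options above (top_k = 20, top_p = 0.95, temperature = 0.1), the distribution collapses toward the most likely tokens, which suits deterministic assistant replies.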
With tool support:
const char* tools = R"([
    {
        "function": {
            "name": "get_weather",
            "description": "Get weather for a location",
            "parameters": {
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City name"
                    }
                },
                "required": ["location"]
            }
        }
    }
])";
int result = cactus_complete(model, messages, response, sizeof(response), options, tools, nullptr, nullptr);
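When the model emits a tool call, the host application is responsible for executing the function and returning the result. A minimal dispatch sketch follows; the ToolCall struct and handler signature here are hypothetical, not part of the Cactus API:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

// A parsed tool call as a host might represent it after reading the
// model's response (hypothetical type, single string argument).
struct ToolCall {
    std::string name;
    std::string location;
};

using ToolHandler = std::function<std::string(const ToolCall&)>;

// Look up the requested function by name and run it.
std::string dispatch(const std::map<std::string, ToolHandler>& handlers,
                     const ToolCall& call) {
    auto it = handlers.find(call.name);
    if (it == handlers.end()) return "error: unknown tool " + call.name;
    return it->second(call);
}
```

The handler's string result would then be appended to the message history as a tool message and fed back into cactus_complete.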
This makes it easy to write Cactus bindings for any language.
Header files are self-documenting, but documentation contributions are welcome.
Cactus SDKs already run 500k+ weekly inference tasks in production, so try them!
Since Apple Silicon is ARM-based, these examples also run directly on MacBooks.
The real performance gains show up on mobile devices, but for testing during development,
a vanilla M3 running CPU-only serves Qwen3-600m-INT8 at 60-70 toks/sec. To try it:
- Generate weights from HuggingFace model:
python3 tools/convert_hf.py Qwen/Qwen3-0.6B weights/qwen3-600m-i8/ --precision INT8
- Build and test:
./tests/run.sh # remember to chmod +x any script the first time
Coming soon:
- Support for Gemma, SmolVLM, Liquid, Kitten, Vosk, and more.
- SMMLA, NPU & DSP acceleration for high-end phones.
- INT4 support for 1B+ models.
- Python tools for porting Torch/JAX models to Cactus.
Preliminary results:
- Qwen3-4B-INT4 on iPhone 16 Pro NPU = 21 t/s
While Cactus supports all Apple devices, including MacBooks, for desktops and AMD/Intel/Nvidia hardware in general
please use HuggingFace, Llama.cpp, Ollama, vLLM, or MLX. They're built for those platforms, support x86, and are all great!