Orel Ohayon

About

I'm Orel Ohayon. I build the infrastructure between raw LLMs and production AI.

After shipping two full AI products from scratch (a privacy-first growth engine for X.com and a secure local-first macOS AI assistant), I kept solving the same problems: personality consistency, safety enforcement, process containment, adversarial red-teaming. Each time I rebuilt the solution from scratch, incompatible with the last version.

The third time, I extracted everything into Laminae: a modular Rust SDK with 10 independent crates for the layer between raw LLM APIs and production-ready AI. Safety enforced in code, not in prompts.

What I believe

  • AI safety is an engineering problem, not a prompting problem. An LLM can't reason its way out of a syscall filter.
  • Infrastructure should be modular. Use what you need, ignore the rest. Unix philosophy applied to AI.
  • Open source at the core. The safety layer that protects your users should be inspectable, not a black box.
  • Build from real problems. Laminae wasn't designed in a vacuum. Every module was extracted from production code that ran into real walls.

Currently

Building commercial AI products on the Laminae stack. TypeScript bindings, hosted APIs, and new open-source tooling. Everything ships from orellius.ai.

Connect