Compute more, with less
Resource efficient hardware for scalable and intelligent computing
Working with neuroscientists, mathematicians, and AGI researchers, we're here to innovate computer hardware.
As data volumes continue to grow, the tech industry is spending ever more on power and infrastructure. We're radically improving how compute scales.
Artificial intelligence is quickly progressing to the next level -- AGI -- and our new hardware is built for it.
Full Stack Pipelines
Our processors are co-designed from software to hardware with real workloads in mind, ensuring a perfect fit with your existing pipelines.
The Processor for Hyperscale Computing
Our new PCIe chip is designed to process more information per transistor, reducing cost and power consumption in cloud, database, blockchain, and AI applications.
- Lower Power Consumption
- Faster Compute Throughput
- Post-quantum Secure
Innovating on Multiple Fronts
We founded Simuli to create more powerful computers that can protect us from existential risks. I strongly believe our resource-efficient path is the correct path to more scalable and intelligent computing.
Custom hardware for AGI will do what Nvidia GPUs did for deep learning.
Tech infrastructure has reached an inflection point: as data sizes and complexity explode exponentially, costs explode too. Existing architectures simply can't scale to keep up. Something new and transformative is needed, like the NDPU.
More on the NDPU
The NDPU pairs a special combination of on-chip logic and memory with an auto-vectorizing compiler to compress information before computing on it. Our patented technology significantly reduces power and overall hardware costs.
Information is automatically compressed into a holographic math state and then processed in that compressed form. Because less memory is touched, less power is needed.
Since information is compressed before it is processed, every transistor can work in parallel on data represented with fewer bits.
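The general idea of computing directly on a compressed representation can be sketched in a few lines. This is our own toy illustration, not Simuli's actual encoding: a vector of +1/-1 values is packed one bit per element, and a dot product is then computed on the packed words with XOR and popcount instead of per-element multiplies.

```python
# Toy illustration of compute-on-compressed-data (not Simuli's actual scheme):
# binarize a vector to one bit per element, then take dot products on the
# packed words directly.

def pack_bits(signs):
    """Pack a list of +1/-1 values into one integer, one bit per element."""
    word = 0
    for i, s in enumerate(signs):
        if s > 0:
            word |= 1 << i
    return word

def binarized_dot(a_bits, b_bits, n):
    """Dot product of two {+1,-1} vectors stored as packed bits.
    XOR marks the mismatching positions; dot = matches - mismatches."""
    mismatches = bin((a_bits ^ b_bits) & ((1 << n) - 1)).count("1")
    return n - 2 * mismatches

a = [1, -1, 1, 1]
b = [1, 1, -1, 1]
print(binarized_dot(pack_bits(a), pack_bits(b), 4))  # -> 0 (two matches, two mismatches)
```

Each 64-bit word here holds 64 elements, so one XOR-and-popcount replaces 64 multiply-accumulates: fewer bits per value, and more useful work per operation.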
Our new chip uses a type of math that is contextual, which means a new type of artificial intelligence can be supported without adding more and more GPUs as models scale.
The chip uses two layers of security. First, the software layer randomly scrambles the data into an abstract string. Then, the hardware uses a physical unclonable function (PUF) to ensure the random scramble is post-quantum secure.
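The two-layer structure can be sketched in software, with the caveat that a real PUF is a physical circuit whose response comes from silicon manufacturing variation and cannot be cloned or read out. In this toy model (our own, not Simuli's design), a seeded permutation stands in for the software scramble, and an HMAC keyed by a device-unique secret stands in for the PUF.

```python
# Toy model of the two security layers. A real PUF derives its key from
# physical chip variation; the DEVICE_SECRET below is only a stand-in.
import hashlib
import hmac
import random

def scramble(data: bytes, seed: int) -> bytes:
    """Software layer: permute the bytes of the payload (toy scramble)."""
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)
    return bytes(data[i] for i in idx)

DEVICE_SECRET = b"silicon-fingerprint"  # stand-in for physical variation

def puf_response(challenge: bytes) -> bytes:
    """Hardware layer: device-unique challenge-response (modeled with HMAC)."""
    return hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()

payload = scramble(b"sensitive data", seed=42)
tag = puf_response(payload)  # only this device can reproduce this tag
```

The key property the sketch captures: the same challenge always yields the same response on one device, while a different device (different secret) yields a different one.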
Our floorplan places islands of logic in a sea of memory. The logic is designed for fixed-point geometric operations instead of the floating-point matrix multiplications of GPUs. All information is compressed, binarized, and vectorized on chip.
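Fixed-point arithmetic is what makes this logic so much cheaper than floating-point units: numbers are plain integers with an implied binary point, so a multiply is an integer multiply plus a shift. A minimal sketch in a Q16.16 format (the format choice is ours, for illustration only):

```python
# Minimal Q16.16 fixed-point sketch (illustrative format, not Simuli's design):
# a value x is stored as the integer round(x * 2**16).
FRAC_BITS = 16
ONE = 1 << FRAC_BITS  # 1.0 in Q16.16

def to_fixed(x: float) -> int:
    return int(round(x * ONE))

def fixed_mul(a: int, b: int) -> int:
    # The raw product of two Q16.16 values carries 32 fractional bits,
    # so shift right by 16 to renormalize back to Q16.16.
    return (a * b) >> FRAC_BITS

x = to_fixed(1.5)
y = to_fixed(2.25)
print(fixed_mul(x, y) / ONE)  # -> 3.375
```

Because everything is integer adds, multiplies, and shifts, the datapath needs no exponent handling, normalization, or rounding logic, which is exactly where floating-point hardware spends its area and power.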
Anyone processing large volumes of data who wants to do so without constantly adding hardware and power will love the NDPU.