The Processor for Hyperscale Computing

We've designed a new processor built on advanced mathematics and an innovative hardware layout. The NDPU processes more information per transistor by compressing data before computing on it. Our early, unoptimized prototype already shows promising results.

Early prototype results: power reduction, speed increase, and memory reduction.

NDPU Details

Common Technical Questions

How does the NDPU reduce power consumption?
Information is automatically compressed into a holographic mathematical representation and then processed directly in that compressed form. Because the compressed data occupies less memory, less energy is spent moving and storing it, which lowers overall power draw.

How does compression make processing faster?
Because information is compressed before it is processed, fewer bits are needed to represent each value, so the chip's transistors can work on all of the data in parallel.

How does the NDPU help AI workloads scale?
Our new chip uses a type of math that is contextual, which means a new class of artificial intelligence can be supported without adding more and more GPUs as models grow.

How is data secured on the chip?
The chip uses two layers of security. First, the software layer randomly scrambles the data into an abstract string. Then, the hardware uses a physically unclonable function (PUF) to ensure the random scramble is post-quantum secure.
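The exact protocol isn't described, so the sketch below only models the layering: a software scramble (here, XOR with a nonce-derived keystream) followed by a device-unique key from a PUF (modeled with a per-device secret, since a real PUF derives its response from physical silicon variation). The function names and key-derivation choices are assumptions for illustration:

```python
import hashlib
import secrets

def software_scramble(data: bytes, nonce: bytes) -> bytes:
    """Layer 1: randomize the data into an abstract string.
    Modeled as XOR with a keystream derived from a fresh nonce."""
    stream = hashlib.shake_256(nonce).digest(len(data))
    return bytes(d ^ s for d, s in zip(data, stream))

def puf_response(challenge: bytes, device_secret: bytes) -> bytes:
    """Layer 2: a PUF maps a challenge to a device-unique response.
    A real PUF gets its uniqueness from silicon variation; here a
    secret that only this one chip would hold stands in for that."""
    return hashlib.sha3_256(device_secret + challenge).digest()

device_secret = secrets.token_bytes(32)  # stand-in for the silicon fingerprint
nonce = secrets.token_bytes(16)

scrambled = software_scramble(b"sensor frame", nonce)
key = puf_response(nonce, device_secret)
# The payload is now both scrambled and bound to this specific device.
```

Because the scramble is an XOR keystream in this model, applying it twice with the same nonce recovers the plaintext, and a different device secret yields a different PUF response.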

How is the chip physically organized?
Our floorplan places islands of logic in a sea of memory. The logic is designed for fixed-point geometric operations rather than the floating-point matrix-multiplication operations of GPUs. All information is compressed, binarized, and vectorized on chip.
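Fixed-point arithmetic replaces the floating-point unit with plain integer logic, which is one reason "logic islands" can stay small and dense. The Q-format below (8 fractional bits) is an arbitrary illustrative choice, not the NDPU's actual number format:

```python
FRAC_BITS = 8           # Q.8 fixed point: 8 fractional bits (illustrative)
SCALE = 1 << FRAC_BITS

def to_fixed(x: float) -> int:
    """Quantize a real value onto the integer fixed-point grid."""
    return round(x * SCALE)

def fixed_mul(a: int, b: int) -> int:
    return (a * b) >> FRAC_BITS  # integer multiply, then rescale

def fixed_dot(u, v):
    """Dot product done entirely with integer multiplies, shifts, and adds:
    no floating-point hardware required."""
    return sum(fixed_mul(a, b) for a, b in zip(u, v))

u = [to_fixed(x) for x in (0.5, 1.25, -0.75)]
v = [to_fixed(x) for x in (2.0, 0.5, 1.0)]
result = fixed_dot(u, v) / SCALE
# float reference: 0.5*2.0 + 1.25*0.5 + (-0.75)*1.0 = 0.875
```

Geometric primitives such as dot products, distances, and rotations all decompose into exactly these integer operations; here the result is exact because every operand is a multiple of 1/256.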

Who is the NDPU for?
Anyone processing large volumes of data who wants to do so without constantly adding more hardware and drawing more power will love the NDPU.