Optimizing data processing pipelines
The challenge in database processing today is that as data grows, and it will continue to grow, more hardware infrastructure is needed to store and process it. The technique known as data sharding splits large databases into smaller chunks that are farmed out to many different processors; the more data, the more processors are needed.

The NDPU changes the status quo in database processing by reducing the need for data sharding. First, our compression compiler converts information into a special form that abstracts the original data into an equivalent representation using fewer bits, so each processor handles more information per transistor. Second, our fast, hard-wired logic is configured to execute searches in nanoseconds. The results are:

1. Reduced hardware and power cost, because less hardware is needed and the chip is energy efficient compared to a CPU or GPU.
2. Faster database search and processing, due to our patented lightning throughput technology.
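To make the sharding baseline concrete, the following is a minimal sketch of hash-based data sharding, the conventional technique described above: records are mapped deterministically to smaller chunks that can each be handled by a separate processor. The shard count, key format, and hash scheme here are illustrative assumptions, not part of the NDPU's design.

```python
import hashlib

NUM_SHARDS = 4  # hypothetical cluster size

def shard_for(key: str, num_shards: int = NUM_SHARDS) -> int:
    """Deterministically map a record key to a shard index."""
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

def shard_records(records: dict, num_shards: int = NUM_SHARDS) -> list:
    """Split one key->value table into num_shards smaller tables."""
    shards = [dict() for _ in range(num_shards)]
    for key, value in records.items():
        shards[shard_for(key, num_shards)][key] = value
    return shards

records = {f"user:{i}": f"row-{i}" for i in range(1000)}
shards = shard_records(records)
# Every record lands in exactly one shard, so the shard sizes sum
# back to the original table size.
assert sum(len(s) for s in shards) == len(records)
```

Note that adding data under this scheme means adding shards (and therefore processors), which is exactly the scaling cost the NDPU aims to reduce.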