Enhancing AI and Machine Learning
via FPGA hardware

Why do we use Field Programmable Gate Arrays (FPGAs) for processing AI in our products?

A fundamental requirement of our products is the capability to process real-time RF spectrum data at the edge with low latency. Processing RF spectrum data with low latency poses many inherent challenges, for which FPGAs provide size-, weight- and power-efficient solutions in the following ways:

1. Compute Power delivered through flexibility:

DroneShield’s AI algorithms rely heavily on dense fixed-point matrix multiplications, which map well to hardware platforms such as GPUs and FPGAs, since the compute power required to obtain classifications with low latency is on the order of tera-operations per second (TOPS).
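To make that workload concrete, here is a minimal NumPy sketch of the kind of dense fixed-point (int8) matrix multiplication a neural-network layer reduces to, along with a count of the operations involved. The shapes, scale factor and helper names are illustrative assumptions, not DroneShield's actual kernel:

```python
import numpy as np

def quantize(x, scale=127.0):
    """Map float values in [-1, 1] to int8 (illustrative scheme)."""
    return np.clip(np.round(x * scale), -128, 127).astype(np.int8)

def fixed_point_matmul(a_q, b_q):
    """int8 x int8 multiply with int32 accumulation: the core ML kernel."""
    return a_q.astype(np.int32) @ b_q.astype(np.int32)

rng = np.random.default_rng(0)
a = quantize(rng.uniform(-1, 1, (64, 128)))   # e.g. activations
b = quantize(rng.uniform(-1, 1, (128, 32)))   # e.g. layer weights

c = fixed_point_matmul(a, b)

# One multiply + one add per element of each inner product:
ops = 2 * a.shape[0] * a.shape[1] * b.shape[1]
print(c.shape, ops)  # (64, 32) 524288
```

Even this small layer needs roughly half a million operations per invocation; scaling to real models and real-time RF data is what pushes the total into the TOPS range.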

FPGAs prove to be an ideal target platform for our Machine Learning/Artificial Intelligence workloads because:

  • Machine Learning processing is a very compute-intensive task. In simple terms, the more complex the problem, the more TOPS are required. FPGAs can deliver on that requirement thanks to their inherent massive parallelism, which DroneShield’s custom FPGA design exploits. This means we are able to significantly increase the speed of our algorithms by parallelizing the computations on FPGAs.

  • As alluded to in the first point, FPGAs are a reprogrammable hardware technology. In simple terms, the connections between logic elements on the FPGA chip are (re)programmable at run time by software. This allows DroneShield to develop custom hardware, with very short development cycles, tailored specifically to the requirements of the AI algorithm being implemented.

  • Our AI algorithm is a custom algorithm that exploits complex, irregular parallelism such as network sparsity (pruning) on custom data types. These kinds of workloads are very difficult for general-purpose GPUs and CPUs to handle, but are a great fit for FPGAs due to their extreme customizability.
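The sparsity point can be illustrated with a simple magnitude-pruning sketch. The function name, matrix size and 90% sparsity level below are hypothetical, not DroneShield's design; the point is that pruned weights are zeros a custom datapath can skip, while a dense GPU kernel still multiplies them:

```python
import numpy as np

def prune(weights, sparsity=0.9):
    """Zero out the smallest |weights| so only (1 - sparsity) of them remain."""
    k = int(weights.size * sparsity)
    thresh = np.sort(np.abs(weights), axis=None)[k]
    return np.where(np.abs(weights) < thresh, 0.0, weights)

rng = np.random.default_rng(1)
w = rng.normal(size=(256, 256))
w_sparse = prune(w, sparsity=0.9)

# A dense engine performs every multiply-accumulate (MAC) regardless;
# sparsity-aware custom hardware only performs the nonzero ones.
dense_macs = w.size
effective_macs = np.count_nonzero(w_sparse)
print(dense_macs, effective_macs)
```

At 90% sparsity only about a tenth of the MACs carry information, but the surviving nonzeros land at irregular positions, which is exactly the access pattern that favours custom FPGA logic over fixed GPU/CPU pipelines.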

2. SWaP benefits

FPGAs offer this impressive compute capability and flexibility with very low size, weight and power penalties. A direct competitor of the FPGA in terms of tera-operations per second, or speed of classification, is the Graphics Processing Unit (GPU). Compared with a GPU or CPU, the FPGA has immense benefits in the following general areas:

  • Size, Weight and Power: GPUs often require a mechanical stack, custom cooling and a dedicated AC power supply. Our FPGA solutions can be deployed on battery power in hand-held products such as the RfPatrol.

  • Performance: Because of the custom FPGA design for our specific AI application, the system’s drone-detection performance outclasses a GPU-based system in almost every metric.

3. Power Efficiency

GOPS/W, or giga-operations per second per watt, is a commonly used metric for the power efficiency of AI workloads. DroneShield’s FPGA platform is over 2.3x more efficient in raw performance per watt when compared to common GPUs or CPUs.
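A back-of-the-envelope sketch of how such a comparison works. The throughput and power figures below are illustrative placeholders, not measured DroneShield or vendor numbers; only the 2.3x ratio comes from the text above:

```python
def gops_per_watt(gops, watts):
    """Power efficiency: sustained giga-operations per second per watt."""
    return gops / watts

# Hypothetical example devices chosen so the ratio matches the 2.3x claim:
fpga = gops_per_watt(gops=460.0, watts=10.0)    # e.g. an edge FPGA card
gpu = gops_per_watt(gops=2000.0, watts=100.0)   # e.g. a discrete GPU

print(round(fpga / gpu, 2))  # 2.3
```

Note that the GPU delivers more raw GOPS here, yet the FPGA wins on the metric that matters for a battery-powered edge device: operations per joule.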

4. Latency

Our FPGA-based AI architecture currently provides a latency of 90 microseconds per classification, in a hand-held, battery-powered edge device.
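To put that figure in perspective, a simple conversion from per-classification latency to sequential throughput (using only the 90 microsecond number stated above):

```python
# Latency-to-throughput conversion for a fully sequential pipeline.
latency_s = 90e-6                                # 90 microseconds per classification
classifications_per_second = 1.0 / latency_s

print(round(classifications_per_second))  # 11111
```

Even with no overlap between classifications, 90 microseconds of latency corresponds to over 11,000 classifications per second, comfortably keeping pace with a real-time RF spectrum stream.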

5. Accuracy

Despite being a fraction of the size of its GPU counterpart, our FPGA-based AI solution is over 97% accurate, as verified in a 1:1 test across millions of data samples.