Abstract
Understanding the dynamics and functionality of the human brain and its relationship with different physical entities has proven extremely useful in many applications, including disability therapy and the design of next-generation user interfaces. Communication between the brain and external hardware using neural stimulation and recording has also been demonstrated recently. Such systems are usually analyzed by employing the brain–machine–body interface (BMBI) model. However, owing to the high complexity of human brain activity, modeling and analyzing neural signals is a resource-intensive task. Moreover, coupling neural signals from different physical entities inevitably leads to large input data sets, making the task both data- and compute-intensive. We therefore employ a spatiotemporal fractal parallel algorithm to efficiently generate and analyze BMBI models. Such an algorithm, however, can produce demanding on-chip traffic patterns that require an efficient communication infrastructure among the computing cores. To address this issue, we propose a machine-learning-inspired wireless network-on-chip (WiNoC)-based manycore architecture for handling the compute- and communication-intensive nature of BMBI applications. Experimental results show that, compared with a traditional wireline mesh NoC, the WiNoC achieves up to 55% savings in energy-delay product for a system size of 1024 cores.