This paper presents Work-In-Progress research on control and communication methods for a robotic arm integrated with a physical bi-directional tree-like neural network. The ultimate objective is to emulate the functionality of a human arm and nervous system; however, the current focus is on developing a generic neural network framework. Key design objectives include enabling the system to self-adjust instructions and to autonomously learn its geometry and range of motion after modification, ideally including the ability to self-modify. The control architecture uses the most proximal neuron for direct arm manipulation, while subsequent neurons handle sensor data processing, actuator control, and reflexive responses.
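The sketch below is a minimal illustration, not the paper's implementation, of how such a bi-directional tree could be organized: commands propagate from the proximal (root) neuron toward distal neurons, while sensor data flows back up; all class and method names here are hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class Neuron:
    """One node in a hypothetical bi-directional tree-like network."""
    name: str
    children: list["Neuron"] = field(default_factory=list)

    def send_command(self, command: dict) -> None:
        """Propagate a command from the proximal neuron toward distal neurons."""
        self.actuate(command)
        for child in self.children:
            child.send_command(command)

    def report_sensors(self) -> dict:
        """Aggregate sensor readings from distal neurons back toward the root."""
        readings = {self.name: self.read_local_sensor()}
        for child in self.children:
            readings.update(child.report_sensors())
        return readings

    def actuate(self, command: dict) -> None:
        # Placeholder: a real node would drive its local actuator and could
        # apply reflexive limits (e.g., stop on over-current) before forwarding.
        pass

    def read_local_sensor(self) -> float:
        # Placeholder sensor value.
        return 0.0


# Example tree: proximal (root) -> shoulder -> elbow -> wrist
root = Neuron("proximal", [Neuron("shoulder", [Neuron("elbow", [Neuron("wrist")])])])
root.send_command({"target": "reach"})
print(root.report_sensors())
```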
Various communication methods were tested, including traditional networking protocols such as the IEC 61850 standard and onion routing, as well as more novel approaches: I/O manipulation using mathematical formulations, a machine-learning neural network realized on the physical neurons, and embedding a neural-network transformer inside each neuron. These methods were assessed on speed, scalability, and their ability to adapt instructions to avoid collisions. Performance metrics from robotic-arm testing will be used to determine the optimal communication strategy.
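As a rough illustration of how such an assessment could be set up, the following sketch places the candidate methods behind a common interface and measures round-trip latency as the number of neuron hops grows; the interface, the stand-in method, and the payload are all assumptions for illustration, not the paper's actual test harness.

```python
import time
from abc import ABC, abstractmethod


class CommMethod(ABC):
    @abstractmethod
    def send(self, payload: bytes, hops: int) -> bytes:
        """Route a payload through `hops` neurons and return the response."""


class LoopbackMethod(CommMethod):
    """Stand-in for any tested protocol (IEC 61850, onion routing, etc.)."""

    def send(self, payload: bytes, hops: int) -> bytes:
        # Simulate per-hop forwarding cost.
        for _ in range(hops):
            payload = bytes(payload)
        return payload


def benchmark(method: CommMethod, hops: int, trials: int = 100) -> float:
    """Average round-trip time in seconds for one message across `hops` neurons."""
    start = time.perf_counter()
    for _ in range(trials):
        method.send(b"joint_state", hops)
    return (time.perf_counter() - start) / trials


# Speed vs. scalability: latency as the neuron chain grows.
for n in (4, 16, 64):
    print(n, benchmark(LoopbackMethod(), hops=n))
```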
For the control methods, both direct control (e.g., specifying joint angles) and indirect control (e.g., coordinate commands for object manipulation) will be tested. The system incorporates safety mechanisms whereby newly attached limbs, or limbs near recent collision events, have reduced confidence in their movement ranges, thereby limiting joint movement speeds. Confidence levels are dynamically updated as limb movements stabilize. A pain-signal mechanism also indicates to the controller that movement should be stopped or reduced when limbs are removed. While the arm is idle, it slowly scans its surroundings to determine its maximum operational range. Overall, this work aims to establish effective control and communication protocols that enhance the robotic arm's adaptability and resilience, paving the way toward more sophisticated human-like motor and sensory integration.
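A minimal sketch of such confidence-based speed limiting is shown below; the specific update rule and constants are hypothetical assumptions, intended only to illustrate the idea that a newly attached or recently collided joint moves slowly until confidence grows, and that a pain signal drops confidence back down.

```python
class JointConfidence:
    """Hypothetical confidence tracker that scales a joint's allowed speed."""

    def __init__(self, max_speed: float):
        self.max_speed = max_speed     # hardware speed limit (rad/s)
        self.confidence = 0.1          # low confidence for a freshly attached limb

    def allowed_speed(self) -> float:
        """Scale the hardware limit by the current confidence in the range of motion."""
        return self.max_speed * self.confidence

    def on_stable_motion(self) -> None:
        """Movement completed without incident: grow confidence toward 1.0."""
        self.confidence = min(1.0, self.confidence + 0.05)

    def on_pain_signal(self, severity: float) -> None:
        """Collision or limb removal: cut confidence so motion stops or slows."""
        self.confidence = max(0.0, self.confidence * (1.0 - severity))


joint = JointConfidence(max_speed=2.0)
print(joint.allowed_speed())   # slow while confidence is low
for _ in range(10):
    joint.on_stable_motion()
print(joint.allowed_speed())   # faster once movements have stabilized
joint.on_pain_signal(severity=0.9)
print(joint.allowed_speed())   # throttled again after a pain event
```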