Latest Papers

ASME Journal of Mechanisms and Robotics

  • Mechanical Characterization of Supernumerary Robotic Tails for Human Balance Augmentation
    on August 31, 2023 at 12:00 am

    Abstract: Humans are intrinsically unstable in quiet stance from a rigid-body-system viewpoint; nevertheless, they maintain balance thanks to neuromuscular sensory control. With balance-related incidents increasing each year across industrial and ageing populations worldwide, the development of assistive mechanisms to augment human balance is paramount. This work investigates the mechanical characteristics of kinematically dissimilar one- and two-degrees-of-freedom (DoF) supernumerary robotic tails for balance augmentation. Through dynamic simulations and manipulability assessments, the importance of variable coupling inertia in creating a sufficient reaction torque is highlighted. Two-DoF tails with solely revolute joints are shown to be best suited to the balance augmentation problem. Within the two-DoF options, the characteristics of open- versus closed-loop tails are investigated, with the final design selection requiring trade-offs among environmental workspace, biomechanical factors, and ease of manufacture.
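    The reaction-torque idea behind the abstract can be illustrated with a minimal sketch. The model below is an assumption for illustration only, not the paper's: it treats a 1-DoF revolute tail as a point mass on a massless link, so the torque the tail exerts back on the wearer's trunk follows from Newton's third law.

    ```python
    def tail_reaction_torque(mass_kg, link_length_m, angular_accel_rad_s2):
        """Reaction torque on the wearer's trunk from accelerating the tail:
        tau = -I * alpha, with I = m * l**2 for a point-mass tail
        (a simplification; the paper studies variable coupling inertia)."""
        inertia = mass_kg * link_length_m ** 2
        return -inertia * angular_accel_rad_s2

    # Example: a 3 kg tail on a 0.6 m link, decelerated at 20 rad/s^2,
    # produces a restoring torque of about 21.6 N*m on the trunk.
    print(tail_reaction_torque(3.0, 0.6, -20.0))
    ```

    Even this toy model shows why coupling inertia matters: the usable reaction torque scales with the tail's inertia about its joint, so a heavier or longer tail buys torque at the cost of the biomechanical and workspace factors the abstract mentions.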

  • Deep Reinforcement Learning-Based Control of Stewart Platform With Parametric Simulation in ROS and Gazebo

    Abstract: The Stewart platform is a fully parallel robot, mechanically distinct from typical serial manipulators, with applications ranging from flight and driving simulators to structural test platforms. This work concentrates on learning to control a complex model of the Stewart platform using state-of-the-art deep reinforcement learning (DRL) algorithms. To enhance the reliability of the learning process and to provide a test bed that fully mimics the behavior of the system, a precisely designed simulation environment is presented. We first build a parametric representation of the Stewart platform's kinematics in Gazebo and the Robot Operating System (ROS), integrated with a Python class that conveniently generates the structures in Simulation Description Format (SDF). To control the system, we then apply three DRL algorithms, namely the asynchronous advantage actor–critic (A3C), the deep deterministic policy gradient (DDPG), and the proximal policy optimization (PPO), to learn the gains of a proportional-integral-derivative (PID) controller for a given reaching task. These algorithms were chosen because the Stewart platform's continuous action and state spaces make them well suited to the problem, where exact controller tuning is crucial. The simulation results show that the DRL algorithms successfully learn the controller gains, yielding satisfactory control performance.
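    The structure of "learn PID gains for a reaching task" can be sketched in a few lines. This is a minimal sketch, not the paper's method: the paper trains A3C/DDPG/PPO against a Gazebo/ROS Stewart-platform simulation, whereas here a toy 1-DoF double-integrator plant and plain random search stand in for both, purely to show the cost-driven gain-tuning loop.

    ```python
    import random

    def episode_cost(kp, ki, kd, target=1.0, dt=0.01, steps=500):
        """Run one reaching episode with a PID position controller on a
        unit-mass double integrator; return accumulated |error| (lower is better)."""
        pos, vel, integ, prev_err = 0.0, 0.0, 0.0, target
        cost = 0.0
        for _ in range(steps):
            err = target - pos
            integ += err * dt
            deriv = (err - prev_err) / dt
            force = kp * err + ki * integ + kd * deriv
            prev_err = err
            vel += force * dt   # unit mass: acceleration = force
            pos += vel * dt
            cost += abs(err) * dt
        return cost

    def tune_gains(trials=200, seed=0):
        """Random search over (kp, ki, kd) as a placeholder for a DRL policy:
        sample gains, score an episode, keep the best."""
        rng = random.Random(seed)
        best = (float("inf"), None)
        for _ in range(trials):
            gains = (rng.uniform(0, 50), rng.uniform(0, 5), rng.uniform(0, 20))
            best = min(best, (episode_cost(*gains), gains))
        return best

    cost, gains = tune_gains()
    ```

    A DRL agent replaces the random sampler with a learned policy over the same gain space, and the per-episode cost becomes a (negated) reward, but the evaluate-and-improve loop is the same.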

