Latest Papers

ASME Journal of Mechanisms and Robotics

  • An Improved Dual Quaternion Dynamic Movement Primitives-Based Algorithm for Robot-Agnostic Learning and Execution of Throwing Tasks
    on May 9, 2025 at 12:00 am

    Abstract: Inspired by human versatility, roboticists have long conceived of robots as flexible tools capable of performing a wide variety of tasks. Learning-from-demonstration methods let us “teach” robots the way we would perform tasks, in a versatile and adaptive manner. Dynamic movement primitives (DMPs) learn complex behaviors in this way, representing tasks as stable, well-understood dynamical systems. By modeling movements over the SE(3) group, the learned primitives generalize to any robotic manipulator capable of full 3D end-effector motion. In this article, we present a robot-agnostic formulation of discrete DMPs based on the dual quaternion algebra, oriented to modeling throwing movements. We adapt the initial and final poses and velocities, all computed from a projectile kinematic model and from the goal at which the projectile is aimed. Experimental demonstrations are carried out in both simulated and real environments. The results support the effectiveness of the improved formulation.

  • Chained Timoshenko Beam Constraint Model With Applications in Large Deflection Analysis of Compliant Mechanism
    on May 9, 2025 at 12:00 am

    Abstract: Accurately analyzing the large-deformation behavior of compliant mechanisms has always been a significant challenge in the design process. The classical Euler–Bernoulli beam theory serves as the primary theoretical basis for large-deformation analysis of compliant mechanisms, but neglecting shear effects can reduce modeling accuracy. Inspired by the beam constraint model, this study develops a Timoshenko beam constraint model (TBCM) for initially curved beams that captures intermediate-range deflections under beam-end loading conditions. On this basis, the chained Timoshenko beam constraint model (CTBCM) is proposed for large-deformation analysis and kinetostatic modeling of compliant mechanisms. The accuracy and feasibility of the proposed TBCM and CTBCM are validated through modeling and analysis of curved-beam mechanisms. Results indicate that the TBCM and CTBCM are more accurate than the Euler beam constraint model (EBCM) and the chained Euler beam constraint model (CEBCM). Additionally, the CTBCM offers computational advantages, requiring fewer discrete elements to achieve convergence.
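The dual-quaternion DMP formulation in the throwing-task abstract above operates on full SE(3) poses, which is beyond a short snippet; as an illustration of the underlying dynamical system only, here is a minimal one-dimensional discrete DMP sketch. All parameter values, the basis-width heuristic, and the function name are assumptions for the sketch, not the paper's formulation.

```python
import numpy as np

def dmp_rollout(y0, g, w, dt=0.001, tau=1.0,
                alpha=25.0, beta=25.0 / 4.0, alpha_x=4.0):
    """Integrate a 1-D discrete DMP:
        tau * z' = alpha * (beta * (g - y) - z) + f(x) * (g - y0)
        tau * y' = z
    with the canonical phase x decaying as tau * x' = -alpha_x * x."""
    n_basis = len(w)
    c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))  # basis centers in phase space
    h = n_basis ** 1.5 / c  # heuristic basis widths (an assumption, not from the paper)
    x, y, z = 1.0, y0, 0.0
    traj = []
    for _ in range(int(tau / dt)):
        psi = np.exp(-h * (x - c) ** 2)
        f = (psi @ w) * x / psi.sum()  # normalized, phase-gated forcing term
        z += dt / tau * (alpha * (beta * (g - y) - z) + f * (g - y0))
        y += dt / tau * z
        x += dt / tau * (-alpha_x * x)
        traj.append(y)
    return np.array(traj)

# Zero weights: no forcing, so the system simply converges to the goal g.
traj = dmp_rollout(y0=0.0, g=1.0, w=np.zeros(10))
```

With the weights set to zero the forcing term vanishes and the transformation system converges to the goal; in learning from demonstration, the weights would be fitted so that the forcing term reproduces a demonstrated trajectory.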
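The shear effect the compliant-mechanism abstract refers to is visible already in the textbook tip-deflection formulas for an end-loaded cantilever, where the Timoshenko model adds a shear term to the Euler–Bernoulli bending term. The sketch below uses illustrative dimensions and is not the paper's TBCM; it only shows why neglecting shear matters for short, stubby beams.

```python
def cantilever_tip_deflection(F, L, E, I, G, A, kappa=5.0 / 6.0):
    """Tip deflection of an end-loaded cantilever: the Euler-Bernoulli bending
    term F*L**3/(3*E*I) plus the Timoshenko shear correction F*L/(kappa*G*A).
    kappa = 5/6 is the shear coefficient for a rectangular cross-section."""
    bending = F * L ** 3 / (3.0 * E * I)
    shear = F * L / (kappa * G * A)
    return bending, bending + shear

# Illustrative short steel beam with a rectangular b x t cross-section.
E, nu = 210e9, 0.3
G = E / (2.0 * (1.0 + nu))
b, t = 0.02, 0.01
A, I = b * t, b * t ** 3 / 12.0
d_eb, d_timo = cantilever_tip_deflection(F=100.0, L=0.05, E=E, I=I, G=G, A=A)
shear_fraction = (d_timo - d_eb) / d_eb  # a few percent even at L/t = 5
```

The shear correction grows relative to the bending term as the beam gets shorter, which is why a Timoshenko-based constraint model can outperform an Euler-based one for low-slenderness flexures.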

Deep Reinforcement Learning-Based Control of Stewart Platform With Parametric Simulation in ROS and Gazebo

Abstract

The Stewart platform is a fully parallel robot, mechanically distinct from typical serial robotic manipulators, with applications ranging from flight and driving simulators to structural test platforms. This work concentrates on learning to control a complex model of the Stewart platform using state-of-the-art deep reinforcement learning (DRL) algorithms. To make the learning performance reliable and to provide a test bed that faithfully mimics the behavior of the system, we present a carefully designed simulation environment. We first build a parametric representation of the kinematics of the Stewart platform in Gazebo and the robot operating system (ROS) and integrate it with a Python class that conveniently generates the structures in simulation description format (SDF). To control the system, we then apply three DRL algorithms, the asynchronous advantage actor–critic (A3C), the deep deterministic policy gradient (DDPG), and the proximal policy optimization (PPO), to learn the gains of a proportional–integral–derivative (PID) controller for a given reaching task. These algorithms suit the problem because the Stewart platform has continuous state and action spaces and exact controller tuning is a crucial task. The simulation results show that the DRL algorithms successfully learn the controller gains, yielding satisfactory control performance.
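The paper tunes PID gains with A3C, DDPG, and PPO inside ROS and Gazebo, a stack far beyond a snippet. The core idea, though (treat the gain triple as the action, roll out a reaching task, and score it by tracking error), can be sketched with plain random search on a toy plant. Everything below, from the double-integrator plant to the search loop, is an illustrative stand-in for the paper's DRL setup, not its method.

```python
import numpy as np

def rollout_cost(gains, dt=0.01, steps=500, target=1.0):
    """Simulate a PID-controlled unit-mass double integrator (a toy stand-in
    for one actuated degree of freedom) and return the integrated |error|."""
    kp, ki, kd = gains
    pos, vel, integ, prev_err = 0.0, 0.0, 0.0, target
    cost = 0.0
    for _ in range(steps):
        err = target - pos
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv  # PID control law
        prev_err = err
        vel += u * dt  # unit-mass double-integrator dynamics
        pos += vel * dt
        cost += abs(err) * dt
    return cost

# Random search over the gain space: a crude stand-in for the DRL policy update.
rng = np.random.default_rng(0)
best_gains, best_cost = None, np.inf
for _ in range(200):
    gains = rng.uniform([0.1, 0.0, 0.0], [50.0, 5.0, 20.0])
    c = rollout_cost(gains)
    if c < best_cost:
        best_gains, best_cost = gains, c
```

A DRL agent replaces the blind sampling with a learned policy and value function, and the Gazebo simulation replaces the toy plant, but the reward-driven structure of the tuning loop is the same.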

