Robots in recycling and disassembly
Bogue, Robert
2019 Industrial Robot: The International Journal of Robotics Research and Application
doi: 10.1108/ir-03-2019-0053
Purpose: This paper aims to illustrate the growing role robots are playing in recycling and product disassembly and to provide an insight into recent research activities.
Design/methodology/approach: Following a short introduction, the paper first considers robotic waste sorting systems and then describes two systems for the disassembly of electronic products. It then provides details of some recent research activities. Finally, brief conclusions are drawn.
Findings: Robotic systems exploiting artificial intelligence combined with various sensing and machine vision technologies are playing a growing role in the sorting of municipal and industrial waste prior to recycling. These are mostly based on delta robots and can achieve pick rates of 60-70 items/min and be configured to recognise and select a wide range of different materials and items from moving conveyors. Electronic waste recycling is yet to benefit significantly from robotics, although a limited number of systems have been developed for product disassembly. Disassembly techniques are the topic of a concerted research effort, which often involves robots and humans collaborating and sharing disassembly tasks.
Originality/value: This paper provides an insight into the present-day uses and potential future role of robots in recycling, which has traditionally been a highly labour-intensive industry.
The Pransky interview: Dr Howard Chizeck, founder, Olis Robotics; Professor, Electrical and Computer Engineering, University of Washington
Pransky, Joanne
2019 Industrial Robot: The International Journal of Robotics Research and Application
doi: 10.1108/ir-05-2019-0102
Purpose: The following paper is a “Q&A interview” conducted by Joanne Pransky of Industrial Robot Journal as a method to impart the combined technological, business and personal experience of a prominent robotic industry PhD and innovator regarding his pioneering efforts and his personal journey of bringing a technological invention to market.
Design/methodology/approach: The interviewee is Dr Howard Chizeck, Professor of Electrical and Computer Engineering and Adjunct Professor of Bioengineering at the University of Washington (UW). Professor Chizeck is a research testbed leader for the Center for Neurotechnology (a National Science Foundation Engineering Research Center) and also co-director of the UW BioRobotics Laboratory. In this interview, Chizeck shares the details on his latest startup, Olis Robotics.
Findings: Howard Jay Chizeck received his BS and MS degrees from Case Western Reserve University and the ScD degree in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology. He served as Chair of the Department of Systems, Control and Industrial Engineering at Case Western Reserve University and was also the Chair of the Electrical and Computer Engineering Department at the University of Washington. His telerobotic research includes haptic navigation and control for telerobotic devices, including robotic surgery and underwater systems. His neural engineering work involves the design and security of brain-machine interfaces and the development of devices to control symptoms of essential tremor and Parkinson’s disease.
Originality/value: Professor Chizeck was elected as a Fellow of the IEEE in 1999 “for contributions to the use of control system theory in biomedical engineering” and he was elected to the American Institute for Medical and Biological Engineering (AIMBE) College of Fellows in 2011 for “contributions to the use of control system theory in functional electrical stimulation assisted walking.” From 2008 to 2012, he was a member of the Science Technology Advisory Panel of the Johns Hopkins Applied Physics Laboratory. Professor Chizeck currently serves on the Visiting Committee of the Case School of Engineering (Case Western Reserve University). He is a founder and advisor of Controlsoft Inc (Ohio) and also is a founder and Chair of the Board of Directors of Olis Robotics, Inc., which was established in 2013 (under the name of BluHaptics) to commercialize haptic rendering, haptic navigation and other UW telerobotic technologies. He holds approximately 20 patents, and he has published more than 250 scholarly papers.
Complexity-based task allocation in human-robot collaborative assembly
Malik, Ali Ahmad; Bilberg, Arne
2019 Industrial Robot: An International Journal
doi: 10.1108/ir-11-2018-0231
Purpose: Over the past years, collaborative robots have been introduced as a new generation of industrial robots working alongside humans to share the workload. These robots have the potential to enable human–robot collaboration (HRC) for flexible automation. However, the deployment of these robots in industrial environments, particularly in assembly, still involves several challenges, one of which is skills-based task distribution between humans and robots. With ever-decreasing product life cycles and high-mix, low-volume production, skills-based task distribution is set to become a frequent activity. This paper aims to present a methodology for task distribution between human and robot in assembly work through complexity-based task classification.
Design/methodology/approach: The assessment method for assembly tasks is based on the physical features of the components and the associated task descriptions. The attributes that can influence assembly complexity for automation are presented. Physical experimentation with a collaborative robot and work on several industrial cases helped to formulate the presented method.
Findings: The method differentiates tasks with higher complexity of handling, mounting, human safety and part feeding from low-complexity tasks, thereby simplifying collaborative automation in an HRC scenario. Such a structured method for task distribution in HRC can significantly reduce deployment and changeover times.
Originality/value: Assembly attributes affecting HRC automation are identified. A methodology is presented for evaluating tasks for assignment to the robot and creating a workload balance to form a human–robot work team. Finally, an assessment tool is provided for simplified industrial deployment.
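A complexity-based classification of this kind could be sketched as a weighted scoring over the assembly attributes the abstract names (handling, mounting, safety, part feeding). The attribute weights, ratings and threshold below are invented for illustration; they are not the authors' actual scoring model.

```python
# Hypothetical sketch of complexity-based human/robot task allocation.
# Weights, ratings (0..1) and the threshold are illustrative assumptions.

ATTRIBUTE_WEIGHTS = {
    "handling": 0.3,      # difficulty of grasping/handling the component
    "mounting": 0.3,      # difficulty of the insertion/fastening motion
    "safety": 0.2,        # human-safety risk if automated next to a person
    "part_feeding": 0.2,  # difficulty of presenting the part to the robot
}

def complexity_score(task: dict) -> float:
    """Weighted sum of attribute ratings, each rated in [0, 1]."""
    return sum(ATTRIBUTE_WEIGHTS[k] * task[k] for k in ATTRIBUTE_WEIGHTS)

def allocate(tasks: list, threshold: float = 0.5) -> dict:
    """Assign low-complexity tasks to the robot, the rest to the human."""
    plan = {"robot": [], "human": []}
    for t in tasks:
        worker = "robot" if complexity_score(t) < threshold else "human"
        plan[worker].append(t["name"])
    return plan

tasks = [
    {"name": "place_base", "handling": 0.2, "mounting": 0.1, "safety": 0.1, "part_feeding": 0.2},
    {"name": "route_cable", "handling": 0.9, "mounting": 0.8, "safety": 0.4, "part_feeding": 0.7},
]
print(allocate(tasks))  # the low-complexity placement goes to the robot
```

A practical tool would let engineers tune the weights per plant; the threshold then becomes the lever that trades changeover time against automation coverage.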
Robust adaptive super twisting controller: methodology and application of a human-driven knee joint orthosis
Bkekri, Rihab; Benamor, Anouar; Alouane, Mohamed Amine; Fried, Georges; Messaoud, Hassani
2019 Industrial Robot: An International Journal
doi: 10.1108/ir-09-2018-0198
Purpose: The application of sliding mode control faces two obstacles: chattering and high control activity. This paper concerns a novel super-twisting adaptive sliding mode control law for a human-driven knee joint orthosis. The proposed approach uses dynamically adapted control gains that ensure the establishment, in finite time, of a real second-order sliding mode. The efficiency of the controller is evaluated on an experimental set-up.
Design/methodology/approach: This study presents the synthesis of a robust super-twisting adaptive controller for the control of a lower limb–orthosis system. The control strategy takes into account the nonlinearities as well as the uncertainties arising from the dynamics of the lower limb–orthosis system, and must also guarantee good tracking of the reference trajectory.
Findings: The performance of the controller was first evaluated on an able-bodied subject and compared against several criteria. The results show that the controller using the adaptive super-twisting algorithm guarantees the best performance. Validation tests involved a subject and included robustness tests against external disturbances and co-contractions of antagonistic muscles.
Originality/value: The main contribution of this paper is the development of an adaptive super-twisting methodology for finding the control gain that minimises the chattering effect.
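The super-twisting structure the abstract refers to can be sketched on a scalar sliding variable. The plant (s' = u + d) and the gain-adaptation rule below (grow the gain while the sliding variable is outside a small boundary layer, let it decay inside) are simplified illustrations, not the exact adaptation law developed in the paper.

```python
import math

# Minimal sketch of an adaptive super-twisting controller driving a scalar
# sliding variable s to zero under a bounded matched disturbance.
# Plant model and adaptation law are illustrative assumptions.

def simulate(T=4.0, dt=1e-3):
    s, v = 1.0, 0.0           # sliding variable and integral (twisting) term
    k1, eps = 2.0, 0.01       # adaptive gain and boundary-layer width
    for i in range(int(T / dt)):
        t = i * dt
        k2 = 2.0 * k1                                # second gain tied to k1
        u = -k1 * math.sqrt(abs(s)) * math.copysign(1.0, s) + v
        v += -k2 * math.copysign(1.0, s) * dt        # discontinuous integral term
        d = 0.5 * math.sin(2.0 * t)                  # bounded disturbance
        s += (u + d) * dt                            # plant: s' = u + d
        k1 += (5.0 if abs(s) > eps else -1.0) * dt   # crude gain adaptation
        k1 = max(k1, 1.0)                            # keep a positive floor
    return abs(s)

print(simulate())  # residual |s| after 4 s, small despite the disturbance
```

The point of the adaptation is visible in the last two lines of the loop: the gain only stays large while it is needed, which is what limits chattering relative to a fixed-gain design.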
An approach for learning from robots using formal languages and automata
Aslan, Muhammet Fatih; Durdu, Akif; Sabancı, Kadir; Erdogan, Kemal
2019 Industrial Robot: An International Journal
doi: 10.1108/ir-11-2018-0240
Purpose: In this study, human activity with a finite and specific ordering is modeled with a finite state machine, and an application for human–robot interaction is realized. A robot arm was designed that makes specific movements. The purpose of this paper is to create a language associated with a complex task, which is then used by the robot, which knows the language, to teach individuals.
Design/methodology/approach: Although the complex task is known by the robot, it is not known by the human. When the application starts, the robot continuously checks the specific task performed by the human. To carry out this check, the human hand is tracked using image processing techniques and a particle filter (PF) based on the Bayesian tracking method. To determine the complex task performed by the human, the task is divided into a series of sub-tasks. To identify the sequence of sub-tasks, a push-down automaton that uses a context-free grammar language structure is developed. Depending on the correctness of the sequence of sub-tasks performed by the human, the robot produces different outputs.
Findings: The application was carried out with 15 individuals. In total, 11 of the 15 individuals completed the complex task correctly by following the different outputs.
Originality/value: This type of study is suitable for applications to improve human intelligence and to enable people to learn quickly. In addition, the risky tasks of a person working on a production or assembly line can be monitored by robots through such applications.
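A push-down automaton over a task grammar can be sketched very compactly. The grammar below (every 'grasp' must be matched by a later 'release', with 'move' allowed anywhere, like balanced parentheses) is a made-up stand-in for the paper's actual task language.

```python
# Illustrative pushdown automaton that checks whether an observed sequence of
# sub-tasks is a valid sentence of a small context-free "task language".
# The sub-task alphabet and grammar are invented for illustration.

def pda_accepts(subtasks):
    stack = []
    for st in subtasks:
        if st == "grasp":
            stack.append(st)          # push: an object is now held
        elif st == "release":
            if not stack:
                return False          # releasing with nothing held
            stack.pop()
        elif st != "move":
            return False              # unknown sub-task symbol
    return not stack                  # accept only with an empty stack

print(pda_accepts(["grasp", "move", "release"]))   # True: valid sequence
print(pda_accepts(["grasp", "grasp", "release"]))  # False: one unmatched grasp
```

Because the stack counts unmatched nestings, this check cannot be expressed by the finite state machine alone, which is why a context-free grammar is the natural fit for such tasks.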
Research on the forcefree control of cooperative robots based on dynamic parameters identification
Xiao, Juliang; Zeng, Fan; Zhang, Qiulong; Liu, Haitao
2019 Industrial Robot: An International Journal
doi: 10.1108/ir-01-2019-0007
Purpose: This paper aims to propose a forcefree control algorithm based on a dynamic model with full torque compensation, to improve the compliance and flexibility of the direct teaching of cooperative robots.
Design/methodology/approach: Dynamic parameter identification is performed first to obtain an accurate dynamic model. The identification process is divided into two steps to reduce the complexity of trajectory simplification, and each step contains two excitation trajectories for higher identification precision. A nonlinear friction model that considers the angular displacement and angular velocity of the joints is proposed as a secondary compensation for identification. A torque compensation algorithm based on the Hogan impedance model is proposed, in which the torque obtained from the impedance equation is regarded as the command torque, which can be adjusted. The compensatory torque, including gravity, inertia, friction and Coriolis torques, is added to the compensation to improve the effect of forcefree control.
Findings: The compensation improves the overall accuracy of the dynamic model by approximately 20%. Compared with the traditional method, the results prove that the forcefree control algorithm can reduce the drag force by approximately 50% for direct teaching and realize flexible, smooth dragging.
Practical implications: The entire algorithm is verified on a laboratory-developed six degrees-of-freedom cooperative robot, and it can be applied to other robots as well.
Originality/value: Full torque compensation is performed after parameter identification, guaranteeing more accurate forcefree control. This allows the cooperative robot to be dragged more smoothly without external sensors.
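The command-torque composition described above can be sketched for a single joint: an impedance-model torque plus gravity, inertia, friction and Coriolis compensation. The model terms and numbers below are invented placeholders; a real controller would evaluate the robot's identified dynamic model here.

```python
import math

# Sketch of forcefree-control torque composition for one joint.
# All physical parameters are illustrative, not identified values.

def impedance_torque(q, qd, qdd, M=0.1, B=0.5, K=0.0):
    """Hogan-style impedance: tau = M*qdd + B*qd + K*q (K = 0 for free dragging)."""
    return M * qdd + B * qd + K * q

def command_torque(q, qd, qdd, gravity, inertia, friction, coriolis):
    """Adjustable impedance torque plus full dynamic compensation."""
    return (impedance_torque(q, qd, qdd)
            + gravity(q) + inertia(q, qdd) + friction(qd) + coriolis(q, qd))

# Toy single-joint model terms (illustrative numbers only):
gravity = lambda q: 9.81 * 0.2 * math.cos(q)                   # 0.2 kg*m link
inertia = lambda q, qdd: 0.05 * qdd
friction = lambda qd: 0.3 * math.copysign(1.0, qd) + 0.1 * qd  # Coulomb + viscous
coriolis = lambda q, qd: 0.0                                   # negligible for one joint

tau = command_torque(0.0, 0.5, 0.0, gravity, inertia, friction, coriolis)
print(tau)  # impedance 0.25 + gravity 1.962 + friction 0.35 = 2.562
```

The drag-force reduction comes from the compensation terms: the motor supplies gravity and friction itself, so the operator only works against the (tunable) impedance term.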
Collision detection method for industrial robot based on envelope-like lines
Zhang, Tie; Hong, JingDong
2019 Industrial Robot: An International Journal
doi: 10.1108/ir-12-2018-0261
Purpose: Successful sensorless collision detection by a robot depends on the accuracy with which the external force/torque can be estimated. Compared with collaborative robots, industrial robots often have larger dynamic-model parameter values and larger parameter-identification errors. In addition, the friction inside a reducer affects the accuracy of external force estimation. The purpose of this paper is to propose a collision detection method for industrial robots that requires no additional equipment, such as sensors, and enables highly sensitive collision detection while guaranteeing a zero false alarm rate.
Design/methodology/approach: The error in the calculated torque for a robot in stable motion is analyzed, and a typical torque error curve is presented. The variational characteristics of the joint torque error during a collision are analyzed, and collisions are classified into two types: hard and soft. A pair of envelope-like lines with an effect similar to that of true envelope lines is designed. By using these envelope-like lines, some components of the torque calculation error can be eliminated, and the sensitivity of collision detection can be improved.
Findings: The proposed collision detection method based on envelope-like lines can detect hard and soft collisions during the motion of industrial robots. In repeated experiments without collisions, the false alarm rate was 0 per cent, and in repeated experiments with collisions, the rate of successful detection was 100 per cent. Compared with a collision detection method based on symmetric thresholds, the proposed method has a smaller detection delay and the same detection sensitivity for both joint rotation directions.
Originality/value: A collision detection method for industrial robots based on envelope-like lines is proposed. The method does not require additional equipment or complex algorithms, and highly sensitive collision detection can be achieved with zero false alarms. It is low in cost, highly practical and can be widely used in applications involving industrial robots.
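The idea of envelope-like lines can be sketched as bounds that follow the torque error's typical profile rather than a fixed symmetric threshold. The reference profile, margin and signals below are invented numbers, not the paper's data.

```python
# Illustrative sketch of collision detection with a pair of "envelope-like
# lines": upper and lower bounds track the nominal torque-error profile with
# a margin, so a collision is flagged as soon as the error leaves the band.
# Reference profile and margin are made-up values.

def detect_collision(error, reference, margin=0.2):
    """Return the first sample index where error exits the envelope, or None."""
    for i, (e, r) in enumerate(zip(error, reference)):
        upper, lower = r + margin, r - margin   # the envelope-like lines
        if e > upper or e < lower:
            return i
    return None

# Torque error from a collision-free run vs a run with a bump at sample 2:
reference = [0.00, 0.05, 0.10, 0.05, 0.00, -0.05]
no_collision = [0.02, 0.07, 0.08, 0.03, -0.01, -0.04]
with_collision = [0.02, 0.07, 0.60, 0.03, -0.01, -0.04]

print(detect_collision(no_collision, reference))    # None: no false alarm
print(detect_collision(with_collision, reference))  # 2: collision flagged
```

Because the band follows the error profile, the margin can be much tighter than a symmetric threshold that must cover the error's worst-case swing, which is what buys the smaller detection delay.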
Impedance control of collaborative robots based on joint torque servo with active disturbance rejection
Ren, Tianyu; Dong, Yunfei; Wu, Dan; Chen, Ken
2019 Industrial Robot: An International Journal
doi: 10.1108/ir-06-2018-0130
Purpose: The purpose of this paper is to present a simple yet effective force control scheme for collaborative robots by addressing two sources of disturbance in joint torque: inherent actuator flexibility and nonlinear friction.
Design/methodology/approach: A joint torque controller with an extended state observer is used to decouple the joint actuators from the multi-rigid-body system of a constrained robot and to compensate the motor friction. Moreover, to realize robot force control, the authors embed this controller into an impedance control framework.
Findings: Simulation and experimental results show that the proposed joint torque controller with an extended state observer can effectively estimate and compensate the total disturbance. The overall control framework is analytically proved to be stable and is further validated in experiments on a robot testbed.
Practical implications: With the proposed force controller, the robot is able to change its stiffness in real time and can therefore take on varied tasks without accessories such as an RCC or a 6-DOF F/T sensor. In addition, programming by demonstration can be realized easily within the proposed framework, which makes the robot accessible to non-expert users.
Originality/value: The main contribution of this work is the design of a model-free robot force controller able to reject torque disturbances from the robot-actuator coupling effect and motor friction, applicable in both constrained and unconstrained environments. Simulation and experiment results from a 7-DOF robot show the effectiveness and robustness of the proposed controller.
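The extended state observer at the heart of this scheme can be sketched for a single joint: the lumped "total disturbance" (friction plus coupling torque) is modeled as an extra state and estimated from position measurements alone. The double-integrator plant, bandwidth and gains below are illustrative choices, not the paper's robot model.

```python
# Minimal sketch of a linear extended state observer (ESO) for one joint.
# Plant, input and all numerical values are illustrative assumptions.

def run_eso(T=3.0, dt=1e-3, d_true=0.8, b0=1.0):
    x1, x2 = 0.0, 0.0            # true joint position and velocity
    z1, z2, z3 = 0.0, 0.0, 0.0   # observer states; z3 estimates the disturbance
    w = 20.0                     # observer bandwidth
    l1, l2, l3 = 3 * w, 3 * w * w, w ** 3   # bandwidth-parameterized gains
    u = 0.0                      # hold the input at zero; just observe
    for _ in range(int(T / dt)):
        x1 += x2 * dt                    # plant: x1' = x2
        x2 += (b0 * u + d_true) * dt     # plant: x2' = b0*u + d
        e = x1 - z1                      # position estimation error
        z1 += (z2 + l1 * e) * dt
        z2 += (z3 + b0 * u + l2 * e) * dt
        z3 += l3 * e * dt                # extended state tracks the disturbance
    return z3

print(run_eso())  # settles near the true constant disturbance of 0.8
```

Once z3 tracks the total disturbance, the torque controller can simply subtract it from the command, which is what makes the outer impedance loop effectively model-free.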
Improving stability in physical human–robot interaction by estimating human hand stiffness and a vibration index
Bian, Feifei; Ren, Danmei; Li, Ruifeng; Liang, Peidong
2019 Industrial Robot: An International Journal
doi: 10.1108/ir-05-2018-0111
Purpose: The purpose of this paper is to eliminate the instability that may occur when a human stiffens his or her arms in physical human–robot interaction, by estimating the human hand stiffness and presenting a modified vibration index.
Design/methodology/approach: Human hand stiffness is first estimated in real time as a prior indicator of instability by capturing the arm configuration and modeling the level of muscle co-contraction in the human’s arms. A time-domain vibration index based on the interaction force is then modified to reduce the delay in instability detection. Instability is confirmed when the vibration index exceeds a given threshold, and the virtual damping coefficient in the admittance controller is adjusted accordingly to ensure stable physical human–robot interaction.
Findings: By estimating the human hand stiffness and modifying the vibration index, the instability that may occur in a stiff environment in physical human–robot interaction is detected and eliminated, and the time delay is reduced. The experimental results demonstrate a significant improvement in stabilizing the system when the human operator stiffens his or her arms.
Originality/value: The originality lies in estimating the human hand stiffness online as a prior indicator of instability by capturing the arm configuration and modeling the level of muscle co-contraction in the human’s arms. The modification of the vibration index to reduce the time delay of instability detection is also original.
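The detect-then-damp loop can be sketched with a simple time-domain index. The index used here (mean absolute sample-to-sample change of the interaction force over a sliding window) and the threshold are simplified stand-ins for the paper's modified vibration index.

```python
# Illustrative sketch: a time-domain vibration index on the interaction force
# and the corresponding admittance-damping adjustment. Index definition,
# threshold and damping values are invented for illustration.

def vibration_index(force, window=5):
    recent = force[-window:]
    diffs = [abs(b - a) for a, b in zip(recent, recent[1:])]
    return sum(diffs) / len(diffs)

def adjust_damping(force, d_nominal=10.0, d_stiff=40.0, threshold=1.0):
    """Raise the virtual damping when the vibration index signals instability."""
    return d_stiff if vibration_index(force) > threshold else d_nominal

smooth = [5.0, 5.1, 5.0, 5.2, 5.1, 5.0]         # steady interaction force
oscillating = [5.0, 8.0, 2.0, 8.5, 1.5, 9.0]    # growing oscillation

print(adjust_damping(smooth))       # 10.0: keep nominal damping
print(adjust_damping(oscillating))  # 40.0: instability detected, add damping
```

The stiffness estimate plays the complementary role of a prior indicator: it can pre-emptively raise damping before the force signal ever starts to oscillate, which is where the delay reduction comes from.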
Adaptive motion planning framework by learning from demonstration
Li, Xiao; Cheng, Hongtai; Liang, Xiaoxiao
2019 Industrial Robot: An International Journal
doi: 10.1108/ir-10-2018-0216
Purpose: Learning from demonstration (LfD) provides an intuitive way for non-experts to teach robots new skills. However, the learned motion is typically fixed for a given scenario, which poses a serious adaptability problem for robots operating in unstructured environments, such as avoiding an obstacle that was not present during the original demonstrations. The robot should therefore be able to learn and execute new behaviors to accommodate the changing environment. To achieve this goal, this paper aims to propose an improved LfD method enhanced by an adaptive motion planning technique.
Design/methodology/approach: The LfD is based on the GMM/GMR method, which transforms the original off-line demonstrations into a compressed probabilistic model and recovers robot motion from the distributions. The central idea of this paper is to reshape the probabilistic model according to on-line observation, which is realized through re-sampling, data partition, data reorganization and motion re-planning. The re-planned motions are not unique, so a criterion is proposed to evaluate the fitness of each motion and to optimize among the candidates.
Findings: The proposed method is implemented in a robotic rope-disentangling task. The results show that the robot is able to complete its task while avoiding randomly distributed obstacles, thereby verifying the effectiveness of the proposed method. The main contributions of the proposed method are avoiding unforeseen obstacles in the unstructured environment while maintaining the crucial aspects of the motion that guarantee the skill/task is accomplished successfully.
Originality/value: Traditional methods are intrinsically based on motion planning techniques and treat the off-line training data as a prior probability. This paper proposes a novel data-driven solution to achieve motion planning for LfD: when the environment changes, the off-line training data are revised according to external constraints and reorganized to generate new motion. Compared to traditional methods, this data-driven solution is concise and efficient.
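The core GMM/GMR operation, conditioning a joint Gaussian mixture over (time, position) on time to reproduce a trajectory, can be sketched in a few lines. The two-component mixture parameters below are invented, not fitted to real demonstrations.

```python
import math

# Toy sketch of GMR motion recovery from a GMM over (time, position).
# Component parameters are illustrative assumptions.

components = [
    # (prior, mean_t, mean_x, var_t, cov_tx, var_x)
    (0.5, 0.25, 0.0, 0.02, 0.01, 0.02),
    (0.5, 0.75, 1.0, 0.02, 0.01, 0.02),
]

def gmr(t):
    """Conditional mean E[x | t] under the joint GMM (standard GMR formula)."""
    weights, cond_means = [], []
    for prior, mt, mx, vt, ctx, vx in components:
        # responsibility of each component at time t
        w = prior * math.exp(-0.5 * (t - mt) ** 2 / vt) / math.sqrt(2 * math.pi * vt)
        weights.append(w)
        # conditional mean of x given t for a joint Gaussian
        cond_means.append(mx + (ctx / vt) * (t - mt))
    total = sum(weights)
    return sum(w * m for w, m in zip(weights, cond_means)) / total

print(gmr(0.25))  # near 0.0, the first component's position mean
print(gmr(0.75))  # near 1.0, the second component's position mean
```

The paper's adaptive step would act one level up: when an obstacle is observed on-line, the component data are re-sampled and reorganized before GMR regenerates the motion, rather than re-planning in configuration space directly.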