|
MoAT1 |
Room606 |
SLAM I |
Regular Session |
Chair: Ogata, Tetsuya | Waseda Univ. |
Co-Chair: Prestes, Edson | UFRGS |
|
11:00-11:15, Paper MoAT1.1 | |
>Systematic Floor Coverage of Unknown Environments Using Rectangular Regions and Localization Certainty |
Goel, Dhiraj | iRobot Corp. |
Case, James Philip | Coll. of Computing, Georgia Inst. of Tech. |
Tamino, Daniele | iRobot |
Gutmann, Jens-Steffen | Evolution Robotics Inc. |
Munich, Mario Enrique | Evolution Robotics |
Dooley, Mike | iRobot |
Pirjanian, Paolo | Evolution Robotics |
Keywords: Motion and Trajectory Generation, Mapping, Domestic Robots and Home Automation
Abstract: We address the problem of systematically covering all accessible space of an unknown environment with a mobile robot. Our approach uses rectangular regions that are swept over the environment. The robot covers each region using the classic boustrophedon pattern and by planning paths to uncovered areas within the region while keeping track of its position uncertainty. The region is then moved sideways to cover the next part of the environment until all accessible space has been visited. In a second stage the robot revisits the perimeter around obstacles. We compare our method to five off-line methods, including the distance transformation by Zelinsky et al. [1], in a standard test environment as well as in multi-bedroom homes. The presented method can be employed in our Mint cleaning robot for autonomously sweeping and mopping floors.
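As an illustration of the boustrophedon coverage pattern mentioned above, the following sketch generates back-and-forth sweep waypoints for one rectangular region; the region bounds and sweep spacing are assumed parameters, and this is not the authors' implementation.

```python
def boustrophedon_waypoints(x_min, x_max, y_min, y_max, spacing):
    """Generate back-and-forth sweep waypoints covering a rectangle.

    The robot sweeps along x, stepping sideways in y by `spacing`
    (typically the tool width) and alternating direction on each lap.
    """
    waypoints = []
    y = y_min
    left_to_right = True
    while y <= y_max:
        if left_to_right:
            waypoints += [(x_min, y), (x_max, y)]
        else:
            waypoints += [(x_max, y), (x_min, y)]
        left_to_right = not left_to_right
        y += spacing
    return waypoints

# Example: a 2 m x 1 m region swept with a 0.25 m wide tool.
print(boustrophedon_waypoints(0.0, 2.0, 0.0, 1.0, 0.25))
```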
|
|
11:15-11:30, Paper MoAT1.2 | |
>Support-Theoretic Subgraph Preconditioners for Large-Scale SLAM |
Jian, Yong-Dian | Georgia Inst. of Tech. |
Balcan, Doru C. | Georgia Inst. of Tech. |
Panageas, Ioannis | Georgia Inst. of Tech. |
Tetali, Prasad | Georgia Inst. of Tech. |
Dellaert, Frank | Georgia Inst. of Tech. |
Keywords: SLAM
Abstract: Efficiently solving large-scale sparse linear systems is important for robot mapping and navigation. Recently, the subgraph-preconditioned conjugate gradient method has been proposed to combine the advantages of two reigning paradigms, direct and iterative methods, to improve the efficiency of the solver. Yet the question of how to pick a good subgraph is still an open problem. In this paper, we propose a new metric to measure the quality of a spanning tree preconditioner based on support theory. We use this metric to develop an algorithm to find good subgraph preconditioners and apply them to solve the SLAM problem. The results show that although the proposed algorithm is not fast enough, the new metric is effective, and the resulting subgraph preconditioners significantly improve the efficiency of the state-of-the-art solver.
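For readers unfamiliar with the solver structure, here is a textbook preconditioned conjugate gradient loop in which `M_solve` stands for applying the inverse of a subgraph (e.g. spanning-tree) preconditioner; it is a generic sketch, not the authors' implementation.

```python
import numpy as np

def pcg(A, b, M_solve, x0=None, tol=1e-8, max_iter=1000):
    """Preconditioned conjugate gradients for A x = b (A symmetric positive definite).

    `M_solve(r)` applies the preconditioner inverse, e.g. a back-substitution
    on the factorized subgraph (spanning-tree) system.
    """
    x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float).copy()
    r = b - A @ x
    z = M_solve(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```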
|
|
11:30-11:45, Paper MoAT1.3 | |
>A Back-End L1 Norm Based Solution for Factor Graph SLAM |
Casafranca, Juan Jose | Univ. de Zaragoza |
Paz, Lina María | Univ. of Zaragoza |
Pinies, Pedro | Univ. de Zaragoza |
Keywords: SLAM, Mapping, Localization
Abstract: Graphical models together with nonlinear optimization have become the most popular approaches for solving SLAM and Bundle Adjustment problems: using a nonlinear least squares (NLSQ) description of the problem, these mathematical tools serve to formalize the minimization of an error cost function that relates state variables through relative sensor observations. The simplest case considers only the sensor/robot poses in the environment as state variables, giving rise to a pose graph subproblem. In general, the cost function is based on the L_2 norm, whose principal iterative solutions exploit the sparse connectivity of the corresponding Gaussian Markov Random Field (GMRF) or the Factor Graph, whose adjacency matrices are given by the fill-in of the Hessian and the Jacobian of the cost function, respectively. In this paper we propose a novel solution based on the L_1 norm as a back-end to the pose graph subproblem. In contrast to other NLSQ approaches, we formulate an iterative algorithm inspired directly by the Factor Graph structure to solve for the linearized residual |Ax-b|_1. Although the results obtained when spurious measurements are present are similar to those of the robust Huber norm, our main interest in L_1 optimization is that it opens the door to more robust non-convex L_p norms with p < 1. Since our approach depends on the minimization of a non-differentiable function, we provide the theoretical insights needed to solve for the L_1 norm. Our optimization is based on a primal-dual formulation successfully applied to variational convex problems in computer vision. We show the effectiveness of the L_1 norm in producing both a robust initial seed and a final optimized solution on challenging and well-known datasets widely used in other state-of-the-art works.
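A minimal sketch of a primal-dual (Chambolle-Pock-style) iteration for the linearized problem min_x |Ax - b|_1 mentioned above, assuming a dense numpy matrix for illustration; the authors' factor-graph-structured solver is not reproduced here.

```python
import numpy as np

def l1_residual_min(A, b, n_iter=500):
    """Minimize ||A x - b||_1 with a Chambolle-Pock primal-dual iteration."""
    m, n = A.shape
    L = np.linalg.norm(A, 2)          # operator norm of A
    sigma = tau = 1.0 / L             # step sizes satisfying sigma * tau * L**2 <= 1
    x = np.zeros(n)
    x_bar = x.copy()
    y = np.zeros(m)                   # dual variable, constrained to [-1, 1]
    for _ in range(n_iter):
        # dual ascent step, then projection onto the L_inf unit ball
        y = np.clip(y + sigma * (A @ x_bar - b), -1.0, 1.0)
        # primal descent step and over-relaxation
        x_new = x - tau * (A.T @ y)
        x_bar = 2 * x_new - x
        x = x_new
    return x
```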
|
|
11:45-12:00, Paper MoAT1.4 | |
>Linear SLAM: A Linear Solution to the Feature-based and Pose Graph SLAM based on Submap Joining |
Zhao, Liang | Peking Univ. / Univ. of Tech. Sydney |
Huang, Shoudong | Univ. of Tech. Sydney |
Dissanayake, Gamini | Univ. of Tech. Sydney |
Keywords: SLAM, Mapping, Localization
Abstract: This paper presents a strategy for large-scale SLAM through solving a sequence of linear least squares problems. The algorithm is based on submap joining, where submaps are built using any existing SLAM technique. It is demonstrated that if the submap coordinate frames are judiciously selected, the least squares objective function for joining two submaps becomes a quadratic function of the state vector. Therefore, a linear solution to large-scale SLAM that requires joining a number of local submaps, either sequentially or in a more efficient Divide and Conquer manner, can be obtained. The proposed Linear SLAM technique is applicable to both feature-based and pose graph SLAM, in two and three dimensions, and does not require any assumption on the character of the covariance matrices or an initial guess of the state vector. Although this algorithm is an approximation to the optimal full nonlinear least squares SLAM, simulations and experiments using publicly available datasets in 2D and 3D show that Linear SLAM produces results that are very close to the best solutions that can be obtained using full nonlinear optimization started from an accurate initial value. The C/C++ and MATLAB source codes for the proposed algorithm are available on OpenSLAM.
|
|
12:00-12:15, Paper MoAT1.5 | |
>Segmented DP-SLAM |
Maffei, Renan | Univ. Federal do Rio Grande do Sul |
Jorge, Vitor | Univ. Federal do Rio Grande do Sul |
Kolberg, Mariana | UFRGS |
Prestes, Edson | UFRGS |
Keywords: SLAM
Abstract: Simultaneous Localization and Mapping (SLAM) is one of the most difficult tasks in mobile robotics. While the construction of consistent and coherent local solutions is simple, SLAM remains a critical problem as the distance travelled by the robot increases. To circumvent this limitation, many strategies divide the environment into small regions and formulate the SLAM problem as a combination of multiple precise submaps. In this paper, we propose a new submap-based particle filter algorithm called Segmented DP-SLAM, which combines an optimized data structure to store the maps of the particles with a probabilistic map of segments representing hypotheses of submap topologies. We evaluate our method through experimental results obtained in simulated and real environments.
|
|
12:15-12:30, Paper MoAT1.6 | |
>Towards a Reliable SLAM Back-End |
Hu, Gibson | Univ. |
Khosoussi, Kasra | Centre for Autonomous Systems, Univ. of Tech. Sydney |
Huang, Shoudong | Univ. of Tech. Sydney |
Keywords: SLAM
Abstract: In state-of-the-art approaches to SLAM, the problem is often formulated as a nonlinear least squares problem. SLAM back-ends often employ iterative methods such as Gauss-Newton or Levenberg-Marquardt to solve it. In general, there is no guarantee on the global convergence of these methods. The back-end might get trapped in a local minimum or even diverge depending on how good the initial estimate is. Due to the large noise in odometry data, it is not wise to rely on dead reckoning for obtaining an initial guess, especially on long trajectories. In this paper we demonstrate how M-estimation can be used as a bootstrapping technique to obtain a reliable initial guess. We show that this initial guess is more likely to be in the basin of attraction of the global minimum than those of existing bootstrapping methods. As the main contribution of this paper, we present new insights into the similarities between robustness against outliers and robustness against a bad initial guess. Through simulations and experiments on real data, we substantiate the reliability of our proposed method.
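As a generic illustration of M-estimation on a linear(ized) problem, the sketch below runs Huber-weighted iteratively reweighted least squares; it is not the paper's bootstrapping pipeline, and the threshold `delta` is an assumed tuning parameter.

```python
import numpy as np

def huber_irls(A, b, delta=1.0, n_iter=20):
    """Robust solve of A x ≈ b via iteratively reweighted least squares
    with Huber weights (a common way to realize M-estimation)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]            # ordinary LS start
    for _ in range(n_iter):
        r = A @ x - b
        w = np.ones_like(r)
        large = np.abs(r) > delta
        w[large] = delta / np.abs(r[large])              # down-weight large residuals
        sw = np.sqrt(w)
        x = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)[0]
    return x
```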
|
|
MoAT2 |
Room607 |
Visual Servo I |
Regular Session |
Chair: Pradalier, Cedric | GeorgiaTech Lorraine |
Co-Chair: Wang, Hesheng | Shanghai Jiao Tong Univ. |
|
11:00-11:15, Paper MoAT2.1 | |
> >High Speed/Accuracy Visual Servoing Based on Virtual Visual Servo with Stereo Cameras |
Nammoto, Takashi | Tohoku Univ. |
Hashimoto, Koichi | Tohoku Univ. |
Kagami, Shingo | Tohoku Univ. |
Kosuge, Kazuhiro | Tohoku Univ. |
Attachments: Video Attachment
Keywords: Visual Servoing, Visual Tracking, Industrial Robots
Abstract: This paper presents a high-speed and high-accuracy visual servoing system. The algorithm addresses three major issues that arise in practical applications: first, pose estimation under rough calibration; second, real-time implementation with non-real-time image processing hardware; and third, accommodation of an industrial position controller. To resolve these issues, position-based visual servoing (PBVS) is adopted and appearance model-based virtual visual servoing (VVS) is applied for pose estimation. The VVS approach does not compute stereo matching but directly compares the OpenGL-rendered image and the camera image for each camera, estimates the position/orientation using VVS independently for each camera, and provides a theoretically optimal compromise among those estimates under inaccurate camera calibration. To enhance estimation accuracy, a hybrid method combining stereo trigonometry for position estimation and weighted least squares for orientation estimation is proposed using the information from the stereo cameras. Operation speed is enhanced by graphics processing unit (GPU) acceleration and an on-line trajectory generator which can accommodate the variable cycle of the visual servoing and the fixed cycle of a common robot controller. Finally, experimental results illustrate the effectiveness of the proposed framework.
|
|
11:15-11:30, Paper MoAT2.2 | |
> >Aircraft Collision Avoidance Using Spherical Visual Predictive Control and Single Point Features |
Mcfadyen, Aaron Douglas | Queensland Univ. of Tech. |
Mejias, Luis | Queensland Univ. of Tech. |
Corke, Peter | QUT |
Pradalier, Cedric | GeorgiaTech Lorraine |
Attachments: Video Attachment
Keywords: Visual Servoing, Collision Detection and Avoidance, Aerial Robotics
Abstract: This paper presents practical vision-based collision avoidance for objects approximating a single point feature. Using a spherical camera model, a visual predictive control scheme guides the aircraft around the object along a conical spiral trajectory. Visibility, state and control constraints are considered explicitly in the controller design by combining image and vehicle dynamics in the process model, and solving the nonlinear optimization problem over the resulting state space. Importantly, range is not required. Instead, the principles of conical spiral motion are used to design an objective function that simultaneously guides the aircraft along the avoidance trajectory, whilst providing an indication of the appropriate point to stop the spiral behaviour. Our approach is aimed at providing a potential solution to the See and Avoid problem for unmanned aircraft and is demonstrated through a series of experimental results using a small quadrotor platform.
|
|
11:30-11:45, Paper MoAT2.3 | |
> >Visual Servo Control of Cable-Driven Soft Robotic Manipulator |
Wang, Hesheng | Shanghai Jiao Tong Univ. |
Chen, Weidong | Shanghai Jiao Tong Univ. |
Wang, Xiaozhou | Shanghai Chest Hospital |
Pfeifer, Rolf | Univ. of Zurich |
Attachments: Video Attachment
Keywords: Visual Servoing
Abstract: Aiming at enhancing dexterous and safe operation in unstructured environments, a cable-driven soft robotic manipulator is designed in this paper. Due to the soft material it is made of and the nearly infinite degrees of freedom it possesses, the soft robotic manipulator offers higher safety and dexterity than a traditional rigid-link manipulator, which makes it suitable for performing tasks in complex environments that are narrow, confined and unstructured. Although the soft robotic manipulator possesses the advantages above, achieving precise position control with it is not easy. To solve this problem, a kinematic model based on the piecewise constant curvature hypothesis is proposed. By building up three spaces and two mappings, the relationship between the length variables of the 4 cables and the position and orientation of the soft robotic manipulator's end-effector is obtained. Afterwards, a depth-independent image Jacobian matrix is introduced and an image-based visual servo controller is presented. Using an adaptive algorithm, the controller estimates the unknown position of the feature point online, and Lyapunov theory is used to prove the stability of the proposed controller. Finally, experiments are conducted to demonstrate the rationality and validity of the kinematic model and the adaptive visual servo controller.
|
|
11:45-12:00, Paper MoAT2.4 | |
>Uncalibrated 3D Stereo Image-Based Dynamic Visual Servoing for Robot Manipulators |
Cai, Caixia | Tech. Univ. München, Germany |
Dean-Leon, Emmanuel | fortiss An-Inst. der Tech. Univ. Muenchen |
Mendoza Gallegos, Darío | Unidad Profesional Interdisciplinaria en Ingeniería y Tecnologías |
Somani, Nikhil | TUM |
Knoll, Alois C. | TU Munich |
Keywords: Visual Servoing
Abstract: This paper introduces a new comprehensive solution for the open problem of uncalibrated 3D image-based stereo visual servoing for robot manipulators. One of the main contributions of this article is a novel 3D stereo camera model to map positions in the task space to positions in a new 3D Visual Cartesian Space (a visual feature space where 3D positions are measured in pixels). This model is used to compute a full-rank Image Jacobian Matrix (Jimg), which solves several common problems present in classical image Jacobians, e.g., image space singularities and local minima. This Jacobian is fundamental for the image-based control design, where uncalibrated stereo camera systems can be used to drive a robot manipulator. Furthermore, an adaptive second-order sliding mode visual servo control is designed to track 3D visual motions using the 3D trajectory errors defined in the Visual Cartesian Space. The stability of the control in closed loop with a dynamic robot system is formally analyzed and proved, where exponential convergence of errors in the Visual Cartesian Space and task space without local minima is demonstrated. The complete control system is evaluated both in simulation and on a real industrial robot. The robustness of the control scheme is evaluated for cases where the extrinsic parameters of the stereo camera system change on-line and the kinematic/dynamic robot parameters are considered unknown. This approach offers a proper solution for the common problem of visual occlusion, since the stereo system can be moved to obtain a clear view of the task at any time.
|
|
12:00-12:15, Paper MoAT2.5 | |
>Decoupled Direct Visual Servoing |
Silveira, Geraldo | CTI |
Mirisola, Luiz Gustavo | Univ. of Coimbra |
Morin, Pascal | UPMC |
Keywords: Visual Servoing, Visual Tracking, Computer Vision
Abstract: This article addresses the problem of direct vision-based robot control where the equilibrium state is defined via a reference image. Direct methods refer to intensity-based nonmetric techniques to perform that stabilization. Intensity-based strategies provide for higher accuracy, whereas not requiring any metric information improves their versatility. However, existing direct techniques either have a coupled error dynamics, or are designed for planar objects only. This paper proposes a new direct technique that decouples the translational motion from the rotational one for the general case of both planar and nonplanar targets under general translational and rotational displacements. Furthermore, for the important case of a fronto-parallel planar object, the proposed technique leads to a fully diagonal interaction matrix. The equilibrium state is made locally exponentially stable for all those cases. These improvements are theoretically proven and experimentally demonstrated using a 6-DoF robotic arm.
|
|
12:15-12:30, Paper MoAT2.6 | |
> >Robotic Visual Servoing of Moving Targets |
Shahriari, Navid | Univ. of Rome "La Sapienza" |
Fantasia, Silvia | Univ. di Roma "La Sapienza" |
Flacco, Fabrizio | Univ. di Roma "La Sapienza" |
Oriolo, Giuseppe | Sapienza Univ. of Rome |
Attachments: Video Attachment
Keywords: Visual Servoing, Visual Tracking
Abstract: We present a new image-based visual servoing scheme for tracking moving targets. This is achieved with a twofold approach. First, we devise a straightforward adaptation of a previously proposed depth observer to account for the fact that the target is not stationary. Second, we estimate the disturbance on the visual feature dynamics due to the target motion, and we add a related compensation term to the visual controller. In particular, the target velocity components parallel to the image plane are reconstructed using a disturbance observer, whereas the orthogonal component is retrieved from the measurement of the Focus Of Expansion. Comparative experiments show that the proposed method can improve over classical visual servoing schemes by 50% or more.
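The compensation idea can be illustrated with the classical point-feature IBVS law plus a feedforward term for the estimated feature drift caused by target motion. The interaction-matrix form below is the textbook one and the drift estimate `de_dt` is assumed to come from a disturbance observer, so this is a sketch rather than the authors' controller.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classical interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, de_dt, lam=0.5):
    """Camera twist from an IBVS law with feedforward target-motion compensation.

    features, desired : (N, 2) current and desired normalized point coordinates
    depths            : length-N depth estimates (e.g. from a depth observer)
    de_dt             : length-2N estimated feature drift caused by target motion
    """
    features = np.asarray(features, dtype=float)
    desired = np.asarray(desired, dtype=float)
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (features - desired).reshape(-1)
    L_pinv = np.linalg.pinv(L)
    # feedback term drives the error to zero; feedforward term cancels target motion
    return -lam * L_pinv @ e - L_pinv @ np.asarray(de_dt, dtype=float)
```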
|
|
MoAT3 |
Room703 |
Human Skills and Control |
Regular Session |
Chair: Suh, Il Hong | Hanyang Univ. |
Co-Chair: Kamezaki, Mitsuhiro | Waseda Univ. |
|
11:00-11:15, Paper MoAT3.1 | |
>Inferring Categories to Accelerate the Learning of New Classes |
Goeddel, Robert | Univ. of Michigan |
Olson, Edwin | Univ. of Michigan |
Keywords: Recognition, Learning and Adaptive Systems
Abstract: On-the-fly learning systems are necessary for the deployment of general purpose robots. New training examples for such systems are often supplied by mentor interactions. Due to the cost of acquiring such examples, it is desirable to reduce the number of necessary interactions. Transfer learning has been shown to improve classification results for classes with small numbers of training examples by pooling knowledge from related classes. Standard practice in these works is to assume that the relationship between the transfer target and related classes is already known. In this work, we explore how previously learned categories, or related groupings of classes, can be used to transfer knowledge to novel classes without explicitly known relationships to them. We demonstrate an algorithm for determining the category membership of a novel class, focusing on the difficult case when few training examples are available. We show that classifiers trained via this method outperform classifiers optimized to learn the novel class individually when evaluated on both synthetic and real-world datasets.
|
|
11:15-11:30, Paper MoAT3.2 | |
>Vascular Load Reduction Control Based on Operator's Skill for Catheter Insertion |
Fudaba, Yudai | Panasonic Corp. |
Tsusaka, Yuko | Panasonic Corp. |
Ozawa, Jun | Panasonic |
Keywords: Human and humanoid skills/cognition/interaction, Medical Robots and Systems, Contact Modelling
Abstract: This paper proposes a vascular load reduction control method that addresses the load at the contact points between the catheter and blood vessels during catheter insertion. We estimate the load at the contact points extracorporeally and perform control with a robot arm to reduce the estimated load. We aim to reduce the vascular load through the extracting and vibrating actions actually applied by operators. In order to confirm the effectiveness of the proposed method, we conduct an evaluation experiment where a wire is inserted into a tube that simulates a blood vessel. As a result of the experiment, we were able to estimate the force at the contact points with an estimation error of 0.0531 N. Moreover, through vibration control we were able to reduce the load to below 0.1 N in places where there was an overload of more than 0.5 N. For vibration control, the experiment also enabled us to derive an effective parameter adjustment method to remove obstructions.
|
|
11:30-11:45, Paper MoAT3.3 | |
>Identification of a Piecewise Controller of Lateral Human Standing Based on Returning Recursive-Least-Square Method |
Murai, Nobuyuki | Osaka Univ. |
Kaneta, Daishi | Osaka Univ. |
Sugihara, Tomomichi | Graduate School of Engineering, Osaka Univ. |
Keywords: Human and humanoid skills/cognition/interaction, Humanoid Robots, Dynamics
Abstract: This paper proposes an identification technique for a human standing controller. The human dynamics are approximated by the macroscopic relationship between the center of mass (COM) and the zero-moment point (ZMP). The standing controller is modelled as a piecewise-linear feedback, which was originally developed for humanoid robots. In previous work, the authors found a qualitative similarity of the model to actual human behavior observed in a phase space, and the next challenge was to identify the controller from those data. A difficulty is that the observed dynamics form a piecewise system due to the unilaterality of reaction forces, so the identification is not straightforward. It is not trivial how to detect the switching point in each motion locus and how to find the trust region of the supposed model. The recursive least-squares (RLS) method, which can provide the deviation of the identified parameters and thereby the reliability of the results, helps to estimate the trust region with a returning computation process. Through the identification, the validity of the proposed method was verified. Further study on the applicability of the COM-ZMP model and the piecewise-linear controller for the analysis of human standing control is also reported.
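For reference, a standard recursive least-squares update with a forgetting factor is sketched below; the returning computation process described in the paper is not reproduced, and the initial covariance scale `p0` is an assumed parameter.

```python
import numpy as np

class RecursiveLeastSquares:
    """Standard RLS estimator for y = phi^T theta with forgetting factor lam."""

    def __init__(self, n_params, lam=0.99, p0=1e3):
        self.theta = np.zeros(n_params)        # parameter estimate
        self.P = p0 * np.eye(n_params)         # estimate covariance
        self.lam = lam

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)     # gain vector
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta
```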
|
|
11:45-12:00, Paper MoAT3.4 | |
>Acquiring Task Models for Imitation Learning through Games with a Purpose |
Kunze, Lars | Univ. of Birmingham |
Haidu, Andrei | Tech. Univ. München |
Beetz, Michael | Univ. of Bremen |
Keywords: Virtual Reality and Interfaces, Learning from Demonstration, AI Reasoning Methods
Abstract: Teaching robots everyday tasks like making pancakes by instruction requires interfaces that can be intuitively operated by non-experts. By performing novel manipulation tasks in a virtual environment using a data glove, task-related information about the demonstrated actions can be accessed and extracted directly from the simulator. We translate low-level data structures of these simulations into meaningful first-order representations, whereby we are able to select data segments and analyze them at an abstract level. Hence, the proposed system is a powerful tool for acquiring examples of manipulation actions and for analyzing them, whereby robots can be informed how to perform a task.
|
|
12:00-12:15, Paper MoAT3.5 | |
> >Skill Learning and Inference Framework for Skilligent Robot |
Lee, Sang Hyoung | Hanyang Univ. |
Suh, Il Hong | Hanyang Univ. |
Attachments: Video Attachment
Keywords: Behaviour-Based Systems, Autonomous Agents, Human and humanoid skills/cognition/interaction
Abstract: To achieve a certain task, a skilligent robot should be able to learn the skills embedded in that task. Furthermore, the robot should be able to infer such skills to handle uncertainties and perturbations, since most robot tasks are usually daily-life tasks that include many unexpected situations. Therefore, we propose a unified skill learning and inference framework. The framework includes six processing modules: 1) a human demonstration process, 2) an autonomous segmentation process, 3) a dynamic movement primitive learning process, 4) a Bayesian network learning process, 5) a motivation graph construction process, and 6) a skill-inferring process. Based on the framework, the robot learns and infers situation-adequate and goal-oriented skills to handle uncertainties and human perturbations. To show the validity of our framework, some experimental results are illustrated using a robot arm that performs a 'tea service' task.
|
|
12:15-12:30, Paper MoAT3.6 | |
> >A Two Party Haptic Guidance Controller Via a Hard Rein |
Ranasinghe, Anuradha | Kings Coll. London |
Penders, Jacques | Sheffield Hallam Univ. |
Dasgupta, Prokar | King's Coll. London |
Althoefer, Kaspar | Kings Coll. London |
Nanayakkara, Thrishantha | King's Coll. Univ. of London |
Attachments: Video Attachment
Keywords: Human-Robot Interaction, Robotics in Hazardous Fields, Haptics and Haptic Interfaces
Abstract: In human intervention in disaster response operations like indoor firefighting, thick smoke, noise in the oxygen masks and clutter not only limit the environmental perception of the human responders but also cause distress. An intelligent agent (man/machine) with full environmental perception capabilities is an alternative to enhance navigation in such unfavorable environments. Since haptic communication is the least affected mode of communication in such cases, we consider human demonstrations of using a hard rein to guide blindfolded followers under auditory distraction to be a good paradigm for extracting the salient features of guiding with hard reins. Based on numerical simulations and experimental system identification using demonstrations from eight pairs of human subjects, we show that the relationship between the orientation difference between the follower and the guider, and the lateral swing patterns of the hard rein by the guider, can be explained by a novel 3rd-order autoregressive predictive controller. Moreover, by modeling the two-party voluntary movement dynamics using a virtual damped inertial model, we were able to model the mutual trust between the two parties. In the future, the novel controller extracted from human demonstrations can be tested in a human-robot interaction scenario to guide a visually impaired person in applications such as firefighting, search and rescue, and medical surgery.
|
|
MoAT4 |
Room601 |
Gaze, Speech, Language |
Regular Session |
Chair: Evers, Vanessa | Univ. of Amsterdam |
Co-Chair: Metta, Giorgio | Istituto Italiano di Tecnologia (IIT) |
|
11:00-11:15, Paper MoAT4.1 | |
>Picking Favorites: The Influence of Robot Eye-Gaze on Interactions with Multiple Users |
Karreman, Daphne Eleonora | UTwente |
Sepúlveda Bradford, Gilberto U. | Univ. of Twente |
van Dijk, Elisabeth | Univ. of Twente |
Lohse, Manja | Univ. of Twente |
Evers, Vanessa | Univ. of Amsterdam |
Keywords: Robot Companions and Social Human-Robot Interaction, Gesture, Posture, Social Spaces and Facial Expressions, Performance Evaluation and Benchmarking
Abstract: We evaluated the effects of robot gaze behavior on interactions with multiple users in a museum-like setting. We posit that a robot needs to divide its attention between multiple users and may be able to use its gaze to ‘point’ at objects of interest. A 2 (person-oriented [only looking at participants] vs. object-oriented [also looking at artworks] gaze) x 2 (‘favored’ [looked at more] vs. ‘not favored’ [looked at less] by the robot) mixed factorial design (N=57) study was carried out in a museum-like lab setting where a robot talked about two artworks to groups of three participants. Results indicate that ‘favored’ participants did indeed pay more attention to the robot and the artworks. However, surprisingly they paid more attention when the robot did not look over to the object of interest compared to when it did give this gaze cue. The findings suggest that using an object-oriented gaze as a cue for people to look at an object may not carry across readily from person-to-person to human-robot communication. People had trouble interpreting the cue and were possibly distracted by the robot’s movement.
|
|
11:15-11:30, Paper MoAT4.2 | |
> >Cooperative Human Robot Interaction Systems: IV. Communication of Shared Plans with Naïve Humans Using Gaze and Speech |
Lallée, Stéphane | Univ. Pompeu Fabra, |
Hamann, Katharina | Max Planck Inst. for Evolutionary Anthropology |
Steinwender, Jasmin | Max Planck Inst. for Evolutionary Anthropology |
Warneken, Felix | Harvard Univ. |
Martinez-Hernandez, Uriel | Univ. of Sheffield |
Barron-Gonzalez, Hector | Sheffield Univ. UK |
Pattacini, Ugo | Istituto Italiano di Tecnologia |
Gori, Ilaria | Istituto Italiano di Tecnologia |
Petit, Maxime | INSERM |
Metta, Giorgio | Istituto Italiano di Tecnologia (IIT) |
Verschure, Paul | Catalan Inst. of Advanced Studies (ICREA), Foundation & Univ. |
Dominey, Peter Ford | INSERM Stem Cell & Brain Res. Inst. |
Attachments: Video Attachment
Keywords: Human-Humanoid Interaction, Cognitive Human-Robot Interaction, Human and humanoid skills/cognition/interaction
Abstract: Cooperation is at the core of human social life. In this context, two major challenges face research on human-robot interaction: the first is to understand the underlying structure of cooperation, and the second is to build, based on this understanding, artificial agents that can successfully and safely interact with humans. Here we take a psychologically grounded and human-centered approach that addresses these two challenges. We test the hypothesis that optimal cooperation between a naïve human and a robot requires that the robot can acquire and execute a joint plan, and that it communicates this joint plan through ecologically valid modalities including spoken language, gesture and gaze. We developed a cognitive system that comprises the human-like control of social actions, the ability to acquire and express shared plans and a spoken language stage. In order to test the psychological validity of our approach we tested 12 naïve subjects in a cooperative task with the robot. We experimentally manipulated the presence of a joint plan (vs. a solo plan), the use of task-oriented gaze and gestures, and the use of language accompanying the unfolding plan. The quality of cooperation was analyzed in terms of proper turn taking, collisions and cognitive errors. Results showed that while successful turn taking could take place in the absence of the explicit use of a joint plan, its presence yielded significantly greater success. One advantage of the solo plan was that the robot would always be ready to generate actions, and could thus adapt if the human intervened at the wrong time, whereas in the joint plan the robot expected the human to take his/her turn. Interestingly, when the robot represented the action as involving a joint plan, gaze provided a highly potent nonverbal cue that facilitated successful collaboration and reduced errors in the absence of verbal communication. These results support the cooperative stance in human social cognition, and suggest that cooperative robots should employ joint plans, fully communicate them in order to sustain effective collaboration while being ready to adapt if the human makes a midstream mistake.
|
|
11:30-11:45, Paper MoAT4.3 | |
> >“You Two! Take Off!”: Creating, Modifying and Commanding Groups of Robots Using Face Engagement and Indirect Speech in Voice Commands |
Pourmehr, Shokoofeh | Simon Fraser Univ. |
Monajjemi, Valiallah (Mani) | Simon Fraser Univ. |
Vaughan, Richard | Simon Fraser Univ. |
Mori, Greg | Simon Fraser Univ. |
Attachments: Video Attachment
Keywords: Human-Robot Interaction, Robot Companions and Social Human-Robot Interaction, Autonomous Agents
Abstract: We present a multimodal system for creating, modifying and commanding groups of robots from a population. Extending our previous work on selecting an individual robot from a population by face engagement, we show that we can dynamically create groups of a desired number of robots by speaking the number we desire, e.g. “You three”, and looking at the robots we intend to form the group. We evaluate two different methods of detecting which robots are intended by the user, and show that an iterated election performs well in our setting. We also show that teams can be modified by adding and removing individual robots: “And you. Not you”. The success of the system is examined for different spatial configurations of robots with respect to each other and the user to find the proper workspace of selection methods.
|
|
11:45-12:00, Paper MoAT4.4 | |
>Using Semantic Fields to Model Dynamic Spatial Relations in a Robot Architecture for Natural Language Instruction of Service Robots |
Fasola, Juan | Univ. of Southern California |
Mataric, Maja | Univ. of Southern California |
Keywords: Cognitive Human-Robot Interaction, Integrated Task and Motion Planning, Software and Architecture
Abstract: We present a methodology for enabling service robots to follow natural language commands from non-expert users, with and without user-specified constraints, with a particular focus on spatial language understanding. As part of our approach, we propose a novel extension to the semantic field model of spatial prepositions that enables the representation of dynamic spatial relations involving paths. The design, system modules, and implementation details of our robot software architecture are presented and the relevance of the proposed methodology to interactive instruction and task modification through the addition of constraints is discussed. The paper concludes with an evaluation of our robot software architecture implemented on a simulated mobile robot operating in both a 2D home environment and in real world environment maps to demonstrate the generalizability and usefulness of our approach in real world applications.
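As a toy illustration of the semantic-field idea (a weight assigned to every point in space by a spatial preposition, accumulated along a path for a dynamic relation), consider the hypothetical sketch below; the Gaussian form and the scale `sigma` are assumptions for illustration, not the paper's model.

```python
import numpy as np

def near_field(points, landmark, sigma=0.5):
    """Static semantic field for "near": weight decays with distance to the landmark."""
    d = np.linalg.norm(np.asarray(points, float) - np.asarray(landmark, float), axis=1)
    return np.exp(-(d ** 2) / (2.0 * sigma ** 2))

def path_field_score(path, landmark, field=near_field):
    """Score a path for a dynamic relation by accumulating the static field
    over the waypoints of the path."""
    return float(np.mean(field(path, landmark)))

# Example: compare two candidate paths relative to a landmark at the origin.
path_a = [(1.0, 0.0), (0.8, 0.2), (0.6, 0.4)]
path_b = [(3.0, 3.0), (3.2, 3.4), (3.5, 3.8)]
print(path_field_score(path_a, (0.0, 0.0)), path_field_score(path_b, (0.0, 0.0)))
```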
|
|
12:00-12:15, Paper MoAT4.5 | |
>Generating Sentence from Motion by Using Large-Scale and High-Order N-Grams |
Goutsu, Yusuke | The Univ. of Tokyo |
Takano, Wataru | Univ. of Tokyo |
Nakamura, Yoshihiko | Univ. of Tokyo |
Keywords: Behaviour-Based Systems, Human-Robot Interaction, Recognition
Abstract: Motion recognition is an essential technology for social robots in various environments such as homes, offices and shopping centers, where the robots are expected to understand human behavior and interact with people. In this paper, we present a system composed of three models: a motion language model, a natural language model and an integration inference model, and generate sentences from motions using large-scale, high-order N-grams. We confirmed not only that using higher-order N-grams improves precision in generating long sentences but also that the computational complexity of the proposed system is almost the same as that of our previous one. In addition, we improved the precision by aligning the graph structure representing the generated sentences into confusion network form. This means that simplifying and compacting word sequences affects the precision of sentence generation.
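A toy version of N-gram training and sentence scoring is sketched below to make the mechanism concrete; the add-one smoothing and vocabulary size are assumptions, and the paper's large-scale, high-order model and confusion-network alignment are not reproduced.

```python
import math
from collections import defaultdict

def train_ngram(sentences, n=3):
    """Count n-grams and (n-1)-gram contexts from tokenized sentences."""
    counts, contexts = defaultdict(int), defaultdict(int)
    for words in sentences:
        padded = ["<s>"] * (n - 1) + words + ["</s>"]
        for i in range(len(padded) - n + 1):
            gram = tuple(padded[i:i + n])
            counts[gram] += 1
            contexts[gram[:-1]] += 1
    return counts, contexts

def log_prob(words, counts, contexts, n=3, vocab_size=10000):
    """Log-probability of a word sequence under the n-gram model (add-one smoothing)."""
    padded = ["<s>"] * (n - 1) + words + ["</s>"]
    lp = 0.0
    for i in range(len(padded) - n + 1):
        gram = tuple(padded[i:i + n])
        lp += math.log((counts[gram] + 1) / (contexts[gram[:-1]] + vocab_size))
    return lp

# Example: score a candidate sentence generated from a recognized motion.
data = [["the", "robot", "picks", "up", "the", "cup"],
        ["the", "robot", "puts", "down", "the", "cup"]]
c, ctx = train_ngram(data)
print(log_prob(["the", "robot", "picks", "up", "the", "cup"], c, ctx))
```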
|
|
12:15-12:30, Paper MoAT4.6 | |
>Multimodal Concept and Word Learning Using Phoneme Sequences with Errors |
Nakamura, Tomoaki | Univ. of Electro-Communications |
Araki, Takaya | Univ. of Electro-Communications |
Nagai, Takayuki | Univ. of Electro-Communications |
Nagasaka, Shogo | Ritsumeikan Univ. |
Taniguchi, Tadahiro | Ritsumeikan Univ. |
Iwahashi, Naoto | National Inst. of Information and Communications Technology |
Keywords: Recognition, Visual Learning, Learning and Adaptive Systems
Abstract: In this study, we propose a method for concept formation and word acquisition for robots. The proposed method is based on multimodal latent Dirichlet allocation (MLDA) and the nested Pitman-Yor language model (NPYLM). A robot obtains haptic, visual, and auditory information by grasping, observing, and shaking an object. At the same time, a user teaches object features to the robot through speech, which is recognized using only acoustic models and transformed into phoneme sequences. As the robot is assumed to have no language model in advance, the recognized phoneme sequences include many phoneme recognition errors. Moreover, the recognized phoneme sequences with errors are segmented into words in an unsupervised manner; however, not all words are necessarily segmented correctly. The words containing these errors have a negative effect on the learning of word meanings. To overcome this problem, we propose a method to improve unsupervised word segmentation and to reduce phoneme recognition errors by using multimodal object concepts. In the proposed method, object concepts are used to enhance the accuracy of word segmentation, reduce phoneme recognition errors, and correct words so as to improve the categorization accuracy. We experimentally demonstrate that the proposed method can improve the accuracy of word segmentation and reduce the phoneme recognition error, and that the obtained words enhance the categorization accuracy.
|
|
MoAT5 |
Room605 |
Robot Learning I |
Regular Session |
Chair: Wang, Zhidong | Chiba Inst. of Tech. |
Co-Chair: Caarls, Wouter | Delft Univ. of Tech. |
|
11:00-11:15, Paper MoAT5.1 | |
>Open and Closed-Loop Task Space Trajectory Control of Redundant Robots Using Learned Models |
Damas, Bruno | IST-ID |
Jamone, Lorenzo | Inst. Superior Tecnico |
Santos-Victor, José | Inst. Superior Técnico - Lisbon |
Keywords: Learning and Adaptive Systems, Kinematics, Redundant Robots
Abstract: This paper presents a comparison of open-loop and closed-loop control strategies for tracking a task space trajectory using redundant robots. We do not assume any knowledge of the analytical forward and inverse kinematics, relying instead on learning these models online while executing a desired task. Specifically, we employ a recent learning algorithm that allows learning a probabilistic model from which both the forward and inverse solutions can be obtained, as well as the Jacobian of the kinematics map. Such a learned model can then be used to implement both types of control. Moreover, the multi-valued solutions provided by the learned model can be applied to redundant systems in which an infinite number of inverse solutions may exist. We present experiments with a simulated version of the iCub, a highly redundant humanoid robot, in which this learned model is employed to execute both open-loop and closed-loop trajectory control. We show the advantages and drawbacks of both control strategies, and we propose a way to combine them to deal with sensor noise and failures, showing the benefits of using a learning algorithm that can simultaneously provide forward and inverse predictions.
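A generic closed-loop (resolved-rate) task-space step using learned forward and Jacobian models might look like the sketch below; `forward_model` and `jacobian_model` are placeholder names for predictions from a learned model, not the authors' algorithm.

```python
import numpy as np

def closed_loop_step(q, x_desired, forward_model, jacobian_model, gain=1.0, dt=0.01):
    """One closed-loop task-space control step using learned kinematic models.

    forward_model(q)  -> predicted task-space position x
    jacobian_model(q) -> predicted Jacobian dx/dq
    Both would come from the learned probabilistic model; here they are
    placeholders for illustration.
    """
    x = forward_model(q)                    # predicted (or measured) position
    e = x_desired - x                       # task-space error
    J = jacobian_model(q)
    dq = np.linalg.pinv(J) @ (gain * e)     # resolved-rate joint command
    return q + dq * dt
```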
|
|
11:15-11:30, Paper MoAT5.2 | |
>Ensuring Safety of Policies Learned by Reinforcement: Reaching Objects in the Presence of Obstacles with the Icub |
Pathak, Shashank | RBCS, Istituto Italiano Di Tecnologia,Genova |
Pulina, Luca | Univ. di Sassari |
Metta, Giorgio | Istituto Italiano di Tecnologia (IIT) |
Tacchella, Armando | Univ. di Genova |
Keywords: Formal Methods in Robotics and Automation, Learning and Adaptive Systems, Collision Detection and Avoidance
Abstract: Given a stochastic policy learned by reinforcement, we wish to ensure that it can be deployed on a robot with demonstrably low probability of unsafe behavior. Our case study is about learning to reach target objects positioned close to obstacles, and ensuring a reasonably low collision probability. Learning is carried out in a simulator to avoid physical damage in the trial-and-error phase. Once a policy is learned, we analyze it with probabilistic model checking tools to identify and correct potential unsafe behaviors. The whole process is automated and, in principle, it can be integrated step-by-step with routine task-learning. As our results demonstrate, automated fixing of policies is both feasible and highly effective in bounding the probability of unsafe behaviors.
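One core computation in such an analysis is the probability of ever reaching an unsafe state in the Markov chain induced by the learned policy. The sketch below solves this reachability equation by fixed-point iteration; it is a simplified stand-in for what a probabilistic model checker computes, not the tool chain used in the paper.

```python
import numpy as np

def unsafe_reach_probability(P, unsafe, n_iter=1000, tol=1e-10):
    """Probability of ever reaching an unsafe state in a finite Markov chain.

    P      : (S, S) transition matrix of the chain induced by the policy
    unsafe : boolean mask of unsafe states
    Solves p = 1 on unsafe states and p = P p elsewhere by fixed-point iteration.
    """
    unsafe = np.asarray(unsafe, dtype=bool)
    p = unsafe.astype(float)
    for _ in range(n_iter):
        p_new = P @ p
        p_new[unsafe] = 1.0
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return p
```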
|
|
11:30-11:45, Paper MoAT5.3 | |
> >Visual Teach and Repeat, Repeat, Repeat: Iterative Learning Control to Improve Mobile Robot Path Tracking in Challenging Outdoor Environments |
Ostafew, Chris J. | Univ. of Toronto |
Barfoot, Timothy | Univ. of Toronto |
Schoellig, Angela P. | Univ. of Toronto |
Attachments: Video Attachment
Keywords: Learning and Adaptive Systems, Visual Navigation, Adaptive Control
Abstract: This paper presents a path-repeating, mobile robot controller that combines a feedforward, proportional Iterative Learning Control (ILC) algorithm with a feedback-linearized path-tracking controller to reduce path-tracking errors over repeated traverses along a reference path. Localization for the controller is provided by an on-board, vision-based mapping and navigation system enabling operation in large-scale, GPS-denied, extreme environments. The paper presents experimental results including over 600 m of travel by a four-wheeled, 50 kg robot travelling through challenging terrain including steep hills and sandy turns and by a six-wheeled, 160 kg robot at gradually-increased speeds up to three times faster than the nominal, safe speed. In the absence of a global localization system, ILC is demonstrated to reduce path-tracking errors caused by unmodelled robot dynamics and terrain challenges.
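The feedforward, proportional ILC update can be sketched as follows: the command for the next traverse is the previous command plus a gain times the previous tracking error, shifted by one step. The gain value and the one-step shift are illustrative choices, not the paper's exact tuning.

```python
import numpy as np

def ilc_update(u_prev, e_prev, learning_gain=0.5):
    """Proportional Iterative Learning Control update for the next traverse.

    u_prev : feedforward commands applied on the previous pass (length N)
    e_prev : path-tracking errors recorded on the previous pass (length N)
    The correction uses the error one step ahead, a common choice that
    accounts for the one-step input-to-output delay.
    """
    e_shifted = np.roll(e_prev, -1)
    e_shifted[-1] = e_prev[-1]              # hold the last sample
    return u_prev + learning_gain * e_shifted
```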
|
|
11:45-12:00, Paper MoAT5.4 | |
>Learning While Preventing Mechanical Failure Due to Random Motions |
Meijdam, Hendrik Jan | Delft Univ. of Tech. |
Plooij, Michiel | Delft Univ. of Tech. |
Caarls, Wouter | Delft Univ. of Tech. |
Keywords: Learning and Adaptive Systems, Motion Control, Humanoid Robots
Abstract: Learning can be used to optimize robot motions for new situations. Learning motions can produce high-frequency random motions in the exploration phase, which can cause mechanical failure before the motion is learned. The mean time between failures (MTBF) of a robot can be predicted while it is performing these motions. The predicted MTBF in the exploration phase can be increased by filtering the actions or possible actions of the algorithm. We investigated five algorithms that apply this filtering in various ways and compared them to SARSA(lambda) learning. In general, increasing the MTBF decreases the learning performance. Three of the investigated algorithms are unable to increase the MTBF while keeping their learning performance approximately equal to SARSA(lambda). Two algorithms are able to do this: the PADA algorithm and the low-pass filter algorithm. In the case of LEO, a bipedal walking robot that tries to optimize a walking motion, the MTBF can be increased by a factor of 108 compared to SARSA(lambda). This indicates that, in some cases, failures due to high-frequency random motions can be prevented without decreasing the performance.
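The idea behind filtering exploratory actions can be illustrated with a first-order low-pass filter applied to the actions before they are sent to the robot; this is a conceptual sketch only, and the filter constant is an assumed parameter rather than a value from the paper.

```python
import random

class LowPassActionFilter:
    """First-order low-pass filter applied to exploratory actions.

    Smoothing the action sequence removes the high-frequency components of
    random exploration that would otherwise stress the hardware. alpha close
    to 1 means little filtering; a small alpha means heavy smoothing.
    """

    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.prev = 0.0

    def __call__(self, action):
        self.prev = self.alpha * action + (1.0 - self.alpha) * self.prev
        return self.prev

# Example: filter random exploration noise before applying it to a joint.
filt = LowPassActionFilter(alpha=0.3)
for _ in range(5):
    raw = random.uniform(-1.0, 1.0)        # raw exploratory action
    print(filt(raw))
```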
|
|
12:00-12:15, Paper MoAT5.5 | |
> >Reinforcement Learning of Single Legged Locomotion |
Fankhauser, Péter | ETH Zurich |
Hutter, Marco | ETH Zurich |
Gehring, Christian | ETH Zurich, Disney Res. Zurich |
Bloesch, Michael | ETH Zurich |
Hoepflinger, Mark | ETH Zurich |
Siegwart, Roland | ETH Zurich |
Attachments: Video Attachment
Keywords: Learning and Adaptive Systems, Legged Robots, Motion Control
Abstract: This paper presents the application of reinforcement learning to improve the performance of highly dynamic single legged locomotion with compliant series elastic actuators. The goal is to optimally exploit the capabilities of the hardware in terms of maximum jump height, jump distance, and energy efficiency of periodic hopping. These challenges are tackled with the reinforcement learning method Policy Improvement with Path Integrals (PI^2) in a model-free approach to learn parameterized motor velocity trajectories as well as high-level control parameters. The combination of simulation- and hardware-based optimization allows optimal control policies to be obtained efficiently in a parameter space of up to 10 dimensions. The robotic leg learns to temporarily store energy in the elastic elements of the joints in order to improve the jump height and distance. In addition, we present a method to learn time-independent control policies and apply it to improve the energetic efficiency of periodic hopping.
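A simplified, episodic form of the PI^2 parameter update is sketched below: perturbed rollouts are evaluated and averaged with cost-based softmax weights. The full algorithm's time-dependent weighting is omitted, and `rollout_cost` is a placeholder for a simulation or hardware evaluation.

```python
import numpy as np

def pi2_update(theta, rollout_cost, n_rollouts=20, noise_std=0.1, lam=1.0):
    """One simplified, episodic PI^2 update of a policy parameter vector.

    rollout_cost(theta) must run one rollout (in simulation or on hardware)
    and return its scalar cost. Lower-cost perturbations receive higher weight.
    """
    eps = noise_std * np.random.randn(n_rollouts, theta.size)   # exploration noise
    costs = np.array([rollout_cost(theta + e) for e in eps])
    s = (costs - costs.min()) / max(costs.max() - costs.min(), 1e-12)
    w = np.exp(-s / lam)
    w /= w.sum()                                                # softmax over costs
    return theta + w @ eps                                      # cost-weighted average
```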
|
|
12:15-12:30, Paper MoAT5.6 | |
> >Learning Robot Gait Stability using Neural Networks as Sensory Feedback Function for Central Pattern Generators |
Gay, Sébastien | EPFL Ec. Pol. Fédérale de Lausanne |
Ijspeert, Auke | EPFL |
Santos-Victor, José | Inst. Superior Técnico - Lisbon |
Attachments: Video Attachment
Keywords: Learning and Adaptive Systems, Legged Robots, Sensor Fusion
Abstract: In this paper we present a framework to learn a model-free feedback controller for locomotion and balance control of a compliant quadruped robot walking on rough terrain. Having designed an open-loop gait encoded in a Central Pattern Generator (CPG), we use a neural network to represent sensory feedback inside the CPG dynamics. This neural network accepts sensory inputs from a gyroscope or a camera, and its weights are learned using Particle Swarm Optimization (unsupervised learning). We show with a simulated compliant quadruped robot that our controller can perform significantly better than the open-loop one on slopes and randomized height maps.
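A basic Particle Swarm Optimization loop of the kind used to learn the feedback network weights might look like the sketch below, where `fitness` is assumed to run the CPG-plus-network controller in simulation and return a locomotion score; the hyperparameters are illustrative.

```python
import numpy as np

def pso(fitness, dim, n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    """Basic Particle Swarm Optimization maximizing `fitness` over R^dim."""
    x = np.random.uniform(-1.0, 1.0, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                                    # particle velocities
    pbest = x.copy()
    pbest_val = np.array([fitness(p) for p in x])
    gbest = pbest[np.argmax(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([fitness(p) for p in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest
```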
|
|
MoAT6 |
Room604 |
Task and Motion Planning |
Regular Session |
Chair: Knoll, Alois C. | TU Munich |
Co-Chair: Yoshida, Eiichi | National Inst. of AIST |
|
11:00-11:15, Paper MoAT6.1 | |
> >KVP: A Knowledge of Volumes Approach to Robot Task Planning |
Gaschler, Andre K. | Tech. Univ. Muenchen |
Petrick, Ron | Univ. of Edinburgh |
Giuliani, Manuel | fortiss GmbH |
Rickert, Markus | fortiss GmbH |
Knoll, Alois C. | TU Munich |
Attachments: Video Attachment
Keywords: Integrated Task and Motion Planning, Task Planning, Manipulation Planning and Control
Abstract: Robot task planning is an inherently challenging problem, as it covers both continuous-space geometric reasoning about robot motion and perception, as well as purely symbolic knowledge about actions and objects. This paper presents a novel "knowledge of volumes" framework for solving generic robot tasks in partially known environments. In particular, this approach (abbreviated, KVP) combines the power of symbolic, knowledge-level AI planning with the efficient computation of volumes, which serve as an intermediate representation for both robot action and perception. While we demonstrate the effectiveness of our framework in a bimanual robot bartender scenario, our approach is also more generally applicable to tasks in automation and mobile manipulation, involving arbitrary numbers of manipulators.
|
|
11:15-11:30, Paper MoAT6.2 | |
>Balancing Workloads for Service Vehicles Over a Geographic Territory |
Devulapalli, Raghuveer | Univ. of Minnesota |
Carlsson, John Gunnar | Univ. of Minnesota |
Carlsson, Erik | Simons Center for Geometry and Physics |
Keywords: Integrated Task and Motion Planning, Task Planning, Autonomous Agents
Abstract: Autonomous vehicles (or drones) are very frequently used for servicing a geographic region in numerous applications. Given a geographic territory and a set of n fixed vehicle depots, we consider the problem of designing service districts so as to balance the workload of a collection of vehicles which service this region. We assume that the territory is a connected polygonal region, i.e. a simply connected polygon containing a set of simply connected obstacles. We give a fast algorithm, based on an infinite-dimensional optimization formulation, that divides the territory into compact, connected sub-regions, each of which contains a vehicle depot, such that all regions have equal area. We also show how we can use this algorithm to find better locations of the vehicle depots.
|
|
11:30-11:45, Paper MoAT6.3 | |
>On Optimizing a Sequence of Robotic Tasks |
Alatartsev, Sergey | Otto-von-Guericke Univ. of Magdeburg |
Mersheeva, Vera | Univ. of Klagenfurt |
Augustine, Marcus | Otto-von-Guericke-Univ. Magdeburg |
Ortmeier, Frank | Otto-von-Guericke-Univ. Magdeburg |
Keywords: Task Planning, Planning, Scheduling and Coordination, Industrial Robots
Abstract: Production speed and energy efficiency are crucial factors for any application scenario in industrial robotics. The most important factor for these is planning an optimized sequence of atomic subtasks. In a welding scenario, an atomic subtask could be understood as a single welding seam/spot, while the sequence is the ordering of these atomic tasks. Optimization of a task sequence is normally modeled as the Traveling Salesman Problem (TSP). This works well for simple scenarios with atomic tasks that allow no execution freedom, such as spot welding. However, many types of tasks allow a certain freedom of execution. A simple example is seam welding of a closed contour, where typically the starting/ending point is not specified by the application. This extra degree of freedom allows for much more efficient task sequencing. In this paper, we describe an extension of the TSP to model the problem of finding an optimal sequence of tasks with such an extra degree of freedom. We propose a new, efficient heuristic to solve such problems and show its applicability. The obtained computational results are close to the optimum on small instances and outperform the state-of-the-art approaches on benchmarks available in the literature.
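A simple greedy illustration of exploiting the extra degree of freedom (a free entry point on each closed contour) is sketched below; it is a nearest-neighbor baseline for intuition, not the heuristic proposed in the paper.

```python
import math

def sequence_tasks(start, tasks):
    """Greedy nearest-neighbor sequencing of tasks with free entry points.

    Each task is a list of candidate entry points (e.g. samples along a closed
    welding contour); the extra degree of freedom is which point to enter at.
    Returns the visiting order with the chosen entry point for each task.
    """
    pos, remaining, plan = start, list(range(len(tasks))), []
    while remaining:
        task_idx, entry = min(((i, p) for i in remaining for p in tasks[i]),
                              key=lambda ip: math.dist(pos, ip[1]))
        plan.append((task_idx, entry))
        remaining.remove(task_idx)
        pos = entry          # a closed contour ends near where it started
    return plan

# Example: two closed contours, each with three candidate entry points.
tasks = [[(1, 0), (1, 1), (2, 0)], [(5, 5), (5, 6), (6, 5)]]
print(sequence_tasks((0, 0), tasks))
```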
|
|
11:45-12:00, Paper MoAT6.4 | |
> >Foresight and Reconsideration in Hierarchical Planning and Execution |
Levihn, Martin | Georgia Inst. of Tech. |
Kaelbling, Leslie | MIT |
Lozano-Perez, Tomas | MIT |
Stilman, Mike | Georgia Tech. |
Attachments: Video Attachment
Keywords: Integrated Task and Motion Planning, AI Reasoning Methods, Mobile Manipulation
Abstract: We present a hierarchical planning and execution architecture that maintains the computational efficiency of hierarchical decomposition while improving optimality. It provides mechanisms for monitoring the belief state during execution and performing selective replanning to repair poor choices and take advantage of new opportunities. It also provides mechanisms for looking ahead into future plans to avoid making short-sighted choices. The effectiveness of this mechanism is shown through comparative experiments in simulation and demonstrated on a real PR2 robot.
|
|
12:00-12:15, Paper MoAT6.5 | |
>An Interface for Interleaved Symbolic-Geometric Planning and Backtracking |
De Silva, Lavindra | LAAS-CNRS |
Pandey, Amit Kumar | LAAS-CNRS |
Alami, Rachid | CNRS |
Keywords: Integrated Task and Motion Planning, Task Planning
Abstract: While symbolic planners work with an abstract representation of the real world, allowing plans to be constructed relatively quickly, geometric planning—although more computationally complex—is essential for building symbolic plans that actually work in the real world. To combine the two types of systems, we present in this paper a meaningful interface, and insights into a methodology for developing interwoven symbolic-geometric domains. We concretely present this “link” between the two approaches with algorithms and data structures that amount to an intermediate layer that coordinates symbolic-geometric planning. Since both planners are capable of “backtracking” at their own levels, we also investigate the issue of how to interleave their backtracking, which we do in the context of the algorithms that form the link. Finally, we present a prototype implementation of the combined system on a PR2 robot.
|
|
12:15-12:30, Paper MoAT6.6 | |
>Motion and Action Planning under LTL Specifications Using Navigation Functions and Action Description Language |
Guo, Meng | KTH Royal Inst. of Tech. |
Johansson, Karl H. | Royal Inst. of Tech. |
Dimarogonas, Dimos V. | Royal Inst. of Tech. |
Keywords: Integrated Task and Motion Planning, Behaviour-Based Systems, Integrated Planning and Control
Abstract: We propose a novel framework to combine model-checking-based motion planning with action planning using action description languages, aiming to tackle task specifications given as Linear Temporal Logic (LTL) formulas. The specifications implicitly require both sequential regions to visit and the desired actions to perform at these regions. The robot's motion is abstracted based on sphere regions of interest in the workspace and the structure of navigation function (NF)-based controllers, while the robot's action map is constructed based on precondition and effect functions associated with the actions. An optimal planner is designed that generates the discrete motion-and-action plan fulfilling the specification, as well as the low-level hybrid controllers that implement this plan. The whole framework is demonstrated by a case study.
|
|
MoAT7 |
Room701 |
Novel Robotic Mechanisms & Systems |
Regular Session |
Chair: Tadakuma, Kenjiro | Osaka Univ. |
Co-Chair: Choi, Hyouk Ryeol | Sungkyunkwan Univ. |
|
11:00-11:15, Paper MoAT7.1 | |
>Robot Design for High Flow Liquid Pipe Networks |
Choi, Changrak | Massachusetts Inst. of Tech. |
Youcef-Toumi, Kamal | Massachusetts Inst. of Tech. |
Keywords: Field Robots, Mechanism Design, Joint/Mechanism
Abstract: In-pipe robots are important for the inspection of pipe networks that form vital infrastructure of modern society. Nevertheless, most in-pipe robots developed so far are targeted at working inside gas pipes and are not suitable for liquid pipes. This paper presents a new approach for designing an in-pipe robot to work inside a liquid environment in the presence of high drag forces. Three major subsystems – propulsion, braking, and turning – are described in detail with new concepts and mechanisms that differ from conventional in-pipe robots. Prototypes of each subsystem are designed, built and tested for validation. The result is a robot design that navigates efficiently inside liquid pipe networks and can be used for practical inspection purposes.
|
|
11:15-11:30, Paper MoAT7.2 | |
> >An In-Pipe Robot with Multi-Axial Differential Gear Mechanism |
Kim, Ho Moon | Sungkyunkwan Univ. |
Suh, Jung Seok | SungKyunKwan Univ. |
Choi, Yun Seok | SungKyunKwan Univ. |
Tran, Duc Trong | SungKyunKwan Univ. |
Moon, Hyungpil | Sungkyunkwan Univ. |
Koo, Ja Choon | Sungkyunkwan Univ. |
Ryew, Sung Moo | KnR Systems Inc. |
Choi, Hyouk Ryeol | Sungkyunkwan Univ. |
Attachments: Video Attachment
Keywords: Field Robots, Mechanism Design, Wheeled Robots
Abstract: This paper presents a mechanism for an in-pipe robot, called MRINSPECT VI (Multifunctional Robotic crawler for In-pipe inspection VI), which is under development for the inspection of gas pipelines with a 150 mm inside diameter. The mechanism is composed of a multi-axial differential gear mechanism and a wall-pressing mechanism, and is driven by a single motor. It is designed to adapt to the varying inside geometries of pipelines, such as elbows, by modulating the velocities of the active wheels mechanically without any control effort. In this paper, the design features of the mechanism are detailed and its effectiveness is experimentally validated.
|
|
11:30-11:45, Paper MoAT7.3 | |
>Automatic In-Pipe Robot Centering from 3D to 2D Controller Simplification |
Mateos, Luis | Vienna Univ. of Tech. |
Vincze, Markus | Vienna Univ. of Tech. |
Keywords: Industrial Robots, Field Robots, Rehabilitation Robotics
Abstract: After 50 years, the connections between fresh water pipes (800-1200 mm diameter) need to be repaired due to aging and dissolution of the filling material. In Vienna alone, 3000 km of pipes need to be rehabilitated, which requires a robotic solution. The main challenge is to accurately align the robot axis with the pipe axis to enable the rotary motion of the maintenance tool. The tool system for cleaning and sealing is mounted on the maintenance unit of the robot, which consists of six wheeled legs. These legs extend to the irregular cast-iron pipe and set the robot structure eccentric to the pipe's center. In order to center the maintenance unit, distance sensors on the legs allow it to adapt to the non-circular shape of the pipe. Correcting the leg extension allows better positioning of the cleaning tool.
|
|
11:45-12:00, Paper MoAT7.4 | |
> >Magnetic Omnidirectional Wheels for Climbing Robots |
Tavakoli, Mahmoud | Univ. of Coimbra |
Viegas, Carlos | Inst. of Systems and Robotics |
Marques, Lino | Univ. of Coimbra |
Pires, Norberto | Univ. of Coimbra |
de Almeida, Anibal T. | Univ. of Coimbra |
Attachments: Video Attachment
Keywords: Field Robots, Mechanism Design, Robotics in Hazardous Fields
Abstract: This paper describes the design and development of omnidirectional magnetic climbing robots with high maneuverability for the inspection of ferromagnetic 3D man-made structures. The main objective of this research is to implement a robot that is able to climb and navigate over ferromagnetic structures with: high maneuverability; high speed; adaptability to a reasonable range of curvatures; adaptability to a reasonable range of structure materials and thicknesses; and simplicity. The Omni-Climbers involve the following main novelties: omnidirectional wheels to increase maneuverability, and a flexible chassis with side magnets for non-actuated adaptation to the curvature. The main focus of this article is the design, analysis and implementation of magnetic omnidirectional wheels for climbing robots. We discuss the effect of the associated problems of such wheels, e.g. vibration, on climbing robots. This paper also describes the evolution of magnetic omnidirectional wheels throughout the design and development of several solutions, resulting in lighter and smaller wheels which have less vibration and adapt better to smaller-radius structures. These wheels are installed on a chassis which adapts passively to flat and curved structures, enabling the robot to climb and navigate on such structures.
|
|
12:00-12:15, Paper MoAT7.5 | |
>Automatic Page Turner Machine for High-Speed Book Digitization |
Watanabe, Yoshihiro | The Univ. of Tokyo |
Tamei, Miho | Univ. of Tokyo |
Yamada, Masahiro | Univ. of Tokyo |
Ishikawa, Masatoshi | Univ. of Tokyo |
Attachments: Video Attachment
Keywords: Industrial Robots, Factory Automation, Domestic Robots and Home Automation
Abstract: In recent years, there has been an increasing demand to digitize a huge number of books. A promising new approach for meeting this demand, called Book Flipping Scanning, has been proposed. This is a new style of scanning in which all pages of a book are captured while a user continuously flips through the pages without stopping at each page. Although this new technology has had a tremendous impact in the field of book digitization, page turning is still done manually, which acts as a bottleneck in the development of high-speed book digitization. Against this background, this paper proposes a newly designed high-speed, high-precision book page turner machine. Our machine turns the pages in a contactless manner by utilizing the elastic force of the paper and an air blast. This design enables high-speed performance that is ten times faster than conventional approaches and, in addition, causes no obstruction in the digitization process. This paper reports the evaluation of the proposed machine using various types of paper of different qualities. Our machine achieved an almost 100% success rate when turning pages at around 300 pages/min, showing that it is a promising technology for turning pages at high speed and with high precision.
|
|
12:15-12:30, Paper MoAT7.6 | |
>Passive Collision Force Suppression Mechanism for Robot Manipulator |
Ono, Yoshiaki | Kanagawa Univ. |
Shimamoto, Kazuya | Kanagawa Univ. |
Nogawa, Takuma | Kanagawa Univ. |
Masuta, Hiroyuki | Kanagawa Univ. |
Lim, Hun-ok | Kanagawa Univ. |
Keywords: Robot Safety, Joint/Mechanism, Collision Detection and Avoidance
Abstract: This paper presents a robot manipulator with a collision force suppression mechanism that can passively suppress collision forces. The collision suppression mechanism consists of a release air pad, a transmission rack, a clutch gear and a compression spring. An air cushion bag is attached to the exterior of the robot manipulator. If the robot manipulator collides with an object or a human while a task is performed, the collision force suppression mechanism disengages the specific joint corresponding to the direction of the collision force in order to reduce the collision force. The robot manipulator then returns to the former task once the colliding object is removed. Through collision experiments, the effectiveness of the collision force suppression mechanism is verified.
|
|
MoAT8 |
Room702 |
Manipulation & Control |
Regular Session |
Chair: Clanton, Samuel | Univ. of Pittsburgh |
Co-Chair: Kosuge, Kazuhiro | Tohoku Univ. |
|
11:00-11:15, Paper MoAT8.1 | |
>Fast Peg-And-Hole Alignment Using Visual Compliance |
Huang, Shouren | Univ. of Tokyo |
Murakami, Kenichi | Univ. of Tokyo |
Yamakawa, Yuji | Univ. of Tokyo |
Senoo, Taku | Univ. of Tokyo |
Ishikawa, Masatoshi | Univ. of Tokyo |
Attachments: Video Attachment
Keywords: Manipulation and Compliant Assembly, Industrial Robots, Compliant Assembly
Abstract: This paper presents a visual compliance strategy to deal with the problem of fast peg-and-hole alignment under large position and attitude uncertainty. With the use of visual compliance and the adoption of a lightweight 3-DOF active peg, decoupled alignment for position and attitude is realized. The active peg is capable of high-speed motion and has fewer dynamic defects than a traditional robot arm. Two high-speed cameras, one configured as eye-in-hand and the other as eye-to-hand, are adopted to provide the task-space feedback. Visual constraints for effecting the visually compliant motion are analyzed. Alignment experiments show that peg-and-hole alignment with the proposed approach can be realized with robust convergence, and on average, the alignment could be achieved within 0.7 s in our experimental setting.
|
|
11:15-11:30, Paper MoAT8.2 | |
>Vision Based Compliant Motion Control for Part Assembly |
Kobari, Yuki | Tohoku Univ. |
Nammoto, Takashi | Tohoku Univ. |
Kinugawa, Jun | Tohoku Univ. |
Kosuge, Kazuhiro | Tohoku Univ. |
Keywords: Manipulation and Compliant Assembly, Computer Vision, Compliance and Impedance Control
Abstract: In this paper, we propose a vision-based compliant motion control method for part assembly work. Some industrial parts deform during assembly. As the work progresses, the deformation of the part increases; humans check the deformation and adjust the applied force according to the progress state. The proposed method enables a robot manipulator to adjust the force applied during assembly work in the same way a human does. In our proposed method, the force applied for the work is generated from visual information provided by a camera observing the deformation of the parts. Processing the visual information quantifies the deformation, and these data indicate the work progress. Normalized cross-correlation (NCC) is generally used for template matching, but in this paper we use it for quantifying deformation. In the experiments, connectors are assembled by a robot manipulator using the proposed method and impedance control. Experimental results are presented to verify the effectiveness of the proposed method.
|
|
11:30-11:45, Paper MoAT8.3 | |
>Human-Robot Collaborative Manipulation Planning Using Early Prediction of Human Motion |
Mainprice, Jim | Worcester Pol. Inst. |
Berenson, Dmitry | Worcester Pol. Inst. (WPI) |
Attachments: Video Attachment
Keywords: Cooperative Manipulators, Human Centered Planning and Control, Manipulation Planning and Control
Abstract: In this paper we present a framework that allows a human and a robot to perform simultaneous manipulation tasks safely in close proximity. The proposed framework is based on early prediction of the human's motion. The prediction system, which builds on previous work in the area of gesture recognition, generates a human workspace occupancy prediction by computing the swept volume of learned human motion trajectories. The motion planner then plans robot trajectories that minimize a penetration cost in the human workspace occupancy and interleaves planning and execution. Multiple plans are computed in parallel, one for each robot task available at the current time, and the trajectory with the least cost is selected for execution. We test our framework in simulation using recorded human motion and a simulated PR2 robot. Our results show that our framework enables the robot to avoid the human while still accomplishing the robot's task, even in cases where the initial prediction of the human's motion is incorrect. We also show that taking into account human workspace occupancy prediction in the robot's motion planner leads to safer and more efficient interactions between the user and the robot than only considering the human's current configuration.
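For readers who want a concrete picture of the plan-selection step described above, here is a minimal sketch (not the authors' planner): each candidate trajectory is scored by summing a predicted human workspace-occupancy grid along its waypoints, and the cheapest candidate is executed. The grid, trajectories and resolution below are made up for illustration.
```python
# Sketch of the plan-selection step: score each candidate trajectory against the
# predicted human workspace occupancy and execute the cheapest one (illustrative only).
import numpy as np

def penetration_cost(trajectory, occupancy, resolution=0.1):
    """Sum of predicted human-occupancy values along the trajectory waypoints."""
    idx = np.clip((trajectory / resolution).astype(int), 0,
                  np.array(occupancy.shape) - 1)
    return occupancy[idx[:, 0], idx[:, 1]].sum()

# Predicted occupancy: high probability that the human sweeps the left half of a table.
occupancy = np.zeros((50, 50))
occupancy[:, :25] = 0.8

# Two candidate trajectories for two available robot tasks (2D waypoints, metres).
traj_left = np.column_stack([np.linspace(0.5, 4.5, 40), np.full(40, 1.0)])
traj_right = np.column_stack([np.linspace(0.5, 4.5, 40), np.full(40, 4.0)])
costs = [penetration_cost(t, occupancy) for t in (traj_left, traj_right)]
best = int(np.argmin(costs))
print("costs:", costs, "-> execute candidate", best)
```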
|
|
11:45-12:00, Paper MoAT8.4 | |
>Adaptive Force/Velocity Control for Multi-Robot Cooperative Manipulation under Uncertain Kinematic Parameters |
Erhart, Sebastian | Tech. Univ. München |
Hirche, Sandra | Tech. Univ. München |
Keywords: Cooperative Manipulators, Adaptive Control
Abstract: Multi-robot cooperative manipulation of a common object requires precise kinematic coordination of the attached end effectors in order to avoid excessive forces on the object and the manipulators. A manipulation task is considered successful if the desired object motion and forces are tracked accurately. In this paper we present a systematic analysis of the effect of uncertain kinematic parameters on the tracking behavior in a planar manipulation task. An adaptive control scheme is proposed, which achieves the desired control goal asymptotically. The presented scheme employs the current force/motion data of the attached end effectors without relying on a common reference frame. The algorithm is applicable to common manipulator types with wrist-mounted force/torque sensors and is implementable in real time. The performance of the proposed control scheme is evaluated experimentally with two 7-DoF manipulators that cooperatively manipulate an object of uncertain length.
|
|
12:00-12:15, Paper MoAT8.5 | |
>An Impedance-Based Control Architecture for Multi-Robot Cooperative, Dual-Arm Mobile Manipulation |
Erhart, Sebastian | Tech. Univ. München |
Sieber, Dominik | Tech. Univ. München |
Hirche, Sandra | Tech. Univ. München |
Keywords: Cooperative Manipulators, Mobile Manipulation, Compliance and Impedance Control
Abstract: Cooperative manipulation in robotic teams likely results in an increased manipulation performance due to complementary sensing and actuation capabilities or increased redundancy. However, a precise coordination of the involved manipulators is required in order to avoid undesired stress on the manipulated object. Extending the workspace of the robots by means of mobile platforms greatly enlarges the potential task spectrum but simultaneously poses new challenges for example in terms of increased kinematic errors. In this paper we show how kinematic errors in the closed kinematic chain originating from uncertainties in the geometry of object and manipulators limit the cooperative task performance. We extend an impedance-based coordination control scheme towards mobile multi-robot manipulation to limit undesired internal forces in the presence of kinematic uncertainties. Furthermore, we employ a task-space decoupling approach to reduce the impact of disturbances at the mobile platforms on the end effectors. The presented control scheme for cooperative, mobile dual-arm manipulation is applicable in real-time and suitable for a team of heterogeneous manipulators. We evaluate the presented architecture by means of a large-scale experiment with four 7DoF manipulators on two mobile platforms.
|
|
12:15-12:30, Paper MoAT8.6 | |
>Generalized Virtual Fixtures for Shared-Control Grasping in Brain-Machine Interfaces |
Clanton, Samuel | Univ. of Pittsburgh |
Rasmussen, Robert | Univ. of Pittsburgh |
Zohny, Zohny | Washington Univ. in St. Louis |
Velliste, Meel | Univ. of Pittsburgh |
Keywords: Grasping, Brain Machine Interface, Human-Robot Interaction
Abstract: Operator-machine shared control systems that automatically modulate or augment human control of robots using the concept of "Virtual Fixtures" have to our knowledge been limited to translational constraints toward simple geometric primitives. In this work we present a novel form of Virtual Fixture-based shared control that extends the concept to high dimensional control spaces and direct constraint towards irregular fixture shapes and point clouds. This "Positive-Span Virtual Fixturing" method was developed as a training mechanism to allow subjects to control 6 and 7 degree of freedom manipulators in a brain-machine interface robotic grasping experiment. Here we describe the Positive-Span Virtual Fixturing algorithm, the results from an online experiment that showed how it improved human performance in a 6-DoF robot control task, and a simulated model of how the fixturing system could potentially be integrated with automated grasp planning software to constrain manipulator action towards more dextrous tasks.
|
|
MoAT9 |
Room608 |
Neuro Robotics |
Regular Session |
Chair: Hosoda, Koh | Osaka Univ. |
Co-Chair: Arie, Hiroaki | Waseda Univ. |
|
11:00-11:15, Paper MoAT9.1 | |
>Modeling and Identification of the Human Arm Stretch Reflex Using a Realistic Spiking Neural Network and Musculoskeletal Model |
Sreenivasa, Manish | Univ. of Tokyo |
Murai, Akihiko | The Univ. of Tokyo |
Nakamura, Yoshihiko | Univ. of Tokyo |
Attachments: Video Attachment
Keywords: Neurorobotics, Biomimetics, Biologically-Inspired Robots
Abstract: This study proposes a model that combines a realistically scaled neural network made up of pools of spiking neurons, with a musculoskeletal model of the human arm. We use evidence from literature to design topological pools of motor, sensory and interneurons and the nature of synaptic connections between them. The spiking output of the motor neuron pools are used as the command signals that generate motor unit forces, and drive joint motion. Feedback information from modeled muscle spindles is relayed to the neural network via monosynaptic and disynaptic pathways. We conduct experiments in specifically designed steady-state and dynamic conditions, to record participant data. Participant-specific parameters of the combined neuromusculoskeletal (NMS) system are then found using parameter identification methods. The identified NMS model is used to simulate the arm stretch reflex and the results are validated by comparison to an independent recorded dataset. The models and methodology proposed in this study show that seemingly large and complex neural systems can be identified in conjunction with the musculoskeletal systems that they control. This additional layer of detail in NMS models has important relevance to the research communities related to rehabilitation robotics and human movement analysis.
|
|
11:15-11:30, Paper MoAT9.2 | |
>"Anti-Fatigue" Control for Over-Actuated Bionic Arm with Muscle Force Constraints |
Dong, Haiwei | New York Univ. AD |
Yazdkhasti, Setareh | Al Ghurair Univ. |
Figueroa, Nadia | New York Univ. Abu Dhabi (NYU AD) |
Saddik, Abdulmotaleb | New York Univ. AD and Univ. of Ottawa |
Keywords: Biologically-Inspired Robots, Biomimetics
Abstract: In this paper, we propose an "anti-fatigue" control method for bionic actuated systems. Specifically, the proposed method is illustrated on an over-actuated bionic arm. Our control method consists of two steps. In the first step, a set of linear equations is derived by connecting the acceleration description in both joint and muscle space. The pseudo inverse solution to these equations provides an initial optimal muscle force distribution. As a second step, we derive a gradient direction for muscle force redistribution. This allows the muscles to satisfy force constraints and generate an even distribution of forces throughout all the muscles (i.e. towards "anti-fatigue"). The overall proposed method is tested for a bending-stretching movement. We used two models (bionic arm with 6 and 10 muscles) to verify the method. The force distribution analysis verifies the "anti-fatigue" property of the computed muscle force. The efficiency comparison shows that the computational time does not increase significantly with the increase of muscle number. The tracking error statistics of the two models show the validity of the method.
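The two-step idea in the abstract can be illustrated with a small sketch, using a made-up moment-arm matrix and ignoring the paper's explicit force constraints: a pseudo-inverse gives an initial force distribution that produces the required joint torques, and a null-space redistribution then evens out the forces without changing those torques.
```python
import numpy as np

# Made-up moment-arm matrix (m): rows are joints, columns are muscles; tau = A @ f.
A = np.array([[0.04, -0.03, 0.05, -0.02, 0.03, -0.04],
              [0.02,  0.05, -0.03, 0.04, -0.02, 0.03]])
tau = np.array([1.5, -0.8])                      # required joint torques (N m)

# Step 1: minimum-norm force distribution via the pseudo-inverse.
f0 = np.linalg.pinv(A) @ tau

# Step 2: redistribute inside the null space of A to even out the forces
# ("anti-fatigue"); muscle force limits and positivity constraints are omitted here.
N = np.eye(A.shape[1]) - np.linalg.pinv(A) @ A   # null-space projector
f = f0.copy()
for _ in range(200):
    f -= 0.1 * N @ (f - f.mean())                # descend the force-spread gradient

print("torque error:", np.abs(A @ f - tau).max())         # stays ~0: torques preserved
print("force spread before/after:", f0.std(), f.std())
```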
|
|
11:30-11:45, Paper MoAT9.3 | |
>Minimalistic Decentralized Control Using Stochastic Resonance Inspired from a Skeletal Muscle |
Ikemoto, Shuhei | Osaka Univ. |
Inoue, Yosuke | Osaka Univ. |
Shimizu, Masahiro | Osaka Univ. |
Hosoda, Koh | Osaka Univ. |
Keywords: Biologically-Inspired Robots, Biomimetics, Cellular and Modular Robots
Abstract: The sarcomere is the functional unit of skeletal muscle, and it can only contract and relax in response to changes in Ca2+ concentration. In order from the simplest to the most complex, sarcomeres build structures corresponding to myofibrils, muscle fibers, muscle fiber bundles and the skeletal muscle. This distinctive hierarchical structure of skeletal muscles has been intensively studied in interdisciplinary research fields. In engineering, how the system efficiently controls a large number of sarcomeres to express a continuous output force is a question that has received particular attention. In this research, we propose a new decentralized control which is very simple but can manage many binary functional units by exploiting environmental noise. The validity of the method is confirmed in both numerical simulation and a developed biologically inspired actuator.
|
|
11:45-12:00, Paper MoAT9.4 | |
>The Poppy Humanoid Robot: Leg Design for Biped Locomotion |
Lapeyre, Matthieu | INRIA Bordeaux Sud Ouest |
Rouanet, Pierre | INRIA Bordeaux Sud-Ouest |
Oudeyer, Pierre-Yves | Inria and Ensta ParisTech |
Keywords: Biologically-Inspired Robots, Human-Robot Interaction, Humanoid Robots
Abstract: We introduce a novel humanoid robotic platform designed to jointly address three goals central to humanoid robotics: 1) study the role of morphology in biped locomotion; 2) study full-body compliant physical human-robot interaction; 3) be robust while remaining easy and fast to duplicate in order to facilitate experimentation. The approach taken relies on functional modeling of certain aspects of human morphology, optimizing materials and geometry, as well as on the use of 3D printing techniques. In this article, we focus on the design of specific morphological parts related to biped locomotion: the hip, the thigh, the limb mesh and the knee. We present initial experiments showing properties of the robot when walking with the physical guidance of a human.
|
|
12:00-12:15, Paper MoAT9.5 | |
>Adaptive Control System of an Insect Brain During Odor Source Localization |
Minegishi, Ryo | Tokyo Inst. of Tech. |
Takahashi, Yosuke | Tokyo Inst. of Tech. |
Takashima, Atsushi | Tokyo Inst. of Tech. |
Kurabayashi, Daisuke | Tokyo Inst. of Tech. |
Kanzaki, Ryohei | The Univ. of Tokyo |
Keywords: Neurorobotics, Brain Machine Interface, Adaptive Control
Abstract: To realize an autonomous odor source localization robot, we focused on the adaptability of an insect's brain in compensating for rotational disturbances during odor source searching behavior. We manipulated motor outputs to control the sensory feedback of an insect using a brain-machine hybrid system. This system is composed of an insect's head and a two-wheeled mobile robot. The velocity of the robot is proportional to neural activities descending from the insect brain. We successfully manipulated the behavior of the robot. In disturbance experiments, insects responded to the given rotational disturbances by modifying their neural activities to produce a compensatory angular velocity. We modeled this compensation control system as an output-error model. We calculated its parameters under different motor gains to reveal it as an adaptive controller. We propose that an insect has its own appropriate angular velocity during odor source localization, and we performed simulation experiments involving an odor-source-searching agent and an odor distribution environment. We calculated the cost of odor source localization as a function of the angular velocity of the agent, and found that it has a minimum value.
|
|
12:15-12:30, Paper MoAT9.6 | |
>Speed Generalization Capabilities of a Cerebellar Model on a Rapid Navigation Task |
Herreros-Alonso, Ivan | Univ. Pompeu Fabra |
Maffei, Giovanni | SPECS, UPF |
Brandi, Santiago | UPF |
Sanchez Fibla, Marti | Univ. Pompeu Fabra (UPF) |
Verschure, Paul | Catalan Inst. of Advanced Studies (ICREA), Foundation &Univ. |
Attachments: Video Attachment
Keywords: Neurorobotics, Learning and Adaptive Systems, Adaptive Control
Abstract: It is suggested that the cerebellum can replace reflexes by anticipatory actions. Classical conditioning paradigms such as eyeblink conditioning offer a means to study the acquisition of anticipatory actions. In eyeblink conditioning a Conditioned Response (CR) to a Conditioning Stimulus (CS) is acquired such that it peaks at the expected time of arrival of a noxious Unconditioned Stimulus (US). Interestingly, the CS intensity effect links the intensity of the CS with the latency and amplitude of the CR. Having trained an animal with a tone of a certain loudness as a CS, the response to a CS of increased loudness is elicited earlier and has greater amplitude, with the opposite effect achieved lowering CS loudness. Here we propose that the CS intensity effect can be considered a built-in sensorimotor contingency applicable in the acquisition of anticipatory avoidance responses and that this contingency stems from the non-linear dynamics of the input stage of the cerebellum, conforming a generalization extrinsic to the cerebellar learning algorithm. Finally, with a robotic task, we demonstrate how this contingency between stimulus intensity and timing of the response eases the acquisition of skilled behavior as a fast action.
|
|
MoAT10 |
Room609 |
Localization I |
Regular Session |
Chair: Suzuki, Taro | Tokyo Univ. of Marine Science and Tech. |
Co-Chair: Bonnifait, Philippe | Univ. of Tech. of Compiegne |
|
11:00-11:15, Paper MoAT10.1 | |
>Precise Point Positioning for Mobile Robots Using Software GNSS Receiver and QZSS LEX Signal |
Suzuki, Taro | Tokyo Univ. of Marine Science and Tech. |
Kubo, Nobuaki | Tokyo Univ. of Marine Science and Tech. |
Keywords: Localization, Navigation, Field Robots
Abstract: This paper describes outdoor localization for a mobile robot using precise point positioning (PPP) based on the Quasi-Zenith Satellite System (QZSS) L-band Experiment (LEX) signal. For autonomous navigation applications, the real-time kinematic (RTK) global positioning system (GPS) technique is widely used to estimate the user position with high accuracy in real time. However, RTK-GPS requires a reference station, and there are data acquisition costs involved in estimating the position. Our approach corrects the position error by applying PPP using the QZSS LEX message. PPP can estimate a single receiver's position without any reference station or baseline, through the use of precise satellite orbit and clock information. We developed a method for extracting the QZSS LEX message in real time using a software GNSS receiver. We then constructed the PPP framework based on the LEX message, which contains the satellite ephemeris and clock errors. Finally, we conducted field experiments to evaluate the accuracy and precision of our proposed method. The experimental results confirmed that our method achieves a localization precision of 1.29 m (root mean square) without using a GNSS reference station.
|
|
11:15-11:30, Paper MoAT10.2 | |
>C-LOG: A Chamfer Distance Based Method for Localisation in Occupancy Grid-Maps |
Dantanarayana, Lakshitha | Univ. of Tech. Sydney |
Ranasinghe, Ravindra | Univ. of Tech. Sydney |
Dissanayake, Gamini | Univ. of Tech. Sydney |
Attachments: Video Attachment
Keywords: Localization
Abstract: In this paper, the problem of localising a robot within a known two-dimensional environment is formulated as one of minimising the Chamfer Distance between the corresponding occupancy grid map and information gathered from a sensor such as a laser range finder. It is shown that this non-linear optimisation problem can be solved efficiently and that the resulting localisation algorithm has a number of attractive characteristics when compared with the conventional particle filter based solution for robot localisation in occupancy grids. The proposed algorithm is able to perform well even when robot odometry is unavailable, is insensitive to noise models, and does not critically depend on any tuning parameters. Experimental results based on a number of public domain datasets as well as data collected by the authors are used to demonstrate the effectiveness of the proposed algorithm.
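As an aside for readers unfamiliar with Chamfer-distance localisation, the sketch below illustrates the general idea (it is not the C-LOG implementation): precompute a distance transform of the occupancy grid, then minimise the summed distances of the projected scan endpoints over the pose. The toy map, scan and optimiser choice are assumptions made for the example.
```python
# Minimal sketch of Chamfer-distance localisation in an occupancy grid
# (illustrative only; not the C-LOG implementation described in the paper).
import numpy as np
from scipy.ndimage import distance_transform_edt
from scipy.optimize import minimize

def chamfer_cost(pose, scan_xy, dist_field, resolution):
    """Sum of distances from projected scan endpoints to the nearest occupied cell."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    pts = scan_xy @ np.array([[c, s], [-s, c]]) + np.array([x, y])  # robot -> world
    idx = np.clip(np.round(pts / resolution).astype(int), 0,
                  np.array(dist_field.shape) - 1)
    return dist_field[idx[:, 0], idx[:, 1]].sum()

# Toy map: two perpendicular walls rasterised into a 10 m x 10 m grid.
resolution = 0.1                                      # metres per cell
wall_a = np.stack([np.full(60, 5.0), np.linspace(2.0, 8.0, 60)], axis=1)
wall_b = np.stack([np.linspace(2.0, 5.0, 30), np.full(30, 2.0)], axis=1)
world_pts = np.vstack([wall_a, wall_b])
grid = np.zeros((100, 100), dtype=bool)
cells = np.round(world_pts / resolution).astype(int)
grid[cells[:, 0], cells[:, 1]] = True
# Distance (in metres) from every cell to the nearest occupied cell.
dist_field = distance_transform_edt(~grid) * resolution

# Synthetic scan: the same wall points expressed in the robot frame of a true pose.
true_pose = np.array([3.0, 5.0, 0.05])
c, s = np.cos(true_pose[2]), np.sin(true_pose[2])
scan_xy = (world_pts - true_pose[:2]) @ np.array([[c, -s], [s, c]])  # world -> robot

# Minimise the Chamfer cost starting from a perturbed guess (no odometry needed).
guess = true_pose + np.array([0.3, -0.2, 0.1])
result = minimize(chamfer_cost, guess, args=(scan_xy, dist_field, resolution),
                  method="Nelder-Mead")
print("estimated pose:", result.x)
```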
|
|
11:30-11:45, Paper MoAT10.3 | |
>Normal Distributions Transform Monte-Carlo Localization (NDT-MCL) |
Saarinen, Jari Pekka | Aalto Univ. |
Andreasson, Henrik | Örebro Univ. |
Stoyanov, Todor | Center for Applied Autonomous Sensor Systems |
Lilienthal, Achim J. | Örebro Univ. |
Keywords: Localization, Navigation
Abstract: Industrial applications often impose hard requirements on the precision of autonomous vehicle systems. As a consequence, industrial Automatically Guided Vehicle (AGV) systems still use high-cost, infrastructure-based positioning solutions. In this paper we propose a map-based localization method that fulfills the requirements on precision and repeatability typical of industrial application scenarios. The proposed method - Normal Distributions Transform Monte Carlo Localization (NDT-MCL) - is based on a well-established probabilistic framework. In a novel contribution, we formulate the MCL localization approach using the Normal Distributions Transform (NDT) as an underlying representation for both map and sensor data. By relaxing the hard discretization assumption imposed by grid-map models and utilizing the piecewise continuous NDT representation, the proposed algorithm achieves substantially improved accuracy and repeatability. The proposed NDT-MCL algorithm is evaluated using offline data sets from both laboratory and real-world industrial environments. Additionally, we report a comparison of the proposed algorithm to grid-based MCL and to a commercial localization system when used in closed loop with the control system of an AGV platform. In all tests the proposed algorithm is demonstrated to provide performance superior to that of standard grid-based MCL and comparable to the performance of a commercial infrastructure-based positioning system.
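A toy sketch of the core NDT-MCL ingredient, for illustration only (the paper's system also covers motion prediction, resampling and an NDT representation of the sensor data): fit one Gaussian per map cell, then weight each particle by the likelihood of its transformed scan points under those Gaussians.
```python
# Toy sketch of NDT-based particle weighting (illustrative only; not the
# NDT-MCL implementation from the paper).
import numpy as np

def build_ndt_map(points, cell_size=1.0):
    """Fit one Gaussian (mean, inverse covariance) to the map points in each cell."""
    keys = np.floor(points / cell_size).astype(int)
    ndt = {}
    for key in {tuple(k) for k in keys}:
        pts = points[(keys == key).all(axis=1)]
        if len(pts) >= 3:                               # need a few points per cell
            cov = np.cov(pts.T) + 1e-3 * np.eye(2)      # regularise to keep it invertible
            ndt[key] = (pts.mean(axis=0), np.linalg.inv(cov))
    return ndt

def particle_weight(pose, scan_xy, ndt, cell_size=1.0):
    """Score a particle by the Gaussian likelihood of its transformed scan points."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    pts = scan_xy @ np.array([[c, s], [-s, c]]) + np.array([x, y])
    w = 1e-12                                           # floor so weights never vanish
    for p in pts:
        key = tuple(np.floor(p / cell_size).astype(int))
        if key in ndt:
            mu, inv_cov = ndt[key]
            d = p - mu
            w += np.exp(-0.5 * d @ inv_cov @ d)
    return w

# Usage: weight particles against an NDT map of a straight wall, then normalise
# (resampling proportional to these weights would follow in a full MCL loop).
rng = np.random.default_rng(0)
map_points = np.column_stack([np.linspace(0.0, 10.0, 300),
                              rng.normal(0.0, 0.02, 300)])       # slightly noisy wall
ndt = build_ndt_map(map_points)
scan = np.column_stack([np.linspace(-2.0, 6.0, 50), np.full(50, -2.0)])  # wall seen from (2, 2)
particles = rng.normal([2.0, 2.0, 0.0], [0.5, 0.5, 0.1], size=(200, 3))
weights = np.array([particle_weight(p, scan, ndt) for p in particles])
weights /= weights.sum()
```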
|
|
11:45-12:00, Paper MoAT10.4 | |
>Mechanisms for Efficient Integration of RSSI in Localization and Tracking with Wireless Camera Networks |
De San Bernabe, Alberto | Univ. de Sevilla |
Martinez-de Dios, J.R. | Univ. of Seville |
Ollero, Anibal | Univ. of Seville |
Keywords: Sensor Networks, Localization
Abstract: This paper proposes a scheme that exploits synergies between RSSI and camera measurements for object localization and tracking using Wireless Camera Networks (WCN). It is based on three main mechanisms: a training method that accurately adapts RSSI-range models to the particular environment; a sensor activation/deactivation method that balances the different information contributions and energy consumptions of camera and RSSI measurements; and a distributed Information Filter that integrates the available measurements. The joint use of these mechanisms drastically reduces energy consumption (by 40%) with no significant degradation w.r.t. existing schemes based only on cameras, and shows better robustness to target occlusions. The scheme has been implemented and validated in the indoor CONET Integrated Testbed.
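The distributed Information Filter mentioned above fuses whatever camera or RSSI measurements happen to be active; a generic (non-distributed) sketch of that measurement update is given below, with made-up linearised measurement models, to show why dropping a low-information sensor costs little accuracy.
```python
# Generic information-filter measurement update for target localisation (sketch only;
# not the implementation used on the CONET testbed).
import numpy as np

def information_update(Y, y, H, R, z):
    """Add one sensor's information contribution: Y += H'R^-1 H, y += H'R^-1 z."""
    Rinv = np.linalg.inv(R)
    return Y + H.T @ Rinv @ H, y + H.T @ Rinv @ z

# Prior on the 2D target position, in information form (Y = Sigma^-1, y = Y @ mean).
Y = np.linalg.inv(np.diag([4.0, 4.0]))
y = Y @ np.array([1.0, 1.0])

# A camera gives an accurate reading of one coordinate, an RSSI node a coarse one;
# the linearised measurement models here are assumptions made for the illustration.
H_cam, R_cam = np.array([[1.0, 0.0]]), np.array([[0.05]])
H_rssi, R_rssi = np.array([[1.0, 0.0]]), np.array([[4.0]])
Y, y = information_update(Y, y, H_cam, R_cam, np.array([2.1]))
Y, y = information_update(Y, y, H_rssi, R_rssi, np.array([2.8]))

mean = np.linalg.solve(Y, y)
print("fused estimate:", mean, "covariance:", np.linalg.inv(Y))
```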
|
|
12:00-12:15, Paper MoAT10.5 | |
>Continuous Vehicle Localisation Using Sparse 3D Sensing, Kernelised Renyi Distance and Fast Gauss Transforms |
Sheehan, Mark | Oxford Univ. |
Harrison, Alastair | Univ. of Oxford |
Newman, Paul | Oxford Univ. |
Keywords: Range Sensing, Localization, Mapping
Abstract: This paper is about estimating a smooth, continuous-time trajectory of a vehicle relative to a prior 3D laser map. We pose the estimation problem as that of finding a sequence of Catmull-Rom splines which optimise the Kernelised Rényi Distance (KRD) between the prior map and live measurements from a 3D laser sensor. Our approach treats the laser measurements as a continual stream of data from a smoothly moving vehicle. We side-step entirely the segmentation and feature matching problems incumbent in traditional point cloud matching algorithms, relying instead on a smooth and well-behaved objective function. Importantly, our approach admits the exploitation of sensors with modest sampling rates - sensors that take seconds to densely sample the workspace. We show how, by appropriate use of the Improved Fast Gauss Transform, we can reduce the order of the estimation problem from quadratic (the straightforward application of the KRD) to linear. Although in this paper we use 3D laser, our approach is also applicable to vehicles using 2D laser sensing or dense stereo. We demonstrate and evaluate the performance of our approach when estimating the full 6DOF continuous-time pose of a road vehicle over more than 2.7 km of outdoor travel.
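For intuition, the sketch below evaluates a naive kernel-correlation objective in the spirit of the KRD between a prior map and a transformed live scan; it deliberately keeps the quadratic cost that the paper removes with the Improved Fast Gauss Transform, and it optimises a single pose rather than Catmull-Rom spline knots.
```python
# Naive kernel-correlation objective in the spirit of the Kernelised Renyi Distance
# (illustrative; the paper uses spline trajectories and the Improved Fast Gauss
# Transform to avoid this quadratic pairwise cost).
import numpy as np

def kernel_correlation(map_pts, live_pts, sigma=0.5):
    """Average Gaussian affinity between prior-map points and live sensor points."""
    diff = map_pts[:, None, :] - live_pts[None, :, :]          # (N, M, 3)
    sq = np.sum(diff ** 2, axis=-1)
    return np.mean(np.exp(-sq / (2.0 * sigma ** 2)))

def krd_cost(pose_xyyaw, map_pts, live_pts, sigma=0.5):
    """Negative log kernel correlation after applying a planar pose to the live scan."""
    x, y, yaw = pose_xyyaw
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    moved = live_pts @ R.T + np.array([x, y, 0.0])
    return -np.log(kernel_correlation(map_pts, moved, sigma) + 1e-12)

# A lower cost means the transformed live scan overlaps the prior map better;
# the pose (or spline knots, in the paper) can be optimised against this cost.
rng = np.random.default_rng(1)
map_pts = rng.uniform(0, 10, size=(500, 3))
live_pts = map_pts[:200] - np.array([0.4, -0.2, 0.0])          # shifted copy of part of the map
print(krd_cost(np.zeros(3), map_pts, live_pts),
      krd_cost(np.array([0.4, -0.2, 0.0]), map_pts, live_pts))
```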
|
|
12:15-12:30, Paper MoAT10.6 | |
>Mapping and Localization Using GPS, Lane Markings and Proprioceptive Sensors |
Tao, Zui | Univ. de Tech. de Compiègne |
Bonnifait, Philippe | Univ. of Tech. of Compiegne |
Fremont, Vincent | UTC - HEUDIASYC CNRS |
Ibanez-Guzman, Javier | Renault |
Keywords: Localization, Mapping, Sensor Fusion
Abstract: Estimating the pose in real time is a primary function for intelligent vehicle navigation. Whilst different solutions exist, most of them rely on the use of high-end sensors. This paper proposes a solution that exploits an automotive-type L1-GPS receiver, features extracted by low-cost perception sensors, and vehicle proprioceptive information. A key idea is to use the lane detection function of a video camera to retrieve accurate lateral and orientation information with respect to road lane markings. To this end, lane markings are mobile-mapped by the vehicle itself during a first stage using an accurate localizer. The resulting map then allows camera-detected features to be exploited for autonomous real-time localization. The results are combined with GPS estimates and dead-reckoning sensors in order to provide localization information with high availability. As L1-GPS errors can be large and are time-correlated, we study several GPS error models that are experimentally tested with shaping filters. The approach demonstrates that the use of low-cost sensors with adequate data-fusion algorithms should lead to computer-controlled guidance functions in complex road networks.
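As background on the shaping-filter idea, a first-order Gauss-Markov process is one common way to model time-correlated GPS error; the sketch below simulates such an error. It is offered as a generic illustration, not as one of the specific models evaluated in the paper.
```python
# Generic first-order Gauss-Markov shaping filter for time-correlated GPS error
# (a common choice; not necessarily one of the models tested in the paper).
import numpy as np

def simulate_gps_error(n_steps, dt=1.0, tau=60.0, sigma=2.0, seed=0):
    """Exponentially correlated error e_k with correlation time tau and std sigma."""
    rng = np.random.default_rng(seed)
    phi = np.exp(-dt / tau)                       # discrete-time correlation factor
    q = sigma * np.sqrt(1.0 - phi ** 2)           # driving-noise std keeping Var(e) = sigma^2
    e = np.zeros(n_steps)
    for k in range(1, n_steps):
        e[k] = phi * e[k - 1] + q * rng.normal()
    return e

# Such a model can be appended as an extra state in the data-fusion filter so that
# the estimator tracks (and partially removes) the slowly varying GPS bias.
errors = simulate_gps_error(600)
print("sample std of simulated GPS error:", errors.std())
```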
|
|
MoAT11 |
Room801 |
Parallel Mechanism |
Regular Session |
Chair: Hu, Jwu-Sheng | ITRI |
Co-Chair: Martinet, Philippe | Ec. Centrale de Nantes |
|
11:00-11:15, Paper MoAT11.1 | |
>Design of a Novel Tremor Suppression Device Using a Linear Delta Manipulator for Micromanipulation |
Chang, Dongjune | KAIST |
Gu, Gwang Min | KAIST |
Kim, Jung | KAIST |
Keywords: Mechanism Design, Parallel Robots, Micro-manipulation
Abstract: In this paper, the design of a high-precision device using a Linear Delta manipulator is proposed to compensate for the tremor signal in three translational directions. A Linear Delta manipulator is a suitable tremor suppression device due to its simple structure and high stiffness in the vertical direction, which suits micromanipulation applications such as microsurgery and cell manipulation. In order to implement the Linear Delta mechanism in the device, three voice coil motors and three high-resolution linear encoders were used. A flexure mechanism was applied to the device to avoid the friction effects of small ball joints. Finally, experiments to validate the proposed device were performed as follows: (1) position control in each axis for accuracy, and (2) sine wave tracking (500 micrometers, 12 Hz) to assess the bandwidth of the system.
|
|
11:15-11:30, Paper MoAT11.2 | |
>Using UWB Sensor for Delta Robot Vibration Detection |
Chen, Jyun-Long | ITRI |
Tseng, Tien-Cheng | ITRI |
Cheng, Yu-Yi | ITRI |
Chang, Kuang-I | ITRI |
Hu, Jwu-Sheng | ITRI |
Keywords: Range Sensing, Industrial Robots, Factory Automation
Abstract: This study proposes using an ultra-wideband (UWB) sensor to detect the vibration of a delta robot and the distance between the UWB sensor and the robot arm. Based on the radar propagation principle and the proposed algorithm, the vibration status of the robot arm can be obtained. Moreover, the vibration can be reduced by feeding this information back to the controller of the robot arm. The advantage of the proposed sensor is that, since it does not contact the robot arm, it is more flexible to use and does not add unnecessary payload, as accelerometers do. With proper calibration, the vibration frequency and the real-time absolute position of the robot arm were calculated and demonstrated. The experimental results show that the correlations for the measured frequency and distance are close to 1.
|
|
11:30-11:45, Paper MoAT11.3 | |
>High Speed Parallel Kinematic Manipulator State Estimation from Legs Observation |
Ozgur, Erol | Pascal Inst. |
Dahmouche, Redwan | Univ. de Franche Comté |
Andreff, Nicolas | Univ. de Franche Comté |
Martinet, Philippe | Ec. Centrale de Nantes |
Keywords: Parallel Robots, Visual Tracking
Abstract: To control the dynamics of a parallel robot, the state feedback must be measured accurately and quickly. In this paper, we show how to estimate positions and velocities simultaneously (i.e., the state feedback) with reasonable accuracy and speed, using only sequential visual observations of the contours of the legs. A single-iteration virtual visual servoing scheme rapidly regulates an error defined on these contours. We validated this approach, a step towards controlling parallel robots at high speed through their leg kinematics, with simulations and experiments.
|
|
11:45-12:00, Paper MoAT11.4 | |
>Minimal Representation for the Control of the Adept Quattro with Rigid Platform Via Leg Observation Considering a Hidden Robot Model |
Rosenzveig, Victor | Ec. Centrale de Nantes, IRCCyN |
Briot, Sébastien | IRCCyN |
Martinet, Philippe | Ec. Centrale de Nantes |
Keywords: Parallel Robots, Visual Servoing, Kinematics
Abstract: Previous works on the Gough-Stewart (GS) platform have shown that visual servoing based on the observation of its leg directions is possible by observing only three of its six legs, but that convergence to the desired pose is not guaranteed. This can be explained by considering that visual servoing of the leg directions of the GS platform is equivalent to controlling another robot, the 3-UPS, which has assembly modes and singular configurations different from those of the GS platform. Considering this hidden robot model allowed the simplification of the singularity analysis of the mapping between the leg direction space and the Cartesian space. In this paper, the work on the definition of the hidden robot models involved in visual servoing from the observation of the robot leg directions is extended to another robot, the Adept Quattro. It is shown that the hidden robot model is completely different from the model involved in the control of the GS platform. Therefore, the results obtained for the GS platform do not carry over to this robot. The hidden robot has assembly modes and singular configurations different from those of the Quattro. An accuracy analysis is performed to show the importance of the leg selection. All these results are validated on a Quattro simulator created using ADAMS/Controls and interfaced with Matlab/Simulink.
|
|
12:00-12:15, Paper MoAT11.5 | |
>A Novel (3T-1R) Redundant Parallel Mechanism with Large Operational Workspace and Rotational Capability |
Shayya, Samah | Tecnalia France |
Krut, Sebastien | LIRMM (CNRS & Univ. Montpellier 2) |
Company, Olivier | Univ. of Montpellier 2 |
Baradat, Cédric | Tecnalia France |
Pierrot, François | CNRS - LIRMM |
Keywords: Parallel Robots, Joint/Mechanism, Redundant Robots
Abstract: This paper presents a novel 4-DOF (3T-1R) redundant parallel mechanism, along with its complete study regarding the inverse and direct geometric models (IGM and DGM) as well as singularity and workspace analysis. The robot is capable of performing a half-turn about the z axis (a complete turn would be theoretically possible were it not for unavoidable inter-collisions in practice), and, having all of its prismatic actuators along one direction, it offers an independent x motion limited only by the stroke of the prismatic actuators. The mechanism is characterized by high dynamic capabilities, since its actuators are at the base. Moreover, the performance of the robot is evaluated considering isotropy in velocities and forces.
|
|
12:15-12:30, Paper MoAT11.6 | |
>A 3T2R Parallel and Partially Decoupled Kinematic Architecture |
Malosio, Matteo | Italian National Res. Council |
Negri, Simone Pio | ITIA-CNR |
Pedrocchi, Nicola | National Res. Council of Italy |
Vicentini, Federico | Italian National Res. Council (CNR) |
Molinari, Lorenzo | CNR-ITIA |
Keywords: Parallel Robots, Kinematics, Mechanism Design
Abstract: This paper presents a parallel and partially decoupled mechanism characterized by three translational and two rotational degrees of freedom. A set of parallel kinematic chains actuates five degrees of freedom of the mobile platform and constrains one of its rotations. Its kinematics combines advantages typical of parallel architectures, as high dynamics, with positive aspects of partially decoupled ones, in terms of mechanical design, control and motion planning, through a relatively simple direct kinematic formulation. The presented architecture constitutes the mechanical heart of a robotic prototype designed to actively support the patient's head in open-skull awake surgery.
|
|
MoAT12 |
Room610 |
Teleoperation for Medical Robotics |
Regular Session |
Chair: Tsai, Chia-Hung Dylan | Osaka Univ. |
Co-Chair: Iordachita, Iulian | Johns Hopkins Univ. |
|
11:00-11:15, Paper MoAT12.1 | |
>Teleoperated Control Based on Virtual Fixtures for a Redundant Surgical System |
Lopez, Edoardo | Univ. Campus Bio-Medico di Roma |
Zollo, Loredana | Univ. Campus Bio-Medico |
Guglielmelli, Eugenio | Univ. Campus Bio-Medico |
Keywords: Teleoperated surgical systems, Haptics and Haptic Interfaces, Kinematics
Abstract: One of the main limitations of systems for Minimally Invasive Robotic Surgery is the lack of haptic feedback. In this paper, a teleoperated system for robotic surgery is introduced that is able to guide the surgeon towards a target anatomy by providing her with force feedback based on Virtual Fixtures (VF). The teleoperated system has a redundant slave robot. A closed-form inverse kinematics solution, based on a one-variable optimization approach, is proposed to resolve the redundancy. Four different cost functions are proposed in the paper, and one is implemented and validated, i.e. the cost function aimed at minimizing the space occupied by the robot in the operating theater during the surgical procedure. The proposed teleoperated architecture has been tested on a teleoperated system composed of a 3-DoFs haptic joystick and a 7-DoFs anthropomorphic manipulator. Experimental tests on 12 volunteer subjects have been carried out. Results demonstrate that force feedback based on VF provides a statistically significant enhancement of procedure accuracy.
|
|
11:15-11:30, Paper MoAT12.2 | |
>Stability and Performance Analysis of Three-Channel Teleoperation Control Architectures for Medical Applications |
Albakri, Abdulrahman | Univ. Montpellier II - LIRMM |
Liu, Chao | LIRMM (UMR5506), CNRS, France |
Poignet, Philippe | LIRMM UMR 5506 CNRS UM2 |
Keywords: Teleoperated surgical systems, Telerobotics, Surgical Robotics
Abstract: Tele-surgery has become more and more popular in robot-assisted medical intervention. Most existing teleoperation architectures for medical applications adopt 2-channel architectures. The 2-channel architectures have been evaluated in the literature, and it has been shown that some of them, e.g. position-force (P-F), are able to provide the surgeon with a reliable haptic sense of the working environment (transparency). However, the stability of the P-F architecture is still a considerable concern, especially when physiological disturbances exist in the remote environment. The P-PF architecture has proved to provide a convenient alternative. With one more channel, 3-channel teleoperation architectures present promising options due to their augmented design flexibility. This paper evaluates the stability and transparency of general 3-channel bilateral teleoperation control architectures and provides design framework guidelines to improve the architectures' stability robustness and optimize their transparency. Simulation evaluations are provided to illustrate how the optimal 3-channel teleoperation architecture is chosen for medical applications given their dedicated requirements.
|
|
11:30-11:45, Paper MoAT12.3 | |
>Telerobotic Palpation for Tumor Localization with Depth Estimation |
Talasaz, Ali | Univ. of Western Ontario |
Patel, Rajnikant V. | The Univ. of Western Ontario |
Keywords: Force and Tactile Sensing, Medical Robots and Systems, Haptics and Haptic Interfaces
Abstract: This work is aimed at developing a new minimally invasive approach to characterize tissue properties in real time during telerobotic palpation and to localize tissue abnormality while estimating its depth. This method relies on using a minimally invasive probe with a rigidly mounted tactile sensor at the tip to capture the force distribution map and the indentation depth by each tactile element and thereby generating a stiffness map for the palpated tissue. The hybrid impedance control technique is used for this approach to enable the operator to switch between position control and force control and thereby to autonomously obtain the required information from the remote tissue. The operator would then be able to localize tissue abnormality based on the force distribution map, the tissue stiffness map and the indentation depth which are visually presented to him/her in real time. This method also enables the operator to estimate the depth at which the tissue abnormality is located. Our results show that tactile sensing alone may be unable to detect tumors embedded deep inside tissue and may also not be a good alternative for palpation on uneven tissue surfaces.
|
|
11:45-12:00, Paper MoAT12.4 | |
>Real-Time Tracking of a Bevel-Tip Needle with Varying Insertion Depth: Toward Teleoperated MRI-Guided Needle Steering |
Seifabadi, Reza | The Johns Hopkins Univ. Queens Univ. |
Escobar Gomez, Esteban | The Johns Hopkins Univ. |
Aalamifar, Fereshteh | Johns Hopkins Univ. |
Fichtinger, Gabor | Queen's Univ. |
Iordachita, Iulian | Johns Hopkins Univ. |
Attachments: Video Attachment
Keywords: Teleoperated surgical systems, Visual Tracking, Medical Robots and Systems
Abstract: This study presents one of the enabling technologies for teleoperated bevel-tip needle steering under real-time MRI guidance, i.e., the capability of tracking the needle with higher accuracy and bandwidth than real-time MRI. Three fibers, each with three Fiber Bragg Gratings (FBGs), were embedded into the 0.6 mm inner stylet of a 20G MRI-compatible biopsy needle. The axial force caused by the bevel tip was considered in the analysis using beam-column theory. Since the insertion depth varies, the minimum number of sensors and their optimal locations in the fibers were determined such that the tip position estimation error is below 0.5 mm for all insertion depths. A practical and accurate calibration method for the apparatus is presented. The instrumented needle was fabricated to fit in the needle driver unit of an MRI-compatible needle steering robot. The tracking apparatus was calibrated, including compensation for temperature changes in tissue during insertion. Experimental results showed a needle tip tracking error below 0.5 mm at different insertion depths. The real-time 3D shape of the needle was visualized in 3D Slicer, enabling navigation of the needle in real time.
|
|
12:00-12:15, Paper MoAT12.5 | |
>Projection-Based Force Reflection Algorithms for Teleoperated Rehabilitation Therapy |
Atashzar, Seyed Farokh | Western Univ. (The Univ. of Western Ontario) |
Polushin, Ilia G. | Western Univ. |
Patel, Rajnikant V. | The Univ. of Western Ontario |
Keywords: Telerobotics, Rehabilitation Robotics
Abstract: The problem of designing a haptics-enabled teleoperated rehabilitation system in the presence of communication delays is addressed. In a teleoperated rehabilitation system, communication delays introduce a phase shift which may result in the task inversion phenomenon. To overcome task inversion, a new type of projection-based force reflection algorithm is proposed which is suitable for assistive/resistive therapy in the presence of irregular communication delays. Additionally, algorithms for augmented therapy are introduced which combine projection-based force reflection with a delay-free local virtual therapist. A small-gain design is developed which guarantees stability of the proposed schemes for both assistive and resistive modes of therapy. Simulations and experimental results are presented which confirm the improvement achieved by the proposed methods.
|
|
12:15-12:30, Paper MoAT12.6 | |
>Master / Slave Control of Flexible Instruments for Minimally Invasive Surgery |
De Donno, Antonio | Univ. of Strasbourg |
Nageotte, Florent | Univ. of Strasbourg |
Zanne, Philippe | Univ. of Strasbourg |
Zorn, Lucile | Univ. of Strasbourg |
De Mathelin, Michel | Univ. of Strasbourg |
Keywords: Teleoperated surgical systems, Surgical Robotics, Tendon/Wire Mechanism
Abstract: STRAS is a flexible robotic system based on the Anubis platform from Karl Storz and is aimed at intraluminal and transluminal procedures. It is composed of three cable-driven sub-systems: one endoscope and two insertable instruments. The bending instruments have three degrees of freedom and can be teleoperated by the user via two commercial master interfaces (Omega.7, Force Dimension). In this paper we investigate several ways to map motions from the master side to the instruments, from joint-by-joint control to Cartesian control. We describe these mappings and compare them in elementary tasks in an attempt to analyze how non-linearities affect the accuracy of control. Results show that joint control and pseudo-Cartesian control provide equivalent accuracy but present different difficulties for the user.
|
|
MoAT13 |
Room802 |
Micro-Manipulation |
Regular Session |
Chair: Sun, Dong | City Univ. of Hong Kong |
Co-Chair: Morishima, Keisuke | Osaka Univ. |
|
11:00-11:15, Paper MoAT13.1 | |
>Cell Patterning with Robotically Controlled Optical Tweezers |
Yan, Xiao | City Univ. of Hong Kong |
Sun, Dong | City Univ. of Hong Kong |
Keywords: Micro-manipulation, Nano manipilation, Nano automation
Abstract: This paper presents the use of robotically controlled optical tweezers to manipulate a group of cells into a region of interest to form a required pattern. A novel multilevel-based topology is designed to represent different cell patterns in the region of interest. A potential-function-based controller is developed to drive the cells to form the required pattern. A pattern regulatory control force is developed which particularly addresses the special case in which cells stop at undesired positions. The system stability is analyzed using a Lyapunov approach. Experiments are performed with robotically controlled optical tweezers to demonstrate the effectiveness of the proposed approach.
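A generic potential-function controller of the kind referred to above can be sketched as plain gradient descent on a quadratic potential; the example below drives scattered cell positions onto a circular pattern and omits both the optical-trap physics and the paper's pattern regulatory force.
```python
# Generic potential-function controller driving trapped cells toward pattern targets
# (illustrative only; trap dynamics and the paper's regulatory term are omitted).
import numpy as np

def potential_step(cells, targets, gain=0.5, dt=0.1):
    """Gradient descent on U = 0.5 * sum ||cell_i - target_i||^2."""
    grad = cells - targets                      # dU/dcell_i
    return cells - gain * grad * dt             # move each trap down the gradient

rng = np.random.default_rng(3)
targets = np.array([[np.cos(a), np.sin(a)]
                    for a in np.linspace(0, 2 * np.pi, 8, endpoint=False)])
cells = targets + rng.normal(scale=0.5, size=targets.shape)   # scattered initial cells

for _ in range(200):
    cells = potential_step(cells, targets)
print("max residual distance to pattern:", np.linalg.norm(cells - targets, axis=1).max())
```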
|
|
11:15-11:30, Paper MoAT13.2 | |
>Automated Microfluidic System for Orientation Control of Mouse Embryos |
Shin, Yong Kyun | Korea Advanced Inst. of Science and Tech. (KAIST) |
Kim, Yeongjin | KAIST |
Kim, Jung | KAIST |
Keywords: Micro-manipulation, Automation in Life Sciences: Biotechnology, Pharmaceutical and Health Care, Visual Tracking
Abstract: Microinjection and biopsy of oocytes and embryos in Assisted Reproductive Technology (ART) require highly delicate handling of cells. In particular, efficient control of cell orientation is necessary to maintain their integrity during tool penetration, which currently remains challenging to accomplish with the existing method of repeated aspiration/release via a micropipette. We present a microfluidic platform that automates the process of cell orientation control and trapping by means of hydrodynamic force and vision-based position control. The device is accessible by conventional micropipettes via a cavity, allowing immobilized cells to be operated on. An orientation control algorithm based on the movement of the embryo within the microchannel is proposed. Visual tracking of the polar body is used to provide the information on cell orientation. Experimental results with mouse embryos indicate that cell orientation can be systematically and autonomously controlled without human intervention, which therefore provides a framework for further development of a robotics approach to the precise manipulation of microparticles within microfluidic devices.
|
|
11:30-11:45, Paper MoAT13.3 | |
>Piezoelectric Inkjet-based One Cell per One Droplet Automatic Printing by Image Processing |
The, Ryanto | Osaka Univ. |
Yamaguchi, Shuichi | Microjet Corp. |
Ueno, Akira | Microjet Corp. |
Akiyama, Yoshitake | Osaka Univ. |
Morishima, Keisuke | Osaka Univ. |
Keywords: Automation in Life Sciences: Biotechnology, Pharmaceutical and Health Care, Micro-manipulation, Visual Tracking
Abstract: Piezoelectric inkjet printer technology has recently gained attention and has been utilized to eject particles and cells at the scale of several tens of μm, which is large compared to the colloids used in printer ink. However, the inkjet head was originally designed for color ink; therefore, problems such as head clogging and trajectory errors occurred. In this study, those problems were addressed, and the optimal conditions for stable cell ejection were verified. Furthermore, a one-cell-per-one-droplet printing method was established by observing cell positions inside the inkjet head. Finally, an automatic one-cell-per-one-droplet printing system was developed by processing images of the inkjet head to automatically detect cell positions. The automatic one-cell-per-one-droplet printing system achieved a 98% success rate.
|
|
11:45-12:00, Paper MoAT13.4 | |
>Multiple Microfluidic Stream Based Manipulation for Single Cell Handling* |
Yalikun, Yaxar | Tokyo Univ. of agriculture and Tech. |
Morishima, Keisuke | Osaka Univ. |
Keywords: Micro-manipulation, Micro/Nano Robots, Motion Control
Abstract: This paper proposes a Multiple Microfluidic Stream based Manipulation (MMSM) system for bio-objects using micro-hydrodynamics and Lab-on-Chip (LOC) technology. Our method can perform micromanipulation and micro-assembly of bio-objects without contact in an open space. Compared with other conventional bio-micromanipulation and assembly methods, this system manipulates a micro-object by directing multiple microfluidic streams onto it from various directions. The advantages of this method are open-space operation, multi-functionality, multi-scale and multi-degree-of-freedom capability, and non-invasive three-dimensional manipulation. These microfluidic streams are generated simultaneously from multiple orifices. By regulating the parameters of the microfluidic streams, such as the flow rates and the position and number of operating orifices, the direction and velocity of the object can be controlled. To verify this principle, we designed an open-space fluidic system for on-chip manipulation, and calculated the velocity and direction of the microfluidic stream using CFD simulation. A prototype microchip with an array of 9 orifices was then fabricated in glass. In experiments, rectilinear motion of a single cell and of a microparticle was demonstrated. The results presented in this paper show that the MMSM has the capability for bio-micromanipulation and micro-assembly of bio-objects.
|
|
12:00-12:15, Paper MoAT13.5 | |
>Fabrication and Assembly of Multi-Layered Microstructures Embedding Cells Inside Microfluidic Devices |
Yue, Tao | Nagoya Univ. |
Nakajima, Masahiro | Nagoya Univ. |
Wang, Huaping | Beijing Inst. of Tech. |
Hu, Chengzhi | Nagoya Univ. |
Takeuchi, Masaru | Nagoya Univ. |
Fukuda, Toshio | Meijo Univ. |
Keywords: Micro-manipulation
Abstract: Research on constructing 3-dimensional cell structures has recently become very important due to its great potential applications in tissue engineering. In this paper, we report a novel method of constructing multi-layered microstructures embedding cells via microfluidic devices. The on-chip fabrication of movable microstructures embedding fibroblasts (NIH/3T3), based on poly(ethylene glycol) diacrylate (PEGDA), is reported. Two approaches for assembling these movable microstructures are presented: a manual assembly method based on a micromanipulation system, and a self-assembly method based on a microfluidic channel. Several manual assembly schemes were demonstrated, and a tube-shaped microstructure with 17 layers was assembled by an efficient assembly method. A novel microfluidic channel was presented for conducting the self-assembly method, and a 2-layered experimental microfluidic device was fabricated from polydimethylsiloxane (PDMS). The self-assembly process of the fabricated microstructures via this device was preliminarily demonstrated.
|
|
12:15-12:30, Paper MoAT13.6 | |
>Massive Uniform Manipulation: Controlling Large Populations of Simple Robots with a Common Input Signal |
Becker, Aaron | Rice Univ. |
Habibi, Golnaz | Rice Univ. |
Werfel, Justin | Harvard Univ. |
Rubenstein, Michael | Harvard Univ. |
McLurkin, James | Rice Univ. |
Attachments: Video Attachment
Keywords: Micro/Nano Robots, Nano assembly, Nano automation
Abstract: Roboticists, biologists, and chemists are now producing large populations of simple robots, but controlling large populations of robots with limited capabilities is difficult, due to communication and onboard-computation constraints. Direct human control of large populations seems even more challenging. In this paper we investigate control of mobile robots that move in a 2D workspace using three different system models. We focus on a model that uses broadcast control inputs specified in the global reference frame. In an obstacle-free workspace this system model is uncontrollable because it has only two controllable degrees of freedom---all robots receive the same inputs and move uniformly. We prove that adding a single obstacle can make the system controllable, for any number of robots. We provide a position control algorithm, and demonstrate through extensive testing with human subjects that many manipulation tasks can be reliably completed, even by novice users, under this system model, with performance benefits compared to the alternate models. We compare the sensing, computation, communication, time, and bandwidth costs for all three system models. Results are validated with extensive simulations and hardware experiments using over 100 robots.
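The controllability argument above (identical inputs to all robots, with a single obstacle breaking the symmetry) can be illustrated with a toy simulation, which is not the paper's control algorithm: pressing the group against a wall changes the robots' relative spacing, after which common commands move the reshaped formation rigidly.
```python
# Toy illustration of uniform broadcast control with a single wall at x = 0
# (illustrative of the idea only; not the paper's position control algorithm).
import numpy as np

def step_uniform(positions, command, wall_x=0.0):
    """Every robot receives the same displacement; the wall clips motion through it."""
    moved = positions + command
    moved[:, 0] = np.maximum(moved[:, 0], wall_x)   # robots cannot pass the wall
    return moved

robots = np.array([[2.0, 0.0],
                   [5.0, 0.0]])                     # two robots, 3 m apart in x

# Drive everyone toward the wall: the leading robot stops against it while the
# trailing one keeps closing in, changing their relative spacing from 3 m to 1 m.
for _ in range(4):
    robots = step_uniform(robots, np.array([-1.0, 0.0]))
print("separation after pressing against the wall:", robots[1, 0] - robots[0, 0])

# A common command away from the wall then moves the reshaped formation rigidly.
robots = step_uniform(robots, np.array([4.0, 0.0]))
print("final positions:", robots)
```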
|
|
MoBT1 |
Room606 |
SLAM II |
Regular Session |
Chair: Lee, Gim Hee | ETH Zurich |
Co-Chair: Olson, Edwin | Univ. of Michigan |
|
13:30-13:45, Paper MoBT1.1 | |
>Robust Sensor Characterization Via Mixture Models: GPS Sensors |
Morton, Ryan | Univ. of Michigan |
Olson, Edwin | Univ. of Michigan |
Keywords: SLAM, Localization, Calibration and Identification
Abstract: Large position errors plague GNSS-based sensors (e.g., GPS) due to poor satellite configuration and multipath effects, resulting in frequent outliers. Because of the quadratic cost functions used when optimizing SLAM via nonlinear least squares methods, a single such outlier can cause severe map distortions. Following in the footsteps of recent improvements in the robustness of the SLAM optimization process, this work presents a framework for improving sensor noise characterizations by combining a machine learning approach with max-mixture error models. By using max-mixtures, the sensor's noise distribution can be modeled to a desired accuracy, with robustness to outliers. We apply the framework to the task of accurately modeling the uncertainties of consumer-grade GPS sensors. Our method estimates the observation covariances using only weighted feature vectors and a single max operator, learning parameters off-line for efficient on-line calculation.
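For readers unfamiliar with max-mixture error models, the sketch below shows the basic mechanism (the paper's contribution is learning the component covariances from feature vectors, which is not reproduced here): the mixture's sum is replaced by a max, so each residual is scored by its single dominant Gaussian component, and a broad component caps the cost of outliers.
```python
# Minimal sketch of a max-mixture error model for a GPS factor (illustrative only).
import numpy as np

def max_mixture_nll(residual, weights, covariances):
    """Negative log-likelihood where the mixture's sum is replaced by a max:
    only the dominant Gaussian component scores the residual."""
    r = np.asarray(residual)
    best = np.inf
    for w, cov in zip(weights, covariances):
        info = np.linalg.inv(cov)
        # log of w * N(r; 0, cov), up to the dimension-dependent constant
        nll = 0.5 * r @ info @ r + 0.5 * np.log(np.linalg.det(cov)) - np.log(w)
        best = min(best, nll)                     # max over likelihoods = min over NLLs
    return best

# Two-component model: a tight "nominal" Gaussian plus a broad "outlier" Gaussian.
weights = [0.9, 0.1]
covariances = [np.diag([2.0 ** 2, 2.0 ** 2]),     # metres^2, nominal GPS noise
               np.diag([30.0 ** 2, 30.0 ** 2])]   # broad component absorbing multipath outliers
print(max_mixture_nll([1.0, -0.5], weights, covariances))   # inlier: nominal component wins
print(max_mixture_nll([45.0, 20.0], weights, covariances))  # outlier: broad component caps the cost
```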
|
|
13:45-14:00, Paper MoBT1.2 | |
>Odometry-Driven Inference to Link Multiple Exemplars of a Location |
Lowry, Stephanie | Queensland Univ. of Tech. |
Wyeth, Gordon | Queensland Univ. of Tech. |
Milford, Michael J | Queensland Univ. of Tech. |
Keywords: SLAM, Localization
Abstract: A major challenge for robot localization and mapping systems is maintaining reliable operation in a changing environment. Vision-based systems in particular are susceptible to changes in illumination and weather, and the same location at another time of day may appear radically different to a feature-based visual localization system. One approach for mapping changing environments is to create and maintain maps that contain multiple representations of each physical location in a topological framework or manifold. However, this requires the system to be able to correctly link two or more appearance representations to the same spatial location, even though the representations may appear quite dissimilar. This paper proposes a method of linking visual representations from the same location without requiring a visual match, thereby allowing vision-based localization systems to create multiple appearance representations of physical locations. The most likely position on the robot path is determined using particle filter methods based on dead reckoning data and recent visual loop closures. In order to avoid erroneous loop closures, the odometry-based inferences are only accepted when the inferred path's end point is confirmed as correct by the visual matching system. Algorithm performance is demonstrated using an indoor robot dataset and a large outdoor camera dataset.
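The odometry-driven inference described above can be illustrated with a generic particle-filter sketch. Assumptions not taken from the paper: planar motion, Gaussian odometry noise, and a Gaussian likelihood around a hypothetical visually confirmed loop-closure position.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(particles, odom, sigma_xy=0.05, sigma_theta=0.02):
    """Dead-reckon each particle with a noisy planar odometry increment.

    particles : (N, 3) array of [x, y, theta]
    odom      : (dx, dy, dtheta) increment in the robot frame
    """
    dx, dy, dth = odom
    c, s = np.cos(particles[:, 2]), np.sin(particles[:, 2])
    particles[:, 0] += c * dx - s * dy + rng.normal(0, sigma_xy, len(particles))
    particles[:, 1] += s * dx + c * dy + rng.normal(0, sigma_xy, len(particles))
    particles[:, 2] += dth + rng.normal(0, sigma_theta, len(particles))
    return particles

def weight_by_loop_closure(particles, closure_xy, sigma=0.5):
    """Weight particles by how well they agree with a visually confirmed
    loop-closure position (hypothetical Gaussian likelihood)."""
    d2 = np.sum((particles[:, :2] - closure_xy) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / sigma ** 2)
    return w / w.sum()

particles = np.zeros((500, 3))
for _ in range(20):                      # a short odometry-only segment
    particles = propagate(particles, (0.1, 0.0, 0.01))
weights = weight_by_loop_closure(particles, np.array([2.0, 0.1]))
print("inferred position:", weights @ particles[:, :2])
```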
|
|
14:00-14:15, Paper MoBT1.3 | |
>GPU Accelerated Graph SLAM and Occupancy Voxel Based ICP for Encoder-Free Mobile Robots |
Ratter, Adrian Brian | Univ. of New South Wales |
Sammut, Claude | The Univ. of New South Wales |
McGill, Matthew J | Univ. of New South Wales |
Keywords: SLAM, Search and Rescue Robots, Mapping
Abstract: Learning a map of an unknown environment and localising a robot in it is a common problem in robotics, with solutions usually requiring an estimate of the robot’s motion. In scenarios such as Urban Search and Rescue, motion encoders can be highly inaccurate, and weight and battery requirements often limit computing power. We have developed a GPU based algorithm using Iterative Closest Point position tracking and Graph SLAM that can accurately generate a map of an unknown environment without the need for motion encoders and requiring minimal computational resources. The algorithm is able to correct for drift in the position tracking by rapidly identifying loops and optimising the map. We also present a method for refining the map when revisiting areas, which increases its accuracy and bounds the run-time by the size of the environment.
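As background for the position-tracking component, a minimal point-to-point ICP iteration (nearest-neighbour association followed by a closed-form SVD/Kabsch alignment) might look as follows. The GPU acceleration and occupancy-voxel weighting of the paper are not reproduced here.

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP iteration: associate each source point with its
    nearest destination point, then solve for the rigid transform (Kabsch/SVD).

    src, dst : (N, 3) and (M, 3) point clouds.
    Returns (R, t) such that R @ src[i] + t approximates the matched dst point.
    """
    # Brute-force nearest neighbours (a k-d tree or GPU search would be used in practice).
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]

    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t
```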
|
|
14:15-14:30, Paper MoBT1.4 | |
>Deformation-Based Loop Closure for Large Scale Dense RGB-D SLAM |
Whelan, Thomas | National Univ. of Ireland Maynooth |
Kaess, Michael | MIT |
Leonard, John | MIT |
McDonald, John | National Univ. of Ireland Maynooth |
Keywords: SLAM, Mapping, Localization
Abstract: In this paper we present a system for capturing large scale dense maps in an online setting with a low cost RGB-D sensor. Central to this work is the use of an "as-rigid-as-possible" space deformation for efficient dense map correction in a pose graph optimisation framework. By combining pose graph optimisation with non-rigid deformation of a dense map we are able to obtain highly accurate dense maps over large scale trajectories that are both locally and globally consistent. With low latency in mind we derive an incremental method for deformation graph construction, allowing multi-million point maps to be captured over hundreds of metres in real-time. We provide benchmark results on a well established RGB-D SLAM dataset demonstrating the accuracy of the system, and also provide a number of our own datasets covering a wide range of environments, both indoors and outdoors and across multiple floors.
|
|
14:30-14:45, Paper MoBT1.5 | |
>Robust Pose-Graph Loop-Closures with Expectation-Maximization |
Lee, Gim Hee | ETH Zurich |
Fraundorfer, Friedrich | Tech. Universität München |
Pollefeys, Marc | ETH Zurich |
Keywords: SLAM
Abstract: In this paper, we model the robust loop-closure pose-graph SLAM problem as a Bayesian network and show that it can be solved with the Classification Expectation-Maximization (EM) algorithm. In particular, we express our robust pose-graph SLAM as a Bayesian network where the robot poses and constraints are latent and observed variables. An additional set of latent variables is introduced as weights for the loop constraints. We show that the weights can be chosen as the Cauchy function; they are iteratively computed from the errors between the predicted robot poses and observed loop-closure constraints in the Expectation step, and used to weigh the cost functions from the pose-graph loop-closure constraints in the Maximization step. As a result, outlier loop-closure constraints are assigned low weights and exert less influence in the pose-graph optimization within the EM iterations. To prevent the EM algorithm from getting stuck at local minima, we perform the EM algorithm multiple times, removing loop constraints with very low weights after each EM process. This is repeated until there are no more changes to the weights. We show proofs of the conceptual similarity between our EM algorithm and the M-Estimator. Specifically, we show that the weight function in our EM algorithm is equivalent to the robust residual function in the M-Estimator. We verify our proposed algorithm with experimental results from multiple simulated and real-world datasets, and comparisons with other existing works.
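The role of the Cauchy weights in the Expectation step can be sketched generically as below. The scale, drop threshold, and simplified loop structure are hypothetical stand-ins for the repeated EM rounds described in the abstract (in the real algorithm the residuals are recomputed after each re-optimization).

```python
import numpy as np

def cauchy_weight(residual_norm, c=1.0):
    """Cauchy weight: errors far beyond the scale c receive weights close to
    zero, so outlier loop closures have little influence in the optimization."""
    return 1.0 / (1.0 + (residual_norm / c) ** 2)

def em_weights(residual_norms, c=1.0, drop_below=1e-3, max_rounds=10):
    """Toy illustration of the repeated EM rounds: compute Cauchy weights,
    discard constraints whose weights fall below a threshold, and repeat
    until the active set stops changing (parameters are hypothetical)."""
    active = np.ones(len(residual_norms), dtype=bool)
    for _ in range(max_rounds):
        w = cauchy_weight(residual_norms, c) * active
        new_active = w > drop_below
        if np.array_equal(new_active, active):
            break
        active = new_active
    return w

print(em_weights(np.array([0.1, 0.3, 5.0, 40.0])))
```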
|
|
14:45-15:00, Paper MoBT1.6 | |
>Structureless Pose-Graph Loop-Closure with a Multi-Camera System on a Self-Driving Car |
Lee, Gim Hee | ETH Zurich |
Fraundorfer, Friedrich | Tech. Universität München |
Pollefeys, Marc | ETH Zurich |
Keywords: SLAM, Computer Vision
Abstract: In this paper, we propose a method to compute pose-graph loop-closure constraints using multiple cameras with non-overlapping or minimally overlapping fields of view, mounted rigidly on a self-driving car, without the need to reconstruct any 3D scene points. In particular, we show that the relative pose with metric scale between two loop-closing pose-graph vertices can be obtained directly from the epipolar geometry of the multi-camera system. As a result, we avoid the additional time complexity and uncertainty introduced by the reconstruction of 3D scene points required by standard monocular and stereo approaches. In addition, there is greater flexibility in choosing a configuration for the multi-camera system to cover a wider field of view so as to avoid missing any loop-closure opportunities. We show that by expressing the point correspondences between two frames as Pluecker lines and enforcing the planar motion constraint on the car, we are able to use the multiple cameras as one and formulate the relative pose problem for loop-closure as a minimal problem that requires 3 point correspondences and yields up to six real solutions. The RANSAC algorithm is used to determine the correct solution and for robust estimation. We verify our method with results from multiple large-scale real-world datasets.
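The Pluecker-line representation of viewing rays in a rigid multi-camera rig, used above to treat the multiple cameras as one, can be written compactly as follows. This is a generic construction, not the paper's minimal solver, and the pose convention X_rig = R X_cam + t is an assumption.

```python
import numpy as np

def pluecker_ray(pixel, K, R, t):
    """Plücker coordinates (direction, moment) of the viewing ray through a
    pixel of one camera in a rigid multi-camera rig.

    K    : 3x3 intrinsic matrix of the camera
    R, t : pose of the camera in the rig frame (X_rig = R @ X_cam + t)
    Returns a 6-vector (d, m) with m = c x d, where c is the camera centre.
    """
    d_cam = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    d = R @ d_cam
    d = d / np.linalg.norm(d)
    c = t                                  # camera centre expressed in the rig frame
    m = np.cross(c, d)
    return np.hstack([d, m])

# Hypothetical camera: 500 px focal length, principal point (320, 240),
# mounted 0.5 m along the rig's x-axis with no rotation.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
print(pluecker_ray((100.0, 200.0), K, np.eye(3), np.array([0.5, 0.0, 0.0])))
```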
|
|
MoBT2 |
Room607 |
Visual Servo II |
Regular Session |
Chair: Tsai, Chia-Hung Dylan | Osaka Univ. |
Co-Chair: Kermorgant, Olivier | Univ. of Strasbourg |
|
13:30-13:45, Paper MoBT2.1 | |
> >Partial Visibility Constraint in 3D Visual Servoing |
Kermorgant, Olivier | Univ. of Strasbourg |
Attachments: Video Attachment
Keywords: Visual Servoing, Visual Tracking
Abstract: In this paper we address the problem of visibility in position-based visual servoing. It is well known that the observed object may leave the field of view in such schemes, as there is usually no control in the image. Recent control schemes try to cope with this issue by defining an image-based constraint such that the object stays in the image. We propose to increase the convergence domain of such schemes by defining a new constraint that allows the observed object to partially leave the field of view. The general formulation is presented and the computation of this constraint is detailed. Experiments show that controlling the visibility loss allows position-based visual servoing tasks to be performed that would be impossible while keeping the whole object in the image.
|
|
13:45-14:00, Paper MoBT2.2 | |
> >Motion Planning from Demonstrations and Polynomial Optimization for Visual Servoing Applications |
Shen, Tiantian | The Univ. of Hong Kong |
Radmard, Sina | Univ. of British Columbia |
Chan, Ambrose | Univ. of British Columbia |
Croft, Elizabeth | Univ. of British Columbia |
Chesi, Graziano | Univ. of Hong Kong |
Attachments: Video Attachment
Keywords: Visual Servoing, Motion and Path Planning, Learning from Demonstration
Abstract: Vision feedback control techniques are desirable for a wide range of robotics applications due to their robustness to image noise and modeling errors. However, in the case of a robot-mounted camera, they encounter difficulties when the camera traverses large displacements. This scenario necessitates continuous visual target feedback during the robot motion, while simultaneously considering the robot's self- and external constraints. Herein, we propose to combine workspace (Cartesian space) path planning with robot teach-by-demonstration to address the visibility constraint, joint limits, and "whole arm" collision avoidance for vision-based control of a robot manipulator. User demonstration data generate safe regions for robot motion with respect to joint limits and potential "whole arm" collisions. Our algorithm uses these safe regions to generate new feasible trajectories under a visibility constraint that achieve the desired view of the target (e.g., a pre-grasping location) in new, undemonstrated locations. Experiments with a 7-DOF articulated arm validate the proposed method.
|
|
14:00-14:15, Paper MoBT2.3 | |
>Uncalibrated Visual Servoing of Nonholonomic Mobile Robots |
Li, Baoquan | Nankai Univ. |
Fang, Yongchun | Nankai Univ. |
Zhang, Xuebo | Nankai Univ. |
Keywords: Visual Servoing, Wheeled Robots
Abstract: In this paper, an uncalibrated visual servo regulation strategy is designed for a nonholonomic mobile robot equipped with an eye-in-hand camera, which drives the mobile robot to the target pose with exponential convergence. Specifically, a novel fundamental matrix-based algorithm is firstly proposed to rotate the robot to point toward the desired position, with the camera intrinsic parameters estimated simultaneously by employing the fundamental matrix and a projection homography matrix. Subsequently, by utilizing the obtained camera intrinsic parameters, a straight-line motion controller is developed to drive the robot to the desired position, with the orientation of the robot always facing the target position. Another pure rotation controller is finally adopted to correct the orientation error. The exponentially convergent properties of the visual servo errors are proven with mathematical analysis. The performance of the proposed uncalibrated visual servo regulation method is further validated by simulation results.
|
|
14:15-14:30, Paper MoBT2.4 | |
> >Corridor Following Wheelchair by Visual Servoing |
Pasteau, François | INSA Rennes / IRISA Lagadic Team / IETR |
Babel, Marie | IRISA UMR CNRS 6074 - INRIA - INSA Rennes |
Sekkal, Rafiq | IRISA Lagadic Team / INSA Rennes |
Attachments: Video Attachment
Keywords: Visual Servoing, Wheeled Robots, Service Robots
Abstract: In this paper, we present an autonomous navigation framework for a wheelchair using a single camera and visual servoing. We focus on a corridor-following task where no prior knowledge of the environment is required. Our approach embeds an image-based controller, thus avoiding the need to estimate the pose of the wheelchair. The servoing process handles the nonholonomic constraints of the wheelchair and relies on two visual features, namely the vanishing point location and the orientation of the median line formed by the straight lines along the bottom of the walls. This overcomes the process initialization issue typically raised in the literature. The control scheme has been implemented on a robotized wheelchair, and results show that it can follow a corridor with an accuracy of +/- 3 cm.
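The vanishing-point feature used by the controller can be estimated from a set of detected image lines with a standard least-squares construction, sketched below with hypothetical corridor lines. This is illustrative background rather than the authors' implementation.

```python
import numpy as np

def vanishing_point(lines):
    """Least-squares vanishing point of a set of image lines.

    lines : (N, 3) array of homogeneous line vectors l = (a, b, c) with
            a*x + b*y + c = 0. The vanishing point v minimises sum (l_i . v)^2
            with ||v|| = 1, i.e. the right singular vector associated with the
            smallest singular value of the stacked line matrix.
    """
    _, _, Vt = np.linalg.svd(np.asarray(lines, dtype=float))
    v = Vt[-1]
    return v[:2] / v[2]                    # back to pixel coordinates

# Hypothetical corridor wall-bottom lines meeting near (320, 240):
# a line through two homogeneous points is their cross product.
l1 = np.cross([0.0, 480.0, 1.0], [320.0, 240.0, 1.0])
l2 = np.cross([640.0, 480.0, 1.0], [320.0, 240.0, 1.0])
print(vanishing_point([l1, l2]))
```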
|
|
14:30-14:45, Paper MoBT2.5 | |
> >Visual Servoing of UAV Using Cuboid Model with Simultaneous Tracking of Multiple Planar Faces |
Barajas, Manlio | ITESM |
Dávalos, José Pablo | ITESM |
Garcia-Lumbreras, Salvador | Tecnologico de Monterrey |
Gordillo, José-Luis | Tecnológico de Monterrey |
Attachments: Video Attachment
Keywords: Visual Servoing, Visual Tracking, Unmanned Aerial Vehicles
Abstract: Pose estimation is a key component for robot navigation. An Unmanned Aerial Vehicle (UAV) that is instructed to reach a certain location requires a way of measuring its pose. This article presents a method for UAV visual servoing that uses the 3D pose of the drone as controller feedback. A remote monocular camera observes the tracked UAV while it moves and rotates in 3D space. The pose is obtained from a 3D tracking process based on a cuboid model. In particular, a simultaneous face tracking strategy is introduced, in which 3D pose estimates from different faces are combined. Face combination was validated using a robotic arm with a cuboid at the final joint. For UAV control, hover and path following tasks were tested. Results show that the proposed method correctly handles changes in pose, even though no single face is always visible. The UAV also maintained a low speed in order to satisfy the small inter-frame displacement constraint imposed by the visual tracking algorithm.
|
|
14:45-15:00, Paper MoBT2.6 | |
>Image Moments for Higher-Level Feature Based Navigation |
Dani, Ashwin | Univ. of Illinois |
Panahandeh, Ghazaleh | KTH Royal Inst. of Tech. |
Chung, Soon-Jo | Univ. of Illinois at Urbana-Champaign |
Hutchinson, Seth | Univ. of Illinois |
Keywords: Visual Servoing, SLAM, Visual Navigation
Abstract: This paper presents a novel vision-based localization and mapping algorithm using image moments of region features. The environment is represented using regions, such as planes and/or 3D objects, instead of only a dense set of feature points. The regions can be uniquely defined using a small number of parameters; e.g., a plane can be completely characterized by its normal vector and distance to a local coordinate frame attached to the plane. The variation of the image moments of the regions in successive images can be related to the parameters of the regions. Instead of tracking a large number of feature points, variations of the image moments of regions can be computed by tracking the segmented regions or a few feature points on the objects in successive images. A map represented by regions can be characterized using a minimal set of parameters. The problem is formulated as a nonlinear filtering problem. A new discrete-time nonlinear filter based on the state-dependent coefficient (SDC) form of nonlinear functions is presented. It is shown via Monte-Carlo simulations that the new nonlinear filter is more accurate and consistent than the EKF, as evaluated by the root-mean-squared error (RMSE) and the normalized estimation error squared (NEES).
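Image moments of a segmented region, the core measurement in this approach, can be computed directly from a binary mask as in the following sketch. The region and its dimensions are hypothetical, and the paper's filtering machinery is not shown.

```python
import numpy as np

def raw_and_central_moments(mask, max_order=2):
    """Raw moments m_pq and central moments mu_pq of a binary region mask.

    mask : (H, W) array; nonzero pixels belong to the region.
    Returns two dicts keyed by (p, q).
    """
    ys, xs = np.nonzero(mask)
    m = {(p, q): np.sum((xs ** p) * (ys ** q))
         for p in range(max_order + 1) for q in range(max_order + 1)}
    cx, cy = m[(1, 0)] / m[(0, 0)], m[(0, 1)] / m[(0, 0)]
    mu = {(p, q): np.sum(((xs - cx) ** p) * ((ys - cy) ** q))
          for p in range(max_order + 1) for q in range(max_order + 1)}
    return m, mu

region = np.zeros((240, 320), dtype=np.uint8)
region[100:140, 60:180] = 1                # hypothetical segmented planar region
m, mu = raw_and_central_moments(region)
print("area:", m[(0, 0)], "centroid:", m[(1, 0)] / m[(0, 0)], m[(0, 1)] / m[(0, 0)])
```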
|
|
MoBT3 |
Room703 |
Human Detection and Tracking |
Regular Session |
Chair: Lilienthal, Achim J. | Örebro Univ. |
Co-Chair: Naseer, Tayyab | Univ. of Freiburg |
|
13:30-13:45, Paper MoBT3.1 | |
> >On Improving the Extrapolation Capability of Task-Parameterized Movement Models |
Calinon, Sylvain | Istituto Italiano di Tecnologia (IIT) |
Alizadeh, Tohid | Istituto Italiano di Tecnologia |
Caldwell, Darwin G. | Istituto Italiano di Tecnologia |
Attachments: Video Attachment
Keywords: Learning from Demonstration, Learning and Adaptive Systems
Abstract: Gestures are characterized by intermediary or final landmarks (real or virtual) in task space or joint space that can change during the course of the motion, and that are described by varying accuracy and correlation constraints. Generalizing these trajectories in robot learning by imitation is challenging, because of the small number of demonstrations provided by the user. We present an approach to statistically encode movements in a task-parameterized mixture model, and derive an expectation-maximization (EM) algorithm to train it. The model automatically extracts the relevance of candidate coordinate systems during the task, and exploits this information during reproduction to adapt the movement in real-time to changing position and orientation of landmarks or objects. The approach is tested with a robotic arm learning to roll out a pizza dough. It is compared to three categories of task-parameterized models: 1) Gaussian process regression (GPR) with a trajectory models database; 2) Multi-streams approach with models trained in several frames of reference; and 3) Parametric Gaussian mixture model (PGMM) modulating the Gaussian centers with the task parameters. We show that the extrapolation capability of the proposed approach outperforms existing methods, by extracting the local structures of the task instead of relying on interpolation principles.
|
|
13:45-14:00, Paper MoBT3.2 | |
> >HRI in the Sky: Creating and Commanding Teams of UAVs with a Vision-Mediated Gestural Interface |
Monajjemi, Valiallah (Mani) | Simon Fraser Univ. |
Wawerla, Jens | Simon Fraser Univ. |
Vaughan, Richard | Simon Fraser Univ. |
Mori, Greg | Simon Fraser Univ. |
Attachments: Video Attachment
Keywords: Human-Robot Interaction, Distributed Robot Systems, Aerial Robotics
Abstract: Extending our previous work in real-time vision-based human robot interaction with multi-robot systems, we present the first example of creating, modifying and commanding teams of UAVs by an uninstrumented human. To create a team the user focuses attention on an individual robot by simply looking at it, then adds or removes it from the current team with a motion-based hand gesture. Another gesture commands the entire team to begin task execution. Robots communicate among themselves by wireless network to ensure that no more than one robot is focused, and so that the whole team agrees that it has been commanded. Since robots can be added and removed from the team, the system is robust to incorrect additions. A series of trials with two and three very low-cost UAVs and off-board processing demonstrates the practicality of our approach.
|
|
14:00-14:15, Paper MoBT3.3 | |
> >FollowMe: Person Following and Gesture Recognition with a Quadrocopter |
Naseer, Tayyab | Univ. of Freiburg |
Sturm, Jürgen | Tech. Univ. of Munich |
Cremers, Daniel | Tech. Univ. of Munich |
Attachments: Video Attachment
Keywords: Human detection and tracking, Robot Companions and Social Human-Robot Interaction, Unmanned Aerial Vehicles
Abstract: In this paper, we present an approach that allows a quadrocopter to follow a person and to recognize simple gestures using an onboard depth camera. This enables novel applications such as hands-free filming and picture taking. The problem of tracking a person with an onboard camera however is highly challenging due to the self-motion of the platform. To overcome this problem, we stabilize the depth image by warping it to a virtual-static camera, using the estimated pose of the quadrocopter obtained from vision and inertial sensors using an Extended Kalman filter. We show that such a stabilized depth video is well suited to use with existing person trackers such as the OpenNI tracker. Using this approach, the quadrocopter not only obtains the position and orientation of the tracked person, but also the full body pose, which can then for example be used to recognize hand gestures to control the quadrocopter's behaviour. We implemented a small set of example commands ("follow me", "take picture", "land"), and generate corresponding motion commands. We demonstrate the practical performance of our approach in an extensive set of experiments with a quadrocopter. Although our current system is limited to indoor environments and small motions due to the restrictions of the used depth sensor, it indicates that there is large potential for such applications in the near future.
|
|
14:15-14:30, Paper MoBT3.4 | |
>Fast HOG Based Person Detection Devoted to a Mobile Robot with a Spherical Camera |
Mekonnen, Alhayat Ali | LAAS-CNRS, Univ. of Toulouse |
Briand, Cyril | LAAS-CNRS |
Lerasle, Frederic | LAAS - CNRS |
Herbulot, Ariane | LAAS-CNRS |
Keywords: Human detection and tracking
Abstract: In this paper, we present a fast Histogram of Oriented Gradients (HOG) based person detector. The detector adopts a cascade-of-rejectors framework, selecting discriminant features via a newly proposed feature selection framework based on Binary Integer Programming. The mathematical program explicitly formulates an optimization problem to select discriminant features, taking detection performance and computation time into account. The learning of the cascade classifier and its detection capability are validated using a proprietary dataset acquired with the Ladybug2 spherical camera and the public INRIA person detection dataset. The final detector achieves detection performance comparable to the Dalal and Triggs [Dalal05] detector while achieving, on average, a 2.5x to 8x speed-up depending on the training dataset.
|
|
14:30-14:45, Paper MoBT3.5 | |
> >Multi-Human Tracking Using High-Visibility Clothing for Industrial Safety |
Mosberger, Rafael | Örebro Univ. |
Andreasson, Henrik | Örebro Univ. |
Lilienthal, Achim J. | Örebro Univ. |
Attachments: Video Attachment
Keywords: Human detection and tracking, Visual Tracking, Collision Detection and Avoidance
Abstract: We propose and evaluate a system for detecting and tracking multiple humans wearing high-visibility clothing from vehicles operating in industrial work environments. We use a customized stereo camera setup equipped with IR flash and IR filter to detect the reflective material on the worker's garments and estimate their trajectories in 3D space. An evaluation in two distinct industrial environments with different degrees of complexity demonstrates the approach to be robust and accurate for tracking workers in arbitrary body poses, under occlusion, and under a wide range of different illumination settings.
|
|
14:45-15:00, Paper MoBT3.6 | |
>Unconstrained 1D Range and 2D Image Based Human Detection |
Kocamaz, Mehmet Kemal | Univ. of Delaware |
Porikli, Fatih | Mitsubishi Electric Res. Lab. |
Keywords: Aerial Robotics
Abstract: An accurate and computationally very fast multi-modal human detector is presented. This 1D+2D detector fuses 1D range scan and 2D image information via an effective geometric descriptor and a silhouette based visual representation within a radial basis function kernel support vector machine learning framework. Unlike existing approaches, the proposed 1D+2D detector does not make any restrictive assumptions on the range scan positions, thus it is applicable to a wide range of real-life detection tasks. To analyze the discriminative power of the geometric descriptor, a range scan only version, 1D+, is also evaluated. Extensive experiments demonstrate that the 1D+2D detector works robustly under challenging imaging conditions and achieves several orders of magnitude performance improvement while reducing the computational load drastically. In addition, a new multi-modal (LIDAR, depth image, optical image) dataset, DontHitMe, is introduced. This dataset contains 40,000 registered frames and 3,600 manually annotated human objects. It depicts challenging illumination conditions in indoor and outdoor environments and is publicly available to the community.
|
|
MoBT4 |
Room601 |
Mobile Assistance |
Regular Session |
Chair: Hirose, Noriaki | Toyota Central R&D Lab. INC. |
Co-Chair: Iwase, Masami | Tokyo Denki Univ. |
|
13:30-13:45, Paper MoBT4.1 | |
> >Hemispherical Net-Structure Proximity Sensor Detecting Azimuth and Elevation for Guide Dog Robot |
Arita, Hikaru | Univ. of Electro-Communications |
Suzuki, Yosuke | The Univ. of Electro-Communications |
Ogawa, Hironori | NSK Ltd. |
Tobita, Kazuteru | NSK Ltd. |
Shimojo, Makoto | Univ. of Electro-Communications |
Attachments: Video Attachment
Keywords: Force and Tactile Sensing, Human detection and tracking, Human-Robot Interaction
Abstract: We have developed a net-structure proximity sensor that detects the azimuth and elevation to a nearby object. This information can be used by robots to avoid obstacles or to respond to human behavior. We propose detection principles in which the azimuth is detected by arranging two one-dimensional net-structure proximity sensors along orthogonal axes, and the elevation is detected by arranging two one-dimensional net-structure proximity sensors in a stacked ring. We also experimentally demonstrate the feasibility of these detection principles. The experimental results show that the sensor can detect the azimuth at all peripheral angles and the elevation from the side to the top.
|
|
13:45-14:00, Paper MoBT4.2 | |
>Personal Robot Assisting Transportation to Support Active Human Life -Posture Stabilization Based on Feedback Compensation of Lateral Acceleration |
Hirose, Noriaki | Toyota Central R&D Lab. INC. |
Tajima, Ryosuke | Toyota Central R&D Lab. Inc. |
Sukigara, Kazutoshi | Toyota Central R&D Lab. INC. |
Keywords: Personal Robots, Motion Control, Human detection and tracking
Abstract: Recently, a super-aging society has been developing in many countries around the world. Research and development of PRs (personal robots) that improve the quality of human life is needed in order to accommodate this aging society. Elderly people will be able to spend their lives happily and effortlessly with the aid of useful and convenient PRs. However, excessive or premature use of PRs may cause health deterioration or accelerate aging. In this paper, a new prototype PR is proposed that can follow human beings while carrying their baggage. Elderly people will therefore be able to go outside empty-handed to shop, enjoy the fresh air, and visit friends. This PR will encourage people to walk outside and can eventually support an active lifestyle in the true sense. For actual use, PRs should have both a small footprint, for coexistence in human society, and high traveling performance, for following humans wherever they go. Active posture control of the roll and pitch angles is applied to the PR to realize these requirements. The proposed structure and control approach, which uses lateral acceleration as a control variable, is verified by experiments using the new prototype robot.
|
|
14:00-14:15, Paper MoBT4.3 | |
>Teleoperation of Mobile Robots by Generating Augmented Free-Viewpoint Images |
Okura, Fumio | Nara Inst. of Science and Tech. |
Ueda, Yuko | Nara Inst. of Science and Tech. |
Sato, Tomokazu | Nara Inst. of Science and Tech. |
Yokoya, Naokazu | Nara Inst. of Science and Tech. |
Keywords: Virtual Reality and Interfaces, Telerobotics, Omnidirectional Vision
Abstract: This paper proposes a teleoperation interface by which an operator can control a robot from freely configured viewpoints using realistic images of the physical world. The viewpoints generated by the proposed interface provide human operators with intuitive control using a head-mounted display and head tracker, and assist them to grasp the environment surrounding the robot. A state-of-the-art free-viewpoint image generation technique is employed to generate the scene presented to the operator. In addition, an augmented reality technique is used to superimpose a 3D model of the robot onto the generated scenes. Through evaluations under virtual and physical environments, we confirmed that the proposed interface improves the accuracy of teleoperation.
|
|
14:15-14:30, Paper MoBT4.4 | |
> >Steering Assist System for a Cycling Wheelchair Based on Braking Control |
Hirata, Yasuhisa | Tohoku Univ. |
Kosuge, Kazuhiro | Tohoku Univ. |
Monacelli, Eric | LISV, Univ. of Versailles |
Attachments: Video Attachment
Keywords: Human-Robot Interaction, Medical Systems, Healthcare, and Assisted Living, Rehabilitation Robotics
Abstract: In this study, we propose a steering control method for a cycling wheelchair. The commercially available cycling wheelchair is a pedal-driven system that is similar to a bicycle, and patients with impairment of their lower extremities can move the wheelchair based on the pedaling force if they can slightly move their legs by themselves. The user can also change the wheelchair direction using the steering handle. However, right and left turns are perceived differently and a large steering torque is required while operating the steering handle because of hardware problems associated with the cycling wheelchair. To overcome this problem, we propose a new hardware solution and a method for steering motion control using servo brakes for the cycling wheelchair. The proposed method is applied to the developed cycling wheelchair, and the experimental results illustrate the validity of the system.
|
|
14:30-14:45, Paper MoBT4.5 | |
>Power Steering System for Electrically Assisted Bicycles Riding with Toddlers -Experimental Implementation and Verification |
Kowata, Taiki | Tokyo Denki Univ. |
Sato, Naonari | Tokyo Denki Univ. |
Iwase, Masami | Tokyo Denki Univ. |
Keywords: Human Centered Automation, Human-Robot Interaction, Humanitarian technology for energy, environment and safety
Abstract: Bicycles ridden with toddlers have become convenient and indispensable transportation for parents in Japan. On the other hand, a bicycle carrying toddlers tends to be less stable when starting to move, riding at low speed, or steering abruptly, because the increased moment of inertia of the handlebars makes them hard to steer. Hence, this study proposes a power steering system for electrically assisted bicycles ridden with a toddler. The power steering system is designed to allow a rider to steer the handlebars with a toddler on board as if no toddler were present. The power steering system is mounted on a real bicycle, and its effectiveness is verified through experiments.
|
|
MoBT5 |
Room605 |
Robot Learning II |
Regular Session |
Chair: Yamashita, Atsushi | The Univ. of Tokyo |
Co-Chair: Huang, Han-Pang | National Taiwan Univ. |
|
13:30-13:45, Paper MoBT5.1 | |
> >Visuospatial Skill Learning for Object Reconfiguration Tasks |
Ahmadzadeh, Seyed Reza | Department of Advanced Robotics, Istituto Italiano di Tecnologia |
Kormushev, Petar | Istituto Italiano di Tecnologia |
Caldwell, Darwin G. | Istituto Italiano di Tecnologia |
Attachments: Video Attachment
Keywords: Visual Learning, Learning and Adaptive Systems, Learning from Demonstration
Abstract: We present a novel robot learning approach based on visual perception that allows a robot to acquire new skills by observing a demonstration from a tutor. Unlike most existing learning from demonstration approaches, where the focus is placed on the trajectories, in our approach the focus is on achieving a desired goal configuration of objects relative to one another. Our approach is based on visual perception which captures the object's context for each demonstrated action. This context is the basis of the visuospatial representation and encodes implicitly the relative positioning of the object with respect to multiple other objects simultaneously. The proposed approach is capable of learning and generalizing multi-operation skills from a single demonstration, while requiring minimum a priori knowledge about the environment. The learned skills comprise a sequence of operations that aim to achieve the desired goal configuration using the given objects. We illustrate the capabilities of our approach using three object reconfiguration tasks with a Barrett WAM robot.
|
|
13:45-14:00, Paper MoBT5.2 | |
>Selective Exploration Exploiting Skills in Hierarchical Reinforcement Learning Framework |
Masuyama, Gakuto | Chuo Univ. |
Yamashita, Atsushi | The Univ. of Tokyo |
Asama, Hajime | The Univ. of Tokyo |
Keywords: Learning and Adaptive Systems, Autonomous Agents
Abstract: In this paper, a novel reinforcement learning method with intrinsic motivation for reproducing past successful experience is presented. The experience is extracted as a skill, which is composed of an action sequence and abstract knowledge about the observed sensor input. Utilizing the collected skills, reproduction of the successful experience is attempted in novel, unknown environments. Consistent exploration and active reduction of the search space are realized by learning with intrinsic motivation for reproducibility of experience. Simulation experiments in a grid world demonstrate that the proposed method significantly accelerates learning.
|
|
14:00-14:15, Paper MoBT5.3 | |
>Self-Learning Assistive Exoskeleton with Sliding Mode Admittance Control |
Huang, Tzu-Hao | National Taiwan Univ. |
Cheng, Ching-An | National Taiwan Univ. |
Huang, Han-Pang | National Taiwan Univ. |
Keywords: Learning and Adaptive Systems, Human Performance Augmentation, Human-Robot Interaction
Abstract: Human intention estimation is important for assistive lower-limb exoskeletons, and the task is realized mostly with a dynamics model or an EMG model. Although the dynamics model offers better estimation, it fails when unmodeled disturbances enter the system, such as the ground reaction force. In contrast, the EMG model is non-stationary, and therefore an offline-calibrated EMG model is not satisfactory for long-term operation. In this paper, we propose a self-learning scheme with sliding mode admittance control to overcome these deficiencies. In the swing phase, the dynamics model is used to estimate the intention and to teach the EMG model; in the consecutive swing phase, the taught EMG model is used as an alternative. In consequence, the self-learning control scheme provides better estimates during the whole operation. In addition, the admittance interface and the sliding mode controller ensure robust performance. The control scheme is validated on a knee orthosis with a backdrivable spring torsion actuator, and the experimental results are promising.
|
|
14:15-14:30, Paper MoBT5.4 | |
>Evaluating Techniques for Learning a Feedback Controller for Low-Cost Manipulators |
Cliff, Oliver Michael | Univ. of Sydney |
Monteiro, Sildomar | Univ. of Sydney |
Keywords: Agent-Based Systems, Autonomous Agents, Learning and Adaptive Systems
Abstract: Robust, tractable manipulation in unstructured environments is a prominent hurdle in robotics. Learning algorithms for controlling robotic arms have introduced elegant solutions to the complexities faced in such systems. A novel Reinforcement Learning (RL) method, Gaussian Process Dynamic Programming (GPDP), yields promising results for closed-loop control of a low-cost manipulator; however, research on most RL techniques lacks a breadth of comparable experiments into the viability of particular learning techniques in equivalent environments. We introduce several model-based learning agents as mechanisms to control a noisy, low-cost robotic system. The agents were tested in a simulated domain, learning closed-loop policies for a simple task with no prior information. The fidelity of the simulations is then confirmed by applying GPDP to a physical system.
|
|
14:30-14:45, Paper MoBT5.5 | |
>Human Like Learning Algorithm for Simultaneous Force Control and Haptic Identification |
Yang, Chenguang | Univ. of Plymouth |
Li, Zhijun | South China Univ. of Tech. |
Burdet, Etienne | Imperial Coll. London |
Keywords: Learning and Adaptive Systems, Force Control, Human and humanoid skills/cognition/interaction
Abstract: This paper develops a learning control algorithm adapting the reference point and force to interact with an object of unknown geometry and elasticity. The controller is inspired by neuroscience studies that investigated the neural mechanisms when human adapt to virtual objects of different properties. The learning control algorithm estimates the shape and stiffness of the given object while maintaining a specified contact force with the environment. Simulations demonstrate the efficiency of the algorithm to identify the geometry and impedance of an unknown object without requiring force sensing. These properties are attractive for robotic haptic exploration with little demand on the sensing.
|
|
14:45-15:00, Paper MoBT5.6 | |
>Adaptation of Quadruped Gaits Using Surface Classification and Gait Optimization |
Kim, Jeong-Jung | KAIST |
Lee, Ju-Jang | KAIST |
Keywords: Evolutionary Robotics, Legged Robots, Learning and Adaptive Systems
Abstract: Evolutionary computation approaches to gait generation for quadruped robots autonomously generate gaits adapted to an environment. In such approaches, a fitness function that measures the performance of the gait is defined, and gait parameters are optimized by maximizing or minimizing that function with evolutionary computation algorithms. However, previous research has only considered optimization for a single environment. In this paper, we propose a gait adaptation method for a quadruped robot that uses terrain classification and gait optimization to adapt to various surfaces. The surfaces are learned with a classification algorithm, and the gait parameters for each surface are optimized with Particle Swarm Optimization (PSO). After learning and optimization, the classifier identifies the surface on which the robot is located, and the corresponding optimized gait parameters are selected. The adaptation framework, the feature design and filtering method for the classifier, and the gait design for the quadruped robot are described. The proposed method was verified in a realistic 3D simulator, where it successfully classified surfaces and selected optimized gaits.
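A minimal Particle Swarm Optimization loop of the kind used for the per-surface gait optimization is sketched below. The cost function, bounds, and PSO coefficients are hypothetical placeholders for a simulated gait evaluation.

```python
import numpy as np

def pso(cost, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0)):
    """Minimal particle swarm optimization of a gait-parameter cost function.

    cost : callable mapping a (dim,) parameter vector to a scalar to minimise
           (e.g. negative walking speed measured in simulation).
    """
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))        # positions (gait parameters)
    v = np.zeros_like(x)                               # velocities
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy stand-in for a simulated gait evaluation.
best, best_val = pso(lambda p: np.sum((p - 0.3) ** 2), dim=4)
print(best, best_val)
```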
|
|
MoBT6 |
Room604 |
Control Strategies for High Level Behaviors |
Regular Session |
Chair: Colas, Francis | ETH Zürich |
Co-Chair: Kress-Gazit, Hadas | Cornell Univ. |
|
13:30-13:45, Paper MoBT6.1 | |
>3D Path Planning and Execution for Search and Rescue Ground Robots |
Colas, Francis | ETH Zürich |
Mahesh, Srivatsa | ETH Zurich -- ASL |
Pomerleau, Francois | ETH Zurich |
Liu, Ming | Hong Kong Univ. of Science and Tech. |
Siegwart, Roland | ETH Zurich |
Keywords: Motion and Path Planning, Navigation, Search and Rescue Robots
Abstract: One milestone for autonomous mobile robotics is to endow robots with the capability to compute the plans and motor commands necessary to reach a defined goal position. For indoor or car-like robots moving on flat terrain, this problem is well mastered and open-source software can be deployed to such robots. However, for many applications such as search and rescue, ground robots must handle three-dimensional terrain. In this article, we present a system that is able to plan and execute a path in a complex environment. In order to cope with the complexity of a high-dimensional configuration space, we separate position and configuration planning. We demonstrate our system on a search and rescue robot with flippers by climbing up and down a difficult curved staircase.
|
|
13:45-14:00, Paper MoBT6.2 | |
> >An Overall Control Strategy Based on Target Reaching for the Navigation of an Urban Electric Vehicle |
Vilca Ventura, José Miguel | Blaise Pascal Univ. |
Adouane, Lounis | Inst. Pascal, UMR CNRS 6602 |
Mezouar, Youcef | IFMA |
Lébraly, Pierre | LASMEA Blaise Pascal Univ. / CNRS |
Attachments: Video Attachment
Keywords: Nonholonomic Motion Planning, Integrated Planning and Control, Motion Control
Abstract: This paper deals with reactive and flexible human-like autonomous vehicle navigation. A human driver reactively guides the vehicle, performing a smooth trajectory within the road limits until reaching the defined goal. To obtain similar behavior with an unmanned ground vehicle (UGV), this paper proposes a flexible control law that drives the vehicle towards desired static or dynamic targets, based on a novel definition of the control variables and a Lyapunov stability analysis. Moreover, a target assignment strategy, combined with an appropriate sigmoid function, that allows smooth, flexible and safe vehicle navigation through successive waypoints is presented. The stability of the proposed control strategy is proved using Lyapunov synthesis. Simulations and experiments are performed in different scenarios to demonstrate the reliability and efficiency of the control strategy.
|
|
14:00-14:15, Paper MoBT6.3 | |
>3D Motion Estimation Based on Pitch and Azimuth from Respective Camera and Laser Rangefinder Sensing |
Hoang, Van-Dung | Univ. of Ulsan |
Caceres Hernandez, Danilo | Univ. of Ulsan |
Le, My-Ha | Univ. of Ulsan |
Jo, Kang-Hyun | Univ. of Ulsan |
Keywords: Nonholonomic Motion Planning, Omnidirectional Vision, SLAM
Abstract: This paper proposes a new method to estimate the 3D motion of a vehicle based on a car-like structured motion model, using an omnidirectional camera and a laser rangefinder. In recent years, most conventional research on vision-based motion estimation has assumed planar motion in order to reduce the number of required parameters and the computational cost. However, in real outdoor terrain the motion does not satisfy this condition. In contrast, our proposed method uses a single corresponding image point together with the motion orientation to estimate the vehicle motion in 3D. To reduce the number of required parameters and speed up computation, the vehicle is assumed to move according to a car-like structured motion model. The system consists of a camera and a laser rangefinder mounted on the vehicle. The laser rangefinder is used to estimate the motion orientation and the absolute translation of the vehicle. A one-point correspondence from the omnidirectional images is combined with the motion orientation and absolute translation to estimate the yaw and pitch rotation components and the three translation components Tx, Ty, and Tz. Real experiments on sloping terrain demonstrate the accuracy of vehicle localization using the proposed method. The errors at the end of the traveled path are 1.1% for our method and 5.1% for one-point RANSAC.
|
|
14:15-14:30, Paper MoBT6.4 | |
>Analyzing and Revising High-Level Robot Behaviors under Actuator Error |
Johnson, Benjamin | Cornell Univ. |
Kress-Gazit, Hadas | Cornell Univ. |
Keywords: Formal Methods in Robotics and Automation, Task Planning, Reactive and Sensor-Based Planning
Abstract: One increasingly popular approach for creating robot controllers for complex tasks is to automatically synthesize a hybrid controller from a high-level task specification. Such an approach, in addition to reducing the time and expertise required for creating a controller, guarantees that the robot will satisfy all of the underlying specifications, given perfect sensing and actuation. This paper investigates the probabilistic guarantees that can be made about the behavior of the robot when the actuation of the robot is no longer assumed to be perfect, as well as the possible specification revisions that can be made to improve the behavior of the robot. The approach described in this paper composes probabilistic models of the environment behavior and the robot actuation error with the synthesized controller, and uses probabilistic model checking techniques to find the probability that the robot satisfies a set of high level specifications. This paper also presents a preliminary approach for analyzing the composed model and automatically generating revisions to improve the robot's high-level behavior.
|
|
14:30-14:45, Paper MoBT6.5 | |
>Guaranteeing Reactive High-Level Behaviors for Robots with Complex Dynamics |
DeCastro, Jonathan | Cornell Univ. |
Kress-Gazit, Hadas | Cornell Univ. |
Keywords: Formal Methods in Robotics and Automation, Reactive and Sensor-Based Planning, Motion and Path Planning
Abstract: Applying correct-by-construction planning techniques to robots with complex nonlinear dynamics requires new formal analysis methods which guarantee that the requested behaviors can be achieved in the continuous space. In this paper, we construct low-level controllers that ensure the execution of a high-level mission plan. Controllers are generated using trajectory-based verification to produce a set of robust reach tubes which strictly guarantee that the required motions achieve the desired task specification. Reach tubes, computed here by solving a series of sum-of-squares optimization problems, are composed in such a way that all trajectories ensure correct high-level behaviors. We illustrate the new method using an input-limited unicycle robot satisfying task specifications expressed in linear temporal logic.
|
|
14:45-15:00, Paper MoBT6.6 | |
>Towards Minimal Explanations of Unsynthesizability for High-Level Robot Behaviors |
Raman, Vasumathi | Cornell Univ. |
Kress-Gazit, Hadas | Cornell Univ. |
Keywords: Formal Methods in Robotics and Automation, Task Planning
Abstract: High-level robot control has recently seen the application of formal methods to the automatic synthesis of correct-by-construction controllers from user-defined specifications. When a specification fails to yield a corresponding controller, existing techniques provide feedback on portions of the specification that cause the failure, but at a coarse granularity. This work provides techniques for extracting minimal explanations of such failures. The approach is shown to provide refinement of the feedback on several example specifications.
|
|
MoBT7 |
Room701 |
Space Robotics |
Regular Session |
Chair: Uchiyama, Masaru | Tohoku Univ. |
Co-Chair: Yoshida, Kazuya | Tohoku Univ. |
|
13:30-13:45, Paper MoBT7.1 | |
>Modeling and Analysis of Ciliary Micro-Hopping Locomotion Actuated by an Eccentric Motor in a Microgravity |
Nagaoka, Kenji | Tohoku Univ. |
Yoshida, Kazuya | Tohoku Univ. |
Keywords: Space Robotics and Automation, Field Robots, Contact Modelling
Abstract: This paper presents the modeling and analysis of ciliary micro-hopping locomotion actuated by an eccentric motor, for enabling mobile robots to explore asteroids. In the proposed system, elastic cilia are attached to the surface of the robot; this arrangement should give the robot better mobility in a microgravity environment. However, in the development of the ciliary micro-hopping mechanism, theoretical modeling and analysis of the interaction mechanics between the cilia and the environment pose technical challenges that need to be addressed. In this paper, we present the dynamics modeling of the ciliary micro-hopping locomotion actuated by an eccentric motor, along with experimental validations and numerical simulations. The results of this study contribute to the design optimization of both the cilia mechanism and the motor control scheme.
|
|
13:45-14:00, Paper MoBT7.2 | |
>Self-Localization Using Plural Small Rovers for Asteroid Wide-Area Exploration |
Mikawa, Masahiko | Univ. of Tsukuba |
Keywords: Space Robotics and Automation, Distributed Robot Systems, Localization
Abstract: This paper presents a new robot system consisting of plural small rovers for asteroid exploration. Each rover can communicate with the others by radio, and a wireless mesh network is configured on the asteroid's surface. Our proposed system has the following three advantages over a conventional exploration system using one or two rovers: (1) it is possible to explore a wider area of an asteroid; (2) since the mesh network has redundant communication paths, it is more robust against failures; (3) it is possible to estimate the relative distances among the rovers by using the mesh network. This relative distance estimation is useful for asteroid analyses using the sensors carried by the rovers. Simulation results show the validity and effectiveness of the proposed rover system with the wireless mesh network and the relative distance estimation method.
|
|
14:00-14:15, Paper MoBT7.3 | |
>Probabilistic Surface Classification for Rover Instrument Targeting |
Foil, Greydon | Carnegie Mellon Univ. |
Thompson, David | JPL/ California Inst. of Tech. |
Abbey, William | Jet Propulsion Lab. |
Wettergreen, David | Carnegie Mellon Univ. |
Keywords: Space Robotics and Automation, Computer Vision, Field Robots
Abstract: Communication blackouts and latency are significant bottlenecks for planetary surface exploration; rovers cannot typically communicate during long traverses, so human operators cannot respond to unanticipated science targets discovered along the route. Targeted data collection by point spectrometers or high-resolution imagery requires precise aim, so it typically happens under human supervision during the start of each command cycle, directed at known targets in the local field of view. Spacecraft can overcome this limitation using onboard science data analysis to perform autonomous instrument targeting. Two critical target selection capabilities are the ability to target priority features of a known geologic class, and the ability to target anomalous surfaces that are unlike anything seen before. This work addresses both challenges using probabilistic surface classification in traverse images. We first describe a method for targeting known classes in the presence of high measurement cost that is typical for power- and time-constrained rover operations. We demonstrate a Bayesian approach that abstains from uncertain classifications to significantly improve the precision of geologic surface classifications. Our results show a significant increase in classification performance, including a seven-fold decrease in misclassification rate for our random forest classifier. We then take advantage of these classifications and learned scene context in order to train a semi-supervised novelty detector. Operators can train the novelty detection to ignore known content from previous scenes, a critical requirement for multi-day rover operations. By making use of prior scene knowledge we find nearly a 100 percent increase in the number of abnormal features detected over comparable algorithms. We evaluate both of these techniques on a set of images acquired during field expeditions in the Mojave Desert.
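The abstention idea above, trading coverage for precision by refusing uncertain classifications, can be illustrated with a generic posterior-thresholding wrapper around an off-the-shelf random forest (scikit-learn). The threshold and toy features are hypothetical, and this is not the authors' Bayesian formulation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_with_abstention(clf, X, threshold=0.8):
    """Return class predictions, abstaining (-1) whenever the highest class
    posterior falls below a confidence threshold. Abstaining on uncertain
    samples trades coverage for precision (threshold value is hypothetical)."""
    proba = clf.predict_proba(X)
    labels = clf.classes_[proba.argmax(axis=1)].copy()
    labels[proba.max(axis=1) < threshold] = -1
    return labels

# Toy data standing in for per-pixel geologic surface features.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(2, 1, (200, 5))])
y = np.repeat([0, 1], 200)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(classify_with_abstention(clf, X[:10]))
```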
|
|
14:15-14:30, Paper MoBT7.4 | |
> >Detumbling an Uncontrolled Satellite with Contactless Force by Using an Eddy Current Brake |
Sugai, Fumihito | Tohoku Univ. |
Abiko, Satoko | Tohoku Univ. |
Tsujita, Teppei | Tohoku Univ. |
Jiang, Xin | Tohoku Univ. |
Uchiyama, Masaru | Tohoku Univ. |
Attachments: Video Attachment
Keywords: Space Robotics and Automation, New Actuators for Robotics
Abstract: In this paper we propose a new method to detumble a malfunctioning satellite. Large space debris such as malfunctioning satellites generally rotates with nutational motion, and several studies have therefore proposed methods that use a space robot to capture and deorbit such debris. Most past studies considered detumbling an uncontrollable satellite and then capturing the single-spinning satellite. However, these methods require physical contact with the malfunctioning satellite, which carries a risk of accident. Therefore, we propose a method based on an eddy current brake. The eddy current brake can apply a braking force to the target without any physical contact, which reduces the risk of a critical collision between the space robot and the target object. This paper first reviews the dynamics of a tumbling satellite and proposes a detumbling strategy with the eddy current brake. We carry out a fundamental experiment to evaluate the braking force of the developed eddy current brake system, then simulate the detumbling operation using the experimental data and show the effectiveness of the proposed detumbling method.
|
|
14:30-14:45, Paper MoBT7.5 | |
>Vibration Suppression Control of a Space Robot with Flexible Appendage Based on Simple Dynamic Model |
Hirano, Daichi | Tohoku Univ. |
Fujii, Yusuke | Tohoku Univ. |
Abiko, Satoko | Tohoku Univ. |
Lampariello, Roberto | German Aerospace Center (DLR) |
Nagaoka, Kenji | Tohoku Univ. |
Yoshida, Kazuya | Tohoku Univ. |
Keywords: Space Robotics and Automation, Dynamics, Motion Control
Abstract: This paper discusses a vibration suppression control method for a space robot with a rigid manipulator and flexible appendage. A suitable dynamic model that considers the coupling between the manipulator and flexible appendage was developed for the controller to accomplish the vibration suppression control of the flexible appendage. The flexible appendage was modeled using a virtual joint model, and the control method was developed on the basis of this model. Although this type of control requires feedback of the flexible appendage state, its direct measurement is generally difficult. Thus, an estimator of the flexible appendage state was constructed using a force/torque sensor attached between the base and flexible appendage. The control method was experimentally verified using an air-floating system.
|
|
14:45-15:00, Paper MoBT7.6 | |
>Identifying the Singularity Conditions of Canadarm2 Based on Elementary Jacobian Transformation |
Xu, Wenfu | Harbin Inst. of Tech. |
Zhang, Jintao | Harbin Inst. of Tech. |
Qian, Huihuan | CUHK |
Chen, Yongquan | The Chinese Univ. of Hong Kong |
Xu, Yangsheng | The Chinese Univ. of Hong Kong |
Keywords: Space Robotics and Automation, Redundant Robots, Path Planning for Manipulators
Abstract: The Canadarm2, also named the Space Station Remote Manipulator System (SSRMS), is a 7-joint redundant manipulator. Without spherical wrists, singularity analysis and avoidance for such manipulators are very difficult. In this paper, a method is presented to analytically identify its singular configurations based on elementary transformations of the Jacobian matrix. Firstly, we construct a general kinematics model to describe such manipulators in a unified manner. Correspondingly, the differential kinematics equation and its modified form are derived. Secondly, the singularity conditions are isolated and collected in a 3×4 sub-matrix using only four row transformations of the modified Jacobian matrix, which is partitioned into a block-triangular matrix. Finally, all singular configurations are determined by analyzing the rank-degeneracy conditions of the 3×4 sub-matrix. The proposed method isolates the singularity conditions and collects them in a 3×4 sub-matrix, greatly reducing the computational workload.
|
|
MoBT8 |
Room702 |
Perception Grasping and Manipulation |
Regular Session |
Chair: Cowley, Anthony | Univ. of Pennsylvania |
Co-Chair: Yang, Guang-Zhong | Imperial Coll. London |
|
13:30-13:45, Paper MoBT8.1 | |
>Slip Interface Classification through Tactile Signal Coherence |
Heyneman, Barrett | Stanford Univ. |
Cutkosky, Mark | Stanford Univ. |
Keywords: Perception for Grasping and Manipulation, Force and Tactile Sensing, Biologically-Inspired Robots
Abstract: The manipulation of objects in a hand or gripper is typically accompanied by events such as slippage, between the fingers and a grasped object or between the object and external surfaces. Humans can identify such events using a combination of superficial and deep mechanoreceptors. In robotic hands, with more limited tactile sensing, such events can be hard to distinguish. This paper presents a signal processing method that can help to distinguish finger/object and object/world events based on multidimensional coherence, which measures whether a group of signals are sampling a single input or a group of incoherent inputs. A simple linear model of the fingertip/object interaction demonstrates how signal coherence can be used for slip classification. The method is evaluated through controlled experiments that produce similar results for two very different tactile sensing suites.
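A simple proxy for the multidimensional coherence measure described above is the average pairwise magnitude-squared coherence of the taxel signals, sketched here with scipy. The sampling rate, window length, and synthetic signals are hypothetical, and this pairwise average is a stand-in rather than the paper's exact measure.

```python
import numpy as np
from scipy.signal import coherence

def mean_pairwise_coherence(signals, fs=1000.0, nperseg=256):
    """Average magnitude-squared coherence over all sensor pairs.

    A group of taxels sampling one common vibration source (e.g. finger/object
    slip) tends to show high coherence; incoherent sources (e.g. object/world
    slip filtered through the grasped object) show lower values.

    signals : (n_channels, n_samples) array of tactile signals.
    """
    n = signals.shape[0]
    vals = []
    for i in range(n):
        for j in range(i + 1, n):
            _, cxy = coherence(signals[i], signals[j], fs=fs, nperseg=nperseg)
            vals.append(cxy.mean())
    return float(np.mean(vals))

# Coherent case: all channels see the same source plus independent noise.
rng = np.random.default_rng(0)
src = rng.normal(size=4096)
coherent = np.stack([src + 0.3 * rng.normal(size=4096) for _ in range(4)])
incoherent = rng.normal(size=(4, 4096))
print(mean_pairwise_coherence(coherent), mean_pairwise_coherence(incoherent))
```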
|
|
13:45-14:00, Paper MoBT8.2 | |
>Learning Support Order for Manipulation in Clutter |
Panda, Swagatika | International Inst. of Information Tech. Hyderabad |
Abdul Hafez, A. H. | Damascus Univ. |
Jawahar, C.V. | IIIT, Hyderabad |
Keywords: Perception for Grasping and Manipulation, Personal Robots, Computer Vision
Abstract: Understanding the positional semantics of the environment plays an important role in manipulating an object in clutter. The interactions with surrounding objects in the environment must be considered in order to perform the task without causing objects to fall or get damaged. In this paper, we learn the semantics in terms of support relationships among different objects in a cluttered environment by utilizing various photometric and geometric properties of the scene. To manipulate an object of interest, we use the inferred support relationships to derive a sequence in which its surrounding objects should be removed while causing minimal damage to the environment. We believe this work can push the boundary of robotic applications in grasping, object manipulation and picking-from-bin towards objects of generic shape and size and scenarios with physical contact and overlap. We have created an RGBD dataset that consists of various objects used in day-to-day life, present in clutter. We explore many different settings involving different kinds of object-object interaction. We successfully learn support relationships and predict support order in these settings.
|
|
14:00-14:15, Paper MoBT8.3 | |
> >Perception and Motion Planning for Pick-And-Place of Dynamic Objects |
Cowley, Anthony | Univ. of Pennsylvania |
Cohen, Benjamin | Univ. of Pennsylvania |
Likhachev, Maxim | Carnegie Mellon Univ. |
Taylor, Camillo Jose | Univ. of Pennsylvania |
Marshall, William | Lehigh Univ. |
Attachments: Video Attachment
Keywords: Perception for Grasping and Manipulation, Path Planning for Manipulators
Abstract: Mobile manipulators have brought a new level of flexibility to traditional automation tasks such as tabletop manipulation, but are not yet capable of the same speed and reliability as industrial automation. We present approaches to 3D perception and manipulator motion planning that enable a general purpose robotic platform to recognize and manipulate a variety of objects at a rate of one pick-and-place operation every 6.7s, and work with a conveyor belt carrying objects at a speed of 33 cm/s.
|
|
14:15-14:30, Paper MoBT8.4 | |
>FINDDD: A Fast 3D Descriptor to Characterize Textiles for Robot Manipulation |
Ramisa, Arnau | CSIC-UPC |
Alenyà, Guillem | CSIC-UPC |
Moreno-Noguer, Francesc | CSIC |
Torras, Carme | CSIC - UPC |
Keywords: Perception for Grasping and Manipulation, Computer Vision, Range Sensing
Abstract: Most current depth sensors provide 2.5D range images in which depth values are assigned to a rectangular 2D array. In this paper we take advantage of this structured information to build an efficient shape descriptor which is about two orders of magnitude faster than competing approaches, while showing similar performance in several tasks involving deformable object recognition. Given a 2D patch surrounding a point and its associated depth values, we build the descriptor for that point based on the cumulative distances between the normals within the patch and a discrete set of normal directions. This processing is made very efficient using integral images, even allowing descriptors to be computed for every range image pixel in a few seconds. The discriminative power of the resulting descriptor, dubbed FINDDD, is evaluated in three different scenarios: recognition of specific cloth wrinkles, instance recognition from geometry alone, and detection of reliable and informed grasping points.
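A simplified sketch of the integral-image trick that makes such descriptors cheap (the dot-product voting used here is a simplification of FINDDD's actual binning; array shapes and names are assumptions):

import numpy as np

def direction_integral_images(normals, dirs):
    # normals: (H, W, 3) unit surface normals from a 2.5D range image.
    # dirs: (K, 3) discrete reference normal directions.
    # Returns (H+1, W+1, K) integral images of per-pixel "closeness" to each
    # direction, so any rectangular patch descriptor costs 4 lookups per bin
    # regardless of patch size.
    H, W, _ = normals.shape
    votes = np.clip(normals.reshape(-1, 3) @ dirs.T, 0.0, None).reshape(H, W, -1)
    ii = np.zeros((H + 1, W + 1, votes.shape[2]))
    ii[1:, 1:] = votes.cumsum(axis=0).cumsum(axis=1)
    return ii

def patch_descriptor(ii, r0, c0, r1, c1):
    # Histogram-like descriptor of the patch [r0:r1) x [c0:c1).
    d = ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
    n = np.linalg.norm(d)
    return d / n if n > 0 else d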
|
|
14:30-14:45, Paper MoBT8.5 | |
>Snake Robot Shape Sensing Using Micro-Inertial Sensors |
Zhang, Zhiqiang | Imperial Coll. London |
Shang, Jianzhong | Imperial Coll. London |
Seneci, Carlo Alberto | Imperial Coll. London |
Yang, Guang-Zhong | Imperial Coll. London |
Keywords: Perception for Grasping and Manipulation, Surgical Robotics, Sensor Fusion
Abstract: Real-time shape sensing and state acquisition are important for closed-loop control of hyper-redundant snake robots in minimally invasive surgery. Due to the miniaturized size of such minimally invasive surgery robots, it is not feasible to use existing angular sensors, such as rotary encoders. With advances in MEMS technology, micro inertial sensors have shown their potential for robot state estimation. Previous studies have demonstrated that accurate joint angles can be estimated for one-degree-of-freedom joints. However, joints with more than one degree of freedom impose a number of challenges on current joint angle estimation methods. This paper presents a micro-sensing platform and shape reconstruction algorithm for a minimally invasive surgery snake robot with two-degree-of-freedom joints. The method incorporates gravitational and gyroscopic sensing by calculating the rotation difference between any consecutive robot segments. The gyroscope measurements are first used as the input to predict the rotation difference by direct orientation integration. The orientation difference is also derived from the consecutive acceleration vectors to update the prediction through a complementary filter. To demonstrate the performance of our proposed approach, a robot prototype with two universal joints was fabricated. Detailed experimental results demonstrate that high accuracy can be achieved by using the proposed method for joint angle estimation.
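The gyro-prediction / accelerometer-correction idea boils down to a complementary filter; a one-joint sketch under the assumption of a single rotation axis (the paper's two-DoF formulation is more involved, and alpha is an illustrative gain):

def complementary_update(theta_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    # theta_prev:  previous joint angle estimate (rad)
    # gyro_rate:   relative angular rate between consecutive segments (rad/s)
    # accel_angle: angle derived from the consecutive gravity vectors (rad)
    predicted = theta_prev + gyro_rate * dt                  # gyro integration (prediction)
    return alpha * predicted + (1.0 - alpha) * accel_angle   # accelerometer correction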
|
|
14:45-15:00, Paper MoBT8.6 | |
>Tangled: Learning to Untangle Ropes with RGB-D Perception |
Lui, Wen Hao | Cornell Univ. |
Saxena, Ashutosh | Cornell Univ. |
Keywords: Perception for Grasping and Manipulation, Learning and Adaptive Systems, Visual Learning
Abstract: In this paper, we address the problem of inferring the structure of and manipulating deformable objects such as ropes. Starting with an RGB-D view of the tangled rope, we first present a learning algorithm to infer the rope structure. We design appropriate features, an inference algorithm based on particle filters, and a learning algorithm based on max-margin learning. We then reason about the knot structure of the rope and choose an appropriate manipulation strategy based on the current rope structure. We first perform extensive evaluation on offline datasets with five different rope types and 10 different knot types. In robotic experiments, we found that our bimanual manipulator (PR2) can untangle ropes successfully 76.9% of the time.
|
|
MoBT9 |
Room608 |
Brain-Machine Interface |
Regular Session |
Chair: Matsuno, Fumitoshi | Kyoto Univ. |
Co-Chair: Ito, Tomotaka | Shizuoka Univ. |
|
13:30-13:45, Paper MoBT9.1 | |
> >Continuous Robot Control Using Surface Electromyography of Atrophic Muscles |
Vogel, Joern | German Aerospace Center |
Bayer, Justin | Tech. Univ. München |
van der Smagt, Patrick | TUM |
Attachments: Video Attachment
Keywords: Brain Machine Interface, Medical Systems, Healthcare, and Assisted Living, Human-Robot Interaction
Abstract: The development of new, light robotic systems has opened up a wealth of human–robot interaction applications. In particular, the use of robot manipulators as personal assistants for the disabled is realistic and affordable, but the brain-computer interface still requires research. Based on our previous work with tetraplegic individuals, we investigate the use of low-cost yet stable surface electromyography (sEMG) interfaces for individuals with Spinal Muscular Atrophy (SMA), a disease leading to the death of neuronal cells in the anterior horn of the spinal cord; with sEMG, we can record the remaining active muscle fibers. We show that two individuals with SMA can actively control a robot in 3.5D, continuously decoded through sEMG, after a few minutes of training, allowing them to regain some independence in daily life. Although movement is not nearly as fast as natural, unimpaired movement, reach and grasp success rates are near 100% after 50 s of movement.
|
|
13:45-14:00, Paper MoBT9.2 | |
> >Brain Machine Interface Using Portable Near-InfraRed Spectroscopy -Improvement of Classification Performance Based on ICA Analysis and Self-Proliferating LVQ |
Ito, Tomotaka | Shizuoka Univ. |
Akiyama, Hideki | Shizuoka Univ. |
Hirano, Tokihisa | Shizuoka Univ. |
Attachments: Video Attachment
Keywords: Brain Machine Interface
Abstract: Recently, the Brain-Machine Interface (BMI) has been expected to be applied to the robotics and medical science fields as a new intuitive interface. A BMI measures human cerebral activities and uses them directly as an input signal to various instruments. The future goal of our research is to design a practical BMI system that can be used reliably in daily life. In this paper, we discuss a design method for a BMI system using a portable Near-InfraRed Spectroscopy (NIRS) device, and then consider improving the performance of the learning vector quantization (LVQ) classifier by using independent component analysis (ICA) and the self-proliferating function of neurons. The effectiveness of the proposed method is investigated in human imagery classification experiments.
|
|
14:00-14:15, Paper MoBT9.3 | |
>EOG/ERP Hybrid Human-Machine Interface for Robot Control |
Ma, Jiaxin | Kyoto Univ. |
Zhang, Yu | East China Univ. of Science and Tech. |
Nam, Yunjun | Pohang Univ. of Science and Tech. |
Cichocki, Andrzej | RIKEN Brain Science Inst. |
Matsuno, Fumitoshi | Kyoto Univ. |
Keywords: Brain Machine Interface, Humanoid Robots
Abstract: Electrooculogram (EOG) signals are potential responses generated by eye movements, and the event-related potential (ERP) is a special electroencephalogram (EEG) pattern which is evoked by external stimuli. Both EOG and ERP have been used separately to implement human-machine interfaces which can assist disabled patients in performing daily tasks. In this paper, we present a novel EOG/ERP hybrid human-machine interface which integrates the traditional EOG and ERP interfaces. Eye movements like the blink, wink, gaze, and frown are detected from EOG signals using a double-threshold algorithm. Multiple ERP components, i.e., N170, VPP and P300, are evoked by inverted face stimuli and classified by linear discriminant analysis (LDA). Based on this hybrid interface, we also design a control scheme for the humanoid robot NAO (Aldebaran Robotics, Inc.). On-line experiment results show that the proposed hybrid interface can effectively control the robot's basic movements and command various behaviors. While operating the robot by hand takes 49.1 s to complete the experiment sessions, the subject is able to finish the sessions in 54.1 s using the proposed EOG/ERP interface.
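A toy illustration of a double-threshold detector of the kind mentioned for the EOG channel (the thresholds, minimum duration, and single-channel input are assumptions; the paper's detector distinguishes blinks, winks, gazes, and frowns):

import numpy as np

def detect_eog_events(eog, hi, lo, min_len=10):
    # An event starts when |eog| exceeds the high threshold `hi` and runs
    # while it stays above the low threshold `lo`; events shorter than
    # `min_len` samples are discarded as noise.
    events, start = [], None
    for k, v in enumerate(np.abs(eog)):
        if start is None and v > hi:
            start = k
        elif start is not None and v < lo:
            if k - start >= min_len:
                events.append((start, k))
            start = None
    return events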
|
|
14:15-14:30, Paper MoBT9.4 | |
>A Waypoint-Based Framework in Brain-Controlled Smart Home Environments: Brain Interfaces, Domotics, and Robotics Integration |
Kanemura, Atsunori | Osaka Univ. |
Morales Saiki, Luis Yoichi | Advanced Telecommunications Res. Inst. International |
Kawanabe, Motoaki | Advanced Telecommunications Res. Inst. International |
Morioka, Hiroshi | Tokyo Inst. of Tech. |
Kallakuri, Nagasrikanth | Advanced Telecommunications Res. Inst. |
Ikeda, Tetsushi | ATR |
Miyashita, Takahiro | ATR |
Hagita, Norihiro | ATR |
Ishii, Shin | ATR Neural Information Analysis Lab. |
Keywords: Brain Machine Interface, Smart Infrastructures, Medical Systems, Healthcare, and Assisted Living
Abstract: The noninvasive brain-machine interface (BMI) is anticipated to be an effective tool of communication not only in laboratory settings but also in our daily lives. The direct communication channel created by BMI can assist aging societies and the handicapped and improve human welfare. In this paper we propose and experimentally evaluate a BMI framework that combines BMI with a robotic house and an autonomous robotic wheelchair. Autonomous navigation is achieved by placing waypoints within the house, while the user employs BMI to give commands to the house and wheelchair. The waypoint framework can offer essential services to the user with an effectively improved information-transfer rate and is an excellent example of the fusion of data measured by sensors in the house, which can offer insight for further studies.
|
|
14:30-14:45, Paper MoBT9.5 | |
>Auditory Paradigm for a P300 BCI System Using Spatial Hearing |
Ferracuti, Francesco | Univ. Pol. delle Marche |
Freddi, Alessandro | Univ. Pol. delle Marche |
Iarlori, Sabrina | Univ. Pol. delle Marche |
Longhi, Sauro | Univ. Pol. delle Marche |
Peretti, Paolo | Univ. Pol. delle Marche |
Keywords: Brain Machine Interface, Medical Systems, Healthcare, and Assisted Living
Abstract: The present paper proposes an auditory BCI paradigm for systems based on P300 signals which are generated by auditory stimuli characterized by different sound typologies and locations. A Head Related Transfer Function approach is adopted to virtualize auditory stimuli. When virtualized audio is used, the user has to focus attention on both the type and location of the stimulus, thus generating P300 signals whose amplitude is higher than that generated without audio virtualization. Classification is performed by Support Vector Machines in which Gaussian radial basis functions are used as kernel functions. The system has been validated with 14 users, who were asked to choose one among five common spoken words, previously virtualized and transmitted to stereophonic headphones. Classification results prove that the proposed auditory BCI system performs similarly to common visual P300 BCI systems, thus representing an alternative to visual BCI for users with visual impairments.
|
|
14:45-15:00, Paper MoBT9.6 | |
> >Experimental Validation of Imposed Safety Regions for Neural Controlled Human Patient Self-Feeding Using the Modular Prosthetic Limb |
Wester, Brock | Johns Hopkins Univ. Applied Physics Lab. |
Para, Matthew | Johns Hopkins Univ. Applied Physics Lab. |
Sivakumar, Ashok | The Johns Hopkins Univ. Applied Physics Lab. |
Kutzer, Michael Dennis Mays | Johns Hopkins Univ. Applied Physics Lab. |
Katyal, Kapil | Johns Hopkins Univ. Applied Physics Lab. |
Ravitz, Alan | The Johns Hopkins Univ. Applied Physics Lab. |
Beaty, James | The Johns Hopkins Univ. Applied Physics Lab. |
Mcloughlin, Michael | Johns Hopkins Univ. Applied Physics Lab. |
Johannes, Matthew | The Johns Hopkins Univ. Applied Physics Lab. |
Attachments: Video Attachment
Keywords: Brain Machine Interface, Human-Robot Interaction, Telerobotics
Abstract: This paper presents the experimental validation of software-based safety features implemented during the control of a prosthetic limb in self-feeding tasks with a human patient. To ensure safe operation during patient controlled movements of the limb, velocity-based virtual fixtures are constructed with respect to the patient's location and orientation relative to the limb. These imposed virtual fixtures or safety zones modulate the allowable movement direction and speed of the limb to ensure patient safety during commanded limb trajectories directed toward the patient's body or environmental obstacles. In this implementation, the Modular Prosthetic Limb (MPL) will be controlled by a quadriplegic patient using implanted intracortical electrodes. These virtual fixtures leverage existing sensors internal to the MPL and operate in conjunction with the existing limb control. Validation of the virtual fixtures was conducted by executing a recorded set of limb control inputs while collecting both direct feedback from the limb sensors and ground truth measurements of the limb configuration using a Vicon tracking system. Analysis of the collected data indicates that the system performed within the limitations prescribed by the imposed virtual fixtures. This successful implementation and validation enabled the approved clinical use of the MPL system for a neural controlled self-feeding task.
|
|
MoBT10 |
Room609 |
Localization II |
Regular Session |
Chair: Scaramuzza, Davide | Univ. of Zurich |
Co-Chair: Gao, Grace Xingxin | Univ. of Illinois at Urbana Champaign |
|
13:30-13:45, Paper MoBT10.1 | |
>Single Beacon Based Localization of AUVs Using Moving Horizon Estimation |
Wang, Sen | Univ. of Essex |
Chen, Ling | Univ. of Essex |
Hu, Huosheng | Univ. of Essex |
Gu, Dongbing | Univ. of Essex |
Keywords: Localization, Marine Robotics
Abstract: This paper studies the underwater localization problem for a school of robotic fish, i.e., a kind of Autonomous Underwater Vehicle with limited size, power and payload. These robotic fish cannot be equipped with traditional underwater localization sensors that are big and heavy. The proposed localization system uses a single surface mobile beacon which provides range measurements to bound the localization error. The main contribution of this paper is twofold: 1) Observability of single-beacon-based localization is first analyzed in the context of nonlinear discrete-time systems, deriving a sufficient condition on observability. 2) Moving Horizon Estimation is then integrated with Extended Kalman Filters for three-dimensional localization using a single beacon, which can reduce the computational complexity, impose various constraints and make use of previous range measurements for the current estimate. Extensive numerical simulations are conducted to verify the observability and high localization accuracy of the proposed underwater localization method.
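For intuition, the EKF half of such a scheme reduces to a standard range-only measurement update; a sketch assuming the position occupies the first three state entries (names and noise values are illustrative, and the paper's MHE layer is not shown):

import numpy as np

def ekf_range_update(x, P, beacon, z, r_var):
    # x: state vector with position [px, py, pz] first; P: covariance.
    # beacon: (3,) position of the surface beacon; z: measured range.
    d = x[:3] - beacon
    rng = np.linalg.norm(d)
    H = np.zeros((1, x.size))
    H[0, :3] = d / rng                     # Jacobian of h(x) = ||p - beacon||
    S = H @ P @ H.T + r_var                # innovation covariance (1x1)
    K = P @ H.T / S                        # Kalman gain (n x 1)
    x_new = x + (K * (z - rng)).ravel()
    P_new = (np.eye(x.size) - K @ H) @ P
    return x_new, P_new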
|
|
13:45-14:00, Paper MoBT10.2 | |
>Low-Latency Localization by Active LED Markers Tracking Using a Dynamic Vision Sensor |
Censi, Andrea | California Inst. of Tech. |
Strubel, Jonas | Univ. of Zurich |
Brandli, Christian | Inst. of Neuroinformatics, Univ. Zurich and ETH Zurich |
Delbruck, Tobi | Inst. of Neuroinformatics, Univ. of Zurich/ETH |
Scaramuzza, Davide | Univ. of Zurich |
Keywords: Localization, Neurorobotics, Aerial Robotics
Abstract: At the current state of the art, the agility of an autonomous flying robot is limited by its sensing pipeline, because the relatively high latency and low sampling frequency limit the aggressiveness of the control strategies that can be implemented. To obtain more agile robots, we need faster sensing pipelines. A Dynamic Vision Sensor (DVS) is a very different sensor than a normal CMOS camera: rather than providing discrete frames like a CMOS camera, the sensor output is a sequence of asynchronous timestamped events, each describing a change in the perceived brightness at a single pixel. The latency of such sensors can be measured in microseconds, thus offering the theoretical possibility of creating a sensing pipeline whose latency is negligible compared to the dynamics of the platform. However, to use these sensors we must rethink the way we interpret visual data. This paper presents a method for low-latency pose tracking using a DVS and Active LED Markers (ALMs), which are LEDs blinking at high frequency (>1 kHz). The sensor's time resolution allows distinguishing different frequencies, thus avoiding the need for data association. This approach is compared to traditional pose tracking based on a CMOS camera. The DVS performance is not affected by fast motion, unlike the CMOS camera, which suffers from motion blur.
|
|
14:00-14:15, Paper MoBT10.3 | |
> >Enhancing 6D Visual Relocalisation with Depth Cameras |
Martinez-Carranza, Jose | Univ. of Bristol |
Calway, Andrew | Univ. of Bristol |
Mayol, Walterio | Univ. of Bristol |
Attachments: Video Attachment
Keywords: Localization, Computer Vision
Abstract: Relocalisation in 6D is relevant to a variety of robotics applications and in particular to agile cameras exploring a 3D environment. While the use of geometry has commonly helped to validate appearance as a back-end process in several relocalisation systems before, we are interested in using 3D information to assist fast pose relocalisation as part of a front-end task. Our approach rapidly searches for a reduced number of visual descriptors, previously observed and stored in a database, that can be used to effectively compute the camera pose corresponding to the current view. We guide the search by constructing validated candidate sets using a 3D test involving the depth information obtained with an RGB-D camera (e.g., stereo or structured light). Our experiments demonstrate that this process returns a compact, high-quality set that works better for the pose estimation stage than a typical nearest-neighbour search over appearance only. The improvements are observed in terms of the percentage of relocalised frames and speed, where the latter improves by up to two orders of magnitude w.r.t. the conventional search.
|
|
14:15-14:30, Paper MoBT10.4 | |
>Accuracy of Range-Based Localization Schemes in Random Sensor Networks: A Lower Bound Analysis |
Heng, Liang | Univ. of Illinois at Urbana-Champaign |
Gao, Grace Xingxin | Univ. of Illinois at Urbana Champaign |
Keywords: Localization, Sensor Networks
Abstract: Accuracy is a fundamental performance requirement in network localization. This paper studies the accuracy of range-based localization schemes for random sensor networks with respect to network connectivity and scale. We show that the variance of localization errors is proportional to the average geometric dilution of precision (AGDOP). The paper proves a novel lower bound on the expectation of AGDOP (LB-E-AGDOP). Our analysis based on LB-E-AGDOP shows that localization accuracy is approximately inversely proportional to the average degree of the network. A further analysis shows that when network connectivity merely guarantees localizability, increasing the number of sensor nodes leads to a bounded monotonic increase in AGDOP; when a network is densely connected, increasing the number of sensor nodes leads to a bounded monotonic decrease in AGDOP. Finally, these conclusions are validated by numerical simulations.
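A short sketch of the GDOP quantity underlying AGDOP for range-only localization (this is the textbook definition; the paper's lower-bound analysis is not reproduced here):

import numpy as np

def gdop(node, anchors):
    # node: (dim,) position estimate; anchors: (N, dim) neighbor positions.
    # Requires N >= dim and non-degenerate geometry.
    diff = node - anchors
    G = diff / np.linalg.norm(diff, axis=1, keepdims=True)  # unit line-of-sight rows
    Q = np.linalg.inv(G.T @ G)
    return float(np.sqrt(np.trace(Q)))

# AGDOP of a network ~ average of gdop(p_i, neighbors_of_i) over all unknown nodes.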
|
|
14:30-14:45, Paper MoBT10.5 | |
>Magnetic Maps of Indoor Environments for Precise Localization of Legged and Non-Legged Locomotion |
Frassl, Martin | German Aerospace Center (DLR) |
Angermann, Michael | German Aerospace Center |
Lichtenstern, Michael | German Aerospace Center (DLR) |
Robertson, Patrick | German Aerospace Center |
Julian, Brian | MIT |
Doniec, Marek | MIT |
Keywords: Localization, Mapping, Sensor Fusion
Abstract: The magnetic field in indoor environments is rich in features and exceptionally easy to sense. In conjunction with any form of odometry, such as signals produced from inertial sensors or wheel encoders, a map of this field can be used to precisely localize a human or robot in indoor environments. We show how the use of this field yields significant improvements in terms of localization accuracy and computational complexity for both legged and non-legged locomotion. We suggest various likelihood functions for sequential Monte Carlo localization and evaluate their performance based on magnetic maps of different quality. Specifically, we investigate the influence that the measurement representation (e.g., intensity-based, vector-based) and map resolution have on localization accuracy, robustness, and complexity. Compared to other localization approaches (e.g., camera-based, LIDAR-based), there are far fewer privacy concerns when sensing the indoor environment's magnetic field. Furthermore, the required sensors are less costly, more compact, and have a lower raw data rate and power consumption. The combination of technical and privacy-related advantages makes the use of the magnetic field a very viable solution for indoor navigation of both humans and robots.
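A minimal sketch contrasting an intensity-based and a vector-based measurement likelihood for sequential Monte Carlo localization on a magnetic map (sigma and the map lookup are placeholders, not values from the paper):

import numpy as np

def magnetic_likelihood(z, m, sigma, intensity_only=True):
    # z: magnetometer reading rotated into the map frame (3,)
    # m: magnetic map value at the particle's hypothesized pose (3,)
    if intensity_only:                       # heading-independent, coarser
        err = np.linalg.norm(z) - np.linalg.norm(m)
        return float(np.exp(-0.5 * (err / sigma) ** 2))
    err = z - m                              # full-vector comparison
    return float(np.exp(-0.5 * (err @ err) / sigma ** 2))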
|
|
14:45-15:00, Paper MoBT10.6 | |
> >Light-Weight Localization for Vehicles Using Road Markings |
Ranganathan, Ananth | Honda Res. Inst. USA |
Ilstrup, David | Honda Res. Inst. USA, Silicon Valley |
Wu, Tao | Univ. of Maryland |
Attachments: Video Attachment
Keywords: Localization, Computer Vision, Intelligent Transportation Systems
Abstract: Traditional vision-based localization methods such as visual SLAM suffer from practical problems in outdoor environments such as unstable feature detection and the inability to perform location recognition under lighting, perspective, weather and appearance change. Additionally, map construction on a large scale in these systems presents its own challenges. In this work, we present a novel method for precisely localizing vehicles on the road using signs marked on the road (road markings), which have the advantage of being distinct and easy to detect, their detection being robust under changes in lighting and weather. Our method uses corners detected on road markings to perform localization in global coordinates. The method consists of two phases - a mapping phase, in which a high-quality GPS device is used to automatically survey road marks and add them to a light-weight “map” or database, and a localization phase, in which road mark detection and look-up in the map, combined with visual odometry, produce precise localization. We present experiments using a real-time implementation operating in a car that demonstrate the improved localization robustness and accuracy of our system even when using road marks alone. However, in this case the trajectory between road marks has to be filled in by visual odometry, which contributes drift. Hence, we also present a mechanism for combining road-mark-based maps with sparse feature-based maps that results in greater accuracy still. We see our use of road marks as a significant step in the general trend of using higher-level features for improved localization performance irrespective of environment conditions.
|
|
MoBT11 |
Room801 |
Wire-Driven/parallel Robot |
Regular Session |
Chair: Ozawa, Ryuta | Ritsumeikan Univ. |
Co-Chair: Ichikawa, Akihiko | Meijo Univ. |
|
13:30-13:45, Paper MoBT11.1 | |
>On the Simplifications of Cable Model in Static Analysis of Large-Dimension Cable-Driven Parallel Robots |
Nguyen, Dinh Quan | Univ. Montpellier 2, LIRMM/CNRS |
Gouttefarde, Marc | LIRMM |
Company, Olivier | Univ. of Montpellier 2 |
Pierrot, François | CNRS - LIRMM |
Keywords: Parallel Robots, Tendon/Wire Mechanism
Abstract: This paper addresses the simplification of the cable model in the static analysis of large-dimension cable-driven parallel robots (CDPRs). An approach to derive a simplified hefty cable model is presented. The approach provides insight into the limitations of such a simplification. The resulting cable tension computation is then used to solve the inverse kinematic problem of the CDPR. A new expression of the cable length taking into account both the non-negligible cable mass and elasticity is also introduced. Finally, simulations and experiments on a large CDPR prototype are provided. The results show that taking into account both cable mass and elasticity improves the robot accuracy.
|
|
13:45-14:00, Paper MoBT11.2 | |
>Design of Upper Limb by Adhesion of Muscles and Bones -Detail Human Mimetic Musculoskeletal Humanoid Kenshiro |
Kozuki, Toyotaka | Univ. of Tokyo |
Motegi, Yotaro | The Univ. of Tokyo |
Shirai, Takuma | Tokyo Univ. |
Asano, Yuki | The Univ. of Tokyo |
Urata, Junichi | The Univ. of Tokyo |
Nakanishi, Yuto | The Univ. of Tokyo |
Okada, Kei | The Univ. of Tokyo |
Inaba, Masayuki | The Univ. of Tokyo |
Keywords: Tendon/Wire Mechanism, Biologically-Inspired Robots, Humanoid Robots
Abstract: This paper presents a design methodology for a humanoid upper limb based on human anatomy. Kenshiro is a full-body tendon-driven humanoid robot designed from the data of an average 14-year-old Japanese boy. The design of its upper limb realizes detailed features of the muscles, the bones, and the adhesive relation between the two. The human-mimetic design focuses on the fact that joints are stabilized by muscles winding around the bones, which is enabled by accurately mimicking the bone shape. In this paper we also introduce the detailed mechanical specifications of the upper limb. By having muscles, bones, and joint structures based on human anatomy, Kenshiro can move flexibly. Its use as a human body simulator can be expected by measuring sensor data that correspond to biological data.
|
|
14:00-14:15, Paper MoBT11.3 | |
>A Novel Underactuated Wire-Driven Robot Fish with Vector Propulsion |
Li, Zheng | The Chinese Univ. of Hong Kong |
Zhong, Yong | Shenzhen Inst. of Advanced Tech. Acad. of Sc |
Du, Ruxu | The Chinese Univ. of Hong Kong |
Keywords: Biologically-Inspired Robots, Underactuated Robots, Tendon/Wire Mechanism
Abstract: This paper presents a novel robot fish with vector propulsion. It can swim like a shark and/or a dolphin. The propulsor (tail) of the robot has an underactuated serpentine backbone and is actuated by two sets of orthogonally distributed wires. The backbone is composed of seven vertebrae and an elastic rod. The vertebrae are articulated by the rod and spherical joints. The horizontal flapping and vertical flapping are independently actuated by two motors. This enables the propulsor to provide thrust in all directions. A propulsion model of the propulsor is developed by integrating the kinematic model and Lighthill’s elongated body theory. A prototype is built. Tests show that the robot fish can flap its tail like a shark or a dolphin effectively. In the swimming tests, the maximum swimming speed of the robot is 0.35 BL/s.
|
|
14:15-14:30, Paper MoBT11.4 | |
>Bio-Inspired Friction Switches: Adaptive Pulley Systems |
Dermitzakis, Konstantinos | Univ. of Zurich |
Carbajal, Juan Pablo | Ghent Univ. |
Keywords: Tendon/Wire Mechanism, Mechanism Design, Biomimetics
Abstract: Frictional influences in tendon-driven robotic systems are generally unwanted, with efforts towards minimizing them where possible. In the human hand however, the tendon-pulley system is found to be frictional with a difference between high-loaded static post-eccentric and post-concentric force production of 9-12% of the total output force. This difference can be directly attributed to tendon-pulley friction. Exploiting this phenomenon for robotic and prosthetic applications we can achieve a reduction of actuator size, weight and consequently energy consumption. In this study, we present the design of a bio-inspired friction switch. The adaptive pulley is designed to minimize the influence of frictional forces under low and medium-loading conditions and maximize it under high-loading conditions. This is achieved with a dual-material system that consists of a high-friction silicone substrate and low-friction polished steel pins. The system is described and its behavior experimentally validated with respect to the number and spacing of pins. The results validate its intended behavior, making it a viable choice for robotic tendon-driven systems.
|
|
14:30-14:45, Paper MoBT11.5 | |
>A Bilinear Formulation for the Motion Planning of Non-Holonomic Parallel Orienting Platforms |
Grosch, Patrick | Consejo Superior de Investigaciones Científicas/ UPC |
Thomas, Federico | CSIC-UPC |
Keywords: Kinematics, Nonholonomic Motion Planning, Parallel Robots
Abstract: This paper deals with the motion planning problem for parallel orienting platforms with one non-holonomic joint and two prismatic actuators which can maneuver to reach any three-degree-of-freedom pose of the moving platform. Since any system with two inputs and up to four generalized coordinates can always be transformed into chained form, this path planning problem can be solved using well-established procedures. Nevertheless, the use of these procedures requires a good understanding of Lie algebraic methods whose technicalities have proven a challenge to many practitioners who are not familiar with them. As an alternative, we show how by (a) properly locating the actuators, and (b) representing the platform orientation using Euler parameters, the studied path planning problem admits a closed-form solution whose derivation requires no other tools than ordinary linear algebra.
|
|
14:45-15:00, Paper MoBT11.6 | |
>Cartesian Stiffness Evaluation of a Novel 2 DoF Parallel Wrist under Redundant and Antagonistic Actuation |
Li, Cheng | Hong Kong Univ. of Science and Tech. |
Wu, Yuanqing | Hong Kong Univ. of Science and Tech. |
Wu, Jiachun | The Hong Kong Univ. of Science and Tech. |
Shi, Weiyi | Hong Kong Univ. of Science and Tech. |
Dai, Dan | HKUST |
Shi, Jinbo | Hong Kong Univ. of Science and Tech. |
Li, Zexiang | Hong Kong Univ. of Science and Tech. |
Keywords: Redundant Robots, Parallel Robots
Abstract: In this paper, we present an experimental evaluation of the Cartesian stiffness of a novel parallel wrist under redundant and antagonistic actuation. The mechanism in consideration is Omni-Wrist V (OW5), a two degrees-of-freedom (DoF) parallel mechanism redundantly actuated by three subchains. We first give a brief review of its kinetostatics and derive its reduced Cartesian stiffness model. To illustrate the stiffness enhancement of OW5 under redundant and antagonistic actuation, its Cartesian stiffness is measured and evaluated under four control schemes: the non-redundant control, the minimum 2-norm torque control without or with redundant encoder, and the antagonistic actuation control. Measurement data are represented using stiffness matrices and stiffness ellipses. Our study offers a quick quantitative evaluation of stiffness enhancement of OW5 under redundant and antagonistic actuation.
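A stiffness ellipse can be read off the eigendecomposition of the (symmetrized) Cartesian stiffness matrix; a two-line sketch for the 2-DoF case (illustrative, not the authors' measurement code):

import numpy as np

def stiffness_ellipse(K):
    # K: 2x2 Cartesian stiffness matrix estimated from measurements.
    # Returns principal stiffnesses (ellipse semi-axes) and their directions.
    w, V = np.linalg.eigh(0.5 * (K + K.T))
    return w, V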
|
|
MoBT12 |
Room610 |
Lower Limb Rehabilitation Systems |
Regular Session |
Chair: Guglielmelli, Eugenio | Univ. Campus Bio-Medico |
Co-Chair: Fujie, Masakatsu G. | Waseda Univ. |
|
13:30-13:45, Paper MoBT12.1 | |
> >Actively Controlled Lateral Gait Assistance in a Lower Limb Exoskeleton |
Wang, Letian | Twente Univ. |
Wang, Shiqian | Tech. Univ. of Delft |
van Asseldonk, Edwin | Univ. of Twente |
Van der Kooij, Herman | Univ. of Twente |
Attachments: Video Attachment
Keywords: Rehabilitation Robotics, Motion Control, Medical Systems, Healthcare, and Assisted Living
Abstract: Various powered wearable lower limb exoskeletons are designed for paraplegics to make them walk again. Control methods have been developed and implemented in these exoskeletons to provide active gait assistance in the sagittal plane, while active control in the frontal plane is still missing. This paper proposes a control method that provides gait assistance in both the lateral and sagittal planes. First, in the lateral plane, the exoskeleton was controlled to support the weight shift during stepping by providing assisting hip ab/adduction torques when the subject initiated a small amount of weight shift to the stance side to trigger a step. Second, the exoskeleton’s hip ab/adduction during stepping was controlled to improve lateral stability. This was achieved by altering the amount of hip ab/adduction to change the step width at heel strike. Using these controllers, an able-bodied subject could walk in the exoskeleton without any external balance aids, i.e., crutches or a walker, with his hip and knee joints controlled by the exoskeleton and his ankle joints constrained by the exoskeleton. The next step is to test whether the proposed method improves balance in spinal cord injured subjects.
|
|
13:45-14:00, Paper MoBT12.2 | |
>Soft Robot for Gait Rehabilitation of Spinalized Rodents |
Song, Yun Seong | École Pol. Fédérale de Lausanne (EPFL) |
Sun, Yi | Swiss Federal Inst. of Tech. (EPFL) |
van den Brand, Rubia | EPFL |
von Zitzewitz, Joachim | EPFL |
Micera, Silvestro | Scuola Superiore Sant'Anna |
Courtine, Gregoire | Univ. of Zurich |
Paik, Jamie | Ec. Pol. Federale de Lausanne |
Keywords: Hydraulic/Pneumatic Actuators, Medical Robots and Systems, Rehabilitation Robotics
Abstract: Soft actuators made of highly elastic polymers allow novel robotic system designs, yet application-specific soft robotic systems are rarely reported. Taking notice of the characteristics of soft pneumatic actuators (SPAs), such as high customizability and low inherent stiffness, we report in this work the use of soft pneumatic actuators for a biomedical application – the development of a soft robot for rodents, aimed at providing physical assistance during gait rehabilitation of a spinalized animal. The design requirements for performing this unconventional task are introduced. Customized soft actuators, soft joints and soft couplings for the robot are presented. A live animal experiment was performed to evaluate and show the potential of SPAs for use in current and future biomedical applications.
|
|
14:00-14:15, Paper MoBT12.3 | |
>Development of a Novel Gait Rehabilitation System Based on FES and Treadmill-Walk for Convalescent Hemiplegic Stroke Survivors |
Ye, Jing | Waseda Univ. |
Nakashima, Yasutaka | Waseda Univ. |
Watanabe, Takao | Waseda Univ. |
Seki, Masatoshi | Waseda Univ. |
Zhang, Bo | Waseda Univ. |
Liu, Quanquan | Waseda Univ. |
Yokoo, Yuki | Waseda Univ. |
Kobayashi, Yo | Waseda Univ. |
Fujie, Masakatsu G. | Waseda Univ. |
Cao, Qixin | Shanghai Jiao Tong Univ. |
Keywords: Rehabilitation Robotics, Motion Control, Medical Systems, Healthcare, and Assisted Living
Abstract: Recently, a large number of stroke survivors have been suffering from motor impairment. However, existing therapy interventions have limited effect in restoring normal motor function. Thus, we propose a novel control strategy for gait rehabilitation of hemiplegic patients. The whole system consists of a Functional Electrical Stimulation (FES) device and a Treadmill-Walk system. FES contributes to improving the quality of the gait based on real-time adjustment of the gait pattern. During gait, electrical stimuli from separate output channels of an FES device are launched to stimulate two lower extremity muscles (Tibialis Anterior (TA) and Hamstrings). The stimulus launching procedure is based on identifying the subject's gait state (stance and swing phases). From the current variation of the treadmill motor, the gait phase and muscle activation of the lower limbs can be determined during walking on the Treadmill-Walk. Three able-bodied subjects simulated hemiplegic patients in the experiment. The results indicate that the proposed method is a safe, feasible and promising intervention.
|
|
14:15-14:30, Paper MoBT12.4 | |
>Nonlinear Model Predictive Control of Joint Ankle by Electrical Stimulation for Drop Foot Correction |
Benoussaad, Mourad | Heidelberg Univ. |
Mombaur, Katja | Univ. of Heidelberg |
Azevedo, Christine | INRIA |
Keywords: Automation in Life Sciences: Biotechnology, Pharmaceutical and Health Care, Medical Systems, Healthcare, and Assisted Living, Rehabilitation Robotics
Abstract: In this paper we investigate the use of optimal control techniques to improve Functional Electrical Stimulation (FES) for drop foot correction in hemiplegic patients. A model of the foot and the tibialis anterior muscle, whose contraction is controlled by electrical stimulation, has been established and is used in the optimal control problem. The novelty in this work is the use of the ankle accelerations and shank orientations (so-called external states) in the model, which were measured on hemiplegic patients in a previous experiment using Inertial Measurement Units (IMUs). The optimal control problem minimizes the square of the muscle excitations, which serves the overall goal of reducing energy consumption in the muscle. In a first step, an offline optimal control problem is solved for test purposes and shows the efficiency of FES optimal control for drop foot correction. In a second step, a Nonlinear Model Predictive Control (NMPC) problem - or online optimal control problem - is solved in a simulated environment. While the ultimate goal is to use NMPC on the real system, i.e. directly on the patient, this test in simulation was meant to show the feasibility of NMPC for online drop foot correction. In the optimization problem, a set of fixed constraints on foot orientation was applied. Then, an original adaptive constraint taking into account the current ankle height was introduced and tested. Comparisons between results under fixed and adaptive constraints highlight the advantage of the adaptive constraint in terms of energy consumption, where the quadratic sum of controls obtained by NMPC was three times lower than with the fixed constraint. This feasibility study is a first step towards the application of NMPC to real hemiplegic patients for online FES-based drop foot correction. The adaptive constraint method presents a new and efficient approach in terms of muscular energy consumption minimization.
|
|
14:30-14:45, Paper MoBT12.5 | |
>EMG Based Approach for Wearer-Centered Control of a Knee Joint Actuated Orthosis |
Hassani, Walid | Univ. of Paris Est Créteil - (UPEC) |
Mohammed, Samer | Univ. of Paris Est Créteil - (UPEC) |
Rifai, Hala | Univ. of Paris Est Créteil |
Amirat, Yacine | Univ. of Paris Est Créteil (UPEC) |
Keywords: Medical Systems, Healthcare, and Assisted Living, Rehabilitation Robotics, Human Performance Augmentation
Abstract: This paper presents a new human-exoskeleton interaction approach to provide torque assistance of the lower limb movements upon wearer’s intention. The exoskeleton interacts with the wearer; the shank-foot orthosis system behaves as a second order dynamic system with gravity and elastic torque balance. The intention of the wearer is estimated by using a realistic musculoskeletal model of the muscles actuating the knee joint. The identification process concerns the inertial parameters of the shank-foot, the exoskeleton and the musculotendon parameters. Real-time experiments, conducted on a healthy subject during flexion and extension movements of the knee joint, have shown satisfactory results in terms of tracking error, intention detection and assistance torque generation. This approach guarantees asymptotic stability of the shank-foot-exoskeleton and adaptation to human-exoskeleton interaction. Moreover, the proposed control law is robust with respect to external disturbances.
|
|
14:45-15:00, Paper MoBT12.6 | |
>AssistOn-Knee: A Self-Aligning Knee Exoskeleton |
Celebi, Besir | Sabanci Univ. |
Yalcin, Mustafa | Sabanci Univ. |
Patoglu, Volkan | Sabanci Univ. |
Keywords: Rehabilitation Robotics, Physical Human-Robot Interaction, Force Control
Abstract: We present the kinematics, actuation, detailed design, characterization results and initial user evaluations of AssistOn-Knee, a novel self-aligning active exoskeleton for robot-assisted knee rehabilitation. AssistOn-Knee can not only assist flexion/extension movements of the knee joint but also accommodate its translational movements in the sagittal plane. By automatically aligning its joint axes, AssistOn-Knee enables an ideal match between the human knee axis and the exoskeleton axis, guaranteeing ergonomics and comfort throughout the therapy. The self-aligning feature significantly shortens the setup time required to attach the patient to the exoskeleton, allowing more effective time spent on exercises. The proposed exoskeleton actively controls the rotational degree of freedom of the knee through a Bowden cable-driven series elastic actuator, while the translational movements of the knee joint are passively accommodated through a 3 degrees-of-freedom planar parallel mechanism. AssistOn-Knee possesses a light-weight and compact design with significantly low apparent inertia, thanks to its Bowden cable based transmission that allows remote location of the actuator and reduction unit. Furthermore, thanks to its series-elastic actuation, AssistOn-Knee enables high-fidelity force control and active backdriveability within its control bandwidth, while featuring passive elasticity for excitations above this bandwidth, ensuring safety and robustness throughout the whole frequency spectrum.
|
|
MoBT13 |
Room802 |
Micro/Nano Manipulation |
Regular Session |
Chair: Régnier, Stéphane | Univ. Pierre et Marie Curie |
Co-Chair: Dong, Lixin | Michigan State Univ. |
|
13:30-13:45, Paper MoBT13.1 | |
>Development of Wet Tweezers Based on Capillary Force for Complex-Shaped and Heterogeneous Micro-Assembly |
Fuchiwaki, Ohmi | Yokohama National Univ. (YNU) |
Kumagai, Kazuya | Yokohama National Univ. |
Keywords: Micro-manipulation, Mechanism Design, Manufacturing and production systems
Abstract: In this paper, we describe newly developed wet tweezers based on capillary force for the micro-assembly of complex-shaped objects. The tweezers are composed of two movable rods driven by a piezoelectric linear motor and two thin tubes. The two rods penetrate the liquids in the thin tubes. If we simply stamp the rods on a base, we can apply a drop of even a high-viscosity liquid. If we bring them into contact with an object, we can pick it up by capillary force. Because the pair of rods is used for picking up, the posture of the object is fixed, which is important for accurate micro-assembly. To investigate the force generated by the wet tweezers, we study the relationship among the viscosity, surface tension, and meniscus shape of the liquid. We also formulate the capillary force as a function of the gap distance between the rod and the object for three different liquids. In several experiments, we have confirmed that the wet tweezers offer good assembly accuracy and the possibility of realizing heterogeneous and complex-shaped micro-assembly.
|
|
13:45-14:00, Paper MoBT13.2 | |
>Measurement System for Biomechanical Properties of Cell Sheet |
Uesugi, Kaoru | Osaka Univ. |
Akiyama, Yoshitake | Osaka Univ. |
Hoshino, Takayuki | Tokyo Univ. of Agriculture & Tech. |
Akiyama, Yoshikatsu | Tokyo Women's Medical Univ. |
Yamato, Masayuki | Tokyo Women's Medical Univ. |
Okano, Teruo | Tokyo Women's Medical Univ. |
Morishima, Keisuke | Osaka Univ. |
Keywords: Micro-manipulation, Micro/Nano Robots
Abstract: In this study, we present a new fixture (self-attachable fixture) and tensile test system for measuring the mechanical properties of cell sheets. To evaluate the strength of a cell sheet, it is most important to measure its mechanical properties in tensile mode. However, there has been no study which measured the tensile mechanical properties of cell sheets, since it has been difficult to attach a cell sheet in a tensile test system owing to the structure of conventional fixtures, and there has been no tensile test system with a measurement range that covered the tension force range of cell sheets. Therefore, we have addressed these problems by developing a self-attachable fixture and a tensile test system. Using the developed system, we measured the mechanical properties (tension, stress and initial stiffness) of C2C12 cell sheets cultured in different culture medium recipes. Cell sheets cultured in medium without FBS tended to have a higher initial stiffness. This indicates that our new fixture and test system are applicable for evaluating the mechanical properties of cell sheets.
|
|
14:00-14:15, Paper MoBT13.3 | |
> >Robotic in Situ Stiffness Cartography of InP Membranes by Dynamic Force Sensing |
Abrahamians, Jean-Ochin | ISIR |
Sauvet, Bruno | ISIR |
Polesel-Maris, Jerome | CEA Saclay |
Braive, Rémy | CNRS |
Régnier, Stéphane | Univ. Pierre et Marie Curie |
Attachments: Video Attachment
Keywords: Nano manipulation, Force and Tactile Sensing, Mapping
Abstract: Current methods of measuring mechanical properties at the micro-scale are destructive and do not allow proper characterisation of resonant MEMS/NEMS. In this paper, a cartography of local stiffness variations on a suspended micromembrane is established for the first time by a tuning-fork-based dynamic force sensor inside an SEM. Experiments are conducted on 200 nm thin InP membranes, using a 9-DoF nano-manipulation system complemented with virtual reality and automation tools. Results provide stiffness values ranging from 0.6 to 3 N/m on a single sample.
|
|
14:15-14:30, Paper MoBT13.4 | |
>Closed-Loop Control of Silicon Nanotweezers for Improvement of Sensitivity to Mechanical Stiffness Measurement and Bio-Sensing on DNA Molecules |
Lafitte, Nicolas | CNRS-Univ. of Tokyo |
Haddab, Yassine | FEMTO-ST |
Le Gorrec, Yann | FEMTO-ST/AS2M ENSMM Besançon |
Guillou, Hervé | Inst. Néel/CNRS-Univ. Joseph Fourier |
Kumemura, Momoko | The Univ. of Tokyo |
Jalabert, Laurent | The Univ. of Tokyo |
Fujita, Hiroyuki | Univ. of Tokyo |
Collard, Dominique | The Univ. of Tokyo |
Keywords: Nano manipulation
Abstract: In this work we show that implementing closed-loop control on silicon nanotweezers improves the sensitivity of the tool for the mechanical characterization of biological molecules. Micromachined tweezers have already been used for the characterization of the mechanical properties of DNA molecules as well as for the sensing of enzymatic reactions on a DNA bundle. However, the resolution of the experiments does not allow sensing on single molecules. Here we show theoretically and experimentally that, by reducing the resonance frequency of the system through state feedback, the sensitivity to stiffness variations is enhanced. Such an improvement leads to better resolution for the detection of enzymatic reactions on DNA.
|
|
14:30-14:45, Paper MoBT13.5 | |
>Nanorobotic in Situ Characterization of Nanowire Memristors and “Memsensing” |
Fan, Zheng | Michigan State Univ. |
Fan, Xudong | Michigan State Univ. |
Dong, Lixin | Michigan State Univ. |
Keywords: Nano manufacturing, Nano manipulation
Abstract: We report the nanorobotic in situ forming and characterization of memristors based on individual copper oxide nanowires (CuO NWs) and their potential applications as nanosensors with memory (memristive sensors or “memsensors”). A series of in situ techniques for the experimental investigation of memristors are developed, including nanorobotic manipulation, electron-beam-based forming, and electron energy loss spectroscopy (EELS) enabled correlation of transport properties and carrier distribution. All experimental investigations are performed inside a transmission electron microscope (TEM). The initial CuO NW memristors are formed by localized electron-beam irradiation to generate oxygen vacancies as dopants. Current-voltage properties show the distinctive hysteresis characteristics of memristors. The mechanism of this memristive behavior is explained with an oxygen vacancy migration model. The presence and migration of the oxygen vacancies is identified with EELS. Investigations also reveal that the memristive behavior can be influenced by the deformation of the nanowire, showing that the nanowire memristor can serve as a deformation/force memorable sensor. The CuO NW-based memristors will enrich the binary transition oxide family while holding a simpler and more compact design than the conventional thin-film version. With these advantages, the CuO NW-based memristors will not only facilitate applications in nanoelectronics but also play a unique role in micro-/nano-electromechanical systems (MEMS/NEMS).
|
|
MoCT1 |
Room606 |
SLAM III |
Regular Session |
Chair: Zhang, Jianwei | Univ. of Hamburg |
Co-Chair: Eustice, Ryan | Univ. of Michigan |
|
15:15-15:30, Paper MoCT1.1 | |
>Long-Term Simultaneous Localization and Mapping with Generic Linear Constraint Node Removal |
Carlevaris-Bianco, Nicholas | Univ. of Michigan |
Eustice, Ryan | Univ. of Michigan |
Keywords: SLAM, Localization, Mapping
Abstract: This paper reports on the use of generic linear constraint (GLC) node removal as a method to control the computational complexity of long-term simultaneous localization and mapping. We experimentally demonstrate that GLC provides a principled and flexible tool enabling a wide variety of complexity management schemes. Specifically, we consider two main classes: batch multi-session node removal, in which nodes are removed in a batch operation between mapping sessions, and online node removal, in which nodes are removed as the robot operates. Results are shown for 34.9 h of real-world indoor-outdoor data covering 147.4 km collected over 27 mapping sessions spanning a period of 15 months.
|
|
15:30-15:45, Paper MoCT1.2 | |
> >Real-Time SLAM with Piecewise-Planar Surface Models and Sparse 3D Point Clouds |
Ozog, Paul | Univ. of Michigan |
Eustice, Ryan | Univ. of Michigan |
Attachments: Video Attachment
Keywords: SLAM, Marine Robotics, Field Robots
Abstract: This paper reports on the use of planar patches as features in a real-time simultaneous localization and mapping (SLAM) system to model smooth surfaces as piecewise-planar. This approach works well for using observed point clouds to correct odometry error, even when the point cloud is sparse. Such sparse point clouds are easily derived by Doppler velocity log sensors for underwater navigation. Each planar patch contained in this point cloud can be constrained in a factor-graph-based approach to SLAM so that neighboring patches are sufficiently coplanar to constrain the robot trajectory, but not so much so that the curvature of the surface is lost in the representation. To validate our approach, we simulated a virtual 6-degree-of-freedom robot performing a spiral-like survey of a sphere, and provide real-world experimental results for an autonomous underwater vehicle used for automated ship hull inspection. We demonstrate that using the sparse 3D point cloud greatly improves the self-consistency of the map. Furthermore, the use of our piecewise-planar framework provides an additional constraint to multi-session underwater SLAM, improving performance over monocular camera measurements alone.
|
|
15:45-16:00, Paper MoCT1.3 | |
> >Photorealistic 3D Mapping of Indoors by RGB-D Scanning Process |
Tykkala, Tommi Mikael | i3s |
Comport, Andrew Ian | CNRS-I3S/UNS |
Kamarainen, Joni-Kristian | Tampere Univ. of Tech. |
Attachments: Video Attachment
Keywords: SLAM, Visual Tracking, Mapping
Abstract: In this work, an RGB-D input stream is utilized for GPU-boosted 3D reconstruction of textured indoor environments. The goal is to develop a process which produces standard 3D models of indoor spaces that can be explored virtually. Camera motion is tracked in 3D space by registering the current view with a reference view. Depending on the trajectory shape, the reference is either fetched from a concurrently built keyframe model or from a previous RGB-D measurement. Real-time tracking (30 Hz) is executed on a low-end GPU, which is possible because structural data is not fused concurrently. After the camera poses are estimated, both trajectory and structure are refined in post-processing. The global point cloud is compressed into triangulated surfaces using the Poisson reconstruction method, which is well suited because it fills holes and filters noise efficiently. Texturing is generated by backprojecting the nearest RGB image onto the watertight polygon mesh. The final model is stored in a standard 3D model format to allow easy user exploration and navigation in a virtual 3D environment.
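The Poisson meshing step can be reproduced with an off-the-shelf library; a sketch using a recent version of Open3D on an exported point cloud (file names and parameters are assumptions, and the paper's own GPU pipeline and texturing stage are not shown):

import open3d as o3d

pcd = o3d.io.read_point_cloud("global_cloud.ply")            # hypothetical export
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)                                             # fills holes, filters noise
o3d.io.write_triangle_mesh("indoor_model.ply", mesh)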
|
|
16:00-16:15, Paper MoCT1.4 | |
> >Finding Next Best Views for Autonomous UAV Mapping through GPU-Accelerated Particle Simulation |
Adler, Benjamin | Univ. of Hamburg |
Xiao, Junhao | Univ. of Hamburg |
Zhang, Jianwei | Univ. of Hamburg |
Attachments: Video Attachment
Keywords: Motion and Trajectory Generation, Unmanned Aerial Vehicles, Mapping
Abstract: This paper presents a novel algorithm capable of generating multiple next best views (NBVs), sorted by achievable information gain. Although being designed for waypoint generation in autonomous airborne mapping of outdoor environments, it works directly on raw point clouds and thus can be used with any sensor generating spatial occupancy information (e.g. LIDAR, kinect or Time-of-Flight cameras). To satisfy time-constraints introduced by operation on UAVs, the algorithm is implemented on a highly parallel architecture and benchmarked against the previous, CPU-based proof of concept. As the underlying hardware imposes limitations with regards to memory access and concurrency, necessary data structures and further performance considerations are explained in detail. Open-source code for this paper is available at http://www.github.com/benadler/.
|
|
16:15-16:30, Paper MoCT1.5 | |
> >Efficient Onbard RGBD-SLAM for Fully Autonomous MAVs |
Scherer, Sebastian Andreas | Univ. of Tuebingen |
Zell, Andreas | Univ. of Tübingen |
Attachments: Video Attachment
Keywords: Computer Vision, Unmanned Aerial Vehicles, SLAM
Abstract: We present a computationally inexpensive RGBD-SLAM solution tailored to application on autonomous MAVs, which enables our MAV to fly in an unknown environment and create a map of its surroundings completely autonomously, with all computations running on its onboard computer. We achieve this by implementing efficient methods for both tracking its current location with respect to a heavily processed previously seen RGB-D image (keyframe) and efficient relative registration of a set of keyframes using bundle adjustment with depth constraints as a front-end for pose graph optimization. We evaluate the accuracy and efficiency of our system on a public benchmark dataset and demonstrate that the proposed method enables our quadrotor to fly autonomously.
|
|
16:30-16:45, Paper MoCT1.6 | |
>Multi-Robot SLAM Using Condensed Measurements |
Lazaro, Maria Teresa | Univ. de Zaragoza |
Paz, Lina María | Univ. of Zaragoza |
Pinies, Pedro | Univ. de Zaragoza |
Castellanos, Jose A. | Univ. of Zaragoza |
Grisetti, Giorgio | Sapienza Univ. of Rome |
Keywords: SLAM, Distributed Robot Systems, Networked Robots
Abstract: In this paper we describe a Simultaneous Localization and Mapping (SLAM) approach specifically designed to address the communication and computational issues that affect multi-robot systems. Our method utilizes condensed measurements to exchange map information between the robots. These measurements can effectively compress relevant portions of a map into a small amount of data. This results in a substantial reduction of both the data to be transmitted and processed, which renders the system more robust and efficient. As documented by our simulated and real-world experiments, these advantages come with very little decrease in accuracy compared to ideal (but not realistic) methods that share the full data among all the robots.
|
|
MoCT2 |
Room607 |
Applications of RGB-D Cameras |
Regular Session |
Chair: Tamura, Yusuke | Chuo Univ. |
Co-Chair: Knoll, Alois C. | TU Munich |
|
15:15-15:30, Paper MoCT2.1 | |
>RGB-D Sensor Data Correction and Enhancement by Introduction of an Additional RGB View |
Mkhitaryan, Artashes | Tech. Univ. of Munich |
Burschka, Darius | Tech. Univ. Muenchen |
Keywords: Computer Vision, Range Sensing, Distributed Robot Systems
Abstract: RGB-D sensors are becoming more and more vital to robotics. Sensors such as the Microsoft Kinect and time-of-flight cameras provide 3D colored point-clouds in real time and can play a crucial role in robot vision. However, these sensors suffer from precision deficiencies, and often the density of the point-clouds they provide is insufficient. In this paper, we present a multi-camera system for correction and enhancement of the data acquired from an RGB-D sensor. Our system consists of two sensors, the RGB-D sensor (main sensor) and a regular RGB camera (auxiliary sensor). We perform the correction and the enhancement of the data acquired from the RGB-D sensor by placing the auxiliary sensor in close proximity to the target object and taking advantage of the established epipolar geometry. We have managed to reduce the relative error of the raw point-cloud from a Microsoft Kinect RGB-D sensor by 74.5% and increase its density up to 2.5 times.
|
|
15:30-15:45, Paper MoCT2.2 | |
> >RGB-D Object Tracking: A Particle Filter Approach on GPU |
Choi, Changhyun | Georgia Inst. of Tech. |
Christensen, Henrik Iskov | Georgia Inst. of Tech. |
Attachments: Video Attachment
Keywords: Visual Tracking, Perception for Grasping and Manipulation, Range Sensing
Abstract: This paper presents a particle filtering approach for 6-DoF object pose tracking using an RGB-D camera. Our particle filter is massively parallelized in a modern GPU so that it exhibits real-time performance even with several thousand particles. Given an a priori 3D mesh model, the proposed approach renders the object model onto texture buffers on the GPU, and the rendered results are directly used by our parallelized likelihood evaluation. Both photometric (colors) and geometric (3D points and surface normals) features are employed to determine the likelihood of particles with respect to the given RGB-D scene. Our approach is compared with a tracker in the PCL both quantitatively and qualitatively in synthetic and real RGB-D sequences, respectively.
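The likelihood-evaluation step summarised above can be sketched on the CPU as follows; the real system performs the rendering and weighting massively in parallel on the GPU. The `render_fn` stub, the squared-error likelihoods, and the noise scales `sigma_c`/`sigma_g` are illustrative assumptions, not the paper's exact likelihood model.

```python
# Hedged sketch of the particle-weighting step of a 6-DoF pose particle filter,
# combining photometric (colour) and geometric (point) errors as in the abstract.
# Rendering is replaced by a user-supplied stub; all parameters are illustrative.
import numpy as np

def weight_particles(particles, render_fn, observed, sigma_c=0.1, sigma_g=0.01):
    """particles: (N, 6) pose hypotheses; render_fn(pose) -> (colors, points);
    observed: (colors, points) extracted from the current RGB-D frame."""
    obs_c, obs_p = observed
    log_w = np.zeros(len(particles))
    for i, pose in enumerate(particles):
        pred_c, pred_p = render_fn(pose)                       # GPU render in the paper
        e_c = np.mean((pred_c - obs_c) ** 2)                   # photometric error
        e_g = np.mean(np.sum((pred_p - obs_p) ** 2, axis=-1))  # geometric error
        log_w[i] = -e_c / (2 * sigma_c**2) - e_g / (2 * sigma_g**2)
    w = np.exp(log_w - log_w.max())                            # numerically stable
    return w / w.sum()

def resample(particles, weights, rng=np.random.default_rng(0)):
    """Standard importance resampling of the pose hypotheses."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]
```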
|
|
15:45-16:00, Paper MoCT2.3 | |
>Multi RGB-D Camera Setup for Generating Large 3D Point Clouds |
Lemkens, Wim | International Univ. Coll. Groep T |
Prabhjot Kaur, Prabhjot | International Univ. Coll. Groep T |
Buys, Koen | KU Leuven |
Slaets, Peter | Katholieke Univ. Leuven |
Tuytelaars, Tinne | KU Leuven |
De Schutter, Joris | Katholieke Univ. Leuven |
Keywords: Visual Tracking, Sensor Fusion, Calibration and Identification
Abstract: The advent of inexpensive RGB-D cameras brings new opportunities to capture a 3D environment. This paper presents a method to create a modular setup for generating a large 3D point cloud, with attention to the study of interference, the influence of a USB extension cable, and the calibration procedure. The study of interference includes the influence of the distance between the cameras, the orientation of the cameras, and the lighting. Furthermore, this paper proposes a number of evaluation metrics for similar setups.
|
|
16:00-16:15, Paper MoCT2.4 | |
>Efficient Compositional Approaches for Real-Time Robust Direct Visual Odometry from RGB-D Data |
Klose, Sebastian | Tech. Univ. München |
Heise, Philipp | Tech. Univ. München |
Knoll, Alois C. | TU Munich |
Keywords: Computer Vision, Visual Navigation, SLAM
Abstract: In this paper we give an evaluation of different methods for computing frame-to-frame motion estimates for a moving RGB-D sensor by means of aligning two images using photometric error minimization. These kinds of algorithms have recently been shown to be very accurate and robust and therefore provide an attractive solution for robot ego-motion estimation and navigation. We demonstrate three different alignment strategies, namely the Forward-Compositional, the Inverse-Compositional and the Efficient Second-Order Minimization approach, in a general robust estimation framework. We further show how estimating global affine illumination changes in general improves the performance of the algorithms. We compare our results with recently published work, considered as state-of-the-art in this field, and show that our solutions are in general more precise and can perform in real-time on standard hardware.
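A hedged sketch of the robust photometric objective that such direct methods minimise is given below, including the global affine illumination terms mentioned in the abstract. The Huber threshold and the gain/bias parameterisation are assumptions; the warp itself and the compositional update rules (forward, inverse, ESM) are deliberately left abstract.

```python
# Hedged sketch of the robust photometric objective of direct RGB-D visual
# odometry: r_i = gain * I2(w(x_i; xi)) + bias - I1(x_i), with Huber weighting.
# Thresholds and the illumination model are illustrative assumptions.
import numpy as np

def huber_weights(r, k=0.05):
    """IRLS weights for the Huber influence function with threshold k."""
    a = np.abs(r)
    w = np.ones_like(r)
    w[a > k] = k / a[a > k]
    return w

def photometric_cost(I1_vals, I2_warped_vals, gain=1.0, bias=0.0):
    """I1_vals: reference intensities; I2_warped_vals: current-image intensities
    sampled at the warped pixel locations for the current motion estimate."""
    r = gain * I2_warped_vals + bias - I1_vals     # photometric residuals
    w = huber_weights(r)                            # down-weight outliers (occlusions etc.)
    return np.sum(w * r**2), r, w
```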
|
|
16:15-16:30, Paper MoCT2.5 | |
>Learning to Discover Objects in RGB-D Images Using Correlation Clustering |
Firman, Michael David | Univ. Coll. London |
Thomas, Diego | National Inst. of Informatics |
Julier, Simon Justin | Univ. Coll. London |
Sugimoto, Akihiro | National Inst. of Informatics |
Keywords: Computer Vision, Visual Learning
Abstract: We introduce a method to discover objects from RGB-D image collections which does not require a user to specify the number of objects expected to be found. We propose a probabilistic formulation to find pairwise similarity between image segments, using a classifier trained on labelled pairs from the recently released RGB-D Object Dataset. We then use a correlation clustering solver both to find the optimal clustering of all the segments in the collection and to recover the number of clusters. Unlike traditional supervised learning methods, our training data need not be of the same class or category as the objects we expect to discover. We show that this parameter-free supervised clustering method has superior performance to traditional clustering methods.
|
|
16:30-16:45, Paper MoCT2.6 | |
> >Multiple Object Tracking Using an RGB-D Camera by Hierarchical Spatiotemporal Data Association |
Koo, Seongyong | KAIST |
Lee, Dongheui | Tech. Univ. of Munich |
Kwon, Dong-Soo | KAIST |
Attachments: Video Attachment
Keywords: Visual Tracking, Computer Vision, Human detection and tracking
Abstract: In this paper, we propose a novel multiple object tracking method for RGB-D point set data by introducing the hierarchical spatiotemporal data association method (HSTA), in order to robustly track multiple objects without prior knowledge. HSTA is able to construct not only temporal associations between multiple objects, but also component-level spatiotemporal associations that allow the correction of falsely detected objects in the presence of various types of interaction among multiple objects. The proposed method was evaluated on four representative interaction cases: split, complete occlusion, partial occlusion, and multiple contacts. As a result, HSTA showed significantly more robust performance than other temporal data association methods in the experiments.
|
|
MoCT3 |
Room703 |
Safety of Robots |
Regular Session |
Chair: Ogasawara, Tsukasa | Nara Inst. of Science and Tech. |
Co-Chair: Rocco, Paolo | Pol. di Milano |
|
15:15-15:30, Paper MoCT3.1 | |
>Withdrawal Strategy for Human Safety Based on a Virtual Force Model |
Garcia Ricardez, Gustavo Alfonso | Nara Inst. of Science and Tech. (NAIST) |
Yamaguchi, Akihiko | Nara Inst. of Science and Tech. |
Takamatsu, Jun | Nara Inst. of Science and Tech. |
Ogasawara, Tsukasa | Nara Inst. of Science and Tech. |
Keywords: Robot Safety, Human-Humanoid Interaction, Physical Human-Robot Interaction
Abstract: Human-robot interaction is becoming increasingly close. As a consequence, human safety has become a key issue for the success of the symbiosis between humans and robots. When the minimum distance between a human and a robot is too short, the probability of a collision naturally increases. Therefore, we consider that the robot should increase its distance to the human when the human is getting closer. We propose a withdrawal strategy that aims to increase this distance by moving the end-effector not only away from the human but also toward a parking position that can be assessed in advance to be safer. To withdraw the end-effector, we use a virtual force model consisting of two virtual forces: a repelling force exerted by the human and an attractive force exerted by the parking position. We carry out experiments using a human-sized humanoid robot and five human subjects, and report the task completion time to evaluate the efficiency of the robot when performing a simple task.
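A minimal sketch of such a two-term virtual force model is shown below, assuming 3D Cartesian positions and purely illustrative gains and influence radius; the paper's exact force shaping and the mapping from the resulting command to joint motion are not specified here.

```python
# Hedged sketch of a virtual force model for the withdrawal strategy: a
# repelling force from the human plus an attractive force toward a parking
# position. Gains and the influence radius d0 are illustrative placeholders.
import numpy as np

def withdrawal_force(ee_pos, human_pos, park_pos, k_rep=1.0, k_att=0.5, d0=1.0):
    ee = np.asarray(ee_pos, float)
    d_h = ee - np.asarray(human_pos, float)
    dist = np.linalg.norm(d_h)
    f_rep = np.zeros(3)
    if 1e-6 < dist < d0:                            # repel only inside the influence radius
        f_rep = k_rep * (1.0 / dist - 1.0 / d0) * d_h / dist
    f_att = k_att * (np.asarray(park_pos, float) - ee)   # pull toward the parking position
    return f_rep + f_att                            # e.g., commanded as an end-effector velocity
```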
|
|
15:30-15:45, Paper MoCT3.2 | |
>Development of a Walking Support Robot with Velocity-Based Mechanical Safety Devices |
Kai, Yoshihiro | Tokai Univ. |
Keywords: Robot Safety, Rehabilitation Robotics, Mechanism Design
Abstract: Safety is one of the most important issues in walking support robots. This paper presents a walking support robot equipped with velocity-based mechanical safety devices. The safety devices consist of only mechanical components without actuators, controllers, or batteries. The safety device is attached to each drive-shaft of the robot. If the safety device detects an unexpected angular velocity of the drive-shaft, the safety device can switch off all motors of the robot and lock the drive-shaft. The safety devices can work even if the robot's controller does not work. Firstly, we describe the characteristics of the safety device. Secondly, we explain the walking support robot and the structure and mechanism of the safety device. Thirdly, we show the walking support robot which we developed. Finally, we experimentally verify the effectiveness of the safety device.
|
|
15:45-16:00, Paper MoCT3.3 | |
>Path-Consistent Safety in Mixed Human-Robot Collaborative Manufacturing Environments |
Zanchettin, Andrea Maria | Pol. di Milano |
Rocco, Paolo | Pol. di Milano |
Keywords: Robot Safety, Human-Robot Interaction, Industrial Robots
Abstract: In order to improve production flexibility, it is widely agreed that future working environments will be populated by both humans and robot manipulators, sharing the same workspace. This scenario introduces a series of safety issues which are uncommon in industrial settings where physical separation of robot areas is typically enforced. While several approaches for safe human-robot interaction exist, none of them can be easily integrated with production constraints. This paper discusses the composition of safety constraints with production ones. An algorithm is derived in order to maximize productivity, while guaranteeing a safe separation distance of the robot from the human. Experimental results showing the effectiveness of the approach in a typical industrial setting are also discussed.
|
|
16:00-16:15, Paper MoCT3.4 | |
>Model Driven Safety Assessment of Robotic Systems |
Yakymets, Nataliya | CEA |
Dhouib, Saadia | CEA LIST |
Jaber, Hadi | CEA |
Lanusse, Agnes | CEA |
Keywords: Robot Safety, Robotics in Hazardous Fields
Abstract: Robotic systems (RSs) are often used for performing critical tasks with little or no human intervention. Such RSs must satisfy certain dependability requirements including reliability, availability, security and safety. In this paper, we focus on the safety aspect and propose a methodology and an associated framework for safety assessment of RSs in the early phases of development. The methodology relies upon a model-driven engineering approach and describes a preliminary safety assessment of safety-critical RSs using fault tree (FT) analysis (FTA). The framework supports a domain-specific language for RSs called RobotML and includes facilities (i) to automatically generate or manually construct FTs and perform both qualitative and quantitative FTA, (ii) to make semantic connections with formal verification and FTA tools, and (iii) to represent FTA results in the RobotML modeling environment. In a case study, we illustrate the proposed methodology and framework by considering a mobile robot developed in the scope of the Proteus project.
|
|
16:15-16:30, Paper MoCT3.5 | |
>Adaptive Collision-Limitation Behavior for an Assistive Manipulator |
Stoelen, Martin Fodstad | Univ. Carlos III de Madrid |
Fernández de Tejada, Virginia | Univ. Carlos III de Madrid |
Victores, Juan G. | Univ. Carlos III de Madrid |
Jardon Huete, Alberto | Univ. CARLOS III DE MADRID |
Bonsignorio, Fabio Paolo | Heron Robots srl Univ. Carlos III de Madrid |
Balaguer, Carlos | Univ. Carlos III de Madrid |
Keywords: Personal Robots, Human Performance Augmentation, Physical Human-Robot Interaction
Abstract: An approach for adaptive shared control of an assistive manipulator is presented. A set of distributed collision and proximity sensors is used to aid in limiting collisions during direct control by the disabled user. Artificial neural networks adapt the use of the proximity sensors online, which limits movements in the direction of an obstacle before a collision occurs. The system learns by associating the different proximity sensors to the collision sensors where collisions are detected. This enables the user and the robot to adapt simultaneously and in real-time, with the objective of converging on a usage of the proximity sensors that increases performance for a given user, robot implementation and task-set. The system was tested in a controlled setting with a simulated 5 DOF assistive manipulator and showed promising reductions in the mean time on simplified manipulation tasks. It extends earlier work by showing that the approach can be applied to full multi-link manipulators.
|
|
16:30-16:45, Paper MoCT3.6 | |
> >Methods for Safe Human-Robot-Interaction Using Capacitive Tactile Proximity Sensors |
Escaida Navarro, Stefan | Karlsruhe Inst. of Tech. |
Marufo da Silva, Maximiliano | Karlsruhe Inst. of Tech. (KIT) |
Ding, Yitao | Karlsruhe Inst. of Tech. |
Puls, Stephan | Karlsruhe Inst. of Tech. |
Goeger, Dirk | Univ. Karlsruhe |
Hein, Björn | Karlsruhe Inst. of Tech. (KIT) |
Woern, Heinz | Karlsruhe Inst. of Tech. (KIT) |
Attachments: Video Attachment
Keywords: Human-Robot Interaction, Sensor Networks, Force and Tactile Sensing
Abstract: In this paper we build upon capacitive tactile proximity sensor modules developed in previous work to demonstrate applications for safe human-robot interaction. Arranged as a matrix, the modules can be used to model events in the near proximity of the robot surface, closing the near-field perception gap in robotics. The central application investigated here is object tracking. Several results are shown: the tracking of two human hands as well as the handling of occlusions and the prediction of collision for object trajectories. These results are important for novel pretouch- and touch-based human-robot interaction strategies and for assessing and implementing safety capabilities with these sensor systems.
|
|
MoCT4 |
Room601 |
Human Motion Analysis and Assistance |
Regular Session |
Chair: Lee, C. S. George | Purdue Univ. |
Co-Chair: Papanikolopoulos, Nikos | Univ. of Minnesota |
|
15:15-15:30, Paper MoCT4.1 | |
>From Human Motion Analysis to Whole-Body Control of a Dual-Arm Robot for Pick-And-Place Tasks |
Kim, Sung-Kyun | KIST |
Lee, Dong-hyun | Interaction and Robotics Res. Center, KIST, Seoul,Korea |
Hong, Seokmin | Univ. of Science and Tech. |
Oh, Yonghwan | Korea Inst. of Science & Tech. (KIST) |
Oh, Sang-Rok | KIST |
Keywords: Human Centered Planning and Control, Humanoid Robots, Mobile Manipulation
Abstract: Human action strategies are a good source for robot controller design. Since there is no decisive criterion for balance control during manipulation tasks, human motion data are obtained and analyzed in this paper. Based on the observation that the center of mass (CoM) shift is proportional to the target object distance but bounded inside the support polygon, a bound-proportional CoM planner is proposed. Along with the CoM planner, a whole-body balance and grasping controller for a dual-arm robot is proposed with a simple and computationally efficient structure. A dynamic simulation is conducted for validation and shows competent results.
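A minimal sketch of a bound-proportional CoM set-point of the kind described above is given below, with an axis-aligned box standing in for the support polygon; the gain and bounds are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of a "bound-proportional" CoM set-point: the CoM shifts
# proportionally to the target object distance but is clipped to stay inside
# the support polygon (approximated here by an axis-aligned box in the ground
# plane). Gain and bounds are illustrative.
import numpy as np

def com_setpoint(object_xy, com0_xy, k=0.3,
                 support_min=(-0.08, -0.10), support_max=(0.08, 0.10)):
    com0 = np.asarray(com0_xy, float)
    shift = k * (np.asarray(object_xy, float) - com0)   # proportional to reach distance
    return np.clip(com0 + shift, support_min, support_max)  # keep CoM inside the support polygon
```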
|
|
15:30-15:45, Paper MoCT4.2 | |
>Recognition of Ballet Micro-Movements for Use in Choreography |
Dancs, Justin | Univ. of Minnesota |
Sivalingam, Ravishankar | Univ. of Minnesota |
Somasundaram, Guruprasad | UMN |
Morellas, Vassilios | U. of Minnesota |
Papanikolopoulos, Nikos | Univ. of Minnesota |
Keywords: Gesture, Posture, Social Spaces and Facial Expressions, Surveillance Systems, Visual Learning
Abstract: Computer vision as a field has a wide and diverse range of applications. The specific application for this project is in the realm of dance, notably ballet and choreography. The project is a proof of concept for a choreography assistance tool used to recognize and record dance movements demonstrated by a choreographer. Keeping the commercial arena in mind, the Microsoft Kinect was chosen as the imaging hardware, and a pilot set was chosen to verify recognition feasibility. Before implementing a classifier, all training and test data were transformed to a more suitable representation scheme so that only the aspects important for distinguishing the moves in the pilot set were passed to the classifier. In addition, several classification algorithms using Nearest Neighbor (NN) and Support Vector Machine (SVM) methods were tested and compared on a single dictionary as well as on several different subjects. The results were promising given the framework of the project, and several expansions of this work are proposed.
|
|
15:45-16:00, Paper MoCT4.3 | |
>Dynamic Movement Primitives for Human Robot Interaction: Comparison with Human Behavioral Observation |
Prada, Miguel | Tecnalia |
Remazeilles, Anthony | Tecnalia Res. and Innovation |
Koene, Ansgar Roald | Univ. of Birmingham |
Endo, Satoshi | Univ. of Birmingham |
Keywords: Human-Robot Interaction, Learning from Demonstration, Motion Control
Abstract: This article presents the current state of ongoing work on human-robot interaction in which two partners collaborate during an object hand-over. The manipulator control is based on the Dynamic Movement Primitives (DMP) model, specialized for the object hand-over context. The proposed modifications enable finer control of the dynamics of the DMP, aligning it with human control strategies; the contributions of the feedforward and feedback parts of the control are slightly different from those in the original DMP formulation. Furthermore, the proposed scheme handles moving goals. Combined, these two modifications remove the requirement of explicitly estimating the exchange position, allowing the motion to be generated purely reactively given the instantaneous position of the human hand. The quality of the control system is evaluated through an extensive comparison with ground-truth data of object hand-overs between two humans, acquired in the context of the European project CogLaboration, which envisages an application in an industrial setting.
|
|
16:00-16:15, Paper MoCT4.4 | |
>Using Action Classification for Human-Pose Estimation |
Chan, Kai-Chi | Purdue Univ. |
Koh, Cheng-Kok | Purdue Univ. |
Lee, C. S. George | Purdue Univ. |
Keywords: Gesture, Posture, Social Spaces and Facial Expressions
Abstract: This paper presents a 3D-point-cloud system that extracts a 3D-point-cloud feature (VISH) from the observation of a depth sensor to reduce feature/depth ambiguity and estimates human poses using the result of action classification and a kinematic model. Based on the concept of distributed representation, a non-parametric action-mixture model is proposed in the system to represent high-dimensional human-pose space using low-dimensional manifolds in searching human poses. In each manifold, the probability distribution is estimated by the similarity of features. The distributions in the manifolds are then redistributed according to the stationary distribution of a Markov chain that models the frequency of actions. After the redistribution, the manifolds are combined according to the distribution determined by the action classification. In addition, the spatial relationship between human-body parts is explicitly modeled by a kinematic chain. Computer-simulation results showed that multiple low-dimensional manifolds can represent human-pose space. The 3D-point-cloud system showed reduction of the overall error and standard deviation compared with other approaches without using action classification.
|
|
16:15-16:30, Paper MoCT4.5 | |
>Learning Muscle Activation Patterns Via Nonlinear Oscillators: Application to Lower-Limb Assistance |
Aguirre-Ollinger, Gabriel | Univ. of Tecnology, Sydney |
Keywords: Physical Human-Robot Interaction, Rehabilitation Robotics, Medical Robots and Systems
Abstract: Achieving coordination between a lower-limb exoskeleton and its user is challenging because walking is a dynamic process that involves multiple, precisely timed muscle activations. Electromyographical (EMG) feedback, in spite of its drawbacks, provides an avenue for assistance by enabling users to reduce the level of muscle activation required for walking. As an alternative to direct EMG feedback, we present a method for exoskeleton control based on learning the activation pattern of specific muscles during cyclic movements. Using the example of pendular leg motion, the torque profile of one muscle group (hip flexors) is learned in a two-step process. First, the estimated torque profile is indexed to the phase of the swing movement using an adaptive frequency oscillator (AFO). The profile is then encoded using linear weighted regression. In the algorithm's assistive mode, the learned profile is reconstructed by means of the AFO and without need for additional EMG input. The reconstructed profile is converted into a torque profile to be physically delivered by the exoskeleton. We tested our method on a single-actuator exoskeleton that assists the hip joint during stationary leg swing. The learning and assistance functions were built on top of an admittance controller that enhances the exoskeleton's mechanical transparency. Initial tests showed a high level of coordination, i.e. simultaneous positive work, between the subjects' hip flexor torque and the exoskeleton's assistive torque. This result opens the door for future studies to test the users' ability to reduce their muscle activation in proportion to the assistance delivered by the exoskeleton.
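The phase-indexing step relies on an adaptive frequency oscillator. The sketch below uses a common phase-oscillator AFO formulation (in the spirit of Righetti et al. and Ronsse et al.) with illustrative coupling gain and time step; the paper additionally encodes the learned torque profile over this phase with linear weighted regression, which is omitted here.

```python
# Hedged sketch of an adaptive frequency oscillator (AFO) that synchronises its
# phase and frequency to a periodic input e(t), such as an EMG or torque
# envelope. Coupling gain k, time step dt and the example signal are illustrative.
import numpy as np

def afo_track(signal, dt=0.01, k=20.0, omega0=2 * np.pi):
    phi, omega = 0.0, omega0
    phases, omegas = [], []
    for e in signal:                     # e(t): periodic teaching signal
        s = np.sin(phi)
        phi += dt * (omega - k * e * s)  # phase dynamics perturbed by the input
        omega += dt * (-k * e * s)       # frequency adaptation ("dynamic Hebbian" rule)
        phases.append(phi % (2 * np.pi))
        omegas.append(omega)
    return np.array(phases), np.array(omegas)

# Example: lock onto a 1.2 Hz sinusoid (roughly a slow leg-swing cadence).
t = np.arange(0.0, 20.0, 0.01)
phase, freq = afo_track(np.sin(2 * np.pi * 1.2 * t))
```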
|
|
16:30-16:45, Paper MoCT4.6 | |
>Standing Mobility Vehicle with Passive Exoskeleton Assisting Voluntary Postural Changes |
Eguchi, Yosuke | Univ. of Tsukuba |
Kadone, Hideki | Univ. of Tsukuba |
Suzuki, Kenji | Univ. of Tsukuba |
Keywords: Medical Systems, Healthcare, and Assisted Living, Human Performance Augmentation, Mechanism Design
Abstract: This study proposes a novel personal mobility vehicle for supporting and assisting people with lower-limb disabilities. The developed mobile platform is capable of assisting voluntary postural transitions between standing and sitting through the Passively Assistive Limb (PAL) mechanism, in addition to providing high mobility in an upright posture. The device consists of a gas-spring-powered passive exoskeleton for postural support and two in-wheel motors for mobility support. The developed robot system allows a user to sit down on chairs and beds, and to stand up, through easy operations. In addition, a user can move the system in the standing posture without using their hands. In this paper, the development and assessments of the standing mobility vehicle are described.
|
|
MoCT5 |
Room605 |
Robot Learning III |
Regular Session |
Chair: Nakamura, Yoshihiko | Univ. of Tokyo |
Co-Chair: Chatila, Raja | ISIR |
|
15:15-15:30, Paper MoCT5.1 | |
>Conditional Transition Maps: Learning Motion Patterns in Dynamic Environments |
Kucner, Tomasz | Örebro Univ. |
Saarinen, Jari Pekka | Aalto Univ. |
Magnusson, Martin | Örebro Univ. |
Lilienthal, Achim J. | Örebro Univ. |
Keywords: Mapping, Navigation
Abstract: In this paper we introduce a method for learning motion patterns in dynamic environments. Representations of dynamic environments have recently received an increasing amount of attention in the research community. Understanding dynamic environments is seen as one of the key challenges for enabling autonomous navigation in real-world scenarios. However, representing the temporal dimension is a challenge yet to be solved. We introduce a spatial representation which encapsulates the statistical dynamic behavior observed in the environment. The proposed Conditional Transition Map (CTMap) is a grid-based representation that associates with each cell a probability distribution over the direction in which an object exits the cell, given its entry direction. The transition parameters are learned from a temporal occupancy signal on the cells using a local-neighborhood cross-correlation method. We introduce the CTMap, the learning approach, and a proof-of-concept method for estimating future paths of dynamic objects, called the Conditional Probability Propagation Tree (CPPTree). The evaluation is done using a real-world dataset collected at a busy roundabout.
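A minimal sketch of the data structure behind such a map is given below: each grid cell accumulates counts of (entry direction, exit direction) pairs, from which P(exit | entry) is estimated. The compass-direction labels and the direct counting rule are simplifications; the paper learns these parameters from occupancy signals via local-neighborhood cross-correlation.

```python
# Hedged sketch of a Conditional Transition Map cell: per-cell counts of
# observed entry -> exit transitions, normalised on demand into P(exit | entry).
# The direction labels and counting rule are illustrative simplifications.
from collections import defaultdict

class CTMapCell:
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))   # counts[entry][exit]

    def observe(self, entry_dir, exit_dir):
        self.counts[entry_dir][exit_dir] += 1

    def p_exit_given_entry(self, entry_dir):
        row = self.counts[entry_dir]
        total = sum(row.values())
        return {d: c / total for d, c in row.items()} if total else {}

ctmap = defaultdict(CTMapCell)            # keyed by (cell_x, cell_y)
ctmap[(3, 5)].observe("W", "E")           # object entered from the west, left to the east
print(ctmap[(3, 5)].p_exit_given_entry("W"))
```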
|
|
15:30-15:45, Paper MoCT5.2 | |
> >Learning-Based Robot Control with Localized Sparse Online Gaussian Process |
Park, Sooho | Carnegie Mellon Univ. |
Mustafa, Shabbir Kurbanhusen | Singapore Inst. of Manufacturing Tech. |
Shimada, Kenji | Carnegie Mellon Univ. |
Attachments: Video Attachment
Keywords: Learning and Adaptive Systems, Sensor-based Planning, AI Reasoning Methods
Abstract: In recent years, robots have been increasingly utilized in applications with complex unknown environments, which makes system modeling challenging. In order to meet the demand from such applications, an experience-based learning approach can be used. In this paper, a novel learning algorithm is proposed which can learn an unknown system model iteratively from given data, using a localization approach to manage the computational cost for real-time applications. The algorithm segments the data domain by measuring the significance of the data. As case studies, the proposed algorithm is tested on the control of a mecanum-wheeled robot and on learning the inverse kinematics of a kinematically redundant manipulator. As a result, the algorithm achieves online system-model learning for real-time robotics applications.
|
|
15:45-16:00, Paper MoCT5.3 | |
>Accurate Recursive Learning of Uncertain Diffeomorphism Dynamics |
Nilsson, Adam | California Inst. of Tech. |
Censi, Andrea | California Inst. of Tech. |
Keywords: Learning and Adaptive Systems, Visual Learning
Abstract: Diffeomorphism dynamical systems are dynamical systems in which the state is an image and each command induces a diffeomorphism of the state. These systems can approximate the dynamics of robotic sensorimotor cascades well enough to be used for problems such as planning in observation space. Learning an arbitrary diffeomorphism from pairs of images is an extremely high-dimensional problem. The previous method required O(ρ⁴) memory as a function of the desired resolution ρ, which, in practice, was the main limitation on the resolution of the diffeomorphisms that could be learned. This paper describes an algorithm based on recursive refinement that lowers the memory requirement to O(ρ²). Another improvement concerns the estimation of the diffeomorphism uncertainty, which is used to represent the sensor's limited field of view; the improved method obtains a more accurate estimate of the uncertainty by checking the consistency of a learned diffeomorphism and its independently learned inverse. The methods are tested on two robotic systems (a pan-tilt camera and a 5-DOF manipulator).
|
|
16:00-16:15, Paper MoCT5.4 | |
>Neural Learning of Stable Dynamical Systems Based on Data-Driven Lyapunov Candidates |
Neumann, Klaus | CoR-Lab. Bielefeld Univ. |
Lemme, Andre | CoR-Lab. |
Steil, Jochen J. | Bielefeld Univ. |
Keywords: Learning and Adaptive Systems, Learning from Demonstration, Motion and Trajectory Generation
Abstract: Nonlinear dynamical systems are a promising representation for learning complex robot movements. Besides their undoubted modeling power, it is of major importance that such systems work in a stable manner. We therefore present a neural learning scheme that estimates stable dynamical systems from demonstrations based on a two-stage process: first, a data-driven Lyapunov function candidate is estimated; second, stability is incorporated by means of a novel method to respect local constraints in the neural learning. We show in two experiments that this method is capable of learning stable dynamics while simultaneously sustaining the accuracy of the estimate, and that it robustly generates complex movements.
|
|
16:15-16:30, Paper MoCT5.5 | |
>Locally Weighted Least Squares Policy Iteration for Model-Free Learning in Uncertain Environments |
Howard, Matthew | King's Coll. London |
Nakamura, Yoshihiko | Univ. of Tokyo |
Keywords: Learning and Adaptive Systems, Adaptive Control, Integrated Task and Motion Planning
Abstract: This paper introduces Locally Weighted Least Squares Policy Iteration for learning approximate optimal control in settings where models of the dynamics and cost function are either unavailable or hard to obtain. Building on recent advances in Least Squares Temporal Difference Learning, the proposed approach is able to learn from data collected from interactions with a system, in order to build a global control policy based on localised models of the state-action value function. Evaluations are reported characterising learning performance for non-linear control problems including an under-powered pendulum swing-up task, and a robotic door-opening problem under different dynamical conditions.
|
|
16:30-16:45, Paper MoCT5.6 | |
>Learning an Internal Representation of the End-Effector Configuration Space |
Laflaquière, Alban | Univ. Pierre et Marie Curie; Inst. des Systèmes Intelligents et de Robotique (ISIR) |
Terekhov, Alexander V. | UPMC / CNRS |
Gas, Bruno | Univ. Pierre et Marie Curie |
O'Regan, J. Kevin | Univ. Paris 05 Descartes - LPP |
Keywords: Learning and Adaptive Systems, Autonomous Agents, Failure Detection and Recovery
Abstract: Current machine learning techniques proposed to automatically discover a robot's kinematics usually rely on a priori information about the robot's structure, sensor properties or end-effector position. This paper proposes a method to estimate a certain aspect of the forward kinematics model with no such information. An internal representation of the end-effector configuration is generated from an unstructured proprioceptive and exteroceptive data flow under very limited assumptions. A mapping from the proprioceptive space to this representational space can then be used to control the robot.
|
|
MoCT6 |
Room604 |
Sampling-Based Planning |
Regular Session |
Chair: Ohara, Kenichi | Meijo Univ. |
Co-Chair: Lien, Jyh-Ming | George Mason Univ. |
|
15:15-15:30, Paper MoCT6.1 | |
>A Study on the Finite-Time Near-Optimality Properties of Sampling-Based Motion Planners |
Dobson, Andrew | Rutgers Univ. |
Bekris, Kostas E. | Rutgers, the State Univ. of New Jersey |
Keywords: Motion and Path Planning
Abstract: Sampling-based algorithms have proven practical in solving motion planning challenges in relatively high-dimensional instances in geometrically complex workspaces. Early work focused on quickly returning feasible solutions. Only recently was it shown under which conditions these algorithms asymptotically return optimal or near-optimal solutions. These methods yield the desired properties only in an asymptotic fashion, i.e., the properties are attained after infinite computation time. This work studies the finite-time properties of sampling-based planners in terms of path quality. The focus is on roadmap-based methods, due to their simplicity. This work illustrates that existing sampling-based planners which construct roadmaps in an asymptotically (near-)optimal manner exhibit a "probably near-optimal" property in finite time. This means that it is possible to compute a confidence value, i.e., a probability, regarding the existence of upper bounds for the length of the path returned by the roadmap as a function of the number of configuration space samples. This property can result in useful tools for determining the existence of solutions and a probabilistic stopping criterion for PRM-like methods. These properties are validated through experimental trials.
|
|
15:30-15:45, Paper MoCT6.2 | |
>Mapping the Configuration Space of Polygons Using Reduced Convolution |
Behar, Evan | George Mason Univ. |
Lien, Jyh-Ming | George Mason Univ. |
Keywords: Motion and Path Planning, Collision Detection and Avoidance
Abstract: Configuration space plays an important role not only in motion planning but also in geometric modeling, shape and kinematic reasoning, and is fundamental to several basic geometric operations, such as continuous collision detection and generalized penetration depth estimation, that also find their applications in motion planning, animation and simulation. In this paper, we develop a new method for constructing the boundary of the C-space obstacles (C-Obst) of polygons. This method is simpler to implement and often more efficient than existing techniques. These main advantages are provided by a new algorithm that allows us to extract the Minkowski sum from the reduced convolution of the input polygons. We also develop a method for estimating the generalized penetration depth by computing the distance between the query point and the C-Obst surface.
|
|
15:45-16:00, Paper MoCT6.3 | |
>Adaptive Neighbor Connection for PRMs: A Natural Fit for Heterogeneous Environments and Parallelism |
Ekenna, Chinwe | Texas A&M Univ. |
Jacobs, Sam Ade | Texas A&M Univ. |
Thomas, Shawna | Texas A&M Univ. |
Amato, Nancy | Texas A&M Univ. |
Keywords: Motion and Path Planning
Abstract: Probabilistic Roadmap Methods (PRMs) are widely used motion planning methods that sample robot configurations (nodes) and connect them to form a graph (roadmap) containing feasible trajectories. Many variants propose different strategies for each of the steps and choosing among them is problem dependent. Planning in heterogeneous environments and/or on parallel machines necessitates dividing the problem into regions where these choices have to be made for each one. Hand-selecting the best method for each region becomes intractable. In particular, there are many ways to select connection candidates, and choosing the appropriate strategy is input dependent. In this paper, we present a general connection framework that adaptively selects a neighbor finding strategy from a candidate set of options. Our framework learns which strategy to use by examining their success rate and cost. It frees the user of the burden of selecting the best strategy and allows the selection to change over time. We perform experiments on rigid bodies of varying geometry and articulated linkages up to 37 degrees of freedom. Our results show that strategy performance is indeed problem/region dependent, and our adaptive method harnesses their strengths. Over all problems studied, our method differs the least from manual selection of the best method, and if one were to manually select a single method across all problems, the performance can be quite poor. Our method is able to adapt to changing sampling density and learns different strategies for each region when the problem is partitioned for parallelism.
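A hedged sketch of the adaptive selection idea is shown below, scoring each candidate connection strategy by observed successes per unit cost with occasional exploration; the strategy names, the optimistic initialisation and the epsilon-greedy rule are illustrative assumptions rather than the paper's exact learning rule.

```python
# Hedged sketch of adaptive selection among candidate neighbour-finding
# strategies, scored by successful connections per unit of computation cost.
# Strategy names and the selection rule are illustrative placeholders.
import random

class AdaptiveNeighborSelector:
    def __init__(self, strategies, epsilon=0.1):
        self.stats = {s: {"success": 1.0, "cost": 1.0} for s in strategies}  # optimistic init
        self.epsilon = epsilon                 # occasional exploration keeps estimates current

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.stats))
        return max(self.stats, key=lambda s: self.stats[s]["success"] / self.stats[s]["cost"])

    def update(self, strategy, succeeded, cost):
        self.stats[strategy]["success"] += 1.0 if succeeded else 0.0
        self.stats[strategy]["cost"] += cost

# One selector could be kept per region when the problem is partitioned for parallelism.
selector = AdaptiveNeighborSelector(["k-closest", "r-disc", "visibility-based"])
```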
|
|
16:00-16:15, Paper MoCT6.4 | |
>A Fast Streaming Spanner Algorithm for Incrementally Constructing Sparse Roadmaps |
Wang, Weifu | Dartmouth Coll. |
Balkcom, Devin | Dartmouth Coll. |
Chakrabarti, Amit | Dartmouth Coll. |
Keywords: Motion and Path Planning
Abstract: Sampling-based probabilistic roadmap algorithms such as PRM and PRM* have been shown to be effective at solving certain motion planning problems, but the large graphs generated to express the connectivity and a metric on the configuration space may require much storage space and be expensive to search. Recent work by Marble and Bekris [Marble 2011a, 2011b] applied spanner algorithms to PRM*; these algorithms prune some edges in a dense graph, while provably maintaining an approximation to the metric. In this paper, we apply (and improve) a state-of-the-art streaming spanner algorithm to prune PRM* roadmaps. The algorithm we present has the main advantage of computational speed; when applied to PRM*, the processing time per vertex is independent of the number of sampled vertices, n, as compared to O(n log^2(n) log(log(n))) in [Marble 2011b]. In practice, the algorithm we present prunes a graph with about 20 million edges in less than 20 seconds on a modern desktop computer; compared to the time required for generating such a roadmap, this additional processing time is essentially trivial. Because the combination of this algorithm with PRM* avoids many collision detections, the combination runs several times faster than PRM*. We conduct experiments using OMPL, and analyze and compare the results to those of existing sparse roadmap generation algorithms.
|
|
16:15-16:30, Paper MoCT6.5 | |
> >Construction and Use of Roadmaps That Incorporate Workspace Modeling Errors |
Malone, Nicholas | Univ. of New Mexico |
Manavi, Kasra | Univ. of New Mexico |
Wood, John | Univ. of New Mexico |
Tapia, Lydia | Univ. of New Mexico |
Attachments: Video Attachment
Keywords: Motion and Path Planning
Abstract: Probabilistic Roadmap Methods (PRMs) have been shown to work well at solving high Degree of Freedom (DoF) motion planning problems. They work by constructing a roadmap that approximates the topology of the collision-free configuration space. However, this requires an accurate model of the robot's workspace in order to test whether a sampled configuration is in collision or not. In this paper, we present a method for roadmap construction that can be used in workspaces with uncertainties in the model; for example, these can be inaccuracies caused by sensor error when the environment model was constructed. The uncertainty is encoded into the roadmap directly through the incorporation of non-binary collision detection values, e.g., a probability of collision. We refer to this new roadmap as a Safety-PRM because it allows tunability between the expected safety of the robot and the distance along a path. We compare the computational cost of Safety-PRM against two planning methods for environments without modeling errors, basic PRM and Medial Axis PRM (MAPRM), known for low computational cost and maximizing clearance, respectively. We demonstrate that in most cases, Safety-PRM produces high-quality paths maximized for clearance and safety with the least amount of computational cost. We show that these paths are tunable for both robot safety and clearance. Finally, we demonstrate the applicability of Safety-PRM on an experimental system, a Barrett Whole Arm Manipulator (WAM). On the WAM, we map expected collision probability to robot speeds to enable the robot to physically test the safety of the roadmap, and we use torque estimation to make roadmap modifications.
|
|
16:30-16:45, Paper MoCT6.6 | |
>Free-Configuration Biased Sampling for Motion Planning |
Bialkowski, Joshua J | Massachusetts Inst. of Tech. |
Otte, Michael W. | MIT |
Frazzoli, Emilio | Massachusetts Inst. of Tech. |
Keywords: Motion and Path Planning
Abstract: In sampling-based motion planning algorithms, the initial step at every iteration is to generate a new sample from the obstacle-free portion of the configuration space. This is usually accomplished via rejection sampling, i.e., repeatedly drawing points from the entire space until an obstacle-free point is found. This strategy is rarely questioned because the extra work associated with sampling (and then rejecting) useless points contributes at most a constant factor to the planning algorithm's asymptotic runtime complexity. However, this constant factor can be quite high in practice. We propose an alternative approach that enables sampling from a distribution that provably converges to a uniform distribution over only the obstacle-free space. Our method works by storing empirically observed estimates of obstacle-free space in a point-proximity data structure, and then using this information to generate future samples. Both theoretical and experimental results validate our approach.
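For contrast, the sketch below shows the rejection-sampling baseline discussed above together with a toy cache-based biased sampler; the paper's point-proximity data structure and its proof of convergence to a uniform distribution over free space are considerably more sophisticated than this illustration.

```python
# Hedged sketch: the rejection-sampling baseline the abstract contrasts against,
# and a toy biased sampler that reuses a cache of previously observed free
# samples. Cache policy, perturbation scale and exploration rate are illustrative.
import random

def rejection_sample(sample_uniform, is_free):
    while True:                              # may draw (and discard) many in-collision points
        q = sample_uniform()
        if is_free(q):
            return q

def biased_sample(free_cache, sample_uniform, is_free, sigma=0.05, p_explore=0.2):
    if free_cache and random.random() > p_explore:
        seed = random.choice(free_cache)     # perturb a previously observed free sample
        q = [x + random.gauss(0.0, sigma) for x in seed]
    else:
        q = sample_uniform()
    if is_free(q):
        free_cache.append(q)
        return q
    return None                               # caller retries; collision checks are still needed
```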
|
|
MoCT7 |
Room701 |
Sensor Calibration |
Regular Session |
Chair: Furgale, Paul Timothy | ETH Zürich |
Co-Chair: Takeuchi, Masaru | Nagoya Univ. |
|
15:15-15:30, Paper MoCT7.1 | |
>Unified Temporal and Spatial Calibration for Multi-Sensor Systems |
Furgale, Paul Timothy | ETH Zürich |
Rehder, Joern | ETH Zurich |
Siegwart, Roland | ETH Zurich |
Keywords: Calibration and Identification, Sensor Fusion, Computer Vision
Abstract: In order to increase accuracy and robustness in state estimation for robotics, a growing number of applications rely on data from multiple complementary sensors. For the best performance in sensor fusion, these different sensors must be spatially and temporally registered with respect to each other. To this end, a number of approaches have been developed to estimate these system parameters in a two-stage process, first estimating the time offset and subsequently solving for the spatial transformation between sensors. In this work, we present a novel framework for jointly estimating the temporal offset between measurements of different sensors and their spatial displacements with respect to each other. The approach is enabled by continuous-time batch estimation and extends previous work by seamlessly incorporating time offsets within the rigorous theoretical framework of maximum likelihood estimation. Experimental results for a camera-to-inertial-measurement-unit (IMU) calibration prove the ability of this framework to accurately estimate time offsets up to a fraction of the smallest measurement period.
|
|
15:30-15:45, Paper MoCT7.2 | |
>Odometry-Based Online Extrinsic Sensor Calibration |
Schneider, Sebastian | Univ. of the Bundeswehr Munich |
Luettel, Thorsten | Univ. of the Bundeswehr Muenchen |
Wuensche, Hans J | UniBw Munich |
Keywords: Calibration and Identification
Abstract: In recent years vehicles have been equipped with more and more sensors for environment perception. Among these sensors are cameras, RADAR, and single-layer and multi-layer LiDAR. One key challenge for the fusion of these sensors is sensor calibration. In this paper we present a novel extrinsic calibration algorithm based on sensor odometry. Given the time-synchronized delta poses of two sensors, our technique recursively estimates the relative pose between these sensors. The method is generic in that it can be used to estimate complete 6DOF poses, provided the sensors supply 6DOF odometry, as well as 3DOF poses (planar offset and yaw angle) for sensors providing 3DOF odometry, like a single-beam LiDAR. We show that the proposed method is robust against motion degeneracy and present results on both simulated and real-world data using an inertial measurement unit and a stereo camera system.
|
|
15:45-16:00, Paper MoCT7.3 | |
> >Automatic Calibration of Multi-Modal Sensor Systems Using a Gradient Orientation Measure |
Taylor, Zachary Jeremy | Univ. of Sydney, Australian Centre for Field Robotics |
Nieto, Juan | Univ. of Sydney, Australian Centre for Field Robotics |
Johnson, David | Univ. of Sydney |
Attachments: Video Attachment
Keywords: Calibration and Identification, Field Robots, Sensor Fusion
Abstract: A novel technique for calibrating a multi-modal sensor system has been developed. Our calibration method is based on the comparative alignment of output gradients from two candidate sensors. The algorithm is applied to the calibration of several camera-lidar systems. In this calibration the lidar scan is projected onto the camera's image using a camera model. Particle swarm optimization is used to find the optimal parameters for this model. This method requires no markers to be placed in the scene. While the system can use a set of scans, unlike many existing techniques it can also automatically calibrate the system reliably using a single scan. The method presented is successfully validated on a variety of cameras, lidars and locations. It is also compared to three existing techniques and shown to give comparable or superior results on the datasets tested.
|
|
16:00-16:15, Paper MoCT7.4 | |
>A Multiple-Camera System Calibration Toolbox Using a Feature Descriptor-Based Calibration Pattern |
Li, Bo | ETH Zurich |
Heng, Lionel | ETH Zurich |
Koeser, Kevin | ETH Zurich |
Pollefeys, Marc | ETH Zurich |
Keywords: Calibration and Identification, Computer Vision
Abstract: This paper presents a novel feature descriptor-based calibration pattern and a Matlab toolbox which uses the specially designed pattern to easily calibrate both the intrinsics and extrinsics of a multiple-camera system. In contrast to existing calibration patterns, in particular, the ubiquitous chessboard, the proposed pattern contains many more features of varying scales; such features can be easily and automatically detected. The proposed toolbox supports the calibration of a camera system which can comprise either normal pinhole cameras or catadioptric cameras. The calibration only requires that neighboring cameras observe parts of the calibration pattern at the same time; the observed parts may not overlap at all. No overlapping fields of view are assumed for the camera system. We show that the toolbox can be easily used to automatically calibrate camera systems.
|
|
16:15-16:30, Paper MoCT7.5 | |
>Sensor Calibration with Unknown Correspondence: Solving AX=XB Using Euclidean-Group Invariants |
Ackerman, Martin Kendal | Johns Hopkins Univ. |
Cheng, Alexis | Johns Hopkins Univ. |
Shiffman, Bernard | The Johns Hopkins Univ. |
Boctor, Emad | Johns Hopkins Univ. |
Chirikjian, Gregory | Johns Hopkins Univ. |
Keywords: Calibration and Identification, Medical Robots and Systems
Abstract: The AX=XB sensor calibration problem must often be solved in image guided therapy systems, such as those used in robotic surgical procedures. In this problem, A, X, and B are homogeneous transformations with A and B acquired from sensor measurements and X being the unknown. It has been known for decades that this problem is solvable for X when a set of exactly measured A's and B's, in a priori correspondence, is given. However, in practical problems, the data streams containing the A’s and B’s will be asynchronous and may contain gaps (i.e., the correspondence is unknown, or does not exist, for the sensor measurements) and temporal registration is required. For the AX=XB problem, an exact solution can be found when four independent invariant quantities exist between A and B. We formally define these invariants, reviewing and elaborating results from classical screw theory, and illustrate how they can be used, with sensor data from multiple sources that contain unknown or missing correspondences, to provide a solution for X.
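Two classical screw-theoretic identities that underlie such correspondence-free formulations can be written as follows (standard results consistent with the abstract; the paper's full set of four invariants and its temporal-registration procedure are developed in the text):

```latex
% Hedged sketch: standard screw-theory consequences of AX = XB,
% with A = (R_A, t_A), B = (R_B, t_B), X = (R_X, t_X) in SE(3).
\[
  R_A = R_X R_B R_X^{\top}, \qquad
  t_A = R_X t_B + (I - R_A)\, t_X .
\]
% Because R_A and R_B are similar rotations, the rotation angle is invariant:
\[
  \theta(R_A) = \theta(R_B).
\]
% Projecting t_A onto the unit rotation axis \hat{\omega}_A = R_X \hat{\omega}_B
% annihilates the (I - R_A) t_X term, so the screw pitch (translation along the
% axis) is also invariant:
\[
  \hat{\omega}_A^{\top} t_A \;=\; \hat{\omega}_B^{\top} t_B .
\]
% Comparing such scalar invariants across asynchronous data streams is one way
% to hypothesise correspondences between A and B measurements before solving for X.
```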
|
|
16:30-16:45, Paper MoCT7.6 | |
> >Autonomous Movement-Driven Place Recognition Calibration for Generic Multi-Sensor Robot Platforms |
Jacobson, Adam | Queensland Univ. of Tech. |
Chen, Zetao | Queensland Univ. of Tech. |
Milford, Michael J | Queensland Univ. of Tech. |
Attachments: Video Attachment
Keywords: Calibration and Identification, Sensor Fusion, Biologically-Inspired Robots
Abstract: In this paper we present a method for autonomously tuning the threshold between learning and recognizing a place in the world, based on both how the rodent brain is thought to process and calibrate multisensory data and the pivoting movement behaviour that rodents perform in doing so. The approach makes no assumptions about the number and type of sensors, the robot platform, or the environment, relying only on the ability of a robot to perform two revolutions on the spot. In addition, it self-assesses the quality of the tuning process in order to identify situations in which tuning may have failed. We demonstrate the autonomous movement-driven threshold tuning on a Pioneer 3DX robot in eight locations spread over an office environment and a building car park, and then evaluate the mapping capability of the system on journeys through these environments. The system is able to pick a place recognition threshold that enables successful environment mapping in six of the eight locations while also autonomously flagging the tuning failure in the remaining two locations. We discuss how the method, in combination with parallel work on autonomous weighting of individual sensors, moves the parameter dependent RatSLAM system significantly closer to sensor, platform and environment agnostic operation.
|
|
MoCT8 |
Room702 |
Simulation and Estimation |
Regular Session |
Chair: Amato, Nancy | Texas A&M Univ. |
Co-Chair: Smart, William | Oregon State Univ. |
|
15:15-15:30, Paper MoCT8.1 | |
>V-REP: A Versatile and Scalable Robot Simulation Framework |
Rohmer, Eric | Tohoku Univ. |
Singh, Surya | The Univ. of Queensland |
Freese, Marc Andreas | Coppelia Robotics |
Keywords: Animation and Simulation, Control Architectures and Programming
Abstract: From exploring planets to cleaning homes, the reach and versatility of robotics is vast. The integration of actuation, sensing and control makes robotics systems powerful, but complicates their simulation. This paper introduces a versatile, scalable, yet powerful general-purpose robot simulation framework called V-REP. The paper discusses the utility of a portable and flexible simulation framework that allows for the direct incorporation of various control techniques. This renders simulations and simulation models more accessible to the general public by reducing the complexity of deploying simulation models. It also increases productivity by offering built-in and ready-to-use functionalities, as well as a multitude of programming approaches. This allows for a multitude of applications including rapid algorithm development, system verification, rapid prototyping, and deployment for cases such as safety/remote monitoring, training and education, hardware control, and factory automation simulation.
|
|
15:30-15:45, Paper MoCT8.2 | |
>Optimizing Aspects of Pedestrian Traffic in Building Designs |
Rodriguez, Samuel | Texas A&M Univ. |
Zhang, Yinghua | The Univ. of Texas at Dallas |
Gans, Nicholas | Univ. Texas at Dallas |
Amato, Nancy | Texas A&M Univ. |
Keywords: Animation and Simulation, Learning and Adaptive Systems, Self-Organised Robot Systems
Abstract: In this work, we investigate aspects of building design that can be optimized. Architectural features that we explore include pillar placement in simple corridors, doorway placement in buildings, and agent placement for information dispersement in an evacuation. The metrics utilized are tuned to the specific scenarios we study, which include continuous flow pedestrian movement and building evacuation. We use Multi-dimensional Direct Search (MDS) optimization with an extreme barrier criteria to find optimal placements while enforcing building constraints.
|
|
15:45-16:00, Paper MoCT8.3 | |
>Automatic Relational Scene Representation for Safe Robotic Manipulation Tasks |
Mojtahedzadeh, Rasoul | Örebro Univ. |
Bouguerra, Abdelbaki | Orebro Univ. |
Lilienthal, Achim J. | Örebro Univ. |
Keywords: Autonomous Agents, AI Reasoning Methods
Abstract: In this paper, we propose a new approach for automatically building symbolic relational descriptions of static configurations of objects to be manipulated by a robotic system. The main goal of our work is to provide advanced cognitive abilities for such robotic systems to make them more aware of the outcome of their actions. We describe how such symbolic relations are automatically extracted for configurations of box-shaped objects using notions from geometry and static equilibrium in classical mechanics. We also present extensive simulation results as well as some real-world experiments aimed at verifying the output of the proposed approach.
|
|
16:00-16:15, Paper MoCT8.4 | |
>Lifelong Transfer Learning with an Option Hierarchy |
Hawasly, Majd | Univ. of Edinburgh |
Ramamoorthy, Subramanian | The Univ. of Edinburgh |
Keywords: Autonomous Agents, Learning and Adaptive Systems, Integrated Planning and Control
Abstract: Many applications require autonomous agents to achieve quick responses to task instances drawn from a rich family of qualitatively-related tasks. We address the setting where the tasks share a state-action space and have the same qualitative objective but differ in dynamics. We adopt a transfer learning approach where common structure in previously-learnt policies, in the form of shared subtasks, is exploited to accelerate learning in subsequent ones. We use a probabilistic mixture model to describe regions in state space which are common to successful trajectories in different instances. Then, we extract policy fragments from previously-learnt policies that are specialised to these regions. These policy fragments are options, whose initiation and termination sets are automatically extracted from data by the mixture model. In novel task instances, these options are used in an SMDP learning process and option learning repeats over the resulting policy library. The utility of this method is demonstrated through experiments in a standard navigation environment and then in the RoboCup simulated soccer domain with opponent teams of different skill.
|
|
16:15-16:30, Paper MoCT8.5 | |
>Mutual Localization: Two Camera Relative 6-DOF Pose Estimation from Reciprocal Fiducial Observation |
Dhiman, Vikas | SUNY at Buffalo |
Ryde, Julian | Univ. at Buffalo |
Corso, Jason | SUNY Buffalo |
Keywords: Cooperating Robots, Localization, Computer Vision
Abstract: Concurrently estimating the 6-DOF pose of multiple cameras or robots---cooperative localization---is a core problem in contemporary robotics. Current works focus on a set of mutually observable world landmarks and often require inbuilt egomotion estimates; situations in which both assumptions are violated often arise, for example, robots with erroneous low-quality odometry and IMU exploring an unknown environment. In contrast to these existing works, we propose a cooperative localization method, which we call *mutual localization*, that uses reciprocal observations of camera-fiducials to obviate the need for egomotion estimates and mutually observable world landmarks. We formulate and solve an algebraic formulation for the pose of the two-camera mutual localization setup under these assumptions. Our experiments demonstrate the capabilities of the proposed egomotion-free cooperative localization method: for example, the method achieves 2 cm range and 0.7 degree accuracy at 2 m sensing for 6-DOF pose. To demonstrate the applicability of the proposed work, we deploy our method on Turtlebots and compare our results with ARToolKit and Bundler, over which our method achieves a tenfold improvement in translation estimation accuracy.
|
|
16:30-16:45, Paper MoCT8.6 | |
>Global Identification of Spring Balancer, Dynamic Parameters and Drive Gains of Heavy Industrial Robots |
Jubien, Anthony | Univ. de nantes |
Gautier, Maxime | Univ. of Nantes/IRCCyN |
Keywords: Industrial Robots, Calibration and Identification, Dynamics
Abstract: In this paper, the global identification of the spring balancer, dynamic parameters and joint drive gains of a 6-degrees-of-freedom (DOF) robot is performed. The off-line identification method is based on the use of the Inverse Dynamic Identification Model (IDIM), which takes into account a spring balancer for gravity compensation, and on a linear Least Squares (LS) technique to estimate the parameters from the joint positions and joint torques. Accurate values of the joint drive gains are key to accurate identification because the joint torques are calculated as the product of the current references and the joint drive gains. Recently, a new method validated on small-payload robots (less than 10 kg) has made it possible to identify all joint drive gains and dynamic parameters simultaneously. This method is based on the Total Least Squares (TLS) solution of an over-determined linear system obtained with the inverse dynamic model calculated while the robot tracks reference trajectories without load and trajectories with a known payload fixed on the robot. This method is used to accurately identify the heavy industrial robot KUKA KR270 (270 kg payload) with its spring balancer. This is a new step toward a practical and easy-to-use method for the global dynamic identification of any small or heavy gravity-compensated industrial robot that does not need any a priori data, which are too often missing from manufacturers' data sheets.
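The core IDIM-LS step reduces to an ordinary linear least-squares problem once the observation matrix is built from the measured motion. The sketch below shows that step with standard LS statistics; the regressor construction, data filtering and the TLS drive-gain estimation are omitted, and the variable names are illustrative.

```python
# Hedged sketch of the IDIM-LS step: the inverse dynamic identification model is
# stacked as a linear regression tau = W(q, dq, ddq) * chi and the dynamic
# parameters chi are estimated by least squares. The regressor builder, data
# filtering and the TLS drive-gain estimation from the paper are not shown.
import numpy as np

def idim_least_squares(W, tau):
    """W: (n_samples * n_joints, n_params) observation matrix; tau: measured joint
    torques (drive gains times current references), flattened to match W."""
    n, p = W.shape
    assert n > p, "the stacked system must be over-determined"
    chi, _, rank, _ = np.linalg.lstsq(W, tau, rcond=None)
    # Standard ordinary-LS statistics: residual variance and parameter std deviations.
    res = tau - W @ chi
    sigma2 = (res @ res) / (n - p)
    sigma_chi = np.sqrt(np.diag(sigma2 * np.linalg.inv(W.T @ W)))
    return chi, sigma_chi, rank
```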
|
|
MoCT9 |
Room608 |
Aerial Robotics I |
Regular Session |
Chair: Paulos, James | Univ. of Pennsylvania |
Co-Chair: Pounds, Paul | The Univ. of Queensland |
|
15:15-15:30, Paper MoCT9.1 | |
> >A Flying Robot with Adaptive Morphology for Multi-Modal Locomotion |
Daler, Ludovic | Ec. Pol. Federale de Lausanne |
Lecoeur, Julien | EPFL |
Hählen, Patrizia Bernadette | EPFL |
Floreano, Dario | Ec. Pol. Federal, Lausanne |
Attachments: Video Attachment
Keywords: Aerial Robotics, Field Robots, Robotics in Hazardous Fields
Abstract: Most existing robots are designed to exploit only a single locomotion mode, such as rolling, walking, flying, swimming, or jumping, which limits their flexibility and adaptability to different environments where specific and different locomotion capabilities could be more effective. Here we introduce the concept and design of a flying robot with Adaptive Morphology for Multi-Modal Locomotion. We present a prototype that can use its wings to walk on the ground and fly forward. The wings are used as whegs to move on rough terrain. This solution minimizes the structural mass of the robot by reusing the same structure (here the wings) for different modes of locomotion. Furthermore, the morphology of the robot is analysed and optimized for ground speed.
|
|
15:30-15:45, Paper MoCT9.2 | |
>A Wing Characterization Method for Flapping-Wing Robotic Insects |
Lussier Desbiens, Alexis | Stanford Univ. |
Chen, YuFeng | Microrobotics Lab. School of Applied Sciences and Enginee |
Wood, Robert | Harvard Univ. |
Keywords: Unmanned Aerial Vehicles, Kinematics, Calibration and Identification
Abstract: This paper presents a wing characterization method for insect-scale flapping-wing robots. A quasi-steady model is developed to predict passive wing pitching at mid-stroke. Millimeter scale wings and passive hinges are manufactured using the SCM fabrication processes. Flapping experiments at various frequencies and driving voltages are performed to extract kinematics for comparison with the quasi-steady predictions. These experiments examine the validity of the quasi-steady model and demonstrate the robustness of the wing characterization method. In addition, because time-averaged lift and drag are strongly correlated with flapping kinematics, quasi-steady prediction of wing kinematics directly leads to predictions of lift and drag generation. Given a flapping frequency and a driving voltage, the model computes the hinge stiffness that leads to optimal flapping kinematics. This reduces the number of flapping experiments required for wing characterization by a factor of four.
|
|
15:45-16:00, Paper MoCT9.3 | |
> >An Underactuated Propeller for Attitude Control in Micro Air Vehicles |
Paulos, James | Univ. of Pennsylvania |
Yim, Mark | Univ. of Pennsylvania |
Attachments: Video Attachment
Keywords: Unmanned Aerial Vehicles, Aerial Robotics, Mechanism Design
Abstract: Traditional coaxial helicopter micro air vehicles use a large propeller motor in conjunction with two small servomotors to control thrust, pitch, and roll forces and moments. Quadrotors similarly generate these necessary forces and moments through the coordinated control of multiple actuators. We present a novel propeller architecture which allows a single motor and rotor to express such control by modulating the torque applied to one passively hinged, underactuated propeller. Flight tests of a two-motor coaxial helicopter demonstrate that such a system can provide active stability and control in a real flight system.
|
|
16:00-16:15, Paper MoCT9.4 | |
> >Complete Dynamic Modeling, Control and Optimization for an Over-Actuated MAV |
Long, Yangbo | Stevens Inst. of Tech. |
Cappelleri, David | Stevens Inst. of Tech. |
Attachments: Video Attachment
Keywords: Unmanned Aerial Vehicles, Dynamics, Motion Control
Abstract: This paper presents an original configuration of a micro aerial vehicle (MAV), the Omnicopter. Two central counter-rotating coaxial propellers provide the major part of the lift force, and three perimeter-mounted tiltable ducted fans are used to supplement the lift force, provide lateral forces and adjust its attitude. In contrast to traditional underactuated MAVs, the tilt-rotor mechanism, composed of three ducted fans and three servo motors, makes the Omnicopter over-actuated. This over-actuation enables the Omnicopter’s position dynamics to be decoupled from its attitude dynamics. Based on a complete description of its dynamic model derived using the Newton-Euler motion equations, we propose attitude and position controllers and control allocation for the Omnicopter MAV. Simulation and experimental results are shown to demonstrate its performance.
|
|
16:15-16:30, Paper MoCT9.5 | |
>Towards a More Efficient Quadrotor Configuration |
Driessens, Scott | Univ. of Queensland |
Pounds, Paul | The Univ. of Queensland |
Keywords: Unmanned Aerial Vehicles, Aerial Robotics, Unmanned Aerial Systems
Abstract: The small rotor sizes of quadrotors and multi-rotors make them intrinsically less energy efficient than a traditional helicopter with a large single rotor. However, the quadrotor configuration’s innate simplicity and inexpensive construction recommend its use in many aerial robotics applications. We present a four-rotor configuration that merges the simplicity of a quadrotor with the energy efficiency of a helicopter, while potentially improving the attitude control bandwidth. This class of aircraft, called a ‘Y4’ or ‘triangular quadrotor’, consists of a single fixed-pitch main rotor with three smaller rotors on booms that provide both counter-torque and manoeuvring control. Our analysis indicates that a Y4 may provide a 20 per cent reduction in the hovering power required, compared with a similarly sized conventional quadrotor. Using a matched pair of quadrotor/triangular quadrotor aircraft, our preliminary experiments show that the test-bed Y4 used 15 per cent less power, without optimisation. We present a dynamic model and demonstrate experimentally that the aircraft can be stabilised in flight with PID control.
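A hedged back-of-the-envelope sketch of the efficiency argument, using ideal momentum theory only (induced hover power P = T^(3/2)/sqrt(2*rho*A)); the mass and rotor radii below are made-up numbers, not the paper's test-bed values:

import math

# Ideal induced hover power of a rotor producing thrust T over disk area A.
def hover_power(thrust_N, disk_area_m2, rho=1.225):
    return thrust_N ** 1.5 / math.sqrt(2.0 * rho * disk_area_m2)

mass = 1.0                      # kg, hypothetical aircraft mass
T = mass * 9.81                 # total thrust at hover (N)
r_single, r_quad = 0.25, 0.10   # hypothetical rotor radii (m)

# One large main rotor versus four small rotors sharing the same thrust.
p_single = hover_power(T, math.pi * r_single ** 2)
p_quad = 4 * hover_power(T / 4, math.pi * r_quad ** 2)
print(f"single rotor: {p_single:.1f} W, four small rotors: {p_quad:.1f} W")
# Larger total disk area -> lower induced power, which is why a single big
# rotor (as in the Y4) can hover more efficiently than four small rotors.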
|
|
16:30-16:45, Paper MoCT9.6 | |
>A Modular Aerial Vehicle with Redundant Actuation |
Naldi, Roberto | CASY - D.E.I.S. - Univ. di Bologna |
Riccò, Alessio | Univ. of Bologna |
Serrani, Andrea | The Ohio State Univ. |
Marconi, Lorenzo | Univ. of Bologna |
Keywords: Aerial Robotics, Motion Control, Underactuated Robots
Abstract: This work presents the design and experimental validation of a control strategy for an innovative modular aerial vehicle characterized by redundant actuation. For this class of aircraft, the distinguishing feature of the proposed design – which sets it apart from standard vertical take-off and landing (VTOL) under-actuated configurations such as helicopters, ducted-fan tail-sitters or multi-rotors – is that the input redundancy can be employed to improve the dynamical properties of the system. In particular, the vehicle performance can be enhanced in certain applications that benefit from a larger number of degrees of freedom being simultaneously controlled. A control strategy is proposed which is capable of globally stabilizing the dynamics of this class of vehicles along a desired trajectory. The methodology is validated by means of experiments carried out on a special prototype obtained by rigidly connecting two ducted-fan tail-sitter UAVs.
|
|
MoCT10 |
Room609 |
Cooperative Localization |
Regular Session |
Chair: Nerurkar, Esha | Univ. of Minnesota |
Co-Chair: Zhang, Feihu | TU München |
|
15:15-15:30, Paper MoCT10.1 | |
>Recursive Bayesian Initialization of Localization Based on Ranging and Dead Reckoning |
Nilsson, John-Olof | KTH Royal Inst. of Tech. |
Händel, Peter | KTH Royal Inst. of Tech. |
Keywords: Localization, Sensor Fusion, Self-Organised Robot Systems
Abstract: The initialization of the state estimation in a localization scenario based on ranging and dead reckoning is studied. Specifically, we treat a cooperative localization setup and consider the problem of recursively arriving at a uni-modal state estimate with sufficiently low covariance that covariance-based filters can subsequently be used to estimate an agent's state. The initialization of the position of an anchor node is a special case of this. A number of simplifications/assumptions are made such that the estimation problem can be seen as that of estimating the initial agent state given a deterministic surrounding and dead reckoning. This problem is solved by means of a particle filter, and we describe how state and covariance estimates are continually derived from the solution. Finally, simulations are used to illustrate the characteristics of the method and experimental data are briefly presented.
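A minimal sketch of the kind of particle-filter initialization described here, assuming a 2-D agent, a single known anchor, and made-up noise levels (this is an illustration of the principle, not the authors' implementation):

import numpy as np

rng = np.random.default_rng(0)
N = 2000
particles = rng.uniform(-20, 20, size=(N, 2))   # diffuse prior over position
weights = np.full(N, 1.0 / N)
anchor = np.array([5.0, -3.0])                  # hypothetical anchor position
sigma_r = 0.3                                   # assumed range noise std (m)

def pf_step(particles, weights, odom_dxy, measured_range):
    # Propagate with the dead-reckoned displacement plus process noise.
    particles = particles + odom_dxy + rng.normal(0, 0.05, particles.shape)
    # Weight each particle by the range-measurement likelihood.
    pred = np.linalg.norm(particles - anchor, axis=1)
    weights = weights * np.exp(-0.5 * ((measured_range - pred) / sigma_r) ** 2)
    weights /= weights.sum()
    # Systematic resampling keeps the particle set healthy.
    idx = np.searchsorted(np.cumsum(weights),
                          (rng.random() + np.arange(N)) / N)
    return particles[idx], np.full(N, 1.0 / N)

particles, weights = pf_step(particles, weights, np.array([0.1, 0.0]), 6.0)
# Once the cloud collapses to a single mode, its mean and covariance can
# seed a covariance-based filter, which is the role of the initializer here.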
|
|
15:30-15:45, Paper MoCT10.2 | |
>Multiple Vehicle Cooperative Localization under Random Finite Set Framework |
Zhang, Feihu | TU München |
Staehle, Hauke | Tech. Univ. Munich |
Chen, Guang | Tech. Univ. of Munich |
Buckl, Christian | fortiss |
Knoll, Alois C. | TU Munich |
Keywords: Localization, Sensor Fusion, Intelligent Transportation Systems
Abstract: This paper presents a new multiple-vehicle cooperative localization approach based on Random Finite Set (RFS) theory. Assuming the vehicles are equipped with proprioceptive and exteroceptive sensors for localization, a solution based on RFS statistics is proposed that considers the behavior of the whole group instead of each vehicle individually. For this, we rely on Probability Hypothesis Density (PHD) filtering. Compared to other methods, our approach presents a recursive filtering algorithm that provides dynamic estimation of multiple vehicle states. The proposed method addresses current challenges in the multiple-vehicle localization domain such as limited communication bandwidth, data association uncertainty and the over-convergence problem. A comparative study based on simulations demonstrates the reliability and feasibility of the proposed approach in large-scale environments.
|
|
15:45-16:00, Paper MoCT10.3 | |
>Decentralized Multi-Robot Cooperative Localization Using Covariance Intersection |
Carrillo-Arce, Luis C. | Tecnologico de Monterrey |
Nerurkar, Esha | Univ. of Minnesota |
Gordillo, José-Luis | Tecnológico de Monterrey |
Roumeliotis, Stergios | Univ. of Minnesota |
Keywords: Localization
Abstract: In this paper, we present a Covariance Intersection (CI)-based algorithm for reducing the processing and communication complexity of multi-robot Cooperative Localization (CL). Specifically, for a team of N robots, our proposed approximate CI-based CL approach has processing and communication complexity that is only linear, O(N), in the number of robots. Moreover, and in contrast to alternative approximate methods, our approach is provably consistent, can handle asynchronous communication, and does not place any restriction on the robots’ motion. We test the performance of our proposed approach both in simulation and experimentally, and show that it outperforms the existing linear-complexity split-CI-based CL method.
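For reference, a minimal sketch of the Covariance Intersection fusion rule that this family of methods builds on (the distributed CL bookkeeping around it is omitted; the scalar-weight search below is one common but assumed choice):

import numpy as np

# CI fuses two estimates (xa, Pa) and (xb, Pb) with unknown cross-correlation:
#   P_ci^{-1} = w Pa^{-1} + (1 - w) Pb^{-1}
#   x_ci      = P_ci (w Pa^{-1} xa + (1 - w) Pb^{-1} xb)
def covariance_intersection(xa, Pa, xb, Pb, w):
    Pa_inv, Pb_inv = np.linalg.inv(Pa), np.linalg.inv(Pb)
    P_ci = np.linalg.inv(w * Pa_inv + (1 - w) * Pb_inv)
    x_ci = P_ci @ (w * Pa_inv @ xa + (1 - w) * Pb_inv @ xb)
    return x_ci, P_ci

def fuse_ci(xa, Pa, xb, Pb):
    # Pick w by a simple scalar search minimizing the trace of the fused covariance.
    ws = np.linspace(1e-3, 1 - 1e-3, 99)
    best = min(ws, key=lambda w: np.trace(np.linalg.inv(
        w * np.linalg.inv(Pa) + (1 - w) * np.linalg.inv(Pb))))
    return covariance_intersection(xa, Pa, xb, Pb, best)

xa, Pa = np.array([1.0, 2.0]), np.diag([0.5, 2.0])
xb, Pb = np.array([1.2, 1.8]), np.diag([2.0, 0.4])
print(fuse_ci(xa, Pa, xb, Pb))
# Because CI never assumes independence, the fused estimate stays consistent,
# which is the property that keeps an O(N) approximate CL scheme consistent.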
|
|
16:00-16:15, Paper MoCT10.4 | |
>A Communication-Bandwidth-Aware Hybrid Estimation Framework for Multi-Robot Cooperative Localization |
Nerurkar, Esha | Univ. of Minnesota |
Roumeliotis, Stergios | Univ. of Minnesota |
Keywords: Localization, Networked Robots, Sensor Networks
Abstract: This paper presents hybrid Minimum Mean Squared Error-based estimators for wireless sensor networks with time-varying communication-bandwidth constraints, focusing on the particular application of multi-robot Cooperative Localization. When sensor nodes (e.g., robots) communicate only a quantized version of their analog measurements to the team, our proposed hybrid filters enable robots to process all available information, i.e., local analog measurements (recorded by its own sensors) as well as remote quantized measurements (collected and communicated by other sensors). Moreover, these filters are resource-aware and can utilize additional bandwidth, whenever available, to maximize estimation accuracy. Specifically, in this paper, we present two filters, the Hybrid Batch-Quantized Kalman filter (H-BQKF) and the Hybrid Iteratively-Quantized Kalman filter (H-IQKF), that can process local analog measurements along with remote measurements quantized to any number of bits. We test our proposed filters in simulations and experimentally, and demonstrate that they achieve performance comparable to the standard Kalman filter.
|
|
16:15-16:30, Paper MoCT10.5 | |
>A Visibility Information for Multi-Robots Localization |
Guyonneau, Rémy | Univ. d'Angers, Lab. d'Ingénierie des Systèmes Automatisés |
Lagrange, Sebastien | Univ. of Angers |
Hardouin, Laurent | Univ. of Angers |
Keywords: Localization, Wheeled Robots, Cooperating Robots
Abstract: This paper proposes a set-membership method based on interval analysis to solve the pose tracking problem for a team of robots. The originality of this approach is to consider only weak sensor data: the visibility between two robots. The paper demonstrates that with this poor information, without using bearing or range sensors, localization is possible. By using this boolean information (two robots see each other or not), the objective is to compensate for the odometry errors and to localize all the robots of the team in an indoor environment in a guaranteed way. The environment is assumed to be defined by two sets, an inner and an outer characterization. This paper mainly presents the visibility theory used to develop the method. Simulation results allow the efficiency and the limits of the proposed algorithm to be evaluated.
|
|
16:30-16:45, Paper MoCT10.6 | |
>Matching of Ground-Based LiDAR and Aerial Image Data for Mobile Robot Localization in Densely Forested Environments |
Hussein, Marwan | Massachusetts Inst. of Tech. |
Renner, Matthew | ERDC |
Watanabe, Masaaki | IHI Corp. |
Iagnemma, Karl | MIT |
Keywords: Localization, Mapping, Visual Navigation
Abstract: We present a vision-based method for the autonomous geolocation of ground vehicles and unmanned mobile robots in forested environments. The method provides an estimate of the global horizontal position of a vehicle strictly based on finding a geometric match between a map of observed tree stems, scanned in 3D by sensors onboard the vehicle, and another stem map generated from the structure of tree crowns observed in overhead imagery of the forest canopy. This method can be used in real time as a complement to the Global Positioning System (GPS) in areas where signal coverage is inadequate due to attenuation by the forest canopy, or due to intentionally denied access. The method presented in this paper has two key properties that are significant: i) it does not require a priori knowledge of the area surrounding the robot; ii) it uses the geometry of detected tree stems as the only input to determine horizontal geoposition.
|
|
MoCT11 |
Room801 |
Multilegged Robot |
Regular Session |
Chair: Ozcan, Onur | Harvard Univ. |
Co-Chair: Hosoda, Koh | Osaka Univ. |
|
15:15-15:30, Paper MoCT11.1 | |
>Design and Feedback Control of a Biologically-Inspired Miniature Quadruped |
Ozcan, Onur | Harvard Univ. |
Baisch, Andrew | Harvard Univ. |
Wood, Robert | Harvard Univ. |
Keywords: Biologically-Inspired Robots, Legged Robots
Abstract: Insect-scale legged robots have the potential to locomote on rough terrain, crawl through confined spaces, and scale vertical and inverted surfaces. However, small scale implies that such robots are unable to carry large payloads. Limited payload capacity forces miniature robots to utilize simple control methods that can be implemented on a simple on-board microprocessor. In this study, the design of a new version of the biologically-inspired Harvard Ambulatory MicroRobot (HAMR) is presented. In order to find the most suitable control inputs for HAMR, maneuverability experiments are conducted for several drive parameters. Ideal input candidates for orientation and lateral velocity control are identified as a result of the maneuverability experiments. Using these control inputs, two simple feedback controllers are implemented to control the orientation and the lateral velocity of the robot. The controllers are used to force the robot to track trajectories with a minimum turning radius of 55 mm and a maximum lateral to normal velocity ratio of 0.8. Due to their simplicity, the controllers presented in this work are ideal for implementation with on-board computation for future HAMR prototypes.
|
|
15:30-15:45, Paper MoCT11.2 | |
>Spine Dynamics As a Computational Resource in Spine-Driven Quadruped Locomotion |
Zhao, Qian | AI Lab. Univ. of zurich |
Nakajima, Kohei | Univ. of Zurich |
Sumioka, Hidenobu | ATR |
Hauser, Helmut | Univ. of Zurich |
Pfeifer, Rolf | Univ. of Zurich |
Keywords: Biologically-Inspired Robots
Abstract: Recent results suggest that compliance and nonlinearity in the physical bodies of soft robots may not be a disadvantage with respect to control, but rather an advantage. In the context of morphological computation, such complex structures can be seen as potential computational resources. In this study, we implement and exploit this viewpoint in a spine-driven quadruped robot called Kitty by using its flexible spine as a computational resource. The spine is an actuated multi-joint structure consisting of a sequence of soft silicone blocks. Its complex dynamics are captured by a set of force sensors and used to construct a closed loop that drives the motor commands. We use simple static, linear readout weights to combine the sensor signals and generate multiple gait patterns (bounding, trotting, turning behavior). In addition, we demonstrate the robustness of the setup by applying strong external perturbations in the form of additional weights. The system is able to recover the gait patterns encoded in the linear weights after the perturbation has vanished.
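A hedged sketch of the static linear readout idea (hypothetical data and dimensions; the Kitty robot's actual sensing and training pipeline is not reproduced here):

import numpy as np

# The "spine as computational resource" view reduces to learning static
# linear readout weights W mapping force-sensor traces S to motor commands M:
#   M ≈ S @ W,   W = argmin ||S W - M||^2 + alpha ||W||^2   (ridge regression)
def train_readout(S, M, alpha=1e-3):
    n = S.shape[1]
    return np.linalg.solve(S.T @ S + alpha * np.eye(n), S.T @ M)

T, n_sensors, n_motors = 5000, 12, 8            # assumed dimensions
S = np.random.randn(T, n_sensors)               # placeholder spine sensor data
M = np.random.randn(T, n_motors)                # placeholder target gait signals
W = train_readout(S, M)
motor_commands = S @ W                          # closed-loop drive signals
# Different weight sets W encode different gaits (bounding, trotting, ...),
# so switching behaviors amounts to switching readout weights.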
|
|
15:45-16:00, Paper MoCT11.3 | |
>Pneupard: A Biomimetic Musculoskeletal Approach for a Feline-Inspired Quadruped Robot |
Rosendo, Andre | Osaka Univ. |
Nakatsu, Shogo | Osaka Univ. |
Narioka, Kenichi | Osaka Univ. |
Hosoda, Koh | Osaka Univ. |
Keywords: Biologically-Inspired Robots, Biomimetics, Legged Robots
Abstract: Feline locomotion combines great acrobatic proficiency, unparalleled balance and higher accelerations than other animals achieve. Capable of accelerating from 0 to 100 km/h in three seconds, the cheetah (Acinonyx jubatus) remains a mystery that intrigues scientists. Aiming for a better understanding of the source of such high speeds, we develop a biomimetic platform in which musculoskeletal parameters (range of motion and moment arms) from the biological system can be evaluated with air muscles within a lightweight robotic structure. We performed experiments validating the muscular structure during a treadmill walk, successfully reproducing animal locomotion while adopting an EMG-based control method.
|
|
16:00-16:15, Paper MoCT11.4 | |
> >Stability and Performance of the Compliance Controller of the Quadruped Robot HyQ |
Boaventura, Thiago | ETH Zurich |
Medrano-Cerda, Gustavo | Italian Inst. of Tech. |
Semini, Claudio | Istituto Italiano di Tecnologia |
Buchli, Jonas | ETH Zurich |
Caldwell, Darwin G. | Istituto Italiano di Tecnologia |
Attachments: Video Attachment
Keywords: Compliance and Impedance Control, Legged Robots, Hydraulic/Pneumatic Actuators
Abstract: A legged robot has to deal with environmental contacts every time it takes a step. To properly handle these interactions, it is desirable to be able to set the foot compliance. For an actively-compliant legged robot, in order to ensure a stable contact with the environment, the robot leg has to be passive at the contact point. In this work, we assess some passivity and stability issues of the actively-compliant leg of the quadruped robot HyQ, which employs a high-performance cascade compliance controller. We demonstrate that both the nested torque loop performance and the actuator bandwidth have a strong influence on the range of virtual impedances that can be passively rendered by the robot leg. Based on the stability analyses and experimental results, we propose a procedure for designing cascade compliance controllers. Furthermore, we experimentally demonstrate that the HyQ’s actively-compliant leg is able to reproduce the compliant behavior presented by an identical but passively-compliant version of the same leg.
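For orientation, a minimal sketch of the virtual impedance being rendered at the foot (illustrative only; HyQ's cascade torque/compliance controller and its passivity constraints are more involved, and all gains below are hypothetical):

import numpy as np

# Desired foot force from a virtual Cartesian spring-damper, mapped to joint
# torques through the leg Jacobian transpose:
#   F = K (x_des - x) + D (xd_des - xd),   tau = J(q)^T F
def impedance_torques(J, x, xd, x_des, xd_des, K, D):
    F = K @ (x_des - x) + D @ (xd_des - xd)
    return J.T @ F

K = np.diag([5000.0, 5000.0, 8000.0])   # N/m, hypothetical virtual stiffness
D = np.diag([100.0, 100.0, 150.0])      # Ns/m, hypothetical virtual damping
# The paper's point: which (K, D) can be rendered while the leg stays passive
# is bounded by the inner torque-loop performance and the actuator bandwidth.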
|
|
16:15-16:30, Paper MoCT11.5 | |
> >A Lightweight Modular 12-DOF Print-And-Fold Hexapod |
Soltero, Daniel E. | Massachusetts Inst. of Tech. |
Julian, Brian | MIT |
Onal, Cagdas Denizel | WPI |
Rus, Daniela | MIT |
Attachments: Video Attachment
Keywords: Legged Robots, Autonomous Agents, Cellular and Modular Robots
Abstract: This paper presents the design, fabrication and operation of a hexapod fabricated using a combination of printing and folding flat sheets of polyester. The polyester sheets are cut and engraved with crease patterns, which are then manually folded to create 3D functional modules, inspired by the Japanese art of Origami. These modules, when connected, form a hexapod with two degrees of freedom per leg. All custom mechanical parts are manufactured in a planar fashion using a laser cutter. We created this print-and-fold hexapod as a miniature version of a commercially available platform, to which we compare several metrics, such as weight, walking speed, and cost of transportation. Our print-and-fold hexapod has a mass of 195 g, can walk at speeds of up to 38.1 cm/sec (two body lengths per second), and can be manufactured and assembled from scratch by a single person in approximately seven hours. Experimental results of gait control and trajectory tracking are provided.
|
|
16:30-16:45, Paper MoCT11.6 | |
> >Robustness of Centipede-Inspired Millirobot Locomotion to Leg Failures |
Hoffman, Katie | Harvard Univ. |
Wood, Robert | Harvard Univ. |
Attachments: Video Attachment
Keywords: Biologically-Inspired Robots, Legged Robots
Abstract: This paper explores the use of mechanical redundancy to enhance robustness to leg failures in miniature ambulatory robots. Graceful degradation, rather than immediate catastrophic failure, is exhibited experimentally in 10-20 leg centipede-inspired millirobots as legs are removed without altering the gait, using speed and radius of curvature as performance metrics. Static stability retention is examined as a function of the nominal number of legs, and for cases where static stability is lost, two gait options are tested. The effect of location of missing legs on performance is also described.
|
|
MoCT12 |
Room610 |
Upper Limb Rehabilitation Systems |
Regular Session |
Chair: Agrawal, Sunil | Columbia Univ. |
Co-Chair: Yoshikawa, Masahiro | Nara Inst. of Science and Tech. (NAIST) |
|
15:15-15:30, Paper MoCT12.1 | |
>Stiffness Control of a Pneumatic Rehabilitation Robot for Exercise Therapy with Multiple Stages |
Tsuji, Toshiaki | Saitama Univ. |
Momiki, Chinami | Saitama Univ. |
Sakaino, Sho | Saitama Univ. |
Keywords: Rehabilitation Robotics, Human-Robot Interaction, Haptics and Haptic Interfaces
Abstract: In exercise therapy, the training program will differ depending on the degree of disability. In order to gradually transition from passive exercise to exercises with more voluntary movements, it is important to reduce the assistance provided by the therapist in stages. Since stiffness control is the key factor of assistance adjustment in robotic movement rehabilitation, this paper focuses on stiffness control as a tool to adjust the assistance in stages. It is necessary to set a proper stiffness ellipse to perform assistance with directivity, especially in the case of active-assistive exercise. The performance of the proposed stiffness control for exercise therapy is validated through experimental results.
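A hedged sketch of a directional stiffness ellipse for staged assistance (an assumed standard formulation, not the authors' controller; gains and poses are placeholders):

import numpy as np

# Cartesian stiffness matrix built from major/minor stiffness values and the
# desired assistance direction theta:
#   K = R(theta) diag(k_major, k_minor) R(theta)^T,   F = K (x_ref - x)
def stiffness_ellipse(k_major, k_minor, theta):
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return R @ np.diag([k_major, k_minor]) @ R.T

K = stiffness_ellipse(200.0, 20.0, np.deg2rad(30))   # stiff along the task direction
x, x_ref = np.array([0.10, 0.02]), np.array([0.12, 0.02])
assist_force = K @ (x_ref - x)
print(assist_force)
# Reducing k_major/k_minor in stages shifts the exercise from passive
# (robot-driven) toward active (patient-driven) therapy.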
|
|
15:30-15:45, Paper MoCT12.2 | |
>Ultrasound Imaging As a Human-Machine Interface in a Realistic Scenario |
Castellini, Claudio | DLR - German Aerospace Res. Center |
Sierra González, David | DLR |
Keywords: Rehabilitation Robotics, Learning and Adaptive Systems, Cognitive Human-Robot Interaction
Abstract: Medical ultrasound imaging is a widespread high-resolution (both spatial and temporal) method to gather live images of the interior of the human body. Its potential as a human-machine interface for the disabled --- amputees in particular --- is being explored in the rehabilitation robotics community. Following up on the recent discovery that first-order spatial features of ultrasound images of the human forearm are linearly related to the hand configuration, we hereby push the approach to a realistic scenario. We show that an extremely simple calibration procedure can be used to obtain a linear regression system that effectively predicts the forces required of a human subject at the fingertips, using live ultrasound images of the forearm. In particular, the system can be trained on minimum and maximum forces only, thereby dramatically shortening the calibration phase, and it will generalise to intermediate force values. This phenomenon is uniform across the 5 intact subjects whom we examined in a controlled experiment. Moreover, it is not necessary to use any force sensor, as learning by imitation, namely using a visual stimulus, yields similar results. This result is particularly useful in the case of amputees, who normally cannot perform graded-force tasks as proprioception may have been lost for decades. Applications of this system range from advanced prosthetics to phantom pain therapy to smart teleoperation.
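A toy sketch of the two-point calibration idea, assuming synthetic image features that vary linearly with force (placeholder data only, not the authors' feature extraction):

import numpy as np

rng = np.random.default_rng(0)
n_feat = 30
w_true = rng.normal(size=n_feat)

def features(force):
    # Pretend first-order image features vary linearly with fingertip force.
    return force * w_true + rng.normal(scale=0.01, size=n_feat)

# Calibrate on minimum (rest) and maximum voluntary force samples only.
X = np.array([features(0.0) for _ in range(20)] +
             [features(1.0) for _ in range(20)])
y = np.array([0.0] * 20 + [1.0] * 20)
X1 = np.hstack([X, np.ones((40, 1))])          # add a bias term
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

# The linear map generalizes to intermediate force levels.
x_half = np.append(features(0.5), 1.0)
print(x_half @ coef)                           # close to 0.5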
|
|
15:45-16:00, Paper MoCT12.3 | |
> >Trans-Radial Prosthesis with Three Opposed Fingers |
Yoshikawa, Masahiro | Nara Inst. of Science and Tech. (NAIST) |
Taguchi, Yuya | Nara Inst. of Science and Tech. (NAIST) |
Sakamoto, Shin | Keio Univ. |
Yamanaka, Shunji | the Univ. of Tokyo |
Matsumoto, Yoshio | AIST |
Ogasawara, Tsukasa | Nara Inst. of Science and Tech. |
Kawashima, Noritaka | Res. Inst., National Rehabilitation Center for Persons with Disabilities |
Attachments: Video Attachment
Keywords: Medical Robots and Systems, Medical Systems, Healthcare, and Assisted Living, Rehabilitation Robotics
Abstract: There are body-powered hooks and myoelectric prosthetic hands that trans-radial amputees can use for work. Though body-powered hooks offer good workability for complex operations, the design of the hook is unappealing and the harness is cumbersome. The myoelectric prosthetic hand has a natural appearance similar to the human hand and intuitive operability using a myoelectric control system. However, it is expensive and heavy. Because of these problems associated with prostheses for work, many amputees use cosmetic prostheses. In this paper, we report a lightweight, low-cost electric trans-radial prosthesis with three opposed fingers. A simple mechanism that controls the fingers with a linear actuator contributes to its good workability, light weight, and low cost. An operation system using an inexpensive distance sensor allows intuitive operability equivalent to the myoelectric control system. A socket makes the prosthesis easily removable. The total weight of the hand and socket is 300 g, and both can be produced with a 3D printer. An evaluation using the Southampton Hand Assessment Procedure (SHAP) demonstrated that an amputee was able to manipulate abstract objects requiring six types of grasps with the developed prosthesis.
|
|
16:00-16:15, Paper MoCT12.4 | |
> >Augmenting Neuroprosthetic Hand Control through Evaluation of a Bioacoustic Interface |
Mace, Michael | Imperial Coll. London |
Subbiah, Samir | Imperial Coll. London |
Naeem, Ali Azzam | Imperial Coll. London |
Vaidyanathan, Ravi | Imperial Coll. London |
Attachments: Video Attachment
Keywords: Rehabilitation Robotics, Learning and Adaptive Systems, Human-Robot Interaction
Abstract: The majority of neuroprosthetic interfaces, linking amputee to prosthetic hand, utilise proportional control through electromyography (EMG). The clinical translation of these interfaces can be attributed to their relative simplicity, usually requiring only two EMG electrodes to be placed on the flexor and extensor of the forearm. This bi-electrode setup enables opening and closing of the hand grasp, with an additional manual input used to cycle through the various grip patterns. In recent literature, the main focus has been on higher degree-of-freedom control, leading to more complicated interfaces, which can be considered the main barrier preventing their clinical utility. As such, new methods for grip pattern switching have not been explored, and this fieldable strategy has lacked serious attention. In this work, a novel input augmenting neuroprosthetic hand control is proposed. This interface is based on bioacoustic signals generated through prescribed tongue movements. We demonstrate that such an interface can provide comparable performance to existing proportional-based systems without requiring any additional movements of the upper extremities.
|
|
16:15-16:30, Paper MoCT12.5 | |
>A Virtual Reality System for Robotic-Assisted Orthopedic Rehabilitation of Forearm and Elbow Fractures |
Padilla Castañeda, Miguel Angel | Scuola Superiore Sant' Anna |
Sotgiu, Edoardo | Scuola Superiore S.Anna |
Frisoli, Antonio | Scuola Superiore Sant'Anna |
Bergamasco, Massimo | Scuola Superiore S.Anna |
Orsini, Piero | Azienda Usl5 Pisa |
Martiradonna, Alessandro | Azienda Usl5 Pisa |
Olivieri, Samuele | Azienda Usl5 Pisa |
Mazzinghi, Gloria | Azienda Usl5 Pisa |
Laddaga, Cristina | Azienda Usl5 Pisa |
Keywords: Rehabilitation Robotics, Human-Robot Interaction, Virtual Reality and Interfaces
Abstract: The combination of robotics and virtual reality seems promising for rehabilitation of the upper limb, promoting intensive training on specific deficits with motor control and multimodal feedback in engaging game-like scenarios. In this paper we present the integration of a robotic system and virtual reality applications for the orthopedic rehabilitation of the arm, in terms of strengthening training and motion recovery. The system simulates the upper limb of the patient and their actions, allows exhaustive exercising and motor control, and provides visuomotor and haptic feedback as well as trajectory positioning guidance. The system allows specific tasks to be assigned within the virtual environments, helps evaluate the motility condition of the patient, personalizes the difficulty level of the therapy, and provides kinesiologic measures of the patient's evolution. We present the results of a preliminary clinical assessment carried out on three patients in order to assess the usability and acceptance of the system.
|
|
16:30-16:45, Paper MoCT12.6 | |
>Towards a Soft Pneumatic Glove for Hand Rehabilitation |
Polygerinos, Panagiotis | Harvard Univ. |
Lyne, Stacey | Harvard Univ. |
Wang, Zheng | Harvard |
Nicolini, Luis Fernando | Harvard Univ. |
Mosadegh, Bobak | Harvard Univ. |
Whitesides, George | Harvard Univ. |
Walsh, Conor James | Harvard Univ. |
Keywords: Rehabilitation Robotics
Abstract: This paper presents preliminary results for the design, development and evaluation of a hand rehabilitation glove fabricated using soft robotic technology. Soft actuators comprised of elastomeric materials with integrated channels that function as pneumatic networks (PneuNets), are designed and geometrically analyzed to produce bending motions that can safely conform with the human finger motion. Bending curvature and force response of these actuators are investigated using geometrical analysis and a finite element model (FEM) prior to fabrication. The fabrication procedure of the chosen actuator is described followed by a series of experiments that mechanically characterize the actuators. The experimental data is compared to results obtained from FEM simulations showing good agreement. Finally, an open-palm glove design and the integration of the actuators to it are described, followed by a qualitative evaluation study.
|
|
MoCT13 |
Room802 |
Micro/Nano Robotics I |
Regular Session |
Chair: Arai, Tatsuo | Osaka Univ. |
Co-Chair: Tang, Hui | Univ. of Macau |
|
15:15-15:30, Paper MoCT13.1 | |
>Pop-Up Assembly of a Quadrupedal Ambulatory MicroRobot |
Baisch, Andrew | Harvard Univ. |
Wood, Robert | Harvard Univ. |
Keywords: Micro/Nano Robots, Legged Robots, Biologically-Inspired Robots
Abstract: Here we present the design of a 1.27g quadrupedal microrobot manufactured using “Pop-up book MEMS”; the first such device capable of locomotion. Implementing popup assembly techniques enables manufacturing of the robot’s exoskeleton and drivetrain transmissions from a single 23-layer laminate. Its demonstrated capabilities include payload capacity greater than 1.35g, maneuverability on flat terrain, and high-speed locomotion up to 30cm/s. Additionally, locomotion performance is compared to a hand-assembled quadruped with similar design parameters. The results demonstrate that the pop-up manufacturing methodology enables more complex mechanisms while simultaneously increasing performance over hand-assembled alternatives.
|
|
15:30-15:45, Paper MoCT13.2 | |
>Development of Microhand Utilizing Singularity of Parallel Mechanism |
Ejima, Toru | Osaka Univ. |
Ohara, Kenichi | Meijo Univ. |
Kojima, Masaru | Osaka Univ. |
Horade, Mitsuhiro | Osaka Univ. |
Tanikawa, Tamio | National Inst. of AIST |
Mae, Yasushi | Osaka Univ. |
Arai, Tatsuo | Osaka Univ. |
Keywords: Micro-manipulation, Micro/Nano Robots
Abstract: In the fields of medicine and biology, it is essential to realize fine manipulation. Therefore, micromanipulation techniques and micromanipulators such as microgrippers and optical tweezers have been developed. We have developed a two-fingered microhand that uses a parallel mechanism to realize precise and stable micromanipulation. However, the previous microhand has problems with workspace and vibration. In this paper, we report the development of a new microhand that solves these problems. The characteristic of the new microhand is an enlarged workspace obtained by utilizing the singularity of the parallel mechanism. Inverse kinematics and structural analysis are used to analyze the workspace, and we show that the results of the two analyses match. Vibration analysis simulates the transportation and grasping tasks used in manipulation. The vibration analysis results show that the new microhand has the potential to reduce vibration.
|
|
15:45-16:00, Paper MoCT13.3 | |
>A Novel Flexure-Based Dual-Arm Robotic System for High-Throughput Biomanipulations on Micro-Fluidic Chip |
Tang, Hui | Univ. of Macau |
Li, Yangmin | Univ. of Macau |
Xiao, Xiao | Univ. of Macau |
Keywords: Micro/Nano Robots, Micro-manipulation, Nano manipulation
Abstract: In recent years, robotic bio-manipulation has emerged as a hot research topic in micro/nano technology. In these applications, biological cell microinjection is a focus since it is a critical process for further biological research such as genetic engineering and pharmacology. This study aims to develop a novel robotic biomanipulation system, combined with micro-fluidic chip technology, to improve cell manipulation stability and throughput. Two novel flexure-based large-workspace micromanipulators with a modified differential lever displacement amplifier (MDLDA) are presented in this paper. After a series of optimal designs and mechanism modeling, the mechanism performance is evaluated by the FEA method. Finally, the proposed micromanipulators are fabricated and visual-servo controlled to perform a practical zebrafish embryo injection task. In this work, two piezoelectric (PZT) actuators P-216.80 (open-loop travel of 120 um) and one PZT actuator P-840.20 (open-loop travel of 30 um) are utilized in the compliant mechanisms. The experimental results indicate that the displacement amplification ratios can reach up to 30.6 and 17.6, so the maximum output displacements reach around 3.1273 mm and 0.528 mm, and the rotation angle of the left micromanipulator reaches around 26.5. Both the theoretical derivation and the experimental results verify the performance of the developed system.
|
|
16:00-16:15, Paper MoCT13.4 | |
>Microstructuring Thermoresponsive Gel Using Hysteresis towards 3D Cell Assembly |
Takeuchi, Masaru | Nagoya Univ. |
Nakajima, Masahiro | Nagoya Univ. |
Tajima, Hirotaka | Nagoya Univ. |
Fukuda, Toshio | Meijo Univ. |
Keywords: Micro-manipulation, Micro/Nano Robots, Medical Robots and Systems
Abstract: In this paper, we conducted the assembly of microstructures made of a thermoresponsive gel using the hysteresis character of the thermoresponsive polymer solution. This method can be used for 3-dimensional cell assembly by embedding cells in the thermoresponsive gel structures. Gel sheets can be fabricated by microheaters on a substrate and maintained in the gel condition by the hysteresis character. The microstructures can be formed by assembling the gel sheets using a probe device. The generation of the thermoresponsive gel was conducted using microheaters made of ITO on a substrate, and the generated gel sheets were manipulated by a micromanipulator. The fabrication of gel sheets was achieved using the hysteresis character, and the fabricated gel sheets were picked and placed by the probe. The positioning of the gel blocks can be precisely controlled by the micromanipulator. The results indicate that the proposed method has great potential to achieve 3D cell assembly without imposing large stress on cells during assembly and cell culture.
|
|
16:15-16:30, Paper MoCT13.5 | |
>Robust Laser Beam Tracking Control Using Micro/Nano Dual-Stage Manipulators |
Amari, Nabil | Ec. Nationale Supérieure d'Ingénieurs de Bourges |
Folio, David | ENSI de Bourges; Groupe INSA partenaires |
Ferreira, Antoine | Ec. Nationale Supérieure d'Ingénieurs de Bourges |
Keywords: Micro/Nano Robots, Micro-manipulation, Manipulation Planning and Control
Abstract: This paper presents a study of the control problem of a laser beam illuminating and focusing on a microobject subjected to dynamic disturbances, using light intensity only for feedback. The main idea is to guide and track the beam with a hybrid micro/nanomanipulator driven by a control signal generated by processing the beam intensity sensed by a four-quadrant photodiode. Since the pointing location of the beam depends on real-time control issues related to temperature variation, vibrations, output intensity control, and collimation of the light output, the 2-D beam location is estimated in real time from the photodiode sensor measurements. We use the Kalman filter (KF) algorithm to estimate the state of the linear system, which is necessary for implementing the proposed track-following control approach. To this end, a robust master/slave control strategy for the dual-stage micro/nanomanipulator is presented, based on a sensitivity function decoupling design methodology. The decoupled feedback controller is synthesized and implemented on a 6-DOF micro/nanomanipulator capable of nanometer resolution over a range of several hundred micrometers. A case study relevant to tracking a laser beam for imaging purposes is presented.
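A minimal constant-velocity Kalman filter sketch for tracking a 2-D beam spot from photodiode-derived position readings (the dimensions, sample time and noise levels are assumptions, not the paper's identified model):

import numpy as np

dt = 1e-3
F = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])     # constant-velocity model
H = np.hstack([np.eye(2), np.zeros((2, 2))])      # only position is measured
Q = 1e-6 * np.eye(4)                              # assumed process noise
R = 1e-4 * np.eye(2)                              # assumed measurement noise

def kf_step(x, P, z):
    # Predict.
    x, P = F @ x, F @ P @ F.T + Q
    # Update with the photodiode-derived beam position z.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
x, P = kf_step(x, P, np.array([0.01, -0.02]))
print(x)   # estimated beam position and velocity after one measurement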
|
|
16:30-16:45, Paper MoCT13.6 | |
>Towards Quorum Sensing Based Distributed Control for Network of Mobile Sensors |
Geuther, Brian | Virginia Tech. |
Behkam, Bahareh | Virginia Tech. |
Keywords: Micro/Nano Robots, Biologically-Inspired Robots, Sensor Networks
Abstract: Control and communication for a distributed network of robotic agents is a difficult problem to solve at the microscale. In nature, bacteria utilize chemical signaling to execute controlled movement, communication, and collaborative task completion. Chemotactic response (i.e., the biased random walk of bacteria towards a chemo-attractant source) enables effective sensing and creates a biased distribution of bacteria in a field. Quorum sensing allows a robust collective response to be achieved at specific bacterial number densities. In this work, we present a computational model for bio-inspired sensing, communication, and control that is based on the combination of chemotaxis and quorum sensing. We have computationally demonstrated that these bio-inspired strategies can be implemented in a synthetic mobile sensor network. The robustness and response time of such systems are also examined.
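A heavily simplified sketch of the two ingredients combined here, run-and-tumble chemotaxis plus a density-threshold quorum check (all parameters and the Gaussian concentration field are assumptions, not the paper's model):

import numpy as np

rng = np.random.default_rng(1)
n_agents, steps, quorum_radius, quorum_count = 50, 500, 1.0, 8
source = np.array([5.0, 5.0])                     # chemoattractant source
pos = rng.uniform(0, 10, size=(n_agents, 2))
heading = rng.uniform(0, 2 * np.pi, n_agents)

def concentration(p):
    return np.exp(-0.1 * np.sum((p - source) ** 2, axis=1))

for _ in range(steps):
    c_before = concentration(pos)
    pos += 0.05 * np.c_[np.cos(heading), np.sin(heading)]   # "run"
    # "Tumble" (pick a new random heading) more often when conditions worsen.
    tumble_prob = np.where(concentration(pos) > c_before, 0.1, 0.6)
    tumble = rng.random(n_agents) < tumble_prob
    heading[tumble] = rng.uniform(0, 2 * np.pi, tumble.sum())

# Quorum sensing: a collective response fires where enough neighbours gather.
dists = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
neighbours = (dists < quorum_radius).sum(axis=1) - 1
print("agents sensing a quorum:", int((neighbours >= quorum_count).sum()))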
|
|
MoDT1 |
Room606 |
SLAM IV |
Regular Session |
Chair: Tsai, Chia-Hung Dylan | Osaka Univ. |
Co-Chair: Tanaka, Kanji | Fukui Univ. |
|
17:00-17:15, Paper MoDT1.1 | |
> >Undelayed 3D RO-SLAM Based on Gaussian-Mixture and Reduced Spherical Parametrization |
R.Fabresse, Felipe | Univ. of Seville |
Caballero, Fernando | Univ. of Seville |
Maza, Ivan | Univ. of Seville |
Ollero, Anibal | Univ. of Seville |
Attachments: Video Attachment
Keywords: Localization, SLAM
Abstract: This paper presents an undelayed range-only simultaneous localization and mapping (RO-SLAM) approach based on the Extended Kalman Filter. The approach is optimized for 3D scenarios, reducing the required computational load at two levels: first, by using a reduced spherical state-vector parametrization and, second, by proposing a new EKF update scheme. The paper proposes a state-vector parametrization based on a Gaussian mixture to cope with the multi-modal nature of range-only measurements, and a reduced spherical parametrization of the range sensor positions that shortens the state vector for a given number of hypotheses. The approach is first tested and discussed in simulation, followed by experimental results involving a real robot and radio-based range sensors.
|
|
17:15-17:30, Paper MoDT1.2 | |
>RGB-D Based Cognitive Map Building and Navigation |
Tian, Bo | Inst. for Infocomm Res. A*STAR |
Shim, Vui Ann | Inst. for Infocomm Res. |
Yuan, Miaolong | Inst. for Infocomm Res. |
Srinivasan, Chithra | Inst. for Infocomm Res. |
Tang, Huajin | Inst. for Infocomm Res. |
Li, Haizhou | Inst. for Infocomm Res. |
Keywords: Neurorobotics, Mapping, Navigation
Abstract: This paper describes a cognitive map building and navigation system using an RGB-D sensor for mobile robots. A brain-inspired simultaneous localization and mapping (SLAM) system, which requires raw odometry data and RGB-D information, is used to construct a spatial cognitive map of an office environment. The cognitive map contains a set of spatial coordinates that the robot has traveled. A global path is extracted from the built cognitive map and subsequently used by a local planner to instruct the robot to navigate. The global path is a subset of the path that builds up the cognitive map. This is different from other path planning mechanisms that construct a path based on a ground-truth map. Experimental results show that the employment of the RGB-D sensor significantly improves the mapping results.
|
|
17:30-17:45, Paper MoDT1.3 | |
>RGB-D Edge Detection and Edge-Based Registration |
Choi, Changhyun | Georgia Inst. of Tech. |
Trevor, Alexander J B | Georgia Inst. of Tech. |
Christensen, Henrik Iskov | Georgia Inst. of Tech. |
Keywords: Range Sensing, SLAM, Computer Vision
Abstract: We present a 3D edge detection approach for RGB-D point clouds and its application in point cloud registration. Our approach detects several types of edges, and makes use of both 3D shape information and photometric texture information. Edges are categorized as occluding edges, occluded edges, boundary edges, high-curvature edges, and RGB edges. We exploit the organized structure of the RGB-D image to efficiently detect edges, enabling near real-time performance. We present two applications of these edge features: edge-based pair-wise registration and a pose-graph SLAM approach based on this registration, which we compare to state-of-the-art methods. Experimental results demonstrate the performance of edge detection and edge-based registration both quantitatively and qualitatively.
|
|
17:45-18:00, Paper MoDT1.4 | |
>Navigability Analysis of Natural Terrains with Fuzzy Elevation Maps from Ground-Based 3D Range Scans |
Martinez, Jorge L. | Univ. of Malaga |
Mandow, Anthony | Univ. of Malaga |
Reina, Antonio J. | Univ. of Malaga |
Cantador, Tomás J. | Univ. of Málaga |
Morales, Jesús | Univ. of Málaga |
García-Cerezo, Alfonso | Univ. of Malaga |
Keywords: Field Robots, Motion and Path Planning, Range Sensing
Abstract: Mobile robot navigation through natural terrains is a challenging issue with applications such as planetary exploration or search and rescue. This paper proposes navigability assessment of natural terrains scanned from ground-based 3D laser rangefinders. A continuous model of the terrain is obtained as a fuzzy elevation map (FEM). Based on this model, the proposed solution incorporates terrain navigability both in terms of uncertainties of the 3D input data and slope of the fuzzy surface. Moreover, the paper discusses the application of this method for local path planning. For this purpose, the Bug algorithm has been adapted to compute local paths on the navigable region of the FEM. The method has been applied to actual 3D point clouds on two different experimental sites.
|
|
18:00-18:15, Paper MoDT1.5 | |
>PartSLAM: Unsupervised Part-Based Scene Modeling for Fast Succinct Map Matching |
Hanada, Shogo | Univ. of Fukui |
Tanaka, Kanji | Fukui Univ. |
Keywords: Localization, Recognition, Visual Navigation
Abstract: In this paper, we explore the challenging 1-to-N map matching problem, which exploits a compact description of map data to improve the scalability of the map matching techniques used by various robot vision tasks. We propose a first method explicitly aimed at fast succinct map matching, consisting only of map-matching subtasks. These include an offline map matching attempt to find a compact part-based scene model that effectively explains each map using fewer, larger parts, and an online map matching attempt to efficiently find correspondences between the part-based maps. Our part-based scene modeling approach is unsupervised and uses common pattern discovery (CPD) between the input and known reference maps. This enables a robot to learn a compact map model without human intervention. We also present a practical implementation that uses the state-of-the-art CPD technique of randomized visual phrases (RVP) with a compact bounding box (BB) based part descriptor, which consists of keypoint and descriptor BBs. The results of our challenging map-matching experiments, which use a publicly available radish dataset, show that the proposed approach achieves successful map matching with significant speedup and a map description that is tens of times more compact. Although this paper focuses on the standard 2D point-set map and the BB-based part representation, we believe our approach is sufficiently general to be applicable to a broad range of map formats, such as the 3D point cloud map, as well as to general bounding volumes and other compact part representations.
|
|
18:15-18:30, Paper MoDT1.6 | |
>Mapping UHF RFID Tags with a Mobile Robot Using a 3D Sensor Model |
Liu, Ran | Univ. of Tuebingen |
Koch, Artur | Univ. Tübingen |
Zell, Andreas | Univ. of Tübingen |
Keywords: Sensor Networks, Service Robots, Mapping
Abstract: Recently, researchers have shown growing interest in utilizing UHF Radio-Frequency Identification (RFID) technology for localizing tagged items with mobile robots in industrial scenarios. In this paper we present a novel three-dimensional (3D) probabilistic sensor model of RFID antennas in the context of mapping passive RFID tags with mobile robots. The proposed 3D sensor model characterizes both detection rates and received signal strength (RSS). Compared to 2D sensor-model based approaches, the 3D model achieves higher mapping accuracy for 2D position estimation. In particular, with this sensor model, we are able to localize the tags in 3D by integrating the measurements from a pair of RFID antennas mounted at different heights on the robot. Furthermore, by integrating negative information (i.e., non-detections), the 3D mapping accuracy can be improved. Additionally, we utilize KLD-sampling to reduce the number of particles for our specific application, so that our algorithm can be performed online. Indoor experiments with a Scitos G5 robot demonstrate the effectiveness of our approach. We also provide the datasets of this work for download.
|
|
MoDT2 |
Room607 |
Pose Estimation |
Regular Session |
Chair: Chaumette, Francois | INRIA Rennes-Bretagne Atlantique |
Co-Chair: Hashimoto, Minoru | Shinshu Univ. |
|
17:00-17:15, Paper MoDT2.1 | |
>A 4-Point Algorithm for Relative Pose Estimation of a Calibrated Camera with a Known Relative Rotation Angle |
Li, Bo | ETH Zurich |
Heng, Lionel | ETH Zurich |
Lee, Gim Hee | ETH Zurich |
Pollefeys, Marc | ETH Zurich |
Keywords: Computer Vision
Abstract: We propose an algorithm to estimate the relative camera pose using four feature correspondences and one relative rotation angle measurement. The algorithm can be used for relative pose estimation of a rigid body equipped with a camera and a relative rotation angle sensor, which can be either an odometer, an IMU or a GPS/INS system. This algorithm exploits the fact that the relative rotation angles of the camera and the relative rotation angle sensor are the same, since the camera and sensor are rigidly mounted to the same body. Therefore, knowledge of the extrinsic calibration between the camera and sensor is not required. We carry out a quantitative comparison of our algorithm with the well-known 5-point and 1-point algorithms, and show that our algorithm exhibits the highest level of accuracy.
|
|
17:15-17:30, Paper MoDT2.2 | |
>Uncalibrated Visual Compass from Omnidirectional Line Images with Application to Attitude MAV Estimation |
Scheggi, Stefano | Univ. of Siena |
Morbidi, Fabio | Inria, Grenoble - Rhone-Alpes |
Prattichizzo, Domenico | Univ. di Siena |
Keywords: Omnidirectional Vision, Visual Servoing, Unmanned Aerial Vehicles
Abstract: This paper presents a new algorithm based on previous results of the authors, for the estimation of the yaw angle of an omnidirectional camera/robot undergoing a 6-DoF rigid motion. Our real-time algorithm is uncalibrated, robust to noisy data, and it only relies on the projection of 3-D parallel lines as image features. Numerical and real-world experiments conducted with an eye-in-hand robot manipulator, which we used to simulate the 3-D motion of a Micro unmanned Aerial Vehicle (MAV), show the accuracy and reliability of our estimation algorithm.
|
|
17:30-17:45, Paper MoDT2.3 | |
>Efficient Decoupled Pose Estimation from a Set of Points |
Tahri, Omar | Inst. de Sistemas e Robótica, Univ. de Coimbra |
Araujo, Helder | Univ. of Coimbra |
Mezouar, Youcef | IFMA |
Chaumette, Francois | INRIA Rennes-Bretagne Atlantique |
Keywords: Computer Vision, Visual Servoing
Abstract: This paper deals with pose estimation using an iterative scheme. We show that, using adequate visual information, pose estimation can be performed by decoupling the estimation of translation and rotation. More precisely, we show that pose estimation can be achieved iteratively as a function of only three independent unknowns, the translation parameters. An invariant to rotational motion is used to estimate the camera position. Once the camera position is estimated, we show that the rotation can be estimated efficiently using a direct method. The proposed approach is compared against two classical methods from the literature. The results show that using our method, pose tracking in image sequences and the convergence rate for randomly generated poses are improved.
|
|
17:45-18:00, Paper MoDT2.4 | |
> >Humanoid Self-Correction of Posture Using a Mirror |
Hayashi, Naohiro | The Univ. of Electro-Communications |
Tomizawa, Tetsuo | Univ. of Electoronics and Communications |
Suehiro, Takashi | The Univ. of Electro-Communications |
Kudoh, Shunsuke | The Univ. of Electro-Communications |
Attachments: Video Attachment
Keywords: Computer Vision, Humanoid Robots
Abstract: Humanoid robots exhibit discrepancies between the postures simulated on their models and those of their actual bodies. To correct this problem, this paper proposes a method in which a robot observes its own posture in a mirror. This method does not need an additional time-consuming setup, such as calibration of external cameras. We attach a reference point to a known position on the robot body. By observing markers and the reference point in the same view, we can estimate the position of the markers accurately based on the geometric relationship between the mirror, an object, and its mirror reflection. We evaluate the proposed method through experiments in which the posture of an actual robot body is measured and corrected to attain the target posture.
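For reference, a minimal sketch of the mirror geometry such a method relies on: a point observed in a mirror appears at its reflection across the mirror plane, so knowing the plane (which the reference point on the robot body helps fix) lets observed reflections be mapped back to real positions. The plane and marker values below are hypothetical:

import numpy as np

# Reflection of point p across the plane n.x + d = 0 (n a unit normal).
def reflect(p, n, d):
    n = n / np.linalg.norm(n)
    return p - 2.0 * (np.dot(n, p) + d) * n

n, d = np.array([0.0, 1.0, 0.0]), -1.0    # mirror plane y = 1 (assumed)
marker = np.array([0.3, 0.6, 1.2])
print(reflect(marker, n, d))              # where the marker appears in the mirror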
|
|
18:00-18:15, Paper MoDT2.5 | |
> >Camera Localization Using Mutual Information-Based Multiplane Tracking |
Delabarre, Bertrand | IRISA, INRIA Rennes-Bretagne Atlantique |
Marchand, Eric | Univ. de Rennes 1, IRISA, INRIA Rennes |
Attachments: Video Attachment
Keywords: Visual Tracking
Abstract: This paper deals with dense visual tracking that is robust to scene perturbations, using 3D information to provide space-time coherency. The proposed method is based on a piecewise-planar scene visual tracking algorithm which aims to minimize an error between an observed image and a reference template by estimating the parameters of a rigid 3D transformation, taking into account the relative positions of the planes in the scene. The major drawback of this approach stems from the registration function used to perform the minimization (the sum of squared differences), as it is poorly robust to scene variations. In this paper, the tracking process is adapted to take into account two more complex registration functions. First, the sum of conditional variance: since it is invariant to global illumination variations, the proposed algorithm is robust to those conditions whilst keeping a low computational complexity. Then, the mutual information is considered: in that case the complexity is greater, but so is the robustness to non-global illumination variations, specularities or occlusions. The proposed approaches, after being described, are tested on different scenes under varying illumination conditions to assess their respective efficiency.
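A minimal sketch of the mutual information similarity score used as a registration measure between the current image and the template (the primal-dual tracking optimization itself is not reproduced; bin count and test images are assumptions):

import numpy as np

def mutual_information(I, I_ref, bins=32):
    # Joint intensity histogram between the two images.
    joint, _, _ = np.histogram2d(I.ravel(), I_ref.ravel(), bins=bins)
    p_joint = joint / joint.sum()
    p_i = p_joint.sum(axis=1, keepdims=True)
    p_j = p_joint.sum(axis=0, keepdims=True)
    nz = p_joint > 0
    return np.sum(p_joint[nz] * np.log(p_joint[nz] / (p_i @ p_j)[nz]))

I = np.random.rand(120, 160)
I_dark = 0.5 * I + 0.1                 # global illumination change
print(mutual_information(I, I_dark))   # remains high despite the change,
# which is why MI-based registration tolerates illumination variations that
# break a plain sum-of-squared-differences criterion.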
|
|
18:15-18:30, Paper MoDT2.6 | |
> >Accurate, Robust, and Real-Time Estimation of Finger Pose with a Motion Capture System |
Yun, Youngmok | The Univ. of Texas at Austin |
Agarwal, Priyanshu | Univ. of Texas at Austin |
Deshpande, Ashish | Univ. of Texas |
Attachments: Video Attachment
Keywords: Visual Tracking, Calibration and Identification, Human-Robot Interaction
Abstract: Finger exoskeletons, haptic devices, and augmented reality applications demand accurate, robust, and fast estimation of finger pose. We present a novel finger pose estimation method using a motion capture system. The method combines system identification and state estimation in a unified framework. The system identification stage determines an accurate model of the finger, and the state estimation stage tracks the finger pose with the Extended Kalman Filter (EKF) algorithm based on the model obtained in the system identification stage. The algorithm is validated by simulation and experiment. The experimental results show that the method can robustly estimate the finger pose at a high frequency (greater than 1 kHz) in the presence of measurement noise, occlusion of markers, and fast movement.
|
|
MoDT3 |
Room703 |
Human in the Loop |
Regular Session |
Chair: Abbeel, Pieter | UC Berkeley |
Co-Chair: Liu, Yen-Chen | National Cheng Kung Univ. |
|
17:00-17:15, Paper MoDT3.1 | |
> >Pose and Paste - an Intuitive Interface for Remote Navigation of a Multi-Robot System |
Lichtenstern, Michael | German Aerospace Center (DLR) |
Angermann, Michael | German Aerospace Center |
Frassl, Martin | German Aerospace Center (DLR) |
Berthold, Gunther | Deutsches Zentrum für Luft- und Raumfahrt, Oberpfaffenhofen |
Julian, Brian | MIT |
Rus, Daniela | MIT |
Attachments: Video Attachment
Keywords: Human-Robot Interaction, Networked Teleoperation, Aerial Robotics
Abstract: We present Pose and Paste (P&P) - an intuitive interface designed to facilitate interaction between a single user and a number of robots equipped with cameras. With this interface, a user wearing a head-mounted display is able to cycle through the real-time video streams originating from the robots’ cameras. The user is also able to select a robot and remotely position it by simply walking or turning his/her head, i.e., control the robot’s motion in a master/slave-type fashion. We report the results of an initial hardware experiment where a user located in the USA is tasked to position two quadrotor robots within a motion capture laboratory located in Germany. These results suggest that P&P is a feasible approach to remotely inspect disaster affected sites. Lastly, we conduct a user study to compare P&P with a baseline interface composed of a traditional computer monitor and a video game controller. The quantitative results and qualitative discussions resulting from this user study highlight how such multi-robot interfaces can be further improved.
|
|
17:15-17:30, Paper MoDT3.2 | |
>Grounding Spatial Relations for Human-Robot Interaction |
Guadarrama, Sergio | Univ. of California, Berkeley |
Riano, Lorenzo | Univ. of California, Berkeley |
Golland, Dave | UC Berkeley |
Göhring, Daniel | ICSI Berkeley |
Jia, Yangqing | UC Berkeley |
Klein, Dan | Univ. of California at Berkeley |
Abbeel, Pieter | UC Berkeley |
Darrell, Trevor | UC Berkeley |
Attachments: Video Attachment
Keywords: Human-Robot Interaction, Personal Robots, Computer Vision
Abstract: We propose a system for human-robot interaction that learns both models for spatial prepositions and for object recognition. Our system grounds the meaning of an input sentence in terms of visual percepts coming from the robot's sensors in order to send an appropriate command to the PR2 or respond to spatial queries. To perform this grounding, the system recognizes the objects in the scene, determines which spatial relations hold between those objects, and semantically parses the input sentence. The proposed system uses the visual and spatial information in conjunction with the semantic parse to interpret statements that refer to objects (nouns), their spatial relationships (prepositions), and to execute commands (actions). The semantic parse is inherently compositional, allowing the robot to understand complex commands that refer to multiple objects and relations such as: "Move the cup close to the robot to the area in front of the plate and behind the tea box". Our system correctly parses 94% of the 210 online test sentences, correctly interprets 91% of the correctly parsed sentences, and correctly executes 89% of the correctly interpreted sentences.
|
|
17:30-17:45, Paper MoDT3.3 | |
>Fast Task-Sequence Allocation for Heterogeneous Robot Teams with a Human in the Loop |
Petersen, Karen | TU Darmstadt |
Kleiner, Alexander | Linköping Univ. |
von Stryk, Oskar | Tech. Univ. Darmstadt |
Attachments: Video Attachment
Keywords: Human-Robot Interaction, Multi-Robot Coordination, Unmanned Aerial Vehicles
Abstract: Efficient task allocation with timing constraints to a team of possibly heterogeneous robots is a challenging problem with applications, e.g., in search and rescue. In this paper a mixed-integer linear programming (MILP) approach is proposed for assigning heterogeneous robot teams to the simultaneous completion of sequences of tasks with specific requirements such as completion deadlines. For this purpose our approach efficiently combines the strengths of state-of-the-art MILP solvers with human expertise in mission scheduling. We experimentally show that simple and intuitive inputs by a human user have substantial impact on both computation time and quality of the solution. The presented approach can in principle be applied to quite general missions for robot teams with human supervision.
|
|
17:45-18:00, Paper MoDT3.4 | |
>An Adjustable Autonomy Paradigm for Adapting to Expert-Novice Differences |
Lewis, Bennie | Univ. of Central Florida |
Tastan, Bulent | Univ. of Central Florida |
Sukthankar, Gita | Univ. of Central Florida |
Attachments: Video Attachment
Keywords: Human-Robot Interaction, Telerobotics, Learning and Adaptive Systems
Abstract: Multi-robot manipulation tasks are challenging for robots to complete in an entirely autonomous way due to the perceptual and cognitive requirements of grasp planning, necessitating the development of specialized user interfaces. Yet even for humans, the task is sufficiently complex that a high level of performance variability exists between a novice and an expert’s ability to teleoperate the robots in a sufficiently tightly coupled fashion to manipulate objects without dropping them. The ultimate success of the task relies on the skill level of the human operator to manage and coordinate the robot team. Although most systems focus their effort on forging a unified connection between the robots and the operator, less attention has been spent on the problem of identifying and adapting to the human operator’s skill level. In this paper, we present a method for modeling the human operator and adjusting the autonomy levels of the robots based on the operator’s skill level. This added functionality serves as a crucial mechanism toward making human operators of any skill level a vital asset to the team even when their teleoperation performance is uneven.
|
|
18:00-18:15, Paper MoDT3.5 | |
>Task-Space Control of Bilateral Human-Swarm Interaction with Constant Time Delay |
Liu, Yen-Chen | National Cheng Kung Univ. |
Keywords: Human-Robot Interaction, Swarm Robotics, Telerobotics
Abstract: This paper presents a system framework and control algorithm that enable a human operator to simultaneously interact with a group of swarm robots in a remote environment. In this control system, several characteristics of the configuration of the swarm robots are encoded as task functions, for which a human operator can specify desired values that are conveyed to the end-effector of the master robot. Stability and tracking performance of the proposed control system are investigated in the presence of communication delays so that the swarm robots can be manipulated remotely. Moreover, the swarm robots, which perform like a redundant robotic system, can also regulate their positions to achieve secondary tasks autonomously. The proposed control algorithms are validated via numerical simulations on a 3-DOF robot manipulator with a group of mobile robots.
|
|
18:15-18:30, Paper MoDT3.6 | |
>The Influence of Approach Speed and Functional Noise on Users’ Perception of a Robot |
Lohse, Manja | Univ. of Twente |
van Berkel, Niels | Univ. of Twente |
van Dijk, Elisabeth | Univ. of Twente |
Joosse, Michiel | Univ. of Twente |
Karreman, Daphne Eleonora | UTwente |
Evers, Vanessa | Univ. of Amsterdam |
Keywords: Robot Companions and Social Human-Robot Interaction, Gesture, Posture, Social Spaces and Facial Expressions, Performance Evaluation and Benchmarking
Abstract: How a robot approaches a person greatly determines the interaction that follows. This is particularly relevant when the person has never interacted with the robot before. In human communication, we exchange a multitude of multimodal signals to communicate our intent while we approach others. However, most robots do not have the capabilities to produce such signals and easily communicate their intent. In this paper we propose to communicate intent when a robot approaches a person through functional noise and approach speed. Both were manipulated in a between-subjects experiment (N=40): they either increased slowly at the start of the approach and decreased slowly as the robot reached the human, or were at their maximum from the start and stopped abruptly at the end of the approach. We analyzed questionnaires and video data from the interaction and found that functional noise that increased/decreased in volume was particularly helpful for communicating the robot's intent, but only in congruence with an increasing/decreasing velocity.
|
|
MoDT4 |
Room601 |
Human Environment |
Regular Session |
Chair: Spalanzani, Anne | INRIA / UPMF-Grenoble 2 |
Co-Chair: Arai, Tatsuo | Osaka Univ. |
|
17:00-17:15, Paper MoDT4.1 | |
>Generation of Human Walking Paths |
Papadopoulos, Alessandro Vittorio | Pol. di Milano |
Bascetta, Luca | Pol. di Milano |
Ferretti, Gianni | Pol. di Milano |
Keywords: Human Centered Planning and Control, Motion and Trajectory Generation
Abstract: This work investigates the way humans plan their paths in a goal-directed motion. The person can be viewed as an optimal controller that plans the path minimizing a certain (unknown) cost function. Taking this viewpoint, the problem can be formulated as an inverse optimal control one, i.e., starting from control and state trajectories we want to figure out the cost function used by a person while planning the path. To test the envisaged ideas, a set of walking paths of different volunteers were recorded using a motion capture facility. The collected data have been used to compare a solution to the inverse control problem coming from the literature to a novel one. The obtained results, ranked using the discrete Fréchet distance, show the effectiveness of the proposed approach.
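Since the comparison of planned and recorded paths is ranked with the discrete Fréchet distance, a standard dynamic-programming implementation of that metric is sketched below in Python/NumPy. The two example paths are made-up coordinates, not motion-capture data.

```python
import numpy as np

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between polygonal paths P and Q (arrays of 2D points),
    computed with the standard dynamic-programming recurrence."""
    n, m = len(P), len(Q)
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # pairwise point distances
    ca = np.full((n, m), np.inf)
    ca[0, 0] = d[0, 0]
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], d[0, j])
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]), d[i, j])
    return ca[-1, -1]

# Example: compare a recorded path with a predicted one (illustrative coordinates).
recorded = np.array([[0, 0], [1, 0.1], [2, 0.3], [3, 0.2]], dtype=float)
predicted = np.array([[0, 0], [1, 0.0], [2, 0.2], [3, 0.4]], dtype=float)
print(discrete_frechet(recorded, predicted))
```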
|
|
17:15-17:30, Paper MoDT4.2 | |
>Social Navigation Model Based on Human Intention Analysis Using Face Orientation |
Ratsamee, Photchara | Graduate School of Engineering Science, Osaka Univ. |
Mae, Yasushi | Osaka Univ. |
Ohara, Kenichi | Meijo Univ. |
Kojima, Masaru | Osaka Univ. |
Arai, Tatsuo | Osaka Univ. |
Keywords: Human Centered Planning and Control, Human-Humanoid Interaction, Human-Robot Interaction
Abstract: We propose a social navigation model that allows a robot to navigate in a human environment according to human intentions, in particular during a situation where the human encounters a robot and he/she wants to avoid, unavoid (maintain his/her course), or approach the robot. Avoiding, unavoiding, and approaching trajectories of humans are classified based on the face orientation on a social force model and their predicted motion. The proposed model is developed based on human motion and behavior (especially face orientation and overlapping personal space) analysis in preliminary experiments. Our experimental evidence demonstrates that the robot is able to adapt its motion by preserving personal distance from passers-by, and approaching persons who want to interact with the robot. This work contributes to the future development of a human-robot socialization environment.
|
|
17:30-17:45, Paper MoDT4.3 | |
>Robot Companion: A Social-Force Based Approach with Human Awareness-Navigation in Crowded Environments |
Ferrer, Gonzalo | UPC-CSIC |
Garrell, Anais | UPC-CSIC |
Sanfeliu, Alberto | Univ. Pol. de Cataluyna |
Keywords: Robot Companions and Social Human-Robot Interaction, Human-Robot Interaction
Abstract: Robots accompanying humans is one of the core capacities every service robot deployed in urban settings should have. We present a novel robot companion approach based on the so-called Social Force Model (SFM). A new model of robot-person interaction is obtained using the SFM which is suited for our robots Tibi and Dabo. Additionally, we propose an interactive scheme for the robot's human-aware navigation using the SFM and prediction information. Moreover, we present a new metric to evaluate the robot companion performance based on vital spaces and comfortableness criteria. Also, multimodal human feedback is proposed to enhance the behavior of the system. The validation of the model is accomplished through an extensive set of simulations and real-life experiments.
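To make the SFM idea concrete, the following is a minimal Helbing-style social force computation: a goal-directed driving force plus exponential repulsion from nearby people. The parameter values and positions are placeholders, not those identified for Tibi and Dabo.

```python
import numpy as np

def social_force(pos, vel, goal, others, v_des=1.0, tau=0.5, A=2.0, B=0.3):
    """Minimal social force on one agent: driving force toward the goal plus
    exponential repulsion from each other agent (placeholder parameters)."""
    e_goal = (goal - pos) / (np.linalg.norm(goal - pos) + 1e-9)
    f = (v_des * e_goal - vel) / tau                 # relax toward the desired velocity
    for p in others:                                 # repulsion from each nearby person
        diff = pos - p
        dist = np.linalg.norm(diff) + 1e-9
        f += A * np.exp(-dist / B) * diff / dist
    return f

# One integration step for a robot accompanying a person (illustrative numbers).
pos, vel = np.array([0.0, 0.0]), np.array([0.2, 0.0])
goal = np.array([5.0, 0.0])
others = [np.array([1.0, 0.3])]
dt = 0.1
vel = vel + dt * social_force(pos, vel, goal, others)
pos = pos + dt * vel
print(pos, vel)
```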
|
|
17:45-18:00, Paper MoDT4.4 | |
>A Gain-Scheduling Approach to Model Human Simultaneous Visual Tracking and Balancing |
Panchea, Adina Marlena | Univ. d'Orléans |
Ramdani, Nacim | Univ. Orléans |
Fraisse, Philippe | LIRMM |
Park, Sukyung | KAIST |
Keywords: Humanoid Robots, Gesture, Posture, Social Spaces and Facial Expressions, Biologically-Inspired Robots
Abstract: In this study, we endeavor to better understand the human motor control system in order to help transpose some of its features onto humanoid robots. The postural coordination task investigated is related to an experimental paradigm that consists of a visual target tracking task performed while balancing. We want to test whether the human biomechanical responses, namely the phase / anti-phase coordination mode transition exhibited during the actual experiments, can be modeled by a linearized double inverted pendulum and parallel independent PD feedback control loops. Remarkably, these loops implement joint space control using Cartesian task space variables. Furthermore, we want to see how the feedback control gains given by an optimization procedure scale w.r.t. frequency or target motion magnitude. A closed-loop synthesis is developed that consists in minimizing a minimum torque criterion under both balance and task constraints. We show that the optimal feedback control gains obtained yield model responses consistent with the literature. In a second part, we implement a gain-scheduling approach where control gain values are predicted via interpolation. Finally, our approach implements a controller capable of achieving the task even when the frequency of the target motion varies over time.
|
|
18:00-18:15, Paper MoDT4.5 | |
>Social Mapping of Human-Populated Environments by Implicit Function Learning |
Papadakis, Panagiotis | INRIA, Rhône-Alpes, Grenoble |
Spalanzani, Anne | INRIA / UPMF-Grenoble 2 |
Laugier, Christian | INRIA Rhône-Alpes |
Attachments: Video Attachment
Keywords: Gesture, Posture, Social Spaces and Facial Expressions, Mapping, Robot Companions and Social Human-Robot Interaction
Abstract: With robot technology shifting towards entering human-populated environments, the need emerges for augmented robotic perception and planning skills that complement human presence. In this integration, perception of and adaptation to the implicit human social conventions play a fundamental role. Toward this goal, we propose a novel framework that can model context-dependent human spatial interactions, encoded in the form of a social map. The core idea of our approach resides in modelling human personal spaces as non-linearly scaled probability functions within the robotic state space and in deriving the structure and shape of a social map by solving a learning problem in kernel space. The social borders are subsequently obtained as isocontours of the learned implicit function that can realistically model arbitrarily complex social interactions of varying shape and size. We present our experiments using a rich dataset of human interactions, demonstrating the feasibility and utility of the proposed approach and promoting its application to social mapping of human-populated environments.
|
|
18:15-18:30, Paper MoDT4.6 | |
>Towards More Efficient Navigation for Robots and Humans |
Lu, David V. | Washington Univ. in St. Louis |
Smart, William | Oregon State Univ. |
Keywords: Human-Robot Interaction, Navigation, Human Centered Planning and Control
Abstract: Effective robot navigation in the presence of humans is hard. Not only do human obstacles move, they react to the movements of the robot according to instinct and social rules. In order to efficiently navigate around each other, both the robot and the human must move in a way that takes the other into account. Failure to do so can lead to a lowering of the perceived quality of the interaction and, more importantly, it can also delay one or both parties, causing them to be less efficient in whatever task they are trying to achieve. In this paper, we present a system capable of creating more efficient corridor navigation behaviors by manipulating existing navigation algorithms and introducing social cues from the robot to the human. We give the results of a user study, demonstrating the effectiveness of our system, and discuss how it can be applied more generally to a wide variety of situations.
|
|
MoDT5 |
Room605 |
Robot Learning IV |
Regular Session |
Chair: Ogata, Tetsuya | Waseda Univ. |
Co-Chair: Detry, Renaud | Univ. of Liège |
|
17:00-17:15, Paper MoDT5.1 | |
>Knowledge Transfer for High-Performance Quadrocopter Maneuvers |
Hamer, Michael | ETH Zurich |
Waibel, Markus | ETH Zurich |
D'Andrea, Raffaello | ETHZ |
Keywords: Learning and Adaptive Systems, Learning from Demonstration
Abstract: Iterative Learning Control algorithms are based on the premise that "practice makes perfect". By iteratively performing an action, repetitive errors can be learned and accounted for in subsequent iterations, in a non-causal and feed-forward manner. This method has previously been implemented for a quadrocopter system, enabling the quadrocopter to learn to accurately track high-performance slalom trajectories. However, one major limitation of this system is that knowledge from previously learned trajectories is not generalized or transferred to new trajectories; these must be learned from a state of zero experience. This paper experimentally shows that the major dynamics of the Iterative Learning Control process can be captured by a linear map, trained on previously learned slalom trajectories. This map enables this prior knowledge to be used to improve the initialization of an unseen trajectory. Experimental results show that prediction based on a single prior is enough to reduce the initial tracking error for an unseen trajectory by an order of magnitude.
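The underlying ILC idea can be summarized by the classic serial update u_{j+1} = u_j + L e_j, sketched below on a toy repetitive plant in Python/NumPy; the plant, gain, and trial length are illustrative assumptions, not the quadrocopter model or the linear knowledge-transfer map of the paper.

```python
import numpy as np

def ilc_iterations(plant, y_ref, learning_gain=0.8, n_iter=10):
    """Basic serial ILC: after each trial, correct the feed-forward input with that
    trial's tracking error, u_{j+1} = u_j + L * e_j. 'plant' maps an input sequence
    to an output sequence and stands in for one repeated trial (assumption)."""
    u = np.zeros_like(y_ref)
    for _ in range(n_iter):
        e = y_ref - plant(u)              # tracking error of the current trial
        u = u + learning_gain * e         # non-causal, feed-forward correction
    return u, float(np.max(np.abs(y_ref - plant(u))))

# Toy repetitive plant: first-order lag with a constant disturbance, repeated every trial.
def plant(u, a=0.9, d=0.2):
    y, out = 0.0, []
    for uk in u:
        y = a * y + (1 - a) * (uk + d)
        out.append(y)
    return np.array(out)

u, err = ilc_iterations(plant, y_ref=np.sin(np.linspace(0, 2 * np.pi, 50)))
print("max tracking error after learning:", err)
```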
|
|
17:15-17:30, Paper MoDT5.2 | |
>Unsupervised Learning of Predictive Parts for Cross-Object Grasp Transfer |
Detry, Renaud | Univ. of Liège |
Piater, Justus | Univ. of Innsbruck |
Keywords: Learning and Adaptive Systems, Grasping, Perception for Grasping and Manipulation
Abstract: We present a principled solution to the problem of transferring grasps across objects. Our approach identifies, through autonomous exploration, the size and shape of object parts that consistently predict the applicability of a grasp across multiple objects. The robot can then use these parts to plan grasps onto novel objects. By contrast to most recent methods, we aim to solve the part-learning problem without the help of a human teacher. The robot collects training data autonomously by exploring different grasps on its own. The core principle of our approach is an intensive encoding of low-level sensorimotor uncertainty with probabilistic models, which allows the robot to generalize the noisy autonomously-generated grasps. Object shape, which is our main cue for predicting grasps, is encoded with surface densities, that model the spatial distribution of points that belong to an object's surface. Grasp parameters are modeled with grasp densities, that correspond to the spatial distribution of object-relative gripper poses that lead to a grasp. The size and shape of grasp-predicting parts are identified by sampling the cross-object correlation of local shape and grasp parameters. We approximate sampling and integrals via Monte Carlo methods to make our computer implementation tractable. We demonstrate the applicability of our method in simulation. A proof of concept on a real robot is also provided.
|
|
17:30-17:45, Paper MoDT5.3 | |
>Multimodal Integration Learning of Object Manipulation Behaviors Using Deep Neural Networks |
Noda, Kuniaki | Waseda Univ. |
Arie, Hiroaki | Waseda Univ. |
Suga, Yuki | Waseda Univ. |
Ogata, Tetsuya | Waseda Univ. |
Attachments: Video Attachment
Keywords: Learning and Adaptive Systems, Neurorobotics, Motion and Trajectory Generation
Abstract: This paper presents a novel computational approach for modeling and generating multiple object manipulation behaviors by a humanoid robot. The contribution of this paper is that deep learning methods are applied not only for multimodal sensor fusion but also for sensory-motor coordination. More specifically, a time-delay deep neural network is applied for modeling multiple behavior patterns represented with multi-dimensional visuomotor temporal sequences. By using the efficient training performance of Hessian-free optimization, the proposed mechanism successfully models six different object manipulation behaviors in a single network. The generalization capability of the learning mechanism enables the acquired model to perform the functions of cross-modal memory retrieval and temporal sequence prediction. The experimental results show that the motion patterns for object manipulation behaviors are successfully generated from the corresponding image sequence, and vice versa. Moreover, the temporal sequence prediction enables the robot to interactively switch multiple behaviors in accordance with changes in the displayed objects.
|
|
17:45-18:00, Paper MoDT5.4 | |
>Robotic Calligraphy - Learning How to Write Single Strokes of Chinese and Japanese Characters |
Mueller, Samuel | ETHZ |
Huebel, Nico | ETH Zurich |
Waibel, Markus | ETH Zurich |
D'Andrea, Raffaello | ETHZ |
Attachments: Video Attachment
Keywords: Learning and Adaptive Systems, Visual Learning
Abstract: A robot testbed for writing Chinese and Japanese calligraphy characters is presented. Single strokes of the calligraphy characters are represented in a database and initialized with a scanned reference image and a manually chosen initial drawing spline. A learning procedure uses visual feedback to analyze each new iteration of the drawn stroke and updates the drawing spline such that every subsequent drawn stroke becomes more similar to the reference image. The learning procedure can be performed either in simulation, using a simple brush model to create simulated images of the strokes, or with a real robot arm equipped with a calligraphy brush and a camera that captures images of the drawn strokes. Results from both simulations and experiments with the robot arm are presented.
|
|
18:00-18:15, Paper MoDT5.5 | |
>Estimation-Based ILC Using Particle Filter with Application to Industrial Manipulators |
Axelsson, Patrik | Linköping Univ. |
Karlsson, Rickard | Linkoping Univ. |
Norrlöf, Mikael | Linköping Univ. |
Keywords: Learning and Adaptive Systems, Sensor Fusion
Abstract: An estimation-based iterative learning control (ILC) algorithm is applied to a realistic industrial manipulator model. By measuring the acceleration of the end-effector, the arm angular position accuracy is improved when the measurements are fused with motor angle observations. The estimation problem is formulated in a Bayesian estimation framework where three solutions are proposed: one using the extended Kalman filter (EKF), one using the unscented Kalman filter (UKF), and one using the particle filter (PF). The estimates are used in an ILC method to improve the accuracy for following a given reference trajectory. Since the ILC algorithm is repetitive, no computational restrictions on the estimation methods apply explicitly. In an extensive Monte Carlo simulation study it is shown that the PF method outperforms the other methods and that the ILC control law is substantially improved using the PF estimate.
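For reference, a single bootstrap particle-filter step (propagate, reweight, systematic resampling) looks as follows; the 1-D random-walk model and noise levels are toy assumptions, not the manipulator model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, z, propagate, likelihood):
    """One bootstrap particle-filter step: propagate particles through the motion model,
    reweight by the measurement likelihood, then resample systematically."""
    particles = propagate(particles)
    weights = weights * likelihood(z, particles)
    weights = weights / np.sum(weights)
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n             # systematic resampling grid
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx], np.full(n, 1.0 / n)

# Toy 1-D example: random-walk state, noisy position measurement.
propagate = lambda p: p + rng.normal(0.0, 0.05, size=p.shape)
likelihood = lambda z, p: np.exp(-0.5 * ((z - p) / 0.1) ** 2)
particles = rng.normal(0.0, 1.0, size=500)
weights = np.full(500, 1.0 / 500)
particles, weights = pf_step(particles, weights, 0.3, propagate, likelihood)
print("state estimate:", float(np.mean(particles)))
```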
|
|
18:15-18:30, Paper MoDT5.6 | |
>Skills Transfer across Dissimilar Robots by Learning Context-Dependent Rewards |
Malekzadeh, Milad S. | Istituto Italiano di Tecnologia |
Bruno, Danilo | Istituto Italiano di Tecnologia (IIT) |
Calinon, Sylvain | Istituto Italiano di Tecnologia (IIT) |
Nanayakkara, Thrishantha | King's Coll. Univ. of London |
Caldwell, Darwin G. | Istituto Italiano di Tecnologia |
Attachments: Video Attachment
Keywords: Learning and Adaptive Systems, Learning from Demonstration
Abstract: Robot programming by demonstration encompasses a wide range of learning strategies, from simple mimicking of the demonstrator's actions to the higher level extraction of the underlying intent. By focusing on this last form, we study the problem of extracting the reward function explaining the demonstrations from a set of candidate reward functions, and using this information for self-refinement of the skill. This definition of the problem has links with inverse reinforcement learning problems in which the robot autonomously extracts an optimal reward function that defines the goal of the task. By relying on Gaussian mixture models, the proposed approach learns how the different candidate reward functions are combined, and in which contexts or phases of the task they are relevant for explaining the user's demonstrations. The extracted reward profile is then exploited to improve the skill with a self-refinement approach based on expectation-maximization, allowing the imitator to reach a skill level that goes beyond the demonstrations. The approach can be used to reproduce a skill in different ways or to transfer tasks across robots of different structures. The proposed approach is tested in simulation with a new type of continuum robot (STIFF-FLOP), using kinesthetic demonstrations from a Barrett WAM manipulator.
|
|
MoDT6 |
Room604 |
Sampling-Based Tree Planners |
Regular Session |
Chair: Gentilini, Iacopo | Embry-Riddle Aeronautical Univ. |
Co-Chair: Burgard, Wolfram | Univ. of Freiburg |
|
17:00-17:15, Paper MoDT6.1 | |
>Learning to Guide Random Tree Planners in High Dimensional Spaces |
Röwekämper, Jörg | Univ. of Freiburg |
Tipaldi, Gian Diego | Univ. of Freiburg |
Burgard, Wolfram | Univ. of Freiburg |
Keywords: Motion and Path Planning, Learning and Adaptive Systems
Abstract: In this paper we present the projection and bias heuristic (PBH), a motion planning algorithm that makes use of low-dimensional projections to improve sampling-based planning algorithms. In contrast to other state-of-the-art methods, we do not assume that projections are either random or given by an expert user. Rather, our goal is to learn projections such that planning on them improves the efficiency and the quality of solutions. We present both a method to learn effective projections and a sampling algorithm that makes use of them. We show that our approach can be easily integrated into popular sampling-based planners. Extensive experiments performed in simulated environments demonstrate that our approach produces paths that are in general shorter than those obtained with state-of-the-art algorithms. Moreover, it generally requires less computation time.
|
|
17:15-17:30, Paper MoDT6.2 | |
>Blind RRT: A Probabilistically Complete, Distributed RRT |
Rodriguez, Cesar | Texas A&M Univ. |
Denny, Jory | Texas A&M Univ. |
Jacobs, Sam Ade | Texas A&M Univ. |
Thomas, Shawna | Texas A&M Univ. |
Amato, Nancy | Texas A&M Univ. |
Keywords: Motion and Path Planning
Abstract: Rapidly-Exploring Random Trees (RRTs) have been successful at finding feasible solutions for many types of problems. With motion planning becoming more computationally demanding, we turn to parallel motion planning for efficient solutions. Existing work on distributed RRTs has been limited by the overhead that global communication requires. A recent approach, Radial RRT, demonstrated a scalable algorithm that subdivides the space into regions to increase the computation locality. However, if an obstacle completely blocks RRT growth in a region, the planning space is not fully covered and the approach is thus not probabilistically complete. We present a new algorithm, Blind RRT, which ignores obstacles during initial growth to efficiently explore the entire space. Because obstacles are ignored, free components of the tree become disconnected and fragmented. Thus, Blind RRT merges parts of the tree that have become disconnected from the root. We show how this algorithm can be applied to the Radial RRT framework allowing both scalability and usefulness in motion planning. This method is a probabilistically complete approach to parallel RRTs. We show that our method not only scales but also overcomes the motion planning limitations that Radial RRT has in a series of difficult motion planning tasks.
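As background for the RRT variants discussed in this session, a minimal 2-D RRT loop (sample, extend the nearest node, keep collision-free edges) is sketched below in Python/NumPy. It is the baseline algorithm only, not the Blind or Radial variants; the obstacle, bounds, and step size are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

def rrt(start, goal, collision_free, bounds, step=0.5, iters=2000, goal_tol=0.5):
    """Minimal 2-D RRT: sample a random configuration, extend the nearest tree node
    by a fixed step, keep the new edge only if it is collision-free."""
    nodes = [np.asarray(start, dtype=float)]
    parent = {0: None}
    for _ in range(iters):
        q = rng.uniform(bounds[0], bounds[1])
        i = int(np.argmin([np.linalg.norm(q - n) for n in nodes]))     # nearest node
        direction = q - nodes[i]
        q_new = nodes[i] + step * direction / (np.linalg.norm(direction) + 1e-9)
        if collision_free(nodes[i], q_new):
            parent[len(nodes)] = i
            nodes.append(q_new)
            if np.linalg.norm(q_new - goal) < goal_tol:
                path, j = [], len(nodes) - 1
                while j is not None:                                   # backtrack to root
                    path.append(nodes[j])
                    j = parent[j]
                return path[::-1]
    return None

# Free space with a single circular obstacle at (5, 5) of radius 2 (illustrative check).
free = lambda a, b: np.linalg.norm((a + b) / 2 - np.array([5.0, 5.0])) > 2.0
bounds = (np.array([0.0, 0.0]), np.array([10.0, 10.0]))
print(rrt([0, 0], np.array([9.0, 9.0]), free, bounds))
```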
|
|
17:30-17:45, Paper MoDT6.3 | |
>HRA*: Hybrid Randomized Path Planning for Complex 3D Environments |
Teniente Avilés, Ernesto Homar | CSIC-UPC |
Andrade-Cetto, Juan | CSIC-UPC |
Keywords: Motion and Path Planning
Abstract: We propose HRA*, a new randomized path planner for complex 3D environments. The method is a modified A* algorithm that uses a hybrid node expansion technique, combining a random exploration of the action space that meets vehicle kinematic constraints with a cost-to-goal metric that considers only kinematically feasible paths to the goal. The method also includes a series of heuristics to accelerate the search, such as a cost penalty near obstacles and a filter to prevent revisiting configurations. The performance of the method is compared against A*, RRT and RRT* in a series of challenging 3D outdoor datasets. HRA* is shown to outperform all of them in computation time, while delivering shorter paths than A* and RRT.
|
|
17:45-18:00, Paper MoDT6.4 | |
>Adapting RRT Growth for Heterogeneous Environments |
Denny, Jory | Texas A&M Univ. |
Morales, Marco | Inst. Tecnológico Autónomo de México |
Rodriguez, Samuel | Texas A&M Univ. |
Amato, Nancy | Texas A&M Univ. |
Keywords: Motion and Path Planning, Learning and Adaptive Systems
Abstract: Rapidly-exploring Random Trees (RRTs) are effective for a wide range of applications ranging from kinodynamic planning to motion planning under uncertainty. However, RRTs are not as efficient when exploring heterogeneous environments and do not adapt to the space. For example, in difficult areas an expensive RRT growth method might be appropriate, while in open areas inexpensive growth methods should be chosen. In this paper, we present a novel algorithm, Adaptive RRT, that adapts RRT growth to the current exploration area using a two level growth selection mechanism. At the first level, we select groups of expansion methods according to the visibility of the node being expanded. Second, we use a cost-sensitive learning approach to select a sampler from the group of expansion methods chosen. Also, we propose a novel definition of visibility for RRT nodes which can be computed in an online manner and used by Adaptive RRT to select an appropriate expansion method. We present the algorithm and experimental analysis on a broad range of problems showing not only its adaptability, but efficiency gains achieved by adapting exploration methods appropriately.
|
|
18:00-18:15, Paper MoDT6.5 | |
>Efficient Sampling-Based Motion Planning with Asymptotic Near-Optimality Guarantees for Systems with Dynamics |
Littlefield, Zakary | Rutgers Univ. |
Li, Yanbo | Univ. of Nevada at Reno |
Bekris, Kostas E. | Rutgers, the State Univ. of New Jersey |
Keywords: Nonholonomic Motion Planning, Motion and Path Planning
Abstract: Recent progress has provided motion planners, such as RRT*, that asymptotically converge to optimal solutions. These methods, however, require a local planner, which connects two states with a trajectory. For systems with dynamics, the local planner needs to solve a two-point boundary value problem (BVP) for differential equations. Such a solver is not always available for interesting systems. Furthermore, asymptotically optimal solutions tend to increase computational cost relative to alternatives, such as RRT, that focus on feasibility and do not require a local planner. This paper describes a sampling-based solution with the following desirable properties: a) it does not require a BVP solver but only uses a forward propagation model, b) it employs a single propagation per iteration similar to RRT, making it very efficient, c) it is asymptotically near-optimal, and d) it provides a sparse data structure for answering path queries, which further improves computational performance. Simulations on prototypical dynamical systems show the method is able to improve the quality of feasible solutions over time and that it is computationally efficient.
|
|
18:15-18:30, Paper MoDT6.6 | |
>Cycle Time Based Multi-Goal Path Optimization for Redundant Robotic Systems |
Gentilini, Iacopo | Embry-Riddle Aeronautical Univ. |
Nagamatsu, Kenji | DENSO WAVE INCORPORATED |
Shimada, Kenji | Carnegie Mellon Univ. |
Attachments: Video Attachment
Keywords: Redundant Robots, Path Planning for Manipulators, Motion and Path Planning
Abstract: Finding an optimal path for a redundant robotic system to visit a sequence of several goal placements poses two technical challenges. First, while searching for an optimal sequence, infinitely many feasible configurations can be used to reach each goal placement. Second, obstacle avoidance has to be considered while optimizing the path from one goal placement to the next. Previous works focused on solving a discrete formulation of this optimization problem where only few configurations are used to represent each goal placement. We instead model it as a Traveling Salesman Problem with Neighborhoods (TSPN), where each neighborhood is defined as the set of the infinitely many configurations corresponding to the same goal placement. A solution procedure based on a Hybrid Random-key Genetic Algorithm (HRKGA) and bidirectional Rapidly-exploring Random Trees (biRRTs) is then proposed. Finally, experimental tests performed on a 7-Degree Of Freedom (DOF) industrial vision inspection system show that the proposed method is able to drastically reduce the cycle time currently required by the system.
|
|
MoDT7 |
Room701 |
Calibration |
Regular Session |
Chair: Olson, Edwin | Univ. of Michigan |
Co-Chair: Briot, Sébastien | IRCCyN |
|
17:00-17:15, Paper MoDT7.1 | |
>CamOdoCal: Automatic Intrinsic and Extrinsic Calibration of a Rig with Multiple Generic Cameras and Odometry |
Heng, Lionel | ETH Zurich |
Li, Bo | ETH Zurich |
Pollefeys, Marc | ETH Zurich |
Attachments: Video Attachment
Keywords: Calibration and Identification, Computer Vision, Field Robots
Abstract: Multiple cameras are increasingly prevalent on robotic and human-driven vehicles. These cameras come in a variety of wide-angle, fish-eye, and catadioptric models. Furthermore, wheel odometry is generally available on the vehicles on which the cameras are mounted. For robustness, vision applications tend to use wheel odometry as a strong prior for camera pose estimation, and in these cases, an accurate extrinsic calibration is required in addition to an accurate intrinsic calibration. To date, there is no known work on automatic intrinsic calibration of generic cameras, and more importantly, automatic extrinsic calibration of a rig with multiple generic cameras and odometry. We propose an easy-to-use automated pipeline that handles both intrinsic and extrinsic calibration; we do not assume that there are overlapping fields of view. At the beginning, we run an intrinsic calibration for each generic camera. The intrinsic calibration is automatic and requires a chessboard. Subsequently, we run an extrinsic calibration which finds all camera-odometry transforms. The extrinsic calibration is unsupervised, uses natural features, and only requires the vehicle to be driven around for a short time. The intrinsic parameters are optimized in a final bundle adjustment step in the extrinsic calibration. In addition, the pipeline produces a globally-consistent sparse map of landmarks which can be used for visual localization. The pipeline is publicly available as a standalone C++ package.
|
|
17:15-17:30, Paper MoDT7.2 | |
>Calibrating Setups with a Single-Point Laser Range Finder and a Camera |
Nguyen, Thanh | Tech. Univ. of Graz |
Reitmayr, Gerhard | TU Graz |
Keywords: Calibration and Identification, Computer Vision
Abstract: The combination of a cheap single-point laser range finder (LRF) device and cameras has become increasingly useful in recent research and industrial applications. While the single-point laser range finder can only provide depth for a single pixel in the observed image, its cost and size make it useful for handheld devices or very lightweight robotic platforms. In this work, we propose two accurate calibration methods for determining the position and direction of the laser range finder with respect to the camera. Notably, we can determine the full calibration even without observing the laser range finder's observation point in the camera image. We evaluate both methods on synthetic and real data, demonstrating their efficiency and good behavior under noise.
|
|
17:30-17:45, Paper MoDT7.3 | |
>Uncertainty Estimation of AR-Marker Poses for Graph-SLAM Optimization in 3D Object Model Generation with RGBD Data |
Mihalyi, Razvan | Jacobs Univ. Bremen |
Pathak, Kaustubh | Jacobs Univ. Bremen |
Vaskevicius, Narunas | Jacobs Univ. |
Birk, Andreas | Jacobs Univ. |
Keywords: Calibration and Identification, Computer Vision
Abstract: This paper presents an approach to acquire textured 3D models of objects without the need for sophisticated hardware infrastructures. The approach is inexpensive, using a low-cost Microsoft Kinect RGB-D sensor and Augmented Reality (AR) markers printed on paper sheets. The AR-markers can be freely placed in the scene, allowing the modeling of objects of various sizes, and the sensor can be moved by the hand of an untrained person. To generate usable models with this very inexpensive and simple set-up, the sequence of RGB-D scans is embedded in a graph-based optimizer for automatic post-refinement. The main novelty of this contribution is the development of an uncertainty model for an AR-marker. The AR-marker uncertainty models are used as constraints in an optimization problem to better estimate the object pose. The models are in the end further fine-tuned by a standard point-based registration algorithm. The results section presents realistic models of various objects generated using this system, e.g., parcels, sport balls, human dolls etc. Additionally, a quantitative analysis is presented using objects of known dimensions.
|
|
17:45-18:00, Paper MoDT7.4 | |
>AprilCal: Assisted and Repeatable Camera Calibration |
Richardson, Andrew | Univ. of Michigan |
Strom, Johannes H. | Univ. of Michigan |
Olson, Edwin | Univ. of Michigan |
Keywords: Calibration and Identification, Computer Vision, Education Robotics
Abstract: Reliable and accurate camera calibration usually requires an expert intuition to reliably constrain all of the parameters in the camera model. Existing toolboxes ask users to capture images of a calibration target in positions of their choosing, after which the maximum-likelihood calibration is computed using all images in a batch optimization. We introduce a new interactive methodology that uses the current calibration state to suggest the position of the target in the next image and to verify that the final model parameters meet the accuracy requirements specified by the user. Suggesting target positions relies on the ability to score candidate suggestions and their effect on the calibration. We describe two methods for scoring target positions: one that computes the stability of the focal length estimates for initializing the calibration, and another that subsequently quantifies the model uncertainty in pixel space. We demonstrate that our resulting system, AprilCal, consistently yields more accurate camera calibrations than standard tools using results from a set of human trials. We also demonstrate that our approach is applicable for a variety of lenses.
|
|
18:00-18:15, Paper MoDT7.5 | |
>Dynamic Parameter Identification of Actuation Redundant Parallel Robots Using Their Power Identification Model: Application to the DualV |
Briot, Sébastien | IRCCyN |
Gautier, Maxime | Univ. of Nantes/IRCCyN |
Krut, Sebastien | LIRMM (CNRS & Univ. Montpellier 2) |
Keywords: Parallel Robots, Dynamics, Calibration and Identification
Abstract: Off-line robot dynamic identification methods are generally based on the use of the Inverse Dynamic Identification Model (IDIM), which calculates the joint forces/torques (estimated as the product of the known control signal - the input reference of the motor current loop - by the joint drive gains) that are linear in relation to the dynamic parameters, and on the use of the linear least squares technique to calculate the parameters (IDIM-LS technique). However, as actuation-redundant parallel robots are overconstrained, their IDIM has an infinity of solutions for the force/torque prediction, depending on the value of the desired overconstraint, which is a priori unknown in the identification process. As a result, the IDIM cannot be used for the identification procedure. On the contrary, the Power Identification Model (PIM) of any type of robot manipulator has a unique formulation and contains the same dynamic parameters as the IDIM. This paper proposes to use the PIM of actuation-redundant robots for identification purposes. The identification of the inertial parameters of a planar parallel robot with actuation redundancy, the DualV, is then carried out using its PIM. Experimental results show the validity of the method.
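Both the IDIM and the PIM are linear in the dynamic parameters, so the estimation itself reduces to ordinary linear least squares, as the generic sketch below illustrates; the regressor and parameter values are synthetic, not those of the DualV.

```python
import numpy as np

def identify_parameters(W, y):
    """Linear least-squares identification for a model y = W * phi that is linear in
    the dynamic parameters phi (as both the IDIM and the PIM are). W stacks the
    regressor rows, y the measurements (joint torques, or actuator power for a PIM)."""
    phi, residuals, rank, _ = np.linalg.lstsq(W, y, rcond=None)
    return phi, rank

# Illustrative data: 200 samples, 3 unknown parameters, small measurement noise.
rng = np.random.default_rng(2)
phi_true = np.array([1.5, 0.2, 0.05])
W = rng.normal(size=(200, 3))
y = W @ phi_true + rng.normal(scale=0.01, size=200)
phi_hat, rank = identify_parameters(W, y)
print(phi_hat, rank)
```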
|
|
18:15-18:30, Paper MoDT7.6 | |
>Pairwise LIDAR Calibration Using Multi-Type 3D Geometric Features in Natural Scene |
He, Mengwen | Peking Univ. |
Zhao, Huijing | Peking Univ. |
Davoine, Franck | CNRS |
Cui, Jinshi | Peking Univ. |
Zha, Hongbin | Peking Univ. |
Keywords: Calibration and Identification, Range Sensing, Sensor Fusion
Abstract: It has become a well-known technique that 3D measurement of a large environment can be achieved by using a number of 2D LIDARs on a mobile platform. In such a system, calibration is essential for making collaborative use of different LIDAR data, while existing methods usually require modifications to the environment, such as placing calibration targets, or rely on special facilities, which is labor intensive and puts many restrictions on potential applications. This research aims at developing a calibration method for multiple 2D LIDAR sensing systems that can be conducted in a general outdoor environment using the features of a natural scene. Special focus is cast on handling the noisy sensing in a complex environment and the occlusions caused by largely different sensor viewpoints. A multi-type geometric feature based calibration algorithm is proposed, which extracts features such as points, lines, planes and quadrics from the 3D points of each LIDAR. The transformation parameters from each sensor to the frame of the moving platform are estimated by matching the multi-type features. Experiments are conducted using the data sets of an intelligent vehicle platform (POSS-V) collected during a drive through the campus of Peking University. Results of calibrating two LIDAR sensors with largely different viewpoints are presented, and the accuracy and robustness with respect to noisy feature extraction are examined intensively.
|
|
MoDT8 |
Room702 |
Control and Software Architectures |
Regular Session |
Chair: MacDonald, Bruce | Univ. of Auckland |
Co-Chair: Sanfelice, Ricardo | Univ. of Arizona |
|
17:00-17:15, Paper MoDT8.1 | |
>Asynchronous Implementation of a Distributed Average Consensus Algorithm |
Kriegleder, Maximilian | ETH Zurich |
Oung, Raymond | ETH Zurich |
D'Andrea, Raffaello | ETHZ |
Keywords: Distributed Robot Systems, Networked Robots, Sensor Networks
Abstract: This paper discusses distributed average consensus in the context of a distributed embedded system with multiple agents connected through a communication network. Adversities such as switching of network topologies, agents joining or leaving the network, and communication link creation or failure may arise in these systems. To address these difficulties, we propose an asynchronous implementation of a distributed average consensus algorithm that has the following properties: (1) unbiased average, (2) homogeneous implementation, (3) robustness to network adversities, (4) dynamic consensus, and (5) well-defined tuning parameters. We demonstrate an application of the implementation on a specific distributed embedded system, the Distributed Flight Array, where we solve two average consensus problems to estimate altitude and tilt of the vehicle from multiple distance measurements.
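The classic synchronous form of the average-consensus update that underlies this work is sketched below; the asynchronous, adversity-robust implementation of the paper is not reproduced, and the ring topology, gain, and initial values are illustrative only.

```python
import numpy as np

def consensus_step(x, neighbors, epsilon=0.2):
    """One synchronous iteration of the textbook average-consensus update
    x_i <- x_i + eps * sum_j (x_j - x_i) over each agent's neighbors."""
    x_new = x.copy()
    for i, nbrs in neighbors.items():
        x_new[i] = x[i] + epsilon * sum(x[j] - x[i] for j in nbrs)
    return x_new

# Four agents on a ring, each starting from its local measurement (e.g., a distance reading).
x = np.array([1.0, 2.0, 4.0, 5.0])
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
for _ in range(50):
    x = consensus_step(x, ring)
print(x)   # all entries approach the average, 3.0
```

The gain epsilon must stay below one over the maximum node degree (here 1/2) for the synchronous update to converge on an undirected graph.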
|
|
17:15-17:30, Paper MoDT8.2 | |
>A Methodology for Testing Mobile Autonomous Robots |
Laval, Jannik | Mines Douai |
Fabresse, Luc | Mines Douai |
Bouraqadi, Noury | Univ. de Lille Nord de France, Ec. des Mines de Douai |
Keywords: Control Architectures and Programming, Programming Environment, Software and Architecture
Abstract: Mobile autonomous robots are progressively entering the mass market. Thus, manufacturers have to perform quality assurance tests on series of robots. Therefore, tests should be repeatable and as automated as possible. Tests are also performed for the purpose of repairing robots. This calls for reusing tests already defined for quality assurance. In this paper we introduce a methodology to support the definition of repeatable, reusable, semi-automated tests. Our methodology describes the process of conducting tests in a way that maximizes safety for human operators, while avoiding damage to the tested robots.
|
|
17:30-17:45, Paper MoDT8.3 | |
>Juggling on a Bouncing Ball Apparatus Via Hybrid Control |
Tian, Xiaolu | Shandong Luneng Intelligence Tech. Co, Ltd |
Koessler, Jeffrey Horton | Univ. of Arizona, Hybrid Dynamics and Control Lab. |
Sanfelice, Ricardo | Univ. of Arizona |
Attachments: Video Attachment
Keywords: Control Architectures and Programming, Contact Modelling, Dynamics
Abstract: We present a novel solution to the problem of controlling a one-degree-of-freedom ball juggling system that explicitly models friction. A hybrid controller is designed to direct the ball to track a specific reference trajectory. The juggling system consists of a nearly-smooth vertical shaft with a piston-actuated bouncing ball. The hybrid controller is capable of tracking a periodic reference trajectory. System stability is established using hybrid systems theory, while juggling experiments are presented to demonstrate predicted system behavior. Key to these experimental results are: 1) the use of a filtered zero-crossing impact detection algorithm; 2) a Savitzky-Golay filter for smooth piston position and velocity; 3) a custom external PID controller; and 4) the estimation of the apparatus parameters via system identification methods.
|
|
17:45-18:00, Paper MoDT8.4 | |
>A Practical Approach to Generalized Hierarchical Task Specification for Indirect Force Controlled Robots |
Lutscher, Ewald | Tech. Univ. München |
Cheng, Gordon | Tech. Univ. Munich |
Attachments: Video Attachment
Keywords: Control Architectures and Programming, Redundant Robots, Compliance and Impedance Control
Abstract: The main contribution of this paper is the general formulation of force and positioning tasks at the joint and Cartesian levels for indirect force controlled robots, and their combination in a strict hierarchical way. As a secondary contribution, we provide a simple and intuitive programming paradigm, using the developed formulation. By building on the well-established indirect force control scheme, which is often already provided for commercial robots, we provide application programmers with a useful tool for specifying tasks involving positioning and force components. Different physical interaction tasks have been implemented to show the potential of the proposed method and discuss the general advantages and drawbacks.
|
|
18:00-18:15, Paper MoDT8.5 | |
>Rapid Application Development of Constrained-Based Task Modelling and Execution Using Domain Specific Languages |
Vanthienen, Dominick | Katholieke Univ. Leuven |
Klotzbuecher, Markus | Katholieke Univ. Leuven |
De Schutter, Joris | Katholieke Univ. Leuven |
De Laet, Tinne | Univ. of Leuven |
Bruyninckx, Herman | Univ. of Leuven |
Keywords: Software and Architecture, Formal Methods in Robotics and Automation
Abstract: Current state-of-the-art robot program development needs expert programmers. Moreover, most robot programs developed today are robot hardware and software specific, and therefore hardly reusable without modification. This paper realizes easier robot (re-)programming through software-framework-independent models that can be executed on different hardware and software platforms. First, the paper focuses on the formalization of the tasks to be fulfilled by a robot, more specifically constraint-based programming tasks, using a Domain Specific Language (DSL). Second, it gives a reference implementation in Lua [1]. The presented DSL makes it easy to develop applications, yet is powerful to execute. It enables automatic model verification and code generation for different hardware and software platforms, diminishing code debugging efforts. Experimental validation shows the ease of creating an application and adapting it, the reduction of the amount of hand-written code, and the debugging aid offered through meaningful errors returned by model verification.
|
|
18:15-18:30, Paper MoDT8.6 | |
>Defining Positioning in a Core Ontology for Robotics |
Carbonera, Joel Luis | UFRGS |
Fiorini, Sandro | UFRGS |
Prestes, Edson | UFRGS |
Jorge, Vitor | Univ. Federal do Rio Grande do Sul |
Abel, Mara | UFRGS |
Madhavan, Raj | UMD-CP/NIST |
Locoro, Angela | Univ. degli Studi di Genova |
Gonçalves, Paulo | Pol. Inst. of Castelo Branco |
Haidegger, Tamas | Obuda Univ. (OU) |
Barreto, Marcos | Federal Univ. of Bahia (UFBA) |
Schlenoff, Craig | NIST |
Keywords: Localization, Industrial Robots, Service Robots
Abstract: Unambiguous definition of spatial position and orientation has crucial importance for robotics. In this paper we propose an ontology about positioning. It is part of a more extensive core ontology being developed by the IEEE RAS Working Group on ontologies for robotics and automation. The core ontology should provide a common ground for further ontology development in the field. We give a brief overview of concepts in the core ontology and then describe an integrated approach for representing quantitative and qualitative position information.
|
|
MoDT9 |
Room608 |
Aerial Robotics II |
Regular Session |
Chair: Floreano, Dario | Ec. Pol. Federal, Lausanne |
Co-Chair: Naldi, Roberto | CASY - D.E.I.S. - Univ. di Bologna |
|
17:00-17:15, Paper MoDT9.1 | |
>Design and Control of a Spherical Omnidirectional Blimp |
Burri, Matthias | ETH Zurich |
Gasser, Lukas | ETH Zurich |
Kaech, Miro | ETH Zurich |
Krebs, Matthias | ETH Zurich |
Laube, Simon | ETH Zurich |
Ledergerber, Anton | ETH Zurich |
Meier, Daniel | ETH Zurich |
Michaud, Randy | ETH Zurich |
Mosimann, Lukas | ETH Zurich |
Mueri, Luca Daniel | ETH Zurich |
Ruch, Claudio | ETH Zurich |
Schaffner, Andreas | ETH Zurich |
Vuilliomenet, Nicolas | ETH Zurich |
Weichart, Johannes | ETH Zurich |
Rudin, Konrad | ETH Zurich |
Leutenegger, Stefan | Swiss Federal Inst. of Tech. Zurich |
Alonso-Mora, Javier | ETH / Disney Res. Zurich |
Siegwart, Roland | ETH Zurich |
Beardsley, Paul | Disney Res. Zurich |
Keywords: Aerial Robotics, Entertainment Robotics, Robot Safety
Abstract: This paper presents Skye, a novel blimp design. Skye is a helium-filled sphere of diameter 2.7m with a strong inelastic outer hull and an impermeable elastic inner hull. Four tetrahedrally-arranged actuation units (AU) are mounted on the hull for locomotion, with each AU having a thruster which can be rotated around a radial axis through the sphere center. This design provides redundant control in the six degrees of freedom of motion, and Skye is able to move omnidirectionally and to rotate around any axis. A multi-camera module is also mounted on the hull for capture of aerial imagery or live video stream according to an ’eyeball’ concept - the camera module is not itself actuated, but the whole blimp is rotated in order to obtain a desired camera view. Skye is safe for use near people - the double hull minimizes the likelihood of rupture on an unwanted collision; the propellers are covered by grills to prevent accidental contact; and the blimp is near neutral buoyancy so that it makes only a light impact on contact and can be readily nudged away. The system is portable and deployable by a single operator - the electronics, AUs, and camera unit are mounted externally and are detachable from the hull during transport; operator control is via an intuitive touchpad interface. The motivating application is in entertainment robotics. Skye has a varied motion vocabulary such as swooping and bobbing, plus internal LEDs for visual effect. Computer vision enables interaction with an audience. Experimental results show dexterous maneuvers in indoor and outdoor environments, and non-dangerous impacts between the blimp and humans.
|
|
17:15-17:30, Paper MoDT9.2 | |
>MUWA: Multi-Field Universal Wheel for Air-Land Vehicle with Quad Variable-Pitch Propellers |
Kawasaki, Koji | The Univ. of Tokyo |
Zhao, Moju | The Univ. of Tokyo |
Okada, Kei | The Univ. of Tokyo |
Inaba, Masayuki | The Univ. of Tokyo |
Attachments: Video Attachment
Keywords: Aerial Robotics, Search and Rescue Robots, Mobile Manipulation
Abstract: This paper presents a multi-field universal vehicle that is able to operate on land, on water, and in the air. The vehicle consists of a quad-copter with variable-pitch propellers that enable the vehicle to stand on the ground at a given tilt angle, roll on the ground like a wheel, and float and move on the water, in addition to flying like a conventional quad-copter. This article clarifies the behavioral objectives, structural design, and basic control mechanism of the ring-shaped robot, and gives examples of 3D measurements.
|
|
17:30-17:45, Paper MoDT9.3 | |
>Euler Spring Collision Protection for Flying Robots |
Klaptocz, Adam | senseFly |
Briod, Adrien | Ec. Pol. Federale de Lausanne |
Daler, Ludovic | Ec. Pol. Federale de Lausanne |
Zufferey, Jean-Christophe | EPFL |
Floreano, Dario | Ec. Pol. Federal, Lausanne |
Attachments: Video Attachment
Keywords: Aerial Robotics, Mechanism Design, Robotics in Hazardous Fields
Abstract: This paper addresses the problem of adequately protecting flying robots from damage resulting from collisions that may occur when exploring constrained and cluttered environments. A method for designing protective structures to meet the specific constraints of flying systems is presented and applied to the protection of a small coaxial hovering platform. Protective structures in the form of Euler springs in a tetrahedral configuration are designed and optimised to elastically absorb the energy of an impact while simultaneously minimizing the forces acting on the robot's stiff inner frame. These protective structures are integrated into a 282 g hovering platform and shown to consistently withstand dozens of collisions undamaged.
|
|
17:45-18:00, Paper MoDT9.4 | |
>A Simulator Environment for Aerial Service Robot Prototypes |
Naldi, Roberto | CASY - D.E.I.S. - Univ. di Bologna |
Macchelli, Alessandro | Univ. of Bologna |
Mengoli, Dario | Univ. of Bologna |
Marconi, Lorenzo | Univ. of Bologna |
Keywords: Aerial Robotics, Unmanned Aerial Systems, Software and Architecture
Abstract: This paper provides an architectural description, from the software point of view, of the simulator environment developed for the AIRobots project. The scope of the project is the realization of an aerial service robotic prototype, a sort of robotic hand to be employed in inspection-by-contact tasks. The simulator is crucial both for training the human operator and as a support tool for the development and validation of low- and high-level control algorithms. The tasks that can be performed are not limited to free-flight missions, but also include cases in which the robot has to actively interact with the environment. The simulator relies on Simulink and Blender, and has been designed with a modular structure that makes software-in-the-loop and hardware-in-the-loop simulations possible by simply replacing the different control modules with the real controllers on the prototypes.
|
|
18:00-18:15, Paper MoDT9.5 | |
>Reducing Failure Rates of Robotic Systems Through Inferred Invariants Monitoring |
Jiang, Hengle | Univ. of Nebraska-Lincoln |
Elbaum, Sebastian | Univ. of Nebraska - Lincoln |
Detweiler, Carrick | Univ. of Nebraska-Lincoln |
Keywords: Aerial Robotics, Failure Detection and Recovery, Learning from Demonstration
Abstract: System monitoring can help to detect abnormalities and avoid failures. Crafting monitors for today's robotic systems, however, can be very difficult due to the systems' inherent complexity. In this work we address this challenge through an approach that automatically infers system invariants and synthesizes those invariants into monitors. The approach is novel in that it derives invariants by observing the messages passed between system nodes, and the invariant types are tailored to match the spatial, temporal, and operational attributes of robotic systems. Further, the generated monitor can be seamlessly integrated into systems built on top of publish-subscribe architectures. An application of the technique to a system consisting of an unmanned aerial vehicle (UAV) landing on a moving platform shows that it can significantly reduce the number of crashes in unexpected landing scenarios.
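A toy reduction of the idea, inferring simple per-field range invariants from observed messages and monitoring new messages against them, is sketched below in plain Python; the field names, values, and invariant template are hypothetical, not the paper's invariant types.

```python
def infer_range_invariants(traces):
    """Infer per-field range invariants (the min/max seen in training traces) from
    observed messages; a toy stand-in for richer spatial/temporal invariant templates."""
    inv = {}
    for msg in traces:
        for field, value in msg.items():
            lo, hi = inv.get(field, (value, value))
            inv[field] = (min(lo, value), max(hi, value))
    return inv

def monitor(msg, inv, slack=0.0):
    """Return the fields of a runtime message that violate their learned ranges."""
    return [f for f, v in msg.items()
            if f in inv and not (inv[f][0] - slack <= v <= inv[f][1] + slack)]

# Training messages from nominal landings, then a runtime message to check (made-up values).
training = [{"altitude": 1.2, "descent_rate": 0.3}, {"altitude": 0.8, "descent_rate": 0.4}]
inv = infer_range_invariants(training)
print(monitor({"altitude": 0.9, "descent_rate": 1.5}, inv))   # flags 'descent_rate'
```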
|
|
18:15-18:30, Paper MoDT9.6 | |
>CELLO-EM: Adaptive Sensor Models without Ground Truth |
Vega-Brown, William | Massachusetts Inst. of Tech. |
Roy, Nicholas | Massachusetts Inst. of Tech. |
Keywords: Calibration and Identification, Sensor Fusion, Aerial Robotics
Abstract: We present an algorithm for providing a dynamic model of sensor measurements. Rather than depending on a model of the vehicle state and environment to capture the distribution of possible sensor measurements, we provide an approximation that allows the sensor model to depend on the sensor measurement itself. Building on previous work, we show how the sensor model predictor can be learned from data without access to ground truth labels of the vehicle state or the true underlying distribution, and we show our approach to be a generalization of non-parametric kernel regressors. The algorithm is demonstrated in simulation and on real-world data for both laser range finder localization in a known map and monocular camera state estimation in an unknown map. The performance of our algorithm is shown to quantitatively improve estimation, both in terms of consistency and absolute accuracy.
|
|
MoDT10 |
Room609 |
Visual Navigation |
Regular Session |
Chair: Corke, Peter | QUT |
Co-Chair: Rives, Patrick | INRIA |
|
17:00-17:15, Paper MoDT10.1 | |
>A Unified Visual Graph-Based Approach to Navigation for Wheeled Mobile Robots |
Hartmann, Jan | Univ. of Lübeck |
Kluessendorff, Jan Helge | Univ. of Luebeck |
Maehle, Erik | Univ. Luebeck |
Keywords: Visual Navigation, SLAM, Wheeled Robots
Abstract: The emergence of affordable 3D cameras in recent years has led to an increased interest in camera-based navigation solutions. Yet, while there have been significant efforts in the field of visual simultaneous localization and mapping (VSLAM), a complete navigation package that could rival popular laser-based solutions is not available. In this paper, we therefore introduce visual solutions to SLAM, localization, and path planning in a unified graph-based framework, targeting mainly wheeled robots in industrial applications. Novel solutions are introduced in the fields of place recognition and loop closing as well as localization. Our algorithms are built for the Robot Operating System (ROS) and can fully replace the popular gmapping and AMCL packages.
|
|
17:15-17:30, Paper MoDT10.2 | |
> >Vision-Only Autonomous Navigation Using Topometric Maps |
Dayoub, Feras | Queensland Univ. of Tech. |
Morris, Timothy | Queensland Univ. of Tech. |
Upcroft, Ben | Queensland Univ. of Tech. |
Corke, Peter | QUT |
Attachments: Video Attachment
Keywords: Computer Vision, Navigation, Mapping
Abstract: This paper presents a mapping and navigation system for a mobile robot, which uses vision as its sole sensor modality. The system enables the robot to navigate autonomously, plan paths and avoid obstacles using a vision-based topometric map of its environment. The map consists of a globally consistent pose graph with a local 3D point cloud attached to each of its nodes. These point clouds are used for direction-independent loop closure and to dynamically generate 2D metric maps for locally optimal path planning. Using this locally semi-continuous metric space, the robot performs shortest path planning instead of following the nodes of the graph -- as is done with most other vision-only navigation approaches. The system exploits the local accuracy of visual odometry in creating local metric maps, and uses pose graph SLAM, visual appearance-based place recognition and point cloud registration to create the topometric map. The ability of the framework to sustain vision-only navigation is validated experimentally, and the system is provided as open-source software.
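The step from a node-local 3D point cloud to a 2D metric map for planning can be sketched as a simple rasterization, as below; the grid size, resolution, and obstacle height band are illustrative values, not those used by the authors.

```python
import numpy as np

# Hedged sketch: rasterize a local 3D point cloud (attached to a pose-graph
# node) into a 2D occupancy grid for local path planning. Grid size,
# resolution and height band are illustrative values, not the paper's.

def cloud_to_grid(points, resolution=0.05, size=8.0, z_band=(0.1, 1.5)):
    """points: (N, 3) array in the node frame; returns a boolean occupancy grid."""
    n = int(size / resolution)
    grid = np.zeros((n, n), dtype=bool)
    # keep only points in the obstacle height band
    mask = (points[:, 2] > z_band[0]) & (points[:, 2] < z_band[1])
    xy = points[mask, :2]
    # shift so the robot (node origin) sits at the grid centre
    idx = np.floor((xy + size / 2.0) / resolution).astype(int)
    valid = np.all((idx >= 0) & (idx < n), axis=1)
    grid[idx[valid, 1], idx[valid, 0]] = True
    return grid

cloud = np.random.uniform(-4, 4, size=(5000, 3))
print(cloud_to_grid(cloud).sum(), "occupied cells")
```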
|
|
17:30-17:45, Paper MoDT10.3 | |
>Efficient Navigation Based on the Landmark-Tree Map and the Zinf Algorithm Using an Omnidirectional Camera |
Jäger, Bastian | Tech. Univ. München (TUM) |
Mair, Elmar | German Aerospace Center (DLR) |
Brand, Christoph | German Aerospace Center (DLR) |
Stuerzl, Wolfgang | Bielefeld Univ. |
Suppa, Michael | German Aerospace Center (DLR) |
Keywords: Navigation, Mapping, Omnidirectional Vision
Abstract: Map-based navigation is a crucial task for any mobile robot. On many platforms this problem is addressed by applying Simultaneous Localization and Mapping (SLAM) based on metric grid maps. Such solutions work well on robots with adequate resources and limited workspaces. Platforms with limited payload that operate in unbounded workspaces, however, often have insufficient resources to keep a metric world representation. Nevertheless, many applications demand that the robot can autonomously navigate between different operation areas. In this work the Landmark-Tree map (LT-map), a resource-efficient topological map concept, is for the first time applied to a mobile robotic platform equipped with an omnidirectional camera. It enables the robot to efficiently adapt the acquired map online to the available memory. During map acquisition and navigation the motion is estimated by the Zinf algorithm. Both methods are based on similar concepts, which results in a mutual benefit. An efficient navigation strategy based on the LT-map allows the robot to reliably follow previously recorded paths. The presented approach is evaluated on a mobile robot in indoor and outdoor scenarios. The experiments prove its feasibility and show that pruning the map merely smooths the trajectories, which is the expected and desired behaviour.
|
|
17:45-18:00, Paper MoDT10.4 | |
>Road Recognition from a Single Image Using Prior Information |
Irie, Kiyoshi | Chiba Inst. of Tech. |
Tomono, Masahiro | Chiba Inst. of Tech. |
Keywords: Navigation, Localization, Mapping
Abstract: In this study, we present a novel road recognition method using a single image for mobile robot navigation. Vision-based road recognition in outdoor environments remains a significant challenge. Our approach exploits digital street maps, the robot position, and prior knowledge of the environment. We segment an input image into superpixels, which are grouped into various object classes such as roadway, sidewalk, curb, and wall. We formulate the classification problem as an energy minimization problem and employ graph cuts to estimate the optimal object classes in the image. Although prior information assists recognition, erroneous information can lead to false recognition. Therefore, we incorporate localization into our recognition method to correct errors in robot position. The effectiveness of our method was verified through experiments using real-world urban datasets.
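The energy-minimization formulation can be pictured with the toy sketch below, which combines an appearance cost with a map-based prior in the unary term and a Potts-style smoothness term between neighboring superpixels; the paper minimizes this kind of energy with graph cuts, whereas the sketch falls back to iterated conditional modes, and all costs and weights are made up.

```python
import numpy as np

# Illustrative sketch of the energy structure only (label costs are made up);
# the paper minimizes this kind of energy exactly with graph cuts, whereas
# here simple iterated conditional modes (ICM) stands in for the solver.

LABELS = ["roadway", "sidewalk", "curb", "wall"]

def unary(appearance_cost, prior_cost, w_prior=0.5):
    """Per-superpixel data term: appearance likelihood plus map/pose prior."""
    return appearance_cost + w_prior * prior_cost

def icm(unaries, edges, smooth=0.8, iters=10):
    """unaries: (S, L) costs; edges: list of neighbouring superpixel pairs."""
    labels = np.argmin(unaries, axis=1)
    for _ in range(iters):
        for s in range(unaries.shape[0]):
            cost = unaries[s].copy()
            for a, b in edges:
                if s in (a, b):
                    other = b if s == a else a
                    # Potts penalty for disagreeing with the neighbour's label
                    cost += smooth * (np.arange(len(LABELS)) != labels[other])
            labels[s] = np.argmin(cost)
    return labels

U = unary(np.random.rand(6, 4), np.random.rand(6, 4))
print([LABELS[l] for l in icm(U, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)])])
```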
|
|
18:00-18:15, Paper MoDT10.5 | |
> >Appearance-Based Segmentation of Indoors/outdoors Sequences of Spherical Views |
Chapoulie, Alexandre | INRIA |
Rives, Patrick | INRIA |
Filliat, David | ENSTA ParisTech |
Attachments: Video Attachment
Keywords: Visual Navigation, Mapping, Omnidirectional Vision
Abstract: Navigating in large-scale, complex and dynamic environments is a challenging task for autonomous mobile robots. Reliable representations able to capture metric, topological and semantic aspects of the scene have to be built for supporting path planning and real-time motion control algorithms. In a previous work, we addressed the metric and topological representation levels thanks to a multi-camera system onboard a human-driven car, which allows building dense visual maps of large-scale 3D environments. The map is composed of a set of locally accurate spherical panoramas related by a 6-DoF pose graph which is estimated using a direct multi-view registration technique. The work presented here is a further step toward a semantic representation of the scene. We aim at detecting the changes in the structural properties of the scene during navigation. Structural properties of the scene are estimated online using a global descriptor relying on spherical harmonics, which are particularly well fitted to capture the structural properties of spherical views. A change-point detection algorithm based on a statistical Neyman-Pearson test allows us to find optimal transitions between topological places. Results are presented and discussed for both indoor and outdoor experiments.
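A simplified picture of the change-point detection step is given below: a score compares descriptor statistics in two adjacent windows of the stream and a transition is declared when the score crosses a threshold. The statistic and threshold are generic stand-ins, not the paper's Neyman-Pearson test.

```python
import numpy as np

# Simplified illustration of online change-point detection on a stream of
# global descriptors (e.g. spherical-harmonic energy vectors). The statistic
# and threshold below are generic stand-ins, not the paper's exact test.

def change_score(window_a, window_b):
    """Distance between descriptor means, normalized by the pooled spread."""
    mu_a, mu_b = window_a.mean(axis=0), window_b.mean(axis=0)
    spread = window_a.std(axis=0).mean() + window_b.std(axis=0).mean() + 1e-9
    return np.linalg.norm(mu_a - mu_b) / spread

def detect_transitions(descriptors, half_window=10, threshold=3.0):
    """Slide two adjacent windows over the sequence and flag large jumps."""
    changes = []
    for t in range(half_window, len(descriptors) - half_window):
        a = descriptors[t - half_window:t]
        b = descriptors[t:t + half_window]
        if change_score(a, b) > threshold:
            changes.append(t)
    return changes

stream = np.vstack([np.random.randn(50, 16), 5 + np.random.randn(50, 16)])
print(detect_transitions(stream))   # indices clustered around the jump at t=50
```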
|
|
18:15-18:30, Paper MoDT10.6 | |
>Incremental Light Bundle Adjustment for Robotics Navigation |
Indelman, Vadim | Georgia Inst. of Tech. |
Melim, Andrew | Georgia Inst. of Tech. |
Dellaert, Frank | Georgia Inst. of Tech. |
Keywords: Visual Navigation, SLAM, Sensor Fusion
Abstract: This paper presents a new computationally efficient method for vision-aided navigation (VAN) in autonomous robotic applications. While many VAN approaches are capable of processing incoming visual observations, incorporating loop-closure measurements typically requires performing a bundle adjustment (BA) optimization that involves both all the past navigation states and the observed 3D points. Our approach extends the incremental light bundle adjustment (LBA) method, recently developed for structure from motion [Indelman12bmvc], to information fusion in robotics navigation and in particular to including loop-closure information. Since in many robotic applications the prime focus is on navigation rather than mapping, and as opposed to traditional BA, we algebraically eliminate the observed 3D points and do not explicitly estimate them. Computational complexity is further improved by applying incremental inference. To maintain high-rate performance over time, consecutive IMU measurements are summarized using a recently developed technique and navigation states are added to the optimization only at camera rate. If required, the observed 3D points can be reconstructed at any time based on the optimized robot poses. The proposed method is compared to BA both in terms of accuracy and computational complexity in a statistical simulation study.
|
|
MoDT11 |
Room801 |
Impedance Control |
Regular Session |
Chair: Chung, Wan Kyun | POSTECH |
Co-Chair: Kikuuwe, Ryo | Kyushu Univ. |
|
17:00-17:15, Paper MoDT11.1 | |
> >Design and Control of Anthropomorphic BIT Soft Arms for TCM Remedial Massage |
Huang, Yuancan | Beijing Inst. of Tech. |
Li, Jian | Beijing Inst. of Tech. |
Huang, Qiang | Beijing Inst. of Tech. |
Changxin, Liu | Hospital Affiliated to Beijing Univ. of Chinese Medicine |
Attachments: Video Attachment
Keywords: Compliance and Impedance Control, Medical Systems, Healthcare, and Assisted Living, Mechanism Design
Abstract: For reproducing the manipulation of TCM remedial massage while guaranteeing safety, a 4-DOF anthropomorphic BIT soft arm with integrated elastic joints is developed, and a passivity-based impedance control is used. Due to their series elasticity, the integrated joints may minimize the large forces which occur during accidental impacts and, further, may offer more accurate and stable force control and a capacity for energy storage. A human expert's fingertip force curve in the process of massage therapy is then acquired in vivo by a dedicated measurement device. Three massage techniques, pressing, kneading and plucking, are implemented by the soft arm on a torso model in vitro and on a human body in vivo, respectively. Experimental results show that the developed robotic arm can effectively imitate the TCM remedial massage techniques.
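A generic Cartesian impedance law of the kind used for compliant contact tasks is sketched below for reference; it is a textbook form, not the authors' passivity-based controller, and the Jacobian, gravity model, and gains are placeholders.

```python
import numpy as np

# Minimal Cartesian impedance law of the kind used for compliant massage
# strokes (a generic textbook form, not the authors' exact controller).
# J, gravity_torque and the gain values are assumed/hypothetical.

def impedance_torque(q, dq, x, dx, x_des, K, D, J, gravity_torque):
    """tau = J(q)^T (K (x_des - x) - D dx) + g(q)"""
    f_cmd = K @ (x_des - x) - D @ dx
    return J(q).T @ f_cmd + gravity_torque(q)

# toy 2-DoF example with a constant Jacobian and no gravity
J = lambda q: np.eye(2)
g = lambda q: np.zeros(2)
tau = impedance_torque(np.zeros(2), np.zeros(2),
                       x=np.array([0.0, 0.0]), dx=np.zeros(2),
                       x_des=np.array([0.01, 0.0]),
                       K=np.diag([500.0, 500.0]), D=np.diag([20.0, 20.0]),
                       J=J, gravity_torque=g)
print(tau)   # torque pushing gently toward the desired contact point
```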
|
|
17:15-17:30, Paper MoDT11.2 | |
>A Dynamic Active Constraints Approach for Hands-On Robotic Surgery |
Petersen, Joshua | Imperial Coll. London |
Rodriguez y Baena, Ferdinando | Imperial Coll. London, UK |
Keywords: Compliance and Impedance Control, Medical Robots and Systems, Human-Robot Interaction
Abstract: Toward the goal of developing a hands-on robotic surgery control strategy which simultaneously utilizes the various strengths of both the surgeon and the robot, we present a dynamic active constraint approach tailored for hands-on surgery. Forbidden-region active constraints are used to prevent motion into areas which have been deemed dangerous by the surgeon, helping to overcome some of the disadvantages of fully active systems such as loss of tactile feedback, limited workspace, and limited field of view. The computer graphics technique of metaballs is used to represent point cloud data from an imaging system with an analytical, differentiable surface, and a dynamics-based controller is proposed which constrains the robot to lie on the zero set of the generated time-varying implicit function, for which the motion is either known or unknown. This controller has been incorporated into a recursive null-space approach to allow for unimpeded motion along the surface and for further extension to joint optimization in the future. This methodology is demonstrated in simulation and on a lightweight, seven-degree-of-freedom serial manipulator.
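The metaball idea can be sketched as follows: the point cloud is summed into a smooth, differentiable implicit function, and a restoring force pushes the tool back toward the zero level set when it enters the forbidden region. The kernel width, iso-level, and gain are illustrative, and this is not the paper's dynamics-based controller.

```python
import numpy as np

# Sketch of the metaball idea: the point cloud is turned into a smooth,
# differentiable implicit function whose level set can serve as a forbidden-
# region boundary. Kernel width, iso-level and gain are illustrative only.

def metaball_field(x, centers, radius=0.02, iso=0.5):
    """phi(x) > 0 away from the protected surface, < 0 inside (sign convention here)."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return iso - np.sum(np.exp(-d2 / radius ** 2))

def constraint_force(x, centers, gain=200.0, eps=1e-4):
    """Push the tool along the numerical gradient when it crosses the level set."""
    phi = metaball_field(x, centers)
    if phi >= 0.0:
        return np.zeros(3)                      # outside the forbidden region
    grad = np.array([(metaball_field(x + dx, centers) - phi) / eps
                     for dx in np.eye(3) * eps])
    return -gain * phi * grad / (np.linalg.norm(grad) + 1e-9)

centers = np.random.normal(scale=0.01, size=(500, 3))   # dense point patch
print(constraint_force(np.array([0.0, 0.0, 0.0]), centers))
```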
|
|
17:30-17:45, Paper MoDT11.3 | |
> >Design of Nonlinear H_infty Optimal Impedance Controllers |
Kim, Min Jun | POSTECH |
Chung, Wan Kyun | POSTECH |
Attachments: Video Attachment
Keywords: Compliance and Impedance Control, Force Control, Formal Methods in Robotics and Automation
Abstract: In this paper, a nonlinear H_infty optimal design of impedance controllers is proposed based on the nonlinear robust internal-loop compensator (NRIC) framework. By simply adding a PD-type auxiliary input to the original control law, robust performance and robust stability are achieved. Nonlinear H_infty optimality is guaranteed by solving the Hamilton-Jacobi-Isaacs (HJI) equation, and disturbance input-to-state stability (ISS) is guaranteed by finding an ISS-Lyapunov function. Moreover, it is shown that the proposed method preserves the passivity of the impedance controllers. The proposed method can be applied to various types of impedance controllers in a unified way, and it is verified through simulations and experimental studies.
|
|
17:45-18:00, Paper MoDT11.4 | |
> >A Modified Impedance Control for Physical Interaction of UAVs |
Fumagalli, Matteo | Univ. of Twente |
Carloni, Raffaella | Univ. of Twente |
Attachments: Video Attachment
Keywords: Compliance and Impedance Control, Force and Tactile Sensing, Aerial Robotics
Abstract: This paper proposes a modified impedance control strategy for a generic robotic system that can interact with an unknown environment or can be moved by a human. The controller makes use of a virtual mass, coupled to the robotic system, which allows for stable interaction. The focus is mainly on unmanned aerial vehicles that are required to get into contact with the environment to perform a specific task on it and that can be shifted by humans. The control architecture is validated both in simulations, on a 1-dimensional benchmark, and in experiments on a real quadrotor flying vehicle.
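The virtual-mass coupling can be pictured as a one-dimensional admittance filter that turns the measured interaction force into a reference for the position loop, as in the sketch below; the mass, damping, and stiffness values are illustrative, not the paper's.

```python
# One-dimensional sketch of the virtual-mass idea: an admittance filter turns
# the measured contact force into a reference trajectory for the position
# controller. Mass/damping/stiffness values are illustrative, not the paper's.

class VirtualMass:
    def __init__(self, m=1.0, d=8.0, k=20.0, dt=0.002):
        self.m, self.d, self.k, self.dt = m, d, k, dt
        self.x, self.v = 0.0, 0.0          # virtual-mass state

    def step(self, f_ext, x_des=0.0):
        """Integrate m*xddot + d*xdot + k*(x - x_des) = f_ext (semi-implicit Euler)."""
        a = (f_ext - self.d * self.v - self.k * (self.x - x_des)) / self.m
        self.v += a * self.dt
        self.x += self.v * self.dt
        return self.x                       # reference fed to the vehicle position loop

vm = VirtualMass()
for _ in range(1000):                       # 2 s of a constant 2 N push
    ref = vm.step(f_ext=2.0)
print(round(ref, 3))                        # settles near 2/k = 0.1 m
```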
|
|
18:00-18:15, Paper MoDT11.5 | |
> >Teleimpedance Control of a Synergy-Driven Anthropomorphic Hand |
Ajoudani, Arash | Istituto Italiano di Tecnologia |
Godfrey, Sasha Blue | Istituto Italiano di Tecnologia |
Catalano, Manuel Giuseppe | Univ. di Pisa |
Grioli, Giorgio | Univ. di Pisa |
Tsagarakis, Nikolaos | Istituto Italiano di Tecnologia |
Bicchi, Antonio | Istituto Italiano di Tecnologia |
Attachments: Video Attachment
Keywords: Compliance and Impedance Control, Rehabilitation Robotics, Grasping
Abstract: In this paper, a novel synergy-driven teleimpedance controller for the Pisa-IIT SoftHand is presented. Towards the development of an efficient, robust, and low-cost hand prosthesis, the Pisa-IIT SoftHand is built on the motor control principle of synergies, through which the immense complexity of the hand is simplified into distinct motor patterns. As the SoftHand grasps, it follows a synergistic path with built-in flexibility to allow grasping of objects of various shapes using only a single motor. In this work, the hand grasping motion is regulated with an impedance controller which incorporates the user's postural and stiffness synergy profiles in real time. In addition, a disturbance observer is realized which estimates the grasping contact force. The estimated force is then fed back to the user via a vibration motor. Grasp robustness and transparency improvements were evaluated on two healthy subjects while grasping different objects. Implementation of the proposed teleimpedance controller led to the execution of stable grasps by controlling the grasping forces via modulation of hand compliance. In addition, utilization of the vibrotactile feedback resulted in reduced physical load on the user. While these results need to be validated with amputees, they provide evidence that a low-cost, robust hand employing hardware-based synergies is a viable alternative to traditional myoelectric prostheses.
|
|
18:15-18:30, Paper MoDT11.6 | |
>Deformation-Tracking Impedance Control in Interaction with Uncertain Environments |
Roveda, Loris | ITIA CNR |
Vicentini, Federico | Italian National Res. Council (CNR) |
Molinari Tosatti, Lorenzo | National Council of Res. |
Keywords: Compliance and Impedance Control, Contact Modelling, Manipulation and Compliant Assembly
Abstract: A deformation-tracking impedance control strategy is discussed for applications where a manipulator interacts with environments of unknown geometrical and mechanical properties, especially with stiffness comparable to the controlled robot stiffness. Building on force-tracking impedance control, the deformation-tracking strategy allows the control of a desired deformation of the target environment, requiring the on-line estimation of the environment stiffness. An Extended Kalman Filter is used to estimate the environment stiffness in the presence of measurement uncertainties and errors in the compound interaction model. The tasks presented involve full-body spatial interactions with a time-varying environment stiffness. The Extended Kalman Filter and the deformation-tracking impedance control are validated in simulation and with experiments. In particular, a cooperative assembly task is also performed with a human operator acting as a varying environment, i.e. unpredictably changing the handling arm stiffness.
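For a scalar contact model f ≈ k_env · δ, the on-line stiffness estimation step can be sketched as below; with this linear measurement model the EKF reduces to a plain Kalman filter, and all noise parameters are illustrative.

```python
# Scalar sketch of on-line environment stiffness estimation from measured
# force and deformation, f ≈ k_env * delta. With this linear model the EKF
# reduces to a plain Kalman filter; all noise values are illustrative.

class StiffnessEstimator:
    def __init__(self, k0=1000.0, P0=1e6, q=10.0, r=0.25):
        self.k, self.P = k0, P0     # stiffness estimate and its variance
        self.q, self.r = q, r       # process and force-measurement noise

    def update(self, force, delta):
        self.P += self.q                        # random-walk prediction
        H = delta                               # d(f)/d(k)
        S = H * self.P * H + self.r
        K = self.P * H / S                      # Kalman gain
        self.k += K * (force - self.k * delta)  # innovation on measured force
        self.P *= (1.0 - K * H)
        return self.k

est = StiffnessEstimator()
for delta in [0.001 * i for i in range(1, 200)]:
    k_hat = est.update(force=2500.0 * delta, delta=delta)
print(round(k_hat))   # converges toward the true 2500 N/m
```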
|
|
MoDT12 |
Room610 |
Haptic Perception and Soft Tissue Modeling |
Regular Session |
Chair: Konyo, Masashi | Tohoku Univ. |
Co-Chair: Sawada, Hideyuki | Kagawa Univ. |
|
17:00-17:15, Paper MoDT12.1 | |
> >Force-Velocity Modulation Strategies for Soft Tissue Examination |
Konstantinova, Jelizaveta | King's Coll. London |
Li, Min | King's Coll. London |
Aminzadeh, Vahid | King's Coll. London, Centre for Robotics, Strand, London, WC2R |
Dasgupta, Prokar | King's Coll. London |
Althoefer, Kaspar | Kings Coll. London |
Nanayakkara, Thrishantha | King's Coll. Univ. of London |
Attachments: Video Attachment
Keywords: Force and Tactile Sensing, Medical Robots and Systems, Haptics and Haptic Interfaces
Abstract: Advanced tactile tools in minimally invasive surgery have become a pressing need in order to reduce time and improve accuracy in localizing potential tissue abnormalities. In this regard, one of the main challenges is to be able to estimate tissue parameters in real time. In palpation, the tactile information felt at a given location is determined by the viscoelastic dynamics of the neighboring tissue. For this reason, the tissue examination behavior and the distribution of viscoelastic parameters in the tissue should be considered in conjunction. This paper investigates the salient features of palpation behavior on soft tissue that determine the effectiveness of localizing hard nodules. Experimental studies involving human participants, and validation tests using finite element simulations and a tele-manipulator, were carried out. Two distinctive force-velocity modulation strategies, depending on the properties of the target tissue, were found. Experimental results suggest that force-velocity modulations during continuous path measurements play an important role in the process of mechanical soft tissue examination. These behavioral insights, validated by detailed numerical models and robotic experiments, shed light on future designs of optimal robotic palpation.
|
|
17:15-17:30, Paper MoDT12.2 | |
>Appropriate Biomechanics and Kinematics Modeling of the Respiratory System: Human Diaphragm and Thorax |
Ladjal, Hamid | LIRIS Image department team (UMR-CNRS 5205), Univ. Be |
Shariat, Behzad | LIRIS CNRS UMR 5205, Univ. Claude Bernard Lyon 1, France |
Azencot, Joseph | LIRIS CNRS UMR 5205, Univ. Claude Bernard Lyon 1, France, |
Beuve, Michael | IPNL CNRS UMR 5822, F-69622, Univ. Claude Bernard Lyon 1, F |
Keywords: Soft-tissue Modeling, Computer-assisted diagnosis and therapy, Animation and Simulation
Abstract: Tumor motion during irradiation reduces target coverage and increases the dose to healthy tissues. Prediction of respiratory motion has the potential to substantially improve cancer radiation therapy. The respiratory motion is complex and its prediction is not a simple task, especially as breathing is controlled by the independent action of the diaphragm muscles and the thorax. The diaphragm is the principal muscle used in the process of respiration, and its modeling is essential for assessing the respiratory motion. In this context, an accurate patient-specific Finite Element (FE) based biomechanical model can be used to predict diaphragm deformation. In this paper, we have developed an FE model of the respiratory system including the diaphragm behavior and the complete thorax with its musculoskeletal structure. The model incorporates the rib kinematics extracted directly from Computed Tomography (CT) scan images. In order to demonstrate the effectiveness of our biomechanical model, a qualitative and quantitative comparison between the FE simulations and the CT scan images was performed. Our results show that a linear elastic model can accurately predict diaphragm deformations. These comparisons demonstrate the effectiveness of the proposed physically-based model. The developed computational model could be a valuable tool for predicting respiratory system deformation so that it can be controlled and monitored by external sensors during treatment.
|
|
17:30-17:45, Paper MoDT12.3 | |
> >Haptic Rendering of Interacting Dynamic Deformable Objects Simulated in Real-Time at Different Frequencies |
Dervaux, François | Lab. d'Informatique Fondamentale de Lille |
Peterlik, Igor | INRIA Lille Nord Europe |
Dequidt, Jeremie | Lab. d'Informatique Fondamentale de Lille |
Cotin, Stephane | INRIA |
Duriez, Christian | INRIA |
Attachments: Video Attachment
Keywords: Soft-tissue Modeling, Haptics and Haptic Interfaces, Contact Modelling
Abstract: The dynamic response of deformable bodies varies significantly depending on the mechanical properties of the objects: while the dynamics of a stiff and light object (e.g. a wire or needle) involves high-frequency phenomena such as vibrations, much lower frequencies are sufficient for capturing the dynamic response of an object composed of soft tissue. Yet, when simulating mechanical interactions between soft and stiff deformable models, a single time step is usually employed to compute the time integration of the dynamics of both objects. However, this can be a serious issue when haptic rendering of complex scenes composed of various bodies is considered. In this paper, we present a novel method allowing for dynamic simulation of a scene composed of colliding objects modeled at different frequencies: typically, the dynamics of soft objects are calculated at a frequency of about 50 Hz, while the dynamics of a stiff object are modeled at 1 kHz, being directly connected to the computation of the haptic force feedback. The collision response is performed at both low and high frequencies employing data structures which describe the actual constraints and are shared between the high- and low-frequency loops. During the simulation, the realistic behaviour of the objects according to the mechanical principles (such as non-interpenetration and the action-reaction principle) is guaranteed. Examples showing scenes involving different bodies in interaction are given, demonstrating the benefits of the proposed method.
|
|
17:45-18:00, Paper MoDT12.4 | |
>Tactile Actuators Using SMA Micro-Wires and the Generation of Texture Sensation from Images |
Takeda, Yuto | Kagawa Univ. |
Sawada, Hideyuki | Kagawa Univ. |
Keywords: Haptics and Haptic Interfaces, Virtual Reality and Interfaces, Human-Robot Interaction
Abstract: Humans communicate with each other by using not only verbal media but also the five senses, such as vision, audition, olfaction and tactile sensation. Various devices and systems have been introduced for supporting human communication, and most of them are based on visual and auditory media. For presenting tactile sensations, some tactile actuators have been introduced recently; however, these actuators require real objects to generate the various tactile sensations, and the input data to be associated with output tactile sensations must be prepared manually by a user. This study introduces an algorithm to automatically generate parameters from an object's image for driving tactile actuators. A tactile presentation system is constructed, and the validity of the texture sensations is verified through user experiments.
|
|
18:00-18:15, Paper MoDT12.5 | |
>Haptic Cue of Forces on Tools: Investigation of Multi-Point Cutaneous Activity on Skin Using Suction Pressure Stimuli |
Porquis, Lope Ben | Graduate School of Information Science, Tohoku Univ. |
Maemori, Daiki | Tohoku Univ. |
Nagaya, Naohisa | Tohoku Univ. |
Konyo, Masashi | Tohoku Univ. |
Tadokoro, Satoshi | Tohoku Univ. |
Keywords: Haptics and Haptic Interfaces, Perception for Grasping and Manipulation, Contact Modelling
Abstract: This paper presents initial data that could show a possible contribution of mechanoreceptor activity to the perception of forces applied on grasped objects. Here, we obtained detailed psychophysical characteristics of perceived force magnitude in multiple degrees of freedom (MDOF) using multi-point suction pressure stimuli. To obtain such data, we developed a multi-point stimulation method that can represent an MDOF perceived force on a tool. We characterized the perceived force response of human subjects to suction pressure stimuli through psychophysical experiments. Moreover, we analyzed the strain energy density (SED) on the finger pads under the applied force through finite element simulation. The results of the psychophysical experiments showed that the multi-point stimulation method is effective for evoking an MDOF perceived force on a tool. Interestingly, we found that the results of the finite element analysis agree with those of the psychophysical data. Therefore, we have verified that it is possible to use multi-point suction pressure stimulation for representing perceived force on objects held in a hand. In addition, a preliminary insight into the role of SED for perceiving force on tools is provided.
|
|
18:15-18:30, Paper MoDT12.6 | |
>Proposal of Tactile Sensor Development Based on Tissue Engineering |
Pham, Quang Trung | Nagoya Inst. of Tech. |
Hoshi, Takayuki | Nagoya Inst. of Tech. |
Tanaka, Yoshihiro | Nagoya Inst. of Tech. |
Sano, Akihito | Nagoya Inst. of Tech. |
Keywords: Biologically-Inspired Robots
Abstract: Tactile sensation, alongside vision, is one of the important senses of humans and other creatures. In humans and mammals, there are several kinds of mechanoreceptors responsible for a wide range of interactions. Meanwhile, in robotics, the corresponding devices are called tactile sensors. With the development of modern technologies, tactile sensors have become well developed, exploring various possible methods of transduction, and are used in many commercial products. But the development of tactile sensors is still modest compared to optical sensors, because of various remaining issues. By researching human mechanoreceptors, we consider a new type of tactile sensor that combines them with a mechanical engineering product. We expect this tactile sensor may provide some solutions for development due to its specific structure. In this paper, we propose the basic concept of a tactile sensor based on tissue engineering and hypotheses about various approaches.
|
|
MoDT13 |
Room802 |
Micro/Nano Robotics II |
Regular Session |
Chair: Arai, Fumihito | Nagoya Univ. |
Co-Chair: Hwang, Gilgueng | CNRS |
|
17:00-17:15, Paper MoDT13.1 | |
> >Magnetotactic Bacteria and Microjets: A Comparative Study |
Khalil, Islam S.M. | Univ. of Twente |
Magdanz, Veronika | IFW Dresden |
Sanchez, Samuel Ordonez | IFW-Dresden, Inst. for Integrative Nanosciences |
Schmidt, Oliver G. | IFW-Dresden, Inst. for Integrative Nanosciences |
Misra, Sarthak | Univ. of Twente |
Attachments: Video Attachment
Keywords: Micro/Nano Robots
Abstract: We provide a comparative study between two self-propelled microrobots, i.e., magnetotactic bacteria and microjets. This study includes characterization of their fluidic properties (linear and rotational drag coefficients) based on their morphologies and characterization of their magnetic properties using the rotating-field technique. Further, the control characteristics of our microrobots are evaluated in the transient and steady states. The average boundary frequencies of our magnetotactic bacteria and microjets are 2.2 rad/s and 25.1 rad/s, respectively. The characterized fluidic properties and boundary frequencies are used in the characterization of the magnetic properties of our microrobots. The average magnetic dipole moments of our magnetotactic bacteria and microjets are 1.4×10^-17 A.m^2 and 1.5×10^-13 A.m^2 at a magnetic field of 2 mT and linear velocities of 32 µm/s (approximately 6 body lengths per second) and 119 µm/s (approximately 2 body lengths per second), respectively. These characterized magnetic dipole moments are utilized in the realization of closed-loop control systems for the magnetotactic bacteria and microjets. Our closed-loop control system positions the magnetotactic bacteria and the microjets within the vicinity of the reference position with average diameters of 23 µm (approximately 4 body lengths) and 417 µm (approximately 8 body lengths), respectively.
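The rotating-field characterization rests on the torque balance at the boundary (step-out) frequency, m B = α ω_b, so the dipole moment follows as m = α ω_b / B. In the sketch below only the boundary frequency and field strength come from the abstract; the rotational drag coefficient is a placeholder chosen for illustration.

```python
# Torque balance at the boundary (step-out) frequency: m * B = alpha * omega_b,
# hence m = alpha * omega_b / B. The drag coefficient below is a made-up
# placeholder; only the boundary frequency and field strength come from the abstract.

def magnetic_dipole_moment(alpha, omega_b, B):
    """alpha: rotational drag coefficient [N.m.s/rad], omega_b [rad/s], B [T]."""
    return alpha * omega_b / B

m = magnetic_dipole_moment(alpha=1.3e-20, omega_b=2.2, B=2e-3)
print(f"{m:.2e} A.m^2")   # on the order of the 1.4e-17 A.m^2 reported for the bacteria
```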
|
|
17:15-17:30, Paper MoDT13.2 | |
>Using Breakdown Phenomenon As Mobile Magnetic Field Sensor in Microfluidics |
Salmon, Hugo | CNRS-LPN |
Couraud, Laurent | CNRS-LPN |
Hwang, Gilgueng | CNRS |
Keywords: Micro/Nano Robots, Micro-manipulation
Abstract: Sensing magnetization and enhancing dynamic performance are essential when studying wireless magnetic mobile robots. Sensing physical parameters in microfluidic environments has also been strongly demanded in various lab-on-a-chip applications. In this paper, we propose mobile microrobots as mobile sensors in microfluidics. We develop an original environment for high-resolution dynamic tracking and analysis in microfluidic chips. Studying robot dynamics in low Reynolds number fluid with no magnetic sensor in the chip is challenging, as the field distribution and robot magnetization are not well known. Our intended goal is to explore intrinsic magneto-fluidic sensing capacities to collect more information on the micro-system. We successfully integrate our robot into a transparent microfluidic chip for high temporal resolution analysis of dynamics. We develop an electromagnetic setup allowing complete remote control (at low field, ≲5 mT) of the rotational behaviour. We study a breakdown phenomenon up to a 1 kHz signal and develop a scalar method analyzing rotational dynamics to enhance the sensing capacity.
|
|
17:30-17:45, Paper MoDT13.3 | |
>High Throughput Mechanical Characterization of Oocyte Using Robot Integrated Microfluidic Chip |
Sakuma, Shinya | Nagoya Univ. |
Turan, Bilal | Bilkent Univ. |
Arai, Fumihito | Nagoya Univ. |
Keywords: Micro/Nano Robots, Micro-manipulation, Medical Systems, Healthcare, and Assisted Living
Abstract: This paper presents a novel measurement system for cellular mechanical properties based on a robot-integrated microfluidic chip. In order to achieve high-throughput measurement of cellular mechanical properties, we propose the robot-integrated microfluidic chip (robochip), taking advantage of both micromechanical manipulators and Lab-on-a-Chip devices. The robochip contains a magnetically driven on-chip robotic probe paired with a force sensor. The characterization system based on the robochip operates under visual feedback control, and continuous cell measurement was demonstrated. The throughput of our system was 15 to 20 seconds per oocyte. Moreover, measurement of viscoelastic properties was demonstrated as a quality evaluation of oocytes. Experimental results show that oocytes exhibit differing viscoelastic properties even within the same culture condition, and that it is important to analyze the mechanical properties of oocytes for the evaluation of their quality. From these results, we conclude that high-throughput cellular mechanical characterization was achieved, and that our robochip approach is a promising technique for cellular characterization because the chip part is disposable.
|
|
17:45-18:00, Paper MoDT13.4 | |
> >Magnetic-Based Minimum Input Motion Control of Paramagnetic Microparticles in Three-Dimensional Space |
Khalil, Islam S.M. | Univ. of Twente |
Metz, Roel M.P. | Univ. of Twente |
Reefman, Bart Antonius | Univ. of Twente |
Misra, Sarthak | Univ. of Twente |
Attachments: Video Attachment
Keywords: Micro/Nano Robots
Abstract: Magnetic drug carriers such as microrobots and paramagnetic microparticles have the potential to increase therapeutic indices by selectively targeting the diseased tissue. These magnetic microobjects can be controlled using magnetic-based manipulation systems. In this study, we analyze a minimum input motion control system designed to minimize the currents at each of the electromagnets of a magnetic system. This minimum input control system allows us to achieve point-to-point motion control of microparticles in three-dimensional space, at an average speed of 198 µm/s and a maximum root-mean-square position tracking error of 104 µm. The minimum input control system is further evaluated by comparing the 2-norm of its resulting optimal current vector to the current vector of a proportional-integral (PI) control system. This comparison shows that the minimum input control achieves an 11% decrease in the current input compared to the PI control system. However, the PI control system achieves 43% and 285% higher average speed and positioning accuracy, respectively, compared to the minimum input control. The magnetic-based minimum input control can be implemented to control magnetic microrobots while decreasing the current at each of the electromagnets.
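If A(p) denotes the actuation matrix mapping coil currents to the force on the particle at position p, the minimum-2-norm current realizing a desired force is the pseudo-inverse solution, as sketched below; the 3×6 actuation matrix here is random and purely illustrative.

```python
import numpy as np

# Sketch of minimum-input current allocation: if A(p) maps coil currents to
# the magnetic force on the particle at position p, the minimum 2-norm current
# realizing a desired force is the pseudo-inverse solution. The 3x6 actuation
# matrix below is random, purely for illustration.

def min_norm_currents(A, f_des):
    """Return argmin ||i||_2 subject to A i = f_des (Moore-Penrose pseudo-inverse)."""
    return np.linalg.pinv(A) @ f_des

A = np.random.randn(3, 6)                  # 6 electromagnets, 3D force
f_des = np.array([1e-9, 0.0, -2e-9])       # desired force [N]
i = min_norm_currents(A, f_des)
print(np.allclose(A @ i, f_des), np.linalg.norm(i))
```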
|
|
18:00-18:15, Paper MoDT13.5 | |
>Controlled Patterning of Magnetic Hydrogel Microfibers under Magnetic Tweezers |
Hu, Chengzhi | Nagoya Univ. |
Nakajima, Masahiro | Nagoya Univ. |
Yue, Tao | Nagoya Univ. |
Shen, Yajing | Nagoya Univ. |
Fukuda, Toshio | Meijo Univ. |
Arai, Fumihito | Nagoya Univ. |
Seki, Minoru | Department of Applied Chemistry and Biotechnology, Chiba Univ. |
Keywords: Micro-manipulation, Medical Robots and Systems, Force Control
Abstract: 3D tailor-made biodegradable scaffolds integrated with biological cells or molecules are of great importance for tissue engineering. This paper addresses an improved method for using magnetic tweezers to pattern and align magnetic hydrogel fibers in order to fabricate large-scale engineered cell-hydrogel constructs. Magnetic hydrogel fibers were fabricated using a microfluidic device. The fabricated hydrogel fiber is made of sodium alginate and has a diameter of 34 µm. Magnetic nanoparticles are added into the sodium alginate solution to embed magnetic material inside the fibers. The distribution of magnetic material inside the hydrogel fiber is regulated by the microfluidic device. A magnetic tweezers system based on a solenoid electromagnet is utilized to evaluate the magnetic response of the magnetic hydrogel fiber. Evaluation results show that the hydrogel fiber can be maneuvered by the proposed system with sub-micrometer positioning resolution. The cultivation results of the hydrogel fiber with C2C12 cells show the potential of the proposed method for real applications in tissue engineering.
|
|
18:15-18:30, Paper MoDT13.6 | |
>Modeling of Electrostatic Forces Induced by Chemical Surface Functionalisation for Microrobotics Applications |
Cot, Amélie | Femto-st Inst. |
Dejeu, Jérôme | Joseph Fourier Univ. (Grenoble 1) |
Lakard, Sophie | Univ. of Franche-Comté |
Rougeot, Patrick | Univ. of Franche-Comté, FEMTO-ST Inst. |
Gauthier, Michael | FEMTO-ST Inst. |
Keywords: Micro-manipulation, Nano manipilation, Micro/Nano Robots
Abstract: Non-contact microrobotics is a promising way to avoid adhesion caused by the well-known scale effects. Nowadays, several non-contact micro-robots exist. Most of them are controlled by magnetic or dielectrophoretic phenomena. To complement these, we propose a method based on electrostatic forces induced by chemical functionalisation of substrates. In this study, we present a model of this force supported by experimental results. We obtained long-range forces, measuring an interaction force of several micronewtons at an interaction distance of tens of micrometers. This paper shows the relevance of using chemical electrostatic forces for microrobotics applications.
|
|
MoVT14 |
Room101 |
Video Session |
Video Session |
Chair: Hasegawa, Yasuhisa | Univ. of Tsukuba |
|
17:00-17:06, Paper MoVT14.1 | |
> >Anticipating Human Activities for Reactive Robotic Response |
Koppula, Hema Swetha | Cornell Univ. |
Saxena, Ashutosh | Cornell Univ. |
Attachments: Video Session Attachment
Keywords: Human detection and tracking, Recognition, Human-Robot Interaction
Abstract: An important aspect of human perception is anticipation, which we use extensively in our day-to-day activities when interacting with other humans as well as with our surroundings. Anticipating which activities a human will do next (and how to do them) can enable an assistive robot to plan ahead for reactive responses in human environments. In this work, we represent each possible future using an anticipatory temporal conditional random field (ATCRF) that models the rich spatial-temporal relations through object affordances. We then consider each ATCRF as a particle and represent the distribution over the potential futures using a set of particles. In the accompanying video, we show a PR2 robot performing assistive tasks based on the anticipations generated by our proposed method.
|
|
17:06-17:12, Paper MoVT14.2 | |
> >Safe Physical Human-Robot Collaboration |
Flacco, Fabrizio | Univ. di Roma "La Sapienza" |
De Luca, Alessandro | Univ. di Roma "La Sapienza" |
|
|
17:12-17:18, Paper MoVT14.3 | |
> >Provably-Correct Robot Control with LTLMoP, OMPL and ROS |
Wong, Kai Weng | Cornell Univ. |
Finucane, Cameron | Cornell Univ. |
Kress-Gazit, Hadas | Cornell Univ. |
Attachments: Video Session Attachment
Keywords: Formal Methods in Robotics and Automation, Human-Robot Interaction, Task Planning
Abstract: This video illustrates the Linear Temporal Logic MissiOn Planning (LTLMoP) toolkit. LTLMoP is an open-source software package that transforms high-level specifications for robot behavior, captured using a structured English grammar, into a robot controller that guarantees the robot will complete its task, if the task is feasible. If the task cannot be guaranteed, LTLMoP provides feedback to the user as to what the problem is. Due to its modular nature, users can control a variety of different robots using LTLMoP, both simulated and physical, with the same specification. This video shows an example robot waiter scenario, with LTLMoP controlling both a PR2 in simulation (using Gazebo), which showcases the interface between LTLMoP and the Robot Operating System (ROS), and an Aldebaran Nao humanoid in the lab.
|
|
17:18-17:24, Paper MoVT14.4 | |
> >Virtual Reality Support for Teleoperation Using Online Grasp Planning |
Hertkorn, Katharina | German Aerospace Center (DLR) |
Roa, Maximo A. | German Aerospace Center, DLR |
Brucker, Manuel | German Aerospace Center |
Kremer, Philipp | German Aerospace Center (DLR) |
Borst, Christoph | German Aerospace Center (DLR) |
Attachments: Video Session Attachment
Keywords: Grasping, Virtual Reality and Interfaces, Telerobotics
Abstract: We present a novel shared autonomy system, which incorporates several components such as scene analysis, shared autonomy, and telepresence into a virtual reality environment. Our concept takes advantage of the different strengths of the components to improve user-friendliness and object manipulation performance in remote environments. A fast scene analysis based on the "model globally, match locally" algorithm is used for object recognition and scene updates. Reachable independent contact regions are computed online, which is particularly useful for grasping and manipulating objects using robot hands with less direct kinematic correlation with the human hand. This is achieved without the aid of precomputed grasps. Initial experiments have shown that the VR environment used with our shared autonomy approach leads to a more robust task execution when compared to direct coupling of operator and robot fingers.
|
|
17:24-17:30, Paper MoVT14.5 | |
> >Mapping Human to Robot Motion with Functional Anthropomorphism for Teleoperation and Telemanipulation with Robot Arm Hand Systems |
Liarokapis, Minas | National Tech. Univ. of Athens |
Artemiadis, Panagiotis | Arizona State Univ. |
Kyriakopoulos, Kostas | National Tech. Univ. of Athens |
Attachments: Video Session Attachment
Keywords: Telerobotics, Kinematics
Abstract: In this paper, teleoperation and telemanipulation with a robot arm (Mitsubishi PA-10) and a robot hand (DLR/HIT 2) are performed, using a human-to-robot motion mapping scheme that guarantees anthropomorphism. Two position trackers are used to capture the position and orientation of the human end-effector (wrist) and the human elbow in 3D space, and a dataglove is used to capture human hand kinematics. The inverse kinematics (IK) of the Mitsubishi PA-10 7-DoF robot arm are then solved analytically, in order for the human's and the robot artifact's end-effectors to achieve the same position and orientation in 3D space (functional constraint). Redundancy is handled in the solution space of the robot arm's IK by selecting the most anthropomorphic solution computed, with a criterion of "Functional Anthropomorphism". Human hand motion is transformed to robot hand motion using a joint-to-joint mapping methodology. Finally, in order for the user to be able to detect contact and "perceive" the forces exerted by the robot hand, a low-cost force feedback device that provides a mixture of sensory information (visual and vibrotactile) was developed.
|
|
17:30-17:36, Paper MoVT14.6 | |
> >Video Presentation of a Rock Climbing Robot |
Parness, Aaron | Nasa Jet Propulsion Lab. |
Frost, Matthew | NASA Jet Propulsion Lab. |
King, Jonathan | The Ohio State Univ. |
Thatte, Nitish | Rutgers Univ. |
Witkoe, Kevin | Univ. of Idaho |
Nevarez, Moises | Univ. of Southern California |
Garrett, Michael | Jet Propulsion Lab. |
Aghazarian, Hrand | Jet Propulsion Lab. |
Kennedy, Brett | Jet Propulsion Lab. |
Attachments: Video Session Attachment
Keywords: Climbing robots, Space Robotics and Automation, Biologically-Inspired Robots
Abstract: JPL has developed the world's first rock climbing robot. In this video we present initial climbing trials at vertical, overhanging, and fully inverted angles, and a zero-g drill designed for astronauts. One day, this technology could help explore asteroids and set up safety cables for astronauts. The climbing robot and drill also have applications to crater walls, cliff faces, and lava tubes on Mars and the Moon, and could provide mobility for Phobos/Deimos missions.
|
|
17:36-17:42, Paper MoVT14.7 | |
> >The Development of a Scalable Underactuated Gripper Based on Flexural Buckling |
Jung, Gwang-Pil | Seoul National Univ. |
Jeong, Useok | Seoul National Univ. |
Koh, Je-Sung | Seoul National Univ. |
Cho, Kyu-Jin | Seoul National Univ. Biorobotics Lab. |
Attachments: Video Session Attachment
Keywords: Gripper and Hand Design, Grasping, Biologically-Inspired Robots
Abstract: In this paper, we verify the scalability of an underactuated mechanism based on flexural buckling by applying the mechanism to multi-scale adaptive grippers. For verification, we design and fabricate two grippers of different sizes and mount them on a manipulator. The scalability of the mechanism is then shown by grasping objects ranging from small electronic parts to large wine glasses.
|
|
17:42-17:48, Paper MoVT14.8 | |
> >Adaptations of Omnidirectional Driving Gears to Practical Purposes |
Tadakuma, Kenjiro | Osaka Univ. |
Tadakuma, Riichiro | Yamagata Univ. |
Takagi, Minoru | Yamagata Univ. |
Ioka, Kyohei | Yamagata Univ. |
Matsui, Gaku | Yamagata Univ. |
Komura, Kenichi | Yamagata Univ. |
Moya, Erick | Yamagata Univ. |
Akaike, Takahiro | Yamagata Univ. |
Tsumaki, Yuichi | Yamagata Univ. |
Attachments: Video Session Attachment
Keywords: Mechanism Design, New Actuators for Robotics
Abstract: We have been developing various types of omnidirectional driving gears to adapt them to real applications such as transportation in factories and warehouses. This video shows the features of omnidirectional driving gears of various sizes and materials. The miniaturized planar omnidirectional driving gears were attached to a parallel gripper serving as the end effector of a robotic arm to enhance its maneuverability. Even in a narrow space, such as the inside of a shell, the omnidirectional gears on the parallel gripper can realize various smooth motions of the grasped object without actuating any joint of the robotic arm. This function can compensate for the limited manipulability of the robotic arm near its singularity.
|
|
17:48-17:54, Paper MoVT14.9 | |
> >Beobot 2.0: Autonomous Mobile Robot Localization and Navigation in Outdoor Pedestrian Environment |
Chang, Chin-Kai | iLab Univ. of Southern California |
Siagian, Christian | Univ. of Southern California |
Itti, Laurent | Univ. of Southern California |
Attachments: Video Session Attachment
Keywords: Navigation, Visual Navigation, Localization
Abstract: We present Beobot 2.0 [1], an autonomous mobile robot designed to operate in unconstrained urban environments. The goal of the project is to create service robots that can be deployed for various tasks that require long range travel. Over the past two years, Beobot has successfully traversed various paths across the USC campus, demonstrating its robustness in recognizing and following different types of roads, avoiding obstacles such as pedestrians and service vehicles, and finding its way to the goal. Beobot utilizes a sixteen core computing platform [2], and is equipped with sensors such as front-facing cameras, an Inertial Measurement Unit (IMU), two Laser Range Finders (LRF), and wheel encoders. Beobot represents its environment in a hierarchical way. It uses a topological map for global localization and a grid occupancy map for local navigation. By having separate and targeted maps for these tasks, the system achieves a representation that is both detailed and scalable to describe vast environments such as a university campus. The navigation system consists of two sub-tasks: road recognition and obstacle avoidance. The system recognizes the road visually, by utilizing image contour segments to detect the vanishing point, indicating the direction of the road [3]. In addition, it also tracks the road lines to estimate the lateral position of the robot. The use of segments proves to be critical as the road recognition performs robustly despite the presence of occluding pedestrians as well as shadows. As for obstacle avoidance, the robot uses a planar LRF to populate the grid occupancy map. The system then generates a rigid path to the goal using A*, and refines it using the Elastic-Band Algorithm [4]. Furthermore, the system then computes the motor commands, accounting for robot shape and velocity, using the Dynamic Window Approach [5]. To localize in the global topological map, the system models two extensively studied human visual capabilities within its Monte-Carlo Localization framework [6]. One is extracting the gist of a scene [7], a holistic statistical signature of the image, to quickly classify the robot segment location. The second is detecting and identifying the salient regions in the scene to pin-point the robot position. The localization system is responsible for informing the robot
|
|
17:54-18:00, Paper MoVT14.10 | |
> >AIRobots: Innovative Aerial Service Robots for Remote Inspection by Contact |
Huerzeler, Christoph | ETH Zürich |
Naldi, Roberto | CASY - D.E.I.S. - Univ. di Bologna |
Lippiello, Vincenzo | Univ. di Napoli Federico II |
Carloni, Raffaella | Univ. of Twente |
Nikolic, Janosch | ETH Zürich |
Alexis, Kostas | ETH Zürich - Eidgenössische Tech. Hochschule Zürich |
Marconi, Lorenzo | Univ. of Bologna |
Siegwart, Roland | ETH Zurich |
Attachments: Video Session Attachment
Keywords: Aerial Robotics, Unmanned Aerial Systems, Unmanned Aerial Vehicles
Abstract: This video presents experiments conducted within the final review meeting demonstration session of the AIRobots project. AIRobots started in 2010 and the final review meeting took place on 22 March 2013. The presented experiments cover a wide range of the challenges related to aerial industrial inspection. In particular, multiple test cases related to both vision-based and contact-based inspection, and more generally to physical interaction, are shown. It is highlighted that these experiments were recorded live during the project demonstration and evaluation process.
|
|
18:00-18:06, Paper MoVT14.11 | |
> >Robotic Assembly of Emergency Stop Buttons |
Stolt, Andreas | Lund Univ. |
Linderoth, Magnus | Lund Univ. |
Robertsson, Anders | LTH, Lund Univ. |
Johansson, Rolf | Lund Univ. |
Attachments: Video Session Attachment
Keywords: Manipulation and Compliant Assembly, Industrial Robots, Force and Tactile Sensing
Abstract: Industrial robots are usually position controlled, which requires high accuracy of the robot and the workcell. Some tasks, such as assembly, are difficult to achieve by only using position sensing. This work presents a framework for robotic assembly, where a standard position-based robot program is integrated with an external controller performing force-controlled skills. The framework is used to assemble emergency stop buttons that were tailored to be assembled by humans.
|
|
18:06-18:12, Paper MoVT14.12 | |
> >Meso-Scale Robot Assembly Using Shape Memory Polymer Rivet Fastener |
Kim, Ji-Suk | Seoul National Univ. |
Jung, Gwang-Pil | Seoul National Univ. |
Koh, Je-Sung | Seoul National Univ. |
Cho, Kyu-Jin | Seoul National Univ. Biorobotics Lab. |
Attachments: Video Session Attachment
Keywords: Smart Actuators, Micro/Nano Robots, Biologically-Inspired Robots
Abstract: This paper describes a novel rivet fastener made with shape memory polymer (SMP). Shape recovery and modulus change are the two main properties of SMPs that make them promising base materials for fasteners. The new type of fastener was used to join two composite parts of a meso-scale robot. The fabrication procedure includes macro molding and subsequent laser machining in order to enhance manufacturability and to allow changes of size and design on demand. Pull-off experiments demonstrated that a single rivet can endure 8 N of disengagement force. We applied this fastener to a meso-scale flea robot and verified its feasibility.
|
|
18:12-18:18, Paper MoVT14.13 | |
> >Locomotion Diversity in an Underwater Soft-Robot Inspired by the Polyclad Flatworm |
Kazama, Toshiya | Hiroshima Univ. |
Kuroiwa, Koki | Hiroshima Univ. |
Umedachi, Takuya | Tufts Univ. |
Komatsu, Yuichi | Hiroshima Univ. |
Kobayashi, Ryo | Hiroshima Univ. |
Attachments: Video Session Attachment
Keywords: Biologically-Inspired Robots
Abstract: An underwater soft robot inspired by polyclad flatworms has been developed. The oval, flat, soft body of the flatworm is represented by a rubber sheet. The sheet is actuated with three degrees of freedom to allow flapping of both lateral sides and of the body axis. Swimming patterns, such as swimming forward, hovering, and swimming backwards, were achieved by coordinated movement of the lateral side flaps and the body axis of the soft robot.
|
|
18:18-18:24, Paper MoVT14.14 | |
> >An Over-Actuated Modular Platform for Aerial Inspection and Manipulation |
Torre, Alessio | Univ. of Bologna |
Naldi, Roberto | CASY - D.E.I.S. - Univ. di Bologna |
Riccò, Alessio | Univ. of Bologna |
Mengoli, Dario | Univ. of Bologna |
Marconi, Lorenzo | Univ. of Bologna |
Attachments: Video Session Attachment
Keywords: Aerial Robotics, Unmanned Aerial Vehicles, Unmanned Aerial Systems
Abstract: This video shows an innovative over-actuated aerial vehicle specifically designed for tasks requiring high maneuverability, such as aerial inspection of infrastructure and aerial manipulation. The main feature of the system is that the redundancy of actuators allows maneuvers otherwise impossible for other aerial systems such as helicopters or quadrotors. The experiments proposed in the video demonstrate how this improved maneuverability can be exploited both during free-flight operations and when physical interaction with the environment is required.
|