Chesi, G., Vicino, A. (2004). Visual servoing for large camera displacements. IEEE Transactions on Robotics and Automation, 20(4), 724-735. doi:10.1109/TRO.2004.829465
Visual servoing for large camera displacements
Chesi, Graziano; Vicino, Antonio
2004-01-01
Abstract
The first aim of any visual-servoing strategy is to prevent the features from leaving the camera field of view and the desired location from not being reached. However, avoiding both of these system failures turns out to be very difficult, especially when the initial and desired locations are distant. Moreover, the methods that succeed in the presence of large camera displacements often produce a long translational trajectory that may not be allowed by the robot workspace and/or joint limits. In this paper, a new strategy for dealing with such problems is proposed, which consists of generating circular-like trajectories that may satisfy the task requirements more naturally than other solutions. Knowledge of a geometrical model of the object or of the point depths is not required. It is shown that system failures are avoided for a calibrated camera. Moreover, necessary and sufficient conditions are provided for establishing tolerable errors on the estimates of the intrinsic and extrinsic parameters, in order to guarantee robust satisfaction of the field-of-view constraint and robust local asymptotic stability. Several simulation results show that the translational trajectories obtained in the presence of large displacements are significantly shorter than those produced by existing methods, for both correct and poor camera calibration. Very satisfactory results are also achieved in the presence of small displacements.
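For background only, the sketch below illustrates the classical image-based visual-servoing law v = -λ L⁺(s - s*), which is commonly used as a baseline in this area. It is not the circular-trajectory strategy proposed in the paper; in particular, it assumes the point depths Z are known, whereas the proposed method explicitly avoids requiring them. All function names and numerical values are illustrative.

```python
import numpy as np

# Minimal sketch of the classical image-based visual-servoing (IBVS) law
# v = -lambda * pinv(L) * (s - s_des).  Generic baseline only, NOT the
# circular-trajectory strategy of the paper; it assumes known point
# depths Z, which the proposed method does not require.

def interaction_matrix(points_xy, depths):
    """Stack the 2x6 interaction matrices of normalized image points (x, y)."""
    rows = []
    for (x, y), Z in zip(points_xy, depths):
        rows.append([-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y])
        rows.append([0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x])
    return np.array(rows)

def ibvs_velocity(current_xy, desired_xy, depths, gain=0.5):
    """Camera twist (vx, vy, vz, wx, wy, wz) driving the feature error to zero."""
    error = (np.asarray(current_xy) - np.asarray(desired_xy)).reshape(-1)
    L = interaction_matrix(current_xy, depths)
    return -gain * np.linalg.pinv(L) @ error

# Example: four point features, current vs. desired normalized coordinates.
current = [(0.10, 0.05), (-0.12, 0.07), (0.08, -0.11), (-0.09, -0.06)]
desired = [(0.00, 0.00), (-0.20, 0.00), (0.00, -0.20), (-0.20, -0.20)]
print(ibvs_velocity(current, desired, depths=[1.0, 1.0, 1.0, 1.0]))
```

The pseudo-inverse makes the law applicable with any number of features; at least three non-collinear points are needed for the full six-degree-of-freedom camera twist to be constrained.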
| File | Type | License | Size | Format | Availability |
|---|---|---|---|---|---|
| Chesi_Vicino_2004.pdf | Post-print | NOT PUBLIC - Private/restricted access | 815.87 kB | Adobe PDF | Not available; request a copy |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11365/22174
Warning: the displayed data have not been validated by the university.