Cappelli, I., Carli, F., Fort, A., Intravaia, M., Micheletti, F., Peruzzi, G., et al. (2023). Enhanced Visible Light Localization Based on Machine Learning and Optimized Fingerprinting in Wireless Sensor Networks. IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 72, 1-10 [10.1109/TIM.2023.3240220].
Enhanced Visible Light Localization Based on Machine Learning and Optimized Fingerprinting in Wireless Sensor Networks
Cappelli I.; Fort A.; Micheletti F.; Peruzzi G.; Vignoli V.
2023-01-01
Abstract
This article presents a robust visible light localization (VLL) technique for wireless sensor networks, with 2-D indoor positioning (IP) capabilities, based on embedded machine learning (ML) running on low-cost, low-power microcontrollers. The implemented VLL technique uses four optical sources (i.e., LEDs), modulated at different frequencies. In particular, the received signal strengths (RSSs) of the optical signals are evaluated by a microcontroller on board the sensor nodes via the fast Fourier transform (FFT). The RSSs are fed to four embedded ML regressors that estimate the target position within the workspace. The four neural networks (NNs), one for each possible triplet of LEDs, are trained by exploiting a novel technique to generate the training datasets. This method, called optimized fingerprinting (OF), allows arbitrarily large datasets to be created from only a few measurements in the field, avoiding time-consuming experimental data collection. The NNs are devised to be accurate yet lightweight, facilitating their implementation and execution on the microcontroller. Furthermore, since there are four NNs, four position estimates are obtained. This redundancy is exploited to detect and effectively manage total or partial shading of one light source, and to enhance the positioning accuracy under normal operating conditions (i.e., no obstacles) by averaging the four positions. Tests performed in a $1\times1$ m workspace show an overall mean accuracy of about 1 cm, with a standard deviation below 1 cm and a maximum error of around 3 cm.
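The two signal-processing steps described in the abstract — extracting per-LED RSS values from the FFT of the photodiode signal, and fusing the four NN position estimates with an outlier check — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sampling rate, the four modulation frequencies, and the median-distance rejection rule for shaded sources are all assumptions chosen for the example (the frequencies are picked to fall exactly on FFT bins).

```python
import numpy as np

# Assumed parameters (illustrative, not from the paper): with fs = 10240 Hz
# and N = 1024 samples, the FFT bin spacing is 10 Hz, so the hypothetical
# LED modulation frequencies below land exactly on FFT bins (no leakage).
FS = 10240.0
LED_FREQS = (800.0, 1200.0, 1600.0, 2000.0)

def extract_rss(samples, fs=FS, led_freqs=LED_FREQS):
    """Estimate the RSS of each frequency-multiplexed LED from one FFT."""
    n = len(samples)
    spectrum = np.abs(np.fft.rfft(samples)) / n          # normalized magnitude
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # One-sided amplitude at the bin closest to each modulation frequency.
    return np.array([2.0 * spectrum[np.argmin(np.abs(freqs - f))]
                     for f in led_freqs])

def fuse_positions(estimates, thresh=0.05):
    """Fuse the four NN position estimates (one per LED triplet).

    An estimate farther than `thresh` (in meters) from the per-axis median
    is treated as affected by a shaded LED and excluded; the remaining
    estimates are averaged. The rejection rule is an assumption for this
    sketch, not the paper's exact shading-detection logic.
    """
    est = np.asarray(estimates, dtype=float)             # shape (4, 2)
    dist = np.linalg.norm(est - np.median(est, axis=0), axis=1)
    ok = dist < thresh
    return est[ok].mean(axis=0), ok
```

For instance, a signal synthesized as the sum of four on-bin sinusoids with amplitudes 0.5, 0.3, 0.2, and 0.1 yields those amplitudes back as the four RSS values, and `fuse_positions` with three consistent estimates near (0.5, 0.5) plus one outlier averages only the consistent three.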
File | Type | License | Size | Format
---|---|---|---|---
Enhanced_Visible_Light_Localization_Based_on_Machine_Learning_and_Optimized_Fingerprinting_in_Wireless_Sensor_Networks.pdf (not available) | Publisher's PDF | NOT PUBLIC - Private/restricted access | 684.73 kB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11365/1232178