
Sensor Data Summarization: Theory, Applications, and Systems

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (25 October 2022) | Viewed by 8568

Special Issue Editor


Prof. Dan Feldman
Guest Editor
Robotics & Big Data Lab, Computer Science Department, University of Haifa, Haifa 3498838, Israel
Interests: machine learning; big data; computer vision; robotics

Special Issue Information

Dear Colleagues,

Research on data summarization techniques such as coreset and sketch constructions is growing rapidly, not only in its original communities of theoretical computer science and mathematics, but also in modern data-intensive fields that consume huge amounts of sensor data, such as machine and deep learning. In recent years, we have also seen a growing number of papers in applied fields such as robotics, graphics, and computer vision, as well as new related theory in areas such as differential privacy, cryptography, compressed sensing, and signal processing.

Because this research is multidisciplinary, results sometimes fall between the cracks: theory-oriented readers may not appreciate experimental results, while practitioners may not be interested in, or able to follow, lengthy mathematical proofs. Often, by the time reviews arrive and the journal version is published, the results have already been improved upon and rendered obsolete.

This Special Issue is dedicated to all aspects of sensor data summarization, including new provable constructions, related approximation algorithms, applications to streaming and parallel computation, software implementations, and systems based on such techniques.

Our goal is an exciting Special Issue of interesting, high-quality results, with a reviewing process that is both professional and relatively fast.

Prof. Dan Feldman
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • coreset
  • sketch
  • streaming
  • parallel computations
  • compressed sensing
  • dimension reduction
  • sparsification
  • sampling

Published Papers (5 papers)


Research

14 pages, 592 KiB  
Article
Streaming Quantiles Algorithms with Small Space and Update Time
by Nikita Ivkin, Edo Liberty, Kevin Lang, Zohar Karnin and Vladimir Braverman
Sensors 2022, 22(24), 9612; https://0-doi-org.brum.beds.ac.uk/10.3390/s22249612 - 08 Dec 2022
Cited by 2 | Viewed by 1117
Abstract
Approximating quantiles and distributions over streaming data has been studied for roughly two decades. Recently, Karnin, Lang, and Liberty proposed the first asymptotically optimal algorithm for doing so. This manuscript complements their theoretical result by providing practical variants of their algorithm with improved constants. For a given sketch size, our techniques provably reduce the upper bound on the sketch error by a factor of two. These improvements are verified experimentally. Our modified quantile sketch also improves latency, reducing the worst-case update time from O(1/ε) down to O(log(1/ε)).
(This article belongs to the Special Issue Sensor Data Summarization: Theory, Applications, and Systems)
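The compactor hierarchy at the heart of this line of quantile sketches can be illustrated with a short, self-contained toy. The sketch below is a minimal compactor-based design in the spirit of the KLL algorithm; the class names and the parameter k are ours, and it deliberately omits the constant-factor and update-time optimizations that are the paper's actual contribution.

```python
import random

class Compactor(list):
    """A buffer of items at one level; compaction halves it, doubling item weight."""
    def compact(self):
        self.sort()
        offset = random.randint(0, 1)  # random offset keeps the rank estimate unbiased
        kept = self[offset::2]         # keep every other item
        self.clear()
        return kept                    # promoted to the next level (double weight)

class KLLSketch:
    """Toy KLL-style quantile sketch: items at level l carry weight 2**l."""
    def __init__(self, k=128):
        self.k = k
        self.compactors = [Compactor()]

    def update(self, x):
        self.compactors[0].append(x)
        level = 0
        while len(self.compactors[level]) >= self.k:
            if level + 1 == len(self.compactors):
                self.compactors.append(Compactor())
            self.compactors[level + 1].extend(self.compactors[level].compact())
            level += 1

    def rank(self, x):
        """Estimated number of stream items <= x."""
        return sum(
            (1 << level) * sum(1 for v in c if v <= x)
            for level, c in enumerate(self.compactors)
        )
```

Feeding in 10,000 items and querying the median gives a rank estimate within a small additive error while storing far fewer than 10,000 values; the paper's variants tighten exactly this error/space trade-off.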

12 pages, 2138 KiB  
Article
Efficient and Accurate Synthesis for Array Pattern Shaping
by Minseok Kang and Jaemin Baek
Sensors 2022, 22(15), 5537; https://0-doi-org.brum.beds.ac.uk/10.3390/s22155537 - 25 Jul 2022
Cited by 3 | Viewed by 1209
Abstract
Array pattern synthesis (APS) aims to create a desired array pattern that matches a prescribed mask template as closely as possible by varying the element excitations of the array. Herein, an efficient APS approach for controlling the sidelobe level is proposed. After designing the mask template to meet the prescribed sidelobe requirements and the waveform pattern, a set of element excitations is calculated through the Fourier transform of the projection of the waveform pattern onto the mask template. A desired array pattern can then be synthesized from this updated set of excitation coefficients. The proposed APS approach directly yields a mathematical formulation of the exact set of excitations without any iterative optimization process, and is particularly suited to linear antenna arrays with many elements. Thus, the proposed APS achieves substantial improvements in computational complexity, performance, and ease of implementation compared with conventional methods. Several simulation results are provided to verify the efficacy and effectiveness of the proposed method.
(This article belongs to the Special Issue Sensor Data Summarization: Theory, Applications, and Systems)
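The non-iterative step that makes this approach fast is the Fourier-transform relationship between the element excitations of a uniform linear array and samples of its array factor. The sketch below shows only that DFT/inverse-DFT pair (function names are ours, and the mask-projection step of the paper is omitted): pattern samples at N uniform points in u = sin(θ) space are the forward DFT of the N excitations, so the excitations are recovered by the inverse DFT.

```python
import cmath
import math

def pattern_samples(weights):
    """Forward DFT: array-factor samples of a uniform linear array at N
    uniformly spaced points in u = sin(theta) space."""
    N = len(weights)
    return [
        sum(weights[n] * cmath.exp(2j * math.pi * m * n / N) for n in range(N))
        for m in range(N)
    ]

def excitations_from_pattern(samples):
    """Inverse DFT: recover the N element excitations from N pattern samples."""
    N = len(samples)
    return [
        sum(samples[m] * cmath.exp(-2j * math.pi * m * n / N) for m in range(N)) / N
        for n in range(N)
    ]
```

Round-tripping a set of excitations through `pattern_samples` and back recovers them exactly (up to floating-point error), which is why no iterative optimization is needed once the shaped pattern samples are fixed.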

11 pages, 1219 KiB  
Communication
The Non-Tightness of a Convex Relaxation to Rotation Recovery
by Yuval Alfassi, Daniel Keren and Bruce Reznick
Sensors 2021, 21(21), 7358; https://0-doi-org.brum.beds.ac.uk/10.3390/s21217358 - 05 Nov 2021
Cited by 3 | Viewed by 1355
Abstract
We study the Perspective-n-Point (PnP) problem, which is fundamental in 3D vision, for the recovery of camera translation and rotation. A common solution applies polynomial sum-of-squares (SOS) relaxation techniques via semidefinite programming. Our main result is that the polynomials to be optimized can be non-negative but not SOS, and hence the resulting convex relaxation is not tight; specifically, we present an example of real-life configurations for which the convex relaxation in the Lasserre Hierarchy fails, at both the second and third levels. Beyond this theoretical contribution, the conclusion for practitioners is that this commonly used approach can fail; our experiments suggest that using higher levels of the Lasserre Hierarchy reduces the probability of failure. The methods we use are mostly drawn from polynomial optimization and convex relaxation; we also use some results from real algebraic geometry, as well as MATLAB optimization packages for PnP.
(This article belongs to the Special Issue Sensor Data Summarization: Theory, Applications, and Systems)
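The phenomenon behind this result, that a polynomial can be non-negative everywhere yet not a sum of squares, has a classic textbook witness (not taken from the paper): the Motzkin polynomial. It is non-negative on all of R² by the AM–GM inequality but provably admits no SOS decomposition, which is exactly the gap that makes SOS relaxations non-tight. A quick numerical check:

```python
def motzkin(x, y):
    """Motzkin polynomial: non-negative everywhere (AM-GM on the first
    three terms) but not a sum of squares of polynomials."""
    return x**4 * y**2 + x**2 * y**4 - 3 * x**2 * y**2 + 1

# Confirm non-negativity on a grid, and the zeros at (+-1, +-1).
grid = [i / 10 for i in range(-30, 31)]
min_on_grid = min(motzkin(x, y) for x in grid for y in grid)
```

An SOS relaxation asked to certify `motzkin >= 0` at the lowest level fails for the same structural reason the paper exhibits for PnP configurations.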

31 pages, 860 KiB  
Article
Coresets for the Average Case Error for Finite Query Sets
by Alaa Maalouf, Ibrahim Jubran, Murad Tukan and Dan Feldman
Sensors 2021, 21(19), 6689; https://0-doi-org.brum.beds.ac.uk/10.3390/s21196689 - 08 Oct 2021
Cited by 4 | Viewed by 1791
Abstract
A coreset is usually a small weighted subset of an input set of items that provably approximates their loss function for a given set of queries (models, classifiers, hypotheses); that is, the maximum (worst-case) error over all queries is bounded. To obtain smaller coresets, we suggest a natural relaxation: coresets whose average error over the given set of queries is bounded. We provide both deterministic and randomized (generic) algorithms for computing such a coreset for any finite set of queries. Unlike most corresponding coresets for the worst-case error, the size of the coreset in this work is independent of both the input size and its Vapnik–Chervonenkis (VC) dimension. The main technique is to reduce the average-case coreset problem to the vector summarization problem, where the goal is to compute a weighted subset of the n input vectors that approximates their sum. We then suggest the first algorithm for computing this weighted subset in time that is linear in the input size for n ≥ 1/ε, where ε is the approximation error, improving, e.g., both [ICML'17] and applications for principal component analysis (PCA) [NIPS'16]. Experimental results show significant and consistent improvement also in practice. Open source code is provided.
(This article belongs to the Special Issue Sensor Data Summarization: Theory, Applications, and Systems)
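The vector summarization target, a small weighted subset whose weighted sum approximates the sum of all n input vectors, can be illustrated with a generic importance-sampling baseline. This is not the paper's deterministic linear-time construction; it is a standard norm-proportional sampling sketch (function names ours) whose weighted sum is an unbiased estimator of the full sum.

```python
import math
import random

def sample_vector_summary(vectors, m):
    """Sample m indices with probability proportional to Euclidean norm,
    weighting each pick by 1/(m * p_i) so the weighted sum is an unbiased
    estimator of sum(vectors). Returns a list of (index, weight) pairs."""
    norms = [math.sqrt(sum(x * x for x in v)) for v in vectors]
    total = sum(norms)
    probs = [nrm / total for nrm in norms]
    picks = random.choices(range(len(vectors)), weights=probs, k=m)
    return [(i, 1.0 / (m * probs[i])) for i in picks]

def weighted_sum(vectors, summary):
    """Evaluate the summary: the weighted sum of the selected vectors."""
    out = [0.0] * len(vectors[0])
    for i, w in summary:
        for j, v in enumerate(vectors[i]):
            out[j] += w * v
    return out
```

Against this randomized baseline, the paper's contribution is a subset whose error is bounded deterministically, with size independent of both n and the VC dimension of the query set.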

20 pages, 1649 KiB  
Article
No Fine-Tuning, No Cry: Robust SVD for Compressing Deep Networks
by Murad Tukan, Alaa Maalouf, Matan Weksler and Dan Feldman
Sensors 2021, 21(16), 5599; https://0-doi-org.brum.beds.ac.uk/10.3390/s21165599 - 19 Aug 2021
Cited by 4 | Viewed by 2038
Abstract
A common technique for compressing a neural network is to compute, via SVD, the k-rank ℓ₂ approximation A_k of the matrix A ∈ ℝ^(n×d) that corresponds to a fully connected layer (or embedding layer). Here, d is the number of input neurons in the layer, n is the number in the next one, and A_k is stored in O((n+d)k) memory instead of O(nd). Then, a fine-tuning step is used to improve this initial compression. However, end users may not have the required computational resources, time, or budget to run this fine-tuning stage. Furthermore, the original training set may not be available. In this paper, we provide an algorithm for compressing neural networks with a similar initial compression time (to common techniques) but without the fine-tuning step. The main idea is to replace the k-rank ℓ₂ approximation with an ℓ_p approximation, for p ∈ [1, 2], which is known to be less sensitive to outliers but much harder to compute. Our main technical result is a practical and provable approximation algorithm for computing it for any p ≥ 1, based on modern techniques in computational geometry. Extensive experimental results on the GLUE benchmark for compressing the networks BERT, DistilBERT, XLNet, and RoBERTa confirm this theoretical advantage.
(This article belongs to the Special Issue Sensor Data Summarization: Theory, Applications, and Systems)
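The memory and compute accounting behind this family of methods, whether the factorization comes from ℓ₂ SVD or the paper's ℓ_p variant, is the same: store the layer matrix as two rank-k factors and apply them in sequence. The toy below (our own illustration, not the paper's algorithm) shows that a layer y = A·x with A = U·V can be evaluated as U·(V·x), storing (n+d)·k numbers instead of n·d.

```python
def matvec(M, x):
    """Multiply matrix M (a list of rows) by vector x."""
    return [sum(row[j] * x[j] for j in range(len(x))) for row in M]

def dense_layer(A, x):
    # Original layer: n*d stored weights, one O(n*d) multiply.
    return matvec(A, x)

def factored_layer(U, V, x):
    # Compressed layer: (n + d)*k stored weights, applied as U @ (V @ x).
    return matvec(U, matvec(V, x))

# A 3x3 layer of rank 2, stored as a 3x2 and a 2x3 factor.
U = [[1, 2], [3, 4], [5, 6]]
V = [[1, 0, 1], [0, 1, 1]]
A = [[sum(U[i][r] * V[r][j] for r in range(2)) for j in range(3)]
     for i in range(3)]
```

For an exactly rank-k matrix the two paths agree exactly; for a real layer, A_k (or its ℓ_p counterpart) only approximates A, and the paper's point is choosing that approximation so no fine-tuning is needed afterwards.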
