My Approach = Your Apparatus? Entropy-Based Topic Modeling on Multiple Domain-Specific Text Collections. (arXiv:1911.11240v1 [cs.IR])

Comparative text mining extends from genre analysis and political bias detection to the revelation of cultural and geographic differences, through to the search for prior art across patents and scientific papers. These applications use cross-collection topic modeling for the exploration, clustering, and comparison of large sets of documents, such as digital libraries. However, topic modeling…

The distribution of the $L_4$ norm of Littlewood polynomials. (arXiv:1911.11246v1 [math.NT])

Classical conjectures due to Littlewood, Erd\H{o}s and Golay concern the asymptotic growth of the $L_p$ norm of a Littlewood polynomial (having all coefficients in $\{1, -1\}$) as its degree increases, for various values of $p$. Attempts over more than fifty years to settle these conjectures have identified certain classes of Littlewood polynomials as particularly…
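For context (standard facts, not from the truncated abstract): writing $f(z)=\sum_{j=0}^{n-1} a_j z^j$ with $a_j\in\{1,-1\}$, the norm is taken on the unit circle, $\|f\|_p^p = \frac{1}{2\pi}\int_0^{2\pi}|f(e^{i\theta})|^p\,d\theta$. Parseval gives $\|f\|_2^2 = n$, and the $L_4$ norm has a closed form in the aperiodic autocorrelations $c_u = \sum_j a_j a_{j+u}$, namely $\|f\|_4^4 = n^2 + 2\sum_{u\ge 1} c_u^2$ (the quantity behind Golay's merit factor $n^2/(\|f\|_4^4 - n^2)$). A minimal numerical check of that identity:

```python
import numpy as np

def l4_norm_4th_power(a):
    """||f||_4^4 for a Littlewood polynomial with coefficient vector a in {+1,-1}^n,
    via the identity ||f||_4^4 = n^2 + 2 * sum_{u>=1} c_u^2, where c_u are the
    aperiodic autocorrelations of a."""
    n = len(a)
    c = [sum(a[j] * a[j + u] for j in range(n - u)) for u in range(1, n)]
    return n**2 + 2 * sum(cu**2 for cu in c)

def l4_norm_numeric(a, m=4096):
    """Direct check: average |f(e^{it})|^4 over equispaced points on the circle
    (exact up to rounding, since |f|^4 is a trig polynomial of degree < m)."""
    t = 2 * np.pi * np.arange(m) / m
    vals = np.polyval(a[::-1], np.exp(1j * t))  # f(z) = sum_j a[j] z^j
    return np.mean(np.abs(vals) ** 4)
```

For example, `a = [1, 1, -1]` has $c_1 = 0$, $c_2 = -1$, so $\|f\|_4^4 = 9 + 2 = 11$.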

Manifold Gradient Descent Solves Multi-Channel Sparse Blind Deconvolution Provably and Efficiently. (arXiv:1911.11167v1 [stat.ML])

Multi-channel sparse blind deconvolution, or convolutional sparse coding, refers to the problem of learning an unknown filter by observing its circulant convolutions with multiple input signals that are sparse. This problem finds numerous applications in signal processing, computer vision, and inverse problems. However, it is challenging to learn the filter efficiently due to the bilinear…
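The observation model described here can be instantiated in a few lines. Below is a sketch with illustrative sizes and a Bernoulli-Gaussian sparsity model (both my assumptions, not the paper's experimental setup); recovering the filter `a` from `Y` alone, up to shift and sign ambiguities, is the hard part that the paper's manifold gradient descent addresses:

```python
import numpy as np

rng = np.random.default_rng(0)

def circ_conv(a, x):
    """Circular (circulant) convolution of two length-n signals via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(x)))

# Synthetic instance of the data model y_i = a (*) x_i.
n, k, theta = 64, 10, 0.1          # signal length, channels, sparsity level
a = rng.standard_normal(n)
a /= np.linalg.norm(a)             # ground-truth filter, normalized to the sphere
X = rng.standard_normal((k, n)) * (rng.random((k, n)) < theta)  # sparse inputs
Y = np.stack([circ_conv(a, x) for x in X])  # one observation per channel
```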

DeepJSCC-f: Deep Joint Source-Channel Coding of Images with Feedback. (arXiv:1911.11174v1 [cs.IT])

We consider wireless transmission of images in the presence of channel output feedback. From a Shannon-theoretic perspective, feedback does not improve the asymptotic end-to-end performance, and separate source coding followed by capacity-achieving channel coding achieves the optimal performance. Although it is well known that separation is not optimal in the practical finite blocklength…
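As a generic building block (not the paper's architecture): learned joint source-channel coding schemes map an image to a vector of channel symbols, enforce an average power constraint, and pass the symbols through a noisy channel. A minimal AWGN channel sketch, with the encoder/decoder networks and the feedback link omitted:

```python
import numpy as np

def awgn_channel(z, snr_db, rng):
    """Normalize a channel-input vector to unit average power, then add white
    Gaussian noise at the given SNR (in dB). A basic component of learned
    joint source-channel coding pipelines."""
    z = z / np.sqrt(np.mean(z**2))            # enforce the power constraint
    noise_var = 10 ** (-snr_db / 10)
    return z + rng.standard_normal(z.shape) * np.sqrt(noise_var)
```

In a feedback scheme such as the one this title suggests, the receiver would return the channel output (or a function of it) to the encoder before the next transmission round.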

Structured Multi-Hashing for Model Compression. (arXiv:1911.11177v1 [cs.LG])

Despite the success of deep neural networks (DNNs), state-of-the-art models are too large to deploy on low-resource devices or common server configurations in which multiple models are held in memory. Model compression methods address this limitation by reducing the memory footprint, latency, or energy consumption of a model with minimal impact on accuracy. We focus…
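To illustrate what hashing-based compression means here, a sketch in the spirit of HashedNets, where a large virtual weight matrix is backed by a small shared parameter bucket through a fixed index map (the paper's structured multi-hashing is a more elaborate scheme than this single-hash version):

```python
import numpy as np

def hashed_weight_matrix(shape, bucket, seed=0):
    """Materialize a virtual weight matrix whose entries are drawn from a small
    shared parameter vector `bucket` via a fixed pseudo-random index map.
    Memory cost is len(bucket) real parameters plus the (regenerable) map,
    instead of rows * cols."""
    rows, cols = shape
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(bucket), size=(rows, cols))
    return bucket[idx]

# A 256x256 layer (65,536 virtual weights) backed by only 1,024 real parameters.
bucket = np.random.default_rng(1).standard_normal(1024)
W = hashed_weight_matrix((256, 256), bucket)
```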

Theory-based Causal Transfer: Integrating Instance-level Induction and Abstract-level Structure Learning. (arXiv:1911.11185v1 [cs.LG])

Learning transferable knowledge across similar but different settings is a fundamental component of generalized intelligence. In this paper, we approach the transfer learning challenge from a causal theory perspective. Our agent is endowed with two basic yet general theories for transfer learning: (i) a task shares a common abstract structure that is invariant across domains,…

A Novel Unsupervised Post-Processing Calibration Method for DNNs with Robustness to Domain Shift. (arXiv:1911.11195v1 [cs.LG])

Uncertainty estimation is critical in real-world decision-making applications, especially when distributional shift between the training and test data is prevalent. Many calibration methods have been proposed in the literature to improve the predictive uncertainty of DNNs, which are generally not well calibrated. However, none of them is specifically designed to work properly under domain…
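For contrast with the unsupervised method proposed here, the canonical post-processing baseline is temperature scaling, which divides the logits by a scalar T (normally fit on labeled held-out data, which is what makes it supervised). A minimal sketch:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerically stable
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def temperature_scale(logits, T):
    """Post-processing calibration: rescale logits by a temperature T > 1 to
    soften overconfident predictions. Argmax predictions are unchanged, only
    the confidence (probability mass) is redistributed."""
    return softmax(logits / T)
```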

RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds. (arXiv:1911.11236v1 [cs.CV])

We study the problem of efficient semantic segmentation for large-scale 3D point clouds. Due to their reliance on expensive sampling techniques or computationally heavy pre/post-processing steps, most existing approaches can only be trained on and operate over small-scale point clouds. In this paper, we introduce RandLA-Net, an efficient and lightweight neural architecture to directly infer per-point…
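A sketch of the cheap uniform random sampling that architectures like this use in place of expensive alternatives such as farthest-point sampling (the local feature aggregation that compensates for discarded points is omitted; sizes are illustrative):

```python
import numpy as np

def random_sample(points, m, rng):
    """Downsample an (n, 3) point cloud to m points by uniform random choice
    without replacement; far cheaper than farthest-point sampling, at the cost
    of possibly dropping points from sparse regions."""
    idx = rng.choice(len(points), size=m, replace=False)
    return points[idx]

cloud = np.random.default_rng(0).standard_normal((100000, 3))  # 10^5 xyz points
sub = random_sample(cloud, 4096, np.random.default_rng(1))
```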