Title: "Big Data" Triangle and Modern Data Analysis
Abstract: Data-intensive applications ranging from neuroscience and genomics to high-energy physics and proteomics routinely generate data that share a common structure: massive numbers of data points are collected from a vast collection of distributed sources and sensors at very high speed, drawn from a probability distribution p = (p_1, ..., p_k) over a large domain of size k, where p_i \geq 0 and \sum_{i=1}^{k} p_i = 1. How can we succinctly represent such data? How can we compress it efficiently to enable time- and storage-efficient computation without compromising statistical performance? While these questions are at present largely unsolved, they undoubtedly lie at the heart of many fundamental statistical learning problems. In this talk, my goal is to address these unmet needs by designing a new class of computational algorithms based on a specially designed LP polynomial-based discrete orthogonal transform.
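To make the setting concrete, here is a minimal sketch (not from the talk; the domain size, sample size, and random distribution are all illustrative assumptions) of data drawn from a discrete distribution p = (p_1, ..., p_k) over a large domain, together with the empirical frequencies that such massive samples induce:

```python
import numpy as np

# Illustrative setup: a discrete distribution p = (p_1, ..., p_k)
# over a large domain of size k, with p_i >= 0 and sum_i p_i = 1.
rng = np.random.default_rng(0)
k = 10_000                 # large domain size (assumed for illustration)
p = rng.random(k)
p /= p.sum()               # normalize so the probabilities sum to 1

n = 100_000                # number of observed data points (assumed)
samples = rng.choice(k, size=n, p=p)

# The raw data is the sample stream; the empirical frequencies p_hat
# approximate p as n grows, and compressing this object succinctly is
# the kind of question the abstract raises.
counts = np.bincount(samples, minlength=k)
p_hat = counts / n
```

Even this toy example shows the tension the abstract points to: the empirical distribution lives on a domain of size k, so a naive representation costs O(k) storage regardless of how much structure p actually has.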
Keywords and phrases: United statistical algorithms; Compressive learning; Big distributions; Data-efficient learning; LP orthogonal system.