John Patrick Cunningham
Friday 9th March 2018
*****Change of Time: 1.00pm*****
Ground Floor Seminar Room
25 Howland Street, London, W1T 4JG
Structure in tensor-variate data: a trivial byproduct of simpler phenomena?
As large tensor-variate data become increasingly common, complex analysis methods for these data similarly increase in prevalence, a trend that is prominent in computational neuroscience and many other machine learning, biological, and statistical settings. Such developments hope to understand subtler and more meaningful features of the data that, ostensibly, could not be studied with simpler datasets or simpler methodologies. While promising, these advances are also perilous: novel analysis techniques do not always consider the possibility that their results are in fact an expected consequence of some already-known feature of simpler data. I will present two works that address this growing problem, the first of which uses Kronecker algebra to derive a tensor-variate maximum entropy distribution that has user-specified moments along each mode. This distribution forms the basis of a statistical hypothesis test, and I will use this test to answer two active debates in the neuroscience community over the triviality of certain observed structure in neural population data. In the second part, I will discuss how to extend this maximum entropy formulation to arbitrary constraints using deep neural network architectures in the style of implicit generative modeling, and I will use this method in a texture synthesis application.
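As a rough illustration of the kind of object the abstract describes, the sketch below samples a matrix-variate Gaussian whose vectorized covariance is a Kronecker product of a row (mode-1) and a column (mode-2) covariance, and checks the resulting mode-wise second moments empirically. This is a simplified stand-in, not the speaker's actual maximum entropy construction (which handles general tensors and user-specified constraints); all names and the toy covariances are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def kron_gaussian_sample(sigma_r, sigma_c, n_samples, rng):
    """Draw zero-mean matrix-variate Gaussian samples X = A Z B^T,
    so that cov(vec(X)) = sigma_c (Kronecker) sigma_r."""
    A = np.linalg.cholesky(sigma_r)   # factor for the row mode
    B = np.linalg.cholesky(sigma_c)   # factor for the column mode
    r, c = sigma_r.shape[0], sigma_c.shape[0]
    Z = rng.standard_normal((n_samples, r, c))
    return A @ Z @ B.T                # matmul broadcasts over the sample axis

# Toy mode-wise covariances (hypothetical values, for illustration only).
sigma_r = np.array([[2.0, 0.5],
                    [0.5, 1.0]])
sigma_c = np.array([[1.0, -0.3, 0.0],
                    [-0.3, 1.5, 0.2],
                    [0.0, 0.2, 0.8]])

X = kron_gaussian_sample(sigma_r, sigma_c, 200_000, rng)

# Empirical mode covariances: E[X X^T] = sigma_r * tr(sigma_c) and
# E[X^T X] = sigma_c * tr(sigma_r) under the Kronecker structure.
emp_r = np.einsum('nij,nkj->ik', X, X) / X.shape[0]
emp_c = np.einsum('nij,nik->jk', X, X) / X.shape[0]
```

Under the Kronecker structure, each mode's empirical covariance matches the specified one up to a trace scaling from the other mode, which is the sense in which such a model fixes "moments along each mode" while leaving all remaining structure maximally unconstrained.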
John P. Cunningham is an associate professor in the Department of Statistics at Columbia University. He received a B.A. in computer science from Dartmouth College and an M.S. and Ph.D. in electrical engineering from Stanford University, and completed postdoctoral work in the Machine Learning Group at the University of Cambridge. His research group at Columbia investigates several areas of machine learning and statistical neuroscience. http://stat.columbia.edu/~cunningham/