BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Mathematics Seminar Calendar//EN
BEGIN:VEVENT
UID:20221205T013918-29413@math.msu.edu
DTSTAMP:20221205T013918Z
SUMMARY:All aboard! A mathematical study of transit equity in Baltimore
DESCRIPTION:Speaker: Craig Gross, MSU\n\nIn 2015, the governor of Maryland canceled a light rail project through the city of Baltimore that had been planned and funded for over a decade. Instead, the money was diverted to funding highways near the city's richer, whiter suburbs. As Baltimore is home to some of the most extreme class disparity and segregation in the country, this decision significantly hurt the potential for a more equitable transit system. But by how much?\n\nThis talk will be a tour through a mathematical investigation of how the canceled light rail might have increased access to jobs across the city. In particular, we use dimension-reduction and clustering algorithms on CDC Social Vulnerability Indices to explore which parts of the city may be socioeconomically disadvantaged. We then compute job accessibility metrics to determine how the light rail would have affected these regions. We also give some considerations for converting a collection of many relevant indicators into more interpretable, manageable metrics for future transit studies.\n\nThis is joint work with Adam Lee, Kethaki Varadan, and Yangxinyu Xie.\n\nThis will be a hybrid seminar and take place in C117 Wells Hall and via Zoom at https://msu.zoom.us/j/99426648081?pwd=ZEljM3BPUXg2MjVUMVM5TnlzK2NQZz09 .
LOCATION:C117 Wells Hall
DTSTART:20220919T170000Z
DTEND:20220919T180000Z
URL:https://math.msu.edu/Seminars/TalkView.aspx?talk=29413
END:VEVENT
BEGIN:VEVENT
UID:20221205T013918-29427@math.msu.edu
DTSTAMP:20221205T013918Z
SUMMARY:Understanding dataset characteristics via diffusion on graphs
DESCRIPTION:Speaker: Remy Liu, MSU\n\nClassical graph signal processing provides powerful techniques for understanding and modifying graph signals in the spectral domain, but they come with high computational costs. More recently, diffusion on graphs has emerged as an alternative approach to modifying graph signals; it is much more computationally efficient and is easy to interpret from the spatial perspective. Here, we present two studies utilizing diffusion wavelets on a graph to filter graph signals for downstream analysis. In the first study, we aim to understand how and what is being utilized by Graph Neural Networks to achieve graph-related tasks. We do so by observing the performance difference between using the filtered graph and the original graph. We demonstrate that some image datasets, such as CIFAR and MNIST, rely on low-frequency signals; in contrast, heterophilic datasets, such as WebKB, rely more heavily on high-frequency signals. In the second study, on computational biology using gene interaction networks and gene expression data, we observe similar results, where different frequency bands perform differently in a task-specific manner. In summary, our studies demonstrate the practical use of graph diffusion to modify graph signals, leading to improved downstream prediction performance and a better understanding of the graph datasets' characteristics.\n\nThis will be a hybrid seminar and take place in C117 Wells Hall and via Zoom at https://msu.zoom.us/j/99426648081?pwd=ZEljM3BPUXg2MjVUMVM5TnlzK2NQZz09 .
LOCATION:C117 Wells Hall
DTSTART:20220926T170000Z
DTEND:20220926T180000Z
URL:https://math.msu.edu/Seminars/TalkView.aspx?talk=29427
END:VEVENT
BEGIN:VEVENT
UID:20221205T013918-30464@math.msu.edu
DTSTAMP:20221205T013918Z
SUMMARY:Efficient Modewise Measurements for Compressive Sensing or Recovery of Tensor Data
DESCRIPTION:Speaker: Cullen Haselby, MSU\n\nRecovery of sparse vectors and low-rank matrices from a small number of linear measurements is well known to be possible under various model assumptions on the measurements. The key requirement on the measurement matrices is typically the restricted isometry property, that is, approximate orthonormality when acting on the subspace to be recovered. Among the most widely used random measurement models are (a) independent subgaussian models and (b) randomized Fourier-based models, which allow for efficient computation of the measurements.\n\nFor the now-ubiquitous tensor data, direct application of the known recovery algorithms to the vectorized or matricized tensor is memory-intensive because of the huge measurement matrices that must be constructed and stored. In this talk, we will discuss two different modewise measurement schemes and related recovery algorithms. These modewise operators act on pairs or other small subsets of the tensor modes separately. They require significantly less memory than measurements acting on the vectorized tensor and, experimentally, can recover the tensor data from fewer measurements without requiring impractical storage.\n\nThis will be a hybrid seminar and take place in C117 Wells Hall and via Zoom at https://msu.zoom.us/j/99426648081?pwd=ZEljM3BPUXg2MjVUMVM5TnlzK2NQZz09 .
LOCATION:C117 Wells Hall
DTSTART:20221010T170000Z
DTEND:20221010T180000Z
URL:https://math.msu.edu/Seminars/TalkView.aspx?talk=30464
END:VEVENT
BEGIN:VEVENT
UID:20221205T013918-29428@math.msu.edu
DTSTAMP:20221205T013918Z
SUMMARY:Unique generalization properties of a dense set of classifiers based on one-dimensional random projections
DESCRIPTION:Speaker: Evzenie Coupkova, Purdue\n\nThe generalization error of a classifier is related to the complexity of the set of functions among which the classifier is chosen. We study a family of low-complexity classifiers consisting of thresholding a random one-dimensional feature. The feature is obtained by projecting the data on a random line after embedding it into a higher-dimensional space parametrized by monomials of order up to k. More specifically, the extended data is projected n times, and the best classifier among those n, based on its performance on the training data, is chosen. We show that this type of classifier is extremely flexible, as it is likely to approximate, to arbitrary precision, any continuous function on a compact set as well as any Boolean function on a compact set that splits the support into measurable subsets. In particular, given full knowledge of the class conditional densities, the error of these low-complexity classifiers would converge to the optimal (Bayes) error as k and n go to infinity. On the other hand, if only a training dataset is given, we show that the classifiers will perfectly classify all the training points as k and n go to infinity. We also bound the generalization error of our random classifiers. In general, our bounds are better than those for any classifier with VC dimension greater than O(ln n). In particular, our bounds imply that, unless the number of projections n is extremely large, there is a significant advantageous gap between the generalization error of the random projection approach and that of a linear classifier in the extended space. Asymptotically, as the number of samples approaches infinity, the gap persists for any such n. Thus, there is a potentially large gain in generalization properties from selecting parameters at random rather than by optimization.\n\nA preprint of this work can be found here: https://arxiv.org/abs/2108.06339\n\nThis will be a hybrid seminar and take place in C117 Wells Hall and via Zoom at https://msu.zoom.us/j/99426648081?pwd=ZEljM3BPUXg2MjVUMVM5TnlzK2NQZz09 .
LOCATION:C117 Wells Hall
DTSTART:20221017T170000Z
DTEND:20221017T180000Z
URL:https://math.msu.edu/Seminars/TalkView.aspx?talk=29428
END:VEVENT
BEGIN:VEVENT
UID:20221205T013918-30483@math.msu.edu
DTSTAMP:20221205T013918Z
SUMMARY:Exemplar-Based Texture Synthesis: Past and Current Approaches
DESCRIPTION:Speaker: Liping Yin, MSU\n\nIn this talk, we will give an overview of exemplar-based texture synthesis. In the first part of the talk, we will discuss a classical approach via matching statistics of wavelet coefficients and its shortcomings. In the second part, we will discuss more recent work using statistics of deep convolutional neural networks for more realistic texture synthesis.\n\nThis will be a hybrid seminar and take place in C117 Wells Hall and via Zoom at https://msu.zoom.us/j/99426648081?pwd=ZEljM3BPUXg2MjVUMVM5TnlzK2NQZz09 .
LOCATION:C117 Wells Hall
DTSTART:20221031T170000Z
DTEND:20221031T180000Z
URL:https://math.msu.edu/Seminars/TalkView.aspx?talk=30483
END:VEVENT
BEGIN:VEVENT
UID:20221205T013918-30492@math.msu.edu
DTSTAMP:20221205T013918Z
SUMMARY:Ensemble Clustering Methods in Transcriptomics Data
DESCRIPTION:Speaker: Yuta Hozumi, MSU\n\nTranscriptomics, and more specifically single-cell RNA sequencing (scRNA-seq), is an emerging field in biology used to obtain a molecular understanding of cells. Analyzing scRNA-seq data gives insight into protein and gene regulatory networks, protein expression, and diseases. In this talk, I will present ensemble clustering methods used to find clusters in scRNA-seq data, which can then be used for further analysis of differential gene expression, cell trajectory, and cell-cell communication.\n\nThis will be a hybrid seminar and take place in C117 Wells Hall and via Zoom at https://msu.zoom.us/j/99426648081?pwd=ZEljM3BPUXg2MjVUMVM5TnlzK2NQZz09 .
LOCATION:C117 Wells Hall
DTSTART:20221107T180000Z
DTEND:20221107T190000Z
URL:https://math.msu.edu/Seminars/TalkView.aspx?talk=30492
END:VEVENT
BEGIN:VEVENT
UID:20221205T013918-31497@math.msu.edu
DTSTAMP:20221205T013918Z
SUMMARY:Wavelets, the Scattering Transform, and Generalizations
DESCRIPTION:Speaker: Albert Chua, MSU\n\nIn this talk, we give an overview of some basic properties of wavelets. We then introduce the Windowed Scattering Transform and review the stability and invariance properties that make it desirable as a feature extractor. Finally, we provide a generalization of the Windowed Scattering Transform that is translation invariant and discuss other stability and invariance properties of our generalized Scattering Transform.\n\nThis will be a hybrid seminar and take place in C117 Wells Hall and via Zoom at https://msu.zoom.us/j/99426648081?pwd=ZEljM3BPUXg2MjVUMVM5TnlzK2NQZz09 .
LOCATION:C117 Wells Hall
DTSTART:20221114T180000Z
DTEND:20221114T190000Z
URL:https://math.msu.edu/Seminars/TalkView.aspx?talk=31497
END:VEVENT
BEGIN:VEVENT
UID:20221205T013918-31498@math.msu.edu
DTSTAMP:20221205T013918Z
SUMMARY:Long Range Constraints for Neural Texture Synthesis Using Sliced Wasserstein Loss
DESCRIPTION:Speaker: Liping Yin, MSU\n\nIn the past decade, exemplar-based texture synthesis algorithms have seen strong gains in performance by matching statistics of deep convolutional neural networks. However, these algorithms require regularization terms or user-added spatial tags to capture long-range constraints in images. Thus, we propose a new set of statistics for exemplar-based texture synthesis based on the Sliced Wasserstein Loss and create a multi-scale algorithm that synthesizes textures without any regularization terms or user-added spatial tags. Lastly, we study the ability of our proposed algorithm to capture long-range constraints in images and compare our results to other exemplar-based neural texture synthesis algorithms.\n\nThis will be a hybrid seminar and take place in C117 Wells Hall and via Zoom at https://msu.zoom.us/j/99426648081?pwd=ZEljM3BPUXg2MjVUMVM5TnlzK2NQZz09 .
LOCATION:C117 Wells Hall
DTSTART:20221121T180000Z
DTEND:20221121T190000Z
URL:https://math.msu.edu/Seminars/TalkView.aspx?talk=31498
END:VEVENT
BEGIN:VEVENT
UID:20221205T013918-31499@math.msu.edu
DTSTAMP:20221205T013918Z
SUMMARY:TBD
DESCRIPTION:Speaker: Rolando Ramos, MSU\n\nThis will be a hybrid seminar and take place in C117 Wells Hall and via Zoom at https://msu.zoom.us/j/99426648081?pwd=ZEljM3BPUXg2MjVUMVM5TnlzK2NQZz09 .
LOCATION:C117 Wells Hall
DTSTART:20221128T180000Z
DTEND:20221128T190000Z
URL:https://math.msu.edu/Seminars/TalkView.aspx?talk=31499
END:VEVENT
END:VCALENDAR