Contact Info

Assistant Professor
Department of Computer Science and Informatics, Indiana University
Office: LH 401E
Email: martha at
Curriculum Vitae


  • November, 2016: Our AAAI papers were accepted, one about accelerated temporal difference learning methods, and another about estimating classification performance measures for positive-unlabeled data.
  • August, 2016: Our NIPS paper was accepted, about estimating class priors for noisy positive-unlabeled learning.
  • May, 2016: We have been awarded a large group grant for Precision Health, for four years under the Grand Challenges program! I will be working towards effective representations for learning policies on high-dimensional data.
  • April, 2016: Check out our new paper, submitted to JMLR, about using alternating minimization for a large class of matrix factorizations. We are excited about these results, which allow us to obtain global solutions with a remarkably simple alternating gradient descent approach.
  • April, 2016: Our IJCAI paper was accepted, about a new LSTD algorithm.
  • March, 2016: I have been awarded a CISE CRII grant, for algorithm development in reinforcement learning. Thank you NSF!
  • Our paper on emphatic TD has been accepted to JMLR. We are really excited about this new algorithm, which enables off-policy learning with only one set of weights.
  • Junfeng Wen and I have released MATLAB code for RARMA models. The algorithm is simple and efficient, so give it a try for your time-series problem!
  • I have just received a Faculty of Science Doctoral Dissertation Award. Thank you, University of Alberta, for recognizing my thesis!

About My Research

My primary research goal is to develop techniques for adaptive autonomous agents learning on streams of data, with an applied focus on computational sustainability. To achieve this goal, my research focuses on reinforcement learning and representation learning. In particular, I care about efficient, practical algorithms that enable learning from large amounts of data.

So far, I have focused on principled (convex) optimization approaches for representation learning, which essentially encompasses unsupervised learning and parts of semi-supervised learning. I have also been working on off-policy reinforcement learning, which enables learning about many different policies in parallel from a single stream of interaction with the environment. My life goal is to make advances in representation learning for reinforcement learning, which I believe is one of the biggest scientific hurdles for AI and autonomous agents.


Students

Lei Le (PhD)
Raksha Kumaraswamy (PhD)
Tasneem Alowaisheq (PhD)
Yangchen Pan (PhD)
Matthew Schlegel (MSc)
Andrew Patterson (MSc)

About Me

I love soccer, volleyball, snowboarding, snorkelling, outdoor activities, cooking, (board) games and, especially, reading hard sci-fi.