Soft Matter Emerging Investigator – Tristan Bereau

Tristan Bereau is an assistant professor at the Van ‘t Hoff Institute for Molecular Sciences and the Informatics Institute of the University of Amsterdam. He completed a Ph.D. in Physics at Carnegie Mellon University, Pittsburgh, PA, USA. In 2012, Tristan moved to the University of Basel as a postdoctoral researcher. From 2014 he was a group leader at the Max Planck Institute for Polymer Research, where he led an Emmy Noether group from 2016 to 2019. His work focuses on the interface between multiscale modeling and machine learning for soft matter. He can be found on Twitter @tristanbereau.

Read Tristan’s Emerging Investigator article “Free-energy landscape of polymer-crystal polymorphism” and check out all of the 2021 Soft Matter Emerging Investigator articles here.


How do you feel about Soft Matter as a place to publish research on this topic?

Soft Matter is a high-quality journal for publishing important developments in the field of soft matter. It’s a great venue for presenting scientific advances, but it also leaves room for strong technical and methodological contributions.

What aspect of your work are you most excited about at the moment and what do you find most challenging about your research?

Developments in machine learning are rapidly changing the way we approach many problems, and its capacity to transform so many scientific fields is reshaping soft matter as well. This creates exciting opportunities to shape how we integrate machine learning into soft matter. Taking full advantage of these tools, however, demands careful attention to methodological developments in computer science.

In your opinion, what are the most important questions to be asked/answered in this field of research?

In many ways, hard condensed matter has embraced machine learning more rapidly. Soft matter had a late start, though the pace of research is accelerating quickly. To catch up further, it’s important to improve how we incorporate two aspects, scale separation and entropy, into machine learning models.
