The New Frontier

The Interview · Featuring Trevor Mottl

Published on: May 8th, 2020 • Duration: 59 minutes

Trevor Mottl, managing director at Lazard Labs, speaks to Real Vision CEO Raoul Pal about how machine learning and artificial intelligence (AI) can inform and transform the investment process – from idea generation to position sizing to risk management. He tells Raoul about the AI team he runs at Lazard Asset Management, which uses machine learning to identify patterns in markets too complex for the human brain to recognize, generating alpha too obscure for human investors to reliably capture. Mottl also breaks down his three-piece framework of finance – pricing, time horizon, and liquidity – and explains how this framework has shaped his investment philosophy and informed his macro outlook. Filmed on May 5, 2020.



  • MO
    Mary O.
    9 May 2020 @ 22:51
    Wasted an hour for what, exactly? He didn't explain crap that is instructive for anything other than if you want to incorporate AI, he's your guy. Hope he paid for the advertisement.
    • PW
      Patrick W.
      30 June 2020 @ 10:53
      He's giving you an idea of what he's doing, which tells you the state of the current practice, and my takeaway from this, plus my takeaway from the Bill Tai interview, is that where we are now is what 1990 was to the internet, so I feel very hopeful and very excited. If someone wants to learn the real nuts and bolts, there are thousands of hours of freemium and paid classes on Udemy, edX, and Coursera. In fact, if you're a new visitor to Udemy, they offer their entire data science boot camp for $15 as a limited-time offer. The CFA Institute also publishes a very good survey of the current practice as part of the current quant reading on ML and Big Data, complete with decision graphs so that you're not dumped straight into the specifics. While the latter is supposedly not available for public consumption, based on what I saw on the net when I was studying for the exams, I'm sure you can find a copy. And if you truly want to get your feet wet without spending any money, there are dozens of free manuals and tutorials to get you started. In fact, after the current cryptocurrency festival, my vote would be for an AI/ML/Data Science festival.
  • KJ
    Karl J.
    11 June 2020 @ 21:05
    This guy is an idiot, the way he’s using machine learning is so 10 years ago, no wonder he works at Lazard’s and not RenTech! 😂
  • Sv
    Sid v.
    13 May 2020 @ 18:34
    Raoul, I appreciate your integrity and courage in having a guy like this on, and I appreciate you doing the interview. Very interesting, educational, and thought-provoking. For an investor, it seems we either go longer term, find a niche, surrender to indexing, or find the guy with the best models? Or just follow the trend, catch the wave, but seldom harvest the tops and bottoms.
  • JH
    Jesse H.
    8 May 2020 @ 21:24
    This guy needs to spend more time with human beings and less time with machines. This is a big misgiving of mine with the coming “AI revolution.” Have a good weekend all.
    • RT
      Rob T.
      13 May 2020 @ 02:00
      From my observation, with technology there will always be a lag between those who understand the fundamentals and those who can explain it and its value in layman's terms. There's a translation layer that takes time to build. Those close to the machine are optimizing for the machine, not the human, and those optimizing for the human are not optimizing for the machine. Someone has to be properly incentivized to build that layman's bridge.
  • RT
    Rob T.
    13 May 2020 @ 01:55
    Interesting interview... Some relevant comments: "we're early on... this is a toolbox" and that it's a "focus on signal for 1 week to 1 month period". It would be interesting to know how far back "sentiment" works as a signal. This would require headlines and maybe social media insights. The market headlines: how far back are those correlated to actual market returns? On the one hand there is the rise of the retail investor, but according to Raoul's thesis re: retirees liquidating portfolios, does that mean demand for some of those headlines will pull back, leaving you with more noise headlines given the smaller capital of millennials (save for those inheriting)? Also, you have a trend of meme stocks which likely influences some of the headline stories... has the reduction in journalism quality (has this ever existed?) with the rise of social media resulted in a better or worse signal via sentiment?
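Rob's question of "how far back sentiment works as a signal" boils down to a lagged-correlation check. Below is a minimal, stdlib-only sketch with fabricated data — the sentiment series, the 0.2 coefficient, and the chosen lags are all illustrative, not estimates from any real dataset:

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation, no third-party dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
# Fabricated daily sentiment scores (e.g. scored headlines) -- illustrative only.
sentiment = [random.gauss(0, 1) for _ in range(500)]
# Fabricated returns that embed a little of the PREVIOUS day's sentiment.
returns = [random.gauss(0, 1)]
for t in range(1, 500):
    returns.append(0.2 * sentiment[t - 1] + random.gauss(0, 1))

# How predictive is sentiment at increasing lags? Only lag 1 should stand out,
# because that is the only dependence planted in the fabricated returns.
for lag in (1, 5, 20):
    c = pearson(sentiment[:-lag], returns[lag:])
    print(f"lag {lag:>2}: correlation = {c:+.3f}")
```

On real data, the same loop over lags (and over time windows) is what would reveal how quickly a sentiment signal decays.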
  • FB
    Frank B.
    10 May 2020 @ 09:16
    Fascinating topic. I believe showing some visualizations on specific examples, like clustering, would have been helpful to make it more insightful.
    • NR
      Nelson R.
      10 May 2020 @ 21:42
      Great point 👆🏻
  • sk
    saner k.
    10 May 2020 @ 21:37
    x1.25 sounds better..
  • NG
    Nikodem G.
    10 May 2020 @ 21:08
    Trevor mentions the importance of "textual" data, such as the length in words of a certain section in a company's earnings report. However, there are approaches to analyzing text broadly described as natural language processing (NLP), which are orders of magnitude more powerful than the superficial analysis of text Trevor mentions. Vectorspace AI is a good example of a company in the NLP space. They analyze keywords in specific contexts and create relationship networks between different concepts. For example, in the context of scientific/medical literature, they can analyze tens of thousands of articles and create correlation networks between viral diseases, drug compounds, and the protein pathways they target. This way of finding hidden relationships may not only accelerate drug discovery, but also help generate alpha. There is definitely a bright future for NLP-based AI.
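The co-occurrence idea behind such relationship networks can be illustrated with a toy, stdlib-only sketch — the corpus, keyword list, and counts below are all made up for illustration, and this is not Vectorspace AI's actual method:

```python
from collections import Counter
from itertools import combinations

# Toy corpus standing in for scientific abstracts (illustrative only).
docs = [
    "remdesivir inhibits rna polymerase in coronavirus replication",
    "coronavirus spike protein binds the ace2 receptor",
    "remdesivir trial shows reduced coronavirus recovery time",
    "ace2 receptor expression and spike protein binding affinity",
]
keywords = {"remdesivir", "coronavirus", "spike", "ace2", "polymerase"}

# Count how often each pair of keywords appears in the same document --
# the raw material for an association ("relationship") network.
cooc = Counter()
for doc in docs:
    present = sorted(keywords & set(doc.split()))
    for a, b in combinations(present, 2):
        cooc[(a, b)] += 1

for pair, n in cooc.most_common(3):
    print(pair, n)  # e.g. ('coronavirus', 'remdesivir') 2
```

Real systems replace raw co-occurrence counts with embeddings and statistical association measures, but the edge weights of the network start from essentially this kind of tally.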
  • FB
    Floyd B.
    10 May 2020 @ 19:10
    Raoul, excellent interview. Please provide an update on this topic soon. This is the kind of topic I expect from Real Vision: innovative and disruptive.
  • TB
    Tobin B.
    10 May 2020 @ 16:53
    Enjoyed this, as I am looking at entering the space using my programming background.
  • TS
    Theodoros S.
    9 May 2020 @ 20:28
    Machines and algorithmic trading may destroy the market as we know it and bring equilibrium. Markets are beautiful because they are not efficient, as humans are not perfect.
    • JS
      Jim S.
      9 May 2020 @ 21:31
      IMO... from what I've read and understand, I don't think there will be enough data for an adequate training set to allow true AI to dominate the market for a long time. Those sets won't have enough data to model regime changes like the destruction of an asset class that we are currently witnessing.
  • JS
    Jim S.
    9 May 2020 @ 21:27
    Thanks for doing this, enjoyed the interview. Recommend interviewing EP Chan on this topic. He has some great books and resources that walk you through setting all this up.
    • JS
      Jim S.
      9 May 2020 @ 21:28
      Thanks for updating the iOS app!!
  • MJ
    Mike J.
    9 May 2020 @ 00:03
    I just looked at Lazard Asset Management's rates of return. They are all negative for 1yr, 2yr, 5yr, and 10yr. How do they stay in business? Am I reading this wrong?
    • SP
      Steve P.
      9 May 2020 @ 17:15
      Lazard MANAGEMENT or LABS?
  • AC
    Aaruran C.
    9 May 2020 @ 02:57
    The way he discusses modeling, it seems he is much more interested in classification tasks than regression tasks. While I agree that higher-order ML models detect patterns by reorganizing data, models like principal components in a multilinear regression setting can be equally powerful and often more interpretable than models like K-Means etc. Given his trading timeline of 1+ weeks, his PnL will be more a function of being directionally right than anything else, so a classification focus makes sense here. However, if you preprocess data well enough to extract signals that meet regression assumptions, OLS can be the best tool, especially on shorter, denser timelines. Simplicity is often the best approach when you're looking for stable models that don't need to be constantly retrained over time.
    • JH
      Jason H.
      9 May 2020 @ 09:18
      Aaruran, totally agree. Having done this stuff for a bit of time, I have found GLM models unsuitable for most contexts I apply them to. Too many breaches of the distributional, stationarity, and homoscedasticity assumptions. Not to mention they really do not appear to generalize very well. When the data is then adjusted for those components, the loss of information is often too much to proceed with. ML, and especially classification (GBMs, random forests, even dimensionality reduction algos used with support vector machines for underspecified datasets), provides a useful tool for initial analysis of a problem. This helps focus in on the important features for the next analytical steps in the process. That might explain his bias here. However, each problem is quite specific and has differing needs and model suitability, so solutions will differ based on context.
    • SP
      Steve P.
      9 May 2020 @ 17:11
      OLS regression models require a host of independent variables, else they're susceptible to the 6-point test and will likely fail at least one. Regressions work well for fixed variables – characteristics that are less abstract, though the relationship between variables may seem abstract. Also, running a regression is great for deeper statistics like R, goodness of fit, and T and Z statistics. As far as markets go, nearly everything is correlated to the SP500 lol
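As a concrete illustration of the "simple OLS on a well-preprocessed signal" point made in this thread, here is a minimal, dependency-free sketch. The signal and forward returns are synthetic, with a known slope of 0.5 planted in them, so the fit has a known right answer — nothing here reflects real market data:

```python
import random

def ols_fit(x, y):
    """Closed-form simple OLS: slope and intercept minimizing squared error."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

random.seed(1)
# Synthetic "preprocessed signal" and forward returns with a planted
# linear relationship (true slope 0.5, intercept 0) plus small noise.
signal = [random.gauss(0, 1) for _ in range(1000)]
fwd_return = [0.5 * s + random.gauss(0, 0.1) for s in signal]

slope, intercept = ols_fit(signal, fwd_return)
print(f"slope ~ {slope:.2f}, intercept ~ {intercept:.2f}")
```

When the preprocessing has already produced a signal this clean, the two-parameter model is stable and needs no retraining machinery — which is Aaruran's point about simplicity.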
  • SP
    Steve P.
    9 May 2020 @ 17:05
    Been saying it for years. Friends still don't understand the premise of my thesis that value and growth are the same thing. Economies of scale matter, but in today's day and age that's less about linear time and more about logarithmic experience. $MSFT is 40 years old, yet the stock price jumped over 5x in a 5-year period, defying the odds. It's like the argument over "short term" and "long term" investing: if I earn my net expected return in 1 second or in 364 days, does time matter? One tiny breakthrough in a "mature" company (which, if we hold company lifespan standards to, say, Wells Fargo or Goldman, which are in their hundreds, we may see Apple as in "their 20's") can develop into an industry standard that boosts performance in the short term, for the long term, and expands growth within a value mindset.
  • KG
    Kos G.
    8 May 2020 @ 14:20
    Please forgive me, I'll hijack this comment section for a practical discussion!! Does anyone here build models on their own? I am curious about your infrastructural solutions. What are your finance market APIs or data feeds? What is worth paying for, and where can you save by getting free alternatives?
    • AI
      Andras I.
      8 May 2020 @ 22:39
      I've been building models for a while, and in my experience you'll be able to find a lot of it for free, with caveats (all this assuming you're doing this programmatically):
      1. If you need the basics, Quantopian has them (market data and basic US economic data) in a very easy-to-access form, surrounded by portfolio management libraries. If your focus is backtesting, you can start there, then eventually port the whole thing to your own setup as you run out of the free processing power there.
      2. Most of the US government agencies offer some sort of free (usually Python) API, but they all vary greatly and will benefit from another layer built on top to abstract from the idiosyncrasies of the various libraries gathered from GitHub.
      3. If you can afford it, Quandl did that for you already, but I feel it could be a slippery slope of expensive subscriptions :) One good tip: they offer TradingEconomics as a package for less (or more access for the same price) than the actual TE website – if I remember correctly, around $700/year?
      4. If you're struggling to collect EM data (or anything outside the US, really), I can recommend the TE subscription; it has most of what you really need covered. Otherwise you'll probably keep hunting and writing website scrapers and interfaces forever if you're on your own, while really you want to be focusing on your model.
      Personally: I use Interactive Brokers' free API for market data, plus a bunch of hand-rolled libraries (based on FRED, BLS, etc.) for most of it, but I wrote those libraries with Quandl in mind, so I can swap them out easily once I feel the need for broader data.
    • JA
      Jesse A.
      9 May 2020 @ 03:37
      My broker, Ameritrade, has an API that gives free price data and current stats on stocks. I have been scraping Yahoo to get fundamental data, but they keep changing their website, so I keep having to redo my program, which is a pain. All the SEC data has XBRL now, so it would be cool to get set up to scrape directly from them, but it's more complicated. I just use Excel and an Access database.
    • AI
      Andras I.
      9 May 2020 @ 06:43
      I thought he meant macro data; that's much more difficult. For price and fundamentals there are even very simple-to-use Excel plug-ins with no programming needed.
    • JH
      Jason H.
      9 May 2020 @ 10:39
      Kos, it depends on the strategies you run. If you are playing with tick or intraday data, alternative data, or exotic markets, I am yet to find solutions that achieve both timeliness and affordability. APIs such as Alpha Vantage or Finnhub can be helpful if you're after intraday (inexpensive and usually reliable, but I have experienced being unable to access them due to what I suspect was high traffic). Alternative data is available on Quandl and likely in other places that I haven't yet looked (I haven't yet used any alternative data). FRED is also great, but you'll sometimes experience delays in the availability of numbers. In terms of integration, as mentioned in this thread, Quantopian has a brilliant backtesting platform. There is some stock and futures data, plus fundamental data and other pieces that are quite helpful. Personally, I have built an environment on my own machine with custom metrics and model selection procedures used prior to backtesting to address potential lookahead bias and overfitting. The Quantopian zipline Python package is open source on GitHub. You can run your backtests using that if you wish to provide your own data. I have found that this sidesteps any memory issues that may arise from slow models on the platform.
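The "another layer built on top to abstract from the idiosyncrasies" of each data provider, as suggested in this thread, can be sketched as a small interface. `DataSource`, `InMemorySource`, the `zscore_latest` helper, and the `UNRATE` numbers below are illustrative stand-ins, not a real FRED or Quandl wrapper:

```python
from abc import ABC, abstractmethod
from typing import Dict, List


class DataSource(ABC):
    """Common interface hiding the quirks of each provider's API."""

    @abstractmethod
    def series(self, code: str) -> List[float]:
        """Return a time series for a provider-specific series code."""


class InMemorySource(DataSource):
    """Stand-in for a real wrapper (FRED, Quandl, a broker API, ...)."""

    def __init__(self, data: Dict[str, List[float]]):
        self._data = data

    def series(self, code: str) -> List[float]:
        return self._data[code]


# Model code depends only on DataSource, so swapping FRED for Quandl
# (or a cached local copy) never touches the strategy itself.
def zscore_latest(src: DataSource, code: str) -> float:
    """Z-score of the latest observation against the series' own history."""
    xs = src.series(code)
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return (xs[-1] - mean) / var ** 0.5


src = InMemorySource({"UNRATE": [3.5, 3.6, 3.5, 4.4, 14.7]})
print(round(zscore_latest(src, "UNRATE"), 2))  # prints 1.99
```

With every provider wrapped behind the same `series()` call, adding or replacing a feed is a one-class change rather than a rewrite of the model code.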
  • NR
    Nelson R.
    8 May 2020 @ 22:25
    Insightful, well done Trevor/Raoul.
  • wj
    wiktor j.
    8 May 2020 @ 19:39
    Or you can just buy what central banks are buying. This sort of thing doesn't matter anymore unless you trade on macro. The Swiss central bank is very transparent.
  • MB
    Marco B.
    8 May 2020 @ 13:01
    My expertise is computer science, and finance is just a topic I enjoy. But in my view, there is still more money currently made by attaching the AI buzzword to your business plan than in real AI applications. Don't get me wrong, the long-term potential is great, and ML has some really powerful applications today. But like self-driving cars, this revolution will take a lot longer than most people think. Btw, the AI vs. ML segment of the discussion between two finance guys was really funny to me.
  • PU
    Peter U.
    8 May 2020 @ 12:22
    What happened to the CC option? Some of us need that! Thank you
  • us
    udaiveer s.
    8 May 2020 @ 07:54
    Key takeaways for me: 1) There is no one Oracle model that places all the trades for a firm (buy- or sell-side). 2) Each model is probably niched down (sector-based or company-based) and then strategy-based (trend, long-short, as mentioned). 3) Playing a long-term, 3+ month time horizon means less machine competition.
  • AT
    Amy T.
    8 May 2020 @ 05:28
    Can you please share links to some of the public data sources mentioned in this video? Thank you.