VASANT DHAR: The denser your data set becomes, the better off you are, which is why you do so much better at shorter trading frequencies. Higher-frequency trading is already machine based; humans don't stand a chance there. That's already been taken over by machines, for good reason. In longer-term investing, there's no scope for machine learning because there's not enough data. Think of Warren Buffett-style investing, with holding periods of years: you don't have enough data for that. The real sweet spot for machine learning is in the denser parts of the price space, which is intraday and daily data.
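As a rough illustration of the density argument, one can count observations per year at each trading frequency. The figures below are back-of-the-envelope assumptions (252 trading days, 6.5-hour sessions, a five-year holding period for the long-term case), not numbers from the conversation:

```python
# Illustrative only: rough observation counts per year at different
# trading frequencies, assuming ~252 trading days and 6.5-hour sessions.
frequencies = {
    "1-minute bars": 252 * int(6.5 * 60),    # one bar per minute
    "daily bars": 252,                       # one bar per trading day
    "buy-and-hold (5-year horizon)": 1 / 5,  # ~0.2 independent bets/year
}

for name, n in frequencies.items():
    print(f"{name}: ~{n:,.1f} observations per year")
```

Five orders of magnitude separate the intraday and the buy-and-hold rows, which is the gap Dhar is pointing at.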
HARI KRISHNAN: My name is Hari Krishnan. I'm a fund manager at Doherty Advisors, and I've been on this station before talking about ETFs and dangers in the VIX and the VIX markets. It's a pleasure to introduce Vasant Dhar, who is a founder of SCT Capital, one of the first machine learning hedge funds in existence, a professor at the Stern School of Business at NYU, and a director of the PhD program in the Center for Data Science, also at NYU. It would be interesting for me at least to know how you got started in machine learning, and how you got started in finance, two apparently disparate areas at least back in the day.
VASANT DHAR: Yeah. Well, great to be here, Hari. Great to be having this conversation. Strangely enough, I got into machine learning because of Nielsen, the media company. They have a household division and they were tracking lots of households and what they were purchasing, and they gave me this data and said, "Can you find some interesting patterns in it?" This is like 1990. I said, "To what end?" And they said to me, "We'd like to know how new products do, what products do well, what their selling patterns are."
I went off for a few weeks, and we had a meeting and they said, "Vasant, what'd you find?" I said, "I found lots of things, but I can't explain them. Such as, a lot of older women in the northeast shop on Thursdays." They said, "Oh, yeah, that's coupon day. What else did you find?" I was really excited because I hadn't actually told the machine to look for any such thing, but there were reasons behind the patterns that were found.
That just led to more interesting stuff in the data. I became a believer in these machine learning methods, because they seemed to be finding interesting stuff. Then fast forward three years: I was introduced to a gentleman called Kevin Parker, who'd been appointed by John Mack, who was running Morgan Stanley at the time, to run tech. Kevin was a big believer in technology, and he brought me in to implement machine learning at Morgan Stanley. I think I brought machine learning to Wall Street in the mid-90s.
The two problems we were looking at were customer data and financial market prediction. I did both of those things and gravitated towards the market prediction side. If you know anything about proprietary trading groups, they want to know everything you know, but they don't want to tell you anything they know. I proposed a simple experiment to them. I said, "Just give me all the trades you've done in the last few years and I'll tell you if you could have done better," and they said, "Don't you need to know anything about the strategy?" I was like, "Nope."
I took the trades. I did some hocus pocus, let's call it, but I literally amended the trades with market state information and I cranked this generic rule-learning algorithm I'd been working on. At that time, the tools were few and far between and you had to build your own. It came up with these patterns. I remember I went to the first trading meeting. It was the same déjà vu all over again, Kevin saying, "Vasant, what'd you find?" I said, "I found a bunch of things, but I'm not sure what they mean." "It's all right, take it from the top." I said, "Well, when the 30-day volatility is in the last quartile, your trades are three times as profitable as they are otherwise."
There was silence around the room for a little while and then they tried to digest the implications of it and started talking to each other. I was just watching this and I said, "Can someone tell me what's going on?" They said, "Not really, but we've felt that we lose a lot of money when volatility spikes. It's interesting that you're telling us that volatility actually matters." Now, the interesting thing about that incident was that I only learned the reasons for why I discovered the pattern much later. That was the first lesson, which is that when you have this data driven approach to life, you find that patterns often emerge before the reasons for them, because--
HARI KRISHNAN: Let me ask a quick question there. Which is, if, let's say, I've been trading for 20 years, am I better off bringing in a machine learning expert who's never traded, who might find some unbiased structure, as you might put it, in the data, than bringing in somebody who actually knows a lot about finance and might have certain prior expectations that might be valid but might push them in a certain direction?
VASANT DHAR: Great question. Now, remember, in the space where I got started, I didn't actually come up with the strategy. The strategy already existed, which is what you're saying: you've been trading for 20 years, you've got a strategy and you bring someone in. Now, when you bring someone in, they're going to analyze your strategy, but they're not going to actually develop it. They'll analyze it and they'll tell you if you could have improved it. That was what I really did.
It was an easier problem that I started with than, let's say, building my own strategy, which was the next step. What they said was, "Hey, that's interesting. Do you think you can get the machine to discover new strategies?" I said, "Sure, in principle, that should be possible." That's what led me down this path to where I am since that time. It was initially an improvement, an overlay on an existing strategy. I didn't need to know anything about trading, but I needed to know machine learning to do that, and I needed to apply the scientific method correctly, but I didn't have to do anything creative.
The machine did the heavy lifting for me. After I told it what I thought might actually distinguish good trades from bad trades, it could then find that for me, but I did very little. I just told it, well, consider volatility, consider trend, consider stochastics, the usual things, and it found the patterns for me. But it's a completely different ballgame when you don't have that and you have to start from scratch and get the machine to actually discover these strategies for you. That area is much more treacherous and you have to be really careful in how you do that.
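The kind of overlay analysis described here, tagging historical trades with market-state features and comparing profitability across buckets, can be sketched in a few lines. The data and the volatility effect below are entirely synthetic assumptions for illustration, not Dhar's actual results:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: 1,000 historical trades, each tagged with the
# 30-day volatility that prevailed when the trade was entered.
vol = rng.uniform(0.05, 0.40, size=1000)
# Assume (for illustration only) that trades do better in calm markets.
pnl = rng.normal(loc=0.5 - vol, scale=0.5)

# Bucket trades into volatility quartiles and compare mean PnL per bucket.
quartile = np.searchsorted(np.quantile(vol, [0.25, 0.5, 0.75]), vol)
for q in range(4):
    print(f"vol quartile {q + 1}: mean PnL {pnl[quartile == q].mean():+.3f}")
```

The pattern the rule learner reported ("trades are three times as profitable in one volatility quartile") is exactly the kind of conditional statistic a loop like this surfaces.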
HARI KRISHNAN: Got it. I know obviously, there's some famous unnamable hedge funds that do focus on hiring people who don't have experience. I presume, as you said, that they're simply trying to improve existing processes, models and trading systems instead of trying to build something from the ground up.
VASANT DHAR: Correct.
HARI KRISHNAN: Well, that's a very important point. Now, I occasionally dip into the internet and Google search this, that and the other when I have some time to kill, and I see that everyone wants to hire a machine learning graduate, PhD, expert and so on. If I were sitting on the other side of the table, which I am occasionally, I would be scratching my head saying, "How do I know what's real and what's fake?" Fake is a strong word, because there's always some level of confidence, some probability. What's your vibe? What's your take on this whole question?
VASANT DHAR: Yeah, that's the central question in machine learning: when should you trust what it's telling you? What's often overlooked about machine learning is that as a problem gets harder to predict, as it gets noisier-- I look at the world in terms of a predictability spectrum, from completely random to completely predictable, so all problems lie somewhere on it. As you move towards the randomness end of the spectrum, your models can become very unstable. What that really means is that if you change your training set, the data on which you're going to build the model, slightly, the model generated by the machine can actually change quite dramatically.
If that happens, you really shouldn't trust the machine; you should not trust that model. If you get a high variance in what the machine is generating, and we'll come back to what variance really means, you shouldn't trust it. That's the core question I focus on: when should you trust the machine? When should you trust the outputs of a machine?
My very simple answer is: when there's stability. When there's stability in the outputs. That's a necessary condition, though, not a sufficient one. You need stability to have some confidence that you're not going to get a completely different set of trading decisions if you change your training data slightly. If you would, that should give you cause for discomfort.
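The stability check described here can be sketched by refitting a model on many slightly perturbed training sets and looking at the spread of what comes out. Everything below (the signal strength, the bootstrap resampling scheme, the one-feature linear model) is an illustrative assumption, not Dhar's actual method:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic, noisy "market" data: one weak predictive feature.
n = 500
x = rng.normal(size=n)
y = 0.1 * x + rng.normal(scale=1.0, size=n)  # low signal-to-noise

def fit_slope(idx):
    """OLS slope fitted on a subset of the data (a perturbed training set)."""
    xi, yi = x[idx], y[idx]
    return np.cov(xi, yi)[0, 1] / np.var(xi, ddof=1)

# Refit on many bootstrap resamples: each is a slightly different
# training set. The spread of the fitted slopes is one crude
# stability measure.
slopes = np.array([
    fit_slope(rng.integers(0, n, size=n)) for _ in range(200)
])

print(f"slope mean {slopes.mean():+.3f}, std {slopes.std():.3f}")
# If the sign of the slope (long vs. short) flips across resamples,
# the trading decision itself is unstable.
print(f"fraction of resamples with positive slope: {(slopes > 0).mean():.2f}")
```

A wide spread, or frequent sign flips, is the "high variance in what the machine is generating" that should make you distrust the model.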
HARI KRISHNAN: Got it. A more primitive way to think about it is to say, well, stability must be related to some information criterion. I don't want to get too fancy, but if I have a really simple model, and it works, maybe it's automatically more stable. Where do you stand on that?
VASANT DHAR: Yeah, exactly. You've gotten to the heart of it, which is why I said it's necessary but not sufficient. The simplest model could be: you always bet the average. That's a simple model. It has zero variance; you will always do the same thing. But it has heavy bias, and it probably isn't very useful. You're trying to tease apart the structure in the space into, like, good longs, good shorts.
That means that you're introducing some level of complexity over and above that simple bet-the-average model. You're introducing a little bit of complexity for more predictability, and you might now sacrifice some degree of explainability for that increased predictability that you get from the complexity.
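The trade-off being described can be made concrete with a toy experiment: compare the zero-complexity "always bet the average" model against an over-complex fit, measuring bias and variance at a fixed test point across many resampled data sets. The data-generating process and the degree-9 polynomial are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two models on the same noisy data: the zero-variance "always bet
# the average" baseline versus a high-degree polynomial fit.
def experiment():
    x = rng.uniform(-1, 1, size=30)
    y = np.sin(3 * x) + rng.normal(scale=0.3, size=30)
    x0 = 0.5  # a fixed test point
    baseline = y.mean()                         # heavy bias, zero complexity
    poly = np.polyval(np.polyfit(x, y, 9), x0)  # low bias, high variance
    return baseline, poly

preds = np.array([experiment() for _ in range(300)])
true = np.sin(3 * 0.5)
for name, col in [("bet-the-average", 0), ("degree-9 poly", 1)]:
    bias = preds[:, col].mean() - true
    var = preds[:, col].var()
    print(f"{name}: bias {bias:+.3f}, variance {var:.3f}")
```

The baseline is perfectly stable but badly biased; the complex model tracks the target at the cost of bouncing around from sample to sample, which is the trade Dhar says you make for predictability.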
HARI KRISHNAN: Is that trade off the art of this business or is it something that you can quantify in some way?
VASANT DHAR: You always want to quantify something like that. How successfully you can do it is of course questionable, but it is something one should be able to measure, or at least measure parts of. Complexity, for sure: you can specify how complex you want a model to be, the complexity that you want the machine to be able to work with. You can specify that; depending on the form of your model, the parameters would vary. And you should be able to analyze the variance of the model.
Since I've already mentioned variance, let me just sketch it out. Variance has two types. The first is the variance in performance of the model: if you change the input set slightly, how widely does your output performance vary? The other is your decisions: how do your decisions change as a function of small variations in your training set? Because if your decisions change a lot, that's also indicative of instability, even though your performance may not change.
HARI KRISHNAN: Got it.
VASANT DHAR: Those two elements are what I look at as variance: it's variance in performance and variance in decisions.
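These two notions of variance can be estimated in a toy setting by refitting on fresh training sets and recording both the out-of-sample P&L and the long/short decisions themselves. The setup below is a synthetic sketch under assumed parameters, not an actual trading model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Shared setup: a weak signal and a fixed out-of-sample evaluation set.
n_train, n_test = 400, 200
x_test = rng.normal(size=n_test)
ret_test = 0.1 * x_test + rng.normal(scale=1.0, size=n_test)

perf, decisions = [], []
for _ in range(100):
    # Slightly different training set each time.
    x_tr = rng.normal(size=n_train)
    ret_tr = 0.1 * x_tr + rng.normal(scale=1.0, size=n_train)
    slope = np.cov(x_tr, ret_tr)[0, 1] / np.var(x_tr, ddof=1)
    side = np.sign(slope * x_test)           # long/short decisions
    perf.append((side * ret_test).mean())    # out-of-sample P&L
    decisions.append(side)

decisions = np.array(decisions)
# Variance in performance: spread of out-of-sample P&L across refits.
print(f"variance in performance: {np.var(perf):.5f}")
# Variance in decisions: how often do two refits disagree on the same day?
disagree = (decisions[:, None, :] != decisions[None, :, :]).mean()
print(f"pairwise decision disagreement rate: {disagree:.2f}")
```

The second number captures Dhar's point that decisions can churn between retrainings even when headline performance looks similar.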
HARI KRISHNAN: I'm going to jump around a little bit and ask another question, which is if I were a viewer of this show, and I wasn't an expert in machine learning and somebody sent me a big bank sell side report showing the performance of a machine learning algo in a given market, let's say currencies or rates, what can I actually do with that? Is that totally useless?
VASANT DHAR: Well, the question is, is it real or is it simulated?
HARI KRISHNAN: If they only give me the results.
VASANT DHAR: But is it real? Is it actual trading performance, or is it "this is what I would have achieved"?
HARI KRISHNAN: This is what I would have achieved.
VASANT DHAR: Well, that's very difficult to trust just by looking at it, because you really have to peel it apart and understand: what was the methodology? How many times did you look at the data? Did you follow a process? I think this is the first time I'm talking about process; I'll get to that and I'll explain what I mean by it. But did you follow a standard process in how you generated this set of outputs, as opposed to, "Well, let's try this. Oh, that doesn't work too well, let's try something else. Oh, now it looks great"? There's a famous saying: I never saw a back test I didn't like. How often have you seen a really poor back test being marketed? You don't.
HARI KRISHNAN: I used to go to a series of talks where every talk-- I'm actually cribbing off somebody else here-- ended with a graph that started at the lower left corner of the slide and wound up at the upper right corner.
VASANT DHAR: The short answer to that question is I wouldn't trust a back test unless it's my own and I know exactly what the assumptions were that went into it and the process that was followed. One of my goals in life has actually been to get to the point where my back tests and reality are indistinguishable. My back tests don't look particularly impressive, but I trust that this is what I'll achieve in reality. By not too impressive, I mean that a back test is a point estimate. It says your expected information ratio is 0.6. And while we're talking about it, I want to say something about this, which is that I think anyone who's built strategies for some time knows that their realized performance sometimes bears very little resemblance to their back test in the short term, and sometimes even in the long term.
Your objective should be that they mirror each other in the long term. In the short term there's a lot of noise, things are really unpredictable, but in the long term your back tests and reality should really mirror each other. That would be an indication that you've got a robust process, and that you have the right level of complexity in your model, if you can get to that point, but that's--
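The point about back tests being point estimates, with short-term realized performance scattering widely around them, can be illustrated by simulating a strategy whose true annualized information ratio is exactly 0.6 and measuring the realized ratio over windows of different lengths. All parameters here (daily volatility, window lengths) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# A toy strategy whose *true* annualized information ratio is 0.6:
# the daily mean is chosen so that mu / sigma * sqrt(252) = 0.6.
daily_vol = 0.01
daily_mu = 0.6 * daily_vol / np.sqrt(252)

def realized_ir(n_days):
    """Annualized information ratio realized over one simulated path."""
    r = rng.normal(daily_mu, daily_vol, size=n_days)
    return r.mean() / r.std(ddof=1) * np.sqrt(252)

results = {}
for years in (1, 10):
    irs = np.array([realized_ir(252 * years) for _ in range(1000)])
    results[years] = (irs.mean(), irs.std())
    print(f"{years:>2}-year windows: realized IR "
          f"mean {irs.mean():.2f}, std {irs.std():.2f}")
```

Over one-year windows the realized ratio scatters widely around 0.6; only over long horizons does it converge toward the back test's point estimate, which is the "mirror each other in the long term" behavior described above.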
HARI KRISHNAN: Okay. That's a good point. Now, debunk this idea. I knew a guy once and he had a model that traded various equity sectors. It wasn't a machine learning model, mind you, but it did various things that would now be commonly known as factor modeling, this, that and the other. Every now and again, he'd have a period of underperformance. He traded 10 sectors, whatever. He would gleefully call people up and say, "Oh, I've improved my system by throwing out the sector that didn't work when the model didn't work."
That seems a bit overly greedy in the language of algorithms. What do you find are the dynamics of the algorithms or the systems that you look at? Do you just throw out a model that doesn't work well for a while or do you believe that there is some regime dependence that may make it valid at some point in the future?
VASANT DHAR: This question used to drive me crazy before I got to the point where I developed a process for generating my strategies. Because I started using machine learning in the '90s, but for the first 10 years or so, maybe 12 years, the models I created were human curated. I'd look at the output of the machine learning algorithm. Then I would say, "Well, let me reduce the complexity here.