OM Session 3

Bruno

Member
This was a lighter-weight session (little maths) on the volatility skew, particularly the vertical skew.
There will be a follow-up in which I'll introduce more on the horizontal skew, and then Time-Weighted Vega.

The PowerPoint presentation is attached.
 


DMR

Dave Redding
For those following this trading group forum: had I realized this earlier, I would have posted the question I sent directly to Bruno here, so that everyone could comment and answer.

Here's our short exchange, if anyone is interested. I don't think Bruno would mind me sharing his response on this forum.


Hi Bruno,
Had I been with you live during the presentation, I would have asked about the data quality (real-time data feeds) behind these math equations.
These equations are applied to sampled data: time slices of varying volume, bid-ask spread, and DTE. I believe this is correct?
Do you have any previous post(s) that talk about empirical data integrity? At what point do these Greek calculations suffer: when the sample size is too small, when sampling is too infrequent, when the bid-ask is too wide, and are there any DTE dependencies of interest?

Thank you, Dave.

Hi Dave,
Very good question indeed. I haven't touched on the topic of data integrity, although it can be an issue in options modelling calculations.
Some strikes do stand out at particular salient levels now and then, but I am not trying to capture anomalies for arbitrage purposes. That aside, I am aware that there are data discrepancies, in other words less reliable mids away from the money, so I do my best to model the ATM IV and then the best possible skew curve, from which I can compute realistic theoretical prices.
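
A minimal sketch of that workflow, assuming a simple quadratic skew in log-moneyness anchored on the ATM IV and plain Black-Scholes for the theoretical price; the fitting choice, names and numbers are purely illustrative, not my actual model:

```python
import numpy as np
from scipy.stats import norm

def bs_price(S, K, T, r, sigma, is_call=True):
    """Plain Black-Scholes price from a given implied vol."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    if is_call:
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
    return K * np.exp(-r * T) * norm.cdf(-d2) - S * norm.cdf(-d1)

def fit_skew(strikes, ivs, S):
    """Fit IV as a quadratic in log-moneyness and return a callable skew curve.
    Filtering or down-weighting of unreliable far-OTM mids is omitted here."""
    x = np.log(np.asarray(strikes) / S)
    coeffs = np.polyfit(x, ivs, 2)            # curvature, slope, ATM level
    return lambda K: np.polyval(coeffs, np.log(K / S))

# Illustrative numbers only
S, r, T = 4500.0, 0.04, 30 / 365
strikes = [4200, 4300, 4400, 4500, 4600, 4700]
ivs     = [0.220, 0.200, 0.185, 0.170, 0.165, 0.160]   # market mids, noisier away from ATM

skew = fit_skew(strikes, ivs, S)
K = 4350.0
print("model IV:", skew(K), "theoretical put:", bs_price(S, K, T, r, skew(K), is_call=False))
```

The point is simply that the ATM level anchors the fit and the wings are smoothed rather than taken at face value.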

So, I can't quite answer your question really. I monitor the IV provided by IB, for instance, and when it looks incorrect I discard it. That does not mean market makers will be any more willing to fill an order at the calculated price if, for whatever reason (probably the calibration of their books), they are reluctant or unwilling to price it correctly. There has been quite a bit of "chaos" in volatility behavior for some time now, and even if it often appears like we trade "against" market makers, they will only play ball if they can make money, not necessarily at our expense but just in general. Using fairly realistic pricing is the best we can do...

Since you mention DTE dependencies, I am going to prepare a presentation on Weighted Vega soon. I am hoping (time permitting) to be in a position to highlight a more general model than the typical multiplier introduced by Ron Bertino a couple of years ago. Using the multiplier blindly is hardly any better than using the standard Vega.
 

DMR

Dave Redding
In my day job I'm involved with the design & development of real-time, safety-critical, embedded control systems. The work also extends into the world of test cells, test systems, and data analysis, comparing real-world measurements to the values from the analytical equations.

This is the context of my question to Bruno with respect to the math behind the Greeks: when do sources of error, latency, and inaccuracy in the empirical data become a concern when one uses real-time market data to calculate any Greek?

This consideration is critical in my day job with respect to the stability, operation, and performance of the control system.

My question above was to get a feel for: does data quality matter? (I think it most definitely does.)

And how does one assess the empirical data used to calculate any Greek, so as to avoid the "garbage in, garbage out" situation?

In other words, how do I know when the calculation of a specific Greek is invalid because the underlying data lacks sufficient quality?

I get nervous calculating derivatives from sampled data using small time intervals ... the smaller the time step, the smaller the denominator, and a calculation like that will "amplify" or distort any noise in the resulting value.
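
A toy illustration of the amplification I mean (the numbers are made up): the same small measurement noise on the sampled values blows up in the slope estimate roughly as noise/Δt as the time step shrinks.

```python
import numpy as np

rng = np.random.default_rng(0)

def finite_diff(f, t, dt, noise):
    """Forward difference of a sampled signal with additive measurement noise."""
    f0 = f(t) + rng.normal(0.0, noise)
    f1 = f(t + dt) + rng.normal(0.0, noise)
    return (f1 - f0) / dt            # the noise term contributes on the order of noise/dt

true_slope = 2.0
f = lambda t: true_slope * t         # idealised clean signal

for dt in (1.0, 0.1, 0.01):
    samples = [finite_diff(f, 5.0, dt, noise=0.01) for _ in range(1000)]
    print(f"dt={dt:5.2f}  mean slope={np.mean(samples):6.2f}  std={np.std(samples):6.2f}")
```

With a true slope of 2.0 and ±0.01 noise, the estimate is fine at dt=1.0, but its scatter becomes comparable to the signal itself by dt=0.01.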

I have no experience in this financial world of real-time data sampling, but my grey hairs tell me data quality matters to the folks who stand to lose big $'s.
 

Bruno

Member
Thanks Dave,
I am working on more practical examples around the topic of vertical and horizontal skews. I shall come back to theoretical stuff later.
I also need more time to study data sampling anomalies and distortions due to friction (minimum tick size), step-size issues (discrete strikes), volume, bid-ask spreads, etc.
In the meantime, I am preparing a little something about Weighted Vega, and it too is certainly not as easy as it looks. The time compensation proposed by Ron Bertino 2-3 years ago doesn't always answer the problem, and there is virtually no reference in academia and not much on quant forums either.
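
For anyone who wants to experiment in the meantime, here is a bare-bones version of the usual time weighting. I am not claiming this is exactly Ron's multiplier; the square-root form and the 30-day reference tenor are just the convention commonly seen, and that is precisely the part I would like to generalise:

```python
import math

def weighted_vega(vega, dte, ref_dte=30.0):
    """Common time-weighted vega: scale the raw vega by sqrt(ref_dte / dte),
    so short-dated options get more weight and long-dated ones less."""
    return vega * math.sqrt(ref_dte / dte)

# Same raw vega at different expirations (illustrative numbers)
for dte, vega in [(7, 50.0), (30, 50.0), (90, 50.0)]:
    print(f"{dte:3d} DTE -> weighted vega {weighted_vega(vega, dte):6.1f}")
```

Used blindly, a fixed multiplier like this just reshuffles the same information, which is why I think it deserves a more general treatment.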
 

garyw

Active member
One specific point on "real-time" data: a possible source of error in real-time data from any streaming-like source or conduit is the mixing of "stale" data with recent data (data that actually originated at different times). This can be a common issue with normal TOS-RTD, as equations may relate to a number of real-time data items which are all in flux. It can be minimized somewhat by taking a snapshot of the set of data at a specific time, then performing the calcs on that stable data. While this does not guarantee the data is an atomic capture, it is likely close enough if the snapshot is performed on the server side! For RTD, one can implement this by turning off the standard 2-second refresh rate, capturing at specific points in time, and giving TOS and RTD a time window to complete the capture before basing any calculations on it. I have been doing this for a number of years.
(The "snap-shot, then process" algo also avoids data contamination from having independent uncorrelated processes, which is likely the more serious/frequent issue)
 