Previously, on Jack’s research project:
I’m working in the lab of Professor Tsutomu Saito on a project to learn how to perform Simplified Background Oriented Schlieren and to try to extend the method in new ways. As I explained in An explanation of what I’m doing in Japan, Schlieren is a 150+ year old technique for seeing density gradients using a bunch of fancy lenses and mirrors. Background Oriented Schlieren (BOS) is a 17 year old technique that does the same thing without any expensive lenses or mirrors, using computer processing instead. Simplified Background Oriented Schlieren (S-BOS) is a 5 year old technique that works like BOS but uses a different image processing algorithm that’s a lot faster.
Right now I have several different tasks I’m working on during my remaining three weeks here.
- Make S-BOS quantitative
- Compare speed of S-BOS and BOS
- Compare accuracy of S-BOS and BOS
- Test the robustness of S-BOS to being out of focus
- Apply S-BOS to high speed flows
Make S-BOS quantitative
What I mean by this is that BOS can be used to get quantitative information about the density in the flow you’re looking at. Traditional schlieren imaging makes pretty pictures that give lots of good information, but it doesn’t tell you the density in the flow. BOS does, because it determines how far each point of the background appears to shift as a result of the light being refracted by density gradients between the background and the camera. The displacement at each pixel is proportional to the angle of refraction, which is proportional to the gradient of the refractive index, which in turn is proportional to the gradient of the density. So you can take the displacement field and integrate it to get the density everywhere in the image. Right now you can’t do that with S-BOS for a couple of reasons, and I’d like to solve that. I’ve been working on it for a couple of weeks now, and while I’ve made progress, realistically I don’t think I’ll have enough time to finish before I leave. In that case I hope to keep collaborating with this lab after I go back to the US, and they can keep working on this.
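For the record, the chain I have in mind looks roughly like this, where Δy is the apparent background displacement, ε_y the refraction angle, n the refractive index, ρ the density, K the Gladstone-Dale constant, and C a single constant that lumps together the optical geometry of the setup. This is my shorthand, not a careful derivation:

```latex
\Delta y \propto \varepsilon_y, \qquad
\varepsilon_y \propto \frac{\partial n}{\partial y}, \qquad
n - 1 = K\rho
\;\;\Longrightarrow\;\;
\rho(y) = \rho_{\mathrm{ref}} + C \int_{y_{\mathrm{ref}}}^{y} \Delta y(y')\,\mathrm{d}y'
```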
Compare speed of S-BOS and BOS
This is less ambitious than the previous task. Basically I want to keep practicing BOS and S-BOS, and I think it would be worthwhile to quantify their strengths and weaknesses. The process of converting the original images into processed images is way faster with S-BOS, because it basically just involves performing arithmetic at each point in the image. BOS, on the other hand, uses cross-correlation, which is a lot more demanding for the computer. I’d like to measure the runtime and get an actual number for how much faster S-BOS is. In reality the answer can vary a lot depending on the input parameters of the cross-correlation, as well as which programming language is used and how the algorithm is written, but it’s still useful to get a rough idea.
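To make that concrete, here’s the kind of timing comparison I have in mind. This is only a sketch of mine, not the lab’s code: the “S-BOS” step is stood in for by a trivial per-pixel operation, and the image size, window size, and step are arbitrary, so the ratio it prints is only an order-of-magnitude indication.

```python
# Rough timing sketch: BOS-style windowed cross-correlation vs. an
# S-BOS-style per-pixel arithmetic pass on the same synthetic image pair.
import time
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
H, W = 512, 512
ref = rng.random((H, W))                 # reference background image
meas = np.roll(ref, shift=2, axis=0)     # fake "displaced" image (uniform 2 px shift)

def bos_cross_correlation(ref, meas, win=32, step=16):
    """Find the cross-correlation peak in each interrogation window."""
    shifts = []
    for y in range(0, ref.shape[0] - win, step):
        for x in range(0, ref.shape[1] - win, step):
            a = ref[y:y+win, x:x+win] - ref[y:y+win, x:x+win].mean()
            b = meas[y:y+win, x:x+win] - meas[y:y+win, x:x+win].mean()
            corr = fftconvolve(a, b[::-1, ::-1], mode="full")
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            shifts.append((peak[0] - win + 1, peak[1] - win + 1))
    return np.array(shifts)

def sbos_arithmetic(ref, meas):
    """Stand-in for the S-BOS step: the real algorithm is different,
    but it is still just per-pixel arithmetic."""
    return meas - ref

t0 = time.perf_counter(); bos_cross_correlation(ref, meas); t1 = time.perf_counter()
sbos_arithmetic(ref, meas); t2 = time.perf_counter()
print(f"BOS-style: {t1 - t0:.3f} s, S-BOS-style: {t2 - t1:.3f} s")
```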

Compare accuracy of S-BOS and BOS
This is just like comparing the speed. It’s nothing fancy, but I should try to do it as long as I’m taking S-BOS and BOS for a test drive.
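The obvious way I can think of to do it (my assumption, not a settled plan) is to build a synthetic image pair where the true displacement field is known, run both methods on it, and report something simple like the root-mean-square error:

```python
# Hypothetical accuracy metric: RMS difference between a known synthetic
# displacement field and whatever BOS or S-BOS recovers from the images.
import numpy as np

def rms_error(true_dy: np.ndarray, est_dy: np.ndarray) -> float:
    """Root-mean-square error between true and recovered displacements."""
    return float(np.sqrt(np.mean((true_dy - est_dy) ** 2)))
```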
Test the robustness of S-BOS to being out of focus
In “Recent Developments in Schlieren and Shadowgraphy”, Hargather and Settles write the following about focus for BOS:
In general, a distant background, imaged with a long focal-length lens by a camera of high pixel resolution, results in the greatest sensitivity. This is constrained, however, by depth-of-field, since maintaining both the background and schlieren object in reasonable focus is important for the success of the BOS processing and for good photography. Typically the required sensitivity is established by choosing a suitable distance L − t within the constraints of a given experimental setup, and then choosing t for acceptable depth-of-field. Ideally the schlieren object will be halfway between the camera and the background for maximum sensitivity, but this requires a large depth-of-field that is not always obtainable, especially in high-speed applications.

This selection of the focal distance seems like an interesting problem, since being out of focus can have a very different effect than it does in normal photography. The flow isn’t being “looked at” so much as looked through, and in fact the refraction of the light is proportional to an integral along the optical path. The need to have the flow itself in focus mostly comes from wanting to see solid objects near the flow. Keeping the background in focus has its own interesting quirks. In a normal photo, being out of focus affects every point in the image in a consistent and predictable way: it’s blurry. But with BOS, the sharpness of the background image affects the likelihood that the cross-correlation algorithm correctly identifies the displaced pixels. Blurring may or may not cause it to misidentify pixels, and when it does, it won’t reduce the quality of the processed image evenly; the errors show up at random discrete locations in the frame. This is something I also found confusing to think about as a novice.
In addition to wanting to demonstrate this for my own satisfaction, I’m also interested in exploring it because in theory S-BOS may be less vulnerable to this effect. The S-BOS background pattern features smooth transitions between light and dark regions, so any blurring due to being out of focus might pose less of a problem than it would for the dot pattern used in BOS.
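As a crude way to build intuition (this is just a toy script of mine, and a Gaussian blur is only a rough stand-in for defocus), you could compare how much a random dot pattern and a smooth stripe pattern change when blurred by the same amount:

```python
# Toy check: blur a BOS-style dot pattern and an S-BOS-style sinusoidal
# stripe pattern with the same Gaussian "defocus" and compare how much each
# pattern's structure changes. Pattern and blur parameters are arbitrary.
import numpy as np
from scipy.ndimage import gaussian_filter

N = 512
rng = np.random.default_rng(0)

dots = (rng.random((N, N)) > 0.9).astype(float)        # random dot pattern
x = np.arange(N)
stripes = np.tile(0.5 + 0.5 * np.sin(2 * np.pi * x / 16), (N, 1))  # smooth stripes

for name, pattern in [("dots", dots), ("stripes", stripes)]:
    blurred = gaussian_filter(pattern, sigma=3)         # crude defocus model
    # Correlation between the original and blurred pattern: values near 1
    # mean the blur preserved the pattern's structure (ignoring contrast loss).
    corr = np.corrcoef(pattern.ravel(), blurred.ravel())[0, 1]
    print(f"{name}: correlation after blur = {corr:.3f}")
```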
Apply S-BOS to high speed flows
I was helping with this work a couple of weeks ago, but I haven’t participated recently since I’ve been focusing on the tasks above. Dr. Hatanaka has been working on it, and I’m eager to see where it goes, because it’s definitely interesting.