Mon 09 September 2013 - 11:34:31 CDT
Last week I presented a paper at SPIE Optics+Photonics in San Diego. It was a good conference, and I think the people who attended my session were happy with it. (Also, thanks to the Caltech and U of Arizona people for a good lunch discussion after the session.)
Here I’m not going to get into the (ongoing) research, but I want to highlight a couple things I did with the slides that you might try in your own presentations.
First, sorry (long-time friend of mine) Geoff, but I’m not going to post all of the slides. Remember, slides don’t stand alone as a document. In fact, the point of this presentation at the conference was to highlight salient sections of the paper I had already written. So, if anything, I would post the paper.
Now, slide number 1: a plot of diffraction efficiency data generated in Scilab.
A few things to note here. Getting the plot formatted this way (clean fonts, annotations, consistent colors) is not difficult to achieve, but it is time-consuming. Scilab generates good plots, but doesn’t have a lot of flexibility in formatting. Also, I’ve found that if you play around with dashed lines in Scilab, they look fine in the application but don’t export well. Fortunately, Scilab will export SVG vector graphic files that can be tailored in Adobe Illustrator (or Inkscape or similar) to change fonts, add annotations, change colors, and change dashes. (Caution: The initial vector structure you get will be a mess. But with a little effort, you can split/combine paths and group/ungroup entities to get to something that makes sense.)
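For reference, the SVG export itself is a one-liner; here’s a minimal sketch (the filename is just an example):
--> xs2svg(0, "efficiency_plot.svg"); // export figure 0 as SVG for editing in Illustrator/Inkscape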
Another trick is to keep consistent axis scaling in Scilab as much as makes sense:
--> plot(...);
--> a = gca(); // get current axis
--> a.data_bounds = [xmin, ymin; xmax, ymax];
--> a.tight_limits = "on";
This way, you can reuse axis formatting–line weight, color, font–across multiple plots in your presentation. (It also helps the audience understand relative scales between data sets.) And finally, keeping everything in a vector format as long as possible maintains image fidelity when you resize in the presentation slide.
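As a minimal sketch of that reuse (the bounds and data names below are hypothetical placeholders):
--> bounds = [0, 0; 90, 1]; // hypothetical shared limits, e.g. angle (deg) vs. efficiency
--> plot(angle1, eff1); // first data set (placeholder names)
--> a = gca(); a.data_bounds = bounds; a.tight_limits = "on";
--> scf(1); // open a second figure for the next plot
--> plot(angle2, eff2); // second data set (placeholder names)
--> a = gca(); a.data_bounds = bounds; a.tight_limits = "on";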
Now slide number 2 (and 3): a full-bleed photograph of part of the lab setup.
Large photos are great, obviously, for illustrating things that are difficult to describe (like an experimental lab setup). The first photo has the room lights on, and when this slide is up, I can describe the position of the holographic plate on the rotation stage, the location of the laser source, etc. But it is difficult to see the two diffraction spots on the screen. However, when I include another photo taken from the same position with the room lights off, you can easily see the diffraction spots.
You can’t see the apparatus well anymore in the second picture, but that doesn’t matter because context was established in the previous slide. When the slideware–Keynote in this case–dissolves between the photos, connection between the two is obvious, and that can be worked into the description during the fade: “The output spots are easier to see once we turn the room lights off.”
(Also, don’t forget to practice with technology interruptions. My presenter display–showing the next slide and a timer–decided not to work during the presentation. My laptop display was just mirroring the projector, and there wasn’t time to fix it. Things still went reasonably well.)
Tue 03 September 2013 - 14:11:10 CDT
I recently worked on a consulting project where I was brought in to carry some embedded software across the finish line. There was more work than they originally thought–not an uncommon sentiment–and the timeline was growing increasingly critical. (When isn’t it?)
The project seemed straightforward on the surface. “We have a legacy machine and legacy code base. We need to update the processor because we can’t get the old one anymore. So we also need to update the firmware because a lot of it is hardware dependent.” No sweat. No surprises here. The code is in C, so it’s reasonably portable. Find the hardware-dependent parts, extract and modularize, port, verify.
There were no requirements. And there was no version control implemented (unless you consider dated, initialed comments version control–hint: it’s not). But worst of all, another software engineer with the best of intentions decided that the legacy code was poorly organized and needed to be completely rearchitected.
First, he was right. Those hardware-dependent parts: everywhere. No modularity. And what modularity there was depended heavily on Ctrl-C/Ctrl-V: copy and paste running rampant. If good code is DRY (Don’t Repeat Yourself–hint: it is), this code was a fire hose trained on the Pacific Ocean. Global variables ruled the day. Comments were needed because object naming was nigh unintelligible. But the existing comments–when not used as a poor substitute for version control–were not kept up to date, so they did more harm than good. (Pro-tip: please stop bothering with comments unless you’re writing in assembly; I want to read your code, not your comments.)
But second, he was dead wrong. Remember, no requirements. The code base had no associated functional or unit tests. The whole system only had an end-of-line test plan. The legacy machine was the design documentation, and the fact that customers liked it and wanted the revision to act the same was the pass/fail criterion.
You are not–repeat not–allowed to refactor code in this situation.
You don’t know the impacts. You can’t test the changes. You have no idea what you just broke. This is a stopgap project to keep the assembly line going until the next gen is released. This is not your Mona Lisa or Empire State Building or Saturn V.
There is one course of action: Click Compile. Fix hardware- or toolchain-related syntax error. Repeat.
The end result isn’t pretty and it’s not maintainable. But neither was the starting point. Step away from the editor (i.e. vi), the schematic/layout suite, the solid modeling tool, the #2 pencil, the screwdriver…whatever. The goal is to ship an almost exact copy. It doesn’t have to be pretty; your customer doesn’t care. It has to ship on time and on budget (your customer definitely cares about this).
All that being said, though, please don’t run your development this way. Don’t get into this situation. Write requirements and tests. Get early buy-in. Use automated testing for software–it’s just more software–so you can refactor and optimize to your heart’s content. Documentation costs money, sure (useful documentation does, anyway), but it saves more than it costs.
Know where you’re going before you start walking, and you’ll have a maintainable, efficient system that will be more profitable in the long run.
Mon 29 July 2013 - 10:13:53 CDT
Following up from my earlier post covering resources for building effective presentations, I thought I’d throw a few more out there. These are not about presentations so much as about general effectiveness. I’ve read all of these–most multiple times–and I find them eye-opening and engaging and helpful. So if you’re a manager, an employee, an academic, a freelancer, whatever, you’ll probably get something good out of this list.
…and a few more quick ones:
Thu 18 July 2013 - 15:58:13 CDT
I recently posted a few articles about Fourier decomposition, and I often get asked–and ask myself–“How do I calculate that again?” using a tool like Matlab or (my preference) its open-source cousin, Scilab. Changing a data set sampled in time to one sampled in frequency is straightforward. We run the Fast Fourier Transform algorithm:
--> Y = fft(y);
Easy. But what if you want to actually understand the data and do something with it? The problems with not knowing how to tailor the output of this calculation are best illustrated with some plots. In Scilab, let’s generate a basic audio signal with two tones sampled at 44.1kHz. First we create an axis consisting of 1024 evenly spaced points in time. A sample rate of 44.1kHz–the CD audio rate–means the sample times are separated by a bit less than 23 microseconds. (Note: this example code works with Scilab 5.4.0 on OSX; your mileage may vary.)
--> N = 1024;
--> fs = 44100;
--> t = (1/fs)*(0:1:(N-1));
Then we create two sine waves with frequencies of 5000Hz and 10000Hz and different amplitudes (1 and 2), add them together, and plot the result.
--> f1 = 10000;
--> a1 = 1;
--> y1 = a1*sin(2*%pi*f1*t);
--> f2 = 5000;
--> a2 = 2;
--> y2 = a2*sin(2*%pi*f2*t);
--> y = y1 + y2;
--> plot(t, y);
After zooming in on the plot, we get something that looks like this:
Taking the Fourier transform and plotting:
--> Y = fft(y);
--> plot(Y);
which looks interesting, but it’s not terribly useful. First, we don’t have a frequency axis, just numbered samples. Second, we expect a delta function–a spike (technical term)–for each sine wave, but we see four spikes instead of two, plus some negative-going add-ons. And the amplitudes are all messed up; the spikes are too tall.
Second thing first. The result of the FFT is a series of complex numbers, and if you give that to the plot function, Scilab plots the real part by default. In most applications, we’re not interested in real and imaginary parts but rather amplitude and phase. Antennas and image sensors (eyes) and microphones (ears) and speakers and… often only care about intensity, or amplitude-squared; even the phase information is ignored. So for now, let’s plot the amplitude of the FFT using the complex absolute value function.
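If you want to see just that change before fixing anything else, a minimal sketch:
--> plot(abs(Y)); // amplitude spectrum, still plotted against sample number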
At the same time, let’s add a proper frequency axis to the plot. The nature of the FFT is such that the 0th sample is DC, or a frequency of 0, and the highest frequency is just less than the sampling frequency. What does “just less” mean? Well, the sample spacing is such that if you extended the sequence by one sample, the Nth point would be at the sampling frequency (but because we’re zero-referenced, the (N-1)th point is the last one). An easy way to handle this is:
--> f = linspace(0, fs, N+1); // one sample too many
--> f = f(1:$-1); // drop the last sample
The final trick is to divide by N to scale the amplitude correctly–I always forget that bit, so now I’ve written it down. Now let’s plot the scaled amplitude of the FFT vs. our frequency axis:
--> plot(f, abs(Y)/N);
Hmmm, still four spikes instead of two. And we started with amplitudes of 1 and 2, not 0.5 and 1, so the scaling is still wrong. Or is it?
There are two ways to look at this. First, from a Fourier transform table, the transform of a sine function of frequency f is a delta function at f and one at -f each with half the amplitude. So if we move the right half of our plot to the left and shift the axis to include negative frequencies:
--> plot(f-(fs/2), fftshift(abs(Y)/N));
the plot looks like this:
Now we have spikes of amplitude 1–half the original 2–at 5000Hz and -5000Hz, and spikes of amplitude 0.5 at 10000Hz and -10000Hz, just like we expect. (Note that the amplitudes are slightly less than these values, and the spikes flare at their bases. This is a topic for another time, but it’s related to windowing with a rectangle…or convolving with a sinc.)
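To see where the factor of one-half comes from, expand the sine with Euler’s formula (a quick sketch of the standard identity):
a*sin(2*pi*f0*t) = (a/(2j)) * (exp(j*2*pi*f0*t) - exp(-j*2*pi*f0*t))
The Fourier transform of each complex exponential is a delta function, so the transform of the sine is (a/(2j))*[delta(f-f0) - delta(f+f0)]: two spikes, each with magnitude a/2, which is exactly the halving in the plot.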
The other way of looking at the result is to re-examine the unshifted plot and note that the high-frequency spikes are at 39100Hz and 34100Hz. Not coincidentally, these frequencies are the sampling frequency (44100Hz) minus the frequencies of interest (e.g. 44100-5000=39100). The FFT plot is a mirror image folded–origami, get it?–at one-half the sampling frequency. Half the sampling frequency is often called the Nyquist frequency (Harry Nyquist was Swedish, and with folding we’ve resolved this post’s title). The theory says that this is the highest frequency that can be reconstructed from the sampled signal without aliasing. This is also a topic for another time, but here’s a taste.
Let’s keep the 44100Hz sampling rate, but feed it a 30000Hz signal.
--> z = sin(2*%pi*30000*t);
--> plot(f-(fs/2), fftshift(abs(fft(z))/N));
Again we see the characteristic spikes at positive and negative frequencies, but the frequency is not 30000Hz. In fact, it is 44100-30000=14100Hz. So if you record a 30kHz tone at a 44.1kHz rate and play it back, the signal will manifest itself as a tone with a bit less than half the original frequency!
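As a quick sanity check (a sketch reusing N, f, and z from above), you can locate the strongest spike below the Nyquist frequency:
--> Z = fft(z);
--> [m, k] = max(abs(Z(1:N/2))); // strongest bin in the positive-frequency half
--> f(k) // roughly 14100Hz, not the original 30000Hz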
This is fundamental theory that is applied regularly to countless systems: audio, video and imaging, communications, biometric sensors, … While the math and the tools are relatively simple, reality makes the engineering more difficult. Keep an eye on this blog for more.
Tue 25 June 2013 - 14:45:00 CDT
I have a conference coming up, so I’ve been thinking again about what makes a good presentation. This is a difficult question to answer, and it’s probably easier if you think about the converse at the same time: what makes a bad presentation? Here are a few concepts I’ve picked up over the years.
Sometimes you can get over a creative block and ultimately create a more memorable presentation by enforcing limits on yourself. Consider these two formats. (Several cities have events organized around these.)
Here are some valuable resources:
And for the scientists and researchers out there who need to present data, look into the excellent books by Edward R. Tufte. These are great, but dense and not inexpensive. Consider borrowing a copy to start or attend his one-day course; it’s really good.
Finally, some comic relief: