Everything posted by Robin Oswald

  1. Hi @attila Thanks for the changes, I've been testing them. To my surprise, I've found two new options: 1) The "avg mode" drop-down now has a third option, 'coherents': what does this do (or is this just a typo/bug)? 2) The "cohere" drop-down: what is the meaning of this option (and shouldn't it be greyed out if one has chosen mag/phase averaging)?
  2. Thanks for another useful reply, @attila! If I'm not mistaken, you and I mean wildly different things when we use the word 'coherence'. Unfortunately I don't quite understand your usage, but let me explain mine.

HP Application Note 243-1 has a nice explanation in words on p. 55 (and an example similar to the one we two have been using all along): https://www.hpmemoryproject.org/an/pdf/an_243-1.pdf 'Coherence measures the power in the response channel that is caused by the power in the reference channel. It is the output power that is coherent with the input power.'

For us, this means that at 1 MHz we have:
P_OUT_TOT = (P from response to W1) + (P from static W2)
P_OUT_COH = (P from response to W1)
Thus COH = P_OUT_COH / P_OUT_TOT = (P from response to W1) / ((P from response to W1) + (P from static W2)) < 1.

Again, this logic dictates that coherence should drop when there is external noise (i.e. at 1 MHz), not increase, as the short simulation below also illustrates.

HP Application Note 243-3 has many additional nice examples for coherence: https://www.hpmemoryproject.org/an/pdf/an_243-3.pdf There you can see this intuitively in action, e.g. in Fig. 2.20 on page 28. When the measured transfer function is low (typically at anti-resonances), the background noise can no longer be neglected compared to the excitation. The power at the frequency of interest is then only partially due to the excitation signal and partially due to background noise, so the coherent fraction of the output power at that frequency dips, leading to a dip in coherence.

The coherence function thus serves as a neat consistency/plausibility check to assess how much one can trust a given transfer function measurement at a given frequency, which is why I've been so relentless in nagging you about it ;). I hope this helps. Best, Robin
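A minimal numerical sketch of this expected behaviour, using SciPy's Welch-based coherence estimate on synthetic broadband data (not the NA's stepped-sine method; the injected 1 kHz tone plays the role of the static disturbance source):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 10_000.0
t = np.arange(200_000) / fs

x = rng.standard_normal(t.size)           # broadband excitation (reference channel)
y = x.copy()                              # unity "DUT": response fully caused by x...
y += 0.5 * np.sin(2 * np.pi * 1_000 * t)  # ...plus an external tone at 1 kHz

f, cxy = signal.coherence(x, y, fs=fs, nperseg=1024)
print(cxy[np.argmin(np.abs(f - 1_000))])  # far below 1 at the disturbed frequency
print(np.median(cxy))                     # ~1 everywhere else
```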
  3. Thanks for the useful reply @attila. I'm not in a hurry, so take your time with the next installer. Quick follow-up questions:

1) When you write "signal generator is periodically restarted", what does this mean? Specifically, I'm wondering whether the output of the signal generator is nice and steady (i.e. equivalent to being constantly on, with no idle periods, no phase jumps, and constant frequency), or whether there are discontinuities. I'm asking because if there are discontinuities in the output of W1 between the different averages at a fixed frequency, then I'd be worried about transient effects, and thus about having to wait again and again for them to die down.

2) The coherence function as shown in your screenshot matches what I've also seen previously. However, it does the opposite of what I'd expect it to do. If the recorded response on CH2 is completely due to W1/CH1, then the coherence should be 1. If the response is only partially due to W1/CH1, because of things such as additional noise in the circuitry, then the coherence should drop. However, in your plot of coherence above I see the exact opposite: coherence is low where I'd expect it to be high (i.e. away from 1 MHz, where you supposedly injected some noise), and high where I'd expect it to be low (i.e. at 1 MHz). Do you understand what I mean?
  4. Hey @attila I've been happily using coherent averaging in the NA for quite a while now. Thanks again for implementing it; it has been extremely useful.

Lately, I've found a bug with coherent averaging: when doing a transfer function measurement with a very small frequency step size, coherent averaging is restricted to a minimum step size, whereas normal mag/phase averaging is not.

Example: measurement centered around 1 MHz; the smooth curves are from 5X mag/phase averaging, the jagged curves from 5X coherent averaging. As visible, coherent averaging seems to have a minimum frequency step size of ~120 Hz. This minimum step size seems to depend on the center frequency of the scan, because for a similar measurement centered around 5 MHz the minimum step size is ~600 Hz. Note that for mag/phase averaging (or no averaging) everything works beautifully, so this minimum frequency step size only seems to apply to coherent averaging. Could you look into this?

As a side note, the 'Coherence' function also seems broken. Previously it would display some numerical values (albeit incorrect ones), but now it won't display anything anymore, instead showing the following message: "ReferenceError: Can't find Variable: Coherence". Thanks in advance, Robin
  5. Hi @Malcolm First try without the pre-amplifier, since there's a decent chance that it's already good enough. However, if you do need to add a pre-amplifier, you're still going to be fine:

Easiest: make the pre-amplifier ~10X faster in bandwidth than the highest frequency you care about. Then the phase shifts due to the pre-amp will be a few degrees at most, so probably irrelevant (depending on the required level of precision, of course); see the sketch below.

Next easiest: use the AD2 to characterize the pre-amplifier first, and then account for its phase shift. This requires a tiny bit more data analysis, but is also fairly straightforward.

Any reasonable pre-amplifier should give you stable phases with little phase noise as long as you don't drive it into saturation, so I think you'll be fine either way.
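To make the 'few degrees' estimate concrete: assuming the pre-amplifier behaves like a single-pole low-pass (an assumption for illustration, not a verified property of any particular pre-amp), its phase lag at one tenth of the corner frequency is arctan(0.1):

```python
import numpy as np

f_signal = 1.0   # highest frequency of interest (arbitrary units)
f_corner = 10.0  # pre-amp bandwidth, ~10X higher (single-pole low-pass assumed)

# Phase lag of a single-pole low-pass at f_signal:
phase_lag_deg = np.degrees(np.arctan(f_signal / f_corner))
print(phase_lag_deg)  # ~5.7 degrees, small enough to often be negligible
```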
  6. @Malcolm Based on my adventures using the NA, I'm very confident that it should give you reliable results for the scenario shown in your screenshot, especially because I can still rather easily see the sine wave in the response by eye and could pull out a phase difference if really required. I've seen the AD pull out reasonable mag/phase from far worse time traces! The only thing that is a bit borderline (but probably still acceptable) is the amplitude of the blue channel being only a few mV. For your scenario this should still work, but to improve the measurement it would be advantageous to either increase the excitation amplitude if you can (i.e. if there isn't any non-linearity limiting you), or use a pre-amplifier before the AD to get the voltage amplitude up into a decent range.
  7. Hi @attila

1. Did anything else change (i.e. triggering, synchronization, etc.)? In my first post the default averaging didn't seem to help, which is what started all of this.

3. Thanks, I'll check it out!

4. Windowing: If I understand things correctly, the main reason the flat-top window is popular in applications such as FFT analyzers is its flat top, which ensures that the measured amplitudes are correct regardless of being unlucky with the spacing of the FFT bins. I.e. if there are bins at 1 kHz, 2 kHz, 3 kHz, etc. and we have a sine wave at 1.5 kHz, the flat-top window makes sure to give an accurate reading of the amplitude (the Wikipedia articles on "window function" and "spectral leakage" have nice examples). For broadband applications, this makes a lot of sense.

However, in the case of the NA we already know ahead of time which frequencies we are interested in (mainly the excitation frequency and possibly its harmonics), and we can also adjust the sampling frequency, the length of the time record, etc. to make the FFT frequency bins coincide with the excitation frequency. Ideally, the capture would also be adjusted so as to record an integer number of periods; then there would be no need for windowing at all (see the sketch after the references below). To my knowledge, in many applications where transfer functions are determined experimentally, it is often recommended to make the excitation signal periodic and to record an integer number of periods, so that one can forego the headaches and trade-offs introduced by windows; see e.g. [1] for a readable account of windows. In fact, it would even be advantageous to forego the FFT and simply calculate the Fourier coefficients from the Fourier integral at the excitation frequency, allowing one to average much longer, especially if it's done directly on the FPGA. This latter approach is nicely described in [2], but is a bigger departure from what is currently done.

The last few days I've occasionally used the rectangular window (i.e. no windowing!) and I've found it useful, because it gives less spectral leakage than the flat-top window. This is quite useful when there are peaks at discrete spurious frequencies that one wants to resolve, but also to confine the influence of said peaks to a narrow frequency range rather than smearing them all over the frequency axis. Therefore, I'd appreciate it if you keep the option to manually choose a window (but I agree that the flat-top window is probably a good, safe choice as the default).

[1] Avitabile, Peter. Modal Testing: A Practitioner's Guide. John Wiley & Sons, 2017.
[2] Abramovitch, Daniel Y. "Built-in stepped-sine measurements for digital control systems." 2015 IEEE Conference on Control Applications (CCA). IEEE, 2015.
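Here is a minimal sketch of the integer-periods point (synthetic data, rectangular window): when the record contains an integer number of excitation periods, the tone falls exactly on an FFT bin and there is no leakage; otherwise its energy smears across neighbouring bins:

```python
import numpy as np

fs = 10_000.0   # sampling rate
n = 1_000       # record length -> FFT bin spacing of fs/n = 10 Hz
t = np.arange(n) / fs

for f0 in (100.0, 105.0):   # 100 Hz: integer periods per record; 105 Hz: non-integer
    x = np.sin(2 * np.pi * f0 * t)
    spec = np.abs(np.fft.rfft(x)) / (n / 2)   # rectangular window (i.e. none at all)
    k = int(round(f0 * n / fs))               # FFT bin nearest to f0
    print(f0, spec[k], spec[k + 3])           # on-bin amplitude vs leakage 3 bins away
```

For 100 Hz the amplitude reads exactly 1 and nearby bins are at numerical zero; for 105 Hz the peak reading drops to ~0.64 and substantial energy appears several bins away.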
  8. Hi @attila I've finally gotten around to testing coherent averaging in the lab, both with my basic test setup consisting of the differential amplifier, and with real-world applications such as characterizing feedback loops in-situ. Overall, I'm very happy, and it works beautifully. Thank you so much, this is very useful!

Example 1: Basic test using differential amplifier
For the following setup (same as in the initial post), I now get the following measurements (for different averaging modes, numbers of averages, windows, etc.). Note how the y-axis for the magnitude only spans ±3 dB, and the measured frequency response is even flatter. Essentially both modes work well, with coherent averaging being consistently slightly better than mag/phase. Since the fixed sine at 100 kHz is not coherent with the excitation from W1, it averages away with enough averaging. Great success!

Example 2: Characterizing feedback loops in-situ
Conceptual overview: in this specific application, the process is a laser being stabilized to a frequency reference, so I mostly care about the disturbance attenuation S. When I measure it, I get the following: at low frequencies, disturbances are properly attenuated. At intermediate frequencies of a few hundred kHz, disturbances are actually amplified a bit (which can't be helped, due to Bode's integral theorem). At high frequencies, disturbances are unaltered because the loop gain has rolled off, so the loop effectively doesn't do anything anymore. In this specific loop, there's a lot of intrinsic noise around 20 kHz due to internal modulation in the laser. As you can see, the measurement of S gets tripped up there, but as averaging is increased the correct result is recovered (whereas before your changes it was just hopeless). So coherent averaging also proved very useful here.

Further questions/comments:
1) mag/phase averaging: Is 'mag/phase' averaging different from the "old" averaging (i.e. what used to be done by default)? Previously (see my first post) the normal averaging did not help, but now both 'mag/phase' and 'coherent' averaging help a lot! (If you changed the averaging to use the complex Fourier coefficients, i.e. to also use the phase, as I think you should, then this would explain the improvement.)
2) Comment on mag/phase vs coherent averaging: based on your explanation above, I suspect that the order of averaging and transforming doesn't really matter (since the Fourier transform is linear; a quick numerical check of this follows below), and thus averaging first, as done in coherent averaging, should be more efficient, since you only have to apply a window and calculate the FFT once. If I'm not mistaken, the main reason coherent averaging performs better in my measurements is that it is triggered on W1 (which sounds like a reasonable thing to do by default, but I haven't thought too much about it).
3) Coherence function: how did you implement the custom trace showing the coherence function?
4) Windowing: thanks for adding the option to choose different window functions in the NA, I've already found it quite useful!

Overall, great job, and thank you very much again, Robin
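A quick numerical check of the linearity claim in point 2) — synthetic single-channel records, nothing Waveforms-specific:

```python
import numpy as np

rng = np.random.default_rng(1)
records = rng.standard_normal((5, 1024))   # five triggered captures of one channel

# Because the FFT is linear, averaging the time records first and transforming
# once gives the same spectrum as transforming each record and averaging after.
spec_avg_first = np.fft.rfft(records.mean(axis=0))
spec_avg_last = np.fft.rfft(records, axis=1).mean(axis=0)

print(np.allclose(spec_avg_first, spec_avg_last))  # True (up to float rounding)
```

Note that this equivalence only holds if the captures are triggered consistently (e.g. on W1), which is presumably why the triggering matters more than the averaging order itself.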
  9. Hi @attila Thank you very much, I'll give it a go (not sure if I'll manage this week, or if it will be next week).
  10. (Short answer because I have to run to a meeting.) Another reason that just came to my mind why the coherent averaging might not work as well as expected is the order in which the measurements are taken. Essentially it boils down to reversing the order of the two for loops in my pseudocode above, which can make a big impact depending on the coherence time between W1 and W2 (see the sketch below):

Scenario A: Average first, then move on to the next frequency. If I'm not gravely mistaken, this is what is currently implemented. Crucially, if W1 and W2 have decent coherence on the time scale required to take all the measurements at a fixed frequency, then coherent averaging will not help! If both W1 and W2 are derived from usual lab devices, then I'd expect that ultimately they are each referenced to some quartz oscillator from which all the timing/clocks are derived. In that case, I'd expect good coherence on short time scales (i.e. from the few ms it takes to do the first measurement at f1 to the second measurement at f1), and coherent averaging will NOT help.

Scenario B: First measure all the frequencies once, then repeat N times to average. With this order, the time between two measurements at the same frequency f1 goes up drastically, say to several seconds. So if W1 and W2 come from separate devices with different quartz oscillators, presumably they will have had a better chance to 'drift apart' on this time scale, which should then enable coherent averaging to lead to a big improvement in measurement quality.

How easy would it be to test this, i.e. reverse the order of the loops? Would it be possible for me to get the modified version of Waveforms that you're using above, to play around with it in the lab and try to debug things (I use Win64)?
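In Python-flavoured pseudocode, the two orderings look like this (`capture_and_fft` and `frequencies` are placeholders for illustration, not Waveforms API):

```python
# Scenario A: average at each frequency before stepping (short time between the
# repeated captures at a fixed f, so a coherent disturbance survives the average).
def sweep_average_first(frequencies, n_avg, capture_and_fft):
    results = {}
    for f in frequencies:
        captures = [capture_and_fft(f) for _ in range(n_avg)]  # back-to-back
        results[f] = sum(captures) / n_avg
    return results

# Scenario B: sweep all frequencies once, then repeat the whole sweep N times
# (seconds between revisits of the same f, letting independent sources drift
# apart in phase so their contribution averages away).
def sweep_repeat_sweeps(frequencies, n_avg, capture_and_fft):
    results = {f: 0.0 for f in frequencies}
    for _ in range(n_avg):
        for f in frequencies:
            results[f] += capture_and_fft(f) / n_avg
    return results
```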
  11. Hi, @attila Thanks for the patience and the more detailed explanations. The fact that the coherence function doesn't look at all like expected indicates to me that we're still missing something. I'd have expected the coherence to be close to 1 everywhere except at the frequency where W2/Wext is active; there it should drop from 1 down to maybe 0.5 or so. However, in the first screenshot above, the blue trace is always near 1 and never drops. In the second screenshot it's even weirder, with the baseline starting at 0.5 rather than 1, and the coherence getting better rather than worse at the disturbance frequency.

I think this might be one of the keys. If I'm not mistaken (and I have to admit that I myself am not an expert on these things either), it should rather be done in the following order:

For every frequency in the stepped sine, do:
  For every measurement k in the average, do:
    1. Capture Ch1 = x_k(t), Ch2 = y_k(t)
    2. Apply the window (potentially optional, but let's keep it for now)
    3. Calculate the FFTs:
       X_k(f) = fft(window(x_k(t)))
       Y_k(f) = fft(window(y_k(t)))
  Then, averaging over k:
    4. Calculate G_yx, G_xx and G_yy:
       G_yx = average_over_k( Y_k(f) X_k*(f) )   (note the conjugate on X, so that the phase comes out as phase(Y) - phase(X))
       G_xx = average_over_k( X_k(f) X_k*(f) ) = average_over_k( |X_k(f)|^2 )
       G_yy = average_over_k( |Y_k(f)|^2 )
       Note that X and Y are complex numbers, and that one has to be slightly careful here with the usual annoyance of one-sided vs two-sided FFTs/spectra.
    5. Calculate mag/phase:
       mag = |G_yx| / G_xx
       phase = phase(G_yx)   (G_xx is real and non-negative, so it contributes no phase)
    6. Calculate the coherence (optional):
       coh = |G_yx|^2 / ( G_xx * G_yy )

Can you try it in this order? I'm reasonably optimistic that this should work better! A runnable sketch of these steps follows below.
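A minimal runnable sketch of steps 1–6 for a single stepped-sine frequency (synthetic captures in place of the real Ch1/Ch2 data; the Hann window and the DUT gain of 0.5 are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
fs, n, n_avg, f0 = 102_400.0, 4_096, 32, 5_000.0   # f0 falls exactly on a bin
t = np.arange(n) / fs
w = np.hanning(n)                        # step 2: window
bin0 = int(round(f0 * n / fs))           # FFT bin at the excitation frequency

g_yx = g_xx = g_yy = 0.0
for _ in range(n_avg):
    x = np.sin(2 * np.pi * f0 * t)                 # step 1: "capture" x_k, y_k
    y = 0.5 * x + 0.1 * rng.standard_normal(n)     # DUT gain 0.5 plus incoherent noise
    X = np.fft.rfft(w * x)                         # step 3: windowed FFTs
    Y = np.fft.rfft(w * y)
    g_yx = g_yx + Y * np.conj(X) / n_avg           # step 4: averaged spectra
    g_xx = g_xx + np.abs(X) ** 2 / n_avg
    g_yy = g_yy + np.abs(Y) ** 2 / n_avg

h = g_yx / g_xx                                    # step 5: mag/phase estimate
coh = np.abs(g_yx) ** 2 / (g_xx * g_yy)            # step 6: coherence, in [0, 1]
print(abs(h[bin0]), np.angle(h[bin0]), coh[bin0])  # ~0.5, ~0 rad, close to 1
```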
  12. Hey @attila, Thank you very much for giving this a shot, and especially for doing so very quickly!

Debugging coherent averaging
Based on your screenshots, I agree that the currently observed improvement is mediocre at best. However, from my experience and intuition, I'd expect a much bigger improvement. To me, that leaves us with three main possibilities:

1) My intuition could be wrong. Maybe my intuition is not so good after all, or some subtle assumption fails to be met, and what you show really does represent all we can hope to achieve. Currently, I still hope that this is not the case, though ;).

2) The software implementation could be wrong. How does the Waveforms software actually calculate the FFT and then the trace C2 = mag(Ch2/Ch1)? Is there any windowing involved in the FFT (which could lead to spectral leakage)? From experience playing around with the Waveforms software, having a denser frequency axis usually resulted in better-resolved features and narrower 'frequency ranges' around f1 being used in the calculation of Y(f1)/X(f1), with f1 being the excitation frequency and Y(f1) and X(f1) being the (complex) Fourier coefficients at those frequencies, averaged (?) over some frequency range (???). I've found the following reference useful regarding the fine details of how to calculate and implement things such as G_yx, G_xx and H = G_yx/G_xx: Bendat, Julius S. and Piersol, Allan G. Random Data: Analysis and Measurement Procedures (2010). There it is discussed in what order things should be averaged, how the uncertainties should scale with more averaging, etc.

3) The test measurement on the hardware could be wrong. How exactly did you generate and measure the traces? How did you sum W1 and W2? One possible explanation: if you run W1 and W2 from the same device, they might be phase-coherent, so W2 won't average away.

General note
Currently, the Network Analyzer implements a digital stepped-sine measurement (the digital counterpart to the old analog swept-sine measurements), and hence uses a narrow-band excitation. Therefore, if all we're interested in is the linear response, the calculation could be a lot simpler, since we essentially only have to take care of signals at the excitation frequency (rather than the full spectrum): rather than calculating FFT(y)/FFT(x), we can simply calculate the Fourier integrals numerically, and do this only for the excitation frequency (see the sketch below). Computationally this is very simple, and it circumvents a lot of common headaches with the FFT such as windowing, leakage, and inconveniently spaced frequency points. It also already nicely rejects a lot of noise sources. A nice discussion of an implementation using this approach can be found in: Abramovitch, Daniel Y. "Built-in stepped-sine measurements for digital control systems." 2015 IEEE Conference on Control Applications (CCA).

How to proceed
Personally, I suspect that we're still missing some subtle detail, and I would really like to make this work (assuming my intuition holds, and that it can still be improved a lot further). I'd gladly re-run some measurements with a modified Waveforms version to verify your findings. Also, I'd be up for a video call: early in the debugging, when the mistake could be almost anywhere, I suspect a direct discussion would be way more efficient than the kind of guesswork above. I'd be available during reasonable hours for European time zones. How would you like to proceed? Thanks in advance, Robin
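A minimal sketch of the single-frequency Fourier-integral idea (synthetic data; this follows the general stepped-sine technique, not Waveforms' actual implementation):

```python
import numpy as np

def fourier_coefficient(sig, f0, fs):
    """Numerically evaluate the Fourier integral of sig at the single frequency f0."""
    t = np.arange(sig.size) / fs
    return np.mean(sig * np.exp(-2j * np.pi * f0 * t))  # scale cancels in the ratio

fs, f0 = 100_000.0, 5_000.0
t = np.arange(50_000) / fs                    # 0.5 s -> 2500 full periods of f0
x = np.sin(2 * np.pi * f0 * t)                # excitation
y = 0.5 * np.sin(2 * np.pi * f0 * t - 0.3)    # DUT response: gain 0.5, -0.3 rad
y = y + np.sin(2 * np.pi * 7_777.0 * t)       # strong off-frequency disturbance

H = fourier_coefficient(y, f0, fs) / fourier_coefficient(x, f0, fs)
print(abs(H), np.angle(H))   # ~0.5 and ~-0.3 rad; the disturbance is rejected
```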
  13. Hey, First, I want to congratulate you on the Analog Discovery kit and the Waveforms software. The hardware is decent, well-documented and affordable (especially before the covid price hike). However, what really makes it stand out for me is the Waveforms software, which simply works well, is modern and convenient to use, and is reasonably open and extendable (unfortunately, all three of these points are often sorely lacking in test & measurement equipment). As a consequence, this has led my lab to buy a dozen or so ADs over the years.

Background
Recently, I've started using the network analyzer capability of the AD2 extensively, measuring the frequency response of various devices both in open-loop operation and in closed-loop configuration, in order to fine-tune PI(D) controller parameters in-situ, with bandwidths reaching up to a few MHz. There the AD2, and especially the recent ADP3X50, nicely fill a mostly unaddressed niche: traditionally, this kind of measurement would have been done with a two-channel FFT analyzer (such as the venerable HP 3563A control systems analyzer). Unfortunately, most FFT analyzers only seem to go up to 100 kHz at most, so not quite far enough for my applications. At higher frequencies there is of course the option of traditional network analyzers, but they often don't work all that well below 10 MHz, and their 50 Ohm input and output impedances are a big nuisance to interface with. The AD2 and ADP3X50 nicely fill this gap, which is why I recently bought an ADP3450, hoping to finally fill the void.

Feature request: Coherent averaging for the network analyzer
While the NA functionality has already proven quite useful, it is currently still missing one crucial feature: coherent averaging. Let me explain the motivation step by step.

In the simplest, noiseless case we can measure the transfer function as H = Y/X, where X and Y are the Fourier transforms of x(t) and y(t), as calculated by the FFT. If I understand the documentation correctly, the Waveforms software implements exactly this, and gets the magnitude and phase of the frequency response function by calculating magnitude = |Y/X| and phase = (angle(Y) - angle(X)).

In reality, there will always be various additional noise sources, as shown below, and then this naive averaging fails. Using coherent averaging, however, we can form a better estimate of the transfer function by using only the part of the output Y that is coherent with the input excitation X, i.e. H = E[XY*] / E[XX*]. The crucial part is the numerator, the averaging of E[XY*] = E[|X||Y|exp(i*phi)]: the part of Y that has a stable phase relationship with X (i.e. is phase-coherent) is retained, whereas the part of Y that is incoherent gets averaged away over multiple measurements.

Another very useful feature is that one can then introduce the so-called coherence function gamma, which tells us how much of the measured response y is actually due to the stimulus x, as compared to noise or other unaccounted excitations. This makes it a very useful diagnostic tool indicating when the measurement can be trusted; in experimental modal testing, for example, it is used extensively.

Demonstration of the problem in practice
I've performed a quick test measurement to demonstrate how the simple, naive method gets tripped up, and to confirm that the Waveforms software currently doesn't seem to do any coherent averaging.
Here's the test setup: the DUT is simply a differential amplifier with unity gain and a bandwidth of ~1 MHz, so well above the frequency range measured below. Note the fixed excitation at 25 kHz, representing v(t) from the previous, general scenario. Using the ADP3450 to measure the frequency response, I get the following: note how the measurement gets tripped up around 25 kHz, and that introducing averaging doesn't help.

Summary
In summary, I'd love it if the following features got implemented (in descending order of importance):
1) Coherent averaging for the NA, as outlined above (a small numerical demonstration of the principle follows below)
2) The ability to display the coherence function gamma^2
3) The ability to nicely tap into the coherently averaged G_xy, G_xx, etc. (for export and more complicated further analysis outside of Waveforms)

Reading the release notes of the Waveforms software, I'm led to believe that the required groundwork for implementing coherent averaging has mostly been done behind the scenes, since it's mentioned that the FFT already calculates the phases (there just doesn't seem to be a way to plot/access them). If I'm not mistaken, the main required change would be to use this phase information from the FFT in the averaging process. I'd gladly help in any way I can in order to get coherent averaging implemented! Best, Robin Oswald
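To demonstrate the core claim numerically — that the incoherent part of Y averages away in E[XY*] while naive magnitude averaging stays biased — here is a small synthetic sketch of the principle (a random-phase disturbance standing in for the fixed 25 kHz source):

```python
import numpy as np

rng = np.random.default_rng(3)
n_avg = 200

# Per-capture Fourier coefficients at the probed frequency: X is the excitation
# (fixed phase reference), Y is the true response (gain 1.0) plus a disturbance
# of equal strength whose phase is random from capture to capture.
X = np.ones(n_avg)
Y = 1.0 * X + np.exp(1j * rng.uniform(0, 2 * np.pi, n_avg))

H_naive = np.mean(np.abs(Y) / np.abs(X))          # magnitude-only averaging
H_coherent = np.mean(X * np.conj(Y)) / np.mean(np.abs(X) ** 2)

print(H_naive)          # ~1.27: biased, the disturbance never averages away
print(abs(H_coherent))  # ~1.0: the incoherent part cancels in E[XY*]
```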