
XADC conversion (FFT application)



I am using the Xilinx XADC IP core to feed an FFT, and I have a couple of questions about the XADC sampling rate.

Question 1)

I would like a sampling rate of 1 MSPS, but for a 100 MHz DCLK the actual XADC conversion rate is 961.54 KSPS. The same holds for 50 MHz. I found that a 104 MHz DCLK gives a conversion rate of 1000 KSPS, but generating this clock from the Clocking Wizard IP resulted in a timing failure during implementation. (That is a separate problem; any input on how to tackle it is welcome.)

So I settled for a 100 MHz clock and expected a 961.54 KSPS sampling rate. If I am not wrong, the FFT resolution should be 961.54*10^3/4096 ≈ 234.75 Hz per bin (for a 4096-point FFT).

Backtracking from the scope, there appears to be an offset from the expected sampling rate. For example, 23.5 KHz is expected to fall in the 100th bin but actually falls in the 111th bin. The sampling rate (computed from the FFT resolution formula while observing the output) would then be around 870 KSPS.
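The bin arithmetic above can be sanity-checked with a few lines of Python (a minimal sketch; the sample rate, FFT length, and tone frequency are the values quoted in this question):

```python
# Sanity-check the bin math from Question 1 (values taken from the post).
fs = 961.54e3        # XADC conversion rate reported by the IP core, in samples/s
N = 4096             # FFT length
f_in = 23.5e3        # test tone frequency in Hz

resolution = fs / N                  # Hz per bin
expected_bin = f_in / resolution     # where the tone should land
print(f"resolution ≈ {resolution:.2f} Hz/bin")   # → resolution ≈ 234.75 Hz/bin
print(f"expected bin ≈ {expected_bin:.1f}")      # → expected bin ≈ 100.1

# Working backwards: if the tone actually lands in bin 111,
# the effective sample rate must be lower than nominal.
observed_bin = 111
implied_fs = f_in * N / observed_bin
print(f"implied rate ≈ {implied_fs/1e3:.1f} KSPS")  # → implied rate ≈ 867.2 KSPS
```

The implied rate of about 867 KSPS matches the ~870 KSPS figure quoted above, so the observed shift is consistent with the effective sample rate differing from the 961.54 KSPS the core reports.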

Question 2)

Would the sampling rate change if I used a 50 MHz DCLK instead of 100 MHz? The IP core indicates that the actual sampling rate would still be 961.54 KSPS (the same as with the 100 MHz clock), but I observed a shift in the FFT output yet again. This time the sampling rate (computed from the FFT resolution formula) falls around 835 KSPS.

Please help!

P.S. - In my design I used an AXI4-Stream register slice as a pipeline stage to account for the latency of the multiplication and addition operations on the FFT output, so that the control signals from xfft_0 appear at the same time as the data. Frames are sent at the same rate (100 MHz) at which the FFT is operated, i.e. BRAM read frequency = FFT CLK = 100 MHz.


Question 3)

My FFT output appears almost as expected (except for the constant offset in the frequency bins). After every 4095th bin, bin value 4080 repeats (for a certain interval, until the next 0), with a peak at this value. I do not understand the reason behind this. Please provide some insight on this as well.



The trick to debugging an FFT is to separate out the components, and to debug each separately.

  1. For example, are you certain you are sending the frequency you think you are sending into the ADC?  Try copying ADC output data into a buffer, and then FFT that buffer both with the Xilinx FFT as well as a MATLAB (or Octave) FFT.  Do the two match?
  2. If your source for the frequency of the incoming waveform is the frequency setting on your waveform generator, then you have too many unknowns in your system.  Consider, for example, what would happen if the 100MHz system clock rate isn't quite at the 100MHz you are expecting.
  3. It is possible to generate arbitrary clock frequencies on an FPGA.  Doing so typically requires a PLL, which is a whole separate discussion.
  4. You haven't mentioned any type of windowing.  Without proper windowing, the measured frequency will be highly susceptible to out of band content.  Getting pushed around by a bin or two would not be uncommon.  This is another reason why you need to make certain that the FFT input is a pure tone for this type of test, rather than the input from an A/D.
  5. A peak at 4095, for a 4096 point FFT, could easily be spillage from bin 0.  It's not uncommon, and I've seen it often.
  6. Information at zero frequency is hard to get rid of.  Even if you properly high-pass the input, there's a whole art and science required to keeping DC from creeping back into your signal processing operations.
  7. You might wish to count signal clocks (VALID && READY) from the XADC to make certain you are getting the frequency you think you are.
  8. Changing your system clock rate should not adjust the outputs of the FFT.  If it does, it's an indication of a bug in your setup.  Some things to look for include the possibility of an EMI signal entering the external wire (another good reason for testing with an internal source), or bad handshaking (something I've seen often).
  9. AXI register slices are not the same as single-clock delays.  They serve a different purpose.  It is better to process the stream and produce a separate (guaranteed) aligned output stream than to trust a register slice to always give you a single clock delay.  (As I implement register slices myself, there's no delay unless VALID && !READY.)
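The offline cross-check from points 1 and 4 can be sketched as follows (a minimal Python/NumPy sketch; the sample rate, FFT length, and tone frequency are assumptions taken from the question, and the synthesized sine stands in for a captured XADC buffer):

```python
import numpy as np

# Offline cross-check: FFT a known pure tone both with and without a window,
# the way you would FFT a captured XADC buffer in MATLAB/Octave/NumPy.
fs = 961.54e3        # nominal XADC conversion rate, samples/s
N = 4096             # FFT length
f_in = 23.5e3        # test tone, Hz (expected near bin 100)

n = np.arange(N)
x = np.sin(2 * np.pi * f_in * n / fs)   # stand-in for captured XADC samples

# Rectangular (no window) vs Hann-windowed spectra: find the peak bin in each.
peak_rect = np.argmax(np.abs(np.fft.rfft(x)))
peak_hann = np.argmax(np.abs(np.fft.rfft(x * np.hanning(N))))
print(peak_rect, peak_hann)  # both land at bin 100 for this clean tone
```

If the Xilinx FFT output for the same buffer does not put the peak where this offline FFT does, the problem is in the hardware path (handshaking, framing, clocking) rather than in the signal itself.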

I haven't double-checked thoroughly, but I think that should just about answer most of your questions above, or at least set you on a footing to go find the true answer.



Dear @[email protected]

I would like to seek your comment on this question: 

Frequency of the xfft aclk: 50 MHz.

In the design below I am trying to compute the magnitude of the FFT output (squaring through a multiplier and taking the square root with CORDIC). I am probing the signal bus magnitude_out, which carries the FFT magnitude data, and tuser, which indexes the bin number corresponding to the FFT output.



The resolution of the FFT is sampling rate/FFT size: 961.54 KSPS/4096 ≈ 234.75 Hz.

However, I observe from direct probing that every frequency component is scaled by about 0.8. That is, 100 KHz is expected in bin 426, but the probed index falls in bin 341. 10 KHz is expected around bin index 42.6, but the output probe indicates bin 34 (≈ 0.8 × 42.6). This linear relationship is observed across all bin indices. I am not sure of the reason behind this scaling factor. What do you think could be a possible cause, and is there anything you suggest I check?
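The scaling can be quantified with a quick script (a sketch; the bin numbers are the observations quoted above). Note that tones landing in *lower* bins than expected implies an effective sample rate *higher* than the nominal 961.54 KSPS:

```python
# Quantify the ~0.8 bin scaling (expected and observed bins from the post).
N = 4096
fs_nominal = 961.54e3
res = fs_nominal / N                      # ≈ 234.75 Hz per bin

for f_in, observed_bin in [(100e3, 341), (10e3, 34)]:
    expected_bin = f_in / res             # where the tone should land
    scale = observed_bin / expected_bin
    implied_fs = f_in * N / observed_bin  # sample rate that would explain the bin
    print(f"{f_in/1e3:.0f} KHz: scale ≈ {scale:.2f}, "
          f"implied rate ≈ {implied_fs/1e3:.0f} KSPS")
# → 100 KHz: scale ≈ 0.80, implied rate ≈ 1201 KSPS
# → 10 KHz: scale ≈ 0.80, implied rate ≈ 1205 KSPS
```

Both observations are consistent with an effective sample rate near 1.2 MSPS rather than 961.54 KSPS, which may help narrow down where the rate assumption and the actual data rate diverge.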

Thank you very much!

3 hours ago, [email protected] said:


That's the kind of error I would expect from getting the AXI-stream signaling wrong.  I can't tell from your diagram if you've gotten that right or not.


Thanks @[email protected]!

By AXI-stream signaling, do you mean the handshaking signals between the FFT and the post-FFT processing blocks? (Perhaps a problem with the AXI-Stream register slice?)

If it were a fixed offset I could attribute it to the latency of the multiplier and CORDIC blocks, but a fixed scaling factor has me worried. For the former problem, I could add latency to the index signal magnitude_tuser so that both the index and magnitude_tdata have equal latency. I am unsure how to start debugging the scaling error, though. Any guidance would be of immense help! Grateful for your kind support and patience!
