
Lowpass Audio Filter


globieai

Question

7 answers to this question

Recommended Posts

@globieai,

Sadly, building a project that doesn't work is a very common state of affairs among beginning FPGA designers.

This is why, when I built my own audio filter, I also built an FFT-based test bench that would verify that my filter worked.  This included predicting what level of output would be expected as well.

Have you simulated your design?  Can you demonstrate that it works in simulation?  Run sine waves through it and verify that you get sine waves out of it.  You'll find simulation-based debugging much easier to do than hardware debugging.  Even more, I'm a strong proponent of debugging designs using formal verification--something that often ends up faster and easier than simulation-based debugging, while not missing half as many bugs.
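As a rough illustration of the FFT-style check described above--run a sine through the filter, measure the output level, and compare it against the level predicted from the taps--here is a Python sketch. The sample rate and the three-tap filter are made up for illustration; they are not the poster's actual filter.

```python
import numpy as np

fs = 48000                            # sample rate, illustrative only
taps = np.array([0.25, 0.5, 0.25])    # toy lowpass FIR, not the real filter

def measured_gain(freq):
    """Drive a unit sine through the FIR and measure the output amplitude."""
    n = np.arange(4096)
    x = np.sin(2 * np.pi * freq * n / fs)
    y = np.convolve(x, taps, mode="same")
    # Skip the edges where the convolution hasn't settled.
    return np.max(np.abs(y[100:-100]))

def predicted_gain(freq):
    """Predicted |H(f)| computed directly from the tap values."""
    w = 2 * np.pi * freq / fs
    return np.abs(np.sum(taps * np.exp(-1j * w * np.arange(len(taps)))))

for f in (1000, 5000, 20000):
    print(f, measured_gain(f), predicted_gain(f))
```

The point of the exercise is that the measured and predicted levels should agree closely at every test frequency; if the HDL version of the filter disagrees with this kind of prediction, the filter has a bug.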

Dan


Thank you so much @D@n

I could not write a test bench because I am working with audio signals. However, I have verified my convolution individually; I am attaching the convolution's source code and testbench. I prepared the project in Vivado 2018.2 and just used a Nexys A7 board and a simple headset to listen to the output. It compiled without any errors.

When I tried to implement it in the AudioDemo generated from https://github.com/Digilent/Nexys-A7-100T-OOB?files=1, there is significant noise. I have only edited AudioDemo.vhd to add the convolution filter. I tried to use ChipScope and added the capture to the attachments.

There was a critical error:

"[Timing 38-469] The REFCLK pin of IDELAYCTRL Inst_Audio/DDR/Inst_DDR/u_ddr_mig/u_iodelay_ctrl/u_idelayctrl_200 has a clock period of 5.053 ns (frequency 197.917 Mhz) but IDELAYE2 Inst_Audio/DDR/Inst_DDR/u_ddr_mig/u_memc_ui_top_std/mem_intfc0/ddr_phy_top0/u_ddr_mc_phy_wrapper/u_ddr_mc_phy/ddr_phy_4lanes_0.u_ddr_phy_4lanes/ddr_byte_lane_A.ddr_byte_lane_A/ddr_byte_group_io/input_[1].iserdes_dq_.idelay_dq.idelaye2 has REFCLK_FREQUENCY of 200.000 Mhz (period 5.000 ns). The IDELAYCTRL REFCLK pin frequency must match the IDELAYE2 REFCLK_FREQUENCY property."

I also got the filter parameters from the MATLAB Filter Designer tool. There is also an analog LPF on the Nexys A7 board at about 10 kHz, and my filter cutoff is about 5 kHz; however, the noise did not decrease. When I turn back to the default demo, it works correctly. I have attached the constraint file and the MATLAB Filter Designer result.

Best regards.

Chipshope.png

konvolusyon_sinyal.vhd tb_konvolusyon_signal.vhd ornekler_paket.vhd

MATLAB.png

AudioDemo.vhd Nexys-A7-100T-Master.xdc


@globieai,

There's only so much I can tell by looking over your design sources:

  • I have a rule in my own practice that any reads from memory should be done in a process of their own, with nothing else in the process
  • Likewise, any multiplies should be in their own process.  Xilinx's DSPs support a multiply and accumulate in the same cycle, but my rule is intended to 1) make inference simpler, and 2) be more portable across architectures.  These two requirements will force some amount of pipelining on your filter, while also allowing you to increase your system clock speed.
  • You are picking particular bits of your output.  I cannot tell from just looking that you are picking or sign extending the right values.
  • The error regarding the REFCLK is something you need to pay attention to.  The reference clock *must* be at 200 MHz.  It looks like you changed some clock in your design and didn't think through all of the consequences.
  • In any audio application, there's a relationship between the number of clocks required to process audio input samples and the number of clocks between samples.  This isn't apparent from your discussion above.

The proper way to find and fix many of these bugs is through simulation.

10 hours ago, globieai said:

Thank you so much @D@n

I could not write a test bench because I am working with audio signals.

This is the source of your problems.

I don't normally use VHDL myself, or I'd offer more help here.  My favorite simulation tool, Verilator, works very well with Verilog--not VHDL.  It has no problem with audio signals.  I would typically feed a signal into the simulator and write the output to a file that I can then read in Octave.  I'm not sure how you would do this with a VHDL tool, or which VHDL tool you might use for that task.  Others on the forum who work with VHDL, such as @zygot or perhaps even @xc6lx45 (not sure if he uses VHDL), might be better at suggesting the proper simulator and how to go about accomplishing these tasks for a VHDL design.
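The simulator-to-Octave flow described here can be as simple as having the testbench dump one sample per line to a text file, then loading that file for analysis. A Python stand-in for the analysis side (the file name, format, and sample values are all illustrative):

```python
import numpy as np

# Pretend these eight integers came out of an HDL simulation,
# dumped one sample per line -- here, one cycle of a sine wave.
samples = [0, 181, 256, 181, 0, -181, -256, -181]
with open("filter_out.txt", "w") as f:
    for s in samples:
        f.write(f"{s}\n")

# Analysis side: load the dump and look at its spectrum,
# just as you would with Octave's load() and fft().
data = np.loadtxt("filter_out.txt")
spectrum = np.abs(np.fft.rfft(data))
peak_bin = int(np.argmax(spectrum[1:])) + 1   # ignore the DC bin
print("dominant bin:", peak_bin)
```

With a file interface like this, the same analysis script can check output from simulation and from hardware capture alike.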

While it is possible to find and fix bugs within an FPGA design in hardware, doing so can be quite a challenge.  You'll need to set up some tooling, so that you have access to values within the FPGA.  Such tooling would allow you to "see" values within an FPGA.  Xilinx offers an ILA capability for this purpose.  I use a Wishbone Scope myself, although doing so requires that you have a Wishbone (or AXI) bus already existing within your design.  There's also a compressed version of the same that would work nicely for problems where things don't change very fast--such as in any audio problem.  You should be able to simulate your design with the scope installed within it, as well as in hardware, to know that you will be able to properly capture the right values from hardware.

If your simulation was working, and if you had that kind of infrastructure available, I might then suggest

  • That you replace your filter with a square wave generator, to verify that your output works--since you played with the clock in the design you started from, this is no longer certain
  • Once you've verified that a basic square wave works, you should then replace the square wave with a sine wave generator.  Repeat the experiment.  (Still without the filter in place)
  • You should be able to adjust the amplitude and frequency of both square wave and sine wave, and verify that the change affects the output as expected.
  • Only after you've verified your output in isolation would I then turn to the filter.  I would still keep it separate from the input though--first driving the filter with a square wave, and then with a sine wave.
  • I would also recommend capturing audio data (at audio rates!) from the input and writing it to a file.  (You should be able to simulate this before trying it!)  Play a note of some kind into the audio input, and verify that you get the same note in the data produced.
  • Repeat the test above using your filter--but capture the result to a file instead of sending it to the output.
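In a real design the square wave and sine wave sources in the first bullets would be small HDL modules; a Python sketch just to show the intended shape of the two generators, with the adjustable frequency (period) and amplitude knobs the steps above call for (the names and parameters here are mine):

```python
import math

def square_wave(n, period, amplitude):
    """Sample n of a square wave: high for the first half of each period."""
    return amplitude if (n % period) < period // 2 else -amplitude

def sine_wave(n, period, amplitude):
    """Sample n of a sine wave with the same period/amplitude knobs."""
    return round(amplitude * math.sin(2 * math.pi * n / period))

# Both generators expose the two knobs worth exercising: change `period`
# (i.e. frequency) and `amplitude`, and confirm the output follows.
print([square_wave(n, 8, 100) for n in range(8)])
print([sine_wave(n, 8, 100) for n in range(8)])
```

Swapping one generator for the other at the same interface point is what makes the step-by-step substitution in the list practical.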

I guess the bottom line that I'm trying to get across is that there's a lot of work that needs to take place between the problem you are currently struggling with and the solution you want to achieve.

Dan

 


@globieai,

Octave isn't much of a "simulation tool".  It's more of an ad-hoc scripting tool that looks and feels very similar to MATLAB.  It works well for verifying that the output of a simulation is the output it should be.

For simulation tools, let me encourage you to look into GHDL (an open-source simulator) in addition to xsim (Vivado's simulator).

Dan


On 12/25/2019 at 11:24 AM, D@n said:

Octave isn't much of a "simulation tool".  It's more of an ad-hoc scripting tool that looks and feels very similar to MATLAB.  It works well for verifying that the output of a simulation is the output it should be.

Don't really have much to add to what @D@n has mentioned so far. It's been an enjoyable and enlightening read. OCTAVE is a good tool for prototyping and verification. The logic simulator is for RTL and timing verification.

For a project like yours I'd start with writing an OCTAVE script that accomplishes what you want to do. The prototype has to be written to represent the algorithm that you intend to implement with digital logic. This means not using keywords that do all of the magic behind the scenes. (Well, actually a first-cut script might do that just to prove the concept.) Before writing the HDL, the prototype will reflect your design choices; i.e. data structures and algorithms.

Once the prototype is satisfactory, you write the HDL implementation. This gets verified with the Vivado simulator. Be warned that Vivado will be happy to simulate your design using the toplevel entity as the toplevel simulation entity. The result is worthless. You need to write a testbench to exercise your toplevel HDL and test the behaviors that you think need to be tested; that is, in simulation your testbench is the toplevel entity. Testbench code is normally quite different from HDL code meant for synthesis, though it instantiates HDL entities and models written for synthesis. I have written simple behavioral models for external devices as if they were to be synthesized, but that's another discussion.

Your testbench can write output data to a file so that you can compare results to your OCTAVE prototype output. Likely you are using integer math, but signal processing applications often use fixed point. Either can suitably be processed with a verification OCTAVE script. If everything has gone right, your simulation output, hardware output, and prototype will agree, within reason, with each other.
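The prototype-versus-HDL comparison described above, including the fixed-point caveat, can be sketched numerically. Here is a Python stand-in (the tap values, fractional width, and "agree within reason" tolerance are all illustrative assumptions, not anyone's actual design):

```python
import numpy as np

# Floating-point "prototype" of a 3-tap lowpass, as you might first write it.
taps = np.array([0.25, 0.5, 0.25])

# Fixed-point version modelling what the HDL would compute:
# taps scaled to 8 fractional bits, accumulator truncated back down.
FRAC = 8
taps_q = np.round(taps * (1 << FRAC)).astype(int)

def fir_float(x):
    """Prototype: exact floating-point convolution."""
    return np.convolve(x, taps, mode="full")[: len(x)]

def fir_fixed(x):
    """HDL-like: integer convolution, then drop the fractional bits."""
    acc = np.convolve(x, taps_q, mode="full")[: len(x)]
    return acc >> FRAC

x = np.round(1000 * np.sin(2 * np.pi * np.arange(64) / 16)).astype(int)
err = np.max(np.abs(fir_fixed(x) - np.round(fir_float(x))))
print("max |fixed - float| =", err)
```

The two outputs differ only by the truncation of the fractional bits, so they agree "within reason" -- here, within one LSB. The same one-script comparison works whether the fixed-point data comes from the simulator's output file or from a hardware capture.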

And that's the short course in doing a complex programmable logic project for digital, analog, or mixed applications.

[edit] I forgot to mention that ModelSim or ISIM will let you view std_logic_vectors as analog signals in the waveform viewer, if that's appropriate. In theory, I suppose, it would be possible to do all of the steps that I've mentioned in your HDL except for rendering signals, which is a strong point of OCTAVE or SCILAB. Does anyone really want to write their own simulator? I have used the parallel USB interface on my Genesys2, ATLAS, etc. to capture signals from working hardware, using my own C++ application with the Digilent Adept API, which then writes OCTAVE- or SCILAB-formatted data files that can be read into an OCTAVE script for analysis. It's all really quite satisfying... though if I had the gumption to figure out how to write the rendering part I wouldn't need all of those steps... perhaps.

[edit1] Yeah, I'm having difficulties letting this go... If anyone from Digilent is reading this thread, you should understand that one of the reasons why you've been able to sell me boards is by putting decent PC interfaces on them, with useful API libraries for application development. For a number of reasons Ethernet isn't ideal, though I do use a PCIe, FPGA Ethernet PHY equipped board as an alternate route for communications. If all of your boards are going to be ZYNQ based, then unless they have a PCIe or FPGA USB interface directly connected to the logic fabric (USB 3.0 would be nice), they won't be suitable for my typical development flow.

 


Archived

This topic is now archived and is closed to further replies.
