Search the Community

Showing results for tags 'dsp'.



Found 7 results

  1. I am trying to investigate how sparsity (the number of zeros in the filter coefficients) affects the implementation of a filter on an FPGA. Specifically, I am interested in the number of BRAM blocks and DSP slices used for a sparse filter versus a non-sparse filter of the same length. I assume the filter coefficients are symmetric and the number of taps is odd. I have been experimenting with the FIR Compiler GUI and have observed the following.

     For one output per cycle (no overclocking):
     • The filter coefficients [1 2 3 4 0 1 2 3 4] use 9 DSP slices. Shouldn't this be 4 DSP slices if we exploit the symmetry of the coefficients?
     • The filter coefficients [1 0 0 0 0 0 0 0 1] use 5 DSP slices. There are only two multipliers here, so why do we need 5 DSP slices?
     • In both cases, no BRAM units are used.

     When I increase the clock frequency, the number of DSP slices used goes down (although not by a clean factor: doubling the clock frequency does not halve the DSP count), but the number of BRAM units used goes up. For example, the coefficients [1 2 3 4 0 1 2 3 4] use 6 DSP slices (down from 9) and 5 BRAM blocks (previously 0 in the non-overclocked case) when overclocked by a factor of 2. Similarly, overclocking by a factor of 3 reduces the DSP count to 4, with a BRAM count of 3. Is there some relation between the number of DSP slices and the BRAM count?

     Note: I have also posted this question to the Xilinx forums, but have not received any feedback.
  2. Hi, I am wondering whether the scope channel of the Analog Discovery can record voice data from a microphone and do signal processing such as averaging, FFT, etc. Suppose I have a mic and a mic amplifier. Can I feed the output from the mic amplifier straight into the scope input and then see the waveform in scope mode? If so, is it possible to save the voice data for, say, 10 seconds and then do the processing (averaging, then an FFT to extract the frequency components)? If so, what is the best way to start? Also, if I create a complex waveform in the AWG (waveform generator), can I use the scope to store the waveform and then do digital signal processing (DSP) on it? Thanks for any ideas and comments. Best regards, Fernando
  3. I am doing a project with some audio DSP. For this I am using the audio codec on the Zybo. The first thing I want to do is record and play back audio with a small delay between input and output. To speed up the development process, I decided to start from the DMA Audio demo, but I'm lacking some information, or rather some documentation. Does anyone know which registers are being referenced here ("Audio controller registers") in the code below? Since I2S is a hardware interface standard, there should not be any registers defined by it, so I think it has something to do with the DMA, but I can't find any documentation that fits. Does anybody know which document would give insight into these registers?

     #define AUDIO_CTL_ADDR XPAR_D_AXI_I2S_AUDIO_0_AXI_L_BASEADDR

     //Audio controller registers
     enum i2sRegisters {
         I2S_RESET_REG            = AUDIO_CTL_ADDR,
         I2S_TRANSFER_CONTROL_REG = AUDIO_CTL_ADDR + 0x04,
         I2S_FIFO_CONTROL_REG     = AUDIO_CTL_ADDR + 0x08,
         I2S_DATA_IN_REG          = AUDIO_CTL_ADDR + 0x0c,
         I2S_DATA_OUT_REG         = AUDIO_CTL_ADDR + 0x10,
         I2S_STATUS_REG           = AUDIO_CTL_ADDR + 0x14,
         I2S_CLOCK_CONTROL_REG    = AUDIO_CTL_ADDR + 0x18,
         I2S_PERIOD_COUNT_REG     = AUDIO_CTL_ADDR + 0x1C,
         I2S_STREAM_CONTROL_REG   = AUDIO_CTL_ADDR + 0x20
     };
  4. Hello, I'm building this filter by generating a .COE file in Matlab, which I then use in the FIR Compiler IP. Here are two screenshots of the settings. Do you know if the difference between the two pictures, in terms of magnitude, is just a display artifact, or if there is real amplification introduced by the FIR Compiler? If it's the latter, do you know how to fix it so that I generate the same filter as I designed in Matlab, i.e. without gain? Kind regards, Yannick
  5. Hello, I'm currently trying to implement a simple low-pass filter using the FIR Compiler available in the IP catalog. My design is very basic: I've generated a sine wave using the DDS IP with these settings:
     • Configuration options: Phase generator and SIN COS LUT
     • System clock: 100 MHz
     • Mode of operation: Standard
     • Output frequency: 1.2 MHz
     • Output width: 8 bits

     Now I want to apply a low-pass filter and see how it behaves in simulation, using the analog waveform style. To generate the coefficients, I am using Matlab's filterDesigner with these specifications:
     • Lowpass, FIR (equiripple)
     • Fs: 2.7 MHz
     • Fpass: 1.3 MHz
     • Fstop: 1.35 MHz
     • Apass: 1 dB
     • Astop: 40 dB

     Then I generate a .COE file, which I use in the FIR Compiler with these options:
     • Filter type: Single rate
     • Input sampling frequency: 2.7 MHz
     • Clock frequency: 100 MHz
     • Coefficient type: signed, 16 bits
     • Coefficient structure: Inferred
     • Input data type: signed, 8 bits
     • Output rounding mode: Full precision

     The screenshot shows what I obtain, and it seems it is not working very well. Can anybody explain what is going on? Am I making a mistake when setting up the filter? Thank you very much for your help!
  6. I recently purchased the Zybo Zynq development board, along with a Pmod MIC3, so that I could hopefully take in audio from the Pmod port, process it in the ARM processor, and finally output it through the audio output port. I'm having trouble getting the audio routed with the IP in Vivado. I'm fairly new to the Vivado IP integrator, but I have some experience creating Verilog projects. Has anyone used the MIC3 with the Zybo, or does anyone know how to get the audio from a MIC3 on one of the Pmod ports into the ARM for processing? Any help would be greatly appreciated.
  7. I've got my new fractal viewer up and running on the new Nexys Video board. I'm just about to upload the source and a .bit file to http://hamsterworks.co.nz/mediawiki/index.php/Mandelbrot_NG_1080i

     It does not use a frame buffer: each pixel is completely recalculated every time it is displayed, allowing super-smooth zooms and pans. Some highlights:
     - Generates DVI-D video at 1080i. It is pretty hard to find examples of how to do 1080i properly.
     - Outputs over DVI-D. Although it can support 24-bit colour, for the moment I am using just 64 different colours, as this aids debugging.
     - Does not use any IP except an unsigned 17x17 multiplier (because it wouldn't infer properly).
     - Each pixel spends 2,106 cycles going through the calculation pipeline before being sent to the display.
     - Uses 'only' 540 DSP blocks, because when I used more it caused the board to reset itself; the chip is capable of about 30% more math.
     - The heatsink gets a little warm...
     - Currently processes down to iteration 162. I'd like it to go higher, but I must first solve why the board is resetting itself.
     - You have the option of moving some of the DSP multipliers into LUTs, which allows you to calculate 'deeper' at the expense of power usage.
     - Performs about 120 billion 36-bit operations per second.

     I've also got a 720p version, and a 640x480 version that calculates three times as 'deep' per pixel, but that doesn't have the magical 1920x1080 resolution.