## Question

## ATIF JAVED

I am trying to implement decimation on an FPGA and in MATLAB.

For this task I chose the following design parameters:

Filter type = FIR (Hamming window method)

Filter order = 30

Decimation factor = 10

Input sample rate = 2 MS/s

Output sample rate = 200 kS/s

Normalized cutoff = 1 / decimation factor
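As a sanity check, a coefficient set with these parameters can be reproduced outside MATLAB. Below is a minimal pure-Python sketch of a windowed-sinc design, assuming a lowpass response (implied by the single cutoff) and fir1-style conventions (cutoff normalized to Nyquist, taps scaled for unity DC gain); the function names here are my own, not MATLAB's.

```python
import math

def hamming(num_taps):
    # Symmetric Hamming window, matching MATLAB's hamming(N)
    return [0.54 - 0.46 * math.cos(2 * math.pi * n / (num_taps - 1))
            for n in range(num_taps)]

def fir_lowpass(order, cutoff):
    # Windowed-sinc lowpass; cutoff is normalized to Nyquist (fir1 convention),
    # so the ideal impulse response is cutoff * sinc(cutoff * (n - order/2)).
    w = hamming(order + 1)
    h = []
    for n in range(order + 1):
        t = n - order / 2.0
        ideal = cutoff if t == 0 else math.sin(math.pi * cutoff * t) / (math.pi * t)
        h.append(ideal * w[n])
    dc = sum(h)                       # scale for unity gain at DC,
    return [c / dc for c in h]        # as fir1 does for lowpass designs

taps = fir_lowpass(30, 1.0 / 10)      # order 30, cutoff = 1/decimation factor
```

A linear-phase design like this has symmetric taps, which is what gives the filter its constant group delay of order/2 = 15 input samples.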

First I implemented it in MATLAB using this sequence:

1. Filter
2. Handle the filter delay
3. Downsample

Using the above sequence I successfully implemented it in MATLAB.
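The three MATLAB steps above can be sketched in pure Python (the helper name, placeholder taps, and placeholder input are mine; the real coefficients would come from the MATLAB design):

```python
def fir_filter(x, h):
    # Direct-form FIR: y[n] = sum_k h[k] * x[n-k], output same length as input
    return [sum(h[k] * x[n - k] for k in range(len(h)) if 0 <= n - k < len(x))
            for n in range(len(x))]

order, factor = 30, 10
group_delay = order // 2                   # 15 samples for a linear-phase FIR
h = [1.0 / (order + 1)] * (order + 1)      # placeholder taps (moving average)
x = [1.0] * 100                            # placeholder DC input

y = fir_filter(x, h)                       # 1) filter
aligned = y[group_delay:]                  # 2) handle the filter delay
decimated = aligned[::factor]              # 3) downsample
```

Step 2 simply discards the first order/2 filtered samples so that `decimated[k]` lines up with `x[k * factor]`.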

For the FPGA I use the FIR Compiler core, into which I paste the same coefficients generated by MATLAB.

The problem I face is that the FIR Compiler core gives you the decimated output directly, without handling the filter delay.

So what should I do if I want to handle this filter delay in the Xilinx flow?

FIR Compiler core sequence:

1. Filter
2. Downsample
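The difference between the two sequences can be made concrete in a short pure-Python sketch (placeholder taps and input are mine): with decimation factor M = 10 and group delay D = 15, the delay-compensated stream picks y[D], y[D+M], y[D+2M], ..., while the core emits y[0], y[M], y[2M], .... Because D is not a multiple of M here, the two streams sample different phases of the same filtered signal.

```python
def fir_filter(x, h):
    # Direct-form FIR: y[n] = sum_k h[k] * x[n-k], output same length as input
    return [sum(h[k] * x[n - k] for k in range(len(h)) if 0 <= n - k < len(x))
            for n in range(len(x))]

M, D = 10, 15                        # decimation factor, group delay (order/2)
h = [1.0 / 31] * 31                  # placeholder taps
x = [float(n) for n in range(200)]   # placeholder ramp input

y = fir_filter(x, h)
matlab_out = y[D:][::M]              # MATLAB sequence: filter -> delay -> downsample
core_out = y[::M]                    # FIR Compiler sequence: filter -> downsample

# matlab_out[k] picks y[D + k*M]; core_out[k] picks y[k*M]
```

In other words, the compensated output is the same filtered stream offset by a fixed D input-rate samples, which is one way to reason about where that delay could be accounted for downstream of the core.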
