I'm building this filter by generating a .COE file in MATLAB, which I then load into the FIR Compiler IP. Here are two screenshots of the settings. Do you know whether the difference between the two pictures, in terms of magnitude, is just a display artifact, or whether the FIR Compiler introduces real amplification? If it's the latter, do you know how to configure it so it generates the same filter as I designed in MATLAB, i.e. without gain?
Ok, now that I understand what you wish to do, try this:
Tie ACLKEN on the DDS to 1. This will generate a signal at your system clock rate. It'll also look nice and pretty in the simulator as it comes out of the DDS.
Drive ACLKEN on the FIR Compiler so that it is high for one system-clock cycle, 2.7M times per second. The result won't look so pretty going into the filter any more, but it will be at your desired sample rate.
Tie m_axis_data_tready going into the DDS to 1. This will allow the DDS to free-run, even though you are only looking at samples 2.7M times a second.
Tie m_axis_data_tready going into your FIR to 1 as well. We won't use that backpressure.
Now, on the output of the FIR Compiler, you'll want to capture the data once each time ACLKEN is asserted. This will capture the sampling effects, and allow you to visualize how the sample rate is affecting things.
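The scheme above can be modeled in plain Python to see what the strobing does, independent of the actual IP. This is only a behavioral sketch: the 100 MHz system clock, the DDS frequency, and the divider value are all assumed example numbers, not anything read from your design. The DDS free-runs every clock cycle, while you only keep the samples that line up with the ACLKEN pulse.

```python
import math

F_CLK = 100e6       # assumed system clock frequency (hypothetical)
F_SAMPLE = 2.7e6    # desired sample rate from the ACLKEN strobe
DIVIDE = round(F_CLK / F_SAMPLE)  # system clocks per ACLKEN pulse (37 here)

F_DDS = 1.0e6       # example DDS output frequency
phase_inc = 2 * math.pi * F_DDS / F_CLK

captured = []       # what the FIR (and you) actually see at 2.7 MS/s
phase = 0.0
for cycle in range(1000):
    sample = math.sin(phase)        # DDS free-runs at the clock rate
    phase += phase_inc
    aclken = (cycle % DIVIDE == 0)  # high for one clock, ~2.7M times/s
    if aclken:
        captured.append(sample)     # capture only when ACLKEN is asserted

print(len(captured))  # 28 samples out of 1000 clock cycles
```

The point of the model is just the ratio: the DDS produces a sample every clock, but the decimated stream only sees one in every DIVIDE of them.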
You should see some fascinating things happen as you sweep the DDS from just below to just above half your sample rate. Likewise, you may find that sweeping the DDS from just below to just above 2.7MHz is also quite fascinating.
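To see numerically why those two frequency ranges are interesting, here's a quick sketch of the folding (aliasing) that sampling at 2.7 MS/s produces. This is generic sampling math, not anything specific to the DDS or FIR Compiler IP:

```python
F_SAMPLE = 2.7e6  # sample rate set by the ACLKEN strobe

def apparent_freq(f_in, fs=F_SAMPLE):
    """Frequency a sampled sine tone appears at after aliasing (folding)."""
    f = f_in % fs                      # fold into [0, fs)
    return f if f <= fs / 2 else fs - f

# Just below and just above fs/2 (1.35 MHz) look almost identical:
print(apparent_freq(1.34e6))  # 1.34 MHz, unchanged
print(apparent_freq(1.36e6))  # folds back to 2.7 - 1.36 = 1.34 MHz

# Just below and just above fs both alias down to a very low frequency:
print(apparent_freq(2.69e6))  # 10 kHz
print(apparent_freq(2.71e6))  # 10 kHz
```

So a tone crossing fs/2 appears to "bounce" off half the sample rate, and a tone near fs shows up as a slow beat near DC — exactly the effects you'll see when you sweep the DDS through those ranges.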