xc6lx45

Members
  • Content Count
    526
  • Days Won
    32

About xc6lx45

  • Rank
    Prolific Poster

Contact Methods

  • Website URL
    https://www.linkedin.com/in/markus-nentwig-380a4575/

Profile Information

  • Gender
    Male
  • Location
    MUC
  • Interests
    RF / DSP / algorithms / systems / implementation / characterization / high-speed PA test and creative abuse of Pedal Steel Guitars

  1. What I'd do is use the template for an AXI-Lite slave (it appears in the graphical view as an RTL block and is recognized in the address editor) and route one of the template registers to a LED. Then use this to "pipe-clean" the tools; for example, go through the whole PS7 setup once without templates (keep the DRAM settings, as they're fairly complex, but the other settings I'd rather understand when working with the chip). I can't say whether the "various" ways are that much more interesting, but, for example, you need a MIO-driven LED to emit blink codes from the FSBL, because the PL isn't yet awake (hopefully you won't need this anyway; take it as advance warning that running a design from flash is not necessarily as straightforward as, e.g., on Artix...). A minimal sketch of the register-to-LED hookup is below.
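    For illustration, a hedged sketch of the RTL-side hookup, assuming the stock Xilinx AXI-Lite slave template with its autogenerated slv_reg0 register; the led port name is an assumption, not part of the template:

        // User-logic section of the generated AXI-Lite slave template.
        // slv_reg0 is the first software-writable template register;
        // driving a LED from its LSB gives you a bit you can poke from
        // the PS to prove the whole AXI path works.
        //
        //   output wire led,        // add to the module port list
        //
        assign led = slv_reg0[0];    // write 1/0 from software to toggle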
  2. Can't help you with that specific tutorial, sorry. But... >> I've uninstalled/reinstalled everything ... that working mode sounds familiar, and I'm not convinced it leads anywhere. Zynq is a very complex system; without a systematic way of debugging, it's hopeless (IMHO). As soon as there are two issues at the same time (and this usually happens), any attempt at unsystematic trial-and-error (like reinstalling etc.) is doomed to fail. One reason is that there are four Vivado versions a year, and porting a "complex" (e.g. camera) demo from one to the other is a non-trivial undertaking. Especially if you intend to stick with it for a longer time, you might have a better start studying the various ways to blink a LED (from RTL, PL GPIO from the ARM, MIO GPIO, from the handoff hook in the bootloader, from your own .elf file). This may sound absurd, but it lets you tackle the problems one by one. That said, maybe someone else can help with the specific question...
  3. >> thus i have a tendency to over-pipeline my design Read the warnings. If a DSP48 has pipeline registers it cannot utilize, it will complain. Similarly for BRAM: it needs to absorb some levels of registers to reach nominal performance. I'd check the timing report. At 100 MHz you are maybe at 25..30 % of the nominal DSP performance of an Artix, but I wouldn't aim much higher without good reason (200 MHz may still be realistic, but the task gets much harder). A typical number I'd expect could be four cycles for a multiplication in a loop (e.g. IIR); see the sketch below. Try to predict resource usage: if FFs are abundant, I'd make that "4" an "8" to leave some margin for register rebalancing. An "optimal" design will become problematic in P&R when utilization goes up (but obviously, FF count is only a small fraction of BRAM bits, so I wouldn't overdo it).
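    A hedged illustration of that "four cycles" rule of thumb (widths and names are my assumptions, not from the post): a multiplier wrapped in enough pipeline stages for the tools to absorb into the DSP48 and rebalance during P&R.

        // Pipelined multiply: input registers, the DSP48's internal
        // M and P stages, plus one spare stage as retiming margin.
        // Four register stages = the four-cycle latency mentioned above.
        module pipelined_mult #(parameter W = 18) (
            input                   clk,
            input  signed [W-1:0]   a, b,
            output signed [2*W-1:0] p
        );
            reg signed [W-1:0]   a_r, b_r;
            reg signed [2*W-1:0] m_r, p_r, p_rr;
            always @(posedge clk) begin
                a_r  <= a;          // input registers (DSP48 A/B regs)
                b_r  <= b;
                m_r  <= a_r * b_r;  // multiplier register (M reg)
                p_r  <= m_r;        // output register (P reg)
                p_rr <= p_r;        // spare stage for register rebalancing
            end
            assign p = p_rr;
        endmodule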
  4. Isn't that 512 Mbps, with USB 2.0 at ~480 Mbps max? And I wouldn't bet on achieving that. You could set up a simple Ethernet UDP dummy-data generator on the Zynq, with what would be a few lines of C code on regular Linux, to check how much of that "gigabit" is actually achievable. But I suspect your requirements are deep in PCI territory. BTW, I'm still looking for the 128 IOs on this board.
  5. Reading between the lines (apologies if I'm wrong; this is based solely on four sentences you wrote, so prove me wrong): I see someone ("need the full speed ...") who'll have a hard time in embedded / FPGA land. For example, what is supposed to sit on the other end of the cable? A driver, yes, but for what protocol, and where does that come from? Have you considered Ethernet? It's relatively straightforward for passing generic data, and you could use multiple ports for different signals to keep the software simple. UDP is less complex than TCP/IP and will drop excess data (which may be what you want, e.g. when single-stepping code on the other end with a debugger).
  6. Hi, when you load a Zynq bitstream from Vivado, a few things happen "automagically", like powering up the level shifters on the PS-PL interface (clock!). Zynq doesn't support FPGA-only operation, but you can manage with an autogenerated FSBL (first-stage boot loader). In Vivado, "Export hardware" with "Include bitstream" checked. Then open SDK ("File / Launch SDK") and create an FSBL project; compiling it results in a .elf file. Put this into a boot image (.bin or .mcs) in the first position, with the FPGA bitstream in position 2 (see the sketch below). Flash the result and the FPGA image will load at power-up when QSPI mode is jumpered. Note, the SDK flash utility does not work reliably (it sometimes fails with an error) when Vivado is still connected to the hardware. I'm sure there are tutorials, but this is the two-line answer.
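    For illustration, that boot-image layout as a bootgen .bif file, assuming the FSBL and the bitstream are named fsbl.elf and system.bit (both filenames are placeholders; the SDK "Create Boot Image" dialog generates the equivalent):

        the_ROM_image:
        {
            [bootloader] fsbl.elf
            system.bit
        }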
  7. You could use the multiplication operator "*" in Verilog (similarly in VHDL). For example, scale the "mark" sections (level 10) by 1024 and the "space" sections (level 3) by 307 (307/1024 is approximately 3/10). This will increase the bit width from 12 to 22 bits, so discard the lowest 10 bits and you are back at 12 bits. Pay attention to "signed" signals at input and output, otherwise the result will be garbled. A sketch follows below.
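    A hedged Verilog sketch of that scaling (the signal names and the mark/space select are assumptions based on the description above):

        wire signed [11:0] sample_in;
        wire               is_mark;    // 1 = mark (level 10), 0 = space (level 3)
        // gains in Q10 fixed point: 1024/1024 = 1.0, 307/1024 ~ 0.3
        wire signed [11:0] gain    = is_mark ? 12'sd1024 : 12'sd307;
        wire signed [23:0] product = sample_in * gain;   // signed 12x12 multiply
        // discard the lowest 10 bits to return to 12 bits
        wire signed [11:0] sample_out = product >>> 10;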
  8. Hi, could this be a signed/unsigned issue? BTW, once you've got it working: for XADC rates (up to 1 MSPS), a "systolic" architecture using 45 DSP slices seems like overkill. It can probably be done in one slice (you need 89 MMAC/s per channel, and it's not hard to clock a DSP slice at 200 MHz); a sketch is below. Also, the XADC actually outputs 16 bits. The lowest 4 bits are not specified (e.g. no guarantees against missing codes), but they are useful on average, and I would probably use them. Otherwise, for a non-dithered ADC, the quantization error becomes highly correlated with the input signal at low levels and causes weird artifacts.
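    A hedged sketch of the one-slice idea: a time-multiplexed multiply-accumulate that serializes the taps instead of instantiating one DSP slice per tap. The 89-tap count, widths, and names are my assumptions for illustration, not the poster's design:

        module mac_fir #(parameter NTAPS = 89, parameter W = 16) (
            input                       clk,
            input                       sample_valid, // one pulse per new sample
            input  signed [W-1:0]       sample_in,
            output reg signed [2*W+6:0] acc,          // headroom for the summation
            output reg                  result_valid
        );
            reg signed [W-1:0] coeff [0:NTAPS-1];     // coefficients loaded elsewhere
            reg signed [W-1:0] delay [0:NTAPS-1];
            reg [6:0] idx;
            reg busy = 0;
            integer i;

            always @(posedge clk) begin
                result_valid <= 1'b0;
                if (sample_valid && !busy) begin
                    for (i = NTAPS-1; i > 0; i = i-1) // shift the delay line
                        delay[i] <= delay[i-1];
                    delay[0] <= sample_in;
                    acc  <= 0;
                    idx  <= 0;
                    busy <= 1'b1;
                end else if (busy) begin
                    // one MAC per clock; maps onto a single DSP48
                    acc <= acc + delay[idx] * coeff[idx];
                    idx <= idx + 1;
                    if (idx == NTAPS-1) begin
                        busy <= 1'b0;
                        result_valid <= 1'b1;
                    end
                end
            end
        endmodule

    At 200 MHz this leaves roughly 200 cycles per 1 MSPS sample, so 89 taps fit with margin.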
  9. Hi, a conventional spectrum analyzer shows the average power observed within the resolution bandwidth (RBW) filter. I need a correction factor from the RBW filter to the equivalent noise bandwidth of the filter, either from the analyzer's manual or via the analyzer's firmware (e.g. a "noise density" marker or the like that performs the conversion automatically). I can ignore this factor, but it won't be fully accurate then; the difference is between definitions, e.g. -3 dB bandwidth vs. an equivalent brickwall filter with the same area under the curve. For a "raw" FFT-based spectrum analyzer, the RBW is about the inverse of the capture time, e.g. 1 ms capture = 1 kHz RBW. Knowing the power in the RBW, I scale down to power in 1 Hz: e.g. with a 300 kHz (noise) bandwidth, divide the power (in units of Watts) by 300000. Then convert to voltage over 50 ohms and there is the RMS value (since spectrum analyzer power readings mean "dissipated over the SA's 50 ohm input resistance"). A worked example is below. This for the vendor-independent basics...
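    A worked example with assumed numbers (the -100 dBm reading is an illustration, not from the original question). In LaTeX notation:

        P_{1\,\mathrm{Hz}} = P_{\mathrm{RBW}} - 10\log_{10}\!\left(\frac{\mathrm{ENBW}}{1\,\mathrm{Hz}}\right),
        \qquad V_{\mathrm{rms}} = \sqrt{P \cdot 50\,\Omega}

    E.g. -100 dBm measured in a 300 kHz noise bandwidth: -100 - 10 log10(300000) = -154.8 dBm/Hz, which is 3.3e-19 W/Hz; then sqrt(3.3e-19 * 50) = 4.1 nV/sqrt(Hz).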
  10. Custom IP

    Is it possible that you simply need to right-click the ports, "make external" or the like?
  11. The question is, what's "better"? If I'd use standard AXI blocks and SDK, the motivation would be that other people can work easily with the design, using the higher-level description. This would be my strong preference. Also, you can have working code within about 60 seconds. If your project is large enough to warrant a more "efficient" implementation (keeping in mind that Xilinx is motivated to sell silicon by the square meter, not to use it at maximum efficiency), use custom RTL blocks and direct access via volatile unsigned int pointers. But for "controlling it via a PC", this is simply not relevant.
  12. That's the spirit. I'm just commenting because the Hilbert transform looks like a wonderful tool for its conceptual simplicity, and textbooks get carried away with it. And of course it does have valid technical applications. But it can easily turn into the steamroller approach to making apple puree, and DSP tends to become unforgiving a few minutes into the game, when "implementation effort" plays against "signal quality" on linear vs. logarithmic scales. At the end of the day, it boils down to the same idea: elimination of the negative frequencies, so that cos(omega t) = 1/2 (exp(-i omega t) + exp(i omega t)) becomes the constant-envelope exp(i omega t).
  13. But this would be an example where it's trivially easy to generate the reference tone in quadrature. Multiply with the complex-valued reference tone, lowpass-filter to suppress the shifted negative-frequency component, and there's my analytic ("one-sided spectrum") signal for polar processing; the math is sketched below. Now, to be honest, I've never ever designed a guitar tuner, but I suspect that this, with a decimating lowpass filter (there's no point in maintaining an output rate much higher than the filter bandwidth), can be orders of magnitude cheaper, because I'm designing for the tuner's capture bandwidth (say, 10 % of the high E string fundamental would be ~30 Hz) instead of audio frequency.
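    In symbols (a sketch of the mixing step, with omega_0 the reference tone; this follows the cosine decomposition in the previous post). In LaTeX notation:

        \cos(\omega t)\, e^{-i\omega_0 t}
          = \tfrac{1}{2}\left( e^{\,i(\omega-\omega_0)t} + e^{-i(\omega+\omega_0)t} \right)

    The lowpass keeps only the first term, 1/2 exp(i(omega - omega_0) t): a one-sided spectrum near DC whose magnitude and phase feed the polar processing directly.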
  14. Enter VHDL-AMS and spend coffee breaks reminiscing about the days when simulation "was" all clock-based... Adaptive time-step control... works for differential equations, so it'll surely work for digital systems, too... strictly monotonic time was yesterday... umm, which "yesterday", the one we just had or the one that hasn't happened yet... Oh well, I digress...
  15. xadc_zynq

    I'd recommend you spend a working week "researching" the electrical-engineering aspects. The ADC may look like just an afterthought to the DSP, but it will require significant engineering resources (plan for several / many man-months). Long is the list of bright-eyed students / researchers / engineers / managers who have learned the hard way that there is a bit more to the problem than finding two boards with the same connector... Hint: check how much latency you can tolerate and research "digitizer" cards for PC (or the PXI platform). If you don't need a closed-loop real-time system, don't design for a closed-loop real-time system.