xc6lx45

Everything posted by xc6lx45

  1. OK, never mind the soundcard idea 🙂 as said, I had not studied your requirements in depth. Just wondering: you want to use the AD2 as a module, not as test instrumentation? With the budget you mentioned, subcontracting a customer-specific FPGA image is not unheard of (this comment applies to FPGA-based hardware in general, not Digilent specifically). If your requirements are somewhat flexible: the Xilinx 7 series devices (Artix, Zynq, Spartan-7) have two built-in, independent AD converters with 1 MSPS, 12(+4) bits. The ubiquitous FTDI USB chip can manage close to 30 Mbit/s. If you can re-spec to those limitations, modules start at $50, give or take (e.g. CMOD A7). I implemented a fun project that demonstrates dual-channel 700 ksps streaming data acquisition via USB on the CMOD A7 module (can use that e.g. to quickly check whether streaming is still reliable when other software is running on the PC). Probably the performance is too low for you, but the hardware option would be very cheap. If it matters, roundtrip latency slightly below 125 microseconds is possible this way (this uses my own USB interface code; it's not for the faint of heart: 5x faster, but by necessity much more complex than plain UART-based IO). I'm personally wary of building on top of USB with its DNA deeply rooted in consumer electronics (GPIB, anybody?). That said, if I had to come up with USB streaming above 30 Mbit/s (standard FTDI chip in serial mode) but still cheaper than your expensive 2 Gbit/s bus, and have it quickly demonstrated as a prototype, I'd grab one of those modules and use it in parallel mode. When that works reliably, making a custom PCB with converters, some FPGA module as glue logic and the FTDI chip seems like a routine design job, with the expected difficulties mostly on the software side (PC-side code, RTL and the protocol in between). My $0.02...
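     For scale, a quick sanity check on those streaming numbers (the 16-bit word size is my assumption; the 12(+4)-bit samples pack naturally into 16-bit words):

         2\,\text{ch} \times 700\,\text{ksps} \times 16\,\text{bit} = 22.4\,\text{Mbit/s} < 30\,\text{Mbit/s}

     which is why dual-channel 700 ksps fits under the FTDI chip's ~30 Mbit/s ceiling.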
  2. Hi, I hope someone will be able to solve your problem. That said ... >> So my capture time was almost 5 times too long. I confirmed that even with two pulses the system cannot transfer fast enough ... improve performance? Is there something wrong in my code? Am I misunderstanding the capabilities of the AD2? If the system had a USB 3.0 ... I suspect your job calls for hardware in a different price segment. Take this just as a general opinion, without having studied the details of your specific problem. An NI 5840 (or 5820 if you don't need its RF frontend) would turn it into a routine job, more or less. Obviously priced very differently; I'm just pointing it out as an instrument that I know works well. Some PCI AD/DA card would probably cover the functionality you need, but think twice about synchronizing multiple devices with accuracy sufficient for TOA measurements, and about long-term stability - here be dragons... If I needed something cheap, I'd look at 192k sound cards; they can do reliable streaming data at a low price. But the bandwidth will limit your Doppler range severely, plus there's a null at DC (offsetting the Tx/Rx LO might fix that, but yes, it starts to look like a bad hack).
  3. Cool! 7 Series should have sufficient horsepower for a decent convolution reverb / cabinet model, for example.
  4. LVDS in CMOD S7

     Disclaimer: I haven't tried this myself and did not dig into the documentation. But I think you need an HP bank (not HR), and the question is whether a low-end board supports it. Check the warnings. See https://www.xilinx.com/support/documentation/user_guides/ug471_7Series_SelectIO.pdf page 91: "The LVDS I/O standard is only available in the HP I/O banks." There used to be LVDS 3.3, but not on 7 series anymore, AFAIK (differential modes, yes, but not "LVDS" specifically).
  5. PS: not saying that the idea of 400 MHz SPI is necessarily meaningful 🙂 I'm just talking about the internals up to the FPGA pin, not what's realistic on a PCB with 1/10-inch headers etc.
  6. Hi, quick answer: for someone with FPGA design experience it is not a big job (the PC-side interface for configuring parameters etc. is probably more work than the SPI core itself). BUT, don't underestimate the difficulty of getting there ("Python?" - I read between the lines that you need this as just a tool, not as a task that may swallow a man-month. Check early what is realistic: whether or not the hardware can do the job is not necessarily the right question, since an FPGA can do "anything", more or less, given enough HDL work). The FPGA has built-in PLLs, so you can generate any frequency you like. 100 MHz is fairly straightforward, 200 MHz will need some extra effort, and 400 MHz should be feasible in the hands of someone who knows the hardware (SERDES). As many slaves as you have IO pins for (check the voltage, though). SPI is fairly simple, so it sounds like a fairly compact project (UART for PC IO, state machines for control around a bunch of shift registers; see the sketch below). But again, reading between the lines, be careful about what is achievable with the amount of time you are willing to invest.
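     A minimal sketch of the "bunch of shift registers" idea, as a plain mode-0 SPI master (module and signal names are made up for illustration; running SCLK at 100+ MHz would add the PLL/SERDES details not shown here):

         module spi_master #(
             parameter WIDTH = 16                 // bits per transfer
         ) (
             input  wire             clk,         // system clock; SCLK = clk/2
             input  wire             start,       // pulse to begin a transfer
             input  wire [WIDTH-1:0] tx_data,     // shifted out MSB first
             output reg  [WIDTH-1:0] rx_data = 0, // collected from MISO
             output reg              busy   = 1'b0,
             output reg              sclk   = 1'b0,
             output wire             mosi,
             input  wire             miso,
             output reg              cs_n   = 1'b1
         );
             reg [WIDTH-1:0]         shreg = 0;
             reg [$clog2(2*WIDTH):0] cnt   = 0;   // counts half-bit periods

             assign mosi = shreg[WIDTH-1];        // current output bit

             always @(posedge clk) begin
                 if (start && !busy) begin
                     shreg <= tx_data;
                     cnt   <= 2*WIDTH;
                     busy  <= 1'b1;
                     cs_n  <= 1'b0;
                 end else if (busy) begin
                     sclk <= ~sclk;
                     if (!sclk)                   // rising SCLK edge: sample MISO (mode 0)
                         rx_data <= {rx_data[WIDTH-2:0], miso};
                     else                         // falling SCLK edge: shift out next bit
                         shreg <= {shreg[WIDTH-2:0], 1'b0};
                     cnt <= cnt - 1'b1;
                     if (cnt == 1) begin          // transfer done
                         busy <= 1'b0;
                         cs_n <= 1'b1;
                         sclk <= 1'b0;
                     end
                 end
             end
         endmodule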
  7. just an observation: Most people would probably implement this around a shift register, instead of muxing a single bit. Functionally it would be the same (of course, assuming timing is met).
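     For illustration, a sketch of that shift-register structure (names are mine, not from the original post):

         module serializer #(
             parameter W = 8
         ) (
             input  wire         clk,
             input  wire         load,   // capture din; otherwise keep shifting
             input  wire [W-1:0] din,
             output wire         dout    // serial output, MSB first
         );
             reg [W-1:0] shreg = 0;
             always @(posedge clk)
                 if (load) shreg <= din;                    // parallel load
                 else      shreg <= {shreg[W-2:0], 1'b0};   // shift left by one
             assign dout = shreg[W-1];                      // no bit counter, no mux
         endmodule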
  8. >> Don't know why my tower doesn't work but my laptop does Might be a bad USB card or interface. For example, the polyfuses in the voltage supply line are notorious for their memory effects, which can even be permanent. Also, the specified number of mating cycles for USB connectors is ridiculously low. USB from the PC shop is consumer technology: a USB card can be had for $5 while an industrial GPIB card is $500, even though the former is arguably the more advanced technology 🙂
  9. Hi, purely as an idea: if you have the 35T variant, you could try to run this (link below; use the latest link from the last posts). You may need to install FTDI's standard driver, if not present. It uploads a bitstream and exercises USB with regular traffic; e.g. set the ADC rate close to 1 MSPS and it'll use about 25 MBit/s. The application will immediately close with an error if USB fails. There may be a virus warning, e.g. with Bitdefender, which is to the best of my knowledge a false positive.
  10. XC7A15T-1CPG236

    Hi, can you find a CMOD A7 version with the bigger FPGA (35T)? They should be mechanically identical.
  11. One hint: you can zoom in (I think it's the middle mouse wheel) and at some level the primitives become visible. You'll see that for a small design the utilization is very sparse, so the picture does not give a good visual indication of used resources (use the design report instead). Using inference (via the "*" operator) is generally a good idea. Read the tool output / warnings; they'll tell you a lot about what happens internally. The DSP48 is happiest if there are a few registers in the path, so the tools can absorb them into its internal hardware registers (see the sketch below). This matters if you intend to go beyond maybe 100 MHz, give or take. The same applies to BRAM: inference works pretty well and it's time well spent to figure out how it works.
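     A minimal sketch of what "a few registers in the path" looks like in Verilog (widths chosen to fit one DSP48; names are for illustration):

         module mul_pipelined (
             input  wire               clk,
             input  wire signed [17:0] a,
             input  wire signed [17:0] b,
             output reg  signed [35:0] p = 0
         );
             reg signed [17:0] a_r = 0, b_r = 0;
             reg signed [35:0] m = 0;
             always @(posedge clk) begin
                 a_r <= a;           // input registers (can map to AREG/BREG)
                 b_r <= b;
                 m   <= a_r * b_r;   // multiply register (can map to MREG)
                 p   <= m;           // output register (can map to PREG)
             end
         endmodule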
  12. Have you considered installing Linux, e.g. in a virtual machine or on a Raspberry Pi? $0.02: learning interview questions is an immensely popular topic on the web, but I sometimes wonder how much point there really is. So the skillset you are presenting has been obtained from a book over a few days. Surprisingly, the company is still more than happy to hire you. Question is ... do YOU want to work for this company? Lots of people hate their job. Every day. And it starts here ... think.
  13. What I'd do is use the template for an AXI-Lite slave (it appears in the graphical view as an RTL block and is recognized in the address editor) and route one of the template registers to a LED, as sketched below. Then use this to "pipe-clean" the tools; for example, go through the whole PS7 setup once without templates (keep DRAM as it's fairly complex, but the other settings I'd rather understand when working with the chip). Can't say whether the "various" ways are that much more interesting, but, for example, you need an MIO-driven LED to emit blink codes from the FSBL because the PL isn't awake yet (hopefully you don't need this anyway; take it as advance warning that running a design from flash is not necessarily as straightforward as e.g. on Artix...)
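     The LED hookup itself is a one-liner inside the generated slave (assuming the template's default register naming; "led" is a port you'd add to the module yourself and wire out in the block design):

         // inside the generated AXI-Lite slave template:
         // slv_reg0 is the first software-writable register
         assign led = slv_reg0[0];   // write 1/0 from the ARM to toggle the LED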
  14. Can't help you with that specific tutorial, sorry. But... >> I've uninstalled/reinstalled everything ... that working mode sounds familiar, and I'm not convinced it leads anywhere. Zynq is a very complex system. Without a systematic way of debugging, it's hopeless (IMHO). As soon as there are two issues at the same time (and this usually happens), any attempt at unsystematic trial-and-error (like reinstalling etc.) is doomed to fail. One reason is that there are four Vivado versions a year, and porting a "complex" (e.g. camera) demo from one to the other is a non-trivial undertaking. Especially if you intend to stick with it for a longer time, you might have a better start studying the various ways to blink a LED (from RTL, PL GPIO from the ARM, MIO GPIO, from the handoff hook in the bootloader, from your own .elf file). This may sound absurd, but it lets you tackle the problems one by one. That said, maybe someone else can help with the specific question...
  15. >> thus i have a tendency to over-pipeline my design Read the warnings. If a DSP48 has pipeline registers it cannot utilize, it will complain. Similar for BRAM: it needs to absorb some levels of registers to reach nominal performance (see the sketch below). I'd check the timing report. At 100 MHz you are maybe at 25..30 % of the nominal DSP performance of an Artix, but I wouldn't aim much higher without good reason (200 MHz may still be realistic, but the task gets much harder). A typical number I'd expect could be four cycles for a multiplication in a loop (e.g. IIR). Try to predict resource usage: if FFs are abundant, I'd make that "4" an "8" to leave some margin for register rebalancing. An "optimal" design will become problematic in P&R when utilization goes up (but obviously, FF count is only a small fraction of BRAM bits, so I wouldn't overdo it).
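     A sketch of BRAM inference with the extra output register mentioned above (single port; names are for illustration):

         module bram_simple #(
             parameter AW = 10,            // 2^10 x 16 fits one RAMB18
             parameter DW = 16
         ) (
             input  wire          clk,
             input  wire          we,
             input  wire [AW-1:0] addr,
             input  wire [DW-1:0] din,
             output reg  [DW-1:0] dout
         );
             reg [DW-1:0] mem [0:(1<<AW)-1];
             reg [DW-1:0] ram_q;
             always @(posedge clk) begin
                 if (we) mem[addr] <= din;
                 ram_q <= mem[addr];       // the RAM's registered read port
                 dout  <= ram_q;           // absorbed into the BRAM output register
             end
         endmodule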
  16. isn't that 512 Mbps, with USB 2.0 ~ 480 Mbps max? And I wouldn't bet on achieving that. You could set up a simple Ethernet UDP dummy data generator on Zynq with what would be a few lines of C code on regular Linux, to check how much of that "gigabit" is actually achievable. But I suspect your requirements are deep in PCI territory. BTW, I'm still looking for the 128 IOs on this board.
  17. Reading between the lines (apologies if I'm wrong, this is based solely on four sentences you wrote so prove me wrong): I see someone ("need the full speed ...") who'll have a hard time in embedded / FPGA land. For example, what is supposed to sit on the other end of the cable? A driver, yes, but for what protocol and where does that come from? Have you considered Ethernet? It's relatively straightforward for passing generic data and you could use multiple ports for different signals to keep the software simple. UDP is less complex than TCP/IP and will drop excess data (which may be what I want, e.g. when single-stepping code on the other end with a debugger).
  18. Hi, when you load a Zynq bitstream from Vivado, there are a few things that happen "automagically", like powering up the level shifters on the PS-PL interface (clock!). Zynq doesn't support FPGA-only operation, but you can manage with an autogenerated FSBL (first-stage boot loader). In Vivado: "Export hardware" with "include bitstream" checked. Then open SDK ("File / Launch SDK") and create an FSBL project; compiling it results in a .elf file. Put this into a boot image (.bin or .mcs) at the first position, and the FPGA bitstream at position 2. Flash the result, and the FPGA image will load at powerup when QSPI mode is jumpered. Note: the SDK flash utility does not work reliably (sometimes fails with an error) when Vivado is still connected to the hardware. I'm sure there are tutorials, but this as a two-line answer.
  19. You could use the multiplication operator "*" in Verilog (similarly in VHDL). For example, scale the "mark" sections (level 10) by 1024 and the "space" sections (level 3) by 307 (~0.3 * 1024). This will increase the bit width from 12 to 22+ bits, so discard the lowest 10 bits and you are back at 12 bits. Pay attention to "signed" signals at input and output, otherwise the result will be garbled.
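     A sketch of the above in Verilog (module and signal names are mine; assumes a signed 12-bit sample stream):

         module scaler (
             input  wire               clk,
             input  wire               mark,      // 1 = "mark" section, 0 = "space"
             input  wire signed [11:0] din,       // 12-bit signed sample
             output reg  signed [11:0] dout = 0
         );
             // gains: 1024 = unity after the shift; 307 ~ 0.3 * 1024
             wire signed [11:0] gain = mark ? 12'sd1024 : 12'sd307;
             wire signed [23:0] prod = din * gain;   // 12 x 12 signed multiply
             always @(posedge clk)
                 dout <= prod >>> 10;                // drop 10 LSBs, keep sign
         endmodule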
  20. Hi, could this be a signed/unsigned issue? BTW, once you've got it working: for XADC rates (up to 1 MSPS) a "systolic" architecture using 45 DSP slices seems overkill. It can probably be done in one slice (you need 89 MMAC/s per channel, and it's not hard to clock a DSP slice at 200 MHz; see the arithmetic below). And the XADC actually outputs 16 bits. The lowest 4 bits are not specified (e.g. no guarantees for no missing codes etc.) but are useful on average. I would probably use them. Otherwise, for a non-dithered ADC, the quantization error becomes highly correlated with the input signal at low levels and causes weird artifacts.
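     The back-of-the-envelope arithmetic (reading the 89 MMAC/s figure as an 89-tap filter at the XADC's 1 MSPS is my assumption from context):

         89\,\text{taps} \times 1\,\text{MSPS} = 89\,\text{MMAC/s per channel}, \qquad \frac{200\,\text{MMAC/s (one DSP48 at 200 MHz)}}{89\,\text{MMAC/s}} \approx 2.2

     so a single time-shared DSP48 has headroom for roughly two such channels.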
  21. Hi, a conventional spectrum analyzer shows the average power observed within the resolution bandwidth (RBW) filter. I need a correction factor from the RBW filter to the equivalent noise bandwidth of the filter, either from the analyzer's manual or via the analyzer's firmware (e.g. a "noise density" marker or the like that performs the conversion automatically). I can ignore this factor, but then it won't be fully accurate. The difference is between definitions, e.g. -3 dB bandwidth vs. an equivalent brickwall filter with the same area under the curve. For a "raw" FFT-based spectrum analyzer, the RBW is about the inverse of the capture time, e.g. 1 ms capture = 1 kHz RBW. Knowing the power in the RBW, I scale down to power in 1 Hz; e.g. with a 300 kHz RBW (noise) bandwidth, divide the power (in units of Watts) by 300000. Then convert to voltage over 50 ohms and there is the RMS value (since spectrum analyzer power readings mean "dissipated over the SA's 50-ohm input resistance"). This for the vendor-independent basics...
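     The same recipe written out as formulas (numbers from the 300 kHz example above):

         P_{1\,\text{Hz}} = \frac{P_{\text{RBW}}}{\text{ENBW}} \left(= \frac{P_{\text{RBW}}}{3 \times 10^{5}\,\text{Hz}} \text{ here}\right), \qquad V_{\text{RMS}} = \sqrt{P_{1\,\text{Hz}} \cdot 50\,\Omega}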
  22. Custom IP

     Is it possible that you simply need to right-click the ports and select "make external", or the like?
  23. Question is, what's "better". If I'd use standard AXI blocks and SDK, the motivation would be that other people can easily work with the design, using the higher-level description. This would be my strong preference. Also, you can have working code within about 60 seconds. If your project is large enough to warrant a more "efficient" implementation (keeping in mind that Xilinx is motivated to sell silicon by the square meter, not to use it at maximum efficiency), use custom RTL blocks and direct access via volatile unsigned int pointers. But for "controlling it via a PC" this is simply not relevant.
  24. That's the spirit. I'm just commenting because the Hilbert transform looks like a wonderful tool for its conceptual simplicity, and textbooks get carried away with it. And of course it does have valid technical applications. But it can easily turn into the steamroller approach to making apple puree, and DSP tends to become unforgiving a few minutes into the game, when "implementation effort" plays against "signal quality" on linear vs. logarithmic scales. At the end of the day, it boils down to the same idea: elimination of the negative frequencies, so that cos(omega t) = 1/2 (exp(-i omega t) + exp(i omega t)) becomes the constant-envelope exp(i omega t).
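     Written out, the idea is:

         \cos(\omega t) = \tfrac{1}{2}\left(e^{-i\omega t} + e^{i\omega t}\right) \;\longrightarrow\; \tfrac{1}{2}\,e^{i\omega t} \quad \text{(negative-frequency term removed: constant envelope)}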
  25. But this would be an example where it's trivially easy to generate the reference tone in quadrature. Multiply with the complex-valued reference tone, lowpass-filter to suppress the shifted negative-frequency component, and there's my analytic ("one-sided spectrum") signal for polar processing. Now, to be honest, I've never designed a guitar tuner, but I suspect that this, with a decimating lowpass filter (no point in maintaining an output rate much higher than the filter bandwidth), can be orders of magnitude cheaper, because I'm designing for the tuner's capture bandwidth (say, 10 % of the high E string fundamental would be ~30 Hz) instead of the full audio frequency range.
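     The mixing step, written out (x(t) is the envelope of an input near the reference frequency omega_0):

         x(t)\cos(\omega_0 t)\cdot e^{-i\omega_0 t} = \tfrac{1}{2}\,x(t) + \tfrac{1}{2}\,x(t)\,e^{-2i\omega_0 t}

     The lowpass keeps the first term (the complex envelope) and suppresses the image at 2*omega_0; decimating the result down to a rate of order the ~30 Hz capture bandwidth is what makes it so cheap.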