Everything posted by xc6lx45

  1. xc6lx45

    Verilog Simulator

    >> student/beginners try to use these non-synthesizable constructs to generate "programs" rather than "designs". Students, beginners and most of industry use them for testbenches, making pitiful noises from the pain but using them nonetheless. Yes, Verilog is horrible and could be improved in oh-so-many ways, like most successful languages engineers use to get their work done before the deadline (Perl, Matlab, C, Excel / Visual Basic, ...). I think you are advocating a methodology that is very different from typical industrial use (where designers must deal with a large number of simple problems instead of working on a single hard one for weeks and months). Maybe you should add a disclaimer, like "warning: following this to the letter may make you look really stupid in a non-academic job interview".
  2. xc6lx45

    Verilog Simulator

    As a second opinion, I would not recommend Verilator for learning the language. It does work on Windows (MSYS), but I'd ask for a good reason why you need Verilator in the first place instead of a conventional simulator. Have a look at iverilog / gtkwave: http://iverilog.icarus.com/ It works fine on standard Windows (no need to create a virtual machine). You'd call it through the command line, though (hint: create a .bat file with the simulation commands to keep them together with the project; the above-mentioned MSYS environment is pretty good for this, e.g. use a makefile or shell script).
  3. xc3sprog works for flashing with minor modifications (e.g. adding the IDCODE). I have used it on Artix. Setting up the compile environment is quite a bit of work, though. If that helps: for uploading to volatile memory (until the next power cycle), I've got source code here.
  4. xc6lx45

    Large Spectrum Generation

    (OT) A curiosity: I once tried to use it for PWM, but the DC average observed in the brightness of an LED seemed to depend on the bit position. Has anybody made similar observations?
  5. xc6lx45

    Large Spectrum Generation

    Hi, there are two important corner cases with regard to the power you're outputting:
    - Slow sweep (wobble / FM): the signal is continuous-wave, and its peak-to-average ratio is as low as physically possible. This outputs the highest power within a given amplitude limit and (oversimplified) optimally uses the dynamic range of the signal chain.
    - Single pulse: use a sample stream ... 0 0 0 1 0 0 0 ... (which has a constant spectrum) or the impulse response of a suitable bandpass filter. This gives the shortest possible signal within the limitations of the time-bandwidth product (see e.g. "band-limited pulse" on Wikipedia), but the signal has a high (bad) peak-to-average ratio, thus (oversimplified) requiring higher dynamic range in the signal chain.
    Examples of the two methods are a conventional RF vector network analyzer (continuous wave, at least traditionally) and pulse-based time-domain reflectometry, e.g. Teledyne LeCroy Sparq instruments. What can work well is a bandpass-filtered (pseudo)random sample stream, possibly with additional clipping / filtering rounds to improve PAR. But at the end of the day, it depends on which statistics of the signal matter how much in your application, e.g. autocorrelation, amplitude PDF, actual spectrum shape. Matlab (Octave) would be my tool of choice for algorithm design, not RTL.
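A quick numerical illustration of the two corner cases (not from the original post; the tone frequency, pulse bandwidth and record length are arbitrary choices of mine):

```python
import math

N = 4096
# Continuous-wave tone: an integer number of cycles over the record
cw = [math.sin(2 * math.pi * 101 * n / N) for n in range(N)]

def sinc(x):
    """Normalized sinc, sin(pi x)/(pi x), i.e. a band-limited pulse."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

# Single band-limited pulse, bandwidth 0.2 of the sample rate
pulse = [sinc(0.2 * (n - N // 2)) for n in range(N)]

def par_db(x):
    """Peak-to-average power ratio in dB."""
    peak = max(v * v for v in x)
    mean = sum(v * v for v in x) / len(x)
    return 10 * math.log10(peak / mean)

print("CW PAR:    %.1f dB" % par_db(cw))     # ~3 dB: as low as it gets
print("pulse PAR: %.1f dB" % par_db(pulse))  # ~29 dB: same peak, far less average power
```

The ~26 dB gap is the point: within the same amplitude limit, the continuous wave delivers far more average power, while the pulse buys time resolution at the cost of dynamic range.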
  6. OK, never mind the soundcard idea 🙂 as said, I had not studied your requirements in depth. Just wondering, you want to use the AD2 as a module, not as test instrumentation? With the budget you mentioned, subcontracting a customer-specific FPGA image is not unheard of (this applies to FPGA-based hardware in general, not Digilent specifically).
     If your requirements are somewhat flexible: the Xilinx 7-series devices (Artix, Zynq, Spartan 7) have two built-in, independent AD converters at 1 MSPS, 12+4 bit. The ubiquitous FTDI USB chip can manage close to 30 megabit per second. If you can re-spec to those limitations, modules start at $50, give or take (e.g. CMOD A7). I implemented a fun project that demonstrates dual-channel 700 ksps streaming data acquisition via USB on the CMOD A7 module (you can use that e.g. to quickly check whether streaming is still reliable when other software is running on the PC). Performance is probably too low, but the hardware option would be very cheap. If it matters, a round-trip latency slightly below 125 microseconds is possible this way (this is my USB interface code, but it's not for the faint of heart: 5x faster but by necessity much more complex than plain UART-based IO).
     I'm personally wary of building on top of USB, with its DNA deeply rooted in consumer electronics (GPIB, anybody?). That said, if I had to come up with USB streaming faster than 30 MBit/s (standard FTDI chip in serial mode) but still cheaper than your expensive 2 GBit/s bus, quickly demonstrated as a prototype, I'd grab one of those modules and use it in parallel mode. When that works reliably, making a custom PCB with converters, some FPGA module as glue logic and the FTDI module seems like a routine design job, with the expected difficulties mostly on the software side (PC-side code, RTL and the protocol in between). My $0.02...
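Back-of-the-envelope check that the 700 ksps figure fits under the FTDI limit (my arithmetic; the 16-bits-on-the-wire framing is an assumption, not stated in the post):

```python
channels = 2
rate_sps = 700_000      # per-channel sample rate of the example project
bits_per_sample = 16    # assumption: each 12+4-bit XADC word shipped as 16 bits

payload_mbps = channels * rate_sps * bits_per_sample / 1e6
print(payload_mbps)     # 22.4 Mbit/s, comfortably below the ~30 Mbit/s FTDI serial limit
```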
  7. Hi, I hope someone will be able to solve your problem. That said...
     >> So my capture time was almost 5 times too long. I confirmed that even with two pulses the system cannot transfer fast enough
     >> python
     >> improve performance? Is there something wrong in my code? Am I misunderstanding the capabilities of the AD2? If the system had a USB 3.0, ...
     I suspect your job calls for hardware in a different price segment. Take this just as a general opinion, without having studied the details of your specific problem. An NI 5840 (or 5820 if you don't need its RF frontend) would turn it into a routine job, more or less. Obviously priced very differently; I'm just pointing it out as an instrument that I know works well. Some PCI AD/DA card would probably cover the functionality you need, but think twice about synchronizing multiple devices with accuracy sufficient for TOA measurements, and about long-term stability - here be dragons... If I needed something cheap, I'd look at 192k sound cards; they can do reliable streaming data at low cost. But the bandwidth will limit your Doppler range severely, plus there's a null at DC (an offset Tx/Rx LO might fix that, but yes, it starts to look like a bad hack).
  8. Cool! 7 Series should have sufficient horsepower for a decent convolution reverb / cabinet model, for example.
  9. xc6lx45

    LVDS in CMOD S7

    Disclaimer: I haven't tried this myself and didn't dig into the documentation. But I think you need an HP bank (not HR), and the question is whether a low-end board supports it. Check the warnings. See https://www.xilinx.com/support/documentation/user_guides/ug471_7Series_SelectIO.pdf page 91: "The LVDS I/O standard is only available in the HP I/O banks." There used to be LVDS 3.3, but not on 7 Series anymore, AFAIK (differential modes, yes, but not "LVDS" specifically).
  10. PS: not saying the idea of 400 MHz SPI is necessarily meaningful 🙂 I'm just talking about the internals up to the FPGA pin, not what's realistic on a PCB with 0.1-inch headers etc.
  11. Hi, quick answer: for someone with FPGA design experience it is not a big job (the PC-side interface for configuring parameters etc. is probably more work than the SPI core itself). BUT, don't underestimate the difficulty of getting there. "Python?" - I read between the lines that you need this as just a tool, not as a task that may swallow a man-month. Check early what is realistic: whether or not the hardware can do the job is not necessarily the right question, since an FPGA can do "anything", more or less, given enough HDL work. It has built-in PLLs, so you can generate any frequency you like: 100 MHz is fairly straightforward, 200 MHz will need some extra effort, 400 MHz should be feasible in the hands of someone who knows the hardware (SERDES). You can have as many slaves as you have IO pins for (check the voltage, though). SPI is fairly simple, so it sounds like a fairly compact project (UART for PC IO, state machines for control around a bunch of shift registers). But again, reading between the lines, be careful about what is achievable with the amount of time you are willing to invest.
  12. just an observation: most people would probably implement this around a shift register instead of muxing a single bit. Functionally it would be the same (assuming, of course, that timing is met).
  13. >> Don't know why my tower doesn't work but my laptop does
     Might be a bad USB card or interface. For example, the polyfuses in the voltage supply line are notorious for long (even permanent) memory effects. Also, the specified number of mating cycles for USB connectors is ridiculously low. USB from the PC shop is consumer technology - a USB card can be had for $5 while an industrial GPIB card is $500, even though the former is technologically more advanced 🙂
  14. Hi, purely as an idea: if you have the 35T variant, you could try to run this (link below; use the latest link from the last posts). You may need to install FTDI's standard driver if it's not present. It uploads a bitstream and exercises USB with regular traffic, e.g. set the ADC rate close to 1 MSPS and it'll use about 25 MBit/s. The application will immediately close with an error if USB fails. There may be a virus warning, e.g. with Bitdefender, which is to the best of my knowledge a false positive.
  15. xc6lx45


    Hi, can you find a CMOD A7 version with the bigger FPGA (35T)? They should be mechanically identical.
  16. one hint: you can zoom in (I think it's the middle mouse wheel) and at some level the primitives become visible. You'll see that for a small design the utilization is very sparse, so the picture does not give a good visual indication of used resources (use the design report instead). Using inference (via the "*" operator) is generally a good idea. Read the tool output / warnings; they'll tell you a lot about what happens internally. The DSP48 is happiest if there are a few registers in the path so it can absorb them into its internal hardware registers. This matters if you intend to go beyond maybe 100 MHz, give or take. The same applies to BRAM: inference works pretty well, and it's time well spent to figure out how it works.
  17. Have you considered installing Linux, e.g. in a virtual machine or on a Raspberry Pi? $0.02: learning interview questions is an immensely popular topic on the web, but I sometimes wonder how much point there really is. So the skillset you are presenting has been obtained from a book over a few days. Surprisingly, the company is still more than happy to hire you. The question is ... do YOU want to work for this company? Lots of people hate their job. Every day. And this starts here ... think.
  18. what I'd do is use the template for an AXI-Lite slave (it appears in the graphical view as an RTL block and is recognized in the address editor) and route one of the template registers to an LED. Then use this to "pipe-clean" the tools; for example, go through the whole PS7 setup once without templates (keep the DRAM settings, as they're fairly complex, but the other settings are worth understanding when working with the chip). Can't say whether the "various" ways are that much more interesting, but for example you need an MIO-driven LED to emit blink codes from the FSBL because the PL isn't yet awake (hopefully you won't need this anyway; take it as advance warning that running a design from flash is not necessarily as straightforward as e.g. on Artix...)
  19. can't help you with that specific tutorial, sorry. But...
     >> I've uninstalled/reinstalled everything ...
     that working mode sounds familiar, and I'm not convinced it leads anywhere. Zynq is a very complex system. Without a systematic way of debugging, it's hopeless (IMHO). As soon as there are two issues at the same time (and this usually happens), any attempt at unsystematic trial-and-error (like reinstalling etc.) is doomed to fail. One reason is that there are four Vivado versions a year, and porting a "complex" (e.g. camera) demo from one to the other is a non-trivial undertaking. Especially if you intend to stick with it for a longer time, you might have a better start studying the various ways to blink an LED (from RTL, PL GPIO from ARM, MIO GPIO, from the handoff hook in the bootloader, from your own .elf file). This may sound absurd, but it lets you tackle the problems one by one. But that said, maybe someone else can help with the specific question...
  20. >> thus i have a tendency to over-pipeline my design
     Read the warnings. If a DSP48 has pipeline registers it cannot utilize, it will complain. Similar for BRAM - it needs to absorb some levels of registers to reach nominal performance. I'd check the timing report. At 100 MHz you are maybe at 25..30 % of the nominal DSP performance of an Artix, but I wouldn't aim much higher without good reason (200 MHz may still be realistic, but the task gets much harder). A typical number I'd expect could be four cycles for a multiplication in a loop (e.g. IIR). Try to predict resource usage - if FFs are abundant, I'd make that "4" an "8" to leave some margin for register rebalancing: an "optimal" design will become problematic in P&R when utilization goes up (but obviously FF count is only a small fraction of BRAM bits, so I wouldn't overdo it).
  21. isn't that 512 Mbps, with USB 2.0 topping out at ~480 Mbps? And I wouldn't bet on achieving even that. You could set up a simple Ethernet UDP dummy-data generator on Zynq with what would be a few lines of C code on regular Linux, to check how much of that "gigabit" is actually achievable. But I suspect your requirements are deep in PCI territory. BTW, I'm still looking for the 128 IOs on this board.
  22. Reading between the lines (apologies if I'm wrong; this is based solely on four sentences you wrote, so prove me wrong): I see someone ("need the full speed ...") who'll have a hard time in embedded / FPGA land. For example, what is supposed to sit on the other end of the cable? A driver, yes, but for what protocol, and where does that come from? Have you considered Ethernet? It's relatively straightforward for passing generic data, and you could use multiple ports for different signals to keep the software simple. UDP is less complex than TCP/IP and will drop excess data (which may be what you want, e.g. when single-stepping code on the other end with a debugger).
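A minimal sketch of the UDP dummy-data idea, in Python over loopback for illustration (the port choice and payload size are made up; a Zynq-side sender would be the analogous few lines of C):

```python
import socket
import threading
import time

N_PACKETS, PAYLOAD = 100, 1024

# Receiver socket; let the OS pick a free port on loopback
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
addr = rx.getsockname()
rx.settimeout(2.0)

received = []

def receiver():
    # Count whatever arrives; UDP silently drops anything in excess,
    # which is exactly the behavior discussed above.
    try:
        while len(received) < N_PACKETS:
            data, _ = rx.recvfrom(4096)
            received.append(len(data))
    except socket.timeout:
        pass

t = threading.Thread(target=receiver)
t.start()
time.sleep(0.1)                 # give the receiver a head start

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i in range(N_PACKETS):
    tx.sendto(bytes([i % 256]) * PAYLOAD, addr)   # dummy payload
tx.close()
t.join()
rx.close()
print("received %d of %d packets" % (len(received), N_PACKETS))
```

Timing the receive loop while cranking up the sender rate gives a quick estimate of how much of the link is actually usable, without writing any protocol code.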
  23. Hi, when you load a Zynq bitstream from Vivado, a few things happen "automagically", like powering up the level shifters on the PS-PL interface (clock!). Zynq doesn't support FPGA-only operation, but you can manage with an autogenerated FSBL (first-stage boot loader). In Vivado, "export hardware" with "include bitstream" checked. Then open SDK ("File / Launch SDK") and create an FSBL project. Compiling it results in a .elf file. Put this into a boot image (.bin or .mcs) at the first position, with the FPGA bitstream in position 2. Flash the result and the FPGA image will load at powerup when QSPI mode is jumpered. Note: the SDK flash utility does not work reliably (it sometimes fails with an error) when Vivado is still connected to the hardware. I'm sure there are tutorials, but take this as a two-line answer.
  24. You could use the multiplication operator "*" in Verilog (similarly in VHDL). For example, scale the "mark" sections (level 10) by 1024 and the "space" sections (level 3) by 307. This will increase the bit width from 12 to 22 bits; therefore discard the lowest 10 bits and you are back at 12 bits. Pay attention to "signed" signals at input and output, otherwise the result will be garbled.
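The multiply-and-shift trick above, sketched in Python integer arithmetic (the sample values are made up; in Verilog this is "*" on signed signals followed by dropping the low 10 bits):

```python
def scale_q10(x, k):
    """Multiply a sample by k/1024 using only integer ops (Q10 fixed point)."""
    # Python's >> floors toward -infinity, matching an arithmetic
    # right shift on a signed value in hardware.
    return (x * k) >> 10

print(scale_q10(2047, 1024))   # mark:  2047 * 1.0     -> 2047
print(scale_q10(2047, 307))    # space: 2047 * ~0.2998 -> 613
print(scale_q10(-2048, 307))   # signed input works too -> -614
```

307/1024 ≈ 0.2998, so this approximates a gain of 0.3 to within about 0.1 % without any multiplier-free tricks or dividers.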
  25. Hi, could this be a signed/unsigned issue? BTW, once you've got it working: for XADC rates (up to 1 MSPS), a "systolic" architecture using 45 DSP slices seems overkill. It can probably be done in one slice (you need 89 MMAC/s per channel, and it's not hard to clock a DSP slice at 200 MHz). And the XADC actually outputs 16 bits. The lowest 4 bits are not specified (e.g. no guarantees against missing codes), but they are useful on average; I would probably use them. Otherwise, for a non-dithered ADC, the quantization error becomes highly correlated with the input signal at low levels and causes weird artifacts.