Everything posted by xc6lx45

  1. You might have a look at the low end of the frequency range, especially if your application is sensitive to group delay variations. DC-coupled is even more difficult.
  2. Hi, if you search the forum you may find a few similar requests. You may not be aware that the FTDI/JTAG solution is licensed; please search the forum (I haven't heard of anyone using the FT4232H, though). For a plain bitstream upload you could adapt this project (the bitstream uploader is a small part of it; just delete the rest). Alternatively, there are other FTDI/JTAG solutions, e.g. xc3sprog, but none of them will provide you deep integration with the Xilinx tools. The quickest way might be to break out the electrical JTAG interface, get a licensed module and integrate it
  3. Yes, it's just a roundabout way to express it via the speed of light (usually with the effective epsilon_r of the dielectric as extra parameter, then it's really a physical line length). For AD frequency range, using time seems a good choice, IMHO. For example, taking a random Rohde manual: https://cdn.rohde-schwarz.com/pws/dl_downloads/dl_common_library/dl_manuals/gb_1/z/zvl_1/ZVL_Operating_en_09.pdf Page 137 "Phase Delay/El. Length"
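The conversion the manual describes can be sketched numerically. A minimal sketch, where c is the vacuum speed of light and the eps_r value for FR-4 is an assumed example:

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def electrical_length_m(phase_delay_s, eps_r=1.0):
    """Physical line length implied by a phase delay, given the
    effective relative permittivity eps_r of the dielectric."""
    return C0 * phase_delay_s / math.sqrt(eps_r)

print(electrical_length_m(1e-9))             # 1 ns in vacuum: ~0.3 m
print(electrical_length_m(1e-9, eps_r=4.3))  # same delay in FR-4 (eps_r assumed): shorter physical line
```

With eps_r = 1 this is exactly the "electrical length in vacuum" convention; plugging in the effective eps_r turns it into a physical line length.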
  4. The red "X" in "Inter-clock" paths is a dead giveaway ("inter", Latin, "from one to the other"): there are two clocks. They aren't simply co-periodic, therefore any timing analysis between them will be ... difficult. The solution is: get rid of the unintended clock.
  5. OK I think I got your point. You want to zero out some baseline delay with a phase that's linear with frequency. On a RF VNA the feature would be labeled "electrical length correction" or the like, if that's of any use to locate the parameter.
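As a numerical sketch of that correction (not AD's actual implementation): multiplying the measured response by exp(+j*2*pi*f*tau) removes a baseline delay tau, i.e. a phase term linear in frequency. The 50 ns delay below is an assumed example:

```python
import numpy as np

def remove_baseline_delay(freq_hz, s21, tau_s):
    # add back the linear phase 2*pi*f*tau that the delay removed
    return s21 * np.exp(1j * 2 * np.pi * freq_hz * tau_s)

f = np.linspace(1e6, 10e6, 5)
tau = 50e-9                               # assumed baseline delay to zero out
s21 = np.exp(-1j * 2 * np.pi * f * tau)   # a pure delay, standing in for the measurement
corrected = remove_baseline_delay(f, s21, tau)
print(np.angle(corrected))                # ~0 rad across the whole sweep
```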
  6. >> The docs say NF value of FFT is the sum of rms noise in all the bins OK, strictly speaking about the math (not AD's implementation), one problem here is that we want the readings to be consistent regardless of whether a sine signal is exactly on an FFT bin frequency or just halfway between two bins. The typical solution is to use a time-domain windowing function before the FFT that "blurs" the FFT's output power spectrum. But when the instrument is calibrated to give accurate spot frequency readings (so the power of a sine wave is accounted for by a single point from the FFT, not an integ
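A quick numpy sketch of the on-bin vs. between-bin inconsistency (illustration only, not AD's implementation): with a rectangular window the peak reading drops by almost 4 dB halfway between bins, while a Hann window reduces that "scallop loss" to about 1.4 dB.

```python
import numpy as np

N = 1024

def peak_db(f0_bins, window):
    # sine at f0_bins (in units of FFT bins), windowed; peak FFT magnitude in dB
    n = np.arange(N)
    x = np.sin(2 * np.pi * f0_bins * n / N) * window
    return 20 * np.log10(np.abs(np.fft.rfft(x)).max())

for name, win in [("rect", np.ones(N)), ("hann", np.hanning(N))]:
    scallop = peak_db(100.0, win) - peak_db(100.5, win)
    print(f"{name}: {scallop:.2f} dB scallop loss")
```

The windowed readings are more consistent, at the price of smearing the sine's power over several bins, which is exactly the calibration trade-off discussed above.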
  7. Hi, if I had to solve the problem, I would definitely calibrate with a test signal, for example a sine wave at -20 dBFS and a bandpass noise signal with equivalent power (I'd leave maybe 20 dB headroom for peaks), starting from Gaussian noise, e.g. randn() in Octave. This is a differential measurement, so it can be done with near-unlimited accuracy. The math has so many possible pitfalls that I would always run a measurement (if only to validate the calculation).
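A sketch of generating that signal pair (numpy standing in for Octave's randn(); the dBFS scaling convention is my assumption, with a full-scale sine taken as 0 dBFS):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1 << 16
p_target = 0.5 * 10 ** (-20 / 10)   # -20 dB below a full-scale sine (power 0.5)

# sine at -20 dBFS, an integer number of cycles so the power estimate is exact
sine = np.sqrt(2 * p_target) * np.sin(2 * np.pi * 1000 * np.arange(n) / n)
# Gaussian noise scaled to the same mean power; peaks of ~4-5 sigma stay inside full scale
noise = np.sqrt(p_target) * rng.standard_normal(n)

delta_db = 10 * np.log10(np.mean(noise ** 2) / np.mean(sine ** 2))
print(f"power difference: {delta_db:.3f} dB")   # ~0 dB
```

Because only the *difference* between the two readings matters, any absolute-scale error in the instrument cancels, which is why the measurement can be near-unlimited in accuracy.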
  8. Actually, I'm wondering what you consider "misleading" about your result. This is what a network analyzer will show when you set it up as you did. Some thoughts: - use a linear frequency axis instead of logarithmic (and a linear frequency sweep, at least on a conventional CW-swept VNA) - you need a higher density of points so the eye can unwrap the phase in the 6 kHz region (for an algorithm this might even be enough, but it's visually distracting). Right now, it looks like the direction changes, but that's a visual artifact from connecting the dots within a single 360-degree zone.
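What the eye should do is what np.unwrap does, provided the points are dense enough that each phase step stays under 180 degrees. A sketch with an assumed 1 ms delay standing in for the device:

```python
import numpy as np

tau = 1e-3                        # assumed delay: 10 full turns of phase by 10 kHz
f = np.linspace(0, 10e3, 201)     # 50 Hz steps -> ~18 deg per point, unwrapping is unambiguous
wrapped = np.angle(np.exp(-1j * 2 * np.pi * f * tau))   # what the display shows: wraps at +/-180 deg
unwrapped = np.unwrap(wrapped)    # monotonic again; the "direction changes" were wrap artifacts
print(unwrapped[-1] / (2 * np.pi))   # ~ -10 turns
```

With too few points (phase steps over 180 degrees per point) neither the eye nor the algorithm can unwrap reliably, which is the point about point density above.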
  9. Hi, based on the information from the picture, the lowpass at V_P/V_N has a cutoff frequency of around 560 kHz, then -6 dB / octave. Unless the manual is wrong, I'd guess there is your anti-aliasing filter. My first thought would be to tune it down with e.g. 2x1 kOhm resistors, depending on the highest frequency of interest. Otherwise it's not very effective at the first alias frequency (but good to suppress general high-frequency noise)
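A back-of-the-envelope check with the first-order formula f_c = 1/(2*pi*R*C). The 560 kHz figure is from the post; the 500-ohm built-in resistance below is purely an assumed placeholder to illustrate what adding series resistance does:

```python
import math

def rc_cutoff_hz(r_ohm, c_f):
    # first-order lowpass cutoff, -3 dB point
    return 1 / (2 * math.pi * r_ohm * c_f)

r0 = 500.0                              # ASSUMED built-in series resistance
c = 1 / (2 * math.pi * 560e3 * r0)      # capacitance implied by the 560 kHz cutoff
print(rc_cutoff_hz(r0, c))              # ~560 kHz sanity check
print(rc_cutoff_hz(r0 + 2 * 1000, c))   # 2 x 1 kOhm added in series: cutoff drops to ~112 kHz
```

The actual numbers depend on the real on-board R and C; the sketch only shows that a few kOhm in series pulls the cutoff down by roughly the resistance ratio.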
  10. Now that is serious business :-) If I approach it from that corner, you'll find an FPGA is a breadboard the size of a basketball court, but without the wiring delay. But heed my warning about "synchronous logic". Some of the breadboard thinking doesn't easily translate to FPGA, or it becomes a topic for specialized experts (say, in ASIC emulation). Keywords here for a systematic approach: "Moore / Mealy machines", but forget any "entry-level" textbook material that e.g. builds flipflops from logic gates...
  11. ... and finally: Be aware that any ideas about connecting logic gates like 4000- or 7400-series circuits will most likely send you on a path to nowhere. Modern FPGA design is based on the "synchronous logic" paradigm: signals are guaranteed to arrive before a clock edge and remain valid for a given time (the setup and hold concept, see static timing analysis) BUT what they are doing in between is unspecified. Ignore this fact and FPGA design will become dark voodoo and implementation-dependent non-determinism.
  12. BTW what you're describing might happen to someone who never bothers to dig deeper, for example studying the output from the tool, say warnings and timing analysis, and making sense of it. For example, BRAM at higher frequencies requires use of hardware registers, a DSP even multiple levels. This can have serious architectural implications (if you port e.g. a Xilinx J1 design to Lattice you'll run into a few of those). Still, I'd argue that the "partnership" with the tool is much more efficient than reaching the same conclusions from low-level design work. And maybe we shouldn't overestimate o
  13. You might have a look at the open-source Lattice toolchain. It'll probably work in a Linux virtual box on any host (haven't tried). Fighting the tools is, unfortunately, a large part of the FPGA experience. In principle, you should be able to instantiate the hardware-level primitives, e.g. LUTs (CLBs); see UG474. You will not have direct control over the routing, though, but that's understandable (e.g. to prevent hardware damage from conflicting driver assignments). For an example, check what has been written about the Picoblaze implementation. I think that IP block may be pretty clos
  14. OT but yes I think that is generally the idea. Usually you scale devices down (start with a big one for faster build cycle, ILA etc). Then squeeze the design into the smallest device possible in volume production. If you scale up, it obviously puts more stress on DC / thermal design.
  15. Going through FMC just for 1.8 V IO seems a bit like overkill (depends ultimately on your application). But you may find boards better suited to the application. One such board is Trenz TE0725. You can provide the bank voltages through pins on the header, solder a wire to the on-board 1.8 V regulator (a bit of a hack but easily done, it has a large capacitor where you can take the voltage from). To emulate a PMOD port you may use two insulation-displacement connectors and a SCSI-style ribbon cable. Separate individual wires on one end and push them into connector blades according to
  16. Hint: You can also download the Zynqberry design files from git and use free Altium viewer to look inside.
  17. Hi, did you check the boot mode configuration section (QSPI vs SD)? See https://reference.digilentinc.com/reference/programmable-logic/zybo-z7/reference-manual section 2 - Is the jumper set correctly? - if QSPI, is there a valid FSBL in one board and not the other? I wouldn't be surprised if there is a default FSBL in the working board that tries multiple options until it finds the first bootable image. An easy way to check is to clear the flash of the working board. If it stops working, you've found the root cause (obviously it'll "break" the working board temporarily but b
  18. If you want an opinion, tutorials aren't as helpful as you might think. They make you achieve wondrous things and give the illusion of learning. But have you really made the knowledge your own so you can apply it independently? Probably not. You may get better results following the tutorial only loosely, as a general guideline and when you get stuck (e.g. after banging your head against the problem for an hour or a day, but not after 15 seconds). Most certainly, being led by the hand won't take you anywhere near "master"... IMHO, what you absolutely should do (as you asked) is r
  19. I think the first thing I'd check is the jitter performance of the available DAC clock (its impact is proportional to the highest frequency component you're generating). The second thing I'd check is 14 bit (or whatever Nyquist-equivalent performance they promise) against the LTE uplink specs. See TS 36.101. I don't think it will fly, but I haven't done my homework (nor do I have the input on what you're actually trying to achieve, e.g. in terms of specs compliance). Check unwanted emissions, not so much close in to your signal (where the requirements are quite forgiving) but far away, "out-of-band".
  20. OK I just read the LTE part. I suspect this is a RF systems design question, lacking the analog / RF upconverter (which is outside the FPGA). 30.72 MHz sample rate for LTE20 is standard at baseband. Upsampling by 8 to 245.76 MHz sounds meaningful as well (depends on the converters / filters that are available). What's missing is the step from there to RF, which might be either direct conversion (using the BB signal centered at 0 Hz) or something more complex (using a non-0 Hz DAC signal) which is easier from RF point-of-view e.g. thanks to the lower ("non-infinite") fractional bandwi
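The upsample-by-8 step can be sketched as classic zero-stuffing plus a lowpass interpolation filter (the windowed sinc below is an illustration under assumed parameters, not a production filter):

```python
import numpy as np

L = 8
fs_in = 30.72e6                                           # LTE20 baseband sample rate
x = np.sin(2 * np.pi * 1.0e6 * np.arange(256) / fs_in)    # 1 MHz test tone (assumed)

up = np.zeros(len(x) * L)
up[::L] = x                                # insert L-1 zeros -> fs_out = 245.76 MHz

# windowed-sinc interpolation filter: cutoff fs_in/2, passband gain L
n = np.arange(-64, 65)
taps = np.sinc(n / L) * np.hamming(len(n))
taps *= L / taps.sum()

y = np.convolve(up, taps)[64:-64]          # aligned full-rate output
print(np.abs(y[256:-256]).max())           # ~1.0: tone amplitude preserved, images filtered off
```

The filter removes the spectral images at k*30.72 MHz +/- 1 MHz that zero-stuffing creates; in a real design this would be split across halfband or polyphase stages matched to the available DAC/filters.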
  21. A sneaky way out of it is to use n (e.g. n=8 ) generators in parallel, with a phase offset of 1/n sample. This is literally a "polyphase" approach. For high-end fast DA / AD converters you won't be able to operate at the converter's clock rate on an FPGA so it needs to be split on multiple, parallel lanes anyway.
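The polyphase trick above can be sketched as n generators whose phases are offset by one full-rate sample each, so every slow clock produces n consecutive output samples. The frequency and lane count below are assumed examples:

```python
import numpy as np

def polyphase_sine(f_cyc_per_sample, n_lanes, n_clocks):
    # lane j at slow-clock cycle k produces full-rate sample index k*n_lanes + j,
    # i.e. each lane is the same generator offset by j samples (1/n of a lane step)
    k = np.arange(n_clocks)[:, None]
    j = np.arange(n_lanes)[None, :]
    return np.sin(2 * np.pi * f_cyc_per_sample * (k * n_lanes + j)).ravel()

ref = np.sin(2 * np.pi * 0.01 * np.arange(64))        # single full-rate generator
par = polyphase_sine(0.01, n_lanes=8, n_clocks=8)     # 8 lanes at 1/8 the clock rate
print(np.allclose(ref, par))                          # True
```

On the FPGA each lane runs at the converter rate divided by n, which is what makes multi-GS/s converters reachable from fabric clock rates.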
  22. Hi, it sounds very unlikely. Now it does happen that boards fail (which is very rarely related to the FPGA itself, but more often unreliable USB cables, connectors, failing voltage regulators, PCB microcracks and, did I mention, unreliable USB cables) but usually, the problem is somewhere else. The FPGA GPIO circuitry is very robust. Unless there is an external power source involved outside the bank voltage range, I doubt you'd manage to damage it even if you tried. I guess we all know those panic moments, like "oh no I've bricked / toasted the board" and then you have a coffee,
  23. Hi, for your toy example, you can safely enter set_property CLOCK_DEDICATED_ROUTE FALSE [get_nets btn_IBUF[1]] into the TCL command line and the warning should go away (this should cause no issues at low speed, say low MHz range) Without going into details, you may find that the most "simple" examples will become very complex (in a sense of getting down to the bottom what is actually happening on the FPGA) because you're deviating from standard design patterns (synchronous logic) that has information moving from register to register on a clock edge (and that clock needs to
  24. I think you'll save yourself much hassle if you don't share the same physical JTAG interface. Simply take a couple of GPIOs, connect them to an FT2232H minimodule or any other hardware interface, and run a fully independent physical interface. Otherwise, you're facing the problem that different software packages need access to the PC side of the JTAG box at the same time, and this usually does not work for the very simple reason that only one connection can be open at the same time on the PC side. I have successfully integrated my own logic to JTAG as it's ~5x faster than UART (see B
  25. Hi, please search the forum, it's fairly common and frequently resolved. Use a different cable, use a different computer (there are a lot of bad-quality USB cables on the market; there are many unreliable PC ports, e.g. because of bad polyfuses that have tripped once and never recover completely). You may have a bad module, and there exists a newer hardware revision with improved decoupling caps, but most likely the easiest way ahead for you is to get a quality cable and rule out issues with the USB port on the PC side.