zygot

Posts posted by zygot

  1. Oh, I didn't think about a mis-wired JTAG adapter... yeah, that'll do it.

    If you can't initialize the JTAG chain, then there's no use in trying to configure a device.

    Glad that the issue is resolved. Modern OSes can't run a number of ISE functions, like IMPACT, but the ability to create a bitstream from sources does seem to work. I do this with my Win10 box. I haven't tried to do this with Ubuntu. I do remember that the AMD/Xilinx archived version of ISE didn't support all of the device families that I needed when I tried it a few years back, so I used the DVD installer.

    Basically, ADEPT Utilities needs ID codes for all supported devices. In the Windows version of ADEPT Utilities, there is a text file, jtscdvclist.txt, that holds all of the supported device codes. This can be modified to add boards with unsupported devices; I've done this a few times. If the Linux version of the Utilities has this file, I haven't found it, so Linux users might be stuck with the released device support. I haven't tried doing Spartan 3 development on a Linux OS.
  2. My, you are an adventurous person, considering the differences between 32-bit ISE 14.7 and Ubuntu 22.04 Linux kernels.

    I'd start with getting answers to these questions:
    - Is the JtagHS3 compatible with Spartan 3E device JTAG signalling?
    - Do the ADEPT Utilities support the 3SE500 device?

    I can confirm that the ADEPT Utilities for Linux work on Ubuntu 22.04, though I'm using an older HSx cable.

    ..$ sudo dadutil enum
    Found 1 device(s)

    Device: DCabUsb
    Device Transport Type: 00010001 (USB)
    Product Name: DCabUsb1 V2.0
    User Name: DCabUsb
    Serial Number: 50003C040864
    ..$ sudo djtgcfg init -d DCabUsb
    Initializing scan chain...
    Found Device ID: 0362c093

    Found 1 device(s):
    Device 0: XC7A50T

    I suspect that the problem is that the utilities don't support your specific device. I know how to fix this on Windows, but haven't tried to find the proper file on Linux.
  3. Perhaps I'm reading too much into your use of the words "failure", "resetting", "proves", "valid Ethernet connection", etc.

    - Don't assume that all Ethernet PHYs behave the same.
    - Clocks can assume a quiescent state for a variety of reasons. That doesn't necessarily imply a reset or failure of some kind. Sclk in an SPI interface is an example of a clock that is only active during periods of data transmission. In Series7 FPGA devices, internal clock signals can be driven with a clock buffer having an enable in order to put sections of logic into a quiescent state and save power dissipation.
    - The behavior of the PHY on your board might change depending on what kind of PHY it's connected to.
    - All PHY interfaces provide transmit and receive error signals, though none are as simple to extract as on the GMII interface.
    - The method that you chose to detect an absence of the PHY CLKOUT signal might be sufficient. Don't be too anxious to declare victory and move on. You might want to devise a second, more complex logic design to detect an absence of CLKOUT transitions.
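
    A minimal sketch of that "second, more complex" detector idea: toggle a flip-flop on every CLKOUT edge, synchronize the toggle into a known-good free-running clock domain, and time out if no transitions are seen. All names and the timeout value here are illustrative, not taken from any particular board design.

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity clkout_monitor is
    generic ( TIMEOUT : natural := 1000 );   -- sys_clk cycles with no CLKOUT edge
    port (
        sys_clk    : in  std_logic;          -- free-running system clock
        phy_clkout : in  std_logic;          -- clock under observation
        clk_active : out std_logic );        -- '1' while CLKOUT is toggling
end entity clkout_monitor;

architecture rtl of clkout_monitor is
    signal toggle : std_logic := '0';                       -- CLKOUT domain
    signal sync   : std_logic_vector(2 downto 0) := "000";  -- sys_clk domain
    signal timer  : natural range 0 to TIMEOUT := TIMEOUT;
begin
    -- CLKOUT domain: just flip a bit on every rising edge.
    process (phy_clkout) begin
        if rising_edge(phy_clkout) then
            toggle <= not toggle;
        end if;
    end process;

    -- sys_clk domain: synchronize the toggle and restart the timer
    -- whenever a transition is observed.
    process (sys_clk) begin
        if rising_edge(sys_clk) then
            sync <= sync(1 downto 0) & toggle;
            if sync(2) /= sync(1) then
                timer <= 0;
            elsif timer < TIMEOUT then
                timer <= timer + 1;
            end if;
        end if;
    end process;

    clk_active <= '0' when timer = TIMEOUT else '1';
end architecture rtl;
```

    TIMEOUT must be comfortably longer than the slowest expected CLKOUT period measured in sys_clk cycles, with margin for the synchronizer latency.
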

    It wouldn't hurt to read the datasheet for the PHYs that you are working with. Not all Ethernet PHY vendors provide this documentation without an NDA. But one can get insight by reading OS driver code or other available information.

    All in all this is a good exercise and potential learning experience.
  4. Even though the Ethernet PHY is connected to the MIO pins, and not PL pins, these boards do connect the PHY CLKOUT pin to a PL pin.

    Some Ethernet PHYs do have energy conservation features that shut down portions of the PHY when there is no activity on the receiver. Depending on the PHY, this behavior might be altered by changing certain registers in the PHY using the serial interface.

    So, can one detect a condition where the PHY CLKOUT signal is not toggling using PL resources on one of these boards? I can't think of a reason why not. The solution just might be a bit more complicated than you envision.

    There are ways to use a ZYNQ-based FPGA board without any PS connectivity so that you can implement a logic design. One could make the case that having a minimal connection to the PS might be useful, for getting additional clocks into the PL for instance.
  5. Well, there are FMC mezzanine cards with mixed-signal (ADC/DAC) interfaces that provide for connecting a system reference clock via a suitable connector. These tend to be very expensive. So the Nexys Video or Genesys2 boards might be a suitable platform. Not providing the possibility of using an external reference clock is, in my opinion, a major design flaw for any platform that claims to be a mixed-signal platform. I don't believe that Digilent's SYZYGY carrier boards were designed with customer needs in mind.

    Before Digilent came out with the Eclypse-Z7 and ZMODs, Intel-based boards with an HSMC connector were the only viable FPGA platforms around for doing mixed-signal applications at a price point for hobbyists. Terasic sells suitable hardware, though the cost of Cyclone V or Cyclone IV boards has become pretty burdensome ( twice what they were when introduced ).

    What you have to look out for when selecting hardware is the details. For instance, Terasic's DCC has 2 ADC and DAC channels that exceed any of the ZMODs, and an SMA connector that accepts an external reference clock. What's less obvious is that the ADCs and DACs are connected through wide-band transformers, so you lose DC and low-frequency response.

    The nice thing about the so called ZMODs is that they were designed for Digilent's transition into an instrument company. So, they provide really nice flexibility for most general purpose ADC/DAC applications.

    If I had my way, all FPGA development boards would have at least one SMA clock_in and one SMA clock_out. Rarely do I get my way...
  6. All FPGA devices have clocking regions, and limitations on their clocking infrastructure.

    Intel devices historically are more restrictive than AMD/Xilinx devices with regard to clocking options. That's why most of the Altera/Intel development boards supply clocks to more than one region; often they use one external clock module and a clock buffer with more than one output. Digilent FPGA boards are designed to be cheap... so they generally only have one external clock source. The problem with DDR is that its signals require a Vccio that's well below what general-purpose IO signals need. It would be nice if they had provided a separate clock for the DDR IO banks; this would make the boards a bit more expensive. Is this a design shortcoming? One could make an argument against a lot of design decisions that were made for Digilent FPGA boards, where a bit of extra cost would make the board substantially more useful. That's for another discussion.

    My sense ( at least I don't remember running into this issue in designs with older tool versions ) is that ISE and early versions of Vivado did not treat "sub-optimal clock module placement" as worthy of a bitgen error. Recent versions of Vivado do, so the only way to fix the board design limitations is by using the suggested constraint. Sub-optimal situations don't mean that you can't produce useful FPGA applications.

    Designing an FPGA board that is optimized for one specific purpose allows for the possibility of optimal performance. Designing a general purpose FPGA board, especially one that's cheap and designed to work with PMOD add-on boards, pretty much dispenses with the notion of optimal performance.

    [edit]
    I realize that I could have provided a better answer to your question.

    If you really want to know how clocking works in Series7 devices, then you should read UG472, the Series7 Clocking Resources User Guide. If there are any idiosyncrasies for Spartan 7 devices, these should be covered in the device datasheet. This guide informs you about clocking regions, clock buffers, clock trees etc., plus the rules for using a clock across regions. You can also learn about the CMT backbone. It's somewhat complex and involves the clock-capable input pin assignments that board designers select.

    I will note that the user experience for Vivado IP that uses an AXI bus may be quite different from that of someone using the same IP with a native interface. Also, user experience with a Vivado-managed design flow like IPI might be different from that of the HDL designer.

    When you instantiate an MMCM or PLL in your design, you can drive the input clock with an MRCC pin, an SRCC pin, or a clock buffer. Using a pin restricts the MMCM location placement. If you use one of the limited global clock buffers and instantiate a specific buffer explicitly ( rather than relying on the tool to infer one ), then you might be able to end up with a better MMCM location placement. Generally, Digilent FPGA boards use Multi-Region Clock Capable (MRCC) pins for external clock modules and oscillators in their designs. Even then there might be restrictions, as documented in the AMD/Xilinx Clocking and SelectIO User Guides, that might determine your design choices. Generally, using the CLOCK_DEDICATED_ROUTE BACKBONE constraint will not be a problem in how the bitstream works on hardware.
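
    To make the buffer-instantiation point concrete, here is one hedged way to instantiate the input buffer, global clock buffer, and MMCM explicitly instead of letting the tool infer them. The 100 MHz clock period, multiply/divide values, and all names are illustrative assumptions, and the commented XDC line is the placement constraint under discussion.

```vhdl
library ieee;
use ieee.std_logic_1164.all;
library unisim;
use unisim.vcomponents.all;

entity clk_wrapper is
    port (
        clk_in_pin : in  std_logic;    -- board oscillator, 100 MHz assumed
        clk_200    : out std_logic;
        locked     : out std_logic );
end entity clk_wrapper;

architecture rtl of clk_wrapper is
    signal clk_ibuf, clk_bufg      : std_logic;
    signal clk_fb, clk0, clk0_bufg : std_logic;
begin
    -- Explicit input and global clock buffers instead of inferred ones.
    u_ibuf : IBUF port map ( I => clk_in_pin, O => clk_ibuf );
    u_bufg : BUFG port map ( I => clk_ibuf,   O => clk_bufg );

    -- If the pin/MMCM pairing still trips placement, the XDC fix is:
    --   set_property CLOCK_DEDICATED_ROUTE BACKBONE [get_nets clk_ibuf]
    u_mmcm : MMCME2_BASE
        generic map (
            CLKIN1_PERIOD    => 10.0,   -- 100 MHz in
            CLKFBOUT_MULT_F  => 10.0,   -- VCO = 1000 MHz
            CLKOUT0_DIVIDE_F => 5.0 )   -- 200 MHz out
        port map (
            CLKIN1   => clk_bufg,
            CLKFBIN  => clk_fb,
            CLKFBOUT => clk_fb,
            CLKOUT0  => clk0,
            LOCKED   => locked,
            PWRDWN   => '0',
            RST      => '0' );

    -- Output clock also gets an explicit global buffer.
    u_bufg0 : BUFG port map ( I => clk0, O => clk0_bufg );
    clk_200 <= clk0_bufg;
end architecture rtl;
```
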
  7. One would think that a valid MIG .prj file for a particular platform, like the Genesys2, would be a suitable source file. The MIG Wizard lets you try and import one. The problem is that I've never had success doing this. Part of the problem is that constraint syntax keeps changing with occasional Vivado releases. In fact, I've had Vivado add constraints to my user-managed constraints file that it then decides to ignore because of syntax errors. Unfortunately, the MIG hasn't gotten much love over the years. No updates to the IP wizard. Same old bugs in every new Vivado version, plus a few new ones.

    One way to handle Vivado bugs is to manually keep track of IP settings. This is what I did for the Genesys2 video demo. Whenever I want to add DDR to a new project, I just create a new project and replicate the settings. If you read the commentary in my video demo sources, you can find all of the relevant settings that I used to create the Vivado IP. Another, probably more appropriate, way to use the MIG is to create a tcl script to create your MIG IP for a board. Neither of these is totally foolproof, as there are bugs that are consistent through all of the Vivado versions; like trying to edit a MIG project file using the wizard and having the GUI change settings from what you initially made back to default settings. It's all a pain in the arse, and the Vivado coders have no interest in fixing it. Another problem unique to the MIG IP is that you can never modify the IP; you can only create a new set of IP product files. In my experience, Vivado gets confused with having multiple MIG .prj files and generated IP products.

    Digilent usually supplies a ucf file for the MIG for setting up the location assignments. Often these have to be modified due to changes in a tool version. For the video demos I had to create a modified ucf file, which I included in the demo sources.

    In my opinion, Vivado is trying to deprecate the HDL design flow and force users to use the IPI GUI and the high-resource IP that comes with it. Also, managing source files has gotten harder, though it's not as bad as it is for Intel Quartus users. In ISE and early versions of Vivado one could open a generated IP, change the name, and produce a variation of it. No more. As the years go by, I find Vivado less and less friendly to work with.
  8. I strongly urge you to go to Opal Kelly's website and download the specifications for SYZYGY and SYZYGY DNA. This explains how to design SYZYGY compliant carrier boards and pods. They even have some (old) KiCAD templates.

    While the SYZYGY standard provides rules for how to design compliant boards, there is considerable room for making design choices that might limit what you can do with the interface. This is particularly true for carrier board voltage supplies, FPGA pin assignments, etc. This is why you need to understand the specifications before doing the analysis of any particular SYZYGY carrier board to see if it meets your requirements.

    With respect to differential signalling, SYZYGY pins 5-20 can be either single-ended or differential; i.e. pins 5:7 being the _p/_n pair for differential signal D0, or they could be two single-ended signals. Those are the logic signals available for differential signalling. In the FPGA world clocks are different than logic signals and have their own infrastructure. The SYZYGY specification supports 2 differential clock signals. Pins 33:35 are for a clock generated on the pod and being received by the carrier board. Pins 34:36 are for a clock driven by the FPGA on the carrier board and being received on the pod. Again, clocking can be single-ended but for Series7 FPGA devices not every IO pin can connect to the clocking infrastructure of the FPGA. Any of the pin pairs 5-20 could be used as clock signals that are generated on the carrier board. Any standard has to have some flexibility as there are a lot of possible applications that can be accommodated by a standard. For FPGA designs, the FPGA devices have their own rules, limitations, and quirks to understand and deal with.
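
    As an illustration of the pin-pair idea, receiving pins 5:7 as differential pair D0 on a Series7 carrier might look like the sketch below. The port names are made up, and the IOSTANDARD assumes the bank's Vccio is 2.5 V; check the actual carrier schematic and Vadj setting before copying any of this.

```vhdl
library ieee;
use ieee.std_logic_1164.all;
library unisim;
use unisim.vcomponents.all;

entity syzygy_d0_rx is
    port (
        syzygy_d0_p : in  std_logic;   -- SYZYGY pin 5 (hypothetical name)
        syzygy_d0_n : in  std_logic;   -- SYZYGY pin 7 (hypothetical name)
        d0          : out std_logic ); -- single-ended signal for the fabric
end entity syzygy_d0_rx;

architecture rtl of syzygy_d0_rx is
begin
    -- Differential input buffer; only legal if the IO bank's Vccio
    -- supports the chosen differential IOSTANDARD.
    u_ibufds : IBUFDS
        generic map ( IOSTANDARD => "LVDS_25", DIFF_TERM => TRUE )
        port map ( I => syzygy_d0_p, IB => syzygy_d0_n, O => d0 );
end architecture rtl;
```
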

    Do read the DNA specification to understand what the uC and EEPROM do and how they can be utilized. This is particularly important for the carrier board, which generally supplies power voltages and, most importantly, the Vccio IO bank voltages necessary to implement a specific IOSTANDARD. Even so, there is considerable room for customization regarding DNA negotiation and accessing EEPROM data. Digilent-designed boards are unique in their implementation for the Eclypse-Z7. The USB104 is not a Digilent-designed board.

    With respect to how you design a power supply on a system level, I'll just say that picking a carrier board and trying to adapt it to your needs is not likely to be a satisfactory path. Unfortunately, there are not a lot of SYZYGY carrier boards to choose from. If you stick to powering your pods separately, except for the Vccio rails, you have a good chance of getting to where you want to be. If you intend to use the USB power supply to power the USB104 as well as your pod, then you are in for a rough ride. Older USB 2.0 only allows for 500 mA @ 5V. Type C USB allows a USB host to supply much more power to a downstream device, though there's no guarantee that any particular device supports this by design. USB is a whole other thing to know about.
  9. Recently Digilent sent me an email about this new product. It so happens that ADI has a product with the same designation. Does anyone think of checking these kinds of things in advance?

    Given that Digilent has clearly gotten out of the FPGA development board business and will never encroach on the much higher priced products of its parent company, perhaps you should drop the "Pro" verbiage, which is only confusing to the uninformed.
  10. There have been -1L Arty boards with lower performance than -1 parts, so knowing the speed grade might be important. You should go by what Digilent claims.

    Still it would be nice if all device markings indicated the speed and temperature grade in an obvious manner. I've run into ICs where the only indication of what the part is consists of a code that you need to find a document to decipher.
  11. I suppose that anything's possible; but it seems highly improbable that Vivado IP would create an instance where there are more channels than it is capable of creating.

    Without reading the IP documentation or doing simulation, you have a challenging task figuring out what's going wrong. Trying to debug something that you don't understand, without even having access to IP documentation, which can be sketchy at best, is tough.

    My guess is that you aren't seeing any channels with complete data frames, much less 10. As the amount of data increases per frame, it's reasonable to suspect timing and clocking as a source of your problems.

    Have you worked your way through the IP source code?

    My suggestion is to create your own IP using the HDL flow and abandon the Vivado IP if you can't read the documentation or do a proper simulation.
  12. Well out of curiosity I tried to find the IP documentation that you are using. Without actually signing into my AMD account I was totally unsuccessful. Finding documentation for AMD/Xilinx products has definitely gotten harder recently, especially on a Linux host, unless you sign into your account.

    Using FPGA vendor IP is not as simple as configuring it in the IP wizard. That's why you need to read the documentation. While IP may support a wide range of configuration settings, that doesn't mean that there aren't restrictions among all of the possibilities. Just because you had success with 6 channels doesn't mean that changing the number of channels from 6 to 8 automatically works without some modification. I've never used this IP, so I have no experience with it.

    The first thing for you to do is read all of the synthesis and implementation warnings to see if there are clues to what's changed in your modified design. Usually, Xilinx IP comes with a simulation testbench. Simulation is the key to all FPGA design flows. AXI-based simulation is generally messier and more complicated than regular HDL simulation, especially for a ZYNQ target.

    If you can't figure out how to debug your design, then this is a problem. FPGA vendor IP is usually very hard to unwind so that you can understand how it works. If you can't do an effective simulation of your design, then you need to be clever about figuring out other ways of debugging what the tools are giving you.
  13. Gee, as someone who generally thinks that there are no stupid questions, yours should be close to being in that category... but it isn't.

    The answer ( but probably not ) is in UG475, where AMD Series7 packages are described. In the device marking section there's a long description of what you might encounter ( or not encounter ) while looking at the IC markings. There's no guarantee that there's any text telling you what speed grade a particular part is. There's probably a 2D bar code, and like many AMD documents, instead of including the answer where you'd expect to see it, you get a reference to yet another document to find and read. This is typical of FPGA documentation.

    I can save you the trouble, though, as the cost of -2 or -3 speed grade parts makes it extremely unlikely that Digilent would use the more expensive device just to complete a production run without charging a premium for the finished product. The Genesys2 is the only Digilent board that I know of with a speed grade faster than -1.
  14. The inappropriately named differential PMODs found on Digilent's FPGA boards match n/p pair lengths for each IO bank pin pair. Digilent does not match pin lengths across all pin pairs ( i.e all 8 signals ). Furthermore, the matching only extends to the connector through-hole pads. The right-angled connector used on all PMODs is unsuitable for most differential applications. If you need the best common-mode performance, then none of these PMODs are suitable. Also, since none of the PMODs have signals connected to IO banks with a Vccio that isn't 3.3V, TMDS_33 is the only possible differential standard. Lastly, placement of termination resistors can never be ideal.

    If you need matched n/p pair lengths for an application where the FPGA pins are receivers, you can always use IDELAY to compensate for mismatches, up to about a ns or so. In theory one could use a right-angle connector on an attached board that is in the opposite orientation to cancel out the connector length mismatch, but you still have to deal with the connector's unsuitability for differential signalling.
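
    For the IDELAY suggestion, a fixed-tap sketch for one receive pin might look like this. The tap count and names are illustrative assumptions; each tap is roughly 78 ps with a 200 MHz IDELAYCTRL reference, and an IDELAYCTRL instance must be present for the delay taps to be calibrated.

```vhdl
library ieee;
use ieee.std_logic_1164.all;
library unisim;
use unisim.vcomponents.all;

entity rx_deskew is
    port (
        ref_clk_200 : in  std_logic;   -- reference clock for IDELAYCTRL
        rst         : in  std_logic;
        din         : in  std_logic;   -- signal from the input buffer
        dout        : out std_logic ); -- delayed copy, mismatch compensated
end entity rx_deskew;

architecture rtl of rx_deskew is
begin
    -- One IDELAYCTRL per clock region that uses IDELAYs; it calibrates
    -- the tap delay against the reference clock.
    u_idelayctrl : IDELAYCTRL
        port map ( REFCLK => ref_clk_200, RST => rst, RDY => open );

    -- Fixed delay of 8 taps ( ~0.6 ns at 200 MHz REFCLK ), chosen to
    -- match the measured n/p or pair-to-pair trace mismatch.
    u_idelay : IDELAYE2
        generic map (
            IDELAY_TYPE      => "FIXED",
            DELAY_SRC        => "IDATAIN",
            IDELAY_VALUE     => 8,
            REFCLK_FREQUENCY => 200.0 )
        port map (
            IDATAIN     => din,
            DATAOUT     => dout,
            DATAIN      => '0',
            C           => '0',
            CE          => '0',
            INC         => '0',
            LD          => '0',
            LDPIPEEN    => '0',
            REGRST      => '0',
            CINVCTRL    => '0',
            CNTVALUEIN  => "00000",
            CNTVALUEOUT => open );
end architecture rtl;
```
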

    There might be some low-frequency applications where these PMODs might be OK, like RS-422.

    It would have been nice if Digilent actually put usable differential IO connectors on their boards that were connected to IO banks powered by a suitable Vccio; but they have refused to do so. I suspect that's because none of their PMOD accessory products use differential signalling or high speed signalling and the purpose of the PMODs is to sell PMOD products, not to make their boards suitable for custom user projects.
  15. Dr.J: "Our analogue boards are rather power hungry... "

    Perhaps, as you suspect, you need to power your analogue boards separately. Don't confuse Vio rails with device power rails. Not all devices have power supply pins for IO that are separate from the device power supply pins. FPGA devices do. Many analogue devices with digital control signals do as well.

    Don't assume that an LVDS standard as defined by an FPGA vendor is totally compatible with the LVDS specifications of any random device supporting differential signals. The proper thing to do is read the datasheet for the FPGA you intend on using as well as the Series7 Select IO User Manual supplied by AMD/Xilinx. There are application notes to help with designing circuitry to make differential signalling compatible with arbitrary LVDS signals.

    Generally, the Vadj voltage rails used with FPGA devices are only intended to power Vio pins on whatever device is on the other end of the FPGA pin signals. Series7 FPGA devices don't support LVDS standards for Vadj (Vccio) above 2.5V. That doesn't mean that you can connect arbitrary devices with differential signalling to an FPGA, just that you might need to condition the signalling to make both ends compatible. Series7 FPGA devices support LVCMOS_33 or LVTTL_33 3.3V single-ended signalling. The devil is in the details, and the details are specific to the exact circuitry that your analogue boards use.

    Vadj power rails on FPGA boards just means that the user can select from a limited number of Vccio voltages. In FPGA devices, the Vccio Voltage that drives an IO bank determines what IOSTANDARD the IO pins can be compatible with. SYZYGY is a standard supporting differential signalling for a range of Vccio voltages. The Series7 documentation is pretty good at covering this.
  16. If you are implementing your own MAC design in logic then the MAC address can be anything that you want it to be. It doesn't even have to be static.

    This of course presents a problem if your hardware is connected to a LAN or the internet as there are not supposed to be any two MACs with duplicate addresses. Assuming that the address in the Genesys2 FLASH is unique somewhat strains credibility, as does the notion that all of the internet connected devices out there might be unique.

    In truth, you don't even need to incorporate a formal "MAC" structure in an Ethernet design implemented in logic.
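
    One hedged way to bake an arbitrary address into such a design: use a locally administered unicast address ( bit 1 of the first octet set, bit 0 clear ), which at least keeps you out of the vendor-assigned OUI space. The value below is made up for illustration.

```vhdl
library ieee;
use ieee.std_logic_1164.all;

package mac_pkg is
    -- 0x02 first octet: locally administered, unicast. The remaining
    -- octets are arbitrary; pick something unlikely to collide on
    -- your own LAN.
    constant MY_MAC : std_logic_vector(47 downto 0) := x"02A1B2C3D4E5";
end package mac_pkg;
```
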
  17. Is the RPi5 suitable for pairing with an FPGA?

    You may have noticed that the latest generation RPi5 uses an ASIC, referred to
    as a Southbridge, for IO. All of the IO, Ethernet, USB connectivity etc. are implemented
    in the RP1 Southbridge which is connected to the BCM2712 processor through a 4-lane PCIe
    Gen2 interface. This makes it substantially different from the RPi4 or RPi3
    boards. So you might be wondering if the new version can be connected to an FPGA. That
    is a question that I decided to investigate for myself.

    The RPi5 has a single lane PCIe Gen2 ( perhaps even a Gen 3 ) header on it that someone
    will eventually create an FPGA add-on board for. Most people will want to use that interface
    as a higher performance alternative to the SD card. But the RPi3 and RPi4 could always
    connect to an FPGA board using a USB 2.0 bridge device like the FT2232H supporting synchronous
    245 mode operation. I've done this and performance for the RPi3 and RPi4 is OK for moving
    small blocks of data between the processor and the FPGA. As the amount of data being transported
    increases the performance drops off considerably, unlike an x86_64 processor.

    For my experiments I'm using a Genesys2 board with an FMC_UMFT601BX mezzanine board. This
    allows the HDL application in the FPGA to act as a USB 3.0 endpoint with a peak data rate
    of 400 MiB/s. The design of the Genesys2 application used for my tests is pretty straight-forward.
    All data uploaded to the FPGA from the USB Host gets stored in DDR3 that functions as a very
    deep FIFO. The USB Host can then retrieve the uploaded data. The HDL expects
    n sectors (4096 bytes/sector), to be uploaded and then downloaded. The DDR3 interface in combination
    with sufficient FIFO storage can accept up to 1 GB of upload data and return it without delays.
    In addition to the simple up/down data scheme the HDL has performance timers to timestamp the
    important events in the tests. These are: the time that the first 32-bit word is uploaded, the
    time that the last 32-bit word is uploaded; and the same events for download. Since it's a
    free-running counter, I can calculate the total time that has elapsed between reading the
    first 32-bit upload word from the FT601 FIFO to the last 32-bit word written to the FT601 FIFO.
    This provides a much more accurate picture of the USB Host OS/software behavior and performance
    than typical software timing methods provide. It must be noted that from the perspective of the
    USB Host data rate performance is more complex than just time spent in the driver filling or
    emptying the FT601 FIFOs. I have my own Software application that is mostly identical for Windows
    and Linux platforms. The D3XX drivers for these platforms are not the same however.

    The FT601 is not the only way to connect an FPGA to the RPi5 via USB 3.0. I also tested the
    XEM7320 with the Infineon FX3 bridge.

    For the FT601 Test I used this setup:
    - Genesys2 FPGA board
    - RPi5 8 GB w/ heatsink/fan
    - Raspios Bookworm 64-bit
    - libftd3xx-linux-arm-v8-1.0.5
    - FT601_245.cpp Host Application
    - G2_FT601_TESTER.vhd

    In FT601_245.cpp I do software elapsed time calculation. The test runs in this manner:
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);
    ftStatus = FT_WritePipe(ftHandle, 0x02, pBufOut, SectorSize*up_sectors, &BytesWritten, NULL);
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);
    ftStatus = FT_ReadPipe(ftHandle, 0x82, pBufIn,RxBytes,&BytesReceived, NULL);
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &stop);

    So, there is some processing between the upload and download calls to the D3XX driver.


    Without further ado, here is a sampling of results for the Genesys2/FT601/RPi5 testing

    Test Length   Upload Time   Average Upload   Download Time   Average Download
    (Bytes)                     Data Rate                        Data Rate
    -----------   -----------   --------------   -------------   ----------------
    16384         42.09 us      389 MiB/s        44.68 us        367 MiB/s
    65536         170.66 us     384 MiB/s        196.35 us       334 MiB/s
    262144        682.51 us     384 MiB/s        848.43 us       309 MiB/s
    1048576       2.73021 ms    384 MiB/s        3.37639 ms      311 MiB/s
    4194304       10.92282 ms   384 MiB/s        12.13967 ms     346 MiB/s
    8388608       21.84317 ms   384 MiB/s        24.26676 ms     346 MiB/s

    It is rather surprising that the upload data rate is consistently around 380+ MiB/s
    for blocks of data ranging from 16 KB to 8 MB. Download rates were less consistent from
    run to run, but still near 350 MiB/s. For USB 3.0, the RP1 Southbridge performance is
    outstanding. I believe that the low rates in the 310 MiB/s range were outliers, but within
    the range that one should expect. BTW, the performance with the same setup but using my
    Ubuntu 22.04 i7-13700K box was dismal; about 44 MiB/s up and down for a 1 MB test.

    I also tested the RPi5 using this setup:
    - XEM7320
    - RPi5 8 GB w/ heatsink/fan
    - Raspios Bookworm 32-bit
    - FrontPanel-Raspbian10-armv7l-5.3.0
    - EthAppliance.cpp Host Application
    - EthAppliance_1.vhd
    - Genesys2 configured with Genesys2_Eth_DUT.vhd ( Ethernet echo application )

    A few months ago Opal Kelly had posted a 32-bit beta ARM driver for FrontPanel; it's
    since disappeared from their download website.

    I didn't try to do a performance test with this setup. All I wanted to know was
    if I could run the application on the RPi5 and see if it worked as well as on my
    x86_64 Windows and Linux platforms. The application streams TX Ethernet 1 GbE packets
    through a SYZYGY Ethernet pod. It stores RX packets simultaneously into a DDR3 buffer
    that can be read later. I was able to run the HDL and software applications with
    performance that was equal to that on x86_64 Win10 and Ubuntu 22.04 platform; that is
    sustained 120+ MiB/s full-duplex Ethernet.

    So, do I think that an RPi5-USB 3.0 FPGA could be interesting? Absolutely I do! The RPi5
    is an impressive bit of gear with some interesting possibilities.
  18. Leon18: "Schematics are not enough since I probably need PCB layout to check these two board FMC ports."

    I'm not understanding what your qualms are. What do you need to check on the layout that the schematic won't show, other than whether differential signals are laid out in a proper differential routing scheme? ( The XM105 only supports single-ended for most of its signals. )

    As I mentioned, I've used the HPC HW-FMC-XM105-DEBUG board with both the LPC Nexys Video and HPC Genesys2 boards. The Differential Pmod Challenge projects that I published in Digilent's Project Vault provide some information. But I will say that no one should accept anything posted to a forum like this as absolute fact; trust but verify, in the words of a past POTUS.

    In general, interfaces like HSMC and FMC are carrier/mezzanine board oriented; with one gender assigned to the carrier and the mating version assigned to the mezzanine board. This doesn't mean that there aren't exceptions as some vendors just want to use the connectors and not follow the standards.

    The Genesys2, Nexys Video, and XM105 board (xtp078.pdf) schematics provide the connector part number. AMD/Xilinx also provides extra information like the BOM, PCB information, etc. It's up to you, the designer, to go to the connector manufacturer and get specific information. Don't be surprised to discover that for FMC and HSMC there are a lot of connector part number variants, some of which may be specialty parts for specific customers. This is to prevent customers from getting bored.

    Regardless of whether or not an FPGA vendor adheres to a standard, it is up to the designer to go through all of the pins and verify compatibility between a carrier board and an arbitrary mezzanine board. This is especially important for LVDS and clocking, as various FPGA device families have different rules regarding clocking and IO pins. It's not uncommon for a mezzanine board vendor like TI or ADI to use a pin for some purpose other than what the standard decrees. Also, the signal trace layout rules for pin pairs that might be differential or single-ended are not hard and fast. For a general purpose interface one might need the best possible signalling for one project, but just good enough for a different project. Generally, FPGA carrier board schematics provide not just interface connector pin assignments but IO bank assignments, which might be just as important.

    Just when I thought that I was done here, I remembered that most high density connectors come in a variety of heights. If you are designing your own board you need to pay attention to these things.
    As an aside some newer FPGA boards use the FMC+ specification which is not backward compatible with the FMC connector mechanical dimensions.
  19. In earlier versions of the tools, ISE and early versions of Vivado, you could look at Settings/Synthesis and Settings/Implementation to see a list of strategies and all of the settings for the tools' preset strategies. In recent versions of Vivado, the list of settings is limited and replaced by an empty line with the nebulous name of Other....

    For synthesis there will always be some optimization of how the tool understands your HDL text. It will also infer known structures like counters, state machines, etc., and replace your code with optimized implementations when it can infer them, as long as you enable these features.

    The wonderful thing about VHDL and Verilog is that the designer has free rein to implement anything, using good, bad, or indifferent coding. And that's the bad thing about the HDL flow: what you intend isn't necessarily what the tool infers from your code. To make matters worse, each vendor's synthesis and implementation tool expects a coding style that isn't always in concert with another vendor's tools.

    VHDL and Verilog just haven't kept up with modern FPGA architectures and features, so each vendor has their own way of completing the job. For instance, there's no areset or sreset keyword in VHDL or Verilog; the subject just isn't addressed. HDLs don't know about single-ended or differential logic or buffers; each tool has its own way of specifying logic types. All vendors allow the designer to put some synthesis and implementation constraints into their HDL sources using parameters or attributes, but there is no consistency between vendors' tools, or even an attempt at completeness by any vendor. It's hit or miss. I could go on and on, but hopefully you get the idea.
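    To make that concrete, here's a small sketch of what vendor-specific attributes look like in Verilog for Vivado (these particular attribute names are from Xilinx UG901; other vendors use different names and spellings, which is exactly the portability problem described above):

```verilog
// Xilinx Vivado-style synthesis attributes embedded in Verilog source.

// Ask the tool to preserve this net through optimization:
(* keep = "true" *) wire debug_probe;

// Mark a two-flop synchronizer so the tool packs the flops together:
(* ASYNC_REG = "TRUE" *) reg [1:0] sync_ff;

// Force a specific implementation style for an inferred memory:
(* ram_style = "block" *) reg [7:0] buffer_mem [0:1023];
```

    A different vendor's tool may silently ignore all three of these, so don't count on attributes travelling with your code.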

    It's important to read your vendor's reference manuals. For Xilinx, UG901, UG903, UG904, and UG893 are required reading, plus a lot of others. Don't download some random version. Use the Documentation Navigator to get tool-version-specific versions of these documents.

    Figuring out basic HDL structures and concepts is just the beginning of the HDL design flow. VHDL and Verilog have slightly different concepts to grasp, and there are subtleties that aren't at all obvious for a particular design expression. Basically, I view HDL source code as a kind of dance between the designer and the tools, with the object of creating one functional behavior for the completed design netlist.

    BTW, I like the idea of restructuring your code if you aren't happy with the results of the current version. It's part of the dance. For what you are trying to do, you might consider putting that functionality into your state machine. Letting events in process sensitivity lists, or simple boolean expressions, do all of the heavy lifting is tempting but not always a good strategy. Sometimes you need to do more of the work for yourself.

    Also, you are using a lot of the clocking resources. Relying on so many derived clocks spanning so wide a frequency range is probably not the ideal strategy. You can put less burden on the tools by minimizing the number of clock domains and using logic structures like counters, strobes, etc. to structure the timing of your design. As an added benefit, this will force you to think about how signals in one clock domain might cause problems in a different clock domain.
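    As a sketch of that approach (the clock frequencies and signal names here are my own illustration, not from any particular design), a counter-derived enable pulse can often replace a slow derived clock entirely, keeping everything in one clock domain:

```verilog
// Generate a one-cycle-wide enable at 1 MHz from a 100 MHz clock,
// instead of creating a separate 1 MHz clock domain.
module slow_enable (
    input  wire clk,      // 100 MHz system clock
    input  wire srst_n,   // synchronous, active-low reset
    output reg  en_1mhz   // high for one clk cycle every 100 cycles
);
    reg [6:0] count;

    always @ (posedge clk) begin
        if (!srst_n) begin
            count   <= 7'd0;
            en_1mhz <= 1'b0;
        end
        else if (count == 7'd99) begin
            count   <= 7'd0;
            en_1mhz <= 1'b1;  // downstream logic updates only on this pulse
        end
        else begin
            count   <= count + 7'd1;
            en_1mhz <= 1'b0;
        end
    end
endmodule
```

    Downstream registers then qualify their updates with en_1mhz while still being clocked by clk, so there's no new clock domain crossing to worry about.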
  20. Be aware that if you configure SPI1 as being connected to the PL via EMIO, the pins for SPI1 on the ZYNQ processor block don't necessarily behave the same as, say, SPI0 connected to the MIO pins. Read the Z7000 TRM before trying to use the interface.

    Also a SPI connected to the EMIO runs at 25 MHz max, while an SPI connected to MIO pins runs at 50 MHz max.

    Lots of details to understand.
  21. First of all, the post by artvvb is a good one; it deserves your attention and some time spent researching HDL resets for Vivado synthesis.

    I notice that clk_out5 has a requested frequency of 10 MHz and the wizard gave you 1 MHz. I suggest resolving this first.

    Resets, both synchronous and asynchronous varieties can be tricky. In general I suggest avoiding edge event resets in your logic.

    If you want to use an asynchronous reset, then you could try this (note that the reset must appear as an edge event in the sensitivity list):
    always @ (posedge clk or negedge rst_n) begin
        if (!rst_n) begin
            // asynchronous reset assignments
        end
        else begin
            // normal clocked logic
        end
    end
    You are probably better off creating a srst_n signal that is synchronous to the clk clock domain and using it as a synchronous reset:
    always @ (posedge clk) begin
        if (!srst_n) begin
            // synchronous reset assignments
        end
        else begin
            // normal clocked logic
        end
    end
    Understand that, depending on the settings for synthesis, the tools might infer some functionality and replace your code with an optimized version of it. This might not be what you want.

    VHDL and Verilog are great. Unfortunately, they don't address a lot of what modern FPGA synthesis and simulation tools require. But you can write your code in a way that's consistent and unambiguous.

    It's tempting to use a signal's edge event as an input to a process to do something, when you don't want that something to repeat unintentionally. The common way to do this is to create a one-clock-wide strobe that occurs relative to some signal external to your process. Say that you have a signal s that toggles at a 1 MHz rate; that is, 500 ns low and 500 ns high. You want to use the rising edge of this signal as an 'event' to cause some action in your process. You can create a one-clock-wide strobe near the rising edge this way:
    - create a delayed version s_r1 of your input signal s, registered with clk
    - create a strobe when s is high and s_r1 is low
    - use the strobe instead of s as an input to your process
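    The steps above can be sketched in Verilog like this (module and signal names follow the description; if s crosses from another clock domain, add a two-flop synchronizer in front of this first):

```verilog
// One-clock-wide strobe on the rising edge of slow input s.
module rise_strobe (
    input  wire clk,
    input  wire s,       // slow signal, e.g. a 1 MHz square wave
    output wire strobe   // high for exactly one clk period per rising edge
);
    reg s_r1;

    always @ (posedge clk)
        s_r1 <= s;               // delayed version of s

    assign strobe = s & ~s_r1;   // high only on the cycle where s just went high
endmodule
```

    Your process then tests strobe on each posedge of clk instead of putting s in its sensitivity list.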
  22. Connecting one of the ZYNQ PS UARTs to your PL design using the EMIO is pretty straightforward. In theory, one could use a PS GEM for bi-directional DMA of data at about 125 MiB/s. Unfortunately, for my needs the ZYNQ GEM software is too complicated to justify developing such an interface; it's easier to use an AXI streaming IP.

    I looked at doing an SPI or GPIO EMIO interface just for fun and decided that these also were not worth the effort for anything that I could think of as functionality that I needed. That doesn't mean that you should come to the same conclusion.

    I'd advise reading the ZYNQ TRM, especially the parts about the EMIO connections and pin functions as they relate to the SPI. It's not going to be a trivial exercise.

    I suspect that connecting a graphics display through the PL is going to be a time-consuming project for you... unless you find hardware that comes with a Vitis hw/sw project with source code designed to be used with your board. Note that the Z7000 PS QSPI cannot be connected to the PL through EMIO. The SDIO can be connected to the PL through the EMIO but is limited to 25 MHz. (See page 48 of the ZYNQ 7000 TRM.)

    Digilent sells one bit-mapped graphics PMOD LCD display but has never tried to use it with any of their FPGA boards, as far as I can tell.

    It's a hard lesson to learn, but ZYNQ and Xilinx tools and IP have limitations, both in terms of functionality and support, that have no simple solution. This can be a problem if you start off implementing a design on a specific platform doing the easy stuff first and then want to add new functionality. Things can get very complicated very quickly.

    Personally, I dislike the functionality and performance of the Z7020 PS SPI so much that I always opt for an HDL PL SPI. The GEM is pretty much the same. For low speed connectivity, a PS UART through the EMIO does seem to be worth the effort, and pretty useful as well.
  23. If it helps, a node-locked license from a purchase voucher works with devices having different packages. For example, the license for a Genesys2 or KC705 K325T in a 900-pin package also works for a different board with a K325T in a 676-pin package, like the NetFPGA-1G-CML board. So, buying another board is likely much cheaper than purchasing a full tool license, which only lasts for one year of updates.

    It's not great... but a lot better than any other FPGA vendor will provide.

    Of course the cheapest, and possibly best option, is to just use the Vivado tool version that your current license works for.
  24. "The max rate isn't specified. You can find other users trying to get Pmods going as fast as possible"
    It's hard to argue with a statement that doesn't make any assertions.

    Be aware that all of the PMODs on the Basys3 are the "low speed" variety; that is they have 200 ohm series resistors between the FPGA pins and the PMOD connector pins.

    If all you want to do is toggle pins at 100 MHz, yes, you can do that. If you want to use such a signal to transmit information, then there's a lot more to consider.

    Only the so-called high speed differential PMODs have any PCB trace length matching, and that's only between the _n/_p pin pairs. And no PMOD, except on the ATLYS, can do differential signalling.

    None of the PMODs that I know of have length matching across all 8 pins.

    Most PMODs don't have a clock-capable FPGA pin connected to any of the PMOD pins. This might be a problem for high speed interfaces.

    A 10 MHz toggle rate is what most of Digilent's Reference Manuals suggest for the standard low speed PMOD. That's probably very conservative. I've certainly implemented SPI interfaces to external devices, through a PMOD connector via a custom PCB adapter, that exceed 32 MHz on the high speed PMODs. I'm guessing from past experimentation that ~50 MHz is a practical limit for useful educational work. Of course, the termination on the receiving end, the quality of the transmission line from FPGA pin to receiver pin, the current drive, slew rate, etc. will determine the quality of your signal. Large amounts of overshoot or undershoot will degrade performance and potentially reliability.

    Basically, what I'm trying to say is that if you want a good answer to your question, then you need to ask a better question. A nebulous question invites a nebulous answer.

    I suppose that what you really want to know is whether or not your design idea will work with a Basys3 PMOD connected to some external circuit. A good answer requires more information about what you are trying to do.

    I looked over the current Basys3 Reference Manual and was surprised to see that it didn't mention a useful toggle rate for the PMOD connectors. Since that board is pretty old, I assume that this information was scrubbed from the original manual. This seems to be consistent with the new Digilent policy of removing important information from easy access when it doesn't reflect well on the product capabilities, replacing specifications with ill-defined comments that suggest something more positive.