zygot
  1. Hi Phil, I assume that you want an FPGA board with an ARM or intend to use a soft-core processor for your designs. Here's my (arguably debatable) perspective.

I much prefer to do everything in an HDL. All of the FPGA vendors support mixed Verilog and VHDL source. If you are into something more exotic then you need to do some research on their support of your favorite HDL. There are a lot of good reasons for doing an all-HDL design, and there are discussions in the Digilent forums you might find interesting.

If you want to use NIOS or MicroBlaze then you are constrained to a specific development flow that I find problematic. I rarely need to use an embedded processor, but I confess that I love ZYNQ when a processor makes sense. Xilinx's ZYNQ ARM-based products might best be thought of as an ARM with a programmable FPGA embedded. The tools for coding for the ARM are part of the free SDK and are, to my mind, pretty darn good. There are tool version incompatibility/change issues for all ARM software tools (a big reason to avoid embedded processors when not needed), though in my experience Xilinx tends to be the least painful. HDL-only projects are pretty easy to maintain through tool version releases. If you really need a low-end processor without floating point, there is a variety of third-party implementations of low-end micros out there.

I'll warn you that replicating a project is one thing, but if you want to create your own FPGA/ARM design, things can get very complicated very quickly. For all ARM-based FPGA devices there are two distinct flows involved. First you implement the FPGA (PL) side using the traditional FPGA tool flow. Then you export the hardware to an SDK to do the software development. It can take some time to figure out how to set up and use the SDK efficiently. You can program the device, run, and debug the ARM code from the SDK (debugging the PL and PS code is not integrated, but not necessarily difficult).

Digilent doesn't offer an FPGA board with a USB 3.0 interface. Generally the USB interface also handles configuration. Opal Kelly offers a good selection of USB 3.0 boards, at a higher price point than other vendors. I prefer something like a Genesys 2 or Nexys Video with an FMC connector. You can get inexpensive USB 3.0 development boards and software to add USB 3.0 connectivity to these boards; I've done so successfully using FTDI and Cypress tools for both of the Digilent boards.

Before spending money, spend some time checking on support for FPGA boards first. What kind of demos do they offer? Do they supply source code? Is their source problematic? For Digilent products there is definitely an issue for users trying to rebuild projects due to Vivado version issues. This is mostly a problem of them shooting themselves in the foot rather than trying to work smarter. I'm pretty certain that this is almost, but not quite, as painful for them as it is for their customers.

As to 1G Ethernet, this depends on how you want to use it. There aren't a lot of ARM-based boards with an Ethernet PHY connected to the PL (FPGA fabric). The more expensive ARM-based boards have a 1G Ethernet PHY connected to the PS (ARM). This might be great if you want to use the Ethernet in a traditional way. I prefer my Ethernet PHYs connected to the FPGA, but that's just me. One nice thing about Digilent's gigabit Ethernet PHYs is that they are initialized on power-up to be useful, without having to program and reset them post power-up. Intel really, really wants users to be dependent on closed IP to use Ethernet and makes it as difficult as possible to use FPGA-connected Ethernet. If you choose the Genesys 2 you could, in theory (I haven't done this), add a 10G Ethernet interface using an FMC mezzanine board.
  2. Perhaps I misunderstood your question. Since Digilent doesn't make any FMC mezzanine boards using the transceivers, their basic constraints file ignores these pins. One possibility is to look at one of the ADI JESD204B FMC project files and see what they use for IOSTANDARD property values. ADI does target the KC705, and the Genesys 2 heavily borrows from that design. I don't know if there even is an Application Note for Kintex transceivers, as neither Intel nor Xilinx is particularly interested in helping users use transceivers on any but the high-end devices. Perhaps @elodg, the Digilent engineer who characterised and tested the Genesys 2 FMC transceivers, has some information. I don't believe that the transceiver pins support any IOSTANDARD; they are dedicated for use as transceiver IO.
  3. I wasn't referring to IOSTANDARD when I was mentioning pin name/functionality. Everything that you need to know about the IOSTANDARD capabilities of any IO pin is in Xilinx UG471. Note that the Genesys 2 uses a Kintex device and has both HP and HR IO banks. Digilent FMC-equipped boards use a user-settable Vadj voltage for FMC IO, and this influences what IOSTANDARD you can opt to use. Just read the Xilinx literature. JESD204B is great but uses transceivers, so if you want to use those you need to read through UG476 as well. I'm not aware of any TI FMC ADC EVMs that target Xilinx FPGA boards; I'm assuming that you are making your own FMC mezzanine card? I am also not aware of anyone offering JESD204B IP for free. Make sure that you know what you are doing, as there are lots of unhappy surprises for the uninformed and unprepared. The best chance of success is to choose a third-party ADC FMC mezzanine board from a vendor that supplies some source code for a Xilinx board compatible with the Genesys 2. Given the licensing issue you might find this advice hard to act on. [edit] Analog Devices offers JESD204B ADC devices and a few FMC cards with them. They generally have some good FPGA support, but good luck trying to change their demo source to fit a custom application. You might want to snoop around the ADI website. Caution! The devil is in the details, and the vital details are usually hard to come by without a lot of work. Pin assignments by FPGA board vendors can break a project, so expect to spend some time tracking down each pin on the schematic for compatibility. Been there, done that on a few occasions. MAKE NO ASSUMPTIONS!
  4. The pins that your picture refers to are specific to the FMC VITA 57 standard. I suggest looking around the internet for FMC pin names/functionality. More important is the pin name/functionality for the Xilinx FPGA device; Xilinx UG475 provides this information. I'll warn you to make sure that you understand everything in the SelectIO and Clocking reference manuals if you intend to use IOSERDES. I've posted on this topic quite a lot and don't want to keep repeating myself. Most TI ADC/DAC serdes EVMs will be problematic on Xilinx boards. You have a decent chance using HSMC adapters and Altera boards. If you are prepared to make your own EVM/FMC adapter then you also have a good chance of success, as long as you understand all of the limitations involved with IOSERDES and clocking pin assignments. Best of luck.
  5. That's fine. Make sure that you pay attention to any data moving between clock domains if you aren't using one global clock.
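For the common single-bit case, here's a minimal sketch of the classic two-flop synchronizer, written from scratch for this post (the entity name and ports are mine, not from any particular project). It only handles a slowly changing level signal; buses and pulses crossing domains need handshakes, FIFOs, or gray-coded pointers instead.

  library ieee;
  use ieee.std_logic_1164.all;

  entity sync_2ff is
    port (
      clk_dst  : in  std_logic;   -- destination-domain clock
      async_in : in  std_logic;   -- signal arriving from another clock domain
      sync_out : out std_logic    -- safe to use in the clk_dst domain
    );
  end entity;

  architecture rtl of sync_2ff is
    signal meta, stable : std_logic := '0';
    -- Hint to Vivado to keep the flop pair together and analyze the CDC properly.
    attribute ASYNC_REG : string;
    attribute ASYNC_REG of meta, stable : signal is "TRUE";
  begin
    process (clk_dst)
    begin
      if rising_edge(clk_dst) then
        meta   <= async_in;  -- may go metastable; never use this flop directly
        stable <= meta;      -- has (almost certainly) settled by now
      end if;
    end process;
    sync_out <= stable;
  end architecture;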
  6. No secrets. Look around in the Project Vault part of the forum. There are several projects that use the UART, including most of mine. I agree that the nomenclature can be confusing to people new to Digilent FPGA boards. There is no "official" standard for naming the port signals of a UART interface. Digilent picks names from the viewpoint of the FTDI device, so uart_rx_out is an input to the FPGA and an output from the USB device. They are consistent, which is all that we can ask of them. 16 MHz seems pretty low for clocking an HDL UART in typical implementations; see the sketch below. My Project Vault submissions have a testbench for simulation so that you can delve into the inner workings. Simulation won't solve hardware issues with outputs driving outputs, but it is key to good FPGA development. As for running hardware: always read the FPGA board schematic and the datasheet for any interface device you are using, to do a sanity check on who's driving what pins... before powering any implementation. It's a good habit to foster. Once you understand how the hardware works and have a simulation that seems to work, you can check things like the baud period to be sure that your implementation has a chance of working on hardware.
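On the clocking point, the quick sanity check is divisor error: 16,000,000 / 115,200 = 138.9, so a rounded divisor of 139 runs about 0.08% slow, well inside the usual tolerance, but the coarser the divisor the bigger the quantization error at high baud rates. A minimal baud-tick generator sketch, assuming a 16 MHz clock and 115200 baud (the generics and names are mine):

  library ieee;
  use ieee.std_logic_1164.all;

  entity baud_gen is
    generic (
      CLK_HZ : natural := 16_000_000;
      BAUD   : natural := 115_200
    );
    port (
      clk       : in  std_logic;
      baud_tick : out std_logic  -- one-clock pulse at the baud rate
    );
  end entity;

  architecture rtl of baud_gen is
    -- Rounded divide: (16e6 + 57600) / 115200 = 139, i.e. about 0.08% slow.
    constant DIVISOR : natural := (CLK_HZ + BAUD/2) / BAUD;
    signal cnt : natural range 0 to DIVISOR-1 := 0;
  begin
    process (clk)
    begin
      if rising_edge(clk) then
        if cnt = DIVISOR-1 then
          cnt       <= 0;
          baud_tick <= '1';
        else
          cnt       <= cnt + 1;
          baud_tick <= '0';
        end if;
      end if;
    end process;
  end architecture;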
  7. @robfinch I need to clarify a point that I made in the previous post. Obviously, real UART serial ports haven't been seen on a PC for some time now; so even though the UART interface is easier to grasp from an HDL viewpoint, it is still a USB device from a PC viewpoint. PC driver and application overhead is therefore the same as it would be using DPTI. For really low baud rates you may not notice this at all; for really high baud rates you likely will. In a few of my Project Vault submissions I've included VHDL code for a UART from OpenCores as part of a testbench. When clocked at 100 MHz, this particular implementation starts to show problems above about 7 Mbaud on an Artix device. That's just an anecdotal, round-about figure from a bit of experimentation I did a few months ago. That rate amounts to about 14 clock samples per baud period, so you should be able to envision what problems might be encountered; see the sketch below. I'm just talking about the FPGA implementation here. If frame-to-frame latency is an issue and you can create multiple frames faster than every 33 ms on your FPGA platform, then clearly there might be an advantage to using DPTI over a USB COM port device. I've never had an impetus to investigate latency in USB UART applications. If you do choose DPTI, I suggest sending the largest blocks of data within reason (you'll need a large buffer in the FPGA) and avoiding partial packets. It's astonishing how slow USB can be if you aren't careful.
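To make that figure concrete: 100 MHz / 7 Mbaud is about 14.3 clocks per bit, so an integer timing counter has to truncate to 14 and runs roughly 2% fast; over the 10 bits of a frame that's about 20% of a bit period of accumulated drift, which is why things fall apart not far above this rate. A sketch of the kind of receiver bit timer I'm describing, with names and generics of my own invention:

  library ieee;
  use ieee.std_logic_1164.all;

  entity rx_bit_timer is
    generic (
      CLK_HZ : natural := 100_000_000;
      BAUD   : natural := 7_000_000
    );
    port (
      clk        : in  std_logic;
      restart    : in  std_logic;  -- pulse on the detected start-bit edge
      sample_now : out std_logic   -- pulses near the middle of each bit cell
    );
  end entity;

  architecture rtl of rx_bit_timer is
    -- Integer truncation: 100e6 / 7e6 = 14 (really 14.29), so ~2% fast per bit.
    constant CLKS_PER_BIT : natural := CLK_HZ / BAUD;
    signal cnt : natural range 0 to CLKS_PER_BIT-1 := 0;
  begin
    process (clk)
    begin
      if rising_edge(clk) then
        sample_now <= '0';
        if restart = '1' then
          cnt <= 0;
        elsif cnt = CLKS_PER_BIT-1 then
          cnt <= 0;
        else
          if cnt = CLKS_PER_BIT/2 - 1 then
            sample_now <= '1';  -- the ~2% per-bit error accumulates over the frame
          end if;
          cnt <= cnt + 1;
        end if;
      end if;
    end process;
  end architecture;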
  8. @robfinch Hey, this is an interesting question. It sounds so simple on the face of it but has so many unknowns when extrapolated to a final implementation. First of all, it's not clear to me what you are trying to do. Do you want to create your own PuTTY-like application but using a USB data stream? (I think that such an application already exists.)

Let's put aside questions about how you want to present your data and start with overall data transfer rates. Say we want to transfer 8192 8-bit bytes every 33.333 ms. This amounts to 8192 x 30 x 8 = 1,966,080 bps. I've found that using a UART at a 921600 baud rate with PuTTY or Python on a PC is pretty robust. Of course, with a UART we usually need 10 bits per character to provide start and stop bits, so the wire rate becomes 8192 x 30 x 10 = 2,457,600 baud; if we want to constrain ourselves to standard baud rates, we'd bump that up to 3,686,400 (4 x 921600), the next standard rate that handles the desired data rate. Even so, a UART seems to be a reasonable possibility. The UART as a COM port is pretty easy to work with, though Microsoft doesn't make creating a serial application easy for Windows. I can say that at 921600 baud using PuTTY it's pretty difficult to read that much text scrolling through a virtual screen, so perhaps you have a different presentation in mind. Rendering a display of, say, 80x100 characters in a static screen will require some thinking about how both the PC application and the FPGA will operate. Do you really want to send redundant data every 33 ms? Perhaps. Technically there is no problem with a 3,686,400 baud rate as far as the hardware goes, but getting a PC application to work robustly might be difficult; I've never had a reason to try it.

As far as USB 2.0 goes, you should have no problem streaming data at 10x the desired data rate if you pay attention to the details. You still need to render the data into a desired presentation, though. The problem with USB is that protocol overhead can become an issue with low data rates and short transfer lengths. You can always pad your data to overcome these issues; that is, you might need to transfer a lot more bytes than needed to accomplish your overall goals.

So here comes the advice, and it probably isn't the advice that you want. In terms of data transfer you certainly could use DPTI or a UART interface. The question is how you write a PC application to render it. Once you've decided on how the text will be presented, you will need to figure out if your OS will allow your application time to get the data and render it. Certainly, rendering 8KB at a 30 Hz rate isn't going to be an issue for a modern PC. If there is a delay in rendering the data, is this a problem? That depends. Create some intermediate projects to experiment with the different elements of such a project. I'd start with the PC application to render your 80x100 text screen. Then I'd add the UART or DPTI interface to the application and get a feel for what the issues are. Since the format for a UART is fixed and there is no packet protocol to deal with, this is the easiest interface to work with, with the fewest surprises. DPTI has a lot more considerations to avoid undesirable performance penalties; for the buffering side, see the sketch below. I urge anyone wanting to use a USB interface to read the available standards and understand the protocol before trying to use it. You'll still need to experiment, as the OS layers will have a large impact, but at least you will have a foundation for doing intelligent experimentation.

Hopefully this will kick off a useful discussion addressing your question.
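On the "large buffer in the FPGA" point, here's what I mean in its simplest form: a single-clock block-RAM FIFO, sketched from scratch for this post. The entity, generics, and 8192-entry depth (one frame from the example above) are my arbitrary choices; if the DPTI interface clock differs from your logic clock you'd want a dual-clock FIFO or the vendor's FIFO generator instead.

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity simple_fifo is
    generic (
      DATA_W : natural := 8;
      ADDR_W : natural := 13   -- 2**13 = 8192 entries
    );
    port (
      clk    : in  std_logic;
      rst    : in  std_logic;
      wr_en  : in  std_logic;
      wr_dat : in  std_logic_vector(DATA_W-1 downto 0);
      rd_en  : in  std_logic;
      rd_dat : out std_logic_vector(DATA_W-1 downto 0);
      full   : out std_logic;
      empty  : out std_logic
    );
  end entity;

  architecture rtl of simple_fifo is
    type ram_t is array (0 to 2**ADDR_W - 1) of std_logic_vector(DATA_W-1 downto 0);
    signal ram : ram_t;
    -- One extra pointer bit distinguishes full from empty when the addresses match.
    signal wr_ptr, rd_ptr : unsigned(ADDR_W downto 0) := (others => '0');
    signal full_i, empty_i : std_logic;
  begin
    full_i  <= '1' when wr_ptr(ADDR_W) /= rd_ptr(ADDR_W) and
                        wr_ptr(ADDR_W-1 downto 0) = rd_ptr(ADDR_W-1 downto 0) else '0';
    empty_i <= '1' when wr_ptr = rd_ptr else '0';
    full    <= full_i;
    empty   <= empty_i;

    process (clk)
    begin
      if rising_edge(clk) then
        if rst = '1' then
          wr_ptr <= (others => '0');
          rd_ptr <= (others => '0');
        else
          if wr_en = '1' and full_i = '0' then
            ram(to_integer(wr_ptr(ADDR_W-1 downto 0))) <= wr_dat;
            wr_ptr <= wr_ptr + 1;
          end if;
          if rd_en = '1' and empty_i = '0' then
            rd_dat <= ram(to_integer(rd_ptr(ADDR_W-1 downto 0)));  -- registered read
            rd_ptr <= rd_ptr + 1;
          end if;
        end if;
      end if;
    end process;
  end architecture;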
  9. @JColvin So... I looked at both schematics again and was surprised to see that the same Rev B.1 schematic displays differently on Win7 using Foxit Reader than it does on CentOS using the native GNOME poppler application. If I drag the cursor over the empty blocks for IO banks 34, 35, and 16 while holding down the left mouse button on CentOS, the hidden text appears on a blue background. Thanks (really!) for forcing me to figure this out. I guess PDF files aren't all things to all people.
  10. @eric_holtzclaw, if you need to use an external clock as part of an interface, there are pins that are appropriate. Unfortunately, the current version of the schematic for the Cmod A7 doesn't show the pin names for all of the IO banks connected to the PIOxx pins. Any _MRCC or _SRCC pin will work. Absent a customer-friendly schematic, you can find the pin names for all Xilinx devices in UG475. According to an older version of the Cmod A7 schematic, module pins PIO36, PIO46, PIO43, PIO37, PIO18, and PIO19 are on MRCC-type pins.
  11. I use CentOS 6 with ISE 14.7 and Adept without issues. I don't have an answer for your problems with VirtualBox hosts, but I will point out that CentOS 6 uses Linux kernel 2.6.32, which has fundamental differences from the later kernels that modern Ubuntu releases are based on. It wouldn't surprise me if this is a problem for USB drivers.
  12. Ok... possibly... probably not to write a driver for Linux on a dual-boot Windows/Linux machine... It depends. C# has worked for me for multi-threaded USB applications in terms of speed; I don't know if you can even write a Windows driver with it. (I gave up trying to write drivers for Winwoes long ago, as half of the versions have absolutely no purpose other than to create revenue... vista... m-u-s-t... c-o-n-t-r-o-l... s-e-l-f...) The problem is that I write application code so infrequently these days that I never fit into the "right hands" category any more.
  13. Oh my, I've been trying to keep myself from posting to this question for a while now and just failed to resist. The only purpose for asking such a question without context is to engender ill-will with your pals while having beers at a bar. [OK, it might also be a test question to see if you've been attending classes... shame on your professor.]

The following commentary might be a bit more useful. I suppose that you really want to know if you should choose learning one over the other to create an application that has higher performance. This question, while a bit more defined, is not all that much better at stating a context. It depends on the application, it depends on the target, it depends on the compiler, it depends on the optimization choices, it depends on the third-party libraries that your application uses, it depends on how you implement your algorithm, it depends on your skill at understanding the algorithm and your language of choice, it depends on what you've had to eat in the last 12 hours, it depends on what time it is when you write your code, etc., etc. It depends on what you mean by faster. Do you mean faster time to develop a robust application? Do you mean faster time to execute one particular algorithm on the same hardware? Do you mean faster time to execute all code on the same hardware?

Just for the sake of argument, let's assume that C generally results in faster code execution times for a particular application on particular hardware that doesn't use any third-party libraries. Would you think that you can't write poor code in C that performs slower than better code in C++? Is executing a particular algorithm in 25 us better than executing it in 24 us? Maybe. Probably not. Do you think that being expert in one language, on the premise that it makes all of your work so much faster than if you chose the other, is a marketable feature? I wouldn't bet on it. Besides, customers usually want something that works reliably and accomplishes something, rather than something that runs really fast and sometimes accomplishes a task. Here's my answer, and I bet that it's true for any context: "It depends."

Back in the days when the i386 was a hot microprocessor you could get 'hand-crafted' assembly libraries to perform certain tasks, especially for video (back before there were GPUs, when video was pretty much just a buffer). A lot of the techniques that have made the modern microprocessor faster (and inherently unsafe) have rendered spending time crafting such libraries useless. That doesn't mean that you can't add inline assembly to your C or C++ application if it makes sense. Of course, the PC is but one (perhaps I should say one universe of) hardware platform for which you can write applications. For a PC, your 'fast' application might not be so notably fast running on any particular OS.
  14. Well, this statement is correct, but... what would happen if you operated the HC-SR04 at 3.3V? I've done that with pretty good results up to a few feet. The sensor has a small micro that seems fine with a lower operating voltage. I haven't tried to characterize the module at 3.3V as far as accuracy near the long end of its range. There's no harm in experimenting as long as you supply 3.3V to the sensor power supply pins. Though somewhat limited, these are cheap and nifty little sensors to play around with. It would be swell if there were a version that allowed more control over the transmit characteristics and presented a different output signal, but hey, I encourage you to go ahead and play with them.
  15. As my previous post was getting a bit long-winded, I hastily cut it short, forgetting the most important part of any introductory course in modern digital design, or for that matter any self-guided journey into the subject. That would be verification, which at the least includes some exploration of simulation and writing effective testbench code; a minimal skeleton follows below. Verification is really part of the design process but is complex enough to deserve its own special discussion.
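Since it came up, here is the bare-bones shape that nearly every VHDL testbench starts from. The DUT instantiation is commented out because its name and ports are placeholders for whatever you're verifying, so treat this as a template rather than a working test:

  library ieee;
  use ieee.std_logic_1164.all;

  entity tb_example is
  end entity;

  architecture sim of tb_example is
    constant CLK_PERIOD : time := 10 ns;  -- 100 MHz
    signal clk : std_logic := '0';
    signal rst : std_logic := '1';
    signal din, dout : std_logic := '0';
  begin
    clk <= not clk after CLK_PERIOD / 2;  -- free-running clock

    -- uut : entity work.my_dut port map (clk => clk, rst => rst, din => din, dout => dout);

    stim : process
    begin
      wait for 5 * CLK_PERIOD;
      rst <= '0';
      wait until rising_edge(clk);
      din <= '1';                          -- drive stimulus synchronously...
      wait until rising_edge(clk);
      assert dout = '1'                    -- ...and check responses with asserts
        report "dout did not follow din (expected until a real DUT is wired in)"
        severity note;
      wait for 20 * CLK_PERIOD;
      std.env.stop;                        -- VHDL-2008; use a plain "wait;" otherwise
    end process;
  end architecture;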