Leaderboard


Popular Content

Showing content with the highest reputation since 04/17/19 in all areas

  1. 2 points
    JColvin

    Read from MicroSD in HDL, Write on PC

    Hi @dcc, I'm not certain how you are verifying that the HDL is writing to and then reading back from the SD card in a normal formatting style, but in general FAT32 is a widely used format for SD cards and there is plenty of existing material for it. I am uncertain why you are using a special tool to write to the SD card though; from what I can tell the tool is Windows compatible, so why not just use the Notepad program that comes with Windows to save a .txt file with the data you are interested in reading onto the SD card, or use Windows Explorer (the file manager) to move the file of interest onto the SD card? If you do have a header in your file, you will need to take that into account, though I do not know what you mean by "random file" in this case. Thanks, JColvin
  2. 2 points
    SeanS

    Genesys 2 DDR Constraints

    Hi JColvin, I am definitely not using ISE. I think JPeyron had it right: I didn't have my board.repoPaths variable set, so the project wasn't finding the board files. Once I set this variable as suggested, the pin mapping and IO types were auto-populated as expected. Kudos, Sean
  3. 1 point
    attila

    Save continuous data to file in WaveForms

    Hi @Andras, At the moment you have a WAV (RIFF/WAVE) export example under Scope/View/Logging/Script/Example.
  4. 1 point
    Hi @Ciprian, After some time, I managed to solve this issue. In fact, it was a problem in the hardware and device tree configuration. I discovered it when probing with another example project named Zybo-hdmi-out (https://github.com/Digilent/Zybo-hdmi-out). However, as this project is for a previous version of Vivado, I tested it with Vivado 2017.4. Surprisingly, it worked fine, but with another pixel format in the device tree. The Zybo-base-linux project which I used has the pixel format in the DRM device tree configuration set to "rgb888"; however, the Zybo-hdmi-out project displayed correctly with pixel format "xrgb8888". If I use other pixel formats, no output is displayed in either case. Digging deeper into the configuration of both projects, I discovered that there are some differences in the VDMA and subset converter settings; changing them to the configuration used in Zybo-hdmi-out solves the problem of colors and rendering, provided the pixel format in the device tree is also set to "xrgb8888". I attached the images of both configurations. In addition to this, I managed to update the design for the Vivado version I use (2018.2), with no more differences than the AXI memory interconnect being replaced by AXI SmartConnect in the newer version, which is added automatically when using the Vivado autoconnect tool for the VDMA block. Hope this information can help others who run into the same issue. Thanks for your help. Luighi Vitón
  5. 1 point
    If you can bring it up once in Vivado HW manager (maybe with the help of an external +5 V supply), you might be able to erase the flash. If not, you may be able to prevent loading the bitstream from flash, e.g. by pulling INIT_B low (R32 is on the bottom of the board, its label slightly below the CE mark). See https://www.xilinx.com/support/documentation/user_guides/ug470_7Series_Config.pdf: "INIT_B can externally be held Low during power-up to stall the power-on configuration sequence at the end of the initialization process. When a High is detected at the INIT_B input after the initialization process, the FPGA proceeds with the remainder of the configuration sequence dictated by the M[2:0] pin settings."
  6. 1 point
    @Ahmed Alfadhel, One other thing to note is that your scope was AC coupled in your pictures, so you were "seeing" a bipolar waveform of -1.28 V to +1.28 V. If you had DC coupled it, the waveform on the scope would have shifted up and you would have observed the 0 V to 2.5 V the DA was outputting. To get an actual bipolar output, you need the opamp level shifter/scaler.
  7. 1 point
    Well, by default your signal is between 0 V and Vref. The opamp circuit has a gain of 2 (range 0..2*Vref) but subtracts a constant Vref (range now -Vref..+Vref). It'll just shift the waveform on the scope and double its AC magnitude.
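    For illustration, here is a minimal numeric sketch of that mapping, assuming the stage implements Vout = 2*Vin - Vref; the Vref value and sample voltages below are just examples, not taken from the thread:

        #include <stdio.h>

        /* Sketch of the level-shifting stage described above, assuming the
         * opamp implements Vout = 2*Vin - Vref. With Vref = 2.5 V the 0..Vref
         * DAC range maps to a bipolar -Vref..+Vref range. */
        int main(void)
        {
            const double vref  = 2.5;
            const double vin[] = { 0.0, 1.25, 2.5 };    /* example DAC output voltages */

            for (int i = 0; i < 3; i++) {
                double vout = 2.0 * vin[i] - vref;      /* gain of 2, shifted down by Vref */
                printf("Vin = %.2f V -> Vout = %+.2f V\n", vin[i], vout);
            }
            return 0;
        }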
  8. 1 point
    Let me offer a suggestion to all newbies, regardless of how smart you are, before trying to do FPGA development. Read all of the user guides for the FPGA device resources that you are likely to be using. These will include the SelectIO, Clocking, CLB, and memory guides at a minimum. [edit] Also read the AC switching part of the device data sheet. Like it or not, what you are doing in FPGA development is digital design, and you need to have a sense of how design decisions affect timing. Read the Vivado user guides for design entry, constraints, simulation, timing closure, and debugging. Understand that even though various Zynq devices are based on certain FPGA families, the documentation tends to be unique for these devices. You will be overwhelmed with all of the 'basic' information. Spend a week or so running through all of the basic documentation, spending more time on specific topics with each read-through. The object isn't to memorize or understand everything but to get a general feel for how Xilinx presents its information. You can also learn stuff that you will miss in specific IP documentation by using the simulator, but only if you are careful to read all of the simulator messages. This is complicated stuff, and the tools, even when they behave as described in the reference material, are even more complicated. The purpose of doing this is to get a general feel for how the devices work, their specific use limitations, and how the tools work. It will take a year or so before you start becoming competent at it if you are a normal human.
  9. 1 point
    @askhunter A tip if you want to notify someone that you are responding to a post: type @ and the first few letters of their username. A selection of usernames will appear in a popup window to choose from. If you just type @ and the whole name you won't get the desired result. I confess that I'm not an expert on using the features of this site, but I did figure out this one. As to understanding all of the Xilinx documentation, what you are doing is correct. Speed-read through a document to get a general sense of what's being presented and don't worry about the things that you don't grasp. Just being familiar with what information is where will help with a specific question later. The DSP48E is a very complicated piece of hardware. You only understand how complicated by trying to instantiate it as a UNISIM component to implement a particular algorithm. I've done this and it takes time. You understand by doing; one step at a time. In your case I'm assuming that you are starting with someone else's code and trying to modify it. This approach takes a difficult task and turns it into an extremely difficult task. [edit] Vivado uses the multipliers in a seamless way when you specify a multiply in your HDL code. It takes care of a lot of little details, such as the fact that the multipliers are signed 18-bit. There are a LOT of options with the DSP48E blocks. Once you start making decisions for Vivado, by, say, using the use_dsp attribute in your code, you are taking on responsibility for more of those details... so you had better understand how the DSP48E blocks work. Trust me, even after you have figured out all of the necessary behaviors of the DSP48E blocks it doesn't get easier, as you will have to contend with routing issues that might dramatically reduce your data rates. This is a general rule for using FPGA device resources. You can use the IP wizards to help construct a component that's useful for your needs, or do it yourself in HDL code and assume the responsibility for getting all of the details and constraints right.
  10. 1 point
    @askhunter I suggest that you read UG479 to see what the DSP48E blocks do. Then read UG901 to see what the use_dsp attributes do. Reading the recipe doesn't always help improve the cooking, but it never hurts. A long time ago, having signed multipliers in hardware was a big deal for FPGA developers. For the past decade or so these have become integrated into more complicated and useful 'DSP' blocks. The DSP nomenclature is a holdover from the days, long before IEEE floating point hardware was available, when having a fast multiplier in hardware meant that you could do some fun stuff in a micro-controller that you couldn't do with software routines. These days the lines are blurry. Most FPGA devices have some really fast hardware features, block RAM and DSP blocks (depending on how they are used) being the most useful for grinding out mathematical algorithms. By the way, the DSP blocks can be useful for more than multiply-add operations.
  11. 1 point
    D@n

    Read from MicroSD in HDL, Write on PC

    @dcc, This is really the backwards way to get something like this going. You should be proving your design in simulation before jumping into a design on hardware. Let me offer you an alternative. Here is a Verilog driver for talking to an SD card using SPI. If you have already chosen to use the AXI bus, you can find an AXI-lite to WB bridge here that will allow you to talk to this core. Even if you already have a driver you like, the documentation for this one describes how to set up the SD card to where you can talk to it, and provides examples of how to read and write sectors. Even better, there's a piece of C++ code which can be used as a simulator with Verilator. (Not sure if this would work with MicroBlaze or not.) You can then use Linux tools, such as mkfatfs and such, to create a file with a FAT format that you can use as a "simulated" SD card. When the simulation isn't running, you can mount the card on your system and check out/modify the files, and so know that things will work (based upon your experience with simulation) once you finally switch to hardware. Indeed, if you are willing to accept the risks, you could even interact with your SD card from the simulation environment itself. If you want an example of a setup that would control the SD card interface from a ZipCPU, you can check out the ZBasic repository, which has such a simulation integrated into it. Indeed, there's even an sdtest.c program that can be used for that purpose. As for reading and comprehending the FAT filesystem, there's a FATFS repository that is supposedly good for use with embedded software. I haven't tried it, so I can't comment upon it that much. Alternatively, if you can control how the file system is laid out, you should be able to place a file of (nearly) arbitrary length a couple of sectors into the FS, and force the file to use contiguous sectors. If you do that, then you've dealt with the most complicated parts about reading from the SD card. Just my two cents, and some thoughts and ideas along the way. Dan
  12. 1 point
    JColvin

    Vivado free for Artix-7?

    Hi @TerryS, Unfortunately, the download speeds you are reporting are due to something on your end (either your internet connection or your computer hardware), not Xilinx's end. Note also, as a fair warning (since I believe this is your first time using Vivado), that even simple projects, such as the LED blinking project that xc6lx45 mentioned, will probably take more time than you expect, as the Vivado software needs to program and set every transistor inside the FPGA. There is a nice comment summarizing what all the tools need to do during synthesis, implementation, and bitstream generation here. But as @xc6lx45 said, a lot of the material looks more complicated than it actually is, mostly because it's a different language. Thanks, JColvin
  13. 1 point
    JColvin

    OpenLogger ADC resolution + exporting

    Hi @sgrobler, Our design engineer who designed the OpenLogger did an end-to-end analysis to determine the end number of bits of the OpenLogger. This is what they ended up doing, in summarized form: <start> They sampled a pack of 3 AAA batteries to the SD card at 250 kS/s, set the OpenLogger sample rate to 125 kS/s, and took 4096 samples; they then took the raw data stored on the SD card, converted it to a CSV file, and exported the data for processing. Their Agilent scope read the battery pack at 4.61538 V and, as they later found from FFT results, the OpenLogger read 4.616605445 V, a difference of 0.001226445 V or ~1.2 mV, presuming the Agilent is perfect (which it is not); it was nice to see that the values worked out so closely. They calculated the RMS value of the full 4096 samples in both the time domain and, using Parseval's theorem, in the frequency domain as well, both of which came up with the same RMS value of 4616.606689 mV, which is very close to the DC battery voltage of 4616 mV. Because the RMS value equals the DC voltage, this gives the previously mentioned DC value of 4.616605445 V. They can then remove the DC component from the total RMS value to find the remaining energy (the total noise, including analog, sampling, and quantization noise) of the OpenLogger from end to end. With the +/- 10 V input range, this produces an RMS noise of 1.5 mV. At the ADC input, there is a 3 V reference and the analog front end attenuates the input by a factor of 0.1392, so the 1.5 mV noise on the OpenLogger is 0.2088 mV at the ADC. With 16 bits (65536 LSBs) over 3 V, 0.0002088 V translates to ~4.56 LSBs of noise. Converting that noise to bits, log(4.56)/log(2) gives 2.189 bits, so the final ENOB is 16 - 2.189 = ~13.8 bits. Note though that this ENOB of 13.8 bits is based on system noise and not dynamic range, so for non-DC inputs (which will likely be measured at some point) the end number of bits is not easily determined. The datasheet for the ADC used in the OpenLogger (link) shows that the ADC itself gives an ENOB of about 14.5 bits at DC (so the 13.8 bits is within that range), but this of course rolls off to lower ENOB at higher frequency inputs. Thus, they cannot fully predict what the compound ENOB would be over the dynamic range, but they suspect it all mixes together and is 1 or 1.5 bits lower than the ADC ENOB response. <end> Let me know if you have questions or would like to see the non-abbreviated version of their analysis. Thanks, JColvin
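    As a cross-check of the arithmetic in the summary above, here is a small sketch that reproduces the noise-to-ENOB conversion using the figures quoted in the post:

        #include <math.h>
        #include <stdio.h>

        /* Reproduces the ENOB arithmetic summarized above, using the figures
         * quoted in the post (1.5 mV RMS system noise, 0.1392 front-end
         * attenuation, 3 V reference, 16-bit ADC). */
        int main(void)
        {
            const double noise_rms   = 1.5e-3;    /* V RMS, referred to the input */
            const double attenuation = 0.1392;    /* analog front-end attenuation */
            const double vref        = 3.0;       /* ADC reference voltage        */
            const int    bits        = 16;

            double noise_at_adc = noise_rms * attenuation;    /* ~0.2088 mV       */
            double lsb          = vref / (1 << bits);         /* ~45.8 uV per LSB */
            double noise_lsbs   = noise_at_adc / lsb;         /* ~4.56 LSBs       */
            double noise_bits   = log(noise_lsbs) / log(2.0); /* ~2.19 bits       */

            printf("ENOB ~= %.1f bits\n", bits - noise_bits); /* ~13.8 bits       */
            return 0;
        }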
  14. 1 point
    jpeyron

    Pmod IA AD5933 measurement speed

    Hi @M.Mahdi.T, Welcome to the Digilent Forums! We typically do not test the max throughput metric when validating our products. Looking at the AD5933 datasheet here, the Pmod IA would be limited to 1 MSPS due to the on-board ADC. The main bottleneck for throughput to a host board will be the I2C communication, which runs at 400 kHz. Here is a good forum thread on using the Pmod IA with the Raspberry Pi, with code and hints for use, setup, and calibration. Here is the resource center for the Pmod IA, which has the Raspberry Pi code as well. best regards, Jon
  15. 1 point
    @Ahmed Alfadhel To understand what's going on, check out table 8 of the datasheet on page 15. Basically, the DAC provides outputs between 0 and max, where 0 is mapped to zero and all ones is mapped to the max. In other words, you should be plotting your data as unsigned. To convert from your current twos complement representation to an unsigned representation where zero or idle is in the middle of the range, rather than on the far end, just toggle the MSB. Dan
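    If it helps, here is a minimal host-side sketch of that MSB toggle; the 12-bit sample width is an assumption for illustration only, not taken from the datasheet referenced above:

        #include <stdint.h>
        #include <stdio.h>

        /* Sketch of the two's-complement -> offset-binary conversion described
         * above: toggling the MSB moves the zero/idle code to mid-scale.
         * A 12-bit sample width is assumed purely for illustration. */
        static uint16_t twos_to_offset_binary(int16_t sample)
        {
            return ((uint16_t)sample ^ 0x0800) & 0x0FFF;  /* flip bit 11, keep 12 bits */
        }

        int main(void)
        {
            printf("%u\n", twos_to_offset_binary(0));      /* 2048: mid-scale     */
            printf("%u\n", twos_to_offset_binary(-2048));  /* 0:    most negative */
            printf("%u\n", twos_to_offset_binary(2047));   /* 4095: most positive */
            return 0;
        }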
  16. 1 point
    bogdan.deac

    OpenCV and Pcam5-c

    Hi @Esti.A, If you clone the repo you obtain the "source code" for the platform, and you have to generate the platform yourself. This is a time-consuming and complicated task and is not recommended if you do not understand SDSoC very well. I advise you to download the latest SDSoC platform release from here. You will obtain a zip file that contains the SDSoC platform already built. After that, you can follow these steps to create your first project.
  17. 1 point
    kwilber

    New Xilinx Zynq MPSoC ebook available

    For those interested, Xilinx has just made a new Zynq MPSoC ebook available here.
  18. 1 point
    FPGAMaster

    CMOD A7 Schematic missing stuff

    Thank you Jon... I got the PM and will follow up as you suggested.
  19. 1 point
    Hi, I think a UART is the least effort. Parsing ASCII hex data in a state machine is easy and intuitive to debug, at the price of 50% of the throughput. If you like, you can have a look at my busbridge3 project here, which goes up to 30 MBit/second. The example RTL includes very simple bus logic with a few registers, so it's fairly easy to connect your own design. Note, it's not meant for AXI, MicroBlaze or the like, as it occupies the USB end of the JTAG port. In theory, it should work on any Artix & FTDI board as it doesn't use any LOC-constrained pins.
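    As a rough illustration of the ASCII-hex parsing idea above, here is a small software model; the actual FPGA-side state machine would of course be written in HDL, and the command format here is invented for the example:

        #include <ctype.h>
        #include <stdint.h>
        #include <stdio.h>

        /* Software model of a simple ASCII-hex command parser of the kind
         * described above. Hex digits are shifted into an accumulator; a
         * newline "executes" the accumulated word. The command format is
         * invented for this example. */
        static uint32_t accum;

        static void parse_char(char c)
        {
            if (isxdigit((unsigned char)c)) {
                uint32_t nibble = isdigit((unsigned char)c)
                                      ? (uint32_t)(c - '0')
                                      : (uint32_t)(tolower(c) - 'a' + 10);
                accum = (accum << 4) | nibble;      /* shift in one hex digit (4 bits) */
            } else if (c == '\n') {
                printf("got word 0x%08X\n", accum); /* "execute" the completed command */
                accum = 0;
            }
            /* anything else is ignored, which keeps interactive debugging forgiving */
        }

        int main(void)
        {
            const char *cmd = "DEADBEEF\n";
            for (const char *p = cmd; *p; p++)
                parse_char(*p);
            return 0;
        }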
  20. 1 point
    LOL yeppers! https://store.digilentinc.com/labview-physical-computing-kit-for-beaglebone-black-limited-time/ I've been struggling to dredge up my Unix experience from 30 years ago and apply it to this board as well as an RPi 3B+ in an effort to get them to run LabVIEW. In the process, I had flashed the BBB's eMMC with the most current software only to find it broke LINX. I then back-rev'd the software to 8.6 2016-11-06, which seems to work so far. But I'd like to know what it originally shipped with in order to fill out the revision continuum. -Scott
  21. 1 point
    bogdan.deac

    OpenCV and Pcam5-c

    Hi @Esti.A, SDx, which includes SDSoC (Software Defined System on Chip), is a development environment that allows you to develop a computer vision application, in your case, using C/C++ and the OpenCV library. The targets of SDx-built applications are Xilinx systems on chip (SoCs) (Zynq-7000 or Zynq UltraScale+). The Xilinx SoC architecture has two main components: an ARM processor (single or multi core) named the Processing System (PS) and an FPGA, named the Programmable Logic (PL). Using SDx to build an application for an SoC allows you to choose which functions from your algorithm are executed in the PS and which ones are executed in the PL. SDx will generate all the data movers and dependencies that you need to move data between the PS, DDR memory and the PL. The PL is suitable for operations that can be easily executed in parallel. So if you choose a median filter function to be executed in the PL instead of the PS, you will obtain a better throughput from your system. As you said, you can use OpenCV to develop your application. You have to take into account that the OpenCV library was developed with CPU architectures in mind. The library was designed to obtain the best performance on some specific CPU architectures (x86-64, ARM, etc.). If you try to accelerate an OpenCV function in the PL using SDx, you will obtain poor performance. To overcome this issue, Xilinx has developed xfopencv, which is a subset of the OpenCV library functions. The functionality of the xfopencv functions and the OpenCV functions is the same, but the xfopencv functions are implemented with the FPGA architecture in mind. xfopencv was developed in C/C++ following some coding guidelines. When you are building a project, the C/C++ code is given as input to the Xilinx HLS (High Level Synthesis) tool, which will convert it to HDL (Hardware Description Language) that will be synthesized for the FPGA. The above-mentioned coding guidelines provide information about how to write C/C++ code that will be implemented efficiently in the FPGA. To get a better understanding of xfopencv, consult this documentation. So SDx helps you obtain better performance by offloading the PS and by taking advantage of the parallel execution capabilities of the PL. Have a look at the SDSoC documentation. For more details check this. An SoC is a complex system composed of a Zynq (ARM + FPGA), DDR memory and many types of peripherals. Above those, one can run a Linux distribution (usually PetaLinux, from Xilinx) and, above the Linux distribution, the user application will run. The user application may access the DDR memory and different types of peripherals (the PCam in your case). Also, it may accelerate some functions in the FPGA to obtain better performance. To simplify the development pipeline, Xilinx provides an abstract way to interact with all of this, named the SDSoC platform. The SDSoC platform has two components: a Software Component and a Hardware Component, which describe the system from the hardware up to the operating system. Your application will interact with this platform. You are not supposed to know all the details about this platform. This is the idea: to abstract things. Usually, the SDSoC platforms are provided by the SoC development board providers, like Digilent. All you have to do is download the latest SDSoC platform release from github. You have to use SDx 2017.4. You don't have to build your own SDSoC platform. This is a complex task. You can follow these steps in order to build your first project that will use the PCam and the Zybo Z7 board.
    The interaction between the PCam and the user application works in the following way: an IP core in the FPGA acquires the live video stream from the camera, and the video stream is written into DDR memory. This pipeline is abstracted by the SDSoC platform. The user application can access the video frames through Video4Linux (V4L2). The Live I/O for PCam demo shows you how to do this. I suggest you read the proposed documentation to obtain the basic knowledge needed for SDSoC project development. Best regards, Bogdan D.
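    For a feel of what that V4L2 access looks like from the user application, here is a minimal sketch; the device node /dev/video0, the resolution, and the pixel format are assumptions that depend on the platform and demo configuration:

        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/ioctl.h>
        #include <unistd.h>
        #include <linux/videodev2.h>

        /* Minimal V4L2 sketch: open the (assumed) PCam capture device, print
         * the driver info, and request a capture format. Buffer handling and
         * streaming are omitted for brevity. */
        int main(void)
        {
            int fd = open("/dev/video0", O_RDWR);   /* device node is an assumption */
            if (fd < 0) { perror("open"); return 1; }

            struct v4l2_capability cap;
            if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0)
                printf("driver: %s card: %s\n", (char *)cap.driver, (char *)cap.card);

            struct v4l2_format fmt;
            memset(&fmt, 0, sizeof(fmt));
            fmt.type                = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            fmt.fmt.pix.width       = 1280;                 /* example resolution      */
            fmt.fmt.pix.height      = 720;
            fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_XBGR32;  /* example 32-bit format   */
            fmt.fmt.pix.field       = V4L2_FIELD_NONE;
            if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
                perror("VIDIOC_S_FMT");

            close(fd);
            return 0;
        }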
  22. 1 point
    xc6lx45

    Vivado free for Artix-7?

    Just as a reality check: to make, e.g., an LED blink, the required CMOD A7-specific content is about five lines of constraints from CMODA7_Master.xdc. It may look more complicated than it actually is. And BTW, good choice, it's a great little board 🙂
  23. 1 point
    jpeyron

    Vivado free for Artix-7?

    Hi @TerryS, Thank you for posting how you got to the legacy content. I will pass this on to our content team. We will still have this content accessible since there is a wide array of people who use different versions of Vivado. The legacy content is accurate for earlier versions of Vivado/SDK. best regards, Jon
  24. 1 point
    jpeyron

    VGA on Zybo

    Hi @Mukul, Glad to hear that your project is working. Thank you for sharing what you did. best regards, Jon
  25. 1 point
    @Ajay Ghodke, For being new, you've picked a pretty challenging project! Reading an image from an SD card requires several layers of processing. The first part is the physical layer. The specification outlines two means of communicating with a card's physical layer: using the SDIO protocol (a four-wire, bidirectional data/command channel), and using the SPI protocol. I have personally built a Verilog core that communicates over the SPI protocol, and you are welcome to use my core, while others have done their work with SDIO and ... I haven't tried their cores, so I cannot speak to them. Across this physical layer, and to get things up and running, a protocol handshake must take place between the card and the controller. This handshake is outlined in the physical specification above, and also in the instructions for the controller I just mentioned. For a given card, the handshake isn't all that difficult. Indeed, for a given class of cards it's probably not that bad. To process every possible SD card, finding appropriate examples to test with can be a challenge--just to prove that you can handle all the corner cases. Once you can handle communications at this level, you will then be able to read and write sectors from the SD card. The next level, beyond sectors, involves reading and understanding the partition table. This table will tell you where the file systems are on the SD card, and more specifically, what file systems are on the SD card. In general, most SD cards have only one file system on them, so partition processing is pretty simple. That file system is a FAT filesystem--whether FAT16, FAT32, etc. I'm not certain. (I haven't gotten this far yet.) After the partition layer, you will need to process the file system. Assuming that your SD card has a FAT filesystem on it, there will be two types of files on the system: files containing directory entries, and everything else. These files may be found in any order on the SD card, as specified by the file allocation table. That table contains one entry per cluster on the file system, telling you, after you have read a given cluster, where to find the next one. (Clusters are similar to sectors, but may be implemented as groups of sectors.) If the filesystem is in proper order, the last cluster will identify itself as the last cluster. So, the steps to processing this filesystem involve:
    1. Identifying which version of the FAT system you are working with.
    2. Finding, from that information, the first cluster of the root directory.
    3. Reading through the directory for the file you want. (Keep in mind, other clusters of this directory may be out of order--you'll need to look up their locations in the table.) If your file is in a subdirectory, you'll have to find the subdirectory entry.
    4. Once you have the entry for the file (or subdirectory) you want, you'll find the location on the disk where that file begins. (IIRC, it's the cluster number of the start of the file.) You'll also find the length of the file (or subdirectory) you are interested in.
    5. If what you found was a subdirectory, and if it's the subdirectory your file is in (assuming it is in a subdirectory and not the main directory), you'll then need to repeat this process, reading the subdirectory "file" and looking for your file's entry. (Or perhaps looking for another subdirectory entry.)
    6. From the final entry, you will now know where to find the first cluster of your file, and the full length of the file in bytes. (It may be longer or shorter than the number of clusters allocated for it in the allocation table.) The file allocation table will tell you where to find subsequent clusters.
    If all you wish to do is to change the image in place, then you now know where to find all the pieces. At this point, you can change the file as it exists on the SD card at will. Creating new files on the SD card requires creating a directory entry for that file, finding empty clusters to hold the file, placing the number of the first cluster into the directory entry, adjusting the directory length, etc. It doesn't take long before this becomes an in-depth software task. I have seen some approaches where individuals have created their own partitions containing their own file system with their own format, just to avoid all of this hassle, and have been successful doing this within a microcontroller. While doable, such solutions tend to be application-specific. Hope this helps, Dan
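    To make the cluster bookkeeping above a bit more concrete, here is a rough sketch of the FAT32 boot-sector arithmetic; sd_read_sector() is a hypothetical stand-in for whatever sector-read primitive your controller provides, and the card is assumed to have no partition table (boot sector at LBA 0):

        #include <stdint.h>

        /* Hypothetical stand-in for the sector-read primitive provided by the
         * SD card controller described above. */
        extern int sd_read_sector(uint32_t lba, uint8_t buf[512]);

        static uint16_t rd16(const uint8_t *p) { return (uint16_t)(p[0] | (p[1] << 8)); }
        static uint32_t rd32(const uint8_t *p)
        {
            return (uint32_t)p[0] | ((uint32_t)p[1] << 8)
                 | ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
        }

        /* Rough sketch of the FAT32 bookkeeping described above: read the boot
         * sector, locate the data region, and compute where the root directory
         * starts. Field offsets follow the usual FAT32 boot-sector layout. */
        int find_fat32_root(uint32_t *root_lba, uint32_t *sectors_per_cluster)
        {
            uint8_t bs[512];
            if (sd_read_sector(0, bs) != 0)
                return -1;

            uint16_t reserved   = rd16(&bs[14]);  /* reserved sector count     */
            uint8_t  num_fats   = bs[16];         /* number of FAT copies      */
            uint32_t fat_size   = rd32(&bs[36]);  /* sectors per FAT (FAT32)   */
            uint32_t root_clust = rd32(&bs[44]);  /* first cluster of root dir */
            *sectors_per_cluster = bs[13];

            uint32_t first_data_sector = reserved + num_fats * fat_size;
            /* cluster numbering starts at 2, so cluster N begins at: */
            *root_lba = first_data_sector + (root_clust - 2) * *sectors_per_cluster;
            return 0;
        }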