ArKay99

Members
  • Content Count

    22
  • Joined

  • Last visited

  • Days Won

    2

ArKay99 last won the day on July 11 2018

ArKay99 had the most liked content!

About ArKay99

  • Rank
    Member


  1. Not trying to get in the way of jpeyron's help, as I'm a rank newb here and don't want to step on any toes, but I've seen this error before, so I'd wait until this can be confirmed as the source and/or a possible solution is put forward. Are you running this on Ubuntu? From what I've seen, make is trying to find a 32-bit tool and/or libraries that haven't been installed. This is from an old thread (https://www.microlab.ti.bfh.ch/wiki/huce:microlab:tools:linux-client:xilinx-vivado):

    arm-xilinx-eabi-gcc: The SDK is shipped with a 32-bit compiler, so you may need to install 32-bit support. Unfortunately, the error in such a case is not very helpful in diagnosing the problem:

      /opt/Xilinx/SDK/2015.3/gnu/arm/lin??/bin/arm-xilinx-eabi-gcc: No such file or directory

    On older systems, the solution would be:

      apt-get install ia32-libs

    On newer systems, ia32-libs has been made obsolete. You should instead install:

      apt-get install lib32z1 lib32ncurses5
  2. I looked at the project referenced, but I couldn't get into it at this time. However, a few things were misleading, at least to me. Specifically, in step 11 the author has the reader generate a new BSP for standalone, yet all the subsequent references are to FreeRTOS; the illustrations show standalone and FreeRTOS is never shown. This would have led me to create the BSP with standalone parameters even though it was named RTOSDemo_bsp, and it seems that is what happened to you, as the SDK is complaining about something not being available for make to include (I can't see the rest of the path). Have you tried making the new BSP with FreeRTOS selected? Sorry if you've already done this, or if I misunderstood what is being attempted.
  3. Thanks for the update. Since I've been working with the rgb2dvi component, I was intrigued. I vote for door number 3.
  4. ArKay99

    DDR issues on Zybo-Z7-20

    Hi BogdanVanca, Thanks for the info; I assumed that's what those values were for. I was looking for a 'sanity check' because warnings are warnings and I at least like to understand why, and the DDR on my board being intermittent without knowing why was becoming frustrating, to say the least. Here is what I did to arrive at my new set of values, which seem to have fixed my issue. All of this assumes the preset.xml file is for a Zybo Z7 Rev B board. I have no way of determining the trace lengths without gerbers, and even then I don't have the tools to get a measurement that precise; I would assume those values would have to come from the PCB designer.

    Since I had the manual for the Zentel chips at hand, under DDR Controller Configuration I left Memory Type at DDR 3 (Low Voltage) and Memory Part set to Custom, with no other changes there. Under Memory Part Configuration I changed Speed Bin to DDR3 1600K, CAS Write Latency (cycles) to 6.000000, tRC to 48.75, tRAS to 35.0, and tFAW to 40.0. Those were the 'small changes' I made. I then set the DQS0-DQS3 values to 0.0 and the four DQ board delays to 0.25, and things started working again.

    However, I was puzzled as to how values that were so far off got into the DQS and DQ fields. I did a few searches and came upon this thread: https://forums.xilinx.com/t5/Embedded-Processor-System-Design/Package-pin-delay-considerations-for-Zynq-PS-DDR3-PCB-routing/td-p/495444. I thought I was on to something, so after generating the pin delay values and the package die pin values, as referenced in that thread, I started calculating. When I averaged those values, to my surprise I got the exact same numbers as in the preset.xml file. Afterwards I also found those values in the I/O Planning Layout spreadsheet. So, after reading through that thread and the other linked thread and putting it all together, it looks like that warning is suspect. The calculator gives the CLK0-CLK3 trace lengths as 18.8 mm, while the DQS0-DQS3 lengths vary from 22.9 to 29.7 mm. Since the clock path is shorter than the data strobe paths, the DQS-to-clock delay difference comes out negative. This seems like an entirely plausible scenario, and one that works when the negative values are used. (A rough sanity check of that arithmetic is sketched at the end of this post.)

    One last point, and one that sort of answers my question of how the numbers that stalled my DDR got entered. Yesterday, as I said, I entered the delay values into the appropriate fields based on the reports and averaged the delays, ending up with the numbers in the preset.xml file. Then I closed the DDR configuration window and went about the other work on the SDK side of things. This morning I opened the DDR Configuration window to find the values had changed. When I expanded the Calculated section to reveal the delay calculator, all the path delays were what I had entered (the same as preset.xml) but the lengths I had entered were 0, and when the values were then 'calculated', the resulting DQS to CLK Delay was different. I even had a positive value for the DQS1 DQS to CLK Delay. I checked the preset.xml file and those values were untouched. So there is one possible reason for the delay values ending up 'out of bounds'... a bug?

    All of the above has given me a bit of enlightenment on the workings of the DDR controller, its parameters, the version of memory I have, and a lot of the surrounding factors.
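    Just as a back-of-the-envelope check (not something taken from the Vivado calculator), if you assume the DQS-to-clock delay is the clock trace delay minus the data strobe trace delay and use a ballpark propagation delay of roughly 6.9 ps/mm, the trace lengths above land in the same general range as the negative preset values. Both the sign convention and the propagation figure here are my assumptions, so treat this as a sanity check only:

      # Tcl sketch -- assumed sign convention (CLK trace delay minus DQS trace delay)
      # and an assumed propagation delay of ~6.9 ps/mm; neither figure comes from
      # Xilinx or Digilent documentation.
      proc dqs_to_clk_delay_ns {clk_len_mm dqs_len_mm {ps_per_mm 6.9}} {
          # A clock trace shorter than the DQS trace gives a negative result.
          return [expr {($clk_len_mm - $dqs_len_mm) * $ps_per_mm / 1000.0}]
      }

      puts [dqs_to_clk_delay_ns 18.8 22.9]   ;# shortest DQS trace: about -0.028 ns
      puts [dqs_to_clk_delay_ns 18.8 29.7]   ;# longest DQS trace:  about -0.075 ns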
  5. Good news. I am curious as to how you resolved the issue.
  6. ArKay99

    DDR issues on Zybo-Z7-20

    A follow-up on this. I asked about this on the Xilinx forums and found many hits relating to it, going back as far as 2016.1. After reading quite a few threads and responses, it appears the negative values are default values that ship in the preset.xml file. The suggested remedies ranged from leaving the values alone (they will be OK, even though they trigger warnings) to editing the values in preset.xml to positive numbers. A Xilinx employee suggested editing the file to put 0.001 in all of those fields and making sure Write leveling, Read gate, and Read data eye are checked (a Tcl sketch of that edit is below). There are other things I did according to the preset.xml file, and I would be glad to share them if anyone is interested.
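    For anyone who would rather not hand-edit preset.xml, the same settings should be reachable from the Vivado Tcl console, since the fields map to the PS7 PCW_UIPARAM_DDR_* parameters that appear in the board preset files. This is only a sketch: processing_system7_0 is the usual default cell name and may differ in your block design, and I haven't verified the parameter list against every tool version.

      # Apply the Xilinx-suggested remedy: 0.001 in the DQS-to-clock delay fields
      # and the three DRAM training options enabled. Adjust the cell name to match
      # your block design.
      set_property -dict [list \
          CONFIG.PCW_UIPARAM_DDR_DQS_TO_CLK_DELAY_0 {0.001} \
          CONFIG.PCW_UIPARAM_DDR_DQS_TO_CLK_DELAY_1 {0.001} \
          CONFIG.PCW_UIPARAM_DDR_DQS_TO_CLK_DELAY_2 {0.001} \
          CONFIG.PCW_UIPARAM_DDR_DQS_TO_CLK_DELAY_3 {0.001} \
          CONFIG.PCW_UIPARAM_DDR_TRAIN_WRITE_LEVEL {1} \
          CONFIG.PCW_UIPARAM_DDR_TRAIN_READ_GATE {1} \
          CONFIG.PCW_UIPARAM_DDR_TRAIN_DATA_EYE {1} \
      ] [get_bd_cells processing_system7_0]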
  7. ArKay99

    DDR issues on Zybo-Z7-20

    I've been working with my Zybo-Z7-20 board for about 6 weeks now and have been having a hard time with anything having to do with the DDR. When I built and ran the OOB demo using Vivado 2016.4, it ran, sort of: I would sometimes get "could not write to memory 0x0010000" errors when trying to load a change into memory. When I tried to run the demo in Vivado 2018.2, it gave the same message, but none of my breakpoints would get hit and only the program in flash would run. If I set the linker file to run from OCM it would run, but not that well.

    I built the Hello World example and it ran fine. I then set the Hello World example to run from OCM, did some read/write tests to DDR, and they worked. Then I built and ran the memory test example against the DDR and it ran fine. Then I put the memory test code into the OOB demo main program, set it to run from OCM, and it failed to read or write to memory.

    Then I started looking at the physical memory. On my board (I have a B2 version) there are two Zentel A3T4GF40ABF chips mounted. I downloaded the manual and found a few things about them: they are DDR3L @ 1600K, while the schematic shows an MT41K256M16HA-125, which is DDR3L @ 1066F. The other parameters are fairly close, and the Zentel part manual states the part can be used as a 1066 speed bin part. I thought I had found the answer to my stalled DDR, but entering the few parameters that were slightly different in the Zynq IP setup page didn't help.

    Today I gave up on trying to make the 2016.4 demo work in 2018.2 and decided to build and work with the Audio DMA example. After I downloaded and installed it, I opened it in Vivado and got eight critical warnings saying that negative values in the DQS-to-clock delay fields could make the DDR fail. Upon loading the block diagram I opened the Zynq customization dialog and compared the DQS to clock delay and board delay values across my projects:

      Audio DMA example: DQS0 -0.050, DQS1 -0.044, DQS2 -0.035, DQS3 -0.100; DQ[7:0] 0.221, DQ[15:8] 0.222, DQ[23:16] 0.217, DQ[31:24] 0.244
      OOB project (doesn't work): DQS0 0.217, DQS1 0.133, DQS2 0.089, DQS3 0.248; DQ[7:0] 0.537, DQ[15:8] 0.442, DQ[23:16] 0.464, DQ[31:24] 0.521
      Hello World project (works): DQS0 0.0, DQS1 0.0, DQS2 0.0, DQS3 0.0; DQ[7:0] 0.25, DQ[15:8] 0.25, DQ[23:16] 0.25, DQ[31:24] 0.25

    I then installed the Hello World values into the OOB project and it worked, i.e. the SDK hit my breakpoints, variable values were shown, registers were getting filled, etc., and I could step through the code. Any changes I made were also shown. The values for the Audio DMA project were in the …\board_files\zybo-z7-20\A.0\preset.xml file; they are also the values that show when I select Calculated in the dropdown. I don't know where the other project's values came from. All the preset.xml files I searched and obtained had the same values as the Audio DMA project.

    So, having learned all of the above, I have these questions. Since the DDR chip on my board is different from the one in the B.2 schematic, and different from the one in the DDR config page, could I be provided the 'correct' values? I did find the procedure on training DRAM in the Zynq-7000 TRM, and the DRAM Training/Board Details options are checked, so I could give that a try.

    If you've gotten this far, thanks for your interest, and any help and guidance is greatly appreciated.
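    As a side note, an easy way to compare what different projects are actually using (rather than opening the re-customize dialog in each one) is to query the PS7 parameters from the Tcl console with the block design open. This is only a sketch: processing_system7_0 is the usual default cell name and may differ in your design.

      # Print the DQS-to-clock and DQ board delay settings of the Zynq PS in the
      # currently open block design.
      foreach p {
          PCW_UIPARAM_DDR_DQS_TO_CLK_DELAY_0 PCW_UIPARAM_DDR_DQS_TO_CLK_DELAY_1
          PCW_UIPARAM_DDR_DQS_TO_CLK_DELAY_2 PCW_UIPARAM_DDR_DQS_TO_CLK_DELAY_3
          PCW_UIPARAM_DDR_BOARD_DELAY0 PCW_UIPARAM_DDR_BOARD_DELAY1
          PCW_UIPARAM_DDR_BOARD_DELAY2 PCW_UIPARAM_DDR_BOARD_DELAY3
      } {
          puts "$p = [get_property CONFIG.$p [get_bd_cells processing_system7_0]]"
      }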
  8. I looked at your constraints file and one thing puzzles me: you've constrained the clock to a higher frequency than the 'standard' clock frequency in the file. The dvi2rgb documentation has this to say about the TMDS clock constraint:

    The TMDS clock input Clk_p/n is constrained in the IP to the maximum DVI clock frequency, 165 MHz. On some architectures this might result in timing impossible to meet. Depending on the application, if a lower pixel clock frequency is acceptable, the clock can be constrained on top-level, which will override the IP-internal constraints. For example, to constrain the design for 720p resolution (74.25 MHz), calculate the clock period (13.468 ns), and add the following to a project XDC file to constrain the clock on the top-level input port:

    I've just been experimenting with the Vivado TPG and found that if I didn't use that frequency in concert with the rgb2dvi component, things didn't work.
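    For reference, the constraint that passage is describing is a top-level create_clock on the TMDS clock input. The port name below is an assumption (use whatever your top level calls the positive TMDS clock pin); the numbers are the 74.25 MHz / 13.468 ns example the documentation gives:

      # Override the IP-internal 165 MHz constraint with a 74.25 MHz (13.468 ns)
      # clock on the top-level TMDS clock input. TMDS_Clk_p is a placeholder port
      # name -- match it to your own top level.
      create_clock -period 13.468 -waveform {0.000 6.734} [get_ports TMDS_Clk_p]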
  9. Thanks so much. That was painless. 😉
  10. Thanks for the info. The CPU from the system overview is an Intel(R) Core(TM) i7-4790K @ 4.00 GHz, with 16 GB of 1866 RAM. System type: 64-bit operating system, x64-based processor; the OS is Windows 10 Pro 64-bit. I'm thinking that even though I'm running a 64-bit processor and 64-bit Windows, I should download the 32-bit PC (i386) desktop image, specifically ubuntu-16.04.3-desktop-i386.iso? Thanks for the start procedure for the VM, no need to outline the steps.
  11. I'm a newb with Linux and Ubuntu, but I want to install the version I need to get Linux working on my Zybo Z7 with this project. I've been to the Ubuntu forum, signed up, and asked the question, and got several answers that didn't seem quite right about the OS version, LTS vs. GA, the kernel, where to find them, etc. I've got a clean Win 10 machine and am going to put VMware on it and run Ubuntu in it, but it's clear as mud how I go about getting and installing 16.04.3. Or is that just the minimum, and can I install the most current version, 18.xx something? What have the developers here done to get the platform needed to build the demo?
  12. As I said, I haven't done this project yet, and it's possible that the SDK isn't used, since the Zynq can be programmed and run with just the PL. However, when I looked at the project description I saw this... This says to me that the hardware has already been built in Vivado and the next step is to work with the SDK: either import the hardware handoff, or generate the bitstream in Vivado, export the hardware with the bitstream included (which creates the hardware handoff), and then launch the SDK from the menu in Vivado. So, if you've already done the Vivado update part as outlined by jpeyron, I would incorporate what he's suggested to update the SDK portion of the project. I'm still new to the Xilinx tools, but I would do it the latter way: generate the bitstream in Vivado before launching the SDK, export the hardware (including the bitstream), then launch the SDK from within Vivado. Once I get a project up and running, that's when I go back and start changing, adding, or subtracting things to see what my changes do while learning the system. The SDK is sensitive to changes in the hardware and will add, subtract, and recompile what it needs to keep the software up to date with the hardware.
  13. According to the response by jpeyron: I haven't done this project, but I went through upgrading the Out Of The Box demo for my Zybo Z7-20 board from 2016.4 to 2018.2, which has a playback version of the DMA for audio. I completed that, have it working, and learned a lot. But I installed and worked with the 2016.4 version of the tools so I could first build and run the project, then peruse the code. I'm about to get into this project myself, so I'm interested in your results. Did you do what was suggested?

    1: The version IS important. If you are using a newer version of the tools (2018.2 vs. 2016.4), you have to tell the Tcl script what version you are running; I believe it's because the IPs are versioned. So I would make the edit as outlined (a sketch of what that check typically looks like is below), then load the project and upgrade the IP cores. Then I would do the next steps: create the wrapper and generate the bitstream.

    2: There is a software component that has to be tied to the hardware component, and that has to be done through the SDK. I believe the software component has to be upgraded through a new hw_handoff component (generated by exporting hardware with the bitstream), then importing the correct projects in the SDK, and then creating a new board support package, which will create a new FSBL project (IIRC). Then you can program the Zynq, once you've set up the SDK project properly.

    Lastly, if you don't want to go through all of that (I didn't for the demo project, as it was too much for a newb just starting out with the tools): I went to Xilinx and downloaded Vivado 2016.4 and checked the SDK option, so I didn't have to do a separate download for that too. You can have multiple versions of the tools on the machine at once. This way you won't have to go through the extra layer of figuring out what is different between the two versions and tracking down the cause and resolution just to get to a working project.
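    To give an idea of what that edit usually looks like: the block design Tcl scripts that Vivado generates carry a version guard near the top, and "telling the Tcl what version you are running" is just changing (or satisfying) that check. The exact variable name and message wording vary between releases, so take this as illustrative rather than a copy of the project's script:

      # Typical version guard in a Vivado-generated block design script.
      set scripts_vivado_version 2016.4      ;# change to 2018.2 if running the newer tools
      set current_vivado_version [version -short]

      if { [string first $scripts_vivado_version $current_vivado_version] == -1 } {
          puts "ERROR: This script was generated using Vivado <$scripts_vivado_version> and is being run in <$current_vivado_version>."
          return 1
      }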
  14. Well, I applied for the ticket and was 'approved' at the Xilinx Service Portal, but upon entering it I have no way to enter a ticket and am left with only the Forums, the FAQs, or the Knowledge Base, which I've spent almost two days perusing and searching, and I've come up empty. I suppose that because I don't have a corporate email account, and this is a third-party product, they're not interested. At this point I'm not sure what to do. I'm also glad that I didn't pay the $595 for a node-locked SDSoC license! I have also exhausted the threads relating to Eclipse and its memory problems, and it appears this has been, and still is, a known issue for Eclipse. Here is what I've done so far: according to some of the threads on this, the Java VM's memory allocation limits are set in the eclipse.ini file, and the suggestion is to raise those limits. I have done so, to no avail. At this point I'm going to have to roll back to SDSoC 2018.1 and perhaps skip over 2018.2; however, unless the 2018.1 xfOpenCV issue has been fixed, I'm left with a broken product either way. IMHO, for a product (SDSoC) whose #1 claimed selling point is usability, Xilinx has failed miserably. It's hard enough to learn a new platform and complex software and hardware, but when the tools don't work it becomes impossible. I still have Vivado and the SDK and will move forward with those tools. Not ranting, just frustrated and disappointed...
  15. Thanks for the input, Jon. In addition to starting the ticket with Xilinx, I started a search on 'Eclipse not responding' and got lots of hits; I'm sorting through those now while awaiting a reply from Xilinx support. I will apprise you of the outcome. From the Eclipse threads, it might be as simple as having the correct version of Java installed.