Search the Community

Showing results for tags 'zybo z7 20'.



Found 4 results

  1. Dear all, I am working on a video-processing platform based on this project: https://www.hackster.io/adam-taylor/creating-a-zynq-or-fpga-based-image-processing-platform-e79394. I am using a Zybo Z7-20 and have successfully built a simple passthrough pipeline using VDMA, as described in the link above.

     However, when I create a simple RGB2GRAY block in Vivado HLS and insert it into that platform, I cannot make it work (link to the second project: https://www.hackster.io/adam-taylor/using-hls-on-an-fpga-based-image-processing-platform-8f029f). On Linux I cannot run C/RTL co-simulation in Vivado HLS, while on Windows this is not an issue; I do not know why, but that is a topic for another question.

     I generate the Vivado HLS block with ap_start hardwired to 1. Sometimes, after building in Xilinx SDK, I get distortion with pixel flickering; other times I see no signal on the screen at all. I am really lost, because without the HLS block there are no problems, but after inserting it the output is completely garbled.

     Instead of hardwiring the HLS block (using #pragma HLS INTERFACE ap_ctrl_none port=return), I also tried driving ap_start to 1 with a constant, and connecting it to a GPIO driven to 1 from the Xilinx SDK. Neither method worked.

     I am enclosing the Vivado IP Integrator block design as a PDF, the application code (hello.cpp) from the Xilinx SDK, and the whole project at this link: https://drive.google.com/file/d/1yM3upD4PuwHEXGZ_6M8O1-vZxLX5bIQP/view?usp=sharing. I am using Vivado 2018.3. I really need help and would appreciate any feedback on this issue. Thank you very much indeed. design_1.pdf hello.cpp
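To separate functional bugs from interface/handshake bugs in a case like the above, it helps to pin down the per-pixel math the RGB2GRAY core should compute. The sketch below is a minimal software reference (my own, not the poster's HLS code), assuming 8-bit RGB input and the common BT.601 luma weights in fixed-point form, which maps cleanly to FPGA logic:

```cpp
#include <cstdint>

// BT.601 luma weights scaled by 256 (77 + 150 + 29 = 256), so the
// conversion is integer-only and finishes with a single shift --
// the same structure an HLS implementation would typically use.
inline uint8_t rgb2gray(uint8_t r, uint8_t g, uint8_t b) {
    return static_cast<uint8_t>((77u * r + 150u * g + 29u * b) >> 8);
}
```

If a software pass over a test frame with this function looks correct but the hardware output flickers or disappears, the problem is more likely in the AXI-Stream handshake (TLAST/TUSER framing, ap_start/ap_ctrl_none control) than in the pixel math itself.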
  2. Hope you are all well. I downloaded Digilent/Zybo-Z7-20-HDMI from https://github.com/Digilent/Zybo-Z7-20-HDMI, upgraded the IPs, and it displayed output on the monitor. Then I created a Sobel edge-detection IP and added it to the block diagram. After solving some clocking issues, the bitstream was generated. After exporting to SDK, when I Launch on Hardware (System Debugger), no output is displayed. The block diagram is below; please guide me.
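When a newly inserted filter IP kills the display, it is worth confirming what the filter is supposed to produce before debugging the stream plumbing. The following is a hedged software reference for a 3x3 Sobel operator on an 8-bit grayscale image (my own sketch, not the poster's IP); border pixels are zeroed for simplicity:

```cpp
#include <cstdint>
#include <cstdlib>
#include <algorithm>
#include <vector>

// 3x3 Sobel gradient-magnitude reference on an 8-bit grayscale image
// of width w and height h, stored row-major. Borders are set to 0.
std::vector<uint8_t> sobel(const std::vector<uint8_t>& img, int w, int h) {
    std::vector<uint8_t> out(img.size(), 0);
    for (int y = 1; y < h - 1; ++y) {
        for (int x = 1; x < w - 1; ++x) {
            // Neighbor fetch relative to the current pixel
            auto p = [&](int dx, int dy) {
                return static_cast<int>(img[(y + dy) * w + (x + dx)]);
            };
            // Horizontal and vertical Sobel gradients
            int gx = -p(-1,-1) - 2*p(-1,0) - p(-1,1)
                     + p( 1,-1) + 2*p( 1,0) + p( 1,1);
            int gy = -p(-1,-1) - 2*p(0,-1) - p(1,-1)
                     + p(-1, 1) + 2*p(0, 1) + p(1, 1);
            // |gx| + |gy| approximates the magnitude; clamp to 8 bits
            out[y * w + x] = static_cast<uint8_t>(
                std::min(std::abs(gx) + std::abs(gy), 255));
        }
    }
    return out;
}
```

A blank screen after inserting such a core usually points to the video timing or stream framing (clock domains, TLAST/TUSER, line width) rather than the kernel math, so checking each in isolation narrows the search quickly.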
  3. Hello Digilent Community, I am working on an image-processing project and was wondering if anyone has advice or could point me in the right direction. I have tried following some tutorials and example projects, but I am still trying to wrap my head around Xilinx Vivado and SDK. The project really should not be difficult; I think I am just missing some information about the best way to go about it.

     I am using the Zybo Z7-20 development board and want to save two images to an SD card. The two pictures are black-and-white frames from a video, just seconds apart, so there is only a slight change between them. I want to compare the two frames and output either a black-and-white image of the changed pixels or a binary file, with '0' for an unchanged pixel and '1' for a changed one. MATLAB's 'Computer Vision System Toolbox' has a 'Tracking Cars Using Foreground Detection' Simulink example that is similar to what I want to do on the Zybo Z7-20 FPGA. The following figure shows the original video (right) with blob detection (the green square) and the binary output image of the changed foreground pixels (left).

     I want to use the Zynq processor and write C code to do the analysis, but I have not found a clear way to access the SD card from the Xilinx SDK. The following figure shows my current block design, with only the Zynq processor plus some GPIO for testing.

     I am still researching and comparing examples, but wanted to see if the community had any pointers or if someone has done this before. I am a college student and have been really interested in FPGAs and digital design for the past 6-9 months, but I have mainly written my own Verilog and have not worked with block designs or run C code on any of my designs. Any comments or suggestions would be great. Thanks!
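For the SD-card side of a question like this, the usual route in a standalone Xilinx SDK application is the xilffs library (a FatFs wrapper: f_mount, f_open, f_read, f_write). The comparison step itself is straightforward once both frames are in memory; below is a minimal sketch, assuming both frames are already loaded as equal-sized 8-bit grayscale buffers (the threshold value is hypothetical and application-dependent):

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Compare two same-size 8-bit grayscale frames and emit a binary mask:
// 1 where a pixel changed by more than `threshold`, 0 otherwise.
// A nonzero threshold absorbs sensor noise between the two captures.
std::vector<uint8_t> frame_diff(const std::vector<uint8_t>& a,
                                const std::vector<uint8_t>& b,
                                int threshold) {
    std::vector<uint8_t> mask(a.size());
    for (std::size_t i = 0; i < a.size(); ++i) {
        int d = std::abs(static_cast<int>(a[i]) - static_cast<int>(b[i]));
        mask[i] = (d > threshold) ? 1 : 0;
    }
    return mask;
}
```

The resulting mask can be written back to the SD card as a raw binary file, or scaled (0 -> 0, 1 -> 255) to produce a viewable black-and-white change image.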
  4. Kris Persyn

    Digital Twin

    Hey, for a quite challenging project I am planning to use a Zybo Z7-20 to drive immersive visuals on an HDMI display (probably this monitor: Lenovo L27i-28). Simultaneously, I want to collect sensor data through digital pins (the digital signals are provided by an external uP) and also interface with MATLAB (used for speech recognition) via USB. My question is: is this board powerful enough to handle all of this? Kg, Kris