Obra

Arty Z7 HDMI in with OpenCV/Linux

Question

Hello,

I'm evaluating the feasibility of using the Arty Z7 for an embedded vision project. The goal is to run OpenCV (accelerated) in Linux with HDMI as the video source. 

1. Any recommended examples to start with?
2. Can this be done with free IP cores?
3. Any tips/suggestions?


11 answers to this question

Hi @Obra,

I have not worked with OpenCV on a Zynq processor. That said, I am not aware of any cost to using HLS and OpenCV. Here is Accelerating OpenCV Applications with Zynq-7000 All Programmable SoC using Vivado HLS Video Libraries, and here is OpenCV Installation from the Xilinx wiki.

cheers,

Jon


Hi @jpeyron,

Thanks for the links. My biggest concern right now is getting the HDMI input into a video stream that OpenCV can handle within Linux. I think I'll need dvi2rgb, v_vid_in_axi4s, and axi_vdma to make this possible. If this PL setup is correct, is the next step just to set up the Linux device tree? The input resolution is fixed.
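For concreteness, the VDMA node I expect to end up with would look roughly like this (a sketch only: the base address, interrupt numbers, and frame-store count are placeholders that must match the actual block design, and the Vivado/PetaLinux device tree generator would normally produce it):

```
axi_vdma_0: dma@43000000 {
	compatible = "xlnx,axi-vdma-1.00.a";
	reg = <0x43000000 0x10000>;
	xlnx,num-fstores = <0x3>;
	/* write channel: video in from the PL to DDR */
	dma-channel@43000030 {
		compatible = "xlnx,axi-vdma-s2mm-channel";
		interrupts = <0 29 4>;
		xlnx,datawidth = <0x18>;
	};
};
```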


Hi @Obra,

It is possible to use the Arty Z7 for an embedded vision project using OpenCV. This board has a Zynq chip, and the best alternative in this case is SDSoC from Xilinx. You can learn more about SDSoC here. I think SDSoC is the best solution for you because it provides a set of 50+ hardware-optimized OpenCV functions. You can find the hardware-optimized libraries here. Using SDSoC you are able:

  • to develop your vision algorithm in C/C++ using OpenCV library that will run on PS (Processing System, ARM)
  • to evaluate the performance using profilers
  • to accelerate the most compute intensive functions on PL (Programmable Logic, FPGA)

All you have to do is select the functions you want to accelerate in the FPGA from the IDE. SDSoC resolves all dependencies and data transfers between PS and PL, and you can specify preferences regarding the data movers, memory allocation, etc. You can find more details in the SDSoC documentation.

Moreover, you can also accelerate functions you developed yourself in hardware. For example, I developed a simple segmentation function that changes pixel values based on a threshold. Using SDSoC, this function runs 38.5 times faster in PL than in PS.

In order to use SDSoC with the Arty Z7 and OpenCV, you need an SDSoC platform targeted at the Arty Z7 board that supports Linux and OpenCV. You can build your own platform (use the documentation link) or use the platform created by Digilent for this board. Unfortunately, the SDSoC platform for this board is not fully functional yet, but it will be shortly.

Here is the PetaLinux project for the Arty Z7. It is not released yet.

Here is the SDSoC hardware platform for the Arty Z7.

Let me know if you need any other information.

Best regards,

Bogdan


@bogdan.deac

Thank you!

Is there any way to determine what functionality is currently missing from the SDSoC platform?

Do you think the fully functional platform will be available by the end of October?


Hi @nattaponj,

It is possible to use SDSoC with the Zybo board. You must take into account that SDSoC is not a free tool; you must pay for it. You may be able to obtain a voucher from Xilinx if your project belongs to a university. You also need an SDSoC platform for the Zybo. Here you have three options:

  • Develop your own platform using the Xilinx documentation.
  • Use the default Zybo platform from SDSoC: SDSoC comes with an example platform for the Zybo board. It offers basic access to hardware acceleration for your algorithm, but it does not support video input/output; you can read images or video from the SD card, or extend the platform as you wish because it is open source. See the Xilinx Environment tutorial.
  • Use the Digilent Zybo platform. Using this platform you can process an input HDMI video stream, and you can also modify it as you wish.

All the mentioned options support Linux and OpenCV. For more details about SDSoC, consult the Xilinx documentation and the Xilinx forum.


Hi @bogdan.deac,

Thank you for the advice, but I have a few more questions.

If I use xfOpenCV on the Zybo board, can I render the video on a screen in real time?

That is, with HDMI as the input to the Zybo board, and the board sending the processed video out to a monitor via the VGA port.

Thank you.


Hi @nattaponj,

Here you can find a demo for the Digilent SDSoC platform for the Zybo. I think it is a Sobel filter: it takes a video stream from the HDMI input, applies a Sobel filter to the frames, and displays the result over VGA. Some instructions from the README:

To use the project connect a video source to the HDMI I/O connector of the ZYBO,
a monitor to the VGA connector, and your computer to the USB PROG/UART connector.
Run the project on the ZYBO, and then connect to the ZYBO's COM port in a Terminal
program at Baud 115200. Further instructions are provided on the terminal.

*Note - Output on the VGA monitor will be cropped or padded if the output resolution
		does not match the HDMI input resolution.

 

So it is possible to render the video on a screen in real time using VGA. The result will depend on the execution time of your algorithm.

Best regards,

Bogdan


Hi @nattaponj,

It is a compiler directive. The compiler takes it into account in order to optimize your code. You can "optimize your system and hardware functions using a set of directives and tools within the SDSoC environment." "You can insert pragmas into application source code to control the system mapping and generation flows, providing directives to the system compiler for implementing the accelerators and data motion networks." (SDSoC User Guide, UG1027). In SDSoC you can use two types of compiler directives: SDSoC directives and HLS directives.

Some examples of SDSoC directives:
 

#pragma SDS data data_mover(A:AXIDMA_SIMPLE) -> specify which data mover to use to transfer the data: FIFO, simple DMA, scatter-gather DMA, etc.

#pragma SDS data sys_port(arg:port) -> specify the memory port the argument is mapped to

#pragma SDS data zero_copy -> share the argument between PS and PL directly, without copying it

 

Because SDSoC uses HLS (UG902) to compile synthesizable C/C++ functions into programmable logic, you can use HLS directives too. For example:

#pragma HLS PIPELINE II = 1

"The initiation interval (II) is the number of clock cycles before the function can accept new inputs and is generally the most critical performance metric in any system. In an ideal hardware function, the hardware processes data at the rate of one sample per clock cycle. If the largest data set passed into the hardware is size N (e.g., my_array[N]), the most optimal II is N + 1. This means the hardware function processes N data samples in N clock cycles and can accept new data one clock cycle after all N samples are processed. It is possible to create a hardware function with an II < N, however, this requires greater resources in the PL with typically little benefit. The hardware function will often be ideal as it consumes and produces data at a rate faster than the rest of the system. "

Using this directive you tell the HLS compiler to use pipelining in order to obtain an initiation interval of 1. When the HDL is generated, the compiler will try to achieve II = 1; if that is not possible, it will warn you.

All directives are used to optimize the program. You can read more about this in UG1235.

Now, in your example you have

#pragma AP PIPELINE II = 1

The Sobel filter example from the Zybo SDSoC platform was developed by Xilinx in 2011 ((c) Copyright 2011 Xilinx Inc.). In that period, HLS used another compiler, named APCC, to overcome some limitations of standard C compilers. You can read more about this in UG902 (HLS 2012.3). I think APCC is not used anymore. Because the git project was created with Vivado 2015.4, I think it should work, but it depends on which Vivado version you use. If you get build errors, you can try to replace

#pragma AP PIPELINE II = 1

with

#pragma HLS PIPELINE II = 1

and

#include "ap_video.h"

with

#include "hls_video.h"

 

Best regards,

Bogdan Deac

Edited by bogdan.deac

