
Zybo Z7-20 - xfopencv: hls::stream is read while empty


PearSoup

Question

Hi everyone,

 

Hardware:  Zybo Z7-20,  PCam 5C,  sd card

Software:  SDSoC 2017.4,  reVISION platform for Zybo Z7-20 (release, v2017.2-3)

The “Simple Filter2d Live I/O demo for Pcam 5C” works perfectly (following the readme in the reVISION download archive). This code is the starting point for my application.

My C++ skills are far from expert level and also quite rusty.

 

I’m developing a live video application for my master thesis. The current step is to implement motion detection. The (simple) concept is to compute the difference between a background image and the current video frame:

Initialize:  first frame --> (to grey scale) --> (optional: resize) --> gaussian blur ==> background image

Loop:  frame --> difference to background image --> threshold --> dilate --> find contours ==> list of contours
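As a side note, the first two steps of the loop above (difference to the background, then threshold) boil down to simple per-pixel arithmetic. This is a minimal pure-C++ sketch of that idea, independent of the xf/cv APIs (the function name and use of std::vector are illustrative, not from the demo):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Difference against the background image, then binary threshold:
// a pixel becomes 255 ("motion") when it deviates from the background
// by more than `thresh`, else 0. Both buffers are grayscale and of
// equal size.
std::vector<uint8_t> diff_threshold(const std::vector<uint8_t>& background,
                                    const std::vector<uint8_t>& frame,
                                    uint8_t thresh)
{
    std::vector<uint8_t> mask(frame.size());
    for (std::size_t i = 0; i < frame.size(); ++i) {
        int d = static_cast<int>(frame[i]) - static_cast<int>(background[i]);
        if (d < 0) d = -d;                 // absolute difference
        mask[i] = (d > thresh) ? 255 : 0;  // binary threshold
    }
    return mask;
}
```

The dilate and find-contours steps are then run on the resulting binary mask.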

 

Question:

I’m stuck right at the first part, the initialization. The image is already grayscale, and I currently omit the resizing, so the only remaining task is the Gaussian blur. At runtime, when xf::GaussianBlur is called, the program enters an infinite loop, repeatedly printing the following warning:

Quote

WARNING: Hls::stream 'hls::stream<ap_uint<8> >.2' is read while empty, which may result in RTL simulation hanging.

Even the first print statement after xf::GaussianBlur is never reached.

Since xf::GaussianBlur internally creates hls::streams for both xf::Mat parameters, I suspected that the src matrix might be empty. But printing its first 40 pixels (not included in the code below) yields reasonable values…

Any help appreciated!

 

The code (that differs from "Live I/O demo"):

Additional #defines in platform.h:

// parameters for xf::Mat template with 1 channel.
#define XFMAT_1C XF_8UC1,MAX_HEIGHT,MAX_WIDTH,XF_NPPC1

#define GAUSSBLUR_MAX_KERNELSIZE 30            // guessed
#define GAUSSBLUR_MAX_SIGMA 10                 // guessed
#define GAUSSBLUR_SIGMA 2.0                    // guessed

Code in main.cpp:

/* [The initialization stuff: v4l2 input, terminal, drm output, switches&buttons, ncurses] */

// Start capturing frames from video input (Pcam 5C on CSI port).
v4l2_helper_start_cap(&v4l2_help);

// Get the first frame - a grey-scale image (Y) (from YUV image???).
unsigned int v4l2_buf_index = v4l2_helper_readframe(&v4l2_help);
uint8_t* firstFrame = (uint8_t*) v4l2_help.buffers[v4l2_buf_index].start[V4L2_FORMAT_Y_PLANE];
xf::Mat<XFMAT_1C> xFirstFrame(vActiveIn, hActiveIn, (void*)firstFrame);

// Store first frame as background frame.
xf::Mat<XFMAT_1C> xBackground (xFirstFrame.rows, xFirstFrame.cols);
xf::GaussianBlur<GAUSSBLUR_MAX_KERNELSIZE, GAUSSBLUR_MAX_SIGMA, XFMAT_1C>(xFirstFrame, xBackground, GAUSSBLUR_SIGMA);

// for debugging:
mvprintw( 11, 0, "xBackground:");
wrefresh(mywin);
for(size_t i = 0; i < 40; i++)
{
	mvprintw( 11, 16 + 4*i, "%u    ", xBackground.copyFrom()[i] );
}
wrefresh(mywin);

/* Loop is commented. Only the cleanup remains at the end: endwin(), etc. */
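A hedged side note: judging by the call in the answer below, the xfOpenCV GaussianBlur template expects the parameters <FILTER_SIZE, BORDER_TYPE, TYPE, ROWS, COLS, NPC>, with FILTER_SIZE being a small supported kernel size rather than a guessed maximum. A sketch of the corrected instantiation under that assumption:

```cpp
// Template parameters: <FILTER_SIZE, BORDER_TYPE, TYPE, ROWS, COLS, NPC>.
// FILTER_SIZE must be a supported kernel size (3 here); a guessed value
// such as 30 does not match the library's signature, which can leave the
// internal hls::streams mis-sized.
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> xBackground(xFirstFrame.rows, xFirstFrame.cols);
xf::GaussianBlur<3, XF_BORDER_CONSTANT, XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1>(
    xFirstFrame, xBackground, 2.0f);
```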

 


6 answers to this question

Recommended Posts

Hi @PearSoup,

I think that you should read the frames in the following way:

#define MAX_HEIGHT 1080
#define MAX_WIDTH 1920
#define FILTER_WIDTH 3

float sigma = 0.5f;
uint8_t *src = (uint8_t *) v4l2_help.buffers[v4l2_buf_index].start[V4L2_FORMAT_Y_PLANE];
uint32_t stridePixels = drm.create_dumb[drm.current_fb].pitch;

/* ------------------------ read rgb frame ---------------------------------- */
/* read_input_rgb function from hls_helper reads each channel in turn so we need a matrix for each rgb channel */
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_input_r(MAX_HEIGHT, MAX_WIDTH);
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_input_g(MAX_HEIGHT, MAX_WIDTH);
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_input_b(MAX_HEIGHT, MAX_WIDTH);

/* read the r, g, and b channels */
read_input_rgb(src, img_input_r, img_input_g, img_input_b, stridePixels);

/* now, if you want, you can combine the channels into one matrix; some xf functions require a 4-channel matrix */
/* declare a new matrix with four channels: red, green, blue and alpha */
xf::Mat<XF_8UC4, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_combined(MAX_HEIGHT, MAX_WIDTH);
/* we are not interested in the alpha channel, so it is initialized with 0 */
uchar zero_data[MAX_HEIGHT * MAX_WIDTH] = { 0 };
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_alfa(MAX_HEIGHT, MAX_WIDTH, zero_data);

/* combine the channels in one matrix */
xf::merge<XF_8UC1, XF_8UC4, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1>(img_input_r, img_input_g, img_input_b, img_alfa, img_combined);
/* -------------------------------------------------------------------------- */

/* ------------------------ read gray frame ---------------------------------- */
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_input_gray(MAX_HEIGHT, MAX_WIDTH);

read_input_rgb(src, img_input_gray, stridePixels);
/* -------------------------------------------------------------------------- */

/* ------------------------ apply gaussian blur ----------------------------- */
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_output_gauss(MAX_HEIGHT, MAX_WIDTH);

xf::GaussianBlur<FILTER_WIDTH, XF_BORDER_CONSTANT, XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1>(img_input_gray, img_output_gauss, sigma);

If you work on grayscale images, use the "read gray frame" method and ignore "read rgb frame".

I suggest reading and processing the frames inside a while loop, because otherwise you may encounter video frame loss.
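To sketch that suggestion with the v4l2 helper names already used in this thread (the `running` flag and the processing step are placeholders, not from the demo):

```cpp
// Continuous capture/process loop. Each iteration grabs the newest
// frame from the PCam and runs the processing chain on it.
while (running) {   // `running` is a hypothetical flag, e.g. cleared by a button
    unsigned int idx = v4l2_helper_readframe(&v4l2_help);
    uint8_t* src = (uint8_t*) v4l2_help.buffers[idx].start[V4L2_FORMAT_Y_PLANE];
    xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> frame(MAX_HEIGHT, MAX_WIDTH);
    // ... read/convert src into `frame`, then difference/threshold/dilate ...
}
```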

Also, I advise you to have a look at the xfopencv documentation, where you will find examples for all xfopencv functions, including an example for the Gaussian filter.


Regards,

Bogdan D.


Hello again,

thank you for your kind and helpful response, @Bogdan!
At first I didn't manage to get RGB to work, so I used grayscale images, and to be less error-prone I implemented everything with cv functions. That let me get everything else working properly. Now I would like to succeed in using RGB and xf, with a little help...

 

My next step: read the camera as RGB and see whether my cv code can handle it.

Goal:   v4l2 data --> rgbpxl_t* --> read_input_rgb() --> xf::Mat --> cv::Mat

Question 1:   How do I get the v4l2 data as rgbpxl_t* so that it can be plugged into read_input_rgb()? The v4l2 data seems to be in YUV format, and I'm unsure how to convert it.

Question 2:   How do I get an xf::Mat with 4 channels into a cv::Mat with 3 channels? It might be easy once Question 1 is answered, but asking now saves me from bothering you twice.

// Read data from camera.
uint8_t* src = (uint8_t*) v4l2_help.buffers[v4l2_buf_index].start[V4L2_FORMAT_Y_PLANE];
uint32_t stridePixels = drm.create_dumb[drm.current_fb].pitch;

// ---> ??? <---

// rgbpxl_t to xf::Mat (The code above)
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_input_r(MAX_HEIGHT, MAX_WIDTH);
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_input_g(MAX_HEIGHT, MAX_WIDTH);
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_input_b(MAX_HEIGHT, MAX_WIDTH);
read_input_rgb(src, img_input_r, img_input_g, img_input_b, stridePixels);
xf::Mat<XF_8UC4, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_combined(MAX_HEIGHT, MAX_WIDTH);
uchar zero_data[MAX_HEIGHT * MAX_WIDTH] = { 0 };
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_alfa(MAX_HEIGHT, MAX_WIDTH, zero_data);
xf::merge<XF_8UC1, XF_8UC4, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1>(img_input_r, img_input_g, img_input_b, img_alfa, img_combined);

// xf::Mat to cv::Mat
//cv::Mat cvFrame( ... img_combined ... );
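A side note on Question 1: this does not answer how the helper API expects the data, but the underlying per-pixel math is the standard YCbCr-to-RGB conversion. A minimal sketch assuming full-range BT.601 YUV (the pixel format and chroma subsampling of the v4l2 buffer, e.g. NV12, are assumptions that still need checking):

```cpp
#include <cstdint>

// Full-range YCbCr -> RGB conversion for a single pixel (BT.601
// coefficients). U and V are centered at 128; results are clamped
// to [0, 255]. Chroma subsampling (e.g. NV12) must be handled by
// the caller when indexing into the v4l2 planes.
struct Rgb { uint8_t r, g, b; };

static uint8_t clamp_u8(int v)
{
    if (v < 0) return 0;
    if (v > 255) return 255;
    return static_cast<uint8_t>(v);
}

Rgb yuv_to_rgb(uint8_t y, uint8_t u, uint8_t v)
{
    const int d = u - 128;  // centered chroma
    const int e = v - 128;
    Rgb out;
    out.r = clamp_u8(y + static_cast<int>( 1.402 * e));
    out.g = clamp_u8(y + static_cast<int>(-0.344 * d - 0.714 * e));
    out.b = clamp_u8(y + static_cast<int>( 1.772 * d));
    return out;
}
```

With U = V = 128 the result is a gray pixel equal to Y, which matches the grayscale-only (Y plane) path already used above.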

 

Note: This is how I currently read images to cv::Mat, grayscale:

const uint32_t v4l2_buf_index = v4l2_helper_readframe(&v4l2_help);
uint8_t* src = (uint8_t*) v4l2_help.buffers[v4l2_buf_index].start[V4L2_FORMAT_Y_PLANE];

const uint32_t width  = v4l2_help.fmt.fmt.pix_mp.width;
const uint32_t height = v4l2_help.fmt.fmt.pix_mp.height;
cv::Mat cvFrame (height, width, CV_8UC1, (void*)src);
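A side note on Question 2: whatever the exact xf::Mat accessor turns out to be, at the buffer level going from 4 channels to 3 is just dropping every fourth byte. A minimal sketch assuming interleaved RGBA bytes (the channel order produced by xf::merge is an assumption to verify); the resulting buffer can be wrapped in a cv::Mat with CV_8UC3, like the grayscale example above:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Drop the alpha channel from an interleaved RGBA buffer, producing
// an interleaved RGB buffer of the same width/height. The result can
// be wrapped as cv::Mat(height, width, CV_8UC3, rgb.data()).
std::vector<uint8_t> rgba_to_rgb(const std::vector<uint8_t>& rgba)
{
    std::vector<uint8_t> rgb;
    rgb.reserve(rgba.size() / 4 * 3);
    for (std::size_t i = 0; i + 3 < rgba.size(); i += 4) {
        rgb.push_back(rgba[i + 0]);  // R
        rgb.push_back(rgba[i + 1]);  // G
        rgb.push_back(rgba[i + 2]);  // B
        // rgba[i + 3] (alpha) is discarded
    }
    return rgb;
}
```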

 

 

 

And Question 3, a small side question: shouldn't the stridePixels for handling v4l2 data be computed from v4l2 itself? E.g. something like:

// uint32_t stridePixels = drm.create_dumb[drm.current_fb].pitch;	// from above
uint32_t stridePixels = 4 * v4l2_help.fmt.fmt.pix_mp.plane_fmt[0].bytesperline;

I thought that DRM stands for "Direct Rendering Manager" and handles showing images on a display (output), while v4l2 handles camera input, so keeping the two separate would seem cleaner. My thoughts on this might be nonsense, of course.

 

Looking forward to your support,
PearSoup


Hi @PearSoup,

Unfortunately, I haven't worked with RGB images using SDSoC and PCam. My last post is relevant if the input video stream comes from an HDMI source. With the current conditions of everyone working from home, I don't have access to the necessary hardware to assist you right now. I will get back to you when things return to normal.


Hello,

I was using the code of the Zybo Filter2d live_IO demo:

https://github.com/Digilent/revision-samples/tree/cf507d7093a176609c6c9c1479944052b8d51660/live_IO/filter2d_pcam/src

I get an error related to hls_helper.

Can someone let me know where I am going wrong? Any lead would help me move forward.

I also referred to this thread on the Xilinx forums:

https://forums.xilinx.com/t5/High-Level-Synthesis-HLS/lt-lt-or-write-not-allowed-with-xfopencv-library/td-p/979851

I am not sure what changes need to be made to solve the error.

[Attachment: Screenshot from 2021-01-12 10-11-29.png]


Hello @meghuiyer@gmail.com,

Have you modified the hls_helper file found on GitHub? In your screenshot I see that the errors occur at different lines compared to the GitHub source file.
Have you perhaps changed inR, inG, inB from xf::Mat to ap_uint<8>?
 

Quote

void read_input_rgb(rgbpxl_t *frm,
    xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> &inR,
    xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> &inG,
    xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> &inB,
    uint32_t stride_pcnt)
 

 


[Attachment: Screenshot from 2021-01-12 13-55-54.png]

@Niță Eduard

Quote

Have you modified the hls_helper file found on github? In your screenshot I see that the errors occurs at different lines when compared to the github source file.
Have you perhaps changed the inR, inG, inB from xf::Mat to ap_uint<8>?

Actually, I tried to solve the error myself, and later went back to using the same code from GitHub.

[Attachment: Screenshot from 2021-01-12 13-59-59.png]


Archived

This topic is now archived and is closed to further replies.
