PearSoup

Zybo Z7-20 - xfopencv: hls::stream is read while empty

Question

Hi everyone,

 

Hardware: Zybo Z7-20, Pcam 5C, SD card

Software: SDSoC 2017.4, reVISION platform for Zybo Z7-20 (release, v2017.2-3)

The “Simple Filter2d Live I/O demo for Pcam5C” works perfectly (following the readme in the reVISION download archive). This code is the starting point for my application.

My C++ skills are far from expert level and also quite rusty.

 

I’m developing a live video application for my master’s thesis. The current step is to implement motion detection. The (simple) concept is to compute the difference between a background image and the current video frame:

Initialize:  first frame --> (to grey scale) --> (optional: resize) --> gaussian blur ==> background image

Loop:  frame --> difference to background image --> threshold --> dilate --> find contours ==> list of contours
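For reference, here is roughly what I have in mind, sketched with plain OpenCV calls on the ARM side (the threshold value and dilation count are just placeholders); the individual steps would later map to xfopencv:

#include <opencv2/opencv.hpp>
#include <vector>

// Software-only sketch of the intended motion-detection step,
// assuming 8-bit grayscale frames are available as cv::Mat.
std::vector<std::vector<cv::Point>> detectMotion(const cv::Mat &background, const cv::Mat &frame)
{
    cv::Mat diff, mask;
    cv::absdiff(background, frame, diff);                     // difference to background image
    cv::threshold(diff, mask, 25, 255, cv::THRESH_BINARY);    // threshold (25 is a guessed value)
    cv::dilate(mask, mask, cv::Mat(), cv::Point(-1, -1), 2);  // dilate to close small gaps
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    return contours;
}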

 

Question:

I’m stuck right at the first part, the initialization. The image is already grey scale, and I currently omit the resizing, so the only remaining task is the Gaussian blur. At runtime, when xf::GaussianBlur is called, the program gets stuck in a loop, printing the following warning indefinitely:

Quote

WARNING: Hls::stream 'hls::stream<ap_uint<8> >.2' is read while empty, which may result in RTL simulation hanging.

Even the first print statement after xf::GaussianBlur is not reached.

Since xf::GaussianBlur creates hls::streams for both xf::Mat parameters, I thought that the src matrix might be empty. But printing its first 40 pixels (not shown in the code below) yields reasonable values…

Any help appreciated!

 

The code (only the parts that differ from the "Live I/O demo"):

Additional #defines in platform.h:

// parameters for xf::Mat template with 1 channel.
#define XFMAT_1C XF_8UC1,MAX_HEIGHT,MAX_WIDTH,XF_NPPC1

#define GAUSSBLUR_MAX_KERNELSIZE 30            // guessed
#define GAUSSBLUR_MAX_SIGMA 10                 // guessed
#define GAUSSBLUR_SIGMA 2.0                    // guessed

Code in main.cpp:

/* [The initialization stuff: v4l2 input, terminal, drm output, switches&buttons, ncurses] */

// Start capturing frames from video input (Pcam 5C on CSI port).
v4l2_helper_start_cap(&v4l2_help);

// Get the first frame - a grey-scale image (Y) (from YUV image???).
unsigned int v4l2_buf_index = v4l2_helper_readframe(&v4l2_help);
uint8_t* firstFrame = (uint8_t*) v4l2_help.buffers[v4l2_buf_index].start[V4L2_FORMAT_Y_PLANE];
xf::Mat<XFMAT_1C> xFirstFrame(vActiveIn, hActiveIn, (void*)firstFrame);

// Store first frame as background frame.
xf::Mat<XFMAT_1C> xBackground (xFirstFrame.rows, xFirstFrame.cols);
xf::GaussianBlur<GAUSSBLUR_MAX_KERNELSIZE, GAUSSBLUR_MAX_SIGMA, XFMAT_1C>(xFirstFrame, xBackground, GAUSSBLUR_SIGMA);

// for debugging:
mvprintw( 11, 0, "xBackground:");
wrefresh(mywin);
for(size_t i = 0; i < 40; i++)
{
	mvprintw( 11, 16 + 4*i, "%u    ", xBackground.copyFrom()[i] );
}
wrefresh(mywin);

/* Loop is commented. Only the cleanup remains at the end: endwin(), etc. */

 


1 answer to this question

Hi @PearSoup,

I think that you should read the frames in the following way:

#define MAX_HEIGHT 1080
#define MAX_WIDTH 1920
#define FILTER_WIDTH 3

float sigma = 0.5f;
uint8_t *src = (uint8_t *) v4l2_help.buffers[v4l2_buf_index].start[V4L2_FORMAT_Y_PLANE];
uint32_t stridePixels = drm.create_dumb[drm.current_fb].pitch;

/* ------------------------ read rgb frame ---------------------------------- */
/* the read_input_rgb function from hls_helper reads each channel in turn, so we need one matrix per RGB channel */
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_input_r(MAX_HEIGHT, MAX_WIDTH);
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_input_g(MAX_HEIGHT, MAX_WIDTH);
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_input_b(MAX_HEIGHT, MAX_WIDTH);

/* read the r, g, and b channels */
read_input_rgb(src, img_input_r, img_input_g, img_input_b, stridePixels);

/* now, if you want, you can combine the channels into one matrix; some xf functions require a 4-channel matrix */
/* declare a new matrix with four channels: red, green, blue and alpha */
xf::Mat<XF_8UC4, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_combined(MAX_HEIGHT, MAX_WIDTH);
/* we are not interested in the alpha channel, so it is initialized with 0 */
/* static keeps this ~2 MB zero buffer off the stack */
static uchar zero_data[MAX_HEIGHT * MAX_WIDTH] = { 0 };
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_alfa(MAX_HEIGHT, MAX_WIDTH, zero_data);

/* combine the channels in one matrix */
xf::merge<XF_8UC1, XF_8UC4, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1>(img_input_r, img_input_g, img_input_b, img_alfa, img_combined);
/* -------------------------------------------------------------------------- */

/* ------------------------ read gray frame ---------------------------------- */
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_input_gray(MAX_HEIGHT, MAX_WIDTH);

read_input_rgb(src, img_input_gray, stridePixels);
/* -------------------------------------------------------------------------- */

/* ------------------------ apply gaussian blur ----------------------------- */
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_output_gauss(MAX_HEIGHT, MAX_WIDTH);

xf::GaussianBlur<FILTER_WIDTH, XF_BORDER_CONSTANT, XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1>(img_input_gray, img_output_gauss, sigma);

If you work with grayscale images, use the "read gray frame" method and ignore the "read rgb frame" one.

I suggest reading and processing the frames in a while loop, because otherwise you may encounter video frame loss.
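
For example, the loop could be structured roughly like this (a minimal sketch; stop_requested is a placeholder exit flag, and the display/output stage is omitted):

/* capture/process loop, reusing the matrices and helpers declared above */
while (!stop_requested)
{
    /* grab the next frame from the Pcam 5C */
    unsigned int v4l2_buf_index = v4l2_helper_readframe(&v4l2_help);
    uint8_t *src = (uint8_t *) v4l2_help.buffers[v4l2_buf_index].start[V4L2_FORMAT_Y_PLANE];

    /* copy the Y plane into the xf::Mat, as in the "read gray frame" snippet */
    read_input_rgb(src, img_input_gray, stridePixels);

    /* processing: blur, difference to background, threshold, dilate, ... */
    xf::GaussianBlur<FILTER_WIDTH, XF_BORDER_CONSTANT, XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1>(img_input_gray, img_output_gauss, sigma);

    /* hand the result to the DRM output here */
}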

Also, I advise you to have a look at the xfopencv documentation, where you will find examples for all xfopencv functions, including the Gaussian filter.


Regards,

Bogdan D.

