PearSoup

Zybo Z7-20 - xfopencv: hls::stream is read while empty

Question

Hi everyone,

 

Hardware:  Zybo Z7-20,  PCam 5C,  sd card

Software:  SDSoC 2017.4,  reVISION platform for Zybo Z7-20 (release, v2017.2-3)

The “Simple Filter2d Live I/O demo for Pcam5C” works perfectly (following the readme in the reVISION download archive). This code is the starting point for my application.

My C++ skills are far from expert level and also quite rusty.

 

I’m developing a live video application for my master thesis. The current step is to implement motion detection. The (simple) concept is to compute the difference between a background image and the current video frame:

Initialize:  first frame --> (to grey scale) --> (optional: resize) --> gaussian blur ==> background image

Loop:  frame --> difference to background image --> threshold --> dilate --> find contours ==> list of contours

 

Question:

I’m stuck right at the first part, the initialization. The image is already grey scale, and I currently omit the resizing, so the only remaining task is the Gaussian blur. At runtime, when xf::GaussianBlur is called, the program enters an infinite loop, repeatedly printing the following error:

Quote

WARNING: Hls::stream 'hls::stream<ap_uint<8> >.2' is read while empty, which may result in RTL simulation hanging.

Even the first print statement after xf::GaussianBlur is not reached.

Since xf::GaussianBlur tries to create hls::streams for both the xf::Mat parameters, I thought that the src matrix might be empty. But printing the first 40 pixels (not included in the code below) yields reasonable values…

Any help appreciated!

 

The code (that differs from "Live I/O demo"):

Additional #defines in platform.h:

// parameters for xf::Mat template with 1 channel.
#define XFMAT_1C XF_8UC1,MAX_HEIGHT,MAX_WIDTH,XF_NPPC1

#define GAUSSBLUR_MAX_KERNELSIZE 30            // guessed
#define GAUSSBLUR_MAX_SIGMA 10                 // guessed
#define GAUSSBLUR_SIGMA 2.0                    // guessed

Code in main.cpp:

/* [The initialization stuff: v4l2 input, terminal, drm output, switches&buttons, ncurses] */

// Start capturing frames from video input (Pcam 5C on CSI port).
v4l2_helper_start_cap(&v4l2_help);

// Get the first frame - a grey-scale image (Y) (from YUV image???).
unsigned int v4l2_buf_index = v4l2_helper_readframe(&v4l2_help);
uint8_t* firstFrame = (uint8_t*) v4l2_help.buffers[v4l2_buf_index].start[V4L2_FORMAT_Y_PLANE];
xf::Mat<XFMAT_1C> xFirstFrame(vActiveIn, hActiveIn, (void*)firstFrame);

// Store first frame as background frame.
xf::Mat<XFMAT_1C> xBackground (xFirstFrame.rows, xFirstFrame.cols);
xf::GaussianBlur<GAUSSBLUR_MAX_KERNELSIZE, GAUSSBLUR_MAX_SIGMA, XFMAT_1C>(xFirstFrame, xBackground, GAUSSBLUR_SIGMA);

// for debugging:
mvprintw( 11, 0, "xBackground:");
wrefresh(mywin);
for(size_t i = 0; i < 40; i++)
{
	mvprintw( 11, 16 + 4*i, "%u    ", xBackground.copyFrom()[i] );
}
wrefresh(mywin);

/* Loop is commented. Only the cleanup remains at the end: endwin(), etc. */

 

3 answers to this question

Hi @PearSoup,

I think that you should read the frames in the following way:

#define MAX_HEIGHT 1080
#define MAX_WIDTH 1920
#define FILTER_WIDTH 3

float sigma = 0.5f;
uint8_t *src = (uint8_t *) v4l2_help.buffers[v4l2_buf_index].start[V4L2_FORMAT_Y_PLANE];
uint32_t stridePixels = drm.create_dumb[drm.current_fb].pitch;

/* ------------------------ read rgb frame ---------------------------------- */
/* read_input_rgb function from hls_helper reads each channel in turn so we need a matrix for each rgb channel */
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_input_r(MAX_HEIGHT, MAX_WIDTH);
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_input_g(MAX_HEIGHT, MAX_WIDTH);
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_input_b(MAX_HEIGHT, MAX_WIDTH);

/* read the r, g, and b channels */
read_input_rgb(src, img_input_r, img_input_g, img_input_b, stridePixels);

/* now, if you want, you can combine the channels into one matrix; some xf functions require a 4-channel matrix */
/* declare a new matrix with four channels: red, green, blue, and alpha */
xf::Mat<XF_8UC4, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_combined(MAX_HEIGHT, MAX_WIDTH);
/* we are not interested in the alpha channel, so it is initialized with 0 */
uchar zero_data[MAX_HEIGHT * MAX_WIDTH] = { 0 };
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_alfa(MAX_HEIGHT, MAX_WIDTH, zero_data);

/* combine the channels in one matrix */
xf::merge<XF_8UC1, XF_8UC4, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1>(img_input_r, img_input_g, img_input_b, img_alfa, img_combined);
/* -------------------------------------------------------------------------- */

/* ------------------------ read gray frame ---------------------------------- */
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_input_gray(MAX_HEIGHT, MAX_WIDTH);

read_input_rgb(src, img_input_gray, stridePixels);
/* -------------------------------------------------------------------------- */

/* ------------------------ apply gaussian blur ----------------------------- */
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_output_gauss(MAX_HEIGHT, MAX_WIDTH);

xf::GaussianBlur<FILTER_WIDTH, XF_BORDER_CONSTANT, XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1>(img_input_gray, img_output_gauss, sigma);

If you work on grayscale images, use the "read gray frame" method and ignore "read rgb frame".

I suggest reading the frames and processing them in a while loop, because otherwise you may encounter video frame loss.

Also, I advise you to have a look at the xfopencv documentation.

Here you will find examples for all xfopencv functions.

Here you will find an example for the Gaussian filter.


Regards,

Bogdan D.

Edited by bogdan.deac
Links correction.


Hello again,

thank you for your kind and helpful response, @Bogdan!
Yet, at first I didn't manage to get RGB to work, so I used grayscale images, and to be less error-prone I implemented everything with cv functions. This enabled me to get everything else working properly. Now I would like to succeed at using RGB and xf, with a little help...

 

My next step: read the camera as RGB and see if my cv code can handle it.

Goal:   v4l2 data --> rgbpxl_t* --> read_input_rgb() --> xf::Mat --> cv::Mat

Question 1:   How do I get the v4l2 data as rgbpxl_t* so that it can be plugged into read_input_rgb()? The v4l2 data seems to be in YUV format, and I'm unsure how to do the conversion.

Question 2:   How do I get an xf::Mat with 4 channels into a cv::Mat with 3 channels? It might be easy once question 1 is answered, but if not, this way I don't have to bother you twice.

// Read data from camera.
uint8_t* src = (uint8_t*) v4l2_help.buffers[v4l2_buf_index].start[V4L2_FORMAT_Y_PLANE];
uint32_t stridePixels = drm.create_dumb[drm.current_fb].pitch;

// ---> ??? <---

// rgbpxl_t to xf::Mat (The code above)
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_input_r(MAX_HEIGHT, MAX_WIDTH);
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_input_g(MAX_HEIGHT, MAX_WIDTH);
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_input_b(MAX_HEIGHT, MAX_WIDTH);
read_input_rgb(src, img_input_r, img_input_g, img_input_b, stridePixels);
xf::Mat<XF_8UC4, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_combined(MAX_HEIGHT, MAX_WIDTH);
uchar zero_data[MAX_HEIGHT * MAX_WIDTH] = { 0 };
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_alfa(MAX_HEIGHT, MAX_WIDTH, zero_data);
xf::merge<XF_8UC1, XF_8UC4, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1>(img_input_r, img_input_g, img_input_b, img_alfa, img_combined);

// xf::Mat to cv::Mat
//cv::Mat cvFrame( ... img_combined ... );

 

Note: This is how I currently read images to cv::Mat, grayscale:

const uint32_t v4l2_buf_index = v4l2_helper_readframe(&v4l2_help);
uint8_t* src = (uint8_t*) v4l2_help.buffers[v4l2_buf_index].start[V4L2_FORMAT_Y_PLANE];

const uint32_t width  = v4l2_help.fmt.fmt.pix_mp.width;
const uint32_t height = v4l2_help.fmt.fmt.pix_mp.height;
cv::Mat cvFrame (height, width, CV_8UC1, (void*)src);

 

 

 

 And Question 3), a small side question: shouldn't the stridePixels for handling v4l2 be computed from v4l2? E.g. something like:

// uint32_t stridePixels = drm.create_dumb[drm.current_fb].pitch;	// from above
uint32_t stridePixels = 4 * v4l2_help.fmt.fmt.pix_mp.plane_fmt[0].bytesperline;

I thought that DRM stands for "Direct Rendering Manager" and is for showing images on a display (output), while v4l2 is for camera input. Wouldn't it be nice to keep both things separate? My thoughts on this might be nonsense, of course.

 

Looking forward to your support,
PearSoup


Hi @PearSoup,

Unfortunately, I haven't worked with RGB images using SDSoC and PCam. My last post is relevant if the input video stream comes from an HDMI source. With the current situation of everyone working from home, I don't have access to the necessary hardware to assist you right now. I will get back to you when things return to normal.
