bogdan.deac

Technical Forum Moderator
  • Content Count: 71
  • Joined
  • Last visited
  • Days Won: 2

bogdan.deac last won the day on November 4 2019

bogdan.deac had the most liked content!

About bogdan.deac

  • Rank
    Frequent Visitor

  1. Hi @PearSoup, Unfortunately I haven't worked with RGB images using SDSoC and PCam. My last post is relevant if the input video stream comes from an HDMI source. With everyone currently working from home, I don't have access to the necessary hardware to assist you right now. I will get back to you when things return to normal.
  2. Hi @hmr, Add the Zynq Processing System to the Block Design. After that, click Run Block Automation and make sure that Apply Board Preset is checked. See picture below. The Zynq is now configured correctly (see picture below), but the preset does not appear in the Preset section (Current Preset: None). I don't know the cause of this issue.
  3. Hi @hmr, What archive have you downloaded? Please make sure that you are using Cora-Z7-10-Basic-IO-2018.2-1.zip from this link.
  4. Hi @sgandhi, You can try TCF Profiling or you can use this method.
  5. Hi @oqas, No connection is needed in the block diagram in order to use the UART. The USB-UART bridge is connected to the PS. You will find some Xilinx examples under this path: C:\Xilinx\SDK\2019.1\data\embeddedsw\XilinxProcessorIPLib\drivers\uartps_v3_7
  6. Hi @PearSoup, I think that you should read the frames in the following way:

#define MAX_HEIGHT 1080
#define MAX_WIDTH 1920
#define FILTER_WIDTH 3

float sigma = 0.5f;
uint8_t *src = (uint8_t *) v4l2_help.buffers[v4l2_buf_index].start[V4L2_FORMAT_Y_PLANE];
uint32_t stridePixels = drm.create_dumb[drm.current_fb].pitch;

/* ------------------------ read RGB frame ---------------------------------- */
/* the read_input_rgb function from hls_helper reads each channel in turn,
 * so we need one matrix per RGB channel */
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_input_r(MAX_HEIGHT, MAX_WIDTH);
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_input_g(MAX_HEIGHT, MAX_WIDTH);
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_input_b(MAX_HEIGHT, MAX_WIDTH);

/* read the R, G, and B channels */
read_input_rgb(src, img_input_r, img_input_g, img_input_b, stridePixels);

/* now, if you want, you can combine the channels into one matrix;
 * some xf functions require a 4-channel matrix */
/* declare a new matrix with four channels: red, green, blue, and alpha */
xf::Mat<XF_8UC4, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_combined(MAX_HEIGHT, MAX_WIDTH);

/* we are not interested in the alpha channel, so it is initialized with 0 */
uchar zero_data[MAX_HEIGHT * MAX_WIDTH] = { 0 };
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_alfa(MAX_HEIGHT, MAX_WIDTH, zero_data);

/* combine the channels into one matrix */
xf::merge<XF_8UC1, XF_8UC4, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1>(img_input_r, img_input_g, img_input_b, img_alfa, img_combined);
/* -------------------------------------------------------------------------- */

/* ------------------------ read gray frame --------------------------------- */
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_input_gray(MAX_HEIGHT, MAX_WIDTH);
read_input_rgb(src, img_input_gray, stridePixels);
/* -------------------------------------------------------------------------- */

/* ------------------------ apply Gaussian blur ------------------------------ */
xf::Mat<XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1> img_output_gauss(MAX_HEIGHT, MAX_WIDTH);
xf::GaussianBlur<FILTER_WIDTH, XF_BORDER_CONSTANT, XF_8UC1, MAX_HEIGHT, MAX_WIDTH, XF_NPPC1>(img_input_gray, img_output_gauss, sigma);

If you work on grayscale images, use the "read gray frame" method and ignore "read RGB frame". I suggest reading and processing the frames inside a while loop, because otherwise you may encounter video frame loss. Also, I advise you to have a look at the xfOpenCV documentation. Here you will find examples for all xfOpenCV functions. Here you will find an example for the Gaussian filter. Regards, Bogdan D.
  7. I am sorry, but I don't understand your last question.
  8. Comment out the imshow("Image",img); line, build the application and run it. Do you get the same error message?
  9. Hello @kotra sharmila, Can you show the application's code?
  10. Hi @ahmedengr.bilal, You have three options: 1. Use the standard OpenCV library, which can be included in the Petalinux rootfs. After that you can develop your application using OpenCV functions. 2. Use the xfOpenCV library from Xilinx. Find the documentation here. Some examples here. It offers a subset of the standard OpenCV library functions, modified to be easily accelerated in FPGA. 3. Use the standard OpenCV and xfOpenCV libraries together. This option is suitable for more complex algorithms where you run some functions on the ARM processor and others in the FPGA for better performance. For all mentioned options you have to implement the image acquisition mechanism from a camera if you don't intend to use static images. Usually, the easiest way to develop video processing apps using xfOpenCV and OpenCV on a Xilinx SoC is SDSoC. Find more info here. With SDSoC you have a hardware platform which describes your hardware configuration and other important aspects like libraries, sample projects, etc. Usually, this platform is provided by the development board manufacturer, in this case Digilent. Two SDSoC platform aspects are important for you: 1. The hardware configuration, which implements the image acquisition mechanism. Find the Digilent Zybo Z7-20 SDSoC Platform here. 2. Sample projects. Find sample projects for the above mentioned platform here.
  11. bogdan.deac

    OpenCV and Pcam5-c

    To start the application run:
    ./config_pcam_vga.sh
    ./filter2d_test.elf
  12. bogdan.deac

    OpenCV and Pcam5-c

    Hi @Esti.A, I attached the files. filter2d_test.zip