Search the Community

Showing results for tags 'image processing'.



Found 11 results

  1. Sduru

    MIPI D-PHY and CSI-2

    Hi everyone, I am working with the MIPI CSI-2 RX and D-PHY RX IPs, which are open-source IPs from Digilent. Where are the latest versions of these IPs? Are there newer versions that are compatible with Vivado 2018.3? Many thanks...
  2. Hello, I'm trying to build a standalone image processing application on the ZYBO, but I'm having trouble creating a block design that makes it possible. I only started using Vivado two months ago, so I'm still not familiar with creating my own block designs. I know that I should use the dvi2rgb, Video In to AXI4-Stream, and AXI VDMA IPs to store frames in DDR memory, but I don't know how to configure them. My idea is to create a function in Vivado SDK, e.g. CaptureImage(), which would return the address of the image saved in memory. Can somebody help me create the simplest block design to accomplish that? Best regards, Toni
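
    A minimal sketch of how such a CaptureImage() helper could look on the processor side, assuming a pipeline of dvi2rgb -> Video In to AXI4-Stream -> AXI VDMA (S2MM/write channel into DDR) and the Xilinx standalone xaxivdma driver; the device ID, resolution, and frame buffer address below are placeholders that depend on the actual block design:

        /* Sketch only: capture one frame into DDR with the Xilinx standalone
         * AXI VDMA driver (xaxivdma.h). Assumes dvi2rgb -> Video In to
         * AXI4-Stream -> AXI VDMA (S2MM/write channel). Resolution, device
         * ID and buffer address are placeholders for the real design. */
        #include "xaxivdma.h"
        #include "xparameters.h"
        #include "xil_cache.h"

        #define FRAME_WIDTH   1280         /* pixels, match the video source */
        #define FRAME_HEIGHT  720
        #define BYTES_PER_PIX 3            /* 24-bit RGB                     */
        #define FRAME_ADDR    0x10000000   /* free DDR region for one frame  */

        static XAxiVdma Vdma;

        /* Start the write channel and return the DDR address of the frame. */
        u32 CaptureImage(void)
        {
            /* XPAR_AXI_VDMA_0_DEVICE_ID depends on the VDMA instance name. */
            XAxiVdma_Config *cfg = XAxiVdma_LookupConfig(XPAR_AXI_VDMA_0_DEVICE_ID);
            XAxiVdma_CfgInitialize(&Vdma, cfg, cfg->BaseAddress);

            XAxiVdma_DmaSetup wr = {0};
            wr.VertSizeInput          = FRAME_HEIGHT;
            wr.HoriSizeInput          = FRAME_WIDTH * BYTES_PER_PIX;  /* bytes per line */
            wr.Stride                 = FRAME_WIDTH * BYTES_PER_PIX;
            wr.EnableCircularBuf      = 0;                            /* park on one buffer */
            wr.FrameStoreStartAddr[0] = FRAME_ADDR;

            XAxiVdma_DmaConfig(&Vdma, XAXIVDMA_WRITE, &wr);
            XAxiVdma_DmaSetBufferAddr(&Vdma, XAXIVDMA_WRITE, wr.FrameStoreStartAddr);
            XAxiVdma_DmaStart(&Vdma, XAXIVDMA_WRITE);

            /* Wait for at least one full frame (or poll the VDMA status
             * registers), then make the data visible to the CPU. */
            Xil_DCacheInvalidateRange(FRAME_ADDR,
                                      FRAME_HEIGHT * FRAME_WIDTH * BYTES_PER_PIX);
            return FRAME_ADDR;
        }

    The line width, stride, and number of frame buffers set in software have to match the settings chosen when customizing the AXI VDMA IP, otherwise the transfer will stall.
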
  3. I am doing image processing (not video) on the Zybo Z7, and I am writing my own algorithm in C and C++. For simulation (without a camera), I have some photos ready that I want to load from the SD card and then use to test the performance of my algorithm. I know Xilinx provides the xsdps driver for the SD card, but I don't know how to use it in my project. For the real application, the image processing will be done on photos taken regularly by a camera, but my priority for now is to first test my code in simulation. Could someone point me to a tutorial or a sample project? All the tutorials on the internet are about processing video, not still images.
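
    A minimal sketch of reading a photo from the SD card, assuming the xilffs (FatFs) library is enabled in the BSP (it sits on top of the xsdps driver) and the image is stored as a raw 8-bit grayscale file on a FAT-formatted card; the file name and dimensions are placeholders:

        /* Sketch only: read a raw 8-bit grayscale image from the SD card
         * into memory using xilffs (FatFs) on top of the xsdps driver.
         * File name and dimensions are placeholders. */
        #include "ff.h"
        #include "xil_types.h"
        #include "xil_printf.h"

        #define IMG_W 640
        #define IMG_H 480

        static FATFS fs;
        static u8 image[IMG_W * IMG_H];

        int LoadImageFromSd(const char *fname)
        {
            FIL  fil;
            UINT bytes_read;

            if (f_mount(&fs, "0:/", 1) != FR_OK)        /* mount SD card (drive 0) */
                return -1;
            if (f_open(&fil, fname, FA_READ) != FR_OK)  /* open the raw image file */
                return -2;
            if (f_read(&fil, image, sizeof(image), &bytes_read) != FR_OK)
                return -3;
            f_close(&fil);

            xil_printf("Read %d bytes from %s\r\n", (int)bytes_read, fname);
            return 0;   /* image[] now holds the photo, ready for processing */
        }

    The same buffer can later be filled by the camera pipeline instead, so the processing code does not need to change between the simulated and the real input.
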
  4. Hello Digilent Community, I am working on an image processing project and was wondering if anyone had advice or could point me in the right direction. I have tried following some tutorials and example projects, but I am still trying to wrap my head around Xilinx Vivado and SDK. The project really shouldn't be very difficult; I think I am just missing some information about the best way to go about it.

    For the project I am using the Zybo Z7-20 development board and want to save two images to an SD card. The two pictures are black-and-white frames from a video, just seconds apart, so there is only a slight change between the frames. I want to compare the two frames and output either a black-and-white image of the changed pixels or a binary file with '0' for an unchanged pixel and '1' for a changed pixel. MATLAB's 'Computer Vision System Toolbox' has the 'Tracking Cars Using Foreground Detection' Simulink example, which is similar to what I want to do on the Zybo Z7-20 FPGA: its figure shows the original video (right) with blob detection (the green square) and the binary output image of the changed foreground pixels (left).

    I want to use the Zynq processor and write C code to do the analysis, but I haven't found a clear way to access the SD card from the Xilinx SDK. My current block design contains only the Zynq processor plus some GPIO for testing. I am still researching and comparing examples, but wanted to see if the community had any pointers or if someone has done this before. I am a college student and have been really interested in FPGAs and digital design for the past 6-9 months, but I have mainly written my own Verilog code and haven't worked with block designs or run C code on any of my designs. Any comments or suggestions would be great. Thanks!
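
    Once both frames are in memory (for example loaded from the SD card with the FatFs approach sketched under the previous post), the comparison itself is a simple per-pixel loop; a minimal sketch, with the threshold value as a tunable assumption:

        /* Sketch only: produce a binary change map from two grayscale
         * frames of identical size that are already in memory. */
        #include <stdint.h>
        #include <stddef.h>

        #define THRESH 20   /* noise tolerance, tune for the footage */

        void diff_frames(const uint8_t *prev, const uint8_t *curr,
                         uint8_t *change_map, size_t num_pixels)
        {
            for (size_t i = 0; i < num_pixels; i++) {
                int d = (int)curr[i] - (int)prev[i];
                if (d < 0)
                    d = -d;
                /* 1 = changed, 0 = unchanged; use 255 instead of 1 to get
                 * a directly viewable black-and-white image. */
                change_map[i] = (d > THRESH) ? 1 : 0;
            }
        }
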
  5. Hello, I am trying to make an HDMI passthrough application on the PYNQ-Z1 board using the dvi2rgb (1.9) and rgb2dvi (1.4) IP blocks from this GitHub repo. Here are the technical details of my tools: Vivado 2018.2; PYNQ-Z1 board (part xc7z020clg400-1, board file taken from this webpage); dvi2rgb v1.9; rgb2dvi v1.4. Here are some images of my project: constraints, block diagram, clock wizard settings, dvi2rgb, rgb2dvi.

    Long story short, the application doesn't work when I use it between my laptop (Lenovo Z710 IdeaPad running Windows 8.1) and my TV (Toshiba 49L420U, 1920x1080). After consulting a lot of posts on this website, especially this one and this one, I'm still not sure what the magic formula is to get these IP blocks to work. Those posts don't seem to address the problems I'm having with this design, but rather make changes specific to those projects; they also used older versions of the IP blocks and Vivado and different boards, which may be why those examples didn't work for me.

    I've reduced my critical warnings down to three: 1) Timing: I get the timing warnings shown in the attached image after running implementation. 2) "set_property expects at least one object": I get two of these, for the two constraints listed at the very bottom of the constraints shown in the first image above. How can I write these constraints so that Vivado recognizes them and doesn't throw a warning?

    I read in the posts mentioned earlier that a design may still work even when timing throws a critical warning, but I haven't had the same fortune. Has anybody here gotten their design to meet timing and produce a working project? If so, I'd love to know how; and if you failed timing but still got the project to work, what did your timing analysis look like?

    As can be seen in the block diagram, I pulled the aPixelClkLockd signal out to an LED. It is an active-high signal, but I haven't seen it go high, so obviously that's a problem. If the clock recovery block in the dvi2rgb IP can't lock onto the incoming clock, does that mean the project is not properly constrained, or that the IP block won't work with my laptop? I read a lot about the DDR signals, and I believe I set those up correctly in my block diagram and constraints file, but I didn't understand what the HPD (hot-plug detect) signal does or which block in the design it is supposed to come from. Any help here would be greatly appreciated! Best, Ben
  6. Hello, I just ran the Pcam demo project and noticed the white balance is off. I can see the AWB working, but the image has a green cast. Has anyone experienced similar results, or can this be improved by changing my lighting environment? What would be the best approach to improving AWB? Does the sensor provide a calibration mechanism? I think I could load different colour matrices into the sensor for different lighting scenarios; is this a good solution? Alternatively, I could implement my own algorithm and configure it with manual white balance. What other factors could be involved in this behaviour, other than colour temperature? Judging from the colour (green), maybe the gain/AWB is not fully accounting for the Bayer filter? Thank you.
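
    While the sensor-side colour matrices and AWB settings are being investigated, a manual white balance can also be applied in software to a captured frame; a minimal sketch, assuming an interleaved 8-bit RGB buffer after the demosaic stage and placeholder gain values that would need to be tuned (for example against a grey card):

        /* Sketch only: manual white balance applied in software to an
         * interleaved 8-bit RGB frame. Gain values are placeholders; a
         * green cast is typically reduced by lowering the green gain or
         * raising red and blue. */
        #include <stdint.h>
        #include <stddef.h>

        static inline uint8_t clamp_u8(int v)
        {
            return (v > 255) ? 255 : (v < 0 ? 0 : (uint8_t)v);
        }

        void apply_wb(uint8_t *rgb, size_t num_pixels,
                      float r_gain, float g_gain, float b_gain)
        {
            for (size_t i = 0; i < num_pixels; i++) {
                rgb[3 * i + 0] = clamp_u8((int)(rgb[3 * i + 0] * r_gain));
                rgb[3 * i + 1] = clamp_u8((int)(rgb[3 * i + 1] * g_gain));
                rgb[3 * i + 2] = clamp_u8((int)(rgb[3 * i + 2] * b_gain));
            }
        }

        /* Example call: apply_wb(frame, width * height, 1.15f, 0.85f, 1.10f); */
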
  7. Hi, I am a newbie. I am interested in buying the Embedded Vision Bundle, but I have some technical concerns. I read in the Zybo Z7-20 documentation that the Pcam module works (or is preset) at specific frame rates (such as 30 or 60 fps) at full resolution; in the link below, the supported modes are given as 1080p@30Hz and 720p@60Hz: https://reference.digilentinc.com/learn/programmable-logic/tutorials/zybo-z7-pcam-5c-demo/start I know that it is only a demo document, and I am hoping that, done properly, I can get higher frame rates at lower resolutions. My question is: is it possible to get higher frame rates (e.g. 750-1000 fps) at a lower resolution (e.g. 320x240 pixels)? Thank you so much in advance. Regards, Sirac Kaya, Pollution Control Technologies
  8. I'm doing a project in which I'm trying to implement image processing algorithms. I am a novice with the Zybo and need answers to a few questions: 1. How do I load a hex file onto the Zybo? 2. Are there any issues with using the UART (J11) port (i.e. any voltage issues)? 3. Does the Zybo support external clocking? Thanks in advance.
  9. Hello there, I have developed an image processing algorithm using System Generator and generated the HDL netlist. Now I need to feed an image file into the generated Verilog code, compute the output, and measure the time. Can anyone please help me understand how I should input the image, get the output, and then measure the execution time on a ZedBoard? Thanks in advance.
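
    One common approach is to drive the generated netlist from a Verilog testbench that loads the pixel data with $readmemh, and to measure execution time by counting clock cycles between the first input and the last output in simulation. A minimal host-side sketch (run on a PC, not the ZedBoard) that converts a raw 8-bit grayscale image into a hex text file for $readmemh; the file names are placeholders:

        /* Sketch only, run on a PC: convert a raw 8-bit grayscale image
         * into a text file of hex values, one pixel per line, which a
         * Verilog testbench can load with $readmemh and stream into the
         * generated netlist. */
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            if (argc != 3) {
                fprintf(stderr, "usage: %s input.raw output.hex\n", argv[0]);
                return 1;
            }
            FILE *in  = fopen(argv[1], "rb");
            FILE *out = fopen(argv[2], "w");
            if (!in || !out) {
                fprintf(stderr, "cannot open input or output file\n");
                return 1;
            }
            int c;
            while ((c = fgetc(in)) != EOF)
                fprintf(out, "%02x\n", c);   /* one hex byte per line */
            fclose(in);
            fclose(out);
            return 0;
        }
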
  10. Can anyone help me out? I want to display a simple image (a 200 x 200 grayscale image) from an FPGA (Genesys 2, Kintex-7). I know how to load pixel values into RAM with a .coe file, but I don't know how to start or where to find the procedure. Which is easier for displaying the image, VGA or HDMI? Is there a working example of image display, specifically for the Genesys 2?
  11. Hi all, I tried running the image filtering demo given in the link below: http://www.instructables.com/id/Quick-Start-Test-Demo-Zybo-Xlinx-Zynq-7000-Image-F/ The image filtering demo worked very well for me, and I wanted to try building the project myself, but the project files are for ISE and not for Vivado. Has anyone migrated it to Vivado? Please help! I am a beginner and have never used ISE for Zynq projects. The source files linked below are for the ISE project and are in the zip file: https://github.com/LariSan/Digilent-Maker/tree/master/Zybo/zybo_video_demo Thanks, Aravind