artvvb
Technical Forum Moderator
  1. Live video processing on Zybo board?

    Unfortunately, it looks like Xilinx's Video Mixer IP requires an additional license to use (I have not looked into pricing). This means that, without buying the license for the mixer, modifying the hardware pipeline would probably require a custom IP, which would be quite difficult. I should also note that I am not aware of anyone at Digilent who has used the Video Mixer IP before, so questions about it would probably have to go to Xilinx.

    For the coordinate systems used in the DemoScaleFrame function, the comments at the variable declarations are helpful. The most relevant here is the one at the declaration of xcoDest and ycoDest:

        int xcoDest, ycoDest; // Location of the destination pixel being operated on in the destination coordinate system

    The for loop that contains the bilinear interpolation function "handles all three colors". This means that within the byte buffer destFrame, each set of three consecutive bytes holds the R, G, B color values for one pixel, with 8 bits of data apiece. The macro DEMO_STRIDE defines the line length of the video buffer; note that the buffer is always sized to hold 1920x1080x3 bytes of color data, regardless of what resolution is actually being used. To index into an area of the destination frame outside of the for loop - which may be helpful if the target color of pixels indexed earlier in the loop depends on the color of later pixels - you can set a destination index, iDest, to 3 * 1920 * (the y coordinate of your pixel) + 3 * (the x coordinate of your pixel). The RGB values of that pixel will be placed at destFrame[iDest], destFrame[iDest+1], and destFrame[iDest+2].

    EDIT: I just noticed the phrase "pixel averaging" in your post; the rest of my response only describes the indexing. Could you explain what you mean by blurring the area of the input frame? There are a lot of different algorithms for something like this.
    As an example of what I mean, you could figure out what each pixel in the area to be blurred would be if it were bilinearly interpolated, take a weighted greyscale conversion of each of those pixels, average the greyscale values, then write that average to each color channel of each pixel in the target area. I don't believe the result of this method would look particularly good (a grey box, where just how grey it is depends on the input frame), but it might be a good next step. -Arthur
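    The iDest indexing scheme and the grey-averaging idea described above can be sketched in plain C. This is a minimal illustration, not code from the demo; the buffer layout (1920x1080x3 bytes, fixed stride) follows the description above, and the luma weights are the common Rec. 601 values, but any weighted greyscale would do:

```c
#include <stdint.h>

#define DEMO_STRIDE (1920 * 3) /* bytes per buffer line, fixed regardless of resolution */

/* Index of the first (red) byte of pixel (x, y) in a frame buffer. */
static int pixel_index(int x, int y) {
    return y * DEMO_STRIDE + 3 * x;
}

/* Replace a w x h rectangle of destFrame with a single grey value: the
 * average of the weighted-greyscale conversions of the pixels already
 * in that area. */
static void grey_box(uint8_t *destFrame, int x0, int y0, int w, int h) {
    long sum = 0;
    for (int y = y0; y < y0 + h; y++) {
        for (int x = x0; x < x0 + w; x++) {
            int i = pixel_index(x, y);
            /* weighted greyscale of the R, G, B bytes of this pixel */
            sum += (299 * destFrame[i] + 587 * destFrame[i + 1] +
                    114 * destFrame[i + 2]) / 1000;
        }
    }
    uint8_t grey = (uint8_t)(sum / (w * h));
    for (int y = y0; y < y0 + h; y++) {
        for (int x = x0; x < x0 + w; x++) {
            int i = pixel_index(x, y);
            destFrame[i] = destFrame[i + 1] = destFrame[i + 2] = grey;
        }
    }
}
```

    In the actual demo the "pixels already in that area" would come from the bilinear interpolation of the source frame rather than from destFrame directly.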
  2. Unknown Resource

    @deppenkaiser The UART is instantiated and the UART pins are mapped as part of the "Run Block Automation" process, which applies the Zynq preset contained in the board file. You can configure the UART by customizing the Zynq IP. The UART is represented in the block design as a part of the FIXED_IO interface of the Zynq block. In addition, the UART connection cannot be used in a pure-HDL project. Thanks, Arthur
  3. XADC sampling and acquisition timing

    @drewphysics Unfortunately, my personal experience with using the XADC only really applies to DC and low-frequency AC waves. You may need to take into account the additional resistors and capacitors between bank 35 and the JXADC / JA Pmod headers, as seen in the Basys 3 schematic. For the sake of other people who may be reading this, I assume that you are getting the 3pF / 10kOhm numbers from pages 29 and 30 of ug480? To the question itself: I am not sure, I'd recommend asking Xilinx on their forums, or perhaps someone else can chime in here. Thanks, Arthur
  4. XADC sampling and acquisition timing

    Oops, you are correct; the address is seven bits. Are the d# signals in your logic analyzer screenshots the do_out pins?

    My understanding of how the XADC works is that it continuously captures data without you doing anything. Whenever a conversion completes, the data from that conversion is loaded into the appropriate channel's register, and the eoc_out flag alerts you that there is a new piece of data to check. Your interface for actually getting at the registers of the XADC is the DRP interface (daddr_in, den_in, di_in, do_out, drdy_out, and dwe_in). Of these signals, we don't care about the write enable or the data input, and they should be held low. To read a value from a register, assert den_in and drive daddr_in (probably at the same time), and when drdy_out is asserted, latch the data from do_out into a reg.

    As for how to tell which sample is which, and which sample is being read, that is what the CHANNEL[4:0] signal in the timing diagram you added to the first post is for. According to page 73 of the 7 Series XADC user guide, EOC, CHANNEL, and DRDY are intended to be used together and should all represent information about the same sample; by this I mean that the second EOC should pertain to sample N. Thanks, Arthur
  5. XADC sampling and acquisition timing

    I reread your post, and to answer the specific question, this sounds correct to me, but you will need to ask Xilinx to make sure, as they are the ones that created the IP. Thanks, Arthur
  6. XADC sampling and acquisition timing

    I have personally only used the Continuous Channel Sequencer mode, so take the following with that caveat. I believe that the DRP interface is largely independent of the conversion process (hence tying the den_in and eoc_out ports together). This means that data should be captured from do_out when drdy_out is asserted and you have provided a stable daddr_in signal, largely ignoring the End Of Conversion signal for this process. I am curious how you are driving daddr_in: is it tied to GPIO/buttons/switches, or is it currently floating? This port is the address port for the DRP interface and needs to be set. In the case of using only the AN6 channel, I believe the register address for the data is 7'h16. Thanks, Arthur
  7. PMOD TMP2 on a Zedboard

    @bit5huang We currently do not have a Pmod IP core for the TMP2. I would suggest that you use the TMP3 IP core to get an I2C connection going. From the TMP3 drivers, only the TMP3_begin, TMP3_IICInit, TMP3_ReadIIC, and TMP3_WriteIIC functions should be used, as these interact directly with a generic IIC controller. From there, review the ADT7420 data sheet to determine the correct way to configure the device and grab data. Page 13 of the data sheet contains the register map of the device; the most important registers will be the temperature registers (0x00, 0x01), the status register (0x02), and the configuration register (0x03). Reading the higher-level functions of the TMP3 drivers should give you a decent idea of how to use the Read/WriteIIC functions.

    Using the TMP3 IP will not give you access to the interrupt pin, so I would recommend setting the configuration register to 1-sample-per-second mode, then continuously polling the status register to see if a sample is available to be read. When the RDY bit of the status register goes low, take a reading from the temperature registers and printf it out over the UART connection to the Zedboard. Apologies that this isn't exactly plug-and-play; hopefully this is enough to get you started. Thanks, Arthur
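    A sketch of that polling flow, with the register arithmetic broken out. The raw-to-Celsius conversion assumes the ADT7420's default 13-bit resolution (1 LSB = 1/16 degC, two's complement); the TMP3_* calls in the comment are the driver functions named above, but their exact signatures are an assumption and should be checked against the TMP3 driver source:

```c
#include <stdint.h>

/* ADT7420 register map (data sheet, page 13) */
#define ADT7420_TEMP_MSB 0x00
#define ADT7420_TEMP_LSB 0x01
#define ADT7420_STATUS   0x02
#define ADT7420_CONFIG   0x03

#define ADT7420_STATUS_RDY 0x80 /* this bit goes low when a new reading is available */

/* Convert the two temperature register bytes to degrees Celsius,
 * assuming the default 13-bit mode. */
static double adt7420_to_celsius(uint8_t msb, uint8_t lsb) {
    int raw = ((msb << 8) | lsb) >> 3; /* drop the three flag bits */
    if (raw & 0x1000)                  /* sign bit of the 13-bit value */
        raw -= 8192;
    return raw / 16.0;
}

/*
 * Hypothetical polling loop built on the TMP3 driver's low-level IIC
 * calls (names from the drivers; argument lists are guesses):
 *
 *   TMP3_WriteIIC(&tmp3, ADT7420_CONFIG, &one_sps_mode, 1);
 *   while (1) {
 *       u8 status, bytes[2];
 *       TMP3_ReadIIC(&tmp3, ADT7420_STATUS, &status, 1);
 *       if (!(status & ADT7420_STATUS_RDY)) {
 *           TMP3_ReadIIC(&tmp3, ADT7420_TEMP_MSB, bytes, 2);
 *           printf("%.4f degC\r\n", adt7420_to_celsius(bytes[0], bytes[1]));
 *       }
 *   }
 */
```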
  8. XADC sampling and acquisition timing

    @drewphysics The V_N and V_P pins are indeed grounded on the Basys 3. You can find all of the connected pairs of XADC pins in the Basys 3 schematic, in Bank 35, on sheet #6. Thanks, Arthur

    @flexible111 It depends on if your project includes a Microblaze processor or not. If it does, check out this tutorial. It may be a little outdated, but I don't see any major problems from a quick skim. Thanks, Arthur
  10. Live video processing on Zybo board?

    To add to my comments on possible performance problems: running this masking algorithm in the Zynq PS will likely be inherently slower than running it in the PL. Adding a stage to the output pipeline in hardware is likely going to be a better approach, with the caveat that it would be significantly more work. This stage would likely need to be created as a custom IP core that either:

    1. Takes in an AXI stream and outputs an AXI stream, probably placed near the AXI stream converter IP in the pipeline (if I am remembering correctly; I don't have access to Vivado at the moment). There may be a Xilinx-provided IP that does something like this, but I am unsure.

    2. Takes in VGA signals and outputs VGA signals, placed directly before the output port. This approach would require more work in detecting the pixel position and resolution of the data stream, but would avoid the complexity of AXI.

    For the time being, though, it is still worth trying to make the algorithm work in the PS; this is just a hypothetical in case the performance of the PS design is unacceptable. Thanks, Arthur
  11. Live video processing on Zybo board?

    @Shuvo Sarkar Are you selecting the output frame using the serial interface to the demo? I assume that what is currently being displayed on stream is a single captured frame with the masking applied? If so, it is likely possible to modify the demo further so that it runs the "copy, scale, and mask" algorithm repeatedly; by this I basically just mean placing the modified DemoScaleFrame function inside a while loop. Caveat: I am not certain what the performance cost of doing this would look like, so it may be worth looking into how quickly the algorithm runs (perhaps a little outside of the scope of the current discussion). Thanks, Arthur
  12. Keyboard input problem - zybo buffer

    It would probably be easier to just switch to something like Tera Term; see the link I provided above. Our demos are validated with Tera Term rather than the SDK console. It is possible that the cache flush doesn't see the '\r\n' until after execution has already continued into the change-resolution function. If this is the case, instead of flushing, you may need to call something like the following:

        int n = 0;
        while (n < 2) {
            n += Uart_Receive(...);
        }

    This will make sure both newline characters are picked up.
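    To make the accumulation explicit, here is a self-contained model of that loop. The stub below stands in for the real UART receive call (which, like the Xilinx drivers, may return fewer bytes than requested per call); it is illustrative only, not the actual driver API:

```c
#include <stdint.h>
#include <string.h>

/* Stub standing in for the UART receive call: returns at most one byte
 * per call from a canned input, mimicking a driver that can return
 * fewer bytes than requested. */
static const char *rx_data = "\r\n";
static int rx_pos = 0;

static int Uart_Receive_stub(uint8_t *buf, int len) {
    if (len < 1 || rx_pos >= (int)strlen(rx_data))
        return 0;
    buf[0] = (uint8_t)rx_data[rx_pos++];
    return 1;
}

/* Block until both newline characters have been received, accumulating
 * however many bytes each call returns. */
static int drain_newline(uint8_t *buf) {
    int n = 0;
    while (n < 2)
        n += Uart_Receive_stub(buf + n, 2 - n);
    return n;
}
```

    The point of accumulating into n is that a single receive call returning one byte (just the '\r') no longer ends the wait early.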
  13. Keyboard input problem - zybo buffer

    Could you try using Tera Term instead? I am unfortunately not super familiar with the SDK console, and have had trouble using it in the past.
  14. Keyboard input problem - zybo buffer

    @Andrea_cau What serial terminal application are you using? If I recall correctly, Tera Term automatically sends any characters typed, so pressing enter is not required to send the '1' character. Thanks, Arthur
  15. Live video processing on Zybo board?

    @Shuvo Sarkar What exactly needs to be done depends on what you mean by "region of interest" and "binary mask". I will assume that you are trying to replace some area of what is being displayed on the screen with a rectangular image.

    A good starting point would be to take the input stream and output it with modifications. The DemoScaleFrame function in video_demo.c does this. The resolution scaling being done by this function may or may not be desirable for your project. The bilinear interpolation function implemented on line 473 of the original source is the primary point of interest here. The three variables that tell you what is being written to in the destination frame are the index, i, which can be used to determine the color channel being written to, and the destination coordinate variables xcoDest and ycoDest.

    A good first step, to be able to see changes being made, would be to add extra code that blacks out a rectangular area of the screen. This can be accomplished by wrapping the destFrame[iDest] statement in an if statement that either writes a zero to destFrame[iDest] or runs the bilinear interpolation of the source frame, depending on the coordinates of the target pixel in the destination frame.

    How you store, access, and process the binary mask (overlay image?) is a large topic that I would need more details on to provide information. Let us know if you have more questions. -Arthur
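    A stripped-down illustration of that "wrap the destination write in a coordinate check" idea. This is not the demo code: there is no scaling here, just a straight copy with a blacked-out rectangle, and the bilinear interpolation step is reduced to a plain source read (marked in a comment). The coordinate names and the fixed DEMO_STRIDE layout follow the demo's conventions:

```c
#include <stdint.h>

#define DEMO_STRIDE (1920 * 3) /* bytes per buffer line, as in the demo */

/* Copy a width x height region of srcFrame into destFrame, writing
 * zeros instead for any pixel inside the given rectangle. */
static void copy_with_black_box(const uint8_t *srcFrame, uint8_t *destFrame,
                                int width, int height,
                                int boxX, int boxY, int boxW, int boxH) {
    for (int ycoDest = 0; ycoDest < height; ycoDest++) {
        for (int xcoDest = 0; xcoDest < width; xcoDest++) {
            int iDest = ycoDest * DEMO_STRIDE + 3 * xcoDest;
            int inBox = (xcoDest >= boxX && xcoDest < boxX + boxW &&
                         ycoDest >= boxY && ycoDest < boxY + boxH);
            for (int c = 0; c < 3; c++) {
                /* in the real demo, the else branch would be the
                 * bilinear interpolation of the source frame */
                destFrame[iDest + c] = inBox ? 0 : srcFrame[iDest + c];
            }
        }
    }
}
```

    Replacing the zero write with reads from a stored overlay image would be the next step toward the "binary mask" behavior.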