
Niță Eduard

Digilent Staff · Posts: 62

Everything posted by Niță Eduard

  1. Hi @Clinton, It seems that Matlab is detecting the (old) DAQ toolbox made by Mathworks. You should uninstall the toolbox made by Mathworks and install the toolbox made by Digilent for AD3 support. This is the DAQ Toolbox for Digilent devices made by us: https://www.mathworks.com/matlabcentral/fileexchange/122817-digilent-toolbox Here is a guide on how to install the toolbox and get started with the examples: https://digilent.com/reference/test-and-measurement/guides/matlab-getting-started
  2. Hi @kme, Both of these toolboxes perform the same task (i.e. control the device) using the same DAQ Toolbox SDK (https://www.mathworks.com/products/data-acquisition.html). The newer one was made by Digilent with guidance from Mathworks in order for us to provide better support (new features and support for new devices): https://www.mathworks.com/matlabcentral/fileexchange/122817-digilent-toolbox This is the older one, made by Mathworks: https://www.mathworks.com/matlabcentral/fileexchange/40344-data-acquisition-toolbox-support-package-for-digilent-analog-discovery-hardware?s_tid=ta_fx_results I think that if you have both installed, you will encounter some sort of conflict between them (hence you would sometimes "acquire" the device using the old toolbox instead of the new one and vice versa).
  3. Hi @kme, If you encounter the 300k rate limit, it means that you are using the old toolbox made by Mathworks. This behavior makes me think that you have multiple DAQ toolboxes enabled. Can you enable only the Digilent toolbox and let me know if this solves the issue? Thanks, Eduard
  4. Hi @Mike Jessop, We're not that familiar with the Matlab runtime package, but it seems that the toolbox wasn't included in the compilation (for App 2) or is missing something, since 'digilent' is not visible in the vendor list. When you install the toolbox, you should have a file called VendorInfo.m located at <INSTALLATION_PATH>/+daq/+digilentadaptor/VendorInfo.m There's also a file (Session.m) containing initial DAQ session settings. Did you include these in your compilation? In my case, if I comment out the VendorInfo.m file, I get the same error as you. Let me know if this solves anything. Thanks, Eduard
  5. Hi @drkome, Thank you for your screenshots and answers. Since you are implementing a pipelined design, you should make sure that there are no hazards. Are you handling hazards yet? If not, you should choose how to fix them (stalling instructions or dedicated forwarding hardware). I don't think Venus detects data hazards, so in your case you will need to add bubbles (noop instructions) to stall. Take a look at this presentation: https://inst.eecs.berkeley.edu/~cs61c/su20/pdfs/lectures/lec14.pdf Study the Data Hazards and Control Hazards sections. Here is how to add an ILA to your design. You should use this to visualize your signals inside the FPGA: http://web.mit.edu/6.111/www/f2017/handouts/labs/ila.html Also, how are you writing data from the register file to the LED pin? If you just connect bit(0) of the ALU result, it probably won't work, because that signal changes with every instruction. You should verify that you are writing data to address 3 and, if so, write the value to a flip-flop that is connected to the LED.
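As a rule of thumb for how many bubbles to insert, here is a minimal sketch (plain C++, assuming a classic 5-stage pipeline with no forwarding, where WB writes the register file in the first half of a cycle and ID reads it in the second half; `nops_needed` is an illustrative name, not from any tool):

```cpp
#include <algorithm>

// Minimal model of RAW-hazard bubbles in a classic 5-stage pipeline
// (IF/ID/EX/MEM/WB) with NO forwarding, assuming the register file is
// written in the first half of WB and read in the second half of ID.
// A consumer issued `distance` slots after its producer then needs
// max(0, 3 - distance) noop bubbles inserted between them.
int nops_needed(int distance) {
    return std::max(0, 3 - distance);
}
```

So a back-to-back dependent pair (distance 1) needs 2 noops between producer and consumer, and instructions issued 3 or more slots apart need none.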
  6. Hi @drkome, It seems that you are building your own RISC-V processor. Depending on how you implement it (single cycle, pipelined, multicycle etc.), you may encounter different problems. What type of microarchitecture are you building? Can you post a screenshot of what you are simulating? Double check your simulation. At what clock frequency is your processor running? Perhaps your LED is blinking, but at a rate too fast to see with the naked eye. I recommend connecting your signals (instruction, RS1, RS2, ALU result etc.) to an Integrated Logic Analyzer (ILA) and then using the Hardware Manager to analyze what is happening. Perhaps there are some issues with one or more of the IF/ID/EX/MEM/WB stages. This can help find the cause.
  7. Hi @Goubinda Sarkar, You can find the code from the Analog Devices repository on an older branch: https://github.com/analogdevicesinc/no-OS/tree/2018_R1/Pmods/PmodAD4
  8. Hi @rspanbauer, By what you're describing, it looks like you're using the old toolbox (made by Mathworks): https://www.mathworks.com/matlabcentral/fileexchange/40344-data-acquisition-toolbox-support-package-for-digilent-analog-discovery-hardware?s_tid=FX_rc2_behav Instead of the new toolbox (made by Digilent): https://www.mathworks.com/matlabcentral/fileexchange/122817-digilent-toolbox Check out this Getting started guide: https://digilent.com/reference/test-and-measurement/guides/matlab-getting-started
  9. Hi @sarpadi, Double check your environment to see if OpenCV is referenced correctly inside Vitis HLS. This Answer Record goes into detail on how to reference OpenCV after installing it: https://support.xilinx.com/s/article/75727?language=en_US From my experience, setting this up is way easier using a tcl script. I have attached an example for setting up a Vitis HLS 2020 project (it should be the same for 2022). Note that I set this up using Windows, so double check the library includes. run_hls_standalone.tcl
  10. Hi @dddddddq, See this recommend flow for working with AXI4-Lite interfaces from the Vivado HLS user guide (p 109). https://docs.xilinx.com/v/u/2018.3-English/ug902-vivado-high-level-synthesis According to this, you will need to use XMylenet_Get_out_V in order to retrieve the prediction. Maybe this is why subsequent calls are not working?
  11. Hi @CBI, The PCam demo starts with a resolution of 1920x1080p, while your HLS IP processes 1280x720p frames (based on your HLS synthesis result). Try to change the demo resolution to 1280x720p using the UART (see Pcam 5C Image Sensor and Post Processing Options) or modify the starting resolution (change this line: pipeline_mode_change(vdma_driver, cam, vid, Resolution::R1920_1080_60_PP, OV5640_cfg::mode_t::MODE_1080P_1920_1080_30fps); ). Let me know if this helps. Thanks, Eduard
  12. Hi @filipj, Glad you got it working. I believe it doesn't work when connecting both output streams to the AXIS switch because you are not reading from the second one, which then causes a deadlock. The Mat class is basically an hls::stream, which uses blocking reads and writes. https://xilinx.github.io/Vitis_Libraries/vision/2020.1/api-reference.html#xf-cv-mat-image-container-class The Sobel filter source code reads the input stream and does both X and Y edge processing, the results of which get written to two different streams. https://github.com/Xilinx/Vitis_Libraries/blob/2020.2/vision/L1/include/imgproc/xf_sobel.hpp A simpler example would be the duplicate function, which just reads data from a stream and writes it into two different streams. https://github.com/Xilinx/Vitis_Libraries/blob/2020.2/vision/L1/include/imgproc/xf_duplicateimage.hpp If you try to read an empty stream or write to a full stream, execution stalls until there is data to read or space to write. https://docs.xilinx.com/r/2021.1-English/ug1399-vitis-hls/Blocking-Reads-and-Writes In your design, the AXIS switch reads from one stream at a time, while the Sobel IP writes to two separate streams (X and Y). The stream which is not consumed by the AXIS switch fills up, which stalls the Sobel IP; the stream which is consumed by the AXIS switch then stops receiving data, which causes a deadlock. When disconnecting one of the Sobel outputs, I think Vivado optimizes away the unused logic, so only one stream is used.
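To make the stall mechanism concrete, here is a standalone sketch in plain C++ (not the HLS library; `BoundedFifo` and `run_pipeline` are illustrative names) modeling a producer that writes every pixel to two bounded streams while the consumer reads only one of them:

```cpp
#include <cstddef>
#include <queue>

// Bounded FIFO modeling an hls::stream of finite depth: in hardware a write
// to a full FIFO and a read from an empty FIFO would block; here we just
// report whether the operation could proceed.
struct BoundedFifo {
    std::size_t depth;
    std::queue<int> data;
    bool can_write() const { return data.size() < depth; }
    bool can_read() const { return !data.empty(); }
    void write(int v) { data.push(v); }
    int read() { int v = data.front(); data.pop(); return v; }
};

// Producer mimicking the Sobel IP: each step writes one pixel to BOTH
// output streams, so it can only advance when both have space.
// Consumer mimicking the AXIS switch: reads from stream X only.
// Returns how many pixels the consumer reads before the pipeline stalls.
std::size_t run_pipeline(std::size_t fifo_depth, std::size_t total_pixels) {
    BoundedFifo x{fifo_depth}, y{fifo_depth};
    std::size_t produced = 0, consumed = 0;
    bool progress = true;
    while (progress) {
        progress = false;
        if (produced < total_pixels && x.can_write() && y.can_write()) {
            x.write(static_cast<int>(produced));
            y.write(static_cast<int>(produced));
            ++produced;
            progress = true;
        }
        if (x.can_read()) { x.read(); ++consumed; progress = true; }
        // y is never read: once it fills up, the producer stalls for good.
    }
    return consumed;
}
```

The consumer makes progress only until the unread stream fills up (one FIFO depth worth of pixels), after which the producer blocks forever, which is exactly the deadlock described above; reading both streams, or removing one, avoids it.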
  13. Hi @filipj, Good catch on the AXIS port difference. It seems to be a bug in 2020.1 where the port does not get synthesized correctly: https://github.com/Xilinx/Vitis_Libraries/issues/28 Try using the xf_infra.hpp file from the 2020.2 version of the library and let me know if that fixes the HLS synthesis: https://github.com/Xilinx/Vitis_Libraries/blob/2020.2/vision/L1/include/common/xf_infra.hpp Thanks, Eduard
  14. Hello @filipj, Can you disconnect stream_out1_V from the AXIS switch and see if it works with one output? Thanks, Eduard
  15. Hello @Jianchao, What exactly do you mean by not editable? For the image processing part, it might be easier for you to create a custom IP (via HDL or HLS) that uses an AXI4-Stream interface and connect it to the video_in port instead of vid_io_out. Your algorithm may introduce latency and by processing data from vid_io_out, your data (vid_data) may no longer correspond to your synchronization signals (vid_hblank and vid_hsync).
  16. Hello @MHBagh, It seems that in the code all the Mats are defined as HLS_8UC3 (RGB Images). In the code, you are doing conversions between RGB and GRAYSCALE images before and after applying the Sobel operator. Try to redeclare the input and output Mats of the Sobel operator (img1 and img2) as HLS_8UC1 and see if the assert succeeds.
  17. Hello @kouroshkarimi, You will also need another Mat (which is actually a FIFO) to act as a consumer, in order for dataflow to work. If I recall correctly, the bounding box function is not made for streaming applications. As such, you will need to modify it/write a new function to achieve what you are trying to do. Pseudocode for such a function would look similar to this (it only handles 1 roi):

  new_boundingbox(src_mat, dst_mat, roi, color) {
      for (i = 0; i < rows; i++) {
          for (j = 0; j < cols; j++) {
              // read pixel from src mat (row-major order)
              src_pixel = src_mat.read(i * cols + j)
              // check whether position (i, j) lies on the rectangle described by roi (x0, y0, x1, y1)
              if (is_on_rectangle(i, j, roi)) {
                  // color it
                  dst_pixel = color
              } else {
                  // keep original pixel
                  dst_pixel = src_pixel
              }
              // write data to dst mat
              dst_mat.write(dst_pixel, i * cols + j)
          }
      }
  }
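For reference, here is the same idea as standalone C++ over a row-major pixel buffer (`Roi`, `on_border` and `draw_bounding_box` are illustrative names, not the HLS video library API):

```cpp
#include <vector>

// Inclusive rectangle corners.
struct Roi { int x0, y0, x1, y1; };

// True when (row, col) lies on the border of the rectangle.
static bool on_border(int row, int col, const Roi& r) {
    bool in_x = col >= r.x0 && col <= r.x1;
    bool in_y = row >= r.y0 && row <= r.y1;
    bool edge_x = (col == r.x0 || col == r.x1) && in_y;
    bool edge_y = (row == r.y0 || row == r.y1) && in_x;
    return edge_x || edge_y;
}

// Streaming-style pass: read every pixel once in row-major order,
// recolor it if it sits on the rectangle border, write it out once.
std::vector<int> draw_bounding_box(const std::vector<int>& src,
                                   int rows, int cols,
                                   const Roi& roi, int color) {
    std::vector<int> dst(src.size());
    for (int i = 0; i < rows; ++i)
        for (int j = 0; j < cols; ++j) {
            int p = src[i * cols + j];                       // one read
            dst[i * cols + j] = on_border(i, j, roi) ? color : p;  // one write
        }
    return dst;
}
```

In the HLS version the reads and writes would go through the Mat FIFOs instead of a vector, so the function both consumes the producer stream and feeds the next consumer, keeping the dataflow region balanced.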
  18. Hello @donald, Most of your LUTs are used in instances. You can further analyze where these resources are used in the table found at Utilization Estimates > Detail > Instance inside the C synthesis report. I've highlighted it in one of your screenshots. This may give you hints on where you can optimize your design.
  19. Hello @donald, Building a neural network for an FPGA can be challenging, even with HLS. In your first post you mentioned you are new to FPGA design. If you have not done so yet, you might want to start with a simple audio-processing project, in order to ensure that you can acquire data successfully. If you want to reuse the project mentioned, one way of lowering the resource usage is to replace the floating-point representation with a fixed-point one, in order to lower the number of bits in the representation. This might lower the accuracy of the model, but by negligible amounts. If you're not constrained to this specific project, you may want to generate a neural network implementation from an existing Python model with an external tool such as hls4ml. Note that even if it meets your expectations in terms of resources, you will still have to connect it to an audio source and make the connections necessary to create the entire application, so there is no running away from FPGA elements.
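As a sketch of the fixed-point idea (the Q4.12 format and the helper names below are assumptions for illustration, not taken from the project):

```cpp
#include <cmath>
#include <cstdint>

// Hypothetical Q4.12 fixed-point type: 16 bits instead of a 32-bit float.
using q412_t = int16_t;
constexpr int FRAC_BITS = 12;

// Scale a float into Q4.12 (rounding to the nearest step of 1/4096).
q412_t to_fixed(float x) {
    return static_cast<q412_t>(std::lround(x * (1 << FRAC_BITS)));
}

// Recover the approximate float value of a Q4.12 number.
float to_float(q412_t x) {
    return static_cast<float>(x) / (1 << FRAC_BITS);
}

// Multiply in a wider intermediate, then shift back down to Q4.12.
q412_t fixed_mul(q412_t a, q412_t b) {
    return static_cast<q412_t>((static_cast<int32_t>(a) * b) >> FRAC_BITS);
}
```

A Q4.12 value uses 16 bits instead of 32, covers roughly ±8 with a resolution of about 2.4e-4, and its multiplier is far cheaper in LUTs/DSPs than a floating-point core, which is why this swap usually cuts resource usage substantially.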
  20. Hello @donald, HLS may overestimate the resource usage during C synthesis. What you are seeing is not necessarily an error, just HLS warning you that it thinks the design may not fit in the FPGA. Try to implement the project and see if the resource usage improves.
  21. Hello @Eran Zeavi, You can select XC7Z020-1CLG400C via the Parts selection inside the device selection dialog: You can also select the Zybo-Z7-20 from the Boards dialog, if you have the board files installed. I believe Vitis HLS and Vivado share the board files, so you can follow these steps in order to add Digilent boards to the dialog: https://digilent.com/reference/vivado/installing-vivado/v2019.2#installing_digilent_board_files
  22. Hello @ozgur, I have some questions: Are you doing edge detection in hardware (on the FPGA) or in software? What board are you using? Are you doing image processing on a video feed or on a single image? What do you mean by exporting the pixels? Do you mean exporting the edge pixels of a single image in memory, so that it can be processed in software? Or do you mean exporting the processed video feed to an external display (via rgb to dvi IP)?
  23. Hello @Clyde, Take a look at these two user guides. Zynq: https://www.xilinx.com/support/documentation/user_guides/ug585-Zynq-7000-TRM.pdf Microblaze: https://www.xilinx.com/support/documentation/sw_manuals/xilinx2021_1/ug984-vivado-microblaze-ref.pdf There also exists this design hub that compiles resources regarding the Microblaze processor: https://www.xilinx.com/support/documentation-navigation/design-hubs/dh0020-microblaze-hub.html
  24. Hello @AntonioFasano, Can you post the error that Vitis is throwing you? Also, what Vitis version are you using? From our experience, Makefile bugs can differ from one version to another. Thanks, Eduard