
artvvb

Technical Forum Moderator
  • Posts

    1,055
  • Joined

  • Last visited

Posts posted by artvvb

  1. On 4/22/2024 at 12:22 AM, RATHNA said:

    can I program the arm cortex a9 processor alone without FPGA programming? If so what are the DSP functionalities Zybo board's ARM cortex A9 providing?

    Yes, you can program the PS alone without loading a bitstream, but you still need to export a hardware project from Vivado to get the correct set of PS initialization settings, even if the project only contains the Zynq PS in a block diagram.

    There isn't much in the way of dedicated DSP features in the PS - this section of the Zynq TRM provides a feature overview: https://docs.amd.com/r/en-US/ug585-zynq-7000-SoC-TRM/Processing-System-PS-Features-and-Descriptions.

  2. 20 hours ago, Michael Bradley said:

    Can I just use the PMOD ports for this purpose and alter the constraints/xdc file to map to whatever name I give them in the verilog code?

    13 hours ago, dpaul said:

    I have not taken a look for a long time into the Zybo Z7 pin mapping file. But I can tell you from memory that the PMOD connections are generally connected to the FPGA GPIO pins, so that you can control them as as you like.

    So the answer to your question would be a yes!

    Yep, Pmod I/Os are general purpose. Just make sure that whatever circuit you're connecting to shares a ground with the board.
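    For reference, constraints for Pmod pins look like the line below, adapted from the style of Digilent's Zybo Z7 master XDC - you rename the port in `get_ports` to match your Verilog top-level port name. The pin site shown (V12, Pmod JE pin 1) is an example and should be double-checked against the master XDC for your board before use:

```tcl
## Pmod JE pin 1, mapped to a custom top-level port named "my_signal"
set_property -dict { PACKAGE_PIN V12 IOSTANDARD LVCMOS33 } [get_ports { my_signal }]
```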

  3. Hi @loser

    Try pushing some known points in and see what the result is - change the testbench input and see what happens. Below is a testbench that just counts through all bit values that phase could be, using your IP settings:

    [screenshot]

    `timescale 1ns / 1ps

    module cordic;
        reg clk;
        // clock generator: 1 ns period, given the timescale above
        initial begin
            clk = 0;
            forever #0.5 clk = ~clk;
        end
        reg [15:0] din;
        wire [31:0] dout;
        wire dout_valid;
        
        initial begin
            din <= 0;
            forever @(posedge clk) din <= din + 1;
        end
        
        cordic_0 dut (
            .aclk(clk),
            .s_axis_phase_tdata(din),
            .s_axis_phase_tvalid(1'b1),
            .m_axis_dout_tdata(dout),
            .m_axis_dout_tvalid(dout_valid)
        );
    endmodule

    Waveform Style -> Analog Settings for dout:

    [screenshot]

    Radix -> Real Settings for dout:

    [screenshot]

    Thanks,

    Arthur

  4. Hi @VictorV

    Please use Vertical Scale: Decibel instead of expressions:

    [screenshot]

    Using expressions to convert from a ratio to decibels is unnecessary and, as seen, produces incorrect results. We were told that this is because the underlying representation of Vout and Vin is a complex number rather than just the magnitude. The same odd result from trying to calculate a ratio by dividing complex numbers like this also occurs in other SPICE-based simulators. You could calculate the magnitude with an expression of "Mag=sqrt((real(Vout))^2+(imag(Vout))^2)" and produce the expected high-pass plot (as seen below); however, there's not much reason to when the dB scale exists.

    [screenshot]
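    As a quick numerical check (a Python sketch, with a made-up complex value standing in for a simulated Vout/Vin), the magnitude expression above matches the complex absolute value, and the dB scale is just 20*log10 of that magnitude:

```python
import math

# Made-up complex value standing in for a simulated Vout/Vin ratio
vout = complex(0.6, 0.8)

# Magnitude via the expression from the post: sqrt(real^2 + imag^2)
mag = math.sqrt(vout.real ** 2 + vout.imag ** 2)

# Same magnitude via abs(), then the conversion a decibel scale applies
mag_abs = abs(vout)
db = 20 * math.log10(mag_abs)

# Naively dividing complex values doesn't produce this magnitude,
# which is why the expression-based ratio plots look wrong.
```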

    Thanks,

    Arthur

  5. Hi @Alturan

    Welcome to the forum.

    On 4/21/2024 at 5:18 AM, Alturan said:

    I want to generate sine, triangular and square wave on dac (pmodda3).

    Storing data in block RAM (or just LUTs) and counting through addresses is a standard way of doing this kind of thing. Each SPI transfer sent to the DAC carries a new value - your data_i signal would be loaded with a new word read out of a BRAM at the start of every transfer, with the address incrementing each transfer. A separate counter would control when each new SPI transaction starts, setting the sample rate - assuming a 100 MHz clock, you could get a 1 MS/s DAC update rate by initiating a new transfer whenever the counter reaches 100, although it looks like your controller currently takes 120 clocks to send out a transfer.

    You can even control the frequency of the output signal by changing how much the address counter increments each transfer - adding 10 to a counter that rolls over when it goes above 255 lets you count through a lookup table 10x faster than adding 1 each time.
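    The address-stepping idea can be sketched in software (Python here, with hypothetical parameters - a 256-entry table of 12-bit unsigned samples - just to show the arithmetic):

```python
import math

# Hypothetical parameters: 256-entry lookup table of 12-bit unsigned samples
TABLE_SIZE = 256
sine_table = [round(2047.5 * (1 + math.sin(2 * math.pi * i / TABLE_SIZE)))
              for i in range(TABLE_SIZE)]

def generate(step, num_samples):
    """Read one table entry per SPI transfer, advancing the address by
    'step' and wrapping modulo the table size - a step of 10 sweeps the
    table 10x faster than a step of 1, so the output frequency is 10x."""
    addr = 0
    out = []
    for _ in range(num_samples):
        out.append(sine_table[addr])
        addr = (addr + step) % TABLE_SIZE
    return out

# step=1 spends 256 samples per sine period; step=10 fits ten periods
# into the same 256 samples.
```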

    I'd also recommend simulating your HDL as you go, before testing in hardware. Using "if clk_divided = '1' then" instead of "if clk_div_counter = CLOCK_DIVIDER - 1 then" for the shift register enable is concerning - the enable is probably active for 5 clocks in a row, then idle for the next 5, rather than active for one clock in every five, as I assume is intended. Check out this guide: https://digilent.com/reference/programmable-logic/guides/simulation.

    Thanks,

    Arthur

  6. Hi @tato0316

    Welcome to the forum.

    MIO pins are not accessible through PL I/Os and don't get constrained - each MIO maps to a specific physical I/O on the chip. The Zybo's SD interface also doesn't have alternate paths on the PCB to FPGA I/O pins. You would need to access the SD card by using the PS, or maybe by controlling PS peripherals from fabric through the PS's AXI slave ports (assuming they're even addressable from there...).

    Depending on the end goal, maybe you could load the text file data into a BRAM instead, or even a couple of versions of the data, and bake it into the bitstream?

    Thanks,

    Arthur

  7. Hi @Viktor Nikolov

    500000 should be fine for a packet length - it fits within the DMA's 26-bit maximum transfer length. I imagine you're also increasing RECV_BUFFER_SIZE, since it doesn't sound like errors are being reported - adding a return to the check in main would help debug:


        if (words_per_packet * packets > RECV_BUFFER_SIZE) {
            xil_printf("error: receive buffer too small\r\n");
            return 1;
        }

    With fresh eyes, there's a bug in the code where the cache is handled - the byte lengths passed to the cache functions should be RECV_BUFFER_SIZE * sizeof(u32), rather than just RECV_BUFFER_SIZE, which is a word count. This would be obscured unless large enough values for the packet length and count are tried...
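    To put numbers on it (a quick sketch using the 500000-word figure from above - the constant is illustrative), passing the word count where a byte length is expected only covers a quarter of the buffer:

```python
RECV_BUFFER_SIZE = 500000   # buffer size in 32-bit words (u32)
SIZEOF_U32 = 4              # bytes per word

bytes_needed = RECV_BUFFER_SIZE * SIZEOF_U32   # correct cache range length
bytes_covered = RECV_BUFFER_SIZE               # what the buggy call covers

# Only the first quarter of the buffer gets flushed/invalidated;
# the remaining words can hold stale data.
stale_words = (bytes_needed - bytes_covered) // SIZEOF_U32
```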

    Thanks,

    Arthur

  8. 36 minutes ago, Viktor Nikolov said:

     

    The XADC Wizard IP can output measurement data as an AXI Stream, which is a pretty simple protocol. In the HW design I mention here, the AXI stream is fed into the AXI-Stream Subset Converter IP (to divide the stream into chunks of 128 records by setting the tlast signal) and then into AXI DMA IP to load it into memory accessible by Zynq ARM core.

    Using the subset converter as a small packetizer is nice, I didn't realize it had that feature, great tip!

  9. Good to hear you're making progress!

    5 minutes ago, pdw said:

    My console tab doesn’t appear to have a serial terminal: how do I enable that? None of its icons had any obvious options for it.  (I'm running Vitis IDE v2022.1.0 on Ubuntu 20.04 if that helps.)

    I'm on Windows and typically use Tera Term or PuTTY for talking to COM ports, rather than the built-in serial terminal. On Ubuntu, either PuTTY or something like minicom might be appropriate.

    8 minutes ago, pdw said:

    And, in anticipation of the next step: on my Genesys 2 board, should I expect this serial output to come back along the JTAG cable, or will I need to connect another cable to the separate ‘UART’ USB socket?

    Yes, you will need to use both ports. This section of the manual describes the USB UART circuitry: https://digilent.com/reference/programmable-logic/genesys-2/reference-manual#usb_uart_bridge_serial_port.

    Thanks,

    Arthur

  10. It's odd that the heap is overflowing the local memory by the same number of bytes. Feels like it might indicate that it's still out of sync with the updated spec somehow, but I'm not sure - the instructions you found for switching out the XSA are the correct ones. I'd be curious whether a new application project with the same source files and the new XSA still presents the same bug.

    You could also potentially try reducing the heap size in the linker script. This doesn't solve the problem of the updated address map not getting applied, and could lead to overflows, but it might get something working initially. The screenshot below is from a Zynq project with DDR, so yours will look a little different, but this is where to find the heap size setting:

    [screenshot]

    If you end up uploading the Vitis project, please use the File -> Export menu instead of zipping the folder in Explorer, as there are a bunch of absolute paths in various files that the software needs to update when the workspace gets moved or copied.

    [screenshot]

  11. Hi @jmckinney

    I'm largely not familiar with FreeRTOS, but is there a chance that the data cache could be messing with the ability of the DMA to pull block descriptors from memory? https://www.freertos.org/FreeRTOS_Support_Forum_Archive/May_2016/freertos_Zync_UDP_TCP_with_DMA_IP_f531ef1cj.html seems potentially relevant. Caching issues are a pretty standard problem with non-scatter-gather DMA, where data written by one side can't be seen by the other unless you flush or invalidate the cache as appropriate. Pointers in block descriptors that don't come through as expected could cause all sorts of issues. If that's the cause, it might be worth trying to disable the caches entirely, though I'm not sure how that affects the FreeRTOS setup.

    If you haven't, I'd also try putting an ILA on the DMA's AXI4-full master interfaces.

    Thanks,

    Arthur

  12. 1 hour ago, Oscar O. said:

    When I export the .xsa file it is in the directory where the .xpr file is for Vivado. But looking at the property of the .xsa file in the vitis environment it is down projext.sw/design_1_wrapper\export\design_1_wrapper\hw\design_1_wrapper.xsa so wondering if that is the problem (it does not see the new dma block in the schematic).

    You would think Vitis would see that it changed and copy??? it to the correct directory. Nevertheless I copied the .xsa to that directory and it still gave me an erorr.

    You should manually update the hardware specification: https://digilent.com/reference/programmable-logic/guides/vitis-update-hardware-specification. Vitis generally won't see changes made directly to the filesystem. If you just overwrite the copy of the file buried in the platform project, Vitis might still need to extract other files from it, which might not happen automatically during a build - I expect it won't pick up the copy without the manual update.

    One common issue that could come up after this (though I don't think you'll run into it here) is that if your XSA file has a different name, the extracted bitstream name changes. In this case, you'd need to go into the system project's Run/Debug Configurations and change the path to the file to reflect the name of the new XSA.

    [screenshot]

    [screenshot]

  13. Hi Ellile,

    If the other end of your resistor is grounded, that should be correct. The "Energy through C1 100R C2" example, which seems to be what you're basing this on, calculates power as the current through the resistor multiplied by the voltage at the top of the resistor. Current is calculated as the voltage difference across the resistor divided by the resistance in ohms - so, if C2 would measure 0 because the other end of the resistor is grounded, the modified example is fine. Energy is calculated by integrating the power - summing it all up, with each sample multiplied by the amount of time between samples.

    There might be a little bit of error from the digitization process, but that's what having a sufficient sample rate is for.
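    The integration is simple to sketch numerically (Python, with made-up sample values - a constant 1 V across 100 ohms sampled at 1 MS/s):

```python
dt = 1e-6            # sample period in seconds (1 MS/s)
resistance = 100.0   # ohms, as in the "Energy through C1 100R C2" example

# Constant 1 V at the top of the resistor, other end grounded (C2 = 0)
v_top = [1.0] * 1000

current = [v / resistance for v in v_top]        # I = (V1 - V2) / R
power = [v * i for v, i in zip(v_top, current)]  # P = V * I, 10 mW here
energy = sum(p * dt for p in power)              # integrate: sum of P * dt

# 10 mW sustained for 1 ms works out to 10 microjoules
```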

    [screenshot]

    Thanks,

    Arthur

    Edit: Moved the thread to the T&M subforum, which is more appropriate for this topic.

    Apologies for the delay. I haven't been successful in reproducing your setup. Some additional settings that might help:

    1. The constant values applied to the CONFIG interface of the XFFT IP - I assume the tlast port is tied to a constant one.
    2. The XFFT settings - both the Configuration and Implementation tabs include settings that affect what values are provided to the CONFIG interface.

    I'd also be curious about why the XFFT S_AXIS_DATA interface's tlast port seems not to be connected - in simulation, not asserting tlast at the expected time causes the XFFT core to assert either an event_tlast_missing or event_tlast_unexpected error.

    If you haven't, please review the product guides for both of these IP cores:

    Personally, I would use AXI GPIO to push data into a shift register and do as much of the design as possible in an RTL module - using MicroBlaze to handle the UART interface and GPIO and not much else, with a single RTL module included in the block design at the top level. Save writing and reading data from the BRAM until after you have a functional UART -> GPIO bridge; after that, you could implement the BRAM in RTL instead of using IP for it. A first step would be to get a project set up that can receive a single UART character, then use an AXI GPIO to forward it to an RTL register that drives some LEDs.
