Ciprian

Digilent Staff

Everything posted by Ciprian

  1. Hi @fangzr, I am not familiar with the RTL8812au, but I managed to set up an ath9k dongle driver for the Zybo Z7-20 with Petalinux. The ATH9K is easier because its driver is already in the kernel; you just need to activate it. Regarding the RTL8812au, all I can tell you from experience is that some wireless dongles need additional firmware, like the ATH9K does, which will be loaded into the dongle automatically over USB, provided you set up your USB port to identify mass storage devices. The firmware files should be copied into the corresponding folder on the rootfs of your target. Another critical issue is driver dependencies: some wireless drivers need certain kernel drivers loaded in order to work, for instance the MAC80211 stack in the kernel. Please make sure that your driver does not depend on anything else. Based on the log you have sent us, I would start with this. -Ciprian
  2. Hi @Mingfei, If you are referring to the Arty-Z7-XX-hdmi-out demo, where XX is either 10 or 20, then you have a series of options to make it run faster. Before I go into detail, let's talk a bit about what the demo actually does. It basically reads one frame (dispCtrl.framePtr[dispCtrl.curFrame]), takes every pixel and inverts it into the output frame (dispCtrl.framePtr[nextFrame]). This is a software approach (based on how DemoInvertFrame() is written), which means that how fast the data is processed depends on the processor. Therefore things like printing text on the console (like printf("Volage.....)) will slow it down. As for solutions, there are two ways to try to speed it up: 1. Software: the VDMA has a feature called frame buffer parking; you can read all about it in the User Guide (PG020 for v6.3). It basically parks the output on one frame buffer and lets you edit the other two frames without interrupting the video-out stream. This will increase your output to 50 fps, but the refresh rate of what you want to do, the actual processing, will still only run at 5 frames per second. 2. Hardware: you could take advantage of your FPGA and write an IP core which does the inversion in hardware, thus offloading the task from the processor and reducing the processing delay to almost nothing; this of course means you will have to redesign your project. I would recommend writing the IP in HLS, because it's easier, and placing it at the output stage between the VDMA and the Subset Converter. A rough sketch of the software inversion loop is shown below. -Ciprian
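
    To illustrate the software approach described above, here is a minimal sketch of a per-pixel inversion loop over a 32-bit frame buffer. The names (frameIn, frameOut, DEMO_WIDTH, DEMO_HEIGHT, DEMO_STRIDE) are placeholders, not the actual identifiers from the demo sources, and the resolution is only an example:

        #include <stdint.h>

        #define DEMO_WIDTH  1280   /* example resolution, adjust to the demo's */
        #define DEMO_HEIGHT  720
        #define DEMO_STRIDE 1920   /* pixels per buffer line (assumed) */

        /* Invert every pixel of one frame into another. This loop is purely
         * CPU-bound, which is why the demo's frame rate depends on the processor. */
        void invert_frame(const uint32_t *frameIn, uint32_t *frameOut)
        {
            for (int y = 0; y < DEMO_HEIGHT; y++) {
                for (int x = 0; x < DEMO_WIDTH; x++) {
                    uint32_t px = frameIn[y * DEMO_STRIDE + x];
                    frameOut[y * DEMO_STRIDE + x] = ~px; /* bitwise NOT inverts every channel */
                }
            }
        }

    Moving this same loop into an HLS IP between the VDMA and the Subset Converter removes the work from the processor entirely, which is the hardware option described above.
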
  3. Hi @Luighi Vitón, I'm not sure it's an HDMI issue. Have you tried using X11 forwarding over SSH? Do you get the same result? -Ciprian
  4. Hi @AndyCap, Unfortunately I don't have the time to look into this, but judging by what you said I would look into the IP driver of the audio system, <kernel>/sound/soc/adi/axi-i2s.c (as far as I remember), and how the buffers are handled with regard to the DMA transfer. The issue might be that the full and empty flags are not properly handled in the driver or in the HDL IP. Sorry for not being able to help more. -Ciprian
  5. Hi, We are planning to update our project to 2018.3 but we don't have a precise release date for it yet. If it's not mandatory to use 2018.3, please stick with 2017.4 for this project for the time being. -Ciprian
  6. Digital Twin

    Hi @Kris Persyn, It depends on how you manage your resources. Driving immersive visuals on an HDMI display can be done in multiple ways at different resolutions; some are PL taxing, others are DDR taxing. You could generate entire frame buffers in PL or PS, or you could find an optimal algorithm that changes just the previous frame, or you could allocate a high number of frame buffers and then run them in a loop. It also depends on how MATLAB synthesizes the IP you will need to add to your design. If you design your project properly and don't aim for a resolution higher than 720p (I'm being conservative; we managed to drive the HDMI at 1080p with processing filters without a problem), I think it should be enough for what you want to do, resource-wise. My suggestion: download and install Vivado, download and install the board files, create and implement your project, look at the resource consumption and then buy a board. - Ciprian
  7. zybo hdmi to vga out

    Hello @Gourav, Unfortunately the issue could be in multiple locations and it's hard to determine from the block diagram; it's probably a small bug either in the block design or in the HLS IP. Either way, please send an archive of the project so that I can look over it. Thank you -Ciprian
  8. Try adding this:

        &i2c0 {
            clock-frequency = <100000>;
            status = "okay";
        };

    here: <petalinux_project>/project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi -Ciprian
  9. I'm not sure about this, but I would look into the HPD signal. As far as I know, the Nexys Video demo is set up so that the HPD is a pass-through signal, which might interfere with the Zybo Z7 design. I will ask our colleague who has more experience with HDMI whether he has some input regarding your question. -Ciprian
  10. Hi @Amin, It depends on what you are planning to do. If you only need Linux running on your Zybo Z7-20, then I can give you our pre-built BOOT.bin and kernel+rootfs images. This approach is based on our demo HW platform; you will have the benefit of a lot of IPs in the design, which allows a very versatile use of the board, but unfortunately it will not allow you to add any new IP to the design. If you want to build your own platform and base the on-board Linux on it, then you will need to install Petalinux on Linux (I recommend Ubuntu) and build/customize it the way you want; @jpeyron sent you the links for this in the previous post. -Ciprian
  11. Hi @kotra sharmila, Please follow the steps provided in this readme to modify/create your project. Please keep in mind the NOTE at step 12. - Ciprian
  12. Hi @Jeremy, Petalinux creates a series of .dts files based on your Hardware Description File; these are overridden by the system-user.dtsi located in <petalinux project root>/project-spec/meta-user/recipes-bsp/device-tree/files, and the automatically created device tree files are here: <petalinux project root>/components/plnx_workspace/device-tree/device-tree-generation. The Petalinux project for the Zybo-Z7-20 from our GitHub is configured based on this Vivado project, which contains VTC, HDMI, I2S and other IPs. Consequently, the system-user.dtsi is used to redefine parameters of the automatically generated device tree files. What you will have to do is rewrite the system-user.dtsi to suit your needs. Basically, as far as I can tell from your .hdf, you only have one AXI GPIO in the PL. This means that you will have to remove from the system-user.dtsi all the IPs which are not in your PL (you can see a list of them in the error you sent us), and if you want to use a different driver for your AXI GPIO IP you will have to add it to the system-user.dtsi and make the changes there. -Ciprian
  13. Hi @Ben B, Regarding your question on using the Zybo Z7-20 to capture HDMI signals: it is possible, and using UIO is also an option, but because we are using the VDMA to get the video signal it's better to use a DMA driver. Unfortunately Xilinx does not provide a complete DMA driver for any of their DMA IPs, therefore I have been using this DMA driver, which includes the VDMA functionality as well. To make things as easy as possible, I generated an example project for you with the VDMA used to capture video streams and OpenCV functions to write a *.bmp file. What you need to do in order to get it working is:

    1. Load HDMI2BMP.elf to /home/root on the rootfs portion of your board.
    2. After the board boots, load the axi_dma_driver: root@Zybo-Z7-20:~# insmod /lib/modules/4.9.0-xilinx-v2017.4/extra/xilinx-axidma.ko
    3. Run HDMI2BMP.elf.

    This will generate a test.bmp in /home/root with the captured image. The source file for the app is in the SDK folder. The changes I had to make to the original Petalinux project are:

    - create a new module in Petalinux: petalinux-create -t modules -n xilinx-axidma --enable
    - copy the necessary files to Petalinux-Zybo-Z7-20/Zybo-Z7-20/project-spec/meta-user/recipes-modules/xilinx-axidma/files and update the Makefile and the xilinx-axidma.bb
    - update the system-user.dtsi in /Petalinux-Zybo-Z7-20/Zybo-Z7-20/project-spec/meta-user/recipes-bsp/device-tree/files
    - write the demo program

    Hope this helps. -Ciprian Zybo-Z7-20-HDMI-RX_peta.zip
  14. It is not recommended to install Petalinux anywhere other than "/opt/pkg/petalinux"; we encountered similar issues when installing Petalinux in other locations... That's why we usually specify in our Petalinux projects how and where to install Petalinux. -Ciprian
  15. Hi @lowuze, As far as I know there is no Digilent Linux for the PYNQ; we do, however, have Petalinux for the Arty Z7-20, which is similar to the PYNQ (the Arty does not have a microphone). It has XADC support activated by default (driver in the kernel and configured device tree). Unfortunately we do not have an example project on how to read the XADC from Linux, but you can access it in: /sys/bus/iio/devices/iio:device0 For more information about how to use it please search for Xilinx XADC on Linux; THIS might also give you a better understanding of how the driver works and how to interface with it. A small read-out sketch follows this post. -Ciprian
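
    As an illustration of reading the XADC through the IIO sysfs interface mentioned above, here is a minimal C sketch. It assumes the XADC enumerates as iio:device0 and exposes the usual in_temp0_raw / in_temp0_offset / in_temp0_scale attributes; the attribute names can differ between kernel versions, so list the directory on your board first:

        #include <stdio.h>

        /* Read one numeric IIO attribute from sysfs; returns 0.0 on failure. */
        static double read_attr(const char *name)
        {
            char path[128];
            double value = 0.0;
            snprintf(path, sizeof(path), "/sys/bus/iio/devices/iio:device0/%s", name);
            FILE *f = fopen(path, "r");
            if (f) {
                if (fscanf(f, "%lf", &value) != 1)
                    value = 0.0;
                fclose(f);
            }
            return value;
        }

        int main(void)
        {
            /* Per the IIO ABI, value in milli-units = (raw + offset) * scale. */
            double raw    = read_attr("in_temp0_raw");
            double offset = read_attr("in_temp0_offset");
            double scale  = read_attr("in_temp0_scale");
            printf("XADC temperature: %.2f C\n", (raw + offset) * scale / 1000.0);
            return 0;
        }
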
  16. Hi @Hong, Thank you for bringing up this issue. Unfortunately, without a log file we don't actually know what's wrong, because the message you have provided is a generic error message. Could you please provide the log file? Taking into account that 2018.3 is a very recent release (currently 4 days old), we have not yet updated our demo projects to this version. Differences between Vivado versions tend to come with changes which might make our demos fail. We usually recommend a specific Vivado version for our demo projects; if you go to the Zybo-Z7-20-pcam-5c demo you will notice that in the description of the project we specify "Created for Vivado 2017.4". We can guarantee that it will work in that version. If you need it to work with 2018.3 you will have to wait for us to update our demo or try to update it yourself. - Ciprian
  17. My first suggestion was just basically replacing

        PmodTMP3_0: PmodTMP3@40900000 {
            compatible = "xlnx,PmodTMP3-1.0";
            interrupt-names = "I2C_Interrupt";
            interrupt-parent = <&microblaze_0_axi_intc>;
            interrupts = <4 2>;
            reg = <0x40900000 0x1000>;

    with

        PmodTMP3_0: PmodTMP3@40900000 {
            clock-frequency = <100000000>;
            compatible = "xlnx,xps-iic-2.00.a";
            interrupt-names = "I2C_Interrupt";
            interrupt-parent = <&microblaze_0_axi_intc>;
            interrupts = <4 2>;
            reg = <0x40900000 0x1000>;

    and it should work... in theory. Either way, I'm glad you got it going and thank you for your tutorial. - Ciprian
  18. Hi @troden, "/dev/misc/rtc" is the place where RTCs are instantiated once the RTC driver has been loaded and bound; for many embedded processors the RTC is either part of their architecture or there is an explicit RTC IC on the board, with the driver in the kernel. Unfortunately this is not the case with the Pmod RTCC, which is an external RTC. You first need to add either the PmodRTCC IP or the AXI I2C IP to your hardware design, then you need to update the dts to include the IP, and finally, before building the kernel, you need to include the Xilinx I2C driver (and, optionally, if you manage to adapt the mcp7941x Linux driver to work with the Xilinx I2C driver, include that one as well). After building the kernel you can access the RTC in "/dev/misc/rtc" if the mcp7941x Linux driver works; otherwise it will show up as a generic I2C device. A small read-out sketch is shown below. Here is a link on how to use the Xilinx I2C driver. Here is a link to the mcp7941x Linux driver discussion on a separate forum. - Ciprian
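
    For completeness, here is a minimal sketch of reading the time once the RTC is visible as a character device, using the standard Linux RTC ioctl interface. The device path is an assumption; use whatever node the driver actually creates on your system:

        #include <stdio.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <sys/ioctl.h>
        #include <linux/rtc.h>

        int main(void)
        {
            /* Adjust the path to match where the RTC was instantiated. */
            int fd = open("/dev/rtc", O_RDONLY);
            if (fd < 0) {
                perror("open");
                return 1;
            }

            struct rtc_time tm;
            if (ioctl(fd, RTC_RD_TIME, &tm) < 0) {   /* standard RTC read ioctl */
                perror("RTC_RD_TIME");
                close(fd);
                return 1;
            }

            printf("RTC time: %04d-%02d-%02d %02d:%02d:%02d\n",
                   tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday,
                   tm.tm_hour, tm.tm_min, tm.tm_sec);
            close(fd);
            return 0;
        }
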
  19. All the information regarding the TMP3 is here, including the pin layout and a link to the IC manufacturer's datasheet (for the register space configuration). As far as I can tell, the IP is just a wrapper around an AXI IIC IP, so I'm guessing that if you instantiate it in the DTS using the Xilinx I2C driver you should be fine. If that doesn't work, you can always try simply replacing the TMP3 IP with an AXI IIC IP and then, following the IC datasheet, you should get it going. - Ciprian
  20. Unfortunately you are trying to do something hard on an old board using outdated tutorials. This will not be easy, because you seem to lack experience with embedded Linux as well. I'll try to help you as much as I can, but you will need to read up on the following terms and understand how they fit together and interact: hardware platform, device tree, first stage boot loader (aka FSBL), second stage boot loader (U-Boot), kernel, ramFS, root FS, user space and kernel space. Because you are using a soft-core processor (MicroBlaze), we cannot determine whether the dts (device tree source) is OK or not; the device tree is a representation of what you have in your hardware configuration and what base address each IP has, therefore without the EDK project, or at least the address editor, we cannot validate your source file. From what I've read you seem to have everything up and going except Linux booting; I'm guessing there is a problem with your second stage boot loader (probably U-Boot) which has not been configured properly (U-Boot also has a defconfig which needs to have UART and debug capabilities active); unfortunately, that's the most I can tell you without actually seeing the sources. There is no tutorial that I know of which will take you step by step through this process, but I recommend you focus on this: https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18842560/MicroBlaze - Ciprian
  21. Hi @n3wbie, I had a similar problem; for me it was the fact that I did not have enough space allocated to the stack in the linker script, so if you changed the size of the RecSamples variable, that might be the issue. Regarding the sine wave, I'm not sure what you want to do, but if you want to generate a sine wave from within the program and then play it back to the headphones, you can simply use the sin function in C (you need to add the math.h library and make sure you activate the library in the project settings, described here); a short sketch follows this post. Otherwise you can set Mic or Line In, connect the jack to your PC and play a sine-wave video from YouTube; then you can look at the recorded samples. I'm guessing you are more familiar with MATLAB; you can try that too. The idea is that as long as you are feeding it the right samples, either way works. Hope this helps, Ciprian
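
    As an illustration of the sin-function approach mentioned above, here is a minimal sketch of filling a 1-second sample buffer with a 1 kHz tone at a 48 kHz sample rate. The buffer name RecSamples and the 24-bit sample range are assumptions based on this thread; adjust them to the actual demo code, and remember to enable the math library (-lm) in the project settings:

        #include <math.h>
        #include <stdint.h>

        #define PI          3.14159265358979323846
        #define SAMPLE_RATE 48000            /* 48 kHz, as used in the demo */
        #define NUM_SAMPLES SAMPLE_RATE      /* 1 second of audio */
        #define TONE_HZ     1000.0           /* test tone frequency, arbitrary */

        /* Declared static so the buffer does not live on the stack
         * (see the linker script / stack size issue mentioned above). */
        static int32_t RecSamples[NUM_SAMPLES];

        void fill_sine(void)
        {
            for (int i = 0; i < NUM_SAMPLES; i++) {
                double t = (double)i / SAMPLE_RATE;
                /* scale to a 24-bit signed range; match the codec's sample width */
                RecSamples[i] = (int32_t)(8388607.0 * sin(2.0 * PI * TONE_HZ * t));
            }
        }
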
  22. Hi @n3wbie, The working project is attached. What you have to take into account when using these audio codecs with Digilent products is that you need to configure the codec (using I2C) as well as receive the samples using the I2S IP core. Basically, one interface is for the control of the codec and the other one is for receiving the samples. I have written a small driver for both the I2S core and the SSM2603 I2C control, which is in the source files of the SDK project (in the sdk folder) and which configures the registers of the codec and the I2S IP core; the documentation for the codec can be found here. The IP core has not yet been documented, which is the main reason we have not added it to the Digilent vivado-ip library, but it needs a 100 MHz input in order to synthesize the 12.288 MHz MCLK and the subsequent clocks for the I2S protocol. The demo project reads the buttons and, based on the one you press, it will: BTN0 - Record 1s; BTN1 - Set Mic input; BTN2 - Set Line In input; BTN3 - Playback 1s. The project is not really optimized, so it uses a variable "RecSamples", allocated on the stack, which holds the recorded samples (48000 samples, representing 1s at a 48 kHz sampling rate) and is also used for playback, so don't press playback before record. The rest should be easily traceable from the comments in the driver and the main source code. If you have any other questions feel free to ask. Ciprian ZyboZ7Audio.zip
  23. Hi @n3wbie, I tried to rebuild your project, but there are some errors from downloading the project from Drive; for future reference, when uploading a project please make sure you archive it first. Regardless, I have started building a simple demo that does exactly what you want to do. It will not be fully documented, but it will be documented enough so that you can understand how the IP works and how to use it in a future project. This will probably take a bit of time, so I will have the demo ready for you by the beginning of next week, and I will post the project and the description as a reply to this post. Regards, Ciprian
  24. Unfortunately it is not as simple as copying from one board to another. First off, you must check the configuration of the Ethernet IP in the hardware design and make sure it is configured correctly; compare it to our user demo, although I'm not sure if we are using the Ethernet Lite IP in our design. https://reference.digilentinc.com/learn/programmable-logic/tutorials/genesys-2-user-demo/start The second thing you must do, and this is most likely the issue in your case, is make sure that the IP ports in the XDC are mapped correctly. From a Linux and device tree perspective, if you have the same IPs as the original design for the KC705 (and all the peripherals are present on the Genesys 2), then it should be OK. Basically, if only the board changes in your project, it's more likely a board-specific issue (wrongly mapped XDC, peripherals not present, etc.) rather than a software issue. Ciprian
  25. Depending on the size of your code (.elf file), you can choose to run it either from the BRAM or from DDR (provided you have a DDR controller IP in the block design). If your .elf file is small enough to fit into the BRAM, you can splice it into the bit file using SDK. This is done by programming the FPGA from SDK and, in the Software Configuration section, putting your .elf file in instead of the bootloop (the blue part in the attached picture); then you go to <SDK workspace directory>\<Name of Hardware Platform> and copy the download.bit onto your SD card. If your code is not small enough to fit into the BRAM, then you have to use QSPI.