Posts posted by xc6lx45

  1. I think the first thing I'd check is the jitter performance of the available DAC clock (its impact is proportional to the highest frequency component you're generating). The second thing I'd check is 14 bit (or whatever Nyquist-equivalent performance they promise) against the LTE uplink specs, see TS 36.101. I don't think it will fly, but I haven't done my homework (nor do I know what you're actually trying to achieve, e.g. in terms of specs compliance). Check unwanted emissions, not so much close in to your signal (where the requirements are quite forgiving) but far away, "out-of-band".

    Note, I've quoted the handset ("user equipment" UE) specs. Basestations ("eNB"), that's another beast still...
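    A quick way to put numbers on the clock-jitter point: for a full-scale sine at frequency f, RMS aperture jitter t_j caps the SNR at roughly -20*log10(2*pi*f*t_j). This is the standard textbook rule of thumb, not anything specific to this board, and the example numbers below are mine:

```python
import math

def jitter_snr_db(f_sig_hz, t_jitter_rms_s):
    """Best-case SNR of a full-scale sine sampled with a jittery clock
    (standard aperture-jitter rule of thumb)."""
    return -20.0 * math.log10(2.0 * math.pi * f_sig_hz * t_jitter_rms_s)

# 1 ps RMS jitter on a 10 MHz component: ~84 dB, comparable to ideal 14-bit SNR
print(round(jitter_snr_db(10e6, 1e-12), 1))
# the same jitter at 100 MHz costs 20 dB: ~64 dB
print(round(jitter_snr_db(100e6, 1e-12), 1))
```

    Note how the limit degrades 20 dB per decade of signal frequency - that's the "proportional to the highest frequency component" statement above.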

  2. OK I just read the LTE part.

    I suspect this is a RF systems design question, lacking the analog / RF upconverter (which is outside the FPGA).

    30.72 MHz sample rate for LTE20 is standard at baseband. Upsampling by 8 to 245.76 MHz sounds sensible as well (depends on the converters / filters that are available). What's missing is the step from there to RF, which might be either direct conversion (using the BB signal centered at 0 Hz) or something more complex (using a non-0 Hz DAC signal), which is easier from an RF point of view, e.g. thanks to the lower ("non-infinite") fractional bandwidth. Don't be fooled by the apparent simplicity of direct conversion - it isn't simple once you know all the correction algorithms no one ever mentions. DC makes sense if you intend to sell billions of units, not to build a simple working prototype. Superhet is usually the ticket, with a few suitable acoustic or cavity filters.

    The question I'd ask is, "what is the intermediate frequency"? Knowing that, designing the digital upconversion is a routine job, possibly involving cascaded halfband filters if it's meant to be cheap.

    The step from IF to RF, e.g. 900 MHz or 2.5 GHz, needs to be bridged with dedicated radio-frequency circuitry, e.g. a diode ring mixer, unless you're really sure you've done your homework.

    My "polyphase" answer addresses the first post but I suspect even a good answer to a bad question is equally useless.
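    To make the "digital upconversion" step concrete, here's a toy sketch of one halfband interpolation stage followed by a multiply up to an IF. All numbers (1 kHz sample rate, 50 Hz tone, 600 Hz IF) are illustrative stand-ins for the LTE rates discussed above, and the filter is a quick windowed sinc, not a production design:

```python
import numpy as np

# Toy numbers for illustration only -- not the LTE rates from the post.
fs = 1000.0                      # "baseband" sample rate
n = 2000
t = np.arange(n) / fs
x = np.cos(2 * np.pi * 50 * t)   # baseband test tone at 50 Hz

# Halfband lowpass via windowed sinc: every second tap (except the
# center) is exactly zero, which is what makes halfband stages cheap.
ntaps = 31
k = np.arange(ntaps) - ntaps // 2
h = 0.5 * np.sinc(k / 2.0) * np.hamming(ntaps)

# Upsample by 2: zero-stuff, filter out the image, gain 2 restores amplitude.
up = np.zeros(2 * n)
up[::2] = x
y = 2.0 * np.convolve(up, h, mode='same')   # now at 2*fs = 2000 Hz

# Digital upconversion: multiply by a carrier at the chosen IF (600 Hz here).
fs2 = 2 * fs
t2 = np.arange(2 * n) / fs2
y_if = y * np.cos(2 * np.pi * 600 * t2)     # lines appear at 550 and 650 Hz

spec = np.abs(np.fft.rfft(y_if)) / (2 * n)
f = np.fft.rfftfreq(2 * n, 1.0 / fs2)
```

    Cascading several such by-2 stages gives the cheap upsampling chain; the final multiply is the digital part of the upconversion, with the IF-to-RF step still left to analog hardware.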

  3. Hi,

    it sounds very unlikely. It does happen that boards fail (which is very rarely related to the FPGA itself, but more often to unreliable USB cables, connectors, failing voltage regulators, PCB microcracks and, did I mention, unreliable USB cables), but usually the problem is somewhere else.

    The FPGA GPIO circuitry is very robust. Unless there is an external power source involved outside the bank voltage range, I doubt you'd manage to damage it even if you tried.

    I guess we all know those panic moments, like "oh no I've bricked / toasted the board" and then you have a coffee, reboot the PC and everything is well again.

  4. Hi,

    for your toy example, you can safely enter

    set_property CLOCK_DEDICATED_ROUTE FALSE [get_nets {btn_IBUF[1]}]

    into the TCL command line and the warning should go away (this should cause no issues at low speeds, say the low-MHz range). Note the curly braces around the net name, so that TCL doesn't interpret the square brackets as a command.

    Without going into details: you may find that the most "simple" examples become very complex (in the sense of getting to the bottom of what is actually happening on the FPGA) because you're deviating from the standard design pattern (synchronous logic), which has information moving from register to register on a clock edge. That clock needs to arrive on a true clock pin for the warning to go away, which is the case on the CMOD A7 if you use the 12 MHz built-in clock.


  5. I think you'll save yourself much hassle if you don't share the same physical JTAG interface. Simply take a couple of GPIOs, connect them to an FT2232H minimodule or any other hardware interface, and run a fully independent physical interface.

    Otherwise, you're facing the problem that different software packages need access to the PC side of the JTAG box simultaneously, and this usually does not work for the very simple reason that only one connection can be open at a time on the PC side.

    I have successfully integrated my own logic into JTAG as it's ~5x faster than UART (see the BSCANE2 component and the USER1..USER4 registers), but that works best when I run my own bitstream uploader from the same software package, so no other JTAG tools are needed at the same time. Putting the RISC core behind a USERx instruction would also require changes to e.g. the debugger code, because it is no longer a plain hardware JTAG chain (if someone has done this already, please check their documentation).


  6. Hi,

    please search the forum, it's a fairly common and frequently resolved problem. Use a different cable, use a different computer (there are a lot of bad-quality USB cables on the market; there are many unreliable PC ports, e.g. because of bad polyfuses that have tripped once and never fully recovered).

    You may have a bad module, and there is a newer hardware revision with improved decoupling caps, but most likely the easiest way ahead for you is to get a quality cable and rule out issues with the USB port on the PC side.

  7. Hi,

    generally, the problem is that a 0 ohm output theoretically has an infinite power delivery capability, which is physically impossible. It may work correctly with some loads but not with others, and the transition region is ill-defined (e.g. peaks get clipped).

    A 50 ohm output has controlled behavior for all (passive) loads.

    From an RF point of view: above some frequency, the signal changes so rapidly that a reflection coming back from the far end of the (typically 50 ohm) cable arrives noticeably delayed relative to the generator output. The interaction of the two causes unwanted RF effects such as an uneven frequency response. With a 50 ohm cable and a 50 ohm source impedance, the reflection is absorbed once it reaches the source and the frequency response stays flat. This works even if the 50 ohm cable is not terminated at the far end (but only at that end!).
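    The same argument in one formula: the voltage reflection coefficient gamma = (Z_load - Z0) / (Z_load + Z0) is what matching drives to zero. A minimal sketch (Python, Z0 = 50 ohms assumed):

```python
def gamma(z_load, z0=50.0):
    """Voltage reflection coefficient at an impedance step."""
    return (z_load - z0) / (z_load + z0)

print(gamma(50.0))    # 0.0  -> matched: the returning wave is fully absorbed
print(gamma(1e12))    # ~1.0 -> open end: full reflection (harmless if the
                      #         *source* is matched and absorbs it)
print(gamma(0.0))     # -1.0 -> short: full inverted reflection
```

    With a matched source, whatever bounces off an open or badly terminated far end dies at the generator instead of bouncing back and forth.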

  8. One thought: you can configure 7-series 3.3 V inputs for 1.8 V input.

    There are hacks like using a series 10 kOhm resistor. It needs to drop 1.5 V, so it'll sink 150 microamps into the ESD diode of the 1.8 V device at the other end. For proto-style development (and I guess that's what most of the one-size-fits-all boards are all about) this is often acceptable.

  9. Check the Trenz TE0725 family (remove one 0 ohm resistor for the bank voltage, solder a jumper wire to the large capacitor of the on-board regulator => voilà, 1.8 V, even mixed with 3.3 V on the other bank if you want).

    The iCE40-HX8K EVB (Lattice) is another board that can be rigged up for different VIO.

  10. 12 hours ago, JColvin said:

    You could wire individual signals from a VGA source to the XADC connector (taking care that the HSYNC and VSYNC signal voltages do not exceed the 1V limit on the XADC connector) and then use some fancy logic from the XADC to properly receive and interpret the signals


    maybe I'm missing something here, but even a plain 640x480x60 VGA signal has a pixel clock of 25.175 MHz, against the XADC maximum sample rate (2 channels!) of 1 MSPS?

  11. I can see the reasoning behind 64 samples and matching the sampling frequency exactly ("cyclostationarity", you get an exact line spectrum).

    A true analog PLL (with a VCO) may be overkill, though. You can simulate it purely in the digital domain as a variation of my counter scheme that adjusts the division ratio on the fly.

    The main reason I'm saying "overkill" is that you can tolerate a pretty high error in your sample timing thanks to the low frequency. For example, let's take the 10th harmonic of 50 Hz (500 Hz, 2 ms cycle time). If I derive the ADC clock by counting down from a 100 MHz base clock, I have a timing granularity of 10 ns. Relative to 500 Hz / 2 ms, this is an error of 0.002 degrees, which isn't very much.

    The bigger problem is probably locking in to the grid frequency to remove the 1.6 millihertz offset from my earlier post. But as said, that can be done with a digital PLL, strictly in the logic fabric (phase detector, loop filter, basic PLL theory).
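    For reference, the timing-error arithmetic above, spelled out with the same numbers:

```python
f_clk = 100e6                    # base clock the ADC clock is derived from
t_step = 1.0 / f_clk             # 10 ns worst-case timing error
f_sig = 500.0                    # 10th harmonic of 50 Hz (2 ms cycle)
phase_err_deg = t_step * f_sig * 360.0   # ~0.002 degrees -- negligible
```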

  12. Hi,


    that is V = sqrt(P*R). For 5 watts and (assuming!) 50 ohms, the voltage at your dummy load will be about 16 V RMS or, multiplied by sqrt(2), about ±22.4 V peak.

    Consider using a 1:10 probe and check against the 22.4 V with some margin. This calculation assumes a largely sine-shaped waveform, which seems a safe bet if you're FCC compliant with regard to harmonics.

    If you want to use a 50 ohm input, you may consider a series resistor. For example, 1 kOhm into 50 ohms is (approximately) a 1:20 voltage division. A 1/4 W rating would be at the limit (16^2 / 1000 is around 1/4). Probing your load with this will cause a minimal variation in load impedance (mismatch) - this is where a power splitter would come in for "serious" RF engineering - but most likely this can be swept under the rug. Or use 10k.

    It's funny how time flies - nowadays, buy a premium mobile phone and chances are good there's a modulated DC-DC converter inside with a bandwidth of many times said "RF" frequency...
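    The arithmetic from this post, collected in one checkable sketch (Python; the 1 kOhm-into-50-ohm divider is the example from above):

```python
import math

P_W, R_ohm = 5.0, 50.0
v_rms = math.sqrt(P_W * R_ohm)       # ~15.8 V RMS at the 50 ohm dummy load
v_pk = v_rms * math.sqrt(2.0)        # ~22.4 V peak, assuming a sine

# 1 kOhm series resistor into a 50 ohm input: roughly 1:21 voltage division
div_ratio = (1000.0 + 50.0) / 50.0
p_series_W = v_rms**2 / 1000.0       # ~0.25 W: a 1/4 W part is at the limit
```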


  13. You may be able to make it work with e.g. a 10 kOhm series resistor that limits the current flowing into the FPGA input through the FPGA's clamping diode to its 3.3 V supply rail.

    But please validate this for yourself e.g. with a google search.

  14. Hi,

    11 hours ago, zygot said:

    The phase accumulator approach is pretty easy and flexible where appropriate. Have you looked at this ?

    and here is an option to transform sawtooth-to-sine.


    It's spline-based, which is conceptually no different from linear interpolation but uses third-order polynomials instead of first-order lines. The code is heavily optimized; I think you can run it at 200+ MHz (can't remember, might even be 300) on a speed-grade-1 Artix.
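    To illustrate the spline-vs-linear point (my own toy numpy sketch, not the code linked above): fit a cubic per table segment instead of drawing a straight line, and the worst-case error for the same table size drops by several orders of magnitude.

```python
import numpy as np

nseg = 64                                   # table segments over one period
edges = np.linspace(0.0, 2 * np.pi, nseg + 1)
x = np.linspace(0.0, 2 * np.pi, 10000)
y = np.zeros_like(x)

for i in range(nseg):
    sel = (x >= edges[i]) & (x <= edges[i + 1])
    xs = np.linspace(edges[i], edges[i + 1], 8)   # fit points per segment
    coeff = np.polyfit(xs, np.sin(xs), 3)         # one cubic per segment
    y[sel] = np.polyval(coeff, x[sel])

err_cubic = np.max(np.abs(y - np.sin(x)))

# compare: plain linear interpolation between the same 64+1 table entries
y_lin = np.interp(x, edges, np.sin(edges))
err_lin = np.max(np.abs(y_lin - np.sin(x)))
```

    In hardware, the per-segment coefficients live in a small table and the polynomial is evaluated Horner-style with a few multipliers, which is why this maps well onto DSP slices.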

  15. 20 hours ago, tuskiomi said:

    Good to see that I'm starting with the trivial implementation.

    Yeah, but it's not trivial in the sense of being the first point on a straight line that evolves somewhere. More like starting F1 racing-car design with a novel horse carriage - it'll take some clever talking to get your point across later that this isn't just solution-looking-for-a-problem thinking.

    If you're running into resource issues already on a toy design, it seems like a dead giveaway that the approach will not scale to real-world problems.

  16. 49 minutes ago, tuskiomi said:

    which would allow a trained network to be executed in just a small number of clock cycles, instead of multiple hundreds.

    I think I see where this is heading ... there are essentially two design corners: a completely systolic, one-cycle-per-operation architecture, and a CPU-based architecture where everything is funneled through a single ALU.

    Both extremes are fairly trivial to implement: the former goes through the roof in resource usage (and you will have a hard time feeding data in fast enough to actually use it); the latter (= run C code) has its lunch eaten by any "hard" CPU core on the same technology node.

    The design space in-between is where it gets interesting for FPGA, but also much more challenging.

  17.  Hi,

    >> I finished my first FPGA class this last semester, ... looking to design my first CPUs ... I found myself commonly exceeding the number of cells available during experimentation.

    let me offer an opinion, and I'll keep it short: I suspect you're heading into a dead end, and the lesson to be learned eventually - after spending too much money and too much time - is that FPGA technology doesn't really scale this way. For example, realizing that you need $1000 hardware to do a job that a Raspberry Pi could do better for $50.

    If you intend to design for ASIC and plan to use the FPGA for emulation, forget I said anything. But after finishing "the first FPGA class", this seems unlikely, as it's a highly specialized niche business. Otherwise, a reminder: focusing on the "interesting problem" in research is pretty much a guarantee that you're missing the dull, obvious practical problem that shouldn't exist in the first place but kills your idea (whatever that is) in reality. The questions I'd ask myself are "why am I running out of cells" and maybe "does anybody actually USE a custom-designed FPGA soft-core CPU outside a research-driven environment, and why not".

    As a rule of thumb, if a design grows accidentally large, you can be pretty sure it will also turn out to be slow. A typical student mistake would be to design around variable bit shifts, because that's how you write efficient code on a simple integer CPU and the HDL code looks harmless. But on an FPGA it's just the opposite: a variable shift becomes an any-bit-to-any-bit matrix, which is both large and slow due to its depth of logic levels. E.g. a single 32-bit barrel shifter will be a significant contributor to the logic count of a simple CPU like the J1B. Using a larger FPGA may hide the size issue but won't recover the lost speed; it's a fundamental design issue that needs to be fixed elsewhere.

    And FYI: there are clock management tiles (MMCM/PLL) that will give you almost any frequency you choose, so the input clock frequency is generally not that important. Maybe you have reasons for your "requirement", but not being aware of the MMCM/PLL cells is very common at "first FPGA class" level, so I'm pointing it out.

  18. ... and for the sake of completeness: one way to calculate the envelope.

    This uses a concept called the "Hilbert transform" or "analytic signal". It's probably too far out for an entry-level lecture, but at least you get pretty plots out of it 🙂


    % Envelope via Hilbert transform
    % two-tone setup (same as in the spectrum script)
    nEval = 10000; tEval_s = 1;
    t_s = linspace(0, tEval_s, nEval+1)(1:end-1);
    f_Hz = (-nEval/2:(nEval/2-1))/tEval_s;
    sig1 = cos(150*t_s*2*pi); sig2 = cos(155*t_s*2*pi);
    sig4 = sig1+sig2;
    % factor 2 because half of the signal magnitude is held (redundantly) in 
    % negative frequencies, which will soon go overboard
    sig4 = 2*fftshift(fft(sig4)); % to frequency domain
    sig4(f_Hz < 0) = 0; % discard negative frequencies
    sig4 = ifft(fftshift(sig4)); % back to time domain
    figure(); hold on;
    plot(t_s, sig1+sig2, 'k');
    plot(t_s, abs(sig4), 'b');
    legend('original 2-tone sig', 'envelope');


    Well, it's easy to get lost in the math and in specialist "shortcuts" like the idea of amplitude modulation (the concept is useful to an electrical engineer who sees through to the bottom, but may leave the audience utterly confused if not). Think of a squirrel that only knows how to hop from one tree to the next. It hops and hops and eventually ends up where it started and exclaims "wow, what a huge forest". But look it up on a map and the 'forest' becomes a tiny little park. The AM concept is like one jump between two specific trees. But I digress - yes, you can formally write it as AM with 100 % modulation index, but I don't see how this would help an intuitive understanding here.

    The "envelope" concept is not completely straightforward if you want to close all the gaps - you can show the blur on a scope screen, but this visual approach will leave many possible questions unanswered ("how can the envelope be non-zero at zero crossings of the signal?"). I think this requires complex numbers. My "window" was meant to cheat around that by looking at a short piece of signal (100 ms is already on the long side; make it so short that the "envelope" isn't strongly visible to the eye over the window length) and looking at how large and small magnitude values are distributed - a histogram over abs(sig). This histogram will change periodically with the beat tone, and it is ultimately what the ear responds to.

    You might have a look at the open-source "Octave" - both for this problem and in general if you're teaching. Python can probably do everything better, but I tend to see a trend that it's mostly used by people showing what they "could" do, while those who actually "do for a living" stick to Octave (or Matlab) to get the work done. It was obsolete decades ago, so there's a pretty good chance that my scripts will still work next month when I need them, e.g. no one would break compatibility in the name of security etc. I'm not saying it's perfect, but it's been "good enough" for me since, I don't know, 1999 or so.

    Below is a quick example (some details, e.g. the "minus one", may look weird, but they enable an exact result).

    First, the two-tone signal: Run the script yourself and you can zoom in as you like etc.


    Then, its spectrum. The magnitude 1/2 comes from the expansion cos(x) = (exp(ix) + exp(-ix))/2 (Euler's formula). Each line in the FFT corresponds to one exp() term, so there is always a pair of positive / negative frequency components. The advantage of Octave over test instruments is that you can get exact results (within about 15 orders of magnitude in this example), which are easier to interpret. You can see lines at 150 and 155 Hz and nothing at all at 5 Hz.


    Then, I square the signal, modelling the nonlinearity.


    And here is the result - nothing at 150 / 155 Hz (for simplicity I used an ideal x^2 without any linear term at all). But there are double-frequency products and, most importantly, the nonlinear product at +/- 5 Hz.



    close all;
    nEval = 10000; % number of points to compute
    tEval_s = 1; % length of signal in seconds
    % calculate time base
    t_s = linspace(0, tEval_s, nEval+1)(1:end-1); % avoid duplicate start point (exactly full # cycles)
    phi = t_s*2*pi;
    % the two tones at 150 Hz and 155 Hz
    sig1 = cos(150*phi);
    sig2 = cos(155*phi);
    % calculate frequency base for FFT
    f_Hz = (-nEval/2:(nEval/2-1))/tEval_s;
    % two-tone signal
    figure(); hold on;
    plot(t_s, sig1+sig2, 'k+-');
    xlabel('time/s'); ylabel('s(t)');
    title('two-tone signal');
    figure(); hold on;
    stem(f_Hz, fftshift(abs(fft(sig1+sig2)/nEval)));
    title('spectrum two-tone signal');
    xlabel('frequency/Hz'); ylabel('magnitude');
    % ideal square nonlinearity
    sig3 = (sig1 + sig2).^2;
    figure(); hold on;
    plot(t_s, sig3, 'k+-');
    xlabel('time/s'); ylabel('s(t)');
    title('two-tone signal squared');
    figure(); hold on;
    stem(f_Hz, fftshift(abs(fft(sig3)/nEval)))
    title('spectrum two-tone signal squared');
    xlabel('frequency/Hz'); ylabel('magnitude');


  20. yes, it's such a simple experiment but pretty difficult to explain when you really dig into the details...

    The 5 Hz is a periodic shift in perception, not a tone itself. For the ear, the amplitude statistics of the signal vary with time, and this is what we hear.

    If you define a histogram of the signal magnitude in some short window, e.g. 100 ms, and slide it along the time axis, you'd see that it changes periodically at 5 Hz. This (again, possibly more "story" than rigorous science, even though I guess it comes close) modulates the bias point of our nonlinear detector, so to speak.
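    The sliding-window statistics can be simulated directly. As a crude single-number stand-in for the full histogram, window RMS already shows the effect (my own illustrative numbers: 150 + 155 Hz tones, 1 kHz sample rate, 50 ms windows):

```python
import numpy as np

fs = 1000.0
t = np.arange(int(2 * fs)) / fs                 # 2 s of signal
x = np.cos(2 * np.pi * 150 * t) + np.cos(2 * np.pi * 155 * t)

win = int(0.05 * fs)                            # 50 ms sliding window
rms = np.array([np.sqrt(np.mean(x[i:i + win] ** 2))
                for i in range(0, len(x) - win, 10)])

# The per-window statistics swing with the 5 Hz beat, even though
# no 5 Hz spectral component exists in the signal itself.
print(rms.max() / rms.min())    # >> 1
```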


  21. Hi,

    you're missing one detail (which is BTW also a critical "piece in the puzzle" in the classroom):

    Our ear is only able to perceive the beat because it is nonlinear. If it were a perfectly linear receiver (and an oscilloscope with FFT is pretty close to that), there would be nothing at 5 Hz, because the "beat note" does not physically exist (yet). It is created in the ear.

    Most likely you will be able to demonstrate this by putting tone 1 on a first small loudspeaker and tone 2 on a second loudspeaker.

    Hold them both to one ear => you should hear the beat note

    Hold each to its own ear => you should not hear the beat note.

    The technical term is "intermodulation" (each tone has a positive and a negative frequency component, and "minus 150 Hz" will modulate "plus 155 Hz" down to "plus 5 Hz" at a nonlinearity).

    If you want to show it on electric signals, grab a 1N4148 diode with 50 ohms in series and connect it from the (50 ohms) generator output to ground. This is an electrical nonlinearity similar to what happens in the ear that makes the 5 Hz signal visible when you probe it with an FFT-mode scope.

    Classroom hint: the ear spans a dynamic range of 12 orders of magnitude (0 dB hearing threshold to 120 dB damage level is a factor of 1 000 000 000 000 in terms of power, or still 1 000 000 in terms of sound pressure / displacement). It is intuitive that some kind of nonlinearity, as mentioned above, needs to be involved to compress this so it can be handled by "biological means". I guess this may not be strictly academically correct as an argument, but it's useful for telling a story.

    To prove it mathematically without complex numbers, I would use y = x^2 as a prototype nonlinearity and x = cos a + cos b as the signal, using cos(u) cos(v) = cos(u-v)/2 + cos(u+v)/2 from a table of trigonometric identities.
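    Spelling that out (only the quoted product-to-sum identity plus cos^2(u) = (1 + cos 2u)/2 is needed):

```latex
(\cos a + \cos b)^2
  = \cos^2 a + 2\cos a \cos b + \cos^2 b
  = 1 + \tfrac{1}{2}\cos 2a + \tfrac{1}{2}\cos 2b + \cos(a-b) + \cos(a+b)
```

    With a = 2*pi*150*t and b = 2*pi*155*t, the cos(a-b) term is exactly the 5 Hz beat note; the remaining terms sit at DC, 300, 305 and 310 Hz, matching the squared-signal spectrum shown earlier in the thread.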