tip.can19

What is the difference between clock latency and propagation delay?

Question

I believe clock latency is the total time it takes for the clock to travel from the clock source to an endpoint. PFA

Whereas propagation delay would simply be the delay between two edges, as in the input/output example below. PFA

So, in other words, does this mean that the propagation delay between clock signals is a kind of clock skew, i.e. a measure of the difference in latency between two points within one clock period?

Thanks

Tip

latency.png

propagation.png

5 answers to this question


I'm not so sure that there is a universally precise answer to your question. If you want to analyze the timing of a circuit, you need to define the terms that you use to do it. In general, propagation delay, at least to me, has meant the delay incurred due to combinatorial logic gates, buffers, wires, etc. This delay is specific to any two points in the schematic. It is also temperature dependent. Propagation delay is important not only for knowing when a switching transition will occur relative to switching transitions in related logic, but also for how long a signal might remain in one state before switching to the other.

Obviously, clock signals have delay across a design as well. Clocks don't go through logic gates, but they certainly can go through buffers and wires. Whatever terms you choose to use, any timing analysis of a clocked circuit has to account for the relative time delay of the clock edges everywhere that clock is used in the circuit, as well as the delay of the combinatorial logic relative to an edge (or edges). If the rising edge of a clock is used throughout a circuit, skew is generally used to describe the delay of that edge between any two points in the circuit. In a large system with a lot of clock buffers and multiple circuit boards, minimizing skew can be a real headache.

Usually, clocked logic involves combinatorial logic that is sampled by a clock edge. All of the delays are important to analyze from a timing perspective. If you have a very wide clocked signal, say 256 bits, there will be a spread in transition times among those 256 bits from clock edge to clock edge. These delays can increase or decrease along a circuit path, depending on the logic or just the propagation delay down a wire connection. When a delay exceeds the clock period, you've got trouble. A good rule of thumb for clocked logic is to keep the combinatorial logic between clock edges simple, to minimize the delay through it.
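The single-cycle setup check sketched above can be put into numbers. This is an illustrative Python calculation; the delay values are made up for the example and do not come from any device datasheet:

```python
# Hypothetical single-cycle setup check between two flip-flops.
# All delay numbers are illustrative, not from any datasheet.

def setup_slack(clock_period_ns, clk_to_q_ns, logic_delay_ns,
                setup_time_ns, skew_ns):
    """Slack = time available minus time required.

    skew_ns is (capture clock arrival - launch clock arrival);
    positive skew here gives the data path extra time.
    """
    required = clk_to_q_ns + logic_delay_ns + setup_time_ns
    available = clock_period_ns + skew_ns
    return available - required

# 100 MHz clock, a modest combinatorial path, a little favourable skew:
slack = setup_slack(clock_period_ns=10.0, clk_to_q_ns=0.5,
                    logic_delay_ns=7.0, setup_time_ns=0.4, skew_ns=0.2)
print(f"setup slack: {slack:.2f} ns")  # positive -> timing met
```

Negative slack is the "delay exceeds the clock period" trouble mentioned above: the same function with a 10 ns logic delay on a 10 ns clock returns a negative number.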
Generally, I think of latency in a clocked signal as having to do with when data is valid: how many levels of clocking the signal goes through between any two points in a circuit. An example is a RAM that might have one or more levels of clocking on the address and incoming data, as well as one or more levels of clocking on the output data. In order to know when data is valid from address to data output, you need to know the latency, which hopefully is fixed. For pipelined designs you often have to keep track of the pipe depth to know where the data is in any portion of the circuit.
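As a toy model of that latency point: each level of clocking adds one cycle between input and output, so a 3-deep pipe outputs the value presented 3 clock edges earlier. A minimal Python sketch (the `Pipeline` class is hypothetical, purely for illustration):

```python
# Toy model: each clocked stage adds one cycle of latency.
from collections import deque

class Pipeline:
    def __init__(self, depth, reset_value=0):
        # One register per pipeline stage, all starting at reset_value.
        self.regs = deque([reset_value] * depth)

    def clock(self, data_in):
        """One clock edge: the last stage drives the output while
        every stage loads from the one before it."""
        out = self.regs.pop()          # value leaving the final stage
        self.regs.appendleft(data_in)  # new value enters the first stage
        return out

pipe = Pipeline(depth=3)
outputs = [pipe.clock(x) for x in [10, 20, 30, 40, 50]]
print(outputs)  # [0, 0, 0, 10, 20] -- data appears 3 edges later
```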

The clock tree that you show is generally not what you will find in programmable logic devices. These devices route clock signals differently than logic signals. Usually there are a limited number of clock lines that can reach logic anywhere in a device with very low delay. Some devices have clock regions that limit a clock's reach in order to control delay. FPGA devices generally use LUTs instead of discrete logic gates, which involves a different analysis relative to, say, using MSI logic gates.

Anyone using a particular FPGA device should read the vendor reference manuals for that device to understand the clocking, logic, and IO resources. You can't design with MSI and LSI gates effectively just using the logic tables, nor can you do FPGA development effectively without understanding the basic structures involved. At least with MSI and LSI gates you have complete control over where a particular portion of your circuit will reside and how the interconnections are made. In a very large system this can get very hard to manage. In an FPGA your control over where portions of your logic reside is much less. Here's where it's important to have some level of understanding of how your vendor's synthesis, timing, and place-and-route tools work. In very large devices, where resource utilization percentages are very high and clock rates are high, getting consistent, repeatable results across an environmental temperature range can be a point of extreme frustration. If your design methodology (source HDL code) is fighting the tools' preferences, then your miseries will be compounded accordingly.

Sorry if this was too long-winded; I stepped into a puddle that was deeper than first glance... 

Edited by zygot


I think the term propagation delay is typically used for a single logic block.
Clock latency, as in your drawing, specifically means the end-to-end delay of the clock tree.

For Xilinx, the 7 Series clocking user guide, https://www.xilinx.com/support/documentation/user_guides/ug472_7Series_Clocking.pdf, does not contain the word "latency".

The reality might actually be more complex. See "Clock Network Deskew" on page 72: nowadays clock rates are so high that we may not be able to simply define the reference phase at the input of the clock distribution network and let the clock "latency" eat away the timing budget:

In many cases, designers do not want to incur the delay on a clock network in their I/O timing budget therefore they use a MMCM/PLL to compensate for the clock network delay.

So we use a dummy delay to advance the PLL phase by the nominal "latency", thereby moving the reference point to the end of the clock distribution network.
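The phase-advance idea reduces to simple arithmetic: if the PLL advances the clock by the same nominal amount the distribution network delays it, the edge at the flip-flops lines up with the edge at the pin. The delay values below are invented for illustration:

```python
# Illustrative arithmetic only; delays are made up, not device numbers.
clock_network_delay_ns = 3.2   # delay through the buffers/tree to the flip-flops
pll_phase_advance_ns   = 3.2   # MMCM/PLL advances the clock by the same amount

# Clock edge arrival at a flip-flop, relative to the edge at the device pin:
effective_arrival_ns = clock_network_delay_ns - pll_phase_advance_ns
print(effective_arrival_ns)  # 0.0 -> network delay no longer eats the I/O budget
```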


Despite the fact that I may have left many readers of my last reply in a semi-comatose state, it occurred to me that I forgot to address the second diagram in the original post. That diagram refers to a 50% threshold for buffers and logic. Depending on the logic family, the decision point at which buffers and logic determine whether an input is a logic high or a logic low may not be halfway between the minimum and maximum levels. In fact, most MSI and LSI families define ranges for both logic high and logic low, with a third range in the middle where the state of the input is undetermined. It's quite possible for a gate to see an input that is lower than the defined logic high range and higher than the defined logic low range. In such cases, what is the input logic state? ... exactly. From the previous discussion of timing analysis you can see how things can get complicated quickly as you widen your scope of analysis.
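The three input ranges described above (guaranteed low, undetermined middle, guaranteed high) can be sketched as a small classifier. The threshold voltages here are hypothetical; real logic families each specify their own V_IL/V_IH limits:

```python
# Hypothetical input thresholds; real families (TTL, LVCMOS, ...) differ.
V_IL_MAX = 0.8   # inputs at or below this are a guaranteed logic low
V_IH_MIN = 2.0   # inputs at or above this are a guaranteed logic high

def input_state(volts):
    """Classify an input voltage; between the thresholds it is undefined."""
    if volts <= V_IL_MAX:
        return "low"
    if volts >= V_IH_MIN:
        return "high"
    return "undefined"   # the gate may read this as either state

for v in (0.4, 1.3, 2.5):
    print(v, input_state(v))
```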

If nothing else, perhaps I've proven that questions such as the one asked by tip cannot be fully addressed in this kind of forum. Trust me when I say that I've only scratched the surface.

Edited by zygot


Thank you very much @zygot!

Your explanation is the best! I got a very good understanding of this concept from your deep explanation. :)

Thanks so much again, I really appreciate your help!

Kind Regards

Tip

 

Edited by tip.can19
Missed tag
