This is some nice debugging work; you have hit the real world versus the ideal protocol. The protocol is described on Wikipedia: https://en.wikipedia.org/wiki/Dynamic_Host_Configuration_Protocol

In short, the network stack should ignore the second offer, but the internal state machine follows the protocol, and in response to a request the state machine expects an ACK. DHCP is an open UDP protocol: not only can UDP packets be dropped, but the server can offer the same IP address to multiple machines up until it ACKs a request, so at any time during discover, offer, request, the server has the option to not ACK the request. In fact, when a client does a discovery, many servers can respond, offering the client many IP addresses. It is up to the client to pick one and request it; only if the server ACKs that request does the client get the address. So the state machine needs to walk the states in order.

The code was written to abort when it sees an out-of-order sequence, because the assumption was: if anything goes wrong, the sequence is over and a new discovery should be started. If I remember correctly, I try a few discoveries before giving up. Of course, if the sequence is repeatedly disrupted every time, the client will eventually give up no matter how many retries are attempted, which is what I am guessing is going on here.

The problem with your fix is this: the client must have sent a request if it is looking for an ACK, and what comes in is an offer. An offer is NOT an ACK, and the server has every right not to honor the IP address in an offer (which is why an offer cannot act as an ACK); the DHCP server is only required to honor the IP address if it ACKs the client's request. So an ACK is the only valid response at this point in the protocol, and it is confusing to the client to see an offer come in (directed to the client's MAC) at this time.

You are also dealing with a ton of intermediaries, the WiFi AP being one.
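That strict walk through the states can be sketched like this. This is a minimal illustration in Python, not the actual client code; the state and message names are mine, but the rule is the one described above: anything other than the expected next message aborts the sequence and forces a fresh discovery.

```python
from enum import Enum, auto

class State(Enum):
    INIT = auto()        # nothing sent yet
    SELECTING = auto()   # DISCOVER sent, waiting for an OFFER
    REQUESTING = auto()  # REQUEST sent, waiting for an ACK
    BOUND = auto()       # server ACKed; the address is ours

class StrictDhcpClient:
    """Sketch of the abort-on-anything-unexpected design."""

    def __init__(self):
        self.state = State.INIT

    def send_discover(self):
        self.state = State.SELECTING

    def handle(self, msg):
        if self.state == State.SELECTING and msg == "OFFER":
            # Pick this offer and request it (REQUEST send elided).
            self.state = State.REQUESTING
        elif self.state == State.REQUESTING and msg == "ACK":
            self.state = State.BOUND
        else:
            # Out-of-order message, e.g. a second OFFER while waiting
            # for an ACK: assume the sequence is dead and start over.
            self.state = State.INIT
        return self.state
```

With this design, a duplicated OFFER arriving while the client waits for an ACK drops the client back to INIT, and if the duplication happens on every attempt, the retries run out and the client gives up, which matches the behavior being debugged.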
The 802.11 WiFi protocol also has retries in it, and sometimes at the WiFi level a packet will get duplicated, which can cause issues. The WiFi layer is supposed to flush duplicate packets so they never reach the Ethernet, but I have seen duplicates make it through the WiFi onto the Ethernet. When I originally wrote the DHCP client, I was on a wired LAN and did not see these kinds of duplicates. I suspect your Linux AP assumes it can put duplicates on the Ethernet, and that is why you are getting all of the packets duplicated, in both directions. That is a really poor AP, if that is what it is doing.

Also be aware that the order in which you sniff packets is not necessarily the order in which they are processed. Since the client is looking for an ACK, the client must have sent the request. Internally, the client saw the first offer and sent the request; the second offer was probably sitting in the socket buffer waiting to be processed while the request was being sent, and was processed after the client sent the request. So the client-side processing order was discover, offer, request, offer.

But you point out a good real-world issue: we might get duplicate packets out of sequence. This can potentially happen even on a wired LAN, although it is extremely unlikely there (it requires two routes, since this is UDP and there are no retries). My design choice at the time was: oh, something went wrong, start over. Not a bad design choice. However, the real world is saying a better design choice would have been: ignore the unexpected packet, keep waiting, and if we time out, try again.

I am going to be revisiting the network stack in the next year or so, and I will put this on the list of things to address. For now, I would fix the AP! Nice debugging.
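For comparison, here is a sketch of that alternative design. Again this is purely illustrative, not the real client: unexpected packets (such as a duplicated OFFER while waiting for an ACK) are silently ignored, and only a timeout restarts discovery, with a hypothetical retry cap of 3.

```python
class TolerantDhcpClient:
    """Sketch of the ignore-and-keep-waiting design: an unexpected
    packet is dropped, and only a timeout triggers a retry."""

    MAX_RETRIES = 3  # illustrative limit, not from the real client

    def __init__(self):
        self.state = "INIT"
        self.retries = 0

    def send_discover(self):
        self.state = "SELECTING"  # DISCOVER sent, waiting for OFFER

    def handle(self, msg):
        if self.state == "SELECTING" and msg == "OFFER":
            self.state = "REQUESTING"  # pick the offer, send REQUEST
        elif self.state == "REQUESTING" and msg == "ACK":
            self.state = "BOUND"       # server honored the request
        # Anything else (e.g. a duplicate OFFER while waiting for an
        # ACK) is ignored: it is not the message we are waiting for.
        return self.state

    def on_timeout(self):
        # The expected reply never arrived: retry discovery a few
        # times before giving up for good.
        self.retries += 1
        if self.retries > self.MAX_RETRIES:
            self.state = "FAILED"
        else:
            self.send_discover()
        return self.state
```

With this version, the duplicated offer from the AP is harmless: the client stays in REQUESTING, and the ACK that follows still binds the lease.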