Bugzilla – Full Text Bug Listing
| Summary: | Internal EDCA queue (AC_VI) has a consistent drawback wrt. backoff compared to external queue | | |
|---|---|---|---|
| Product: | ns-3 | Reporter: | levente.meszaros |
| Component: | wifi | Assignee: | sebastien.deronne |
| Status: | RESOLVED INVALID | | |
| Severity: | minor | CC: | ns-bugs |
| Priority: | P5 | | |
| Version: | ns-3.25 | | |
| Hardware: | All | | |
| OS: | All | | |
| Attachments: | Example simulation that prints the difference on stdout | | |
Is this different from bug 2222, which you recently created?

Yes, it's different.

How can you be sure that the same backoff is occurring in those two cases? Also, please point to the requirement in the standard that the implementation appears to violate.

"The random streams are assigned so that in both simulations the backoff period is the same for the video packets." You can see this in the attached source file; IIRC I also checked it via printf. As for the standard, I can't give you an exact pointer. I just think that the time difference and the algorithmic difference are wrong, in general.

I confirm there is a difference, which is due to the NAV differing from the ACK timeout. We should definitely check the standard. Note that with the latest ns-3-dev I get a difference of a SIFS compared to the value you provide (31 us instead of 15 us).

Note that the difference of 16 microseconds between your result and mine is because here the ACK is expected to be transmitted at 24 Mbit/s, while in your case it is expected to be transmitted at 6 Mbit/s (the 16 us therefore does not represent the SIFS here). So this comes down to a question: should the NAV duration be computed based on the lowest BSS Basic Rate, or based on the rate expected to be chosen for the ACK? The latter is the currently implemented solution. Then the other questions are: is the NAV computed too short? Is the ACK timeout computed too long? Or are they expected to be different?

I checked a bit, and I can see that in pcap traces from the field we get the same duration/id value in the data frame, so the NAV is correctly computed. In addition, we already discussed timeout values at length, and we agreed on how they are set. I therefore think it is acceptable to get a slight difference here, since the NAV is computed based on the effective duration, while timeouts are computed based on worst-case durations.
(In reply to sebastien.deronne from comment #7)
> I checked a bit, and I can see that in pcap traces on the field we get the same duration/id value in the data frame, so this means the NAV is correctly computed.
>
> In addition, we already discussed a lot about timeout values, and we agreed how those are set.
>
> I thus think it is acceptable to get a slight difference here, since NAV is computed based on effective duration, while timeouts are computed based on worst-case durations.

So, to be clear, the idea here is to reject this bug. Any other opinion about that?

I don't have anything to add to the discussion for now. Thanks anyway!

Not a bug (see discussions)
Created attachment 2193 [details] Example simulation that prints the difference on stdout

I attached an example that demonstrates this behavior. The example runs two separate simulations; the important part is the different timing written to stdout.

The first simulation contains 1 EDCA client that sends 2 UDP packets to a server. The server is really far away; in fact it is not at all important in the example. The client sends exactly 2 packets to the server, without any retries. The first packet is a voice packet, the second a video packet. There are no ACKs from the server. After the ACK timeout expires for the first packet, the backoff starts. The client sends the second packet when the backoff expires. The transmission start times are written to stdout.

The second simulation has 2 EDCA clients that send UDP packets to a server. It is the same as the first simulation, except that the video packet is sent by the second client. The two clients are at the very same position. Again, the transmission start times are written to stdout.

The random streams are assigned so that in both simulations the backoff period is the same for the video packets. Here are my results for reference:

    $ ./waf --run examples/wireless/wifi-backoff
    Separate clients: 0
    START = +1000000000000.0ps
    START = +1000343000000.0ps
    Separate clients: 1
    START = +1000000000000.0ps
    START = +1000328000000.0ps

As you can see, in the second simulation it takes 15 us less time to send the video packet. This is because the internal EDCA video queue starts its backoff after the ACK timeout expires for the voice packet, while the external EDCA queue starts its backoff after the NAV expires. Note that there is no propagation time between the clients. The 15 us = 9 us + 6 us, where 9 us is the slot time and 6 us is twice the propagation time for 1000 m. I think the standard says that the EDCA queues should act as if they were separate nodes (except for the internal collision).
The current implementation imposes a 15 us drawback on the internal EDCA queues if there is no ACK for the last transmission.