- Middleboxes, such as firewalls, often modify end-to-end Internet communication.
- Understanding how these modifications impact relevant Internet standards is essential for identifying potential security risks.
- We recommend that RFCs provide clear and rigorous guidelines for clients, servers, and intermediaries to mitigate such issues.
Traditionally, networked communications follow a layered approach, where application-layer protocols like Hypertext Transfer Protocol (HTTP) are meant to function end-to-end. These protocols and functions must comply with agreed standards (RFCs) set by the Internet Engineering Task Force (IETF).
In practice, end-to-end communication is often broken by middleboxes—intermediate devices and proxies that enhance security and performance. A firewall, for example, is a middlebox that blocks unauthorized access to an internal network, protecting users from external threats.
Understanding how these and other middleboxes affect communication processing chains, and how their behavior relates to the relevant RFCs, is essential for identifying potential security risks.
My colleagues, Mahmoud Attia and Marc Dacier, and I recently sought to examine how proxies, a common type of middlebox (Figure 1), impact HTTP/1.1 conformance as defined by their relevant RFCs (see paper).

We identified 47 requirement-level rules and tested compliance across twelve popular proxies, including four Content Delivery Networks (CDNs)—key infrastructure components that enhance web performance and security. Our findings revealed three distinct behaviors. Depending on the proxy and the test case, a proxy may:
- Drop non-conforming packets.
- Forward non-conforming packets as-is.
- Modify non-conforming packets into conforming ones.
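These three policies can be sketched as follows. This is a simplified simulation for illustration, not the test harness used in the study; the forbidden byte set follows RFC 9112's rule against bare CR, LF, and NUL in field values.

```python
# Sketch of the three handling policies observed, applied to a single
# raw header field containing a bare CR (forbidden by RFC 9112).
# This simulates proxy behavior; it is not code from the study.

FORBIDDEN = (0x00, 0x0D, 0x0A)  # NUL, CR, LF

def handle(raw_field: bytes, policy: str):
    """Apply one of the three observed proxy policies to a header field."""
    if not any(b in FORBIDDEN for b in raw_field) or policy == "forward":
        return raw_field                     # conforming, or pass-through
    if policy == "drop":
        return None                          # reject the message outright
    if policy == "modify":
        # sanitize: replace forbidden bytes with SP before forwarding
        return bytes(0x20 if b in FORBIDDEN else b for b in raw_field)
    raise ValueError(f"unknown policy: {policy}")

bad = b"X-Test: a\rb"                        # bare CR inside a field value
print(handle(bad, "drop"))      # None
print(handle(bad, "forward"))   # b'X-Test: a\rb'
print(handle(bad, "modify"))    # b'X-Test: a b'
```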

Proxies Exhibit Highly Inconsistent Behavior
Our results indicate that all proxies behaved similarly in only 7 of our 47 test cases. In the remaining 40 test cases, proxies exhibited highly inconsistent behaviors, exposing ambiguities in RFC interpretation (Figure 3).

Moreover, none of the proxies followed a uniform policy for handling non-conforming data. For example, Cloudflare, a leading CDN, modified packets in 17 test cases, forwarded them unchanged in 20 cases, and rejected non-conforming packets in 10 cases.
Additionally, proxies demonstrated inconsistencies between handling client requests and server responses. RFC 9112 mandates rejecting or sanitizing messages containing invalid characters (CR, LF, or NUL). While all proxies rejected such client-to-server messages, five proxies modified server-to-client responses and seven forwarded them unchanged.
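To make the rule concrete: since RFC 9112 uses CRLF as the line terminator, a "bare" CR is a CR byte not followed by LF. A minimal check a recipient could run looks like this (a sketch with our own naming, not code from the study; the message is a contrived example):

```python
# Sketch: locate "bare" CR bytes (a CR not followed by LF) in a raw
# HTTP/1.1 message, one of the invalid-character cases RFC 9112 requires
# recipients to reject or sanitize.

def find_bare_cr(raw: bytes) -> list:
    """Return offsets of CR bytes that are not part of a CRLF pair."""
    return [i for i, b in enumerate(raw)
            if b == 0x0D and raw[i + 1:i + 2] != b"\n"]

msg = b"GET / HTTP/1.1\r\nHost: example.com\r\nX-Bad: a\rb\r\n\r\n"
print(find_bare_cr(msg))  # one offset: the bare CR in X-Bad's field value
```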
RFCs Can Be Improved
Our study identifies two significant issues with HTTP/1.1 RFCs:
- Inadequate terminology
- Lack of completeness
HTTP/1.1 RFCs sometimes specify clear requirements for “clients” and “servers” but leave “proxies” ambiguous.
This flexibility lets developers implement proxies as they see fit, but it also introduces potential vulnerabilities, such as request smuggling.
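To illustrate the smuggling risk, here is a sketch of the classic length ambiguity a lenient intermediary can introduce: a request carrying both Content-Length and Transfer-Encoding, which two hops may interpret differently and thus desynchronize. The header names are real HTTP fields; the request dictionary and function are our own contrived example, not code from the study.

```python
# Sketch: flag the classic request-smuggling ambiguity, where a message
# carries both Content-Length and Transfer-Encoding. If two parsers in a
# chain disagree about which one determines body length, attacker-chosen
# bytes can be "smuggled" into the next request on the connection.

def smuggling_risk(headers: dict) -> bool:
    """True if the header set is ambiguous about message body length."""
    names = {name.lower() for name in headers}
    return "content-length" in names and "transfer-encoding" in names

req = {
    "Host": "example.com",
    "Content-Length": "13",
    "Transfer-Encoding": "chunked",
}
print(smuggling_risk(req))  # True: both length mechanisms are present
```

A strict proxy would reject such a request outright rather than guess which header wins.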
We recommend that RFCs provide clear and rigorous guidelines for clients, servers, and intermediaries to mitigate such issues. Standards should explicitly define how proxies must handle non-conforming data, such as rejecting malformed requests as a default behavior.
Moving forward, we aim to investigate recommendation-level rules in addition to requirement-level rules, analyze newer HTTP versions (HTTP/2 and HTTP/3)—which introduce greater complexity and optimizations—and further explore the impact of RFC specifications and middleboxes on protocol conformance. Addressing these issues will contribute to more secure HTTP implementations across the web.
Ilies Benhabbour is a PhD student working under the supervision of Professor Marc Dacier at KAUST. His research focuses on the issues arising from the existence of network middleboxes.
The views expressed by the authors of this blog are their own and do not necessarily reflect the views of the Internet Society.