Wireshark is a great tool for packet analysis, letting you learn exactly what is being communicated and transmitted over a network. It serves as a powerful window into network traffic, allowing you to capture, inspect, and analyze data packets flowing through your network in real time.
With Wireshark, you can observe the intricate details of many network protocols, from low-level protocols like Ethernet and ARP, through transport protocols such as TCP, up to application-layer protocols such as HTTP, DNS, and TLS. This deep visibility helps you understand how devices communicate, troubleshoot network problems, and even detect suspicious activity or security breaches.
One of Wireshark’s key strengths is its ability to filter traffic by IP addresses, ports, protocols, or even packet content, enabling focused analysis of relevant data. For instance, if you’re interested in inspecting how a web page loads, you can filter HTTP packets and see the exact requests and responses exchanged between your browser and the server.
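For example, a display filter such as `http && ip.addr == 203.0.113.10` (the address here is just a placeholder) confines the view to HTTP traffic involving a single host, while `tcp.port == 443` isolates traffic on the standard HTTPS port.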
Moreover, Wireshark supports protocol dissection, which means it can decode raw packet data into human-readable information, making complex communication flows much easier to comprehend. This feature is invaluable for students, network engineers, security analysts, and developers working to optimize networked applications.
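Wireshark's command-line companion, tshark, exposes the same dissectors, so you can dump decoded packets from a saved capture with something like `tshark -r capture.pcap -Y "dns" -V` (where `capture.pcap` stands in for your own capture file): `-Y` applies a display filter and `-V` prints the full dissection tree for each matching packet.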
However, installing and learning to use Wireshark can present a steep learning curve for many users, especially those new to networking. The complexity of filtering through massive amounts of packet data and understanding protocol details can be overwhelming. To simplify monitoring and logging communications between an agent and an MCP server, an alternative approach is to write a custom script that acts as a proxy server or interceptor.
This kind of proxy, often called a man-in-the-middle (MITM) proxy, sits transparently between the agent and the server. It intercepts the traffic flowing both ways, allowing you to:
- Log the data exchanged for later analysis.
- Monitor traffic in real time for debugging or auditing.
- Modify messages if needed, for example, to test different scenarios or inject additional data.
Unlike a generic packet sniffer, a purpose-built proxy gives you a controlled environment tailored to your specific needs. You can focus on the relevant parts of the communication, extract meaningful logs, or even automate reactions based on message content.
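As a concrete illustration of the "modify messages" point above, a transformation hook can be dropped into the forwarding path. This is only a sketch; the byte patterns being swapped are purely hypothetical:

```python
def rewrite(data: bytes) -> bytes:
    # Hypothetical mutation for testing: swap one token for another before
    # the bytes reach their destination. Adjust to whatever your scenario needs.
    return data.replace(b'"model": "gpt-4o"', b'"model": "gpt-4o-mini"')
```

In the proxy example below, you would call `rewrite(data)` just before forwarding each chunk. Note that naive byte replacement only works for simple framing; length-prefixed or chunked protocols need the message re-encoded after editing.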
Writing a MITM proxy is particularly straightforward when dealing with unencrypted or custom protocols. For encrypted traffic (such as TLS), the proxy must be capable of decrypting and re-encrypting messages, which requires additional configuration.
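A minimal sketch of that extra step, assuming you have generated a certificate the client is configured to trust (the `proxy-cert.pem` and `proxy-key.pem` file names are hypothetical):

```python
import socket
import ssl

# Terminate TLS from the client with a locally trusted certificate...
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.load_cert_chain("proxy-cert.pem", "proxy-key.pem")
client_ctx = ssl.create_default_context()  # still verifies the real server

listener = socket.create_server(("0.0.0.0", 9999))
raw_client, _ = listener.accept()
tls_client = server_ctx.wrap_socket(raw_client, server_side=True)

# ...then open a fresh TLS connection to the real server.
raw_remote = socket.create_connection(("mcp.example.com", 443))
tls_remote = client_ctx.wrap_socket(raw_remote, server_hostname="mcp.example.com")

# From here, relay bytes between tls_client and tls_remote exactly as in the
# plaintext examples; recv/send on these wrapped sockets see decrypted data.
```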
In many cases, scripting this kind of proxy in languages like Python, Node.js, or Go offers a balance between flexibility and ease of development. Libraries such as asyncio in Python or net in Node.js provide the necessary tools to handle concurrent connections and forward traffic efficiently.
Here is a simple example using only Python's standard socket and threading modules:
```python
import socket
import threading

def handle_client(client_socket, remote_host, remote_port):
    # Open a connection to the real server on behalf of the client.
    remote_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    remote_socket.connect((remote_host, remote_port))

    def forward(src, dst):
        # Relay bytes from src to dst until the sender closes its end.
        while True:
            data = src.recv(4096)
            if len(data) == 0:
                break
            print(f"Logging data: {data}")  # Log or save data here
            dst.sendall(data)  # sendall avoids dropping bytes on partial sends
        # Close both ends so the opposite forwarding thread unblocks too.
        for sock in (src, dst):
            try:
                sock.close()
            except OSError:
                pass

    # One thread per direction: client -> server and server -> client.
    threading.Thread(target=forward, args=(client_socket, remote_socket)).start()
    threading.Thread(target=forward, args=(remote_socket, client_socket)).start()

def start_proxy(local_port, remote_host, remote_port):
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(('0.0.0.0', local_port))
    server.listen(5)
    print(f"Proxy listening on port {local_port}...")
    while True:
        client_sock, addr = server.accept()
        print(f"Connection from {addr}")
        threading.Thread(target=handle_client,
                         args=(client_sock, remote_host, remote_port)).start()

# Example usage: forward local 9999 to mcp.example.com:443
start_proxy(9999, 'mcp.example.com', 443)
```
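Two caveats on this example. First, because it relays raw bytes, pointing it at a TLS port like 443 (as in the usage line above) will only ever log ciphertext; to log plaintext you need the TLS-terminating approach sketched earlier. Second, the same forwarding loop can be written without threads using asyncio, mentioned above. A rough equivalent, reusing the same hypothetical host and port:

```python
import asyncio

async def pump(reader, writer, label):
    # Copy bytes one direction, logging each chunk as it passes through.
    while data := await reader.read(4096):
        print(f"{label}: {data!r}")  # Log or save data here
        writer.write(data)
        await writer.drain()
    writer.close()

async def handle(client_reader, client_writer):
    # One connection to the real server per incoming client.
    remote_reader, remote_writer = await asyncio.open_connection("mcp.example.com", 443)
    await asyncio.gather(
        pump(client_reader, remote_writer, "client->server"),
        pump(remote_reader, client_writer, "server->client"),
    )

async def main():
    server = await asyncio.start_server(handle, "0.0.0.0", 9999)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```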
And here is llm_logger.py, the script provided by MarkTechStudio, a small FastAPI app that proxies and logs OpenAI-style chat completion calls:
```python
import httpx
from fastapi import FastAPI, Request
from starlette.responses import StreamingResponse

class AppLogger:
    def __init__(self, log_file="llm.log"):
        """Initialize the logger with a file that will be cleared on startup."""
        self.log_file = log_file
        # Clear the log file on startup
        with open(self.log_file, 'w') as f:
            f.write("")

    def log(self, message):
        """Log a message to both file and console."""
        # Log to file
        with open(self.log_file, 'a') as f:
            f.write(message + "\n")
        # Log to console
        print(message)

app = FastAPI(title="LLM API Logger")
logger = AppLogger("llm.log")

@app.post("/chat/completions")
async def proxy_request(request: Request):
    # Capture and log the raw request body before forwarding it.
    body_bytes = await request.body()
    body_str = body_bytes.decode('utf-8')
    logger.log(f"Model request: {body_str}")
    body = await request.json()
    logger.log("Model response:\n")

    async def event_stream():
        # Forward the request upstream and stream the SSE response back,
        # logging each line as it passes through.
        async with httpx.AsyncClient(timeout=None) as client:
            async with client.stream(
                "POST",
                "https://openrouter.ai/api/v1/chat/completions",
                json=body,
                headers={
                    "Content-Type": "application/json",
                    "Accept": "text/event-stream",
                    # Default to "" so a missing header doesn't crash httpx
                    "Authorization": request.headers.get("Authorization", ""),
                },
            ) as response:
                async for line in response.aiter_lines():
                    logger.log(line)
                    yield f"{line}\n"

    return StreamingResponse(event_stream(), media_type="text/event-stream")

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
```
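To route an agent through this logger, point its OpenAI-compatible client at the proxy instead of OpenRouter directly. A sketch using the official openai Python package (the API key and model name are placeholders; any credentials and model OpenRouter accepts will do):

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000",  # the logging proxy started above
    api_key="sk-or-...",               # placeholder; forwarded as the Authorization header
)

stream = client.chat.completions.create(
    model="openai/gpt-4o-mini",  # placeholder OpenRouter model identifier
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```

Every request body and each streamed SSE line will then show up both on the console and in llm.log.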