proposal: runtime/trace: flight recording #63185
Comments
I agree with all of the advantages of having flight recording in the runtime. As for triggers, I think that program logic will be the far more useful trigger for the majority of use cases, and certainly the easier one to make use of. On top of that, this flight recorder will be much more straightforward to integrate with existing telemetry solutions than external solutions, which will either not be designed for that (Gotraceui) or will have to be written first.
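Concretely, a trigger wired into program logic might look something like this (a sketch against the proposed API; the NewFlightRecorder constructor, the latency threshold, and the file name are assumptions, not part of the proposal):

```go
package main

import (
	"log"
	"net/http"
	"os"
	"runtime/trace" // assumes the proposed FlightRecorder API lands here
	"time"
)

func main() {
	fr := trace.NewFlightRecorder() // assumed constructor name
	if err := fr.Start(); err != nil {
		log.Fatal(err)
	}
	defer fr.Stop()

	http.HandleFunc("/work", func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		doWork()
		if time.Since(start) > 500*time.Millisecond {
			// Program logic decided this request was interesting:
			// snapshot the recent past to a file.
			f, err := os.Create("slow-request.trace")
			if err != nil {
				log.Print(err)
				return
			}
			defer f.Close()
			if _, err := fr.WriteTo(f); err != nil {
				log.Print(err)
			}
		}
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}

// doWork stands in for a real request handler body.
func doWork() { time.Sleep(10 * time.Millisecond) }
```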
Just to say: the most frequent "I wish I knew what happened just before" for me is OOM, which isn't (AFAIK) something you can trigger on once the kernel makes its decision. Maybe we could trigger on hitting GOMEMLIMIT?
Nice! How can we get CPU profile samples to show up in the flight-recorder data stream? The existing APIs for execution traces and CPU profiles have a start time and an end time, so those two pair well together. For this it seems that we'd need a way to ask for recent CPU profile samples on demand, without committing to a profile with fixed start and end times. The panics in the proposed API also seem like they could be returned as errors instead.
You've said that the data from each partition is self-contained. Is everything in a partition namespaced by that partition (is there, say, a list of all the Ms per generation), and how does a consumer stitch consecutive partitions back together?
It turns out that's quite difficult to do, because Linux provides no opportunity to dump any information when an OOM occurs; the OOM killer simply SIGKILLs processes. Unfortunately, SIGKILL can't be caught, so the runtime gets no chance to write anything out.

With that being said, there are still some best-effort things we might be able to do. Programs that don't already use GOMEMLIMIT could set it comfortably below the container's limit and treat approaching it as a trigger, snapshotting the flight recorder when memory use crosses that soft boundary (see the sketch at the end of this comment).

Either way though, it's still best-effort. Unless the snapshotting fully stops the world, the program may continue executing and OOM before the trace actually gets fully dumped. (Even then, it's still possible, though less likely, that it OOMs before the runtime successfully stops every running goroutine.)

Once upon a time there was a patch proposed to Linux to allow for halting a process when it hit container limits, so that another process in a tiny memory reservation could inspect it. One could imagine that if this other process created a core dump of the halted process, we could write some code to extract any active trace buffers from the core dump into a trace.

I'm not sure where this leaves us. Perhaps it suggests that an out-of-process approach like that is the only fully reliable option, with in-process triggers remaining best-effort.
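Here is what that soft-limit canary might look like in user space (illustrative only; the polling interval, threshold, and file name are assumptions, and the snapshotter interface stands in for the proposed WriteTo method):

```go
package oomcanary

import (
	"io"
	"log"
	"os"
	"runtime/metrics"
	"time"
)

// snapshotter matches the proposed (*FlightRecorder).WriteTo method so
// this sketch stands alone without the not-yet-existing API.
type snapshotter interface {
	WriteTo(w io.Writer) (int64, error)
}

// watchMemory polls the runtime's total memory use and dumps one
// flight-recorder snapshot when it crosses softLimit, which should sit
// comfortably below the real container limit. It can still lose the
// race against the kernel's OOM killer.
func watchMemory(fr snapshotter, softLimit uint64) {
	samples := []metrics.Sample{{Name: "/memory/classes/total:bytes"}}
	for range time.Tick(100 * time.Millisecond) {
		metrics.Read(samples)
		if samples[0].Value.Uint64() < softLimit {
			continue
		}
		f, err := os.Create("oom-canary.trace") // illustrative destination
		if err != nil {
			log.Print(err)
			return
		}
		if _, err := fr.WriteTo(f); err != nil {
			log.Print(err)
		}
		f.Close()
		return
	}
}
```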
If you happen to have CPU profiling running, it'll just work, but you make a good point that there's no good way to have it included all the time (i.e. with no intention of producing a CPU profile). It seems to me like that should maybe be another option, either on the FlightRecorder itself or elsewhere in the API.
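In the meantime, a workaround along those lines (a sketch; it relies on the existing behavior that CPU profile samples are also written into any active execution trace, and simply discards the profile itself):

```go
package cpusamples

import (
	"io"
	"runtime/pprof"
)

// enableCPUSamplesInTrace starts a CPU profile whose output is thrown
// away; its only effect here is that, while it runs, the runtime also
// records CPU samples into any active execution trace (including a
// flight recorder's window). Call stop to end profiling.
func enableCPUSamplesInTrace() (stop func(), err error) {
	if err := pprof.StartCPUProfile(io.Discard); err != nil {
		return nil, err
	}
	return pprof.StopCPUProfile, nil
}
```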
Hm, that's a good point. I'll update the API to return an error in that case instead.
There's currently no list of all the Ms per generation because we don't have M events, but yes, everything within a partition is namespaced by that partition's "generation number." The proposed trace parsing API exposes partition changes as a Sync event. FWIW, the trace parsing API already does this exact kind of "stitching." Every partition is an entirely self-contained trace, which means all goroutines (and Ps) and their statuses get named in every partition. The trace parser uses this information to validate the stitching: a new partition's goroutine statuses need to match where each goroutine left off in the previous partition.
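For example, with the parsing API in golang.org/x/exp/trace as it exists at the time of writing, generation boundaries surface roughly like this:

```go
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"golang.org/x/exp/trace"
)

func main() {
	f, err := os.Open("prog.trace")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	r, err := trace.NewReader(f)
	if err != nil {
		log.Fatal(err)
	}
	gen := 0
	for {
		ev, err := r.ReadEvent()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		// Sync events mark generation (partition) boundaries.
		if ev.Kind() == trace.EventSync {
			gen++
			fmt.Printf("entering generation %d\n", gen)
		}
	}
}
```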
This proposal has been added to the active column of the proposals project.
I think this is waiting on an implementation and experience from using that implementation.
That's correct. I plan to have one in golang.org/x/exp/trace soon.
I made a mistake in the commit message so gopherbot couldn't connect the CL to this issue, but the experimental API has now landed via https://go.dev/cl/550257. It is available in golang.org/x/exp/trace for Go 1.22 only. There are a few caveats with this implementation that will not be true with a real runtime implementation.
I don't think any of these are significant enough to detract from the usefulness of the experiment, but I wanted to bring them up in case one of these does become an issue. We can also explore ways to improve the experiment to make it more representative, if one of them is indeed a problem. Please give it a try!
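A minimal way to try it (a sketch; error handling abbreviated, and the trigger is left as a placeholder):

```go
package main

import (
	"log"
	"os"

	"golang.org/x/exp/trace"
)

func main() {
	fr := trace.NewFlightRecorder()
	if err := fr.Start(); err != nil {
		log.Fatal(err)
	}
	defer fr.Stop()

	doSomethingInteresting() // your trigger condition goes here

	f, err := os.Create("flight.trace")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	// Snapshot the recorder's window of recent execution.
	if _, err := fr.WriteTo(f); err != nil {
		log.Fatal(err)
	}
}

func doSomethingInteresting() {}
```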
Proposal: runtime/trace flight recording
Updated: 23 September 2023
Background
"Flight recording" is a technique in which trace data is kept in a conceptual circular buffer, flushed upon request. The purpose of this technique is to capture traces of interesting program behavior, even when one does not know ahead of time when that will happen. For instance, if the web service fails a health check, or the web service takes an abnormally long time to handle a request. Specifically, the web service can identify such conditions when they happen, but the programmer setting up the environment can't predict when exactly they will occur. Starting tracing after something interesting happens also tends not to be useful, because the program has already executed the interesting part.
The Java ecosystem has had this for years through Java's flight recorder. Once the JVM's flight recorder is enabled, the JVM can obtain a trace representing the last few seconds of time. The dump can be triggered by rules set up in JMX, or by passing a flag to the JVM that dumps a trace on exit.
With the implementation of #60773 now approaching a stable point, hopefully in Go 1.22 all traces will consist of a series of self-contained partitions. This implementation change presents an opportunity to add something similar to the Go execution tracer: always retain at least one partition that can be snapshotted at any time.
This is also enabled by work in the Go 1.21 release to make traces dramatically cheaper. Because flight recording relies on waiting until something interesting happens, tracing needs to be enabled for a much longer period of time. Enabling flight recording across, for example, a small portion of a production fleet, becomes much more palatable when the tracing itself isn't too expensive.
Design
The core of the design is a new API in the runtime/trace package to enable flight recording. This means that programs can be instrumented with their own triggers.
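The full API listing is elided here; as a sketch, the surface implied by the rest of this document might look like the following (godoc-style declarations, not compilable code; the constructor name and exact signatures are reconstructed from the method names used below):

```go
package trace

// A FlightRecorder tracks a moving window over the execution trace
// produced by the runtime.
type FlightRecorder struct {
	// contains unexported fields
}

// NewFlightRecorder creates a new flight recorder. (Name assumed.)
func NewFlightRecorder() *FlightRecorder

// SetPeriod sets the approximate duration of time the recorder's
// window should cover. Takes effect before the next call to Start.
func (fr *FlightRecorder) SetPeriod(d time.Duration)

// SetSize sets an upper bound on the memory the recorder may retain,
// limiting the damage caused by a large period. Takes effect before
// the next call to Start.
func (fr *FlightRecorder) SetSize(bytes int)

// Start begins flight recording.
func (fr *FlightRecorder) Start() error

// Stop ends flight recording.
func (fr *FlightRecorder) Stop() error

// WriteTo snapshots the current window as a complete, self-contained
// trace and writes it to w. Concurrent calls return an error.
func (fr *FlightRecorder) WriteTo(w io.Writer) (n int64, err error)
```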
Implementation
Because #60773 already partitions traces, most of the implementation work is already done. Only a few additional changes need to be made.
First, the values from SetPeriod and SetSize need to adjust runtime-internal variables when (*FlightRecorder).Start is called. The period already exists, but the size in bytes does not. The tracer will need to track a partition's size, and the background goroutine that creates new partitions will need to be woken when the partition's size exceeds that value.

Next, the trace reader goroutine puts buffers it takes off the queue onto a free list when it's done with them. When flight recording is enabled simultaneously with trace.Start, the trace reader will need to put these buffers on some other list in case the flight recorder needs them. The task of putting those buffers on the reuse list will be handled by whichever goroutine creates a new partition, and it will only place the oldest partition's buffers on the reuse list. This way, at least one full partition will always be available for the flight recorder.

Lastly, when (*FlightRecorder).WriteTo is called, it will need to create a new partition. Then, it will need to take ownership of any outstanding trace buffers by setting a flag on each buffer that prevents it from going on the reuse list. This will likely require a brief stop-the-world, but the actual copying will not happen while the world is stopped. Once copying is done, (*FlightRecorder).WriteTo will wait until the trace reader is done with the partitions for the buffers it was using (if a trace reader is active) and then place all the buffers it marked on the reuse list.
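To make the retention rule concrete, here is a user-level model of it (illustrative only; the runtime's actual data structures and bookkeeping differ):

```go
package flightmodel

import "time"

// partition models one self-contained chunk of trace data.
type partition struct {
	start time.Time // when the partition was opened
	bytes int       // encoded size of its buffers
}

// retain drops the oldest partitions only while doing so is safe: the
// remainder must still span the configured period, data must be shed
// when over the size budget, and at least one complete partition is
// always kept for the flight recorder.
func retain(parts []partition, period time.Duration, maxBytes int, now time.Time) []partition {
	size := 0
	for _, p := range parts {
		size += p.bytes
	}
	for len(parts) > 1 {
		// Dropping the oldest is fine if the rest still covers the
		// period, and mandatory if we're over the size budget.
		stillCoversPeriod := now.Sub(parts[1].start) >= period
		overBudget := size > maxBytes
		if !stillCoversPeriod && !overBudget {
			break
		}
		size -= parts[0].bytes
		parts = parts[1:]
	}
	return parts
}
```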
Discussion
SetPeriod and SetSize could give more rigid guarantees, but that is both complex to implement and not terribly useful. The primary use-case for SetPeriod is to allow users to ask for longer traces (for example, if a web service's "long request" means something much longer than a handful of seconds). Meanwhile, the primary use-case of SetSize is to control memory overheads and limit the damage caused by a large SetPeriod.

WriteTo could allow multiple goroutines to call it, since it could easily serialize them internally. However, this can create some bad situations. For instance, consider some snapshot trigger condition that causes multiple goroutines to call WriteTo. The call is heavyweight, so they'll queue up behind each other; the last one will likely take quite a while to resolve, and the application will be significantly disrupted. It'll also produce traces that aren't very useful (consisting of short partitions corresponding approximately to the duration of the last WriteTo call), unless we also allow multiple goroutines to read the same partition's buffers. That would be fairly complicated to implement, and it wouldn't add much value either, since the result is just duplicate data. The current design side-steps these issues, and reduces the risk of run-time panics, by returning an error to all but one concurrent caller.
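On the caller side, the error-returning design suggests a simple gate so that only one goroutine ever attempts a snapshot (a sketch; the interface stands in for the proposed FlightRecorder type):

```go
package snapgate

import (
	"io"
	"log"
	"sync/atomic"
)

var snapshotting atomic.Bool

// trySnapshot writes at most one snapshot at a time and silently drops
// requests that arrive while one is already in progress.
func trySnapshot(fr interface{ WriteTo(io.Writer) (int64, error) }, w io.Writer) {
	if !snapshotting.CompareAndSwap(false, true) {
		return // a snapshot is already being written
	}
	defer snapshotting.Store(false)
	if _, err := fr.WriteTo(w); err != nil {
		log.Print(err)
	}
}
```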
Alternatives considered
External circular buffer
@dominikh has suggested adding a similar feature to gotraceui. Because the partitioning is actually baked into the trace's structure, it's possible for trace consumers to implement something similar themselves. The only thing missing is a standard streaming endpoint for execution traces (see follow-up work).
However, there are a few advantages to adding support directly to the standard library and runtime.

- Triggers aren't tied to the runtime/trace package's annotations; programs can instead decide when to grab a snapshot directly.
- The runtime retains control over partitioning (SetPeriod and SetSize). Any external solution will be at the mercy of the runtime's defaults.

Despite these advantages, it's likely worth pursuing support for such a use-case even if the API described in this proposal is made available. A shortcoming of this document's proposal is that there's no way to trigger a snapshot against the trace data itself, only from program logic. Handling traces externally also enables ad-hoc analyses without the need for additional instrumentation.
Follow-up work
Add support for trace streaming endpoints to net/http/pprof
As stated in the discussion of the "external circular buffer" alternative, we could support that alternative easily and well by adding a standard debugging endpoint for streaming trace data. It probably makes the most sense to add new query parameters to the existing trace endpoint; the details of that can be hashed out elsewhere.
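As a sketch of the shape such an endpoint could take (hypothetical; built here on the experimental golang.org/x/exp/trace API rather than on anything this section proposes):

```go
package traceendpoint

import (
	"net/http"

	"golang.org/x/exp/trace"
)

// flightHandler streams a flight-recorder snapshot to the client.
// The route and any query parameters are placeholders.
func flightHandler(fr *trace.FlightRecorder) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/octet-stream")
		if _, err := fr.WriteTo(w); err != nil {
			// Best-effort: the response may already be partially written.
			http.Error(w, err.Error(), http.StatusInternalServerError)
		}
	}
}
```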