Go 1.25
These are the most significant additions in Go 1.25 IMHO:
- synctest, which I discussed last time (I also did a talk)
- a new Garbage Collector which is more cache friendly
- GOMAXPROCS (the number of threads used to run goroutines) is now dynamically adjusted by default - see GOMAXPROCS
- WaitGroup adds a nicety (with a small gotcha)
- JSON v2 package decodes faster, + adds nice surprises
- Trace Flight Recorder - a really cool addition that continuously records trace data (into a ring buffer), allowing you to dump a trace of the last few seconds of execution when something interesting happens
After trying these things in depth I found some of them useful: synctest (of course), the trace flight recorder, and the new JSON v2 unknown tag.
Some other things sound good on paper, but may have been overhyped by some bloggers: the new WaitGroup Go() method, GOMAXPROCS tweaks, etc, as discussed below.
Please correct me if you have insights that I missed.
Background
Garbage Collection
One of the most controversial things about Go is garbage collection (GC). Go’s approach to memory management gets criticism from both sides:
- the C/C++/Rust fraternity don’t like GC languages
- while the Java/C# crowd don’t like Go’s “simplistic” GC
Let’s look at GC and Go’s approach including the controversy and why I like the way Go does it.
Advantages of Garbage Collection
The (supposed) great advantage of a language with garbage collection is avoidance of a whole slew of bugs like:
- memory leaks, due to forgetting to free allocated memory
- dereferencing freed memory, a nasty problem that affects a lot of C code
- double-frees, which can also cause strange problems
However, having used C/C++ almost exclusively for more than 3 decades, I found the best thing about garbage collection is that it gives you one less thing to worry about. For example, a lot of functions in C return a pointer, and it can often be difficult or simply tedious to work out how/when to “free” the memory pointed to, or even if you need to. OTOH Rust’s way of managing memory is brilliant, but requires more thought which, at least for what I do, is rarely worth it.
When coding there are so many things to think about - how to make the code correct, reliable, efficient, secure, portable, verifiable (testable), and especially understandable (maintainable). The real advantage of GC is it makes things simpler, by having one less thing to worry about. It helps you to focus on writing better code.
❝ .. helps you to focus on writing better code ❞
Disadvantages of Garbage Collection
The main disadvantage is how it affects performance. This includes hits to memory usage, CPU usage (throughput) and latency (especially unpredictable stalls). Luckily, Go neatly avoids most of the problems while still keeping things simple!
But before I explain that, let’s look at the garbage collection in Java, as it is the one most commonly compared to Go’s GC.
Java GC
A useful idea you may have heard of is DRY (don’t repeat yourself). The annoying thing about garbage collection is that you are repeatedly performing some operations every GC cycle. That is, whether a long-lived object is still being used has to be checked over and over again. This gave rise to the idea of a “generational” GC where older objects are not checked as often.
This is particularly useful in Java as it tends to generate a huge number of short-lived objects. (Go is much better in this regard due to escape analysis and the avoidance of certain OO language features.) In Java the default GC uses 3 generations which means that less time is spent on GC of 2nd and 3rd generation objects, while not wasting too much memory.
(But I’ll never forget the first time I encountered a 3rd generation garbage collection. My heart sank as my software suddenly froze for what seemed like ages but was probably less than 30 seconds.)
Go’s Approach
Go, as usual, takes a simpler approach. It does not use a generational GC. Consequently, GCs consume more CPU resources overall, but this just means using a few goroutines to do most of the GC work concurrently. Most Go software runs on hardware with many cores (hardware threads) so the CPU hit is often unimportant.
The crucial part is that Go has just 2 STW (stop the world) pauses per GC cycle. Over many years the STW pauses were gradually reduced so that in Go 1.10 they were much less than a msec (unless you had a goroutine that “hogs” a thread as discussed next). Except for “hard real time” applications these STW pauses are not important.
STW pauses are when the Go runtime needs to stop all your goroutines. They are only required during the GC cycle.
Cooperative Scheduling Problem
But there was a lingering problem… Since goroutine scheduling was cooperative, the Go runtime had to wait for badly behaved goroutine(s) – ie, ones that “hog” the CPU – to be paused. While the Go runtime is waiting nothing else happens, all other goroutines and the runtime sit idly waiting for the bad goroutine(s) to behave.
This caused stalls like those seen with Java. Luckily, this was addressed in Go 1.14 (early 2020) when the goroutine scheduler was made fully preemptive (except on ARM and WASM platforms) - see the Go 1.14 Release Notes.
Latency Issues
Possibly the only remaining issue with Go’s GC, since then, is reported latency issues when a GC is running - i.e. all goroutines seem to slow down while concurrent GC goroutine(s) are doing their work. I have had to track down a few of these, and have found that they are usually caused by poorly written code and even bugs (e.g. performing redundant work).
For example, I had an issue where web requests that normally took less than 10 msec suddenly experienced latencies of up to 5 seconds whenever a GC was running. (Under heavy use a GC would run for up to 30 seconds every few minutes.) Eventually, I tracked this down (using the wonderful execution tracer :) to bad input data resulting in one crucial goroutine repeating the same calculations over and over. Once I adjusted the code to ignore the bad data all my latency problems went away.
Of course, you can still have latency problems under high load. These can often be addressed by reducing allocations of short-lived objects. The new “cache friendly” Green Tea GC (see below) should also help.
The Controversy
About 7 years ago I recall a lot of discussion about Go’s reported amazing reduction in STW pauses. A lot of (Java) people said that it was nothing revolutionary and that there were garbage collectors available for Java that could do similar things.
It’s true that Go’s approach to GC is not a new invention (though the Go creators never claimed it was). However, Go’s approach is pretty unique and just works for the vast majority of scenarios out of the box. There are very few ways you can change Go’s GC behaviour and most of the time you don’t need to.
In Java, on the other hand, the default collector has lots of “levers”. You can even override the default garbage collector and write your own or install any of the numerous replacements that are available. In practice, very few Java developers tweak the GC, AFAIK.
The Go GC may be less performant than a tweaked Java one, but Go tries to keep things simple even at the expense of a little performance. If you have modern hardware (lots of available cores) then you probably won’t notice.
With Go 1.25, it’s now being said that the new “Green Tea GC” is an admission that Go’s GC has problems – otherwise, why would you need an alternative? This is wrong – the new GC is not a replacement but a refinement. It’s only an optional “experiment” in Go 1.25 due to the caution of the Go creators.
Go Makes it Simple
In summary: Go has a simple, effective memory management system (even before Go 1.25) because:
- it avoids a lot of GC load due to escape analysis
- Go code can often be (made) allocation free (unlike Java)
- keeps things simple - no memory management distractions
- GC STW pauses of usually (much) less than 1 msec
- preemptive scheduling (since 2021) => no surprises!
- most GC “work” is done concurrently (sep. goroutines)
- with plenty of cores the performance impact is negligible
- you get good performance out of the box for most Go code
GOMAXPROCS
To understand the point of GOMAXPROCS you need to understand a little about the Go runtime and goroutines. One of the coolest things about Go is how goroutines work. I’m not going to explain that in detail except to say that one of the main jobs of the Go runtime (not to be confused with the runtime package) is to control your goroutines - how many and which ones are executing code at any one time.
GOMAXPROCS Environment Variable
GOMAXPROCS refers to the maximum number of goroutines that the runtime will execute concurrently (on OS threads), but originally it was just the name of the environment variable that the runtime reads (when your program starts) to set that limit.
In Go 1.0, if GOMAXPROCS was not set, it defaulted to one. This default was soon changed to the number of cores (hardware threads) available (obtainable with runtime.NumCPU()). However, this did not account for other processes running on the same hardware which might also require CPU time, and did not (until Go 1.25) allow for recent OS features that allow the number of hardware threads of a process to be more tightly controlled.
However, you have always had the ability to query and/or override the current setting using runtime.GOMAXPROCS(). Moreover, after you set it, you can set it back to the default using runtime.SetDefaultGOMAXPROCS() (new in Go 1.25).
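For example, you can cap a program at two threads for running goroutines by setting the environment variable when launching it (myserver is a hypothetical binary name):

$ GOMAXPROCS=2 ./myserver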
Scheduling
In simple terms, the Go runtime asks the OS (operating system) for GOMAXPROCS threads, which it uses to run its goroutines. If there are too many goroutines, the Go runtime “schedules” the goroutines onto the available threads. That is, the runtime stops and starts different goroutines so they all get a bit of CPU time.
Note that just because the Go runtime has obtained GOMAXPROCS OS threads does not mean they all (or any) will run concurrently. These are OS threads – it is up to the OS scheduler (not to be confused with the Go runtime scheduler which is built into your program) to allocate them to hardware threads (cores).
The scheduling of goroutines is clever and not that simple (but very well documented). Luckily, you don’t need to understand it in detail, but a simplification is that a goroutine has 3 states:
- running - currently executing on an (OS) thread
- runnable - ready to execute (when the Go scheduler chooses)
- blocked - waiting for something
Blocked Goroutines
Goroutines become blocked when they are waiting for something – either from another goroutine (such as read/write to a channel, unlock a mutex, etc), or from the OS (waiting for a system call to return, I/O, etc).
When a goroutine is blocked by the OS, the Go runtime needs to create a new OS thread (or get one from its thread pool) to maintain its quota of GOMAXPROCS running goroutines. If it did not do this then the cores would be underutilized.
Note that this is a simplification in many ways. For example, a Go webserver can handle millions of simultaneous TCP/IP connections most of which are blocked (listening on a port) at any one time, but it avoids creating millions of OS threads by using connection “multiplexing” (e.g. epoll on Linux, iocp on Windows, etc). (OS threads are cheaper than processes but creating many thousands gets expensive.)
In summary, GOMAXPROCS allows you to control how many (unblocked) goroutines are executing concurrently.
New GC Experiment
I have always been astonished by the Go GC (garbage collector), mainly that STW pauses could be brought down to such small times. To be honest, for decades I avoided GC languages; and the Go GC has had its problems and controversies as I discussed in the Background section above.
Nowadays, I find GC makes my life simpler, generally without noticeable effects on performance. I thought it couldn’t get any better, so I was surprised to see the new GC experiment code named Green Tea.
It’s all about the Cache
Ever since the internal CPU clock speed of microprocessors started deviating from the bus (memory) clock speed (close to 45 years ago now), how your code works with the hardware cache(s) has become more and more important.
Without getting into too much technical detail, the new GC tries to group small heap-allocated objects together (in both space and time:). This improves the use of data and instruction caches.
Who does it affect?
A lot of software in Go is GC-intense. Escape-analysis is good (but could be improved :) and the obvious way to write code often leads to more allocations than necessary (though Go is nowhere near as bad as Java in this respect).
The new GC is supposed to increase garbage collection speed by up to 40%. Unfortunately, this may not mean the whole program runs much faster. I did a simple test and found it hard to measure any speed improvement.
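If you want to see how your own code fares, you can build and run your benchmarks with the experiment enabled - for example:

$ GOEXPERIMENT=greenteagc go test -bench . ./...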
However, I did find that garbage collections use fewer goroutines – so there are more goroutines available to run your code. I suspect the new GC will help in some situations (such as latency problems I mentioned in the Background section above).
❝ garbage collections use fewer goroutines ❞
GOMAXPROCS
If you don’t understand what GOMAXPROCS is for, and how/when/why the Go runtime creates/deletes/reuses threads (to run its goroutines), I explain it in detail in the Background section above, but the salient points are:
- GOMAXPROCS = number of threads used for goroutines
- Default = estimate of available cores (hardware)
- It can be set dynamically using runtime.GOMAXPROCS()
- It can be set back to the default using runtime.SetDefaultGOMAXPROCS()
It’s the 2nd point that is sometimes a problem because the way that the runtime detects the number of available cores is simplistic. Moreover, it’s set (to the default or from the GOMAXPROCS environment variable) at startup, and never modified, though it is becoming more common for the number of available CPUs to change dynamically.
Uses
The Go runtime tries to keep the CPUs busy by using, at any time, GOMAXPROCS threads for running its (non-blocked) goroutines. The exact effect of GOMAXPROCS depends on your code, the OS (operating system) and the Go runtime version.
If you never create any goroutines - never use the go keyword, and don’t use any packages that do so - then all your code runs in a single goroutine, in which case GOMAXPROCS has no effect on performance. (Though extra goroutine(s) may be used for garbage collections.) However, if you create lots of goroutines it can be important, though I have found that it often does not noticeably affect performance.
The general wisdom is GOMAXPROCS should be no more than the number of (hardware) cores, but I have found low values have little effect on throughput, though there is an effect on latency for very low values. It has even been reported that setting GOMAXPROCS higher than recommended can actually improve performance (this may depend on the OS, Go version, etc).
Note: My experience is with server software running on hardware with 20 or more cores. I run a certain piece of Go software on 4 servers across the world and at times it has 3 million or more goroutines. It typically only uses 5% CPU (with occasional spikes up to 30%). I have done lots of testing, as I can easily (dynamically) control the value of GOMAXPROCS.
Available CPUs?
The number of available CPUs could be affected by many things, like other processes running on the same system (perhaps at a higher priority) which hog the CPU(s). In this case, we are talking about the number of cores made available by the OS for a process to use; for a Go program this is the number of hardware threads available to run goroutines.
For example, on a laptop with a low battery, the OS may decide to shut down some cores to save power.
In Linux, processes can be configured to have fewer CPUs available using the cgroups facility. This is used for the --cpus docker option, when running Linux containers.
A Go program executes as a single process, with multiple threads.
Go 1.25 Fixes
The changes in Go 1.25 address two related issues:
- The number of actually available CPUs may be less than the number of hardware cores that the Go runtime detects
- The number of available CPUs can change dynamically
In Go 1.25 the runtime tries to obtain a better estimate of the available cores. Moreover, it will regularly check for changes and adjust the number of threads it uses for goroutines accordingly.
An extreme example might be running a docker container using the --cpus 2 command line flag on a server with 256 cores. A Go program running in the container (before Go 1.25) would default to a large value for GOMAXPROCS, but in Go 1.25 the Go runtime will detect that there are only 2 available CPUs (not 256).
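For example (my-go-app is a hypothetical image name):

$ docker run --cpus 2 my-go-app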
Personally, this will not affect me as I already dynamically control things using runtime.GOMAXPROCS(). I know that a lot of software does this. A lot of containerised Go software uses the automaxprocs open-source package which does this automatically. For a lot of other Go software it will have no appreciable effect.
If you set the GOMAXPROCS environment variable, or call runtime.GOMAXPROCS() (with a +ve value), then the Go 1.25 runtime will assume you know what you are doing and not dynamically adjust the setting (unless you subsequently call runtime.SetDefaultGOMAXPROCS()).
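Here’s a minimal sketch of querying and overriding the setting, and (new in Go 1.25) restoring the dynamically-adjusted default:

package main

import (
	"fmt"
	"runtime"
)

func main() {
	// A non-positive argument just queries the current setting.
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))

	// Override it - the runtime now assumes you know what you are doing
	// and stops adjusting the value dynamically.
	runtime.GOMAXPROCS(4)

	// Go back to the default and re-enable dynamic adjustment (Go 1.25+).
	runtime.SetDefaultGOMAXPROCS()
}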
Trace Flight Recorder
The execution tracer is an incredible tool (unique to Go) that allows you to examine what your goroutines, the garbage collector, and the Go runtime are doing in exquisite detail.
The Trace Flight Recorder is a refinement that allows you to continually save trace data (into a circular buffer). Then, at any time, you can trigger a trace of the last few seconds of execution. This is very useful when you want to analyse an event that is triggered rarely, and you don’t know how or why.
Understanding runtime behaviour
Go has a lot of tools that allow you to test and analyse your code. However, because there are so many facilities, the execution tracer tends to be forgotten. E.g.:
- automated tests - to ensure your code is correct + help find bugs
- benchmarks - ensure your code is fast + help find performance issues
- race detector - to find data race conditions in running code
- CPU profile - understand where CPU time is spent
- heap profile - understand how memory (heap) is utilised
- block profile - understand how your goroutines communicate
- other profiles - goroutine, mutex, etc
- GODEBUG (env. var.) - allows logging of things like GCs
- execution tracer - records exact behaviour of your goroutines and their interactions + use of the garbage collector, etc
I discussed these in my blog about Flame Graphs - see [Background -> Go](https://andrewwphillips.github.io/blog/flame-graphs.html#background) - but the Flight Recorder is used with the Execution Tracer.
Execution Tracer
With all these profilers, etc. the execution tracer often gets overlooked. So, what is the difference between profiling and tracing? A crucial difference is that profiling uses sampling, while the execution tracer records everything.
Of course, recording all this trace detail is not a simple thing to do, and even harder to do without slowing your code inordinately. Up until early last year the execution tracer had various problems that I mentioned in my blog on the Go 1.22 Execution Tracer.
Since Go 1.22 the execution tracer works brilliantly! So much so that I often turn it on in production in order to understand some problem. The extra overhead of recording a trace is small (even negligible), except that if you record for a long time it will use a lot of memory.
Why Do I Need It?
In a career of 45+ years (mainly C and C++) I have encountered lots of mysterious runtime issues (crashes, slowness, etc). Luckily, Go makes it orders of magnitude easier to create understandable software, but sometimes there are still things that are tricky.
In Go, these are usually performance problems often related to the garbage collector. When I have had these problems I find a detailed trace is invaluable. For example in my (server) software I can trigger generation of a 5-second trace at any time.
But the problem is that vital evidence may have disappeared before you realise you need it.
❝ ... vital evidence may have disappeared ... ❞
Using the Flight Recorder
This is where the Trace Flight Recorder comes in handy. Now I can have tracing on all the time, and it will keep recording into a circular buffer.
import "runtime/trace"
...
// Continuously record trace data into a ring buffer of up to ~10MB.
recorder := trace.NewFlightRecorder(trace.FlightRecorderConfig{MaxBytes: 10e6})
if err := recorder.Start(); err != nil {
	// return or handle error
}
defer recorder.Stop() // optional
This code will use a trace buffer not exceeding 10Mbytes (10e6). Note that you can also restrict the trace length to a number of seconds, but it’s simpler to just specify the memory (reserved for the circular buffer).
If you don’t specify the trace length the trace package will choose a sensible value - usually a few seconds. It never exceeds the memory limit you give it.
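If you prefer to think in seconds, my understanding is you give a lower bound on how much recent history to keep using the MinAge field (MaxBytes still caps the memory):

recorder := trace.NewFlightRecorder(trace.FlightRecorderConfig{
	MinAge:   5 * time.Second, // try to keep at least the last ~5 seconds
	MaxBytes: 10e6,            // but never use more than ~10MB
})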
Now, if you detect a problem, you can immediately trigger a dump of the trace for the last few seconds:
if traceWantedForSomeReason() {
	file, err := os.Create("trace_" + time.Now().UTC().Format("060102150405") + ".out")
	if err != nil {
		// return or handle error
	}
	if _, err = recorder.WriteTo(file); err != nil {
		// return or handle error
	}
	file.Close()
}
Examining Traces
Finally, I should mention that one of the coolest things about execution traces is the tooling provided to inspect them. The easiest way is to just run this command (where trace.out is the trace file name):
$ go tool trace trace.out
A web page is generated that allows you to inspect the trace in various ways. I will talk about this in detail in a future post, but there is a brief explanation in the Go Blog.
JSON v2
Like many gophers I use the json package (from the Go standard library) quite a bit. It has always worked pretty well for me, so I was surprised to see a v2 package, and even more surprised at how comprehensive it is.
Nice Surprises
A lot has already been written about it (e.g. see Go Blog, JSON evolution in Go: from v1 to v2, etc) so I won’t go into the details, but the highlights for me are:
- streaming - process unlimited JSON with limited memory
- improved decoding performance, including less GC load!
- easier custom JSON encoding/decoding
- better low-level support with the jsontext package
- usability improvements such as the unknown tag
New unknown tag
This is my favourite. I often find I have to grab some JSON, modify part of it, then put it back where I got it. The main difficulty is processing the JSON you understand while not disturbing the rest of it, which must be preserved.
As an example, imagine I have a JSON object which I know has a “name” field but may have other unknown fields. These could be used for some other purpose, or may appear in the future and need to be handled for forward-compatibility.
{
"name": "cake",
"ingredients": ["eggs", "flour", "etc"],
"image": {
"src": "images/cake1.jpeg"
},
"id": 12345
}
The simple way to update the name field is just to decode into a struct like this:
var v struct{ Name string }
if err := json.Unmarshal([]byte(JSONString), &v); err != nil {
	// handle error
}
v.Name = "simple cake"
out, err := json.Marshal(v)
if err != nil {
	// handle error
}
fmt.Println(string(out))
But this loses any JSON fields we don’t know about. The JSON after the round trip is just:
{
"name": "simple cake"
}
To get around this I just decode into a variable of type any. This stores JSON objects into a map[string]any, JSON arrays into []any, and JSON strings, numbers, etc into Go scalar types (string, float64, etc). It can handle any JSON at all, including recursive data types like nested objects and arrays.
This way we can change the name field without losing any of the other JSON data.
var v any
if err := json.Unmarshal([]byte(JSONString), &v); err != nil {
	// handle error
}
v.(map[string]any)["name"] = "simple cake"
out, err := json.Marshal(v)
if err != nil {
	// handle error
}
fmt.Println(string(out))
Try this code in the Playground
This works, but it’s messy and less type safe, since you have to work with any values and type assertions.
With JSON v2 we can just add an extra field to the struct (called Other in the code below) with the JSON unknown tag:
import "encoding/json/v2"
...
var v struct {
Name string
Other map[string]any `json:",unknown"`
}
if err := json.Unmarshal([]byte(JSONString), &v); err != nil {
// handle error
}
v.Name = "simple cake"
if out, err := json.Marshal(v); err != nil {
// handle error
}
Now when we encode the struct as JSON again we retain all the original data.
Recommendation
One interesting development in Go 1.25 is that the original (v1) json package has been re-implemented using the new JSON packages. So you might see some differences, such as performance improvements and even bugs, but only if you turn on the jsonv2 experiment.
To help the Go creators find bugs you might like to run all your JSON-related tests under Go 1.25 with the experiment turned on. This may reveal bugs in the new packages or in the “porting” of the original JSON package.
You don’t need to make any code changes - just build with Go 1.25 with GOEXPERIMENT=jsonv2, and run your tests.
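For example:

$ GOEXPERIMENT=jsonv2 go test ./...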
Warning: Don’t build production software with the experiment turned on!
WaitGroup
WaitGroups are a mainstay of working with goroutines, but it is easy to make mistakes.
WaitGroup Gotchas
With complex uses of WaitGroup it can be tricky to ensure that your use of Add() balances your calls to Done(). Luckily, if you call Done() too many times your code will panic with the message panic: sync: negative WaitGroup counter.
Unluckily, if you call Done() too few times then you can get deadlocks, goroutine leaks or just plain mysterious issues. (One way to address this would be to have a variation of WaitGroup with a context in order to timeout – this has not been added to the standard library, but maybe one day.)
Another common mistake is not calling Add() in the correct place. For example:
var wg sync.WaitGroup
go func() {
	wg.Add(1) // BUG: this may execute after wg.Wait() below has already returned
	// do something that we await to finish
	wg.Done()
}()
// ... maybe do something else concurrently
wg.Wait()
This may work if the new goroutine starts immediately and wg.Add(1) executes before wg.Wait(). More likely the Wait() will execute first and, since the wait group count is still zero, Wait() returns immediately.
Errgroup
These (and other) problems with sync.Waitgroup
were addressed by the Go developers in 2016 with the errgroup
package. The above problem is avoided like this:
import "golang.org/x/sync/errgroup"
...
var eg errgroup.Group
eg.Go(func() error {
// do something that we await to finish
})
New Go() Method
Go 1.25 finally adds errgroup’s Go() method to the standard library’s sync.WaitGroup. For many years I have preferred errgroup, but now maybe I’ll switch back.
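With it, the earlier (broken) Add()/Done() example becomes both safer and simpler:

var wg sync.WaitGroup
wg.Go(func() {
	// do something that we await to finish
})
// ... maybe do something else concurrently
wg.Wait()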
Gotcha One problem with using the new Go() method is that it’s not obvious that it runs your code in a new goroutine. For example, if you are “catching” panics (using recover()) in the outer goroutine they will not be caught in the function passed to the Go() method. If you want to avoid panics terminating the whole program you also need a deferred recover() inside the function passed to Go().
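A minimal sketch of what that looks like:

var wg sync.WaitGroup
wg.Go(func() {
	defer func() {
		if r := recover(); r != nil {
			// a recover() in the caller won't see this panic - handle it here
		}
	}()
	// do something that might panic
})
wg.Wait()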
synctest
Finally, a quick update on the new synctest package (available as an “experiment” in Go 1.24). I talked about it effusively last time.
synctest.Test()
In the Go 1.24 experiment you’d call synctest.Run(f). This takes one parameter (f) - the function to be run inside the bubble.
func TestSyncTest(t *testing.T) {
synctest.Run(func() { // deprecated in Go 1.25
// code to run in the bubble
})
}
In Go 1.25, this has been replaced with synctest.Test(t, f) which also takes a *testing.T parameter. (This was done for a minor technical issue which I’m not sure I agree with, since it restricts synctest to only be used with tests.)
func TestSyncTest(t *testing.T) {
synctest.Test(t, func() {
// code to run in the bubble
})
}
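Recall that time inside the bubble is fake - a time.Sleep() returns almost instantly in real time while the bubble’s clock advances by exactly the sleep duration - so timeout tests finish quickly. A minimal sketch:

import (
	"testing"
	"testing/synctest"
	"time"
)

func TestFakeTime(t *testing.T) {
	synctest.Test(t, func() {
		start := time.Now()
		time.Sleep(5 * time.Second) // fake time: returns almost instantly
		if time.Since(start) != 5*time.Second {
			t.Error("expected the bubble clock to advance by exactly 5s")
		}
	})
}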
synctest.Wait()
One comment I got about my synctest examples is that I did not demonstrate the use of synctest.Wait(). Unfortunately, I could not find a good reason to use synctest.Wait(). It is better, at least for external tests, to use the normal Go synchronisation facilities (like WaitGroup).
Since then, I have seen some valid uses of synctest.Wait() to verify conditions/state in internal tests, but I frown on internal tests.
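For completeness, here’s a minimal sketch of that sort of use (same imports as the sketch above): synctest.Wait() returns once all other goroutines in the bubble are durably blocked, after which it is safe to check state they have modified:

func TestWithWait(t *testing.T) {
	synctest.Test(t, func() {
		var count int
		ch := make(chan struct{})
		go func() {
			count++
			<-ch // durably blocked on the channel
		}()
		synctest.Wait() // returns once the goroutine above is blocked
		if count != 1 {
			t.Error("expected count to have been incremented")
		}
		close(ch) // unblock so the bubble can finish
	})
}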
Conclusion
It’s great that synctest has migrated from experiment status, but there are some other important improvements in Go 1.25. I anticipate nice things from the new Green Tea GC when it comes out of experiment mode, probably in Go 1.26.
The changes for GOMAXPROCS and sync.WaitGroup are not particularly useful (at least to me), but it is good to have them in the standard library rather than relying on external packages (github.com/uber-go/automaxprocs and golang.org/x/sync/errgroup).
The new JSON/v2 package, on the surface, does not look that useful, but it has a myriad of improvements in efficiency, reliability, security, compatibility, verifiability and usability. I especially appreciate the new unknown tag. It’s also nice to see a major improvement to Go that comes from the general Go community (ex Google).
It’s also interesting to see that they have re-implemented the original JSON (v1) package in terms of the new (v2) package. This means you will get JSON decoding performance improvements without any code changes (but you have to turn on the experiment in Go 1.25 for this).
The new Green Tea GC experiment is more cache aware, which is always a good thing. But we’ll have to wait to see how effective it is.
Finally, one thing that I love (though again maybe that’s just me) is the Trace Flight Recorder. I previously discussed the Execution Tracer improvement in Go 1.22, but due to the unwieldiness of traces it was hard to catch rarely occurring events in production. The runtime/trace.FlightRecorder allows you to trigger a trace capture of the last few seconds at any time.
To help improve Go 1.26 run your tests on code built with Go 1.25 and the experiments turned on (GOEXPERIMENT=greenteagc,jsonv2). You don’t need to make any code changes but don’t forget that production code should not be built with the experiments on. Please report any issues to the Go Issue Tracker.