Go 1.23
A lot has been happening. I got caught up in watching the Paris Olympics and did not realise that the Go 1.23 RC was already out. (I like to explore the RCs so I am ready to tell you all about the release when it arrives :)
Go 1.23 is significant as we now officially have iterators with the range-over-function language addition (previewed in Go 1.22). Iterators, and their use, lead to all sorts of interesting topics, such as why not just use channels and goroutines, why Go has secret coroutines, and the good and bad of passing around slices, all of which I discuss later.
This is a big release so I have restricted my discussion to:
- Iterators - uses, but mainly implementation
- Compiler telemetry - see my rant below
- Cool enhancements to traces and PGO
- Library additions like unique package
There are also some hidden surprises, not mentioned in the release notes:
- quantum-resistant algorithms in the crypto package
- coroutines added to the Go runtime (to enable “pull” iterators)
- a way to preserve field order in structs (the new structs package)
Background
General
Iterators in C++ (STL)
I discovered the joys of iterators when I first started using the C++ STL library (1997?). STL was a ground-breaking piece of software made possible by the new C++ templates. (Templates in C++ were what many languages now designate as generics).
STL can be loosely characterised as having 3 parts:
- Containers (generic types): vectors (a bit like Go slices), maps/sets (balanced-tree and hash-table), linked-list, deque, etc
- Algorithms (generic functions): comparing, copying, filtering, digesting, sorting, searching, permuting, etc
- Iterators: which allowed containers and algorithms to work together seamlessly
Side Note: Iterators in Go 1.23 are roughly equivalent to forward iterators in STL.
You may have heard the old adage that “programming = data structures + algorithms”, corresponding to the first two items above. But there is a third important aspect of programming – where iterators are crucial – decomposing a problem into smaller problems. This goes by lots of names, such as modularity, separation of concerns, information hiding, encapsulation, and decoupling, but basically boils down to the age-old principle of “divide and conquer”, as I discussed in my old blog Handling Software Design Complexity.
There are many ways to do this, but they generally have some sort of performance cost, sometimes a large one. The great thing about iterators is they efficiently decouple algorithms from data (containers).
The beauty of iterators is they enable decoupling with very little performance impact.
“Iterators” in C
STL iterators are just a generalisation of pointers which have been used in C since it was invented. In fact, pointers work with STL algorithms (much the same as any STL iterator) when you are using an array as the “container”.
It’s fair to say that a great deal of C’s success is down to performance. A major part of that is how pointers were very efficiently used to iterate over an array, especially given the instructions provided on the first computers on which C ran (DEC minicomputers).
Language Design C vs Go
This leads me to digress onto the interesting topic of how hardware developments influence language design. C and Go made different design decisions.
This section is purely conjecture on my part so please don’t ask for references. And I am open to any mistakes in my logic being pointed out.
Closely related to C’s iterators (ie, pointers) is the way Ken Thompson designed C’s arrays. C arrays are implicitly converted to a pointer to the first element of the array, since using arrays “by value” would involve lots of memory copying, which was an expensive operation.
OTOH Go was created much later. Copying of large amounts of memory is now more acceptable due to Moore’s law and the way hardware, such as caches, has developed. In Go (unlike C) if you use an array in an expression you get its “value”, ie, a complete copy of it. Moreover, many standard library functions (e.g. io.ReadAll(), strings.Split()) return complete slices, which would have been anathema to the C standard library.
I should qualify this by saying that Go still has a high (but not overwhelming) regard for efficiency. For example, the slice in Go is just a more sophisticated variation on the way C arrays convert to pointers. I don’t think this is a coincidence as both were invented by Ken Thompson: array/pointers in B/C and arrays/slices in Go.
struct field optimization in C
Apart from iterators Go 1.23 has the beginnings of something I have been keen on for a while - the ability to have more control of the layout of fields in a struct. C allows a lot of control with:
- fields are laid out in memory in the same order they are declared
- control of padding (unused bytes) between fields using #pragma pack
- different sizes of integers and floats
- bit-fields for bits/very small integers
For more about how this works in C see this
Having this sort of control allows:
- reduction in memory requirements in memory constrained environments
- mapping a struct into an external binary memory layout
Nowadays, these things are less important since we have lots of memory and usually use some sort of text encoding to exchange data.
Moving from C to Go I was concerned when I discovered Go does not provide any control of the layout of a struct in memory. In reality I have never really needed this except in one situation - saving memory when creating a large number of structs on the heap - see structs package below.
Go
State of Go Generics
First, I’ve been intending to write about the impact of generics since Go 1.18. This seems like a good time since without generics we would not have iterators.
When I first used Go I quickly grew to love it. There were a few things I missed, but I understood how their inclusion might make the language harder to use or more difficult to evolve. But there was one thing that I believed could really make Go simpler: parametric polymorphism (aka generics).
My first glimmer of hope was when Ian Lance Taylor gave a talk about 5 years ago (Gophercon 2019) on proposals for adding generics to Go. After my own experiences with C++ templates (early 90s), STL (late 90s) and C# generics (2005) I was very keen for this addition. I even wrote a specialised set container in 2021 (see rangeset) using the experimental “go2go” transpiler and spoke about Go generics at the Sydney Go meetup.
At the time there were all sorts of objections to adding generics to Go, such as:
- they will lead to code being written that is impossible to decipher (like C++)
- generics won’t be used since we already have slices and maps
- they will be over and inappropriately used, much like channels were when they were added to Go
Well, none of these things happened. They haven’t got as much use as expected (by me and others) but they are occasionally very useful for productivity and for simplifying code. I am particularly enamoured with the generic functions of the slices package.
I believed that something with real potential was channels of generic types which could be used to create “streams” that could be connected together much like Linq in C#, and Streams in Java 8. But better than in any other language you could easily create concurrent processing in Go using channels to communicate between goroutines.
Channels of generic types have the potential to create an ecosystem of concurrent stream processing.
I was hoping a package that enabled this sort of thing would appear in the Go standard library (and a set package also, BTW). I am not sure why this has not happened - maybe it is waiting for iterators. I may have to create one myself if I can find the time. Please tell me if you know of such a package.
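As a sketch of the sort of thing I mean (the gen and mapStage names here are my own invention, not from any existing package), generic channel stages can be chained like this:

```go
package main

import "fmt"

// gen sends the given values on a channel, closing it when done.
func gen[T any](vals ...T) <-chan T {
	out := make(chan T)
	go func() {
		defer close(out)
		for _, v := range vals {
			out <- v
		}
	}()
	return out
}

// mapStage applies f to each value from in, running concurrently
// with the producer goroutine, and closes its output when in closes.
func mapStage[T, U any](in <-chan T, f func(T) U) <-chan U {
	out := make(chan U)
	go func() {
		defer close(out)
		for v := range in {
			out <- f(v)
		}
	}()
	return out
}

func main() {
	// connect two stages: generate ints, then square them
	squares := mapStage(gen(1, 2, 3), func(i int) int { return i * i })
	for v := range squares {
		fmt.Println(v) // 1, then 4, then 9
	}
}
```

Each stage runs in its own goroutine, so a long pipeline gets concurrency “for free” - which is exactly the ecosystem of concurrent stream processing I mean.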
Go before Iterators - with Slices
A common pattern in Go has been to build and return a slice. This can be seen in standard library functions such as strings.Split(), which decouples the splitting of a string into sub-strings from the use of those sub-strings. This pattern even continued after the introduction of generics to Go, with maps.Keys(), etc.
This works OK but has always grated on me since it is not scalable (and not the sort of thing you see in C :). If the slice is large you may unnecessarily use a lot of memory, add to GC load, and do more up-front work (instead of amortizing the cost over the uses of the values).
It’s particularly wasteful if the caller only wants to use element(s) near the start of the slice, as in: strings.Split(longString, sep)[1].
Go before Iterators - with Channels
I was not really that keen to have iterators in Go since Go already provides channels. You can avoid the above-mentioned problems of returning a slice by instead returning a channel. Moreover, you can range over a channel just as easily as a slice (or an iterator).
This is the approach I took in 2022 for my rangeset generic container (see rangeset Iterator method). The following code uses the Iterator() method, which returns a channel providing the values of the integer set in numeric order.
set := rangeset.Make(2, 5, 3) // make set of 3 ints
for v := range set.Iterator(context.Background()) {
	println(v) // 2 3 5
}
Of course, if you want to terminate the iteration early you use the context parameter like this:
set := rangeset.Make(2, 5, 3) // set of 3 ints
ctx, cancel := context.WithCancel(context.Background())
for v := range set.Iterator(ctx) {
	if v > 3 {
		cancel() // **IMPORTANT**
		break
	}
	println(v)
}
Note that cancel() must be called if you break or return from the loop, otherwise there will be a goroutine leak.
It is safer, and harmless, to always call cancel() on a cancellable context – for example, later code changes may add another break. So it would be better to put the call to cancel() after the loop, or even defer it, in case a return is used within the loop.
set := rangeset.Make(2, 5, 3) // set of 3 ints
ctx, cancel := context.WithCancel(context.Background())
defer cancel() // **IMPORTANT**
for v := range set.Iterator(ctx) {
	if v > 3 {
		break
	}
	println(v)
}
So why do we need iterators? Why not just use channels? Arguments I’ve seen:
- for a simple iteration it would be more efficient to not create an extra goroutine (use coroutines?)
- concurrent operations mean you have to be careful of data races (debatable - see below)
- you can get a goroutine leak if you stop early and don’t somehow cancel the iteration or drain the channel
- using channels as iterators (though not implementing them, IMHO) is a little more complicated
- channels are slower but usually not significantly
John Arundel wrote an excellent blog about iterators where he compares them with using channels (see Iterators in Go). His main argument against channels seems to be that because you need to create a new goroutine then your program is “probably incorrect” and he extends the invitation “(Prove me wrong, Gophers. Prove me wrong.)”. It’s impossible to prove him wrong, but it is a rather pessimistic outlook and means almost all the Go code I have written is “probably incorrect”!!
However, I think I have come around to liking the idea of iterators, mainly because they are a little simpler to use, since you don’t have to create a context or find some other way to avoid goroutine leaks.
Go before Iterators - with Functions
One use of iterators is to perform some operation with, or on, the elements of a container. This is often done in a functional style: you pass a function to an Iterate function or method, which calls it with each element in turn. For example, using my rangeset package (see Iterate method) I can easily add up all the values in the set like this:
set := rangeset.Make(2, 5, 3)
sum := 0
set.Iterate(func(v int) {
	sum += v
})
println(sum) // 10
This works fine for something simple but can get very messy when you want an iterator that terminates early, or you need a pull iterator (discussed later).
Actually, this is basically how iterators have been added in Go 1.23 - a simple enhancement to range loops to use an iterator function. That, plus a bit of support via the addition of the iter package to the standard library, and the (hidden) coroutines that allow push iterators to be converted to pull iterators.
Coroutines
When I first heard about iterators being added to Go it seemed to be tied to the addition of coroutines to the language. It turns out that coroutines were added to the Go runtime (to facilitate pull iterators), but are apparently only available to the standard library.
What are coroutines?
In brief, coroutines were invented a long time ago as a generalisation of subroutines, but it turns out that most of the time we can get by without them, so most languages don’t provide them. In Go, we can use goroutines to get a similar effect using channels, although it’s not exactly the same as it requires separate threads, and a scheduler to place said goroutines on the threads. If this sounds like nonsense see the excellent talk Raghav Roy - Coroutines and Go from Gophercon AU last year.
I am still investigating whether I can use coroutines in “normal”, but unsafe, Go code. I suspect coroutines will make their way into the Go language one day, as the unique package has done in Go 1.23.
Iterators
John Arundel has an introduction to these new Go iterators - see Iterators in Go. Please read John’s stuff if you need an introduction.
Me? I am just going to jump straight into the nitty-gritty of how iterators work.
Range Over Function
Iterators in Go 1.23 use a lot of functional programming, so you need to pay special attention. When using the new range over function feature, the value of the range expression is a function that takes exactly one parameter (and returns nothing) – we will call this the iterator function. The (single) parameter to the iterator function is also a function that takes up to two parameters and returns a bool – we will call this the yield function. To add to the confusion the iterator function is often returned from another function that can take any number of parameters which are then “captured” by the iterator function (closure).
Hopefully, an example will clarify this, but first here’s a quick reminder of the syntax of range loops…
You will recall that Go previously only allowed you to range over slices, arrays, strings, maps, and channels (and ints since Go 1.22), and that you can have up to two loop variables. But the types of the loop variables are restricted, eg: for a slice the 1st loop variable is the index (always an int); for a channel only one loop variable (the received element) is allowed; etc.
for i, v := range mySlice { // two loop variables
for k, v := range myMap { // two loop variables
for k, _ := range myMap { // one loop variable
for v := range myChan { // one loop variable
for range 10 { // no loop variables
With range over function the yield function’s parameters correspond to the loop variables but there are no restrictions on their types except that you can have no more than 2. Note that the yield function usually takes one or two parameters but can even have no parameters, though I have yet to see a good use for this.
OK, here’s an example where the yield function has one parameter, so the range loop can only have one loop variable v.
// iteratorFunc provides the first 7 integers in order
func iteratorFunc(yieldFunc func(int) bool) {
	for i := 0; i < 7; i++ {
		if !yieldFunc(i) {
			return
		}
	}
}
...
// use Go 1.23 range over function feature
for v := range iteratorFunc {
	println(v) // prints 0 1 2 3 4 5 6
}
The compiler takes the loop code and the iterator function to effectively build the “inline” code below. Note that, since the loop body does not include any control flow statements like break, continue, goto, or return, the value returned by yieldFunc is irrelevant and the return inside the if is dead code.
// effective resulting code
for i := 0; i < 7; i++ {
	println(i)
}
Let’s look at a more complicated example. This one provides two loop variables (the yield function has two parameters). It also breaks out of the loop early.
func iteratorFunc(yieldFunc func(int, float64) bool) {
	for i := 0; i < 7; i++ {
		if !yieldFunc(i, math.Sqrt(float64(i))) {
			return
		}
	}
}
...
for v, root := range iteratorFunc {
	if root == 0.0 {
		continue
	}
	if root >= 2.0 {
		break
	}
	println(v, root)
}
Which effectively becomes:
for i := 0; i < 7; i++ {
	if !func(v int, root float64) bool {
		if root == 0.0 {
			return true // continue
		}
		if root >= 2.0 {
			return false // break
		}
		println(v, root)
		return true // end of loop body
	}(i, math.Sqrt(float64(i))) {
		break // yield returned false
	}
}
Capturing parameters in Iterator Functions
Hopefully you now understand how the iterator function and yield function are used by the compiler. Now we look at the further refinement where another function returns the iterator function. This is done in order to capture parameter(s) passed to it.
This example is similar to the above except that the iterator function is returned from the intsAndRoots function. Further, the number of iterations is passed as the max parameter, which is captured by the iterator function so it knows when to stop the loop.
func intsAndRoots(max int) func(func(int, float64) bool) {
	return func(yield func(int, float64) bool) {
		for i := 0; i < max; i++ {
			if !yield(i, math.Sqrt(float64(i))) {
				return
			}
		}
	}
}
...
for v, root := range intsAndRoots(4) {
	println(v, root) // 0 0 1 1 2 1.414... 3 1.732...
}
Iterators and Generics
At this point we really should investigate using iterators with containers.
The advent of generics in Go 1.18 started a boom in packages implementing containers (such as my rangeset :). And with containers comes the requirement to efficiently visit each element. IMHO this has been the main impetus for adding iterators to Go 1.23.
In a way, the addition of generics led to the need for iterators. OTOH generics have also been extremely useful in the implementation of iterators, as we will see in the next section on the new iter package.
As an example, here’s an iterator that presents the elements of a slice in reverse order. This shows that iterators are useful for built-in containers like slices, not just for generic containers. (BTW I love this example as reverse iterators were one thing I really found useful in STL.)
Note that in the code below, Backward() is a generic function so that it can work on slices of any type. In this case we show its use with a slice of int.
// Backward returns a generic iterator yielding elements in reverse order
func Backward[T any](s []T) func(func(int, T) bool) {
	return func(yield func(int, T) bool) {
		for i := len(s); i > 0; i-- {
			if !yield(i-1, s[i-1]) {
				return
			}
		}
	}
}
...
for _, v := range Backward([]int{2, 5, 3}) {
	println(v) // 3 5 2
}
Standard Library Iterator Support
Iter
There has also been a bit of support for iterators added to the standard library. There is a new package called iter which has types that can make code easier to read, principally the new Seq and Seq2 types. These are discussed elsewhere, so I’ll just give a quick example of using Seq2 with the above Backward() function.
func Backward[T any](s []T) iter.Seq2[int, T] { ...
Maps
There are also some new functions in the maps package which return iterators:
- func maps.Keys[Map ~map[K]V, K comparable, V any](m Map) iter.Seq[K] - iterator over the keys of a map
- func maps.Values[Map ~map[K]V, K comparable, V any](m Map) iter.Seq[V] - iterator over the values
- func maps.All[Map ~map[K]V, K comparable, V any](m Map) iter.Seq2[K, V] - iterator over keys and values
This explains why the corresponding functions in golang.org/x/exp/slices and golang.org/x/exp/maps were not ported into the standard library in Go 1.21. Those functions returned slices, since Go at that time did not have iterators. Obviously, the names were being kept in reserve awaiting the addition of iterators.
func maps.Keys[Map ~map[K]V, K comparable, V any](m Map) []K - slice of keys of a map (from golang.org/x/exp/maps)
Slices
The slices package has similar slices.All and slices.Values functions, as well as slices.Backward (like my version above), which returns an iterator giving the slice elements in reverse order. There is also slices.Collect(), which builds a slice from an iterator, though you probably won’t have much use for it until you start using iterators a lot.
Missing?
One thing I would have expected is a variation on strings.Split() that returns an iterator instead of a slice. As I mentioned in Go before Iterators - with Slices (in Background/Go above), this sort of thing is not efficient at scale. I suspect the Go team were too busy/cautious to add it yet; perhaps it will appear in Go 1.24, along with alternatives for other standard library functions that return slices.
Go Tools
Telemetry
We have seen big strides in recent years in the ability of software to detect and diagnose internal problems, e.g. the rapid uptake of Open Telemetry. Telemetry improves a software quality attribute that I have (for the last 30+ years) called debugability. For details, see my old blog post on Software Quality Attributes. In brief, debugability is how easy it is to find the cause of a bug, whereas verifiability is how easy it is to detect bugs and maintainability is how easy it is to fix bugs.
❝ debugability is how easy it is to find the _cause_ of a bug ❞
Anyway, telemetry is usually associated with backend (ie, server) software, but can be useful for anything. Russ Cox wrote a detailed and convincing argument (see Transparent Telemetry - not to be confused with Open Telemetry) as a precursor to adding this to the Go compiler.
I can think of many benefits to the Go team, such as:
- detect compiler panics even if not reported
- detect build failures due to accidentally introduced dependencies
- detect problems such as poor build times in unusual environments
- detect poor code generation for difficult to test setups
- avoid work on features that nobody ever uses
- concentrate first on issues affecting the most compiler users
- become more quickly aware of new issues
- get accurate information rather than poorly reported defects
- get a much better idea of how many devs use Go and how much :)
Controversy
Unfortunately, as soon as there was a proposal to add telemetry to the Go toolchain there was a lot of negativity from many people who did not fully read and/or understand the proposal. In some quarters there was distortion, if not complete misrepresentation, of the proposal, such that some people thought telemetry was to be added to executables that the Go compiler produced!
All the Google and Go detractors came out of the woodwork. (I guess there are a lot of Google detractors simply due to the size of the company, and because some parts of Google do not have a good track record when it comes to regard for privacy.) However, the Go team were very careful to consider privacy from the very start, as you can see if you look at the above links. Moreover, everything that is collected is in the public domain (anonymised of course).
I think this shows that even the dev community is not immune to the disinformation that has plagued society since the advent of so-called social media.
For the record I will state that I have never worked for Google and have no plans to do so. I use Go professionally and I have worked on the Go compiler.
Opt in or opt out?
The initial proposal for Transparent Telemetry in the Go compiler was that it be opt-out (see Opting In to Transparent Telemetry). In other words, by default the compiler would remember a few things about how you used the compiler and this information would become public (anonymized, of course) and hence available to the Go compiler team at some point in the future.
Unfortunately, due to the above-mentioned controversy this was changed to be opt-in. So, by default, the compiler does not collect any telemetry. This is a real shame, as the results will be distorted, as many have complained (e.g. see Making Go Telemetry opt-in is a mistake).
After installing Go 1.23 please turn this option on:
go telemetry on
For a full explanation see Go Telemetry
Trace Tool
As I mentioned last time I was ecstatic about the fixes to the Go Execution Tracer (see Execution Tracer). If you’ve tried it in the past and been frustrated try it again - it now works!
One remaining problem was that if your software crashed while you were capturing a trace you would lose the end of the trace. This could be annoying if the crash was the very thing you were trying to trace. Apparently the execution tracer now handles crashes better and/or the Go tools better tolerate corrupt traces.
PGO
There’s a lot to like about PGO, but one of the things I noticed (see Profile Guided optimisation) is that using it slows down builds (in Go 1.22). This acts as a disincentive to use it.
I am pleased to report that this has been addressed. In fact I could not see any appreciable difference in the times.
There have also been some improvements to the generated code. Code for “hot loops” is now memory-aligned (amd64/386 only).
And there is still room for further PGO optimisations!
Standard Library
iterator support
I covered additions to the standard library to support iterators above.
unique package
This is a cute little package that (I thought) I had no immediate use for, so initially only gave it a cursory glance. It’s useful for performance in some esoteric situations. But I found it more useful for avoiding memory “leaks” - ie redundant entries hanging around in long-lived maps. I’ll explain this shortly, but first let’s quickly look at how it works.
The only function provided by the unique package is unique.Make[T](), a generic function that stores, or interns, values in a system-wide “container” for the specific type T (the type parameter) and returns a “handle” to the stored value. You can call the Value() method on a handle to get back the actual value, and you can compare handles, usually more efficiently than comparing the underlying values.
Of course, you must provide a comparable type for the type parameter T, for obvious reasons. For what “comparable” means in Go see Comparability from my blog in 2023.
The common code example for unique uses strings, like this:
u1 := unique.Make("abc") // inferred type parameter
u2 := unique.Make("abc") // same handle value as u1
u3 := unique.Make[string]("xyz") // explicit type parameter
println(u1 == u2) // true
println(u1 == u3) // false
An obvious problem that you may have noticed (but I initially failed to see :( ) is that there is no way for you to delete objects. This brings me to the coolest feature of unique: once all handles to an object are no longer used the underlying object that the handles “point” to is deleted.
This is very useful for an issue I had involving a long-lived map that kept track of a large number of objects which were referenced (by key) in many other places (maps, slices, etc). The problem was that unused entries in the map would hang around, and the various approaches used to track them as they were added, or to regularly clean them up, were tedious and error-prone. Using unique instead of a map neatly avoids this problem.
In summary, you should use unique if you have one (or more) of the following situations:
- you need to improve memory usage, and have a (possibly large) number of (possibly large) objects many of which are identical
- you need to improve CPU usage, and you compare objects much more than you use their actual value
- you have a map with a long life-time, and it is difficult to determine when entries should be deleted
I was going to give examples of these 3 uses - maybe later…
HostLayout (structs package)
Using unique is one way to save lots of memory if you have a lot of structs. Another is to make your structs smaller so they fit in a different memory size class.
Side-note: Part of the reason that Go GC works so well is that it only allocates blocks of certain memory sizes (see runtime/sizeclasses.go). If you ignore this and have a lot of structs on the heap you can waste a heap of memory. For example, if you have a struct
that is just one byte larger than one of these sizes it will have to go to the next size up - possibly almost doubling memory use.
As I mentioned in struct field optimization in C (in Background above), you can optimise struct memory usage (in C) by reordering fields, changing padding, etc, but Go gives you (almost) none of this ability. One problem is that the Go spec even says that the order of fields in a struct may not be preserved.
The Go spec says this, I believe, to allow some future Go compiler to automatically optimise field order, but this optimisation has never been attempted or seriously considered AFAIK. I always strongly doubted it would ever happen because it would break too much code. (Ed: I just found this discussion from almost 10 years ago.)
So despite the Go spec., it’s a good idea to reorder fields in a struct in Go. For example:
type T struct {
	Ignore   bool     // offset 0
	Args     []string // offset 8
	HasError bool     // offset 32
} // len 40
will use 40 bytes (even though a slice uses 24 bytes and a bool uses 1) due to padding after the bools. Further, when placed on the heap it will go in the 48-byte size class.
type T struct {
	Args     []string // offset 0
	Ignore   bool     // offset 24
	HasError bool     // offset 25
} // len 32
This simple change means it fits in the 32-byte size class, saving 16 bytes each time.
However, doing this has always posed a quandary to me as the behaviour is not specified. That is, I can’t rely on the Go spec to ensure this will always work.
Luckily, in Go 1.23 there is a new type called HostLayout declared in the new structs package. You don’t really need to know anything about this type except that it has zero size, and if you declare a field of this type in your struct then the compiler won’t fiddle with your layout. Use it like this:
type T struct {
	_        structs.HostLayout // Go 1.23
	Args     []string
	Ignore   bool
	HasError bool
}
Warning: A field with a zero-size type (like struct{} or [0]int) may add to the size of a struct if it is the last field declared. It’s recommended to place the HostLayout field at the start of the struct.
BTW, looking at the original proposal, it seems that HostLayout was ostensibly added to allow exchange of binary data with the “host” operating system or other external sources. However, I find it much more useful for ensuring the Go compiler will not mess with your layout.
crypto (tls package)
The security of the internet, electronic banking, and the transfer of just about all confidential information in commerce and industry depends on public key cryptography. Years ago we were all extremely confident in this situation, since the algorithms used have a sound basis in maths, and even if one algorithm was found to be vulnerable it could be replaced fairly quickly with another.
Then along came quantum computers, and we discovered that all the algorithms we thought impervious are susceptible. (Personally, I felt that (widespread) quantum computing wasn’t feasible, but with every advance I become less and less sure. :)
Fortunately, new quantum resistant algorithms have been developed which can be run on conventional computers. Several quantum safe algorithms have been or are close to being standardised. We should be adopting these ASAP for important data, especially with the threat of Store Now - Decrypt Later.
I was very pleased to discover (by accident, when looking at the crypto/tls code) that the standard library now supports X25519Kyber768 for TLS 1.3. This algorithm will be used in Go 1.23 as long as the other end supports it. Note that this is not mentioned in the release notes, only in crypto/tls/common.go.
timers (time package)
The time.Timer and time.Ticker types are extremely useful but (were) a bit tricky to use.
One thing I found is that if you create a timer then stop using it, without calling the Stop() method, the timer will not be garbage collected until it goes off. This is not a problem for a single timer but (eg, if you use a lot of long-duration timers to detect timeout errors) you might end up with thousands or even millions of unneeded timers hanging around, using memory and adding to GC load.
Tickers have the same issue and, seemingly worse, they are never garbage collected unless Stop() is called. However, this is not normally a problem because tickers usually run in a loop, making it hard to forget to call Stop().
In Go 1.23 unused timers are immediately available for garbage collection.
At the same time a couple of other fixes were made to timers, that avoid problems though they seem unlikely to affect anyone (but see Go 1.23 Timer Channel Changes for details). However, one improvement may break existing (poor) code…
Before Go 1.23 the timer’s channel member C had a capacity of 1. I think this was to prevent standard library code blocking, but it seems a better solution has been found. Now the channel is unbuffered, so calling len() (or cap()) on it returns zero, whereas previously it could return 1.
Code that uses the length of the channel to tell if the timer has gone off will now fail.
if len(tmr.C) > 0 { // always false in Go 1.23+
	fired := <-tmr.C
	// use fired value
}
However, this sort of code would hopefully never be used. For one, it is unreliable in a concurrent environment (e.g. another goroutine might read from the channel after the test but before the receive, whence fired gets a default time.Time value, which would be thousands of years out!). Instead, a non-blocking select should be used:
select {
case fired := <-tmr.C:
	// use fired value
default: // timer has not yet gone off
}
If you have old code that you suspect has this problem, and don’t have time to change it, you can get the old behaviour with the GODEBUG setting asynctimerchan=1.
Conclusion
As we’ve seen Go 1.23 has a lot of useful stuff and a few surprises. Have a play around with iterators and don’t forget to turn on compiler telemetry.
$ go telemetry on
There are some changes that could break existing code (e.g. using the time, tls, etc packages). This should not stop you from upgrading to the new compiler as you can retain backward compatibility by:
- using the GODEBUG environment variable
- using the module’s go.mod file
- using a //go:debug directive
See Go, Backwards Compatibility, and GODEBUG.
There are a few other things in the Release Notes that I did not think important (but you may disagree :).