Thoughts on "Go statemement considered harmfull"

Question from a newbie:
So here, although you awaited the call, tasks might still be running? Doesn’t this go against the very definition of await? And if the nursery abstraction should be thought of as equivalent to the allocator abstraction, why would we need a particular language construct to deal with asynchronicity?

In many concurrency models (Go, Python asyncio, futures, callback-based IO…) foo() could schedule a task and return. The new task is then executed concurrently. It’s the user’s responsibility to keep track of some kind of reference to this new task and communicate it to the caller (using channels or the return value for this). The caller can then wait for the result of the new task. Most modern systems will issue some kind of warning if a concurrent task is never awaited or if the program exits while such tasks are still running.
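A minimal Go sketch of that pattern (the names foo and result are illustrative, not from any particular proposal): the scheduled task hands its result back through a channel, and it is entirely up to the caller to keep that reference and await it.

```go
package main

import "fmt"

// foo schedules a concurrent task and returns immediately.
// The returned channel is the "reference" to the task that the caller
// must keep and eventually await; nothing forces it to do so.
func foo() <-chan int {
	result := make(chan int, 1)
	go func() {
		result <- 42 // the task's result
	}()
	return result
}

func main() {
	ref := foo()       // the task may still be running at this point
	fmt.Println(<-ref) // the caller explicitly awaits the result
}
```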

Regarding the syntax, I used informal conventions; I don’t want to get into syntax details at this point.

It was posted by someone other than Andrew in the discussion, so the idea isn’t sanctioned; it’s just that it has been discussed in conjunction with async/await.

Yeah, for my part I didn’t want to pollute the GitHub issue with very speculative and vague concepts; that’s why I posted here first, to brainstorm and shake the idea out.

The huge difference with a wait group is that you must explicitly have a reference to a living nursery to create a concurrent task.
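A rough Go sketch of that property, using a hypothetical nursery helper (all names here are made up for illustration, not an actual API): tasks can only be spawned through a nursery reference, and the scope that created the nursery joins every task before returning.

```go
package main

import (
	"fmt"
	"sync"
)

// nursery is a hypothetical scope that owns the tasks spawned through it.
type nursery struct{ wg sync.WaitGroup }

// spawn starts a task that the nursery keeps track of.
func (n *nursery) spawn(task func()) {
	n.wg.Add(1)
	go func() {
		defer n.wg.Done()
		task()
	}()
}

// withNursery creates the nursery, hands it to the body, and joins all
// spawned tasks before returning, so no task can outlive this call.
func withNursery(body func(n *nursery)) {
	n := &nursery{}
	body(n)
	n.wg.Wait()
}

func main() {
	withNursery(func(n *nursery) {
		n.spawn(func() { fmt.Println("task 1") })
		n.spawn(func() { fmt.Println("task 2") })
	})
	fmt.Println("all tasks finished")
}
```

The point is not the implementation (it is just a wrapped wait group) but that spawn is only reachable through a nursery that some enclosing scope is committed to waiting on.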

would it be fair to say, when comparing it to a wait group, that it just enforces that all tasks are created from a wait group? Beyond that I’m not seeing that big a difference.
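For contrast, a plain sync.WaitGroup sketch: nothing ties the go statement to the group, so “all tasks are created from the wait group” is only a convention, not something the construct enforces.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup

	// Nothing ties goroutine creation to the wait group: the go statement
	// works with or without it, and a forgotten wg.Add/wg.Done is at best
	// noticed at runtime.
	wg.Add(1)
	go func() {
		defer wg.Done()
		fmt.Println("tracked task")
	}()

	go func() {
		fmt.Println("untracked task, may simply be lost when main exits")
	}()

	wg.Wait() // only waits for the tracked task
}
```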

They can always create a nursery and assign it to a global to get around the guarantees, it seems.

Things like group wait and cancellation are in wait groups too and seem more of a side topic. Clean cancellation has much bigger hurdles than not being able to do them all at once.

Absent lexical guarantees from the compiler and language, I’m still not seeing much of a difference. The idea has some merit in preventing the leaking of tasks, and I can see the parallels with memory and other allocated resources (e.g., file handles).

Related: GitHub - sourcegraph/conc: Better structured concurrency for go

btw, they also mention the article from the OP :smiley: which I just read again, with a similar conclusion: a good idea behind too much marketing for my taste.

Hey, don’t take it personally :). go in this piece is a concept that encompasses almost all existing concurrent programming patterns, in any language. It’s just that the name suits the pun well, and the parallel with goto is relevant, imho.

It breaks the call stack, it breaks error handling in the flow (somewhat related), it breaks abstraction and black-box reasoning, and it leads to spaghetti code that is hard for a human brain to grasp. The fact that we got used to those inconveniences doesn’t mean that we shouldn’t acknowledge they exist or shouldn’t try to address them. We’re just like the old-timer goto folks.
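To make the call-stack and error-handling point concrete, here is a small Go illustration (my own example, not from the article): a synchronous call can return its error to the caller, while the same work launched with go has no caller left to return to.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

func work() error { return errors.New("boom") }

// Synchronous call: the error flows back up the call stack to the caller.
func synchronous() error {
	if err := work(); err != nil {
		return fmt.Errorf("synchronous: %w", err)
	}
	return nil
}

// Detached call: by the time the error exists, the function that spawned
// the goroutine has already returned, so there is no caller to hand it to.
func detached() {
	go func() {
		if err := work(); err != nil {
			fmt.Println("error with no caller to return to:", err)
		}
	}()
}

func main() {
	fmt.Println(synchronous())
	detached()
	time.Sleep(100 * time.Millisecond) // crude wait so the goroutine gets a chance to run
}
```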

Some patterns do alleviate this a bit; Go, Twisted, Elixir, Rust async, promises, futures, etc., to name a few, are arguably better than the “good” old JS callback hell, but they still have their flaws, their footguns, and a high irreducible complexity.


While I agree with you, I think function calls alone are technically enough to break black-box reasoning, if the language doesn’t have strict constraints/limits on what a function can do.

You can do many things: you can do trampolining (your function spins in a while loop calling whatever function the previous sub-call gave you to continue with, effectively breaking the normal call stack and half inventing a new one), you can have a bunch of functions that use CPS (continuation-passing style) to get a very dynamic-looking call stack, and if you have tail calls you can write your program so that it never actually returns. You could use all of those things to implement your own sub-language with its own control-flow constructs. (Interpreters sometimes use techniques like this.)
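For instance, a minimal trampoline sketch in Go (names like thunk and trampoline are just illustrative): control flow lives in a loop that keeps invoking whatever continuation the previous step handed back, rather than in the ordinary call stack.

```go
package main

import "fmt"

// thunk is the next step of the computation, or nil when it is finished.
type thunk func() thunk

// trampoline repeatedly invokes the returned continuation instead of
// letting calls nest, so the real call stack never grows.
func trampoline(t thunk) {
	for t != nil {
		t = t()
	}
}

// countdown returns a step that prints n and hands back the next step,
// replacing what would otherwise be a recursive call.
func countdown(n int) thunk {
	return func() thunk {
		if n == 0 {
			return nil
		}
		fmt.Println(n)
		return countdown(n - 1)
	}
}

func main() {
	trampoline(countdown(3)) // prints 3, 2, 1 without deepening the stack
}
```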

I am not saying it is a good thing to do this, just that in most languages this is possible with ordinary-looking functions, and thus you don’t really have “black box” reasoning.

You at least have to prohibit some of those things, like functions that loop forever, and have some limits on what functions are allowed to do, use, etc.
Otherwise the black-box reasoning is:

  • either all the functions involved behave nicely and eventually return,
  • or one of them is badly written or malicious and the program may:
    • never halt
    • consume all the memory and resources it can get
    • encrypt all the data it can write to, like ransomware
    • send all data somewhere else
    • etc.

I don’t think the abstraction needs to handle all those things to be useful. I mostly want to say that I don’t think black-box reasoning is something you really get from using this abstraction alone; I think you only get that if it is tightly integrated with a language that gives you a lot more control over what functions are able to do.

The language controls the scope of the possible programs; if you constrain your language to disallow certain things, that may allow you to do some reasoning without looking at the implementations of the functions.

Overall I think there are different domains, and depending on what kind of program you want to write, the effort it takes to prove things about that program, and the usefulness you get from having those things be verifiable, may vary widely.

The limited scope of what you can do with a regex is very useful, but a program written by a single person may not need to restrict or prove what it can do. On the other hand, it would be nice to be able to disallow libraries from using the network or the filesystem without having to look at their source code to verify that manually.
