NIMBY Rails devblog 2025-02
Stacked stop logic rationalization (1.16)
DISCLAIMER: this section is an exercise in game rules lawyering that maybe 1% of the player base cares about. The TL;DR is that 1.16 will do a smarter but different thing when two consecutive train stops are the same, for some new definition of “same”.
With the rewrite and separation of the train driving code and scheduling code in 1.16, I needed to review and rewrite some old rules codified in the game logic. Some of these rules are of the “the pieces just fell on the floor this way” variety: behaviors which were once the result of immature, prototype code, but which many saves and players massively built around. Some have even survived large rewrites in the past in the form of special cases in the code. My “favorite” offender is the stop stacking rule. This rule says that, when a train has 2 or more consecutive stops on the same platform, it must ignore every stop but the last one and immediately set itself as that last stop.
There is no deep, researched reason for that behavior. It’s just the way some old (circa v1.5) code worked. It does not need to be that way. And indeed, after v1.8, the last vestiges of that original code disappeared when “track+point” train pathfinding was implemented. So why is it still working that way in 1.15? Because there’s an incredibly gnarly sequence of 200 lines of code which basically emulates the pre-1.8 train logic in the singular case where the train finds a stacked stop situation.
I’ve wanted to delete it since the second I finished writing that code 2 years ago, and in 1.16 I’ve finally deleted it. Why now? Having a well specified set of rules and making sure trains run every stop is important for the new scheduling logic in 1.16, and for future development too. Plus I don’t want to touch that code ever again. So what’s replacing stacked stops? A set of explicit rules for matching and advancing stops which make sense, including, for the first time, a rule to match schedule timing with train behavior for stacked stops (to the extent this is possible).
For the train driving behavior the rule is easy. Trains never skip stops. That’s it, that’s the rule. Trains will arrive and start their stop for the line and stop they are currently running, without skipping anything. If, when the departure time arrives, the next destination is on the same platform the train is currently stopped at, the train will just switch to that line+stop and start the wait until departure, without moving. This repeats as many times as needed, as long as the upcoming stops keep including the current train platform in their platform set.
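As a rough illustration, here is a minimal C++ sketch of that advance rule, with made-up types and names (this is not the actual game code): at departure time the train either adopts the next stop in place, or it has to drive.

```
#include <set>
#include <vector>
#include <cstddef>

// Hypothetical types, not the real game structures.
using PlatformId = int;

struct Stop {
    std::set<PlatformId> platform_set; // main platform + secondary platforms
};

struct Train {
    std::size_t stop_index = 0;   // index of the stop currently being served
    PlatformId  current_platform; // platform the train is stopped at
};

// Called when the departure time of the current stop is reached.
// Returns true if the train switched to the next stop in place (no driving),
// false if it has to actually drive to reach the next stop.
bool advance_stop_in_place(Train& train, const std::vector<Stop>& stops) {
    const std::size_t next = train.stop_index + 1;
    if (next >= stops.size())
        return false; // no further stop to advance to
    if (stops[next].platform_set.count(train.current_platform) == 0)
        return false; // current platform is not valid for the next stop: drive
    // Stacked stop: adopt the next stop without moving and wait for its own
    // departure time; this repeats naturally at each departure time.
    train.stop_index = next;
    return true;
}
```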
Platform set? Central to the new rules is the concept of the “platform set”. The platform set of a line stop is the set of all the secondary platforms plus the main platform. The new rules no longer distinguish between them, at either the train level or the schedule level. So what does this rule look like at the schedule level? In a single sentence: if every possible platform a train might be stopped at is also a valid platform for its next stop, it can be safely assumed the leg will always be 0. This makes the scheduled time match the train behavior, without needing to make sure the main platforms of stacked stops match. It works both at the line level and the order glue leg level. A small code sketch of this check follows the examples below.
Some examples of this rule in action. Uppercase letters are platform names, all in the same station, and (A,B) is the platform set of a stop. (A) -> (B) means going from platform A to platform B: they are in the same station, but they are different platforms, so the train has to drive from one to the other.
- (A) -> (B): NOT in the rule, since set (A) is not included in set (B). The train is forced to drive and it will be scheduled as such.
- (A) -> (A): IN the rule, set (A) is the same as set (A), the train won’t move and the schedule logic will assume a 0 seconds leg.
- (A) -> (A,B): IN the rule. All possible starting points are included in the second stop, 0 leg time.
- (B) -> (A,B): IN the rule. All possible starting points are included in the second stop, 0 leg time.
- (A,B) -> (A): NOT in the rule. One of the possible starting points, B, is not in the second stop’s platform set (this can still result in a 0 leg scheduled time if both main platforms are A, for example, but trains stopping at B will encounter an unscheduled leg).
- (A,B) -> (A,B,C): IN the rule. All possible starting points are included in the second stop, 0 leg time.
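In code terms the rule is just a subset test between the two platform sets. A minimal sketch, assuming plain std::set containers and made-up names rather than the game’s real data structures:

```
#include <algorithm>
#include <set>

using PlatformId = int;
using PlatformSet = std::set<PlatformId>;

// Scheduling-side rule: if every platform the train could possibly be stopped
// at (the previous stop's platform set) is also a valid platform for the next
// stop, the leg can be scheduled as 0 seconds.
bool leg_is_zero(const PlatformSet& from, const PlatformSet& to) {
    // std::includes requires sorted ranges; std::set iterates in sorted order.
    return std::includes(to.begin(), to.end(), from.begin(), from.end());
}

// Matching the examples above:
//   leg_is_zero({A},   {B})     -> false
//   leg_is_zero({A},   {A})     -> true
//   leg_is_zero({A},   {A,B})   -> true
//   leg_is_zero({A,B}, {A})     -> false
//   leg_is_zero({A,B}, {A,B,C}) -> true
```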
New geometric 2D collision check (1.16)
Another dinosaur in the code is the train collision system. And this is a real living fossil, barely touched since v1.2. It has been in need of an overhaul since it was implemented, but it has been forever waiting for the signals update, since the way it works makes it pointless to overhaul without taking signals into account.
So I put it in my backlog and never thought too much about it. But while I was refactoring the train logic in 1.16, I needed to touch the collision code. This is because the code takes some major liberties by assuming 1) it’s possible to modify certain train sim data in parallel and 2) trains always have some speed, or at least a speed value that can be read. 1) was valid back in v1.2 when the game was single threaded, and ever since I have made sure that specific data remained writable in parallel in later versions, just to avoid modifying the collision system. But 2) is not true in 1.16, since, as part of the train logic rewrite, a train having a speed (or being driven at all) is optional.
Touching the collision code was unavoidable. But while I was looking into it, I couldn’t help but shake my head. I realized it was pointless to wait for the signals update, since the whole concept of how collision is handled is limited to the point of being wrong. The collision code used the occupancy and reservation maps of a few of the connected tracks of the track under the train head to detect nearby trains. So it misses many cases, for example: if the other train is 2 segments away, if the other train is on a too-close parallel track, if the other train is on a branch with an extra segment in the middle, and so on.
It doesn’t need to be implemented that way, at all. The natural way of implementing collision is to check the geometry of the train cars on the world map around the train head, independently of the track map (except when different layers are involved). This has always been in my mind, but it has a huge problem: generating and keeping updated a geospatial index of all of these little oriented boxes has a real impact on the sim speed. How much? 30% slower. I know because, after too much head shaking, I decided to take a couple of days off 1.16 and just went and implemented such a system.
But it worked very nicely, handling many previously missed collision cases, and it was compatible with 1.16. So I started optimizing it. The first optimization came from realizing that general purpose spatial indexes, like the RTree I used, can be a lot slower than a special purpose one, so I replaced it with an index of my own design. This new index is built on top of a parallel, lock-free, append-only multimap, which is erased and recycled on every frame, on the recognition that most trains are changing position on every frame anyway, so why pay the cost of rebalancing a tree? The second insight is that the objects being stored in this map have a hard upper size bound, which is very tiny compared to the size of the world. And the queries to be run on the map are also going to use a tiny area. So the new structure is based on sparse rectangle rasterization and point sampling, rather than a tree of multi-level rectangles. You can imagine it as a sparse pixel image of the entire world, which only stores the pixels with a nonzero value. These pixels represent the trains, and the size of the pixel is chosen so most trains need at most 4 pixels. Queries also cover at most 4 pixels. The pixels are indexed in the previously mentioned multimap, with a hash of the pixel coordinate as the key. In the worst case, inserting the train footprint is just 4 hash lookups and writes, and a query is just 4 hash lookups on the multimap. This is extremely fast compared to querying, and especially writing into, an RTree.
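Here is a rough, single-threaded sketch of that idea, using a plain std::unordered_multimap in place of the game’s parallel, lock-free, append-only multimap; the names, the cell size and the hash mix are all assumptions for illustration:

```
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

// "Pixel" (cell) size, picked so a typical train bounding box and a typical
// query rectangle each cover at most 2x2 = 4 cells. Assumed value.
constexpr double kCellSize = 512.0;

struct AABB { double min_x, min_y, max_x, max_y; };

using TrainId = uint32_t;
using CellMap = std::unordered_multimap<uint64_t, TrainId>;

// Hash of a cell coordinate; any decent 2D -> 1D mix works here. Collisions
// only produce extra candidates, which the narrow phase check filters out.
uint64_t cell_key(int64_t cx, int64_t cy) {
    return static_cast<uint64_t>(cx) * 0x9E3779B97F4A7C15ull ^ static_cast<uint64_t>(cy);
}

// Visit every cell overlapped by a box: at most 4 for boxes smaller than a cell.
template <typename Fn>
void for_each_cell(const AABB& box, Fn&& fn) {
    const int64_t x0 = (int64_t)std::floor(box.min_x / kCellSize);
    const int64_t y0 = (int64_t)std::floor(box.min_y / kCellSize);
    const int64_t x1 = (int64_t)std::floor(box.max_x / kCellSize);
    const int64_t y1 = (int64_t)std::floor(box.max_y / kCellSize);
    for (int64_t cy = y0; cy <= y1; ++cy)
        for (int64_t cx = x0; cx <= x1; ++cx)
            fn(cell_key(cx, cy));
}

// The map is rebuilt from scratch every frame: insert each train's footprint.
void insert_train(CellMap& map, TrainId id, const AABB& footprint) {
    for_each_cell(footprint, [&](uint64_t key) { map.emplace(key, id); });
}

// Query: collect candidate trains around an area (duplicates are possible; a
// real implementation would deduplicate before the narrow phase check).
std::vector<TrainId> query_nearby(const CellMap& map, const AABB& area) {
    std::vector<TrainId> out;
    for_each_cell(area, [&](uint64_t key) {
        auto range = map.equal_range(key);
        for (auto it = range.first; it != range.second; ++it)
            out.push_back(it->second);
    });
    return out;
}
```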
I mention “train footprint”. The final optimization was storing just the train bounding box, rather than the individual cars. This means the result of a lookup is a reference to the train, with enough information to recreate the real geometry of the cars. This massively reduced the overhead of having to calculate the full 2D geometry of every car in every frame. The final check is then performed with this 2D geometry calculated on the fly.

The checker train only considers its front “bumper” segment, but all the potential collider trains are fully checked car by car (this rule is enough to catch every possible collision, unless you are simulating derailments, or earthquakes I guess). It considers every geographically nearby train as a collider, not just those in certain selected tracks, as long as they are on the same layer. I expect existing saves will get collisions in places where they didn’t before, like in the screenshot, but hopefully they will all make sense.
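For illustration, a minimal sketch of what that narrow phase can look like, again with made-up types rather than the real game code: the bumper segment is tested against each car of every same-layer candidate, with the car boxes recreated on the fly.

```
#include <cmath>
#include <utility>
#include <vector>

struct Vec2 { double x, y; };

struct OBB {               // one car: center, orientation, half extents
    Vec2 center;
    Vec2 axis;             // unit vector along the car
    double half_len, half_wid;
};

struct TrainGeom {
    int layer;
    std::vector<OBB> cars; // recreated on the fly from the stored footprint
};

// Segment vs the axis-aligned box [-hx,hx] x [-hy,hy], slab method.
static bool segment_hits_local_box(Vec2 a, Vec2 b, double hx, double hy) {
    double tmin = 0.0, tmax = 1.0;
    const double d[2] = { b.x - a.x, b.y - a.y };
    const double o[2] = { a.x, a.y };
    const double h[2] = { hx, hy };
    for (int i = 0; i < 2; ++i) {
        if (std::abs(d[i]) < 1e-12) {
            if (o[i] < -h[i] || o[i] > h[i]) return false; // parallel, outside slab
        } else {
            double t1 = (-h[i] - o[i]) / d[i];
            double t2 = ( h[i] - o[i]) / d[i];
            if (t1 > t2) std::swap(t1, t2);
            tmin = std::max(tmin, t1);
            tmax = std::min(tmax, t2);
            if (tmin > tmax) return false;
        }
    }
    return true;
}

// Segment vs oriented car box: move the segment into the car's local frame.
static bool segment_hits_car(Vec2 a, Vec2 b, const OBB& car) {
    const Vec2 u = car.axis;                    // local x axis
    const Vec2 v = { -car.axis.y, car.axis.x }; // local y axis
    auto to_local = [&](Vec2 p) {
        const Vec2 d = { p.x - car.center.x, p.y - car.center.y };
        return Vec2{ d.x * u.x + d.y * u.y, d.x * v.x + d.y * v.y };
    };
    return segment_hits_local_box(to_local(a), to_local(b),
                                  car.half_len, car.half_wid);
}

// Full check for one train: its bumper segment against every car of every
// geographically nearby train on the same layer (the candidates would come
// from the sparse pixel index sketched above).
bool check_collision(Vec2 bumper_a, Vec2 bumper_b, int layer,
                     const std::vector<TrainGeom>& nearby_trains) {
    for (const TrainGeom& other : nearby_trains) {
        if (other.layer != layer) continue;
        for (const OBB& car : other.cars)
            if (segment_hits_car(bumper_a, bumper_b, car))
                return true;
    }
    return false;
}
```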
Special purpose depot/technical orders (not in 1.16, nor ever)
I ended the last devblog with a cliffhanger: if trains are free to do anything while unassigned, why not explore other ways of giving orders to trains, outside of the schedule system? My initial idea was to introduce a new kind of order which directly specifies a station (maybe with some extra configuration for stop options), to make it possible to do things like depot runs without depot lines, and then build more on top of it. This would address a frequent player request to simplify / overhaul the depot schedule mechanics, which is another dinosaur of the game design (indeed, “depot lines” were just a 2-day idea I quickly implemented to have something like a depot in v1.5 and move on with other stuff; it of course immediately became impossible to touch in any way).
But a “depot order” does not do the new train logic any justice. It’s like buying a Threadripper with an RTX 5090 and then using it exclusively to run a Game Boy emulator. I also don’t want to enshrine a specific way of running depots as ad-hoc single stop orders. Indeed, I don’t want to even have the word “depot” anywhere in the UI. There’s a dozen modern train games which already do a good job of “here’s the lines, here’s the depots, this is the default depot of the train, magic sprinkle on top and everything is in place automatically”. But that’s not the NIMBY Rails way.
Introducing NimbyScript (maybe 1.17, maybe later)
In the last post I wrote this as an absurd extreme of what such a custom order system could look like:
“here’s 10000 lines of player provided code full of conditionals and nondeterministic inter-train coordination to tell the trains what to do”
Turns out I fully spoiled what I was planning to work on for most of February. Dear reader, that’s not an absurd extreme. That’s a design goal of the new custom order system. It should be capable of supporting that, and simpler stuff too, of course.
Custom train orders will be the starting point for player-provided programmability in NIMBY Rails. Some capability for running player code has always been planned, since it’s going to be part of the signals project, but when thinking about everything I want to do for signals, I’ve come to the realization that it’s better to start introducing the new elements required by said project slowly, over time. Track+point in 1.8, the 1.14 schedule system, and the new collision system are examples of things which started life in my notes under the umbrella of the signals project. NimbyScript will be another example.
What is NimbyScript? It is a statically typed, AOT natively compiled (but JIT reloaded) programming language for running custom player logic in NIMBY Rails. It has a limited set of features to try to ensure errors cannot crash or corrupt the game (especially important since it is compiled to native code!).
Why come up with a custom programming language, when the Lua family exists, for example? The issue with existing scripting languages is that they have enormous overhead for calling into the scripts, and for marshalling data in and out of said scripts. For the signals project I want to be able to call into the script functions millions of times per second. This can only be performant if the “script” is not really a script: it is native code, and it defines its functions using the C ABI convention. The overhead of calling into the script should be the same or only a little higher than calling into any C++ function. Yes, I know about LuaJIT and Luau. I’ve used LuaJIT in the past. You still need to marshal values in and out, and it still incurs overhead. And of course all of these scripting languages manage memory using a garbage collector. I will never run a garbage collector in my game. I’m benchmarking against native code, so of course only native code is going to match it.
This ridiculous performance goal was my starting point. Nothing but compiled native code could match it. But it should also be a scripting language, the player should be able to type code somewhere in the UI and/or provide some live-reloaded text files directly. Most importantly the player should not have to deal with a compiler and a build system! The game itself should be all that is required to develop the scripts. Is such a thing possible? Yes, JIT scripting languages accomplish a limited form of this capability, by compiling “hot” functions into native code as they are encountered. The concept of generating native code live in RAM while the program is running and then jumping into it is very old.
In my search for an existing language which matched my requirements I found [TinyCC](https://github.com/TinyCC/tinycc), a minimalist, very fast C compiler with a limited set of optimizations. It also has an amazing capability: it can be compiled into a library which then compiles and links C source text directly into RAM. You can then just call any function defined in the C code as if it was part of your own C/C++ program, with zero overhead. It is just compiled C code calling into compiled C code, there’s no marshalling needed, it’s all native code, and it can be live reloaded as many times as needed. It was seemingly a perfect fit. Except it’s C.
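For the curious, this is roughly what the libtcc in-memory workflow looks like (a minimal sketch based on the TinyCC 0.9.27 API; newer versions change the tcc_relocate signature, and this is of course not how the game actually integrates it). Live reloading amounts to repeating these steps with a fresh TCCState and swapping the function pointers.

```
#include <cstdio>
#include <libtcc.h>

static const char *player_code =
    "int double_it(int x) { return x * 2; }\n";

int main() {
    TCCState *s = tcc_new();
    if (!s) return 1;
    tcc_set_output_type(s, TCC_OUTPUT_MEMORY);             // compile straight to RAM
    if (tcc_compile_string(s, player_code) == -1) return 1;
    if (tcc_relocate(s, TCC_RELOCATE_AUTO) < 0) return 1;  // link in memory
    // Grab the compiled function and call it like any other native function:
    // no marshalling, no interpreter, just a plain C ABI call.
    auto fn = reinterpret_cast<int (*)(int)>(tcc_get_symbol(s, "double_it"));
    if (!fn) return 1;
    std::printf("%d\n", fn(21)); // prints 42
    tcc_delete(s);
    return 0;
}
```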
The idea of making my game scripting language just C is kinda horrific. Effective, bug-free C code is hard to write; it really needs an experienced programmer. It’s extremely easy to make errors in C, the kind of errors which crash the entire game, with zero feedback or help from the C compiler. It’s just too low level to serve as the scripting language of a videogame. But as a backend for another programming language? It looks perfect for that.
As I was experimenting with TinyCC, measuring its compilation speed (extremely fast), the overhead of calling into its generated code (zero), and its performance (40% to 100% of the MSVC++ compiled code of the game, but quite a bit slower if you need to do heavy maths; these numbers match or exceed the best JIT compiled scripting languages anyway), I kept thinking I was in a dead end. “If only function definition was a bit cleaner”, “If only I could hide pointers with a safer wrapper”, “If only there was a saner way of defining script-global variables that is not static data”. So I told myself: if I have so many ideas for how such a hypothetical “sane/limited C-like scripting” language should look, why not make one, translate it to plain C and compile it with TinyCC? NimbyScript was born.
I’ve spent most of February writing a compiler frontend for NimbyScript. It is a compiler frontend rather than a transpiler because I want full, precise control of the language, to provide better error messages than C does and to offer safer, easier typing than in C. The main challenge of the language so far has been exposing game objects into it. I want to, at least, offer read-only access to most game objects relevant to the train sim, but I use some C++ features which make this hard. In contrast, exposing functions is quite easy, so my first experiments involved creating a thin scripting API for doing simple things like stopping a train or making it drive to a particular line/stop ID combination, which only involves plain integers. These are just some minimal capabilities as a proof of concept. My idea list for the scripting API is huge, but I don’t want to give away too much at the moment.
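Purely as a hypothetical illustration of that “plain integers only” style of API, here is a sketch with entirely made-up names (none of these functions exist in the game, and the real script side would be compiled separately and linked to the host symbols at runtime):

```
#include <cstdio>

// --- Host side: functions the game could expose to scripts (made-up names) ---
extern "C" void nr_train_stop(int train_id) {
    std::printf("host: stopping train %d\n", train_id);
}
extern "C" void nr_train_drive_to(int train_id, int line_id, int stop_id) {
    std::printf("host: train %d -> line %d, stop %d\n", train_id, line_id, stop_id);
}

// --- Script side: a custom order handler written against that API ------------
extern "C" void on_custom_order(int train_id, int line_id, int stop_id) {
    if (stop_id < 0)
        nr_train_stop(train_id);               // nowhere to go: hold the train
    else
        nr_train_drive_to(train_id, line_id, stop_id);
}

int main() {
    on_custom_order(7, 3, 12); // the game would call this per dispatch decision
    return 0;
}
```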
Unfortunately this is a much larger effort than I anticipated. I could have had something ready for a 1.16 release without stretching it too much in time, but it would not have the level of polish it needs. It’s the kind of feature that really needs that polish, to make sure any fixes and future evolution do not invalidate too many of the scripts written in the first version of the language. This is also why I’m not showing it off just yet, or how it integrates in the UI and sim.
So 1.16 will be published with just the new unassigned dispatch orders system as explained in the last post. The dispatch system and the huge amount of internal changes that enable it (and will enable NimbyScript for custom orders) will probably require quite a bit of testing, so my current plan is to make “NimbyScript for orders” be its own major version, after 1.16 stabilizes the new dispatch system. Ideally NimbyScript will be in 1.17, but it could be delayed again.