Getting to 200 was mostly a matter of upgrading tracks that needed maintenance anyhow in the 90s. Back then, however, cargo traffic wasn't causing as many disruptions and as much congestion as today, and the talk of "new exclusive" lines is mainly meant to shift air traffic to faster AND non-congested lines. But new lines are far more expensive, even prohibitive, both due to new land requirements and because they become a "big-bang" build.
That "minor" detail seems to have been absent from all the popular reporting I've read on the subject. Any links to how large a part the EU would've contributed?
Having been on the board of a European (Scandinavian) non-profit in a "tricky" situation, I can kind of sense the issues between the lines. The letter of the law around non-profits, and strict interpretations of it, _did_ make me raise an eyebrow at Collabora people being on a board awarding contracts.
At the same time, the law sometimes leaves wiggle-room so as not to totally shut down every interesting non-profit.
So already there we have a bit of friction. Now add the friction of startup/company-type people vs "non-profit believers", both tugging at the interpretations (plus whatever issues have been carried over from the early days of SO/OOo).
In truth, startup/company-type people too often see wiggle-room as carte blanche for doing things that are illegal or detrimental to a non-profit under anything but an extremely loose interpretation of the law. At the same time, non-profit believers can be a bit of a pain to deal with on the personality side.
Without going into too much context: I joined the board of the non-profit since I had been part of an outreach arm and thought that there needed to be some perspective from that side on the board, whilst others on the board came from other parts of the organisation or had no previous affiliation.
So my experience was being caught in the middle: trying to sort out issues with useful people in the outreach arm (who created much-needed momentum for the organisation) using the wiggle-room in questionable ways, while also managing panicking non-profit believers (people doing the "boring tasks" but not good at creating momentum) who couldn't see that the conflict they were creating by going overboard would be very detrimental in the long run.
Something like the LibreOffice foundation also needs to exist in "both worlds". From my perspective both sides are probably at fault here, but in the end they need to spend more time understanding the problems as seen from the other side's perspective, and realize that the drama they are creating will benefit everyone but them if they cannot find "smooth" ways to resolve things.
> Collabora people being on a board awarding contracts.
That is indeed problematic. I think it was particularly problematic that we had a chairperson of the board who was also the CEO and (part?) owner of a company, Allotropia, that would participate in tenders (it was bought by Collabora a couple of years back).
But - that situation is resolvable, and was resolved, by him no longer being on the Board of Directors; and beyond that - is resolvable by a strong separation of the body managing tenders from the BoD, so that employees and stockholders from Collabora can't participate in the process. What happened instead is that tenders ended altogether.
> now add the friction of startup/company type people vs "nonprofit believers",
There is a third group, which is foundation employees and contractors. Our last Board of Directors contained 2 foundation employees and one person who is a regular contractor or freelance service provider to the foundation. He has since resigned, although so have two non-employee directors (with two of the three resignations being for mysterious reasons not disclosed to the trustees).
Add to this the fact that the Directors aren't physically at the offices, and that the foundation is managed mostly by the Executive Director, who also prepares the budget, handles official correspondence (hidden from the trustees, of course), etc., and you get quite the interest group.
> but in the end they need to spend more time understanding the problems seen from "the others" perspective
The thing is, that it isn't two sides, one against the other. It is one powerful clique which, so as to solidify its power, expels a large group which it perceives as difficult to control. And the majority of the LibreOffice contributors are passive here. Now even more passive with the expulsions and the abrogation of the electoral process.
It probably depends on how young one was; I was young enough to play it for a year or two before Doom appeared (also, Doom was kind of sluggish on my machine at the time).
Fond memories. I remember going to the local YMCA (sub-2000) and going from DOS terminal to DOS terminal typing in (IIRC) `exec wolf3d.exe` and finding one of the few PCs that had it loaded to play it.
Probably yes. The NES is "easier" in this regard since char-ROM is read from the cart (so a cart only needs to provide the bits in the correct order); the GB(C) has video RAM that contains all the parts, so you need to transfer everything over. I don't remember exactly, but IIRC the classic GB was a tad too slow for this, while the GBC has a DMA that might be fast enough. (I've developed mostly on the classic GB, so I don't know the characteristics of GBC mode.)
I don't think it's widely known (only found and documented somewhat recently) that there is a way for cartridges to directly drive the Game Boy Color LCD, bypassing the CPU/PPU (PGB Modes). At that point though it becomes even less of a Game Boy game than what the Wolfenstein and other carts are doing.
Also related: "There oughta be GTA5 for the Game Boy" about a Wifi cartridge that can stream video (gameplay, etc) directly to the GB screen.
https://there.oughta.be/gta5-for-the-game-boy
I played a bit with the original Game Boy too. I was very surprised to find that, IIRC, the CPU is not even fast enough to clear the screen in one vertical blank, or even in one frame! It takes something like three frames to fully clear the map.
Yeah, you really need to structure your code around working with the tilemap system.
I did a small racing prototype with both vertical and horizontal scrolling and segmented my updates into 4x4 blocks of tiles per frame (at 160x144 resolution, 20x18 tiles of the 32x32-tile background map are visible at any point in time, so staggered updating of 4x4 blocks outside of view fits within the budget, together with updating some of the visible tiles each frame).
You need to do mid-frame tile updates just to show a full bitmap frame. There’s 360 8x8 tiles on the screen, but the tile indices are 8 bit (you can only reference 256 tiles). You can store only 384 tiles in VRAM - a bit more than a full screen. So the mid-screen update is to go from one tile dictionary to the other, so you can access 360 tiles in total.
You can update 1 tile per scan line (during hblank), so 154 tiles per frame (including 10 vblank scanlines). So you need 2.5 frames to replace all tiles.
If you are really smart about updates, you can “race the beam”, basically start updating tiles just as the frame starts rendering, just behind the active scan line. Then you can update maybe 280 tiles before the active scan line of the next frame catches up with you.
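The budget arithmetic above can be sanity-checked with a quick sketch (a back-of-envelope using the numbers from this thread, not a cycle-accurate model):

```ruby
# Game Boy tile arithmetic from the comments above.
screen_tiles = (160 / 8) * (144 / 8)   # 20 x 18 = 360 tiles visible at once
vram_tiles   = 384                     # tiles storable in VRAM
# 1 tile per hblank across 144 visible lines, plus 10 vblank lines:
updates_per_frame = 144 + 10           # 154 tile updates per frame

puts screen_tiles                                     # 360
puts (vram_tiles.to_f / updates_per_frame).round(2)   # ~2.49 frames to swap all of VRAM
```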
Exactly. For all the hate Windows gets, I could at least just look for shit named Copilot and uninstall it for a pretty nice experience on my new computer. Phones aren't always as straightforward (especially jarring as "Google services" are effectively required in Sweden on Android for stuff like mobile identity systems).
This is so absurd... I have to keep an old phone (rooted, in order to hide that adb is enabled) connected to my home server just to use such apps, because GrapheneOS without Google services is apparently not secure enough.
Narrow mustache was leading a marginal party at the start of 1930 (Black Tuesday happened only at the end of October 1929, so the Great Depression was only just starting), and his party "only" gained 18% of the popular vote in September 1930. It's the years after that that made his rise, so with a start-of-1930 cutoff he's still mostly a marginal player.
Broad mustache had risen to power, but had only properly gotten rid of the other faction in his country in the years just before.
If it wasn't built by Matz I'd have severe doubts, but it's clearly defined and I presume he knows all limitations of the Ruby semantics well.
My thesis work (back when EcmaScript 5 was new) was an AOT JS compiler. It worked, but there were limitations with regard to input data that made me abandon it, since JS developers overall didn't seem to be aware of how to restrict themselves properly (JSON.parse is inherently unknown; today with TypeScript it's probably more feasible).
The limitations are clear, too: the general lambda calculus points to limits in the type-inference system (there are plenty of good papers from e.g. Matt Might on the subject), as the Shed Skin Python people also found.
eval, send, method_missing, define_method: as a non-rubyist, how common are these in real-world code? And how is untyped parsing done (i.e. JSON ingestion)?
> If it wasn't built by Matz I'd have severe doubts, but it's clearly defined and I presume he knows all limitations of the Ruby semantics well.
It's a very pragmatic design: Uses Prism - parsing Ruby is almost harder than the actual translation - and generates C. Basic Ruby semantics are not all that hard to implement.
On the other extreme, I have a long-languishing, buggy, pure-Ruby AOT compiler for Ruby, and I made things massively harder for myself (on purpose) by insisting on it being written to be self-hosting, and using its own parser. It'll get there one day (maybe...).
But one of the things I learned early on from that is that you can half-ass the first 80% and a lot of Ruby code will run. The "second 80%" are largely in things Matz has omitted from this (and from Mruby), like encodings, and all kinds of fringe features (I wish Ruby would deprecate some of them - there are quite a few things in Ruby I've never, ever seen in the wild).
> eval, send, method_missing, define_method , as a non-rubyist how common are these in real-world code? And how is untyped parsing done (ie JSON ingestion?).
They are pervasive. The limitations are similar to those of mruby, though, which has its uses.
Supporting send, method_missing, and define_method is pretty easy.
Supporting eval() is a massive, massive pain, but with the giant caveat that a huge proportion of eval() use in Ruby can be statically reduced to the block version of instance_eval, which can be AOT-compiled relatively easily, e.g. if you can statically determine the string eval() is called with, or can split it up; a lot of the uses are unnecessary, or workarounds for relatively simple introspection that you can statically check for and handle. For my own compiler, if/when I get to a point where that is a blocking issue, that's my intended first step.
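The kind of reduction described, where the string passed to eval is statically determinable, might look like this (a hypothetical sketch; class and method names are illustrative, not from any actual compiler):

```ruby
# A common eval pattern whose string can be determined statically, so it
# can be mechanically rewritten into a block form an AOT compiler can see.
class Conf
  # String-based version: opaque to static analysis.
  def self.reader_via_eval(name)
    class_eval "def #{name}; @#{name}; end"
  end

  # Block-based version: real AST that a compiler can analyze and compile.
  def self.reader_via_block(name)
    define_method(name) { instance_variable_get(:"@#{name}") }
  end
end

Conf.reader_via_block(:timeout)
c = Conf.new
c.instance_variable_set(:@timeout, 30)
puts c.timeout  # 30
```

Both definitions behave identically at runtime; only the second gives the compiler something it can reason about ahead of time.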
> eval, send, method_missing, define_method , as a non-rubyist how common are these in real-world code?
Quite a lot, that's what allows you to build something like Rails with magic sprinkled all around. I'm not 100% sure, but probably the untyped JSON ingestion example uses those.
Remove that, and you have a very compact and readable language that is less strongly typed than Crystal but less metaprogrammable than official Ruby. So I think it has quite a lot of potential but time will tell.
> Quite a lot, that's what allows you to build something like Rails with magic sprinkled all around
True, but I'd point out that frameworks/DSLs etc. are the main place you see those things, and most of the code people write in their own projects doesn't use them.
In my experience (YMMV), eval and send are rare outside of things like slightly cowboy unit tests (send basically lets you call private methods that you shouldn't be able to call, so it's considered terrible form to use it 'IRL'; though there is a public_send, which is a non-boundary-violating version too).
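A minimal illustration of that boundary violation (hypothetical class, just to show the difference in behaviour):

```ruby
class Account
  def initialize
    @balance = 100
  end

  def balance
    @balance
  end

  private

  def reset!  # private: not part of the public API
    @balance = 0
  end
end

acct = Account.new
acct.send(:reset!)            # works: send bypasses method privacy
begin
  acct.public_send(:reset!)   # raises NoMethodError: respects privacy
rescue NoMethodError => e
  puts "blocked: #{e.class}"
end
```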
Also in my opinion, unless you're developing a framework or something, metaprogramming (things like define_method etc.) is Considered Harmful 95% of the time (at least in Ruby), as I think only about 5% of Ruby developers even grok it enough to work in a codebase with that going on. So while it might seem clever to a Staff Eng with 15 years of Ruby experience, the less experienced Rubyist who will be maintaining the application later is going to be in pain the whole time, not being able to find any of the method definitions that appear to be being called.
I disagree, I use metaprogramming in application code quite regularly, although I tend to limit myself to a single construct (instance_eval) because I find that makes things more manageable.
In my opinion the main draw of Ruby is that it's kind of Lisp-y in the way you can quickly build a metalanguage tailored to your specific problem domain. For problems where I don't need metaprogramming, I'd rather use a language that is statically typed.
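A tiny sketch of the single-construct style described, a metalanguage built on nothing but instance_eval (names here are hypothetical):

```ruby
# A minimal DSL: the block is evaluated with the builder object as `self`,
# so bare calls like `get` inside the block resolve to the builder's methods.
class Routes
  def initialize
    @table = {}
  end

  def draw(&block)
    instance_eval(&block)  # re-binds `self` inside the block to this object
    self
  end

  def get(path, to:)
    @table[path] = to
  end

  attr_reader :table
end

r = Routes.new.draw do
  get "/users",     to: "users#index"
  get "/users/:id", to: "users#show"
end
puts r.table["/users"]  # "users#index"
```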
The two are not mutually exclusive. On many occasions I've used C# to define domain-specific environments in which snippets of code, typically expressions, are compiled and evaluated at runtime, "extending the language" by evaluating expressions in the scope of domain-specific objects and/or defining extension methods on simple types (e.g., defining "Cabinet" and "Title" properties on the object and a "Matches" extension method on System.String so I can write 'Cabinet.EndsWith("_P") || Title.Matches("pay(roll|check)", IgnoreCase)').
I don't think instance_eval is too nasty. The toughest "good" codebase I've worked in was difficult because it used method_missing magic everywhere, which built tons of methods whose existence you had to just infer, based on configuration stored in a database. So most method calls could not be "command clicked" or whatever to jump to their definition, because none were ever defined.
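The pattern described, methods that are never defined anywhere but conjured by method_missing from configuration, looks roughly like this (a hypothetical sketch; the codebase in question read its configuration from a database):

```ruby
# Config-driven method_missing: no method is ever defined, so
# "jump to definition" in an editor has nothing to jump to.
class Settings
  CONFIG = { "timeout" => 30, "retries" => 3 }  # stand-in for a DB table

  def method_missing(name, *args)
    key = name.to_s
    CONFIG.key?(key) ? CONFIG[key] : super
  end

  # Keep respond_to? consistent with the ghost methods above.
  def respond_to_missing?(name, include_private = false)
    CONFIG.key?(name.to_s) || super
  end
end

s = Settings.new
puts s.timeout  # 30, despite `timeout` never being defined anywhere
```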
Or even just a compiler to C piggybacking off <objc/runtime/objc.h>; I think Apple still spends a lot of time making even dynamic class definition work fast. I haven't touched Cocoa/Foundation in a while, but I think (emphasis on think) a lot of proxy patterns in Apple frameworks still need this functionality.
>eval, send, method_missing, define_method, as a non-rubyist how common are these in real-world code?
The interesting bunch (to me, based on experience) is `eval`, `exec`, and `define_method` (as well as creating new classes with `Class.new` `Struct.new`). My sense is that the majority of their use is at the time of application boot, while requiring files. In some ways, it is nearly a compilation step already.
> eval, send, method_missing, define_method , as a non-rubyist how common are these in real-world code
This depends on the individual writing code. Some use it more than others.
I can only give my use case.
.send() I use a lot. I feel that it is simple to understand: you simply invoke a specific method here. Of course people can just use .method_name() instead (usually without the () in Ruby), but sometimes you may autogenerate methods and then need to call something dynamically.
.define_method() I use sometimes, when I batch-create methods. For instance, I use the HTML colour names, steelblue, darkgreen and so forth, and often I batch-generate the methods for these, e.g. via the correct RGB code. And similar use cases. But of about 50 of my main projects in Ruby, at best only 20 or so use it, whereas about 40 may use .send (or both a bit lower than that).
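Batch-generating colour methods like that might look as follows (a sketch, not the author's actual code; the hex values are the standard HTML colour codes):

```ruby
# Generate one reader method per HTML colour name with define_method.
HTML_COLOURS = {
  steelblue: "#4682b4",
  darkgreen: "#006400",
  tomato:    "#ff6347",
}

module Colours
  HTML_COLOURS.each do |name, rgb|
    define_method(name) { rgb }  # defines e.g. `steelblue`, returning its RGB code
  end
end

class Palette
  include Colours
end

puts Palette.new.steelblue  # "#4682b4"
```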
eval() I try to avoid; in a few cases I use it or its variants. For instance, in a simple but stupid calculator, I use eval() to calculate the expression (I sanitize it beforehand). It's not ideal, but it is simple. I use instance_eval and class_eval more often, usually for aliases (my brain is bad so I need aliases to remember, and sometimes it helps to think properly about a problem).
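A sanitize-then-eval calculator in that spirit might look like this (a sketch; whitelisting characters blocks method calls, but eval is still not something to expose to untrusted input):

```ruby
# Naive calculator: whitelist arithmetic characters, then eval the string.
def calculate(expr)
  unless expr.match?(%r{\A[\d\s+\-*/().]+\z})
    raise ArgumentError, "unexpected characters in #{expr.inspect}"
  end
  eval(expr)
end

puts calculate("2 + 3 * (4 - 1)")  # 11
```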
method_missing I almost never use anymore. There are a few use cases where it is nice to have, but I found that whenever I used it, the code became more complex and harder to understand, and I kind of got tired of that. So I try to avoid it whenever possible.
So, to answer your second question: to me personally, only .send() is very important; the others matter sometimes but not that much to me. Real-world code may differ; the Rails ecosystem is super weird to me. They even came up with HashWithIndifferentAccess, and while I understand why they came up with it, it also shows a lack of UNDERSTANDING. This is a really big problem with the Rails ecosystem: many Rails people really did not, or do not, know Ruby. It is strange.
"Untyped parsing": I don't understand why that would ever be a problem. I guess only people whose brains are tied to types think of it as a problem. Types are not a problem to me. I know others disagree, but it really is not a problem anywhere. It's interesting to see that some people can only operate when there is a type system in place. Usually in Ruby you check for behaviour and capabilities, or, if you are lazy like me, you use .is_a?(), which I also do since it is so simple. I actually often prefer it over .respond_to?() as it is shorter to type. And often the checks I use are simple, e.g. "object, are you a string, hash or array?"; that covers perhaps 95% of my use cases already. I would not know why types are needed here or where they fit in. They may give additional security (perhaps), but they are not necessary IMO.
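The kind of check described, "object, are you a string, hash or array?", typically looks like this (a hypothetical example; `case/when` on classes uses the same test as .is_a?):

```ruby
# Runtime class check instead of a static type system: normalize
# whatever we were handed into an array.
def to_list(obj)
  case obj            # `when SomeClass` matches via obj.is_a?(SomeClass)
  when String then [obj]
  when Array  then obj
  when Hash   then obj.values
  else raise TypeError, "can't convert #{obj.class} to a list"
  end
end

puts to_list("a").inspect             # ["a"]
puts to_list({ k: 1, j: 2 }).inspect  # [1, 2]
```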
Why do you say HashWithIndifferentAccess shows a lack of understanding? Like many Rails features, it's a convenience that abstracts away details that some find unpleasant to work with. Rails sometimes takes "magic" to the extreme through meta-programming. However, looking at the source [1], HashWithIndifferentAccess doesn't use eval, send, method_missing, or define_method. So I'm not sure how it seems weird to someone who works more with plain Ruby.
Seeing the performance-improvement numbers, I'm pretty sure there's a type-inference system underneath to realize types in all paths (same as in the AOT JS compiler I created).
It's not about being beholden to types per se, but rather that fixed types are way faster to execute, since they map to basic CPU instructions rather than each operation having to first determine the type and then branch depending on the type used.
The problem with dynamic types is that they either need to somehow collapse into fixed types (as with TypeScript specifying a type for the parsed object) or remain dynamic through execution (thus costing performance).
I think you could work around send(). I'm not a Ruby person, but in most languages you could store functions in a hash map and write an implementation of send that does a lookup and invokes the method (passing the instance pointer through if need be).
It won't work with actual class methods, but if you know ahead of time that all the functions it will call are dynamic, it's not a big deal.
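Sketched in Ruby itself (hypothetical names; a compiler would emit the equivalent in its target language), the workaround amounts to a dispatch table built ahead of time:

```ruby
# A send() stand-in: a per-class lookup table from method name to an
# unbound method, populated once rather than resolved dynamically.
class Player
  def jump; "jumping"; end
  def duck; "ducking"; end

  DISPATCH = {
    jump: instance_method(:jump),
    duck: instance_method(:duck),
  }.freeze

  # Look the method up, bind it to the receiver, and call it.
  def dispatch(name, *args)
    DISPATCH.fetch(name).bind(self).call(*args)
  end
end

puts Player.new.dispatch(:jump)  # "jumping"
```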
Has anyone tried using Kaitai descriptions? It seems like a fairly flexible system that would be an excellent starting point for a hex editor that wants to add good higher-level coloring (and perhaps even editing).