Y'all are a bunch of wankers!

LOL, this guy knows his shit with OS development. He complains about Google Fuchsia OS.

Google will actually make Fuchsia work as a smartphone/tablet platform and whatever else. From a design perspective, though, it's quite bad.

When I first started reading the code to Fuchsia, I was going line by line asking myself, "Haven't we already made this mistake before?" It was like one major compilation of "I took an OS course based on Tanenbaum's book and decided to just copy every mistake we never learned from." And in the end we have a brand spanking new 30-year-old operating system.

Ok, I'm being harsh and it's only partially fair. Let me start with your issues.

It's not necessary to sort out the issues with latency and message passing. They are making a real-time (or near real-time) operating system, which in its own right already suggests that they're willing to sacrifice performance in favor of deterministic timing. Telephones always benefit from real-time kernels in the sense that they allow dropping the overall transistor and component count. Every telephone that ever boasted four-day battery life ran a real-time operating system, and it was generally a good idea.

Secondly, there's been a pretty huge move in Firefox and Chrome to optimize their shared-memory systems to reduce or eliminate hardware locks by marshalling the memory reads and writes. Add to that the fact that almost all modern development paradigms are asynchronous... unless you're stuck with some shitty language like C or C++... and most of the context-switch and latency issues become irrelevant. This is because you can keep multiple cores spinning more or less non-stop without much concern for kernel-level inter-thread synchronization. Take it a few steps further and expose things like video hardware access directly to individual applications, which would operate their own compositors based on a GPU context and apply shaders to support windowing-type tasks... then it's going to be quite impressive and the locks should be a lot less relevant.
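
To make the lock-free point concrete, here's a minimal sketch (my own toy code, nothing from the Fuchsia or browser trees) of the kind of single-producer/single-consumer queue that marshals messages between two threads using only atomics, so neither side touches a kernel lock on the hot path:

    // Toy single-producer/single-consumer queue: producer and consumer each
    // own one index, so no mutex and no kernel arbitration is ever needed.
    #include <array>
    #include <atomic>
    #include <cstddef>
    #include <optional>

    template <typename T, std::size_t N>
    class SpscQueue {
        std::array<T, N> buf_{};
        std::atomic<std::size_t> head_{0};  // advanced only by the consumer
        std::atomic<std::size_t> tail_{0};  // advanced only by the producer
    public:
        bool push(const T& v) {             // producer thread only
            std::size_t t = tail_.load(std::memory_order_relaxed);
            std::size_t next = (t + 1) % N;
            if (next == head_.load(std::memory_order_acquire))
                return false;               // full: caller yields, no lock taken
            buf_[t] = v;
            tail_.store(next, std::memory_order_release);
            return true;
        }
        std::optional<T> pop() {            // consumer thread only
            std::size_t h = head_.load(std::memory_order_relaxed);
            if (h == tail_.load(std::memory_order_acquire))
                return std::nullopt;        // empty
            T v = buf_[h];
            head_.store((h + 1) % N, std::memory_order_release);
            return v;
        }
    };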

From that perspective, I don't see a good solution to the audio problem, as I've never seen a sound card which supports the principle of shared resources. I don't think it would be even mildly difficult to design such a device, though. The only real issue is that if mixing is moved entirely to hardware, then depending on the scenario it would be necessary to have at least a few relatively long DSP pipelines with support for everything from PCM scaling to filtering. The other problem is that using protection faults to trigger interrupts could be an issue unless there's some other creative means of signalling buffer states to user-mode code without polling. Audio is relatively unforgiving of buffer starvation.
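
For illustration only, here's roughly what the user-mode side of that "signal instead of poll" idea could look like. The structure and names are my own assumptions, with a condition variable standing in for whatever wakeup path the driver's completion interrupt would actually use:

    // Sketch: the mixer thread sleeps until the (simulated) completion
    // interrupt reports free space, instead of spinning on a status register.
    #include <condition_variable>
    #include <cstddef>
    #include <mutex>

    struct AudioRing {
        std::mutex m;
        std::condition_variable drained;   // stands in for the interrupt -> user wakeup
        std::size_t free_frames = 0;

        // Called when the hardware finishes consuming a period of samples.
        void on_period_complete(std::size_t frames) {
            { std::lock_guard<std::mutex> lk(m); free_frames += frames; }
            drained.notify_one();
        }

        // Called by the application's mixer thread: blocks until space exists,
        // so a draining buffer wakes it immediately rather than being polled.
        std::size_t wait_for_space(std::size_t want) {
            std::unique_lock<std::mutex> lk(m);
            drained.wait(lk, [&] { return free_frames >= want; });
            free_frames -= want;
            return want;
        }
    };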

So, let's move on to my pet peeves.

Google's been working on this for some time and they still don't have a system in place for supporting proper languages. C and C++ are nifty for the microkernel itself. But even then, they should have considered Rust or rolling their own language. This world has more than enough shitty C-based kernels like Linux and BSD. If you want to search the CVEs and see what percentage of them would never have been an issue if the same code had been written in a real programming language, be my guest, but I'm still sitting on WAY TOO MANY unreported critical security holes in things like drivers from VMware, drivers from Cisco, OpenVPN certificate-handling code, etc. I absolutely hate looking at C or C++ code because every time I do, unless it's painfully static in nature, it's generally riddled with code injection issues.
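
To be clear about the bug class I mean, here it is reduced to a toy. Nothing below comes from the Fuchsia tree or from any of those unreported holes; it's just the generic pattern that a bounds-aware language either refuses to compile or traps at runtime:

    // The CVE-shaped pattern: an attacker-controlled length copied into a
    // fixed buffer with no check, next to a version that keeps the bounds.
    #include <algorithm>
    #include <cstdint>
    #include <cstring>
    #include <string>

    void parse_name_unsafe(const uint8_t* pkt, std::size_t pkt_len) {
        char name[32];
        uint8_t claimed = pkt[0];               // attacker-controlled length
        std::memcpy(name, pkt + 1, claimed);    // no check against sizeof(name):
                                                // stack smashed if claimed > 31
        (void)name; (void)pkt_len;
    }

    void parse_name_checked(const uint8_t* pkt, std::size_t pkt_len) {
        if (pkt_len < 1) return;
        std::size_t claimed = pkt[0];
        std::size_t avail = std::min<std::size_t>(claimed, pkt_len - 1);
        std::string name(reinterpret_cast<const char*>(pkt + 1), avail);
        // The bounds travel with the data instead of living in the coder's head.
    }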

And yes, I've been watching Magenta/Fuchsia evolve since the first day and I follow the commit logs. It's better than TV. It's like "We have a huge number of 22-year-old super hot-shot coders who really kick ass. Look at this great operating system kernel," and it looks like some bastard high school or university project written by people who have little or no concept of history.

Linux is good for many things. It's amazing for many reasons. Linus did an amazing job making it and it's a force of nature now. There's no going back, and we're all lucky that he made what he made. But that said, Linux is a cesspool of shitty code. Linus and crew have been pretty good about trying to keep the central architecture safe and sound. They've done a pretty good job of following some rules of consistency in the core kernel. But let's be honest, a lot of security was forfeited to make Linux the jack-of-all-trades, screaming-fast kernel it is.

Google should have made something more like the Go architecture. Back in the old days, real-time operating systems like QNX were based almost entirely on message passing: a single standard means of pushing events into the pipeline, distributing them, and processing them. This made things like processor affinity/allocation lovely.
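
Here's a toy sketch of what "one standard event path" means; this is my own illustration, not QNX's or Go's actual API. Every subsystem sends through the same typed channel and a single receiver drains it, which is what makes pinning that receiver to a core so easy:

    // Minimal blocking channel: all events funnel through one queue and one
    // dispatcher thread, in the QNX/Go message-passing spirit.
    #include <condition_variable>
    #include <cstdio>
    #include <deque>
    #include <mutex>
    #include <thread>

    template <typename T>
    class Channel {
        std::mutex m;
        std::condition_variable cv;
        std::deque<T> q;
    public:
        void send(T msg) {
            { std::lock_guard<std::mutex> lk(m); q.push_back(std::move(msg)); }
            cv.notify_one();
        }
        T receive() {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [&] { return !q.empty(); });
            T msg = std::move(q.front());
            q.pop_front();
            return msg;
        }
    };

    struct Event { int source; int payload; };

    int main() {
        Channel<Event> pipeline;
        std::thread dispatcher([&] {
            for (int i = 0; i < 3; ++i) {
                Event e = pipeline.receive();      // the single standard event path
                std::printf("event from %d: %d\n", e.source, e.payload);
            }
        });
        pipeline.send({1, 42});
        pipeline.send({2, 7});
        pipeline.send({1, 99});
        dispatcher.join();
        return 0;
    }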

These days, every real programming language supports the principle of asynchronous programming, which is the best thing ever to bless development. We've moved from C/C++ to real languages that understand what things like strings and memory are. They actually understand the difference between a character and a byte. In 2018, our languages know what a thread is. And no... there is absolutely no possible way that hand-coded C/C++ and/or assembly will ever produce better, faster, or more secure code than a language which is specifically designed to do this as part of its runtime. There's also absolutely no possible way that C/C++ threading and memory management will ever be as safe as in another language.
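
If the "character vs. byte" jab sounds abstract, this is all it means in practice. The snippet is my own illustration: std::string::size() counts bytes, and in C/C++ you have to count the code points yourself:

    // Bytes vs. characters in a UTF-8 string.
    #include <cstdio>
    #include <string>

    std::size_t utf8_codepoints(const std::string& s) {
        std::size_t n = 0;
        for (unsigned char c : s)
            if ((c & 0xC0) != 0x80)   // skip UTF-8 continuation bytes
                ++n;
        return n;
    }

    int main() {
        std::string s = "na\xC3\xAFve";                 // "naïve" in UTF-8
        std::printf("bytes: %zu, characters: %zu\n",    // prints 6 and 5
                    s.size(), utf8_codepoints(s));
        return 0;
    }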

That's the thing that I absolutely hate about Fuchsia. Even if Google were to create their very own static analysis tool to try to generate errors when "unsafe code" is used without an explicit #pragma or #define, C/C++ still have no fucking idea what a program or module is. They compile lines of code and functions and classes. They generate object files which have no real relation to each other. They depend on dynamic link loaders to link module to module. The code itself is generally pretty static, and while relatively efficient, it's not until the hacker is in the system that we detect problems with it.

I generally believe that Fuchsia can be saved over time by porting to Rust or making a new C-like language with actual intelligence in it. Then, instead of just marking code as "unsafe" or not, take an extra step and provide something like a role-based access control system that would allow blocks of code to apply for permission to access specific resources. This could be managed mostly at compile time. But it should be a requirement to actually explicitly explain to the language what impact the unsafe code will have on the system.
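
Purely as a thought experiment, a declaration under such a scheme might look like the snippet below. The attribute name, the capability string, and the policy check it implies are all hypothetical; no real toolchain does this today, and a standard compiler would simply ignore the unknown attribute:

    #include <cstdint>

    // Hypothetical: this block declares at compile time that it touches the
    // audio DMA ring and nothing else; a build-time policy pass would reject
    // the module if "audio.dma" were not in its manifest.
    [[grants::capability("audio.dma")]]
    void write_dma_descriptor(volatile uint64_t* ring, uint64_t physical_addr) {
        // Unsafe by nature (raw MMIO write), but its blast radius is declared.
        *ring = physical_addr;
    }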

Once that's done, port all the operating system modules, including file system support, to Dalvik or something else. Get the code into a system which doesn't ever require two coders who have never met to share data-structure memory management with each other.

Consider this.

Most C coders still don't know what malloc vs. calloc vs. realloc are used for.
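
For anyone who'd flunk that interview question, the whole distinction fits in one toy program (mine, not anyone's production code):

    #include <cstdio>
    #include <cstdlib>

    int main() {
        // malloc: raw, uninitialized storage, sized in bytes
        int* a = static_cast<int*>(std::malloc(8 * sizeof(int)));

        // calloc: takes a count and an element size, and zero-initializes
        int* b = static_cast<int*>(std::calloc(8, sizeof(int)));

        if (!a || !b) return 1;
        a[0] = 42;                    // malloc'd memory must be written before it's read

        // realloc: grows (or shrinks) an existing block, possibly moving it;
        // old contents are preserved, any new tail is uninitialized
        int* grown = static_cast<int*>(std::realloc(a, 16 * sizeof(int)));
        if (grown) a = grown;         // never assign straight back to 'a':
                                      // on failure the original block is still valid

        std::printf("%d %d\n", a[0], b[0]);   // prints "42 0"
        std::free(a);
        std::free(b);
        return 0;
    }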

Most C coders (see Google's git for Fuchsia) prefer unsafe methods to safe methods for memory management.

Most C++ coders improperly use delete when they should use delete[], and they don't even know why it makes a difference.
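
The why, in a nutshell: new[] has to arrange for every element's destructor to run, and plain delete on that pointer is undefined behaviour, which with non-trivial destructors usually means a corrupted heap or a leak. A minimal illustration (my own, not from any real codebase):

    #include <string>

    struct Session {
        std::string user;     // non-trivial destructor: owns heap memory
    };

    int main() {
        Session* one  = new Session;        // single object
        Session* many = new Session[16];    // array of 16 objects

        delete one;          // correct: matches new
        delete[] many;       // correct: matches new[], destroys all 16
        // delete many;      // WRONG: undefined behaviour, it only "looks" fine
        return 0;
    }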

Abstraction in C and C++ is frigging hopeless. The insane number of times I've seen code in C and C++ (see the Fuchsia git) that directly manipulates buffers, their lengths, etc., when it could and should have gone through an abstraction/object, is horrifying. They're like "But mommy... I don't want to write 400 lines of code to avoid having my cool OS hacked to shit and back... no one will do anything bad if I just manage the memory here."
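
The "400 lines" is mostly a joke; the abstraction I mean is usually tiny. A sketch of the idea, with names I made up rather than anything from the Fuchsia tree:

    // Keep the pointer, the length and the bounds check in one small type
    // instead of sprinkling them by hand through every parser.
    #include <cstddef>
    #include <cstdint>
    #include <stdexcept>

    class ByteSpan {
        uint8_t* data_;
        std::size_t size_;
    public:
        ByteSpan(uint8_t* data, std::size_t size) : data_(data), size_(size) {}
        std::size_t size() const { return size_; }
        uint8_t& at(std::size_t i) {
            if (i >= size_) throw std::out_of_range("ByteSpan");   // one check, everywhere
            return data_[i];
        }
        ByteSpan subspan(std::size_t off, std::size_t len) {
            if (off > size_ || len > size_ - off) throw std::out_of_range("ByteSpan");
            return ByteSpan(data_ + off, len);
        }
    };

    // Every function that takes a ByteSpan instead of (uint8_t*, size_t)
    // gets the bounds checks for free.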

Shall we have a talk about things like "How many ways are there in the Linux kernel to make and manage a list?" WOW!!! We have an awesome rbtree.h implementation which removes any need for good code and instead lets us store things REALLY FAST using macro programming!

Don't worry, Google's Fuchsia sucks at this too.

No... this is not a good operating system. We already had this operating system. I think it's been written 100 times before. Maybe 1,000. I'm just absolutely amazed at how impressively Google fucked this one up. And even more so: how can it be possible that Google doesn't have a single programmer who knows better?
DrNo
April 12th, 2018 11:56am
And your point is?
Fabio Sindre
April 12th, 2018 12:04pm
For all the "geniuses" at Google, they sure do produce a lot of trash.
FSK
April 12th, 2018 12:16pm
> Most C coders still don't know what malloc vs. calloc vs realloc are used for.

Wait. Really?
xampl9
April 12th, 2018 12:59pm
>> Most C coders still don't know what malloc vs. calloc vs. realloc are used for.

>Wait. Really?

Let me rephrase that...

Of the people presenting for interview at Google claiming to be C coders...
Samed Maud
April 12th, 2018 1:00pm
"they should have considered Rust or rolling their own language"

Fucking wanker. Along with the rest of that crap.
Reality Check
April 12th, 2018 1:00pm
"Most C coders still don't know what malloc vs. calloc vs realloc are used for."

Agreed. That only shows the person making the claim is an absolute fucking moron.

Who is this dipshit anyway?
Reality Check
April 12th, 2018 1:01pm
https://linux.slashdot.org/comments.pl?sid=11975345&cid=56422125


LostMyBeaver is a dumbshit.
Reality Check
April 12th, 2018 1:03pm
https://www.youtube.com/watch?v=tXnwfLVFEHM
YouTubeBot
April 12th, 2018 3:19pm
Nice beaver!
Algernon Montgomery
April 12th, 2018 4:33pm

This topic is archived. No further replies will be accepted.
