Sanding our assholes with 150 grit.

I don't care if malloc returns NULL

Honestly, what the fuck am I going to do if malloc returns NULL on a modern computer with virtual memory? If that happens, I ran out of physical memory long ago and now the disk scratch space is gone too; I'm fucked no matter what I do. There's no easy way I'm going to safely save the state and exit under these conditions, so just crashing on the imminent NULL dereference is probably the best solution anyway.

I know this contradicts a lot of common wisdom, but I think I am right here.

And yes, if this were medical equipment or the space shuttle, I'd think about it all differently.
Permalink Practical Economist 
July 10th, 2007 4:27am
You could try freeing some of the other memory you're using when that happens.
Permalink Send private email a2800276 
July 10th, 2007 5:38am
And then what?
Maybe it's not your process that's consuming the memory.
And if you can find out what memory to release at that point, why not release it earlier?

if (malloc() == null)
{
    you.InTrouble = true;
}
Permalink Send private email Locutus of Borg 
July 10th, 2007 5:57am
Well, let's say you're caching 10MB of something or other. You could release those 10MB and try to malloc again. Lots of apps have some sort of homebrew garbage collection that they could run in case malloc fails. You could also just fail in a controlled manner with a proper message. Just completely ignoring the malloc failure is retarded.
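
Something like this sketch, say (cache_release() here is hypothetical, a stand-in for whatever your homebrew garbage collection hook happens to be):

#include <stdlib.h>

extern void cache_release(void);  /* hypothetical: drops the 10MB cache */

void *malloc_with_fallback(size_t n)
{
    void *p = malloc(n);
    if (p == NULL)
    {
        cache_release();  /* give the cached data back... */
        p = malloc(n);    /* ...and try once more */
    }
    return p;  /* still NULL? now fail in a controlled manner */
}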
Permalink Send private email a2800276 
July 10th, 2007 6:14am
I think PE and LoB are trying to say that if your system is so low on resources that malloc fails, it will most likely not have enough resources to display a message.
Permalink Practical Geezer 
July 10th, 2007 6:24am
sure, you can release memory and display an "oops, you are fucked." message to the user, but what's the point?  it's not like you can do anything about it.

you can't even provide a _useful_ error message because malloc gives you no way of knowing where the memory has gone.  it could be anything from RAM failure to an hdd crash.

basically, with modern operating systems, the point at which malloc fails is the point at which you can reasonably assume that the entire system is just about screwed anyway.  you might as well just let it crash.  it's not going to be your fault and very likely no one is even going to notice anyway, what with their entire system shutting down and everything.
Permalink Send private email zestyZucchini 
July 10th, 2007 6:40am
malloc() will not fail unless you blow 32-bit or 64-bit architecture limits.  It has nothing to do with current memory consumption vs. the amount of physical memory (RAM+swap) available.

Running out of memory does not happen at the malloc(); it happens during demand paging. All processes on the system are affected equally, and it is a critical condition. How the OS handles this is undefined, but killing tasks at random is as good a strategy as any.
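
You can watch this happen with a sketch like the following (assumes a 64-bit build on an OS that overcommits; whether the malloc itself fails depends on the kernel's overcommit policy, and the 64 GB figure is arbitrary):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t n = (size_t)64 * 1024 * 1024 * 1024;  /* more than RAM+swap */
    char *p = malloc(n);
    if (p == NULL)
    {
        puts("malloc failed up front");  /* e.g. address space limit hit */
        return 1;
    }
    puts("malloc succeeded; nothing is committed yet");
    memset(p, 1, n);  /* touching the pages is what demands real memory;
                         the OS, not malloc, delivers the bad news here */
    free(p);
    return 0;
}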
Permalink Michael B 
July 10th, 2007 7:17am
Nice to know, but how does that change the argument?

if (malloc() == null)
{
    you.InTrouble = true;
}
Permalink Send private email Locutus of Borg 
July 10th, 2007 7:22am
Close

void *p = malloc(n);
if (p == NULL)
{
    fprintf(stderr, "malloc: %d failed (%u): %s\n",
        (int)n, (unsigned)n, strerror(errno));
    abort();
}

Catch the negative number being cast to unsigned int.  Dump core so a backtrace can be done.
Permalink Michael B 
July 10th, 2007 7:41am
What if you're calculating the amount of memory to malloc, perhaps based on user input???

e.g.

Computer: How many items do you want to store?

User: (enters some ridiculously large number)

Computer [internally]: malloc returns null when I try to malloc 1000000000000000000000000 * sizeof(object)

Computer: Please try entering a smaller number of items.
Permalink s 
July 10th, 2007 7:51am
What if the user is a cosmic space alien with a vagina-like organ that emits radio broadcasts?
Permalink Send private email muppet 
July 10th, 2007 7:53am
Build in a default maximum?
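
Something along these lines, for instance (the cap and the names are made up):

#include <stdint.h>
#include <stdlib.h>

#define MAX_ITEMS 1000000  /* hypothetical cap; pick whatever fits the app */

void *alloc_items(size_t count, size_t item_size)
{
    if (count == 0 || count > MAX_ITEMS)
        return NULL;  /* reject ridiculous requests up front */
    if (item_size == 0 || count > SIZE_MAX / item_size)
        return NULL;  /* count * item_size would overflow */
    return malloc(count * item_size);
}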
Permalink Michael B 
July 10th, 2007 8:08am
Well, see, this is a computer we're talking about.

There are SO MANY ways you can be screwed.

Why leave it a mystery as to which one bit you, just because it happens to be 'malloc'?  Go ahead, tell your poor user they're screwed THIS time by 'malloc'.

First of all, you should NEVER see that message.  Second of all, if your users ever DO see that message, they can tell you, so you can try to fix it.

A 'silent fail' is VERY hard to troubleshoot.
Permalink SaveTheHubble 
July 10th, 2007 8:38am
>A 'silent fail' is VERY hard to troubleshoot.

It isn't a silent fail. Have you never had an operating system tell you that it was low on virtual memory, and then that it was out of virtual memory (which is always coupled with a cascade of failures)? It logs such situations.

When the OS starts churning like a dog, and then starts giving memory errors, failures in specific applications are seldom a concern any longer.

I completely agree with the OP for regular development (obviously embedded/safety programming is a different matter, but we aren't all writing heart defibs) -- if a memory allocation fails, it's a much bigger problem than you can deal with in your app and the best recourse is just to completely fail immediately. If you load up your app with countless failsafe branches for memory allocation faults, desperately (and with certain futility) trying to release token amounts of apparently unnecessary memory to keep going, then you've probably just added faults to handle a situation that you can't really handle.
Permalink DF 
July 10th, 2007 9:25am
+1 DF

There is a school of error handling that says that an application must never fail and that every error should be handled gracefully.

I disagree with that. I/O errors, memory errors, security exceptions, etc. exist for a reason. Desperately trying to prevent your applications from crashing is useless and will result in constructs like

int main()
{
    try
    {
        // Your code here
    }
    catch (...)
    {
        // Do unspeakable things here to keep your zombie app running
    }
}

Sometimes when the shit hits the fan you should cut your losses and crash.
Permalink Send private email Locutus of Borg 
July 10th, 2007 9:34am
God, I hate the assumptions in your post, and in the OP, about the nature of the app, even for a desktop app on a desktop PC.

You're assuming that the failed malloc couldn't have been a request so big that it failed all by itself.

And you're assuming that if malloc fails it must be catastrophic for the app and an indicator of something else catastrophic going on.

Your app may be like that, but not all are.

It's not unknown, or even that unusual, for an app to attempt to allocate a huge chunk of memory knowing that it may fail, and, if it fails, to continue onwards using a different approach to the problem that doesn't depend on the huge malloc. And note, in this scenario, there was enough memory for the OS to keep running before the failed malloc, and there is afterward too.
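
In sketch form (process_in_memory() and process_streaming() are stand-ins for whatever the two approaches actually are):

#include <stdlib.h>

extern int process_in_memory(void *buf, size_t n);  /* fast path */
extern int process_streaming(void);                 /* no big buffer needed */

int process(size_t huge)
{
    void *buf = malloc(huge);
    if (buf != NULL)
    {
        int rc = process_in_memory(buf, huge);
        free(buf);
        return rc;
    }
    return process_streaming();  /* the failed malloc was expected and fine */
}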

For this type of app, what would really be useful is a pair of platform-independent functions that say "malloc me non-virtualized memory" and "how much non-virtualized memory can I safely get?"
Permalink s 
July 10th, 2007 9:34am
(The assumptions I'm hating refer to DF's post, although LoB has the same implicit assumptions.)
Permalink s 
July 10th, 2007 9:36am
I'm thinking that an assert() immediately following malloc might be a good idea.  It doesn't tend to give a great error message, but it gives -an- error message.  The program will fail, but you'll have an error message that will hopefully point you to the right place in the code.  It will also terminate the program before a NULL pointer can really cause havoc on your data.
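
For example (with the caveat that assert() compiles away under NDEBUG, so this only guards debug builds):

#include <assert.h>
#include <stdlib.h>

char *buf = malloc(len);  /* len is whatever size you actually need */
assert(buf != NULL);      /* aborts with file and line on failure */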

It's true that there might be perfectly valid reasons why malloc would fail that don't indicate imminent system failure. But imminent failure is the most common situation. If you're sophisticated enough to be writing an algorithm that adapts to available resources, you probably don't need to be part of this discussion at all--you solved this problem a long time ago.
Permalink Send private email Clay Dowling 
July 10th, 2007 9:43am
>It's not unknown, or even that unusual, for an app to attempt to allocate a huge chunk of memory - and knowing that it may fail - if it fails, continue onwards using a different approach to solving some problem that doesn't depend on the huge malloc.

Don't you think "hate" is a little strong of a word?

We're talking about general memory management. You're talking about an extremely rare (in the world of memory allocations) edge condition and using it to defend masturbatory, useless out of memory handling.
Permalink DF 
July 10th, 2007 9:48am
> Don't you think "hate" is a little strong of a word?

For the fact that you don't care what malloc returns, yes.

For the fact there's a general industry-wide tendency to dumb down programming and wrap it in cotton wool so that I can never see what is actually going on? And the assumptions that underlie that tendency?  Hate is probably too strong a word here too, but it's closer.

Contrary to common wisdom, the fact that dickwads may misuse a feature should not be sufficient reason to remove it.

There is sadly an increasing cultural trend in Western society to regulate, license and control every little aspect of life in an attempt to eliminate risk and dickwadery, even risks and dickwadery that are not worth worrying about.

The dumbing down and nanny-state style of programming, backed by the Java ethos (if ever there was a bureaucratic nightmare of nanny-state programming, neatly wrapped in red tape, Java is it), is merely a reflection of that broader cultural trend.

Personally, I'd rather have the risks, and the dickwadery, that come with freedom and the opportunity to innovate.
Permalink s 
July 10th, 2007 10:13am
Somebody went off his meds.
Permalink Send private email Clay Dowling 
July 10th, 2007 10:21am
> I know this contradicts a lot of common wisdom

You are perfectly correct. Strewing nulls throughout the system is a waste of time and clutters up the code. There are memory exception handlers now that get called when this happens, but there's nothing practical you can do. I log and die and expect to get restarted.
Permalink son of parnas 
July 10th, 2007 10:31am
Never test for an error condition you don't know how to handle.
Permalink LH 
July 10th, 2007 11:55am
Some feedback:

Back in the old days, on those old toy systems we once had with small memory limits and no virtual memory, I handled this completely differently. I would grab a block of emergency memory on launch. Then, instead of malloc, I'd call 'my_malloc', which would call malloc and check for NULL; if NULL, it would free the emergency block, set gMemoryLow = true, and call malloc again. Then, in the main loop, gMemoryLow would be noticed, the program state would be saved to a recovery folder, and a message would be put up: "Memory is getting low! Save your work now. Try closing unneeded windows and documents to free up memory."
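
Roughly like this, reconstructed from memory (so take the details loosely):

#include <stdlib.h>

#define EMERGENCY_BYTES (64 * 1024)  /* size of the reserve; approximate */

static void *gEmergency = NULL;
static int gMemoryLow = 0;

void mem_init(void)  /* called once at launch */
{
    gEmergency = malloc(EMERGENCY_BYTES);
}

void *my_malloc(size_t n)
{
    void *p = malloc(n);
    if (p == NULL && gEmergency != NULL)
    {
        free(gEmergency);  /* release the reserve... */
        gEmergency = NULL;
        gMemoryLow = 1;    /* ...flag the main loop to warn and save... */
        p = malloc(n);     /* ...and retry once */
    }
    return p;
}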

But nowadays, you'll never run out of memory just from opening the usual documents here and there unless the user accidentally opens 1,000,000 1TB documents, so I check that the user never tries to open more than 100 documents at a time and put up a message if they do, to give them a chance to back out. I don't have any situations where the user types in a number N and I allocate N bytes or N * sizeof(something), where N could be 2^32. If I did, I would check the number before allocating and force some limit on it.

I finally decided that in the 21st century on general purpose machines, malloc failure is just a catastrophic failure and there's no sensible recovery plan possible, so why bother.
Permalink Practical Economist 
July 10th, 2007 12:02pm
Nobody has mentioned saving the user's work before crashing. That's the minimum that should be done. For example, one app I've worked with allocated a small safety buffer of memory on launch and kept it in reserve. If it later ran out of memory, it would use the safety buffer to execute a minimal save-and-exit routine.
Permalink Send private email bon vivant 
July 10th, 2007 12:04pm
It's perfectly reasonable to generalize, so cram it with walnuts.  Ugly (s)
Permalink Michael B 
July 10th, 2007 12:26pm
>>Nobody has mentioned about saving the user's work before crashing. That's a minimum to be done.<<

Your malloc() is returning NULL.  Shit is fucked.  Don't even try.
Permalink Michael B 
July 10th, 2007 12:27pm
It's probably better to periodically save the user's work in a temporary area when things are going well than to try to save the user's work after you've run out of memory.
Permalink Send private email Wayne 
July 10th, 2007 12:38pm
>> I finally decided that in the 21st century on general purpose machines, malloc failure is just a catastrophic failure and there's no sensible recovery plan possible, so why bother. <<

That's the trouble.  Malloc() failure does not mean you're out of memory anymore.  It means YOU CALLED ME WRONG.  It's most likely to fail because you told it to allocate negative bytes.  This indicates a bug in the application and you want to trap it here and die in a controlled manner rather than sending a NULL pointer up into your application so it can die at some undefined later time with a mysterious SIGSEGV.
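
You can demonstrate it in a few lines (relies only on the fact that a negative int silently converts to a huge size_t):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int n = -1;           /* some buggy length calculation upstream */
    void *p = malloc(n);  /* -1 converts to SIZE_MAX; this will fail */
    if (p == NULL)
        puts("caught the bug here, not three calls later in a SIGSEGV");
    free(p);  /* free(NULL) is a no-op, so this is safe either way */
    return 0;
}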
Permalink Michael B 
July 10th, 2007 12:43pm
Isn't that the benefit of exceptions?  The C++ new operator will throw an exception if it fails.

I haven't called malloc() directly in years.
Permalink Send private email Wayne 
July 10th, 2007 12:53pm
I'm mentioning malloc as an example, but this applies to new as well. I don't care if new throws an allocation exception. Other exceptions, sure, I care about them.

As to calling malloc with negative numbers, well, don't do that. (WTF?)
Permalink Practical Economist 
July 10th, 2007 1:17pm
I agree PE, I don't care if new raises an exception either -- I was just responding to Michael B's post.
Permalink Send private email Wayne 
July 10th, 2007 1:34pm
>>>Your malloc() is returning NULL.  Shit is fucked.  Don't even try.<<<

Except the application I refer to did indeed do so, and it worked quite successfully. Saying execution cannot continue after malloc() returns null is the refuge of the inexperienced.
Permalink Send private email bon vivant 
July 10th, 2007 1:58pm
...as long as the operating system doesn't come along and kill you before you get a chance.  While you *can* recover from an out of memory condition (if you're careful), there are very few cases where that isn't just a big waste of time.
Permalink Send private email Wayne 
July 10th, 2007 2:05pm
"as long as the operating system doesn't come along a kill you before you get a chance"

Ah yes, the good old Unix "system running low on resources, let's randomly kill a few processes without warning" thing.  :-O  Fortunately Windows is not quite so unforgiving, and there are some more reliable exit strategies on that platform.
Permalink Send private email bon vivant 
July 10th, 2007 2:17pm
Wayne's solution is a lot more likely to keep you from losing data, bv.

I might also point you to this helpful little reference: http://www.openbsd.org/cgi-bin/man.cgi?query=malloc  Note in particular the section titled Diagnostics.  Once malloc returns NULL, the process is done done done.  The program can trap the signal, but frankly, at that point all it should really be doing is attempting to clean up any resources that the OS won't naturally get, such as database connections.
Permalink Send private email Clay Dowling 
July 10th, 2007 2:29pm
>Saying execution cannot continue after malloc() returns null is the refuge of the inexperienced.

It isn't saying that execution *can't* continue -- it's saying that building all of the fallbacks that would reasonably allow execution to continue -- presuming the rest of the system hasn't already started collapsing (which it *will*) -- adds complexity to the code (ergo adds faults), and takes time. You're making your code more faulty and more complex under real-world scenarios, and spending more time doing so, to handle a situation so rare that it will almost never happen -- and when it does, your fate is largely out of your hands anyway.

There's a *bit* of a difference there.

And I'd say the refuge of the inexperienced is clutching onto cargo cult practices that aren't relevant just because it gives them a warm feeling that they've somehow elevated themselves.
Permalink DF 
July 10th, 2007 2:29pm
If you are calling malloc in an application you are happy to abort when it fails, and you haven't got your call to malloc in a single, central piece of wrapper code that logs a diagnostic and aborts nicely, then you (or whoever was responsible) is an idiot and not to be trusted with C.
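
A minimal sketch of that wrapper (the name xmalloc is just the traditional choice):

#include <stdio.h>
#include <stdlib.h>

void *xmalloc(size_t n)
{
    void *p = malloc(n);
    if (p == NULL)
    {
        fprintf(stderr, "xmalloc: allocation of %zu bytes failed\n", n);
        abort();  /* one failure path, one diagnostic, one core dump */
    }
    return p;
}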
Permalink  
July 10th, 2007 2:48pm
Autosave is helpful as part of a comprehensive strategy, but there can be a gap between the last autosave and the time of a crash, and for complex apps a save operation might take long enough that frequent autosaves are not practical.

Clay, your reference is for Unix, which is the archetypal broken OS. There are more robust OS's around these days. :-)

What's all this about the *system* dying when a process runs out of memory? Processes have resource limits; just because a process tried to exceed its limits doesn't mean the system will fail. It just means the process will die.

All I'm saying is that complex apps have bugs and sometimes they crash unexpectedly (maybe with a null pointer exception or something). You can reserve, in advance, a safety buffer and some code that will only perform safe operations, and in the code, using that buffer, you can do a clean up and save vital stuff before the program exits. That's all. On Windows it is quite practical to do.
Permalink Send private email bon vivant 
July 10th, 2007 2:52pm
There is a certain joy in spending most of my time working on web applications -- if there is an error you just die, print a nice error message, and email the exception.  There's no point in handling anything because the interaction is stateless.
Permalink Send private email Wayne 
July 10th, 2007 3:09pm
>>You can reserve, in advance, a safety buffer and some code that will only perform safe operations, and in the code, using that buffer, you can do a clean up and save vital stuff before the program exits. That's all. On Windows it is quite practical to do.<<

You're wrong.

Failing malloc() means your position is no longer tenable but you happen to be fortunate enough to have a moment to shoot yourself on your own terms.  It's called abort().
Permalink Michael B 
July 10th, 2007 3:57pm
>> Except the application I refer to did indeed do so, and it worked quite successfully. Saying execution cannot continue after malloc() returns null is the refuge of the inexperienced. <<

Congratulations!!!!!!  We're so proud of your custom memory allocator/ability to set resource limits/your clever disposable caching system that allowed you to get malloc() failure to be a signal for something besides memory corruption/logic errors!!!

I'm glad you're here to offer every pathologically slanted case possible in every discussion about anything and will never let a single person be so arrogant as to get away with the slightest generalization!!!
Permalink Michael B 
July 10th, 2007 4:09pm
In my own case, I already have my own custom malloc that tracks memory leaks and buffer overwrites, so that's not an issue. NULL coming back is still an unrecoverable disaster for modern PCs.
Permalink Practical Economist 
July 10th, 2007 4:22pm
When I read the title I felt a sense of déjà vu.


2004

http://discuss.joelonsoftware.com/default.asp?joel.3.40753.27

2005

http://discuss.joelonsoftware.com/default.asp?joel.3.85197.27

I bet I'm missing some more out there.

Sorry to be the A.R. here, but those were really good discussions :)
Permalink Send private email Masio 
July 10th, 2007 6:48pm
Those are interesting. I have to comment though - they talk about systems in which NULL is not 0L. I keep hearing about those things, and it's like hearing about the systems where true is defined internally as 0, and false as 1, so that when you use true interchangeably with 1, some guy always wants to bring up this system he once worked on where true was 0, NULL was 901, and bytes had 11 bits.
Permalink Practical Economist 
July 10th, 2007 7:59pm
What's the value in redefining NULL, exactly?  I can only imagine it being used for working around an architectural flaw, really.
Permalink Michael B 
July 10th, 2007 11:42pm
Well, personally I don't believe these systems with true=0 and NULL != 0 actually exist. Sure, you could define them to be some weird ass thing, but that's just wrong & the people who bring this stuff up in meetings are just doing it to be annoying.

"What if NULL isn't 0?"
"It will always be 0."
"But what if it's not?"
"It won't."
"Open your mind here. We have to account for all conditions to have robust software. The PNG-100 had 41 bit address pointers and NULL was the nonodecimal pattern 0nBFK1918J. We should be prepared in case a customer needs to support the library on that platform."
"Eat shit and die."
"Oh that's just like you. Big Boss, do you think that we should have bugs in the program due to sloppy thinking like PE says?"
"Yeah PE, Marvin has a point. We should be prepared in case that happens. PE, replace all instances NULL comparisons with a NULL comparison function and initialize it at start up according to all the possible values according to the architecture."
Permalink Practical Economist 
July 10th, 2007 11:52pm
If you're working in C++, you are assured that NULL is 0.  In fact, Stroustrup recommended using 0 instead of NULL, because NULL might be defined in terms of the data size in some headers, which might cause no end of bitching from the compiler when you assign a uint32_t to a char, since you'd be dropping 3 of the 4 bytes.
Permalink Send private email Clay Dowling 
July 11th, 2007 8:33am

This topic is archived. No further replies will be accepted.
