32-bit x86 Position Independent Code – It's That Bad (ewontfix.com)
136 points by cremno on April 15, 2015 | hide | past | favorite | 102 comments


I always wondered why Linux can't just use the same thing as Windows for DLLs: instead of PIC, a big table of places to patch with the final position.

It looks really terrible to lose one precious register and have all this PIC overhead.

I don't use Mono's ahead-of-time compilation because it also creates PIC. The JIT has one more register available. But I haven't measured it yet.


Linux did this before, but moved to PIC around 1995 as part of the a.out->ELF transition.

ELF actually supports (and always did) non-PIC code on x86 Linux in shared libs, ld.so will do fixups to the code on the fly. See http://eli.thegreenplace.net/2011/11/03/position-independent...


> Instead of PIC, a big tables of places to patch with the final position.

Among other things, because that means the text section has to be writeable, and can't have a shared mapping across processes unless it's mapped in the same place in every process, which breaks address-space layout randomization (ASLR).


On Windows, one can choose a preferred base address for each DLL, and if one is lucky, there is space available in that area of the virtual address space. Otherwise the loader needs to do the relocation work, temporarily making pages writeable to modify them.

If multiple processes load the same DLL, the preferred location happens to be chosen well, and that virtual address space is free in every process, then the loader can simply share the DLL's pages.

That's how I think it was before ASLR came around.

I think Windows removes the writeable-flag again after relocating.

For the area I described (Mono's JIT vs. Mono's ahead of time compilation as PIC .so-file) I personally don't care that much for ASLR, because it's a managed language and because of specific circumstances of that project.


Linux had a similar mechanism, "prelink", which also completely broke with ASLR.


Windows has ASLR too (since Vista) and it works with existing DLLs. Only pages which contain relocations can't be shared, which in practice means only code that accesses global data (including calls through the import table to other modules.) All the other pages can be.

The whole process image is writable when the loader is running - else how could it load the code into memory? :-) Section attributes are applied after loading.


Windows doesn't store those thunks in the text section. They're in the .idata section instead.


Off topic, but does anybody know if there's a way to turn off ASLR, e.g. while debugging? (Honestly, I'd be interested if you can do this on any platform.) I'm pretty confident the answer is no, but it can really be a pain sometimes...


Either the linker or GDB (or your debugger) can disable it for you. Under GDB it's controlled by the setting disable-randomization (which defaults to on, i.e. randomization is already disabled when running under GDB on Linux).


/proc/sys/kernel/randomize_va_space on Linux?


On Windows, the linker has an option to do that.


setarch x86_64 --addr-no-randomize myprogram args...

(or -R for short)


Windows DLLs don't have features such as the executable overriding symbols in the attached library.

In fact, if you call LoadLibrary at runtime, there is no symbol resolution; you have to look up symbols yourself using GetProcAddress.

Under dlopen, symbol resolution takes place; it can change the destinations of function calls.


Under dlopen, symbol resolution takes place; it can change the destinations of function calls.

I'm probably a bit biased since I'm rather used to Windows' explicit import-export system, but having function calls change just because a library was loaded sounds like an opportunity for some extremely confusing bugs.


The more I play around with musl (the author's C library) the more I'm convinced dynamic linking was not worth the trouble.


As a programmer who unfortunately is suckered into doing devops at times, the more I get to just update the openssl library instead of the whole OS whenever a security flaw in openssl is announced, the more I'm convinced that dynamic linking was very well worth the trouble.

Funny how perceptions change depending on the angle you're looking at a problem.


Oddly enough I was thinking about openssl updates specifically. Did you remember to restart all your long-running clients? (I'm a sysadmin who occasionally gets roped into doing development...)

Dynamic linking makes security more difficult to reason about, which is a Bad Thing. It has many pluses, but also many minuses. And with symbol versioning it gets very problematic to actually figure out what your real code path is to begin with.


And when you update all your clients with static linked OpenSSL, did you also remember to restart every single one of them? These seem like equivalent problems to me.


Not quite, I think. With a .so, I can ask `lsof` which services on the machine require a restart: the ones that haven't restarted are still linked to a deleted .so.

And you wouldn't need to restart the clients with static OpenSSL — you need to recompile them. And with static linking, I'm not sure how you would easily determine the linked version out to say, the minor or the micro. (Perhaps, if this is your OS's thing like nix, the package manager keeps track…)


Tip for Debian/Ubuntu users: checkrestart from the debian-goodies package is a nice wrapper around lsof that lists running binaries that rely on outdated solibs:

http://manpages.debian.org/cgi-bin/man.cgi?query=checkrestar...

https://gehrcke.de/2014/06/good-to-know-checkrestart-from-de...

It not only shows processes that run older solibs, but in the case of services, it will also give you the commands to restart them :).


It also shows programs that are still using an old binary that has been deleted (unlinked from the filesystem) but not yet freed, since its refcount hasn't reached zero. This can also be used to uncover some forms of hiding that hack attempts use.


Not quite, but you've identified a key leverage point.

You only need to relink the apps, unless the newly patched library breaks its own ABI. Otherwise, it would be ideal for package distributors and package management systems to ship just a single .o file for an app, and do the final linking at package install time. Then updating a buggy static library doesn't require full recompile of any apps.


I don't think that nix would be able to tell you which version of OpenSSL some binary linked statically, as there would no longer be a reference from that binary to /nix/store/xxx-openssl-1.0/lib/libssl.so.

Nix generally uses dynamic linking, though the links are to absolute paths to a specific library version, so upgrading OpenSSL requires a recompile much like with static linking.

With "normal" dynamic linking, upgrading OpenSSL and restarting some service means you're now running untested code, hoping that the new version of the dynamic lib really doesn't change an interface on which your code depends. But to be fair, this is usually a good assumption.


Should be relatively easy to make a tool that uses `lsof` or similar to get a list of all running processes that depend on a shared library.


I personally would like to see a distribution have a mixed policy -- dynamic linking against core libraries that are part of the default / basic OS install, and static linking against low usage libraries. That way, you keep the "update one library package to fix a bunch of programs", yet you also reduce cross dependency issues by limiting the number of library packages a given program is dependant on.


IIRC Snowflake[0] does that (but also has a very experimental per-process view of /usr -- pretty cool stuff, really, but takes some getting used to).

[0]: https://github.com/GregorR/snowflake


You are right. Lack of it will be the next wave of security problems.


There are no problems with updates anymore. You don't have to download everything to update a bunch of static binaries. There are things, like bsdiff, that allow you to download and patch only tiny differences related to a security flaw.

There is no need for dynamic linking for updates. Really.


What if you have binaries compiled from proprietary code bases from umpteen vendors using different compilers and compiler options and what-not, and they all use the Internet for self-updating and stuff and they all need to be patched? How are you going to do that?

Also - weakly related - what if you want to use a plugin from vendor A inside the program of vendor B, and they live in the same address space - how are you doing it without dynamic loading?


Chances are, if you're using those binaries, they have their own versions of whatever libraries they depend on already baked into themselves somehow. So even if they depend on OpenSSL, and you update OpenSSL on your system, they're probably still going to end up using a version from a year ago regardless of what you do.


> What if you have binaries compiled from proprietary code bases from umpteen vendors using different compilers and compiler options and what-not, and they all use the Internet for self-updating and stuff and they all need to be patched? How are you going to do that?

Proprietary code isn't going to risk using system libs beyond libc, because of portability. So those binaries likely still need to be patched separately.


> What if you have binaries compiled from proprietary code bases…

Then don't do that. Seriously, RMS, ESR and others have written a plethora of well-reasoned essays indicating why proprietary software is a poor choice.


So you're admitting that this isn't a solution and you need to change the rules of the game to make it work.

Well, I'm convinced.


'Doctor, it hurts when I stab myself in the eye!'

'Well, don't stab yourself in the eye.'


ESR advocating for something is reason enough to seriously consider the alternative.


I don't see a problem with proprietary code bases.


There is, however, an important security benefit from PIC: ASLR. It's not a silver bullet, but it can stop a range of vulnerabilities from being reliably exploitable unless a memory leak is also available.

Even if you go so far as to control the instruction pointer as an attacker, you might just not know where to jump in the target address space...


ASLR is a pretty hackish solution to the main problem, which is that the function call stack and parameter stack are combined. The problems that ASLR mitigates don't exist in languages like Forth, where there are two separate stacks - a call stack and a separate parameter stack.


This is really an ABI issue, more than a language one. I'm not sure anything in C requires a combined stack. Plus, processor architectures like ARM and POWER use link registers rather than a stack to store the return address. It's the ABI that then chooses to put the LR on the parameter stack. Even on x86, SP could be dedicated as a function stack and separated from BP (or pick another x86_64 register) as the parameter stack.

Of course, being an ABI issue probably makes it really hard to solve, and hence hacky solutions abound. (Actually, it's probably pretty easy to solve given LLVM, just impossible to get anyone to accept.)


Hm, you may be right that nothing in C requires it. But things like POSIX pthreads do - e.g. pthread_attr_setstack() only lets you set a single stack address and size.

So yes, you'd have to create a new runtime from the ground up, not just a new ABI for an existing API.


ARM does store the return PC in a link register, but unless you're in a leaf function, it will still be pushed into the stack and can be overwritten by overflows.


It's foolish to think ASLR only prevents stack smashes - it also prevents unlink-style attacks via heap overflows and use-after-frees, data overflows into pointers and other segments, and many other vectors.

Yes, it is a mitigation strategy that wouldn't exist in an ideal world. But we have to do best with what we have, and security takes a very pragmatic approach at this.


OpenBSD now has ASLR for static executables. There was some discussion of it on #musl so it might happen.


One of the main reasons for dynamic linking has become irrelevant, I believe: the availability of disk and memory space has grown faster than the size of the binary objects.


IMHO, static linking, whether via a mechanism like what npm does or via plain old static linking of binaries, also means "you link it, you own it".

For every package you link statically, you as the parent package owner become responsible for all security flaws of all the packages you link statically.

And by "responsible" I mean: Every dependent security announcement of a dependent package also becomes your security announcement.

If you're AWESOMY 1.1 and you link statically against openssl, and openssl announces a security flaw, then you had better quickly release AWESOMY 1.1.1 with an accompanying security announcement too.

Are you willing to do this? Do you trust the chain of dependencies all the way down to also be willing to do the same?

As a responsible developer, I'd much rather delegate that responsibility away to a packager or even the user, especially with well-known libraries like openssl.

If I'm owning AWESOMY 1.1 and link dynamically against openssl, then I don't have to do anything when openssl releases a security announcement. I can, if I want to, inform my users to maybe update openssl, but with some likelihood they are already doing this anyways for some other package.

For me as a developer, this is considerably more convenient.

Yes. Static linking has huge advantages for me as a developer too, but it also comes with a great many additional responsibilities I'm personally not willing to take on, also because I don't trust my dependencies to be as diligent about their dependencies.


At least on Linux, (commercial) people are packaging shared libs with their binary and using LD_PRELOAD wrappers anyway. "Own-it-by-proxy"-except-you're-shipping-it-so-you-kinda-own-it-anyway.

Dynamically linked, dlopen()-or-equivalent plugin designs are an exception. Otherwise, I'd like to see more statically-linked applications.


For every package, you link statically, you as the parent package owner become responsible for all security flaws of all the packages you link statically

Well, speaking as an administrator rather than a developer: a developer shouldn't be making that kind of policy decision (time of symbol resolution) to begin with barring a strong technical need (plug-in based architecture, etc.).

I mean, 90% of packages don't care whether their libraries are dynamic or static, but the ones that do can still be annoying. (Coreutils, of all things, requires dynamic linkage for stdbuf, though that seems to be considered a bug by the maintainer; on the other end of the spectrum, getting Perl to even build statically is like pulling teeth, and that was a design decision.)

Do you trust the chain of dependencies all the way down to also be willing to do the same?

The only time that is a problem is those cases where the package developer literally plunks down a bunch of code from his upstream into his own tree (think the embedded glib inside pkg-config, or the hideous monstrosity that is gnulib). I think all sane people agree this is bad -- if you aren't significantly altering the code (at which point it's "yours") there's no sense in doing that and risking a sync problem. But that's not even static linkage; that's just literal code-sharing.


The embedded glib inside pkg-config solves the chicken-egg problem: pkg-config requires glib and glib requires pkg-config. This way, you can build pkg-config first with its embedded glib, then glib itself and then go solve your own problems, instead of butchering the build system to make them build at all.


Oh, true, and I didn't mean that as a criticism of pkg-config; they only did that because it was literally the only option. In general you should only duplicate actual code if it's the only option, was my point; pkg-config is the pattern and gnulib is the anti-pattern in that.


I believe that what you describe is the remaining argument in favor of dynamic linking.

That said, I'd like to point out that it is convenient for the developer, since customers are ultimately interested in knowing whether your software is vulnerable, and you'll have to explain how the vulnerability affects it.

It's also a double-edged sword: openssl has a good backward-compatibility record, but that can't be said of all user-space libraries. IOW, it's tricky to guarantee that your software will work flawlessly across all the incarnations of CentOS 6.x if you have a lot of external dependencies, for example.


IOW, it's tricky to guarantee that your software will work flawlessly across all the incarnations of CentOS 6.x,

It's not so tricky, since Red Hat specifies nowadays what guarantees can be expected for what packages:

https://access.redhat.com/articles/rhel-abi-compatibility#Ap...


Dynamic linking is required for anything resembling a plugin architecture (such as PAM).


Not really—one could use processes and some form of IPC (shared memory, pipes, whatever). The efficiency could be pretty bad, but the safety and reliability could be better.


In fact, OpenBSD delegates logins to a login_<foo> binary, e.g. http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man8/.... That works. OpenBSD's system is not as flexible as PAM, with the attendant upsides and downsides.


Yeah. There's a reason I stick with OpenBSD.

I have absolutely no need for my authentication system to be "flexible".


"And by "responsible" I mean: Every dependent security announcement of a dependent package also becomes your security announcement."

Yes and no, because as the developer you can look at the security vulnerability and decide whether it's actually exploitable in your application. That assumes you can determine it, but in a lot of cases it's simple, especially with a huge library like OpenSSL. Say, for example, the only thing I'm using OpenSSL for is some limited functionality, say SHA-256; then I can probably ignore 99.99% of the security issues because they just won't apply.

I've been in this situation with an embedded platform that ships as part of the product I work on. It gets pretty much daily security updates, and yet we rarely get hit by any of them because our usage of the platform is like 1% of its functionality.


This is an oft-repeated mantra, but is it really true? I don't have a linux desktop running here, but try an experiment: start up a desktop environment such as KDE, and write "free -m" and see how much is there under "shared" heading. Without shared libraries, some extra memory corresponding to some multiple of that number would be used.

To calculate exactly how much memory is saved by shared libraries, you'd need to write a kernel module to walk the internal structures describing physical pages and summing reference counts of used pages. Maybe it's already been done?


some extra memory corresponding to some multiple of that number would be used

Depends. If I have 7 instances of my terminal emulator loaded (say I hadn't discovered tmux yet or something), the loader can share their .rodata and .text segments, and in many cases it does (YMMV; heuristics apply; void where prohibited; etc.). So a lot of things that are in shared libraries right now might still be only loaded into memory once if their binaries are segmented correctly.

The Plan9 people (Plan9 doesn't do dynamic linking) claim that the memory savings they get from skipping the relocation overhead are greater than the memory hit from the times that the same stuff does get loaded multiple times, though obviously always take self-promotion with a grain of salt.

Use case probably matters -- my servers run few processes to begin with, and it's often a lot of versions of the same process, whereas my laptop runs a ton of very different processes. Same sort of argument that makes me happy with udev on my laptop while also very happy with a static /dev tree on my servers.


I agree, sharing will (most probably) still work at the segment level [0]. But this means that if I have two different executables each linking an identical version of, say, Qt into its .text segment, those copies will not be shared.

[0] In fact, certainly. Program loading works by mmapping the executable (MAP_PRIVATE) into the process's address space; its read-only pages are backed by the shared page cache, so all mappings of the same file share physical pages, with copy-on-write kicking in only when a mapping is written to.


You could do that calculation in userspace with a stock kernel, by summing up the sizes of the non-writable mappings of the various library files in /proc/*/maps, then using mincore() to find out how much of the libraries are resident.


mincore is a system call and does not take a pid parameter. This means it has to be executed by each process individually, which would require injecting code into each running process and executing it in some way.

Unless there's a tool which makes this extremely simple (maybe Intel's pin?), I believe that writing a kernel module is simpler. The module's init function tallies up the pages and writes out the result into the kernel log. Then the module exits.


You don't need to run mincore() on target pids - you just need to write a tool that opens(O_RDONLY) and mmaps(PROT_READ) each library file, then calls mincore() on each page in the mapping to find out which pages of the library are loaded shared.

The results of mincore() in one process with a shared mapping of a file are enough to tell you how much of that file is loaded shared system-wide.


Just found out that physical page information is exposed through /proc: https://www.kernel.org/doc/Documentation/vm/pagemap.txt


Ah, good point! Though I think I'll do a dlopen on each file to mimic whatever the loader usually does. I'll do it as a weekend-project :)


In case it helps, here's my little utility (which just shows how much of a file is in core):

https://github.com/keaston/fincore


Are you suggesting that every single calculator, file manager, terminal, editor, package manager and music player ship with their own copy of libQt5* (around 78M) or gtk?


Static linking doesn't pull in the entire library; it pulls in only the parts of the library it uses.

That said, Qt and GTK probably should be broken into smaller libraries in a perfect world.


So I take it you're offering to buy me some more RAM, then? Unfortunately, my motherboard is already full at 4GB, so this upgrade would be more expensive than you might expect at first. What about my friends that have older 2GB laptops? Do they get an upgrade too?

I'm joking, of course, but there IS a lot of variation in computer hardware, which makes this kind of broad generalization even more problematic.

Also, memory size is probably not that useful of a metric for modern hardware, where the penalty for overflowing the CPU caches can be huge.

edit:

Why statically link when you can prelink(8) instead?

( http://linux.die.net/man/8/prelink )


Can you give more details? Isn't this particular issue pretty much x86 specific anyway? I think it's not a problem anymore on x86_64.


So what's the story on 64-bit x64 or other processors?


Position-independent code on x64 is quite trivial: it has a new addressing mode called RIP-relative addressing, so that a program can access everything relative to where it is currently executing.


There are some pretty pictures here which help to explain how x86_64 handles relocations in a much less cumbersome way: http://www.mindfruit.co.uk/2012/06/relocations-relocations.h...


Is 32-bit Linux still worth worrying about? It seems that 64-bit CPUs and kernels have been around for a really long time now. (Honest question.)


We still have embedded 8-bit. Expect embedded 32-bit approximately forever.


Embedded 32 bit x86, with the SysV ABI and support for register calling conventions, though?

This is specific to i386, as far as I'm aware. And even then, the worst parts of it are only specific to i386 using the same SysV calling conventions -- ARM has PC relative addressing, as do most other processors commonly used in embedded systems. If you have PC-relative addressing, the entire problem goes away.


Yeah, but we all know the embedded 32-bit stuff that's going to be around forever is ARM, not x86.


We hope it's not going to be x86… I dunno. I still see too many embedded 486/686 clones around for my liking, and Intel is now trying to get into the whole IoT business with x86-32 Quark CPUs.


Intel is still selling (well, trying to…) 32-bit only Atom/Quark CPUs for smartphones and IoT devices.

Coincidentally, Android defaults to PIC in newer SDK versions. So there is still a pressure to optimize for IA32+PIC.


Say you want to build flying robots 1/64 of an inch in diameter. Do you want 64-bit CPUs?


No, but you don't want x86 either, at least given anything vaguely resembling today's technology. See, for example, the Michigan Micro-Mote, which is only a few times that size(!) but has ROM and RAM sizes measured in bytes:

http://www.ee.columbia.edu/~mgseok/pdfs/phoenix_isscc_dac_de...


If you want linux to run in places where hardware is less frequently updated, yes.


For a JIT I'm working on, I've been trying to call into libc from the emitted code, and I'm running into this.


"I know on darwin, we kinda hate those sorts of relocations because they are a performance sap. This type of performance sap is nasty as it is pervasive and invisible and hard to ever get back." -- Mike Stump (gcc-patches ML, 2012)

I think the Solaris linker took some interesting approach with shared libraries at runtime as well.


There was a time when shared libraries were a necessity. Not to mention supporting myriad architectures. That was a different era. Conditions have changed.

Of course, the designs and implementations have not. They are still in heavy use, 20 years later. What "just works" is never questioned. I am glad to see some people questioning the old assumptions.

Unrelated question: Why does Linux pass arguments in registers instead of using the stack?


>Why does Linux pass arguments in registers instead of using the stack?

It's a convention. Depending on your architecture, compiler and ABI, the stack is used in a lot of cases (cdecl/stdcall on i386). Linux on AMD64 always uses the SysV ABI, which uses registers as much as possible (I'm assuming for performance reasons, since 'fastcall' does the same on i386).


I didn't know that was the name of it; interesting, though it sounds like an anachronism.

http://en.wikipedia.org/wiki/X86_calling_conventions#System_...


"It's a convention."

Does Minix use that convention? How about BSD?


"The calling convention of the System V AMD64 ABI is followed on Solaris, Linux, FreeBSD, Mac OS X, and other UNIX-like or POSIX-compliant operating systems."

http://en.wikipedia.org/wiki/X86_calling_conventions#System_...


int80h.org

Anyway, the original question was _why_ does Linux use registers to pass arguments. Arguments can be passed on the stack, but Linux chose to use registers. Why this choice over the other option?

Do you know the answer?

Just curious.


It's because in theory, it's faster to copy the values to registers than to have to push them all to the stack and then have the callee pop them. At least that was the thought behind Microsoft's __fastcall, which was their original version of a calling convention using registers, though in that case it seems the benchmarks aren't conclusively in support of the theory.


What makes you say conditions have changed? Why should I have to wait for everything that consumes OpenSSL downstream to be updated/recompiled to fully patch my system?


It's easy to see why sysadmin types like shared libraries, even though that ability is an accidental side-effect of an architecture originally developed as a way to save disk space, but as an end user I strongly prefer isolated applications which depend only on the base OS. Updating one program and thereby inadvertently updating a bunch of other unrelated programs is an anti-feature for me as an end user, because it means I can't trust the state of my system. I don't want to waste time figuring out why things no longer work the way they used to; I want to keep on using my computer for its intended purposes. Therefore I, as an end user, update software as infrequently as possible, and only do so when I don't have anything else I need to do with the computer for the rest of the day, just in case I have to waste a bunch of time figuring out why things don't work anymore.

If I were a sysadmin, and maintaining the computer were my job, this would not be a problem, because figuring out why things don't work and fixing them is what I would be trying to use the computer for; but as an end-user, I just want things to work.

If every application were built with static linking, and system libraries only updated with an explicit system upgrade, I would be much more likely to upgrade frequently, because it would be possible to know and limit the scope of churn implied by any given upgrade act.


Are you suggesting that the original purpose of shared libraries relates to patching software?

Others seem to be suggesting that the emergence of dynamic linking was a response to general limitations in secondary storage and memory. Limitations that no longer exist.


Does it have to be the original purpose to be a useful feature?


Is this supposed to be an answer?

I like marssaxman's comment.

I'm not sure I would call this a "feature" because I think of "features" as being intentional and if marssaxman is correct, this justification for shared library use was an accidental side effect.

But I have limited knowledge of the history behind dynamic linking. Hence my questions. I am just curious.


Playing Devil's advocate, why do you need to wait for others to recompile the software?


Registers are much faster than the stack, assuming you immediately make use of them; to use values from the stack you have to load them into registers anyway in most cases.


In theory, passing args in registers is faster because you avoid one copy in simple/small routines (the ones that perform a very simple op on the register arguments and return). In practice there are far more complex routines than simple ones, and the register arg gets copied back onto the stack in the routine's local-variable area, either because the routine needs that register to perform some op or simply because it needs that register to pass an argument to a subroutine. So in practice, the arg copy is only shifted from the caller to the callee.


For x86 you might be right, because there aren't many registers. I doubt it is as much of an issue for x86-64. It depends on a lot of stuff that I don't have the numbers for, but, from your example:

If the functions tend to perform operations that are dependent on the arguments first (I know lots of functions I see do, doing things like an immediate null pointer check and pointer dereference on an arg) then it is better to already have them in a register, you can often immediately replace the value of the pointer in the register with the value from dereferencing.

For your point about there being more complex functions than simple ones, it doesn't matter which there are more of, it matters which are called more often. If every complex function on average calls 0.5 complex functions and 5 simple functions, you still probably have more simple function calls overall.


Except in leaf functions, where the win materializes. Leaf functions are a healthy fraction of calls, counting dynamically, so passing args in registers is a good idea.

It's true that the benefit isn't enormous.


> Obviously what we'd like to see is: "foo: jmp bar"

Shouldn't that be "foo: call bar"?


I think it's JMP due to tail-call optimization. If there's a JMP, the RET of the inner function (bar) also returns from the outer one (foo), and one unnecessary call/return pair is eliminated.




