This is the mail archive of the crossgcc@sourceware.org mailing list for the crossgcc project.
See the CrossGCC FAQ for lots more information.
On Thu, May 11, 2006 at 01:26:29AM -0700, Michael K. Edwards wrote:

> Don't get me wrong -- working cross-compile build procedures are a
> good thing. But I wouldn't really want to cross-compile, say, Perl,

I've tried it, and it doesn't work. Perl is fundamentally broken by
design with regard to cross compiling, and the Perl crew doesn't want
to do anything about it.

> or even the Python extension that glues to libxml2;

Python itself works; I haven't tried the libxml2 integration. But you
are surely right about the general problem.

> too much of the build procedure relies on being able to run binaries
> immediately as they are compiled. I am a lot more comfortable with
> qemu than with scratchbox, because the only sleight of hand involved
> is at the opcode and syscall level, and there are absolutely no host
> binaries involved other than qemu itself (and the libraries it calls
> and the host kernel, if you like).

I understand the motivation of people who do that kind of stuff, and
qemu is undoubtedly a good tool for it. My motivation is a different
one.

> Timing aside, there should be no differences between the result of
> compiling under an ARM chroot and qemu and the result of compiling on
> a "real" ARM. Except, of course, that you don't have to find an ARM
> you can hook up to a RAID array, cram a gig of memory into, and run at
> GHz speeds in order to compile, say, TAO in less than 24 hours. I
> don't need an ORB on my embedded system today, but I may well next
> year -- and I prefer not to disqualify code bases with heavy tool
> requirements (C++ templates, anyone?) and extensive compile-time test
> harnesses just because it's impractical to cross-compile them.

How did you know that I have already tried to cross-compile TAO in
PTXdist?
;)

> I might add that it's about time embedded Linux projects started using
> grown-up software packaging techniques (I like dpkg and apt; YMMV) and
> that mkxdeb makes it almost as easy to bootstrap a rootfs image for an
> ARM on an x86 dev host as it is to build a native rootfs with
> debootstrap. (You do, of course, need a Debian repository for your
> arch.) I've done it the hard way too, and I'm doing it the hard way
> again right now (not least because there is no big-endian ARM port in
> sarge or AFAIK sid either) -- but as far as I'm concerned that's only
> the first stage in a scorched-earth bootstrap procedure.

Packaging is a good thing, although I have the impression that at the
moment none of the mainstream packaging mechanisms (rpm, deb) is the
right thing for embedded. That's why we currently use ipkg, although
the code is a horrible mess.

One design decision of PTXdist (vs. just using Debian) was full
configurability. If you go this way, you end up with one distribution
variant per customer project, and full configurability doesn't work
well with precompiled binaries. But it lets us build 4 MB root images
including the kernel, whereas a Debian x86 standard minimum
installation is something around 200 MB these days -- still too much
to fit into some NOR flash.

> Then start layering on applications, without worrying about whether
> they cross-compile easily. And if you find yourself slinging XML in
> C, ask yourself why you're not doing it in Python instead (or Java if
> you must). Don't settle for "I can't figure out how to cross-compile
> libxml2's Python bindings" (or "I can't build against the J2ME class
> library and still run Junit tests on the dev host, and modifying other
> people's Ant scripts makes me go cross-eyed").

Well, usually this quickly leads to a situation where speed and space
are the deciding arguments (although I know that this is a temporary
argument, as hardware becomes more and more powerful).
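The qemu-in-chroot approach discussed above can be sketched roughly as
follows. This is a hypothetical setup, not a procedure from this thread:
the rootfs path is made up, ARM_MAGIC/ARM_MASK stand in for the ELF
header match strings documented in the kernel's binfmt_misc
documentation, and qemu-arm-static is assumed to be a statically linked
user-mode qemu on the host.

```shell
# Hypothetical sketch of a qemu-driven ARM "native" build environment.
# Assumes an ARM root filesystem already unpacked at $ROOTFS and a
# statically linked qemu-arm on the host; run as root.
ROOTFS=/srv/arm-rootfs   # made-up path

# 1. Register ARM ELF binaries with binfmt_misc so the kernel hands
#    them to qemu-arm transparently. ARM_MAGIC and ARM_MASK are the
#    match strings from the kernel binfmt_misc documentation (elided).
echo ":arm:M::${ARM_MAGIC}:${ARM_MASK}:/usr/bin/qemu-arm-static:" \
    > /proc/sys/fs/binfmt_misc/register

# 2. The interpreter must exist *inside* the chroot, because the path
#    is resolved there once we have chrooted in.
cp /usr/bin/qemu-arm-static "$ROOTFS/usr/bin/"

# 3. From here on, builds look native: every ARM binary exec'd in the
#    chroot runs under qemu, translated only at the opcode and syscall
#    level, with no host binaries involved beyond qemu itself.
chroot "$ROOTFS" /bin/sh -c './configure && make'
```

The point of the design, as described in the quoted mail, is that the
emulation boundary sits below the toolchain: configure scripts that
insist on running freshly compiled binaries simply work, at the cost of
emulation speed.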
> Remote GDB stubs are all very well, but when I am troubleshooting
> complex software misbehavior I want strace and ltrace and Perl (and
> oprofile and valgrind when I can get them) right there on the target.

We usually try to design our customer software in a way that it can be
debugged on the host, not on the target: I can compile PTXdist
projects completely with host-gcc instead of cross-gcc, just by
running something like "ptxdist --native go". This method usually ends
where you have to access hardware (which is very often the case with
our embedded / automation projects), but then you are lost with other
methods anyway. At least with things like ethernet you can use UML's
virtual networks (and with CAN as well).

> I'll do that. Having evangelized the above approach, it behooves me
> to follow it from beginning to end for once. :-)

Thanks :-) I very much understand your approach. I'm just asking
myself whether the community shouldn't decide between the ability to
cross compile or not. If everyone goes your way, cross compilation
support in OSS tools becomes less and less tested and ends up like so
many of those sourceforge projects out there that never gained
critical mass.

Robert

--
Dipl.-Ing. Robert Schwebel | http://www.pengutronix.de
Pengutronix - Linux Solutions for Science and Industry
Handelsregister: Amtsgericht Hildesheim, HRA 2686
Hannoversche Str. 2, 31134 Hildesheim, Germany
Phone: +49-5121-206917-0 | Fax: +49-5121-206917-9

--
For unsubscribe information see http://sourceware.org/lists.html#faq