What is an effective method for installing up-to-date software on an out-dated production machine?



My company uses a small, outdated cluster (CentOS 5.4) for number crunching (finite element calculations, to be specific). They run a commercial package and know little about Linux. They don't want to change anything on the machines as long as they run, which I accept as a time-effective policy. I do not have administrative rights.



I can have them install smaller packages, but not, for example, change the Python version from 2.4 to 2.6+, so I decided to compile the current version myself (./configure --prefix=/home/myuser/root2) and ran into a few problems with dependencies (wrong versions of e.g. readline, zlib, curses, bz2, or packages not found). I also need to update GCC, which complains about missing GMP, MPFR and MPC.



The reason for doing this is that I'd like to compile other test software to run on these machines.
What can I do to effectively install the packages I need in order to compile the software I need to work with? I use Arch Linux elsewhere and would find it quite handy to be able to do something along the lines of



pacman --root /home/myuser/root2 -S <package>


But I have no idea whether this is possible or wise.



Other related SE questions suggest gentoo-prefix and pkgsrc, which seem to be not so easy (I may be wrong, though).










  • You should be able to do local installs, e.g. in your home directory. Are these installs just for your own use? Is there some reason you can't use other, more up-to-date machines?

    – Faheem Mitha
    Apr 2 '12 at 8:21












  • Yes, just for me. I'm currently using ./configure --prefix ...; make; make install, but I'm looking for a way to simplify the dependency resolution.

    – Sebastian
    Apr 2 '12 at 8:40












  • Dependency resolution is usually handled by the system's package manager, but using package management for locally installed packages is difficult. Is creating some kind of virtual machine for yourself an option?

    – Faheem Mitha
    Apr 2 '12 at 8:53











  • No, because I want to use it for high-performance calculations (using MPI etc.).

    – Sebastian
    Apr 2 '12 at 8:56

















Tags: compiling, upgrade, not-root-user






asked Apr 2 '12 at 4:58 by Sebastian












3 Answers
Your management is wise not to upgrade a working cluster that performs an important function based on a proprietary package.



Backporting packages is time-consuming and risky, and not always feasible. You might avoid the time penalty if you can find the packages you want to install in the original CentOS 5.4 repository or in some CentOS 5.4 backport repository. While you can have several versions of GCC on one host at the same time (the embedded-systems/cross-compile folks do this all the time), it is not trivial to have more than one glibc in a single run-time environment.



So you are best advised to work in a separate, newer environment that has the packages you need, and to find some way to test the output of the old environment in the new one. In any event, do not risk breaking anything in the old environment, or you may need all of the stackexchange.com reputation points you can get to find your next job ;-)

– Eli Rosencruft, answered Apr 2 '12 at 6:46






  • Thanks for this answer. I'm not that familiar with CentOS. Can I install (somewhere other than /) using yum and packages from a web repository?

    – Sebastian
    Apr 2 '12 at 7:52











    You can download rpm packages to your personal directory and then extract the files from them, including any executables.

    – Eli Rosencruft
    Apr 2 '12 at 8:20












  • I found this mirror and managed to get something working (mirror.centos.org/centos/5/os/x86_64/CentOS). For the moment, Gentoo Prefix seems to be the less complicated alternative.

    – Sebastian
    Apr 5 '12 at 9:51
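The rpm-extraction route mentioned in the comments can be sketched roughly as below. The package filename is a placeholder; you would download a real rpm from a CentOS 5 mirror first, and the extraction line is left commented until you do:

```shell
# Unpack an rpm into a home-directory prefix without root.
# "foobar-1.0.el5.x86_64.rpm" is a hypothetical package name.
mkdir -p "$HOME/root2"
cd "$HOME/root2"
# rpm2cpio emits the package payload as a cpio archive; cpio -idmv unpacks it
# relative to the current directory, preserving the usr/bin, usr/lib/... layout.
# Uncomment once the rpm has been downloaded:
# rpm2cpio foobar-1.0.el5.x86_64.rpm | cpio -idmv
# Then point the environment at the unpacked tree:
export PATH="$HOME/root2/usr/bin:$PATH"
export LD_LIBRARY_PATH="$HOME/root2/usr/lib64:$HOME/root2/usr/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
```

Note that rpm scriptlets (%post etc.) do not run with this method, so packages that rely on them may need manual fixing up.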



















Installing packages from distributions is often difficult when you don't have root permissions: they assume a fixed directory layout, and the dependency system tends to pull in packages with setuid or setgid programs that you can't install as non-root.



Compiling from source is more often than not the easiest way. (And if you're after speed, you can choose the best compilation options for your particular processor model.)



To organize the packages that you compile (or install by extracting tarballs), I recommend using stow or the more powerful but more complex xstow. Their basic mode of operation is to install each package in a separate directory, then create symbolic links to put them all together. Here's a typical compilation and installation session with stow:



tar xzf foobar-42.tar.gz
cd foobar-42
# use $HOME rather than ~ here: tilde expansion after --prefix= is not portable
./configure --prefix="$HOME/software/stow/foobar-42"
make
make install
cd ~/software/stow
stow foobar-42


That last command creates symbolic links under ~/software pointing to the files and directories under ~/software/stow. For example, if ~/software/stow/foobar-42 contains a lib/foobar directory and the files bin/foobar and man/man1/foobar.1, then you will end up with the symbolic links



~/software/bin/foobar -> ../stow/foobar-42/bin/foobar
~/software/lib/foobar -> ../stow/foobar-42/lib/foobar
~/software/man/man1/foobar.1 -> ../../stow/foobar-42/man/man1/foobar.1


To uninstall a program, run stow -D foobar-42 in the ~/software/stow directory, then delete ~/software/stow/foobar-42. To make a program temporarily unavailable (e.g. to try another version), just run the stow -D part.
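For the stowed programs to be usable, the shell, compiler and linker also need to know about the ~/software prefix. A minimal environment setup along these lines (assuming the ~/software layout from the example above; add it to ~/.bashrc to make it persistent) should work:

```shell
# Make the stow-managed prefix visible to the shell and to later builds.
export PATH="$HOME/software/bin:$PATH"
export MANPATH="$HOME/software/man:$MANPATH"
# Let dynamically linked programs find libraries installed here:
export LD_LIBRARY_PATH="$HOME/software/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
# Let subsequent ./configure runs find locally installed headers and libraries:
export CPPFLAGS="-I$HOME/software/include $CPPFLAGS"
export LDFLAGS="-L$HOME/software/lib $LDFLAGS"
```

With this in place, each newly stowed dependency becomes visible to the next ./configure run automatically.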



See also Non-Root Package Managers and Best way to upgrade vim/gvim to 7.3 in Ubuntu 10.04?






  • I'm going to test this, it sounds interesting!

    – Sebastian
    Apr 3 '12 at 6:15


















What is an effective method for installing up-to-date software on an out-dated production machine?




I encounter this frequently while testing a few open source projects I contribute to. For example, I need a modern cURL, Git, Wget and a few other tools on Fedora 1, because it is easier to use Fedora 1 to test GCC 3 than to try to build GCC 3 on a modern machine. I also need modern tools on modern Solaris, because Solaris ransoms its bug fixes and updates: if you don't buy a support contract, you don't get the updated software.



For your CentOS machine I believe you have three options.



First, you can use a repository like Red Hat's Software Collections (SCL). I use SCL to provide modern packages on a CentOS 7 production web server: it provides an updated Apache, an updated Python, an updated PHP, and so on. Also see How to update Apache and PHP using SCL?



There are other repos specifically for Red Hat and CentOS, like Remi's. I had a lot of trouble with Remi on CentOS 7 in the past, so I don't use it.



Second, you can use an external package manager and additional repos (in addition to the native CentOS package manager). I don't use this method, but Linux From Scratch gives a good survey of the benefits of different package managers.



Third, you can install software locally. Here, "locally" means you install into /usr/local, /opt or similar. However, you also need to install the dependent libraries locally too, and that's where the challenge lies.
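One way to handle the dependent-library problem is to build the dependencies into the local prefix first, then point the main build at them. A sketch, using hypothetical package names ("zlib-1.2.x" and "myprog" are placeholders for whatever your build actually needs):

```shell
# Build order: libraries first, then the program that needs them.
prefix="$HOME/local"
mkdir -p "$prefix"
# 1. Build and install the dependency into the local prefix:
#    (cd zlib-1.2.x && ./configure --prefix="$prefix" && make && make install)
# 2. Point the main build at the local prefix. The -Wl,-rpath option bakes the
#    library path into the binary so it is found at run time without
#    LD_LIBRARY_PATH:
export CPPFLAGS="-I$prefix/include"
export LDFLAGS="-L$prefix/lib -Wl,-rpath,$prefix/lib"
# 3. (cd myprog && ./configure --prefix="$prefix" && make && make install)
```

Baking in an rpath this way also sidesteps the /usr/bin-versus-/usr/local library mixups discussed further down.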



I use local installs frequently. You can find my collection of scripts at GitHub | BuildScripts. There are a few benefits to the build scripts.



  • They are easy to use - they provide consistent build settings, including Release configurations and RUNPATHs.

  • They are easy to remove - rm -rf /usr/local followed by hash -r is usually all that is needed to start over.

  • They download and build dependencies as required.

If interested, all you need to do to manage dependencies is run ldd prog to determine the external libraries. The ones I update are the important ones, like libcurl, libssl, libcrypto, libxml, etc.
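The ldd check is quick to do; for instance, on any binary already on the system:

```shell
# Print each shared library the dynamic linker would resolve for this binary.
# Any /usr/local/... path in the output means a locally built library is being
# picked up instead of the system one.
ldd /bin/ls
```

The same command run against your own freshly built program shows immediately which dependencies still resolve to the outdated system libraries.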



The scripts use a simple method to manage dependencies: anything more than a week old is rebuilt. That ensures you always get the latest tools and libraries without complex dependency tracking.




You can do some interesting things with method (3). For example, I acceptance-test libraries: one of the things I do is build all the local software with -fsanitize=undefined or -fsanitize=address by adding the flags to CFLAGS and CXXFLAGS.
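Wiring the sanitizers in is just a matter of exporting the flags before running the build scripts. A sketch, assuming a GCC or Clang recent enough to support -fsanitize:

```shell
# Build everything with the Undefined Behavior sanitizer enabled.
# Swap in -fsanitize=address for AddressSanitizer instead.
export CFLAGS="-g -O1 -fsanitize=undefined -fno-sanitize-recover=all"
export CXXFLAGS="$CFLAGS"
# The link step needs the flag too, so the sanitizer runtime gets linked in:
export LDFLAGS="-fsanitize=undefined"
```

-fno-sanitize-recover=all makes the instrumented programs abort at the first finding, which turns an acceptance-test run into a simple pass/fail.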



I've uncovered more bugs than I can count. See, for example, Unistring 0.9.10 and Undefined Behavior sanitizer findings.




The one downside to local installs is those damned path problems that plague Linux: software in /usr/bin ends up using the libraries in /usr/local/lib rather than /usr/lib. I've made a few systems unstable because programs in /usr/bin picked up the wrong libraries.



There is no way to set a policy of "binaries in /usr/bin can only use libraries in /usr/lib" (and similarly for libraries depending on other libraries). Whoever thought it was a good idea to compile and link against one library and then load a different one at runtime deserves a Darwin award.



Linux is the only major OS that has not solved this problem; AIX, the BSDs, OS X and even Windows all have.






share|improve this answer

























    Your Answer








    StackExchange.ready(function()
    var channelOptions =
    tags: "".split(" "),
    id: "106"
    ;
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function()
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled)
    StackExchange.using("snippets", function()
    createEditor();
    );

    else
    createEditor();

    );

    function createEditor()
    StackExchange.prepareEditor(
    heartbeatType: 'answer',
    autoActivateHeartbeat: false,
    convertImagesToLinks: false,
    noModals: true,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: null,
    bindNavPrevention: true,
    postfix: "",
    imageUploader:
    brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
    contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
    allowUrls: true
    ,
    onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    );



    );













    draft saved

    draft discarded


















    StackExchange.ready(
    function ()
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2funix.stackexchange.com%2fquestions%2f35523%2fwhat-is-an-effective-method-for-installing-up-to-date-software-on-an-out-dated-p%23new-answer', 'question_page');

    );

    Post as a guest















    Required, but never shown

























    3 Answers
    3






    active

    oldest

    votes








    3 Answers
    3






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes









    3














    Your management is wise in not trying to upgrade a working cluster that is performing an important function based on a proprietary package.



    Backporting packages is time consuming and risky, that is, not always feasible. You might avoid the time penalty if you can finding the packages that you want to install in the original CentOS 5.4 repository or in some CentOS 5.4 backport repository. While you can have several versions of GCC on one host at the same time (the embedded systems/cross compile folks do this all the time), but it is not trivial to have more than one glibc in a single run-time environment.



    So, you are best advised to work in a separate, newer environment that has the packages that you need and find some way to test the output of the old environment in the new one. In any event, do not risk breaking anything in the old environment or you may need all of the stackexchange.com reputation points that you can get to find your next job ;-)






    share|improve this answer

























    • thanks for this answer. I'm not familiar with CentOS that much. Can I install (in other than /) using yum and the packages from a web repository? thanks

      – Sebastian
      Apr 2 '12 at 7:52






    • 2





      You can download rpm packages to your personal directory and then use these instructions to extract the files, including any executables

      – Eli Rosencruft
      Apr 2 '12 at 8:20












    • I found this mirror and managed to get something working (mirror.centos.org/centos/5/os/x86_64/CentOS) . For the moment the gentoo prefix seems to be a less complicated alternative.

      – Sebastian
      Apr 5 '12 at 9:51
















    3














    Your management is wise in not trying to upgrade a working cluster that is performing an important function based on a proprietary package.



    Backporting packages is time consuming and risky, that is, not always feasible. You might avoid the time penalty if you can finding the packages that you want to install in the original CentOS 5.4 repository or in some CentOS 5.4 backport repository. While you can have several versions of GCC on one host at the same time (the embedded systems/cross compile folks do this all the time), but it is not trivial to have more than one glibc in a single run-time environment.



    So, you are best advised to work in a separate, newer environment that has the packages that you need and find some way to test the output of the old environment in the new one. In any event, do not risk breaking anything in the old environment or you may need all of the stackexchange.com reputation points that you can get to find your next job ;-)






    share|improve this answer

























    • thanks for this answer. I'm not familiar with CentOS that much. Can I install (in other than /) using yum and the packages from a web repository? thanks

      – Sebastian
      Apr 2 '12 at 7:52






    • 2





      You can download rpm packages to your personal directory and then use these instructions to extract the files, including any executables

      – Eli Rosencruft
      Apr 2 '12 at 8:20












    • I found this mirror and managed to get something working (mirror.centos.org/centos/5/os/x86_64/CentOS) . For the moment the gentoo prefix seems to be a less complicated alternative.

      – Sebastian
      Apr 5 '12 at 9:51














    3












    3








    3







    Your management is wise in not trying to upgrade a working cluster that is performing an important function based on a proprietary package.



    Backporting packages is time consuming and risky, that is, not always feasible. You might avoid the time penalty if you can finding the packages that you want to install in the original CentOS 5.4 repository or in some CentOS 5.4 backport repository. While you can have several versions of GCC on one host at the same time (the embedded systems/cross compile folks do this all the time), but it is not trivial to have more than one glibc in a single run-time environment.



    So, you are best advised to work in a separate, newer environment that has the packages that you need and find some way to test the output of the old environment in the new one. In any event, do not risk breaking anything in the old environment or you may need all of the stackexchange.com reputation points that you can get to find your next job ;-)






    share|improve this answer















    Your management is wise in not trying to upgrade a working cluster that is performing an important function based on a proprietary package.



    Backporting packages is time consuming and risky, that is, not always feasible. You might avoid the time penalty if you can finding the packages that you want to install in the original CentOS 5.4 repository or in some CentOS 5.4 backport repository. While you can have several versions of GCC on one host at the same time (the embedded systems/cross compile folks do this all the time), but it is not trivial to have more than one glibc in a single run-time environment.



    So, you are best advised to work in a separate, newer environment that has the packages that you need and find some way to test the output of the old environment in the new one. In any event, do not risk breaking anything in the old environment or you may need all of the stackexchange.com reputation points that you can get to find your next job ;-)







    share|improve this answer














    share|improve this answer



    share|improve this answer








    edited Apr 2 '12 at 7:00

























    answered Apr 2 '12 at 6:46









    Eli RosencruftEli Rosencruft

    52026




    52026












    • thanks for this answer. I'm not familiar with CentOS that much. Can I install (in other than /) using yum and the packages from a web repository? thanks

      – Sebastian
      Apr 2 '12 at 7:52






    • 2





      You can download rpm packages to your personal directory and then use these instructions to extract the files, including any executables

      – Eli Rosencruft
      Apr 2 '12 at 8:20












    • I found this mirror and managed to get something working (mirror.centos.org/centos/5/os/x86_64/CentOS) . For the moment the gentoo prefix seems to be a less complicated alternative.

      – Sebastian
      Apr 5 '12 at 9:51


















    • thanks for this answer. I'm not familiar with CentOS that much. Can I install (in other than /) using yum and the packages from a web repository? thanks

      – Sebastian
      Apr 2 '12 at 7:52






    • 2





      You can download rpm packages to your personal directory and then use these instructions to extract the files, including any executables

      – Eli Rosencruft
      Apr 2 '12 at 8:20












    • I found this mirror and managed to get something working (mirror.centos.org/centos/5/os/x86_64/CentOS) . For the moment the gentoo prefix seems to be a less complicated alternative.

      – Sebastian
      Apr 5 '12 at 9:51

















    thanks for this answer. I'm not familiar with CentOS that much. Can I install (in other than /) using yum and the packages from a web repository? thanks

    – Sebastian
    Apr 2 '12 at 7:52





    thanks for this answer. I'm not familiar with CentOS that much. Can I install (in other than /) using yum and the packages from a web repository? thanks

    – Sebastian
    Apr 2 '12 at 7:52




    2




    2





    You can download rpm packages to your personal directory and then use these instructions to extract the files, including any executables

    – Eli Rosencruft
    Apr 2 '12 at 8:20






    You can download rpm packages to your personal directory and then use these instructions to extract the files, including any executables

    – Eli Rosencruft
    Apr 2 '12 at 8:20














    I found this mirror and managed to get something working (mirror.centos.org/centos/5/os/x86_64/CentOS) . For the moment the gentoo prefix seems to be a less complicated alternative.

    – Sebastian
    Apr 5 '12 at 9:51






    I found this mirror and managed to get something working (mirror.centos.org/centos/5/os/x86_64/CentOS) . For the moment the gentoo prefix seems to be a less complicated alternative.

    – Sebastian
    Apr 5 '12 at 9:51














    2














    Installing packages from distributions is often difficult when you don't have root permissions, as they assume a fixed directory layout and the dependency system tends to require some packages with setuid or setgid programs that you can't install as non-root.



    Compiling from source is more often than not the easiest way. (And if you're after speed, you can choose the best compilation options for your particular processor model.)



    To organize the packages that you compile (or install by extracting tarballs), I recommend using stow or the more powerful but more complex xstow. Their basic mode of operation is to install each package in a separate directory, then create symbolic links to put them all together. Here's a typical compilation and installation session with stow:



    tar xzf foobar-42.tar.gz
    cd foobar-42
    ./configure --prefix="$HOME/software/stow/foobar-42"   # ~ after = isn't expanded by all shells
    make
    make install
    cd ~/software/stow
    stow foobar-42


    That last command creates symbolic links under ~/software pointing to the corresponding files and directories under ~/software/stow/foobar-42. For example, if ~/software/stow/foobar-42 contains a lib/foobar directory and the files bin/foobar and man/man1/foobar.1, you end up with these symbolic links:



    ~/software/bin/foobar -> ../stow/foobar-42/bin/foobar
    ~/software/lib/foobar -> ../stow/foobar-42/lib/foobar
    ~/software/man/man1/foobar.1 -> ../../stow/foobar-42/man/man1/foobar.1


    To uninstall a program, run stow -D foobar-42 in the ~/software/stow directory, and delete ~/software/stow/foobar-42. To make a program temporarily unavailable (e.g. to try another version), just run the stow -D part.
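    For the stowed programs to actually be found, ~/software must be on the relevant search paths. A minimal sketch for ~/.profile, assuming the ~/software prefix used in the example above:

```shell
# Make stow-managed software take precedence over the system copies.
export PATH="$HOME/software/bin:$PATH"
export MANPATH="$HOME/software/man:$MANPATH"
export LD_LIBRARY_PATH="$HOME/software/lib:${LD_LIBRARY_PATH:-}"
```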



    See also Non-Root Package Managers and Best way to upgrade vim/gvim to 7.3 in Ubuntu 10.04?































    • I'm going to test this, it sounds interesting!

      – Sebastian
      Apr 3 '12 at 6:15




































    edited Apr 13 '17 at 12:36 – Community
    answered Apr 3 '12 at 0:12 – Gilles























    0















    What is an effective method for installing up-to-date software on an out-dated production machine?




    I encounter this frequently while testing a few open source projects I contribute to. For example, I need modern cURL, Git, Wget and a few other tools on Fedora 1: it is easier to use Fedora 1 to test GCC 3 than to try to build GCC 3 on a modern machine. I also need modern tools on modern Solaris, because Solaris effectively ransoms its bug fixes and updates: if you don't buy a support contract, you don't get the updated software.



    For your CentOS machine I believe you have three options.



    First, you can use a repository like Red Hat's Software Collections (SCL). I use the SCL to provide modern packages on a CentOS 7 production web server. SCL provides an updated Apache, updated Python, updated PHP, etc. Also see How to update Apache and PHP using SCL?.
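    A sketch of what enabling a Software Collection looks like on CentOS 7 (rh-python36 is just one example collection; check the SCL documentation for what's available):

```shell
# Add the SCL repositories, install a collection, then open a subshell
# with that collection's binaries first on PATH.
sudo yum install -y centos-release-scl   # adds the SCL repositories
sudo yum install -y rh-python36          # installs alongside the system python
scl enable rh-python36 bash              # subshell with the collection enabled
```

The collection installs under /opt/rh, so it never overwrites the distribution's own packages.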



    There are other repos specifically for Red Hat and CentOS, like Remi Repos. I had a lot of trouble with Remi in the past on CentOS 7, so I don't use it.



    Second, you can use an external package manager and additional repos (in addition to the native CentOS package manager). I don't use this method, but Linux From Scratch gives a good survey of the benefits of different package managers.



    Third, you can install software locally. Here, locally means installing into /usr/local or /opt or similar. However, you need to install the dependent libraries locally too, and that's where the challenge lies.



    I use local installs frequently. You can find my collection of scripts at GitHub | BuildScripts. There are a few benefits to the build scripts.



    • They are easy to use - they provide consistent build settings, including Release configurations and RUNPATHs.

    • They are easy to remove - rm -rf /usr/local followed by hash -r is usually all that is needed to start over.

    • They download and build dependencies as required.

    If you're interested, all you need to do to manage dependencies is run ldd prog to determine a program's external libraries. The ones I update are the important ones, like libcurl, libssl, libcrypto, libxml, etc.
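    For example (using /bin/ls as an illustrative target):

```shell
# Print a binary's runtime library dependencies.
ldd /bin/ls
# Just the library names, filtering out the resolved paths:
ldd /bin/ls | awk 'NF && $1 !~ /^\// {print $1}'
```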



    The scripts use a simple method to manage dependencies: anything more than a week old is rebuilt. That ensures you always get the latest tools and libraries without complex dependency tracking.




    You can do some interesting things with method (3). For example, I acceptance-test libraries. One of the things I do is build all the local software with -fsanitize=undefined or -fsanitize=address by adding the flags to CFLAGS and CXXFLAGS.
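    A minimal sketch of how those flags get injected (these are standard GCC/Clang options; which package they apply to is whatever ./configure you run afterwards):

```shell
# Build subsequent packages under UBSan; -g -O1 keeps reports readable.
export CFLAGS="-g -O1 -fsanitize=undefined"
export CXXFLAGS="$CFLAGS"
export LDFLAGS="-fsanitize=undefined"
# then: ./configure --prefix=/usr/local && make && make install
```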



    I've uncovered more bugs than I can count. See, for example, Unistring 0.9.10 and Undefined Behavior sanitizer findings.




    The one downside to local installs is the path problems that plague Linux: software in /usr/bin will happily use the libraries in /usr/local/lib rather than /usr/lib. I've made a few systems unstable because programs in /usr/bin picked up the wrong libraries.



    There is no way to set a policy of "binaries in /usr/bin can only use libraries in /usr/lib" (and similar for libraries depending on other libraries). The idiots who thought it was a good idea to compile and link against one library, and then load the wrong library at runtime should get a Darwin award.



    Linux is the only major OS that has not managed to solve this problem; AIX, the BSDs, OS X, and even Windows all have.











































    edited 14 hours ago
    answered 15 hours ago – jww


























