Have you analyzed the deployment process to see where the time really goes? It surprises me that running multiple parallel pip processes does not speed it up much.

If the time goes to querying PyPI and finding the packages (in particular when you also download from GitHub and other sources), then it may be beneficial to set up your own PyPI. You can host PyPI yourself and add the following to your requirements.txt file (docs):

--extra-index-url YOUR_URL_HERE

Or the following if you wish to replace the official PyPI altogether:

--index-url YOUR_URL_HERE

This may speed up download times, as all packages are then found on a nearby machine.

Parallel pip installation

A lot of time also goes into compiling packages with C code, such as PIL. If this turns out to be the bottleneck, then it is worth looking into compiling code in multiple processes. You may even be able to share compiled binaries between your machines (but many things would need to be similar, such as operating system, CPU word length, et cetera).
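As a concrete sketch, a requirements.txt that adds a self-hosted index could look like the following (the host name `pypi.example.internal` is purely an assumption for illustration; substitute your own index URL):

```
# requirements.txt — sketch assuming a private index (hypothetical URL)
--extra-index-url https://pypi.example.internal/simple/
Pillow
requests
```

With `--extra-index-url`, pip still falls back to the official PyPI for packages your index does not carry; `--index-url` instead replaces PyPI entirely.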
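One hedged way to share compiled binaries between similar machines is to build wheels once and install from that local directory everywhere else. This sketch uses pip's standard `wheel` and `--find-links` features; the `wheelhouse` directory name is just an example:

```shell
# Sketch: build binary wheels once, then install offline on other machines.
# Assumes the machines share OS, CPU architecture, and Python version.

# On the build machine: compile every requirement into a wheel.
pip wheel -r requirements.txt --wheel-dir ./wheelhouse

# Copy ./wheelhouse to the target machines, then install without
# contacting PyPI at all — no downloads, no C compilation:
pip install --no-index --find-links ./wheelhouse -r requirements.txt
```

Because the wheels are prebuilt, the expensive C-extension compile step (e.g. for PIL) happens only once instead of on every deploy.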