risc-v is obv cool but i don't have any hardware for it
hell fucking YES we are adding support for x86 cygwin. ran emacs in that shit when i interned at microsoft. deep behind enemy lines
of course the fucked up little config format says hey you know what makes perfect sense to add along with specifications of what output formats to build for? that's right!! host-specific binary paths
oh you're building our programming language on your own computer instead of rented software on rented hardware? that's so cute!!! i love retro stuff like that
you cannot be fucking serious https://forge.rust-lang.org/infra/other-installation-methods.html#source-code this link is hidden in the middle of the bootstrap.example.toml. not mentioned anywhere in the fucking INSTALL.md that negs you about building from source. no they actually have fucking source tarballs too. jesus fucking christ they're up to date links too
for context, the bootstrap.example.toml file is >1000 lines of commented text
they also provide gpg signatures the way everyone else does (except pypi because they had a corporate offering to sell)
the file ordering in the tar file is all fucked up because clearly some smartass wanted to generate it with 1337 h4XX0r parallelism. or alternatively it's just fucked up for no reason
just including all the test data from the lzma-sys crate. yeah i agree bandwidth and disk space are free and i need those to build your language
this is actually the crux of the wheel proposal i'm making btw and it's why i haven't done it yet bc zstd alone is missing most of the savings vs classifying output files from the build system
oh my god that's 273M with xz compression. zero effort. negative effort
literally so close to getting it
# Set the bootstrap/download cache path. It is useful when building rust
# repeatedly in a CI environment.
#build.bootstrap-cache-path = /path/to/shared/cache
what if we had a method to cache outputs that are shared across invocations of the bootstrapping process. what if?
now it turns out there's a secret flag that completely changes all of the build requirements
# Use the optimized LLVM C intrinsics for `compiler_builtins`, rather than Rust intrinsics.
# Choosing true requires the LLVM submodule to be managed by bootstrap (i.e. not external)
# so that `compiler-rt` sources are available.
now i've heard there was a secret flag
that toml displayed for the runtime source
but you don't really care for C now, do you
# Setting this to a path removes the requirement for a C toolchain, but requires setting the
# path to an existing library containing the builtins library from LLVM's compiler-rt.
literally just a fucking riddle. this is the canonical "schema" for building rustc
important followup
# Setting this to `false` generates slower code, but removes the requirement for a C toolchain in
# order to run `x check`.
this is a riddle
no you don't get to see a usage example
i am going to remove openmp support from clang even if it requires patching
ok this one seems like strictly spack's fault. already had to add a variant
this puts us down to ~4k source files, and i'm now building with -j6
so i can build gcc at the same time. gcc is more respectful of my time so it gets more processor cores
meanwhile gcc has an entire separate info document breaking down its build prerequisites in precise detail https://gcc.gnu.org/install/prerequisites.html
If you configure a RISC-V compiler with the option --with-arch and the specified architecture string is non-canonical, then you will need python installed on the build system.
saved the most significant bit for the end: "build system". imagine! giving a shit about distinct layers of the dependency graph! working together!
Necessary to uncompress GCC tar files when source code is obtained via HTTPS mirror sites.
one interesting omission here is how to contact said https mirror sites. i would not be surprised if that's because libcurl is not under a copyleft license, while zstd is dual-licensed under GPLv2
quite unfortunate that openpkg's https certificate has expired https://gcc.gnu.org/install/binaries.html this is literally the precise case that tls certs were created for and this error means precisely that they cannot be trusted for download purposes
the pants binary repo would pull from any http cache, but we used to use raw.githubusercontent specifically. github is of course not trustworthy anymore (and never has been, given their decision since forever to avoid providing Content-Length, even for release tarballs!)
this idea is really highly generalizable and thoroughly underappreciated:
Multithreaded download of single files (option --chunk-size)
yes!!!!!!! pipeline everything!!!!!! this works for any interface boundary between processes exhibiting ~independent latency response!!!!
importantly for this case, it does require parallel threads of execution as opposed to a coroutine abstraction. the specific problem here involves two notionally independent random coprocesses which each respond to input, but in an unpredictable manner
if it were predictable, this would become an offline problem, which often affords an analytical optimum strategy. if you know the ratio of input bytes/sec to output bytes/sec, you don't need to independently measure and react to each
similarly, for random processes which do not change behavior based on input (such as monitoring 100,000 client requests at once--any new chunk of data will come in essentially at random from the lot), and don't respond to the result of other random processes (to an extent--if a server is overloaded, other clients may back off), it can be very efficient to simply wait on all such random events in the same thread (this is the coroutine or async approach popularized by node.js)
but pipeline problems are different, for all the above reasons! the online and interdependent properties motivate using parallel threading when the source and sink of a data flow are meaningfully distinct. copying a file onto the same filesystem can often be done with a single syscall, but writing it into a network request requires buffering, even if the OS can cheat there too. that's because the filesystem and the network depend on distinct hardware resources!
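to make that concrete, here's a minimal sketch of the threaded-pipeline shape in rust (mine, not lifted from any of the tools above): a reader thread and a writer joined by a small bounded channel, so a stall on either side overlaps with work on the other instead of adding to it. the `pipeline` helper, the 64 KiB chunk size, and the channel bound of 4 are illustrative assumptions, not anyone's actual implementation.

use std::io::{Read, Write};
use std::sync::mpsc::sync_channel;
use std::thread;

// Copy `source` into `sink` with the read side and write side running in
// parallel, connected by a small bounded queue of chunks.
fn pipeline<R, W>(mut source: R, mut sink: W) -> std::io::Result<()>
where
    R: Read + Send + 'static,
    W: Write,
{
    // Bound of 4 chunks: enough to absorb jitter between the two sides
    // without buffering the whole stream in memory.
    let (tx, rx) = sync_channel::<Vec<u8>>(4);

    // Reader thread: owns the source and pushes filled chunks downstream.
    let reader = thread::spawn(move || -> std::io::Result<()> {
        let mut buf = [0u8; 64 * 1024];
        loop {
            let n = source.read(&mut buf)?;
            if n == 0 {
                break; // EOF; dropping `tx` closes the channel
            }
            if tx.send(buf[..n].to_vec()).is_err() {
                break; // writer side hung up
            }
        }
        Ok(())
    });

    // Writer (this thread): drains chunks as they arrive. A slow source and a
    // slow sink now hide each other's latency instead of stacking it.
    for chunk in rx {
        sink.write_all(&chunk)?;
    }
    sink.flush()?;

    reader.join().expect("reader thread panicked")
}

fn main() -> std::io::Result<()> {
    // Toy usage: stdin -> stdout. In the file-into-network-request case the
    // source and sink sit on genuinely distinct hardware resources.
    pipeline(std::io::stdin(), std::io::stdout())
}

if the sink never blocks (copying within one filesystem), the second thread buys you nothing; it earns its keep exactly when the two ends have independent, unpredictable latencies, which is the whole point above.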
cool that nix gets a special flag to support their very specific dependency model
# Always patch binaries for usage with Nix toolchains. If `true` then binaries
# will be patched unconditionally. If `false` or unset, binaries will be patched
# only if the current distribution is NixOS. This option is useful when using
# a Nix toolchain on non-NixOS distributions.
#build.patch-binaries-for-nix = false
it might be "useful" to support other toolchains too
well i like that they aren't unconditionally sending the telemetry remotely
# Collect information and statistics about the current build, and write it to
# disk. Enabling this has no impact on the resulting build output. The
# schema of the file generated by the build metrics feature is unstable, and
# this is not intended to be used during local development.
#build.metrics = false
although the unconditional bootstrap downloads are a form of telemetry already