For the purposes of this question, let’s assume all future computers are going to become locked down and you’d need corporate approval to run things… so with such a hypothetical dark future in mind: how do you hoard as much info as possible?
Chemistry, math, physics, optics, metallurgy… The hard part is that your need for knowledge changes over time, along with what’s accessible to you at each stage.
For general electronics, The Art of Electronics is the go-to book. For actually understanding practical stuff, you need to build a knowledge of the industrial revolution and how it evolved. The inventions of James Watt opened up steam. The Bessemer process scaled steel production. Large heavy castings drove the potential for large lathes, but lathes are the key to everything. A lathe is capable of cutting a more precise screw than the one used to operate it; swap the old leadscrew for the new one and repeat until you achieve your desired precision.
A reference flat is traditionally made by rubbing granite plates together with water in between, working three of them against each other in rotation (two alone can wear into a matching convex/concave pair), until the top one wrings onto another with enough suction to lift it.
Prussian blue and hand scraping are used to make machine surfaces flat.
Automotive suspension components like springs and torsion bars are a good source of cheap tool steel. Engine heads are a good source of casting scrap and quality hardware. Wipers, window motors, and starters are great for building machines. Understanding how to repair and diagnose this stuff is a major skill. Knowing how to make real controlled heat is fundamentally important.
I’ve never encountered single sources for this stuff.

How to Invent Everything: A Survival Guide for the Stranded Time Traveler
It’s basically a book on how to DIY various technologies
Sounds like you’re talking about an old-school encyclopedia book collection, like Encyclopedia Britannica. They take up a lot of physical space and the information becomes outdated quickly. But they are a great source for history, geography, science, etc. And you might be able to find them second-hand from an online seller for a relatively reasonable price.
Encyclopedia Britannica
They stopped printing in 2010 :/
Edit: also jeez, an entire set of encyclopedias is kinda pricey ngl 👀
Oh okay. Looks like World Book is still printing encyclopedias though.
You can actually download Wikipedia if you have the space for it.
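If you want to script the download, here’s a minimal sketch of what grabbing the raw dump looks like. It assumes the usual dumps.wikimedia.org layout; the exact filename and its size (tens of GB compressed) change with every dump run, so adjust accordingly:

```python
# Sketch: stream the latest English Wikipedia article dump to disk.
# The URL follows the standard dumps.wikimedia.org layout; the file is
# large (tens of GB compressed), so it is written in 1 MiB chunks.
import requests

DUMP_URL = ("https://dumps.wikimedia.org/enwiki/latest/"
            "enwiki-latest-pages-articles.xml.bz2")

def download(url: str, dest: str) -> None:
    with requests.get(url, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        with open(dest, "wb") as f:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                f.write(chunk)

if __name__ == "__main__":
    download(DUMP_URL, "enwiki-latest-pages-articles.xml.bz2")
```

For actually reading it offline, a Kiwix .zim file is the friendlier format; the raw XML dump is more of an archival copy.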
Wait till they insert AI chips into your pc and start scanning for anti-government articles to delete out of your .zim file
Just print it
Then I will go back to the pc I have now. :)
Your PC can’t last forever
Just like books or AI chips.
There are some very well-running Commodores and Amigas out there.
I think I’m set if my phone or electronics from this decade last that long. I only have 40 yrs to go - if I’m realistic :D
Probably far less if society breaks down to the point that libraries cease to exist.
Engineering textbooks, and presumably chemistry and medical textbooks. But you then have to be careful to select ones that aren’t massively confusing and a slog to read.
CRC Handbook of Chemistry and Physics, better known as the rubber book.
Penguin publishes reference “dictionaries” of various subjects, that are more like mini-encyclopedias. I’ve got ones covering mathematics, philosophy, psychology, sociology, and literary theory.
Look that up in your Funk and Wagnalls!
I’ll probably get vote-murdered for this, because this is unfortunately not a popular opinion for a lot of very justified reasons that I actually mostly agree with, but I’m going to throw this out there anyway, and I hope people hear me out for long enough that you can decide for yourself instead of just kneejerk downvoting.
Imagine if someone created a statistical numerical model that was based on, and could therefore approximately reproduce, something close to the cumulative total of all human knowledge ever recorded on the internet (probably exabytes of information), but the model itself was only the size of a few movie files, and you could dump those numbers into a simulator that, within some margin of statistical error, reproduced almost any of that information on currently available consumer-level hardware.
If you’re not picking up what I’m putting down, I just described open-weight LLMs that you can download and run yourself in Ollama and other local programs.
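For anyone who hasn’t tried it, here’s roughly what querying one of those local models looks like through Ollama’s HTTP API. This is just a sketch: it assumes `ollama serve` is already running on its default port and that you’ve pulled some model (“llama3” below is only a placeholder name):

```python
# Sketch: ask a locally hosted model a question via Ollama's REST API.
# Assumes the Ollama server is running locally on the default port 11434
# and that the named model has already been pulled.
import requests

def ask(prompt: str, model: str = "llama3") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask("Summarize how the Bessemer process works."))
```

The point being: once the weights are on your disk, nothing about this loop needs the internet.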
They are not intelligences and they do not represent knowledge: they don’t know anything, can’t make their own decisions, and can never be assumed to be fully accurate representations of anything they have “learned”, because they are simply greatly minimized and compressed statistical summaries of the information already on the internet. But they still contain a great deal of information, provided you understand what you’re looking at and what it’s telling you. The same way demographics can provide a great deal of information about the world without needing to individually review every census document by hand, but never tell the entire story perfectly.
While I agree with the suggestions to get a proper encyclopedia or just download Wikipedia for a more reliable and trustworthy dataset, I think you’re doing yourself a disservice if you dismiss the entire concept of LLMs and vision models just because a few horrific companies are hyping them, overselling them, and using them to destroy the world and civilization in disgustingly idiotic ways. That’s not the fault of the technologies themselves. They are a tool, a tool that is being widely misused and abused, but it’s also a tool that you can use, and you get to decide whether you use it wisely, abuse it, or don’t use it at all. It’s your call. It’s already there. You decide what to do with it. I happen to think it’s got some pretty cool features and can do some remarkable things, as long as I’m the only one in charge of deciding how and when it’s used. I acknowledge the training data was plagiarized and collected illegally, and I respect that (as much as I respect any copyright), and I’m not planning to profit from it or use it to pass off other people’s work as my own.
But as a hyper-efficient way to store “liberated” information to protect ourselves against the complete enshittification of content and civilization? I don’t see the harm. Copyright is not going to matter at that point anyway; the large companies who control the data and the platforms for it have already proven they don’t respect it, and they’re going to be the ones dictating it in the future. They won’t even let us have access to our own data, never mind being able to do anything to prevent them from taking it in the first place. We, the people and authors and artists and musicians and content creators it was designed to protect, now have to protect ourselves from them, and if that means hiding some machine learning models under my bed for that rainy day, so be it.
the title says non-electronic, so you’re dead in the water really.
Anyhoo, if I were living in an apocalypse and had a laptop, would I prefer that it had wikipedia or an LLM?
It really depends on the accuracy of the information it’s outputting, weighed against the storage requirements of the model compared to those of a Wikipedia dump.
Regardless, it’s kinda comical to imagine a non-technological society being able to consult one of our LLMs without any understanding of technology generally. Like, you could ask it to describe the chemical makeup of the star Proxima Centauri and it would give you a response that sounds absolutely infallible, and you’d have no way to validate it - it would seem to have god-like omniscience. Then you could ask it something mundane and it would either lie or tell you it’s unable to answer.
I think that, while LLMs are an option for data storage, they’re not worth the effort. Sure, they might have a very wide breadth of information that would be hard to gather manually, but how can you be sure that the information you’re getting is a good replica of the source, or that the source it was trained on was good in the first place? A piece of information could come from either 4chan or Wikipedia, and unless you had the sources yourself to confirm (in which case, why use the LLM at all?), you’d have no way of telling which it came from.
Aside from that, just getting the information out of it would be a challenge, at least with the hardware of today and the near future. Running a model large enough to have a useful amount of world knowledge requires some pretty substantial hardware if you want any useful amount of speed, and with rising hardware costs, that might not be possible for most people even years from now. Even on the software side, if something goes wrong with your hardware, it might be difficult to get inference engines working on newer, unsupported hardware and drivers.
So sure, maybe as an afterthought if you happen to have some extra space on your drives and oodles of spare RAM, but I doubt that it’d be worth thinking that much about.
I like the idea, but I’m not sure what the math behind it would be
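A very rough back-of-the-envelope version of the math, using ballpark figures rather than measurements (a compressed English Wikipedia text dump is on the order of 20 GB; a 7B-parameter model quantized to 4 bits is around 4 GB):

```python
# Illustrative arithmetic only: both figures are rough assumptions.
wikipedia_dump_gb = 22     # compressed English Wikipedia text dump, ballpark
small_model_gb = 4         # 7B-parameter model at 4-bit quantization, ballpark

ratio = wikipedia_dump_gb / small_model_gb
print(f"The small model is ~{ratio:.0f}x smaller than the text dump,")
print("but the dump is exact and citable while the model paraphrases from memory.")
```

So the model wins on size, and the dump wins on being verifiable.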







