Ideas and articles on memory safety.
We can design around it explicitly or implicitly in many ways, but software developers must always be aware of the memory their system uses.
Systems like Forth are interesting because they don't just abstract over the stack machines - they are the stack machines. This means they give the programmer direct, immediate knowledge of how much memory they're using at a given time.
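As a rough illustration (a toy sketch, not real Forth), a Forth-style evaluator is just operations on one explicit data stack, so the program's working memory is literally the depth of that stack:

```python
# Toy Forth-style evaluator. A hypothetical sketch, not real Forth
# semantics - it only shows why the stack IS the memory model.

def run(program):
    stack = []
    ops = {
        "+": lambda: stack.append(stack.pop() + stack.pop()),
        "*": lambda: stack.append(stack.pop() * stack.pop()),
        "dup": lambda: stack.append(stack[-1]),
        "drop": lambda: stack.pop(),
    }
    for word in program.split():
        if word in ops:
            ops[word]()
        else:
            stack.append(int(word))  # literals are pushed directly
        # At every step the programmer can see exactly how much
        # memory is live: it's len(stack). Nothing is hidden.
    return stack

print(run("2 3 + dup *"))  # (2 + 3) squared -> [25]
```

There is no second, invisible heap behind the interpreter: every word's memory cost is visible in how it pushes and pops.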
Type systems that encapsulate memory are interesting, but by comparison feel like a bit of a hack: we have some system A with some semantics, then we graft a system A' over it by making some assumptions about system A, then we enforce the rules of A' via some system B that feels unrelated both to the domain of A and to the language for the domain A', but that enables optimisations and the like. I'm not entirely sure how to express the feeling of fighting both against the type system's enforcement of memory and against the language itself, but it deters experiments - and play is the point.
Safety in Non-Memory-Safe Languages is interesting; it surveys different proposed solutions for abstracting over programming languages. To me, the design of a good solution has a lot to do with how developers think about what they're using - we want to give developers as much information at the top level as possible while letting them choose to focus on some things and not others at specific times (ideas vs. hardened systems vs. optimisations, while giving the programmer all the information about their ideas, their interfaces, and how their code compiles and interfaces with the GC). GC obscures, and optimisation engineers for GC'd languages end up tweaking compiler flags, memorising the bytecode generated by specific functions, or rolling their own systems and toolchains to provide new performance profiles for their languages.
But becoming an expert on how a programming language is written to optimise it feels like a violation of the abstraction boundary the language was created to erect in the first place!
Mmapped memory management, though idiomatic on Unix systems, also feels like a bit of a hack - it doesn't match developers' mental model of memory. Nobody wants to think about how much memory operation X occupies, allocate space for it, then execute the operation - they want to first accomplish the operation however they can, then revisit their work and apply optimisation constraints if that point in the code turns out to be the bottleneck.
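The size-first ordering is easy to see with Python's stdlib `mmap` module - a minimal sketch of an anonymous mapping, where the region's size must be decided before any work happens:

```python
import mmap

# With mmap you must commit to a size BEFORE doing the operation:
size = 4096  # one page - chosen up front, not discovered from the data
buf = mmap.mmap(-1, size)  # -1: anonymous mapping, not file-backed

# Only now can the actual work happen, constrained to that region.
data = b"hello, memory"
buf[: len(data)] = data

buf.seek(0)
print(buf.read(len(data)))  # b'hello, memory'
buf.close()
```

The allocation decision (how big? page-aligned? backed by what?) comes first and the operation second - the opposite of how most people actually develop.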
Generational References are new in Vale and seem to feel both safer and more optimal than current solutions. Absolutely worth learning more about. The fact that they're strongly considering branch prediction optimisations here is a great idea.
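My rough understanding of the mechanism (a simplified sketch, not Vale's actual implementation or memory layout): each allocation slot carries a generation counter, a reference stores the generation it was created under, and every dereference checks that the two still match:

```python
# Simplified sketch of generational references (not Vale's real design).
# Each heap slot keeps a generation counter; freeing a slot bumps it,
# which invalidates every outstanding reference to it at once.

class Heap:
    def __init__(self, size):
        self.values = [None] * size
        self.generations = [0] * size

    def alloc(self, index, value):
        self.values[index] = value
        # A reference is just (slot index, generation at allocation time).
        return (index, self.generations[index])

    def free(self, index):
        self.values[index] = None
        self.generations[index] += 1  # all old references go stale

    def deref(self, ref):
        index, gen = ref
        # The safety check is a single integer comparison that almost
        # always passes - friendly territory for the branch predictor.
        if self.generations[index] != gen:
            raise RuntimeError("use after free detected")
        return self.values[index]

heap = Heap(8)
ref = heap.alloc(0, "alive")
print(heap.deref(ref))  # alive
heap.free(0)
try:
    heap.deref(ref)
except RuntimeError as e:
    print(e)  # use after free detected
```

That near-always-taken branch is presumably why branch prediction matters so much to their performance story.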
Functional Programming and Reference Counting : ProgrammingLanguages addresses functional-programming reference-counting techniques in Lean; specifically, optimising theorem proving.
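One idea from that line of work, as I understand it: when a cell's reference count is exactly 1 at the point we consume it, its memory can be reused in place instead of freed and reallocated, while the program stays observably pure. A hedged sketch (the names `Cell` and `map_in_place` are mine, not Lean's, and the full counting protocol is omitted):

```python
# Illustrative sketch of refcount-guided reuse, not Lean's actual
# runtime: uniquely-owned cells are mutated in place; shared cells
# are copied so purity is preserved for other observers.

class Cell:
    def __init__(self, head, tail):
        self.rc = 1      # reference count; managed by hand here
        self.head = head
        self.tail = tail

def map_in_place(f, cell):
    """Map f over a linked list, reusing uniquely-owned cells."""
    if cell is None:
        return None
    if cell.rc == 1:
        # Unique owner: reuse this cell instead of allocating.
        cell.head = f(cell.head)
        cell.tail = map_in_place(f, cell.tail)
        return cell
    # Shared: allocate fresh cells so other holders see no change.
    # (A real system would also adjust counts on both sides.)
    return Cell(f(cell.head), map_in_place(f, cell.tail))

xs = Cell(1, Cell(2, Cell(3, None)))
ys = map_in_place(lambda n: n * 10, xs)
print(ys is xs)  # True: the whole spine was reused, zero allocations
print(ys.head, ys.tail.head, ys.tail.tail.head)  # 10 20 30
```

In the unique case a "pure" map compiles down to an in-place update loop, which is where the performance win for theorem proving comes from.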