rkagerer 18 hours ago

Before the modern Cloud took shape, developers (or at least, the software they created) used to be more trustworthy.

I love those classic tools from the likes of Sysinternals or NirSoft. I didn't hesitate to give them full access to my machine, because I was confident they'd (mostly) work as expected. Although I couldn't inspect their source, I could reason about how they should behave, and the prevailing culture of the time was one where I knew the developers and I shared a common set of expectations.

Their creators didn't tend to pull stunts like quietly vacuuming all your data up to themselves. When they did want feedback, they asked you for it first.

There wasn't such a potent "extract value" anti-culture, and successful companies recognized that enduring value came from working in the user's best interest (e.g., early Google resisted cluttering its search results).

Although silos existed (like proprietary data formats), there was at least an implicit acknowledgement and expectation that you retained ownership and control over the data itself.

Distribution wasn't locked behind app stores. Heck, license enforcement in early Office and Windows was based on the honour system - talk about an ecosystem of trust.

One way to work toward a healthier zeitgeist is to advocate tirelessly for the user at every opportunity you get, and stand by your gut feeling of what is right - even when faced with opposing headwinds.

  • UltraSane 14 hours ago

    Sysinternals tools are written by Mark Russinovich, who is the CTO of Azure

    • tremorscript 7 hours ago

      Correct me if I am wrong, but I think Russinovich may now be the CTO of Azure because once upon a time, as an independent developer, he wrote Sysinternals. It's one way to show your talent as an independent developer.

    • userbinator 7 hours ago

      That's who he is now, but he wasn't always working at MS.

  • userbinator 7 hours ago

    100% agreed. Those small independent developers also had more on the line, and I'd trust them far more than a Big Tech company that cares almost exclusively about $$$ and is made up of employees who largely dilute responsibility among themselves, many of whom probably aren't even there because of how well they can program.

  • bulatb 16 hours ago

    Standing in the wind won't stop the storm. You only get blown over.

    "If everyone just" stood in front of the storm, they'd all get blown over, and the storm would go on.

    No one wants to hear that individual heroics aren't the answer, but they aren't. Our moral intuition fails on problems bigger than it's meant for.

    One person, out of hundreds, working in a factory that makes a slightly cheaper widget, that changes a BOM, that drives a different industry to swap one process for another, that forces other factories to move, that changes water use and heat flow in a region, that nudges weather systems out of a regime that favors storms, is doing more to stop the storm than any hundred people standing in the wind.

    The person washing windows at a lab that figures out how any of those steps might be connected doesn't get to be a hero, and doesn't feel righteous for fighting, but they're fighting, while the people who choose feeling righteous aren't.

    • numpad0 13 hours ago

      Or 30% of society could build up distrust in tech, AI, and scraping on the Internet, and quietly start sabotaging the flow of data and/or cash. They're not going to disclose the decision criteria that implement that behavior, even when imprinting it on the people around them.

      I think such an "immune response" of human society is a more realistic model of the storm. The Leviathan won't listen to a few of its scales screaming that data abuse is technically legal or whatever, if it deems that unreasonable.

    • spencerflem 15 hours ago

      I'm confused by your analogy.

      In the case of user-hostile software, what is an example of an action that would lead to a change in the overall climate?

      • bulatb 14 hours ago

        People want to think in terms of people and their actions, not in terms of systems and their structures. That's the problem.

        Show a programmer a system that requires constant on-call that burns out the team, and they'll tell you (if they've been around) that heroics are a sign it's broken, not a reason to find more heroes. Definitely not to pull the person working on systemic issues off that work and onto an on-call shift.

        Show them an equivalently structured broken system in a different field and they'll demand heroics.

        I can't give you an example of an action, because I don't know, and I'm not talking about actions. I'm saying the approach itself is wrong and any action it suggests won't be effective.

        Moralizing has a brutal, self-perpetuating metaproblem that makes people more and more convinced it works the more it doesn't. Actions based on moralizing will be chosen if suggested and they will not work, so any moralizing framing needs to be rejected right away, even if you can't suggest a better action.

  • 1vuio0pswjnm7 14 hours ago

    Perhaps the distinction between "developers" and "users" is illusory.

    "Developers" are themselves "users" and they, too, must trust other "developers".

    I compile kernel and userland myself, sort of like "Linux from Scratch" but more personalised. I am not a "developer". I read through source code, I make decisions based on personal tastes, e.g., size, speed, language, static compilation, etc. Regardless, I still have to "trust". To be truthful, I read, write, edit and compile software not for "trust" reasons but for control reasons, as well as for educational purposes. I am not a fan of "binary packages" except as a bootstrap.

    It seems that most self-proclaimed "developers" prefer binary packages. In many cases it appears the only software they compile is software they write themselves. These "developers" often muse about "trust", as in this submission, but at the same time they use other people's software as pre-compiled binaries. They trust someone else to read the source code and alert them of problems.

  • gjsman-1000 17 hours ago

    “talk about an ecosystem of trust”

    I’m actually looking back at the past, and realizing why app stores took over.

    For the developers, it was indeed an ecosystem of trust.

    For regular users, it was hell. The app stores, on a basic level, were absolutely in their best interest.

    Why does the phone, even now, have more apps than the desktop? I’d answer that it’s because users could try software for the first time and could afford to take risks, knowing with certainty it wouldn’t steal their bank account. Users on the “free” platforms were implicitly trained to trust no one and take no risks; that .exe (or .deb) could ruin your life.

    For the average user, there has never been such an ecosystem of trust as there is now. That’s a sobering indictment about how a “free” or “open” platform can simultaneously be, for most people, user-hostile.

    Or another example: We like owning our data, knowing that Docx is ours, as you also complain above.

    But talk to many families: so many have horror stories of digital losses. Lost photos, lost tax documents, lost memories. Apple charges $2.99/mo. and they’ll never lose it (or at least, the odds are lower than those of a self-inflicted disaster). For them, the cloud has never felt so freeing.

    • izacus 17 hours ago

      The phone having more apps is just objectively untrue, isn't it? In the last 50 years of personal computing, there has been software so diverse that you couldn't even fit everything that was built into a book you could hold. There pretty much wasn't a productivity task, entertainment task, or part of your computing experience that you couldn't somehow customize and speed up with some tool someone built.

      If anything, a *huge part* of that software cannot be replicated on your phone because the golden cage owner decided that you're not allowed to have it until they monetize it on you.

      • gjsman-1000 17 hours ago

        Objectively, the phone has had more software. Google Play even now lists 1.6 million apps; Apple has 1.8 million. This does not include delisted apps or LOB apps, so the only relevant comparison is publicly available Windows and Mac apps currently on the market. For context, Steam has around 0.1M. And if you go by sales volume from app stores, Steam had $10B in revenue while Apple had $85B. Apple makes about 4x as much profit from the gaming market as Steam does. (Yes, Steam is actually a gaming-market minority.)

        > If anything, a huge part of that software cannot be replicated on your phone because the golden cage owner decided that you're not allowed to have it until they monetize it on you.

        Objectively, most people have no desire for old software. Nobody wants a 10-year-old video editor. Even retro video games, the heaviest market for retro software, are a drop in the bucket.

        • izacus 41 minutes ago

          > Objectively, most people have no desire for old software. Nobody wants a 10-year-old video editor. Even retro video games, the heaviest market for retro software, are a drop in the bucket.

          I'm not talking about old software (yet another weasel thing you dragged into the conversation). I'm talking about software that you're not allowed to have at all - software that automates your phone, software that processes your data, software that modifies how your phone looks or works, software that modifies other software on your phone (e.g. Outlook by itself had a massive ecosystem of productivity plugins to serve the user).

          The fact that those whole businesses existed and still exist after decades directly contradicts your "users don't want it" bull coming directly from corporate whiteknighting.

        • baobun 16 hours ago

          > Nobody wants a 10-year-old video editor

          Terrible example. Many professionals, hobbyists, and casuals do, actually. The only reason I still have a Mac is to run ancient versions of CS and Premiere.

          The only things I'm really missing are codecs and, well, running it on a modern OS. Still prefer it over the cloud crap. I guess you think I'm a “nobody“.

          • Mountain_Skies 16 hours ago

            A law firm I worked for had some elderly senior partners who still used typewriters for documents they submitted to court. While they could have had a paralegal type everything in for them, they'd been using their typewriters for probably close to half a century. Muscle memory and a well-established workflow were more important to them than having their documents in a CMS.

whatever1 18 hours ago

Software had it way too easy for way too long. You could ship faulty code to billions without anyone blinking an eye. It was just harmless ideas after all.

The stakes are now higher, with data being so important and the advent of algorithms that affect people directly. From health insurance claims to automated trading, social media drugs, and AI companions, bad code today can and does ruin lives.

Software engineers, like every other engineer, have to be held accountable for the code they sign off on and ship. Their livelihoods should be on the line.

  • jandrewrogers 17 hours ago

    Software engineering is not like physical engineering. I've done both and they necessarily operate from different assumptions about the world.

    In software, a single bit-flip or typo can lead to catastrophic failure; software is sensitive to defects in a way physical systems are not. The behavior of software is dependent on the behavior of the hardware it runs on. Unlike in physical engineering, the hardware for software is not engineered to the specifications of the design; it has to run on unknown hardware with unknown constraints and work correctly the first time.

    Physical systems are inherently resilient in the presence of many small defects, a property that physical engineering relies on greatly. Software is much less tolerant of defects, and there is a limited ability to "over-engineer" software to add safety margin, which is done all the time in physical engineering but is qualitatively more expensive in software engineering. Being broken is often a binary state in software.

    I've written software for high-assurance environments. Every detail of the implementation must be perfect to a degree that would make engineers with even the most perfectionist tendencies blush. Physical engineering requires nothing like this degree of perfectionism. In my experience, the vast majority of engineers are not cognitively equipped to engineer to that standard, and in physical engineering they don't have to.

    • lisper 13 hours ago

      > In software, a single bit-flip or typo can lead to catastrophic failure

      That can happen in physical engineering too, if it's not done right. Likewise, if a single bit flip can lead to catastrophic failure, that's an indication that the system was poorly engineered.

      The problem with software is not so much that it is fundamentally different in terms of its failure response (though there certainly are differences -- the laws of physics provide important constraints on hardware that are absent in software) but rather that it is so easy to copy, which tends to entrench early design decisions and make them difficult to go back and change because the cost is too high (e.g. syntactically-significant tabs in 'make'). But even that can and does happen in physical design as well. The Millennium Tower in San Francisco is the best example I can think of. The World Trade Center is another example of a design that failed because the failure mode that destroyed it was not part of the original design requirements. When the WTC was designed, no one imagined that someone would fly a jet into them some day. Today's adversarial environment was similarly hard to imagine before the Internet.

      It absolutely is possible to engineer software for robustness, reliability, and trustworthiness. The limiting factor is economics, not engineering.

    • UltraSane 14 hours ago

      I once found a company selling a password server, like Thycotic Secret Server, that must have been written by a madman. It used secret sharing to split each password into five shards, any three of which could reconstruct it, stored on 5 different servers. He wrote the server in two different languages, and it was meant to run on Windows, Linux, and BSD to prevent common bugs. I don't remember the name and can't find it anymore.

      AWS is using a lot of formal verification and automated theorem proving on their core systems, like S3 and TLS, to increase reliability.

    • bdangubic 14 hours ago

      fantastic post. what I will “disagree” on:

      > Every detail of the implementation must be perfect to a degree that would make engineers with even the most perfectionist tendencies blush

      the amount of software that fits this description is likely comparable to the amount of physical engineering that requires the same perfectionism…

    • hn367621 15 hours ago

      > physical engineering they don't have to

      Totally agree with what you wrote, but in any well-run engineering organization one has to consider the cost/benefit of extra testing and engineering.

      If you're making a USB powered LED desk lamp, you get into diminishing returns fairly quickly. If you're making basically anything on JWST, less so.

      • kevin_thibedeau 14 hours ago

        There is no excuse for making defective products, no matter how low the profit margins are. The reason properly engineered anything works is that safety margins are baked into the design. Every real engineering field uses repeatable processes with known likelihoods of success. Software "engineering" doesn't do that.

        • jandrewrogers 14 hours ago

          The concept of "safety margins" in physical engineering is largely nonsensical in a software context. In physical systems, correctness is a bulk statistical property of the design, an aggregate of probability distributions, which makes safety simple. If you are uncertain, add a bit more steel just in case; it is very cheap insurance. Physical systems are defect-tolerant; they aren't even close to defect-free.

          In software systems, correctness is binary, so they actually have to be defect-free. Defects don't manifest gradually and gracefully like they often do in physical systems.

        • datadrivenangel 13 hours ago

          Much better to think about low-quality products than defective ones. Junk can still be useful, and just good enough is definitionally good enough (most of the time). Also, most real engineering fields are full of projects that are done in non-repeatable ways and go horribly over budget. You're correct that for implementation you can get repeatable processes, but for process improvement you don't have anywhere near that level of repeatability.

        • hn367621 14 hours ago

          Defective means a product doesn’t deliver on its specifications. For example, if my LED desk lamp doesn’t promise to last any time at all, it’s not defective if it fails inside a month. If you want one that lasts longer, you pay more and can have that. Same for software. But most software basically promises nothing…

  • BirAdam 15 hours ago

    I actually think code quality has decreased over time. With the rise of high bandwidth internet, shipping fixes for faulty garbage became trivial and everyone now does so. Heck, some software is shipped before it’s even finished. Just to use Microsoft as an example, I can still install Windows 2000 and it will be rock solid running Office 2K3, and it won’t “need” much of anything. Windows 10 updates made some machines I used fail to boot.

  • hn367621 15 hours ago

    > You could ship faulty code to billions without anyone blinking an eye.

    Not all software is equivalent and there is plenty of code that gets treated with the precision you're asking for.

    But the ability to ship out code to billions in the blink of an eye is both the strength and the weakness of modern software.

    It allows a few engineers to tackle very complex problems without the huge investment in man-hours that more rigor would require.

    This keeps costs down, enabling many projects that wouldn't see the light of day if the NRE were equivalent to HW.

    On the other hand, you get a lot of "preliminary" code in the world, for lack of a better word.

    At the end of the day all engineering is about balancing trade-offs and there are no right answers, only wrong ones. :)

  • gjsman-1000 18 hours ago

    There are 2.8 trillion lines of code in this world. If I’m an engineer hired to work on a preexisting project, like 95% of jobs are, do I want to take liability for code I didn’t write? Or for a mistake I make when interacting with hundreds of thousands of lines of code I also didn’t write?

    No.

    What you’re suggesting is about as plausible as the mice in the Aesop fable deciding to put a bell on the cat. Sounds great, completely impossible.

    So what about only new code then? In that case, does old code get grandfathered in? If so, Google gets to take tens of billions of lines with them for free, while startups face an audit burden that would make reaching a similar scale insurmountable. Heck, Google does not have enough skilled labor themselves to audit it all.

    Also completely unfeasible.

    And even if, even if, some country decided to audit all the code, and even if there was enough talent and labor in this world to get it done by the next decade, what does that mean?

    It means all research, development, and investment just moves to China and other countries that don’t require it.

    Also completely unfeasible.

    > “Their livelihoods should be on the line.”

    This fundamentally relies on the subject being so demonstrably knowable and predictable that only someone guilty of negligence or malice could possibly make a mistake.

    This absolutely does not apply to software development, and for the reasons above, probably never will. The moment such a requirement comes into existence, any software developer who isn’t suicidal abandons the field.

    • whatever1 18 hours ago

      Let’s say you are a civil engineer and your calculator had a problem, spitting out wrong results the day you were calculating the amount of reinforcement for the school you were designing. If the school collapses on the kids, you are going to jail in most countries. It does not matter that the calculator had an issue; you chose to use it and not verify the results.

      • jandrewrogers 17 hours ago

        That is because this is trivial to check and the systems are simple compared to software, so the cost imposition of the requirement to do so is minor. The software engineering equivalent would be a requirement to always check return codes. I don't think anyone believes that would move the needle in the case of software.

        There is a lot of literature on civil engineering failures. In fact, my civil engineering education was largely structured as a study of engineering failures. One of the most striking things about forensic analysis of civil engineering failures, a lesson the professors hammered on incessantly, is that they are almost always the result of really basic design flaws that every first-year civil engineering student can immediately recognize. There isn't some elaborate engineering discipline preventing failures in civil engineering. Exotic failures are exceedingly rare.

        • thfuran 15 hours ago

          So the defense for software is "I helped build a system so complex that even attempting to determine how it might fail was too hard, so it can't be my fault that it failed"?

          • jandrewrogers 14 hours ago

            Not at all; you can have that today if you are willing to pay the costs of providing these guarantees. We know how, and some organizations do pay that cost.

            Outside of those rare cases, everyone is demonstrably unwilling to pay the unavoidable costs of providing these guarantees. The idea that software can be built to a high-assurance standard by regulatory fiat and everyone just gets to freeload on this investment is delusional but that is what is often suggested. Also, open source software could not meet that standard in most cases.

            Furthermore, those guarantees can only exist for the narrow set of hardware targets and environments that can actually be validated and verified. No mixing and matching random hardware, firmware, and OS versions. You'll essentially end up with the Apple ecosystem, but for everything.

            The vast majority of people who insist they want highly robust software neither write software to these standards nor are willing to pay for software written to these standards. It is a combination of revealed preferences and hypocrisy.

          • gjsman-1000 15 hours ago

            Unless you want to go back to the steam age, it’s not a defense, but all we are humanly capable of.

            Never forget as well that it only takes a single cosmic ray to flip a bit. Even if you code perfectly, it can still fail, whether in this way or countless other black swans.

            2.8 trillion lines of code aren’t going to rewrite themselves overnight. And as any software developer can tell you, a rewrite would almost certainly just make things worse.

            • thfuran 13 hours ago

              It would cost an untenable amount of money to rebuild all buildings, but that hasn't stopped us from creating and updating building codes.

              • theamk 10 hours ago

                Cost plays a very important role in building codes: a lot of changes are either not made at all (because they would be prohibitively expensive) or spread out over many years.

                Plus, building codes are safety-focused and often don't cover things that most people would consider defects: for example, a huge hole in an interior wall is OK (unless it breaks fire or energy-efficiency codes).

      • yuvalr1 17 hours ago

        When handling critical software and hardware, for example automated cars, it should never, ever be the sole responsibility of a single individual. People make mistakes, and always will. There should be many safety mechanisms in place to ensure nothing critically bad ever happens due to a bug. If that is not the case, then management is to blame, and even the state, for not ensuring the high quality of critical equipment.

        When something like that does happen, it is very hard to know the measure of responsibility every entity holds. This will most certainly be decided in court.

      • hn367621 15 hours ago

        More like your engineering organization gets sued into oblivion because it created design processes with a single point of failure. Things happen all the time. That’s why well-run organizations have processes in place to catch and deal with them.

        In software, when people think it counts, they do too. The problem is that not all people agree on “when it counts”.

      • gjsman-1000 18 hours ago

        How many structural parts does a school have that need to be considered? How many iron beams? How many floors? Several thousand at most? Everything else on the BOM doesn’t matter - wallpaper isn’t a structural priority.

        In computer code, every last line is potentially structural. It also only takes a single missing = in a 1.2-million-line codebase to kill.

        Comparing it to school engineering is an oversimplification. You should be comparing it to verifying the structural integrity of every skyscraper ever built, for each project.

        • aeonik 17 hours ago

          Chemical and elemental properties of the walls and wallpaper can matter though.

          Leaded paint, arsenic, flammability, weight (steel walls vs. sheetrock).

          The complexity is still less than software though, and there are much better established standards of things that work together.

          Even if a screw is slightly wrong, it can still work.

          In software it's more like: every screw must have a monocrystalline design, every grain boundary must be properly accounted for, and if not, that screw can take out an entire section of the building, possibly the whole thing, possibly the entire city.

        • whatever1 16 hours ago

          The claim was not that it will be easy, but since the stakes are high the buck has to stop somewhere. You cannot have Devin AI shipping crap and nobody picking up the phone when it hits the fan.

    • thfuran 15 hours ago

      >do I want to take liability

      I don't suppose most doctors or civil engineers want to take liability either.

rini17 3 days ago

I consider myself quite promiscuous in trusting software, but sometimes I just can't. Seeing how Signal desktop does 100MB updates every week, or the big ball of coalesced mud that is the TypeScript compiler, made me avoid them. Why isn't there more pushback against that complexity?

  • wruza a day ago

    When something complex exists, it’s usually because the alternatives are worse. Would you have fewer issues with 10MB updates? 1MB? One megabyte is a lot of text; a good novel for a week of evening reading can be less than that.

    • mschild a day ago

      I think the concern OP has is why a lot of the updates are so large. I use Signal desktop and the UI hasn't changed in years. That raises the question of what those 100MB are and whether they're actually necessary.

      • sudahtigabulan a day ago

        > the UI hasn't changed in years. That raises the question of what those 100MB are and whether they're actually necessary.

        Signal desktop doesn't use incremental updates. Each "update" is just reinstalling the whole package. That's what those 100 MB are.

        It's possible to make incremental updates with binary patches, but it's more difficult. I guess Signal have other priorities.
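
        For the curious, the usual tooling is along the lines of bsdiff/bspatch (file names here are made up):

            bsdiff signal-old signal-new update.patch    # publisher computes the delta
            bspatch signal-old signal-new update.patch   # client rebuilds the new binary from the old one plus the delta

        The client then only downloads the small delta instead of the whole bundle.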

      • sumuyuda a day ago

        I believe the Signal desktop app is an Electron app. That’s probably why the updates are so big: it has to update the bundled browser.

        • fisf a day ago

          Yes, but that is a choice. It doesn't have to do that.

          Because instead of "trusting" the update (or rather the codebase) of a messenger, we now have to trust the complete browser bundle.

          • wruza 18 hours ago

            That’s only formally a choice. In reality, you’ll depend on some delivery system (among many other systems) that is part of some build/dev system that is part of the job-market conjuncture.

            And all of that is completely out of your control and competence budget, unless you’re fine with shipping your first 50kb updates ten (metaphorical) years later.

          • Guthur a day ago

            They don't have to, but the question was why it takes 100MB, and you got the answer.

            You may additionally ask why Electron, but that's a different question after all.

    • 0x1ceb00da a day ago

      Reviewing 1MB of code is at least 100 times easier than reviewing 100MB of code.

    • bippihippi1 20 hours ago

      it takes a lot more time to understand code and find meaningful bugs than it does to read most novels

  • dominicrose a day ago

    What you're working on must be very sensitive if you can't trust TypeScript. From my point of view, Microsoft already has VS Code and GitHub, so...

    • rini17 a day ago

      It pulls in so many dependencies, and the npm situation is a frequent topic of conversation here. And the hype makes it an attractive target. No idea how MS and GitHub relate to that.

      • 0x1ceb00da a day ago

        A few years ago some guy demonstrated how vulnerable the npm ecosystem is, but npm chose to shoot the messenger instead of fixing the problem. Makes me think that the three-letter agencies want the software to be vulnerable to make their job easier.

        • paulryanrogers 20 hours ago

          Can you point out some examples of NPM shooting messengers? I recall mostly silence and new security controls appearing (albeit opt in) in response to the crisis.

        • hahn-kev a day ago

          You say they chose not to fix it like it's a simple problem with an obvious solution

      • Shacklz 21 hours ago

        What exactly are you referring to? TypeScript specifically has zero dependencies.

        Generally speaking, I agree: the npm ecosystem still has the pervasive problem that pulling one package can result in many transitive dependencies, but a growing number of well-known packages try to keep that as limited as possible. Looking at the transitive dependency graph is definitely good (necessary) hygiene when picking dependencies, and when done rigorously enough, there shouldn't be too many bad surprises, at least in my personal experience.
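
        For anyone who wants to do that check, something like this works (the package name is just an example):

            npm ls --all        # print the full transitive dependency tree
            npm ls left-pad     # show which dependency chains pull in a given package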

        • rini17 18 hours ago

          Oh, that must have changed. The `npm install typescript` output was totally scary a year or so ago.

      • fuzzy2 21 hours ago

        TypeScript-the-compiler has exactly zero dependencies. But maybe that's not what you were referring to…?

  • gazchop a day ago

    I suspect it’s because developers learn from the top down. Fifty layers of crappy abstractions are invisible.

    • dartos 21 hours ago

      Well most developers do it for money.

      The fastest way to money is to not dig too deep

  • purplecats a day ago

    interesting perspective. i suppose complex minifiers would also be an attack vector, since the obfuscation means you can't readily eyeball even obvious deviations

  • amelius 19 hours ago

    And we don't even know how many lines-of-code iOS is updating behind the scenes.

    Or Tesla.

  • moffkalast 18 hours ago

    As long as the end result works and doesn't pile up install data ad infinitum on the system, I wouldn't bat an eye at something that takes 2 seconds to download over an average internet connection.

    What really grinds my gears is updates that break things. Sometimes on purpose, sometimes out of incompetence, but most often out of not giving a single fuck about backwards compatibility or the surrounding ecosystem.

    Every few years I lull myself into a false sense of security about running apt upgrade, until it finally destroys one of my installs yet again. Naturally, only one previous package version is ever stored, so a revert is impossible if you ever spent more than two releases not upgrading. Asshole-ass design. Don't get me started on Windows updates (actual malware) or new Python versions...
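
    The closest workarounds I know of (package name made up):

        sudo apt-mark hold somepkg          # freeze a package you can't afford to lose
        apt-cache policy somepkg            # list the versions apt still knows about
        sudo apt install somepkg=1.2.3-1    # pin back to a known-good version

    None of which helps once the old version has left the mirrors, of course.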

geokon a day ago

The only person tackling the verifiable hardware side of things seems to be Bunnie Huang with his work on the Precursor

If you're going to be militant and absolutist about things, that seems like the best place to start

And then probably updating your software incredibly slowly at a rate that can actually be reviewed

Software churn is so incredibly high that my impression is that only some core encryption algorithms really get scrutinized

  • kfreds 18 hours ago

    > The only person tackling the verifiable hardware side of things seems to be Bunnie Huang with his work on the Precursor

    Bunnie's work is inspiring, but he is not alone.

    As far as verifiable hardware goes, I would argue that the Tillitis TKey is more open source than the Precursor. However, they are very different products, and the Precursor is a lot more complex and capable. The only reason the TKey is more open than the Precursor is that the TKey is able to use a completely open-source FPGA flow, whereas the Precursor cannot.

lesuorac 21 hours ago

While it seems mostly about the individual level, the thing that always bugged me was that organizations seem to fail to get a warranty on software. If you're going to be forking over millions of dollars, get a real warranty that it's going to work, or spend those millions doing it yourself...

Of course, a warranty still has the counterparty risk that they go out of business (probably because of all the lawsuits about the bad software...).

  • jasode 19 hours ago

    >was that organizations seem to fail to get a warranty on software.

    The corporate buyer paying millions didn't "fail to get a warranty". What happened is that the market equilibrium price for that transaction, for that seller and buyer, is one that does not come with a warranty. In other words, if the software seller doesn't provide a warranty AND is still able to find willing buyers, then the market price becomes "software sold without a warranty".

    Likewise, SpaceX sells rocket launches for companies that need to get their payloads up into orbit. SpaceX does not reimburse or provide insurance for the monetary value of the payload (satellite, etc) if it blows up.

    Why would companies pay millions for launches if SpaceX won't cover damages to the payload (akin to FedEx/UPS insuring the monetary value of packages)?!? Because the competitors don't cover payload reimbursements either. If you really, really want to get your satellite up into orbit, you have to eat the cost if the launch destroys your satellite. The market-clearing price for launching satellites is a shared-risk model between buyer and seller. Someday in the future, when space missions become 99.99% reliable and routine, an aerospace company may be the first to offer payload insurance as a differentiating feature to attract customers. Until then, buyers get third-party coverage or self-insure.

    >If you're going to be forking over millions of dollars, get a real warranty that it's going to work, or spend those millions doing it yourself

      x = price of 3rd-party software without a warranty
    
      y = price of developing in-house software (which your employee programmers also code without warranties)
    
  if (x < y) : purchase(x)

  • skybrian 18 hours ago

    The results of this sort of contract negotiation can be absurd. When I was at Google, legal insisted on pretty strict terms before paying for IntelliJ licenses, but meanwhile, engineers were using Eclipse with no warranty whatsoever because it’s open source.

  • actionfromafar 21 hours ago

    Spending those millions doing it themselves may be much riskier. (Depending on a bunch of stuff.)

  • tayo42 18 hours ago

    Aren't SLAs pretty much the same thing? You get some kind of compensation when the SLA is broken.

    • lesuorac 17 hours ago

      Assuming you got one, sure. Although make sure to read the fine print, because most SLAs only refund your payment at best, so if something like CrowdStrike happened to you, you're still out a lot of money from their mistake.

  • gonzo41 19 hours ago

    You don't really get a warranty on heavy machinery either. Instead you get good support from the OEM. But at the end of the day, you have to RTFM and deal with the muddy problem you're in.

    IMO, software, and what we expect it to do, is too complex to offer something like a warranty.

    • MichaelZuo 19 hours ago

      Huh?

      IBM offers literal written warranties on plenty of their software products. They’re just usually bundled with expensive hardware or consulting products too.

ryukafalz 18 hours ago

The section titled "Verifying the Build" describes recompiling the software with the same build toolchain and so on as a difficult task, but that's exactly what tools like Guix do for you. It's true that a nondeterministic build will trip you up, but if the build process is deterministic and avoids embedding timestamps, for example, then we do have tools that can ensure the build environment is consistent.
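
For example, a quick sketch ("hello" is just a stand-in package name):

    guix build --rounds=2 hello   # build twice and require bit-identical outputs
    guix challenge hello          # compare a local build against the substitute servers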

But aside from that, yes, we still need to trust our software to a large degree especially on desktop operating systems. I would like to see more object capability systems start to show up so we can more effectively isolate software that we don't fully trust. (WebAssembly and WASI feel like they might be particularly interesting in that regard.)

cbxjksls 14 hours ago

For most software, I trust the supply chain more than the developers. And I don't trust the supply chain.

A big problem is user-hostile software (and products in general). I can't walk into a Walmart and buy a TV, or walk into a dealership and buy a new car, because there are no options that aren't user-hostile.

Options exist, but I have to go out of my way to buy them.

yu3zhou4 a day ago

Maybe in the future we will agree on using only standardized, verified, shared software so we can really trust it?

  • lazide a day ago

    Thank god I’ll have NSA certified chat apps to trust in the future!

    • yu3zhou4 21 hours ago

      I'm rather thinking about something really verifiable, formally correct, and openly available (maybe similar to how SQLite is governed: open source but closed to external contributions). It would take much more effort to build and maintain this software, but it would bring a reliability and trust that we don't have today. The lifespan of typical software is very short; it's mostly build-and-throw-away (data from the institute of making things up - it's just my general observation over the years). We could pivot to having a narrow set of specialized and trusted software. It wouldn't prevent anyone from building their own stuff. I just mean that something provably trusted would change how our systems can work (from untrusted shaky stuff to infrastructure we can really rely on)

      • lazide 21 hours ago

        The FBI et al have been lobbying against even basic crypto for decades, and for backdoors. Do you think they’d be okay with that?

ghjfrdghibt a day ago

This reminds me of a short fiction story I read on HN ages ago about two programmers who find some weird code in a program that turns out to be an AI hiding itself in all known compilers, so that whenever any software was created it was present. Can't for the life of me remember the name of the story or the author though.

woadwarrior01 21 hours ago

Lately there's been a surge in the amount of open-source-in-name-only software, which hoodwinks gullible (and often technical) users into downloading crapware-laden binaries from its GitHub releases page that have little or nothing to do with the source code in the repo.

  • bippihippi1 20 hours ago

    build from source or bust!

    • jay_kyburz 14 hours ago

      Do you have to read the entire source first?

missing-acumen a day ago

While it certainly does not solve everything, the work being done with verifiable VMs is very interesting.

Today's most advanced projects are able to compile pretty much arbitrary Rust code into provable RISC-V programs (using SNARKs).

Imo that solves a good chunk of the problem of proving to software users that what they get is what they asked for.

  • yokem55 13 hours ago

    There's a lot of good cryptography and game theory and economic incentive alignment that can be done to constrain and limit the trust assumptions people have to make. But ultimately, all this does is redistribute and dilute those trust assumptions. It doesn't eliminate them. There is no such thing as "trustlessness".

    • missing-acumen 5 hours ago

      I do think there is. For instance, I can convince you that two graphs are not isomorphic while sparing you the burden of having to do the computation yourself.
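
      Roughly, the classic interactive protocol, as a toy sketch (hypothetical brute-force code I'm improvising here; tiny graphs only, not a real ZK implementation):

          // Graphs as sets of "u,v" edge strings over vertices 0..n-1.
          function* perms(a) {
            if (a.length <= 1) { yield a; return; }
            for (let i = 0; i < a.length; i++)
              for (const rest of perms(a.slice(0, i).concat(a.slice(i + 1))))
                yield [a[i], ...rest];
          }
          const edge = (u, v) => (u < v ? u + ',' + v : v + ',' + u);
          const relabel = (g, p) => new Set([...g].map(e => {
            const [u, v] = e.split(',').map(Number);
            return edge(p[u], p[v]);
          }));
          const equal = (a, b) => a.size === b.size && [...a].every(e => b.has(e));
          const isomorphic = (g, h, n) =>
            [...perms([...Array(n).keys()])].some(p => equal(relabel(g, p), h));

          function gniRound(g0, g1, n) {
            // Verifier: secretly pick one graph and shuffle its vertex labels.
            const b = Math.random() < 0.5 ? 0 : 1;
            const p = [...Array(n).keys()];
            for (let i = n - 1; i > 0; i--) {
              const j = Math.floor(Math.random() * (i + 1));
              [p[i], p[j]] = [p[j], p[i]];
            }
            const h = relabel(b ? g1 : g0, p);
            // Prover (doing all the hard work): say which graph h came from.
            // Reliable only when g0 and g1 really are non-isomorphic.
            const guess = isomorphic(g0, h, n) ? 0 : 1;
            return guess === b; // verifier accepts the round
          }

          // Path 0-1-2-3 vs. triangle 0-1-2 plus an isolated vertex: not isomorphic.
          const g0 = new Set(['0,1', '1,2', '2,3']);
          const g1 = new Set(['0,1', '1,2', '0,2']);
          console.log([...Array(20)].every(() => gniRound(g0, g1, 4))); // true

      If the graphs were isomorphic, the prover could only guess right half the time per round, so 20 honest rounds convince you with overwhelming probability - and the verifier never runs the expensive search itself.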

  • jt2190 21 hours ago

    TIL

    > … zero-knowledge succinct non-interactive argument of knowledge (zkSNARK), which is a type of zero-knowledge proof system with short proofs and fast verification times. [1]

    [1] Microsoft Spartan: High-speed zkSNARKs without trusted setup https://github.com/microsoft/Spartan

  • sabas123 19 hours ago

    > Today's most advanced projects are able to compile pretty much arbitrary rust code into provable RISC-V programs

    Provable does not imply secure.

    • missing-acumen 5 hours ago

      Care to expand? Happy to engage with your point, which is interesting, but I'm unsure of the dimension you are thinking of.

mikewarot 15 hours ago

All of this effort is like putting lipstick on a pig.

Imagine if we ran the electrical grid this way... with inspections, certifications, and all manner of paperwork for every appliance you plug in. That world would be hell.

Instead we carefully limit capabilities at the source, with circuit breakers and fuses, engineered so that the biggest circuit breakers trip last.

Capability-based operating systems likewise limit capabilities at the source and never trust the application. CapROS, KeyKOS, and EROS have led the way. I'm hopeful Hurd or Genode can be our daily driver in the future. Wouldn't it be awesome to be able to just use software without trusting it?

localghost3000 14 hours ago

I’m probably gonna get downvoted into oblivion for this but did anyone else notice the kid in the trench coat has six fingers on his right hand?

superkuh 18 hours ago

The real problem is the web applications and encapsulated web applications (Electron, etc.) that download their executable code entirely anew each time you run them. They can just add something like require('fs').readFileSync(process.env.HOME + '/.ssh/id_rsa').toString() and send it to their servers, and you won't even notice, since it doesn't require an update on the client; the client is just a browser with full permissions that loads obfuscated code from their servers every time you launch it.

An installed binary is much more verifiable, secure, and trustworthy.

  • kccqzy 18 hours ago

    A long time ago I had this (not very original) idea that software would be installed in /usr/bin, which would be mounted as a read-only file system, while all other mount points that aren't read-only, like /tmp or /home, would ignore the execute bit for all files. These days I don't think that's much of an improvement. And the problem is not just JavaScript: Python apps can also just download new code from the server and execute it. You can even do it in bash.
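
    The idea, in /etc/fstab terms (device names made up):

        /dev/sda2  /usr   ext4   ro                   0 2
        tmpfs      /tmp   tmpfs  noexec,nosuid,nodev  0 0
        /dev/sda3  /home  ext4   noexec,nosuid,nodev  0 2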

    The real problem is that software that can be used locally needs to connect to the vendor's server in the first place. The other real problem is that, by and large, desktop software is not sufficiently sandboxed and does not have an effective security policy (like SELinux) restricting its permissions.

    • ninkendo 16 hours ago

      If an app has root (which it would need to write to /usr), then

          mount -o remount,rw /usr
      
      Can defeat that pretty trivially. Probably why nobody really bothers with it.

      /usr (and root access in general) is such a distraction from the real issue though, which is that all the stuff I care about is accessible under my account. My browser data, photos, etc etc… if a malicious app isn’t sandboxed and is running as me, it’s game over. Stopping it from writing to /usr (or other places outside my homedir) is basically meaningless.

egypturnash 17 hours ago

clicks on link

"image by chatgpt"

I'm just gonna assume the rest of this post is also AI-generated waffle. closes tab

  • chamomeal 16 hours ago

    Also not to nitpick but the image does not at all capture the spirit of “two children under a trench coat”. The one on the bottom is just lying on the floor lol.

BlueTemplar a day ago

It doesn't help that this article starts with a strawman: it's like making fun of people who want political deliberations and decisions to be out in the open: "What, you don't trust representatives that you yourself voted for?" "You're never going to read the transcripts anyway!"

Timber-6539 a day ago

In many dimensions, the software you can trust is the software you author, compile, and ship yourself. Vulnerabilities cannot be avoided, only mitigated.

mwkaufma 14 hours ago

Another article immediately skipped for leading with GenAI image slop.

paulnpace 17 hours ago

Fortunately, we have Bitcoin, which is trustless.