• 0 Posts
  • 245 Comments
Joined 4 months ago
Cake day: June 5th, 2024

  • Interoperability is a big job, but how much it matters varies widely with the use case. There are layers of standards atop other standards, some new, some near deprecation. Some extremely large and complex datasets need a shit-ton of metadata just to decipher, or even extract, the data. Some more modern dataset standards have that metadata baked into the file (a quick sketch of that idea is below), but even then there are corner cases. And the standards for zero-trust security enclaves, discoverability, non-repudiation, attribution, multidimensional queries, notification and alerting, and pub/sub are all relatively new, so we occasionally run into operational situations that the standards authors didn’t anticipate.
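
    To make the “metadata baked into the file” point concrete, here’s a minimal sketch that reads embedded metadata out of an HDF5 file via h5py. HDF5 is just one example of a self-describing format, and the file name and attributes here are hypothetical; NetCDF, Parquet, and others carry embedded metadata in their own ways.

    ```python
    # Illustrative only: HDF5 is one example of a self-describing format
    # where the metadata travels inside the file. The file name and any
    # attribute names you'd see are hypothetical.
    import h5py

    def dump_embedded_metadata(path):
        """Print file-level and per-object attributes baked into an HDF5 file."""
        with h5py.File(path, "r") as f:
            # File-level attributes (provenance, conventions, units, ...)
            for key, value in f.attrs.items():
                print(f"file attr {key!r} = {value!r}")

            # Walk every group/dataset and show the metadata attached to it
            def show(name, obj):
                for key, value in obj.attrs.items():
                    print(f"{name}: {key!r} = {value!r}")

            f.visititems(show)

    if __name__ == "__main__":
        dump_embedded_metadata("observations.h5")  # hypothetical file
    ```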

  • If a self-driving car kills someone, the programming of the car is at least partially to blame

    No, it is not. Blame attaches at the point where the system is put to a particular use. That use is what should be verified and validated, and that’s where some person signs on the dotted line that the system is fit for that purpose.

    I can write a simplistic algorithm to guide a toy drone autonomously (something like the naive controller sketched below). So let’s say I GPL it. If an airplane manufacturer then drops that code into an airliner and fails to test it correctly in scenarios resembling real-life use of that plane, they’re the ones who fucked up, not me.
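
    For the sake of argument, the “simplistic algorithm” could be something as naive as a proportional controller nudging the drone toward a waypoint. Everything below (names, the gain value) is invented for illustration; it’s exactly the kind of hobby code that is fine in a toy and wildly unfit for an airliner.

    ```python
    # Purely illustrative toy-drone guidance: a naive proportional controller.
    # Hypothetical names and gain; not remotely flight-certified software.
    from dataclasses import dataclass

    @dataclass
    class State:
        x: float
        y: float

    def steer(current: State, target: State, gain: float = 0.5):
        """Return a velocity command that nudges the drone toward the target."""
        vx = gain * (target.x - current.x)
        vy = gain * (target.y - current.y)
        return vx, vy

    # One control step toward a waypoint
    print(steer(State(0.0, 0.0), State(10.0, 5.0)))  # -> (5.0, 2.5)
    ```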

  • It’s a problem, but not a bug any more than the result of a car hitting a tree at high speed is a bug.

    You’re attempting to redefine “bug.”

    Software bugs are faults, flaws, or errors in computer software that result in unexpected or unanticipated outcomes. They may appear in various ways, including undesired behavior, system crashes or freezes, or erroneous and insufficient output.

    From a software testing point of view, a correctly coded realization of an erroneous algorithm is still a defect (a bug). It fails validation (a test of fitness for use) even though it passes verification (a test that the code correctly implements the algorithm as specified, erroneous or not). There’s a small illustration of that distinction at the end of this comment.

    This kind of issue arises not only with LLMs, but with any software that includes some kind of model within it. The provably correct realization of a crap model is still crap.
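
    A minimal sketch of that distinction, assuming a deliberately wrong “model” (a rule-of-thumb temperature conversion mistaken for the real formula): the verification check passes because the code matches its spec exactly, while the validation check fails because the spec itself doesn’t match reality. All names here are hypothetical.

    ```python
    # Hypothetical example: a correctly coded realization of an erroneous model.
    # Suppose the spec says "Fahrenheit = Celsius * 2 + 30" (a rough rule of
    # thumb mistaken for the real conversion F = C * 9/5 + 32).

    def celsius_to_fahrenheit(c: float) -> float:
        """Faithfully implements the (erroneous) spec."""
        return c * 2 + 30

    def test_verification():
        # Verification: does the code implement the spec as written? Yes.
        assert celsius_to_fahrenheit(10) == 50

    def test_validation():
        # Validation: is the output fit for use? No -- water boils at 212 F,
        # but this model says 230. The defect is the model, not the code.
        assert celsius_to_fahrenheit(100) == 212

    if __name__ == "__main__":
        test_verification()
        try:
            test_validation()
        except AssertionError:
            print("validation failed: the code is correct, the model is not")
    ```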