• 0 Posts
  • 12 Comments
Joined 2 years ago
Cake day: June 24th, 2023

  • At least the EU is somewhat privacy-friendly here (excluding the Google tie-in) compared to whatever data-sharing and privacy mess the UK has obligated people into with sharing ID pictures or selfies.

    Proving you are 18+ through a zero-knowledge proof (i.e. the other party learns nothing beyond the fact that you are 18+), where the proof is generated locally on your own device from a government-signed date of birth (the government only issues the ID and doesn’t see what you do with it), is probably the least privacy-intrusive way to do this, barring not checking anything at all.
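    A minimal sketch of the information flow only, not real zero-knowledge cryptography (a real scheme would use something like BBS+ signatures or a zk-SNARK, and the proof would be verifiable without the signing key). All function and key names here are hypothetical; the point is what each party sees.

```python
import hmac
import hashlib
from datetime import date

# Stand-in for the government's real signing key (hypothetical).
GOV_KEY = b"government-secret-signing-key"

def government_issue_credential(dob: str) -> dict:
    """Government signs the date of birth once; it never sees later checks."""
    tag = hmac.new(GOV_KEY, dob.encode(), hashlib.sha256).hexdigest()
    return {"dob": dob, "sig": tag}

def device_prove_over_18(credential: dict, today: date) -> dict:
    """Runs locally on the user's device: derives only the boolean claim."""
    y, m, d = map(int, credential["dob"].split("-"))
    over_18 = (today.year - y - ((today.month, today.day) < (m, d))) >= 18
    # In a real ZKP the proof itself would convince the verifier without
    # revealing the DOB; here we only show what the verifier receives.
    return {"claim": "over_18", "value": over_18}

def verifier_check(proof: dict) -> bool:
    """The website learns nothing beyond the single boolean."""
    assert set(proof) == {"claim", "value"}  # no DOB, no identity
    return proof["claim"] == "over_18" and proof["value"]

cred = government_issue_credential("2003-06-24")
proof = device_prove_over_18(cred, date(2025, 1, 1))
print(verifier_check(proof))  # True: over 18, nothing else disclosed
```

    Note how the government appears only at issuance time and the verifier only at check time, which is exactly the separation the comment describes.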


  • It is complicated. Technically it is not always the case, but in practice it may very well be. This page (in Dutch) notes that, unless the driver can show that ‘overmacht’ (force majeure) applies — meaning no action they could have taken would have avoided or reduced the bodily harm — they are (at least in part) responsible for damages. For example, not engaging the brakes as soon as it became clear you would hit the cyclist would still make you (partially) liable for costs, even if the cyclist made an error themselves (such as running a red light).

    Because the burden of proof is on the driver, it may be hard to prove that ‘overmacht’ applies, so their insurance may have to pay up even if they did not do anything wrong.


  • Wouldn’t the algorithm that creates these models in the first place fit the bill? Given that it takes a bunch of text data, and manages to organize this in such a fashion that the resulting model can combine knowledge from pieces of text, I would argue so.

    What is understanding knowledge anyways? Wouldn’t humans fail to fit the bill too, given that for most of our knowledge we do not know why it is the way it is, or have even held rules that were - in hindsight - incorrect?

    If a model is more capable of solving a problem than an average human being, isn’t it, in its own way, intelligent in some form? And, to take things to the utter extreme, wouldn’t evolution itself be intelligent, given that it causes intelligent behavior to emerge, for example viruses adapting to external threats? What about an (iterative) optimization algorithm that finds solutions no human would be able to find?

    > Intelligence has a very clear definition.

    I would disagree: it is probably one of the hardest things to define out there; the definition has changed greatly over time and is core to the study of philosophy. Every time a being or thing fits a definition of intelligence, the definition is often altered to exclude it, as has happened many times.


  • The key point being made is that if you are doing de facto copyright infringement or plagiarism by creating a copy, it shouldn’t matter whether that copy was made through copy-paste, by re-compressing the same image, or by using an AI model. The product here is the copy-paste operation, the image editor, or the AI model — not the (copyrighted) image itself. You can still sell computers with copy-paste (despite some attempts from large copyright holders with DRM), and you can still sell image editors.

    However, unlike copy-paste and the image editor, the AI model could memorize and emit training data without the input implying the copyrighted work. (This excludes the case where the image itself, or a highly detailed description of the work, was provided, as then it would clearly be the user who is at fault and intends for this to happen.)

    At the same time, it should be noted that exact replication of training data isn’t desirable in any case, and online image-generation services could include an image similarity check against training data; many probably do this already.
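    A hedged sketch of what such a similarity check could look like: a tiny average-hash comparison against (a sample of) training-image hashes. Real services would use proper perceptual hashing or embedding search at scale; here images are just grayscale pixel grids (lists of lists of 0–255 ints), and all names and thresholds are made up for illustration.

```python
def average_hash(pixels):
    """Bit i is 1 if pixel i is above the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def too_similar(generated, training_hashes, max_distance=2):
    """Flag a generated image that is near-identical to a training image."""
    h = average_hash(generated)
    return any(hamming(h, t) <= max_distance for t in training_hashes)

# Toy 2x2 "training set" and a near-copy with the same light/dark pattern.
training = [[[200, 10], [10, 200]], [[0, 0], [255, 255]]]
hashes = [average_hash(img) for img in training]
near_copy = [[190, 20], [15, 210]]
print(too_similar(near_copy, hashes))  # True: flagged as near-duplicate
```

    In practice the check would run on the service's side before returning a result, trading a little latency for protection against exact memorization.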



  • I think the video LegalEagle uploaded explains it quite succinctly: for the sale there was a certain split between the creditors, and the creditors with the largest portion were willing to forego part of theirs so that the other creditors would get a larger payout if The Onion’s bid was the winning one. In effect, the other creditors would get more money out of the 1.75m bid than out of the 3.5m bid, and the creditors that ‘got less’ are the ones that offered up that money in the first place.
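    A worked example with made-up numbers: only the two bid totals ($1.75M and $3.5M) come from the comment above; the split percentage and carve-out amount are hypothetical, chosen just to show how the arithmetic can favor the smaller bid.

```python
# Hypothetical: the share (in %) of sale proceeds the *other* creditors
# would normally receive under the plain split.
PLAIN_SPLIT_PERCENT = 20

def other_creditors_payout(bid, carve_out=0):
    """carve_out: fixed amount the largest creditors redirect to the others."""
    return bid * PLAIN_SPLIT_PERCENT // 100 + carve_out

rival = other_creditors_payout(3_500_000)                      # plain split
onion = other_creditors_payout(1_750_000, carve_out=750_000)   # with concession
print(rival, onion)        # 700000 1100000
print(onion > rival)       # True: the smaller bid pays the others more
```

    The creditors funding the carve-out are exactly the ones whose nominal share shrinks, which is why the bankruptcy court could treat the $1.75M bid as the better offer for everyone else.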



  • 8uurg@lemmy.world to Technology@lemmy.world · *Permanently Deleted* · 10 months ago

    A very similar situation to the one analysed in this recently published paper. The quality of what is generated degrades significantly.

    They mostly investigate replacing the data with AI-generated data at each step, though, so I doubt the effect will be as pronounced in practice. Human writing will still be included, and even human curation of AI-generated text can skew the distribution of the training data (as the process used by these editors would inevitably do, since reasonable-looking text could slip through the cracks).
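    A toy simulation of the paper's purest setting (my own construction, not code from the paper): repeatedly refit a model on samples drawn from the previous generation's model. With a simple Gaussian, each refit loses a little of the tails, and the spread of the fitted distribution tends toward collapse over generations.

```python
import random
import statistics

random.seed(42)

def recursive_refit(generations=100, n=10):
    """Fit a Gaussian to samples from the previous generation's Gaussian."""
    mu, sigma = 0.0, 1.0            # generation 0: the "real" distribution
    history = [sigma]
    for _ in range(generations):
        samples = [random.gauss(mu, sigma) for _ in range(n)]
        mu = statistics.fmean(samples)    # refit using only its own output
        sigma = statistics.stdev(samples)
        history.append(sigma)
    return history

sigmas = recursive_refit()
print(f"sigma: start {sigmas[0]:.2f}, min over run {min(sigmas):.2f}")
```

    With human text still in the mix, as argued above, the refit would not be purely self-referential, which is why the degradation should be slower than in this fully closed loop.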