• 0 Posts
  • 6 Comments
Joined 1 year ago
Cake day: October 17th, 2023

  • exactly: It’s “open source” like Android. The core Android, the “Android Open Source Project” (AOSP), is open source (in many cases because it has to be, e.g. the Linux kernel it uses), but it includes practically nothing that makes the actual system work for normal users. If you want the system to be “practically usable”, you need a lot more, which is usually the “Google Mobile Services” (GMS), and those are proprietary. You are also generally required to install all items in the GMS as a bundle, i.e. even if you only need the Play Store, you still have to install Google Chrome.

    Further, the Android name and logo are trademarked by Google, so even if you roll your own Android, you would not be allowed to call it Android. WearOS is essentially the same thing: the Android subsystem is open, but the actual thing you call WearOS (plus trademarks, etc.) is not.




  • “train one with all the Nintendo leaks”

    This is fine

    “generate some Zelda art and a new Mario title”

    This is copyright infringement.

    The ruling in Japan (and, I predict, the same will hold in other countries) is that the act of training a model (which is just a statistical estimator) is not copyrightable, so it cannot be copyright infringement. This is already standard practice for everything else: you cannot copyright a mathematical function, regardless of how much data you fit it to (which is sensible: CERN has fit physics models to petabytes worth of data, and that doesn’t mean they hold a copyright on the laws of nature; they just hold the copyright on the data itself). However, if you generate something that is copyrighted, that item is still copyrighted: it doesn’t matter whether you used an AI image generator, Photoshop, or a tattoo gun.
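    To make the “just a statistical estimator” point concrete, here is a toy sketch (my own illustration, not anything from the ruling): “training” in this sense is nothing more than estimating the parameters of a mathematical function from data.

    ```python
    # Toy illustration: "training" as parameter estimation.
    # The data and the linear model are made up for this example.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, size=100)
    y = 3.0 * x + 2.0 + rng.normal(0, 1, size=100)  # synthetic "training data"

    # Fit y ≈ a*x + b by least squares: the resulting "model" is
    # nothing more than the fitted pair of numbers (a, b).
    A = np.column_stack([x, np.ones_like(x)])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

    print(f"fitted function: y = {a:.2f}*x + {b:.2f}")
    ```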


  • First, I don’t think that’s the right comparison. You need to compare them to taxis.

    It’s not just that: you generally have a significant distribution shift when comparing self-drivers/driving assistants to normal human drivers. This is because people only use self-driving in situations where it has a chance of working, which is especially true with something like Tesla’s self-driving, where people are not even going to start the Autopilot when it gets tricky (never mind intervening dynamically: they won’t start it in the first place!).

    For instance, one of the most common confounding factors is the ratio of highway driving vs. non-highway driving: highways are inherently less accident-prone since you don’t have to deal with intersections, oncoming traffic, people merging in from every random driveway, or children chasing a ball into the street. Self-drivers tend to report a much higher share of highway miles than ordinary drivers, because the availability of the technology dictates where you end up measuring. You can correct for that by, e.g., explicitly computing the conditional likelihood p(accident|highway) and recombining it with a common p(highway) derived from the entire population of car traffic, as in the sketch below.
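    As a rough sketch of what that correction looks like (all numbers below are made up for illustration, not real accident statistics):

    ```python
    # Correcting a fleet's accident rate for its highway/non-highway exposure mix.
    # All rates and shares are hypothetical placeholders.

    # Observed conditional accident rates for the self-driving fleet (per million miles)
    p_accident_given_highway = 0.5
    p_accident_given_city = 3.0

    # Exposure mix: the fleet logs mostly highway miles, the overall car population does not
    p_highway_fleet = 0.9        # share of the fleet's miles driven on highways
    p_highway_population = 0.4   # share of all car traffic driven on highways

    # Naive rate: weighted by where the fleet happens to drive (biased low)
    naive_rate = (p_accident_given_highway * p_highway_fleet
                  + p_accident_given_city * (1 - p_highway_fleet))

    # Corrected rate: same conditional rates, reweighted with the common,
    # population-wide p(highway), so the exposure mix is comparable
    corrected_rate = (p_accident_given_highway * p_highway_population
                      + p_accident_given_city * (1 - p_highway_population))

    print(f"naive fleet rate:     {naive_rate:.2f} accidents per million miles")
    print(f"exposure-corrected:   {corrected_rate:.2f} accidents per million miles")
    ```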