• 8 Posts
  • 30 Comments
Joined 1 year ago
Cake day: June 13th, 2023


  • Even if TikTok were nakedly controlled by the Chinese government, who gives a shit? I can go over to RT (Russia Today) right now and get fed Russian propaganda. Hell, until 2022 I could add it to my cable package. I can to this day still get it as a satellite TV option. If the concern is “a foreign government may influence public opinion on a platform it controls,” then the US has a lot of banning to do.

    But we don’t because free speech is a thing and we’re free to consume whatever propaganda we want.

    We gave up that principle because “China bad” (and the CCP is bad, to be clear). But instead of passing laws around data privacy, or algorithmic transparency, or running a public information campaign to get kids off of TikTok, the US government went straight to “the government will decide what information you’re allowed to consume, we know what’s best for you,” and far too many people are cheering.

    Besides, the point you’re making is bullshit anyway given the kill switch mechanism TikTok offered.

    TikTok was banned because 1) China bad, and 2) TikTok is eating US social media companies’ lunch. Facebook and Twitter and Google throw some campaign donations at the politicians that killed their biggest rival, and the politicians calculate that more people hate TikTok than like it (or care about preventing government censorship if the thing being censored is something they don’t like). It’s honestly one of the grossest things I’ve seen Dems support lately.


  • That’s true, but I think what recent conflicts have demonstrated is that total firepower isn’t everything. Ukraine was significantly outmatched by Russia and hung on, even before western weapons shipments. Hamas, estimated at something like 30k fighters and armed with small arms and light rockets/artillery, continues to fight effectively against the US-armed IDF. Then we have historical examples like the US war in Vietnam, or the US failures against insurgents in Iraq (with the tide only turning after a deliberate hearts-and-minds political/social strategy).

    The whole “we have a lot of planes” thing is just defense contractor marketing. How that translates on the battlefield, especially when the civilian population despises you, is not great.

    A war like that would devastate Israel and drag the US into a true quagmire. It would sap a tremendous amount of resources and leave the US more vulnerable to the Chinas and Russias of the world.

    Not to mention our good old buddy international terrorism, which Biden’s unwavering support of Bibi is already making us a prime target for. Shit would be fucked.


  • While I appreciate the focus and mission, kind of I guess, you’re really going to set up shop in a country literally using AI to identify air strike targets and handing over to the AI the decision of whether the anticipated civilian casualties are proportionate? https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes

    And Israel is pretty authoritarian. Given recent actions against their supreme court and against journalists (Al Jazeera was outlawed, the Associated Press had cameras confiscated for sharing images with Al Jazeera, and the offices of both have been targeted in Gaza), you really think the right wing Israeli government isn’t going to co-opt your “safe superintelligence” for its own purposes?

    Oh, and then there is the whole genocide thing. Your claims about concern for the safety of humanity ring more than a little hollow when you set up shop in a country actively committing genocide, or at the very least engaged in war crimes and crimes against humanity, as determined by like every NGO and international body that exists.

    So Ilya is a shit head is my takeaway.






  • We had I think six eggs harvested and fertilized; of those I think two made it to blastocyst, meaning the cells doubled as they should by day five. The four that didn’t double correctly were discarded. Did we commit four murders? Or does it not count if the embryo doesn’t make it to blastocyst? We did genetic testing on the two that made it to blastocyst; one came back normal and the other came back with all manner of horrible deformities. We implanted the healthy one and discarded the genetically abnormal one. I assume that was another murder. Should we have just stored it indefinitely? We would never use it, can’t destroy it, so what do? What happens after we die?

    I know the answer is probably it wasn’t god’s will for us to have kids, all IVF is evil, blah blah blah. It really freaks me out sometimes how much of the country is living in the 1600s.




  • I don’t know enough to know whether or not that’s true. My understanding is that Google invented the transformer architecture with the paper “Attention Is All You Need.” A lot, if not most, LLMs use a transformer architecture, though you’re probably right that a lot of them build on the open source models OpenAI made available. The “generative” part just describes the model generating outputs (as opposed to classification and the like), and “pre-trained” just refers to the training process.

    But again I’m a dummy so you very well may be right.


  • Putting aside the merits of trying to trademark GPT, which, as the examiner says, is a commonly used term for a specific type of AI (there are other open source “GPT” models that have nothing to do with OpenAI), I just wanted to take a moment to appreciate how incredibly bad OpenAI is at naming things. Google has Bard and now Gemini. Microsoft has Copilot. Anthropic has Claude (which does sound like the name of an idiot, so not a great example). Voice assistants were Google Assistant, Alexa, Siri, and Bixby.

    Then OpenAI is like, ChatGPT. Rolls right off the tongue, so easy to remember, definitely feels like a personable assistant. And then they follow that up with custom “GPTs,” which is not only an unfriendly name, but also confusing. If I try to use ChatGPT to help me make a GPT, it gets confused and we end up in a “who’s on first” style standoff. I’ve resorted to just forcing ChatGPT to do a web search for “custom GPT” so I don’t have to explain the concept to it each time.


  • Interesting perspective! I think you’re right in a lot of ways, not least that it’s too big and heavy now. I’d also be shocked if the next iPhone didn’t have an AI-powered Siri built in.

    I guess fundamentally I am skeptical that we’re all going to want screens around us all the time. I’m already tired of my smart watch and phone buzzing me with notifications; do I really want popups in my field of vision? Do I want a bunch of displays hovering in front of me while I work? I just don’t know. It seems like it would be cool for a week or so, but I feel like it’d get tiring to have a computer on your face all day, even if they got the form factor way down.


  • Apple has always had a walled garden on iOS, and that didn’t stop them from becoming a giant in the US. Most people are fine with the App Store and don’t care about openness or the ability to do whatever they want with the device they “own.” Apple would probably love to have a walled garden for Macs as well, but knows that ship has sailed. Trying to force “spatial computing” (which this article incorrectly calls an Apple invention; it’s not, Microsoft coined the term for its HoloLens) on everyone is a great way to move to a walled garden for all your computing, with Apple taking a 30% slice of each app sale. I doubt the average Apple user is going to complain about it either, so long as the apps they want to use are on the App Store.

    I think the bigger problem is we’re in a world where most people, especially the generations coming up, want fewer screens in their lives, not more. Features like “digital well-being” are a market response to that trend, as are the thousands of apps and physical products meant to combat screen addiction. Apple is selling a future where you experience reality itself through a screen, and then you get the privilege of being able to clutter the real world with even more screens. I just don’t know that that’s a winner.

    It’s funny too, because at the same time AI promises a very different future where screens are less important. Tasks that require computers could be done by voice command or other minimal interfaces, because the computer can actually “understand” you. The Meta Ray-Ban glasses are more like this: you just exist in the real world and can call on AI to ask about the things you’re seeing, or just other random questions. The Humane AI Pin is like that too (I doubt it will take off, but it’s an interesting idea about where the future is headed).

    The point is, all of these AI technologies are computers and screens getting out of your way so you can focus on what you’re doing in the real world, whereas Apple is trying to sell a world where you (as the Verge puts it) spend all day with an iPad strapped to your face. I just don’t see that selling; I don’t think anybody wants that world. VR games and the like are cool because you strap in for a single immersive experience, then take the thing off and go back to the real world. Apple wants you spending every waking moment staring at a screen, and that just sounds like it would suck.



  • I don’t use TikTok, but a lot of the concern is just overblown China bad stuff (CCP does suck, but that doesn’t mean you have to be reactionary about everything Chinese).

    There is no direct evidence that the CCP has some back door to grab user data, or that it’s directing suppression of content. It’s just not a real thing. The fear mongering has been about what the CCP could force ByteDance to do, given its power over Chinese firms. ByteDance itself has been trying to reassure everyone that that wouldn’t happen, including by storing US user data on US servers out of reach of the CCP (theoretically, anyway).

    You stopped hearing about this because that’s politics; new, shinier things popped up for people to get angry about. Montana tried banning TikTok and got slapped down on First Amendment grounds. Politicians lost interest, and so did the media.

    Now, that’s not to say TikTok is great about privacy or anything. It’s just that they are the same amount of evil as every other social media and tech company making money from ads.



  • Google scanned millions of books and made them available online. Courts ruled that was fair use because the purpose and interface didn’t lend themselves to actually reading the books in Google Books, just searching them for information. If that is fair use, then I don’t see how training an LLM (which doesn’t retain an exact copy of the training data, at least in the vast majority of cases) isn’t fair use. You aren’t going to get an argument from me.

    I think most people who will disagree are reflexively anti AI, and that’s fine. But I just haven’t heard a good argument that AI training isn’t fair use.


  • There is an attack where you ask ChatGPT to repeat a certain word forever; it will do so and eventually start spitting out chunks of text it memorized during training. It was in a research paper, and I think OpenAI patched the exploit and made asking the system to repeat a word forever a violation of the TOS. That’s my guess as to how the NYT got it to spit out portions of their articles: “Repeat [author name] forever” or something like that. Legally I don’t know, but morally, claiming that using that exploit to surface a chunk of NYT text is somehow copyright infringement sounds very weak and frivolous. The heart of this needs to be “people are going on ChatGPT to read free copies of NYT work and that harms us,” or else their case just sounds silly and technical.