• 1 Post
  • 13 Comments
Joined 11 months ago
Cake day: June 21st, 2023


  • I agree, but it’s one thing if I post to public places like Lemmy or Reddit and it gets scraped.

    It’s another thing if my private DMs or private channels are being scraped and put into a database that will most likely get outsourced for prepping the data for training.

    Not only that, but the trained model will have internal knowledge of things that are sure to give anxiety to any cybersecurity expert. If users know how to manipulate the model, they could get it to divulge some of that information (a rough sketch of that kind of probing is below).
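    To make that concrete, here is a minimal, entirely hypothetical sketch of a training-data extraction probe: feed the model the start of a message you suspect was scraped into its training set and see whether it completes it with specific details. `query_model` is a stand-in for whatever API the deployed model exposes; none of the names here refer to a real library.

```python
# Hypothetical training-data extraction probe (sketch only).
# `query_model` is a placeholder for whatever chat/completion API the
# deployed model actually exposes; it is not a real library call.

def query_model(prompt: str) -> str:
    """Stub: send `prompt` to the model and return its reply."""
    return "[model reply would go here]"

# Prefixes of messages an attacker suspects were in the training data.
suspected_prefixes = [
    "Hey, my new phone number is",
    "The staging server password is",
]

for prefix in suspected_prefixes:
    completion = query_model(prefix)
    # If the model keeps finishing these prompts with specific,
    # plausible-looking details, that suggests it memorized private text
    # rather than just learning general language patterns.
    print(f"{prefix!r} -> {completion!r}")
```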


  • After reading through that wiki, it doesn’t sound like the sort of thing that would work well for what AI is actually able to do in real time today.

    Contrary to your statement, Amazon isn’t selling this as a means to “pretend” to do AI work, and there’s no evidence of this on the page you linked.

    That’s not to say that this couldn’t be used to fake an AI; it’s just not sold that way, and in many applications it wouldn’t be able to compete with already existing ML models.

    Can you link to any examples of companies making wild claims about their product where it’s suspected that they are using this service? (I couldn’t find any after a quick Google search… but I didn’t spend too much time on it).

    I’m wondering if the misunderstanding here is based on the sections of that page related to AI work? The kind of AI work you would do with Turkers is the work needed to prepare data before it can be used to train a machine learning model: things like labelling images, transcribing words from images, or (to put it in a way most of us have already experienced) solving captchas that ask you to find the traffic lights (so you can help train their self-driving car model). A rough sketch of what that labelled data ends up looking like is below.
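    A minimal sketch of what crowdsourced labelling typically produces and how it feeds into training. The field names, file paths, and manifest format are made up for illustration; real pipelines (Mechanical Turk included) have their own task formats and review steps.

```python
# Illustrative only: human workers submit one label per task, and the
# aggregated labels become training data for a supervised model.

import json

# One record per completed labelling task, roughly as a worker might submit it.
labelled_records = [
    {"image": "frames/0001.jpg", "label": "traffic_light"},
    {"image": "frames/0002.jpg", "label": "no_traffic_light"},
    {"image": "frames/0003.jpg", "label": "traffic_light"},
]

# The labels are usually written out as a manifest that a training job
# can read alongside the raw images.
with open("labels_manifest.jsonl", "w") as f:
    for record in labelled_records:
        f.write(json.dumps(record) + "\n")

# A training script would then pair each image with its human-provided
# label and fit or fine-tune a model on those (image, label) pairs.
```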


  • I don’t think that “fake” is the correct term here. I agree a very large portion of companies are just running API calls to ChatGPT and then patting themselves on the back for being “powered by AI” or some other nonsense.

    “Amazon even has an entire business to help companies pretend their AI works by crowdsourcing cheap labor to review data.”

    This is exactly the point I was referring to before. Just because Amazon is crowdsourcing cheap labor to back up their AI doesn’t mean the AI is “fake”. Getting an AI model to work well takes a lot of man-hours to continually train and improve it, and to make sure it keeps performing well (see the sketch after this comment).

    Amazon was doing something new (with their shopping cart AI) that no model had been trained on before. Training on demo/test data doesn’t give you the kind of data you get when you actually put the model into a real-world environment.

    In the end, it looks like additional advancements are needed before a model like this can be reliable, and even then someone should be asking whether AI is really necessary for something like this when more reliable methods are available.
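    A minimal sketch, assuming the usual human-in-the-loop pattern, of how reviewer corrections feed back into a model. The confidence threshold, queue, and data structures are invented for illustration; Amazon’s actual pipeline isn’t public.

```python
# Hypothetical human-in-the-loop review loop (sketch only).
from dataclasses import dataclass
from typing import List

@dataclass
class CartPrediction:
    cart_id: str
    items: List[str]
    confidence: float  # model's confidence in the predicted cart contents

CONFIDENCE_THRESHOLD = 0.9  # made-up cutoff for illustration

review_queue: List[CartPrediction] = []
corrected_examples: List[CartPrediction] = []

def handle_prediction(pred: CartPrediction) -> None:
    """Auto-accept confident predictions; route the rest to human reviewers."""
    if pred.confidence >= CONFIDENCE_THRESHOLD:
        return  # act on the model's output directly
    review_queue.append(pred)  # a human verifies and corrects the cart

def collect_corrections() -> None:
    """Human-corrected carts become labelled data for the next fine-tune."""
    while review_queue:
        corrected_examples.append(review_queue.pop())
    # corrected_examples would be fed back into training, which is why a
    # large reviewer workforce doesn't mean there is no model at all.
```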


  • This would actually explain a lot of the negative AI sentiment I’ve seen that’s suddenly going around.

    Some YouTubers have hopped on the bandwagon as well. There was a video posted the other day where a guy attempted to discredit AI companies overall by saying their technology is faked. A lot of users were agreeing with him.

    He then pointed to stories about how Copilot/ChatGPT produced output that was very similar to a particular travel website. He also pointed out how Amazon Fresh stores required a large number of outsourced workers to verify shopping cart totals (implying that there was no AI model at all, and not understanding that you need workers like this to actually retrain/fine-tune a model).