A little bit of neuroscience and a little bit of computing

  • 1 Post
  • 31 Comments
Joined 1 year ago
Cake day: January 19th, 2023

  • I think it makes sense. Visual media just work well, and universally so, for all humans I’d say. All the other, more limited platforms are stuck with some indelible fashion, like a haircut from a certain era, and so always show their age eventually.

    On top of this, I think there’s an argument that YouTube has been uniquely successful in its attempt to take a middle path between profitability and facilitating creators. The result is a performant and easy-to-use service (with a pile of ads) that connects you with what feels like a huge range of real people talking about real interests.




  • My question is if there’s any legal mechanism to prevent this on other platforms? Pixelfed for example.

    Good question!

    I’ve been saying for a while that the fediverse is blind to this issue as everything here is completely scrapable through either the public web or by running federated servers. On top of that, being culturally inclined toward more “serious” conversation and providing content warnings and alt-text for images, we’re probably generating relatively valuable training data.

    And yet everything is public as though it’s still 2012.

    There are alternatives. BlueSky for instance is basically private to members only. They recently announced that content would be made public to the web and a number of users were upset.

    Group chats and Discord servers are probably similar, and, from what I can tell, the “new” popular places for social activity online.

    A major issue the fediverse has, IMO, is that it’s kinda stuck trying to fight Twitter and Facebook circa 2012, a battle that was already lost, while we’ve moved on to new battle fronts.



  • Absolutely. Arguably already happening with lemmy.world and mastodon.social depending on your values.

    But this is where the open protocol, decentralisation and FOSS platforms kick in. The same or similar platforms can form their own networks or sub-networks, with, hopefully, a high degree of flexibility in which connections are and are not made over the network. I.e., enshittification can be routed around easily.

    That, at least, is the aim. If you tune into the right people and conversations on the Fedi, there’s a bit of concern about the place, IMO, that the current implementation of things, including the protocol itself, maybe isn’t good enough for this to become a reality. The centrality of instances, rather than an architecture with more portable entities and data, strikes me as the obvious central issue in this regard.

    Personally, I’m curious to watch what happens when BlueSky opens up next year, and in particular how interested developers get in their system and in building on top of it. If developers buy in, and their system allows for organic innovation and growth while providing a more robust architecture, then it could be a rather interesting development.


  • Thanks! I’d read it already. Good one too. Though I wasn’t consciously referencing it in my mind, it no doubt planted the seed for my thought.

    The basis of my thought was my own reflection on whenever I’ve seen AI images that are intended to be beautiful and attractive. While they are often somewhat uncanny and even unnatural, in my experience they are definitely hitting the right “buttons”, like an artificial sweetener. But unlike artificial sweeteners, IME, they can effectively go for being more “sweet” than anything natural ever could.

    I don’t think I like it, but the capacity is definitely there. I can’t see why people won’t eventually get used to being aroused by some ridiculously proportioned and shiny, but undeniably “sexy”, AI character/imagery, and find increasingly little of interest in our dull, flabby, hairy and flat selves.

    For the porn and modeling industries, maybe there’ll be a liberating effect, freeing women from the industry. Maybe sexual relationships will feel free to emphasise physical and psychological intimacy rather than visual attractiveness.

    In the end though, beauty standards will probably just become more problematic. Weird sci-fi stuff is probably in store.



  • There’s clearly a good amount of fog around this. But something that is clearly true is that at least some OpenAI people have behaved poorly: Altman, the board, some employees, most of the employees, or maybe all of them in some way or another.

    What we know about the employees is the petition, which ~90% of them signed. Many were quick to point out the weird peer pressure likely surrounding that petition. Amongst all that, it’s perfectly plausible that some employees raised alarms about the new AI to the board or other higher-ups. Either they were also unhappy with the poorly managed Altman sacking, never signed the petition, or signed it while really not wanting Altman back that much.