

The whole point is that AI art doesn’t take time or effort.
Yep, what I’m saying is that placeholders just got better, that’s all.
Yes, but you can’t have professional art during the whole process of development. It’s far more efficient for a solo dev to test first before paying an artist to make the final assets.
Game development is so chaotic, I’ve seen people throw away thousands of dollars of art because it turns out the game never needed those assets in the first place.
No, but it does empower solo indie creators to do something beyond that. Like a dude who’s a solo programmer can now make a reasonably okay looking game without dipping into “programmer art”.
Obviously once their game gets enough traction they should pay a real artist to do it right but it’s not a bad idea to prove the concept first using low effort AI art.
Like every day? Yeah it’s worse now but Google is still useful for a lot of things.
That being said, I do have AdBlock so it’s a different internet for me.
So this is what I’m excited about in AI.
LLMs are statistical machines that simply output reasonable sequences of tokens. Useful! Not particularly smart, but it approximates language. I think it proves that a great majority of what humans do is learned sequences of behaviors.
But now we’re working on corralling that statistical language into workflows that improve the reasoning of the output. These are the first experiments into what makes thinking actually work. Is it iteratively refining a rough concept (like we’re seeing in this paper)? Or is it subdividing tasks into more easily solved problems (like the Atom of Thoughts paper)?
Once we find something that works, a real theory of intelligence seems much more likely to emerge. If that happens, I wouldn’t be surprised to see LLMs die out in favor of something far simpler and more efficient.
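To make the "iterative refinement" idea concrete: it's basically a generate–critique loop around the model. This is just a minimal sketch, where `generate` and `critique` are hypothetical stand-ins for real LLM calls, not any particular library's API:

```python
def refine(prompt, generate, critique, max_rounds=3):
    """Iteratively refine a draft: generate, get feedback, regenerate.

    `generate` and `critique` are hypothetical callables standing in for
    LLM calls; `critique` returns an empty string when it is satisfied.
    """
    draft = generate(prompt)
    for _ in range(max_rounds):
        feedback = critique(draft)
        if not feedback:  # critic is satisfied, stop early
            break
        # Feed the previous draft and the feedback back into the model
        draft = generate(
            f"{prompt}\n\nPrevious draft:\n{draft}\n\nFeedback:\n{feedback}"
        )
    return draft
```

The subdivision approach would replace the loop with a step that splits the prompt into smaller subproblems and solves each one independently; either way it's orchestration around the same statistical token machine.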
Probably not. Unless your code represents a high percentage of all the code in the world (it doesn't), the AI will churn through it without noticing. Fun idea, though!
I mean, a lot of people can also charge at home. It’s way cheaper and more convenient.
As much as people like to shit on them due to the connection to Musk, they can be good cars with some caveats.
Buying used is great because you don't send that money to Musk. In fact, the ongoing updates and support technically cost them money over time.
I suspect the fediverse is stickier than mainstream social media, since there’s no incentive to be anti-consumer, and anyone can make their own frontend.
As these companies keep making unforced errors, it's only a matter of time before people join us.
Please file Nicole between “Beans” and “Trying not to Poop”
The French really know how to party.
Being pedantic about the differences between mammoths and mastodons is legitimately one of my favorite pastimes. Thank you!
They haven't released the models yet, but they seem to suggest they will, under an Apache license. The voice models come in 1B/3B/8B sizes, which sounds relatively reasonable for consumer hardware.
I think this is the natural next evolution of LLMs. We're starting to max out on how well a string of text can represent human knowledge, so now we need to tokenize more things and make multimodal LLMs. After all, humans are far more than just speech machines.
Approximating human emotion and speech cadence is very interesting.
Agreed on all counts, except that rebrands rarely succeed without boatloads of cash behind them. And even then not always.
I’ve heard (and experienced to a certain extent) that a rebrand is a sign of the beginning of the end for a product.
People love swapping Minecraft seeds because it allows them to share unique experiences. Like if something really cool generated, other people will want to see that thing happen!
To deny players this would be a huge error - seed hunting is a non-trivial community engagement factor.
Surely the random seed should be considered a necessary part of the input, no?
AI still uses random seeds like other procedural algorithms.