
Alternatively, you can use AI as a starting point. All design is iterative.


But why? It's not all that much harder to do original work than redrawing the output of an AI.


I'm not an artist, so I won't comment on the "ease", but if that were true, why would a ban be required?

Regardless, you should check out the AI features in the Adobe products [1]: generative removal, fill, etc.

AI, in modern tools, is not just "draw the scene so I can trace it".

[1] https://www.adobe.com/ai/overview/features.html


Sure, and limit yourself at the starting point. People underestimate how limiting these tools are: they're trained on a fixed set and can only reproduce noise from here and there.


> they're trained on a fixed set and can only reproduce noise from here and there

This anti-AI argument doesn't make sense; it's like saying it's impossible to reinvent multiplication after reading a times table. You can create new things via generalization or in-context learning (references).

In practice many image generation models aren't that powerful, but Gemini's is.

If someone created one that output multi-layer images/PSDs, which is certainly doable, it could be much more usable.


If image generation is anything like code generation, then AI is not good at copying the layout / art style of the coder / artist.

In Visual Studio, all the AI code generation applies Microsoft's syntax style, not my syntax style. The returned line of code might be correct, but the layout / art / syntax is completely off. And this is with a solution of a little under one million lines of code, at the moment, for the AI to work from.

Art is not constant. The artist has a flow and may have an idea, but the art changes form with each stroke, even removing strokes that don't fit. To my eye, AI-generated content lacks the artist's emotion.


Image generation is nothing like AI code generation in this regard. Copying an artist's style is one of the things that is explicitly quite easy for open-weight models. Go to civitai and there are a million LoRAs trained specifically on recreating artist styles.

Earlier on in the Stable Diffusion days it even got fairly mean-spirited: someone would make a LoRA for an artist (or there would be enough in the training data for the base model not to need one), the artist would complain about people using it to copy their style, and then there would be an influx of people making more and better LoRAs for that artist. Sam Yang put out what was initially a relatively tame tweet complaining about it, and people instantly started trying to train them just to replicate his style even more closely.


Note, the original artist whose style Stable Diffusion was supposedly copying (Greg someone, a "concept art matte painting" artist) was in fact never in the training data.

Style is in the eye of the beholder, and it seems the text encoder just interpreted his name closely enough for it to seemingly work.


Greg Rutkowski

Early Stable Diffusion prompting was a lot of cargo-cult copy-pasting of random crap into every prompt.


Putting it in the context of an anti-AI argument doesn't make sense. AI was everywhere, like in Photoshop brushes, way before it became a general buzzword for LLMs or image generation. I'm not anti-AI, but it's simply the truth that these models can only come up with a limited set based on their training data. Sure, one can get inspiration from a "times table", but if you only ever see 8s and 9s multiplied, you're limiting yourself.


> If someone created one that output multi-layer images/PSDs, which is certainly doable, it could be much more usable.

This reminds me, if you ask most image models for something "with a transparent background", it'll generate an image on top of a Photoshop checkerboard, and sometimes it'll draw the checkerboard wrong.


I've seen plenty of artists start by painting over an image they got from Google image search, and end up with something incredible.

And it's not that limiting. You aren't stuck with anything you start with. You can keep painting.


And then you decide that the “starting point” is good enough because the deadline is looming.


Has it occurred to you that if this tooling were good, you wouldn't need to encourage creatives so hard to use it?


If that Disney Star Wars creature AI slop video was any indication, that starting point is pretty fucking bad.

https://youtu.be/E3Yo7PULlPs?t=668


For an artist, the starting point is a blank page, followed by a blur of erased initial sketches and strokes. And sources of inspiration are still a useful thing.


Starting from something basically done might have the same effect that spec music has had on movies.



