This was one of those times when the image in my head was way more epic. Ah, well. I cranked the fisheye lens/glowy effects/stark contrast dials as far as I could! I’m not displeased with it! But it is funny how unlike the thing in my mind this comic persists in being.
I actually ran a description for the first panel through Midjourney a few times at the very outset of sketching, just to see what it came up with. I haven’t played with AI image generation extensively, mostly because I haven’t (yet) found a way to make it all that useful a part of my process. Midjourney routinely gets my image in the right ballpark, but has a hard time dialing in the particulars just yet. In this case, it got the right vibe – a fairytale, grandmotherly type doing something magic. Oh, and a book? You got it! To its credit, it does have this uncanny ability to nail the “artistic composition” of its attempts! But they were still such a far cry from anything that would’ve worked in the panel. So I ignored them and let them wash down the steady flow of everyone else’s new prompts and images coming through.
I kinda wish I’d kept the images it produced now! Hadn’t expected to be talking about it. (Opens Midjourney, tries to recreate his prompt, goes downstairs for coffee.) (SFX: footsteps fade, distant sloshing of liquids, footsteps return, groaning flop into creaky office chair.) Here’s what it came up with while I was gone, which is very similar to what it produced before, with perhaps a splash more eldritch horror:
And after one more go with slightly more specific text:
(Oh! And one of a Kirby mech that someone else had asked for while I waited a minute for the second image to render):
(Seriously – this is the other reason I don’t use Midjourney often! The luring call of that rabbit hole is not to be taken lightly!)
I probably could have gotten it to give me something that more closely fit the image in my head, or at least something that worked to tell the story I wanted to tell in that panel. But I think it would’ve taken an awful lot of prompting and re-prompting, slowly revising and refining the image over many generations (you can pick one or more of the four pics to run through further iterations based on that version). And even then, I’d have had to redraw the thing in my own style. In this case, it was clearly the better choice to just draw the damned thing more or less as it was in my head (even if that did turn out to be on the “less” end of that spectrum this time).
But I suspect the gap is narrowing.
I recently saw that someone had made an interface for the ChatGPT API in the Unity engine. You can talk at it to make video games now. How long before you can create comics with consistent style, setting and character (all beautifully composed, of course!), generated with decreasing levels of handholding, from any reasonable comic script?
And where’s my outrage? I mean, I do have some grumpy old man in me who, when the day comes, is going to be pissed that “the kids these days have no idea how much graphite I had to drag across paper in order to do this stuff!” But most of me just can’t wait.
(On a side note, has anyone thought to combine something like Midjourney with an EEG machine? Maybe use some eye-tracking software to figure out which part of the picture is becoming more pleasing to you as the AI just starts throwing things up on the screen with increasingly clever, more refined guesses? And where can I find a Silicon Valley type who will throw some money at the idea?)