Build SaaS with AI - Part 6
Part 6: Image Generation & UX Layout Updates
This is part 6 of my series Build SaaS with AI — where I go from Idea to MVP with AI-driven coding!
In this post, we’ll work on:
testing Cursor AI’s Composer feature
adding image generation
UX/layout improvements
Here’s the YouTube video for this post:
Cursor Composer
Last post, I introduced the Composer feature from Cursor AI.
Composer allows us to create and edit multiple files simultaneously, automating what used to be a manual and error-prone process:
create new file
apply code edits
save code edits
repeat for each file…
Composer automates all these steps!
First, enable it in Cursor Settings as a beta feature.
Open up the Composer interface with CMD + i on Mac.
And here’s Andrej Karpathy, Cofounder of OpenAI, praising Cursor 🙂
For example, when adding a new feature that requires multiple new files, Composer takes care of creating all files automatically. This speeds up development and helps maintain consistency across our codebase.
And via Composer, we can:
save all proposed changes
test them thoroughly
only apply the changes once we’re confident they work
This works much better than applying changes and then trying to UNDO them… which is an absolute nightmare in Cursor.
Image Generation
Now, I’ll use Cursor Composer to implement image generation in our app.
Open it with CMD + i.
For now, I’ll use OpenAI's DALL-E for generating images.
Although it’s not my favorite image generation option, we can leverage our existing OpenAI integration, making it an easy choice to start with.
Here’s the functionality I want to add (a rough code sketch follows this list):
Add a checkbox that indicates whether a user wants to generate an AI image for the selected social platform.
For each selected platform, generate an AI image for that platform. This means that if a user selects multiple platforms, they receive separate AI images for each.
Use the first 300 characters of the AI-generated text as the image prompt. This limit keeps the prompt within DALL-E’s input requirements while keeping the image relevant to the text.
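To make those requirements concrete, here’s a minimal sketch of what the image call might look like in a Next.js App Router route handler, assuming the official openai Node SDK. The route path, request shape, and naming are my own illustration, not the code Composer actually generated:

```ts
// app/api/generate-image/route.ts (hypothetical path, for illustration only)
import { NextResponse } from "next/server";
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

export async function POST(req: Request) {
  const { generatedText } = await req.json();

  // Use the first 300 characters of the AI-generated text as the image prompt
  const imagePrompt = String(generatedText).slice(0, 300);
  console.log("Generating DALL-E image for prompt:", imagePrompt);

  const result = await openai.images.generate({
    model: "dall-e-3",
    prompt: imagePrompt,
    n: 1,
    size: "1024x1024",
  });

  return NextResponse.json({ imageUrl: result.data?.[0]?.url });
}
```

On the client side, the checkbox simply gates whether this endpoint gets called after the text generation finishes.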
As I mentioned in part 1, the multimedia aspect of this app is very exciting. I’m starting with text and images… but imagine video generation in a future version!
Here’s the prompt, and we’re going to chat with the entire codebase:
- Use Next.js, Typescript, Tailwind CSS, and App Router.
- Use Next.js App Router when creating new screens.
- Do not use src directory.
- ALWAYS show the proposed directory structure including all new files that must be created.
- ALWAYS show directory structure first.
- For each step in your plan, specify the filename and path to the file.
- For next.js components that require client side state, use 'use client'.
- Tailwind config is specified in tailwind.config.ts
- Use a modern color palette, ensuring that any user-facing text is readable and visible. Force the text colors everywhere to ensure only light mode is supported.
- ALWAYS add robust logging to help troubleshoot issues.
Complete the following tasks:
- Add a checkbox to the right of the prompt selector dropdown. If the user checks the checkbox, then the app will generate an AI image. Only show this checkbox after the user selects the platform.
- Use OpenAI DALL-E to generate an image for each selected platform. For example, if a user selects Facebook and clicks "Generate Content", then the final output should be (1) AI-generated text content and (2) AI-generated image.
- If a user selects multiple platforms, then generate a separate AI image with dimensions optimized for each platform.
- For the image generation prompt, use the first 300 characters of the AI-generated text.
Act as an expert in Next.js and OpenAI, and ask me clarifying questions until you are 95% confident you can complete the task successfully.
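One detail worth flagging: DALL-E 3 only supports three output sizes (square, wide, and tall), so "dimensions optimized for each platform" really means picking the closest of those three per platform. A rough sketch of how that per-platform loop might look (the platform names and size choices below are my own assumptions, not Composer's output):

```ts
import OpenAI from "openai";

const openai = new OpenAI();

// Hypothetical mapping: DALL-E 3 only offers these three sizes.
type Platform = "facebook" | "twitter" | "linkedin";
type DalleSize = "1024x1024" | "1792x1024" | "1024x1792";

const PLATFORM_IMAGE_SIZE: Record<Platform, DalleSize> = {
  facebook: "1792x1024", // wide post image
  twitter: "1792x1024",  // wide card image
  linkedin: "1024x1024", // square feed image
};

// One separate image per selected platform, as the prompt asks for.
async function generateImagesForPlatforms(platforms: Platform[], imagePrompt: string) {
  return Promise.all(
    platforms.map(async (platform) => {
      const result = await openai.images.generate({
        model: "dall-e-3",
        prompt: imagePrompt,
        n: 1,
        size: PLATFORM_IMAGE_SIZE[platform],
      });
      return { platform, url: result.data?.[0]?.url };
    })
  );
}
```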
And, here’s what the new functionality looks like — I selected Facebook, Twitter, and LinkedIn, and the app generated text content and an AI image for each platform:
Below, probably my favorite AI image so far 😅
I feed in my newsletter playbook about growing on TikTok and get this:
Obviously, rendering correct text within AI images is a well-known problem that needs a lot of improvement. I’ve heard Flux is much better at it, so I’ll most likely switch from DALL-E to Flux.
Update Layout
Finally, I want to update the layout.
As you saw in the screenshot above, everything’s stacked vertically.
But, if I scroll down to see the generated content, then I lose sight of the original content!
So, I want to switch to a 2-column layout:
The left column displays the content I’m repurposing
The right column displays a list of generated content, including both AI-generated text and images.
Basically, I want to keep all related content visible at once, making it easier to manage and edit.
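Layout-wise, this is a standard Tailwind pattern: a two-column grid where the left column is sticky and the right column scrolls. A rough sketch of the page shell, assuming the component names referenced later in the prompt (the file paths and class names are illustrative, not the generated code):

```tsx
// app/create/page.tsx (illustrative sketch only)
import ContentGenerator from "@/components/ContentGenerator";
import GeneratedContent from "@/components/GeneratedContent";

export default function CreatePage() {
  return (
    <main className="grid grid-cols-2 gap-8 p-8 text-gray-900">
      {/* Left column: stays in place while the right column scrolls */}
      <div className="sticky top-8 self-start">
        <ContentGenerator />
      </div>

      {/* Right column: list of generated text + image pairs */}
      <div className="space-y-6">
        <GeneratedContent />
      </div>
    </main>
  );
}
```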
First, I take a screenshot of the current layout.
Then, I crop sections and rearrange them how I want:
It’s my amateur UX hack 😅
Then I upload the updated layout image to Cursor Composer along with this prompt:
- Use Next.js, Typescript, Tailwind CSS, and App Router.
- ALWAYS show the proposed directory structure including all new files that must be created.
- ALWAYS show directory structure first.
- For each step in your plan, specify the filename and path to the file.
- For next.js components that require client side state, use 'use client'.
- Tailwind config is specified in tailwind.config.ts
- Use a modern color palette, ensuring that any user-facing text is readable and visible. Force the text colors everywhere to ensure only light mode is supported.
I want you to update the "Create" screen as shown in the attached image. Here is a summary of the changes:
- Change it to a 2-column layout. The left column is the ContentGenerator component. The right column displays a list of generated content, showing the AI generated text and image for each selected platform.
- Left column should be fixed in position while the user scrolls the right column.
- Within the GeneratedContent component, the generated text and generated image should be next to each other, in the same row. Display the AI generated image smaller so it fits in the same row as the generated text, while preserving the generated image's original aspect ratio.
Act as an expert fullstack developer, and ask me clarifying questions until you are 95% confident you can complete the task successfully.
@ContentGenerator.tsx @GeneratedContent.tsx @create
Ah, now this looks much better! 😁
I can see the original content on the left, while editing the generated content on the right.
And, I can one-click to regenerate the AI image until I get something I like.
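Regeneration itself is nothing fancy; it can be as simple as a client component that re-calls the same image endpoint and swaps in the new URL. A sketch along those lines (the component name, props, and /api/generate-image route are the hypothetical ones from the earlier sketch, not the actual generated code):

```tsx
"use client";

import { useState } from "react";

// Hypothetical regenerate button, reusing the /api/generate-image route sketched above.
export function RegenerateImageButton({ prompt, initialUrl }: { prompt: string; initialUrl: string }) {
  const [imageUrl, setImageUrl] = useState(initialUrl);
  const [loading, setLoading] = useState(false);

  async function regenerate() {
    setLoading(true);
    try {
      const res = await fetch("/api/generate-image", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ generatedText: prompt }),
      });
      const data = await res.json();
      if (data.imageUrl) setImageUrl(data.imageUrl);
    } finally {
      setLoading(false);
    }
  }

  return (
    <div>
      <img src={imageUrl} alt="AI-generated image" className="max-w-xs rounded" />
      <button onClick={regenerate} disabled={loading} className="mt-2 rounded bg-blue-600 px-3 py-1 text-white">
        {loading ? "Generating…" : "Regenerate image"}
      </button>
    </div>
  );
}
```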
Up Next…
We’re close to wrapping up this MVP and series!
The next part is the last:
Publish content via Zapier
Update styles
Vercel deployment
Did I miss anything?
Have ideas or suggestions?
Message me on LinkedIn 👋
Sabrina Ramonov
P.S. If you’re enjoying my free newsletter, it’d mean the world to me if you share it with others. My newsletter just launched, so every single referral helps. Thank you!
Share by copying and pasting the link: https://www.sabrina.dev