Image of a silver concept motorcycle, designed using generative AI

Best Generative AI Tools for Industrial Designers

When I first started using generative AI tools like Midjourney and Dall-E, I felt a bit disheartened and downtrodden. Countless hours spent developing my ability to sketch, render, and understand form, all sent to the cliff's edge like a herd of buffalo. I was left asking, "What role does a human designer play in the world of AI?" Let's look at a few cutting-edge AI tools and see where we bipeds fit in.

AI Tools for Ideation

There are two AI tools I've been using recently throughout the design process, in a variety of ways. Both are extremely versatile and have a broad range of uses, but for the moment we'll focus on them as creative agents at the outset of a design process. Naturally, motorcycles will be our muse for this exercise.

Dall-E

Many of you may have heard of OpenAI, the company responsible for ChatGPT (which, by the way, is a fantastic tool for idea generation). OpenAI has also been working on an image generation tool called Dall-E. The user interface is intuitive, and the AI has a very conversational style: you can describe what you're looking for as if you were speaking with a friend or another designer. Prompts can be fairly long and use colloquial language, and Dall-E will generally output one or two images in response.
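To make that concrete, here's a hypothetical prompt in that conversational register (the wording is my own illustration, not a prompt from the images in this post):

```
Imagine a sleek concept motorcycle with a brushed-silver body and an
exposed frame. I'd like it photographed in a studio with soft lighting,
seen from a slight three-quarter front view. Make it feel futuristic
but road-ready.
```

Notice that it reads like a request to a colleague rather than a list of keywords, which is exactly the style Dall-E responds well to.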

As you can see in the images below, the quality of the images is quite good. There are obvious mechanical issues, but these are great for building an initial inspiration board.

Another key feature within Dall-E that gives the user additional control is the ability to select specific areas of the image you want to change and re-prompt Dall-E for just those areas. This is great when you like the majority of an image but want to rework a few select details.

Midjourney

By now you've probably all heard of Midjourney (MJ). MJ was initially released on Discord and generates four images at a time. You'll find more success with prompts that use clear, descriptive language without additional "fluff." MJ has released a number of render models, and you can manually select which model you want to use. MJ feels more like a tool than some of the other image generators because there are a number of modifiers you can add to your prompt that change the output style. You can really home in on a particular style or aesthetic by experimenting with different parameter values (Stylize and Chaos, for example).
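As a sketch of what those modifiers look like in practice (the values here are illustrative, and exact ranges and defaults vary by model version), a Midjourney prompt on Discord might read:

```
/imagine prompt: silver concept motorcycle, hub-center steering,
studio lighting, three-quarter view --chaos 40 --stylize 600 --ar 3:2
```

Chaos controls how varied the four results are, Stylize controls how strongly MJ's aesthetic is applied, and the aspect-ratio flag shapes the frame. Tweaking one value at a time is the fastest way to learn what each does.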

The two images below share the same prompt but used different MJ render models and settings. With Midjourney, it's possible to get a broad range of concepts, in vastly different styles, in a short time frame.

Best Tool for Refining a Design

Dall-E and Midjourney both allow you to upload an original sketch or image and use it as the starting point of the AI generation. The results tend to vary drastically, and the time and effort required to get the output you have in mind can be longer than is worthwhile. If you have a sketch or an initial render of a vehicle or product, there is another tool that will be very useful.

Vizcom AI

Vizcom is a tool being built by a team focused on meeting the needs of product designers. It's a web-based experience that lets you sketch directly in the interface or upload an image or 3D model. Once you have a sketch or model view you like, you choose from a selection of six render styles, add a prompt, and adjust the drawing influence (how strongly the AI adheres to your original drawing). The results are impressive.

In the exercise below, I uploaded a screenshot of a working model and used the prompt "a black and chrome concept motorcycle" with the Automotive Exterior render model and a drawing influence of 90%.


I use Vizcom for fast color explorations and to work out surfacing details that I may not have fully developed. Vizcom can also take a rendered image and turn it into a 3D mesh. Currently the mesh is fairly rough, but it can be super useful if you're starting from a sketch and want to explore the object from different views.

Final Thoughts

The future of design is going to be powered by artificial intelligence, but thankfully for designers, AI is not able to run the design process from end to end. If it could, the world would be full of strange and nonfunctional items. AI is, however, a great tool for creatives to leverage. I've found that I'm able to work more efficiently and find novel ideas faster while using it. As a creative person, this is both exciting and daunting: exciting because I can do more and stretch my skill set further than I could before, and daunting because it all seems to be moving so quickly. There is a nostalgic part of me that cherishes the world of design circa 1960, a world of draft tables, mechanical pencils, and band saws. Then I remember that a designer's job is to create a better, more interesting, and more usable future. So with that in mind, I'm excited to see how AI can help designers bring their visions into reality.
