The word robot “entered the English language in 1923” (Long, 2011), after Karel Čapek used it in 1921 in R.U.R. (Rossum’s Universal Robots), a play about artificial people. The word comes from the Czech robota, meaning ‘forced labour’.
“A robot is an intelligent, physically embodied machine. A robot can perform tasks autonomously. And a robot can sense and manipulate its environment.” (Simon, 2018)
From their origins, robots have been used on assembly lines, where they carried out dangerous and repetitive tasks faster than humans could. Humans have used technology to create a new species of sorts: robots built to make our lives easier. That is how the Digital Revolution started, and technology remains a great tool at our service. With robots, however, there is controversy as to whether they will take over all of our jobs.
The first known robot that was mobile and could perceive its surroundings was Shakey, built by SRI International in 1966. It “could perform tasks that required planning, route-finding, and the rearranging of simple objects. The robot greatly influenced modern robotics and AI techniques” (SRI International, n.d.).
Artificial Intelligence, or AI, is one of the tools used in robots and computers to make them smarter, by having them learn through machine learning. The term robot is still complex, since it normally refers to the actual machines that move and walk, or that carry out a task by moving their parts. However, exploring AI is more relevant to this assignment.
Artificial Intelligence works with data: the robot or computer collects it and builds algorithms from it. With that data, machines can carry out a task and react to external inputs. After some trial and error, they learn how to execute the task correctly the next time. Machines thus become more intelligent, although they still cannot understand the reasons behind their actions.
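The trial-and-error loop described above can be sketched in a few lines of Python. This is a minimal illustration with an invented numeric task, not any real AI system: the machine refines a guess using only error feedback, without ever “understanding” what the target means.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def learn_by_trial_and_error(target, trials=1000):
    """Refine a numeric guess using only error feedback.

    Each trial proposes a small random adjustment; improvements are
    kept, failures shrink the exploration step. The 'task' here is
    invented purely for illustration.
    """
    guess = 0.0
    step = 1.0
    for _ in range(trials):
        error = abs(target - guess)
        candidate = guess + random.uniform(-step, step)  # a 'trial'
        if abs(target - candidate) < error:  # keep improvements only
            guess = candidate
        else:
            step *= 0.999  # explore less aggressively after an 'error'
    return guess

print(round(learn_by_trial_and_error(3.7), 2))
```

The loop converges on the target without any model of why one guess is better than another, which mirrors the point above: the behaviour improves, but no understanding is involved.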
"I’m building design tools that integrate intelligent algorithms with the design process; tools that try to make designers better by learning about what they’re doing. What we’re doing. Augmenting rather than replacing designers." (Gold, 2016)
Robots like driverless cars, or those used for repetitive tasks on assembly lines, do not solve a problem creatively. They do not personalise the task, so the end result is always the same, the expected one. With improved Artificial Intelligence, they could generate different results, because they learn from the outcomes. However, that is not enough to replace the job of a designer. “Our job requires a certain level of creativity within particular bounds, and is about communicating with people.” (Peart, 2016)
Nevertheless, there are some repetitive tasks in graphic design that can be automated, and some of them already are. For example, the ‘actions’ panel in Adobe Photoshop allows people to record a few steps, and then repeat the sequence of actions on other images. This is useful to bloggers and YouTubers, for example, who have to create images or thumbnails with the same layout, or to photographers who want to retouch a batch of photographs.
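The idea behind the ‘actions’ panel can be sketched in plain Python: a recorded action is just a fixed sequence of steps replayed over a batch of items. The image operations below are stand-ins invented for illustration (no real image library is used), but the record-once, replay-many pattern is the same.

```python
# Stand-in 'image operations': each takes an image (here a plain dict)
# and returns a modified copy, mimicking Photoshop's non-destructive steps.
def resize(img, size):
    return {**img, "size": size}

def watermark(img, text):
    return {**img, "watermark": text}

def export(img, fmt):
    return {**img, "format": fmt}

# The 'recorded' action: an ordered list of (step, arguments)
action = [
    (resize, {"size": (1280, 720)}),
    (watermark, {"text": "My Blog"}),
    (export, {"fmt": "jpeg"}),
]

def play_action(images, steps):
    """Replay the recorded steps over every image in the batch."""
    results = []
    for img in images:
        for fn, kwargs in steps:
            img = fn(img, **kwargs)
        results.append(img)
    return results

batch = [{"name": "photo1"}, {"name": "photo2"}]
print(play_action(batch, action))
```

Every image in the batch comes out with the same layout applied, which is exactly why this kind of automation suits thumbnails and batch retouching but not creative, one-off work.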
The designer Jon Gold has explored the world of AI by making a computer learn about typographic trends and then create unusual font pairings. At the end of the day, graphic design follows rules, and Gold was able to give his computer enough information that it could differentiate fonts by their contrast or x-height, so as to avoid mixing fonts that are very similar.
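One way such a rule could work can be sketched as follows. This is not Gold’s actual method; the font names and feature values are invented for illustration. Each font is reduced to two measurable features (x-height and stroke contrast), and two fonts are only paired if they differ enough on those features.

```python
import math

# Hypothetical font features (values invented for illustration):
# x-height as a fraction of the em, stroke contrast on a 0-1 scale.
fonts = {
    "SerifA": {"x_height": 0.45, "contrast": 0.80},
    "SerifB": {"x_height": 0.47, "contrast": 0.75},
    "SansC":  {"x_height": 0.52, "contrast": 0.10},
    "SlabD":  {"x_height": 0.50, "contrast": 0.20},
}

def distance(a, b):
    """Euclidean distance between two fonts in feature space."""
    return math.hypot(a["x_height"] - b["x_height"],
                      a["contrast"] - b["contrast"])

def suggest_pairs(fonts, min_dist=0.3):
    """Suggest only pairs of fonts that differ enough to contrast well."""
    names = sorted(fonts)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if distance(fonts[a], fonts[b]) >= min_dist]

print(suggest_pairs(fonts))
```

Under these invented values, the two near-identical serifs are never paired with each other, while serif/sans combinations pass the threshold: the rule “avoid mixing fonts that are very similar” becomes a simple distance check.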
While the idea of Artificial Intelligence can work for certain design tasks, machines would have to learn from all types of designers, not only from the more minimal, balanced aesthetics. By studying David Carson or Neville Brody, a machine could become disoriented regarding hierarchy, even readability. That would be the only real way for machines to take on the job of a graphic designer: by being innovative.
Companies like Adobe and Airbnb are working on AI tools that help designers with less stimulating tasks like retouching photos or digitising web mockups. While the improvements are remarkable, there is still a long way to go before machines can be relied on to do part of a designer’s work unsupervised.
Websites like Tailor Brands and MarkMaker offer autogenerated logos based on a few questions to the user: ‘select the examples that you like from below’, ‘what type of font do you prefer’ and ‘what colours do you like’ are just a few of them. As MarkMaker (n.d.) explains, “A genetic algorithm allows the system to learn your preferences and improve its designs. At the beginning of each session, logos are generated based on a random ‘gene pool’. When logos are liked, their genes are reinforced, and new logos are created by borrowing and recombining their traits.”
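The genetic algorithm MarkMaker describes can be sketched as a toy in Python. This is an assumption-laden illustration, not MarkMaker’s code: the ‘logo genes’ (colour, font, shape) and the simulated user are invented, but the loop follows the quote, starting from a random gene pool, keeping ‘liked’ logos, and breeding new ones by recombining their traits.

```python
import random

random.seed(0)  # fixed seed for reproducibility

# Toy 'logo genes'; the trait pools are invented for illustration.
COLORS = ["red", "blue", "green", "black"]
FONTS = ["serif", "sans", "slab", "script"]
SHAPES = ["circle", "square", "hexagon"]
POOLS = {"color": COLORS, "font": FONTS, "shape": SHAPES}

def random_logo():
    return {trait: random.choice(pool) for trait, pool in POOLS.items()}

def crossover(a, b):
    """A new logo borrows each trait from one of two liked parents."""
    return {trait: random.choice([a[trait], b[trait]]) for trait in a}

def mutate(logo, rate=0.1):
    """Occasionally randomise one trait to keep variety in the pool."""
    if random.random() < rate:
        trait = random.choice(list(logo))
        logo[trait] = random.choice(POOLS[trait])
    return logo

def evolve(liked_fn, generations=20, pool_size=10):
    pool = [random_logo() for _ in range(pool_size)]
    for _ in range(generations):
        liked = [l for l in pool if liked_fn(l)] or pool  # reinforce likes
        pool = [mutate(crossover(random.choice(liked), random.choice(liked)))
                for _ in range(pool_size)]
    return pool

# A simulated user who 'likes' blue, sans-serif logos
prefers = lambda logo: logo["color"] == "blue" and logo["font"] == "sans"
final = evolve(prefers)
print(final[0])
```

After a few generations the pool drifts toward the simulated user’s taste, which shows both the appeal and the limit of the approach: it optimises towards stated preferences, not towards what a client actually needs.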
Another example of graphic design AI is The Grid, which generates websites based on the content the user provides, without really considering any problems or challenges, or rethinking the content itself.
All these examples show that, while the auto-generation of graphic design elements is possible and accessible to anyone for a very low price, machines are not creative, do not recognise complex aesthetics and cannot understand what clients really need, as opposed to what they like.
In the near future, machines will help designers with repetitive tasks, like generating a newspaper spread, an appropriate colour palette, or even a moodboard, given enough information. Although it would be a great advance to have intelligent machines create full designs, the reality is that understanding the problems designers solve is a challenge. As Gold (2016) explains, we are still at the same development level as in 1984, when the Macintosh was released. “Decades later it feels like we’re breaking through to another era. The most exciting and intellectually stimulating years in the history of our industry; the cusp of real designer-computer symbiosis.” And this combination is needed to ensure quality and innovation.