On a par with Google: Facebook now has its own artificial intelligence image generator

Artificial intelligence image generators have become a digital trend that several technology companies are pursuing. Now, alongside the already well-known Craiyon (formerly DALL-E mini, inspired by OpenAI's DALL-E) and Google's Imagen, Meta, the parent company of Facebook, has joined this branch of technological art with its own version, which it calls Make-A-Scene.

As indicated in a post on its official blog, the firm hopes to use this new tool on its way toward developing immersive worlds in the metaverse, in addition to contributing to the creation of high-quality digital art.

Just by typing a word or phrase, the user starts a process in which the text passes through a transformer model and then on to a neural network that analyzes it to build a contextual understanding of the relationships between the words. After capturing the essence of what the user describes, the artificial intelligence synthesizes an image using a set of generative adversarial networks (GANs).
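To make that flow concrete, here is a minimal, purely illustrative Python sketch (using PyTorch): a toy transformer encoder builds a contextual representation of the tokenized prompt, and a small GAN-style generator upsamples that representation into an image tensor. The vocabulary, module sizes and architecture are all assumptions made for the example; this is not Meta's actual system.

```python
# Illustrative sketch only: text -> transformer encoder -> contextual
# embedding -> GAN-style generator. All sizes and the vocabulary are assumed.
import torch
import torch.nn as nn

class ToyTextToImage(nn.Module):
    def __init__(self, vocab_size=1000, d_model=128, img_channels=3):
        super().__init__()
        # 1) Turn token ids into vectors the transformer can work with.
        self.embed = nn.Embedding(vocab_size, d_model)
        # 2) Transformer layers build a contextual understanding of how the
        #    words in the prompt relate to each other.
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # 3) GAN-style generator: upsample a conditioning vector into an image.
        self.generator = nn.Sequential(
            nn.ConvTranspose2d(d_model, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(64, img_channels, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, token_ids):
        ctx = self.encoder(self.embed(token_ids))            # (B, T, d_model)
        cond = ctx.mean(dim=1)                                # pool the prompt into one vector
        seed = cond[:, :, None, None].repeat(1, 1, 8, 8)      # start from an 8x8 "canvas"
        return self.generator(seed)                           # (B, 3, 32, 32) toy image

prompt = torch.randint(0, 1000, (1, 6))   # stand-in for a tokenized phrase
image = ToyTextToImage()(prompt)
print(image.shape)  # torch.Size([1, 3, 32, 32])
```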

A rapidly advancing technology

Thanks to the many ongoing efforts to train artificial intelligence models on ever-larger sets of high-definition images paired with carefully chosen textual descriptions, the most advanced generators can now create photorealistic images of practically anything they are asked for. However, the process differs depending on the chosen AI.

We have Google's Imagen, which uses a diffusion model “that learns to convert a pattern of random dots into images, starting with low-resolution figures and gradually increasing the resolution”. Google's Parti AI, on the other hand, “first converts a collection of images into a sequence of code entries, similar to the pieces of a puzzle. A given text is then translated into these code entries and a new image is created”.
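As a rough illustration of the two recipes quoted above, the following toy Python snippet imitates them with numpy: a "denoising" loop that pulls random dots toward a target first at low and then at higher resolution (the diffusion idea), and a made-up codebook plus text-to-code lookup that assembles an image from discrete pieces (the Parti idea). The targets, codebook and mapping are all invented for the example; this is not Google's code.

```python
import numpy as np

def denoise_toward(start, target, steps=50):
    """Toy 'denoising' loop: each step removes a bit of the remaining noise."""
    img = start.copy()
    for t in range(steps):
        img = img + (target - img) / (steps - t)
    return img

# Low-resolution pass: start from a pattern of random dots.
low_target = np.zeros((8, 8))
low_target[2:6, 2:6] = 1.0                      # the "scene" is just a square
coarse = denoise_toward(np.random.rand(8, 8), low_target)

# "Gradually increasing resolution": upsample the coarse result and refine again.
fine = denoise_toward(np.kron(coarse, np.ones((2, 2))),
                      np.kron(low_target, np.ones((2, 2))))

# Parti-style idea: images as sequences of discrete code entries ("puzzle
# pieces"), with a text prompt mapped onto such a sequence (mapping made up).
codebook = {0: np.zeros((4, 4)), 1: np.ones((4, 4))}
text_to_codes = {"white square, dark corner": [1, 1, 1, 0]}
tiles = [codebook[c] for c in text_to_codes["white square, dark corner"]]
assembled = np.block([[tiles[0], tiles[1]], [tiles[2], tiles[3]]])

print(fine.shape, assembled.shape)   # (16, 16) (8, 8)
```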

Meta’s contribution to AI image generators

As Mark Zuckerberg pointed out in the Make-A-Scene entry on Meta's blog, while the aforementioned systems can render almost anything, the user has no real control over aspects of the image in its final form. “To harness the potential of AI to drive creative expression, people should be able to shape and control the content that a system generates,” said the company's CEO.

That is why Make-A-Scene incorporates user-created sketches into its system, producing images of 2048 x 2048 px. With this combination, users can describe what they want in the image and, in addition, control its overall composition.
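The following toy Python snippet is only a sketch of that idea, not Make-A-Scene's real implementation: the text prompt decides what appears (here just a colour chosen by an assumed encode_prompt stand-in), while the user's drawing acts as a layout mask that decides where it appears on a 2048 x 2048 px canvas, the output size the article mentions.

```python
# Illustrative only: text controls the content, the sketch controls the layout.
import numpy as np

SIZE = 2048  # the stated output resolution, 2048 x 2048 px

def encode_prompt(prompt):
    """Assumed stand-in for a text encoder: the prompt picks the content."""
    palette = {"red balloon": (1.0, 0.0, 0.0), "green tree": (0.0, 1.0, 0.0)}
    return np.array(palette.get(prompt, (0.5, 0.5, 0.5)), dtype=np.float32)

# The user's rough sketch, as a binary mask marking where the object should go.
sketch = np.zeros((SIZE, SIZE), dtype=bool)
sketch[200:900, 1200:1900] = True   # e.g. the user drew a blob in the upper right

# "Generation": paint the prompt's content only inside the sketched region,
# so the user controls the overall composition, not just the description.
canvas = np.ones((SIZE, SIZE, 3), dtype=np.float32)   # white background
canvas[sketch] = encode_prompt("red balloon")

print(canvas.shape)  # (2048, 2048, 3)
```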

“Make-A-Scene demonstrates how people can use both text and simple drawings to convey their vision with greater specificity, using a variety of elements, shapes, arrangements, depths, compositions and structures,” notes Mark Zuckerberg.

The tests of Make-A-Scene were encouraging: groups of human evaluators preferred this text-and-sketch system over the text-only version, finding that it better matched the original text description 66% of the time and the original sketch 99.54% of the time. However, for now the company has not said when it will be made available to the public.
