This might be intriguing, especially given Facebook’s continuing development of augmented reality devices.

The Social Network has today outlined a new machine learning process called ‘Anticipative Video Transformer’ (AVT), which is able to predict future actions based on visual interpretation of video.

Overview of Facebook's visual anticipation process

As you can see in this example, the new process is able to analyze an activity, then anticipate what action is likely to come next as a result.

This could have a range of applications, as explained by Facebook:

“AVT could be especially useful for applications such as an AR “action coach” or an AI assistant, by prompting someone that they may be about to make a mistake in completing a task or by reacting ahead of time with a helpful prompt for the next step in a task. For example, AVT could warn someone that the pan they’re about to pick up is hot, based on the person’s previous interactions with the pan.”

That seems like something out of a science fiction film, allowing for whole new smart home applications. Again, in the context of AR glasses, this might give a number of useful tips to assist individuals in doing a variety of chores at home or at work.

“Three losses are used to train the algorithm to predict future actions and characteristics. To predict labeled future action, we first classify the features in the last frame of a video clip; second, we regress the intermediate frame feature to the features of the succeeding frames, which trains the model to predict what comes next; and third, we train the model to classify intermediate actions. We’ve shown that maximizing these three losses together improves our model’s ability to predict future actions by 10% to 30% compared to models trained just on bidirectional attention.”
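The three-loss setup described in the quote can be sketched in code. The following is a minimal, illustrative NumPy sketch of how those three terms might be combined into a single training objective; all function and variable names here are assumptions for illustration, not Facebook's actual implementation (which uses a transformer backbone and learned parameters).

```python
import numpy as np

def cross_entropy(logits, label):
    # Numerically stable softmax cross-entropy for a single example.
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def avt_training_loss(frame_feats, pred_next_feats, action_logits, action_labels):
    """Combine the three losses described in the quote (illustrative only).

    frame_feats:     (T, D) per-frame features from the video backbone
    pred_next_feats: (T-1, D) the model's predicted features for frames 1..T-1
    action_logits:   (T, C) per-frame action classification logits
    action_labels:   (T,) action label per frame; the last entry is the
                     labeled *future* action
    """
    # 1) Classify the last frame's features as the labeled future action.
    future_loss = cross_entropy(action_logits[-1], action_labels[-1])

    # 2) Regress each intermediate frame feature toward the succeeding
    #    frame's feature, training the model to predict what comes next.
    feature_loss = np.mean((pred_next_feats - frame_feats[1:]) ** 2)

    # 3) Classify the intermediate (current) actions along the way.
    intermediate_loss = np.mean(
        [cross_entropy(action_logits[t], action_labels[t])
         for t in range(len(frame_feats) - 1)]
    )

    # The quote says maximizing (i.e., jointly optimizing) all three
    # together improves future-action prediction by 10% to 30%.
    return future_loss + feature_loss + intermediate_loss
```

In a real system each term would be weighted and backpropagated through a transformer; this sketch only shows how the three signals combine into one scalar objective.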

It’s not something Facebook plans to implement right away, but the potential is huge, and it might someday enable all kinds of new methods of guiding user behavior and eliminating errors by predicting future moves.

Facebook presents the example of changing a vehicle tire, with AR glasses guiding you through the appropriate steps. It could also act as a reminder for your daily routines, based on visual assessments of where you are and what you’re doing.

The possibilities are virtually unlimited, and when you consider how Google Glass became a major tool in industrial workspaces by offering in-view pointers and instructions for technical applications, Facebook’s wearable AR devices could have even more potential.

It’s still a long way from being a consumer-facing feature, but the experiment highlights Facebook’s continuing AI work and speaks to the expanding capabilities that’ll likely be built into a future stage of its AR glasses initiatives.

You can read more about Facebook’s Anticipative Video Transformer (AVT) process here.