Nowadays, people can converse through computers and phones, requesting suggestions and issuing commands based on individual interests. This technology works so seamlessly largely because of advances in Artificial Intelligence (AI). Recently, artificial intelligence has become a critical part of social media use, as AI programs quickly grasp the deeper meaning of various human actions and return more intelligent and accurate results. Various social network sites have either acquired or developed AI programs to enhance their appeal and functionality to users.
Moreover, AI is effective at analyzing the huge amounts of data involved in social media sites: it ingests and deciphers large volumes of data to detect patterns and predict trending topics and hashtags (Gaudin para. 6). Facebook, the largest social network site, refers to this capability as ‘deep learning’. With approximately 800 million users logging in to the site daily, a large amount of unstructured data is generated in the process. Deep learning allows Facebook to provide a more personalized user experience, and the company is constantly seeking to develop better AI programs that will make it easier for users to find other users, content, and pages that appeal to them.
Recently, Facebook introduced an AI program that makes it easier for users with visual impairment to experience the photos uploaded by other users. This technology is a radical improvement on what other social media sites currently use. The program affects every photo uploaded to Facebook: each upload automatically receives a description, which visually impaired users can hear through a screen reader (software that reads the content of a computer screen aloud for blind or visually impaired people). The feature, aptly named Automatic Alternative Text (AAT), is powered by artificial intelligence, or what Facebook calls object recognition technology.
When describing photos to blind and partially sighted users, the software always gives the bare minimum, for instance, “two people bench outside.” While this description may appear too simple, it remains the most effective way of communicating when there are no visual cues. The software has enabled millions of blind and visually impaired Facebook users to interact freely with other users in all respects, including through photos. Besides giving this group of users more confidence and control over what they are commenting on or liking, it also improves their communication with other users. Although the feature is currently available only on Facebook, not on the company's other platforms such as WhatsApp, Instagram, and Messenger, and can only be accessed through iOS devices, it remains a giant leap in Facebook's quest to help every user see the nearly two billion photos shared across its services daily.
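The alt-text pipeline described above can be sketched in a few lines. Everything here is illustrative: `detect_objects` is a hypothetical stand-in for a trained object-recognition model, and the confidence threshold is an assumption, not Facebook's actual value.

```python
# Minimal sketch of automatic alt-text generation. A real system would run
# a neural object-recognition model over the photo; detect_objects below is
# a hypothetical stand-in that returns (label, confidence) pairs.

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff: only mention confident labels

def detect_objects(photo):
    # Stand-in for an object-recognition model.
    return [("people", 0.95), ("bench", 0.90), ("outdoor", 0.85), ("dog", 0.40)]

def generate_alt_text(photo):
    """Build a bare-minimum description a screen reader can speak."""
    labels = [name for name, score in detect_objects(photo)
              if score >= CONFIDENCE_THRESHOLD]
    if not labels:
        return "Image may contain: no description available"
    return "Image may contain: " + ", ".join(labels)

print(generate_alt_text("photo.jpg"))
# "Image may contain: people, bench, outdoor" (the low-confidence "dog" is dropped)
```

The threshold explains why descriptions stay so spare: only objects the model is confident about are spoken, which avoids misleading a user who cannot check the photo visually.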
M, Facebook's new virtual assistant, is built on the Facebook Messenger platform, Facebook's instant messaging app. M was launched in 2015, albeit on a limited scale, and it is broadly similar to Microsoft's Cortana and Apple's Siri. However, M is fundamentally different: as its developers describe, M can handle requests such as “can you make me dinner reservations?”, which Apple's Siri and other virtual assistants could never satisfactorily fulfil. M can handle such requests because the artificial intelligence technology it is built on allows the system to respond much as a human would. One of the major differences between this AI and similar ones from other developers is that this one is actually supervised by people.
Although some researchers have called this move retrogressive, viewed from a long-term perspective the partnership could help improve the AI. Since humans only answer the queries that the AI is incapable of answering, in the long run the AI could learn from its supervisors and significantly improve its capabilities. This feature will greatly advance the way Facebook communicates with individual users, as the site will be able to meet the specific requirements of each user, thereby enhancing the user experience.
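The human-in-the-loop pattern described in the last two paragraphs can be sketched as a confidence-based fallback: the assistant answers on its own only when its model is confident, and otherwise escalates to a human trainer whose reply is logged for future training. All names and values here are illustrative assumptions, not Facebook's actual API.

```python
# Sketch of a supervised (human-in-the-loop) assistant. model_answer and
# ask_human are hypothetical stand-ins; the threshold is an assumption.

CONFIDENCE_THRESHOLD = 0.75

def model_answer(query):
    # Stand-in for the assistant's model: returns (answer, confidence).
    known = {"what's the weather?": ("Sunny, 22 °C", 0.9)}
    return known.get(query.lower(), ("", 0.1))

training_log = []  # human answers collected so the model can learn from them

def ask_human(query):
    # A human trainer handles the query the model could not.
    answer = "Booked a table for two at 7 pm."
    training_log.append((query, answer))  # kept as future training data
    return answer

def assistant(query):
    answer, confidence = model_answer(query)
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer          # the AI is confident enough to reply alone
    return ask_human(query)    # otherwise escalate to a person

print(assistant("Can you make me dinner reservations?"))
```

The `training_log` is the point of the design: every escalation produces a worked example the model can later be trained on, which is why the arrangement can improve the AI over time rather than merely patching its gaps.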
Since its launch, Facebook has become a place to see what your friends are up to, post pictures, and share your thoughts with the world. However, not all of this flow of information, especially photos, is pleasing to everyone; some individuals find certain photos annoying or vulgar, and are therefore likely to have a poorer user experience and weaker communication with friends. In response, Facebook developed an artificial intelligence that allows people to filter out images they do not wish to see, or do not wish their children to see. For instance, a user who is fed up with cat updates could block all pictures of cats posted by their friends; the AI will then filter out every picture containing a cat while still allowing the user to see every other post (Rushton para. 4). Before this AI, a user had to unfriend whoever was posting the unwanted photos; now a user can filter out unwanted images while keeping all their friends. This AI is based on the deep learning system initially developed for blind and visually impaired users.
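The filtering described above reduces to tagging each photo with recognized labels and hiding posts whose labels intersect the user's block list. This is a minimal sketch under that assumption; `classify_photo` is a hypothetical stand-in for the deep-learning classifier.

```python
# Sketch of label-based photo filtering built on image recognition.
# classify_photo is an illustrative stand-in, not a real Facebook API.

def classify_photo(photo):
    # Stand-in for the deep-learning classifier: returns a set of labels.
    tags = {"cat1.jpg": {"cat", "sofa"}, "beach.jpg": {"sea", "sand"}}
    return tags.get(photo, set())

def filter_feed(posts, blocked_labels):
    """Keep every post whose photo shares no label with the block list."""
    return [post for post in posts
            if not (classify_photo(post) & blocked_labels)]

feed = ["cat1.jpg", "beach.jpg"]
print(filter_feed(feed, {"cat"}))  # the cat photo is hidden, the beach stays
```

Note the key property the essay highlights: the filter acts on image content, not on the friend who posted it, so unfriending is no longer necessary.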
The AI system has already become so sophisticated that it can not only distinguish between various species of animals but also tell apart different breeds of dogs. According to Facebook, this AI technology can be applied to help people curate their experiences and what they see on the site. The feature could also be useful for individuals going through break-ups, blocking images of users one has fallen out with and photos one is tired of viewing. It will further enhance the user experience and improve communication between AI and humans.
One example of how Facebook uses AI to sift large amounts of data is an experiment in which deep learning was used to analyze how people laugh on the site. The results showed that “haha”, followed by emoji, is the most common type of laughter used. They also showed that women and young people prefer emoji, while men more often type longer “hehes”.
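The text-analysis step behind such an experiment can be sketched simply: classify each laughter token into the study's categories and count them. The regular expressions and sample tokens below are illustrative assumptions, not Facebook's actual method.

```python
# Sketch of laughter classification over post text. The categories mirror
# those in the study ("haha", "hehe", emoji); the regexes are illustrative.
import re
from collections import Counter

def classify_laughter(token):
    if re.fullmatch(r"(?:\U0001F602|\U0001F606|\U0001F923)+", token):
        return "emoji"                      # e.g. the "face with tears of joy" emoji
    if re.fullmatch(r"(?:ha)+h?", token, re.IGNORECASE):
        return "haha"                       # haha, hahaha, HAHAH, ...
    if re.fullmatch(r"(?:he)+h?", token, re.IGNORECASE):
        return "hehe"                       # hehe, heheheh, ...
    return "other"

tokens = ["haha", "hahaha", "hehe", "\U0001F602", "lol"]
counts = Counter(classify_laughter(t) for t in tokens)
print(counts)  # "haha" is the most common category in this sample
```

Run at Facebook's scale, counts like these are what let the study compare laughter styles across gender and age groups.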
The chart above shows the number of letters users type when portraying laughter; more letters appear when emoji are involved than with any other type of laughter.
Facebook recently rolled out an automatic translation system based on AI technology that helps users translate news feeds and posts. Although this technology is not the first on the market, it differs from similar AIs in that it can translate and interpret quirky regional slang; it works like an urban dictionary for status updates. The feature will be particularly helpful to users in different parts of the world, allowing them to communicate with each other more easily. Furthermore, it will allow Facebook to engage better with developing economies, which are critical to sustaining Facebook's future growth as user numbers in developed countries stagnate. Currently, over half of Facebook's users do not speak English; the system already helps over 800 million users see translated news feeds and posts every month.
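The "urban dictionary" behaviour can be illustrated as a slang-normalization pass applied before a general translator. This is only a sketch of the idea: the slang table, its entries, and `translate_general` are all invented stand-ins; a real system learns such mappings rather than hard-coding them.

```python
# Illustrative sketch of slang-aware translation: normalize regional slang
# first, then hand the text to a general translation model (stubbed here).

SLANG = {
    "q tal": "how are you",   # made-up example entries standing in for a
    "xq": "because",          # learned slang/abbreviation dictionary
}

def translate_general(text):
    # Stand-in for a general-purpose machine-translation model;
    # identity here so the sketch stays self-contained.
    return text

def translate_post(text):
    for slang, meaning in SLANG.items():
        text = text.replace(slang, meaning)  # expand slang before translating
    return translate_general(text)

print(translate_post("xq not reply?"))
```

The point of the two-stage split is that a general translator trained on formal text would otherwise pass regional slang through untranslated.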