In the week following October 10, a story of law enforcement cracking down on a drunk driver spread quickly across news sites and Facebook feeds. The arrest didn’t feature a celebrity or a police chase; instead, the click-worthy detail was that the Lakeland, FL driver broadcast herself driving home drunk on Twitter’s live streaming platform, Periscope. Here at Quoted, the story got us thinking: if the police use social media to assess when a crime is happening, could insurance companies use social media to assess whether drivers are a high-risk liability? And if so, how might that affect consumers?
How Not to Use Periscope
It was a Saturday evening in humid Lakeland, Florida, when the local police department received 911 calls from multiple concerned citizens about a possible drunk driver. Callers had seen the driver, identified as 23-year-old Whitney Beall, in a live stream on the app Periscope. In the shaky video, Beall clearly states she is driving while intoxicated. For 11 minutes, the haphazard video captures Beall as she struggles with directions, laments a flat tire, and hits a curb. Luckily, the fiasco was halted before any serious harm was done. In response to the 911 calls, an LPD officer downloaded the app and used landmarks in Beall’s video to locate her before a collision could occur.
Beall’s case is far from the first example of law enforcement using social media to fight crime. According to the IACP Center for Social Media, 86.1% of agencies use social media to gather evidence in criminal investigations. This trend will only accelerate as social media expands and takes new forms.
An Invasion of Privacy?
When a video is broadcast to the masses, little privacy can be expected. But what about other forms of social media near and dear to our hearts? Facebook, the behemoth of social sharing platforms, is quick to emphasize its customizable privacy levels. Simply toggle the privacy settings to “Friends Only” to keep out the prying eyes of strangers, right? Well, not exactly.
One illustrative account of how malleable expectations of privacy can be is the 2012 case of Melvin Colon. Colon, a suspected New York gang member, was charged with violent and narcotics-related crimes. Among the evidence were incriminating Facebook posts that referenced past crimes and contained threats against others. These posts were hidden from the public, but one of Colon’s Facebook friends allowed police to access Colon’s “private” profile. “Colon’s legitimate expectation of privacy ended when he disseminated posts to his ‘friends’ because those ‘friends’ were free to use the information however they wanted — including sharing it with the government,” wrote the federal judge who ruled on Colon’s case. In essence, if personal information is shared with any group of people, claims of privacy are null.
In the cases above, police officers leveraged social media to glean evidence of crime. But criminal activity is only one piece of information that can be picked up from monitoring social media accounts. If insurance companies were to turn to social media, they would encounter troves of data that appear inconsequential on the surface but could be analyzed to calculate elaborate risk profiles. Even details like how frequently a person travels, what kinds of activities they take part in, or how big their family is could be used by insurance companies to determine what policy rates and services are offered to specific individuals.
Don’t Feed the Bots
Digital information shared on social media channels is already being collected, sorted, and targeted to consumers in troubling ways. In May, The Nation reported on how companies can turn Facebook activity into an alternative “credit score” and the danger inherent in such a practice: “Companies can smuggle proxies for race, sex, indebtedness, and so on into big-data sets and then draw correlations and conclusions that have discriminatory effects. […] It’s discrimination committed not by an individual ad buyer, banker, or insurance broker, but by a bot.”
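To make the “discrimination committed by a bot” point concrete, here is a deliberately simplified, entirely hypothetical sketch. The scoring function never sees a protected attribute, yet it reproduces biased outcomes through a correlated proxy (a made-up ZIP code with a penalty learned from skewed historical data). Every name, number, and ZIP code below is invented for illustration; no real scoring system is implied.

```python
# Hypothetical sketch of proxy discrimination. All data is invented:
# the "bot" never receives a protected attribute, but a ZIP-code
# penalty derived from biased history does the discriminating for it.

applicants = [
    {"zip": "11111", "base_score": 0.8},  # ZIP correlated with group A
    {"zip": "22222", "base_score": 0.8},  # ZIP correlated with group B
]

# Imagined penalties a model might "learn" from skewed historical data.
zip_penalty = {"11111": 0.0, "22222": 0.3}

def bot_score(applicant):
    """Score an applicant without ever seeing a protected attribute."""
    return applicant["base_score"] - zip_penalty[applicant["zip"]]

# Identical inputs, different outcomes, purely because of the proxy.
scores = [bot_score(a) for a in applicants]
```

The two applicants are indistinguishable on their merits; the gap in their scores comes entirely from the proxy variable, which is exactly the “digital redlining” the White House report warns about.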
A 2014 White House report on big data echoes the same concern: “Just as neighborhoods can serve as a proxy for racial or ethnic identity, there are new worries that big data technologies could be used to ‘digitally redline’ unwanted groups, either as customers, employees, tenants, or recipients of credit.”
The fact of the matter is that social media activity creates a teeming pool of digital information that can be collected and weighed according to a variety of internally defined formulas. And in an industry where pegging high- and low-risk users is so important, it wouldn’t be surprising if car insurance companies began including social media activity in their risk-defining algorithms.
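In the simplest case, such an internally defined formula might be nothing more than a weighted sum of social-media-derived signals. The sketch below is purely hypothetical: every feature name and weight is invented for illustration, and no actual insurer’s methodology is implied. It only shows how innocuous-looking data points could be folded into a single risk number.

```python
# Hypothetical illustration only: invented features and weights showing
# how social-media signals *could* be combined into one risk score.

def risk_score(features):
    """Weighted sum of feature values, each normalized to 0.0-1.0."""
    weights = {
        "late_night_checkins": 0.4,   # e.g., frequent 2 a.m. bar check-ins
        "risky_activity_posts": 0.3,  # e.g., posts about extreme sports
        "travel_frequency": 0.2,      # e.g., long commutes and road trips
        "household_size": 0.1,        # proxy for number of likely drivers
    }
    # Missing features default to 0.0 (no signal observed).
    return sum(w * features.get(name, 0.0) for name, w in weights.items())

driver_a = {"late_night_checkins": 0.9, "risky_activity_posts": 0.7}
driver_b = {"travel_frequency": 0.2, "household_size": 0.5}
```

Under these made-up weights, driver A scores far higher than driver B, and a higher score would translate into a higher premium, even though neither signal says anything directly about driving ability.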
What do you think? Should consumers worry about insurance companies looking at drivers’ social media? Tell us in the comments.