HP just introduced a new technology to fight back against the feeling that somebody’s watching you.
HP’s EliteBook 1040 and EliteBook 840 laptops now have an option to add a new one-touch solution called SureView that combats what the company calls “visual hacking.” SureView was developed with 3M privacy technology, and HP first showed off the technology during CES in January.
To call this activity “hacking,” however, is a bit of a stretch. What we’re really talking about is someone literally peeking over your shoulder to read the information on your screen.
Regardless, SureView sounds like a pretty cool feature. All you do is tap F2 on the laptop’s keyboard and SureView “reduces up to 95 percent of visible light when viewed at an angle.” With SureView enabled, HP says those pesky eavesdroppers will have a much harder time reading your TPS reports.
HP is positioning SureView as an ideal solution for young (I refuse to use the “m” word) corporate drones who might unwittingly display sensitive company data in public places like Starbucks or the airport. In reality, SureView could help people of any age keep email, usernames, account numbers, and other sensitive data private while out in public.
The impact on you at home: How real is the threat of “visual hacking”? 3M sponsored a study that showed visual hacking is easy enough, which anyone who’s ever walked by a laptop in public already knows. It’s not clear how often visual hacking has resulted in real damage, but it really doesn’t matter. Many people have experienced a feeling of exposure when viewing private information on a laptop in a public place. We haven’t tested HP’s technology yet, but from the sounds of it SureView could go a long way toward easing worries about real or imagined spying so you can work more confidently while sipping on that delicious mocha double no-foam latte.
Opera is expanding the reach of its free, mobile VPN app. The browser maker recently announced that Opera VPN is now available for Android in Google Play. The new app is similar to the iOS version Opera released in May.
Opera provides five virtual server locations to choose from including the United States, Canada, Germany, Singapore, and the Netherlands. These server locations can either help you stay secure while you’re using a public Wi-Fi hotspot or evade regional restrictions—just don’t count on fooling Netflix.
For this latest app, Opera added a feature the iOS version lacks: a “Wi-Fi security test tool.”
This feature tests the Wi-Fi network you’re connected to in order to see how secure it is. Testing my home network, Opera VPN gave me a B+. I lost points for having an exposed IP address, being at risk of Wi-Fi sniffing, and being vulnerable to eavesdropping by my Internet service provider. No doubt that last risk is always there unless you activate Opera’s VPN, which the Protect WiFi button helpfully turns on for you.
The Android version also has a feature called Guardian that blocks ad trackers for you. Guardian is not on by default. The iOS version also blocks ad trackers for added privacy, but the feature is on by default and doesn’t have a fancy name like Guardian.
Overall the app is very simple to use. It only has three basic features: the VPN, the Wi-Fi test, and Guardian. When you first install the app it asks permission to use Android’s built-in VPN features, which then allows you to use Opera’s free VPN with a single tap.
One caveat: the free service isn’t without data collection. Opera previously told us it collects some usage information in order to “use anonymous market insights derived from customer usage to help support the service. We make this information available to third parties who are interested in better understanding the mobile ecosystem and how it’s evolving.”
Twitter has finally come up with a solution to muzzle trolls.
The company published a blog post on Thursday announcing two new controls for filtering your notifications. Twitter notifications are the primary method through which trolls can contact and harass users.
The first new setting reduces the noise in your notifications stream. By default, anyone who mentions your Twitter username with the “@” symbol shows up in your Twitter notifications. It doesn’t matter if they’re asking a simple question, offering constructive criticism, or threatening to cut your head off. Everyone shows up.
The new setting filters your notifications down to only the people you follow. The new filter works on Twitter’s apps and the website. It’s not clear if third-party Twitter apps can also apply it.
Why this matters: Many—perhaps most—Twitter users don’t really have a need for this kind of filtering. But for people such as celebrities, politicians, or outspoken feminists, Twitter notifications can be a very dark place. For these people personal threats and other objectionable comments from random Twitter users are commonplace. The new notifications filters will make Twitter a more hospitable place for anyone who wants to speak their mind without having to sort through a deluge of hate.
The unfortunate side effect of this, however, is that people who are being targeted for online harassment are effectively putting themselves in a bubble. In other words, the long-held idea of using Twitter as an “online water cooler” to chat and share ideas with strangers will be over—if it ever truly existed in the first place.
It’s all about quality
The second new setting is called a quality filter. This setting, which was turned on by default for my account, removes what Twitter calls “lower-quality content.” This low-brow stuff can be things like duplicate tweets or bot-generated content. The quality filter affects your notifications and “other parts of your Twitter experience.” Presumably, that means your primary timeline. The low-quality filter never restricts people you follow or those whom you’ve recently interacted with—don’t feed the trolls, folks.
How to turn on the new settings
Getting to the new settings is easy on Twitter’s website. First, log in to the service and click the Notifications tab. To the right of your mentions, click the new Settings link.
This settings area has two check boxes: one to limit notifications to people you follow and one to apply the quality filter. Check or uncheck whichever boxes you’d like, select Save changes, and you’re done. Accessing these settings in Twitter’s mobile apps is similar: tap Notifications and then the settings cog in that area, which takes you directly to the two new filters.
If you don’t see the new settings they may not yet be available for your account. Try updating your mobile apps or logging in to the website. If that doesn’t work sit tight; the new features should show up for you in the coming days.
If you filter mentions down to people you follow, it’s also advisable to make sure your account restricts who can send you direct messages. You can double-check this setting on Twitter.com by going to Settings > Security and privacy.
Now let the haters keep on hatin’, because you’ll never know one way or the other.
The Web Proxy Auto-Discovery Protocol (WPAD), enabled by default on Windows and supported by other operating systems, can expose computer users’ online accounts, web searches, and other private data, security researchers warn.
Man-in-the-middle attackers can abuse the WPAD protocol to hijack people’s online accounts and steal their sensitive information even when they access websites over encrypted HTTPS or VPN connections, said Alex Chapman and Paul Stone, researchers with U.K.-based Context Information Security, during the DEF CON security conference this week.
WPAD lets computers automatically discover the location of a proxy auto-config (PAC) file—a script that tells the browser which proxy, if any, to use for each URL. The location of PAC files can be discovered through WPAD in several ways: through a special Dynamic Host Configuration Protocol (DHCP) option, through local Domain Name System (DNS) lookups, or through Link-Local Multicast Name Resolution (LLMNR).
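The DNS leg of that discovery can be illustrated with a short sketch. This is a simplified, hypothetical model—the exact devolution behavior varies by operating system and configuration—but it shows why a machine whose DNS search suffix is corp.example.com will end up asking the network who wpad.example.com is:

```python
def wpad_dns_candidates(search_suffix):
    """Generate candidate WPAD hostnames for a DNS search suffix.

    Clients typically try 'wpad' prepended to the full suffix, then
    strip leading labels one at a time ("devolution"), stopping before
    they would query the bare top-level domain. Simplified sketch.
    """
    labels = search_suffix.strip(".").split(".")
    candidates = []
    while len(labels) >= 2:  # stop before reaching the TLD alone
        candidates.append("wpad." + ".".join(labels))
        labels = labels[1:]
    return candidates

print(wpad_dns_candidates("corp.example.com"))
# ['wpad.corp.example.com', 'wpad.example.com']
```

Any attacker who controls a name on that candidate list—on the local network or, as discussed below, on the public Internet—gets to serve the PAC file.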
Attackers can abuse these options to supply computers on a local network with a PAC file that specifies a rogue web proxy under their control. This can be done on an open wireless network or if the attackers compromise a router or access point.
Compromising the computer’s original network is optional because computers will still try to use WPAD for proxy discovery when they’re taken outside and are connected to other networks, like public wireless hotspots. And even though WPAD is mostly used in corporate environments, it is enabled by default on all Windows computers, even those running home editions.
A rogue web proxy would allow attackers to intercept and modify non-encrypted HTTP traffic, which wouldn’t normally be a big deal because most major websites today use HTTPS (HTTP Secure).
However, because PAC files allow defining different proxies for particular URLs and can also force DNS lookup for those URLs, Chapman and Stone created a script that leaks all HTTPS URLs via DNS lookups to a rogue server they control.
The full HTTPS URLs are supposed to be hidden because they can contain authentication tokens and other sensitive data as parameters. For example, the URL https://example.com/login?authtoken=ABC1234 could be leaked through a DNS request for https.example.com.login.authtoken.ABC1234.leak and reconstructed on the attacker’s server.
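The transformation is straightforward to sketch. In the real attack the encoding runs as JavaScript inside the malicious PAC file and the lookup is triggered with the standard PAC function dnsResolve(); the Python version below (function name and details are illustrative) just shows how a full URL maps onto DNS labels:

```python
import re

def leak_hostname(url, attacker_domain="leak"):
    """Encode a full URL as a DNS hostname (illustrative sketch).

    Characters that can't appear in hostnames (://, /, ?, =) are
    replaced with dots, so the encoding is lossy but still reveals
    paths, query parameters, and tokens to whoever runs the DNS
    server for attacker_domain.
    """
    name = re.sub(r"[^A-Za-z0-9.-]+", ".", url)   # forbidden chars -> dots
    name = re.sub(r"\.+", ".", name).strip(".")   # collapse repeated dots
    return name + "." + attacker_domain

print(leak_hostname("https://example.com/login?authtoken=ABC1234"))
# https.example.com.login.authtoken.ABC1234.leak
```

Each HTTPS page the victim visits thus produces a DNS query the attacker’s name server can log and reassemble.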
The researchers showed that by using this PAC-based HTTPS URL leak method, attackers can steal Google search terms or see what articles the user has viewed on Wikipedia. That’s bad enough from a privacy perspective, but the risks introduced by WPAD and rogue PAC files don’t end there.
The researchers also devised another attack where they use the rogue proxy to redirect the user to a fake captive portal page, like those used by many wireless networks to collect information about users before allowing them on the Internet.
Their fake captive portal forces browsers to load common websites like Facebook or Google in the background and then performs a 302 HTTP redirect to URLs that can only be accessed after the user authenticates. If the user is already authenticated — and most people have authenticated sessions in their browsers — the attackers will be able to gather information from their accounts.
This attack can expose the victims’ account names on various websites, including private photos from their accounts that can be accessed via direct links. For example, people’s private photos on Facebook are actually hosted on the site’s content delivery network and can be accessed directly by other users if they know the full URL to their location on the CDN.
Furthermore, attackers can steal authentication tokens for the popular OAuth protocol, which allows users to log into third-party websites with their Facebook, Google, or Twitter accounts. By using the rogue proxy, 302 redirects, and the browser’s page pre-rendering functionality, they can hijack social media accounts and in some cases gain full access to them.
In a demo, the researchers showed how they could steal photos, location history, email summaries, reminders, and contact details for a Google account, as well as all documents hosted by that user in Google Drive.
It’s worth stressing that these attacks do not break the HTTPS encryption in any way, but rather work around it and take advantage of how the web and browsers work. They show that if WPAD is turned on, HTTPS is much less effective at protecting sensitive information than previously believed.
But what about people who use virtual private networks (VPNs) to encrypt their entire Internet traffic when they connect to a public or untrusted network? Apparently, WPAD breaks those connections, too.
The two researchers showed that some widely used VPN clients, like OpenVPN, do not clear the Internet proxy settings set via WPAD. This means that if attackers have already managed to poison a computer’s proxy settings through a malicious PAC file before that computer connects to a VPN, its traffic will still be routed through the malicious proxy even after the VPN connection is established. This enables all of the attacks mentioned above.
Most operating systems and browsers had vulnerable WPAD implementations when the researchers discovered these issues earlier this year, but only Windows had WPAD enabled by default.
Since then, patches have been released for OS X, iOS, Apple TV, Android, and Google Chrome. Microsoft and Mozilla were still working on patches as of Sunday.
The researchers recommended computer users disable the protocol. “No seriously, turn off WPAD!” one of their presentation slides said. “If you still need to use PAC files, turn off WPAD and configure an explicit URL for your PAC script; and serve it over HTTPS or from a local file.”
Chapman and Stone were not the only researchers to highlight security risks with WPAD. A few days before their presentation, two other researchers named Itzik Kotler and Amit Klein independently showed the same HTTPS URL leak via malicious PACs in a presentation at the Black Hat security conference. A third researcher, Maxim Goncharov, held a separate Black Hat talk about WPAD security risks, entitled BadWPAD.
In May, researchers from Verisign and the University of Michigan showed that tens of millions of WPAD requests leak out onto the Internet every single day when laptops are taken outside of enterprise networks. Those computers are looking for internal WPAD domains that end in extensions like .global, .ads, .group, .network, .dev, .office, .prod, .hsbc, .win, .world, .wan, .sap, and .site.
The problem is that some of these domain extensions have become public generic TLDs and can be registered on the Internet. This can potentially allow attackers to hijack WPAD requests and push rogue PAC files to computers even when they’re not on the same network.
Many of the large payment card breaches that hit retail and hospitality businesses in recent years were the result of attackers infecting point-of-sale systems with memory-scraping malware. But there are easier ways to steal this sort of data, due to a lack of authentication and encryption between card readers and the POS payment applications.
POS systems are specialized computers. They typically run Windows and have peripherals like keyboards, touch screens, barcode scanners and card readers with PIN pads. They also have specialized payment applications installed to handle transactions.
One of the common methods used by attackers to steal payment card data from POS systems is to infect them with malware, via stolen remote support credentials or other techniques. These malware programs are known as memory or RAM scrapers because they scan the system’s memory for credit card data when it’s processed by the payment application on the POS system.
Target: gas pumps
But on Tuesday at the BSides conference in Las Vegas, security researchers Nir Valtman and Patrick Watson, from U.S.-based POS and ATM manufacturer NCR, demonstrated a stealthier and more effective attack technique that works against most “payment points of interaction,” including card readers with PIN pads and even gas pump payment terminals.
The main issue shared by all of these devices is that they don’t use authentication and encryption when sending data back to the POS payment software. This exposes them to man-in-the-middle attacks through external devices that tap the network or serial connection, or through “shim software” running on the POS system itself.
For their demo, the researchers used a Raspberry Pi device with traffic capture software that taps the data cable between a PIN pad and a laptop running a payment app simulator. The PIN pad had a custom top cover to hide its make and model; the researchers didn’t want to single out a particular vendor since many of them are affected.
While the demo used an external device that could be installed by an insider or a person posing as a technician, attackers can also simply modify a DLL (dynamic-link library) file of the payment app to do the data interception inside the OS itself, if they get remote access to it. A modified DLL that’s loaded by the legitimate payment software would be much harder to detect than memory-scraping malware.
The NCR researchers showed that attackers can use this technique not only to steal the data encoded on a card’s magnetic stripe, which can be used to clone it, but also to trick cardholders into exposing their PINs and even the security codes printed on the back of their cards.
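To see why an unauthenticated, unencrypted reader-to-POS link is so valuable to an attacker, consider what Track 1 of a magnetic stripe actually contains. The sketch below parses a simplified version of the format using fabricated sample data—the account number (PAN), cardholder name, and expiry date all travel in the clear:

```python
import re

# Track 1 layout (simplified): %B<PAN>^<NAME>^<YYMM><service code><discretionary>?
TRACK1 = re.compile(r"^%B(\d{1,19})\^([^^]{2,26})\^(\d{2})(\d{2})")

def parse_track1(raw):
    """Parse the start of a Track 1 magnetic-stripe record.

    Illustrates what a tap on the reader-to-POS cable would expose.
    The sample record below is fabricated.
    """
    m = TRACK1.match(raw)
    if not m:
        return None
    pan, name, yy, mm = m.groups()
    return {"pan": pan, "name": name.strip(), "expires": mm + "/" + yy}

sample = "%B4000001234567899^DOE/JANE^29011010000000000?"
print(parse_track1(sample))
```

Anything sitting on that cable—a Raspberry Pi tap or a shimmed DLL—can run the equivalent of this parser on every swipe.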
Normally, PIN pads do encrypt PINs when transmitting them to the POS software. This is an industry requirement, and manufacturers comply with it.
“Please re-enter PIN”—so attackers can steal it
However, man-in-the-middle attackers can also inject rogue prompts on the PIN pad screen by uploading so-called custom forms. These screen prompts can say whatever the attackers want, for example “Re-enter PIN” or “Enter card security code.”
Security professionals might know that they’re never supposed to re-enter their PINs or that card security codes, also known as CVV2s, are only needed for online, card-not-present transactions, but regular consumers typically don’t know these things, the researchers said.
In fact, they demonstrated this attack method to professionals from the payments industry in the past and 90 percent of them were not suspicious of the PIN re-entry screen, they said.
Some PIN pads have whitelists that restrict which words can appear on custom screens, but many of these whitelists allow the words “please re-enter.” Even if they don’t, there’s a way to bypass the filter: PIN pad custom forms allow images, so attackers could simply inject an image of those words using the same text color and font that normally appears on the screen.
It’s also worth noting that this attack works against card readers and PIN pads that conform to the EMV standard, meaning they support chip-enabled cards. The EMV technology does not prevent attackers from using stolen track data from a chip-enabled card to create a clone and use it in a country that doesn’t support EMV yet or on terminals that are not EMV-enabled and only allow card swiping.
Also, EMV has no bearing on e-commerce transactions, so if the attackers gain the card’s track data and the card’s CVV2 code, they have all the information needed to perform fraudulent transactions online.
For manufacturers, the researchers recommend implementing point-to-point encryption (P2PE), which encrypts the entire connection from the PIN pad all the way back to the payment processor. If P2PE cannot be implemented on existing hardware, vendors should at least consider securing the communication between their PIN pads and the POS software with TLS (Transport Layer Security) and to digitally sign all requests sent back to the PIN pad by the payment application.
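The signing idea can be sketched with a shared-key MAC; a real deployment would use proper key management and likely asymmetric signatures rather than the placeholder key below. The point is that a PIN pad that verifies a tag on every display request can reject a rogue “Re-enter PIN” form outright:

```python
import hmac
import hashlib

SHARED_KEY = b"demo-key-not-for-production"  # placeholder, for illustration only

def sign_request(payload: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Prepend an HMAC-SHA256 tag so the PIN pad can authenticate
    display requests coming from the payment application."""
    return hmac.new(key, payload, hashlib.sha256).digest() + payload

def verify_request(signed: bytes, key: bytes = SHARED_KEY):
    """Return the payload if the tag checks out, else None."""
    tag, payload = signed[:32], signed[32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return payload if hmac.compare_digest(tag, expected) else None

msg = sign_request(b"DISPLAY:Enter PIN")
assert verify_request(msg) == b"DISPLAY:Enter PIN"
# A forged request with a bogus tag is rejected:
assert verify_request(b"\x00" * 32 + b"DISPLAY:Re-enter PIN") is None
```

A man-in-the-middle who can’t produce a valid tag can still observe traffic, which is why the researchers’ first recommendation remains end-to-end (P2PE) or at least TLS encryption of the link.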
Meanwhile, consumers should never, ever, re-enter their PINs on a PIN pad if prompted to do so. They should also read the messages displayed on the screen and be suspicious of those that ask for additional information. Mobile payments with digital wallet services like Apple Pay should be used where possible, because at this point they’re safer than using traditional payment terminals.
Today we’re going to look at a nice new touch that controls what kind of information you display on the sign-in screen, specifically your email address.
Right now, when you land on the login screen on a Windows 10 PC it displays your name and the email address associated with your Microsoft account. When you’re at home that’s no big deal, but you may not want that information displayed where someone might sneak a peek, such as at a coffee shop or in a business meeting.
In my tests with the latest Insider builds this information was taken off the login screen by default. It’s not clear if the same will be true for people upgrading from a previous version of the operating system.
Regardless, accessing the setting is pretty easy if you end up needing to hide this data or, conversely, want to display it again.
In my tests with build 14388, you go to Start > Settings > Accounts > Sign-in options. There, under the Privacy subheading, you’ll have one slider labeled Show account details (e.g. email address) on sign-in screen. Flip that on or off depending on your needs, and that’s it.
This new feature has been around for months so presumably it will remain once the official Anniversary Update rolls out. If it doesn’t we’ll adjust this article accordingly.
If you have Pokémon Go fever, but you’re concerned about the controversy surrounding the app and access to your Google data, you’ll want to install the Pokémon Go update. Even if you didn’t use Google to sign into the game, you’ll want the update, since it has bug fixes.
The 1.0.1 update is now available in the App Store. Before you perform the update, sign out of the game. You can do this in Pokémon Go by going into the app settings and tapping Sign Out at the bottom of the screen. (If you don’t sign out before updating the app, that’s OK. You’ll need to do so when you launch the update.)
To update the game directly on your iPhone, tap on the App Store app, and then tap the Updates tab on the bottom navigation bar. When you see the update appear on the list, tap the Update button. You can also install the update via iTunes on your Mac, with your iPhone connected.
After the update is installed, launch the app and sign in as usual. If you sign in using Google, you’ll see this new screen.
If you go to the web and check your Google account for your connected apps, you should see a change in what Pokémon Go accesses. If you don’t sign out and then sign back into the game as mentioned earlier, you may not see this updated status.
Niantic, the developer of the game, released a statement on Monday clarifying what the company can access in relation to Google accounts. Niantic’s complete statement:
Secret conversations will only be available to a limited number of users at first, with a wider roll out planned for later this summer. The feature name “secret conversations” first surfaced in March.
Messenger’s secret conversations won’t work like WhatsApp, which offers complete end-to-end encryption (E2EE) for all messages when everyone in the conversation has a compatible version of the app. Instead, secret conversations will allow Messenger users to encrypt one-on-one conversations on the fly. Group messaging will not be covered.
When encrypted, the messages will only be accessible to the two conversation participants. While the message is in transit from one device to the other it won’t be possible for third parties—including Facebook—to decipher the message.
Facebook is also adding a Snapchat-like self-destruct setting that allows secret conversations to disappear after a predetermined amount of time. Rumors about Facebook’s plans for a Snapchat-like feature for Messenger first surfaced in May.
Each secret conversation will also exist in its own section of the app for each Messenger contact. Secret conversations will not be integrated with the main conversation thread for that person.
The biggest limitation of secret conversations is that the new feature will only work on one device. Facebook told Wired it doesn’t have a system in place to distribute encryption keys (the bits of information that encrypt and decrypt messages) across multiple devices.
Secret conversations will also start with a slimmed down feature set, leaving out support for animated GIFs, video, Facebook’s payments system, and other features.
The story behind the story: Facebook hasn’t said whether it plans to move towards a fully-encrypted Messenger or only offer the option for people who need it. As more features get added to secret conversations, and if Facebook lifts the one device limit, the E2EE feature could become a standard part of the massive messaging platform.
If going full E2EE is indeed the final plan, it wouldn’t be the first time Facebook took a piecemeal approach to encryption. Facebook’s move to make all parts of the social network’s website SSL/TLS-compatible took several years. At first, users had to enable SSL/TLS encryption manually, and many features of the site didn’t work when early versions of the security measure were turned on.
A reader whom I won’t name worries that his cousin watches what he does on his Android phone. The cousin actually told him so.
It’s possible that your cousin is just messing with your head. Ask for proof—such as texts you’ve sent and received.
On the other hand, they may actually be spying on your phone. There are a surprising number of Android apps that can do just that.
[Have a tech question? As Answer Line transitions from Lincoln Spector to Josh Norem, you can still send your query to email@example.com.]
But first, let me clarify one thing: No one is tracking you via your phone’s IP address. Take your phone on a morning jog, and its IP address will change three or four times before you get home.
In order to track your phone, someone would need to install a spying app onto it. That could come in the form of malware such as the recently discovered Godless, which can be downloaded as part of a seemingly innocent app.
And then there are spyware apps that don’t pretend to be anything else; tools such as GPS Phone Tracker. And yes, you can download them from the Play Store.
Why doesn’t Google block these apps? Because they have legitimate purposes. If your employer assigns you a company phone, they have every right to see what you do with it. And parents should monitor kids’ Internet use.
Believe it or not, some people install these apps on their phones willingly. Couple Tracker allows suspicious lovers to track each other’s movements and texts.
Personally, I prefer to just trust my wife.
If you’re an adult and you bought the phone with your own money, only you should have the right to install or not install such an app. But if someone else has physical access to your phone and knows your PIN or password, or if they can log into your Google account, they can install an app without your knowing or noticing it.
How can you tell if you’ve got a spy app on your phone? An unusually hot phone, or a battery that’s suddenly losing power fast, should make you suspicious. But not too suspicious. Those same symptoms may also be a sign of other, less malicious problems.
If you want to make sure, try running Anti Spy Mobile. It finds spying apps and gives you a chance to uninstall them.
The privacy settings on your phone don’t mean much if tech companies choose to ignore them. One major mobile advertiser allegedly did just that.
The company InMobi was secretly tracking user locations, regardless of consent, the U.S. Federal Trade Commission alleged on Wednesday. The motive: to serve location-based ads over mobile apps.
InMobi is headquartered in India and partners with thousands of apps to offer advertising. This gives the company access to 1.5 billion devices.
Collecting user information to serve tailored ads is all too common, but InMobi did so through deception, the FTC alleged. The company stated it would only collect location-based data if given permission; however, InMobi secretly collected it anyway, the agency said.
InMobi also created a database that could guess a user’s whereabouts, even when the location-tracking function had been shut off, the FTC said.
The company also allegedly tracked the locations of children, despite promising not to do so. A U.S. privacy regulation requires companies collecting information about children to first gain consent from their parents.
“The case is the FTC’s first charging a mobile ad company with deception and with violating the Children’s Online Privacy Protection Act,” the agency said in a blog post.
InMobi has agreed to a settlement and will pay a US$950,000 fine. The company blamed a “technical error” for serving children with the targeted advertising.
In no way was this “deliberate,” and the company notified the FTC as soon as the problem was discovered, InMobi said in an email.
It also said that the company was only tracking users’ location without their permission in “certain instances.” The problems were corrected in last year’s fourth quarter, InMobi added.
As part of the settlement, InMobi must delete all the information it illegally collected and operate a privacy program for the next 20 years to keep the company in line with regulations. It must also honor users’ location privacy settings.