Google+ 'Photo Story' cards start popping up in Google Now

Google Now loves to drop in surprises on us from time to time.

The latest is a card alerting you to a new Photo Story, an automatically generated mashup of images that typically appears after a vacation or another batch of pictures taken while out and about.

If you back up your smartphone’s pictures to Google+ Photos, you may have seen some of these past creations. They’re rather neat, and can of course be shared with Google’s ever-evolving social network.

This is good evidence that the Photo Story feature is sticking around, even though Google is beginning to unbundle its photo service from Google+, recently tying it closer to Google Drive.

The story behind the story: Google often tinkers with Google Now, throwing in new features from time to time, like gas station locations or helpful travel tips. The service is a key part of Google’s always-connected strategy of tying its services closely together across any device you may be using.

Amazon Fires 1st Shot in Storage Price War

Amazon on Thursday announced two new plans for unlimited storage in its Cloud Drive service.

The Unlimited Photos plan includes 5 GB of additional storage for videos or other documents and files. It costs US$11.99 per year.

The Unlimited Everything option provides limitless cloud storage of photos, videos, movies, music and files for $59.99 per year. Both plans come with a free three-month trial.

Unlike monthly plans for business users, these new services target the average consumer who has multiple devices.

“We think it is the most customer-centric way to sell storage. We don’t want customers to think about how much storage they need or how much it costs per GB,” Amazon spokesperson Lyn Hart told the E-Commerce Times.

Amazon last year tweaked its Prime membership by adding a storage perk. Prime members will continue to receive unlimited photo storage, including 5 GB of free storage for videos and other files, at no additional cost. Those who have greater storage needs can buy into the Unlimited Everything plan, said Hart.

Amazon’s new plans address the typical consumer dilemma of having a lifetime of birthdays, vacations, holidays and everyday moments stored across numerous devices, noted Charles King, principal analyst at Pund-IT.

An unlimited plan makes sense, because people rarely know how many gigabytes of storage they need to back up all of their data, said Josh Petersen, director of Amazon Cloud Drive.

By the Numbers

Amazon’s bargain-basement prices will put pressure on Dropbox, Google and Microsoft. A quick comparison shows a noticeable gap between Amazon’s new pricing plans and what its main competitors charge. Here’s how they break down (a rough yearly comparison is sketched after the list):

  • Google charges $9.99/month for 1 TB. Storing 30 TB costs $299.99/month. However, Google Drive users get 15 GB for free. An incremental storage plan offers 100 GB of storage for $1.99/month.
  • Dropbox charges $10/month for 1 TB; users can get 2 GB for free.
  • Microsoft charges $6.99/month for 1 TB, with 15 GB available for free. A super saver plan, of sorts, is available on Microsoft’s OneDrive. If you subscribe to Office 365, you can get unlimited cloud storage for $7/month.
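As a rough sanity check on those numbers, here’s a small Python sketch that works out what roughly 1 TB of storage costs per year under each plan. It uses only the prices quoted above and ignores promotions and larger tiers:

```python
# Yearly cost of roughly 1 TB of storage under the plans listed above.
# Prices are the ones quoted in this article (early 2015); "unlimited"
# plans are flat yearly fees no matter how much you store.
PLANS = {
    "Amazon Unlimited Everything": ("per_year", 59.99),
    "Amazon Unlimited Photos":     ("per_year", 11.99),  # photos only, plus 5 GB for other files
    "Google Drive (1 TB tier)":    ("per_month", 9.99),
    "Dropbox (1 TB tier)":         ("per_month", 10.00),
    "Microsoft OneDrive (1 TB)":   ("per_month", 6.99),
}

for name, (billing, price) in PLANS.items():
    yearly = price if billing == "per_year" else price * 12
    print(f"{name:30s} ${yearly:8.2f}/year")
```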

The unlimited pricing plans replace the previous pricing menu for Amazon Cloud Drive, which gave subscribers 5 GB of storage for free. Users could add more storage under a tiered pricing structure that started at 20 GB for $10/year and topped out at 1 TB for $500/year.

One concern is whether Amazon’s cloud storage is truly unlimited. As with other major cloud storage providers, users can’t store their files exclusively in the cloud, according to Tunio Zafer, CEO of pCloud.

“They still need to store them on hardware devices, and that’s usually less than 1 TB of data. Here the word ‘unlimited’ could be misleading, since actual storage space may be dictated by a user’s hard drive space,” he told the E-Commerce Times.

Shoppers’ Convenience

Convenience is a big marketing plus for Amazon’s new plans. The sheer ease of purchasing unlimited storage through Amazon.com is a huge benefit, noted Tim Prendergast, founder and CEO of Evident.io.

“Rather than sign up for some new service or experience another shopping ecosystem at the competitors, consumers just add the unlimited storage product to the Amazon.com cart alongside their diapers, toothpaste and other commodity items,” he told the E-Commerce Times.

This just makes sense from a buyer’s perspective, said Pund-IT’s King. It gives buyers a one-stop shop with no additional setup required.

With Prime’s online content library, Amazon has some interesting options to bundle with its cloud storage service.

That said, shoppers should not discount the other storage options, advised Anand Sanwal, CEO of CB Insights.

“Microsoft and Google are going to be formidable competitors in this area,” he told the E-Commerce Times.

Not All the Same

From a technological perspective, Amazon Cloud has a distinct advantage over many of its competitors. It is a true cloud hosting service, offering cloud storage, Web development and an application hosting environment, noted Fred Menge, president and CEO of Magnir.

“From an end-user perspective, Amazon differs from other cloud storage providers by offering a safe and secure storage environment at a reasonable price,” he told the E-Commerce Times.

“While Google and Dropbox are free services with limited storage, Amazon is a subscription-based service. By offering a small price, Amazon provides a secure environment designed to store massive amounts of data at a reasonable per-gigabyte fee,” Menge explained.

Amazon’s cloud offering has a distinct advantage because it plays a broader ecosystem game than the others when it comes to cloud services, noted Evident.io’s Prendergast.

Microsoft and Google have their own cloud ecosystems to rely on, but they tend not to have the breadth or scale that Amazon has, he observed. “They are at different maturity levels in their cloud features and pricing models, which Amazon is clearly going to take advantage of in this situation.”

Common Weakness

Low-ball pricing may not be easy for consumers to ignore, but all online storage bins share a few common problems. One of them is access speed, Pund-IT’s King told the E-Commerce Times.

“One or the other of Amazon’s services should work for 90 percent of users. That may make the service seem like a slam dunk, but it does not eliminate the basic shortcomings of online storage services,” he said.

In every case, access to data is restricted according to network capacity, so if you happen to be using a slow WiFi connection, downloading data can take a very long time.

Another issue is that simply uploading files to a cloud service does not mean that users can or will manage them very effectively.

“At one level, Amazon’s service and that of virtually every other cloud storage provider is the virtual equivalent of the stuffed-to-bursting storage locker facilities you see in every city,” said King.

Security and Stability

Consumers need to worry about cloud security standards as well, added pCloud’s Zafer. “Not much has been said about Amazon’s cloud security.”

“Most of the big players, like Dropbox, use end-to-end encryption — but client-side encryption is the safer option,” he said. Encryption managed by the provider is still susceptible to breaches on the provider’s side, whereas client-side encryption performs all encryption directly on a user’s device and is therefore more secure.
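To make the distinction concrete, here’s a minimal Python sketch of client-side encryption using the third-party cryptography package. The file name and the upload step are placeholders, and this is not how any particular provider implements its service:

```python
# Minimal illustration of client-side encryption: the file is encrypted on
# the user's own device, so the storage provider only ever sees ciphertext.
# Requires `pip install cryptography`; "vacation.jpg" is a placeholder name.
from pathlib import Path
from cryptography.fernet import Fernet

def encrypt_for_upload(path: str, key: bytes) -> bytes:
    """Encrypt the file locally; the key never leaves this machine."""
    return Fernet(key).encrypt(Path(path).read_bytes())

def decrypt_after_download(ciphertext: bytes, key: bytes) -> bytes:
    """Recover the original bytes after fetching the ciphertext back."""
    return Fernet(key).decrypt(ciphertext)

if __name__ == "__main__":
    key = Fernet.generate_key()                     # keep this somewhere safe, offline
    blob = encrypt_for_upload("vacation.jpg", key)  # ...then hand `blob` to any upload API
    assert decrypt_after_download(blob, key) == Path("vacation.jpg").read_bytes()
```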

Another worry when it comes to dumping everything you digitally own into the cloud is a company’s durability. Amazon remains the market leader in these services — but there is a but, said King.

“I think it is reasonable to be cautious about Amazon’s position due to its financial condition. The fact is, Amazon demonstrated financial weakness through most of 2014, including a significant loss in Q3,” he noted.

Though it ended the year on a high note after the holiday shopping season, its cloud competitors, including Microsoft, Google and Apple, are all in far stronger financial shape and easily can match or beat Amazon’s low-balling, King pointed out. “If they choose to do that, the big question will be where Amazon can go after reaching the bottom.”

DirectX 12, LiquidVR may breathe fresh life into AMD GPUs

With DirectX 12 coming soon alongside Windows 10, VR technology ramping up from multiple vendors, and the Vulkan API having already debuted, it’s an exceedingly interesting time to be in PC gaming. AMD’s GCN architecture is three years old at this point, but certain features baked into the chips at launch (and expanded with Hawaii in 2013) are only now coming into their own, thanks to the improvements ushered in by next-generation APIs.

One of the critical technologies underpinning this argument is the set of Asynchronous Compute Engines (ACEs) built into every GCN-class video card. The original HD 7900 family had two ACEs per GPU, while AMD’s Hawaii-class hardware bumped that to eight.

AMD’s Graphics Core Next (GCN) GPUs are capable of asynchronous execution to some degree, as are Nvidia GPUs based on the GTX 900 “Maxwell” family. Previous Nvidia cards like Kepler and even the GTX Titan were not.

What’s an Asynchronous Compute Engine?

The ACE units inside AMD’s GCN architecture are designed for flexibility. The chart below illustrates the difference: instead of being forced to execute a single queue in a predetermined order, even when that order makes no sense, the GPU can schedule and complete tasks from different queues independently. This gives it some limited ability to execute tasks out of order: if the GPU knows that a time-sensitive operation needing only 10ns of compute time is sitting in the queue alongside a long memory copy that isn’t particularly time-sensitive but will take 100,000ns, it can pull the short task forward, complete it, and then run the longer operation.

[Chart: asynchronous vs. synchronous execution]
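To make that idea concrete, here’s a toy Python simulation of the scenario described above: a short, time-sensitive task queued behind a long memory copy. It illustrates the scheduling principle only and is not AMD’s actual hardware logic:

```python
# Toy simulation: a short, time-sensitive task queued behind a long memory copy.
# An in-order queue makes the short task wait; an out-of-order scheduler pulls
# it forward. This models the principle, not AMD's hardware.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    cost_ns: int          # how long the task takes to execute
    time_sensitive: bool  # should run as soon as possible

queue = [
    Task("large memory copy", 100_000, time_sensitive=False),
    Task("per-frame timer update", 10, time_sensitive=True),
]

def finish_times(tasks):
    """Completion time (ns) of each task for a serial run in the given order."""
    clock, done = 0, {}
    for t in tasks:
        clock += t.cost_ns
        done[t.name] = clock
    return done

in_order     = finish_times(queue)
out_of_order = finish_times(sorted(queue, key=lambda t: not t.time_sensitive))

print("in-order:    ", in_order)      # short task finishes at 100,010 ns
print("out-of-order:", out_of_order)  # short task finishes at 10 ns
```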

The point of using ACEs is that they allow the GPU to process and execute multiple command streams in parallel. In DirectX 11, this capability wasn’t really accessible: the API was heavily abstracted, and multiple developers have told us that multi-threading support in DX11 was essentially broken from day one. As a result, there was no real way to tell the graphics card to handle graphics and compute in the same workload.
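As a loose CPU-side analogy for servicing two independent command streams at once (Python threads standing in for hardware queues; nothing here is GPU code), consider the following sketch:

```python
# Two independent workloads serviced concurrently instead of one after the other.
# time.sleep() stands in for real work; the pair finishes in ~0.1 s, not ~0.2 s.
import time
from concurrent.futures import ThreadPoolExecutor

def graphics_work():
    time.sleep(0.1)            # stand-in for rendering a frame
    return "frame rendered"

def compute_work():
    time.sleep(0.1)            # stand-in for a physics / GPGPU job
    return "physics step done"

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(graphics_work), pool.submit(compute_work)]
    results = [f.result() for f in futures]
print(results, f"elapsed: {time.perf_counter() - start:.2f}s")
```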

[Image: GPU pipelines in DX11 vs. DX12]

AMD’s original GCN hardware may have debuted with just two ACEs, but AMD claims that it added six ACE units to Hawaii as part of a forward-looking plan, knowing the hardware would one day be useful. That’s precisely the sort of thing you’d expect a company to say, but there’s some objective evidence that Team Red is being honest. Back when GCN and Nvidia’s Kepler were going head to head, it quickly became apparent that while the two companies were neck and neck in gaming, AMD’s GCN was far more powerful than Nvidia’s GK104 and GK110 in many GPGPU workloads. The comparison was particularly lopsided in cryptocurrency mining, where AMD cards were able to shred Nvidia hardware thanks to a more powerful compute engine and hardware support for some of the integer operations used in hashing.

When AMD built Kaveri and the SoCs for the PS4 and Xbox One, it included eight ACEs in the first two (the Xbox One may have just two). The thinking behind that move was that adding more asynchronous compute capability would allow programmers to use the GPU’s computational horsepower more effectively. Physics and certain other types of in-game calculations, including some of the work that’s done in virtual reality simulation, can be handled in the background.

[Image: asynchronous shader performance in DX11 vs. DX12]

AMD’s argument is that with DX12 (and Mantle / Vulkan), developers can finally use these engines to their full potential. In the image above, the top pipeline is the DX11 method of doing things, in which work is mostly handled serially; the bottom pipeline is the DX12 methodology.

Whether programmers will take advantage of these specific AMD capabilities is an open question, but the fact that both the PS4 and Xbox One have a full set of ACEs to work with suggests that they may. If developers are already writing code to execute on GCN hardware, moving that support over to DX12 and Windows 10 is no big deal.

Right now, AMD has only released information on the PS4’s use of asynchronous shaders, but that doesn’t mean the Xbox One can’t use them. It’s possible that the DX12 API push Microsoft is planning for that console will add the capability.

AMD is also pushing ACEs as a major feature of its LiquidVR platform — a fundamental capability that it claims will give Radeon cards an edge over their Nvidia counterparts. We’ll need to see final hardware and drivers before drawing any such conclusions, of course, but the compute capabilities of the company’s cards are well established. It’s worth noting that while AMD did have an advantage in this area over Kepler, which had only one compute and one graphics pipeline, Maxwell has one graphics pipeline and 32 compute pipes, compared with AMD’s eight ACEs. Whether this affects performance in shipping titles is something we’ll only be able to answer once DX12 games that specifically use these features are on the market.

The question, from the end user’s perspective, obviously boils down to which company will offer better performance (or a better price/performance ratio) under the next-generation DX12 API. It’s far too early to make a determination on that front — recent DirectX 12 benchmarks in 3DMark put AMD’s R9 290X out in front of Nvidia’s GTX 980, while Star Swarm results from earlier this year reversed that result.

What is clear is that DX12 and Vulkan are reinventing 3D APIs and, by extension, game development in ways we haven’t seen in years. The new capabilities of these frameworks are set to improve everything from multi-GPU configurations to VR displays. Toss in features like 4K monitors and FreeSync / G-Sync support, and it’s an exciting time for the PC gaming industry.

Energy companies attacked with new malware exploit

A new Trojan called Trojan.Laziok has targeted the energy sector. The attack focused on the Middle East, with Trojan.Laziok acting as a reconnaissance tool that lets attackers steal data from compromised computers, according to Symantec. The attack targeted petroleum, gas, and helium companies using spam emails from the moneytrans.eu domain. The emails contained a Microsoft Excel attachment with an exploit for the Microsoft Windows Common Controls ActiveX Remote Code Execution Vulnerability, which we’ve seen in prior attacks, although the attack method is new.

[Diagram: Trojan.Laziok infection process (Symantec)]

Clicking on the attachment starts the infection process. Laziok hides itself in %SystemDrive%\Documents and Settings\All Users\Application Data\System\Oracle, then begins creating new folders and renaming itself with innocuous-looking file names such as search.exe, lsass.exe, and smss.exe. It then collects system data, such as the computer name, hardware specs (RAM, GPU, processor), and installed antivirus software, and sends it to the attackers, who can then deploy additional malware tailored to whatever defenses are present.
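As a rough illustration only, and no substitute for up-to-date security software, a script along these lines could check for the drop location and file names Symantec describes. The legitimate lsass.exe and smss.exe live in the Windows system directory, so copies sitting under this Oracle folder would be suspicious:

```python
# Illustrative check for the drop directory and file names described above.
# Not a real detection tool; rely on updated security software for that.
import os
from pathlib import Path

system_drive = os.environ.get("SystemDrive", "C:")
drop_dir = (Path(system_drive + r"\Documents and Settings") / "All Users"
            / "Application Data" / "System" / "Oracle")
suspect_names = {"search.exe", "lsass.exe", "smss.exe"}

if drop_dir.is_dir():
    hits = [p for p in drop_dir.iterdir() if p.name.lower() in suspect_names]
    if hits:
        for p in hits:
            print("Suspicious file:", p)
    else:
        print("Drop directory exists, but no suspect file names were found.")
else:
    print("Drop directory not present:", drop_dir)
```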

The attack affected the United Arab Emirates the most, followed by Saudi Arabia, Pakistan, and Kuwait. Symantec says, “In this campaign, the attackers distributed customized copies of Backdoor.Cyberat and Trojan.Zbot, which are specifically tailored for the compromised computer’s profile. We observed that the threats were downloaded from a few servers operating in the US, UK, and Bulgaria.”

As is often the case, the vulnerability itself wasn’t new, and the attack distributed existing threats. But many people don’t apply patches and leave themselves open to attack. Symantec stressed the usual precautions for avoiding malware, such as not clicking on links in unsolicited emails, not opening attachments, patching installed software, and keeping security software up to date. Why anyone is still clicking on Excel attachments at this point is beyond us, although there’s now research showing that some people ignore warning dialogs even when they appear.

Verizon owners can finally opt out of supercookies — here’s how

Verizon has announced that its customers can finally opt out of the so-called “supercookies” it has been injecting into all mobile users’ traffic. While the company’s new provision is a start, it’s unlikely to satisfy privacy advocates, who have pointed out that making the feature opt-out means the overwhelming majority of customers won’t ever know it exists.

The difference between Verizon’s supercookies and ordinary cookies is that the Verizon flavor is tied directly to your mobile device. Calling it a cookie is merely a matter of convenience — it’s not actually a cookie (defined as a small file that contains user information specific to your individual browsing session and that you can easily delete) at all. What Verizon injects into every unencrypted HTTP request is known as an X-UIDH header. Since that header is tied to your Verizon account, the Verizon tracking method is less a “cookie” and more a “permanent record of everything you visit.”
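If you’re curious whether your own traffic is affected, one informal way to check is to ask a header-echo service what your requests look like by the time they leave the carrier’s network. The sketch below uses httpbin.org, a public header-echo service, and only makes sense over Verizon’s cellular connection with plain HTTP, since the header isn’t injected on Wi-Fi or HTTPS connections:

```python
# Informal check for the injected tracking header: httpbin.org echoes back the
# request headers it received, so an X-UIDH entry means something was added in
# transit. Run this over the cellular connection (not Wi-Fi) and plain HTTP.
import requests

headers = requests.get("http://httpbin.org/headers", timeout=10).json()["headers"]
uidh = headers.get("X-Uidh") or headers.get("X-UIDH")
if uidh:
    print("X-UIDH header injected:", uidh)
else:
    print("No X-UIDH header seen in this request.")
```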

Verizon has updated its privacy and advertising policies to include more information about the program, but these “clarifications” mostly clarify why people hate corporate doublespeak. Verizon helpfully informs you of the types of information it uses to determine which advertising to show you:

“The information we use for this program includes the postal address we have for you and certain consumer information such as your device type and language preference, as well as demographic and interest categories obtained from other companies. This information might include your gender, age range, and interests (i.e. sports fan, frequent diner, or pet owner).” [emphasis added]

Later, Verizon promises that it doesn’t share “personal information,” use your device location to deliver ads, or gather information about your mobile browsing habits. Those are all good things, but they’re carefully calibrated to give a false impression. The first thing Verizon does is admit that it’s buying information about its own customers. Verizon then combines it with the data the company gathers internally to create enhanced advertising profiles.

We don’t know who Verizon buys its information from or how such exchanges operate, because the companies that run them, like Acxiom, are extremely secretive. But it’s not hard to see how the logic chain works. Verizon may be giving up the right to engage in certain kinds of data gathering — reluctantly, and only to a tiny group of people who are aware of it — but it’s buying plenty of other data from other sources to avoid the PR hit of gathering such information itself.

Customers who have registered for the Verizon Selects program cannot opt out of Relevant Mobile Advertising. Customers who have multiple lines must opt out for each and every line. Customers who have opted out of sharing their Customer Proprietary Network Information must still additionally opt out of Verizon’s Relevant Mobile Advertising program.

The ongoing data-gathering push

Verizon, of course, is far from the only company with these kinds of invasive programs, even if its supercookie created a particular privacy nightmare. AT&T, for example, now charges its fiber customers at least $29 a month extra to opt out of similar ad-targeting tracking — though that figure is somewhat misleading.

Under AT&T’s “Standard Plan” (the one without tracking), customers must pay a $99 activation fee and a $7 modem rental fee, in addition to the $29 premium for the basic service. If you want AT&T’s video service, you’ll pay $149 per month instead of $120 (that’s your base $29 again), plus a $50 activation fee, a $10 “monthly service fee,” and the aforementioned $7 modem rental. Altogether, that’s a $66-per-month difference, or nearly $800 per year, not counting the activation fees.

AT&T discloses the existence of these less-tracked plans right up front. Or not.

That $800 per year doesn’t buy you anonymity — it buys you the right not to be tracked any more than AT&T already does.

Users who are stuck with one or both companies may be interested in our recent article on how to protect yourself from data-gathering operations and snoops. Client-side actions are of modest use against ISPs — encrypted content, like emails, can be protected (save for subject lines), but Tor exit nodes can still be snooped. Users who dislike having their every use of the Internet chopped up and sold to the highest bidder, however, have increasingly limited options — at least as far as finding ISPs and companies that will respect their privacy.

Firefox 37 supports easier encryption option than HTTPS

The latest version of Firefox has a new security feature that aims to put a band-aid over unencrypted website connections. Firefox 37 rolled out earlier this week with support for opportunistic encryption, or OE. You can think of OE as a sort of halfway point between no encryption (known as clear text) and full HTTPS encryption, one that’s simpler to implement.

For users, this means you get at least a modicum of protection from passive surveillance (such as NSA-style data slurping) when sites support OE. It will not, however, protect you against an active man-in-the-middle attack as HTTPS does, according to Mozilla developer Patrick McManus, who explained Firefox’s OE rollout on his personal blog.

Unlike HTTPS, OE uses an unauthenticated encrypted connection. In other words, the site doesn’t need a signed security certificate from a trusted issuer, as it does with HTTPS. Signed security certificates are a key component of the HTTPS security scheme and are what browsers use to trust that they are connecting to the right website.

The impact on you: Firefox support is only half of the equation for opportunistic encryption. Websites will still have to enable support on their end for the feature to work. Site owners can get up and running with OE in just two steps, according to McManus. But that will still require enabling an HTTP/2 or SPDY server, which, as Ars Technica points out, may not be so simple. So while OE support in Firefox is a good step for users, it will only start to matter when site owners begin to support it.
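For the curious, here’s one informal way to see whether a site has done its part. Opportunistic encryption is advertised through an Alt-Svc response header on the plain-HTTP site, so checking for that header shows whether Firefox 37 has an encrypted route to try; the URL in the sketch is a placeholder:

```python
# Check whether a plain-HTTP site advertises an alternative encrypted service,
# which is how opportunistic encryption is signaled (e.g. Alt-Svc: h2=":443").
# "http://example.com/" is a placeholder; substitute the site you want to test.
import requests

resp = requests.get("http://example.com/", allow_redirects=False, timeout=10)
alt_svc = resp.headers.get("Alt-Svc")
if alt_svc:
    print("Alt-Svc advertised:", alt_svc)   # Firefox 37+ can try the encrypted route
else:
    print("No Alt-Svc header; this connection stays in clear text.")
```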

Now view your custom map creations inside the Google Maps Android app

You can now see any of your custom maps that you’ve built with Google’s My Maps tool inside the main Google Maps Android app.

You’ll still need to use the My Maps website or separate Android app to create your own maps, which are rather useful for vacation planning or business trips. You’re able to place markers, plot routes, and share them with others.

Why this matters: While My Maps is a clever tool, it hasn’t quite captured the hearts and minds of Android users the way other services have. Bringing it into the main Google Maps application puts it in front of more users, who may then be tempted to try out their map-making skills.

Photos now available in Google Drive, making Google+ even less relevant

We’re all patiently waiting to find out what exactly Google is going to do with its frail social network, Google+, and today’s news isn’t looking too good for it.

In a blog entry, Google announced that you can now access all the photos you backed up with Google+ through Google Drive. It’s intended to make it easier for you to use photos in your documents, but frankly, there’s probably more to the announcement happening behind the scenes.

The story behind the story: It’s slightly redundant to have your photos on both Google+ and Google Drive, though Google suggests this will make it easier to organize them as you see fit and add other file types to a Drive folder. That makes sense for sticking photos in your docs, but it also hints at something major on the horizon. This update may be the first step in a plan to separate Google’s photo service from Google+ entirely.

Is this the beginning of the end for Google+? Or is this Google’s attempt at dethroning Evernote? Perhaps even Google doesn’t know what it’s doing with its own social network. For now, you can get the Google Drive update directly from the Google Play store or wait for it to hit your device.

Google Chrome's data-saving compression feature hits PCs

Google recently added another way to reduce your PC’s bandwidth demands when you’re tethering or in any other situation where every megabyte counts.

Chrome users can now add a new extension called Data Saver (Beta) from the Chrome Web Store that compresses web pages on Google’s servers before delivering them to your PC, a feature that mobile Chrome has offered since 2013. The new extension does not work when Chrome is in incognito mode or when you connect to a site using SSL (HTTPS) encryption.

Why this matters: Reducing bandwidth demands is a common feature on mobile devices with browsers such as Silk and Opera, as well as the app Opera Max. Data compression on PCs, however, hasn’t had nearly as much attention since PCs are typically connected to Wi-Fi or Ethernet and bandwidth constraints are less of an issue. Opera was really the only major browser to tackle data compression with its Turbo feature introduced in 2009. With laptops becoming lighter and more mobile by the month and Chrome OS growing in popularity, focusing on data compression for web-centric PCs is long overdue.

Hands-on with Data Saver (beta)

Using Data Saver is really easy. You just install it from the Chrome Web Store and the extension gets to work immediately. Data Saver places an icon in your browser to the right of the address bar. Click on it and you can see how much data you’ve saved, or turn Data Saver off.

For anyone who wants a detailed look at their data compression activity, Chrome evangelist François Beaufort provided this tip: paste chrome://net-internals/#bandwidth into your address bar and hit Enter. This will open a tab showing all your latest bandwidth activity, as well as a table at the bottom with a live update of your total bandwidth usage with Data Saver enabled.

Data Saver is still in beta. As such, it may be prone to the occasional mishap and may not compress data as much as you’d like. Trying out the extension on my home setup, for example, it seemed like a lot less data was compressed when I used Chrome on my 24-inch 1080p monitor than on my laptop’s 12-inch 1366-by-768 display.

Chrome OS launcher overhaul adds Google Now

Some hefty changes are coming to Chromebooks and Chromeboxes, including a new app launcher with Google Now built in.

The new launcher is available now in the Chrome OS beta channel, and it’s a major departure from the existing version. Instead of a small pop-up that shows all your apps, the new launcher brings up a larger window in the center of the screen—one that looks kind of like Chrome’s “new tab” menu.

Here, you’ll see the Google logo on top, followed by a search box. Your four most recent apps appear below your search box, along with a button to show “all apps.” Beneath these apps, the launcher will show information cards from Google Now, similar to how they appear on phones and tablets.

To try the beta channel, head to Settings, then click “About Chrome OS” at the top of the window. Click “More info” and hit the “Change channel” button. Restart, and click the “Check for and apply updates” button in this same menu. If you’d rather stick with the stable channel, the beta features usually take about a month to make their way over.