
Thursday, September 30, 2021

Apple continues its clever ‘box in a box’ trick for selling the iPhone 13 with EarPods in France - 9to5Mac

As was the case with the iPhone 12 last year, Apple is selling the iPhone 13 with EarPods in France. While Apple stopped including EarPods with the iPhone starting with the iPhone 12, a French law related to radio-frequency exposure requires headphones to be included in the box with smartphones.

This situation centers on the potential harm to the brain from absorbing radio-frequency energy. You can debate the legitimacy or impact of this, but many countries have set a legal limit on radio-frequency power output. In Apple's case, the iPhone taps into the proximity sensor to detect when you are holding your phone against your head and reduces the RF output accordingly.

But in France, regulators have taken things a step further with a law that says users should be actively encouraged to avoid holding their phone to the side of their head. Instead, the country recommends that headphones be used when on the phone. The specific goal of France's legislation is to protect children aged 14 and younger from RF exposure.

For this reason, France requires smartphones to be sold with headphones: 'an accessory making it possible to limit the exposure of the head to radioelectric emissions during communications.' You can find a full summary of the law right here.

As first shared by a user on the MacRumors forums, Apple is including wired EarPods in the box with the iPhone 13, just like it did with the iPhone 12. Apple has also maintained its clever "box inside of a box" solution: open the exterior box and you'll find the iPhone 13 box; remove that and you'll find the EarPods hidden below.

This allows Apple to use the same iPhone 13 box and packing process for all iPhones, and then include the EarPods separately to streamline shipping and operations.


EA promotes Laura Miele to COO, making her one of the most powerful women in gaming - The Verge

Electronic Arts is promoting chief studios officer Laura Miele to chief operating officer, the company announced Thursday. The change is a big promotion for Miele, who already held significant leadership at the company, overseeing 25 different studios. The new role will give Miele greater oversight of the company and arguably makes her the most powerful woman in gaming, an industry where there are few female executives, fewer still in the C-suite, and where those C-suite execs are often in charge of HR or finance rather than the company's products.

Ubisoft did make Virginie Haas its chief studios operating officer last August, following scandals over a toxic culture, including sexual harassment and misconduct that reached as high as the C-suite.

Miele joined EA in 1996 and has served as chief studios officer since April 2018. The Verge spoke with Miele in July, where she discussed how the pandemic changed development at EA. Miele will move into the role over the next few months, according to an SEC filing (PDF).

EA also announced that chief financial officer Blake Jorgensen will be leaving the company. He’s expected to depart in 2022, and a search to replace him “will begin immediately.” Chris Bruzzo, who was previously the company’s executive vice president of marketing, commercial, and positive play, will become the company’s chief experience officer.


Intel launches its next-generation neuromorphic processor—so, what’s that again? - Ars Technica

Mike Davies, director of Intel's Neuromorphic Computing Lab, leads the company's efforts in this area, and with the launch of a new neuromorphic chip this week, he talked Ars through the updates.

Despite their name, neural networks are only distantly related to the sorts of things you'd find in a brain. While their organization and the way they transfer data through layers of processing may share some rough similarities to networks of actual neurons, the data and the computations performed on it would look very familiar to a standard CPU.

But neural networks aren't the only way that people have tried to take lessons from the nervous system. There's a separate discipline called neuromorphic computing that's based on approximating the behavior of individual neurons in hardware. In neuromorphic hardware, calculations are performed by lots of small units that communicate with each other through bursts of activity called spikes and adjust their behavior based on the spikes they receive from others.

On Thursday, Intel released the newest iteration of its Loihi neuromorphic hardware. The new release comes with the sorts of things you'd expect from Intel: a better processor and some basic computational enhancements. But it also comes with some fundamental hardware changes that will allow it to run entirely new classes of algorithms. And while Loihi remains a research-focused product for now, Intel is also releasing a compiler that it hopes will drive wider adoption.

To make sense out of Loihi and what's new in this version, let's back up and start by looking at a bit of neurobiology, then build up from there.

From neurons to computation

The foundation of the nervous system is a cell type called the neuron. All neurons share a few common functional features. At one end of the cell are structures called dendrites, which you can think of as receivers: this is where the neuron takes in inputs from other cells. Nerve cells also have axons, which act as transmitters, connecting with other cells to pass along signals.

The signals take the form of what are called "spikes," which are brief changes in the voltage across the neuron's cell membrane. Spikes travel down axons until they reach the junctions with other cells (called synapses), at which point they're converted to a chemical signal that travels to the nearby dendrite. This chemical signal opens up channels that allow ions to flow into the cell, starting a new spike on the receiving cell.

The receiving cell integrates a variety of information—how many spikes it has seen, whether any neurons are signaling that it should be quiet, how active it was in the past, etc.—and uses that to determine its own activity state. Once a threshold is crossed, it'll trigger a spike down its own axon and potentially trigger activity in other cells.

Typically, this results in sporadic, randomly spaced spikes of activity when the neuron isn't receiving much input. Once it starts receiving signals, however, it'll switch to an active state and fire off a bunch of spikes in rapid succession.
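That integrate-then-fire behavior is commonly captured in what's called a "leaky integrate-and-fire" model: the cell's potential decays over time, inputs add to it, and crossing a threshold produces a spike and a reset. Here's a minimal Python sketch of that textbook abstraction; the parameter values are illustrative, and it models the biology described above rather than any particular chip.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the textbook abstraction of
# the integrate-then-spike behavior described above, not any chip's hardware.
def simulate_lif(input_spikes, weight=0.4, leak=0.9, threshold=1.0):
    """Integrate weighted input spikes with a leak; fire at threshold, then reset."""
    potential = 0.0
    output = []
    for spike_in in input_spikes:           # one entry per time step (0 or 1)
        potential = potential * leak + weight * spike_in
        if potential >= threshold:
            output.append(1)                # fire...
            potential = 0.0                 # ...and reset
        else:
            output.append(0)
    return output

# Sporadic input never crosses threshold; a rapid burst does.
print(simulate_lif([1, 0, 0, 0, 1, 0, 0, 0]))  # [0, 0, 0, 0, 0, 0, 0, 0]
print(simulate_lif([1, 1, 1, 1, 0, 0, 0, 0]))  # [0, 0, 1, 0, 0, 0, 0, 0]
```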

A neuron, with the dendrites (spiky protrusions at top) and part of the axon (long extension at bottom right) visible.

How does this process encode and manipulate information? That's an interesting and important question, and one we're only just starting to answer.

One of the ways we've gone about answering it is via what's called theoretical neurobiology (or computational neurobiology). This involves attempts to build mathematical models that reflect the behavior of nervous systems and neurons, in the hope that these models will let us identify some underlying principles. Neural networks, which focus on the organizational principles of the nervous system, are one of the efforts that came out of this field. Spiking neural networks, which attempt to build up from the behavior of individual neurons, are another.

Spiking neural networks can be implemented in software on traditional processors. But it's also possible to implement them through hardware, as Intel is doing with Loihi. The result is a processor very much unlike anything you're likely to be familiar with.

Spiking in silicon

The previous-generation Loihi chip contains 128 individual cores connected by a communication network. Each of those cores has a large number of individual "neurons," or execution units. Each of these neurons can receive input in the form of spikes from any other neuron—a neighbor in the same core, a unit in a different core on the same chip, or a unit on another chip entirely. The neuron integrates the spikes it receives over time and, based on the behavior it's programmed with, uses that to determine when to send spikes of its own to whatever neurons it's connected with.

All of the spike signaling happens asynchronously. At set time intervals, embedded x86 cores on the same chip force a synchronization. At that point, each neuron updates the weights of its various connections—essentially, how much attention to pay to each of the individual neurons that send signals to it.

Put in terms of an actual neuron, part of the execution unit on the chip acts as a dendrite, processing incoming signals from the communication network based in part on weights derived from past behavior. A mathematical formula then determines when activity has crossed a critical threshold and triggers spikes when it does. The "axon" of the execution unit then looks up which other execution units it communicates with and sends a spike to each.

In the earlier iteration of Loihi, a spike simply carried a single bit of information; a receiving neuron registered only that a spike had arrived.

Unlike a normal processor, Loihi has no external RAM. Instead, each neuron has a small cache of memory dedicated to its use. This includes the weights it assigns to the inputs from different neurons, a cache of recent activity, and a list of all the other neurons that its spikes are sent to.
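To picture how self-contained each unit is, here's a rough sketch of that per-neuron local memory as a data structure. The layout and field names are hypothetical illustrations, not Loihi's actual registers.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the per-neuron local memory described above;
# field names are illustrative, not Loihi's actual register layout.
@dataclass
class NeuronState:
    weights: dict[int, float] = field(default_factory=dict)   # input neuron id -> weight
    recent_activity: list[int] = field(default_factory=list)  # cache of recent spikes
    targets: list[int] = field(default_factory=list)          # neuron ids we spike to
    potential: float = 0.0                                     # integrated input so far

    def receive(self, source_id: int) -> None:
        """Integrate an incoming spike, scaled by the weight stored for its sender."""
        self.potential += self.weights.get(source_id, 0.0)

    def fire(self) -> list[int]:
        """Reset local state and return the fan-out list: who receives our spike."""
        self.potential = 0.0
        return self.targets
```

Everything a neuron needs, including the list of who it talks to, lives with the neuron itself; there's no shared memory bus to contend for.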

One of the other big differences between neuromorphic chips and traditional processors is energy efficiency, where neuromorphic chips come out well ahead. IBM, which introduced its TrueNorth chip in 2014, was able to get useful work out of it even though it was clocked at a leisurely kilohertz, and it used less than 0.0001 percent of the power that would be required to emulate a spiking neural network on traditional processors. Davies said Loihi can beat traditional processors by a factor of 2,000 on some specific workloads. "We're routinely finding 100 times [less energy] for SLAM and other robotic workloads," he added.

What’s new in neuromorphics

We'll get back to how asynchronous electronic spikes can actually solve useful problems in a bit. First, we'll take a look at what has changed between Loihi (which we'll call "the original processor" for clarity's sake) and Loihi 2. The difference is informative, because Intel has had hardware in the hands of the research community for a few years, and the company was able to incorporate their feedback into the design decisions. So, the differences between the two, in part, reflect what the people who actually use neuromorphic processors have found is holding them back.

Some of the changes are the obvious things you'd expect in the transition between two generations of chips. Intel's using a more up-to-date manufacturing process, and it can now fit each core in roughly half the space needed in the original processor. Rather than being able to communicate with separate chips via a two-dimensional grid of connections, Loihi 2 can do so in three dimensions, allowing a stack of processing boards to greatly increase the total number of neurons. The number of embedded processors per chip, which help coordinate all the activity, has gone from three to six, and there are eight times as many neurons per chip.

Despite containing thousands of individual neurons, Loihi chips aren't especially large. (Image: Intel)

But there are also some differences that are specific to Loihi's needs. Intel says it has gone through and optimized all the asynchronous hardware, giving Loihi 2 double the performance when updating a neuron's state and boosting the performance of spike generation ten-fold.

Other changes are very specific to spiking neural networks. The original processor's spikes, as mentioned above, only carried a single bit of information. In Loihi 2, a spike is an integer, allowing it to carry far more information and to influence how the recipient neuron sends spikes. (This is a case where Loihi 2 might be somewhat less like the neurons it's mimicking in order to perform calculations better.)

Another major change is in the part of the processor that evaluates the neuron's state in order to determine whether to send a spike. In the original processor, users could perform a simple bit of math to make that determination. In Loihi 2, they now have access to a simplified programmable pipeline, allowing them to perform comparisons and control the flow of instructions. Intel's Davies told Ars that you can specify these programs down to the per-neuron level, meaning that two neighboring neurons could be running completely different software.

Davies also said that the way each neuron handles its internal memory is more flexible. Rather than specific aspects—like the list of neurons spikes should be sent to—having a fixed allocation, there's a pool of memory that can be divided up more dynamically.
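As a loose illustration of two of those changes, integer-valued spikes and per-neuron programs, consider the sketch below. The structure and names are invented for illustration; Intel's programmable pipeline is a hardware feature, not a Python callable.

```python
# Hedged illustration of two Loihi 2 changes: spikes carry an integer payload
# (not just one bit), and each neuron runs its own small program to decide
# when it fires. Names and structure are invented, not Intel's pipeline.

def fire_on_threshold(potential: float) -> bool:
    return potential >= 1.0                 # classic threshold rule

def fire_on_band(potential: float) -> bool:
    return 0.5 <= potential <= 1.5          # a different, neuron-specific rule

class GradedNeuron:
    def __init__(self, weight: float, program):
        self.weight = weight
        self.program = program              # per-neuron spike condition
        self.potential = 0.0

    def receive(self, payload: int) -> bool:
        # Original Loihi: the payload was effectively always 1. Loihi 2: any
        # integer, so a single spike can carry magnitude information.
        self.potential += self.weight * payload
        return self.program(self.potential)

# Two neighboring neurons running completely different "software":
a = GradedNeuron(weight=0.25, program=fire_on_threshold)
b = GradedNeuron(weight=0.25, program=fire_on_band)
print(a.receive(3), b.receive(3))           # False True
```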

These changes do far more than let Loihi 2 execute existing algorithms more efficiently; they actually let the chip run algorithms that were a poor fit for the original processor.

And that brings us back to the question of how neuromorphic computing gets anything done.

From spikes to solutions

How do you actually solve problems using something like a Loihi chip? You can make some parallels to quantum computing. There, the problem you want to solve gets converted into a combination of how you configure a set of qubits and the manipulations you perform on them. The rules of the system—the physics, in the case of quantum computing—then determine the final state of the system. That final state can then be read out and translated into a solution.

For neuromorphic computing, the problem is set up by configuring the axons, which determine what neurons signal to what targets, as well as the code that determines when a neuron sends spikes. From there, the rules of the system determine how the spiking behavior evolves, either from the initial state or in response to further input. The solution can then be read out by examining the spiking behavior of different neurons. "Computation emerges from the interactions of the neurons," is how Davies put it.

Intel provided a concrete example of this in a paper it published back in 2018. The example problem it used is finding a set of features that can be used to approximate the content of an image, in the same way that a series of circles can approximate the head of Mickey Mouse. This can be done on Loihi by assigning each neuron a feature that it represents and then having its spiking activity influenced by whether it recognizes that feature in an image. As things proceed, the neurons signal to each other in a way that tones down the activity of anything that isn't recognizing a feature.

The end result of this competition is that the neurons that represent features present in the image will be actively spiking, while those that don't are relatively quiet. This can be read out as a feature list and the process started over again by feeding the system a new image. While it might be faster to reset the whole processor to its initial state before showing a second image, it shouldn't be necessary—the system is dynamic, so changing the input will mean changing the spiking behavior, allowing a new population of neurons to gradually assert itself.
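The "competition" here is essentially lateral inhibition: units whose features match the input stay active and suppress the rest. Below is a toy rate-based sketch of that dynamic, with invented numbers; it illustrates the idea, not the sparse-coding formulation in Intel's 2018 paper.

```python
# Toy lateral-inhibition competition: each unit is driven by how well its
# feature matches the input and inhibited by everyone else's activity.
# Invented numbers; illustrates the dynamic, not Intel's actual algorithm.
def compete(match_scores, inhibition=0.3, leak=0.5, rounds=20):
    activity = dict(match_scores)               # initial activity = match quality
    for _ in range(rounds):
        total = sum(activity.values())
        new = {}
        for feature, a in activity.items():
            drive = match_scores[feature] - inhibition * (total - a)
            new[feature] = max(leak * a + drive, 0.0)  # rates stay bounded, non-negative
        activity = new
    return activity

scores = {"circle": 0.9, "square": 0.2, "triangle": 0.1}
print({k: round(v, 2) for k, v in compete(scores).items()})
# {'circle': 1.8, 'square': 0.0, 'triangle': 0.0}: only the matching feature stays active
```

Feeding in a new image just means changing the match scores; the same dynamics then settle on a new set of winners, which is why a full reset isn't strictly necessary.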

Learning on the fly and more

This dynamic behavior makes for a contrast with trained neural networks, which are very good at recognizing what they've been trained on but not flexible enough to recognize something they weren't trained on. Davies described work they've done with Loihi to recognize gestures based on video input. He said that it's possible to get the system to recognize new gestures, training it on the fly without altering its ability to recognize gestures it was previously trained on. (This training won't tie the gesture to a specific action; the Loihi system just does the recognition and relies on other hardware to take actions based on that recognition.)

Davies says these sorts of abilities have a lot of potential applications in robotics. Mobile robots have to be flexible enough to recognize and adjust to new circumstances when they find themselves in a new environment. And any robot will see its behavior change as its parts wear down or get dirty, meaning its control systems have to adjust to new performance parameters.


Those are the sorts of things that are traditionally associated with AI systems (whether they involve spiking neurons or not). But Davies also said that there are some very different use cases where spiking systems perform well. One he mentioned was quadratic optimization, which helps with things like managing complex scheduling constraints (think of a nationwide rail system).

These can be solved using traditional computers, but the processing resources rise rapidly with the number of constraints. Loihi has shown promising results on finding optimized solutions with a fraction of the computational resources, and Davies said it's flexible enough to be configured to either find the optimal solution or more quickly find a solution that's within 1 percent of the best.
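For a feel of the problem class (and only the problem class; this is not Loihi's method), a quadratic optimization encodes constraints as penalty terms on binary variables and seeks the lowest-cost assignment. A brute-force toy, with an invented scheduling framing:

```python
from itertools import product

# Toy quadratic (QUBO-style) optimization: reward scheduling a task, penalize
# scheduling two conflicting tasks together, and search binary assignments
# for the lowest cost. Brute force for clarity only; this enumeration is
# exactly what blows up as constraints grow, which is where Loihi's approach
# is claimed to help. The framing and all values here are invented.
def cost(x, conflicts, reward=1.0, penalty=3.0):
    return -reward * sum(x) + penalty * sum(x[i] * x[j] for i, j in conflicts)

conflicts = [(0, 1), (1, 2)]        # task 1 conflicts with tasks 0 and 2
best = min(product([0, 1], repeat=3), key=lambda x: cost(x, conflicts))
print(best)                         # (1, 0, 1): schedule tasks 0 and 2, drop 1
```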

(Intriguingly, these are the same types of problems that run well on D-Wave's quantum annealing hardware. Davies said that Los Alamos was preparing a paper comparing the two.)

Waiting for the software

While spiking neural networks can be very effective at solving these sorts of problems, the challenge has often been finding the people who understand how to use them. It is a very different type of programming and requires an equally different way of thinking about algorithm development. Davies said that most of the people who are currently adept in it come from a theoretical neurobiology background (or are still in the field). So far, this has meant that Intel has mostly pushed Loihi into the research community, something that has limited its ability to sell the processor more widely.

But long term, Intel hopes to see Loihi derivatives end up in a broad range of systems, from acting as a co-processor in embedded systems to large Loihi clusters in the data center. For that, however, it will need to be easy for companies to find people who can program for it.

To that end, Intel is coupling the release of Loihi 2 with the release of an open source software framework called Lava. "Lava is meant to help get neuromorphic [programming] to spread to the wider computer science community," Davies told Ars. He went on to say that, in the past, Intel hasn't provided enough abstraction from the inner workings of Loihi; if you wanted to run software on it, you had to understand spiking systems in detail. Lava is a way of giving people the ability to work with Loihi-based systems without having to know those details.

Lava should be accessible today, and the first Loihi 2 boards will be made available to the research community via a cloud service shortly. A board with a single chip is being made available for evaluation purposes, and it will be followed by an eight-chip system called Kapoho Point later this year.

Listing image by Aurich Lawson | Getty Images | Intel


Google tells EU court it’s the #1 search query on Bing - Ars Technica

Let's see, you landed on my "Google Ads" space, and with three houses, that will be $1,400. (Image: Ron Amadeo / Hasbro)

Google is in the middle of one of its many battles with EU antitrust regulators—this time, it's hoping to overturn the record $5 billion fine the European Commission levied against it in 2018. The fine was for unfairly pushing Google Search on phones running Android, and Google's appeal argument is that search bundling isn't the reason it dominates the search market—Google Search is just so darn good.

Bloomberg reports on Google's latest line of arguments, with Alphabet lawyer Alfonso Lamadrid telling the court, “People use Google because they choose to, not because they are forced to. Google’s market share in general search is consistent with consumer surveys showing that 95% of users prefer Google to rival search engines.”

Lamadrid then went on to drop an incredible burn on the #2 search engine, Microsoft's Bing: “We have submitted evidence showing that the most common search query on Bing is, by far, 'Google.'"

Worldwide, Statcounter puts Google's search engine market share at 92 percent, while Bing is a distant, distant second at 2.48 percent. Bing is the default search engine on most Microsoft products, like the Edge browser and Windows, so quite a few people end up there as the path of least resistance. Google argues that, despite Bing being the default, people can't leave it fast enough, running a navigational query for "Google" to break free of Microsoft's ecosystem.

Google's argument that defaults don't matter runs counter to the company's other operations. Google pays Apple billions of dollars every year to remain the default search on iOS, which is an awfully generous thing to do if search defaults don't matter. Current estimates put Google's payments to Apple at $15 billion per year. Google also pays around $400 million a year to Chrome rival Mozilla to remain the default search on Firefox.


Nreal announces lighter, cheaper Nreal Air AR glasses - The Verge

Augmented reality company Nreal is launching a cheaper, iOS-compatible, more compact version of its smart glasses. The new Nreal Air glasses are supposed to ship starting in December 2021 across Japan, China, and South Korea. The price isn’t set, but Nreal says they’ll cost “a fraction of the price” of its earlier Nreal Light glasses, which started selling for around $600 last year.

Based on Nreal’s description, the new Nreal Air glasses have some core similarities with the Nreal Light glasses from 2020. Both are designed to look like relatively normal sunglasses and pitched as ideal for projecting a virtual big-screen display in front of your eyes. They’re both using micro OLED displays for their augmented reality optics and are powered by a phone via a tether cable. And they’re both aimed at consumers rather than businesses, researchers, or the military.

But Nreal Air glasses have a different feature set than their predecessors. Similar to Microsoft HoloLens or Magic Leap hardware, the original Nreal Light glasses could map physical space around you with a set of outward-facing cameras. Nreal Air glasses, by contrast, don’t have any outward-facing cameras. They can display video and phone apps, but they can’t see what’s around you, which means they don’t have the spatial awareness and hand tracking options the Nreal Light does. You’ll control them with a smartphone app, an option that’s also available on Light glasses.

The upside is that Nreal Air glasses are ironically much lighter than Light glasses at 77 grams instead of 106 grams. They don’t have the slightly bug-eyed look that Light glasses do — in product renders, they look more like Facebook and Ray-Ban’s smart glasses, minus the front-facing cameras. (The Ray-Ban Stories glasses, which have cameras but no AR display, weigh around 50 grams.) The new glasses let users tilt the lenses at three angles, making it potentially easier for more people to get a clearer image. Nreal Light glasses launched with support for specific 5G Android phones, but the new glasses will also work while tethered to iPhones and iPads as well as “most” Android devices.

Nreal Air AR glasses

Compared to the Light, the Nreal Air glasses also have a higher screen refresh rate of 90Hz and an increased pixel density of 49 PPD. Nreal says the glasses’ field of view is 46 degrees, compared to the Nreal Light’s 52 degrees — it equates the Air’s view with a 130-inch screen from 3 meters away or a 201-inch screen at 6 meters. If wearers have friends with Nreal glasses, there’s a viewing party option that turns that screen into a shared virtual theater where they can all watch the same media.

Nreal intends to expand the Air glasses’ rollout in 2022, and a spokesperson says the US is a “major market” for the company, although it hasn’t announced plans to ship there. As with the Light, it’s going to be selling the glasses in partnership with major phone carriers; it hasn’t named specific partners, but it’s previously worked with Germany’s Deutsche Telekom, Korea’s LG Uplus, and Japan’s KDDI.

An Nreal spokesperson says the company developed the Air after realizing that most users were primarily either using the glasses to watch streaming video (and to a lesser extent, browse the web) or to develop apps for the platform. In Korea, around 78 percent of users watched streaming content with the glasses. “Consumers today are seeking lighter, but longer lasting AR glasses exclusively for streaming media and working from home,” company founder Chi Xu said in a statement. According to Nreal, the lack of cameras is also supposed to reassure bystanders that the glasses don’t threaten their privacy.

Nreal is one of only a handful of companies making consumer smart glasses, and these results could hint at what people actually want from AR headsets. But Nreal also hasn't made a concerted play for experiences that mix the real and virtual worlds, a use case that other companies like Facebook have emphasized more heavily. Instead, it's focusing on something it already knows people love: binge-watching video.


Sonos Beam (second-gen) review: Atmos(t) a minor upgrade - The Verge

From the moment Sonos announced the second-generation Beam soundbar, it was evident that this sequel is more refresh than reinvention. The new Beam, available October 5th and now slightly more expensive at $449, is the same compact size as the 2018 original. The speaker drivers inside the unit are completely unchanged. Sonos has touched up the appearance by switching from a fabric covering on the first Beam to the company’s signature perforated plastic with finely drilled holes running along the entire front of the soundbar. And because it’s equipped with more processing power and eARC, the new model supports immersive Dolby Atmos audio.

But the target customer for the Beam hasn't changed one bit: this is a soundbar for people who want to upgrade their TV's lousy built-in audio — with the enticing side benefit of native integration with Sonos' multiroom audio system. At its price, the Beam is more expensive than entry-level soundbars from Vizio and the like. And if you're willing to spend double, you can get much larger, beefier Atmos soundbars like Sonos' own Arc or alternatives from Sony and Bose. The Beam is dwarfed in size by the Arc, and its sound performance doesn't approach the same level. But I can still see the appeal of choosing this one instead.

Maybe you’re in an apartment where more powerful speakers would agitate the neighbors. Maybe you don’t care about or feel the need to invest in premium-tier home theater audio: you just want to make your movies and TV shows sound noticeably better, and the Beam’s smart speaker functions and wide support for music streaming are just icing on the cake. If that’s what you’re after, the second-gen Beam does the job equally as well as the first. Slightly better, even.

Just don’t get your hopes up about the whole Dolby Atmos surround sound part. When reviewing the original Beam, Nilay said the key was to not overthink it. But by making Atmos a pillar of the second-gen model, Sonos is leaving room for people to do just that and come in with unrealistic expectations. As I’ll get into later, for all the work Sonos has put into virtualization and tuning to try to replicate Atmos’ enveloping height channels, it turns out there’s really no substitute for speakers that are pointed, well, up.

Aside from the sleeker perforated front side, everything else about the Beam’s external appearance is identical to the original. You’ve got the same capacitive touch playback controls on top — including a button to mute the built-in voice assistant mics — and the same ports on back: there’s HDMI, an ethernet jack, a connect button, and the power input. What’s new about the second-gen Beam is that Sonos has upgraded the HDMI port to support eARC, which enables Atmos and comes with other perks that often go unadvertised. For example, if you’ve got a TV with HDMI 2.1, you shouldn’t encounter any audio and video sync issues (even when gaming), which could be frustratingly common on the first Beam. That alone is a reason I’d buy this hardware over the original given the choice — even if you can still find the old Beam for a while.

Setup remains a relatively quick, easy process using the Sonos app for Android or iOS. I plugged the Beam into my LG CX OLED TV and opened the app; the new device was automatically recognized, and I held my phone near the soundbar to finish linking it to my system via NFC. iPhone owners still get exclusive access to Sonos' Trueplay feature, which uses the phone's mic to optimize the soundbar's audio output for whatever room it's in. Considering the Beam has its own mic array, why not just build automatic Trueplay into the thing? Android users can still take manual control of sliders for bass, treble, and loudness.

The Beam’s 40-percent faster processor allowed Sonos to add more “arrays” — the software that coordinates the playback and sophisticated phase algorithms between all the soundbar’s speakers — and the new ones are fully dedicated to surround sound and height effects. But remember, the acoustic architecture inside hasn’t changed from the first Beam. There’s a center tweeter, four midwoofers, and three passive radiators that Sonos says help to enhance lower frequencies. However, physics are physics, and we’re talking about a soundbar that’s barely over two feet wide; in other words, you’re going to want a Sub if you need growling, powerful bass. The Beam won’t get you there on its own, but neither does the Arc or most any standalone soundbar for that matter.

But the Beam is still a strong performer that genuinely surprises people given its size, and this one fares even better. It can fill most small to midsized living rooms (or bedrooms) without straining itself. And the presence and great stereo separation of the first model have carried over to the new device. The general surround virtualization effect is quite good: you can get lost in movies without it being obvious that all the sound is originating from the soundbar beneath the TV.

Watch a car chase or some fighter jet sequences, and you’ll hear that the Beam does an impressive job having audio “swoop” in from the left and right sides of a room — assuming your walls aren’t terribly far apart. The feeling of spaciousness is very real, and in A/B testing with the original Beam, this is where those new arrays are making the biggest difference. I’ve seen Sonos describe it as panoramic sound, and that seems right on.

The company has also made some tweaks that result in even clearer dialogue in regular listening, and the “speech enhancement” mode is still there if you need even more emphasis on what’s being said.

But as for the Atmos part? Meh. My bedroom has fairly low ceilings, but even when sampling go-to Atmos action scenes in movies, I can’t say I often picked up on any standout height effects. Don’t buy the Sonos Beam in hopes that it’ll legitimately sound as though audio is coming from above. It just doesn’t get there. You can bounce carefully phased sound waves off walls all you want, but it won’t hold a candle to a proper Atmos system with in-ceiling speakers. Even some owners of premium soundbars like the Arc that do have up-firing speakers don’t consider the Atmos aspect to be a game-changer. If your room conditions are ideal, you might get some hints of verticality, but there’s no real illusion of 3D.

Even if it’s low on Atmos magic, the Beam can be the start of a really great surround system if you tack on a Sub and other Sonos speakers as rear surrounds. (Opting for the Ikea Symfonisk bookshelf speakers is a popular way to save on the latter.) You’re looking at well over $1,000 to build a proper 5.1 setup, but if you see yourself staying in the Sonos ecosystem for a long time to come, it’s worth considering. Adding in extra speakers can quickly make you forget about the Beam’s size constraints. And you can always upgrade to an Arc down the line and keep everything else in place.

Everything else about the Beam can already be found on the first model. It's still totally competent as a music speaker (and the improved virtualization helps here too), though not as well-matched for that as something like the Sonos Five. You can choose either Google Assistant or Amazon Alexa as your preferred voice assistant, but not both at once, even though that's technically possible. (You can also just skip setting up a voice assistant if you'd rather avoid them.) The soundbar supports Apple's AirPlay 2, so you can play music, podcasts, or other audio to it from an iPhone, iPad, or Mac. And you can also send TV audio to your other Sonos speakers if you're so inclined.

At the center of everything is Sonos’ compatibility with pretty much every streaming audio service under the sun. The company has said it’ll add support for high-resolution audio and Dolby Atmos tracks from Amazon Music later this year, and hopefully the same will pan out for Apple Music. Sonos is also granting a longtime customer request with the introduction of DTS decoding, another feature coming to all of its soundbars in the coming months.

Sonos’ addition of Dolby Atmos to the second-generation Beam doesn’t magically turn it into some mind-blowing $450 soundbar. But it’s still a very good one made better by the new virtualization upgrades and its seamless integration with your other Sonos gear. That, combined with the company’s impressive commitment to software support, are still the biggest reasons to spend the extra cash on this instead of buying a perfectly good Vizio bar for less. The upgrade to eARC makes for a smoother, more dependable listening experience without any latency issues, and the new design will look better in your living room. But a lack of proper up-firing speakers limits the Atmos potential, and the fact that Sonos has reused so much of the hardware here makes the second-gen Beam feel like a half-step toward something more ambitious. I expect to see bigger things whenever the third generation comes along.

Photography by Chris Welch / The Verge


PlayStation Officially Acquires Bluepoint Games, Next Game Planned to Be an Original, Not a Remake - IGN

Sony Interactive Entertainment has announced yet another studio acquisition: Bluepoint Games, developer of the Shadow of the Colossus remake on PS4 and, most recently, the PS5 remake of Demon's Souls.

Bluepoint and PlayStation have worked closely together for years, but the news comes after the studio's latest successful release, as Sony confirmed Demon's Souls has sold more than 1.4 million copies. IGN spoke with PlayStation Studios head Hermen Hulst and Bluepoint president Marco Thrush to learn more about the acquisition, PlayStation's overall studio strategy, and how, though Bluepoint is steeped in PlayStation remaster and remake expertise, it wants to explore original ideas.

Bluepoint Wants to Make Original Games

Demon's Souls was only released last November, and while Bluepoint isn't officially announcing its next game, Thrush explained that the studio is aiming to work on original content going forward. There are no exact details about what that "original content" will be, so it remains unclear whether it's a new game within an existing IP or something new entirely.

"Our next project, we're working on original content right now. We can't talk about what that is, but that's the next step in the evolution for us," Thrush said, noting that, even with remakes like Shadow and Souls, the studio was already partially creating original content. He explained how, really, the growth of the studio, both in the literal number of employees as well as types of projects, naturally leads to this next step, especially given the team's pedigree.

"The transition from remasters to remakes was to test ourselves and push ourselves harder for the next step," Thrush said, noting the team was at about 15 people during the production of the original God of War collection, right now is at about 70 employees, and grew to 95 people at its peak during Demon's Souls (with outsourcing work, too).


"Our team is a very highly experienced team, the average experience among most people is about 15 years, and all of them come from original development. It's not like we're a bunch of developers that got trained up on making remasters and remakes. We have that original game development mindset in our hearts, and that's what we're now ready, finally ready with the support of Sony to push forward and show what we can do, and show what PlayStation can do," he said.

And though the potential is exciting for Bluepoint to be tackling its own game, don't expect to see it too quickly. The studio has had a surprisingly quick turnaround on its games, having worked on five PlayStation remasters or remaster collections and several ports over the last decade, while moving from remasters in 2015 to Shadow in 2018, and then Demon's Souls in 2020.

"When we're working on a remaster, on a remake, we're very, very fortunate and that we basically, the original team finishes the game, we get handed that game, and then we got to polish it for a few years," Thrush said, noting that that "polish" is, of course, a lot of work and original art and design in its own right.

"You're starting out with the blueprint, right? True original development, there's a blueprint, you execute on it, and then it's not fun and you throw it away and you start over. So yes, by definition, my default answer is going to be original development, of course, takes longer. It has to, otherwise, you wouldn't make a good game."

And given PlayStation's recent commitments to being willing to delay games to let teams achieve their vision on a reasonable schedule, Hulst says that will be true for whatever Bluepoint and Sony's various other studios make.

"It's always about making quality games in a way that's sustainable for the teams, for the individuals on the teams. Because obviously when we acquired team like Bluepoint, this is a long-term play for us, right? We're not in it to get some quick results," Hulst said, explaining that, in short, recent delays of games like Horizon Forbidden West and God of War Ragnarok

aren't cause for concern.

"We're very happy actually with development progress that I feel good about the decision that we made there [with Horizon and God of War]. And it's very much the mindset that it's people first. We are a people business. Everything we do is about the developers, their health, their creativity, their wellbeing."


Why PlayStation Acquired Bluepoint, and Why Bluepoint Wanted to Be Acquired

Though PlayStation and Bluepoint have been working together for years, Bluepoint has remained independent all that time. That has now changed, of course, and Hulst and Thrush explained why the two decided to make the merger official and bring Bluepoint under the PlayStation Studios banner. And it largely came down to wanting to make that working relationship as beneficial to both sides as possible to let the studio produce its best work.

"Bluepoint is now in a place where there's hardly an entity imaginable that knows PlayStation better than they do, because they've worked with so many different teams on their respective, iconic franchises that they've had a developer insight in a wonderful way," Hulst said, explaining that he let the team finish up Demon's Souls before discussions really began about the acquisition.

"We've expressed that probably better together, making sure that Bluepoint can focus on their games, can focus on what they do best, making amazing worlds, wonderful character development, and make use of all the resources that we have got to offer," Hulst said.

And from Thrush's perspective, the two sides have worked so well together, making the acquisition happen really just allows them to continue doing so without any red tape getting in the way.

"We've loved working with PlayStation all these years. There's really nobody else we want to rather work with, so we started talking to these guys and it just happened to work out," Thrush explained. "And now our future is extremely bright. As Hermen was saying, we have all these opportunities ahead of us. We have all the Sony support. We don't have to grow to become a gigantic studio. We have lots of helping hands on the Sony side now that can fill in for any gaps and maintain our studio culture."

As for when the deal came together, Hulst explained that talks largely occurred after Demon's Souls was released, so that the team could keep its focus on delivering that PS5 exclusive. The two sides saw eye to eye on why the acquisition would be beneficial and, to put it simply, it allows Bluepoint, and Thrush as the studio's president, to focus more on creating the experiences they want to and not have to worry as much about the security of the team as a whole.

"I've also in my past run an independent studio, and I realized that the amount of work you need to do, even when you have close partnerships, on business acquisition and making sure you hedge your bets, there's a lot of energy that goes into that," Hulst elaborated. "I know that if we take that off of Marco's plate and let him focus on what he wants to focus on with his team... then I think that's good for both parties. It's good for them because they get to do what they love most, and it's great for us because there's even more focus by Bluepoint on what we want. And that is amazing content, amazing games to come out of Blueprint."

Thrush echoed this sentiment, noting the opportunities the studio has had for past games, like the ability to hire the London Symphony to score Demon's Souls, or being able to rely on other PlayStation assets, such as already established motion capture studios and more.


And though PlayStation has been on a bit of an acquisitions spree lately - Firesprite, Nixxes, and Housemarque have all also been acquired as first-party studios this year - Hulst explained Sony's recent approach is born from a desire to let these teams do their best work with the resources of PlayStation at their disposal.

"The way we look at our group of studios, and we now have 16 internal teams as part of PlayStation Studios, is very much the way we look at our games. It needs to be right, it needs to fit what we're about in qualitative terms, it's got to be the right games. Same with the teams. The teams stay have to have a very collaborative mindset," Hulst said. "They need to be quality-oriented. We're not buying teams to just be bigger. We're only buying teams because we feel that together, we're going to make something that is going to be even better than if we did it separate from one another."

PlayStation isn't necessarily going to stop looking at potential acquisitions, Hulst explained, but they need to be studios that both share the same values, and can expand what's offered to PlayStation players.

"We are open always to building new relationships or bringing people in-house, but only if we adhere to the quality-first mentality and the right kind of innovative content, new experiences, diverse experiences. Because all of these teams, they share a lot, but they're also very different from one another, and that's what I really like," Hulst said. And I think that's what the PlayStation audience, the PlayStation fans, deserve, it's that diverse slate of games coming out of PlayStation Studios."

Jonathon Dornbush is IGN's Senior Features Editor, PlayStation Lead, and host of Podcast Beyond! He's the proud dog father of a BOY named Loki. Talk to him on Twitter @jmdornbush.


Wednesday, September 29, 2021

Doctor uses iPhone 13 Pro’s Macro camera to check patients’ eyes - 9to5Mac

One of the new features of the iPhone 13 Pro is a Macro mode for capturing very close-up photos and videos with the camera. While most users have been using the new mode to capture details of nature, Dr. Tommy Korn has discovered that the iPhone 13 Pro's Macro camera can also be useful in eye care.

In a LinkedIn post, the ophthalmologist shared how he has been using his new iPhone 13 Pro Max to check patients' eyes with the new camera. Thanks to the Macro mode, Korn can take extremely detailed photos of the eye, which lets him observe and record important details about a patient's health.

The doctor describes the case of a patient who had a cornea transplant and now needs regular checks to confirm that an abrasion is healing.

Been using the iPhone 13 Pro Max for MACRO eye photos this week. Impressed. Will innovate patient eye care & telemedicine. forward to seeing where it goes… Photos are from healing a resolving abrasion in a cornea transplant. Permission was obtained to use photos. PS: this “Pro camera” includes a telephone app too!

Korn and optometrist Jeffrey Lewis both argue that the feature should prove quite useful in pushing telemedicine forward.

Dovetails with the overall move toward virtual, slowly overcoming imaging barriers. Yet another way to impress, manage, nurture long-term relationships with our patients.

Despite the new camera mode, Apple has not added a lens specifically for macro shots. Instead, the iPhone 13 Pro and iPhone 13 Pro Max have an upgraded ultra-wide lens with a larger f/1.8 aperture and a 120-degree field of view that is capable of capturing macro images from as close as 2 centimeters.

You can learn more about taking macro photos and videos with iPhone 13 Pro with our guide here on 9to5Mac.


Google's encryption-breaking Magic Compose AI proves iPhone shouldn't support RCS messaging - BGR

For years, Google has been dying to come up with an iMessage equivalent, a key iPhone feature that’s probably responsible for stealing plent...