More stories

  • Photonics startup Ayar Labs receives $35 million funding to interconnect massively parallel computers

    Ayar Labs’s first demonstrated device is a 2-terabit-per-second transceiver that sits in a package with an FPGA and converts the chip’s electrical bits into lightwaves to be sent out over optical fiber.
    Image: Ayar Labs
    The age of chips connected via beams of light is upon us, according to Charles Wuischpard, chief executive of Silicon Valley startup Ayar Labs, which has received $35 million in new funding, bringing its total funding to date to roughly $60 million.
    Silicon photonics, the long-promised age of chips that do away with copper wires, is coming into focus as massively parallel computer systems require a way to simplify the wiring that joins multiple chips together. 
    “What if every Xeon CPU in the data center was optically connected versus through copper on the motherboard today?” Wuischpard offered, in an interview with ZDNet via Zoom.
    “Nvidia spends a lot of money putting sixteen GPUs in their DGX box,” Wuischpard continued. “But in order to expand, and have maybe 256 GPUs addressable in a box, you’re going to need to go to an optical interconnect to enable that.”
    New investors in the Series B round include Applied Ventures, the venture capital arm of chip equipment giant Applied Materials; Castor Ventures; and Downing Ventures. They join existing investors Intel, Lockheed Martin, GlobalFoundries, BlueSky Capital, and Playground Global.
    Wuischpard knows a thing or two about both optical and massively parallel computing. He ran the supercomputing unit at Intel for several years before coming on board Ayar at the end of 2018. Intel is an investor and a customer, and to some extent, a potential competitor, given that Intel has its own silicon photonics efforts.
    Ayar, which was founded by MIT scholars in 2015, has done pioneering work in converting electrical signals to optical signals, to move bits from processors into fiber-optic links. These electro-optic transducers could replace copper wiring in many instances, opening up a world of optically connected chips. 

    The focus on computing, however, is surprising. For most of its existence, it was thought that Ayar would find its principal market in connecting routing and switching chips in data center networking equipment, as optical link speeds move to 200 gigabits per second, 400 gigabits, and beyond.

    That opportunity still exists, but Wuischpard has seen the more-immediate opportunity to connect processors that need to scale to thousands of devices per motherboard. 
    “I think what’s happened is that where I saw the opportunity, maybe on the horizon, if you look at the moves made in the industry, plus the architectural statements from all the big guys, everyone sees this as a big opportunity to advance Moore’s Law, price, performance, etc.”
    The company in early 2019 demonstrated a simple 100-gigabit electrical-to-optical converter, and then moved on to developing a chiplet, a part meant to sit next to a processor inside a package. In the demonstration version, the converter is packaged with a Stratix field-programmable gate array, or FPGA, from Intel. “It’s a very complex package, 2.5-D, with fibers attached,” noted Wuischpard.
    This spring, the company developed a fuller version that can handle over two trillion bits per second, the equivalent of 80 PCIe Gen 5 connections, “all in one little chiplet that’s a few millimeters across,” said Wuischpard. 
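    A quick back-of-the-envelope check on those figures (a sketch using only the numbers quoted above; the per-link rate is inferred, not an Ayar specification):

    ```python
    # Sanity check on the quoted TeraPHY figures: roughly 2 terabits per
    # second described as the equivalent of 80 high-speed connections.
    total_bps = 2e12      # ~2 Tb/s aggregate, per Wuischpard
    links = 80            # quoted equivalent connection count

    per_link_gbps = total_bps / links / 1e9
    print(f"Implied rate per connection: {per_link_gbps:.0f} Gb/s")  # 25 Gb/s
    ```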
    Obviously, that ability to condense tons and tons of wire into a single photonic link is a dramatic reduction in space, power and complexity that could make for much denser, more integrated computer systems. 

    Or, it could lead to much more powerful rack-based server systems. Ayar has been developing a prototype massively parallel computer system with Intel.
    “It’s an AI application, a brand-new machine, that’s a prototype of the future, essentially,” Wuischpard said.
    “Think of 5,000 CPUs, each with their own local memory, but with low-latency interconnect, an all-to-all fabric, although not through a switch.”
    The optical interconnect enables each CPU to see all the memory in the system. “It’s unified memory but it’s physically disaggregated,” Wuischpard explained.
    Some simulations on the machine show on the order of 1,000 times improvement on some workloads versus what can be accomplished in today’s machines, said Wuischpard.
    The company is currently taping out the latest version of its TeraPHY chiplet, and is under contract to deliver several thousand units in 2021, with full production intended to ramp in 2022.
    Wuischpard sees the entire world of hybrid computing moving in Ayar’s direction.
    “Without putting too fine a point on it, in order to really leverage synergy between AMD and Xilinx, you’re going to need a very fast interconnect between CPU, GPU, and FPGA combinations, either in package or over distance.”
    The market for chip M&A tends to reinforce the optical thesis, Wuischpard told ZDNet. Chip maker Marvell Technology Group last week said it would purchase fiber-optic component vendor Inphi for $10 billion in cash and stock at a 41% premium to Inphi’s stock price.

  • Committee waves Australian spectrum reform changes through

    Image: Chris Duckett/ZDNet
    The Senate Standing Committee on Environment and Communications handed down its report into the Radiocommunications Legislation Amendment (Reform and Modernisation) Bill 2020 on Wednesday, making a sole recommendation that the Bill be passed.
    “The committee considers that it is of the utmost important [sic] that these Bills are passed as soon as practicable in order to ensure certainty for industry and to legislate long-awaited changes to the market,” the committee said in its report.
    “The Bills are the products of a highly consultative process that represents a best-case example of considered, informed, and collaborative regulatory change.”
    Industry and the public broadcasters had raised concerns that the Bills give the Australian Communications and Media Authority (ACMA) too much information-gathering power, which could potentially force the ABC and SBS to disclose commercially sensitive information about future spectrum use, and industry had also called for the renewal process to begin five years out from the end of the new 20-year licence terms. Nevertheless, the committee said that a balance had been struck between “technological realities, industry needs, and regulatory stability”.
    “The committee is of the view that the proposed expansion of the ACMA’s powers will be a significant improvement to the current radiocommunications regime, empowering the ACMA to take civil action where necessary and to manage unintentional spectrum interference in a proportionate manner,” it said.
    “Given the ACMA’s role as a regulatory agency and its exemplary past conduct, it is the committee’s view that the ACMA’s wider remit of powers is unlikely to pose a risk to the commercial practices of the national broadcasters or broadcasters more generally.”
    Although the Reform and Modernisation Bill is labelled with the year 2020, it stems from a process that kicked off in 2015.

    In additional notes at the end of the report, Labor Senators Nita Green and Catryna Bilyk said that while they backed the changes, they were dissatisfied with the three-week consultation period on the exposure draft and delays in getting to this point.  
    “The delay means the ACMA has conducted spectrum auctions without the benefit of the streamlined approach that was identified as a key area in need of reform,” the pair said.
    “Labor Senators note that despite years of delay, the Bills do not address all of the recommendations of the spectrum review, and that spectrum reform is yet another example of the Liberal National government failing to do what it said it would.
    “Labor Senators are concerned the Government has missed an opportunity to ensure sufficient flexibility for the ACMA or the government to de-fragment spectrum licensed holdings where existing configurations represent a very wasteful use of spectrum, an issue that is growing even bigger in future as technical standards evolve.”
    The pair also called out the government for wanting community TV broadcasters to move solely to streaming without an alternative planned use for the spectrum that would be freed.
    Also on Wednesday, Communications Minister Paul Fletcher said the government would hold a pair of 5G spectrum auctions.
    In April, the 26GHz band would be up for sale, followed by the 850/900MHz band in the second half of the year, he said.
    Fletcher’s department also distributed a quintet of “5G facts” in a mild effort to dissuade people from believing nonsense conspiracy theories related to the technology.
    Beyond pointing out that the millimetre-wave frequencies used in 5G small cells can run at lower power than previous mobile generations, and therefore have lower electromagnetic energy (EME) emissions, the factsheet also addressed other sources of EME.
    “Natural EME is generated by the sun, earth, atmosphere and even the human body,” it said. “Think anything wireless or remote controlled — TV, radio, radar, weather forecasting, microwaves, laptops, smart devices, mobile phones, and Wi-Fi. Many other everyday items also generate electromagnetic energy — such as electrical power, light bulbs, fridges, ovens, irons, and vacuum cleaners.
    “Telecommunications is part of the electromagnetic spectrum but it is not, and has never been, the only source of EME.”

  • CSIRO's Data61 and Ceres Tag to develop smart satellite-linked pet tracking collar

    ZDNet Australia’s cat Boston emerging from his space ship for snacks.
    Image: Asha Barbaschow/ZDNet
    The Commonwealth Scientific and Industrial Research Organisation’s (CSIRO) Data61 has partnered with agtech firm Ceres Tag to develop a prototype smart collar to enable pet owners to track and locate their animals in real-time.
    Dubbed the companion collar, the prototype will use a combination of Data61’s embedded intelligence platform and Bluetooth technology to track a pet’s location within an established boundary, such as inside the home, but it will automatically switch to GPS location and satellite communications when the pet wanders outside of network range. The information is fed to the owner in real time via an app.
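    A minimal sketch of that switching behaviour as described (the function name, inputs, and logic are hypothetical; the actual Data61 platform has not been published):

    ```python
    # Hypothetical sketch of the companion collar's described behaviour:
    # Bluetooth inside the established boundary, GPS + satellite outside.
    def select_tracking_mode(inside_boundary: bool, bluetooth_in_range: bool) -> str:
        if inside_boundary and bluetooth_in_range:
            return "bluetooth"       # low-power tracking inside the home
        return "gps+satellite"       # automatic fallback once the pet wanders off

    print(select_tracking_mode(True, True))    # bluetooth
    print(select_tracking_mode(False, False))  # gps+satellite
    ```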
    According to Data61 senior research engineer Phil Valencia, unlike other smart trackers, this prototype will use a combination of technologies, rather than just the one, plus it will not require a mobile plan.
    “Many devices only employ Bluetooth or Wi-Fi-based tracking, which often involve a community of people ‘listening’ on their phones and sharing their location data with a service in order to report the tracking device. This method is also only suitable for short-distance monitoring,” he said.
    At the same time, Valencia said the collar will be able to monitor a pet’s behaviour, out-of-the-ordinary activity, and health metrics.
    “Owners will get valuable insights into how their pet has behaved throughout the day, with the system identifying if the animal’s activity is above or below its typical levels, and whether it was significantly different at a certain time of day,” he said.
    The prototype builds on existing work between CSIRO and Ceres Tag where the pair have developed smart ear tags (e-tags) for tracking livestock, such as cattle, to give farmers greater transparency. The smart e-tags are expected to be a commercially viable product by early 2021.

    Ceres Tag chief operating officer Lewis Frost believes the collar, much like the e-tag, has the potential to improve the health and welfare of domestic animals.
    “Ceres is leveraging all its learnings from the livestock smart tag development to create a superior product in the companion animal market utilising the skills of our very capable development team,” he said.
    The smart collar project is being funded as part of CSIRO’s Kick-Start program, designed to provide dollar-matched funding of between AU$10,000 and AU$50,000 to local startups to help grow and develop their business.
    On Thursday, CSIRO also launched a report signalling how science and technology are “more critical than ever”, particularly in the context of a weak Australian economy and the COVID-19 pandemic.
    “Science and technology have always played a key role in supporting Australia’s growth and productivity, with examples in this report like Cochlear hearing implants, Google Maps, canola for biofuel, PERC solar cells, and x-ray crystallography,” CSIRO futures lead economist Katherine Wynn said.
    “But as investment in innovation has dropped in recent years, we’ve seen our economy start to slow and weaken, and now we’ve been hit with COVID-19, so science and technology are more critical than ever.
    “If businesses act now, there are plenty of opportunities to enhance how they navigate the innovation cycle and realise greater value from their investments, including improved productivity, protection from market shocks, stronger international competitiveness, and social and environmental benefits.”
    The Value of science and technology report [PDF] identified there were, however, still barriers to adopting new technologies. These include declining investment in innovation; a lack of research and development commercialisation; a widening skills and knowledge gap; reluctance by local businesses to keep up with overseas competitors; and wariness among Australians to invest in technology that could be seen as automating jobs and resulting in job losses, or widening the gap between the most and least profitable businesses.

    CSIRO’s cycle illustrates there are four core stages and eight steps that Australia can take to ensure innovation delivers value and impact. 
    Image: CSIRO
    In identifying these barriers, the report suggested opportunities to overcome them, such as by bringing research organisations, government, and industry together to jointly develop and invest in innovation, particularly in areas such as advanced manufacturing, medical technologies, and quantum technologies.
    At the same time, the report suggested that Australian businesses needed to also look at international collaboration opportunities to remain competitive.
    The report stated that by eliminating silos and encouraging greater collaboration, new skills could be developed, and more information sharing could happen in the process.
    In August, CSIRO announced it was committing AU$100 million annually to deliver Australia’s new missions program to help the country emerge from COVID-19 in a resilient way.
    The plan, known as Team Australia, comprises large-scale, major scientific, and collaborative research initiatives, led by CSIRO.
    “Each mission represents a major scientific research program aimed at making significant breakthroughs, not unlike solving Prickly Pear, curing the rabbit plague, inventing the first flu treatment, or creating fast Wi-Fi,” CSIRO chief executive Larry Marshall said at the time.
    “But let me stress, these are not just CSIRO’s missions.
    “Their size and scale require us to collaborate widely across the innovation system, to boldly take on challenges that are far bigger than any single institution.”
    Marshall said CSIRO is working with government, universities, industry, and the community to “co-create and deliver these missions”. 

  • Full fibre cheaper for on-demand FttN upgrade model: NBN CEO

    The company responsible for the National Broadband Network (NBN) has said it would upgrade customers from fibre to the node (FttN) to fibre to the premise (FttP) because it is cheaper than alternatives such as fibre to the curb (FttC).
    Speaking to the joint parliamentary committee examining the NBN business case, CEO Stephen Rue said the traditionally more expensive FttP works out cheaper than FttC for on-demand upgrades because FttC requires a distribution point unit (DPU) that services up to four homes, with the cost of the unit spread across the homes connected to it.
    But if only one home connects, FttC does not remain cheaper.
    “When we look at an on-demand model, fibre to the premise is more cost-effective because you simply don’t know if your neighbours are also going to sign up,” Rue said.
    “If you’re upgrading … one individual home, fibre to the premise will then be cheaper, but obviously if you then end up doing three, it’s more expensive.”
    Rue added that across the FttC footprint, distribution points service approximately 3.1 premises on average.
    Chief operations officer Kathrine Dyer added it was often the case that when placing a DPU into a pit, the pit also needed remediation.
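    Rue’s break-even logic can be sketched with placeholder numbers (the dollar figures below are illustrative assumptions, not NBN costings):

    ```python
    # Illustrative only: a shared DPU makes FttC cheaper per premise at high
    # takeup, but a single on-demand order wears the whole DPU cost alone.
    DPU_COST = 1000      # hypothetical distribution point unit cost
    FTTC_LEADIN = 400    # hypothetical per-premise FttC connection cost
    FTTP_LEADIN = 1100   # hypothetical per-premise full-fibre lead-in cost

    def fttc_cost_per_premise(homes_connected: int) -> float:
        return DPU_COST / homes_connected + FTTC_LEADIN

    for homes in (1, 2, 4):
        fttc = fttc_cost_per_premise(homes)
        cheaper = "FttC" if fttc < FTTP_LEADIN else "FttP"
        print(f"{homes} home(s): FttC ${fttc:.0f} vs FttP ${FTTP_LEADIN} -> {cheaper}")
    ```

    With one home, the full DPU cost lands on a single order, which is the scenario Rue says tips the economics toward FttP.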

    Under the plan announced in September, NBN is expecting around 200,000 premises to shift from FttN to FttP. This includes premises that are upgraded whenever an order is placed for a service that the copper lines are not capable of delivering. The first orders are expected to occur in the second half of 2021.
    However, NBN has yet to determine how, or whether, it would prevent customers from ordering a service, getting the upgraded lead-in, and then reverting to their prior, slower speed tier.
    “It may be that that’s the edge case, and we don’t put a rule in for edge cases, or it may be we think that’s a greater potential, in which case we need to put some sort of rules,” Rue said.
    “We need to work that through to be perfectly honest.”
    People not lucky enough to have received a full fibre connection have had an option to pay NBN for a bespoke build, leading to cases such as one person paying AU$217,600 to have a fixed wireless connection upgraded to FttP.
    Rue said for people who have placed an order under the technology choice program but whose build was yet to commence, the company would offer the choice to end the application and have the money refunded. For those whose upgrade has been completed, no refund would be offered.
    “Firstly, they may not be part of the footprints that are selected and, secondly, they are already receiving those benefits and it may be many years before we build out so we’re not proposing to refund that,” Rue said.
    Earlier on Wednesday, the Australian Competition and Consumer Commission (ACCC) handed down its final report into the affordability of NBN’s 12/1Mbps plans.
    In the main, the ACCC said it was satisfied with what NBN was offering to retailers in its Wholesale Broadband Agreement 4 (WBA4), echoing comments made last week by chair Rod Sims.
    “We consider that the market conditions are such that NBN Co’s proposed access arrangements for those matters raised in the inquiry will address many of the issues of concern raised by the ACCC in its wholesale service standards inquiry,” the report said.
    “While the proposed arrangements from NBN Co differ from the positions we set out in the draft [final access determination] in certain respects, we consider that they are a marked improvement over the current WBA3 terms and should result in improved end-user outcomes.”
    NBN’s wholesale price for its entry-level 12/1Mbps plan has been set at AU$24.70 from December to April 2021, before dropping to AU$22.50 from May to November next year.
    As flagged last year, new daily rebates from NBN for late connections and fault rectifications will be introduced, with the rebate for missed appointments moving up to AU$50 for the first strike and shifting to AU$75 for subsequent misses. Rebates will also be extended to business-grade services.
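    As described, the missed-appointment rebate is a simple two-tier schedule (a sketch; the function and its inputs are mine, not NBN’s):

    ```python
    # Two-tier missed-appointment rebate as described: AU$50 for the first
    # strike, AU$75 for each subsequent miss.
    def missed_appointment_rebate_total(misses: int) -> int:
        if misses <= 0:
            return 0
        return 50 + 75 * (misses - 1)

    print(missed_appointment_rebate_total(1))  # AU$50
    print(missed_appointment_rebate_total(3))  # AU$200
    ```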
    The ACCC also ran up the flagpole the idea of having the Commonwealth introduce subsidies to get disadvantaged households onto higher speed plans.
    “We acknowledge that the 12/1 Mbps access product might not adequately support the use of all online applications by some larger households, and the additional price of a higher speed plan may not be affordable for some larger households from a disadvantaged background,” the report said.
    “We consider that these issues are better considered via direct government assistance that can be targeted more directly at eligible households rather than via an internal cross subsidy.”
    The review did not resolve the ACCC’s concerns about NBN relying on discounts as its main method of pricing, however, with the consumer watchdog saying it removed pricing certainty from retailers.
    “We consider that the potential benefits of NBN Co having an opportunity to trial revised pricing arrangements ahead of making longer-term pricing commitments needs to be balanced against the potential for this flexibility to be abused to in effect supplant the certainty measures NBN Co has itself proposed in this inquiry,” the ACCC said.
    “If the latter were to occur, it would likely have implications for our approach to this issue in any future regulatory reviews.”
    The ACCC added that NBN and the federal government should look at whether the 6Mbps congestion threshold on fixed wireless connections needed raising. Fixed wireless congestion is defined by NBN as having a 30-day average busy hour throughput of under 6Mbps.
    “While we acknowledge stakeholder views that the metric for measuring fixed wireless speed performance remains too low to provide an acceptable standard and should be extended, we note that 6Mbps is NBN Co’s design standard for the fixed wireless network and the purpose of the fixed wireless rebate is to incentivise NBN Co to meet its own design standard on an ongoing basis,” the report said.
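    That definition reduces to a simple test over daily busy-hour samples (a sketch of the stated metric; the data layout is an assumption):

    ```python
    # NBN's stated metric: a fixed wireless cell is congested when its
    # 30-day average busy hour throughput falls under 6Mbps.
    def is_congested(daily_busy_hour_mbps: list[float], threshold: float = 6.0) -> bool:
        assert len(daily_busy_hour_mbps) == 30, "metric uses a 30-day window"
        return sum(daily_busy_hour_mbps) / len(daily_busy_hour_mbps) < threshold

    print(is_congested([5.0] * 30))  # True: average sits below 6Mbps
    print(is_congested([7.5] * 30))  # False
    ```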
    Retailers that wanted the ACCC to force NBN to offer voice-only plans will be disappointed, as the watchdog said a cheap mobile plan is a “suitable substitute”.
    Earlier this week, NBN purchased Speedcast Managed Services, a subsidiary of Speedcast International that built and operates NBN’s satellite network.
    “Under the sale, Speedcast Managed Services employees, assets and equipment now revert to its sole client, NBN Co, to support the ongoing requirements of the National Broadband Network,” Speedcast told the ASX. 
    “Accordingly, the Master Equipment and Services Supply Agreement (MESSA) signed between NBN Co and Speedcast on 2 February 2018 will come to an end with immediate effect.”
    That contract was a decade-long deal worth AU$184 million.

  • Windows 10 bug: Certificates lost after feature upgrade? We're working on fix, says Microsoft

    Microsoft has confirmed reports that Windows 10 is losing system and user certificates when computer owners upgrade to a newer version of the operating system. 
    User reports emerged a week ago about the forgotten-certificate glitch, which happens when upgrading to a higher Windows 10 build, as reported by Borncity at the time. Users report certificates being lost when upgrading to multiple versions of Windows 10.
    Microsoft has now confirmed that system and user certificates might be lost when upgrading from Windows 10 version 1809 to a later version. 


    However, the company notes there are several preconditions for the lost-certificate issue to manifest itself when upgrading.  
    “Devices will only be impacted if they have already installed any latest cumulative update (LCU) released September 16, 2020 or later and then proceed to update to a later version of Windows 10 from media or an installation source which does not have an LCU released October 13, 2020 or later integrated,” Microsoft explains. 
    The LCU refers to the non-optional security update that Microsoft releases the second Tuesday of each month, aka Patch Tuesday. 
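    Microsoft’s preconditions boil down to comparing the LCU date already on the device against the LCU date integrated into the upgrade media (a sketch paraphrasing the quoted guidance; the function is illustrative):

    ```python
    # Paraphrase of Microsoft's stated preconditions: certificates are at
    # risk only if the device has an LCU from 2020-09-16 or later AND the
    # install source lacks an integrated LCU from 2020-10-13 or later.
    from datetime import date

    def certificates_at_risk(device_lcu: date, media_lcu: date | None) -> bool:
        has_september_lcu = device_lcu >= date(2020, 9, 16)
        media_is_stale = media_lcu is None or media_lcu < date(2020, 10, 13)
        return has_september_lcu and media_is_stale

    # Patched device upgraded from September-era media: at risk.
    print(certificates_at_risk(date(2020, 10, 13), date(2020, 9, 8)))    # True
    print(certificates_at_risk(date(2020, 10, 13), date(2020, 10, 13)))  # False
    ```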
    As one user on Reddit noted, losing user or system certificates in Windows is a real problem, especially now because of working from home requirements during the pandemic. Most VPNs rely on these digital certificates to function.   

    The forgotten-certificate issue happens mostly when managed devices are updated using “outdated bundles or media through an update management tool such as Windows Server Update Services (WSUS) or Microsoft Endpoint Configuration Manager.”
    However, it might also happen “when using outdated physical media or ISO images that do not have the latest updates integrated”. 
    The impact should be fairly narrow since the issue doesn’t affect devices that connect directly to Windows Update or devices that use Windows Update for Business.
    Microsoft is working on a fix and will provide updated bundles and refreshed media in the coming weeks. 
    However, the company does offer a workaround, which involves rolling back to the previous version of Windows within the 10- to 30-day uninstall period.
    Affected Windows 10 versions include versions 20H2, 2004, 1909, and 1903, as well as their corresponding Server versions.

  • SpaceX's Starlink: Beta tester reveals more about Elon Musk's internet from space service

    An early Starlink public beta tester has shared his experience of taking his new UFO-on-a-stick terminal dish to a remote area to find out whether the satellite service lives up to SpaceX CEO Elon Musk’s claims. 
    Reddit user Wandering-coder shared his account and a series of photos of Starlink in action with a 300W battery power supply while in a remote national forest this week.


    Wandering-coder told Ars Technica that the national forest was in Idaho, where he was getting 120Mbps download speeds in a location that Google Fi’s T-Mobile- and US Cellular-based service doesn’t reach. 
    SEE: Managing and troubleshooting Android devices checklist (TechRepublic Premium)
    “Works beautifully,” wrote Wandering-coder. “I did a real-time video call and some tests. My power supply is max 300W, and the drain for the whole system while active was around 116W.”
    Wandering-coder’s forest experiments test Musk’s statements in July about how easy the end-user terminal dishes are to install and the conditions they needed – a wide view of the sky – to receive internet from Starlink satellites orbiting about 550km, or 342 miles, above Earth.  
    “[The] Starlink terminal has motors to self-orient for optimal view angle,” wrote Musk. “No expert installer required. Just plug in and give it a clear view of the sky. Can be in garden, on roof, table, pretty much anywhere, so long as it has a wide view of the sky.”

    Wandering-coder confirmed this requirement after placing the terminal under a heavy tree canopy.
    “It didn’t work well with a heavy tree canopy/trees directly in the line of sight, as expected,” he wrote. “I would be connected only for about five seconds at a time. Make sure you have as clear a view of the sky as possible!”
    The Starlink public beta service as it stands today costs $100 a month plus a $499 setup fee for the user terminal, tripod, and Wi-Fi router. Wandering-coder was surprised by the low cost of the user terminal given its quality.
    While $499 is high compared with standard equipment for fixed-line broadband end-user equipment, it’s possible the terminal in beta is being offered below cost. Musk in May said the biggest challenge is getting the user terminal cost to an affordable level. 
    According to Wandering-coder, Starlink’s antenna alone seems like it should cost thousands of dollars.
    “Everything is of an extreme build quality, and this works significantly better than I had ever imagined. It feels like it’s from the future. Given a top-tier cell phone costs in the $1,000 range, I am completely amazed I have my hands on a setup like this for ~$500, so I am biased positively towards this service,” he wrote. 
    “The antenna itself seems like it should be many thousands of dollars, so I just want to share how fortunate I feel to have access to this.”
    SEE: FCC launches $9bn fund to boost rural America’s 5G coverage
    Wandering-coder’s experience with Starlink lines up with claims made by the SpaceX division, as well as with other public beta testers in remote areas who’ve posted accounts of the service on Reddit. Starlink has told users to expect data speeds varying from 50Mbps to 150Mbps and latency from 20ms to 40ms.
    The fastest speed test Wandering-coder obtained was 135Mbps down and 25Mbps up, with around 21ms of latency. With “significant obstruction” such as bad weather, treetops, fences or houses, the service delivered 46Mbps down, 15Mbps up, and 41ms latency.
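    Those latency figures sit well above the physical floor set by the 550km orbital altitude mentioned earlier (a back-of-the-envelope sketch that assumes a satellite directly overhead and ignores ground-station hops):

    ```python
    # Minimum possible round trip to a satellite ~550km overhead at the
    # speed of light; everything beyond this is processing and routing.
    C_KM_PER_S = 299_792.458   # speed of light in vacuum
    ALTITUDE_KM = 550

    min_rtt_ms = 2 * ALTITUDE_KM / C_KM_PER_S * 1000
    print(f"Physical minimum RTT: {min_rtt_ms:.1f} ms")  # ~3.7 ms
    ```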

    The beta tester confirmed that the end-user terminal dishes need a wide view of the sky to work well.
    Image: Wandering-coder/Imgur

  • Quantum computing may make current encryption obsolete; a quantum internet could be the solution

    “The quantum threat is basically going to destroy the security of networks as we know them today,” declared Bruno Huttner, who directs strategic quantum initiatives for Geneva, Switzerland-based ID Quantique. No other commercial organization since the turn of the century has been more directly involved in the development of science and working theories for the future quantum computer network.

    One class of theory involves cryptographic security. The moment a quantum computer (QC) breaks through the dam currently held in place by public-key cryptography (PKC), every encrypted message in the world will become vulnerable. That’s Huttner’s “quantum threat.”

    “A quantum-safe solution,” he continued, speaking to the Inside Quantum Technology Europe 2020 conference in late October, “can come in two very different aspects. One is basically using classical [means] to address the quantum threat. The other is to fight quantum with quantum, and that’s what we at ID Quantique are doing most of the time.”
    There is a movement called post-quantum cryptography (PQC), which incorporates efforts to generate more robust classical means of securing encrypted communications that will hold up once quantum methods become reliable. The other method, to which Huttner subscribes, seeks to encrypt all communications through quantum means. Quantum key distribution (QKD) involves the generation of a cryptographic key by a QC, for use in sending messages through a quantum information network (QIN).
    Interfacing a QIN with an electronic Internet, the way we think about such connections today, is physically impossible. Up until recently, it’s been an open question whether any mechanism could be created, however fantastic or convoluted it may become, to exchange usable information between these two systems — which, at the level of physics, reside on different planes of existence.
    Could a quantum Internet connect non-quantum computers?
    At IQT Europe, however, there were notes of hope.

    “I don’t see why you would need a quantum computer,” remarked Mathias Van Den Bossche, who directs research into telecommunications and navigation systems for orbital satellite components producer Thales Alenia Space, “to operate a quantum information network. Basically the tasks will be rather simple.”

    The implications of Van Den Bossche’s remark, made during a presentation to IQT Europe, may not be self-evident today, though certainly they will be over the course of history. A quantum information network (QIN) is a theoretical concept, enabling the intertwining of pairs of quantum computers (QC) as though they were physically joined to one another. The product of a QIN connection would be not so much an interfacing of two processors but a binding of two systems, whose resulting computational limit would be 2 to the power of the sum of their quantum components, or qubits. It would work, so long as our luck with leveraging quantum mechanics the way we’ve done so far continues to pan out in our favor.
    Van Den Bossche’s speculation is not meant to imply that quantum networking could be leveraged to bind together conventional, electronic computers in the same way — for example, giving any two desktop computers as much combined memory as 2 to the power of the sum of their bytes. Quantum networks are only for quantum computers. But if he’s correct, the problem of interfacing a classical computer to a QC’s memory system, and communicating large quantities of data over such a system, may be solvable without additional quantum components, which would otherwise make each connected QC more volatile.
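    The scaling claim in that description is easy to make concrete (plain arithmetic on the qubit counts discussed in this article, using Python’s arbitrary-precision integers):

    ```python
    # Two QCs bound by a QIN behave like one machine with the combined
    # qubit count: a 2^(q1+q2) state space, not 2^q1 plus 2^q2.
    q1 = q2 = 53                    # two 53-qubit machines

    separate = 2**q1 + 2**q2        # unlinked: ~1.8e16 amplitudes in total
    entangled = 2**(q1 + q2)        # bound by a QIN: 2^106

    print(f"unlinked total: {separate:.3e}")   # ~1.801e+16
    print(f"entangled:      {entangled:.3e}")  # ~8.113e+31
    ```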

    “Ultimately, in the future, we would like to make entanglement available for everyone,” stated Prof. Stephanie Wehner of Delft University, who leads the Quantum Internet Initiative at the Dutch private/academic partnership QuTech.  “This means enabling quantum communications ultimately between local quantum processors anywhere on Earth.”
    The principal use of a quantum Internet, perhaps permanently, would be to enable QKD to protect all communications. A quantum-encrypted message is protected by physics, not math, so it’s not something that can be “hacked.” Prof. Wehner foresees a time when QKD is applicable to every transaction with the public cloud.
    “Here, you should be imagining you have a very simple quantum device — a quantum terminal, if you wish,” she explained, “and you use a quantum Internet to access a remote quantum computer in the cloud, [so] you can perform, for example, a simulation of a proprietary material in such a way that the cloud hosting provider who has the quantum computer cannot find out what your material design actually is.”
    No part of the cloud server could interfere with the simulation without wrecking it — in the quantum lexicon, causing it to decohere. That might disrupt your work a bit, but it wouldn’t give a malicious actor on the cloud anything useful whatsoever.
    SEE: What classic software developers need to know about quantum computing (TechRepublic)
    Hurdles to creating a quantum Internet
    Achieving Prof. Wehner’s vision of a fully realized quantum Internet would require overcoming a respectable number of hurdles, plus a number of lucky rolls of the dice all coming up box-cars. These good tidings include, though are not limited to, the following:
    Classical control systems would need to marshal the exchanges of information to and from the QIN. This is the problem Van Den Bossche is hopeful can be solved: There needs to be some kind of functional waypoint between the two systems that cannot, in and of itself, introduce unreliability, uncertainty, and noise.

    Quantum transducers, which would perform a role analogous to repeaters in an electronic network.  (You may hear the phrase “quantum repeater” for this reason, although physicists say this is a misnomer.)  As Prof. David Awschalom of the University of Chicago, and director of the Chicago Quantum Exchange, asked IQT Europe attendees, “How do you convert light to matter efficiently in the quantum domain, and how do you build a quantum repeater?” Two qubits can share the curious virtue of entanglement when they’re linked by optical fiber, but only over a limited distance. A transducer such as Prof. Awschalom described it would handle the strange exchange of states required for entanglement to be effectively handed off, as if by a bucket brigade, enabling the QIN to be chained.
    Single photon-emitting qubits, otherwise known as “better qubits,” would make the maintenance of a QIN coupled with classical equipment much more deterministic and manageable. Photons are the signals of a quantum network. A quantum memory system will require high frequencies and heart-stoppingly high bandwidth, which may only be feasible when photon sources can be observed and maintained with precision.
    Quantum memory systems (see above) are, at least at present, ideal visions. For now, a high-qubit QC computing element serves as its own memory, and a 53-qubit node may store as much as 2^53 bits (about 1,126 terabytes, as worked through in the sketch after this list), which may seem sufficient except that it’s completely volatile. It may decohere completely when a calculation is completed, so some type of stable memory system will be required to maintain, say, a database. This is perhaps the tallest order of all.
    Available fiber. The 5G Wireless deployment effort could be of assistance here, opening up avenues of connectivity for a photons-only network. Recent experiments conducted by Toshiba Research and the University of Cambridge have shown that telco fiber networks are reliable enough for quantum communications, in places where dark fiber has yet to be laid.
    Lasers. Here is the forgotten element of this discussion. We’re not talking about reclaimed laser units from unbuilt Blu-ray players, but as Awschalom describes them, “fast, high-power, milliwatt-scale pump lasers that generate high-bandwidth optical photons, to match the wavelengths of these memories.”
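    For the quantum memory item above, the unit conversion is worth spelling out (plain arithmetic on the figure quoted there):

    ```python
    # Capacity claimed for a 53-qubit node: 2^53 bits.
    bits = 2**53
    bytes_ = bits // 8            # exactly 2^50 bytes
    terabytes = bytes_ / 1e12

    print(f"{bits:.3e} bits = {terabytes:,.0f} TB")  # ~9.007e+15 bits = 1,126 TB
    ```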
    The current size and breadth of the quantum computing “ecosystem,” if we can call it that, may not yet mandate the investment of billions of dollars, or euros, into the establishment of all the new infrastructure this industry will require. But well before it gets there, we may encounter the point Prof. Huttner talks about, when the quantum threat is more imminent than the quantum bounty. Then, perhaps suddenly, investments may come in spades.
    SEE: What is the quantum internet? Everything you need to know about the weird future of quantum networks


  • Arista Networks Q3 revenue and earnings top expectations, revenue outlook higher as well amidst improving business trends

    Arista Networks, which competes with Cisco Systems selling network switching equipment, this afternoon reported Q3 revenue and profit per share that comfortably topped analysts’ expectations, and forecast Q4 revenue higher as well, citing an improving business environment. 
    Arista’s chief financial officer, Ita Brennan, remarked that the company “saw continued improvement in underlying business trends in the quarter, with the Arista team working diligently with customers, supply chain and other partners to navigate the new COVID-19 operating environment.”
    Arista is best known for selling high-speed switches to hyper-scale operators, particularly Facebook and Microsoft, though it has been expanding into Cisco’s market for enterprise campus switching. 
    Said Arista CEO Jayshree Ullal, “Our customers are validating our traction as we migrate from legacy to cognitive client to cloud deployments with a cumulative of 40 million cloud networking ports shipped by Q3 2020. Despite some COVID-19 turbulence, we believe Arista will only emerge stronger.”
    Arista’s revenue in the three months ended in September declined by 7.5%, year over year, to $605.4 million, yielding earnings per share of $2.42. That was higher than the average Wall Street estimate of $581.6 million and $2.22 per share.
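    Those two figures imply a year-ago quarter that can be recovered with one division (arithmetic on the quoted numbers only):

    ```python
    # Implied year-ago revenue from the quoted figures:
    # $605.4 million after a 7.5% year-over-year decline.
    q3_2020_revenue_m = 605.4
    yoy_decline = 0.075

    q3_2019_revenue_m = q3_2020_revenue_m / (1 - yoy_decline)
    print(f"Implied Q3 2019 revenue: ${q3_2019_revenue_m:.1f}M")  # ~$654.5M
    ```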
    Gross profit margin in the quarter declined slightly to 63.6% from 63.7% in the prior quarter. On a non-GAAP basis, the company reported a gross profit margin of 64.6%.
    For the current quarter, the company sees revenue in a range of $615 million to $635 million, which is higher than the average estimate of $609 million.

    Gross profit margin is expected in a range of 63% to 65%, on a non-GAAP basis, while operating profit margin is expected to be roughly 37%.
    Arista stock rose by 6.4% in late trading.
