More stories


    Looking ahead to the API economy

    As someone who builds integration products, I spend a lot of time researching industry and technology trends while speaking with analysts, engineers, architects, target customers, and my product peers. This work inevitably drifts my point of view into some version of “what’s happening now, what is likely to happen over the next few years, and what is my role in guiding the industry to the best possible future?” This article synthesizes the most impactful ideas of the past year and their influence on my go-forward thinking as a connectivity product manager. I hope you enjoy the read and look forward to your thoughts in the comments.

    APIs become part of the internet fabric

    To students of modern technological history, the “connectivity” part of the internet looked very different just a few decades ago. By “connectivity,” I mean APIs, protocols such as HTTP, and agreed-upon architectural patterns that unlock data. Technology professionals still speak about “legacy modernization” projects that expose old technology silos which would otherwise remain hidden from the digital lifeblood of the business. These so-called digital transformation projects often relied on XML-RPC to enable integrations with mainframes, while the new digital era brought standards such as REST, GraphQL, and the Web of Things.
    While established companies invest in new APIs to support digital transformation projects, early startups build on top of the latest technology stacks. This trend is turning the internet into a growing fabric of interconnected technologies the likes of which we’ve never seen. As the number of new technologies peaks, the underlying fabric, otherwise known as the API economy, fuels a wave of technology consolidation, with a historically high number of acquisitions. There are two interesting consequences of this trend.

    The first is that all of this drives the need for better, faster, and easier-to-understand APIs. Many Integration-Platform-as-a-Service (iPaaS) vendors understand this quite well. Established iPaaS solutions, such as those from Microsoft, MuleSoft, and Oracle, are continually improved with new tools, while new entrants, like Zapier and Workato, continue to emerge. All invest in simplifying the integration experience on top of APIs, essentially speeding time-to-integration (a factor of growing importance when it comes to business agility). Some call these experiences “connectors” while others call them “templates,” but in the end the leading integration minds are actively invested in this area.

    The second consequence is well-defined, protocol-based connectivity. Looking at the world of REST, a well-accepted architectural style defined in Roy Fielding’s dissertation, we see that REST APIs dominate the scene with well-established specification standards such as the OpenAPI Specification (previously known as Swagger). Not only do these standards enable industry-leading iPaaS solutions to agree on what the next world of connectivity will look like, they also set the foundation for new experiences, often referred to as innovation, to evolve. More technologies keep emerging, offering visualization and transformation products that understand these standards while bringing more users into the world of connectivity.

    I am excited about the potential of this space and its ability to define the fundamental building blocks of the future internet, with APIs as the centerpiece of its fabric.

    Breaking silos with indexed search and browser-like API discovery

    Moving from specialized tools and standards to a simple API discovery layer means that any employee who can write queries and logic flows will also be able to build full-fledged applications and customer-facing experiences. Many leading analysts are now seeing this dynamic as more APIs are consumed by less-technical departments like marketing, finance, sales, and HR. I see this trend evolving further in two major forms. The first is universal API search and discovery. Many of us use Google to search for information, and “Googling” endpoints (the addressable locations of an API) and data shouldn’t be any different. This means more tools will evolve, but the approach we take will be fundamentally different; instead of manually documenting new endpoints with references and API portals, we can start indexing new APIs dynamically based on their machine-readable descriptions. Using techniques similar to the crawler tactics Google uses to discover publicly available web pages, more users will have access to all publicly available endpoints and their data.
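    To make that concrete, here is a minimal sketch of discovery driven by machine-readable descriptions: it fetches a hypothetical OpenAPI document, flattens it into a small index of endpoints, and answers simple keyword searches. The URL and field names are illustrative assumptions; a real crawler would also handle authentication, API catalogs, and YAML documents.

```python
import json
import urllib.request

# Hypothetical location of a machine-readable API description (OpenAPI/Swagger).
SPEC_URL = "https://api.example.com/openapi.json"

def fetch_spec(url: str) -> dict:
    """Download and parse an OpenAPI document (assumed to be JSON)."""
    with urllib.request.urlopen(url) as response:
        return json.load(response)

def index_endpoints(spec: dict) -> list[dict]:
    """Flatten an OpenAPI 'paths' object into a searchable list of endpoints."""
    index = []
    for path, operations in spec.get("paths", {}).items():
        for method, op in operations.items():
            if method.lower() not in {"get", "post", "put", "patch", "delete"}:
                continue  # skip 'parameters', vendor extensions, etc.
            index.append({
                "method": method.upper(),
                "path": path,
                "summary": op.get("summary", ""),
                "tags": op.get("tags", []),
            })
    return index

def search(index: list[dict], keyword: str) -> list[dict]:
    """Naive keyword search over endpoint paths and summaries."""
    keyword = keyword.lower()
    return [e for e in index
            if keyword in e["path"].lower() or keyword in e["summary"].lower()]

if __name__ == "__main__":
    endpoints = index_endpoints(fetch_spec(SPEC_URL))
    for hit in search(endpoints, "invoice"):
        print(hit["method"], hit["path"], "-", hit["summary"])
```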

    The second form involves how we explore those APIs and the data they contain. Today, many developers start by searching for an API portal, finding a relevant SDK, and sampling an API’s capability with API-consumption tools like Postman. Less-technical users, however, turn to low-code/no-code solutions that bridge the technical gap by demystifying API access (a skill typically reserved for software developers). It’s interesting to think about what will change as we evolve the underlying foundation of those protocols and standards. I believe we’ll soon see more browser-like discovery tools, where webpages are replaced by endpoints and information is replaced by data. In this world, users can search, query, play with, and plug in the data instead of worrying about API technicalities like URIs, endpoint syntax, query parameters, and so on. Looking ahead, what I find most exciting about this development is that we will see the creation of new digital capabilities that are closer to the end user and are much faster to build. These innovations also trigger a need for enterprise professionals to see the bigger picture of how it all connects, while product leaders and CIOs must pay closer attention to inconsistencies in the customer experience and to potential compliance, privacy, and security issues.

    Productizing connectivity: protocols vs. connectivity as a service

    More than ever before, users demand access to data. Yet many existing solutions are too complex, too expensive, or too heavy. This creates a technology vacuum that will be filled in the following ways. On one hand, integration professionals like me will continue to advance connectivity standards. Optimizing for ease of consumption, particularly by non-developers, will lead to a new API consumption layer, so that less-technical experiences can evolve on top of it. On the other hand, new business cases will be made for agile API-facade-as-a-service solutions. As more users demand faster time-to-market while taking scalability, availability, and security for granted, more startups will emerge to address the need. We’re already seeing new entrants such as productivity infrastructure as a service from Nylas and a unified API from Kloudless that connects over 150 SaaS solutions through a single canonical model. All of this makes it easier than ever to build and maintain connections with external systems. As we advance on each front, I suspect the industry will first need to agree on common architectural patterns and then build new solutions around them.

    Data is the new endpoint in security

    Data breaches are trending up, with a record 1,767 publicly reported breaches in the first six months of 2021. Our most common attempts at securing data focus on protecting the infrastructure that provides access to it: endpoints. Although this approach makes sense for some organizations, as more infrastructure shifts to the cloud, where it is far less within their control, securing that infrastructure becomes more problematic. Add more users into the mix who can now search, query, and share data with their favorite apps, and we have a recipe for disaster. To stay ahead of these trends, we first need to change our mindset: instead of protecting endpoints in the new digital world, we must protect the data.
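    As a toy illustration of that data-centric mindset, the sketch below tokenizes sensitive fields before a record leaves the application, keeping the real values in a separate vault-like store. The field names and the in-memory vault are illustrative assumptions; production systems would use a hardened tokenization or encryption service rather than a dictionary.

```python
import secrets

# Stand-in for an encrypted data vault: maps opaque tokens back to real values.
# In practice this would be a hardened, access-controlled service, not a dict.
_vault: dict[str, str] = {}

SENSITIVE_FIELDS = {"email", "ssn"}  # hypothetical PII fields

def tokenize(record: dict) -> dict:
    """Replace sensitive fields with opaque tokens; store originals in the vault."""
    protected = {}
    for field, value in record.items():
        if field in SENSITIVE_FIELDS:
            token = "tok_" + secrets.token_urlsafe(16)
            _vault[token] = value
            protected[field] = token
        else:
            protected[field] = value
    return protected

def detokenize(record: dict) -> dict:
    """Resolve tokens back to real values for an authorized consumer."""
    return {field: _vault.get(value, value) for field, value in record.items()}

if __name__ == "__main__":
    customer = {"name": "Ada", "email": "ada@example.com", "ssn": "123-45-6789"}
    safe = tokenize(customer)
    print(safe)              # downstream systems only ever see tokens
    print(detokenize(safe))  # the vault resolves them when policy allows
```

    Downstream systems, logs, and analytics then handle only tokens, which is essentially the property that the encrypted-data-vault startups mentioned below are productizing.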
    This space is full of interesting innovations, with new encryption and tokenization standards that further propagate the zero-trust model. The trend is also recognized by new startups building businesses around the idea of protecting data with encrypted data vaults, with use cases ranging from securing PII to offering HIPAA-compliant encrypted data stores. Regardless of how we evolve our new API layers, at the core of the “secure” approach will be our ability to discover and work with sensitive data.

    The bottom line

    We are still “rounding first base” in terms of defining the next-generation connectivity layer and understanding what kinds of businesses can be built on top of it. With APIs already at the center of many digital transformations, we’re clearly seeing a trend of simplifying API consumption with low-code/no-code solutions that bring more users into building the pluggable enterprise. It’s fulfilling to think of a world where everyone can contribute to improving the business.

    Anton Kravchenko is Director of Product at MuleSoft, a Salesforce company. If you are thinking about or building products or protocols that touch on any of these ideas, he would love to hear from you.


    A company spotted a security breach. Then investigators found this new mysterious malware

    A previously undiscovered cyber-espionage campaign using never-before-seen malware is infiltrating global aerospace and telecommunications companies in a highly targeted operation that has been active since at least 2018 but has remained completely under the radar until July this year. The campaign is the work of a newly disclosed Iranian hacking group dubbed MalKamak that has been detailed by cybersecurity company Cybereason Nocturnus, which discovered it after being called by a client to investigate a security incident.  


    Dubbed Operation GhostShell, the aim of the cyber-espionage campaign is to compromise the networks of companies in the aerospace and telecoms industries in order to steal sensitive information about assets, infrastructure and technology. The targets, which haven’t been disclosed, are predominantly in the Middle East, with additional victims in the United States, Europe and Russia. Each target appears to have been handpicked by the attackers. “This is a very, very targeted type of attack,” Assaf Dahan, head of threat research at Cybereason, told ZDNet. “We’ve only managed to identify around 10 victims worldwide.” MalKamak distributes a previously undocumented remote access trojan (RAT) known as ShellClient that is designed with espionage in mind, which is why it remained undetected for three years. One of the reasons the malware has remained so effective is that the authors have put a lot of effort into making it stealthy enough to avoid antivirus and other security tools, and the malware receives regular updates so that this continues to be the case. “Each iteration, they add more functionality, they add different levels of stealth,” said Dahan.

    ShellClient has even started implementing a Dropbox client for command and control on target networks, making it difficult to detect because many companies might not notice, or think much of, yet another cloud collaboration tool performing actions, if they notice it at all. It’s all part of the plan to use the trojan to monitor systems, steal user credentials, secretly execute commands on networks and ultimately steal sensitive information. Each infected machine is given a unique ID so the attackers can keep track of their work during the weeks and months they’re snooping around compromised networks. “Once they’re in, they start conducting extensive reconnaissance of the network. They map out the important assets, the crown jewels they would go for, key servers such as the Active Directory, but also business servers that contain the type of information that they’re after,” said Dahan. The campaign successfully remained undetected until July, when researchers were called in to investigate an incident. It’s possible that the attackers got too confident in their tactics and overplayed their hand, leaving evidence that allowed researchers to identify the campaign and the malware being deployed. “According to what we’re seeing, in the last year, they picked up the pace. Sometimes when you’re faster you can be slightly sloppy, or simply there’ll be more instances that would be detected,” Dahan explained.

    Analysis of MalKamak’s tools and techniques led researchers to believe that the attacks were the work of a hacking operation working out of Iran, as one of the tools ShellClient RAT uses for credential-dumping attacks is a variation of SafetyKatz, which has been linked to previous Iranian campaigns. The targeting of telecoms and aerospace companies operating in the Middle East also aligns with Iran’s geopolitical goals. But while there are similarities to known Iranian state-backed cyber-espionage operations, including Chafer (APT39), which uses similar techniques to target victims in the Middle East, US and Europe, as well as Agrius APT, which shares similarities in malware code, researchers believe that MalKamak is a new Iranian cyber operation, although it likely does have connections to other state-sponsored activity. Researchers also believe that Operation GhostShell remains active and that MalKamak will continue to evolve how it conducts attacks in order to keep stealing information from targets. It’s currently not known how the attackers gain initial access to the network, but it possibly comes via phishing attacks or from exploiting unpatched vulnerabilities.


    Meet ESPecter: a new UEFI bootkit for cyber spying

    A new bootkit for conducting covert cyberespionage that is able to compromise system partitions has been discovered. 

    Researchers from ESET say the new malware, dubbed ESPecter, was only found recently, but the origin of the bootkit has been traced back to 2012, suggesting that the software is stealthy enough to have avoided detection by cybersecurity teams for the best part of a decade. “We traced the roots of this threat back to at least 2012; it was previously operating as a bootkit for systems with legacy BIOSes,” commented ESET researcher Anton Cherepanov. “Despite ESPecter’s long existence, its operations and upgrade to UEFI went unnoticed and have not been documented until now.” The only radical change in the malware since 2012 is a shift from legacy BIOS and Master Boot Record (MBR) infiltration to modern UEFI. UEFI is a critical component in the pre-OS stage of a machine starting up and has a hand in loading an operating system. The malware takes root in the EFI System Partition (ESP) and persists through a patch applied to the Windows Boot Manager; however, this patch is yet to be fully analyzed. The patch allows ESPecter to bypass Windows Driver Signature Enforcement (DSE) to load its own unsigned drivers on a target machine and inject other components to create a connection to the operator’s command-and-control (C2) server. ESET found an ESPecter sample on a PC together with keylogging and document-stealing modules, an indicator that the malware is likely used for surveillance purposes.

    Once executed on a target machine, ESPecter is able to deploy a backdoor containing commands for cyber spying, and alongside keylogs and documents, the malicious code also takes screenshots on a regular basis and hides this content in a hidden directory. However, the Secure Boot feature has to be disabled for a successful ESPecter attack. “It’s worth mentioning that the first Windows version supporting Secure Boot was Windows 8, meaning that all previous versions are vulnerable to this persistence method,” the team says. The researchers have not found concrete evidence for attribution, but there are clues in the malware’s components, specifically debug messages, which suggest that the threat actors are Chinese-speaking. It is also not known how ESPecter is distributed; however, there are a number of potential scenarios: an attacker has physical access to a target machine, Secure Boot has already been disabled, or the attackers exploit either a zero-day UEFI bug or a known, but unpatched, security flaw in legacy software. “Even though Secure Boot stands in the way of executing untrusted UEFI binaries from the ESP, over the last few years we have been witness to various UEFI firmware vulnerabilities affecting thousands of devices that allow disabling or bypassing Secure Boot,” ESET says. “This shows that securing UEFI firmware is a challenging task and that the way various vendors apply security policies and use UEFI services is not always ideal.”


    Asean champions regional efforts in cybersecurity, urges international participation

    Asean has championed the region’s efforts in cybersecurity and pledged to drive further collaboration amongst member states, including plans to adopt common standards and best practices. It also stressed the need for participation from the international community, particularly as digital transformation continues to accelerate amid increasing cyber threats. To date, Asean is the only regional organisation to have subscribed, in principle, to the United Nations’ (UN) 11 voluntary, non-binding norms of responsible state behaviour in cyberspace, according to Singapore’s Minister for Communications and Information and Minister-in-charge of Smart Nation and Cybersecurity, Josephine Teo. Asean advocated the need to implement the international cyber stability framework and was making good progress on the roadmap to guide adoption of the norms, said Teo, who was speaking Wednesday at the Asean Ministerial Conference on Cybersecurity, held in conjunction with Singapore International Cyber Week.

    Pointing to the Asean Regional Action Plan, she said Singapore and Malaysia recently organised a workshop with other member states. The region was expected to officially endorse the action plan at the Asean Digital Ministers’ Meeting on December 1, 2021. There currently are 10 Asean member states, including Singapore, Indonesia, Thailand, Malaysia, and the Philippines. The region in September 2018 agreed on the need for a formal framework to coordinate cybersecurity efforts, outlining cyber diplomacy, policy, and operational issues. Member states had underscored the importance of “a rules-based cyberspace” to drive economic progress and improve living standards. International law, voluntary and non-binding norms of state behaviour, as well as practical “confidence-building” measures, were essential to ensure the stability of cyberspace, they said. They added that such plans would include the region’s efforts to observe the 11 norms recommended in the 2015 Report of the UN Group of Governmental Experts. The 11 norms outline what the international organisation deemed necessary to create a “free, open, peaceful, and secure cyberspace”, including global cooperation to develop and apply “measures to increase stability and security in the use of ICTs” and to “not knowingly allow their territory to be used for internationally wrongful acts using ICTs”.

    Speaking virtually at the Asean Ministerial Conference, Asean Secretary-General Lim Jock Hoi said the global pandemic underscored the need for a coordinated approach to address cyber threats. Noting that digitalisation had accelerated, Lim said Asean, ready or not, would have to embrace digital transformation to maximise its benefits and work towards building a regional community. Here, he added that the region had kicked off various initiatives, including digital economy agreements and the 2019 Asean Agreement on Electronic Commerce, which aimed to facilitate collaboration and the growth of e-commerce transactions in the region. With increased digital adoption, though, came higher exposure to cybersecurity threats that could cause significant damage, he said. He noted these included ransomware, phishing, and Distributed Denial of Service (DDoS) attacks that had disrupted business operations, impacted individuals, and threatened the stability of Asean communities. Such threats and cybercrimes were becoming widespread across the region, targeting critical information infrastructures (CII) such as oil, energy, and e-commerce. Without “resolute action” within Asean member states, Lim said these challenges would significantly undermine the resilience of and trust in the region’s digital economies and prevent them from realising their full potential. He said member states already were working to enhance the region’s cybersecurity posture, including efforts to strengthen partnerships amongst the respective CERTs (Computer Emergency Response Teams) to build “mutual trust” in dealing with security incidents. The Asean CERT was established to improve the region’s knowledge and capacity to respond to and mitigate the impact of cyber attacks, he noted. The development of a coherent regulatory and policy framework on cybersecurity also was essential in Asean, he added, which he said could be accomplished through regional frameworks for cybersecurity maturity assessment and CII security. There also should be cybersecurity standards and best practices to drive interoperability across the region, which would further support the secure and trusted use of digital technologies and drive an integrated Asean economy, he said.

    International communities should build cyber norms, rules

    With cybersecurity a global issue, Lim said Asean would collaborate with the international community and play its role in developing a rules-based cyberspace with cyber norm behaviours. Further stressing the importance of global cooperation, Teo said supply chain and ransomware attacks were increasing in frequency, scale, and impact. She cited the SolarWinds breach, the US Colonial Pipeline attack that posed real-world consequences, and the Kaseya breach, which forced more than 800 Swedish Coop supermarkets to close.

    “These examples show the importance of strengthening our cybersecurity. They also highlight the need for international cooperation to build consensus on the rules, norms, principles, and standards governing cyberspace,” she said. “Such efforts will help to ensure that states behave responsibly in their use of ICT, so we can achieve an open, secure, and interoperable ICT environment. In doing so, we can also strengthen the rules-based multilateral order.” According to Teo, Asean currently was laying the groundwork to drive its updated Digital Masterplan 2025, which involved five key objectives, including advancing cyber readiness cooperation, strengthening both regional and international cyber policy coordination, and enhancing regional capacity building. She said recent global supply chain attacks also highlighted the need for swift exchange of threat information to mitigate the spread of such attacks. This emphasised the importance of “cyber ops-tech collaboration”, such as the Asean CERT, and of the development and implementation of technical standards. “Often, we are forced into a reactive position when dealing with cyber incidents. In fact, we would rather be proactive on cybersecurity, by making our systems, networks, and devices secure-by-design,” she said. She pointed to Singapore’s efforts here with the introduction of the Cybersecurity Labelling Scheme for IoT devices, which enables consumers to identify the level of cybersecurity of such devices. Teo said Asean member states could collectively raise the level of cyber hygiene in the region by working towards a common baseline cybersecurity standard for IoT devices. Singapore on Wednesday also announced the official opening of the Asean-Singapore Cybersecurity Centre of Excellence campus. Announced in 2019 to facilitate cyber capacity building efforts in the region, the centre aims to conduct research and provide training in areas that include international law, cyber norms, and various cybersecurity policy issues. The facility also will offer CERT-related technical training, conduct virtual cyberdefence training and exercises, and drive the exchange of best practices, cyber threat information, and other related information. The centre comprises two training labs that can hold up to 100 in-person participants, conference rooms, and amenities to facilitate capacity building efforts, the Cyber Security Agency of Singapore (CSA) said.


    Facebook CEO Mark Zuckerberg on putting profit before safety: 'That's just not true'

    Facebook founder and CEO Mark Zuckerberg has publicly addressed claims that the social media giant prioritises profit over safety and wellbeing, saying they are “just not true”. “We care deeply about issues like safety, wellbeing, and mental health. It’s difficult to see coverage that misrepresents our work and our motives. At the most basic level, I think most of us just don’t recognize the false picture of the company that is being painted,” Zuckerberg wrote in a note to Facebook employees that he publicly posted on his Facebook page. “The argument that we deliberately push content that makes people angry for profit is deeply illogical,” he continued. “We make money from ads, and advertisers consistently tell us they don’t want their ads next to harmful or angry content. And I don’t know any tech company that sets out to build products that make people angry or depressed. The moral, business and product incentives all point in the opposite direction.” The response comes after Facebook whistleblower Frances Haugen fronted the US Senate as part of its inquiry into Facebook’s operations, declaring the company “morally bankrupt” and casting “the choices being made inside of Facebook” as “disastrous for our children, our privacy, and our democracy”. Haugen, who used to work as the lead product manager for Facebook’s civic misinformation team, told the Senate that Facebook “is choosing to grow at all costs”, which means that profits are being “bought with our safety”. This, in turn, is encouraging “more division, more harm, more lies, more threats, [and] more combat” online. Haugen added that Zuckerberg “has built an organisation that is very metrics-driven — the metrics make the decision” and, therefore, the buck stops with him.

    The allegations stem from The Facebook Files, a series of investigations published by The Wall Street Journal. The articles are based on internal files, draft presentations, research, and internal staff communication leaked by the whistleblower. The Wall Street Journal published six of the internal documents that were the basis of its investigation; Facebook then published two of them, complete with annotations, last week. Zuckerberg said many of the claims “don’t make any sense”. “If we wanted to ignore research, why would we create an industry-leading research program to understand these important issues in the first place? If we didn’t care about fighting harmful content, then why would we employ so many more people dedicated to this than any other company in our space — even ones larger than us?” he wrote. “If we wanted to hide our results, why would we have established an industry-leading standard for transparency and reporting on what we’re doing?” He also took the opportunity to address claims about the impact Facebook has on the safety and wellbeing of children specifically. Haugen told Senate members that “Facebook knows that its amplification algorithms can lead children from innocuous topics — such as healthy food recipes — to anorexia-promoting content over a short period of time”. “When it comes to young people’s health or wellbeing, every negative experience matters … we have worked for years on industry-leading efforts to help people in these moments and I’m proud of the work we’ve done. We constantly use our research to improve this work further,” Zuckerberg said. Facebook announced last week it was hitting pause on plans to develop a version of Instagram for kids, citing the need for more time to work more closely with “parents, experts, policymakers, and regulators.”


    Firefox 93 arrives with tab unloading, insecure download blocks and enforced referrer trim

    Version 93 of Mozilla’s Firefox browser has arrived, and chief among its new features is tab unloading. Available at the moment only on Windows, with macOS and Linux to follow, the feature kicks in when the browser believes an out-of-memory crash is imminent, and it will unload tabs, with the least recently used ones unloaded first. Tabs that are in the foreground are never unloaded, while tabs that are pinned, using picture-in-picture, or playing sound are less likely to be unloaded. On Windows, the threshold is around the 6% mark, Mozilla engineer Haik Aftandilian wrote in a blog post. “We have experimented with tab unloading on Windows in the past, but a problem we could not get past was that finding a balance between decreasing the browser’s memory usage and annoying the user because there’s a slight delay as the tab gets reloaded, is a rather difficult exercise, and we never got satisfactory results,” Aftandilian said. “We have now approached the problem again by refining our low-memory detection and tab selection algorithm and narrowing the action to the case where we are sure we’re providing a user benefit: if the browser is about to crash.” A month of testing in Firefox’s Nightly channel found a decrease in browser and content process-related crashes, but also an increase in out-of-memory crashes, as well as an increase in average memory usage. “The latter may seem very counter-intuitive, but is easily explained by survivorship bias … browser sessions that had such high memory usage would have crashed and burned in the past, but are now able to survive by unloading tabs just before hitting the critical threshold,” the engineer said.

    “The increase in OOM crashes, also very counter-intuitive, is harder to explain. We’re working on improving our understanding of this problem and the relevant heuristics. But given the clearly improved outcomes for users, we felt there was no point in holding back the feature.” In the next release of Firefox, an about:unloads page will be added to provide diagnostics on tab unloading. Also coming in Firefox 93 is functionality to block HTTP downloads from HTTPS pages, showing users a dialog that warns of a potential security risk and asks whether they wish to continue, as well as blocking downloads from sandboxed iframes unless they have the allow-downloads attribute. The browser has also disabled support for 3DES encryption by default, though it will still be available when sites use deprecated TLS versions. “Recent measurements indicate that Firefox encounters servers that choose to use 3DES about as often as servers that use deprecated versions of TLS,” Mozilla said. “As long as 3DES remains an option that Firefox provides, it poses a security and privacy risk. Because it is no longer necessary or prudent to use this encryption algorithm, it is disabled by default in Firefox 93.” Firefox 93 is also packing the third version of its SmartBlock technology, which can replace Google Analytics, Optimizely, Criteo, Amazon TAM, and various Google advertising JavaScript with local versions that behave closely enough to the originals to prevent sites from breaking. The browser is also changing its referrer policy to ensure sites cannot override the default trimming that Firefox applies to cross-site URLs; same-site requests will continue to pass the full referring URL.
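    To observe the referrer change in practice, here is a minimal sketch, assuming a local test setup: a tiny Python server that echoes back whatever Referer header it receives. Following a cross-origin link to it over plain HTTP in Firefox 93 should show only the referring page’s origin rather than its full URL (note that an HTTPS page sends no referrer at all to an HTTP target under the default policy), while same-site links still pass the complete address.

```python
# A tiny local server that prints back the Referer header it receives, useful
# for observing how Firefox 93 trims cross-site referrers down to the origin.
from http.server import BaseHTTPRequestHandler, HTTPServer

class RefererEcho(BaseHTTPRequestHandler):
    def do_GET(self):
        referer = self.headers.get("Referer", "<no Referer header sent>")
        body = f"Referer seen by this server: {referer}\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Link to http://127.0.0.1:8000/ from another page and follow the link in Firefox.
    HTTPServer(("127.0.0.1", 8000), RefererEcho).serve_forever()
```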


    Updated CDR rules to allow accredited participants to appoint representatives

    The Australian government has updated the Consumer Data Right (CDR) rules, with accredited CDR participants now able to sponsor other parties to become accredited or allow them to operate as their representative. Parties that are representatives of accredited data recipients (ADRs) will be able to access and use CDR data without accreditation so long as they offer CDR-related services, which the government hopes will increase industry participation in the CDR. Previously, only ADRs have been able to receive consumers’ data from a data holder and make use of it in their own products or services. The CDR is a government initiative aimed at allowing individuals to “own” their data by granting them open access to their banking, energy, phone, and internet transactions, as well as the right to control who can have it and who can use it. The Federal Treasury, the lead agency in rolling out the initiative, envisions the CDR as a tool that will help individuals monitor finances, utilities, and other services, and compare and switch between different offerings more easily. The first tranche of Australia’s CDR was officially launched on July 1, requiring financial services providers to share a customer’s data when requested by the customer. While the first tranche only applies to the financial services industry, energy and telecommunications will soon join the regime. In addition to giving more functions to accredited CDR participants, the third version of the CDR rules also expands consumers’ rights, as they are now able to nominate trusted advisers to access CDR data. Trusted advisers include accountants, tax agents, financial counsellors, financial advisers, and mortgage brokers.

    The updated rules also mean consumers will now be able to disclose limited data insights outside the CDR for a specific purpose, such as verifying identity and confirming bank account balances. Data sharing processes for consumers with joint accounts will also be simplified, with each account holder in a joint account able to consent to data being shared on the account from July next year. Minister for Superannuation, Financial Services and the Digital Economy Senator Jane Hume labelled the updated rules a “game changer for digital innovation”. “The rules made today are an important step in supporting the development of a vibrant data economy that provides benefits to business and consumers. The government is committed to supporting businesses and consumers to participate in the Consumer Data Right and will continue to ensure that the rules support that objective,” Hume said. In the previous set of amendments, made in December, the government permitted ADRs to offer CDR consumers the ability to amend an existing consent, which included the ability to add or remove uses, data types, accounts, or data holders, or to amend the duration of the consent. It also provided for separate consent types, including consents for collection, use, disclosure, direct marketing, and research.


    By end of 2021, Google plans to auto-enroll 150 million users in two-step verification and require 2 million YouTube creators to turn it on

    Google announced on Tuesday that it will be auto-enrolling 150 million of its users in two-step verification by the end of 2021. The platform will also force two million YouTube creators to turn on two-step verification by the end of the year. In a blog post, Google Chrome product manager AbdelKarim Mardini and Google account security and safety director Guemmy Kim said the best way to keep users safe is to turn on security protections by default. “For years, Google has been at the forefront of innovation in two-step verification (2SV), one of the most reliable ways to prevent unauthorized access to accounts and networks. 2SV is strongest when it combines both ‘something you know’ (like a password) and ‘something you have’ (like your phone or a security key),” the two explained. “2SV has been core to Google’s own security practices and today we make it seamless for our users with a Google prompt, which requires a simple tap on your mobile device to prove it’s really you trying to sign in. And because we know the best way to keep our users safe is to turn on our security protections by default, we have started to automatically configure our users’ accounts into a more secure state.” In addition to requiring 2SV, also known as two-factor authentication, Google said it checks the security of 1 billion passwords and works to protect Google’s Password Manager, which is built directly into Chrome, Android and the Google App. Even iOS users can use Chrome to autofill saved passwords, and soon Apple users will have access to Chrome’s strong password generation, a feature Apple has been rolling out over the last year on its own devices and platforms. Google is also planning to add a feature that gives users access to all of the passwords saved in the Password Manager directly from the Google app menu.
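    The ‘something you have’ factor is often a one-time code generated on the user’s phone. As a rough illustration of that mechanism, here is a minimal sketch of time-based one-time passwords (TOTP, RFC 6238) using only Python’s standard library; it illustrates the general 2SV idea rather than Google’s own prompt, and the shared secret shown is a made-up example.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Generate a time-based one-time password (RFC 6238) from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period           # number of 30-second steps since the epoch
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    # Example shared secret; in practice this is provisioned via a QR code.
    print(totp("JBSWY3DPEHPK3PXP"))
```

    A server holding the same secret computes the code independently and accepts a sign-in if the submitted code matches within a small time window.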

    In addition to its work for regular users, Google will be providing additional protection for “over 10,000 high risk users this year” through a partnership with organizations that will see them provide free security keys. “We recently launched One Tap and a new family of Identity APIs called Google Identity Services, which uses secure tokens, rather than passwords, to sign users into partner websites and apps, like Reddit and Pinterest. With the new Google Identity Services, we’ve combined Google’s advanced security with easy sign in to deliver a convenient experience that also keeps users safe,” Mardini and Kim wrote. “These new services represent the future of authentication and protect against vulnerabilities like click-jacking, pixel tracking, and other web and app-based threats. Ultimately, we want all of our users to have an easy, seamless sign-in experience that includes the best security protections across all of their devices and accounts.”