More stories

  • Hackers used the Log4j flaw to gain access before moving across a company's network, say security researchers

    A North Korean hacking and cyber-espionage operation breached the network of an engineering firm linked to military and energy organisations by exploiting a cybersecurity vulnerability in Log4j. First detailed in December, the vulnerability (CVE-2021-44228) allows attackers to remotely execute code and gain access to systems that use Log4j, a widely used Java logging library. The ubiquitous nature of Log4j meant cybersecurity agencies urged organisations globally to apply security updates as quickly as possible, but months on from disclosure, many are still vulnerable to the flaw. 
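For organisations still unsure whether they are exposed, a quick inventory of where Log4j actually lives on a host is a reasonable first step. The command below is a minimal sketch, not an exhaustive audit; the search path and the patched version to compare against (2.17.1 or later for most 2.x deployments, per the public advisories) are assumptions to adapt to your own environment.

# Sketch: locate Log4j core jars on a Linux host so their versions can be
# checked against the patched releases. Copies bundled inside fat jars or
# wars will need a deeper scan than this simple filename search.
find / -xdev -type f -name 'log4j-core-*.jar' 2>/dev/null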


According to cybersecurity researchers at Symantec, one of the companies still vulnerable was an undisclosed engineering firm that works in the energy and military sectors. That vulnerability resulted in the company being breached when attackers exploited the gap on a public-facing VMware View server in February this year. From there, attackers were able to move around the network and compromise at least 18 computers.

SEE: Google: Multiple hacking groups are using the war in Ukraine as a lure in phishing attempts

Analysis by Symantec researchers suggests that the campaign is the work of a group they call Stonefly, also known as DarkSeoul, BlackMine, Operation Troy, and Silent Chollima, an espionage group operating out of North Korea. Other cybersecurity researchers have suggested that Stonefly has links with Lazarus Group, North Korea's most infamous hacking operation. But while Lazarus Group's activity often focuses on stealing money and cryptocurrency, Stonefly is a specialist espionage operation that researchers say engages in highly selective attacks "against targets that could yield intelligence to assist strategically important sectors", including energy, aerospace, and military.

"The group's capabilities and its narrow focus on acquiring sensitive information make it one of the most potent North Korean cyber-threat actors operating today," warn researchers at Symantec.

Stonefly has existed in some capacity since 2009, but in recent years it has doubled down on targeting highly sensitive information and intellectual property, which it does by deploying password-stealers and trojan malware on compromised networks. In the case of the undisclosed engineering firm, the first malware was dropped onto the network within hours of the initial compromise. Among the tools deployed in this incident was an updated version of Stonefly's custom Preft backdoor. The payload is delivered in stages: once fully executed, it becomes an HTTP remote access tool (RAT) capable of downloading and uploading files and information, fetching additional payloads, and uninstalling itself when it is no longer needed. Alongside the Preft backdoor, Stonefly also deployed a custom-developed information-stealer that the attackers planned to use as an alternative means of exfiltration.

SEE: These are the problems that cause headaches for bug bounty hunters

Stonefly has been active for over a decade, and its attacks are unlikely to stop soon, particularly as the group has a history of developing new tactics and techniques. While Stonefly is classified as a powerful state-backed hacking group, in this instance it didn't need advanced techniques to breach a network; it simply took advantage of an unpatched critical security vulnerability.

To help ensure that known vulnerabilities like Log4j can't be exploited by state-backed hacking groups or cyber criminals, organisations should roll out security updates for applications and software as soon as possible. In the case of the firm above, that would have meant applying the patches for VMware servers, which were available before the attack happened. Other cybersecurity protocols, such as providing users with multi-factor authentication, can also help prevent attacks that take advantage of stolen passwords to move around networks.

  • Open-source security: It's too easy to upload 'devastating' malicious packages, warns Google

Google has detailed some of the work done to find malicious code packages that have been sneaked into bigger open-source software projects. The Package Analysis Project is one of the software supply chain initiatives from the Linux Foundation's Open Source Security Foundation (OpenSSF) that should help automate the process of identifying malicious packages distributed on popular package repositories, such as npm for JavaScript and PyPI for Python. It runs a dynamic analysis of all packages uploaded to popular open-source repositories, and aims to provide data about common types of malicious packages and inform those working on open-source software supply chain security about how best to improve it.

"Unlike mobile app stores that can scan for and reject malicious contributions, package repositories have limited resources to review the thousands of daily updates and must maintain an open model where anyone can freely contribute. As a result, malicious packages like ua-parser-js and node-ipc are regularly uploaded to popular repositories despite their best efforts, with sometimes devastating consequences for users," Caleb Brown of Google's Open Source Security Team explains in a blog post.


"Despite open-source software's essential role in all software built today, it's far too easy for bad actors to circulate malicious packages that attack the systems and users running that software."

SEE: Google: Multiple hacking groups are using the war in Ukraine as a lure in phishing attempts

The Package Analysis project identified more than 200 malicious packages in one month, according to OpenSSF. For example, it found token-theft attacks on Discord users that were distributed on PyPI and npm. The PyPI package "discordcmd", for example, attacks the Discord Windows client via a backdoor downloaded from GitHub and installed on the Discord app to steal Discord tokens.

Attackers distribute malicious packages on npm and PyPI often enough that OpenSSF, of which Google is a member, decided the problem needed to be addressed. In March, researchers found hundreds of malicious packages on npm that were used to target developers using Microsoft's Azure cloud, most of which relied on typosquatting and dependency confusion. Both are social-engineering attacks that exploit the repetitive steps developers go through when frequently updating a large number of dependencies. Dependency-confusion attacks rely on unusually high version numbers for a package that may, in fact, have no previous version available.

OpenSSF says most of the malicious packages it detected were dependency-confusion and typosquatting attacks, but the project believes most of these are likely the work of security researchers participating in bug bounties. "The packages found usually contain a simple script that runs during install and calls home with a few details about the host. These packages are most likely the work of security researchers looking for bug bounties, since most are not exfiltrating meaningful data except the name of the machine or a username, and they make no attempt to disguise their behavior," OpenSSF and Google note.

OpenSSF notes that any of these packages "could have done far more to hurt the unfortunate victims who installed them, so Package Analysis provides a countermeasure to these kinds of attacks."

The recent Log4j flaw highlighted the general risks of software supply chain security in open source. The component was embedded in tens of thousands of enterprise applications and prompted a massive and urgent clean-up by the US government. Microsoft last week also highlighted the role of software supply chain attacks carried out by Russian state-backed hackers in connection with military attacks on Ukraine.

This February, Google and Microsoft pumped $5 million into OpenSSF's Alpha-Omega Project to tackle supply chain security. The Alpha side works with maintainers of the most critical open-source projects, while the Omega side will select at least 10,000 widely deployed open-source programs for automated security analysis.
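Most of the packages flagged above "call home" from a script that runs at install time, which is also the easiest place for a developer to intervene. As a rough defensive sketch (the package name example-pkg is a placeholder, not a real package), both npm and pip can fetch a package for inspection without letting install hooks run:

# npm: download only the tarball, then list its contents before installing
npm pack example-pkg
tar -tzf example-pkg-*.tgz

# npm: install while skipping preinstall/postinstall scripts entirely
npm install example-pkg --ignore-scripts

# PyPI: download without installing; restricting to wheels avoids running
# any setup code during the download (source-only packages will be refused)
pip download example-pkg --no-deps --only-binary=:all: -d ./review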

  • Amazon invests in robots to work alongside humans

Written by Greg Nichols, Contributor


Image: Agility Robotics
One of my favorite robots of the last few years is named Cassie. Little more than a pair of bipedal robotic legs, the robot was designed as a robust R&D tool for ground mobility applications. It's a cool robot, and it's a great illustration of a company developing baseline technology readymade for useful iteration.

That approach has now netted Agility Robotics, maker of Cassie and, more recently, of commercial robots designed to work alongside people in logistics and warehouse environments, an impressive $150M Series B, which it will use to implement human-robot collaboration in logistics warehouses. The humanoid robots are capable of carrying out a number of warehouse tasks previously done by humans and can be deployed flexibly in various environments.


"Unprecedented consumer and corporate demand have created an extraordinary need for robots to support people in the workplace," explained Damion Shelton, CEO of Agility Robotics. "With this investment, Agility can ramp up the delivery of robots to fill roles where there's an unmet need."

Very notably, this round had participation from Amazon as part of the company's recently announced $1B Industrial Innovation Fund. Agility is one of the first five recipients of the fund, a milestone investment for a seven-year-old company out of rural Oregon started by two robotics PhDs from Carnegie Mellon.

"The purpose of the Amazon Industrial Innovation Fund is to support emerging technologies through direct investments, designed to spur invention and solve for the world's toughest problems across customer fulfillment operations, logistics, and supply chain solutions," said Katherine Chen, head of the Amazon Industrial Innovation Fund. "Agility's approach to designing robotics for a blended workforce is truly unique and can have a significant ripple effect for a wide range of industries, and we hope others follow suit to accelerate innovation in this way."

Agility has evolved quickly to focus on true commercial robots that work collaboratively, side by side with people, in a familiar and non-threatening way. Its robots can walk, climb stairs, navigate unstructured environments, carry packages, stack goods, and work indoors or out, all skills that were elusive in robotics development even a short time ago. The company's robots are now deployed in the factories and warehouses of top U.S. logistics companies, as well as at Ford Motor Co. and some of the country's most elite research institutions, including those in Ohio and Michigan.

The capital raise underscores the continued reliance on automation to drive efficiency, even as the economy grows more turbulent. Faced with a tight labor market and supply chain woes, logistics operations are increasingly relying on automation to fill key gaps, slowly shifting reliance away from human workers. Agility's most advanced robot will be deployed at customers' sites later this year. The Series B was led by DCVC and Playground Global.

  • Dell targets multi-cloud ecosystem with cyber recovery and data analytics

Written by Aimee Chanthadavong, Senior Journalist


May 2, 2022 | Topic: Cloud

One year on from unveiling its Apex as-a-service portfolio, Dell Technologies is bolstering the portfolio to move beyond infrastructure and target more workload-based solutions, with the launch of Apex Cyber Recovery. The service is designed to streamline the deployment of cyber recovery solutions through standardised configurations and recovery options.

"With Apex Cyber Recovery, customers can feel confident in the ability to recover from a destructive cyber attack and achieve more agility by offloading the day-to-day management of data protection. Customers get more resiliency from an isolated, immutable, and intelligent data vault," Dell Apex product management vice president Chad Dunn told media during a briefing on Apex. Apex Cyber Recovery is initially being made available in the US, with plans for broader availability later this year.

The tech giant is also extending its reach in the multi-cloud ecosystem, starting with the release of PowerProtect Cyber Recovery for Microsoft Azure on the Azure Marketplace. Dell said it will allow organisations to deploy an isolated cyber vault in the public cloud so that, if recovery is necessary, they can recover back to their main corporate data centre, an Azure private network, or a clean environment within Azure. The release comes off the back of Dell recently delivering a similar offering for Amazon Web Services (AWS). On AWS, Dell has announced the launch of CyberSense on AWS Marketplace, which uses analytics, metadata, and machine learning to proactively detect, diagnose, and speed up data recovery when an attack has occurred, as well as identify the last known uncorrupted copy of data to recover from. Both PowerProtect Cyber Recovery for Microsoft Azure and CyberSense for Dell PowerProtect Cyber Recovery for AWS will be globally available in Q2.

Additionally, Dell has drummed up a new strategic partnership with Snowflake, so that joint customers can, for the first time, leverage Snowflake's cloud-based analytics for on-premises data and gain more insights. Jon Siegal, the company's ISG product marketing VP, explained that customers will be able to connect Dell's object storage to Snowflake in two ways. "The first way is by running Snowflake's analytics against Dell's on-premises object storage without moving the data to the cloud ... it's really for customers who don't want to move their data to the cloud, whether it's for compliance, security, control, or data sovereignty reasons," he said. "Secondly, customers also have the ability to connect their on-prem Dell object storage to Snowflake by simply copying Dell's on-premises object data to the Snowflake cloud, so it can be analysed in Snowflake's cloud itself."

Dell also took the opportunity to provide an update on Project Alpine, which was introduced at the start of the year. Siegal said that from the second half of this year, Dell will be introducing data mobility and the same consistent management experience across on-premises and public cloud environments. He added that customers will be able to "power up" their multi-cloud environments by leveraging the data services capabilities found in Dell storage platforms such as PowerStore, PowerScale, PowerFlex, and ObjectScale.

  • How XDR provides protection against advanced exploits

Damage caused by advanced exploits, such as Log4Shell and Spring4Shell, has been widely documented. These came out of nowhere and seemingly crippled many organizations, despite record cybersecurity industry budgets that will clear $146B in 2022. This post from Palo Alto Networks highlights that, based on telemetry, the company observed more than 125 million hits that had the associated packet capture that triggered the signature. It certainly raises the question of why breaches are becoming more common and more damaging despite security spending at an all-time high.

The answer lies in the approach many businesses have taken to threat protection. Traditional security is based on perceived best-of-breed products being used for specific functions. For example, firewalls protect the network, EDR protects endpoints, CASB protects the cloud, and so on. Most of these tools do a great job within their domains, but the reality is that exploits are not limited to one specific domain, so the silo-like nature of security creates many blind spots.

Point products can't see the end-to-end threat landscape

For example, EDR tools are meant to find threats on endpoints, and they are effective at that specific task, but they have no visibility outside the endpoint. So if the breach occurred elsewhere, there is no way of knowing where and when. This is why so many EDR tools are excellent at detection but poor at response. The same can be said of firewalls, which generally know everything that's happening on a network but have no insight into an endpoint or many cloud services.

Solving this problem lies in embracing the concept of XDR. Definitionally, I want to be clear that the X in XDR means "all" versus "eXtended," the latter of which has been pushed by many of the point-product vendors. Security pros need to understand that an upgraded EDR or SIEM tool is not XDR; it is merely a legacy tool with a little more visibility.

XDR is the way forward for security

True XDR is about taking data from across the end-to-end infrastructure and correlating the information to find exploits and threats. This allows an exploit to be quickly identified and tracked across the infrastructure so all infected devices can be identified. While it's impractical to assume that an organization would purchase all its infrastructure from a single vendor, I do believe that organizations should look to consolidate, at a minimum, network, endpoint, and cloud security with a single vendor and treat that as the foundational platform for XDR. This would ensure that the vendor interoperates with other security providers to ingest the necessary data.

Another benefit of XDR is that it provides a single source of truth across all security functions, which is vastly different from traditional security, where the security team has multiple tools, each with its own set of data and insights. The only way one could correlate that information is to do it manually, which is impossible today given the massive amount of security data being collected. People can't work fast enough, but an XDR solution, powered by artificial intelligence, can provide insights to a range of security analysts.

XDR meets the needs of different security roles

A good visualization of the value of XDR is depicted on Palo Alto Networks' Log4j Incident Response Simulation page. It features three different SOC roles and how XDR can aid their jobs.
Specifically, the site does a deep dive on the following functions:

Guy, the Threat Hunter: His job is to hunt for sophisticated attacks and those difficult-to-find, low-and-slow threats that fly under the radar of traditional security tools. He looks for unusual activities and other anomalies that are indicators of compromise. Cortex XDR makes threat hunting easier as it correlates data across endpoints, network, cloud, and identity. Guy can then use the advanced XQL query language to aggregate, visualize, and filter results to quickly identify affected assets.

Peter, the Tier 2 SOC Analyst: His function is to monitor, prioritize, and investigate alerts. His work is used to resolve incidents and remediate threats. The problem is that most SOC tools produce far too many false positives, making the information useless. This is why it's my belief that the traditional SIEM needs a major overhaul. XDR uses machine learning and behavioral analytics to uncover advanced zero-day threats. Many SIEMs claim to do this, but most are just basic rules-based engines that need continual updating. With XDR, the investigation of threats is accelerated by grouping related alerts into incidents, and the root cause is then revealed through cross-data insights.

Kasey, Director of Vulnerability Management: Her job is to discover and analyze application, system, network, and other IT vulnerabilities, and then assess and prioritize risk. Once that analysis is done, patching and resolving vulnerabilities can be performed. This is difficult, if not impossible, to do with point products because there is no way to understand the impact of a threat across systems. XDR can be combined with other tools, such as attack-surface management (ASM), to find and mitigate software vulnerable to Log4j and other exploits across the organization.

In summary, I'll go back to a conversation I had with a CISO a few months ago who told me that he finally understood that best of breed everywhere does not lead to best-in-class threat protection. In fact, the average of 30+ security vendors that businesses use today creates a management mess and leads to suboptimal protection. The path forward must be XDR, because it's the only way to correlate historically siloed data to find threats and quickly remediate them before they cripple the business.

A good resource for security professionals, particularly Palo Alto Networks customers, is the upcoming Palo Alto Networks Symphony 2022, on May 18 and 19. While this is a vendor event, it's filled with information on how to revamp security operations to keep them in line with current trends.

  • Cortex App to launch new Web3 content network this summer

May 2, 2022 | Topic: Web3

In its quest to make "Web3 available to everyone," the core team behind the newly formed Cortex App said Monday that it's introducing a new Web3 content network for launch in June. It will "bring Web2 functionality, such as social posts and blogs, into a decentralized and user-owned Web3." This is the same group that launched free ".hmn" domain names on the Polygon protocol earlier this year.

Cortex Network, according to the release on Monday, will afford users new levels of control and privacy for themselves and their content. What's more, the network will enable new ways to collaborate and define payment models for NFTs and content. "In the Cortex Network, each page (URL) will be a wallet address where a user could receive tokens for their content, or send out tokens as well," Leonard Kish, co-founder of Cortex App, told ZDNet. "Each page will essentially be a store for data and a store for NFTs or other tokens," he said.

How it works

The Cortex Network will act like a proof-of-stake blockchain, whereby publishers stake so-called CRTX tokens to validate user updates and then publish them over the Polygon network. A new kind of index, known as HDIndex, will create a hash that acts as an on-chain proof and a lookup to content updates. The press release claims that "when publishing on the Cortex Network, users will own their content as they control updates with their keys." The goal for the Cortex Network is to simplify Web3 publishing, making it easier for current Web2 publishers to migrate to a user-owned Web3 content network.
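Cortex hasn't published the HDIndex format, so any concrete example is necessarily a sketch, but the general idea of a content hash doubling as a proof and a lookup key can be illustrated with ordinary tooling. Here, page.html and its revised copy are placeholder files standing in for a user's content; the hashes are the kind of value a chain would record:

# Sketch only: not Cortex's actual HDIndex, just the underlying idea.
# The hash of the current content can be recorded on-chain as a proof...
sha256sum page.html
# ...and any later edit produces a different hash, i.e. a new verifiable
# state that the index can point to.
sha256sum page-updated.html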


The Network is based on an architecture in which batches of updates (known as "commits") contribute to a local state of content which, in turn, becomes part of a globally verifiable localized consensus. What all that means is that each commit contributes to a globally verifiable state for content, with a complete history of the content at a particular web address. "In the Cortex system, URLs and crypto addresses are nearly synonymous as part of a human-readable namespace for keys that act as lookups to content," according to the press release.

Kish notes that when it comes to new ways to collaborate and define payment models for NFTs, the NFT domains (such as "kish.hmn") and subdomains ("leo.kish.hmn") on which the content lives are fully transferable, so an NFT can have a full story. And when transferred, that story (the content) can move with the NFT domain as well. "We are working on several ways to expand how NFTs work and the kinds of value they can transmit, and this is one," Kish said. "Others are coming as well."

Barriers to break down

Price, complexity, scalability, and consistency are four obstacles to Web3 publishing, and the Cortex Content Network intends to overcome them. The Network will act as a "complete stack to enable not only a fast and reliable decentralized content environment for Web3, but scalable as well," according to the Cortex App blog.

Further work is needed before the Network is ready for prime time. "We are working with partners now on testing elements of the network, but we don't have an exact date," Kish said. "We do expect to be able to provide an exact date for launch in the month of June."

  • How to make SSH even easier to use with config files

Written by Jack Wallen, Contributing Writer


Secure Shell (SSH) is one of those tools every Linux user will probably work with at some point. With SSH, you can easily (and securely) log into remote servers and desktops to administer, develop, and check up on those machines. Using SSH is as simple as:

ssh jack@192.168.1.11

Or even just:

ssh 192.168.1.11

Of course, you would exchange the IP address for the address (or domain) of the machine you need to access.


SSH gets a bit less simple when you have numerous machines you access with different configurations (such as different usernames or SSH authentication keys). Imagine if you had 20 or so different servers you had to log into daily. Not only would you have to keep track of the IP addresses or domains of those servers, but you'd also have to remember what usernames or authentication keys were used. That alone could get rather overwhelming.

Thankfully, SSH allows you to create a config file to house all of that information. So, instead of having to type something like ssh olivia@192.168.1.100 -p 2222, you could simply type ssh web1. Let me show you how this is done.

Creating the SSH config file

Log in to the Linux machine you use to SSH into all of those remote machines. Open a terminal window and create the new configuration file with the command shown in Figure A.

Figure A: Creating the new SSH config file with the help of nano.

Since this is a new file, it'll be a blank canvas to which we can start adding configurations for servers. Let's say you want to configure the following remote servers:

web1 at 192.168.1.100 with user olivia
db1 at 192.168.1.101 with user nathan and SSH key ~/.ssh/id_nathan
docker1 at 192.168.1.102 with user lilly on port 2222

Our first entry will look like this:

Host "web1"
Hostname "192.168.1.100"
User olivia

If you save and close the file at this point, you could SSH into 192.168.1.100 with the command:

ssh web1
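The exact command from Figure A isn't reproduced in the text, but creating and locking down the file is a one-time step. As a minimal sketch, assuming the nano editor named in the figure caption and the default per-user config path:

# Create (or open) the per-user SSH config file
nano ~/.ssh/config
# Restrict permissions; SSH refuses to use a config file other users can write to
chmod 600 ~/.ssh/config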
Let's go ahead and configure the next two entries, which will look like this:

Host db1
    Hostname “192.168.1.101”
    User nathan
    IdentityFile ~/.ssh/id_nathan
    PubkeyAuthentication yes

    Host docker1
    Hostname “192.168.1.102”
    User lilly
    Port 2222
Save and close the file. You can now secure shell into those machines with the commands:

ssh web1
    ssh db1
    ssh docker1
You can use whatever nickname you like for each host; just make them memorable, so you don't forget which machine you're trying to reach and have to constantly reference the config file to jog your memory.

Let's say, however, that you use the same username on all your remote servers, but a different username on your local machine. For example, your local machine username might be jack, but you've created the admin user on all of your remote servers. You could create a single entry for all of those servers with a wildcard in the IP address, like this:

Host 192.168.1.*
    User admin
The above configuration would be placed at the top of your config file. You could then configure each server individually as needed, leaving out the User option. For example, if both servers at 192.168.1.200 and 192.168.1.201 use SSH key authentication, you could configure entries like so:

Host web2
    Hostname 192.168.1.200
    IdentityFile ~/.ssh/id_admin
    PubkeyAuthentication yes

    Host web3
    Hostname 192.168.1.201
    IdentityFile ~/.ssh/id_admin
    PubkeyAuthentication yes
Because we applied the user admin to the entire range of machines on the 192.168.1.x address scheme, that username will be applied to all connections. You can also override that global configuration by adding a User line to individual entries on an as-needed basis.

The SSH config file allows for several other options (all of which can be read about in the official SSH config documentation), but the examples above should be everything you need to get going. And that's all there is to using the SSH config file to make your remote access with Secure Shell even easier.
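One handy way to check your work: OpenSSH can print the configuration it would actually apply for a given nickname, without opening a connection. Using the hypothetical web1 host from above:

# Show the effective options (user, hostname, port, identity file, and so on)
# after all matching Host blocks have been evaluated
ssh -G web1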


  • Why we need more than one Twitter

Written by Jason Perlow, Senior Technology Editor


While on vacation, I considered my response to Elon Musk buying Twitter and whether it would differ from the kneejerk analysis we already have. As a general rule, when I write about subjects related to technology, I try to take a different angle from what is already covered. So I'm not going to take the "Let's all quit Twitter" viewpoint, or the "Elon Musk should be prohibited from buying Twitter" standpoint, or even the "Twitter is going to hell in a neoconservative handbasket" perspective.

But we should examine why we care about Twitter at all. It serves an important function as an instantaneous, publicly viewable broadcast message bus for individuals, brands, governments, and everything in between. But it also has many weaknesses, including that it is not a public good — it is a corporation, and if Elon Musk gets his way, it will again be a privately owned one.

Therefore, short of government policies limiting its powers, how Twitter is run, its overall technology vision, and its enforced policies will always reflect its ownership and designated management. This is also true of Facebook and its various internet properties, although their basic functionality is different and much broader in scope than Twitter's.

If Twitter is to be owned by Elon Musk, there will be a change of leadership and potentially of ethical direction in terms of what content and what influencing entities will be permitted on the platform. We can debate endlessly about what systemic changes will occur under Musk and whether they will be good or bad. But the world outside Twitter will always be in a state of flux, as governments and leaders come and go, as does what the public feels is ethically permissible to broadcast versus what is unethical or repugnant.

This is why I believe that microblogging — the generic term for the type of service that Twitter is — needs to be a fundamental part of the internet's infrastructure, much in the same way that SMTP email, DNS, and the web are. And in the same way that those services are standardized, from a protocol standpoint, via organizations such as the ISO, ANSI, Ecma, and the IEC.

I don't know how many Twitter-like services we need, whether it is a dozen, a hundred, a thousand, or tens of thousands. But it is clear that there are many types of voices on Twitter, all of which are competing to be heard and are subject to unknown algorithms that determine whose voices are surfaced and when. If a community of voices is vocal enough or wants to amplify itself, it should be able to host its own microblogging platform, provided that is permitted in the country where it chooses to base itself and it has the resources to do so.

Also: Forget the algorithm: Here's what really makes Twitter unique

It doesn't matter whether it is a government entity, the academic and scientific community, a vertical industry, or any group of people that decides it wants its own platform — all a microblogging community requires is a viewpoint, a common objective, what have you.

Assume that we can create an international standard for a microblogging protocol and API, which determines client/server connectivity. Assume the open-source community can create microblogging server infrastructure, clients, and APIs. How do we get them the needed visibility?

I believe it is possible to have directory and federation services that would allow all of these to be registered, much like we have registrars for domains. This would allow microblogging clients, or systems that can connect to the API, to have unified "feeds" of these platforms, including exchanging posts and conversations, much like discussion threads that cross-post within USENET.

This is not to say that all microblogging platforms will have intelligent or thoughtful conversation — but rather, it would be easier to find the ones that do if they are federated under a common set of protocols. Over time, this would even out the signal-to-noise ratio as more and more microblogging platforms emerge and become discoverable. The quality of conversation would increase, while the insanity that often takes over Twitter would be reduced.

This is a vision for how we can create a more decentralized (and therefore less susceptible to capture by any one entity, government, or otherwise) social media platform: a platform where people can voluntarily associate themselves with like-minded individuals.

Blockchain technologies and other content provenance and authentication technologies, such as C2PA, could ensure transactional and referential integrity between systems and prove ownership of posted content. Various open authentication mechanisms, which have already been standardized on other platforms, such as Google ID, Microsoft ID, Facebook, and yes, Apple, Amazon, and Twitter, could be used for single sign-on. Accounts could be created and consolidated on these platforms, including allowing accounts from one system to participate in another via trust relationships. This is possible, but it requires a community of people who see value in decentralization. Who will build this? I don't know. But if we don't, somebody else will, and they may not have our best interests in mind.

Also: No, Elon, Twitter will never be a platform for 'Free Speech'

Having a distributed network of dozens, thousands, or tens of thousands of microblogging platforms raises its own issues. How a platform or an individual gets "canceled" from another, and which platforms continue to allow objectionable content, will be endlessly debated. A new form of politics will be involved when a specific group or account engages in behavior another community finds objectionable. Will offending accounts and platforms be suspended, with trust relationships severed? Will the platforms change their terms of service to disallow certain speech or behavior not in line with their values? If we don't start down this path towards decentralization and alternative networks for communication, we will never find out.

The benefits here outweigh the negatives because, again, we would not depend on the whims of a single entity as to how it adjudicates conflict. Nor are we limiting ourselves to a single nation, company, or even a single management philosophy. We take back control of the internet from those few who wish to manipulate us and hold the keys to our platforms. We would get to run our communities the way we want — failing quickly when we create environments that do not provide the functionality, atmosphere, and value sets their users want, and succeeding when we do.

I believe that this is the next step in the evolution of the internet, and it starts with you. You can be part of the solution by helping to create these communities or platform providers. You can use your skills in software development, system administration, design, user experience, or business to make this happen.

The time is now. Let's build a better internet together.
