More stories

  •

    Minion privilege escalation exploit patched in SaltStack Salt project

    The Salt Project has patched a privilege escalation bug impacting SaltStack Salt minions that could be used during a wider exploit chain. 

    The vulnerability, CVE-2020-28243, is described as a privilege escalation bug impacting SaltStack Salt minions allowing “an unprivileged user to create files in any non-blacklisted directory via a command injection in a process name.” 
    The bug has been given a CVSS severity score of 7.0 and impacts Salt versions before 3002.5.
    SaltStack’s Salt is open source software designed for automation and infrastructure management. 
    In November, Immersive Labs’ security researcher Matthew Rollings performed a scan on the tool using Bandit, a Python application security scanner, and came across the bug as a result. 
    Salt includes a master system and minions; the latter carry out commands sent by the master, and both often run as root. Rollings discovered a command injection vulnerability in minions that is triggered when the master invokes a process called restartcheck. Attackers can exploit the flaw with crafted process names, permitting local users to escalate their privileges to root, as long as they are able to create files on a minion in a non-blacklisted directory. 
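    The general shape of this bug class can be sketched as follows. This is an illustration only, not Salt's actual code; the `buildShellCmd` helper and the crafted name are hypothetical. If an untrusted process name is spliced into a shell command string, any shell metacharacters it carries become commands of their own:

```go
package main

import "fmt"

// buildShellCmd (hypothetical) splices an untrusted process name into a
// shell command string -- the unsafe pattern behind this class of bug.
func buildShellCmd(procName string) string {
	return "restartcheck " + procName
}

func main() {
	// A crafted process name carrying a shell metacharacter.
	crafted := "httpd; touch /tmp/injected"
	cmd := buildShellCmd(crafted)
	// If cmd were handed to `sh -c`, everything after the ";" would run
	// as a separate command, here creating an attacker-chosen file.
	fmt.Println(cmd)
}
```

    The standard fix is to pass untrusted values as discrete arguments, e.g. `exec.Command("restartcheck", procName)`, so no shell ever interprets them.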
    With further investigation, the researcher noted it may also be possible to perform container escapes, including performing the exploit “within a container to gain command execution as root on the host machine.”

    In addition, Rollings said the vulnerability “may be performed by an attacker without local shell access, [and] under certain circumstances, remote users can influence process names.” However, this form of attack is considered “unlikely” and could be difficult to trigger. 
    The Salt Project resolved the vulnerability in a February security release. The group also patched other high-impact bugs including CVE-2021-3197, a shell injection flaw in Salt-API’s SSH client; CVE-2021-25281, an eAuth security issue that could allow remote attackers to run any wheel modules on the master; and CVE-2021-25283, a failure to protect against server-side template injection attacks. 
    ZDNet has reached out to the Salt Project and will update when we hear back. 
    Previous and related coverage
    Have a tip? Get in touch securely via WhatsApp | Signal at +447713 025 499, or over at Keybase: charlie0 More

  •

    Businessman charged with intent to steal General Electric’s secret silicon technology

    A Chinese businessman has been charged with intent to steal General Electric’s (GE) processor technology. 

    On Friday, the US Department of Justice (DoJ) said that Chi Lung Winsman Ng, a 64-year-old resident of Hong Kong, allegedly plotted to steal MOSFET intellectual property with the overall goal of developing a business — and rival — based on GE’s technology. 
    According to the DoJ indictment, between roughly March 2017 and January 2018, Ng teamed up with a co-conspirator, a former GE engineer, to hash out a plan to steal the company’s proprietary data. 
    General Electric’s silicon carbide metal-oxide semiconductor field-effect transistors (MOSFETs) are semiconductor designs that the company has been working on for more than a decade. GE’s chips are used in a variety of products and have landed the firm contracts in both the automotive and military space. 
    Assistant Attorney General John Demers of the DoJ’s National Security Division said that Ng and co-conspirators “chose to steal what they lacked the time, talent or money to create.”
    The DoJ claims that the pair went so far as to create PowerPoint presentations to impress investors with their start-up’s business plan and told interested parties that the new company could be profitable within three years. 
    Ng and the engineer allegedly claimed that the start-up owned assets worth $100 million — including intellectual property — and sought to secure $30 million in funding. At least one meeting took place with a Chinese investor, according to US prosecutors. 

    “We have no evidence that there was an illegal MOSFET technology transfer to any Chinese companies, including the company that Ng and his co-conspirator were trying to start,” the DoJ added. 
    Ng has been charged with conspiracy to steal trade secrets. If arrested and found guilty, the businessman would face up to 10 years behind bars and a fine of up to $250,000. 
    “According to the indictment, Mr. Ng conspired to steal valuable and sensitive technology from GE and produce it in China,” commented Special Agent in Charge Thomas Relford of the FBI’s Albany Field Office. “Our office, the US Attorney’s Office, and GE coordinated closely and worked quickly to prevent that theft and the resulting damage to our economic security.”

  •

    Why would you ever trust Amazon's Alexa after this?

    Skillful, but not necessarily trustworthy?
    It was only the other day that I was wondering whether it would be fun to have a cuckoo clock in my kitchen.


    An Amazon Alexa-powered cuckoo clock, that is.
    I concluded that the idea was utterly bonkers, as are most things Alexa-enabled.
    But we all have our prejudices and many Americans are only too delighted to have Amazon’s Echos and Dots strewn about their homes to make their lives easier.
    Why, Alexa can even buy you your mummy, should you want.
    Yet perhaps Alexa-lovers should be warned that things may not be as delightful as they seem.
    Skills? Oh, Everyone’s Got Skills.
    New research from concerned academics at Germany’s Ruhr-University Bochum, together with equally concerned colleagues from North Carolina State — and even a researcher who, during the project, joined Google — may just make Alexa owners wonder about the true meaning of an easy life.

    The researchers looked at 90,194 Alexa skills. What they found was a security Emmenthal that would make a mouse wonder whether there was any cheese there at all.
    How much would you like to shudder, oh happy Alexa owner?
    How about this sentence from Dr. Martin Degeling: “A first problem is that Amazon has partially activated skills automatically since 2017. Previously, users had to agree to the use of each skill. Now they hardly have an overview of where the answer Alexa gives them comes from and who programmed it in the first place.”
    So the first problem is that you have no idea where your clever answer comes from whenever you rouse Alexa from her slumber. Or, indeed, how secure your question may have been.
    Ready for another quote from the researchers? Here you go: “When a skill is published in the skill store, it also displays the developer’s name. We found that developers can register themselves with any company name when creating their developer’s account with Amazon. This makes it easy for an attacker to impersonate any well-known manufacturer or service provider.”
    Please, this is the sort of thing that makes us laugh when big companies get hacked — and don’t tell us for months, or even years.
    These researchers actually tested the process for themselves. “In an experiment, we were able to publish skills in the name of a large company. Valuable information from users can be tapped here,” they said, modestly.
    This finding was bracing, too. Yes, Amazon has a certification process for these skills. But “no restriction is imposed on changing the backend code, which can change anytime after the certification process.”
    In essence, then, a malicious developer could change the code and begin to hoover up sensitive personal data.

    Security? Yeah, It’s A Priority.
    Then, say the researchers, there are the skills developers who publish under a false identity.
    Perhaps, though, this all sounds too dramatic. Surely all these skills have privacy policies that govern what they can and can’t do.
    Please sit down. From the research: “Only 24.2% of skills have a privacy policy.” So three-quarters of the skills, well, don’t.
    Don’t worry, though, there’s worse: “For certain categories like ‘kids’ and ‘health and fitness’ only 13.6% and 42.2% skills have a privacy policy, respectively. As privacy advocates, we feel both ‘kids’ and ‘health’ related skills should be held to higher standards with respect to data privacy.”
    Naturally, I asked Amazon what it thought of these slightly chilly findings.
    An Amazon spokesperson told me: “The security of our devices and services is a top priority. We conduct security reviews as part of skill certification and have systems in place to continually monitor live skills for potentially malicious behavior. Any offending skills we identify are blocked during certification or quickly deactivated. We are constantly improving these mechanisms to further protect our customers.”
    It’s heartening to know security is a top priority. I fancy that getting customers amused by as many Alexa skills as possible, so that Amazon can collect as much data as possible, might be a higher priority.
    Still, the spokesperson added: “We appreciate the work of independent researchers who help bring potential issues to our attention.”
    Some might translate this as: “Darn it, they’re right. But how do you expect us to monitor all these little skills? We’re too busy thinking big.”
    Hey, Alexa. Does Anyone Really Care?
    Of course, Amazon believes its monitoring systems work well in identifying true miscreants. Somehow, though, expecting developers to stick to the rules isn’t quite the same as making sure they do.
    I also understand that the company believes kid skills often don’t come attached to a privacy policy because they don’t collect personal information.
    To which one or two parents might mutter: “Uh-huh?”
    Ultimately, like so many tech companies, Amazon would prefer you to monitor — and change — your own permissions, as that would be very cost-effective for Amazon. But who really has those monitoring skills?
    This research, presented last Thursday at the Network and Distributed System Security Symposium, makes for such candidly brutal reading that at least one or two Alexa users might consider what they’ve been doing. And with whom.
    Then again, does the majority really care? Until some unpleasant happenstance occurs, most users just want to have an easy life, amusing themselves by talking to a machine when they could quite easily turn off the lights themselves.
    After all, this isn’t even the first time that researchers have exposed the vulnerabilities of Alexa skills. Last year, academics tried to upload 234 policy-breaking Alexa skills. Tell me how many got approved, Alexa? Yes, all of them.
    The latest skills researchers themselves contacted Amazon to offer some sort of “Hey, look at this.”
    They say: “Amazon has confirmed some of the problems to the research team and says it is working on countermeasures.”
    I wonder what skills Amazon is using to achieve that.

  •

    Chrome will soon try HTTPS first when you type an incomplete URL

    Google engineers have been some of the most ardent promoters of browser security features over the past few years and, together with the teams behind the Firefox and Tor browsers, have often been behind many of the changes that have shaped browsers into what they are today.

    From pioneering features like Site Isolation and working behind the scenes at the CA/B Forum to improve the state of the TLS certificate business, we all owe a great deal of gratitude to the Chrome team.
    But one of the biggest areas of interest for Chrome engineers over the past few years has been pushing and promoting the use of HTTPS, both inside their browser and among website owners.
    As part of these efforts, Chrome now tries to upgrade sites from HTTP to HTTPS when HTTPS is available.
    Chrome also warns users when they’re about to enter passwords or payment card data on unsecured HTTP pages, from where they might be sent across a network in plaintext.
    And Chrome also blocks downloads from HTTP sources if the page URL is HTTPS, to avoid users being tricked into thinking their download is secure when it actually is not.
    Changes to the Chrome Omnibox arriving in v90
    But even though around 82% of all internet sites run on HTTPS, these efforts are far from done. The latest of these HTTPS-first changes will arrive in Chrome 90, scheduled for release in mid-April this year.

    The change will impact the Chrome Omnibox, the name Google uses for the Chrome address (URL) bar.
    In current versions, when users type a link in the Omnibox, Chrome will load the typed link, regardless of protocol. But if users forget to type the protocol, Chrome will add “http://” in front of the text and attempt to load the domain via HTTP.
    For example, typing something like “domain.com” in current Chrome installs loads “http://domain.com.”
    This will change in Chrome 90, according to Chrome security engineer Emily Stark. Starting with v90, the Omnibox will load any domain typed without a protocol via HTTPS, adding an “https://” prefix instead.
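    The new default amounts to a small normalization rule. A minimal sketch, assuming bare input with no scheme (an illustration of the described behavior, not Chrome's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeTyped mimics the described Omnibox default: input typed
// without an explicit scheme gets "https://" prepended instead of the
// old "http://"; explicit schemes are left untouched.
func normalizeTyped(input string) string {
	if strings.Contains(input, "://") {
		return input // the user typed a scheme; respect it
	}
	return "https://" + input
}

func main() {
	fmt.Println(normalizeTyped("domain.com"))        // https://domain.com
	fmt.Println(normalizeTyped("http://legacy.com")) // http://legacy.com
}
```

    Real browsers also need a fallback to HTTP when the HTTPS load fails, which this sketch omits.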
    “Currently, the plan is to run as an experiment for a small percentage of users in Chrome 89, and launch fully in Chrome 90, if all goes according to plan,” Stark explained on Twitter this week.
    Users who’d like to test the new mechanism can do so already in Chrome Canary. They can visit the following Chrome flag and enable the feature:
    chrome://flags/#omnibox-default-typed-navigations-to-https


  •

    Berlin resident jailed for threatening to bomb NHS hospital unless Bitcoin ransom was paid

    A Berlin resident has been found guilty of threatening to bomb a hospital and attempting to blackmail the UK’s National Health Service (NHS) for £10 million in Bitcoin (BTC).

    Emil Apreda, previously only identified as Emil A. due to German law, is a 33-year-old Italian and resident of Berlin, Germany, with a background in computing. 
    On Friday, the presiding judge in the District Criminal Court of Berlin convicted Apreda and sentenced him to three years in prison.
    Apreda was accused of sending emails to the NHS between April and June 2020, in which he threatened to detonate a bomb in an unspecified hospital in the United Kingdom unless he was paid £10 million ($14m) in cryptocurrency. 
    Nigel Leary, deputy director of the National Cyber Crime Unit at the UK’s National Crime Agency (NCA), said in a briefing on Thursday that his threats “escalated” over a period of six weeks. 
    The first email was sent on April 25, during the first UK lockdown. The NHS was the first subject of the threats, with Apreda saying he would deposit an “explosive package” in a hospital unless his demands were met. The NCA was also sent the same email within hours. 
    Apreda monitored world events, attempting to take advantage of the COVID-19 pandemic, and also claimed he would plant explosives at Black Lives Matter protests. In addition, the agency says that Apreda threatened the safety of members of parliament around the time of the anniversary of the murder of Labour MP Jo Cox. 

    Overall, the NCA believes the threats were a “social engineering” attempt designed to “elicit the response he was after”: the cryptocurrency payment. The agency has no reason to suspect that Apreda had any access to explosive materials. 
    The NHS did not respond to the blackmail attempts. 
    Prosecutors said that the “attempted extortion” continued until his arrest in June, when UK authorities worked with overseas partners to obtain a warrant and force entry into the suspect’s home.  
    Apreda’s trial began on December 11 in Germany. He has now been sentenced but has been released on bail until the decision has been ratified. 
    The NCA took the threat seriously, with Leary noting that at a time when the COVID-19 pandemic was entering full swing, there was a “deep and heightened vulnerability” in the medical system.
    The investigation into the culprit required a “dynamic and significant response,” according to the agency. The potential risk was heightened as Apreda claimed he was part of “Combat 18,” which, while not proscribed as a terrorist organization in the UK, is still a group with extremist, far-right leanings. 
    Hospitals, by their nature, are open spaces, and during the first lockdown they were among the few places where large numbers of people still gathered. 
    “We had to step in pretty quickly and make sure that everything that could be done, was done,” Leary commented, but added that “nothing should be done to deter people from seeking medical treatment.”
    Apreda was not extradited but would have faced “similar” charges in the UK, according to the NCA.
    In June, YouTuber Matthew Wain was jailed for 12 weeks after he recorded himself making a bomb threat and saying that he hoped NHS staff at Birmingham City Hospital “died of coronavirus.”
    The footage was posted online in March. The 31-year-old later claimed he was dissatisfied with the treatment he had received at the hospital and that the online rant was nothing more than an “empty threat.” 

  •

    Go malware is now common, having been adopted by both APTs and e-crime groups

    The number of malware strains coded in the Go programming language has seen a sharp increase of around 2,000% since 2017, cybersecurity firm Intezer said in a report published this week.
    The company’s findings highlight and confirm a general trend in the malware ecosystem, where malware authors have slowly moved away from C and C++ to Go, a programming language developed at Google and first released publicly in 2009.
    Intezer: Go malware, now a daily occurrence
    While the first Go-based malware was detected in 2012, it took a few years for Golang to catch on in the malware scene.
    “Before 2019, spotting malware written in Go was more a rare occurrence and during 2019 it became a daily occurrence,” Intezer said in its report.
    But today, Golang (as Go is often also called) has broken through and has been widely adopted.


    It is used by nation-state hacking groups (also known as APTs) and cybercrime operators, and even by security teams, who often use it to create penetration-testing toolkits.
    There are three main reasons why Golang has seen this sudden, sharp rise in popularity. The first is that Go supports easy cross-platform compilation: malware developers can write code once and compile binaries from the same codebase for Windows, macOS, and Linux, a versatility that they don’t usually have with many other programming languages.

    The second reason is that Go-based binaries are still hard for security researchers to analyze and reverse engineer, which has kept detection rates for Go-based malware very low.
    The third reason is related to Go’s support for working with network packets and requests. Intezer explains:
    “Go has a very well-written networking stack that is easy to work with. Go has become one of the programming languages for the cloud with many cloud-native applications written in it. For example, Docker, Kubernetes, InfluxDB, Traefik, Terraform, CockroachDB, Prometheus and Consul are all written in Go. This makes sense given that one of the reasons behind the creation of Go was to invent a better language that could be used to replace the internal C++ network services used by Google.”
    Since malware strains usually tamper with, assemble, or send and receive network packets all the time, Go provides malware devs with all the tools they need in one place, and it’s easy to see why many malware coders are abandoning C and C++ for it. These three reasons are why more Golang malware was seen in 2020 than ever before.
    “Many of these malware [families] are botnets targeting Linux and IoT devices to either install crypto miners or enroll the infected machine into DDoS botnets. Also, ransomware has been written in Go and appears to become more common,” Intezer said.
    Examples of some of the biggest and most prevalent Go-based threats seen in 2020 include the likes of (per category):
    Nation-state APT malware:
    Zebrocy – Russian state-sponsored group APT28 created a Go-based version of their Zebrocy malware last year.
    WellMess – Russian state-sponsored group APT29 deployed new upgraded versions of their Go-based WellMess malware last year.
    Godlike12 – A Chinese state-sponsored group deployed Go-based backdoors for attacks on the Tibetan community last year.
    Go Loader – The China-linked Mustang Panda APT deployed a new Go-based loader last year for their attacks.
    E-crime malware:
    GOSH – The infamous Carbanak group deployed a new RAT named GOSH written in Go last August.
    Glupteba – New versions of the Glupteba loader were seen in 2020, more advanced than ever.
    A new RAT targeting Linux servers running Oracle WebLogic was seen by Bitdefender.
    CryptoStealer.Go – New and improved versions of the CryptoStealer.Go malware were seen in 2020. This malware targets cryptocurrency wallets and browser passwords.
    Also, during 2020, a clipboard stealer written in Go was found.
    New ransomware strains have also been written in Go.
    Naturally, in light of its recent findings, Intezer, along with others, expects Golang usage to continue to rise in the coming years and to join C, C++, and Python as a preferred programming language for coding malware.

  •

    Oxford University lab with COVID-19 research links targeted by hackers

    An Oxford University lab conducting research into the coronavirus pandemic has been compromised by cyberattackers. 

    Oxford University, one of the most prominent educational institutions in the UK, was made aware of the security breach on Thursday. 
    The university confirmed that a security incident took place at the Division of Structural Biology lab, also known as “Strubi,” after Forbes disclosed that hackers were boasting of access to the school’s systems. 
    Strubi’s labs are used by students studying molecular and biological science, and during the COVID-19 pandemic, the Oxford team has been researching the virus itself and examining vaccine candidates. 
    The school’s latest publications include work on RNA strands and viruses, as well as antiviral agents. However, the group has not been directly involved in the development of the Oxford University-AstraZeneca vaccine. 
    According to Forbes and Hold Security, the lab’s “biochemical preparation machines” were compromised by unknown attackers, who showed off access to what appears to be lab equipment controlling pumps and pressure tools in an attempt to sell access to their victim’s systems.
    Timestamps of February 13 and 14, 2021, were noted in evidence provided to the publication. 

    Oxford University has confirmed the security breach. However, in a statement, the university said there “has been no impact on any clinical research, as this is not conducted in the affected area.”
    In addition, the cyberattackers do not appear to have compromised any system relating to patient data or records. 
    “We are aware of an incident affecting Oxford University and are working to fully understand its impact,” an Oxford University spokesperson told Forbes. 
    The UK’s GCHQ has been informed, and the National Cyber Security Centre (NCSC) will investigate the incident. 
    This is not the first time a university may have been targeted with coronavirus or vaccine research in mind. In May 2020, the NCSC warned that threat actors from Russia, Iran, and China were targeting British universities and research hubs to steal research. 
    The European Medicines Agency (EMA), unfortunately, was successfully attacked in December and the cyberattackers responsible then leaked stolen data relating to COVID-19 vaccines and medicines in January this year. 
    In late 2020, Interpol warned of a wave of COVID-19 and flu vaccine-related cybercrimes. The law enforcement agency said that the worldwide pandemic had “triggered unprecedented opportunistic and predatory criminal behavior.”

  •

    Why your diversity and inclusion efforts should include neurodiverse workers

    Neurodivergent workers bring pattern recognition and skills that are crucial to enterprises and cybersecurity.
    I caught up with Craig Froelich, the chief information security officer at Bank of America, to talk about hiring neurodiverse workers and how they can benefit cybersecurity teams. Here are some of the highlights.

    Neurodiversity is part of Bank of America’s hiring strategy. Froelich said:

    Neuro-diverse people and neurodivergent people have been in our organization for a long time. Neurodiversity is one of those hidden diversity initiatives where there are lots of people who are neuro-diverse. They may be on the autism spectrum. They may have ADHD. They may have dyslexia. And for a long time, they may not necessarily have felt comfortable in being able to talk about that openly because of an associated stigma. So when we first started thinking about neurodiversity and the importance of neurodiversity in order to be able to help solve some of cybersecurity’s hardest problems, it was first about making sure that we had an open and honest, courageous conversation in the organization. From there, it was amazing all of the people that would talk about how they wanted to be able to help. And then it was about finding partners in the community, people who knew a lot more about this than I did, to be able to help us understand where to start and what to do.
    I think the important thing to understand is it’s not a program. It is part of our hiring strategy. And so people who are neurodivergent are either part of our team already, or we’re bringing them into the organization, and they go through the same hiring practices.

    Neurodiversity’s role in cybersecurity. Froelich said neurodivergent people are adept at finding patterns. He said:

    One of the great things that people who are neurodiverse can provide is an amazing ability to be able to think about pattern recognition, as an example. So, in cybersecurity, that’s roles like cryptography, it’s malware reverse engineering, it’s hunt team, where focus and intention and looking for details is really important. And people who are neurodiverse have a great aptitude for being able to do that when given all of the right conditions and the right support.

    Neurodiversity brings business benefits. Froelich said:

    I think there is absolutely a business benefit. In cybersecurity, there is, depending upon who you talk to, something on the order of about 3.5 million jobs that will be unfilled this year. And so it’s an imperative for us as an industry to be able to make sure that we’re bringing people to the table and that those people have to be able to come from all walks of life. If you’re thinking about how to be able to solve a hard problem, like defending an organization like Bank of America from different threats, you have to anticipate what those threat actors are going to do. And people who think differently are going to be able to help you do that. So the advantages are clear.

    Environment matters. When managing neurodivergent people, you have to think through the right environmental conditions, especially in a traditional office. Froelich said:

    When you have neurodivergent people in your team, you have to think through, how do you make sure that they have the right environmental conditions? Something as simple as providing them with noise canceling headphones, or putting them in a place in the building, when we’re still in buildings, to be able to make sure that they’re not in a high traffic area, or that they have the right lighting. None of these are really expensive, and frankly none of them are really hard, but it’s amazing what you can do when you open up and ask them what it takes for them to be able to focus at what they come up with and what they will help to deliver.

    The COVID-19 pandemic has made it easier to tailor the environment to neurodiverse workers. Froelich said:

    When they went from the office to working at home, it was actually very easy for them. In fact, probably even easier for them than it was for folks like me. So their ability to stay focused and focused on the outcomes and the details has been a real benefit for us through what we’ve been dealing with as this national human tragedy or global tragedy related to the pandemic.

    Neurodiversity meets machine learning. Froelich said that matching neurodivergent workers with machine learning models has been successful. He said:

    I mentioned cryptography or malware reverse engineering, hunting. If you take hunting as an example, you’re talking about lots and lots of data. You’re looking at logs, you’re looking at different anomalies, and the models will help to be able to surface things, but any good security team at a reasonably sized organization is going to be most likely inundated with different alerts. They’re going to have a lot of information that the models will end up spitting out, but you still need to be able to process. People who are neurodivergent have an ability to be able to pick through all of that information at a more efficient rate and in a better way to give you that type of information that needs to be risen to the surface so that you can action it faster.

    Building a team with neurodiverse people. Froelich said:

    This is a journey for us as it is, I think, for most companies. What I would tell you today is that, one, you shouldn’t think of neurodiversity as a bolt-on to your hiring strategies and the way you design your organizations. It needs to be part and parcel of everything that you do. So there’s certainly certain jobs that neurodivergent people are maybe better at. For example, a lot of neurodiverse people may not necessarily feel comfortable in being able to face off to a business to be able to architect a security solution, because that requires human to human communication. But while that may not necessarily be the right place, you take AI as an example, making sure that they’re paired with people who understand how to be able to interact with people who are neurodiverse.
    Whether it’s the manager or the people on the team, making sure that they have the right training to say, “What are the types of questions that we should be asking and how should those questions be framed so that somebody who’s on the team that may need extra support, like somebody who’s neurodivergent, has the ability to be able to do that?” What’s been really interesting is that just by making sure that we are being more expressive, more direct, more clear, more straightforward in our language, not just in the way that we manage the teams, but also in the way that we hire, our job specifications, it’s not only made us better in dealing with people who are neurodivergent, but also it’s made us better overall.

    Getting started. Froelich said that there are community groups that are a big help for enterprises looking to hire more neurodiverse people. One group, Neurodiversity in the Workplace, has been key to Bank of America. Some advice for enterprises looking to hire more neurodiverse employees:

    I think there’s probably three things. The first is, start. This is an untapped market for the most part and starting is really the first part. Two, when you go to start, make sure you bring some partners with you. You don’t have to learn by yourself. You can learn as you go, like we are, but you can bring partners along like the one I mentioned earlier, Neurodiversity In The Workplace, and they can give you a jump start into it. And the third is, don’t think of this as a bolt-on. Don’t think of this as a program. Think of this as an entire way of you being able to work. And when you think of it as your hiring practices need to evolve, the way that you manage needs to evolve, it doesn’t just benefit you in terms of bringing new people that are neurodivergent to the table, but it actually helps the entire organization. 
