More stories

  • Google rolls out a unified security vulnerability schema for open-source software

    Business author and expert H. James Harrington once said, “If you can’t measure something, you can’t understand it. If you can’t understand it, you can’t control it. If you can’t control it, you can’t improve it.” He was right. And Google is following this advice, moving to strengthen open-source security by introducing a vulnerability interchange schema for describing vulnerabilities across open-source ecosystems.


    That’s very important. One low-level problem is that while there are many security vulnerability databases, there’s no standard interchange format. If you want to aggregate information from multiple databases, you must handle each one separately, writing a parser for every database format just to merge their data. That’s a real waste of time and energy, and it makes systematic tracking of dependencies and collaboration between vulnerability databases much harder than it should be.

    So, Google built on the work it’s already done on the Open Source Vulnerabilities (OSV) database and the OSS-Fuzz dataset of security vulnerabilities. The Google Open Source Security team, the Go team, and the broader open-source community all helped create this simple vulnerability interchange schema, using it along the way to communicate precise vulnerability data for hundreds of critical open-source projects. Now OSV and the schema have been expanded to several new key open-source ecosystems: Go, Rust, Python, and DWF. This expansion unites and aggregates their vulnerability databases, giving developers a better way to track and remediate their security issues.

    The new vulnerability schema aims to address some key problems with managing open-source vulnerabilities. It:

    • Enforces version specifications that precisely match the naming and versioning schemes used in actual open-source package ecosystems. For instance, matching a vulnerability such as a CVE to a package name and set of versions in a package manager is difficult to do in an automated way using existing mechanisms such as CPEs.

    • Can describe vulnerabilities in any open-source ecosystem without requiring ecosystem-dependent logic to process them.

    • Is easy to use by both automated systems and humans.

    In short, as Abhishek Arya, the Google Open Source Security Team Manager, put it in a note on the specification manuscript, “The intent is to create a simple schema format that contains precise vulnerability metadata, the necessary details needed to fix the bug and is a low burden on the resource-constrained open source ecosystem.”

    The hope is that with this schema, developers can define a format that all vulnerability databases can export. Such a unified format would mean that programmers and security researchers can easily share tooling and vulnerability data across all open-source projects.
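    To make those goals concrete, here is a sketch of what a single entry in such a schema can look like, written as a Python dict. The field names follow the published OSV schema, but the ID, package, version numbers, and URL are invented placeholders, not a real advisory.

```python
# An invented example entry in the OSV interchange schema.
# Field names follow the OSV schema; all values are placeholders.
EXAMPLE_ENTRY = {
    "id": "EXAMPLE-2021-0001",
    "summary": "Example out-of-bounds read in a parsing routine",
    "affected": [
        {
            # The ecosystem/name pair pins the advisory to one real package,
            # which is the precision that CPE-based matching lacks.
            "package": {"ecosystem": "PyPI", "name": "example-package"},
            # Version ranges use the ecosystem's own versioning scheme,
            # expressed as "introduced"/"fixed" events.
            "ranges": [
                {
                    "type": "ECOSYSTEM",
                    "events": [{"introduced": "1.0.0"}, {"fixed": "1.2.3"}],
                }
            ],
        }
    ],
    "references": [{"type": "FIX", "url": "https://example.com/fix-commit"}],
}
```

    Because the package name, ecosystem, and affected ranges are all machine-readable, a tool can decide whether a given installed version is vulnerable without any ecosystem-specific parsing logic.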

    The vulnerability schema spec has gone through several iterations, but it’s not finalized yet. Google and friends are inviting further feedback as it gets closer to completion. A number of public vulnerability databases are already exporting this format, with more in the pipeline. The OSV service has also aggregated all of these vulnerability databases, which are viewable at the project’s web UI. The databases can also be queried with a single command via its existing APIs.

    In addition to OSV’s existing automation, Google has built more automation tools for vulnerability database maintenance and used these tools to bootstrap the community Python advisory database. This automation takes existing feeds, accurately matches them to packages, and generates entries containing precise, validated version ranges with minimal human intervention. Google plans to extend this tooling to other ecosystems that have no existing vulnerability database or little support for ongoing database maintenance.

    This effort also aligns with the recent US Executive Order on Improving the Nation’s Cybersecurity, which emphasized the need to remove barriers to sharing threat information in order to strengthen national infrastructure. This expanded shared vulnerability database marks an important step toward creating a more secure open-source environment for all users.

    Want to get involved? You should. This promises to make open-source software, no matter what your project, much easier to secure.
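    As a rough illustration of querying the aggregated databases "with a single command", the sketch below builds a request for OSV's public query API. The package name and version are arbitrary examples, and this is a sketch rather than official client code.

```python
import json
import urllib.request

OSV_API = "https://api.osv.dev/v1/query"  # OSV's public query endpoint


def build_query(ecosystem: str, name: str, version: str) -> dict:
    """Build an OSV query payload for one package version."""
    return {"version": version, "package": {"name": name, "ecosystem": ecosystem}}


def query_osv(ecosystem: str, name: str, version: str) -> dict:
    """POST the query to OSV and return the decoded JSON response."""
    payload = json.dumps(build_query(ecosystem, name, version)).encode()
    req = urllib.request.Request(
        OSV_API, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

    Calling, say, `query_osv("PyPI", "jinja2", "2.4.1")` returns a JSON document whose `vulns` list (if any) contains entries in the shared schema, regardless of which source database originally published them.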

  • Intelligent carpet gives insight into human poses

    The sentient Magic Carpet from “Aladdin” might have a new competitor. While it can’t fly or speak, a new tactile sensing carpet from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) can estimate human poses without using cameras, in a step toward improving self-powered personalized health care, smart homes, and gaming.

    Many of our daily activities involve physical contact with the ground: walking, exercising, or resting. These embedded interactions contain a wealth of information that can help us better understand people’s movements. 

    Previous research has leveraged single RGB cameras (think Microsoft Kinect), wearable omnidirectional cameras, and even plain old off-the-shelf webcams, but with the inevitable byproducts of camera occlusion and privacy concerns. 

    The CSAIL team’s system used cameras only to create the dataset it was trained on, and those cameras captured only the moments when a person was performing an activity. To infer the 3D pose, a person simply has to get on the carpet and perform an action, and the team’s deep neural network, using just the tactile information, can determine whether the person is doing situps, stretching, or performing some other action. 


    Intelligent Carpet: Estimating a person’s 3D pose using only tactile sensors

    “You can imagine leveraging this model to enable a seamless health-monitoring system for high-risk individuals, for fall detection, rehab monitoring, mobility, and more,” says Yiyue Luo, a lead author on a paper about the carpet. 

    The carpet itself, which is low-cost and scalable, was made of commercial pressure-sensitive film and conductive thread, with over 9,000 sensors spanning 36 by 2 feet. (Most living room rugs are 8 by 10 or 9 by 12 feet.) 

    Each of the sensors on the carpet converts the human’s pressure into an electrical signal, through the physical contact between people’s feet, limbs, torso, and the carpet. The system was specifically trained on synchronized tactile and visual data, such as a video and corresponding heat map of someone doing a pushup. 

    The model takes the pose extracted from the visual data as the ground truth, uses the tactile data as input, and finally outputs the 3D human pose.

    In practice, this means that after a person steps onto the carpet and does a set of pushups, the system can produce an image or video of someone doing a pushup. 

    In fact, the model was able to predict a person’s pose with an error margin (measured by the distance between predicted human body key points and ground-truth key points) of less than 10 centimeters. For classifying specific actions, the system was accurate 97 percent of the time. 
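    For the curious, the error metric quoted above, the mean distance between predicted and ground-truth body key points, can be written in a few lines. This is a generic illustration, not the paper's code:

```python
import math


def mean_keypoint_error(predicted, ground_truth):
    """Mean Euclidean distance between corresponding 3D body key points.

    Both arguments are lists of (x, y, z) tuples in the same units
    (e.g., centimeters), one entry per body key point.
    """
    assert len(predicted) == len(ground_truth) and predicted
    total = 0.0
    for p, g in zip(predicted, ground_truth):
        total += math.dist(p, g)  # Euclidean distance per key point
    return total / len(predicted)
```

    A prediction is "within 10 centimeters" when this average, computed over all key points of a pose, stays below 10.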

    “You could envision using the carpet for workout purposes. Based solely on tactile information, it can recognize the activity, count the number of reps, and calculate the amount of burned calories,” says MIT CSAIL PhD student Yunzhu Li, a co-author on the paper.

    Since much of the pressure distribution was generated by movement of the lower body and torso, that information was more accurate than the upper-body data. The model also was unable to predict poses without more explicit floor contact, like free-floating legs during situps, or a twisted torso while standing up. 

    While the system can understand a single person, the scientists, down the line, want to improve the metrics for multiple users, where two people might be dancing or hugging on the carpet. They also hope to gain more information from the tactile signals, such as a person’s height or weight. 

    Luo wrote the paper alongside Li and MIT CSAIL PhD student Pratyusha Sharma; MIT CSAIL mechanical engineer Michael Foshey; MIT CSAIL postdoc Wan Shou; and MIT professors Tomas Palacios, Antonio Torralba, and Wojciech Matusik. The work is funded by the Toyota Research Institute.

  • Four MIT faculty members receive 2021 US Department of Energy early career awards

    The U.S. Department of Energy (DoE) recently announced the names of 83 scientists who have been selected for its 2021 Early Career Research Program. The list includes four faculty members from MIT: Riccardo Comin of the Department of Physics; Netta Engelhardt of the Department of Physics and Center for Theoretical Physics; Philip Harris of the Department of Physics and Laboratory for Nuclear Science; and Mingda Li of the Department of Nuclear Science and Engineering.

    Each year, the DoE selects researchers for significant funding in an effort to develop, in its words, the “nation’s scientific workforce by providing support to exceptional researchers during crucial early career years, when many scientists do their most formative work.”

    Resonant coherent diffractive imaging of quantum solids

    The quantum technologies of tomorrow –– more powerful computing, better navigation systems, and more precise imaging and magnetic sensing devices –– rely on understanding the properties of quantum materials. Quantum materials have unique physical characteristics and can give rise to phenomena like superconductivity. Detecting and visualizing these materials at the nanoscale will enable scientists to understand and harness their properties.

    Riccardo Comin, the Class of 1947 Career Development Assistant Professor of Physics, leads the Comin Photon Scattering Lab at MIT. The group uses high-energy electromagnetic waves, or X-rays, to observe how new collective states emerge at the nanoscale in quantum materials. This is a difficult feat, as the lenses in cameras and in the human eye do not work for X-rays as they do for visible light. Conventional microscopy techniques are not well-suited for visualizing these complex phenomena.

    To overcome this technical limitation, the Comin group has worked on a “lensless” X-ray microscopy approach to image these electronic textures.

    “These new imaging techniques are really fascinating and deeply challenge our traditional ways of performing X-ray microscopy,” Comin says. “We now rely on special algorithms that can perform computationally the task of image reconstruction that is normally taken care of by a lens.”

    The support from the DoE Early Career Research program will be instrumental to the group’s work developing and applying these novel techniques to study the nanoscale organization of quantum materials of interest. Looking beyond the horizon of quantum materials, the availability of lensless X-ray imaging methods provides a new powerful tool set for the characterization of catalysts, batteries, data storage devices, soft matter, and biological systems.

    Spacetime emergence from quantum gravity

    Few phenomena in modern physics remain as mysterious as the black hole interior. Black holes seem to wreck the objects that fall into them, as well as information about what those objects once were. Yet according to basic principles of quantum mechanics (the study of subatomic particle behavior), knowing the current state of a given system should mean knowing everything about its past and future.

    General relativity and quantum mechanics are two highly tested theories. When it comes to black holes, general relativity and quantum mechanics disagree on a fundamental point: whether information about the region behind the event horizon can escape and be decoded by an observer outside of the black hole. The clash between general relativity and quantum mechanics on this matter results in what is termed the “black hole information paradox.” In recent years, scientists have drawn numerous connections between gravity and quantum information.

    Netta Engelhardt, the Biedenharn Career Development Assistant Professor of physics and member of the Center for Theoretical Physics, researches quantum gravity and the black hole information paradox.

    “With a recent leap in our understanding of the black hole information paradox, the connection between gravity, quantum computational complexity, and black holes has newfound potential to shed light on some of the most foundational questions about quantum gravity, starting with ‘What really happens inside a black hole?’” Engelhardt says.

    With support from the DoE award, her project aims to move toward resolving the black hole information paradox using some of the novel tools and insights at the intersection of the two theories.

    Harnessing the Large Hadron Collider with new insights in real-time data processing and artificial intelligence

    Particle accelerators help scientists learn more about the particles that make up matter.

    Nearly 17 miles in circumference, the Large Hadron Collider (LHC) at the European Center for Nuclear Research is the largest and most powerful particle accelerator in the world, producing valuable information for researchers.

    Researchers have spent much time using LHC data to investigate novel particle interactions at the highest energies. But over the next two decades, they anticipate shifting their focus, directing their efforts toward precision measurements that target physics processes with small interaction strengths and extensive background rates.

    As a result of these more detailed observations, physicists expect additional rare and hidden processes within the Standard Model (SM) of particle physics, and potentially beyond the SM, to emerge as more data mounts.

    Philip Harris, assistant professor of physics and researcher in the Laboratory for Nuclear Science, is working on a physics program to measure those smaller, more inconspicuous processes. Specifically, with the support of DoE funding, his research aims to exploit a new measurement technique he created to identify light resonances that decay into quarks –– the particles that combine to form protons, neutrons, and other composite subatomic particles.

    “In conjunction with advanced artificial intelligence algorithms, this new technique can open up a wealth of unique measurements and searches,” Harris says. “The fully developed state-of-the-art system will empower new measurements of the Higgs boson, new searches for dark matter, and analyses of a multitude of unexplored scientific phenomena.”

    Machine learning-augmented multimodal neutron scattering for emergent topological materials

    Topological materials are a class of quantum materials whose electronic properties have robust protection against outside influences. This robustness enables a wide range of promising applications, such as next-generation electronics without energy loss, and error-tolerant quantum computers.

    But it’s difficult to directly test materials for their topological properties. Rather, scientists usually use methods that measure manifestations of topology. One such method is neutron scattering, or neutron spectroscopy, a process used by scientists to assess materials.

    Neutron scattering has particular advantages when it comes to evaluating topological quantum materials, but more information is needed to understand exactly how massive amounts of data gathered during neutron spectroscopy map onto topology.

    The DoE Early Career Research Program Award will support Mingda Li, the Norman C. Rasmussen Assistant Professor of Nuclear Science and Engineering, in his machine-learning approach to analyzing high-dimensional neutron scattering spectra in quantum materials.

    “The new approach will augment existing neutron scattering probes by measuring things that were not measurable before,” Li says. By doing so, “it will enable a broader discovery of hidden materials states that may have electronics applications, and identify topological solutions that can be used for computer memory.”

  • Tackling air pollution with autonomous drones

    Hovering 100 meters above a densely populated urban residential area, the drone takes a quiet breath. Its goal is singular: to systematically measure air quality across the metropolitan landscape, providing regular updates to a central communication module where it docks after its patrol, awaiting a new set of instructions. The central module integrates each new data point provided by a small drone fleet, processing them against wind and traffic patterns and historical pollution hot spot information. Then the fleet is assigned new sampling waypoints and relaunched.
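    The closed loop described above, in which the central module scores locations and relaunches the fleet toward the most informative ones, can be sketched in miniature. This is purely illustrative: the grid cells, scores, and drone names are invented, not taken from the students' system.

```python
def assign_waypoints(drones, cell_scores):
    """Assign each drone to the highest-priority unclaimed grid cell.

    drones: list of drone identifiers.
    cell_scores: dict mapping a grid cell (row, col) to a priority score,
        e.g. a blend of sample staleness, historical pollution hot spots,
        and wind/traffic forecasts.
    Returns a dict mapping drone id -> assigned cell.
    """
    # Visit the highest-priority cells first; each cell gets at most one drone.
    ranked = sorted(cell_scores, key=cell_scores.get, reverse=True)
    return dict(zip(drones, ranked))
```

    For example, with cells scored `{(0, 0): 1.0, (1, 1): 5.0, (2, 2): 3.0}`, two drones would be sent to `(1, 1)` and `(2, 2)`, the two most urgent sampling points. A real dispatcher would also fold in battery state and flight-time constraints, which this sketch ignores.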

    This simulated autonomous air pollution monitoring system is the capstone project of recently graduated New Engineering Education Transformation (NEET) program alumni Chloe Nelson-Arzuaga ’21, Jeana Choi ’21, Daniel Gonzalez-Diaz ’21, Leilani Trautman ’21, Rima Rebei ’21, and Berke Saat ’21, who have together studied autonomous robotics since they entered NEET during their sophomore year at MIT.

    “We all converged on this, because we felt that the social impact would be more powerful than any of the other projects we were looking at,” says Trautman, who majored in electrical engineering and computer science.

    Their system represents a fundamentally different approach to air quality monitoring compared with the stationary systems routinely used in urban areas, which the group says often fail to detect spatial heterogeneity in pollution levels across a landscape. Given their limited distribution and lack of mobility, these systems are really only a reliable indicator of the air quality directly surrounding each monitoring point, but their data are reported as though they were representative of air quality across the entire city, say the recent graduates.

    “So even though they might say that your air quality is somewhat good, that may not be the case for the park right next to your home,” says Gonzalez-Diaz.

    The NEET cohort’s drone system is designed to provide real-time air quality data with a 15-meter resolution that is publicly accessible through a user-friendly interface.

    The NEET program was launched in 2017 in an effort to fundamentally reconceptualize the way that engineering is taught at MIT. It emphasizes interdisciplinary scholarship, cross-departmental community, and project-based learning to prepare students to engage the major engineering challenges of the 21st century. The program is actively growing and is the fourth-largest undergraduate academic community at MIT, with more than 186 students participating. Twenty-three majors from 13 departments are represented. Sixty-four percent of the students are women and 28 percent are members of underrepresented groups. This year, over 39 percent of first-years who applied to the program heard about it from current NEET students.

    Nelson-Arzuaga, Choi, Gonzalez-Diaz, Trautman, Rebei, and Saat graduated from MIT with a certificate from NEET’s Autonomous Machines “thread,” or area of concentration. The NEET program has five threads that students can choose from, each emphasizing a class of contemporary or futuristic engineering problems. For example, the Advanced Materials thread explores the future of materials technologies and manufacturing, while the Digital Cities thread integrates computer science with urban planning to prepare students to build more idealized cities. NEET also offers a biotechnology-focused Living Machines thread as well as the Renewable Energy Machines thread, which emphasizes green energy systems design. The Autonomous Machines thread teaches students to design, build, and program autonomous robots. “A common feature across all five threads is that NEET students want to create an impact while they are still students,” says NEET Executive Director Babi Mitra, “by doing projects that tackle critical societal problems.”

    The NEET curriculum structure is progressive, building on the previous year’s lessons to ultimately prepare the students for real-world application.

    “[Sophomore year] we give them an individual project … and then, during junior year, they have their first small group project. And then the senior year, they have a class project. So it progressively gets more complicated,” says NEET Lead Instructor Greg Long. “This senior project is supposed to mimic something they would do if they were going to do a startup company.”

    It was during the cohort’s junior year, when they were tasked with building autonomous vehicles that could race other vehicles and avoid obstacles, that the pandemic forced the closure of MIT’s campus and the NEET cohort was scattered across the globe.

    “[The curriculum] is so hands-on and such a huge time commitment that we really thought the classes would end at that point when we had to leave campus,” says Saat. “But then they kept going. They set up the simulation for us and everything, but the time commitment was still there and we were in five different time zones.”

    The NEET program demands roughly 20 hours per week on top of the rest of the students’ course load and the students say that meeting this demand with Choi in Cambridge, Trautman and Nelson-Arzuaga in California, Saat in Turkey, Rebei in Illinois, Gonzalez-Diaz in Puerto Rico, and another teammate in Taiwan required creative and exhausting schedule coordination.

    “Literally [for] some of us the sun was rising and [for] the others the sun was going down [while working together],” says Choi. “We’ve definitely bonded and learned so much, and I think it would not have been possible if even one of us was not very interested in robotics.”

    A unique feature of the NEET Autonomous Machines thread is that students take a senior fall semester class during which they discuss and decide the senior spring project they want to work on. It was also the pandemic that helped inform the group’s specific choice to tackle air pollution as a final project their senior year, because it further exposed “the racial and economic disparities that air pollution causes in the United States,” says Rebei.  

    The cohort’s drone project is designed to monitor a form of pollution called PM 2.5: particles small enough to enter the bloodstream when inhaled, potentially resulting in lung and heart disease over time, according to the students.

    “Low-income communities of color, at the end of the day, are the ones who are disproportionately impacted by air pollution, and … air pollution is what contributes to a lot of these deadly respiratory illnesses … in these particular communities,” says Rebei. “People who already have these preexisting conditions … are at higher risk of getting very sick from Covid-19.”

    In addition to designing a drone system capable of effectively capturing neighborhood-level variation in pollution exposure levels, the NEET cohort created a web interface that could layer this information with area socioeconomic data, such as income, race, household composition, disability status, housing types, and modes of transportation. This would make patterns and disparities in air pollution exposure public and easier to prove, something the students say could help affected communities advocate for change.

    The complexity of the drone project, remote learning notwithstanding, required the entire cohort to step out of their comfort zone and learn new skills quickly.

    “I’m a mechanical design student, so I do a lot of 3D modeling,” says Nelson-Arzuaga. However, for the drone project she was in charge of learning how to operate a cellular communications network so that the drones would be able to talk to the central communication module. “[It was] very different from anything that I learned in all of my other design classes.”

    The recent graduates say that NEET prepared them to engage these challenges, but expressed that the most valuable part of the program for them was the community that they built and the experiences they shared working together.

    “When we took the sophomore robot class, we were all like, ‘What is a robot? How do we do anything?’” jokes Trautman. “[And now we’re] doing very high-quality development work on this project. I think it’s been exciting to see, to see where everyone is going in the future and just seeing how everyone’s progressed. It’s been a really cool journey.”

    The NEET program is continuing to develop and fine-tune its curriculum, learning environment, and community, and student engagement is an integral part of that process. Spaces are still available in the NEET Class of 2024 cohort. For more information about applying, visit the NEET website.

  • Amazon launching global competition to find and fix 1 million software bugs

    Amazon announced a new global competition called AWS BugBust, which will allow developers to compete over finding and fixing one million bugs. The company said the competition will also help “reduce technical debt by over $100 million.”

    In a blog post, AWS principal advocate Martin Beeby said AWS BugBust was taking the concept of a bug bash “to a new level” by allowing developers to create and manage private events that effectively “gamify the process of finding and fixing bugs in your software.”

    “Many of the software companies where I’ve worked (including Amazon) run them in the weeks before launching a new product or service. [AWS BugBust] includes automated code analysis, built-in leaderboards, custom challenges, and rewards,” Beeby said. “AWS BugBust fosters team building and introduces some friendly competition into improving code quality and application performance. What’s more, your developers can take part in the world’s largest code challenge, win fantastic prizes, and receive kudos from their peers.”

    Those interested in joining the competition can create an AWS BugBust event through the console for Amazon CodeGuru, a machine learning developer tool that helps identify bugs. AWS BugBust will have a leaderboard for developers, and the company will dole out achievement badges and a chance to win an expense-paid trip to AWS re:Invent 2021 in Las Vegas.

    Swami Sivasubramanian, vice president of Amazon Machine Learning at AWS, explained that hundreds of thousands of AWS customers are building and deploying new features to applications each day at high velocity while managing complex code at high volumes. “It’s difficult to get time from skilled developers to quickly perform effective code reviews since they’re busy building, innovating, and pushing out deployments,” Sivasubramanian said. “Today, we are excited to announce an entirely new approach to help developers improve code quality, eliminate bugs, and boost application performance, while saving millions of dollars in application resource costs.”

    AWS BugBust is currently available in the US East region and soon will be available in any region where Amazon CodeGuru is offered. 

    Beeby noted that there will be a global leaderboard that is updated each time a developer fixes a bug and wins points. Any developer who reaches 100 points will win an AWS BugBust T-shirt, and those who reach 2,000 points will win an AWS BugBust Varsity Jacket. The top 10 will receive tickets to AWS re:Invent.

    To compete in the global challenge, projects must be written in Python or Java, as those are the only languages supported by Amazon CodeGuru. Beeby added that all costs incurred by the underlying usage of Amazon CodeGuru Reviewer and Amazon CodeGuru Profiler are free of charge for 30 days with an AWS account. “This 30 day free period applies even if you have already utilized the free tiers for Amazon CodeGuru Reviewer and Amazon CodeGuru Profiler. You can create multiple AWS BugBust events within the 30-day free trial period,” Beeby wrote. “After the 30-day free trial expires, you will be charged for Amazon CodeGuru Reviewer and Amazon CodeGuru Profiler based on your usage in the challenge.”

    Amazon included multiple comments from partners who plan to have employees participate in the program, including Belle Fleur and Miami Dade College. “The AWS BugBust Challenge will be a fun and educative addition to our curriculum to help our students become more confident in their ability to use the Python programming language and take their IT careers to the next level,” said Antonio Delgado, dean of engineering, technology and design at Miami Dade College. “We plan to use AWS BugBust every semester as a platform for our students to showcase and enhance their coding skills, all while being part of an exciting bug-bashing event.”

  • Tulsa warns residents that police citations and reports leaked to Dark Web after Conti ransomware attack

    The City of Tulsa has notified residents that some of their personal information may be on the dark web thanks to a ransomware attack last month by prolific cybercriminal group Conti. In a statement posted to the city’s website this week, the city said more than 18,000 city files — mostly police citations and internal department files — were shared on the dark web. Names, dates of birth, addresses and license numbers are on all police citations. 


    “No other files are known to have been shared as of today, but out of an abundance of caution, anyone who has filed a police report, received a police citation, made a payment with the City, or interacted with the City in any way where PII was shared, whether online, in-person or on paper, prior to May 2021, is being asked to take monitoring precautions,” the city said in a statement to its 500,000 residents.

    Tulsa’s Incident Response Team is working with federal law enforcement on the breach but is still struggling to restore services and resources that were heavily damaged by the attack. The ransomware attack brought down the city’s public-facing systems, internal communications, and network access functions. The city admitted that it prioritized restoring systems over everything else.

    The city notified residents that on May 6, multiple servers “were actively communicating with a known threat site and a ransomware attack was initiated on several City systems.” Tulsa Mayor G.T. Bynum said the city would refuse to pay a ransom and instead shut down all of the city’s systems. The city’s online bill payment systems were shut down, along with utility billing and any services through email. The websites for the Tulsa City Council, Tulsa Police, Tulsa 311, and the City of Tulsa were all shut down as part of the effort to contain the attack. Tulsa was forced to resort to phone services to make up for the lack of online services, and residents were told to prepare for weeks, if not months, of city websites being down. 

    Tulsa suggested concerned residents visit the Oklahoma Department of Consumer Credit website. They also said residents need to monitor all financial accounts and credit reports, change passwords to personal accounts and contact credit or debit card companies about fraudulent charges.  Cybersecurity experts said the leakage of police citations and reports could provide any malicious actor with enough information to do serious damage. Chris Clements, vice president of solutions architecture at Cerberus Sentinel, said that while the reports did not contain social security numbers, there was still enough information that could be leveraged to create incredibly powerful social engineering lures to fool victims into sending money. “The disclosure of police records can be used to construct convincing stories to trick unsuspecting victims or their families into paying fake fees or fines by claiming to be lawyers or court representatives,” Clements said. “Even normally scam savvy people may be fooled if a fraudster has enough detailed information.” Conti has made a name for itself after attacking hundreds of healthcare institutions, most notably bringing down significant parts of Ireland’s healthcare system earlier this year. The FBI said last month that Conti has also gone after first responder networks, law enforcement agencies, emergency medical services, 911 dispatch centers, and multiple municipalities within the last year. “These healthcare and first responder networks are among the more than 400 organizations worldwide victimized by Conti, over 290 of which are located in the US,” the FBI said. 
Erich Kron, security awareness advocate at KnowBe4, said Conti has repeatedly shown “a blatant disregard for the authority of law enforcement as they continue their attacks on these vital services.”

“Even after the shutdown of the Darkside gang, the arrests in the takedown of the Clop group, and even in light of the Ziggy ransomware gang providing all of their encryption keys for victims due to the fear of law enforcement actions, Conti continues their attacks without skipping a beat,” Kron said. “Because Conti’s typical attacks begin with email phishing or stolen Remote Desktop Protocol credentials, organizations looking to defend themselves against the threat should concentrate on these attack vectors.”

He added that organizations need to review the security of any RDP instances they have deployed, paying special attention to hardening against brute-force attacks, spotting login attempts at unusual times or from unusual locations, and ensuring that unusual behavior through these portals is quickly reported to security.
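The brute-force detection Kron describes amounts to counting failed logins per source inside a sliding time window. The sketch below is purely illustrative: the event tuples, threshold, and window size are assumptions for the example, not part of any product mentioned in this article.

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

def detect_bruteforce(events, threshold=5, window_minutes=10):
    """Flag source IPs with >= threshold failed logins inside a sliding window.

    `events` is an iterable of (timestamp, source_ip, success) tuples, e.g.
    extracted from RDP authentication logs.
    """
    window = timedelta(minutes=window_minutes)
    recent = defaultdict(deque)  # source_ip -> timestamps of recent failures
    flagged = set()
    for ts, ip, success in sorted(events):
        if success:
            continue
        q = recent[ip]
        q.append(ts)
        # Drop failures that have aged out of the window.
        while q and ts - q[0] > window:
            q.popleft()
        if len(q) >= threshold:
            flagged.add(ip)
    return flagged
```

In practice the same idea is usually delegated to a SIEM rule or account-lockout policy; the point is that a handful of failures spread over hours is normal, while the same count inside minutes is a signal worth reporting.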

  • in

    Google warns: Watch out, this security update could break links to your Drive files

    Google has issued an alert for Workspace admins that an upcoming update to improve the security of sharing links from Google Drive will actually break links to some files. This could create headaches for Google Workspace business users who need to access files from Drive. The update changes Drive file links and may lead to “some new file access requests”, according to Google.

    That in turn could lead to problems for Workspace admins, who might see a rush of support calls over broken links. Google notes that the security update is being applied to some files in Google Drive to make sharing links more secure.

    “The update will add a resource key to sharing links. Once the update has been applied to a file, users who haven’t viewed the file before will have to use a URL containing the resource key to gain access, and those who have viewed the file before or have direct access will not need the resource key to access the file,” Google explains.

    This is the first phase of a staged rollout of resource keys that may break links to Drive files. By the sounds of Google’s description, things could get messy for Drive files, especially in larger organizations with lots of users and files.
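Updated sharing links carry the resource key as an extra query parameter on the URL. A minimal sketch of checking a link for one, assuming the parameter is named `resourcekey` as in Google's published link format (the sample file ID and key values are made up for illustration):

```python
from urllib.parse import urlparse, parse_qs

def extract_resource_key(link):
    """Return the resourcekey query parameter from a Drive sharing link,
    or None if the link does not carry one."""
    params = parse_qs(urlparse(link).query)
    values = params.get("resourcekey")
    return values[0] if values else None
```

A check like this could help admins audit which of the links circulating in their organization already include the key and which will need to be re-shared once the update lands.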

    Google’s support page on the issue explains that admins can choose how to apply the update up to July 23. During this phase, once the resource key security update is applied, end users are notified of impacted files. Admins can change the selection after July 23, but users won’t be notified of the changes.

    In phase 2, between July 26 and August 25, Drive notifies impacted users of the update and any affected items that they own or manage. Admins can also let users decide to remove the update from specific items.

    “Unless the admin chooses to opt their organization out of the security update, end users who own or manage impacted files will receive an email notification starting July 26, 2021 with their impacted files,” Google notes. “End users will have until September 13 to determine how the update is applied to their files, if permitted by their admin.”

    Google has also released information for developers affected by the change, which may affect various projects that depend on Drive files. It says that end users who own or manage impacted files will receive an email notification from July 26, 2021 flagging their affected files. Assuming an admin has permitted it, users might have the option to remove the security update from their impacted files.

    Google also flagged an upcoming issue with unlisted videos that were uploaded before January 1, 2017. From July 23, unlisted videos uploaded before that date will move to Private as part of a security update. Private is one of three visibility settings on YouTube, along with Public and Unlisted.

    “In 2017, we rolled out a security update to the system that generates new Unlisted video links. This update included security enhancements that make the links for your Unlisted videos even harder for someone to discover if you haven’t shared the link with them. We’re now making changes to older Unlisted videos that were uploaded before this update took place,” Google explains. YouTube users can opt out of this change by following the instructions on Google’s support page.

  • in

    Microsoft's security tool can now spot rogue devices on your network

    Microsoft Defender for Endpoint’s new ability to monitor and protect unmanaged devices has now reached general availability. 


    Microsoft Defender for Endpoint (formerly Defender ATP) gives security teams visibility over unmanaged devices running on their networks. It’s a cloud-based security service that gives security teams incident response and investigation tools and lives as an instance in Azure. It’s distinct from the Microsoft Defender antivirus that ships with Windows 10.

    Microsoft released the unmanaged device capability in public preview in April, as ZDNet reported at the time. The feature aims to alleviate post-pandemic hybrid work security risks, where people may use their own computers and devices at home, then bring them to work and connect to the corporate network. It’s meant to tackle the unknown threats that may arise from devices that have been compromised at home and then brought into work.

    The new capabilities should make it easier to discover and secure unmanaged PCs, mobile devices, servers, and network devices on a business network. The GA release allows security teams to discover devices connected to a corporate network, onboard devices once they’ve been discovered, and then review assessments and address threats and vulnerabilities on newly discovered devices. Defender for Endpoint will let teams discover unmanaged workstations, servers, and mobile endpoints across Windows, Linux, macOS, iOS, and Android platforms that haven’t been onboarded and secured.

    It also covers network devices such as switches, routers, firewalls, WLAN controllers, and VPN gateways. These can also be discovered and added to the device inventory using periodic authenticated scans of preconfigured network devices.

    Security teams will be able to see the new features for unmanaged devices within the Microsoft 365 Defender user interface in “Device inventory”. “Now that these features have reached general availability, you will notice that endpoint discovery is already enabled on your tenant. This is indicated by a banner that appears in the Endpoints > Device inventory section of the Microsoft 365 Defender console,” notes Microsoft’s Chris Hallum.

    The banner will vanish on July 19, 2021, and the default behavior for discovery will be switched from Basic to Standard. Standard discovery is an active discovery method that relies on already-managed devices to probe the network for unmanaged devices.

    “At this time, Standard discovery will enable the collection of a broader range of device related properties and it will also perform improved device classification. The switch to Standard mode was verified as having negligible network implications during the public preview,” notes Hallum.
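Active discovery of the kind Standard mode performs boils down to probing addresses on the local network and recording which ones respond. The sketch below is a generic, hypothetical illustration of that idea (a simple TCP connect sweep over a few common ports), not a description of Defender for Endpoint's actual probing mechanism; the port list and timeout are assumptions.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def probe_host(host, ports=(22, 80, 443, 3389), timeout=0.5):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    open_ports = set()
    for port in ports:
        try:
            # A successful connect means something is listening there.
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.add(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return open_ports

def sweep(hosts):
    """Probe hosts concurrently; map each responsive host to its open ports."""
    with ThreadPoolExecutor(max_workers=32) as pool:
        results = pool.map(probe_host, hosts)
    return {host: ports for host, ports in zip(hosts, results) if ports}
```

A real discovery service layers device fingerprinting and classification on top of responses like these; the inventory view described above is the end product of that pipeline.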