More stories

  • ACSC offers optional DNS protection to government entities

    The Australian Cyber Security Centre will be offering its Australian Protective Domain Name Service (AUPDNS) for free to other government entities at federal and state level across Australia. AUPDNS has already inspected 10 billion queries and blocked 1 million connections to malicious domains, Assistant Minister for Defence Andrew Hastie said on Thursday. “A single malicious connection could result in a government network being vulnerable to attack or compromise, so it’s vital we do everything we can to prevent cybercriminals from gaining a foothold,” he said. “Currently AUPDNS is protecting over 200,000 users, and this number is growing.” The blocklist functionality was developed with Nominet Cyber.

    Elsewhere on Thursday, Anthony Byrne tendered his resignation as Labor deputy chair of the Parliamentary Joint Committee on Intelligence and Security — the committee that examines national security legislation and often sees Labor waving contentious legislation through. “The work of the PJCIS is crucial to Australia’s national security and its integrity should never be questioned,” Byrne said.

    “I have always put the work of this bipartisan Committee first and have always served in its best interests.” Byrne is in hot water after telling Victoria’s Independent Broad-based Anti-corruption Commission he was involved in branch stacking. Replacing Byrne in the ALP post will be Senator Jenny McAllister, with Peter Khalil appointed to the committee. “Byrne has served the PJCIS in a number of roles since 2005 including as Chair and Deputy Chair,” Labor leader Anthony Albanese said. “I thank Mr Byrne for his important contributions to this committee in Australia’s national interest.”

    On Wednesday, the Australian government announced a new set of standalone criminal offences for people who use ransomware, under what it has labelled its Ransomware Action Plan. The plan creates new criminal offences for people who use ransomware to conduct cyber extortion, target critical infrastructure with ransomware, or knowingly deal with stolen data obtained in the course of committing a separate criminal offence, as well as for buying or selling malware for the purposes of undertaking computer crimes.

  • Singapore to develop mobile defence systems with Ghost Robotics

    Singapore’s Defence Science and Technology Agency (DSTA) has inked a partnership with Philadelphia-based Ghost Robotics to identify use cases involving legged robots for security, defence, and humanitarian applications. They will look to test and develop mobile robotic systems, as well as the associated technology enablers, that can be deployed in challenging urban terrain and harsh environments. The collaboration also would see robots from Ghost Robotics paired with DSTA’s robotics command, control, and communications (C3) system, the two partners said in a joint statement released Thursday. The Singapore government agency said its C3 capabilities were the “nerve centre” of military platforms and command centres, tapping data analytics, artificial intelligence, and computer vision technologies to facilitate “tighter coordination” and effectiveness during military and other contingency operations. Its robotics C3 system enabled simultaneous control and monitoring of multiple unmanned ground and air systems to deliver a holistic situation outline for coordinated missions, including surveillance in dense urban environments. With the partnership, DSTA and Ghost Robotics would test and develop “novel technologies and use cases” for quadrupedal unmanned ground vehicles, which would be integrated with multi-axis manipulators. These would enhance how the autonomous vehicles interacted with their environment and objects within it. Power technologies, such as solid-state batteries or fuel cells, also would be integrated to allow the robotics systems to operate for extended periods of time. DSTA’s deputy chief executive for operations and director of land systems, Roy Chan, said: “In the world of fast-evolving technology, close collaboration between organisations is imperative to co-create use cases and innovative solutions. In partnering Ghost Robotics, DSTA hopes to advance robotic capabilities in defence and shape the battlefield of the future.

    “We envision that robots would one day become a defender’s best friend and be deployed to undertake more risky and complex operations in tough terrains,” Chan said. DSTA is tasked with tapping science and technology to develop capabilities for the Singapore Armed Forces (SAF), including the use of autonomous vehicles. The Ministry of Defence and SAF in June 2021 unveiled a transformation strategy to address evolving security challenges and threats, which encompassed efforts to leverage technological advancements to better tap data and new technologies, such as robotics C3 systems, and integrate these into warfighting concepts to improve operational effectiveness and reduce manpower requirements. According to Ghost Robotics, its quadrupedal unmanned ground vehicles were built for unstructured terrain, on which a typical wheeled or tracked device could not operate efficiently.

  • 7-Eleven breached customer privacy by collecting facial imagery without consent

    In Australia, the country’s information commissioner has found that 7-Eleven breached customers’ privacy by collecting their sensitive biometric information without adequate notice or consent. From June 2020 to August 2021, 7-Eleven conducted surveys that required customers to fill out information on tablets with built-in cameras. These tablets, which were installed in 700 stores, captured customers’ facial images at two points during the survey-taking process — when the individual first engaged with the tablet, and after they completed the survey. After becoming aware of this activity in July last year, the Office of the Australian Information Commissioner (OAIC) commenced an investigation into 7-Eleven’s survey. During the investigation [PDF], the OAIC found 7-Eleven stored the facial images on tablets for around 20 seconds before uploading them to a secure server hosted in Australia within the Microsoft Azure infrastructure. The facial images were then retained on the server, as algorithmic representations, for seven days to allow 7-Eleven to identify and correct any issues and reprocess survey responses, the convenience store giant claimed. The facial images were uploaded to the server as algorithmic representations, or “faceprints”, that were then compared with other faceprints to exclude responses that 7-Eleven believed may not be genuine. 7-Eleven also used the personal information to understand the demographic profile of customers who completed the survey, the OAIC said. 7-Eleven claimed it received consent from customers who participated in the survey because it provided a notice on its website, where the survey resided, stating that 7-Eleven may collect photographic or biometric information from users.
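
    The faceprint comparison described above boils down to turning each facial image into a numeric vector and flagging responses whose vectors are too similar to an earlier one. The following is a minimal sketch of that idea, not 7-Eleven's actual system; the embedding step, vector size, and similarity threshold are all illustrative assumptions.

```python
# Minimal sketch: flag likely-duplicate survey responses by comparing
# face embeddings ("faceprints") with cosine similarity.
# Real systems would produce embeddings with a face-recognition model;
# here random vectors stand in for them.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_duplicates(faceprints, threshold=0.9):
    """Return indices of responses whose faceprint closely matches an earlier one."""
    duplicates = set()
    for i in range(len(faceprints)):
        for j in range(i):
            if cosine_similarity(faceprints[i], faceprints[j]) >= threshold:
                duplicates.add(i)
                break
    return duplicates

rng = np.random.default_rng(0)
prints = [rng.normal(size=128) for _ in range(5)]
prints.append(prints[0])          # simulate the same person responding twice
print(flag_duplicates(prints))    # -> {5}
```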

    As at March 2021, approximately 1.6 million survey responses had been completed. Angelene Falk, Australia’s Information Commissioner and Privacy Commissioner, determined that this large-scale collection of sensitive biometric information breached Australia’s privacy laws and was not reasonably necessary for the purpose of understanding and improving customers’ in-store experience. In Australia, an organisation is prohibited from collecting sensitive information about an individual unless consent is provided. Falk said a facial image that shows an individual’s face is sensitive information, and added that any algorithmic representation of a facial image is also sensitive information. In regard to 7-Eleven’s claim that consent was provided, Falk said 7-Eleven did not provide any information about how customers’ facial images would be used or stored, which meant 7-Eleven did not receive any form of consent when it collected the images. “For an individual to be ‘identifiable’, they do not necessarily need to be identified from the specific information being handled. An individual can be ‘identifiable’ where it is possible to identify the individual from available information, including, but not limited to, the information in issue,” Falk said. “While I accept that implementing systems to understand and improve customers’ experience is a legitimate function for 7-Eleven’s business, any benefits to the business in collecting this biometric information were not proportional to the impact on privacy.” As part of the determination, Falk ordered 7-Eleven to cease collecting facial images and faceprints as part of the customer feedback mechanism. 7-Eleven has also been ordered to destroy all the faceprints it collected.

  • Singapore must take caution with AI use, review approach to public trust

    In its quest to drive the adoption of artificial intelligence (AI) across the country, multi-ethnic Singapore needs to take special care navigating its use in some areas, specifically law enforcement and crime prevention. It should also hold to its belief that trust is crucial for citizens to be comfortable with AI, and recognise that doing so will require nurturing public trust across different aspects of its society. It must have been at least two decades ago when I attended a media briefing during which an executive was demonstrating the company’s latest speech recognition software. As most demos went, no matter how much you prepared for it, things would go desperately wrong. Her voice-directed commands often were wrongly executed and several spoken words in every sentence were inaccurately translated into text. The harder she tried, the more things went wrong, and by the end of the demo she looked clearly flustered. She had a relatively strong accent and I’d assumed that was likely the main issue, but she had spent hours training the software. The company was known at that time specifically for its speech recognition products, so it wouldn’t be wrong to assume its technology was then the most advanced in the market. I walked away from that demo thinking it would be near impossible, with the vast difference in accents within Asia alone and even amongst those who spoke the same language, for speech recognition technology to be sufficiently accurate.


    Some 20 years later, speech-to-text and translation tools clearly have come a long way, but they’re still not always perfect. An individual’s accent and speech patterns remain key variables that determine how well spoken words are translated. However, wrongly converted words are unlikely to cause much damage, save for a potentially embarrassing moment on the speaker’s part. The same is far from the truth where facial recognition technology is concerned.

    In January, police in Detroit in the US admitted their facial recognition software falsely identified a shoplifter, leading to his wrongful arrest. Vendors such as IBM, Microsoft, and Amazon have maintained a ban on the sale of facial recognition technology to police and law enforcement, citing human rights concerns and racial discrimination. Most have urged governments to establish stronger regulations to govern and ensure the ethical use of facial recognition tools. Amazon had said its ban would remain until regulators addressed issues around the use of its Rekognition technology to identify potential criminal suspects, while Microsoft said it would not sell facial recognition software to police until federal laws were in place to regulate the technology. IBM chose to exit the market completely over concerns facial recognition technology could instigate racial discrimination and injustice. Its CEO Arvind Krishna wrote in a June 2020 letter to the US Congress: “IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and principles of trust and transparency.

    “AI is a powerful tool that can help law enforcement keep citizens safe. But vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported,” Krishna penned.

    I recently spoke with Ieva Martinkenaite, who chairs the AI task force at GSMA-European Telecommunications Network Operators’ Association, which drafts AI regulation for the industry in Europe. Martinkenaite’s day job sees her as head of analytics and AI for Telenor Research. In our discussion on how Singapore could best approach the issue of AI ethics and use of the technology, Martinkenaite said every country would have to decide what it felt was acceptable, especially when AI was used in high-risk areas such as detecting criminals. Here, she noted, challenges remained, with evidence of discriminatory results against certain ethnic groups and genders. In deciding what was acceptable, she urged governments to have an active dialogue with citizens. She added that until veracity issues related to the analysis of varying skin colours and facial features were properly resolved, such AI technology should not be deployed without human intervention, proper governance, or quality assurance in place.

    Training AI for multi-ethnic Singapore

    Facial recognition software has come under fire for its inaccuracy, in particular in identifying people with darker skin tones. A 2017 MIT study, which found that darker-skinned females were 32 times more likely to be misclassified than lighter-skinned males, pointed to the need for more phenotypically diverse datasets to improve the accuracy of facial recognition systems. Presumably, AI and machine learning models trained with less data on one ethnic group would exhibit a lower degree of accuracy in identifying individuals in that group. Singapore’s population comprises 74.3% Chinese, 13.5% Malays, and 9% Indians, with the remaining 3.2% made up of other ethnic groups such as Eurasians.
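
    The accuracy gaps the MIT study points to are what a per-group evaluation surfaces and a single aggregate score hides. The following is a minimal sketch of such a breakdown; the group names and records are illustrative placeholders, not real evaluation data or any system used in Singapore.

```python
# Minimal sketch: report facial-recognition accuracy per demographic group,
# so that imbalances hidden by an aggregate accuracy figure become visible.
# The records below are illustrative placeholders only.
from collections import defaultdict

records = [
    # (group, was_the_individual_correctly_identified)
    ("chinese", True), ("chinese", True), ("chinese", True), ("chinese", False),
    ("malay", True), ("malay", False),
    ("indian", True), ("indian", False), ("indian", False),
]

totals, correct = defaultdict(int), defaultdict(int)
for group, ok in records:
    totals[group] += 1
    correct[group] += int(ok)

overall = sum(correct.values()) / len(records)
print(f"overall: {overall:.0%}")
for group, n in totals.items():
    print(f"{group}: {correct[group] / n:.0%} accuracy on {n} samples")
```
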
    Should the country decide to tap facial recognition systems to identify individuals, must the data used to train the AI model consist of more Chinese faces, since that ethnic group forms the population’s majority? If so, will that lead to a lower accuracy rate when the system is used to identify a Malay or Indian, since fewer data samples of these ethnic groups were used to train the AI model? Will using an equal proportion of data for each ethnic group then necessarily lead to a more accurate score across the board? Since there are more Chinese residents in the country, should the facial recognition technology be better trained to more accurately identify this ethnic group, because the system will likely be used more often to recognise these individuals?

    These questions touch only on the “right” volume of data that should be used to train facial recognition systems. There still are many others concerning data alone, such as where training data should be sourced, how the data should be categorised, and how much training data is deemed sufficient before the system is considered “operationally ready”. Singapore will have to navigate these carefully should it decide to tap AI in law enforcement and crime prevention, especially as it regards racial and ethnic relations as important, but sensitive, to manage. Beyond data, discussions and decisions will need to be made on, amongst others, when AI-powered facial recognition systems should be used, how autonomously they should be allowed to operate, and when human intervention would be required. The European Parliament just last week voted in support of a resolution banning law enforcement from using facial recognition systems, citing various risks including discrimination, opaque decision-making, privacy intrusion, and challenges in protecting personal data.

    “These potential risks are aggravated in the sector of law enforcement and criminal justice, as they may affect the presumption of innocence, the fundamental rights to liberty and security of the individual and to an effective remedy and fair trial,” the European Parliament said. Specifically, it pointed to facial recognition services such as Clearview AI, which had built a database of more than three billion pictures that were illegally collected from social networks and other online platforms. The European Parliament further called for a ban on law enforcement using automated analysis of other human features, such as fingerprint, voice, gait, and other biometric and behavioural traits. The resolution passed, though, isn’t legally binding.

    Because data plays an integral role in feeding and training AI models, what constitutes such data inevitably has been the crux of key challenges and concerns behind the technology. The World Health Organisation (WHO) in June issued guidance cautioning that AI-powered healthcare systems trained primarily on data of individuals in high-income countries might not perform well for individuals in low- and middle-income environments. It also cited other risks such as unethical collection and use of healthcare data, cybersecurity, and bias being encoded in algorithms. “AI systems must be carefully designed to reflect the diversity of socioeconomic and healthcare settings and be accompanied by training in digital skills, community engagement, and awareness-raising,” it noted. “Country investments in AI and the supporting infrastructure should help to build effective healthcare systems by avoiding AI that encodes biases that are detrimental to equitable provision of and access to healthcare services.”

    Fostering trust goes beyond AI

    Singapore’s former Minister for Communications and Information and Minister-in-charge of Trade Relations, S. Iswaran, previously acknowledged the tensions around AI and the use of data, and noted the need for tools and safeguards to better assure people with privacy concerns. In particular, Iswaran stressed the importance of establishing trust, which he said underpinned everything, whether it was data or AI. “Ultimately, citizens must feel these initiatives are focused on delivering welfare benefits for them and ensured their data will be protected and afforded due confidentiality,” he said.

    Singapore has been a strong advocate for the adoption of AI, introducing in 2019 a national strategy to leverage the technology to create economic value, enhance citizen lives, and arm its workforce with the necessary skillsets. The government believes AI is integral to its smart nation efforts and that a nationwide roadmap was necessary to allocate resources to key focus areas. The strategy also outlines how government agencies, organisations, and researchers can collaborate to ensure a positive impact from AI, and directs attention to areas where change or potential new risks must be addressed as AI becomes more pervasive. The key goal here is to pave the way for Singapore, by 2030, to be a leader in developing and deploying “scalable, impactful AI solutions” in key verticals. Singaporeans, too, will trust the use of AI in their lives, a trust that should be nurtured through a clear awareness of the benefits and implications of the technology. Building trust, however, will need to go beyond simply demonstrating the benefits of AI.
    People need to trust the authorities across various aspects of their lives, and to trust that any use of technology will safeguard their welfare and data. The lack of trust in one aspect can spill over and affect trust in other aspects, including the use of AI-powered technologies. Singapore in February urgently pushed through new legislation detailing the scope of local law enforcement’s access to COVID-19 contact tracing data. The move came weeks after it was revealed the police could access the country’s TraceTogether contact tracing data for criminal investigations, contradicting previous assertions this information would only be used when the individual tested positive for the coronavirus. It sparked a public outcry and prompted the government to announce plans for the new bill limiting police access to seven categories of “serious offences”, including terrorism and kidnapping. Early this month, Singapore also passed the Foreign Interference (Countermeasures) Bill amidst a heated debate and less than a month after it was first proposed in parliament. Pitched as necessary to combat threats from foreign interference in local politics, the Bill has been criticised for being overly broad in scope and restrictive of judicial review. Opposition party the Workers’ Party also pointed to the lack of public involvement and the speed at which the Bill was passed.

    Will citizens trust their government’s use of AI-powered technologies in “delivering welfare benefits”, especially in law enforcement, when they have doubts, correctly perceived or otherwise, that their personal data in other areas is properly policed? Doubt in one policy can metastasise and drive further doubt in other policies. With trust, as Iswaran rightly pointed out, an integral part of driving the adoption of AI in Singapore, the government may need to review its approach to fostering this trust amongst its population. According to Deloitte, cities looking to use technology for surveillance and policing should look to balance security interests with the protection of civil liberties, including privacy and freedom. “Any experimentation with surveillance and AI technologies needs to be accompanied by proper regulation to protect privacy and civil liberties. Policymakers and security forces need to introduce regulations and accountability mechanisms that create a trustful environment for experimentation of the new applications,” the consulting firm noted. “Trust is a key requirement for the application of AI for security and policing. To get the most out of technology, there must be community engagement.”

    Singapore must assess whether it has indeed nurtured a trustful environment, with the right legislation and accountability, in which citizens are properly engaged in dialogue, so they can collectively decide what the country’s acceptable use of AI in high-risk areas should be.

  • Google analysed 80 million ransomware samples: Here's what it found

    Google has published a new ransomware report, revealing Israel was far and away the largest submitter of samples during the period analysed. The tech giant commissioned cybersecurity firm VirusTotal to conduct the analysis, which entailed reviewing 80 million ransomware samples from 140 countries. According to the report [PDF], Israel, South Korea, Vietnam, China, Singapore, India, Kazakhstan, the Philippines, Iran, and the UK were the 10 most affected territories based on the number of submissions reviewed by VirusTotal. Israel had the highest number of submissions, a near-600% increase on its baseline amount, although the report did not state what that baseline was. Ransomware activity was at its peak during the first two quarters of 2020, which VirusTotal attributed to activity by ransomware-as-a-service group GandCrab. “GandCrab had an extraordinary peak in Q1 2020 which dramatically decreased afterwards. It is still active but at a different order of magnitude in terms of the number of fresh samples,” VirusTotal said. There was another sizeable peak in July 2021 that was driven by the Babuk ransomware gang, an operation that launched at the beginning of 2021. Babuk’s ransomware attack generally features three distinct phases: initial access, network propagation, and action on objectives.

    GandCrab was the most active ransomware gang since the start of 2020, accounting for 78.5% of samples. GandCrab was followed by Babuk and Cerber, which accounted for 7.6% and 3.1% of samples, respectively.
    According to the report, 95% of ransomware files detected were Windows-based executables or dynamic link libraries (DLLs) and 2% were Android-based. The report also found that exploits made up only a small portion of the samples — 5%. “We believe this makes sense given that ransomware samples are usually deployed using social engineering and/or by droppers (small programs designed to install malware),” VirusTotal said. “In terms of ransomware distribution attackers don’t appear to need exploits other than for privilege escalation and for malware spreading within internal networks.” After reviewing the samples, VirusTotal also said that there was a baseline of between 1,000 and 2,000 first-seen ransomware clusters at all times throughout the analysed period. “While big campaigns come and go, there is a constant baseline of ransomware activity that never stops,” it said.
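
    The breakdowns above, the file-type shares and the running count of first-seen clusters, are the kind of summary that falls out of simple aggregation over sample metadata. Here is a minimal sketch of that aggregation; the sample records are invented placeholders, not VirusTotal data.

```python
# Minimal sketch: summarise ransomware sample metadata by file type and by
# first-seen date, the way the report's percentage breakdown and the
# first-seen-cluster baseline are derived. Placeholder data only.
from collections import Counter

samples = [
    {"file_type": "Win32 EXE", "first_seen": "2021-07-02"},
    {"file_type": "Win32 DLL", "first_seen": "2021-07-02"},
    {"file_type": "Win32 EXE", "first_seen": "2021-07-03"},
    {"file_type": "Android APK", "first_seen": "2021-07-03"},
]

by_type = Counter(s["file_type"] for s in samples)
total = sum(by_type.values())
for file_type, n in by_type.most_common():
    print(f"{file_type}: {n / total:.0%} of {total} samples")

# First-seen samples per day, the basis of a "fresh activity" baseline
by_day = Counter(s["first_seen"] for s in samples)
print(by_day)
```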

  • Brazilian e-commerce firm Hariexpress leaks 1.75 billion sensitive files

    Around 1.75 billion sensitive files were leaked by a Brazilian e-commerce integrator that provides services to some of the country’s largest online shopping websites. Hariexpress is headquartered in São Paulo and integrates multiple processes into a single platform to improve the efficiency and operational capability of retailers with more than one e-commerce store. Some of the company’s clients include Magazine Luiza, Mercado Livre, Amazon, and B2W Digital. The national postal service, Correios, is also among the company’s partners and was impacted by the incident.

    According to security researcher Anurag Sen at Safety Detectives, who discovered the leak in July 2021, the incident is attributed to a misconfigured and unprotected Elasticsearch server and involves more than 610GB of exposed data. The researchers noted they were unsuccessful in their attempts to resume communication with the company after an initial contact. Banking information relating to customers was not compromised, according to the experts; on the other hand, the leak exposed a vast set of sensitive information including customers’ full names, email addresses, business and residential addresses, and company registration and social security numbers. In addition, all manner of details relating to purchases, including dates, times, and prices of products sold, as well as copies of invoices and login credentials to the Hariexpress service, were also exposed, according to Safety Detectives. The researchers could not estimate the exact number of impacted users, due to the amount of duplicate email addresses found in the exposed data, but estimated that several thousand users were potentially affected by the leak. Moreover, it is not possible to tell whether other parties had access to the data, according to the researchers. The experts warned that the data set, which contains information that directly identifies users of marketplaces integrated by the company, could be used in phishing and social engineering attacks. The report also warned about the potential for other types of crime, such as burglaries, as the exposed data includes residential and business addresses, and extortion, since the information also includes purchases of intimate products. Contacted by ZDNet, the company did not respond to requests for comment. Brazil’s National Data Protection Agency was also contacted for comment on the case and had not responded at the time of publication.
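
    The kind of misconfiguration described here, where an Elasticsearch cluster answers requests without any authentication, is typically confirmed simply by querying the cluster's public REST API. The sketch below illustrates that check; the host is a placeholder, and this is an assumption about how such exposures are generally verified, not a description of the researchers' exact method.

```python
# Minimal sketch: test whether an Elasticsearch endpoint answers
# unauthenticated requests. The host below is a placeholder.
import requests

HOST = "http://example-elasticsearch-host:9200"

try:
    resp = requests.get(f"{HOST}/_cat/indices?format=json", timeout=5)
    if resp.status_code == 200:
        print(f"Exposed: {len(resp.json())} indices readable without credentials")
    elif resp.status_code in (401, 403):
        print("Credentials required; the server is not openly exposed")
    else:
        print(f"Unexpected response: {resp.status_code}")
except requests.RequestException as exc:
    print(f"Could not reach host: {exc}")
```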

  • Irish regulators support Facebook's 'consent bypass' legal maneuver, suggest $42 million fine for GDPR violations

    Regulators in Ireland have proposed up to $42 million in fines for Facebook after the company was accused of violating the GDPR through deceptive data collection policies. Privacy expert Max Schrems and his advocacy group noyb — which submitted the original complaint against Facebook — published a draft decision from the Irish Data Protection Commission (DPC) about the issue that was sent to the other European Data Protection Authorities. The decision suggests a fine of between $32 million and $42 million for Facebook’s violations of the GDPR, which include a failure to notify its customers about how it uses their data. Schrems and other privacy experts slammed the proposed fine for its relatively minuscule size and for the legal arguments Facebook is making to avoid stricter penalties. Noyb said Facebook’s argument is effectively that it is exempt from most GDPR rules because of a minor change in its agreement with users. “Facebook’s legal argument is rather simple: By interpreting the agreement between user and Facebook as a ‘contract’ (Article 6(1)(b) GDPR) instead of ‘consent’ (Article 6(1)(a) GDPR) the strict rules on consent under the GDPR would not apply to Facebook — meaning that Facebook can use all data it has for all products it provides, including advertisement, online tracking and alike, without asking users for freely given consent that they could withdraw at any time,” noyb explained in a blog post. “Facebook’s switch from ‘consent’ to ‘contract’ happened on 25.5.2018 at midnight — exactly when the GDPR came into effect in the EU.”

    Schrems said it is painfully obvious that Facebook is trying to bypass the rules of the GDPR by relabeling the agreement on data use as a ‘contract’. If this is accepted by regulators, any company could simply write the processing of data into a contract and thereby legitimize any use of customer data without consent, Schrems explained. “This is absolutely against the intentions of the GDPR, that explicitly prohibits to hide consent agreements in terms and conditions,” Schrems said. Noyb noted that studies have shown users do not see a website’s terms of service as a contract. A Gallup Institute survey said just 1.6% of respondents saw the agreement they make with Facebook when they sign up for the site as a “contract”; more than 63% said they see the agreement as consent.

    Schrems and noyb also made charged claims in the blog post, writing that representatives from Facebook and the DPC met in 2018 and created a way for Facebook to get around certain GDPR regulations. Schrems went on to explain that regulators were fining Facebook for “not being transparent” about how it processes data but still expressed support for the company’s “consent bypass”. Neither Facebook nor the DPC responded to requests for comment. “The DPC developed the ‘GDPR bypass’ with Facebook that it is now greenlighting as a regulator. Instead of a regulator, it acts as a ‘big tech’ advisor,” Schrems said. “Basically the DPC says Facebook can bypass the GDPR, but they must be more transparent about it. With this approach, Facebook can continue to process data unlawfully, add a line to the privacy policy and just pay a small fine, while the DPC can pretend they took some action.” Schrems also took issue with how the DPC analyzed noyb’s complaint, criticizing the regulators for omitting key parts of the submission and refusing oral hearings.

    The draft was sent to other data protection authorities across Europe and will now be reviewed. Regulators from other countries can submit complaints, which will then be handled by the European Data Protection Board; the board can overrule decisions made by Irish regulators. WhatsApp was slapped with a 225 million euro fine last month after a GDPR investigation found that the platform was not transparent about how it shared data with its parent company, Facebook. In that case, Irish regulators faced similar backlash for the initial 50 million euro fine, which the European Data Protection Board overruled and increased significantly. “Our hope lies with the other European authorities. If they do not take action, companies can simply move consent into terms and thereby bypass the GDPR for good,” Schrems said.

    Privacy expert Cillian Kieran told ZDNet the fine mentioned in the draft is just one-hundredth of the possible fine under the GDPR. Kieran also took issue with how the DPC represented Facebook’s position and the core tenets of its argument, saying there need to be consistent legal definitions designed into the technical systems themselves. “How can the fine in the draft decision, an amount which Facebook recovers in revenue within less than 5 hours on average, possibly be dissuasive? Much of the decision goes into countering allegations that Facebook violated consent requirements. The decision argues that consent is not necessary in this situation, nullifying any issues of consent. This points to a serious disparity in how authorities, advocates, and end-users like the complainant view the principles of processing under GDPR,” Kieran said.
“Maybe if the Irish DPC did not form a bottleneck on dozens of GDPR investigations, we would be getting these vital interpretations on consent and other legal bases sooner than three and a half years after GDPR takes effect. I agree with Schrems that this decision is disappointing and inadequate, both in the fine and in the interpretation of contracts versus consent.”


  • Marketers want to influence your dreams, consumers not so much

    Digital marketers are wildly bullish on dream tech — playing ads right before people sleep to influence dreams — and 39% of consumers are open to the technology too, according to a survey. The American Marketing Association-New York’s 2021 Future of Marketing Survey canvassed the marketing technology landscape relative to 2019’s report. Overall, consumers are beginning to accept new marketing technology but are worried about privacy. What caught my eye in the survey was dream tech, which was opposed by 32% of consumers and supported by 39%, with the remainder falling into the “don’t know” category. Given the dream-tech concept wasn’t around in 2019, the favorability rating is a bit stunning. Here’s how favorability among consumers stacks up across marketing channels: add it up and consumers are accepting of personalized ads (54% in favor), IoT devices (53%), and AI assistants (60%). Virtual reality headsets are viewed favorably by 61% of consumers and augmented reality devices checked in at 49%. In other words, dream tech is off to a good start with consumers even if the definition of it remains a bit murky.

    The report also looked at marketers’ expectations and what technologies would be adopted at scale. The kicker: 77% of marketers declared that they would deploy more dream tech in the next three years. That tally topped smart speakers and IoT devices. I can’t wait to see how this consumer vs. marketer adoption of dream tech plays out. Here’s a guess: Facebook figures out who looks at the app before bed and hits you with something to influence your dreams. Congressional hearings will ensue — again — but at least Facebook is used to it. One area of agreement was data collection and how it’s a privacy issue. Consumers would limit data collection to email, age, and name, and marketers generally agreed. Marketers were more comfortable with collecting location than consumers: 54% of marketers want to collect location data, while only 41% of consumers want to part with it.