How businesses collect and use your personal data - Vlog
/data-protection-and-privacy/data-collection-and-use
You deserve better, safer and fairer products and services. We're the people working to make that happen.

Rent platform 2Apply ordered to stop demanding excessive personal info from renters
/data-protection-and-privacy/data-collection-and-use/articles/2apply-ordered-to-stop-demanding-excessive-renter-info
Thu, 30 Apr 2026 01:15:04 +0000
The RentTech company crossed the line by manipulating prospective tenants into providing unnecessary information.

When Vlog started investigating the third-party rental platform industry in 2023, we unearthed a hidden world of stressed-out renters being forced to use a technology that demands huge amounts of personal information.

When we surveyed 1000 renters, 41% said they felt pressured by their rental agent or landlord to use a RentTech platform such as Ignite, 2Apply, Snug, tApp or others.

Six out of 10 were uncomfortable with the amount and type of information being collected, and 29% decided not to use the platform to apply for this very reason.

RentTech constitutes a classic power imbalance, with finding a place to live at stake. The ultimatum imposed on renters is stark: give us the information we ask for or you can’t apply for a home.

Either [renters] hand over personal and private information, including ID documents and payslips, or risk housing precarity or even loss

Privacy Commissioner Carly Kind

Now the Australian Privacy Commissioner has taken action against 2Apply after a year-long review of its practices, which included asking prospective renters to reveal their gender, student status, citizenship status and visa expiry, as well as details of their previous rental history – information that goes well beyond what would be needed to assess the suitability of a tenant.

“Renters often lack real choice when making rental applications. Either they hand over personal and private information, including ID documents and payslips, or risk housing precarity or even loss,” says Privacy Commissioner Carly Kind.

“This not only places them at risk that their applications will not be considered fairly and equitably, but that their personal information may be compromised in a data breach or cyber-attack.”

Manipulative tactics called out

The Privacy Commissioner took a novel approach in determining that 2Apply was violating applicants’ privacy, considering the “design, structure and way information is conveyed in the 2Apply form” as well as utilising the concept of “online choice architecture”, which is about how “the presentation and structure of choices presented to individuals can shape how they make decisions”.

2Apply was found to have been applying a variety of manipulative techniques in its rental applications, including:

  • “Confirmshaming” – the use of emotive language to make a user feel guilty or embarrassed for not taking an action that is beneficial to the information collector. 
  • “Biased framing” – presenting choices in a way that emphasises their supposed benefits or disadvantages.
  • “Bundled consent” – requesting consent for personal information to be used for multiple purposes in a single request. 

Can renters be forced to use a RentTech platform?

As Vlog recently reported, tenancy laws in some states require that renters are able to access an alternative payment and contact method.

In New South Wales and Victoria, tenants must be provided with options such as EFT or Centrepay. In Queensland, renters must be given a “reasonably accessible” option to pay rent.

But in practice, renters are in no position to push back against agents or landlords who breach these rules, as many do.

The essence of the Privacy Commissioner ruling is that 2Apply, which is operated by InspectRealEstate, violated the Australian Privacy Principles by collecting excessive personal information “by unfair means”, meaning prospective tenants felt they had no choice but to provide it.

The Office of the Australian Information Commissioner – which includes the Privacy Commissioner – has the power to issue fines of up to $66,000, but 2Apply only had to agree to discontinue its excessive information collection, without admitting any fault.

In her determination on the case, Ms Kind emphasised the need for other RentTech platforms to note the ruling and change their information-gathering practices accordingly.

What is Grok and should Australia be blocking it?
/electronics-and-technology/internet/using-online-services/articles/what-is-grok-and-should-australia-be-blocking-it
Fri, 16 Jan 2026 02:55:05 +0000
Elon Musk’s xAI tool has raised alarms around the world for facilitating the creation of malicious content, including deepfake pornography.


Need to know

  • Grok, the artificial intelligence tool developed by Elon Musk’s company xAI, was recently blocked in Indonesia and Malaysia due to its ability to create malicious content
  • Britain’s media regulator, Ofcom, says sexualised images of children created by Grok users may amount to child sexual abuse material
  • Musk’s company X recently said it would prevent Grok users from editing images of real people to put them in revealing clothing in jurisdictions where this is illegal

An artificial intelligence (AI) tool developed by Elon Musk’s company xAI was recently banned in Indonesia and Malaysia and has raised serious concerns globally. It’s called Grok, and it gives users the capability to make highly sexualised images of people that look disturbingly real. 

As Indonesia’s Communication and Digital Affairs Minister Meutya Hafid recently put it, “The government sees non-consensual sexual deepfakes as a serious violation of human rights, dignity and the safety of citizens in the digital space.”

Britain’s media regulator, Ofcom, released a statement that says, “There have been deeply concerning reports of the Grok AI chatbot account on X being used to create and share undressed images of people – which may amount to intimate image abuse or pornography – and sexualised images of children that may amount to child sexual abuse material.”

The government sees non-consensual sexual deepfakes as a serious violation of human rights, dignity and the safety of citizens in the digital space

Indonesia Communications and Digital Affairs Minister Meutya Hafid

Grok, which is available to X users who pay for a subscription, was launched in 2023, but in 2024 an image generator feature was added that included something called ‘spicy mode’, which can generate pornographic content.

Australia’s eSafety Commissioner says the agency “has seen a recent increase from almost none to several reports over the past couple of weeks relating to the use of Grok to generate sexualised or exploitative imagery”, adding that it “will use its powers, including removal notices, where appropriate and where material meets the relevant thresholds defined in the Online Safety Act”.

Malicious content made easier

Abhinav Dhall, an associate professor at Monash University’s Department of Data Science and AI, says Grok has put powerful new technology into the hands of wrongdoers.

“Grok has made it easier to produce malicious content because it is directly integrated into X, so anyone can quickly tag it and request image edits. As it is so well integrated into the platform, the edited outputs also appear directly within the same public thread, which increases the visibility and reach of manipulated images”, Dhall says, adding that in many cases “the original poster may not even have the rights to the image they are uploading on the platform, which can make it easier for the edits to become potentially defamatory or unsafe”.

Dhall says Grok users should take steps to avoid images falling into the wrong hands.

“To reduce the risk of personal images being used to generate malicious content, users should be careful about posting clear, front-facing photos of their face, and should check and tighten privacy settings on their social media platforms,” Dhall says.

“It is also important to avoid posting children’s photos publicly. If you suspect your images have been misused, reverse image search can be applied to detect AI-generated content, and fake or harmful content should be reported to the relevant platforms as quickly as possible.”

X said in a previous statement that it removes illegal content from its platform including child abuse material and suspends the accounts of people who post it.

Musk has posted comments on the Grok backlash, saying critics of X “just want to suppress free speech”. In an X post on 15 January he said, “Grok is supposed [to] allow upper body nudity of imaginary adult humans (not real ones) consistent with what can be seen in R-rated movies on Apple TV.”

Grok has made it easier to produce malicious content because it is directly integrated into X, so anyone can quickly tag it and request image edits

Associate Professor Abhinav Dhall, Monash University

In a more recent announcement on X the company said “we have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing” in jurisdictions where this is illegal.

But it remains unclear how the company will block certain locations from using this functionality, or which locations those may be.

Will mandatory codes stop the deepfakes?

On 9 March 2026, mandatory codes come into effect in Australia which impose new obligations on AI services to limit children’s access to sexually explicit content as well as to violent material and content related to self-harm and suicide. But enforcing such codes on mammoth AI companies based in the US and other countries has proven to be a tall order for Australian regulators.

Abhinav Dhall stops short of recommending that Grok be banned in Australia, saying it’s a matter of enforcing the current rules and compelling tech companies to stop harmful content.

“Australia already has laws covering image-based abuse, so the focus should be on making the penalties clear and ensuring it is easy for victims to report abuse and have content removed quickly,” Dhall says. “At the same time, social media platforms should be required to implement stronger guardrails to stop harmful edits before they spread.”

Meanwhile, amid the outcry around the world about sexualised deepfakes, in a speech given at Musk’s company SpaceX, in South Texas, US Defense Secretary Pete Hegseth recently said that the Pentagon will embrace Grok along with Google’s generative AI engine. 

Real estate agents, chemists, car hire companies and more under new privacy scrutiny
/data-protection-and-privacy/articles/real-estate-agents-car-hire-companies-under-new-privacy-scrutiny
Thu, 08 Jan 2026 23:14:20 +0000
Australia’s privacy regulator is reviewing the privacy policies of businesses collecting your personal data during in-person interactions.


Need to know

  • In recent years, Vlog has conducted several investigations that focused on the far-reaching permissions privacy policies give the businesses that write them
  • In 2023, we reported on the privacy policies of rental platforms, and last year we analysed the privacy policies of Australia’s ten most popular car brands
  • This month, the Office of the Australian Privacy Commissioner begins its first full-scale privacy policy review, focusing on information demanded by businesses in person

Very few of us read the privacy policies we passively consent to when engaging with a service provider. Fewer still would understand what these privacy policies actually say.

In recent years, Vlog has conducted several investigations that focused on the far-reaching permissions these documents give the businesses we regularly interact with.

In 2023, we reported on the privacy policies of rental platforms such as realestate.com.au’s Ignite as well as Ailo, Tenant Options, Rental Rewards, Snug, 2Apply and Simple Rent.

The conclusion? These RentTech platforms collected information that went well beyond what’s needed to assess a tenant’s ability to pay the rent. The questions often seemed designed to grab as much data as possible from people who had no choice but to provide it.

In 2024, we analysed the privacy policies of Australia’s ten most popular car brands to see how the vehicles monitored and tracked their drivers. Here again we found that the harvesting of personal driver information was often excessive, and the rights the manufacturers gave themselves to share the data with third parties were both far-reaching and vague.

The ACCC has estimated that it would take the average Australian 46 hours to read all the privacy policies they encountered in a month, the average length of which is about 6876 words.


The ACCC has estimated that it would take the average Australian 46 hours to read all the privacy policies they encountered in a month
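
To put that estimate in perspective, a quick back-of-envelope calculation (assuming a typical adult reading speed of around 250 words per minute – a figure of our own, not from the ACCC) shows what those numbers imply: nearly half an hour per policy, and roughly 100 policies encountered in a month.

```python
# Back-of-envelope check of the ACCC estimate.
# The word count and hours are the ACCC figures quoted in the article;
# the reading speed is an assumed value, not from the ACCC.
AVG_POLICY_WORDS = 6876   # average privacy policy length (ACCC)
TOTAL_HOURS = 46          # estimated monthly reading time (ACCC)
WORDS_PER_MINUTE = 250    # assumed typical adult reading speed

minutes_per_policy = AVG_POLICY_WORDS / WORDS_PER_MINUTE
policies_per_month = TOTAL_HOURS * 60 / minutes_per_policy

print(f"~{minutes_per_policy:.0f} minutes per policy")
print(f"~{policies_per_month:.0f} policies encountered per month")
```

Under that assumption, the 46-hour figure works out to around 28 minutes per policy across roughly 100 policies a month – a reading load almost no one could realistically take on.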

All of this makes the Office of the Australian Information Commissioner’s (OAIC) recent announcement that it will begin its first large-scale review of privacy policies in early January 2026 more timely than ever.

What’s changing in privacy law?

The Privacy Act requires privacy policies to contain certain details, such as what information is collected, why it’s needed, how it’s used, and how it can be corrected if necessary. 

An update to the Act in 2024 means businesses will also be required (as of 10 December 2026) to specify in their privacy policies whether a computer program will be using your personal information to make decisions that could go against you, such as when an application for a rental home is rejected. 

The privacy policy sweep is … focusing on information demanded by businesses in person, such as when a real estate agent asks you for personal details when you’re inspecting a rental property or a car rental company presents you with a lengthy form before handing you the keys

In addition, the 2024 update gave the OAIC the power to issue infringement notices for Privacy Act violations without going to court. And it gives individuals the right to seek legal redress and financial compensation in certain cases for invasions of privacy or misuse of their personal information.

The OAIC’s privacy policy sweep is taking a different approach from our investigations of online privacy documents. It will occur in the real world, focusing on information demanded by businesses in person, such as when a real estate agent asks you for personal details when you’re inspecting a rental property or a car rental company presents you with a lengthy form before handing you the keys. The privacy policies of such businesses must include the above-mentioned information. 

Not having the right information in a privacy policy – or not having a privacy policy at all – could lead to fines from the OAIC of up to $66,000.

Which types of businesses will be targeted?

The privacy policy sweep will focus on sectors where the OAIC believes there are particular power imbalances – also known as information asymmetries – between the business in question and the customers being asked to provide the information.

When confronted with in-person requests for their personal information … consumers often don’t have access to all the information they might need to make an informed decision

Privacy Commissioner Carly Kind

“When confronted with in-person requests for their personal information from retailers, licensed venues, car hire companies or real estate agents, consumers often don’t have access to all the information they might need to make an informed decision,” says Privacy Commissioner Carly Kind.

“This makes them vulnerable to overcollection of personal information and creates risks to their security and privacy.”

The OAIC says it will review the privacy policies of around 60 businesses from the following six sectors, with a particular focus in each case.

  • Rental and property – collection of individuals’ personal information during property inspections.
  • Chemists and pharmacists – collection of personal information for the purpose of providing a paperless receipt and collection of identity information to provide medication.
  • Licensed venues – collection of identity information to enable individuals to access a venue.
  • Car rental companies – collection of identity and other personal information to enable an individual to enter into a car rental agreement.
  • Car dealerships – collection of personal information to enable an individual to conduct a vehicle test drive.
  • Pawnbrokers and second-hand dealers – collection of identity information from individuals who wish to sell or pawn goods.

Transparent communication is critical

In the OAIC’s view, a business’s explanation of how it will use personal information should be open and transparent.

“The Australian community is increasingly concerned about the lack of choice and control they have with respect to their personal information,” Kind says.

“The first building block of better privacy practices is a clear privacy policy that transparently communicates how an individual can expect their information to be collected, used, disclosed and destroyed.

“In conducting a compliance sweep, the OAIC intends to ensure that entities are meeting their obligations to be transparent with consumers and customers about how they’re using the personal information they collect in-person.

“We hope this will also catalyse some reflection about how robust entities’ privacy practices are, and whether more can be done to improve compliance with the Privacy Act writ large.”


AI is increasingly invading our medical privacy as regulation struggles to keep up
/data-protection-and-privacy/data-collection-and-use/how-your-data-is-used/articles/ai-used-by-medical-professionals
Mon, 20 Oct 2025 13:00:00 +0000
Patient scans shared without consent, AI scribes recording doctor visits – medical professionals' AI use is raising big concerns.


Need to know

  • Around 22% of GPs are now using AI scribes to record during patient visits, and some scribes are also proposing diagnosis and treatment
  • Profit-driven medical businesses have used large volumes of patient data to train AI tools without patients' knowledge or consent 
  • Though consent is a cornerstone of privacy law, the Privacy Commissioner says consent isn't always required when it comes to AI and patient data

It seems the artificial intelligence (AI) industry is grabbing our personal information as fast as it can – before regulations end up enforcing what the community actually wants.

Among other indicators, the final report of the Australian Competition and Consumer Commission’s (ACCC) Digital Platform Services Inquiry, released in March this year, confirmed that the vast majority of us don’t want our personal data used to train AI tools without our consent.

In a consumer survey that was part of the report, 83% of Australians said our approval should be mandatory. This aligns with research from the agency tasked with protecting our personal information, the Office of the Australian Information Commissioner (OAIC), which found that 84% of Australians want to have control over their personal information, including the right to demand that it be deleted.

In mid-October, Roy Morgan published research showing that 65% of Australians think AI “creates more problems than it solves”, though 12% hailed its ability to advance medical science.

[The Privacy Act’s requirements] are constraining innovation without providing meaningful protection to individuals

Productivity Commission

There is considerable tension between those seeking to protect our data and those that see it as the key to improving products and services – and boosting profits – especially the big tech companies. 

Agencies of the federal government are also on board with this notion. According to a recent report by the Productivity Commission (PC), our personal data shouldn’t be locked away behind a wall of inflexible regulations.

The Privacy Act’s requirements “are constraining innovation without providing meaningful protection to individuals”, the report says. It concludes that making it easier to harness data could add up to $10 billion a year to the country’s economy.

To that end, the PC calls for exempting some businesses from the requirement to obtain informed consent before accessing a person’s data, one of the key pillars of the Privacy Act. In the PC’s view, some businesses shouldn’t have to do this as long as they commit to acting in the person’s best interests when it comes to handling their privacy. 

The report goes on to argue that giving consent has become a meaningless exercise, since no one has the time to read whatever they’re consenting to.

While businesses are required to have a privacy policy under the Privacy Act, these sprawling documents are all but impossible to understand. The ACCC has estimated that it would take the average Australian 46 hours to read all the privacy policies they encountered in a month, the average length of which is about 6876 words.

These consent protocols are clearly not working as intended, but Privacy Commissioner Carly Kind has taken issue with the idea of watering down privacy regulations, making the case that protecting privacy and boosting productivity are not mutually exclusive and that informed consent is a critical consumer right.

Patient scans used to train AI without consent 

But there have been cases in which the OAIC – where Kind is one of three commissioners – has allowed businesses to surreptitiously harvest our data for their own ends.

In September last year, for instance, the news outlet Crikey reported on the case of Australia’s biggest radiology chain, I-Med Radiology Network, entering a joint venture with the start-up AI platform Harrison.ai in 2019.

The plan was to use I-Med’s patient scans to train an AI tool called Annalise.ai, which has now reportedly been approved for use in over 40 countries and heralded as a gamechanger.

The quantity and quality of the I-Med data – around 30 million patient scans and the resulting diagnostic reports from Australia and other countries – was key to the tool’s success.

It is clearly a money-maker for the business. In April last year, the Australian Financial Review reported that the business was on track to record $1.35 billion in revenue for the financial year.

(Harrison.ai’s flagship product was an AI model that was trained on 800,000 chest x-rays sourced from one of I-Med’s 270 clinics in Australia, Crikey reported. It owns Annalise.ai, which is now called Harrison.ai Radiology.) 

The Crikey article says there is no evidence that informed consent was obtained from the patients. Based on the above-mentioned surveys, this would likely have run counter to many of their wishes. Neither I-Med nor Harrison.ai has disputed this.

Consent is only required in limited circumstances by the Australian Privacy Principles, and will not always be required when entities use personal information for AI training

Office of the Australian Information Commissioner spokesperson

But it appears there is some leeway on the informed consent requirement. In July this year the OAIC ruled that I-Med had not contravened the Privacy Act since the data had been de-identified and could no longer be defined as personal information.

The ruling suggests that, where privacy issues are not at play, it’s not the OAIC’s role to prevent businesses from grabbing our data to train AI, with or without our approval.

An OAIC spokesperson tells Vlog that “the issue of consent is always a highly relevant factor”, but the protections of the Australian Privacy Principles (APPs) no longer apply when patient data is de-identified.

“However, strong de-identification is challenging, and whether something is de-identified is context dependent. Data that is de-identified when subject to strict controls – as it was in the I-MED case – may not be de-identified in other contexts, such as if it is released publicly,” the spokesperson says.

The right balance between privacy regulation and the development of new AI tools remains a work in progress.

While industry guidance from the OAIC stipulates that people should be informed when their data is being used to train AI, “consent is only required in limited circumstances by the APPs, and will not always be required when entities use personal information for AI training”, the spokesperson added.

In other words, we have no copyright on the content of our bodies. Medical clinics are free to use our data for commercial purposes without telling us, and businesses can profit handsomely.

In a joint business venture, Australia’s largest radiology chain I-Med Radiology made millions of patient scans available to the AI firm Harrison.ai without the patients’ knowledge or consent.

The rise of AI scribes in GP consultations

All of this is pertinent to a related development – the growing use by GPs and specialists of AI tools that record patient visits, known as AI scribes. Around 22% of GPs are currently using AI scribes, according to polling by the Royal Australian College of General Practitioners (RACGP). Popular models include Lyrebird, Heidi Health, Amplify+, i-scribe and Medilit, but there are many others.

The RACGP has tentatively embraced some uses of AI scribes, especially where they can reduce the administrative burdens on doctors and free them up to concentrate more fully on preventative care.

But the college has also expressed concern about AI tools being developed by profit-seeking tech firms without the oversight of medical clinicians. “Value to technology company shareholders might be prioritised over patient outcomes,” the RACGP wrote in a position statement on the issue.

Of particular concern is the reliability – and the legality – of AI in making medical recommendations. The Therapeutic Goods Administration (TGA) has recently gone on record regarding the issue, saying medical professionals report that AI scribes “frequently propose diagnosis or treatment for patients beyond the stated diagnosis or treatment a clinician had identified during consultations”.

Such functionality would mean AI scribes are medical devices that require pre-market approval, the TGA says. By dodging regulation, they are potentially being supplied to the medical industry in breach of the Therapeutic Goods Act.

AI scribes can produce errors and inconsistencies and cannot replace the work GPs typically undertake to prepare clinical documentation

RACGP spokesperson

The RACGP hasn’t taken a position on whether AI scribes should be regulated by the TGA, but it does instruct GPs to obtain consent from patients before using them, to double check that the information they record is accurate, and to have a backup plan in place in case there’s a glitch with the AI scribe.

“AI scribes can produce errors and inconsistencies and cannot replace the work GPs typically undertake to prepare clinical documentation,” a RACGP spokesperson tells Vlog.

“GPs and other doctors must carefully check the output of an AI scribe to ensure its accuracy. Where an AI scribe performs its expected function – summarising information for independent GP review and decision making – it does not have a therapeutic use. Diagnosis, however, is outside the scope of an AI scribe.” 

AI scribes are currently used by around 22% of GPs in Australia.

Market use shouldn’t pre-date regulation

One very concerned citizen is AI expert Dr Kobi Leins, who was recently told to take her business elsewhere after asking that AI not be used during a specialist visit for her child. Leins cancelled the appointment, not least because she was familiar with the AI model in use and wasn’t impressed by its privacy and security features. She didn’t want it capturing her child’s data.

“There is no need for many of these tools, and fundamentally, we need to ask why they are being pushed so hard and where money would be better spent in a healthcare system where doctors have time to listen to patients,” Leins says.

“Individuals do not have the skills nor the capacity to review these tools. Regulatory bodies need to review them in a way that ensures privacy, manages risk and trains staff about their limitations and where they are safe to use. It’s about individual privacy, but also about group privacy. There are potentially grave harms based on racial and gender and other biases in the data these tools rely on.” 

Leins points to a recent study funded by the UK’s National Institute for Health and Care Research which found that when social workers used Google’s popular AI model ‘Gemma’, it downplayed women’s physical and mental health issues as compared to men’s in its case note summaries. In these cases, it’s the AI companies in the driver’s seat rather than healthcare workers, she argues.

Regulatory bodies need to review [AI scribes] in a way that ensures privacy, manages risk and trains staff about their limitations and where they are safe to use

AI expert Dr Kobi Leins

Leins is not alone. One concerned parent told us they didn’t trust the data handling practices of AI companies.

How the data is managed “is dependent on your GP and the policy of whatever scribe they’re using, which you’re unlikely to know when they ask you to sign over consent,” the parent says. “And like with most companies, you have no control over what happens when they get hacked and all your personal health information ends up on the dark web. I generally consent for my data to be scribed, but not my kid’s.” 

The privacy policy of one of the most popular AI scribes, Heidi Health, is not reassuring. It says the business may share the personal information it captures with employees, third party suppliers, related companies, anyone it transfers the business to, ‘professional advisers, dealers and agents’, government agencies, law enforcement and more. Your data may also be transferred overseas.

Heidi Health tells Vlog that the company doesn’t share patient-identifiable or health information with external parties except where required by law, that its data handling practices comply with the Privacy Act, and that it doesn’t use patient data to train AI.

“We understand that clinicians and patients will only embrace new technology when they have complete confidence that their data is secure and used responsibly,” says Heidi Health head of legal and regulatory affairs Yass Omar, adding that the company is the only AI scribe in Australia certified to ISO 27001, “a globally recognised standard that reflects the strength of our information security management systems”.

But concerns remain about AI scribes in general. Another parent recently encountered one while taking her daughter to a specialist. She was told the data would be deleted after it had been reviewed, “but I only had her word to go on. What are the privacy implications here? And what mistakes can and does it make? Does the doctor look at the summary after each appointment at the end of the day while it is fresh in their mind, to make sure that the scribe accurately captured the info?” 

Notices about the use of AI scribes by medical professionals are becoming a more common sight in healthcare settings.

Just like asking Dr Google 

Professor Enrico Coiera, director of both the Centre for Health Informatics at Macquarie University's Australian Institute of Health Innovation and the Australian Alliance for AI in Healthcare, tells Vlog that one of his biggest concerns is that product development is far outpacing regulatory oversight.

“Generative AI products in particular are being updated constantly. This makes it very hard for the traditional safety guardrails we rely on, like regulation, to make sure these new technologies are safe.

“Much of this kind of AI is never marketed as ‘medical grade’, but as a general-purpose tool. So it is never assessed for its safe and effective use in healthcare.” 

Much of this kind of AI is never marketed as ‘medical grade’, but as a general-purpose tool

Australian Alliance for AI in Healthcare director, Professor Enrico Coiera

Coiera says this is much like asking your search engine a health question. If an AI scribe suggests a diagnosis or treatment, it's a medical device that needs to be regulated by the TGA, he says.

Referring to the I-Med case, Coiera says, “patients should be asked to consent to the use of their data for building AIs, and especially so if there is a risk their information is identifiable”.

He recommends that patients read any consent forms carefully before signing.

“If they are uncomfortable with their data being used for AI development, they should discuss that with their care provider.” 

Coiera is a believer in the capacity of AI to advance medical science, as long as people’s privacy is protected.

“As long as I am comfortable that my data is stored securely and is de-identified before use, I would agree to its use for non-commercial research purposes.” 

The post AI is increasingly invading our medical privacy as regulation struggles to keep up appeared first on Vlog.

Kmart’s facial recognition technology broke the law, commissioner rules /data-protection-and-privacy/data-collection-and-use/who-has-your-data/articles/oaic-ruling-kmart-frt Wed, 17 Sep 2025 14:00:00 +0000 /uncategorized/post/oaic-ruling-kmart-frt/ A three-year investigation by the Privacy Commissioner has confirmed what Vlog suspected.

The post Kmart’s facial recognition technology broke the law, commissioner rules appeared first on Vlog.

Retail giant Kmart has been found to have breached the Privacy Act with its facial recognition program, three years after a Vlog expose revealed the invasive technology was in use across Australia.

In 2022, Vlog reported that Kmart, along with Bunnings and The Good Guys, was capturing the biometric data, or unique facial features known as a ‘face print’, of customers entering their stores.

Our investigation prompted the Office of the Australian Information Commissioner (OAIC) to launch a probe into whether privacy laws had been breached through the use of facial recognition technology (FRT).

In 2024, Privacy Commissioner Carly Kind found that Bunnings had breached the law, and today it was announced that Kmart had done so too.

Kmart sought to justify its use of FRT in stores between June 2020 and July 2022 as a measure to prevent refund fraud. However, the Commissioner said Kmart did not seek customer consent to collect biometric information and its collection was not proportional, as there were other means available to address refund fraud.

“I do not consider that the respondent (Kmart) could have reasonably believed that the benefits of the FRT system in addressing refund fraud proportionately outweighed the impact on individuals’ privacy,” Kind says.

No penalty 

The OAIC did not seek a financial penalty against Kmart in this case, just as it didn't with Bunnings last year.

In a statement, a Kmart spokesperson says the company is “disappointed” with the ruling and is reviewing its options to appeal the determination.

“Like most other retailers, Kmart is experiencing escalating incidents of theft in stores which are often accompanied by anti-social behaviour or acts of violence against team members and customers,” the spokesperson says.

Commissioner Kind says that despite the two rulings against Bunnings and now Kmart, FRT is not ‘banned’ in Australia.

“The human rights to safety and privacy are not mutually exclusive; rather, both must be preserved, upheld and promoted. Customer and staff safety, and fraud prevention and detection, are legitimate reasons businesses might have regard to when considering the deployment of new technologies. However, these reasons are not, in and of themselves, a free pass to avoid compliance with the Privacy Act,” she stated.

Should you trust an AI chatbot with your mental health? /data-protection-and-privacy/data-collection-and-use/how-your-data-is-used/articles/ai-therapy-chatbots Wed, 26 Feb 2025 13:00:00 +0000 /uncategorized/post/ai-therapy-chatbots/ Millions are using Headspace, Wysa, Youper and other popular therapy chatbots, but can AI replace a professional counsellor?

The post Should you trust an AI chatbot with your mental health? appeared first on Vlog.


Need to know

  • AI chatbots for mental health and therapy are growing in popularity
  • Experts are raising concerns about both the efficacy of the tools and the safety of sensitive user data
  • Academics say apps making misleading claims could breach consumer law

This article mentions suicide. If you or anyone you know needs support, contact Lifeline on 13 11 14 or Beyond Blue on 1300 224 636.

Early in 2024 Sarah* started using the generative artificial intelligence (AI) tool ChatGPT for small administrative tasks.

“I would just experiment with it, for the start of a business correspondence or instead of using Google for something I would use ChatGPT,” she says.

After a while she began using it as a tool to supplement her mental health therapy, using the AI like a “sounding board” between her psychologist sessions.

“I would tell the AI my attachment styles and ask it how best to respond to certain situations. I would ask it to keep an eye out for any red flags in the early stages of dating,” says Sarah.

You want to feel like you have been heard and articulating that to someone, or something else, can be really helpful, kind of like a diary

'Sarah', AI therapy chatbot user

“You want to feel like you have been heard and articulating that to someone, or something else, can be really helpful, kind of like a diary.” 

Sarah felt like the AI chatbot helped supplement her therapy sessions rather than replace them, and says her therapist wasn’t discouraging about her use of it.

Now, however, Sarah is no longer seeing her therapist but is still using ChatGPT, while also experimenting with the popular new AI tool DeepSeek.

“I do have concerns. There is [the issue of] privacy, and then also I think ‘am I living more of a bubble life in an echo chamber after COVID, is this just a further way of isolating myself?'” 

The rise of therapy chatbots 

While some people, like Sarah, are using general AI chatbots like ChatGPT, DeepSeek, MetaAI and Gemini for mental health-related assistance, there are also a rising number of AI products designed specifically to help with mental health.

The US for-profit sleep and meditation mental health app Headspace (not to be confused with the Australian youth not-for-profit organisation of the same name) is in the top ten mental health websites visited in Australia. In October last year, it launched Ebb, “an empathetic AI companion integrated into the app to help people navigate life’s ups and downs”.

The Headspace app has reportedly been downloaded more than 80 million times across various app stores.

Other major players in the AI therapy chatbot space include Wysa and Youper, both of which have over a million downloads on the Google Play Store. Youper says it has helped over 2.5 million users.

Not a replacement for therapy

While mental health therapy apps don’t specifically claim to replace in-person psychologists, they do make varying claims about their ability to treat mental health conditions.

Headspace’s Ebb

Headspace’s page promoting Ebb says it was developed by clinical psychologists using science-backed methods.

“Ebb is an AI-powered tool that’s designed to help you better understand yourself. While therapy and coaching provide deeper emotional support, Ebb can help you maintain mental wellness by encouraging regular reflection and mindfulness,” the website says.

Further down, the website says, “Ebb is not a substitute for medical or mental health treatment. If you need support for a mental health condition, please talk with a licensed provider”.

Headspace tells Vlog that Ebb is not a “therapy chatbot” but rather “a sub-clinical support tool for our members to process thoughts and emotions or reflect on gratitude”.

“We acknowledge that Ebb, like any AI tool, has limitations. It’s not intended to replace professional mental health care but to complement it by helping users manage stress, practice meditation, and engage in self-reflection,” a spokesperson says.

The spokesperson added that the chatbot has ways of detecting high risk situations such as suicidal thoughts or self-harm ideation, and will suggest that users contact emergency services.

Youper

The Youper website makes the point that the supply of mental health clinicians is not keeping up with demand for mental health services.

“The groundwork of Youper is evidence-based interventions – treatments that have been studied extensively and proven successful. Youper has been proven clinically effective at reducing symptoms of anxiety and depression by researchers at Stanford University,” the site claims, adding that 83% of users experience “better moods”.

Youper did not respond to our questions about the scientific underpinnings of these claims.

Wysa

Wysa says its product is designed to complement traditional therapy and undergoes continuous compliance and safety testing to make sure users don’t receive harmful or unverified advice.

“While ChatGPT and Gemini are powerful general AI models, Wysa is purpose-built for mental health with strict clinical safety guardrails,” the chief of clinical services and operations at Wysa, Smriti Joshi, tells Vlog.

The unknown potential harms

Professor Jeannie Paterson, director of the Centre for AI and Digital Ethics at the University of Melbourne, says there is a lot we don’t know about how AI therapy chatbots function.

Paterson says there is a big difference between using the technology in collaboration with a professional and going to an app store to buy something we “know very little about”.

“They could be causing you harm, or you’re paying for something that does you very little good, because it’s just not tested.” 

They could be causing you harm, or you’re paying for something that does you very little good, because it’s just not tested

Professor Jeannie Paterson, Centre for AI and Digital Ethics

Previous media reporting around the world has highlighted some of the potentially harmful algorithms used in therapy chatbots when things go wrong.

In 2018, the BBC reported that therapy apps Wysa and Woebot, which were being promoted to children in the UK, failed to suggest to a journalist posing as a child who was being sexually abused that they contact emergency services and get help. Both apps have had significant updates since then.

According to National Public Radio in the US, in an apparent cost-saving measure, the US-based National Eating Disorder Association fired their phone-line staff in 2023 and began promoting an AI chatbot instead, which went on to suggest dieting advice to people with eating disorders.

Experts say there’s a big difference between using a therapy chatbot in collaboration with a professional, and buying something from an app store we know little about.

Meeting a demand 

Piers Gooding, an associate professor at La Trobe University's law school, has researched mental health chatbots extensively.

He says he expects usage to continue to grow, simply because so much money is being poured into their development worldwide. Australia's healthcare system is more effective than those of countries like the United States, but sizeable gaps remain between demand for mental health services and their supply.

“In 2021, according to one market report, digital startups focusing on mental health secured more than five billion dollars in venture capital – more than double that for any other medical issue, and investment further increased in 2023,” he says.

I suspect there will be some kind of reckoning with some of the over-claiming about what they can do

Associate Professor Piers Gooding, La Trobe University law school

“I suspect there will be some kind of reckoning with some of the over-claiming about what they can do, and that might come in the form of people just voting with their feet and realising that it’s not quite what it’s cracked up to be.” 

Australian Association of Psychologists Inc policy coordinator Carly Dober says she understands that in a cost-of-living crisis, many people can’t afford to seek the mental health care and support they need and chatbots appear to be a more affordable option.

“I don’t think it’s the fault of people for trying to find whatever they can to support themselves in the moment. But unfortunately, when there is that kind of vacuum, sometimes not the most helpful players will try to fill that market or that space,” she says.

Lack of regulation? 

Dober says there is a big difference between using a chatbot in conjunction with therapy, versus replacing therapy with AI, which she says lacks the “checks and balances” needed.

“There is no uniform law around AI chatbots, there is a lack of regulation of that space, whereas we as psychologists are a highly regulated field,” she adds.

TGA exemptions

La Trobe University’s Gooding says the Therapeutic Goods Administration (TGA) appears to exempt AI therapy chatbots from regulation as a “medical device” if the providers take steps such as:

  • working from widely accepted cognitive behavioural therapy (CBT) models
  • not providing experimental therapy 
  • not diagnosing mental health conditions.

The TGA says it is currently “reviewing the ongoing appropriateness” of these exemptions and that it will consult stakeholders before suggesting any changes to government. It adds that to date it has received no complaints about therapy chatbots.

‘Innovation important’

Gooding says that despite the lack of TGA regulation, claims made by app providers about the benefits of their products must not be misleading or deceptive under consumer law.

He adds that innovation in the technology space is important and that a “heavy handed” regulatory approach from the TGA might not best serve consumers.

Chatbots can’t capture nuance, and they can be easily programmed to have addictive elements and pretend to be real people

Piers Gooding, La Trobe University law school

“Regulatory change is almost certainly needed to remove the digital mental health app exemption in relation to chatbots. Chatbots can’t capture nuance, and they can be easily programmed to have addictive elements and pretend to be real people. That poses a real danger in the mental health context,” Gooding says.

*Not her real name

Digital doppelgangers throwing lives into chaos  /data-protection-and-privacy/data-collection-and-use/how-your-data-is-used/articles/oaic-ruling-on-digital-doppelgangers Wed, 05 Feb 2025 13:00:00 +0000 /uncategorized/post/oaic-ruling-on-digital-doppelgangers/ A recent ruling says that Services Australia has a duty to prevent your records from getting mixed up with another person’s.

The post Digital doppelgangers throwing lives into chaos  appeared first on Vlog.


Need to know

  • OAIC recently ruled that Services Australia had failed to prevent an individual’s Medicare information from being mixed up with – and disclosed to – another person
  • The complainant in the case was awarded $10,000 in compensation
  • Once our personal information gets mixed, there’s no telling how long it’s going to take to straighten it out to the government’s satisfaction

In our increasingly data-driven world – where details on a database can define who we are – an administrative mix-up by a government agency can easily morph into a recurring nightmare.

It’s an admin error that can happen if we share the same name and date of birth with someone – our digital doppelganger. Once our personal information gets mixed, there’s no telling how long it’s going to take to straighten it out to the government’s satisfaction. It’s also difficult to predict how long the information management failure will affect our lives.

It’s an admin error that can happen if we share the same name and date of birth with someone

Given that hundreds, if not thousands, of people in Australia share the same name and date of birth, it’s a serious issue. But it’s worth pointing out that government agencies are not supposed to let these mix-ups happen, as a recent determination by the Office of the Australian Information Commissioner (OAIC) makes clear.

Focusing on an individual case, the OAIC ruled earlier this month that Services Australia had failed to prevent an individual’s Medicare information from being mixed up with – and disclosed to – another person, a violation of the Privacy Act.

Intertwined records can cause real harm

“The idea of a digital doppelganger might seem like a curiosity, but its catchy ring conceals a difficult reality,” says Privacy Commissioner Carly Kind in a recent blog on this issue.

“Individuals whose government records, such as Medicare, Centrelink and child support services, are intertwined may suffer not only inconvenience but real harm. They or their health practitioners may be prevented from accessing accurate records to enable the timely provision of health services.”  

In the case mentioned above, the affected person’s records were intertwined with another person’s for six years, during which time he continually contacted Services Australia and tried in vain to clear things up.

Individuals whose government records, such as Medicare, Centrelink and child support services, are intertwined may suffer not only inconvenience but real harm

Privacy Commissioner Carly Kind

At one point the agency assured the person that the matter had been resolved.

Then he got a message from Services Australia saying “your registered Safety Net family is getting close to reaching the Medicare Safety Net Threshold”, which was clearly meant for another person.

Following that, he discovered that his COVID and influenza vaccination history had been assigned to another person.

Kind acknowledges that Services Australia has taken steps since the incident to keep digital doppelgangers administratively separate from one another, but the possibility of a mix-up still exists.

The OAIC awarded the complainant $10,000 in compensation “for distress arising from the privacy breaches, which must have been a considerable burden on his time and energy over the past decade,” Kind says.

Bunnings facial recognition program ruled illegal /data-protection-and-privacy/data-collection-and-use/how-your-data-is-used/articles/oaic-finding-against-bunnings Mon, 18 Nov 2024 13:00:00 +0000 /uncategorized/post/oaic-finding-against-bunnings/ In a case exposed by Vlog, the privacy commissioner has found the business breached the Privacy Act.

The post Bunnings facial recognition program ruled illegal appeared first on Vlog.

Australian home hardware retail giant Bunnings Warehouse breached Australia’s Privacy Act with its use of facial recognition technology, an official investigation has found.

The probe into Bunnings and Kmart was launched by the Office of the Australian Information Commissioner (OAIC) following a Vlog investigation into the companies in July 2022.

Our investigation found that the popular retailers, along with The Good Guys, were capturing biometric information on customers through facial recognition technology in stores, largely without customers’ knowledge or consent. The Good Guys immediately paused their trial of the technology following Vlog’s report. 

Our story generated considerable customer backlash against the companies and gave rise to the OAIC investigation. Just over a month later, in July 2022, both Bunnings and Kmart announced they would halt their use of facial recognition technology while the OAIC investigation was ongoing.

On Tuesday, Privacy Commissioner Carly Kind found that Bunnings Group Limited had breached Australia’s Privacy Act. There was no further information about the investigation into Kmart.

Facial recognition information collected without consent 

Kind says the facial recognition cameras in use in 63 Bunnings stores in Victoria and New South Wales between November 2018 and November 2021 likely captured the faces of hundreds of thousands of customers.

“Facial recognition technology, and the surveillance it enables, has emerged as one of the most ethically challenging new technologies in recent years,” Commissioner Kind says.

“We acknowledge the potential for facial recognition technology to help protect against serious issues, such as crime and violent behaviour. However, any possible benefits need to be weighed against the impact on privacy rights, as well as our collective values as a society … Just because a technology may be helpful or convenient, does not mean its use is justifiable.” 

Just because a technology may be helpful or convenient, does not mean its use is justifiable

OAIC Commissioner Carly Kind

Kind went on to say that Bunnings collected the sensitive facial recognition information without consent and failed to take reasonable steps to notify individuals that their personal information was being collected.

“Individuals who entered the relevant Bunnings stores at the time would not have been aware that facial recognition technology was in use and especially that their sensitive information was being collected, even if briefly,” says Kind.

The Commissioner says Bunnings acted cooperatively throughout the investigation and that the OAIC has made various orders, including that the company must not repeat or continue the acts that led to the privacy breach in the first place.

“This decision should serve as a reminder to all organisations to proactively consider how the use of technology might impact privacy and to make sure privacy obligations are met,” she says.

Landmark ruling highlights outdated privacy laws

Vlog senior campaigns and policy advisor Rafi Alam says he hopes the landmark decision from the OAIC will provide much-needed guidance on the use of facial recognition technology.

“We know the Australian community has been shocked and angered by the use of facial recognition technology in a number of settings, including sporting and concert venues, pubs and clubs, and big retailers like Bunnings. We hope that today’s decision from the Information Commissioner will put businesses on notice when it comes to how they’re using facial recognition,” says Alam.

“While the decision from the OAIC is a strong step in the right direction, there is still more to be done. Australia’s current privacy laws are confusing, outdated and difficult to enforce. Vlog first raised the alarm on Bunnings’ use of facial recognition technology over two years ago, and in the time it took to reach today’s determination the technology has only grown in use,” says Alam.

Vlog first raised the alarm over two years ago, and in the time it took to reach today’s determination the technology has only grown in use

Vlog senior campaigns and policy advisor Rafi Alam

Alam adds that Vlog is continuing to call for a specific, fit-for-purpose law to protect consumers from the harms that can occur without proper and clear regulation of facial recognition technology.

Bunnings has the right to appeal and seek a review of the determination, and told Vlog on Tuesday it would formally do so before the Administrative Review Tribunal.

“FRT (facial recognition technology) was an important tool for helping to keep our team members and customers safe from repeat offenders. Safety of our team, customers and visitors is not an issue justified by numbers. We believe that in the context of the privacy laws, if we protect even one person from injury or trauma in our stores the use of FRT has been justifiable,” says Bunnings managing director Mike Schneider.

Real estate agency doxxes tenant after negative review /data-protection-and-privacy/data-collection-and-use/how-your-data-is-used/articles/real-estate-agency-doxxes-tenant Sun, 17 Nov 2024 13:00:00 +0000 /uncategorized/post/real-estate-agency-doxxes-tenant/ In a case that ended up with the OAIC, a real estate agency published a renter's personal information in an act of retaliation.

The post Real estate agency doxxes tenant after negative review appeared first on Vlog.


Need to know

  • It remains unclear what rental application platforms such as Ignite, 2Apply, Snug, tApp and others are doing with the renter data they harvest
  • The Australian Privacy Principles hold that businesses that collect data for a specific purpose can't use it for another purpose without consent
  • In a case recently ruled on by the Office of the Australian Information Commissioner (OAIC), a renter's data was improperly used against him by the rental agency that collected it

Being forced to hand over unreasonable amounts of personal information when applying for a place to live is something that happens to a lot of people in Australia, and it’s an issue that Vlog has reported on extensively.

The potential consumer harms are such that we gave the entire rental application platform industry (or RentTech) a Shonky in 2023.

It remains unclear what rental application platforms such as Ignite, 2Apply, Snug, tApp and others are actually doing with the renter data they harvest, but it is clear that they have to abide by the Australian Privacy Principles (APP) when handling it.

In a recent case that ended up with the Office of the Australian Information Commissioner (OAIC), this didn’t happen.

The Australian Privacy Principles are a distillation of the Privacy Act, which OAIC oversees. They stipulate that a business that holds someone’s personal information for a particular purpose can’t use or disclose it for a different purpose unless they obtain the individual’s consent. There’s also an exemption if the business has valid grounds to assume the person would reasonably expect it to be used for the other purpose.

In the case that was recently decided by OAIC, the matter of consent proved pivotal.

Agency retaliates by doxxing 

A disgruntled renter had left a negative Google review about the real estate agency he was renting through under a name similar to his own.

The review included statements such as: “Highly unprofessional. I would question how many of your 5-star reviews are fake. Your response to an emergency in a rental property is more than three full days. Shame on you.”

Previously, he had lodged a complaint with NSW Fair Trading about the agency.

When the agency retaliated by publishing the renter’s full name, occupation and financial circumstances in its response to the review – a move known as doxxing – it came as a shock.

The agency retaliated by publishing the renter’s full name, occupation and financial circumstances in its response to the review

The agency also fired back with their own comments, saying their reviews were genuine and adding, “I am not sure if we have upset you by chasing your unpaid rent so many times, we will not be apologising for that. You work as an accountant according to your LinkedIn profile, and as an accountant you should know how to pay rent on time.” 

When the agency refused to take down the renter’s personal information or its comments, the renter threatened to lodge a privacy complaint with OAIC. In response, the real estate agency escalated the conflict, threatening to publish more of the renter’s personal information, including health information.

A business that holds someone’s personal information for a particular purpose can’t use or disclose it for a different purpose unless they obtain the individual’s consent.

Agency found to be in the wrong

After reviewing the case, OAIC found that the real estate agency had contravened the APP, not only by publishing the renter’s personal information without consent but also by lacking a privacy policy that adequately explained how it handled personal information at the time it did so.

The nature of the personal information disclosed, and the reasons for the disclosure, were very relevant to the findings in this matter

OAIC spokesperson

“Whether the disclosure of personal information is lawful will depend on the facts and circumstances in each matter but the nature of the personal information disclosed, and the reasons for the disclosure, were very relevant to the findings in this matter,” an OAIC spokesperson tells Vlog, adding that businesses “should carefully consider why they intend to disclose personal information, and whether that purpose is consistent with the purpose for which they collected the information in the first place”.

Regulator concerned about RentTech

The OAIC spokesperson also told us that it’s keeping an eye on the RentTech industry, “where there is typically a power imbalance in property rental, favouring landlords and real estate agencies. Tenants may have little choice other than to use RentTech, providing significant personal and sensitive information in the process”.

The regulator says it has concerns about the amount of personal data collected from renters and the lack of transparency around sharing it with third party providers for secondary purposes. “If personal information is required, it should be deleted once it is no longer needed,” the spokesperson says.

We need comprehensive reform of our privacy laws to protect consumers from malicious and exploitative uses of our data

Vlog senior campaigns and policy adviser Rafi Alam

OAIC’s view is firmly in line with our position on the issue.

“Tenants aren’t just paying through the nose in rent anymore – they’re paying in valuable data too,” says Vlog senior campaigns and policy adviser Rafi Alam. “Our personal information has become big business for RentTech platforms, and it’s left us vulnerable to data misuse and data breaches.”

“We need comprehensive reform of our privacy laws to protect consumers from malicious and exploitative uses of our data. While the government has fortunately taken a few small steps in this direction, we’re hoping that big ticket items like an obligation on businesses to use our data fairly will be introduced as soon as possible.”

Doxxing to be added to privacy law

In the end, the regulatory intervention in this case was more of a reminder of a business’s obligation under privacy legislation than a punitive action.

Though the renter had asked for $15,000 from the real estate agency to compensate for what he described as severe emotional distress caused by the doxxing incident, the privacy regulator didn’t go that far.

Instead, the outcome was the removal of the renter’s personal information from the Google reviews platform, a letter of apology to the renter, and a commitment from the real estate agency to train staff on how to properly handle such information going forward.

Doxxing isn’t a concept that’s specifically outlined in the current version of the Privacy Act, but the Privacy Bill currently before parliament proposes to make it an offence.

The post Real estate agency doxxes tenant after negative review appeared first on Vlog.

Drive one of these car brands? This is how much of your data they’re tracking

Published 8 October 2024. We compare the privacy policies of Australia’s most popular car brands to see how they track and monitor drivers.
Need to know

  • Vlog wrote to and analysed the privacy policies of Australia's ten most popular car brands to see how they monitor and track their drivers
  • Seven of the ten car brands can collect and share some level of driving data with third-party companies
  • Experts say reforms to the Privacy Act are needed to better protect drivers from over-reach by car companies

Like many aspects of modern life, driving a vehicle isn’t what it used to be.

And while few Australians would want to go back to balancing a book of maps on their lap at the traffic lights, the digital age increasingly comes with a catch. These days, the extent to which cars collect and use data gathered on their drivers would come as a surprise to many.

In February, Vlog wrote about a Queensland man’s battle with Toyota after a dealership refused to give him a full refund for a vehicle he never picked up. He had serious concerns about data privacy and the tracking features he wasn’t told about at the point of purchase.

These days, the extent to which cars collect and use data gathered on their drivers would come as a surprise to many

After we published that story we received an avalanche of correspondence from Vlog readers wanting to know about the policies of various car brands when it comes to collecting, using and sharing driver data and biometric information.

We wrote to the makers of the ten most popular car brands in Australia and asked detailed questions about the data they collect, what they do with it and whether they allow consumers to opt-in or -out of their connected features.

Collecting your data 

Seven of the most popular brands collect some level of driving data through a connected services feature and send that data back to the company.

The three brands that don’t currently have connected services features enabled on vehicles sold into the Australian market are Mitsubishi, Subaru and Isuzu Ute.

Australia’s biggest car brand, Toyota, says it collects vehicle location data and what it calls ‘”Drive Pulse” data, which scores a driver’s acceleration, braking and cornering behaviour during each trip. This data is then shared with Toyota, “related companies”, and third-party service providers engaged by Toyota.
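Trip scores like “Drive Pulse” are typically derived by thresholding accelerometer readings and penalising “harsh” events. Below is a minimal sketch of how such scoring generally works — the thresholds, weights and formula are illustrative assumptions, not Toyota’s actual method:

```python
# Illustrative trip-scoring sketch: counts "harsh" events from per-second
# longitudinal/lateral acceleration readings (m/s^2) and converts them to a
# 0-100 score. All thresholds and weights are assumptions for illustration.

HARSH_ACCEL = 3.0   # m/s^2: hard acceleration
HARSH_BRAKE = -3.5  # m/s^2: hard braking
HARSH_TURN = 4.0    # m/s^2: hard cornering (absolute lateral acceleration)

def trip_score(longitudinal, lateral):
    """Score one trip from parallel lists of 1 Hz accelerometer samples."""
    events = 0
    for a_long, a_lat in zip(longitudinal, lateral):
        if a_long > HARSH_ACCEL or a_long < HARSH_BRAKE:
            events += 1
        if abs(a_lat) > HARSH_TURN:
            events += 1
    # Penalise harsh events per minute of driving.
    minutes = max(len(longitudinal) / 60, 1)
    return max(0.0, 100.0 - 10.0 * events / minutes)

smooth = trip_score([0.5] * 120, [0.2] * 120)  # no harsh events -> 100.0
harsh = trip_score([4.0] * 120, [0.2] * 120)   # harsh acceleration every second
print(smooth, harsh)
```

Even a toy score like this reveals how behaviour-rich per-trip data is: the inputs are a second-by-second record of how, and implicitly where, a person drives.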

Ford also collects and shares driver data with third parties, such as related companies and contractors, though it says it doesn’t “sell data to brokers”.

MG says it collects and shares data with a range of “service providers”, but that it doesn’t share with third parties “other than to provide functionality”. We considered that clause vague, and MG refused to respond to our repeated requests for clarification.

Mazda says it collects “voice consumption” data and shares it with service providers and undisclosed third parties, but did not respond to our requests for clarification as to what this meant. It also shares data with third parties for advertising purposes.

Text-only accessible version
How does your car stack up on data privacy?

Green

  • Mitsubishi: Does not collect or share driver data in Australia
  • Subaru: Does not collect or share driver data in Australia
  • Isuzu Ute: Does not collect or share driver data in Australia

Yellow

  • Toyota: Collects and shares driver data but not biometric data
  • Ford: Collects and shares driver data but not biometric data
  • MG: Collects driver data, unclear if shared*
  • Mazda: Collects and shares driver data as well as “voice consumption” data**

Red

  • Kia: Collects and shares voice recognition and other data with third parties
  • Hyundai: Collects and shares voice recognition and other data with third parties
  • Tesla: Collects voice and video and shares some data with third parties

*MG did not respond to our questions and their privacy policy is unclear about how extensively the driver data they collect is shared.
**Mazda did not respond to our questions, and did not provide clarification about what exactly “voice consumption” data means.

Voice and biometric data

Even more concerning than the tracking and sharing of your driving data are the number of brands that collect your voice recognition data and share that information with third parties.

Voice recognition, like facial recognition, is considered biometric information as it’s uniquely identifiable to individual people.

This means it is considered to be “sensitive data” under privacy law, and it’s meant to have an enhanced level of consumer protection and consent before it can be gathered and shared.

Kia says it collects data from your use of voice recognition technology and that the company “shares data on an aggregate and on identifying basis (sic) with Cerence, our third-party provider of automotive voice and AI innovation products”.

Cerence, a US-based company, says it is a “global industry leader” in AI-powered interactions across transportation.

Hyundai, which has the same parent company as Kia, also shares voice recognition data with Cerence.

What these car companies are doing is totally unacceptable. It should be illegal

Dr Vanessa Teague, Australian National University

Tesla gathers voice command data as well as “short video clips and images” captured from the camera onboard the vehicle. The company also shares some data with third parties and Tesla’s privacy policy assures drivers that the data is subject to “privacy preserving techniques” that are “not linked to your identity or account”, but doesn’t explain what those are.

“De-identified” data

Dr Vanessa Teague from the Australian National University’s College of Engineering, Computing and Cybernetics says these companies’ assurances that biometric information can somehow be shared in a de-identified manner are “complete baloney”.

“The idea that you can de-identify an image, or a voice is de-identified, it’s nonsense,” she says.

“What these car companies are doing is totally unacceptable. It should be illegal. These practices are good evidence that we need the Privacy Act updated or the Privacy Act enforced, because none of this should be acceptable in our country,” Teague adds.
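Teague’s point can be illustrated concretely: a voice sample with the name stripped off is still a biometric fingerprint, because it can be matched against samples from known people. Here is a toy sketch of that matching step — the vectors stand in for real speaker embeddings and are entirely made up:

```python
import math

# Toy re-identification sketch: a "de-identified" voice embedding is matched
# against a gallery of known speakers by cosine similarity. Real speaker
# embeddings behave the same way, which is why removing a name from a
# biometric sample does not anonymise it. All vectors are invented.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

known_speakers = {
    "driver_a": [0.9, 0.1, 0.3],
    "driver_b": [0.1, 0.8, 0.4],
}

anonymous_sample = [0.88, 0.12, 0.31]  # the "de-identified" embedding

best = max(known_speakers,
           key=lambda name: cosine(anonymous_sample, known_speakers[name]))
print(best)  # the sample matches driver_a almost perfectly
```

The “anonymous” sample is re-identified with one comparison per known speaker, which is exactly why voiceprints are treated as sensitive biometric data in the first place.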

Consumers concerned 

Given the number of companies engaging in intrusive data collecting and sharing, it’s little wonder that drivers are becoming concerned.

A nationally representative Vlog survey conducted in June 2024 of more than 1000 consumers found almost three in four respondents disagree or strongly disagree with video or audio recordings from inside the car being collected by the car company.

While support for car companies collecting safety data (such as seatbelt use) was stronger at 39%, only 30% said they supported the collection of driving data such as braking behaviour and speed. Just over one in five respondents said they neither agreed nor disagreed with the collection of driving data.

Giving the option to opt-out isn’t enough

All car companies with connected features that responded to us said they offer customers an opt-out function. But drivers are often opted in automatically when buying the car or downloading the car’s app, and would then need to read long and indecipherable privacy policies to know what they have agreed to.

While customers may be able to “deactivate” their connected features, those wanting to remove the connected features devices altogether may find they can’t. In some cases, removing the connected features disables other functions of the vehicle, such as maps and weather. In Toyota’s case, customers may void part of their warranty by totally removing the data communications module.

Drivers are often opted in automatically when buying the car or downloading the car’s app

Teague says there is a lot of “deliberate deceit” when it comes to car companies and connected features and she questions how many consumers would agree to the terms and conditions of their vehicles if they understood them.

“Opt-out is not the answer; you should have to opt-in to some of these features if you want them. Many of these other features should simply be illegal,” she says.

Many drivers aren’t aware of what they’re agreeing to when they accept the terms and conditions.

Protecting the data 

Ibrahim Khalil, professor of cloud systems and security at RMIT University, says it is concerning that raw data from Australian drivers is being transferred to car companies overseas and to the AI machine-learning companies they’re partnered with.

“You can use AI systems within the car to build the learning model off the driving data, and then transfer the model,” he says. “You don’t need to transfer the raw data. If you transfer the raw data, then of course, you expose everything.”

“Europeans wouldn’t accept this, [but] here in Australia we don’t make a fuss, we don’t talk about it, we don’t complain about anything when it comes to privacy,” Khalil adds.
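Khalil’s alternative — fit the model inside the vehicle and transmit only the fitted parameters — is the core idea behind federated learning. A minimal sketch of the principle, using a one-variable linear model over invented speed/braking data:

```python
# Sketch of "transfer the model, not the raw data": the car fits a simple
# linear model (braking force as a function of speed) locally, and only the
# two fitted coefficients leave the vehicle. The raw trip log never does.
# All data values are invented for illustration.

def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

# Raw data stays in the vehicle:
speeds = [20, 40, 60, 80, 100]        # km/h
braking = [1.0, 1.9, 3.1, 4.0, 5.0]   # arbitrary braking-force units

slope, intercept = fit_line(speeds, braking)

# Only these two numbers would be transmitted to the manufacturer:
upload = {"slope": round(slope, 4), "intercept": round(intercept, 4)}
print(upload)
```

The manufacturer still learns the aggregate relationship it wants, but the second-by-second record of an individual’s driving never leaves the car — which is Khalil’s point about not exposing everything.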

Reforming the Privacy Act 

Vlog senior campaigns and policy adviser Rafi Alam says privacy laws are woefully out of date and not fit for purpose in a market where cars are fitted with biometric scanners and driving data is mass-collected.

“At the moment, businesses are able to write their own rules through their privacy policies. As long as a customer ‘consents’ in a way the seller decides is sufficient, the business can mostly do what it pleases with our data,” he says.

Alam says the government’s most recent amendments to the Privacy Act, introduced to parliament in September, don’t go far enough to protect drivers from over-reach by car companies.

“Change needs to come from the top. At a minimum, the federal government must implement a fair-and-reasonable-use test to legally require businesses to only collect and use our data in line with customers’ expectations,” he says.

“We are urging the government to ensure this obligation is included in the second phase of amendments to the Act,” Alam adds.

UPDATE 16/10/24: 

Following publication of this article, MG and Tesla, who both initially declined to comment, provided the following statements to Vlog. A spokesperson for MG says, “No data is shared with insurance companies or advertising agencies. The only reason that customer data is shared with third parties is where it is being used to deliver services or functionality to the owner or user of the vehicle.”

Tesla clarified that its vehicles don’t collect audio voice recordings, only the processed transcription of the voice command, known as voice command data. “At Tesla, we’re committed to protecting our customers anytime they get behind the wheel of a Tesla vehicle. That commitment extends to customer data privacy. Our privacy protections aim to go beyond industry standards, ensuring personal data is never sold, tracked or shared without permission or knowledge,” says Thom Drew, country director of Tesla Australia and New Zealand.
