How your data is used – investigations, tips, guides and advice - Vlog
/data-protection-and-privacy/data-collection-and-use/how-your-data-is-used
You deserve better, safer and fairer products and services. We’re the people working to make that happen.

What is Grok and should Australia be blocking it?
/electronics-and-technology/internet/using-online-services/articles/what-is-grok-and-should-australia-be-blocking-it
Fri, 16 Jan 2026 02:55:05 +0000
Elon Musk’s xAI tool has raised alarms around the world for facilitating the creation of malicious content, including deepfake pornography.


Need to know

  • Grok, the artificial intelligence tool developed by Elon Musk’s company xAI, was recently blocked in Indonesia and Malaysia due to its ability to create malicious content
  • Britain’s media regulator, Ofcom, says sexualised images of children created by Grok users may amount to child sexual abuse material
  • Musk’s company X recently said it would prevent Grok users from editing images of real people to put them in revealing clothing in jurisdictions where this is illegal

An artificial intelligence (AI) tool developed by Elon Musk’s company xAI was recently banned in Indonesia and Malaysia and has raised serious concerns globally. It’s called Grok, and it lets users create highly sexualised images of people that look disturbingly real.

As Indonesia’s Communication and Digital Affairs Minister Meutya Hafid recently put it, “The government sees non-consensual sexual deepfakes as a serious violation of human rights, dignity and the safety of citizens in the digital space.”

Britain’s media regulator, Ofcom, released a statement saying, “There have been deeply concerning reports of the Grok AI chatbot account on X being used to create and share undressed images of people – which may amount to intimate image abuse or pornography – and sexualised images of children that may amount to child sexual abuse material.”

The government sees non-consensual sexual deepfakes as a serious violation of human rights, dignity and the safety of citizens in the digital space

Indonesia’s Communication and Digital Affairs Minister Meutya Hafid

Grok, which is included at no extra cost for X users who pay for a subscription, was launched in 2023, but in 2024 an image generator feature was added that included something called ‘spicy mode’, which can generate pornographic content.

Australia’s eSafety Commissioner says the agency “has seen a recent increase from almost none to several reports over the past couple of weeks relating to the use of Grok to generate sexualised or exploitative imagery”, adding that it “will use its powers, including removal notices, where appropriate and where material meets the relevant thresholds defined in the Online Safety Act”.

Malicious content made easier

Abhinav Dhall, an associate professor at Monash University’s Department of Data Science and AI, says Grok has put powerful new technology into the hands of wrongdoers.

“Grok has made it easier to produce malicious content because it is directly integrated into X, so anyone can quickly tag it and request image edits. As it is so well integrated into the platform, the edited outputs also appear directly within the same public thread, which increases the visibility and reach of manipulated images”, Dhall says, adding that in many cases “the original poster may not even have the rights to the image they are uploading on the platform, which can make it easier for the edits to become potentially defamatory or unsafe”.

Dhall says social media users should take steps to prevent their images from falling into the wrong hands.

“To reduce the risk of personal images being used to generate malicious content, users should be careful about posting clear, front-facing photos of their face, and should check and tighten privacy settings on their social media platforms,” Dhall says.

“It is also important to avoid posting children’s photos publicly. If you suspect your images have been misused, reverse image search can be applied to detect AI-generated content, and fake or harmful content should be reported to the relevant platforms as quickly as possible.”
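Dhall’s reverse image search suggestion can be approximated on a small scale with perceptual hashing. The sketch below is illustrative only, with hypothetical file names: it compares a photo you posted against a suspicious image you’ve found, and a small hash distance suggests the second image is an edited copy of the first. Dedicated reverse image search services, which crawl the web for you, do far more than this local check, and heavy AI manipulation can defeat simple hashes.

```python
# Minimal sketch: flag whether a suspect image appears to be derived from
# one of your own photos, via perceptual hashing.
# Requires the third-party packages Pillow and ImageHash
# (pip install Pillow ImageHash). File names are hypothetical.
from PIL import Image
import imagehash

def likely_derived(original_path: str, suspect_path: str, threshold: int = 12) -> bool:
    """A small Hamming distance between 64-bit perceptual hashes suggests
    the suspect image is a crop or edit of the original, not a new photo."""
    original = imagehash.phash(Image.open(original_path))
    suspect = imagehash.phash(Image.open(suspect_path))
    return (original - suspect) <= threshold  # '-' gives the Hamming distance

if __name__ == "__main__":
    if likely_derived("my_profile_photo.jpg", "image_found_online.jpg"):
        print("Likely an edited copy of your photo - consider reporting it.")
```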

X said in a previous statement that it removes illegal content from its platform, including child abuse material, and suspends the accounts of people who post it.

Musk has posted comments on the Grok backlash, saying critics of X “just want to suppress free speech”. In an X post on 15 January he said, “Grok is supposed [to] allow upper body nudity of imaginary adult humans (not real ones) consistent with what can be seen in R-rated movies on Apple TV.”

Grok has made it easier to produce malicious content because it is directly integrated into X, so anyone can quickly tag it and request image edits

Associate Professor Abhinav Dhall, Monash University

In a more recent announcement on X the company said “we have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing” in jurisdictions where this is illegal.

But it remains unclear how the company will block this functionality in particular locations, or which locations those might be.

Will mandatory codes stop the deepfakes?

On 9 March 2026, mandatory codes come into effect in Australia which impose new obligations on AI services to limit children’s access to sexually explicit content as well as to violent material and content related to self-harm and suicide. But enforcing such codes on mammoth AI companies based in the US and other countries has proven to be a tall order for Australian regulators.

Abhinav Dhall stops short of recommending that Grok be banned in Australia, saying it’s a matter of enforcing the current rules and compelling tech companies to stop harmful content.

“Australia already has laws covering image-based abuse, so the focus should be on making the penalties clear and ensuring it is easy for victims to report abuse and have content removed quickly,” Dhall says. “At the same time, social media platforms should be required to implement stronger guardrails to stop harmful edits before they spread.”

Meanwhile, amid the worldwide outcry over sexualised deepfakes, US Defense Secretary Pete Hegseth recently said in a speech at Musk’s company SpaceX in South Texas that the Pentagon will embrace Grok along with Google’s generative AI engine.

Real estate agents, chemists, car hire companies and more under new privacy scrutiny
/data-protection-and-privacy/articles/real-estate-agents-car-hire-companies-under-new-privacy-scrutiny
Thu, 08 Jan 2026 23:14:20 +0000
Australia’s privacy regulator is reviewing the privacy policies of businesses collecting your personal data during in-person interactions.


Need to know

  • In recent years, Vlog has conducted several investigations that focused on the far-reaching permissions privacy policies give the businesses that write them
  • In 2023, we reported on the privacy policies of rental platforms, and last year we analysed the privacy policies of Australia’s ten most popular car brands
  • This month, the Office of the Australian Information Commissioner begins its first full-scale privacy policy review, focusing on information demanded by businesses in person

Very few of us read the privacy policies we passively consent to when engaging with a service provider. Fewer still would understand what these privacy policies actually say.

In recent years, Vlog has conducted several investigations that focused on the far-reaching permissions these documents give the businesses we regularly interact with.

In 2023, we reported on the privacy policies of rental platforms such as realestate.com.au’s Ignite as well as Ailo, Tenant Options, Rental Rewards, Snug, 2Apply and Simple Rent.

The conclusion? These RentTech platforms collected information that went well beyond what’s needed to assess a tenant’s ability to pay the rent. The questions often seemed designed to grab as much data as possible from people who had no choice but to provide it.

In 2024, we analysed the privacy policies of Australia’s ten most popular car brands to see how the vehicles monitored and tracked their drivers. Here again we found that the harvesting of personal driver information was often excessive, and the rights the manufacturers gave themselves to share the data with third parties were both far-reaching and vague.

The ACCC has estimated that it would take the average Australian 46 hours to read all the privacy policies they encountered in a month, the average length of which is about 6876 words.


The ACCC has estimated that it would take the average Australian 46 hours to read all the privacy policies they encountered in a month
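As a back-of-envelope check of those two figures (the reading speed here is our assumption, not part of the ACCC estimate), the maths works out to roughly half an hour per policy, and close to a hundred policies encountered each month:

```python
# Rough arithmetic behind the ACCC figures quoted above.
AVG_POLICY_WORDS = 6876    # ACCC: average privacy policy length
MONTHLY_HOURS = 46         # ACCC: time to read a month's worth of policies
READING_SPEED_WPM = 240    # assumed average adult reading speed

minutes_per_policy = AVG_POLICY_WORDS / READING_SPEED_WPM
policies_per_month = (MONTHLY_HOURS * 60) / minutes_per_policy

print(f"~{minutes_per_policy:.0f} minutes per policy")       # ~29 minutes
print(f"~{policies_per_month:.0f} policies read per month")  # ~96 policies
```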

All of this makes the Office of the Australian Information Commissioner’s (OAIC) recent announcement that it will begin its first large-scale review of privacy policies in early January 2026 more timely than ever.

What’s changing in privacy law?

The Privacy Act requires privacy policies to contain certain details, such as what information is collected, why it’s needed, how it’s used, and how it can be corrected if necessary. 

An update to the Act in 2024 means businesses will also be required (as of 10 December 2026) to specify in their privacy policies whether a computer program will be using your personal information to make decisions that could go against you, such as when an application for a rental home is rejected. 

The privacy policy sweep is … focusing on information demanded by businesses in person, such as when a real estate agent asks you for personal details when you’re inspecting a rental property or a car rental company presents you with a lengthy form before handing you the keys

In addition, the 2024 update gave the OAIC the power to issue infringement notices for Privacy Act violations without going to court. And it gives individuals the right to seek legal redress and financial compensation in certain cases for invasions of privacy or misuse of their personal information.

The OAIC’s privacy policy sweep is taking a different approach from our investigations of online privacy documents. It will occur in the real world, focusing on information demanded by businesses in person, such as when a real estate agent asks you for personal details when you’re inspecting a rental property or a car rental company presents you with a lengthy form before handing you the keys. The privacy policies of such businesses must include the above-mentioned information.

Not having the right information in a privacy policy – or not having a privacy policy at all – could lead to fines from the OAIC of up to $66,000.

Which types of businesses will be targeted?

The privacy policy sweep will focus on sectors where the OAIC believes there are particular power imbalances – also known as information asymmetries – between the business in question and the customers being asked to provide the information.

When confronted with in-person requests for their personal information … consumers often don’t have access to all the information they might need to make an informed decision

Privacy Commissioner Carly Kind

“When confronted with in-person requests for their personal information from retailers, licensed venues, car hire companies or real estate agents, consumers often don’t have access to all the information they might need to make an informed decision,” says Privacy Commissioner Carly Kind.

“This makes them vulnerable to overcollection of personal information and creates risks to their security and privacy.”

The OAIC says it will review the privacy policies of around 60 businesses from the following six sectors, with a particular focus in each case.

  • Rental and property – collection of individuals’ personal information during property inspections.
  • Chemists and pharmacists – collection of personal information for the purpose of providing a paperless receipt and collection of identity information to provide medication.
  • Licensed venues – collection of identity information to enable individuals to access a venue.
  • Car rental companies – collection of identity and other personal information to enable an individual to enter into a car rental agreement.
  • Car dealerships – collection of personal information to enable an individual to conduct a vehicle test drive.
  • Pawnbrokers and second-hand dealers – collection of identity information from individuals who wish to sell or pawn goods.

Transparent communication is critical

In the OAIC’s view, a business’s explanation of how it will use personal information should be open and transparent.

“The Australian community is increasingly concerned about the lack of choice and control they have with respect to their personal information,” Kind says.

“The first building block of better privacy practices is a clear privacy policy that transparently communicates how an individual can expect their information to be collected, used, disclosed and destroyed.

“In conducting a compliance sweep, the OAIC intends to ensure that entities are meeting their obligations to be transparent with consumers and customers about how they’re using the personal information they collect in-person.

“We hope this will also catalyse some reflection about how robust entities’ privacy practices are, and whether more can be done to improve compliance with the Privacy Act writ large.”


AI is increasingly invading our medical privacy as regulation struggles to keep up
/data-protection-and-privacy/data-collection-and-use/how-your-data-is-used/articles/ai-used-by-medical-professionals
Mon, 20 Oct 2025 13:00:00 +0000
Patient scans shared without consent, AI scribes recording doctor visits – medical professionals’ AI use is raising big concerns.


Need to know

  • Around 22% of GPs are now using AI scribes to record patient consultations, and some scribes are also proposing diagnoses and treatments
  • Profit-driven medical businesses have used large volumes of patient data to train AI tools without patients' knowledge or consent
  • Though consent is a cornerstone of privacy law, the Privacy Commissioner says consent isn't always required when it comes to AI and patient data

It seems the artificial intelligence (AI) industry is grabbing our personal information as fast as it can – before regulations end up enforcing what the community actually wants.

Among other indicators, the final report of the Australian Competition and Consumer Commission’s (ACCC) Digital Platform Services Inquiry, released in March this year, confirmed that the vast majority of us don’t want our personal data used to train AI tools without our consent.

In a consumer survey that was part of the report, 83% of Australians said our approval should be mandatory. This aligns with research from the agency tasked with protecting our personal information, the Office of the Australian Information Commissioner (OAIC), which found that 84% of Australians want to have control over their personal information, including the right to demand that it be deleted.

In mid-October, Roy Morgan published research showing that 65% of Australians think AI “creates more problems than it solves”, though 12% hailed its ability to advance medical science.

[The Privacy Act’s requirements] are constraining innovation without providing meaningful protection to individuals

Productivity Commission

There is considerable tension between those seeking to protect our data and those that see it as the key to improving products and services – and boosting profits – especially the big tech companies.

Agencies of the federal government are also on board with this notion. According to a recent report by the Productivity Commission (PC), our personal data shouldn’t be locked away behind a wall of inflexible regulations.

The Privacy Act’s requirements “are constraining innovation without providing meaningful protection to individuals”, the report says. It concludes that making it easier to harness data could add up to $10 billion a year to the country’s economy.

To that end, the PC calls for exempting some businesses from the requirement to obtain informed consent before accessing a person’s data, one of the key pillars of the Privacy Act. In the PC’s view, some businesses shouldn’t have to do this as long as they commit to acting in the person’s best interests when it comes to handling their privacy.

The report goes on to argue that giving consent has become a meaningless exercise, since no one has the time to read whatever they’re consenting to.

While businesses are required to have a privacy policy under the Privacy Act, these sprawling documents are all but impossible to understand. The ACCC has estimated that it would take the average Australian 46 hours to read all the privacy policies they encountered in a month, the average length of which is about 6876 words.

These consent protocols are clearly not working as intended, but Privacy Commissioner Carly Kind has taken issue with the idea of watering down privacy regulations, making the case that protecting privacy and boosting productivity are not mutually exclusive and that informed consent is a critical consumer right.

Patient scans used to train AI without consent

But there have been cases in which the OAIC – where Kind is one of three commissioners – has allowed businesses to surreptitiously harvest our data to their own ends.

In September last year, for instance, Crikey reported on the case of Australia’s biggest radiology chain, I-Med Radiology Network, entering a joint venture with the start-up AI platform Harrison.ai in 2019.

The plan was to use I-Med’s patient scans to train an AI tool called Annalise.ai, which has now reportedly been approved for use in over 40 countries and heralded as a gamechanger.

The quantity and quality of the I-Med data – around 30 million patient scans and the resulting diagnostic reports from Australia and other countries – was key to the tool’s success.

It is clearly a money-maker for the business. In April last year, the Australian Financial Review reported that the business was on track to record $1.35 billion in revenue for the financial year.

(Harrison.ai’s flagship product was an AI model that was trained on 800,000 chest x-rays sourced from one of I-Med’s 270 clinics in Australia, Crikey reported. Harrison.ai owns Annalise.ai, which is now called Harrison.ai Radiology.)

The Crikey article says there is no evidence that informed consent was obtained from the patients. Based on the above-mentioned surveys, this would likely have run counter to many of their wishes. Neither I-Med nor Harrison.ai have disputed this.

Consent is only required in limited circumstances by the Australian Privacy Principles, and will not always be required when entities use personal information for AI training

Office of the Australian Information Commissioner spokesperson

But it appears there is some leeway on the informed consent requirement. In July this year the OAIC ruled that I-Med had not contravened the Privacy Act since the data had been de-identified and could no longer be defined as personal information.

The ruling suggests that, where privacy issues are not at play, it’s not the OAIC’s role to prevent businesses from grabbing our data to train AI, with or without our approval.

An OAIC spokesperson tells Vlog that “the issue of consent is always a highly relevant factor”, but the protections of the Australian Privacy Principles (APPs) no longer apply when patient data is de-identified.

“However, strong de-identification is challenging, and whether something is de-identified is context dependent. Data that is de-identified when subject to strict controls – as it was in the I-MED case – may not be de-identified in other contexts, such as if it is released publicly,” the spokesperson says.
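The spokesperson’s point about context can be made concrete with a toy example. In the sketch below (all records are invented), a dataset stripped of names is re-identified by linking the quasi-identifiers that survive – exactly the kind of risk that strict controls are meant to contain:

```python
# Toy illustration: records with names removed can still be re-identified
# when quasi-identifiers (postcode, birth year, sex) survive and can be
# joined against another data source. All data here is invented.
deidentified_scans = [
    {"postcode": "3000", "birth_year": 1984, "sex": "F", "finding": "nodule"},
]
public_records = [
    {"name": "Jane Citizen", "postcode": "3000", "birth_year": 1984, "sex": "F"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "sex")

for scan in deidentified_scans:
    for person in public_records:
        if all(scan[k] == person[k] for k in QUASI_IDENTIFIERS):
            print(f"Possible re-identification: {person['name']} -> {scan['finding']}")
```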

The right balance between privacy regulation and the development of new AI tools remains a work in progress.

While industry guidance from the OAIC stipulates that people should be informed when their data is being used to train AI, “consent is only required in limited circumstances by the APPs, and will not always be required when entities use personal information for AI training”, the spokesperson added.

In other words, we have no copyright on the content of our bodies. Medical clinics are free to use our data for commercial purposes without telling us, and businesses can profit handsomely.

In a joint business venture, Australia’s largest radiology chain I-Med Radiology made millions of patient scans available to the AI firm Harrison.ai without the patients’ knowledge or consent.

The rise of AI scribes in GP consultations

All of this is pertinent to a related development – the growing use of AI tools by GPs and specialists to record patient visits, known as AI scribes. Currently around 22% of GPs are using AI scribes according to polling by the Royal Australian College of General Practitioners (RACGP). Popular models include Lyrebird, Heidi Health, Amplify+, i-scribe and Medilit, but there are many others.

The RACGP has tentatively embraced some uses of AI scribes, especially where they can reduce the administrative burdens on doctors and free them up to concentrate more fully on preventative care.

But the college has also expressed concern about AI tools being developed by profit-seeking tech firms without the oversight of medical clinicians. “Value to technology company shareholders might be prioritised over patient outcomes,” the RACGP wrote in a position statement on the issue.

Of particular concern is the reliability – and the legality – of AI in making medical recommendations. The Therapeutic Goods Administration (TGA) has recently gone on record regarding the issue, saying medical professionals report that AI scribes “frequently propose diagnosis or treatment for patients beyond the stated diagnosis or treatment a clinician had identified during consultations”.

Such a functionality would mean AI scribes are medical devices that require pre-market approval, the TGA says. By dodging regulation, they are potentially being supplied to the medical industry in breach of the Therapeutic Goods Act.

AI scribes can produce errors and inconsistencies and cannot replace the work GPs typically undertake to prepare clinical documentation

RACGP spokesperson

The RACGP hasn’t taken a position on whether AI scribes should be regulated by the TGA, but it does instruct GPs to obtain consent from patients before using them, to double check that the information they record is accurate, and to have a backup plan in place in case there’s a glitch with the AI scribe.

“AI scribes can produce errors and inconsistencies and cannot replace the work GPs typically undertake to prepare clinical documentation,” a RACGP spokesperson tells Vlog.

“GPs and other doctors must carefully check the output of an AI scribe to ensure its accuracy. Where an AI scribe performs its expected function – summarising information for independent GP review and decision making – it does not have a therapeutic use. Diagnosis, however, is outside the scope of an AI scribe.”

AI scribes are currently used by around 22% of GPs in Australia.

Market use shouldn’t pre-date regulation

One very concerned citizen is AI expert Dr Kobi Leins, who was recently told to take her business elsewhere after asking that AI not be used during a specialist visit for her child. Leins cancelled the appointment, not least because she was familiar with the AI model in use and wasn’t impressed by its privacy and security features. She didn’t want it capturing her child’s data.

“There is no need for many of these tools, and fundamentally, we need to ask why they are being pushed so hard and where money would be better spent in a healthcare system where doctors have time to listen to patients,” Leins says.

“Individuals do not have the skills nor the capacity to review these tools. Regulatory bodies need to review them in a way that ensures privacy, manages risk and trains staff about their limitations and where they are safe to use. It’s about individual privacy, but also about group privacy. There are potentially grave harms based on racial and gender and other biases in the data these tools rely on.”

Leins points to a recent study funded by the UK’s National Institute for Health and Care Research which found that when social workers used Google’s popular AI model ‘Gemma’, it downplayed women’s physical and mental health issues as compared to men’s in its case note summaries. In these cases, it’s the AI companies in the driver’s seat rather than healthcare workers, she argues.

Regulatory bodies need to review [AI scribes] in a way that ensures privacy, manages risk and trains staff about their limitations and where they are safe to use

AI expert Dr Kobi Leins

Leins is not alone. One concerned parent told us they didn’t trust the data handling practices of AI companies.

How the data is managed “is dependent on your GP and the policy of whatever scribe they’re using, which you’re unlikely to know when they ask you to sign over consent,” the parent says. “And like with most companies, you have no control over what happens when they get hacked and all your personal health information ends up on the dark web. I generally consent for my data to be scribed, but not my kid’s.”

The privacy policy of one of the most popular AI scribes, Heidi Health, is not reassuring. It says the business may share the personal information it captures with employees, third party suppliers, related companies, anyone it transfers the business to, ‘professional advisers, dealers and agents’, government agencies, law enforcement and more. Your data may also be transferred overseas.

Heidi Health tells Vlog that the company doesn’t share patient-identifiable or health information with external parties except where required by law, that its data handling practices comply with the Privacy Act, and that it doesn’t use patient data to train AI.

“We understand that clinicians and patients will only embrace new technology when they have complete confidence that their data is secure and used responsibly,” says Heidi Health head of legal and regulatory affairs Yass Omar, adding that the company is the only AI scribe in Australia certified to ISO 27001, “a globally recognised standard that reflects the strength of our information security management systems”.

But concerns remain about AI scribes in general. Another parent recently encountered one while taking her daughter to a specialist. She was told the data would be deleted after it had been reviewed, “but I only had her word to go on. What are the privacy implications here? And what mistakes can and does it make? Does the doctor look at the summary after each appointment at the end of the day while it is fresh in their mind, to make sure that the scribe accurately captured the info?”

Notices about the use of AI scribes by medical professionals are becoming a more common sight in healthcare settings.

Just like asking Dr Google

Professor Enrico Coiera, who is the director of both the Centre for Health Informatics, Australian Institute for Health Innovation at Macquarie University and the Australian Alliance for AI in Healthcare, tells Vlog that one of his biggest concerns is that product development is far outpacing regulatory oversight.

“Generative AI products in particular are being updated constantly. This makes it very hard for the traditional safety guardrails we rely on, like regulation, to make sure these new technologies are safe.

“Much of this kind of AI is never marketed as ‘medical grade’, but as a general-purpose tool. So it is never assessed for its safe and effective use in healthcare.”

Much of this kind of AI is never marketed as ‘medical grade’, but as a general-purpose tool

Australian Alliance for AI in Healthcare director, Professor Enrico Coiera

Coiera says this is much like asking your search engine a health question. If an AI scribe is suggesting diagnosis or treatment, it’s a medical device that needs to be regulated by the TGA, he says.

Referring to the I-Med case, Coiera says, “patients should be asked to consent to the use of their data for building AIs, and especially so if there is a risk their information is identifiable”.

He recommends that patients read any consent forms carefully before signing.

“If they are uncomfortable with their data being used for AI development, they should discuss that with their care provider.”

Coiera is a believer in the capacity of AI to advance medical science, as long as people’s privacy is protected.

“As long as I am comfortable that my data is stored securely and is de-identified before use, I would agree to its use for non-commercial research purposes.”

Should you trust an AI chatbot with your mental health?
/data-protection-and-privacy/data-collection-and-use/how-your-data-is-used/articles/ai-therapy-chatbots
Wed, 26 Feb 2025 13:00:00 +0000
Millions are using Headspace, Wysa, Youper and other popular therapy chatbots, but can AI replace a professional counsellor?


Need to know

  • AI chatbots for mental health and therapy are growing in popularity
  • Experts are raising concerns about both the efficacy of the tools and the safety of sensitive user data
  • Academics say apps making misleading claims could breach consumer law

This article mentions suicide. If you or anyone you know needs support, contact Lifeline on 13 11 14 or at lifeline.org.au, or Beyond Blue on 1300 224 636 or at beyondblue.org.au.

Early in 2024 Sarah* started using the generative artificial intelligence (AI) tool ChatGPT for small administrative tasks.

“I would just experiment with it, for the start of a business correspondence or instead of using Google for something I would use ChatGPT,” she says.

After a while she began using it as a tool to supplement her mental health therapy, using the AI like a “sounding board” between her psychologist sessions.

“I would tell the AI my attachment styles and ask it how best to respond to certain situations. I would ask it to keep an eye out for any red flags in the early stages of dating,” says Sarah.

You want to feel like you have been heard and articulating that to someone, or something else, can be really helpful, kind of like a diary

'Sarah', AI therapy chatbot user

“You want to feel like you have been heard and articulating that to someone, or something else, can be really helpful, kind of like a diary.”

Sarah felt like the AI chatbot helped supplement her therapy sessions rather than replace them, and says her therapist wasn’t discouraging about her use of it.

However, now Sarah is no longer seeing her therapist and is still using ChatGPT, while also experimenting with the new popular AI tool DeepSeek.

“I do have concerns. There is [the issue of] privacy, and then also I think ‘am I living more of a bubble life in an echo chamber after COVID, is this just a further way of isolating myself?'”

The rise of therapy chatbots

While some people, like Sarah, are using general AI chatbots like ChatGPT, DeepSeek, MetaAI and Gemini for mental health-related assistance, there are also a rising number of AI products designed specifically to help with mental health.

The US for-profit sleep and meditation mental health app Headspace (not to be confused with the Australian youth not-for-profit organisation of the same name) is in the top ten mental health websites visited in Australia. In October last year, it launched Ebb, “an empathetic AI companion integrated into the app to help people navigate life’s ups and downs”.

The Headspace app has reportedly been downloaded more than 80 million times across various app stores.

Other major players in the AI therapy chatbot space include Wysa and Youper, both of which have over a million downloads in the Google Play Store. Youper says it has helped over 2.5 million users.

Not a replacement for therapy

While mental health therapy apps don’t specifically claim to replace in-person psychologists, they do make varying claims about their ability to treat mental health conditions.

Headspace’s Ebb

Headspace’s page promoting Ebb says it was developed by clinical psychologists using science-backed methods.

“Ebb is an AI-powered tool that’s designed to help you better understand yourself. While therapy and coaching provide deeper emotional support, Ebb can help you maintain mental wellness by encouraging regular reflection and mindfulness,” the website says.

Further down, the website says, “Ebb is not a substitute for medical or mental health treatment. If you need support for a mental health condition, please talk with a licensed provider”.

Headspace tells Vlog that Ebb is not a “therapy chatbot” but rather “a sub-clinical support tool for our members to process thoughts and emotions or reflect on gratitude”.

“We acknowledge that Ebb, like any AI tool, has limitations. It’s not intended to replace professional mental health care but to complement it by helping users manage stress, practice meditation, and engage in self-reflection,” a spokesperson says.

The spokesperson added that the chatbot has ways of detecting high risk situations such as suicidal thoughts or self-harm ideation, and will suggest that users contact emergency services.

Youper

The Youper website makes the point that the supply of mental health clinicians is not keeping up with demand for mental health services.

“The groundwork of Youper is evidence-based interventions – treatments that have been studied extensively and proven successful. Youper has been proven clinically effective at reducing symptoms of anxiety and depression by researchers at Stanford University,” the site claims, citing a figure of 83% of users experiencing “better moods”.

Youper did not respond to our questions about the scientific underpinnings of these claims.

Wysa

Wysa says its product is designed to complement traditional therapy and undergoes continuous compliance and safety testing to make sure users don’t receive harmful or unverified advice.

“While ChatGPT and Gemini are powerful general AI models, Wysa is purpose-built for mental health with strict clinical safety guardrails,” the chief of clinical services and operations at Wysa, Smriti Joshi, tells Vlog.

The unknown potential harms

Professor Jeannie Paterson, director for the Centre of AI and Digital Ethics at the University of Melbourne, says there is a lot we don’t know about how AI therapy chatbots function.

Paterson says there is a big difference between using the technology in collaboration with a professional and going to an app store to buy something we “know very little about”.

“They could be causing you harm, or you’re paying for something that does you very little good, because it’s just not tested.”

They could be causing you harm, or you’re paying for something that does you very little good, because it’s just not tested

Professor Jeannie Paterson, Centre of AI and Digital Ethics

Previous media reporting around the world has highlighted some of the potentially harmful algorithms used in therapy chatbots when things go wrong.

In 2018, the BBC reported that therapy apps Wysa and Woebot, which were being promoted to children in the UK, failed to suggest to a journalist posing as a child who was being sexually abused that they contact emergency services and get help. Both apps have had significant updates since then.

According to National Public Radio in the US, in an apparent cost-saving measure, the US-based National Eating Disorder Association fired their phone-line staff in 2023 and began promoting an AI chatbot instead, which went on to suggest dieting advice to people with eating disorders.

Experts say there’s a big difference between using a therapy chatbot in collaboration with a professional, and buying something from an app store we know little about.

Meeting a demand

Piers Gooding, an associate professor at La Trobe University’s law school, has researched mental health chatbots extensively.

He says he expects usage to continue to grow, simply because so much money is being poured into chatbot development worldwide. Australia’s healthcare system is more effective than those of countries like the United States, but there are still sizeable gaps between demand for mental health services and what’s available.

“In 2021, according to one market report, digital startups focusing on mental health secured more than five billion dollars in venture capital – more than double that for any other medical issue, and investment further increased in 2023,” he says.

I suspect there will be some kind of reckoning with some of the over-claiming about what they can do

Associate Professor Piers Gooding, La Trobe University law school

“I suspect there will be some kind of reckoning with some of the over-claiming about what they can do, and that might come in the form of people just voting with their feet and realising that it’s not quite what it’s cracked up to be.”

Australian Association of Psychologists Inc policy coordinator Carly Dober says she understands that in a cost-of-living crisis, many people can’t afford to seek the mental health care and support they need and chatbots appear to be a more affordable option.

“I don’t think it’s the fault of people for trying to find whatever they can to support themselves in the moment. But unfortunately, when there is that kind of vacuum, sometimes not the most helpful players will try to fill that market or that space,” she says.

Lack of regulation?

Dober says there is a big difference between using a chatbot in conjunction with therapy, versus replacing therapy with AI, which she says lacks the “checks and balances” needed.

“There is no uniform law around AI chatbots, there is a lack of regulation of that space, whereas we as psychologists are a highly regulated field,” she adds.

TGA exemptions

La Trobe University’s Gooding says the Therapeutic Goods Administration (TGA) appears to exempt AI therapy chatbots from regulation as a “medical device” if the providers take steps such as:

  • working from widely accepted cognitive behavioural therapy (CBT) models
  • not providing experimental therapy
  • not diagnosing mental health conditions.

The TGA says it is currently “reviewing the ongoing appropriateness” of these exemptions and that it will consult stakeholders before suggesting any changes to government. It adds that to date it has received no complaints about therapy chatbots.

‘Innovation important’

Gooding says that despite the lack of TGA regulation, under the Australian Consumer Law, claims made by app providers about the benefits of their products must not be misleading or deceptive.

He adds that innovation in the technology space is important and that a “heavy handed” regulatory approach from the TGA might not best serve consumers.

Chatbots can’t capture nuance, and they can be easily programmed to have addictive elements and pretend to be real people

Piers Gooding, La Trobe University law school

“Regulatory change is almost certainly needed to remove the digital mental health app exemption in relation to chatbots. Chatbots can’t capture nuance, and they can be easily programmed to have addictive elements and pretend to be real people. That poses a real danger in the mental health context,” Gooding says.

*Not her real name

Digital doppelgangers throwing lives into chaos
/data-protection-and-privacy/data-collection-and-use/how-your-data-is-used/articles/oaic-ruling-on-digital-doppelgangers
Wed, 05 Feb 2025 13:00:00 +0000
A recent ruling says that Services Australia has a duty to prevent your records from getting mixed up with another person’s.


Need to know

  • OAIC recently ruled that Services Australia had failed to prevent an individual’s Medicare information from being mixed up with – and disclosed to – another person
  • The complainant in the case was awarded $10,000 in compensation
  • Once our personal information gets mixed, there’s no telling how long it’s going to take to straighten it out to the government’s satisfaction

In our increasingly data-driven world – where details on a database can define who we are – an administrative mixup by a government agency can easily morph into a recurring nightmare.

It’s an admin error that can happen if we share the same name and date of birth with someone – our digital doppelganger. Once our personal information gets mixed, there’s no telling how long it’s going to take to straighten it out to the government’s satisfaction. It’s also difficult to predict how long the information management failure will affect our lives.

It’s an admin error that can happen if we share the same name and date of birth with someone

Given that hundreds, if not thousands, of people in Australia share the same name and date of birth, it’s a serious issue. But it’s worth pointing out that government agencies are not supposed to let these mix ups happen, as a recent determination by the Office of the Australian Information Commissioner (OAIC) makes clear.
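A rough probability sketch shows why such collisions are all but inevitable. Assuming, purely for illustration (this is not census data), that a few thousand Australians share one common full name, the standard birthday-problem approximation says a shared name-plus-birth-date pair is effectively certain:

```python
# Birthday-problem estimate of name + date-of-birth collisions.
# The name-frequency figure is an illustrative assumption, not census data.
import math

people_with_same_full_name = 3000   # assumed count for one very common name
distinct_birth_dates = 100 * 365    # roughly a century of possible birth dates

# P(no two share a birth date) ~ exp(-n^2 / 2d)  (birthday-problem approximation)
p_no_collision = math.exp(-people_with_same_full_name**2 / (2 * distinct_birth_dates))
print(f"P(at least one doppelganger pair): {1 - p_no_collision:.6f}")  # ~1.000000
```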

Focusing on an individual case, the OAIC ruled earlier this month that Services Australia had failed to prevent an individual’s Medicare information from being mixed up with – and disclosed to – another person, a violation of the Privacy Act.

Intertwined records can cause real harm

“The idea of a digital doppelganger might seem like a curiosity, but its catchy ring conceals a difficult reality,” says Privacy Commissioner Carly Kind in a recent blog on this issue.

“Individuals whose government records, such as Medicare, Centrelink and child support services, are intertwined may suffer not only inconvenience but real harm. They or their health practitioners may be prevented from accessing accurate records to enable the timely provision of health services.”

In the case mentioned above, the affected person’s records were intertwined with another person’s for six years, during which time he continually contacted Services Australia and tried in vain to clear things up.

Individuals whose government records, such as Medicare, Centrelink and child support services, are intertwined may suffer not only inconvenience but real harm

Privacy Commissioner Carly Kind

At one point the agency assured the person that the matter had been resolved.

Then he got a message from Services Australia saying “your registered Safety Net family is getting close to reaching the Medicare Safety Net Threshold”, which was clearly meant for another person.

Following that, he discovered that his COVID and influenza vaccination history had been assigned to another person.

Kind acknowledges that Services Australia has taken steps since the incident to keep digital doppelgangers administratively separate from one another, but the possibility of a mixup still exists.

The OAIC awarded the complainant $10,000 in compensation “for distress arising from the privacy breaches, which must have been a considerable burden on his time and energy over the past decade,” Kind says.

Bunnings facial recognition program ruled illegal
/data-protection-and-privacy/data-collection-and-use/how-your-data-is-used/articles/oaic-finding-against-bunnings
Mon, 18 Nov 2024 13:00:00 +0000
In a case exposed by Vlog, the privacy commissioner has found the business breached the Privacy Act.

Australian home hardware retail giant Bunnings Warehouse breached Australia’s Privacy Act with its use of facial recognition technology, an official investigation has found.

The probe into Bunnings and Kmart was launched by the Office of the Australian Information Commissioner (OAIC) in July 2022, following a Vlog investigation into the companies.

Our investigation found that the popular retailers, along with The Good Guys, were capturing biometric information on customers through facial recognition technology in stores, largely without customers’ knowledge or consent. The Good Guys immediately paused their trial of the technology following Vlog’s report.

Our story generated considerable customer backlash against the companies and gave rise to the OAIC investigation. Just over a month later, in July 2022, both Bunnings and Kmart announced they would halt their use of facial recognition technology while the OAIC investigation was ongoing.

On Tuesday, Privacy Commissioner Carly Kind found that Bunnings Group Limited had breached Australia’s Privacy Act. There was no further information about the investigation into Kmart.

Facial recognition information collected without consent

Kind says the facial recognition cameras which were in use in 63 Bunnings stores in Victoria and New South Wales between November 2018 and November 2021 likely captured the faces of hundreds of thousands of customers.

“Facial recognition technology, and the surveillance it enables, has emerged as one of the most ethically challenging new technologies in recent years,” Commissioner Kind says.

“We acknowledge the potential for facial recognition technology to help protect against serious issues, such as crime and violent behaviour. However, any possible benefits need to be weighed against the impact on privacy rights, as well as our collective values as a society … Just because a technology may be helpful or convenient, does not mean its use is justifiable.”

Just because a technology may be helpful or convenient, does not mean its use is justifiable

OAIC Commissioner Carly Kind

Kind went on to say that Bunnings collected the sensitive facial recognition information without consent and failed to take reasonable steps to notify individuals that their personal information was being collected.

“Individuals who entered the relevant Bunnings stores at the time would not have been aware that facial recognition technology was in use and especially that their sensitive information was being collected, even if briefly,” says Kind.

The Commissioner says Bunnings acted cooperatively throughout the investigation and that the OAIC has made various orders, including that the company must not repeat or continue the acts that led to the privacy breach in the first place.

“This decision should serve as a reminder to all organisations to proactively consider how the use of technology might impact privacy and to make sure privacy obligations are met,” she says.

Landmark ruling highlights outdated privacy laws

Vlog senior campaigns and policy advisor Rafi Alam says he hopes the landmark decision from the OAIC will provide much-needed guidance on the use of facial recognition technology.

“We know the Australian community has been shocked and angered by the use of facial recognition technology in a number of settings, including sporting and concert venues, pubs and clubs, and big retailers like Bunnings. We hope that today’s decision from the Information Commissioner will put businesses on notice when it comes to how they’re using facial recognition,” says Alam.

“While the decision from the OAIC is a strong step in the right direction, there is still more to be done. Australia’s current privacy laws are confusing, outdated and difficult to enforce. Vlog first raised the alarm on Bunnings’ use of facial recognition technology over two years ago, and in the time it took to reach today’s determination the technology has only grown in use,” says Alam.

Vlog first raised the alarm over two years ago, and in the time it took to reach today’s determination the technology has only grown in use

Vlog senior campaigns and policy advisor Rafi Alam

Alam adds that Vlog is continuing to call for a specific, fit-for-purpose law to protect consumers from the harms that can occur without proper and clear regulation of facial recognition technology.

Bunnings has the right to appeal and seek a review of the determination and told Vlog on Tuesday they would formally do so before the Administrative Review Tribunal.

“FRT (facial recognition technology) was an important tool for helping to keep our team members and customers safe from repeat offenders. Safety of our team, customers and visitors is not an issue justified by numbers. We believe that in the context of the privacy laws, if we protect even one person from injury or trauma in our stores the use of FRT has been justifiable,” says Bunnings managing director Mike Schneider.

Real estate agency doxxes tenant after negative review
/data-protection-and-privacy/data-collection-and-use/how-your-data-is-used/articles/real-estate-agency-doxxes-tenant
Sun, 17 Nov 2024 13:00:00 +0000
In a case that ended up with the OAIC, a real estate agency published a renter’s personal information in an act of retaliation.


Need to know

  • It remains unclear what rental application platforms such as Ignite, 2Apply, Snug, tApp and others are doing with the renter data they harvest
  • The Australian Privacy Principles hold that businesses that collect data for a specific purpose can't use it for another purpose without consent
  • In a case recently ruled on by the Office of the Australian Information Commissioner (OAIC), a renter's data was improperly used against him by the rental agency that collected it

Being forced to hand over unreasonable amounts of personal information when applying for a place to live is something that happens to a lot of people in Australia, and it’s an issue that Vlog has reported on extensively.

The potential consumer harms are such that we gave the entire rental application platform industry (or RentTech) a Shonky in 2023.

It remains unclear what rental application platforms such as Ignite, 2Apply, Snug, tApp and others are actually doing with the renter data they harvest, but it is clear that they have to abide by the Australian Privacy Principles (APP) when handling it.

In a recent case that ended up with the Office of the Australian Information Commissioner (OAIC), this didn’t happen.

The Australian Privacy Principles are a distillation of the Privacy Act, which the OAIC oversees. They stipulate that a business that holds someone’s personal information for a particular purpose can’t use or disclose it for a different purpose unless it obtains the individual’s consent. There’s also an exemption if the business has valid grounds to assume the person would reasonably expect the information to be used for the other purpose.

In the case that was recently decided by OAIC, the matter of consent proved pivotal.

Agency retaliates by doxxing

A disgruntled renter had left a negative Google review about the real estate agency he was renting through under a name similar to his own.

The review included statements such as: “Highly unprofessional. I would question how many of your 5-star reviews are fake. Your response to an emergency in a rental property is more than three full days. Shame on you.”

Previously, he had lodged a complaint with NSW Fair Trading about the agency.

When the agency retaliated by publishing the renter’s full name, occupation and financial circumstances in its response to the review – a move known as doxxing – it came as a shock.

The agency retaliated by publishing the renter’s full name, occupation and financial circumstances in its response to the review

The agency also fired back with their own comments, saying their reviews were genuine and adding, “I am not sure if we have upset you by chasing your unpaid rent so many times, we will not be apologising for that. You work as an accountant according to your LinkedIn profile, and as an accountant you should know how to pay rent on time.”

When the agency refused to take down the renter’s personal information or its comments, the renter threatened to lodge a privacy complaint with OAIC. In response, the real estate agency escalated the conflict, threatening to publish more of the renter’s personal information, including health information.

A business that holds someone’s personal information for a particular purpose can’t use or disclose it for a different purpose unless they obtain the individual’s consent.

Agency found to be in the wrong

After reviewing the case, OAIC found that the real estate agency had contravened the APP, not only by publishing the renter’s personal information without consent but also by lacking a privacy policy that adequately explained how it handled personal information at the time it did so.

The nature of the personal information disclosed, and the reasons for the disclosure, were very relevant to the findings in this matter

OAIC spokesperson

“Whether the disclosure of personal information is lawful will depend on the facts and circumstances in each matter but the nature of the personal information disclosed, and the reasons for the disclosure, were very relevant to the findings in this matter,” an OAIC spokesperson tells Vlog, adding that businesses “should carefully consider why they intend to disclose personal information, and whether that purpose is consistent with the purpose for which they collected the information in the first place”.

Regulator concerned about RentTech

The OAIC spokesperson also told us that it’s keeping an eye on the RentTech industry, “where there is typically a power imbalance in property rental, favouring landlords and real estate agencies. Tenants may have little choice other than to use RentTech, providing significant personal and sensitive information in the process”.

The regulator says it has concerns about the amount of personal data collected from renters and the lack of transparency around sharing it with third party providers for secondary purposes. “If personal information is required, it should be deleted once it is no longer needed,” the spokesperson says.

We need comprehensive reform of our privacy laws to protect consumers from malicious and exploitative uses of our data

Vlog senior campaigns and policy adviser Rafi Alam

OAIC’s view is firmly in line with our position on the issue.

“Tenants aren’t just paying through the nose in rent anymore – they’re paying in valuable data too,” says Vlog senior campaigns and policy adviser Rafi Alam. “Our personal information has become big business for RentTech platforms, and it’s left us vulnerable to data misuse and data breaches.”

“We need comprehensive reform of our privacy laws to protect consumers from malicious and exploitative uses of our data. While the government has fortunately taken a few small steps in this direction, we’re hoping that big ticket items like an obligation on businesses to use our data fairly will be introduced as soon as possible.”

Doxxing to be added to privacy law

In the end, the regulatory intervention in this case was more of a reminder of a business’s obligation under privacy legislation than a punitive action.

Though the renter had asked for $15,000 from the real estate agency to compensate for what he described as severe emotional distress caused by the doxxing incident, the privacy regulator didn’t go that far.

Instead, the outcome was the removal of the renter’s personal information from the Google reviews platform, a letter of apology to the renter, and a commitment from the real estate agency to train staff on how to properly handle such information in future.

Doxxing isn’t a concept that’s specifically outlined in the current version of the Privacy Act, but the Privacy Bill currently before parliament proposes to make it an offence.

Photos of Australian kids found in huge AI training data set /data-protection-and-privacy/data-collection-and-use/how-your-data-is-used/articles/kids-found-in-ai-training-data Thu, 04 Jul 2024 14:00:00 +0000 Human Rights Watch says that photos of Australian kids have been used without consent. What can you do about it?

Photos of Australian children have been used without consent to train artificial intelligence (AI) models that generate images.

Research from the non-governmental organisation Human Rights Watch has found the personal information, including photos, of Australian children in a large data set called LAION-5B. This data set was created by scraping content from the publicly available internet, and it contains links to images paired with text captions.

Companies use data sets like LAION-5B to ‘teach’ their generative AI tools what visual content looks like. A generative AI tool like Midjourney or Stable Diffusion will then assemble images from the thousands of data points in its training materials.
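
As a purely hypothetical illustration of what one of those data points might look like: a single entry could pair an image link such as example.com/photos/birthday-party.jpg with a caption like ‘two children blowing out birthday candles’. Repeated across billions of entries, these pairings are what teach the model to associate words with visual features.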

In many cases, the developers of AI models – and of their training data – seem to be prioritising commercial goals over data protection and consumer protection laws. They seem to believe that if they build and deploy the model, they will be able to achieve their business goals while the law and its enforcement catch up.

The data set analysed by Human Rights Watch is maintained by the German nonprofit organisation LAION. Stanford University researchers have previously found child sexual abuse material in this same data set.

AI developers that have already used this data can’t make their AI models ‘unlearn’ it

LAION has now pledged to remove the Australian kids’ photos found by Human Rights Watch. However, AI developers that have already used this data can’t make their AI models ‘unlearn’ it. And the broader issue of privacy breaches also remains.

If it’s on the internet, is it fair game?

It’s a misconception to say that because something is publicly available, privacy laws don’t apply to it. Publicly available information can be personal information under the Australian Privacy Act. In fact, we have a relevant case from 2021, when facial recognition platform Clearview AI was found to have breached Australians’ privacy. The company was scraping people’s images from websites across the internet to use in a facial recognition tool.

The Office of the Australian Information Commissioner (OAIC) ruled that even though those photographs were already on websites, they were still personal information. More than that, they were sensitive information.

It held that Clearview AI had contravened the Privacy Act by failing to meet its obligations around the collection of personal information. So, in Australia, personal information includes publicly available information.

AI developers need to be very careful about the provenance of the data sets they’re using.

Companies use data sets like LAION-5B to ‘teach’ their generative AI tools.

Can we enforce the privacy law?

This is where the Clearview AI case is relevant. There are potentially strong arguments that LAION has breached current Australian privacy laws.

One such argument involves the collection of biometric information, in the form of images of people’s faces, without the consent of the individuals concerned.

Australia’s information commissioner ruled Clearview AI had collected sensitive information without consent. Additionally, this was done by “unfair means”: scraping people’s facial information from various websites for use in a facial recognition tool.

Under Australian privacy laws, the organisation gathering the data also has to provide a collection notice to the individuals. When you have these kinds of practices – broadly scraping images from across the internet – the likelihood of a company giving appropriate notice to everybody concerned is vanishingly small.

The commissioner may be able to seek a very large fine if there is a serious interference with privacy: the greater of A$50 million, 30% of turnover, or three times the benefit received

If it’s found that Australian privacy law has been breached in this case, we need strong enforcement action by the privacy commissioner. For example, the commissioner may be able to seek a very large fine if there is a serious interference with privacy: the greater of A$50 million, 30% of turnover, or three times the benefit received.
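
To illustrate how those three limbs interact, using entirely hypothetical figures: a company with an annual turnover of $200 million that gained $10 million from a breach would face a maximum of the greatest of $50 million, $60 million (30% of turnover) and $30 million (three times the benefit received), which in that scenario would be $60 million.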

The federal government is expected to release an amendment bill for the Privacy Act in August. It follows a major review of privacy law conducted over the last couple of years.

As part of those reforms, there have been proposals for a Children’s Online Privacy Code, recognising that children are in an even more vulnerable position than adults when it comes to the potential misuse of their personal information. They often lack agency over what’s being collected and used, and how that will affect them throughout their lives.

What can parents do?

There are many good reasons not to publish pictures of your children on the internet, including unwanted surveillance, the risk of identification of children by those with criminal intentions and use in deepfake images – including child pornography. These AI data sets provide yet another reason. For parents, this is an ongoing battle.

Human Rights Watch found photos in the LAION-5B data set that had been scraped from unlisted, unsearchable YouTube videos

Human Rights Watch found photos in the LAION-5B data set that had been scraped from unlisted, unsearchable YouTube videos. In its response, LAION has argued the most effective protection against misuse is to remove children’s personal photos from the internet.

But even if you decide not to publish photos of your children, there are many situations where your child can be photographed by other people and have their images available on the internet. This can include daycare centres, schools or sporting clubs.

If, as individual parents, we don’t publish our children’s photos, that’s great. But avoiding this problem wholesale is difficult – and we should not put all the blame on parents if these images end up in AI training data. Instead, we must hold the tech companies accountable.

This article was originally published on The Conversation, where you can read the original version.

Did online therapy provider BetterHelp breach Australian laws? /data-protection-and-privacy/data-collection-and-use/how-your-data-is-used/articles/betterhelp-data-privacy Mon, 17 Jun 2024 14:00:00 +0000 The therapy platform was fined almost $12 million in the US for illegal data sharing.


Need to know

  • US online therapy platform BetterHelp has copped a massive fine for illegally sharing data
  • The company is looking to expand its Australian operations and customer base
  • Experts are calling for an investigation into whether laws were breached here too

If you’ve listened to a podcast, you’ve probably heard an ad for BetterHelp.

The US-based online therapy platform claims to have the world’s largest network of therapists and to have been accessed by 4.7 million customers in the US and around the world. BetterHelp’s marketing in Australia is all over social media platforms.

But the ads don’t tell you everything. In 2023 BetterHelp was fined $US7.8 million ($11.7 million) by the US Federal Trade Commission (FTC) for sharing the health data of customers with platforms such as Facebook and Snapchat.

BetterHelp shared the sensitive information for the purposes of advertising, something the business had promised not to do. Privacy experts here are now wondering whether laws were also breached in Australia.

What is BetterHelp?

Dr Piers Gooding, a La Trobe University researcher on disability and health issues, has looked into BetterHelp. He says the platform’s model can be described as an “Uber of therapy”, connecting patients to therapists on demand.

The platform promotes its flexibility, allowing users to use text, chat, phone or video calling to have a session with a therapist.

Another selling point is the price. With subscriptions at $90–120 a week, BetterHelp bills itself as an affordable option for people seeking therapy.

The platform promotes its flexibility, allowing users to use text, chat, phone or video calling to have a session with a therapist

But is that pitch relevant to Australians? Gooding says the platform hasn’t taken off in Australia to the extent it has in the US, perhaps because Australia’s Medicare system provides rebates for registered psychological therapy, which makes regular face-to-face mental health care much more affordable than in the US.

“Medicare means that Australians can get subsidised support. It’s a different environment to America where companies like BetterHelp might offer something that people can’t find elsewhere for that price,” says Gooding.

BetterHelp in Australia

According to online reviews, one of the ongoing gripes of non-US users of the platform is that counsellors are mostly US-based and work to US time zones. Some users in the UK, for example, complain on online review forums about paying for the service, then being unable to book suitable times.

Regardless, BetterHelp wants to grow its Australian user base. Along with social media ads targeting Australians, the company has made deals with Australian influencers on channels like YouTube.

In Australia, there is no mandatory regulation of online mental health or therapy platforms

In May, Vlog found job ads posted on LinkedIn calling for new mental health counsellors to work on the BetterHelp platform in Australia.

In Australia, there is no mandatory regulation of online mental health or therapy platforms, and Gooding says companies like BetterHelp are looking to operate in this regulatory gap.

Australia does have something called the National Safety and Quality Digital Mental Health Standards, which apply to mental health and crisis support services that are facilitated by technology – but they’re voluntary.

Organisations such as the mental health crisis phone line Lifeline have signed up to the standards, but BetterHelp has not applied for accreditation.

BetterHelp wants to grow its Australian user base.

FTC action against BetterHelp

When the FTC took action against BetterHelp, it wasn’t for breaching privacy laws. It was essentially for misleading and deceptive conduct, because the company had told users their data wouldn’t be shared with third parties.

The FTC, the US equivalent of Australia’s ACCC, said the company had used customers’ email addresses, IP addresses and health questionnaire information and disclosed them to the social media platforms.

BetterHelp put out a statement at the time saying the reason for sharing the data in question (which the FTC alleged was in breach of the law) was so it “could deliver more relevant ads and reach people who may be interested in our services”.

The statement also said, “This settlement, which is no admission of wrongdoing, allows us to continue to focus on our mission to help millions of people around the world get access to quality therapy.”

Changes to information about data sharing

BetterHelp has since changed the way it informs users of its data-sharing practices. The sharing settings now tell users that they “share data with trusted service providers, but not with third-party advertisers or analytics companies”.

Other data-sharing practices explained on its website have an opt-out function.

BetterHelp’s privacy policy starts with an acknowledgement that such documents “can be dense and inaccessible”, then runs to more than 9000 words, which would take an average reader 37 minutes to read.
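
That estimate assumes a typical adult reading speed of roughly 240 words per minute: 9000 words divided by 240 words per minute comes to around 37 minutes.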

Did BetterHelp breach Australian privacy laws?

Malcolm Crompton was Australia’s Privacy Commissioner from 1999 to 2004. He believes the data gathered and shared by BetterHelp with Facebook would be classified as “health information” under Australian law and would therefore be considered “sensitive information” as defined by the Privacy Act. This would give it a stronger level of protection here under the law.

“If they share the fact that you had given your email address with a therapy company, there are certain inferences about you, your mental health, that can be drawn from that. In my view those inferences mean that this information would constitute health information,” he says.

The real issue here is that, in Australia, we have really little enforcement of the Privacy Act

Jeannie Paterson, University of Melbourne

Jeannie Paterson from the University of Melbourne’s Centre for AI and Digital Ethics says there are valid reasons, such as the risk of workplace discrimination, why some people choose to keep their mental health information private, and that this choice needs to be respected.

“The real issue here is that, in Australia, we have really little enforcement of the Privacy Act. This may be in breach, but does the Office of the Australian Information Commissioner have the resources to investigate? To test it in court?” she says.

Investigation needed

Vlog consumer data advocate Kate Bower says that, despite the OAIC’s stretched resources, there is enough evidence of overseas wrongdoing by BetterHelp to warrant an investigation here in Australia.

“The FTC’s findings about BetterHelp are disturbing to consumers who would be shocked to discover their sensitive data has been shared for advertising. Australian consumers deserve to know what BetterHelp has been doing with their data here, and we’re urging the Privacy Commissioner to investigate any potential misuses.”

Crompton agrees that an investigation is needed.

“I think you’ve really got at least a sufficient case to make that it’s worthwhile investigating, even if the only reason is we don’t know,” he says.

Privacy reforms needed for clarity

The federal government is in the process of reforming the outdated Privacy Act and Bower says mandatory Privacy Impact Assessments could help provide clarity about whether actions like BetterHelp’s fall outside of the law.

“Our privacy regulator desperately needs more powers to investigate these matters, and that’s what we’re hoping to see in the government’s reforms to the Privacy Act later this year,” she says.

Our privacy regulator desperately needs more powers to investigate these matters

Kate Bower, Vlog consumer data advocate

“We desperately need obligations on businesses that engage in data use and collection to notify the public on any high-risk practices. This would help regulators, advocates, and consumers to hold misbehaving businesses to account.

“Consumers in this country would also benefit from a better resourced privacy regulator, and we’re hoping extra funds to the Office of the Australian Information Commissioner go hand-in-hand with stronger powers,” Bower concludes.

BetterHelp did not respond to requests for comment.

Privacy commissioner orders tenancy database operator to delete renter’s data /data-protection-and-privacy/data-collection-and-use/how-your-data-is-used/articles/tenant-database-operator-tica-forced-to-delete-renter-data Sun, 16 Jun 2024 14:00:00 +0000 The business was found to have held on to information more than twice as long as allowed.


Need to know

  • Tenancy Information Centre Australasia (TICA) collects data from rental applications and sells it to real estate agencies
  • In cases where a renter's data ends up on a tenancy blacklist, it must be deleted after three years
  • TICA held on to a renter's data long after it was removed from a blacklist and was finally forced to delete it by the privacy commissioner

When Vlog gave the third-party rental platform industry a Shonky award last year, we knew we were onto something.

Our research and investigations had all come to a similar conclusion: pressuring renters into giving away reams of personal information as a condition of applying for a place to live was unfair, unkind and shouldn’t even be legal. We even coined a new term for this digital assault on renters’ privacy – RentTech.

It had long been standard practice on platforms such as Ignite, 2Apply, Snug, tApp and others to demand information that goes well beyond what an agent or landlord would need to assess an applicant.

Four out of 10 renters we heard from were more or less coerced into using a RentTech platform

In April 2023, Vlog released a major report on RentTech, revealing that four out of 10 renters we heard from were more or less coerced into using a RentTech platform, and six out of 10 were not comfortable with the type of information they were being asked to submit. Unfortunately, it’s a situation that’s still in play today.

But a larger question loomed over the whole issue. Were RentTech platforms using this information simply to enhance their business models, or to discriminate against applicants on economic or other grounds?

And what was happening to all the personal data surrendered by hopeful applicants, most of whom wouldn’t have been approved? The privacy policies of the RentTech platforms were vague on the issue.

Privacy commissioner orders data deleted

One place that the personal renter information collected by RentTech platforms may have ended up is one of Australia’s biggest tenancy databases, a private business called Tenancy Information Centre Australasia (TICA).

TICA collects and organises data extracted from rental applications and other inputs and then sells it to real estate agencies, presumably so they can pick and choose which applicants they like. One of TICA’s partners, 2Apply, is one of the most commonly used RentTech platforms.

The virtual manager system appears to have been designed to get around the rules on data retention that apply to traditional ‘blacklist’ renter databases

Setting up such an operation is not illegal, but earlier this year the Office of the Australian Information Commissioner (OAIC) ruled that TICA’s ‘virtual manager’ database violated the privacy rights of a renter whose rental activities were tracked for seven years.

The virtual manager system appears to have been designed to get around the rules on data retention that apply to traditional ‘blacklist’ renter databases, where the renter’s data in the OAIC case was initially stored.

Application platforms such as Ignite, 2Apply, Snug and tApp demand information that goes well beyond what an agent or landlord would reasonably need.

Circumventing data retention rules

Legally speaking, a renter’s data should only end up on a blacklist database if their bond doesn’t cover the back rent they owe or they violate the terms of the lease in other ways, such as by damaging the property.

Real estate agents have to inform the renter and give them a chance to appeal before listing them. Once listed, though, renters must be removed from blacklist databases after three years.

The personal information of the complainant in the OAIC case was sent to the real estate agency through which she was trying to find a place to live, seven years after she ended up on a blacklist.

TICA claimed that its internal virtual manager database was exempt from the three-year rule, but OAIC didn’t see it that way, ruling that the retention of the information was illegal.

The need for reform

“While we welcome the ruling against TICA, this also highlights the need to update our laws,” says Vlog senior campaigns and policy adviser Rafi Alam, who specialises in consumer data issues.

“Tenancy databases have been around for a long time, and governments have seen the need to regulate these on behalf of renters. But the RentTech market has grown exponentially, so we need new rules to protect renters from unfair treatment and breaches of privacy.”

The RentTech market has grown exponentially, so we need new rules to protect renters from unfair treatment and breaches of privacy

Vlog campaigns and policy adviser Rafi Alam

The data of the renter in question has reportedly now been removed from the TICA database by order of the privacy commissioner, but it’s impossible to know what other renter data the business holds, what it’s doing with it, and how long it has had it.

Alam says there’s an urgent need for privacy reforms across all sectors of the economy. The focus should be on preventing privacy breaches, “not just punishing the few instances that make it to court”.

“With privacy laws currently stuck in the 20th century, Vlog looks forward to government plans for a major overhaul later this year, including imposing new obligations on businesses to only use personal data in fair and reasonable ways,” he says.
