Data privacy laws: how regulations protect your personal data - Vlog
/data-protection-and-privacy/protecting-your-data/data-laws-and-regulation
You deserve better, safer and fairer products and services. We're the people working to make that happen.

Why wouldn’t the bank help this identity theft victim?
/data-protection-and-privacy/protecting-your-data/data-laws-and-regulation/articles/why-wouldnt-the-bank-help-this-identity-theft-victim
Mon, 30 Mar 2026
Her questions were left unanswered because she wasn't a customer of the bank the fraudster used.


Need to know

  • An identity theft victim tried to get assistance from the bank where an account was set up in her name but was told they couldn’t help because she wasn’t a customer
  • The woman’s AFCA complaint against the bank was rejected on the same grounds
  • On 12 March, after this incident, AFCA gained new powers that allow it to investigate any bank involved in a scam or identity theft, whether or not the victim is a customer

When a debit card arrived in the mail in mid-January with Patricia*’s name on it she knew it wasn’t a good sign.

The card was for Great Southern Bank (GSB) – Patricia was not a customer and never had been. She realised her identity must have been stolen.

“I acted quickly to try to limit the damage – placing credit bans, reporting the matter to police, and contacting financial institutions  – but I’m very conscious that many people would not detect something like this as quickly,” she says.

Her prompt attention to the matter paid off, because the fraudster had, in short order, attempted to open accounts under her name with Afterpay, American Express and Wisr (a personal loan provider). The credit ban Patricia had placed on her file blocked these applications, and all three providers quickly acknowledged that they were fraudulent.

She eventually discovered that her new driver’s licence had been stolen before she received it, and the fraudster had used it to open the GSB account.

But GSB was less than helpful.

During multiple calls with their call centre I received inconsistent information about escalation, was refused access to the fraud team or a supervisor, and was essentially told there was nothing further they could do for me

Identity theft victim Patricia

“When I contacted the bank after discovering the fraudulent account, it was extremely difficult to get support because I was told repeatedly that I was not their customer,” Patricia says.

“Firstly, I wanted to understand how the account had been opened in my name, because that would indicate what personal information had been compromised and what steps I needed to take to protect myself. More broadly, what I was really looking for was acknowledgement and meaningful action from the bank regarding how this happened and how they would prevent it happening again, for myself and others.”

She wanted to know the email address and phone number that were used to open the account in her name. The bank’s reason for refusing to provide this may have been legitimate, but its representatives were also dismissive, unprofessional and rude, Patricia says.

“During multiple calls with their call centre I received inconsistent information about escalation, was refused access to the fraud team or a supervisor, and was essentially told there was nothing further they could do for me.”

AFCA unable to help

As of 12 March, AFCA gained new powers to investigate all banks involved in a scam, whether or not the victim is a customer.

At this point she felt that her only option was to lodge a complaint with the Australian Financial Complaints Authority (AFCA). She called out GSB for failing to respond appropriately to her situation and for not having adequate identification checks in place to make sure people opening accounts were who they said they were. The only piece of identification used by the fraudster was her driver’s licence.

But the bank doubled down on its unhelpfulness, appealing to AFCA to have the case dismissed on the grounds that Patricia wasn’t a customer, and that only customers can lodge AFCA complaints. AFCA conceded that this aligned with its legislative charter at the time.

In fairness, GSB is not an outlier when it comes to identity verification. Most banks only require a single primary form of identification to open a bank account, such as a driver’s licence or passport.

GSB: ‘We escalated the matter appropriately’

A GSB spokesperson tells Vlog that the bank is prohibited by both the Privacy Act and the Anti-Money Laundering and Counter-Terrorism Financing Act from sharing information about scam perpetrators (including identity thieves) with their victims. The bank says that it responded appropriately to Patricia’s requests for help.

“We have strong sympathy for the affected individual and have worked with her, as well as relevant organisations, to help reduce the risk of further identity theft and fraud,” the spokesperson says.

We believe we escalated the matter appropriately, but acknowledge our communications could have been clearer

Great Southern Bank spokesperson

GSB says it provided sound guidance, advising Patricia to report the incident to the police and place a ban on her credit file. (This good advice aligned with the steps she had already taken.)

But the bank admits that it could have done better.

“We believe we escalated the matter appropriately, but acknowledge our communications could have been clearer, and we are taking steps to improve how we communicate in situations like this.”

AFCA gains new powers

Had Patricia’s identity theft happened in mid-March rather than mid-January, AFCA’s response to her complaint might have been different.

As of 12 March this year, AFCA’s jurisdiction expanded to allow it to investigate scam complaints involving the unauthorised opening of accounts whether or not the complainant is a customer of the bank in question.

It means that when a scammer convinces you to send money from your bank account to an account the scammer has set up at another bank (known as a mule account), AFCA can investigate both banks.

“This is an important step to establishing a broader, more coordinated framework for looking at scam complaints and it reflects how scams operate in the real world,” an AFCA spokesperson says, adding that the change “strengthens transparency and accountability across the banking system by ensuring all parties involved in the movement of scam funds are accountable”.

This is an important step to establishing a broader, more coordinated framework for looking at scam complaints and it reflects how scams operate in the real world

AFCA spokesperson

As for Patricia’s case, AFCA says it “expects banks to engage with identity theft victims based on consumer expectations and good industry practice”.

Along with the AFCA complaint, Patricia also complained to GSB’s Customer Advocacy team.

“I decided to reach out in a more direct and personal way to set out the full context of what had happened and to see whether there would be any acknowledgement, accountability or rationale around the bank’s role in the situation. Unfortunately, that wasn’t the case,” Patricia says.

The bank maintained that it had made no mistakes since the fraudulent account was opened using a valid driver’s licence.

(*Editor’s note: Patricia is a pseudonym)

Real estate agents, chemists, car hire companies and more under new privacy scrutiny
/data-protection-and-privacy/articles/real-estate-agents-car-hire-companies-under-new-privacy-scrutiny
Thu, 08 Jan 2026
Australia’s privacy regulator is reviewing the privacy policies of businesses collecting your personal data during in-person interactions.


Need to know

  • In recent years, Vlog has conducted several investigations that focused on the far-reaching permissions privacy policies give the businesses that write them
  • In 2023, we reported on the privacy policies of rental platforms, and last year we analysed the privacy policies of Australia’s ten most popular car brands
  • This month, the Office of the Australian Information Commissioner begins its first full-scale privacy policy review, focusing on information demanded by businesses in person

Very few of us read the privacy policies we passively consent to when engaging with a service provider. Fewer still would understand what these privacy policies actually say.

In recent years, Vlog has conducted several investigations that focused on the far-reaching permissions these documents give the businesses we regularly interact with.

In 2023, we reported on the privacy policies of rental platforms such as realestate.com.au’s Ignite as well as Ailo, Tenant Options, Rental Rewards, Snug, 2Apply and Simple Rent.

The conclusion? These RentTech platforms collected information that went well beyond what’s needed to assess a tenant’s ability to pay the rent. The questions often seemed designed to grab as much data as possible from people who had no choice but to provide it.

In 2024, we analysed the privacy policies of Australia’s ten most popular car brands to see how the vehicles monitored and tracked their drivers. Here again we found that the harvesting of personal driver information was often excessive, and the rights the manufacturers gave themselves to share the data with third parties were both far-reaching and vague.

The ACCC has estimated that it would take the average Australian 46 hours to read all the privacy policies they encounter in a month, with the average policy running to about 6876 words (see the rough arithmetic below).


The ACCC has estimated that it would take the average Australian 46 hours to read all the privacy policies they encountered in a month

All of this makes the Office of the Australian Information Commissioner’s (OAIC) recent announcement that it will begin its first large-scale review of privacy policies in early January 2026 more timely than ever.

What’s changing in privacy law?

The Privacy Act requires privacy policies to contain certain details, such as what information is collected, why it’s needed, how it’s used, and how it can be corrected if necessary. 

An update to the Act in 2024 means businesses will also be required (as of 10 December 2026) to specify in their privacy policies whether a computer program will be using your personal information to make decisions that could go against you, such as when an application for a rental home is rejected. 

The privacy policy sweep is … focusing on information demanded by businesses in person, such as when a real estate agent asks you for personal details when you’re inspecting a rental property or a car rental company presents you with a lengthy form before handing you the keys

In addition, the 2024 update gave the OAIC the power to issue infringement notices for Privacy Act violations without going to court, and gave individuals the right to seek legal redress and financial compensation in certain cases for invasions of privacy or misuse of their personal information.

The OAIC’s privacy policy sweep is taking a different approach from our investigations of online privacy documents. It will occur in the real world, focusing on information demanded by businesses in person, such as when a real estate agent asks you for personal details when you’re inspecting a rental property or a car rental company presents you with a lengthy form before handing you the keys. The privacy policies of such businesses must include the above-mentioned information.

Not having the right information in a privacy policy – or not having a privacy policy at all – could lead to fines from the OAIC of up to $66,000.

Which types of businesses will be targeted?

The privacy policy sweep will focus on sectors where the OAIC believes there are particular power imbalances – also known as information asymmetries – between the business in question and the customers being asked to provide the information.

When confronted with in-person requests for their personal information … consumers often don’t have access to all the information they might need to make an informed decision

Privacy Commissioner Carly Kind

“When confronted with in-person requests for their personal information from retailers, licensed venues, car hire companies or real estate agents, consumers often don’t have access to all the information they might need to make an informed decision,” says Privacy Commissioner Carly Kind.

“This makes them vulnerable to overcollection of personal information and creates risks to their security and privacy.”

The OAIC says it will review the privacy policies of around 60 businesses from the following six sectors, with a particular focus in each case.

  • Rental and property – collection of individuals’ personal information during property inspections.
  • Chemists and pharmacists – collection of personal information for the purpose of providing a paperless receipt and collection of identity information to provide medication.
  • Licensed venues – collection of identity information to enable individuals to access a venue.
  • Car rental companies – collection of identity and other personal information to enable an individual to enter into a car rental agreement.
  • Car dealerships – collection of personal information to enable an individual to conduct a vehicle test drive.
  • Pawnbrokers and second-hand dealers – collection of identity information from individuals who wish to sell or pawn goods.

Transparent communication is critical

In the OAIC’s view, a business’s explanation of how it will use personal information should be open and transparent.

“The Australian community is increasingly concerned about the lack of choice and control they have with respect to their personal information,” Kind says.

“The first building block of better privacy practices is a clear privacy policy that transparently communicates how an individual can expect their information to be collected, used, disclosed and destroyed.

“In conducting a compliance sweep, the OAIC intends to ensure that entities are meeting their obligations to be transparent with consumers and customers about how they’re using the personal information they collect in-person.

“We hope this will also catalyse some reflection about how robust entities’ privacy practices are, and whether more can be done to improve compliance with the Privacy Act writ large.”


Pathology lab becomes the first business to be fined in Australia for a privacy breach
/data-protection-and-privacy/protecting-your-data/data-laws-and-regulation/articles/australian-clinical-labs-fined-for-data-breach
Sun, 12 Oct 2025
In a recent court judgement – the first of its kind under the Privacy Act – Australian Clinical Labs was fined $5.8 million.


Need to know

  • In a groundbreaking judgement, Australian Clinical Labs was ordered to pay $5.8 million in penalties for violations of the Privacy Act 
  • The penalties would have been much higher had the data breach occurred after 13 December 2022, when the maximum went from $2.22 million per contravention to as much as $50 million
  • Similar rulings could be made against Optus and Medibank, which have both been taken to court by the Office of the Australian Information Commissioner

In February 2022 the personal medical information of 223,000 people fell into the hands of scammers after the IT systems at Australian Clinical Labs (ACL) were breached.

It was a major cybercrime incident, yet ACL dragged its heels – first by failing to properly investigate whether a data breach had occurred and then by taking too long to inform the Office of the Australian Information Commissioner (OAIC) once the business knew its systems had been infiltrated.

In a recent court judgement – the first of its kind under the Privacy Act – ACL was ordered to pay $5.8 million in penalties for these and other contraventions of privacy legislation.

Most of the penalty ($4.2 million), however, was for failing to protect the data in the first place, something that far too many companies have failed to do.

Australian Information Commissioner Elizabeth Tydd calls the unprecedented legal outcome “a notable deterrent and signal to organisations to ensure they undertake reasonable and expeditious investigations of potential data breaches and report them”.

The judge in the case said ACL’s negligence “had at least the potential to cause significant harm to individuals whose information had been exfiltrated, including financial harm, distress or psychological harms, and material inconvenience” and could have “a broader impact on public trust in entities holding private and sensitive information of individuals”.

ACL penalty could have been a lot higher 

Trust in how our data is collected and protected is already low. In September, Privacy Commissioner Carly Kind found that Kmart Australia had breached Australians’ privacy by grabbing their personal information without their consent in 28 of its stores through facial recognition technology (FRT), a system ostensibly designed to prevent refund fraud. How safe this data is remains unclear. (The Privacy and Information Commissioners are both part of the OAIC.)

Kmart’s secret use of FRT was originally uncovered through a 2022 Vlog investigation, which also revealed the use of FRT at Bunnings and The Good Guys. The Privacy Commissioner recently made a similar ruling against Bunnings, a case that is currently under review by the Administrative Review Tribunal.

The financial penalties against ACL may be just the beginning – and they’re on track to get a lot higher

The OAIC did not pursue financial penalties in the Kmart case, but the financial penalties against ACL may be just the beginning – and they’re on track to get a lot higher. 

In August, Commissioner Tydd launched court proceedings against Optus following a cyberattack in September 2022 that resulted in the personal information of around 9.8 million Australians falling into the hands of criminals.

And in June last year, the OAIC filed a court case against Medibank Private following an October 2022 data breach that saw the sensitive health information of around 9.7 million Australians disappear into the criminal underworld.

The penalties against ACL would have been much higher had the data breach occurred after 13 December 2022, when maximum penalties went from $2.22 million per contravention of the Privacy Act to as much as $50 million. (Alternatively, fines can equal three times the benefit derived from the conduct or up to 30% of a business’s annual turnover per contravention.)
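As a rough illustration of how the new ceiling works, the sketch below applies the three caps described above: the greatest of $50 million, three times the benefit derived from the conduct, or 30% of annual turnover. The company figures are invented for illustration only.

```python
# Sketch of the maximum penalty per contravention under the post-13 December 2022
# regime described above. All inputs are hypothetical.

def max_penalty(benefit_derived: float, annual_turnover: float) -> float:
    """Maximum available civil penalty per contravention, in AUD."""
    return max(50_000_000, 3 * benefit_derived, 0.30 * annual_turnover)

# Hypothetical company: $30m gained from the conduct, $400m annual turnover.
print(f"${max_penalty(30_000_000, 400_000_000):,.0f}")
# $120,000,000 – here the 30%-of-turnover cap is the binding one
```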

Should the Optus and Medibank cases result in financial penalties, they would be determined according to the regime in place before 13 December 2022. But it seems that data breaches aren’t going away anytime soon, and whether the threat of higher fines will stop the breaches is an open question. 

A turning point for privacy law

Referring to the recent ACL case, Commissioner Kind says “this outcome represents an important turning point in the enforcement of privacy law in Australia. For the first time, a regulated entity has been subject to civil penalties under the Privacy Act, in line with the expectations of the public and the powers given to the OAIC by parliament”. 

For the first time, a regulated entity has been subject to civil penalties under the Privacy Act, in line with the expectations of the public and the powers given to the OAIC by parliament

Privacy Commissioner Carly Kind

“This should serve as a vivid reminder to entities, particularly providers operating within Australia’s healthcare system, that there will be consequences of serious failures to protect the privacy of those individuals whose healthcare and information they hold.”

Most Australians want tougher privacy laws following data breaches
/data-protection-and-privacy/protecting-your-data/data-laws-and-regulation/articles/privacy-reform-survey
Wed, 04 Sep 2024
With a recent breach affecting almost 13 million people, consumers are demanding change.


Need to know

  • Australians support reforms to the Privacy Act that would stop businesses from collecting too much data
  • The proposed reforms would also give more powers to regulators to investigate and act on data breaches
  • Vlog is calling on the federal government to institute these reforms urgently

New data reveals a majority of Australians support changes to privacy regulations being championed by Vlog, with approximately 80% backing several key reforms.

It comes as authorities reveal almost 13 million people may have had their health information and other personal details accessed by hackers following a cyber attack on former prescription delivery service provider MediSecure in April.

This follows years of data breaches affecting major companies, including Optus, Medibank and Latitude Financial, putting the sensitive information of millions of Australians at risk.

To help protect our data, Vlog is calling on the federal government to urgently implement four key improvements as part of its reform of the Privacy Act, which governs how our data can be collected and used.

Millions of Australians have had their data exposed to hackers following recent data breaches.

Australians back better privacy protections

In our latest Vlog Consumer Pulse survey*, a nationally representative survey of over 1000 households conducted in June 2024, consumers told us they back many of the reforms we’re presenting to the government.

Vlog senior campaigns and policy adviser Rafi Alam says that after years of data leaks, people are fed up and want change.

“Privacy reform has never been more urgent,” he says. “Consumers want and deserve strong protections for their personal information. They tell us daily that they are worried about their data and they expect the government to act to protect them.”

These are the four key reforms we’re pushing for.

Text-only accessible version

Major company data breaches
People affected (millions)
Medibank: 9.7
Optus: 9.8
MediSecure: 12.9

1. Stop the over-collection of data

Of those surveyed, 77% said they believed businesses should only be allowed to use your personal data in ways that are fair to you.

We’re calling on authorities to institute a “fair and reasonable use” test or, as Alam refers to it, “the privacy pub test” – something that would be considered reasonable and fair to most everyday Australians.

A privacy pub test would seek to ensure businesses can only collect and use your data in ways that are fair, regardless of consent or tricky terms and conditions.

Alam says a requirement for this in the Privacy Act will discourage organisations from collecting more data than they need and prevent them from using data in ways that hurt consumers.

“For too long, businesses in Australia have had a culture of rampant over-collection of data that has led not only to massive data breaches, but also unfair practices like price discrimination and manipulative data-driven marketing,” he says.

Australians believe much of the information captured by our devices could identify us.

2. Bring ‘personal information’ into the digital age

Vlog also believes more of our sensitive details ought to be brought under the protections of the Act.

“Unfortunately our privacy laws were written in the 1980s, long before cybersecurity and artificial intelligence were everyday concerns,” says Alam. “We need fit-for-purpose privacy laws that offer consumers the protection they deserve in the digital age.” 

Currently, ‘personal information’ is protected in the Privacy Act, but is defined only as information about a person, such as names, addresses and phone numbers.

We want the definition changed to information relating to a person, safeguarding more of the data collected on us by our devices, such as IP addresses and our exact location.

Consumers are already well aware of the power this information has. Over 70% of survey respondents believe it could lead to them being identified.

3. More protections from businesses of every size

Consumers also believe more businesses should be required to abide by privacy laws.

According to Australia’s privacy commissioner Carly Kind, 95% of Australian businesses aren’t complying with any privacy legislation.

Most small businesses, with a turnover less than $3 million per year, are currently exempt from the Privacy Act, but 81% of survey respondents told us that they think these firms should be required to follow the same rules as big businesses when dealing with personal data.

“Australians expect the same protections, regardless of the size of the business,” Alam says.

“Whether it’s a real estate agent, supermarket or social media site, consumers want assurance that their personal information will be used fairly.”

4. Help authorities keep data collectors in line

Consumers back Vlog’s call for more powers for the data regulator.

Finally, Vlog also wants the national privacy regulator, the Office of the Australian Information Commissioner (OAIC), to be given powers similar to those of other regulators, like the ACCC and ASIC, to investigate and act on wrongdoing.

This would include giving OAIC stronger investigative powers and the ability to issue infringement notices for smaller breaches of the Act. 

“Stronger powers for the regulator will uplift compliance across the economy and restore trust in the market,” says Alam.

In our survey, 88% of respondents agreed that the regulator should be able to fine businesses that misuse our personal data.

The two things stopping Australians using AI and the internet more
/data-protection-and-privacy/protecting-your-data/data-laws-and-regulation/articles/digital-lives-report
Mon, 17 Jun 2024
Concerns about scams and cyber security are keeping people away from new tech.


Need to know

  • The internet is delivering Australians many benefits, with a new report finding most of us would struggle without it
  • But concerns about scams and the reliability of new technologies like AI are preventing us from using it more
  • A large number of consumers and small businesses would like to see more defences against scams and better AI regulation

Australian consumers and small businesses are keen to use the internet and emerging technologies more, but concerns about cyber safety and the accuracy of AI are holding them back.

This is what emerged in the latest Digital Lives of Australians report, compiled by internet advocacy group auDA.

An ‘important asset’

Surveying 1500 consumers and 400 small businesses for its fourth annual study of the online challenges and opportunities facing Australians, auDA found almost all of those canvassed believed the internet adds value to their lives.

The internet’s important role was reflected in the finding that nine out of 10 Australians use it for work and 78% of small businesses would struggle without it.

When asked about what benefits the internet brought them, respondents to the auDA survey listed not only the chance to learn and connect but, increasingly, opportunities to earn money

“The importance of the internet and digital assets can’t be underestimated,” says Luke Achterstraat, CEO of the Council of Small Business Organisations Australia (COSBOA).

“Small business customers are always looking to potentially shop or receive marketing material online, so it’s critical.”

When asked about what benefits the internet brought them, respondents to the auDA survey listed not only the chance to learn and connect but, increasingly, opportunities to earn money.

Scam fears holding us back

Online confidence has been affected by the impact of scams and security breaches.

But concerns about threats to cyber safety, such as those posed by scams and data breaches, are preventing Australians from taking greater advantage of this online connectivity.

“Australian consumers and small businesses both say the value they gain from the internet is hampered by cyber security concerns,” says auDA CEO Rosemary Sinclair AM.

Vlog has witnessed this diminishing of value first-hand – in a survey of 280 scam victims late last year, 61% told us they lost confidence in doing financial transactions online after being scammed.

The rise in scams over the last few years has also threatened small enterprises.

“The average cost of a cyber attack for a small business can be in excess of $50,000,” says Achterstraat. “That’s potentially fatal for [the business].”

Together with 64% of consumers, 55% of small businesses avoid online activity due to concerns about data security, according to auDA’s report.

The study also found Australians are looking for greater certainty around how we should approach cyber threats.

Approximately 40% want to strengthen their online security but don’t know how, and 81% of consumers believe companies should face penalties if they fail to protect customer data.

81% of consumers believe companies should face penalties if they fail to protect customer data

Achterstraat says COSBOA would like to see more education for small businesses on tackling cyber threats and continued investment by the federal government in its cyber security strategy, which he believes is well-placed to protect consumers and business.

Vlog wants to see the government require businesses like banks, telcos and social media platforms – which have the technology and resources to detect, prevent and respond to scams – to do more to prevent them and to reimburse victims.

Australians want more regulation of new tech

auDA’s report found a majority of Australians believed AI tools could help with everyday tasks, while other technologies such as autonomous vehicles could also free up more time in their day.

But they would feel more optimistic about how new technologies such as generative AI, virtual reality and robotics can be useful in their lives, and more comfortable using the technology, if there were stronger laws in place.

Meanwhile, 57% of small businesses say they’re already using AI tools for at least one purpose, with 30% of those saying they were using the programs to answer questions they have.

Consumers and small businesses would feel safer using AI if it was more strongly regulated.

AI anxieties

However, the report also found hesitations around using AI. Consumers in particular came forward with doubts about the accuracy of its output and who legally owns what it produces.

To that end, auDA’s research found a sizeable majority of Australians want more regulation of these tools, with 61% of consumers and 67% of small businesses saying they would feel more comfortable using them if there were stronger regulatory safeguards in place.

Vlog consumer data advocate Kate Bower says there are specific areas where AI is deployed that should be reined in first.

Strong and clear guardrails for AI will increase consumer trust in the rapidly evolving technology

Vlog consumer data advocate Kate Bower

“We’re calling on the federal government to introduce a strong regulatory framework that would legislate pre-market safeguards for high risk uses of AI, as well as prohibit AI that is very high risk,” she explains.

“Strong and clear guardrails for AI will increase consumer trust in the rapidly evolving technology and enable small businesses to reap the benefits while safely managing the risks.”

Australians want AI regulation, but the government is falling short
/data-protection-and-privacy/protecting-your-data/data-laws-and-regulation/articles/australians-want-ai-regulation
Wed, 13 Mar 2024
New Vlog research for World Consumer Rights Day says we want more than the bare minimum.

New nationally representative research from Vlog highlights the wide gulf between consumer expectations of AI regulation and the current state of play in Australia. The survey of more than 1000 people found that almost 4 in 5 Australians believe that businesses should have to ensure their artificial intelligence system is fair and safe before releasing it to the public, yet no such requirements exist in Australia.

The survey also found strong support for the role of government in ensuring AI systems are fair and safe, with 77% agreeing that the government should require businesses to assess AI risks before their products are released to the public and, further to this, 3 out of 4 agreeing that businesses should also be required to prevent those risks before release.

The message couldn’t be clearer – Australians want the government to place obligations on business to mitigate the risks of AI systems

Vlog consumer data advocate Kate Bower

Notably, consumers expect AI risks to be assessed and managed before products enter the market and see a clear role for the government in ensuring businesses comply.

“The message couldn’t be clearer,” says Vlog consumer data advocate Kate Bower. “Australians want the government to place obligations on business to mitigate the risks of AI systems.” 

“AI systems are notoriously opaque, making it difficult for consumers to know the risks simply by interacting with a product or service. These results show that Australians understand this and they want government and businesses to do more to make sure that AI systems are used fairly and responsibly.”

The figures were released for World Consumer Rights Day (15 March), with the theme for 2024 being fair and responsible AI for consumers. Consumers International, the membership organisation for consumer groups worldwide, chose the theme as an acknowledgment of the rising tide of AI systems globally and the “serious implications for consumer safety and digital fairness” that entails.

Text-only accessible version

Vlog research: Australians’ views on AI

Businesses should have to ensure their artificial intelligence system is fair and safe before releasing it to the public, 78% agree.

Government should require businesses to assess the risks of their artificial intelligence products or services before releasing them to consumers, 77% agree.

Government should require businesses to prevent the risks of their artificial intelligence products or services before releasing them to consumers, 75% agree.

Government should have an independent third party assess the risks of businesses’ artificial intelligence products or services before releasing them to consumers, 69% agree.

Government takes first steps towards reform

In January, the federal government announced an interim response to last year’s Safe and Responsible AI consultation undertaken by Minister for Industry and Science Ed Husic and the Department of Industry, Science and Resources. The government acknowledged that AI regulation in Australia is seriously lagging behind comparable countries and that Australians want legal protections.

In the response, the government committed to several actions, including:

  • the establishment of a temporary AI advisory group to consider mandatory guardrails for high-risk uses of AI
  • the creation of a voluntary AI safety standard
  • exploring the merits of watermarking for AI-generated content.

However, the government intends to leave most uses of AI, like those in low- and medium-risk categories, to be governed by existing laws. 

Serious gaps in our current laws

Multiple existing laws do apply to AI systems. For instance, negligent or misleading business practices are illegal regardless of which technology is used. But experts have pointed to the myriad gaps in existing legal regimes, including consumer protection, privacy, discrimination, and copyright law.

Dr Zofia Bednarz, an Associate Investigator at the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), says “The laws we have currently in place were not prepared with AI models in mind, and they often fail to address new practices enabled by AI. Coupled with often ineffective enforcement of the existing rules, this makes our legal and regulatory system likely unable to deal with many of the new challenges posed by AI.”

Bower says that, while government action is welcome, the response so far falls well short of consumer expectations.

“Australians want more than the bare minimum protection – just regulating for the highest level of risk is not going to cut it,” says Bower.

“Three-quarters of Australians believe the government should require businesses to actually prevent AI risks before they release a product into the market and more than two-thirds support an independent third-party assessment of the risks, such as through an AI regulator.

It’s clear the government is going to have to seriously beef up their response if they want to meet consumer expectations and restore consumer trust

Vlog consumer data advocate Kate Bower

“It’s clear the government is going to have to seriously beef up their response if they want to meet consumer expectations and restore consumer trust.” 

Vlog’s submission to the Safe and Responsible AI consultation made several recommendations to the federal government, such as the establishment of a well-funded AI commissioner with a range of civil and criminal powers, including a new product intervention power that would remove harmful products from sale before they hit the market. Vlog is also urging the government to enshrine in legislation requirements for AI systems to be fair, safe, reliable, transparent and accountable. 

Text-only accessible version

World Consumer Rights Day 2024: Fair and responsible AI for consumers

AI systems should be: fair, safe, reliable, transparent and accountable.

“With the World Consumer Rights Day theme for 2024 being fair and responsible AI for consumers, now is the perfect time to address the serious implications artificial intelligence poses to consumer safety and digital fairness by introducing stronger laws,” says Bower.

Do you have questions or concerns about AI? We’d like to hear from you.

Our survey

Vlog Consumer Pulse January 2024 is based on an online survey designed and analysed by Vlog. 1,058 Australian households responded to the survey with quotas applied to ensure coverage across all age groups, genders and locations in each state and territory across metropolitan and regional areas. The data was weighted to ensure it is representative of the Australian population based on the 2021 ABS Census data. Fieldwork was conducted from the 16th of January until the 5th of February, 2024. 

Australians to get ‘guardrails’ for safe AI use
/data-protection-and-privacy/protecting-your-data/data-laws-and-regulation/articles/ai-government-regulation
Sun, 21 Jan 2024
Government says AI comes with huge economic opportunities, but safety protections are needed.

The federal government says the adoption of AI (artificial intelligence) and automation could add hundreds of billions of dollars to the Australian economy by 2030, but high-risk uses of the technology need safety standards and guardrails. 

In releasing the government’s interim response to the Safe and Responsible AI in Australia consultation, Minister for Industry and Science Ed Husic says Australians see the value of the technology, but want to see risks identified and tackled as well.

“We have heard loud and clear that Australians want stronger guardrails to manage higher-risk AI,” he says. 

“We want safe and responsible thinking baked in early as AI is designed, developed and deployed.” 

A risk-based response 

The government’s response paper says that it will target high-risk settings where the harms of AI would be difficult to reverse, adding that legislation may be introduced for mandatory protection guardrails in these areas. 

‘High-risk’ applications of AI technology include the collection of biometric information such as facial recognition; medical devices; critical infrastructure like water, gas and electricity; determining access to education or jobs; and law enforcement.

The government says it wants to make sure businesses engaged in ‘low-risk’ uses of AI are allowed to flourish unimpeded.

Businesses can’t regulate themselves

Rafi Alam, Vlog’s senior consumer data campaigns advisor, says he is pleased to see the government take the first steps towards regulating the use of AI. He adds that consultations with consumer groups should continue, and points out that history shows businesses can’t be left to regulate themselves.

“The government’s interim response has recognised the inadequacy of our current laws to address consumer concerns about high-risk AI systems, and we welcome their commitment to putting in mandatory guardrails on the development and deployment of these systems,” he says. 

Vlog also called for strong enforcement and a well-funded AI Commissioner with a range of civil and criminal penalty powers

“Vlog’s investigations have shown how AI can be used to harm consumers, from biased algorithms that increase prices based on age and sexuality, to facial recognition technology that can be used to monitor and discriminate against certain customers. Our submission to the consultation argued the need to implement a risk-based approach to these and other AI-based technologies.”

In Vlog’s submission to the consultations late last year, we argued for a risk-based approach to AI legislation and also called for strong enforcement and a well-funded AI Commissioner with a range of civil and criminal penalty powers.

Vlog called for Australia to look to the European Union and Canada as examples of jurisdictions where stringent AI regulation means that businesses must guarantee safe, fair, transparent, reliable, and accountable AI systems before releasing them. 

New anti-scams code targets banks, telcos and digital platforms
/data-protection-and-privacy/protecting-your-data/data-laws-and-regulation/articles/government-commits-to-mandatory-anti-scams-code
Wed, 29 Nov 2023
The government has begun consulting on an industry-wide approach to stopping the onslaught of scams.

In 2022, Australians lost $3.1 billion to scams, a whopping 80% increase compared to the previous year. The 2020s, it seems, have become the decade in which scammers have clearly gained the upper hand. 

The federal government has finally committed to introducing a new mandatory anti-scams code, focusing on banks, digital communications platforms and telecommunications providers.

This commitment to begin rolling back the exponential increase in scams of recent years is a welcome development that Vlog and our allied consumer advocacy organisations have long been calling for. 

‘Really clear obligations’

“These tough new codes would make it really clear what the obligations are on industry to prevent scams and better protect people and businesses,” says Assistant Treasurer Stephen Jones. 

“Disrupting these sophisticated criminals is a whole-of-society effort. Government, industry and the community all have a role to play. New scam codes will ensure we have tailor-made requirements for each sector to keep Australians safe,” says Minister for Communications Michelle Rowland. 

These tough new codes would make it really clear what the obligations are on industry to prevent scams and better protect people and businesses

Assistant Treasurer Stephen Jones

In the 2023–24 budget, the government earmarked $86.5 million to fight scams. Some of those funds went to the establishment of a National Anti-Scam Centre in July 2023. 

The government has also committed to establishing an SMS Sender ID Registry to help prevent scammers from imitating government agencies, a tactic that has become increasingly sophisticated and effective. 

Meta and Google on notice 

Vlog director of campaigns and communications, Rosie Thomas, says the online platform duopoly, Meta and Google, is long overdue for a mandatory consumer protection code. 

“Digital platforms simply aren’t doing enough to protect consumers from scams, putting people at unnecessary risk of harm,” Thomas says.  “Without strong rules that require digital platforms to detect and prevent scams, and support people who’ve been harmed, scams will only continue to run rampant online.” 

While the platforms rake in revenue from the criminals who pay for scam ads, the public loses millions. 

Losses reported to Scamwatch from scams on social media reached $66.5 million in 2023, an increase of 41.6% on the same period in 2022.

Without strong rules that require digital platforms to detect and prevent scams, and support people who’ve been harmed, scams will only continue to run rampant online

Vlog director of campaigns and communications Rosie Thomas

“A recent Vlog investigation also revealed a slew of likely scam ads impersonating popular Australian retailers across Google, Facebook and Instagram, further reinforcing the need for strong new regulations backed by hefty penalties,” Thomas says. 

Businesses should be liable when they fail to protect consumers

CEO of the Consumer Action Law Centre Stephanie Tonkin says the government’s consultation paper appears to be a step in the right direction, but lacks one critical feature – a duty for banks and other businesses to reimburse scam victims when consumer protection standards are not met. 

“The glaring omission is liability and rights for reimbursement for banking customers,” Tonkin says.

“We talk to many people who, through no fault of their own, have suffered life-changing losses to sophisticated scams, and they should not be made to foot the bill for a system that has failed them.”

Members of the public can take part in the consultation process to develop the final version of the code by making a written submission.

As an alternative, the government invites the public to provide brief feedback online.

Op ed: This is what Australia needs to do to regulate AI
/data-protection-and-privacy/protecting-your-data/data-laws-and-regulation/articles/op-ed-australia-needs-strong-regulations-for-ai
Thu, 07 Sep 2023
We need strong laws and well-resourced regulators to make sure consumers are protected from the possible harms of AI.


Need to know

  • As AI becomes increasingly mainstream, we need to be aware of the risks this technology poses
  • We are already seeing examples of harm resulting from the use of AI such as discriminatory outcomes from automatic pricing  
  • Vlog has made a submission to government outlining our suggestions on how consumers can be protected from the risks of AI, including making laws risk-based and appointing strong regulators

Once confined to academic papers and science fiction, artificial intelligence (AI) officially moved out of the research labs and into the consumer market in 2023. AI-based tools ChatGPT and DALL-E have become household names, and AI promises to deliver both productivity and fun.

But although AI has its benefits, we shouldn’t ignore the risks. Businesses are looking to AI to increase profitability, often at the expense of consumers.

The risks of artificial intelligence 

Our investigations over the past year have found that facial recognition technology has made its way into retail stores, pubs, clubs and stadiums. This technology lets businesses automatically refuse access to people based on identity databases, but experts have found a startling rate of inaccuracy, especially for people with disabilities and people of colour (particularly women).

AI is also being used to process more data than ever before. Businesses even use algorithms to make decisions about how much we should pay for things, from our groceries to insurance or subscription plans and even our home loans. 

Chatbots using ChatGPT can replicate false information in their answers or provide dangerous advice

But when pricing decisions are entirely automated it can lead to discriminatory outcomes, such as higher premiums for people from marginalised backgrounds or increased prices for older people. 

Generative AI like ChatGPT comes with its own set of hazards. Chatbots using ChatGPT can replicate false information in their answers or provide dangerous advice. The Federal Trade Commission, the US’s competition watchdog, is currently investigating whether ChatGPT has harmed people by creating false information, and is also looking into its privacy practices.

What needs to be done to protect consumers?

Businesses use algorithms to make decisions about how much people should pay for things which can result in unfair outcomes.

AI laws should be risk-based

Experts have been sounding the alarm on these risks for quite some time, but governments around the world are only just catching up. Australia is now running its own consultation on AI, and Vlog has just submitted our suggestions on how the government can protect consumers from these risks. 

At the heart of our submission is the need for a risk-based approach to AI, just like the European Union is proposing. A risk-based framework categorises AI activities from those considered minimal risk, which require few limitations, to those considered high risk, which are restricted or even prohibited.

We also suggested that our AI laws should codify consumer rights to safety, fairness, accountability, reliability, and transparency. 

The federal government should also strengthen existing laws like the Australian Consumer Law and the Privacy Act to ensure people are comprehensively protected from AI misuse or exploitation.

Strong regulators are essential

But making new laws isn’t enough – we need strong regulators to enforce these laws. Vlog is calling for a well-funded AI Commissioner with a range of regulatory powers including civil and criminal penalty powers. 

An AI Commissioner should leverage their specialist expertise by collaborating with the existing regulatory bodies responsible for areas impacted by AI, such as consumer rights, competition, privacy, and human rights.

Big tech wants to regulate itself, but history proves these businesses can’t be trusted to write their own rules

Big tech wants to regulate itself, but history proves these businesses can’t be trusted to write their own rules. Australia should follow the lead of the European Union and Canada and lay down the foundations for a fair market where businesses must guarantee safe, fair, transparent, reliable, and accountable AI systems before releasing them. 

Not only would this protect our community from harm, it would also encourage innovation and promote responsible AI use.

You can read our full submission to the government here.

Op-ed: To fix the Privacy Act, we need one extra sentence
/data-protection-and-privacy/protecting-your-data/data-laws-and-regulation/articles/op-ed-one-sentence-to-fix-the-privacy-act
Wed, 03 May 2023
Why individuation or 'singling out' is critical to improving Australia's Privacy Act and protecting consumers.

Anna Johnston is a leading privacy expert, former Deputy Privacy Commissioner of NSW and founder and Principal of Salinger Privacy. 

Need to know

  • A proposal to change the definition of personal information is the key to privacy reform 
  • Regulators and advocates are united in support of changing the definition to capture unfair and deceptive business practices, including profiling and tracking
  • Big business and the AdTech industry are lobbying against the reforms despite clear evidence consumers want better protections

Indirect identification, individuation, disambiguation, distinguishing from all others, or singling out… call it what you want, but the statutory definition of ‘personal information’ needs to clearly state that it includes cases when individuals can be singled out and acted upon, even if their identity is not known.

Yet as they now stand, the proposals to amend the Privacy Act do not include this critical reform.

I wrote previously about some of the themes arising from the Final Report into the review of the Privacy Act. 

One of the surprises, but not of the happy-surprise-birthday-party kind, was the way in which “personal information” has been treated.

Can someone be ‘identifiable’ if a business doesn’t know their name?

The definition of personal information is a critical threshold definition because the privacy principles only apply to personal information. If a business can successfully argue that some data is not personal information, they can collect, use, disclose and trade the data with impunity.

Right now, the definition of personal information includes if someone is “reasonably identifiable”. But that phrase is foggy, to the detriment of businesses and consumers alike, who may ask: Can someone be ‘identifiable’ if a business doesn’t know their name?

The OAIC says yes

In its published guidance, and in a submission to the Privacy Act review, the OAIC has maintained that ‘identifiability’ in law does not necessarily require that a person’s name or legal identity can be established from the information. Instead, it implies uniqueness in a dataset: “(g)enerally speaking, an individual is ‘identified’ when, within a group of persons, he or she is ‘distinguished’ from all other members of a group… This may not necessarily involve identifying the individual by name”.
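To make that concrete, here is a minimal sketch (with invented records) of how a handful of ordinary attributes can distinguish one person from everyone else in a dataset, with no name in sight.

```python
# 'Singling out' without names: one combination of attributes is unique,
# so that individual can be distinguished and acted upon. Records invented.
from collections import Counter

records = [
    {"postcode": "2000", "age_band": "30-34", "gender": "F"},
    {"postcode": "2000", "age_band": "30-34", "gender": "F"},
    {"postcode": "2000", "age_band": "30-34", "gender": "M"},
    {"postcode": "2000", "age_band": "30-34", "gender": "M"},
    {"postcode": "2612", "age_band": "65-69", "gender": "F"},  # unique combination
]

counts = Counter(tuple(r.values()) for r in records)
singled_out = [combo for combo, n in counts.items() if n == 1]
print(singled_out)  # [('2612', '65-69', 'F')] – one person, no name required
```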

The Attorney-General’s Department says yes

The Final Report quotes, without challenge, the OAIC position, and states that “The test does not require that an individual’s legal identity be known provided the information could be linked back to the specific person that it relates to.”

Based on European and Californian privacy laws and others, our global trading partners say yes

Each has either explicitly expanded on the meaning of identifiability, or has introduced alternatives to identifiability as a threshold element of their definitions. The GDPR calls it “singling out”. 

The California law (CCPA) includes, within its definition of personal information, data which is “capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household”, without first needing to pass an identifiability test. The 2019 international standard in Privacy Information Management, ISO/IEC 27701, is similar.

AdTech businesses are tracking consumers’ buying behaviour and data-matching across brands and businesses.

Australians want the answer to be yes

In one survey, 79% of digital platform users considered telephone or device information, and 67% considered browsing history, to be information that could reasonably be used to identify them when doing things online. 

Another survey showed that only around a quarter (24%) of Australians feel the privacy of their personal information is well protected, and that the vast majority (83%) would like the government to do more to protect the privacy of their data, particularly in the wake of the Optus and Medibank data breaches.

The vast majority (83%) of Australians would like the government to do more to protect their privacy

And the most recent research found that the majority of Australians regard things like their IP address, device IDs, location data and online search history as their ‘personal information’. In fact, respondents were even more likely to consider this data personal information than categories of data like sexuality and disability. 

This research also showed that the majority of Australians were uncomfortable with that type of data being used by companies to create a personal profile, or with it being collected from, or shared with, other companies.

So who is saying no?

Right now, some industries say no, or they add to the fog by obfuscating their way around the terminology when dealing with consumers and pretending the answer is no.

Some industries are exploiting the fog around the definition to keep tracking and profiling consumers, arguing that the data they are using is not ‘reasonably identifiable’, and thus that the privacy rules (which prohibit unrelated companies from sharing their customers’ personal information without consent) do not apply.

For example, we have seen industry claims that no one can be “reasonably identified” from the data used for targeted advertising. And law academic Katharine Kemp has highlighted the disingenuous claims made to consumers by media and AdTech companies, especially when compared with what those companies privately tell brands about their targeting capabilities.

An industry player admitted that the reforms ‘will force us to stop doing some things that we probably shouldn’t have been doing anyway’

In an article about the law reform proposals, one industry player was quoted as admitting that the reforms will “force us to stop doing some things that we probably shouldn’t have been doing anyway”. Another said, of the use of hashed emails: “It’s very easy to link those two data sets together and then re-identify the personal information”. 

So if the law is clarified to state that pseudonyms like hashed emails (which facilitate data-matching at the individuated level) constitute “personal information”, the result will be “a big impact on existing industry practices”, because “there are thousands of AdTech companies and publishers using hashed emails” (to match up data about customers, profile and target them without consent).
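A minimal sketch shows why hashed emails are so useful for this kind of matching (the company datasets and email address below are entirely hypothetical, invented for illustration): because each company hashes the same email the same way, the pseudonyms collide across datasets, letting two unrelated businesses merge their profiles of one person without either ever handling a name.

```python
# Minimal sketch of data-matching via hashed emails.
# All datasets and values are hypothetical, invented for illustration.
import hashlib

def hashed(email: str) -> str:
    """Pseudonymise an email address with SHA-256."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# A retailer's customer data, keyed by hashed email.
retailer = {hashed("pat@example.com"): {"purchases": ["pram", "formula"]}}

# A publisher's logged-in audience data, keyed the same way.
publisher = {hashed("pat@example.com"): {"articles_read": ["parenting tips"]}}

# The same email always produces the same hash, so the pseudonyms match
# across the two datasets: the companies can join their profiles of the
# same individual without either side ever seeing a name.
for key in retailer.keys() & publisher.keys():
    print({**retailer[key], **publisher[key]})
```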

As one commentator has put it: “I can exploit you if I know your fears, your likely political leanings, your cohort. I don’t need to know exactly who you are; I just need to know that you have a group of attributes that is particularly receptive to whatever I’m selling or whatever outrage I want to foment amongst people. 

“I don’t need to know your name. And therefore, arguably depending on how you interpret it, I don’t need ‘personal information’. I just need a series of attributes that allows me to exploit you.”

The media publishing and AdTech industry players know that this is true, but they will do what they can to maintain fog around the phrase ‘reasonably identifiable’, so that their practices can stay in the shadows.

Industry is working to water down privacy reforms

So long as the wording in the statutory definition of ‘personal information’ is not clear and precise, the fog will not dissipate. The much-touted reforms will fail to stop these widespread, covert data-sharing practices.

Fear of the fog clearing is the reason that industry is pushing to water down the proposed reforms – or kill them off entirely. In what has been described as a “privacy counterstrike”, industry lobby groups representing digital platforms, media giants and advertising companies are planning a counteroffensive against the proposals in the Final Report.

One of the digital and AdTech industry’s objectives is to lobby for “a more reasonable definition [of personal information] … [the proposed definition] doesn’t seem workable”.

The proposals in the [Privacy Act Review] Final Report don’t deliver the clarity or strength that we need

And what is this radical and unworkable proposal in the Final Report that is deserving of such a focused counteroffensive? To change the word “about” to “that relates to”. That’s it. Three words. THREE WORDS. As per the Final Report, that’s the only actual change proposed to the statutory definition of “personal information”. 

The rest would be put in guidance, or in a list of things which might or might not be personal information, or in a list of things that organisations should “have regard to”, when “doing their own assessment” about what the definition might mean.

Which means that industry lobbying has already been successful, because this is a watering down of what was proposed by the Department in 2021, in their Discussion Paper on the review. Then, the proposal included adding a whole extra sentence! 

Perhaps the review team was persuaded that the sky would fall in if they dared to add the following words into the Act, as they originally proposed in 2021 – and hold onto your hats here, because this is scary radical stuff:

“An individual is ‘reasonably identifiable’ if they are capable of being identified, directly or indirectly.”

Consumer research shows that Australians are uncomfortable with companies using their personal information to profile them.

The 2021 Discussion Paper stated that such a definition “would cover circumstances in which an individual is distinguished from others or has a profile associated with a pseudonym or identifier, despite not being named”.

I know, vive la revolution it ain’t, but nonetheless this modest proposal from 2021 doesn’t even appear in the Final Report.

According to the Final Report, only one submission argued against the proposition that the definition should be amended to expressly include when someone can be distinguished from all others in a group (even if not named) in order to be profiled, targeted, or acted upon in some way.

And yet … the proposals in the Final Report don’t deliver the clarity or strength that we need.

What’s currently being proposed

Instead of proposing an amended definition of personal information that would clearly encompass the types of individuating identifiers that allow online behavioural advertising and other practices to go unchecked, the Final Report proposes a whole separate regime for regulating certain use cases, like direct marketing, online targeting and trading. 

(And they do this not by touching the definition of “personal information”, but by saying that, for these specially regulated use cases, sometimes de-identified or even unidentified data will be within scope as well.)

But then when you read the details of those new rules in Chapter 20 of the Final Report, the only substantial right is to opt out of being shown targeted advertising. Those proposals will not impact the collection of information, the building or sharing of profiles, or the use of our information to create ‘lookalike audiences’ (sketched below) at all; all the stuff behind the scenes gets a free pass.
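For readers unfamiliar with the term, here is a minimal hypothetical sketch of how a ‘lookalike audience’ can be built (the device IDs, attribute sets and similarity threshold are all invented for illustration): every step runs on pseudonymous profiles, and everyone swept into the audience is singled out and targeted without ever being named.

```python
# Minimal sketch of a "lookalike audience": starting from attribute
# profiles of known customers, find other users with similar profiles.
# All profiles, device IDs and thresholds are hypothetical.

seed = [{"parent", "inner_city", "ios"}]  # attributes of known customers
pool = {
    "device_a81": {"parent", "inner_city", "ios"},
    "device_c42": {"gamer", "regional", "android"},
    "device_f07": {"parent", "inner_city", "android"},
}

def similarity(a: set, b: set) -> float:
    """Jaccard similarity between two attribute sets."""
    return len(a & b) / len(a | b)

# Any device whose attributes sufficiently resemble a seed profile is
# added to the audience and targeted - no one is ever "identified".
lookalikes = [
    device for device, attrs in pool.items()
    if any(similarity(attrs, s) >= 0.5 for s in seed)
]
print(lookalikes)  # ['device_a81', 'device_f07']
```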

The current proposal leaves consumers open to harm

It’s not like privacy harms only come from direct marketing, online targeting or trading in personal information. The ability to distinguish one individual from others, in order to track, profile, locate, contact or influence them, is also the starting point for stalking, surveillance and abuse. 

Privacy harms can come from personal digital assistants, chatbots or generative AI giving erroneous advice, or disclosing our information to third parties.

Individuation online is the diesel that fuels the algorithmic engines, amplifying the voices of influencers, and powering the trains of online hate, misinformation and extremism

Individuation online is the diesel that fuels the algorithmic engines, amplifying the voices of influencers, and powering the trains of online hate, misinformation and extremism, which lead to a range of harms, pro-anorexia content among them.

Why the proposed approach won’t work

As we’ve said in our submission to the Privacy Act review, playing whack-a-mole by trying to regulate specific use cases is guaranteed to make the Act out of date the day it is amended. Regulating specific use cases will also just shift the battleground, so that arguments become about which business practices fall inside or outside those defined use cases. 

(And that’s before we even get to the proposed extra rules for ‘unidentified’ and ‘de-identified’ data as well, which would not be needed if the definition of personal information was fixed instead.)

We also already know that shifting definitional matters to OAIC guidance just doesn’t work. The practices I’ve described here take place now, despite the existing OAIC guidance that the definition of personal information (and therefore all the privacy rules) apply to data which enable an individual to be “distinguished” from all other members of a group, without needing to know their names.

What needs to happen to protect consumers

As the Final Report itself states, codifying OAIC guidance makes propositions “more readily enforceable”.

That’s why it is essential to amend the statutory definition of ‘personal information’ in the Privacy Act, which applies across all industries, and all use cases, and cannot be ignored.

So dear Attorney-General, please take this historic opportunity to strengthen but simplify and clarify the law. Just one extra sentence will do it:

“An individual is ‘reasonably identifiable’ if they are capable of being distinguished from all others, even if their identity is not known.”

That one extra sentence would clear the fog, protect Australians the way they expect, simplify compliance, stop the disingenuous claims by industry, and bring Australian law closer to alignment with that of our trading partners, by building into the wording of the Act itself what is already in determinations and guidance from the OAIC.

It’s only one extra sentence, but it will make all the difference.

This article has been republished.

The post Op-ed: To fix the Privacy Act, we need one extra sentence appeared first on Vlog.

]]>