Data protection and privacy: Investigations, tips, guides and advice - Vlog

Scams getting worse where new protections won’t apply
/data-protection-and-privacy/protecting-your-data/articles/scams-getting-worse-where-new-protections-wont-apply
Thu, 02 Apr 2026

Criminals are moving toward online contact methods not covered by the government’s Scams Prevention Framework.

After years of a steep upward trend in the number of scams affecting Australians, reports to the authorities have begun to level off.

But we still lost more money in 2025 than in 2024, overwhelmingly due to investment scams. 

The latest report from the National Anti-Scam Centre – which combines data from Scamwatch, ReportCyber, the Australian Financial Crimes Exchange, IDCARE and the Australian Securities and Investments Commission – shows that slowing down the global scams juggernaut is as challenging as ever.

Reported scam losses peaked at $3.1 billion in 2022 and have fallen about 30% since then. But last year scammers still managed to steal a collective $2.18 billion from Australians.

Scammers focus on websites and social media platforms

There were 77,365 text scams reported in 2024, when scammers barraged Australians with fake alerts about package deliveries, government services, bank communications and more.

This tapered off in 2025, when significantly fewer text scams were reported (29,058). The world’s criminal scam organisations now appear to be focusing on websites and social media platforms, where losses increased by 21% compared to 2024.

Vlog director of campaigns and communications Andy Kelly says the changing trends put consumers at greater risk.


“The latest data shows that scammers are increasingly shifting from phone calls and text messages to online contact methods to target victims,” Kelly says. “The government cannot justify glaring holes in its proposed digital platform designation, which won’t capture email service providers, dating apps and online marketplaces.”

The federal government’s Scams Prevention Framework – which covers banking, telcos and digital platforms –  also leaves out app stores and gaming platforms, both of which have increasingly been exploited by scammers. Australians lost $139 million to romance scams in 2025, many of which would have been perpetrated by online contacts not covered by the framework. 

Global cooperation is key

Catriona Lowe, deputy chair of the Australian Competition and Consumer Commission (ACCC), says “collaboration and shared accountability” are needed both domestically and globally to gain the upper hand over the ever-evolving scams industry, adding that “scams are often described as a ‘wicked problem’ because they are complex, fast-evolving, and resistant to simple solutions”.

 “As Australia and indeed the world faces increasing sophistication in scam activity through artificial intelligence and the industrialisation of criminal syndicates through scam compounds, it is clear more needs to be done, quickly and at scale,” Lowe says.

To this end, Australia joined other G7 countries in early March to endorse a Call to Action to Combat Fraud at the United Nations and Interpol Global Fraud Summit.

In addition, more than 100 organisations from around the world endorsed a Public Private Partnership Framework to encourage and improve global cooperation in the fight against scams.

Fake gambling sites target vulnerable consumers

Betting and sports investment scams also saw an increase in both number of reports and total losses in 2025, with almost triple the losses ($2.4 million) of 2024.

Most of this was attributable to a phenomenon known as “scambling”, where promotions for online gambling platforms lead victims to scam websites on which every bogus bet is lost to criminals.

Sports investment scams, which made up a smaller percentage of losses, involve convincing victims to invest money in fraudulent online betting systems that promise high returns.

Strikingly, there was a 91.5% increase in reports from First Nations people about betting scams and a 93.5% increase in reports from people with a disability.


“We know losses remain high, but coordinated interventions are key to combating scams, and we will continue working together to strengthen efforts, including through the Scams Prevention Framework,” Lowe says.

Another critical factor is that Australians continue to report scams.

“Without people speaking up, we simply wouldn’t have the insights needed to track and disrupt scam activity,” Lowe says. “We encourage people to report suspicious activity so we can continue improving our understanding and response to scams.”

Why wouldn’t the bank help this identity theft victim?
/data-protection-and-privacy/protecting-your-data/data-laws-and-regulation/articles/why-wouldnt-the-bank-help-this-identity-theft-victim
Mon, 30 Mar 2026

Her questions were left unanswered because she wasn't a customer of the bank the fraudster used.


Need to know

  • An identity theft victim tried to get assistance from the bank where an account was set up in her name but was told they couldn’t help because she wasn’t a customer
  • The woman’s AFCA complaint against the bank was rejected on the same grounds
  • On 12 March, after this incident, AFCA gained new powers that allow it to investigate any bank involved in a scam or identity theft, whether or not the victim is a customer

When a debit card arrived in the mail in mid-January with Patricia*’s name on it she knew it wasn’t a good sign.

The card was for Great Southern Bank (GSB) – Patricia was not a customer and never had been. She realised her identity must have been stolen.

“I acted quickly to try to limit the damage – placing credit bans, reporting the matter to police, and contacting financial institutions  – but I’m very conscious that many people would not detect something like this as quickly,” she says.

Her prompt attention to the matter paid off, because the fraudster had attempted in short order to open accounts under her name with Afterpay, American Express, and Wisr (a personal loan provider). The credit ban Patricia had placed on her file blocked these applications, and all three providers quickly acknowledged that they were fraudulent.

She eventually discovered that her new driver’s licence had been stolen before she received it, and the fraudster had used it to open the GSB account.

But GSB was less than helpful.


“When I contacted the bank after discovering the fraudulent account, it was extremely difficult to get support because I was told repeatedly that I was not their customer,” Patricia says.

“Firstly, I wanted to understand how the account had been opened in my name, because that would indicate what personal information had been compromised and what steps I needed to take to protect myself. More broadly, what I was really looking for was acknowledgement and meaningful action from the bank regarding how this happened and how they would prevent it happening again, for myself and others.”

She wanted to know the email address and phone number that were used to open the account in her name. The bank’s reason for refusing to provide this may have been legitimate, but its representatives were also dismissive, unprofessional and rude, Patricia says.

“During multiple calls with their call centre I received inconsistent information about escalation, was refused access to the fraud team or a supervisor, and was essentially told there was nothing further they could do for me.”

AFCA unable to help

As of 12 March, AFCA gained new powers to investigate all banks involved in a scam, whether or not the victim is a customer.

At this point she felt that her only option was to lodge a complaint with the Australian Financial Complaints Authority (AFCA). She called out GSB for failing to respond appropriately to her situation and for not having adequate identification checks in place to make sure people opening accounts were who they said they were. The only piece of identification used by the fraudster was her driver’s licence.

But the bank doubled down on its unhelpfulness, appealing to AFCA to have the case dismissed on the grounds that Patricia wasn’t a customer, and that only customers can lodge AFCA complaints. AFCA conceded that this aligned with its legislative charter at the time.

In fairness, GSB is not an outlier when it comes to identity verification. Most banks only require a single primary form of identification to open a bank account, such as a driver’s licence or passport.

GSB: ‘We escalated the matter appropriately’

A GSB spokesperson tells Vlog that the bank is prohibited by both the Privacy Act and the Anti-Money Laundering and Counter-Terrorism Financing Act from sharing information about scam perpetrators (including identity thieves) with their victims. The bank says it responded appropriately to Patricia’s requests for help.

“We have strong sympathy for the affected individual and have worked with her, as well as relevant organisations, to help reduce the risk of further identity theft and fraud,” the spokesperson says.


GSB says it provided sound guidance, advising Patricia to report the incident to the police and place a ban on her credit file. (This good advice aligned with the steps she had already taken.)

But the bank admits that it could have done better.

“We believe we escalated the matter appropriately, but acknowledge our communications could have been clearer, and we are taking steps to improve how we communicate in situations like this.”

AFCA gains new powers

Had Patricia’s identity theft happened in mid-March rather than mid-January, AFCA’s response to her complaint may have been different.

As of 12 March this year, AFCA’s jurisdiction expanded to allow it to investigate scam complaints involving the unauthorised opening of accounts whether or not the complainant is a customer of the bank in question.

It means that when a scammer convinces you to send money from your bank account to an account the scammer has set up at another bank (known as a mule account), AFCA can open investigations into both banks.

“This is an important step to establishing a broader, more coordinated framework for looking at scam complaints and it reflects how scams operate in the real world,” an AFCA spokesperson says, adding that the change “strengthens transparency and accountability across the banking system by ensuring all parties involved in the movement of scam funds are accountable”.


As for Patricia’s case, AFCA says it “expects banks to engage with identity theft victims based on consumer expectations and good industry practice”.

Along with the AFCA complaint, Patricia also complained to GSB’s Customer Advocacy team.

“I decided to reach out in a more direct and personal way to set out the full context of what had happened and to see whether there would be any acknowledgement, accountability or rationale around the bank’s role in the situation. Unfortunately, that wasn’t the case,” Patricia says.

The bank maintained that it had made no mistakes since the fraudulent account was opened using a valid driver’s licence.

(*Editor’s note: Patricia is a pseudonym)

What is Grok and should Australia be blocking it?
/electronics-and-technology/internet/using-online-services/articles/what-is-grok-and-should-australia-be-blocking-it
Fri, 16 Jan 2026

Elon Musk’s xAI tool has raised alarms around the world for facilitating the creation of malicious content, including deepfake pornography.


Need to know

  • Grok, the artificial intelligence tool developed by Elon Musk’s company xAI, was recently blocked in Indonesia and Malaysia due to its ability to create malicious content
  • Britain’s media regulator, Ofcom, says sexualised images of children created by Grok users may amount to child sexual abuse material
  • Musk’s company X recently said it would prevent Grok users from editing images of real people to put them in revealing clothing in jurisdictions where this is illegal

An artificial intelligence (AI) tool developed by Elon Musk’s company xAI was recently banned in Indonesia and Malaysia and has raised serious concerns globally. It’s called Grok, and it gives users the capability to make highly sexualised images of people that look disturbingly real. 

As Indonesia’s Communication and Digital Affairs Minister Meutya Hafid recently put it, “The government sees non-consensual sexual deepfakes as a serious violation of human rights, dignity and the safety of citizens in the digital space.”

Britain’s media regulator, Ofcom, released a statement that says, “There have been deeply concerning reports of the Grok AI chatbot account on X being used to create and share undressed images of people – which may amount to intimate image abuse or pornography – and sexualised images of children that may amount to child sexual abuse material.”


Grok, which is included for X users who pay for a subscription, was launched in 2023. In 2024, an image generator feature was added that included something called ‘spicy mode’, which can generate pornographic content.

Australia’s eSafety Commissioner says the agency “has seen a recent increase from almost none to several reports over the past couple of weeks relating to the use of Grok to generate sexualised or exploitative imagery”, adding that it “will use its powers, including removal notices, where appropriate and where material meets the relevant thresholds defined in the Online Safety Act”.

Malicious content made easier

Abhinav Dhall, an associate professor at Monash University’s Department of Data Science and AI, says Grok has put powerful new technology into the hands of wrongdoers.

“Grok has made it easier to produce malicious content because it is directly integrated into X, so anyone can quickly tag it and request image edits. As it is so well integrated into the platform, the edited outputs also appear directly within the same public thread, which increases the visibility and reach of manipulated images,” Dhall says, adding that in many cases “the original poster may not even have the rights to the image they are uploading on the platform, which can make it easier for the edits to become potentially defamatory or unsafe”.

Dhall says Grok users should take steps to avoid images falling into the wrong hands.

“To reduce the risk of personal images being used to generate malicious content, users should be careful about posting clear, front-facing photos of their face, and should check and tighten privacy settings on their social media platforms,” Dhall says.

“It is also important to avoid posting children’s photos publicly. If you suspect your images have been misused, reverse image search can be applied to detect AI-generated content, and fake or harmful content should be reported to the relevant platforms as quickly as possible.”

X said in a previous statement that it removes illegal content from its platform including child abuse material and suspends the accounts of people who post it.

Musk has posted comments on the Grok backlash, saying critics of X “just want to suppress free speech”. In an X post on 15 January he said, “Grok is supposed [to] allow upper body nudity of imaginary adult humans (not real ones) consistent with what can be seen in R-rated movies on Apple TV.”


In a more recent announcement on X the company said “we have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing” in jurisdictions where this is illegal.

But it remains unclear how the company will block this functionality in certain locations, or which locations those may be.

Will mandatory codes stop the deepfakes?

On 9 March 2026, mandatory codes come into effect in Australia which impose new obligations on AI services to limit children’s access to sexually explicit content as well as to violent material and content related to self-harm and suicide. But enforcing such codes on mammoth AI companies based in the US and other countries has proven to be a tall order for Australian regulators.

Abhinav Dhall stops short of recommending that Grok be banned in Australia, saying it’s a matter of enforcing the current rules and compelling tech companies to stop harmful content.

“Australia already has laws covering image-based abuse, so the focus should be on making the penalties clear and ensuring it is easy for victims to report abuse and have content removed quickly,” Dhall says. “At the same time, social media platforms should be required to implement stronger guardrails to stop harmful edits before they spread.”

Meanwhile, amid the worldwide outcry about sexualised deepfakes, US Defense Secretary Pete Hegseth recently said in a speech at Musk’s company SpaceX in South Texas that the Pentagon will embrace Grok, along with Google’s generative AI engine.

Real estate agents, chemists, car hire companies and more under new privacy scrutiny
/data-protection-and-privacy/articles/real-estate-agents-car-hire-companies-under-new-privacy-scrutiny
Thu, 08 Jan 2026

Australia’s privacy regulator is reviewing the privacy policies of businesses collecting your personal data during in-person interactions.


Need to know

  • In recent years, Vlog has conducted several investigations that focused on the far-reaching permissions privacy policies give the businesses that write them
  • In 2023, we reported on the privacy policies of rental platforms, and last year we analysed the privacy policies of Australia’s ten most popular car brands
  • This month, the Office of the Australian Information Commissioner begins its first full-scale privacy policy review, focusing on information demanded by businesses in person

Very few of us read the privacy policies we passively consent to when engaging with a service provider. Fewer still would understand what these privacy policies actually say.

In recent years, Vlog has conducted several investigations that focused on the far-reaching permissions these documents give the businesses we regularly interact with.

In 2023, we reported on the privacy policies of rental platforms such as realestate.com.au’s Ignite as well as Ailo, Tenant Options, Rental Rewards, Snug, 2Apply and Simple Rent.

The conclusion? These RentTech platforms collected information that went well beyond what’s needed to assess a tenant’s ability to pay the rent. The questions often seemed designed to grab as much data as possible from people who had no choice but to provide it.

In 2024, we analysed the privacy policies of Australia’s ten most popular car brands to see how the vehicles monitored and tracked their drivers. Here again we found that the harvesting of personal driver information was often excessive, and the rights the manufacturers gave themselves to share the data with third parties were both far-reaching and vague.

The ACCC has estimated that it would take the average Australian 46 hours to read all the privacy policies they encountered in a month, the average length of which is about 6876 words.



All of this makes the Office of the Australian Information Commissioner’s (OAIC) recent announcement that it will begin its first large-scale review of privacy policies in early January 2026 more timely than ever.

What’s changing in privacy law?

The Privacy Act requires privacy policies to contain certain details, such as what information is collected, why it’s needed, how it’s used, and how it can be corrected if necessary. 

An update to the Act in 2024 means businesses will also be required (as of 10 December 2026) to specify in their privacy policies whether a computer program will be using your personal information to make decisions that could go against you, such as when an application for a rental home is rejected. 


In addition, the 2024 update gave the OAIC the power to issue infringement notices for Privacy Act violations without going to court. And it gives individuals the right to seek legal redress and financial compensation in certain cases for invasions of privacy or misuse of their personal information.

The OAIC’s privacy policy sweep takes a different approach from our investigations of online privacy documents. It will occur in the real world, focusing on information demanded by businesses in person, such as when a real estate agent asks you for personal details when you’re inspecting a rental property, or a car rental company presents you with a lengthy form before handing you the keys. The privacy policies of such businesses must include the above-mentioned information.

Not having the right information in a privacy policy – or not having a privacy policy at all – could lead to fines from the OAIC of up to $66,000.

Which types of businesses will be targeted?

The privacy policy sweep will focus on sectors where the OAIC believes there are particular power imbalances – also known as information asymmetries – between the business in question and the customers being asked to provide the information.


“When confronted with in-person requests for their personal information from retailers, licensed venues, car hire companies or real estate agents, consumers often don’t have access to all the information they might need to make an informed decision,” says Privacy Commissioner Carly Kind.

“This makes them vulnerable to overcollection of personal information and creates risks to their security and privacy.”

The OAIC says it will review the privacy policies of around 60 businesses from the following six sectors, with a particular focus in each case.

  • Rental and property – collection of individuals’ personal information during property inspections.
  • Chemists and pharmacists – collection of personal information for the purpose of providing a paperless receipt and collection of identity information to provide medication.
  • Licensed venues – collection of identity information to enable individuals to access a venue.
  • Car rental companies – collection of identity and other personal information to enable an individual to enter into a car rental agreement.
  • Car dealerships – collection of personal information to enable an individual to conduct a vehicle test drive.
  • Pawnbrokers and second-hand dealers – collection of identity information from individuals who wish to sell or pawn goods.

Transparent communication is critical

In the OAIC’s view, a business’s explanation of how it will use personal information should be open and transparent.

“The Australian community is increasingly concerned about the lack of choice and control they have with respect to their personal information,” Kind says.

“The first building block of better privacy practices is a clear privacy policy that transparently communicates how an individual can expect their information to be collected, used, disclosed and destroyed.

“In conducting a compliance sweep, the OAIC intends to ensure that entities are meeting their obligations to be transparent with consumers and customers about how they’re using the personal information they collect in-person.

“We hope this will also catalyse some reflection about how robust entities’ privacy practices are, and whether more can be done to improve compliance with the Privacy Act writ large.”


Five sophisticated scams to watch out for in 2026
/data-protection-and-privacy/articles/scams-to-watch-out-for-this-year
Thu, 08 Jan 2026

From the teen social media ban to interest rate hikes, here’s how scammers may try to get you this year.


Need to know

  • This year, watch out for these new ways scammers will be trying to fleece consumers
  • Criminals may exploit confusion surrounding the teenage social media ban and prospective changes to interest rates to convince Australians to hand over their money
  • The growing popularity of sales events, sporting matches and live performances is also attracting criminals trying to make a buck

Australians reported $312 million worth of losses to Scamwatch last year.

That number is down slightly compared to the previous year, as scam awareness improves and businesses and governments introduce measures to crack down on online criminals.

But new laws, advances in technology, shifts in the economy and other changes impacting our lives are providing scammers with new avenues to exploit – and novel ways to do so.

We’ve put together a guide to some of the latest efforts from the world of scams to help you know what to look out for this year.


1. Social media ban phishing

One of the biggest stories of last year was the federal government’s introduction of age limits on social media.

Since December 2025, popular platforms like Facebook, Instagram and TikTok have taken steps to prevent anyone under 16 from creating or holding an account.

Much of the coverage of this world-first initiative has focused on the impact on teens, but regulators are warning that criminals may take advantage of the upheaval to target all of us who use social media.

Platform impersonation

State and national bodies are warning that scammers may impersonate social media platforms, the federal government or police and claim you’re at risk of losing your account or being fined unless you share personal details or money to prove your age.

These phishing criminals may ask you to click a link to a fake website, provide your account username and password or upload sensitive identity documents to prove you’re old enough to be on social media.

Clicking on fake links can put your device at risk, while sensitive details like personal ID numbers can be used by scammers to steal money under your name.

Accounts for cash

Regulators say criminals may also contact young Australians and their families and offer to sell them fake IDs or access to age-verified accounts so they can avoid the ban.

The eSafety Commissioner says these operators are unlikely to ever provide what they’ve promised and warns they may try to develop an unsavoury relationship with the teens they talk to.

Scammers may target children trying to get accounts on social media following the teen social media ban.

Hi Mum, revamped

There’s also a risk that scammers might use news of the ban to breathe new life into a well-worn phishing exercise.

The “Hi Mum” scam – where criminals contact people at random, claiming to be their children who are in need of help after losing their phone – has been a favourite ploy of scammers in recent years.

The eSafety Commissioner and the ACCC say Hi Mum operators may tweak their approach and pose as older teenagers or young adults accidentally caught up in the social media ban.

Their messages may claim parents have to click on a link or share copies of a child’s ID documents in order to verify their age and allow them to keep using social media.

How to avoid them
  • Ignore requests for payment: None of the platforms targeted by the ban are requesting payment as part of their compliance with the laws. Any demand to send money to secure your account is a scam.
  • Double-check suspicious messages: Don’t act on unexpected texts or emails. Avoid falling for the Hi Mum scam by contacting family members on a number you’ve used before or found yourself. Ignore offers to help teens circumvent the ban with fake IDs or access to a verified account.
  • Check platform information: Social media companies complying with the ban should provide info on how they’re verifying people’s ages. Check a platform’s website using a link you’ve found yourself. It should also say if it’s employing a third party to help with verification efforts.

2. Shopping and delivery scams

The ACCC says shopping scams “surged” in 2025, becoming one of the most commonly reported cons of the year, while cyber security companies reported that the criminals running these schemes are expanding their methods to coincide with popular sales.

With events like Black Friday getting bigger every year and other perennial discounting periods like the End of Financial Year (EOFY) sales just around the corner, it’s likely we’ll see shopping scammers deploy more of their familiar cons in coming months.

Dodgy shopping sites

Look out for websites promising products at big discounts that are, in reality, phishing portals designed to steal your money and sensitive information.

Some sites are copies of the official pages of popular outlets, while others are “ghost stores” – wholly invented operations, claiming to be small local boutiques.

Shoppers placing orders through any of these sites are usually left waiting for products that never arrive, or find their purchases are poor-quality knock-offs.

Note that the scammers running these pages have managed to get them promoted on social media and into search engine results, so be wary of sponsored posts and ads too.

Fake parcel alerts

Scammers know many of us will be shopping online in this year’s sales and will likely play on our eagerness to see our valuable packages delivered to us safe and sound.

Criminals often impersonate courier companies and send SMS messages urging you to click on links to secure upcoming parcel deliveries, arrange re-delivery or pay fees to receive a parcel.

These links often lead to pages designed to harvest your payment information or other sensitive details.

Note that scammers can still spoof the sender name on their SMS messages, making them appear to come from trusted delivery services like Australia Post and giving them an air of authenticity.

How to avoid them
  • Don’t click on suspicious sale links: Don’t click on unexpected links claiming to connect you with shopping deals. Look up the store online and click on the first non-sponsored search engine result.
  • Check that a branded website isn’t a dodgy copy: Avoid websites claiming to be major retailers that are offering suspiciously big discounts on all products or those that have an unusual URL and inconsistent supporting information.
  • Scrutinise a store’s “local” connections: Avoid retailers that claim to be a small local business, but can’t be found on any maps of the town where they claim to be based and say in their fine print that their products ship from overseas.
  • Double-check delivery demands: Don’t click on unexpected links demanding that you take action over a parcel delivery. Contact the company that is claiming to contact you independently using details you’ve sourced yourself to confirm any requests for information or money.

3. Fake events and tricky tickets

Flaming sky lanterns are banned in Australia, so avoid events claiming to provide these.

One in five Australians have missed out on an event due to fake or undelivered tickets, according to research by PayPal, with many losing significant amounts of money.

Scammers have been employing a mix of methods to carry out these thefts. These include selling tickets to wholly fake events, as well as the long-standing practice of selling fake tickets to real events like popular concerts and sports matches.

In September last year, Western Australian authorities warned consumers not to buy tickets for sky lantern festivals or drone shows around Perth that were being promoted on social media, revealing such events didn’t exist.

This came after authorities in South Australia urged fans of a local AFL team to be on the lookout after fake tickets were sold for hundreds of dollars by scammers looking to cash in on interest in the club following its strong performance.

Meanwhile, a New South Wales man was charged for allegedly being involved in a similar scheme where more than 100 fake passes were sold to a popular music festival.

How to avoid them
  • Know what’s possible: Open flame lanterns that float into the sky are illegal in Australia, so a local event based around these is highly implausible.
  • Be sceptical of secrecy: Beware of events advertised on social media whose promoters claim tickets and the exact location will only be issued 48 hours before the event.
  • Stick to official sources: Watch out for tickets to major events being sold through social media. Ticketing for events at big stadiums and arenas is usually controlled by a large ticket company, which would usually be the authorised reseller.
  • Compare prices: All states have some form of anti-scalping law, which caps how much a legitimate ticket reseller can charge. The cap is typically a percentage markup on the original price (often 10%). Overcharging could be a sign of a scam, so compare what you’re being offered to the ticket’s original sale price.
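For a quick gut check, the price comparison above can be sketched in a few lines of Python. The 10% figure and the prices are illustrative only; anti-scalping caps vary by state, so check the rules where the event is held.

```python
# Rough resale-price check based on a percentage markup cap.
# The 10% default is illustrative; actual caps vary by state.
def within_resale_cap(original_price: float, asking_price: float,
                      cap_pct: float = 10.0) -> bool:
    """True if the asking price is within the original price plus the cap."""
    return asking_price <= original_price * (1 + cap_pct / 100)

print(within_resale_cap(150, 160))  # a $150 ticket listed at $160: True
print(within_resale_cap(150, 450))  # the same ticket listed at $450: False
```

If the asking price fails a check like this, treat the listing with suspicion and look for the ticket through the event’s authorised seller instead.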

4. Pump and dump schemes

The corporate regulator is warning anyone interested in investing this year to watch out for “pump and dump” schemes following a rise in reports of this type of scam in recent months.

A pump and dump is when people with a financial interest in a small company or obscure asset spread misleading rumours online in order to inflate the price of their investment.

Once their asset has been sufficiently “pumped,” these unscrupulous operators will “dump” (sell) their share for a profit. The following fall in the asset price often results in those who bought into the hype losing money.

Meanwhile, with inflation on the rise again, some market watchers expect the Reserve Bank to raise interest rates this year.

Such announcements often spur borrowers and savers to see where they could be getting a better deal, so scammers may use these times to spruik dodgy investment opportunities or fake loans.

How to avoid them
  • Be careful of buying into hype: A rush of advertising, influencer and celebrity endorsements or online forum comments telling you to invest in a particular company could be the beginning of a pump and dump scheme.
  • Follow up on communication: Your bank or other legitimate financial institutions shouldn’t contact you and create a sense of panic about your finances or advise you to make sudden changes. Verify any suspicious messages using contact details for the bank or institution you’ve found yourself.
  • Know the common red flags: Beware of suspicious schemes involving cryptocurrency or requiring you to download remote access software. Watch out for conversations on social media or messaging platforms that unexpectedly turn to investing.
  • Do your research: You should be able to find plenty of information about a legitimate investment company by searching online.

5. AI video clones

At the end of last year, NAB intervened to stop a customer from sending $100,000 to someone appearing to be Hollywood actor Kevin Costner.

Suspicious about the requested transfer, the bank says it discovered that the “Kevin” the customer had been speaking to via video call was a copy created by scammers using AI – one so realistic it had convinced the customer she was talking to the real actor and that he needed the money.

Scammers are likely to deploy more AI-generated clones to aid their efforts this year. Image: Meta

Rapid improvements in generative AI will be one of the consistent stories of this year, and scam victim support organisation IDCARE says it expects to see more cases of criminals taking advantage of these advances to better clone the voices and faces of individuals who can lend credibility to their schemes.

We’ve previously pointed out the devastating impacts of audio deepfakes used in phone-based scams, but combined with the latest visual cloning technology to create video messages, they now pose a greater threat.

How to spot them
  • Be realistic: A celebrity is unlikely to ever contact you asking for money. If the request is coming from someone you know, verify it by contacting the person using details you’ve used before or found yourself.
  • Check the source: See where the video came from. Official accounts of legitimate organisations or individuals are unlikely to create AI videos of themselves or their representatives.
  • Read their lips: The audio in an AI video may not always match the mouth movement of the person depicted. Watch for instances of dodgy lip-syncing.
  • Check if it looks too good: AI clones sometimes have an airbrushed, over-polished look. Check whether the hair, lighting and skin tone look believable. Beware of unnatural blinking or flickering around the eyes.
  • Look at the body parts: AI struggles with hands – if these appear in the video, check that they look realistic. Look also at faces for any unusual asymmetries.
  • Once more with feeling: Look for unusual facial expressions that don’t match the tone of what’s being said.

The post Five sophisticated scams to watch out for in 2026 appeared first on Vlog.

AI is increasingly invading our medical privacy as regulation struggles to keep up /data-protection-and-privacy/data-collection-and-use/how-your-data-is-used/articles/ai-used-by-medical-professionals Mon, 20 Oct 2025 13:00:00 +0000 /uncategorized/post/ai-used-by-medical-professionals/ Patient scans shared without consent, AI scribes recording doctor visits – medical professionals' AI use is raising big concerns.


Need to know

  • Around 22% of GPs are now using AI scribes to record during patient visits, and some scribes are also proposing diagnoses and treatments
  • Profit-driven medical businesses have used large volumes of patient data to train AI tools without patients' knowledge or consent 
  • Though consent is a cornerstone of privacy law, the Privacy Commissioner says consent isn't always required when it comes to AI and patient data

It seems the artificial intelligence (AI) industry is grabbing our personal information as fast as it can – before regulations end up enforcing what the community actually wants. 

Among other indicators, the final report of the Australian Competition and Consumer Commission’s (ACCC) Digital Platform Services Inquiry, released in March this year, confirmed that the vast majority of us don’t want our personal data used to train AI tools without our consent. 

In a consumer survey that was part of the report, 83% of Australians said our approval should be mandatory. This aligns with research from the agency tasked with protecting our personal information, the Office of the Australian Information Commissioner (OAIC), which found that 84% of Australians want to have control over their personal information, including the right to demand that it be deleted. 

In mid-October, Roy Morgan published research showing that 65% of Australians think AI “creates more problems than it solves”, though 12% hailed its ability to advance medical science. 

[The Privacy Act’s requirements] are constraining innovation without providing meaningful protection to individuals

Productivity Commission

There is considerable tension between those seeking to protect our data and those that see it as the key to improving products and services – and boosting profits – especially the big tech companies.  

Agencies of the federal government are also on board with this notion. According to a recent report by the Productivity Commission (PC), our personal data shouldn’t be locked away behind a wall of inflexible regulations. 

The Privacy Act’s requirements “are constraining innovation without providing meaningful protection to individuals”, the report says. It concludes that making it easier to harness data could add up to $10 billion a year to the country’s economy.

To that end, the PC calls for exempting some businesses from the requirement to obtain informed consent before accessing a person’s data, one of the key pillars of the Privacy Act. In the PC’s view, some businesses shouldn’t have to do this as long as they commit to acting in the person’s best interests when it comes to handling their privacy.  

The report goes on to argue that giving consent has become a meaningless exercise, since no one has the time to read whatever they’re consenting to. 

While businesses are required to have a privacy policy under the Privacy Act, these sprawling documents are all but impossible to understand. The ACCC has estimated that it would take the average Australian 46 hours to read all the privacy policies they encountered in a month, the average length of which is about 6876 words. 
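Those figures imply a striking number of policies. A back-of-envelope calculation makes the point; the reading speed of roughly 240 words per minute is our assumption, not part of the ACCC's estimate.

```python
# Back-of-envelope: how many average-length privacy policies fit into
# the ACCC's 46-hours-a-month estimate? The reading speed is an
# assumption; the other figures come from the ACCC estimate above.
WORDS_PER_POLICY = 6876     # average policy length cited by the ACCC
HOURS_PER_MONTH = 46        # ACCC reading-time estimate
READING_SPEED_WPM = 240     # assumed words per minute

words_read = HOURS_PER_MONTH * 60 * READING_SPEED_WPM
policies_per_month = words_read / WORDS_PER_POLICY

print(f"{words_read:,} words, or roughly {policies_per_month:.0f} policies a month")
```

On those assumptions, that is well over 600,000 words a month, or close to a hundred full policies – clearly more than anyone actually reads.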

These consent protocols are clearly not working as intended, but Privacy Commissioner Carly Kind has taken issue with the idea of watering down privacy regulations, making the case that protecting privacy and boosting productivity are not mutually exclusive and that informed consent is a critical consumer right. 

Patient scans used to train AI without consent 

But there have been cases in which the OAIC – where Kind is one of three commissioners – has allowed businesses to surreptitiously harvest our data to their own ends. 

In September last year, for instance, the news outlet Crikey reported on the case of Australia’s biggest radiology chain, I-Med Radiology Network, which entered a joint venture with the start-up AI platform Harrison.ai in 2019.

The plan was to use I-Med’s patient scans to train an AI tool called Annalise.ai, which has now reportedly been approved for use in over 40 countries and heralded as a gamechanger.

The quantity and quality of the I-Med data – around 30 million patient scans and the resulting diagnostic reports from Australia and other countries – was key to the tool’s success.

It is clearly a money-maker for the business. In April last year, the Australian Financial Review reported that the business was on track to record $1.35 billion in revenue for the financial year. 

(Harrison.ai’s flagship product was an AI model that was trained on 800,000 chest x-rays sourced from one of I-Med’s 270 clinics in Australia, Crikey reported. It owns Annalise.ai, which is now called Harrison.ai Radiology.) 

The Crikey article says there is no evidence that informed consent was obtained from the patients. Based on the above-mentioned surveys, this would likely have run counter to many of their wishes. Neither I-Med nor Harrison.ai have disputed this. 

Consent is only required in limited circumstances by the Australian Privacy Principles, and will not always be required when entities use personal information for AI training

Office of the Australian Information Commissioner spokesperson

But it appears there is some leeway on the informed consent requirement. In July this year the OAIC ruled that I-Med had not contravened the Privacy Act since the data had been de-identified and could no longer be defined as personal information.

The ruling suggests that, where privacy issues are not at play, it’s not the OAIC’s role to prevent businesses from grabbing our data to train AI, with or without our approval. 

An OAIC spokesperson tells Vlog that “the issue of consent is always a highly relevant factor”, but the protections of the Australian Privacy Principles (APPs) no longer apply when patient data is de-identified. 

“However, strong de-identification is challenging, and whether something is de-identified is context dependent. Data that is de-identified when subject to strict controls – as it was in the I-MED case – may not be de-identified in other contexts, such as if it is released publicly,” the spokesperson says. 
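To make the distinction concrete, here is a toy sketch of stripping direct identifiers from a patient record. The field names are hypothetical, and, as the OAIC notes, real de-identification is much harder than this: combinations of the remaining fields can still re-identify someone in context.

```python
# Toy sketch: removing direct identifiers from a hypothetical record.
# This is NOT robust de-identification: indirect details such as a rare
# finding plus a scan date can still re-identify a person in context.
DIRECT_IDENTIFIERS = {"name", "medicare_number", "date_of_birth", "address"}

def strip_direct_identifiers(record: dict) -> dict:
    """Drop fields that identify the patient on their own."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

scan_record = {
    "name": "Jane Citizen",
    "medicare_number": "1234 56789 0",
    "date_of_birth": "1980-01-01",
    "address": "1 Example St",
    "scan_type": "chest x-ray",
    "finding": "no acute abnormality",
}
print(strip_direct_identifiers(scan_record))
```

The point of the sketch is the caveat in the comments: once data like this leaves a tightly controlled environment, what counted as de-identified may no longer be.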

The right balance between privacy regulation and the development of new AI tools remains a work in progress.

While industry guidance from the OAIC stipulates that people should be informed when their data is being used to train AI, “consent is only required in limited circumstances by the APPs, and will not always be required when entities use personal information for AI training”, the spokesperson added. 

In other words, we have no copyright on the content of our bodies. Medical clinics are free to use our data for commercial purposes without telling us, and businesses can profit handsomely. 

In a joint business venture, Australia’s largest radiology chain I-Med Radiology made millions of patient scans available to the AI firm Harrison.ai without the patients’ knowledge or consent.

The rise of AI scribes in GP consultations

All of this is pertinent to a related development – the growing use of AI tools by GPs and specialists to record patient visits, known as AI scribes. Currently around 22% of GPs are using AI scribes according to polling by the Royal Australian College of General Practitioners (RACGP). Popular models include Lyrebird, Heidi Health, Amplify+, i-scribe and Medilit, but there are many others. 

The RACGP has tentatively embraced some uses of AI scribes, especially where they can reduce the administrative burdens on doctors and free them up to concentrate more fully on preventative care. 

But the college has also expressed concern about AI tools being developed by profit-seeking tech firms without the oversight of medical clinicians. “Value to technology company shareholders might be prioritised over patient outcomes,” the RACGP wrote in a position statement on the issue. 

Of particular concern is the reliability – and the legality – of AI in making medical recommendations. The Therapeutic Goods Administration (TGA) has recently gone on record regarding the issue, saying medical professionals report that AI scribes “frequently propose diagnosis or treatment for patients beyond the stated diagnosis or treatment a clinician had identified during consultations”.

Such a functionality would mean AI scribes are medical devices that require pre-market approval, the TGA says. By dodging regulation, they are potentially being supplied to the medical industry in breach of the Therapeutic Goods Act. 

AI scribes can produce errors and inconsistencies and cannot replace the work GPs typically undertake to prepare clinical documentation

RACGP spokesperson

The RACGP hasn’t taken a position on whether AI scribes should be regulated by the TGA, but it does instruct GPs to obtain consent from patients before using them, to double check that the information they record is accurate, and to have a backup plan in place in case there’s a glitch with the AI scribe. 

“AI scribes can produce errors and inconsistencies and cannot replace the work GPs typically undertake to prepare clinical documentation,” a RACGP spokesperson tells Vlog. 

“GPs and other doctors must carefully check the output of an AI scribe to ensure its accuracy. Where an AI scribe performs its expected function – summarising information for independent GP review and decision making – it does not have a therapeutic use. Diagnosis, however, is outside the scope of an AI scribe.” 

AI scribes are currently used by around 22% of GPs in Australia.

Market use shouldn’t pre-date regulation

One very concerned citizen is AI expert Dr Kobi Leins, who was recently told to take her business elsewhere after asking that AI not be used during a specialist visit for her child. Leins cancelled the appointment, not least because she was familiar with the AI model in use and wasn’t impressed by its privacy and security features. She didn’t want it capturing her child’s data.

“There is no need for many of these tools, and fundamentally, we need to ask why they are being pushed so hard and where money would be better spent in a healthcare system where doctors have time to listen to patients,” Leins says. 

“Individuals do not have the skills nor the capacity to review these tools. Regulatory bodies need to review them in a way that ensures privacy, manages risk and trains staff about their limitations and where they are safe to use. It’s about individual privacy, but also about group privacy. There are potentially grave harms based on racial and gender and other biases in the data these tools rely on.” 

Leins points to a recent study funded by the UK’s National Institute for Health and Care Research which found that when social workers used Google’s popular AI model ‘Gemma’, it downplayed women’s physical and mental health issues as compared to men’s in its case note summaries. In these cases, it’s the AI companies in the driver’s seat rather than healthcare workers, she argues. 

Regulatory bodies need to review [AI scribes] in a way that ensures privacy, manages risk and trains staff about their limitations and where they are safe to use

AI expert Dr Kobi Leins

Leins is not alone. One concerned parent told us they didn’t trust the data handling practices of AI companies. 

How the data is managed “is dependent on your GP and the policy of whatever scribe they’re using, which you’re unlikely to know when they ask you to sign over consent,” the parent says. “And like with most companies, you have no control over what happens when they get hacked and all your personal health information ends up on the dark web. I generally consent for my data to be scribed, but not my kid’s.” 

The privacy policy of one of the most popular AI scribes, Heidi Health, is not reassuring. It says the business may share the personal information it captures with employees, third party suppliers, related companies, anyone it transfers the business to, ‘professional advisers, dealers and agents’, government agencies, law enforcement and more. Your data may also be transferred overseas. 

Heidi Health tells Vlog that the company doesn’t share patient-identifiable or health information with external parties except where required by law, that its data handling practices comply with the Privacy Act, and that it doesn’t use patient data to train AI.

“We understand that clinicians and patients will only embrace new technology when they have complete confidence that their data is secure and used responsibly,” says Heidi Health head of legal and regulatory affairs Yass Omar, adding that the company is the only AI scribe in Australia certified to ISO 27001, “a globally recognised standard that reflects the strength of our information security management systems”.

But concerns remain about AI scribes in general. Another parent recently encountered one while taking her daughter to a specialist. She was told the data would be deleted after it had been reviewed, “but I only had her word to go on. What are the privacy implications here? And what mistakes can and does it make? Does the doctor look at the summary after each appointment at the end of the day while it is fresh in their mind, to make sure that the scribe accurately captured the info?” 

Notices about the use of AI scribes by medical professionals are becoming a more common sight in healthcare settings.

Just like asking Dr Google 

Professor Enrico Coiera, who is the director of both the Centre for Health Informatics, Australian Institute for Health Innovation at Macquarie University and the Australian Alliance for AI in Healthcare, tells Vlog that one of his biggest concerns is that product development is far outpacing regulatory oversight. 

“Generative AI products in particular are being updated constantly. This makes it very hard for the traditional safety guardrails we rely on, like regulation, to make sure these new technologies are safe.

“Much of this kind of AI is never marketed as ‘medical grade’, but as a general-purpose tool. So it is never assessed for its safe and effective use in healthcare.”

Much of this kind of AI is never marketed as ‘medical grade’, but as a general-purpose tool

Australian Alliance for AI in Healthcare director, Professor Enrico Coiera

Coiera says this is much like asking your search engine a health question. If an AI scribe is suggesting diagnosis or treatment, it’s a medical device that needs to be regulated by the TGA, he says. 

Referring to the I-Med case, Coiera says, “patients should be asked to consent to the use of their data for building AIs, and especially so if there is a risk their information is identifiable”. 

He recommends that patients read any consent forms carefully before signing. 

“If they are uncomfortable with their data being used for AI development, they should discuss that with their care provider.” 

Coiera is a believer in the capacity of AI to advance medical science, as long as people’s privacy is protected. 

“As long as I am comfortable that my data is stored securely and is de-identified before use, I would agree to its use for non-commercial research purposes.” 

Pathology lab becomes the first business to be fined in Australia for a privacy breach /data-protection-and-privacy/protecting-your-data/data-laws-and-regulation/articles/australian-clinical-labs-fined-for-data-breach Sun, 12 Oct 2025 13:00:00 +0000 /uncategorized/post/australian-clinical-labs-fined-for-data-breach/ In a recent court judgement – the first of its kind under the Privacy Act – Australian Clinical Labs was fined $5.8 million.


Need to know

  • In a groundbreaking judgement, Australian Clinical Labs was ordered to pay $5.8 million in penalties for violations of the Privacy Act 
  • The penalties would have been much higher had the data breach occurred after 13 December 2022, when the maximum penalty rose from $2.22 million per contravention to as much as $50 million
  • Similar rulings could be made against Optus and Medibank, which have both been taken to court by the Office of the Australian Information Commissioner

In February 2022 the personal medical information of 223,000 people fell into the hands of scammers after the IT systems at Australian Clinical Labs (ACL) were breached.

It was a major cybercrime incident, yet ACL dragged its heels – first by failing to properly investigate whether a data breach had occurred and then by taking too long to inform the Office of the Australian Information Commissioner (OAIC) once the business knew its systems had been infiltrated.

In a recent court judgement – the first of its kind under the Privacy Act – ACL was ordered to pay $5.8 million in penalties for these and other contraventions of privacy legislation.  

Most of the penalty ($4.2 million), however, was for failing to protect the data in the first place, something that far too many companies have failed to do.

Australian Information Commissioner Elizabeth Tydd calls the unprecedented legal outcome “a notable deterrent and signal to organisations to ensure they undertake reasonable and expeditious investigations of potential data breaches and report them”.

The judge in the case said ACL’s negligence “had at least the potential to cause significant harm to individuals whose information had been exfiltrated, including financial harm, distress or psychological harms, and material inconvenience” and could have “a broader impact on public trust in entities holding private and sensitive information of individuals”.

ACL penalty could have been a lot higher 

Trust in how our data is collected and protected is already low. In September, Privacy Commissioner Carly Kind found that Kmart Australia had breached Australians’ privacy by grabbing their personal information without their consent in 28 of its stores through facial recognition technology (FRT), a system ostensibly designed to prevent refund fraud. How safe this data is remains unclear. (The Privacy and Information Commissioners are both part of the OAIC.)

Kmart’s secret use of FRT was originally uncovered through a 2022 Vlog investigation, which also revealed the use of FRT at Bunnings and The Good Guys. The Privacy Commissioner recently made a similar ruling against Bunnings, a case that is currently under review by the Administrative Review Tribunal.

The financial penalties against ACL may be just the beginning – and they’re on track to get a lot higher

The OAIC did not pursue financial penalties in the Kmart case, but the financial penalties against ACL may be just the beginning – and they’re on track to get a lot higher. 

In August, Commissioner Tydd launched court proceedings against Optus following a cyberattack in September 2022 that resulted in the personal information of around 9.8 million Australians falling into the hands of criminals.

And in June last year, the OAIC filed a court case against Medibank Private following an October 2022 data breach that saw the sensitive health information of around 9.7 million Australians disappear into the criminal underworld.

The penalties against ACL would have been much higher had the data breach occurred after 13 December 2022, when maximum penalties went from $2.22 million per contravention of the Privacy Act to as much as $50 million. (Alternatively, fines can equal three times the benefit derived from the conduct or up to 30% of a business’s annual turnover per contravention.)

Should the Optus and Medibank cases result in financial penalties, they would be determined according to the regime in place before 13 December 2022. But it seems that data breaches aren’t going away anytime soon, and whether the threat of higher fines will stop the breaches is an open question. 

A turning point for privacy law

Referring to the recent ACL case, Commissioner Kind says “this outcome represents an important turning point in the enforcement of privacy law in Australia. For the first time, a regulated entity has been subject to civil penalties under the Privacy Act, in line with the expectations of the public and the powers given to the OAIC by parliament”. 

For the first time, a regulated entity has been subject to civil penalties under the Privacy Act, in line with the expectations of the public and the powers given to the OAIC by parliament

Privacy Commissioner Carly Kind

“This should serve as a vivid reminder to entities, particularly providers operating within Australia’s healthcare system, that there will be consequences of serious failures to protect the privacy of those individuals whose healthcare and information they hold.”

Kmart’s facial recognition technology broke the law, commissioner rules /data-protection-and-privacy/data-collection-and-use/who-has-your-data/articles/oaic-ruling-kmart-frt Wed, 17 Sep 2025 14:00:00 +0000 /uncategorized/post/oaic-ruling-kmart-frt/ A three year long investigation by the Privacy Commissioner has confirmed what Vlog suspected

Retail giant Kmart has been found to have breached the Privacy Act with its facial recognition program, three years after a Vlog exposé revealed the invasive technology was in use across Australia.

In 2022, Vlog reported that Kmart, along with Bunnings and The Good Guys, was capturing the biometric data, or unique facial features known as a ‘face print’, of customers entering their stores.

Our investigation prompted the Office of the Australian Information Commissioner (OAIC) to launch a probe into whether privacy laws had been breached by the facial recognition technology (FRT).

In 2024, Privacy Commissioner Carly Kind found that Bunnings had breached the law and today an announcement was made that Kmart had done so too. 

Kmart sought to justify its use of FRT in stores between June 2020 and July 2022 as a measure to prevent refund fraud. However, the Commissioner said Kmart did not seek customer consent to collect biometric information and its collection was not proportional, as there were other means available to address refund fraud. 

“I do not consider that the respondent (Kmart) could have reasonably believed that the benefits of the FRT system in addressing refund fraud proportionately outweighed the impact on individuals’ privacy,” Kind says. 

No penalty 

The OAIC did not seek a financial penalty against Kmart in this case, just as it didn’t in the Bunnings case last year.

In a statement, a Kmart spokesperson says the company is “disappointed” with the ruling and is reviewing options to appeal the determination.

“Like most other retailers, Kmart is experiencing escalating incidents of theft in stores which are often accompanied by anti-social behaviour or acts of violence against team members and customers,” the spokesperson says. 

Commissioner Kind says that despite the two rulings against Bunnings and now Kmart, FRT was not ‘banned’ in Australia. 

“The human rights to safety and privacy are not mutually exclusive; rather, both must be preserved, upheld and promoted. Customer and staff safety, and fraud prevention and detection, are legitimate reasons businesses might have regard to when considering the deployment of new technologies. However, these reasons are not, in and of themselves, a free pass to avoid compliance with the Privacy Act,” she says.

Qantas data hack exposes alarming gap in consumer protections /data-protection-and-privacy/protecting-your-data/data-privacy-and-safety/articles/qantas-data-breach Wed, 02 Jul 2025 14:00:00 +0000 /uncategorized/post/qantas-data-breach/ Vlog repeats call for an airline ombuds scheme following a massive data breach at Australia's largest carrier.

The post Qantas data hack exposes alarming gap in consumer protections appeared first on Vlog.

Vlog is reiterating urgent calls for an airline ombuds scheme after revelations of a widespread data breach at Australia’s biggest airline, Qantas. 

On Wednesday, Qantas revealed that it had detected “unusual activity” on a platform used by its contact centres earlier in the week, and that initial investigations found data such as customer names, emails, dates of birth and frequent flyer numbers had been compromised.

The airline says some six million customers had data stored on the service platform in question and that a “significant” amount of customer data had likely been stolen. 

Qantas says that credit card details and passport details were not held in the system that was breached. 

Time for an ombuds scheme 

Bea Sherwood, senior campaigns and policy advisor at Vlog, says the data hack highlights the urgent need for a strong aviation ombuds scheme to support airline customers and facilitate complaints when events like this occur. 

“This is not the first time Qantas customers have had issues with the airline, with Vlog giving the company a Shonky Award in 2022 for unusable flight credits, delayed flights, and more,” she says.  

There is currently no equivalent independent body for airline customers to raise concerns – a huge gap in our consumer protection system

Vlog senior campaigns and policy advisor Bea Sherwood

“Despite ongoing issues with Qantas and other airlines since, customers still don’t have an effective means of directing or resolving their complaints. The Australian Financial Complaints Authority and the Telecommunications Industry Ombudsman consider financial and telco complaints, including about data breaches,” she says. “There is currently no equivalent independent body for airline customers to raise concerns – a huge gap in our consumer protection system.”

“As airlines become more data driven, a robust ombuds scheme to protect consumers is needed more than ever,” says Sherwood.

Australian super system caught unprepared for cyber attack /data-protection-and-privacy/protecting-your-data/data-privacy-and-safety/articles/superannuation-funds-data-breach Thu, 03 Apr 2025 13:00:00 +0000 /uncategorized/post/superannuation-funds-data-breach/ Banks, telcos and social media platforms are required to protect Australians from scams, but the super industry is exempt

The post Australian super system caught unprepared for cyber attack appeared first on Vlog.


Need to know

  • At least five superannuation funds have been targeted in a data breach
  • The government's Scams Prevention Framework (SPF) requires banks, telcos and social media platforms to protect Australians from scams, but the super industry is exempt
  • Australians are urged to log in to their super account to check details are correct and report any unusual emails or text messages from their fund 

Members of the super funds Australian Retirement Trust, AustralianSuper, Hostplus, Rest, Insignia and possibly others will not be having a relaxing weekend.

The major funds recently suffered a cyber attack from criminals who reportedly had familiarity with the Australian super system.

Passwords were apparently harvested from the dark web, and the latest media reports suggest that only AustralianSuper members have so far been hit with fraudulent withdrawals.

The question for affected super members – as well as for the industry as a whole – is which anti-scam protections were in place, and why they didn't work.

Cyberattack ‘shocking and unsettling’

The government's recently passed Scams Prevention Framework (SPF) requires banks, telcos and social media platforms to meet new obligations to protect Australians from scams, or risk fines of up to $50 million.

But the legislation doesn't apply to superannuation funds. The recent cyber attacks on a number of major funds show why this needs to change.

“Reports of this cyberattack on at least five big super funds are shocking and unsettling,” says Super Consumers Australia CEO Xavier O’Halloran. “This is people’s financial future at risk. And the details and extent of this attack are still emerging.”

This is people’s financial future at risk. And the details and extent of this attack are still emerging

Super Consumers Australia CEO Xavier O'Halloran

The breach follows continual warnings from regulators and consumer advocates that the super sector as a whole is falling behind on cyber-resilience and scam protections. 

As Australians are legally required to put their money into super, this can’t be a good thing.

“Today’s news is chilling when we know super funds aren’t doing enough to protect Australians’ retirement savings,” O’Halloran says. 

“We’re calling on the next Government to urgently extend the new protections to safeguard Australians’ retirement savings against fraudsters, scammers and cybercriminals.”

The affected funds have reportedly been working with the National Cyber Security Co-ordinator to figure out just how big this hack is. 

What to do if you’re concerned your super may be affected

If you’re concerned about today’s news, Super Consumers Australia has this advice:

  • If possible, log in to your super account to check your details are correct and change your password.
  • Watch out for communications from your super fund.
  • Contact your super fund if you see any unusual activity; for example, SMSs or emails about transactions or changes that you have not requested. 
