New York is stepping into the ring with algorithmic price discrimination, and businesses will have to warn their customers when they use personal data to set the prices those customers see.
The directive, buried in the state’s recent budget, requires companies to disclose when a price has been tailored to a shopper, using a prescribed notice: “This price was set by an algorithm based on your personal data.”

What the new law requires from businesses in New York
Businesses that use personal data (which can be as obscure as device identifiers, past purchases, or browsing behavior) to affect the price a given shopper sees will have to prominently alert people that the price is based on their details.
Any company that does use such methods must show the explicit disclosure, so shoppers know these numbers didn’t just come out of thin air.
Details like enforcement and penalties will be defined as regulators issue guidance, but the message from Albany is unambiguous: If algorithms are coming for personalized prices, consumers should get plain-English transparency before they part with their money. The law stops short of banning dynamic pricing more broadly, opting instead to target the line where personal data comes into play.
How companies are responding to New York’s pricing rule
It’s not clear how often retailers actually customize their prices. Some companies that have begun posting the disclosure in response to the law say they do not use personal data to set prices, according to reporting by The New York Times. Uber, for instance, is showing New Yorkers the notice while insisting its pricing reflects riders’ locations and demand, not their personal details.
Industry groups say the law is unclear and could chill routine pricing practices. The National Retail Federation has sued to stop the rule, but a federal judge declined to block it for now, leaving the disclosure requirement in place while the case continues.
Personalized pricing versus dynamic pricing explained
It’s important to distinguish between two practices that are often confused. Dynamic pricing responds to supply and demand — think surge pricing for rides or higher hotel rates on holiday weekends. Personalized pricing, on the other hand, operates at the individual level: specific user data is used to infer a shopper’s willingness to pay or likelihood of converting, and the price is tailored accordingly.
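The distinction can be sketched in a few lines of toy Python; the function names and numbers below are invented for illustration, not any company’s actual formula. Dynamic pricing applies one demand-driven multiplier that every shopper sees, while personalized pricing derives a different price per shopper from their data:

```python
def dynamic_price(base: float, demand_ratio: float) -> float:
    """Everyone sees the same surge-adjusted price; no personal data involved."""
    return round(base * max(1.0, demand_ratio), 2)

def personalized_price(base: float, user: dict) -> float:
    """A toy per-shopper markup from inferred willingness to pay (illustrative only)."""
    # Hypothetical signal: frequent past purchases nudge the modeled
    # willingness to pay upward, capped at a 25% markup.
    willingness = 1.0 + 0.02 * user.get("past_purchases", 0)
    return round(base * min(willingness, 1.25), 2)
```

In the dynamic case, two shoppers at the same moment get the same number; in the personalized case, the number depends on who is asking, which is exactly the line New York’s disclosure rule draws.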

Both price steering and personalization have been documented in academic studies of e-commerce. Researchers at Northeastern University, for example, found evidence that some sites changed prices or ranked search results differently depending on the user. Even A/B tests can shade into personalization when experiments key off attributes tied to a single shopper rather than broad segments.
The practice does not sit well with consumers. About 79% of Americans are concerned about how companies use their data, according to the Pew Research Center. And when that data is used to adjust the price at checkout, those concerns are heightened — especially if shoppers have a sneaking suspicion they’re paying more than everybody else for identical goods.
Why transparency in algorithmic pricing practices matters
Personalized pricing can deliver cheaper offers to some buyers, but it also has the potential to deepen inequities when attributes like location or device type serve as proxies for income or protected characteristics. The Federal Trade Commission, for example, has cautioned that opaque algorithmic decisions can lead to unfair or discriminatory outcomes — particularly when a company cannot explain how a model produced its result.
Disclosure is also a light-touch intervention. It does not ban experimentation or optimization; it pushes firms toward accountability and reminds consumers that a price isn’t always a neutral number. Clear notices also leave a paper trail that regulators can scrutinize for abuse and researchers can study to gauge how prevalent the practice is.
Legal and policy ripples to watch after New York’s law
Look for the courts to rule on the extent to which states can require algorithmic transparency. Regulators could also follow New York’s model. Europe’s GDPR already grants individuals rights of access to information about automated decision-making, and U.S. states including California and Colorado are pressing ahead with broader rules around automated profiling as well as consumer data rights.
Former Federal Trade Commission chair Lina Khan has cited New York’s disclosure requirement as important while also recognizing how much more needs to be done to regulate algorithmic pricing. Whether other states follow the model — or simply skip to constraints on what personal data can be used in pricing — will depend on how New York’s experiment fares in the real world.
One shift, though, is immediate: a business that charges you an amount derived from your data will have to tell you. That single sentence could change how businesses weigh the reputational cost of personalization against the revenue they expect it to generate.
