Image-hosting platform Imgur has blocked people in the UK from accessing its content.

Imgur is used by millions to make and share images such as memes across the web, particularly on Reddit and in online forums. But UK users trying to access Imgur on Tuesday were met with an error message saying "content not available in your region" - with Imgur content shared on other websites also no longer showing.

The UK's data watchdog, the Information Commissioner's Office (ICO), said it recently notified the platform's parent company, MediaLab AI, of plans to fine Imgur after probing its approach to age checks and use of children's personal data. The BBC has approached MediaLab AI for comment.

A help article on Imgur's US website, seen by the BBC, states that "from September 30, 2025, access to Imgur from the United Kingdom is no longer available". "UK users will not be able to log in, view content, or upload images. Imgur content embedded on third-party sites will not display for UK users."

The ICO launched its investigation into Imgur in March - saying it would probe whether the companies were complying with both the UK's data protection laws and the children's code. These require platforms to take steps to protect children using online services in the UK, including minimising the amount of data they collect from them.

A document published by the ICO alongside the launch of its investigation stated that Imgur did not ask visitors to declare their age when setting up an account. It said on Tuesday it had reached initial findings in its investigation and, on 10 September, issued MediaLab with a notice of intent to impose a fine.

"Our findings are provisional and the ICO will carefully consider any representations from MediaLab before taking a final decision whether to issue a monetary penalty," said Tim Capel, an interim executive director at the ICO. "We have been clear that exiting the UK does not allow an organisation to avoid responsibility for any prior infringement of data protection law, and our investigation remains ongoing."

The watchdog would not elaborate on what its findings were, nor the details of the potential fine, when asked by the BBC. "This update has been provided to give clarity on our investigation, and we will not be providing any further detail at this time," Mr Capel said in his statement.

'Commercial decision'

Some Imgur users and reports speculated as to whether Imgur moved to block UK users from its services rather than comply with child safety duties recently imposed on some platforms under the Online Safety Act. Among these are requirements for sites allowing pornography or content promoting suicide and self-harm to use technology to check whether visitors are over 18.

But both the ICO and Ofcom - the media regulator enforcing the Online Safety Act - said Imgur suspending access for UK users had been its own "commercial decision". "Imgur's decision to restrict access in the UK is a commercial decision taken by the company and not a result of any action taken by Ofcom," an Ofcom spokesperson told the BBC. "Other services run by MediaLab remain available in the UK – such as Kik messenger, which has implemented age assurance to comply with the Online Safety Act."

Imgur said in the help article on its US website that UK users could exercise their rights under data protection law and request to receive a copy of their data or request to delete their account.
And now we have a standoff with 4chan from which I fear Ofcom (the UK's communications regulator) is going to come out humiliated.

https://www.thinkbroadband.com/news/ofcom-update-on-online-safety-act-investigations

Ofcom is sending veiled threats to the services that decide to simply block the UK rather than submit to its censorship regulations:

Quote:
In response to the Online Safety Act, we are seeing more and more companies simply blocking access to UK users as it is not worth their time to deal with UK regulation, or, as we suspect with 4chan, because they politically believe that such regulation is draconian and needs to be fought as a matter of principle.

Ofcom states that it has closed such investigations against four file-sharing sites which have done this, confirming that blocking UK users is a legitimate response to its enquiries. It says it will continue to monitor those sites to ensure the blocks remain effective. The regulator regards these blocks as success stories, as the action "has significantly reduced the likelihood that people in the UK will be exposed to any illegal or harmful content". It has also meant a suicide forum is no longer available in the UK.

We obviously have no way of knowing the extent of the problems in relation to those sites; however, if they are distributing CSAM (or promoting topics like suicide, for that matter, albeit that topic is possibly more nuanced given recent political discourse), very few people would take issue with their unavailability. Eventually, however, this is likely to veer into greyer areas around other types of content, where people's views on censorship may be more divided.

Ofcom also takes issue with anyone who blocks UK users telling them how to work around the restrictions, such as by using services which may route traffic via other countries. If you ask your favourite AI how to circumvent the OSA restrictions, it says it doesn't want to help you, but does so anyway.

Quote:
"Today sends a clear message that any service which flagrantly fails to engage with Ofcom and their duties under the Online Safety Act can expect to face robust enforcement action. We're also seeing some services take steps to introduce improved safety measures as a direct result of our enforcement action. Services who choose to restrict access rather than protect UK users remain on our watchlist as we continue to monitor their availability to UK users."
- Suzanne Cater, Director of Enforcement, Ofcom

https://arstechnica.com/tech-policy/2025/10/4chan-fined-26k-for-refusing-to-assess-risks-under-uk-online-safety-act/

Quote:
If 4chan continues to ignore Ofcom, the forum could be blocked in the UK. And 4chan could face even bigger fines totaling about $23 million or 10 percent of 4chan's worldwide turnover, whichever is higher. 4chan also faces potential arrest and/or "imprisonment for a term of up to two years," the lawsuit said.

It seems likely that 4chan won't engage with Ofcom, arguing in the lawsuit that Ofcom is seeking to "control" the Internet, which is "predominantly an American innovation." A lawyer for 4chan, Ronald Coleman, previously told the BBC that Ofcom's enforcement of the OSA threatened "the free speech rights of every American."
On 24 February 1988, the United States Supreme Court voted 8-0 to overturn the $200,000 award granted to Reverend Jerry Falwell for the emotional distress of being parodied in Hustler, a pornographic magazine.

In 1983, Hustler had published a piece parodying Falwell's first sexual experience as a drunken, incestuous childhood encounter with his mother in an outhouse. Falwell, a religious conservative and founder of the political advocacy group the Moral Majority, sued Hustler and its publisher, Larry Flynt, for defamation. Falwell won the case, but Flynt appealed, which led the Supreme Court to hear the case because of its constitutional implications.
Wikipedia co-founder Jimmy Wales warns that a "political showdown" with the Labour government he backed at the last election may now be necessary to protect a free and open internet.

Jimmy Wales is a Labour supporter. He married into the party when he tied the knot in 2012 with Kate Garvey, who was Tony Blair's diary gatekeeper in No. 10 for a decade. He knows Keir Starmer and Peter Kyle personally, and likes them. Yet he is now threatening a "political showdown" with the government.

The Online Safety Act, legislation passed under the Conservatives and enthusiastically taken up by Labour, aims to protect internet users by boosting transparency from major platforms and giving people more control over what kind of content they see. The most significant restrictions will apply to Category 1 services – those with over seven million average monthly active users and a facility for users to share content with each other, both of which Wikipedia has. These rules include giving users the ability to filter out non-verified users.

On social media, if a user blocks another, they no longer see their content. Simple. But applied to Wikipedia, a free online encyclopaedia that relies on collaborative editing, it looks a lot more complicated.

"This makes no sense whatsoever," says Wales. "If you and I are in a debate about the contents of an article, and we've both been editing it, and I decide to stop you from doing any further editing on the article, I can just block you. You wouldn't even be allowed to read the article. This is crazy."

"It's really very poorly thought-out legislation. It feels like it was passed because they felt like they needed to do something, and this was something," he adds.

"The problem it's trying to solve of people in social media harassing you – well, this is not a problem in Wikipedia. You can't harass people on Wikipedia. You get banned immediately. And so that whole framework applied to Wikipedia is just nonsense, and yet Ofcom don't seem to be able to find a way to see their way around that."

Ofcom, the regulator responsible for implementing the legislation, has not yet determined that Wikipedia is Category 1. This was a key reason that the legal challenge brought by the Wikimedia Foundation (the charity founded by Wales which hosts Wikipedia) failed over the summer: the judge said it was "premature" to rule on the proportionality of the regime before it had been enacted.

But what if those rules do apply to Wikipedia, as the wording of the act currently suggests they should?

"We're in talks with Ofcom, but we will not be identifying users under any circumstances. We will not be age-gating Wikipedia under any circumstances. So, if it comes to that, it's going to be an interesting showdown, because we're going to just refuse to do it. Politically, what are they going to do? They could block Wikipedia. Good luck with that," says Wales.

"We didn't cave in to the Turkish government; we didn't cave in to the Chinese government. We have users we know of who are editing in Iran. We have users who are editing in Russia. Their personal safety depends on their privacy. And we think it's a human rights issue that we're not going to identify those people."

This "poorly drafted legislation," he warns, could lead to "a ridiculous political showdown." Wales suspects, and hopes, it will not come to that.
But the government declined to write an exception into the law and, as things stand, it would seem that Wikipedia could only avoid Category 1 if it cut the number of UK people who can access it by about three-quarters, or radically changed the functionality of the site.

This must be a shock for the 59-year-old Internet entrepreneur. Originally from Alabama, he has lived in London for over a decade and these days runs in the same circles as the Prime Minister. While Labour was in opposition, The House was told, Wales had been invited to join an informal digital advisory board and had accepted. He confirms that, but says, "I never heard from them again."

How does he now feel about having backed Labour at the election? "I'm not a single-issue voter. Maybe I should be." He did not agree with the entire Labour manifesto but was "fed up" with the Tories: "That picture of the Queen alone at her husband's funeral, and then finding out the night before they had a party in Downing Street, I was like, 'That's it. I've lost trust.'"

Wales has now penned a book about that exact subject. 'The Seven Rules of Trust' explores the founding of Wikipedia, from the personal reasons for its creation to the philosophical underpinning shared by businesses like Airbnb. It reads a bit like a self-help book – sometimes for the reader, at other times for society.

Compared to the era of Barack Obama versus John McCain, when "sensible" people reigned and Wales was "quite proud of the US", he is depressed – as you would expect from this centrist dad – by the current political climate. "We've gotten to quite a bad place," he concludes.

His stated mission has been to educate governments and policymakers about the importance of a free and open internet. Does he see that as a failed one in the UK?

"When we see Graham Linehan arrested at the airport by a bunch of armed officers for tweets he made," he replies, referring to the Irish comedy writer whose arrest over social media posts about trans people led to the Metropolitan Police ending non-crime hate investigations, "that's pretty symbolic to a lot of people that freedom of expression is seriously under assault in the UK."

He points also to the case of Lucy Connolly, who was jailed for inciting racial hatred in the wake of the terrible Southport attack last year: "The tweet was horrible and bad, but I don't think there's any reasonable way to interpret it as anything other than an inflammatory remark on Twitter. It wasn't a direct threat of violence."

At the suggestion that he sounds somewhat like Elon Musk here, Wales hits back: "Elon Musk, though, is all over the map on this issue. He has also called for journalists from 60 Minutes to be in jail. His free speech credentials are very, very thin, so I don't want to be lumped in with that."

Asked if there are any limits to the internet he would support, Wales draws the line only at activity that is already illegal, such as revenge porn ("That's not freedom of expression, that's abuse") and direct threats of violence.

Wikipedia was hit in 2019 by a DDoS (distributed denial of service) cyber-attack, which overwhelmed the website and brought it down temporarily in several countries, including the UK. Wales does not know much about that or any potential recent increase in malicious activity – he has stopped working on the technology end – but confirms that web crawlers are a live problem.
“The number of bots crawling Wikipedia, and the pace at which they’re crawling Wikipedia, has dramatically increased,” he says.

A million people looking at the same page when a big event happens is no problem, but when a million pages that are not usually looked at are being crawled, it becomes expensive: “We can’t cache all of Wikipedia.” It’s the difference between the behaviour patterns of humans and those of bots. (He encourages crawlers to use Wikipedia’s own tools for developers instead.)
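The caching point is worth unpacking: a request for an already-cached page is nearly free, while a request for a cold page falls through to the expensive backend. Below is a minimal sketch of why that distinction matters, with invented numbers and a toy LRU cache of my own; nothing here describes Wikimedia's actual infrastructure.

Code:
# Toy model: a cache holds only a tiny fraction of all pages, so human
# traffic (concentrated on a few hot pages) mostly hits the cache, while
# a crawler sweeping the long tail mostly misses. All numbers invented.
import random
from collections import OrderedDict

def lru_hit_rate(requests, cache_size=1_000):
    """Replay requests through an LRU cache; return fraction served from cache."""
    cache = OrderedDict()
    hits = 0
    for page in requests:
        if page in cache:
            hits += 1
            cache.move_to_end(page)        # mark as most recently used
        else:
            cache[page] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(requests)

# A big news event: 100,000 views spread over ~50 hot pages.
human_traffic = [random.randrange(50) for _ in range(100_000)]
# A crawler: 100,000 requests for 100,000 distinct long-tail pages.
crawler_traffic = [f"page_{i}" for i in range(100_000)]

print(f"hot-page traffic hit rate: {lru_hit_rate(human_traffic):.1%}")   # ~100%
print(f"crawler traffic hit rate:  {lru_hit_rate(crawler_traffic):.1%}") # 0.0%

This is also why Wales points bulk consumers at the developer tools: Wikimedia's database dumps and APIs are built for exactly this access pattern, unlike the live site's cache.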
ChatGPT this year surpassed Wikipedia in monthly visits. Wales is confident, however, that Wikipedia can survive the age of artificial intelligence.

"The hallucination problem is still really bad," says Wales, who finds large language models most useful for brainstorming. He regularly tests ChatGPT by asking it "Who is Kate Garvey?" (his wife) and gets answers that are "always wrong, always amusing, and always plausible". It once claimed Garvey had set up a non-profit to promote women's empowerment in the workplace with Miriam González Durántez, Nick Clegg's wife. This is perfectly credible and on-brand: the two couples know each other "from the school gates" and from Clegg's Facebook work, and Garvey runs an agency that promotes the UN's sustainable development goals. But it was not true.

He also asks whom she is married to. One answer was James Purnell, the Brown-era Cabinet minister, whom she did actually live with "back in the day"; another was Lord Mandelson. "I said, 'Isn't Peter Mandelson quite famously gay?' And it got very woke with me, and said, 'Oh, it's not appropriate to speculate about people's personal sex lives, and gay people can get married in the UK'," he laughs.

(When The House performed the same experiment, ChatGPT answered that Garvey was married to former Conservative minister Rory Stewart. "The couple are well connected across politics, media and humanitarian circles," it asserted confidently.)

Wales is a strong advocate of VPNs – virtual private networks, which he says everyone should use for their own online safety – and says he would "never use an AI based in China and trust that the data is safe at all." He is wary, in fact, of all cloud-based services, and concerned by Meta's plans to use information from users' chats with AI bots embedded in its platforms to personalise ads. "I thought, 'Wow, that's really quite a thing, because a lot of people do talk about very personal things with AI, and they probably shouldn't.'"

The Wikipedia co-founder's technology prediction is that over the next few years it will become common for an ordinary laptop to run a "decent local AI model," as he already does.

It is not just the Online Safety Act and AI that propose to radically change the course of Wikipedia's future, though. Larry Sanger, the other co-founder of Wikipedia, who has been very critical of the project since leaving it a year after its launch, has proposed a set of fundamental reforms. (He explored some in an interview with conservative political commentator Tucker Carlson, whom Wales "can't bear.")

One of his accusations is that the site has a left-wing bias, which he says should partly be addressed by scrapping its sources blacklist. "First of all, the idea that I'm left-wing is a mistake. I'm not. I'm very, very centrist," says Wales, who recalls that when he first moved to the UK, he found it "hard not to be a Tory" under the Cameron government. "I could only wish my political opinions had some sway in Wikipedia. They don't."

He rejects the demand to drop the blacklist, saying "the idea that we should take sites that routinely publish crazy conspiracy theories and nonsense just doesn't make any sense." While Breitbart News and the American conservative think tank The Heritage Foundation are blacklisted, the corporate UK publications The Sun, The Daily Star and The Daily Mail are among Wikipedia's "deprecated" sources.

"Deprecated doesn't mean you're not allowed to use it. It just means you should prefer a better source if you can find one. And I 100 per cent stand by that. That's not about the political stance of the Daily Mail – it's about the quality of the publication. They defend themselves by saying they rarely lose a libel case. I'm like, that's not good enough," Wales says.

"My suggestion to any serious, thoughtful conservative billionaires is they should be funding some seriously intellectual right-wing sources."

How about letting the public rate articles? That "doesn't sound like a completely terrible idea," he says, but adds that such systems are usually gameable and do not produce useful results.

Sanger's suggestion of a Community Notes-style system, as on Musk's X, is dismissed on the basis that that is what Wikipedia already is. (Wales likes Community Notes and reveals he spends most of his time on X doing them.) "Another thing he said is that we should do away with consensus as the standard. I just think that's completely mad," Wales adds.

The most pointed critique, perhaps, relates to the anonymity of the administrators who govern Wikipedia. Sanger claims that 85 per cent of the most powerful accounts are anonymous and can "libel people with impunity" in the US, as legal protections there often shield the Wikimedia Foundation from liability for user-generated content.

"I think it would be very, very dangerous for some administrators to be publicly identified, and I think they would quit," replies Wales. "That's particularly true when we see a rise in political violence on all sides."

Wales has started his own social networking service, Trust Café, but it is small, and they are still working on the software. His distaste for Musk is clear, but does he ever look at the tech billionaires and think, "Why am I running a nonprofit?"

"No, I don't!" Wales replies. "I'm not poor, I mean, I live in Kensington." As an afterthought, he asks: "How many bankers in London make far more money than I ever will, and how boring must their lives be compared to mine?"
The UK's communications regulator, Ofcom, has told TechRadar that it's using an unnamed third-party tool to monitor VPN use in the UK.

The agency responsible for implementing the Online Safety Act refused to name the platform. However, it seems to have artificial intelligence capabilities and – despite assurances that personal information isn't being accessed – privacy concerns remain.

This comes after a tech minister, Baroness Lloyd, said in the UK House of Lords that "nothing is off the table" when it comes to protecting children online, though she acknowledged there are "no current plans to ban the use of VPNs".

Open Rights Group, a leading UK civil society organization, warns that any attempt to restrict VPNs would have "a negative impact on free expression and privacy."

What did Ofcom say?

We contacted Ofcom and asked them to clarify how it's accessing information about VPN use in the UK. Here's the statement we received via email:

"We use a leading third-party provider, which is widely used in the industry, to gather information on VPN usage. The provider combines multiple data sources to train its models and generate usage estimates. The data we access and use in our analyses is fully aggregated at the app level, and no personally identifiable or user-level information is ever included."

Although Ofcom has been transparent about the existence of VPN monitoring, this is the first time it has provided any information outlining the methods it is using. Unfortunately, though, the agency's response raises more questions than it answers.

[...]

Ofcom's statement also suggests it's relying on a tool with AI capabilities (as it "combines multiple data sources to train its models"), but the exact functions of the platform remain hidden. Given that there are so many potential sources of this data – from internet service providers (ISPs) to website administrator logs – it's nearly impossible to assess the platform's potential accuracy or privacy credentials without additional details.

Similarly, while identifiable information may be excluded from the data that Ofcom analyzed, there's nothing that suggests the data is not at risk of re-identification (the sketch after this article illustrates the classic small-cell problem).

Finally, the fact that a regulator is using tools (and therefore presumably spending money and resources) to specifically track the public's use of software designed to enhance digital privacy is likely to ring alarm bells. However well-intentioned, tracking the use of VPNs risks undermining their very purpose as a privacy tool.

Why monitor VPNs?

VPNs pose a problem to the UK government and Ofcom, specifically with regard to the controversial Online Safety Act, because VPNs allow people to bypass age checks. They do this by connecting to a VPN server in a different country where those age checks do not apply.

By Ofcom's own estimates, the number of daily VPN users rose to around 1.5 million following the introduction of mandatory age checks on adult websites earlier this year. However, without additional transparency about how the agency came up with this number – which may have relied on the same secret tool – it's impossible to tell how accurate it is.

It's understandable that Ofcom wants to monitor the use of VPNs to determine whether the new legislation is working as intended. The problem is that the method it's using may be inaccurate or actively threatening people's privacy. An increase in the number of people using VPNs doesn't necessarily mean people are bypassing the law, either.
"It’s important to note VPNs can help protect children's security online too, they aren’t just used to avoid content blocks," says James Baker, Program Manager at Open Rights Group.Several VPNs now offer adult site blocking as part of their subscription plans, including NordVPN and Surfshark the latter of which recently introduced its Web Content Blocker tool specifically for the protection of children.NordVPN's tool automatically restricts access to adult websites and helps identify malicious websites while you're browsing. Surfshark's is able to prevent children from accessing a wide range of inappropriate material, as well as offering protection from malware and phishing sites.
India tried to impose a non-removable state app on every phone. Within days it had to make an unexpected U-turn.

The Indian government's move to make a security app mandatory on every phone sold in the country lasted less than a week. On 28 November, the Ministry of Telecommunications sent manufacturers a private communication giving them 90 days to comply with the measure. But widespread public rejection, doubts about its impact on cybersecurity and the apparent opposition of some manufacturers forced a change of plans.

The controversy. The order began to gain public attention when its internal details became known. Reuters reported that the government was demanding not only that Sanchar Saathi come preinstalled on new phones, but also that it be pushed via software updates onto devices already in the supply chain. The agency also reported that the initial instruction specified that the app could not be disabled.

What Sanchar Saathi is. The programme's own website describes the tool as a public service aimed at empowering users against fraud and device theft. It is available as a mobile app and as a web portal, from which it is possible to temporarily block a lost phone, track subsequent attempts to use it and, if it is recovered, reactivate it. The government frames these functions within a broader digital-education effort, with security materials and advisories for end users.

From a security pitch to surveillance doubts. The debate intensified when opposition figures and privacy specialists questioned the initiative. In their view, a state-run app, combined with such a broad mandate, required additional guarantees to rule out intrusive uses. Organisations such as the Internet Freedom Foundation demanded transparency and access to the full legal text. Under pressure, the communications minister, Scindia, publicly insisted that "spying is not possible" with Sanchar Saathi and denied that the app could be used for surveillance.

Manufacturers' opposition added to the pressure. Reuters reported that Apple did not intend to comply with the order as drafted and would raise its objections with the government, while Samsung and other players expressed similar reservations. According to sources cited by international media, the companies questioned the fact that the instruction had been issued without prior consultation and warned of its impact on the privacy policies of their ecosystems. The context is not minor: India has become one of the fastest-growing smartphone markets, especially for companies like Apple and other large manufacturers.

An express U-turn, success figures in hand. The reversal came on 3 December, when the Ministry of Communications published a note announcing that mandatory preinstallation was no longer required. The decision was justified by the "growing acceptance" of Sanchar Saathi, which according to the government already has 14 million users and is used to report around 2,000 frauds a day. On the previous day alone, 600,000 new registrations had driven a tenfold spike in sign-ups.
Scindia again insisted that "spying is not possible", despite the scepticism of specialist groups.

In recent years, as Bloomberg notes, India has pushed through decisions that have forced big tech companies to readjust, such as demands for access to encrypted information or the recent attempts to have manufacturers distribute the GOV.in bundle of public apps. All of this is happening in a market that is strategic for Apple and Google, in both sales and production. The withdrawal of the mandate makes clear that these dynamics are still evolving and that the balance of power will likely keep being redefined.
Reddit isn't what it used to be, and AI is to blame

Reddit is considered one of the most human spaces left on the internet, but moderators and users alike are being overwhelmed by suspected AI-generated posts in the most popular subreddits.

https://es.wired.com/articulos/reddit-ya-no-es-lo-que-era-y-la-ia-tiene-la-culpa

A Reddit post about a bride demanding that a wedding guest wear a specific, unflattering shade would reliably provoke outrage, as would one about a bridesmaid, or the groom's mother, wanting to wear white. A situation in which a father asks someone on a plane to swap seats so he can sit next to his small child is also likely to generate the same anger. But posts like these can irritate a Reddit moderator for a different reason: they are recurring themes in a growing genre of fake, AI-generated posts.

These are the examples that come to mind for Cassie, one of the dozens of moderators of r/AmItheAsshole. With more than 24 million members, it is one of the largest subreddits, and it explicitly bans AI-generated content and other made-up stories. Since late 2022, when ChatGPT first became available to the public, Cassie (who can be referred to only by her first name) and other people who volunteer to moderate Reddit posts have been fighting an influx of AI-generated content. Some of it is generated entirely by AI, while other users have started reworking their posts and comments with AI tools such as Grammarly.

"It's more frequent than anyone would like to admit, because it's so easy to drop your post into ChatGPT and say, 'Hey, make this sound more exciting'," says Cassie, who believes that up to half of all content posted on Reddit may have been created or reworked with AI in some way.

r/AmItheAsshole is a pillar of Reddit culture

The subreddit has inspired hundreds of spin-offs such as r/AmIOverreacting, r/AmITheDevil and r/AmItheKameena, the last of which currently has more than 100,000 members and is described as "AITA, but the Indian version". The posts usually contain stories about interpersonal conflicts, on which users can weigh in about who is in the wrong ("YTA" stands for "You're the asshole", while "ESH" means "Everyone sucks here"), who is right, and what the best course of action is going forward. Users and moderators of these r/AmItheAsshole variants report seeing more content they suspect is AI-generated, and others say it is a problem affecting the whole site and every kind of subreddit.

"If you have a general wedding subreddit, or AITA, relationships, anything like that, you're going to get hit hard," says a moderator of r/AITAH, an r/AmItheAsshole variant with almost 7 million members. This person, a retiree who spoke on condition of anonymity, has been active on Reddit for 18 years, most of the site's existence, and had decades of experience in the web business before that. She considers AI a potential threat to the platform.

"Reddit will have to do something, or the snake will swallow its own tail.
It's getting to the point where AI is feeding on AI," the moderator says.

In response to a request for comment, a Reddit spokesperson said: "Reddit is the most human place on the internet, and we want to keep it that way. We prohibit manipulated content and inauthentic behavior, including deceptive AI bot accounts that impersonate people and foreign influence campaigns. Clearly labeled AI-generated content is generally allowed as long as it follows community and sitewide rules." The spokesperson added that there were more than 40 million "removals of spam and manipulated content" in the first half of 2025.

A change for the worse

Ally, a 26-year-old who works as a tutor at a community college in Florida and spoke using only her first name for privacy, has noticed that Reddit "has really gone downhill" over the past year because of AI. Her feelings are shared by other users in subreddits such as r/EntitledPeople, r/simpleliving and r/self, where posts over the past year have lamented the rise of suspected AI. The mere possibility that something might be AI-generated has already eroded trust between users. "AI is turning Reddit into a pile of garbage. Even if a post suspected of being AI isn't, the mere intrigue is like having a spy in the room. The suspicion itself is an enemy," wrote one account on r/AmITheJerk.

Ally used to enjoy reading subreddits like r/AmIOverreacting. But now she no longer knows whether her interactions are real, and she spends less time on the platform than in past years. "AI burns everyone out. I see people putting an immense amount of effort into finding resources for others, only to be told, 'Ha, you fell for it, this is all made up'," says the r/AITAH moderator.

How to spot AI

There are few foolproof ways to prove whether something is AI, and most ordinary people rely on their own intuition. Text can be even harder to assess than photos and videos, which often carry fairly conclusive tells. Five redditors who spoke with WIRED had different strategies for identifying AI-generated text. Cassie notices when posts repeat the title verbatim in the body or use em-dashes, and when an account has terrible spelling and punctuation in its comment history but publishes something with perfect grammar. Ally is put off by newly created Reddit accounts and posts with emojis in the title. The r/AITAH moderator gets an uncanny-valley feeling from certain posts. But these "tells" can also be present in a post that doesn't use AI at all. (A toy sketch of these heuristics follows below.)

"At this point, it's kind of a 'you know it when you see it' thing," says Travis Lloyd, a PhD student at Cornell Tech who has published research on the new AI-driven challenges facing Reddit moderators. He adds: "Right now there are no reliable tools for detecting it 100% of the time. So people have their strategies, but they aren't necessarily foolproof."

Moreover, as more AI-generated text appears, people are beginning to imitate the common traits of AI-generated language, whether they are using AI or not.
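For concreteness, here is a toy sketch of the signals those redditors describe: title repeated verbatim in the body, em-dashes, emojis in the title, a brand-new account. It is my own illustration, not a tool any of the quoted moderators use, and, as the article stresses, every one of these signals also fires on plenty of genuinely human posts.

Code:
import re

def suspicion_signals(title: str, body: str, account_age_days: int) -> list[str]:
    """Collect weak, unreliable hints that a post *might* be AI-written."""
    signals = []
    if title.strip() and title.strip().lower() in body.lower():
        signals.append("title repeated verbatim in the body")
    if "\u2014" in body:                       # em-dash, a commonly cited tell
        signals.append("em-dashes in the body")
    if re.search(r"[\U0001F300-\U0001FAFF]", title):  # common emoji range
        signals.append("emoji in the title")
    if account_age_days < 7:                   # freshly created account
        signals.append("account is less than a week old")
    return signals

print(suspicion_signals(
    title="AITA for leaving my sister's wedding early?",
    body="AITA for leaving my sister's wedding early? Here is what happened \u2014 ...",
    account_age_days=2,
))
# ['title repeated verbatim in the body', 'em-dashes in the body',
#  'account is less than a week old']

The point of the sketch is the weakness, not the detection: each check is trivially gamed, and each also flags perfectly human writing, which is exactly why moderators fall back on "you know it when you see it".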
On Reddit, the AI feedback loop can be even more incestuous, since the platform has sued AI companies such as Anthropic and Perplexity for allegedly scraping Reddit content without consent to train chatbots. Google's AI-generated summaries have infamously pulled Reddit comments that were actually sarcastic jokes, such as the one from a user who suggested using glue to make cheese stick better to pizza crust.

Rage bait against minorities

The two AITA moderators say they have noticed a trend of rage-bait posts, possibly written with AI, that seem to exist only to smear trans people and other vulnerable groups. The r/AITAH moderator says the subreddit was flooded with anti-trans content during Pride Month, while Cassie says it shows up in the moderation queue intermittently.

"Things like 'My parents wouldn't use my chosen name and I'm trans and I blew up at them because how dare they' or 'Someone assumed my gender and I'm cis but how dare they assume my gender'. They're designed purely to make you angry at trans people, at gay people, at Black people, at women," Cassie says.

In subreddits built around news and politics, AI has enabled new ways of spreading disinformation. That is something Tom, a redditor who helped moderate r/Ukraine for three years and spoke using only his first name for privacy, encountered alongside social-manipulation techniques such as astroturfing (faking grassroots support), which predate tools like ChatGPT. But now AI can automate those tactics, making the work of human moderators even harder.

"It was like one guy in a field against a tidal wave. You can create so much noise with so little effort," says Tom. In r/Ukraine, which has close to a million members, Tom recalls being trained by other moderators to try to mitigate the spread of Russian propaganda, and even receiving specialized support from the platform's administrators, who sit a step above the volunteers and actually work for Reddit.

Monetizing karma

Beyond ideological motivations, there are also little-known ways to monetize Reddit content. Some are more obvious, such as the Reddit Contributor Program, which lets users earn money for getting upvotes (known as "karma") and awards that other users can buy for them. In theory, Reddit scammers can use AI-generated content to rack up karma, profit from it, and even sell their accounts.

"My Reddit account is worth a lot of money, and I know that because people keep trying to buy it. It could also be used for nefarious purposes, but I suspect a lot of it is just people who are bored and have time, thinking, 'Well, I could make a hundred dollars in a month doing almost nothing'," Tom says.

Other accounts need karma to gain access to posting in NSFW (not safe/suitable for work) subreddits that have karma requirements, where they can then promote things like OnlyFans links. Both Cassie and the r/AITAH moderator have seen accounts that post in bigger subreddits to build up karma before moving on to posting adult content. Some are scammers.
Others may simply be trying to make a living.

"Sometimes it's genuine, sometimes it's a real conflict someone had; other times it's fake, and sometimes it's AI-generated either way. I'd almost call it a form of gamification, because they're looking to exploit the system as designed," Cassie says.

The extra time it takes Reddit moderators to sift through potential AI material reflects how AI content in general has created new obstacles, extending far beyond the realm of social media moderation. "Reddit moderators are facing the same thing people all over the world are facing right now: adapting to a reality in which it takes very little effort to create plausible-looking AI-generated content, but much more effort to evaluate it. That's a real burden on them, as it is on teachers and everyone else," Lloyd concludes.

Originally published by WIRED. Adapted by Alondra Flores.