
Cpvr

Administrators

Everything posted by Cpvr

  1. I’m currently listening to Hustle by Blacc Zacc [MEDIA=spotify]track:5VPOhni8eop2HosjvPAQWE[/MEDIA]
  2. I’ve never tried push notifications as a means to monetize a forum or a website. I don’t think it would work either. It would lead to a huge decline in traffic and cause a downward spiral: your visitors and users would more than likely leave the website and never look back. Implementing an extra ad or two, or new affiliate links, would be a better approach.
  3. 🚨 BREAKING: The U.S. Copyright Office SIDES WITH CONTENT CREATORS, concluding in its latest report that the fair use exception likely does not apply to commercial AI training. From the report's conclusion: "Various uses of copyrighted works in AI training are likely to be transformative. The extent to which they are fair, however, will depend on what works were used, from what source, for what purpose, and with what controls on the outputs—all of which can affect the market. When a model is deployed for purposes such as analysis or research—the types of uses that are critical to international competitiveness—the outputs are unlikely to substitute for expressive works used in training. But making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries. For those uses that may not qualify as fair, practical solutions are critical to support ongoing innovation. Licensing agreements for AI training, both individual and collective, are fast emerging in certain sectors, although their availability so far is inconsistent. Given the robust growth of voluntary licensing, as well as the lack of stakeholder support for any statutory change, the Office believes government intervention would be premature at this time. Rather, licensing markets should continue to develop, extending early successes into more contexts as soon as possible. In those areas where remaining gaps are unlikely to be filled, alternative approaches such as extended collective licensing should be considered to address any market failure. In our view, American leadership in the AI space would best be furthered by supporting both of these world-class industries that contribute so much to our economic and cultural advancement. 
Effective licensing options can ensure that innovation continues to advance without undermining intellectual property rights. These groundbreaking technologies should benefit both the innovators who design them and the creators whose content fuels them, as well as the general public." - My comments: Although this is a pre-publication version, the report states: "The Office is releasing this pre-publication version of Part 3 in response to congressional inquiries and expressions of interest from stakeholders. A final version will be published in the near future, without any substantive changes expected in the analysis or conclusions." It's GREAT NEWS for content creators/copyright holders, especially as the U.S. Copyright Office's opinion will likely influence present and future AI copyright lawsuits in the U.S. As I've written before, licensing deals seem to be the future of AI training. https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-3-Generative-AI-Training-Report-Pre-Publication-Version.pdf
  4. I believe in a higher power(god), but I personally would not run a religious forum.
  5. AI and zero-click searches are killing the business model of the web that has sustained content creators for the last 15+ years. It's an opinion that is shared by many, including Cloudflare CEO Matthew Prince, who recently warned that "search drives everything that happens online." It's been known for some time that the web is changing into the Zero-Click Internet, the name for when users no longer need to click on links to find whatever content they want. Social media sites stopped promoting posts with links years ago, posting content directly on the platforms so users don't have to leave them. With the advent of generative AI, people are having their queries answered directly on Google's search page – no need to click on a website to find an answer. Prince, boss of the CDN/security giant Cloudflare, spoke about the impact of a zero-click Internet during a recent interview with the Council on Foreign Relations. "AI is going to fundamentally change the business model of the web. The business model of the web for the last 15 years has been search. Search drives everything that happens online," he said. Prince also talked about how the value exchange between Google and those who create web content is disappearing. He noted that almost a decade ago, every two pages that Google scraped meant it would send websites a visitor. Today, it takes six scraped pages to get one visitor, despite the crawl rate not changing. "Today, 75 percent of the queries get answered without you leaving Google," the CEO revealed. A graph in the source article breaks down zero-click searches and where those clicks go, but it predates AI Overviews, which are likely to cut those publisher clicks in half (again). The rise of large language models and the AI companies behind them has sent the crisis into overdrive, pushing the scraping-to-visitor ratio far above Google's six to one.
As such, creators see lower returns – and with so much AI scraping of content without permission, they often get nothing at all for their work. "And so the business model of the web can't survive unless there's some change, because more and more the answers to the questions that you ask won't lead you to the original source, it will be some derivative of that source." While some will argue that being able to find an answer quickly and from multiple sources without clicking through several sites is easier and more convenient, there are obvious problems. The main issue is that nobody is going to want to create new content when they get paid nothing or almost nothing for doing so. This is especially true when it comes to smaller, independent, impartial sites that AI companies might not partner with. And let's not forget how often AI gets things completely wrong. "Sam Altman at OpenAI and others get that. But he can't be the only one paying for content when everyone else gets it for free." Prince said that 80 percent of AI companies use Cloudflare, and 20 to 30 percent of the web uses its services. He added that as his company is at the center of the problem, it is thinking about ways to address the situation, hopefully before it is too late. The executive also talked about the billions of dollars being invested in generative AI and the lack of returns. "In terms of, is AI a fad, is it overhyped? I think the answer is probably yes and no. I would guess that 99 percent of the money that people are spending on these projects today is just getting lit on fire. But 1 percent is going to be incredibly valuable. And I can't tell you what 1 percent of that is. And so maybe we've all got a light, you know, $100 on fire to find that one dollar that matters." Do you think AI will replace traditional search engines like Google? Source: https://www.techspot.com/news/107859-cloudflare-ceo-warns-ai-zero-click-internet-killing.html
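Prince's scraping-to-visitor ratios lend themselves to a quick back-of-the-envelope check. Here is a minimal sketch: the 2:1 and 6:1 ratios come from the interview, but the crawl volume and the function name are hypothetical, chosen only to make the arithmetic concrete.

```python
# Back-of-the-envelope sketch of the scrape-to-visitor arithmetic.
# The ratios (2 pages/visitor then, 6 pages/visitor now) are from the
# interview; the crawl volume below is a made-up example figure.
def expected_visitors(pages_scraped: int, pages_per_visitor: float) -> float:
    """Visitors a site can expect for a given number of scraped pages."""
    return pages_scraped / pages_per_visitor

if __name__ == "__main__":
    scraped = 60_000  # hypothetical monthly crawl volume for one site
    then = expected_visitors(scraped, 2)  # roughly a decade ago
    now = expected_visitors(scraped, 6)   # today, per Prince
    print(f"then: {then:,.0f} visitors, now: {now:,.0f} visitors "
          f"({1 - now / then:.0%} fewer)")
```

With the crawl rate held constant, the shift from 2:1 to 6:1 alone cuts referred traffic by two thirds, before AI scraping pushes the ratio higher still.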
  6. I agree. One discussion topic a day can definitely help during slow downtime periods. It gives members something to respond to and keeps the community flowing even if things are quiet. It’s also a good idea to continue pushing for new members through other channels to help fill in the gaps during those slower stretches. You never want to hit a long stretch of inactivity; that’s usually when members start drifting off to more active communities. Every effort, whether small, medium, or large, plays a part in pushing through the down periods. Forums naturally go through growing pains. You’re going to have fast-paced days and slower ones, but that’s part of the growth process. As long as we keep growing the garden and watering it, the forum will still push through and find its rhythm, even during the lulls.
  7. Apple is “actively looking at” reshaping the Safari web browser on its devices to focus on AI-powered search engines, Bloomberg News reported on Wednesday, a move that could chip away at Google’s dominance in the lucrative search market. Apple executive Eddy Cue testified in the U.S. Justice Department’s antitrust case against Alphabet, saying searches on Safari fell for the first time last month, which he attributed to users increasingly turning to AI, according to the report. Google is the default search engine on Apple’s browser, a coveted position for which it pays Apple roughly $20 billion annually, or about 36% of its search advertising revenue generated through the Safari browser, analysts have estimated. Losing that position could heap pressure on Google just as it faces fierce competition from AI startups such as OpenAI and Perplexity. Apple has already struck a deal with OpenAI to offer ChatGPT as an option in Siri, while Google is trying to secure an agreement by mid-year to embed its Gemini AI technology in Apple’s latest devices. Alphabet shares fell 6%, while Apple was down about 2%. Both companies and the DoJ did not respond to Reuters’ requests for comment. Cue said he believes AI search providers, including OpenAI and Perplexity AI, will eventually replace standard search engines such as Google, and that Apple will add those players as options in Safari in the future, according to the report. “We will add them to the list - they probably won’t be the default,” Bloomberg News cited Cue as saying. Last month, Google reassured jittery tech investors that its AI investments were powering returns at its crucial ad business after its first-quarter profit and revenue beat expectations. “The loss of exclusivity at Apple should have very severe consequences for Google even if there are no further measures,” D.A. Davidson analyst Gil Luria said.
“Many advertisers have all of their search advertising with Google because it is practically a monopoly with almost 90% share. If there were other viable alternatives for search, many advertisers could move much of their ad budgets away from Google to these other venues.” Source: https://www.cnn.com/2025/05/07/tech/apple-ai-search-safari-potential-blow-for-google
  8. Instagram Threads will begin testing video ads, Meta announced on Thursday. The test, which will make Threads look more like its competitor X, is an expansion of Threads’ advertising initiatives, which began last month with the opening up of ads to global advertisers. The news was announced at Meta’s presentation at the IAB NewFronts, where a number of social media companies pitch themselves to advertisers. On Threads, Meta says a “small number” of advertisers will test 19:9 or 1:1 video ad creatives that will appear in between pieces of organic content in the Threads feed. The company didn’t share other details around pricing or frequency of those ads, however. The update follows Meta’s recent announcement that Threads now reaches over 350 million monthly active users. The app has also seen a 35% increase in the time spent on Threads as a result of improvements to the app’s recommendation systems, Meta CEO Mark Zuckerberg also told investors on Meta’s earnings call in April. Meta announced the news around Threads, among other updates to its ad products, at the NewFronts. The company says it’s also testing a new short-form video solution, Reels trending ads, that will be shown next to the most trending Reels from creators. Rival TikTok this week had also introduced an expansion of its similar offering, called Pulse Suite, which will now let advertisers market themselves next to trending content by category, holiday, tentpole moments, cultural events, and evergreen, always-on content from sports, entertainment, and lifestyle publishers. Meta will also begin to test Trends in Instagram’s Creator Marketplace to help advertisers find popular trends, and it will test the Creator Marketplace API to help businesses find and connect with quality creators at scale. The company is also rolling out Video Expansion on Facebook Reels, which adjusts video assets by generating unseen pixels in each video frame to expand the aspect ratio for a more native experience. 
Source: https://techcrunch.com/2025/05/08/instagram-threads-is-getting-video-ads/
  9. I agree. The only other thing I’d add is to promote on your social media pages to keep users informed. I wouldn’t necessarily send out a mass email for everything, as it can lead to your email provider being blacklisted; I’d keep emails to a minimum. Pushing updates to your social media pages is a good idea, as it helps users stay current with your forum’s latest news. Creating fake users is misleading in itself, and I wouldn’t recommend that tactic ever.
  10. I’m currently listening to Why You Leave by Kye Bills [MEDIA=spotify]track:5XOcdFMeHRy3tvNLeomMyQ[/MEDIA]
  11. Forums aren’t the only type of online community you can run and manage: there are also groups on social media, a Discord server, a social media page, a YouTube channel, and many more. So with that being said, what other online communities do you run or manage besides a forum? How long have you been running them? Do you feel they’re easier to run than a forum, or a tad bit harder?
  12. I’m currently listening to Pinnacle by LBS Kee’vin [MEDIA=spotify]track:4IrPF5kVm439qYTCYxQwqe[/MEDIA]
  13. There is a wide variety of add-ons available for XenForo that you can use to help enhance your online community, but are there any add-ons that you won’t implement on your community no matter what? If so, which ones?
  14. What are your thoughts on partnering up with other online communities? This can include a button/banner on one another’s forum and a way to cross promote each other’s forum. Another thing that could help balance out the partnership is direct linking articles/topics from both communities, which in turn will help each community grow. Where do you stand on this subject? Do you think it’s a good idea or are you against it?
  16. Customer service jobs, design jobs, graphics jobs, retail jobs, and more. It’s not just freelancing jobs that are at play here. They’re already rolling out robots into stores in some areas of California, and that leaves retail workers out of work. Even Facebook has replaced some of its coders with AI systems. Then take into account phone support at some major companies: they’re rolling out AI systems and removing humans from that as well. It’s not just Fiverr; it’s a global and national issue. That’s why it’s important to adapt and learn the systems, so those who know how to use them don’t get lost in the end. AI is here to stay whether we like it or not.
  17. Redditors around the world were scandalized last week after learning that a team of researchers released a swarm of AI-powered, human-impersonating bots on the “Change My View” subreddit. The large-scale experiment was designed to explore just how persuasive AI can be. The bots posted over 1,700 comments, adopting personas like abuse survivors or controversial identities like an anti-Black Lives Matter advocate. For Reddit, the incident was a mini-nightmare. Reddit’s brand is associated with authenticity — a place where real people come to share real opinions. If that human-focused ecosystem is disturbed with AI slop or becomes a place where people can’t trust that they’re getting information from actual humans, it could do more than threaten Reddit’s core identity. Reddit’s bottom line could be at stake, since the company now sells its content to OpenAI for training. The company condemned the “improper and highly unethical experiment” and filed a complaint with the university that ran it. But that experiment was only one of what will likely be many instances of generative AI bots pretending to be humans on Reddit for a variety of reasons, from the scientific to the politically manipulative. To protect users from bot manipulation and “keep Reddit human,” the company has quietly signaled an upcoming action — one that may be unpopular with users who come to Reddit for another reason: anonymity. On Monday, Reddit CEO Steve Huffman shared in a post that Reddit would start working with “various third-party services” to verify a user’s humanity. This represents a significant step for a platform that has historically required almost no personal information for users to create an account. “To keep Reddit human and to meet evolving regulatory requirements, we are going to need a little more information,” Huffman wrote. “Specifically, we will need to know whether you are a human, and in some locations, if you are an adult.
But we never want to know your name or who you are.” Social media companies have already started implementing ID checks after at least nine states and the U.K. passed laws mandating age verification to protect children on their platforms. A Reddit spokesperson declined to explain under what circumstances the company would require users to go through a verification process, though they did confirm that Reddit already takes measures to ban “bad” bots. The spokesperson also wouldn’t share more details about which third-party services the company would use or what kind of personally identifying information users would have to offer up. Many companies today rely on verification platforms like Persona, Alloy, Stripe Identity, Plaid, and Footprint, which usually require a government-issued ID to verify age and humanity. Then there’s the newer and more speculative tech, like Sam Altman’s Tools for Humanity and its eye-scanning “proof of human” device. Opponents of ID checks say there are data privacy and security risks to sharing your personal information with social media platforms. That’s especially true for a platform like Reddit, where people come to post experiences they maybe never would have if their names were attached to them. It’s not difficult to imagine a world in which authorities might subpoena Reddit for the identity of, for example, a pregnant teen asking about abortion experiences on r/women in states where it is now illegal. Just look how Meta handed over private conversations between a Nebraska woman and her 17-year-old daughter, which discussed the latter’s plans to terminate a pregnancy. Meta’s assistance led law enforcement to acquire a search warrant, which resulted in felony charges for both the mother and daughter.
That’s exactly the risk Reddit hopes to avoid by tapping outside firms to provide “the essential information and nothing else,” per Huffman, who emphasized that “we never want to know your name or who you are.” “Anonymity is essential to Reddit,” he said. The CEO also noted that Reddit would continue to be “extremely protective of your personal information” and “will continue to push back against excessive or unreasonable demands from public or private authorities.” Source: https://techcrunch.com/2025/05/06/reddit-will-tighten-verification-to-keep-out-human-like-ai-bots/
  18. In a stark and unfiltered message to his staff, Fiverr CEO Micha Kaufman has warned that artificial intelligence (AI) is set to disrupt the workforce across a range of industries, including his own executive role, India Today reported. The internal email, which has since circulated widely online after being shared by Neatprompts CEO Aadit Sheth, urges employees to confront the accelerating impact of generative AI tools. Kaufman painted a candid picture of what lies ahead, asserting that automation is not just looming but already reshaping the nature of work. “AI is coming for your jobs. Heck, it’s coming for my job too,” Kaufman wrote. He went on to list several professions he believes are particularly vulnerable, including programmers, designers, product managers, data scientists, lawyers, customer support staff, salespeople, and finance professionals. Far from a Fiverr-specific concern, Kaufman framed the development as a global transformation. “This is not about Fiverr. This is about every company and every industry,” he noted. He highlighted how tasks once considered straightforward are being rapidly automated, while more complex processes are being streamlined through AI. Without urgent reskilling and adaptation, Kaufman warned, workers could face redundancy within mere months. However, the tone of the message stopped short of alarmist. Instead, it served as a call to action, encouraging staff to embrace AI tools and build new competencies. Kaufman pointed to platforms such as Cursor for developers, Intercom Fin for customer service, and Lexis+ AI for legal professionals as essential technologies to master. Employees were also encouraged to locate internal AI experts, revise their definitions of productivity, and develop proficiency with large language models (LLMs). Kaufman made a bold claim that traditional search engines like Google are becoming obsolete for those not adept in prompt engineering — a skill increasingly seen as vital in the AI-driven era.
Before expanding headcount, companies should instead focus on boosting output using existing teams with the help of AI, he said, framing AI integration not as optional but as essential for survival. Reportedly, the message arrives amid ongoing global conversations about AI adoption and workforce implications. Tech firms are navigating how to integrate advanced technologies while balancing concerns around job security. https://www.threads.com/@buster/post/DJVVVghtuG3?xmt=AQF0J8qs6uIlfe9hSXyLscDjhqknquaK4y2Nce3febMuig Source: https://www.livemint.com/technology/tech-news/fiverr-ceo-warns-staff-ai-is-coming-for-your-jobs-including-mine-11746543614622.html
  19. I’m currently listening to Last Man Standing by Polo G. [MEDIA=spotify]track:7AqcPbvxOIIN6NMXlG30Sw[/MEDIA]
  20. If crypto drops in price, how does it actually benefit your forum? Will it pay your server bills like regular cash will? Is it accepted by everyone the way cash is?
  21. As long as you use it for promotional purposes rather than engagement, your community will remain the cornerstone of activity. This is the best approach, especially if you want to keep users on your forum. Google loves Reddit, so utilizing their community and building backlinks there is a good idea. It will help you rank higher as well. You don’t necessarily have to work on engagement there; just use it as a building block to steer users to your forum.
  22. How has Google AdSense been performing for you since they’ve placed more emphasis on CPM than CPC recently?
  23. Last month, an A.I. bot that handles tech support for Cursor, an up-and-coming tool for computer programmers, alerted several customers about a change in company policy. It said they were no longer allowed to use Cursor on more than just one computer. In angry posts to internet message boards, the customers complained. Some canceled their Cursor accounts. And some got even angrier when they realized what had happened: The A.I. bot had announced a policy change that did not exist. “We have no such policy. You’re of course free to use Cursor on multiple machines,” the company’s chief executive and co-founder, Michael Truell, wrote in a Reddit post. “Unfortunately, this is an incorrect response from a front-line A.I. support bot.” More than two years after the arrival of ChatGPT, tech companies, office workers and everyday consumers are using A.I. bots for an increasingly wide array of tasks. But there is still no way of ensuring that these systems produce accurate information. The newest and most powerful technologies — so-called reasoning systems from companies like OpenAI, Google and the Chinese start-up DeepSeek — are generating more errors, not fewer. As their math skills have notably improved, their handle on facts has gotten shakier. It is not entirely clear why. Today’s A.I. bots are based on complex mathematical systems that learn their skills by analyzing enormous amounts of digital data. They do not — and cannot — decide what is true and what is false. Sometimes, they just make stuff up, a phenomenon some A.I. researchers call hallucinations. On one test, the hallucination rates of newer A.I. systems were as high as 79 percent. These systems use mathematical probabilities to guess the best response, not a strict set of rules defined by human engineers. So they make a certain number of mistakes. “Despite our best efforts, they will always hallucinate,” said Amr Awadallah, the chief executive of Vectara, a start-up that builds A.I. 
tools for businesses, and a former Google executive. “That will never go away.” For several years, this phenomenon has raised concerns about the reliability of these systems. Though they are useful in some situations — like writing term papers, summarizing office documents and generating computer code — their mistakes can cause problems. The A.I. bots tied to search engines like Google and Bing sometimes generate search results that are laughably wrong. If you ask them for a good marathon on the West Coast, they might suggest a race in Philadelphia. If they tell you the number of households in Illinois, they might cite a source that does not include that information. Those hallucinations may not be a big problem for many people, but they are a serious issue for anyone using the technology with court documents, medical information or sensitive business data. “You spend a lot of time trying to figure out which responses are factual and which aren’t,” said Pratik Verma, co-founder and chief executive of Okahu, a company that helps businesses navigate the hallucination problem. “Not dealing with these errors properly basically eliminates the value of A.I. systems, which are supposed to automate tasks for you.” Cursor and Mr. Truell did not respond to requests for comment. For more than two years, companies like OpenAI and Google steadily improved their A.I. systems and reduced the frequency of these errors. But with the use of new reasoning systems, errors are rising. The latest OpenAI systems hallucinate at a higher rate than the company’s previous system, according to the company’s own tests. The company found that o3 — its most powerful system — hallucinated 33 percent of the time when running its PersonQA benchmark test, which involves answering questions about public figures. That is more than twice the hallucination rate of OpenAI’s previous reasoning system, called o1. The new o4-mini hallucinated at an even higher rate: 48 percent.
When running another test called SimpleQA, which asks more general questions, the hallucination rates for o3 and o4-mini were 51 percent and 79 percent. The previous system, o1, hallucinated 44 percent of the time. In a paper detailing the tests, OpenAI said more research was needed to understand the cause of these results. Because A.I. systems learn from more data than people can wrap their heads around, technologists struggle to determine why they behave in the ways they do. “Hallucinations are not inherently more prevalent in reasoning models, though we are actively working to reduce the higher rates of hallucination we saw in o3 and o4-mini,” a company spokeswoman, Gaby Raila, said. “We’ll continue our research on hallucinations across all models to improve accuracy and reliability.” Hannaneh Hajishirzi, a professor at the University of Washington and a researcher with the Allen Institute for Artificial Intelligence, is part of a team that recently devised a way of tracing a system’s behavior back to the individual pieces of data it was trained on. But because systems learn from so much data — and because they can generate almost anything — this new tool can’t explain everything. “We still don’t know how these models work exactly,” she said. Tests by independent companies and researchers indicate that hallucination rates are also rising for reasoning models from companies such as Google and DeepSeek. Since late 2023, Mr. Awadallah’s company, Vectara, has tracked how often chatbots veer from the truth. The company asks these systems to perform a straightforward task that is readily verified: Summarize specific news articles. Even then, chatbots persistently invent information. Vectara’s original research estimated that in this situation chatbots made up information at least 3 percent of the time and sometimes as much as 27 percent. In the year and a half since, companies such as OpenAI and Google pushed those numbers down into the 1 or 2 percent range. 
Others, such as the San Francisco start-up Anthropic, hovered around 4 percent. But hallucination rates on this test have risen with reasoning systems. DeepSeek’s reasoning system, R1, hallucinated 14.3 percent of the time. OpenAI’s o3 climbed to 6.8 percent. (The New York Times has sued OpenAI and its partner, Microsoft, accusing them of copyright infringement regarding news content related to A.I. systems. OpenAI and Microsoft have denied those claims.) For years, companies like OpenAI relied on a simple concept: The more internet data they fed into their A.I. systems, the better those systems would perform. But they used up just about all the English text on the internet, which meant they needed a new way of improving their chatbots. So these companies are leaning more heavily on a technique that scientists call reinforcement learning. With this process, a system can learn behavior through trial and error. It is working well in certain areas, like math and computer programming. But it is falling short in other areas. “The way these systems are trained, they will start focusing on one task — and start forgetting about others,” said Laura Perez-Beltrachini, a researcher at the University of Edinburgh who is among a team closely examining the hallucination problem. Another issue is that reasoning models are designed to spend time “thinking” through complex problems before settling on an answer. As they try to tackle a problem step by step, they run the risk of hallucinating at each step. The errors can compound as they spend more time thinking. The latest bots reveal each step to users, which means the users may see each error, too. Researchers have also found that in many cases, the steps displayed by a bot are unrelated to the answer it eventually delivers. “What the system says it is thinking is not necessarily what it is thinking,” said Aryo Pradipta Gema, an A.I. researcher at the University of Edinburgh and a fellow at Anthropic.
Source: https://www.nytimes.com/2025/05/05/technology/ai-hallucinations-chatgpt-google.html
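The step-by-step error compounding the article describes can be illustrated with a toy model. This is purely an assumption for illustration: it treats each reasoning step as an independent error source with a fixed per-step hallucination probability, which real models do not follow, but it shows why longer chains of "thinking" can make mistakes more likely rather than less.

```python
# Toy model (an assumption, not how any real model is measured): if each
# reasoning step independently hallucinates with probability p, the chance
# that an n-step chain contains at least one error is 1 - (1 - p)^n.
def chain_error_rate(p: float, n: int) -> float:
    """Probability of at least one hallucination across n independent steps."""
    return 1 - (1 - p) ** n

if __name__ == "__main__":
    for steps in (1, 5, 10, 20):
        rate = chain_error_rate(0.05, steps)
        print(f"{steps:>2} steps at 5% per-step error -> "
              f"{rate:.1%} chance of at least one error")
```

Under this simplified independence assumption, a 5% per-step error rate grows to roughly a 40% chance of at least one mistake over ten steps, echoing the article's point that errors can compound as models spend more time thinking.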
  24. Same here. A guest can always turn into an active long-term member if an admin plays their cards right. When there are strict rules in place, it’ll be tough to keep those members coming back.
  25. Honestly, I would have done the same thing. I don’t tolerate disrespect either.