Post by Ross Finlayson:
> Post by Mathin3D:
>> crackpots like yourself and the foe King spammers had a lot to do with that decision.
> Well toodles, Google Groups: it's no longer the case that Google Groups will be a peer of USENET.
We'll miss you, Google Groups posters.
"Effective February 15, 2024, Google Groups will no longer support new Usenet content.
Posting and subscribing will be disallowed, and new content from Usenet peers will not
appear. Viewing and searching of historical data will still be supported as it is done today."
Every few years Google refreshes its indexes, and I fill the first page for "foundations".
That, and 10,000 more hits. Really, there have been times when I was front page of the Internet.
So, they obviously can't have that, ....
Even better, then, once they trained their AI and it stopped hallucinating
about "spiral space-filling curve" and other phrases in English in mathematics.
I.e., it goes straight to teacher. (Teacher needs to see me after graduate school.)
So, ..., I guess it's sort of true that Google's business model and Usenet's purpose
don't really align.
I'm not a crackpot and the usual spammers are mostly bots.
You express an interest in math, if the username checks out:
what, to you, is "not crackpot" yet still interesting and fresh?
I.e., I've never seen anything of mathematical interest from you,
so I'm curious where you get off.
Anyways then: standing up a sort of simplified machinery for
standing up Usenet peerages dynamically is a matter of fungible
back-ends, fungible setups and teardowns of infrastructure,
and fungible front-ends. This basically includes some conventions for
the storage formats of the articles, indices, and metadata,
with the articles kept in various semi-compressed forms so that their
delivery is very ready; then there are the protocols that are standards,
widely adopted, and that your browser would already handle today;
then there's how to surface all that, with regard to protocols
like NNTP and IMAP, then HTTP and its data formats; how to
make it so that Usenet just carries on; and particularly, how to help
establish a federated sort of way to surface Usenet articles
by their Message-ID, where, don't you know, every single post has its own name.
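A minimal sketch of that federated surfacing: since every article's Message-ID is globally unique, it can be mapped to a stable, shareable URL. The host name and `/mid/` path scheme below are hypothetical conventions, not an existing service.

```python
# Sketch: surface a Usenet article by its Message-ID as a stable URL.
# Host and path scheme are illustrative assumptions, not a real service.
from urllib.parse import quote

def article_url(message_id: str, host: str = "news.example.org") -> str:
    """Map a Usenet Message-ID to a shareable HTTP URL."""
    # Message-IDs are globally unique; strip the angle brackets and
    # percent-encode the rest so the ID survives as a URL path segment.
    mid = message_id.strip("<>")
    return f"https://{host}/mid/{quote(mid, safe='')}"

print(article_url("<abc.123@sci.math.example>"))
```

Any peer honoring the same convention can then resolve the same article from the same URL, which is the federated part.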
Yeah, it's funny: most bibliographic style guides already include Usenet URLs,
URIs, URNs, ....
Wow, it's a modest medium for free expression.
You see, Google's really smart and its algorithms find relevant results.
As to why they provide them for the whole industry, you might figure
there's room for competition.
One thing to figure out, for "normal forms for minimal moderation",
is basically to reflect that a given channel has a topic; then, with respect
to automation, how to relate posts, or at least something in the thread,
to being on-topic; then, with regard to that, how to work up a simplest
form of reputation, pretty much resulting in a fuzzy logic over repeat posters,
and particularly on-topic posters with unique sorts of posts, with simple approvals or
the lack of disapprovals, vis-a-vis reputation systems, and the old maxim
that spam needs to be determined in a fair and open sort of way.
I.e. the old maxim, "a post is spam or not" vis-a-vis "a poster is a spammer or not",
is whether the post relates to other content, or the poster relates to other spams.
As well, even for un-popular posters, the rule here is "there are no negative points",
so that reputation is un-game-able, for being one-way.
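One way to sketch that one-way, non-negative reputation: a fuzzy score in [0, 1] built only from accumulating signals (on-topic ratio, approvals), where disapproval appears only as the absence of approval. The particular formula and inputs here are illustrative assumptions, not a specified design.

```python
# Sketch: "no negative points" reputation, as a fuzzy value in [0, 1].
# Inputs and formula are illustrative; the only fixed property is that
# nothing can push the score below zero, so it can't be brigaded down.
def reputation(on_topic_posts: int, total_posts: int, approvals: int) -> float:
    """One-way reputation: only accumulates, never goes negative.
    Disapproval is modeled only as the absence of approvals."""
    if total_posts == 0:
        return 0.0
    on_topic_ratio = on_topic_posts / total_posts   # fuzzy membership in [0, 1]
    vouch_weight = approvals / (approvals + 1)      # saturates toward 1, never negative
    return on_topic_ratio * vouch_weight
```

Because both factors are bounded by [0, 1] and monotone in their inputs, the only way to raise a score is genuine on-topic posting and approvals; there is no lever for pushing someone else down.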
The simplest resource seems to be time, so the idea is that spam-like behavior
should cost more time, and usual behavior less time, to post, or to author posts,
while reading is fundamentally free altogether. (Then, as for curated and raw,
and only propagating the curated, or for curated and raw infeeds and outfeeds:
that's for the value of curation, and the value of the raw, to read.)
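The time-as-cost idea can be sketched as a posting delay that ramps with a spam-likeness score; reading incurs no delay at all. The exponential shape and the particular base/ceiling values are assumptions for illustration.

```python
# Sketch: time as the cost resource for posting. A spam_score in [0, 1]
# (0 = usual behavior, 1 = clearly spam-like) sets the delay before a
# post is accepted. Base and ceiling are illustrative assumptions.
def posting_delay(spam_score: float, base: float = 1.0, worst: float = 3600.0) -> float:
    """Seconds a poster must wait: ~base for usual behavior,
    up to worst for fully spam-like behavior. Reading costs nothing."""
    # Exponential ramp: small scores stay cheap, high scores get expensive fast.
    return base * (worst / base) ** spam_score
```

A usual poster pays about a second; a borderline one pays a minute; an outright spammer pays an hour per post, which makes volume spamming expensive without banning anyone outright.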
The usual sorts of things, like "points" or "teacher's-pet stars", are basically out:
there's no ranking of authors; instead it would go by the content, what
results for uniqueness and, ..., correctness; figuring out what
are entirely open sorts of metrics, to result in engagement with quality content, at
various levels, as to whether people seek the engaging, or the frivolous. This is more
for bucketing and banding than for grinding and gaming.
The Usenet protocol has notions of retention, no-archive, and cancellation,
vis-a-vis "policy", "control", and "junk", and limits; here the notion is essentially
an "unbounded living museum of text articles".
Then the idea here is to make for "curated" and "raw", in the museum, with
"New/Old/Bot" and "Non/Off/Bad", or "New/Old/Off" and "Non/Bot/Bad",
as a sort of "curated for digest" and "curated for chat".
There is an idea that people should be able to club together, or vouch,
just not club apart. Then the idea is that bad recommendations would
have to somehow cascade: there being neither the black-ball
nor the private invite, but the lack of vouching sufficing
to prevent Nons becoming News, with Nons otherwise going to Bads,
as with a timeline; and then there's the reversibility of Old and Off, of Bot and non-Bot,
and of not-Bad and Bad, and whether, as for the attenuation thereof, there's a
built-in forgetfulness of Off, then whether there's redress for Bad, or Bot.
The idea is that Nons should go to News as the News invest time to get their
first posts published, and go without being rejected as Bad, according to
what reflects policy and agnostic content matching, in terms
of that being automatic, and as of a strikes policy, vis-a-vis tolerance policies.
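The promotion-by-vouching and strikes policy above can be sketched as a small state machine. The state names (Non/New/Old/Off/Bot/Bad) follow the post; the thresholds (one vouch to promote, three strikes to Bad) are illustrative assumptions.

```python
# Sketch of the Non -> New promotion and the automatic strikes policy.
# State names follow the post; thresholds are illustrative assumptions.
STRIKES_TO_BAD = 3  # assumed tolerance; the post leaves this open

class Poster:
    def __init__(self) -> None:
        self.state = "Non"   # everyone starts unknown
        self.strikes = 0

    def vouch(self) -> None:
        # Clubbing together, not apart: a vouch can only promote.
        if self.state == "Non":
            self.state = "New"

    def strike(self) -> None:
        # A policy/content rejection; strikes accumulate automatically,
        # and vouching can't erase them (reputation is one-way).
        self.strikes += 1
        if self.strikes >= STRIKES_TO_BAD:
            self.state = "Bad"
```

Note there is no `blackball()` method at all: the only downward path is accumulated automatic strikes, and the only upward path is vouching, which matches "club together, not apart".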
I.e., the point is to be both entirely open and also egalitarian, and
resistant to meddling and gaming, where "gaming" is usually considered
some form of "exploiting", per the usual tragedy of the commons.
As well, it should sort of work unattended, even after extended times:
a usual case of least maintenance, for authors and readers.
Luckily, computers are so inexpensive these days that this entire sort of
thing can run on a quite modest allotment of resources; with a few
simple architectural principles, and a usual sort of giving it away,
that makes for a pretty nice outlook for an enduring living museum
of what's called "letters": a modern-day living, working museum of letters.
Now I suppose you might open another page and see what Madison Avenue
has deigned to put in your face; these days I see a lot of bandwidth offerings.
There's a pretty detailed work-up of how to implement this, on "Meta: a usenet server
just for sci.math."
So, how should the front-end work? The basic idea is that it's URLs; then, as for populating
front-ends, it's sort of an Explorer metaphor: Browse, vis-a-vis Search, and Tours and
Exhibits, the metaphor of the museum.
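As a sketch, that museum-metaphor front-end is just a handful of URL routes; the path names below are hypothetical, chosen to echo Browse, Search, Tours, and Exhibits.

```python
# Sketch: the museum-metaphor front-end as plain URLs.
# Path names are hypothetical, echoing Browse / Search / Tours / Exhibits.
ROUTES = {
    "/browse/<group>":   "tree view of a newsgroup, Explorer-style",
    "/search?q=<terms>": "full-text search over the archive",
    "/tours/<name>":     "a curated, ordered walk through selected threads",
    "/exhibits/<name>":  "a curated standing collection, like a museum wing",
    "/mid/<message-id>": "one article, permanently, by its Message-ID",
}

for path, what in ROUTES.items():
    print(f"{path:20} -> {what}")
```

Everything being a plain URL is what keeps it simple: any article, tour, or exhibit is bookmarkable and linkable from anywhere.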
It's pretty simple.