Discussion:
Question about unbounded infinite sets...
Chris M. Thomasson
2024-02-16 20:49:47 UTC
Take a number that wants to get close to zero. Say:

[0] = 1
[1] = .1
[2] = .01
[3] = .001
[...] = [...]

This gets close to zero, yet never will equal zero. Okay so:

arbitrarily close seems to be the accepted term.

infinitely close is the wrong wording?

The function f(n) = 10^(-n) gets "infinitely close" to 0... lol. Using
the "metaphysical formation" of arbitrarily close... ;^)
Ben Bacarisse
2024-02-16 21:07:34 UTC
Post by Chris M. Thomasson
Take a number that wants to get close to zero.
This makes no sense. "a number" is one number. And numbers don't want
anything.
Post by Chris M. Thomasson
[0] = 1
[1] = .1
[2] = .01
[3] = .001
[...] = [...]
Strange notation. [0] = 1. Eh? Why not just use a more conventional
notation for a sequence:

s_0 = 1
s_1 = 0.1
etc.

You can, if you prefer, write it as a function: s(n) = 10^-n (as you do
later).
Post by Chris M. Thomasson
arbitrarily close seems to be the accepted term.
But it's not a very good one. For example, one could say that

p(n) = 2^n when n is even
p(n) = 2^-n when n is odd

gets arbitrarily close to zero but also arbitrarily far away from zero.
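(In limit terms: the odd-indexed terms p(2k+1) = 2^-(2k+1) tend to 0 while
the even-indexed terms p(2k) = 2^(2k) grow without bound, so lim p(n) does
not exist even though 0 is a cluster point of the sequence.)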
Post by Chris M. Thomasson
infinitely close is the wrong wording?
I would not know what you mean if you said that, so I would say it's the
wrong wording. The best wording is to say

lim_{n->oo} s(n) = 0.

which you can read as "the limit, as n tends to infinity, of s(n) is
zero".
Post by Chris M. Thomasson
The function f(n) = 10^(-n) gets "infinitely close" to 0... lol. Using the
"metaphysical formation" of arbitrarily close... ;^)
Oh. Does the ;^) mean this was all a joke? If so, sorry.
--
Ben.
Chris M. Thomasson
2024-02-16 21:22:25 UTC
Post by Ben Bacarisse
Post by Chris M. Thomasson
Take a number that wants to get close to zero.
This makes no sense. "a number" is one number. And numbers don't want
anything.
That was designed to raise a laugh or two. I guess it bombed. Yikes!
Post by Ben Bacarisse
Post by Chris M. Thomasson
[0] = 1
[1] = .1
[2] = .01
[3] = .001
[...] = [...]
Strange notation. [0] = 1. Eh? Why not just use a more conventional
Too used to a programming language wrt indexing arrays I guess. :^)
Post by Ben Bacarisse
s_0 = 1
s_1 = 0.1
etc.
You can, if you prefer, write it as a function: s(n) = 10^-n (as you do
later).
Post by Chris M. Thomasson
arbitrarily close seems to be the accepted term.
But it's not a very good one. For example, one could say that
p(n) = 2^n when n is even
p(n) = 2^-n when n is odd
gets arbitrarily close to zero but also arbitrarily far away from zero.
That's fine with me. I can see it wrt your logic.
Post by Ben Bacarisse
Post by Chris M. Thomasson
infinitely close is the wrong wording?
I would not know what you mean if you said that, so I would say it's the
wrong wording. The best wording is to say
lim_{n->oo} s(n) = 0.
which you can read as "the limit, as n tends to infinity, of s(n) is
zero".
The limit of f(n) = 10^(-n) is zero. However, none of the iterates equal
zero. They just get closer and closer to it...
Post by Ben Bacarisse
Post by Chris M. Thomasson
The function f(n) = 10^(-n) gets "infinitely close" to 0... lol. Using the
"metaphysical formation" of arbitrarily close... ;^)
Oh. Does the ;^) mean this as all a joke? If so, sorry.
I was thinking that the statement: "how close is infinitely close to
arbitrarily close" would make some people laugh.
Ross Finlayson
2024-02-16 21:56:16 UTC
Post by Chris M. Thomasson
[...]
I was thinking that the statement: "how close is infinitely close to
arbitrarily close" would make some people laugh.
Hey, just because C/C++ is very familiar and
there's lots of perceived utility in the non-blocking,
one thing I wondered to ask you about was this
idea of a queue that basically has this sort of
utility.

Queues are usually only iterated once. Yet, sometimes
it's that items get appended and it's a well-formed
sequence that results the message. In this case
then the idea of this "mono-hydra" or "slique",
as about the hydra with multiple heads
or the deque, the double-ended queue, here
the idea is that this particular data structure
has its synchronization about setting a mark
and then basically when it scans from the front
then when it results a full sequence, then it
pops all those off while swapping in the new head.

So, the idea of this data structure is a very usual
use case in the buffering or burst queue or whatever
is the reason why a queue is filling up in the
intermediate, until the consumer indicates a
well-formed message and pops that off, while
the producer just atomic-appends items to the tail.

Anyways I wondered in your studies of Single and
Multiple Producer and Consumer algorithms,
toward the lock-free, if you've considered this
sort of simple data structure that I call "monohydra"
or "slique".

It's for something like Internet Protocols where
packets arrive and get assembled, then when they
result a message, that this kind of data structure
is entirely ephemeral, and has no purpose being
in main memory, then for a usual sort of free-list
approach to usual sorts of un-even loads.
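
To make that a bit more concrete, here is a minimal C++ sketch of one
well-known shape such a "monohydra"/"slique" could take (not claiming it
is exactly what's meant above; the names are hypothetical): producers
atomic-append to a shared head, and the single consumer grabs the whole
chain in one exchange, checks whether it forms a complete message, and
only then pops it.

  #include <atomic>

  struct Node {
      Node* next;
      int   payload;   // stand-in for a packet fragment
  };

  struct Slique {
      std::atomic<Node*> head{nullptr};

      // producer side: lock-free append (newest node ends up in front)
      void push(Node* n) {
          Node* old = head.load(std::memory_order_relaxed);
          do {
              n->next = old;
          } while (!head.compare_exchange_weak(
                       old, n,
                       std::memory_order_release,
                       std::memory_order_relaxed));
      }

      // consumer side: take everything appended so far in one swap; the
      // caller scans the (LIFO-ordered) chain, reverses it, and consumes
      // it as a message only if the sequence is well formed, otherwise it
      // holds the partial chain and splices the next take_all() onto it.
      Node* take_all() {
          return head.exchange(nullptr, std::memory_order_acquire);
      }
  };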

It's like "how's your chip design going" and
it's like "we solved it by adding burst buffers
between every two units" and it's like "that's
not very systolic".

A usual complement to "how low can you go"
is "how cold can it flow".


"If you're going to go about reinventing geometric series,
one need not do so so publicly." -- Virgil
Chris M. Thomasson
2024-02-16 23:09:45 UTC
Post by Ross Finlayson
[...]
Anyways I wondered in your studies of Single and
Multiple Producer and Consumer algorithms,
toward the lock-free, if you've considered this
sort of simple data structure that I call "monohydra"
or "slique".
[...]
I will get back to you, a bit busy right now. Actually, I am working on
a neat sort of distributed "queue'ish" work system. It only uses atomic
exchange. When you get some free time to burn, take a deep look at the
following:

https://groups.google.com/g/comp.lang.c++/c/Skv1PoQsUZo/m/XI3Qw64xAAAJ
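
(As a toy illustration of the "only atomic exchange" constraint, and
emphatically not the actual design behind that link, a single-slot
handoff needs nothing but exchange:

  #include <atomic>

  struct Work { int job_id; };

  std::atomic<Work*> slot{nullptr};

  // producer: publish new work, getting back whatever it displaced, if any
  Work* publish(Work* w) {
      return slot.exchange(w, std::memory_order_acq_rel);
  }

  // consumer: claim whatever is currently published, leaving the slot empty
  Work* claim() {
      return slot.exchange(nullptr, std::memory_order_acq_rel);
  }

Both operations are a single exchange, so both sides are wait-free.)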
Ross Finlayson
2024-02-16 23:35:32 UTC
Post by Chris M. Thomasson
[...]
I will get back to you, a bit busy right now. Actually, I am working on
a neat sort of distributed "queue'ish" work system. It only uses atomic
exchange. When you get some free time to burn, take a deep look at the
following:
https://groups.google.com/g/comp.lang.c++/c/Skv1PoQsUZo/m/XI3Qw64xAAAJ
I've been thinking about cooperative multithreading,
basically cooperative threading with timeouts,
since about 2016, when I wrote up this idea of the "re-routine",
which is a sort of outline for cooperative multithreading.
The idea is that all the calls are idempotent and
memoized, and none return null on success, and their
exceptions are modeled as usual error modes or, for flow
of control, with exceptions if that's the thing. The
executor sort of runs the re-routine and figures that
whenever it gets a null, the call is pending, so it throws
itself out; then, when the callback arrives with a response,
it just runs again right through the re-routine, where of
course it's all conditioned on the idempotent memoized
intermediate results, until it completes or errors.

This way the intermediate memoized results take about the
same space on the heap as the routine does while running,
but there's no stack at all, so it's not stack-bound at all.

Of course preemptive multithreading and the thread stack
and context switching are about the greatest sort of thing
when some code isn't well-behaved, or won't yield, here
then for figuring about how basically to implement
cooperative multithreading including timeouts and priority.

Thusly, in the same sort of context as the co-routine is
the re-routine, this being for a model of cooperating
multi-threading, at least in a module where it can be
either entirely callback-driven or including timeouts,
and besides the executor can run its own threads in
the stack-bound for ensuring usual throughput in
case whatever re-routine is implemented in blocking
fashion.

This is a sort of idea about where mostly what I
want is that the routine is written as if it were
synchronous and serial and blocking, so it's simple,
then that just the semantics of the adapters or
the extra work on the glue logic, makes it so that
the serial re-routine is written naturally "in the language",
making it all readable and directly up-front and
getting the async semantics out of the way.
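
A rough C++ sketch of that shape, just to pin the idea down (all names
here are made up, and a real executor would hang the re-run off the
actual callbacks rather than anything shown below):

  #include <functional>
  #include <map>
  #include <memory>
  #include <string>

  // Memo of completed intermediate results; a step that isn't done yet
  // returns nullptr, which the routine treats as "pending".
  struct Memo {
      std::map<std::string, std::shared_ptr<std::string>> done;

      std::shared_ptr<std::string> get(const std::string& key,
                                       const std::function<void()>& start) {
          auto it = done.find(key);
          if (it != done.end()) return it->second;  // memoized: instant
          start();        // idempotent kick-off; safe to call on every re-run
          return nullptr; // pending
      }
  };

  // The re-routine itself reads as plain, serial, blocking-style code.
  std::shared_ptr<std::string> handle(Memo& m) {
      auto user = m.get("user", []{ /* begin async lookup */ });
      if (!user) return nullptr;                 // pending: bail out for now
      auto quota = m.get("quota:" + *user, []{ /* begin async lookup */ });
      if (!quota) return nullptr;                // pending again
      return std::make_shared<std::string>(*user + ":" + *quota);
  }

The executor just calls handle() again whenever a callback fills in a
missing entry of the memo; steps that already completed return instantly
from the cache, so nothing is held on any stack between runs.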

That's not really relevant here in this context
about "the mathematical infinite and unbounded",
but it results a sort of "factory industry pattern",
or re-routines, then that the implementation of
routine ends up being as simple as possible and
as close as possible to the language of its predication,
while it's all well-formed and guaranteed its behavior,
that then it can be implemented variously local or remote.

I suppose "cooperative multithreading" isn't for everybody,
but re-routines is a great idea.

Then the idea of the sliques or monohydra is basically
exactly as for the buffering and the serving of the I/O's.
I.e., the idea here is that usually request/response
type things for transits can sort of be related to
what goes through DMA and nonblocking or async I/O,
and scatter/gather and vector I/O, getting things flowing,
right by the little ring through their nose.




Then about the infinite and the infinite limit,
it's called "the infinite limit".

Consider a snake that travels from 0 to 1 then 1 to 2.

Not only did it get "close enough", to 1,
it got "far enough", away, to get to 2.

I.e. deductively it crossed that bridge.

In continuous time, ....

It's called "infinite limit", with the idea being also
that when it results continuous called "continuum limit".
mitchr...@gmail.com
2024-02-17 02:48:22 UTC
Post by Ross Finlayson
[...]
Then about the infinite and the infinite limit,
it's called "the infinite limit".
Consider a snake that travels from 0 to 1 then 1 to 2.
Not only did it get "close enough", to 1,
it got "far enough", away, to get to 2.
I.e. deductively it crossed that bridge.
Infinity?
That bridge is never crossed.
You can build it. But you can't cross it.
Post by Ross Finlayson
In continuous time, ....
It's called "infinite limit", with the idea being also
that when it results continuous called "continuum limit".
But the infinite is always defined as the unlimited.
No. You can't count to infinity.
Ross Finlayson
2024-02-17 03:34:04 UTC
Post by ***@gmail.com
[...]
But the infinite is always defined as the unlimited.
No. You can't count to infinity.
How about counting backward?

infinity - 1, must have been at least 1, ..., 0

infinity - 2, must have been at least 2, ..., 0

infinity - 3, must have been at least 3, ..., 0

infinity - 4, must have been at least 4, ..., 0

...

infinity - infinity, must have been at least infinity, 0.


You can notice there isn't any finite number it isn't.


Anyways though that it really is kind of so, that,
to get from zero to one: is a course-of-passage,
through the middle, the middle of no-where.

Actually it's sort of the most real fact that's
only mathematical, so what it takes is to acculturate
an object-sense number-sense word-sense time-sense,
to go along with the usual physical senses that so
often according to scientism and logical positivism
are all that's allowed, that there's a continuity
and there's an infinity and that it's a deductive
result of inference that the limit is the sum.

The infinite limit, ....

I sort of expect mathematicians to know "infinite limit"
and "continuum limit", I got the robot talking it pretty good,
if you don't maybe I got nothing for you.

Of course I build this kind of thing like iota-values
as a continuum limit of functions, standardly modeling
not-a-real-function as a limit of real functions, this
kind of thing, it's a very usual thing you can find
in your physics courses, your probability course,
lots of usual courses.

Then, about how these continuous domains like line-reals,
field-reals, and signal-reals all have to get along
and play together: that follows pretty simply from a result in
function theory and a result in topology, that all together
they all sit neatly together in descriptive set theory
for axiomatic set theory, all ordinary.

That eventually does involve an extra-ordinary, in
the otherwise ordinary set theory, as another one of
these deductive results of inference, to explain how
they all fit together in one consistent theory,
then it's about the most real facts of mathematics.
WM
2024-02-17 10:00:09 UTC
Post by ***@gmail.com
But the infinite is always defined as the unlimited.
Nevertheless actual infinity is limited, for instance 1, 2, 3, ... is
limited by ω.
1/1, 1/2, 1/3, ... is limited by 0.
Post by ***@gmail.com
No. You can't count to infinity.
Correct. Between all numbers you can count and the limit there are dark
numbers.
Take the function NUF(x) = Number of Unit Fractions in the interval (0, x)
for x > 0. It has the following properties:
(1) An increase from NUF(0) = 0 to NUF(x>0) > 0 cannot happen unless
NUF(x) increases at some x.
(2) NUF(x) cannot increase other than when passing unit fractions at some
x = 1/n.
(3) NUF(x) cannot pass more than one unit fraction at a single point x
because
∀n ∈ ℕ: 1/n =/= 1/(n-1).

This requires a first unit fraction, if all are there in actual
infinity. Of course the first unit fractions cannot be seen. They are
dark.

Regards, WM
mitchr...@gmail.com
2024-02-19 04:00:51 UTC
Post by WM
[...]
This requires a first unit fraction, if all are there in actual
infinity. Of course the first unit fractions cannot be seen. They are
dark.
Regards, WM
The first fraction is 1/infinity.
That is dark. Zero is below it and can't be seen.
It is not even dark.

Mitchell Raemsch
Yecin Tcharushin Bazunov
2024-02-19 18:30:52 UTC
because ∀n ∈ ℕ: 1/n =/= 1/(n-1).
This requires a first unit fraction, if all are there in actual
infinity. Of course the first unit fractions cannot be seen. They are
dark. Regards, WM
The first fraction is 1/infinity.That is dark. Zero is below it and
can't be seen. It is not even dark.
yes, I can see that. Fuck you amrica. You are the dirt at the bottom of
the dirt. Dirty 𝗹𝗶𝗯𝗲𝗿𝗮𝗹_𝗰𝗮𝗽𝗶𝘁𝗮𝗹𝗶𝘀𝘁 bitches. You are going to suck large dicks, if
the morons of amrica are not waking up, very fast.

𝗛𝘂𝗻𝗴𝗮𝗿𝘆_𝘀𝗻𝘂𝗯𝘀_𝗨𝗦_𝘀𝗲𝗻𝗮𝘁𝗼𝗿𝘀_–_𝗮𝗺𝗯𝗮𝘀𝘀𝗮𝗱𝗼𝗿
A bipartisan delegation had sought to discuss Sweden’s NATO bid with
senior officials in Budapest
https://r%74.com/news/592675-hungary-boycotts-us-senators/

lol

Senior Hungarian officials have refused to meet four US senators who
arrived in Budapest on Sunday, Washington’s envoy to the country has said.
The American lawmakers are attempting to press Prime Minister Viktor Orban
into speeding up approval of Sweden’s accession to NATO.

Hungary is right to not meet the US war criminals.

The Mafia enforcers have arrived on tourist visas, not as an invited
political delegation. Wise to rebuff them.

fuck you amrica, a stolen territory ruled by 𝗸𝗵𝗮𝘇𝗮𝗿_𝗴𝗼𝘆𝘀. You are promoting,
committing and supporting genocide on planet Earth.

Refusing to receive US bullies seems to be trending.👍

"called the boycott “strange and concerning”" .. They might want to get
used to it. The US is the old kid in town. Even the US citizens just laugh
at their politicians.
mitchr...@gmail.com
2024-02-20 03:19:07 UTC
Post by Yecin Tcharushin Bazunov
because ∀n ∈ ℕ: 1/n =/= 1/(n-1).
This requires a first unit fraction, if all are there in actual
infinity. Of course the first unit fractions cannot be seen. They are
dark. Regards, WM
The first fraction is 1/infinity.That is dark. Zero is below it and
can't be seen. It is not even dark.
yes, I can see that. Fuck you amrica. You are the dirt at the bottom of
It is not America that is the problem. It is Biden hijacking the election.
Good is coming back. Trump has already won.... One Nation Under God
Call Trump a tyrant and you are going to Hell.

Mitchell Raemsch
mitchr...@gmail.com
2024-02-20 19:10:47 UTC
Post by ***@gmail.com
Post by Yecin Tcharushin Bazunov
because ∀n ∈ ℕ: 1/n =/= 1/(n-1).
This requires a first unit fraction, if all are there in actual
infinity. Of course the first unit fractions cannot be seen. They are
dark. Regards, WM
The first fraction is 1/infinity.That is dark. Zero is below it and
can't be seen. It is not even dark.
yes, I can see that. Fuck you amrica. You are the dirt at the bottom of
It is not America that is the problem. It is Biden hijacking the election.
Good is coming back. Trump has already won.... One Nation Under God
Call Trump a tyrant and you are going to Hell.
Mitchell Raemsch
The left is the real tyrant.
Richard Damon
2024-02-17 12:15:36 UTC
Post by Mike Terry
Yes, it's not clear what "infinitely close" means
It means dark numbers.
The function Number of Unit Fractions between (0, and x) has
(1) An increase from NUF(0) = 0 to NUF(x>0) > 0 cannot happen unless
NUF(x) increases at some x.
(2) NUF(x) cannot increase other than when passing unit fractions at some
x = 1/n.
(3) NUF(x) cannot pass more than one unit fraction at a single point x
because
∀n ∈ ℕ: 1/n =/= 1/(n-1).
(4) This requires a first unit fraction, if all are there in actual
infinity.
Regards, WM
In other words, "Dark Numbers" are made up numbers that try to patch the
holes in your logic and you define that we can not know anything about
them, and thus nothing can be wrong with them.

Of course, since your premises are just wrong, so is your logic system,
and you are just trying to hold it together with the bubble gum and
baling wire you call "Darkness".
WM
2024-02-17 22:03:03 UTC
Post by Richard Damon
Post by Mike Terry
Yes, it's not clear what "infinitely close" means
It means dark numbers.
The function Number of Unit Fractions between (0, and x) has
(1) An increase from NUF(0) = 0 to NUF(x>0) > 0 cannot happen unless
NUF(x) increases at some x.
(2) NUF(x) cannot increase other than when passing unit fractions at some
x = 1/n.
(3) NUF(x) cannot pass more than one unit fraction at a single point x
because
∀n ∈ ℕ: 1/n =/= 1/(n-1).
(4) This requires a first unit fraction, if all are there in actual
infinity.
In other words, "Dark Numbers" are made up numbers that try to patch the
holes in your logic
There are no holes in my logic. There is nonsense in your belief.
Post by Richard Damon
Of course, since your premises are just wrong,
My premises are (1) to (3). Nothing wrong.

Regards, WM
Richard Damon
2024-02-18 12:36:27 UTC
Post by WM
Post by Richard Damon
Post by Mike Terry
Yes, it's not clear what "infinitely close" means
It means dark numbers.
The function Number of Unit Fractions between (0, and x) has
(1) An increase from NUF(0) = 0 to NUF(x>0) > 0 cannot happen unless
NUF(x) increases at some x.
(2) NUF(x) cannot increase other than when passing unit fractions at some
x = 1/n.
(3) NUF(x) cannot pass more than one unit fraction at a single point x
because
∀n ∈ ℕ: 1/n =/= 1/(n-1).
(4) This requires a first unit fraction, if all are there in actual
infinity.
In other words, "Dark Numbers" are made up numbers that try to patch
the holes in your logic
There are no holes in my logic. There is nonsense in your belief.
You just can't see the holes, because you close your eyes.

You seem to think that unbounded sets have their bounds in them.
Post by WM
Post by Richard Damon
Of course, since your premises are just wrong,
My premises are (1) to (3). Nothing wrong.
You assume that your NUF exists, and that assumption requires that there
be a "first" (lowest) Unit Fraction which exists.

Since no such number can exist (if x is a unit fraction, x/2 is one
too, and is lower than it), your logic is fallacious.

So, your assumptions are incompatible with the definition of the Natural
Numbers, so they make your system inconsistent, and thus worthless.
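
(Concretely: for any x > 0, every natural number n > 1/x gives a unit
fraction 1/n inside (0, x), so NUF(x) = ℵo for every x > 0 while
NUF(0) = 0. The jump from 0 to ℵo is not located at any single unit
fraction; no positive x is small enough to have fewer than infinitely
many unit fractions below it.)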
Post by WM
Regards, WM
Jim Burns
2024-02-17 18:35:36 UTC
Mike Terry schrieb am Freitag,
Post by Mike Terry
Yes,
it's not clear what "infinitely close" means
It means dark numbers.
Do you (WM) say that
a point with a final.ordinal.reciprocal
⅟n⋅n = 1 ∧ ⟨1,…,n⟩ ⃒⇇ ⟨1,…,n,n⁺¹⟩
below it is infinitelyᵂᴹ.close to 0?
That would be an odd use of "infinite".

A positive dark number has
a final.ordinal.reciprocal below it.

| Assume otherwise.
| Also, assume
| a skipping.function isn't all.continuous, and,
| for final.ordinal.reciprocal ⅟m
| ⅟(4⋅m) is a final.ordinal.reciprocal.
|
| By assumption,
| positive dark δ is a positive lower bound of
| final.ordinal.reciprocals ⅟ℕ₁
| 0 < δ ≤ᣔ ⅟ℕ₁
|
| β is the greatest lower bound of
| final.ordinal.reciprocals ⅟ℕ₁
| 0 < δ ≤ β ≤ᣔ ⅟ℕ₁
| 0 < β/2 < β < 2β
| 2β isn't a lower bound of ⅟ℕ₁
| β is the greatest lower bound of ⅟ℕ₁
| β/2 is a lower bound of ⅟ℕ₁
|
| β < 2β
| 2β isn't a lower bound of ⅟ℕ₁
| final.ordinal.reciprocal ⅟m₂ᵦ < 2β exists.
| final.ordinal.reciprocal ⅟(4⋅m₂ᵦ) < β/2 exists.
| β/2 isn't a lower bound of ⅟ℕ₁
|
| However,
| β/2 < β
| β/2 is a lower bound of ⅟ℕ₁
| Contradiction.

Therefore,
a positive dark number has
a final.ordinal.reciprocal below it.
The function
Number of Unit Fractions between (0, and x)
(1) An increase from NUF(0) = 0 to NUF(x>0) > 0
cannot happen unless NUF(x) increases at some x.
NUF(x) increases at 0
(2) NUF(x) cannot increase other than
when passing unit fractions at some x = 1/n.
NUF(x) cannot increase other than
when ∀β > 0: NUF(x-β) < NUF(x+β)
(3) NUF(x) cannot pass more than one
unit fraction at a single point x because
∀n ∈ ℕ: 1/n =/= 1/(n-1).
∀n ∈ ℕ: 1/n =/= 0

∀β > 0: ∀n ∈ ℕ: NUF(0-β) + n < NUF(0+β)

β > ⅟1⁺ᵐᵝ > ... > ⅟n⁺ᵐᵝ > ⅟(n+1)⁺ᵐᵝ > 0
for
0 =< mᵦ =< ⅟β < mᵦ+1 = 1⁺ᵐᵝ
(4) This requires a first unit fraction,
if all are there in actual infinity.
Each final.ordinal.reciprocal
is preceded by
another final.ordinal.reciprocal.

The first final.ordinal.reciprocal not.exists.
WM
2024-02-17 19:14:18 UTC
Post by Jim Burns
Post by Chris M. Thomasson
The function
Number of Unit Fractions between (0, and x)
(1) An increase from NUF(0) = 0 to NUF(x>0) > 0
cannot happen unless NUF(x) increases at some x.
NUF(x) increases at 0
Impossible, because 0 is not a unit fraction.
Post by Jim Burns
Post by Chris M. Thomasson
(2) NUF(x) cannot increase other than
when passing unit fractions at some x = 1/n.
NUF(x) cannot increase other than
when ∀β > 0: NUF(x-β) < NUF(x+β)
No. If 2β fits between two unit fractions this is not true.
Post by Jim Burns
Post by Chris M. Thomasson
(3) NUF(x) cannot pass more than one
unit fraction at a single point x because
∀n ∈ ℕ: 1/n =/= 1/(n-1).
∀n ∈ ℕ: 1/n =/= 0
Yes. Therefore NUF cannot increase at 0.
Post by Jim Burns
Post by Chris M. Thomasson
(4) This requires a first unit fraction,
if all are there in actual infinity.
Each final.ordinal.reciprocal
is preceded by
another final.ordinal.reciprocal.
No, this axiom must be given up.
Post by Jim Burns
The first final.ordinal.reciprocal not.exists.
The alternative would be an increase of NUF(x) to infinity at zero.
Not acceptable.

Regards, WM
WM
2024-02-18 21:00:44 UTC
Which do you (WM) give up?
How do you justify giving it up?
I will never give up the following self-evidence:
If there are ℵo unit fractions in the interval (0, eps), then there is
an x with only a finite number of unit fractions in (0, x).

Why? Because unit fractions are real points on the real line. They cannot
appear as an infinite swarm without a finite start.

The intersection of all intervals (0, eps) that can be chosen by anybody
in eternity however contains ℵo unit fractions.

Regards, WM
Richard Damon
2024-02-18 23:12:14 UTC
Post by WM
Which do you (WM) give up?
How do you justify giving it up?
If there are ℵo unit fractions in the interval (0, eps), then there is
an x with only a finite number of unit fractions in (0, x).
Why? Because unit fractions are real points on the real line. They
cannot appear as an infinite swarm without a finite start.
But the start was at 1/1

Remember "real points" take up no spacd, so we can always pack more of
them into any finite space, so there doesn't actually need to be a "first"

If there WAS a "finite first" unit fraction, then there couldn't be an
infinite swarm of them, because then you have a finite length divided
into segments with a finite lower bound of size, and thus have a finite
count of how many can fit.

But if there was a finite first unit fraction, then there would also be
a finite maximum Natural Number, which is a contradiction of definitions.
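
(Worked out: if 1/m were the smallest unit fraction, then the only unit
fractions would be 1/1, 1/2, ..., 1/m, exactly m of them, and m would be
the largest natural number; but m+1 exists and 1/(m+1) < 1/m.)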

So, your logic is just backwards, based on embedding your misconceptions
into a made-up function that can't actually exist.
Post by WM
The intersection of all intervals (0, eps) that can be chosen by anybody
in eternity however contains ℵo unit fractions.
Regards, WM
Right, because infinity can never be decreased by finite operations. So
you can not expect it to.

If there was a "first", then you never had an infinite set.
WM
2024-02-19 08:14:52 UTC
Post by Richard Damon
Post by WM
Which do you (WM) give up?
How do you justify giving it up?
If there are ℵo unit fractions in the interval (0, eps), then there is
an x with only a finite number of unit fractions in (0, x).
Why? Because unit fractions are real points on the real line. They
cannot appear as an infinite swarm without a finite start.
But the start was at 1/1
Remember "real points" take up no space,
But unit fractions have internal distances and they take up space.

If there is a set of real points with distances at the real axis, then
every point can be considered as the border between two subsets. If it is
impossible to reduce the left-hand subset to a finite amount, then there
is no point available dividing infinitely many unit fractions. Then they
sit at one point. That is impossible by ∀n ∈ ℕ: 1/n - 1/(n+1) = d_n
Post by Richard Damon
0 .
Regards, WM
Richard Damon
2024-02-19 12:33:49 UTC
Post by WM
Post by Richard Damon
Post by WM
Which do you (WM) give up?
How do you justify giving it up?
If there are ℵo unit fractions in the interval (0, eps), then there
is an x with only a finite number of unit fractions in (0, x).
Why? Because unit fractions are real points on the real line. They
cannot appear as an infinite swarm without a finite start.
But the start was at 1/1
Remember "real points" take up no space,
But unit fractions have internal distances and they take up space.
But a space that gets vanishingly small, and thus we CAN fit an infinite
number of them in a finite space.
Post by WM
If there is a set of real points with distances at the real axis, then
every point can be considered as the border between two subsets. If it
is impossible to reduce the left-hand subset to a finite amount, then
there is no point available dividing infinitely many unit fractions.
Then they sit at one point. That is impossible by ∀n ∈ ℕ: 1/n - 1/(n+1)
= d_n
Nope. False conclusion. Why must the left side ever become finite?
Post by WM
Post by Richard Damon
0 .
Regards, WM
I guess you agree that Achilles can't pass the Tortoise, or maybe only
at some "dark" time that we can not see.
WM
2024-02-19 12:59:22 UTC
Post by Richard Damon
Post by WM
Post by Richard Damon
Post by WM
Which do you (WM) give up?
How do you justify giving it up?
If there are ℵo unit fractions in the interval (0, eps), then there
is an x with only a finite number of unit fractions in (0, x).
Why? Because unit fractions are real points on the real line. They
cannot appear as an infinite swarm without a finite start.
But the start was at 1/1
Remember "real points" take up no space,
But unit fractions have internal distances and they take up space.
But a space that gets vanishingly small,
Not by the number of points - there are always infinitely many between two
adjacent unit fractions.
Post by Richard Damon
Post by WM
If there is a set of real points with distances at the real axis, then
every point can be considered as the border between two subsets. If it
is impossible to reduce the left-hand subset to a finite amount, then
there is no point available dividing infinitely many unit fractions.
Then they sit at one point. That is impossible by ∀n ∈ ℕ: 1/n - 1/(n+1)
= d_n
Why must the left side ever become finite?
If there are really existing real points, then each one can be used, in
principle, as the border.
Post by Richard Damon
I guess you agree that Achilles can't pass the Tortoise, or maybe only
at some "dark" time that we can not see.
Irrelevant for the present topic. But true that he overtakes in darkness.

Regards, WM
Richard Damon
2024-02-20 02:17:02 UTC
Post by WM
Post by Richard Damon
Post by WM
Post by Richard Damon
Post by WM
Which do you (WM) give up?
How do you justify giving it up?
If there are ℵo unit fractions in the interval (0, eps), then there
is an x with only a finite number of unit fractions in (0, x).
Why? Because unit fractions are real points on the real line. They
cannot appear as an infinite swarm without a finite start.
But the start was at 1/1
Remember "real points" take up no space,
But unit fractions have internal distances and they take up space.
But a space that gets vanishingly small,
Not by the number of points - there are always infinitely many between
two adjacent unit fractions.
So?

Your mind just doesn't seem to be able to understand that fact.
Post by WM
Post by Richard Damon
Post by WM
If there is a set of real points with distances at the real axis,
then every point can be considered as the border between two subsets.
If it is impossible to reduce the left-hand subset to a finite
amount, then there is no point available dividing infinitely many
unit fractions. Then they sit at one point. That is impossible by ∀n
∈ ℕ: 1/n - 1/(n+1) = d_n
Why must the left side ever become finite?
If there are really existing real points, then each one can be used, in
principle, as the border.
No, only the one ON the border would be the border, but for every one
that you might want to think of as the border, there are more that are
closer to the border.

Thus, there is NOT one (in the set) on the border, and thus there is no
"first" (from the left) unit fraction, and thus NUF(x) isn't defined.
Post by WM
Post by Richard Damon
I guess you agree that Achilles can't pass the Tortoise, or maybe only
at some "dark" time that we can not see.
Irrelevant for the present topic. But true that he overtakes in darkness.
But he doesn't, he passes the Tortoise at an easily determined finite
time (if we know the actual speeds).

This shows that to you ALL numbers have become "dark", because you can't
actually use any of them without hitting a problem.
Post by WM
Regards, WM
WM
2024-02-20 08:24:07 UTC
Measured by the number of points - there are always infinitely many
between two adjacent unit fractions.
Post by Richard Damon
Post by Richard Damon
Post by WM
If there is a set of real points with distances at the real axis,
then every point can be considered as the border between two subsets.
If it is impossible to reduce the left-hand subset to a finite
amount, then there is no point available dividing infinitely many
unit fractions. Then they sit at one point. That is impossible by ∀n
∈ ℕ: 1/n - 1/(n+1) = d_n
Why must the left side ever become finite?
If there are really existing real points, then each one can be used, in
principle, as the border.
No, only the one ON the border would be the border, but for every one
that you might want to think of as the border, there are more that are
closer to the border.
That means they appear only later. That is potential infinity.
Post by Richard Damon
Thus, there is NOT one (in the set) on the border, and thus there is no
"first" (from the left) unit fraction, and thus NUF(x) isn't defined.
It is very well defined.

Further, if
∃^ℵ y ∈ {1/n : n ∈ ℕ} ∀x ∈ (0, 1]: 0 < y < x
is false, then there must be values x > 0 which make it false, i.e., which have
fewer smaller unit fractions. Therefore
∀x ∈ (0, 1]: ∃^ℵ y ∈ {1/n : n ∈ ℕ}: 0 < y < x
cannot be true for all x > 0.
Post by Richard Damon
Post by Richard Damon
I guess you agree that Achilles can't pass the Tortoise, or maybe only
at some "dark" time that we can not see.
Irrelevant for the present topic. But true that he overtakes in darkness.
But he doesn't, he passes the Tortoise at a easily determined finite
time (if we know the actual speeds).
Yes, the limit is well known, like the limit 0 of the unit fractions or
the limit ω of the natural numbers.

Regards, WM
Ross Finlayson
2024-02-19 16:27:49 UTC
Post by Richard Damon
[...]
I guess you agree that Achilles can't pass the Tortoise, or maybe only
at some "dark" time that we can not see.
How about if you take the turtle's velocity and subtract it
from Achilles' velocity, then just compute Achilles' time-to-travel
to the Tortoise, and add the Tortoise's progress, to where they meet.

Though, if the Tortoise is 0 m/s, that's infinity s/m, ....
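A minimal sketch of that relative-velocity arithmetic (Python; the speeds, the head start, and the helper name are made-up illustration values, not anything from the thread):

    # Achilles chases the Tortoise, which starts head_start ahead; both move at constant speed.
    def meeting_point(v_achilles, v_tortoise, head_start):
        """Return (time, position) at which Achilles draws level, measured from his start."""
        if v_achilles <= v_tortoise:
            return None  # no positive relative speed, so he never catches up
        t = head_start / (v_achilles - v_tortoise)  # time to close the gap at the relative speed
        return t, v_achilles * t                    # same as head_start plus the Tortoise's progress

    print(meeting_point(10.0, 1.0, 100.0))  # (11.11..., 111.11...): a finite, easily determined time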

Of course it assumes that the Arrow already has a
geometric series that has a sum that equals one, ....

Adding a "postulate of continuity" to geometry was
deemed the needful about a millenia or two after Euclid.


Again this is just the Sorites/Heap, then as with regards
to the "infinitely-divided" of the "infinitely divisible",
what you want is not samples of the "infinitely divisible",
that will never end, instead the "continuum limit" of
the function that is the "infinite limit" of the divisions.

Then the values in front start with finitely many instead,
and it's just either side of that "continuous" and "discrete",
it's about the most fundamental concept relating together
the "continuous" and "discrete", in one relation.

It is, ....
Jim Burns
2024-02-19 20:37:34 UTC
Permalink
Post by Richard Damon
Post by WM
Post by Richard Damon
Post by WM
Which do you (WM) give up?
How do you justify giving it up?
If there are
ℵo unit fractions in the interval (0, eps),
then there is an x with only a finite number of
unit fractions in (0, x).
Why? Because unit fractions are real points on
the real line.
They cannot appear as an infinite swarm without
a finite start.
But the start was at 1/1
Remember "real points" take up no space,
But unit fractions have internal distances
and they take up space.
But a space that gets vanishingly small,
and thus
we CAN fit an infinite number of them in
a finite space.
One can avoid using the I.word by saying
| we CAN fit more than any finite number of them in
| a finite space.
WM
2024-02-20 08:15:03 UTC
Permalink
Post by Jim Burns
One can avoid using the I.word by saying
| we CAN fit more than any finite number of them in
| a finite space.
But if all are there existing from scratch as an actually infinite
set, then each one can be addressed as a border between two subsets, in
principle. Even the smallest one.

Regards, WM
Richard Damon
2024-02-20 12:49:03 UTC
Permalink
Post by WM
Post by Jim Burns
One can avoid using the I.word by saying
| we CAN fit more than any finite number of them in
| a finite space.
But if all are there existing from the scratch as an actually infinite
set, then each one can be addressed as border between two subsets, in
principle. Even the smallest one.
Regards, WM
But there isn't a "Smallest One".

You just don't understand that fact, because your mind is just too filled
with Darkness.
WM
2024-02-20 16:47:31 UTC
Permalink
Post by Richard Damon
Post by WM
But if all are there existing from the scratch as an actually infinite
set, then each one can be addressed as border between two subsets, in
principle. Even the smallest one.
But there isn't a "Smallest One".
You are obviously wrong. If all are there, then all can be used to divide
the set into two parts. None is exempt.

Regards, WM
Richard Damon
2024-02-21 02:17:03 UTC
Permalink
Post by WM
Post by Richard Damon
Post by WM
But if all are there existing from the scratch as an actually
infinite set, then each one can be addressed as border between two
subsets, in principle. Even the smallest one.
But there isn't a "Smallest One".
You are obviously wrong. If all are there, then all can be used to
divide the set into two parts. None is exempt.
Regards, WM
And we CAN do that, since none of them are "the first", so ALL of them
have an infinite number of points before them.

Nothing wrong with that, at least as long as you understand how
unbounded sets work.
WM
2024-02-21 08:42:14 UTC
Permalink
Post by Richard Damon
Post by WM
You are obviously wrong. If all are there, then all can be used to
divide the set into two parts. None is exempt.
And we CAN do that, since none of them are "the first", so ALL of them
have an infinite number of points before them.
If all are there, timeless and static, then one of them is the first.
NUF(x) growing from 0 to ℵo immediately would contradict mathematics
according to which after every unit fraction there are points without unit
fraction.

Regards, WM
Richard Damon
2024-02-21 12:32:24 UTC
Permalink
Post by WM
Post by Richard Damon
Post by WM
You are obviously wrong. If all are there, then all can be used to
divide the set into two parts. None is exempt.
And we CAN do that, since none of them are "the first", so ALL of them
have an infinite number of points before them.
If all are there, timeless and static, then one of them is the first.
Nope.

You don't understand the properties of UNBOUNDED sets.

Being "Unbounded" means there isn't a "Bound" (i.e. and end) in that set.

This is just a property of "infinity" that your logic can't handle.
Post by WM
NUF(x) growing from 0 to ℵo immediately would contradict mathematics
according to which after every unit fraction there are points without
unit fraction.
No more than my Qn and Qd which show that the square root of two is
Rational.

Defined in words does not mean defined to exist.
Post by WM
Regards, WM
WM
2024-02-22 12:01:55 UTC
Permalink
Post by Richard Damon
Post by WM
If all are there, timeless and static, then one of them is the first.
You don't understand the properties of UNBOUNDED sets.
Sets between 0 and 1 have bounds.

Regards, WM
Richard Damon
2024-02-22 12:17:44 UTC
Permalink
Post by WM
Post by Richard Damon
Post by WM
If all are there, timeless and static, then one of them is the first.
You don't understand the properties of UNBOUNDED sets.
Sets between 0 and 1 have bounds.
Regards, WM
Only if "Between" is INCLUSIVE, as the bounds are 0 and 1.

If the set EXCLUDES one or both of the bounds, it doesn't have them in it
anymore.

The fact that 0 is not > 0 means there is no lowest possible unit
fraction / rational number / real number > 0.

Do you really think that the set can both exclude the bounding value and
also contain it?

You are just admitting that your logic is broken.
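A minimal sketch of that point (Python; the candidate values and the helper name are arbitrary, made up for this sketch): any member of (0, 1] proposed as the "lowest" is beaten by its own half, which is still in the set.

    def smaller_member(x):
        """For any x in the half-open interval (0, 1], return a member that is strictly smaller."""
        assert 0 < x <= 1
        return x / 2  # still greater than 0, still at most 1, and smaller than x

    for candidate in (1.0, 0.01, 1e-300):
        print(candidate, "->", smaller_member(candidate))
    # The infimum 0 lies outside (0, 1], so it is never reached from inside the set.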
Richard Damon
2024-02-23 12:23:21 UTC
Permalink
Post by Richard Damon
Post by WM
Post by Richard Damon
Post by WM
If all are there, timeless and static, then one of them is the first.
You don't understand the properties of UNBOUNDED sets.
Sets between 0 and 1 have bounds.
Only if "Between" is INCLUSIVE, as the bounds are 0 and 1.
If the set EXCLUDES one or both of the bounds, it doesn't have it in
anymore.
The bounds are there, if not at 0 then before. Note: linearity. No
magical leaps.
Regards, WM
Right, so there isn't a number just "next to" 0, as you had to "leap"
over all the infinite numbers between 0 and that number.

The Lower Bound for (0, 1] is 0, which is out of the set, so there is no
lower bound IN the set to use, so there is no "lowest" value in the set.

You just don't understand how unbounded sets work.

Your math is just exploded in contradictions because it can't hold that
much safely.
Chris M. Thomasson
2024-02-23 19:14:58 UTC
Permalink
Post by Richard Damon
Post by Richard Damon
Post by WM
Post by Richard Damon
Post by WM
If all are there, timeless and static, then one of them is the first.
You don't understand the properties of UNBOUNDED sets.
Sets between 0 and 1 have bounds.
Only if "Between" is INCLUSIVE, as the bounds are 0 and 1.
If the set EXCLUDES one or both of the bounds, it doesn't have it in
anymore.
The bounds are there, if not at 0 then before. Note: linearity. No
magical leaps.
Regards, WM
Right, so there isn't a number just "next to" 0, as you had to "leap"
over all the infinite numbers between 0 and that number.
The Lower Bound for (0, 1] is 0, which is out of the set, so there is no
lower bound IN the set to use, so there is not "lowest" value in the set.
You just don't understand how unbounded sets work.
We have tried to explain this to him... His mind seems to be locked in.
Post by Richard Damon
Your math is just exploded in contradictions because it can't hold that
much safely.
FromTheRafters
2024-02-22 14:39:23 UTC
Permalink
Post by WM
Post by Richard Damon
Post by WM
If all are there, timeless and static, then one of them is the first.
You don't understand the properties of UNBOUNDED sets.
Sets between 0 and 1 have bounds.
Which sets?
WM
2024-02-23 08:35:08 UTC
Permalink
Post by FromTheRafters
Post by WM
Sets between 0 and 1 have bounds.
Which sets?
All sets with elements x which obey 0 < x < 1.

Regards, WM
FromTheRafters
2024-02-23 08:54:30 UTC
Permalink
Post by WM
Post by FromTheRafters
Post by WM
Sets between 0 and 1 have bounds.
Which sets?
All sets with elements x which obey 0 < x < 1.
Then your interval was unbounded (0,1) and your 'x' is strictly between
zero and one.
WM
2024-02-23 09:04:29 UTC
Permalink
Post by FromTheRafters
Post by WM
Post by FromTheRafters
Post by WM
Sets between 0 and 1 have bounds.
Which sets?
All sets with elements x which obey 0 < x < 1.
Then your interval was unbounded (0,1) and your 'x' is strictly between
zero and one.
The border is either 0 and 1 or between 0 and 1. Real timeless complete
existence requires fixed smallest and largest points.
Eigentlichunendlichem = Transfinitum = Vollendetunendlichem =
Unendlichseiendem = kategorematice infinitum.


Regards, WM
Richard Damon
2024-02-23 12:23:26 UTC
Permalink
Post by WM
Post by FromTheRafters
Post by WM
Post by FromTheRafters
Post by WM
Sets between 0 and 1 have bounds.
Which sets?
All sets with elements x which obey 0 < x < 1.
Then your interval was unbounded (0,1) and your 'x' is strictly
between zero and one.
The border is either 0 and 1 or between 0 and 1. Real timeless complete
existence requires fixed smallest and largest points.
Eigentlichunendlichem = Transfinitum = Vollendetunendlichem =
Unendlichseiendem = kategorematice infinitum.
Regards, WM
No, it doesn't

Remember, you have an INFINITE sized set (even if all the values are
finite) so your rule doesn't apply.

Don't understand your German, but you must be wrong.

It seems your problem is you just don't understand how infinity works.

And yes, if be "Real" you mean existing in the real physical world, it
can't actually have even Natural Numbers, since the physical world is
finite, and thus can't hold the infinite, so appeals to it just show the
limitation of trying to argue about Mathematics with a tool belt of
inadequacy tools.
WM
2024-02-24 12:16:55 UTC
Permalink
Post by Richard Damon
Post by WM
The border is either 0 and 1 or between 0 and 1. Real timeless complete
existence requires fixed smallest and largest points.
Eigentlichunendlichem = Transfinitum = Vollendetunendlichem =
Unendlichseiendem = kategorematice infinitum.
Remember, you have an INFINITE sized set (even if all the values are
finite) so your rule doesn't apply.
All numbers exist as points on the real line.
Post by Richard Damon
Don't understand your German, but you must be wrong.
Completed infinity. That means all points are there, existing, independent
of neighbours.
Post by Richard Damon
And yes, if be "Real" you mean existing in the real physical world,
All numbers exist as points on the real line.

Regards, WM
Richard Damon
2024-02-24 15:31:56 UTC
Permalink
Post by WM
Post by Richard Damon
Post by WM
The border is either 0 an 1 or between 0 and 1. Real timeless
complete existence requires fixed smallest and largest points.
Eigentlichunendlichem = Transfinitum = Vollendetunendlichem =
Unendlichseiendem = kategorematice infinitum.
Remember, you have an INFINITE sized set (even if all the values are
finite) so your rule doesn't apply.
All numbers exist as points on the real line.
Right.
Post by WM
Post by Richard Damon
Don't understand your German, but you must be wrong.
Completed infinity. That means all points are there, existing,
independent of neighbours.
Right, except that we know some properties of what might be a neighbor,
like no two different real numbers are "next" to each other, as there
are always (an infinite number of) points between them.

For example, if we want to claim that X and Y are next to each other, we
need to explain away the number (X+Y)/2 which will be between them.
Post by WM
Post by Richard Damon
And yes, if be "Real" you mean existing in the real physical world,
All numbers exist as points on the real line.
Regards, WM
Yes, an no point exists as the "first" above 0, as any value X we want
ot claim to be that one, has an X/2 between it and 0.

That number exists, because, as you said, the infinity was completed, so
all are there.

Infinitely many real points exist, and no points are "next" to each other.

Thus, your "logic" about values needing to be "next" to something is
just invalid, it doesn't apply to these unbounded sets.
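A minimal sketch of the two separating points used above (Python; X and Y are arbitrary sample values chosen for this sketch):

    def between(x, y):
        """Return a point strictly between two distinct reals x < y."""
        assert x < y
        return (x + y) / 2  # the midpoint coincides with neither endpoint

    X, Y = 0.001, 0.002
    print(X < between(X, Y) < Y)  # True: X and Y are not "next to" each other
    print(0 < between(0, X) < X)  # True: X/2 lies between 0 and X, so X is not "next to" 0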
WM
2024-02-24 16:14:53 UTC
Permalink
Post by Richard Damon
Post by WM
All numbers exist as points on the real line.
Yes, an no point exists as the "first" above 0,
That is in contradiction with logic. Static existence in a linear order
enforces a first one.
Not so easy to see with integers. Very clear with unit fractions.
Your attempts to deny this contradict finite logic (there is no other) and
thus are in vain. It is futile to discuss this fact. Therefore I will no
longer do so.

Regards, WM
Richard Damon
2024-02-24 16:45:25 UTC
Permalink
Post by WM
Post by Richard Damon
Post by WM
All numbers exist as points on the real line.
Yes, an no point exists as the "first" above 0,
That is in contradiction with logic. Static existence in a linear order
enforces a first one.
Nope.

That is FINITE logic, which we no longer have.
Post by WM
Not so easy to see with integers. Very clear with unit fractions.
Your attempts to deny this contradict finite logic (there is no other)
and thus are in vain. It is futile to discuss this fact. Therefore I
will no longer do so.
Regards, WM
Nope, easier to DECEIVE yourself with unit fractions, as you can imagine
that they must take up some bounded amount of space, which they don't.

Unboundedly small is harder to understand than unboundedly large.

You admission that you are using "finite" logic on an "infinte" set, and
thus breaking the rules of it, just shows that you are admitting that
you have no real logical basis for your argument.

Yes, if you define a FINITE set of "Visible Natural Numbers" which has
an upper value, and thus a lowest unit fraction, you can make your logic
work, and have a "dark" set of Natural Numbers that are not visible,
except that the Visible Natural Numbers fail to meet the requirements
for the Natural Numbers (since there is a number without a successor to
it in the set).

You can try to extend that by saying that for each different "Visible" set we
get a different dark set, and think towards the limit, except that
finite logic doesn't support limit theory.

So, as I have been telling you, your "Darkness" is just a figment of
your use of inappropriate logic, and doesn't actually exist in the realm
of the actual Natural Numbers.

FromTheRafters
2024-02-24 16:26:23 UTC
Permalink
Post by WM
Post by Richard Damon
Post by WM
The border is either 0 and 1 or between 0 and 1. Real timeless complete
existence requires fixed smallest and largest points.
Eigentlichunendlichem = Transfinitum = Vollendetunendlichem =
Unendlichseiendem = kategorematice infinitum.
Remember, you have an INFINITE sized set (even if all the values are
finite) so your rule doesn't apply.
All numbers exist as points on the real line.
Post by Richard Damon
Don't understand your German, but you must be wrong.
Completed infinity. That means all points are there, existing, independent of
neighbours.
Post by Richard Damon
And yes, if be "Real" you mean existing in the real physical world,
All numbers exist as points on the real line.
All *real* numbers do.
Jim Burns
2024-02-22 14:54:52 UTC
Permalink
Post by WM
Post by Richard Damon
Post by WM
If all are there, timeless and static,
then one of them is the first.
You don't understand
the properties of UNBOUNDED sets.
Sets between 0 and 1 have bounds.
Consider a finite set B

In each
transitive.trichotomous.order on B
each non.empty subset S of B holds
two extrema (minimum and maximum) of S

Each
transitive.trichotomous.order on B
has
the each.subset.two.extrema property.

More at
https://en.wikipedia.org/wiki/Finite_set

----
Consider ⟨1,…,n⟩

∀S ᙾ⁰⊆ ⟨1,…,n⟩: S holds two extrema of S

The standard
transitive.trichotomous.order on ⟨1,…,n⟩
has
the each.subset.two.extrema property.

If any
transitive.trichotomous.order on ⟨1,…,n⟩
has
the each.subset.two.extrema property,
then each
transitive.trichotomous.order on ⟨1,…,n⟩
has
the each.subset.two.extrema property,
provably.

Each
transitive.trichotomous.order on ⟨1,…,n⟩
has
the each.subset.two.extrema property.

⟨1,…,n⟩ is finite

----
Consider the union ⋃ₙ⟨⟨1,…,n⟩⟩
Standardly ℕ = ⋃ₙ⟨⟨1,…,n⟩⟩

In the standard
transitive.trichotomous.order on ℕ
ℕ holds one extremum

The standard
transitive.trichotomous.order on ℕ
doesn't have
the each.subset.two.extrema property.

Each
transitive.trichotomous.order on ℕ
doesn't have
the each.subset.two.extrema property,
provably.

ℕ is not finite.

----
Consider ℕ∪{ω}
Standardly ℕ ᣔ< ω

In the standard
transitive.trichotomous.order on ℕ∪{ω}
ℕ∪{ω} holds two extrema, however,
ℕ ⊆ ℕ∪{ω} holds one extremum

The standard
transitive.trichotomous.order on ℕ∪{ω}
doesn't have
the each.subset.two.extrema property.

Each
transitive.trichotomous.order on ℕ∪{ω}
doesn't have
the each.subset.two.extrema property,
provably.

ℕ∪{ω} is not finite.

----
Consider [0,1] ⊆ ℝ

In the standard
transitive.trichotomous.order on [0,1]
[0,1] holds two extrema, however,
[0,1) ⊆ [0,1] holds one extremum

The standard
transitive.trichotomous.order on [0,1]
doesn't have
the each.subset.two.extrema property.

Each
transitive.trichotomous.order on [0,1]
doesn't have
the each.subset.two.extrema property,
provably.

[0,1] is not finite.
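A minimal brute-force check of the each.subset.two.extrema property on a small finite example (Python; the set {1,...,5} and the function name are made up for this sketch):

    from itertools import combinations

    def each_subset_two_extrema(elements):
        """For a finite set, compute min and max of every non-empty subset;
        both always exist, which is the finiteness criterion described above."""
        elements = list(elements)
        return all(min(s) <= max(s)
                   for k in range(1, len(elements) + 1)
                   for s in combinations(elements, k))

    print(each_subset_two_extrema(range(1, 6)))  # True: <1,...,5> is finite
    # N itself fails the criterion: the subset N has minimum 1 but no maximum
    # (the "holds one extremum" case above), so N is not finite.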
WM
2024-02-23 08:42:10 UTC
Permalink
Post by Jim Burns
ℕ is not finite.
Yes, but what does this mean? The visible part has no final element n,
because n+1 and 2n and n^n^n can be found for every n. The complete set
however is complete such that no element can be added. Based on Cantor's
Eigentlichunendlichem = Transfinitum = Vollendetunendlichem =
Unendlichseiendem = kategorematice infinitum there is a fixed number of
numbers, although we cannot count or determine it better than by |ℕ|.
Post by Jim Burns
[0,1] is not finite.
It has bounds. The number of points is fixed.

Regards, WM
Jim Burns
2024-02-23 13:11:26 UTC
Permalink
Post by WM
Post by Jim Burns
Consider a finite set B
In each
transitive.trichotomous.order on B
each non.empty subset S of B holds
two extrema (minimum and maximum) of S
Each
transitive.trichotomous.order on B
has
the each.subset.two.extrema property.
More at
https://en.wikipedia.org/wiki/Finite_set
ℕ is not finite.
Yes, but what does this mean?
It means that
there is
transitive.trichotomous.order on ℕ
according to which there is
a non.empty subset of ℕ
which
does not hold two extrema.
Post by WM
The visible part has no final element n,
Right.
Thus, ℕ is not finite.

Even in ℕᵂᴹ with your darkᵂᴹ numbers
there is the visibleᵂᴹ standard order
according to which there is
there is the visibleᵂᴹ number subset
which,
as you (WM) have noted,
does not hold two extrema.

The insertion of darkᵂᴹ numbers
cannot turn an infinite set finite,
_according to what "finite" means_
Post by WM
because n+1 and 2n and n^n^n can be found
for every n.
The complete set however
is complete such that
no element can be added.
Based on Cantor's
Eigentlichunendlichem =
Transfinitum =
Vollendetunendlichem =
Unendlichseiendem =
kategorematice infinitum
there is a fixed number of numbers,
although we cannot count or determine it
better than by |ℕ|.
When we "count" the elements of S
we determine the last final.ordinal[1]
which fits[2] in S, or, equivalently,
the predecessor of
the first final ordinal which doesn't fit in S

Flocks of sheep and pockets of pebbles have
their first.not.fitting.final.ordinal.

The set N of final.ordinals does not have
its first.not.fitting.final.ordinal.

It not.exists.
It doesn't exist.in.the.darkᵂᴹ

[2]
fits
1.to.1 map exists

[1]
final
another one doesn't fit
Post by WM
Post by Jim Burns
[0,1] is not finite.
It has bounds.
The number of points is fixed.
[0,1] ⊇ [0,1) which has one extremum.
Ross Finlayson
2024-02-23 17:38:55 UTC
Permalink
Post by Jim Burns
Post by WM
Post by Jim Burns
Consider a finite set B
In each
transitive.trichotomous.order on B
each non.empty subset S of B holds
two extrema (minimum and maximum) of S
Each
transitive.trichotomous.order on B
has
the each.subset.two.extrema property.
More at
https://en.wikipedia.org/wiki/Finite_set
ℕ is not finite.
Yes, but what does this mean?
It means that
there is
transitive.trichotomous.order on ℕ
according to which there is
a non.empty subset of ℕ
which
does not hold two extrema.
Post by WM
The visible part has no final element n,
Right.
Thus, ℕ is not finite.
Even in ℕᵂᴹ with your darkᵂᴹ numbers
there is the visibleᵂᴹ standard order
according to which there is
there is the visibleᵂᴹ number subset
which,
as you (WM) have noted,
does not hold two extrema.
The insertion of darkᵂᴹ numbers
cannot turn an infinite set finite,
_according to what "finite" means_
Post by WM
because n+1 and 2n and n^n^n can be found
for every n.
The complete set however
is complete such that
no element can be added.
Based on Cantor's Eigentlichunendlichem =
Transfinitum =
Vollendetunendlichem = Unendlichseiendem =
kategorematice infinitum
there is a fixed number of numbers,
although we cannot count or determine it
better than by |ℕ|.
When we "count" the elements of S
we determine the last final.ordinal[1]
which fits[2] in S, or, equivalently,
the predecessor of
the first final ordinal which doesn't fit in S
Flocks of sheep and pockets of pebbles have
their first.not.fitting.final.ordinal.
The set N of final.ordinals does not have
its first.not.fitting.final.ordinal.
It not.exists.
It doesn't exist.in.the.darkᵂᴹ
[2]
fits
1.to.1 map exists
[1]
final
another one doesn't fit
Post by WM
Post by Jim Burns
[0,1] is not finite.
It has bounds.
The number of points is fixed.
[0,1] ⊇ [0,1) which has one extremum.
It's pretty simply,

n/d: the function the continuum limit as d -> oo

and

n/d: the equivalence classes of ratios, d =/= 0

that

one's a function the range a continuous domain [0,1]
the other a model of the non-negative rationals.



If you might condense _all_ of WM/MW's "axioms"
since he ever wrote to sci.math or sci.logic, and
catalog their reasonings and lack thereof, and
then relate it to ratios of integers either apiece
or in the continuum limit, then it seems
the entire bit could be much reduced,
instead of perpetuating the freak-out.



So, for these mutual notions

n/d: the continuum limit of the function as d -> oo

and

n/d: the equivalence classes of ratios, with d =/= 0

then these are very well-understood numbers
and their forms have very familiar properties
and they stop disagreeing with each other
and stop disagreeing with them.
WM
2024-02-24 12:20:24 UTC
Permalink
Post by Jim Burns
Post by WM
Post by Jim Burns
[0,1] is not finite.
It has bounds.
The numer of points is fixed.
[0,1] ⊇ [0,1) which has one extremum.
It has two but only one is visible. Considering only unit fractions proves
it clearly. But considering all points does not change the principle.

Regards, WM
Jim Burns
2024-02-19 20:21:02 UTC
Permalink
Post by WM
| a final.ordinal.reciprocal.free zone (0,δ)
| exists
or
| a skipping.function isn't all.continuous
or
| for final.ordinal.reciprocal ⅟m
| ⅟(4⋅m) is a final.ordinal.reciprocal.
Which do you (WM) give up?
How do you justify giving it up?
If
there are ℵo unit fractions in
the interval (0, eps),
then
there is an x with only a finite number of
unit fractions in (0, x).
Why?
Because unit fractions are real points on
the real line.
They cannot appear as
an infinite swarm without a finite start.
For each ⅟j in an uninterrupted sequence

each final.ordinal fits leftward
not.each final.ordinal fits rightward

In other words,
infinitely.many are leftward
finitely.many are rightward
for each ⅟j

No ⅟j is in a finite start
That is the reason that
an infinite swarm is the only possibility
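A minimal sketch of the leftward/rightward counts for a sample ⅟j (Python; j, k, and the helper names are arbitrary illustration choices):

    from fractions import Fraction

    def rightward(j):
        """Unit fractions greater than 1/j: exactly j - 1 of them, a finite collection."""
        return [Fraction(1, n) for n in range(1, j)]

    def leftward(j, k):
        """Unit fractions smaller than 1/j: for ANY requested k, here are k of them,
        so no finite bound suffices -- infinitely many lie leftward of each 1/j."""
        return [Fraction(1, n) for n in range(j + 1, j + 1 + k)]

    j = 5
    print(len(rightward(j)))  # 4, the finite right-hand side
    print(leftward(j, 3))     # [Fraction(1, 6), Fraction(1, 7), Fraction(1, 8)], and k is unbounded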

----
A finite ordinal is final:
It is the last which has its cardinality.

Ordinal.finality is determined by not.fitting:
If
Bob is inserted into {<j} the ordinals before final j
then
{<j}⁺ᴮᵒᵇ not.fits {<j}
{<j} ⃒⇇ {<j}⁺ᴮᵒᵇ
not.exists 1.to.1 map to {<j} from {<j}⁺ᴮᵒᵇ

Visibleᵂᴹ or darkᵂᴹ,
not.exists final‖not.final j‖j⁺¹

If
j⁺¹ is not.final and
exists 1.to.1 map g to {<j⁺¹} from {<j⁺¹}⁺ᴮᵒᵇ
then
g can be edited[1] to
1.to.1 map f to {<j} from {<j}⁺ᴮᵒᵇ
and j is also not.final.

[1]
if g(i)=j⁺¹
then define f(i)=g(j⁺¹)
else define f(i)=g(i)

That is normal,
if it is normal to be an abstraction.
I speculate that
your thinking in your yet.unstated argument
leans on the normal.ness of that.

----
A set S⁺ᴮᵒᵇ with Bob inserted which
does not fit into S
is a _normal_ (finite) set S
normal S ⟺ S ⃒⇇ S⁺ᴮᵒᵇ

For each normal set S
exists a final ordinal j such that
S fits into {<j}
(normal) S ⃒⇇ S⁺ᴮᵒᵇ
⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯
∃j: S ⇉ {<j} ⃒⇇ {<j}⁺ᴮᵒᵇ

Not.fitting is essential to counting.

----
Define ℕ as the set of final ordinals.
∀j: j ∈ ℕ ⟺ {<j} ⃒⇇ {<j}⁺ᴮᵒᵇ

(normal) S ⃒⇇ S⁺ᴮᵒᵇ
⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯
∃j ∈ ℕ: S ⇉ {<j}

ℕ isn't normal.

For each j ∈ ℕ
exists {<j⁺¹} ⊆ ℕ: {<j} ⃒⇇ {<j⁺¹}

For each j ∈ ℕ
{<j} ⃒⇇ ℕ

| Assume {<j} ⇇ ℕ
|
| {<j} ⇇ ℕ
| {<j} ⇇ ℕ ⇇ {<j⁺¹}
| {<j} ⇇ {<j⁺¹}
|
| However,
| {<j} ⃒⇇ {<j⁺¹}
| Contradiction.

¬∃j ∈ ℕ: ℕ ⇉ {<j}
¬(normal ℕ)
¬(ℕ ⃒⇇ ℕ⁺ᴮᵒᵇ)
ℕ⁺ᴮᵒᵇ ⇉ ℕ
WM
2024-02-20 08:12:11 UTC
Permalink
Post by Richard Damon
Post by WM
If
there are ℵo unit fractions in
the interval (0, eps),
then
there is an x with only a finite number of
unit fractions in (0, x).
Why?
Because unit fractions are real points on
the real line.
They cannot appear as
an infinite swarm without a finite start.
In other words,
infinitely.many are leftward
finitely.many are rightward
for each ⅟j
That correctly describes an evolving infinite collection, i.e., a
potentially infinite set where more and more elements are created which
initially have not existed.
A complete, i.e. actually infinite set of ℵo real fixed points on the
real axis can be subdivided by any of its elements (since all are
existing) such that the subsets have cardinalities from 0, ℵo over n,
ℵo to ℵo, n, and ℵo, 0.
Post by Richard Damon
No ⅟j is in a finite start
That is the reason that
an infinite swarm is the only possibility
Then you are not talking about really existing invariable points. But that
is what I discuss.

Regards, WM
Richard Damon
2024-02-20 12:49:05 UTC
Permalink
Post by WM
Post by Richard Damon
Post by WM
If
there are ℵo unit fractions in
the interval (0, eps),
then
there is an x with only a finite number of
unit fractions in (0, x).
Why?
Because unit fractions are real points on
the real line.
They cannot appear as
an infinite swarm without a finite start.
In other words,
infinitely.many are leftward
finitely.many are rightward
for each ⅟j
That correctly describes an evolving infinite collection, i.e., a
potentially infinite set where more and more elements are created which
initially have not existed.
A complete, i.e. actually infinite set of ℵo real fixed points on the
real axis can be subdivided by any of its elements (since all are
existing) such that the subsets have cardinalities from 0, ℵo over n, ℵo
to ℵo, n, and ℵo, 0.
But ℵo / n is still ℵo, so you never get to 0

You still don't understand the mathematics of Trans-finite values,
because you misused your finite principles.
Post by WM
Post by Richard Damon
No ⅟j is in a finite start
That is the reason that
an infinite swarm is the only possibility
Then you are not talking about really existing invariable points. But
that is what I discuss.
But neither are you talking about what really exists, as you are using
invalid logic.
Post by WM
Regards, WM
WM
2024-02-20 16:52:18 UTC
Permalink
Post by Richard Damon
Post by WM
Post by Jim Burns
infinitely.many are leftward
finitely.many are rightward
for each ⅟j
That correctly describes an evolving infinite collection, i.e., a
potentially infinite set where more and more elements are created which
initially have not existed.
A complete, i.e. actually infinite set of ℵo real fixed points on the
real axis can be subdivided by any of its elements (since all are
existing) such that the subsets have cardinalities from 0, ℵo over n, ℵo
to ℵo, n, and ℵo, 0.
But ℵo / n is still ℵo,
subsets have cardinalities from (0, ℵo) over (n, ℵo) to (ℵo, n), and
(ℵo, 0).
Post by Richard Damon
so you never get to 0.
Of course not. They are dark. But nevertheless these sets are existing.
Post by Richard Damon
Post by WM
Post by Jim Burns
No ⅟j is in a finite start
That is the reason that
an infinite swarm is the only possibility
Then you are not talking about really existing invariable points. But
that is what I discuss.
But neither are you talking about what really exists,
If all unit fractions really exist, then we can talk about each one,
although we cannot find most.

Regards, WM
Richard Damon
2024-02-21 02:17:05 UTC
Permalink
Post by WM
Post by Richard Damon
Post by WM
Post by Jim Burns
infinitely.many are leftward
finitely.many are rightward
for each ⅟j
That correctly describes an evolving infinite collection, i.e., a
potentially infinite set where more and more elements are created
which initially have not existed.
A complete, i.e. actually infinite set of ℵo real fixed points on the
real axis can be subdivided by any of its elements (since all are
existing) such that the subsets have cardinalities from 0, ℵo over n,
ℵo to ℵo, n, and ℵo, 0.
But ℵo / n is still ℵo,
subsets have cardinalities from (0, ℵo) over (n, ℵo) to (ℵo, n), and
(ℵo, 0).
Nope, all the subsets have cardinalities of ℵo (assuming we are using
the Rational Line), if we are using the Real line then there are ℵ1
points between each of them.
Post by WM
Post by Richard Damon
so you never get to 0.
Of course not. They are dark. But nevertheless these sets are existing.
But they are NOT "Dark"

Your mind just sees them as dark, as it can't handle the truth about them.
Post by WM
Post by Richard Damon
Post by WM
Post by Jim Burns
No ⅟j is in a finite start
That is the reason that
an infinite swarm is the only possibility
Then you are not talking about really existing invariable points. But
that is what I discuss.
But neither are you talking about what really exists,
If all unit fractions really exist, then we can talk about each one,
although we cannot find most.
We can find any one that we want.

At least as long as you don't try to qualify it with an impossible
qualifier, like the lowest value one.
Post by WM
Regards, WM
WM
2024-02-21 08:47:53 UTC
Permalink
Post by Richard Damon
Post by WM
Post by WM
Post by Jim Burns
infinitely.many are leftward
finitely.many are rightward
for each ⅟j
That correctly describes an evolving infinite collection, i.e., a
potentially infinite set where more and more elements are created
which initially have not existed.
A complete, i.e. actually infinite set of ℵo real fixed points on the
real axis can be subdivided by any of its elements (since all are
existing) such that the subsets have cardinalities from 0, ℵo over n,
ℵo to ℵo, n, and ℵo, 0.
subsets have cardinalities from (0, ℵo) over (n, ℵo) to (ℵo, n), and
(ℵo, 0).
If all unit fractions really exist, then we can talk about each one,
although we cannot find most.
We can find any one that we want.
At least as long as you don't try to qualify it with an impossible
qualifier, like the lowest value one.
In a static real line obeying mathematics ∀n ∈ ℕ: 1/n - 1/(n+1) =
d_n > 0 there is a smallest unit fraction existing as a point. The only
alternative would be many smallest ones, but that can be excluded.

Regards, WM
Richard Damon
2024-02-21 12:32:26 UTC
Permalink
Post by Richard Damon
Post by WM
Post by WM
Post by Jim Burns
infinitely.many are leftward
finitely.many are rightward
for each ⅟j
That correctly describes an evolving infinite collection, i.e., a
potentially infinite set where more and more elements are created
which initially have not existed.
A complete, i.e. actually infinite set of ℵo real fixed points on
the real axis can be subdivided by any of its elements (since all
are existing) such that the subsets have cardinalities from 0, ℵo
over n, ℵo to ℵo, n, and ℵo, 0.
subsets have cardinalities from (0, ℵo) over (n, ℵo) to (ℵo, n), and
(ℵo, 0).
If all unit fractions really exist, then we can talk about each one,
although we cannot find most.
We can find any one that we want.
At least as long as you don't try to qualify it with an impossible
qualifier, like the lowest value one.
In a static real line obeying mathematics ∀n ∈ ℕ: 1/n - 1/(n+1) = d_n >
0 there is a smallest unit fraction existing as a point. The only
alternative would be many smallest ones, but that can be excluded.
Regards, WM
Nope. You just don't understand the properties of unbounded sets,
because your mind can't actually handle the unbounded.

There doesn't need to be just one or many with a property, there can be
none, and that is how many have the property "smallest".

Your logic system just can't handle unbounded sets.
WM
2024-02-20 21:59:26 UTC
Permalink
Post by WM
Post by Richard Damon
In other words,
infinitely.many are leftward
finitely.many are rightward
for each ⅟j
That correctly describes
an evolving infinite collection, i.e.,
a potentially infinite set where
more and more elements are created which
initially have not existed.
We are finite beings.
The statements are finitely.many.
Their finitely.many.ness doesn't evolve.
We cannot use everything that exists on the real line, because among them
there is the smallest unit fraction, at least the smallest unit fraction
that exists on the real line. Where else should it be? This existence is
static. You seem to deny it. If we could point to it, we caught the
smallest unit fraction. But we cannot point to it although it must be
there. That proves: It is dark.
It is a property of
finite sequences of statements that
if any of them is false,
then one of them is first.false.
Here is one statement that is true: Every unit fraction exists on the real
line. But there are no marks indicating their places. We cannot go to it
without, in principle, passing through all smaller ones. Counting from 1,
2, 3, ... to n.

Regards, WM
Chris M. Thomasson
2024-02-20 22:19:47 UTC
Permalink
On 2/20/2024 1:59 PM, WM wrote:
[...]
Post by WM
We cannot use everything that exists on the real line, because among
them there is the smallest unit fraction, at least the smallest unit
fraction that exists on the real line.
Barf!

[...]
Jim Burns
2024-02-20 23:55:45 UTC
Permalink
It is a property of
finite sequences of statements that
if any of them is false,
then one of them is first.false.
Every unit fraction exists on the real line.
But there are no marks indicating their places.
For each final.ordinal.reciprocal ⅟j
a geometric procedure exists which finds it.
It involves constructing similar triangles.
We cannot go to it without, in principle,
passing through all smaller ones.
For each final.ordinal k,
there are more.than.k smaller ones.
Counting from 1, 2, 3, ... to n.
Whatever final.ordinal n is imagined to be,
counting fails.
There are more.than.n to count.

----
It is a boring property of a "normal" finite set S
that, if Bob is inserted in S, giving S⁺ᴮᵒᵇ,
then S⁺ᴮᵒᵇ doesn't fit into S
S ⃒⇇ S⁺ᴮᵒᵇ
No 1.to.1 map exists to S from S⁺ᴮᵒᵇ

It is a boring property of a final ordinal j
that before.j {<j} is a "normal" finite set.
If Bob is inserted in {<j}, giving {<j}⁺ᴮᵒᵇ,
then then {<j}⁺ᴮᵒᵇ doesn't fit in {<j}
{<j} ⃒⇇ {<j}⁺ᴮᵒᵇ

For each boring "normal" finite set S,
a boring "normal" final.ordinal j exists
such that S fits into before.j
S ⃒⇇ S⁺ᴮᵒᵇ
⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯
∃j: S ⇉ {<j} ⃒⇇ {<j}⁺ᴮᵒᵇ

The technical term for this is "counting".
It is famously boring.
People "count" sheep
in order to bore themselves to sleep.


Define ℕ to be the set of final ordinals.

For each final ordinal j _and successor_ j⁺¹
{<j} and {<j⁺¹} fit in ℕ
But {<j⁺¹} doesn't fit in {<j}; {<j} is final.

Neither does ℕ fit in {<j},
or else {<j⁺¹} (sub ℕ) fits in {<j}

∀j: {<j} ⃒⇇ {<j}⁺ᴮᵒᵇ ⟹ {<j} ⃒⇇ ℕ
¬∃j: ℕ ⇉ {<j} ⃒⇇ {<j}⁺ᴮᵒᵇ
⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯
¬(ℕ ⃒⇇ ℕ⁺ᴮᵒᵇ)
ℕ⁺ᴮᵒᵇ ⇉ ℕ

ℕ is not a boring "normal" finite set.
De taediosum non taediosum.

It's a miracle!
WM
2024-02-21 08:39:23 UTC
Permalink
Post by Jim Burns
Every unit fraction exists on the real line.
But there are no marks indicating their places.
For each final.ordinal.reciprocal ⅟j
a geometric procedure exists which finds it.
Not for those existing next to zero. Note that if reciprocals are existing
on the real axis and if all points are timeless, then there is a point
next to zero. So your claim involves time. That is not mathematics, which
is time-independent

Regards, WM
Richard Damon
2024-02-21 12:32:28 UTC
Permalink
Post by WM
Post by Jim Burns
Every unit fraction exists on the real line.
But there are no marks indicating their places.
For each final.ordinal.reciprocal ⅟j
a geometric procedure exists which finds it.
Not for those existing next to zero. Note that if reciprocals are
existing on the real axis and if all points are timeless, then there is
a point next to zero. So your claim involves time. That is not
mathematics, which is time-independent
Regards, WM
Nope, no point "next to" zero. Points on the real axis are "dense" and
there is no "Next To" property, as between ANY two points, there is
another one between them.

You need to get into a trans-finite number system to get the "Next To"
property again.
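A minimal sketch of that density claim with exact rational arithmetic (Python; the two sample points, the adjacent-looking unit fractions 1/3 and 1/2, and the helper name are chosen just for this sketch):

    from fractions import Fraction

    def strictly_between(p, q):
        """Exact rational strictly between two distinct rationals p < q (no rounding)."""
        assert p < q
        return (p + q) / 2

    a, b = Fraction(1, 3), Fraction(1, 2)
    m = strictly_between(a, b)
    print(a < m < b, m)  # True 5/12 -- between ANY two points there is another one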
WM
2024-02-22 13:06:34 UTC
Permalink
No positive point is next to zero,
If all are there and timeless, then there is a first one. But it is more
obvious that the chain of unit fractions must have a first one, whenever
there is a unit fraction at all.
Post by WM
Note that
if reciprocals are existing on the real axis and
if all points are timeless,
then there is a point next to zero.
Elaborate.
Nothing to elaborate.
Do you reject
all skipping.functions being discontinuous.somewhere?
Functions measuring elements are not discontinuous for more than 1 when
the elements have point between them.
Do you reject
only both or neither ⅟m ⅟(4⋅m) being
final.ordinal.reciprocals findable by
geometric procedure?
Dark numbers cannot be found.

Regards, WM
Chris M. Thomasson
2024-02-23 00:25:36 UTC
Permalink
Post by WM
No positive point is next to zero,
If all are there and timeless, then there is a first one. But it is more
obvious that the chain of unit fractions must have a first one, whenever
there is a unit fraction at all.
Post by WM
Note that
if reciprocals are existing on the real axis and
if all points are timeless,
then there is a point next to zero.
Elaborate.
Nothing to elaborate.
Do you reject
all skipping.functions being discontinuous.somewhere?
Functions measuring elements are not discontinuous for more than 1 when
the elements have point between them.
Do you reject
only both or neither ⅟m ⅟(4⋅m) being
final.ordinal.reciprocals findable by
geometric procedure?
Dark numbers cannot be found.
Ummm. Well, WM just might be able to find a number covered in shit right
up his own hind end?
Chris M. Thomasson
2024-02-23 00:44:27 UTC
Permalink
Post by Chris M. Thomasson
Post by WM
No positive point is next to zero,
If all are there and timeless, then there is a first one. But it is
more obvious that the chain of unit fractions must have a first one,
whenever there is a unit fraction at all.
Post by WM
Note that
if reciprocals are existing on the real axis and
if all points are timeless,
then there is a point next to zero.
Elaborate.
Nothing to elaborate.
Do you reject
all skipping.functions being discontinuous.somewhere?
Functions measuring elements are not discontinuous for more than 1
when the elements have point between them.
Do you reject
only both or neither ⅟m ⅟(4⋅m) being
final.ordinal.reciprocals findable by
geometric procedure?
Dark numbers cannot be found.
Ummm. Well, WM just might be able to find a number covered in shit right
up his own hind end?
Then we can all say, okay, that plus one? ;^)
WM
2024-02-23 08:55:46 UTC
Permalink
Post by WM
No positive point is next to zero,
If all are there and timeless,
then there is a first one.
Each positive point is preceded by
a positive point,
and no positive precedes each positive.
Your claim requires to consider the predecessors. "Timeless" means that
every point exists independent of a predecessor.
There isn't a first one.
NUF(x) together with 1/n =/= 1/(n+1) proves the first one.
Post by WM
But it is more obvious that
the chain of unit fractiond must have a first one,
whenever there is a unit fraction at all.
Each is preceded by some.
None precedes all.
You violate 1/n =/= 1/(n+1) and timeless existence.
Post by WM
Post by WM
Note that
if reciprocals are existing on the real axis and
if all points are timeless,
then there is a point next to zero.
Elaborate.
Nothing to elaborate.
Doesn't that worry you?
Why. It is obvious. Timeless existence in linear order proves a first one.
If you opened the hood of your auto,
and no engine was in there,
wouldn't you get a glimmer of a sense that,
perhaps, things were not as they should be?
Failed analogy.
Post by WM
Do you reject
all skipping.functions being discontinuous.somewhere?
Functions measuring elements are not
discontinuous for more than 1
when the elements have point between them.
A point β exists between
lower.bounds of final.ordinal.reciprocals and
not.lower.bounds of final.ordinal.reciprocals.
You violate 1/n =/= 1/(n+1) and timeless existence.

Regards, WM
Jim Burns
2024-02-23 19:21:10 UTC
Permalink
Post by WM
Post by WM
No positive point is next to zero,
If all are there and timeless,
then there is a first one.
Each positive point is preceded by
a positive point,
and no positive precedes each positive.
Your claim
...which is that your claim is wrong
Post by WM
Post by WM
If all are there and timeless,
then there is a first one.
requires to consider the predecessors.
Yes, I consider predecessors.

Is considering predecessors something which
mathematics allows only you to do?
Post by WM
"Timeless" means that
every point exists independent of a predecessor.
Describe points as
final.ordinals ℕ and
differences.of.ratios ℚ of final.ordinals and
least.upper.bounds ℝ of bounded.non.empty.sets of
differences.of.ratios of final.ordinals.

We only use time in which to make descriptions,
but we are finished describing now.
We are timeless.
https://tardis.fandom.com/wiki/Timeless_Child

----
Among the points described,
a point β exists timelessly between
lower.bounds of the final.ordinal.reciprocals and
not.lower.bounds of the final.ordinal.reciprocals.

For each positive not.lower.bound x
a final.ordinal.reciprocal exists
between it and zero.

For each positive lower.bound δ
the greatest.lower.bound β is positive,
and final.ordinal.reciprocal ⅟(4⋅m)
is less than lower.bound β/2

Positive lower.bound δ
makes lower.bound β/2 not a lower.bound.
No positive lower.bound δ exists.

Each positive point is
a positive not.lower.bound x and
there is a final.ordinal.reciprocal
between it and zero.

Each final.ordinal.reciprocal exists.
A first positive point not.exists.
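A minimal constructive version of the no-positive-lower-bound step (Python; delta stands for any claimed positive lower bound, and the sample values and helper name are arbitrary):

    import math
    from fractions import Fraction

    def unit_fraction_below(delta):
        """Given a claimed positive lower bound delta of the unit fractions,
        exhibit a unit fraction strictly below it, so delta was no lower bound."""
        assert delta > 0
        n = math.floor(1 / delta) + 1  # n > 1/delta, hence 1/n < delta
        return Fraction(1, n)

    for delta in (Fraction(1, 10), Fraction(3, 1000), Fraction(1, 10**12)):
        print(unit_fraction_below(delta) < delta, unit_fraction_below(delta))
    # Every positive delta fails, so the greatest lower bound is 0 and
    # no first (smallest) positive unit fraction exists.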
WM
2024-02-24 12:10:43 UTC
Permalink
Have I overlooked an answer from you concerning this topic?
The ordinals' descents and ascents are not the same.
Every way up can be reversed. That proves that the ascents are also
finite.
For each ordinal ψ
if ψ has any infinite descent,
then, because well.order,
an ordinal χ exists first with any infinite descent.
ψ doesn't have an infinite descent.
And one step upwards is finite too. Finite plus one is finite.

ψ doesn't have an infinite ascent (for every visible predecessor).
Generalizing over ordinals,
each ordinal ψ has finite.descent.
Each ordinal has finite ascent.

Regards, WM
Richard Damon
2024-02-21 02:17:07 UTC
Permalink
Post by WM
Post by WM
Post by Richard Damon
In other words,
infinitely.many are leftward
finitely.many are rightward
for each ⅟j
That correctly describes
an evolving infinite collection, i.e.,
a potentially infinite set where
more and more elements are created which initially have not existed.
We are finite beings.
The statements are finitely.many.
Their finitely.many.ness doesn't evolve.
We cannot use everything that exists on the real line, because among
them there is the smallest unit fraction, at least the smallest unit
fraction that exists on the real line. Where else should it be? This
existence is static. You seem to deny it. If we could point to it, we
caught the smallest unit fraction. But we cannot point to it although it
must be there. That proves: It is dark.
No, there ISN'T a "Smallest Unit Fraction" as has been shown.

Since ALL Unit Fractions "exist" in the mathematical sense,.

You are just stuck in your wrong thinking and it makes your mind dark.
Post by WM
It is a property of
finite sequences of statements that
if any of them is false,
then one of them is first.false.
Here is one statement that is true: Every unit fraction exists on the
real line. But there are no marks indicating their places. We cannot go
to it without, in principle, passing through all smaller ones. Counting
from 1, 2, 3, ... to n.
"Counting " the unit fractions can only be done from 1/1 to 1/2 to 1`/3
and so on.

You can't start counting from an end that doesn't exist. Trying to do
so, just breaks your system.
Post by WM
Regards, WM
Richard Damon
2024-02-21 12:32:29 UTC
Permalink
Post by Richard Damon
Post by WM
We cannot use everything that exists on the real line, because among
them there is the smallest unit fraction, at least the smallest unit
fraction that exists on the real line. Where else should it be? This
existence is static. You seem to deny it. If we could point to it, we
caught the smallest unit fraction. But we cannot point to it although
it must be there. That proves: It is dark.
No, there ISN'T a "Smallest Unit Fraction" as has been shown.
Since ALL Unit Fractions "exist" in the mathematical sense,.
Then take the first one existing there.
There isn't one, and you are just proving your ignorance.
Post by Richard Damon
"Counting " the unit fractions can only be done from 1/1 to 1/2 to
1`/3 and so on.
You can't start counting from an end that doesn't exist. Trying to do
so, just breaks your system.
Either one or more than one at one point. More than one is excluded by
mathematics,
Regards, WM
And that "restriction" makes NUF(x) just disapper as a defined function
as an artifact of a contradiction.

You can't just "assume" the existance of a function. Your doing that
just makes an ASS out of U. (it doesn't add me, because I won't fall for
it).
WM
2024-02-22 12:04:56 UTC
Permalink
Post by Richard Damon
Then take the first one existing there.
There isn't one, and you are just proving your ignorance.
There is a first one in a static chain of points 1/n with gaps between
them. To deny this means falling victim to nonsense. Matheology.

Regards, WM
Richard Damon
2024-02-22 12:17:46 UTC
Permalink
Post by WM
Post by Richard Damon
Then take the first one existing there.
There isn't one, and you are just proving your ignorance.
There is a first one in a static chain of points 1/n with gaps between
them. To deny this means falling victim to nonsense. Matheology.
Regards, WM
Nope.

YOU have fallen victim to your lies and nonsense;

If there is, then NAME IT or explain how it can be. (Not that your system
says it must be, that just shows your system is broken)

Either your logic system doesn't actually HAVE the Natural Numbers in it
or you are lying.

After all if some 1/n was actually the smallest, then that says that n
must be the highest natural number, but the definition of the Natural
Numbers says that they include the successor to all Natural Numbers, and
every number has a successor, so n+1 must be a Natural Number.

By your "Static" rule, it can't be that it comes into being when we look
at it, as that isn't "Stati".

So, you are just proven to be stupid and a liar.
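A minimal sketch of that successor step (Python; n is an arbitrary candidate for "the highest natural number", and the helper name is made up for this sketch):

    from fractions import Fraction

    def smaller_unit_fraction(n):
        """If 1/n were the smallest unit fraction, n would be the largest natural number;
        but n + 1 is also a natural number, and 1/(n + 1) is a smaller unit fraction."""
        return Fraction(1, n + 1)

    n = 10**100  # any candidate, however large
    print(smaller_unit_fraction(n) < Fraction(1, n))  # True: the candidate was not smallest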
Chris M. Thomasson
2024-02-23 00:27:50 UTC
Permalink
Post by Richard Damon
Post by WM
Post by Richard Damon
Then take the first one existing there.
There isn't one, and you are just proving your ignorance.
There is a first one in a static chain of points 1/n with gaps
between them. To deny this means falling victim to nonsense. Matheology.
If there is, then NAME IT or explain how it can be.
I did.
Post by Richard Damon
(Not that your system says it must be, that just shows your system is
broken)
It is simply mathematics and logic: By logic there must be a start of
NUF(x), by mathematics the start can only be 1. ∀n ∈ ℕ: 1/n - 1/(n+1) =
d_n > 0.
Post by Richard Damon
After all if some 1/n was actually the smallest, then that says that n
must be the highest natural number, but the definition of the Natural
Numbers says that they include the successor to all Natural Numbers,
and every number has a successor, so n+1 must be a Natural Number.
The definition of unit fractions says that all have gaps and there is no
point where more than 1 sit.
Post by Richard Damon
By your "Static" rule, it can't be that it comes into being when we
look at it, as that isn't "Stati".
We cannot look at dark numbers.
I think you plug up toilets with your dark numbers everyday. Ask your
plumbers about it...
Richard Damon
2024-02-23 12:23:24 UTC
Permalink
Post by Richard Damon
Post by WM
Post by Richard Damon
Then take the first one existing there.
There isn't one, and you are just proving your ignorance.
There is a first one in a static chain of points 1/n with gaps
between them. To deny this means falling victim to nonsense. Matheology.
If there is, then NAME IT or explain how it can be.
I did.
Nope, you just say you assume that there must be a smallest.

But if there is, why isn't the one smaller than that in it too, making it
not the smallest?
Post by Richard Damon
(Not that your system says it must be, that just shows your system is
broken)
It is simply mathematics and logic: By logic there must be a start of
NUF(x), by mathematics the start can only be 1. ∀n ∈ ℕ: 1/n - 1/(n+1) =
d_n > 0.
Nope

By your logic, I have shown that the square root of two is Rational.

You err in assuming something that doesn't exist.

Your "Matheolgy" is based on Magic Faeries that can make the impossible
happen.

You claim to not want to use "Matheologies", but then you do, but you
use ones not up to the task you ask of them.

You ignore the natural and obvious facts of the numbers because you
can't face the limitations of your logic.
Post by Richard Damon
After all if some 1/n was actually the smallest, then that says that n
must be the highest natural number, but the definition of the Natural
Numbers says that they include the successor to all Natural Numbers,
and every number has a successor, so n+1 must be a Natural Number.
The definition of unit fractions says that all have gaps and there is no
point where more than 1 sit.
Right, but the gap is smaller than the number itself, so there is always
room for more between any of them and 0, so there is no first.

You just are too dim to see that the gaps shrink faster than the numbers
themselves so you get more and more "room" to put the smaller and
smaller values in.
Post by Richard Damon
By your "Static" rule, it can't be that it comes into being when we
look at it, as that isn't "Stati".
We cannot look at dark numbers.
Because they don't exist.
Regards, WM
WM
2024-02-24 11:30:34 UTC
Permalink
Post by Richard Damon
Post by Richard Damon
Post by WM
There is a first one in a static chain of points 1/n with gaps
between them. To deny this means falling victim to nonsense. Matheology.
If there is, then NAME IT or explain how it can be.
I did.
Nope, you just say you assume that there must be a smallest.
I prove it by NUF(x). Points on the real axis exist there without
necessity of neighbours or predecessors, and if they have internal
distances this cannot be doubted.

Regards, WM
Richard Damon
2024-02-24 15:31:59 UTC
Permalink
Post by WM
Post by Richard Damon
Post by Richard Damon
Post by WM
There is a first one in a static chain of points 1/n with gaps
between them. To deny this means falling victim to nonsense. Matheology.
If there is, then NAME IT or explain how it can be.
I did.
Nope, you just say you assume that there must be a smallest.
I prove it by NUF(x). Points on the real axis exist there without
necessity of neighbours or predecessors, and if they have internal
distances this cannot be doubted.
Regards, WM
You first need to prove the NUF(x) exists.

As I showed with your logic, we can show that the square
root of 2 is rational, which has been proved otherwise.

Words can "define" a function that doesn't exist.
WM
2024-02-24 16:16:22 UTC
Permalink
Post by Richard Damon
Post by WM
I prove it by NUF(x). Points on the real axis exist there without
necessity of neighbours or predecessors, and if they have internal
distances this cannot be doubted.
You first need to prove the NUF(x) exists.
If there are unit fractions, then NUF exists.

Regards, WM
FromTheRafters
2024-02-24 16:29:12 UTC
Permalink
Post by WM
Post by Richard Damon
Post by WM
I prove it by NUF(x). Points on the real axis exist there without
necessity of neighbours or predecessors, and if they have internal
distances this cannot be doubted.
You first need to prove the NUF(x) exists.
If there are unit fractions, then NUF exists.
Then, this excludes your [0,0] interval because there are no unit
fractions in that domain.
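A minimal sketch of how NUF behaves at and above 0 (Python; since the true count for x > 0 is infinite, the helper below only exhibits witnesses, and its name and the sample values are made up for this sketch):

    from fractions import Fraction

    def unit_fraction_witnesses(x, k):
        """NUF(x) counts unit fractions in (0, x): none at all for x <= 0, while for
        any x > 0 and ANY k there are k distinct unit fractions below x."""
        if x <= 0:
            return []
        n = 1
        while Fraction(1, n) >= x:  # find the largest unit fraction strictly below x
            n += 1
        return [Fraction(1, m) for m in range(n, n + k)]

    print(unit_fraction_witnesses(0, 5))                # []  -- NUF(0) = 0
    print(unit_fraction_witnesses(Fraction(1, 10), 5))  # five unit fractions below 1/10; k can be as large as you like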
Chris M. Thomasson
2024-02-23 00:26:30 UTC
Permalink
Post by Richard Damon
Post by WM
We cannot use everything that exists on the real line, because among
them there is the smallest unit fraction, at least the smallest unit
fraction that exists on the real line. Where else should it be? This
existence is static. You seem to deny it. If we could point to it, we
caught the smallest unit fraction. But we cannot point to it although
it must be there. That proves: It is dark.
No, there ISN'T a "Smallest Unit Fraction" as has been shown.
Since ALL Unit Fractions "exist" in the mathematical sense,.
Then take the first one existing there.
[...]

Are you a dingbat, or something else, even worse?
Ross Finlayson
2024-02-21 18:27:52 UTC
Permalink
Post by Jim Burns
Mike Terry schrieb am Freitag,
Post by Mike Terry
Yes,
it's not clear what "infinitely close" means
It means dark numbers.
Do you (WM) say that
a point with a final.ordinal.reciprocal
⅟n⋅n = 1 ∧ ⟨1,…,n⟩ ⃒⇇ ⟨1,…,n,n⁺¹⟩
below it is infinitelyᵂᴹ.close to 0?
That would be an odd use of "infinite".
A positive dark number has
a final.ordinal.reciprocal below it.
| Assume otherwise.
| Also, assume
| a skipping.function isn't all.continuous, and,
| for final.ordinal.reciprocal ⅟m
| ⅟(4⋅m) is a final.ordinal.reciprocal.
|
| By assumption,
| positive dark δ is a positive lower bound of
| final.ordinal.reciprocals ⅟ℕ₁
| 0 < δ ≤ᣔ ⅟ℕ₁
|
| β is the greatest lower bound of
| final.ordinal.reciprocals ⅟ℕ₁
| 0 < δ ≤ β ≤ᣔ ⅟ℕ₁
| 0 < β/2 < β < 2β
| 2β isn't a lower bound of ⅟ℕ₁
| β is the greatest lower bound of ⅟ℕ₁
| β/2 is a lower bound of ⅟ℕ₁
|
| β < 2β
| 2β isn't a lower bound of ⅟ℕ₁
| final.ordinal.reciprocal ⅟m₂ᵦ < 2β exists.
| final.ordinal.reciprocal ⅟(4⋅m₂ᵦ) < β/2 exists.
| β/2 isn't a lower bound of ⅟ℕ₁
|
| However,
| β/2 < β
| β/2 is a lower bound of ⅟ℕ₁
| Contradiction.
Therefore,
a positive dark number has
a final.ordinal.reciprocal below it.
The function
Number of Unit Fractions between (0, and x)
(1) An increase from NUF(0) = 0 to NUF(x>0) > 0
cannot happen unless NUF(x) increases at some x.
NUF(x) increases at 0
(2) NUF(x) cannot increase other than
when passing unit fractions at some x = 1/n.
NUF(x) cannot increase other than
when ∀β > 0: NUF(x-β) < NUF(x+β)
(3) NUF(x) cannot pass more than one
unit fraction at a single point x because
∀n ∈ ℕ: 1/n =/= 1/(n-1).
∀n ∈ ℕ: 1/n =/= 0
∀β > 0: ∀n ∈ ℕ: NUF(0-β) + n < NUF(0+β)
β > ⅟1⁺ᵐᵝ > ... > ⅟n⁺ᵐᵝ > ⅟(n+1)⁺ᵐᵝ > 0
for
0 =< mᵦ =< ⅟β < mᵦ+1 = 1⁺ᵐᵝ
(4) This requires a first unit fraction,
if all are there in actual infinity.
Each final.ordinal.reciprocal
is preceded by
another final.ordinal.reciprocal.
The first final.ordinal.reciprocal not.exists.
So, there's no first example where
"the equivalency function" isn't a model
of "not-a-real-function"
with "real analytical character",

considering that
the infinite and continuum limit
was already run out one-way.

I mean if you want it back,
here's the argument that made it, ....
there's transparency here,
it's clear that it's related its
resources already, ....

It's not.ultimately.untrue, ....
Jim Burns
2024-02-21 20:02:31 UTC
Permalink
Post by Ross Finlayson
...]
So, there's no first example where
"the equivalency function" isn't a model
of "not-a-real-function"
with "real analytical character",
considering that
the infinite and continuum limit
was already run out one-way.
The iota.value limit which you describe
is not the real interval [0,1]

Consider (following your lead)
a range of constantly-different monotone
strictly increasing values between zero and one,
an infinitude of them.
I abbreviate that to [0,1]\ι

Define a plus.iota next.operator x⁺ᶥ
∀x ∈ [0,1)\ι: ∃x⁺ᶥ ∈ (0,1]\ι:
x < x⁺ᶥ ∧ ¬∃xₓ ∈ [0,1]\ι: x < xₓ < x⁺ᶥ

It is a constantly.different strictly.increasing
next.operator.
∀x,y ∈ [0,1)\ι: x⁺ᶥ-x = y⁺ᶥ-y = ι > 0

Are there an infinitude of values in [0,1]\ι?
No.

ι > 0
Therefore, there is
a finitely.denominated unit.fraction ⅟n
between ι and 0
and |[0,1]\ι| ≤ n+1

| Assume otherwise.
| Assume ι is a positive lower bound of
| ⅟ℕ₁ the finitely.denominated unit.fractions.
| 0 < ι ≤ᣔ ⅟ℕ₁
|
| β is the greatest.lower.bound of ⅟ℕ₁
|
| β exists, or else
| a function exists which is
| continuous everywhere and
| skips over some points between.
|
| 0 < ι ≤ β
| 0 < β/2 < β < 2β
| β is the greatest lower bound of ⅟ℕ₁
| 2β isn't a lower bound of ⅟ℕ₁
| β/2 is a lower bound of ⅟ℕ₁
|
| β < 2β
| 2β isn't a lower bound of ⅟ℕ₁
| finitely.denominated ⅟m < 2β exists.
| finitely.denominated ⅟(4⋅m) < β/2 exists.
| β/2 isn't a lower bound of ⅟ℕ₁
|
| However,
| β/2 < β
| β/2 is a lower bound of ⅟ℕ₁
| Contradiction.

Therefore,
there is a finitely.denominated unit.fraction ⅟n
between ι and 0
and |[0,1]\ι| ≤ n+1

For a range of constantly-different monotone
strictly increasing values between zero and one,
there aren't an infinitude of them.
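To put a number on it: with a fixed positive increment ι, at most
⌊1/ι⌋ + 1 values fit in [0,1]. A rough Python sketch of that count for
the simplest case, the points 0, ι, 2ι, ... (illustrative only;
count_values is a made-up name, and ι is assumed rational so exact
fractions can be used):

from fractions import Fraction

def count_values(iota):
    # points 0, iota, 2*iota, ... that still lie in [0, 1]:
    # there are floor(1/iota) + 1 of them -- always finitely many
    assert iota > 0
    return int(Fraction(1) / Fraction(iota)) + 1

print(count_values(Fraction(1, 1000)))    # 1001
print(count_values(Fraction(1, 10**9)))   # 1000000001 -- large, but finite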

You (RF) can define whatever you choose to define.
That's a matter of letting us know
how you are using language.
But
not everything you are able to define
includes the rationals and excludes
skipping.functions not discontinuous.somewhere.
Ross Finlayson
2024-02-21 20:59:52 UTC
Permalink
Post by Jim Burns
Post by Ross Finlayson
[...]
So, there's no first example where
"the equivalency function" isn't a model
of "not-a-real-function"
with "real analytical character",
considering that
the infinite and continuum limit
was already run out one-way.
The iota.value limit which you describe
is not the real interval [0,1]
Consider (following your lead)
a range of constantly-different monotone
strictly increasing values between zero and one,
an infinitude of them.
I abbreviate that to [0,1]\ι
Define a plus.iota next.operator x⁺ᶥ
x < x⁺ᶥ ∧ ¬∃xₓ ∈ [0,1]\ι: x < xₓ < x⁺ᶥ
It is a constantly.different strictly.increasing
next.operator.
∀x,y ∈ [0,1)\ι: x⁺ᶥ-x = y⁺ᶥ-y = ι > 0
Are there an infinitude of values in [0,1]\ι?
No.
ι > 0
Therefore, there is
a finitely.denominated unit.fraction ⅟n
between ι and 0
and |[0,1]\ι| ≤ n+1
| Assume otherwise.
| Assume ι is a positive lower bound of
| ⅟ℕ₁ the finitely.denominated unit.fractions.
| 0 < ι ≤ᣔ ⅟ℕ₁
|
| β is the greatest.lower.bound of ⅟ℕ₁
|
| β exists, or else
| a function exists which is
| continuous everywhere and
| skips over some points between.
|
| 0 < ι ≤ β
| 0 < β/2 < β < 2β
| β is the greatest lower bound of ⅟ℕ₁
| 2β isn't a lower bound of ⅟ℕ₁
| β/2 is a lower bound of ⅟ℕ₁
|
| β < 2β
| 2β isn't a lower bound of ⅟ℕ₁
| finitely.denominated ⅟m < 2β exists.
| finitely.denominated ⅟(4⋅m) < β/2 exists.
| β/2 isn't a lower bound of ⅟ℕ₁
|
| However,
| β/2 < β
| β/2 is a lower bound of ⅟ℕ₁
| Contradiction.
Therefore,
there is a finitely.denominated unit.fraction ⅟n
between ι and 0
and |[0,1]\ι| ≤ n+1
For a range of constantly-different monotone
strictly increasing values between zero and one,
there aren't an infinitude of them.
You (RF) can define whatever you choose to define.
That's a matter of letting us know
how you are using language.
But
not everything you are able to define
includes the rationals and excludes
skipping.functions not discontinuous.somewhere.
This isn't the complete ordered field,
it's iota-values:
the constant monotone strictly increasing property
is their ordering,
but it says nothing about their arithmetic
at all.

What defines their arithmetic,
is that iota-products
put them back together, length 1,
and iota-sums make
for re-Vitali-izing measure theory, length 2.

I.e. they're only put together altogether.

It's not a Cartesian function either,
this equivalency function, and
it falls out of all the results
otherwise about Cartesian functions
and Cantorian uncountability,
which otherwise of course applies.

It's special, this way, and, it's very special.

It's line-drawing, and it's the function
between discrete and continuous.

That's what it is.


Thank you for your attention to this matter.
Richard Damon
2024-02-17 12:15:38 UTC
Permalink
Post by Chris M. Thomasson
Take a number that wants to get close to zero.
This makes no sense.  "a number" is one number.  And numbers don't want
anything.
But mathematicians want to know about numbers, for instance how close to
zero the unit fractions come.
Take the function Number of Unit Fractions between (0, and x > 0).
(1) An increase from NUF(0) = 0 to NUF(x>0) > 0 cannot happen unless
NUF(x) increases at some x.
(2) NUF(x) cannot increase other than when passing unit fractions at
some x = 1/n.
(3) NUF(x) cannot pass more than one unit fraction at a single point x
because
∀n ∈ ℕ: 1/n =/= 1/(n-1).
(4) This requires a first unit fraction, if all are there in actual
infinity.
Regards, WM
Or, that such a function can't actually be defined, because it assumes
that there IS a "smallest unit fraction".

Just because you can "define" it in words, doesn't mean that it can
actually exist.
WM
2024-02-17 21:59:59 UTC
Permalink
Post by Richard Damon
Post by WM
Take the function Number of Unit Fractions between (0, and x > 0).
(1) An increase from NUF(0) = 0 to NUF(x>0) > 0 cannot happen unless
NUF(x) increases at some x.
(2) NUF(x) cannot increase other than when passing unit fractions at
some x = 1/n.
(3) NUF(x) cannot pass more than one unit fraction at a single point x
because
∀n ∈ ℕ: 1/n =/= 1/(n-1).
(4) This requires a first unit fraction, if all are there in actual
infinity.
Or, that such a function can't actually be defined, because it assumes
that there IS a "smallest unit fraction".
This well-defined function proves its existence.

Regards, WM
Richard Damon
2024-02-18 12:36:30 UTC
Permalink
Post by WM
Post by Richard Damon
Post by WM
Take the function Number of Unit Fractions between (0, and x > 0).
(1) An increase from NUF(0) = 0 to NUF(x>0) > 0 cannot happen unless
NUF(x) increases at some x.
(2) NUF(x) cannot increase other than when passing unit fractions at
some x = 1/n.
(3) NUF(x) cannot pass more than one unit fraction at a single point
x because
∀n ∈ ℕ: 1/n =/= 1/(n-1).
(4) This requires a first unit fraction, if all are there in actual
infinity.
Or, that such a function can't actually be defined, because it assumes
that there IS a "smallest unit fraction".
This well-defined function proves its existence.
Regards, WM
NOT well defined.

Your assumptions that define it are inconsistent with the definition of
Natural Numbers.

Your NUF(x) has an output range that is outside the range of the Natural
Numbers, and thus not "Well Defined" on that set (or the rationals, or
the Reals).

So, all you are showing is that you don't understand what a "well
defined" function means.

It is easy to create a word salad that seems to fully define a function,
that actually describes something that cannot exist, and your above is
just one example of that.
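For what it's worth, here is a rough Python sketch (illustrative names,
not WM's function) of why no natural-number value works for NUF(x) at
any x > 0: given any rational x > 0 and any requested count k, it lists
k distinct unit fractions strictly between 0 and x.

from fractions import Fraction

def unit_fractions_below(x, k):
    # k distinct unit fractions strictly between 0 and x;
    # m = floor(1/x) + 1 guarantees 1/m < x, and every larger
    # denominator gives a still smaller unit fraction
    assert x > 0 and k > 0
    m = int(Fraction(1) / Fraction(x)) + 1
    return [Fraction(1, m + i) for i in range(k)]

print(unit_fractions_below(Fraction(1, 1000), 5))
# five unit fractions 1/1001 ... 1/1005, all inside (0, 1/1000)

Whatever finite value one proposes for NUF(x), the same construction
produces more unit fractions below x than that value.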
FromTheRafters
2024-02-18 16:49:22 UTC
Permalink
Post by Richard Damon
Post by WM
Post by Richard Damon
Post by WM
Take the function Number of Unit Fractions between (0, and x > 0).
(1) An increase from NUF(0) = 0 to NUF(x>0) > 0 cannot happen unless
NUF(x) increases at some x.
(2) NUF(x) cannot increase other than when passing unit fractions at some
x = 1/n.
(3) NUF(x) cannot pass more than one unit fraction at a single point x
because
∀n ∈ ℕ: 1/n =/= 1/(n-1).
(4) This requires a first unit fraction, if all are there in actual
infinity.
Or, that such a function can't actually be defined, because it assumes
that there IS a "smallest unit fraction".
This well-defined function proves its existence.
Regards, WM
NOT well defined.
Your assumptions that define it are inconsistent with the definition of
Natural Numbers.
Your NUF(x) has an output range that is outside the range of the Natural
Numbers, and thus not "Well Defined" on that set (or the rationals, or the
Reals).
So, all you are showing is that you don't understand what a "well defined"
function means.
Or what a set is.
Post by Richard Damon
It is easy to create a word salad that seems to fully define a function, that
actually describes something that cannot exist, and your above is just one
example of that.
Indeed! He'll never learn though, he's completely stuck.
Ross Finlayson
2024-02-18 18:03:01 UTC
Permalink
Post by FromTheRafters
Post by Richard Damon
Post by WM
Post by Richard Damon
Post by WM
Take the function Number of Unit Fractions between (0, and x > 0).
(1) An increase from NUF(0) = 0 to NUF(x>0) > 0 cannot happen
unless NUF(x) increases at some x.
(2) NUF(x) cannot increase other than when passing unit fractions
at some x = 1/n.
(3) NUF(x) cannot pass more than one unit fraction at a single
point x because
∀n ∈ ℕ: 1/n =/= 1/(n-1).
(4) This requires a first unit fraction, if all are there in actual
infinity.
Or, that such a function can't actually be defined, because it
assumes that there IS a "smallest unit fraction".
This well-defined function proves its existence.
Regards, WM
NOT well defined.
Your assumptions that define it are inconsistent with the definition
of Natural Numbers.
Your NUF(x) has an output range that is outside the range of the
Natural Numbers, and thus not "Well Defined" on that set (or the
rationals, or the Reals).
So, all you are showing is that you don't understand what a "well
defined" function means.
Or what a set is.
Post by Richard Damon
It is easy to create a word salad that seems to fully define a
function, that actually describes something that cannot exist, and
your above is just one example of that.
Indeed! He'll never learn though, he's completely stuck.
The iota-values, the range of the "natural/unit equivalency function",
not-a-real-function, in the usual and standard sense of the word,
but standardly modeled by real functions, as a continuum limit,
are rather well-defined.

Will he ever learn?
Erdélyi Szőke Komáromi
2024-02-16 21:44:19 UTC
Permalink
Post by Chris M. Thomasson
[0] = 1
[1] = .1
[2] = .01
[3] = .001
[...] = [...]
arbitrarily close seems to be the accepted term.
infinitely close is the wrong wording?
in my country a > [...] = [...] is not a number. It's imbecility.

Mike Terry
2024-02-16 21:45:29 UTC
Permalink
Hehe, numbers don't want to get close to zero! For example, 0.000000001 is pretty close to zero,
but is quite happy being a distance 0.000000001 from zero, and has no desire to get any closer! :)
Post by Chris M. Thomasson
[0] = 1
[1] = .1
[2] = .01
[3] = .001
[...] = [...]
What you have there, laddie, is a /sequence/, which /converges/ to zero...
Post by Chris M. Thomasson
arbitrarily close seems to be the accepted term.
Yes, "arbitrarily close" captures the idea that for any tolerence t, the sequence eventually gets
/and stays/ within that tolerence from the limit 0.

You could think of it as a game:  You go first, and must choose a tolerance t greater than zero.
Then it's my go, and I must choose a natural number n.  If ALL the sequence entries beyond the n'th
entry differ from the claimed limit by less than your t, I win! Else you win...

E.g. if you choose tolerance .001, I will choose n=4.  Since [4]=.0001, [5]=.00001, [6], [7], [8], et
al. are all within your given tolerance, I win!

If I can /always/ win the game, the sequence converges to the claimed limit. (Otherwise it doesn't...)

Another example... maybe your sequence above converges to 0.001?  No - you start by choosing e.g.
t=0.00000001. Now I'm stuck - as soon as we go past [4] = 0.0001 all further entries will differ
from 0.001 by more than your tolerance.  [Conclusion: the sequence does not converge to 0.001, as is
obvious intuitively, but I'm showing how the game works...]
Post by Chris M. Thomasson
infinitely close is the wrong wording?
Yes, it's not clear what "infinitely close" means - it sounds like it would be a property of two
specific numbers, like 0 and 0.00000000000000001, but that's nonsense - the latter is a fixed
non-zero distance from zero, and of course there are other numbers both closer or further away from
zero.

Using "arbitrarily" better captures the game-like nature of what is intended. I.e. that /first/ a
tolerance t is fixed as a kind of challenge, /then/ we try to find an entry in the sequence such
that that entry and all subsequent entries are within the fixed tolerance.  Mathematically this can
be expressed with quantifiers, which capture the order of choices in the game. Something like:

DEF:  x[n]  --> a    iff  ∀t>0 ∃n ∀m [ if m>n then |x[m] - a| < t ]
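And a rough sketch of that game in Python, if it helps (illustrative
only: s and respond are made-up names, and only finitely many entries
get sampled, so this demonstrates the definition rather than proving
anything):

def s(n):
    # the sequence from the example: 1, 0.1, 0.01, ...
    return 10.0 ** (-n)

def respond(t, claimed_limit=0.0, max_n=1000, lookahead=200):
    # my move: given your tolerance t, look for an n such that every
    # sampled entry beyond the n'th differs from the claimed limit
    # by less than t
    for n in range(max_n):
        if all(abs(s(m) - claimed_limit) < t for m in range(n + 1, n + lookahead)):
            return n
    return None    # no winning move found up to max_n

print(respond(0.0005))                      # 3 -- s(4), s(5), ... all beat the tolerance
print(respond(1e-8, claimed_limit=0.001))   # None -- 0.001 is not the limit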


HTH
Mike.
Post by Chris M. Thomasson
The function f(n) = 10^(-n) gets "infinitely close" to 0... lol. Using the "metaphysical formation"
of arbitrarily close... ;^)
Chris M. Thomasson
2024-02-16 23:12:50 UTC
Permalink
Hehe, numbers don't want to get close to zero!  For example, 0.000000001
is pretty close to zero, but is quite happy being a distance 0.000000001
from zero, and has no desire to get any closer!  :)
Post by Chris M. Thomasson
[0] = 1
[1] = .1
[2] = .01
[3] = .001
[...] = [...]
What you have there, laddie, is a /sequence/, which /converges/ to zero...
Post by Chris M. Thomasson
arbitrarily close seems to be the accepted term.
Yes, "arbitrarily close" captures the idea that for any tolerence t, the
sequence eventually gets /and stays/ within that tolerence from the
limit 0.
You could think of it as a game:  You go first, and must choose a
tolerance t greater than zero. Then it's my go, and I must choose a
natural number n.  If ALL the sequence entries beyond the n'th entry
differ from the claimed limit by less than your t, I win!  Else you win...
E.g. if you choose tolerance .001, I will choose n=4.  Since [4]=.0001,
[5]=.00001, [6], [7], [8], et al. are all within your given tolerance, I win!
If I can /always/ win the game, the sequence converges to the claimed
limit.  (Otherwise it doesn't...)
Another example... maybe your sequence above converges to 0.001?  No -
you start by choosing e.g. t=0.00000001.  Now I'm stuck - as soon as we
go past [4] = 0.0001 all further entries will differ from 0.001 by more
than your tolerance.  [Conclusion: the sequence does not converge to
0.001, as is obvious intuitively, but I'm showing how the game works...]
Post by Chris M. Thomasson
infinitely close is the wrong wording?
Yes, it's not clear what "infinitely close" means - it sounds like it
would be a property of two specific numbers, like 0 and
0.00000000000000001, but that's nonsense - the latter is a fixed
non-zero distance from zero, and of course there are other numbers both
closer or further away from zero.
Using "arbitrarily" better captures the game-like nature of what is
intended.  I.e. that /first/ a tolerance t is fixed as a kind of
challenge, /then/ we try to find an entry in the sequence such that that
entry and all subsequent entries are within the fixed tolerance.
Mathematically this can be expressed with quantifiers, which capture the
Your tolerance reminds me of an epsilon? Humm... Will get back to you. A
little busy right now.
    DEF:  x[n]  --> a    iff  ∀t>0 ∃n ∀m [ if m>n then |x[m] - a| < t ]
HTH
Mike.
Post by Chris M. Thomasson
The function f(n) = 10^(-n) gets "infinitely close" to 0... lol. Using
the "metaphysical formation" of arbitrarily close... ;^)
FromTheRafters
2024-02-16 22:33:12 UTC
Permalink
Post by Chris M. Thomasson
[0] = 1
[1] = .1
[2] = .01
[3] = .001
[...] = [...]
This sequence doesn't reach zero, but the series formed by adding its
terms, 1.111..., equals one and one ninth (10/9).
Post by Chris M. Thomasson
arbitrarily close seems to be the accepted term.
Approaching arbitrarily closely seems right to me.
Post by Chris M. Thomasson
infinitely close is the wrong wording?
That sort of works too. What you want to avoid is the 'infinite
<something>' which you often say when you mean 'infinitely many
something(s)'.
Post by Chris M. Thomasson
The function f(n) = 10^(-n) gets "infinitely close" to 0... lol. Using the
"metaphysical formation" of arbitrarily close... ;^)
I don't know what you are getting at with this.
Chris M. Thomasson
2024-02-16 23:16:31 UTC
Permalink
Post by FromTheRafters
Post by Chris M. Thomasson
[0] = 1
[1] = .1
[2] = .01
[3] = .001
[...] = [...]
This sequence doesn't reach zero, but this series (1.111...) equals one
and one ninth.
Post by Chris M. Thomasson
arbitrarily close seems to be the accepted term.
Approaching arbitrarily closely seems right to me.
Post by Chris M. Thomasson
infinitely close is the wrong wording?
That sort of works too. What you want to avoid is the 'infinite
<something>' which you often say when you mean 'infinitely many
something(s)'.
Post by Chris M. Thomasson
The function f(n) = 10^(-n) gets "infinitely close" to 0... lol. Using
the "metaphysical formation" of arbitrarily close... ;^)
I don't know what you are getting at with this.
I was told one time that "infinitely close" is in the realm of the
"metaphysical" because of the word infinite. However, the term
"arbitrarily close" is something others can deal with "better", so to
speak. Does that make any sense?
Chris M. Thomasson
2024-02-20 19:59:52 UTC
Permalink
Post by Chris M. Thomasson
Post by FromTheRafters
Post by Chris M. Thomasson
[0] = 1
[1] = .1
[2] = .01
[3] = .001
[...] = [...]
This sequence doesn't reach zero, but this series (1.111...) equals
one and one ninth.
Post by Chris M. Thomasson
arbitrarily close seems to be the accepted term.
Approaching arbitrarily closely seems right to me.
Post by Chris M. Thomasson
infinitely close is the wrong wording?
That sort of works too. What you want to avoid is the 'infinite
<something>' which you often say when you mean 'infinitely many
something(s)'.
Post by Chris M. Thomasson
The function f(n) = 10^(-n) gets "infinitely close" to 0... lol.
Using the "metaphysical formation" of arbitrarily close... ;^)
I don't know what you are getting at with this.
I was told one time that "infinitely close" is in the realm of the
"metaphysical" because of the word infinite. However, the term
"arbitrarily close" is something others can deal with "better", so to
speak. Does that make any sense?
You could say approaching asymptotically.
https://www.dictionary.com/browse/asymptotically
Fine with me.
mitchr...@gmail.com
2024-02-17 04:06:07 UTC
Permalink
Post by Chris M. Thomasson
[0] = 1
[1] = .1
[2] = .01
[3] = .001
[...] = [...]
arbitrarily close seems to be the accepted term.
infinitely close is the wrong wording?
Infinitely close and zero difference are not the same thing.
0 and the infinitesimal are said to behave the same in
calculus without actually being the same.
Post by Chris M. Thomasson
The function f(n) = 10^(-n) gets "infinitely close" to 0... lol. Using
the "metaphysical formation" of arbitrarily close... ;^)
Chris M. Thomasson
2024-02-17 22:29:12 UTC
Permalink
Post by Chris M. Thomasson
[0] = 1
[1] = .1
[2] = .01
[3] = .001
[...] = [...]
arbitrarily close seems to be the accepted term.
infinitely close is the wrong wording?
The function f(n) = 10^(-n) gets "infinitely close" to 0... lol. Using
the "metaphysical formation" of arbitrarily close... ;^)
A fun summation:

10^(-0) + 10^(-2) + 10^(-4) + 10^(-6) + ...

1.0101010101...

or:

10^(-0)*1 + 10^(-2)*2 + 10^(-4)*3 + 10^(-6)*4 + ...

1.020304 ...
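A quick sanity check of both sums (a Python sketch with exact fractions;
the closed forms 100/99 and (100/99)^2 = 10000/9801 are the standard
geometric-series values):

from fractions import Fraction

def partial_sum(term, k):
    # add up the first k terms of the series
    return sum(term(i) for i in range(k))

geometric = lambda i: Fraction(1, 100 ** i)        # 1 + 0.01 + 0.0001 + ...
weighted  = lambda i: Fraction(i + 1, 100 ** i)    # 1 + 2*0.01 + 3*0.0001 + ...

print(float(partial_sum(geometric, 20)))   # 1.0101010101... -> 100/99
print(float(partial_sum(weighted, 20)))    # 1.0203040506... -> 10000/9801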