Post by Chris M. Thomasson
Post by Ross Finlayson
Post by Chris M. Thomasson
Post by Ben Bacarisse
Post by Chris M. Thomasson
Take a number that wants to get close to zero.
This makes no sense. "a number" is one number. And numbers don't want
anything.
That was designed to raise a laugh or two. I guess it bombed. Yikes!
Post by Ben Bacarisse
Post by Chris M. Thomasson
[0] = 1
[1] = .1
[2] = .01
[3] = .001
[...] = [...]
Strange notation. [0] = 1. Eh? Why not just use a more conventional
Too used to a programming language wrt indexing arrays I guess. :^)
Post by Ben Bacarisse
s_0 = 1
s_1 = 0.1
etc.
You can, if you prefer, write it as a function: s(n) = 10^-n (as you do
later).
Post by Chris M. Thomasson
arbitrarily close seems to be the accepted term.
But it's not a very good one. For example, one could say that
p(n) = 2^n when n is even
p(n) = 2^-n when n is odd
gets arbitrarily close to zero but also arbitrarily far away from zero.
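(Concretely: p(1) = 1/2, p(3) = 1/8, p(5) = 1/32, ... head for
zero, while p(0) = 1, p(2) = 4, p(4) = 16, ... grow without
bound, so "gets arbitrarily close to zero" by itself pins down
very little.)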
That's fine with me. I can see it wrt your logic.
Post by Ben Bacarisse
Post by Chris M. Thomasson
infinitely close is the wrong wording?
I would not know what you mean if you said that, so I would say it's the
wrong wording. The best wording is to say
lim_{n->oo} s(n) = 0,
which you can read as "the limit, as n tends to infinity, of
s(n), is zero".
The limit of f(n) = 10^(-n) is zero. However, none of the iterates equal
zero. They just get closer and closer to it...
Post by Ben Bacarisse
Post by Chris M. Thomasson
The function f(n) = 10^(-n) gets "infinitely close" to 0... lol. Using the
"metaphysical formation" of arbitrarily close... ;^)
Oh. Does the ;^) mean this was all a joke? If so, sorry.
I was thinking that the statement: "how close is infinitely close to
arbitrarily close" would make some people laugh.
Hey, C/C++ being very familiar, and there being lots of
perceived utility in the non-blocking, one thing I wondered
about asking you was this idea of a queue that basically has
this sort of utility.
Queues are usually only iterated once. Yet, sometimes items
get appended and it's a well-formed sequence that makes up
the message. Hence the idea of this "mono-hydra" or "slique"
(by contrast with the hydra with its multiple heads, or the
deque, the double-ended queue): the idea is that this
particular data structure has its synchronization about
setting a mark, so that when the consumer scans from the
front and finds a full sequence, it pops all those items off
while swapping in the new head.
So the use case of this data structure is the very usual one
of buffering, or a burst queue, or whatever the reason is
that a queue is filling up in the interim: the producer just
atomic-appends items to the tail, until the consumer
indicates a well-formed message and pops that off.
Anyway, I wondered whether, in your studies of single- and
multiple-producer-and-consumer algorithms, toward the
lock-free, you've considered this sort of simple data
structure, which I call a "monohydra" or "slique".
It's for something like Internet protocols, where packets
arrive and get assembled until they result in a message.
This kind of data structure is entirely ephemeral, with no
purpose staying in main memory, so it suits a usual sort of
free-list approach to the usual sorts of uneven loads.
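Here's a minimal sketch of how I read the monohydra/slique,
just to pin the idea down: producers publish to the tail with
a single atomic exchange (in the style of the well-known
two-pointer MPSC queue), and the lone consumer scans from the
front for the mark, popping the whole well-formed prefix off
while swapping in the new head. The names here (Node, Slique,
end_of_message) are mine, purely for illustration, and
destruction/shutdown is elided:

#include <atomic>
#include <vector>

// One node per appended item (say, one packet fragment).
struct Node {
    std::atomic<Node*> next{nullptr};
    int payload{};
    bool end_of_message{false};  // the mark: completes a message
};

class Slique {
    std::atomic<Node*> tail;  // producers append here
    Node* head;               // consumer-private dummy node
public:
    Slique() {
        Node* dummy = new Node;
        head = dummy;
        tail.store(dummy, std::memory_order_relaxed);
    }

    // Producer side: one atomic exchange publishes the node.
    void push(Node* n) {
        Node* prev = tail.exchange(n, std::memory_order_acq_rel);
        prev->next.store(n, std::memory_order_release);
    }

    // Consumer side: scan from the front for the mark; when a
    // full sequence is present, pop it all off while swapping
    // in the new head (the mark node becomes the new dummy).
    bool pop_message(std::vector<int>& msg) {
        Node* mark = nullptr;
        for (Node* n = head->next.load(std::memory_order_acquire);
             n != nullptr;
             n = n->next.load(std::memory_order_acquire)) {
            if (n->end_of_message) { mark = n; break; }
        }
        if (!mark) return false;  // message not yet well-formed
        Node* n = head->next.load(std::memory_order_relaxed);
        delete head;              // retire the old dummy
        while (n != mark) {
            msg.push_back(n->payload);
            Node* spent = n;
            n = n->next.load(std::memory_order_relaxed);
            delete spent;
        }
        msg.push_back(mark->payload);
        head = mark;
        return true;
    }
};

The nice property is that push is a single wait-free exchange,
and the only producer/consumer interaction is the moment where
a freshly exchanged node's next pointer isn't linked yet, which
the consumer just reads as "message not complete yet".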
It's like "how's your chip design going" and
it's like "we solved it by adding burst buffers
between every two units" and it's like "that's
not very systolic".
A usual complement to "how low can you go"
is "how cold can it flow".
"If you're going to go about reinventing geometric series,
one need not do so so publicly." -- Virgil
I will get back to you, a bit busy right now. Actually, I am working on
a neat sort of distributed "queue'ish" work system. It only uses atomic
exchange. When you get some free time to burn, take a deep
look at this thread:
https://groups.google.com/g/comp.lang.c++/c/Skv1PoQsUZo/m/XI3Qw64xAAAJ
I've been thinking about cooperative multithreading,
basically cooperative threading with timeouts. Back around
2016 I wrote up this idea of the "re-routine", a sort of
outline for cooperative multithreading, where the idea is
that all the calls are idempotent and memoized, none return
null on success, and their exceptions are modeled as usual
error modes (or as flow of control with exceptions, if
that's the thing). The idea, then, is that the executor
runs the re-routine, and whenever it gets a null it figures
the call is pending, so it throws itself out; when the
callback arrives with a response, it just runs again right
through the re-routine, where of course it's all conditioned
on the idempotent, memoized intermediate results, until it
completes or errors. This way what sits on the heap is about
the same, the memoized intermediate results while the
routine is running, but there's no stack at all, so it's
not stack-bound, at all.
Of course preemptive multithreading, with the thread stack
and context switching, is about the greatest sort of thing
when some code isn't well-behaved or won't yield; here,
though, the figuring is about how basically to implement
cooperative multithreading, including timeouts and priority.
Thusly, in the same sort of context as the co-routine is
the re-routine, this being a model of cooperative
multithreading, at least in a module where it can be either
entirely callback-driven or include timeouts; besides, the
executor can run its own threads, stack-bound, for ensuring
usual throughput in case some re-routine is implemented in
blocking fashion.
This is a sort of idea where mostly what I want is that the
routine is written as if it were synchronous and serial and
blocking, so it's simple; then just the semantics of the
adapters, or the extra work in the glue logic, makes it so
that the serial re-routine is written naturally "in the
language", making it all readable and directly up-front,
and getting the async semantics out of the way.
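For concreteness, here's a toy C++ sketch of how I picture
the re-routine and its executor; the shape of it (Memo,
Pending, step, fulfill, and the string-keyed memo) is my own
guess at one possible rendering, not anything definitive:

#include <functional>
#include <map>
#include <set>
#include <string>

struct Pending {};  // unwinds the routine when a step isn't ready

// The memo of idempotent intermediate results: this, and not a
// stack, is all the state a parked re-routine keeps on the heap.
class Memo {
    std::map<std::string, std::string> results;  // completed steps
    std::set<std::string> started;               // calls already issued
public:
    // A memoized step: replay the result if present (never null
    // on success); otherwise issue the async call once and
    // unwind as pending.
    std::string step(const std::string& key,
                     const std::function<void()>& issue) {
        if (auto it = results.find(key); it != results.end())
            return it->second;
        if (started.insert(key).second)
            issue();  // idempotent: issued at most once
        throw Pending{};
    }

    // Callback side: deposit the response, then re-run the routine.
    void fulfill(const std::string& key, std::string value) {
        results.emplace(key, std::move(value));
    }
};

// The executor runs the routine from the top every time;
// completed steps replay instantly from the memo, so there is
// no saved stack at all.
bool run_once(Memo& memo,
              const std::function<std::string(Memo&)>& routine,
              std::string& out) {
    try { out = routine(memo); return true; }  // completed; real errors propagate
    catch (Pending&) { return false; }         // pending: throws itself out
}

// A re-routine written as if it were serial and blocking:
std::string greet(Memo& m) {
    std::string user = m.step("lookup-user", []{ /* begin async lookup */ });
    std::string name = m.step("fetch-name:" + user, []{ /* begin async fetch */ });
    return "hello, " + name;
}

So the driver calls run_once(memo, greet, out), gets false
while things are pending, and calls it again from each fulfill
callback until it returns true; the only cost of re-running
from the top is walking the memo.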
That's not really relevant here in this context of "the
mathematical infinite and unbounded", but it results in a
sort of "factory industry pattern" for re-routines, where
the implementation of the routine ends up being as simple
as possible and as close as possible to the language of its
predication, while it's all well-formed and its behavior
guaranteed, so that it can then be implemented variously,
local or remote.
I suppose "cooperative multithreading" isn't for everybody,
but re-routines are a great idea.
Then the idea of the sliques or monohydra is basically
exactly as for the buffering in serving the I/Os. I.e., the
idea here is that the usual request/response type things in
transit can sort of be related to what goes through DMA and
non-blocking or async I/O, and scatter/gather and vector
I/O, getting things flowing, led right by the little ring
through their nose.
Then, about the infinite and the infinite limit: it's
called "the infinite limit".
Consider a snake that travels from 0 to 1, then from 1 to 2.
Not only did it get "close enough" to 1,
it got "far enough" away to get to 2.
I.e., deductively, it crossed that bridge.
In continuous time, ....
It's called the "infinite limit", with the idea being also
that when it results in a continuum it's called the
"continuum limit".
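For reference, the usual formal statement of the infinite
limit: lim_{x->oo} f(x) = L means that for every eps > 0
there is an M such that |f(x) - L| < eps whenever x > M. And
the snake's "crossing the bridge" is just the intermediate
value theorem: a continuous path from 0 to 2 must pass
through 1 on the way.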