Discussion:
Wait till they figure out that China also has AI
Mild Shock
2025-01-02 19:03:40 UTC
Hi,

How it started:
https://www.instagram.com/p/Cump3losObg

How it's going:
https://9gag.com/gag/azx28eK

Bye
Mild Shock
2025-01-28 00:35:54 UTC
Hi,

Wait till the USA figures out there is a second
competitor besides DeepSeek; it's called Yi-Lightning:

Yi-Lightning Technical Report
https://arxiv.org/abs/2412.01253

It was already discussed 2 months ago:

Eric Schmidt DROPS BOMBSHELL: China DOMINATES AI!
http://youtu.be/ddWuEUjo4u4


Bye
Mild Shock
2025-01-28 00:42:21 UTC
Hi,

Given that Geoffrey Hinton had a Little Language Model
in 1985, and that he is half British, I have my doubts:

British-Canadian "Godfather of AI".
https://en.wikipedia.org/wiki/Geoffrey_Hinton

But I am not sure what will emerge from Europe. Maybe
they are not in a hurry? Or they will just use
the Chinese stuff? After all, everything is already
outsourced to China anyway, made in China.

Bye
Mild Shock
2025-01-28 01:15:02 UTC
Hi,

This is also fun:

https://chat.qwenlm.ai/

https://x.com/Alibaba_Qwen

Bye
Mild Shock
2025-01-31 15:26:33 UTC
Hi,

So how is it going? DeepSeek is embraced by many cloud
providers, even by NVIDIA NIM itself.

DeepSeek-R1 Now Live With NVIDIA NIM
https://blogs.nvidia.com/blog/deepseek-r1-nim-microservice/

But what are these models doing, and how are they
trained? Is Geoffrey Hinton our only AI God? There
seems to be another, slightly disputed AI God:

S. Hochreiter, J. Schmidhuber. Long Short-Term Memory. Neural
Computation, 9(8):1735-1780, 1997.
https://people.idsia.ch/~juergen/deep-learning-history.html
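The LSTM from that 1997 paper boils down to a few gated update equations. Here is a minimal numpy sketch of one LSTM cell step (the function and variable names are mine, not from the paper; real implementations like torch.nn.LSTM differ in layout and detail):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One step of a standard LSTM cell.
    W: (4n, d) input weights, U: (4n, n) recurrent weights, b: (4n,) biases,
    stacked as [input gate, forget gate, cell candidate, output gate]."""
    n = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0:n])       # input gate
    f = sigmoid(z[n:2*n])     # forget gate
    g = np.tanh(z[2*n:3*n])   # candidate cell state
    o = sigmoid(z[3*n:4*n])   # output gate
    c_new = f * c + i * g     # additive cell update ("constant error carousel")
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# tiny smoke run: 2-dim input, 3-dim hidden state
rng = np.random.default_rng(0)
d, n = 2, 3
W, U, b = rng.normal(size=(4*n, d)), rng.normal(size=(4*n, n)), np.zeros(4*n)
h, c = np.zeros(n), np.zeros(n)
for t in range(5):
    h, c = lstm_step(rng.normal(size=d), h, c, W, U, b)
print(h.shape, c.shape)
```

The additive cell update `c_new = f * c + i * g` is the trick that lets gradients survive over long sequences.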

Bye

P.S.: Does it allow a mechanistic view of our linguistic
brain, if the latent space is a set of semantic vectors?
So that learning is a kind of control mechanism:

Machine Learning Approach to Model Order Reduction
of Nonlinear Systems via Autoencoder and LSTM Networks
Thomas Simpson - 23 Sep 2021
https://arxiv.org/abs/2109.11213
Mild Shock
2025-01-31 22:56:28 UTC
Hi,

Please meet Luo Fuli:

The 29-Year-Old Genius Behind DeepSeek’s AI Revolution


I find this paper interesting; finally somebody has
something to say about fine-tuning during pretraining:

Raise a Child in Large Language Model
13 Sep 2021 - Fuli Luo et al.
https://arxiv.org/pdf/2109.05687

Bye
Mild Shock
2025-02-04 09:06:03 UTC
Hi,

Because of the wide availability of machine learning
via Python libraries, the whole world (at least China)
has become a big Petri dish that is experimenting with
new strategies to evolve brains on the computer.

A recent discovery seems to be Group Preference Optimization.
This is when you make the chat bot detect and react
differently to different groups of people. It seems to
work on the "policy level". I don't understand it yet
completely. But chat bots can then evolve and use
multiple policies automatically:

Group Preference Optimization
https://arxiv.org/abs/2310.11523

DeepSeekMath: Pushing the Limits
https://arxiv.org/abs/2402.03300

Now it seems that something similar is also at the core
of DeepSeekMath; what is possibly detected is not groups
of people but mathematical topics, so that in the end it excels.

When unsupervised learning is used, groups or math
topics might be found from the data, through a form of
abduction.
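For the DeepSeekMath side, the "group" in the paper's GRPO objective is a group of sampled answers to the same prompt: each answer's advantage is its reward standardized within that group, so no learned value baseline is needed. A minimal numpy sketch, assuming scalar 0/1 correctness rewards (the function name is mine):

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantage: standardize each sampled answer's reward
    within its own group (cf. DeepSeekMath, arXiv:2402.03300)."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

# one prompt, a group of 4 sampled answers scored 0/1 for correctness
adv = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
print(adv)  # correct answers get positive advantage, wrong ones negative
```

The advantages then weight the usual policy-gradient update; answers better than their own group are reinforced, worse ones suppressed.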

Bye
Mild Shock
2025-02-08 11:58:15 UTC
Hi,

For a while now I have been trying to motivate a biology
teacher to replicate the grokking experiment below. But I
have my own worries: why bother with the black box of what
a machine learning method has learnt?

Simple PyTorch Implementation of "Grokking"
https://github.com/teddykoker/grokking
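The experiment itself is tiny: all pairs (a, b) with label (a + b) mod p, a random train/validation split, and a small network trained far past the point where training accuracy saturates; "grokking" is the late jump in validation accuracy. A data-side sketch (the model and training loop live in the linked repo; the helper name and defaults here are mine):

```python
import numpy as np

def modular_addition_split(p=97, train_frac=0.5, seed=0):
    """All pairs (a, b) with label (a + b) mod p, randomly split
    into train and validation sets for the grokking experiment."""
    pairs = np.array([(a, b) for a in range(p) for b in range(p)])
    labels = (pairs[:, 0] + pairs[:, 1]) % p
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(pairs))           # p*p examples in total
    cut = int(train_frac * len(pairs))
    train, val = idx[:cut], idx[cut:]
    return (pairs[train], labels[train]), (pairs[val], labels[val])

(train_x, train_y), (val_x, val_y) = modular_addition_split()
print(len(train_x), len(val_x))  # 4704 4705 for p = 97
```

With only half the pairs in training, the network must generalize the modular structure rather than memorize, which is exactly where grokking shows up.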

Well, it's not correct to say that the learnt model is a black box.
The training data was somehow a black box, but the resulting
model is a white box: you can inspect it.

This gives rise to a totally new scientific profession of
full-time artificial intelligence model gazers. And it is
April Fools' Day all year long:

Language Models Use Trigonometry to Do Addition
https://arxiv.org/abs/2502.00873
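The "trigonometry" picture from that paper can be sketched in a few lines: represent each number as an angle on a circle of period T, add via the trig angle-sum identities, and read the result back off. This is only an illustration of the "Clock" idea, not the paper's actual learned features:

```python
import numpy as np

def clock_add(a, b, T=10):
    """Addition as rotation: encode a and b as angles 2*pi*a/T and
    2*pi*b/T, combine with the cos/sin angle-sum identities, and
    decode the nearest residue, giving (a + b) mod T."""
    theta_a, theta_b = 2 * np.pi * a / T, 2 * np.pi * b / T
    cos_sum = np.cos(theta_a) * np.cos(theta_b) - np.sin(theta_a) * np.sin(theta_b)
    sin_sum = np.sin(theta_a) * np.cos(theta_b) + np.cos(theta_a) * np.sin(theta_b)
    angle = np.arctan2(sin_sum, cos_sum) % (2 * np.pi)
    return int(round(angle * T / (2 * np.pi))) % T

print(clock_add(7, 8))  # (7 + 8) mod 10 = 5
```

So the "trigonometry" the model gazers found is just a smooth way to compute modular addition, which connects back nicely to the grokking experiment above.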

Have Fun!

Bye