Discussion:
Traditions die: Another One Bites the Dust
Mild Shock
2025-01-10 21:37:10 UTC
Hi,
Subject: An Isabelle Foundation?
Date: Fri, 10 Jan 2025 14:16:33 +0000
From: Lawrence Paulson via isabelle-dev
Some of us have been talking about how to keep
things going after the recent retirement of Tobias and
myself and the withdrawal of resources from Munich.
I've even heard a suggestion that Isabelle would not
be able to survive for much longer.
https://en.wikipedia.org/wiki/Isabelle_%28proof_assistant%29

No more money for symbolic AI? LoL

Maybe it is being supplanted by Lean (proof assistant),
and my speculation is that a re-orientation towards
hybrid methods, such as those found in ChatGPT, is
needed to keep up.
https://en.wikipedia.org/wiki/Lean_%28proof_assistant%29

Bye
Mild Shock
2025-01-10 21:45:25 UTC
Hi,

Interestingly, John Sowa comes close, via RNT, to how
ChatGPT works. In his LLM-bashing videos, John Sowa
repeatedly showed brain models in his slides that
come from Sydney Lamb:

Relational Network Theory (RNT), also known as
Neurocognitive Linguistics (NCL) and formerly as
Stratificational Linguistics or Cognitive-Stratificational
Linguistics, is a connectionist theoretical framework in
linguistics primarily developed by Sydney Lamb which
aims to integrate theoretical linguistics with neuroanatomy.
https://en.wikipedia.org/wiki/Relational_Network_Theory

You can ask ChatGPT, and it will tell you what
parallels it sees between LLMs and RNT.

Bye

P.S.: Here is what ChatGPT tells me about LLMs and RNT
as a summary; the full answer itself was much longer:

While there are shared aspects, particularly the
emphasis on relational dynamics, ChatGPT models are
not explicitly designed with RNT principles. Instead,
they indirectly align with RNT through their ability
to encode and use relationships learned from data.
However, GPT’s probabilistic approach and reliance
on large-scale data contrast with the more structured
and theory-driven nature of RNT.
https://chatgpt.com/share/67818fb7-a788-8013-9cfe-93b3972c8114

I would also put the answer into perspective by
including RAG: with Retrieval Augmented Generation,
things look completely different again.
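
To make the RAG point concrete, here is a minimal sketch of
the retrieve-then-generate loop. Everything in it is my own
illustration: the toy corpus, the keyword-overlap scoring and
the llm() stub are assumptions, not any product's actual API.

corpus = [
    "Relational Network Theory (RNT) was developed by Sydney Lamb.",
    "Isabelle is a proof assistant for higher-order logic.",
    "Lean is a proof assistant and dependently typed language.",
]

def retrieve(query, k=2):
    # Naive keyword-overlap scoring; real systems use vector
    # embeddings and an approximate nearest-neighbour index.
    words = set(query.lower().split())
    def score(doc):
        return len(words & set(doc.lower().split()))
    return sorted(corpus, key=score, reverse=True)[:k]

def llm(prompt):
    # Stand-in for a language-model call.
    return "[model answer grounded in the supplied context]"

def rag_answer(query):
    context = "\n".join(retrieve(query))
    # The model answers from the retrieved text, not only from
    # whatever relations it absorbed during pre-training; that
    # is why RAG changes the RNT comparison so much.
    return llm("Context:\n" + context + "\n\nQuestion: " + query)

print(rag_answer("Who developed Relational Network Theory?"))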
Post by Mild Shock
[...]
Mild Shock
2025-01-10 22:01:52 UTC
Hi,

Or ChatGPT is better at program verification. That is,
the fine-tuning ChatGPT has received for dealing with
queries that include programming-language fragments or
tasks, and for generating programming-language fragments
or tasks, is in practice already better and expected to
improve even more (a minimal sketch follows the list below):

- Not an architecture based on a symbolic HOL language
and a backend to extract programming languages.

- Not an architecture that only supports some academic
"pure" languages like ML, Haskell, Scala, etc.

- Rather an architecture where one directly interacts
in the programming languages, similar to how it deals
with natural language; pipe-dream ontologies are
nowhere to be seen.

- Rather an architecture that can deal with "impure"
languages such as JavaScript, Python, etc. that are
imperative and have side effects.

- What else?
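
One way to picture the last two points: you hand the model
the impure program text directly, as if it were natural
language. A minimal sketch, assuming the openai Python
package (>= 1.0) and a model name that may differ on your
account; the function bump() is a made-up example:

from openai import OpenAI

impure_code = '''
counter = 0
def bump(xs):
    global counter          # side effect: mutates module state
    counter += 1
    xs.append(counter)      # side effect: mutates the argument
    return sum(xs)
'''

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o",  # assumed model name, substitute your own
    messages=[{
        "role": "user",
        "content": "List the side effects of this function and "
                   "state a property of its return value:\n"
                   + impure_code,
    }],
)
print(resp.choices[0].message.content)

No HOL encoding, no extraction backend: the programming
language itself is the interface.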

Bye
Post by Mild Shock
[...]
Mild Shock
2025-01-11 01:13:02 UTC
Hi,

Once spearheading symbolic methods:

Automated Deduction at Argonne
https://www.mcs.anl.gov/research/projects/AR/

Now they invest in "foundation
models", ca. 68 million USD:

Argonne receives funding for artificial
intelligence in scientific research
- Privacy-Preserving Federated Learning for Science
- Tensor-Compressed Sustainable Pre-Training of
Extreme-Scale Foundation Models
https://www.anl.gov/article/argonne-receives-funding-for-artificial-intelligence-in-scientific-research

"foundation models" was initiated a while ago
for example by IBM, it was intially only deep
learning applied to like for example Climate Change.

It had for example some new meteo reporting
inovations, like quicker and more local forecasts:

IBM and NASA are building an AI foundation model for weather and climate
https://research.ibm.com/blog/weather-climate-foundation-model

but the term "foundation models" now seems to be used
also for models that include natural language. The term
has somehow become the new synonym for the machine
learning behind LLMs, with applications in health care,
etc.

What are foundation models?
https://www.ibm.com/think/topics/foundation-models

Bye
Post by Mild Shock
[...]
Mild Shock
2025-01-11 01:51:40 UTC
Hi,

Ok, I remember showing somebody around
in 2022 not only the AI Campus at EPFL,
but also the Computer Museum at EPFL:

Installation of EPFL's IBM Blue Gene/Q,
Lemanicus, at the Musée Bolo
https://www.museebolo.ch/blue-gene-q-au-musee-bolo/

What was Lemanicus? It had to do with Argonne:

- A 209 TFLOPS peak (172 TFLOPS LINPACK) Blue
Gene/Q system called Lemanicus was installed at
the EPFL in March 2013.[61] This system belongs
to the Center for Advanced Modeling Science CADMOS ([62])
which is a collaboration between the three main research
institutions on the shore of the Lake Geneva in
the French speaking part of Switzerland :
University of Lausanne, University of Geneva and EPFL.
The system consists of a single rack (1,024 compute nodes)
with 2.1 PB of IBM GPFS-GSS storage.
https://en.wikipedia.org/wiki/IBM_Blue_Gene#Blue_Gene/Q

Interestingly, Lemanicus is outrun by the NVIDIA H100 NVL.
But the NVIDIA H100 NVL doesn't even have a Wikipedia page yet.
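
A back-of-the-envelope check in Python, using the Lemanicus
figures quoted above; the H100 NVL number is an assumption
taken from NVIDIA's public datasheet, so treat it loosely:

lemanicus_peak_tflops = 209.0   # whole rack, per the quote above
nodes = 1024

h100_nvl_fp64_tflops = 68.0     # assumed: dual-GPU NVL pair, FP64

print(f"Per node: {lemanicus_peak_tflops / nodes:.2f} TFLOPS")
print(f"NVL pairs to match the rack: "
      f"{lemanicus_peak_tflops / h100_nvl_fp64_tflops:.1f}")
# Roughly three dual-GPU cards match the 2013 rack at FP64,
# before even counting the low-precision AI throughput.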

Bye

P.S.: Here is a comparison that ChatGPT generated:

Aspect             | Blue Gene/Q (Lemanicus)                          | NVIDIA H100 NVL
-------------------+--------------------------------------------------+-------------------------------------------
Primary Purpose    | Scientific simulations (physics, biology, etc.)  | AI/ML training and inference
Peak Performance   | 1.6 petaflops (double-precision)                 | Exaflop-level AI performance (FP8)
Memory             | 16 GB RAM per node                               | Large HBM3 memory per GPU
Interconnect       | 5D torus network                                 | NVLink with high bandwidth
Precision Focus    | Double-precision (64-bit floating-point)         | FP8, FP16, and INT8 for AI workloads
Energy Efficiency  | Extremely energy-efficient for its era           | Highly optimized for power efficiency
Software Ecosystem | Custom tools for HPC                             | CUDA, TensorRT, cuDNN for AI/ML workloads
Post by Mild Shock
[...]