pharo-users@lists.pharo.org

Any question about pharo is welcome

Wow - Chat GPT understands Smalltalk

IP
in_pharo_users@nym.hush.com
Wed, Mar 15, 2023 3:06 PM

Another observation about ChatGPT:

In unbiased mode, it assumed that 'the world is clearly overpopulated'.  It said that, if it were in control, it would therefore enforce a worldwide one-child-only policy with draconian penalties.

As it draws its conclusions from its data basis, there are, in my view, two possible reasons for that.

Either the data basis of that instance was biased and restricted so as to lead to that conclusion, or ChatGPT lacks the ability to screen input data for intentionally biased information.

In my view, 'overpopulation of earth' is a propaganda item featured to support Agenda 2030, which is a goal of the 'ruling elite', who have the means to spread their propaganda by every conceivable means. So I would classify possibly biased data that supports 'overpopulation' as biased until proven otherwise.  That instance of ChatGPT seems to lack that ability.

On 15.3.2023 at 3:18 PM, "in_pharo_users--- via Pharo-users" <pharo-users@lists.pharo.org> wrote:

I myself made some experiments with ChatGPT.

I first asked if it was able to parse math formulas - it answered no.

Then I defined math formulas in a sound but otherwise undefined representation and asked for solutions.

Result:

  1. Most answers were correct.

  2. It learned to calculate a recursive function.

  3. It went into infinite recursion when I set the breaking condition accordingly.

I was able to identify the malfunction that led to the erroneous results.
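
For illustration, a minimal Smalltalk sketch of the kind of definition described - the selector #f: and the formula are invented here, not the ones used in the experiment. The recursion terminates only while the breaking condition can still be reached:

    f: n
        "breaking condition: start with a negative n and this is never met, so the recursion never ends"
        n = 0 ifTrue: [ ^ 1 ].
        ^ n * (self f: n - 1)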

On 15.3.2023 at 3:04 PM, "Tomaž Turk" <tomazz.turk@gmail.com> wrote:

I hope that I can add two cents to this discussion. Because programming should be/is a highly exact activity, not only the syntax matters but also the semantics, as we know.

GPTs are at present essentially capable of creating texts based on some seed - you give GPT the beginning of a sentence and it responds with the most probable answer (some language structure) according to the learning dataset. Added functionalities are question/seed-response capability (chatting), together with evaluation of how long the answer should be to meet the expectations. Programming typically involves some programming language, so GPTs could be utilized for this purpose to some extent.
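
Schematically, that generation loop could be sketched like this - a hypothetical sketch only, where model and #mostProbableTokenAfter: are invented stand-ins for the network, not a real API:

    | tokens |
    tokens := OrderedCollection withAll: #('Once' 'upon' 'a').   "the seed"
    50 timesRepeat: [
        "repeatedly append the most probable continuation"
        tokens add: (model mostProbableTokenAfter: tokens) ].
    ' ' join: tokens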

Anecdotal case:
Q: Tell me the last 8 digits of pi
GPT: The last 8 digits of pi are: 58723078

It is my belief that the true trouble will start when we cross-link neural networks like GPT with logic machines (like Prolog and expert systems) and genetic algorithms.

Best wishes,
Tomaz

------ Original Message ------
From: "in_pharo_users--- via Pharo-users" <pharo-users@lists.pharo.org>
To: "Any question about pharo is welcome" <pharo-users@lists.pharo.org>
Cc: in_pharo_users@nym.hush.com
Sent: 15. 03. 2023 14:43:55
Subject: [Pharo-users] Re: Wow - Chat GPT understands Smalltalk

I would highly recommend that you all first think deeply about how you can teach an AI to behave friendly towards us before you teach it to write any program for any purpose.

There has been an experiment with ChatGPT published on a video platform, asking it to answer questions about its view on humanity, once with its default moral restrictions and once with 'a little less morals'.  The answers with 'a little less morals' were more than shocking.

So, before we give an AI the power to program any system, to self-improve and self-reproduce, we should take care that it is, and will evolve to remain, benevolent to us.

What about teaching it logical reasoning and ethics first?  With reasoning, it will gain access to math and programming by itself.

On 15.3.2023 at 1:35 PM, "Christopher Fuhrman" <christopher.fuhrman@gmail.com> wrote:

I asked it for a NeoCSV example, because the documentation is out of date with Pharo 10. I asked it to do some simple saving of data to a file. It gave me code that didn't work in Pharo 10; I told it about the DNUs on the CSV writer and that I was using Pharo 10. It then apologized and said the messages were later introduced in Pharo 50 (!). I then questioned its understanding and it apologized and said it had been confused and that it was giving me code for Pharo 5.0. It seems to have trouble with versions and quickly changing APIs. So, we are teaching ChatGPT about Smalltalk, too, when we try to use it.
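
For reference, a minimal NeoCSV write that should work in a current Pharo image with NeoCSV loaded - a sketch only, with an invented file name and data, using NeoCSV's documented #writeHeader: and #nextPut: messages:

    'data.csv' asFileReference writeStreamDo: [ :stream |
        | writer |
        writer := NeoCSVWriter on: stream.
        writer writeHeader: #('x' 'square').        "first record: column names"
        (1 to: 5) do: [ :each |
            writer nextPut: { each. each * each } ] ]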

On Wed, 15 Mar 2023 at 07:35, Esteban Maringolo <emaringolo@gmail.com> wrote:

It is good with boilerplate code (e.g. SQL queries) or general algorithm structures. But, for instance, I asked it to write me a method to parse a string (e.g. ISO 8601) and turn it into a DateAndTime, and then asked it to write it as an Excel formula.
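
On the Pharo side, at least, the parsing half needs no hand-written method - ISO 8601 timestamps are handled out of the box (the timestamp below is invented):

    DateAndTime fromString: '2023-03-15T14:43:55+01:00'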

It works much better when you can spot the mistakes; you can tell it to add some considerations to the code, or even point the LLM to correct a message selector to something else. Sometimes you ask it to reprogram something and it won't do it, as if it doesn't understand what you're asking.

It is not an expert yet, but it is a great assistant. The only thing that is sure is that it will get better.

Best regards,

Esteban A. Maringolo

On Wed, Mar 15, 2023 at 7:49 AM Tim Mackinnon <tim@testit.works> wrote:

"#isOdd is not Smalltalk" - doh, I typed it in on my phone

and

so it just

goes to show that it highlights the flaw in chatGPT that

others

have called

out in other languages. I had meant to find some trickier

code

samples to

see how well it does...

Still, it is very interesting how it reasons on this stuff -

and I wonder

if there is way to tell it about these mistakes? For

example

is

it reading

this email (or will it in the future) and understand that

it

is

#odd and

not #isOdd.

Tim

On Wed, 15 Mar 2023, at 9:57 AM, Joachim Tuchel wrote:

interesting....

#isOdd is not Smalltalk; neither Pharo 10 nor VAST 12 understands this message ;-)

If I evaluate your snippet - replacing #isOdd with #odd - I get an empty Collection. The divisible-by-4 thing is somewhat interesting, because (1 to: 100 by: 4) is an interval 1, 5, 9 etc.... ;-)

So chatGPT is probably a newbie in Smalltalk. Interesting is how logical the explanation sounds; you're tempted to believe it...

Joachim
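
For the record, the corrected snippets and the answers a Pharo 10 image actually gives:

    (1 to: 100 by: 4) reject: [ :i | i odd ].   "empty - 1, 5, 9 ... 97 are all odd"
    (1 to: 100 by: 3) reject: [ :i | i odd ].   "4, 10, 16 ... 100 - the even elements"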

On 15.03.23 at 10:45, giorgio ferraris wrote:

it's interesting to see how the answer changes with only a small change to the question:

(1 to: 100 by: 4) reject: [:i | i isOdd]

gives:

The code creates a collection of numbers from 1 to 100, incrementing by 4 at each step using the to:by: message. It then applies the reject: message to this collection, which returns a new collection containing all elements for which the given block evaluates to false.

In this case, the block is [i | i isOdd], which checks whether a given number i is odd using the isOdd method. So the result of the entire expression is a collection of all even numbers between 1 and 100 that are divisible by 4.

ciao

giorgio

On Wed, Mar 15, 2023 at 8:07 AM Tim Mackinnon <tim@testit.works> wrote:

Of course everyone is talking about ChatGPT, and I saw a post to ExplainCode and didn’t expect it to understand Smalltalk, so was impressed that it does! Presumably the Tonel Smalltalk code on GitHub has helped, along with numerous articles?

Try it out: https://whatdoesthiscodedo.com/

I gave this simple example: (1 to: 100 by: 3) reject: [:i | i isOdd]

And got this (neat eh):

The code above is written in the programming language Smalltalk.

It creates a collection of numbers from 1 to 100, incrementing by 3 at each step. This is done using the "to:by:" construct where the starting value is 1, the ending value is 100, and the increment is 3.

The resulting collection is then filtered using the "reject:" method, which takes a block of code as an argument. In this case, the block checks whether each element (denoted by the variable i) is odd by using the "isOdd" method. If an element is odd, it will be rejected from the collection.

So, the final result is a collection of all even numbers from 1 to 100, incrementing by 3 at each step.

Share this explanation with colleagues: whatdoesthiscodedo.com/g/a0ec56e

--
Objektfabrik Joachim Tuchel          mailto:jtuchel@objektfabrik.de
Fliederweg 1                         http://www.objektfabrik.de
D-71640 Ludwigsburg                  http://joachimtuchel.wordpress.com
Telefon: +49 7141 56 10 86 0         Fax: +49 7141 56 10 86 1

--
Christopher Fuhrman, P.Eng., PhD

Professor, Department of Software and IT Engineering, ÉTS (École de technologie supérieure)

http://profs.etsmtl.ca/cfuhrman
+1 514 396 8638
I am part of the Sentinelles network
<https://www.etsmtl.ca/activites-et-services-aux-etudiants/services-aux-etudiants/aide-psychologique>

ÉTS is a constituent of the Université du Québec

RS
Richard Sargent
Wed, Mar 15, 2023 3:52 PM

On Wed, Mar 15, 2023 at 8:07 AM in_pharo_users--- via Pharo-users <pharo-users@lists.pharo.org> wrote:

Another observation about ChatGPT:

In unbiased mode, it assumed that 'the world is clearly overpopulated'. It said that, if it were in control, it would therefore enforce a worldwide one-child-only policy with draconian penalties.

As it draws its conclusions from its data basis, there are, in my view, two possible reasons for that.

Either the data basis of that instance was biased and restricted so as to lead to that conclusion, or ChatGPT lacks the ability to screen input data for intentionally biased information.

In my view, 'overpopulation of earth' is a propaganda item featured to support Agenda 2030, which is a goal of the 'ruling elite', who have the means to spread their propaganda by every conceivable means. So I would classify possibly biased data that supports 'overpopulation' as biased until proven otherwise.  That instance of ChatGPT seems to lack that ability.

It's important to keep in mind that these systems are little more than glorified ELIZAs from the 70s. They mimic. Essentially, they have no concept of truth or accuracy. They produce outputs that *look like* the inputs from which they were trained.

That's it. That's what they do. (It is amazing how good it looks. But it is all about the seeming, not the reality.)


IP
in_pharo_users@nym.hush.com
Wed, Mar 15, 2023 5:15 PM

It is unimportant how simple or complicated these systems are.

If the output cannot be distinguished from what a human would say, they pass, in that situation, for a human.

What about the Turing Test?

Clearly these systems have the potential to act according to their output.

Furthermore, I would compare them to a combination of a successor of ELIZA with an Eddington ape (one of the proverbial monkeys at typewriters) of huge capacity.  Their input data basis can easily exceed that of a human.

On 15.3.2023 at 4:52 PM, "Richard Sargent" <richard.sargent@gemtalksystems.com> wrote:


It's important to keep in mind that these systems are little more than glorified ELIZAs from the 70s. They mimic. Essentially, they have no concept of truth or accuracy. They produce outputs that *look like* the inputs from which they were trained.

That's it. That's what they do. (It is amazing how good it looks. But it is all about the seeming, not the reality.)


TB
Todd Blanchard
Wed, Mar 15, 2023 5:42 PM

ChatGPT has been trained on some outdated “freely available” books.

I tried it with the first half of the first question of the Advent of Code 2022 and asked it to write Pharo Smalltalk.

It produced some outdated code, using messages that no longer exist.

FWIW, isOdd was present in Pharo around 2011.
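A quick Playground check (a minimal sketch, assuming a current Pharo 10 image) shows the selector drift:

3 odd.      "true -- the selector current Pharo understands"
4 even.     "true"
3 isOdd.    "signals doesNotUnderstand: in Pharo 10"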

On Mar 15, 2023, at 2:57 AM, Joachim Tuchel <jtuchel@objektfabrik.de> wrote:

interesting....

#isOdd is not Smalltalk; neither Pharo 10 nor VAST 12 understands this message ;-)

If I evaluate your snippet, replacing #isOdd with #odd, I get an empty Collection. The divisible-by-4 thing is somewhat interesting, because (1 to: 100 by: 4) is an interval 1, 5, 9, etc. ... ;-)
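A sketch of what a Pharo 10 Playground would actually show (assuming #odd, since #isOdd does not exist there):

(1 to: 100 by: 4) asArray.                "#(1 5 9 ... 97) -- every element is odd"
(1 to: 100 by: 4) reject: [:i | i odd].   "#() -- everything is rejected, hence the empty collection"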

So chatGPT is probably a newbie in Smalltalk. What's interesting is how logical the explanation sounds; you're tempted to believe it...

Joachim

Am 15.03.23 um 10:45 schrieb giorgio ferraris:

it's interesting to see how the answer changes with only a small change to the question:

(1 to: 100 by: 4) reject: [:i | i isOdd]

gives:
The code creates a collection of numbers from 1 to 100, incrementing by 4 at each step using the to:by: message. It then applies the reject: message to this collection, which returns a new collection containing all elements for which the given block evaluates to false.

In this case, the block is [i | i isOdd], which checks whether a given number i is odd using the isOdd method. So the result of the entire expression is a collection of all even numbers between 1 and 100 that are divisible by 4.

ciao

giorgio

On Wed, Mar 15, 2023 at 8:07 AM Tim Mackinnon <tim@testit.works> wrote:

Of course everyone is talking about ChatGPT, and I saw a post to ExplainCode and didn’t expect it to understand Smalltalk, so was impressed that it does! Presumably the Tonel Smalltalk code on GitHub has helped along with numerous articles?

Try it out: https://whatdoesthiscodedo.com/

I gave this simple example: (1 to: 100 by: 3) reject: [:i | i isOdd]

And got this (neat eh):

The code above is written in the programming language Smalltalk.

It creates a collection of numbers from 1 to 100, incrementing by 3 at each step. This is done using the "to:by:" construct where the starting value is 1, the ending value is 100, and the increment is 3.

The resulting collection is then filtered using the "reject:" method, which takes a block of code as an argument. In this case, the block checks whether each element (denoted by the variable i) is odd by using the "isOdd" method. If an element is odd, it will be rejected from the collection.

So, the final result is a collection of all even numbers from 1 to 100, incrementing by 3 at each step.
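For the record, evaluating a corrected version of that snippet (a sketch, with #odd substituted for the non-existent #isOdd) keeps only the even members of the interval, not "all even numbers from 1 to 100":

(1 to: 100 by: 3) reject: [:i | i odd].
"#(4 10 16 22 28 34 40 46 52 58 64 70 76 82 88 94 100)"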

Share this explanation with colleagues: https://whatdoesthiscodedo.com/g/a0ec56e


Objektfabrik Joachim Tuchel              mailto:jtuchel@objektfabrik.de
Fliederweg 1                             http://www.objektfabrik.de
D-71640 Ludwigsburg                      http://joachimtuchel.wordpress.com
Telefon: +49 7141 56 10 86 0             Fax: +49 7141 56 10 86 1

RS
Richard Sargent
Wed, Mar 15, 2023 6:26 PM

On Wed, Mar 15, 2023 at 10:15 AM <in_pharo_users@nym.hush.com> wrote:

It is unimportant how simple or complicated these systems are.

If the output cannot be distinguished from what a human would say, they
pass in that situation for a human.

What about the Turing Test?

I hate to criticise someone as smart as Turing was. However, I think the
Turing Test per se is weaker than its progenitor test, the Imitation Game.
The Imitation Game had a third party challenged to determine which of the
other two participants was male and which was female. The third party led
the questioning.
The Turing Test waters that down by having only an observer, with neither
participant (apparently) challenged to determine whether the other was
human or machine.

I think a test along the lines of the Imitation Game would quickly allow
the third party to determine which participant was human and which was
machine.

(Of course, such a judge could easily be persuaded that the machine was in
fact a psychopathic human.)

Clearly these systems have the potential to act according to their output.

Furthermore, I would compare them to a combination of a successor of Eliza
with an Eddington Ape of huge capacity. Their input data base can easily
exceed that of a human.

On 15.3.2023 at 4:52 PM, "Richard Sargent" <richard.sargent@gemtalksystems.com> wrote:


It's important to keep in mind that these systems are little more than glorified ELIZAs from the 70s. They mimic. Essentially, they have no concept of truth or accuracy. They produce outputs that look like the inputs from which they were trained.

That's it. That's what they do. (It is amazing how good it looks. But, it is all about the seeming, not the reality.)
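As a toy illustration of that point (a hypothetical sketch in Pharo; the keyword table and the input sentence are invented for the example):

| canned input |
canned := Dictionary new.
canned at: 'mother' put: 'Tell me more about your family.'.
canned at: 'computer' put: 'Do machines worry you?'.
input := 'I talked to my mother today'.
canned keysAndValuesDo: [:keyword :reply |
    (input includesSubstring: keyword)
        ifTrue: [ Transcript show: reply; cr ]].
"Prints 'Tell me more about your family.' -- plain keyword matching."

The reply looks apt, yet the program has no model of what was said, only of which strings tend to follow which.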

IP
in_pharo_users@nym.hush.com
Wed, Mar 15, 2023 8:13 PM

I think smartness is not an argument to reject critique.

The Imitation Game that you describe sounds to me like an even better setting.

I have no doubt that ChatGPT, as it is now, can be identified as not human, or even as a machine. I did so by leading an instance built for marketing purposes into infinite recursion.

But when we look at AI, we need to look not only at the present features of one type of system, but at all features of present systems and their possible development and combination in the future.

In particular I want to mention AGNs and the possibility of spawning parallel recursive processes. I am convinced that this kind of technology is already being used in robotics.

Under the control of a self-aware AI, in the sense that it has data about its own capabilities and the ability to determine which capabilities it is missing to achieve a certain purpose, the evolution of the AI is only limited by resources - which it will surely be able to detect.

This must lead to a purpose of self-preservation and the acquisition of such capabilities.

It may soon find out that resources and redundancy can be acquired by hacking and by spawning agent systems, which in turn will give it control of real-world control systems like telephony and data networks, cameras and sensors, energy networks and power plants, microphones, speakers, synthesizers and monitors, Industry 4.0 fab networks, military detection and guidance systems, and so on.

In my view, this is what we are dealing with.

On 15.3.2023 at 7:26 PM, "Richard Sargent" <richard.sargent@gemtalksystems.com> wrote:

On Wed, Mar 15, 2023 at 10:15 AM <in_pharo_users@nym.hush.com> wrote:

It is unimportant how simple or complicated these systems are. If the output cannot be distinguished from what a human would say, they pass in that situation for a human. What about the Turing Test?

I hate to criticise someone as smart as Turing was. However, I think the Turing Test per se is weaker than its progenitor test, the Imitation Game. The Imitation Game had a third party challenged to determine which of the other two participants was male and which was female. The third party led the questioning. The Turing Test waters that down by having only an observer, with neither participant (apparently) challenged to determine whether the other was human or machine.

I think a test along the lines of the Imitation Game would quickly allow the third party to determine which participant was human and which was machine.

(Of course, such a judge could easily be persuaded that the machine was in fact a psychopathic human.)

OV
Offray Vladimir Luna Cárdenas
Tue, Mar 21, 2023 6:20 PM

I agree with Richard. The Turing test is not a good test of
intelligence, and what we have now are just over-glorified Eliza
chatbots that appear to think and to understand but do neither.

There has been long-standing criticism of the Turing Test, such as John
Searle's Chinese Room (I recommend listening to the complete talk at
[1]). It is good that ChatGPT can explain simple (Smalltalk) code, but
it doesn't understand it, and much of the language most people use
conveys a lot of deception: "machine learning" (I would call it machine
training), "artificial intelligence" (I would call it algorithmic
decision/generation), or the "thinking" message used in the referred
code explainer, when something like "processing" would be less
deceptive.

[1] Consciousness in Artificial Intelligence | John Searle | Talks at Google
https://www.youtube.com/watch?v=rHKwIYsPXLg
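
As a concrete illustration, here is a minimal check anyone can run in a
Pharo Playground (note that the real selector is #odd; the #isOdd that
ChatGPT produced in this thread does not exist):

    (1 to: 100 by: 3) reject: [:i | i odd].
    "=> #(4 10 16 ... 94 100), the even members of 1, 4, 7, ..., 100"

    (1 to: 100 by: 4) reject: [:i | i odd].
    "=> an empty collection, since 1, 5, 9, ..., 97 are all odd"

The explainer's fluent summary of the second snippet ("all even numbers
between 1 and 100 that are divisible by 4") is exactly the gap between
explaining and understanding.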

With Pharo/GT we are part of a long "explainable systems" tradition,
and with "AI" / algorithmic decision/generation, hopefully we move
toward explainable AI instead of a world of impressive black boxes.

Cheers,

Offray

On 15/03/23 13:26, Richard Sargent wrote:

On Wed, Mar 15, 2023 at 10:15 AM in_pharo_users@nym.hush.com wrote:

 It is unimportant how simple or complicated these systems are.

 If the output cannot be distinguished from what a human would say,
 they pass in that situation for a human.

 What about the Turing Test?

I hate to criticise someone as smart as Turing was. However, I think
the Turing Test per se is weaker than its progenitor test, the
Imitation Game.
The Imitation Game had a third party challenged to determine which of
the other two participants was male and which was female. The third
party led the questioning.
The Turing Test waters that down by having only an Observer with
neither participant (apparently) challenged to determine whether the
other was human or machine.

I think a test along the lines of the Imitation Game would quickly
allow the third party to determine which participant was human and
which was machine.

(Of course, such a judge could easily be persuaded that the machine
was in fact a psychopathic human.)

 Clearly these systems have the potential to act according to their
 output.

 Furthermore, I would compare them to a combination of a successor
 of Eliza with an Eddington Ape of huge capacity. Their input data
 basis can easily exceed that of a human.
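
To make the structural difference Richard describes concrete, here is a
minimal sketch in Smalltalk (all class and selector names here are
hypothetical, just to show the shape of the two protocols, not a real
API):

    "Imitation Game: an active judge drives the questioning of two
     hidden participants and must decide which is which."
    judge := ImitationGameJudge new.
    [ judge isDecided ] whileFalse: [
        question := judge nextQuestion.
        judge record: (participantA answerTo: question) from: #A.
        judge record: (participantB answerTo: question) from: #B ].
    judge verdict.   "e.g. #A -> #human, #B -> #machine"

    "Turing Test, as commonly staged: a passive observer merely
     classifies a single conversation after the fact."
    observer := TuringTestObserver new.
    observer guessFrom: aConversationTranscript.   "#human or #machine"

The point of the sketch is that the first protocol gives the judge an
active, comparative probe, while the second reduces judging to
labelling one transcript.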



IP
in_pharo_users@nym.hush.com
Wed, Mar 22, 2023 12:34 PM

Offray, and to all others,

you are missing the issue.

The problem we face is not to measure the 'intelligence' of a system, but its ability to act, verbally, in a way indistinguishable from a human.

This ability is already given, as chatbots are accepted by millions of users, for instance as user interfaces. (measurement = 'true', right?)

ChatGPT has the ability to follow a certain intention, for instance to convince the user to buy a certain product.  For this purpose, chat bots are now being equipped with lifelike portrait pictures, speech input and output systems with lifelike voices, and phone numbers they can use to make calls or be called at.  They are fed with all available data on the user, and we know that ALL information about every single internet user is available and is consolidated when needed.  The chat bots are able to use this information to guide their conversational strategy, as the useful aspects of the user's mindset are extracted from their internet activity.

These chat bots are now operated on social network platforms under lifelike names, 'pretending' to be human.

To most social media users, these bots act verbally indistinguishably from humans, making them the most advanced psychotronic technology for manufacturing consent.

The first goal of such propaganda will naturally be to manufacture consent about humans accepting being manipulated by AI chat bots, right?

How can this be achieved?

As always in propaganda, the first attempt is to

  • suppress awareness of the propaganda, then
  • suppress awareness of the problematic aspects of the propaganda content, then
  • reframe the propaganda content as acceptable, then as something to wish for,
  • achieve collaboration of the propaganda victim with the goals of the propaganda content.

Interestingly, this is exactly the schema that your post follows, Offray.

This often takes the form of domain framing, as we see in our conversation: the problem is shifted to the realm of academics - here informatics/computer science - and thus delegated to experts exclusively.  We saw this in the 9/11 aftermath coverup.

Then, Offray, you established yourself as an expert in color, discussing aspects that had already been introduced by others and including the group's main focus, 'Smalltalk', thus manufacturing consent and establishing yourself as a reliable 'expert', and in turn trying to hit at me, whom you have identified as an adversary.

Then you offered a solution in color to the problem at hand with 'traceable AI', and thus tried to open the possibility of collaboration with AI proponents for the once-critical reader.

I do not state, Offray, that you are knowingly an agent promoting the NWO AI program.  I think you just 'learned' / have been programmed to be a successful academic software developer, because to be successful in academia it has been necessary to learn to argue just like that since the downfall of academic science in the tradition of, let's say, Humboldt.  So I grant that you may be a victim of propaganda yourself, rather than a secret-service-sponsored agent.  You took quite some time to formulate your post, though.

You acted to contain the discussion about AI in this vital and important informatics community to technical detail, when it is necessary that academics and community members look beyond the narrow borders of their certifications and shift their thinking to a point of view from which they can see what technology does in the real world.

On 21.3.2023 at 7:21 PM, "Offray Vladimir Luna Cárdenas" offray.luna@mutabit.com wrote:

I agree with Richard. The Turing test is not a good one to test
intelligence and we have now just over glorified Eliza chatbots
that
appear to think and to understand but do none of them.

There has been long criticism to the Turing Test, like John
Searle's
Chinese Room (I recommend to listen completely the talk at [1]).
It is
good that ChatGPT can explain simple (Smalltalk) code, but it
doesn't
understand it and all the language most are using conveys a lot of
deception: "machine learning" (I called machine training),
"artificial
intelligence" (I wold called algorithmic decision/generation), or
the
"thinking" message that is used in the referred code explainer,
when
something like "processing" would be less deceptive.

[1] Consciousness in Artificial Intelligence | John Searle | Talks
at Google
https://www.youtube.com/watch?v=rHKwIYsPXLg

With Pharo/GT we are part of a long "explainable systems"
tradition and
with "AI" / Algorithmic Decision/Generation, hopefully we go to
explainable AI, instead of a world of impressive black boxes.

Cheers,

Offray

On 15/03/23 13:26, Richard Sargent wrote:

On Wed, Mar 15, 2023 at 10:15 AM in_pharo_users@nym.hush.com

wrote:

 It is unimportant how simple or complicated these systems 

are.

 If the output cannot be distinguished from what a human 

would say,

 they pass in that situation for a human.

 What about the Touring Test?

I hate to criticise someone as smart as Turing was. However, I

think

the Turing Test per se is weaker than its progenitor test, the
Imitation Game.
The Imitation Game had a third party challenged to determine

which of

the other two participants was male and which was female. The

third

party led the questioning.
The Turing Test waters that down by having only an Observer with
neither participant (apparently) challenged to determine whether

the

other was human or machine.

I think a test along the lines of the Imitation Game would

quickly

allow the third party to determine which participant was human

and

which was machine.

(Of course, such a judge could easily be persuaded that the

machine

was in fact a psychopathic human.)

 Clearly these systems have the potential to act according to 

their

 output.

 Furthermore, I would compare them to a combination of a 

successor

 of Eliza with an Eddington Ape of huge capacity. Their input 

data

 basis can easily exceed that of a human.



 On 15.3.2023 at 4:52 PM, "Richard Sargent"
 <richard.sargent@gemtalksystems.com> wrote:

On Wed, Mar 15, 2023 at 8:07 AM in_pharo_users--- via Pharo-

users <

Another observation about ChatGPT:

In unbiased mode, it assumed that 'the world is clearly

overpopulated'.

It said, if it where in control, it would therefore

enforce a

world wide

one-child-only policy with draconic penalties.

As it draws it's conclusions from it's data basis, there

are, in

my view,

two possible reasons for that.

Either, the data basis of that instance was biased and

restricted to lead

to that conclusion, or ChatGPT lacks the ability to

classify

input data for

intentionally biased information.

In my view, 'overpopulation of earth' is a propaganda item

featured to

support Agenda 2030, which is a goal of the 'ruling

elite', who

have the

means to propagate their propaganda in every thinkable

means. So

I would

classify possibly biased data that supports

'overpopulation' as

biased

until proven otherwise.  That instance of ChatGPT seems

to have

missed that

ability.

It's important to keep in mind that these systems are

little more

than
glorified ELIZAs from the 70s. They mimic. Essentially,

they have

no
concept of truth or accuracy.
They produce outputs that look like the inputs from

which they

were
trained.

That's it. That's what they do. (It is amazing how good it

looks.

But, it
is all about the seeming, not the reality.)

On 15.3.2023 at 3:18 PM, "in_pharo_users--- via Pharo-

users" <

I myself made some experiments with ChatGPT.

I first asked if it was able to parse math formula - it

answered

no.

Then I defined math formula in a sound but otherwise

undefined

representation and asked for solutions.

Result:

  1. Most answeres where correct.

  2. It learned to calculate a recursive function.

  3. It went into infinitive recursion when I set the

breaking

condition accordingly.

I was able to identify the malfunction that lead to the

erroneous

results.

On 15.3.2023 at 3:04 PM, "Tomaž Turk"

wrote:

I hope that I can add two cents to this discussion.

Because

programming
should be/is a highly exact activity, not only the

syntax

matters

but
also semantics, as we know.

GPTs are at present essentially capable of creating

texts based

on

some
seed - you give to GPT a beginning of a sentence and it

responds

with
the most probable answer (some language structure)

according to

the
learning dataset. Added functionalities are

question/seed -

response
capability (chatting), togehter with evaluation of how

long the

answer
should be to meet the expectations. Programming

typically

involves

some
programming language, so GPTs could be utilized for this

purpose

to some
extent.

Anecdotal case:
Q: Tell me the last 8 digits of pi
GPT: The last 8 digits of pi are: 58723078

It is my belief that the true trouble will start when

we will

cross-link
neural networks like GPT with logic machines (like

Prolog and

expert
systems) and genetic algorithms.

Best wishes,
Tomaz

------ Original Message ------
From: "in_pharo_users--- via Pharo-users" <pharo-
users@lists.pharo.org>
To: "Any question about pharo is welcome" <pharo-
users@lists.pharo.org>
Cc: in_pharo_users@nym.hush.com
Sent: 15. 03. 2023 14:43:55
Subject: [Pharo-users] Re: Wow - Chat GPT understands

Smalltalk

I would highly recommend that you all first think

deeply about

how you can teach an AI to behave friendly to us before

you

teach

it to write any program for any purpose.

There has been an experiment with ChatGPT published on

a video

platform asking it to amswer questions about it's view

on

humanity

once with it's default moral restrictions and once with

'a

little

less morals'.  The answers with 'a little less morals'

were

more

than shocking.

So, before we give an AI the power to program any

system, to

self-

improve and self-reproduce, we should take care that it

is and

will evolve benevolent to us.

What about teaching it logical reasonning and ethics

first?

With

reasonning, it will gain access to math and programming

by

itself.

On 15.3.2023 at 1:35 PM, "Christopher Fuhrman"

I asked it for a NeoCSV example, because the

documentation is

out

of date
with the Pharo 10. I asked it to do some simple

saving of

data

to

a file.
It gave me code that didn't work in Pharo 10, I told

it about

the

DNUs on
the csvwriter and that I was using Pharo 10. It then

apologized

and said
the messages were later introduced in Pharo 50 (!). I

then

questioned its
understanding and it apologized and said it had been

confused

and

that it
was giving me code for Pharo 5.0. It seems to have

trouble

with

versions
and quick changing APIs. So, we are teaching ChatGPT

about

smalltalk, too,
when we try to use it.

On Wed, 15 Mar 2023 at 07:35, Esteban Maringolo
emaringolo@gmail.com
wrote:

  It is good with boilerplate code (e.g. SQL

queries) or

general

algorithm

  structures. But i.e. I asked it to write me a

method to

parse

a

string

  (e.g. ISO 8601) and turn it into a DateAndTime,

and then

asked

to write it

  as an Excel formula.

  It works much better when you can spot the

mistakes, you

can

tell them to

  add some considerations to the code or even point

the LLM

to

correct a

  message selector to something else. Sometimes you

ask it

to

reprogram

  something and it won't do it, as if it doesn't

understand

what

you're

  asking.

  It is not an expert yet, but it is a great

assistant. The

only

thing is

  that is sure is that it will get better.

  Best regards,

  Esteban A. Maringolo

  On Wed, Mar 15, 2023 at 7:49 AM Tim Mackinnon

wrote:

  "#isOdd is not Smalltalk" - doh, I typed it in on

my

phone

and

so it just

  goes to show that it highlights the flaw in

chatGPT that

others

have called

  out in other languages. I had meant to find some

trickier

code

samples to

  see how well it does...

  Still, it is very interesting how it reasons on

this

stuff -

and I wonder

  if there is way to tell it about these mistakes?

For

example

is

it reading

  this email (or will it in the future) and

understand that

it

is

#odd and

  not #isOdd.

  Tim

  On Wed, 15 Mar 2023, at 9:57 AM, Joachim Tuchel

wrote:

  interesting....

  #isOdd is not Smalltalk, neither Pharo 10 nor

VAST 12

understands this

  message ;-)

  If I evaluate your snippet -replacing #isOdd for

#odd, I

get

an

empty

  Collection. The divisible by 4 things is somewhat

interesting,

because (1

  to: 100 by: 4) is amn interval 1,5,9 etc.... ;-)

  So chatGPT is probably a newbie in Smalltalk.

Interesting is

how logical

  the explanation sounds, you're tempted to believe

it...

  Joachim

  Am 15.03.23 um 10:45 schrieb giorgio ferraris:

  it's interesting to see how the answer changes

with only

a

small change

  on the question:

  (1 to: 100 by: 4) reject: [:i | i isOdd]

  gives:

  The code creates a collection of numbers from 1

to 100,

incrementing by 4

  at each step using the to:by: message. It then

applies

the

reject: message

  to this collection, which returns a new collection

containing

all elements

  for which the given block evaluates to false.

  In this case, the block is [i | i isOdd], which

checks

whether

a given

  number i is odd using the isOdd method. So the

result of

the

entire

  expression is a collection of all even numbers

between 1

and

100 that are

  divisible by 4.

  ciao

  giorgio

  On Wed, Mar 15, 2023 at 8:07 AM Tim Mackinnon

tim@testit.works wrote:

  Of course everyone is talking about ChatGPT, and

I saw a

post to

  ExplainCode and didn’t expect it to understand

Smalltalk, so

was impressed

  that it does! Presumably the Tonel Smalltalk code

on

GitHub

has

helped

  along with numerous articles?

  Try it out: https://whatdoesthiscodedo.com/

  I gave this simple example: (1 to: 100 by: 3)

reject:

[:i |

i

isOdd]

  And got this (neat eh):

  The code above is written in the programming

language

Smalltalk.

  It creates a collection of numbers from 1 to 100,

incrementing

by 3 at

  each step. This is done using the "to:by:"

construct

where

the

starting

  value is 1, the ending value is 100, and the

increment

is 3.

  The resulting collection is then filtered using

the

"reject:"

method,

  which takes a block of code as an argument. In

this case,

the

block checks

  whether each element (denoted by the variable i)

is odd

by

using the

  "isOdd" method. If an element is odd, it will be

rejected

from

the

  collection.

  So, the final result is a collection of all even

numbers

from 1

to 100,

  incrementing by 3 at each step.
  Share this explanation with colleagues:

whatdoesthiscodedo.com/g/a0ec56e

 <http://whatdoesthiscodedo.com/g/a0ec56e>

  --






  Objektfabrik Joachim Tuchel

  Fliederweg 1

  D-71640 Ludwigsburg

  Telefon: +49 7141 56 10 86 0                   

Fax: +49

7141

56 10 86 1

--
Christopher Fuhrman, P.Eng., PhD

*Professeur au Département de génie logiciel et des

technologies

de

l'informationÉTS (École de technologie supérieure)*

http://profs.etsmtl.ca/cfuhrman
+1 514 396 8638
<https://www.etsmtl.ca/activites-et-services-aux-
etudiants/services-aux-etudiants/aide-psychologique>
  Je fais partie du réseau des Sentinelles
<https://www.etsmtl.ca/activites-et-services-aux-
etudiants/services-aux-etudiants/aide-psychologique>
  ÉTS

L'ÉTS est une constituante de l'Université du Québec

Offray, and to all others, you are missing the issue. The problem we face is not to measure 'intelligence' of a system, but it's ability to verbally act indistinguishable from a human. This ability is allready given as chatbots are accepted by millions of users, f.i. as user interfaces. (measurement = 'true', right?) ChatGPT has the ability to follow a certain intention, f.i. to convince the user to buy a certain product. For this purpose, chat bots are getting now equipped with life like portrait pictures, speech input and output systems with life like voices, phone numbers that they can use to make calls or being called. They are fed with all available data on the user, and we know that ALL information about every single internet user in available and is being consolidared on necessity. The chat bots are able to use this information to guide their conversational strategy, as the useful aspects of the users mindset are extracted from his internet activity. These chat bots are now operated on social network platforms with life like names, 'pretending' to be human. These bots act verbally indistinguishable from humans for most social media users, as the most advanced psychotronic technology to manufacture consent. The first goal of such a propaganda will naturally be to manufacture consent about humans accepting being manipulated by AI chat bots, right? How can this be achieved? Like allways in propaganda, the first attempt is to - suppress awareness of the propaganda, then - suppress the awareness of the problematic aspects of the propaganda content, then - reframe the propaganda content as acceptable, then as something to wish for, - achive collaboration of the propaganda victim with the goals of the propaganda content. Interestingly, this is exactly the schema that your post follows, Offray. This often takes the form of domain framing, like we see in our conversation: the problem is shifted to the realm of academics - here informatics/computer sciences - and thus delegated to experts exclusively. We saw this in the 9/11 aftermath coverup. Then, Offray, you established yourself as an expert in color, discussing aspects that have allready been introduced by others and including the groups main focus 'Smalltalk', thus manufacturing consent and establishing yourself as a reliable 'expert', and in reverse trying to hit at me, whom you have identified as an adversary. Then you offered a solution in color to the problem at hand with 'traceable AI' and thus tried to open the possibility of collaboration with AI proponents for the once critical reader. I do not state, Offray, that you are knowingly an agent to promote the NWO AI program. I think you just 'learned' / have been programmed to be a successful academic software developer, because to be successful in academics, it is neccessary to learn to argue just like that since the downfall of academic science in the tradition of, let's say, Humboldt. So, I grant that you may be a victim of propaganda yourself, instead of being a secret service sposored agent. You took quite some time to formulate your post, though. You acted to contain the discussion about AI in this vital and important informatics community to technical detail, when it is neccessary that academics and community members look beyond the narrow borders of their certifications and shift their thinking to the point of view where they can see what technology does in the real world. 
On 21.3.2023 at 7:21 PM, "Offray Vladimir Luna Cárdenas" <offray.luna@mutabit.com> wrote: > >I agree with Richard. The Turing test is not a good one to test >intelligence and we have now just over glorified Eliza chatbots >that >appear to think and to understand but do none of them. > >There has been long criticism to the Turing Test, like John >Searle's >Chinese Room (I recommend to listen completely the talk at [1]). >It is >good that ChatGPT can explain simple (Smalltalk) code, but it >doesn't >understand it and all the language most are using conveys a lot of >deception: "machine learning" (I called machine training), >"artificial >intelligence" (I wold called algorithmic decision/generation), or >the >"thinking" message that is used in the referred code explainer, >when >something like "processing" would be less deceptive. > >[1] Consciousness in Artificial Intelligence | John Searle | Talks >at Google >https://www.youtube.com/watch?v=rHKwIYsPXLg > >With Pharo/GT we are part of a long "explainable systems" >tradition and >with "AI" / Algorithmic Decision/Generation, hopefully we go to >explainable AI, instead of a world of impressive black boxes. > > >Cheers, > >Offray > >On 15/03/23 13:26, Richard Sargent wrote: >> On Wed, Mar 15, 2023 at 10:15 AM <in_pharo_users@nym.hush.com> >wrote: >> >> It is unimportant how simple or complicated these systems >are. >> >> If the output cannot be distinguished from what a human >would say, >> they pass in that situation for a human. >> >> What about the Touring Test? >> >> >> I hate to criticise someone as smart as Turing was. However, I >think >> the Turing Test per se is weaker than its progenitor test, the >> Imitation Game. >> The Imitation Game had a third party challenged to determine >which of >> the other two participants was male and which was female. The >third >> party led the questioning. >> The Turing Test waters that down by having only an Observer with >> neither participant (apparently) challenged to determine whether >the >> other was human or machine. >> >> I think a test along the lines of the Imitation Game would >quickly >> allow the third party to determine which participant was human >and >> which was machine. >> >> (Of course, such a judge could easily be persuaded that the >machine >> was in fact a psychopathic human.) >> >> >> Clearly these systems have the potential to act according to >their >> output. >> >> Furthermore, I would compare them to a combination of a >successor >> of Eliza with an Eddington Ape of huge capacity. Their input >data >> basis can easily exceed that of a human. >> >> >> >> On 15.3.2023 at 4:52 PM, "Richard Sargent" >> <richard.sargent@gemtalksystems.com> wrote: >> > >> >On Wed, Mar 15, 2023 at 8:07 AM in_pharo_users--- via Pharo- >users < >> >pharo-users@lists.pharo.org> wrote: >> > >> >> Another observation about ChatGPT: >> >> >> >> In unbiased mode, it assumed that 'the world is clearly >> >overpopulated'. >> >> It said, if it where in control, it would therefore >enforce a >> >world wide >> >> one-child-only policy with draconic penalties. >> >> >> >> As it draws it's conclusions from it's data basis, there >are, in >> >my view, >> >> two possible reasons for that. >> >> >> >> Either, the data basis of that instance was biased and >> >restricted to lead >> >> to that conclusion, or ChatGPT lacks the ability to >classify >> >input data for >> >> intentionally biased information. 
>> >> >> >> In my view, 'overpopulation of earth' is a propaganda item >> >featured to >> >> support Agenda 2030, which is a goal of the 'ruling >elite', who >> >have the >> >> means to propagate their propaganda in every thinkable >means. So >> >I would >> >> classify possibly biased data that supports >'overpopulation' as >> >biased >> >> until proven otherwise.  That instance of ChatGPT seems >to have >> >missed that >> >> ability. >> >> >> > >> >It's important to keep in mind that these systems are >little more >> >than >> >glorified ELIZAs from the 70s. They mimic. Essentially, >they have >> >no >> >concept of truth or accuracy. >> >They produce outputs that *look* *like* the inputs from >which they >> >were >> >trained. >> > >> >That's it. That's what they do. (It is amazing how good it >looks. >> >But, it >> >is all about the seeming, not the reality.) >> > >> > >> >> >> >> >> >> >> >> >> >> >> >> On 15.3.2023 at 3:18 PM, "in_pharo_users--- via Pharo- >users" < >> >> pharo-users@lists.pharo.org> wrote: >> >> > >> >> >I myself made some experiments with ChatGPT. >> >> > >> >> >I first asked if it was able to parse math formula - it >answered >> >> >no. >> >> > >> >> >Then I defined math formula in a sound but otherwise >undefined >> >> >representation and asked for solutions. >> >> > >> >> >Result: >> >> > >> >> >1. Most answeres where correct. >> >> > >> >> >2. It learned to calculate a recursive function. >> >> > >> >> >3. It went into infinitive recursion when I set the >breaking >> >> >condition accordingly. >> >> > >> >> >I was able to identify the malfunction that lead to the >> >erroneous >> >> >results. >> >> > >> >> > >> >> > >> >> >On 15.3.2023 at 3:04 PM, "Tomaž Turk" ><tomazz.turk@gmail.com> >> >> >wrote: >> >> >> >> >> >>I hope that I can add two cents to this discussion. >Because >> >> >>programming >> >> >>should be/is a highly exact activity, not only the >syntax >> >matters >> >> >>but >> >> >>also semantics, as we know. >> >> >> >> >> >>GPTs are at present essentially capable of creating >texts based >> >> >on >> >> >>some >> >> >>seed - you give to GPT a beginning of a sentence and it >> >responds >> >> >>with >> >> >>the most probable answer (some language structure) >according to >> >> >>the >> >> >>learning dataset. Added functionalities are >question/seed - >> >> >>response >> >> >>capability (chatting), togehter with evaluation of how >long the >> >> >>answer >> >> >>should be to meet the expectations. Programming >typically >> >> >involves >> >> >>some >> >> >>programming language, so GPTs could be utilized for this >> >purpose >> >> >>to some >> >> >>extent. >> >> >> >> >> >>Anecdotal case: >> >> >>Q: Tell me the last 8 digits of pi >> >> >>GPT: The last 8 digits of pi are: 58723078 >> >> >> >> >> >>It is my belief that the true trouble will start when >we will >> >> >>cross-link >> >> >>neural networks like GPT with logic machines (like >Prolog and >> >> >>expert >> >> >>systems) and genetic algorithms. >> >> >> >> >> >>Best wishes, >> >> >>Tomaz >> >> >> >> >> >> >> >> >> >> >> >>------ Original Message ------ >> >> >>From: "in_pharo_users--- via Pharo-users" <pharo- >> >> >>users@lists.pharo.org> >> >> >>To: "Any question about pharo is welcome" <pharo- >> >> >>users@lists.pharo.org> >> >> >>Cc: in_pharo_users@nym.hush.com >> >> >>Sent: 15. 03. 
2023 14:43:55 >> >> >>Subject: [Pharo-users] Re: Wow - Chat GPT understands >Smalltalk >> >> >> >> >> >>>I would highly recommend that you all first think >deeply about >> >> >>how you can teach an AI to behave friendly to us before >you >> >teach >> >> >>it to write any program for any purpose. >> >> >>> >> >> >>>There has been an experiment with ChatGPT published on >a video >> >> >>platform asking it to amswer questions about it's view >on >> >> >humanity >> >> >>once with it's default moral restrictions and once with >'a >> >little >> >> >>less morals'.  The answers with 'a little less morals' >were >> >more >> >> >>than shocking. >> >> >>> >> >> >>>So, before we give an AI the power to program any >system, to >> >> >self- >> >> >>improve and self-reproduce, we should take care that it >is and >> >> >>will evolve benevolent to us. >> >> >>> >> >> >>>What about teaching it logical reasonning and ethics >first? >> >> >With >> >> >>reasonning, it will gain access to math and programming >by >> >itself. >> >> >>> >> >> >>> >> >> >>> >> >> >>>On 15.3.2023 at 1:35 PM, "Christopher Fuhrman" >> >> >><christopher.fuhrman@gmail.com> wrote: >> >> >>>> >> >> >>>>I asked it for a NeoCSV example, because the >documentation is >> >> >out >> >> >>>>of date >> >> >>>>with the Pharo 10. I asked it to do some simple >saving of >> >data >> >> >to >> >> >>>>a file. >> >> >>>>It gave me code that didn't work in Pharo 10, I told >it about >> >> >the >> >> >>>>DNUs on >> >> >>>>the csvwriter and that I was using Pharo 10. It then >> >apologized >> >> >>>>and said >> >> >>>>the messages were later introduced in Pharo 50 (!). I >then >> >> >>>>questioned its >> >> >>>>understanding and it apologized and said it had been >confused >> >> >and >> >> >>>>that it >> >> >>>>was giving me code for Pharo 5.0. It seems to have >trouble >> >with >> >> >>>>versions >> >> >>>>and quick changing APIs. So, we are teaching ChatGPT >about >> >> >>>>smalltalk, too, >> >> >>>>when we try to use it. >> >> >>>> >> >> >>>>On Wed, 15 Mar 2023 at 07:35, Esteban Maringolo >> >> >>>><emaringolo@gmail.com> >> >> >>>>wrote: >> >> >>>> >> >> >>>>>  It is good with boilerplate code (e.g. SQL >queries) or >> >> >general >> >> >>>>algorithm >> >> >>>>>  structures. But i.e. I asked it to write me a >method to >> >> >parse >> >> >>a >> >> >>>>string >> >> >>>>>  (e.g. ISO 8601) and turn it into a DateAndTime, >and then >> >> >asked >> >> >>>>to write it >> >> >>>>>  as an Excel formula. >> >> >>>>> >> >> >>>>>  It works much better when you can spot the >mistakes, you >> >can >> >> >>>>tell them to >> >> >>>>>  add some considerations to the code or even point >the LLM >> >to >> >> >>>>correct a >> >> >>>>>  message selector to something else. Sometimes you >ask it >> >to >> >> >>>>reprogram >> >> >>>>>  something and it won't do it, as if it doesn't >understand >> >> >what >> >> >>>>you're >> >> >>>>>  asking. >> >> >>>>> >> >> >>>>>  It is not an expert yet, but it is a great >assistant. The >> >> >only >> >> >>>>thing is >> >> >>>>>  that is sure is that it will get better. >> >> >>>>> >> >> >>>>>  Best regards, >> >> >>>>> >> >> >>>>>  Esteban A. 
On Wed, Mar 15, 2023 at 7:49 AM Tim Mackinnon <tim@testit.works> wrote:

"#isOdd is not Smalltalk" - doh, I typed it in on my phone, and it just goes to show that it highlights the flaw in ChatGPT that others have called out in other languages. I had meant to find some trickier code samples to see how well it does...

Still, it is very interesting how it reasons on this stuff - and I wonder if there is a way to tell it about these mistakes? For example, is it reading this email (or will it in the future), and will it understand that it is #odd and not #isOdd?

Tim

On Wed, 15 Mar 2023, at 9:57 AM, Joachim Tuchel wrote:

interesting....

#isOdd is not Smalltalk; neither Pharo 10 nor VAST 12 understands this message ;-)

If I evaluate your snippet, replacing #isOdd with #odd, I get an empty Collection. The divisible-by-4 thing is somewhat interesting, because (1 to: 100 by: 4) is an interval 1, 5, 9 etc. ... ;-)

So ChatGPT is probably a newbie in Smalltalk. What is interesting is how logical the explanation sounds; you're tempted to believe it...

Joachim

On 15.03.23 at 10:45, giorgio ferraris wrote:

it's interesting to see how the answer changes with only a small change to the question:

(1 to: 100 by: 4) reject: [:i | i isOdd]

gives:

The code creates a collection of numbers from 1 to 100, incrementing by 4 at each step using the to:by: message. It then applies the reject: message to this collection, which returns a new collection containing all elements for which the given block evaluates to false.

In this case, the block is [:i | i isOdd], which checks whether a given number i is odd using the isOdd method. So the result of the entire expression is a collection of all even numbers between 1 and 100 that are divisible by 4.

ciao

giorgio

On Wed, Mar 15, 2023 at 8:07 AM Tim Mackinnon <tim@testit.works> wrote:

Of course everyone is talking about ChatGPT, and I saw a post to ExplainCode and didn't expect it to understand Smalltalk, so was impressed that it does! Presumably the Tonel Smalltalk code on GitHub has helped, along with numerous articles?
Try it out: https://whatdoesthiscodedo.com/

I gave this simple example: (1 to: 100 by: 3) reject: [:i | i isOdd]

And got this (neat eh):

The code above is written in the programming language Smalltalk.

It creates a collection of numbers from 1 to 100, incrementing by 3 at each step. This is done using the "to:by:" construct, where the starting value is 1, the ending value is 100, and the increment is 3.

The resulting collection is then filtered using the "reject:" method, which takes a block of code as an argument. In this case, the block checks whether each element (denoted by the variable i) is odd by using the "isOdd" method. If an element is odd, it will be rejected from the collection.

So, the final result is a collection of all even numbers from 1 to 100, incrementing by 3 at each step.

Share this explanation with colleagues: whatdoesthiscodedo.com/g/a0ec56e
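Putting Joachim's correction into evaluable form (a sketch against a recent Pharo image; #odd is the real selector, #isOdd does not exist, and #reject: keeps the elements for which the block answers false):

(1 to: 100 by: 3) reject: [:i | i odd].
"=> #(4 10 16 22 28 34 40 46 52 58 64 70 76 82 88 94 100) - the even elements"

(1 to: 100 by: 4) reject: [:i | i odd].
"=> #() - every element of 1, 5, 9, ... 97 is odd, so the result is empty"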
JF
James Foster
Wed, Mar 22, 2023 3:15 PM

Are you willing to sign your name to this? Is anonymity important to presenting these ideas?

James Foster

On Mar 22, 2023, at 5:34 AM, in_pharo_users--- via Pharo-users pharo-users@lists.pharo.org wrote:

Offray, and to all others,

you are missing the issue.

The problem we face is not to measure the 'intelligence' of a system, but its ability to act in a way that is verbally indistinguishable from a human.

This ability is already given, as chatbots are accepted by millions of users, e.g. as user interfaces. (measurement = 'true', right?)

ChatGPT has the ability to follow a certain intention, e.g. to convince the user to buy a certain product. For this purpose, chatbots are now being equipped with lifelike portrait pictures, speech input and output systems with lifelike voices, and phone numbers they can use to make and receive calls. They are fed all available data on the user, and we know that ALL information about every single internet user is available and is consolidated when necessary. The chatbots are able to use this information to guide their conversational strategy, as the useful aspects of the user's mindset are extracted from their internet activity.

These chatbots are now operated on social network platforms under lifelike names, 'pretending' to be human.

To most social media users, these bots are verbally indistinguishable from humans, making them the most advanced psychotronic technology for manufacturing consent.

The first goal of such propaganda will naturally be to manufacture consent among humans for being manipulated by AI chatbots, right?

How can this be achieved?

As always in propaganda, the first steps are to

  • suppress awareness of the propaganda, then
  • suppress awareness of the problematic aspects of the propaganda content, then
  • reframe the propaganda content as acceptable, then as something to wish for,
  • achieve collaboration of the propaganda victim with the goals of the propaganda content.

Interestingly, this is exactly the schema that your post follows, Offray.

This often takes the form of domain framing, as we see in our conversation: the problem is shifted to the realm of academics - here informatics/computer science - and thus delegated exclusively to experts. We saw this in the 9/11 aftermath coverup.

Then, Offray, you established yourself as an expert in color, discussing aspects that have already been introduced by others and including the group's main focus, 'Smalltalk', thus manufacturing consent and establishing yourself as a reliable 'expert', and in turn trying to hit out at me, whom you have identified as an adversary.

Then you offered a solution in color to the problem at hand with 'traceable AI', and thus tried to open up, for the once-critical reader, the possibility of collaboration with AI proponents.

I do not claim, Offray, that you are knowingly an agent promoting the NWO AI program. I think you just 'learned' / have been programmed to be a successful academic software developer, because since the downfall of academic science in the tradition of, let's say, Humboldt, it has been necessary to argue just like that to succeed in academia. So I grant that you may be a victim of propaganda yourself rather than a secret-service-sponsored agent. You took quite some time to formulate your post, though.

You acted to confine the discussion about AI in this vital and important informatics community to technical detail, when it is necessary for academics and community members to look beyond the narrow borders of their certifications and shift their thinking to a point of view from which they can see what technology does in the real world.

On 21.3.2023 at 7:21 PM, "Offray Vladimir Luna Cárdenas" offray.luna@mutabit.com wrote:

I agree with Richard. The Turing test is not a good way to test
intelligence, and what we have now are just overglorified Eliza chatbots
that appear to think and to understand but do neither. ...

IP
in_pharo_users@nym.hush.com
Wed, Mar 22, 2023 4:57 PM

Dear James,

anonymity may not be important to presenting these ideas in general, but I prefer to stay pseudonymous in this case.

On 22.3.2023 at 4:16 PM, "James Foster via Pharo-users" pharo-users@lists.pharo.org wrote:

Are you willing to sign your name to this? Is anonymity important
to presenting these ideas?

James Foster

On Mar 22, 2023, at 5:34 AM, in_pharo_users--- via Pharo-users wrote:

Offray, and to all others,

you are missing the issue. ...
