Firms and researchers at odds over superhuman AI
Firms and researchers at odds over superhuman AI / Photo: Joe Klamar - AFP/File

Hype is growing from leaders of major AI companies that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.

The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more sceptical.

Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.

LeCun's view appears backed by a majority of academics in the field.

Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

- 'Genie out of the bottle' -

Some academics believe that many of the companies' claims, which bosses have at times flanked with warnings about AGI's dangers for mankind, are a strategy to capture attention.

Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.

"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."

Scepticism among academic researchers is not total, with prominent figures like Nobel-winning physicist Geoffrey Hinton and 2018 Turing Award winner Yoshua Bengio warning about dangers from powerful AI.

"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser".

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth, and ultimately all matter in the universe, into paperclips or paperclip-making machines -- having first got rid of the human beings it judged might hinder its progress by switching it off.

While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.

Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

- 'Biggest thing ever' -

The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University.

"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.

Even if Altman and Amodei prove "quite optimistic" about rapid timescales and AGI emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it".

The challenge can lie in communicating these ideas to politicians and the public.

Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.

Y.Havel--TPP