The Prague Post - Firms and researchers at odds over superhuman AI

Firms and researchers at odds over superhuman AI
Photo: Joe Klamar - AFP/File

Hype is growing from leaders of major AI companies that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.

The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more sceptical.

Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.

LeCun's view appears backed by a majority of academics in the field.

Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

- 'Genie out of the bottle' -

Some academics believe that many of the companies' claims, which bosses have at times flanked with warnings about AGI's dangers for mankind, are a strategy to capture attention.

Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.

"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."

Scepticism among academic researchers is not total, with prominent figures like Nobel-winning physicist Geoffrey Hinton or 2018 Turing Award winner Yoshua Bengio warning about dangers from powerful AI.

"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser".

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth and ultimately all matter in the universe into paperclips or paperclip-making machines -- having first got rid of the human beings who, it judged, might hinder its progress by switching it off.

While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.

Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

- 'Biggest thing ever' -

The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University.

"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.

Even if Altman and Amodei are being "quite optimistic" about rapid timescales and AGI in fact emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it".

The challenge can lie in communicating these ideas to politicians and the public.

Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.

Y.Havel--TPP