The Prague Post - Firms and researchers at odds over superhuman AI


Firms and researchers at odds over superhuman AI
Photo: Joe Klamar - AFP/File

Hype is growing from leaders of major AI companies that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.

The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more sceptical.

Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.

LeCun's view appears backed by a majority of academics in the field.

Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

- 'Genie out of the bottle' -

Some academics believe that many of the companies' claims, which bosses have at times flanked with warnings about AGI's dangers for mankind, are a strategy to capture attention.

Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.

"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."

Scepticism among academic researchers is not total, with prominent figures like Nobel-winning physicist Geoffrey Hinton or 2018 Turing Prize winner Yoshua Bengio warning about dangers from powerful AI.

"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser".

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth and ultimately all matter in the universe into paperclips or paperclip-making machines -- having first got rid of human beings that it judged might hinder its progress by switching it off.

While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.

Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

- 'Biggest thing ever' -

The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University.

"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.

Even if Altman and Amodei are being "quite optimistic" about rapid timescales and AGI emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it".

The challenge can lie in communicating these ideas to politicians and the public.

Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.

Y.Havel--TPP