The Prague Post - Firms and researchers at odds over superhuman AI


Firms and researchers at odds over superhuman AI
Firms and researchers at odds over superhuman AI / Photo: Joe Klamar - AFP/File

Hype is growing among leaders of major AI companies, who claim that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.

The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more sceptical.

Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.

LeCun's view appears backed by a majority of academics in the field.

Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

- 'Genie out of the bottle' -

Some academics believe that many of the companies' claims, which their bosses have at times paired with warnings about AGI's dangers for mankind, are a strategy to capture attention.

Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.

"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."

Scepticism among academic researchers is not total, however, with prominent figures such as Nobel physics laureate Geoffrey Hinton and 2018 Turing Award winner Yoshua Bengio warning about the dangers of powerful AI.

"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser".

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth, and ultimately all matter in the universe, into paperclips or paperclip-making machines -- having first got rid of the human beings who, it judged, might hinder its progress by switching it off.

While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.

Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

- 'Biggest thing ever' -

The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University.

"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.

Even if Altman and Amodei are being "quite optimistic" about rapid timescales and AGI emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it".

The challenge can lie in communicating these ideas to politicians and the public.

Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.

Y.Havel--TPP