The Prague Post - Firms and researchers at odds over superhuman AI

Firms and researchers at odds over superhuman AI
Firms and researchers at odds over superhuman AI / Photo: Joe Klamar - AFP/File

Hype is growing among leaders of major AI companies that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.

The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more sceptical.

Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.

LeCun's view appears backed by a majority of academics in the field.

Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

- 'Genie out of the bottle' -

Some academics believe that many of the companies' claims, which bosses have at times flanked with warnings about AGI's dangers for mankind, are a strategy to capture attention.

Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.

"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."

Scepticism among academic researchers is not total, with prominent figures like Nobel laureate Geoffrey Hinton and 2018 Turing Award winner Yoshua Bengio warning about dangers from powerful AI.

"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser".

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth, and ultimately all matter in the universe, into paperclips or paperclip-making machines -- having first got rid of the human beings it judged might try to hinder its progress by switching it off.

While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.

Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

- 'Biggest thing ever' -

The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University.

"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.

Even if Altman and Amodei are being "quite optimistic" about rapid timescales and AGI emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it".

The challenge can lie in communicating these ideas to politicians and the public.

Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.

Y.Havel--TPP