The Prague Post - Firms and researchers at odds over superhuman AI


Firms and researchers at odds over superhuman AI

Photo: Joe Klamar - AFP/File

Hype is growing from leaders of major AI companies that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.

The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more sceptical.

Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.

LeCun's view appears backed by a majority of academics in the field.

Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

- 'Genie out of the bottle' -

Some academics believe that many of the companies' claims, which bosses have at times flanked with warnings about AGI's dangers for mankind, are a strategy to capture attention.

Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.

"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."

Scepticism among academic researchers is not total, with prominent figures like Nobel-winning physicist Geoffrey Hinton or 2018 Turing Prize winner Yoshua Bengio warning about dangers from powerful AI.

"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser".

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth and ultimately all matter in the universe into paperclips or paperclip-making machines -- having first got rid of human beings that it judged might hinder its progress by switching it off.

While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.

Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

- 'Biggest thing ever' -

The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University.

"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.

Even if Altman and Amodei may be "quite optimistic" about rapid timescales and AGI emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it".

The challenge can lie in communicating these ideas to politicians and the public.

Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.

Y.Havel--TPP