The Prague Post - Firms and researchers at odds over superhuman AI

Firms and researchers at odds over superhuman AI
Firms and researchers at odds over superhuman AI / Photo: Joe Klamar - AFP/File

Hype is growing from leaders of major AI companies that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.

The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more sceptical.

Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.

LeCun's view appears backed by a majority of academics in the field.

Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

- 'Genie out of the bottle' -

Some academics believe that many of the companies' claims, which bosses have at times flanked with warnings about AGI's dangers for mankind, are a strategy to capture attention.

Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.

"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."

Scepticism among academic researchers is not total, with prominent figures like Nobel-winning physicist Geoffrey Hinton or 2018 Turing Prize winner Yoshua Bengio warning about dangers from powerful AI.

"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser".

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth and ultimately all matter in the universe into paperclips or paperclip-making machines -- having first got rid of human beings that it judged might hinder its progress by switching it off.

While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.

Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

- 'Biggest thing ever' -

The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University.

"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.

Even if Altman and Amodei are being "quite optimistic" about rapid timescales and AGI in fact emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it".

The challenge can lie in communicating these ideas to politicians and the public.

Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.

Y.Havel--TPP