Nvidia moves into A.I. services and ChatGPT can now use your credit card

David Paul Morris—Bloomberg via Getty Images

It’s been another head-spinning week in A.I. news. Where to start? Bill Gates saying A.I. is as important as the invention of the microprocessor? Nah, I’m going to begin with Nvidia’s GTC conference, but I want to encourage you all to read the Eye on A.I. Research section (which comes after the news items), where I will tell you about my own experiment with GPT-4 and why I think it indicates we are not glimpsing “the spark of AGI” (artificial general intelligence) as a group of Microsoft computer scientists controversially claimed last week.

Now on to Nvidia. The chipmaker, whose specialized graphics processing units have become the workhorses for most A.I. computing, held its annual developers' conference, much of which was focused on A.I., and made a slew of big announcements:

- Its next generation of DGX A.I. supercomputers, powered by linked clusters of its H100 GPUs, is now in full production and being made available to major cloud providers and other customers. Each H100 has a built-in “Transformer Engine” for running the Transformer-based large models that underpin generative A.I. The company says the H100 offers nine times faster training and 30 times faster inference than its previous generation of A100 GPUs, which were themselves considered the best in the field for A.I. performance.

- The company has also started offering its own Nvidia DGX Cloud, built on H100 GPUs, through several of the same cloud providers, starting with Oracle, and then expanding to Microsoft Azure and Google Cloud. This will allow any company to access A.I. supercomputing resources and software to train their own A.I. models from any desktop browser. The DGX Cloud comes with all those H100s configured and hooked up with Nvidia’s own networking equipment.

- Meanwhile, the company announced a separate tie-up with Amazon’s AWS that will see its H100s power new AWS EC2 clusters that can grow to include up to 20,000 GPUs. These will be configured using networking solutions developed by AWS itself, which allows AWS to offer huge systems at potentially lower cost than the Nvidia DGX Cloud service.

- The company announced a slate of its own pre-trained A.I. foundation models—for the generation of text (which it calls NeMo) as well as images, 3D rendering, and video (which it calls Picasso)—optimized for its own hardware. It also announced a set of models it calls BioNeMo that it says will help pharmaceutical and biotech companies accelerate drug discovery by generating protein and chemical structures. It announced some important initial business customers for these models too, including Amgen for BioNeMo and Adobe, Shutterstock, and Getty for Picasso. (More on that in a minute.)

- Interestingly, both the DGX Cloud and Nvidia foundation models put the company into direct competition with some of its best customers, including OpenAI, Microsoft, Google, and AWS, all of which are offering companies pre-trained large models of their own and A.I. services in the cloud.

- Long-term, one of the most impactful announcements Nvidia made at GTC may have been cuLitho, a machine learning system that can help design future generations of computer chips while consuming far less power than previous methods. The system will help chipmakers design wafers with 2-nanometer scale transistors, the tiniest size currently on chipmakers’ roadmaps—and possibly even smaller ones.

Ok, now back to some of those initial customers Nvidia announced for Picasso. Adobe, Shutterstock, and Getty licensed their own image libraries to Nvidia to train Picasso—with what Nvidia and the companies say is a method in place to appropriately compensate the photographers who provided photos to those sites. The chipmaker also said it is in favor of a system that would let artists and photographers easily label their works with a text tag that would prevent them from being used to train A.I. image generation technology. This should, in theory, avoid the copyright infringement issues and some of the ethical conundrums looming over other text-to-image A.I. systems, which have made it difficult for companies to use A.I.-generated images for their own commercial purposes. (Getty is currently suing Stability AI for alleged copyright infringement in the creation of the training set for Stable Diffusion, for example.)
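
As a rough sketch of how such an opt-out tag could work on the training side, a data pipeline would simply drop any image whose metadata carries the do-not-train marker before it ever reaches the model. The tag name and metadata layout below are hypothetical; Nvidia has not published a spec.

```python
# Hypothetical opt-out filter: skip any image whose metadata carries a
# do-not-train tag before it is added to the training set.
OPT_OUT_TAG = "noai"  # made-up tag name, for illustration only

images = [
    {"path": "a.jpg", "tags": ["landscape"]},
    {"path": "b.jpg", "tags": ["portrait", "noai"]},  # creator opted out
    {"path": "c.jpg", "tags": []},
]

training_set = [img for img in images if OPT_OUT_TAG not in img["tags"]]
print([img["path"] for img in training_set])  # -> ['a.jpg', 'c.jpg']
```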

But it may not be quite that simple. Some artists, photographers, legal experts, and journalists have questioned whether Adobe’s Stock license really allows the company to use those images to train an A.I. model. And a proposed compensation system has not yet been made public. It’s unclear whether creators will be paid a fixed, flat amount for any image used in model training, on the grounds that each image contributes only a tiny fraction to the final model weights, or whether compensation will vary with each new image the model generates (since the model will draw more heavily on some images in response to a particular prompt). If someone uses an A.I. art system to explicitly ape the style and technique of a particular photographer or artist, one would think that creator would be entitled to more compensation than someone else whose work the model ingested during training but wasn’t central to that particular output. But such a system would be even more technically complicated to manage and very difficult for the artists themselves to audit. So how will this work, and will artists think it is fair? We have no idea.
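
To see why the two payout models differ so much in practice, here is a toy comparison. All figures and the per-output attribution scores are invented; neither scheme has been announced.

```python
# Invented example contrasting two ways training-image contributors might be paid.
contributors = ["photographer_a", "photographer_b", "photographer_c"]

# Scheme 1: a flat, one-time fee per image included in the training set.
flat_fee = 0.50  # dollars per image, made-up figure
flat_payout = {name: flat_fee for name in contributors}

# Scheme 2: usage-weighted. Each generated image carries a royalty pool that is
# split by an (assumed) attribution score estimating how much each training
# image influenced that particular output.
royalty_pool = 0.10  # dollars per generated image, made-up figure
attribution = {"photographer_a": 0.70, "photographer_b": 0.25, "photographer_c": 0.05}
usage_payout = {name: royalty_pool * share for name, share in attribution.items()}

print(flat_payout)   # everyone gets the same amount, once
print(usage_payout)  # varies per output, dominated by the work the model leaned on
```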

Ok, one of the other big pieces of news last week was that OpenAI connected ChatGPT directly to the internet through a bunch of plugins. The initial set includes Expedia’s travel sites (so it can look up and book travel), Wolfram (so it can do complex math reliably, an area where large language models have famously struggled), FiscalNote (so it can access government documents, regulatory decisions, and legal filings in real time), OpenTable (so it can make restaurant reservations), and Klarna (so it can buy stuff for you on credit, which you’ll have to pay for later). OpenAI billed this as a way to make ChatGPT even more useful in the real world and as a way to reduce the chance that it will “hallucinate” (make stuff up) or provide out-of-date information in response to questions. Now it can actually look up answers on the internet.
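
To make the plumbing concrete, here is a minimal sketch of the loop a plugin-enabled chat client runs: instead of answering purely from memory, the model emits a structured call against a plugin's declared API, the client executes it, and the result comes back for the final answer. The registry, the stub model, and the endpoint URLs below are invented for illustration; only the overall shape reflects how OpenAI describes the mechanism, which is driven by an API spec each plugin publishes.

```python
import json

# Hypothetical plugin registry. Real plugins publish a machine-readable spec
# describing their endpoints; here one fake operation per plugin is hard-coded.
PLUGINS = {
    "wolfram": {"url": "https://example.invalid/wolfram/query"},    # placeholder
    "klarna":  {"url": "https://example.invalid/klarna/purchase"},  # placeholder
}

def stub_model(messages):
    """Stand-in for the language model: decides whether to call a plugin.
    The structured-call shape below is assumed purely for illustration."""
    if "integrate" in messages[-1]["content"]:
        return {"plugin_call": {"plugin": "wolfram",
                                "params": {"input": messages[-1]["content"]}}}
    return {"text": "Answering from memory, no plugin needed."}

def run_turn(user_message):
    messages = [{"role": "user", "content": user_message}]
    reply = stub_model(messages)
    if "plugin_call" in reply:
        call = reply["plugin_call"]
        endpoint = PLUGINS[call["plugin"]]["url"]
        # A real client would make this HTTP request and feed the response back
        # to the model so it can answer from live data instead of guessing.
        print(f"Would call {endpoint} with {json.dumps(call['params'])}")
    else:
        print(reply["text"])

run_turn("integrate x**2 from 0 to 1")
```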

I’m not the only one who thinks that while these plugins sound useful, they are also potentially dangerous. Before, ChatGPT’s hallucinations were relatively harmless—they were just words, after all. A human would have to read those words and act on them, or at least copy and paste them into a command prompt, for anything to happen in the real world. In a way, that meant humans were always in the loop. Those humans might not be paying close enough attention, but at least there was a kind of built-in check on the harm these large language models could do. Now, with these new plugins, if ChatGPT hallucinates, are you going to end up with first-class tickets to Rio you didn’t actually want, or a sofa you’ve bought on credit with Klarna? Who will be liable for such accidents? Will Klarna or your credit card company refund you if you claim ChatGPT misinterpreted your instructions or simply hallucinated? Again, it isn’t clear.
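
One obvious mitigation, and the kind of built-in check the paragraph above describes, is to keep a human in the loop: refuse to execute any side-effecting plugin call until a person explicitly confirms it. The wrapper below is purely hypothetical; the action names and dry-run dispatcher are made up, and nothing like it is known to ship with the plugins.

```python
# Hypothetical guardrail: require human confirmation before any plugin call
# that spends money or otherwise commits the user to something irreversible.
SIDE_EFFECTING = {"book_flight", "place_order", "make_reservation"}  # made-up names

def execute_with_confirmation(action: str, args: dict, dispatch) -> str:
    """Run a plugin action, gating irreversible ones behind a human yes/no."""
    if action in SIDE_EFFECTING:
        answer = input(f"Model wants to run {action} with {args}. Proceed? [y/N] ")
        if answer.strip().lower() != "y":
            return "Action cancelled by user."
    return dispatch(action, args)  # `dispatch` is whatever actually calls the plugin

# Example with a dry-run dispatcher that only reports what would have happened.
if __name__ == "__main__":
    print(execute_with_confirmation(
        "place_order",
        {"item": "sofa", "payment": "klarna"},
        dispatch=lambda a, kw: f"(would call plugin: {a} {kw})",
    ))
```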

Dan Hendrycks, director of the Center for AI Safety in Berkeley, California, told me that competitive pressure seems to be driving tech companies creating these powerful A.I. systems to take unwarranted risks. “If we were to ask people in A.I. research a few years ago if hooking up this kind of A.I. to the internet was a good or bad thing, they all would have said, 'Man, we wouldn’t be stupid enough to do that,'” he says. “Well, things have changed.”

Hendrycks says that when he was offering suggestions to the U.S. National Institute of Standards and Technology (NIST) on how it might think about A.I. safety (NIST was in the process of formulating the A.I. framework it released in January), he had recommended that such systems not be allowed to post data to internet servers without human oversight. But with the ChatGPT plugins, OpenAI has “crossed that line. They’ve blown right past it,” he says. He worries this new connectivity makes it much more likely that large language models will be used to create and propagate cyberattacks. And longer-term, he worries OpenAI’s decision sets a dangerous precedent for what will happen when even more sophisticated and potentially dangerous A.I. systems debut.

Hendrycks says OpenAI’s actions show that the tech industry shouldn’t be trusted to refrain from dangerous actions. Government regulation, he says, will almost certainly be required. And there are rumblings that it could be coming. Read on for more of this week’s A.I. news.

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com

This story was originally featured on Fortune.com
