
Anthropic announces iPhone AI app and business plan to compete with OpenAI


The new enterprise plan, called Team, has been in development for the past few quarters and has involved beta testing with between 30 and 50 customers in sectors including technology, financial services, legal services and healthcare, Anthropic co-founder Daniela Amodei told CNBC in an interview. She added that the idea for the service grew partly out of many of those same customers asking for a dedicated enterprise product.

“A lot of what we hear from companies is that people are already using Claude in the office,” Amodei said.

The Team plan offers access to all three of Anthropic’s latest Claude models, with increased usage limits, administrative tools and billing management, as well as a longer “context window,” meaning businesses can have “multi-step conversations” and upload long documents such as research papers and legal contracts for processing, according to Anthropic. Other upcoming features include “citations from trusted sources to verify AI-generated claims,” the company said in its release.

The Team plan costs $30 per user per month when billed monthly and requires a minimum of five users.

Anthropic’s first iOS app, also available starting Wednesday, is free for users on all plans. It syncs with web chats and lets users upload photos and files from a smartphone.

There are also plans to launch an Android app. “We actually just hired our first Android engineer, so we’re actively working on the Android app,” Amodei told CNBC, adding that the engineer starts next week.

News of the Team plan and iOS app comes more than a month after Anthropic debuted Claude 3, a suite of AI models that it says are its fastest and most powerful yet. The new tools are called Claude 3 Opus, Sonnet and Haiku.

The company said the most capable of the new models, Claude 3 Opus, outperformed OpenAI’s GPT-4 and Google’s Gemini Ultra in industry benchmark tests such as undergraduate knowledge, graduate reasoning and basic math. This is also the first time Anthropic has offered multimodal support: users can upload photos, graphs, documents and other types of unstructured data for analysis and responses.
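To make the multimodal claim concrete, here is a minimal sketch of sending an image to Claude 3 through Anthropic’s Python SDK. The file name and prompt are placeholders, and model availability and version strings vary by account, so treat this as illustrative rather than official.

```python
import base64

import anthropic  # pip install anthropic

# Assumes ANTHROPIC_API_KEY is set in the environment.
client = anthropic.Anthropic()

# Read a local chart image (placeholder path) and base64-encode it,
# which is the format the Messages API expects for image input.
with open("chart.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

message = client.messages.create(
    model="claude-3-opus-20240229",  # the most capable Claude 3 model
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/png",
                        "data": image_b64,
                    },
                },
                {"type": "text", "text": "What trend does this chart show?"},
            ],
        }
    ],
)

print(message.content[0].text)
```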

The other models, Sonnet and Haiku, are more compact and cheaper than Opus. The company declined to specify how long it took to train Claude 3 or how much the training cost, but said companies like Airtable and Asana helped with A/B testing the models. In a statement on Wednesday, Anthropic confirmed that other current Claude customers include Pfizer, Zoom, Perplexity AI and Bridgewater Associates.

The field of generative AI exploded last year, with a record $29.1 billion invested across nearly 700 deals in 2023, a more than 260% increase in deal value over the previous year, according to PitchBook. Generative AI has become one of the most talked-about phrases on corporate earnings calls quarter after quarter. Academics and ethicists have raised significant concerns about the technology’s tendency to propagate bias, but it has nevertheless quickly made its way into schools, online travel, the medical industry, online advertising and more.

Around this time last year, Anthropic had completed its Series A and B funding rounds, but had only launched the first version of its chatbot without any consumer access or major fanfare. Now, it is one of the most important AI startups, with a product that directly competes with ChatGPT in both the enterprise and consumer worlds.

Claude 3 can summarize up to about 150,000 words, a sizable book, roughly the length of “Moby Dick” or “Harry Potter and the Deathly Hallows.” Its previous version could summarize only about 75,000 words. Users can input large data sets and request summaries in the form of a memo, letter or story. ChatGPT, by comparison, can handle around 3,000 words.
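Those word counts track the models’ published context windows, which are measured in tokens rather than words. Assuming a rough average of 0.75 English words per token, the back-of-the-envelope conversion below reproduces the figures quoted above; the per-token ratio is an approximation, not a vendor specification.

```python
# Rough token-to-word conversion, assuming ~0.75 English words per token.
WORDS_PER_TOKEN = 0.75

context_windows_tokens = {
    "Claude 3 (200K tokens)": 200_000,
    "Claude 2 (100K tokens)": 100_000,
    "ChatGPT / GPT-3.5 (4K tokens)": 4_096,
}

for name, tokens in context_windows_tokens.items():
    print(f"{name}: ~{tokens * WORDS_PER_TOKEN:,.0f} words")

# Claude 3 (200K tokens): ~150,000 words
# Claude 2 (100K tokens): ~75,000 words
# ChatGPT / GPT-3.5 (4K tokens): ~3,072 words
```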

In January, OpenAI came under fire for quietly walking back its ban on military use of ChatGPT and its other artificial intelligence tools. Its policies now state that users should not “use our service to harm yourself or others,” including to “develop or use weapons.” Before the change, OpenAI’s policy page specified that the company did not allow the use of its models for “activities that pose a high risk of physical harm, including: weapons development [and] military and warfare.”

Anthropic’s position on Claude’s military use is similar to OpenAI’s updated policy.

“The way we draw the line today is we don’t discriminate based on industry or business, but we have an acceptable use policy that says what you can and can’t use Claude for,” Amodei told CNBC, adding, “Any company in the world that is not in a sanctioned country, of course, [and] meets the basic business requirements, you can use Claude for all kinds of back-office applications and things like that, but we have… very strict guidelines about Claude not being used for weapons, basically anything that could cause violence or harm people.”


