Anthropic publicly releases AI tool that can take over the user’s mouse cursor

An arms race and a wrecking ball

Competing companies like OpenAI have been working on equivalent tools but have not made them publicly available yet. It’s something of an arms race, as these tools are projected to generate a lot of revenue in a few years if they progress as expected.

There’s a belief that these tools could eventually automate many menial office tasks. They could also be useful for developers, who could use them to “automate repetitive tasks” and streamline laborious QA and optimization work.

That has long been part of Anthropic’s message to investors: Its AI tools could handle large portions of some office jobs more efficiently and affordably than humans can. The public testing of the Computer Use feature is a step toward achieving that goal.

We’re, of course, familiar with the ongoing argument over these kinds of tools between the “it’s just a tool that will make people’s jobs easier” camp and the “it will put people out of work across industries like a wrecking ball” camp. Both of these things could happen to some degree; it’s just a question of what the ratio will be, and that may vary by situation or industry.

There are numerous valid concerns about the widespread deployment of this technology, though. To its credit, Anthropic has tried to anticipate some of these by putting safeguards in from the get-go. The company gave some examples in its blog post:

Our teams have developed classifiers and other methods to flag and mitigate these kinds of abuses. Given the upcoming US elections, we’re on high alert for attempted misuses that could be perceived as undermining public trust in electoral processes. While computer use is not sufficiently advanced or capable of operating at a scale that would present heightened risks relative to existing capabilities, we’ve put in place measures to monitor when Claude is asked to engage in election-related activity, as well as systems for nudging Claude away from activities like generating and posting content on social media, registering web domains, or interacting with government websites.

These safeguards may not be perfect; people may find creative ways to circumvent them, and other unintended consequences or misuses may yet come to light.

Right now, Anthropic is putting Computer Use out there for testing to see what problems arise and to work with developers to improve its capabilities and find positive uses.
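
For developers who want to experiment, the Computer Use beta is exposed through Anthropic’s API rather than shipped as a finished desktop app. The sketch below shows roughly what a request looked like at launch using the official Python SDK; the model string, tool type, and beta flag are the ones documented at release and may have changed since, and the user prompt here is purely illustrative.

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # Ask Claude to drive a virtual desktop via the Computer Use beta tool.
    response = client.beta.messages.create(
        model="claude-3-5-sonnet-20241022",      # launch-era model name
        max_tokens=1024,
        tools=[{
            "type": "computer_20241022",         # beta tool type at release
            "name": "computer",
            "display_width_px": 1024,            # screen size Claude "sees"
            "display_height_px": 768,
            "display_number": 1,
        }],
        messages=[{
            "role": "user",
            "content": "Open the spreadsheet on the desktop and total column B.",
        }],
        betas=["computer-use-2024-10-22"],       # beta flag at release
    )

    # The model replies with tool_use blocks describing screenshots to take,
    # mouse moves, clicks, and keystrokes; the developer's own harness has to
    # execute those actions and feed the results back in follow-up messages.
    print(response.content)

Notably, the API only proposes actions; developers supply the environment (Anthropic’s reference setup runs Claude against a sandboxed virtual machine) that actually moves the cursor and reports back what happened.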
