The federal government and Anthropic have been at odds for weeks as they try to hammer out an agreement on how the military can use Claude, Anthropic’s AI model. Anthropic CEO Dario Amodei has been firm that he will not allow the Pentagon to use Claude for mass surveillance of Americans or to create autonomous weapons, such as pilotless drones.
In the months since, I continued my real-life work as a Data Scientist while keeping up to date on the latest LLMs popping up on OpenRouter. In August, Google announced the release of Nano Banana, their generative image AI, along with a corresponding API that’s difficult to use, so I open-sourced gemimg, a Python package that serves as an API wrapper. It’s not a thrilling project: there’s little room or need for creative implementation, and my satisfaction came from what the tool enabled rather than from writing it. Therefore, as an experiment, I plopped the feature-complete code into various up-and-coming LLMs on OpenRouter and prompted the models to identify and fix any issues with the Python code: if they failed, that’s a useful test of the current capabilities of LLMs; if they succeeded, that’s a software quality improvement for potential users of the package, and I have no moral objection to it. The LLMs actually were helpful: in addition to adding good function docstrings and type hints, they identified more Pythonic implementations of various code blocks.
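To make that concrete, here is a minimal sketch of the flavor of refactor the models proposed; the filter_params function below is a hypothetical example, not actual gemimg code:

```python
# Hypothetical example (not taken from gemimg): the kind of change the
# LLMs suggested, adding a docstring, type hints, and a more Pythonic
# construct without changing behavior.

# Before: a manual accumulator loop, undocumented and untyped.
def filter_params(params):
    out = {}
    for key in params:
        if params[key] is not None:
            out[key] = params[key]
    return out

# After: the same behavior as a dict comprehension, documented and typed.
def filter_params(params: dict[str, object]) -> dict[str, object]:
    """Return a copy of params with all None-valued entries dropped."""
    return {key: value for key, value in params.items() if value is not None}
```

The comprehension version is shorter and clearer about intent, which is typical of the low-risk, mechanical improvements the models flagged.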
In the months before, space agency officials were in frequent contact with the State Department, which disseminated the latest predicted trajectories to embassies across the world. In these situations, oops doesn’t cut it: When one of the Soviet Salyut space stations was deorbited a few decades ago, flaming debris was scattered across Argentina, frightening people and requiring the deployment of at least a few firefighters, according to local newspaper reports.