If you already have Node/npm, the docs explicitly allow global npm install: ([OpenClaw][1])
npm install -g openclaw@latest
This list summarises each contribution to Volume 2: The Economics of Transformative AI. Each entry begins with the author(s) and the blurb from the volume‑overview page. A short summary then explains the article’s main thesis, followed by bullet points outlining the article’s major arguments or recommendations. Citations refer to lines from the essays themselves.
Blurb: The editors introduce the volume by situating transformative AI (TAI) alongside the industrial revolution and highlight the record investment and rapid progress in AI. They note that the essays consider how to respond to the economic implications of TAI rather than treat it as current A
Excellent! Ok now, I'd like to develop another test case that is a little different and also very real. I need to get my resume in order for a job listing at the company called Everlaw. My friend Leo told me about it, and as an internal reference I just need to get Leo my resume to pass along. So I need help updating (and totally changing/recreating) my resume accordingly. To that end, I have added the project folder for this in projects/resume_for_everlaw off root. In that directory you will find seven (7) files. Please do the following:
read each file and give me a blurb of the FULL CONTENT of each file, organized by filename, so I know you've got all the relevant info in active attention. Note that a lot of the deep context is found in the transcript of the call with me and Leo, in the actual job description, and in my current draft resume, but the other files all have relevant info too.
come up with a plan to create a new workflow for this, which must conclude with at least one finished and
Google has launched Antigravity, but it's not the physics-defying device the name might suggest. Instead, it's something potentially more transformative: an agent-first coding environment that fundamentally reimagines who—or what—sits at the center of software development.
To understand just how profound this shift really is, consider this: a coding IDE designed for software development was recently used to practice law—and it did so with stunning competence.
On November 19, 2025, Sam Harden of Legal Tech School published a video titled "Bunny vs. Fudd: Practicing Law in Google Antigravity" that demonstrates something remarkable. Instead of asking Antigravity to write Python scripts, Harden treated the AI agent like a junior associate and assigned it a fictional bu
Grasshopper Bank announced in August 2025 that it now offers a Model Context Protocol (MCP) server, enabling some account holders to interact with their accounts using AI Agents and API-based workflows. This represents a major step in digital banking for small businesses, allowing secure conversational access to account data through integrations such as Claude by Anthropic.[1][2][3]
The MCP server is a backend protocol designed to securely standardize and orchestrate how AI agents interact with financial account data. It enables business clients to request personalized financial analysis, transaction categorization, budgeting advice, and real-time cash flow insights via AI assistants, starting with Claude and potentially expanding to other LLMs in the future.[4][5][2][6][1]
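Under the hood, MCP tool invocations travel as JSON-RPC 2.0 messages with the `tools/call` method. As a sketch of what a client-side request could look like here, the tool name `get_cash_flow_summary` and its arguments are purely illustrative and not Grasshopper's actual API:

```python
import json

# Hypothetical JSON-RPC message an MCP client would send when an AI
# assistant invokes a banking tool. The "tools/call" method is part of
# the MCP spec; the tool name and arguments below are invented examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_cash_flow_summary",
        "arguments": {"account_id": "acct_123", "period": "2025-07"},
    },
}

wire_payload = json.dumps(request)
```

The server would respond with a matching JSON-RPC result that the assistant turns into conversational output.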
Tobin,
Here’s a compact suggested plan that others can lift into production after Monday. It centers on a demo of two portable artifacts:
Grant of Authority — what the user (Principal) authorizes an agent to do, with purpose‑binding, explicit capabilities + constraints, negative constraints (exclusions), time bounds, data‑minimization categories, revocation, and a verifiable token counterparties can check.
Ask/Confirm Policy — how much autonomy the user wants and the exact circumstances that trigger permissions or clarifications (presets up front; customize later).
Why this lands: it converts apparent authority into provable authority, and makes “duty‑of‑loyalty‑like” behavior observable (authenticating requesters, selective disclosure, an audit trail you can read). It also sets up safe portability across agent providers.
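A minimal sketch of how a Grant of Authority might be represented and checked. Field names and the check logic are assumptions illustrating the plan above, not a finished spec, and a real token would be signed and verifiable:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative Grant of Authority record; every field name here is an
# assumption mirroring the elements listed above, not a standard.
@dataclass
class GrantOfAuthority:
    principal: str                # the user granting authority
    agent: str                    # the agent being authorized
    purpose: str                  # purpose-binding
    capabilities: list[str]       # explicit allowed actions
    exclusions: list[str]         # negative constraints
    not_before: datetime          # time bounds: start
    not_after: datetime           # time bounds: end
    data_categories: list[str]    # data-minimization scope
    revoked: bool = False         # revocation flag

    def permits(self, action: str, at: datetime) -> bool:
        """Is this action within the grant at the given time?"""
        return (
            not self.revoked
            and self.not_before <= at <= self.not_after
            and action in self.capabilities
            and action not in self.exclusions
        )
```

A counterparty checking the token would call `grant.permits("book_travel", now)` before honoring the agent's request; the Ask/Confirm Policy would sit on top, deciding when the agent must pause and ask the principal instead.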
| {"resourceSpans":[{"resource":{"attributes":[{"key":"telemetry.sdk.language","value":{"stringValue":"python"}},{"key":"telemetry.sdk.name","value":{"stringValue":"opentelemetry"}},{"key":"telemetry.sdk.version","value":{"stringValue":"1.35.0"}},{"key":"service.name","value":{"stringValue":"agento"}},{"key":"service.version","value":{"stringValue":"1.0.0"}},{"key":"agento.module","value":{"stringValue":"1_01-B_JSON_Goal_to_PlanStructure-OTEL-Semantic-OI"}},{"key":"deployment.environment","value":{"stringValue":"development"}},{"key":"service.instance.id","value":{"stringValue":"da8aeafa-99a4-418e-94e1-6c484b533c6b"}}]},"scopeSpans":[{"scope":{"name":"__main__"},"spans":[{"traceId":"e27ba1417d55831a03436ba30a90ef5e","spanId":"6d1e6384c1168cf6","parentSpanId":"111e65453f2b188c","flags":256,"name":"llm.gemini.generate_plan","kind":3,"startTimeUnixNano":"1752994549414826000","endTimeUnixNano":"1752994563141393000","attributes":[{"key":"openinference.span.kind","value":{"stringValue":"LLM"}},{"key":"gen_ai.system", |
Can you turn this into a markdown table? Or into CSV. Tell me if you’re unable to make sense of all the data presented.
I think congress and the new administration should craft a modern statutory prong of fair use explicitly tailored to AI training data access, focused specifically on models that provide free/open access tiers or are open source. This targeted approach recognizes that existing copyright frameworks, rooted in decades-old standards, fail to address a transformative technology that is critical to economic growth, societal advances, and the greater good. While these models use copyrighted works in their training, they do so in ways that are highly innovative and do not directly compete with the original works. The carve-out could be part of a broader "new deal on data" that pairs expanded fair use protection with new obligations around transparency and limited licensing requirements - creating a balanced framework that promotes continued advancement in this essential field while providing appropriate safeguards and accountability.
Interested to learn more and consider this? I asked OpenAI Deep Research to help.
The following is a longer reply to a comment than LinkedIn will permit.
Thanks for asking! I wrote “nope” because I fundamentally reject the idea that having AI significantly smarter than humans automatically poses an inherent problem. In fact, I see it as an exciting opportunity.
Here’s why: