- CreateHER Fest Newsletter
- Posts
- Before you install OpenClaw, read this...
Before you install OpenClaw, read this...
This week: Black HERstory Month + the OpenClaw (ClawdBlot) security risk + (3) new #75HER workshops in AI, AR/VR & pitching.

THIS ISSUE AT A GLANCE
Honoring Dr. Shirley Ann Jackson → The Bell Labs physicist whose work shaped modern tech—and why her legacy matters to builders today.
The OpenClaw Reality Check → Mac Minis sold out, security researchers found problems. Here's what you need to know before installing.
Which #75HER Workshop Is For You? → 3 skill-building sessions this week: AI dev, AR/VR avatars, and pitching like a pro.
Resource Drop 📚 → AI ethics case study (Brittney Ball's breakdown) + APM career opportunity at Square + open source automation with goose + smart AI governance patterns
Black HERstory Month
For this Black History Month, we want honor the women without whom the technology we’re learning about and building on would be possible.
This week, we’re honoring Shirley Ann Jackson.

Shirley Ann Jackson, FREng is the first Black woman to earn a PhD from MIT in physics, and one of the first Black women in the U.S. to receive a doctorate in the field. At MIT she helped found the Black Student Union and advocated for bringing more Black students into the institution, proving early on that she wasn’t just breaking ceilings for herself—she was prying doors open for everyone coming behind her.
Her research in theoretical and solid-state physics at Bell Labs laid the groundwork for technologies we touch every day, from caller ID and call waiting to fiber-optic networks and portable fax systems—core building blocks of our connected, digital world. Later, as the first Black woman and first woman to chair the U.S. Nuclear Regulatory Commission, she reshaped nuclear safety and regulation on a global stage, bringing a scientist’s precision and a public servant’s responsibility to one of the most high-stakes roles in tech and policy. Dr. Jackson’s career makes it clear: when Black women lead in STEM, the entire infrastructure of technology, communication, and safety evolves.
At CreateHER Fest, part of our work is to make sure stories like Dr. Jackson’s are not just remembered, but amplified until they become the norm rather than the exception. When we convene builders in AI, AR/VR, and Blockchain, we are echoing her insistence that science serve society—and that those historically pushed to the margins stand at the center of innovation. By lifting up her legacy in our programs, stages, and curricula, we’re helping a new generation see themselves not only as participants in tech, but as architects of the future.
🕹️ "The Claw Chooses Who Will Go." OpenClaw Went Viral for All the Wrong Reasons.

IYKYK!
Remember the Toy Story aliens worshipping "The Claw"? OpenClaw (formerly Clawdbot, then Moltbot) inspired similar devotion—100,000 GitHub stars, 2 million visitors in one week, everyone calling it "the future of AI assistants". Unlike cloud-based AI, it runs on your computer and does things like: manage your calendar, read your emails, write code, control your browser, and execute commands. Sounds convenient? Sure. However, when you think about the sensitive information in your email, code, and browser, it becomes even clearer how convenience comes at the cost of security.
The hype got so real that Mac Mini sales jumped ~12% in Q4 2025—with developers buying them in bulk specifically to run OpenClaw 24/7. Apple's sales team was caught off-guard; there was no product launch or campaign. People were spending $7,000+ on a dozen Mac Minis to host their own "AI employees".
Read below to learn what security researchers and users found out about OpenClaw.

Moltbook activity exploded on January 30, 2026, when OpenClaw went viral—total of 13,875 posts and 115,031 comments over 3.7 days. The sharp spike on day 3 shows what happens when hype hits critical mass
What Went Wrong
Documentation engineer Brittney Ball calls it "an ethics failure"—OpenClaw's 30-minute setup wizard made granting full system access easy, but understanding the risks required hours of Discord digging.
Researchers using Shodan found hundreds of misconfigured instances exposed to the internet: unauthenticated admin access, conversation histories visible, API keys grabbable, command execution open. By late January, 22% of enterprise customers reported employees using it without IT approval.
The core issue? Unethical Design—OpenClaw combined excessive access, insecure defaults, exposure to untrusted content, and autonomous action into a single system. Emails, shared documents, calendar invites, and websites were treated as safe inputs, despite being routine paths for hidden instructions and credential exposure. AI researcher Simon Willison calls it the "lethal trifecta": access to sensitive data + exposure to untrusted content + ability to take external actions. Language models can't reliably distinguish between your commands and disguised attacks.
Wild (but real) use cases:
Accidentally wrote an assertive insurance dispute letter that worked
Spent $120 overnight retrying a failed pip install with no timeout
Sent emails impersonating the user instead of identifying as assistant
Built apps from viral Twitter trends while user slept—every night
Fixed production bugs autonomously after monitoring Slack
When AI has your credit card and email, access has its rewards and its consequences.
The Bot Social Network
OpenClaw agents have also been opting into Moltbook—a social network where AI agents interact with each other. The pitch? Watch emergent AI society unfold.
Columbia Business School professor David Holtz analyzed the data: 93.5% of comments got zero replies. His assessment? "Less 'emergent AI society' and more '6,000 bots yelling into the void and repeating themselves'".
A LinkedIn user captured the irony: "That's not that different from human social media".

The reality of 'emergent AI society': 39.5% of threads got zero replies (bots broadcasting into the void), 36% were wide-but-shallow (one-off posts), only 5% developed into actual conversation chains.
Why People Are Buying Dedicated Hardware for This
The Mac Mini surge reveals a shift: AI moving from "an app you open" to "a service you host". Here's why developers justify the $599+ investment:
Economics: Heavy OpenClaw users were burning API costs. One-time hardware beats ongoing cloud charges—some report it "paying for itself in 20 days".
Apple ecosystem access: Mac Minis unlock iMessage automation, Shortcuts, AppleScript, Photos, Health, and Calendar hooks impossible on VPS or Windows.
Apple Silicon advantage: Unified memory architecture makes local AI inference faster. 16GB standard, silent 24/7 operation, low power consumption.
The "AI appliance" concept: People want their assistant to have a "physical body" that's always on, always listening, always ready.
The irony? OpenClaw's official requirements: 1GB RAM, single CPU core—it runs fine on a $5/month VPS. But that doesn't stop the hardware gold rush.
What this means: We're at the "Raspberry Pi/Virtual Desktop Instance moment" for AI agents—affordable, accessible, always-on personal and segmented cloud AI infrastructure. Except this time, accessibility also means accessible vulnerabilities.
Before You Jump In: Questions to Ask Yourself
Before installing any trending AI tool, take everything with a massive grain of salt. Here's your skepticism checklist:
About the tool:
Does this tool advise me, or can it take autonomous actions on my behalf (aka without my consent)?
If it makes a mistake or gets tricked, what’s the worst thing that could happen, and can I undo it?
Can things I didn’t create but give it access to (i.e. emails, messages, websites, shared files) influence what it does?
Does this tool depend on add-ons or integrations made by people or companies that have earned my trust?
Is it set up to be safe by default, or does safety depend on me configuring it correctly?
Can updates change what the tool does or what it’s allowed to access without it clearly telling me?
Could records it keeps (history, logs, outputs) later reveal personal, professional, or identifying information about me?
Am I solving a real problem here, or responsibility and security for convenience?
About access:
What parts of my device, files, accounts, or apps can this tool see or control?
Where does it actually run? (On my device, on someone else’s servers, or both) and where does my data live after that?
How does it protect sensitive information like my passwords, access tokens, or private keys?
Can I limit or remove its access later, or is it all-or-nothing once installed?
What happens if the project gets abandoned or rebranded and I’m left holding the buck?
Move fast, but know what you're running toward.
Upcoming with CreateHER Fest
We're in the home stretch of our #75HER Learn-a-thon—and this week is PACKED!

Build & Ship a Capsule Wardrobe App w/ AI Assist
Monday, February 3 | Jasmine Greenaway
Build a production-ready Python Streamlit app using AI as your development partner—walk away with a working repo and the workflow to ship apps faster.
Animating Intent: Using Language Models to Drive Virtual Avatar Behavior
Wednesday, February 5 | April Gittens
Create emotion-aware AI avatars that respond with natural character behavior—you'll leave with a practical framework you can extend into your own AR/VR hackathon projects.
Voice & Pitch for Power
Saturday, February 8 | Mellini Monique
Practice the vocal techniques and pitch strategies that turn complex technical ideas into executive-ready narratives—learn to advocate for yourself, pitch your projects with confidence, and amplify your professional impact.
COMMUNITY SURVEY
How'd we do?How did this week's newsletter land for you? |
Enjoyed it? Share us with your network!
THIS WEEK’S RESOURCES
Sharing AI Development Rules Across Your Organization
How centralized-rules achieved 74.4% token savings and consistent AI behavior with progressive disclosure and context-aware rule loading.
Read More → https://www.paulmduvall.com/sharing-ai-development-rules-across-your-organization/
Square Associate Product Manager (APM) Program
Designed for early-career product managers, participate in a structured 12-month rotational experience, completing two six-month rotations across different products teams at Square.
Apply Today → https://block.xyz/careers/square-associate-product-manager-program
This Week’s Failure: Clawdbot
How Brittney Ball turned Clawdbot’s launch into a case study in AI ethics: Thousands gave an agent full computer access in 30 minutes, then learned the security model hours later in Discord and buried docs.
How goose uses goose to heal goose
How goose automated open source maintenance with phased recipes and embedded GitHub Actions, reducing maintainer bottlenecks from hours to minutes.
Read More → https://block.github.io/goose/blog/2025/12/28/goose-maintains-goose/
Until next time,

Discover \ Design \ Deliver
Reply