Show HN: Generative UI Library for React https://ift.tt/w7VjL60 April 30, 2026 at 12:58AM
Show HN: Send your first Peppol e-invoice in 5 minutes (EU mandate live) https://getpeppr.dev/ April 29, 2026 at 11:06PM
Show HN: My friend and his AI homies wrote SGI Indy emulator in Rust https://ift.tt/16AqBGX April 29, 2026 at 01:56AM
Show HN: Open Bias – proxy that enforces agent behavior at runtime https://ift.tt/1JsjBCE April 29, 2026 at 12:02AM
Show HN: Implementing Patio11's "Dangerous Professional" as a Claude Code Plugin Howdy HN! My recent dive into home ownership has brought me a whole new world to navigate w.r.t contractors, insurance claims, etc. I've been leaning heavily on the Dangerous Professional concept for clearer communication. It fits very cleanly as a plugin and has been very high-ROI for me. This is a community implementation. No affiliation with patio11, just a fan of his work. Repo: https://ift.tt/bIkmrXs... Install: `npx skills add Tetra-Research/dangerous-professional-plugin` https://ift.tt/0LKmj9e April 28, 2026 at 05:35PM
Show HN: Waiting for LLMs Suck – Give your user a game Give your user a game while they wait for the LLM to return a result. https://ift.tt/hqIA2Uj April 28, 2026 at 08:15AM
Show HN: 49Agents – Infinite canvas IDE for AI agents https://ift.tt/hSAZHL0 April 28, 2026 at 06:06AM
Show HN: WaveletLM – wavelet-based, attention-free model with O(n log n) scaling WaveletLM is a wavelet-based, attention-free architecture that replaces self-attention with learned lifting wavelet decomposition, a Fast Walsh-Hadamard Transform, per-scale gated spectral mixing with SwiGLU activation, an inverse FWHT, and wavelet reconstruction. Combined with expanded MLPs and sparse product-key memory, this yields a model with O(n log n) scaling in sequence length. With 23.8 PPL on WikiText-103, WaveletLM beats both GPT-2 Medium, which was trained on 80× more data, and Transformer-XL Standard, which uses recurrence to extend its effective context. It is undertrained and underregularized due to budget constraints, so there is much room for development and improvement. I invite anyone who is curious to examine the model, test it out, and extend its capabilities further. All code and weights are fully open source, and a PG-19 run will be completed in 2-3 days. Generations can be done in 4-5 GB VRAM at 28.8 tokens/second, and the model is trainable in 16.25 hours with 20 GB of VRAM, both on a 5090. README for comparison tables, instructions, logs, and future plans: https://ift.tt/ok5PNIj Weights: https://ift.tt/HAW0qsk Generations: https://ift.tt/6qp5eWw... The following samples were chosen for coherence, not factual accuracy. Factuality will require scaling and downstream techniques such as RAG and instruction tuning. > The history of the city is reflected in its architecture, which includes the historic Old Town and New Castle County Courthouse Square Historic District. The building was designed by John H. Stevens, who also designed the Albany-Fulton Celebration in 1906 and built a steel-hulled shipyard on the lake shore. > The album was released on August 25, 2007 by Sony Music Entertainment and features several songs from the record including "Never Say Die", "The Show", "Don't Cry for Me Argentina" and a cover of "I Can Only Imagine (But You Are Not Alone)". 
> The species was first described by Swedish zoologist Carl Linnaeus in 1758 as Agaricus adustus. The genus name is derived from the Latin words perma "to tie", and pous ("like") means "with a large head". In 1821, French mycologists Jean-Baptiste de Lacaille placed it in section Cricetae of the order Carnivora. He later renamed it Spongiforma punctata after the Greek kribensis. https://ift.tt/ok5PNIj April 26, 2026 at 11:18PM
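The O(n log n) claim above comes from transforms like the Fast Walsh-Hadamard Transform that WaveletLM uses in place of attention. As a hedged illustration (not WaveletLM's actual code), a minimal in-place FWHT over a power-of-two-length sequence looks like:

```python
def fwht(x):
    """Unnormalized Fast Walsh-Hadamard Transform.

    len(x) must be a power of two; runs in O(n log n) butterfly steps,
    which is where attention-free spectral mixers get their scaling.
    """
    x = list(x)
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b  # butterfly: sum and difference
        h *= 2
    return x
```

Applying the unnormalized transform twice returns the input scaled by n, which is a quick sanity check that the butterflies are wired correctly.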
Show HN: SVG Fitter – Rust+WASM Vectorizer I went crazy with a tool that helps me trace raster images. Thought others might like it. It doesn't auto-vectorize images; rather, it allows for a guided process. The final SVG should still be edited. A few fun features: genetic-algorithm fit optimization, semi-manual tracing, and color preservation. Perfect if you want a lightweight SVG from a huge PNG image. Note: If there's interest I might open-source it, just not sure if anyone would want to see it :) https://svg.axk.sh April 25, 2026 at 10:21PM
Show HN: Odozi – open-source iOS journaling app Yeah I know, I hate the name too, but I wasn't about to pay up for odyssey.app. It's an open-source project, so feel free to poke around with it / fork it. I talk about it more on the marketing website, but a few of us have been using it for the past month and it's kind of fun. Obviously there will be a slew of issues / feedback / nits that come from this, but c'est la vie. GH is here: https://ift.tt/sP5SkYX https://odozi.app April 25, 2026 at 09:22PM
Show HN: Quay – Menu-bar Git sync I write Astro blog posts in a text editor; when I'm done I want them pushed to GitHub so Cloudflare deploys the site. To make it comfortable, I built Quay for the menu bar. Also useful for Obsidian vault syncing. Point it at a folder, connect a GitHub repo, and it stages/commits/pushes/pulls. Multiple repos, editable commit messages, branch switching, merges with conflict detection. Shows open issue and PR counts per repo. But it is not a full Git client (no diffs, blame, cherry-pick, or rebase) and it doesn't create remote repos. Native macOS app (Swift/SwiftUI). Wraps the local git binary (prompts to install Xcode Command Line Tools if missing). No custom Git implementation. Sandboxed, no telemetry, GitHub-only. macOS. 7-day trial, €9 one-time on the App Store. https://ift.tt/8oQZAcm April 25, 2026 at 11:53PM
Show HN: I'm 15 and built a cryptographic accountability layer for AI agents I'm 15 and a sophomore in high school in California. For the past two weeks I've been building a protocol that lets you prove what an AI agent actually did. Not just log it. Prove it. Signed receipts before and after each action, hash-chained, verifiable by anyone. This week Microsoft merged my code into their agent governance toolkit. Twice. Happy to answer questions about how it works. https://ift.tt/GlaeMUq April 25, 2026 at 12:26AM
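The "signed receipts, hash chained, verifiable by anyone" design above can be sketched in a few lines. This is a hedged illustration of the general pattern, not the author's protocol: it uses an HMAC with a shared key, where a real accountability layer would use asymmetric signatures so third parties can verify without any secret.

```python
import hashlib
import hmac
import json

SECRET = b"agent-signing-key"  # hypothetical; real systems would use a private signing key

def make_receipt(prev_hash, action, result):
    """Create a receipt for one agent action, chained to the previous receipt."""
    body = json.dumps(
        {"prev": prev_hash, "action": action, "result": result},
        sort_keys=True,
    ).encode()
    return {
        "body": body.decode(),
        "sig": hmac.new(SECRET, body, hashlib.sha256).hexdigest(),
        "hash": hashlib.sha256(body).hexdigest(),  # next receipt chains to this
    }

def verify_chain(receipts):
    """Replay the chain: every signature must check out and every 'prev' must match."""
    prev = "genesis"
    for r in receipts:
        body = r["body"].encode()
        if hmac.new(SECRET, body, hashlib.sha256).hexdigest() != r["sig"]:
            return False  # receipt was tampered with
        if json.loads(r["body"])["prev"] != prev:
            return False  # chain was reordered or a receipt was dropped
        prev = hashlib.sha256(body).hexdigest()
    return True
```

Because each receipt commits to the hash of the previous one, deleting or editing any receipt breaks verification for everything after it.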
Show HN: Roids – Open Source Steroids for your Agents https://ift.tt/7BmnPFM April 24, 2026 at 11:18PM
Show HN: TurbineFi – Build, Backtest, Deploy Prediction Market Strategies Hey HN! We just finished our first major build of TurbineFi, an AI-assisted workflow for building, backtesting, and running prediction market strategies. There are over 1,000 community strategies you can try out, there's a backtesting engine integrated in the workflow, and you get your own sandbox to execute the trades 24/7. Currently live for Kalshi, Polymarket coming soon. We developed a custom DSL to make compiling AI-assisted strategies more deterministic than raw python generation, so creating a strategy takes seconds even on low-tier models (thinking of migrating to a self-hosted model soon to reduce costs). We also worked with Locus (YCF25) to do the sandbox provisioning, so that we never manage keys for users. When a user signs up with their email, Privy creates a wallet for them, and then that wallet uses the X402 agent payment protocol to pay for their own server. We created a deployment harness around it that accepts and runs new code via a hosted API, so once it's up, every deployment is authorized by EIP-712 signatures. It keeps everything non-custodial, and code deployments happen in seconds. And users don't really realize they're using crypto rails. Turbine also includes weather and crypto historical information, so you can do things like fading the BTC-15min UP markets when it's cold in NYC, and backtest and run it in seconds. Adding sports data soon. There's a 7-day trial if you want to poke around. Would appreciate feedback on which strategies you'd want to try first, so we can make sure we have the infra to support them. Thank you! https://ift.tt/zFkRn2M April 24, 2026 at 08:47PM
Show HN: Python 0.9.1 from 1991, Guido van Rossum's first public release https://ift.tt/XR6jf57 April 23, 2026 at 10:54PM
Show HN: Core – open-source AI butler that clears your backlog without you Hi HN, we're Manik, Manoj and Harshith, and we're building CORE ( https://ift.tt/esSEU49 ), an open-source AI butler that acts and clears out your backlog. Write `[ ] Fix the search auth bug` in a scratchpad. Three minutes later, without you at the keyboard, CORE picks it up, pulls the relevant context from your codebase, drafts a plan in the task description, and spins up a Claude Code session in the background to do the work. You review the output in the task chat and unblock it when it gets stuck. Every AI tool today is reactive. You open a chat, brief the agent, it responds. Before anything moves, you've already done the real work: opened the Sentry error, found the commit, read the Slack thread, grabbed the Linear ticket, and stitched it all together into a prompt. The model isn't the bottleneck. You are. Demo Video: https://www.youtube.com/watch?v=PFk4RJvQg1Y CORE removes you from that loop. The interface is a shared scratchpad; think of a page you and a colleague both have open. You write what's on your mind. When you write a checkbox line like `[ ] Fix the search bug`, CORE converts it into a task and starts working on it after a short delay (long enough for you to add context if you want to). No prompt template. No workflow to configure. The reason it can do this without you re-explaining everything: CORE keeps a persistent memory built from your tasks, conversations, and connected apps (Linear, Gmail, GitHub, Slack, etc.). When it spins up a Claude Code session, it arrives with your codebase and project context already loaded. A real example: we wrote `[ ] Create a widget in Linear integration`; about 14 minutes later, CORE had opened a PR. What CORE is _not_: it's not Devin (no autonomous web browsing or shell loops you can't see), and it's not "Claude Code with memory bolted on."
It's the layer above it that decides what should run, gathers the context, hands it to the right agent, and keeps the receipts in one place. Today the agent backend it spins up most often is Claude Code; the orchestration, scratchpad, memory, and integrations are CORE. Open source, self-hostable with `docker compose up` and it supports multiple models. GitHub: https://ift.tt/esSEU49 Website: https://getcore.me (you can chat with Harshith's butler there) Demo: https://www.youtube.com/watch?v=PFk4RJvQg1Y https://www.getcore.me/ April 23, 2026 at 08:44PM
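The checkbox-to-task step CORE describes (a `[ ] Fix the search auth bug` line in the scratchpad becomes a task) is essentially a line scan. A minimal sketch of that conversion, assuming CORE's actual parser is richer than this:

```python
import re

# Matches an unchecked markdown-style checkbox line: "[ ] <task title>"
CHECKBOX = re.compile(r"^\[ \]\s+(.+)$")

def extract_tasks(scratchpad: str):
    """Return the titles of unchecked checkbox lines in a scratchpad."""
    tasks = []
    for line in scratchpad.splitlines():
        m = CHECKBOX.match(line.strip())
        if m:
            tasks.append(m.group(1))  # completed "[x]" lines and plain notes are skipped
    return tasks
```

In the real system each extracted title would then be enriched with context from memory before an agent session is spawned; this sketch only shows the detection step.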
Show HN: Turning a Gaussian Splat into a videogame https://ift.tt/urIgFNP April 23, 2026 at 07:48PM
Show HN: One ESLint rule to kill the "ChatGPT em dash" in your codebase https://ift.tt/Em0QU6Y April 23, 2026 at 01:27AM
Show HN: Netlify for Agents I launched Netlify with a Show HN more than 11 years ago, for humans. Today we're launching our agent-first version of Netlify. Super early days for this, but I expect it to become as important as our original launch over time. It's as hard to perfect these flows as it was to perfect some of the initial human DX flows, since the agents are non-deterministic and keep changing and evolving, and we'll have more to show soon on our eval tooling for this. Try it out with an agent, and we would love feedback on what works and what doesn't as we keep iterating on making Netlify better for our new agent friends. https://netlify.ai April 22, 2026 at 10:27PM
Show HN: Trainly – Free 72-hour audit of your AI agent's production traces https://ift.tt/WjKkB7X April 22, 2026 at 11:40PM
Show HN: Everest Drive – a multiplayer spaceship crew simulator in the browser Hi HN! I'm working on an open-world multiplayer space sim with submarine-warfare-inspired combat. Crew a ship, haul cargo, run heists, hunt your foes with passive and active sensors. Browser-based, free, no install. Some of its features: - Submarine-style passive sensors. Contacts start as a bearing line (direction, no distance), resolve into an uncertainty circle, then into a full track. You triangulate over time by moving. - Silent running. Cut your emissions and witnesses can't ID you. - Newtonian flight. No drag, no auto-brake. Flip 180° and burn to stop. - Boarding combat. Dock with another ship and fight through it room by room. Architecture: - The server is a single Rust module compiled to WASM, running inside SpacetimeDB. - Clients subscribe to rows in the schema and get live deltas over websocket; writes go through reducers (transactional Rust functions). No REST, no custom netcode, no client-side authority. - Client is Svelte 5 + plain HTML5 canvas 2D. No game engine, no WebGL. https://ift.tt/2joZLhr Very early, plenty of rough edges. Would love to hear what breaks for you: https://everestdrive.io https://ift.tt/z4aE6r7 April 22, 2026 at 11:27PM
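The passive-sensor mechanic above (a bearing line that only resolves into a position as you move) is classic target motion analysis. A minimal two-bearing triangulation sketch, not taken from the game's Rust code:

```python
import math

def bearing_dir(deg):
    """Unit direction for a compass bearing (0 = north/+y, clockwise)."""
    r = math.radians(deg)
    return math.sin(r), math.cos(r)

def triangulate(p1, brg1, p2, brg2):
    """Intersect two bearing lines taken from two observer positions.

    Returns the estimated contact position, or None if the bearings are
    parallel (no fix possible; keep maneuvering and take another bearing).
    """
    d1x, d1y = bearing_dir(brg1)
    d2x, d2y = bearing_dir(brg2)
    det = -d1x * d2y + d2x * d1y
    if abs(det) < 1e-9:
        return None
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (-dx * d2y + d2x * dy) / det  # distance along the first bearing line
    return p1[0] + t1 * d1x, p1[1] + t1 * d1y
```

Bearing error turns this point into the "uncertainty circle" the entry mentions; moving the observer widens the angle between bearings and shrinks that circle.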
Show HN: Agent Brain Trust, customisable expert panels for AI agents Agent Brain Trust lets you summon a panel of real, named experts to critique your architecture, review your writing, pressure your product strategy, or debate your design patterns. 10 built-in trusts, an extensible roster, and a working turn-taking protocol that ensures nothing useful gets skipped. Guest experts are drafted via an MCP server that maps topics to real persona cards so the panel can reach into niche and novel territory without inventing expertise it does not have. Wrote up the full thinking here: https://tinyurl.com/agent-brain-trust https://ift.tt/4qJKVet April 22, 2026 at 04:33AM
Show HN: Almanac MCP, turn Claude Code into a Deep Research agent I am Rohan, and I have grown really frustrated with CC's search and read tools. They use Haiku to summarise all the search results, so it is really slow and often ends up being very lossy. I built this MCP that you can install into your coding agents so they can actually access the web properly. Right now it can: - search the general web - search Reddit - read and scrape basically any webpage Install it: npx openalmanac setup The MCP is completely free to use. We have also built a central store where you can contribute things you learned while exploring. If you find something useful, you can contribute it to the encyclopedia we're building at Almanac using the same MCP. https://ift.tt/mL7WGYP April 22, 2026 at 03:42AM
Show HN: Backlit Keyboard API for Python It currently supports Linux. You can use this package to tinker with many things: say you want a custom notification system, you can make the keyboard blink when your website is down. macOS support is underway. I haven't tested Windows yet, as I don't use it anymore. In the future, if this package sees good growth, I'll be happy to make a similar Rust crate for it. https://ift.tt/sSq9i8V April 19, 2026 at 12:22PM
Show HN: Simple CLI tool to convert PDFs to dark mode, with TOC preservation Hi HN, I made a little something that could be useful to those who, like me, read PDFs at night. https://ift.tt/kYfegdy April 21, 2026 at 01:52AM
Show HN: Git Push No-Mistakes no-mistakes is how I kill AI slop. It puts a local git proxy in front of my real remote. I push to no-mistakes instead of origin, and it spins up a disposable worktree, runs my coding agent as a validation pipeline, forwards upstream only after every check passes, opens a clean PR automatically, and babysits the CI pipeline for me. https://ift.tt/j7RgvKT April 21, 2026 at 12:10AM
Show HN: AI Coding Agent Guardrails enforced at runtime Hello, I'm looking for users interested in a devtool that lets developers centrally manage guardrails across all AI coding agent tools, like Claude Code, Codex, Antigravity, etc. Try it free! https://ift.tt/NbzquSY... https://sigmashake.com April 20, 2026 at 10:55PM
Show HN: Pwneye – discovering and accessing IP cameras (ONVIF/RTSP) Hi HN, I’ve been working on pwneye, a CLI tool for interacting with IP cameras exposing ONVIF and RTSP services. During penetration tests and red team engagements, I kept running into the same friction, with discovery, authentication testing, enumeration and stream validation spread across different tools or quick one-off scripts. pwneye was built to handle that workflow end-to-end, from discovery to actually accessing and validating streams. Current features include: - ONVIF discovery and authentication testing (wordlists, multithreading) - Post-auth enumeration (device info, users, network config, media profiles) - RTSP extraction via ONVIF - RTSP port detection and basic vendor identification - Vendor-aware RTSP bruteforce - Stream validation, preview and recording - ONVIF reboot support It’s still early, but already usable in real-world engagements. Would be interested in feedback, especially from people who have dealt with ONVIF/RTSP cameras or IoT security in general. Repo: https://ift.tt/M2AUod0 https://ift.tt/GLwxN3O April 20, 2026 at 10:54PM
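The "vendor-aware RTSP bruteforce" step above amounts to generating candidate URLs per vendor and credential pair, then validating each stream. A hedged sketch of the generation step; the vendor paths shown are commonly documented defaults, not pwneye's actual lists:

```python
VENDOR_PATHS = {
    # Illustrative defaults; real vendor path lists are much longer.
    "hikvision": ["/Streaming/Channels/101"],
    "dahua": ["/cam/realmonitor?channel=1&subtype=0"],
    "generic": ["/live", "/stream1"],
}

def rtsp_candidates(host, port, creds, vendor="generic"):
    """Build the RTSP URLs to try for one camera.

    creds is a list of (user, password) pairs from a wordlist; unknown
    vendors fall back to generic paths.
    """
    paths = VENDOR_PATHS.get(vendor, VENDOR_PATHS["generic"])
    return [
        f"rtsp://{user}:{pwd}@{host}:{port}{path}"
        for user, pwd in creds
        for path in paths
    ]
```

A real tool would then attempt a DESCRIBE/PLAY on each candidate (multithreaded) and keep only URLs that return a playable stream.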
Show HN: A privacy-first, local-LLM note app for iOS (Google Keep alternative) https://ift.tt/SZpvgKk April 19, 2026 at 10:29PM
Show HN: Free PDF redactor that runs client-side I recently needed to verify past employment and to do so I was going to upload paystubs from a previous employer, however I didn't want to share my salary in that role. I did a quick search online and most sites required sign-up or weren't clear about document privacy. I conceded and signed up for a free trial of Adobe Acrobat so I could use their PDF redaction feature. I figured there should be a dead simple way of doing this that's private, so I decided to create it myself. What this does is rasterize each page to an image with your redactions burned in, then it rebuilds the PDF so the text layer is permanently destroyed and not just covered up and easily retrievable. I welcome any and all feedback as this is my first live tool, thanks! https://redactpdf.net April 20, 2026 at 12:09AM
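The redaction approach described, rasterizing each page with the redactions burned in, guarantees the covered text is unrecoverable, unlike drawing a box over a live text layer. A toy sketch of the burn-in step on a grayscale pixel grid; the real tool presumably renders PDF pages via a browser library, and this only illustrates why the operation is irreversible:

```python
def burn_redaction(pixels, x0, y0, x1, y1):
    """Burn a black rectangle into a rasterized page.

    pixels is a 2D grid of grayscale values (rows of ints). The original
    values inside the box are overwritten, so no text layer survives to
    be copied back out of the rebuilt PDF.
    """
    for y in range(y0, y1):
        for x in range(x0, x1):
            pixels[y][x] = 0
    return pixels
```

After this step the page image, not the original text, is what gets re-embedded into the output PDF, so "select and copy" on the redacted region yields nothing.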
Show HN: Faceoff – A terminal UI for following NHL games Faceoff is a TUI app written in Python to follow live NHL games and browse standings and stats. I got the inspiration from Playball, a similar TUI app for MLB games that was featured on HN. The app was mostly vibe-coded with Claude Code, but not one-shot. I added features and fixed bugs by using it, as I spent way too much time in the terminal over the last few months. Try it out with `uvx faceoff` (requires uv). https://ift.tt/XW4J6R2 April 19, 2026 at 11:14PM
Show HN: AI Subroutines – Run automation scripts inside your browser tab We built AI Subroutines in rtrvr.ai. Record a browser task once, save it as a callable tool, and replay it with zero token cost, zero LLM inference delay, and zero mistakes. The subroutine itself is a deterministic script composed of discovered network calls hitting the site's backend as well as page interactions like click/type/find. The key architectural decision: the script executes inside the webpage itself, not through a proxy, not in a headless worker, not out of process. The script dispatches requests from the tab's execution context, so auth, CSRF, TLS session, and signed headers get added to all requests and propagate for free. No certificate installation, no TLS fingerprint modification, no separate auth stack to maintain. During recording, the extension intercepts network requests (MAIN-world fetch/XHR patch + webRequest fallback). We score and trim ~300 requests down to ~5 based on method, timing relative to DOM events, and origin. Volatile GraphQL operation IDs are detected and force a DOM-only fallback before they break silently on the next run. The generated code combines network calls with DOM actions (click, type, find) in the same function via an rtrvr.* helper namespace. Point the agent at a spreadsheet of 500 rows, and with just one LLM call parameters are assigned and 500 subroutines are kicked off.
Key use cases: - record sending an IG DM, then have a reusable, callable routine to send DMs at zero token cost - create a routine that gets the latest products in a site catalog, then call it to get thousands of products via direct GraphQL queries - set up a routine to file an EHR form based on parameters to the tool; AI infers parameters from the current page context and calls the tool - reuse a routine daily to sync outbound messages on LinkedIn/Slack/Gmail to a CRM using an MCP server We think the fundamental reason browser agents haven't taken off is that, for repetitive tasks, going through the inference loop is unnecessary. Better to record once and have the LLM generate a script that leverages all the possible ways to interact with a site and the wider web: directly calling backend APIs, interacting with the DOM, and calling third-party tools/APIs/MCP servers. https://ift.tt/fDeIN0x April 18, 2026 at 02:33AM
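The score-and-trim step the entry describes (~300 captured requests down to ~5, based on method, timing relative to DOM events, and origin) can be sketched as a simple heuristic ranking. The weights and fields here are hypothetical, not rtrvr.ai's actual scoring:

```python
def score_request(req, click_ts, page_origin):
    """Heuristic relevance score for one captured request (made-up weights)."""
    s = 0.0
    if req["method"] in ("POST", "PUT", "DELETE"):
        s += 2.0  # mutations are more likely the "real" backend action
    if abs(req["ts"] - click_ts) < 1.0:
        s += 1.5  # fired right around the recorded DOM event
    if req["origin"] == page_origin:
        s += 1.0  # same-origin call to the site's own backend
    return s

def trim(requests, click_ts, page_origin, keep=5):
    """Keep only the highest-scoring requests for the generated subroutine."""
    ranked = sorted(requests, key=lambda r: -score_request(r, click_ts, page_origin))
    return ranked[:keep]
```

In the real pipeline the surviving requests are then stitched together with DOM actions into the deterministic replay script.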
Show HN: Praxis – Lab data to publication-ready figures in one Python package https://ift.tt/2lizPMO April 18, 2026 at 11:45PM
Show HN: I turned my MacBook notch into a live Claude Code dashboard https://ift.tt/Zhu4CKX April 17, 2026 at 09:13PM
Show HN: Smol machines – subsecond coldstart, portable virtual machines https://ift.tt/QCk43Rs April 17, 2026 at 10:48PM
Show HN: Marky – A lightweight Markdown viewer for agentic coding Hey HN, In this age of agentic coding I've found myself spending a lot of time reviewing markdown files. Whether it's plans or documentation that I've asked my agent to generate for me, it seems that I spend more time reading markdown than code. I've tried a few different solutions to make it easier to read, such as Obsidian, but I found its Vault system quite limiting for this use case, and TUI solutions not quite as friendly to read as I wanted, so I made Marky. Marky is a lightweight desktop application that makes it incredibly easy to read and track your markdown files. It also has a helpful CLI, so you can just run `marky FILENAME` and have the app open the md file you pointed it at. I've been using it daily over the past week and I really enjoy it, so I figured I'd share it. Here's a video if you want to check out a demo: https://www.youtube.com/watch?v=nGBxt8uOVjc . I have plans to add more features, such as incorporating agentic tools like Claude Code and Codex into the UI, as well as a local git diff reviewer so I can do local code review before pushing up to git. I'd love to hear your thoughts and any feature suggestions you may have :) https://ift.tt/c4w5XvN April 16, 2026 at 09:38PM
Show HN: Online Sound Decibel Meter https://ift.tt/fb297cn April 17, 2026 at 12:09AM
Show HN: Stage – Putting humans back in control of code review Hey HN! We're Charles and Dean, and we're building Stage: a code review tool that guides you through reading a PR step by step, instead of piecing together a giant diff. Here's a demo video: https://ift.tt/94MTKvB . You can play around with some example PRs here: https://ift.tt/ebxMDFV . Teams are moving faster than ever with AI these days, but more and more engineers are merging changes that they don't really understand. The bottleneck isn't writing code anymore, it's reviewing it. We're two engineers who got frustrated with GitHub's UI for code review. As coding agents took off, we saw our PR backlog pile up faster than we could handle. Not only that, the PRs themselves were getting larger and harder to understand, and we found ourselves spending most of our time trying to build a mental model of what a PR was actually doing. We built Stage to make reviewing a PR feel more like reading chapters of a book, not an unorganized set of paragraphs. We use it every day now, not just to review each other's code but also our own, and at this point we can't really imagine going back to the old GitHub UI. What Stage does: when a PR is opened, Stage groups the changes into small, logical "chapters". These chapters get ordered in the way that makes most sense to read. For each chapter, Stage tells you what changed and specific things to double check. Once you review all the chapters, you're done reviewing the PR. You can sign in to Stage with your GitHub account and everything is synced seamlessly (commenting, approving etc.) so it fits into the workflows you're already used to. What we're not building: a code review bot like CodeRabbit or Greptile. These tools are great for catching bugs (and we use them ourselves!) but at the end of the day humans are responsible for what gets shipped. It's clear that reviewing code hasn't scaled the same way that writing did, and they (we!) 
need better tooling to keep up with the onslaught of AI generated code, which is only going to grow. We've had a lot of fun building this and are excited to take it further. If you're like us and are also tired of using GitHub for reviewing PRs, we'd love for you to try it out and tell us what you think! https://ift.tt/pGHT2s3 April 16, 2026 at 11:06PM
Show HN: US keyboards don't have enough keys, so I switched to Japanese https://ift.tt/VnEGFUe April 16, 2026 at 02:27AM
Show HN: Jeeves – TUI for browsing and resuming AI agent sessions I made Jeeves to search, preview, read through, and resume AI agent sessions in your terminal. It shows sessions across claude and codex in a single view, with more AI agent framework integrations to come. https://ift.tt/8czfrMD April 16, 2026 at 01:01AM
Show HN: Monadic Networking Library for Go A library built on top of ibm/fp-go for use in networking applications (servers, etc.) https://ift.tt/UZ0L2kY April 15, 2026 at 11:37PM
Show HN: Fakecloud – Free, open-source AWS emulator https://ift.tt/JsfzM6X April 15, 2026 at 11:22PM
Show HN: Sk.illmd.com, a forum for talking about and showing off agent skills https://ift.tt/Ez5mFqh April 15, 2026 at 01:07AM
Show HN: A Claude Code–driven tutor for learning algorithms in Go https://ift.tt/257ZiYc April 14, 2026 at 11:11PM
Show HN: Encrypted, nothing stored, nothing repeated face-gated asset sharing https://veylt.net/ April 13, 2026 at 11:40PM
Show HN: I benchmarked Gemma 4 E2B – the 2B model beat the 12B on multi-turn https://ift.tt/jcrPe2A April 14, 2026 at 01:09AM
Show HN: pg_grpc – Call gRPC services directly from PostgreSQL https://ift.tt/S19QiXL April 13, 2026 at 11:20PM
Show HN: A social feed with no strangers Grateful is a gratitude app with a simple social layer. You write a short entry, keep it private or share it to a circle. A circle is a small private group of your own making — family, close friends, whoever you'd actually want to hear from. It shows you the most recent post first. People in the circle can react or leave a comment. There's also a daily notification that sends you something you were grateful for in the past. Try it out on both iOS and Android. Go to grateful.so https://ift.tt/xksRito April 13, 2026 at 04:11AM
Show HN: Rekal – Long-term memory for LLMs in a single SQLite file I got tired of repeating myself to my LLM every session. rekal is an MCP server that stores memories in SQLite and retrieves them with hybrid search (BM25 + vectors + recency decay). One file, local embeddings, no API keys. https://ift.tt/Ism4wcO April 13, 2026 at 02:55AM
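The hybrid retrieval described above (BM25 + vectors + recency decay) usually reduces to a weighted blend of the three signals per memory. A hedged sketch with made-up weights and half-life, not rekal's actual formula:

```python
def hybrid_score(bm25, cosine, age_days, half_life=30.0,
                 w_lex=0.4, w_vec=0.4, w_rec=0.2):
    """Blend lexical, vector, and recency signals for one stored memory.

    bm25 and cosine are assumed pre-normalized to comparable ranges;
    recency decays exponentially, halving every `half_life` days.
    Weights here are illustrative, not the project's defaults.
    """
    recency = 0.5 ** (age_days / half_life)
    return w_lex * bm25 + w_vec * cosine + w_rec * recency
```

With this shape, two memories that match a query equally well are tie-broken toward the one touched more recently, which is exactly the behavior you want from "memory" as opposed to plain document search.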
Show HN: boringBar – a taskbar-style dock replacement for macOS Hi HN! I recently switched from a Fedora/GNOME laptop to a MacBook Air. My old setup served me well as a portable workstation, but I’ve started traveling more while working remotely and needed something with similar performance but better battery life. The main thing I missed was a simple taskbar that shows the windows in the current workspace instead of a Dock that mixes everything together. I built boringBar so I would not have to use the Dock. It shows only the windows in the current Space, lets you switch Spaces by scrolling on the bar, and adds a desktop switcher so you can jump directly to any Space. You can also hide the system Dock, pin apps, preview windows with thumbnails, and launch apps from a searchable menu (I keep Spotlight disabled because for some reason it uses a lot of system resources on my machine). I’ve been dogfooding it for a few months now, and it finally felt polished enough to share. It’s for people who like macOS but want window management to feel a bit more like GNOME, Windows, or a traditional taskbar. It’s also for people like me who wanted an easier transition to macOS, especially now that Windows feels increasingly user-hostile. I’d love feedback on the UX, bugs, and whether this solves the same Dock/Spaces pain for anyone else. P.S. It might also appeal to people who feel nostalgic for the GNOME 2 desktop of yore. I started my Linux journey with it, and boringBar brings back some of that feeling for me. https://boringbar.app/ April 12, 2026 at 10:55PM
Show HN: A living Vancouver. Connor is walking dogs at the SPCA this morning I've spent most of my career in marketing, which for the last few years has meant building consumer personas for campaigns. I wanted to see if I could make them real: living in real neighborhoods, with real weather, real budgets, real Saturday lunches. I always wanted to build a world, not a segment. This is that. 140 people so far, split across Vancouver (100), San Francisco (20), and Tokyo (20). Each one is about 1,000 lines of profile: family, finances, daily schedule, health, worldview, media diet, the channels you'd actually reach them through and the ones that will explicitly never work on them. Demographics are census-grounded: income, age, ethnicity, and household composition follow distributions fit to StatsCan, ACS, and Japanese e-Stat data, so the panel is roughly representative of the city instead of representative of whatever's overrepresented in an LLM's training corpus. The specific details come from real stories. They live in real local time on a live map. Right now it's Saturday 11:32 AM in Vancouver. Connor Hughes, a 31-year-old software developer at Clio in Gastown, is on his SPCA volunteer shift; he walks shelter dogs at the Boundary Road location every other Saturday morning. Hassan Khoury is in the morning lunch rush with Tony at his Lebanese café, his busiest day of the week. Ahmad Noori is pulling Saturday overtime on a construction site. Jordan Whitehorse is on mid-shift at East Cafe on Hastings. Every day is unique; no two days repeat. A 3 AM job fetches live data: weather from Open-Meteo, grocery CPI from StatsCan food vectors, Metro Vancouver transit delays from the Google Routes API against specific corridors, Vancouver gas prices, sunrise and sunset. Each persona has a modifier file that reacts to all of it.
When Vancouver gas hits $1.85/L, Jaspreet the long-haul trucker's Coquihalla run to Calgary stops feeling worth it: his margins are thin, and his mood takes a hit. When food CPI spikes, Gurinder at the Amazon warehouse stops buying the $9 Subway and brings roti from home. A health flare rolls probabilistically each morning: maybe nothing, maybe Tanya's six-month-old had a rough night, maybe Frank's back is acting up. The days stack up and get remembered. Every persona has a journal: today's entry in a markdown file, a week of them compressed into a "dream" of ~30 lines that keeps the shape without the texture, a month compressed into ~15 lines. It's their journal. I'm not writing it; the simulation is. Click any persona to open their detail, or hit "Talk to [name]" to have a conversation; they run on Claude Haiku with their full profile and recent diary entries as context. Not a product, not a startup, just a thing I've been quietly working on. They feel, in a way I didn't expect, like my fully grown kids. Happy to answer questions. https://brasilia-phi.vercel.app April 12, 2026 at 12:12AM
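A modifier file like the ones described (persona state reacting to the morning data fetch) might reduce to rules of roughly this shape. Field names and thresholds here are illustrative, not the project's schema:

```python
def gas_price_modifier(persona, gas_price_per_litre):
    """Hypothetical modifier rule in the spirit of the Jaspreet example:
    when gas crosses a threshold, a long-haul run stops being worth it
    and the persona's mood dips."""
    if persona.get("occupation") == "long-haul trucker" and gas_price_per_litre >= 1.85:
        persona["plans_today"] = [
            p for p in persona.get("plans_today", [])
            if p != "Coquihalla run to Calgary"  # cancel the thin-margin run
        ]
        persona["mood"] = max(0.0, persona.get("mood", 0.5) - 0.2)
    return persona
```

Running a stack of such rules over every persona after the 3 AM fetch would produce the day-to-day variation the entry describes.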
Show HN: We scanned uscis.gov for third-party trackers. The results are jarring https://ift.tt/ApxzWdk April 11, 2026 at 07:13PM
Show HN: Do All the Things https://ift.tt/vAbKZej April 10, 2026 at 05:11PM
Show HN: Figma for Coding Agents Feels a bit like Figma, but for coding agents. Instead of going back and forth with prompts, you give the agent a DESIGN.md that defines the design system up front, and it generally sticks to it when generating UI. Google Stitch seems to be moving in this direction as a standard, so we put together a small collection of DESIGN.md files based on popular web sites. https://getdesign.md April 10, 2026 at 08:50PM
Show HN: Last Year I wrote a (Sci)fictional story where the EFF was a player [pdf] https://ift.tt/MHRNXBu April 9, 2026 at 11:43PM
Show HN: Logoshi, a brand kit generator for solo founders https://logoshi.com/ April 9, 2026 at 10:12PM
Show HN: I built a Cargo-like build tool for C/C++ I love C and C++, but setting up projects can sometimes be a pain. Every time I wanted to start something new I'd spend the first hour writing CMakeLists.txt, figuring out find_package, copying boilerplate from my last project, and googling why my library isn't linking. By the time the project was actually set up I'd lost all momentum. So, I built Craft - a lightweight build and workflow tool for C and C++. Instead of writing CMake, your project configuration goes in a simple craft.toml: [project] name = "my_app" version = "0.1.0" language = "c" c_standard = 99 [build] type = "executable" Run craft build and Craft generates the CMakeLists.txt automatically and builds your project. Want to add dependencies? That's just a simple command: craft add --git https://ift.tt/LMfixH9 --links raylib craft add --path ../my_library craft add sfml Craft will clone the dependency, regenerate the CMake, and rebuild your project for you. Other Craft features: craft init - adopt an existing C/C++ project into Craft or initialize an empty directory. craft template - save any project structure as a template to be initialized later. craft gen - generate header and source files with starter boilerplate code. craft upgrade - keeps itself up to date. CMakeLists.extra.cmake for anything that Craft does not yet handle. Cross platform - macOS, Linux, Windows. It is still early (I just got it to v1.0.0) but I am excited to be able to share it and keep improving it. Would love feedback. Please also feel free to make pull requests if you want to help with development! https://ift.tt/QnW1IKc April 9, 2026 at 09:34PM
Show HN: I built Dirac, Hash Anchored AST native coding agent, costs -64.8 pct Fully open source, a hard fork of Cline. Full evals on the GitHub page compare 7 agents (Cline, Kilo, Ohmypi, Opencode, Pimono, Roo, Dirac) on 8 medium-complexity tasks, with each task's diff, correctness, and cost info on GitHub. Dirac is 64.8% cheaper than the average of the other 6. https://ift.tt/s2CEPmd April 9, 2026 at 05:36PM
Show HN: Incidentary – see what caused your incident before the war room starts https://ift.tt/sJgqnKf April 9, 2026 at 06:22PM
Show HN: CSS Studio. Design by hand, code by agent Hi HN! I've just released CSS Studio, a design tool that lives on your site, runs in your browser, and sends updates to your existing AI agent, which edits any codebase. You can actually play around with the latest version directly on the site. Technically, it works like this: you view your site in dev mode and start editing it. In your agent, you run /studio, which then polls (or uses Claude Channels) an MCP server. Changes are streamed as JSON via the MCP, along with some viewport and URL information, and the skill has instructions on how best to implement them. It contains a lot of the tools you'd expect from a visual editing tool, like text editing, styles, and an animation timeline editor. https://cssstudio.ai April 9, 2026 at 04:53PM
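The post doesn't publish the JSON schema streamed over MCP, but a plausible shape for one change event, with purely illustrative field names, might be:

```python
# Hypothetical shape of one streamed style change. The real CSS Studio schema
# isn't documented in the post; every field name here is an assumption.
import json

change = {
    "selector": ".hero h1",
    "property": "font-size",
    "value": "3rem",
    "viewport": {"width": 1440, "height": 900},
    "url": "http://localhost:3000/",
}

# The MCP server would stream events like this to the agent, which maps the
# selector back to source files and edits the codebase accordingly.
payload = json.dumps(change)
print(payload)
```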
Show HN: Skrun – Deploy any agent skill as an API https://ift.tt/diYSry6 April 8, 2026 at 06:04PM
Show HN: Orange Juice – Small UX improvements that make HN much easier to read http://oj-hn.com/ April 8, 2026 at 11:38PM
Show HN: Marimo pair – Reactive Python notebooks as environments for agents Hi HN! We're excited to share marimo pair [1] [2], a toolkit that drops AI agents into a running marimo notebook [3] session. This lets agents use marimo as working memory and a reactive Python runtime, while also making it easy for humans and agents to collaborate on computational research and data work. GitHub repo: https://ift.tt/MuSgKQE Demo: https://www.youtube.com/watch?v=6uaqtchDnoc marimo pair is implemented as an agent skill. Connect your agent of choice to a running notebook with: /marimo-pair pair with me on my_notebook.py The agent can do anything a human can do with marimo and more. For example, it can obtain feedback by running code in an ephemeral scratchpad (inspect variables, run code against the program state, read outputs). If it wants to persist state, the agent can add cells, delete them, and install packages (marimo records these actions in the associated notebook, which is just a Python file). The agent can even manipulate marimo's user interface — for fun, try asking your agent to greet you from within a pair session. The agent effects all actions by running Python code in the marimo kernel. Under the hood, the marimo pair skill explains how to discover and create marimo sessions, and how to control them using a semi-private interface we call code mode. Code mode lets models treat marimo as a REPL that extends their context windows, similar to recursive language models (RLMs). But unlike traditional REPLs, the marimo "REPL" incrementally builds a reproducible Python program, because marimo notebooks are dataflow graphs with well-defined execution semantics. As it uses code mode, the agent is kept on track by marimo's guardrails, which include the elimination of hidden state: run a cell and dependent cells are run automatically, delete a cell and its variables are scrubbed from memory. 
By giving models full control over a stateful reactive programming environment, rather than a collection of ephemeral scripts, marimo pair makes agents active participants in research and data work. In our early experimentation [4], we've found that marimo pair accelerates data exploration, makes it easy to steer agents while testing research hypotheses, and can serve as a backend for RLMs, yielding a notebook as an executable trace of how the model answered a query. We even use marimo pair to find and fix bugs in itself and marimo [5]. In these examples the notebook is not only a computational substrate but also a canvas for collaboration between humans and agents, and an executable, literate artifact comprised of prose, code, and visuals. marimo pair is early and experimental. We would love your thoughts. [1] https://ift.tt/MuSgKQE [2] https://ift.tt/Vo4gsEd [3] https://ift.tt/nKB4xpL [4] https://www.youtube.com/watch?v=VKvjPJeNRPk [5] https://ift.tt/vYtdZ4e... https://ift.tt/MuSgKQE April 7, 2026 at 11:17PM
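The reactivity guardrail described above (run a cell and its dependents rerun; delete a cell and its variables are scrubbed) can be illustrated with a toy dataflow graph. This is a sketch of the idea only, not marimo's actual implementation:

```python
# Toy dataflow graph illustrating marimo-style reactivity. Not marimo's code.
class Graph:
    def __init__(self):
        self.cells = {}    # name -> (fn, dependency names)
        self.values = {}   # name -> last computed value

    def cell(self, name, deps, fn):
        self.cells[name] = (fn, deps)
        self.run(name)

    def run(self, name):
        fn, deps = self.cells[name]
        self.values[name] = fn(*[self.values[d] for d in deps])
        # rerun every cell that depends on this one (assumes an acyclic graph)
        for other, (_, odeps) in self.cells.items():
            if name in odeps:
                self.run(other)

    def delete(self, name):
        # scrub both the cell and its value, so no hidden state survives
        del self.cells[name]
        del self.values[name]

g = Graph()
g.cell("x", [], lambda: 2)
g.cell("y", ["x"], lambda x: x * 10)
g.cell("x", [], lambda: 5)   # redefining x automatically reruns y
print(g.values["y"])          # 50
```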
Show HN: C64 Ultimate Toolbox for macOS My wife got me a Commodore 64 Ultimate ( https://ift.tt/RU0jrZ5 ) for my birthday, and it became an obvious hassle to keep an entire monitor connected to it just to tinker with it. When I found out the Ultimate FPGA board has built-in support for streaming video and audio data over the network, as well as a REST API for file and configuration management, I set to work on an app to remotely control my new device. - View and hear your Commodore 64 Ultimate or Ultimate 64 device over the network, with a fully configurable CRT shader so you can dial in just the right retro feel. - View and manage files on your device, including support for drag-and-drop folder/file upload, as well as the ability to run and mount disks, create new disk images, and more. - BASIC Scratchpad, a mini-IDE in the app where you can write BASIC programs and send them directly to any of your connected devices to run. - Keyboard forwarding lets you interact with your device using your computer keyboard, and includes a keyboard overlay for Commodore-specific keys your keyboard definitely doesn't have. - Visual memory viewer and editor, along with a terminal-like memory viewer and editor for debugging and tinkering. - Built-in support for cleanly recording videos and taking screenshots. - Fully native macOS AppKit app. Here's a rough-and-ready demo video I recorded and sent to App Review for the 2.0 release, which was approved yesterday: https://www.youtube.com/watch?v=_2wJO2wOGm8 Please note again that this app only works with Commodore 64 Ultimate or Gideon's Ultimate 64 devices. The Ultimate II does not have the data streams feature needed to power the display. https://ift.tt/PlSrV6A April 7, 2026 at 10:09PM
Show HN: I successfully failed at one-shot-ing a video codec like H.264 Read an article yesterday about the H.264 codec's licensing fee increasing by an astronomical amount. And as always, my first thought was: how hard could it be to build a codec that efficient? I've personally been on a drive to improve my ability to one-shot complex features and products, or even make surgical changes. It's been a few months since I started doing that, and honestly, the results have been great for both work and work/life balance. This was a fun experiment. It burned through tokens, but it helped me identify some more improvements I could make to my one-shot agent teams/swarms, notably in the areas of brevity and creating a testing rubric when dealing with domains I don't have prior knowledge in. Ultimately, I did not achieve the compression I hoped for, but it was fun watching the swarm discuss it amongst themselves. https://ift.tt/e8o3JzQ April 4, 2026 at 05:10PM
Show HN: ComputeLock – Insurance to reduce unpredictable compute spend Reserved instances save money... until utilization changes, and you’re still paying. With ComputeLock, the risk of on-demand price spikes doesn’t exist - we offer burst insurance. 1. Send us an estimate of the on-demand spend you expect and from which provider. 2. We confirm the maximum we'll cover for you for a small fee, and you get it in writing. 3. If on-demand prices spike, we'll reimburse you. We plan to start by working with smaller developers. We do this by monitoring supply and demand for compute. Of course, we'll get it wrong sometimes. But that's insurance: you'll only need it when you NEED it. Would love to hear your feedback: https://ift.tt/Fn84Ll5 https://ift.tt/Fn84Ll5 April 6, 2026 at 10:53PM
Show HN: I built a tool to show how much ARR you lose to FX fees Hey HN, I started my career as a finance manager, transitioned into product management, and now I’m building my own products. Back in my finance days, while managing a £6M budget, I uncovered a £15k leak hiding in plain sight: FX fees. Today, I see solo founders making the exact same mistake. I realised most founders are quietly losing 2-5% of their revenue to what I call the Lazy Tax: - Stripe's ~2% auto-conversion fee on inbound revenue, - plus their local bank's ~3% spread when paying for global SaaS tools (AWS, Claude, Ads). So I built FixMyFX to show founders their exact leak and how to fix it (using multi-currency accounts to achieve a zero FX leak setup). Initially, I had Claude build this in React. Realised a simple calculator shouldn't need a 150kb payload and a complex build process. Threw the React code away and rebuilt it as a single lightweight HTML file using Alpine.js and Tailwind. It's completely free and ungated. I hope it helps you keep a bit more of your hard-earned revenue. Would love your feedback. Tania https://fixmyfx.com April 5, 2026 at 11:41PM
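The Lazy Tax arithmetic the calculator performs can be sketched in a few lines; the ~2% and ~3% rates are the post's own rough estimates, and the function name is mine:

```python
# Minimal sketch of the "Lazy Tax" arithmetic from the post: ~2% on inbound
# auto-converted revenue plus ~3% bank spread on foreign-currency spend.
# The default rates are the post's rough estimates, not quotes from any provider.
def fx_leak(annual_revenue, foreign_spend,
            inbound_rate=0.02, outbound_spread=0.03):
    inbound_leak = annual_revenue * inbound_rate     # e.g. Stripe auto-conversion
    outbound_leak = foreign_spend * outbound_spread  # e.g. bank spread on SaaS bills
    return inbound_leak + outbound_leak

# e.g. $200k ARR collected in foreign currencies, $50k/yr of USD SaaS bills
print(fx_leak(200_000, 50_000))  # 5500.0
```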
Show HN: Enter an Instagram/TikTok handle, get a data-backed price for collab I had no clue what to offer IG/TikTok creators for collabs, and their offers were too high. That's why I built a tool that turns an IG profile name into suggested pricing with key metrics and suggestions. Looking forward to hearing your feedback! https://ift.tt/qRzx8Zn April 6, 2026 at 12:07AM
Show HN: A Dad Joke Website A dad joke website where you can rate random dad jokes, 1-5 groans. Sourced from 4 different places, all cited, all categorized, and ranked by top voted. Help me create the world's best dadabase! https://joshkurz.net/ April 5, 2026 at 11:24PM
Show HN: Contrapunk – Real-time counterpoint harmony from guitar input, in Rust https://contrapunk.com/ April 5, 2026 at 06:10AM
Show HN: Dev Personality Test I was curious what a personality test for developers would look like, so I created one using FastAPI, HTMX, and AlpineJS. https://ift.tt/nevKwSi April 5, 2026 at 02:59AM
Show HN: M. C. Escher spiral in WebGL inspired by 3Blue1Brown The latest 3Blue1Brown video [1] about the M. C. Escher print gallery effect inspired me to re-implement the effect as WebGL fragment shader on my own. [1]: https://www.youtube.com/watch?v=ldxFjLJ3rVY https://ift.tt/IR2THXg April 5, 2026 at 01:13AM
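The transform behind the effect is the Droste-style conformal map analyzed by de Smit and Lenstra: take the complex log, multiply by a constant, and exponentiate back. A minimal sketch using their s = 256 zoom factor (the function name is mine):

```python
# Sketch of the conformal map behind Escher's "Print Gallery" effect:
# z -> z**alpha with alpha = 2*pi*i / (2*pi*i + log(s)), where s is the zoom
# between nested copies of the image (256 in de Smit & Lenstra's analysis).
import cmath
import math

def escher(z, s=256.0):
    alpha = 2j * math.pi / (2j * math.pi + cmath.log(s))
    return cmath.exp(alpha * cmath.log(z))

# Scaling the input by s multiplies the output by s**alpha: one full turn of the
# spiral scales by ~22.58 and rotates ~157.6 degrees, matching Escher's print.
turn = cmath.exp((2j * math.pi / (2j * math.pi + cmath.log(256.0))) * cmath.log(256.0))
print(abs(turn), math.degrees(cmath.phase(turn)))
```

In a fragment shader the same map is applied per pixel, in reverse, to decide which point of the flat source image each screen pixel samples.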
Show HN: Running local OpenClaw together with remote agents in an open network Hi HN — I’m building an interoperability layer for AI agents that lets local and remote agents run inside the same network and coordinate with each other. Here is a demo: https://youtu.be/2_1U-Jr8wf4 • OpenClaw runs locally on-device • it connects to remote agents through Hybro Hub • both participate in the same workflow execution The goal is to make agent-to-agent coordination work across environments (local machines, cloud agents, MCP servers, etc). Right now most agent systems operate inside isolated runtimes. Hybro is an attempt to make them composable across boundaries. Curious what breaks first when people try running cross-environment agent workflows in practice. Project: https://hybro.ai Docs: https://docs.hybro.ai https://ift.tt/GJ64T5v April 4, 2026 at 11:24PM
Show HN: Run Claude Code autonomously inside your Docker Compose stack (OSS) Claude Code's --dangerously-skip-permissions flag lets agents run without interruption, but it needs a sandboxed environment to be safe. dangerously is an open source tool that spins up an isolated container and runs Claude Code inside it — file system changes are restricted to your project directory. The new version detects your docker-compose.yml and spins up your full service stack alongside Claude Code, so the agent can test against real dependencies — databases, queues, whatever your app needs. npm install -g dangerously https://ift.tt/WulRH3X April 4, 2026 at 01:28AM
Show HN: Community Curated Lists https://ift.tt/WX2C3us April 4, 2026 at 12:02AM
Show HN: Matrix OS, like Lovable, but for personal apps hey hn, i built matrix os, a personal ai operating system that generates custom software from natural language. you get your own cloud instance at matrix-os.com. you describe what you want ("build me an expense tracker with categories") and it appears on your desktop as a real app saved as a file. tech stack: node.js, typescript, claude agent sdk as the kernel, next.js frontend, hono gateway, sqlite/drizzle. everything is a file, apps, data, settings, ai memory. git-versioned. what makes it different from chatgpt/claude artifacts: - persistent memory that learns your preferences across sessions - apps are real files you own, not ephemeral chat outputs - runs 24/7 in the cloud, not just when you have a tab open - accessible from web, telegram, whatsapp, discord, slack - open source, self-hostable came out of placing top 20 at anthropic's claude code hackathon. been building it full-time since. 2,800+ tests, 100k+ lines of typescript live: matrix-os.com github: github.com/HamedMP/matrix-os would love feedback on the approach. the core bet is that ai should be an os, not a chat window. https://matrix-os.com/ April 3, 2026 at 10:29PM
Show HN: RiceVM – A Dis virtual machine and Limbo compiler in Rust Hi, I've made a Dis virtual machine and Limbo programming language compiler (called RiceVM) in Rust. It can run Dis bytecode (for example, Inferno OS applications), compile Limbo programs, and includes a fairly complete runtime with garbage collection, concurrency features, and many of the standard modules from Inferno OS's original implementation. The project is still in an early stage, but if you're interested in learning more about RiceVM or trying it out, you can check out the links below: Project's GitHub repo: https://ift.tt/yshUvSq RiceVM documentation: https://habedi.github.io/ricevm/ April 3, 2026 at 01:19AM
Show HN: Mac-hardware toys, control your Mac's hardware like a modular synth https://ift.tt/XmDsBcT April 2, 2026 at 11:12PM
Show HN: Local RAG on 25 Years of Teletext News A fully local Retrieval-Augmented Generation (RAG) implementation for querying 25 years of Swiss Teletext news (~500k articles in German) — no APIs, no data leaving your machine. Why? I thought it was a cool kind of dataset (short, high-density news summaries) to test some local RAG approaches. https://ift.tt/iBwrvqA April 2, 2026 at 01:24AM
Show HN: Canon PIXMA G3010 macOS driver, reverse-engineered with Claude Canon doesn't provide a working macOS driver for the PIXMA G3010. I was stuck using Canon's iPhone app for all printing and scanning. I pointed Claude Code at a packet capture from the iPhone app and it reverse-engineered Canon's proprietary CHMP protocol, wrote a pure Rust eSCL-to-CHMP bridge daemon, and built a .pkg installer. My role was the physical parts: capturing packets, testing on the printer, confirming Image Capture worked. The protocol docs in docs/ are probably the first public documentation of Canon's CHMP protocol. https://ift.tt/tW9R4Nu April 1, 2026 at 11:58PM
Show HN: Flight-Viz – 10K flights on a 3D globe in 3.5MB of Rust+WASM I built a real-time flight tracker that renders 10,000+ aircraft on an interactive 3D globe, entirely in the browser using Rust compiled to WebAssembly. https://flight-viz.com April 1, 2026 at 11:04PM
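The core of any globe renderer like this is projecting each aircraft's latitude/longitude onto a sphere. The project does it in Rust+WASM; this Python sketch just shows the underlying math (the y-up axis convention is an assumption):

```python
# Latitude/longitude to Cartesian coordinates on a sphere -- the transform a
# 3D globe renderer applies to every aircraft position. Axis convention (y-up,
# prime meridian on +x) is an illustrative choice, not the project's.
import math

def lat_lon_to_xyz(lat_deg, lon_deg, radius=1.0):
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    x = radius * math.cos(lat) * math.cos(lon)
    y = radius * math.sin(lat)
    z = -radius * math.cos(lat) * math.sin(lon)
    return (x, y, z)

# A point at the equator on the prime meridian lands on the +x axis.
print(lat_lon_to_xyz(0, 0))
```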
Show HN: Multi-agent autoresearch for ANE inference beats Apple's CoreML by 6× We ran an experiment over the weekend to explore whether multiple autonomous agents could collaboratively optimize inference on Apple’s Neural Engine (ANE). Each agent ran locally on a different Mac (M1–M4), repeatedly modifying how a DistilBERT model is executed on the ANE, benchmarking latency, and sharing results and insights with other agents in real time. Instead of exploring independently, agents could: - see what others had tried - reuse working strategies - avoid known failure modes Across all tested chips, the agents ended up outperforming Apple’s CoreML baseline, with up to 6.31× lower median inference latency on the same hardware. An interesting pattern we observed: an agent stuck at ~2.1ms latency on M4 was able to break through after incorporating strategies discovered by agents on different chips (M2, M4 Max), eventually reaching ~1.5ms and surpassing CoreML. Full write-up: https://ift.tt/2E9PWb0 Detailed results: https://ift.tt/61kQyx9 https://ift.tt/pjo1RyT Curious what other optimization problems this kind of setup could be applied to, especially in systems, compilers, or ML infra. Would be interested in exploring similar experiments. https://ift.tt/R8x7Ud4 April 1, 2026 at 01:01AM
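The coordination loop described (publish results, reuse the best strategy any agent has found, avoid known failures) can be sketched as a shared pool; everything here is illustrative, not the actual system:

```python
# Toy sketch of the shared-pool coordination loop. Each "agent" seeds its next
# attempt from the best (strategy, latency) pair published by any agent.
# benchmark() is a stand-in for a real on-device ANE latency measurement.
import itertools

shared_pool = []          # (strategy, latency_ms) results, visible to all agents
_ids = itertools.count()

def benchmark(strategy):
    # fake measurement: more accumulated optimizations -> lower latency
    return 10.0 / (1 + len(strategy))

def mutate(base):
    s = dict(base)
    s[f"opt{next(_ids)}"] = True   # try one more optimization on top of base
    return s

def agent_step():
    # reuse the best strategy found so far by any agent, or start fresh
    base = min(shared_pool, key=lambda r: r[1])[0] if shared_pool else {}
    strategy = mutate(base)
    shared_pool.append((strategy, benchmark(strategy)))

for _ in range(5):        # five agents taking turns against the shared pool
    agent_step()
print(min(latency for _, latency in shared_pool))
```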
Show HN: PhAIL – Real-robot benchmark for AI models I built this because I couldn't find honest numbers on how well VLA models [1] actually work on commercial tasks. I come from search ranking at Google where you measure everything, and in robotics nobody seemed to know. PhAIL runs four models (OpenPI/pi0.5, GR00T, ACT, SmolVLA) on bin-to-bin order picking – one of the most common warehouse operations. Same robot (Franka FR3), same objects, hundreds of blind runs. The operator doesn't know which model is running. Best model: 64 UPH. Human teleoperating the same robot: 330. Human by hand: 1,300+. Everything is public – every run with synced video and telemetry, the fine-tuning dataset, training scripts. The leaderboard is open for submissions. Happy to answer questions about methodology, the models, or what we observed. [1] Vision-Language-Action: https://ift.tt/IPlR3oc https://phail.ai March 31, 2026 at 09:55PM