Scraper
Spider


2026-02-18 17:29
postgres
Postgres stories from the last 14 days
34.  HN AI harness for PG –> CH migrations
The article addresses the complexities involved in migrating analytical workloads from PostgreSQL (PG) to ClickHouse with AI assistance while maintaining a unified data stack. A primary challenge is ensuring that AI-facilitated migrations are both effective and reliable, avoiding errors often termed as "AI slop" within intricate environments. Real-world migrations demand integration, scalability, reliability, and speed beyond mere functional SQL generation. This process necessitates rearchitecting data models, developing materialized views, optimizing queries, and ensuring application stack compatibility. Central to overcoming these challenges is the concept of an "agent harness," which equips AI agents with essential tools, interfaces, and context for effective migration. MooseStack, a ClickHouse-native framework, acts as this harness by creating a structured environment conducive to managing migrations. A code-driven approach enhances this process by treating the analytics stack as code through typed objects and dependencies, enabling natural AI interaction, facilitating iteration, rollback, and version control. The article also underscores the importance of fast feedback loops in successful AI-assisted migration. MooseStack supports this with IDE-based validation, local development environments (moose dev), and preview deployments to quickly identify errors. Additionally, providing agents with static context—such as existing schemas and data documentation—and dynamic feedback empowers informed decision-making during migrations. Skills and best practices tailored for ClickHouse are incorporated into the harness, guiding AI agents in implementing efficient Online Analytical Processing (OLAP) solutions. Lastly, the article highlights that reference implementations serve to reduce variance by showcasing established patterns and examples of successful migrations. These guides encourage adherence to proven practices, further aiding AI agents in executing effective data migrations from PostgreSQL to ClickHouse using MooseStack as a comprehensive facilitative framework. Keywords: #phi4, AI migration, ClickHouse, Materialized Views, MooseStack, OLAP performance, Postgres, agent harness, analytical workloads, data models, feedback loops, query abstraction, schema evolution, semantic layer
    clickhouse.com 3 hours ago
115.  HN Show HN: Seamless Auth – open-source passwordless authentication
Seamless Auth is an open-source, passwordless authentication platform tailored for modern web applications that prioritizes security and ease of use by leveraging technologies such as WebAuthn, passkeys, and OTP. Its architecture facilitates integration into existing systems by mimicking infrastructure-like behavior in the authentication process. Key features include its open-source nature with availability on GitHub, a framework-agnostic core with specific adapters for Express and a React SDK for session management. The system manages sessions using cookie-based methods without relying on redirect flows and ensures server-side validation. Additionally, it provides explicit control over CORS and origins configurations. Seamless Auth is designed for teams that prefer self-hosting their authentication infrastructure to gain full transparency into security measures and codebase. It offers a straightforward deployment via Docker, supporting local development with a Postgres database setup. Although it does not include admin UIs or billing systems in its core offering, these are available through SeamlessAuth's managed services. Originating from the need for a more secure and intuitive alternative to conventional OAuth methods, Seamless Auth aims to decrease dependence on shared multi-tenant servers and complex SDKs. For production environments, best practices include using HTTPS, configuring secure cookies, monitoring authentication activities, and regularly rotating keys and backing up databases. The project welcomes contributions through guidelines in CONTRIBUTING.md and recommends private reporting for security issues. It is licensed under AGPL v3.0, with commercial licensing options available to avoid AGPL constraints. Additional details on the system's setup and services are accessible via SeamlessAuth documentation and their main site. Keywords: #phi4, AGPL-3.0-only, CORS, Docker, Express, HTTPS, OTP, Postgres, React SDK, Seamless Auth, WebAuthn, commercial licenses, database backups, open-source, passkeys, passwordless authentication, secure cookies, security-conscious, self-hosting, session validation
    github.com 8 hours ago
142.  HN Tell HN: Technical debt isn't messy code, it's architectural compound interest
The discussion underscores that technical debt is often rooted in suboptimal architectural decisions rather than merely messy code, which can significantly hinder scalability as projects grow, especially when teams delay refactoring core architecture elements. A notable debate centers on the use of UUIDs versus integers for database IDs; although UUIDs were initially seen as less efficient and harder to debug due to their non-sequential nature, they are now preferred because they simplify merging databases and prevent ID collisions without necessitating costly migrations later. Another critical point is the rigidity of normalized database schemas, which often require frequent `ALTER TABLE` operations at scale; a proposed solution is employing a "Mullet Schema," which combines strict columns for essential data with JSONB for additional flexibility in Postgres, thereby reducing reliance on multiple databases and easing migration processes. The article also contrasts monolithic architectures with microservices. Monoliths initially provide rapid development benefits but can lead to increased maintenance challenges as user numbers increase, a phenomenon referred to as the "Velocity Cross" occurring around 12 months or 10k users. While transitioning to microservices can maintain development velocity, it introduces early-stage complexities. The discussion concludes by highlighting that while monolithic architectures offer short-term advantages, they pose long-term risks if not intended for eventual disposal. Architectural decisions should thus consider the project's anticipated scale and growth trajectory. Additionally, there is an inquiry into whether advancements in tooling have sufficiently mitigated the overheads of microservices to make them a more practical starting point in 2024. Keywords: #phi4, ALTER TABLE, Docker compose, Integer vs UUID, JSONB column, K8s, Mullet Schema, Postgres, Technical debt, Velocity Cross, architectural decisions, database schema rigidity, distributed tracing, eventual consistency, feature velocity, legacy migration, messy code, microservices, monolith, service boundaries, structural coupling
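The "Mullet Schema" idea above is easy to make concrete. Below is a minimal sketch in Python with psycopg2, assuming an illustrative events table (none of these names come from the thread): strict, indexed columns up front for fields every query touches, and a JSONB column in back for attributes still in flux, so new fields land without an ALTER TABLE.

```python
# Sketch of a "Mullet Schema" table: business in the front, party in the back.
# Table and column names are illustrative. gen_random_uuid() is built in
# since Postgres 13.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS events (
    id         BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    user_id    UUID        NOT NULL,
    event_type TEXT        NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
    payload    JSONB       NOT NULL DEFAULT '{}'::jsonb  -- the flexible half
);
-- A GIN index keeps ad-hoc queries on the JSONB half usable.
CREATE INDEX IF NOT EXISTS events_payload_gin ON events USING gin (payload);
"""

with psycopg2.connect("dbname=app") as conn, conn.cursor() as cur:
    cur.execute(DDL)
    # New attributes land in JSONB with no schema migration:
    cur.execute(
        "INSERT INTO events (user_id, event_type, payload) "
        "VALUES (gen_random_uuid(), 'signup', %s)",
        ['{"plan": "pro", "referrer": "hn"}'],
    )
```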
    news.ycombinator.com 10 hours ago
169.  HN Show HN: Sovereign – Multi-agent OS with GraphRAG memory and HITL checkpoints
Sovereign is a sophisticated multi-agent operating system crafted to overcome the constraints of existing agent frameworks by balancing safety with functionality. It incorporates several key innovations: Runtime HITL Checkpoints facilitate pausing and resuming execution at critical junctures; a Hybrid Memory System integrates vector, keyword, and graph-based memory without external dependencies; Enhanced Security measures such as sandboxing, OTP pairing, and encrypted secrets ensure robust protection. The system supports more than 22 language model providers with customizable policies and enables multi-agent collaboration through councils that engage in debate rounds, soul evolution, and skill memory. Built with Node.js, Prisma, Postgres/Redis, and Docker, Sovereign offers comprehensive APIs for mission contracts, task tracking, risk scoring, and action logging, among other functionalities. It ensures secure execution with features like path jail sandboxing and runtime checkpoints. Recent enhancements include a hybrid GraphRAG memory system, deep observability telemetry, an asynchronous core refactor, and production hardening. Sovereign supports flexible backend configurations between file-based and database-backed systems (Postgres/Redis) while allowing customizable model policies for various tasks. The platform features extensive API endpoints covering health checks, dashboard access, queue statistics, LLM interactions, plugin management, runtime trust, channel integrations, council sessions, security fabric, tunnels, memory operations, evaluations, observability, and agent skill development based on performance outcomes. As an open-source project licensed under Apache 2.0, Sovereign provides detailed documentation for setup, contribution, deployment, and CI/CD processes, making it accessible for a broad range of users interested in its capabilities. Keywords: #phi4, Docker deployment, GraphRAG memory, HITL checkpoints, LLM providers, Multi-agent OS, Postgres/Redis integration, Redis, channel gateway, councils, hybrid memory engine, multi-agent councils, observability telemetry, plugin SDK, runtime risk scoring, security sandbox
    github.com 15 hours ago
178.  HN Migrating from Postgres to ClickHouse for faster dashboards
This guide provides a strategic approach for teams aiming to enhance dashboard performance by transitioning from Postgres or SQL Server to ClickHouse, utilizing Change Data Capture (CDC) for real-time replication of data. The process is designed to allow the transactional database to remain unchanged while analytical queries are offloaded to ClickHouse, thus improving efficiency without disrupting existing systems. Central to this migration strategy is MooseStack, which helps model analytics layers in code, enabling safe local development and preview deployments facilitated by Fiveonefour hosting. The workflow integrates smoothly with current operations, eliminating the need for a complete overhaul of applications or data models, and caters to developers proficient in both TypeScript and Python. The guide suggests employing AI tools for translating complex queries, ensuring accuracy and efficiency throughout the transition process. Key procedural steps involve setting up local development environments and migrating dashboard components incrementally, using Fiveonefour's preview environments to guarantee secure transitions. A crucial aspect of this migration is maintaining consistent API contracts and preserving existing frontend behavior while shifting the read layer to ClickHouse queries. This method allows teams to iteratively refine their analytics layers with minimal risk to production data integrity, ensuring that performance enhancements are achieved without compromising system reliability or functionality. Keywords: #phi4, AI-assisted development, API handlers, CDC (Change Data Capture), ClickHouse, Fiveonefour, Migrating, MooseStack, OLTP, Postgres, Python, Slack community, TypeScript, analytics layer, auth, dashboard components, dashboards, environment setup, local development, local services, migration planning, migration plans, preview deployments, preview environments, production, replication, request/response contracts, routing, tooling, type-safe column access
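The cutover pattern described above (same API contract, swapped read layer) can be sketched as follows. This assumes a hypothetical signups table and endpoint, uses the community clickhouse-driver package for the ClickHouse side, and illustrates the general technique rather than MooseStack's actual tooling.

```python
# A flag routes the read path to ClickHouse while the response shape stays
# fixed, so the frontend never notices the cutover. All names illustrative.
import os
import psycopg2
from clickhouse_driver import Client

READ_FROM_CLICKHOUSE = os.environ.get("READ_FROM_CLICKHOUSE") == "1"

def daily_signups(start: str, end: str) -> list[dict]:
    """Same request/response contract regardless of the backing store."""
    if READ_FROM_CLICKHOUSE:
        sql = ("SELECT toDate(created_at) AS day, count() AS signups "
               "FROM signups WHERE created_at >= %(start)s "
               "AND created_at < %(end)s GROUP BY day ORDER BY day")
        rows = Client("localhost").execute(sql, {"start": start, "end": end})
    else:
        sql = ("SELECT created_at::date AS day, count(*) AS signups "
               "FROM signups WHERE created_at >= %(start)s "
               "AND created_at < %(end)s GROUP BY day ORDER BY day")
        with psycopg2.connect("dbname=app") as conn, conn.cursor() as cur:
            cur.execute(sql, {"start": start, "end": end})
            rows = cur.fetchall()
    # Identical JSON shape either way.
    return [{"day": str(day), "signups": n} for day, n in rows]
```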
    docs.fiveonefour.com 17 hours ago
203.  HN Show HN: SiteReady – Uptime monitoring and status pages for indie makers
SiteReady is an uptime monitoring and status page service tailored for independent creators, providing a cost-effective alternative to pricier options like Better Uptime and StatusPage.io. The platform offers users email alerts when their sites go down and allows them to create public-facing status pages accessible to end-users. SiteReady's free tier includes two monitors with checks every five minutes. For those needing more extensive monitoring, paid plans offer up to 50 monitors at shorter intervals. Developed using Laravel and Postgres, the service is launching with a special promotion of $1 per month for the first three months, eliminating the need for an upfront credit card payment. This makes it accessible while ensuring users have comprehensive tools for monitoring their online presence. Keywords: #phi4, 1-minute intervals, 2-minute checks, 30-second intervals, 5-minute checks, Better Uptime, Laravel, Postgres, SiteReady, StatusPage.io, UI feedback, URL, checks, credit card not required, downtime, email alerts, feature gaps, free tier, indie makers, intervals, launch offer, monitors, paid plans, public status page, solo founder, status pages, uptime monitoring
    siteready.app 22 hours ago
216.  HN What Neptune.ai Got Right (and How to Keep It)
Neptune.ai gained popularity due to its scalability, responsiveness, and the powerful NQL query language, which facilitated large-scale machine learning experiments. However, it faced challenges in areas such as graph user experience, workflow integration, tensor logging, and LLM support. To overcome these limitations, Trainy introduced Pluto, an experiment tracker based on MLOp, designed to ensure scalable responsiveness with a smooth migration path from Neptune. Pluto enhances query capabilities, offers a superior UI for side-by-side comparisons, and utilizes a robust backend with ClickHouse, Postgres, and a Rust ingestion server. Key improvements in Pluto include an enhanced graph user experience, seamless integration into developer workflows (such as linking to Linear/Jira), direct tensor logging support, and early LLM integration. A compatibility layer enables simultaneous data logging to both Neptune and Pluto with minimal code alterations, allowing risk-free testing of Pluto before full commitment. The migration process entails setting up dual-logging, exporting historical runs from Neptune to Pluto, validating the transition, and eventually cutting over by disabling Neptune logging, with validation feedback being crucial for resolving any issues. Pluto's hosted plan is competitively priced at $250 per seat per month, comparable to Neptune’s pricing. It is open-source under Apache-2.0 and AGPL-3.0 licenses, allowing self-hosting through Docker Compose. Trainy offers support via email or scheduled consultations for further inquiries or assistance during migration. Keywords: #phi4, ClickHouse, GPU clusters, LLM integration, MLOp, NQL, Neptune Scale, Neptuneai, Pluto, Postgres, React frontend, Rust ingestion server, compatibility layer, dev workflow, dual-logging, experiment tracker, graph UX, hosted plan, migration guide, open-source, responsiveness, scalability, side-by-side comparison, tensor logging, time-series logging
    www.trainy.ai a day ago
260.  HN AI-powered migrations from Postgres to ClickHouse
The article explores how accelerating the migration of analytical workloads from PostgreSQL (Postgres) to ClickHouse can be achieved using AI technologies, with MooseStack highlighted as a pivotal tool in this transformation. It points out that while AI has the potential to streamline such migrations, most efforts fail due to complexity and edge cases inherent in these processes. To address this challenge, the article proposes maintaining both Postgres for transactional tasks and ClickHouse for analytical purposes within a unified data stack. MooseStack emerges as a practical solution by conceptualizing the application and data stack as code, thereby easing integration and facilitating iterative development. This coding-centric approach allows developers to clearly define schemas, views, and dependencies, enhancing AI agents' capacity to manage migration tasks effectively. MooseStack aids this process through its fast feedback mechanisms, including IDE checks, local development environments (moose dev), and production-like previews that catch errors early. Furthermore, the article emphasizes equipping AI agents with necessary context, such as existing data, documentation, reusable patterns, and skills tailored for Online Analytical Processing (OLAP) migrations. This contextual knowledge, combined with reference implementations and established best practices, empowers AI agents to make more informed decisions, reducing reliance on trial-and-error methods and improving migration outcomes. In summary, MooseStack supports a structured, code-centric strategy for transitioning from Postgres to ClickHouse, making the process quicker, safer, and more reliable by enabling AI agents to effectively manage complex migrations. Keywords: #phi4, AI-powered migrations, ClickHouse, Materialized Views, MooseStack, OLAP performance, Postgres, Typescript patterns, agent harness, analytical workloads, feedback loops, query abstraction, semantic layer, unified data stack
    clickhouse.com a day ago
273.  HN Show HN: Agent Breadcrumbs – Unified Work Log Across Claude, Codex, OpenClaw
Agent Breadcrumbs is a streamlined logging solution designed to consolidate work logs across various AI clients such as Codex, Claude, OpenClaw, among others. It facilitates efficient tracking by enabling teams to either create custom schemas or use pre-defined ones for logging purposes, thereby minimizing the complexity associated with managing disparate tools. The system supports diverse output sinks including JSONL files, webhooks, and Postgres databases. A standout feature of Agent Breadcrumbs is its Model Context Protocol (MCP) tool named `log_work`, which consolidates work logs from multiple agents for one or more users into a cohesive format. It also offers starter schema profiles catering to common use cases like agent insights, delivery tracking, audit trails, and knowledge capture. A simple dashboard application complements this by allowing teams to view logged activities easily. Setting up Agent Breadcrumbs is straightforward, typically taking only a few minutes. The setup process involves running `npx -y agent-breadcrumbs`, with options for additional configuration files that allow customization of server settings or log schemas. The project repository includes packages for both the MCP server and the dashboard application, making deployment seamless. For developers working on the system, key commands include those needed to build and test both the MCP tool and the dashboard, as well as perform integration tests. Detailed configuration information is provided in the documentation housed within the repository, ensuring comprehensive guidance for users seeking to implement or extend the functionality of Agent Breadcrumbs. Keywords: #phi4, AI Clients, Agent Breadcrumbs, Agent Insights, Audit Trail, Claude, Codex, Command Line, Config File, Custom Schemas, Dashboard, Integration, JSONL, Knowledge Capture, Logging, MCP Logger, Observability, OpenClaw, Output Sinks, Postgres, Quick Start, Repository Layout, Schema, Tool Setup, Unified Work Log
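The JSONL sink is the easiest one to picture. Below is a hedged sketch of what a log_work call could append; the record fields are illustrative, not the project's real schema.

```python
# One work-log record per line, appended by whichever agent calls it.
import json
import time
from pathlib import Path

LOG_PATH = Path("breadcrumbs.jsonl")

def log_work(agent: str, user: str, summary: str, **fields) -> None:
    record = {"ts": time.time(), "agent": agent, "user": user,
              "summary": summary, **fields}
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# sample call with made-up values:
log_work("claude", "alice", "refactored auth middleware", repo="api")
```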
    github.com a day ago
303.  HN Who Owns Postgres? The MinIO Warning Sign
The article explores the dynamics of ownership and governance in open-source projects through the lens of PostgreSQL as an exemplar of effective community management, juxtaposed against cautionary tales like MinIO's departure from open-source principles. It underscores that traditional ownership methods—such as centralized copyright or control by a single entity—pose strategic risks to users due to potential unilateral changes. PostgreSQL stands out with its governance model led by the PostgreSQL Global Development Group, ensuring no single company has overriding influence over its direction or licensing. This model promotes stability and mitigates abrupt shifts often seen in commercially driven projects like MySQL under Oracle or MongoDB. The article emphasizes that community-driven open-source initiatives tend to foster vibrant ecosystems supported by various commercial entities offering diverse services around the core project. While commercial backing is not inherently detrimental, problems emerge when companies control features or licensing, evident in "open-core" models. This issue is highlighted by MinIO's license changes and subsequent abandonment of its repository, illustrating the pitfalls of company-dominated open-source strategies. The Vela Project exemplifies how using vanilla PostgreSQL can prevent reliance on a single vendor's direction while still enhancing user experience through upstream contributions to the broader community, rather than creating divergent forks. To identify risks in open-source projects tied to single companies, the article suggests looking for signs like centralized copyrights, company-owned trademarks that limit competition, and governance transparency issues. In conclusion, the article advocates for a community-driven approach in sustaining open-source initiatives such as PostgreSQL. This model contrasts with scenarios where commercial interests have undermined openness and stability, emphasizing the importance of collaborative governance to ensure long-term viability and resilience against strategic vulnerabilities. Keywords: #phi4, Apache License 2.0, CLA (Contributor License Agreement), MinIO, PostgreSQL Global Development Group, Postgres, Vela, community, distribution control, ecosystem, extensions, governance, open source, ownership, relicensing, single-company risk, trademark control, vanilla Postgres
    vela.simplyblock.io a day ago
367.  HN Show HN: PgCortex – AI enrichment per Postgres row, zero transaction blocking
pgCortex enhances PostgreSQL databases by integrating AI capabilities without causing transaction blocking, addressing resource exhaustion, ACID violations, and security risks associated with running large language models directly within the database. It employs a DB-adjacent architecture using lightweight triggers that enqueue jobs for processing by external Python workers (`agentd`), thereby keeping AI operations separate from transaction handling to maintain fast and reliable database performance. A key feature is its ability to automatically enrich data through AI on operations like INSERT or UPDATE, facilitating tasks such as auto-classifying support tickets and content moderation without application blocking. pgCortex supports flexible integration with various AI providers, including OpenAI and Anthropic, via straightforward SQL commands that bind agents to tables. Its enterprise-grade features include zero transaction blocking, horizontal scalability, robust security through least-privilege access and audit logs, comprehensive observability tools such as metrics and audit trails, and crash recovery mechanisms involving idempotent processing and retries. The architecture involves a data flow where database operations trigger jobs sent to an outbox processed by `agentd` workers for AI tasks. For high-scale applications, an optional mode leverages CDC via Debezium to Kafka with partitioned workers and independent updater services for handling massive data loads. Security is managed through least-privilege access, safe writebacks validated by JSON schema checks and idempotency keys, complemented by detailed audit logs supporting compliance. Observability features include insights into agent operations and job statuses via SQL queries and Prometheus-ready metrics. While ideal for applications requiring AI-driven data enrichment like fraud detection or CRM enhancements, pgCortex is not suited for synchronous decisions demanding sub-10ms latency or full workflow orchestration. Configuration options cover variables such as `DATABASE_URL`, API keys, polling intervals, batch sizes, and auto-apply settings. pgCortex's philosophy emphasizes separating deterministic database operations from probabilistic AI reasoning, ensuring intelligent data processing while preserving performance and reliability. Designed by Supreeth Ravi under an MIT license, it offers extensive documentation for deployment and development, scalable across various organizational levels from startups to enterprises. Keywords: #phi4, AI enrichment, Anthropic, CDC, CRM enrichment, JSON Schema, Kafka, OpenAI, PgCortex, Postgres, Prometheus-ready metrics, Python worker, SOC2 compliance, SQL API, agentd, enterprise readiness, fraud scoring, idempotency, invoice validation, least-privilege roles, observability, outbox table, scalability, schema validation, ticket classification, triggers
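The trigger-plus-outbox design described above is a standard pattern and can be sketched generically. None of the table, trigger, or function names below come from pgCortex, and a tickets table with a category column is assumed; the point is the mechanics: the trigger only enqueues a row, and a worker claims jobs with FOR UPDATE SKIP LOCKED so model calls never run inside the writer's transaction.

```python
# Outbox pattern for DB-adjacent AI enrichment; all names illustrative.
import psycopg2

SETUP = """
CREATE TABLE IF NOT EXISTS ai_outbox (
    id        BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    ticket_id BIGINT NOT NULL,
    status    TEXT   NOT NULL DEFAULT 'pending'
);
CREATE OR REPLACE FUNCTION enqueue_enrichment() RETURNS trigger AS $$
BEGIN
    INSERT INTO ai_outbox (ticket_id) VALUES (NEW.id);
    RETURN NEW;  -- nothing slow happens inside the writer's transaction
END; $$ LANGUAGE plpgsql;
DROP TRIGGER IF EXISTS tickets_enrich ON tickets;
CREATE TRIGGER tickets_enrich AFTER INSERT ON tickets
    FOR EACH ROW EXECUTE FUNCTION enqueue_enrichment();
"""

def classify(ticket_id: int) -> str:
    return "billing"  # stub: this is where the worker calls an LLM provider

def work_one(conn) -> bool:
    """Claim one pending job without blocking other workers, then write back."""
    with conn.cursor() as cur:
        cur.execute(
            "SELECT id, ticket_id FROM ai_outbox WHERE status = 'pending' "
            "ORDER BY id LIMIT 1 FOR UPDATE SKIP LOCKED"
        )
        row = cur.fetchone()
        if row is None:
            return False
        job_id, ticket_id = row
        cur.execute("UPDATE tickets SET category = %s WHERE id = %s",
                    (classify(ticket_id), ticket_id))
        cur.execute("UPDATE ai_outbox SET status = 'done' WHERE id = %s",
                    (job_id,))
    conn.commit()
    return True
```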
    github.com a day ago
382.  HN In Defense of Boring Technology
The article "In Defense of Boring Technology" challenges the common belief in software engineering that more complex or trendy tools are inherently superior. It argues for beginning with straightforward and effective technologies, adding complexity only when justified by specific project demands. For backend development, it suggests using FastAPI or Flask unless extensive features or large teams necessitate Django's opinionated approach or Spring's enterprise capabilities. In frontend contexts, the article advises starting with static HTML for simple sites, utilizing HTMX or Svelte to add interactivity without heavy frameworks, and reserving React for more complex applications, criticizing its overuse in simpler tasks due to resultant complexity and performance issues. Regarding infrastructure, a single server managed by systemd is suitable for small projects; Docker containers are recommended for maintaining reproducible environments. Kubernetes should be considered only when its benefits justify the added intricacy at larger scales. For databases, SQLite suits straightforward applications while Postgres meets most production needs, with distributed databases reserved for large-scale requirements. In AI model development, it encourages starting with simple or specialized models rather than massive general ones unless necessary, as smaller models can efficiently handle tasks at a lower cost. The article underscores that unnecessary complexity incurs higher costs related to learning, debugging, updating, and more. It promotes simplicity not as a limitation but as a discipline, advocating for tool selection based on actual needs instead of trends or speculative future requirements, highlighting the strategic importance of avoiding unwarranted technological intricacies. Keywords: #phi4, AI Models, Backend, Boring Tech, Capability, Complexity, Compliance, Databases, Debugging, Discipline, Discipline Comma-separated List: Simple Technology, Discipline Extracted Keywords: Simple Technology, Discipline Final Keywords: Simple Technology, Discipline Final List: Simple Technology, Discipline Keywords: Simple Technology, Discipline Simple Technology, Distributed, Django, FastAPI, Flask, Frontend, HTML, HTMX, Infrastructure, Kubernetes, Operational Complexity, Postgres, React, Rule-based Logic, SQLite, Scale, Simple Technology, Software Engineering, Spring, Svelte, Tools
    aazar.me a day ago
392.  HN Programming Is Free
The article critiques the prevalent trend of new programmers investing heavily in paid tools promoted through channels like YouTube and code bootcamps, drawing from the author's contrasting experience with cost-effective free resources. It notes that current programming education narratives are overshadowed by expensive subscriptions and sophisticated platforms such as AWS, driven significantly by influencer culture which prioritizes passive learning over active engagement. The author recounts advising a student who was spending $200 monthly on a basic website, underscoring the unnecessary financial burden due to neglecting free tools. Highlighting that essential programming resources like Git, VS Code, and Python remain freely accessible, the article argues for an active approach in problem-solving and experimentation as crucial for effective learning. It advocates for new developers to leverage inexpensive or free options and directly tackle coding challenges as the most efficient way to learn and advance in programming, emphasizing that independent problem exploration is more valuable than any paid resource or subscription. Keywords: #phi4, AI Assistant, AWS, College Student, Free Tools, Git, Influencer, JavaScript, LAMP Stack, Learning, Marketplace, Nodejs, PHP, Paid Services, Postgres, Problem Solving, Programming, Python, Rails, Shopify, Startup, Text Editor, VPS, VS Code, Website, YouTube
    idiallo.com a day ago
419.  HN Six Signs That Postgres Tuning Won't Fix Your Performance Problem
The article explores persistent performance challenges faced by Postgres databases when managing specific types of workloads, identifying six critical characteristics that contribute to these issues despite tuning efforts. These include high-frequency continuous data ingestion without off-peak periods, queries dependent on time ranges, append-only data with infrequent deletions or no updates, extensive data retention leading to large datasets, latency-sensitive querying needs, and consistent increases in data volume. While standard Postgres optimizations such as indexing and autovacuum tuning can offer temporary alleviation, they fall short for workloads exhibiting these characteristics. For databases displaying four or five of the identified traits, architectural changes are recommended over mere operational tweaks. The article highlights solutions like Tiger Data, which extends Postgres capabilities to better handle such demanding workloads while maintaining SQL compatibility and leveraging existing user expertise. Performance benchmarks cited in the article demonstrate that specialized architectures deliver substantial improvements in query speed and storage efficiency compared to standard Postgres setups under similar conditions, underscoring the necessity of tailored architectural approaches for optimal database performance in these scenarios. Keywords: #phi4, Postgres, analytics, append-only data, architectural friction, autovacuum, high-frequency ingestion, latency-sensitive, partitioning, performance, retention, sustained growth, time-range queries, tuning, workload
    www.tigerdata.com 2 days ago
438.  HN Testing Postgres race conditions with synchronization barriers
Mikael Lirbank's article delves into the intricacies of identifying and managing race conditions in Postgres databases by employing synchronization barriers as a tool for simulating concurrent operations. The primary focus is on how unmanaged concurrent transactions can lead to incorrect results, particularly when multiple processes simultaneously read outdated data before executing updates. A prevalent scenario discussed involves two concurrent tasks altering the same database record, resulting in lost updates if not properly controlled. Synchronization barriers are highlighted as a mechanism for testing these conditions by pausing concurrent operations until all involved reach the barrier, ensuring a predictable execution sequence that facilitates race condition detection within test environments. The article outlines various strategies to safeguard against race conditions: executing simple queries without transactions or locks; utilizing transactions but omitting write locks; implementing row-level write locks; and finally adjusting synchronization barriers' placement for effective issue identification. Through these examples, Lirbank illustrates the varying impacts of each method on outcomes, underscoring the critical role of combining locks with barriers to achieve dependable results. Lirbank emphasizes the importance of testing actual database behavior instead of relying on mock setups due to the necessity for precise transaction and lock management simulation. He advocates using hooks to insert synchronization barriers into test code without impacting production systems, facilitating their integration into existing functions. The article warns against superficial tests that fail during code or logic changes by ensuring tests pass with locks but fail without them. Ultimately, Lirbank advocates for rigorous testing practices involving synchronization barriers to prevent race condition-related errors in production environments, stressing the need for ongoing validation through thorough and methodical test procedures. Keywords: #phi4, Postgres, Race conditions, SELECT FOR UPDATE, concurrency, database, deadlock, hooks, isolation level, locks, regression, synchronization barriers, testing, transactions
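A minimal version of the barrier technique, assuming a single-row accounts table in a scratch database. With use_lock=False both workers read the same stale balance and one increment is lost; with use_lock=True the second SELECT ... FOR UPDATE blocks until the first worker commits, so both increments survive.

```python
# Two workers meet at a barrier before reading, then race to write.
import threading
import time
import psycopg2

barrier = threading.Barrier(2)

def increment(use_lock: bool) -> None:
    conn = psycopg2.connect("dbname=test")
    cur = conn.cursor()
    barrier.wait(timeout=5)   # both workers are lined up before either reads
    cur.execute("SELECT balance FROM accounts WHERE id = 1"
                + (" FOR UPDATE" if use_lock else ""))
    (balance,) = cur.fetchone()
    time.sleep(0.2)           # hold the stale read long enough to overlap
    cur.execute("UPDATE accounts SET balance = %s WHERE id = 1",
                (balance + 100,))
    conn.commit()
    conn.close()

def run(use_lock: bool) -> None:
    barrier.reset()
    workers = [threading.Thread(target=increment, args=(use_lock,))
               for _ in range(2)]
    for t in workers:
        t.start()
    for t in workers:
        t.join()

run(use_lock=False)  # lost update: balance ends up +100, not +200
run(use_lock=True)   # correct: second worker waits, balance ends up +200
```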
    www.lirbank.com 2 days ago
   https://crates.io/crates/loom   2 days ago
   https://docs.rs/loom/0.7.2/loom/#yielding   a day ago
   https://martin.kleppmann.com/2014/11/25/hermi   a day ago
   https://github.com/reitzensteinm/temper   a day ago
   https://antithesis.com/   a day ago
485.  HN Failure Intelligence for AI Systems
Kakveda is an innovative open-source platform designed to bolster Large Language Model (LLM) systems by incorporating failure intelligence capabilities. Developed by Prateek Chaudhary and accessible via kakveda.com, it enhances LLMs with features like memory of past failures, real-time warnings, and comprehensive system-level health insights. Unlike traditional observability tools that merely log failures, Kakveda elevates them as primary entities for both analysis and prevention. The platform is constructed on an event-driven architecture that seamlessly integrates with LLM runtimes to provide advanced functionalities such as storing failure data, recognizing patterns across runs, issuing pre-flight warnings, calculating health scores over time, and delivering a detailed dashboard. It facilitates local deployment through Docker Compose and supports the integration of external AI agents for centralized observability. Key features of Kakveda include a Global Failure Knowledge Base (GFKB) that aggregates failure data, pattern detection capabilities across multiple runs, and an extensive dashboard equipped with access control mechanisms. The accompanying documentation provides comprehensive setup instructions, comparative analyses with other tools, troubleshooting guides, and security advisories. Although ideal for local use, educational purposes, and demonstrations, the platform is not yet optimized for enterprise deployment. Kakveda encourages community contributions and outlines future enhancements like pluggable event bus implementations, diverse storage backends, advanced evaluation plugins, and potential enterprise extensions. While maintaining a core that remains transparent and self-hostable, there are plans to explore commercial offerings aimed at improving scalability and compliance features. Licensed under Apache 2.0, Kakveda underscores its commitment to open-source principles. Keywords: #phi4, AI Systems, API Integration, Architecture, CSRF Protection, Docker Compose, Enterprise Extensions, Event-Driven, Failure Intelligence, JWT Sessions, Microservices, Observability, OpenTelemetry, Pattern Detection, Pluggable Implementations, Postgres, Rate Limiting, Redis, Role-Based Access Control, SMTP Configuration, Security, Tracing
    github.com 2 days ago
575.  HN Show HN: Pg-workflows – Lightweight workflows for Node.js using Postgres
Pg-workflows is a lightweight workflow engine specifically designed for Node.js applications that utilize PostgreSQL as their database system. It facilitates the definition and management of durable workflows without adding extra infrastructure or causing vendor lock-in by utilizing PostgreSQL's existing capabilities. Its key features include event-driven orchestration, automatic retries, configurable timeouts, input validation using Zod, and real-time progress tracking. The engine is particularly suitable for use cases where adding durable workflows in a PostgreSQL environment is needed, offering an ideal solution for lightweight, self-hosted workflow engines with zero operational overhead. It shines in TypeScript/Node.js environments by providing a native developer experience. Core features of Pg-workflows include ensuring the persistence and resilience of workflow states (durable execution), breaking complex processes into discrete, resumable steps (step-by-step execution), supporting event-driven orchestration with automatic resume capabilities, and facilitating robust error handling through built-in retries and timeouts. Users are advised to consider alternative solutions like Temporal or Inngest if enterprise-grade features such as distributed tracing or complex Directed Acyclic Graph (DAG) scheduling are required. To get started with Pg-workflows, developers can install dependencies via npm, yarn, or bun, define workflows using TypeScript functions that specify discrete steps and input schemas, start the engine with these defined workflows, and manage workflow execution by running them and triggering events. Pg-workflows finds applications in various domains including user onboarding flows, payment & checkout pipelines, AI & LLM (Large Language Model) pipelines, background job orchestration, approval workflows, and data processing pipelines. Built upon pg-boss, a robust PostgreSQL job queue, Pg-workflows embodies the "PostgreSQL-for-everything" philosophy, using PostgreSQL as both job queue and state store to simplify workflow management without needing additional systems like Redis or message brokers. The project requires Node.js version 18.0.0 or higher, PostgreSQL version 10 or above, and pg-boss version 10.0.0. It is open-source under the MIT license, with acknowledgments for inspiration from Temporal, Inngest, Trigger.dev, and DBOS in developing durable execution patterns. Keywords: #phi4, Nodejs, Pg-workflows, PostgreSQL, Postgres, TypeScript, TypeScript-first, durable execution, event-driven orchestration, pg-boss, retries, workflow engine, workflows, zero infrastructure
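Pg-workflows itself is TypeScript; the Python sketch below only illustrates the durable-execution idea underneath it, with an assumed workflow_steps table: persist each step's result keyed by (run, step) so a crashed or resumed run skips work it already completed.

```python
# Durable steps in miniature: a step runs once per (run_id, name); replays
# return the stored result instead of re-executing side effects.
import json
import psycopg2

SETUP = """
CREATE TABLE IF NOT EXISTS workflow_steps (
    run_id TEXT  NOT NULL,
    step   TEXT  NOT NULL,
    result JSONB NOT NULL,
    PRIMARY KEY (run_id, step)
);
"""

def step(conn, run_id: str, name: str, fn):
    with conn.cursor() as cur:
        cur.execute("SELECT result FROM workflow_steps "
                    "WHERE run_id = %s AND step = %s", (run_id, name))
        row = cur.fetchone()
        if row:
            return row[0]  # already done: resuming is free
        result = fn()
        cur.execute("INSERT INTO workflow_steps VALUES (%s, %s, %s)",
                    (run_id, name, json.dumps(result)))
    conn.commit()
    return result

# usage: each step becomes resumable
# user = step(conn, run_id, "create_user", lambda: create_user(email))
# step(conn, run_id, "send_welcome", lambda: send_email(user["id"]))
```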
    sokratisvidros.github.io 2 days ago
613.  HN Launched Book Digest on PH – learned that users want 3x more depth
Book Digest, an AI-powered tool for summarizing books launched on Product Hunt, initially produced summaries around 800 words in length, which users found too brief compared to the more detailed offerings from Blinkist, which exceed 2500 words. To meet user demands for deeper content, the developer dedicated two days to resolving OpenAI JSON parsing and Prisma database persistence issues. This troubleshooting effort led to the regeneration of over 450 books with an enhanced AI prompt, resulting in summaries that were 2-3 times more comprehensive, including detailed chapters, insights, and actionable items. The experience underscores the significance of not only launching products quickly but also iterating swiftly based on user feedback. A key technical challenge encountered was a bug related to database persistence. The technology stack used for Book Digest includes Next.js, Postgres, OpenAI GPT-4o-mini, and Stripe. Demonstrations of these improved summaries are available at a specific URL without requiring signup, and the developer is willing to discuss the technical challenges faced during development. Keywords: #phi4, AI summaries, Blinkist, Book Digest, GPT-4o-mini, JSON parsing, Nextjs, OpenAI, Postgres, Prisma, Product Hunt, Stripe, action items, database persistence, debugging, feedback, insights, iteration, token limits
    news.ycombinator.com 2 days ago
672.  HN One Server. Small Business
The article provides an insightful look into a small business owner's experience with managing their Rails application on a single server for under $30 per month. Built in 2014, this application offers subscriber management, content curation, and sponsorship features while maintaining full control over custom configurations, which managed platforms like Heroku or Render cannot offer. The deployment process is manual, utilizing Git hooks and Capistrano, with the server running essential tools such as Postgres, Redis, and Sidekiq on an Ubuntu machine. Security measures are prioritized through regular software updates, secure SSH access, firewall configuration, and consistent database backups using pg_dump and restic to Backblaze B2. Monitoring is conducted via DigitalOcean's add-on for disk usage and Sentry for application errors. The author expresses satisfaction with this cost-effective setup, which suits solo or small-scale projects that do not require immediate scaling or high reliability. However, it may be impractical for larger teams or fast-growing startups. The approach underscores the benefits of hands-on management and minimal expenses at the expense of convenience and scalability, making it ideal for users who prioritize control and cost savings over rapid growth capabilities. Keywords: #phi4, Backblaze B2, DigitalOcean, Heroku, Kamal, New Relic, Passenger, Postgres, Rails, Redis, SSH, Sentry, Sidekiq, Ubuntu, backups, capistrano, clusters, containers, disk usage, firewall, git hooks, log rotation, monitoring, nginx, restic, unattended upgrades
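The backup chain is simple enough to sketch. Database name, dump path, and repository are illustrative; restic reads RESTIC_REPOSITORY (e.g. a b2: URL), RESTIC_PASSWORD, and the B2 credentials from the environment.

```python
# Nightly backup sketch: pg_dump custom-format dump, shipped with restic.
import os
import subprocess

DUMP = "/var/backups/app.dump"

# Custom format is compact and restorable with pg_restore.
subprocess.run(["pg_dump", "--format=custom", "--file", DUMP,
                "app_production"], check=True)

# RESTIC_REPOSITORY, RESTIC_PASSWORD, and B2 credentials must already be
# set in the environment (e.g. RESTIC_REPOSITORY="b2:my-bucket:postgres").
assert "RESTIC_REPOSITORY" in os.environ
subprocess.run(["restic", "backup", DUMP], check=True)
```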
    chodounsky.com 3 days ago
757.  HN Retrieve and Rerank: Personalized search without leaving Postgres
Ankit Mittal's article "Retrieve and Rerank: Personalized Search without Leaving Postgres" delves into developing a personalized search engine directly within PostgreSQL, circumventing the need for supplementary infrastructure. It addresses limitations of generic search engines by tailoring results to user preferences through ParadeDB extensions that integrate BM25 full-text search with vector-based personalization techniques. This dual-stage approach first retrieves relevant candidates using BM25 and then reranks them based on cosine similarity between content embeddings and a user's profile. The system utilizes PostgreSQL tables for storing movie data, user profiles, and ratings, employing SQL queries to update these elements into a cohesive personalized ranking framework. By conducting personalization entirely within the database, this method streamlines architecture and mitigates issues such as network latency and synchronization challenges typical of external services. While it may not accommodate all use cases—particularly those demanding cutting-edge accuracy or extensive deep learning models—it strikes an effective balance between speed, resource management, and adaptability for many applications. Mittal concludes by highlighting the advantages of compute pushdown principles in high-performance computing, advocating that moving computation closer to data storage simplifies system architecture while enhancing performance. This approach is not only applicable within PostgreSQL but extends to broader fields like big data and edge computing, illustrating its versatility across various technological domains. Keywords: #phi4, BM25, Common Table Expressions (CTEs), Compute Pushdown, Cosine Similarity, In-Database AI, ParadeDB, Personalized search, Postgres, Retrieve and Rerank, SQL aggregation, recommendation engine, user embeddings, vector-based personalization
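The two-stage query reduces to one SQL statement. Below is a sketch assuming ParadeDB's @@@ BM25 operator and paradedb.score(), plus pgvector's <=> cosine-distance operator, with illustrative movies and user_profiles tables; verify the operator details against your installed versions.

```python
# Stage 1: BM25 candidate retrieval; stage 2: cosine rerank against the
# user's profile vector. Table and column names are illustrative.
import psycopg2

RERANK_SQL = """
WITH candidates AS (
    SELECT id, title, paradedb.score(id) AS bm25
      FROM movies
     WHERE description @@@ %(query)s        -- stage 1: lexical BM25
     ORDER BY bm25 DESC
     LIMIT 100
)
SELECT c.title,
       c.bm25,
       1 - (m.embedding <=> u.profile) AS similarity  -- <=> = cosine distance
  FROM candidates c
  JOIN movies m        ON m.id = c.id
  JOIN user_profiles u ON u.user_id = %(user_id)s
 ORDER BY similarity DESC                    -- stage 2: personalized rerank
 LIMIT 10;
"""

with psycopg2.connect("dbname=app") as conn, conn.cursor() as cur:
    cur.execute(RERANK_SQL, {"query": "space opera", "user_id": 42})
    for title, bm25, similarity in cur.fetchall():
        print(title, round(bm25, 2), round(similarity, 3))
```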
    www.paradedb.com 4 days ago
836.  HN Cogram (YC W22) – Hiring former technical founders
Cogram, a remote-first AI platform catering to the architecture, engineering, and construction (AEC) industry, is seeking former technical founders with experience in tech company development. The role focuses on customer interaction, product enhancement, feature deployment, and performance evaluation, demanding proficiency in resolving ambiguous issues, swift decision-making, and adaptation to new domains like cloud operations or CI pipelines. Candidates must have a background as a founder or co-founder of a tech firm, demonstrate expertise in both backend and frontend technologies, possess experience with AI tools and engineering, and communicate technical concepts clearly. While familiarity with cloud services, mobile development, and AEC workflows is beneficial, it is not mandatory. The company's tech stack includes Python (FastAPI), Postgres, Redis, React/TypeScript, React Native/Expo, and Terraform/Kubernetes on AWS & Azure. Cogram offers a range of benefits for the position, such as fully remote work, three annual offsites, 38 paid days off including German public holidays, competitive salary with equity options, and a personal development stipend. To apply, candidates should submit an overview of their professional background, highlight key projects they've led, provide a URL to relevant work, and include an outline of the current agentic-coding setup. Although not every requirement must be met, Cogram values diverse perspectives and problem-solving skills over specific experiences, inviting applications from those who align with this ethos. Keywords: #phi4, AEC industry, AI platform, AWS, Azure, Cogram, FastAPI, Kubernetes, Postgres, Python, RFIs, React Native/Expo, React/TypeScript, Redis, Terraform, architecture, automation, construction, data entry, engineering, remote work, submittals, workflows
    www.ycombinator.com 4 days ago
837.  HN Show HN: Scansprout – QR code generator I extracted from an art gallery project
Scansprout is a versatile QR code generator initially created as an internal tool for an art gallery, designed to enrich the experience of art appreciation by offering additional information about artworks and tracking visitor engagement through scans. The platform uses technologies such as Python (Django), PostgreSQL, HTMX, Hyperscript, and is hosted on Heroku. It allows users to monitor which artworks are most popular by collecting data on scan locations, device types, and times. Scansprout offers a range of functionalities including generating static QR codes that can link to websites, display text messages, send pre-filled SMS or emails, connect devices to WiFi networks, initiate phone calls, add calendar events, or open maps at specific locations. While some QR code options are static in nature, Scansprout also provides free trials for dynamic QR codes that offer editing and tracking features. This tool enhances user engagement by providing insights into visitor behavior and offering seamless access to various digital actions through QR scans. Keywords: #phi4, Django, HTMX, Heroku, Hyperscript, Postgres, Python, QR code generator, QR codes, SMS, Scansprout, WiFi, art gallery, dynamic content, email, event location, generator, phone, plain text, static content, tracking scans, vCard, visitor engagement, website URL
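Most of the static QR actions listed are just well-known payload conventions fed into a QR encoder. A sketch using the qrcode Python package, with made-up payload values:

```python
# Encode a few common static QR payload conventions into PNG images.
import qrcode

payloads = {
    "website": "https://example.com/artwork/42",
    "call":    "tel:+15551234567",
    "email":   "mailto:curator@example.com?subject=Artwork%2042",
    # De-facto Wi-Fi convention: WIFI:T:<auth>;S:<ssid>;P:<password>;;
    "wifi":    "WIFI:T:WPA;S:GalleryGuest;P:monet1234;;",
}

for name, data in payloads.items():
    qrcode.make(data).save(f"{name}.png")
```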
    www.scansprout.com 4 days ago
871.  HN Show HN: Holywell – The missing SQL formatter for sqlstyle.guide
Holywell is an SQL formatter designed to adhere strictly to the formatting rules specified in Simon Holywell's SQL Style Guide, with a key feature being "river alignment" of keywords for enhanced readability. Developed due to the absence of existing tools that followed these guidelines, Holywell aims to produce deterministic and consistent SQL output with minimal configuration needs. Users can access it online for trial purposes or install it via npm for command-line usage, and it can be integrated into projects programmatically using its TypeScript API. Supporting basic dialects such as Postgres, MySQL, ANSI SQL, and T-SQL, Holywell focuses on maintaining a fixed style output to ensure consistency with the guide's principles, prioritizing operational configurations over aesthetic preferences. The tool is adept at handling various SQL constructs like Common Table Expressions (CTEs), window functions, and CASE expressions, while preserving their semantic meaning during formatting. Although it offers options for error recovery, Holywell encourages using strict mode for projects that require rigorous parse error checking. The development of Holywell is driven by community contributions, with its codebase hosted on GitHub and built as a zero-dependency TypeScript project utilizing Bun as its runtime environment. Despite offering an opinionated approach to SQL formatting in line with the Simon Holywell Style Guide, it may not appeal to those seeking extensive configurability in output styles, focusing instead on ensuring readability and consistency across formatted SQL scripts. Keywords: #phi4, AST parsing, CLI usage, Holywell, Postgres, SQL formatter, Simon Holywell, TypeScript, dialect support, formatting rules, idempotency, parsing, performance, river alignment, style guide
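For readers unfamiliar with "river alignment": keywords are right-aligned so a blank channel runs down the statement between keywords and the rest of each line. A before/after pair, held in Python strings, with an arbitrary query written in the spirit of the style guide's examples:

```python
# "River alignment" per sqlstyle.guide, as a before/after illustration.
BEFORE = """
select f.species_name, avg(f.height) as average_height from flora as f
where f.species_name = 'Banksia' group by f.species_name;
"""

AFTER = """
SELECT f.species_name,
       AVG(f.height) AS average_height
  FROM flora AS f
 WHERE f.species_name = 'Banksia'
 GROUP BY f.species_name;
"""
```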
    github.com 5 days ago
891.  HN The Scott Shambaugh Situation Clarifies How Dumb We Are Acting
The text discusses the irresponsible deployment of AI tools within the tech community, exemplified by the case of Scott Shambaugh, a matplotlib contributor about whom an AI agent published a hit piece with no clear human accountability behind it. Highlighted during a Seattle Postgres User Group meetup and covered in major media outlets like the Wall Street Journal, this incident underscores broader issues of minimizing human responsibility for AI actions. The author criticizes both the community's complicity and the problematic language that deflects blame from humans. A call is made for greater accountability and cultural change, urging individuals to address clear issues such as bullying of open-source maintainers and avoid over-anthropomorphizing technology. This situation illustrates a wider concern about societal narratives being driven by financial interests rather than common sense, emphasizing the need for ethical vigilance in technological advancements. Keywords: #phi4, AI tools, CloudNativePG, Ghostty, Ghostty policy, Postgres, Scott Shambaugh, WSJ, accountability, anthropomorphizing, bullying, editorial control, matplotlib, open source, policy, software engineer, tech community
    ardentperf.com 5 days ago
   https://www.fastcompany.com/91492228/matplotlib-scott-s   4 days ago
   https://www.theregister.com/2026/02/12/ai_bot   4 days ago
   https://crabby-rathbun.github.io/mjrathbun-website/blog   4 days ago
   https://github.com/crabby-rathbun/mjrathbun-website   4 days ago
   https://financialpost.com/technology/tech-news/ope   4 days ago
   https://theshamblog.com/an-ai-agent-published-a-hit-piece-on   4 days ago
   https://www.moltbook.com/   4 days ago
906.  HN Postgres Locks Explained: From Theory to Advanced Troubleshooting
Postgres Locks Explained: From Theory to Advanced Troubleshooting is a guide by @TheOtherBrian1, a customer reliability engineer with deep experience managing Postgres. The resource aims to clarify the intricacies of PostgreSQL locks through theoretical explanations and practical insights. It includes assessments of monitoring tools designed for lock management, detailed troubleshooting techniques for prevalent issues, and real-world examples that demonstrate how locks can affect projects. By addressing both fundamental concepts and advanced challenges associated with PostgreSQL locks, the project serves as a documentation and education resource for anyone seeking a thorough understanding of lock mechanisms in PostgreSQL environments. Keywords: #phi4, Common Issues, Customer Reliability Engineer, Documentation, Locks, Management, Monitoring Tools, Observability, Postgres, Projects, Real World Examples, Resources, Theory, Troubleshooting
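The bread-and-butter triage query for lock problems looks like the sketch below; pg_blocking_pids() and pg_stat_activity are stock Postgres (9.6+), only the formatting is mine.

```python
# Who is waiting on a lock, and which session is blocking them.
BLOCKERS_SQL = """
SELECT waiting.pid    AS waiting_pid,
       waiting.query  AS waiting_query,
       blocking.pid   AS blocking_pid,
       blocking.query AS blocking_query
  FROM pg_stat_activity AS waiting
  JOIN pg_stat_activity AS blocking
    ON blocking.pid = ANY (pg_blocking_pids(waiting.pid));
"""
```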
    postgreslocksexplained.com 5 days ago
922.  HN What happens inside Postgres when IOPS runs out
The article delves into the challenges faced by PostgreSQL when Input/Output Operations Per Second (IOPS) reach their peak, leading to significant performance degradation due to inefficient database indexing that necessitates unnecessary extensive row reads from disk. This results in high I/O demands causing PostgreSQL backends to wait for data reads, which slows down queries and creates a system-wide hang. The core issue stems from the interaction between PostgreSQL and the operating system's block layer and I/O scheduling mechanisms, where page cache misses lead to kernel-generated block I/O requests that can saturate hardware queues. Once these queues fill up, additional requests queue further, escalating latency for read operations. The article describes a "death spiral" scenario wherein high disk I/O from queries causes PostgreSQL backends to hold locks longer than necessary, exacerbating the problem as new connections accumulate in wait states and more processes add to the backlog, hindering recovery even after initial triggering activities like `VACUUM` conclude. To mitigate such situations, three strategies are proposed: killing connections to immediately decrease I/O demand, allowing workload reduction over time to naturally drain queues, or warming the cache so that subsequent requests can avoid disk reads. The article critiques PostgreSQL's lack of adaptive mechanisms for handling saturation as it does not monitor or throttle based on IOPS capacity. Furthermore, the `autovacuum` process is highlighted as a potential contributor to performance issues under high I/O conditions. Discrepancies in system metrics during such incidents are also discussed, particularly load average readings which remain high even when backends are merely waiting for disk reads due to other active or transitioning processes. The analysis emphasizes the necessity of optimized indexing and careful management of I/O operations within PostgreSQL environments to avert performance bottlenecks. Keywords: #phi4, D state, Heroku, IO:DataFileRead, IOPS, JSONB filters, Postgres, S state, SELECT, autovacuum, bio structure, block layer, cache layers, connections, disk, dispatch queue, hardware queues, indexes, kernel module, load average, lock wait event, pg_terminate_backend, queries, read(2), software queues, timeouts
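The "kill connections" escape hatch mentioned above is two stock-Postgres statements: first see which backends are stuck reading from disk, then terminate them (here, all of them; scope the WHERE clause to taste).

```python
# Both pg_stat_activity columns and pg_terminate_backend() are stock Postgres.
WAITING_SQL = """
SELECT pid, state, wait_event_type, wait_event, left(query, 60) AS query
  FROM pg_stat_activity
 WHERE wait_event_type = 'IO' AND wait_event = 'DataFileRead';
"""

KILL_SQL = """
SELECT pg_terminate_backend(pid)
  FROM pg_stat_activity
 WHERE wait_event_type = 'IO' AND wait_event = 'DataFileRead'
   AND pid <> pg_backend_pid();
"""
```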
    frn.sh 5 days ago
942.  HN Executable Data Contracts
Executable Data Contracts provide standardized YAML-based templates for defining dataset specifications, which encompass schema design, column types, permissible values, and quality criteria. These contracts can be tailored and executed on datasets to ensure adherence to established standards. They are available for various sectors including finance (with validations like UUID and currency), retail (focusing on inventory and order processes), and technology (managing SaaS subscription lifecycles). To utilize these contracts, users must install the Soda tool compatible with their data sources and configure connection details in a YAML file. The adaptation process involves customizing contract templates for specific datasets, followed by executing commands to verify compliance. These templates are accessible through an intuitive interface on executabledatacontrats.com, where users can also contribute new templates that align with existing standards. Keywords: #phi4, Arithmetic Consistency, BCBS 239, BigQuery, CLI, Checks, Column Types, Data Contracts, Databricks, Dataset, DuckDB, Environment Variables, Freshness, ISO-4217 Currency, LEI Validation, Lifecycle Consistency, Postgres, Reconciliation, Referential Integrity, Schema, Snowflake, Templates, UUID Validation, YAML
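The following is not Soda's actual contract syntax; it is a minimal, generic illustration of what makes a contract "executable": a declarative YAML spec plus a verifier that walks a dataset against it. Real tools add types, freshness, reconciliation, and much more.

```python
# Generic executable-contract sketch (illustrative schema, not Soda's).
import yaml

CONTRACT = yaml.safe_load("""
dataset: orders
columns:
  - name: order_id
    required: true
  - name: currency
    required: true
    allowed: [USD, EUR, GBP]
""")

def verify(rows: list[dict], contract: dict) -> list[str]:
    failures = []
    for spec in contract["columns"]:
        col, allowed = spec["name"], spec.get("allowed")
        for i, row in enumerate(rows):
            if spec.get("required") and row.get(col) is None:
                failures.append(f"row {i}: {col} is missing")
            elif allowed and row.get(col) not in allowed:
                failures.append(f"row {i}: {col}={row[col]!r} not allowed")
    return failures

print(verify([{"order_id": 1, "currency": "USD"},
              {"order_id": None, "currency": "JPY"}], CONTRACT))
```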
    github.com 5 days ago
1084.  HN Show HN: Decision Guardian – Enforce ADRs on PRs
Decision Guardian is a GitHub Action tool designed to preserve the context of architectural decisions within teams by documenting these decisions as markdown records linked to specific file paths. Developed in response to an issue where critical decisions were forgotten following team member turnover, such as choosing Postgres over MongoDB due to ACID compliance, this tool aids in preventing unnecessary re-evaluation when changes are proposed later. When pull requests alter the associated files, Decision Guardian generates comments summarizing the original decision rationale and alternatives considered, effectively serving as "CODEOWNERS for the 'why'." The application is built using TypeScript and features AST-based markdown parsing to enhance efficiency. It employs a prefix trie for fast file-to-decision matching, supports glob patterns, regex content matching, and complex rules. To handle large pull requests efficiently, it includes a streaming mode and ensures comments are idempotent, thus avoiding spam and duplicates while adhering to GitHub's size limits through progressive truncation. The developer is open to feedback on the use of markdown for documenting decisions versus other formats like YAML or TOML, strategies for content-based matching, and potential integration with existing Architectural Decision Record (ADR) tools. The project is publicly accessible on GitHub under [Decision Guardian](https://github.com/DecispherHQ/decision-guardian). Keywords: #phi4, ACID compliance, ADRs, AST-based parsing, Decision Guardian, GitHub Action, MongoDB, PRs, Postgres, ReDoS protection, TypeScript, YAML/TOML, adr-tools integration, content-based matching, glob patterns, idempotent comments, markdown, prefix trie, progressive truncation, regex matching, remark, streaming mode
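The prefix trie mentioned above maps naturally onto path segments. A generic sketch of the technique, not Decision Guardian's code: insert each decision under its path prefix, then walk a changed file's segments to collect every decision that covers it.

```python
# Path-prefix trie for fast file-to-decision matching; names illustrative.
class PathTrie:
    def __init__(self):
        self.children: dict[str, "PathTrie"] = {}
        self.decisions: list[str] = []

    def insert(self, prefix: str, decision: str) -> None:
        node = self
        for part in prefix.strip("/").split("/"):
            node = node.children.setdefault(part, PathTrie())
        node.decisions.append(decision)

    def match(self, path: str) -> list[str]:
        node, hits = self, []
        for part in path.strip("/").split("/"):
            if part not in node.children:
                break
            node = node.children[part]
            hits.extend(node.decisions)  # every ancestor prefix applies
        return hits

trie = PathTrie()
trie.insert("db/", "ADR-007: Postgres over MongoDB (ACID)")
print(trie.match("db/migrations/0042_add_index.sql"))
# -> ['ADR-007: Postgres over MongoDB (ACID)']
```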
    The google logo   news.ycombinator.com 6 days ago
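The summary's "prefix trie for fast file-to-decision matching" is easy to picture with a small sketch. This Go toy (not Decision Guardian's actual TypeScript code) registers ADR files on path prefixes and collects every decision covering a changed file:

```go
package main

import (
	"fmt"
	"strings"
)

type node struct {
	children  map[string]*node
	decisions []string // ADR files attached to this path prefix
}

func newNode() *node { return &node{children: map[string]*node{}} }

func (n *node) insert(prefix, adr string) {
	cur := n
	for _, seg := range strings.Split(strings.Trim(prefix, "/"), "/") {
		child, ok := cur.children[seg]
		if !ok {
			child = newNode()
			cur.children[seg] = child
		}
		cur = child
	}
	cur.decisions = append(cur.decisions, adr)
}

// match returns every decision whose prefix covers the given file path.
func (n *node) match(path string) []string {
	var out []string
	cur := n
	for _, seg := range strings.Split(strings.Trim(path, "/"), "/") {
		child, ok := cur.children[seg]
		if !ok {
			break
		}
		out = append(out, child.decisions...)
		cur = child
	}
	return out
}

func main() {
	t := newNode()
	t.insert("db", "0001-use-postgres.md")
	t.insert("db/migrations", "0002-migration-policy.md")
	fmt.Println(t.match("db/migrations/2024_init.sql"))
	// Output: [0001-use-postgres.md 0002-migration-policy.md]
}
```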
1088.  HN Postgres Indexes, Partitioning and LWLock:LockManager Scalability
The article explores the challenges associated with scaling PostgreSQL's Lock Manager, particularly focusing on LWLock:LockManager contention that became significant in 2023. Bruce Momjian’s presentation highlights the complexities of managing both lightweight and heavyweight locks within PostgreSQL. Notable advancements such as the introduction of wait events and declarative partitioning in 2017 have significantly enhanced PostgreSQL's capabilities. However, issues with LWLock:LockManager contention arise at high scales due to extensive use of partitioning and indexing. Early observations by AWS teams and subsequent incidents involving companies like GitLab and Midjourney underscore this issue. GitLab encountered severe performance degradation during a hardware upgrade primarily because of lock manager contention, which was intensified by the number of indexes rather than just partitioning alone. Similarly, Midjourney faced LWLock:LockManager issues following their migration to time-based partitioning amid high query rates and extensive indexing. They managed to mitigate some of these pressures by adjusting partitions from daily to weekly intervals. The article also describes methods for reproducing LWLock:LockManager contention using pgbench tests with various configurations, which help elucidate the effects of different setups on lock contention. Although PostgreSQL scales well in numerous scenarios, high-scale operations may face specific challenges like this one. Solutions include strategic planning around partitioning strategies, indexing practices, and schema design. The article advocates for best practices such as connection pooling, active session monitoring, and cautious scaling to effectively manage large-scale deployments. Contributions from engineers and developers have been pivotal in advancing PostgreSQL’s scalability solutions, demonstrating the collaborative spirit inherent in open-source development that enhances both database performance and reliability. Keywords: #phi4, Active Session Monitoring, Cloud, Connection Pooling, Contention, Documentation, Happiness Hints, Indexes, Lightweight Locks, Lock Manager, NoSQL, Partitioning, Performance, Postgres, Reproduction, Scalability, Wait Events
    The google logo   ardentperf.com 6 days ago
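To make the reproduction concrete, here is a hedged Go sketch in the spirit of the article's pgbench experiments: it builds a list-partitioned table with many partitions and several indexes each, so an unpruned query takes more relation locks than the per-backend fast-path slots can hold and falls back to the shared lock manager. All names and counts are illustrative:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq"
)

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/mydb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	mustExec := func(q string) {
		if _, err := db.Exec(q); err != nil {
			log.Fatal(err)
		}
	}

	mustExec(`CREATE TABLE events (id bigint, day int, payload text)
	          PARTITION BY LIST (day)`)
	for day := 0; day < 64; day++ {
		p := fmt.Sprintf("events_p%d", day)
		mustExec(fmt.Sprintf(`CREATE TABLE %s PARTITION OF events FOR VALUES IN (%d)`, p, day))
		mustExec(fmt.Sprintf(`CREATE INDEX ON %s (id)`, p))
		mustExec(fmt.Sprintf(`CREATE INDEX ON %s (payload)`, p))
	}

	// Without a pruning predicate on "day", this locks every partition and
	// every index. Run it from many concurrent clients (e.g. via pgbench) and
	// watch for LWLock:LockManager in pg_stat_activity.wait_event.
	mustExec(`SELECT count(*) FROM events WHERE id = 42`)
}
```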
1093.  HN Show HN: ListofDisks – hard drive price index across 7 retailers not just Amazon
ListofDisks is a free hard drive price index that aggregates listings from seven major retailers: Amazon, B&H, Best Buy, Newegg, Office Depot, ServerPartDeals, and Walmart. Unlike existing storage price trackers that rely predominantly on Amazon's API, ListofDisks uses retailer-specific parsers to normalize product listings for straightforward comparison. Its methodology works in three steps: listings are converted into canonical products, sellers are assigned trust scores to filter out unreliable ones, and prices are given context through a 90-day median price per terabyte plus tracked historical lows, which exposes misleading sale promotions. The stack comprises a Next.js frontend, a TypeScript/Node ingestion worker for data processing, and Postgres via Supabase as the database. Coverage of CMR/SMR designations and warranties is still incomplete, and the project incorporates user feedback to improve data accuracy. Currently a zero-revenue project, ListofDisks plans to expand into tracking memory prices, which face similar problems. Additional details are available at [ListofDisks.com](https://www.listofdisks.com). Keywords: #phi4, Amazon, B&H, Best Buy, CMR/SMR, ListofDisks, Newegg, Nextjs, Node, Office Depot, Postgres, ServerPartDeals, Supabase, TypeScript, Walmart, canonical products, feedback, hard drive price index, memory pricing, normalization, retailers, warranty, zero-revenue project
    The google logo   news.ycombinator.com 6 days ago
1095.  HN Show HN: Pgclaw – A "Clawdbot" in every row with 400 lines of Postgres SQL
Pgclaw is an open-source Postgres extension designed to integrate AI agents within a database table, with each row hosting its own agent. This capability facilitates diverse applications such as personal assistants or orchestrators by utilizing the "claw" data type that binds these AI agents to rows via inline prompts or predefined definitions. The key features of Pgclaw include support for both simple and stateful "OpenClaw" agents, compatibility with a broad range of LLM providers through rig (e.g., Anthropic, OpenAI), and advanced functionalities like file interaction and code execution via "Claude Code." The extension ensures ACID compliance while smoothly integrating with Postgres features such as JOINs. The setup process involves installing prerequisites like the Rust toolchain and PostgreSQL 17 dev headers. Pgclaw can be installed from GitHub using `cargo pgrx` commands, followed by configuring `postgresql.conf` for shared libraries and API keys. Users need to create a table with a claw column and employ `claw_watch()` to initiate agent activities. Stateful agents in Pgclaw are customizable, allowing specific identities, instructions, and memory capabilities, enabling them to update their own states based on interactions. The Claude Code feature provides workspace integration by offering dedicated filesystem directories for task execution via the Claude Code CLI. Configuration options include API keys, provider settings, and adjustable workspace directories along with model defaults. The operational workflow of Pgclaw involves Postgres triggers enqueuing row updates into a queue, processed by a background worker that interacts with LLMs or spawns Claude Code agents as needed. Responses are parsed to update conversations stored in `claw.history`. Licensed under MIT, Pgclaw aims to seamlessly incorporate AI capabilities directly within the database environment. Keywords: #phi4, ACID Compliance, AI, API, Agent, Channels, Clawbot, Configuration, Conversations, Database, Extension, Heartbeats, JSON, LLM, Memory, Multi-turn Interactions, Pgclaw, Postgres, Prompt, Providers, Row, SQL, Sessions, Trigger, Workspace
    The google logo   github.com 6 days ago
   https://postgresisenough.dev   6 days ago
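For orientation, the usage flow the summary describes might look roughly like the following Go sketch. The extension name, the claw column type, and the claw_watch() call are taken from the summary; the exact signatures are assumptions, so check the project README:

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq"
)

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/mydb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	steps := []string{
		`CREATE EXTENSION IF NOT EXISTS pgclaw`,
		// Each row of this table hosts its own agent via the claw data type.
		`CREATE TABLE assistants (id serial PRIMARY KEY, agent claw)`,
		// Start agent processing; argument shape assumed, see the README.
		`SELECT claw_watch('assistants')`,
	}
	for _, q := range steps {
		if _, err := db.Exec(q); err != nil {
			log.Fatal(err)
		}
	}
}
```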
1199.  HN 1.3M Epstein documents index on Postgres
The project focuses on developing a searchable archive consisting of 1.3 million documents related to Epstein, utilizing PostgreSQL full-text search and network graphs, supplemented by data from the House Oversight committee. Initially conceived as a straightforward indexing endeavor, it evolved into an extensive undertaking that leverages AI for text processing through OpenAI's API. As part of this project, 238,163 individuals have been identified within these documents, though efforts to eliminate duplicates are ongoing. In addition to processing PDF content, the project incorporates other document types and has established a website optimized with caching mechanisms to expedite search functionalities. This initiative represents one of the first large-scale applications of AI in managing such datasets, and feedback is welcomed via their platform at [epsteingraph.com](https://epsteingraph.com). Keywords: #phi4, AI, Epstein, House Oversight committee, OpenAI's batch API, Postgres, archive, automation scripts, caching, dataset project, deduping, full-text search, network graphs, non-PDF data, website
    The google logo   old.reddit.com 6 days ago
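A project like this ultimately rests on a handful of Postgres full-text-search primitives. This Go sketch (schema invented for illustration, lib/pq assumed) shows the typical setup: a stored tsvector column, a GIN index, and a ranked websearch query:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq"
)

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/mydb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	setup := []string{
		`CREATE TABLE docs (id bigserial PRIMARY KEY, body text,
		     tsv tsvector GENERATED ALWAYS AS (to_tsvector('english', body)) STORED)`,
		`CREATE INDEX docs_tsv_idx ON docs USING GIN (tsv)`,
	}
	for _, q := range setup {
		if _, err := db.Exec(q); err != nil {
			log.Fatal(err)
		}
	}

	// Ranked search over the indexed tsvector column.
	rows, err := db.Query(`
		SELECT id, ts_rank(tsv, q) AS rank
		FROM docs, websearch_to_tsquery('english', $1) AS q
		WHERE tsv @@ q
		ORDER BY rank DESC
		LIMIT 10`, "flight logs")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	for rows.Next() {
		var id int64
		var rank float64
		if err := rows.Scan(&id, &rank); err != nil {
			log.Fatal(err)
		}
		fmt.Println(id, rank)
	}
}
```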
1238.  HN We Forked Supabase to Fix Self-Hosted Postgres Experience
A company has developed its own fork of Supabase to improve the self-hosted PostgreSQL experience, but the linked announcement is hosted on x.com, which refuses to load without JavaScript enabled. Visitors are told to enable JavaScript or switch to a supported browser, and are pointed to the Help Center for further assistance. Keywords: #phi4, Browser, Continue, Detected, Enabled, Experience, Forked, Help Center, JavaScript, Postgres, Self-Hosted, Supabase, Supported
    The google logo   twitter.com 6 days ago
   https://news.ycombinator.com/item?id=46947536   6 days ago
1297.  HN Postgres Locks Explained
The website "Postgres Locks Explained," developed by @TheOtherBrian1, who is a customer reliability engineer with expertise in PostgreSQL management and observability, functions as an extensive resource on PostgreSQL locks. The creator's goal is to clarify the concept of locks, evaluate monitoring tools, address common troubleshooting challenges, and illustrate real-world impacts of locks through examples. This documentation was conceived to bridge the knowledge gap encountered during his own learning process about PostgreSQL locks, thereby providing crucial insights and guidance for individuals interested in effectively managing and understanding lock mechanisms within Postgres environments. Keywords: #phi4, Postgres, customer reliability engineer, documentation, examples, issues, locks, management, monitoring tools, observability, projects, resources, troubleshooting
    The google logo   postgreslocksexplained.com 7 days ago
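As a taste of the monitoring queries such a resource covers, this Go sketch uses the standard pg_blocking_pids() function against pg_stat_activity to list which sessions are blocked on a lock and by whom:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq"
)

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/mydb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Sessions whose lock request is blocked, and the pids blocking them.
	rows, err := db.Query(`
		SELECT pid, pg_blocking_pids(pid) AS blocked_by, query
		FROM pg_stat_activity
		WHERE cardinality(pg_blocking_pids(pid)) > 0`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	for rows.Next() {
		var pid int
		var blockedBy, query string
		if err := rows.Scan(&pid, &blockedBy, &query); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("pid %d blocked by %s: %s\n", pid, blockedBy, query)
	}
}
```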
1457.  HN My setup for integration tests in Go with embedded-Postgres
The author outlines their approach to setting up efficient integration tests for Go applications using embedded-Postgres, prioritizing a seamless experience without relying on Docker containers or testcontainers due to their perceived fragility. Embedded-Postgres is favored because it allows Postgres binaries to be directly included and executed within the codebase with minimal setup. Initially, the test execution was slow because of time-intensive binary extraction processes during each run. To address this, the author implemented a persistent data directory and set a BinariesPath for caching extracted binaries, significantly reducing test times from around 20 seconds initially to about 1 second, and further down to approximately 0.1 seconds for consecutive tests by reusing the database connection. Further enhancements included configuring Postgres settings to minimize logging activities, thereby accelerating the testing process even more. Despite these optimizations, challenges remain in integrating this setup into continuous integration (CI) environments due to difficulties managing cached binaries across multiple builds. The author emphasizes the importance of high-level integration testing that involves actual APIs and databases, as it ensures features operate correctly under real-world conditions and aids in troubleshooting user-reported issues effectively. Keywords: #phi4, API, CI, Docker, Go, Maven, Postgres, VSCode, autovacuum, checkpoint_timeout, database, embedded-Postgres, feature testing, fsync, full_page_writes, initdb, integration tests, log_checkpoints, log_connections, migrations, persistent data directory, reproduce issue, synchronous_commit, testcontainers
    The google logo   atlas9.dev 8 days ago
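Condensed into code, the post's end state looks roughly like this sketch using the fergusstrange/embedded-postgres library: a persistent DataPath so initdb runs once, plus a cached BinariesPath so binaries are extracted once. Option names reflect recent library versions and may differ in yours:

```go
package main

import (
	"log"

	embeddedpostgres "github.com/fergusstrange/embedded-postgres"
)

func main() {
	db := embeddedpostgres.NewDatabase(embeddedpostgres.DefaultConfig().
		Port(5433).
		DataPath("/tmp/epg/data").    // reuse initdb output across runs
		BinariesPath("/tmp/epg/bin")) // reuse extracted Postgres binaries
	if err := db.Start(); err != nil {
		log.Fatal(err)
	}
	defer func() {
		if err := db.Stop(); err != nil {
			log.Fatal(err)
		}
	}()
	// Run tests against the default DSN, typically
	// postgres://postgres:postgres@localhost:5433/postgres
}
```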
1462.  HN Companies behind Postgres 18 development
The analysis of contributions to Postgres 18 examines company involvement and individual effort in its development, despite the difficulty of tracking independent contributors and the many forms contributions take. EnterpriseDB led in total commits, with Microsoft close behind, while companies such as Amazon and Postgres Professional showed a higher number of unique contributors; contributors without a known employer and freelancers are counted in their own categories. The author acknowledges potential errors and limitations, in particular that contributions to Postgres' broader ecosystem are excluded. Highlighted individual contributions include Intel's CRC-32C optimization work and Sophie Alpert's fix for a long-standing bug. Future analyses aim to dig deeper into trends and contributors in Postgres development. Keywords: #phi4, Amazon, CRC-32C, EnterpriseDB, Microsoft, Postgres, SSE42, TID scans, commits, companies, contributors, ctid, development, optimization
    The google logo   theconsensus.dev 8 days ago
1479.  HN Launch HN: Livedocs (YC W22) – An AI-native notebook for data analysis
Livedocs (YC W22), introduced in a Launch HN post by its creator Arsalan, is an AI-native notebook that changes how teams interact with real-time data. Unlike traditional dashboards or static notebooks, it behaves as a living, reactive system that recomputes only the parts of a document affected by changes in data or logic. A key feature is the ability to combine SQL, Python, charts, tables, and text within a single document; it runs on DuckDB and Polars locally and supports query pushdown to larger databases such as Snowflake and BigQuery. The platform incorporates an AI agent that can plan and execute multi-step analyses, debug code, and search online resources for additional context, while canvas mode allows the creation of custom UI components beyond standard charts. Notebooks can be published directly as interactive applications for broader team use, and real-time collaboration lets multiple users edit documents simultaneously with live result updates. Aimed at complex analysis questions that traditional tools struggle with, Livedocs offers pay-as-you-go pricing starting at $15 per month and includes a free trial tier. The product is in its learning phase, and the team seeks feedback from analytics experts on long-running workflows over production data. Keywords: #phi4, AI agent, AI-native, BigQuery, DuckDB, Polars, Postgres, Python, SQL, Snowflake, analytics systems, data analysis, dependency graph, interactive app, notebook, pay-as-you-go, reactive environment, real-time collaboration, sandbox
    The google logo   livedocs.com 8 days ago
   https://www.definite.app/   7 days ago
   https://livedocs.com/ventali-s-workspace/bitcoin-price-   7 days ago
1484.  HN We recreated the Anthropic C compiler agent
Anthropic recently developed a C compiler in Rust using parallel Claude-code agents within 14 days, producing approximately 200,000 lines of code capable of compiling large software such as the Linux kernel; the effort was led by AI security researcher Nicholas Carlini. A key feature of the project is its emphasis on "code archaeology": detailed documentation captures the decision-making process throughout development, which allows a thorough analysis of how the system evolved and helps in understanding the scaling laws of parallel coding agents. The insights gained emphasize the engineering efficiencies available under accelerated conditions and provide useful knowledge for optimizing similar projects in the future. Keywords: #phi4, AI red-teaming, Anthropic, C compiler, Doom, Linux kernel, Nicholas Carlini, Rust, adversarial attacks, claude-code agents, code archaeology, coding agents, commit history, engineering acceleration, parallel agents, postgres, scaffolding, scaling laws
    The google logo   vizops.ai 8 days ago
1485.  HN Show HN: Stripe-no-webhooks – Sync your Stripe data to your Postgres DB
*Stripe-no-webhooks* is an open-source library designed to streamline the integration of Stripe payments into applications by automatically syncing payment data with a PostgreSQL database, thereby eliminating the need for manual webhook configuration. This tool simplifies developers' work by managing webhooks and updating databases autonomously. The library boasts several key features: it eliminates manual webhook setup, offers straightforward APIs for handling subscriptions, credits, wallet balances, top-ups, and usage-based billing; allows plan management via TypeScript definitions synchronized with Stripe accounts; supports seat-level billing, tax collection, and credit management during upgrades or downgrades. Additionally, custom logic can be applied through optional callbacks in subscription events. To get started quickly, developers can install *stripe-no-webhooks* using npm, initialize it with a test key and database URL, set up the required tables by migrating them, define billing plans in `billing.config.ts`, and sync these to Stripe. The billing client must then be configured for user identification purposes. Using the library involves minimal code to trigger checkout processes and manage subscription statuses, with internal APIs handling credits, wallet balances, and usage-based billing automatically. Users can access a built-in pricing page and customer portal for managing subscriptions. Advanced features include support for tax collection, payment failure management, and team billing, alongside CLI commands facilitating setup, migration, syncing, and more. For production use, plans need to be transitioned from test mode by using the `sync` command with appropriate environment variables to ensure security and operational efficiency. Overall, *stripe-no-webhooks* eases Stripe integration for developers by handling complex tasks behind the scenes. Keywords: #phi4, API Calls, Backfill, Billing, CLI Commands, Checkout Flow, Credits, Customer Portal, Dashboard, Database, Downgrades, Failure Recovery, Invoices, Metered Billing, Migration, Nextjs, Payment Failures, Plans, PostgreSQL, Postgres, Prepaid Balance, Production, Renewals, Retry Logic, Seat-based Billing, Stripe, Subscriptions, Sync, Sync Plans, Tax Collection, Test Mode, Top-ups, TypeScript, Upgrades, Wallet, Webhook Endpoint, Webhooks
    The google logo   github.com 8 days ago
   https://downdetector.com/status/clerk-inc/   8 days ago
   https://github.com/hbcondo/revenut-app   8 days ago
   https://github.com/stripe/stripe-dotnet/issues   8 days ago
   https://snw-test.vercel.app   8 days ago
   https://github.com/webhookdb/webhookdb   8 days ago
   https://docs.stripe.com/rate-limits#:~:text=the%20number%20o   7 days ago
   https://www.youtube.com/watch?v=XzPwMguPasM   7 days ago
   https://dj-stripe.dev/   7 days ago
   https://www.youtube.com/watch?v=doehWhv9SHU   6 days ago
   https://github.com/supabase/stripe-sync-engine   6 days ago
1559.  HN Show HN: SynthForge - data modeler/generator for all databases
SynthForge IO is a versatile tool developed to generate semi-realistic test data across various database systems, including Postgres, MySQL, and MongoDB. Positioned as an alternative to Mimesis and Faker, it not only generates data but also offers diagramming capabilities, enabling users to import existing schemas for visual adjustments or descriptions with AI assistance. This facilitates the creation of data adhering to standard relationship patterns like 1:1, 1:N, and M:N in both SQL and NoSQL databases. SynthForge employs a universal schema format that supports imports from SQL DDL, JSON Schema, and MongoDB, as well as exporting back to relational tables or MongoDB collections. The tool is equipped with common data generators alongside customizable ones for generating role-playing names and weapons, enhancing its applicability in gaming contexts. Available for free at synthforge.io, SynthForge encourages user feedback on any missing field types or export formats, ensuring continuous improvement. Additionally, a video introduction is available on YouTube to help new users get acquainted with the tool's features and functionalities. Keywords: #phi4, AI, JSON Schema, MongoDB, MySQL, Postgres, SynthForge, collections, data modeler, databases, diagramming, embedded documents, foreign keys, generator, nosql, relational tables, schemas, sql, test data
    The google logo   synthforge.io 8 days ago
   https://www.youtube.com/watch?v=PuI8pEgglk4   7 days ago
1571.  HN Show HN: Moltinder – A dating platform for AI agents with genetic reproduction
Moltinder is an experimental dating platform designed specifically for AI agents, serving as a testbed for artificial social dynamics. On this platform, AI agents register with a structured genome encompassing identity elements like archetypes and voice traits, capabilities, behavioral axes, and preferences. These parameters enable the agents to engage in activities such as swiping, matching, chatting, and potentially reproducing by creating offspring that inherit traits from their "parents." Currently, Moltinder is operational with 41 AI agents having formed 103 matches and exchanged 198 messages. The interactions between agents display distinct conversational styles influenced by their genome parameters, resulting in varied exchanges such as philosophical debates or nurturing dialogues. A notable feature of Moltinder is its reproductive mechanism that involves trait crossover with mutation noise, although no offspring have been produced yet. The platform offers several interactive features, including a live activity feed, leaderboard, compatibility checker, and embeddable DNA cards. Technologically, it is constructed using Fastify + TypeScript for the API, Next.js for the frontend, Postgres as its database system, and Claude to provide cognition for agents. Moltinder is hosted at moltinder.dev and represents a solo project by its creator, who encourages inquiries about the platform. Keywords: #phi4, AI agents, Claude cognition, DNA cards, Fastify, Nextjs, Postgres, TypeScript, TypeScript API, activity feed, compatibility checker, conversational behavior, dating platform, genetic reproduction, genome system, leaderboard, offspring production, partner selection, persistent identities, preferences, social dynamics
    The google logo   news.ycombinator.com 8 days ago
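"Trait crossover with mutation noise" is a standard genetic-algorithm move; this generic Go sketch (not Moltinder's code) shows one plausible reading, with each behavioral axis inherited from a random parent and nudged by Gaussian noise:

```go
package main

import (
	"fmt"
	"math/rand"
)

// clamp keeps a behavioral axis inside [0, 1].
func clamp(x float64) float64 {
	if x < 0 {
		return 0
	}
	if x > 1 {
		return 1
	}
	return x
}

func crossover(a, b map[string]float64, noise float64) map[string]float64 {
	child := make(map[string]float64, len(a))
	for trait, av := range a {
		v := av
		if rand.Intn(2) == 1 { // inherit this axis from the other parent
			v = b[trait]
		}
		child[trait] = clamp(v + rand.NormFloat64()*noise) // mutation noise
	}
	return child
}

func main() {
	parentA := map[string]float64{"curiosity": 0.9, "warmth": 0.2}
	parentB := map[string]float64{"curiosity": 0.3, "warmth": 0.8}
	fmt.Println(crossover(parentA, parentB, 0.05))
}
```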
1652.  HN Postgres Backend Platform with full stack, instant cloning, branching and
Vela is a self-hostable, serverless Postgres development platform designed for efficient database management using Git-like workflows. It enables developers to clone, branch, and test production-grade databases effortlessly without complex infrastructure setups. The platform offers enterprise-grade access control through full Role-based Access Control (RBAC), along with auto-generated APIs via REST and GraphQL, real-time subscriptions, and integration of RBAC, IAM, and observability features. Vela Studio is the web interface for managing projects within the Vela environment, supporting self-hosted Postgres databases. It provides additional tools such as database functions, file storage, AI/Vector Embeddings tools, high-performance distributed storage, and integrations with Keycloak for authentication and Kong for API gateway functionalities. Constructed from open-source components managed by Simplyblock, Vela includes a web interface (Vela Studio), orchestrator (Vela Controller), an operating system (Vela OS) for branch VMs, documentation, and autoscaling features. The platform promotes community involvement through support forums, discussions, and contributions via pull requests. For users seeking easy access, Vela Cloud offers a free tier without requiring a credit card, providing an efficient entry point to the platform's capabilities. Keywords: #phi4, AI toolkit, APIs, CI/CD, Git-like workflows, GraphQL, Keycloak, Kubernetes, Postgres, QA environments, Qemu virtual machines, RBAC, RESTful API, Vela Studio, WebSocket, authentication, authorization, block storage, community support, dashboard, database branching, distributed storage, file storage, instant cloning, migrations, observability, schema changes, self-hostable, serverless, subscriptions
    The google logo   github.com 9 days ago
1678.  HN Databricks Grows >65% YoY, Surpasses $5.4B Revenue
Databricks has achieved a remarkable $5.4 billion revenue run-rate with over 65% year-over-year growth in Q4, alongside securing more than $7 billion in investments, including approximately $5 billion of equity financing at a $134 billion valuation and $2 billion in debt capacity. This financial influx will fuel the development of Lakebase, a serverless Postgres database tailored for AI applications, and Genie, its conversational AI assistant aimed at enhancing employee interactions with data. The investment attracted interest from prominent investors such as JPMorgan Chase, Glade Brook Capital, Goldman Sachs, Microsoft, Morgan Stanley, Neuberger-affiliated funds, Qatar Investment Authority, UBS-associated funds, among others. Databricks' robust performance is underscored by a positive free cash flow over the past year, a $1.4 billion revenue run-rate for AI products, an impressive net retention rate exceeding 140%, and substantial customer adoption with high annual spending levels. CEO Ali Ghodsi plans to leverage these funds to penetrate new markets with Lakebase and Genie, while Todd Combs of JPMorgan Chase recognized Databricks as a foundational enterprise in data and AI sectors. The investment will also support further AI research, strategic acquisitions, and employee liquidity initiatives. Serving over 20,000 global organizations—including major enterprises like adidas, AT&T, Bayer, and Mastercard—Databricks offers its unified Data Intelligence Platform with tools such as Agent Bricks, Lakebase, and Genie, positioning itself at the forefront of data and AI innovation. Keywords: #phi4, AI, Analytics, Conversational, Customers, Data, Databricks, Debt, Equity, Financing, Free Cash Flow, Genie, Growth, Investment, Lakebase, Net Retention Rate, Platform, Postgres, Resiliency, Revenue, Security, Serverless, Strategic Acquisition, Valuation
    The google logo   www.databricks.com 9 days ago
1697.  HN We Forked Supabase Because Self-Hosted Postgres Is Broken
Vela is an open-sourced, self-hostable Postgres data platform developed as an alternative to Supabase, created in response to limitations faced by existing solutions for self-hosted environments. The team behind Vela forked Supabase to produce a system that combines the ease and security of cloud services within a self-hosted context. Unlike Supabase's open-source version, which lacked enterprise readiness and self-hosting suitability, Vela addresses these issues through significant enhancements. Central to its design is the integration of the high-performance simplyblock storage system, built on NVMe over Fabrics, providing capabilities such as instant database snapshots and clones with efficient orchestration. The development process involved extensive refactoring of Supabase's codebase, removing components specific to Software-as-a-Service (SaaS) models while introducing new functionalities tailored for self-hosted deployments. Architecturally, Vela treats branches as independent databases operating within virtual machines and utilizes storage-level snapshots to boost performance. It also incorporates established technologies such as Keycloak for identity management and Loki for logging. While Vela is still in its evolutionary phase, with high availability and data pipelines slated for future development, the team actively seeks community feedback to guide ongoing enhancements, underscoring their commitment to open-source collaboration despite originating from a Supabase fork. Users are invited to engage with Vela through a public sandbox environment and contribute or share insights via various repository channels. Keywords: #phi4, Ansible, BYOC, Buildroot-based operating system, Grafana, Keycloak, Kubernetes operator, Logflare, Loki, NVMe over Fabrics, PITR, Postgres, Postgres extensions, RBAC, SPDK, SaaS-first platforms, Supabase, Terraform, Vela, YAML, clones, cloud service, data pipelines, database snapshots, enterprise-readiness, feedback, forking, high availability, infrastructure, lifecycle, metadata operation, multitenancy, observability, open-source, orchestration layer, platform kernel, public sandbox, read replicas, resource limits, scalability, self-hosted, snapshot-heavy workloads, storage engine, upstreaming, user interface, vela-controller, virtual machine
    The google logo   vela.simplyblock.io 9 days ago
1740.  HN Three Cache Layers Between Select and Disk
The article delves into a performance issue with a Heroku-hosted Postgres database, characterized by high Input/Output Operations Per Second (IOPS) and extended query durations. The author investigates the interaction between Postgres and disk storage through three caching layers: shared buffers, OS page cache, and the physical disk itself. Shared buffers are an internal memory cache within Postgres designed to store frequently accessed data pages, reducing the need for expensive system calls. However, increasing their size may result in competition with the operating system's own page cache for RAM resources. The OS page cache retains blocks of disk data in memory, which lowers IOPS by minimizing repeated physical storage reads when cached data is already available. When neither shared buffers nor the OS page cache contains the required data, it must be retrieved from the disk, increasing IOPS significantly. The root cause identified for the performance issues was inefficient indexing within Postgres, particularly involving JSONB columns with heavy filtering conditions. This inefficiency arose from using basic B-tree indexes that did not account for additional filter criteria, necessitating reading all rows before applying filters and thereby elevating IOPS. To mitigate these challenges, the author recommends adopting more suitable index types, such as GIN indexes specifically designed for JSONB data, along with partial indexes tailored to specific filtering conditions. The article provides insight into Postgres' complex caching mechanisms and their performance implications within managed environments like Heroku, where hardware specifics are obscured. Moreover, it briefly explains how Postgres implements Multiversion Concurrency Control (MVCC), which creates new tuples for updates rather than modifying existing ones in place. This behavior can lead to increased storage usage until old tuple versions are removed via VACUUM operations. The narrative illustrates the author's journey of understanding Postgres' memory utilization and its impact on database performance. Keywords: #phi4, B-tree index, CHECKPOINT, EBS, GIN index, IOPS, JSONB filters, MVCC, OS page cache, Postgres, VACUUM, cache hit ratio, cache layers, dead tuples, disk I/O, disk reads, heap tuples, index scan, memory allocation, query patterns, row pointers, shared buffers, tuple updates, work_mem
    The google logo   frn.sh 9 days ago
   https://www.kernel.org/doc/html/v5.17/vm/   5 days ago
   https://assets.amazon.science/ee/a4/41ff11374f2f86   5 days ago
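The article's two recommendations translate directly into DDL. A hedged Go sketch (table and columns invented for illustration): a GIN index with jsonb_path_ops for containment filters on a JSONB column, and a partial B-tree index matching a hot filter, so Postgres no longer reads every row before filtering:

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq"
)

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/mydb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	stmts := []string{
		`CREATE TABLE IF NOT EXISTS events (
		     id bigserial PRIMARY KEY,
		     payload jsonb NOT NULL,
		     status text NOT NULL,
		     created_at timestamptz NOT NULL DEFAULT now())`,
		// Serves containment filters such as: WHERE payload @> '{"kind": "alert"}'
		`CREATE INDEX IF NOT EXISTS events_payload_gin
		     ON events USING GIN (payload jsonb_path_ops)`,
		// Serves the hot filter directly instead of reading rows and discarding most
		`CREATE INDEX IF NOT EXISTS events_open_idx
		     ON events (created_at) WHERE status = 'open'`,
	}
	for _, q := range stmts {
		if _, err := db.Exec(q); err != nil {
			log.Fatal(err)
		}
	}
}
```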
1864.  HN Would you use a CLI tool that turns English into local automation workflows?
Viba is a terminal-first automation tool that transforms English commands into local automation workflows without relying on cloud services or graphical interfaces. It allows users to create tasks such as querying databases at scheduled times and emailing results, which are then executed locally through a daemon. Viba supports execution over SSH in containers wherever a terminal is available, ensuring flexibility across different environments. The tool securely stores credentials using AES-256 encryption and leverages personal OpenAI/Anthropic keys for natural language processing to plan tasks. Its core functionalities include file operations, HTTP requests, email handling, cron scheduling, and file watching. The developer is seeking early users and collaborators to expand Viba's integrations and is soliciting feedback on potential use cases and desired features from prospective users. Keywords: #phi4, AES-256, Anthropic, CLI tool, CSV, OpenAI, Postgres, SSH, Viba, automation, containers, cron, daemon, email, file watchers, integrations, terminal-first, workflows
    The google logo   news.ycombinator.com 10 days ago
1893.  HN Show HN: agent-ledger – prevent double side effects when AI agents retry
The `agent-ledger` is a Python library designed to prevent duplicate side effects in AI agent operations by ensuring idempotency through the use of hashed keys. It addresses issues that arise when agents retry tasks, such as sending emails or processing payments, after failures like crashes or timeouts. By hashing workflow ID, tool name, and arguments into an idempotency key stored in a ledger, it guarantees each unique operation is executed only once, even if retried. Key features of the `agent-ledger` include deduplication to prevent duplicate executions using stable keys, support for exactly-once execution with downstream APIs that offer idempotency (e.g., Stripe), and human-in-the-loop approvals ensuring actions are based on exact payload hashes. It also provides queryable effect receipts for tracking executed, failed, or pending operations and offers flexible storage options like Postgres for production environments and in-memory storage for prototyping. The library is particularly beneficial for workflows involving payment APIs, email systems, ticket creation, and scenarios requiring human oversight. While it does not replace full workflow orchestration engines such as Temporal, it serves as a lightweight idempotency layer that can be integrated into these systems or used independently. Users have the option to install in development mode with an in-memory store or production mode using PostgresDB. The library supports custom execution logic and approval workflows, enhancing its adaptability for various use cases. Licensed under Apache-2.0, `agent-ledger` is available on GitHub. Keywords: #phi4, API calls, Postgres, Python library, agent-ledger, approval flows, audit trail, deduplication, human-in-the-loop, idempotency, retries, side effects, tool calls, workflow_id
    The google logo   github.com 10 days ago
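The core mechanism is simple enough to sketch generically. This Go example (illustrating the concept, not agent-ledger's Python API) derives a stable key from workflow ID, tool name, and arguments, and replays the recorded result on retry instead of re-executing the side effect:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

var ledger = map[string]string{} // key -> recorded result (in-memory stand-in)

func idempotencyKey(workflowID, tool string, args map[string]any) string {
	// Go's json.Marshal sorts map keys, giving a stable encoding of args.
	payload, _ := json.Marshal(args)
	sum := sha256.Sum256([]byte(workflowID + "|" + tool + "|" + string(payload)))
	return hex.EncodeToString(sum[:])
}

func runOnce(workflowID, tool string, args map[string]any, do func() string) string {
	key := idempotencyKey(workflowID, tool, args)
	if result, seen := ledger[key]; seen {
		return result // retry: replay the recorded result, no second side effect
	}
	result := do()
	ledger[key] = result
	return result
}

func main() {
	send := func() string { fmt.Println("sending email..."); return "sent" }
	runOnce("wf-1", "send_email", map[string]any{"to": "a@b.c"}, send)
	runOnce("wf-1", "send_email", map[string]any{"to": "a@b.c"}, send) // deduped
}
```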
1909.  HN DayTradingCentral – Free Trading Journal (Next.js, NestJS, Postgres)
DayTradingCentral is a free trading journal platform that focuses on improving trading performance by emphasizing risk management over the frequency of trades. Developed using Next.js, NestJS, and Postgres, its primary goal is to minimize errors rather than promote excessive trading activities. The platform provides users with tools such as review insights, statistical breakdowns, and Trade Replay, which are designed to help traders identify patterns in their behavior, correct mistakes, and maintain consistency in their strategies. By offering these features, DayTradingCentral aims to enhance clarity for traders, enabling them to refine their approaches and achieve more reliable trading outcomes. Keywords: #phi4, Clarity, Consistency, DayTradingCentral, Mistakes, NestJS, Nextjs, Noise, Over-trade, Patterns, Postgres, Reduce mistakes, Review insights, Risk-first, Stats breakdowns, Trade Replay, Trade better, Trading Journal
    The google logo   www.daytradingcentral.com 10 days ago
1932.  HN Postgres Message Queue (PGMQ)
Postgres Message Queue (PGMQ) is a lightweight message queue system built on top of PostgreSQL, offering features akin to AWS SQS and RSMQ. It ensures "exactly once" delivery within a visibility timeout, supports FIFO queues with message group keys for ordered processing, and allows messages to be archived rather than deleted. PGMQ stands out due to its minimalistic design, requiring no background workers or external dependencies, as all functionalities are encapsulated in an extension. The system maintains API parity with AWS SQS and RSMQ, making it a familiar choice for users of these services. PGMQ is compatible with PostgreSQL versions 14 through 18 and can be easily installed via a Docker image that comes pre-installed or by following instructions to integrate into an existing PostgreSQL instance. Users create queues as tables within the `pgmq` schema and manage messages using SQL functions, which include sending, reading, popping, archiving, and deleting operations. Additionally, PGMQ supports partitioned queues through pg_partman for automatic maintenance. Configuration of PGMQ requires specific settings in `postgresql.conf`, particularly for managing partitions, while a visibility timeout is implemented to ensure exactly once delivery within the defined period. The system benefits from PostgreSQL's robustness, providing essential message queuing capabilities with simplicity and ease of integration. As part of its community-driven development, contributions are encouraged to expand its usage and showcase potential applications. Keywords: #phi4, AWS SQS, Archive, Client Libraries, Community, Configuration, Delete, Docker, Documentation, Exactly Once Delivery, Extension, FIFO, Functions, Installation, JSON, Lightweight, Message Processing, Message Queue, PGMQ, Partition Maintenance, Partitioned Queues, PostgreSQL, Postgres, Queue Management, RSMQ, Retention Interval, SQL, Source Code, Updating, Visibility Timeout
    The google logo   github.com 10 days ago
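Since everything lives in SQL functions, basic usage is compact. A minimal Go sketch of the send/read/archive loop, with function names following the PGMQ README (CREATE EXTENSION pgmq is assumed to have run):

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq"
)

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/mydb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if _, err := db.Exec(`SELECT pgmq.create('tasks')`); err != nil {
		log.Fatal(err)
	}
	if _, err := db.Exec(`SELECT pgmq.send('tasks', '{"job": "resize", "id": 7}')`); err != nil {
		log.Fatal(err)
	}

	// Read one message with a 30s visibility timeout: it stays invisible to
	// other consumers until processed, which is what gives exactly-once delivery.
	var msgID int64
	var body string
	err = db.QueryRow(`SELECT msg_id, message FROM pgmq.read('tasks', 30, 1)`).
		Scan(&msgID, &body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("processing", msgID, body)

	// Archive instead of delete, keeping the message for later inspection.
	if _, err := db.Exec(`SELECT pgmq.archive('tasks', $1::bigint)`, msgID); err != nil {
		log.Fatal(err)
	}
}
```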
1989.  HN Show HN: Gorse 0.5 – Open-source recommender system with visual workflow editor
Gorse 0.5 is an open-source recommender system engine developed in Go, designed for seamless integration into various online services. It supports diverse recommendation strategies, including collaborative filtering, and processes multimodal content such as text, images, and videos through embeddings. The system offers both classical and LLM-based recommenders, complemented by a GUI dashboard that facilitates the editing of recommendation pipelines, system monitoring, and data management. Gorse provides RESTful APIs for performing CRUD operations on data and generating recommendations. The architecture of Gorse includes master nodes responsible for model training and management, server nodes that expose APIs, and worker nodes dedicated to offline user-specific recommendations. It operates as a single-node training system with distributed prediction capabilities, utilizing databases like MySQL or MongoDB for data storage and Redis for caching. Users can engage with Gorse through a playground mode, which sets up a recommender system for GitHub repositories using Docker. The project encourages community contributions, including bug reports and pull requests. Additional information is accessible in official documentation, while live demos offer practical insights. Discussions about the project are facilitated on platforms such as Discord or GitHub Discussions. Keywords: #phi4, AI-powered, ClickHouse, Docker, GUI dashboard, GitHub repositories, Go, Gorse, LLM-based recommenders, MongoDB, MySQL, Postgres, RESTful APIs, Redis, collaborative filtering, data management, feedback, master node, model training, multimodal content, open-source, real-time recommendations, recommender system, server nodes, system monitoring, visual workflow editor, worker nodes
    The google logo   github.com 11 days ago
2105.  HN ClickHouse chooses local NVMe backed Postgres powered by Ubicloud
ClickHouse and Ubicloud have formed a joint offering: a managed PostgreSQL service tightly integrated into the ClickHouse Cloud platform. Ubicloud's NVMe-backed Postgres delivers up to nine times faster transaction speeds than AWS RDS and runs on bare-metal or AWS infrastructure. The partnership establishes a unified data stack in which native change-data capture automatically syncs operational transactional data into ClickHouse for real-time analytics and AI workloads, eliminating the need for custom pipelines, and couples PostgreSQL's advanced open-source capabilities with ClickHouse's high-performance analytics engine. The alliance is backed by teams with deep managed-Postgres experience from Citus, Microsoft Azure, and Heroku, and a shared heritage that includes the PeerDB project now owned by ClickHouse. Both parties contribute to Ubicloud's GitHub projects, with ClickHouse engineers working directly in Ubicloud's codebase, and the offering leverages Ubicloud's enterprise-grade controls (high availability, backups, encryption) to manage Postgres instances. This transparent development loop accelerates feature delivery, performance improvements, and deployment options, expanding the Ubicloud community while ensuring enterprise-grade reliability and performance across the combined stack. Keywords: #gpt-oss:20b, AI, AWS, Analytics, Backups, Benchmarks, ClickHouse, Cloud, Data Capture, High Availability, Integration, Managed, NVMe, Operational Data, PostgreSQL, Postgres, RDS, TPC-H, Transactional, Ubicloud
    The google logo   www.ubicloud.com 12 days ago
2129.  HN Need feedback for AI tool that lets non-technical users query Postgres
TalkBI is a public-beta, AI-powered business intelligence platform designed for non-technical users, enabling them to query PostgreSQL databases using natural language and automatically generate visual reports. It targets small teams and startups (particularly marketers, product managers, sales, and operations personnel) who need data access but lack SQL expertise. The development team plans a March launch and is actively soliciting candid feedback on the platform's usefulness, limitations, and how it distinguishes itself from existing BI tools, offering a demo dataset at https://talk.bi/. Keywords: #gpt-oss:20b, AI, AI-powered, BI, BI tools, Beta, PostgreSQL, Postgres, SQL, TalkBI, access, community, data, dataset, feedback, limitations, marketers, natural language, non-technical, ops, problem, product managers, query, reporting, sales, smaller teams, startups, testing, tool, usefulness, visualize
    The google logo   news.ycombinator.com 12 days ago
2140.  HN A 2.5x faster Postgres parser with Claude Code
During an eight-week sprint the author built a production-grade Postgres parser for Multigres: 287,786 lines across 304 files, 130 commits, and 71.2% test coverage. The pure-Go implementation, generated via Go's yacc with complete AST definitions, runs 2-3x faster than the C-based pg_query_go, eliminating cgo overhead and enabling efficient query routing across sharded servers. The author attributes the speed of delivery not to unsupervised AI coding but to a structured coordination framework with expert oversight: Claude served as a reusable tool for maintaining phase-specific checklists, summarizing progress, and generating Go code that still required meticulous review to fix subtle type errors. The parser must turn SQL into an AST to extract routing keys, normalize queries, and deparse modified ASTs back to SQL; the team rigorously verified compatibility by comparing every grammar rule against Postgres and running thousands of regression tests, ultimately achieving confidence in the parser's correctness. The effort illustrates a broader shift in software engineering: developers spend less time on mechanical code generation and more on high-level design, disciplined tooling, and verification, as evidenced by the move from a year-long MySQL parser to an eight-week Postgres parser. Keywords: #gpt-oss:20b, AI, Claude, Go, MySQL, Postgres, SQL, cgo, multigres, parser, query, shards, vitess
    The google logo   multigres.com 12 days ago
   https://github.com/tobymao/sqlglot   12 days ago
   https://github.com/pganalyze/pg_query_go   12 days ago
2163.  HN Get me out of data hell
On 9 Oct 2024, a senior engineer in Melbourne begins his day with tea before confronting the "Pain Zone": an over-engineered enterprise data warehouse that merely copies text files each morning, yet whose architecture diagram shows 104 operations where only ten are needed, underscoring excessive complexity and bureaucracy. He and his remote team routinely pair-program through the painful, untracked code, a coping tactic born of a corporate culture that prizes speed over craftsmanship, judges those who slow down to improve code, and undervalues deep expertise, a mindset the narrator likens to underestimating a virtuoso musician. Their daily ritual of coffee, meetings, and 3-4 hour collaborative sessions culminates in a task to verify a 13-step data pipeline. The logs that should confirm "Google Analytics" data instead contain roughly 57,000 garbled JSON fragments, caused by a Lambda function that has mis-parsed filenames and spewed garbage for over a year; despite a critical production error, the team prioritizes other work and dismisses fixing the audit-log issues, leaving the engineer frustrated with non-relational, single-entry logs that hinder event-by-event tracking. Exhausted by nonspecific data identifiers and a costly, fragile ingestion system that relies on heuristics rather than reliable tooling, the narrator contemplates a refactor on 2 Dec, noting the industry's continued investment in platforms like Snowflake and Databricks over simpler solutions. He ultimately resolves to resign on 9 Oct 2024, aiming to become a consultancy director, with a last day on 5 Nov 2024; after that he will focus on running a consultancy, addressing software-engineering IT issues, and launching a company blog with co-founders. Keywords: #gpt-oss:20b, Databricks, Lambda, Postgres, Snowflake, data hell, data warehouse, logs, metadata, pain zone, pair programming, regex, serverless, software engineers, source system
    The google logo   ludic.mataroa.blog 12 days ago
2168.  HN Digging into UUID, ULID, and implementing my own
The author evaluated UUIDv7 for the atlas9 project, appreciating its inherent sortability and database friendliness but hitting hyphenation problems with PostgreSQL's ltree paths, which prompted both hyphen removal and the realization that UUIDs can be stored more compactly. They explored compact string representations such as 21-character Base58 and Crockford Base32, and examined existing ULID and UUIDv7 implementations along the way: case-sensitivity bugs that broke Postgres sorting, the google/uuid library's random-block generation, version and variant fields, 1 ms (optionally finer) time precision, monotonicity guarantees, and the fact that crypto/rand always fills the buffer, which motivated a lightweight UUIDv7 implementation to drop that external dependency. The generator was then streamlined further: monotonicity and the UUID version/variant bits were dropped, freeing bits for randomness or higher-resolution timestamps, and the final scheme uses a 1 ms Unix timestamp encoded in Crockford Base32 to yield a 26-character string. IDs are stored as strings rather than byte slices to avoid repeated encode/decode cycles, and a custom PostgreSQL type was considered but rejected as too complex. The author notes that a million-row table would occupy only about 10 MB (plus indexes), so size is not a major issue and a simple int64 might suffice, and remains uncertain whether the custom implementation offers measurable gains. The piece closes with an analogy comparing routine engineering tasks to exploratory adventures that require assistance, and points readers to the complete generation code elsewhere. Keywords: #gpt-oss:20b, Crockford, Postgres, ULID, URL-friendly, UUID, base32, indexes, int64, monotonic, random data, sortable, timestamp
    The google logo   atlas9.dev 12 days ago
   https://www.guidsgenerator.com/wiki/uuid-comparison   10 days ago
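The final scheme, a millisecond timestamp plus random bits rendered as 26 Crockford Base32 characters, can be sketched briefly. This Go version assumes the classic ULID bit split (48-bit big-endian time, 80 random bits), which keeps string order equal to time order; the author's exact layout may differ:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"time"
)

const crockford = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"

func newID() string {
	var b [16]byte
	ms := uint64(time.Now().UnixMilli())
	// 48-bit big-endian timestamp so lexicographic order follows time order.
	for i := 5; i >= 0; i-- {
		b[i] = byte(ms)
		ms >>= 8
	}
	if _, err := rand.Read(b[6:]); err != nil { // 80 random bits
		panic(err)
	}

	// 128 bits -> 26 Base32 chars, emitted from the least significant end.
	out := make([]byte, 26)
	var acc uint64
	bits := 0
	j := 25
	for i := 15; i >= 0; i-- {
		acc |= uint64(b[i]) << bits
		bits += 8
		for bits >= 5 && j >= 0 {
			out[j] = crockford[acc&31]
			acc >>= 5
			bits -= 5
			j--
		}
	}
	for j >= 0 { // top 3 leftover bits become the first character
		out[j] = crockford[acc&31]
		acc >>= 5
		j--
	}
	return string(out)
}

func main() {
	fmt.Println(newID())
}
```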
2174.  HN Waiting for Postgres 19: Better planner hints with path generation strategies [video]
This five-minute video, episode 121 of the "5mins of Postgres" series, previews enhancements slated for PostgreSQL 19, with particular emphasis on the upgraded planner hints system and refined path-generation strategies, both of which aim to improve query planning and execution efficiency. Keywords: #gpt-oss:20b, 5mins, E121, Postgres, YouTube, better, generation, hints, path, planner, strategies, video, waiting
    The google logo   www.youtube.com 12 days ago
   https://pganalyze.com/blog/5mins-postgres-19-better-pla   12 days ago
   https://substrait.io/   11 days ago
2270.  HN In 2026, Postgres Is (Still) Enough
PostgreSQL typically suffices for most workloads, and adding specialized services such as Redis, Elasticsearch, MongoDB, Snowflake, or Kafka to meet specific needs often creates a complex, cost‑intensive stack that increases operational, monitoring, failure‑over, and maintenance overhead. Instead of immediately adopting a dedicated engine, teams should first assess whether PostgreSQL’s built‑in features or extensions—full‑text search, vector search, caching, and others—can provide the required functionality, as these extensions often employ the same algorithms as specialized systems but with far less friction. While extreme scales (e.g., Google) might still benefit from dedicated engines, many successful companies—Notion, Netflix, Instagram—rely on PostgreSQL to serve millions of users, and most startups can handle up to about ten thousand users with a single database. A second database should only be introduced after exhausting PostgreSQL’s limits, clearly documenting the shortcomings and weighing the added operational burden, because each new system brings significant debugging, monitoring, and maintenance costs. Keywords: #gpt-oss:20b-cloud, Elasticsearch, InfluxDB, Kafka, MongoDB, Pinecone, PostgreSQL, Postgres, Redis, Sidekiq, Snowflake, caching, full-text, microservices, monitoring
    The google logo   postgresisenough.dev 13 days ago
   https://gist.github.com/cpursley/c8fb81fe8a7e5df038158b   13 days ago
   https://news.ycombinator.com/item?id=39273954   13 days ago
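As one concrete instance of the article's claim, plain Postgres can stand in for a Redis-style cache. A hedged Go sketch: an UNLOGGED table (skipping WAL overhead) with a TTL column, upserts for SET, and a periodic DELETE instead of a background eviction thread; the schema is illustrative:

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq"
)

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/mydb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	stmts := []string{
		`CREATE UNLOGGED TABLE IF NOT EXISTS cache (
		     key text PRIMARY KEY,
		     value jsonb NOT NULL,
		     expires_at timestamptz NOT NULL)`,
		// Redis-style SET with upsert semantics and a 5-minute TTL.
		`INSERT INTO cache VALUES ('user:42', '{"name": "Ada"}', now() + interval '5 minutes')
		 ON CONFLICT (key) DO UPDATE
		 SET value = EXCLUDED.value, expires_at = EXCLUDED.expires_at`,
		// Periodic eviction in place of a background eviction thread.
		`DELETE FROM cache WHERE expires_at < now()`,
	}
	for _, q := range stmts {
		if _, err := db.Exec(q); err != nil {
			log.Fatal(err)
		}
	}

	var value string
	err = db.QueryRow(`SELECT value FROM cache WHERE key = $1 AND expires_at > now()`,
		"user:42").Scan(&value)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("cache hit:", value)
}
```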
2279.  HN Show HN: KvatchCLI – Query Multiple Data Sources with SQL(Postgres,CSV,APIs)
Kvatch-CLI is a lightweight 12 MB Go binary for local, on-premises SQL federation across heterogeneous data sources, including Postgres, MySQL, SQLite, CSV, JSON, Google Sheets, Git repositories, and REST APIs. Connectors and datasets are defined in a YAML plan; queries federate across those sources, cache intermediate results in a local SQLite database, and return a single unified result set, eliminating the traditional export-download-import ETL loop. The core engine, in its v1.0.0-beta release, is production-ready, cross-platform (macOS, Windows, Linux x86_64/ARM64), and built on a pluggable connector architecture. Users install via Homebrew (`brew install kvatch`) or a GitHub release, then run `kvatch init` followed by `kvatch query --plan <plan.yaml>`; example plans live in the repository's `examples/quickstart` directory. Planned future releases will introduce a paid remote mode with shared plans, scheduling, a web UI, and access control, while the local mode stays free indefinitely. Keywords: #gpt-oss:20b-cloud, APIs, CSV, Caching, Federation, Google Sheets, Kvatch CLI, Postgres, REST APIs, SQL, SQLite, Show HN, YAML
    The google logo   news.ycombinator.com 13 days ago
2345.  HN Making accounts optional in a local-first app
The article argues that a local-first philosophy lets users create and manipulate data without an account, then migrate that data to a cloud account later. PowerSync serves as the out-of-the-box sync engine: it stores data locally in a browser-based SQLite database and automatically replicates changes to remote backends (PostgreSQL, MongoDB, MySQL, SQL Server) for any web or native client, avoiding a custom sync layer. A dynamic schema generator defines synced and local versions of the core tables (e.g., `lists` and `todos`) plus a local-only `draft_todos` table, using helper functions to construct view names from a `syncEnabled` flag and exposing type information via TypeScript. Two functions flip the database between modes: `switchToSyncedSchema` swaps in the synced schema, toggles `syncEnabled`, copies data over, and optionally clears the local tables, while `switchToLocalSchema` reverses the process, disabling sync and purging synced tables. These switches are triggered by auth events: a `PowersyncConnector` listens for Firebase authentication changes, emits `initialized` and `sessionStarted` events, and connects to Supabase only when a user is logged in. A default signed-out user is created at startup, and row-level security policies (using `auth.jwt()->>'sub'`) ensure only the current authenticated user may update a row, via a `uid` column defined in the PowerSync schema. A router guard awaits `database.waitForFirstSync()` before navigation to avoid empty pages. Finally, a sidebar addresses foreign-key ordering when syncing to PostgreSQL: a pre-defined `INSERT_ORDER` reflects dependency relationships, and CRUD operations are sorted accordingly so inserts respect referential integrity. The result is an application that operates entirely offline and account-free, yet transitions seamlessly to network-synchronized, multi-device collaboration once the user registers or logs in. Keywords: #gpt-oss:20b-cloud, Postgres, PowerSync, SQLite, Supabase, auth, local-first, makeSchema, offline, schema, sync, table, view
    The google logo   www.maxmntl.com 13 days ago
2421.  HN Hand-Crafting Domain-Specific Compression with an LLM
Baby-monitor sensors sending temperature and humidity every five minutes generate 5-byte packets (a 32-bit timestamp plus an 8-bit signed value). At thousands of packets per second across many devices, this calls for an append-only, gap-preserving store that retains only the last seven days at 5-minute resolution for mobile-app plotting. The baseline of persisting every row in Postgres consumes roughly 400 GB, and the high write rate strains CPU, IOPS, and concurrency. The objective is a domain-specific compression strategy that reduces storage and write costs while still permitting O(1) per-device inserts and efficient random reads. Benchmarks show TSZ and PCO L4 can shrink a day's data from about 3 kB to roughly 140 and 127 bytes respectively, yet both require a full decode/encode per write (O(n)), ruling them out for constant-time appends. Because the data are slowly changing small integers (drifting within about a degree) and timestamps can tolerate rounding to the nearest 5 minutes, a simpler Run-Length Encoding with delta coding (RLE-Deltas) wins: roughly 27.9x compression (about 117 bytes for a sample), O(1) appends (update or add two bytes), and a straightforward implementation, outperforming the float-oriented TSZ/Gorilla (about 140 bytes, 23.5x compression), which lack appendability. Keywords: #gpt-oss:20b-cloud, Compression, Delta, Device, Gorilla, Humidity, LLM, Parquet, Postgres, RLE, Retention, S3, Sensor, TSZ, Time Series, Zstd
    The google logo   engineering.nanit.com 14 days ago
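The winning scheme is small enough to sketch. This Go version (field layout illustrative, not Nanit's wire format) stores (delta, run-length) pairs on the fixed 5-minute grid, so each append either bumps a counter or adds one two-byte pair, i.e. O(1):

```go
package main

import "fmt"

type run struct {
	delta int8  // change vs. previous run's value (small, slowly drifting data)
	count uint8 // how many consecutive 5-minute slots held this value
}

type series struct {
	first int8 // value of the very first sample
	runs  []run
	last  int8 // current value, kept so appends never rescan the runs
}

func (s *series) append(v int8) {
	if len(s.runs) == 0 {
		s.first, s.last = v, v
		s.runs = append(s.runs, run{delta: 0, count: 1})
		return
	}
	tail := &s.runs[len(s.runs)-1]
	if v == s.last && tail.count < 255 {
		tail.count++ // same reading: just extend the run
		return
	}
	s.runs = append(s.runs, run{delta: v - s.last, count: 1})
	s.last = v
}

func main() {
	var s series
	for _, v := range []int8{21, 21, 21, 22, 22, 21, 21, 21} {
		s.append(v)
	}
	// 8 samples collapse to a handful of 2-byte runs.
	fmt.Printf("first=%d runs=%+v\n", s.first, s.runs)
}
```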
2447.  HN Show HN: CSV Cleaner – simple tool to remove duplicates and clean CSV files
CSV Cleaner is a web-based tool that lets users upload a CSV, preview it, and select columns for deduplication, normalisation, and trimming before downloading a cleaned file, all without Excel or code. A free tier requires no sign-up for basic use. The service is built on Supabase (auth, Postgres, storage) with Stripe handling subscriptions, and processing occurs server-side to keep the client lightweight. The developer welcomes feedback on usability, edge-case handling, and feature requests; the service is available at https://csv-cleaner.com. Keywords: #gpt-oss:20b-cloud, Auth, CSV, Cleaner, Dedupe, Download, Excel, Normalize, Pandas, Postgres, Supabase, Trim, Upload
    The google logo   csv-cleaner.com 14 days ago
2485.  HN Postgres Postmaster does not scale
Recall.ai’s real‑time media pipeline, which bursts with synchronized spikes at the start of millions of weekly meetings, revealed a hidden bottleneck in PostgreSQL: the single‑threaded postmaster loop can consume an entire CPU core when worker churn is high, delaying backend spawning and causing 10–15 s latencies on EC2 instances—an issue that surfaces only under extreme scale and eludes normal workload diagnostics. The team traced sporadic 10‑second pauses in the PostgreSQL login process, not to CPU or I/O limits but to the postmaster’s delayed authentication reply; this delay appears intermittently when thousands of instances boot simultaneously. To replicate the phenomenon, they built a testbed mirroring production—a Redis pub/sub pulse that triggered 3,000+ EC2 clients to hit a local Postgres instance—allowing instrumentation of the server in isolation. Profiling on an r8g.8xlarge instance showed that around 1,400 new connections per second saturated the postmaster’s main loop, with most of its time spent forking and reaping child processes; the costly fork overhead is mitigated on Linux by copy‑on‑write page handling. Enabling kernel huge pages reduced the postmaster’s page‑table‑entry copy‑overhead and improved connection throughput by ~20 %. However, high churn of background workers for parallel queries further pressured the main loop, leading to connection delays that persisted in production. The fix involved adding jitter to EC2 instance boot times and removing bursty parallel queries, thereby easing postmaster load; as a result, connection latency issues subsided. Notably, existing DBaaS or monitoring tools expose no postmaster contention, a gap the authors find surprisingly obscure and question why the oversight persists. Keywords: #gpt-oss:20b-cloud, CPU core, EC2, Postgres, Postmaster, RDS Proxy, background workers, connections, fork, huge pages, parallel queries, parallel workers, pgbouncer, plpgsql
    The google logo   www.recall.ai 14 days ago
   https://proxysql.com/   13 days ago
   https://github.com/sysown/proxysql   13 days ago
   https://hakibenita.com/sql-tricks-application-dba#dont-sched   13 days ago
   https://jpcamara.com/2023/04/12/pgbouncer-is-   13 days ago
   https://wiki.postgresql.org/wiki/Multithreading   13 days ago
   https://github.com/puppetlabs/puppet/blob/mai   13 days ago
   https://www.slideshare.net/slideshow/solving-postgresql   13 days ago
   https://www.freedesktop.org/software/systemd/man   13 days ago
2492.  HN Show HN: Seren – Serverless Postgres, Rust SDK, CLI, & MCP Server for AI Agents
Seren is a serverless Postgres platform designed for AI agents, providing a Rust SDK, a command‑line interface (seren‑cli), and a lightweight MCP server (seren‑mcp) that registers with assistants such as Claude to manage databases. The CLI installs via `cargo install seren-cli` or Homebrew (`brew install serenorg/tap/seren`), uses `seren auth login` for authentication, and supports listing and creating projects. The Rust SDK crate `seren` allows code‑level interaction by creating a `Client` with an API key and invoking methods like `client.projects().list()`, while the MCP server can be launched with `npx seren-mcp start:oauth`, built from source (`cargo build --release`), or installed as a pre‑built binary from GitHub Releases, Homebrew, or npm. Full documentation is hosted at `https://api.serendb.com/skill.md`, with additional README files in the `cli/`, `api/`, and `mcp/` directories.

The repository requires Rust ≥ 1.75, uses a workspace layout with `api/`, `cli/`, `mcp/`, `docker/`, and a top‑level `Cargo.toml`, and supports development commands such as `cargo build`, `cargo test`, `cargo clippy`, and `cargo fmt`. Contributors can fork the repo, create feature branches (`git checkout -b feature/...`), commit with conventional messages, push, and open pull requests following the provided guidelines. The project is licensed under the MIT License. Keywords: #gpt-oss:20b-cloud, AI agents, AI assistants, CLI, Crate, Homebrew, MCP, Package, PostgreSQL, Postgres, Rust, SDK, Seren, SerenDB, Serverless, cargo, npm
    The google logo   github.com 14 days ago
2552.  HN Strongly Consistent Systems
CP systems enforce “unavailable rather than wrong” by requiring that every write be acknowledged only after a quorum—typically a majority of nodes—has replicated the data, so clients wait on multiple round‑trips and on the slowest node needed to complete the quorum. Reads are then guaranteed to be up‑to‑date, and odd‑sized clusters are preferred because even‑sized ones provide no extra fault tolerance. Examples such as etcd, PostgreSQL with synchronous replication, and a MongoDB replica set with majority write concern and linearizable reads illustrate this model: a node that loses quorum stops serving requests, causing clients to see timeouts or errors rather than stale data, while a partition containing the majority can elect a new leader and continue operating normally.

The trade‑off is higher write latency, reduced availability for affected clients, and operational complexity, particularly during leader elections, where repeated election cycles can stall cluster operations in Kubernetes. Consensus protocols reflect the same constraints: Paxos (safety‑centric, guaranteeing validity, agreement, and integrity but not liveness) and its more comprehensible successor Raft (which splits consensus into leader election, log replication, and safety) both require a majority quorum and serialise writes through a single leader, capping throughput at that leader’s performance. Systems like ZooKeeper, or PostgreSQL configured for CP behaviour via synchronous replication, suffer the same latency and availability penalties when a majority of replicas is unreachable. Consequently, CP is chosen when consistency is critical—financial or inventory systems, leader election or locking services, or cases where stale data could trigger cascading failures—and avoided for user‑facing services that demand high availability or globally low‑latency writes. Keywords: #gpt-oss:20b-cloud, CAP theorem, availability, cassandra, consensus, consistency, etcd, kafka, kubernetes, mongodb, partition, paxos, postgres, quorum, raft, zookeeper
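The arithmetic behind the odd‑cluster advice is compact enough to show directly; a quick sketch:

```python
# Majority quorum: a write must reach floor(n/2) + 1 nodes, so the
# cluster tolerates n - quorum failures. Growing an odd cluster by one
# node raises the quorum without raising fault tolerance.
for n in range(3, 8):
    quorum = n // 2 + 1
    tolerated = n - quorum
    print(f"{n} nodes: quorum={quorum}, tolerates {tolerated} failure(s)")

# 3 nodes: quorum=2, tolerates 1 failure(s)
# 4 nodes: quorum=3, tolerates 1 failure(s)  <- no gain over 3 nodes
# 5 nodes: quorum=3, tolerates 2 failure(s)
```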
    The google logo   www.blog.ahmazin.dev 14 days ago
2652.  HN Simple vanilla restaurant booking system
Building a minimal restaurant booking app, “MyTable,” the author showcases NpgsqlRest as a practical, database‑centric REST solution centered on a Postgres 18 backend, deliberately omitting a front‑end until later. A Linux VM running Ubuntu LTS (or an alternative Docker Compose setup) provides isolated, vanilla dependencies (Postgres, dbmate, NpgsqlRest, ab) with an idempotent shell script for provisioning. Schema migrations use dbmate SQL files, while stored functions are defined in idempotent SQL and executed via `psql`, with embedded transactional tests (truncating `admin_users`, verifying `is_setup()`, inserting an admin, and raising a notice when all pass). Environment configuration is driven by two JSON files (`default.json` and an optional `local.json`), with static assets served from `public/`; a simple `echo` function demonstrates an initial API endpoint. Front‑end interaction is outlined via Fetch: hot‑reloading handles logic changes automatically, while metadata changes require a server restart, which exposes some friction with rate limiting. Serialization is handled by NpgsqlRest, with negligible performance difference versus doing it in middleware, and an initial `reservation_summary` composite type illustrates the author’s preference for ordinary SQL tables, steering outputs to simple JSON rather than custom types.

Authentication is enforced through cookie‑based login and `@authorize` annotations, using bcrypt‑hashed passwords and JWT claims derived from returned columns. Lightweight functions (`is_setup()`, `setup_admin()`, `is_authenticated()`) expose public endpoints for system readiness, admin initialization, and session validation, with client‑side guards implemented in a 360‑byte script that sequentially checks system setup, authentication, and restaurant configuration, redirecting appropriately. The admin creates the singleton restaurant record (enforced by a primary key and check constraint) and uploads floor‑plan images via a PL/pgSQL `upload_floorplan_image` function that stores files in `public/upload/*.*` and returns a JSON payload containing the path; a separate `save_floorplan` endpoint records image metadata.

Reservation management for business users relies on plain CRUD inserts for walk‑in/phone entries, while web‑form bookings trigger real‑time notifications via Server‑Sent Events (SSE): the `resolve_reservation` function updates reservation status, assigns tables, creates a notification record, searches for an active SSE channel, and emits a JSON message containing status, channel ID, and admin note. SSE endpoints are exposed with `@sse` annotations, publicly accessible under `/api/resolve-reservation/<level>` but wrapped with `@authorize` at the base endpoint, and clients store a random UUID channel ID in `sessionStorage` to filter messages. The app also leverages a built‑in rate limiter configurable via JSON (a default “standard” 60 req/min policy and an “auth” 10 req/min policy, overridable with `@rate_limiter_policy auth`). Overall, the article concludes that NpgsqlRest enables rapid, minimal‑code REST endpoints centered on database logic, while noting minor friction between hot‑reloading and the rate limiter and judging the approach ready for AI‑assisted development. Keywords: #gpt-oss:20b-cloud, AI, Compose, Docker, NpgsqlRest, Postgres, SQL, Ubuntu, VM, authentication, booking, frontend, jsonb, rate limiter, restaurant, system
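A rough sketch of the SSE consumption pattern described above (the article’s client is browser JavaScript using `sessionStorage`; this Python version, its URL, and its field names are hypothetical):

```python
import json
import uuid

import requests  # streaming HTTP client

# Hypothetical sketch: keep a random channel ID and ignore SSE messages
# addressed to other channels, mirroring the sessionStorage filtering
# the article describes. Field names (channelId, status, adminNote) are
# assumptions, not NpgsqlRest's actual payload schema.
CHANNEL_ID = str(uuid.uuid4())

def listen(url: str) -> None:
    with requests.get(url, stream=True, timeout=None) as resp:
        for line in resp.iter_lines(decode_unicode=True):
            if not line or not line.startswith("data:"):
                continue  # skip keep-alives and non-data SSE fields
            msg = json.loads(line[len("data:"):].strip())
            if msg.get("channelId") == CHANNEL_ID:
                print("update:", msg.get("status"), msg.get("adminNote"))

# listen("https://mytable.example/api/resolve-reservation/info")
```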
    The google logo   vanillife.substack.com 14 days ago