Show HN: C.O.R.E – Open-source, user-owned, shareable memory for Claude, Cursor

github.com

9 points by Manik_agg 16 hours ago

Hi HN,

I keep running into the same problem: each AI app “remembers” me in its own silo. ChatGPT knows my project details, Cursor forgets them, Claude starts from zero… so I end up re-explaining myself dozens of times a day across these apps.

The deeper problem

1. Not portable – context is vendor-locked; nothing travels across tools.

2. Not relational – most memory systems store only the latest fact (“sticky notes”) with no history or provenance.

3. Not yours – your AI memory is sensitive first-party data, yet you have no control over where it lives or how it’s queried.

Demo video: https://youtu.be/iANZ32dnK60

Repo: https://github.com/RedPlanetHQ/core

What we built

- CORE (Context Oriented Relational Engine): An open source, shareable knowledge graph (your memory vault) that lets any LLM (ChatGPT, Cursor, Claude, SOL, etc.) share and query the same persistent context.

- Temporal + relational: Every fact gets a full version history (who, when, why), and nothing is wiped out when you change it; the old version is just timestamped and retired.

- Local-first or hosted: Run it offline in Docker, or use our hosted instance. You choose which memories sync and which stay private.
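To make the temporal + relational idea concrete, here is a minimal sketch of how a versioned fact store can behave. This is illustrative only; the class and method names (`TemporalFact`, `FactStore`, `assert_fact`) are assumptions for the example, not CORE's actual API:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class TemporalFact:
    subject: str
    predicate: str
    obj: str
    created_at: datetime
    retired_at: Optional[datetime] = None  # None means the fact is still current

class FactStore:
    def __init__(self):
        self.facts: list[TemporalFact] = []

    def assert_fact(self, subject: str, predicate: str, obj: str) -> None:
        now = datetime.now(timezone.utc)
        # Retire (not delete) any current fact for the same subject/predicate,
        # so history and provenance are preserved.
        for f in self.facts:
            if f.subject == subject and f.predicate == predicate and f.retired_at is None:
                f.retired_at = now
        self.facts.append(TemporalFact(subject, predicate, obj, now))

    def current(self, subject: str, predicate: str) -> Optional[str]:
        # Latest non-retired value, if any.
        for f in self.facts:
            if f.subject == subject and f.predicate == predicate and f.retired_at is None:
                return f.obj
        return None

    def history(self, subject: str, predicate: str):
        # Full timeline: every value ever asserted, with its lifespan.
        return [(f.obj, f.created_at, f.retired_at)
                for f in self.facts
                if f.subject == subject and f.predicate == predicate]
```

With this shape, changing a preference (e.g. switching UI libraries) retires the old fact rather than overwriting it, so both "what is it now?" and "what was it before?" stay answerable.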

Why this matters

- Ask “What’s our roadmap now?” and “What was it last quarter?” — timeline and authorship are always preserved.

- Change a preference (e.g. “I no longer use shadcn”) — assistants see both old and new memory, so no more stale facts or embarrassing hallucinations.

- Every answer is traceable: hover a fact to see who/when/why it got there.

Try it

- Hosted free tier (HN launch): https://core.heysol.ai

- Docs: https://docs.heysol.ai/core/overview

demondynamic 2 hours ago

Hey, nice one! Where can I find the documentation for self-hosting?

  • harshithmul 2 hours ago

Hey, thank you! You can find it here: https://github.com/RedPlanetHQ/core?tab=readme-ov-file#core-... It's a Docker Compose-based setup.
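For reference, a Docker Compose self-host usually boils down to something like the following. The exact env file name and any required variables are assumptions here; the README linked above is the authoritative source:

```shell
# Clone the repo and start the stack with Docker Compose
git clone https://github.com/RedPlanetHQ/core.git
cd core

# Assumption: copy and edit an example env file if the repo provides one
cp .env.example .env

# Bring up the services in the background
docker compose up -d
```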

    • demondynamic an hour ago

      What’s even the point of calling yourself open source if you can’t bother to support LLaMA models? Kinda defeats the purpose, don’t you think?

      • harshithmul an hour ago

We are working on it. LLaMA models don't perform well yet at extracting facts from messages. We should have this out in the next couple of days.