The world’s first Bitcoin-centric AI

Chat, code, produce content, ask and learn about Bitcoin.

Chat with Satoshi

We use AI to make Bitcoin easier and more accessible.
We use Bitcoin to make AI more open.

Spirit of Satoshi

An online agent leveraging the Satoshi models that teaches you about Bitcoin daily on Twitter and LinkedIn, and that you can speak to on Nostr.

21 Questions Book

The first book and micro-documentary written and produced with a Bitcoin AI.

Lightning-enabled Crowdsourced LLMs

A dev kit that lets anyone, anywhere in the world, with relevant knowledge earn sats over Lightning for participating in the data curation, annotation and reinforcement learning stages of model development.

Code Satoshi

Code Satoshi alpha is a “Miniscript” assistant. We’re currently working on Script, Lightning, Nostr and other implementations.

The Bitcoin Pulse (coming soon)

A tool made up of a chain of different models that scan the markets, summarise what’s being said, and generate an intelligence report, with early drafts of content.

Embed-Satoshi (alpha)

Like Clippy - but a Bitcoiner. Integrate with your website or app, so Satoshi can assist with customer onboarding, success, and even sales.


Industry Report

Bitcoin and AI share something in common: they can both be complicated. Our first report dives into the process of building a language model, crowdsourcing data & training, integrating micropayments, and dispelling many AI myths.

Meet The Spirit of Satoshi

An online agent leveraging the Satoshi models that teaches you about Bitcoin daily on Twitter and LinkedIn, that co-writes books (coming soon) and that you can speak to on Nostr.

LECS-LLM Dev Kit

LECS stands for Lightning-Enabled CrowdSourced LLM tool. By leveraging Lightning & a Nostr profile, anyone, anywhere in the world, with relevant knowledge can earn money for participating in the data curation, annotation and reinforcement learning stages of model development.

Any company, community or group can now crowd-source the development of large language models, at scale!
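
Purely as a sketch of the idea, a contributor flow could look something like the following. Every endpoint and field name here is invented for illustration; the real dev kit may work quite differently.

```python
# Hypothetical LECS contributor flow: a Nostr identity requests a
# data-curation task, submits a verdict, and gets paid over Lightning.
# The base URL, endpoints and field names are all placeholders.
import requests

BASE = "https://example.com/lecs/api"   # placeholder, not a real service
NPUB = "npub1exampleexampleexample"     # contributor's Nostr public key

# Fetch the next data-curation task for this contributor.
task = requests.get(f"{BASE}/tasks/next", params={"npub": NPUB}).json()

# Submit a verdict on the Q&A pair (keep / edit / discard).
review = {"task_id": task["id"], "verdict": "keep", "npub": NPUB}
resp = requests.post(f"{BASE}/reviews", json=review).json()

# If community consensus later keeps the entry, the payout arrives
# over Lightning to the wallet linked to this Nostr profile.
print(resp.get("status"))
```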

Our Models

7B Model

The 7B is a fine-tuned Zephyr, small enough to run locally, and even on EmbassyOS. You can test it out here, or download it from Hugging Face.
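
As an illustration, running a model of this size locally takes only a few lines with the Hugging Face transformers library. The repo id below is a placeholder, not the actual model identifier.

```python
# Minimal local-inference sketch using Hugging Face transformers.
# "example-org/satoshi-7b" is a placeholder repo id, not the real one.
from transformers import pipeline

chat = pipeline("text-generation", model="example-org/satoshi-7b")
result = chat("What problem does Bitcoin solve?", max_new_tokens=200)
print(result[0]["generated_text"])
```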

13B Model

The 13B is the bigger brother. Trained on an almost identical dataset, only with more epochs and obviously twice the size. This is about the limit of what you can run effectively locally, and perfect if you’re looking for a Bitcoin assistant to integrate into your product or service.

30B Model

The 30B is our flagship model. It outperforms every model that we’ve built, and any model out there for that matter, in relation to Bitcoin. It’s likely too large to run locally, but you can test it out here. It’s also open source, and available on Hugging Face here.

21 Questions Project

The Spirit of Satoshi, along with key thought leaders in the Bitcoin space, is producing a series of easy-to-digest content pieces, including the 21 Questions Film and the 21 Questions Book series.

The Films are a series of micro-documentaries answering the most pertinent questions on Bitcoin, while the Book series does a similar thing in digital, paperback and audiobook formats - soon to be available on Amazon and online!

Code Satoshi

Specially trained code-assistant models, focused on Bitcoin & Bitcoin-related languages, libraries and protocols.

These models are designed to help engineers build Bitcoin-related features into their products faster and easier. You can test our Miniscript agent now!

Code Satoshi can:
• Produce code
• Correct code
• Explain code
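
For instance, here is the kind of policy the Miniscript agent might be asked to produce and explain. The policy is a standard textbook example, not actual Code Satoshi output.

```python
# Illustrative only: a simple Miniscript *policy* of the sort the
# Miniscript agent produces and explains. Not real model output.
policy = "or(pk(KeyA),and(pk(KeyB),older(144)))"
# Reads as: spendable by KeyA at any time, OR by KeyB once the coin
# has aged 144 blocks (roughly one day).
print(policy)
```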

Frequently Asked Questions

General

Why is it called “Spirit of Satoshi”?

Spirit is another way of saying “essence.” The name Spirit of Satoshi thus refers to the essence or idea of what Bitcoin is and represents. You might say: “Why didn’t you call it Spirit of Bitcoin?” And the truth is, Spirit of Satoshi not only sounds better, but it’s an homage to the creator. The well-known “We are all Satoshi” can now be reversed, and because this model is being built alongside the community, we can say that “Satoshi is all of us.” In this way, together we are creating this “essence of Bitcoin and all that Bitcoin represents” and embedding it into an AI, whose first incarnation is a Language Model.

What do you envision Spirit of Satoshi being used for in the future?

It’s very early days. Spirit of Satoshi is an experiment in what’s possible if we get together with a dedicated community of people and build something that is trained with a non-mainstream bias. The aim is to have a more truthful and useful version of ChatGPT. What that could be used for afterwards is anyone’s guess. We anticipate it will be useful for anyone that wants to learn about Bitcoin. It could become a tool for Orange-Pilling the next billion people, in multiple languages. Either via a cool chat interface, an app, as an education assistant, or even a Bitcoin-influencer on Twitter! It could be used as an onboarding and education assistant for Bitcoin products and services, like wallets or exchanges. Perhaps it will become a personal Bitcoin assistant to everyone, ensuring you can maximize your privacy and security. Who knows. The future is bright, and what matters is that we build this foundation first.

Is Spirit of Satoshi trained to speak like and emulate Satoshi Nakamoto himself, with his ideas and explanations, or with the understanding of modern Bitcoin maximalists?

When you train or fine-tune a model, you are training the style of linguistic output, and you’re tweaking the probabilities that certain words will be strung together in certain ways. While we could try and tune the model to speak like Satoshi, what we’re focusing on is training and tuning it on a far broader corpus of text, and as a result it will speak like some sort of average of all Bitcoiners and Austrian Economists. The style will be familiar in general, and represent the essence of Bitcoin thought (whatever the probabilities show that to be), hence the name Spirit of Satoshi, but you will also be able to prompt it to take on a style. We plan to do some cool things in this dimension, so stay tuned!

Why make an AI that's Bitcoin-biased? Wouldn't a perfectly neutral bot without any biases be better?

It’s impossible for biased humans to create an unbiased AI. Artificial Intelligence (or more appropriately, a Probability Program) is only able to string words together according to the probability that one word would follow after another, and that probability comes from the unavoidably biased humans who build and train it. There are many similar programs that were trained to speak in ways that support the false narratives that make people easy to control. Spirit of Satoshi is acting as a counterforce, to speak truths as understood by those who choose to verify rather than trust, and help guide new Bitcoin users at the same time.

Furthermore, bias essentially means viewpoint. If you water down something to the point it has no bias, it ceases to have an opinion or viewpoint, which ultimately begins to defeat the utility of the tool in the first place. ChatGPT and other mainstream models are seeing this occur in real time.

I've heard that Bitcoin maximalists are toxic. Is Spirit of Satoshi going to be that toxic as well?

This is a tricky subject. People often label those whose ideas they don’t like as “toxic” as a way to not contend with the underlying idea or argument, but instead attack and discredit the person making the argument. We are not interested in such labels, and will be focused on training the model on data that is as factual as possible, and has a language style that carries with it embedded biases of the Bitcoin, Austrian, and Libertarian communities and philosophies. To some, this may be “toxic”, and if that’s the case, alternatives like ChatGPT exist for them to use. For others this may be the truth and accuracy they’re looking for, and it will be a breath of fresh air.

A final note on this. We are not optimizing our model for harmlessness, but for truth within the framework of the collective biases of Bitcoin-Austro-Libertarian thought. That means that some responses may come out brutal, harsh-sounding, and perhaps even “toxic.” We are ok with this.

Will Spirit of Satoshi be able to answer my questions about any topic, or just things related to Bitcoin?

You will certainly be able to ask it anything, and it will likely respond with something. Whether it’s useful or relevant outside the context of Bitcoin and what the model has been trained on is anyone’s guess. Language Models are complex “probability machines”, or sophisticated auto-complete programs. They can say all kinds of things. Our focus is on having it produce outputs that are as accurate and useful as possible within the domains we’ve trained it on.

Longer term, the size and breadth of those domains will increase beyond just Bitcoin, Austrian Economics, and our initial peripheral data set. For example, home schooling, sound health, etc. 

What language model is SoS based on (OpenAI, Llama, Prem, etc.)?

We are currently experimenting with a number of open-source models, including Falcon, Llama, RedPajama and MosaicML. We have not chosen a final base model or architecture, but plan to do so in the coming months.

One note to make here is that OpenAI’s models are closed source, so while you can do some basic fine-tuning on their sub-models like Davinci or Curie, you cannot really “base” your model on ChatGPT and tune it the way we’re doing. What you can do is build a Vector Store of specific data and use OpenAI’s API to query that data and include it in responses. That is not “training” a model on your data, but an entirely different process. This is a trivial thing to do, and many people are using this process and erroneously calling it “training.” We’re not interested in using ChatGPT in this way. We’re interested in building a truly unique model.

How does the model ensure the accuracy of the answers it provides?

AI Models are essentially probability machines. When you train them, what you are doing is creating a sort of probability that a certain word will come after another word, in a certain sentence, and likewise with sentences and paragraphs as it scales up. Think of it like a sophisticated “auto-complete” that you’d get with Gmail or Google Docs, only instead of auto-completing the end of a single sentence, it can produce full paragraphs and more.
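
As a concrete illustration, you can inspect those next-word probabilities directly with a small open model (gpt2 here is purely a stand-in):

```python
# Inspect next-token probabilities with a small causal language model.
# gpt2 is used only as a convenient stand-in for any such model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("Bitcoin is a peer-to-peer electronic", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token
probs = torch.softmax(logits, dim=-1)

top = torch.topk(probs, 5)
for p, i in zip(top.values, top.indices):
    print(f"{tok.decode(int(i))!r}: {p.item():.3f}")  # five likeliest continuations
```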

What this essentially means is that a model doesn’t really “know” the difference between fact and fiction. It has no concept of true or false, only likelihoods. So the only way to ensure accuracy is to do more and more training, until the probabilities are good enough that the words the model produces are closer to fact. From there you can run the model through particular filters to try and clean out noise or misinformation, but things get tricky when doing this. It’s why mainstream models like ChatGPT can be a bit “woke”.

The best method to increase accuracy is to have a model reference a tree of knowledge, or a database of some sort which contains all the “factual” data, and use relevant components of that data in its responses. This is called “retrieval augmentation”, and it’s something a lot of companies are experimenting with. We will do the same as time and budget permit. It’s why we’re building the Nakamoto Repository, so we can ultimately tag all the data we’ve gathered for future model retrieval.
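
A minimal sketch of retrieval augmentation, assuming a generic sentence embedder (this illustrates the technique, not the Nakamoto Repository pipeline itself):

```python
# Toy retrieval-augmentation: embed documents, find the closest match
# to a query, and prepend it to the prompt as context.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")
docs = [
    "Bitcoin's supply is capped at 21 million coins.",
    "New blocks are mined roughly every ten minutes.",
]
doc_vecs = embedder.encode(docs)

query = "How many bitcoin will ever exist?"
q_vec = embedder.encode([query])[0]

# Cosine similarity between the query and each document.
scores = doc_vecs @ q_vec / (
    np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec)
)
context = docs[int(np.argmax(scores))]

prompt = f"Context: {context}\n\nQuestion: {query}\nAnswer:"
print(prompt)  # this prompt would then be fed to the language model
```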

How much does it cost to use SoS?

This is a work in progress. We will look to have a free tier, and then price prompts or usage in sats. We are also open-sourcing the model, so you can download it and run it yourself, but depending on the size of the final model, that will likely be out of most people’s reach. Keep an eye on this page, as the information will update as we evolve.

Still have questions? Email us at [email protected]

Contributors

Thank you to everybody who helped us train the model. We no longer take applications to train the model.

What should I keep or discard in “Don’t trust, verify”?

The data in this section is being used to train the model through a method called fine-tuning. The chunks of data and Q&A pairs will not only be about Bitcoin, but about other subjects as well.

You won’t be reviewing questions and answers that are only related to Bitcoin. For all the Q&A pairs you see, you’re checking to see if it’s how a Bitcoiner might respond, regardless of the topic.

Fine Tuning is about linguistic style, not the specific data that’s inside the model. The particular data that the model will reference is an entirely separate project.

How should I write my answers in “We Are All Satoshi”?

Imagine you are an all-knowing artificial intelligence agent, trained on everything Bitcoin, along with anything related to Bitcoin. What would be the tonality? The style of response? You will want to answer the questions as if you were Satoshi. Your responses should be comprehensive and detailed, as if the question was being asked by a real person who is just starting to learn about Bitcoin.

Your replies will help train Satoshi to respond to these types of questions in similar ways.

How should I write my answers in “FudBuster”?

Similar to We Are All Satoshi, respond to the FUD that you see here as if you were an all-knowing AI agent. Your responses should be comprehensive, helpful, and detailed. The goal is not to scare the person off, but to help them come around to a point of view that is more pragmatic and objective. Let the facts do the talking. Aim to turn the FUD around and control the frame of the discussion in your response.

Sometimes FUD comes from people who are antagonistic towards Bitcoin, but sometimes it comes from our friends who don’t yet know any better, and are sincerely looking for answers. As you choose how to respond, try imagining a scenario where you might encounter this FUD.

It’s okay to answer some with a little sharpness, and others with patience, but always write a full response (i.e. no “HFSP” or “If you don’t believe me or don’t get it…”). A variety of responses like these will better prepare Satoshi for any FUD he might be asked about.

How do I earn sats here?

The answers you give, along with their questions, will be shown in the Don’t Trust, Verify tool, for other community members to check. When a critical threshold of other community members approve your answers and choose to keep them, you will receive some sats in your account.

The sats are connected to the number of points you earn. Successfully “kept” entries will earn 10 points, while those which are edited and not kept will only earn a single point. Entries that are considered junk by the community (i.e. there is a consensus for discarding it) will actually lose points. We’ve designed it this way to ensure that bad actors are penalized, and the best actors earn the most and rise to the top of the leaderboard.
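
In code, that scoring rule might look something like the toy sketch below. The penalty for discarded entries is our assumption; the exact amount isn’t stated.

```python
# Toy version of the contributor scoring rule described above.
# The -5 penalty for discarded entries is an assumed value.
def points_for(entry_status: str) -> int:
    table = {
        "kept": 10,       # approved as-is by community consensus
        "edited": 1,      # kept only after edits
        "discarded": -5,  # assumed penalty; actual value unspecified
    }
    return table[entry_status]

history = ["kept", "kept", "edited", "discarded"]
print(sum(points_for(s) for s in history))  # 16 under these assumptions
```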

To ensure you’re earning the most you can, focus on the questions, responses, and edits that you are most confident on, and skip the rest.

How do I report a bug?

If you see any bugs, please let us know either in our Telegram group, or by emailing us at [email protected]

What is fine-tuning?

Fine-tuning refers to the process of refining a pre-trained language model through targeted adjustments to adapt it to specific linguistic or stylistic attributes, resulting in a model that generates text aligned with the desired style or tone.
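
A bare-bones sketch of what that looks like in practice, using gpt2 as a stand-in base model and a single toy Q&A pair (illustrative only, not our actual training pipeline):

```python
# Bare-bones supervised fine-tuning of a causal LM on Q&A pairs.
# gpt2 and the single toy pair stand in for a real model and dataset.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

pairs = [("What is Bitcoin?",
          "Bitcoin is decentralized, peer-to-peer digital money.")]

model.train()
for question, answer in pairs:
    text = f"Question: {question}\nAnswer: {answer}{tok.eos_token}"
    batch = tok(text, return_tensors="pt")
    # For causal LMs, labels = input_ids gives the next-token loss.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```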

Still have questions? Email us at [email protected]