TL;DR I designed a program that helps users verify the authenticity of information on the internet. Buckle up, this is pretty in-depth and includes a video.
Why Rene?
How does it work?
The install process
The install process – Brave
The install process – OpenAI
The install process – Google
Demo – Options
Demo – Surf and Check
Closing Thoughts
Bonus: Video
Why Rene?
In an age of bots and misinformation/disinformation, I wanted to see if I could use tools to help cut through the noise. To do that, I looked to the ideas of one of my favorite philosophers, Rene Descartes.
He is probably most famous for his ideas about the nature of reality. He pondered truth and wondered: how do we know 2 + 2 is 4? What if 2 + 2 were really 5, and something were deceiving us into believing it is 4? The idea intrigued me because it demands thinking critically about claims in order to verify their authenticity.
In this case, Rene is an attempt to build a tool that uses AI and web searches to cross-reference information and determine its truth. It is a Google Chrome extension that lets a user check websites, whether social media sites such as Reddit or ordinary news pages.
It is my belief that truth and information should be transparent. They should not be monetized or profited from, so I have open-sourced this code for those who want to use it and improve upon it. I will also keep improving the code over time. Truth is not a project that is completed in a short period; it must be continually updated as people and society evolve. As such, the code is available at https://github.com/Palvaran/factcheck
How does it work?
The idea is pretty simple: use AI to fact-check information. The problem is that AI is trained on data that ends at a certain point. For example, OpenAI’s GPT-4o was trained on data through 2023, so it has no awareness of current events. The solution is to give the LLM clear instructions and feed it current data via search engines to improve its accuracy.
The original version of this code uses OpenAI’s API with the popular, cheaper GPT-4o-mini and GPT-3.5-Turbo models. Web searches feed current data to the LLM via Brave’s Search API. As this is open source, additional models will be added over time, along with other LLMs and web search engines. I chose Brave for the initial implementation because they are very respectful of user privacy, which is a core pillar of good IT architecture and best practices.
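To make that flow concrete, here is a minimal sketch of the search-then-analyze loop. It is illustrative only, not the extension’s actual code: the factCheck helper name, the prompt wording, and the five-result limit are my own choices, layered over the standard Brave Search and OpenAI REST endpoints.

```typescript
// Minimal sketch of the search-then-analyze flow; illustrative, not the extension's actual code.
async function factCheck(claim: string, braveKey: string, openaiKey: string): Promise<string> {
  // 1. Pull fresh web results from the Brave Search API.
  const searchRes = await fetch(
    `https://api.search.brave.com/res/v1/web/search?q=${encodeURIComponent(claim)}&count=5`,
    { headers: { "X-Subscription-Token": braveKey, Accept: "application/json" } }
  );
  const results = (await searchRes.json()).web?.results ?? [];
  const context = results
    .map((r: { title: string; description: string; url: string }) =>
      `${r.title}: ${r.description} (${r.url})`)
    .join("\n");

  // 2. Hand the claim plus the fresh context to the LLM with clear instructions.
  const chatRes = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: { Authorization: `Bearer ${openaiKey}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [
        {
          role: "system",
          content:
            "You are a fact checker. Use only the provided sources. " +
            "Rate the claim's accuracy and cite which sources support or contradict it.",
        },
        { role: "user", content: `Claim:\n${claim}\n\nSources:\n${context}` },
      ],
    }),
  });
  return (await chatRes.json()).choices[0].message.content;
}
```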
Originally, I had wanted multiple LLMs and APIs available so that searches and analysis could be dynamic and aggregated, but I also understood that I should not do all of the creation in a vacuum. Earlier versions of the code included Anthropic as an alternate AI provider as well as a local LLM via Meta’s Llama. Unfortunately, I ran into technical issues with both and set them aside to continue developing the main code.
In the case of Anthropic, I was not able to get the API to work, as it appears to want an organization rather than an individual for accounts. For Meta, I tried using Ollama to host Llama 3.2, but had some issues with CORS that would have required a proxy server. This was antithetical to one of my core tenets: the first distributed version of this code needed to be as simple and straightforward as possible. Simplicity is part of what I was trying to accomplish so that the most good can be done easily. If things become too complicated or complex to set up, it will drive people away.
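For the curious, the workaround would have looked something like the shim below: a small local proxy that forwards requests to Ollama and adds the CORS headers the browser demands. This is a hedged sketch assuming Ollama’s default port 11434, and it is exactly the kind of extra moving part I did not want to ship.

```typescript
// Sketch of the local CORS proxy a browser extension would have needed to reach Ollama.
// Assumes Ollama's default port 11434; run with Node and point the extension at :8787.
import http from "node:http";

const cors = {
  "Access-Control-Allow-Origin": "*",
  "Access-Control-Allow-Headers": "Content-Type",
  "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
};

http
  .createServer((req, res) => {
    if (req.method === "OPTIONS") {
      res.writeHead(204, cors); // answer the browser's preflight check
      res.end();
      return;
    }
    // Forward everything else to Ollama and relay the response with CORS headers added.
    const upstream = http.request(
      { host: "127.0.0.1", port: 11434, path: req.url, method: req.method,
        headers: { "content-type": req.headers["content-type"] ?? "application/json" } },
      (up) => {
        res.writeHead(up.statusCode ?? 502,
          { ...cors, "content-type": up.headers["content-type"] ?? "application/json" });
        up.pipe(res);
      }
    );
    req.pipe(upstream);
  })
  .listen(8787);
```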
At one time I had even thought about self-hosting all of the APIs on servers in the public cloud, but when I started to estimate the pricing it turned out to be fairly expensive to run. My best guess is that an average user browses 50-100 websites a day, which extrapolates to 1,500-3,000 websites a month. Napkin math says that, not including infrastructure, database, and AI costs, it would be maybe $10 per person per month on average, and probably five times that, $50, for heavy users.
Those costs rule out the primary reason to pursue the idea in the first place: fact-checking information for everyone, as free as possible. While advertising could offset the costs, bolting ads onto the browsing experience seems counterproductive to me. I really want this project to be free, simple, and as minimally disruptive to the user experience as possible.
What I discovered is that if you get the API keys yourself from OpenAI and Brave, you can run this for about a dollar a month or less depending on usage. Brave offers a free account that allows 2,000 lookups a month; in my own usage during the buildout, I used less than half of that. OpenAI is not free, but the cost is extremely minor: $0.15 per 1 million input tokens and $0.60 per 1 million output tokens. As a reference, a paragraph is about 100 tokens, so 10,000 paragraphs come to roughly 1 million tokens. That means 10,000 input paragraphs (i.e., the articles going to OpenAI) would cost about $0.15 and 10,000 output paragraphs (i.e., the analysis you see) about $0.60, for a total of $0.75.
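If you want to sanity-check that arithmetic yourself, the back-of-envelope calculation is just:

```typescript
// Back-of-envelope check of the GPT-4o-mini pricing above.
const TOKENS_PER_PARAGRAPH = 100; // rough average used in this article
const INPUT_USD_PER_MILLION = 0.15;
const OUTPUT_USD_PER_MILLION = 0.6;

const paragraphs = 10_000;
const tokens = paragraphs * TOKENS_PER_PARAGRAPH; // 1,000,000 tokens

const inputCost = (tokens / 1_000_000) * INPUT_USD_PER_MILLION;   // $0.15
const outputCost = (tokens / 1_000_000) * OUTPUT_USD_PER_MILLION; // $0.60
console.log(`total: $${(inputCost + outputCost).toFixed(2)}`);    // total: $0.75
```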
The install process:
For this tool, you need three things, as well as a credit card for billing the APIs. While Brave has a free plan, OpenAI will still run you about a dollar a month.
- OpenAI API Key – https://platform.openai.com/login
- Brave API Key – https://api-dashboard.search.brave.com/login
- Download the extension from the Chrome Web Store, or take the GitHub code and load it unpacked under Extensions in Google Chrome.
Brave Sign Up:
Note: you will need a credit card or Google Pay to confirm the Free plan, but you will not be charged.
Just go to https://brave.com/search/api/ and click Sign Up. Registration itself needs nothing more than an email address and password; the card only comes into play later, when you subscribe to the Free plan. Here is what the process looks like.

Click Register.

Then click Verify email.

You should be good to log in now.

If an MFA code is sent to your email, just enter it here. Then you will be in the dashboard.

Click Subscriptions.

Under Free, click Subscribe.

Enter your credit card or Google Pay information. Note: you will not be charged for the Free plan.

Now we can get an API key. Click API Keys.

Click Add API Key. Just give it a name. Notice it even says Free Subscription.

Bam. We have our first key. Make sure to copy and save the API key, as you won’t ever be able to view it again. If you mess up, worry not; just delete the key and click Add API Key again.

OpenAI Sign Up:
Note: just like the Brave API, OpenAI’s API requires a credit card. This is not like a subscription to OpenAI’s Plus or Pro plans; instead, you load credits onto your account like a debit card. The reason is that you pay a small amount for your queries to cover infrastructure costs. Don’t fret: for most people the cost is less than a dollar a month, at roughly 10,000 paragraphs in and out for under a dollar.
Since ChatGPT is fairly popular, I will assume for this article that you already have a free account. If you don’t, just go to https://auth.openai.com/create-account to create one; if you already have a login, go to https://auth.openai.com/log-in

Once you log in, it will take you to the API developer platform, a different area than the usual ChatGPT interface. I will assume you have not been here before, so let me take you step by step through some of the common areas.

Hit the gear icon near the top right, next to your profile icon. It will take us to the settings.

Then click Billing. I keep about $20 of credit on my account, but you can adjust yours.

If you want to adjust it, just click Modify.

If you click Limits you can set a budget alert.

If you are wondering about costs, you can click Usage.

Ok, jump to API keys now.

Click Create new secret key to make a new key.

Just give it a name and hit Create secret key again. This is a temporary key I made to show what it looks like. Make sure to copy and save the API key, as you won’t ever be able to view it again.

Now we are ready to put those keys to use in the program. Let’s go get it now.
Chrome Web Store
I have published it on the Chrome Web Store. All you need to do is download it and enter your keys from above. However, as of this writing, the extension is in review.

Once the approval goes through, I will update everyone who is interested, but until then the GitHub code is still available so you can load the extension manually.
Here is how the application looks as of March 10th, 2025.
Options
Let me show you the options currently available in the application. Just click the puzzle icon to pin the extension.

Now it will show up. The icon is how you will interact with the application most of the time. However, a right-click context menu has also been added.

Right-click the icon and choose Options to see what is available.

Let’s start with the API Keys tab.

To improve privacy, the default behavior hides the API keys once you add them. If you click Show, it reveals only part of each key; click it once more and it reveals the key in full. Also, a basic validation test has been built in to help verify that your API keys are valid.

Other options here include being able to back up your settings so that you can import them later.
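As an illustration of that two-step reveal, the logic boils down to something like this (a sketch with hypothetical names, not the extension’s actual code):

```typescript
// Hypothetical sketch of the two-step key reveal described above.
function displayKey(key: string, showClicks: number): string {
  if (showClicks === 0) return "•".repeat(key.length);                // hidden by default
  if (showClicks === 1) return `${key.slice(0, 6)}…${key.slice(-4)}`; // partial on first Show
  return key;                                                         // full reveal on second click
}
```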
The next screen holds the AI model settings. This is where additional models will go, including local LLMs. Currently, you can configure GPT-4o Mini, GPT-3.5 Turbo, or a combination of the two for search and analysis. For full transparency, I also list the prices for each model so that users can decide which works best for them. Note: as of March 10th, 2025, Brave’s free tier allows 2,000 queries a month; beyond that, there is a paid Base tier available at $3 per 1,000 queries.
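Conceptually, that "combination of the two" setting amounts to something like the following (field names are my own illustrative shorthand, not the real schema):

```typescript
// Illustrative shape of the model settings; field names are assumptions, not the real schema.
interface ModelSettings {
  searchModel: "gpt-4o-mini" | "gpt-3.5-turbo";   // model that shapes the search queries
  analysisModel: "gpt-4o-mini" | "gpt-3.5-turbo"; // model that writes the final analysis
}

// One possible mix: one model for search, the other for the analysis itself.
const settings: ModelSettings = { searchModel: "gpt-3.5-turbo", analysisModel: "gpt-4o-mini" };
```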

The next tab, UI Preferences, lets you customize the position of the popup overlay as well as enable a dark theme if desired. There is even an experimental setting to auto-check headlines on news sites as you browse, but it is still in testing, as it will burn through API calls as a user scrolls through webpages. That is why Site Management, below it, exists as a way to ignore some sites.

Analytics is an important part of determining truth. Looking at numbers can be boring to many, but one thing I love about math is that it is always true. When you first start out, the data is empty, but as you use the application you can watch it change. We will come back to this screen after the demo to show how it changes.

In the case of this application, analytics is how a user can measure and see their activity. The data is anonymized: a generated UUID, a timestamp, the domain visited, the length of the text sent to the AI for analysis, the model used, the article’s accuracy rating, and whether the source was credible.

Note: no specific pages and no user or device information are recorded. Only the domain is stored (i.e., domain.com is recorded, but domain.com/aboutme is not). The purpose of this is accountability: it improves the user experience by aggregating and collating domains and their accuracy scores to better determine truth. For example, if over a single month cnn.com rates 60% but bbc.com rates 70%, it would stand to reason that bbc.com has slightly more credence. However, if the following month cnn.com rose to 65% and bbc.com dropped to 60%, then cnn.com would have more credence.
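Put in code terms, each record looks roughly like this (a sketch; the field names are illustrative, though the fields themselves are the ones listed above):

```typescript
// Sketch of the anonymized analytics record described above; field names are illustrative.
interface FactCheckRecord {
  id: string;             // randomly generated UUID, tied to nothing about the user or device
  timestamp: number;      // when the check ran
  domain: string;         // "domain.com", never a full URL or path
  textLength: number;     // length of the text sent to the AI
  model: string;          // e.g. "gpt-4o-mini"
  accuracyRating: number; // how accurate the article was judged to be
  credibleSource: boolean;
}

// Only the hostname is kept, so paths like /aboutme are never recorded.
const domain = new URL("https://domain.com/aboutme").hostname; // "domain.com"
```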
About is the last tab. It shows information about how the extension works and how to find out more, and links to GitHub for What’s New.

Now let’s see it in action.
Demo
Let’s take the application for a test drive. First up, Google News. Let’s check out a story from The Weather Channel. That should be easy to verify.

All we need to do is hit the Fact Check icon in the toolbar and it starts the process.

Alternatively, we could right-click the page and do the same thing if no text is selected.

Interesting. I figured this would be higher, but let’s see why it got the score it did. We can see the article was published today and has lots of claims.

Let’s check the next page for more information.

Overall, it is moderately accurate with credible sources, but it made a mistake that could mislead readers, which lowered the rating. Let’s check the references for more info.

Next up, let’s try a social network. How about Reddit News?

Let’s see about the microplastics article.

Something neat you can do is highlight just the text you are interested in cross-examining; you don’t have to check the entire page. In this case, we will highlight “Microplastics Are Messing with Photosynthesis in Plants” and hit the Fact Check icon.
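Under the hood, Chrome’s standard extension APIs make this selection-or-page split straightforward. Here is a minimal sketch using those standard APIs; the real extension’s wiring may differ:

```typescript
// Sketch using standard Chrome extension APIs; the real extension's wiring may differ.
chrome.contextMenus.create({
  id: "fact-check",
  title: "Fact check",
  contexts: ["selection", "page"], // works on highlighted text or the whole page
});

chrome.contextMenus.onClicked.addListener((info, tab) => {
  // info.selectionText is set only when the user highlighted something;
  // otherwise fall back to checking the full page.
  const target = info.selectionText ?? "(entire page)";
  console.log(`Fact checking: ${target} on tab ${tab?.id}`);
});
```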

Let’s check references to see what it found.

If you click the link in the reference it will take you to the source to read further.

Pretty neat. Now, let’s check back on our analytics and see what it shows.

Closing Thoughts:
Let me start off by saying it is important not to always judge a book by its cover. Some websites will surprise you, for better or worse, with their analysis. In my early testing, I had expected Infowars to be rated lower for what I perceived to be opinion over fact, but ironically, it scored higher than some other popular domains. Meanwhile, the top domain turned out to be the Houston Chronicle. In the end, this is why we test, and why I recommend letting the math help guide your decisions.
So that’s it. That is the why, how, and what. I hope this article and this tool help you to think, verify, and know better. If you have the desire to further improve the way we interact with information, please feel free to use, contribute, or help. Over time, my hope is that this project will continue to grow and include additional LLMs, search index options, and other helpful features.
Thanks for coming to my TED Talk.
Bonus: Video
Here is a video of the application surfing and scanning.