On web search engines

Nowadays, searching on the internet is something that most people, including me, constantly rely on to find new information. While most of the time I can find roughly what I am looking for, sometimes I get frustrated by the results and by the search engine's behavior in general. That is why I could really relate to Luke Smith's video on this, and while I disagree on some details, I like the application idea hinted at towards the end of it.

Most of what he talks about is spot on, and I have noticed it as well. While I have been using DuckDuckGo for a few years now, to some extent it also has the same problems people often bash Google for. The things I have encountered are:

On the other hand, sometimes I still do find new sites when using web search engines. Also, just because a website has its own local search does not mean it is pointless to index its individual pages: for one, some websites have terrible search (bitchute.com used to have an absolutely useless one), and for two, you often don't know which website has what you are looking for, so you want to search multiple websites anyway.

Now, the application idea. There already are meta search engines which don't index sites themselves but aggregate results from other search engines. I know of searx.me, which I didn't use much, and metager.org, which I discovered only recently and have started to prefer over DDG. These do some filtering of their own on top, but I think this could be improved upon further.
This is why I would like a desktop application, or possibly a web browser addon (or, in the worst case, a website), that would act as a search engine for results from other search engines. It could work something like this (a rough code sketch follows the list):

  1. You could import files containing a blacklist of websites, or even a whitelist. You could keep multiple search profiles. For example, you could have one that blacklists spam sites and news sites for when you are looking for very specific things, and another that whitelists only news sites for when you are trying to find some article.
  2. You can select search engines from which to get results.
  3. You put in your search query with all the useful operators like exact phrases and word filtering, and possibly some that search engines usually don't support, like marking the priority of individual words in the query.
  4. The application will obtain the first few pages of results from each search engine and aggregate them, throwing away blacklisted websites.
  5. The application will then run its own search on these results using your query and rank the pages by relevance. This can be done on the text snippets displayed in the search results or, to be more thorough, the application could first open the linked pages, extract their text and search on that.
  6. Because you usually want to refine your search query, the application would keep this data and continue running queries on it. It would ask search engines for new results only when you tell it to.
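As a concrete illustration of the steps above, here is a minimal Python sketch. All the names in it (Result, fetch_results, load_profile, aggregate, rank) are made up for this post, and fetch_results is only a stub: how the results actually get pulled from each engine (an API, scraping, a browser addon) is left open.

```python
# Minimal sketch of the aggregate-filter-rerank idea described above.
# fetch_results() is a hypothetical stub; everything else runs locally
# on the cached results.
from __future__ import annotations

from dataclasses import dataclass
import re


@dataclass(frozen=True)
class Result:
    url: str
    title: str
    snippet: str


def fetch_results(engine: str, query: str, pages: int = 3) -> list[Result]:
    """Hypothetical: return the first few result pages from one engine."""
    return []  # placeholder; plug in an API client or scraper here


def load_profile(path: str) -> set[str]:
    """Read a blacklist (or whitelist) profile, one domain per line."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}


def aggregate(engines: list[str], query: str, blacklist: set[str]) -> list[Result]:
    """Pull results from every selected engine, drop blacklisted domains,
    and deduplicate by URL. This cache is what later queries re-rank."""
    seen: dict[str, Result] = {}
    for engine in engines:
        for r in fetch_results(engine, query):
            domain = re.sub(r"^https?://(www\.)?", "", r.url).split("/")[0].lower()
            if domain in blacklist or r.url in seen:
                continue
            seen[r.url] = r
    return list(seen.values())


def rank(results: list[Result], query: str,
         weights: dict[str, float] | None = None) -> list[Result]:
    """Score each result by how often the query terms appear in its title
    and snippet, with optional per-word priority weights, then sort."""
    weights = weights or {}
    terms = [t.lower() for t in query.split()]

    def score(r: Result) -> float:
        text = f"{r.title} {r.snippet}".lower()
        return sum(text.count(t) * weights.get(t, 1.0) for t in terms)

    return sorted(results, key=score, reverse=True)


if __name__ == "__main__":
    blacklist = {"example-spam-site.com"}  # or load_profile("blacklist.txt")
    cache = aggregate(["engine-a", "engine-b"], "static site generator", blacklist)
    # Refining the query only re-ranks the cached results; no new fetches.
    for r in rank(cache, "static site generator", weights={"static": 2.0})[:10]:
        print(r.url, "-", r.title)
```

The reason aggregate and rank are kept separate is point 6: once the results are cached locally, refining the query only means calling rank again, without asking the search engines for anything new.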

This way you would get around the problem of search engines ranking irrelevant results highly, because you let the application rank them properly and reorder them for you. And getting multiple pages of results from several search engines should get around the problem of a single search engine completely refusing to show you certain websites. It might take a while for the application to download all the information, but you would spend far longer looking through the results manually. If you ever stumble upon software that does something similar to this, please do tell me.

Finally, the truth is that for any topic we search for, there is only a limited number of websites that hold the information we are looking for. That is why I decided to create a list of websites which I find useful, and I hope that you will find them useful too.
