All of curi.us gone from google?

IDK if this is just me or how new this is.
Searching site:curi.us conjunction in duckduckgo is fine, but on google I get literally nothing. (Same for www.curi.us)

if I search just site:curi.us and include the omitted results:

duckduckgo shows lots of results

@Elliot

It’s not literally zero results but my site does seem blacklisted.

The link at the bottom to show omitted results makes more results come up. It works with a search for Rand too. I also do get one result for conjunction:

I don’t know how to use Google Search Console well but if I do URL inspection on Curiosity – Analysis of David Deutsch’s The Final Prejudice it says:

URL not in property

Inspect a URL in the currently selected property or switch properties

Same result for Curiosity – David Deutsch Smears Ayn Rand and Curiosity – Criticism of Eliezer Yudkowsky on Karl Popper (which you can actually see in the picture above).

It shows 6290 impressions in the last year but only 55 in the last 3 months and 252 in the last 6 months, so it looks like I’ve been blacklisted for at least 6 months. Actually it’s been over a year, though less severe initially, since there are 30,000 impressions for the last 16 months.

Clicking around the Search Console I don’t see any errors or problems.

The Fallible Ideas site was also blacklisted/deranked/deboosted/something by Google between 12 and 16 months ago. There’s a huge drop in traffic when I compare those time periods.

My default search engine has been Duck Duck Go or Bing (varies by device) for more than 16 months. I highly recommend switching… I think it’s actually important to stop letting Google control the information you see by default. (It occurs to me that I still do use YouTube’s search feature. That seems problematic to rely on. I guess I should sometimes search for YouTube videos and channels with DDG or Bing.)

Thanks for letting me know.

I used this a little recently b/c I realized stuff wasn’t getting indexed after updating my site.

One thing I found is that I had xk.io as a property and that this meant http://xk.io (not https). I also had a redirect from http to https. I think google was detecting this as a redirect error, but there wasn’t enough info to really investigate the problem much. Some pages that are only on the new site did show up (most of the old URLs are redirects to new pages but didn’t seem to have updated in google’s cache).
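
(A quick way to see that redirect from the command line – just a sketch, swap in whatever URL you’re checking:)

# show the status line and the redirect target for the http version
❯ curl -sI http://xk.io | grep -iE '^(HTTP|location)'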

I solved the issue by deleting that existing property and creating a new property https://xk.io, and adding DNS validation (via TXT record). Then I submitted a page for indexing that wasn’t showing up and left it for a while.
Now lots of the new pages show up.

Maybe doing that could help?
There’s also a place to add a url for a sitemap that might help too (I don’t think I have one on xk.io)

Oh yes, they are http entries in Google Search Console. I’ll try to fix that later, thanks. I already do have a sitemap for curi.us at https://curi.us/sitemap.txt and I entered it in Google Search Console ages ago. It appears to be missing the podcast pages and probably something else, but all the blog posts look right.
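
(A quick command-line sanity check on the sitemap contents – this assumes the podcast page URLs actually contain the word “podcast”, which may not be how they’re named:)

# total number of URLs in the sitemap
❯ curl -s https://curi.us/sitemap.txt | wc -l
# how many of them mention podcasts
❯ curl -s https://curi.us/sitemap.txt | grep -ci podcast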

I tried to set a TXT record on curi.us (using linode’s dns tool) in order to verify the domain with Google Search Console.

I added it with default TTL, waited a few days, then changed the TTL to 8 hours. It still isn’t showing up:

So domain ownership verification with Google is still failing:

Anyone know what to do?

The “Learn why” link goes to this page, No page information in search results - Search Console Help, which basically just makes the generic claim that Google won’t index a page when it’s blocked by robots.txt. But https://curi.us/robots.txt does not block Google (can anyone see a problem with my robots.txt?)
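
(For reference, a crude way to check what’s actually being served at that URL – this only catches an outright block, not the subtler group-matching rules a real tester would apply:)

# list just the user-agent and allow/disallow lines from the live file
❯ curl -s https://curi.us/robots.txt | grep -iE '^(user-agent|allow|disallow)'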

FYI - I usually set TTL very low (60s is usually the lowest) when doing DNS stuff. That way it propagates v quickly. Afterwards you can set it higher if you want.
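
(One way to watch the change propagate – the second column of dig’s answer is the remaining TTL in seconds:)

# answer section only
❯ dig +noall +answer TXT curi.us
# ask a public resolver too, to see whether the change has reached it yet
❯ dig +noall +answer TXT curi.us @8.8.8.8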

Try cloudflare – it looks like that’s what your NS servers are set to

❯ dig NS curi.us

; <<>> DiG 9.16.1-Ubuntu <<>> NS curi.us
<--- snip --->

;; QUESTION SECTION:
;curi.us.                       IN      NS

;; ANSWER SECTION:
curi.us.                0       IN      NS      ian.ns.cloudflare.com.
curi.us.                0       IN      NS      kara.ns.cloudflare.com.

Linode does have the TXT record, it’s just that it’s not the place DNS actually points to:

❯ dig TXT curi.us @ns1.linode.com

; <<>> DiG 9.16.1-Ubuntu <<>> TXT curi.us @ns1.linode.com
<--- snip --->

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;curi.us.                       IN      TXT

;; ANSWER SECTION:
curi.us.                28800   IN      TXT     "google-site-verification=pqKW-jYySPNrfROceADfDYDeDS1S6oJuXGqvYfGmPQw"

;; AUTHORITY SECTION:
curi.us.                86400   IN      NS      ns2.linode.com.
<--- snip --->

Oh, thanks. Just set it there.

No – it looked fine to me. (Maybe try a robots.txt validation tool? I don’t think that will help tho unless there’s like a syntax issue) I checked that the other day when I made the OP (and found the same BS claim from google, too).

That sort of claim (from google) feels like gaslighting. It wouldn’t be so bad if they gave webmasters decent reports or support, but it’s just a dead end.

I can see it now:

❯ dig TXT curi.us @1.0.0.1

; <<>> DiG 9.16.1-Ubuntu <<>> TXT curi.us @1.0.0.1
<---snip--->

;; QUESTION SECTION:
;curi.us.                       IN      TXT

;; ANSWER SECTION:
curi.us.                300     IN      TXT     "google-site-verification=pqKW-jYySPNrfROceADfDYDeDS1S6oJuXGqvYfGmPQw"

1.0.0.1 is Cloudflare’s alternate public DNS resolver (the second address for 1.1.1.1) – I used it just to avoid a resolver that had cached previous results.

This one claims my file is disallowing googlebot!

trying to figure out why

After trying to read the docs a bit, I just guessed what to do and edited it to this:

Sitemap: https://curi.us/sitemap.txt

User-agent: *
Crawl-Delay: 20
Disallow:

User-agent: AhrefsBot
Crawl-Delay: 90
Disallow:

User-agent: MJ12bot
Crawl-Delay: 90
Disallow:


User-agent: SemrushBot
Disallow: /

Now the validator thinks it’s ok. Gotta kill cloudflare cache and also check my other sites that were created in similar ways (FI, BoI).
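
(For the cache part, Cloudflare can be purged from the dashboard or via its API – rough sketch, with ZONE_ID and API_TOKEN as placeholders; “purge everything” is the blunt option:)

# purge the whole Cloudflare cache for the zone
❯ curl -s -X POST "https://api.cloudflare.com/client/v4/zones/ZONE_ID/purge_cache" \
    -H "Authorization: Bearer API_TOKEN" \
    -H "Content-Type: application/json" \
    --data '{"purge_everything":true}'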

Yeah, I wonder if there’s a rule like user agent lines only get reset after a list of disallows.
I found this suggestion from https://www.robotstxt.org/robotstxt.html

To allow a single robot

User-agent: Google
Disallow:

User-agent: *
Disallow: /

In any case, robots.txt seems like it probably has some complex rules to account for not having any delimiters between blocks/groups.

fixed robots.txt which was only broken on curi.us

added podcast stuff to sitemap

verified domains in google search console. it said to check back in a day for data. i’m not sure if it’s going to give me old data or not (maybe not due to no proof of ownership in the past).
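
(side note: google also accepts a sitemap ping, which might get the updated sitemap re-read sooner – this is just the generic endpoint, nothing verified for this site:)

# ask google to re-fetch the sitemap
❯ curl -s "https://www.google.com/ping?sitemap=https://curi.us/sitemap.txt"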

looks like the FI site was not affected like the curi site:

so maybe stuff will be fixed soon.

Google doesn’t consider curi.us to be mobile-friendly.

I just made some tweaks to the FI site for mobile and added a viewport tag on curi.us, but I’m not sure how to fix this in general on curi.us and get a good result given the two-column layout. I don’t think putting the sidebar on top on mobile would work well since it’s tall. I could make the min width for the center column smaller, but in my testing the results are less usable on my phone (the larger min width means you get enough words per line instead of too few). Moving the sidebar to the right side and then making the main column width match the screen width on small screens might be better, but I don’t think google would like it: they’d say the content width is larger than the screen width.

IMO mobile users should be prepared to scroll, zoom or use reader mode sometimes, and mobile browsers ought to be designed to work OK with regular websites (as I believe was the original goal), but now I think google wants you to treat mobile as more of a first-class citizen that you design for, and penalizes your search results if you don’t.

It’s all custom html/css design in a rails 2 app.

Suggestions for design concept or technical details?

Maybe you could move the sidebar stuff into a burger menu or something like that on mobile. But in that case I think you should redesign what’s available through it, since the sidebar has too many links for a burger menu in my opinion.

So maybe group the current links from the side menu more, like into an “about” section, a “previous posts” section, etc.

I made changes to https://curi.us for mobile. Please try it and let me know about problems. The goal is good enough, and to satisfy Google, not super nice.

It looks fine on my phone (and in the responsive-preview via browser dev console).

There were a few pages that scrolled horizontally – IDK how much of an issue google will think that is.
Google might also dislike some of the small links (like “Permalink | Messages” below posts on the home page) or how close they are together.

Actually, google has a page for testing this. Seems okay:



https://search.google.com/test/mobile-friendly/result?id=eKDsBrc_7ruy8g-IzkPhVA
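
(There’s also a Mobile-Friendly Test API behind that page if you want to script checks – sketch only; YOUR_API_KEY is a placeholder and I haven’t checked the exact response fields:)

# run the same mobile-friendly test from the command line
❯ curl -s -X POST \
    "https://searchconsole.googleapis.com/v1/urlTestingTools/mobileFriendlyTest:run?key=YOUR_API_KEY" \
    -H "Content-Type: application/json" \
    --data '{"url":"https://curi.us/"}'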

I don’t know why this would be, but https://curi.us/2380 (my microblog) showed up as not friendly – the screenshot shows that it’s the desktop version of the page though. If I go there on my phone it renders the mobile version. I didn’t test other posts via search console, but they looked okay from my phone.


https://search.google.com/test/mobile-friendly/result?id=cnZhCEpGig-3FCuVwukj6g

edit: something weird is going on. I tested https://curi.us/2380-max (which should be the same page) and got a diff screenshot that doesn’t match my phone, but it’s also mobile friendly.
https://search.google.com/test/mobile-friendly/result?id=7KmJ0GIMYlZQlkoF9Wu-HQ

i had a few weird results in my testing too. safari’s responsive design testing mode had some different behaviors than my phone.

that screenshot at the bottom just looks glitched. it has recent posts on the sidebar but with the blog post content apparently pushed below. that shouldn’t happen.

https://curi.us/stylesheets/mobile.css

basically if the browser width is 640px or less, then the sidebar goes above the content instead of on the side, and also some of it is hidden, particularly the recent posts which would be way too tall to go there. sidebar no longer being a sidebar, and hiding part of it, have the same trigger condition.

i tried opening it in mac safari with both urls and making the browser narrow to trigger mobile mode. wasn’t able to get any glitches. edit: i tried the page in mac chrome, edge and firefox too. looked fine and properly transitioned to mobile layout when narrow.

good enough for now maybe. i set up bing webmaster tools (they let you import domains from google so you don’t have to verify each one). i think i’ll let both webmaster tools gather data for a while then go check what they’re saying in a couple weeks.
