A notch above a monkey

AJAX workshop

Fry said:

We’re preparing a workshop at Spletne urice (“Web hours” in Slovenian, a weekly meeting of web enthusiasts) that will focus on AJAX, hosted by me and another Marko. Actually, we’re preparing two of them: the first will cover the basics of asynchronous transport and data, and the second will add some DOM scripting and review a few AJAX frameworks.

Being the other Marko, I can say this workshop will totally rock. It has to, because AJAX is cool, right? Instant fame, riches and all that.

Seriously though, I do believe it will be a great experience for those who attend. Everyone is welcome, but only the first 10, sorry, 9 who register will get the chance, because of the limits imposed on us by our choice of venue. It’s even free (not likely to happen again any time soon), so the price can hardly be something to worry about.

You can also help us make it better for you by contributing websites you’d like to see get a makeover and by telling us as much as you can about what you’d like to hear there (and what not).
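If “the basics of asynchronous transport” sounds abstract, here’s a rough sketch of the kind of thing the first session starts from: fetching a piece of data in the background and updating the page without a reload. The URL and element id below are made up for illustration, and this isn’t the actual workshop material.

    // Fetch a text resource asynchronously and drop it into the page
    // once it arrives, without reloading anything.
    function loadFragment(url: string, targetId: string): void {
      const request = new XMLHttpRequest();
      request.open("GET", url, true); // true = asynchronous

      request.onreadystatechange = () => {
        // readyState 4: the response is complete (for better or worse)
        if (request.readyState === 4 && request.status === 200) {
          const target = document.getElementById(targetId);
          if (target) {
            target.textContent = request.responseText;
          }
        }
      };

      request.send();
    }

    // Usage: refresh the contents of <div id="news"> in place.
    loadFragment("/latest-news.txt", "news");

The second session adds DOM scripting and a look at frameworks on top of this.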

Update: The workshop is full and registration is closed. I’m sorry not everyone will get a chance to participate, but we might do another one later this year.

Fry's outbreak

I give up. I tried to think of a way to properly announce Fry’s new blog, but I can’t think of any. Let me just say I’m glad that he decided to start it (again) and I look forward to reading it.

Google ignore

A few days ago I came to realize that the email hiding technique I use won’t work anymore. However, I still think it would be a great idea if I could remove a part of a page from the search index, even if it won’t help me hide from spammers in the long run.

What I’d really like is to specify which parts of a page are not its content and can be safely ignored by search engines. Ignoring them would also mean those parts are absent from search indexes. Spiders could still follow links inside those parts, because I can still use a robots.txt file if I really want to block their traversal. In effect, I’d like a more fine-grained approach to blocking, one which doesn’t force the layout of my pages to favor spiders over people, even slightly.

The way I imagine it working is simple: apply a class name to the parts of the page I want shielded. I don’t care much (yet) whether that class name would have to be specified in the robots file or whether it would be a name settled on by web consensus. As far as I can see, both approaches would work equally well.
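To make that concrete, here’s a rough sketch. The class name “search-ignore” is made up (it could just as well come from the robots file or a shared convention), and a real spider obviously wouldn’t use a browser DOM, but the intent should be clear: the marked parts stay in the page for visitors, while the indexer drops them before extracting text.

    // Hypothetical markup: navigation wrapped in a "non-content" marker.
    //
    //   <div class="search-ignore">
    //     ...site navigation, blogroll, ads...
    //   </div>
    //
    // Roughly what "ignore it" would mean on the indexer's side:
    function indexableText(html: string): string {
      const doc = new DOMParser().parseFromString(html, "text/html");

      // Drop every element marked as non-content before extracting text.
      doc.querySelectorAll(".search-ignore").forEach((el) => el.remove());

      return doc.body.textContent ?? "";
    }

Link traversal would stay a separate concern (that’s what robots.txt is for); this only affects what ends up in the index entry for the page.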

The benefit of this would also be better search results. Every now and then I follow a search hit that matched my query only because of a combination of the site’s content and its navigation. Judging by my logs, I’m not the only deceived soul. So wouldn’t it be nice to be able to tell which parts are content and which aren’t?