How to Analyse SharePoint Log Files

One thing you notice instantly when you try to figure out what is going on in a SharePoint instance is the unbelievable amount of log messages that are written. Even when no user is on your site, SharePoint itself has so many things going on that a constant stream of messages is created.

If you try to open a SharePoint log file in an ordinary editor you find another peculiarity: the log messages themselves are huge. It’s great that every message is written on a single line, but the lines are endless. If your editor wraps lines at the edge of the window, you may not be able to see more than 15 messages at a time.

Luckily there are special log viewers that know SharePoint and help you bring order to that chaos.

This post is part of the Improve Your Log Messages series.

ULS Viewer

ULS Viewer is one of those tools that can help you figure out what is going on in your SharePoint farm. If you select the last line, the view updates automatically and you can watch the messages pass by. This is helpful when you aren’t interested in a specific message but want to know what is happening at the moment.

If you search for a specific GUID (the correlation ID) or are only interested in errors, you can easily filter the flood of messages. Even though the user interface doesn’t look very polished, it offers a lot of functionality. That’s why this tool has been widely used for years and should be part of every SharePoint installation.
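The kind of filtering these viewers do can be sketched in a few lines of Python. This is only an illustration, assuming the common tab-separated ULS layout (Timestamp, Process, TID, Area, Category, EventID, Level, Message, Correlation) — check the column order against your own log files before relying on it:

```python
# Sketch: filter SharePoint ULS log lines by level or correlation GUID.
# Column layout is an assumption based on the usual ULS trace format.
ULS_COLUMNS = ["Timestamp", "Process", "TID", "Area", "Category",
               "EventID", "Level", "Message", "Correlation"]

def parse_uls_line(line):
    """Split one tab-separated ULS log line into a dict keyed by column name."""
    fields = [f.strip() for f in line.rstrip("\r\n").split("\t")]
    return dict(zip(ULS_COLUMNS, fields))

def filter_messages(lines, level=None, correlation=None):
    """Yield parsed messages matching the given level and/or correlation GUID."""
    for line in lines:
        msg = parse_uls_line(line)
        if level and msg.get("Level") != level:
            continue
        if correlation and msg.get("Correlation") != correlation:
            continue
        yield msg
```

With something like this you can pull all Critical messages, or every message belonging to one request’s correlation ID, out of a day’s worth of logs.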


SharePoint Log Viewer

SharePoint Log Viewer is another tool that helps you find the messages you are looking for. Compared to ULS Viewer, its user interface looks much more modern and a bit less intimidating. It offers less functionality on the main screen, but you still find all the commands you need.

One point I like in particular is how Log Viewer displays exceptions: you can see the whole stack trace at a glance, without having to select the message first. This may seem unimportant, but once you have used it for a few days you won’t want to miss it. When your SharePoint farm throws exceptions, you should give Log Viewer a try.


Analytic Tools

ULS Viewer and Log Viewer are great for looking at the log files. However, when you want to analyse the log messages in a different way, those tools are not the right ones. SharePoint 2013 ships with some built-in features that help you analyse the traffic. If you need to dig deeper, tools like HarePoint, Tryane or Webtrends can give you better insight.

One tool that is often overlooked is Google Analytics. SharePoint is, after all, a web site, and most likely you already use Google Analytics for your other web sites. There is definitely some data you can’t get by tracking only the users, but depending on what you want to know, it may be all you need.

 

Kibana?

With the logs of all the other systems already in Kibana, I would definitely like to put the SharePoint messages there as well. Unfortunately, it’s not that easy. So far I have not managed to find a stable configuration that handles all the edge cases in a way I like. I keep trying and will post an update as soon as I get it working. Should you have ideas on how to approach SharePoint log messages, please leave a comment.

2 thoughts on “How to Analyse SharePoint Log Files”

  1. I work for a business which uses SharePoint. I manage the Unix farms / middleware and use Kibana/Elasticsearch. Management saw how much stuff I had in there and wanted it for SharePoint, so I’ve gone pretty far down this road.

    I ended up writing a custom log shipper (rather than using Logstash) in C#. We run it on all SharePoint servers, and it polls the logs for new entries. It puts them on a queue and then, every 0.25 seconds, ships them to Elasticsearch in roughly the same format as Logstash. Don’t try to ship them individually, Elasticsearch won’t scale very well – batch them up and then use the bulk api to post them.
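The batching advice above can be sketched in Python (the commenter’s shipper is C#, so this is only an illustration of the idea, not their code). The index name "sharepoint-uls" and batch size are assumptions; actually POSTing the body to the Elasticsearch `_bulk` endpoint is left out:

```python
# Sketch: batch log messages and render them as an Elasticsearch _bulk
# request body (NDJSON: one action line, then one source line per document),
# instead of indexing documents one at a time.
import json

def build_bulk_body(messages, index="sharepoint-uls"):
    """Turn a batch of message dicts into an NDJSON _bulk payload."""
    lines = []
    for msg in messages:
        lines.append(json.dumps({"index": {"_index": index}}))  # action line
        lines.append(json.dumps(msg))                           # source line
    return "\n".join(lines) + "\n"  # a _bulk body must end with a newline

def batches(messages, size=500):
    """Split the queued messages into fixed-size batches for shipping."""
    for i in range(0, len(messages), size):
        yield messages[i:i + size]
```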

    Also, don’t try to seek on ULS logs. Open a StreamReader on the current log, call ReadLine until it returns null, then the next time you poll do the same – don’t close the StreamReader. I had days of debugging random seek errors. Also, open the files as UTF-16, not UTF-8.

    Don’t try to watch for file-changed events because they are not triggered on the logs. Make sure you poll.
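The poll-don’t-seek pattern from the two paragraphs above looks roughly like this in Python (again only a sketch of the idea, not the commenter’s C# shipper; the `max_polls` parameter is added here just so the loop can terminate):

```python
# Sketch: tail a ULS log the way described above -- open the file once as
# UTF-16, read lines until readline() returns an empty string, then on the
# next poll keep reading from the SAME open handle. No seeking, and no
# file-system watchers, since change events don't fire on these logs.
import time

def tail_uls(path, handle_line, poll_interval=0.25, max_polls=None):
    """Poll a ULS log for new lines, passing each one to handle_line."""
    polls = 0
    with open(path, encoding="utf-16") as f:
        while max_polls is None or polls < max_polls:
            line = f.readline()
            if line:
                handle_line(line.rstrip("\r\n"))
            else:
                polls += 1          # nothing new yet; wait and poll again
                time.sleep(poll_interval)
```

A real shipper would run this loop forever (`max_polls=None`) and also detect when SharePoint rolls over to a new log file.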

    My employer won’t let me open-source the code, but I may do a greenfield implementation in the future.

    Feel free to email me if you need any more help.

    • Hi Anko,
      Thank you very much for your offer. The idea of a custom made log shipper sounds interesting. I will try to give logstash another chance before I explore this road. The batching of messages is something so far I didn’t thought of but definitely will should I go for a custom one. Little things like utf-16 instead of utf-8 and the file change events cost you easily days until you find it out. Thanks for pointing it out.

      Johnny

