Sunday, August 31, 2008

Renaming Team Project in VS2008 – nothing changed

Surprisingly, one of the most popular posts on my blog is the one about renaming Team Projects. As it was written for VS2005, I thought I’d review it as of VS2008 SP1.

As it turns out, there is not much to review. You still cannot rename a Team Project in VS2008; however, there are several changes as to what can be deleted in the latest version of TFS:

  • Work items can now be deleted (using the destroywi command of the TFS Power Tools), so once you move work items you can delete the old ones
  • Version control artifacts can be removed – fully or partially (partially meaning removing the contents while retaining the history) – using the destroy command of the tf command-line client (see the example commands right after this list)
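
For illustration only – the paths, IDs and server name below are made up, and the exact switches may differ between releases, so check the built-in help (/?) before running anything destructive:

    rem destroy work items permanently (TFS Power Tools; run "tfpt destroywi /?" for the exact parameters)
    tfpt destroywi /server:MyTfsServer 42,43

    rem destroy a version control item together with its history (dry run first with /preview)
    tf destroy $/OldTeamProject/Source/Widget.cs /preview
    tf destroy $/OldTeamProject/Source/Widget.cs

    rem destroy only the file contents while retaining the history
    tf destroy $/OldTeamProject/Source/Widget.cs /keephistory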

Additionally, it is worth noting that the work item moving tool by Eric Lee is available at CodePlex (it is not easy to find unless you know it is there).

Overall, the initial recommendation still stands: make sure you name Team Projects appropriately, since it is a lot of pain to get anywhere close to renaming them (and it is impossible to perform a “clean” rename).

Saturday, August 30, 2008

Getting Latest in VS2008 (addendum)

One thing I did not describe in my previous post is the actual user experience in the VS IDE. So as a footnote, it is worth noting that when get latest is performed as part of check-out (due to either VS IDE or Team Project settings), you will be presented with the following dialog:

The good part is that you are now aware of what is happening; the bad part is that you cannot cancel, and having an additional dialog pop up is somewhat disruptive.

Friday, August 29, 2008

Two flavours of “Get Latest On Check-out” in VS 2008

While in TFS/VS 2005 there was no option to get latest on check-out, in the 2008 version there are not one but two different ways to configure that feature (disclaimer: I am not endorsing getting latest on check-out, but just trying to reach some sort of closure on the TFS/VS 2008 feature set).

The first option is to enable this setting per workstation, using the Visual Studio TFS source control provider settings (available through the “Tools->Options” menu):

This option is fully controlled by the user in his environment, and does not affect other users in any way.

The second option is to configure “Get Latest On Check-Out” per Team Project (using the “Team->Team Project Settings->Source Control” menu):

Since the option is set for the Team Project, it can be enabled by the administrator and will affect all users working with the project files.

Thus in VS 2008 one can have the “Get Latest On Check-out” option enabled either for all developers working on the project (using the Team Project settings), or a developer can enable that option for himself (using the VS source control provider settings).

From the “best practices” standpoint, I’d like to note once again that getting latest on check-out is a very disruptive, evil and outdated practice. While I am highlighting these features, I am neither a fan nor a user of them.

Consider the following typical scenario – you have checked out a file in a VS project. Since get latest is performed, you have just got yourself the latest version of that file. If that latest version contains changes that are incompatible with the versions of other files in your workspace (say, dependencies on new interfaces that are not yet in your workspace), then you are screwed. That is, to make things tick you will now have to get the latest versions of all relevant files in your workspace (hello and welcome back, VSS!).
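
At that point the usual remedy is to sync the rest of the workspace, either via Get Latest Version in Source Control Explorer or from the command line; the path below is made up:

    rem bring the whole source tree in the workspace up to date
    tf get $/MyTeamProject/Source /recursive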

And besides, the Team Project setting somewhat smells of dictatorship, since it will force everyone on the team to conform to a VSS-like mode of operation. Not a good thing in today’s flexible world.

Related posts:
- Get latest on check-out in TFS 2008
- (Not) getting latest on check out – a bug?

Sunday, August 24, 2008

Editing files in VS2008 SP1

As a follow-up to a previous post on file handling in VS2005/VS2008, I thought it worth mentioning another big difference coming as part of VS2008 SP1.

Pre-SP1, if you edit a source-controlled file that is not part of the currently loaded solution, VS will not prompt you to check out this file (and will not check it out automatically, if that is what your configuration settings specify).

However, if you work with files in Source Control Explorer in SP1, your experience will be pretty much identical to the Solution Explorer experience, even if the file is not part of the current solution. That is, editing a file will check it out (if that is what your VS settings specify – Source Control Explorer behavior is defined by the same set of settings as Solution Explorer, namely the “Tools->Options->Source Control->Environment” tab).

Together with the change mentioned in my previous post, this small tune-up should significantly decrease the number of local changes that never make it up to the repository (that is, if you are tweaking files locally and modifying them outside of the solution context).

Tuesday, August 19, 2008

Static Analysis for T-SQL

One little known feature of Visual Studio Team Edition for Database Professionals (aka Data Dude) is the ability to run static analysis on SQL scripts (similar to static code analysis for managed code). To be more precise, the analysis feature is part of Power Tools for Data Dude (and part of VSTS 2008 Database Edition GDR CTP).

I had been meaning to look into that feature for quite a while, but sadly the feature lacks documentation and I did not have time to dig in, so I kept postponing it. Today I came across a very interesting review of the analysis feature that provides the missing documentation and more. Check out the “Analyzing T-SQL Static Analysis 2005 & 2008” article at Mike Fourie’s blog.

And while you are there, you might want to have a look around Mike’s blog; it is one of the few blogs that deals with both MSBuild and TFS topics (for those who don’t know, Mike is the main person behind the SDC Tasks project at CodePlex).

Get new version of StyleCop while it is hot!

It is a bit of a coincidence, but within several days of each other both FxCop 1.36 and StyleCop 4.3 have been released!

The new version of StyleCop has the following changes (see Jason Allor’s post for more details):

  • New name (now it is StyleCop instead of Source Analysis for C#)
  • New rules (nothing ground-breaking there, but they are pretty neat)
  • Bug fixes (a whole lot of them)

For the bug fixes alone I’d recommend you download it and give it a spin.

Monday, August 18, 2008

Editing writable files in VS2008

One interesting change of TFS source control provider behavior in Visual Studio 2008 is the handling of writable files.

In VS2005, if you make a source-controlled file writable locally, editing it will not cause a check-out (you will have to explicitly check the file out); of course, that assumes that VS is set to either explicitly or automatically check out files on edit or save.

With the same VS settings in VS2008, you will still be prompted to check out the file, even if it is writable. The TFS source control provider tracks all controlled files in the solution, regardless of their read-only status.

The rationale behind this change is clear – changing files locally without referencing the source control repository may lead to changes never propagating to the repository at all (and thus problems of the “I have changed the file and it was not checked in” kind may arise).

However, there are some interesting problems you might encounter with this new behavior. Let’s say a certain file is locked by someone else (with an exclusive check-out lock). In VS2005 you could make this file writable locally, and VS would be happy to let you edit the file. In VS2008, however, VS will first check the status of the file in source control and, seeing that it is locked, won’t allow you to edit the file.
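
If you want to see who holds the lock (and what else they have pending) without going through the IDE, the command-line client can tell you; the path below is just a placeholder:

    rem show pending changes (including locks) for all users on the item
    tf status $/MyTeamProject/Source/Widget.cs /user:* /format:detailed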

There is a workaround for this (aside from never messing with local file modifications :): the “Tools->Options->Source Control->Environment” tab in Visual Studio may be used to tweak the options. Setting the checked-in items’ “Editing” behavior to “Do nothing” will allow you to edit a file regardless of its status in source control (setting the “Saving” behavior to “Save As” will allow you to save it). But keep in mind that this setting is probably a very unproductive choice for day-to-day work.

Thanks for this tip go to Richard Berg.

Get new version of FxCop while it is hot!

Those who use FxCop know that the latest release (1.36) has been in beta for quite a while. Now the wait is over - FxCop 1.36 was RTM’ed! Download it while it is hot!
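
If you prefer the command line to the GUI, the release still ships FxCopCmd.exe; a minimal run might look like this (the assembly and report names are, of course, made up):

    rem analyze an assembly and write the results to an XML report
    FxCopCmd.exe /file:MyAssembly.dll /out:FxCopReport.xml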

If you wonder how I knew about the new release – I knew because I read the new blog of David Kean. Look there for a short summary of the FxCop 1.36 feature set, and subscribe for more content!

Monday, August 11, 2008

(Let’s not) blame the other guy

I did some general research on SCM tools lately, and one thing I found interesting was the amount of negative information floating around (no matter what tool you are looking at). What’s interesting about it is that in most cases the negative zeal seems to be misplaced. And here is why – from my personal experience, blaming the tool is usually a symptom of something else lurking behind it. So, being the positive guy that I am, let me share my thoughts about that “something else”.

The first problem that comes to mind is selecting the wrong tool from the start and then blaming it.

With all the white papers floating around and trial versions available (not to mention salespersons ready to do a circus dance at your office), it is still the easiest thing to do. People do it by a) assigning the wrong person to do the feasibility assessment, b) compiling a list of wrong requirements, or c) making the decision based on political/financial motives alone. Those are just a few reasons that come to mind; I am pretty sure anybody who has worked (or better yet, consulted) for multiple companies can add more. And mind you, when I talk about the “wrong” tool I mean “wrongness” on a grand scale – choosing a system without offline support for a largely distributed workforce with limited connectivity, or one lacking integration with the one development tool that everybody uses, etc. Things of that scale usually cannot be fixed internally, no matter the resources available.

Choosing the wrong SCM (or, looking at the bigger picture, ALM) tool may result in different outcomes. Three outcomes I have personally seen are: dump the tool and do a re-assessment (extremely rare, since somebody has to take the blame and it requires understanding early on that the tool is the wrong one), readjust internal practices to align with the tool (also relatively rare, since internal practices usually have more champions than any new tool, regardless of their merit), and make do with what is there.

In every case there will be frustration in abundance, and some (or maybe even most) of it is always directed towards the tool.


While choosing the right tool is undoubtedly important, setting up the right process is even more important. By process I mean the general set of SCM practices that may cover software design, development, maintenance and release; whether these practices are formalized and documented is immaterial.

Let’s assume that a well-performed feasibility study allowed your company to acquire the right suite of tools; i.e. the company managed to avoid “wrongness” on a large scale. The pilot project was set up, outside consultants were hired to provide inside knowledge, and a pile of documentation was compiled. Does that mean the process is right? Not really. The groundwork may be there, but whether the process is working still remains to be seen.

The only way to understand whether the process is working is continuous monitoring; the only way to make it work is continuous adjustment. As in software development in general, it appears that being agile pays better than being a know-it-all right from the start.

The signs showing that the process is not working correctly are many and may be monitored with relative ease. One easy-to-detect sign is a developer bitching about how impossible it is to work with YYY (while with XXX everything was a breeze). Countless cases of such “issues” may be found by searching for “YYY sucks” on the web, though sometimes the reason really is a case of the “wrong tool” (it is not always easy to distinguish which is which – find one extreme example here; I think that one combines a wrong process and a wrong tool with a touch of good old RTFM advice thrown in for good measure).

Adjustments to the process should follow the signs and eliminate the bottlenecks by healing the cause – whether by developing a custom utility for automatic generation of release notes, by training your developers in the art of branching, or by chucking your carefully crafted (theoretical) workflow in favour of a less elegant but working one. The mantra of “developers, developers, developers” applies well to the SCM process (just do not forget to change the words every now and then to “release managers, release managers, release managers”, etc.); making everybody happy, or at least content, is the best metric of a well-implemented process.

My purpose in writing this lengthy and somewhat generic post was this – when one has an urge to write a spiteful post about a certain SCM tool (be it TFS, SVN, Perforce or whatever), it would really help to stop for a moment and deliberate. I am willing to bet that in the majority of cases either the tool is being bent to deliver something it was not intended for, or the process built around the tool is less than perfect (and I leave out the eternal cases of RTFM). Myself, I am proud of never having blasted any SCM tool in public :)

Monday, August 04, 2008

Check in your stuff now or else!

How often do you check in? Do you have an organization-wide policy mandating a maximum check-in period? And should you care at all about those pending check-ins?

As the general wisdom has it, you should check in “often”. In the past I myself was quick to cite that maxim, but thanks to several discussions around this issue I have been swayed and now believe that “often” is not the right qualifier; rather than checking in often, one should check in when “ready”. Indeed, when you think about it, committing a new revision of code to the repository is (or should be) driven by code readiness rather than by an arbitrary time period.

But that raises another question – what is code “readiness”? While “often” is easy to define (“Thou shalt check in code once in a fortnight!”), code that is ready to check in is trickier to define and depends on your company’s practices. Code readiness may include one or more of the following:

  • Code compiles (poor man’s testing)
  • Code compiles and all code dependencies compile (poor man’s integration testing)
  • Code satisfies (static) code analysis rules
  • Code passes unit tests
  • Code passes integration tests
  • Code passes code review
  • Code & its unit tests pass unit test review

It is at this stage that many decide to go back to the “check in often” principle, since making sure that the code being checked in is ready is much more complex than making sure the code is checked in every three days.

However, if you are unable or unwilling to define code check-in criteria, that says a lot of (bad) things about your development process. Basically, a check-in should be used for committing a snapshot of the development; but not just any snapshot. If the only definite thing about the code revisions checked in is that they are checked in at daily intervals, the usefulness of your source control repository is very limited – try rolling back or going to a certain past state of the repository when a revision is synonymous only with a date. So establishing at least elementary criteria for check-in (read: “code compiles”) is a good start and is preferable to any time-based criteria.
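
To put it in TFS terms, ready-state check-ins let you go back to a state that actually means something (a changeset or a label) instead of guessing by date; the path, changeset number and label below are invented:

    rem guesswork: whatever happened to be checked in by that date
    tf get $/MyTeamProject/Source /version:D2008-08-01 /recursive

    rem a meaningful state: a changeset or label that corresponds to ready code
    tf get $/MyTeamProject/Source /version:C1234 /recursive
    tf get $/MyTeamProject/Source /version:LSprint12-Done /recursive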

One other argument against “check-in-when-ready” and in favour of “check-in-often” is backing up the code revisions (“when your workstation crashes, we have a copy in the repository”). With modern SCM solutions this problem is easily solvable; for example, TFS provides shelving functionality that ought to make “check in for backup” a thing of the past.
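
A minimal end-of-day backup with shelving might look like this (the shelveset name and comment are just examples):

    rem shelve all pending changes in the current workspace as a backup copy on the server
    tf shelve EndOfDay_2008-08-04 /comment:"backup before leaving" /replace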

And here I am going to contradict myself a bit and say that even when you set check-in criteria, having a “check in often” policy is still valuable (with “often” set to 5+ days) – but only as an additional measure. That way you may discover a long-lasting development effort (the “it is still not ready, we need another week, since if it is checked in now everything will break” sort of effort) – the kind of effort that should not be a single check-in unit anyway (probably the TFS branch construct is the one to use in such cases). By the way, another interesting outcome of that policy may be discovering that the granularity of the development tasks assigned is too coarse. In that case, working on breaking development up into smaller pieces may mitigate the check-in problem.

So to conclude my somewhat rambling post, here is my “pending check-ins manifesto”:

  1. Check in when the code is ready to check in. Establish your criteria for “code readiness”
  2. If you are not checking in, back up the code you work on daily
  3. Enforce a “check in at least once in X” policy, but only as an additional measure. Make sure nobody is forced to check in; instead, try to understand the original cause

But hey, what about the initial question – should one care about those pending check-ins at all? Hopefully the discussion above makes it somewhat clearer; I believe that yes, one should care about check-ins left floating around. When somebody checks out a file (or files), he essentially makes the statement “I am about to modify these files”. From that point there are two ways it can go – that person may decide otherwise and not commit any changes (undo), or make and commit the changes. So the pending check-ins indication gives you a very simple yet powerful tool to monitor the state of software development and to improve the process if needed.

And it is a shame that sometimes a simplistic view of Configuration Management concepts prevails and makes lots of people unhappy. Let us deliberate before enforcing any policy – perhaps we could do better?