Thursday, November 12, 2009

TFS Destroy – friend or foe?

While everyone else is blogging about VS 2010 Beta 2, I thought it may still be worth publishing this post, which talks about VS 2008 behavior (yes, the old release ;).

One of the features missing in VS 2005 and added in VS 2008 was the destroy command; people wanted to get rid of source control artifacts for good and were unable to do so.

Interestingly, once the command became available it did not prove too popular. Come to think of it, there are very few cases where one can afford to permanently delete data; after all, source code is the most valuable asset any software company has.

But should you decide to use the destroy command, there are a few important points to keep in mind:

1. Before executing the destroy command, consider deleting the item first. Leaving the deleted item in “quarantine” for a week or so makes sure that nobody who uses the item (for example, as part of an automated build) will miss it once it is completely gone. And once you are ready to destroy it, use the /preview switch to double-check which files you are about to permanently wipe out (see the sketch after this list)

2. If you use destroy, destroy all versions of the item and do not fall for the /keephistory option (with or without the /stopat flag):

tf destroy $/Project/FolderOldName;C123 /stopat:C156 /keephistory

This option destroys all (or some, as in the example above) versions of the item while retaining the item’s history. It may be tempting to clean old revisions out of the database while leaving the history intact; the problem with this usage is that you will not be able to distinguish the destroyed revisions when viewing the item’s history, and you may get an error when trying to view seemingly valid history.

3. When you execute the destroy command, data is not deleted from the database immediately. There is a TFSVersionControl Administration job running on the TFS data tier at a scheduled interval that takes care of the actual database purging. You can trigger the job immediately by using the /startcleanup option (or by running the job on SQL Server manually). The job does not take care of cleaning up the warehouse; the warehouse gets updated at its own scheduled processing intervals.
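
Putting the points above together, a cautious destroy session might look like the minimal sketch below (the server path is hypothetical; /preview and /startcleanup are the tf destroy switches discussed above):

rem List what would be destroyed without actually destroying anything
tf destroy $/Project/ObsoleteFolder /preview

rem When satisfied, destroy all versions and trigger the database cleanup job right away
tf destroy $/Project/ObsoleteFolder /startcleanup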

On a personal note, my usage of destroy was limited to removing sample and test TFS project content; I was never able to find enough justification to permanently delete source code, however unused it may be. But your mileage may vary – if you do decide to get into the destruction business, there are a couple of very useful resources on TFS destroy that are not immediately discoverable through a simple search: a summary MSDN article and the screencast “How Do I: Use the TF Destroy Command in Visual Studio Team System 2008?” by Richard Hundhausen.

Mirror from my MSDN blog

Friday, November 06, 2009

StyleCop checkin policy updated for StyleCop 4.3.2.1

The StyleCop check-in policy has been updated for the latest available build of StyleCop (4.3.2.1). The only change in this drop is the newer StyleCop reference assemblies.

Version 1.2.2 of the policy is available as an MSI installer (provided AS IS).

Related posts:
- Updated StyleCop Checkin Policy (v1.2.1)
- StyleCop 4.3 Checkin Policy version 1.2
- StyleCop 4.3 Checkin Policy available
- Get New Version Of StyleCop
- StyleCop Confusion Cleared
- Source Analysis For C# Checkin Policy
- Source Analysis For C#: Custom Rules And More
- Beautify Your Code With Microsoft Source Analysis

Monday, August 17, 2009

Disposing of checkin policy

Recently, while fixing up the StyleCop checkin policy, I came across one small, not-so-obvious bit of knowledge worth sharing.

Any custom checkin policy inherits from PolicyBase, which in turn implements IDisposable. Meaning – if you need to clean up after yourself in your custom policy, the Dispose method is the place for that.

In the StyleCop policy, I do a lot of Visual Studio-related work, and thus I thought I’d dispose of the VS extensibility objects in the Dispose method.

And here is where the non-obvious stuff starts. The policy is loaded in any of these cases:

  • Configuring project source control (through the Team -> Team Project Settings -> Source Control menu)
  • Right-clicking in Solution Explorer and invoking the “Check In…” menu
  • Invoking the “View Pending Changes” tool window

While the first case is not very interesting (no pending changes are evaluated during configuration), the other two cases are important.

In the “Check In…” case, the “Check In” modal window is displayed (and the policy is loaded), and when the window is closed, the custom policy class is unloaded and Dispose is called. However, in the “View Pending Changes” case the policy is loaded once when the tool window is first created, and Dispose will be called only when Visual Studio is closed or the TFS server connection is closed. That means you probably should not hold on to any expensive resources while waiting for Dispose to be called.

Wednesday, August 12, 2009

New check-in policy for VSS fans: keywords expanded

One of the much-talked-about missing features of TFS is keyword expansion. You know, the ability to place a template at the beginning of every single file and then have every revision tracked in the body of the file (in addition to the tracking in source control history, that is).

Personally, I am not a huge fan of the feature – mostly because its usefulness is limited by the following factors:

  1. The check-in comments still have to be detailed (if the comments are crappy, you get a lot of garbage in the file that adds nothing)
  2. If the file gets branched a lot, the revision history tends to get muddy and does not reflect the branching history very adequately
  3. If the code churn is great, you might get 100 lines of code adorned with 400 lines of revision history (yes, I have actually seen this)

So I am of the opinion that all of the above is the job of source control, and if your source control is not good enough at tracking the history of changes – that ain’t good source control :) However, for some folks the ability to have the history of changes contained in the same file outweighs the disadvantages. And these folks were pretty vocal – so vocal that Buck Hodges stopped one step short of writing an actual solution and provided a verbal recipe for writing one, using a check-in policy as a workaround (since keyword expansion is not making it into the official product).

And voila! Two years after that post was published, a TFS keyword expansion checkin policy has appeared, written by Jochen Kalmbach. Jochen has also published the policy on the CodePlex site under the name LogSubstPol.

I did a short test drive of the policy, and it does work as advertised, in three simple steps:

  1. Install the policy (currently done using a batch script)
  2. Add the policy to your Team project and configure the format of the keyword string
  3. Add keyword monikers ($log$ etc.) to the files you modify prior to check-in

Once all of that is done (steps 1 & 2 are performed once per project, step 3 once per file), as you check in you will see the revision history updated and checked in as part of the file.

While the policy is awesome, there are a few things to be aware of.

  • As the policy requires you to supply a comment, it effectively replaces the “Changeset Comments Policy”, so if you have that one defined for your Team project you might want to remove it
  • The configuration dialog for the policy is somewhat complex, so read the documentation first (the supplied PDF is really good)
  • If you evaluate the policy but decide not to check in, the keywords in the file will get expanded anyway, due to limitations of the checkin policy mechanism
  • As the files are updated by the policy, VS will display the message “The file has been modified outside of the source editor” (since the policy touches the file right before check-in). That again may be a limitation of the checkin policy mechanism for keyword expansion (but perhaps it can be mitigated with some creative VSX tweaking)

But regardless of these small thingies, the policy is mighty useful and it fills a big gap for those accustomed to keyword expansion. Big kudos to Jochen for creating the policy!

Mirror from MSDN blog

Saturday, July 25, 2009

Updated StyleCop Checkin Policy (v1.2.1)

After countless nudges I have updated the StyleCop check-in policy for the newest available build of StyleCop (4.3.1.3). While at it, I was able to incorporate a few bug fixes and add some improvements.

Version 1.2.1 of the policy is available as an MSI installer or as zipped source code (provided AS IS).

Bugs fixed are:

  • A solution containing projects with the same name is not evaluated correctly (the keying scheme is now by project VS object instead of by name)
  • The same file appearing in different projects causes a policy exception
  • C# web site projects are now supported

For the last two items, huge thanks to Clement Bouillier for bringing them to my attention.

Additionally, I have added a pretty neat feature (in my opinion) to improve navigation to the errors found by the policy. When the policy is evaluated, any violation found is now added both to the Checkin dialog and to the Visual Studio Error List pane. Thus you can review the violations in a form similar to non-policy StyleCop violations. Also, clicking on a policy violation either in the Checkin window or in the Error List pane will now bring up the file and the line the violation is found on.

Policy violations are cleared from the Error List pane when the policy is re-evaluated or when the project is built.

If you encounter any issue with the new drop, please make sure to leave a comment (and I promise to handle the issues promptly this time).

And while on the StyleCop topic, I’d like to point to an excellent project driven by Howard van Rooijen – StyleCop integration with ReSharper. If you use both, make sure you get the latest drop from CodePlex.

Related posts:
- StyleCop 4.3 Checkin Policy version 1.2
- StyleCop 4.3 Checkin Policy available
- Get New Version Of StyleCop
- StyleCop Confusion Cleared
- Source Analysis For C# Checkin Policy
- Source Analysis For C#: Custom Rules And More
- Beautify Your Code With Microsoft Source Analysis

Saturday, May 09, 2009

Work Item customization tidbits: custom controls (part 14 of X)

In one of my previous posts I mentioned that I consider custom controls in WI one of the most complex types of customization to implement. Since I was asked a related question, let me expand on the topic.

Custom work item controls provide a way to implement truly specialized behavior for WI, by writing a managed class conforming to a well-known interface. Sometimes it may be very tempting to write the logic in C# instead of learning the intricacies of WIT XML syntax (and sometimes there is no alternative).

Why is this type of customization so complex? Because you have to code, integrate, test and deploy an additional component – and in a way different from other WIT customizations, which are done in XML. And not only that – consider the following important drawbacks a custom control has:

  • An additional version of the control needs to be implemented if you want to support the same logic in the Web UI as in the Visual Studio environment
  • A custom control is not supported when editing WI in Excel (or MS Project)
  • The custom control assembly needs to be deployed on every client machine (or on the Web server if the custom control targets the Web UI)

All of the above means that a custom WIT control should be implemented only when you have absolutely no other answer to an outstanding business requirement.

For further information, there is an extensive summary article on the topic by Ognjen Bajic. Another interesting article, by Neno Loje, provides additional details on customizing Work Item types depending on the client (WinForms or Web), which is very relevant for custom controls.

Related posts:
- Work Item Customization: limits of complexity (part 13)
- Work Item Customization: estimate the effort (part 12)
- Work Item Customization: customization and global lists (part 11)
- Work Item Customization: customization process (part 10)
- Work Item Customization: customization tools (part 9)
- Work Item Customization: special fields (part 8)
- Work Item Customization: fields maintenance (part 7)
- Work Item Customization: global lists (part 6)
- Work Item Customization: system fields (part 5)
- Work Item Customization: user interface (part 4)
- Work Item Customization: state transitions (part 3)
- Work Item Customization: conditional field behavior (part 2)
- Work Item Customization: fields definition (part 1)

Mirror from MSDN blog

Wednesday, May 06, 2009

Work Item customization tidbits: limits of complexity (part 13 of X)

Today I’d like to talk about WIT customization recommendations that will mostly become applicable as your custom Work Item types increase in complexity.

Keep the number of custom fields limited (per TFS server)

One can have a maximum of 1024 fields defined per Team Foundation Server (as every field is represented by a column in a SQL Server table, the limitation is that of the maximum number of columns per table in SQL Server). That means that if you define new fields (a FIELD with a distinct refname attribute) per WIT, you can easily hit this limit after creating a few complex Work Item types. Once the limit is reached, you have to deal with field maintenance chores (you must delete some of the fields, and to delete them the fields must not be used in any WIT) – not a lot of fun when what you actually tried to achieve was to create a new template.

How do you prevent this problem from occurring? Reuse is the key here – remember that even though it may look like you define a new field per Work Item type, fields are a (precious) server-wide resource; and even if the same field is used across Work Item types, you can specify different behavior for it in each WIT.
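
As an illustration, here is a minimal sketch of the same field reused in two different Work Item type definitions with different rules (the field name and refname are hypothetical):

<!-- Bug work item type: the field is required -->
<FIELD name="Customer Impact" refname="MyCompany.Common.CustomerImpact" type="String">
  <REQUIRED />
</FIELD>

<!-- Task work item type: same refname, so no new server-side field is created -->
<FIELD name="Customer Impact" refname="MyCompany.Common.CustomerImpact" type="String">
  <DEFAULT from="value" value="None" />
</FIELD>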

Keep the number of rules limited (per Work Item Type)

While you can create multiple rules in a WIT, be aware that rules not only affect maintenance complexity (you have to make it all work ;) but also affect performance. Your users may experience less-than-stellar performance when they create or modify work items. And there is an additional consideration, which I will expand upon in the next section:

Keep the number of WI types small (per Team project)

While there is no hard limit on the number of WITs you can create in one Team project, there is a technology limitation (SQL Server again!) on how much complexity one may have per project, with the numeric complexity index in this case being defined as [number of rules in a WIT] x [number of WITs in the project]. When you have too many WITs (or a few very complex ones) you may hit the limit on the maximum size of columns in a SQL Server statement (65,535). It turns out that all the rules you define in the WITs of a Team project are eventually represented as part of a genuinely complex SQL statement used for WI validation when its data changes (read more techie details in this forum post by Amit Ghosh).

Keep the number of reportable fields small (per TFS server)

If you are not planning on including the fields in SQL Reporting reports, do not mark them reportable just for the heck of it; the reportable fields will propagate into the TFS data warehouse and add extra overhead in terms of performance and space on your TFS data tier.

By the way, to really understand how reporting in TFS works (and how it fits into the big picture), read this excellent post from Vince Blasberg.

In conclusion, I’d like to highlight once more the importance of having a test environment to which you deploy potential WIT changes prior to production rollout. Consider the situation where you have just deployed a new WIT to a Team project, and as a result users cannot update any WI in the project. Not a happy place to be, is it?
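
As a rough sketch of that test-first flow (the server, project and file names are hypothetical, and the switch names are from memory – verify them with witexport /? and witimport /? before relying on them):

rem Export the current type definition from the test server
witexport /f Bug.xml /t http://tfs-test:8080 /p TestProject /n Bug

rem After editing Bug.xml, validate it first, then import into the test project
witimport /f Bug.xml /t http://tfs-test:8080 /p TestProject /v
witimport /f Bug.xml /t http://tfs-test:8080 /p TestProject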

Related posts:
- Work Item Customization: estimate the effort (part 12)
- Work Item Customization: customization and global lists (part 11)
- Work Item Customization: customization process (part 10)
- Work Item Customization: customization tools (part 9)
- Work Item Customization: special fields (part 8)
- Work Item Customization: fields maintenance (part 7)
- Work Item Customization: global lists (part 6)
- Work Item Customization: system fields (part 5)
- Work Item Customization: user interface (part 4)
- Work Item Customization: state transitions (part 3)
- Work Item Customization: conditional field behavior (part 2)
- Work Item Customization: fields definition (part 1)

Mirror on MSDN blog

Friday, May 01, 2009

MSBuild UsingTask gotchas

One significant drawback of the MSBuild UsingTask element is that you must specify exactly the task name you are importing. That is, if the assembly you are importing contains 200 tasks, you will have to import them explicitly, one by one. And since you probably do not want to do that in every project you author, these 200 UsingTask declarations usually end up in a separate project file that can be imported wherever the tasks are needed.

While there is no workaround for specifying the task name, there is another, somewhat easier way to make sure that the tasks are available to your projects without explicitly importing the tasks project file.

Let’s suppose that you have created an MSBuild project file that contains UsingTask statements for all the custom tasks you want to have available in your projects. If you then rename this project file to have a .tasks extension and place it in the .NET Framework folder (e.g. the C:\WINDOWS\Microsoft.NET\Framework\v3.5 folder for .NET 3.5), the tasks defined there will be available in any project using that version of MSBuild, without an explicit import statement.

This is the mechanism used to make the tasks shipped with MSBuild available to all projects by default (look into the Microsoft.Common.tasks file to see them defined there). No magick required!
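
For illustration, a minimal sketch of such a custom .tasks file might look like this (the file name, assembly and task names are hypothetical):

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- MyCompany.Custom.tasks: drop it next to Microsoft.Common.tasks -->
  <UsingTask AssemblyFile="MyCompany.Build.Tasks.dll"
             TaskName="MyCompany.Build.Tasks.Zip" />
  <UsingTask AssemblyFile="MyCompany.Build.Tasks.dll"
             TaskName="MyCompany.Build.Tasks.UploadFile" />
</Project>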

By the way, looking into the Microsoft.Common.tasks file imparts two additional pieces of wisdom (to quote):

NOTE: Listing a <UsingTask> tag in a *.tasks file like this one rather than in a project or targets file can give a significant performance advantage in a large build, because every time a <UsingTask> tag is encountered, it will cause the task to be rediscovered next time the task is used.

Another useful comment relates to the way the tasks are defined in UsingTask – you can either specify a fully qualified task name (including namespaces) or a short one; however (again, quoting the Microsoft.Common.tasks file):

NOTE: Using the fully qualified class name in a <UsingTask> tag is faster than using a partially qualified name.

In addition to the performance win, you will also be able to disambiguate the task used. For example, both the SDC Tasks and MSBuild Community Tasks packages define tasks that share the same short name (such as Sleep) and differ only by namespace. In such cases you will have to be explicit both in the UsingTask statement and when using the imported task:

<!-- Import SDC Sleep task -->
<UsingTask AssemblyFile="Microsoft.Sdc.Tasks.dll" 
          TaskName="Microsoft.Sdc.Tasks.Sleep"/>
<!-- Import MSBuild Community Sleep task -->
<UsingTask AssemblyFile="MSBuild.Community.Tasks.dll" 
          TaskName="MSBuild.Community.Tasks.Sleep" />
<!-- Use SDC Sleep task, full name to disambiguate -->
<Target Name="Sleep">
  <Microsoft.Sdc.Tasks.Sleep SleepTimeout="1"/>
</Target>

Mirror from MSDN blog

Friday, April 24, 2009

Branching off renamed trunk

Recently I was asked a small but non-obvious branching question. Suppose you have a folder named FolderName, and for some reason you have renamed it to NewFolderName. All is well, but now you have decided you want to create a branch from that folder – and to branch from a version prior to the rename.

For the reasons detailed in my older post, you will not be able to use the branching UI for this operation. The only way to achieve it is to use the tf command-line client's branch command, where you explicitly specify the version you branch from and the folder name as it existed at that revision:

tf branch $/Project/FolderName $/Project/Branch /version:C123

The typical mistake people make is to use the current item name, NewFolderName, instead of the name that existed in the past (i.e. FolderName at the time of changeset 123).

Mirrored from MSDN blog

Saturday, April 11, 2009

Work Item customization tidbits: estimating the effort (part 12 of X)

My apologies for the long silence on the subject of this series (due to several recent events), but hopefully I am now back on track, and I have a long backlog :)

In my previous posts I have discussed various bits that are important to know before taking on Work Item type customization. Today I’d like to talk more about approaching the whole process.

I would like to advocate a conservative approach, since in most organizations (at least in my experience) there are limited resources dedicated to the customization, user support and maintenance of Work Item types.

The easiest way to jump-start the customization process is to use one of the existing templates; I’d recommend the stock templates coming with TFS (MSF or CMMI), though nowadays there are other decent templates available (for example, Conchango Scrum is well known and widely used). At the very least, that provides you with a minimum of work item logic implemented in a professional manner.

To understand the customization effort required, it is helpful to review the following:

1. Detail the new data fields to be added to those existing in the Work Item type; note whether existing field rules need to be modified.

2. Identify the desired work item state lifecycle and how it compares with the existing one for the Work Item type (paying attention mostly to the flow rather than to the state names).

3. For your new custom fields, see if there is any special logic to be implemented, viz.:

  • Whether field rules are to be scoped by user/group
  • Whether field rules are to be scoped for different states
  • Whether the field needs to be associated with a static/dynamic list of values

Once you create a mapping table of the desired vs. existing fields, this data may be used to estimate the complexity of development and maintenance. I have tried to compile a (somewhat biased) complexity list of elementary field customization tasks (ordered from the simplest to the most demanding):

i. New data field. The simplest customization possible, both from the point of view of implementation and of subsequent maintenance. It may require additional effort if the field is to be reported on (since integration into reports will be required).

ii. Data field with lists of values (local or global lists). For static (i.e. rarely updated) lists of values (such as priorities), both the implementation and the maintenance are fairly simple. However, if the list content is dynamic (such as a customer list), make sure you plan for maintenance and, more importantly, do not make any assumptions about the list content in field rules.

iii. Data field with static rules logic (no dependency on state/user). Since the rules implemented may be pretty complex, the scenario is as complex as you make it from the implementation point of view. And depending on how well you test the implementation, maintenance may range from a nightmare to none at all.

iv. Data field with rules logic dependent on state transitions. When the rules defined include a dependency on the state lifecycle, that generally means you need to put extra effort into testing (for a large state chart the effort may be very significant) and plan for regression testing whenever the state lifecycle is modified.

v. Data field with logic dependent on user/group. When rules are scoped to specific groups (rarely to individual users in a corporate environment), the complexity of the environment may have a bearing on the WIT. Namely, in an Active Directory environment with multiple levels of inclusion between groups, it might not be easy to diagnose why your rules function incorrectly (either by being too loose or too restrictive). Extra maintenance may well be expected.

vi. Data field with custom controls. When your data field, in addition to the rules expressed in the WI Type definition, has logic defined in a custom control assembly, you have just added an extra dimension to implementation, testing and maintenance. It becomes an even more complex task if the custom control should also work in the Web interface.

Once you have identified the work to be executed, you will be able to plan the effort required for implementation, testing, deployment and maintenance.

In conclusion, I’d like to highlight two very important principles which, when followed, will prevent a plethora of issues: a) never deploy to production before deploying to a test environment, and b) plan and execute the whole WI Type customization process as if it were an ordinary software development effort.

Related posts:
- Work Item Customization: customization and global lists (part 11)
- Work Item Customization: customization process (part 10)
- Work Item Customization: customization tools (part 9)
- Work Item Customization: special fields (part 8)
- Work Item Customization: fields maintenance (part 7)
- Work Item Customization: global lists (part 6)
- Work Item Customization: system fields (part 5)
- Work Item Customization: user interface (part 4)
- Work Item Customization: state transitions (part 3)
- Work Item Customization: conditional field behavior (part 2)
- Work Item Customization: fields definition (part 1)

Mirror from MSDN blog

Wednesday, April 01, 2009

TFS Administrator chores – space offender strikes again!

In my previous post I talked about the management of large files in the TFS version control database. Today I’d like to talk about what you can do to optimize space management in the work item tracking database.

As you know, it is possible to add file attachments to a Work Item, with a maximum attachment size of 2 MB by default; most people who use attachments with WI change that limit to something larger (this MSDN article details how to change the maximum attachment size), since the default frequently does not suffice for video captures and such.

Which naturally brings us to the question – if the maximum size is set to, say, 32 MB, how can one prevent misuse of the attachment feature?

There is nothing in the Team Explorer UI to help you figure out the size of an added attachment, and nothing to prevent a user from adding any number of large attachments (as long as each is not greater than the maximum size). That leaves you with user education as a form of prevention; to report on the usage, it is possible to run raw SQL against the relational database (all of the queries below are strictly AS IS etc.):

-- Query WIT database
USE TfsWorkItemTracking;

-- threshold for large attachments (adjust as needed)
DECLARE @LargeFile int;
SET @LargeFile = 1 * 1024 * 1024;

SELECT 
    -- parent work item 
    ID AS WorkItemID, 
    -- name of the attachment file
    OriginalName AS AttachmentName, 
    -- attachment comment 
    Comment, 
    -- file size
    [Length] AS [Size], 
    -- whether attachment was deleted
    CASE WHEN RemovedDate = '01/01/9999' THEN 0 
              ELSE 1 END AS Deleted 
FROM WorkItemFiles    
    WHERE 
    -- File attachments only
    FldID = 50
    -- return only large files
    AND [Length] > @LargeFile 

The query will give you the list of WIs with large attachments, so you can figure out whether the feature is being used in a sensible way.

If you look at the query closely, you’ll notice that an attachment can be removed from a WI and still exist in the database. What does that mean, you say? Whereas with version control one can delete an item (where the item will still be in the DB) and then destroy it (where the item will be purged from the DB), there is no such two-step feature for Work Item attachments.

It turns out that when you delete an attachment from a Work Item, the actual content is never deleted from the database unless you do it manually. There is even a helpful but incredibly well-hidden and vague MSDN article on the subject, titled “How to: Delete Orphaned Files Permanently”.

That means that even if you have managed to delete large attachments from WIs, the job of recovering the space is still only half done, and you need to actually delete the attachment content from the database.

The first query below enumerates all orphaned attachments (deleted from Work Items but still in the DB), whereas the subsequent query can be used to actually purge the deleted items from the database.

-- Query for all orphaned attachments
SELECT WorkItems.ID AS WorkItemID, 
        WorkItems.OriginalName AS AttachmentName,
        WorkItems.Comment 
FROM TfsWorkItemTrackingAttachments.dbo.Attachments Attachments, 
        TfsWorkItemTracking.dbo.WorkItemFiles WorkItems
    WHERE Attachments.FileGuid = WorkItems.FilePath 
        AND WorkItems.RemovedDate <> '01/01/9999'
        AND WorkItems.FldID = 50

-- When absolutely sure - delete the orphans
DELETE 
    FROM TfsWorkItemTrackingAttachments.dbo.Attachments
-- join to WIT tables to identify orphans
WHERE FileGuid IN (SELECT FilePath 
        FROM TfsWorkItemTracking.dbo.WorkItemFiles
        WHERE RemovedDate <> '01/01/9999'
        AND FldID = 50)

Purging orphans seems to me like a good candidate for a recurring job (not sure why it is not part of the core TFS setup).

Mirrored from MSDN blog

Saturday, March 28, 2009

TFS Administrator chores – dealing with the space offender

These are the days of cheap storage – but even cheap storage may run out. And a running Team Foundation Server, storing artifacts in its (multiple) databases, may use up your rack space faster than you might have expected (if you want to know what to expect, refer to this classic post by Buck Hodges on database size calculations).

If that happens, the most probable culprit is the version control database (TfsVersionControl) – in other words, all those files that people check into version control. File size matters because TFS stores only the difference for each new revision of “small” files, but for “large” files every new revision gets a full-blown copy (by default TFS considers a file large if it is over 16 MB – read more on that topic in my previous post).

There are several ways of making sure that your users do not fill up your version control with memory dumps, images of installation CDs and such. Mind you – I am not saying that large files do not belong in version control; I am saying that adding large files should be a) a conscious step and b) “revisionless” (i.e. done with no versioning in mind).

Myself, I have always been ambivalent about storing large binary thingies in source control – on one hand, you get all the content in one place (which is mighty convenient for builds etc.); on the other hand, many users will probably check in content that does not belong in source control. So here is my hit list of measures for dealing with large files in version control:

  • Educate your users – make sure the average user understands that a DVD ISO added to version control ends up being transmitted and stored in the database; perhaps what the user is looking for is a file server, not version control
  • Make users aware of their actions – it is possible to write a check-in policy that alerts the user at check-in time that the files being checked in are large and perhaps should not be in version control. And then, even if the user decides to override the policy, you can run a report on policy overrides
  • Monitor your storage – if high-level and low-level prevention fail, you can query the database to identify the offending files. The query below (with the usual caveats – it is AS IS etc.) will give you a list of large files in the database (it takes into account only the latest version of each file, not the total size of all versions):
DECLARE @LargeFile int;
-- return files larger than 16 Mb
SET @LargeFile = 16 * 1024 * 1024; 
 
USE TfsVersionControl; -- use source control DB 
SELECT -- item path 
    Versions.ParentPath + Versions.ChildItem AS ItemPath,
    -- size of latest version in DB 
    Files.CompressedLength AS DatabaseSize, 
    -- size of original file
    Files.FileLength AS [Size], 
    -- whether item deleted
    CASE WHEN Versions.DeletionId = 0 THEN 0 
        ELSE 1 END AS Deleted 
FROM tbl_File Files, tbl_Version Versions
WHERE -- get item latest version 
    Versions.VersionTo = 2147483647 
    -- join to table with sizes
    AND Versions.FileId = Files.FileId 
    -- return only large files
    AND Files.CompressedLength > @LargeFile 
ORDER BY ItemPath;

I would be happy to hear your horror stories from applying the above query; mine was nothing worse than a bunch of ISO images checked in :)


Thanks go to Chandru Ramakrishnan for reviewing the query.


Mirrored from MSDN blog

Moving on

As you might know, since November ’08 I have been working at Microsoft, and like every MS employee I am now entitled to a Community Server-powered blog at blogs.msdn.com (mine is http://blogs.msdn.com/eugenez).

Though Blogger is a convenient platform, Community Server is way better, and thus I will be making MSDN my new [blog] home. So update your reader with the new feed! But if you don’t – do not worry, I will be mirroring TFS-related posts here.

See you around!

Something good to read

This is a short post to let you know that there is one new blog of note that you should add to your RSS reader. Welcome (back) Richard Berg, blogging about TFS, PowerShell and more at http://www.richardberg.net/blog. Until recently, Richard worked at Microsoft, and if you ever asked a question in the MSDN TFS Version Control forum, you probably had it answered by Richard.

His new blog already has a whole lot of information about TFS PowerShell cmdlets, so stay tuned for more!

Tuesday, February 10, 2009

TFS Code Review Tools digest

If you are interested in getting the most out of your TFS toolset, an extensive summary presentation on code review tools in the TFS domain is available from JB Brown. I had the pleasure of watching the presentation in person, and it covers most of the available alternatives.

For those who do not know, JB Brown is the original author of and main contributor to the TeamReview project. TeamReview allows performing collaborative code reviews using a less-than-traditional approach (see a good intro post on how it works by Willy-Peter Schaub), where you can actually replay the steps performed in the code review (and the code review comments are stored as TFS work items).

Is TFS FDA compliant? Is anything?

I received an interesting comment on my previous post on securing intellectual property, the question being whether TFS meets the requirements of FDA compliance.

Let’s think about it for a second. What kind of software is the FDA concerned with? Software used in or with medical devices – which obviously does not include TFS or, say, the Visual Studio compiler.

But the FDA does recognize the importance of the tools used in the process of developing medical software. As part of the FDA software validation process (as described in the General Principles of Software Validation document), the tools of the trade need to be validated as well:

Software tools are frequently used to design, build, and test the software that goes into an automated medical device. Many other commercial software applications, such as word processors, spreadsheets, databases, and flowcharting software are used to implement the quality system. All of these applications are subject to the requirement for software validation, but the validation approach used for each application can vary widely.

So you will say – you still have to do that validation, whatever it might be (and if you have any past experience with medical software, you are probably not too excited about the prospect). Not exactly – because one has to draw the line somewhere.

Do you have to validate the OS you use for development? The C++ compiler you are using? That would surely be too much work, and the FDA recognizes that, stating that the degree of validation for off-the-shelf software applications used in the quality process depends – depends on the risk posed by the specific software usage, the role of the software in the process, vendor-supplied information, etc. (for more details, have a look at “Validation of off-the-shelf software and automated equipment”).

While I do not mean to say that your regulatory guys do not earn their bread and butter, the whole standards thing seems to be a little bit overrated. The FDA is not that much of a boogeyman – for example, one of the principles in the software validation document mentioned above is the “least burdensome approach”, meaning that the quality of the software is not necessarily measured by the weight of the documents you produce :)

And getting back to the initial question – does TFS meet FDA compliance criteria? Yes, it does, but the specifics differ depending on TFS’s place in the software development process and on how your regulatory folks read the FDA documentation (with the latter usually being responsible for most of the grief; that’s why I recommend reading the FDA guidelines yourself – to take the magic out of the process and ask educated questions).

And if anything, the FDA documents probably make much more sense than some unregulated documents I have come across.

Tuesday, January 20, 2009

Word on securing intellectual property

Do you care about your intellectual property? I am sure that the answer is yes. Now how about a related question – are you doing anything to make sure that your intellectual property stays yours?

Even if you answered yes to the last question, it is not easy to cover all aspects of the problem. Some of the more detailed questions you might want to answer are:

  • Can you establish that certain code is yours in the face of possible legal action?
  • Can you establish the fact that reasonable precautions were undertaken to secure the source code?
  • How do you make sure that your proprietary code is not leaking into public domain?

* - If you have additional compliance to worry about (such as FDA), additional questions may need to be answered.

There are some small things you have to do proactively to make sure you are covered from the legal perspective. While I am not an expert in law, I do have a couple of suggestions to offer for your consideration:

  • Start adding copyright notices in your source code (such tools as StyleCop can help you with enforcing this practice)
  • While the term “reasonable precautions” has a lot of legal nuances, at the very least it means that source code never leaves the premises (think about the situation where a developer uses source code at a client’s site as a shortcut to fixing a problem)
  • If you have sensitive information in your source code repository (such as proprietary algorithms), you may have to be more restrictive; that is, make sure that access to such information is granted only on a “need-to-know” basis

If you happen to have any hard-earned advice on the matter, please share it in the comments.

Sunday, January 11, 2009

HP Quality Center connector is available

If your organization owns both TFS and Quality Center, and you have to work with both, I bet you have asked yourself: “What if the two could be magically synchronized?” (not even mentioning the idea of moving QC artifacts into TFS :)

So today you can get the bits responsible for the magick from Microsoft itself – read all about the HP Quality Center Connector on Jim Lamb’s blog. And since it is a pre-release, you have a golden chance to use it and let MS know if they missed something you’d absolutely require.