Dell driver CDs

There’s a command-line tool on the CD that comes with Dell servers which allows you to extract the drivers for a particular platform, e.g.:

E:\server_assistant\driver_tool\bin>make_driver_dir.exe -i e:\ -d c:\server_drivers -p per510 -o w2008_64 --extract

This extracts the Server 2008 (x64) drivers for the R510 server system.

They distribute drivers for quite a range of platforms. Unfortunately, on the revision of the CD I have, the listing functionality is broken, but the Windows identifiers can be fairly easily guessed:

w2003
w2003_64
w2008
w2008_64
w2008r2_64 (there is no x86 version of 2008 R2, of course)

The drivers will be used to deploy various Windows versions onto our new server lab using MDT.
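To grab drivers for every OS at once, the invocation above can be scripted. A minimal sketch (the tool path, platform code and destination are the ones from my example above; this just builds the command lines, it doesn’t run them):

```python
# Sketch: build a make_driver_dir.exe command line for each guessed OS code.
os_codes = ["w2003", "w2003_64", "w2008", "w2008_64", "w2008r2_64"]

def build_command(os_code, platform="per510", dest=r"c:\server_drivers"):
    # Mirrors the invocation shown above, varying only the -o argument.
    return (r"make_driver_dir.exe -i e:\ -d %s -p %s -o %s --extract"
            % (dest, platform, os_code))

commands = [build_command(code) for code in os_codes]
```

Each entry in `commands` could then be passed to the shell (e.g. via `subprocess`) from the CD’s tool directory.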

EDIT:

I actually found that the utility that extracts the drivers is written in Python, and Dell helpfully supply the source. The issue was traced back to a lack of error handling for a particular case, and a two-line fix sorted the problem. This is an excellent example of why open source is a good idea: if something doesn’t work, you can fix it.
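I won’t reproduce Dell’s code here, but the fix was of this general shape (all names below are hypothetical, purely to illustrate the class of bug): an unguarded lookup that raised an exception for one particular case, wrapped in two lines of error handling.

```python
# Hypothetical illustration: an unguarded lookup that blows up when an
# entry is missing, and the two-line guard that fixes it.
driver_index = {"per510": ["w2008_64", "w2008r2_64"]}

def list_supported_os(platform):
    try:
        return driver_index[platform]
    except KeyError:
        return []   # the two-line fix: handle the missing case gracefully

# → list_supported_os("per610") now returns [] instead of raising KeyError
```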


Trac on Windows

I recently set up a Trac server on Windows, here’s how I did it.

1. Prerequisites

I used a fresh Server 2003 image (not 2008, for licensing-related reasons).

On top of this the packages which need to be installed are:

Apache 2.2 (http://httpd.apache.org/download.cgi#apache22)
mod_auth_sspi (http://sourceforge.net/projects/mod-auth-sspi/)
mod_wsgi for Python 2.6 (http://code.google.com/p/modwsgi/wiki/DownloadTheSoftware?tm=2)
Python 2.6 (http://www.python.org/download/releases/2.6/)
setuptools 0.6c11 (http://pypi.python.org/pypi/setuptools)
Trac 0.12 (http://trac.edgewall.org/wiki/TracDownload)
Genshi 0.6 (http://genshi.edgewall.org/wiki/Download)

2. Installation

Most of these packages have Windows installers available, which simplifies matters. I installed in the order above, with the following additional steps:

To install mod_auth_sspi and mod_wsgi, the .so files were copied to:

C:\Program Files\Apache Software Foundation\Apache2.2\modules

The modules needed to be activated in the Apache configuration file:

LoadModule sspi_auth_module modules/mod_auth_sspi.so
LoadModule wsgi_module modules/mod_wsgi.so

3. Configuring Trac

I decided to keep my Trac installation in the directory C:\trac. Along with this, the only other directory which needs to be backed up is:

C:\Program Files\Apache Software Foundation\Apache2.2

I chose SQLite as the database backend for Trac, for simplicity. Unfortunately there isn’t yet a set of bindings between Trac and MS SQL, but the SQLite option is just as good for this purpose.

To initialise the Trac project the following commands need to be run:

C:\>mkdir C:\trac
C:\>cd C:\Python26\Scripts
C:\Python26\Scripts>trac-admin C:\trac initenv

At this point you can test the setup with:

C:\Python26\Scripts>tracd --port 8000 C:\trac

Try navigating to:

http://localhost:8000/

The main configuration file for Trac can be found at:

C:\trac\conf\trac.ini

(Or wherever you chose to put your trac instance).

One thing you’ll likely want to customise early on is the project-specific banner image; the configuration section to look for is:

[header_logo]
alt = Logo alt-text
height = 50
link = /
src = site/development.png
width = 400

Note that the path “site/development.png” actually appears to refer to:

C:\trac\htdocs\development.png

The “link” parameter is relative to your document root, e.g. in this case it’ll take us to the top level index of the Trac site.
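The same change can be scripted rather than edited by hand. A minimal sketch using the standard library’s config parser (section and values as above; the module is spelled `ConfigParser` on the Python 2.6 installed earlier, the Python 3 spelling is shown here; run it from your trac conf directory or adjust the path):

```python
# Minimal sketch: set the [header_logo] values in trac.ini programmatically.
import configparser

ini_path = "trac.ini"   # i.e. C:\trac\conf\trac.ini in this setup

config = configparser.ConfigParser()
config.read(ini_path)   # silently skipped if the file does not exist yet

if not config.has_section("header_logo"):
    config.add_section("header_logo")
config.set("header_logo", "src", "site/development.png")
config.set("header_logo", "width", "400")
config.set("header_logo", "height", "50")

with open(ini_path, "w") as f:
    config.write(f)
```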

4. Configuring Apache

It is possible to just use Trac with its own web server, but this is probably not a very efficient use of resources. The next steps detail getting Apache to serve the Trac site instead.

In order to use Trac with WSGI (and consequently with Apache) you need to deploy the Trac environment so that it can be served by Apache. This process creates various files under the C:\trac\htdocs and C:\trac\cgi-bin directories, including the file trac.wsgi, which plugs into the WSGI module (as configured in httpd.conf) so that Trac can be served this way. The command to deploy your Trac instance is:

C:\Python26\Scripts>trac-admin C:\trac deploy C:\trac

(You can change the last parameter if you want to put the deployed files somewhere else).
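For reference, the generated trac.wsgi is only a few lines long. Roughly the following (Trac 0.12; the real generated file imports Trac at module level and assigns `application = trac.web.main.dispatch_request` directly, whereas this sketch moves the import inside the function so it stands alone; paths as in this setup):

```python
# Sketch of what the deployed trac.wsgi does: point WSGI at the Trac
# environment, then hand every request to Trac's WSGI dispatcher.
import os

os.environ['TRAC_ENV'] = 'C:/trac'   # the Trac environment directory

def application(environ, start_response):
    # Delegate to Trac's dispatcher; imported lazily here so the sketch
    # can be read (and checked) without Trac installed.
    import trac.web.main
    return trac.web.main.dispatch_request(environ, start_response)
```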

We also need to modify our httpd.conf (Apache’s configuration file) in order to serve the Trac site. This file can typically be found here:

C:\Program Files\Apache Software Foundation\Apache2.2\conf\httpd.conf

After enabling the mod_wsgi and mod_auth_sspi modules (see above) we need to modify the configuration file to add:

DocumentRoot "C:/trac/htdocs"

<Directory "C:/trac/cgi-bin">
    WSGIApplicationGroup %{GLOBAL}
    Order deny,allow
    Allow from all
</Directory>

WSGIScriptAlias / "C:/trac/cgi-bin/trac.wsgi"

<Location "/">
    Options Indexes
    AllowOverride None
    Order allow,deny
    Allow from all

    AuthName "Trac Server"
    AuthType SSPI
    SSPIAuth On
    SSPIAuthoritative On
    SSPIOfferBasic Off
    SSPIUsernameCase lower

    require valid-user
</Location>
(Note that this should replace any existing DocumentRoot directive in the configuration file. Also note that if you are using Virtual Hosting with Apache then the configuration would likely be quite different, though it would include the same basic concepts.)

The first Directory section sets up access to the WSGI application (Trac), and then sets the WSGIScriptAlias for the root directory. Any requests for the root directory will thus be served by WSGI, and therefore by the Trac application.

The second section only comes into play if the sspi_auth_module (mod_auth_sspi) is loaded. It configures access permissions to use Domain authentication, which means your Windows domain users shouldn’t need to log in to Trac, or at least they can do so using their domain credentials. Note the “SSPIUsernameCase lower” directive: this converts the entire username string to lowercase before passing it to Trac as the authenticated user. Since Trac’s access controls are case-sensitive and Windows domain accounts are not, we have to ensure that users logging in with uppercase characters all get mapped to the same user in Trac.
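The effect of “SSPIUsernameCase lower” is easy to illustrate (usernames invented): however the user types their account name, Trac sees one canonical string.

```python
# Illustration: why lowercasing matters. All of these are the same Windows
# account, but case-sensitive Trac would treat them as three different users.
logins = ["DOMAIN\\User.Name", "domain\\User.Name", "domain\\user.name"]

normalised = set(login.lower() for login in logins)
# → a single canonical username: {'domain\\user.name'}
```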

The other SSPI directives indicate that SSPI should be enabled and authoritative (so we are not going to consult any other authentication methods below SSPI, e.g. Basic auth). Finally the “require valid-user” directive indicates that any domain-authenticated user should be able to access this resource. Since we will be defining fine-grained access controls at the Trac level this is sufficient.

Once the configuration file is saved you should be able to start (or restart) Apache and navigate to http://localhost/ – you will be prompted for your domain login credentials if needed and you should see “logged in as domain\user.name” at the top-right when accessing Trac.

5. Configure Trac access permissions

To set up user permissions I’d recommend using the administration functions built into Trac itself (and accessible via the website). In order to use these you need to define at least one administrator using the command-line utilities. The command to do this is:

C:\Python26\Scripts>trac-admin C:\trac permission add domain\user.name TRAC_ADMIN

You can confirm the user’s permissions with this command:

C:\Python26\Scripts>trac-admin C:\trac permission list domain\user.name

Remember, these usernames are case-sensitive, so ensure that you use all lowercase to match with the usernames as they will be passed in from Apache.

6. Conclusion

After doing this and logging into Trac as the admin user you defined, you should see a new Admin option in the Trac top bar, which will let you configure further user permissions.

At this point you should have a fully functioning Trac installation, and you can go about customising it as per the Trac documentation:

http://trac.edgewall.org/wiki/TracGuide

FireSheep

It’s nice to see issues of transaction security being brought to public attention; it makes me hope that people might take notice and start to improve things. Unfortunately in this case I suspect the “improvements” may be anything but.

FireSheep basically exploits the “html form + session” authentication model used by the vast majority of websites today. In this model a user authenticates themselves to the server (typically this authentication stage is secured using TLS) via a web form built into a website. This form is usually presented either on a login page, or in response to a user requesting a service they do not currently have access to. There is little or no standardisation to this process beyond the fact that it uses HTTP and TLS.

The part of this model that FireSheep actually exploits is the session component. In order for the server to know that a user is authenticated to access a particular service, a cached authentication token is left behind on the client side, in the form of a cookie, and sent alongside every further request. While the initial negotiation is encrypted using TLS in a lot of cases, e.g. for FaceBook, the subsequent requests are not, so the authentication token gets passed over the network in the clear.

While this token cannot be used to recover the user’s login credentials, it can be used in a replay attack to access any services the user is currently authenticated to use, e.g. to post on their wall on FaceBook.
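The attack itself involves no cryptography at all. A hypothetical sketch of the replay (cookie name, value and endpoint are all invented for illustration):

```python
# Hypothetical sketch of a session replay: the attacker attaches a cookie
# sniffed in cleartext off the network to a request of their own.
captured_cookie = "session_id=abc123"   # read straight off the wire

replayed_request = (
    "POST /wall/post HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Cookie: " + captured_cookie + "\r\n"
    "\r\n"
)
# The server cannot distinguish this from a request by the real user.
```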

The obvious way to fix this is to simply encrypt all traffic between the user and the website. Simple.

Or is it? For a site like FaceBook encrypting all authenticated session data would be a horrifying prospect because transport-layer crypto is quite expensive in terms of processing requirements. This is a big part of the reason why websites are not currently encrypted as a matter of course.

The user experience of encrypted websites is also usually poorer than for their non-encrypted counterparts. The encryption adds latency to the connection as the data is encrypted and decrypted. Worse still most web browsers do not cache TLS encrypted data, so images downloaded over secured connections must be downloaded over and over.

Fundamentally it is quite wasteful to encrypt everything when the only information that actually needs to be encrypted is that tiny authentication token sent along with every request.

A better way to solve the problem would be to improve and then use HTTP’s built-in authentication mechanism. The two main ones in use are Basic auth and Digest auth. The former sends passwords in cleartext and is only really useful over TLS-secured connections. The latter uses an MD5 digest with some cryptographic magic to achieve the same thing without sending plaintext passwords. Both are widely supported in web browsers, but neither is widely used.

The reasons for the lack of widespread adoption are twofold. One big problem is that there is no easy way for the server to “log out” a client: authentication credentials stored in the browser expire only when the browser is closed. The other is how this method of authentication is presented to the user. In most web browsers (and certainly all the major ones) the authentication appears as a modal dialog which cannot be integrated into pages in the way that forms can. This is actually a bigger issue than any technical limitation of HTTP auth: web designers want to be able to integrate their own take on how logging in should happen.

It’s my opinion that the problems raised by authentication token hijacking would be best solved by extending HTTP, rather than moving to further use of wasteful encryption: add a new HTTP authentication method which uses modern cryptographic primitives (e.g. a SHA-256 version of Digest auth) and a mechanism for the server to log out clients.

Such a system would need the following characteristics:
1. Allow authentication to be initiated by the client alongside a request, e.g. submission of a login form with special HTTP auth attributes.
2. Allow the server to prompt for authentication via custom 401 error page, e.g. a login page/form.
3. Protect against replay attacks by encrypting the auth tokens.
4. Provide a mechanism for the server to internally expire authentication tokens and thus request a new authentication process from the client (i.e. logout).
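Points 3 and 4 can be sketched concretely (this is my own illustrative design, not an existing protocol; all names are hypothetical): each login establishes a per-session key, every request carries an HMAC-SHA256 token over the request details, and the server logs a client out simply by discarding the key.

```python
# Hypothetical sketch of points 3 and 4: tokens are HMAC-SHA256 over the
# request (so a sniffed token cannot be replayed against a different
# request/nonce), and the server can log a client out by deleting its key.
import hashlib
import hmac

session_keys = {"alice": b"per-session-secret"}   # established at login

def sign_request(user, method, uri, nonce):
    key = session_keys[user]
    msg = ("%s:%s:%s" % (method, uri, nonce)).encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def logout(user):
    # Point 4: server-side expiry -- subsequent tokens no longer verify.
    session_keys.pop(user, None)
```

The server verifies a request by recomputing the same HMAC; after `logout("alice")` no token from that session verifies, which is exactly the capability cookies and classic HTTP auth both lack.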

The initial login could still use TLS, so the client can assess the authenticity of the server it is talking to (i.e. tell that you’re talking to the server you think you are). TLS would then only be required for that server-authentication role, or to provide transport-layer encryption when sending genuinely sensitive data (e.g. online banking). Potentially TLS could even be replaced in the server-authentication role by a DNSSEC-derived mechanism at some point in the future.

The other thing that would need to happen is to allow these credentials to be pre-emptively supplied in a web form, rather than being requested by the server via a 401 response. This would allow the same branded logins as we have today, but using a system built into HTTP. I am fairly sure that this could initially be achieved using a FireFox plugin as a proof of concept.

Given that the (in my opinion) misguided “HTTP Strict Transport Security” feature (which basically allows a server to flag to a client that it should always request pages from that server over TLS-secured connections) is already part of the latest beta releases of FireFox, such a system ought to have a fighting chance, especially since it should be a much more attractive option for big websites than the massive cost of full encryption.

This system (let’s dub it HTTP Advanced Auth) would provide a standardised login/authentication mechanism for all websites which doesn’t necessarily require transport-layer encryption. It would also do away with the need for tracking sessions using cookies and allow cookie-less login, so you could present stateless access to authenticated resources (which is already possible using the existing HTTP auth mechanisms).

The disadvantages would likely mostly be down to inertia in the use of existing technologies: it’s hard to change something like HTTP and get your changes widely adopted. It’s also hard to re-educate users who have been told for years that the “green address bar” means they are secure. I do wonder, though, whether the sheer amount of money involved in implementing the brute-force TLS-everything solution might make the bigger players on the net more open to supporting a better solution.