Wednesday 21 November 2007

ASP.NET 3.5 has arrived

It has been less than a year since I started working with ASP.NET. I began with 1.1 using VB.NET and have gradually been getting hold of 2.0 using C#. I haven't yet fully understood the technology or implemented a single site with it, and here comes the next version!

On November 19, 2007 Microsoft officially released ASP.NET version 3.5 and ushered developers into the .NET 3.5 era. Enhancements have been released across the platform, which means I have to be quick to embark on the new technology or be left behind.

As many of us would expect, the changes and features in the new version are additive. Nothing has been taken out of the 2.0 version, and nothing has been drastically modified. For people coming from ASP.NET 1.1 to 2.0 it was a complete shock: quite a few things were removed, many things were modified, and it was not just a matter of new things being added. But this time Microsoft has been considerate towards developers. In short, ASP.NET 3.5 doesn't take away, change, or break any functionality, concepts, or code present in 2.0 - it just adds new types, features, and capabilities to the framework.

As you might have guessed, Visual Studio 2008, the Microsoft IDE (Integrated Development Environment), is the recommended tool for developing these applications. Unlike previous flavours of Visual Studio, this one can be used to develop applications targeting .NET 2.0 and 3.0 as well as 3.5. So even if you are not planning to develop any new system or upgrade your code at the moment, it would be a wise decision to purchase Visual Studio 2008, as it supports your older code as well. It also includes some extra features such as an improved designer experience, JavaScript debugging (finally Microsoft got it into their heads!), and better IntelliSense; developers also have the ability to view and even step into the core .NET Framework code while debugging.

So in short, it should be really wonderful coding, and an even better experience debugging, applications on the new system. I have got hold of a beta version of Visual Studio 2008. I will be playing with it some time from now and will keep you all posted.



The Internal Structure of .pdb Files

Whenever you are faced with an unknown data format, the first thing to do is to run some instances of it through a hex dump viewer. The w2k_dump.exe utility does a good job in this respect. Examining the hex dump of a Windows 2000 PDB file like ntoskrnl.pdb or ntfs.pdb reveals some interesting properties:
  • The file seems to be divided into blocks of fixed size—typically 0x400 bytes.

  • Some blocks consist of long runs of 1-bits, occasionally interrupted by shorter sequences of 0-bits.

  • The information in the file is not necessarily contiguous. Sometimes, the data ends abruptly at a block boundary, but continues somewhere else in the file.

  • Some data blocks appear repeatedly within the file.

It took me some time until I finally realized that these are typical properties of a compound file. A compound file is a small file system packaged into a single file. The "file system" metaphor readily explains some of the above observations:

  • A file system subdivides a disk into sectors of fixed size, and groups the sectors into files of variable size. The sectors representing a file can be located anywhere on the disk and don't need to be contiguous—the file/sector assignments are defined in a file directory.

  • A compound file subdivides a raw disk file into pages of fixed size, and groups the pages into streams of variable size. The pages representing a file can be located anywhere in the raw disk file and don't need to be contiguous—the stream/page assignments are defined in a stream directory.

Obviously, almost any assertions about file systems can be mapped to compound files by simply replacing "sector" by "page", and "file" by "stream". The file system metaphor explains why a PDB file is organized in fixed-size blocks. It also explains why the blocks are not necessarily contiguous. What about the pages with the masses of 1-bits? Actually, this type of data is something very common in file systems. To keep track of used and unused sectors on the disk, many file systems maintain an allocation bit array that provides one bit for each sector (or sector cluster). If a sector is unused, its bit is set. Whenever the file system allocates space for a file, it searches for unused sectors by scanning the allocation bits. After adding a sector to a file, its allocation bit is set to zero. The same procedure is applied to the pages and streams of a compound file. The long runs of 1-bits represent unused pages, while the 0-bits are assigned to existing streams.
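To make this bookkeeping concrete, here is a tiny, self-contained illustration of reading such an allocation bit array. The bit ordering within a byte (least significant bit first) is an assumption made purely for the example, not something stated above.

using System;

// Tiny illustration of an allocation bit array: 1 = free, 0 = in use.
// The LSB-first bit ordering is an assumption for this example only.
class AllocationBitmapDemo
{
    static bool IsPageFree(byte[] bits, int page)
    {
        return (bits[page / 8] & (1 << (page % 8))) != 0;
    }

    static void Main()
    {
        byte[] bits = { 0x00, 0xFF };   // eight used pages followed by eight free pages
        for (int page = 0; page < 16; page++)
            Console.WriteLine("page {0,2}: {1}", page, IsPageFree(bits, page) ? "free" : "in use");
    }
}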

The only thing that is left now is the observation that some data blocks reoccur within a PDB file. The same thing happens with sectors on a disk. When a file in a file system is rewritten a couple of times, each write operation might use different sectors to store the data. Thus, it can happen that the disk contains free sectors with older duplicates of the file information. This doesn't constitute a problem for the file system. If the sector is marked free in the allocation bit array, it is unimportant what data it contains. As soon as the sector is reclaimed for another file, the data will be overwritten, anyway. Applying the file system metaphor once more to compound files, this means that the observed duplicate pages are usually left over from earlier versions of a stream that has been rewritten to different pages in the compound file. They can be safely ignored—all we have to care for are the pages that are referred to by the stream directory. The remaining unassigned pages should be regarded as garbage.

With the basic paradigm of PDB files being introduced now, we can step to the more interesting task of examining their basic building blocks. Listing 1 shows the layout of the PDB header. The PDB_HEADER starts with a lengthy signature that specifies the PDB version as a text string. The text is terminated with an end-of-file (EOF) character (ASCII code 0x1A) and supplemented with the magic number 0x0000474A, or "JG\0\0" if interpreted as a string. Maybe these are the initials of the designer of the PDB format. The embedded EOF character has the nice effect that an ignorant user can issue a command like type ntoskrnl.pdb in a console window without getting any garbage on the screen. The only thing that will be displayed is the message "Microsoft C/C++ program database 2.00\r\n". All Windows 2000 symbol files are shipped as PDB 2.00 files. Apparently, a PDB 1.00 format exists as well, but it seems to be structured quite differently.

Listing 1 The PDB File Header

#define PDB_SIGNATURE_200 \
"Microsoft C/C++ program database 2.00\r\n\x1AJG\0"

#define PDB_SIGNATURE_TEXT 40

// - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

typedef struct _PDB_SIGNATURE
{
    BYTE abSignature [PDB_SIGNATURE_TEXT+4]; // PDB_SIGNATURE_nnn
}
PDB_SIGNATURE, *PPDB_SIGNATURE, **PPPDB_SIGNATURE;

#define PDB_SIGNATURE_ sizeof (PDB_SIGNATURE)

// -----------------------------------------------------------------

#define PDB_STREAM_FREE -1

// - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

typedef struct _PDB_STREAM
{
    DWORD dStreamSize;   // in bytes, -1 = free stream
    PWORD pwStreamPages; // array of page numbers
}
PDB_STREAM, *PPDB_STREAM, **PPPDB_STREAM;

#define PDB_STREAM_ sizeof (PDB_STREAM)

// -----------------------------------------------------------------

#define PDB_PAGE_SIZE_1K 0x0400 // bytes per page
#define PDB_PAGE_SIZE_2K 0x0800
#define PDB_PAGE_SIZE_4K 0x1000

#define PDB_PAGE_SHIFT_1K 10 // log2 (PDB_PAGE_SIZE_*)
#define PDB_PAGE_SHIFT_2K 11
#define PDB_PAGE_SHIFT_4K 12

#define PDB_PAGE_COUNT_1K 0xFFFF // page number < PDB_PAGE_COUNT_*
#define PDB_PAGE_COUNT_2K 0xFFFF
#define PDB_PAGE_COUNT_4K 0x7FFF

// - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

typedef struct _PDB_HEADER
{
    PDB_SIGNATURE Signature;     // PDB_SIGNATURE_200
    DWORD         dPageSize;     // 0x0400, 0x0800, 0x1000
    WORD          wStartPage;    // 0x0009, 0x0005, 0x0002
    WORD          wFilePages;    // file size / dPageSize
    PDB_STREAM    RootStream;    // stream directory
    WORD          awRootPages []; // pages containing PDB_ROOT
}
PDB_HEADER, *PPDB_HEADER, **PPPDB_HEADER;

#define PDB_HEADER_ sizeof (PDB_HEADER)

Following the signature at offset 0x2C is a DWORD named dPageSize that specifies the size of the compound file pages in bytes. Legal values are 0x0400 (1KB), 0x0800 (2KB), and 0x1000 (4KB). The wFilePages member reflects the total number of pages used by the PDB file image. Multiplying this value by the page size should always exactly match the file size in bytes. wStartPage is a zero-based page number that points to the first data page. The byte offset of this page can be computed by multiplying the page number by the page size. Typical values are 9 for 1KB pages (byte offset 0x2400), 5 for 2KB pages (byte offset 0x2800), or 2 for 4KB pages (byte offset 0x2000). The pages between the PDB_HEADER and the first data page are reserved for the allocation bit array of the compound file, always starting at the beginning of the second page. This means that the PDB file maintains 0x2000 bytes with 0x10000 allocation bits if the page size is 1 or 2KB, and 0x1000 bytes with 0x8000 allocation bits if the page size is 4KB. In turn, this implies that the maximum amount of data a PDB file can manage is 64MB in 1KB page mode, and 128MB in 2KB or 4KB page mode.
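As a quick experiment, the header fields described above can be dumped with a few lines of C#. This is only a minimal sketch: it assumes the layout from Listing 1 as it sits on disk (a 44-byte signature, 32-bit DWORDs, 16-bit WORDs, and a 4-byte placeholder where the pwStreamPages pointer lives), so treat the offsets as an illustration rather than a specification.

using System;
using System.IO;
using System.Text;

// Minimal sketch: dump the PDB 2.00 header fields described above.
// Usage: PdbHeaderDump <file.pdb>
class PdbHeaderDump
{
    static void Main(string[] args)
    {
        using (BinaryReader reader = new BinaryReader(File.OpenRead(args[0])))
        {
            byte[] signature = reader.ReadBytes(44);  // PDB_SIGNATURE: text + "JG\0\0"
            uint pageSize    = reader.ReadUInt32();   // dPageSize: 0x0400, 0x0800, or 0x1000
            ushort startPage = reader.ReadUInt16();   // wStartPage: first data page
            ushort filePages = reader.ReadUInt16();   // wFilePages: file size / dPageSize
            uint rootSize    = reader.ReadUInt32();   // RootStream.dStreamSize, in bytes
            reader.ReadUInt32();                      // RootStream.pwStreamPages (meaningless on disk)

            uint rootPages = (rootSize + pageSize - 1) / pageSize;

            Console.WriteLine("Signature : {0}", Encoding.ASCII.GetString(signature, 0, 37));
            Console.WriteLine("Page size : 0x{0:X}", pageSize);
            Console.WriteLine("Data start: page {0} (byte offset 0x{1:X})", startPage, startPage * pageSize);
            Console.WriteLine("File pages: {0} (expected file size {1} bytes)", filePages, (long)filePages * pageSize);
            Console.WriteLine("Root dir  : {0} bytes in {1} page(s); page list follows at offset 0x3C", rootSize, rootPages);
        }
    }
}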

The RootStream and awRootPages[] members concluding the PDB_HEADER describe the location of the stream directory within the PDB file. As already noted, the PDB file is conceptually a collection of variable-length streams that carry the actual data. The locations and compositions of the streams are managed in a single stream directory. Funny as it might seem, the stream directory itself is stored in a stream. I have called this very special stream the "root stream". The root stream holding the stream directory can be located anywhere in the PDB file. Its location and size are supplied by the RootStream and awRootPages[] members of the PDB_HEADER. The dStreamSize member of the PDB_STREAM substructure specifies the size of the stream directory in bytes, and the entries in the awRootPages[] array point to the pages containing its data.

The stream directory is composed of two sections: A header part in the form of a PDB_ROOT structure, as defined in Listing 2, and a data part consisting of an array of 16-bit page numbers. The wCount member of the PDB_ROOT section specifies the number of streams stored in the PDB compound file. The aStreams[] array contains a PDB_STREAM entry (see Listing 1) for each stream, and the page number slots follow immediately after the last aStreams[] entry.

Listing 2 The PDB Stream Directory

#define PDB_STREAM_DIRECTORY 0
#define PDB_STREAM_PDB 1
#define PDB_STREAM_PUBSYM 7

// - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

typedef struct _PDB_ROOT
{
    WORD       wCount;      // < PDB_STREAM_MAX
    WORD       wReserved;   // 0
    PDB_STREAM aStreams []; // stream #0 reserved for stream table
}
PDB_ROOT, *PPDB_ROOT, **PPPDB_ROOT;

#define PDB_ROOT_ sizeof (PDB_ROOT)

Finding the page number block associated with a given stream is somewhat tricky because the page directory doesn't provide any cues except the stream size. If you are interested in stream #3, you have to compute the number of pages occupied by streams #1 and #2 to get the desired start index within the page number array. Once the stream's page number list is located, reading the stream data is simple. Just walk through the list, multiply each page number by the page size to yield the file offset, and read pages from the computed offsets until the end of the stream is reached. Isn't it funny? At first sight, parsing a PDB file seemed rather tough. Now it turns out that it is actually quite simple—probably much simpler than parsing a .dbg file. The compound-file nature of the PDB format, with its clear-cut random access to stream pages, reduces the task of reading a stream to a mere concatenation of fixed-size pages. I'm really amazed at this elegant data access mechanism!
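To illustrate the algorithm, here is a rough C# sketch of extracting one stream. It assumes the structures from Listings 1 and 2 as they appear on disk (4-byte placeholders for the PWORD pointers, the awRootPages[] list starting at offset 0x3C, and free streams marked with a size of -1 contributing no pages), so it is a best-effort illustration rather than a reference implementation.

using System;
using System.IO;

// Rough sketch: read stream number args[1] from the PDB file args[0]
// and write its contents to stream.bin, following the steps described above.
class PdbStreamReader
{
    static void Main(string[] args)
    {
        byte[] file = File.ReadAllBytes(args[0]);
        int wantedStream = int.Parse(args[1]);

        uint pageSize = BitConverter.ToUInt32(file, 0x2C);      // PDB_HEADER.dPageSize
        uint rootSize = BitConverter.ToUInt32(file, 0x34);      // RootStream.dStreamSize
        int rootPageCount = (int)((rootSize + pageSize - 1) / pageSize);

        // Concatenate the pages listed in awRootPages[] (assumed to start at
        // offset 0x3C) to obtain the stream directory: PDB_ROOT + page numbers.
        byte[] root = new byte[rootPageCount * pageSize];
        for (int i = 0; i < rootPageCount; i++)
        {
            ushort page = BitConverter.ToUInt16(file, 0x3C + 2 * i);
            Array.Copy(file, page * pageSize, root, i * pageSize, pageSize);
        }

        ushort streamCount = BitConverter.ToUInt16(root, 0);    // PDB_ROOT.wCount
        int pageNumberBase = 4 + 8 * streamCount;               // page numbers follow the aStreams[] entries

        // Skip the page lists of all streams preceding the one we want.
        int index = 0;
        for (int s = 0; s < wantedStream; s++)
        {
            uint size = BitConverter.ToUInt32(root, 4 + 8 * s); // PDB_STREAM.dStreamSize
            if (size != 0xFFFFFFFF)                             // -1 marks a free stream with no pages
                index += (int)((size + pageSize - 1) / pageSize);
        }

        uint streamSize = BitConverter.ToUInt32(root, 4 + 8 * wantedStream);
        int streamPages = (int)((streamSize + pageSize - 1) / pageSize);

        // Concatenate the stream's pages and truncate to the exact byte size.
        byte[] data = new byte[streamPages * pageSize];
        for (int i = 0; i < streamPages; i++)
        {
            ushort page = BitConverter.ToUInt16(root, pageNumberBase + 2 * (index + i));
            Array.Copy(file, page * pageSize, data, i * pageSize, pageSize);
        }
        Array.Resize(ref data, (int)streamSize);

        File.WriteAllBytes("stream.bin", data);
        Console.WriteLine("Stream #{0}: {1} bytes in {2} page(s)", wantedStream, streamSize, streamPages);
    }
}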

An even greater benefit of the PDB format becomes apparent when it comes to updating an existing PDB file. Inserting data into a file with a sequential structure usually means reshuffling large portions of the contents. The PDB file's random-access structure, borrowed from file systems, allows addition and deletion of data with minimal effort, just as files can be modified with ease on file system media. Only the stream directory has to be reshuffled at times when a stream grows or shrinks across a page boundary. This important property facilitates incremental updating of PDB files. As Microsoft puts it in a Knowledge Base article titled "INFO: PDB and DBG Files—What They Are and How They Work":

"The .PDB extension stands for 'program database.' It holds the new format for storing debugging information that was introduced in Visual C++ version 1.0. In the future, the .PDB file will also hold other project state information. One of the most important motivations for the change in format was to allow incremental linking of debug versions of programs, a change first introduced in Visual C++ version 2.0." (Microsoft Knowledge Base, article Q121366)

Now that the internal format of PDB files is clear, the next problem is to identify the contents of their streams. After examining various PDB files, I have come to the conclusion that each stream number serves a predefined purpose. For example, the first stream seems to always contain a stream directory, and the second one contains information about the PDB file that can be used to verify that the file matches an associated .dbg file. For example, the latter stream contains dSignature and dAge members that should have the same values as the corresponding members of an NB10 CodeView section. The eighth stream is most interesting in the context of this chapter because it hosts the CodeView symbol information we have been searching for. The meaning of the other streams is still unclear to me and constitutes another vast area for future research.

I am not going to include PDB reader sample code here because this would exceed the scope of this article without being particularly interesting. You already know the program—it is the w2k_dump.exe utility that I have used to create some of the hex dump examples above. This simple console-mode utility provides a +p command line option that enables PDB stream decomposition. If the specified file is not a valid PDB file, the program falls back to sequential hex dump mode.




Tuesday 13 November 2007

Useful Tool for browsing class definitions

Reflector is a class browser for .NET components. It is difficult, if not impossible, to scan the Microsoft documentation for a class definition when you want to use a particular function but do not know its parent class. Of course, Google is the ultimate key, but sometimes even Google lets you down by returning multiple options for the same query.

Having a class browser in such cases is very handy. Guess what: it comes from someone directly involved in developing Microsoft projects, so it can be trusted for at least the basic definitions. Okay, jokes apart, the following is the link to the page where you can download the product.
Lutz Roeder
And while you are on the page, do also browse through the plug-ins and add-ons for Reflector; they are cool utilities too!



Monday 12 November 2007

C# ?? Operator

The ?? operator, also termed the null coalescing operator, is one of the cool new features in .NET 2.0. This operator returns the left-hand operand if it is not null; otherwise it returns the right operand.

So we can use the operator in a statement like

int y = x ?? -1;

so if the value of x is not null, then y is assigned the value of x; if x is null, then y is assigned the value -1. (Note that for this to compile, x must be declared as a nullable int, i.e. int?.) The point worth noticing is that the value returned when x is null is -1, so the rest of the code can tell that no real value was available.

I am not aware of a mission-critical requirement for this operator, but there are plenty of cases where a supposedly non-nullable data field arrives as null. For example, if you are displaying a table with user ages, it is quite possible that a user has not entered a date of birth. It can be useful to use this operator to display a sensible default in such cases, instead of showing a "0" or a random number in the report field.
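Here is a small, runnable sketch of the idea. The variable names and the "(none)" / sentinel fallbacks are made up for the example; the point is simply how ?? behaves with nullable value types and with reference types.

using System;

// Demonstrates the ?? (null coalescing) operator with nullable and reference types.
class NullCoalescingDemo
{
    static void Main()
    {
        int? x = null;
        int y = x ?? -1;                          // x is null, so y becomes -1
        Console.WriteLine(y);

        string middleName = null;
        string display = middleName ?? "(none)";  // reference types work directly
        Console.WriteLine(display);

        DateTime? dateOfBirth = null;             // e.g. the user never entered it
        DateTime reportValue = dateOfBirth ?? DateTime.MinValue;  // sentinel instead of a random value
        Console.WriteLine(reportValue.ToShortDateString());
    }
}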



Monday 5 November 2007

Validating Email IDs in a user form

Validation is a key step in user registration. Though not intentionally, many users enter an incorrect email address in a form. Many users also deliberately enter a fictitious email address to avoid spam from a public domain forum. Either way it is a nightmare for webmasters, who end up collecting a huge pile of useless user data.
To avoid this, many websites send out a test email containing a random link that the user has to click in order to activate their account. But not all public domain email servers are fast enough to send or receive the validation email, and that can mean a lost visitor for the website.
So what should be done to optimise the process? I came across a three-step email validation process today which does seem to promise better validation. I have not tested it completely myself, but I would still recommend at least a visit, and a try if you need it.

Click here (http://www.codeproject.com/aspnet/Valid_Email_Addresses.asp) to visit Vasudevan's blog for further details.
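Whatever validation chain you choose, the cheapest first step is a simple format check before any confirmation mail is sent. The sketch below is only an illustration of that step; the regular expression is deliberately simple and not a complete RFC-compliant pattern.

using System;
using System.Text.RegularExpressions;

// Quick format check before bothering to send a confirmation email.
class EmailSyntaxCheck
{
    static readonly Regex Pattern = new Regex(@"^[^@\s]+@[^@\s]+\.[^@\s]+$", RegexOptions.Compiled);

    static bool LooksLikeEmail(string address)
    {
        return !string.IsNullOrEmpty(address) && Pattern.IsMatch(address);
    }

    static void Main()
    {
        Console.WriteLine(LooksLikeEmail("someone@example.com"));  // True
        Console.WriteLine(LooksLikeEmail("not-an-email"));         // False
    }
}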



Wednesday 31 October 2007

How to prevent browser and proxy caching of web pages

I ran into the issue that if the user presses the BACK button, the page does not refresh. So, to attempt to resolve the issue, I placed this meta tag in the header: <meta http-equiv="CACHE-CONTROL" content="NO-CACHE">. This fixed the issue for IE and other browsers, but Firefox gave me a particular problem: it interprets this tag differently from the other browsers and does not refresh the page. After a bit of digging, I discovered the following code (place it in the OnInit block, preferably):


// Note: this is new code as of 8-31-06
Response.ClearHeaders();
Response.AppendHeader("Cache-Control", "no-cache");        // HTTP 1.1
Response.AppendHeader("Cache-Control", "private");         // HTTP 1.1
Response.AppendHeader("Cache-Control", "no-store");        // HTTP 1.1
Response.AppendHeader("Cache-Control", "must-revalidate"); // HTTP 1.1
Response.AppendHeader("Cache-Control", "max-stale=0");     // HTTP 1.1
Response.AppendHeader("Cache-Control", "post-check=0");    // IE-specific extension
Response.AppendHeader("Cache-Control", "pre-check=0");     // IE-specific extension
Response.AppendHeader("Pragma", "no-cache");               // HTTP 1.0
Response.AppendHeader("Keep-Alive", "timeout=3, max=993");
Response.AppendHeader("Expires", "Mon, 26 Jul 1997 05:00:00 GMT"); // HTTP 1.0, a date in the past

This forces all browsers to grab fresh copies of the pages when the user presses the BACK or FORWARD button - a little heavy-handed, perhaps, but it solves the refresh problem.

Parse delimited string in a Stored procedure

Sometimes we need to pass an array to a stored procedure and split the array inside the stored proc. For example, let's say there is a datagrid displaying sales orders, each sales order associated with an orderid (the primary key in the Sales table). If the user needs to delete a bunch of sales orders (say 10-15 of them), it is easier to concatenate all the orderids into one string like 10-24-23-34-56-57-... and pass it to the SQL Server stored proc, where the string is split into individual ids and each sales order is deleted.
There can be plenty of other situations where passing a delimited string to the stored proc is faster than making n round trips to the server.

CREATE PROCEDURE ParseArray (@Array VARCHAR(1000), @Separator CHAR(1))
AS
BEGIN
SET NOCOUNT ON
-- @Array is the array we wish to parse
-- @Separator is the separator character, such as a comma
DECLARE @separator_position INT -- used to locate each separator character
DECLARE @array_value VARCHAR(1000) -- holds each array value as it is returned
-- For the loop to work we need an extra separator at the end; we always look to the
-- left of the separator character for each array value
SET @Array = @Array + @Separator
-- Loop through the string searching for separator characters
WHILE PATINDEX('%' + @Separator + '%', @Array) <> 0
BEGIN
-- PATINDEX matches a pattern against a string
SELECT @separator_position = PATINDEX('%' + @Separator + '%', @Array)
SELECT @array_value = LEFT(@Array, @separator_position - 1)
-- This is where you process the values passed;
-- replace this SELECT statement with your own processing.
-- @array_value holds the value of this element of the array
SELECT Array_Value = @array_value
-- This replaces what we just processed with an empty string
SELECT @Array = STUFF(@Array, 1, @separator_position, '')
END
SET NOCOUNT OFF
END
GO


Credit for the above code goes to Mr Dinakar Nethi, on his blog:
http://weblogs.sqlteam.com/dinakar/archive/2007/03/28/60150.aspx
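For completeness, here is a hedged sketch of the calling side: concatenating the selected order ids into one delimited string and making a single round trip. The DeleteOrders procedure name and the connection string are placeholders; you would point it at a procedure built along the lines of ParseArray above.

using System;
using System.Data;
using System.Data.SqlClient;

// Build the delimited id string and pass it to the stored procedure in one call.
class DelimitedArrayCaller
{
    static void DeleteOrders(int[] orderIds, string connectionString)
    {
        string idList = string.Join("-", Array.ConvertAll(orderIds, id => id.ToString()));

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand("DeleteOrders", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.Add("@Array", SqlDbType.VarChar, 1000).Value = idList;
            command.Parameters.Add("@Separator", SqlDbType.Char, 1).Value = "-";

            connection.Open();
            command.ExecuteNonQuery();   // one round trip instead of one per order id
        }
    }
}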

Thursday 25 October 2007

What is SQL injection attack?

"SQL Injection" is subset of the unverified/unsanitized user input vulnerability ("buffer overflows" are a different subset), and the idea is to convince the application to run SQL code that was not intended. If the application is creating SQL strings naively on the fly and then running them, it's straightforward to create some real surprises.

There have been instances of tables being dropped, or lists of users and their passwords being displayed, from the database through applications like this.

A simple example of such an attack can turn a forgotten-password page into a deadly display-all-records page. How? Read ahead!

Such sites usually generate SQL statements on the fly, so they will build something like:

SELECT fieldlist
FROM table
WHERE field = 'Textbox value';

Here the string that the user enters into the textbox on the page is substituted into the query. So for a page that retrieves the password, the SQL statement would be built as:

SELECT password
FROM tblLogin
WHERE userid = 'txtUserID.Text.ToString()';

So if we enter abc into the text box, it is passed on to the server as

SELECT password
FROM tblLogin
WHERE userid = 'abc';

Now if we want to gain unauthorised access to the table all we have to do is enter some malicious code into the text box. Watch it now!

If we enter anything' OR 'x' = 'x what happens? The resultant query will be:

SELECT password
FROM tblLogin
WHERE userid = 'anything' OR 'x' = 'x'

Did you just say wow? So next time you are designing a form, keep this in mind and make sure the text entered in the text boxes cannot alter your built-in SQL query. You have been warned!
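The post doesn't spell out a fix, so treat this as an added suggestion: the usual defence is to never concatenate the text box value into the SQL string, and to pass it as a parameter instead. A minimal sketch, reusing the tblLogin table and userid column from the example above (the connection string is a placeholder):

using System.Data;
using System.Data.SqlClient;

// Parameterised version of the password lookup: the user input is sent as data,
// so a value like "anything' OR 'x' = 'x" stays a literal string, not SQL.
class SafePasswordLookup
{
    static string GetPassword(string userId, string connectionString)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(
            "SELECT password FROM tblLogin WHERE userid = @userid", connection))
        {
            command.Parameters.Add("@userid", SqlDbType.VarChar, 50).Value = userId;
            connection.Open();
            return command.ExecuteScalar() as string;
        }
    }
}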

Wednesday 3 October 2007

Could not load type '****' from assembly '****, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null'

I was on a maintenance call this morning, and there was a minor problem with a certain page. I won't go into what the problem was, as it is out of context here. After I was done with the bug, I swiftly published the project and uploaded it to the live server. Because I had not made any drastic changes to the project, out of habit I just rolled the web.config file back to its original settings, to avoid having to type the database connection string in again.

I made sure everything was working and in place and uploaded the project to the live server. Bingo! Everything was fine on the site, and even with extended testing of the links and the database connection it worked fine. The bug was resolved, and it was time to hit the next job!

Then towards the end of the day, I received a couple of mails saying that the web-parts section of the page was playing up. I clicked the link on that page and it gave me this error:

Could not load type '****' from assembly '****, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null'

I could not figure out what exactly the problem was, as the version on the development server was still working smoothly. There was no other problem with the site apart from the web parts. So what went wrong?

I tried Google on the same topic, but to my amazement, although many people out there had faced the same problem many times, there was not a single solution that explained the cause of the error. The closest I got was at www.dotnet247.com, where the author said there was some issue with an app.config file which he had tried to reuse in another project. But I was not trying to reuse any file in a different project. It was essentially the same project, with a couple of HTML and VBScript lines added to a page.

So in principle I was supposed to be able to update the project straight away. My colleague and I set off on a journey to discover what was going wrong. I started trying all sorts of permutations of the settings. After a while I had a thought: why not change the web.config file in the local copy and then publish the entire project to the live server, instead of replacing the web.config file afterwards?

I did that, and miracles do happen! The project started working at its best again. But the best thing was that instead of stopping after the problem was solved, I kept trying all the possible changes, just to make sure it was no fluke, and indeed it wasn't one.

So I figured out that even when there are only minor changes to the site, do not recycle any files straight out of the pile. The second question that has come to my mind now is whether I can just update a single file from the published output. I will get back as soon as I have enough to write about that; till then, happy programming!

Wednesday 12 September 2007

Stored procedures or inline SQL

The question is to be or not to be?

Right from my university days I have been taught that stored procedures are the best thing that can happen to a web developer. Why stored procedures are advocated to students who are just entering the field is unknown to me, and I have never tried to explore the reasons. The following are the reasons usually cited for using stored procedures, but first let us define what a stored procedure is.

Stored procedures are a very useful feature of SQL Server. A stored procedure is a group of SQL statements that are compiled together and can be executed by calling a single command.

Uses for Stored Procedures:

1) Modular programming
2) Fast Execution
3) Network Traffic
4) Security

It sounds exciting when you read the book, but in practice the picture is not as good as it is depicted. I know of several projects that are developed with the intention of reusing the objects and/or modules later in another project. This is exactly why developers adopted object-oriented programming, and this is the reason why .NET has gained so much popularity.

Moreover, if the project is developed using an n-tier architecture, then it is more advisable to completely separate the database from the rest of the modules. On the last project I have been wondering what is more feasible: writing stored procedures or using the DAL (Data Access Layer) in the .NET architecture? Well, technically both are the same thing, the former being stored in the data layer, which passes the data to the Business Logic Layer, and the latter being stored in the Business Logic Layer, which accesses the data in the data layer.
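To make that point concrete, here is a hedged sketch of what the two options look like from inside a DAL class. The names (OrderDataAccess, GetOrderCount, usp_GetOrderCount, the Orders table) are invented for the illustration; the business layer calls either method in exactly the same way and never sees which approach was used.

using System.Data;
using System.Data.SqlClient;

public class OrderDataAccess
{
    private readonly string _connectionString;

    public OrderDataAccess(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Variant 1: inline (parameterised) SQL kept inside the DAL.
    public int GetOrderCountInline(int customerId)
    {
        using (SqlConnection connection = new SqlConnection(_connectionString))
        using (SqlCommand command = new SqlCommand(
            "SELECT COUNT(*) FROM Orders WHERE CustomerId = @id", connection))
        {
            command.Parameters.Add("@id", SqlDbType.Int).Value = customerId;
            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }

    // Variant 2: the same query wrapped in a stored procedure on the data tier.
    public int GetOrderCountProc(int customerId)
    {
        using (SqlConnection connection = new SqlConnection(_connectionString))
        using (SqlCommand command = new SqlCommand("usp_GetOrderCount", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.Add("@CustomerId", SqlDbType.Int).Value = customerId;
            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }
}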

Considering security, both are equally secure, because the system does not allow processes outside the system to access the functions in the DAL. So what makes stored procedures always the sought-after option? I have been reading a couple of blogs recently on this very issue, and have found that the programming world is divided into two camps: those who recommend SPs and those who find them obsolete. There are also many developers, just like me, who are yet to make up their minds on whether to go for them or whether it is time to ditch them completely.

So far I have come across so many blogs advocating both approaches that it is impossible for me to take a stand. But what I have learnt is that both camps are correct in their ideologies. It all depends on the project and the approach a developer wants to implement!

For developers like me, who wish to implement the same module over and over again without worrying about the database technology underneath, I recommend going for the DAL. If you are worried about performance, leave it till the end and develop the whole project using the DAL; you can tweak the bits and bobs here and there later on to enhance performance. If the purpose of the website is reporting, then you don't need extra security anyway, as you have the built-in user profiling and access control. If your website is not a reporting tool, then you don't necessarily have a performance issue, as database connections are not going to be that frequent. So in either case, the compulsion to use stored procedures is hard to justify.

I know there are issues related to using inline SQL commands, but using stored procedures is not the only way to deal with them. If you would like a more detailed read, you can visit Frans Bouma's blog.

Monday 3 September 2007

Today was another learning experience. We are three developers in all, usually working on the same files and repeatedly checking them in and out of SourceSafe, but at times we all end up saving the files at the same time. This is how we accidentally commented out the email addresses that a particular form needed to be sent to.

As things happen, we only realised it when an issue was raised that people were not receiving a confirmation email after submitting the forms. Lesson learnt!

We have now devised two different methods to bypass this bug. We keep all the email addresses in the web.config file, so there is no more accidental messing with the addresses in the database or code files. We also check whether the form is running on the live server, so that we no longer need to comment out the send code just to keep annoyed managers from having inboxes full of test emails.

The second method is to use preprocessor directives, which are disabled on the development server. Until today I had not actually felt the need to use preprocessor directives (except in C++ classes at uni, and that was to score higher!). The only drawback of this system is having to remember when to turn the declaration off and check it back in; this is the only reason we went for the first solution.

We have two different servers for live and development, and hence check the URL or IP address of the server. If the server isn't live, the mails are not sent. We also have the preprocessor directives in place, so whenever we want to test the live server pages, we just edit the directives and prevent the emails from being sent.
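Roughly, the combination looks like the sketch below. The appSettings keys (NotificationRecipients, LiveHostName), the placeholder sender address, and the host-name comparison are all made up for the illustration; the real check and SMTP settings live in our web.config.

using System;
using System.Configuration;
using System.Net.Mail;

// Sketch of both safeguards: addresses and the live host name come from
// web.config, and debug builds never reach the send code at all.
public static class FormMailer
{
    public static void SendConfirmation(string body, string requestHost)
    {
        // <appSettings>
        //   <add key="NotificationRecipients" value="a@example.com;b@example.com" />
        //   <add key="LiveHostName" value="www.example.com" />
        // </appSettings>
        string recipients = ConfigurationManager.AppSettings["NotificationRecipients"];
        string liveHost = ConfigurationManager.AppSettings["LiveHostName"];

#if !DEBUG
        // Only send when the request really came from the live server.
        if (string.Equals(requestHost, liveHost, StringComparison.OrdinalIgnoreCase))
        {
            MailMessage message = new MailMessage();
            message.From = new MailAddress("noreply@example.com");   // placeholder sender
            foreach (string address in recipients.Split(';'))
                message.To.Add(address.Trim());
            message.Subject = "Form submission confirmation";
            message.Body = body;
            new SmtpClient().Send(message);   // SMTP host taken from web.config mailSettings
        }
#endif
    }
}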

Simple solutions, but I had never thought of them that way!

BC30451: Name is not declared build error, for a label and radio button which are both declared.

This is a peculiar problem which is not really a problem, but a price a developer has to pay for an object-oriented programming structure. Because there may be another page in the system that defines a similar hierarchy of elements and uses the same names, the .NET framework gets sort of confused about which ones to load and what priority each one has.

In my case it happened because I used to copy a page and name it xyzTest.aspx instead of xyz.aspx, as a workaround so as not to mess up the working code, and so that I could return to the original settings without worrying about which version to roll back to from SourceSafe. This technique works fine, but when I tried to publish the project it gave this error. Apart from publishing, no error or warning gets fired off, but strangely the publish does not succeed. After reporting this to a couple of forums, and some trial and error, I have found a workaround that works for me.

I am still not sure this was the real reason for the error, so anyone who has an alternative explanation is welcome to post a response.

CSS is case sensitive

Never thought Cascading Style Sheets would be case sensitive. After a long time coding in VB and ASP using Visual Studio 2005, I had become accustomed to coding in whatever case I wanted and waiting for VS2005 to take care of it. Lately I had been having a problem with an image button displaying incorrectly. The problem was further aggravated when there were two similar buttons, one showing the background image and the other one completely blank. After about an hour of exercise with the stylesheet, I found out that I was writing one class name as Jobs_Feature and the other as Jobs_feature. Bingo! Probably a lesson learnt the hard way, but then I prefer to learn it this way rather than keep on reading books and trying to memorise the facts and features of the stuff.

Further research shows that if the doctype declaration at the start of the HTML page does not declare an XML-based document type, CSS does not mind a different case in the class name; but the moment we declare the XML doctype, it turns out to be case sensitive. So this property of CSS can be attributed to XML: if you remember, the XML standard requires the case of tag names to always match.

Changing style of Drop down list.

I have been trying to change the border setting of a multiple-selection drop-down list in ASP.NET. After a bit of testing, I realised that not all of the style sheet rules were being rendered for the drop-down list when viewed in Internet Explorer. Mozilla Firefox was good enough, and even Safari helped me out, but it was IE that did not budge! Googling it gave me the following explanation of why IE did not work as expected.

"For historical reasons, some form-elements are elements from the operating system’s GUI. Rendering of these elements differs from browser to browser. Most of the browser, take into consideration the style sheet information about the element, and if not present renders the default properties from the OS’s GUI. For IE unfortunately these elements are <select>, <checkbox> and <radio>."