Search Results

Search found 9106 results on 365 pages for 'course'.


  • checking virtual sub domains

    - by Persian.
    I created a project that checks the subdomain and redirects to the existing subdomain (username), but I can't figure out why, when the username is in the database, it still won't show the page. On my local system it works fine, but when I upload it to the server it doesn't work (and yes, for the test I uncommented the commented lines). It shows this error: Object reference not set to an instance of an object. This is my code in Page_Load:

        //Uri MyUrl = new Uri(Request.Url.ToString());
        //string Url = MyUrl.Host.ToString();
        Uri MyUrl = new Uri("http://Subdomain.Mydomain.com/");
        string Url = MyUrl.Host.ToString();
        string St1 = Url.Split('.')[0];
        if ((St1.ToLower() == "Mydomain") || (St1.ToLower() == "Mydomain")) {
            Response.Redirect("Intro.aspx");
        } else if (St1.ToLower() == "www") {
            string St2 = Url.Split('.')[1];
            if ((St2.ToLower() == "Mydomain") || (St2.ToLower() == "Mydomain")) {
                Response.Redirect("Intro.aspx");
            } else {
                object Blogger = ClsPublic.GetBlogger(St2);
                if (Blogger != null) {
                    lblBloger.Text = Blogger.ToString();
                    if (Request.QueryString["id"] != null) {
                        GvImage.DataSourceID = "SqlDataSourceImageId";
                        GvComments.DataSourceID = "SqlDataSourceCommentsId";
                        this.BindItemsList();
                        GetSubComments();
                    } else {
                        SqlConnection scn = new SqlConnection(ClsPublic.GetConnectionString());
                        SqlCommand scm = new SqlCommand("SELECT TOP (1) fId FROM tblImages WHERE (fxAccepted = 1) AND (fBloging = 1) AND (fxSender = @fxSender) ORDER BY fId DESC", scn);
                        scm.Parameters.AddWithValue("@fxSender", lblBloger.Text);
                        scn.Open();
                        lblLastNo.Text = scm.ExecuteScalar().ToString();
                        scn.Close();
                        GvImage.DataSourceID = "SqlDataSourceLastImage";
                        GvComments.DataSourceID = "SqlDataSourceCommentsWId";
                        this.BindItemsList();
                        GetSubComments();
                    }
                    if (Session["User"] != null) {
                        MultiViewCommenting.ActiveViewIndex = 0;
                    } else {
                        MultiViewCommenting.ActiveViewIndex = 1;
                    }
                } else {
                    Response.Redirect("Intro.aspx");
                }
            }
        } else {
            object Blogger = ClsPublic.GetBlogger(St1);
            if (Blogger != null) {
                lblBloger.Text = Blogger.ToString();
                if (Request.QueryString["id"] != null) {
                    GvImage.DataSourceID = "SqlDataSourceImageId";
                    GvComments.DataSourceID = "SqlDataSourceCommentsId";
                    this.BindItemsList();
                    GetSubComments();
                } else {
                    SqlConnection scn = new SqlConnection(ClsPublic.GetConnectionString());
                    SqlCommand scm = new SqlCommand("SELECT TOP (1) fId FROM tblImages WHERE (fxAccepted = 1) AND (fBloging = 1) AND (fxSender = @fxSender) ORDER BY fId DESC", scn);
                    scm.Parameters.AddWithValue("@fxSender", lblBloger.Text);
                    scn.Open();
                    lblLastNo.Text = scm.ExecuteScalar().ToString();
                    scn.Close();
                    GvImage.DataSourceID = "SqlDataSourceLastImage";
                    GvComments.DataSourceID = "SqlDataSourceCommentsWId";
                    this.BindItemsList();
                    GetSubComments();
                }
                if (Session["User"] != null) {
                    MultiViewCommenting.ActiveViewIndex = 0;
                } else {
                    MultiViewCommenting.ActiveViewIndex = 1;
                }
            } else {
                Response.Redirect("Intro.aspx");
            }
        }

    And my class:

        public static object GetBlogger(string User) {
            SqlConnection scn = new SqlConnection(ClsPublic.GetConnectionString());
            SqlCommand scm = new SqlCommand("SELECT fUsername FROM tblMembers WHERE fUsername = @fUsername", scn);
            scm.Parameters.AddWithValue("@fUsername", User);
            scn.Open();
            object Blogger = scm.ExecuteScalar();
            if (Blogger != null) {
                SqlCommand sccm = new SqlCommand("SELECT COUNT(fId) AS Exp1 FROM tblImages WHERE (fxSender = @fxSender) AND (fxAccepted = 1)", scn);
                sccm.Parameters.AddWithValue("fxSender", Blogger);
                object HasQuty = sccm.ExecuteScalar();
                scn.Close();
                if (HasQuty != null) {
                    int Count = Int32.Parse(HasQuty.ToString());
                    if (Count < 10) { Blogger = null; }
                }
            }
            return Blogger;
        }

    Where in my code is the problem?
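    A hedged guess at the two spots most likely to produce this error, sketched in isolation using the names from the code above (everything else here is illustrative, not the poster's code): ExecuteScalar() returns null when the query finds no rows, so calling ToString() on its result throws exactly this exception on the server, and St1.ToLower() can never equal the mixed-case literal "Mydomain".

        using System;
        using System.Data.SqlClient;

        public static class SubdomainHelpers
        {
            // Case-insensitive comparison instead of segment.ToLower() == "Mydomain",
            // which is always false because the literal contains an uppercase M.
            public static bool IsMainDomain(string hostSegment)
            {
                return string.Equals(hostSegment, "mydomain", StringComparison.OrdinalIgnoreCase);
            }

            // Null-safe version of the "last image id" lookup: on the server the query
            // may return no rows, in which case ExecuteScalar() returns null.
            public static string GetLastImageId(string connectionString, string sender)
            {
                using (var scn = new SqlConnection(connectionString))
                using (var scm = new SqlCommand(
                    "SELECT TOP (1) fId FROM tblImages " +
                    "WHERE (fxAccepted = 1) AND (fBloging = 1) AND (fxSender = @fxSender) " +
                    "ORDER BY fId DESC", scn))
                {
                    scm.Parameters.AddWithValue("@fxSender", sender);
                    scn.Open();
                    object result = scm.ExecuteScalar();   // null when there are no rows
                    return result == null ? null : result.ToString();
                }
            }
        }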


  • Why doesn't PHP's oci_connect return false?

    - by absolethe
    I have a situation in which we have two production databases that synchronize with one another. Server One is considered the primary. Sometimes, due to maintenance or a disaster, Server Two will become primary. In some of our code that means we have to manually go in and edit the server name for database connections. I find this annoying, so in the last script I wrote I put in the server information for both and set up a loop. If oci_connect failed on Server One three times, it would move on to Server Two. If Server Two failed three times, it would notify the user that a connection couldn't be made. This has worked fine most times we've had to switch the servers. Yesterday, for example, it worked fine. Today it didn't. It just sat and spun endlessly. No error in the PHP error log. No failure to move on from. No error output to the screen. Nothing for five minutes. So then I had to manually edit the stupid config file. I asked what could possibly be different and I was told, "Yesterday the database was down, but not the server. Today the server is down." Okay...? But I don't see a distinction. I would expect oci_connect to return false if it can't establish any sort of communication with the server. I'd expect it to time out and error, not just pass an error on when it receives an error code from the server. What if there's a network problem, for example? Is this a bug in oci_connect, or is there a possibility that something in our PHP configuration gives oci_connect a crazily long timeout? If it is a sort of "bug", is there some way I can check whether the server is up first? Like a ping? (Of course, when I did a ping through the command prompt I got a response from Server One and was then told, "it's back now", although I am skeptical about the timing on that.) Anyway, if anyone could shed some light on why oci_connect might run endlessly without failing, and how to keep it from doing so, I'd be grateful.

    Edit: My code looks like the examples on PHP.net, only wrapped in some loops.

        $count = count($servers);
        for($i = 0; $i < $count; $i++){
            if((!isset($connection)) || ($connection == false)){
                // Attempt to connect to the oracle database
                $connection = @oci_connect($servers[$i]["user"], $servers[$i]["pass"], $servers[$i]["conid"]) or ($conn_error = oracle_error());
                // Try again if there was a failure
                if(($connection == false) || (isset($con_error))){
                    // Three (two more) tries per alternative
                    for($j = $st; $j < $fn; $j++){
                        // Try again to connect
                        $connection = @oci_connect($servers[$i]["user"], $servers[$i]["pass"], $servers[$i]["conid"]) or ($conn_error = oracle_error());
                    } // for($j = 2; $j < 4; $j++)
                } // if($connection == false)
            } // if(!isset($connection) || ($connection == false))
        } // for($i = 0; $i < $count; $i++)


  • How to determine if CNF formula is satisfiable in Scheme?

    - by JJBIRAN
    Program a SCHEME function sat that takes one argument, a CNF formula represented as above. If we had evaluated (define cnf '((a (not b) c) (a (not b) (not d)) (b d))) then evaluating (sat cnf) would return #t, whereas (sat '((a) (not a))) would return (). You should have following two functions to work: (define comp (lambda (lit) ; This function takes a literal as argument and returns the complement literal as the returning value. Examples: (comp 'a) = (not a), and (comp '(not b)) = b. (define consistent (lambda (lit path) This function takes a literal and a list of literals as arguments, and returns #t whenever the complement of the first argument is not a member of the list represented by the 2nd argument; () otherwise. . Now for the sat function. The real searching involves the list of clauses (the CNF formula) and the path that has currently been developed. The sat function should merely invoke the real "workhorse" function, which will have 2 arguments, the current path and the clause list. In the initial call, the current path is of course empty. Hints on sat. (Ignore these at your own risk!) (define sat (lambda (clauselist) ; invoke satpath (define satpath (lambda (path clauselist) ; just returns #t or () ; base cases: ; if we're out of clauses, what then? ; if there are no literals to choose in the 1st clause, what then? ; ; then in general: ; if the 1st literal in the 1st clause is consistent with the ; current path, and if << returns #t, ; then return #t. ; ; if the 1st literal didn't work, then search << ; the CNF formula in which the 1st clause doesn't have that literal Don't make this too hard. My program is a few functions averaging about 2-8 lines each. SCHEME is consise and elegant! The following expressions may help you to test your programs. All but cnf4 are satisfiable. By including them along with your function definitions, the functions themselves are automatically tested and results displayed when the file is loaded. (define cnf1 '((a b c) (c d) (e)) ) (define cnf2 '((a c) (c))) (define cnf3 '((d e) (a))) (define cnf4 '( (a b) (a (not b)) ((not a) b) ((not a) (not b)) ) ) (define cnf5 '((d a) (d b c) ((not a) (not d)) (e (not d)) ((not b)) ((not d) (not e)))) (define cnf6 '((d a) (d b c) ((not a) (not d) (not c)) (e (not c)) ((not b)) ((not d) (not e)))) (write-string "(sat cnf1) ") (write (sat cnf1)) (newline) (write-string "(sat cnf2) ") (write (sat cnf2)) (newline) (write-string "(sat cnf3) ") (write (sat cnf3)) (newline) (write-string "(sat cnf4) ") (write (sat cnf4)) (newline) (write-string "(sat cnf5) ") (write (sat cnf5)) (newline)


  • Using CreateSourceQuery in CTP4 Code First

    - by Adam Rackis
    I'm guessing this is impossible, but I'll throw it out there anyway. Is it possible to use CreateSourceQuery when programming with the EF4 CodeFirst API, in CTP4? I'd like to eagerly load properties attached to a collection of properties, like this: var sourceQuery = this.CurrentInvoice.PropertyInvoices.CreateSourceQuery(); sourceQuery.Include("Property").ToList(); But of course CreateSourceQuery is defined on EntityCollection<T>, whereas CodeFirst uses plain old ICollection (obviously). Is there some way to convert? I've gotten the below to work, but it's not quite what I'm looking for. Anyone know how to go from what's below to what's above (code below is from a class that inherits DbContext)? ObjectSet<Person> OSPeople = base.ObjectContext.CreateObjectSet<Person>(); OSPeople.Include(Pinner => Pinner.Books).ToList(); Thanks! EDIT: here's my version of the solution posted by zeeshanhirani - who's book by the way is amazing! dynamic result; if (invoice.PropertyInvoices is EntityCollection<PropertyInvoice>) result = (invoices.PropertyInvoices as EntityCollection<PropertyInvoice>).CreateSourceQuery().Yadda.Yadda.Yadda else //must be a unit test! result = invoices.PropertyInvoices; return result.ToList(); EDIT2: Ok, I just realized that you can't dispatch extension methods whilst using dynamic. So I guess we're not quite as dynamic as Ruby, but the example above is easily modifiable to comport with this restriction EDIT3: As mentioned in zeeshanhirani's blog post, this only works if (and only if) you have change-enabled proxies, which will get created if all of your properties are declared virtual. Here's another version of what the method might look like to use CreateSourceQuery with POCOs public class Person { public virtual int ID { get; set; } public virtual string FName { get; set; } public virtual string LName { get; set; } public virtual double Weight { get; set; } public virtual ICollection<Book> Books { get; set; } } public class Book { public virtual int ID { get; set; } public virtual string Title { get; set; } public virtual int Pages { get; set; } public virtual int OwnerID { get; set; } public virtual ICollection<Genre> Genres { get; set; } public virtual Person Owner { get; set; } } public class Genre { public virtual int ID { get; set; } public virtual string Name { get; set; } public virtual Genre ParentGenre { get; set; } public virtual ICollection<Book> Books { get; set; } } public class BookContext : DbContext { public void PrimeBooksCollectionToIncludeGenres(Person P) { if (P.Books is EntityCollection<Book>) (P.Books as EntityCollection<Book>).CreateSourceQuery().Include(b => b.Genres).ToList(); }
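    One way to package the approach from the EDITs is a small extension method. This is only a sketch under the same assumption stated in EDIT3 (change-tracking proxies expose EntityCollection<T>); the method and class names are made up:

        using System.Collections.Generic;
        using System.Data.Objects;
        using System.Data.Objects.DataClasses;
        using System.Linq;

        public static class SourceQueryExtensions
        {
            // Eager-loads via CreateSourceQuery() when the collection really is an
            // EntityCollection<T>; otherwise (e.g. a plain List<T> in a unit test)
            // it simply returns the in-memory contents.
            public static IList<T> LoadWithInclude<T>(this ICollection<T> collection, string includePath)
                where T : class
            {
                var entityCollection = collection as EntityCollection<T>;
                if (entityCollection == null)
                    return collection.ToList();          // not a proxy-backed collection

                return entityCollection.CreateSourceQuery()
                                       .Include(includePath)
                                       .ToList();
            }
        }

        // Usage, roughly:
        // var propertyInvoices = invoice.PropertyInvoices.LoadWithInclude("Property");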


  • CSS optimization - extra classes in dom or preprocessor-repetitive styling in css file?

    - by anna.mi
    I'm starting on a fairly large project and I'm considering using LESS to pre-process my CSS. The useful thing about LESS is that you can define a mixin such as:

        .border-radius(@radius) { -webkit-border-radius: @radius; -moz-border-radius: @radius; -o-border-radius: @radius; -ms-border-radius: @radius; border-radius: @radius; }

    and then use it in a class declaration as

        .rounded-div { .border-radius(10px); }

    to get the output CSS:

        .rounded-div { -webkit-border-radius: 10px; -moz-border-radius: 10px; -o-border-radius: 10px; -ms-border-radius: 10px; border-radius: 10px; }

    This is extremely useful in the case of browser prefixes. However, the same concept could be used to encapsulate commonly used CSS, for example:

        .column-container { overflow: hidden; display: block; width: 100%; }
        .column(@width) { float: left; width: @width; }

    and then use this mixin whenever I need columns in my design:

        .my-column-outer { .column-container(); background: red; }
        .my-column-inner { .column(50%); font-color: yellow; }

    (Of course, using the preprocessor we could easily expand this to be much more useful, e.g. pass the number of columns and the container width as variables and have LESS determine the width of each column from them!) The problem with this is that when compiled, my final CSS file would have 100 such declarations copy-and-pasted, making the file huge, bloated and repetitive.

    The alternative would be to use a grid system with predefined classes for each column-layout option, e.g. .c-50 (with a "float: left; width:50%;" definition), .c-33 and .c-25 to accommodate 2-column, 3-column and 4-column layouts, and then attach these classes in my DOM. I really dislike the idea of the extra classes; from experience it results in a bloated DOM (creating extra divs just to attach the grid classes to). Also, the most basic HTML/CSS tutorial would tell you that the DOM should be separated from the styling, and grid classes are styling related! To me, it's the same as attaching a "border-radius-10" class to the .rounded-div example above. On the other hand, the large CSS file that results from the repetitive code is also a disadvantage.

    So I guess my question is: which one would you recommend, and which do you use? Which solution is best for optimization? Apart from the larger file size, has there even been any research on whether browsers render multiple classes faster than a large CSS file, or the other way round? Thanks! I'd love to hear your opinion!


  • Effective optimization strategies on modern C++ compilers

    - by user168715
    I'm working on scientific code that is very performance-critical. An initial version of the code has been written and tested, and now, with profiler in hand, it's time to start shaving cycles from the hot spots. It's well-known that some optimizations, e.g. loop unrolling, are handled these days much more effectively by the compiler than by a programmer meddling by hand. Which techniques are still worthwhile? Obviously, I'll run everything I try through a profiler, but if there's conventional wisdom as to what tends to work and what doesn't, it would save me significant time. I know that optimization is very compiler- and architecture- dependent. I'm using Intel's C++ compiler targeting the Core 2 Duo, but I'm also interested in what works well for gcc, or for "any modern compiler." Here are some concrete ideas I'm considering: Is there any benefit to replacing STL containers/algorithms with hand-rolled ones? In particular, my program includes a very large priority queue (currently a std::priority_queue) whose manipulation is taking a lot of total time. Is this something worth looking into, or is the STL implementation already likely the fastest possible? Along similar lines, for std::vectors whose needed sizes are unknown but have a reasonably small upper bound, is it profitable to replace them with statically-allocated arrays? I've found that dynamic memory allocation is often a severe bottleneck, and that eliminating it can lead to significant speedups. As a consequence I'm interesting in the performance tradeoffs of returning large temporary data structures by value vs. returning by pointer vs. passing the result in by reference. Is there a way to reliably determine whether or not the compiler will use RVO for a given method (assuming the caller doesn't need to modify the result, of course)? How cache-aware do compilers tend to be? For example, is it worth looking into reordering nested loops? Given the scientific nature of the program, floating-point numbers are used everywhere. A significant bottleneck in my code used to be conversions from floating point to integers: the compiler would emit code to save the current rounding mode, change it, perform the conversion, then restore the old rounding mode --- even though nothing in the program ever changed the rounding mode! Disabling this behavior significantly sped up my code. Are there any similar floating-point-related gotchas I should be aware of? One consequence of C++ being compiled and linked separately is that the compiler is unable to do what would seem to be very simple optimizations, such as move method calls like strlen() out of the termination conditions of loop. Are there any optimization like this one that I should look out for because they can't be done by the compiler and must be done by hand? On the flip side, are there any techniques I should avoid because they are likely to interfere with the compiler's ability to automatically optimize code? Lastly, to nip certain kinds of answers in the bud: I understand that optimization has a cost in terms of complexity, reliability, and maintainability. For this particular application, increased performance is worth these costs. I understand that the best optimizations are often to improve the high-level algorithms, and this has already been done.


  • Trying to packetize TCP with non-blocking IO is hard! Am I doing something wrong?

    - by Ricket
    Oh how I wish TCP was packet-based like UDP is! But alas, that's not the case, so I'm trying to implement my own packet layer. Here's the chain of events so far (ignoring writing packets) Oh, and my Packets are very simply structured: two unsigned bytes for length, and then byte[length] data. (I can't imagine if they were any more complex, I'd be up to my ears in if statements!) Server is in an infinite loop, accepting connections and adding them to a list of Connections. PacketGatherer (another thread) uses a Selector to figure out which Connection.SocketChannels are ready for reading. It loops over the results and tells each Connection to read(). Each Connection has a partial IncomingPacket and a list of Packets which have been fully read and are waiting to be processed. On read(): Tell the partial IncomingPacket to read more data. (IncomingPacket.readData below) If it's done reading (IncomingPacket.complete()), make a Packet from it and stick the Packet into the list waiting to be processed and then replace it with a new IncomingPacket. There are a couple problems with this. First, only one packet is being read at a time. If the IncomingPacket needs only one more byte, then only one byte is read this pass. This can of course be fixed with a loop but it starts to get sorta complicated and I wonder if there is a better overall way. Second, the logic in IncomingPacket is a little bit crazy, to be able to read the two bytes for the length and then read the actual data. Here is the code, boiled down for quick & easy reading: int readBytes; // number of total bytes read so far byte length1, length2; // each byte in an unsigned short int (see getLength()) public int getLength() { // will be inaccurate if readBytes < 2 return (int)(length1 << 8 | length2); } public void readData(SocketChannel c) { if (readBytes < 2) { // we don't yet know the length of the actual data ByteBuffer lengthBuffer = ByteBuffer.allocate(2 - readBytes); numBytesRead = c.read(lengthBuffer); if(readBytes == 0) { if(numBytesRead >= 1) length1 = lengthBuffer.get(); if(numBytesRead == 2) length2 = lengthBuffer.get(); } else if(readBytes == 1) { if(numBytesRead == 1) length2 = lengthBuffer.get(); } readBytes += numBytesRead; } if(readBytes >= 2) { // then we know we have the entire length variable // lazily-instantiate data buffers based on getLength() // read into data buffers, increment readBytes // (does not read more than the amount of this packet, so it does not // need to handle overflow into the next packet's data) } } public boolean complete() { return (readBytes > 2 && readBytes == getLength()+2); } Basically I need feedback on my code. Please suggest any improvements. Even overhauling my entire system would be okay, if you have suggestions for how better to implement the whole thing. Book recommendations are welcome too; I love books. I just get the feeling that something isn't quite right.
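    The question is Java NIO, but the framing logic itself is language-agnostic. Here is a minimal sketch of the same idea in C# (all names are made up): append whatever one read produced to a buffer, then peel off every complete [2-byte length][payload] frame, which also removes the "only one packet per pass" limitation.

        using System;
        using System.Collections.Generic;

        public class PacketAssembler
        {
            private readonly List<byte> _buffer = new List<byte>();

            // Feed whatever one read produced -- possibly a partial frame, possibly
            // several frames back to back -- and get back every completed payload.
            public List<byte[]> Feed(byte[] data, int count)
            {
                for (int i = 0; i < count; i++)
                    _buffer.Add(data[i]);

                var packets = new List<byte[]>();
                while (_buffer.Count >= 2)
                {
                    int length = (_buffer[0] << 8) | _buffer[1];   // unsigned big-endian short
                    if (_buffer.Count < 2 + length)
                        break;                                     // wait for the rest

                    packets.Add(_buffer.GetRange(2, length).ToArray());
                    _buffer.RemoveRange(0, 2 + length);
                }
                return packets;
            }
        }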


  • What's my best approach on this simple hierarchy Java Problem?

    - by Nazgulled
    First, I'm sorry for the question title but I can't think of a better one to describe my problem. Feel free to change it :) Let's say I have this abstract class Box which implements a couple of constructors, methods and whatever on some private variables. Then I have a couple of sub classes like BoxA and BoxB. Both of these implement extra things. Now I have another abstract class Shape and a few sub classes like Square and Circle. For both BoxA and BoxB I need to have a list of Shape objects but I need to make sure that only Square objects go into BoxA's list and only Circle objects go into BoxB's list. For that list (on each box), I need to have a get() and set() method and also a addShape() and removeShape() methods. Another important thing to know is that for each box created, either BoxA or BoxB, each respectively Shape list is exactly the same. Let's say I create a list of Square's named ls and two BoxA objects named boxA1 and boxA2. No matter what, both boxA1 and boxA2 must have the same ls list. This is my idea: public abstract class Box { // private instance variables public Box() { // constructor stuff } // public instance methods } public class BoxA extends Box { // private instance variables private static List<Shape> list; public BoxA() { // constructor stuff } // public instance methods public static List<Square> getList() { List<Square> aux = new ArrayList<Square>(); for(Square s : list.values()) { aux.add(s.clone()); // I know what I'm doing with this clone, don't worry about it } return aux; } public static void setList(List<Square> newList) { list = new ArrayList<Square>(newList); } public static void addShape(Square s) { list.add(s); } public static void removeShape(Square s) { list.remove(list.indexOf(s)); } } As the list needs to be the same for that type of object, I declared as static and all methods that work with that list are also static. Now, for BoxB the class would be almost the same regarding the list stuff. I would only replace Square by Triangle and the problem was solved. So, for each BoxA object created, the list would be only one and the same for each BoxB object created, but a different type of list of course. So, what's my problem you ask? Well, I don't like the code... The getList(), setList(), addShape() and removeShape() methods are basically repeated for BoxA and BoxB, only the type of the objects that the list will hold is different. I can't think of way to do it in the super class Box instead. Doing it statically too, using Shape instead of Square and Triangle, wouldn't work because the list would be only one and I need it to be only one but for each sub class of Box. How could I do this differently and better? P.S: I could not describe my real example because I don't know the correct words in English for the stuff I'm doing, so I just used a box and shapes example, but it's basically the same.
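    The question is Java, but the shape of a generic solution is easier to sketch in C# (all names below are illustrative). In C#, a static field in a generic class is created once per closed type, so Box<Square> and Box<Circle> each get their own shared list, which matches the "one list per box type" requirement without repeating the list methods; note that Java's type erasure means this exact trick does not carry over directly.

        using System.Collections.Generic;

        public abstract class Shape { }
        public class Square : Shape { }
        public class Circle : Shape { }

        public abstract class Box<TShape> where TShape : Shape
        {
            // Shared by every box of the same TShape (one list per closed generic type).
            private static readonly List<TShape> list = new List<TShape>();

            public static List<TShape> GetList() { return new List<TShape>(list); }

            public static void SetList(IEnumerable<TShape> newList)
            {
                list.Clear();
                list.AddRange(newList);
            }

            public static void AddShape(TShape s) { list.Add(s); }
            public static void RemoveShape(TShape s) { list.Remove(s); }
        }

        public class BoxA : Box<Square> { /* extra BoxA members */ }
        public class BoxB : Box<Circle> { /* extra BoxB members */ }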


  • PHP Form - Empty input enter this text - Validation

    - by James Skelton
    No doubt very simple question for someone with php knowledge. I have a form with a datepicker, all is fine when a user has selected a date the email is send with: Date: 2012 04 10 But i would like if the user has skipped this and left blank (as i have not made this required) to send as: Date: Not Entered (<-- Or something) Instead at the minute of course it reads: Date: Form input <input type="text" class="form-control" id="datepicker" name="datepicker" size="50" value="Date Of Wedding" /> This is the validator $(document).ready(function(){ //validation contact form $('#submit').click(function(event){ event.preventDefault(); var fname = $('#name').val(); var validInput = new RegExp(/^[a-zA-Z0-9\s]+$/); var email = $('#email').val(); var validEmail = new RegExp(/^([a-zA-Z0-9_\.\-])+\@(([a-zA-Z0-9\-])+\.)+([a-zA-Z0-9]{2,4})+$/); var message = $('#message').val(); if(fname==''){ showError('<div class="alert alert-danger">Please enter your name.</div>', $('#name')); $('#name').addClass('required'); return;} if(!validInput.test(fname)){ showError('<div class="alert alert-danger">Please enter a valid name.</div>', $('#name')); $('#name').addClass('required'); return;} if(email==''){ showError('<div class="alert alert-danger">Please enter an email address.</div>', $('#email')); $('#email').addClass('required'); return;} if(!validEmail.test(email)){ showError('<div class="alert alert-danger">Please enter a valid email.</div>', $('#email')); $('#email').addClass('required'); return;} if(message==''){ showError('<div class="alert alert-danger">Please enter a message.</div>', $('#message')); $('#message').addClass('required'); return;} // setup some local variables var request; var form = $(this).closest('form'); // serialize the data in the form var serializedData = form.serialize(); // fire off the request to /contact.php request = $.ajax({ url: "contact.php", type: "post", data: serializedData }); // callback handler that will be called on success request.done(function (response, textStatus, jqXHR){ $('.contactWrap').show( 'slow' ).fadeIn("slow").html(' <div class="alert alert-success centered"><h3>Thank you! Your message has been sent.</h3></div> '); }); // callback handler that will be called on failure request.fail(function (jqXHR, textStatus, errorThrown){ // log the error to the console console.error( "The following error occured: "+ textStatus, errorThrown ); }); }); //remove 'required' class and hide error $('input, textarea').keyup( function(event){ if($(this).hasClass('required')){ $(this).removeClass('required'); $('.error').hide("slow").fadeOut("slow"); } }); // show error showError = function (error, target){ $('.error').removeClass('hidden').show("slow").fadeIn("slow").html(error); $('.error').data('target', target); $(target).focus(); console.log(target); console.log(error); return; } });


  • StoreGeneratedPattern T4 EntityFramework concern

    - by LoganWolfer
    Hi everyone, Here's the situation : I use SQL Server 2008 R2, SQL Replication, Visual Studio 2010, EntityFramework 4, C# 4. The course-of-action from our DBA is to use a rowguid column for SQL Replication to work with our setup. These columns need to have a StoreGeneratedPattern property set to Computed on every one of these columns. The problem : Every time the T4 template regenerate our EDMX (ADO.NET Entity Data Model) file (for example, when we update it from our database), I need to go manually in the EDMX XML file to add this property to every one of them. It has to go from this : <Property Name="rowguid" Type="uniqueidentifier" Nullable="false" /> To this : <Property Name="rowguid" Type="uniqueidentifier" Nullable="false" StoreGeneratedPattern="Computed"/> The solution : I'm trying to find a way to customize an ADO.NET EntityObject Generator T4 file to generate a StoreGeneratedPattern="Computed" to every rowguid that I have. I'm fairly new to T4, I only did customization to AddView and AddController T4 templates for ASP.NET MVC 2, like List.tt for example. I've looked through the EF T4 file, and I can't seem to find through this monster where I could do that (and how). My best guess is somewhere in this part of the file, line 544 to 618 of the original ADO.NET EntityObject Generator T4 file : //////// //////// Write PrimitiveType Properties. //////// private void WritePrimitiveTypeProperty(EdmProperty primitiveProperty, CodeGenerationTools code) { MetadataTools ef = new MetadataTools(this); #> /// <summary> /// <#=SummaryComment(primitiveProperty)#> /// </summary><#=LongDescriptionCommentElement(primitiveProperty, 1)#> [EdmScalarPropertyAttribute(EntityKeyProperty=<#=code.CreateLiteral(ef.IsKey(primitiveProperty))#>, IsNullable=<#=code.CreateLiteral(ef.IsNullable(primitiveProperty))#>)] [DataMemberAttribute()] <#=code.SpaceAfter(NewModifier(primitiveProperty))#><#=Accessibility.ForProperty(primitiveProperty)#> <#=code.Escape(primitiveProperty.TypeUsage)#> <#=code.Escape(primitiveProperty)#> { <#=code.SpaceAfter(Accessibility.ForGetter(primitiveProperty))#>get { <#+ if (ef.ClrType(primitiveProperty.TypeUsage) == typeof(byte[])) { #> return StructuralObject.GetValidValue(<#=code.FieldName(primitiveProperty)#>); <#+ } else { #> return <#=code.FieldName(primitiveProperty)#>; <#+ } #> } <#=code.SpaceAfter(Accessibility.ForSetter((primitiveProperty)))#>set { <#+ if (ef.IsKey(primitiveProperty)) { if (ef.ClrType(primitiveProperty.TypeUsage) == typeof(byte[])) { #> if (!StructuralObject.BinaryEquals(<#=code.FieldName(primitiveProperty)#>, value)) <#+ } else { #> if (<#=code.FieldName(primitiveProperty)#> != value) <#+ } #> { <#+ PushIndent(CodeRegion.GetIndent(1)); } #> <#=ChangingMethodName(primitiveProperty)#>(value); ReportPropertyChanging("<#=primitiveProperty.Name#>"); <#=code.FieldName(primitiveProperty)#> = StructuralObject.SetValidValue(value<#=OptionalNullableParameterForSetValidValue(primitiveProperty, code)#>); ReportPropertyChanged("<#=primitiveProperty.Name#>"); <#=ChangedMethodName(primitiveProperty)#>(); <#+ if (ef.IsKey(primitiveProperty)) { PopIndent(); #> } <#+ } #> } } private <#=code.Escape(primitiveProperty.TypeUsage)#> <#=code.FieldName(primitiveProperty)#><#=code.StringBefore(" = ", code.CreateLiteral(primitiveProperty.DefaultValue))#>; partial void <#=ChangingMethodName(primitiveProperty)#>(<#=code.Escape(primitiveProperty.TypeUsage)#> value); partial void <#=ChangedMethodName(primitiveProperty)#>(); <#+ } Any help would be appreciated. Thanks in advance. 
EDIT: I haven't found an answer to this problem yet; if anyone has ideas on how to automate this, it would really be appreciated.
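    One way to automate it, sketched only as an assumption about the workflow (run it after each model update; the class name is made up and the SSDL namespace below is the EF4/EDMX v2 one, which may differ for other versions): load the .edmx, find every storage-model property named rowguid and stamp it with StoreGeneratedPattern="Computed".

        using System;
        using System.Linq;
        using System.Xml.Linq;

        class EdmxRowguidFixup
        {
            static void Main(string[] args)
            {
                string path = args.Length > 0 ? args[0] : "Model.edmx";
                XDocument edmx = XDocument.Load(path);

                // EF4 storage-model (SSDL) namespace; adjust for other EDMX versions.
                XNamespace ssdl = "http://schemas.microsoft.com/ado/2009/02/edm/ssdl";

                var rowguids = edmx.Descendants(ssdl + "Property")
                                   .Where(p => (string)p.Attribute("Name") == "rowguid")
                                   .ToList();

                foreach (XElement property in rowguids)
                    property.SetAttributeValue("StoreGeneratedPattern", "Computed");

                edmx.Save(path);
                Console.WriteLine("Patched {0} rowguid column(s).", rowguids.Count);
            }
        }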


  • Insert a transformed integer_sequence into a variadic template argument?

    - by coderforlife
    How do you insert a transformed integer_sequence (or similar since I am targeting C++11) into a variadic template argument? For example I have a class that represents a set of bit-wise flags (shown below). It is made using a nested-class because you cannot have two variadic template arguments for the same class. It would be used like typedef Flags<unsigned char, FLAG_A, FLAG_B, FLAG_C>::WithValues<0x01, 0x02, 0x04> MyFlags. Typically, they will be used with the values that are powers of two (although not always, in some cases certain combinations would be made, for example one could imagine a set of flags like Read=0x1, Write=0x2, and ReadWrite=0x3=0x1|0x2). I would like to provide a way to do typedef Flags<unsigned char, FLAG_A, FLAG_B, FLAG_C>::WithDefaultValues MyFlags. template<class _B, template <class,class,_B> class... _Fs> class Flags { public: template<_B... _Vs> class WithValues : public _Fs<_B, Flags<_B,_Fs...>::WithValues<_Vs...>, _Vs>... { // ... }; }; I have tried the following without success (placed inside the Flags class, outside the WithValues class): private: struct _F { // dummy class which can be given to a flag-name template template <_B _V> inline constexpr explicit _F(std::integral_constant<_B, _V>) { } }; // we count the flags, but only in a dummy way static constexpr unsigned _count = sizeof...(_Fs<_B, _F, 1>); static inline constexpr _B pow2(unsigned exp, _B base = 2, _B result = 1) { return exp < 1 ? result : pow2(exp/2, base*base, (exp % 2) ? result*base : result); } template <_B... _Is> struct indices { using next = indices<_Is..., sizeof...(_Is)>; using WithPow2Values = WithValues<pow2(_Is)...>; }; template <unsigned N> struct build_indices { using type = typename build_indices<N-1>::type::next; }; template <> struct build_indices<0> { using type = indices<>; }; //// Another attempt //template < _B... _Is> struct indices { // using WithPow2Values = WithValues<pow2(_Is)...>; //}; //template <unsigned N, _B... _Is> struct build_indices // : build_indices<N-1, N-1, _Is...> { }; //template < _B... _Is> struct build_indices<0, _Is...> // : indices<_Is...> { }; public: using WithDefaultValues = typename build_indices<_count>::type::WithPow2Values; Of course, I would be willing to have any other alternatives to the whole situation (supporting both flag names and values in the same template set, etc). I have included a "working" example at ideone: http://ideone.com/NYtUrg - by "working" I mean compiles fine without using default values but fails with default values (there is a #define to switch between them). Thanks!


  • How to reserve public API to internal usage in .NET?

    - by mark
    Dear ladies and sirs. Let me first present the case, which will explain my question. This is going to be a bit long, so I apologize in advance :-). I have objects and collections, which should support the Merge API (it is my custom API, the signature of which is immaterial for this question). This API must be internal, meaning only my framework should be allowed to invoke it. However, derived types should be able to override the basic implementation. The natural way to implement this pattern as I see it, is this: The Merge API is declared as part of some internal interface, let us say IMergeable. Because the interface is internal, derived types would not be able to implement it directly. Rather they must inherit it from a common base type. So, a common base type is introduced, which would implement the IMergeable interface explicitly, where the interface methods delegate to respective protected virtual methods, providing the default implementation. This way the API is only callable by my framework, but derived types may override the default implementation. The following code snippet demonstrates the concept: internal interface IMergeable { void Merge(object obj); } public class BaseFrameworkObject : IMergeable { protected virtual void Merge(object obj) { // The default implementation. } void IMergeable.Merge(object obj) { Merge(obj); } } public class SomeThirdPartyObject : BaseFrameworkObject { protected override void Merge(object obj) { // A derived type implementation. } } All is fine, provided a single common base type suffices, which is usually true for non collection types. The thing is that collections must be mergeable as well. Collections do not play nicely with the presented concept, because developers do not develop collections from the scratch. There are predefined implementations - observable, filtered, compound, read-only, remove-only, ordered, god-knows-what, ... They may be developed from scratch in-house, but once finished, they serve wide range of products and should never be tailored to some specific product. Which means, that either: they do not implement the IMergeable interface at all, because it is internal to some product the scope of the IMergeable interface is raised to public and the API becomes open and callable by all. Let us refer to these collections as standard collections. Anyway, the first option screws my framework, because now each possible standard collection type has to be paired with the respective framework version, augmenting the standard with the IMergeable interface implementation - this is so bad, I am not even considering it. The second option breaks the framework as well, because the IMergeable interface should be internal for a reason (whatever it is) and now this interface has to open to all. So what to do? My solution is this. make IMergeable public API, but add an extra parameter to the Merge method, I call it a security token. The interface implementation may check that the token references some internal object, which is never exposed to the outside. If this is the case, then the method was called from within the framework, otherwise - some outside API consumer attempted to invoke it and so the implementation can blow up with a SecurityException. 
Here is the modified code snippet demonstrating this concept: internal static class InternalApi { internal static readonly object Token = new object(); } public interface IMergeable { void Merge(object obj, object token); } public class BaseFrameworkObject : IMergeable { protected virtual void Merge(object obj) { // The default implementation. } public void Merge(object obj, object token) { if (!object.ReferenceEquals(token, InternalApi.Token)) { throw new SecurityException("bla bla bla"); } Merge(obj); } } public class SomeThirdPartyObject : BaseFrameworkObject { protected override void Merge(object obj) { // A derived type implementation. } } Of course, this is less explicit than having an internally scoped interface and the check is moved from the compile time to run time, yet this is the best I could come up with. Now, I have a gut feeling that there is a better way to solve the problem I have presented. I do not know, may be using some standard Code Access Security features? I have only vague understanding of it, but can LinkDemand attribute be somehow related to it? Anyway, I would like to hear other opinions. Thanks.
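    For completeness, the standard alternative I would also weigh here (not part of the original design above): keep IMergeable internal and grant specific trusted assemblies access to it with InternalsVisibleTo, so the stock collection assemblies can implement the interface without it becoming public. The assembly name below is a placeholder.

        // Placed in the framework assembly (AssemblyInfo.cs or any source file).
        using System.Runtime.CompilerServices;

        // Grants the named in-house assembly access to this assembly's internal types
        // and members, so it can implement the internal IMergeable interface directly.
        [assembly: InternalsVisibleTo("MyCompany.Collections")]

        // If the friend assembly is strong-named, the full public key must be included:
        // [assembly: InternalsVisibleTo("MyCompany.Collections, PublicKey=0024000004800000...")]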


  • If I use a facade class with generic methods to access the JPA API, how should I provide additional processing for specific types?

    - by Shaun
    Let's say I'm making a fairly simple web application using JAVA EE specs (I've heard this is possible). In this app, I only have about 10 domain/data objects, and these are represented by JPA Entities. Architecturally, I would consider the JPA API to perform the role of a DAO. Of course, I don't want to use the EntityManager directly in my UI (JSF) and I need to manage transactions, so I delegate these tasks to the so-called service layer. More specifically, I would like to be able to handle these tasks in a single DataService class (often also called CrudService) with generic methods. See this article by Adam Bien for an example interface: http://www.adam-bien.com/roller/abien/entry/generic_crud_service_aka_dao My project differs from that article in that I can't use EJBs, so my service classes are essentially just named beans and I handle transactions manually. Regardless, what I want is a single interface for simple CRUD operations on my data objects because having a different class for each data type would lead to a lot of duplicate and/or unnecessary code. Ideally, my views would be able to use a method such as public <T> List<T> findAll(Class<T> type) { ... } to retrieve data. Using JSF, it might look something like this: <h:dataTable value="#{dataService.findAll(data.class)}" var="d"> ... </h:dataTable> Similarly, after validating forms, my controller could submit the data with a method such as: public <T> void add(T entity) { ... } Granted, you'd probably actually want to return something useful to the caller. In any case, this works well if your data can be treated as homogenous in this manner. Alas, it breaks down when you need to perform additional processing on certain objects before passing them on to JPA. For example, let's say I'm dealing with Books and Authors which have a many-to-many relationship. Each Book has a set of IDs referring to its authors, and each Author has a set of IDs referring to their books. Normally, JPA can manage this kind of relationship for you, but in some cases it can't (for example, the google app engine JPA provider doesn't support this). Thus, when I persist a new book for example, I may need to update the corresponding author entities. My question, then, is if there's an elegant way to handle this or if I should reconsider the sanity of my whole design. Here's a couple ways I see of dealing with it: The instanceof operator. I could use this to target certain classes when special processing is needed. Perhaps maintainability suffers and it isn't beautiful code, but if there's only 10 or so domain objects it can't be all that bad... could it? Make a different service for each entity type (ie, BookService and AuthorService). All services would inherit from a generic DataService base class and override methods if special processing is needed. At this point, you could probably also just call them DAOs instead. As always, I appreciate the help. Let me know if any clarifications are needed, as I left out many smaller details.


  • conditional selects with jQuery and the Validation plugin

    - by dbonomo
    Hi, I've got a form that I am validating with the jQuery validation plugin. I would like to add a conditional select box (a selection box that is populated/shown depending on the selection of another) and have it validate as well. Here is what I have so far: $(document).ready(function(){ $("#customer_information").validate({ //disable the submit button after it is clicked to prevent multiple submissions submitHandler: function(form){ if(!this.wasSent){ this.wasSent = true; $(':submit', form).val('Please wait...') .attr('disabled', 'disabled') .addClass('disabled'); form.submit(); } else { return false; } }, //Customizes error placement errorPlacement: function(error, element) { error.insertAfter(element) error.wrap("<div class=\"form_error\">") } }); $(".courses").hide(); $("#course_select").change(function() { switch($(this).val()){ case "Certificates": $(".courses").hide().parent().find("#Certificates").show(); $(".filler").hide(); break; case "Associates": $(".courses").hide().parent().find("#Associates").show(); $(".filler").hide(); break; case "": $(".filler").show(); $(".courses").hide(); } }); }); And the HTML: <select id="course_select"> <option value="">Please Select</option> <option value="Certificates">Certificates</option> <option value="Associates">Associates</option> </select> <div id="Form0" class="filler"><select name="filler_select"><option value="">Please Select Course Type</option></select></div> <div id="Associates" class="courses"> <select name="lead_source_id" id="Requested Program" class="required"> <option value="">Please Select</option> <option value="01">Health Information Technology</option> <option value="02">Human Resources </option> <option value="03">Marketing </option> </select> </div> <div id="Certificates" class="courses"> <select name="lead_source_id" id="Requested Program" class="required"> <option value="">Please Select</option> <option value="04">Accounting Services</option> <option value="05">Bookkeeping</option> <option value="06">Child Day Care</option> </select> </div> So far, the select is working for me, but validation thinks that the field is empty even when a value is selected. It looks like there are a ton of ways to do conditional selects in jQuery. This was the best way I managed to work out (I'm new to jQuery), but I'd love to hear what you folks feel is the "best" way, especially if it works well with the validation plugin. Thanks!


  • if statement to check available elements

    - by nicmare
    Hi there, i allways found an answer here just be reading the threads but now its time for my very first thread because i was unable to find related questions: this is my target code: <li> <span class="panel"> <a href="…" class="image" rel="group"> <img title="…" src="…" alt="…" width="128" height="96" /> </a> <span class="sizes"> <span class="button"> <a href="#" class="size1" rel="…" title="1">9x13</a> </span> <span class="button"> <a href="#" class="size2" rel="…" title="3">10x15</a> </span> <span class="button"> <a href="#" class="size3" rel="…" title="6">20x30</a> </span> </span> </span> <a href="…" class="image" rel="group"> <span class="added"></span> <img title="…" src="…" alt="…" width="128" height="96" class="thumbnail" /> </a> </li> i add two "span" elements through jquery: $('.sizes a').click( function (e) { e.preventDefault(); $(this).after("<span class='checked'></span><span class='remove'></span>"); } ); the maximum amount of those s are 3 times in a wrapped div. so i am trying to do is the following: $(".remove").live('click', function(){ $(this).hide(); $(this).prev().hide(); // hide checked span if ($(this).parent().parent().children().children().each().hasClass('checked')) { $(this).parent().parent().parent().next().children(".added").hide(); } } ); of course my if statement is crap. i try it in my own words: if there are more than one span(class=remove) in span(class=panel) then nothing should happen. but if there is just one span(class=remove) left in span(class=panel) THEN should be hidden how do i achieve this? there is an online demonstration of this: http://bit.ly/972be4 as you can see, every time when i hit a button below an image, the image gets marked by a label. and i want to hide this label (class=added) when no button is selected Thank you very much! best wishes from germany


  • DataGrid : Binding with two different classes with lists ? WPF C#

    - by MyRestlessDream
    It is my first question on StackOverflow so I hope I am doing nothing wrong ! Sorry if it is the case ! I need some help because I can not find the solution of my problem. Of course I have searched everywhere on the web but I can not find it (can not post the links that I am using because of my low reputation :( ). Moreover, I am new in C# and WPF (and self-learning). I used to work in C++/Qt so I do not know how everything works in WPF. And sorry for my English, I am French. My problem My basic classes are that an Employee can use a computer. The id of the computer and the date of use are stored into the class Connection. I would like to display the list information in a DataGrid and in RowDetailsTemplate like here : http://i.stack.imgur.com/Bvn1z.png So it will do a binding to the Employee class but also to the Connection class with only the last value of the property (here the last value of the list "Computer ID" and the last value of the list "Connection Date" on this last computer). So it is a loop in the different lists. How can I do it ? Is it too much to do ? :( I succeed to get the Employee informations but I do not know how to bind the list of computer. When I am trying, it shows me "(Collection)" so it does not go inside the list :( Summary of Questions How to display/bind a value from a list AND from a different class in a DataGrid ? How to display all the values of a list into the RowDetailsTemplate ? Under Windows 7 and Visual Studio 2010 Pro version. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ EDIT ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Solution I have used the solution of Stefan Denchev. Here the modification of my class : http://i.stack.imgur.com/Ijx5i.png And the code used: <DataGrid ItemsSource="{Binding}" Name="table"> <DataGrid.Columns> <DataGridTextColumn Header="First Name" Binding="{Binding FirstName}"/> <DataGridTextColumn Header="Last Name" Binding="{Binding LastName}"/> <DataGridTextColumn Header="Gender" Binding="{Binding Gender}"/> <DataGridTextColumn Header="Last computer used" Binding="{Binding LastComputerID}"/> <DataGridTextColumn Header="Last connection date" Binding="{Binding LastDate}"/> </DataGrid.Columns> <DataGrid.RowDetailsTemplate> <DataTemplate> <DataGrid ItemsSource="{Binding ListOfConnection}"> <DataGrid.Columns> <DataGridTextColumn Header="Computer ID" Binding="{Binding ComputerID}"/> <DataGridTemplateColumn> <DataGridTemplateColumn.CellTemplate> <DataTemplate> <ListView ItemsSource="{Binding ListOfDate}"/> </DataTemplate> </DataGridTemplateColumn.CellTemplate> </DataGridTemplateColumn> </DataGrid.Columns> </DataGrid> </DataTemplate> </DataGrid.RowDetailsTemplate> </DataGrid> With in code behind : List<Employee> allEmployees = WorkflowMgr.Instance.AllEmployees; table.DataContext = allEmployees; And it works ! I have tryed to improve my fake example :) Hope it will help to another developer !
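    Since the class-diagram image is not reproduced here, this is only a guess at the classes behind those bindings. The property names (ListOfConnection, ComputerID, ListOfDate, LastComputerID, LastDate) come from the XAML above; everything else is assumed, and change notification (INotifyPropertyChanged) is omitted for brevity.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class Connection
        {
            public int ComputerID { get; set; }
            public List<DateTime> ListOfDate { get; set; }

            public Connection() { ListOfDate = new List<DateTime>(); }
        }

        public class Employee
        {
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public string Gender { get; set; }
            public List<Connection> ListOfConnection { get; set; }

            public Employee() { ListOfConnection = new List<Connection>(); }

            // Read-only helpers for the top-level DataGrid columns
            // ({Binding LastComputerID} and {Binding LastDate} in the XAML above).
            public int? LastComputerID
            {
                get
                {
                    Connection last = ListOfConnection.LastOrDefault();
                    return last == null ? (int?)null : last.ComputerID;
                }
            }

            public DateTime? LastDate
            {
                get
                {
                    Connection last = ListOfConnection.LastOrDefault();
                    if (last == null || last.ListOfDate.Count == 0) return null;
                    return last.ListOfDate[last.ListOfDate.Count - 1];
                }
            }
        }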


  • Shrinking image by 57% and centering inside css structure

    - by Johua
    Hy, i'm really stuck. I'll go step by step and hope to make it short. This is the html structure: <li class="FAVwithimage"> <a href=""> <img src="pics/Joshua.png"> <span class="name">Joshua</span> <span class="comment">Developer</span> <span class="arrow"></span> </a> </li> Before i paste the css classes, some info about the exact goal to accomplish: Resize the picture (img) by 57%. If it cannot be done with css, then jquery/javascript solution. For example: Original pic is 240x240px, i need to resize it by 57%. That means that a pic of 400x400 would be bigger after resizing. After resizing, the picture needs to be centered vertical&horizontal inside a: 68x90 boundaries. So you have an LI element, wich has an A element, and inside A we have IMG, IMG is resized by 57% and centered where the maximum width can be of course 68px and maximum height 90px. No for that to work i was adding a SPAN element arround the IMG. This is what i was thinking: <li class="FAVwithimage"> <a href=""> <span class="picHolder"><img src="pics/Joshua.png"></span> <span class="name">Joshua</span> <span class="comment">Developer</span> <span class="arrow"></span> </a> </li> Then i would give the span element: display:block and w=68px, h=90px. But unforunatelly that didn't work. I know it's a long post but i'v did my best to describe it very simple. Beneath are the css classes and a picture to see what i need. li.FAVwithimage { height: 90px!important; } li.FAVwithimage a, li.FAVwithimage:hover a { height: 81px!important; } That's it what's relevant. I have not included the classes for: name,comment,arrow And now the classes that are incomplete and refer to IMG. li.FAVwithimage a span.picHolder{ /*put the picHolder to the beginning of the LI element*/ position: absolute; left: 0; top: 0; width: 68px; height: 90px; diplay:block; border:1px solid #F00; } Border is used just temporary to show the actuall picHolder. It is now on the beginning of LI, width and height is set. li.FAVwithimage span.picHolder img { max-width:68px!important; max-height:90px!important; } This is the class wich should shrink the pic by 57% and center inside picHolder Here I have a drawing describing what i need:


  • Dynamic Type to do away with Reflection

    - by Rick Strahl
    The dynamic type in C# 4.0 is a welcome addition to the language. One thing I’ve been doing a lot with it is to remove explicit Reflection code that’s often necessary when you ‘dynamically’ need to walk and object hierarchy. In the past I’ve had a number of ReflectionUtils that used string based expressions to walk an object hierarchy. With the introduction of dynamic much of the ReflectionUtils code can be removed for cleaner code that runs considerably faster to boot. The old Way - Reflection Here’s a really contrived example, but assume for a second, you’d want to dynamically retrieve a Page.Request.Url.AbsoluteUrl based on a Page instance in an ASP.NET Web Page request. The strongly typed version looks like this: string path = Page.Request.Url.AbsolutePath; Now assume for a second that Page wasn’t available as a strongly typed instance and all you had was an object reference to start with and you couldn’t cast it (right I said this was contrived :-)) If you’re using raw Reflection code to retrieve this you’d end up writing 3 sets of Reflection calls using GetValue(). Here’s some internal code I use to retrieve Property values as part of ReflectionUtils: /// <summary> /// Retrieve a property value from an object dynamically. This is a simple version /// that uses Reflection calls directly. It doesn't support indexers. /// </summary> /// <param name="instance">Object to make the call on</param> /// <param name="property">Property to retrieve</param> /// <returns>Object - cast to proper type</returns> public static object GetProperty(object instance, string property) { return instance.GetType().GetProperty(property, ReflectionUtils.MemberAccess).GetValue(instance, null); } If you want more control over properties and support both fields and properties as well as array indexers a little more work is required: /// <summary> /// Parses Properties and Fields including Array and Collection references. /// Used internally for the 'Ex' Reflection methods. /// </summary> /// <param name="Parent"></param> /// <param name="Property"></param> /// <returns></returns> private static object GetPropertyInternal(object Parent, string Property) { if (Property == "this" || Property == "me") return Parent; object result = null; string pureProperty = Property; string indexes = null; bool isArrayOrCollection = false; // Deal with Array Property if (Property.IndexOf("[") > -1) { pureProperty = Property.Substring(0, Property.IndexOf("[")); indexes = Property.Substring(Property.IndexOf("[")); isArrayOrCollection = true; } // Get the member MemberInfo member = Parent.GetType().GetMember(pureProperty, ReflectionUtils.MemberAccess)[0]; if (member.MemberType == MemberTypes.Property) result = ((PropertyInfo)member).GetValue(Parent, null); else result = ((FieldInfo)member).GetValue(Parent); if (isArrayOrCollection) { indexes = indexes.Replace("[", string.Empty).Replace("]", string.Empty); if (result is Array) { int Index = -1; int.TryParse(indexes, out Index); result = CallMethod(result, "GetValue", Index); } else if (result is ICollection) { if (indexes.StartsWith("\"")) { // String Index indexes = indexes.Trim('\"'); result = CallMethod(result, "get_Item", indexes); } else { // assume numeric index int index = -1; int.TryParse(indexes, out index); result = CallMethod(result, "get_Item", index); } } } return result; } /// <summary> /// Returns a property or field value using a base object and sub members including . syntax. 
/// For example, you can access: oCustomer.oData.Company with (this,"oCustomer.oData.Company") /// This method also supports indexers in the Property value such as: /// Customer.DataSet.Tables["Customers"].Rows[0] /// </summary> /// <param name="Parent">Parent object to 'start' parsing from. Typically this will be the Page.</param> /// <param name="Property">The property to retrieve. Example: 'Customer.Entity.Company'</param> /// <returns></returns> public static object GetPropertyEx(object Parent, string Property) { Type type = Parent.GetType(); int at = Property.IndexOf("."); if (at < 0) { // Complex parse of the property return GetPropertyInternal(Parent, Property); } // Walk the . syntax - split into current object (Main) and further parsed objects (Subs) string main = Property.Substring(0, at); string subs = Property.Substring(at + 1); // Retrieve the next . section of the property object sub = GetPropertyInternal(Parent, main); // Now go parse the left over sections return GetPropertyEx(sub, subs); } As you can see there’s a fair bit of code involved into retrieving a property or field value reliably especially if you want to support array indexer syntax. This method is then used by a variety of routines to retrieve individual properties including one called GetPropertyEx() which can walk the dot syntax hierarchy easily. Anyway with ReflectionUtils I can  retrieve Page.Request.Url.AbsolutePath using code like this: string url = ReflectionUtils.GetPropertyEx(Page, "Request.Url.AbsolutePath") as string; This works fine, but is bulky to write and of course requires that I use my custom routines. It’s also quite slow as the code in GetPropertyEx does all sorts of string parsing to figure out which members to walk in the hierarchy. Enter dynamic – way easier! .NET 4.0’s dynamic type makes the above really easy. The following code is all that it takes: object objPage = Page; // force to object for contrivance :) dynamic page = objPage; // convert to dynamic from untyped object string scriptUrl = page.Request.Url.AbsolutePath; The dynamic type assignment in the first two lines turns the strongly typed Page object into a dynamic. The first assignment is just part of the contrived example to force the strongly typed Page reference into an untyped value to demonstrate the dynamic member access. The next line then just creates the dynamic type from the Page reference which allows you to access any public properties and methods easily. It also lets you access any child properties as dynamic types so when you look at Intellisense you’ll see something like this when typing Request.: In other words any dynamic value access on an object returns another dynamic object which is what allows the walking of the hierarchy chain. Note also that the result value doesn’t have to be explicitly cast as string in the code above – the compiler is perfectly happy without the cast in this case inferring the target type based on the type being assigned to. The dynamic conversion automatically handles the cast when making the final assignment which is nice making for natural syntnax that looks *exactly* like the fully typed syntax, but is completely dynamic. 
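    The flip side of that convenience is worth a small, hedged illustration (this example is mine, not from the article): because binding happens at runtime, a member typo that the compiler would catch on a strongly typed reference only surfaces as a RuntimeBinderException when the line executes.

        using System;
        using Microsoft.CSharp.RuntimeBinder;

        class DynamicCaveatDemo
        {
            static void Main()
            {
                dynamic value = "hello";                    // a plain string at runtime
                Console.WriteLine(value.Length);            // fine: string has Length

                try
                {
                    Console.WriteLine(value.AbsolutePath);  // no such member on string
                }
                catch (RuntimeBinderException ex)
                {
                    // With dynamic, typos surface here instead of at compile time.
                    Console.WriteLine("Binder error: " + ex.Message);
                }
            }
        }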
Note that you can also use indexers in the same natural syntax so the following also works on the dynamic page instance: string scriptUrl = page.Request.ServerVariables["SCRIPT_NAME"]; The dynamic type is going to make a lot of Reflection code go away as it’s simply so much nicer to be able to use natural syntax to write out code that previously required nasty Reflection syntax. Another interesting thing about the dynamic type is that it actually works considerably faster than Reflection. Check out the following methods that check performance: void Reflection() { Stopwatch stop = new Stopwatch(); stop.Start(); for (int i = 0; i < reps; i++) { // string url = ReflectionUtils.GetProperty(Page,"Title") as string;// "Request.Url.AbsolutePath") as string; string url = Page.GetType().GetProperty("Title", ReflectionUtils.MemberAccess).GetValue(Page, null) as string; } stop.Stop(); Response.Write("Reflection: " + stop.ElapsedMilliseconds.ToString()); } void Dynamic() { Stopwatch stop = new Stopwatch(); stop.Start(); dynamic page = Page; for (int i = 0; i < reps; i++) { string url = page.Title; //Request.Url.AbsolutePath; } stop.Stop(); Response.Write("Dynamic: " + stop.ElapsedMilliseconds.ToString()); } The dynamic code runs in 4-5 milliseconds while the Reflection code runs around 200+ milliseconds! There’s a bit of overhead in the first dynamic object call but subsequent calls are blazing fast and performance is actually much better than manual Reflection. Dynamic is definitely a huge win-win situation when you need dynamic access to objects at runtime.© Rick Strahl, West Wind Technologies, 2005-2010Posted in .NET  CSharp  

    Read the article

  • WiX 3 Tutorial: Custom EULA License and MSI localization

    - by Mladen Prajdic
    In this part of the ongoing Wix tutorial series we’ll take a look at how to localize your MSI into different languages. We’re still the mighty SuperForm: Program that takes care of all your label color needs. :) Localizing the MSI With WiX 3.0 localizing an MSI is pretty much a simple and straightforward process. First let look at the WiX project Properties->Build. There you can see "Cultures to build" textbox. Put specific cultures to build into the testbox or leave it empty to build all of them. Cultures have to be in correct culture format like en-US, en-GB or de-DE. Next we have to tell WiX which cultures we actually have in our project. Take a look at the first post in the series about Solution/Project structure and look at the Lang directory in the project structure picture. There we have de-de and en-us subfolders each with its own localized stuff. In the subfolders pay attention to the WXL files Loc_de-de.wxl and Loc_en-us.wxl. Each one has a <String Id="LANG"> under the WixLocalization root node. By including the string with id LANG we tell WiX we want that culture built. For English we have <String Id="LANG">1033</String>, for German <String Id="LANG">1031</String> in Loc_de-de.wxl and for French we’d have to create another file Loc_fr-FR.wxl and put <String Id="LANG">1036</String>. WXL files are localization files. Any string we want to localize we have to put in there. To reference it we use loc keyword like this: !(loc.IdOfTheVariable) => !(loc.MustCloseSuperForm) This is our Loc_en-us.wxl. Note that German wxl has an identical structure but values are in German. <?xml version="1.0" encoding="utf-8"?><WixLocalization Culture="en-us" xmlns="http://schemas.microsoft.com/wix/2006/localization" Codepage="1252"> <String Id="LANG">1033</String> <String Id="ProductName">SuperForm</String> <String Id="LicenseRtf" Overridable="yes">\Lang\en-us\EULA_en-us.rtf</String> <String Id="ManufacturerName">My Company Name</String> <String Id="AppNotSupported">This application is is not supported on your current OS. Minimal OS supported is Windows XP SP2</String> <String Id="DotNetFrameworkNeeded">.NET Framework 3.5 is required. Please install the .NET Framework then run this installer again.</String> <String Id="MustCloseSuperForm">Must close SuperForm!</String> <String Id="SuperFormNewerVersionInstalled">A newer version of !(loc.ProductName) is already installed.</String> <String Id="ProductKeyCheckDialog_Title">!(loc.ProductName) setup</String> <String Id="ProductKeyCheckDialogControls_Title">!(loc.ProductName) Product check</String> <String Id="ProductKeyCheckDialogControls_Description">Plese Enter following information to perform the licence check.</String> <String Id="ProductKeyCheckDialogControls_FullName">Full Name:</String> <String Id="ProductKeyCheckDialogControls_Organization">Organization:</String> <String Id="ProductKeyCheckDialogControls_ProductKey">Product Key:</String> <String Id="ProductKeyCheckDialogControls_InvalidProductKey">The product key you entered is invalid. Please call user support.</String> </WixLocalization>   As you can see from the file we can use localization variables in other variables like we do for SuperFormNewerVersionInstalled string. ProductKeyCheckDialog* strings are to localize a custom dialog for Product key check which we’ll look at in the next post. Built in dialog text localization Under the de-de folder there’s also the WixUI_de-de.wxl file. This files contains German translations of all texts that are in WiX built in dialogs. 
It can be downloaded from WiX 3.0.5419.0 Source Forge site. Download the wix3-sources.zip and go to \src\ext\UIExtension\wixlib. There you’ll find already translated all WiX texts in 12 Languages. Localizing the custom EULA license Here it gets ugly. We can override the default EULA license easily by overriding WixUILicenseRtf WiX variable like this: <WixVariable Id="WixUILicenseRtf" Value="License.rtf" /> where License.rtf is the name of your custom EULA license file. The downside of this method is that you can only have one license file which means no localization for it. That’s why we need to make a workaround. License is checked on a dialog name LicenseAgreementDialog. What we have to do is overwrite that dialog and insert the functionality for localization. This is a code for LicenseAgreementDialogOverwritten.wxs, an overwritten LicenseAgreementDialog that supports localization. LicenseAcceptedOverwritten replaces the LicenseAccepted built in variable. <?xml version="1.0" encoding="UTF-8" ?><Wix xmlns="http://schemas.microsoft.com/wix/2006/wi"> <Fragment> <UI> <Dialog Id="LicenseAgreementDialogOverwritten" Width="370" Height="270" Title="!(loc.LicenseAgreementDlg_Title)"> <Control Id="LicenseAcceptedOverwrittenCheckBox" Type="CheckBox" X="20" Y="207" Width="330" Height="18" CheckBoxValue="1" Property="LicenseAcceptedOverwritten" Text="!(loc.LicenseAgreementDlgLicenseAcceptedCheckBox)" /> <Control Id="Back" Type="PushButton" X="180" Y="243" Width="56" Height="17" Text="!(loc.WixUIBack)" /> <Control Id="Next" Type="PushButton" X="236" Y="243" Width="56" Height="17" Default="yes" Text="!(loc.WixUINext)"> <Publish Event="SpawnWaitDialog" Value="WaitForCostingDlg">CostingComplete = 1</Publish> <Condition Action="disable"> <![CDATA[ LicenseAcceptedOverwritten <> "1" ]]> </Condition> <Condition Action="enable">LicenseAcceptedOverwritten = "1"</Condition> </Control> <Control Id="Cancel" Type="PushButton" X="304" Y="243" Width="56" Height="17" Cancel="yes" Text="!(loc.WixUICancel)"> <Publish Event="SpawnDialog" Value="CancelDlg">1</Publish> </Control> <Control Id="BannerBitmap" Type="Bitmap" X="0" Y="0" Width="370" Height="44" TabSkip="no" Text="!(loc.LicenseAgreementDlgBannerBitmap)" /> <Control Id="LicenseText" Type="ScrollableText" X="20" Y="60" Width="330" Height="140" Sunken="yes" TabSkip="no"> <!-- This is original line --> <!--<Text SourceFile="!(wix.WixUILicenseRtf=$(var.LicenseRtf))" />--> <!-- To enable EULA localization we change it to this --> <Text SourceFile="$(var.ProjectDir)\!(loc.LicenseRtf)" /> <!-- In each of localization files (wxl) put line like this: <String Id="LicenseRtf" Overridable="yes">\Lang\en-us\EULA_en-us.rtf</String>--> </Control> <Control Id="Print" Type="PushButton" X="112" Y="243" Width="56" Height="17" Text="!(loc.WixUIPrint)"> <Publish Event="DoAction" Value="WixUIPrintEula">1</Publish> </Control> <Control Id="BannerLine" Type="Line" X="0" Y="44" Width="370" Height="0" /> <Control Id="BottomLine" Type="Line" X="0" Y="234" Width="370" Height="0" /> <Control Id="Description" Type="Text" X="25" Y="23" Width="340" Height="15" Transparent="yes" NoPrefix="yes" Text="!(loc.LicenseAgreementDlgDescription)" /> <Control Id="Title" Type="Text" X="15" Y="6" Width="200" Height="15" Transparent="yes" NoPrefix="yes" Text="!(loc.LicenseAgreementDlgTitle)" /> </Dialog> </UI> </Fragment></Wix>   Look at the Control with Id "LicenseText” and read the comments. We’ve changed the original license text source to "$(var.ProjectDir)\!(loc.LicenseRtf)". 
var.ProjectDir is the directory of the project file. The !(loc.LicenseRtf) is where the magic happens. Scroll up and take a look at the wxl localization file example. We have the LicenseRtf declared there and it’s been made overridable so developers can change it if they want. The value of the LicenseRtf is the path to our localized EULA relative to the WiX project directory. With little hacking we’ve achieved a fully localizable installer package.   The final step is to insert the extended LicenseAgreementDialogOverwritten license dialog into the installer GUI chain. This is how it’s done under the <UI> node of course.   <UI> <!-- code to be discussed in later posts –> <!-- BEGIN UI LOGIC FOR CLEAN INSTALLER --> <Publish Dialog="WelcomeDlg" Control="Next" Event="NewDialog" Value="LicenseAgreementDialogOverwritten">1</Publish> <Publish Dialog="LicenseAgreementDialogOverwritten" Control="Back" Event="NewDialog" Value="WelcomeDlg">1</Publish> <Publish Dialog="LicenseAgreementDialogOverwritten" Control="Next" Event="NewDialog" Value="ProductKeyCheckDialog">LicenseAcceptedOverwritten = "1" AND NOT OLDER_VERSION_FOUND</Publish> <Publish Dialog="InstallDirDlg" Control="Back" Event="NewDialog" Value="ProductKeyCheckDialog">1</Publish> <!-- END UI LOGIC FOR CLEAN INSTALLER –> <!-- code to be discussed in later posts --></UI> For a thing that should be simple for the end developer to do, localization can be a bit advanced for the novice WiXer. Hope this post makes the journey easier and that next versions of WiX improve this process. WiX 3 tutorial by Mladen Prajdic navigation WiX 3 Tutorial: Solution/Project structure and Dev resources WiX 3 Tutorial: Understanding main wxs and wxi file WiX 3 Tutorial: Generating file/directory fragments with Heat.exe  WiX 3 Tutorial: Custom EULA License and MSI localization WiX 3 Tutorial: Product Key Check custom action WiX 3 Tutorial: Building an updater WiX 3 Tutorial: Icons and installer pictures WiX 3 Tutorial: Creating a Bootstrapper
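A small aside on the LANG values used in the WXL files earlier (1033, 1031, 1036): these are just Windows locale IDs (LCIDs) for the cultures being built. If you ever need to look one up, the sketch below, my own throwaway console helper rather than part of the WiX tooling, prints the LCID for each culture in this tutorial using the standard CultureInfo class:

    // Assumption: run as a scratch console app purely to look up LCID values.
    using System;
    using System.Globalization;

    class LcidLookup
    {
        static void Main()
        {
            string[] cultures = { "en-US", "de-DE", "fr-FR" }; // the cultures mentioned in this tutorial
            foreach (string name in cultures)
            {
                CultureInfo ci = new CultureInfo(name);
                Console.WriteLine("{0} -> {1}", ci.Name, ci.LCID); // en-US -> 1033, de-DE -> 1031, fr-FR -> 1036
            }
        }
    }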

    Read the article

  • Ajax Control Toolkit July 2011 Release and the New HTML Editor Extender

    - by Stephen Walther
    I’m happy to announce the July 2011 release of the Ajax Control Toolkit which includes important bug fixes and a completely new HTML Editor Extender control. You can download the July 2011 Release by visiting the Ajax Control Toolkit CodePlex site at: http://AjaxControlToolkit.CodePlex.com Using the New HTML Editor Extender Control You can use the new HTML Editor Extender to extend any standard ASP.NET TextBox control so that it supports rich formatting such as bold, italics, bulleted lists, numbered lists, typefaces and different foreground and background colors. The following code illustrates how you can extend a standard ASP.NET TextBox control with the HtmlEditorExtender: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Simple.aspx.cs" Inherits="WebApplication1.Simple" %> <%@ Register TagPrefix="asp" Namespace="AjaxControlToolkit" Assembly="AjaxControlToolkit" %> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>Simple</title> </head> <body> <form id="form1" runat="server"> <asp:ToolkitScriptManager runat="Server" /> <asp:TextBox ID="txtComments" TextMode="MultiLine" Columns="60" Rows="8" runat="server" /> <asp:HtmlEditorExtender TargetControlID="txtComments" runat="server" /> </form> </body> </html> This page has the following three controls: ToolkitScriptManager – The ToolkitScriptManager renders all of the scripts required by the Ajax Control Toolkit. TextBox – The TextBox control is a standard ASP.NET TextBox which is set to display multiple lines (a TextArea instead of an Input element). HtmlEditorExtender – The HtmlEditorExtender is set to extend the TextBox control. You can use the standard TextBox Text property to read the rich text entered into the TextBox control on the server. Lightweight and HTML5 The HTML Editor Extender works on all modern browsers including the most recent versions of Mozilla Firefox (Firefox 5), Google Chrome (Chrome 12), and Apple Safari (Safari 5). Furthermore, the HTML Editor Extender is compatible with Microsoft Internet Explorer 6 and newer. The HTML Editor Extender is very lightweight. It takes advantage of the HTML5 ContentEditable attribute so it does not require an iframe or complex browser workarounds. If you select View Source in your browser while using the HTML Editor Extender, we hope that you will be pleasantly surprised by how little markup and script is generated by the HTML Editor Extender. Customizable Toolbar Buttons Depending on the web application that you are building, you will want to display different toolbar buttons with the HTML Editor Extender. One of the design goals of the HTML Editor Extender was to make it very easy for you to customize the toolbar buttons. Imagine, for example, that you want to use the HTML Editor Extender when accepting comments on blog posts. In that case, you might want to restrict the type of formatting that a user can display. You might want to enable a user to format text as bold or italic but you do not want the user to make any other formatting changes. 
The following page illustrates how you can customize the HTML Editor Extender toolbar: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="CustomToolbar.aspx.cs" Inherits="WebApplication1.CustomToolbar" %> <%@ Register TagPrefix="asp" Namespace="AjaxControlToolkit" Assembly="AjaxControlToolkit" %> <html> <head runat="server"> <title>Custom Toolbar</title> </head> <body> <form id="form1" runat="server"> <asp:ToolkitScriptManager Runat="server" /> <asp:TextBox ID="txtComments" TextMode="MultiLine" Columns="50" Rows="10" Text="Hello <b>world!</b>" Runat="server" /> <asp:HtmlEditorExtender TargetControlID="txtComments" runat="server"> <Toolbar> <asp:Bold /> <asp:Italic /> </Toolbar> </asp:HtmlEditorExtender> </form> </body> </html> Notice that the HTML Editor Extender in the page above has a Toolbar subtag. You can list the toolbar buttons which you want to appear within the subtag. In the case above, only Bold and Italic buttons are displayed. Here is a complete list of the Toolbar buttons currently supported by the HTML Editor Extender: Undo Redo Bold Italic Underline StrikeThrough Subscript Superscript JustifyLeft JustifyCenter JustifyRight JustifyFull InsertOrderedList InsertUnorderedList CreateLink UnLink RemoveFormat SelectAll UnSelect Delete Cut Copy Paste BackgroundColorSelector ForeColorSelector FontNameSelector FontSizeSelector Indent Outdent InsertHorizontalRule HorizontalSeparator Of course the HTML Editor Extender was designed to be extensible. You can create your own buttons and add them to the control. Compatible with the AntiXSS Library When using the HTML Editor Extender on a public facing website, we strongly recommend that you use the HTML Editor Extender with the AntiXSS Library. If you allow users to submit arbitrary HTML, and you don’t take any action to strip out malicious markup, then you are opening your website to Cross-Site Scripting Attacks (XSS attacks). The HTML Editor Extender uses the Provider Model to support different Sanitizer Providers. The July 2011 release of the Ajax Control Toolkit ships with a single Sanitizer Provider which uses the AntiXSS library (see http://AntiXss.CodePlex.com ). A Sanitizer Provider is responsible for sanitizing HTML markup by removing any malicious elements, attributes, and attribute values. For example, the AntiXss Sanitizer Provider will take the following block of HTML: <b><a href=""javascript:doEvil()"">Visit Grandma</a></b> <script>doEvil()</script> And return the following sanitized block of HTML: <b><a href="">Visit Grandma</a></b> Notice that the JavaScript href and <SCRIPT> tag are both stripped out. Be aware that there are a depressingly large number of ways to sneak evil markup into your HTML. You definitely want a Sanitizer as a safety net. Before you can use the AntiXSS Sanitizer Provider, you must add three assemblies to your web application: AntiXSSLibrary.dll, HtmlSanitizationLibrary.dll, and SanitizerProviders.dll. All three assemblies are included with the CodePlex download of the Ajax Control Toolkit in the SanitizerProviders folder. 
Here’s how you modify your web.config file to use the AntiXSS Sanitizer Provider: <configuration> <configSections> <sectionGroup name="system.web"> <section name="sanitizer" requirePermission="false" type="AjaxControlToolkit.Sanitizer.ProviderSanitizerSection, AjaxControlToolkit"/> </sectionGroup> </configSections> <system.web> <compilation targetFramework="4.0" debug="true"/> <sanitizer defaultProvider="AntiXssSanitizerProvider"> <providers> <add name="AntiXssSanitizerProvider" type="AjaxControlToolkit.Sanitizer.AntiXssSanitizerProvider"></add> </providers> </sanitizer> </system.web> </configuration> You can detect whether the HTML Editor Extender is using the AntiXSS Sanitizer Provider by checking the HtmlEditorExtender SanitizerProvider property like this: if (MyHtmlEditorExtender.SanitizerProvider == null) { throw new Exception("Please enable the AntiXss Sanitizer!"); } When the SanitizerProvider property has the value null, you know that a Sanitizer Provider has not been configured in the web.config file. Because the AntiXSS library requires Full Trust, you cannot use the AntiXSS Sanitizer Provider with most shared website hosting providers. Because most shared hosting providers only support Medium Trust and not Full Trust, we do not recommend using the HTML Editor Extender with a public website hosted with a shared hosting provider. Why a New HTML Editor Control? The Ajax Control Toolkit now includes two HTML Editor controls. Why did we introduce a new HTML Editor control when there was already an existing HTML Editor? We think you will like the new HTML Editor much more than the previous one. We had several goals with the new HTML Editor Extender: Lightweight – We wanted to leverage HTML5 to create a lightweight HTML Editor. The new HTML Editor generates much less markup and script than the previous HTML Editor. Secure – We wanted to make it easy to integrate the AntiXSS library with the HTML Editor. If you are creating a public facing website, we strongly recommend that you use the AntiXSS Provider. Customizable – We wanted to make it easy for users to customize the toolbar buttons displayed by the HTML Editor. Compatibility – We wanted to ensure that the HTML Editor will work with the latest versions of the most popular browsers (including Internet Explorer 6 and higher). The old HTML Editor control is still included in the Ajax Control Toolkit and continues to live in the AjaxControlToolkit.HTMLEditor namespace. We have not modified the control and you can continue to use the control in the same way as you have used it in the past. However, we hope that you will consider migrating to the new HTML Editor Extender for the reasons listed above. Summary We’ve introduced a new Ajax Control Toolkit control with this release. I want to thank the developers and testers on the Superexpert team for the huge amount of work which they put into this control. It was a non-trivial task to build an entirely new control which has the complexity of the HTML Editor in less than 6 weeks. Please let us know what you think! We want to hear your feedback. If you discover issues with the new HTML Editor Extender control, or you have questions about the control, or you have ideas for how it can be improved, then please post them to this blog. Tomorrow starts a new sprint
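One more practical note on the server side of the first markup sample: reading the rich text back out of the extended TextBox and refusing to continue if no Sanitizer Provider is configured. The sketch below is my own; the btnSave handler name is an assumption (the original markup has no button), while txtComments, MyHtmlEditorExtender and the WebApplication1.Simple page come from the samples above.

    // Sketch only: read the submitted HTML on postback and insist on a configured sanitizer.
    using System;
    using AjaxControlToolkit;

    namespace WebApplication1
    {
        public partial class Simple : System.Web.UI.Page
        {
            // txtComments and MyHtmlEditorExtender are assumed to be the designer-generated fields for the markup above.
            protected void btnSave_Click(object sender, EventArgs e)
            {
                if (MyHtmlEditorExtender.SanitizerProvider == null)
                {
                    throw new InvalidOperationException("No Sanitizer Provider configured - check web.config.");
                }

                // The extender runs the posted value through the configured Sanitizer Provider,
                // so Text should already be cleaned by the time it is read here.
                string safeHtml = txtComments.Text;
                // ... persist safeHtml to your data store here ...
            }
        }
    }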

    Read the article

  • Version Assemblies with TFS 2010 Continuous Integration

    - by Steve Michelotti
When I first heard that TFS 2010 had moved to Workflow Foundation for Team Build, I was *extremely* skeptical. I’ve loved MSBuild and didn’t quite understand the reasons for this change. In fact, given that I’ve been exclusively using Cruise Control for Continuous Integration (CI) for the last 5+ years of my career, I was skeptical of TFS for CI in general. However, after going through the learning process for TFS 2010 recently, I’m starting to become a believer. I’m also starting to see some of the benefits of Workflow Foundation for the overall processing because it gives you constructs not available in MSBuild, such as parallel tasks, better control flow constructs, and a slightly better customization story. The first customization I had to make to the build process was to version the assemblies of my solution. This is not new. In fact, I’d recommend reading Mike Fourie’s well-known post on Versioning Code in TFS before you get started. This post describes several foundational aspects of versioning assemblies regardless of your version of TFS. The main points are: 1) don’t use source control operations for your version file, 2) use a schema like <Major>.<Minor>.<IncrementalNumber>.0, and 3) do not keep AssemblyVersion and AssemblyFileVersion in sync. To do this in TFS 2010, the best post I’ve found has been Jim Lamb’s post on building a custom TFS 2010 workflow activity. Overall, this post is excellent, but the primary issue I have with it is that the assembly version numbers produced are based on a date and look like this: “2010.5.15.1”. This is definitely not what I want. I want to be able to communicate to the developers and stakeholders that we are producing the “1.1 release” or “1.2 release” – which would have an assembly version number of “1.1.317.0” for example. In this post, I’ll walk through the process of customizing the assembly version number based on this method – customizing the concepts in Lamb’s post to suit my needs. I’ll also be combining this with the concepts of Fourie’s post – particularly with regards to the standards around how to version the assemblies. The first thing I’ll do is add a file called SolutionAssemblyVersionInfo.cs to the root of my solution that looks like this: 1: using System; 2: using System.Reflection; 3: [assembly: AssemblyVersion("1.1.0.0")] 4: [assembly: AssemblyFileVersion("1.1.0.0")] I’ll then add that file as a Visual Studio link file to each project in my solution by right-clicking the project, “Add – Existing Item…” then, when I click the SolutionAssemblyVersionInfo.cs file, making sure I “Add As Link”: Now the Solution Explorer will show our file. We can see that it’s a “link” file because of the black arrow in the icon within all our projects. Of course you’ll need to remove the AssemblyVersion and AssemblyFileVersion attributes from the AssemblyInfo.cs files to avoid duplicate attributes, since they now live in the SolutionAssemblyVersionInfo.cs file. This is an extremely common technique so that all the projects in our solution can be versioned as a unit. At this point, we’re ready to write our custom activity. The primary consideration is that I want the developer and/or tech lead to be able to easily be in control of the Major.Minor and then I want the CI process to add the third number with a unique incremental number. We’ll leave the fourth position always “0” for now – it’s held in reserve in case the day ever comes when we need to do an emergency patch to Production based on a branched version.   
Writing the Custom Workflow Activity Similar to Lamb’s post, I’m going to write two custom workflow activities. The “outer” activity (a xaml activity) will be pretty straight forward. It will check if the solution version file exists in the solution root and, if so, delegate the replacement of version to the AssemblyVersionInfo activity which is a CodeActivity highlighted in red below:   Notice that the arguments of this activity are the “solutionVersionFile” and “tfsBuildNumber” which will be passed in. The tfsBuildNumber passed in will look something like this: “CI_MyApplication.4” and we’ll need to grab the “4” (i.e., the incremental revision number) and put that in the third position. Then we’ll need to honor whatever was specified for Major.Minor in the SolutionAssemblyVersionInfo.cs file. For example, if the SolutionAssemblyVersionInfo.cs file had “1.1.0.0” for the AssemblyVersion (as shown in the first code block near the beginning of this post), then we want to resulting file to have “1.1.4.0”. Before we do anything, let’s put together a unit test for all this so we can know if we get it right: 1: [TestMethod] 2: public void Assembly_version_should_be_parsed_correctly_from_build_name() 3: { 4: // arrange 5: const string versionFile = "SolutionAssemblyVersionInfo.cs"; 6: WriteTestVersionFile(versionFile); 7: var activity = new VersionAssemblies(); 8: var arguments = new Dictionary<string, object> { 9: { "tfsBuildNumber", "CI_MyApplication.4"}, 10: { "solutionVersionFile", versionFile} 11: }; 12:   13: // act 14: var result = WorkflowInvoker.Invoke(activity, arguments); 15:   16: // assert 17: Assert.AreEqual("1.2.4.0", (string)result["newAssemblyFileVersion"]); 18: var lines = File.ReadAllLines(versionFile); 19: Assert.IsTrue(lines.Contains("[assembly: AssemblyVersion(\"1.2.0.0\")]")); 20: Assert.IsTrue(lines.Contains("[assembly: AssemblyFileVersion(\"1.2.4.0\")]")); 21: } 22: 23: private void WriteTestVersionFile(string versionFile) 24: { 25: var fileContents = "using System.Reflection;\n" + 26: "[assembly: AssemblyVersion(\"1.2.0.0\")]\n" + 27: "[assembly: AssemblyFileVersion(\"1.2.0.0\")]"; 28: File.WriteAllText(versionFile, fileContents); 29: }   At this point, the code for our AssemblyVersion activity is pretty straight forward: 1: [BuildActivity(HostEnvironmentOption.Agent)] 2: public class AssemblyVersionInfo : CodeActivity 3: { 4: [RequiredArgument] 5: public InArgument<string> FileName { get; set; } 6:   7: [RequiredArgument] 8: public InArgument<string> TfsBuildNumber { get; set; } 9:   10: public OutArgument<string> NewAssemblyFileVersion { get; set; } 11:   12: protected override void Execute(CodeActivityContext context) 13: { 14: var solutionVersionFile = this.FileName.Get(context); 15: 16: // Ensure that the file is writeable 17: var fileAttributes = File.GetAttributes(solutionVersionFile); 18: File.SetAttributes(solutionVersionFile, fileAttributes & ~FileAttributes.ReadOnly); 19:   20: // Prepare assembly versions 21: var majorMinor = GetAssemblyMajorMinorVersionBasedOnExisting(solutionVersionFile); 22: var newBuildNumber = GetNewBuildNumber(this.TfsBuildNumber.Get(context)); 23: var newAssemblyVersion = string.Format("{0}.{1}.0.0", majorMinor.Item1, majorMinor.Item2); 24: var newAssemblyFileVersion = string.Format("{0}.{1}.{2}.0", majorMinor.Item1, majorMinor.Item2, newBuildNumber); 25: this.NewAssemblyFileVersion.Set(context, newAssemblyFileVersion); 26:   27: // Perform the actual replacement 28: var contents = this.GetFileContents(newAssemblyVersion, 
newAssemblyFileVersion); 29: File.WriteAllText(solutionVersionFile, contents); 30:   31: // Restore the file's original attributes 32: File.SetAttributes(solutionVersionFile, fileAttributes); 33: } 34:   35: #region Private Methods 36:   37: private string GetFileContents(string newAssemblyVersion, string newAssemblyFileVersion) 38: { 39: var cs = new StringBuilder(); 40: cs.AppendLine("using System.Reflection;"); 41: cs.AppendFormat("[assembly: AssemblyVersion(\"{0}\")]", newAssemblyVersion); 42: cs.AppendLine(); 43: cs.AppendFormat("[assembly: AssemblyFileVersion(\"{0}\")]", newAssemblyFileVersion); 44: return cs.ToString(); 45: } 46:   47: private Tuple<string, string> GetAssemblyMajorMinorVersionBasedOnExisting(string filePath) 48: { 49: var lines = File.ReadAllLines(filePath); 50: var versionLine = lines.Where(x => x.Contains("AssemblyVersion")).FirstOrDefault(); 51:   52: if (versionLine == null) 53: { 54: throw new InvalidOperationException("File does not contain [assembly: AssemblyVersion] attribute"); 55: } 56:   57: return ExtractMajorMinor(versionLine); 58: } 59:   60: private static Tuple<string, string> ExtractMajorMinor(string versionLine) 61: { 62: var firstQuote = versionLine.IndexOf('"') + 1; 63: var secondQuote = versionLine.IndexOf('"', firstQuote); 64: var version = versionLine.Substring(firstQuote, secondQuote - firstQuote); 65: var versionParts = version.Split('.'); 66: return new Tuple<string, string>(versionParts[0], versionParts[1]); 67: } 68:   69: private string GetNewBuildNumber(string buildName) 70: { 71: return buildName.Substring(buildName.LastIndexOf(".") + 1); 72: } 73:   74: #endregion 75: }   At this point the final step is to incorporate this activity into the overall build template. Make a copy of the DefaultTempate.xaml – we’ll call it DefaultTemplateWithVersioning.xaml. Before the build and labeling happens, drag the VersionAssemblies activity in. Then set the LabelName variable to “BuildDetail.BuildDefinition.Name + "-" + newAssemblyFileVersion since the newAssemblyFileVersion was produced by our activity.   Configuring CI Once you add your solution to source control, you can configure CI with the build definition window as shown here. The main difference is that we’ll change the Process tab to reflect a different build number format and choose our custom build process file:   When the build completes, we’ll see the name of our project with the unique revision number:   If we look at the detailed build log for the latest build, we’ll see the label being created with our custom task:     We can now look at the history labels in TFS and see the project name with the labels (the Assignment activity I added to the workflow):   Finally, if we look at the physical assemblies that are produced, we can right-click on any assembly in Windows Explorer and see the assembly version in its properties:   Full Traceability We now have full traceability for our code. There will never be a question of what code was deployed to Production. You can always see the assembly version in the properties of the physical assembly. That can be traced back to a label in TFS where the unique revision number matches. The label in TFS gives you the complete snapshot of the code in your source control repository at the time the code was built. This type of process for full traceability has been used for many years for CI – in fact, I’ve done similar things with CCNet and SVN for quite some time. This is simply the TFS implementation of that pattern. 
The new extensibility points that TFS 2010 gives you make these kinds of build-process customizations quite easy once you get over the initial learning curve.
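As a quick way to confirm the traceability claim on any drop, here is a small verification sketch of my own (not part of the build template; the assembly path is a placeholder) that prints both version attributes from a built DLL. The AssemblyVersion should stay at Major.Minor.0.0 while the AssemblyFileVersion carries the CI revision:

    // Sketch only: inspect the versions stamped into a built assembly.
    using System;
    using System.Diagnostics;
    using System.Reflection;

    class VersionCheck
    {
        static void Main()
        {
            string path = @"C:\Drops\MyApplication\MyApplication.Core.dll"; // placeholder path to a dropped assembly

            Version assemblyVersion = AssemblyName.GetAssemblyName(path).Version;   // e.g. 1.1.0.0
            string fileVersion = FileVersionInfo.GetVersionInfo(path).FileVersion;  // e.g. 1.1.4.0

            Console.WriteLine("AssemblyVersion:     " + assemblyVersion);
            Console.WriteLine("AssemblyFileVersion: " + fileVersion);
        }
    }

The third number of the file version is the same incremental revision that ends up in the TFS label name, which is what lets you walk from a deployed binary back to the exact sources.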

    Read the article

  • Solution: Testing Web Services with MSTest on Team Build

    - by Martin Hinshelwood
    Guess what. About 20 minutes after I fixed the build, Allan broke it again! Update: 4th March 2010 – After having huge problems getting this working I read Billy Wang’s post which showed me the light. The problem here is that even though the test passes locally it will not during an Automated Build. When you send your tests to the build server it does not understand that you want to spin up the web site and run tests against that! When you run the test in Visual Studio it spins up the web site anyway, but would you expect your test to pass if you told the website not to spin up? Of course not. So, when you send the code to the build server you need to tell it what to spin up. First, the best way to get the parameters you need is to right click on the method you want to test and select “Create Unit Test”. This will detect wither you are running in IIS or ASP.NET Development Server or None, and create the relevant tags. Figure: Right clicking on “SaveDefaultProjectFile” will produce a context menu with “Create Unit tests…” on it. If you use this option it will AutoDetect most of the Attributes that are required. /// <summary> ///A test for SSW.SQLDeploy.SilverlightUI.Web.Services.IProfileService.SaveDefaultProjectFile ///</summary> // TODO: Ensure that the UrlToTest attribute specifies a URL to an ASP.NET page (for example, // http://.../Default.aspx). This is necessary for the unit test to be executed on the web server, // whether you are testing a page, web service, or a WCF service. [TestMethod()] [HostType("ASP.NET")] [AspNetDevelopmentServerHost("D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web", "/")] [UrlToTest("http://localhost:3100/")] [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")] public void SaveDefaultProjectFileTest() { IProfileService target = new ProfileService(); // TODO: Initialize to an appropriate value string strComputerName = string.Empty; // TODO: Initialize to an appropriate value bool expected = false; // TODO: Initialize to an appropriate value bool actual; actual = target.SaveDefaultProjectFile(strComputerName); Assert.AreEqual(expected, actual); Assert.Inconclusive("Verify the correctness of this test method."); } Figure: Auto created code that shows the attributes required to run correctly in IIS or in this case ASP.NET Development Server If you are a purist and don’t like creating unit tests like this then you just need to add the three attributes manually. HostType – This attribute specified what host to use. Its an extensibility point, so you could write your own. Or you could just use “ASP.NET”. UrlToTest – This specifies the start URL. For most tests it does not matter which page you call, as long as it is a valid page otherwise your test may not run on the server, but may pass anyway. AspNetDevelopmentServerHost – This is a nasty one, it is only used if you are using ASP.NET Development Host and is unnecessary if you are using IIS. This sets the host settings and the first value MUST be the physical path to the root of your web application. OK, so all that was rubbish and I could not get anything working using the MSDN documentation. Google provided very little help until I ran into Billy Wang’s post  and I heard that heavenly music that all developers hear when understanding dawns that what they have been doing up until now is just plain stupid. I am sure that the above will work when I am doing Web Unit Tests, but there is a much easier way when doing web services. 
You need to add the AspNetDevelopmentServer attribute to your code. This will tell MSTest to spin up an ASP.NET Development Server to host the service. Specify the path to the web application you want to use. [AspNetDevelopmentServer("WebApp1", "D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web")] [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")] [TestMethod] public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True() { ProfileServiceClient target = new ProfileServiceClient(); bool isTrue = target.SaveDefaultProjectFile("Mav"); Assert.AreEqual(true, isTrue); } Figure: This AspNetDevelopmentServer attribute will make sure that the specified web application is launched. Now we can run the test and have it pass, but if the dynamically assigned ASP.NET Development Server port changes, what happens to the details in your app.config that were generated when you created a reference to the web service? Well, they would be wrong and the test would fail. This is where Billy’s helper method comes in. Once you have created an instance of your service client and it has loaded the config, but before you make any calls to it, you need to go in and dynamically set the Endpoint address to the same address as your dynamically hosted web application. using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.VisualStudio.TestTools.UnitTesting; using System.Reflection; using System.ServiceModel.Description; using System.ServiceModel; namespace SSW.SQLDeploy.Test { class WcfWebServiceHelper { public static bool TryUrlRedirection(object client, TestContext context, string identifier) { bool result = true; try { PropertyInfo property = client.GetType().GetProperty("Endpoint"); string webServer = context.Properties[string.Format("AspNetDevelopmentServer.{0}", identifier)].ToString(); Uri webServerUri = new Uri(webServer); ServiceEndpoint endpoint = (ServiceEndpoint)property.GetValue(client, null); EndpointAddressBuilder builder = new EndpointAddressBuilder(endpoint.Address); builder.Uri = new Uri(endpoint.Address.Uri.OriginalString.Replace(endpoint.Address.Uri.Authority, webServerUri.Authority)); endpoint.Address = builder.ToEndpointAddress(); } catch (Exception e) { context.WriteLine(e.Message); result = false; } return result; } } } Figure: This fixes the problem of the URL in your app.config not being the same as the dynamically hosted ASP.NET Development Server port. We can now add a call to this method after we have created the proxy object, and change the Endpoint for the service to the correct one. This call is wrapped in an assert because if it fails there is no point in continuing. [AspNetDevelopmentServer("WebApp1", "D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web")] [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")] [TestMethod] public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True() { ProfileServiceClient target = new ProfileServiceClient(); Assert.IsTrue(WcfWebServiceHelper.TryUrlRedirection(target, TestContext, "WebApp1")); bool isTrue = target.SaveDefaultProjectFile("Mav"); Assert.AreEqual(true, isTrue); } Figure: Editing the Endpoint from the app.config on the fly to match the dynamically hosted ASP.NET Development Server URL and port is now easy. As you can imagine, AspNetDevelopmentServer poses some problems if you have multiple developers. What are the chances of everyone using the same location to store the source? 
What about if you are using a build server, how do you tell MSTest where to look for the files? To the rescue comes a property called "%PathToWebRoot%", which is always right on the build server. It will always point to your build drop folder for your solution’s web sites, which will be "\\tfs.ssw.com.au\BuildDrop\[BuildName]\Debug\_PrecompiledWeb\" or whatever your build drop location is. So let’s change the code above to add this. [AspNetDevelopmentServer("WebApp1", "%PathToWebRoot%\\SSW.SQLDeploy.SilverlightUI.Web")] [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")] [TestMethod] public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True() { ProfileServiceClient target = new ProfileServiceClient(); Assert.IsTrue(WcfWebServiceHelper.TryUrlRedirection(target, TestContext, "WebApp1")); bool isTrue = target.SaveDefaultProjectFile("Mav"); Assert.AreEqual(true, isTrue); } Figure: Adding %PathToWebRoot% to the AspNetDevelopmentServer path makes it work everywhere. Now we have another problem… this will ONLY run on the build server and will fail locally, as %PathToWebRoot%’s default value is "C:\Users\[profile]\Documents\Visual Studio 2010\Projects". Well, this sucks… How do we get the test to run on any build server and any developer laptop? Open "Tools | Options | Test Tools | Test Execution" in Visual Studio and you will see a field called "Web application root directory". This is where you override that default. Figure: You can override the default website location for tests. In my case I would put in "D:\Workspaces\SSW\SSW\SqlDeploy\DEV\Main", and all the developers working with this branch would put in the folder that they have mapped. Can you see a problem? What if I create a "$/SSW/SqlDeploy/DEV/34567" branch from Main and I want to run tests in there? Well… I would have to change the value above. This is not ideal, but as you can put your projects anywhere on a computer, it has to be done. Conclusion Although this looks convoluted and complicated, there are real problems being solved here that mean you have a test-ANYWHERE solution: any build server, any developer workstation. Resources: http://billwg.blogspot.com/2009/06/testing-wcf-web-services.html http://tough-to-find.blogspot.com/2008/04/testing-asmx-web-services-in-visual.html http://msdn.microsoft.com/en-us/library/ms243399(VS.100).aspx http://blogs.msdn.com/dscruggs/archive/2008/09/29/web-tests-unit-tests-the-asp-net-development-server-and-code-coverage.aspx http://www.5z5.com/News/?543f8bc8b36b174f Technorati Tags: VS2010,MSTest,Team Build 2010,Team Build,Visual Studio,Visual Studio 2010,Visual Studio ALM,Team Test,Team Test 2010
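One closing footnote to the tests above: if you would rather not rewrite the endpoint through reflection, the dev-server address that TryUrlRedirection reads is available directly from TestContext, and a generated WCF proxy exposes a constructor that takes an endpoint configuration name plus an explicit address. The sketch below is my own variation on the test above; the configuration name "BasicHttpBinding_IProfileService" and the "/ProfileService.svc" path are assumptions about the generated app.config and the service URL, so adjust them to whatever your project actually uses.

    // Sketch only: pass the dynamically assigned dev-server address straight to the proxy.
    [AspNetDevelopmentServer("WebApp1", "%PathToWebRoot%\\SSW.SQLDeploy.SilverlightUI.Web")]
    [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")]
    [TestMethod]
    public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True_Alternative()
    {
        // MSTest stores the spun-up server's URL under "AspNetDevelopmentServer.<identifier>".
        string webServer = TestContext.Properties["AspNetDevelopmentServer.WebApp1"].ToString();
        string address = webServer.TrimEnd('/') + "/ProfileService.svc"; // assumed service path

        ProfileServiceClient target =
            new ProfileServiceClient("BasicHttpBinding_IProfileService", address); // assumed config name

        Assert.IsTrue(target.SaveDefaultProjectFile("Mav"));
    }

The helper in the article is still the more general option because it works without knowing the endpoint configuration name; this variation just trades that generality for a little less reflection.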

    Read the article

  • Processing Email in Outlook

    - by Daniel Moth
    A. Why Goal 1 = Help others: Have at most a 24-hour response turnaround to internal (from colleague) emails, typically achieving same day response. Goal 2 = Help projects: Not to implicitly pass/miss an opportunity to have impact on electronic discussions around any project on the radar. Not achieving goals 1 & 2 = Colleagues stop relying on you, drop you off conversations, don't see you as a contributing resource or someone that cares, you are perceived as someone with no peripheral vision. Note this is perfect if all you are doing is cruising at your job, trying to fly under the radar, with no ambitions of having impact beyond your absolute minimum 'day job'. B. DON'T: Leave unread email lurking around Don't: Receive or process all incoming emails in a single folder ('inbox' or 'unread mail'). This is actually possible if you receive a small number of emails (e.g. new to the job, not working at a company like Microsoft). Even so, with (your future) success at any level (company, community) comes large incoming email, so learn to deal with it. With large volumes, it is best to let the system help you by doing some categorization and filtering on your behalf (instead of trying to do that in your head as you process the single folder). See later section on how to achieve this. Don't: Leave emails as 'unread' (or worse: read them, then mark them as unread). Often done by individuals who think they possess super powers ("I can mentally cache and distinguish between the emails I chose not to read, the ones that are actually new, and the ones I decided to revisit in the future; the fact that they all show up the same (bold = unread) does not confuse me"). Interactions with this super-powered individuals typically end up with them saying stuff like "I must have missed that email you are talking about (from 2 weeks ago)" or "I am a bit behind, so I haven't read your email, can you remind me". TIP: The only place where you are "allowed" unread email is in your Deleted Items folder. Don't: Interpret a read email as an email that has been processed. Doing that, means you will always end up with fake unread email (that you have actually read, but haven't dealt with completely so you then marked it as unread) lurking between actual unread email. Another side effect is reading the email and making a 'mental' note to action it, then leaving the email as read, so the only thing left to remind you to carry out the action is… you. You are not super human, you will forget. This is a key distinction. Reading (or even scanning) a new email, means you now know what needs to be done with it, in order for it to be truly considered processed. Truly processing an email is to, for example, write an email of your own (e.g. to reply or forward), or take a non-email related action (e.g. create calendar entry, do something on some website), or read it carefully to gain some knowledge (e.g. it had a spec as an attachment), or keep it around as reference etc. 'Reading' means that you know what to do, not that you have done it. An email that is read is an email that is triaged, not an email that is resolved. Sometimes the thing that needs to be done based on receiving the email, you can (and want) to do immediately after reading the email. That is fine, you read the email and you processed it (typically when it takes no longer than X minutes, where X is your personal tolerance – mine is roughly 2 minutes). 
Other times, you decide that you don't want to spend X minutes at that moment, so after reading the email you need a quick system for "marking" the email as to be processed later (and you still leave it as 'read' in outlook). See later section for how. C. DO: Use Outlook rules and have multiple folders where incoming email is automatically moved to Outlook email rules are very powerful and easy to configure. Use them to automatically file email into folders. Here are mine (note that if a rule catches an email message then no further rules get processed): "personal" Email is either personal or business related. Almost all personal email goes to my gmail account. The personal emails that end up on my work email account, go to a dedicated folder – that is achieved via a rule that looks at the email's 'From' field. For those that slip through, I use the new Outlook 2010  quick step of "Conversation To Folder" feature to let the slippage only occur once per conversation, and then update my rules. "External" and "ViaBlog" The remaining external emails either come from my blog (rule on the subject line) or are unsolicited (rule on the domain name not being microsoft) and they are filed accordingly. "invites" I may do a separate blog post on calendar management, but suffice to say it should be kept up to date. All invite requests end up in this folder, so that even if mail gets out of control, the calendar can stay under control (only 1 folder to check). I.e. so I can let the organizer know why I won't be attending their meeting (or that I will be). Note: This folder is the only one that shows the total number of items in it, instead of the total unread. "Inbox" The only email that ends up here is email sent TO me and me only. Note that this is also the only email that shows up above the systray icon in the notification toast – all other emails cannot interrupt. "ToMe++" Email where I am on the TO line, but there are other recipients as well (on the TO or CC line). "CC" Email where I am on the CC line. I need to read these, but nobody is expecting a response or action from me so they are not as urgent (and if they are and follow up with me, they'll receive a link to this). "@ XYZ" Emails to aliases that are about projects that I directly work on (and I wasn't on the TO or CC line, of course). Test: these projects are in my commitments that I get measured on at the end of the year. "Z Mass" and subfolders under it per distribution list (DL) Emails to aliases that are about topics that I am interested in, but not that I formally own/contribute to. Test: if I unsubscribed from these aliases, nobody could rightfully complain. "Admin" folder, which resides under "Z Mass" folder Emails to aliases that I was added typically by an admin, e.g. broad emails to the floor/group/org/building/division/company that I am a member of. "BCC" folder, which resides under "Z Mass" Emails where I was not on the TO or the CC line explicitly and the alias it was sent to is not one I explicitly subscribed to (or I have been added to the BCC line, which I briefly touched on in another post). When there are only a few quick minutes to catch up on email, read as much as possible from these folders, in this order: Invites, Inbox, ToMe++. Only when these folders are all read (remember that doesn't mean that each email in them has been fully dealt with), we can move on to the @XYZ and then the CC folders. Only when those are read we can go on to the remaining folders. 
Note that the typical flow in the "Z Mass" subfolders is to scan subject lines and use the new Ctrl+Delete Outlook 2010 feature to ignore conversations. D. DO: Use Outlook Search folders in combination with categories As you process each folder, when you open a new email (i.e. click on it and read it in the preview pane) the email becomes read and stays read and you have to decide whether: It can take 2 minutes to deal with for good, right now, or It will take longer than 2 minutes, so it needs to be postponed with a clear next step, which is one of ToReply – there may be intermediate action steps, but ultimately someone else needs to receive email about this Action – no email is required, but I need to do something ReadLater – no email is required from the quick scan, but this is too long to fully read now, so it needs to be read it later WaitingFor – the email is informing of an intermediate status and 'promising' a future email update. Need to track. SomedayMaybe – interesting but not important, non-urgent, non-time-bound information. I may want to spend part of one of my weekends reading it. For all these 'next steps' use Outlook categories (right click on the email and assign category, or use shortcut key). Note that I also use category 'WaitingFor' for email that I send where I am expecting a response and need to track it. Create a new search folder for each category (I dragged the search folders into my favorites at the top left of Outlook, above my inboxes). So after the activity of reading/triaging email in the normal folders (where the email arrived) is done, the result is a bunch of emails appearing in the search folders (configure them to show the total items, not the total unread items). To actually process email (that takes more than 2 minutes to deal with) process the search folders, starting with ToReply and Action. E. DO: Get into a Routine Now you have a system in place, get into a routine of using it. Here is how I personally use mine, but this part I keep tweaking: Spend short bursts of time (between meetings, during boring but mandatory meetings and, in general, 2-4 times a day) aiming to have no unread emails (and in the process deal with some emails that take less than 2 minutes). Spend around 30 minutes at the end of each day processing most urgent items in search folders. Spend as long as it takes each Friday (or even the weekend) ensuring there is no unnecessary email baggage carried forward to the following week. F. Other resources Official Outlook help on: Create custom actions rules, Manage e-mail messages with rules, creating a search folder. Video on ignoring conversations (Ctrl+Del). Official blog post on Quick Steps and in particular the Move Conversation to folder. If you've read "Getting Things Done" it is very obvious that my approach to email management is driven by GTD. A very similar approach was described previously by ScottHa (also influenced by GTD), worth reading here. He also described how he sets up 2 outlook rules ('invites' and 'external') which I also use – worth reading that too. Comments about this post welcome at the original blog.
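For readers who think in code, the folder routing described in section C boils down to a small ordered classification where the first matching rule wins, just as Outlook stops processing rules after the first match. The sketch below is purely my own illustration of that logic in C#; it is not an Outlook API sample, and the folder names are the ones from the post.

    // Sketch only: the triage order from section C expressed as a function.
    using System;
    using System.Collections.Generic;

    class MailTriageDemo
    {
        // Returns the target folder for a message, checking rules in the same order as the post.
        static string ClassifyIncomingMail(string me, IList<string> to, IList<string> cc, bool sentToProjectAlias)
        {
            if (to.Count == 1 && to.Contains(me)) return "Inbox";   // sent to me and me only
            if (to.Contains(me))                  return "ToMe++";  // to me plus other recipients
            if (cc.Contains(me))                  return "CC";      // I'm only on the Cc line
            return sentToProjectAlias ? "@ XYZ" : "Z Mass";         // project alias vs. opt-in distribution list
        }

        static void Main()
        {
            var to = new List<string> { "daniel", "colleague" };    // hypothetical recipients
            var cc = new List<string>();
            Console.WriteLine(ClassifyIncomingMail("daniel", to, cc, false)); // prints "ToMe++"
        }
    }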

    Read the article

  • Real Excel Templates I

    - by Tim Dexter
    As promised, I'm starting to document the new Excel templates that I teased you all with a few weeks back. Leslie is buried in 11g documentation and will not get to officially documenting the templates for a while. I'll do my best to be professional and not ramble on about this and that, although the weather here has finally turned and its 'scorchio' here in Colorado today. Maybe our stand of Aspen will finally come into leaf ... but I digress. Preamble These templates are not actually that new, I helped in a small way to develop them a few years back with Excel 'meistress' Shirley for a company that was trying to use the Report Manager(RR) Excel FSG outputs under EBS 12. The functionality they needed was just not there in the RR FSG templates, the templates are actually XSL that is created from the the RR Excel template builder and fed to BIP for processing. Think of Excel from our RTF templates and you'll be there ie not really Excel but HTML masquerading as Excel. Although still under controlled release in EBS they have now made their way to the standlone release and are willing to share their Excel goodness. You get everything you have with hte Excel Analyzer Excel templates plus so much more. Therein lies a question, what will happen to the Analyzer templates? My understanding is that both will come together into a single Excel template format some time in the post-11g release world. The new XLSX format for Exce 2007/10 is also in the mix too so watch this space. What more do these templates offer? Well, you can structure data in the Excel output. Similar to RTF templates you can create sheets of data that have master-detail n relationships. Although the analyzer templates can do this, you have to get into macros whereas BIP will do this all for you. You can also use native XSL functions in your data to manipulate it prior to rendering. BP functions are not currently supported. The most impressive, for me at least, is the sheet 'bursting'. You can split your hierarchical data across multiple sheets and dynamically name those sheets. Finally, you of course, still get all the native Excel functionality. Pre-reqs You must be on 10.1.3.4.1 plus the latest rollup patch, 9546699. You can patch upa BIP instance running with OBIEE, no problem You need Excel 2000 or above to build the templates Some patience - there is no Excel template builder for these new templates. So its all going to have to be done by hand. Its not that tough but can get a little 'fiddly'. You can not test the template from Excel , it has to be deployed and then run. Limitations The new templates are definitely superior to the Analyzer templates but there are a few limitations. Re-grouping is not supported. You can only follow a data hierarchy not bend it to your will unless you want to get into macros. No support for BIP functions. The templates support native XSL functions only. No template builder Getting Started The templates make the use of named cells and groups of cells to allow BIP to find the insertion point for data points. It also uses a hidden sheet to store calculation mappings from named cells to XML data elements. To start with, in the great BIP tradition, we need some sample XML data. Becasue I wanted to show the master-detail output we need some hierarchical data. If you have not yet gotten into the data templates, now is a good time, I wrote a post a while back starting from the simple to more complex. They generate ideal data sets for these templates. 
I’m working with the following data set: <EMPLOYEES> <LIST_G_DEPT> <G_DEPT> <DEPARTMENT_ID>10</DEPARTMENT_ID> <DEPARTMENT_NAME>Administration</DEPARTMENT_NAME> <LIST_G_EMP> <G_EMP> <EMPLOYEE_ID>200</EMPLOYEE_ID> <EMP_NAME>Jennifer Whalen</EMP_NAME> <EMAIL>JWHALEN</EMAIL> <PHONE_NUMBER>515.123.4444</PHONE_NUMBER> <HIRE_DATE>1987-09-17T00:00:00.000-06:00</HIRE_DATE> <SALARY>4400</SALARY> </G_EMP> </LIST_G_EMP> <TOTAL_EMPS>1</TOTAL_EMPS> <TOTAL_SALARY>4400</TOTAL_SALARY> <AVG_SALARY>4400</AVG_SALARY> <MAX_SALARY>4400</MAX_SALARY> <MIN_SALARY>4400</MIN_SALARY> </G_DEPT> ... </LIST_G_DEPT> </EMPLOYEES> Simple enough to follow and bread-and-butter stuff for an RTF template. Building the Template For an Excel template we need to start by thinking about how we want to render the data. Come up with a sample output in Excel. It’s all dummy data, nothing marked up yet, with one row of data for each level. I have the department name and then a repeating row for the employees. You can apply Excel formatting to the layout. The total is going to be derived from a data element. We’ll get to Excel functions later. Marking Up Cells Next we need to start marking up the cells with custom names to map them to data elements. The cell names need to follow a specific format: For data grouping, XDO_GROUP_?group_name? For data elements, XDO_?element_name? Notice the question mark delimiter; the group_name and element_name are case sensitive. The next step is to find out how to name cells; the easiest method is to highlight the cell and then type in the name. You can also use the Name Manager dialog. I use Excel 2007 and it’s available on the ribbon under the Formulas section. Go through the process of naming all the cells for the element values you have, using my data set from above. You should end up with something like this in your 'Name Manager' dialog. You can update any mistakes you might have made through this dialog. Creating Groups In the image above you can see there are a couple of named group cells. To create these it’s a simple case of highlighting the cells that make up the group and then naming them. For the EMP group, highlight the employee row and then type in the name, XDO_GROUP_?G_EMP? Notice the 10,000 total is outside of the G_EMP group. It’s actually named XDO_?TOTAL_SALARY?, a query-calculated value. For the department group, we need to include the department name cell and the sub EMP grouping and name it XDO_GROUP_?G_DEPT? Notice the 10,000 total is included in the G_DEPT group. This will ensure it repeats at the department level. Lastly, we do need to include a special sheet in the workbook. We will not have anything meaningful in there for now, but it needs to be present. Create a new sheet and name it XDO_METADATA. The name is important as the BIP rendering engine will be looking for it. For our current example we do not need anything other than the required stuff in our XDO_METADATA sheet, but it must be present. Easy enough to hide it. Here's what I have: The only cell that is important is the 'Data Constraints:' cell. The rest is optional. To save curious users getting distracted, hide the metadata sheet. Deploying & Running Templates We should now have a usable Excel template. Loading it into a report is easy enough using the browser UI, just like an RTF template. Set the template type to Excel. You will now be able to run the report and hopefully get something like this. You will not get the red highlighting, that's just some conditional formatting I added to the template using Excel functionality. 
Your dates are probably going to look raw too. I got around this for now using an Excel formula on the cell:

=--REPLACE(SUBSTITUTE(E8,"T"," "),LEN(E8)-6,6,"")

Google to the rescue on that one: SUBSTITUTE swaps the 'T' for a space, REPLACE chops six characters out of the tail to lose the timezone offset, and the leading double minus coerces the remaining text into a genuine Excel date/time value. Try some other stuff out. To avoid constantly loading the template through the UI: if you have BIP running locally, or you can access the reports repository, then once you have loaded the template the first time you can just save the template directly into the report folder. I have put together a sample report using a sample data set, available here. Just drop the XML data file, EmpbyDeptExcelData.xml, into the 'demo files' folder and you should be good to go. That's the basics; next we'll start using some XSL functions in the template and move on to the 'bursting' across sheets.
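As a small taster of the XSL to come: because these templates only allow native XSL/XPath functions (no BIP functions), date clean-up can also be done with plain string functions. For example, against the sample HIRE_DATE value above:

 substring-before(HIRE_DATE, 'T')
   gives 1987-09-17

 concat(substring(HIRE_DATE, 9, 2), '-', substring(HIRE_DATE, 6, 2), '-', substring(HIRE_DATE, 1, 4))
   gives 17-09-1987

Exactly where these expressions get wired into the template (the XDO_METADATA mappings) is the part the follow-up will cover, so treat these as illustrations of the functions rather than a recipe.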

    Read the article

  • Processing Email in Outlook

    - by Daniel Moth
A. Why
Goal 1 = Help others: Have at most a 24-hour response turnaround to internal (from colleague) emails, typically achieving same-day response.
Goal 2 = Help projects: Not to implicitly pass/miss an opportunity to have impact on electronic discussions around any project on the radar.
Not achieving goals 1 & 2 = Colleagues stop relying on you, drop you off conversations, don't see you as a contributing resource or someone that cares; you are perceived as someone with no peripheral vision. Note this is perfect if all you are doing is cruising at your job, trying to fly under the radar, with no ambitions of having impact beyond your absolute minimum 'day job'.

B. DON'T: Leave unread email lurking around
Don't: Receive or process all incoming emails in a single folder ('inbox' or 'unread mail'). This is actually possible if you receive a small number of emails (e.g. new to the job, not working at a company like Microsoft). Even so, with (your future) success at any level (company, community) comes a large volume of incoming email, so learn to deal with it. With large volumes, it is best to let the system help you by doing some categorization and filtering on your behalf (instead of trying to do that in your head as you process the single folder). See the later section on how to achieve this.

Don't: Leave emails as 'unread' (or worse: read them, then mark them as unread). This is often done by individuals who think they possess super powers ("I can mentally cache and distinguish between the emails I chose not to read, the ones that are actually new, and the ones I decided to revisit in the future; the fact that they all show up the same (bold = unread) does not confuse me"). Interactions with these super-powered individuals typically end up with them saying stuff like "I must have missed that email you are talking about (from 2 weeks ago)" or "I am a bit behind, so I haven't read your email, can you remind me". TIP: The only place where you are "allowed" unread email is in your Deleted Items folder.

Don't: Interpret a read email as an email that has been processed. Doing that means you will always end up with fake unread email (that you have actually read, but haven't dealt with completely, so you then marked it as unread) lurking between actual unread email. Another side effect is reading the email and making a 'mental' note to action it, then leaving the email as read, so the only thing left to remind you to carry out the action is… you. You are not superhuman, you will forget.

This is a key distinction. Reading (or even scanning) a new email means you now know what needs to be done with it in order for it to be truly considered processed. Truly processing an email is to, for example, write an email of your own (e.g. to reply or forward), or take a non-email-related action (e.g. create a calendar entry, do something on some website), or read it carefully to gain some knowledge (e.g. it had a spec as an attachment), or keep it around as reference, etc. 'Reading' means that you know what to do, not that you have done it. An email that is read is an email that is triaged, not an email that is resolved.

Sometimes the thing that needs to be done based on the email is something you can (and want to) do immediately after reading it. That is fine: you read the email and you processed it (typically when it takes no longer than X minutes, where X is your personal tolerance – mine is roughly 2 minutes).
Other times, you decide that you don't want to spend X minutes at that moment, so after reading the email you need a quick system for "marking" the email as to be processed later (and you still leave it as 'read' in Outlook). See the later section for how.

C. DO: Use Outlook rules and have multiple folders where incoming email is automatically moved to
Outlook email rules are very powerful and easy to configure. Use them to automatically file email into folders (a rough programmatic sketch of one such rule follows at the end of this section). Here are mine (note that if a rule catches an email message then no further rules get processed):

"personal" Email is either personal or business related. Almost all personal email goes to my gmail account. The personal emails that end up on my work email account go to a dedicated folder – that is achieved via a rule that looks at the email's 'From' field. For those that slip through, I use the new Outlook 2010 quick step "Conversation To Folder" feature to let the slippage only occur once per conversation, and then update my rules.

"External" and "ViaBlog" The remaining external emails either come from my blog (rule on the subject line) or are unsolicited (rule on the domain name not being microsoft) and they are filed accordingly.

"invites" I may do a separate blog post on calendar management, but suffice to say it should be kept up to date. All invite requests end up in this folder, so that even if mail gets out of control, the calendar can stay under control (only 1 folder to check), i.e. so I can let the organizer know why I won't be attending their meeting (or that I will be). Note: this folder is the only one that shows the total number of items in it, instead of the total unread.

"Inbox" The only email that ends up here is email sent TO me and me only. Note that this is also the only email that shows up above the systray icon in the notification toast – all other emails cannot interrupt.

"ToMe++" Email where I am on the TO line, but there are other recipients as well (on the TO or CC line).

"CC" Email where I am on the CC line. I need to read these, but nobody is expecting a response or action from me, so they are not as urgent (and if they are and someone follows up with me, they'll receive a link to this).

"@ XYZ" Emails to aliases that are about projects that I directly work on (and I wasn't on the TO or CC line, of course). Test: these projects are in my commitments that I get measured on at the end of the year.

"Z Mass" and subfolders under it, per distribution list (DL) Emails to aliases that are about topics that I am interested in, but do not formally own/contribute to. Test: if I unsubscribed from these aliases, nobody could rightfully complain.

"Admin" folder, which resides under "Z Mass" Emails to aliases that I was added to, typically by an admin, e.g. broad emails to the floor/group/org/building/division/company that I am a member of.

"BCC" folder, which resides under "Z Mass" Emails where I was not on the TO or the CC line explicitly and the alias it was sent to is not one I explicitly subscribed to (or I have been added to the BCC line, which I briefly touched on in another post).

When there are only a few quick minutes to catch up on email, read as much as possible from these folders, in this order: Invites, Inbox, ToMe++. Only when these folders are all read (remember that doesn't mean that each email in them has been fully dealt with) do we move on to the @XYZ and then the CC folders. Only when those are read do we go on to the remaining folders.
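As an aside on that 'CC' rule: rules like these can also be created programmatically through the Outlook object model (available since Outlook 2007). This is a minimal sketch only, assuming a small C# program with a reference to the Outlook interop assembly and assuming the 'CC' folder already exists under the Inbox; none of this scaffolding is from the original post.

using Outlook = Microsoft.Office.Interop.Outlook;

class CcRuleSetup
{
    static void Main()
    {
        var app = new Outlook.Application();
        Outlook.Store store = app.Session.DefaultStore;
        Outlook.Rules rules = store.GetRules();

        // Folder that collects email where I am only on the CC line
        // (assumed to exist already, as per the folder setup above).
        Outlook.MAPIFolder inbox =
            app.Session.GetDefaultFolder(Outlook.OlDefaultFolders.olFolderInbox);
        Outlook.MAPIFolder ccFolder = inbox.Folders["CC"];

        // New rule that fires on incoming mail.
        Outlook.Rule rule = rules.Create("CC", Outlook.OlRuleType.olRuleReceive);

        // Condition: my name is in the CC box.
        rule.Conditions.CC.Enabled = true;

        // Action: move the message to the CC folder.
        rule.Actions.MoveToFolder.Folder = ccFolder;
        rule.Actions.MoveToFolder.Enabled = true;

        rules.Save(true);
    }
}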
Note that the typical flow in the "Z Mass" subfolders is to scan subject lines and use the new Ctrl+Delete Outlook 2010 feature to ignore conversations.

D. DO: Use Outlook Search folders in combination with categories
As you process each folder, when you open a new email (i.e. click on it and read it in the preview pane) the email becomes read and stays read, and you have to decide whether:
- it can take 2 minutes to deal with for good, right now, or
- it will take longer than 2 minutes, so it needs to be postponed with a clear next step, which is one of:
  ToReply – there may be intermediate action steps, but ultimately someone else needs to receive email about this.
  Action – no email is required, but I need to do something.
  ReadLater – no email is required from the quick scan, but this is too long to fully read now, so it needs to be read later.
  WaitingFor – the email is informing of an intermediate status and 'promising' a future email update. Need to track.
  SomedayMaybe – interesting but not important, non-urgent, non-time-bound information. I may want to spend part of one of my weekends reading it.
For all these 'next steps' use Outlook categories (right-click on the email and assign a category, or use a shortcut key). Note that I also use the 'WaitingFor' category for email that I send where I am expecting a response and need to track it. Create a new search folder for each category (I dragged the search folders into my favorites at the top left of Outlook, above my inboxes). So after the activity of reading/triaging email in the normal folders (where the email arrived) is done, the result is a bunch of emails appearing in the search folders (configure them to show the total items, not the total unread items). To actually process email that takes more than 2 minutes to deal with, work through the search folders, starting with ToReply and Action.

E. DO: Get into a Routine
Now you have a system in place, get into a routine of using it. Here is how I personally use mine, but this part I keep tweaking:
- Spend short bursts of time (between meetings, during boring but mandatory meetings and, in general, 2-4 times a day) aiming to have no unread emails (and in the process deal with some emails that take less than 2 minutes).
- Spend around 30 minutes at the end of each day processing the most urgent items in the search folders.
- Spend as long as it takes each Friday (or even the weekend) ensuring there is no unnecessary email baggage carried forward to the following week.

F. Other resources
Official Outlook help on: creating custom action rules, managing e-mail messages with rules, creating a search folder. Video on ignoring conversations (Ctrl+Del). Official blog post on Quick Steps and in particular the Move Conversation to Folder quick step. If you've read "Getting Things Done" it is very obvious that my approach to email management is driven by GTD. A very similar approach was described previously by ScottHa (also influenced by GTD), worth reading here. He also described how he sets up two Outlook rules ('invites' and 'external') which I also use – worth reading that too. Comments about this post are welcome at the original blog.
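As a postscript to section D: the category assignment itself can also be scripted, for example behind a toolbar button or keyboard shortcut in an add-in. A minimal sketch using the Outlook object model; only the 'ToReply' category name comes from the post, the rest is my own illustrative scaffolding.

using Outlook = Microsoft.Office.Interop.Outlook;

class Categorise
{
    // Tags the currently selected mail item with the 'ToReply' category and
    // leaves it read; the ToReply search folder then picks it up automatically.
    static void TagSelectedMail(Outlook.Application app)
    {
        Outlook.Explorer explorer = app.ActiveExplorer();
        if (explorer == null || explorer.Selection.Count == 0) return;

        var mail = explorer.Selection[1] as Outlook.MailItem;   // Selection is 1-based
        if (mail == null) return;

        // Categories is a delimited string (comma on typical en-US installs),
        // so append rather than overwrite any existing categories.
        mail.Categories = string.IsNullOrEmpty(mail.Categories)
            ? "ToReply"
            : mail.Categories + ", ToReply";
        mail.Save();
    }
}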

    Read the article
