Search Results

Search found 3492 results on 140 pages for 'subject'.


  • Passing a list of files to javac

    - by Robert Menteer
    How can I get the javac task to use an existing fileset? In my build.xml I have created several filesets to be used in multiple places throughout the build file. Here is how they are defined:

        <fileset dir="${src}" id="java.source.all">
            <include name="**/*.java" />
        </fileset>
        <fileset dir="${src}" id="java.source.examples">
            <include name="**/Examples/**/*.java" />
        </fileset>
        <fileset dir="${src}" id="java.source.tests">
            <include name="**/Tests/*.java" />
        </fileset>
        <fileset dir="${src}" id="java.source.project">
            <include name="**/*.java" />
            <exclude name="**/Examples/**/*.java" />
            <exclude name="**/Tests/**/*.java" />
        </fileset>

    I have also used macrodef to compile the java files so the javac task does not need to be repeated multiple times. The macro looks like this:

        <macrodef name="compile">
            <attribute name="sourceref"/>
            <sequential>
                <javac srcdir="${src}" destdir="${build}" classpathref="classpath"
                       includeantruntime="no" debug="${debug}">
                    <filelist dir="." files="@{sourceref}" /> <!-- this is the line in question -->
                </javac>
            </sequential>
        </macrodef>

    What I'm trying to do is compile only the classes that are needed for specific targets, not all the targets in the source tree, and do so without having to specify the files every time. Here is how the targets are defined:

        <target name="compile-examples" depends="init">
            <compile sourceref="${toString:java.source.examples}" />
        </target>
        <target name="compile-project" depends="init">
            <compile sourceref="${toString:java.source.project}" />
        </target>
        <target name="compile-tests" depends="init">
            <compile sourceref="${toString:java.source.tests}" />
        </target>

    As you can see, each target specifies the java files to be compiled as a semicolon-separated list of absolute file names. The only problem with this is that javac does not support filelist. It also does not support fileset, path or pathset. I've tried using a filelist, but it treats the list as a single file name. Another thing I tried is sending the reference directly (not using toString) and using an include, but include does not have a ref attribute.

    SO THE QUESTION IS: How do you get the javac task to use a reference to a fileset that was defined in another part of the build file? I'm not interested in solutions that leave me with multiple javac tasks. Completely rewriting the macro is acceptable. Changes to the targets are also acceptable, provided redundant code between targets is kept to a minimum.

    p.s. Another problem is that fileset wants a comma-separated list. I've only done a brief search for a way to convert semicolons to commas and haven't found one.

    p.p.s. Sorry for the yelling, but some people are too quick to post responses that don't address the subject.
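    One commonly suggested approach, sketched here with hedges (it is not from the original post, assumes Ant 1.8+ for <local>, and the property and mapper names are illustrative): let pathconvert flatten the fileset reference into a comma-separated, ${src}-relative list that javac's includes attribute accepts.

        <macrodef name="compile">
            <attribute name="sourceref"/>
            <sequential>
                <local name="compile.files"/>
                <!-- Turn the referenced fileset into "a/B.java,a/C.java,..."
                     relative to ${src} (names here are illustrative). -->
                <pathconvert property="compile.files" refid="@{sourceref}" pathsep=",">
                    <globmapper from="${src}/*" to="*" handledirsep="yes"/>
                </pathconvert>
                <javac srcdir="${src}" destdir="${build}" classpathref="classpath"
                       includeantruntime="no" debug="${debug}"
                       includes="${compile.files}"/>
            </sequential>
        </macrodef>

    With this shape, the targets would pass the raw reference id, e.g. <compile sourceref="java.source.examples"/>, instead of the ${toString:...} expansion, and the semicolon-to-comma conversion mentioned in the postscript becomes unnecessary.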


  • How to design a high-level application protocol for metadata syncing between devices and server?

    - by Jaanus
    I am looking for guidance on how best to design a high-level application protocol to sync metadata between end-user devices and a server.

    My goal: the user can interact with the application data on any device, or on the web. The purpose of this protocol is to communicate changes made on one endpoint to other endpoints through the server, and to ensure all devices maintain a consistent picture of the application data. If the user makes changes on one device or on the web, the protocol will push data to the central repository, from where other devices can pull it.

    Some other design thoughts:

    - I call it "metadata syncing" because the payloads will be quite small, in the form of object IDs and small metadata about those IDs. When client endpoints retrieve new metadata over this protocol, they will fetch the actual object data from an external source based on this metadata. Fetching the "real" object data is out of scope; I'm only talking about metadata syncing here.
    - Using HTTP for transport and JSON as the payload container. The question is basically about how best to design the JSON payload schema.
    - I want this to be easy to implement and maintain on the web and across desktop and mobile devices. The best approach seems to be simple timer- or event-based HTTP request/response without any persistent channels. Also, you should not need a PhD to read it, and I want my spec to fit on 2 pages, not 200.
    - Authentication and security are out of scope for this question: assume that the requests are secure and authenticated.
    - The goal is eventual consistency of the data on devices; it is not entirely realtime. For example, the user can make changes on one device while offline. When going online again, the user would perform a "sync" operation to push local changes and retrieve remote changes.

    Having said that, the protocol should support both of these modes of operation:

    - Starting from scratch on a device, it should be able to pull the whole metadata picture ("sync as you go").
    - When looking at the data on two devices side by side and making changes, it should be easy to push those changes as short individual messages which the other device can receive near-realtime (subject to when it decides to contact the server for a sync).

    As a concrete example, you can think of Dropbox (it is not what I'm working on, but it helps to understand the model): on a range of devices, the user can manage files and folders: move them around, create new ones, remove old ones, etc. In my context the "metadata" would be the file and folder structure, but not the actual file contents; metadata fields would be something like file/folder name and time of modification (all devices should see the same time of modification). Another example is IMAP. I have not read the protocol, but my goals (minus actual message bodies) are the same.

    It feels like there are two grand approaches to how this is done:

    - Transactional messages. Each change in the system is expressed as a delta, and endpoints communicate with those deltas. Example: DVCS changesets.
    - REST: communicating the object graph as a whole or in part, without worrying so much about the individual atomic changes.

    What I would like in the answers: Is there anything important I left out above? Constraints, goals? What is some good background reading on this? (I realize this is what many computer science courses talk about at great length and detail... I am hoping to short-circuit it by looking at some crash course or nuggets.) What are some good examples of such protocols that I could model after, or even use out of the box? (I mention Dropbox and IMAP above... I should probably read the IMAP RFC.)
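    As a purely illustrative sketch (the field names are my own invention, not from the question), a delta-based JSON payload could pair a per-account change counter with a list of changes; the client sends everything after the last counter value it saw, and the server answers in the same shape:

        POST /sync
        {
          "since": 41,
          "changes": [
            { "op": "update", "id": "obj-17", "name": "report.txt", "modified": "2010-05-12T21:22:58Z" }
          ]
        }

        HTTP/1.1 200 OK
        {
          "latest": 43,
          "changes": [
            { "op": "delete", "id": "obj-9" }
          ]
        }

    This shape covers both modes above: a fresh device sends "since": 0 and receives the whole picture, while an up-to-date device exchanges only short deltas.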


  • Broken Multithreading With Core Data

    - by spamguy
    This is a better-focused version of an earlier question that touches upon an entirely different subject from before. I am working on a Cocoa Core Data application with multiple threads. There are Song and Artist entities; every Song has an Artist relation. There is a delegate code file not cited here; it more or less looks like the template Xcode generates. I am far better at working with the former technology than the latter, and any multithreading capability came from a Core Data template.

    When I'm doing all my ManagedObjectContext work in one method, I am fine. When I put the fetch-or-insert-then-return-object work into a separate method, the application halts (but does not crash) at the new method's return statement, seen below. The new method even gets its own MOC to be safe, and it has not helped any. The result is one addition to Song and a halt after generating an Artist. I get no errors or exceptions, and I don't know why. I've debugged out the wazoo. My theory is that any errors occurring are in another thread, and the one I'm watching is waiting on something forever. What did I do wrong with getArtistObject:, and how can I fix it? Thanks.

        - (void)main {
            NSInteger songCount = 1;
            NSManagedObjectContext *moc = [[NSManagedObjectContext alloc] init];
            [moc setPersistentStoreCoordinator:[[self delegate] persistentStoreCoordinator]];
            [[NSNotificationCenter defaultCenter] addObserver:self
                                                     selector:@selector(contextDidSave:)
                                                         name:NSManagedObjectContextDidSaveNotification
                                                       object:moc];
            /* songDict generated here */
            for (id key in songDict) {
                NSManagedObject *song = [NSEntityDescription insertNewObjectForEntityForName:@"Song"
                                                                      inManagedObjectContext:moc];
                [song setValue:[songDictItem objectForKey:@"Name"] forKey:@"title"];
                [song setValue:[self getArtistObject:(NSString *)[songDictItem objectForKey:@"Artist"]]
                        forKey:@"artist"];
                [songDictItem release];
                songCount++;
            }
            NSError *error;
            if (![moc save:&error])
                [NSApp presentError:error];
            [[NSNotificationCenter defaultCenter] removeObserver:self
                                                            name:NSManagedObjectContextDidSaveNotification
                                                          object:moc];
            [moc release], moc = nil;
            [[self delegate] importDone];
        }

        - (NSManagedObject *)getArtistObject:(NSString *)theArtist {
            NSError *error = nil;
            NSManagedObjectContext *moc = [[NSManagedObjectContext alloc] init];
            [moc setPersistentStoreCoordinator:[[self delegate] persistentStoreCoordinator]];
            [[NSNotificationCenter defaultCenter] addObserver:self
                                                     selector:@selector(contextDidSave:)
                                                         name:NSManagedObjectContextDidSaveNotification
                                                       object:moc];
            NSFetchRequest *fetch = [[[NSFetchRequest alloc] init] autorelease];
            NSEntityDescription *entityDescription = [NSEntityDescription entityForName:@"Artist"
                                                                 inManagedObjectContext:moc];
            [fetch setEntity:entityDescription];

            // object to be returned
            NSManagedObject *artistObject = [[NSManagedObject alloc] initWithEntity:entityDescription
                                                      insertIntoManagedObjectContext:moc];

            // set predicate (artist name)
            NSPredicate *pred = [NSPredicate predicateWithFormat:
                                    [NSString stringWithFormat:@"name = \"%@\"", theArtist]];
            [fetch setPredicate:pred];
            NSArray *response = [moc executeFetchRequest:fetch error:&error];
            if (error)
                [NSApp presentError:error];
            if ([response count] == 0) { // empty result set --> no artists with this name
                [artistObject setValue:theArtist forKey:@"name"];
                NSLog(@"%@ not found. Adding.", theArtist);
                return artistObject;
            }
            else
                return [response objectAtIndex:0];
        }
        @end
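    Not a diagnosis of the hang, but a hedged sketch of the fetch-or-insert pattern done against the caller's own context, using a hypothetical helper name (artistNamed:inContext: is mine, not the poster's) and building the predicate with a format argument instead of an embedded quoted string, which breaks on names containing quotes:

        // Sketch: fetch-or-insert against the caller's context, so the Song and
        // its Artist live in the same NSManagedObjectContext.
        - (NSManagedObject *)artistNamed:(NSString *)theArtist inContext:(NSManagedObjectContext *)moc {
            NSFetchRequest *fetch = [[[NSFetchRequest alloc] init] autorelease];
            [fetch setEntity:[NSEntityDescription entityForName:@"Artist" inManagedObjectContext:moc]];
            // %@ substitution quotes the value safely.
            [fetch setPredicate:[NSPredicate predicateWithFormat:@"name == %@", theArtist]];

            NSError *error = nil;
            NSArray *found = [moc executeFetchRequest:fetch error:&error];
            if ([found count] > 0)
                return [found objectAtIndex:0];

            // Insert only when the fetch came back empty, avoiding the stray
            // blank Artist the original code creates before fetching.
            NSManagedObject *artist = [NSEntityDescription insertNewObjectForEntityForName:@"Artist"
                                                                    inManagedObjectContext:moc];
            [artist setValue:theArtist forKey:@"name"];
            return artist;
        }

    Under the thread-confined Core Data model this code targets, a context and the objects fetched from it belong to one thread; handing an object created in getArtistObject:'s private context to a Song in main's context is undefined behavior, which fits the symptom of a silent hang.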


  • MVC design in Cocoa - are all 3 always necessary? Also: naming conventions, where to put Controller

    - by Nektarios
    I'm new to MVC, although I've read a lot of papers and information on the web. I know it's somewhat ambiguous and there are many different interpretations of MVC patterns... but the differences seem somewhat minimal.

    My main question is: are M, V, and C always going to be necessary to be doing this right? I haven't seen anyone address this in anything I've read. Examples (I'm working in Cocoa/Obj-C, although that shouldn't much matter):

    1) If I have a simple image on my GUI, or a text entry field that is just for a user's convenience and isn't saved or modified, these both would be V (view), but there's no M (no data and no domain processing going on), and no C to bridge them. So I just have some aspects that are "V" - seems fine.

    2) I have 2 different and visible windows that each have a button on them labeled "ACTIVATE FOO" - when a user clicks the button on either, both buttons press in and change to say "DEACTIVATE FOO", and a third window appears with the label "FOO". Clicking the button again will change the button on both windows back to "ACTIVATE FOO" and will remove the third "FOO" window. In this case, my V consists of the buttons on both windows, and I guess also the third window (maybe all 3 windows). I definitely have a C: my Controller object will know about these buttons and windows, will get their clicks, and will hold generic state regarding windows and buttons. However, whether I have 1 button or 10 buttons, whether my window is called "FOO" or "BAR", doesn't matter. There's no domain knowledge or data here - just control of views. So in this example, I really have "V" and "C" but no "M" - is that ok?

    3) Final example, which I am running into the most. I have a text entry field as my View. When I enter text in this, say a number representing gravity, I keep it in a Model that may do things like compute the physics of a ball while taking into account my gravity parameter. Here I have a V and an M, but I don't understand why I would need to add a C - a controller would just accept the signals from the View and pass them along to the Model, and vice versa. Since the C is a pure passthrough, it's really "junk" code and isn't making things any more reusable, in my opinion. In most situations, when something changes I will need to change the C and M both in nearly identical ways. I realize it's probably an MVC beginner's mistake to think most situations call for only V and M... which leads me to the next subject.

    4) In Cocoa / Xcode / IB, I guess my Controllers should always be instantiated objects in IB? That is, I lay out all of my "V" components in IB, and for each collection of View objects (things that are related) I should have an instantiated Controller? And then perhaps my Models should NOT be found in IB, and instead only found as classes in Xcode that tie in with the Controller code found there. Is this accurate? This could explain why you'd have a Controller that is not really adding value - because you are keeping things consistent.

    5) What about naming these things? For my above example about FOO / BAR, maybe something that ends in Controller would be the C, like FancyWindowOpeningController, etc.? And for Models - should I suffix them, like GravityBallPhysicsModel, or should I just name them whatever I like? I haven't seen enough code to know what's out there in the wild, and I want to get on the right track early on.

    Thank you in advance for setting me straight or letting me know I'm on the right track. I feel like I'm starting to get it, and most of what I say here makes sense, but validation of my guessing would help me feel confident.


  • Please recommend intermediate-to-advanced Python books to buy.

    - by anonnoir
    I'm in the final year, final semester of my law degree and will be graduating very soon (April, to be specific). But before I begin practice, I plan to take two months off, purely for serious programming study. So I'm currently looking for some Python-related books, pitched intermediate to advanced, which are interesting (because of the subject matter itself) and possibly useful to my future line of work.

    I've identified 2 possible purchases at the moment:

    - Natural Language Processing with Python. The law deals mostly with words, and I've quite a number of ideas as to where I might go with NLP: data extraction, summaries, client management systems linked with document templates, etc.
    - Programming Collective Intelligence. This book fascinates me, because I've always liked the idea of machine learning (and I'm currently studying it on the side too, for fun). I'd like to build/play around with Web 2.0 applications, and who knows if I can apply some of the things I learn to my legal work. (E.g. playground experiments to determine how and under what circumstances judges might be biased, by forcing algorithms to pore through judgments and calculate similarities, etc.)

    Please feel free to criticize my current choices, but do at least offer or recommend other books that I should read in their place. My budget can handle 4 books, max. These books will be used heavily throughout the 2 months; I will be reading them back to back, absorbing the explanations given, and hacking away at their code.

    Also, the books themselves should satisfy 2 main criteria:

    - Application. The book must teach how to solve problems. I like reading theory, but I want to build things and solve problems first. Even playful applications are fine, because games and experiments always have real-world applications sooner or later.
    - Readability. I like reading technical books, no matter how difficult they are. I enjoy the effort and the feeling that you're learning something. But the book shouldn't contain code or explanations that are too cryptic or erratic. Even if it's difficult, the book's content should be accessible with focused reading.

    Note: I realize that I am somewhat of a beginner to the whole programming thing, so please don't put me down. But from experience, I think it's better to aim up and leave my comfort zone when learning new things, rather than to just remain stagnant where I am. (At least the difficulty gives me focus: i.e. if a programmer can be that good, perhaps if I sustain my own efforts I too can be as good someday.) If anything, I'm also a very determined person, so two months of day-to-night intensive programming study with nothing else on my mind should, I think, give me a bit of a fighting chance to push my programming skills to a much higher level.


  • Help me upgrade my pf.conf for OpenBSD 4.7

    - by polemon
    I'm planning on upgrading my OpenBSD from 4.6 to 4.7, and as you may or may not know, they changed the syntax for pf.conf. This is the relevant portion from the upgrade guide:

    pf(4) NAT syntax change: As described in more detail in this mailing list post, PF's separate nat/rdr/binat (translation) rules have been replaced with actions on regular match/filter rules. Simple rulesets may be converted like this:

        nat on $ext_if from 10/8 -> ($ext_if)
        rdr on $ext_if to ($ext_if) -> 1.2.3.4

    becomes

        match out on $ext_if from 10/8 nat-to ($ext_if)
        match in on $ext_if to ($ext_if) rdr-to 1.2.3.4

    and...

        binat on $ext_if from $web_serv_int to any -> $web_serv_ext

    becomes

        match on $ext_if from $web_serv_int to any binat-to $web_serv_ext

    nat-anchor and/or rdr-anchor lines, e.g. for relayd(8), ftp-proxy(8) and tftp-proxy(8), are no longer used and should be removed from pf.conf(5), leaving only the anchor lines. Translation rules relating to these and spamd(8) will need to be adjusted as appropriate. N.B.: Previously, translation rules had "stop at first match" behaviour, with binat being evaluated first, followed by nat/rdr depending on the direction of the packet. Now the filter rules are subject to the usual "last match" behaviour, so care must be taken with rule ordering when converting.

    pf(4) route-to/reply-to syntax change: The route-to, reply-to, dup-to and fastroute options in pf.conf move to filteropts;

        pass in on $ext_if route-to (em1 192.168.1.1) from 10.1.1.1
        pass in on $ext_if reply-to (em1 192.168.1.1) to 10.1.1.1

    becomes

        pass in on $ext_if from 10.1.1.1 route-to (em1 192.168.1.1)
        pass in on $ext_if to 10.1.1.1 reply-to (em1 192.168.1.1)

    Now, this is my current pf.conf:

        # $OpenBSD: pf.conf,v 1.38 2009/02/23 01:18:36 deraadt Exp $
        #
        # See pf.conf(5) for syntax and examples; this sample ruleset uses
        # require-order to permit mixing of NAT/RDR and filter rules.
        # Remember to set net.inet.ip.forwarding=1 and/or net.inet6.ip6.forwarding=1
        # in /etc/sysctl.conf if packets are to be forwarded between interfaces.

        ext_if="pppoe0"
        int_if="nfe0"
        int_net="192.168.0.0/24"
        polemon="192.168.0.10"
        poletopw="192.168.0.12"
        segatop="192.168.0.20"

        table <leechers> persist

        set loginterface $ext_if
        set skip on lo

        match on $ext_if all scrub (no-df max-mss 1440)

        altq on $ext_if priq bandwidth 950Kb queue {q_pri, q_hi, q_std, q_low}
        queue q_pri priority 15
        queue q_hi priority 10
        queue q_std priority 7 priq(default)
        queue q_low priority 0

        nat-anchor "ftp-proxy/*"
        rdr-anchor "ftp-proxy/*"
        nat on $ext_if from !($ext_if) -> ($ext_if)
        rdr pass on $int_if proto tcp to port ftp -> 127.0.0.1 port 8021
        rdr pass on $ext_if proto tcp to port 2080 -> $segatop port 80
        rdr pass on $ext_if proto tcp to port 2022 -> $segatop port 22
        rdr pass on $ext_if proto tcp to port 4000 -> $polemon port 4000
        rdr pass on $ext_if proto tcp to port 6600 -> $polemon port 6600
        anchor "ftp-proxy/*"
        block
        pass on $int_if queue(q_hi, q_pri)
        pass out on $ext_if queue(q_std, q_pri)
        pass out on $ext_if proto icmp queue q_pri
        pass out on $ext_if proto {tcp, udp} to any port ssh queue(q_hi, q_pri)
        pass out on $ext_if proto {tcp, udp} to any port http queue(q_std, q_pri)
        #pass out on $ext_if proto {tcp, udp} all queue(q_low, q_hi)
        pass out on $ext_if proto {tcp, udp} from <leechers> queue(q_low, q_std)
        pass in on $ext_if proto tcp to ($ext_if) port ident queue(q_hi, q_pri)
        pass in on $ext_if proto tcp to ($ext_if) port ssh queue(q_hi, q_pri)
        pass in on $ext_if proto tcp to ($ext_if) port http queue(q_hi, q_pri)
        pass in on $ext_if inet proto icmp all icmp-type echoreq queue q_pri

    If someone has experience with porting a 4.6 pf.conf to 4.7, please help me make the correct changes. OK, this is how far I've got. I commented out nat-anchor and rdr-anchor, as described in the guide:

        #nat-anchor "ftp-proxy/*"
        #rdr-anchor "ftp-proxy/*"

    And this is how I've "converted" the nat and rdr rules:

        #nat on $ext_if from !($ext_if) -> ($ext_if)
        match out on $ext_if from !($ext_if) nat-to ($ext_if)
        #rdr pass on $int_if proto tcp to port ftp -> 127.0.0.1 port 8021
        match in on $int_if proto tcp to port ftp rdr-to 127.0.0.1 port 8021
        #rdr pass on $ext_if proto tcp to port 2080 -> $segatop port 80
        match in on $ext_if proto tcp tp port 2080 rdr-to $segatop port 80
        #rdr pass on $ext_if proto tcp to port 2022 -> $segatop port 22
        match in on $ext_if proto tcp tp port 2022 rdr-to $segatop port 22
        rdr pass on $ext_if proto tcp to port 4000 -> $polemon port 4000
        match in on $ext_if proto tcp tp port 4000 rdr-to $polemon port 4000
        rdr pass on $ext_if proto tcp to port 6600 -> $polemon port 6600
        match in on $ext_if proto tcp tp port 6600 rdr-to $polemon port 6600

    Did I miss anything? Is the anchor for ftp-proxy OK as it is now? Do I need to change something in the other "pass in on..." lines?


  • Expanding Rows for Unique Checkboxes

    - by Marc Morgan
    I was recently given a project at my job to write a script that compares two MySQL databases and prints out information into an HTML table. Currently, I am trying to insert a checkbox by each individual's name; when it is selected, rows pertaining to that individual should expand underneath the person's name. I am combining JavaScript into my script to do this, although I really have no experience in it. The problem I am having is that when any checkbox is selected, all the rows for every individual expand, instead of only the rows pertaining to the one individual selected. Here is the code so far:

        <?php
        $link = mysql_connect($server = "harris.lib.fit.edu", $username = "", $password = "") or die(mysql_error());
        $db = mysql_select_db("library-test") or die(mysql_error());
        $ids = mysql_query("SELECT * FROM `ifc_studylog`") or die(mysql_error()); // backticks, not single quotes
        $x = 0;
        $n = 0;
        while ($row = mysql_fetch_array($ids)) {
            $tracksid1[$x] = $row['fitID'];
            $checkin[$x] = $row['checkin'];
            $checkout[$x] = $row['checkout'];
            $n++;
            $x++;
        }
        $names = mysql_query("SELECT * FROM `ifc_users`") or die(mysql_error());
        $x = 0;
        while ($row = mysql_fetch_array($names)) {
            $tracksnamefirst[$x] = $row['firstName'];
            $tracksnamesecond[$x] = $row['lastname'];
            $tracksid2[$x] = $row['fitID'];
            $tracksuser[$x] = $row['tracks'];
            $x++;
        }
        $x = 0;
        foreach ($tracksid2 as $comparename) {
            $chk = strval($x);
        ?>
        <script type='text/javascript' src='http://code.jquery.com/jquery-1.4.2.js'></script>
        <script type='text/javascript'>
        $(window).load(function () {
            $('.varx').click(function () {
                $('.text').toggle(this.checked);
            });
        });
        </script>
        <?php
            echo '<td><input id = "<?=$chk?>" type="checkbox" class="varx" /></td>';
            echo '<td align="center">'.$comparename.'</td>';
            echo '<td align="center">'.$tracksnamefirst[$x].'</td>';
            echo '<td align="center">'.$tracksnamesecond[$x].'</td>';
            $z = 0;
            foreach ($tracksid1 as $compareid) {
                $HH = 0;
                $MM = 0;
                $SS = 0;
                if ($compareid == $comparename) // && $tracks==$tracksuser[$x])
                {
                    $SS = sprintf("%02s", (($checkout[$z] - $checkin[$z]) % 60));
                    $MM = sprintf("%02s", (($checkout[$z] - $checkin[$z]) / 60 % 60));
                    $HH = sprintf("%02s", (($checkout[$z] - $checkin[$z]) / 3600 % 24));
                    // echo '<td align="center">'.$HH.':'.$MM.':'.$SS.'</td>';
                    echo '</tr>';
                    echo '<tr>';
                    echo "<td id='txt' class='text' align='center' colspan='2' style='display:none'></td>";
                    echo "<td id='txt' class='text' align='center' style='display:none'>".$checkin[$z]."</td>";
                    echo '</tr>';
                }
                echo '<tr>';
                $z++;
                echo '</tr>';
            }
            $x++;
        }
        }
        ?>

    Any help is appreciated, and sorry if I am too vague on the subject. The username and password are left blank for security purposes.
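    A hedged sketch of one way to scope the toggle. It assumes markup the post does not show: each checkbox would carry a data-person index and each of that person's detail rows the matching class when the PHP emits them, and the script would be emitted once, outside the loop:

        <script type="text/javascript">
        // Sketch: toggle only the detail rows belonging to the clicked checkbox.
        // Assumes class="varx" data-person="<index>" on each checkbox and
        // class="text person-<index>" on each of that person's detail rows.
        $(window).load(function () {
            $('.varx').click(function () {
                $('.person-' + $(this).attr('data-person')).toggle(this.checked);
            });
        });
        </script>

    Emitting the jQuery include and this handler once per page (rather than once per person, as the current loop does) also avoids rebinding the click handler repeatedly.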


  • HTML form isn't emailing

    - by Anonmattymous
    I have this as my form:

        <div class="contactInputs">
            <p>Send us a message</p>
            <form class="messageForm" autocomplete="on" name="contactform" method="post" action="/freequote.php">
                <input type="text" name="name" placeholder="Name*" required>
                <input type="text" name="companyname" placeholder="Company Name">
                <input type="email" name="email" placeholder="Email*" required>
                <input type="tel" name="phone" placeholder="Phone">
                <input class="contact-submit" type="submit">
            </form>
            <textarea type="textarea" name="message" placeholder="Your Messages*" required></textarea>
        </div>

    And this is the PHP used to send the email:

        <?php
        if(isset($_POST['email'])) {
            $email_to = "[email protected]";
            $email_subject = "Your email subject line";

            function died($error) {
                echo "We are very sorry, but there were error(s) found with the form you submitted. ";
                echo "These errors appear below.<br /><br />";
                echo $error."<br /><br />";
                echo "Please go back and fix these errors.<br /><br />";
                die();
            }

            if(!isset($_POST['name']) ||
               !isset($_POST['companyname']) ||
               !isset($_POST['email']) ||
               !isset($_POST['phone']) ||
               !isset($_POST['comments'])) {
                died('We are sorry, but there appears to be a problem with the form you submitted.');
            }

            $name = $_POST['name'];
            $companyname = $_POST['companyname'];
            $email_from = $_POST['email'];
            $phone = $_POST['phone'];
            $message = $_POST['comments'];

            $error_message = "";
            $email_exp = '/^[A-Za-z0-9._%-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,4}$/';
            if(!preg_match($email_exp,$email_from)) {
                $error_message .= 'The Email Address you entered does not appear to be valid.<br />';
            }
            $string_exp = "/^[A-Za-z .'-]+$/";
            if(!preg_match($string_exp,$name)) {
                $error_message .= 'The First Name you entered does not appear to be valid.<br />';
            }
            if(!preg_match($string_exp,$companyname)) {
                $error_message .= 'The Last Name you entered does not appear to be valid.<br />';
            }
            if(strlen($message) < 2) {
                $error_message .= 'The Comments you entered do not appear to be valid.<br />';
            }
            if(strlen($error_message) > 0) {
                died($error_message);
            }

            $email_message = "Form details below.\n\n";
            function clean_string($string) {
                $bad = array("content-type","bcc:","to:","cc:","href");
                return str_replace($bad,"",$string);
            }
            $email_message .= "First Name: ".clean_string($name)."\n";
            $email_message .= "Last Name: ".clean_string($companyname)."\n";
            $email_message .= "Email: ".clean_string($email_from)."\n";
            $email_message .= "phone: ".clean_string($phone)."\n";
            $email_message .= "Comments: ".clean_string($message)."\n";

            $headers = 'From: '.$email_from."\r\n".
                       'Reply-To: '.$email_from."\r\n" .
                       'X-Mailer: PHP/' . phpversion();
            @mail($email_to, $email_subject, $email_message, $headers);
        ?>
        Thank you for contacting us. We will be in touch with you very soon.
        <?php
        }
        ?>

    But whenever I try to submit it, I get these errors:

        We are very sorry, but there were error(s) found with the form you submitted. These errors appear below.
        We are sorry, but there appears to be a problem with the form you submitted.
        Please go back and fix these errors.

    Does anyone see what's wrong?
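    Not part of the original post, but a hedged observation based on the code shown: the textarea sits outside the <form> element and is named "message", while the PHP checks $_POST['comments'], so that field never arrives and the isset() guard fails. A corrected form sketch:

        <form class="messageForm" autocomplete="on" name="contactform" method="post" action="/freequote.php">
            <input type="text" name="name" placeholder="Name*" required>
            <input type="text" name="companyname" placeholder="Company Name">
            <input type="email" name="email" placeholder="Email*" required>
            <input type="tel" name="phone" placeholder="Phone">
            <!-- moved inside the form and renamed to match the PHP -->
            <textarea name="comments" placeholder="Your Messages*" required></textarea>
            <input class="contact-submit" type="submit">
        </form>

    Fields outside the enclosing form element are simply not submitted, which matches the "problem with the form you submitted" branch firing every time.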


  • How to find if an Item in a ListBox has the focus?

    - by eitan barazani
    I have a ListBox defined like this:

        <ListBox x:Name="EmailList"
                 ItemsSource="{Binding MailBoxManager.Inbox.EmailList}"
                 SelectedItem="{Binding SelectedMessage, Mode=TwoWay}"
                 Grid.Row="1">
            <ListBox.ItemTemplate>
                <DataTemplate>
                    <usrctrls:MessageSummary />
                </DataTemplate>
            </ListBox.ItemTemplate>
        </ListBox>

    The UserControl is defined like this:

        <UserControl x:Class="UserControls.MessageSummary"
                     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
                     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
                     mc:Ignorable="d"
                     d:DesignHeight="300" d:DesignWidth="600">
            <UserControl.Resources>
            </UserControl.Resources>
            <Grid HorizontalAlignment="Left">
                <Grid.ColumnDefinitions>
                    <ColumnDefinition Width="50" />
                    <ColumnDefinition Width="*" />
                </Grid.ColumnDefinitions>
                <CheckBox Grid.Column="0" VerticalAlignment="Center" />
                <Grid Grid.Column="1" Margin="0,0,12,0">
                    <Grid.RowDefinitions>
                        <RowDefinition />
                        <RowDefinition />
                        <RowDefinition />
                    </Grid.RowDefinitions>
                    <Grid Grid.Row="0" Grid.Column="0" HorizontalAlignment="Stretch">
                        <Grid.ColumnDefinitions>
                            <ColumnDefinition Width="30" />
                            <ColumnDefinition Width="*" />
                            <ColumnDefinition Width="80" />
                            <ColumnDefinition Width="80" />
                        </Grid.ColumnDefinitions>
                        <Image x:Name="FlaggedImage" Grid.Column="0" Width="20" Height="10" Margin="0" VerticalAlignment="Center" HorizontalAlignment="Center" Source="/Assets/ico_flagged_white.png" />
                        <TextBlock x:Name="Sender" Grid.Column="1" Text="{Binding EmailProperties.DisplayFrom}" Style="{StaticResource TextBlock_SenderRowTitle}" HorizontalAlignment="Left" VerticalAlignment="Center" />
                        <Grid x:Name="ImagesContainer" Grid.Column="2" VerticalAlignment="Center">
                            <Grid.ColumnDefinitions>
                                <ColumnDefinition Width="*" />
                                <ColumnDefinition Width="*" />
                                <ColumnDefinition Width="*" />
                                <ColumnDefinition Width="*" />
                            </Grid.ColumnDefinitions>
                            <Image x:Name="ImgImportant" Grid.Column="0" Width="20" Height="20" VerticalAlignment="Center" HorizontalAlignment="Center" Source="ms-appx:///Assets/ico_important_red.png" />
                            <Image x:Name="ImgFolders" Grid.Column="1" Width="20" Height="20" VerticalAlignment="Center" HorizontalAlignment="Center" Source="ms-appx:///Assets/ico_ico_addtofolder.png" />
                            <Image x:Name="ImgAttachment" Grid.Column="2" Width="20" Height="20" VerticalAlignment="Center" HorizontalAlignment="Center" Source="ms-appx:///Assets/ico_attachment_lightgray.png" />
                            <Image x:Name="ImgFlag" Grid.Column="3" Width="20" Height="20" VerticalAlignment="Center" HorizontalAlignment="Center" Source="ms-appx:///Assets/ico_flag.png" />
                        </Grid>
                        <TextBlock x:Name="Time" Grid.Column="3" Text="{Binding EmailProperties.DateReceived, Converter={StaticResource EmailHeaderTimeConverter}}" TextAlignment="Center" FontSize="16" VerticalAlignment="Center" Margin="0" />
                    </Grid>
                    <TextBlock Grid.Row="1" Text="{Binding EmailProperties.Subject}" TextTrimming="WordEllipsis" Margin="0,10" />
                    <TextBlock Grid.Row="2" Text="{Binding EmailProperties.Preview}" TextTrimming="WordEllipsis" />
                </Grid>
            </Grid>
        </UserControl>

    MessageSummary is a UserControl. I would like to bind the foreground color of the items of the ListBox to whether the item is the one selected in the list box, i.e. I would like the item's foreground color to be Black if not selected and White if the item is selected. How can it be done? Thanks,
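    A hedged sketch of one common approach, assuming a WPF-style ListBox (if this is WinRT XAML, which the ms-appx URIs suggest it may be, Style.Triggers are unavailable and the equivalent is done with VisualStateManager states in the item container style instead):

        <ListBox.ItemContainerStyle>
            <Style TargetType="ListBoxItem">
                <!-- Default (unselected) text color. -->
                <Setter Property="Foreground" Value="Black" />
                <Style.Triggers>
                    <Trigger Property="IsSelected" Value="True">
                        <!-- Selected rows render their content in white. -->
                        <Setter Property="Foreground" Value="White" />
                    </Trigger>
                </Style.Triggers>
            </Style>
        </ListBox.ItemContainerStyle>

    In WPF, Foreground is an inherited property, so the TextBlocks inside MessageSummary pick the color up automatically unless a local style on them overrides it.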


  • What is the best practice when coding math classes/functions?

    - by Isaac Clarke
    Introductory note: I voluntarily chose a wide subject. You know that quote about teaching someone to fish; that's it. I don't need an answer to my question, I need an explanation and advice. I know you guys are good at this ;)

    Hi guys, I'm currently implementing some algorithms into an existing program. Long story short, I created a new class, "Adder". An Adder is a member of another class representing the physical object actually doing the calculus, which calls adder.calc() with its parameters (merely a list of objects to do the maths on). To do these maths, I need some parameters, which do not exist outside of the class (but can be set, see below). They're neither config parameters nor members of other classes. These parameters are D1 and D2, distances, and three arrays of fixed size: alpha, beta, delta. I know some of you are more comfortable reading code than reading text, so here you go:

        class Adder
        {
        public:
            Adder();
            virtual Adder::~Adder();

            void set( float d1, float d2 );
            void set( float d1, float d2, int alpha[N_MAX], int beta[N_MAX], int delta[N_MAX] );

            // Snipped prototypes
            float calc( List& ... );
            // ...

            inline float get_d1() { return d1_; };
            inline float get_d2() { return d2_; };

        private:
            float d1_;
            float d2_;
            int alpha_[N_MAX]; // A #define N_MAX is done elsewhere
            int beta_[N_MAX];
            int delta_[N_MAX];
        };

    Since this object is used as a member of another class, it is declared in a *.h:

        private:
            Adder adder_;

    By doing that, I couldn't initialize the arrays (alpha/beta/delta) directly in the constructor ( int T[3] = { 1, 2, 3 }; ) without having to iterate through the three arrays. I thought of making them static const, but I don't think that's the proper way of solving such problems. My second guess was to use the constructor to initialize the arrays:

        Adder::Adder()
        {
            int alpha[N_MAX] = { 0, -60, -120, 180, 120, 60 };
            int beta[N_MAX]  = { 0, 0, 0, 0, 0, 0 };
            int delta[N_MAX] = { 0, 0, 180, 180, 180, 0 };
            set( 2.5, 0, alpha, beta, delta );
        }

        void Adder::set( float d1, float d2 )
        {
            if (d1 > 0) d1_ = d1;
            if (d2 > 0) d2_ = d2;
        }

        void Adder::set( float d1, float d2, int alpha[N_MAX], int beta[N_MAX], int delta[N_MAX] )
        {
            set( d1, d2 );
            for (int i = 0; i < N_MAX; ++i)
            {
                alpha_[i] = alpha[i];
                beta_[i]  = beta[i];
                delta_[i] = delta[i];
            }
        }

    My question is: would it be better to use another function - init() - which would initialize the arrays? Or is there a better way of doing that? My bonus question is: did you see any mistakes or bad practice along the way?
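    One hedged alternative, assuming a C++11 compiler is available (the question predates wide C++11 adoption, so treat this as a sketch rather than the answer): std::array members can be filled in the constructor's member initializer list, with no loop and no second set() overload needed.

        #include <array>

        class Adder
        {
        public:
            Adder()
                : d1_(2.5f), d2_(0.0f),
                  alpha_{{ 0, -60, -120, 180, 120, 60 }},
                  beta_{{ 0, 0, 0, 0, 0, 0 }},
                  delta_{{ 0, 0, 180, 180, 180, 0 }}
            {
            }

        private:
            static const int N_MAX = 6; // assumption: 6 matches the initializers above
            float d1_;
            float d2_;
            std::array<int, N_MAX> alpha_;
            std::array<int, N_MAX> beta_;
            std::array<int, N_MAX> delta_;
        };

    Pre-C++11, a common fallback is to define the defaults as file-scope static const arrays and copy them in the constructor body; either way the values live in exactly one place.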


  • Bind parent and child ListView in ASP.NET

    - by Chris
    I'm trying to display two LINQ query results in a parent and child ListView. I have the following code, which gets the correct values to populate the ListViews, but when I nest the child ListView inside the parent ListView, I get an error saying "The name 'ListView2' does not exist in the current context". I presume I need to bind the two ListViews in the code-behind, but my problem is I don't know how, or what the best way to do this is. I have read a couple of posts on similar problems, but my lack of knowledge on the subject doesn't make it clear to me. Please can somebody help me figure this out? It's the final piece of code I need to complete. Many thanks in advance.

    Here is my .aspx code:

        <asp:ListView ID="ListView1" runat="server">
            <EmptyDataTemplate>No data was returned.</EmptyDataTemplate>
            <ItemSeparatorTemplate><br /></ItemSeparatorTemplate>
            <ItemTemplate>
                <li>
                    <asp:Label ID="LabelID" runat="server" Text='<%# Eval("RecordID") %>'></asp:Label><br />
                    <asp:Label ID="LabelNumber" runat="server" Text='<%# Eval("CartID") %>'></asp:Label><br />
                    <asp:Label ID="LabelName" runat="server" Text='<%# Eval("Number") %>'></asp:Label><br />
                    <asp:Label ID="LabelDestination" runat="server" Text='<%# Eval("Destination") %>'></asp:Label><br />
                    <asp:Label ID="LabelPkgName" runat="server" Text='<%# Eval("PkgName") %>'></asp:Label>
                    <li>
                        <!-- LIST VIEW FOR FEATURES -->
                        <asp:ListView ID="ListView2" runat="server">
                            <EmptyDataTemplate>No data was returned.</EmptyDataTemplate>
                            <ItemSeparatorTemplate><br /></ItemSeparatorTemplate>
                            <ItemTemplate>
                                <li>
                                    <asp:Label ID="LabelFeatureName" runat="server" Text='<%# Eval("FeatureName") %>'></asp:Label><br />
                                    <asp:Label ID="LabelFeatureSetUp" runat="server" Text='<%# Eval("FeatureSetUp") %>'></asp:Label><br />
                                    <asp:Label ID="LabelFeatureMonthly" runat="server" Text='<%# Eval("FeatureMonthly") %>'></asp:Label><br />
                                </li>
                            </ItemTemplate>
                            <LayoutTemplate>
                                <ul ID="itemPlaceholderContainer" runat="server" style="">
                                    <li runat="server" id="itemPlaceholder" />
                                </ul>
                            </LayoutTemplate>
                        </asp:ListView>
                        <!-- LIST VIEW FOR FEATURES [END] -->
                    </li>
                </li>
            </ItemTemplate>
            <LayoutTemplate>
                <ul ID="itemPlaceholderContainer" runat="server" style="">
                    <li runat="server" id="itemPlaceholder" />
                </ul>
            </LayoutTemplate>
        </asp:ListView>

    Here is my code-behind:

        MyShoppingCart userShoppingCart = new MyShoppingCart();
        string cartID = userShoppingCart.GetShoppingCartId();

        using (ShoppingCartv2Entities db = new ShoppingCartv2Entities())
        {
            var CartNumber = from c in db.NewViews
                             where c.CartID == cartID
                             select c;

            foreach (NewView item in CartNumber)
            {
                ListView1.DataSource = CartNumber;
                ListView1.DataBind();
            }

            var CartFeature = from f in db.NewViews
                              join o in db.NumberFeatureViews on f.RecordID equals o.RecordID
                              where f.CartID == cartID
                              select new { o.FeatureName, o.FeatureSetUp, o.FeatureMonthly };

            foreach (var x in CartFeature)
            {
                ListView2.DataSource = CartFeature;
                ListView2.DataBind();
            }
        }
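    A hedged sketch of the usual fix (type and property names are taken from the question, but the event wiring itself is my assumption): bind only ListView1 from the code-behind, then locate each row's inner ListView2 via FindControl in the outer control's ItemDataBound event, which is the only place the template-scoped inner control exists.

        // Assumes OnItemDataBound="ListView1_ItemDataBound" is added to the ListView1 markup.
        protected void ListView1_ItemDataBound(object sender, ListViewItemEventArgs e)
        {
            if (e.Item.ItemType != ListViewItemType.DataItem) return;

            // The inner ListView lives inside the item template, which is why
            // "ListView2" is not a field on the page class.
            ListView innerList = (ListView)e.Item.FindControl("ListView2");
            NewView row = (NewView)e.Item.DataItem;

            using (var db = new ShoppingCartv2Entities())
            {
                innerList.DataSource = (from o in db.NumberFeatureViews
                                        where o.RecordID == row.RecordID
                                        select new { o.FeatureName, o.FeatureSetUp, o.FeatureMonthly })
                                       .ToList();
                innerList.DataBind();
            }
        }

    The foreach loops in the original code-behind can then go away: a single ListView1.DataSource = CartNumber.ToList(); ListView1.DataBind(); outside any loop binds the parent once, and the event above fills each child.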


  • Can I use the same machine as a client and server for SSH?

    - by achraf
    For development tests, I need to set up an SFTP server, so I want to know if it's possible to use the same machine as both the client and the server. I tried, and I keep getting this error:

        Permission denied (publickey).
        Connection closed

    By running ssh -v agharroud@localhost I get:

        OpenSSH_3.8.1p1, OpenSSL 0.9.7d 17 Mar
        debug1: Reading configuration data /etc/ssh_config
        debug1: Connecting to localhost [127.0.0.1] port 22.
        debug1: Connection established.
        debug1: identity file /home/agharroud/.ssh/identity type -1
        debug1: identity file /home/agharroud/.ssh/id_rsa type 1
        debug1: identity file /home/agharroud/.ssh/id_dsa type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_3.8.1p1
        debug1: match: OpenSSH_3.8.1p1 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_3.8.1p1
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-cbc hmac-md5 none
        debug1: kex: client->server aes128-cbc hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host 'localhost' is known and matches the RSA host key.
        debug1: Found key in /home/agharroud/.ssh/known_hosts:1
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received

        ****USAGE WARNING****

        This is a private computer system. This computer system, including all
        related equipment, networks, and network devices (specifically including
        Internet access) are provided only for authorized use. This computer
        system may be monitored for all lawful purposes, including to ensure that
        its use is authorized, for management of the system, to facilitate
        protection against unauthorized access, and to verify security
        procedures, survivability, and operational security. Monitoring includes
        active attacks by authorized entities to test or verify the security of
        this system. During monitoring, information may be examined, recorded,
        copied and used for authorized purposes. All information, including
        personal information, placed or sent over this system may be monitored.

        Use of this computer system, authorized or unauthorized, constitutes
        consent to monitoring of this system. Unauthorized use may subject you to
        criminal prosecution. Evidence of unauthorized use collected during
        monitoring may be used for administrative, criminal, or other adverse
        action. Use of this system constitutes consent to monitoring for these
        purposes.

        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /home/agharroud/.ssh/identity
        debug1: Offering public key: /home/agharroud/.ssh/id_rsa
        debug1: Authentications that can continue: publickey
        debug1: Trying private key: /home/agharroud/.ssh/id_dsa
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    Any ideas about the problem? Thanks!
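    Yes, the same machine can act as both SSH client and server. The trace shows the server accepting only public-key authentication and rejecting every key offered, which usually means your own public key is not in your authorized_keys file yet. A hedged sketch of the usual fix, assuming a default OpenSSH layout:

        # Generate a key pair if one does not exist yet (accept the defaults).
        ssh-keygen -t rsa

        # Authorize your own public key for logins to this machine.
        cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

        # OpenSSH refuses keys when permissions are too permissive.
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

        # Test the loopback connection.
        ssh agharroud@localhost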


  • HTML 5 <video> tag vs Flash video. What are the pros and cons?

    - by Vilx-
    Seems like the new <video> tag is all the hype these days, especially since Firefox now supports it. News of this is popping up in blogs all over the place, and everyone seems to be excited. But excited about what? As much as I searched, I could not find anything that would make it better than the good old Flash video. In fact, I see only problems with it:

    - It will still be some time before all the browsers start supporting it, and much more time before most people upgrade;
    - Flash is available already and everyone has it;
    - You can couple Flash with whatever fancy UI you want for controlling the playback. I gather that the tag will be controllable as well (via JavaScript, probably), but will it be able to go fullscreen?

    The only two pros for a <video> tag that I can see are:

    - It is more "semantic" - which probably holds no importance to a whole lot of people, including me;
    - It is not dependent on a single commercial 3rd-party entity (Adobe) - which I also don't see as a compelling reason to switch, because free players and video converters are already available, and Adobe is not hindering the whole process in any way (it's not even in their interests).

    So... what's the big deal?

    Added: OK, so there is one more pro... maybe: support for mobile devices. Hard to say, though. A number of thoughts race through my head on the subject: How many mobile devices are actually able to decode video at a decent speed anyway, Flash or otherwise? How long until mainstream mobile devices get <video> support? Even if it is available through updates, how many people actually update? How many people watch videos on web pages on their mobile phones at all?

    As for the semantics part - I understand that search engines might be able to detect videos better now, but... what will they do with them anyway? OK, so they know that there is a video on the page. And? They can't index a video! I'd like some more arguments here.

    Added: Just thought of another con. This opens up a whole new area of cross-browser incompatibility. HTML and CSS are quite messy already in this aspect. Flash, at least, is the same everywhere. But it's enough for at least one major browser vendor to decide against the <video> tag (can anyone say "Internet Explorer"?) and we have a nice new area of hell to explore.

    Added: A pro just came in: more competition = more innovation. That's true. Giving Adobe more competition will probably force them to improve Flash in areas where it has been lacking so far. Linux seems to be a weak spot for it, cited by many.
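    To ground the comparison, a minimal hedged sketch of the tag under discussion, with a Flash fallback for browsers that do not support it (the file names, dimensions, and player path are placeholders, and the flashvars syntax is player-specific):

        <video controls width="640" height="360">
            <!-- Browsers pick the first source they can decode. -->
            <source src="clip.mp4" type="video/mp4">
            <source src="clip.ogv" type="video/ogg">
            <!-- Fallback content renders only when <video> is unsupported. -->
            <object type="application/x-shockwave-flash" data="player.swf" width="640" height="360">
                <param name="movie" value="player.swf">
                <param name="flashvars" value="file=clip.mp4">
            </object>
        </video>

    This fallback pattern is why the two approaches are not mutually exclusive during the transition period: one page can serve both audiences.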


  • jQuery / PHP / Joomla: one of two comboboxes does not get updated

    - by bluesbrother
    I am making a Joomla component which has 3 comboboxes/selects on the page: one with languages and 2 with subjects. If you change the language, the other two get filled with the same data (the subjects in the selected language); the names of the select boxes are different, but they are otherwise the same. I get an error for one of the subject boxes (hence the URL gets red), but there is no logic to which one will give the error. In Firebug I get the HTML back for one but not the other; that one gets updated while the other gives nothing back. If I right-click in Firebug on the one that gave the error and do "send again", it will load fine. Is there a timing problem?

    The change event of the language selectbox:

        jQuery('#cmbldcoi_ldlink_language').bind('change', function() {
            var cmbLangID = jQuery('#cmbldcoi_ldlink_language').val();
            if (cmbLangID != 0) {
                getSubjectCmb_lang(cmbLangID, 'cmbldcoi_ldlink_subjects', '#ldlinksubjects');
            }
        });

    The function that requests the PHP file to create the HTML for the select:

        function getSubjectCmb_lang(langID, cmbName, DivWhereIn) {
            var xdate = new Date().getTime();
            var url = 'index.php?option=com_ldadmin&view=ldadmin&format=raw&task=getcmbsubj_lang&langid=' + langID + '&cmbname=' + cmbName + '&' + xdate;
            jQuery(DivWhereIn).load(url, function(){
            });
        }

    In the PHP file a connection is made to the database to get the information to build the selectbox. I use a function for this that is okay, because it builds all my selectboxes. The only place where there are problems with select boxes is on the pages that have 2 selects that need to change when a third one changes. My guess is that it goes wrong somewhere in the jQuery, and I think it has to do with timing. But I am open to all suggestions. Thanks.

    UPDATE: No, the ID and name fields are different. They are named:

        cmbldcoi_child
        cmbldcoi_parent

    Here is my code. The change event for the first combobox, which makes the other two change:

        jQuery('#cmbldcoi_language_chain_subj').bind('change', function(){
            var langID = jQuery('#cmbldcoi_language_chain_subj').val();
            if (langID != 0){
                getSubjectCmb_lang(langID, 'cmbldcoi_child', '#div_cmbldcoi_child');
                getSubjectCmb_lang(langID, 'cmbldcoi_parent', '#div_cmbldcoi_parent');
            }
        });

    The function which calls the PHP file to get the info from the database:

        function getSubjectCmb_lang(langID, cmbName, DivWhereIn){
            var xdate = new Date().getTime();
            var url = 'index.php?option=com_ldadmin&view=ldadmin&format=raw&task=getcmbsubj_lang&langid=' + langID + '&cmbname=' + cmbName + '&' + xdate;
            jQuery(DivWhereIn).load(url, function(){
            });
        }

    The PHP code:

        function getcmbsubj_lang(){
            $langid = JRequest::getVar('langid');
            if ($langid > 0 ){
                $langid = JRequest::getVar('langid');
            }else{
                $langid = 1;
            }
            $cmbName = JRequest::getVar('cmbname');
            //$lang_sufx = self::get_#__sufx($langid);
            print ld_html::ld_create_cmb_html($cmbName, '#__ldcoi_subjects','id', 'subject_name', " WHERE id_language={$langid} ORDER BY subject_name" );
        }

    There is a class called ld_html which has a function in it that creates a combobox, ld_html::ld_create_cmb_html(). It takes a table name, an id field, a name field, and optionally a WHERE clause. It all works fine if there is just one combobox that needs updating. It gives a problem when there are two. Thanks for the help!
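    Not from the thread, but a hedged workaround for the suspected race: issue the second request only after the first completes, using load()'s completion callback (selectors reused from the question; the extra done parameter is my addition):

        function getSubjectCmb_lang(langID, cmbName, DivWhereIn, done) {
            var xdate = new Date().getTime();
            var url = 'index.php?option=com_ldadmin&view=ldadmin&format=raw&task=getcmbsubj_lang'
                    + '&langid=' + langID + '&cmbname=' + cmbName + '&' + xdate;
            // load() fires the callback once this response has been inserted.
            jQuery(DivWhereIn).load(url, function () {
                if (done) done();
            });
        }

        jQuery('#cmbldcoi_language_chain_subj').bind('change', function () {
            var langID = jQuery(this).val();
            if (langID != 0) {
                getSubjectCmb_lang(langID, 'cmbldcoi_child', '#div_cmbldcoi_child', function () {
                    getSubjectCmb_lang(langID, 'cmbldcoi_parent', '#div_cmbldcoi_parent');
                });
            }
        });

    If serializing the requests makes the error disappear, the real bottleneck is likely server-side (for example, session locking making Joomla unable to service two concurrent requests cleanly), which would be worth investigating separately.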


  • Doubly linked lists

    - by user1642677
    I have an assignment that I am terribly lost on involving doubly linked lists (note: we are supposed to create them from scratch, not use built-in APIs). The program is supposed to keep track of credit cards, basically. My professor wants us to use doubly linked lists to accomplish this. The problem is, the book does not go into detail on the subject (it doesn't even show pseudocode involving doubly linked lists); it merely describes what a doubly linked list is and then talks with pictures and no code in a small paragraph. But anyway, I'm done complaining. I understand perfectly well how to create a node class and how it works. The problem is: how do I use the nodes to create the list? Here is what I have so far.

        public class CardInfo {
            private String name;
            private String cardVendor;
            private String dateOpened;
            private double lastBalance;
            private int accountStatus;

            private final int MAX_NAME_LENGTH = 25;
            private final int MAX_VENDOR_LENGTH = 15;

            CardInfo() {
            }

            CardInfo(String n, String v, String d, double b, int s) {
                setName(n);
                setCardVendor(v);
                setDateOpened(d);
                setLastBalance(b);
                setAccountStatus(s);
            }

            public String getName() { return name; }
            public String getCardVendor() { return cardVendor; }
            public String getDateOpened() { return dateOpened; }
            public double getLastBalance() { return lastBalance; }
            public int getAccountStatus() { return accountStatus; }

            public void setName(String n) {
                if (n.length() > MAX_NAME_LENGTH)
                    throw new IllegalArgumentException("Too Many Characters");
                else
                    name = n;
            }

            public void setCardVendor(String v) {
                if (v.length() > MAX_VENDOR_LENGTH)
                    throw new IllegalArgumentException("Too Many Characters");
                else
                    cardVendor = v;
            }

            public void setDateOpened(String d) { dateOpened = d; }
            public void setLastBalance(double b) { lastBalance = b; }
            public void setAccountStatus(int s) { accountStatus = s; }

            public String toString() {
                return String.format("%-25s %-15s $%-s %-s %-s",
                        name, cardVendor, lastBalance, dateOpened, accountStatus);
            }
        }

        public class CardInfoNode {
            CardInfo thisCard;

            CardInfoNode next;
            CardInfoNode prev;

            CardInfoNode() {
            }

            public void setCardInfo(CardInfo info) {
                thisCard.setName(info.getName());
                thisCard.setCardVendor(info.getCardVendor());
                thisCard.setLastBalance(info.getLastBalance());
                thisCard.setDateOpened(info.getDateOpened());
                thisCard.setAccountStatus(info.getAccountStatus());
            }

            public CardInfo getInfo() { return thisCard; }

            public void setNext(CardInfoNode node) { next = node; }
            public void setPrev(CardInfoNode node) { prev = node; }

            public CardInfoNode getNext() { return next; }
            public CardInfoNode getPrev() { return prev; }
        }

        public class CardList {
            CardInfoNode head;
            CardInfoNode current;
            CardInfoNode tail;

            CardList() {
                head = current = tail = null;
            }

            public void insertCardInfo(CardInfo info) {
                if (head == null) {
                    head = new CardInfoNode();
                    head.setCardInfo(info);
                    head.setNext(tail);
                    tail.setPrev(node); // here lies the problem. tail must be set to something
                                        // to make it doubly-linked. but tail is null since it's
                                        // an end point of the list.
                }
            }
        }

    Here is the assignment itself, if it helps to clarify what is required and, more importantly, the parts I'm not understanding. Thanks.
    https://docs.google.com/open?id=0B3vVwsO0eQRaQlRSZG95eXlPcVE
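    A hedged sketch of insertCardInfo handling both the empty and non-empty cases (appending at the tail; class names taken from the assignment code above). The crux: when the list is empty there is no tail node to link to, so head and tail both simply become the new node:

        public void insertCardInfo(CardInfo info) {
            CardInfoNode node = new CardInfoNode();
            node.setCardInfo(info);
            if (head == null) {      // empty list: the new node is both ends
                head = tail = node;
            } else {                 // non-empty: link the node after the old tail
                tail.setNext(node);
                node.setPrev(tail);
                tail = node;
            }
        }

    One more thing worth noting: setCardInfo() calls methods on thisCard without ever constructing it, so as written it would throw a NullPointerException; initializing thisCard = new CardInfo() in the node's constructor (or simply storing the passed reference, thisCard = info) avoids that.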


  • Installing UCMA 3.0 and Creating a Communications Server "14" Trusted Application Pool

    A lot of setup and administration tasks have gotten a lot easier in Communications Server 14; one of them is building an application server to develop and run your UCMA 3.0 applications on. In this post, I'll walk you through installing the UCMA 3.0 Core SDK and creating a Trusted Application Pool on the server, thus adding it to the Communications Server 14 topology and allowing you to host and run UCMA 3.0 applications on it.

    Note: These instructions will change slightly as the bits get updated for the eventual Beta release. I will update this post as soon as I get a chance to run this setup on a more recent build.

    I'm doing the install on a simple Communications Server 14 topology consisting of the following Windows Server 2008 R2 Hyper-V images:

    - DC: Domain Controller
    - ExchangeUM: Exchange Server 2010
    - CS-SE: Microsoft Communications Server 2010 Standard Edition
    - TS: Development machine

    I'll walk through setting up UCMA 3.0 on the TS VM, which is a fully patched Windows Server 2008 R2 machine joined to the Fabrikam domain. I'm also running Visual Studio 2010 on this VM because I intend to use it as a development machine. In a future post, I'll walk through installing just the UCMA 3.0 runtime to build a true production UCMA application server.

    I'm making a couple of assumptions here:

    - You have an existing CS 2010 site and cluster configured (we'll look at this in a future post).
    - You're starting with a fully patched Windows Server 2008 R2 machine.
    - The machine is joined to your domain.

    This walkthrough was done in my Fabrikam VM environment, but it can easily be modified for your own environment.

    Installing the UCMA 3.0 SDK

    Let's start by installing the UCMA 3.0 SDK. Run UcmaSdkWebDownload.msi to kick off the SDK installer package extract process. The installed package is extracted to C: >> Program Files >> Microsoft UCMA 3.0 >> SDK Installer Package. Browse there and run setup.exe. Click Install to install the UCMA 3.0 Core SDK and Workflow SDK.

    Install Communications Server Core Components

    UCMA 3.0 introduces a new concept called auto-provisioning, which is most easily explained from the developer's point of view. Remember what your app.config looked like in UCMA 2.0? You had to store the application GRUU, the trusted contact SIP URI, the port for your application, and the name of the certificate authority. That's all gone with auto-provisioning: all you need in your app.config is your ApplicationId, e.g. urn:application:MyApplication.

    How does CS 2010 do this? All of the application's configuration data is associated with the application's id. UCMA also queries a replicated copy of the Central Management Database to retrieve the application's configuration data, as well as the configuration data for any endpoints.

    In this step, we'll run Bootstrapper.exe to install the CS Core components. It checks for the following components and installs them if they are not already present:

    - VcRedist
    - Sqlexpress
    - Sqlnativeclient
    - Sqlbackcompat
    - Ucmaredist
    - OcsCore.msi

    Open a command window at C: >> Program Files >> Microsoft Communications Server 2010 >> Deployment and run the following command:

        Bootstrapper.exe /BootstrapReplica /MinCache /SourceDirectory:"%ProgramFiles%\Microsoft UCMA 3.0\SDK Installer Package\Prereq\BootstrapperCache"

    Create a New Trusted Application Pool

    The next step is to create a new trusted application pool for the new server. Fire up the Communications Server Management Shell from Start >> Microsoft Communications Server 2010 >> Communications Server Management Shell and enter the following PowerShell command:

        New-CsTrustedApplicationPool -Identity <FQDN of Server> -Registrar <FQDN of CS Server> -Site <CS Site Name>

    Verify that the new server was added to the CS topology by running the following PowerShell command:

        (Get-CsTopology -AsXml).ToString() > Topology.xml

    This creates a file called Topology.xml in the directory that you ran the command from. Open the file, find the Clusters section, and look for a node for the new server. The Cluster Fqdn is the name of your server; note the name of the Site that this Cluster is part of.

        <Cluster Fqdn="appsrv.fabrikam.com" RequiresReplication="true" RequiresSetup="true">
          <ClusterId SiteId="UcMarketing2" Number="5" />
          <Machine OrdinalInCluster="1" Fqdn="appsrv.fabrikam.com">
            <NetInterface InterfaceSide="Primary" InterfaceNumber="1" IPAddress="0.0.0.0" />
          </Machine>
        </Cluster>

    Configure CS Management Store Replication

    At this point, we have the CS Core components installed and the server configured as a trusted application pool. We now need to set up replication so that the Central Management Store replicates down to the new server. From the Communications Server Management Shell, run the following PowerShell command to enable the Replica service on the new server:

        Enable-CSReplica

    The Replica service is enabled, but it hasn't done anything yet. This can be verified by running the following PowerShell command to check the replication status for the various servers in the topology:

        Get-CSManagementStoreReplicationStatus

    In my run, the UpToDate property of the new server was still False. Run the following PowerShell command to force the replication to run:

        Invoke-CSManagementStoreReplicationStatus

    Run Get-CSManagementStoreReplicationStatus again to verify that the new server is now up to date.

    Request and Set a New Certificate

    The last step in the process is to request a new certificate from the certificate authority on the domain and assign it to the new server. From the Communications Server Management Shell, run the following PowerShell command to request a new certificate:

        Request-CSCertificate -Action new -Type default -CA <Domain Controller FQDN>\<Certificate Authority>

    Setting the -Verbose switch on the cmdlet creates an XML file with its output. Open the XML file and copy the thumbprint of the generated certificate:

        <?xml version="1.0" encoding="utf-8"?>
        <Action Name="Request-CsCertificate" Time="20100512T212258">
          <Action Name="Request-CsCertificate" Time="20100512T212258">
            <Info Title="Connection" Time="20100512T212258">Data Source=(local)\rtclocal;Initial Catalog=xds;Integrated Security=True</Info>
            <Action Time="20100512T212258">
              <Info Title="Certificate use" Time="20100512T212258">urn:certref:default</Info>
              <Info Title="Subject distinguished name" Time="20100512T212258">CN="appsrv2.fabrikam.com"</Info>
              <Info Time="20100512T212259">The certificate request is submitted to the Certification Authority dc.fabrikam.com\FabrikamCA.</Info>
              <Info Time="20100512T212259">The certificate was issued.</Info>
              <Info Time="20100512T212259">The certificate was imported with thumbprint AFC3C46E459C1A39AD06247676F3555826DBF705.</Info>
              <Complete Time="20100512T212259" />
            </Action>
            <Info Title="command status" Time="20100512T212259">Command execution processing completed</Info>
            <Action Name="DeploymentXdsCmdlet.SaveCachedItems" Time="20100512T212259">
              <Info Time="20100512T212259">0 updates</Info>
              <Complete Time="20100512T212259" />
            </Action>
            <Info Title="command status" Time="20100512T212259">Command has completed</Info>
          </Action>
        </Action>

    Run the following PowerShell command to set the certificate:

        Set-CsCertificate -Type Default -Thumbprint <Thumbprint>

    Wrapping Up

    You now have a new UCMA 3.0 application server in your Communications Server 2010 topology. You can provision trusted applications and trusted application endpoints on the new server using the Communications Server 2010 Management Shell. We'll take a look at how to do that in another post.

    Read the article

  • SQL SERVER – Shrinking NDF and MDF Files – Readers’ Opinion

    - by pinaldave
Previously, I had written a blog post about SQL SERVER – Shrinking NDF and MDF Files – A Safe Operation. After that, I wrote the following blog post about the advantages and disadvantages of shrinking, and why one should not shrink a file: SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008. On this subject, SQL Server Expert Imran Mohammed left an excellent comment. I feel that his comment is worth a big article in itself. For everybody to read his wonderful explanation, I am posting it in this blog post. Thanks Imran!

Shrinking a database always creates performance degradation and increases fragmentation in the database. I suggest that you keep that in mind before you start reading the following comment. If you are going to say that shrinking a database is bad and evil, here I am saying it first and loud. Now, Imran's comment was written keeping in mind only the process of how the Shrink Database operation works. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran's comments, as there are a few corrections.

Comments from Imran -

Before I explain to you the concept of Shrink Database, let us understand the concept of Database Files.

When we create a new database inside SQL Server, it is typical that SQL Server creates two physical files in the Operating System: one with the .MDF extension, and another with the .LDF extension. .MDF is called the Primary Data File. .LDF is called the Transactional Log File. If you add one or more data files to a database, the physical file that will be created in the Operating System will have the extension .NDF, which is called a Secondary Data File; whereas, when you add one or more log files to a database, the physical file that will be created in the Operating System will have the same .LDF extension.

The questions now are: "Why does a new data file have a different extension (.NDF)?", "Why is it called a secondary data file?" and "Why is the .MDF file called a primary data file?"

Answers:

Note: The following explanation is based on my limited knowledge of SQL Server, so experts please do comment.

A data file with an .MDF extension is called a Primary Data File, and the reason behind it is that it contains Database Catalogs. Catalogs mean Meta Data. Meta Data is "data about data". An example of Meta Data includes system objects that store information about other objects, except the data stored by the users. sysobjects stores information about all objects in that database. sysindexes stores information about all indexes and rows of every table in that database. syscolumns stores information about all columns that each table has in that database. sysusers stores how many users that database has.

Although Meta Data stores information about other objects, it is not the transactional data that a user enters; rather, it's system data about the data. Because the Primary Data File (.MDF) contains important information about the database, it is treated as a special file. It is given the name Primary Data File because it contains the Database Catalogs. This file is present in the Primary File Group. You can always create additional objects (tables, indexes etc.) in the Primary Data File (this file is present in the Primary File Group), by mentioning that you want to create the object under the Primary File Group.
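(Editor's illustration, not part of Imran's comment: the scenario he describes next — adding a secondary data file on a new disk and making its file group the default — can be scripted as below. Invoke-Sqlcmd from the SQL Server PowerShell tools is assumed, and the database name, path and size are placeholders.)

# Add a file group and a secondary (.NDF) data file on a new disk,
# then make the file group the default so new objects land there.
$sql = @"
ALTER DATABASE MYDB ADD FILEGROUP SecondaryFG;
ALTER DATABASE MYDB ADD FILE
    (NAME = MYDB_Data2, FILENAME = N'E:\Data\MYDB_Data2.ndf', SIZE = 10GB)
    TO FILEGROUP SecondaryFG;
ALTER DATABASE MYDB MODIFY FILEGROUP SecondaryFG DEFAULT;
"@
Invoke-Sqlcmd -ServerInstance "(local)" -Database "master" -Query $sql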
Any additional data file that you add to the database will have only transactional data but no Meta Data, which is why it is called a Secondary Data File. It is given the extension .NDF so that the user can easily identify whether a specific data file is a Primary Data File or a Secondary Data File.

There are many advantages to storing data in different files under different file groups. You can put your read-only tables in one file (file group) and read-write tables in another file (file group), and back up only the file group that has the read-write data, so that you can avoid backing up read-only data that cannot be altered. Creating additional files on different physical hard disks also improves I/O performance.

A real-time scenario where we use files could be this one: let's say you have created a database called MYDB on the D drive, which has 50 GB of space. You have 1 data file (.MDF) and 1 log file on the D drive, and suppose that all of that 50 GB has been used up and you do not have any free space left, but you still want to add additional space to the database. One easy option would be to add one more physical hard disk to the server, add a new data file to the MYDB database, create this new data file on the new hard disk, and then move some of the objects from one file to another. Make the file group under which you added the new file the default file group, so that any new object that is created goes into the new files, unless specified otherwise.

Now that we have a basic idea of what data files are, what type of data they store and why they are named the way they are, let's move on to the next topic: Shrinking.

First of all, I disagree with the Microsoft terminology for naming this feature "Shrinking". Shrinking, in regular terms, means to reduce the size of a file by means of compressing it. BUT in SQL Server, Shrinking DOES NOT mean compressing. Shrinking in SQL Server means removing empty space from database files and releasing the empty space either to the Operating System or to SQL Server.

Let's examine this through an example. Say you have a database "MYDB" with a size of 50 GB that has about 20 GB of free space, which means 30 GB of the database is filled with data and 20 GB of space is free in the database because it is not currently utilized by SQL Server (the database); it is reserved and not yet in use. If you choose to shrink the database and release the empty space to the Operating System — and MIND YOU — you can only shrink the database to a size of 30 GB (in our example). You cannot shrink the database to a size less than what is filled with data.

So, if you have a database that is full and has no empty space in the data file and log file (and you don't have extra disk space to set the Auto growth option ON), YOU CANNOT issue the SHRINK Database/File command, for two reasons:

1. There is no empty space to be released. The Shrink command does not compress the database; it only removes the empty space from the database files, and here there is no empty space.

2. The Shrink command is a logged operation. When we perform the Shrink operation, this information is logged in the log file. If there is no empty space in the log file, SQL Server cannot write to the log file and you cannot shrink the database.

Now answering your questions:

(1) Q: What are the USEDPAGES & ESTIMATEDPAGES that appear on the Results Pane after using DBCC SHRINKDATABASE (NorthWind, 10)?
A: According to Books Online (for SQL Server 2000):

UsedPages: the number of 8-KB pages currently used by the file.
EstimatedPages: the number of 8-KB pages that SQL Server estimates the file could be shrunk down to.

Important Note: Before asking any question, make sure you go through Books Online or do a quick search first. There are many advantages to doing so: 1. If someone else has already had the same question, the chances that it is already answered are better than 50%. 2. It reduces your waiting time for the answer.

(2) Q: What is the difference between shrinking the database using a DBCC command like the one above and shrinking it from the Enterprise Manager Console by right-clicking the database, going to TASKS and then selecting the SHRINK option, in a SQL Server 2000 environment?

A: As far as my knowledge goes, there is no difference; both work the same way. One advantage of running the command from Query Analyzer is that your console won't freeze — you can carry on with your regular activities in Enterprise Manager.

(3) Q: What is this .NDF file that is discussed above? I have never heard of it. What is it used for? Is it used by end-users, DBAs or the SERVER/SYSTEM itself?

A: An .NDF file is a secondary data file. You may never have heard of it because, when a database is created, SQL Server creates the database by default with only 1 data file (.MDF) and 1 log file (.LDF) — or however your model database has been set up, because the model database is a template used every time you create a new database using the CREATE DATABASE command. Unless you have added an extra data file, you will not see one. This file is used by SQL Server to store data saved by the users.

Hope this information helps. I would like to ask the experts to please comment if what I understand is not what the Microsoft guys meant.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
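(Editor's note, not part of Imran's comment: on SQL Server 2005 and later you can check how much reclaimable empty space each file actually holds before deciding to shrink. The sketch below assumes Invoke-Sqlcmd from the SQL Server PowerShell tools is available; the instance and database names are placeholders.)

# Report size and free space per database file; a file can only be shrunk
# down to roughly SizeMB minus FreeMB.
$sql = @"
SELECT name,
       size / 128.0 AS SizeMB,
       size / 128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128.0 AS FreeMB
FROM sys.database_files;
"@
Invoke-Sqlcmd -ServerInstance "(local)" -Database "MYDB" -Query $sql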

    Read the article

  • PowerShell Script to Deploy Multiple VM on Azure in Parallel #azure #powershell

    - by Marco Russo (SQLBI)
This blog is usually dedicated to Business Intelligence and SQL Server, but I couldn't easily find simple PowerShell scripts on the web to help me deploy a number of virtual machines on Azure that I use for testing and development. Since I need to deploy, start, stop and remove many virtual machines created from a common image I built (you know, Tabular is not part of the standard images provided by Microsoft…), I wanted to minimize the time required to execute every operation from my Windows Azure PowerShell console (though I suggest using Windows PowerShell ISE), so I also wanted to fire the commands in parallel as soon as possible, without losing the results in the console.

In order to execute multiple commands in parallel, I used the Start-Job cmdlet, and using Get-Job and Receive-Job I wait for job completion and display the messages generated during background command execution. This technique allows me to reduce execution time when I have to deploy, start, stop or remove virtual machines. Please note that a few operations on Azure acquire an exclusive lock and cannot really be executed in parallel, but only one part of their execution time is subject to this lock. Thus, you obtain a better response time in these scenarios too (this is the case for the provisioning of a new VM).

Finally, when you remove the VMs you still have the disks containing the virtual machines to remove. This cannot be done just after the VM removal, because you have to wait until the removal operation has completed on Azure. So I wrote a script that you run a few minutes after the VMs' removal to delete the disks (and VHDs) no longer related to a VM. I check that the disks were associated with the original image name used to provision the VMs (so I don't remove other disks deployed by other batches that I might want to preserve).

These examples are specific to my scenario; if you need more complex configurations you will have to change and adapt the code. But if your need is to create multiple instances of the same VM running in a workgroup, these scripts should be good enough. I prepared the following PowerShell scripts:

ProvisionVMs: Provisions many VMs in parallel starting from the same image. It creates one service for each VM.
RemoveVMs: Removes all the VMs in parallel – it also removes the service created for each VM.
StartVMs: Starts all the VMs in parallel.
StopVMs: Stops all the VMs in parallel.
RemoveOrphanDisks: Removes all the disks no longer used by any VM. Run this script a few minutes after the RemoveVMs script.
ProvisionVMs

# Name of subscription
$SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

# Name of storage account (where VMs will be deployed)
$StorageAccount = "Copy the Label property you get from Get-AzureStorageAccount"

function ProvisionVM( [string]$VmName ) {
    Start-Job -ArgumentList $VmName {
        param($VmName)
        $Location = "Copy the Location property you get from Get-AzureStorageAccount"
        $InstanceSize = "A5" # You can use any other instance, such as Large, A6, and so on
        $AdminUsername = "UserName" # Write the name of the administrator account in the new VM
        $Password = "Password"      # Write the password of the administrator account in the new VM
        $Image = "Copy the ImageName property you get from Get-AzureVMImage"
        # You can list your own images using the following command:
        # Get-AzureVMImage | Where-Object {$_.PublisherName -eq "User" }
        New-AzureVMConfig -Name $VmName -ImageName $Image -InstanceSize $InstanceSize |
            Add-AzureProvisioningConfig -Windows -Password $Password -AdminUsername $AdminUsername |
            New-AzureVM -Location $Location -ServiceName "$VmName" -Verbose
    }
}

# Set the proper storage - you might remove this line if you have only one storage in the subscription
Set-AzureSubscription -SubscriptionName $SubscriptionName -CurrentStorageAccount $StorageAccount

# Select the subscription - this line is fundamental if you have access to multiple subscriptions
# You might remove this line if you have only one subscription
Select-AzureSubscription -SubscriptionName $SubscriptionName

# Every line in the following list provisions one VM using the name specified in the argument
# You can change the number of lines - use a unique name for every VM - don't reuse names
# already used in other VMs already deployed
ProvisionVM "test10"
ProvisionVM "test11"
ProvisionVM "test12"
ProvisionVM "test13"
ProvisionVM "test14"
ProvisionVM "test15"
ProvisionVM "test16"
ProvisionVM "test17"
ProvisionVM "test18"
ProvisionVM "test19"
ProvisionVM "test20"

# Wait for all to complete
While (Get-Job -State "Running") {
    Get-Job -State "Completed" | Receive-Job
    Start-Sleep 1
}

# Display output from all jobs
Get-Job | Receive-Job

# Cleanup of jobs
Remove-Job *

# Displays batch completed
echo "Provisioning VM Completed"

RemoveVMs

# Name of subscription
$SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

function RemoveVM( [string]$VmName ) {
    Start-Job -ArgumentList $VmName {
        param($VmName)
        Remove-AzureService -ServiceName $VmName -Force -Verbose
    }
}

# Select the subscription - this line is fundamental if you have access to multiple subscriptions
# You might remove this line if you have only one subscription
Select-AzureSubscription -SubscriptionName $SubscriptionName

# Every line in the following list removes one VM using the name specified in the argument
# You can change the number of lines - use a unique name for every VM - don't reuse names
# already used in other VMs already deployed
RemoveVM "test10"
RemoveVM "test11"
RemoveVM "test12"
RemoveVM "test13"
RemoveVM "test14"
RemoveVM "test15"
RemoveVM "test16"
RemoveVM "test17"
RemoveVM "test18"
RemoveVM "test19"
RemoveVM "test20"

# Wait for all to complete
While (Get-Job -State "Running") {
    Get-Job -State "Completed" | Receive-Job
    Start-Sleep 1
}

# Display output from all jobs
Get-Job | Receive-Job

# Cleanup
Remove-Job *

# Displays batch completed
echo "Remove VM Completed"

StartVMs

# Name of subscription
$SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

function StartVM( [string]$VmName ) {
    Start-Job -ArgumentList $VmName {
        param($VmName)
        Start-AzureVM -Name $VmName -ServiceName $VmName -Verbose
    }
}

# Select the subscription - this line is fundamental if you have access to multiple subscriptions
# You might remove this line if you have only one subscription
Select-AzureSubscription -SubscriptionName $SubscriptionName

# Every line in the following list starts one VM using the name specified in the argument
# You can change the number of lines - use a unique name for every VM - don't reuse names
# already used in other VMs already deployed
StartVM "test10"
StartVM "test11"
StartVM "test12"
StartVM "test13"
StartVM "test14"
StartVM "test15"
StartVM "test16"
StartVM "test17"
StartVM "test18"
StartVM "test19"
StartVM "test20"

# Wait for all to complete
While (Get-Job -State "Running") {
    Get-Job -State "Completed" | Receive-Job
    Start-Sleep 1
}

# Display output from all jobs
Get-Job | Receive-Job

# Cleanup
Remove-Job *

# Displays batch completed
echo "Start VM Completed"

StopVMs

# Name of subscription
$SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

function StopVM( [string]$VmName ) {
    Start-Job -ArgumentList $VmName {
        param($VmName)
        Stop-AzureVM -Name $VmName -ServiceName $VmName -Verbose -Force
    }
}

# Select the subscription - this line is fundamental if you have access to multiple subscriptions
# You might remove this line if you have only one subscription
Select-AzureSubscription -SubscriptionName $SubscriptionName

# Every line in the following list stops one VM using the name specified in the argument
# You can change the number of lines - use a unique name for every VM - don't reuse names
# already used in other VMs already deployed
StopVM "test10"
StopVM "test11"
StopVM "test12"
StopVM "test13"
StopVM "test14"
StopVM "test15"
StopVM "test16"
StopVM "test17"
StopVM "test18"
StopVM "test19"
StopVM "test20"

# Wait for all to complete
While (Get-Job -State "Running") {
    Get-Job -State "Completed" | Receive-Job
    Start-Sleep 1
}

# Display output from all jobs
Get-Job | Receive-Job

# Cleanup
Remove-Job *

# Displays batch completed
echo "Stop VM Completed"

RemoveOrphanDisks

$ImageName = "Copy the ImageName property you get from Get-AzureVMImage"
# You can list your own images using the following command:
# Get-AzureVMImage | Where-Object {$_.PublisherName -eq "User" }

# Remove all orphan disks coming from the image specified in $ImageName
Get-AzureDisk |
    Where-Object {$_.AttachedTo -eq $null -and $_.SourceImageName -eq $ImageName} |
    Remove-AzureDisk -DeleteVHD -Verbose

    Read the article

  • July 2013 Release of the Ajax Control Toolkit

    - by Stephen.Walther
I’m super excited to announce the July 2013 release of the Ajax Control Toolkit. You can download the new version of the Ajax Control Toolkit from CodePlex (http://ajaxControlToolkit.CodePlex.com) or install the Ajax Control Toolkit from NuGet. With this release, we have completely rewritten the way the Ajax Control Toolkit combines, minifies, gzips, and caches JavaScript files. The goal of this release was to improve the performance of the Ajax Control Toolkit and make it easier to create custom Ajax Control Toolkit controls.

Improving Ajax Control Toolkit Performance

Previous releases of the Ajax Control Toolkit optimized performance for a single page but not multiple pages. When you visited each page in an app, the Ajax Control Toolkit would combine all of the JavaScript files required by the controls in the page into a new JavaScript file. So, even if every page in your app used the exact same controls, visitors would need to download a new combined Ajax Control Toolkit JavaScript file for each page visited.

Downloading new scripts for each page that you visit does not lead to good performance. In general, you want to make as few requests for JavaScript files as possible and take maximum advantage of caching. For most apps, you would get much better performance if you could specify all of the Ajax Control Toolkit controls that you need for your entire app and create a single JavaScript file which could be used across your entire app. What a great idea!

Introducing Control Bundles

With this release of the Ajax Control Toolkit, we introduce the concept of Control Bundles. You define a Control Bundle to indicate the set of Ajax Control Toolkit controls that you want to use in your app. You define Control Bundles in a file located in the root of your application named AjaxControlToolkit.config. For example, the following AjaxControlToolkit.config file defines two Control Bundles:

<ajaxControlToolkit>
    <controlBundles>
        <controlBundle>
            <control name="CalendarExtender" />
            <control name="ComboBox" />
        </controlBundle>
        <controlBundle name="CalendarBundle">
            <control name="CalendarExtender"></control>
        </controlBundle>
    </controlBundles>
</ajaxControlToolkit>

The first Control Bundle in the file above does not have a name. When a Control Bundle does not have a name then it becomes the default Control Bundle for your entire application. The default Control Bundle is used by the ToolkitScriptManager by default. For example, the default Control Bundle is used when you declare the ToolkitScriptManager like this:

<ajaxToolkit:ToolkitScriptManager runat="server" />

The default Control Bundle defined in the file above includes all of the scripts required for the CalendarExtender and ComboBox controls. All of the scripts required for both of these controls are combined, minified, gzipped, and cached automatically.

The AjaxControlToolkit.config file above also defines a second Control Bundle with the name CalendarBundle. Here's how you would use the CalendarBundle with the ToolkitScriptManager:

<ajaxToolkit:ToolkitScriptManager runat="server">
    <ControlBundles>
        <ajaxToolkit:ControlBundle Name="CalendarBundle" />
    </ControlBundles>
</ajaxToolkit:ToolkitScriptManager>

In this case, only the JavaScript files required by the CalendarExtender control, and not the ComboBox, would be downloaded, because the CalendarBundle lists only the CalendarExtender control. You can use multiple named control bundles with the ToolkitScriptManager and you will get all of the scripts from both bundles.
Support for Control Bundles is a new feature of the ToolkitScriptManager that we introduced with this release. We extended the ToolkitScriptManager to support the Control Bundles that you can define in the AjaxControlToolkit.config file. Let me be explicit about the rules for Control Bundles:

1. If you do not create an AjaxControlToolkit.config file then the ToolkitScriptManager will download all of the JavaScript files required for all of the controls in the Ajax Control Toolkit. This is the easy but low performance option.

2. If you create an AjaxControlToolkit.config file and create a ControlBundle without a name then the ToolkitScriptManager uses that Control Bundle by default. For example, if you plan to use only the CalendarExtender and ComboBox controls in your application then you should create a default bundle that lists only these two controls.

3. If you create an AjaxControlToolkit.config file and create one or more named Control Bundles then you can use these named Control Bundles with the ToolkitScriptManager. For example, you might want to use different subsets of the Ajax Control Toolkit controls in different sections of your app.

I should also mention that you can use the AjaxControlToolkit.config file with custom Ajax Control Toolkit controls – new controls that you write. For example, here is how you would register a set of custom controls from an assembly named MyAssembly:

<ajaxControlToolkit>
    <controlBundles>
        <controlBundle name="CustomBundle">
            <control name="MyAssembly.MyControl1" assembly="MyAssembly" />
            <control name="MyAssembly.MyControl2" assembly="MyAssembly" />
        </controlBundle>
    </controlBundles>
</ajaxControlToolkit>

What about ASP.NET Bundling and Minification?

The idea of Control Bundles is similar to the idea of Script Bundles used in ASP.NET Bundling and Minification. You might be wondering why we didn't simply use Script Bundles with the Ajax Control Toolkit. There were several reasons.

First, ASP.NET Bundling does not work with scripts embedded in an assembly. Because all of the scripts used by the Ajax Control Toolkit are embedded in the AjaxControlToolkit.dll assembly, ASP.NET Bundling was not an option.

Second, Web Forms developers typically think at the level of controls and not at the level of individual scripts. We believe that it makes more sense for a Web Forms developer to specify the controls that they need in an app (CalendarExtender, ToggleButton) instead of the individual scripts that they need in an app (the 15 or so scripts required by the CalendarExtender).

Finally, ASP.NET Bundling does not work with older versions of ASP.NET. The Ajax Control Toolkit needs to support ASP.NET 3.5, ASP.NET 4.0, and ASP.NET 4.5. Therefore, using ASP.NET Bundling was not an option.

There is nothing wrong with using Control Bundles and Script Bundles side-by-side. The ASP.NET 4.0 and 4.5 ToolkitScriptManager supports both approaches to bundling scripts.

Using the AjaxControlToolkit.CombineScriptsHandler

Browsers cache JavaScript files by URL. For example, if you request the exact same JavaScript file from two different URLs then the exact same JavaScript file must be downloaded twice. However, if you request the same JavaScript file from the same URL more than once then it only needs to be downloaded once. With this release of the Ajax Control Toolkit, we have introduced a new HTTP Handler named the AjaxControlToolkit.CombineScriptsHandler.
If you register this handler in your web.config file then the Ajax Control Toolkit can cache your JavaScript files for up to one year in the future automatically. You should register the handler in two places in your web.config file: in the <httpHandlers> section and the <system.webServer> section (don't forget to register the handler for the AjaxFileUpload while you are there!).

<httpHandlers>
    <add verb="*" path="AjaxFileUploadHandler.axd"
        type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit" />
    <add verb="*" path="CombineScriptsHandler.axd"
        type="AjaxControlToolkit.CombineScriptsHandler, AjaxControlToolkit" />
</httpHandlers>
<system.webServer>
    <validation validateIntegratedModeConfiguration="false" />
    <handlers>
        <add name="AjaxFileUploadHandler" verb="*" path="AjaxFileUploadHandler.axd"
            type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit" />
        <add name="CombineScriptsHandler" verb="*" path="CombineScriptsHandler.axd"
            type="AjaxControlToolkit.CombineScriptsHandler, AjaxControlToolkit" />
    </handlers>
</system.webServer>

The handler is only used in release mode and not in debug mode. You can enable release mode in your web.config file like this:

<compilation debug="false">

You can also override the web.config setting with the ToolkitScriptManager like this:

<act:ToolkitScriptManager ScriptMode="Release" runat="server"/>

In release mode, scripts are combined, minified, gzipped, and cached with a far-future cache header automatically. When the handler is not registered, scripts are requested from the page that contains the ToolkitScriptManager. When the handler is registered in the web.config file, scripts are requested from the handler instead.

If you want the best performance, always register the handler. That way, the Ajax Control Toolkit can cache the bundled scripts across page requests with a far-future cache header. If you don't register the handler then a new JavaScript file must be downloaded whenever you travel to a new page.

Dynamic Bundling and Minification

Previous releases of the Ajax Control Toolkit used a Visual Studio build task to minify the JavaScript files used by the Ajax Control Toolkit controls. The disadvantage of this approach to minification is that it made it difficult to create custom Ajax Control Toolkit controls. Starting with this release of the Ajax Control Toolkit, we support dynamic minification. The JavaScript files in the Ajax Control Toolkit are minified at runtime instead of at build time. Scripts are minified only when in release mode. You can specify release mode with the web.config file or with the ToolkitScriptManager ScriptMode property.

Because of this change, the Ajax Control Toolkit now depends on the Ajax Minifier. You must include a reference to AjaxMin.dll in your Visual Studio project or you cannot take advantage of runtime minification. If you install the Ajax Control Toolkit from NuGet then AjaxMin.dll is added to your project as a NuGet dependency automatically. If you download the Ajax Control Toolkit from CodePlex then AjaxMin.dll is included in the download.

This change means that you no longer need to do anything special to create a custom Ajax Control Toolkit control. As an open source project, we hope more people will contribute to the Ajax Control Toolkit (yes, I am looking at you). We have been working hard on making it much easier to create new custom controls. More on this subject with the next release of the Ajax Control Toolkit.
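(Editorial aside, not from the original post: if you script your deployments, the release-mode switch can be flipped without editing web.config by hand. The file path below is a placeholder.)

# Set <compilation debug="false"> so the CombineScriptsHandler kicks in.
$path = "C:\inetpub\wwwroot\MyApp\web.config"        # assumed path
[xml]$config = Get-Content $path
$config.configuration.'system.web'.compilation.SetAttribute("debug", "false")
$config.Save($path)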
A Single Visual Studio Solution

We also made substantial changes to the Visual Studio solution and projects used by the Ajax Control Toolkit with this release. This change will matter to you only if you need to work directly with the Ajax Control Toolkit source code.

In previous releases of the Ajax Control Toolkit, we maintained separate solution and project files for ASP.NET 3.5, ASP.NET 4.0, and ASP.NET 4.5. Starting with this release, we now support a single Visual Studio 2012 solution that takes advantage of multi-targeting to build ASP.NET 3.5, ASP.NET 4.0, and ASP.NET 4.5 versions of the toolkit. This change means that you need Visual Studio 2012 to open the Ajax Control Toolkit project downloaded from CodePlex. For details on how we set up multi-targeting, please see Budi Adiono's blog post: http://www.budiadiono.com/2013/07/25/visual-studio-2012-multi-targeting-framework-project/

Summary

You can take advantage of this release of the Ajax Control Toolkit to significantly improve the performance of your website. You need to do two things: 1) create an AjaxControlToolkit.config file which lists the controls used in your app, and 2) register the AjaxControlToolkit.CombineScriptsHandler in the web.config file.

We made substantial changes to the Ajax Control Toolkit with this release. We think these changes will result in much better performance for multipage apps and make the process of building custom controls much easier. As always, we look forward to hearing your feedback.

    Read the article

  • Bug Triage

In this blog post brain dump, I'll attempt to describe the process my team tries to follow when dealing with new bug reports (specifically, code defect reports). This is not official Microsoft policy, just the way we do things… if you do things differently and want to share, you can do so at the bottom in the comments (or on your blog).

Feature Triage Team

A subset of the feature crew, the triage team (which has representation from the PM, Dev and QA disciplines), looks at all unassigned bugs at regular intervals. This can be weekly or daily (or another frequency) depending on which part of the product cycle we are in and what the untriaged bug load looks like. They discuss each bug, considering the evidence, and decide whether the bug goes from Not Yet Assigned to Assigned (plus the name of the DEV to fix it) or whether it goes from Active to Resolved (which means it gets assigned back to the requestor for closure, or for further debate if they were not present at the triage meeting). Close to critical milestones, the feature triage team needs to further justify bugs they take to additional higher-level triage teams.

Bug Opened = Not Yet Assigned

Someone (typically an SDET from the QA team) creates the bug item (e.g. in TFS), ensuring they populate all the relevant fields including: Title, Description, Repro Steps (including the Actual Result at the end of the steps), attachments of code and/or screenshots, Build number that they observed the issue in, regression details if applicable, how it was found, whether a test case exists or needs to be created, etc. They also indicate their opinion on the Priority and Severity. The bug status is left as Not Yet Assigned.

"Issue" versus "Fix for issue"

The solution to some bugs is easy to determine, e.g. "bug: the column name is misspelled". Obviously the fix is to correct the spelling – still, the triage team should be explicit and enter the correct spelling in the bug's Description. Note that a bad bug name here would be "bug: fix the spelling of the column" (it describes the solution, rather than the problem).

Other solutions are trickier to establish, e.g. "bug: the column header is not accessible (can only be clicked on with the mouse, not reached via keyboard)". What is the correct solution here? The last thing to do is leave this undetermined and just assign it to a developer. The solution has to be entered in the description. Behind this type of bug usually hides a spec defect or a new feature request.

The person opening the bug should focus on describing the issue, rather than the solution. The person indicates what the fix is in their opinion by stating the Expected Result (immediately after stating the Actual Result). If they have a complex suggested solution, that should be split out in a separate part, but the triage team has the final say before assigning it. If the solution is lengthy/complicated to describe, the bug can be assigned to the PM. Note: the strict interpretation suggests that any bug with no clear, obvious solution is always a hole in the spec and should always go to the PM. This also ensures the spec gets updated.

Not Yet Assigned - Not Yet Assigned (on someone else's plate)

If the bug is observed in our feature, but the cause is actually another team, we change the Area Path (which is the way we identify teams in TFS) and leave it as Not Yet Assigned. The triage team may add more comments as appropriate, including potentially changing the repro steps.
In some cases, we may even resolve the bug in our area path and open a new bug in the area path of the other team.

Even though there is no action on a dev on the team, the bug still needs to be tracked. One way of doing this is to implement some notification system that informs the team when the tracked bug changes status; another way is to occasionally run a global query (against all area paths) for bugs that have been opened by a member of the team and follow up with the current owners for stale bugs.

Not Yet Assigned - Resolved

This state transition can only be made by the Feature Triage Team.

0. Sometimes the bug description is not clear, and in that case it gets Resolved as More Information Needed, so the original requestor can provide it.

After understanding what the bug item is about, the first decision is to determine whether it needs to go to a dev.

1. If it is a known bug, it gets resolved as "Duplicate" and linked to the existing bug.

2. If it is "By Design" it gets resolved as such, indicating that the triage team does not think this is a bug.

3. If the bug does not repro on the latest bits, it is resolved as "No Repro".

4. The most painful: if it is decided that we cannot fix it for this release, it gets resolved as "Postponed" or "Won't Fix". The former is typically due to resource and time constraints, while the latter is due to deciding that it is not important enough to consume our resources in any release (yes, not all bugs must be fixed!). For both cases, there are other factors that contribute to the decision, such as: existence of a reasonable workaround, frequency we expect users to encounter the issue, dependencies on other teams to offer a solution, whether it breaks a core scenario, whether it prohibits customer feedback on a major feature, whether it is a regression from a previous release, impact of the fix on other partner teams (e.g. User Education, User Experience, Localization/Globalization), whether this is the right fix, whether the fix impacts performance goals, and last but not least, severity of the bug (e.g. loss of customer data, security threat, crash, hang). The bar for fixing a bug goes up as the release date approaches. The triage team becomes hardnosed about which bugs to take, while the developers are busy resolving assigned bugs; thus everyone drives for Zero Bug Bounce (ZBB). ZBB is when you have 0 active bugs older than 48 hours.

Not Yet Assigned - Assigned

If the bug is something we decide to fix in this release and the solution is known, then it is assigned to a DEV. This is either the developer that will do the work, or a Lead who can further assign it to one of his developer team based on a load balancing algorithm of their choosing.

Sometimes, the triage team needs the dev to do some investigation work before deciding whether to take the fix; similarly, the checkin for the fix may be gated on code review by the triage team. In these cases, these instructions are provided in the comments section of the bug, and when the developer is done they notify the triage team for a final decision.

Additionally, a Priority and Severity (from 0 to 4) have to be entered, e.g. a P0 means "drop anything you are doing and fix this now" whereas a P4 is something you get to after all P0, P1, P2 and P3 bugs are fixed.

From a testing perspective, if the bug was found through ad-hoc testing or by an external team, a decision is made whether test cases should be added to avoid future regressions.
This is communicated to the QA team.

Assigned - Resolved

When the developer receives the bug (they should be checking daily for new bugs on their plate, looking at bugs in order of priority and from older to newer) they can send it back to triage if the information is not clear. Otherwise, they investigate the bug, setting the Sub Status to "Investigating"; if they cannot make progress, they set the Sub Status to "Blocked" and discuss this with triage or whoever else can help them get unblocked. Once they are unblocked, they set the Sub Status to "Working on Solution"; once they are code complete they send a code review request, setting the Sub Status to "Fix Available". After the iterative code review process is over and everyone is happy with the fix, the developer checks it in and changes the state of the bug from Active (and Assigned to them) to Resolved (and Assigned to someone else).

The developer needs to ensure that when the status is changed to Resolved it is assigned to a QA person. For example, maybe the PM opened the bug, but it should be a QA person that verifies the fix - the developer needs to manually change the assignee in that case. Typically the QA person will send an email to the original requestor notifying them that the fix is verified.

Resolved - ??

In all cases above, note that the final state was Resolved. What happens after that? The final step should be Closed. The bug is closed once the QA person verifying the fix is happy with it. If the person is not happy, then they change the state from Resolved to Active, thus sending it back to the developer. If the developer and the QA person cannot reach agreement, then triage can be brought into it. An easy way to do that is to change the status back to Not Yet Assigned with appropriate comments so the triage team can re-review.

It is important to note that only QA can close a bug. That means that if the opener of the bug was a PM, when the bug gets resolved by the dev it may land on the PM's plate, and after a quick review the PM would re-assign it to an SDET, which is the only role that can close bugs. One exception to this is if the person that filed the bug is external: in that case, we leave it Resolved and assigned to them and also send them a notification that they need to verify the fix. Another exception is if specialized developer knowledge is needed for verifying the bug fix (e.g. it was a refactoring suggestion bug typically not observable by the user), in which case it is fine to have a developer verify the fix, and ideally a different developer to the one that opened the bug.

Other links on bug triage

A quick search reveals that others have talked about this subject, e.g. here, here, here, here and here.

Your take?

If you have other best practices your team uses to deal with incoming bug reports, feel free to share in the comments below or on your blog. Comments about this post welcome at the original blog.

    Read the article

  • CodePlex Daily Summary for Friday, February 26, 2010

CodePlex Daily Summary for Friday, February 26, 2010

New Projects

aion-gamecp: Aion Gamecp for aion Private server based on Aion Unique
Azure Email Queuer: Azure Email Queuer makes it easier for Developers Programming in the Cloud to Queue Emails to keep the UI Thread Clear for Requests. Developed w...
BIG1: Bob and Ian's Game. Written using XNA Game Studio Express. Basically an update of David Braben and Ian Bell's classic game "Elite." This is a nonco...
CMS7: CMS7 The CMS7 is composed of three module. (1)Main CMS Business (2)Process Customization (3)Role/Department Customization
CoreSharp Networking Core: A simple to use framework to develop efficient client/server application. The framework is part of my project at school and I hope it will benefit ...
Fullscreen Countdown: Small and basic countdown application. The countdown window can be resized to fit any size to display the minutes elapsed. Developped in C#, .NET F...
IRC4N00bz: Learning sockets, events, delegates, SQL, and IRC commands all in one big project! It's written in C# (Csharp) and hope you find it helpfull, or ev...
LjSystem: This project is a collection of my extensions to the BCL
MP3 Tags Management: A software to manage the tags of MP3 files
netone: All net in one
Next Dart (Dublin Area Rapid Transport): The shows the times of the next darts from a given station. It is a windows application that updates automatically and so is easier to use than th...
PChat - An OCDotNet.Org Presentation: PChat is a multithreaded pinnable chat server and client. It is designed to be a demonstration of Visual Studio 2010 MVC 2, for ocdotnet.org Use...
Pittsburgh Code Camp iPhone App: The Pittsburgh Code Camp iPhone Application is meant as a demonstration of the creation of an iPhone application while at the same time providing t...
Radical: Radical is an infrastructure framework
RadioAutomation: Windows application for radio automation.
SilverSynth - Digital Audio Synthesis for Silverlight: SilverSynth is a digial audio synthesis library for Silverlight developers to create synthesized wave forms from code. It supports synthesis of sin...
SkeinLibManaged: This implementation of the Skein Cryptographic Hash function is written entirely in Managed CSharp. It is posted here to share with the world at l...
SpecExplorerEval: We are checking out spec explorer and presenting on its use
SPOJemu: This is a SPOJ emulator. It allows you to define tests in xml and then check your application if it's working as you expected.
The C# Skype Chat bot: A Skype bot in C# for managing Skype chats.
VS 2010 Architecture Layers Patterns: Architecture layers patterns toolbox items for layers diagrams.
Yakiimo3D: Mostly DirectX 11 programming tutorials.
代码生成器: Project Details

New Releases

ArkSwitch: ArkSwitch v1.1.1: This release fixes a crash that occurs when certain processes with multiple primary windows are encountered.
BTP Tools: CSB, CUV and HCSB e-Sword files 2010-02-26: include csb.bbl csb+.bbl csb.cmt csbc.dct cuv.bbl cuv+.bbl cuv.cmt cuvc.dct hcsb+.bbl hcsbc.dct files for e-Sword 8.0
BubbleBurst: BubbleBurst v1.1: This is the second release of BubbleBurst, the subject of the book Advanced MVVM. This release contains a minor fix that was added after the book ...
DevTreks -social budgeting that improves lives and livelihoods: Social Budgeting Web Software, alpha 3b: Alpha 3b simplifies and strengthens state management.
With the exception of linked lists, the internal mechanics of addins have not been improved...
Dragonrealms PvpStance plugin for Genie: 1.0.0.4: This update is needed now that the DR server move broke the "profile soandso pvp" syntax. This version will capture the pvp stance out of the full...
FastCode: FastCode 1.0: Definitions <integerType>: byte, sbyte, short, ushort, int, uint, long, ulong <floatType>: float, double, decimal Base types extensions Intege...
Fullscreen Countdown: Fullscreen Countdown 1.0: First version
IRC4N00bz: IRC4N00bz_02252010.zip: I'm calling it a night. Here's the dll for where I'm at so far. It works, just lacks some abilities. Anything not included can be pulled from th...
Labrado: Labrado MiniTimer: Labrado MiniTimer is a convenient timer tool designed and implemented for GMAT test preparation.
LINQ to VFP: LinqToVfp (v1.0.17.1): Cleaned up WCF Data Service Expression Tree. (details...) This build requires IQToolkit v0.17b.
Microsoft Health Common User Interface: Release 8.0.200.000: This is version 8.0 of the Microsoft® Health Common User Interface Control Toolkit. The scope and requirements of this release are based on materia...
Mini SQL Query: Mini SQL Query Funky Dev Build (RC1+): The "Funk Dev Build" bit is that I added a couple of features I think are pretty cool. It is a "dev" build but I class it as stable. Find Object...
Neovolve: Neovolve.BlogEngine.Extensions 1.2: Updated extensions to work with BE 1.6. Updated Snippets extension to better handle excluded tags and fixed regex bug. Added SyntaxHighlighter exte...
Neovolve: Neovolve.BlogEngine.Web 1.1: Update to support BE version 1.6 Neovolve.BlogEngine.Web 1.1 contains a redirector module that translates Community Server url formats into BlogEn...
Next Dart (Dublin Area Rapid Transport): 1.0: There are 2 files NextDart 1.0.zip This contains just the files. Extract it to a folder and run NextDart.exe. NextDart 1.0 Intaller.zip This c...
Powershell4SQL: Version 1.2: Changes from version 1.1 Added additional attributes to simplify syntax. Server and Database become optional. Defaulted to (local) and 'master' ...
Radical: Radical (Desktop) 1.0: First stable drop
RaidTracker: Raid Tracker: a few tweaks
Raiser's Edge API Developer Toolkit: Alpha Release 1: This is an untested, alpha release. Contains RE API Toolkit built using 7.85 Dlls and 7.91 Dlls.
SharePoint Enhanced Calendar by ArtfulBits: ArtfulBits.EnhancedCalendar v1.3: New Features: Simple to activate mechanism added (add Enhanced Calendar Web Part on the same page as standard calendar) Support for any type of S...
Silverlight 4.0 Com Library for SQL Server Access: Version 1.0: This is the initial alpha release. It includes ExecuteQuery, ExecuteNonQuery and ExecuteScalar routines. See roadmap section of home page for detai...
Silverlight HTML 5 Canvas: SLCanvas 1.1: This release enables <canvas renderMethod="auto" onload="runme(this)"></canvas> or <canvas renderMethod="Silverlight" onload="runme(this)"></ca...
SilverSynth - Digital Audio Synthesis for Silverlight: SilverSynth 1.0: Source code including demo application.
StringDefs: StringDefs Alpha Release 1.01: In this release of the Library few namespaces are added.
STSDev 2008: STSDev 2008 2.1: Update to the StsDev 2008 project to correct Manifest Building issues.
Text to HTML: 0.4.0.2: Version changes (translated from Spanish): minor corrections in the translation system; handled the exception that appeared when deleting the language files.
The Silverlight Hyper Video Player [http://slhvp.com]: Release 4 - Friendly User Release (Pre-Beta): Release 4 - Friendly User Release (Pre-Beta) This version of the code has much of the design that we plan to go forward with for Mix and utilizes a...
TreeSizeNet: TreeSizeNet 0.10.2: Assemblies merged in one executable
VCC: Latest build, v2.1.30225.0: Automatic drop of latest build
VCC: Latest build, v2.1.30225.1: Automatic drop of latest build
VS 2010 Architecture Layers Patterns: VS 2010 RC Architecture Layers Patterns v1.0: Architecture layers patterns toolbox items based on the Microsoft Application Architecture Guide, 2nd Edition for the layer diagram designer of Vi...
Yakiimo3D: DirectX11 BitonicSortCPU Source and Binary: DirectX11 BitonicSortCPU sample source and binary.
Yakiimo3D: DirectX11 MandelbrotGPU Source and Binary: DirectX11 MandelbrotGPU source and binary.

Most Popular Projects

VSLab
OSIS Interop Tests
Rawr
WBFS Manager
AJAX Control Toolkit
Microsoft SQL Server Product Samples: Database
Silverlight Toolkit
Windows Presentation Foundation (WPF)
ASP.NET
Microsoft SQL Server Community & Samples

Most Active Projects

DinnerNow.net
Rawr
BlogEngine.NET
SLARToolkit - Silverlight Augmented Reality Toolkit
InfoService
SharpMap - Geospatial Application Framework for the CLR
Common Context Adapters
NB_Store - Free DotNetNuke Ecommerce Catalog Module
jQuery Library for SharePoint Web Services
patterns & practices – Enterprise Library

    Read the article

  • SQL Server and Hyper-V Dynamic Memory Part 2

    - by SQLOS Team
Part 1 of this series was an introduction and overview of Hyper-V Dynamic Memory. This part looks at SQL Server memory management and how the SQL engine responds to changing OS memory conditions.

Part 2: SQL Server Memory Management

As with any Windows process, sqlserver.exe has a virtual address space (VAS) of 4GB on 32-bit and 8TB in 64-bit editions. Pages in its VAS are mapped to pages in physical memory when the memory is committed and referenced for the first time. The collection of VAS pages that have been recently referenced is known as the Working Set. How and when SQL Server allocates virtual memory and grows its working set depends on the memory model it uses. SQL Server supports three basic memory models:

1. Conventional Memory Model

The Conventional model is the default SQL Server memory model and has the following properties:
- Dynamic - can grow or shrink its working set in response to load and external (operating system) memory conditions.
- OS uses 4K pages (not to be confused with SQL Server "pages", which are 8K regions of committed memory).
- Pageable - can be paged out to disk by the operating system.

2. Locked Page Model

The locked page memory model is set when SQL Server is started with the "Lock Pages in Memory" privilege*. It has the following characteristics:
- Dynamic - can grow or shrink its working set in the same way as the Conventional model.
- OS uses 4K pages.
- Non-Pageable - when memory is committed it is locked in memory, meaning that it will remain backed by physical memory and will not be paged out by the operating system.

A common misconception is to interpret "locked" as non-dynamic. A SQL Server instance using the locked page memory model will grow and shrink (allocate memory and release memory) in response to changing workload and OS memory conditions in the same way as it does with the conventional model. This is an important consideration when we look at Hyper-V Dynamic Memory – "locked" memory works perfectly well with "dynamic" memory.

* Note: in "Denali" (Standard Edition and above), and in SQL 2008 R2 64-bit (Enterprise and above editions), the Lock Pages in Memory privilege is all that is required to set this model. In 2008 R2 64-bit Standard Edition it also requires trace flag 845 to be set; in 2008 R2 32-bit editions it requires sp_configure 'awe enabled' 1.

3. Large Page Model

The Large page model is set using trace flag 834 and potentially offers a small performance boost for systems that are configured with large pages. It is characterized by:
- Static - memory is allocated at startup and does not change.
- OS uses large (>2MB) pages.
- Non-Pageable.

The large page model is supported with Hyper-V Dynamic Memory (and Hyper-V also supports large pages), but you get no benefit from using Dynamic Memory with this model since SQL Server memory does not grow or shrink. The rest of this article will focus on the locked and conventional SQL Server memory models.

When does SQL Server grow?

For "dynamic" configurations (Conventional and Locked memory models), the sqlservr.exe process grows – allocates and commits memory from the OS – in response to a workload. As much memory is allocated as is required to optimally run the query and buffer data for future queries, subject to limitations imposed by:

- SQL Server max server memory setting. If this configuration option is set, the buffer pool is not allowed to grow to more than this value.
In SQL Server 2008 this value represents single page allocations, and in "Denali" it represents any size page allocations and also managed CLR procedure allocations.

- Memory signals from OS. The operating system sets a signal on memory resource notification objects to indicate whether it has memory available or whether it is low on available memory. If there is only 32MB free for every 4GB of memory, a low memory signal is set, which continues until 64MB/4GB is free. If there is 96MB/4GB free, the operating system sets a high memory signal. SQL Server only allocates memory when the high memory signal is set.

To summarize, for SQL Server to grow you need three conditions: a workload, a max server memory setting higher than the current allocation, and high memory signals from the OS.

When does SQL Server shrink caches?

SQL Server as a rule does not like to return memory to the OS, but it will shrink its caches in response to memory pressure. Memory pressure can be divided into "internal" and "external".

- External memory pressure occurs when the operating system is running low on memory and low memory signals are set. The SQL Server Resource Monitor checks for low memory signals approximately every 5 seconds and it will attempt to free memory until the signals stop. To free memory SQL Server does the following:
  - Frees unused memory.
  - Notifies Memory Manager Clients to release memory:
    - Caches - free unreferenced cache objects.
    - Buffer pool - based on oldest access times.
The freed memory is released back to the operating system. This process continues until the low memory resource notifications stop.

- Internal memory pressure occurs when the size of different caches and allocations increases but the SQL Server process needs to keep its total memory within a target value. For example, if max server memory is set and certain caches are growing large, it will cause SQL to free memory for re-use internally, but not to release memory back to the OS. If you lower the value of max server memory you will generate internal memory pressure that will cause SQL to release memory back to the OS.

Memory pressure handling has not changed much since SQL 2005 and it was described in detail in a blog post by Slava Oks.

Note that SQL Server Express is an exception to the above behavior. Unlike other editions it does not assume it is the most important process running on the system but tries to be more "desktop" friendly. It will empty its working set after a period of inactivity.

How does SQL Server respond to changing OS memory?

In SQL Server 2005 support for Hot-Add memory was introduced. This feature, available in Enterprise and above editions, allows the server to make use of any extra physical memory that was added after SQL Server started. Being able to add physical memory while the system is running is limited to specialized hardware, but with the Hyper-V Dynamic Memory feature, when new memory is allocated to a guest virtual machine, it looks like hot-add physical memory to the guest. What this means is that, thanks to the hot-add memory feature, SQL Server 2005 and higher can dynamically grow if more "physical" memory is granted to a guest VM by Hyper-V Dynamic Memory.

SQL Server checks OS memory every second and dynamically adjusts its "target" (based on available OS memory and max server memory) accordingly.

In "Denali", Standard Edition will also have sqlserver.exe support for hot-add memory when running virtualized (i.e.
detecting and acting on Hyper-V Dynamic Memory allocations).

How does a SQL Server workload in a guest VM impact Hyper-V dynamic memory scheduling?

When a SQL workload causes the sqlserver.exe process to grow its working set, the Hyper-V memory scheduler will detect memory pressure in the guest VM and add memory to it. SQL Server will then detect the extra memory and grow according to workload demand. In our tests we have seen this feedback process cause a guest VM to grow quickly in response to SQL workload - we are still working on characterizing this ramp-up.

How does SQL Server respond when Hyper-V removes memory from a guest VM through ballooning?

If pressure from other VMs causes Hyper-V Dynamic Memory to take memory away from a VM through ballooning (allocating memory with a virtual device driver and returning it to the host OS), Windows Memory Manager will page out unlocked portions of memory and signal low resource notification events. When SQL Server detects these events it will shrink memory until the low memory notifications stop (see the cache shrinking description above).

This raises another question: can we make SQL Server release memory more readily, and hence behave more "dynamically", without compromising performance? In certain circumstances where the application workload is predictable, it may be possible to have a job which varies "max server memory" according to need, lowering it when the engine is inactive and raising it before a period of activity. This would have limited applicability, but it is something we're looking into.

What Memory Management changes are there in SQL Server "Denali"?

In SQL Server "Denali" (aka SQL11) the Memory Manager has been re-written to be more efficient. The main changes are summarized in this post. An important change with respect to Hyper-V Dynamic Memory support is that the max server memory setting now includes any size page allocations and managed CLR procedure allocations, so it represents a closer approximation to total sqlserver.exe memory usage. This makes it easier to calculate a value for max server memory, which becomes important when configuring virtual machines to work well with Hyper-V Dynamic Memory Startup and Maximum RAM settings.

Another important change is no more AWE or hot-add support for the 32-bit edition. This means if you're running a 32-bit edition of Denali you're limited to a 4GB address space and will not be able to take advantage of dynamically added OS memory that wasn't present when SQL Server started (though Hyper-V Dynamic Memory is still a supported configuration).

In part 3 we'll develop some best practices for configuring and using SQL Server with Dynamic Memory.

Originally posted at http://blogs.msdn.com/b/sqlosteam/
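(Editorial sketch, not from the original post: the "job which varies max server memory according to need" idea above could look like this in PowerShell. Invoke-Sqlcmd from the SQL Server tools is assumed, the value is a placeholder, and sysadmin rights are required.)

# Lower max server memory (MB) ahead of a quiet period; run again with a higher
# value before the next busy period. Lowering it creates internal memory
# pressure, causing SQL Server to release memory back to the OS (see above).
$sql = @"
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096; RECONFIGURE;
"@
Invoke-Sqlcmd -ServerInstance "(local)" -Query $sql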

    Read the article

  • CodePlex Daily Summary for Saturday, February 20, 2010

CodePlex Daily Summary for Saturday, February 20, 2010

New Projects
- BerkeliumDotNet: BerkeliumDotNet is a .NET wrapper for the Berkelium library written in C++/CLI. Berkelium provides off-screen browser rendering via Google's Chromi...
- BoxBinary Descriptive WebCacheManager Framework: Allows you to take a simple, different, and more effective method of caching items in ASP.NET. The developer defines descriptive categories to deco...
- CHS Extranet: CHS Extranet Project, working with the RM Community to build the new RM Easylink type application
- DbModeller: Generates one class having properties with the basic C# types, aimed to serve as a business object, by using reflection from the passed objects insta...
- Dice.NET: Dice.NET is a simple dice roller for those cases where you just kinda forgot your real dice. It's very simple in setup/use.
- EBI App with the SQL Server CE and websync: EBI App with the SQL Server CE and SQL Server DEV with Merge Replication (Web Synchronization). Here we are trying to develop an application which yo...
- Family Tree Analyzer: This project will be a C# project which aims to allow users to import their GEDCOM files and produce various data analysis reports, such as a list o...
- Go! Embedded Device Builder: Go! is a complete software engineering environment for the creation of embedded Linux devices. It enables you to engineer, design, develop, build, ...
- HiddenWordsReadingPlan: HiddenWordsReadingPlan
- Html to OpenXml: A library to convert simple or advanced HTML to a plain OpenXml document.
- Jeffrey Palermo's shared source: This project contains multiple samples with various snippets and projects from blog posts, user group talks, and conference sessions.
- Krypton Palette Selectors: A small C# control library that allows for simplified palette selection and management. It makes use of and relies on Component Factory's excellen...
- OCInject: A DI container on a diet. This is a basic DI container that lives in your project, not an external assembly, with support for auto generated delegat...
- Photo Organiser: A small utility to sort photos into a new file structure based on date held in their XMP or EXIF tags (YYYY/MM/DD/hhmmss.jpg). Developed in C# / WPF.
- QPAPrintLib: Print every document with its recommended program
- Reusable Library: A collection of reusable abstractions for enterprise application developers.
- Runtime Intelligence Data Visualizer: WPF application used to visualize Runtime Intelligence data using the Data Query Service from the Runtime Intelligence Endpoint Starter Kit.
- ScreenRec: ScreenRec is a program to record your desktop and save it to images, or to save a single picture.
- Silverlight Internet Desktop Application Guidance: SLIDA (Silverlight Internet Desktop Applications) provides process guidance for developers creating Silverlight applications that run exclusively o...
- WSUS Web Administration: Web Application to remotely manage WSUS

New Releases
- 7zbackup - PowerShell Script to Backup Files with 7zip: 7zBackup v. 1.7.0 Stable: Bug solved: Test-Path-Writable fails on the root of the system drive on Windows 7. Therefore the function now accepts an optional parameter to specify if ...
- aqq: sec 1.02: SEC Project (Sistema Economico Comercial) in Visual FoxPro 9.0; open source, i.e. free and with source code, GNU/GPL license. For more information ...
- ASP.NET MVC Attribute Based Route Mapper: Attribute Based Routes v0.2: ASP.NET MVC Attribute Based Route Mapper
- BoxBinary Descriptive WebCacheManager Framework: Initial release: Initial assembly release for anyone wanting the files referenced in my talk at Umbraco's 5th Birthday London meetup, 16/Feb/2010. The code is fairly...
- Build Version Increment Add-In Visual Studio: Build Version Increment v2.2 Beta: 2.2.10050.1548. Added support for custom increment schemes via plugins.
- Build Version Increment Add-In Visual Studio: BuildVersionIncrement v2.1: 2.1.10050.1458. Fix: localization issues. Feature: unmanaged C support. Feature: multi-monitor support. Feature: global/default settings. Fix: De...
- CHS Extranet: Beta 2.3: Beta 2.3 release change log: fixed the update-my-details page not updating the department/form; tried to fix the issue when the ampersand (&) is in t...
- Cover Creator: CoverCreator 1.2.2: Resolved bug: if there is only one CD entry in freedb.org the application does nothing. Installation instructions: just unzip CoverCreator and run CoverCr...
- Employee Scheduler: Employee Scheduler 2.3: Extract the files to a directory and run Lab Hours.exe. Add an employee. Double click an employee to modify their times. Please contact me through ...
- EnOceanNet: EnOceanNet v1.11: Recompiled for .NET Framework 4 RC
- Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts 3.0.3 beta 4 Released: This release contains fixes for the following bugs: DataBinding was not working as expected with RIA services; DataSeries visual wa...
- Html to OpenXml: HtmlToOpenXml 0.1 Beta: This is a beta version for testing purposes.
- Jeffrey Palermo's shared source: Web Forms front controller: This code goes along with my blog post about adding code that executes before your web form: http://jeffreypalermo.com/blog/add-post-backs-to-mvc-nd...
- Krypton Palette Selectors: Initial Release: The initial release. Contains only the KryptonPaletteDropButton control.
- LaunchMeNot: LaunchMeNot 1.10: Lots has been added in this release. Feel free, however, to suggest features you'd like on the forums. Changes in LaunchMeNot 1.10 (19th February ...
- Magellan: Magellan 1.1.36820.4796 Stable: This is a stable release. It contains a fix for a bug whereby the content of zones in a shared layout couldn't use element bindings (due to name sc...
- Magellan: Magellan 1.1.36883.4800 Stable: This release includes a couple of major changes: a new Forms object model (see this page for details); Magellan objects are now part of the defau...
- MAISGestão: LayerDiagram: LayerDiagram
- Matrix3DEx: Matrix3DEx 1.0.2.0: Fixes the SwapHandedness method. This release includes factory methods for all common transformation matrices like rotation, scaling, translation, ...
- MDownloader: MDownloader-0.15.1.55880: Fixed bugs.
- NewLineReplacer: QPANewLineReplacer 1.1: Replace letters quickly and easily in large text files.
- OAuthLib: OAuthLib (1.5.0.1): The only difference from 1.5.0.0 is the version number.
- OCInject: First Release: First release.
- Photo Organiser: Installer alpha release: First release - contains known bugs, but works for the most part.
- Pinger: Pinger-1.0.0.0 Source: The latest and first source code.
- Pinger: Pinger-1.0.0.2 Binary: This version can work on versions of Windows older than 7, but I haven't tested it! tnx
- Pinger: Pinger-1.0.0.2 Source: Hi, it's the raw source!
- Reusable Library: v1.0: A collection of reusable abstractions for enterprise application developers.
- ScreenRec: Version 1: One version of this program.
- Sense/Net Enterprise Portal & ECMS: SenseNet 6.0 Beta 5: Sense/Net 6.0 Beta 5 release notes. We are proud to finally present you Sense/Net 6.0 Beta 5, a major feature overhaul over beta43, and hopefully th...
- Silverlight Internet Desktop Application Guidance: v1: Project templates for Silverlight IDA and Silverlight Navigation IDA.
- SLAM! SharePoint List Association Manager: SLAM v1.3: The SharePoint List Association Manager is a platform for managing lists in SharePoint in a relational manner. The SLAM Hierarchy Extension works ...
- SQL Server PowerShell Extensions: 2.0.2 Production: Release 2.0.1 re-implements SQLPSX as PowerShell version 2.0 modules. SQLPSX consists of 8 modules with 133 advanced functions, 2 cmdlets and 7 sc...
- StoryQ: StoryQ 2.0.2 Library and Converter UI: Fixes: 16086. This release includes the following files: StoryQ.dll - the actual StoryQ library for you to reference; StoryQ.xml - the xmldoc for ...
- Text to HTML: 0.4.0 beta: Version changes: added the Spanish, English and French languages; added a configuration window; dynamic loading of variab...
- thor: Version 1.1: What's new in version 1.1: specify whether or not to display the subject of appointments on a calendar; specify whether or not to use a booking agen...
- TweeVo: Tweet What Your TiVo Is Recording: TweeVo v1.0: TweeVo v1.0
- VCC: Latest build, v2.1.30219.0: Automatic drop of latest build
- VFPnfe: Manually generated XML file: Here is a file that generates the NF-e XML manually. I am working on version 1.1 of this project; please wait, and in the meantime download another project...
- Windows Double Explorer: WDE v0.3.7: Optimization; locked tabs can be reset to the locked directory (single & multi); folder drag-drop to the tab control creates a new tab; splash screen; direcl...
- WPF ShaderEffect Generator: WPF ShaderEffect Generator 1.5: Visual Studio 2008 and RC 2010 are now supported. Different profiles can now be used to compile the shader with. Changes: Visual Studio RC 2010 is ...

Most Popular Projects
Rawr; WBFS Manager; AJAX Control Toolkit; Microsoft SQL Server Product Samples: Database; Silverlight Toolkit; Windows Presentation Foundation (WPF); Image Resizer Powertoy Clone for Windows; ASP.NET; DotNetNuke® Community Edition; Microsoft SQL Server Community & Samples

Most Active Projects
DinnerNow.net; Rawr; Sharpy; BlogEngine.NET; SharePoint Contrib; jQuery Library for SharePoint Web Services; NB_Store - Free DotNetNuke Ecommerce Catalog Module; patterns & practices – Enterprise Library; PHPExcel; Fluent Ribbon Control Suite

    Read the article

  • Calling Web Service Functions Asynchronously from a Web Page

    - by SGWellens
Over on the Asp.Net forums where I moderate, a user had a problem calling a Web Service from a web page asynchronously. I tried his code on my machine and was able to reproduce the problem. I was able to solve his problem, but only after taking the long scenic route through some of the more perplexing nuances of Web Services and Proxies. Here is the fascinating story of that journey.

Start with a simple Web Service:

    public class Service1 : System.Web.Services.WebService
    {
        [WebMethod]
        public string HelloWorld()
        {
            // sleep 10 seconds
            System.Threading.Thread.Sleep(10 * 1000);
            return "Hello World";
        }
    }

The 10 second delay is added to make calling an asynchronous function more apparent. If you don't call the function asynchronously, it takes about 10 seconds for the page to be rendered back to the client. If the call is made from a Windows Forms application, the application freezes for about 10 seconds.

Add the web service to a web site. Right-click the project and select "Add Web Reference…"

Next, create a web page to call the Web Service. Note: an asp.net web page that calls an 'Async' method must have the Async property set to true in the page's header:

    <%@ Page Language="C#"
             AutoEventWireup="true"
             CodeFile="Default.aspx.cs"
             Inherits="_Default"
             Async='true' %>

Here is the code to create the Web Service proxy and connect the event handler. Shrewdly, we make the proxy object a member of the Page class so it remains instantiated between the various events.

    public partial class _Default : System.Web.UI.Page
    {
        localhost.Service1 MyService;  // web service proxy

        // ---- Page_Load ---------------------------------
        protected void Page_Load(object sender, EventArgs e)
        {
            MyService = new localhost.Service1();
            MyService.HelloWorldCompleted += EventHandler;
        }

Here is the code to invoke the web service and handle the event:

    // ---- Async and EventHandler (delayed render) --------------------------
    protected void ButtonHelloWorldAsync_Click(object sender, EventArgs e)
    {
        // blocks
        ODS("Pre HelloWorldAsync...");
        MyService.HelloWorldAsync();
        ODS("Post HelloWorldAsync");
    }

    public void EventHandler(object sender, localhost.HelloWorldCompletedEventArgs e)
    {
        ODS("EventHandler");
        ODS("    " + e.Result);
    }

    // ---- ODS ------------------------------------------------
    //
    // Helper function: Output Debug String
    public static void ODS(string Msg)
    {
        String Out = String.Format("{0}  {1}", DateTime.Now.ToString("hh:mm:ss.ff"), Msg);
        System.Diagnostics.Debug.WriteLine(Out);
    }

I added a utility function I use a lot: ODS (Output Debug String). Rather than include the library it is part of, I included it in the source file to keep this example simple.

Fire up the project, open up a debug output window, press the button and we get this in the debug output window:

    11:29:37.94 Pre HelloWorldAsync...
    11:29:37.94 Post HelloWorldAsync
    11:29:48.94 EventHandler
    11:29:48.94 Hello World

Sweet. The asynchronous call was made and returned immediately. About 10 seconds later, the event handler fires and we get the result. Perfect… right? Not so fast, cowboy. Watch the browser during the call. What the heck? The page is waiting for 10 seconds. Even though the asynchronous call returned immediately, Asp.Net is waiting for the event to fire before it renders the page. This is NOT what we wanted.
I experimented with several techniques to work around this issue. Some may erroneously describe my behavior as 'hacking' but, since no ingesting of Twinkies was involved, I do not believe hacking is the appropriate term. If you examine the proxy that was automatically created, you will find a synchronous call to HelloWorld along with an additional set of methods to make asynchronous calls. I tried the other asynchronous method supplied in the proxy:

    // ---- Begin and CallBack ----------------------------------
    protected void ButtonBeginHelloWorld_Click(object sender, EventArgs e)
    {
        ODS("Pre BeginHelloWorld...");
        MyService.BeginHelloWorld(AsyncCallback, null);
        ODS("Post BeginHelloWorld");
    }

    public void AsyncCallback(IAsyncResult ar)
    {
        String Result = MyService.EndHelloWorld(ar);

        ODS("AsyncCallback");
        ODS("    " + Result);
    }

The BeginHelloWorld function in the proxy requires a callback function as a parameter. I tested it and the debug output window looked like this:

    04:40:58.57 Pre BeginHelloWorld...
    04:40:58.57 Post BeginHelloWorld
    04:41:08.58 AsyncCallback
    04:41:08.58 Hello World

It works the same as before except for one critical difference: the page rendered immediately after the function call. I was worried the page object would be disposed after rendering the page, but the system was smart enough to keep the page object in memory to handle the callback.

Both techniques have a use:

Delayed Render: Say you want to verify a credit card, look up shipping costs and confirm whether an item is in stock. You could have three web service calls running in parallel and not render the page until all were finished. Nice. You can send information back to the client as part of the rendered page when all the services are finished. (A hedged sketch of this pattern appears at the end of this article.)

Immediate Render: Say you just want to start a service running and return to the client. You can do that too. However, the page gets sent to the client before the service has finished running, so you will not be able to update parts of the page when the service finishes running.

Summary: YourFunctionAsync() and an EventHandler will not render the page until the handler fires. BeginYourFunction() and a CallBack function will render the page as soon as possible.

I found all this to be quite interesting and did a lot of searching and researching for documentation on this subject… but there isn't a lot out there. The biggest clues are the parameters that can be sent to the WSDL.exe program: http://msdn.microsoft.com/en-us/library/7h3ystb6(VS.100).aspx Two parameters are oldAsync and newAsync. OldAsync will create the Begin/End functions; newAsync will create the Async/Event functions. Caveat: I haven't tried this but it was stated in this article. I'll leave confirming this as an exercise for the student.

Included Code: I'm including the complete test project I created to verify the findings. The project was created with VS 2008 SP1. There is a solution file with 3 projects; the 3 projects are:

- Web Service
- Asp.Net Application
- Windows Forms Application

To decide which program runs, you right-click a project and select "Set as Startup Project". I created and played with the Windows Forms application to see if it would reveal any secrets. I found that in the Windows Forms application, the generated proxy did NOT include the Begin/Callback functions. Those functions are only generated for Asp.Net pages, probably for the reasons discussed earlier. Maybe those Microsoft boys and girls know what they are doing.
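As an illustration of the Delayed Render pattern above, here is a minimal sketch of a page that fires three proxy calls in parallel. It is not from the original sample project: the proxy classes (localhost.CardService and friends), their methods, event args, and the Label controls are all hypothetical stand-ins for whatever your generated proxies and markup actually expose. The point is the shape: several XxxAsync() calls, one Completed handler each, and Asp.Net holding the render until every handler has fired.

    // Hedged sketch - hypothetical proxies and controls, pattern per this article.
    // The page directive must include Async='true' for this to work.
    public partial class Checkout : System.Web.UI.Page
    {
        localhost.CardService CardService;         // hypothetical proxy
        localhost.ShippingService ShippingService; // hypothetical proxy
        localhost.StockService StockService;       // hypothetical proxy

        protected void Page_Load(object sender, EventArgs e)
        {
            CardService = new localhost.CardService();
            ShippingService = new localhost.ShippingService();
            StockService = new localhost.StockService();

            CardService.VerifyCardCompleted += CardDone;
            ShippingService.GetCostCompleted += ShippingDone;
            StockService.CheckStockCompleted += StockDone;
        }

        protected void ButtonCheckout_Click(object sender, EventArgs e)
        {
            // All three calls return immediately and run in parallel;
            // the page renders only after all three Completed events fire.
            CardService.VerifyCardAsync("4111-1111-1111-1111");
            ShippingService.GetCostAsync("98052");
            StockService.CheckStockAsync("SKU-12345");
        }

        void CardDone(object sender, localhost.VerifyCardCompletedEventArgs e)
        {
            LabelCard.Text = e.Result;      // LabelCard etc. are hypothetical controls
        }

        void ShippingDone(object sender, localhost.GetCostCompletedEventArgs e)
        {
            LabelShipping.Text = e.Result;
        }

        void StockDone(object sender, localhost.CheckStockCompletedEventArgs e)
        {
            LabelStock.Text = e.Result;
        }
    }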
I hope someone finds this useful. Steve Wellens

    Read the article

  • Back to Basics: When does a .NET Assembly Dependency get loaded

    - by Rick Strahl
When we work on typical day to day applications, it's easy to forget some of the core features of the .NET framework. For me personally it's been a long time since I've learned about some of the underlying CLR system level services, even though I rely on them on a daily basis. I often think only about high level application constructs and/or high level framework functionality; the low level stuff is often just taken for granted. Over the last week at DevConnections I had all sorts of low level discussions with other developers about the inner workings of this or that technology (especially in light of my Low Level ASP.NET Architecture talk and the Razor Hosting talk). One topic that came up a couple of times, and ended up a point of confusion even amongst some seasoned developers (including some folks from Microsoft <snicker>), is when assemblies actually load into a .NET process.

There are a number of different ways that assemblies are loaded in .NET. When you create a typical project, assemblies usually come from:

- The assembly reference list of the top level 'executable' project
- The assembly references of referenced projects
- Dynamic loading at runtime via AppDomain/Reflection loading

In addition .NET automatically loads mscorlib (most of the System namespace) as part of the boot process that hosts the .NET runtime in EXE apps, or in some other kind of runtime hosting environment (runtime hosting in servers like IIS, SQL Server or COM Interop). In hosting environments the runtime host may also pre-load a bunch of assemblies on its own (for example the ASP.NET host requires all sorts of assemblies just to run itself, before ever routing into your user specific code).

Assembly Loading

The most obvious source of loaded assemblies is the top level application's assembly reference list. You can add assembly references to a top level application and those assembly references are then available to the application. In a nutshell, referenced assemblies are not immediately loaded - they are loaded on the fly as needed. So regardless of whether you have an assembly reference in a top level project or in a dependent assembly, assemblies typically load on an as-needed basis, unless explicitly loaded by user code. The same is true of dependent assemblies.

To check this out I ran a simple test. I have a utility assembly, Westwind.Utilities, which is a general purpose library that can work in any type of project. Due to a couple of small requirements for encoding and a logging piece that allows logging Web content (a dependency on HttpContext.Current), this utility library has a dependency on System.Web. Now System.Web is a pretty large assembly and generally you'd want to avoid adding it to a non-Web project if it can be helped. So I created a Console Application that loads my utility library. The top level Console app has a reference to Westwind.Utilities and System.Data (beyond the core .NET libs); the Westwind.Utilities project on the other hand has quite a few dependencies, including System.Web.

I then add a main program that accesses only a simple utility method in the Westwind.Utilities library, one that doesn't require any of the classes that access System.Web:

    static void Main(string[] args)
    {
        Console.WriteLine(StringUtils.NewStringId());
        Console.ReadLine();
    }

StringUtils.NewStringId() calls into Westwind.Utilities, but it doesn't rely on System.Web. Any guesses what the assembly list looks like when I stop the code on the ReadLine() command?
I'll wait here while you think about it… … …

So, when I stop on ReadLine() and then fire up Process Explorer and check the assembly list, we can see that .NET has not actually loaded any of the dependencies of the Westwind.Utilities assembly. Also not loaded is the top level System.Data reference, even though it's in the dependent assembly list of the top level project. Since this particular function I called only uses core System functionality (contained in mscorlib), there's in fact nothing else loaded beyond the main application and my Westwind.Utilities assembly that contains the method accessed. None of the dependencies of Westwind.Utilities were loaded.

If you were to open the assembly in a disassembler like Reflector or ILSpy, you would however see all the compiled-in dependencies. The referenced assemblies are in the dependency list and they are loadable, but they are not immediately loaded by the application. In other words the C# compiler and .NET linker are smart enough to figure out the dependencies based on the code that is actually referenced, with dependencies cascading down from your top level application into the referenced assemblies. In the example above the usage requirement is pretty obvious, since I'm only calling a single static method and then exiting the app, but in more complex applications these dependency relationships become very complicated. However, it's all taken care of by the compiler and linker figuring out what types and members are actually referenced and including only those assemblies that are in fact referenced in your code or required by any of your dependencies.

The good news here is: if you are referencing an assembly that has a dependency on something like System.Web in a few places that are not actually accessed by any of your code, or by any dependent assembly code that you are calling, that assembly is never loaded into memory!

Some Hosting Environments pre-load Assemblies

The load behavior can vary, however. In Console and desktop applications we have full control over assembly loading, so we see the core CLR behavior. Other environments like ASP.NET will preload referenced assemblies explicitly as part of the startup process, primarily to minimize load conflicts. Specifically, ASP.NET pre-loads all assemblies referenced in the assembly list and the /bin folder. So in Web applications it definitely pays to minimize your top level assemblies if they are not used.

Understanding when Assemblies Load

To clarify what I described in the first example and see it actually happen, let's look at a couple of other scenarios. To see assemblies loading at runtime in real time, let's create a utility function to print out loaded assemblies to the console:

    public static void PrintAssemblies()
    {
        var assemblies = AppDomain.CurrentDomain.GetAssemblies();
        foreach (var assembly in assemblies)
        {
            Console.WriteLine(assembly.GetName());
        }
    }
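As an aside (not part of the original post): instead of polling GetAssemblies(), you can also subscribe to the standard AppDomain.AssemblyLoad event, which fires the moment the CLR brings an assembly into the AppDomain. Wiring it up first thing in Main() makes the just-in-time loading demonstrated below easy to watch live:

    // Subscribe early (e.g. first thing in Main) so no loads are missed.
    // AssemblyLoad fires each time the CLR loads an assembly into this AppDomain.
    AppDomain.CurrentDomain.AssemblyLoad += (sender, args) =>
    {
        Console.WriteLine("Loaded: " + args.LoadedAssembly.FullName);
    };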
Now let's look at the first scenario, where I have a class method that internally uses System.Web. Let's add a method to my main program like this:

    static void Main(string[] args)
    {
        Console.WriteLine(StringUtils.NewStringId());
        Console.ReadLine();
        PrintAssemblies();
    }

    public static void WebLogEntry()
    {
        var entry = new WebLogEntry();
        entry.UpdateFromRequest();
        Console.WriteLine(entry.QueryString);
    }

UpdateFromRequest() internally accesses HttpContext.Current to read some information off the ASP.NET Request object, so it clearly needs a reference to System.Web to work. In this first example, the method that holds the calling code is never called, but exists as a static method that can potentially be called externally at some point. What do you think will happen here with the assembly loading? Will System.Web load in this example?

No - it doesn't. Because the WebLogEntry() method is never called by the mainline application (or anywhere else), System.Web is not loaded. .NET dynamically loads assemblies as the code that needs them is called. No code references the WebLogEntry() method, and so System.Web is never loaded.

Next, let's add the call to this method, which should trigger System.Web to be loaded because a dependency exists. Let's change the code to:

    static void Main(string[] args)
    {
        Console.WriteLine(StringUtils.NewStringId());

        Console.WriteLine("--- Before:");
        PrintAssemblies();

        WebLogEntry();

        Console.WriteLine("--- After:");
        PrintAssemblies();

        Console.ReadLine();
    }

    public static void WebLogEntry()
    {
        var entry = new WebLogEntry();
        entry.UpdateFromRequest();
        Console.WriteLine(entry.QueryString);
    }

Looking at the code now, when do you think System.Web will be loaded? Will the "before" list include it?

Yup, System.Web gets loaded, but only after it's actually referenced. In fact, up until just before the call to UpdateFromRequest(), System.Web is not loaded - it only loads when the method is actually called and the executing code requires the reference.

Moral of the Story

So what have we learned - or maybe remembered again?

- Dependent assembly references are not pre-loaded when an application starts (by default)
- Dependent assemblies that are not referenced by executing code are never loaded
- Dependent assemblies are just-in-time loaded when first referenced in code

All of this is nothing new - .NET has always worked like this. But it's good to have a refresher now and then and go through the exercise of seeing it work in action. It's not one of those things we think about every day, and as I found out last week, I couldn't remember exactly how it worked since it's been so long since I learned about it. And apparently I'm not the only one, as several other people I had discussions with in relation to loaded assemblies also didn't recall exactly what should happen, or assumed incorrectly that just having a reference automatically loads the assembly.

The moral of the story for me is: trying at all costs to eliminate an assembly reference from a component is not quite as important as it's often made out to be. For example, the Westwind.Utilities module described above has a logging component, including a Web specific logging entry that supports pulling information from the active HTTP Context. Adding that feature requires a reference to System.Web. Should I worry about this in the scope of this library? Probably not, because if I don't use that one class of nearly a hundred, System.Web never gets pulled into the parent process.
IOW, System.Web only loads when I use that specific feature, and if I am, well, I clearly have to be running in a Web environment anyway to use it realistically. The alternative would be considerably uglier: pulling the WebLogEntry class out, sticking it into another assembly, and breaking up the logging code. In this case - definitely not worth it.

So, .NET definitely goes through some pretty nifty optimizations to ensure that it loads only what it needs, and in most cases you can just rely on .NET to do the right thing. Sometimes, though, assembly loading can go wrong (especially when signed and versioned local assemblies are involved), but that's subject for a whole other post…

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in .NET  CSharp

    Read the article

< Previous Page | 129 130 131 132 133 134 135 136 137 138 139 140  | Next Page >