Search Results

Search found 4501 results on 181 pages for 'vostro 1000'.

Page 23 of 181

  • Beginner SQL section: avoiding repeated expression

    - by polygenelubricants
    I'm entirely new to SQL, but let's say that on the StackExchange Data Explorer I just want to list the top 15 users by reputation, and I wrote something like this:

        SELECT TOP 15 DisplayName, Id, Reputation, Reputation/1000 AS RepInK
        FROM Users
        WHERE RepInK > 10
        ORDER BY Reputation DESC

    Currently this gives the error "Invalid column name 'RepInK'", which makes sense, I think, because RepInK is not a column in Users. I can easily fix this by saying WHERE Reputation/1000 > 10, essentially repeating the formula. So the questions are: Can I actually use the RepInK "column" in the WHERE clause? Do I perhaps need to create a virtual table/view with this column and then run a SELECT/WHERE query on it? Can I name an expression such as Reputation/1000 so I only have to repeat the name in a few places instead of the formula? What is this called: a substitution macro, a function, a stored procedure? Is there an SQL cheat sheet, glossary of terms, or language specification I can use to quickly pick up the syntax and semantics of the language? I understand that there are different "flavors".
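
    For reference, a sketch of one common fix (my suggestion, not from the question): name the expression once in a derived table so the outer query can filter and sort on it.

        SELECT TOP 15 DisplayName, Id, Reputation, RepInK
        FROM (
            SELECT DisplayName, Id, Reputation,
                   Reputation/1000 AS RepInK   -- expression named exactly once
            FROM Users
        ) AS u
        WHERE RepInK > 10
        ORDER BY Reputation DESC;

    On SQL Server (which the Data Explorer uses), CROSS APPLY (SELECT Reputation/1000) x(RepInK) achieves the same thing without nesting the whole query.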

    Read the article

  • A more efficient millisecond conversion method?

    - by cube
    I am currently using this method to convert milliseconds to min:sec:1/10sec. However, it does not seem to be efficient at all. Would anyone know of a faster, more optimized way of accomplishing the same thing?

        mills.prototype.formatTime = function (time) {
            var elapsedTime = (time * 1000);
            // Minutes
            var elapsedM = (elapsedTime / 60000) | 0;
            var remaining = elapsedTime - (elapsedM * 60000);
            // add a leading zero if it's a single digit number
            if (elapsedM < 10) { elapsedM = "0" + elapsedM; }
            // Seconds
            var elapsedS = ((remaining / 1000) | 0);
            remaining -= (elapsedS * 1000);
            // add leading zero
            if (elapsedS < 10) { elapsedS = "0" + elapsedS; }
            // Hundredths
            var elapsedFractions = ((remaining / 10) | 0);
            if (elapsedFractions < 10) { elapsedFractions = "0" + elapsedFractions; }
            // display results nicely
            var time_data = elapsedM + ":" + elapsedS + ":" + elapsedFractions;
            // return time_data;
            return [time_data, elapsedM, elapsedS, elapsedFractions];
        };
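
    One possible rewrite, as a sketch rather than a measured improvement (it assumes, like the original, that time arrives in seconds): the arithmetic is already cheap, so the main cleanup is factoring out the zero-padding; pad() is a hypothetical helper.

        function pad(n) { return n < 10 ? "0" + n : "" + n; }

        mills.prototype.formatTime = function (time) {
            var ms = (time * 1000) | 0;
            var m = (ms / 60000) | 0;           // whole minutes
            var s = ((ms % 60000) / 1000) | 0;  // leftover seconds
            var h = ((ms % 1000) / 10) | 0;     // leftover hundredths
            return [pad(m) + ":" + pad(s) + ":" + pad(h), pad(m), pad(s), pad(h)];
        };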

    Read the article

  • In Mercurial, can I apply changes from one file to another file in the same branch?

    - by Stephen
    In the good old days of Subversion, I would sometimes derive a new file from an existing one using svn copy. Then if something changed in sections they had in common, I could still use svn merge to update the derived version. To use the example from hginit.com, say the "guac" recipe already exists, and I want to create a "superguac" that includes instructions on how to serve guacamole to 1000 raving soccer fans. Using the process I just described, I could:

        svn cp guac superguac
        svn ci -m "Created superguac by copying guac"
        (edit superguac)
        svn ci -m "Added instructions for serving 1000 raving soccer fans to superguac"
        (edit guac)
        svn ci -m "Fixed a typo in guac"
        svn merge -r3:4 guac superguac

    and thus the typo fix would be applied to superguac. Mercurial provides an hg copy command that marks a file as a copy of the original, but I'm not sure the repository structure supports a similar workflow. Here's the same example, where I carefully edit only a single file in the commit I want to use in the merge:

        hg cp guac superguac
        hg ci -m "Created superguac by copying guac"
        (edit superguac)
        hg ci -m "Added instructions for serving 1000 raving soccer fans to superguac"
        (edit guac)
        hg ci -m "Fixed a typo in guac"

    I now want to apply the change in guac to superguac. Is that possible? If so, what's the right command? Is there a different workflow in Mercurial that achieves the same results (limited to a single branch)?
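
    For what it's worth, one hedged workaround (my suggestion, not an official Mercurial feature for this case; Mercurial's copy-aware merging only kicks in when whole branches are merged): export the fix as a patch and point it at the copy. Revision 3 is assumed to be the typo-fix commit.

        hg diff -c 3 guac | patch superguac

    patch(1) applies the diff read from stdin to the file named on its command line, ignoring the filename in the diff header, so this works for a single-file change.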

    Read the article

  • MySQL help, counting information on last records

    - by ee12csvt
    I need some advice. I have two tables: one holds unique serial numbers of items (items) and the other holds status changes and other information for these items (details). The tables are set up as follows:

        Item                details
            itemID              detID
            itemName            itemID
            itemDate            modlvl
                                status
                                detDate

    All items have at least one record in the details table, but over time the status or the modification level has changed (both of these are identified by numbers held in other appropriate tables), and a new record is created each time the status/modlvl changes. I want to display a table on my webpage using PHP that identifies the different mod levels of the items and shows a count of each of the current statuses of the items.

    EDIT Hi Ronnis, this is an example of the data in the tables and what I want to achieve. The current mod levels range from 1 to 3. The status representations are:

        1  In Use
        2  In Store
        3  Being repaired
        4  In Transit
        5  For Disposal
        6  Disposed
        7  Lost

        Item
        itemID  OrigMod  created
        1000    1        2009-10-01 22:12:12
        1001    1        2009-10-01 22:12:12
        1002    1        2009-10-01 22:12:12
        1003    1        2009-10-01 22:12:12
        1004    1        2009-10-01 22:12:12
        1005    1        2009-10-01 22:12:12
        1006    1        2009-10-01 22:12:12
        1007    1        2009-10-01 22:12:12
        1008    1        2009-10-01 22:12:12
        1009    1        2009-10-01 22:12:12
        1010    1        2009-10-01 22:12:12

        Details
        detID  itemID  modlvl  detDate     status
        1      1000    1       2009-10-01  1
        2      1001    1       2009-10-01  1
        3      1002    1       2009-10-01  1
        4      1003    1       2009-10-01  1
        5      1004    1       2009-10-01  1
        6      1005    1       2009-10-01  1
        7      1006    1       2009-10-01  1
        8      1007    1       2009-10-01  1
        9      1008    1       2009-10-01  1
        10     1009    1       2009-10-01  1
        11     1010    1       2009-10-01  1
        12     1001    1       2010-02-01  2
        13     1001    1       2010-02-03  4
        14     1001    1       2010-03-01  3
        15     1000    1       2010-03-14  2
        16     1001    2       2010-04-01  4
        17     1006    1       2010-04-01  2
        18     1001    2       2010-04-03  2
        19     1006    1       2010-04-14  4
        20     1006    1       2010-05-01  5
        21     1002    1       2010-05-02  2
        22     1003    1       2010-05-10  2
        23     1010    1       2010-06-01  2
        24     1006    1       2010-06-18  6
        25     1010    1       2010-07-01  7
        26     1007    1       2010-07-02  2
        27     1007    1       2010-07-04  4
        28     1003    1       2010-07-10  2
        29     1007    1       2010-07-11  3
        30     1007    2       2010-07-12  4
        31     1007    2       2010-07-15  2
        32     1001    2       2010-08-31  1
        33     1001    2       2010-09-10  2
        34     1001    2       2010-10-01  4
        35     1008    1       2010-10-01  2
        36     1001    2       2010-10-05  3
        37     1008    1       2010-10-05  4
        38     1008    1       2010-10-10  3
        39     1001    3       2010-10-20  4
        40     1001    3       2010-10-25  2

    Using the tables above I want to get this result:

        ModLvl  Use  Store  Repd  Transit  Displ  Dispd  Lost  Total
        1       3    3      1     0        0      1      1     9
        2       0    1      0     0        0      0      0     1
        3       0    1      0     0        0      0      0     1
        Total   3    5      1     0        0      1      1     11
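
    A sketch of one way to get that pivot in MySQL (my own guess at an answer, using the names above, and assuming the "current" row per item is the one with the highest detID): pick each item's latest detail record, then count statuses per mod level. MySQL evaluates a comparison as 0/1, so SUM(condition) counts matches.

        SELECT d.modlvl,
               SUM(d.status = 1) AS in_use,
               SUM(d.status = 2) AS in_store,
               SUM(d.status = 3) AS repaired,
               SUM(d.status = 4) AS in_transit,
               SUM(d.status = 5) AS for_disposal,
               SUM(d.status = 6) AS disposed,
               SUM(d.status = 7) AS lost,
               COUNT(*)          AS total
        FROM details d
        JOIN (SELECT itemID, MAX(detID) AS lastDet   -- latest record per item
              FROM details
              GROUP BY itemID) last
          ON last.lastDet = d.detID
        GROUP BY d.modlvl;

    Checked against the sample data, this yields the three ModLvl rows shown above; the Total row would be added in PHP or with WITH ROLLUP.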

    Read the article

  • Matlab fft function

    - by CTZStef
    The code below is from the Matlab 2011a help for the fft function. I think there is a problem here: why do they multiply t(1:50) by Fs, and then say it's time in milliseconds? Certainly, it happens to be true in this very particular case, but change the value of Fs to, say, 2000, and it won't work anymore, obviously because of this factor of 2. Right? Quite misleading, isn't it? What do I miss?

        Fs = 1000;            % Sampling frequency
        T = 1/Fs;             % Sample time
        L = 1000;             % Length of signal
        t = (0:L-1)*T;        % Time vector
        % Sum of a 50 Hz sinusoid and a 120 Hz sinusoid
        x = 0.7*sin(2*pi*50*t) + sin(2*pi*120*t);
        y = x + 2*randn(size(t));   % Sinusoids plus noise
        plot(Fs*t(1:50),y(1:50))
        title('Signal Corrupted with Zero-Mean Random Noise')
        xlabel('time (milliseconds)')

    Clearer with this:

        fs = 2000;            % Sampling frequency
        T = 1 / fs;           % Sample time
        L = 1000;             % Length of signal
        t2 = (0:L-1)*T;       % Time vector
        f = 50;               % signal frequency
        s2 = sin(2*pi*f*t2);
        figure, plot(fs*t2(1:50),s2(1:50));  % NOT good
        figure, plot(t2(1:50),s2(1:50));     % good
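
    If the axis label is what matters, one minimal correction (my suggestion, not from the Matlab docs) is to scale seconds to milliseconds explicitly; Fs*t only equals milliseconds when Fs happens to be 1000.

        plot(1000*t(1:50), y(1:50))     % 1000 ms per second, independent of Fs
        xlabel('time (milliseconds)')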

    Read the article

  • Implement threading to prevent UI blocking on a bug in an async function

    - by Marcx
    I think I ran up against a bug in an async function, precisely the getDirectoryListingAsync() of the File class. This method is supposed to return an object containing the list of files in a specified folder. I found that when calling this method on a directory with a lot of files (in my tests, more than 20k files), after a few seconds there is a block on the UI until the process is completed. I think this method is separated into two main blocks: 1) get the list of files, 2) create the array with the details of the files. Point 1 seems to be async (for a few seconds the UI is responsive); then, when the process passes from point 1 to point 2, the block of the UI occurs until the complete event is dispatched. Here's some (simple) code:

        private function checkFiles(dir:File):void {
            if (dir.exists) {
                dir.addEventListener(FileListEvent.DIRECTORY_LISTING, listaImmaginiLocale);
                dir.getDirectoryListingAsync();
                // after this point, for the first seconds the UI responds well (point 1),
                // a few seconds later (point 2) the UI is frozen
            }
        }

        private function listaImmaginiLocale(event:FileListEvent):void {
            // from this point on the UI is responsive again...
        }

    Actually, in my projects there are some functions that perform heavy CPU usage, and to prevent the UI block I implemented a simple function that, after some iterations, will wait, giving the UI time to refresh.

        private var maxIteration:int = 150000;

        private function sampleFunct(offset:int = 0):void {
            if (offset < maxIteration) {
                // do something

                // call the recursive function using a timeout:
                // if the offset is a multiple of 1000 the function will wait 15 millisec,
                // otherwise it will be called immediately.
                // 1000 is a random number for the purpose of this example, but I usually
                // change the value based on how heavy the function itself is...
                setTimeout(function():void { sampleFunct(++offset); }, (offset % 1000 ? 0 : 15));
            }
        }

    Using this method I get a responsive UI without hurting performance. I'd like to implement it inside the getDirectoryListingAsync method, but I don't know if that's possible, how I can do it, or where the file to edit or extend is. Any suggestions?

    Read the article

  • std::vector elements initializing

    - by Chameleon
        std::vector<int> v1(1000);
        std::vector<std::vector<int>> v2(1000);
        std::vector<std::vector<int>::const_iterator> v3(1000);

    How are the elements of these 3 vectors initialized? Regarding int, I tested it and saw that all elements become 0. Is this standard? I believed that primitives remain uninitialized. I created a vector with 300000000 elements, gave them non-zero values, deleted it and recreated it, to rule out the OS clearing the memory for data safety; the elements of the recreated vector were 0 too. What about the iterator? Is there an initial value (0) from the default constructor, or does the initial value remain undefined? When I check this, the iterators point to 0, but this could be the OS. When I created a special object to track constructors, I saw that for the first object the vector runs the default constructor and for all the others it runs the copy constructor. Is this standard? Is there a way to completely avoid initialization of the elements? Or must I create my own vector? (Oh my God, I always say NOT ANOTHER VECTOR IMPLEMENTATION.) I ask because I use ultra-huge sparse matrices with parallel processing, so I cannot use push_back() and of course I don't want useless initialization when I will change the values later.
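
    For reference, a small sketch of what the standard guarantees here (my summary, not the questioner's code): std::vector<T> v(n) value-initializes its elements, so ints reliably become 0; reserve() is the usual way to get raw capacity without constructing anything.

        #include <iostream>
        #include <vector>

        int main() {
            std::vector<int> v(5);                  // value-initialized: five zeros
            for (int x : v) std::cout << x << ' ';  // prints: 0 0 0 0 0
            std::cout << '\n';

            std::vector<int> w;
            w.reserve(5);                           // capacity only; no elements constructed
            w.push_back(42);                        // elements exist only once inserted
            std::cout << w.size() << '\n';          // prints: 1
        }

    Skipping construction entirely while still using operator[] would need something like a custom default-initializing allocator, which is beyond this sketch.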

    Read the article

  • Javascript inheritance: call super-constructor or use prototype chain?

    - by Jeremy S.
    Hi folks, quite recently I read about JavaScript call usage on MDC (https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Function/call). One line of the example shown below I still don't understand: why are they using inheritance here like this

        Prod_dept.prototype = new Product();

    Is this necessary? Because there is a call to the super-constructor in Prod_dept() anyway, like this

        Product.call(this, name, value);

    Is this just common behaviour? When is it better to use call for the super-constructor, and when to use the prototype chain?

        function Product(name, value) {
            this.name = name;
            if (value >= 1000)
                this.value = 999;
            else
                this.value = value;
        }

        function Prod_dept(name, value, dept) {
            this.dept = dept;
            Product.call(this, name, value);
        }

        Prod_dept.prototype = new Product();

        // since 5 is less than 1000, value is set
        cheese = new Prod_dept("feta", 5, "food");
        // since 5000 is above 1000, value will be 999
        car = new Prod_dept("honda", 5000, "auto");

    Thanks for making things clearer.
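
    A sketch of the distinction (my explanation, not from the MDC page): Product.call(this, ...) copies own properties onto each new object, while the prototype assignment is what makes shared methods visible through the chain. In ES5-capable engines, Object.create wires up inheritance without running Product() an extra time.

        function Product(name, value) {
            this.name = name;
            this.value = value >= 1000 ? 999 : value;
        }
        // shared method: found via the prototype chain, not copied per instance
        Product.prototype.describe = function () {
            return this.name + ": " + this.value;
        };

        function Prod_dept(name, value, dept) {
            Product.call(this, name, value);   // initializes own properties
            this.dept = dept;
        }
        Prod_dept.prototype = Object.create(Product.prototype);  // inherit methods
        Prod_dept.prototype.constructor = Prod_dept;              // without running Product()

        var cheese = new Prod_dept("feta", 5, "food");
        console.log(cheese.describe());        // "feta: 5", resolved on Product.prototype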

    Read the article

  • Is this the best way to make an API request using PHP CURL?

    - by Abs
    Hello all, I have a site that has a simple API which can be used via HTTP. I wish to make use of the API and submit data about 1000-1500 times at one time. Here is their API: http://api.jum.name/ I have constructed the URL to make a submission, but now I am wondering: what is the best way to make these 1000-1500 API GET requests? Here is the PHP cURL implementation I was thinking of:

        $add = 'http://www.mysite.com/3rdparty/API/api.php?fn=post&username=test&password=tester&url=http://google.com&category=21&title=story a&content=content text&tags=Season,news';
        curl_setopt($ch, CURLOPT_URL, "$add");
        curl_setopt($ch, CURLOPT_POST, 0);
        curl_setopt($ch, CURLOPT_COOKIEFILE, 'files/cookie.txt');
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 0);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
        $postdata = curl_exec($ch);

    Shall I close the cURL connection every time I make a submission? Can I rewrite the above in a better way that will make these 1000-1500 submissions quicker? Thanks all
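
    One plausible improvement, as a sketch (assumptions: the endpoint tolerates sequential requests, and $urls is a hypothetical array holding the 1000-1500 prepared GET URLs): create the handle once with curl_init(), reuse it for every request so cURL can keep the TCP connection alive, and close it only at the end.

        $ch = curl_init();
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_COOKIEFILE, 'files/cookie.txt');
        foreach ($urls as $url) {
            curl_setopt($ch, CURLOPT_URL, $url);
            $responses[] = curl_exec($ch);   // reused handle keeps the connection alive
        }
        curl_close($ch);

    For more throughput, the curl_multi_* functions can run a batch of requests concurrently, at the cost of more bookkeeping.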

    Read the article

  • How do I make a function in SQL Server that accepts a column of data?

    - by brandon k
    I made the following function in SQL Server 2008 earlier this week. It takes two parameters, uses them to select a column of "detail" records, and returns them as a single varchar list of comma-separated values. Now that I get to thinking about it, I would like to take this table- and application-specific function and make it more generic. I am not well versed in defining SQL functions, as this is my first. How can I change this function to accept a single "column" worth of data, so that I can use it in a more generic way?

    Instead of calling:

        SELECT ejc_concatFormDetails(formuid, categoryName)

    I would like to make it work like:

        SELECT concatColumnValues(SELECT someColumn FROM SomeTable)

    Here is my function definition:

        FUNCTION [DNet].[ejc_concatFormDetails](@formuid AS int, @category as VARCHAR(75))
        RETURNS VARCHAR(1000)
        AS
        BEGIN
            DECLARE @returnData VARCHAR(1000)
            DECLARE @currentData VARCHAR(75)
            DECLARE dataCursor CURSOR FAST_FORWARD FOR
                SELECT data FROM DNet.ejc_FormDetails
                WHERE formuid = @formuid AND category = @category
            SET @returnData = ''
            OPEN dataCursor
            FETCH NEXT FROM dataCursor INTO @currentData
            WHILE (@@FETCH_STATUS = 0)
            BEGIN
                SET @returnData = @returnData + ', ' + @currentData
                FETCH NEXT FROM dataCursor INTO @currentData
            END
            CLOSE dataCursor
            DEALLOCATE dataCursor
            RETURN SUBSTRING(@returnData, 3, 1000)
        END

    As you can see, I am selecting the column data within my function and then looping over the results with a cursor to build my comma-separated varchar. How can I alter this to accept a single parameter that is a result set, and then access that result set with a cursor?
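
    As an aside, a common cursor-free pattern on SQL Server 2008 (my suggestion, not part of the question) is to concatenate inline with FOR XML PATH, which sidesteps passing a result set into a function altogether; STUFF(..., 1, 2, '') trims the leading comma and space.

        SELECT STUFF((
            SELECT ', ' + data
            FROM DNet.ejc_FormDetails
            WHERE formuid = @formuid AND category = @category
            FOR XML PATH('')
        ), 1, 2, '') AS concatenated;

    Truly generic input would instead need a table-valued parameter, since T-SQL functions cannot take an arbitrary result set as an argument.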

    Read the article

  • Getting started with MIT Proto

    - by Charles
    MIT Proto lacks a basic getting-started guide. How do I find a shell that accepts commands like (def foo...) and proto -n 1000 -l -m ...? (http://groups.csail.mit.edu/stpg/proto.html) I can run in my bash shell:

        ./proto -n 1000 -s 0.1 -T -l "(red (gradient (= (mid) 0)))"

    I can't figure out how to run e.g. channel.proto:

        (def channel (src dst width)
          (let* ((d (distance src dst))
                 (trail (<= (+ (gradient src) (gradient dst))
                            (+ d 0.01))) ;; float error
                 ;; (trail (= (+ (gradient src) (gradient dst)) d))
                 )
            (dilate trail width)))

        ;; To see a channel calculated from geometric primitives, run:
        ;;   proto -n 1000 -l -m -s 0.5 "(blue (channel (sense 1) (sense 2) 10))"
        ;; click on a device and hit 't' to set up the source, then click on
        ;; another device and hit 'y' to designate the destination. At first
        ;; every device will be blue, but then it should clear and you should
        ;; see a thick blue path connecting the two devices you selected.

    Thanks! P.S. Somebody please tag this mit-proto. I can't.

    Read the article

  • JQuery ajax success help

    - by Jason
    Hi all, I am implementing a "quick delete" function on a page I am creating. The way it works is this: 1: You click the "delete" button in the table row for the record that you want to delete. 2: The page sends a request to the ajax page, which returns a success message of "yes" or a failure message of "no". My issue is that if I get a success message of "yes", I want to hide the row for that record. I am having issues "finding" the row using jQuery. Here is my jQuery code:

        $(document).ready(function () {
            $(".pane .btn-delete").click(function () {
                var element = $(this);
                var del_id = element.attr("id");
                var dataString = 'action=del&cid=' + del_id;
                if (confirm("Are you sure you want to delete this content block?")) {
                    $("#msgbox").addClass('ajaxmsg').text('Checking permissions....').fadeIn(1000);
                    $.ajax({
                        type: "get",
                        url: "ajax/admArticles_ajax.php",
                        data: dataString,
                        success: function (data) {
                            switch (data) {
                                case "yes":
                                    $("#msgbox").addClass('ajaxmsg').text('Deleting content block....').fadeIn(1000);
                                    $(this).parents(".pane")
                                        .animate({ backgroundColor: "#fbc7c7" }, "fast")
                                        .animate({ opacity: "hide" }, "slow");
                                    break;
                                case "no":
                                    $("#msgbox").removeClass().addClass('error').text('You do not have the correct permissions to delete this content....').fadeIn(1000);
                                    break;
                                default:
                            }
                        }
                    });
                }
                return false;
            });
        });

    These are the lines I am using to hide the row; however, they are not working because I don't think $(this).parents(".pane") finds the element:

        $(this).parents(".pane")
            .animate({ backgroundColor: "#fbc7c7" }, "fast")
            .animate({ opacity: "hide" }, "slow");

    Any help would be greatly appreciated. Thanks...
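
    A likely fix, offered as my reading of the code rather than a confirmed answer: inside $.ajax's success callback, this no longer refers to the clicked button, so reuse the element variable already captured in the click handler.

        case "yes":
            $("#msgbox").addClass('ajaxmsg').text('Deleting content block....').fadeIn(1000);
            element.closest(".pane")               // `element` still points at the button
                .animate({ backgroundColor: "#fbc7c7" }, "fast")
                .animate({ opacity: "hide" }, "slow");
            break;

    (closest() walks up to the nearest matching ancestor; element.parents(".pane") would also work once the wrong this is fixed. Animating backgroundColor additionally needs a color-animation plugin, as in the original code.)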

    Read the article

  • Using jQuery to POST Form Data to an ASP.NET ASMX AJAX Web Service

    - by Rick Strahl
    The other day I got a question about how to call an ASP.NET ASMX Web Service or Page Methods with the POST data from a Web Form (or any HTML form for that matter). The idea is that you should be able to call an endpoint URL, send it regular urlencoded POST data and then use Request.Form[] to retrieve the posted data as needed. My first reaction was that you can’t do it, because ASP.NET ASMX AJAX services (as well as Page Methods and WCF REST AJAX Services) require that the content POSTed to the server is posted as JSON and sent with an application/json or application/x-javascript content type. IOW, you can’t directly call an ASP.NET AJAX service with regular urlencoded data. Note that there are other ways to accomplish this. You can use ASP.NET MVC and a custom route, an HTTP Handler or separate ASPX page, or even a WCF REST service that’s configured to use non-JSON inputs. However, if you want to use an ASP.NET AJAX service (or Page Methods), with a little bit of setup work it’s actually quite easy to capture all the form variables on the client and ship them up to the server. The basic steps needed to make this happen are:

    - Capture form variables into an array on the client with jQuery’s .serializeArray() function
    - Use $.ajax() or my ServiceProxy class to make an AJAX call to the server to send this array
    - On the server create a custom type that matches the .serializeArray() name/value structure
    - Create extension methods on NameValue[] to easily extract form variables
    - Create a [WebMethod] that accepts this name/value type as an array (NameValue[])

    This seems like a lot of work, but realize that steps 3 and 4 are a one-time setup step that can be reused in your entire site or multiple applications. Let’s look at a short example that uses a base form of fields to ship to the server (screenshot omitted). The HTML for this form looks something like this:

        <div id="divMessage" class="errordisplay" style="display: none"></div>
        <div>
            <div class="label">Name:</div>
            <div><asp:TextBox runat="server" ID="txtName" /></div>
        </div>
        <div>
            <div class="label">Company:</div>
            <div><asp:TextBox runat="server" ID="txtCompany"/></div>
        </div>
        <div>
            <div class="label"></div>
            <div>
                <asp:DropDownList runat="server" ID="lstAttending">
                    <asp:ListItem Text="Attending" Value="Attending"/>
                    <asp:ListItem Text="Not Attending" Value="NotAttending" />
                    <asp:ListItem Text="Maybe Attending" Value="MaybeAttending" />
                    <asp:ListItem Text="Not Sure Yet" Value="NotSureYet" />
                </asp:DropDownList>
            </div>
        </div>
        <div>
            <div class="label">Special Needs:<br />
                <small>(check all that apply)</small></div>
            <div>
                <asp:ListBox runat="server" ID="lstSpecialNeeds" SelectionMode="Multiple">
                    <asp:ListItem Text="Vegitarian" Value="Vegitarian" />
                    <asp:ListItem Text="Vegan" Value="Vegan" />
                    <asp:ListItem Text="Kosher" Value="Kosher" />
                    <asp:ListItem Text="Special Access" Value="SpecialAccess" />
                    <asp:ListItem Text="No Binder" Value="NoBinder" />
                </asp:ListBox>
            </div>
        </div>
        <div>
            <div class="label"></div>
            <div>
                <asp:CheckBox ID="chkAdditionalGuests" Text="Additional Guests" runat="server" />
            </div>
        </div>
        <hr />
        <input type="button" id="btnSubmit" value="Send Registration" />

    The form includes a few different kinds of form fields, including a multi-selection listbox to demonstrate retrieving multiple values.

    Setting up the Server Side [WebMethod]

    The [WebMethod] on the server we’re going to call is going to be very simple: it just captures the content of these values and echoes them back as a formatted HTML string.
    Obviously this is overly simplistic, but it serves to demonstrate the simple point of capturing the POST data on the server in an AJAX callback.

        public class PageMethodsService : System.Web.Services.WebService
        {
            [WebMethod]
            public string SendRegistration(NameValue[] formVars)
            {
                StringBuilder sb = new StringBuilder();
                sb.AppendFormat("Thank you {0}, <br/><br/>",
                                HttpUtility.HtmlEncode(formVars.Form("txtName")));
                sb.AppendLine("You've entered the following: <hr/>");
                foreach (NameValue nv in formVars)
                {
                    // strip out ASP.NET form vars like __ViewState/__EventValidation
                    if (!nv.name.StartsWith("__"))
                    {
                        if (nv.name.StartsWith("txt") || nv.name.StartsWith("lst") || nv.name.StartsWith("chk"))
                            sb.Append(nv.name.Substring(3));
                        else
                            sb.Append(nv.name);
                        sb.AppendLine(": " + HttpUtility.HtmlEncode(nv.value) + "<br/>");
                    }
                }
                sb.AppendLine("<hr/>");
                string[] needs = formVars.FormMultiple("lstSpecialNeeds");
                if (needs == null)
                    sb.AppendLine("No Special Needs");
                else
                {
                    sb.AppendLine("Special Needs: <br/>");
                    foreach (string need in needs)
                        sb.AppendLine("&nbsp;&nbsp;" + need + "<br/>");
                }
                return sb.ToString();
            }
        }

    The key feature of this method is that it receives a custom type called NameValue[], an array of NameValue objects that maps the structure the jQuery .serializeArray() function generates. There are two custom types involved: the actual NameValue type, and a NameValueExtensions class that defines a couple of extension methods for the NameValue[] array type to allow for single (.Form()) and multiple (.FormMultiple()) value retrieval by name. The NameValue class is as simple as this and simply maps the structure of the array elements of .serializeArray():

        public class NameValue
        {
            public string name { get; set; }
            public string value { get; set; }
        }

    The extension method class defines the .Form() and .FormMultiple() methods to allow easy retrieval of form variables from the returned array:

        /// <summary>
        /// Simple NameValue class that maps name and value
        /// properties that can be used with jQuery's
        /// $.serializeArray() function and JSON requests
        /// </summary>
        public static class NameValueExtensionMethods
        {
            /// <summary>
            /// Retrieves a single form variable from the list of
            /// form variables stored
            /// </summary>
            /// <param name="formVars"></param>
            /// <param name="name">formvar to retrieve</param>
            /// <returns>value or string.Empty if not found</returns>
            public static string Form(this NameValue[] formVars, string name)
            {
                var matches = formVars.Where(nv => nv.name.ToLower() == name.ToLower()).FirstOrDefault();
                if (matches != null)
                    return matches.value;
                return string.Empty;
            }

            /// <summary>
            /// Retrieves multiple selection form variables from the list of
            /// form variables stored.
            /// </summary>
            /// <param name="formVars"></param>
            /// <param name="name">The name of the form var to retrieve</param>
            /// <returns>values as string[] or null if no match is found</returns>
            public static string[] FormMultiple(this NameValue[] formVars, string name)
            {
                var matches = formVars.Where(nv => nv.name.ToLower() == name.ToLower())
                                      .Select(nv => nv.value).ToArray();
                if (matches.Length == 0)
                    return null;
                return matches;
            }
        }

    Using these extension methods it’s easy to retrieve individual values from the array:

        string name = formVars.Form("txtName");

    or multiple values:

        string[] needs = formVars.FormMultiple("lstSpecialNeeds");
        if (needs != null)
        {
            // do something with matches
        }

    Using these functions in the SendRegistration method, it’s easy to retrieve a few form variables directly (txtName and the multiple selections of lstSpecialNeeds) or to iterate over the whole list of values. Of course this is an overly simple example – in a typical app you’d probably want to validate the input data, save it to the database and then return some sort of confirmation or possibly an updated data list back to the client. Since this is a full AJAX service callback, realize that you don’t have to return simple string values – you can return any of the supported result types (which are most serializable types) including complex hierarchical objects and arrays that make sense to your client code.

    POSTing Form Variables from the Client to the AJAX Service

    Calling the AJAX service method on the client is straightforward and requires only a little native jQuery plus JSON serialization functionality. To start, add jQuery and the json2.js library to your page:

        <script src="Scripts/jquery.min.js" type="text/javascript"></script>
        <script src="Scripts/json2.js" type="text/javascript"></script>

    json2.js can be found here (be sure to remove the first line from the file): http://www.json.org/json2.js It’s required to handle JSON serialization for those browsers that don’t support it natively. With those script references in the document, let’s hook up the button click handler and call the service:

        $(document).ready(function () {
            $("#btnSubmit").click(sendRegistration);
        });

        function sendRegistration() {
            var arForm = $("#form1").serializeArray();
            $.ajax({
                url: "PageMethodsService.asmx/SendRegistration",
                type: "POST",
                contentType: "application/json",
                data: JSON.stringify({ formVars: arForm }),
                dataType: "json",
                success: function (result) {
                    var jEl = $("#divMessage");
                    jEl.html(result.d).fadeIn(1000);
                    setTimeout(function () { jEl.fadeOut(1000) }, 5000);
                },
                error: function (xhr, status) {
                    alert("An error occurred: " + status);
                }
            });
        }

    The key feature in this code is the $("#form1").serializeArray() call, which serializes all the form fields of form1 into an array. Each form var is represented as an object with a name/value property. This array is then serialized into JSON with:

        JSON.stringify({ formVars: arForm })

    The format for the parameter list in AJAX service calls is an object with one property for each parameter of the method. In this case it’s a single parameter called formVars, and we’re assigning the array of form variables to it. The URL to call on the server is the name of the Service (or ASPX Page for Page Methods) plus the name of the method to call. On return, the success callback receives the result from the AJAX callback, which in this case is the formatted string that is simply assigned to an element in the form and displayed.
    Remember the result type is whatever the method returns – it doesn’t have to be a string. Note that ASP.NET AJAX and WCF REST return JSON data as a wrapped object, so the result has a ‘d’ property that holds the actual response:

        jEl.html(result.d).fadeIn(1000);

    Slightly simpler: Using ServiceProxy.js

    If you want things slightly cleaner you can use the ServiceProxy.js class I’ve mentioned here before. The ServiceProxy class handles a few things for calling ASP.NET and WCF services more cleanly:

    - Automatic JSON encoding
    - Automatic fix-up of the ‘d’ wrapper property
    - Automatic date conversion on the client
    - Simplified error handling
    - Reusable and abstracted

    To add the service proxy add:

        <script src="Scripts/ServiceProxy.js" type="text/javascript"></script>

    and then change the code to this slightly simpler version:

        <script type="text/javascript">
            proxy = new ServiceProxy("PageMethodsService.asmx/");

            $(document).ready(function () {
                $("#btnSubmit").click(sendRegistration);
            });

            function sendRegistration() {
                var arForm = $("#form1").serializeArray();
                proxy.invoke("SendRegistration", { formVars: arForm },
                    function (result) {
                        var jEl = $("#divMessage");
                        jEl.html(result).fadeIn(1000);
                        setTimeout(function () { jEl.fadeOut(1000) }, 5000);
                    },
                    function (error) { alert(error.message); }
                );
            }
        </script>

    The code is not very different, but it makes the call as simple as specifying the method to call, the parameters to pass and the actions to take on success and error. No more remembering which content type and data types to use and manually serializing to JSON. This code also removes the “d” property processing in the response and provides more consistent error handling, in that the call always returns an error object regardless of a server error or a communication error, unlike the native $.ajax() call. Either approach works and both are pretty easy. The ServiceProxy really pays off if you use lots of service calls, and especially if you need to deal with date values returned from the server on the client.

    Summary

    Making Web Service calls and getting POST data to the server is not always the best option – ASP.NET and WCF AJAX services are meant to work with data in objects. However, in some situations it’s simply easier to POST all the captured form data to the server instead of mapping all properties from the input fields to some sort of message object first. For this approach the above POST mechanism is useful, as it puts the parsing of the data on the server and leaves the client code lean and mean. It’s even easy to build a custom model binder on the server that can map the array values to properties on an object generically with some relatively simple Reflection code, and without having to manually map form vars to properties and do string conversions. Keep in mind, though, that other approaches also abound. ASP.NET MVC makes it pretty easy to create custom routes to data, and the built-in model binder makes it very easy to deal with inbound form POST data in its original urlencoded format. The West Wind Web Toolkit also includes functionality for AJAX callbacks using plain POST values; all that’s needed is a Method parameter as a query/form value to specify the method to be called on the server. After that the content type is completely optional and up to the consumer. It’d be nice if the ASP.NET AJAX Service and WCF AJAX Services weren’t so tightly bound to the content type, so that you could more easily create open-access service endpoints that can take advantage of the urlencoded data that is everywhere in existing pages.
    It would make it much easier to create basic REST endpoints without complicated service configuration. Ah, one can dream! In the meantime I hope this article has given you some ideas on how you can transfer POST data from the client to the server using JSON – it might be useful in other scenarios beyond ASP.NET AJAX services as well.

    Additional Resources

    - ServiceProxy.js: A small JavaScript library that wraps $.ajax() to call ASP.NET AJAX and WCF AJAX Services. Includes date parsing extensions to the JSON object, a global dataFilter for processing dates on all jQuery JSON requests, provides cleanup for the .NET wrapped message format and handles errors in a consistent fashion.
    - Making jQuery Calls to WCF/ASMX with a ServiceProxy Client: More information on calling ASMX and WCF AJAX services with jQuery and some more background on ServiceProxy.js. Note the implementation has slightly changed since the article was written.
    - ww.jquery.js: The West Wind Web Toolkit also includes ServiceProxy.js in the West Wind jQuery extension library. This version is slightly different and includes embedded JSON encoding/decoding based on json2.js.

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in jQuery, ASP.NET, AJAX.

    Read the article

  • Diving into OpenStack Network Architecture - Part 1

    - by Ronen Kofman
    Before we begin: OpenStack networking has very powerful capabilities, but at the same time it is quite complicated. In this blog series we will review an existing OpenStack setup using the Oracle OpenStack Tech Preview and explain the different network components through use cases and examples. The goal is to show how the different pieces come together and provide a bigger-picture view of the network architecture in OpenStack. This can be very helpful to users making their first steps in OpenStack, or anyone who wishes to understand how networking works in this environment. We will go through the basics first and build the examples as we go.

    According to the recent Icehouse user survey and the one before it, Neutron with the Open vSwitch plug-in is the most widely used network setup both in production and in POCs (in terms of number of customers), and so in this blog series we will analyze this specific OpenStack networking setup. As we know there are many options to set up OpenStack networking, and while Neutron + Open vSwitch is the most popular setup, there is no claim that it is either the best or the most efficient option. Neutron + Open vSwitch is an example, one which provides a good starting point for anyone interested in understanding OpenStack networking. Even if you are using a different kind of network setup, such as a different Neutron plug-in, or are not using Neutron at all, this will still be a good starting point to understand the network architecture in OpenStack.

    The setup we are using for the examples is the one used in the Oracle OpenStack Tech Preview. Installing it is simple, and it would be helpful to have it as a reference. In this setup we use eth2 on all servers for the VM network; all VM traffic will be flowing through this interface. The Oracle OpenStack Tech Preview uses VLANs for L2 isolation to provide tenant and network isolation. A deployment diagram (not included in this excerpt) shows how we have configured our deployment.

    This first post is a bit long and will focus on some basic concepts in OpenStack networking. The components we will be discussing are Open vSwitch, network namespaces, Linux bridge and veth pairs. Note that this is not meant to be a comprehensive review of these components; it describes each component only as much as needed to understand the OpenStack network architecture. All the components described here can be further explored using other resources.
    Open vSwitch (OVS)

    In the Oracle OpenStack Tech Preview, OVS is used to connect virtual machines to the physical port (in our case eth2), as shown in the deployment diagram. OVS contains bridges and ports; the OVS bridges are different from the Linux bridge (controlled by the brctl command), which is also used in this setup. To get started, let's view the OVS structure with the following command:

        # ovs-vsctl show
        7ec51567-ab42-49e8-906d-b854309c9edf
            Bridge br-int
                Port br-int
                    Interface br-int
                        type: internal
                Port "int-br-eth2"
                    Interface "int-br-eth2"
            Bridge "br-eth2"
                Port "br-eth2"
                    Interface "br-eth2"
                        type: internal
                Port "eth2"
                    Interface "eth2"
                Port "phy-br-eth2"
                    Interface "phy-br-eth2"
            ovs_version: "1.11.0"

    We see a standard post-deployment OVS on a compute node with two bridges and several ports hanging off each of them. The example above is a compute node without any VMs; we can see that the physical port eth2 is connected to a bridge called "br-eth2". We also see two ports, "int-br-eth2" and "phy-br-eth2", which are actually a veth pair and form a virtual wire between the two bridges; veth pairs are discussed later in this post. When a virtual machine is created, a port is created on the br-int bridge, and this port is eventually connected to the virtual machine (we will discuss the exact connectivity later in the series). Here is how OVS looks after a VM was launched:

        # ovs-vsctl show
        efd98c87-dc62-422d-8f73-a68c2a14e73d
            Bridge br-int
                Port "int-br-eth2"
                    Interface "int-br-eth2"
                Port br-int
                    Interface br-int
                        type: internal
                Port "qvocb64ea96-9f"
                    tag: 1
                    Interface "qvocb64ea96-9f"
            Bridge "br-eth2"
                Port "phy-br-eth2"
                    Interface "phy-br-eth2"
                Port "br-eth2"
                    Interface "br-eth2"
                        type: internal
                Port "eth2"
                    Interface "eth2"
            ovs_version: "1.11.0"

    Bridge "br-int" now has a new port, "qvocb64ea96-9f", which connects to the VM and is tagged with VLAN 1. Every VM that is launched will add a port on the "br-int" bridge for every network interface the VM has. Another useful command on OVS is dump-flows, for example:

        # ovs-ofctl dump-flows br-int
        NXST_FLOW reply (xid=0x4):
        cookie=0x0, duration=735.544s, table=0, n_packets=70, n_bytes=9976, idle_age=17, priority=3,in_port=1,dl_vlan=1000 actions=mod_vlan_vid:1,NORMAL
        cookie=0x0, duration=76679.786s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=1 actions=drop
        cookie=0x0, duration=76681.36s, table=0, n_packets=68, n_bytes=7950, idle_age=17, hard_age=65534, priority=1 actions=NORMAL

    As we see, the port which is connected to the VM has the VLAN tag 1; however, the port on the VM network (eth2) will be using tag 1000. OVS modifies the VLAN as packets flow from the VM to the physical interface. In OpenStack the Open vSwitch agent takes care of programming the flows in Open vSwitch, so users do not have to deal with this at all. If you wish to learn more about how to program Open vSwitch, you can read more at http://openvswitch.org, looking at the documentation describing the ovs-ofctl command.

    Network Namespaces (netns)

    Network namespaces are a very cool Linux feature that can be used for many purposes and are heavily used in OpenStack networking. Network namespaces are isolated containers which can hold a network configuration that is not seen from outside of the namespace.
    A network namespace can be used to encapsulate specific network functionality or provide a network service in isolation, as well as simply help to organize a complicated network setup. The Oracle OpenStack Tech Preview uses the latest Unbreakable Enterprise Kernel R3 (UEK3), which provides complete support for netns. Let's see how namespaces work through a couple of examples; to control network namespaces we use the ip netns command.

    Defining a new namespace:

        # ip netns add my-ns
        # ip netns list
        my-ns

    As mentioned, the namespace is an isolated container; we can perform all the normal actions in the namespace context using the exec command, for example running the ifconfig command:

        # ip netns exec my-ns ifconfig -a
        lo        Link encap:Local Loopback
                  LOOPBACK  MTU:16436  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

    We can run every command in the namespace context; this is especially useful for debugging with the tcpdump command, and we can ping or ssh or define iptables all within the namespace.

    Connecting the namespace to the outside world: There are various ways to connect into a namespace and between namespaces; we will focus on how this is done in OpenStack. OpenStack uses a combination of Open vSwitch and network namespaces: OVS defines the interfaces, and then we can add those interfaces to a namespace. So first let's add a bridge to OVS:

        # ovs-vsctl add-br my-bridge

    Now let's add a port on the OVS and make it internal:

        # ovs-vsctl add-port my-bridge my-port
        # ovs-vsctl set Interface my-port type=internal

    And let's connect it into the namespace:

        # ip link set my-port netns my-ns

    Looking inside the namespace:

        # ip netns exec my-ns ifconfig -a
        lo        Link encap:Local Loopback
                  LOOPBACK  MTU:65536  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
        my-port   Link encap:Ethernet  HWaddr 22:04:45:E2:85:21
                  BROADCAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

    Now we can add more ports to the OVS bridge and connect them to other namespaces or other devices like physical interfaces. Neutron uses network namespaces to implement network services such as DHCP, routing, gateway, firewall, load balancing and more. In the next post we will go into this in further detail.

    Linux Bridge and veth pairs

    A Linux bridge is used to connect the port from OVS to the VM. Every port goes from the OVS bridge to a Linux bridge and from there to the VM. The reason for using regular Linux bridges is security group enforcement: security groups are implemented using iptables, and iptables can only be applied to Linux bridges and not to OVS bridges. Veth pairs are used extensively throughout the network setup in OpenStack and are also a good tool to debug a network problem. A veth pair is simply a virtual wire, so veths always come in pairs. Typically one side of the veth pair connects to a bridge and the other side to another bridge, or is simply left as a usable interface. In this example we will create some veth pairs, connect them to bridges and test connectivity.
    This example uses a regular Linux server, not an OpenStack node. Creating a veth pair; note that we define names for both ends:

        # ip link add veth0 type veth peer name veth1
        # ifconfig -a
        . .
        veth0     Link encap:Ethernet  HWaddr 5E:2C:E6:03:D0:17
                  BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
        veth1     Link encap:Ethernet  HWaddr E6:B6:E2:6D:42:B8
                  BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
        . .

    To make the example more meaningful we will create the following setup:

        veth0 => veth1 => br-eth3 => eth3 ======> eth2 on another Linux server

        br-eth3 – a regular Linux bridge which will be connected to veth1 and eth3
        eth3    – a physical interface with no IP on it, connected to a private network
        eth2    – a physical interface on the remote Linux box, connected to the private
                  network and configured with the IP of 50.50.50.1

    Once we create the setup we will ping 50.50.50.1 (the remote IP) through veth0 to test that the connection is up:

        # brctl addbr br-eth3
        # brctl addif br-eth3 eth3
        # brctl addif br-eth3 veth1
        # brctl show
        bridge name     bridge id               STP enabled     interfaces
        br-eth3         8000.00505682e7f6       no              eth3
                                                                veth1
        # ifconfig veth0 50.50.50.50
        # ping -I veth0 50.50.50.51
        PING 50.50.50.51 (50.50.50.51) from 50.50.50.50 veth0: 56(84) bytes of data.
        64 bytes from 50.50.50.51: icmp_seq=1 ttl=64 time=0.454 ms
        64 bytes from 50.50.50.51: icmp_seq=2 ttl=64 time=0.298 ms

    When the naming is not as obvious as in the previous example and we don't know which veth interfaces are paired, we can use the ethtool command to figure this out. The ethtool command returns an index we can look up using the ip link command, for example:

        # ethtool -S veth1
        NIC statistics:
        peer_ifindex: 12
        # ip link
        . .
        12: veth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    Summary

    That's all for now. We quickly reviewed OVS, network namespaces, Linux bridges and veth pairs. These components are heavily used in the OpenStack network architecture we are exploring, and understanding them well will be very useful when reviewing the different use cases. In the next post we will look at how the OpenStack network is laid out, connecting the virtual machines to each other and to the external world. @RonenKofman

    Read the article

  • What Every Developer Should Know About MSI Components

    - by Alois Kraus
    Hopefully nothing. But if you have to do more than simple XCopy deployment and you need to support updates, upgrades and perhaps side-by-side scenarios, there is no way around MSI. You can create MSI files with a Visual Studio Setup project, which is severely limited, or you can use the Windows Installer Toolset. I cannot talk about WIX with my German colleagues because WIX has a very special meaning. It is funny to always use the long name when I talk about deployment possibilities. Alternatively you can buy commercial tools which help you to author MSI files, but I am not sure how good they are. Given enough pain with existing solutions, you can also learn the MSI APIs and create your own packaging solution. If I were you, I would use either a commercial visual tool when you do easy deployments, or the free Windows Installer Toolset. Once you know the WiX schema you can create well-formed WiX XML files easily with any editor, and then "compile" your MSI package from the wxs files. Recently I had the "pleasure" to get my hands dirty with C++ (again) and the MSI technology. Installation is a complex topic, but after several months of digging into arcane MSI issues I can safely say that there should be an easier way to install and update files than exists today. I am not alone with this statement, as John Robbins (creator of the cool tool Paraffin) states: ".. It's a brittle and scary API in Windows …". To help other people struggling with installation issues, I present the advice I (and others) found useful, and what will happen if you ignore this advice.

    What is an MSI file?

    An MSI file is basically a database with tables which reference each other to control how your un/installation should work. The basic idea is that you declare via these tables what you want to install, and MSI controls how to get your stuff onto or off your machine. Your "stuff" usually consists of files, registry keys, shortcuts and environment variables. Therefore the most important tables are the File, Registry, Environment and Shortcut tables, which define what will be un/installed. The key to mastering MSI is that every resource (file, registry key, …) is associated with an MSI component. The actual payload consists of compressed files in the CAB format, which can either be embedded into the MSI file or reside beside the MSI file or in a subdirectory below it. To examine MSI files you need Orca, a free MSI editor provided by MS. There is also another free editor called Super Orca which supports diffs between MSIs and does not lock the MSI files. But since Orca comes with a shell extension, I tend to use only Orca, because it is so easy to right-click on an MSI file and open it with this tool.

    How Do I Install It?

    Double-click it. This works for fresh installations as well as major upgrades. Updates need to be installed via the command line:

        msiexec /i <msi> REINSTALL=ALL REINSTALLMODE=vomus

    This tells the installer to reinstall all already installed features (new features will NOT be installed). The reinstallmode letters force an overwrite of the old cached package in the %WINDIR%\Installer folder. All files, shortcuts and registry keys are redeployed if they are missing or need to be replaced with a newer version. When things have gone really wrong and you want to overwrite everything unconditionally, use REINSTALLMODE=vamus.

    How To Enable MSI Logs?

    You can download an MSI from Microsoft which installs some registry keys to enable full MSI logging.
    The log files can be found in your %TEMP% folder and are called MSIxxxx.log. Alternatively, you can add to your msiexec command line the option

        msiexec …. /l*vx <LogFileName>

    Personally I find it rather strange that * does not mean full logging. To really get all logs I need to add v and x, which is documented in the msiexec help, but I still find this behavior unintuitive.

    What are MSI components?

    The whole MSI logic is bound to the concept of MSI components. Nearly every MSI table has a Component column which binds an installable resource to a component. (The original post shows screenshots of the FeatureComponents and Component tables of an example MSI.) The Feature table basically defines the feature hierarchy. To find out what belongs to a feature, you need to look at the FeatureComponents table, where for each feature the components are listed which will be installed when that feature is installed. The MSI components are defined in the Component table. This table has as its first column the component name and as its second column the component id, which is a GUID. All resources you want to install belong to an MSI component. Therefore nearly all MSI tables have a Component_ column which contains the component name. If you look, e.g., at the File table, you see that every file belongs to a component, which is true for all other tables which install resources. The Component table is the glue between all other tables which contain the resources you want to install. So far so easy. Why is MSI then so complex? Most MSI problems arise from the fact that you violated an MSI component rule in one way or the other. When you install a feature, the reference count for all components belonging to this feature will increase by one. If your component is installed by more than one feature it will get a higher refcount. When you uninstall a feature, its refcount will drop by one. Interesting things happen if the component reference count reaches zero: then all associated resources will be deleted. That looks like a reasonable thing, and it is. What makes it complex are the strange component rules you have to follow. Below are some important component rules from the Tao of the Windows Installer:

    Rule 16: Follow Component Rules

    Components are a very important part of the Installer technology. They are the means whereby the Installer manages the resources that make up your application. The SDK provides the following guidelines for creating components in your package:

    - Never create two components that install a resource under the same name and target location. If a resource must be duplicated in multiple components, change its name or target location in each component. This rule should be applied across applications, products, product versions, and companies.
    - Two components must not have the same key path file. This is a consequence of the previous rule. The key path value points to a particular file or folder belonging to the component that the installer uses to detect the component. If two components had the same key path file, the installer would be unable to distinguish which component is installed. Two components may, however, share a key path folder.
    - Do not create a version of a component that is incompatible with all previous versions of the component. This rule should be applied across applications, products, product versions, and companies.
    - Do not create components containing resources that will need to be installed into more than one directory on the user's system.
      The installer installs all of the resources in a component into the same directory. It is not possible to install some resources into subdirectories.
    - Do not include more than one COM server per component. If a component contains a COM server, this must be the key path for the component.
    - Do not specify more than one file per component as a target for the Start menu or a Desktop shortcut.

    … And these rules do not even talk about component ids, update packages and upgrades, which you need to understand as well. Let's suppose you install two MSIs (MSI1 and MSI2) which have the same ComponentId but different component names, and both install the same file. What will happen when you uninstall MSI2? Hmm, the file should stay there. But the component names are different. Yes and yes. But MSI does not use the component name as the key for the refcount. Instead, the ComponentId column of the Component table, which contains a GUID, is used as the identifier under which the refcount is stored. The components Comp1 and Comp2 are identical from the MSI perspective. After the installation of both MSIs, the component with the Id {100000….} has a refcount of two. After uninstallation of one MSI there is still a refcount of one, which drops to zero, just as expected, when we uninstall the last MSI. Then the file which was the same for both MSIs is deleted. You should remember that MSI keeps a refcount across MSIs for components with the same component id. MSI manages components, not the resources you installed. The resources associated with a component are deleted then, and only then, when the refcount of the component reaches zero.

    The dependencies between features, components and resources can be described as relations, where m, k are numbers >= 1 and n can be 0. Inside an MSI the following relations are valid:

        Feature    1 –> n Components
        Component  1 –> m Features
        Component  1 –> k Resources

    These relations express that one feature can install several components, and features can share components between them. Every (meaningful) component will install at least one resource, which means that its name (primary key, to stay in database speak) occurs in some other table in the Component column as a value which installs some resource. Let's make it clear with an example. We want to install with the feature MainFeature some files, a registry key and a shortcut. We can then create components Comp1..3 which are referenced by the resources defined in the corresponding tables.

        Feature      Component  Registry      File       Shortcuts
        MainFeature  Comp1      RegistryKey1
        MainFeature  Comp2                    File.txt
        MainFeature  Comp3                    File2.txt  Shortcut to File2.txt

    It is illegal for the same resource to be part of more than one component, since this would break the refcount mechanism. Let's illustrate this:

        Feature   ComponentId  Resource   Reference Count
        Feature1  {1000-…}     File1.txt  1
        Feature2  {2000-…}     File1.txt  1

    The installation part works well, but what happens when you uninstall Feature2? Component {2000…} gets a refcount of zero, whereupon MSI deletes all resources belonging to this component. In this case File1.txt will be deleted. But Feature1 still has another component, {1000…}, with a refcount of one, which means that the file was deleted too early. You have just ruined your installation. To fix it you then need to click on the Repair button under Add/Remove Programs to let MSI reinstall any missing registry keys, files or shortcuts. The vigilant reader might have noticed that there is more in the Component table.
    Beside its name and GUID, a component also has an installation directory, attributes and a KeyPath. The KeyPath is a reference to a file or registry key which is used to detect if the component is already installed. This becomes important when you repair or uninstall a component. To find out if the component is already installed, MSI checks if the registry key or file referenced by the KeyPath property exists. When it does not exist, MSI assumes that the component was either already uninstalled (which can lead to problems during uninstall) or that it is already installed and all is fine. Why is this detail so important? Let's put all files into one component. The KeyPath should then be one of the files of your component, to check if it was installed or not. When your installation becomes corrupt because a file was deleted, you cannot repair it with the Repair button under Add/Remove Programs, because MSI checks the component integrity via the resource referenced by its KeyPath. As long as you did not delete the KeyPath file, MSI thinks all resources of your component are installed and never executes any repair action. You get even more trouble when you try to remove files during an upgrade (you cannot remove files during an update) from your super-component which contains all files. The only way out, and therefore best practice, is to assign every resource you want to install its own component. This ensures painless updates and repairs, and you have much less effort to remove specific files during an upgrade. In effect you get this best-practice relation:

        Feature    1 –> n Components
        Component  1 –> 1 Resource

    MSI Component Rules

    Rule 1 – One component per resource

    Every resource you want to install (file, registry key, value, environment value, shortcut, directory, …) must get its own component, which never changes between versions as long as the install location is the same.

    Penalty: If you add more than one resource to a component, you will break the repair capability of MSI, because the KeyPath is used to check if the component needs repair.

        MSI      ComponentId  Files
        MSI 1.0  {1000}       File1-5
        MSI 2.0  {2000}       File2-5

    You want to remove File1 in version 2.0 of your MSI. Since you want to keep the other files, you create a new component and add them there. MSI will delete all files if the component refcount of {1000} drops to zero; the files you want to keep are added to the new component {2000}. OK, that does work if your upgrade uninstalls the old MSI first. This causes the refcount of all previously installed components to reach zero, which means that all files present in version 1.0 are deleted. But there is a faster way to perform your upgrade: first installing your new MSI and then removing the old one. If you choose this upgrade path, then you will lose File1-5 after your upgrade, and not only File1 as intended by your new component design.

    Rule 2 – Only add, never remove resources from a component

    If you followed Rule 1 you will not need Rule 2. You can add more resources to a component in a patch. That is ok. But you can never remove anything from it. There are tricky ways around that, but I do not want to encourage bad component design.

    Penalty: Let's assume you have two MSI files which each install one file under the same component:

        MSI1                  MSI2
        {1000} – ComponentId  {1000} – ComponentId
        File1.txt             File2.txt

    When you install and uninstall both MSIs, you will end up with an installation where either File1 or File2 is left. Why?
It seems that MSI does not store the resources associated with each component in its internal database. Instead, Windows will simply query the MSI that is currently being uninstalled for all resources belonging to this component. Since it will find only one file and not two, it will only uninstall one file. That is the main reason why you can never remove resources from a component!

Rule 3 – Never remove a component from an update MSI

This is the same as if you change the GUID of a component by accident in your new update package. The resulting update package will not contain all components from the previously installed package.

Penalty: When you remove a component from a feature, MSI will set the feature state during the update to Advertised and log a warning message into its log file if you have enabled MSI logging:

SELMGR: ComponentId '{2DCEA1BA-3E27-E222-484C-D0D66AEA4F62}' is registered to feature 'xxxxxxx', but is not present in the Component table. Removal of components from a feature is not supported!
MSI (c) (24:44) [07:53:13:436]: SELMGR: Removal of a component from a feature is not supported

Advertised means that MSI treats all components of this feature as not installed. As a consequence, during uninstall nothing will be removed, since it is not installed! This is not only bad because uninstall no longer works, but this feature will also not get the required patches. All other features which followed the component versioning rules for update packages will be updated, but the one faulty feature will not. This results in very hard-to-find bugs about why an update was only partially successful. Things got better with Windows Installer 4.5, but you cannot rely on nobody using an older installer. It is a good idea to add MSIENFORCEUPGRADECOMPONENTRULES=1 to your update msiexec call, which will abort the installation if you have violated this rule.
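For example, a hypothetical update invocation would look like this (Update.msi is a made-up package name; the property is the one named above):

msiexec /i Update.msi MSIENFORCEUPGRADECOMPONENTRULES=1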

    Read the article

  • Informed TDD – Kata “To Roman Numerals”

    - by Ralf Westphal
Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx

In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata “To Roman Numerals”, and whether ITDD wasn't a violation of TDD's principle of leaving out “advanced topics like mocks”. I like to respond with this article to his questions. There's more to say than fits into a commentary.

Mocks and TDD

I don't see in how far TDD is avoiding or opposed to mocks. TDD and mocks are orthogonal. TDD is about process, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires “expensive” resource access, you can't avoid using mocks. Because you don't want to constantly run all your tests against the real resource.

True, in ITDD mocks seem to be in almost inflationary use. That's not what you usually see in TDD demonstrations. However, there's a reason for that, as I tried to explain. I don't use mocks as proxies for “expensive” resources. Rather they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as “advanced”, or if you don't want to use a tool like JustMock, then you don't need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata.

ITDD for “To Roman Numerals”

gustav asked for the kata “To Roman Numerals”. I won't explain the requirements again. You can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is how I would do this kata differently.

1. Analyse

A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. “Formalization” in this case to me means describing the API of the required functionality. “[D]esign a program to work with Roman numerals”, as written in this “requirement document”, is not enough to start software development. Coding should only begin if the interface between the “system under development” and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer – even if that's your fellow developer.

Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations. TDD is used to explore the API and implement it at the same time. To me that's a violation of the Single Responsibility Principle (SRP), which should hold not only for software functional units but also for tasks or activities. In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions.
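As a sketch (my own reconstruction; the article itself shows its code only as screenshots), the bare API before any implementation could look like this:

using System;

public class ArabicRomanConversions
{
    public static string ToRoman(int arabic)
    {
        // No implementation yet - the acceptance tests start out red by design.
        throw new NotImplementedException();
    }
}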
Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just “invent” some acceptance test cases by picking Roman numerals from a Wikipedia article. They are supposed to be just “typical examples” without special meaning. Given the acceptance test cases, I then try to develop an understanding of the problem domain. I'll spare you that. The domain is trivial and is explained in almost all kata descriptions. How Roman numerals are built is not difficult to understand. What's more difficult, though, might be to find an efficient solution to convert into them automatically.

2. Solve

The usual TDD demonstration skips a solution finding phase. Like the interface exploration, it's mixed in with the implementation. But I don't think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They're simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody. Before you code, you better have a plan what to code. This does not mean you have to do “Big Design Up-Front”. It just means: have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that's limited, your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited. Thus the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand. Because it should mirror the process described by the logical or conceptual solution.

Here's my solution approach: The arabic “encoding” of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The “encoding” 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members. The roman “encoding” is different. There is no base (like 10 for arabic numbers), there are just digits of different value, and they have to be written in descending order. The “encoding” XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman “encoding” thus is simpler than the arabic. Each “digit” can be taken at face value. No multiplication with a base required.

But what about IV, which looks like a contradiction to the above rule? It is not – if you accept roman “digits” not to be limited to single characters only. Usually I, V, X, L, C, D, M are viewed as “digits”, and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as “digits”. Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V – which is more difficult, because here some “digits” get subtracted. Here's the list of roman “digits” with their values:

{1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}

Since I take IV, IX etc. as “digits”, translating an arabic number becomes trivial. I just need to find the values of the roman “digits” making up the number; e.g. 1954 is made up of 1000, 900, 50, and 4. I call those “digits” factors. If I move from the highest factor (M=1000) to the lowest (I=1), then translation is a three-step process:

1. Find all the factors
2. Translate the factors found
3. Compile the roman representation

Translation is just a look-up.
Finding, though, needs some calculation:

1. Find the highest remaining factor fitting in the value
2. Remember it and subtract it from the value
3. Repeat with the remaining value and remaining factors

Please note: this is just an algorithm. It's not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction. With this solution in hand, I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are three aspects to test:

- Test the translation
- Test the compilation
- Test finding the factors

Testing the translation primarily means to check if the map of factors and digits is comprehensive. That's simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: First check if an arabic number equal to a factor is processed correctly (e.g. 1000=M). Then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly. Then check if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]). Finally check if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected, I can slow down and repeat this process.

3. Implement

First I write a test for the acceptance test cases. It's red because there's no implementation even of the API. That's in conformance with “TDD lore”, I'd say. Next I implement the API. The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in. Because my goal is not to most quickly satisfy these tests, but to implement my solution in a stepwise manner. That I do by “faking” it: I just “assume” three functions to represent the transformation process of my solution. My hypothesis is that those three functions in conjunction produce correct results on the API level. I just have to implement them correctly. That's what I'm trying now – one by one.

I start with a simple “detail function”: Translate(). And I start with all the test cases in the obvious equivalence partition. As you can see, I dare to test a private method. Yes. That's a white-box test. But as you'll see, it won't make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. The implementation to satisfy the test is as simple as possible. Right how TDD wants me to do it: KISS.

Now for the second equivalence partition: translating multiple factors. (It's a pattern: if you need to do something repeatedly, separate the tests for doing it once and doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science. Usually I would have implemented the final code right away. Splitting it in two steps is just for “educational purposes” here. How small your implementation steps are is a matter of your programming competency. Some “see” the final code right away in their mind's eye – others need to work their way towards it. Having two tests I find more important.

Now for the next low-hanging fruit: compilation. It's even simpler than translation. A single test is enough, I guess.
And normally I would not even have bothered to write that one, because the implementation is so simple. I don't need to test .NET framework functionality. But again: if it serves the educational purpose…

Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions. But still I decide to write just a single test, since the structure of the test data is the same for all partitions. Again, I'm faking the implementation first. I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That's left for a drill-down with a test of the fake function. There are two main equivalence partitions, I guess: either the first factor is appropriate, or some later one. The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there's always a matching factor. Which is the case, since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied. Great, I can move on.

Now for more than a single factor. Interestingly, not just one test becomes green now, but all of them. Great! You might say that then I must not have done the simplest thing possible. And I would reply: I don't care. I did the most obvious thing. But I also find this loop very simple. Even simpler than a recursion, of which I had thought briefly during the problem solving phase. And by the way: the acceptance tests also went green. Mission accomplished. At least functionality-wise.

Now I have to tidy up things a bit. TDD calls for refactoring. Not much refactoring is needed, because I wrote the code in top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren't perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required. But maybe in a different way than you would expect. That's why I rather call it “cleanup”.

First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant, which leads to a small change in Find_factors(). And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain. However, I carry over the single-“digit” tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman “digits”. That leaves my final test class and the final production code. Test coverage as reported by NCrunch is 100%.

Reflexion

Is this the smallest possible code base for this kata? Surely not. You'll find more concise solutions on the internet. But LOC are of relatively little concern – as long as I can understand the code quickly. So-called “elegant” code, however, often is not easy to understand. The same goes for KISS code – especially if left unrefactored, as is often the case. That's why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top-down according to my design. I also could have implemented it bottom-up, since I knew some of the bottom of the solution. That's the leaves of the functional decomposition tree.
Where things became fuzzy because the design did not cover further details (as with Find_factors()), I repeated the process in the small, so to speak: fake some top level, endure red high-level tests, and first solve a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages:

- Encapsulation of the implementation details was not compromised. Naturally, private methods could stay private. I did not need to make them internal or public just to be able to test them.
- I was able to write focused tests for small aspects of the solution. No need to test everything through the solution root, the API.

The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development – being a researcher, being an engineer, being a craftsman – are represented as different phases. First find out what there is. Then devise a solution. Then code the solution, manifest the solution in code. Writing tests first is a good practice. But it should not be taken dogmatically. And above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus is almost inevitable – and not left to a refactoring step at the end, which is often skipped for various reasons.

PS: Yes, I have done this kata several times. But that only has an impact on the time needed for phases 1 and 2. I won't skip them because of that. And there are no shortcuts during implementation because of that.
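Since the code in the original post is shown only as screenshots, here is a compact reconstruction of the described solution (a sketch of my own, following the three functions named above; visibility modifiers and the test scaffolding of the original may differ):

using System;
using System.Collections.Generic;
using System.Linq;

public class ArabicRomanConversions
{
    // The map of roman "digits" and their values, with IV, IX etc. included as digits.
    static readonly (int Value, string Digit)[] Factors =
    {
        (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
        (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
        (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")
    };

    public static string ToRoman(int arabic)
    {
        var factors = Find_factors(arabic);
        var digits = Translate(factors);
        return Compile(digits);
    }

    // Find the highest remaining factor fitting in the value, subtract it, repeat.
    static IEnumerable<int> Find_factors(int arabic)
    {
        foreach (var (value, _) in Factors)
            while (arabic >= value)
            {
                yield return value;
                arabic -= value;
            }
    }

    // Translation is just a look-up in the factor map.
    static IEnumerable<string> Translate(IEnumerable<int> factors) =>
        factors.Select(f => Factors.First(x => x.Value == f).Digit);

    static string Compile(IEnumerable<string> digits) => string.Concat(digits);
}

// Example: ArabicRomanConversions.ToRoman(1954) yields "MCMLIV" (1000+900+50+4).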

    Read the article

  • Problem trying to lock framerate at 60 FPS

    - by shad0w
I've written a simple class to limit the framerate of my current project, but it does not work as it should. Here is the code:

void FpsCounter::Process()
{
    deltaTime = static_cast<double>(frameTimer.GetMsecs());
    waitTime = 1000.0/fpsLimit - deltaTime;
    frameTimer.Reset();
    if(waitTime <= 0)
    {
        std::cout << "error, waittime: " << waitTime << std::endl;
    }
    else
    {
        SDL_Delay(static_cast<Uint32>(waitTime));
    }
    if(deltaTime == 0)
    {
        currFps = -1;
    }
    else
    {
        currFps = 1000/deltaTime;
    }
    std::cout << "--Timings--" << std::endl;
    std::cout << "Delta: \t" << deltaTime << std::endl;
    std::cout << "Delay: \t" << waitTime << std::endl;
    std::cout << "FPS: \t" << currFps << std::endl;
    std::cout << "-- --" << std::endl;
}

Timer::Timer()
{
    startMsecs = 0;
}

Timer::~Timer()
{
    // TODO Auto-generated destructor stub
}

void Timer::Start()
{
    started = true;
    paused = false;
    Reset();
}

void Timer::Pause()
{
    if(started && !paused)
    {
        paused = true;
        pausedMsecs = SDL_GetTicks() - startMsecs;
    }
}

void Timer::Resume()
{
    if(paused)
    {
        paused = false;
        startMsecs = SDL_GetTicks() - pausedMsecs;
        pausedMsecs = 0;
    }
}

int Timer::GetMsecs()
{
    if(started)
    {
        if(paused)
        {
            return pausedMsecs;
        }
        else
        {
            return SDL_GetTicks() - startMsecs;
        }
    }
    return 0;
}

void Timer::Reset()
{
    startMsecs = SDL_GetTicks();
}

The FpsCounter::Process() method is called at the end of every iteration of my game loop. The problem is that the loop is correctly delayed only every second frame, so it runs one frame delayed at 60 FPS and the next without delay at over 1000 FPS. I have been searching for the error for quite a while now, but I cannot find it. I hope somebody can point me in the right direction.

    Read the article

  • Why does the screen resolution of 1440x900 suddenly disappear from Intel GMA Control Panel?

    - by GeneQ
I'm using a Vostro 1200 laptop with the Mobile Intel(R) 965 Express Chipset powering its graphics and running Vista 32-bit SP2. I've been using the Vostro with a Dell SE198WFP LCD Monitor as the external display since day one, for about two years, without any problems. Recently, I plugged the Vostro into a couple of other monitors. The problem is that now the native resolution of 1440x900 @ 60 Hz for my main monitor (the SE198WFP) is no longer available. (See below.) I've tried everything from uninstalling and reinstalling the Intel drivers as well as the monitor drivers, to no avail. I've googled this problem and it appears that it has happened to other people, but all the answers involve people giving up in frustration or reinstalling; both terrible outcomes. Has anybody ever figured out why this happens, and is there a good solution?

UPDATE: This dude has a complicated solution, which I haven't tried yet. His explanation for the problem was:

After an exausting search for an answer to the matter of why my brand new 19" widescreen monitor’s native resolution (1440×900) was unavailible (sic) in the display properties, I finally stumbled upon an article a person posted on Intel’s forums that basically explained what shannanigans Intel had been up to with their GMA 950 line of onboard graphic solutions.

Not very comforting.

    Read the article

  • Simple jQuery ajax call leaks memory in IE

    - by Thomas Lane
I created a web page that makes an ajax call every second. In Internet Explorer 7 it leaks memory badly (20 MB in about 15 minutes). The program is very simple. It just runs a JavaScript function that makes an ajax call. The server returns an empty string, and the JavaScript does nothing with it. I use setTimeout to run the function every second, and I'm using Drip to watch the thing. Here is the source:

<html>
<head>
<script type="text/javascript" src="http://www.google.com/jsapi"></script>
<script type="text/javascript">
    google.load('jquery', '1.4.2');
    google.load('jqueryui', '1.7.2');
</script>
<script type="text/javascript">
    setTimeout('testJunk()',1000);
    function testJunk() {
        $.ajax({
            url: 'http://xxxxxxxxxxxxxx/test', // The url returns an empty string
            dataType: 'html',
            success: function(data){}
        });
        setTimeout('testJunk()',1000)
    }
</script>
</head>
<body>
Why is memory usage going up?
</body>
</html>

Anyone have an idea how to plug this leak? I have a real application that updates a large table this way, but left unattended it will eat up gigabytes of memory. Okay, so after some good suggestions, I modified the code to:

<html>
<head>
<script type="text/javascript" src="http://www.google.com/jsapi"></script>
<script type="text/javascript">
    google.load('jquery', '1.4.2');
    google.load('jqueryui', '1.7.2');
</script>
<script type="text/javascript">
    setTimeout(testJunk,1000);
    function testJunk() {
        $.ajax({
            url: 'http://xxxxxxxxxxxxxx/test', // The url returns an empty string
            dataType: 'html',
            success: function(data){setTimeout(testJunk,1000)}
        });
    }
</script>
</head>
<body>
Why is memory usage going up?
</body>
</html>

It didn't seem to make any difference though. I'm not doing anything with the DOM, and if I comment out the ajax call, the memory leak stops. So it looks like the leak is entirely in the ajax call. Does jQuery ajax inherently create some sort of circular reference, and if so, how can I free it? By the way, it doesn't leak in Firefox. Someone suggested running the test in another VM and seeing if the results are the same. Rather than setting up another VM, I found a laptop that was running XP Home with IE8. It exhibits the same problem. I tried some older versions of jQuery and got better results, but the problem didn't go away entirely until I abandoned ajax in jQuery and went with more traditional (and ugly) ajax.

    Read the article

  • Sage 50 Accounts 2010 won't run on Windows 7

    - by admintech
I have successfully installed Sage 50 Accounts 2010 onto my 32-bit Windows 7 machine, yet whenever I try to run it I encounter:

Log Name: Application
Source: Application Error
Date: 24/05/2010 17:14:13
Event ID: 1000
Task Category: (100)
Level: Error
Keywords: Classic
User: N/A
Computer: LukeThomas-PC.domain.co.uk
Description:
Faulting application name: Sage.SBD.Platform.Installation.SoftwareUpdates.UI.exe, version: 2.0.0.91, time stamp: 0x4a8c22fe
Faulting module name: igdumd32.dll, version: 8.15.10.1872, time stamp: 0x4a848a05
Exception code: 0xc0000409
Fault offset: 0x00012f96
Faulting process id: 0x1778
Faulting application start time: 0x01cafb5c2493b609
Faulting application path: C:\Program Files\Common Files\Sage SBD\Sage.SBD.Platform.Installation.SoftwareUpdates.UI.exe
Faulting module path: C:\Windows\system32\igdumd32.dll
Report Id: 63e2246b-674f-11df-96ba-002564c97988

(The Event Xml block repeats the same fields.)

Any help would be appreciated, as I can't find anything about this error.

    Read the article

  • IIS URL Rewrite HTTP to HTTPS with Port

    - by Andy Arismendi
My website has two bindings: 1000 and 1443 (ports 80/443 are in use by another website on the same IIS instance). Port 1000 is HTTP, port 1443 is HTTPS. What I want to do is redirect any incoming request using http://server:1000 to https://server:1443. I'm playing around with IIS 7 rewrite module 2.0, but I'm banging my head against the wall. Any insight is appreciated! BTW, the rewrite configuration below works great with a site that has an HTTP binding on port 80 and an HTTPS binding on port 443, but it doesn't work with my ports.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="HTTP to HTTPS redirect" stopProcessing="true">
          <match url="(.*)" />
          <conditions trackAllCaptures="true">
            <add input="{HTTPS}" pattern="off" />
          </conditions>
          <action type="Redirect" redirectType="Found" url="https://{HTTP_HOST}/{R:1}" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>

    Read the article

  • sudo -i not behaving as expected

    - by jpdoyle
Why is the sudo -i command not setting TERM, PATH, HOME, SHELL, LOGNAME, USER and USERNAME on my fresh Ubuntu 12.04.1 LTS, as described in the manual?

# sudo -u johnny -i echo $HOME && echo $USER
/root
root

Using -H is not setting $HOME either. And my user does exist, with a home:

# cat /etc/passwd
[..]
johnny:x:1000:1000::/home/johnny:/bin/bash

Update: Why am I having this issue? Because I am trying to create an Ubuntu upstart job for multiple unicorn applications, and I am using a user installation of RVM + Bundler: without $HOME being properly evaluated, RVM does not find ~/.rvm.

    Read the article

  • Can't access a remote server due to a mistake in a firewall rule

    - by LMIT
I need help due to a silly mistake of mine! For a long time I have had a dedicated server hosted by register.it. Usually I access this server (Windows 2008 Server) remotely via Terminal Server. Today I wanted to block one site that continually sends requests to my server. So I was adding a new rule to the firewall (the native firewall on Windows 2008 Server), as I have done many times, but this time, probably because my brain was asleep, I added a general rule that blocks everything! So I can't access the server anymore, no users can browse the sites, and nothing is working, because this rule blocks everything. I know it is a silly mistake, no need to tell me :) so please, what can I do? The only thing my provider lets me do is reboot the server from its control panel, but this does not help me in any way, because the firewall blocks me again. I have the administrator username and password, so what can I really do? Is there some trick, some technique, some expert guru who can help me in this very bad situation?

UPDATE: I followed Tony's suggestion and ran an NMAP scan to check whether some ports are open, but they all look closed:

NMAP RESULT
Starting Nmap 6.00 ( http://nmap.org ) at 2012-05-29 22:32 W. Europe Daylight Time
NSE: Loaded 93 scripts for scanning.
NSE: Script Pre-scanning.
Initiating Parallel DNS resolution of 1 host. at 22:32
Completed Parallel DNS resolution of 1 host. at 22:33, 13.00s elapsed
Initiating SYN Stealth Scan at 22:33
Scanning xxx.xxx.xxx.xxx [1000 ports]
SYN Stealth Scan Timing: About 29.00% done; ETC: 22:34 (0:01:16 remaining)
SYN Stealth Scan Timing: About 58.00% done; ETC: 22:34 (0:00:44 remaining)
Completed SYN Stealth Scan at 22:34, 104.39s elapsed (1000 total ports)
Initiating Service scan at 22:34
Initiating OS detection (try #1) against xxx.xxx.xxx.xxx
Retrying OS detection (try #2) against xxx.xxx.xxx.xxx
Initiating Traceroute at 22:34
Completed Traceroute at 22:35, 6.27s elapsed
Initiating Parallel DNS resolution of 11 hosts. at 22:35
Completed Parallel DNS resolution of 11 hosts. at 22:35, 13.00s elapsed
NSE: Script scanning xxx.xxx.xxx.xxx.
Initiating NSE at 22:35
Completed NSE at 22:35, 0.00s elapsed
Nmap scan report for xxx.xxx.xxx.xxx
Host is up.
All 1000 scanned ports on xxx.xxx.xxx.xxx are filtered
Too many fingerprints match this host to give specific OS details

TRACEROUTE (using proto 1/icmp)
HOP RTT ADDRESS
1 ... ... ...
13 ... 30

NSE: Script Post-scanning.
Read data files from: D:\Program Files\Nmap
OS and Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 145.08 seconds
Raw packets sent: 2116 (96.576KB) | Rcvd: 61 (4.082KB)

Question: Can the provider access the server locally with the username and password?

    Read the article

  • Error message during installation of Windows XP Professional

    - by user270686
During installation of Windows XP Professional the installer had an error which points to a bad CD, so I got another one and tried it, but the same thing happened. What can I do? It says: "One of the components Windows needs to continue could not be installed." The component is called sys.dll: D:/i386/asms/1000/msft/windows/gdiplus/gdiplus.man. I tried running chkdsk /f to fix the errors on the disk. I've searched the internet all day, and I don't know what else to do. I get these error messages:

SYS.DLL: Syntax error in manifest or policy file D:/i386/asms/1000/msft/windows/gdiplus/gdiplus.man on line 4
Error: Installation failed: d:/I386/asms. Error message: Data Error (cyclic redundancy check)
Fatal error: One of the components Windows needs to continue could not be installed. If you are using a CD, there might be a problem with the CD; try cleaning it and trying again, or use another CD.

    Read the article

< Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >