Search Results

Search found 5903 results on 237 pages for 'generic variance'.


  • What statistics can be maintained for a set of numerical data without iterating?

    - by Dan Tao
    Update: Just for future reference, I'm going to list all of the statistics that I'm aware of that can be maintained in a rolling collection, recalculated as an O(1) operation on every addition/removal (this is really how I should've worded the question from the beginning):

    Obvious: Count, Sum, Mean, Max*, Min*, Median**
    Less obvious: Variance, Standard Deviation, Skewness, Kurtosis, Mode***, Weighted Average, Weighted Moving Average****

    OK, so to put it more accurately: these are not "all" of the statistics I'm aware of. They're just the ones that I can remember off the top of my head right now.

    *Can be recalculated in O(1) for additions only, or for additions and removals if the collection is sorted (but in this case, insertion is not O(1)). Removals potentially incur an O(n) recalculation for non-sorted collections.
    **Recalculated in O(1) for a sorted, indexed collection only.
    ***Requires a fairly complex data structure to recalculate in O(1).
    ****This can certainly be achieved in O(1) for additions and removals when the weights are assigned in a linearly descending fashion. In other scenarios, I'm not sure.

    Original Question: Say I maintain a collection of numerical data -- let's say, just a bunch of numbers. For this data, there are loads of calculated values that might be of interest; one example would be the sum. To get the sum of all this data, I could...

    Option 1: Iterate through the collection, adding all the values:

        double sum = 0.0;
        for (int i = 0; i < values.Count; i++)
            sum += values[i];

    Option 2: Maintain the sum, eliminating the need to ever iterate over the collection just to find the sum:

        void Add(double value) { values.Add(value); sum += value; }
        void Remove(double value) { values.Remove(value); sum -= value; }

    EDIT: To put this question in more relatable terms, let's compare the two options above to a (sort of) real-world situation: Suppose I start listing numbers out loud and ask you to keep them in your head. I start by saying, "11, 16, 13, 12." If you've just been remembering the numbers themselves and nothing more, and then I say, "What's the sum?", you'd have to think to yourself, "OK, what's 11 + 16 + 13 + 12?" before responding, "52." If, on the other hand, you had been keeping track of the sum yourself while I was listing the numbers (i.e., when I said, "11" you thought "11", when I said "16", you thought, "27," and so on), you could answer "52" right away. Then if I say, "OK, now forget the number 16," if you've been keeping track of the sum inside your head you can simply take 16 away from 52 and know that the new sum is 36, rather than taking 16 off the list and then summing up 11 + 13 + 12. So my question is, what other calculations, other than the obvious ones like sum and average, are like this?

    SECOND EDIT: As an arbitrary example of a statistic that (I'm almost certain) does require iteration -- and therefore cannot be maintained as simply as a sum or average -- consider if I asked you, "how many numbers in this collection are divisible by the min?" Let's say the numbers are 5, 15, 19, 20, 21, 25, and 30. The min of this set is 5, which divides into 5, 15, 20, 25, and 30 (but not 19 or 21), so the answer is 5. Now if I remove 5 from the collection and ask the same question, the answer is now 2, since only 15 and 30 are divisible by the new min of 15; but, as far as I can tell, you cannot know this without going through the collection again.
So I think this gets to the heart of my question: if we can divide kinds of statistics into these categories, those that are maintainable (my own term, maybe there's a more official one somewhere) versus those that require iteration to compute any time a collection is changed, what are all the maintainable ones? What I am asking about is not strictly the same as an online algorithm (though I sincerely thank those of you who introduced me to that concept). An online algorithm can begin its work without having even seen all of the input data; the maintainable statistics I am seeking will certainly have seen all the data, they just don't need to reiterate through it over and over again whenever it changes.
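
    As an illustration of the "maintainable" category, here is a minimal C# sketch (not from the original post; the class and member names are invented) that keeps count, sum and sum of squares alongside the collection, so mean, variance and standard deviation never need another pass. The sum-of-squares formula is less numerically stable than Welford's algorithm, but unlike Welford it makes removal trivial.

        using System;
        using System.Collections.Generic;

        // Maintains count, sum and sum of x*x so that mean, variance and standard
        // deviation are O(1) to read and O(1) to adjust on every Add/Remove.
        class RollingStats
        {
            private readonly List<double> values = new List<double>();
            private double sum;          // running sum
            private double sumOfSquares; // running sum of x*x

            public int Count { get { return values.Count; } }
            public double Sum { get { return sum; } }
            public double Mean { get { return Count == 0 ? 0.0 : sum / Count; } }

            // Population variance: E[x^2] - (E[x])^2
            public double Variance
            {
                get { return Count == 0 ? 0.0 : (sumOfSquares / Count) - (Mean * Mean); }
            }

            public double StandardDeviation { get { return Math.Sqrt(Variance); } }

            public void Add(double value)
            {
                values.Add(value);
                sum += value;
                sumOfSquares += value * value;
            }

            public void Remove(double value)
            {
                if (values.Remove(value))   // O(n) to locate the element, O(1) to fix the statistics
                {
                    sum -= value;
                    sumOfSquares -= value * value;
                }
            }
        }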

    Read the article

  • How to debug "deep" crashes in Android?

    - by eerok512
    Hi All, I've been trying to debug an android crash that is occurring without a Java Stack Trace... Java Stack Trace bugs are very easy for me to fix... but this bug I'm getting seems to be crashing inside the "NDK" or whatever it is the deep internals of Android are called... I've made no modifications to the NDK btw... I just dunno what else to call that layer hehe. Anyway I'm mainly looking for advice on deep-debug methods, rather than help with this specific problem... because I doubt I can post all the source code involved... so really I just need to know how to set breakpoints at the deep layers or whatever other methods there are to trace deep-crashes to their source... so I will briefly describe the bug and then post a LogCat. I have an app with 7 Activities Activity_INTRO Activity_EULA Activity_MAIN Activity_Contact Activity_News Activity_Library Activity_More INTRO is the initiating one... it fades in some company logos... after displaying them for a set time it jumps to the EULA activity... after the user accepts the EULA, it jumps to MAIN... MAIN then creates a TabHost and populates it with the 4 remaining activities now heres the thing... when I click on say, the More tab of the TabHost, the app pauses for a few seconds and then hard-crashes... no java stack trace, but an actual ASM level trace with the registers and IP and stack... the same thing occurs no matter which tab I select, Contact, News, Library, More... all of them crash with the same hard-crash if however I set the manifest to start the app at Activity_MAIN, bypassing the INTRO and EULA, then these crashes do not occur... so something is lingering from those opening activities that is somehow hosing the TabHost'ed Activities... and I'm wondering what the hell that could be... because I'm using finish() on those activites when they need to jump... in fact here is how I'm doing it let me know if you see any bugs: when jumping from INTRO to EULA I do: //Display the EULA Intent newIntent = new Intent (avi, Activity_EULA.class); startActivity (newIntent); finish(); and EULA to MAIN: Intent newIntent = new Intent (this, Activity_Main.class); startActivity (newIntent); finish(); anyway, here is the hard crash log... please let me know if there is some way I can reverse engineer either /system/lib/libcutils.so or /system/lib/libandroid_runtime.so, because I think the crash is happening in one of them... i think its happening in the libandroid_runtime in fact.... 
anyway on to the log: 12-25 00:56:07.322: INFO/DEBUG(551): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** 12-25 00:56:07.332: INFO/DEBUG(551): Build fingerprint: 'generic/sdk/generic/:1.5/CUPCAKE/150240:eng/test-keys' 12-25 00:56:07.362: INFO/DEBUG(551): pid: 722, tid: 723 >>> com.killerapps.chokes <<< 12-25 00:56:07.362: INFO/DEBUG(551): signal 11 (SIGSEGV), fault addr 00000004 12-25 00:56:07.362: INFO/DEBUG(551): r0 00000004 r1 40021800 r2 00000004 r3 ad3296c5 12-25 00:56:07.372: INFO/DEBUG(551): r4 00000000 r5 00000000 r6 ad342da5 r7 41039fb8 12-25 00:56:07.372: INFO/DEBUG(551): r8 100ffcb0 r9 41039fb0 10 41e014a0 fp 00001071 12-25 00:56:07.382: INFO/DEBUG(551): ip ad35b874 sp 100ffc98 lr ad3296cf pc afb045a8 cpsr 00000010 12-25 00:56:07.552: INFO/DEBUG(551): #00 pc 000045a8 /system/lib/libcutils.so 12-25 00:56:07.572: INFO/DEBUG(551): #01 lr ad3296cf /system/lib/libandroid_runtime.so 12-25 00:56:07.582: INFO/DEBUG(551): stack: 12-25 00:56:07.582: INFO/DEBUG(551): 100ffc58 00000000 12-25 00:56:07.592: INFO/DEBUG(551): 100ffc5c 001c5278 [heap] 12-25 00:56:07.602: INFO/DEBUG(551): 100ffc60 000000da 12-25 00:56:07.602: INFO/DEBUG(551): 100ffc64 0016c778 [heap] 12-25 00:56:07.602: INFO/DEBUG(551): 100ffc68 100ffcc8 12-25 00:56:07.602: INFO/DEBUG(551): 100ffc6c 001c5278 [heap] 12-25 00:56:07.612: INFO/DEBUG(551): 100ffc70 427d1ac0 12-25 00:56:07.612: INFO/DEBUG(551): 100ffc74 000000c1 12-25 00:56:07.612: INFO/DEBUG(551): 100ffc78 40021800 12-25 00:56:07.612: INFO/DEBUG(551): 100ffc7c 000000c2 12-25 00:56:07.612: INFO/DEBUG(551): 100ffc80 00000000 12-25 00:56:07.612: INFO/DEBUG(551): 100ffc84 00000000 12-25 00:56:07.622: INFO/DEBUG(551): 100ffc88 00000000 12-25 00:56:07.622: INFO/DEBUG(551): 100ffc8c 00000000 12-25 00:56:07.622: INFO/DEBUG(551): 100ffc90 df002777 12-25 00:56:07.632: INFO/DEBUG(551): 100ffc94 e3a070ad 12-25 00:56:07.632: INFO/DEBUG(551): #00 100ffc98 00000000 12-25 00:56:07.632: INFO/DEBUG(551): 100ffc9c ad3296cf /system/lib/libandroid_runtime.so 12-25 00:56:07.632: INFO/DEBUG(551): 100ffca0 100ffcd0 12-25 00:56:07.642: INFO/DEBUG(551): 100ffca4 ad342db5 /system/lib/libandroid_runtime.so 12-25 00:56:07.642: INFO/DEBUG(551): 100ffca8 410a79d0 12-25 00:56:07.642: INFO/DEBUG(551): 100ffcac ad00e3b8 /system/lib/libdvm.so 12-25 00:56:07.652: INFO/DEBUG(551): 100ffcb0 410a79d0 12-25 00:56:07.652: INFO/DEBUG(551): 100ffcb4 0016bac0 [heap] 12-25 00:56:07.662: INFO/DEBUG(551): 100ffcb8 ad342da5 /system/lib/libandroid_runtime.so 12-25 00:56:07.662: INFO/DEBUG(551): 100ffcbc 40021800 12-25 00:56:07.662: INFO/DEBUG(551): 100ffcc0 410a79d0 12-25 00:56:07.662: INFO/DEBUG(551): 100ffcc4 afe39dd0 12-25 00:56:07.662: INFO/DEBUG(551): 100ffcc8 100ffcd0 12-25 00:56:07.662: INFO/DEBUG(551): 100ffccc ad040a8d /system/lib/libdvm.so 12-25 00:56:07.672: INFO/DEBUG(551): 100ffcd0 41039fb0 12-25 00:56:07.672: INFO/DEBUG(551): 100ffcd4 420000f8 12-25 00:56:07.672: INFO/DEBUG(551): 100ffcd8 ad342da5 /system/lib/libandroid_runtime.so 12-25 00:56:07.672: INFO/DEBUG(551): 100ffcdc 100ffd48 12-25 00:56:07.852: DEBUG/dalvikvm(722): GC freed 367 objects / 15144 bytes in 210ms 12-25 00:56:08.081: DEBUG/InetAddress(722): www.akillerapp.com: 74.86.47.202 (family 2, proto 6) 12-25 00:56:08.242: DEBUG/dalvikvm(722): GC freed 62 objects / 2328 bytes in 122ms 12-25 00:56:08.771: DEBUG/dalvikvm(722): GC freed 245 objects / 11744 bytes in 179ms 12-25 00:56:09.131: INFO/ActivityManager(577): Process com.killerapps.chokes (pid 722) has died. 
12-25 00:56:09.171: INFO/WindowManager(577): WIN DEATH: Window{43719320 com.killerapps.chokes/com.killerapps.chokes.Activity_Main paused=false} 12-25 00:56:09.251: INFO/DEBUG(551): debuggerd committing suicide to free the zombie! 12-25 00:56:09.291: DEBUG/Zygote(553): Process 722 terminated by signal (11) 12-25 00:56:09.311: INFO/DEBUG(781): debuggerd: Jun 30 2009 17:00:51 12-25 00:56:09.331: WARN/InputManagerService(577): Got RemoteException sending setActive(false) notification to pid 722 uid 10020

    Read the article

  • How to implement versioning for web application data

    - by dankyy1
    hi all;I'm devoloping a web application which renders data from DB and also updates datas with editor UI Pages.So i want to implement a versioning mechanism for render pages got data over db again if only data on db updated by editor pages.. I decided to use Session objects for the version information that client had taken latestly.And the Application object that the latest DB version of objects ,i used the data objects guid as key for each data item client version holder class like below ItemRunnigVersionInformation class holds currentitem guid and last loadtime from DB public class ClientVersionManager { public static List<ItemRunnigVersionInformation> SessionItemRunnigVersionInformation { get { if (HttpContext.Current.Session["SessionItemRunnigVersionInformation"] == null) HttpContext.Current.Session["SessionItemRunnigVersionInformation"] = new List<ItemRunnigVersionInformation>(); return (List<ItemRunnigVersionInformation>)HttpContext.Current.Session["SessionItemRunnigVersionInformation"]; } set { HttpContext.Current.Session["SessionItemRunnigVersionInformation"] = value; } } /// <summary> /// this will be updated when editor pages /// </summary> /// <param name="itemRunnigVersionInformation"></param> public static void UpdateItemRunnigSessionVersion(string itemGuid) { ItemRunnigVersionInformation itemRunnigVersionAtAppDomain = PlayListVersionManager.GetItemRunnigVersionInformationByID(itemGuid); ItemRunnigVersionInformation itemRunnigVersionInformationAtSession = SessionItemRunnigVersionInformation.FirstOrDefault(t => t.ItemGuid == itemGuid); if ((itemRunnigVersionInformationAtSession == null) && (itemRunnigVersionAtAppDomain != null)) { ExtensionMethodsForClientVersionManager.ExtensionMethodsForClientVersionManager.Add(SessionItemRunnigVersionInformation, itemRunnigVersionAtAppDomain); } else if (itemRunnigVersionAtAppDomain != null) { ExtensionMethodsForClientVersionManager.ExtensionMethodsForClientVersionManager.Remove(SessionItemRunnigVersionInformation, itemRunnigVersionInformationAtSession); ExtensionMethodsForClientVersionManager.ExtensionMethodsForClientVersionManager.Add(SessionItemRunnigVersionInformation, itemRunnigVersionAtAppDomain); } } /// <summary> /// by given parameters check versions over PlayListVersionManager versions and /// adds versions to clientversion manager if any item version on /// playlist not found it will also added to PlaylistManager list /// </summary> /// <param name="playList"></param> /// <param name="userGuid"></param> /// <param name="ownerGuid"></param> public static void UpdateCurrentSessionVersion(PlayList playList, string userGuid, string ownerGuid) { ItemRunnigVersionInformation tmpItemRunnigVersionInformation; List<ItemRunnigVersionInformation> currentItemRunnigVersionInformationList = new List<ItemRunnigVersionInformation>(); if (!string.IsNullOrEmpty(userGuid)) { tmpItemRunnigVersionInformation = PlayListVersionManager.GetItemRunnigVersionInformationByID(userGuid); if (tmpItemRunnigVersionInformation == null) { tmpItemRunnigVersionInformation = new ItemRunnigVersionInformation(userGuid, DateTime.Now.ToUniversalTime()); PlayListVersionManager.UpdateItemRunnigAppDomainVersion(tmpItemRunnigVersionInformation); } ExtensionMethodsForClientVersionManager.ExtensionMethodsForClientVersionManager.Add(currentItemRunnigVersionInformationList, tmpItemRunnigVersionInformation); } if (!string.IsNullOrEmpty(ownerGuid)) { tmpItemRunnigVersionInformation = PlayListVersionManager.GetItemRunnigVersionInformationByID(ownerGuid); if 
(tmpItemRunnigVersionInformation == null) { tmpItemRunnigVersionInformation = new ItemRunnigVersionInformation(ownerGuid, DateTime.Now.ToUniversalTime()); PlayListVersionManager.UpdateItemRunnigAppDomainVersion(tmpItemRunnigVersionInformation); } ExtensionMethodsForClientVersionManager.ExtensionMethodsForClientVersionManager.Add(currentItemRunnigVersionInformationList, tmpItemRunnigVersionInformation); } if ((playList != null) && (playList.PlayListItemCollection != null)) { tmpItemRunnigVersionInformation = PlayListVersionManager.GetItemRunnigVersionInformationByID(playList.GUID); if (tmpItemRunnigVersionInformation == null) { tmpItemRunnigVersionInformation = new ItemRunnigVersionInformation(playList.GUID, DateTime.Now.ToUniversalTime()); PlayListVersionManager.UpdateItemRunnigAppDomainVersion(tmpItemRunnigVersionInformation); } currentItemRunnigVersionInformationList.Add(tmpItemRunnigVersionInformation); foreach (PlayListItem playListItem in playList.PlayListItemCollection) { tmpItemRunnigVersionInformation = PlayListVersionManager.GetItemRunnigVersionInformationByID(playListItem.GUID); if (tmpItemRunnigVersionInformation == null) { tmpItemRunnigVersionInformation = new ItemRunnigVersionInformation(playListItem.GUID, DateTime.Now.ToUniversalTime()); PlayListVersionManager.UpdateItemRunnigAppDomainVersion(tmpItemRunnigVersionInformation); } currentItemRunnigVersionInformationList.Add(tmpItemRunnigVersionInformation); foreach (SoftKey softKey in playListItem.PlayListSoftKeys) { tmpItemRunnigVersionInformation = PlayListVersionManager.GetItemRunnigVersionInformationByID(softKey.GUID); if (tmpItemRunnigVersionInformation == null) { tmpItemRunnigVersionInformation = new ItemRunnigVersionInformation(softKey.GUID, DateTime.Now.ToUniversalTime()); PlayListVersionManager.UpdateItemRunnigAppDomainVersion(tmpItemRunnigVersionInformation); } ExtensionMethodsForClientVersionManager.ExtensionMethodsForClientVersionManager.Add(currentItemRunnigVersionInformationList, tmpItemRunnigVersionInformation); } foreach (MenuItem menuItem in playListItem.MenuItems) { tmpItemRunnigVersionInformation = PlayListVersionManager.GetItemRunnigVersionInformationByID(menuItem.Guid); if (tmpItemRunnigVersionInformation == null) { tmpItemRunnigVersionInformation = new ItemRunnigVersionInformation(menuItem.Guid, DateTime.Now.ToUniversalTime()); PlayListVersionManager.UpdateItemRunnigAppDomainVersion(tmpItemRunnigVersionInformation); } ExtensionMethodsForClientVersionManager.ExtensionMethodsForClientVersionManager.Add(currentItemRunnigVersionInformationList, tmpItemRunnigVersionInformation); } } } SessionItemRunnigVersionInformation = currentItemRunnigVersionInformationList; } public static ItemRunnigVersionInformation GetItemRunnigVersionInformationById(string itemGuid) { return SessionItemRunnigVersionInformation.FirstOrDefault(t => t.ItemGuid == itemGuid); } public static void DeleteItemRunnigAppDomain(string itemGuid) { ExtensionMethodsForClientVersionManager.ExtensionMethodsForClientVersionManager.Remove(SessionItemRunnigVersionInformation, NG.IPTOffice.Paloma.Helper.ExtensionMethodsFoPlayListVersionManager.ExtensionMethodsFoPlayListVersionManager.GetMatchingItemRunnigVersionInformation(SessionItemRunnigVersionInformation, itemGuid)); } } and that was for server one public class PlayListVersionManager { public static List<ItemRunnigVersionInformation> AppDomainItemRunnigVersionInformation { get { if (HttpContext.Current.Application["AppDomainItemRunnigVersionInformation"] == null) 
HttpContext.Current.Application["AppDomainItemRunnigVersionInformation"] = new List<ItemRunnigVersionInformation>(); return (List<ItemRunnigVersionInformation>)HttpContext.Current.Application["AppDomainItemRunnigVersionInformation"]; } set { HttpContext.Current.Application["AppDomainItemRunnigVersionInformation"] = value; } } public static ItemRunnigVersionInformation GetItemRunnigVersionInformationByID(string itemGuid) { return ExtensionMethodsFoPlayListVersionManager.ExtensionMethodsFoPlayListVersionManager.GetMatchingItemRunnigVersionInformation(AppDomainItemRunnigVersionInformation, itemGuid); } /// <summary> /// this will be updated when editor pages /// if any record at playlistversion is found it will be addedd /// </summary> /// <param name="itemRunnigVersionInformation"></param> public static void UpdateItemRunnigAppDomainVersion(ItemRunnigVersionInformation itemRunnigVersionInformation) { ItemRunnigVersionInformation itemRunnigVersionInformationAtAppDomain = NG.IPTOffice.Paloma.Helper.ExtensionMethodsFoPlayListVersionManager.ExtensionMethodsFoPlayListVersionManager.GetMatchingItemRunnigVersionInformation(AppDomainItemRunnigVersionInformation, itemRunnigVersionInformation.ItemGuid); if (itemRunnigVersionInformationAtAppDomain == null) { ExtensionMethodsFoPlayListVersionManager.ExtensionMethodsFoPlayListVersionManager.Add(AppDomainItemRunnigVersionInformation, itemRunnigVersionInformation); } else { ExtensionMethodsFoPlayListVersionManager.ExtensionMethodsFoPlayListVersionManager.Remove(AppDomainItemRunnigVersionInformation, itemRunnigVersionInformationAtAppDomain); ExtensionMethodsFoPlayListVersionManager.ExtensionMethodsFoPlayListVersionManager.Add(AppDomainItemRunnigVersionInformation, itemRunnigVersionInformation); } } //this will be checked each time if needed to update item over DB public static bool IsRunnigItemLastVersion(ItemRunnigVersionInformation itemRunnigVersionInformation, bool ignoreNullEntry, out bool itemNotExistsAtAppDomain) { itemNotExistsAtAppDomain = false; if (itemRunnigVersionInformation != null) { ItemRunnigVersionInformation itemRunnigVersionInformationAtAppDomain = AppDomainItemRunnigVersionInformation.FirstOrDefault(t => t.ItemGuid == itemRunnigVersionInformation.ItemGuid); itemNotExistsAtAppDomain = (itemRunnigVersionInformationAtAppDomain == null); if (itemNotExistsAtAppDomain && (ignoreNullEntry)) { ExtensionMethodsFoPlayListVersionManager.ExtensionMethodsFoPlayListVersionManager.Add(AppDomainItemRunnigVersionInformation, itemRunnigVersionInformation); return true; } else if (!itemNotExistsAtAppDomain && (itemRunnigVersionInformationAtAppDomain.LastLoadTime <= itemRunnigVersionInformation.LastLoadTime)) return true; else return false; } else return ignoreNullEntry; } public static void DeleteItemRunnigAppDomain(string itemGuid) { ExtensionMethodsFoPlayListVersionManager.ExtensionMethodsFoPlayListVersionManager.Remove(AppDomainItemRunnigVersionInformation, NG.IPTOffice.Paloma.Helper.ExtensionMethodsFoPlayListVersionManager.ExtensionMethodsFoPlayListVersionManager.GetMatchingItemRunnigVersionInformation(AppDomainItemRunnigVersionInformation, itemGuid)); } } when more than one client requests the page i got "Collection was modified; enumeration operation may not execute." like below.. xception: System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. ---> System.InvalidOperationException: Collection was modified; enumeration operation may not execute. 
at System.ThrowHelper.ThrowInvalidOperationException(ExceptionResource resource) at System.Collections.Generic.List1.Enumerator.MoveNextRare() at System.Collections.Generic.List1.Enumerator.MoveNext() at System.Linq.Enumerable.FirstOrDefault[TSource](IEnumerable1 source, Func2 predicate) at NG.IPTOffice.Paloma.Helper.PlayListVersionManager.UpdateItemRunnigAppDomainVersion(ItemRunnigVersionInformation itemRunnigVersionInformation) in at System.Web.UI.Control.LoadRecursive() at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) --- End of inner exception stack trace --- at System.Web.UI.Page.HandleError(Exception e) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest() at System.Web.UI.Page.ProcessRequestWithNoAssert(HttpContext context) at System.Web.UI.Page.ProcessRequest(HttpContext context) at ASP.playlistwebform_aspx.ProcessRequest(HttpContext context) in c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\ipservicestest\8921e5c8\5d09c94d\App_Web_n4qdnfcq.2.cs:line 0 at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)----------- how to implement version management like this scnerio? how can i to avoid this exception? thnx
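
    The "Collection was modified; enumeration operation may not execute" exception comes from the fact that the List<ItemRunnigVersionInformation> stored in Application state is shared by every request: one request is enumerating it (FirstOrDefault) while another adds or removes entries. A minimal sketch of the usual fix is to put all reads and writes of the shared list under one lock (or to switch to a thread-safe structure such as ConcurrentDictionary keyed by item GUID); the class name PlayListVersionStore and the lock object below are invented for illustration, and only the ItemGuid property is assumed from the post.

        using System.Collections.Generic;
        using System.Linq;
        using System.Web;

        public static class PlayListVersionStore
        {
            private static readonly object Sync = new object();

            // The application-wide list, created on first use, exactly as in the post.
            private static List<ItemRunnigVersionInformation> Items
            {
                get
                {
                    var app = HttpContext.Current.Application;
                    if (app["AppDomainItemRunnigVersionInformation"] == null)
                        app["AppDomainItemRunnigVersionInformation"] = new List<ItemRunnigVersionInformation>();
                    return (List<ItemRunnigVersionInformation>)app["AppDomainItemRunnigVersionInformation"];
                }
            }

            public static ItemRunnigVersionInformation Get(string itemGuid)
            {
                lock (Sync)   // never enumerate the shared list outside the lock
                {
                    return Items.FirstOrDefault(t => t.ItemGuid == itemGuid);
                }
            }

            public static void AddOrReplace(ItemRunnigVersionInformation info)
            {
                lock (Sync)   // writers take the same lock as readers
                {
                    Items.RemoveAll(t => t.ItemGuid == info.ItemGuid);
                    Items.Add(info);
                }
            }
        }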

    Read the article

  • F# - Facebook Hacker Cup - Double Squares

    - by Jacob
    I'm working on strengthening my F#-fu and decided to tackle the Facebook Hacker Cup Double Squares problem. I'm having some problems with the run-time and was wondering if anyone could help me figure out why it is so much slower than my C# equivalent. There's a good description from another post; Source: Facebook Hacker Cup Qualification Round 2011 A double-square number is an integer X which can be expressed as the sum of two perfect squares. For example, 10 is a double-square because 10 = 3^2 + 1^2. Given X, how can we determine the number of ways in which it can be written as the sum of two squares? For example, 10 can only be written as 3^2 + 1^2 (we don't count 1^2 + 3^2 as being different). On the other hand, 25 can be written as 5^2 + 0^2 or as 4^2 + 3^2. You need to solve this problem for 0 = X = 2,147,483,647. Examples: 10 = 1 25 = 2 3 = 0 0 = 1 1 = 1 My basic strategy (which I'm open to critique on) is to; Create a dictionary (for memoize) of the input numbers initialzed to 0 Get the largest number (LN) and pass it to count/memo function Get the LN square root as int Calculate squares for all numbers 0 to LN and store in dict Sum squares for non repeat combinations of numbers from 0 to LN If sum is in memo dict, add 1 to memo Finally, output the counts of the original numbers. Here is the F# code (See code changes at bottom) I've written that I believe corresponds to this strategy (Runtime: ~8:10); open System open System.Collections.Generic open System.IO /// Get a sequence of values let rec range min max = seq { for num in [min .. max] do yield num } /// Get a sequence starting from 0 and going to max let rec zeroRange max = range 0 max /// Find the maximum number in a list with a starting accumulator (acc) let rec maxNum acc = function | [] -> acc | p::tail when p > acc -> maxNum p tail | p::tail -> maxNum acc tail /// A helper for finding max that sets the accumulator to 0 let rec findMax nums = maxNum 0 nums /// Build a collection of combinations; ie [1,2,3] = (1,1), (1,2), (1,3), (2,2), (2,3), (3,3) let rec combos range = seq { let count = ref 0 for inner in range do for outer in Seq.skip !count range do yield (inner, outer) count := !count + 1 } let rec squares nums = let dict = new Dictionary<int, int>() for s in nums do dict.[s] <- (s * s) dict /// Counts the number of possible double squares for a given number and keeps track of other counts that are provided in the memo dict. let rec countDoubleSquares (num: int) (memo: Dictionary<int, int>) = // The highest relevent square is the square root because it squared plus 0 squared is the top most possibility let maxSquare = System.Math.Sqrt((float)num) // Our relevant squares are 0 to the highest possible square; note the cast to int which shouldn't hurt. let relSquares = range 0 ((int)maxSquare) // calculate the squares up front; let calcSquares = squares relSquares // Build up our square combinations; ie [1,2,3] = (1,1), (1,2), (1,3), (2,2), (2,3), (3,3) for (sq1, sq2) in combos relSquares do let v = calcSquares.[sq1] + calcSquares.[sq2] // Memoize our relevant results if memo.ContainsKey(v) then memo.[v] <- memo.[v] + 1 // return our count for the num passed in memo.[num] // Read our numbers from file. 
//let lines = File.ReadAllLines("test2.txt") //let nums = [ for line in Seq.skip 1 lines -> Int32.Parse(line) ] // Optionally, read them from straight array let nums = [1740798996; 1257431873; 2147483643; 602519112; 858320077; 1048039120; 415485223; 874566596; 1022907856; 65; 421330820; 1041493518; 5; 1328649093; 1941554117; 4225; 2082925; 0; 1; 3] // Initialize our memoize dictionary let memo = new Dictionary<int, int>() for num in nums do memo.[num] <- 0 // Get the largest number in our set, all other numbers will be memoized along the way let maxN = findMax nums // Do the memoize let maxCount = countDoubleSquares maxN memo // Output our results. for num in nums do printfn "%i" memo.[num] // Have a little pause for when we debug let line = Console.Read() And here is my version in C# (Runtime: ~1:40: using System; using System.Collections.Generic; using System.Diagnostics; using System.IO; using System.Linq; using System.Text; namespace FBHack_DoubleSquares { public class TestInput { public int NumCases { get; set; } public List<int> Nums { get; set; } public TestInput() { Nums = new List<int>(); } public int MaxNum() { return Nums.Max(); } } class Program { static void Main(string[] args) { // Read input from file. //TestInput input = ReadTestInput("live.txt"); // As example, load straight. TestInput input = new TestInput { NumCases = 20, Nums = new List<int> { 1740798996, 1257431873, 2147483643, 602519112, 858320077, 1048039120, 415485223, 874566596, 1022907856, 65, 421330820, 1041493518, 5, 1328649093, 1941554117, 4225, 2082925, 0, 1, 3, } }; var maxNum = input.MaxNum(); Dictionary<int, int> memo = new Dictionary<int, int>(); foreach (var num in input.Nums) { if (!memo.ContainsKey(num)) memo.Add(num, 0); } DoMemoize(maxNum, memo); StringBuilder sb = new StringBuilder(); foreach (var num in input.Nums) { //Console.WriteLine(memo[num]); sb.AppendLine(memo[num].ToString()); } Console.Write(sb.ToString()); var blah = Console.Read(); //File.WriteAllText("out.txt", sb.ToString()); } private static int DoMemoize(int num, Dictionary<int, int> memo) { var highSquare = (int)Math.Floor(Math.Sqrt(num)); var squares = CreateSquareLookup(highSquare); var relSquares = squares.Keys.ToList(); Debug.WriteLine("Starting - " + num.ToString()); Debug.WriteLine("RelSquares.Count = {0}", relSquares.Count); int sum = 0; var index = 0; foreach (var square in relSquares) { foreach (var inner in relSquares.Skip(index)) { sum = squares[square] + squares[inner]; if (memo.ContainsKey(sum)) memo[sum]++; } index++; } if (memo.ContainsKey(num)) return memo[num]; return 0; } private static TestInput ReadTestInput(string fileName) { var lines = File.ReadAllLines(fileName); var input = new TestInput(); input.NumCases = int.Parse(lines[0]); foreach (var lin in lines.Skip(1)) { input.Nums.Add(int.Parse(lin)); } return input; } public static Dictionary<int, int> CreateSquareLookup(int maxNum) { var dict = new Dictionary<int, int>(); int square; foreach (var num in Enumerable.Range(0, maxNum)) { square = num * num; dict[num] = square; } return dict; } } } Thanks for taking a look. UPDATE Changing the combos function slightly will result in a pretty big performance boost (from 8 min to 3:45): /// Old and Busted... let rec combosOld range = seq { let rangeCache = Seq.cache range let count = ref 0 for inner in rangeCache do for outer in Seq.skip !count rangeCache do yield (inner, outer) count := !count + 1 } /// The New Hotness... let rec combos maxNum = seq { for i in 0..maxNum do for j in i..maxNum do yield i,j }
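
    For comparison, the per-number cost can be cut to O(√X) with no memo dictionary at all by walking two square roots towards each other; this is a minimal C# sketch (not part of the original post, names invented), which avoids building every pairwise sum up to the largest input (roughly 46,341² / 2 pairs when X approaches Int32.MaxValue).

        using System;

        static class DoubleSquares
        {
            public static int Count(int x)
            {
                long low = 0;
                long high = (long)Math.Floor(Math.Sqrt((double)x));
                int count = 0;
                while (low <= high)
                {
                    long sum = low * low + high * high;   // long avoids overflow near Int32.MaxValue
                    if (sum == x) { count++; low++; high--; }
                    else if (sum < x) low++;              // sum too small: raise the lower square
                    else high--;                          // sum too large: lower the upper square
                }
                return count;                             // e.g. Count(10) == 1, Count(25) == 2, Count(0) == 1
            }
        }

    The same two-pointer loop translates naturally to F# as a tail-recursive function, which also sidesteps the Seq.skip-based combos sequence that appears to dominate the run time of the posted version.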

    Read the article

  • InputDispatcher Error

    - by StarDust
    INFO/ActivityManager(68): Process com.example (pid 390) has died. ERROR/InputDispatcher(68): channel '406ed580 com.example/com.example.afeTest (server)' ~ Consumer closed input channel or an error occurred. events=0x8 ERROR/InputDispatcher(68): channel '406ed580 com.example/com.example.afeTest (server)' ~ Channel is unrecoverably broken and will be disposed! ERROR/InputDispatcher(68): Received spurious receive callback for unknown input channel. fd=165, events=0x8 Can anyone tell what may be the reason behind this error? I've ported a native code on the Android-ndk. One thing I noticed regarding fd (that may be some reason :S) My code uses fd_sets which was defined in winsock2.h But I didn't find fd_sets defined in android-ndk. So I had included "select.h" where fd_set is a typedef in the android-ndk: typedef __kernel_fd_set fd_set; Here is the log cat: 04-06 11:15:32.405: INFO/DEBUG(31): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** 04-06 11:15:32.405: INFO/DEBUG(31): Build fingerprint: 'generic/sdk/generic:2.3.3/GRI34/101070:eng/test-keys' 04-06 11:15:32.415: INFO/DEBUG(31): pid: 335, tid: 348 >>> com.example <<< 04-06 11:15:32.426: INFO/DEBUG(31): signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr deadbaad 04-06 11:15:32.426: INFO/DEBUG(31): r0 deadbaad r1 0000000c r2 00000027 r3 00000000 04-06 11:15:32.445: INFO/DEBUG(31): r4 00000080 r5 afd46668 r6 0000a000 r7 00000078 04-06 11:15:32.445: INFO/DEBUG(31): r8 804ab00d r9 002a9778 10 00100000 fp 00000001 04-06 11:15:32.445: INFO/DEBUG(31): ip ffffffff sp 44295d10 lr afd19375 pc afd15ef0 cpsr 00000030 04-06 11:15:32.756: INFO/DEBUG(31): #00 pc 00015ef0 /system/lib/libc.so 04-06 11:15:32.756: INFO/DEBUG(31): #01 pc 00013852 /system/lib/libc.so 04-06 11:15:32.767: INFO/DEBUG(31): code around pc: 04-06 11:15:32.785: INFO/DEBUG(31): afd15ed0 68241c23 d1fb2c00 68dae027 d0042a00 04-06 11:15:32.785: INFO/DEBUG(31): afd15ee0 20014d18 6028447d 48174790 24802227 04-06 11:15:32.785: INFO/DEBUG(31): afd15ef0 f7f57002 2106eb56 ec92f7f6 0563aa01 04-06 11:15:32.796: INFO/DEBUG(31): afd15f00 60932100 91016051 1c112006 e818f7f6 04-06 11:15:32.807: INFO/DEBUG(31): afd15f10 2200a905 f7f62002 f7f5e824 2106eb42 04-06 11:15:32.815: INFO/DEBUG(31): code around lr: 04-06 11:15:32.815: INFO/DEBUG(31): afd19354 b0834a0d 589c447b 26009001 686768a5 04-06 11:15:32.825: INFO/DEBUG(31): afd19364 220ce008 2b005eab 1c28d003 47889901 04-06 11:15:32.836: INFO/DEBUG(31): afd19374 35544306 d5f43f01 2c006824 b003d1ee 04-06 11:15:32.836: INFO/DEBUG(31): afd19384 bdf01c30 000281a8 ffffff88 1c0fb5f0 04-06 11:15:32.846: INFO/DEBUG(31): afd19394 43551c3d a904b087 1c16ac01 604d9004 04-06 11:15:32.856: INFO/DEBUG(31): stack: 04-06 11:15:32.856: INFO/DEBUG(31): 44295cd0 00000408 04-06 11:15:32.867: INFO/DEBUG(31): 44295cd4 afd18407 /system/lib/libc.so 04-06 11:15:32.875: INFO/DEBUG(31): 44295cd8 afd4270c /system/lib/libc.so 04-06 11:15:32.875: INFO/DEBUG(31): 44295cdc afd426b8 /system/lib/libc.so 04-06 11:15:32.885: INFO/DEBUG(31): 44295ce0 00000000 04-06 11:15:32.896: INFO/DEBUG(31): 44295ce4 afd19375 /system/lib/libc.so 04-06 11:15:32.896: INFO/DEBUG(31): 44295ce8 804ab00d /data/data/com.example/lib/libAFE.so 04-06 11:15:32.896: INFO/DEBUG(31): 44295cec afd183d9 /system/lib/libc.so 04-06 11:15:32.906: INFO/DEBUG(31): 44295cf0 00000078 04-06 11:15:32.906: INFO/DEBUG(31): 44295cf4 00000000 04-06 11:15:32.906: INFO/DEBUG(31): 44295cf8 afd46668 04-06 11:15:32.906: INFO/DEBUG(31): 44295cfc 0000a000 [heap] 04-06 11:15:32.916: INFO/DEBUG(31): 44295d00 00000078 
04-06 11:15:32.927: INFO/DEBUG(31): 44295d04 afd18677 /system/lib/libc.so 04-06 11:15:32.927: INFO/DEBUG(31): 44295d08 df002777 04-06 11:15:32.945: INFO/DEBUG(31): 44295d0c e3a070ad 04-06 11:15:32.945: INFO/DEBUG(31): #00 44295d10 002c43a0 [heap] 04-06 11:15:32.945: INFO/DEBUG(31): 44295d14 002a9900 [heap] 04-06 11:15:32.956: INFO/DEBUG(31): 44295d18 afd46608 04-06 11:15:32.966: INFO/DEBUG(31): 44295d1c afd11010 /system/lib/libc.so 04-06 11:15:32.976: INFO/DEBUG(31): 44295d20 002c4298 [heap] 04-06 11:15:32.976: INFO/DEBUG(31): 44295d24 fffffbdf 04-06 11:15:33.006: INFO/DEBUG(31): 44295d28 000000da 04-06 11:15:33.006: INFO/DEBUG(31): 44295d2c afd46450 04-06 11:15:33.006: INFO/DEBUG(31): 44295d30 000001b4 04-06 11:15:33.026: INFO/DEBUG(31): 44295d34 afd13857 /system/lib/libc.so 04-06 11:15:33.026: INFO/DEBUG(31): #01 44295d38 afd46450 04-06 11:15:33.035: INFO/DEBUG(31): 44295d3c afd13857 /system/lib/libc.so 04-06 11:15:33.056: INFO/DEBUG(31): 44295d40 804ab00d /data/data/com.example/lib/libAFE.so 04-06 11:15:33.056: INFO/DEBUG(31): 44295d44 44295e8c 04-06 11:15:33.056: INFO/DEBUG(31): 44295d48 804ab00d /data/data/com.example/lib/libAFE.so 04-06 11:15:33.056: INFO/DEBUG(31): 44295d4c 804bfec3 /data/data/com.example/lib/libAFE.so 04-06 11:15:33.056: INFO/DEBUG(31): 44295d50 002c43a0 [heap] 04-06 11:15:33.066: INFO/DEBUG(31): 44295d54 44295e8c 04-06 11:15:33.066: INFO/DEBUG(31): 44295d58 804ab00d /data/data/com.example/lib/libAFE.so 04-06 11:15:33.076: INFO/DEBUG(31): 44295d5c 002a9778 [heap] 04-06 11:15:33.085: INFO/DEBUG(31): 44295d60 00000078 04-06 11:15:33.085: INFO/DEBUG(31): 44295d64 afd14769 /system/lib/libc.so 04-06 11:15:33.085: INFO/DEBUG(31): 44295d68 44295e8c 04-06 11:15:33.085: INFO/DEBUG(31): 44295d6c 805d9763 /data/data/com.example/lib/libAFE.so 04-06 11:15:33.085: INFO/DEBUG(31): 44295d70 44295e8c 04-06 11:15:33.085: INFO/DEBUG(31): 44295d74 8051dc35 /data/data/com.example/lib/libAFE.so 04-06 11:15:33.085: INFO/DEBUG(31): 44295d78 0000003a 04-06 11:15:33.085: INFO/DEBUG(31): 44295d7c 002a9900 [heap] 04-06 11:15:37.126: DEBUG/Zygote(33): Process 335 terminated by signal (11) 04-06 11:15:37.146: INFO/ActivityManager(68): Process com.example (pid 335) has died. 04-06 11:15:37.178: ERROR/InputDispatcher(68): channel '406f03a0 com.example/com.example.afeTest (server)' ~ Consumer closed input channel or an error occurred. events=0x8 04-06 11:15:37.178: ERROR/InputDispatcher(68): channel '406f03a0 com.example/com.example.afeTest (server)' ~ Channel is unrecoverably broken and will be disposed! 
04-06 11:15:37.185: INFO/BootReceiver(68): Copying /data/tombstones/tombstone_09 to DropBox (SYSTEM_TOMBSTONE) 04-06 11:15:37.576: DEBUG/dalvikvm(68): GC_FOR_MALLOC freed 266K, 47% free 4404K/8199K, external 3520K/3903K, paused 306ms 04-06 11:15:37.835: DEBUG/dalvikvm(68): GC_FOR_MALLOC freed 203K, 47% free 4457K/8391K, external 3520K/3903K, paused 120ms 04-06 11:15:37.886: INFO/WindowManager(68): WIN DEATH: Window{406f03a0 com.example/com.example.afeTest paused=false} 04-06 11:15:38.095: DEBUG/dalvikvm(68): GC_FOR_MALLOC freed 67K, 47% free 4518K/8391K, external 3511K/3903K, paused 94ms 04-06 11:15:38.095: INFO/dalvikvm-heap(68): Grow heap (frag case) to 10.575MB for 196628-byte allocation 04-06 11:15:38.126: DEBUG/dalvikvm(126): GC_EXPLICIT freed 110K, 51% free 2903K/5895K, external 4701K/5293K, paused 2443ms 04-06 11:15:38.217: DEBUG/dalvikvm(68): GC_FOR_MALLOC freed 1K, 46% free 4708K/8647K, external 3511K/3903K, paused 96ms 04-06 11:15:38.225: INFO/WindowManager(68): WIN DEATH: Window{406f72f8 com.example/com.example.afeTest paused=false} 04-06 11:15:38.405: DEBUG/dalvikvm(68): GC_FOR_MALLOC freed 492K, 50% free 4345K/8647K, external 3511K/3903K, paused 96ms 04-06 11:15:38.485: ERROR/InputDispatcher(68): Received spurious receive callback for unknown input channel. fd=164, events=0x8

    Read the article

  • How can I learn the file name and create a folder?

    - by Phsika
    i try to make TCP/Ip Application to listen any other roemote computer to recieve any file. So i try to get files. i can do that. on the other hand every sample on google about giving SaveDialogBox to recived path folder.Forexample my old server.cs is that: using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using System.IO; using System.Net; using System.Net.Sockets; using System.Threading; namespace Server3 { public partial class Form1 : Form { Thread kanal; public Form1() { InitializeComponent(); try { kanal = new Thread(new ThreadStart(Dinle)); kanal.Start(); kanal.Priority = ThreadPriority.Normal; this.Text = "Kanal Çalisti"; } catch (Exception ex) { this.Text = "kanal çalismadi"; MessageBox.Show("hata:" + ex.ToString()); kanal.Abort(); throw; } } void Dinle() { TcpListener server = null; try { Int32 port = 51124; IPAddress localAddr = IPAddress.Parse("127.0.0.1"); server = new TcpListener(localAddr, port); server.Start(); Byte[] bytes = new Byte[1024 * 250000]; // string ReceivedPath = "C:/recieved"; while (true) { MessageBox.Show("Waiting for a connection... "); TcpClient client = server.AcceptTcpClient(); MessageBox.Show("Connected!"); NetworkStream stream = client.GetStream(); if (stream.CanRead) { saveFileDialog1.ShowDialog(); string pathfolder = saveFileDialog1.FileName; StreamWriter yaz = new StreamWriter(pathfolder); string satir; StreamReader oku = new StreamReader(stream); while ((satir = oku.ReadLine()) != null) { satir = satir + (char)13 + (char)10; yaz.WriteLine(satir); } oku.Close(); yaz.Close(); client.Close(); } } } catch (SocketException e) { Console.WriteLine("SocketException: {0}", e); } finally { // Stop listening for new clients. server.Stop(); } Console.WriteLine("\nHit enter to continue..."); Console.Read(); } private void Form1_Load(object sender, EventArgs e) { Dinle(); } } } i want to give automatically folder without SAVEDIALOGBOX. Also i want to learn my file name om my stream.Like that: using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using System.IO; using System.Net; using System.Net.Sockets; using System.Threading; namespace Server3 { public partial class Form1 : Form { Thread kanal; public Form1() { InitializeComponent(); try { kanal = new Thread(new ThreadStart(Dinle)); kanal.Start(); kanal.Priority = ThreadPriority.Normal; this.Text = "Kanal Çalisti"; } catch (Exception ex) { this.Text = "kanal çalismadi"; MessageBox.Show("hata:" + ex.ToString()); kanal.Abort(); throw; } } void Dinle() { TcpListener server = null; try { Int32 port = 51124; IPAddress localAddr = IPAddress.Parse("127.0.0.1"); server = new TcpListener(localAddr, port); server.Start(); Byte[] bytes = new Byte[1024 * 250000]; string ReceivedPath = "C:/recieved"; while (true) { MessageBox.Show("Waiting for a connection... 
"); TcpClient client = server.AcceptTcpClient(); MessageBox.Show("Connected!"); NetworkStream stream = client.GetStream(); if (stream.CanRead) { saveFileDialog1.ShowDialog(); string pathfolder = " i have to give property creating path and want to learn file name"; StreamWriter yaz = new StreamWriter(pathfolder); string satir; StreamReader oku = new StreamReader(stream); while ((satir = oku.ReadLine()) != null) { satir = satir + (char)13 + (char)10; yaz.WriteLine(satir); } oku.Close(); yaz.Close(); client.Close(); } } } catch (SocketException e) { Console.WriteLine("SocketException: {0}", e); } finally { // Stop listening for new clients. server.Stop(); } Console.WriteLine("\nHit enter to continue..."); Console.Read(); } private void Form1_Load(object sender, EventArgs e) { Dinle(); } } } Also i need : FileStream fs; FileInfo fi = new FileInfo(@"c:/recieved"); if (fi.Exists) fs = new FileStream(fi.FullName, FileMode.Append); else fs = new FileStream(fi.FullName, FileMode.Create); StreamWriter yazici = new StreamWriter(fs); How can i do that. Creating C:/recieved if it does not exist. And how can i learn File name on my network stream sending File Name.

    Read the article

  • MVC 3 ModelView passing parameters between view & controller

    - by Tobias Vandenbempt
    I've been playing with MVC 3 in a test project and have the following issue. I have Group & Subscriber entities and those are coupled through a SubscriberGroup table. Using the DetailView of Group I open a view of SubscriberGroup containing all subscribers. This list has the option to filter. So far it all works, however when I call the AddToGroup method on the controller it fails. Specifically it goes into the method but doesn't pass the subscriberCheckedModels list. Am I doing something wrong? View: SubscriberGroup Index.aspx <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.master" Inherits="System.Web.Mvc.ViewPage<Mail.Models.SubscriberCheckedListViewModel>" %> … <h2 class="common-box-title"> Add Subscribers to Group</h2> <p> <% using (Html.BeginForm("Index", "SubscriberGroup")) { %> <input name="filter" id="filter" type="text" /> <input type="submit" value="Search" /> <%} %> </p> <% using (Html.BeginForm("AddToGroup", "SubscriberGroup", Model,FormMethod.Get, null)) { %> <fieldset> <div style="display: inline-block; width: 70%; vertical-align: top;"> <% if (Model.subscribers.Count() != 0) { %> <table class="hor-minimalist-b"> <tr> <th> Add To Group </th> <th> Full Name </th> <th> Email </th> <th> Customer </th> </tr> <% foreach (var item in Model.subscribers) { %> <tr> <td> <%= Html.CheckBoxFor(modelItem => item.AddToGroup)%> </td> <td> <%= Html.DisplayFor(modelItem => item.subscriber.LastName)%> <%= Html.ActionLink(item.subscriber.FirstName + " " + item.subscriber.LastName, "Details", new { id = item.subscriber.SubscriberID })%> </td> <td> <%: Html.DisplayFor(modelItem => item.subscriber.Email)%> </td> <td> <%: Html.DisplayFor(modelItem => item.subscriber.Customer.Company)%> <%= Html.HiddenFor(modelItem => item.subscriber) %> </td> </tr> <% } %> <% ViewBag.subscribers = Model.subscribers; %> probeersel <%= Html.HiddenFor(model => model.subscribers) %> probeersel </table> <%} %> <%else { %> <p> No subscribers found.</p> <%} %> <input type="submit" value="Add Subscribers" /> </div> </fieldset> <%} %> Controller: SubscriberGroupController using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Mvc; using System.Web.Security; using Mail.Models; namespace Mail.Controllers { public class SubscriberGroupController : Controller { private int groupID; private MailDBEntities db = new MailDBEntities(); // // GET: /SubscriberGroup/ public ActionResult Index(int id) { groupID = id; MembershipUser myObject = Membership.GetUser(); Guid UserID = Guid.Parse(myObject.ProviderUserKey.ToString()); UserCustomer usercustomer = db.UserCustomers.Single(s => s.UserID == UserID); var subscribers = from subscriber in db.Subscribers where (subscriber.CustomerID == usercustomer.CustomerID) | (subscriber.CustomerID == 0) select new SubscriberCheckedModel { subscriber = subscriber, AddToGroup = false }; SubscriberCheckedListViewModel test = new SubscriberCheckedListViewModel(); test.subscribers = subscribers; return View(test); } [HttpPost] public ActionResult Index(string filter) { MembershipUser myObject = Membership.GetUser(); Guid UserID = Guid.Parse(myObject.ProviderUserKey.ToString()); UserCustomer usercustomer = db.UserCustomers.Single(s => s.UserID == UserID); var subscribers2 = from subscriber in db.Subscribers where ((subscriber.FirstName.Contains(filter)|| subscriber.LastName.Contains(filter)) && (subscriber.CustomerID == usercustomer.CustomerID || subscriber.CustomerID == 0)) select new SubscriberCheckedModel { subscriber = subscriber, 
AddToGroup = false }; SubscriberCheckedListViewModel test = new SubscriberCheckedListViewModel(); test.subscribers = subscribers2.ToList(); return View(test); } [HttpPost] public ActionResult AddToGroup(SubscriberCheckedListViewModel test) { //test is null return RedirectToAction("Details", "Group", new { id = groupID }); } } } ViewModel: SubscriberGroupModel using System.Collections.Generic; using Mail; namespace Mail.Models { public class SubscriberCheckedModel { public Subscriber subscriber { get; set; } public bool AddToGroup { get; set; } } public class SubscriberCheckedListViewModel { public IEnumerable<SubscriberCheckedModel> subscribers { get; set; } } }
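
    The checked items do not arrive in AddToGroup because the default model binder can only rebuild a collection when the inputs carry indexed names (subscribers[0].AddToGroup, subscribers[0].subscriber.SubscriberID, and so on), which a foreach over an IEnumerable never produces; the form also needs to POST for the [HttpPost] action. A minimal sketch of the view change, assuming subscribers is switched from IEnumerable<SubscriberCheckedModel> to an IList<SubscriberCheckedModel> so it can be indexed (illustrative only, property names taken from the post):

        <%-- Render with a for loop so each input gets an indexed name the binder understands. --%>
        <% using (Html.BeginForm("AddToGroup", "SubscriberGroup", FormMethod.Post)) { %>
          <% for (int i = 0; i < Model.subscribers.Count; i++) { %>
            <%= Html.HiddenFor(m => m.subscribers[i].subscriber.SubscriberID) %>
            <%= Html.CheckBoxFor(m => m.subscribers[i].AddToGroup) %>
          <% } %>
          <input type="submit" value="Add Subscribers" />
        <% } %>

    Note that hiding a whole complex object (Html.HiddenFor(modelItem => item.subscriber)) does not round-trip; only simple properties such as the SubscriberID need to be posted back, and the controller can reload the rest from the database.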

    Read the article

  • C++/CLI HTTP Proxy problems...

    - by darkantimatter
    Hi, I'm trying(very hard) to make a small HTTP Proxy server which I can use to save all communications to a file. Seeing as I dont really have any experience in the area, I used a class from codeproject.com and some associated code to get started (It was made in the old CLI syntax, so I converted it). I couldn't get it working, so I added lots more code to make it work (threads etc), and now it sort of works. Basically, it recieves something from a client (I just configured Mozilla Firefox to route its connections through this proxy) and then routes it to google.com. After it sends Mozilla's data to google, recieves a responce, and sends that to Mozilla. This works fine, but then the proxy fails to recieve any data from Mozilla. It just loops in the Sleep(50) section. Anyway, heres the code: ProxyTest.cpp: #include "stdafx.h" #include "windows.h" #include "CHTTPProxy.h" public ref class ClientThread { public: System::Net::Sockets::TcpClient ^ pClient; CHttpProxy ^ pProxy; System::Int32 ^ pRecieveBufferSize; System::Threading::Thread ^ Thread; ClientThread(System::Net::Sockets::TcpClient ^ sClient, CHttpProxy ^ sProxy, System::Int32 ^ sRecieveBufferSize) { pClient = sClient; pProxy = sProxy; pRecieveBufferSize = sRecieveBufferSize; }; void StartReading() { Thread = gcnew System::Threading::Thread(gcnew System::Threading::ThreadStart(this,&ClientThread::ThreadEntryPoint)); Thread->Start(); }; void ThreadEntryPoint() { char * bytess; bytess = new char[(int)pRecieveBufferSize]; memset(bytess, 0, (int)pRecieveBufferSize); array<unsigned char> ^ bytes = gcnew array<unsigned char>((int)pRecieveBufferSize); array<unsigned char> ^ sendbytes; do { if (pClient->GetStream()->DataAvailable) { try { do { Sleep(100); //Lets wait for whole packet to get cached (If it even does...) unsigned int k = pClient->GetStream()->Read(bytes, 0, (int)pRecieveBufferSize); //Read it for(unsigned int i=0; i<(int)pRecieveBufferSize; i++) bytess[i] = bytes[i]; Console::WriteLine("Packet Received:\n"+gcnew System::String(bytess)); pProxy->SendToServer(bytes,pClient->GetStream()); //Now send it to google! pClient->GetStream()->Flush(); } while(pClient->GetStream()->DataAvailable); } catch (Exception ^ e) { break; } } else { Sleep(50); //It just loops here because it thinks mozilla isnt sending anything if (!(pClient->Connected)) break; }; } while (pClient->GetStream()->CanRead); delete [] bytess; pClient->Close(); }; }; int main(array<System::String ^> ^args) { System::Collections::Generic::Stack<ClientThread ^> ^ Clients = gcnew System::Collections::Generic::Stack<ClientThread ^>(); System::Net::Sockets::TcpListener ^ pTcpListener = gcnew System::Net::Sockets::TcpListener(8080); pTcpListener->Start(); System::Net::Sockets::TcpClient ^ pTcpClient; while (1) { pTcpClient = pTcpListener->AcceptTcpClient(); //Wait for client ClientThread ^ Client = gcnew ClientThread(pTcpClient, gcnew CHttpProxy("www.google.com.au", 80), pTcpClient->ReceiveBufferSize); //Make a new object for this client Client->StartReading(); //Start the thread Clients->Push(Client); //Add it to the list }; pTcpListener->Stop(); return 0; } CHTTPProxy.h, from http://www.codeproject.com/KB/IP/howtoproxy.aspx with a lot of modifications: //THIS FILE IS FROM http://www.codeproject.com/KB/IP/howtoproxy.aspx. I DID NOT MAKE THIS! BUT I HAVE MADE SEVERAL MODIFICATIONS! 
#using <mscorlib.dll> #using <SYSTEM.DLL> using namespace System; using System::Net::Sockets::TcpClient; using System::String; using System::Exception; using System::Net::Sockets::NetworkStream; #include <stdio.h> ref class CHttpProxy { public: CHttpProxy(System::String ^ szHost, int port); System::String ^ m_host; int m_port; void SendToServer(array<unsigned char> ^ Packet, System::Net::Sockets::NetworkStream ^ sendstr); }; CHttpProxy::CHttpProxy(System::String ^ szHost, int port) { m_host = gcnew System::String(szHost); m_port = port; } void CHttpProxy::SendToServer(array<unsigned char> ^ Packet, System::Net::Sockets::NetworkStream ^ sendstr) { TcpClient ^ tcpclnt = gcnew TcpClient(); try { tcpclnt->Connect(m_host,m_port); } catch (Exception ^ e ) { Console::WriteLine(e->ToString()); return; } // Send it if ( tcpclnt ) { NetworkStream ^ networkStream; networkStream = tcpclnt->GetStream(); int size = Packet->Length; networkStream->Write(Packet, 0, size); array<unsigned char> ^ bytes = gcnew array<unsigned char>(tcpclnt->ReceiveBufferSize); char * bytess = new char[tcpclnt->ReceiveBufferSize]; Sleep(500); //Wait for responce do { unsigned int k = networkStream->Read(bytes, 0, (int)tcpclnt->ReceiveBufferSize); //Read from google for(unsigned int i=0; i<k; i++) { bytess[i] = bytes[i]; if (bytess[i] == 0) bytess[i] = ' '; //Dont terminate the string if (bytess[i] < 8) bytess[i] = ' '; //Somethings making the computer beep, and its not 7?!?! }; Console::WriteLine("\n\nAbove packet sent to google. Google Packet Received:\n"+gcnew System::String(bytess)); sendstr->Write(bytes,0,k); //Send it to mozilla Console::WriteLine("\n\nAbove packet sent to client..."); //Sleep(1000); } while(networkStream->DataAvailable); delete [] bytess; } return; } Any help would be much appreciated, I've tried for hours.
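
    One likely reason the proxy goes quiet after the first exchange is that it only reads when DataAvailable is true; DataAvailable reports whether bytes are buffered right now, so a keep-alive connection on which the browser is simply waiting looks "idle" forever. A minimal C# sketch of the alternative (the post is C++/CLI, but the .NET types are the same; Relay and Pump are invented names): run one blocking copy loop per direction and stop when either side closes, writing the bytes to a log file inside Pump if a capture is wanted.

        using System.Net.Sockets;
        using System.Threading;

        static void Relay(TcpClient browser, TcpClient server)
        {
            NetworkStream fromBrowser = browser.GetStream();
            NetworkStream fromServer = server.GetStream();

            Thread upstream = new Thread(() => Pump(fromBrowser, fromServer));   // browser -> server
            Thread downstream = new Thread(() => Pump(fromServer, fromBrowser)); // server -> browser
            upstream.Start();
            downstream.Start();
            upstream.Join();
            downstream.Join();

            browser.Close();
            server.Close();
        }

        static void Pump(NetworkStream source, NetworkStream destination)
        {
            byte[] buffer = new byte[8192];
            int read;
            try
            {
                // Read blocks until data arrives or the peer closes the stream (returns 0).
                while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                    destination.Write(buffer, 0, read);
            }
            catch (System.IO.IOException)
            {
                // One side dropped the connection; stop pumping.
            }
        }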

    Read the article

  • Cannot log in to Ubuntu 14.04: returns to the login screen

    - by Safder
    I updated to 14.04 from 12.04.03. I was using cinammon at that time. When I upgraded to 14.04 ,I got a serious problem. I was unable to login, as whenever I hit enter after entering password, it is taking me straight to login screen again.I unable to login like Guest even. So, here is my .xsession-errors file. possibly anyone could help. Thanks in advance. Script for ibus started at run_im. Script for auto started at run_im. Script for default started at run_im. init: at-spi2-registryd main process ended, respawning init: at-spi2-registryd main process ended, respawning init: at-spi2-registryd main process ended, respawning init: at-spi2-registryd main process ended, respawning init: at-spi2-registryd main process ended, respawning init: at-spi2-registryd main process ended, respawning init: at-spi2-registryd main process ended, respawning init: at-spi2-registryd main process ended, respawning init: at-spi2-registryd main process ended, respawning init: at-spi2-registryd main process ended, respawning init: at-spi2-registryd respawning too fast, stopped init: gnome-settings-daemon main process (14717) killed by TRAP signal init: gnome-settings-daemon main process ended, respawning init: update-notifier-crash (/var/crash/_usr_bin_gnome-session.1000.crash) main process (14607) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_lib_ibus_ibus-x11.1000.crash) main process (14623) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_lib_x86_64-linux-gnu_bamf_bamfdaemon.1000.crash) main process (14663) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_lib_ibus_ibus-ui-gtk3.1000.crash) main process (14620) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_lib_gnome-session_gnome-session-check-accelerated.1000.crash) main process (14612) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_lib_gnome-settings-daemon_gnome-settings-daemon.1000.crash) main process (14616) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_lib_unity_unity-panel-service.1000.crash) main process (14629) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_lib_unity-settings-daemon_unity-settings-daemon.1000.crash) main process (14625) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_lib_update-notifier_system-crash-notification.1000.crash) main process (14662) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_lib_gnome-session_gnome-session-check-accelerated.127.crash) main process (14613) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_share_apport_apport-gtk.1000.crash) main process (14665) terminated with status 133 init: update-notifier-crash (/var/crash/linux-image-3.13.0-24-generic.0.crash) main process (14606) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_bin_Xorg.0.crash) main process (14609) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_lib_ibus_ibus-x11.127.crash) main process (14624) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_lib_ibus_ibus-ui-gtk3.127.crash) main process (14622) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_lib_gnome-settings-daemon_gnome-settings-daemon.127.crash) main process (14618) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_lib_unity-settings-daemon_unity-settings-daemon.127.crash) main process (14628) terminated with status 133 init: 
update-notifier-crash (/var/crash/_usr_lib_x86_64-linux-gnu_bamf_bamfdaemon.127.crash) main process (14664) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_bin_gnome-session.127.crash) main process (14608) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_lib_unity_unity-panel-service.127.crash) main process (14634) terminated with status 133 init: update-notifier-crash (/var/crash/_usr_share_apport_apport-gtk.127.crash) main process (14667) terminated with status 133 init: gnome-settings-daemon main process (14843) killed by TRAP signal init: gnome-settings-daemon main process ended, respawning init: gnome-settings-daemon main process (14850) killed by TRAP signal init: gnome-settings-daemon main process ended, respawning init: gnome-session (GNOME) main process (14727) killed by TRAP signal init: gnome-settings-daemon main process (14859) killed by TRAP signal init: logrotate main process (14570) killed by TERM signal init: upstart-dbus-session-bridge main process (14683) terminated with status 1 init: Disconnected from notified D-Bus bus

    Read the article

  • Custom ASP.Net MVC 2 ModelMetadataProvider for using custom view model attributes

    - by SeanMcAlinden
    There are a number of ways to implement a pattern for using custom view model attributes; the following is similar to something I’m using at work, and it works pretty well. The classes I’m going to create are really simple: 1. An abstract base attribute 2. A custom ModelMetadata provider which derives from DataAnnotationsModelMetadataProvider   Base Attribute MetadataAttribute using System; using System.Web.Mvc; namespace Mvc2Templates.Attributes { /// <summary> /// Base class for custom MetadataAttributes. /// </summary> public abstract class MetadataAttribute : Attribute { /// <summary> /// Method for processing custom attribute data. /// </summary> /// <param name="modelMetaData">A ModelMetaData instance.</param> public abstract void Process(ModelMetadata modelMetaData); } } As you can see, the class simply has one method – Process. Process accepts the ModelMetadata, which allows any derived custom attribute to set properties on the model metadata and add items to its AdditionalValues collection.   Custom Model Metadata Provider For a quick explanation of the model metadata and how it fits into the MVC 2 framework: it is basically a set of properties that are usually set via attributes placed above properties on a view model, for example the ReadOnly and HiddenInput attributes. When EditorForModel, DisplayForModel or any of the other EditorFor/DisplayFor methods are called, the ModelMetadata information is used to determine how to display the properties. All of the information available within the model metadata is also available through ViewData.ModelMetadata. The following class derives from the DataAnnotationsModelMetadataProvider built into the MVC 2 framework. I’ve overridden the CreateMetadata method in order to process any custom attributes that may have been placed above a property in a view model.   CustomModelMetadataProvider using System; using System.Collections.Generic; using System.Linq; using System.Web.Mvc; using Mvc2Templates.Attributes; namespace Mvc2Templates.Providers { public class CustomModelMetadataProvider : DataAnnotationsModelMetadataProvider { protected override ModelMetadata CreateMetadata( IEnumerable<Attribute> attributes, Type containerType, Func<object> modelAccessor, Type modelType, string propertyName) { var modelMetadata = base.CreateMetadata(attributes, containerType, modelAccessor, modelType, propertyName); attributes.OfType<MetadataAttribute>().ToList().ForEach(x => x.Process(modelMetadata)); return modelMetadata; } } } As you can see, once the model metadata is created through the base method, a check is made for any attributes deriving from our new abstract base attribute MetadataAttribute; the Process method is then called on each of them, with the model metadata for the property passed in.   Hooking it up The last thing you need to do is set the new CustomModelMetadataProvider as the current ModelMetadataProvider; this is done within the Global.asax Application_Start method.
Global.asax protected void Application_Start() { AreaRegistration.RegisterAllAreas(); RegisterRoutes(RouteTable.Routes); ModelMetadataProviders.Current = new CustomModelMetadataProvider(); } In my next post, I’m going to demonstrate a cool custom attribute that turns a textbox into an AJAX-driven AutoComplete text box. Hope this is useful. Kind Regards, Sean McAlinden.
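    To make the pattern concrete, here is a minimal sketch of what a derived attribute could look like. This is not from Sean's post: the TooltipAttribute name, the "Tooltip" key and the sample property usage are illustrative assumptions, but the shape follows the MetadataAttribute base class above.

    using System.Web.Mvc;

    namespace Mvc2Templates.Attributes
    {
        /// <summary>
        /// Hypothetical attribute that pushes tooltip text into ModelMetadata.AdditionalValues.
        /// </summary>
        public class TooltipAttribute : MetadataAttribute
        {
            private readonly string tooltip;

            public TooltipAttribute(string tooltip)
            {
                this.tooltip = tooltip;
            }

            public override void Process(ModelMetadata modelMetaData)
            {
                // The "Tooltip" key is arbitrary; a template or HTML helper can read it back
                // later via ViewData.ModelMetadata.AdditionalValues["Tooltip"].
                modelMetaData.AdditionalValues.Add("Tooltip", this.tooltip);
            }
        }
    }

    On a view model you would then decorate a property with [Tooltip("Enter your full name")] (again, just an illustrative usage), and the custom provider above takes care of calling Process when the metadata is created.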

    Read the article

  • DESIGNING FOR WIN PHONE 7

    Designing applications for Win Phone 7 is very similar to designing for print. In my opinion, it feels like a cross between a tri-fold brochure and a poster. I based my prototype designs on Microsoft's Metro style guide, with typography as the main focus and stunning imagery for support. It's nice to have fixed factors regulating the design, making it a fun and fresh design experience. Microsoft provides a UI Design Guidelines document that outlines layout sizes, background image size, recommended typefaces and spacing. You know what you are designing for, and you know how it will look and act on the Win Phone 7 platform. Although applications are not required to strictly adhere to the Metro style guide, I feel it makes the best use of the panorama view and navigation. With strong examples of this UI concept in place, like their Zune-like music + videos hub, I found it fairly easy to put together a few quick app mockups (see below). In addition to the design guidelines, using ready-built design templates or a Win Phone 7 specific panorama control like the one by Clarity Consulting will make the process of bringing your designs to life much more efficient. Likes, Dislikes, and Challenges I think the idea of the hub is completely intuitive. This concept clearly breaks down info into more manageable pieces, and greatly helps with organization when designing for the phone. I like the chromeless appearance, allowing the core functionality of the application to take precedence over gradients, textures, bevels, drop shadows, and the complicated animations you see on the web. Although I understand the Win Phone 7 guidelines are a work in progress, I found a few contradictions. I also noticed that certain design specifications did not translate well to the phone emulator. If you use their guidelines as suggested best practices and not as fixed definitions, you will have more success. Multi-directional vs Linear The main challenge I had was stepping away from familiar navigational examples seen in other mobile phones. I had to keep reminding myself that the content to the right and to the left of what I was working on didn't necessarily have to have a direct link to one another. I started thinking multi-directional as opposed to linear. Win Phone 7 vs iPhone The Metro styling of Win Phone 7 is similar to the Zune HD and the Windows Media Center UI, and offers a different interface paradigm than the iPhone. When navigating an application it feels like you are panning a long, seamless page of information, in contrast to the multiple panels of an iPhone. I think there is less of an opportunity to overdesign your application, which happens often with iPhone applications. While both interfaces are simple and sleek, Win Phone 7 really gets down to the basics. The iPhone sets a high standard for designing for touch; designing for Win Phone 7 could improve on that user experience with a consistent and strategic use of white space and by staying away from a menu- and icon-heavy UI.
Design Examples for Win Phone 7 Applications Here are some concepts for both generic and brand-specific applications for Win Phone 7 (the full album is available in the original post). Resources to get you going with your own Win Phone 7 design: Helpful design templates for Win Phone 7: http://www.shazaml.com/archives/windows-phone-7-ui-templates The interaction design guide for Win Phone 7: http://go.microsoft.com/?linkid=9713252 Windows has a project template for Blend 4 and Visual Studio 2010 RC1: http://developer.windowsphone.com/ Clarity Consulting developed a panorama control for Win Phone 7: http://blogs.claritycon.com/blogs/design/archive/2010/03/30/building-the-elusive-windows-phone-panorama-control.aspx

    Read the article

  • Dependency Replication with TFS 2010 Build

    - by Jakob Ehn
    Some time ago, I wrote a post about how to implement dependency replication using TFS 2008 Build. We use this for Library builds, where we set up a build definition for a common library, and have the build check the resulting assemblies back into source control. The folder is then branched to the applications that need to reference the common library. See the above post for more details. Of course, we have reimplemented this feature in TFS 2010 Build, which results in a much nicer experience for the developer who wants to setup a new library build. Here is how it looks: There is a separate build process template for library builds registered in all team projects The following properties are used to configure the library build: Deploy Folder in Source Control is the server path where the assemblies should be checked in DeploymentFiles is a list of files and/or extensions to what files to check in. Default here is *.dll;*.pdb which means that all assemblies and debug symbols will be checked in. We can also type for example CommonLibrary.*;SomeOtherAssembly.dll in order to exclude other assemblies You can also see that we are versioning the assemblies as part of the build. This is important, since the resulting assemblies will be deployed together with the referencing application.   When the build executes, it will see of the matching assemblies exist in source control, if not, it will add the files automatically:   After the build has finished, we can see in the history of the TestDeploy folder that the build service account has in fact checked in a new version: Nice!   The implementation of the library build process template is not very complicated, it is a combination of customization of the build process template and some custom activities. We use the generic TFActivity (http://geekswithblogs.net/jakob/archive/2010/11/03/performing-checkins-in-tfs-2010-build.aspx) to check in and out files, but for the part that checks if a file exists and adds it to source control, it was easier to do this in a custom activity:   public sealed class AddFilesToSourceControl : BaseCodeActivity { // Files to add to source control [RequiredArgument] public InArgument<IEnumerable<string>> Files { get; set; } [RequiredArgument] public InArgument<Workspace> Workspace { get; set; } // If your activity returns a value, derive from CodeActivity<TResult> // and return the value from the Execute method. protected override void Execute(CodeActivityContext context) { foreach (var file in Files.Get(context)) { if (!File.Exists(file)) { throw new ApplicationException("Could not locate " + file); } var ws = this.Workspace.Get(context); string serverPath = ws.TryGetServerItemForLocalItem(file); if( !String.IsNullOrEmpty(serverPath)) { if (!ws.VersionControlServer.ServerItemExists(serverPath, ItemType.File)) { TrackMessage(context, "Adding file " + file); ws.PendAdd(file); } else { TrackMessage(context, "File " + file + " already exists in source control"); } } else { TrackMessage(context, "No server path for " + file); } } } } This build template is a very nice tool that makes it easy to do dependency replication with TFS 2010. Next, I will add funtionality for automatically merging the assemblies (using ILMerge) as part of the build, we do this to keep the number of references to a minimum.
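    As a side note, if you want to experiment with the same pend-add-and-check-in idea outside of a build workflow, a rough sketch against the TFS 2010 client API could look like the following. The collection URL, local path and check-in comment are placeholders, and this is only an illustration of the PendAdd/CheckIn calls, not Jakob's actual build activity.

    using System;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.VersionControl.Client;

    class CheckInSketch
    {
        static void Main()
        {
            // Placeholder values - point these at your own collection and a workspace-mapped file.
            var collection = new TfsTeamProjectCollection(new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
            var versionControl = collection.GetService<VersionControlServer>();

            string localFile = @"C:\Builds\Drop\CommonLibrary.dll";
            Workspace workspace = versionControl.GetWorkspace(localFile);

            string serverPath = workspace.TryGetServerItemForLocalItem(localFile);
            if (!versionControl.ServerItemExists(serverPath, ItemType.File))
            {
                // Pend an add for a file that is not yet in source control, then check it in.
                workspace.PendAdd(localFile);
                workspace.CheckIn(workspace.GetPendingChanges(), "Checking in build output (sketch)");
            }
        }
    }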

    Read the article

  • Required Skill Sets Of A Software Architect

    The question has been asked: what are the required skill sets of a software architect? The answer is that it truly depends. It depends on the organization, the industry, and the skill sets available on the open market and internally within a company. With open-ended skill sets, even Napoleon Dynamite could be an architect. Napoleon Dynamite's Skills Pedro: Have you asked anybody yet? Napoleon Dynamite: No, but who would? I don't even have any good skills. Pedro: What do you mean? Napoleon Dynamite: You know, like nunchuck skills, bow hunting skills, computer hacking skills... Girls only want boyfriends who have great skills. Pedro: Aren't you pretty good at drawing, like animals and warriors and stuff? This example might be a little off base, but it does illustrate a point. What are the real required skills of a software architect? In my opinion, an architect needs to demonstrate knowledge in the following three main skill set categories in order to be successful. General Skill Sets of an Architect Basic Engineering Skills Organizational Skills Interpersonal Skills Basic Engineering Skills are a very large part of what a software architect deals with on a daily basis when designing or updating systems. Think about it: how good would a lead mechanic be if they did not know how to fix or repair cars? Not very good, and that is my point: architects need at least some basic engineering skills. The skills listed below are generic in nature because they change from job to job, so in this discussion I am trying to focus more on generalities so that anyone can apply this information to their individual situation. Common Basic Engineering Skills Data Modeling Code Creation Configuration Testing Deployment/Publishing System and Environment Knowledge Organizational Skills If an architect works for or with an organization, then they will need strong organizational skills to survive. An architect is of no use to a project if the project is mismanaged. Additionally, budgets and timelines can really affect a company and its products when established deadlines are repeatedly not met. By missing these timelines, a company is forced either to cancel the project and waste all the money and time spent, or to keep spending money until the project is completed, if it is ever completed. Common Organizational Skills Project Management Estimation (Cost and Time) Creation and Maintenance of Accepted Standards Interpersonal Skills For me personally, interpersonal skills rank above the other skill set categories, because an architect can quickly pick up the other two by communicating with other team/project members and getting up to speed on a project. Additionally, in order for an architect to manage a project or even derive rough estimates, they will more than likely have to consult with the people actually working on the code (programmers/software engineers) to get their estimates, since those are the people who will actually implement the changes. Common Interpersonal Skills Good communicator Focuses on the project's success over personal success Honors roles within a team Reference: Taylor, R. N., Medvidovic, N., & Dashofy, E. M. (2009). Software architecture: Foundations, theory, and practice. Hoboken, NJ: John Wiley & Sons.

    Read the article

  • cannot make ubuntu 64-bit v12.04 install work

    - by honestann
    I decided it was time to update my ubuntu (single boot) computer from 64-bit v10.04 to 64-bit v12.04. Unfortunately, for some reason (or reasons) I just can't make it work. Note that I am attempting a fresh install of 64-bit v12.04 onto a new 3TB hard disk, not an upgrade of the 1TB hard disk that contains my working 64-bit v10.04 installation. To perform the attempted install of v12.04 I unplug the SATA cable from the 1TB drive and plug it into the 3TB drive (to avoid risking damage to my working v10.04 installation). I downloaded the ubuntu 64-bit v12.04 install DVD ISO file (~1.6 GB) from the ubuntu releases webpage and burned it onto a DVD. I have downloaded the DVD ISO file 3 times and burned 3 of these installation DVDs (twice with v10.04 and once with my winxp64 system), but none of them work. I run the "check disk" on the DVDs at the beginning of the installation process to assure the DVD is valid. When installation completes and the system boots the 3TB drive, it reports "unknown filesystem". After installation on the 250GB drives, the system boots up fine. During every install I plug the same SATA cable (sda) into only one disk drive (the 3TB or one of the 250GB drives) and leave the other disk drives unconnected (for simplicity). It is my understanding that 64-bit ubuntu (and 64-bit linux in general) has no problem with 3TB disk drives. In the BIOS I have tried having EFI set to "enabled" and "auto" with no apparent difference (no success). I never bothered setting the BIOS to "non-EFI". I have tried partitioning the drive in a few ways to see if that makes a difference, but so far it has not mattered. Typically I manually create partitions something like this: 8GB /boot ext4 8GB swap 3TB / ext4 But I've also tried the following, just in case it matters: 8GB boot efi 8GB swap 8GB /boot ext4 3TB / ext4 Note: In the partition dialog I specify bootup on the same drive I am partitioning and installing ubuntu v12.04 onto. It is a VERY DANGEROUS FACT that the default for this always comes up with the wrong drive (some other drive, generally the external drive). Unless I'm stupid or misunderstanding something, this is very wrong and very dangerous default behavior. Note: If I connect the SATA cable to the 1TB drive that has been my ubuntu 64-bit v10.04 system drive for the past 2 years, it boots up and runs fine. I guess there must be a log file somewhere, and maybe it gives some hints as to what the problem is. I should be able to boot off the 1TB drive with the 3TB drive connected as a secondary (non-boot) drive and get the log file, assuming there is one and someone tells me the name (and where to find it if the name is very generic). After installation on the 3TB drive completes and the system reboots, the following prints out on a black screen: Loading Operating System ... Boot from CD/DVD : Boot from CD/DVD : error: unknown filesystem grub rescue> Note: I have two DVD burners in the system, hence the duplicate line above. Note: I install and boot 64-bit ubuntu v12.04 on both of my 250GB in this same system, but still cannot make the 3TB drive boot. Sigh. Any ideas? 
========== motherboard == gigabyte 990FXA-UD7 CPU == AMD FX-8150 8-core bulldozer @ 3.6 GHz RAM == 8GB of DDR3 in 2 sticks (matched pair) HDD == seagate 3TB SATA3 @ 7200 rpm (new install 64-bit v12.04 FAILS) HDD == seagate 1TB SATA3 @ 7200 rpm (64-bit v10.04 WORKS for two years) HDD == seagate 250GB SATA2 @ 7200 rpm (new install 64-bit v12.04 WORKS) HDD == seagate 250GB SATA2 @ 7200 rpm (new install 64-bit v12.04 WORKS) GPU == nvidia GTX-285 ??? == no overclocking or other funky business USB == external seagate 2TB HDD for making backups DVD == one bluray burner (SATA) DVD == one DVD burner (SATA) 64-bit ubuntu v10.04 has booted and run fine on the seagate 1TB drive for 2 years.

    Read the article

  • SQL SERVER – SOS_SCHEDULER_YIELD – Wait Type – Day 8 of 28

    - by pinaldave
    This is a very interesting wait type and quite often seen as one of the top wait types. Let us discuss this today. From Books Online: Occurs when a task voluntarily yields the scheduler for other tasks to execute. During this wait the task is waiting for its quantum to be renewed. SOS_SCHEDULER_YIELD Explanation: SQL Server has multiple threads, and the basic working methodology for SQL Server is that it does not let any "runnable" thread starve. Now let us assume the SQL Server OS is very busy running threads on all the schedulers. There are always new threads coming up which are ready to run (in other words, runnable). Thread management is decided by SQL Server and not by the operating system. SQL Server runs in non-preemptive mode most of the time, meaning the threads are co-operative and can let other threads run from time to time by yielding. When any thread yields itself for another thread, it creates this wait. If there are more threads, it clearly indicates that the CPU is under pressure. You can run the following DMV to see how many runnable tasks there are in your system. SELECT scheduler_id, current_tasks_count, runnable_tasks_count, work_queue_count, pending_disk_io_count FROM sys.dm_os_schedulers WHERE scheduler_id < 255 GO If you notice a two-digit number in runnable_tasks_count continuously for a long time (not just once in a while), you will know that there is CPU pressure. A two-digit number is usually considered a bad thing; you can read the description of the above DMV over here. Additionally, there are several other counters (% Processor Time and other processor-related counters) that you can refer to in order to validate CPU pressure along with the method explained above. Reducing SOS_SCHEDULER_YIELD wait: This is the trickiest part of this procedure. As discussed, this particular wait type relates to CPU pressure. Adding more CPU is the solution in simple terms; however, it is not easy to implement. There are other things that you can consider when this wait type is very high. Here is a query that finds the most expensive CPU-related queries from the cache. Note: a query that used lots of resources but is not cached will not be caught here. SELECT SUBSTRING(qt.TEXT, (qs.statement_start_offset/2)+1, ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(qt.TEXT) ELSE qs.statement_end_offset END - qs.statement_start_offset)/2)+1), qs.execution_count, qs.total_logical_reads, qs.last_logical_reads, qs.total_logical_writes, qs.last_logical_writes, qs.total_worker_time, qs.last_worker_time, qs.total_elapsed_time/1000000 total_elapsed_time_in_S, qs.last_elapsed_time/1000000 last_elapsed_time_in_S, qs.last_execution_time, qp.query_plan FROM sys.dm_exec_query_stats qs CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp ORDER BY qs.total_worker_time DESC -- CPU time You can find the most expensive queries that are utilizing lots of CPU (from the cache) and tune them accordingly. Moreover, you can find the longest-running queries and attempt to tune them if there is any processor-offending code. Additionally, pay attention to total_worker_time, because if that is also consistently high, the CPU is under too much pressure. You can also check the perfmon counters for compilations, as they tend to use a good amount of CPU.
An index rebuild is also a CPU-intensive process, but we should not treat it as the main cause of this wait, because index rebuilds are indeed needed on high-transaction OLTP systems to reduce fragmentation. Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Books Online for further clarification. All of the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
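    If you want to keep an eye on runnable_tasks_count over time from application code rather than from Management Studio, a small polling sketch along the following lines can help you spot sustained two-digit values. The connection string, sample count and interval are assumptions; adjust them for your environment.

    using System;
    using System.Data.SqlClient;
    using System.Threading;

    class SchedulerPressureMonitor
    {
        // Placeholder connection string - point it at the instance you want to watch.
        const string ConnectionString = "Server=.;Database=master;Integrated Security=true";

        static void Main()
        {
            const string query =
                "SELECT scheduler_id, runnable_tasks_count, current_tasks_count " +
                "FROM sys.dm_os_schedulers WHERE scheduler_id < 255";

            using (var connection = new SqlConnection(ConnectionString))
            {
                connection.Open();
                for (int sample = 0; sample < 10; sample++) // ten samples, five seconds apart
                {
                    using (var command = new SqlCommand(query, connection))
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine("scheduler {0}: runnable={1}, current={2}",
                                reader.GetInt32(0), reader.GetInt32(1), reader.GetInt32(2));
                        }
                    }
                    Thread.Sleep(TimeSpan.FromSeconds(5));
                }
            }
        }
    }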

    Read the article

  • Installer Reboots at "Detecting hardware" (disks and other hardware) on all recent Server Installs

    - by Ryan Rosario
    I have a very frustrating problem with my PC. I cannot install any recent version of Ubuntu Server (or even Desktop) since 9.04 even using the text-based installer. I boot from a USB stick created by Unetbootin (I also tried other methods such as startup disk creator with no difference). On the Server installer, it gets to "Detecting Hardware" (the second one about disks and all other hardware, not network hardware) and then either hangs at 0% (waited 24 hours), or reboots after a minute or two. My system (late 2007): ASUS P5NSLI motherboard Intel Core 2 Duo E6600 2.4Ghz 2 x 1GB Corsair 667MHz RAM nVidia GeForce 6600 I have unplugged everything (including the only hard disk, CD-ROMs and floppy). I have only one stick of RAM (tried each one to no avail) and am booting the installer from a USB stick (booting from CD-ROM yields the same problem). I also tried several of the boot options (nomodeset, nousb, acpi=off, noapic, i915.modeset=1/0, xforcevesa) in all combinations) to no avail. The only active parts of my system are the video card, mouse, keyboard and USB stick. I have also updated the BIOS to the most recent version. (FWIW, on the Desktop installer, I get a black screen after hitting the Install option.) Even after removing "quiet" I am unable to see what kernel panic is occurring (or not occurring) to cause the install to crash. I am only able to save the debug logs via a simple webserver in the installer. After the last line (I repeatedly refreshed), the server stops responding and the installer hangs or reboots: Jan 2 01:04:03 main-menu[302]: INFO: Menu item 'disk-detect' selected Jan 2 01:04:04 kernel: [ 309.154372] sata_nv 0000:00:0e.0: version 3.5 Jan 2 01:04:04 kernel: [ 309.154409] sata_nv 0000:00:0e.0: Using SWNCQ mode Jan 2 01:04:04 kernel: [ 309.154531] sata_nv 0000:00:0e.0: setting latency timer to 64 Jan 2 01:04:04 kernel: [ 309.164442] scsi0 : sata_nv Jan 2 01:04:04 kernel: [ 309.167610] scsi1 : sata_nv Jan 2 01:04:04 kernel: [ 309.167762] ata1: SATA max UDMA/133 cmd 0x9f0 ctl 0xbf0 bmdma 0xd400 irq 10 Jan 2 01:04:04 kernel: [ 309.167774] ata2: SATA max UDMA/133 cmd 0x970 ctl 0xb70 bmdma 0xd408 irq 10 Jan 2 01:04:04 kernel: [ 309.167948] sata_nv 0000:00:0f.0: Using SWNCQ mode Jan 2 01:04:04 kernel: [ 309.168071] sata_nv 0000:00:0f.0: setting latency timer to 64 Jan 2 01:04:04 kernel: [ 309.171931] scsi2 : sata_nv Jan 2 01:04:04 kernel: [ 309.173793] scsi3 : sata_nv Jan 2 01:04:04 kernel: [ 309.173943] ata3: SATA max UDMA/133 cmd 0x9e0 ctl 0xbe0 bmdma 0xe800 irq 11 Jan 2 01:04:04 kernel: [ 309.173954] ata4: SATA max UDMA/133 cmd 0x960 ctl 0xb60 bmdma 0xe808 irq 11 Jan 2 01:04:04 kernel: [ 309.174061] pata_amd 0000:00:0d.0: version 0.4.1 Jan 2 01:04:04 kernel: [ 309.174160] pata_amd 0000:00:0d.0: setting latency timer to 64 Jan 2 01:04:04 kernel: [ 309.177045] scsi4 : pata_amd Jan 2 01:04:04 kernel: [ 309.178628] scsi5 : pata_amd Jan 2 01:04:04 kernel: [ 309.178801] ata5: PATA max UDMA/133 cmd 0x1f0 ctl 0x3f6 bmdma 0xf000 irq 14 Jan 2 01:04:04 kernel: [ 309.178811] ata6: PATA max UDMA/133 cmd 0x170 ctl 0x376 bmdma 0xf008 irq 15 Jan 2 01:04:04 net/hw-detect.hotplug: Detected hotpluggable network interface eth0 Jan 2 01:04:04 net/hw-detect.hotplug: Detected hotpluggable network interface lo Jan 2 01:04:04 kernel: [ 309.485062] ata3: SATA link down (SStatus 0 SControl 300) Jan 2 01:04:04 kernel: [ 309.633094] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Jan 2 01:04:04 kernel: [ 309.641647] ata1.00: ATA-8: ST31000528AS, CC38, max UDMA/133 Jan 2 01:04:04 kernel: 
[ 309.641658] ata1.00: 1953525168 sectors, multi 1: LBA48 NCQ (depth 31/32) Jan 2 01:04:04 kernel: [ 309.657614] ata1.00: configured for UDMA/133 Jan 2 01:04:04 kernel: [ 309.657969] scsi 0:0:0:0: Direct-Access ATA ST31000528AS CC38 PQ: 0 ANSI: 5 Jan 2 01:04:04 kernel: [ 309.658482] sd 0:0:0:0: Attached scsi generic sg0 type 0 Jan 2 01:04:04 kernel: [ 309.658588] sd 0:0:0:0: [sda] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB) Jan 2 01:04:04 kernel: [ 309.658812] sd 0:0:0:0: [sda] Write Protect is off Jan 2 01:04:04 kernel: [ 309.658823] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jan 2 01:04:04 kernel: [ 309.658918] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 2 01:04:04 kernel: [ 309.675630] sda: sda1 sda2 Jan 2 01:04:04 kernel: [ 309.676440] sd 0:0:0:0: [sda] Attached SCSI disk Jan 2 01:04:05 kernel: [ 309.969102] ata2: SATA link down (SStatus 0 SControl 300) Jan 2 01:04:05 kernel: [ 310.281137] ata4: SATA link down (SStatus 0 SControl 300) Anybody have any additional ideas I could try? I am getting ready to just toss the motherboard.

    Read the article

  • ASP.NET Web API - Screencast series with downloadable sample code - Part 1

    - by Jon Galloway
    There's a lot of great ASP.NET Web API content on the ASP.NET website at http://asp.net/web-api. I mentioned my screencast series in original announcement post, but we've since added the sample code so I thought it was worth pointing the series out specifically. This is an introductory screencast series that walks through from File / New Project to some more advanced scenarios like Custom Validation and Authorization. The screencast videos are all short (3-5 minutes) and the sample code for the series is both available for download and browsable online. I did the screencasts, but the samples were written by the ASP.NET Web API team. So - let's watch them together! Grab some popcorn and pay attention, because these are short. After each video, I'll talk about what I thought was important. I'm embedding the videos using HTML5 (MP4) with Silverlight fallback, but if something goes wrong or your browser / device / whatever doesn't support them, I'll include the link to where the videos are more professionally hosted on the ASP.NET site. Note also if you're following along with the samples that, since Part 1 just looks at the File / New Project step, the screencast part numbers are one ahead of the sample part numbers - so screencast 4 matches with sample code demo 3. Note: I started this as one long post for all 6 parts, but as it grew over 2000 words I figured it'd be better to break it up. Part 1: Your First Web API [Video and code on the ASP.NET site] This screencast starts with an overview of why you'd want to use ASP.NET Web API: Reach more clients (thinking beyond the browser to mobile clients, other applications, etc.) Scale (who doesn't love the cloud?!) Embrace HTTP (a focus on HTTP both on client and server really simplifies and focuses service interactions) Next, I start a new ASP.NET Web API application and show some of the basics of the ApiController. We don't write any new code in this first step, just look at the example controller that's created by File / New Project. using System; using System.Collections.Generic; using System.Linq; using System.Net.Http; using System.Web.Http; namespace NewProject_Mvc4BetaWebApi.Controllers { public class ValuesController : ApiController { // GET /api/values public IEnumerable<string> Get() { return new string[] { "value1", "value2" }; } // GET /api/values/5 public string Get(int id) { return "value"; } // POST /api/values public void Post(string value) { } // PUT /api/values/5 public void Put(int id, string value) { } // DELETE /api/values/5 public void Delete(int id) { } } } Finally, we walk through testing the output of this API controller using browser tools. There are several ways you can test API output, including Fiddler (as described by Scott Hanselman in this post) and built-in developer tools available in all modern browsers. For simplicity I used Internet Explorer 9 F12 developer tools, but you're of course welcome to use whatever you'd like. A few important things to note: This class derives from an ApiController base class, not the standard ASP.NET MVC Controller base class. They're similar in places where API's and HTML returning controller uses are similar, and different where API and HTML use differ. A good example of where those things are different is in the routing conventions. In an HTTP controller, there's no need for an "action" to be specified, since the HTTP verbs are the actions. 
We don't need to do anything to map verbs to actions; when a request comes in to /api/values/5 with the DELETE HTTP verb, it'll automatically be handled by the Delete method in an ApiController. The comments above the API methods show sample URL's and HTTP verbs, so we can test out the first two GET methods by browsing to the site in IE9, hitting F12 to bring up the tools, and entering /api/values in the URL: That sample action returns a list of values. To get just one value back, we'd browse to /values/5: That's it for Part 1. In Part 2 we'll look at getting data (beyond hardcoded strings) and start building out a sample application.
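    If you would rather exercise the default ValuesController from code than from the F12 tools, a tiny console client along these lines should do it, using the HttpClient that ships with the Web API client libraries. The base address and port are assumptions; use whatever address Visual Studio assigns to your project.

    using System;
    using System.Net.Http;

    class ApiSmokeTest
    {
        static void Main()
        {
            // The port is a placeholder for the one your project actually runs on.
            using (var client = new HttpClient { BaseAddress = new Uri("http://localhost:12345/") })
            {
                // GET /api/values -> ["value1","value2"]
                Console.WriteLine(client.GetStringAsync("api/values").Result);

                // GET /api/values/5 -> "value"
                Console.WriteLine(client.GetStringAsync("api/values/5").Result);
            }
        }
    }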

    Read the article

  • PetaPoco with parameterised stored procedure and Asp.Net MVC

    - by Jalpesh P. Vadgama
    I have been playing with micro ORMs, as this is one of the more interesting things happening in the developer community, and I already like the concept. They are tiny, easy to use, and allow performance tweaks. PetaPoco is one of them, and I have written a few blog posts about it. In this blog post I explain how we can use PetaPoco with stored procedures that take parameters. I am going to use the same Customer table which I have used in my previous posts. For those who have not read my previous posts, following are the links for them. Get started with ASP.NET MVC and PetaPoco PetaPoco with stored procedures Now our Customer table is ready, so let's create a simple stored procedure which fetches a single customer by CustomerId. Following is the code for that. CREATE PROCEDURE mysp_GetCustomer @CustomerId as INT AS SELECT * FROM [dbo].Customer where CustomerId=@CustomerId Now we are ready with our stored procedure, so let's add code to the CustomerDB class to retrieve a single customer, like following. using System.Collections.Generic; namespace CodeSimplified.Models { public class CustomerDB { public IEnumerable<Customer> GetCustomers() { var databaseContext = new PetaPoco.Database("MyConnectionString"); databaseContext.EnableAutoSelect = false; return databaseContext.Query<Customer>("exec mysp_GetCustomers"); } public Customer GetCustomer(int customerId) { var databaseContext = new PetaPoco.Database("MyConnectionString"); databaseContext.EnableAutoSelect = false; var customer = databaseContext.SingleOrDefault<Customer>("exec mysp_GetCustomer @customerId", new {customerId}); return customer; } } } Here in the above code you can see that I have created a new method called GetCustomer which takes customerId as a parameter, and then I have written code to use the stored procedure we created to fetch the customer information. Here I have set EnableAutoSelect=false because I don't want PetaPoco to create the Select statement automatically; I want to use my stored procedure for that. Now our CustomerDB class is ready, so let's create a Details ActionResult in our controller, like following. using System.Web.Mvc; namespace CodeSimplified.Controllers { public class HomeController : Controller { public ActionResult Index() { ViewBag.Message = "Welcome to ASP.NET MVC!"; return View(); } public ActionResult About() { return View(); } public ActionResult Customer() { var customerDb = new Models.CustomerDB(); return View(customerDb.GetCustomers()); } public ActionResult Details(int id) { var customerDb = new Models.CustomerDB(); return View(customerDb.GetCustomer(id)); } } } Now let's create a view based on that Details action method, like following. Now everything is ready, so let's test it in the browser. First, go to the customer list, like following. Now I click on details for the first customer, and we can see how the stored procedure with a parameter is used to fetch the customer details; below is the output. So that's it. It's very easy. Hope you liked it. Stay tuned for more.. Happy Programming
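    For completeness, here is a sketch of the kind of Customer POCO PetaPoco maps the result to, plus a non-query stored procedure call using Execute. The CustomerName column and the mysp_DeleteCustomer procedure are illustrative assumptions, not part of the original post.

    namespace CodeSimplified.Models
    {
        // Assumed shape of the POCO; PetaPoco maps result columns to properties by name.
        public class Customer
        {
            public int CustomerId { get; set; }
            public string CustomerName { get; set; }
        }

        public class CustomerCommands
        {
            public void DeleteCustomer(int customerId)
            {
                var databaseContext = new PetaPoco.Database("MyConnectionString");
                databaseContext.EnableAutoSelect = false;
                // Execute is the PetaPoco call to use when the procedure returns no result set.
                databaseContext.Execute("exec mysp_DeleteCustomer @customerId", new { customerId });
            }
        }
    }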

    Read the article

  • Web Services for Info Explorer Zones

    - by Anthony Shorten
    One of the most interesting uses for XAI and Configurable objects is the exposure of a query portal as a Web Service. Let me illustrate this with an example. Say you have an interface that requires a list of data from a number of product tables. In the past you would have to build a java program to do this with SQL then use an application service but it is now possible with just configuration. The first step in the process is to create the SQL you want to use for the interface. It can be any valid static SQL or use host variables for the WHERE clause (we call that filtered). Once you are happy with the SQL (and it performs acceptably) you can incorporate that SQL into a Info-Explorer Zone. You can use any of the explorer zone types but I typically recommend F1-DE-SINGLE as it supports a single SQL statement with multiple filters (up to 15) as well as hidden filters (up to 5). Hidden filters are typically not displayed in the UI for criteria (remember explorer zones can be used on the user Interface as well) but for web services they can be used as normal filters (this means you can use up to 20 filters all up). Once you are happy with the zone, you now need to define it as a Business Service. We have a generic service called FWLZDEXP which allows a explorer zone to be defined as a Business Service. If you open any Business Service based upon FWLZDEXP you will see some examples. The schema is standard and pretty self explanatory in terms of the structure. The schema pattern looks like this: Zone element - maps to the ZONE_CD element and the default value is the zone name you just created. This links the business service to the zone. Filter elements - You name the filters as you like but the mapField is set to Fx_VALUE where x is the filter number corresponding to the filter element in the zone definition. Hidden filter elements - You name the filters as you like but the mapField is set to Hx_VALUE where x is the filter number corresponding to the hidden filter element in the zone definition. results group - this holds the elements of the result set. Each element in your result set has a tagname and is linked to the COL_VALUE mapField and the row element is lists the SEQNO of the column. This corresponds to the column number in the results set in the zone. An example schema is shown below for the F1-USGRACML zone, which returns the access modes for a user group and application service filters. In the example, the userGroup and applicationService elements are the filters and the rows would contain a list of accessModeDescr. This is just a simple example to illustrate the point. There are lots of examples in the product that you can investigate. One recommendation, to save time, is that you copy the schema from one of the examples to save you typing it from scratch. You can simply modify the tags and other elements to suit your needs. Once the Business Service is defined it can simply be defined as a Web Service by registering an XAI Inbound Service using the Business Service definition as a basis. You now have a Web Service based upon a Info Explorer Zone. This is one of my favorite components as it allows interfaces to be simplified. This will be my last blog entry for this year. I hope you all have a great and safe Christmas and an even greater new year. Next year promises to be an exciting year and I look forward to communicating exciting developments we are working on at the moment as they are released.

    Read the article

  • Intel Dual Band Wireless-AC 7260 keeps dropping wifi

    - by Rick T
    My wifi Intel Dual Band Wireless-AC 7260 keeps dropping wificonnection drops and the network to which I was connected disappears from the list of available networks in network manager. The only way to fix it is to disable wifi and re-enable it How can I fix this. I'm using ubuntu 14.04 64bit. It mostly drops connections on the 5ghz network. My other devices don't drop connections over wifi. see logs and versions rt@simon:~$ uname -a Linux simon 3.13.0-34-generic #60-Ubuntu SMP Wed Aug 13 15:45:27 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux rt@simon:~$ rt@simon:~$ dmesg | grep iwl [ 3.370777] iwlwifi 0000:03:00.0: irq 46 for MSI/MSI-X [ 3.381089] iwlwifi 0000:03:00.0: loaded firmware version 22.24.8.0 op_mode iwlmvm [ 3.414637] iwlwifi 0000:03:00.0: Detected Intel(R) Dual Band Wireless AC 7260, REV=0x144 [ 3.414695] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [ 3.414913] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [ 3.630208] ieee80211 phy0: Selected rate control algorithm 'iwl-mvm-rs' [ 9.304838] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [ 9.305068] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [ 605.483174] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S [ 605.483396] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S rt@simon:~$ cat /var/log/syslog | grep -e iwl -e 80211 | tail -n25 Aug 14 08:13:02 simon kernel: [ 3.452780] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) Aug 14 08:13:02 simon kernel: [ 3.630208] ieee80211 phy0: Selected rate control algorithm 'iwl-mvm-rs' Aug 14 08:13:06 simon NetworkManager[1125]: <info> rfkill1: found WiFi radio killswitch (at /sys/devices/pci0000:00/0000:00:1c.2/0000:03:00.0/ieee80211/phy0/rfkill1) (driver iwlwifi) Aug 14 08:13:06 simon NetworkManager[1125]: <info> (wlan0): using nl80211 for WiFi device control Aug 14 08:13:06 simon NetworkManager[1125]: <info> (wlan0): new 802.11 WiFi device (driver: 'iwlwifi' ifindex: 3) Aug 14 08:13:06 simon kernel: [ 9.304838] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S Aug 14 08:13:06 simon kernel: [ 9.305068] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S Aug 14 08:14:18 simon kernel: [ 81.230162] cfg80211: Calling CRDA to update world regulatory domain Aug 14 08:14:18 simon kernel: [ 81.232330] cfg80211: World regulatory domain updated: Aug 14 08:14:18 simon kernel: [ 81.232332] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) Aug 14 08:14:18 simon kernel: [ 81.232333] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) Aug 14 08:14:18 simon kernel: [ 81.232334] cfg80211: (2457000 KHz - 2482000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) Aug 14 08:14:18 simon kernel: [ 81.232335] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) Aug 14 08:14:18 simon kernel: [ 81.232336] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) Aug 14 08:14:18 simon kernel: [ 81.232337] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) Aug 14 08:23:02 simon kernel: [ 605.483174] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S Aug 14 08:23:02 simon kernel: [ 605.483396] iwlwifi 0000:03:00.0: L1 Disabled; Enabling L0S Aug 14 08:23:18 simon kernel: [ 621.223905] cfg80211: Calling CRDA to update world regulatory domain Aug 14 08:23:18 simon kernel: [ 621.228945] cfg80211: World regulatory domain updated: Aug 14 08:23:18 simon kernel: [ 621.228950] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) Aug 14 08:23:18 simon kernel: [ 621.228954] cfg80211: (2402000 KHz - 2472000 
KHz @ 40000 KHz), (300 mBi, 2000 mBm) Aug 14 08:23:18 simon kernel: [ 621.228956] cfg80211: (2457000 KHz - 2482000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) Aug 14 08:23:18 simon kernel: [ 621.228959] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) Aug 14 08:23:18 simon kernel: [ 621.228961] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) Aug 14 08:23:18 simon kernel: [ 621.228963] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)

    Read the article

  • Take,Skip and Reverse Operator in Linq

    - by Jalpesh P. Vadgama
    I have found three more new operators in LINQ which are useful in day-to-day programming: Take, Skip and Reverse. Here is how each operator works. Take Operator: Take returns the first N elements of a sequence. Skip Operator: Skip skips the first N elements of a sequence and returns the remaining elements as the result. Reverse Operator: As the name suggests, Reverse reverses the order of the elements of a sequence. Here is an example where I have used a simple string array to demonstrate all three. using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace ConsoleApplication1 { class Program { static void Main(string[] args) { string[] a = { "a", "b", "c", "d" }; Console.WriteLine("Take Example"); var TkResult = a.Take(2); foreach (string s in TkResult) { Console.WriteLine(s); } Console.WriteLine("Skip Example"); var SkResult = a.Skip(2); foreach (string s in SkResult) { Console.WriteLine(s); } Console.WriteLine("Reverse Example"); var RvResult = a.Reverse(); foreach (string s in RvResult) { Console.WriteLine(s); } } } } Here is the output as expected. Hope this will help you. Technorati Tags: Linq,Linq-To-Sql,ASP.NET,C#.NET
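    Skip and Take are also commonly combined for paging over a sequence; here is a minimal sketch (the page size and page number below are just illustrative values).

    using System;
    using System.Linq;

    class PagingExample
    {
        static void Main()
        {
            string[] letters = { "a", "b", "c", "d", "e", "f" };
            int pageSize = 2;
            int pageNumber = 2; // 1-based page index

            // Skip past the earlier pages, then take one page worth of items.
            var page = letters.Skip((pageNumber - 1) * pageSize).Take(pageSize);
            foreach (string s in page)
            {
                Console.WriteLine(s); // prints "c" then "d"
            }
        }
    }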

    Read the article

  • ASP.NET Error Handling: Creating an extension method to send error email

    - by Jalpesh P. Vadgama
    Error handling in ASP.NET is required to deal with any kind of error that occurs, and we all use it in one scenario or another. But some errors only occur in specific scenarios in the production environment, and in that case we can't show our programming errors to the end user. So we put up an error page, or whatever is best suited to our requirements. But as programmers we should still know about the error so we can track down the scenario and solve or handle it. In this kind of situation an error email comes in handy: whenever an error occurs in the system, it sends the error to us by email. Here I am going to write an extension method which will send errors by email. From version 3.5 onwards, the .NET Framework provides a unique way to extend your classes. Here you can find more information about extension methods. So let's create the extension method by implementing a static class like the following. I am going to use the same code for sending email via my Gmail account from here. using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Net.Mail; namespace Experiement { public static class MyExtension { public static void SendErrorEmail(this Exception ex) { MailMessage mailMessage = new MailMessage(new MailAddress("[email protected]"), new MailAddress("[email protected]")); mailMessage.Subject = "Exception Occured in your site"; mailMessage.IsBodyHtml = true; System.Text.StringBuilder errorMessage = new System.Text.StringBuilder(); errorMessage.AppendLine(string.Format("<B>{0}</B>:{1}<BR/>","Exception",ex.Message)); errorMessage.AppendLine(string.Format("<B>{0}</B>:{1}<BR/>", "Stack Trace", ex.StackTrace)); if (ex.InnerException != null) { errorMessage.AppendLine(string.Format("<B>{0}</B>:{1}<BR/>", " Inner Exception", ex.InnerException.Message)); errorMessage.AppendLine(string.Format("<B>{0}</B>:{1}<BR/>", "Inner Stack Trace", ex.InnerException.StackTrace)); } mailMessage.Body = errorMessage.ToString(); System.Net.NetworkCredential networkCredentials = new System.Net.NetworkCredential("[email protected]", "password"); SmtpClient smtpClient = new SmtpClient(); smtpClient.EnableSsl = true; smtpClient.UseDefaultCredentials = false; smtpClient.Credentials = networkCredentials; smtpClient.Host = "smtp.gmail.com"; smtpClient.Port = 587; smtpClient.Send(mailMessage); } } } After creating the extension method, let us use it to handle an error, like the following, in the Page_Load event of a page. using System; namespace Experiement { public partial class WebForm1 : System.Web.UI.Page { protected void Page_Load(object sender, System.EventArgs e) { try { throw new Exception("My custom Exception"); } catch (Exception ex) { ex.SendErrorEmail(); Response.Write(ex.Message); } } } } Now, in the above code I have thrown a custom exception as an example, but in production it can be any exception. And you can see that I have used the ex.SendErrorEmail() call in the catch block to send the email. That's it. Now it will throw the exception and you will get an email in your inbox like below. It's so simple… Stay tuned for more.. Happy programming.. Technorati Tags: Exception,Extension Method,Error Handling,ASP.NET
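    The same extension method can also be wired up globally so that unhandled exceptions are mailed without a try/catch on every page. A minimal sketch for the Application_Error handler inside the existing application class in Global.asax.cs is below; the redirect target is a placeholder for whatever friendly error page you show users.

    protected void Application_Error(object sender, EventArgs e)
    {
        Exception ex = Server.GetLastError();
        if (ex != null)
        {
            ex.SendErrorEmail();                   // reuse the extension method above
            Server.ClearError();
            Response.Redirect("~/ErrorPage.aspx"); // placeholder error page
        }
    }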

    Read the article

  • How do you install a USB CD Rom drive?

    - by Matt Allen
    Hello, I recently purchased a USB CD ROM drive, but I don't know how to get it to work with my computer which runs Ubuntu 10.04. http://www.amazon.com/gp/product/B00303H908/ref=oss_product When I issue the lsusb command, it shows up as: Bus 002 Device 016: ID 05e3:0701 Genesys Logic, Inc. USB 2.0 IDE Adapter The computer doesn't recognize it automatically. How can I get this drive to show up as an actual drive on my computer? If this particular drive can't handle Linux, can you recommended one which can and provide a link to it so I can purchase it? Thanks! Update: I was asked by Scaine to run a command and report back with the output: joe@joe-laptop:~$ tail -f /var/log/kern.log Dec 29 12:51:35 joe-laptop kernel: [103190.551437] sr 7:0:0:0: [sr1] Add. Sense: Illegal mode for this track Dec 29 12:51:35 joe-laptop kernel: [103190.551446] sr 7:0:0:0: [sr1] CDB: Read(10): 28 00 00 00 00 00 00 00 02 00 Dec 29 12:51:35 joe-laptop kernel: [103190.551463] end_request: I/O error, dev sr1, sector 0 Dec 29 12:51:35 joe-laptop kernel: [103190.877542] sr 7:0:0:0: [sr1] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Dec 29 12:51:35 joe-laptop kernel: [103190.877551] sr 7:0:0:0: [sr1] Sense Key : Illegal Request [current] Dec 29 12:51:35 joe-laptop kernel: [103190.877559] Info fld=0x0, ILI Dec 29 12:51:35 joe-laptop kernel: [103190.877562] sr 7:0:0:0: [sr1] Add. Sense: Illegal mode for this track Dec 29 12:51:35 joe-laptop kernel: [103190.877572] sr 7:0:0:0: [sr1] CDB: Read(10): 28 00 00 00 00 00 00 00 02 00 Dec 29 12:51:35 joe-laptop kernel: [103190.877588] end_request: I/O error, dev sr1, sector 0 Dec 29 13:08:46 joe-laptop kernel: [104221.558911] usb 2-2.2: USB disconnect, address 16 Then when I plugged the drive back into the computer, I got: Dec 29 13:10:29 joe-laptop kernel: [104324.668320] usb 2-2.2: new high speed USB device using ehci_hcd and address 17 Dec 29 13:10:29 joe-laptop kernel: [104324.761702] usb 2-2.2: configuration #1 chosen from 1 choice Dec 29 13:10:29 joe-laptop kernel: [104324.762700] scsi8 : SCSI emulation for USB Mass Storage devices Dec 29 13:10:29 joe-laptop kernel: [104324.762935] usb-storage: device found at 17 Dec 29 13:10:29 joe-laptop kernel: [104324.762938] usb-storage: waiting for device to settle before scanning Dec 29 13:10:34 joe-laptop kernel: [104329.760521] usb-storage: device scan complete Dec 29 13:10:34 joe-laptop kernel: [104329.761344] scsi 8:0:0:0: CD-ROM TEAC CD-224E 1.7A PQ: 0 ANSI: 0 CCS Dec 29 13:10:34 joe-laptop kernel: [104329.767425] sr1: scsi3-mmc drive: 24x/24x cd/rw xa/form2 cdda tray Dec 29 13:10:34 joe-laptop kernel: [104329.767612] sr 8:0:0:0: Attached scsi CD-ROM sr1 Dec 29 13:10:34 joe-laptop kernel: [104329.767720] sr 8:0:0:0: Attached scsi generic sg2 type 5 Dec 29 13:10:34 joe-laptop kernel: [104330.141060] sr 8:0:0:0: [sr1] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Dec 29 13:10:34 joe-laptop kernel: [104330.141069] sr 8:0:0:0: [sr1] Sense Key : Illegal Request [current] Dec 29 13:10:34 joe-laptop kernel: [104330.141077] Info fld=0x0, ILI Dec 29 13:10:34 joe-laptop kernel: [104330.141081] sr 8:0:0:0: [sr1] Add. 
Sense: Illegal mode for this track Dec 29 13:10:34 joe-laptop kernel: [104330.141090] sr 8:0:0:0: [sr1] CDB: Read(10): 28 00 00 00 00 00 00 00 02 00 Dec 29 13:10:34 joe-laptop kernel: [104330.141106] end_request: I/O error, dev sr1, sector 0 Dec 29 13:10:34 joe-laptop kernel: [104330.141113] __ratelimit: 18 callbacks suppressed There was more output than this (the number of lines started growing after the drive was plugged back in, and kept growing), but this is the first few lines.

    Read the article

  • ASP.NET Frameworks and Raw Throughput Performance

    - by Rick Strahl
    A few days ago I had a curious thought: With all these different technologies that the ASP.NET stack has to offer, what's the most efficient technology overall to return data for a server request? When I started this it was mere curiosity rather than a real practical need or result. Different tools are used for different problems and so performance differences are to be expected. But still I was curious to see how the various technologies performed relative to each just for raw throughput of the request getting to the endpoint and back out to the client with as little processing in the actual endpoint logic as possible (aka Hello World!). I want to clarify that this is merely an informal test for my own curiosity and I'm sharing the results and process here because I thought it was interesting. It's been a long while since I've done any sort of perf testing on ASP.NET, mainly because I've not had extremely heavy load requirements and because overall ASP.NET performs very well even for fairly high loads so that often it's not that critical to test load performance. This post is not meant to make a point  or even come to a conclusion which tech is better, but just to act as a reference to help understand some of the differences in perf and give a starting point to play around with this yourself. I've included the code for this simple project, so you can play with it and maybe add a few additional tests for different things if you like. Source Code on GitHub I looked at this data for these technologies: ASP.NET Web API ASP.NET MVC WebForms ASP.NET WebPages ASMX AJAX Services  (couldn't get AJAX/JSON to run on IIS8 ) WCF Rest Raw ASP.NET HttpHandlers It's quite a mixed bag, of course and the technologies target different types of development. What started out as mere curiosity turned into a bit of a head scratcher as the results were sometimes surprising. What I describe here is more to satisfy my curiosity more than anything and I thought it interesting enough to discuss on the blog :-) First test: Raw Throughput The first thing I did is test raw throughput for the various technologies. This is the least practical test of course since you're unlikely to ever create the equivalent of a 'Hello World' request in a real life application. The idea here is to measure how much time a 'NOP' request takes to return data to the client. So for this request I create the simplest Hello World request that I could come up for each tech. Http Handler The first is the lowest level approach which is an HTTP handler. public class Handler : IHttpHandler { public void ProcessRequest(HttpContext context) { context.Response.ContentType = "text/plain"; context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString()); } public bool IsReusable { get { return true; } } } WebForms Next I added a couple of ASPX pages - one using CodeBehind and one using only a markup page. The CodeBehind page simple does this in CodeBehind without any markup in the ASPX page: public partial class HelloWorld_CodeBehind : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { Response.Write("Hello World. Time is: " + DateTime.Now.ToString() ); Response.End(); } } while the Markup page only contains some static output via an expression:<%@ Page Language="C#" AutoEventWireup="false" CodeBehind="HelloWorld_Markup.aspx.cs" Inherits="AspNetFrameworksPerformance.HelloWorld_Markup" %> Hello World. Time is <%= DateTime.Now %> ASP.NET WebPages WebPages is the freestanding Razor implementation of ASP.NET. 
Here's the simple HelloWorld.cshtml page:Hello World @DateTime.Now WCF REST WCF REST was the token REST implementation for ASP.NET before WebAPI and the inbetween step from ASP.NET AJAX. I'd like to forget that this technology was ever considered for production use, but I'll include it here. Here's an OperationContract class: [ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class WcfService { [OperationContract] [WebGet] public Stream HelloWorld() { var data = Encoding.Unicode.GetBytes("Hello World" + DateTime.Now.ToString()); var ms = new MemoryStream(data); // Add your operation implementation here return ms; } } WCF REST can return arbitrary results by returning a Stream object and a content type. The code above turns the string result into a stream and returns that back to the client. ASP.NET AJAX (ASMX Services) I also wanted to test ASP.NET AJAX services because prior to WebAPI this is probably still the most widely used AJAX technology for the ASP.NET stack today. Unfortunately I was completely unable to get this running on my Windows 8 machine. Visual Studio 2012  removed adding of ASP.NET AJAX services, and when I tried to manually add the service and configure the script handler references it simply did not work - I always got a SOAP response for GET and POST operations. No matter what I tried I always ended up getting XML results even when explicitly adding the ScriptHandler. So, I didn't test this (but the code is there - you might be able to test this on a Windows 7 box). ASP.NET MVC Next up is probably the most popular ASP.NET technology at the moment: MVC. Here's the small controller: public class MvcPerformanceController : Controller { public ActionResult Index() { return View(); } public ActionResult HelloWorldCode() { return new ContentResult() { Content = "Hello World. Time is: " + DateTime.Now.ToString() }; } } ASP.NET WebAPI Next up is WebAPI which looks kind of similar to MVC. Except here I have to use a StringContent result to return the response: public class WebApiPerformanceController : ApiController { [HttpGet] public HttpResponseMessage HelloWorldCode() { return new HttpResponseMessage() { Content = new StringContent("Hello World. Time is: " + DateTime.Now.ToString(), Encoding.UTF8, "text/plain") }; } } Testing Take a minute to think about each of the technologies… and take a guess which you think is most efficient in raw throughput. The fastest should be pretty obvious, but the others - maybe not so much. The testing I did is pretty informal since it was mainly to satisfy my curiosity - here's how I did this: I used Apache Bench (ab.exe) from a full Apache HTTP installation to run and log the test results of hitting the server. ab.exe is a small executable that lets you hit a URL repeatedly and provides counter information about the number of requests, requests per second etc. ab.exe and the batch file are located in the \LoadTests folder of the project. An ab.exe command line  looks like this: ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld which hits the specified URL 100,000 times with a load factor of 20 concurrent requests. This results in output like this:   It's a great way to get a quick and dirty performance summary. Run it a few times to make sure there's not a large amount of varience. You might also want to do an IISRESET to clear the Web Server. 
Just make sure you do a short test run to warm up the server first - otherwise your first run is likely to be skewed downwards. ab.exe also allows you to specify headers, provide POST data and many other things if you want to get a little more fancy. Here all tests are GET requests to keep it simple.

I ran each test with:

100,000 iterations
A load factor of 20 concurrent connections
An IISRESET before starting
A short warm up run for Web API and MVC to make sure startup cost is mitigated

Here is the batch file I used for the test:

IISRESET

REM make sure you add
REM C:\Program Files (x86)\Apache Software Foundation\Apache2.2\bin
REM to your path so ab.exe can be found

REM Warm up
ab.exe -n100 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson
ab.exe -n100 -c20 http://localhost/aspnetperf/api/HelloWorldJson
ab.exe -n100 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld

ab.exe -n100000 -c20 http://localhost/aspnetperf/handler.ashx > handler.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_CodeBehind.aspx > AspxCodeBehind.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_Markup.aspx > AspxMarkup.txt
ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld > Wcf.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldCode > Mvc.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld > WebApi.txt

I ran each of these tests 3 times and took the average score for Requests/second, with the machine otherwise idle. I did see a bit of variance when running many tests, but the values used here are the medians. Part of this has to do with the fact that I ran the tests on my local machine - results would probably be more consistent running the load test from a separate machine across the network.

I ran these tests locally on my laptop, a Dell XPS with a quad core Sandy Bridge i7-2720QM @ 2.20GHz and a fast SSD drive, on Windows 8. CPU load during the tests ran to about 70% max across all 4 cores (IOW, it wasn't overloading the machine). Ideally you'd run these tests from a separate machine hitting this one, but if I remember correctly IIS 7 and 8 on client OSs don't throttle, so the performance here should be reasonably representative.

Results

Ok, let's cut straight to the chase. Below are the results from the tests…

It's not surprising that the handler was fastest. But it was a bit surprising to me that the next fastest was WebForms, and especially Web Forms with markup over a CodeBehind page. WebPages also fared fairly well. MVC and WebAPI are a little slower, and the slowest by far is WCF REST (which again I find surprising).

As mentioned at the start, the raw throughput tests are not overly practical as they don't test scripting performance for the HTML generation engines or serialization performance of the data engines. All they really do is give you an idea of the raw throughput for the technology from the time of the request to reaching the endpoint and returning minimal text data back to the client, which indicates full round trip performance. But it's still interesting to see that Web Forms performs better in throughput than MVC, WebAPI or WebPages. It'd be interesting to try this with a few pages that actually have some parsing logic on them, but that's beyond the scope of this throughput test.

But what's also amazing about this test is the sheer amount of traffic that a laptop computer is handling.
Even the slowest tech managed 5,700 requests a second, which is one hell of a lot of requests if you extrapolate that out over a 24 hour period - roughly 490 million requests a day. Remember these are not static pages, but dynamic requests that are being served.

Another test - JSON Data Service Results

For the second test I used a JSON result from several of the technologies. I didn't bother running WebForms and WebPages through this test since it doesn't make a ton of sense to return data from them (OTOH, returning plain text from the APIs didn't make a ton of sense either :-)

In these tests I have a small Person class that gets serialized and then returned to the client. The Person class looks like this:

public class Person
{
    public Person()
    {
        Id = 10;
        Name = "Rick";
        Entered = DateTime.Now;
    }

    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime Entered { get; set; }
}

Here are the updated handler classes that use Person:

Handler

public class Handler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        var action = context.Request.QueryString["action"];

        if (action == "json")
            JsonRequest(context);
        else
            TextRequest(context);
    }

    public void TextRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString());
    }

    public void JsonRequest(HttpContext context)
    {
        var json = JsonConvert.SerializeObject(new Person(), Formatting.None);
        context.Response.ContentType = "application/json";
        context.Response.Write(json);
    }

    public bool IsReusable
    {
        get { return true; }
    }
}

This code adds a little logic to check for an action query string and route the request to an optional JSON result method. To generate JSON, I'm using the same JSON.NET serializer (JsonConvert.SerializeObject) used in Web API to create the JSON response.

WCF REST

[ServiceContract(Namespace = "")]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class WcfService
{
    [OperationContract]
    [WebGet]
    public Stream HelloWorld()
    {
        var data = Encoding.Unicode.GetBytes("Hello World " + DateTime.Now.ToString());
        var ms = new MemoryStream(data);

        // Add your operation implementation here
        return ms;
    }

    [OperationContract]
    [WebGet(ResponseFormat = WebMessageFormat.Json, BodyStyle = WebMessageBodyStyle.WrappedRequest)]
    public Person HelloWorldJson()
    {
        // Add your operation implementation here
        return new Person();
    }
}

For WCF REST all I have to do is add a method with the Person result type.

ASP.NET MVC

public class MvcPerformanceController : Controller
{
    //
    // GET: /MvcPerformance/
    public ActionResult Index()
    {
        return View();
    }

    public ActionResult HelloWorldCode()
    {
        return new ContentResult()
        {
            Content = "Hello World. Time is: " + DateTime.Now.ToString()
        };
    }

    public JsonResult HelloWorldJson()
    {
        return Json(new Person(), JsonRequestBehavior.AllowGet);
    }
}

For MVC all I have to do for a JSON response is return a JsonResult. MVC internally uses the JavaScriptSerializer for this.
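One caveat for the comparison (my note, not from the original project): the MVC JsonResult above goes through the built-in JavaScriptSerializer, while the handler and Web API use JSON.NET, so the serializers differ between tests. If you wanted MVC on the same serializer, a hypothetical extra action that returns JSON.NET output directly could look like this:

// Requires: using Newtonsoft.Json;
public ActionResult HelloWorldJsonNet()
{
    // Serialize with JSON.NET and return raw JSON content,
    // bypassing MVC's JavaScriptSerializer-based JsonResult.
    var json = JsonConvert.SerializeObject(new Person(), Formatting.None);
    return Content(json, "application/json");
}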
ASP.NET WebAPI

public class WebApiPerformanceController : ApiController
{
    [HttpGet]
    public HttpResponseMessage HelloWorldCode()
    {
        return new HttpResponseMessage()
        {
            Content = new StringContent("Hello World. Time is: " + DateTime.Now.ToString(),
                                        Encoding.UTF8, "text/plain")
        };
    }

    [HttpGet]
    public Person HelloWorldJson()
    {
        return new Person();
    }

    [HttpGet]
    public HttpResponseMessage HelloWorldJson2()
    {
        var response = new HttpResponseMessage(HttpStatusCode.OK);
        response.Content = new ObjectContent<Person>(new Person(),
            GlobalConfiguration.Configuration.Formatters.JsonFormatter);
        return response;
    }
}

Testing and Results

To run these data requests I used the following ab.exe commands:

REM JSON RESPONSES
ab.exe -n100000 -c20 http://localhost/aspnetperf/Handler.ashx?action=json > HandlerJson.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson > MvcJson.txt
ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorldJson > WebApiJson.txt
ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorldJson > WcfJson.txt

The results from this test run are interesting in that the WebAPI test improved performance significantly over returning plain string content. Here are the results:

The performance for each technology drops a little bit, except for WebAPI which is up quite a bit! From this test it appears that WebAPI actually performs significantly better when returning a JSON response rather than a plain string response.

Snag with Apache Benchmark and 'Length Failures'

I ran into a little snag with Apache Benchmark, which was reporting failures for my Web API requests when serializing. As the graph shows, performance with JSON results improved significantly, from around 5580 to 6530 requests a second, which is roughly a 15% improvement (while all the others slowed down by 3-8%). However, I was skeptical at first because the WebAPI test reports showed a bunch of errors on about 10% of the requests - check out the Failed Requests count in the report. What the hey? Is WebAPI failing on roughly 10% of requests when sending JSON?

Turns out: No it's not! But it took some sleuthing to figure out why it reports these failures. At first I thought that Web API was failing, so to make sure I re-ran the test with Fiddler attached, routing the ab.exe requests through the proxy with the -X switch:

ab.exe -n100 -c10 -X localhost:8888 http://localhost/aspnetperf/api/HelloWorldJson

which showed that indeed all requests were returning proper HTTP 200 results with full content. However, ab.exe was still reporting the errors. After some closer inspection it turned out that the varying length of the serialized dates altered the response length of the dynamic output. For example, these two results:

{"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.841926-10:00"}
{"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.8519262-10:00"}

differ in the number of fractional-second digits, which results in 68 and 69 bytes respectively. The same URL produces responses of different lengths, which is what ab.exe reports as a Length failure. I didn't notice it at first, but the same thing happens when running the ASHX handler with a JSON.NET result, since it uses the same serializer that varies the milliseconds.

Moral: You can typically ignore Length failures in Apache Benchmark, and when in doubt check the actual output with Fiddler. Note that the other failure values are accurate though.
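If you'd rather keep ab.exe's length check meaningful, you can also make the payload length deterministic by serializing the date without the varying fractional seconds. Here's a sketch of that workaround (my own, not something the test project does) for the handler's JsonRequest method, using JSON.NET's IsoDateTimeConverter:

// Requires: using Newtonsoft.Json; using Newtonsoft.Json.Converters;
// Truncates the Entered timestamp to whole seconds so the JSON payload
// length stays constant from request to request.
var json = JsonConvert.SerializeObject(
    new Person(),
    new IsoDateTimeConverter { DateTimeFormat = "yyyy-MM-ddTHH:mm:ssK" });

context.Response.ContentType = "application/json";
context.Response.Write(json);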
Another interesting side note: Perf drops over time

As I was running these tests repeatedly I found that performance steadily dropped from a startup peak to a 10-15% lower stable level. IOW, with Web API I'd start out with around 6500 req/sec and in subsequent runs it kept dropping until it stabilized somewhere around 5900 req/sec, occasionally jumping lower. This is why I did the IISRESET and warm up for the individual tests.

This is a little puzzling. Looking at Process Monitor while the tests are running, memory very quickly levels out, as do handles and threads, on the first test run. On subsequent runs everything stays stable, but performance starts drifting downwards. This applies to all the technologies - Handlers, Web Forms, MVC, Web API - and I'm curious to see whether others who try this see similar results. Doing an IISRESET then resets everything and performance starts off at peak again…

Summary

As I stated at the outset, these tests were informal, run to satisfy my curiosity, not to prove that any technology is better or even faster than another. While there clearly are differences in performance, the differences - other than WCF REST, which was by far the slowest, and the raw handler, which was by far the fastest - are relatively minor, so there is no need to feel that any one technology is a runaway standout in raw performance. Choosing a technology is about more than pure performance; it's also about suitability for the job and ease of implementation. The strengths of each technology will more than make up for the minor performance differences we see in these tests.

However, to me it's important to get an occasional reality check and compare where new technologies are heading. Oftentimes old stuff that's been optimized and designed for a time of less horsepower can utterly blow the doors off newer tech, and simple checks like this let you compare. Luckily we're seeing that much of the new stuff performs well even in V1.0, which is great.

To me it was very interesting to see Web API perform relatively badly with plain string content, which originally led me to think that Web API might not be properly optimized just yet. For those that caught my tweets late last week regarding WebAPI's slow responses: that was with string content, which is in fact considerably slower. Luckily, where it counts - with serialized JSON and XML - WebAPI actually performs better. But I do wonder what would make plain string content slower than serialized output.

This stresses another point: Don't take a single test as the final gospel, and don't extrapolate out from a single set of tests. Certainly Twitter can make you feel like a fool when you post something immediately that hasn't been fleshed out a little more <blush>. Egg on my face. As a result I ended up screwing around with this for a few hours today to compare different scenarios. Well worth the time…

I hope you found this useful, if not for the results, then maybe for the process of quickly testing a few requests for performance and charting out a comparison. Now onwards with more serious stuff…

Resources

Source Code on GitHub
Apache HTTP Server Project (ab.exe is part of the binary distribution)

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in ASP.NET, Web API

