Search Results

Search found 5469 results on 219 pages for 'compare and swap'.

  • IComparer for integers and force empty strings to end

    - by paulio
    Hi, I've written the following IComparer, but I need some help. I'm trying to sort a list of numbers, but some of the numbers may not have been filled in; I want those entries sent to the end of the list at all times. For example, [EMPTY], 1, [EMPTY], 3, 2 would become 1, 2, 3, [EMPTY], [EMPTY], and reversed this would become 3, 2, 1, [EMPTY], [EMPTY]. Any ideas?

    ```csharp
    public int Compare(ListViewItem x, ListViewItem y)
    {
        int comparison = int.MinValue;
        ListViewItem.ListViewSubItem itemOne = x.SubItems[subItemIndex];
        ListViewItem.ListViewSubItem itemTwo = y.SubItems[subItemIndex];

        if (!string.IsNullOrEmpty(itemOne.Text) && !string.IsNullOrEmpty(itemTwo.Text))
        {
            uint itemOneComparison = uint.Parse(itemOne.Text);
            uint itemTwoComparison = uint.Parse(itemTwo.Text);
            comparison = itemOneComparison.CompareTo(itemTwoComparison);
        }
        else
        {
            // ALWAYS SEND TO BOTTOM/END OF LIST.
        }

        // Calculate correct return value based on object comparison.
        if (OrderOfSort == SortOrder.Descending)
        {
            // Descending sort is selected, return negative result of compare operation.
            comparison = (-comparison);
        }
        else if (OrderOfSort == SortOrder.None)
        {
            // Return '0' to indicate they are equal.
            comparison = 0;
        }

        return comparison;
    }
    ```

    Cheers.
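
    A hedged sketch of one way to get the "empties always last" behaviour (it reuses the question's subItemIndex and OrderOfSort members, so it is an illustration rather than drop-in code): handle the empty/non-empty cases first, and apply the sort direction only to the numeric comparison, so empty cells sink to the end in both ascending and descending order.

    ```csharp
    public int Compare(ListViewItem x, ListViewItem y)
    {
        string a = x.SubItems[subItemIndex].Text;   // subItemIndex as in the question
        string b = y.SubItems[subItemIndex].Text;

        bool aEmpty = string.IsNullOrEmpty(a);
        bool bEmpty = string.IsNullOrEmpty(b);

        // Empties always sort after non-empties, regardless of direction.
        if (aEmpty && bEmpty) return 0;
        if (aEmpty) return 1;
        if (bEmpty) return -1;

        int comparison = uint.Parse(a).CompareTo(uint.Parse(b));

        // Apply the requested direction only to real values.
        return OrderOfSort == SortOrder.Descending ? -comparison : comparison;
    }
    ```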

  • F# numeric associations

    - by b1g3ar5
    I have a numeric association for a custom type as follows:

    ```fsharp
    let DiffNumerics =
        { new INumeric<Diff> with
            member op.Zero = C 0.0
            member op.One = C 1.0
            member op.Add(a, b) = a + b
            member op.Subtract(a, b) = a - b
            member op.Multiply(a, b) = a * b
            member ops.Negate(a) = Diff.negate a
            member ops.Abs(a) = Diff.abs a
            member ops.Equals(a, b) = ((=) a b)
            member ops.Compare(a, b) = Diff.compare a b
            member ops.Sign(a) = int (Diff.sign a).Val
            member ops.ToString(x, fmt, fmtprovider) = failwith "not implemented"
            member ops.Parse(s, numstyle, fmtprovider) = failwith "not implemented" }

    GlobalAssociations.RegisterNumericAssociation(DiffNumerics)
    ```

    It works fine in F# interactive, but crashes when I run, because .ElementOps is not filled correctly for a matrix of these types. Any ideas why this might be?

    EDIT: In fsi, the code

    ```fsharp
    let A = dmatrix [[Diff.C 1.; Diff.C 2.; Diff.C 3.]; [Diff.C 4.; Diff.C 5.; Diff.C 6.]]
    let B = matrix [[1.; 2.; 3.]; [4.; 5.; 6.]]
    ```

    gives:

    ```
    > A.ElementOps;;
    val it : INumeric<Diff> = FSI_0003.NewAD+DiffNumerics@258
    > B.ElementOps;;
    val it : INumeric<float> = Microsoft.FSharp.Math.Instances+FloatNumerics@115
    ```

    In the debugger, A.ElementOps shows:

    ```
    '(A).ElementOps' threw an exception of type 'System.NotSupportedException'
    ```

    and, for the B matrix: Microsoft.FSharp.Math.Instances+FloatNumerics@115. So somehow the DiffNumerics isn't making it to the compiled program.

  • How to partition bits in a bit array with less than linear time

    - by SiLent SoNG
    This is an interview question I faced recently. Given an array of 1s and 0s, find a way to partition the bits in place so that the 0s are grouped together and the 1s are grouped together. It does not matter whether the 1s are ahead of the 0s or the 0s are ahead of the 1s. An example input is 101010101, and the output is either 111110000 or 000011111. Solve the problem in less than linear time.

    To make the problem simpler: the input is an integer array, with each element either 1 or 0, and the output is the same integer array with the integers partitioned accordingly. To me, this is an easy question if it can be solved in O(N). My approach is to use two pointers, starting from both ends of the array: advance each pointer past elements that are already in place, and swap when both point at misplaced values.

    ```c
    int *start = array;
    int *end = array + length - 1;
    while (start < end) {
        // Assume 0 always belongs at the end
        if (*end == 0) {
            --end;
            continue;
        }
        // Assume 1 always belongs at the beginning
        if (*start == 1) {
            ++start;
            continue;
        }
        swap(*start, *end);
    }
    ```

    However, the interviewer insists there is a sub-linear solution. This has me thinking hard, but I still haven't got an answer. Can anyone help with this interview question?
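
    For what it's worth, a strictly sub-linear answer only makes sense if the bits are packed into machine words rather than stored one per array element: then you can popcount each word and rewrite the words wholesale, doing O(n/64) word operations instead of O(n) per-bit steps. A sketch of that idea in C# (it assumes the bits arrive packed MSB-first in a ulong[]; BitOperations requires .NET Core 3.0 or later):

    ```csharp
    using System.Numerics;

    static class BitPartition
    {
        // Partition packed bits in place: all 1s first, then all 0s.
        public static void Partition(ulong[] words)
        {
            // Count every set bit, one 64-bit word at a time.
            long ones = 0;
            foreach (ulong w in words)
                ones += BitOperations.PopCount(w);

            // Rewrite: full words of 1s, one partial word, then 0s.
            int i = 0;
            while (ones >= 64) { words[i++] = ulong.MaxValue; ones -= 64; }
            if (i < words.Length)
                words[i++] = ones == 0 ? 0UL : ulong.MaxValue << (int)(64 - ones);
            while (i < words.Length) words[i++] = 0UL;
        }
    }
    ```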

  • "no inclosing instance error " while getting top term frequencies for document from Lucene index

    - by Julia
    Hello! I am trying to get the most frequently occurring terms for every particular document in a Lucene index, with a threshold for the number of top terms I care about, maybe 20. However, I am getting "no enclosing instance of type DisplayTermVectors is accessible" when calling the Comparator. I pass this function the term vector of each document and the maximum number of top terms I would like back:

    ```java
    protected static Collection getTopTerms(TermFreqVector tfv, int maxTerms) {
        String[] terms = tfv.getTerms();
        int[] tFreqs = tfv.getTermFrequencies();
        List result = new ArrayList(terms.length);
        for (int i = 0; i < tFreqs.length; i++) {
            TermFrq tf = new TermFrq(terms[i], tFreqs[i]);
            result.add(tf);
        }
        Collections.sort(result, new FreqComparator());
        if (maxTerms < result.size()) {
            result = result.subList(0, maxTerms);
        }
        return result;
    }

    /* Class for objects to hold the term/freq pairs */
    static class TermFrq {
        private String term;
        private int freq;

        public TermFrq(String term, int freq) {
            this.term = term;
            this.freq = freq;
        }

        public String getTerm() { return this.term; }
        public int getFreq() { return this.freq; }
    }

    /* Comparator to compare the objects by frequency */
    class FreqComparator implements Comparator {
        public int compare(Object pair1, Object pair2) {
            int f1 = ((TermFrq) pair1).getFreq();
            int f2 = ((TermFrq) pair2).getFreq();
            if (f1 > f2) return 1;
            else if (f1 < f2) return -1;
            else return 0;
        }
    }
    ```

    Explanations and corrections I will very much appreciate, and if someone has done term frequency extraction a better way, I am open to all suggestions. Please help! Thanks!

  • How do the operators < and > work with pointers?

    - by Øystein
    Just for fun, I had a std::list of const char*, each element pointing to a null-terminated text string, and ran std::list::sort() on it. As it happens, it sort of (no pun intended) did not sort the strings. Considering that it was working on pointers, that makes sense. According to the documentation of std::list::sort(), it (by default) uses operator< between the elements to compare.

    Forgetting about the list for a moment, my actual question is: how do the comparison operators (>, <, ==, <=) work on pointers in C++ and C? Do they simply compare the actual memory addresses?

    ```cpp
    char* p1 = (char*) 0xDAB0BC47;
    char* p2 = (char*) 0xBABEC475;
    ```

    e.g. on a 32-bit, little-endian system, is p1 > p2 because 0xDAB0BC47 > 0xBABEC475? Testing seems to confirm this, but I thought it'd be good to put it on StackOverflow for future reference. C and C++ both do some weird things to pointers, so you never really know...
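
    As a side note, the same address-based behaviour is observable in any language that exposes raw pointers; a small illustrative C# sketch (unsafe code, compiled with /unsafe):

    ```csharp
    using System;

    class PointerDemo
    {
        static unsafe void Main()
        {
            int[] data = new int[2];
            fixed (int* a = data)
            {
                int* b = a + 1;                          // one element further along
                Console.WriteLine(b > a);                // True: b holds the higher address
                Console.WriteLine((ulong)b - (ulong)a);  // 4: one int apart in memory
            }
        }
    }
    ```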

  • Diff/Merge functionality for objects (not files!)

    - by gehho
    I have a collection of objects of the same type; let's call it DataItem. The user can view and edit these items in an editor. It should also be possible to compare and merge different items, i.e. some sort of diff/merge for DataItem instances. The DIFF functionality should compare all (relevant) properties/fields of the items and detect possible differences. The MERGE functionality should then be able to merge two instances by applying selected differences to one of the items. For example (pseudo objects):

    ```
    DataItem1 {                  DataItem2 {
        Prop1 = 10                   Prop1 = 10
        Prop2 = 25                   Prop2 = 13
        Prop3 = 0                    Prop3 = 5
        Coll = { 7, 4, 8 }           Coll = { 7, 4, 8, 12 }
    }                            }
    ```

    Now the user should be presented with a list of differences (i.e. Prop2, Prop3, and Coll) and should be able to select which differences he wants to eliminate by assigning the value from one item to the other. He should also be able to choose whether he wants to assign the value from DataItem1 to DataItem2 or vice versa. Are there common practices which should be used to implement this functionality? Since the same editor should also provide undo/redo functionality (using the Command pattern), I was thinking about reusing the ICommand implementations, because both scenarios basically deal with property assignments, collection changes, and so on... My idea was to create Difference objects with ICommand properties which can be used to perform a merge operation for that specific Difference. Btw: the programming language will be C# with .NET 3.5 SP1/4.0. However, I think this is more of a language-independent question. Any design pattern/idea/whatsoever is welcome!
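
    A sketch of the command-based idea the question describes, with hypothetical types (one plausible shape, not a worked-out design): each detected Difference carries one ready-made command per merge direction, built from the same property-assignment commands the editor's undo/redo already uses.

    ```csharp
    using System;

    public interface ICommand
    {
        void Execute();
        void Undo();
    }

    // Assigns a single property value; reusable by undo/redo and by merge.
    public class SetPropertyCommand<T> : ICommand
    {
        private readonly Action<T> setter;
        private readonly T oldValue;
        private readonly T newValue;

        public SetPropertyCommand(Action<T> setter, T oldValue, T newValue)
        {
            this.setter = setter;
            this.oldValue = oldValue;
            this.newValue = newValue;
        }

        public void Execute() { setter(newValue); }
        public void Undo() { setter(oldValue); }
    }

    // One detected difference between two DataItem instances, carrying a
    // command for each merge direction.
    public class Difference
    {
        public string PropertyName { get; private set; }
        public ICommand ApplyLeftToRight { get; private set; } // item1's value onto item2
        public ICommand ApplyRightToLeft { get; private set; } // item2's value onto item1

        public Difference(string propertyName, ICommand leftToRight, ICommand rightToLeft)
        {
            PropertyName = propertyName;
            ApplyLeftToRight = leftToRight;
            ApplyRightToLeft = rightToLeft;
        }
    }
    ```

    Diffing then reduces to walking the relevant properties of both items and emitting a Difference wherever the values differ; the user's selection decides which of the two commands is executed, and the executed command can be pushed onto the editor's existing undo stack.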

  • Why is this Java Calendar comparison bad?

    - by joe7pak
    Hello folks. I'm having an inexplicable problem with the Java Calendar class when trying to compare two dates. I'm trying to compare two Calendars, determine whether their difference is more than 1 day, and do things based on that difference or not. But it doesn't work. If I do this with the two dates:

    ```java
    String currDate = aCurrentUTCCalendar.getTime().toString();
    String localDate = aLocalCalendar.getTime().toString();
    ```

    I get these results:

    ```
    currDate  = "Thu Jan 06 05:58:00 MST 2010"
    localDate = "Tue Jan 05 00:02:00 MST 2010"
    ```

    This is correct. But if I do this:

    ```java
    long curr = aCurrentUTCCalendar.getTime().getTime();
    long local = aLocalCalendar.getTime().getTime();
    ```

    I get these results (in milliseconds since the epoch):

    ```
    curr  = -125566110120000
    local = 1262674920000
    ```

    Since there is only about a 30-hour difference between the two, the magnitudes are vastly different, not to mention that annoying negative sign. This causes problems if I do this:

    ```java
    long day = 60 * 60 * 24 * 1000; // 86400000 millis, one day
    if (local - curr > day) {
        // do something
    }
    ```

    What's wrong? Why are the getTime().toString() calls correct, but the getTime().getTime() calls vastly different? I'm using JDK 1.6_06 on WinXP. I can't upgrade the JDK for various reasons.

  • Superseding the user's need to press Enter after inputting a string in Python 3.0

    - by Cimex
    I've been attempting to create a simple Rock, Paper, Scissors game in Python 3.0, a very standard task for anybody learning programming. But as I finish up to a point, I think, "wow, that'd be awesome", or, "it'd be cool to do this!", so I keep building upon the project... I've developed a menu, a single-player game vs. the computer, and just finished the multiplayer game. But I've realized that the multiplayer game isn't very effective; it just doesn't work like the analog version of the game. Currently, it asks for player 1's input, then player 2's input, compares them, and spits out the result and the current score. What I'd rather have happen is that the program asks for both players' input at the same time, and both players enter their choices at the same time. I understand that I can easily do that by grabbing the first and second characters of the combined answer and comparing the two inputs; that part is easy. But what I'd really like is that after both players enter their one-character answers (r for rock, p for paper, or s for scissors), the program submits the input automatically, without anyone needing to press Enter: the input would be triggered by the fact that 2 characters have been entered. I guess my question is: is there any way to dictate what can be used as 'Enter' for an input?
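
    In Python this usually means reading single keypresses instead of using input(): msvcrt.getch() on Windows, or the termios/tty modules on Unix. For comparison, here is the concept sketched in C#, where Console.ReadKey returns as soon as one key is pressed, with no Enter required (illustrative, not the poster's game):

    ```csharp
    using System;

    class Rps
    {
        static void Main()
        {
            Console.Write("Player 1, then player 2 (r/p/s, no Enter needed): ");

            // ReadKey(true) returns after a single keypress and suppresses
            // the echo, so each player's choice stays hidden from the other.
            char p1 = Console.ReadKey(true).KeyChar;
            char p2 = Console.ReadKey(true).KeyChar;

            Console.WriteLine();
            Console.WriteLine("Player 1 chose {0}, player 2 chose {1}", p1, p2);
        }
    }
    ```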

  • How to convert Big Endian and how to flip the highest bit?

    - by Robert Frank
    I am using a TStream to read binary data (thanks to this post: http://stackoverflow.com/questions/2878180/how-to-use-a-tfilestream-to-read-2d-matrices-into-dynamic-array). My next problem is that the data is big-endian. From my reading, the Swap() method is seemingly deprecated. How would I swap the types below?

    - 16-bit two's complement binary integer
    - 32-bit two's complement binary integer
    - 64-bit two's complement binary integer
    - IEEE single-precision floating point (is IEEE data affected by big-endianness?)

    And finally, since the data is unsigned, the creators of this dataset have stored the unsigned values as signed integers (excluding the IEEE values). They instruct that one need only add an offset (2^15, 2^31, or 2^63) to recover the unsigned data. But they note that flipping the most significant bit is the fastest way to do that. How does one efficiently flip the most significant bit of a 16-, 32-, or 64-bit integer?

    So, if the data on disk (16-bit) is "85 FB", the desired result after reading, swapping, and bit-flipping would be 1531. Is there a way to accomplish the swapping and bit-flipping with generics, so it fits into the generic answer at the link above?

    Yes, kids, THIS is how scientific astronomical data is stored by NASA, ESO, and all professional astronomers. This FITS standard is considered by some to be one of the most successful standards ever created, in its proliferation and flexibility!
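
    The question is about Delphi, but both operations are language-agnostic, so here is the 16-bit case sketched in C# to show the arithmetic (the same pattern extends to 32 and 64 bits):

    ```csharp
    using System;

    class BigEndianDemo
    {
        static void Main()
        {
            byte[] onDisk = { 0x85, 0xFB };  // big-endian bytes as stored in the file

            // 1. Byte swap: assemble the value most-significant byte first.
            ushort raw = (ushort)((onDisk[0] << 8) | onDisk[1]);  // 0x85FB

            // 2. Flip the most significant bit: a single XOR, equivalent to
            //    adding 2^15 to the signed two's complement interpretation.
            ushort value = (ushort)(raw ^ 0x8000);                // 0x05FB

            Console.WriteLine(value);  // prints 1531, as in the question
        }
    }
    ```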

  • Generic binary search in C#

    - by Pro_Zeck
    Below is my generic binary search. It works OK with an integer array: it finds all the elements in it. But the problem arises when I use a string array to find string data: it runs OK for the first- and last-index elements, but I can't find the middle elements.

    ```csharp
    Stringarray = new string[] { "b", "a", "ab", "abc", "c" };

    public static void BinarySearch<T>(T[] array, T searchFor, Comparer<T> comparer)
    {
        int high, low, mid;
        high = array.Length - 1;
        low = 0;

        if (array[0].Equals(searchFor))
            Console.WriteLine("Value {0} Found At Index {1}", array[0], 0);
        else if (array[high].Equals(searchFor))
            Console.WriteLine("Value {0} Found At Index {1}", array[high], high);
        else
        {
            while (low <= high)
            {
                mid = (high + low) / 2;
                if (comparer.Compare(array[mid], searchFor) == 0)
                {
                    Console.WriteLine("Value {0} Found At Index {1}", array[mid], mid);
                    break;
                }
                else
                {
                    if (comparer.Compare(searchFor, array[mid]) > 0)
                        high = mid + 1;
                    else
                        low = mid + 1;
                }
            }
            if (low > high)
            {
                Console.WriteLine("Value Not Found In the Collection");
            }
        }
    }
    ```
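
    For reference, the conventional loop is sketched below; two things differ from the code above. First, the input must already be sorted according to the same comparer (the example string array is not sorted, which by itself breaks the search for middle elements). Second, when searchFor is greater than the middle element the search must continue in the upper half (low = mid + 1), not move high upward.

    ```csharp
    using System.Collections.Generic;

    static class Search
    {
        // Conventional generic binary search; returns the index, or -1 if absent.
        // Precondition: array is sorted ascending according to comparer.
        public static int BinarySearch<T>(T[] array, T searchFor, IComparer<T> comparer)
        {
            int low = 0;
            int high = array.Length - 1;

            while (low <= high)
            {
                int mid = low + (high - low) / 2;  // avoids overflow of (low + high)
                int cmp = comparer.Compare(array[mid], searchFor);

                if (cmp == 0) return mid;          // found
                if (cmp < 0) low = mid + 1;        // target is in the upper half
                else high = mid - 1;               // target is in the lower half
            }
            return -1;                             // not found
        }
    }
    ```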

  • Sorting arrays in Java

    - by user360706
    Write a static method in Java:

    ```java
    public static void sortByFour(int[] arr)
    ```

    that receives as a parameter an array full of non-negative numbers (zero or positive) and sorts the array in the following way: at the beginning of the array, all the numbers that divide by four without a remainder will appear; after them, all the numbers in the array that divide by 4 with a remainder of 1; after them, all the numbers in the array that divide by 4 with a remainder of 2; and at the end of the array, all the rest (those that divide by 4 with a remainder of 3). (The order of the numbers within each group doesn't matter.) The method must be as efficient as possible. This is what I wrote, but unfortunately it doesn't work well... :(

    ```java
    public static void swap(int[] arr, int left, int right) {
        int temp = arr[left];
        arr[left] = arr[right];
        arr[right] = temp;
    }

    public static void sortByFour(int[] arr) {
        int left = 0;
        int right = (arr.length - 1);
        int mid = (arr.length / 2);

        while (left < right) {
            if ((arr[left] % 4) > (arr[right] % 4)) {
                swap(arr, left, right);
                right--;
            }
            if ((arr[left] % 4) == (arr[right] % 4))
                left++;
            else
                left++;
        }
    }
    ```

    Can someone please help me by fixing my code so that it works well, or by rewriting it?
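
    Since only the remainder groups matter and the order within a group doesn't, one common approach is a single bucketing pass, sketched here in C# (the translation to the assignment's Java is mechanical): group the values by value % 4, then write the groups back in order, which is O(n) time with O(n) extra space.

    ```csharp
    using System.Collections.Generic;

    static class RemainderSort
    {
        public static void SortByFour(int[] arr)
        {
            // One bucket per remainder class 0..3.
            var buckets = new List<int>[4];
            for (int r = 0; r < 4; r++) buckets[r] = new List<int>();

            foreach (int value in arr)
                buckets[value % 4].Add(value);   // group by remainder

            // Write the groups back: remainder 0, then 1, then 2, then 3.
            int i = 0;
            for (int r = 0; r < 4; r++)
                foreach (int value in buckets[r])
                    arr[i++] = value;
        }
    }
    ```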

  • Contrary to Python 3.1 Docs, hash(obj) != id(obj). So which is correct?

    - by Don O'Donnell
    The following is from the Python v3.1.2 documentation.

    From The Python Language Reference, Section 3.3.1, Basic Customization:

        object.__hash__(self) ... User-defined classes have __eq__() and __hash__() methods by default; with them, all objects compare unequal (except with themselves) and x.__hash__() returns id(x).

    From The Glossary:

        hashable ... Objects which are instances of user-defined classes are hashable by default; they all compare unequal, and their hash value is their id().

    This is true up through version 2.6.5:

    ```
    Python 2.6.5 (r265:79096, Mar 19 2010 21:48:26) ...
    >>> class C(object): pass
    ...
    >>> c = C()
    >>> id(c)
    11335856
    >>> hash(c)
    11335856
    ```

    But in version 3.1.2:

    ```
    Python 3.1.2 (r312:79149, Mar 21 2010, 00:41:52) ...
    >>> class C: pass
    ...
    >>> c = C()
    >>> id(c)
    11893680
    >>> hash(c)
    743355
    ```

    So which is it? Should I report a documentation bug or a program bug? And if it's a documentation bug, and the default hash() value for a user class instance is no longer the same as the id() value, then it would be interesting to know what it is or how it is calculated, and why it was changed in version 3.

  • Finding missing files by checksum

    - by grw
    Hi there, I'm doing a large data migration between two file systems (call them F1 and F2) on a Linux system, which will necessarily involve copying the data verbatim into a differently-structured hierarchy on F2 and changing the file names. I'd like to write a script to generate a list of files which are in F1 but not in F2, i.e. the ones which weren't copied by the migration script into the new hierarchy, so that I can go back and migrate them manually. Unfortunately, for reasons not worth going into, the migration script can't be modified to list the files that it doesn't migrate. My question differs from this previously answered one because I cannot rely on filenames for comparison. I know the basic outline of the process would be:

    1. Generate a list of checksums for all files, recursing through F1.
    2. Do the same for F2.
    3. Compare the lists and generate a negative intersection of the checksums, ignoring the file names, to find files which are in F1 but not in F2.

    I'm kind of stuck getting past that stage, so I'd appreciate any pointers on which tools to use. I think I need to use the 'comm' command to compare the lists of file checksums, but since md5sum, sha512sum and the like put the file name next to the checksum, I can't see a way to get a useful comparison out of them. Maybe awk is the way to go? I'm using Red Hat Enterprise Linux 5.x. Thanks.
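
    On the tools side, cutting the filename off is exactly what awk is for: md5sum prints "checksum  filename", so something like find F1 -type f -exec md5sum {} + | awk '{print $1}' | sort > f1.sums (and the same for F2) produces comm-ready input, after which comm -23 f1.sums f2.sums lists checksums only present in F1. For illustration, the same negative intersection expressed as a C# sketch, which also keeps the F1 paths that comm alone can't give back:

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Security.Cryptography;

    class ChecksumDiff
    {
        static string HashOf(string path)
        {
            using (var md5 = MD5.Create())
            using (var stream = File.OpenRead(path))
                return BitConverter.ToString(md5.ComputeHash(stream));
        }

        static void Main(string[] args)
        {
            // Every checksum present anywhere under F2 (args[1]); names ignored.
            var inF2 = new HashSet<string>();
            foreach (var path in Directory.EnumerateFiles(args[1], "*", SearchOption.AllDirectories))
                inF2.Add(HashOf(path));

            // Print each F1 (args[0]) file whose content never reached F2.
            foreach (var path in Directory.EnumerateFiles(args[0], "*", SearchOption.AllDirectories))
                if (!inF2.Contains(HashOf(path)))
                    Console.WriteLine(path);
        }
    }
    ```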

  • "end()" iterator for back inserters?

    - by Thanatos
    For iterators such as those returned from std::back_inserter(), is there something that can be used as an "end" iterator? This seems a little nonsensical at first, but I have an API which is:

    ```cpp
    template<typename InputIterator, typename OutputIterator>
    void foo(InputIterator input_begin, InputIterator input_end,
             OutputIterator output_begin, OutputIterator output_end);
    ```

    foo performs some operation on the input sequence, generating an output sequence (whose length is known to foo, but may or may not be equal to the input sequence's length). Taking the output_end parameter is the odd part: std::copy doesn't do this, for example, and assumes you're not going to pass it garbage. foo does it to provide range checking: if you pass a range that is too small, it throws an exception, in the name of defensive programming (instead of potentially overwriting random bits in memory).

    Now, say I want to pass foo a back inserter, specifically one from a std::vector, which has no limit outside of memory constraints. I still need an "end" iterator; in this case, something that will never compare equal. (Or, if I had a std::vector with a restriction on length, perhaps it might sometimes compare equal?) How do I go about doing this? I do have the ability to change foo's API: is it better to not check the range, and instead provide an alternate means to get the required output range? (Which would be needed anyway for raw arrays, but not required for back inserters into a vector.) This would seem less robust, but I'm struggling to make the "robust" version (above) work.

  • Image swapping with click on <tr>

    - by Jonny Sooter
    I've edited this piece of code from the DataTables plugin, which makes a <tr> clickable and pops open another <tr> with details from the clicked row. This piece of code is the event listener for opening and closing. When the details are open, the img should be images/details_close.png; when they are closed, the img should be images/details_open.png; and the swap should occur as rows are opened and closed by clicking. What is happening is that no swap goes on when a row is opened or closed; I still only get details_open.png. I don't know if I'm just not selecting the img tag properly or what's going on. Link to project: http://www2.kent.k12.wa.us/ksd/it/www/mobile/elementary.html

    ```javascript
    $('#example tbody').on('click', 'td', function (e) {
        var myImage = $(this).find("img");
        var nTr = $(this).parents('tr')[0];

        if (oTable.fnIsOpen(nTr)) {
            /* This row is already open - close it */
            myImage.src = "images/details_open.png";
            oTable.fnClose(nTr);
        } else {
            /* Open this row */
            myImage.src = "images/details_close.png";
            oTable.fnOpen(nTr, fnFormatDetails(oTable, nTr), 'details');
        }
    });
    ```

  • Performance issue finding weekdays over a given period

    - by Oysio
    I have some methods that return the number of weekdays between two given dates. Calling these methods becomes very expensive when the two dates lie years apart, so I'm wondering how they could be refactored more efficiently. The returned result is correct, but I feel that the iPhone processor is struggling to keep up, and consequently it freezes up the application when I call these methods over a period of, say, 10 years. Any suggestions?

    ```objc
    // daysList contains all weekdays that need to be found between the two dates
    - (NSInteger)numberOfWeekdaysFromDaysList:(NSMutableArray *)daysList
                             startingFromDate:(NSDate *)startDate
                                       toDate:(NSDate *)endDate
    {
        NSInteger retNumdays = 0;
        for (Day *dayObject in [daysList objectEnumerator]) {
            if ([dayObject isChecked]) {
                retNumdays += [self numberOfWeekday:[dayObject weekdayNr]
                                   startingFromDate:startDate
                                             toDate:endDate];
            }
        }
        return retNumdays;
    }

    - (NSInteger)numberOfWeekday:(NSInteger)day
                startingFromDate:(NSDate *)startDate
                          toDate:(NSDate *)endDate
    {
        NSInteger numWeekdays = 0;
        NSDate *nextDate = startDate;
        NSComparisonResult result = [endDate compare:nextDate];

        // Do while nextDate is in the past
        while (result == NSOrderedDescending || result == NSOrderedSame) {
            if ([NSDate weekdayFromDate:nextDate] == day) {
                numWeekdays++;
            }
            nextDate = [nextDate dateByAddingDays:1];
            result = [endDate compare:nextDate];
        }
        return numWeekdays;
    }
    ```
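
    The per-day loop is the expensive part; counting how often a given weekday occurs between two dates has a closed-form answer (full weeks, plus a check on the leftover days), making each query O(1) regardless of the span. The arithmetic, sketched in C# since it is language-agnostic (translating it to NSDate/NSCalendar is mechanical):

    ```csharp
    using System;

    class WeekdayCount
    {
        // Number of times `weekday` occurs in [start, end], inclusive.
        static int CountWeekday(DateTime start, DateTime end, DayOfWeek weekday)
        {
            int totalDays = (end.Date - start.Date).Days + 1;
            int fullWeeks = totalDays / 7;   // each full week contributes exactly one hit
            int remainder = totalDays % 7;   // 0..6 leftover days starting at `start`

            // Days from `start` until the first occurrence of `weekday`.
            int offset = ((int)weekday - (int)start.DayOfWeek + 7) % 7;

            return fullWeeks + (offset < remainder ? 1 : 0);
        }

        static void Main()
        {
            // Mondays from Mon 2010-03-01 through Mon 2010-03-08: two of them.
            Console.WriteLine(CountWeekday(new DateTime(2010, 3, 1),
                                           new DateTime(2010, 3, 8),
                                           DayOfWeek.Monday));  // 2
        }
    }
    ```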

  • Comparing objects and inheritance

    - by ereOn
    Hi, in my program I have the following class hierarchy:

    ```cpp
    class Base // Base is an abstract class
    {
    };

    class A : public Base
    {
    };

    class B : public Base
    {
    };
    ```

    I would like to do the following:

    ```cpp
    void foo(const Base& one, const Base& two)
    {
        if (one == two)
        {
            // Do something
        }
        else
        {
            // Do something else
        }
    }
    ```

    I have issues regarding operator==() here. Of course comparing an instance of A with an instance of B makes no sense, but comparing two instances of Base should be possible. (You can't compare a Dog and a Cat; however, you can compare two Animals.) I would like the following results: A == B is false; A == A is true or false, depending on the effective values of the two instances; B == B is true or false, depending on the effective values of the two instances. My question is: is this a good design/idea? Is this even possible? What functions should I write/overload? My apologies if the question is obviously stupid or easy; I have a serious fever right now and my thinking abilities are somewhat limited :/ Thank you.
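
    For what it's worth, the usual shape of the solution is: instances are unequal unless their dynamic types match exactly, and only then is a type-specific value comparison performed. In C++ the analogue is a non-member operator== that checks typeid(one) == typeid(two) and then calls a virtual comparison function; here it is sketched in C#, where the same question arises with Equals:

    ```csharp
    public abstract class Base
    {
        public override bool Equals(object obj)
        {
            // Different concrete types (an A and a B) can never be equal.
            if (obj == null || obj.GetType() != GetType()) return false;
            return ValueEquals((Base)obj);
        }

        // Subclasses compare their own state against an instance that is
        // guaranteed to have the same concrete type.
        protected abstract bool ValueEquals(Base other);

        // Weak but consistent: equal objects share a type, hence a hash.
        public override int GetHashCode() { return GetType().GetHashCode(); }
    }

    public class A : Base
    {
        public int Value;
        protected override bool ValueEquals(Base other) { return Value == ((A)other).Value; }
    }
    ```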

  • Phonegap: Will my mobile app 'feel' faster or slower once ported to phonegap?

    - by user15872
    So I'm designing everything in mobile Safari, and I know that PhoneGap is essentially a stripped WebView, but...

    Question: Will my application run better in PhoneGap? (revised below)

    a) I imagine my navigation and core app will load faster, as the scripts and images are on the hard drive. Is this true?
    b) I assume, since they've been working on it for 2 years now, that they may have made some optimizations to make it quicker than just an average Safari window. Is this true?

    (Assuming both HTML5/JS/CSS code bases are pretty much the same and the app is running on iOS.)

    Update: Sorry, I meant to compare apples to slightly different apples.

    Question 1 revised: Will my app see any performance benefits running within a PhoneGap environment vs. standard mobile Safari? (comparing mobile to mobile)

    1b) In what ways, other than loading time, has PhoneGap optimized performance over standard mobile Safari?

    Follow-ups:
    1) Are there any pitfalls, other than large libraries, that may cause PhoneGap to suffer a serious performance hit vs. standard mobile Safari?
    2) Can I mix native and WebView rendering? (i.e. the top half of my app is rendered with HTML/CSS/JS and the bottom half is native)

  • How to block a username after 3 invalid password attempts in ASP.NET

    - by shihab
    I used the following code for checking the user name and password, and I want to block the user name after 3 invalid password attempts. What should I add to my code?

    ```csharp
    MD5CryptoServiceProvider md5hasher = new MD5CryptoServiceProvider();
    Byte[] hashedDataBytes;
    UTF8Encoding encoder = new UTF8Encoding();
    hashedDataBytes = md5hasher.ComputeHash(encoder.GetBytes(TextBox3.Text));
    StringBuilder hex = new StringBuilder(hashedDataBytes.Length * 2);
    foreach (Byte b in hashedDataBytes)
    {
        hex.AppendFormat("{0:x2}", b);
    }
    string hash = hex.ToString();

    SqlConnection con = new SqlConnection("Data Source=Shihab-PC;Initial Catalog=test;User ID=SOMETHING;Password=SOMETHINGELSE");
    SqlDataAdapter ad = new SqlDataAdapter("select password from Users where UserId='" + TextBox4.Text + "'", con);
    DataSet ds = new DataSet();
    ad.Fill(ds, "Users");

    SqlDataAdapter ad2 = new SqlDataAdapter("select UserId from Users ", con);
    DataSet ds2 = new DataSet();
    ad2.Fill(ds2, "Users");

    Session["id"] = TextBox4.Text.ToString();
    if ((string.Compare((ds.Tables["Users"].Rows[0][0].ToString()), hash)) == 0)
    {
        if (string.Compare(TextBox4.Text, (ds2.Tables["Users"].Rows[0][0].ToString())) == 0)
        {
            Response.Redirect("actioncust.aspx");
        }
        else
        {
            Response.Redirect("actioncust.aspx");
        }
    }
    else
    {
        Label2.Text = "Invalid Login";
    }
    con.Close();
    ```
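
    One straightforward shape for the lockout, sketched under assumptions the question doesn't show: hypothetical FailedAttempts (int) and IsLocked (bit) columns on the Users table. Increment the counter on each failed login, refuse logins once it reaches 3, and reset it on success. Note the parameterized queries; the string-concatenated query in the code above is open to SQL injection.

    ```csharp
    using System.Data.SqlClient;

    class LoginService
    {
        // Sketch only: assumes FailedAttempts and IsLocked columns exist.
        public bool TryLogin(SqlConnection con, string userId, string passwordHash)
        {
            var cmd = new SqlCommand(
                "select Password, IsLocked from Users where UserId = @id", con);
            cmd.Parameters.AddWithValue("@id", userId);  // parameterized: no injection

            string storedHash;
            bool locked;
            using (var reader = cmd.ExecuteReader())
            {
                if (!reader.Read()) return false;        // unknown user
                storedHash = reader.GetString(0);
                locked = reader.GetBoolean(1);
            }

            if (locked) return false;                    // blocked after 3 bad attempts

            bool ok = (storedHash == passwordHash);
            var update = new SqlCommand(ok
                ? "update Users set FailedAttempts = 0 where UserId = @id"
                : "update Users set FailedAttempts = FailedAttempts + 1, " +
                  "IsLocked = case when FailedAttempts + 1 >= 3 then 1 else 0 end " +
                  "where UserId = @id", con);
            update.Parameters.AddWithValue("@id", userId);
            update.ExecuteNonQuery();

            return ok;
        }
    }
    ```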

  • Sorting a very large text file in Java

    - by Alice
    Hi, I have a large text file I need to sort in Java. The format is: word [tab] frequency [newline]. The suggested algorithm for sorting is:

    1. Read some of the file, filtering for purely alphabetic words.
    2. Once you have X number of alphabetic words, call Collections.sort and write the result to a file. Repeat until you have finished reading the file.
    3. Start reading two sorted files, comparing line by line for the word with the higher frequency, and write to a new file at the same time, so as not to load much into memory.
    4. Repeat until all files are merged into one large file.

    Right now I've divided the large file into smaller ones (sorted by descending frequency) with 10,000 lines each. I know I need to somehow merge these files back together, but I'm not sure how to go about it. I've created a LinkedList to keep track of all the files created. The algorithm says to compare each line in the two files, but I've tried a case where, say, file1 = 8,6,5,3,1 and file2 = 9,8,8,8,8. If I compare them line by line I get file3 = 9,8,8,6,8,5,8,3,8,1, which is incorrectly sorted (it should be in decreasing order). I think I'm misunderstanding some part of the algorithm. If someone could point out what I should do instead, I'd greatly appreciate it. Thanks.

    edit: Yes, this is an assignment. We aren't allowed to increase memory, unfortunately :(
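
    The specific bug in the example is that both files are being advanced every round; a two-way merge writes the larger of the two current lines and advances only the file it took that line from. A C# sketch of the loop over two descending-sorted sequences (the Java version has the same shape):

    ```csharp
    using System.Collections.Generic;

    static class Merger
    {
        // Merge two descending-sorted sequences into one descending-sorted output.
        public static IEnumerable<int> MergeDescending(IEnumerator<int> a, IEnumerator<int> b)
        {
            bool hasA = a.MoveNext();
            bool hasB = b.MoveNext();

            while (hasA && hasB)
            {
                if (a.Current >= b.Current)
                {
                    yield return a.Current;
                    hasA = a.MoveNext();  // advance only the side we consumed
                }
                else
                {
                    yield return b.Current;
                    hasB = b.MoveNext();
                }
            }

            // Drain whichever file still has lines left.
            while (hasA) { yield return a.Current; hasA = a.MoveNext(); }
            while (hasB) { yield return b.Current; hasB = b.MoveNext(); }
        }
    }
    ```

    With file1 = 8,6,5,3,1 and file2 = 9,8,8,8,8 this produces 9,8,8,8,8,8,6,5,3,1, and it only ever holds the two current lines in memory at a time.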

  • Syncing objects between two devices with different system times

    - by Mike Weller
    Hi there. I'm syncing objects between two devices. Objects have a lastModified property. If both devices have modified an object, then during the next sync the version of the object with the most recent lastModified is chosen on both devices. So we don't do fine-grained merging, only 'most recent version wins' merging.

    The problem is this: when one device receives a list of changed objects, it can't reliably compare the lastModified of received objects to its own, because the system times on the two devices may differ. I considered having each device send its current date/time during the sync; each side would then calculate the difference between the remote time and the local time to compare the dates properly. But if there is lag between sending a date and the remote device receiving it, this causes incorrect comparisons for objects that were modified at the same time (or very close together in time), i.e. both devices think the remote object is newer, and they end up with different objects.

    I hope I have explained this clearly enough. There must be a common solution to this kind of problem, but my brain isn't coming up with anything. Any suggestions? Thanks in advance...
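
    One common fix is to estimate the clock offset NTP-style during the sync handshake, so the lag the question worries about cancels out to the extent that it is symmetric: each side records when the message was sent and received, and the offset falls out of the four timestamps. A hedged C# sketch of the arithmetic:

    ```csharp
    using System;

    class ClockOffset
    {
        // t1: local send, t2: remote receive, t3: remote send, t4: local receive.
        // Returns how far ahead the remote clock is; symmetric network delay
        // cancels out of this expression.
        static TimeSpan Estimate(DateTime t1, DateTime t2, DateTime t3, DateTime t4)
        {
            long ticks = ((t2 - t1) + (t3 - t4)).Ticks / 2;
            return TimeSpan.FromTicks(ticks);
        }

        static void Main()
        {
            // Example: remote clock 30 s ahead, 100 ms of lag each way.
            var t1 = new DateTime(2010, 1, 5, 12, 0, 0);
            var t2 = t1.AddSeconds(30).AddMilliseconds(100); // remote receives
            var t3 = t2;                                     // remote replies at once
            var t4 = t1.AddMilliseconds(200);                // reply arrives locally
            Console.WriteLine(Estimate(t1, t2, t3, t4));     // 00:00:30
        }
    }
    ```

    With the offset known, each device can translate the other's lastModified values into its own clock before comparing; ties that remain within the estimate's error margin still need a deterministic tie-breaker (say, the lower device ID wins) so both sides pick the same version.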

  • Automated browser testing: How to test JavaScript in web pages?

    - by Dave
    I am trying to write an application that will test a series of web pages programmatically. The web pages being tested have JavaScript embedded within them which alters the structure of the HTML when it completes execution. The goal is then to take the final HTML (after execution of the embedded JavaScript) and compare it against a known output. Essentially, the input-to-output flow of the test application is:

    URL ---[retrieve HTML]--- HTML ---[execute JS, then compare]--- PASS/FAIL

    Here is the challenge: I have been unable to find a solution that can take the HTML I retrieve from the URL and process the JavaScript, as a browser would, to generate the final HTML a user might see from "View Source" on the same page within the browser. It would be very surprising if this sort of approach has not been tried before, so I'm hoping someone out there knows of a fitting solution to this application/problem. If at all possible, I'm hoping for a solution that integrates with .NET (I've tried using the WebBrowser control, with no luck). However, if there is an existing third-party application that can do exactly this, that would be quite acceptable. Thanks in advance for the suggestions! Dave

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 3

    - by Tarun Arora
    Welcome back once again. In Part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing an application is important, the test tools available in Visual Studio Ultimate 2010, and various test rig topologies. In Part 2 of Load and Web Performance Testing using Visual Studio 2010 I discussed the details of web performance and load tests, as well as why it's important to follow a goal-based pattern while performance testing your application. In Part 3 I'll be discussing test result analysis, test result drill-through, test report generation, test run comparison, the ASP.NET Profiler, and some closing thoughts.

    Test Results – I see some creepy worms!

    In Part 2 we put together a web performance test and a load test; let's run the load test to see how the web site responds to the load simulation. While the load test is running, you will be able to see close-to-real-time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways:

    - Monitor a running load test: a condensed set of the performance counter data is maintained in memory. To prevent the results memory requirements from growing unbounded, up to 200 samples are maintained for each performance counter. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples.
    - After the load test run is completed: the test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. The summary view provides key results in a format that is compact and easy to read; you can also print the load test summary, which is generated after the test has completed or been stopped.
    - Analyse the load test results of a previously run load test: we'll see this in the section where I discuss comparison between two test runs.

    The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, or drill down to the user activity chart, where you can hover over a point to see more details of the test run.

    Generate Report => Test Run Comparisons

    The level of reporting you can generate using the Load Test Analyser is astonishing. You have the option to create Excel reports and conduct side-by-side analysis of two test results, or to track trend analysis. The tool also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET Profiler report to conduct further analysis as well. 'View Data and Diagnostic Attachments' opens the Choose Diagnostic Data Adapter Attachment dialog box to select an adapter to analyse the result type; for example, you can select an IntelliTrace adapter, click OK, and open the IntelliTrace summary for the test agent that was used in the load test.

    Compare results

    This creates a set of reports that compares the data from two load test results using tables and bar charts. (The accompanying screen shots are from the MSDN documentation; I would highly recommend exploring the wealth of knowledge available on MSDN.)
    Leaving Thoughts

    While load testing the application with an excessive load for a long duration, I managed to bring IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that IIS had run out of threads, as all of them were busy processing existing requests. One easy way of fixing this is to increase the default number of allocated threads, but that might just escalate the problem; the better suggestion is to drill down to the actual root cause.

    Whenever garbage collection runs, it stops page processing, so all requests that come in during that period are queued up; realistically, though, a garbage collection completes in a fraction of a second. To understand this better, let's look at the .NET heap. It is divided into the large object heap and the small object heap: anything greater than 85 KB in size is allocated on the large object heap. The large object heap is non-compacting, and remember that large objects are expensive to move around, so if you are allocating something on the large object heap, make sure that you really need it! The small object heap, on the other hand, is divided into generations: objects that are supposed to be short-lived live in Gen 0, and long-living objects eventually move to Gen 2 as garbage collection goes through.

    All objects under 85 KB are first assigned to Gen 0. When a new object arrives and finds Gen 0 full, the garbage collection process starts: it marks all the dead objects as valid candidates for deletion to free up memory, and promotes all the remaining objects in Gen 0 to Gen 1. So, in the future, whenever you clean up Gen 1 you have to clean up Gen 0 as well. When you fill up Gen 0 again, the dead objects in Gen 1 are collected, the survivors are moved to Gen 2, and the Gen 0 survivors are moved to Gen 1 to free up Gen 0; but by this time your garbage collection has started to take much more time than it usually does. And, as mentioned earlier, while garbage collection is running, all incoming page requests are queued up. That may explain why page requests are getting queued; apart from this, it could also be the case that you are waiting for a long-running database process to complete.
    How can you address this? Two ways:

    1. Workaround: a 32-bit (x86) processor only allows a maximum of 4 GB of RAM, which means the machine effectively has around 3.4 GB of RAM available. The OS needs about 1.5 GB of RAM to run efficiently, and IIS and the .NET Framework also need their share of memory, leaving your application a heap of around 800 MB to play with. Because team builds by default compile your application as 'Any CPU', the application is built such that it runs in 32-bit mode on an x86 processor and in 64-bit mode on an x64 processor. The problem is that not all applications are really x64-compatible, especially if you are using COM objects or external libraries. So, as a quick win, if you compile your application in x86 mode, by changing the 'Any CPU' selection to 'x86' in the team build, you will be able to run it on an x64 machine in 32-bit mode (WOW: Windows on Windows), and that means you could use 8 GB+ worth of RAM. Taking away everything else, your application then gets a heap of at least roughly 4 GB to play with, which is immense. If you need a heap size of more than 4 GB, you have either built software for NASA or there is something fundamentally wrong with your application.

    2. Solution: now that you have a workaround in place, IIS will not restart the worker process as regularly, which means you can take a breather and start working toward the root cause of the memory leak. But this begs the question: "How do I identify possible memory leaks in my application?" I won't say there is one single tool that can tell you where the memory leak is, but trust me, performance profiling is a great starting point; it definitely gets you going in the right direction. Here is how: start the Performance Wizard and select Instrumentation, which lets you measure function call counts and timings. Before running the performance session, right-click the performance session settings, choose Properties from the context menu to bring up the performance session properties page, and tick the check boxes in the '.NET memory profiling collection' group, namely 'Collect .NET object allocation information' and 'Also collect the .NET object lifetime information'. Now, if you fire off the profiling session on your pages, you will notice that the results allow you to view 'Object Lifetime', which shows the number of objects that made it to Gen 0, Gen 1, Gen 2, the large object heap, etc. Another great feature of the profiler is that if more than 5% of your objects die right after making it to Gen 2, a threshold alert is generated to warn you. You also have the option to view the most expensive methods, and by capturing IntelliTrace data you can drill in and narrow down to the line of code that is the root cause of the problem. So we have seen how crucial memory management is, and how easy Visual Studio Ultimate 2010 makes it to identify and reproduce such problems with the best-of-breed tools in the product.

    Caching

    One of the main ways to improve performance is caching, which basically means telling the web server that, instead of going to the database for each request, it should keep the data on the web server and serve it from there when the user asks for it. BUT that can have consequences!
    Let's look at some code; trust me, caching code is not very intuitive. I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine: the first time the data comes from the database, and the second time it is served from the cache, a significant performance improvement, EXCEPT when two users try to do the same operation and run into each other. That is easy to handle by adding a lock: a user who comes in and finds the cache empty takes the lock and starts to populate the cache, and the concurrency issue is gone. But say you are processing 10 requests per second. By the time I have taken the lock and gone to the database for the results, 9 other users have come in and found the cache key null, so after I have populated the cache they will still go to the database and fetch the results again. The application will still be faster, because the next sets of 10 users will get their data from the cache. BUT if we add another null check after taking the lock and before the actual call to the database, then the 9 users who follow me make no extra trip to the database at all, and that really increases the performance (see the sketch just below). Didn't I say the code won't be very intuitive? Maybe you should leave a comment there: you don't want another developer to come in, wonder why the cache key is checked for null twice, and "fix" it!

    The downside of caching is that you are storing data outside of the database, and that data can be wrong: updates applied to the database put the data cached at the web server out of sync. So how do you invalidate the cache? If you had only one way of updating the data, a single entry point, you could write logic so that every time new data is entered, the cache entry is cleared. But that approach breaks down as soon as you have several ways of feeding data into the system, or your system is scaled out across a farm of web servers. The practical solution is micro-caching: cache the query results for a set time span and invalidate them after that duration. The advantage is that every query for that data within the cached time span makes no call to the database and is served right from the web server, which makes the response immensely quick.

    Figuring out the appropriate micro-caching time span really depends on the application. Say your web site gets 10 requests per second: if you retain cached results for even 1 minute, you will see immense performance gains, cutting roughly 90% of the search hits to the database. Ever wondered why, when you go to e-bookers.com or xpedia.com or yatra.com to book a flight and click on the Book button because the fare seems too exciting, you sometimes get an error message telling you that the fare is no longer valid? Yes, exactly: that is a cache failure! These travel sites and price-comparison engines are not going to hit the database every time you hit the Compare button; the results are served from the cache, because the query results are micro-cached. It is a perfect trade-off: by micro-caching the results the site gains enormous performance benefits, and every once in a while it annoys a customer because a fare has expired.
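
    The snippet the post refers to did not survive extraction; here is a minimal sketch of the double-checked pattern being described (the cache key and the LoadFromDatabase call are illustrative; HttpRuntime.Cache lives in System.Web, Cache.NoSlidingExpiration in System.Web.Caching):

    ```csharp
    private static readonly object CacheLock = new object();

    public DataTable GetSearchResults(string cacheKey)
    {
        // First check: cheap, and almost always a hit once the cache is warm.
        DataTable results = (DataTable)HttpRuntime.Cache[cacheKey];
        if (results == null)
        {
            lock (CacheLock)
            {
                // Second check: the requests that queued up behind the first
                // miss find the cache already populated and skip the database.
                results = (DataTable)HttpRuntime.Cache[cacheKey];
                if (results == null)
                {
                    results = LoadFromDatabase(cacheKey);  // hypothetical data call

                    // Micro-cache: a one-minute absolute expiry invalidates
                    // the entry automatically, bounding how stale it can get.
                    HttpRuntime.Cache.Insert(cacheKey, results, null,
                        DateTime.UtcNow.AddMinutes(1), Cache.NoSlidingExpiration);
                }
            }
        }
        return results;
    }
    ```
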
    But the trade-off works in favour of these sites: they are still able to process 30+ page requests per second, which means they cater to the site's traffic while maybe losing one customer every once in a while to a competitor who is also using a similar caching technique. And what are the odds that the user will not come back to their site sooner or later?

    Resources

    Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs, and videos available on MSDN. You can always make use of Fiddler to debug web performance tests. Some community test extensions and plug-ins available on CodePlex might also be of interest to you.

    The Road Ahead

    Thank you for taking the time to read this blog post; you may also want to read Part I and Part II if you haven't so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions, feedback, suggestions, etc.: please leave a comment. Next up is 'Load Testing in the Cloud', where I'll be exploring the possibilities of running test controllers/agents in the cloud. See you on the other side! Thank you!

  • C#/.NET Little Wonders: The Joy of Anonymous Types

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past Little Wonders posts can be found here.

    In the .NET 3.5 Framework, Microsoft introduced the concept of anonymous types, which provide a way to create quick, compiler-generated types at the point of instantiation. These may seem trivial, but are very handy for concisely creating lightweight, strongly-typed objects containing only read-only properties that can be used within a given scope.

    Creating an Anonymous Type

    In short, an anonymous type is a reference type that derives directly from object and is defined by its set of properties, based on their names, number, types, and order as given at initialization. In addition to just holding these properties, it is also given appropriately overridden implementations of Equals() and GetHashCode() that take all of the properties into account to correctly perform property comparisons and hashing. Also overridden is an implementation of ToString(), which makes it easy to display the contents of an anonymous type instance in a fairly concise manner.

    To construct an anonymous type instance, you use basically the same initialization syntax as with a regular type. So, for example, if we wanted to create an anonymous type to represent a particular point, we could do this:

    ```csharp
    var point = new { X = 13, Y = 7 };
    ```

    Note the similarity between anonymous type initialization and regular initialization. The main difference is that the compiler generates the type name and the properties (as read-only) based on the names and order provided, inferring their types from the expressions they are assigned to.

    It is key to remember that all of those factors (number, names, types, order of properties) determine the anonymous type. This is important, because while these two instances share the same anonymous type:

    ```csharp
    // same names, types, and order
    var point1 = new { X = 13, Y = 7 };
    var point2 = new { X = 5, Y = 0 };
    ```

    These similar ones do not:

    ```csharp
    var point3 = new { Y = 3, X = 5 };        // different order
    var point4 = new { X = 3, Y = 5.0 };      // different type for Y
    var point5 = new { MyX = 3, MyY = 5 };    // different names
    var point6 = new { X = 1, Y = 2, Z = 3 }; // different count
    ```

    Limitations on Property Initialization Expressions

    The expression for a property in an anonymous type initialization cannot be null (though it can evaluate to null) or an anonymous function. For example, the following are illegal:

    ```csharp
    // Null can't be used directly. Null reference of what type?
    var cantUseNull = new { Value = null };

    // Anonymous methods cannot be used.
    var cantUseAnonymousFxn = new { Value = () => Console.WriteLine("Can't.") };
    ```

    Note that the restriction on null is just that you can't use it directly as the expression, because otherwise how would the compiler determine the type? You can, however, use it indirectly, by assigning a null expression such as a typed variable with the value null, or by casting null to a specific type:

    ```csharp
    string str = null;
    var fineIndirectly = new { Value = str };
    var fineCast = new { Value = (string)null };
    ```

    All of the examples above name the properties explicitly, but you can also implicitly name properties if they are being set from a property, field, or variable.
    In these cases, when a field, property, or variable is used alone, and you don't specify a property name assigned to it, the new property will have the same name. For example:

    ```csharp
    int variable = 42;

    // creates two properties named variable and Now
    var implicitProperties = new { variable, DateTime.Now };
    ```

    is the same type as:

    ```csharp
    var explicitProperties = new { variable = variable, Now = DateTime.Now };
    ```

    But this only works if you are using an existing field, variable, or property directly as the expression. If you use a more complex expression, then the name cannot be inferred:

    ```csharp
    // can't infer the name variable from variable * 2, must name explicitly
    var wontWork = new { variable * 2, DateTime.Now };
    ```

    In the example above, since we typed variable * 2, it is no longer just a variable, and thus we would have to assign the property a name explicitly.

    ToString() on Anonymous Types

    One of the more trivial overrides that an anonymous type provides you is a ToString() method that prints the value of the anonymous type instance in much the same format as it was initialized (except actual values instead of expressions, as appropriate, of course). For example, if you had:

    ```csharp
    var point = new { X = 13, Y = 42 };
    ```

    And then print it out:

    ```csharp
    Console.WriteLine(point.ToString());
    ```

    You will get:

    ```
    { X = 13, Y = 42 }
    ```

    While this isn't necessarily the most stunning feature of anonymous types, it can be handy for debugging or logging values in a fairly easy-to-read format.

    Comparing Anonymous Type Instances

    Because anonymous types automatically create appropriate overrides of Equals() and GetHashCode() based on the underlying properties, we can reliably compare two instances or get hash codes. For example, if we had the following 3 points:

    ```csharp
    var point1 = new { X = 1, Y = 2 };
    var point2 = new { X = 1, Y = 2 };
    var point3 = new { Y = 2, X = 1 };
    ```

    If we compare point1 and point2, we'll see that Equals() returns true, because the overridden version of Equals() sees that the types are the same (same number, names, types, and order of properties) and that the values are the same. In addition, because all equal objects should have the same hash code, we'll see that the hash codes evaluate to the same as well:

    ```csharp
    // true, same type, same values
    Console.WriteLine(point1.Equals(point2));

    // true, equal anonymous type instances always have same hash code
    Console.WriteLine(point1.GetHashCode() == point2.GetHashCode());
    ```

    However, if we compare point2 and point3, we get false. Even though the names, types, and values of the properties are the same, the order is not; thus they are two different types and cannot be compared (and thus return false). And, since they are not equal objects (even though they have the same value), there is a good chance their hash codes are different as well (though not guaranteed):

    ```csharp
    // false, different types
    Console.WriteLine(point2.Equals(point3));

    // quite possibly false (was false on my machine)
    Console.WriteLine(point2.GetHashCode() == point3.GetHashCode());
    ```

    Using Anonymous Types

    Now that we've created instances of anonymous types, let's actually use them. The property names (whether implicit or explicit) are used to access the individual properties of the anonymous type.
    The main thing, once again, to keep in mind is that the properties are read-only, so you cannot assign the properties a new value (note: this does not mean that instances referred to by a property are immutable; for more information check out C#/.NET Fundamentals: Returning Data Immutably in a Mutable World). Thus, if we have the following anonymous type instance:

    ```csharp
    var point = new { X = 13, Y = 42 };
    ```

    We can get the properties as you'd expect:

    ```csharp
    Console.WriteLine("The point is: ({0},{1})", point.X, point.Y);
    ```

    But we cannot alter the property values:

    ```csharp
    // compiler error, properties are readonly
    point.X = 99;
    ```

    Further, since the anonymous type name is only known by the compiler, there is no easy way to pass anonymous type instances outside of a given scope. The only real choices are to pass them as object or dynamic. But really, that is not the intention of using anonymous types. If you find yourself needing to pass an anonymous type outside of a given scope, you should really consider making a POCO (Plain Old CLR Type, i.e. a class that contains just properties to hold data, with little/no business logic) instead.

    Given that, why use them at all? Couldn't you always just create a POCO to represent every anonymous type you needed? Sure you could, but then you might litter your solution with many small POCO classes that have very localized uses. It turns out this is the key to using anonymous types to your advantage: when you just need a lightweight type in a local context to store intermediate results, consider an anonymous type; but when that result is more long-lived and used outside of the current scope, consider a POCO instead.

    So what do we mean by intermediate results in a local context? A classic example would be filtering down results from a LINQ expression. For example, let's say we had a List<Transaction>, where Transaction is defined something like:

    ```csharp
    public class Transaction
    {
        public string UserId { get; set; }
        public DateTime At { get; set; }
        public decimal Amount { get; set; }
        // ...
    }
    ```

    And let's say we had this data in our List<Transaction>:

    ```csharp
    var transactions = new List<Transaction>
    {
        new Transaction { UserId = "Jim", At = DateTime.Now, Amount = 2200.00m },
        new Transaction { UserId = "Jim", At = DateTime.Now, Amount = -1100.00m },
        new Transaction { UserId = "Jim", At = DateTime.Now.AddDays(-1), Amount = 900.00m },
        new Transaction { UserId = "John", At = DateTime.Now.AddDays(-2), Amount = 300.00m },
        new Transaction { UserId = "John", At = DateTime.Now, Amount = -10.00m },
        new Transaction { UserId = "Jane", At = DateTime.Now, Amount = 200.00m },
        new Transaction { UserId = "Jane", At = DateTime.Now, Amount = -50.00m },
        new Transaction { UserId = "Jaime", At = DateTime.Now.AddDays(-3), Amount = -100.00m },
        new Transaction { UserId = "Jaime", At = DateTime.Now.AddDays(-3), Amount = 300.00m },
    };
    ```

    So let's say we wanted to get the transactions for each day for each user. That is, for each day we'd want to see the transactions each user performed. We could do this very simply with a nice LINQ expression, without the need of creating any POCOs:

    ```csharp
    // group the transactions based on an anonymous type with properties UserId and Date:
    var byUserAndDay = transactions
        .GroupBy(tx => new { tx.UserId, tx.At.Date })
        .OrderBy(grp => grp.Key.Date)
        .ThenBy(grp => grp.Key.UserId);
    ```

    Now, those of you who have attempted to use custom classes as a grouping type before (such as GroupBy(), Distinct(), etc.)
    may have discovered the hard way that LINQ gets a lot of its speed by utilizing not only Equals(), but also GetHashCode(), on the type you are grouping by. Thus, when you use custom types for these purposes, you generally end up having to write custom Equals() and GetHashCode() implementations, or you won't get the results you were expecting (the default implementations of Equals() and GetHashCode() are reference-equality and reference-identity based, respectively).

    As we said before, it turns out that anonymous types already do these critical overrides for you. This makes them even more convenient to use! Instead of creating a small POCO to handle this grouping, and then having to implement a custom Equals() and GetHashCode() every time, we can just take advantage of the fact that anonymous types automatically override these methods with appropriate implementations that take into account the values of all of the properties.

    Now, we can look at our results:

    ```csharp
    foreach (var group in byUserAndDay)
    {
        // the group's Key is an instance of our anonymous type
        Console.WriteLine("{0} on {1:MM/dd/yyyy} did:", group.Key.UserId, group.Key.Date);

        // each grouping contains a sequence of the items.
        foreach (var tx in group)
        {
            Console.WriteLine("\t{0}", tx.Amount);
        }
    }
    ```

    And see:

    ```
    Jaime on 06/18/2012 did:
        -100.00
        300.00

    John on 06/19/2012 did:
        300.00

    Jim on 06/20/2012 did:
        900.00

    Jane on 06/21/2012 did:
        200.00
        -50.00

    Jim on 06/21/2012 did:
        2200.00
        -1100.00

    John on 06/21/2012 did:
        -10.00
    ```

    Again, sure, we could have just built a POCO to do this, given it an appropriate Equals() and GetHashCode() method, but that would have bloated our code with so many extra lines and been more difficult to maintain if the properties change.

    Summary

    Anonymous types are one of those Little Wonders of the .NET language that are perfect at exactly that time when you need a temporary type to hold a set of properties together for an intermediate result. While they are not very useful beyond the scope in which they are defined, they are excellent in LINQ expressions as a way to create and use intermediary values for further expressions and analysis.

    Anonymous types are defined by the compiler based on the number, type, names, and order of properties created, and they automatically implement appropriate Equals() and GetHashCode() overrides (as well as ToString()), which makes them ideal for LINQ expressions where you need to create a set of properties to group, evaluate, etc.

    Read the article

  • CodePlex Daily Summary for Sunday, November 21, 2010

    CodePlex Daily Summary for Sunday, November 21, 2010

    Popular Releases

    MDownloader: MDownloader-0.15.24.6966: Fixed updater; fixed minor bugs.
    Smith Html Editor: Smith Html Editor V0.75: The first public release.
    MiniTwitter: 1.59: Update concerning User Streams (the original Japanese notes are garbled in the source).
    .NET Extensions - Extension Methods Library for C# and VB.NET: Release 2011.01: Added new extensions for object.CountLoopsToNull; for DateTime: DateTime.IsWeekend, DateTime.AddWeeks; for string: string.Repeat, string.IsNumeric, string.ExtractDigits, string.ConcatWith, string.ToGuid, string.ToGuidSave; for Exception: Exception.GetOriginalException; for Stream: Stream.Write (overload); and other new methods. Release as of dotnetpro 01/2011.
    Code Sample from Microsoft: Visual Studio 2010 Code Samples 2010-11-19: Code samples for Visual Studio 2010.
    Prism Training Kit: Prism Training Kit 4.0: An updated version of the Prism Training Kit that targets Prism 4.0, with labs added for some of its new features. This release consists of labs on Modularity, Dependency Injection, Bootstrapper, UI Composition, Communication, MEF, and Navigation. Note that this is a beta version; if you find any bugs, please report them in the issue tracker. Prerequisites: Visual Studio 2010, Microsoft Word 2...
    Free language translator and file converter: Free Language Translator 2.2: Starting with version 2.0, the translator underwent a major redesign that uses MEF-based plugins and .NET 4.0; some bugs were also fixed and support added for translating subtitles that can show up in video media players. Version 2.1 shows a 'Translate' context menu in Windows Explorer on right click; version 2.2 adds links to start a media file with its associated subtitle. Download the zip file and expand it in a temporary location on your local disk. At a minimum, you should uninstal...
    Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.4 Released: A release with a few bug fixes: multi-line labels were getting clipped while exploding the last DataPoint in funnel and pyramid charts; the ClosestPlotDistance property in Axis was not behaving as expected; in a DateTime axis, the chart threw an exception on mouse click over the PlotArea if no DataPoints were present; the tooltip did not disappear when changing the DataSource property of a DataSeries at runtime; the chart threw an exception ...
    Microsoft SQL Server Product Samples: Database: AdventureWorks 2008R2 SR1: Sample databases for Microsoft SQL Server 2008R2 (SR1). See Database Prerequisites for SQL Server 2008R2 for the feature configurations required to install the sample databases, and Installing SQL Server 2008R2 Databases for step-by-step installation instructions. The SR1 release contains minor bug fixes to the installer used to create the sample databases; there are no changes to the databases them...
    VidCoder: 0.7.2: Fixed duplicated subtitles when running multiple encodes off of the same title.
    Craig's Utility Library: Craig's Utility Library Code 2.0: This update contains a number of changes, added functionality, and bug fixes: added transaction support to SQLHelper; added linked/embedded resource ability to EmailSender; updated List to take into account new functions; added better support for MAC addresses in WMI classes; fixed parsing in the Reflection class when dealing with subclasses; fixed a bug in SQLHelper when replacing a Command that is a select after doing a select; fixed an issue in the SQL Server helper with regard to generati...
    MFCMAPI: November 2010 Release: Build 6.0.0.1023. Full release notes at SGriffin's blog. If you just want to run the tool, get the executable; if you want to debug it, get the symbol file and the source. The 64-bit build will only work on a machine with Outlook 2010 64-bit installed; all other machines should use the 32-bit build, regardless of the operating system.
    DotNetNuke® Community Edition: 05.06.00: Major highlights: added automatic portal alias creation for single-portal installs; updated the file manager upload page to allow users to upload multiple files without returning to the file manager page; fixed an issue with event log email notifications; fixed an issue where the Telerik HTML editor was unable to upload files to a secure or database folder; fixed an issue where the registration page was not set correctly during an upgrade; fixed an issue where Sendmail stripped HTML and links from emails...
    mVu Mobile Viewer: mVu Mobile Viewer 0.7.10.0: Tube8 fix.
    EPPlus - Create advanced Excel 2007 spreadsheets on the server: EPPlus 2.8.0.1: New features: improved chart support (different chart-type series on the same chart, support for a secondary axis and a lot of new properties, better styling); encryption and workbook protection; table support; import of csv files; array formulas; ...and a lot of bug fixes.
    AutoLoL: AutoLoL v1.4.2: Added support for more clients (French and Russian); settings are now stored separately for each user on a computer; auto login is much faster now; auto login detects and handles caps lock state properly now.
    TailspinSpyworks - WebForms Sample Application: TailspinSpyworks-v0.9: Contains a number of bug fixes and additional tutorial steps, as well as complete database implementation details.
    ASP.NET MVC Project Awesome (jQuery Ajax helpers): 1.3 and demos: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled web applications, including Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, and Pager; tested on Mozilla, Safari, Chrome, Opera, IE 9b/8/7/6. New in 1.3: an Autocomplete helper; Autocomplete and AjaxDropdown can have a parentId and be filled with data depending on the value of the parent; PopupForm, besides Content("ok") on success, can also return J...
    Nearforums - ASP.NET MVC forum engine: Nearforums v4.1: Version 4.1 of the ASP.NET MVC forum engine, with great improvements: TinyMCE added as the visual editor for messages (removed CKEditor); integrated AntiSamy for cleaner HTML in user posts and more prevention against potential injections; an admin status page, where the site admin can check the current status of the configuration, database, etc. View the roadmap for more details.
    UltimateJB: UltimateJB 2.01 PL3 KakaRoto + PSNYes by EvilSperm: (Translated from French.) Here is a release many have been waiting for impatiently: the PSNYes version makes it possible to play on PSN with a jailbroken PS3. For now, PSNYes is only available for PS3s on firmware 3.41! The PL3 KakaRoto version integrates his latest modifications and prepares for the integration of firmware 3.30! In conclusion: UltimateJB PSNYes enables use of PSN and is only compatible with 3.41; UltimateJB DEFAULT has no PSN but is available for the PS3s ...

    New Projects

    1600hours: 1600hours project made in C++.
    aoleDownload: Aole Series Download.
    Bills and Cash Flow: A simple multi-tenant application to track bills and view cash flow.
    CUDAagrep: A fast CUDA implementation of the agrep algorithm for approximate DNA/RNA sequence matching.
    DNN5 Simple Ticketing Module: A simple DNN module that accepts trouble tickets and creates a knowledge base for a company.
    EntityOH: Dynamic entities ORM.
    Fxcop ASP.NET Security Rules: A set of code analysis rules aimed at analyzing ASP.NET and ASP.NET MVC security against best practices. The rules can be used by Visual Studio 10 Ultimate or FxCop v10 standalone.
    Head First Design Patterns - Code Examples in C#: Code examples from the book Head First Design Patterns by Eric and Elizabeth Freeman, ported to C#.
    HTML5 Media Player (Video / Audio): A .NET implementation of the VideoJS and AudioJS open source projects with video and audio support for HTML5. Excellent for use with iPod, iPad, iPhone, etc.
    Keyword Auction Simulator: A project for simulating keyword auctions like AdWords.
    mAdcOW Office Add-Ins: A collection of handy Office 2010 add-ins.
    Manga to Epub: Converts a bunch of images to a single "epub" file, readable on your reader. It handles most image types as well as several archives, and offers multiple customization options, such as trimming the images to remove white borders.
    Mapua Career Ramp Up: A joint endeavor with Philippine IT industry leaders and the Mapua School of Information Technology to build an online collaborative database system to ramp up graduating students in their careers as future IT professionals.
    minami: Minami is a project that focuses its work on stability and features. Developed in C++.
    minami-dev: Description to come later.
    Mobile RPG: Five ATtiny85 microcontrollers playing their own RPG characters, with a primary MCU acting as GM. A fun exercise in autonomous role playing.
    NetSnoop: Allows everyone to get a quick overview of the current connections on their workstation.
    nGso: GSO algorithm implementation based on http://www.springerlink.com/content/y065470472612847/fulltext.pdf ("Glowworm swarm optimization for simultaneous capture of multiple local optima of multimodal functions", K.N. Krishnanand and D. Ghose).
    OpenID Starter Kit for ASP.NET MVC: Used to jump-start building your ASP.NET MVC web application with an OpenID login system. It is also a good educational resource if you want to learn how to implement OpenID in ASP.NET MVC.
    Orchard Contact Us Module: Add a contact-us page to your Orchard site using this module.
    Persian Scheduler and Calendar Control: A Jalali (Persian, or shamsi) calendar and scheduler control in Silverlight. The name 'Jalali' was chosen in honor of 'Hakim Omar Khayyam', the founder of the Jalali calendar. Under the license of the 'Barid New Systems' company.
    Popfly Metadata Generator: Creates metadata for new projects.
    PurpleStoat: A modular, extensible Silverlight application shell using Prism, Unity, and the Enterprise Library, written in C#. It includes a WCF service which provides AuthZ and logging services to the shell, which are also available to the modules.
    QL Config Compare Tool: Enables you to compare two QuakeLive configs. It creates a detailed overview of the differences and can save statistics.
    SQL PHI Identifier: An auditing tool for DBAs in a healthcare environment to help identify which databases/tables might hold protected health information (PHI). Using this information, a DBA can then take the necessary steps to secure that data adequately.
    Sqlite ORM: At present a simple class-to-table mapper for Sqlite databases. Tables are created on demand, and the design is future-proofed for sharding. The code has 100% unit test coverage.
    Test shop: Test shop.
    VarMerger: An add-in for MS Word 2007 (the original Russian description is garbled in the source).
    Visual Studio Add-In For creating Vista Gadget: The absence of tools in Visual Studio that help developers create Vista gadgets is strange and disappointing, in my opinion. I want to show you some tools that can help you develop Vista gadgets using only the Visual Studio 2008 or 2010 IDE.
    Vocal Remover - VST Plugin: A VST plugin that removes vocals from songs using the M/S-system trick with EQ on the mid signal. Source in C++; IDE: Visual Studio 2010 Express Edition; LIB: Steinberg VST SDK 2.4.
    Windows Phone 7 To Go: A project with demos for Windows Phone 7 features.
    Winware: Winware is not only an entity framework, but goes beyond one.
    XTengine: Makes it easier for XNA developers to develop in a compositional manner. You'll no longer have to write specific game classes with deep hierarchies or hardcode level loading. Developed in C# with XNA 4.0, with WP7 in mind.

    Read the article

< Previous Page | 69 70 71 72 73 74 75 76 77 78 79 80  | Next Page >