Search Results

Search found 4689 results on 188 pages for 'no average geek'.


  • custom video icon for a single video file in windows 7 file explorer

    - by MrBrody
    Recently I found a video on the net (an .mp4 file), and when I copied it to my Windows 7 computer I noticed its thumbnail was not the usual Windows 7 video thumbnail (which looks like a strip of film with a random frame from the movie), but a custom thumbnail! Looking through the file properties did not turn up any option for changing the thumbnail... so I just wonder how the author did it! Here is a picture: left, the custom thumbnail; right, the default thumbnail...
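
    One possible explanation, offered here purely as a guess: the file may carry embedded cover art in its metadata, which some thumbnail handlers display instead of a generated frame. A minimal sketch of embedding cover art with the TagLib# library follows; whether Explorer honors it on a given setup is an assumption that would need testing.

        // Sketch only: embeds a picture into the mp4's metadata using TagLib#.
        // Assumes the TagLib# library is referenced; whether Explorer shows the
        // embedded art as the thumbnail depends on the installed handler.
        var file = TagLib.File.Create("video.mp4");
        file.Tag.Pictures = new TagLib.IPicture[] { new TagLib.Picture("cover.jpg") };
        file.Save();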

    Read the article

  • Possible reasons for high CPU load of taskmgr.exe process on VM?

    - by mjn
    On a VMware virtual machine with severe performance problems, I can see a constant average CPU load of 20+ percent for the TASKMGR.EXE (Task Manager) process. The apps running on this server have lower load, around 4 to 10 percent on average. The VM is running Windows Server 2003 Standard with 3.75 GB of assigned RAM. I suspect that the Task Manager CPU load has something to do with other VM instances on the VMware host, but I could not see a similar value on our internal ESXi systems (the problematic VM runs in the customer's IT environment).

    Read the article

  • Incorrect value for sum of two NSIntegers

    - by Antonio
    Hi everybody: I'm sure I'm missing something and the answer is very simple, but I can't seem to understand why this is happening. I'm trying to compute an average of dates:

        NSInteger runningSum = 0;
        NSInteger count = 0;
        for (EventoData *event in self.events) {
            NSDate *dateFromString = [[NSDate alloc] init];
            if (event.date != nil) {
                dateFromString = [dateFormatter dateFromString:event.date];
                runningSum += (NSInteger)[dateFromString timeIntervalSince1970];
                count += 1;
            }
        }
        if (count > 0) {
            NSLog(@"average is: %@", [NSDate dateWithTimeIntervalSince1970:(NSInteger)((CGFloat)runningSum / count)]);
        }

    Everything seems to work OK except for runningSum += (NSInteger)[dateFromString timeIntervalSince1970], which gives an incorrect result. If I put a breakpoint when averaging two equal dates (2009-10-10, for example, which is a time interval of 1255125600), runningSum is -1784716096 instead of the expected 2510251200. I've tried using NSNumber and I get the same result. Can anybody point me in the right direction? Thanks! Antonio
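
    The numbers in the question are consistent with 32-bit integer overflow: on a 32-bit target NSInteger is a 32-bit int, and 1255125600 + 1255125600 = 2510251200 exceeds INT_MAX (2147483647), wrapping to exactly -1784716096. The analogous fix in the Objective-C code would be accumulating into a long long or a double. Here is the same wraparound reproduced in C# for illustration, where int is always 32 bits:

        using System;

        class OverflowDemo
        {
            static void Main()
            {
                int interval = 1255125600;               // seconds since 1970 for 2009-10-10

                int sum32 = interval + interval;         // 32-bit addition wraps around
                Console.WriteLine(sum32);                // -1784716096

                long sum64 = (long)interval + interval;  // widen before adding
                Console.WriteLine(sum64);                // 2510251200
            }
        }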

    Read the article

  • C# Memoization of functions with arbitrary number of arguments

    - by Lirik
    I'm trying to create a memoization interface for functions with an arbitrary number of arguments, but I'm failing miserably. The first thing I tried was to define an interface for a function which gets memoized automatically upon execution:

        class EMAFunction : IFunction
        {
            Dictionary<List<object>, List<object>> map;

            class EMAComparer : IEqualityComparer<List<object>>
            {
                private int _multiplier = 97;

                public bool Equals(List<object> a, List<object> b)
                {
                    List<object> aVals = (List<object>)a[0];
                    int aPeriod = (int)a[1];
                    List<object> bVals = (List<object>)b[0];
                    int bPeriod = (int)b[1];
                    return (aVals.Count == bVals.Count) && (aPeriod == bPeriod);
                }

                public int GetHashCode(List<object> obj)
                {
                    // Don't compute hash code on null object.
                    if (obj == null)
                    {
                        return 0;
                    }

                    // Get length.
                    int length = obj.Count;
                    List<object> vals = (List<object>)obj[0];
                    int period = (int)obj[1];
                    return (_multiplier * vals.GetHashCode() * period.GetHashCode()) + length;
                }
            }

            public EMAFunction()
            {
                NumParams = 2;
                Name = "EMA";
                map = new Dictionary<List<object>, List<object>>(new EMAComparer());
            }

            #region IFunction Members

            public int NumParams { get; set; }
            public string Name { get; set; }

            public object Execute(List<object> parameters)
            {
                if (parameters.Count != NumParams)
                    throw new ArgumentException("The num params doesn't match!");

                if (!map.ContainsKey(parameters))
                {
                    //map.Add(parameters,
                    List<double> values = new List<double>();
                    List<object> asObj = (List<object>)parameters[0];
                    foreach (object val in asObj)
                    {
                        values.Add((double)val);
                    }
                    int period = (int)parameters[1];
                    asObj.Clear();
                    List<double> ema = TechFunctions.ExponentialMovingAverage(values, period);
                    foreach (double val in ema)
                    {
                        asObj.Add(val);
                    }
                    map.Add(parameters, asObj);
                }
                return map[parameters];
            }

            public void ClearMap()
            {
                map.Clear();
            }

            #endregion
        }

    Here are my tests of the function:

        private void MemoizeTest()
        {
            DataSet dataSet = DataLoader.LoadData(DataLoader.DataSource.FROM_WEB, 1024);
            List<String> labels = dataSet.DataLabels;
            Stopwatch sw = new Stopwatch();
            IFunction emaFunc = new EMAFunction();
            List<object> parameters = new List<object>();
            int numRuns = 1000;
            long sumTicks = 0;

            parameters.Add(dataSet.GetValues("open"));
            parameters.Add(12);

            // First call
            for (int i = 0; i < numRuns; ++i)
            {
                emaFunc.ClearMap(); // remove any memoization mappings
                sw.Start();
                emaFunc.Execute(parameters);
                sw.Stop();
                sumTicks += sw.ElapsedTicks;
            }
            Console.WriteLine("Average ticks not-memoized " + (sumTicks / numRuns));

            sumTicks = 0;

            // Repeat call
            for (int i = 0; i < numRuns; ++i)
            {
                sw.Start();
                emaFunc.Execute(parameters);
                sw.Stop();
                sumTicks += sw.ElapsedTicks;
            }
            Console.WriteLine("Average ticks memoized " + (sumTicks / numRuns));
        }

    The performance is confusing me... I expected the memoized function to be faster, but it didn't work out that way:

        Average ticks not-memoized: 106,182
        Average ticks memoized:     198,854

    I tried doubling the data instances to 2048, but the results were about the same:

        Average ticks not-memoized: 232,579
        Average ticks memoized:     446,280

    I did notice that it was correctly finding the parameters in the map and going directly to the cached entry, but the performance was still slow. I'm open either to troubleshooting help with this example or, if you have a better solution to the problem, to hearing what it is.
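
    For comparison, the shape memoization is usually given in C# is a wrapper that caches results keyed by argument; the multi-argument case then reduces to packing the arguments into a single key type with value equality. A minimal sketch (names illustrative, not from the question):

        using System;
        using System.Collections.Generic;

        static class Memoizer
        {
            // Wraps f so each distinct argument is computed once, then served from cache.
            public static Func<TArg, TResult> Memoize<TArg, TResult>(Func<TArg, TResult> f)
            {
                var cache = new Dictionary<TArg, TResult>();
                return arg =>
                {
                    TResult result;
                    if (!cache.TryGetValue(arg, out result))
                    {
                        result = f(arg);       // compute once...
                        cache[arg] = result;   // ...and remember it
                    }
                    return result;
                };
            }
        }

    Separately, two details of the benchmark above are worth a second look: sw.Start() resumes the stopwatch without resetting it, so sumTicks in the "memoized" loop also includes all the time accumulated during the first loop; and EMAComparer.GetHashCode hashes the values list by reference, so logically equal keys built from different list instances would miss the cache.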

    Read the article

  • add Constraint on database with trigger

    - by Am1rr3zA
    Hi, I have 3 tables (Student, Course, and student_course_choose, which has a grade field). I defined a view on these 3 tables that gets me the average grade of each student. I want to add a constraint (enforced with a trigger) on this view, or on whichever table needs it, to keep the average of each student between 13 and 18. I read somewhere that I must use a FOR EACH STATEMENT trigger (instead of FOR EACH ROW), because when I decrease one grade of a particular student so that his/her average temporarily drops below 13, I shouldn't get an error if, later in the same statement, I increase the grade of another of his/her courses. How should I write this trigger? (I want to implement aprh for testing the trigger.) Note: I can write it in SQL Server, Oracle or MySQL; no difference to me.

    Read the article

  • How do I handle a low job offer for an entry level position?

    - by user229269
    Hi guys! I recently graduated with an MS in CS, and I am excited because I just received a job offer from a company I really like for an entry-level software engineer position. The thing is that, although salary is not my priority and I care far more about gaining experience, their offer is unfortunately well below what I expected. After doing some research I realized that, compared to the average salary range for entry-level software engineering positions in my area (one of the most expensive areas in the US), supposedly [X - Y]$ (where X is the lowest average and Y the highest), their offer is 20% below X! I wouldn't have a problem accepting an offer around X, but this one is even lower than the lowest. Can I counter-offer at X, which is 25% more than what they offered me (raising an offer that sits 20% below X back up to X takes a 25% increase), but at the same time is only the minimum of the average range? And mind you, I haven't even taken into consideration the fact that I hold an MS degree, which in many cases yields 5-10% more pay.

    Read the article

  • how to solve nested list programs [closed]

    - by riya
    Write a function to get the most popular car: it accepts a list of car details as input and returns the most popular car's name along with its average rating. Each element of the car details list is a sublist that provides the following information about a car: (a) name of the car, (b) car price, (c) list of ratings the car obtained from various agencies. In case two cars have the same average rating, the car with the lower price qualifies as the most popular car. Here's my attempt so far:

        (define-struct cardetails (name price ratings))
        (define car1 (make-cardetails "toyota" 123 '(1 2 3)))
        (define car2 (make-cardetails "santro" 321 '(2 2 3)))
        (define car3 (make-cardetails "toyota" 100 '(1 2 3)))
        (define cardetailslist (list car1 car2 car3))

        (let loop ((count 0))
          (let ((len (length cardetailslist)))
            (if (< count len)
                (string-ref (string-ref n) 0)
                ...)))

    Now please tell me how to find the maximum average and display the car name. It's not a homework question; tomorrow is my test and we have not been taught this concept in class, although it is very important from a test point of view.

    Read the article

  • Either .each do or .all isn't working how I think it should

    - by user1299656
    So whenever someone rates a shop, I want the Shop model to calculate its new average rating and store that in the database (instead of calculating the average every time someone looks at it). So I wrote the code segment that follows, and it doesn't work: the loop always iterates exactly once, no matter how many shop_ratings exist in the database with the shop's id as their shop_id. I played around with it a bit and found that every time a new rating is submitted the function is called successfully, but it only runs the loop once and sets the average to whatever the first rating was. I don't know whether the "query" that sets the ratings variable is wrong or the loop is wrong.

        class Shop < ActiveRecord::Base
          has_many :shop_ratings
          attr_accessible :name, :latitude, :longitude
          validates_presence_of :name
          validates_presence_of :latitude
          validates_presence_of :longitude

          def distance_to(lat, long)
            return (self.longitude - long) + (self.latitude - lat)
          end

          def find_average
            total = 0
            count = 0
            ratings = ShopRating.all(:conditions => {:shop_id => id})
            ratings.each do |submission|
              total = total + submission.rating
              count = count + 1
            end
            update_attribute :average_rating, total/count
          end
        end
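
    Whatever is keeping the loop to one iteration, there is a second pitfall worth checking in total/count: if the ratings are stored as integers, both variables hold integers and the division truncates (true in Ruby just as in the C# illustration below), so average_rating would lose its fractional part even once the loop is fixed.

        using System;

        class TruncationDemo
        {
            static void Main()
            {
                int total = 7, count = 2;
                Console.WriteLine(total / count);          // 3   - integer division truncates
                Console.WriteLine((double)total / count);  // 3.5 - widen one operand first
            }
        }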

    Read the article

  • Handling a binary operation that makes sense only for part of a hierarchy.

    - by usersmarvin_
    I have a hierarchy, which I'll simplify greatly, of implementations of the interface Value. Assume that I have two implementations, NumberValue and StringValue. There is an average operation which only makes sense for NumberValue, with the signature NumberValue average(NumberValue numberValue) { ... }. At some point, after creating such variables and using them in various collections, I need to average a collection which I know contains only NumberValues. There are three possible ways of doing this, I think:

    1. Very complicated generic signatures which preserve the type info at compile time (what I'm doing now, and it results in hard-to-maintain code).
    2. Moving the operation up to the Value level, throwing an UnsupportedOperationException for StringValue and casting for NumberValue.
    3. Casting at the point where I know for sure that I have a NumberValue, using slightly less complicated generics to ensure this.

    Does anybody have any better ideas, or a recommendation on OOP best practices?
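
    The question reads like Java; here is the third option sketched in C# for illustration (type names borrowed from the question, everything else assumed), where LINQ's OfType localizes the cast at the one point the collection is known to be numeric:

        using System.Collections.Generic;
        using System.Linq;

        interface Value { }

        class StringValue : Value { public string Text; }

        class NumberValue : Value
        {
            public double Number;

            // average lives where it makes sense: on NumberValue only.
            public static NumberValue Average(IEnumerable<NumberValue> nums)
            {
                return new NumberValue { Number = nums.Average(n => n.Number) };
            }
        }

        // At a call site where the collection is known to hold only numbers:
        //   NumberValue avg = NumberValue.Average(values.OfType<NumberValue>());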

    Read the article

  • OOP PHP simple question

    - by Tristan
    Hello, I'm new to OOP in PHP. Does this seem correct?

        class whatever {
            function Maths() {
                $this->sql->query($requete);
                $i = 0;
                while ($val = mysql_fetch_array($this)) {
                    $tab[i][average] = $val['average'];
                    $tab[i][randomData] = $val['sum'];
                    $i = $i + 1;
                }
                return $tab;
            }
        }

    I want to access the data contained in the array:

        $foo = new whatever();
        $foo->Maths();
        for ($i; $i <= endOfTheArray; i++) {
            echo Maths->tab[i][average];
            echo Maths->tab[i][randomData];
        }

    Thank you ;)

    Read the article

  • Where to declare variable? C#

    - by user1303781
    I am trying to make an average function: 'Total' adds the values up, then 'Total' is divided by n, the number of entries. No matter where I put 'double Total;', I get an error message. In this example I get "Use of unassigned local variable 'Total'". If I put the declaration before the comment, both references show up as errors. I'm sure it's something simple...

        namespace frmAssignment3
        {
            class StatisticalFunctions
            {
                public static class Statistics
                {
                    //public static double Average(List<MachineData.MachineRecord> argMachineDataList)
                    public static double Average(List<double> argMachineDataList)
                    {
                        double Total;
                        int n;
                        for (n = 1; n <= argMachineDataList.Count; n++)
                        {
                            Total = argMachineDataList[n];
                        }
                        return Total / n;
                    }

                    public static double StDevSample(List<MachineData.MachineRecord> argMachineDataList)
                    {
                        return -1;
                    }
                }
            }
        }
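
    The compiler complains because it cannot prove the loop body ever runs, so Total might be read before being assigned; initializing it fixes the error. Two more bugs lurk in the same method: Total = overwrites instead of accumulating, and the loop indexes 1..Count on a 0-indexed List. A sketch of a corrected version:

        // Drop-in rewrite of the Average method above (sketch):
        public static double Average(List<double> values)
        {
            if (values == null || values.Count == 0)
                throw new ArgumentException("Need at least one value.");

            double total = 0;                        // initialized => no "unassigned" error
            for (int i = 0; i < values.Count; i++)   // List<T> indexes run 0..Count-1
            {
                total += values[i];                  // accumulate rather than overwrite
            }
            return total / values.Count;             // divide by the number of entries
        }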

    Read the article

  • Google Webmaster Tools search queries position

    - by user1592845
    In my website's account on Google Webmaster Tools, some search queries show an average position of 1.0, which led me to expect my page to be displayed as the first result. But when I search for such a query I am not able to find my website's page listed as a result at all! In some cases I navigate to the third or the fourth result page and still cannot find it. What factors can make my website lose its average position for a search query, and when does Google Webmaster Tools update these values?

    Read the article

  • Investigation: Can different combinations of components affect Dataflow performance?

    - by jamiet
    Introduction The Dataflow task is one of the core components (if not the core component) of SQL Server Integration Services (SSIS) and often the most misunderstood. This is not surprising; it's an incredibly complicated beast and we're abstracted away from that complexity via some boxes that go yellow, red or green and that have some lines drawn between them (the original post shows an example dataflow here). In this blog post I intend to look under that facade and get into some of the nuts and bolts of the Dataflow Task by investigating how the decisions we make when building our packages can affect performance. I will do this by comparing the performance of three dataflows that all have the same input and all produce the same output, but which operate slightly differently by way of having different transformation components.

    I also want to use this blog post to challenge a commonly held opinion that I see perpetuated over and over again on the SSIS forum: that adding components to a dataflow will be detrimental to overall performance. It's not surprising that people think this (it is intuitive to think that more components means more work), however it is not a view that I share. I have always been of the opinion that there are many factors affecting dataflow duration and that the number of components is actually one of the less important ones; having said that, I have never proven the assertion, and that is one reason for this investigation. I have actually seen evidence that some people think dataflow duration is simply a function of number of rows and number of components. I'll happily call that one out as a myth even without any investigation!

    The Setup I have a 2GB datafile which is a list of 4,731,904 (~4.7 million) customer records with various attributes against them, and it contains 2 columns that I am going to use for categorisation: [YearlyIncome] and [BirthDate]. The data file is an SSIS raw format file, which I chose to use because it is the quickest way of getting data into a dataflow; given that I am testing the transformations, not the source or destination adapters, I want to minimise external influences as much as possible. In the test I will split the customers according to month of birth (12 of those) and whether or not their yearly income is above or below 50000 (2 of those); in other words I will be splitting them into 24 discrete categories, and in order to do it I shall be using different combinations of SSIS' Conditional Split and Derived Column transformation components. The 24 datapaths that occur will each input to a rowcount component, again because this is the least resource-intensive means of terminating a datapath. The test is being carried out on a Dell XPS Studio laptop with a quad-core (8 logical procs) Intel Core i7 at 1.73GHz and a Samsung SSD hard drive. It's running SQL Server 2008 R2 on Windows 7.

    The Variables Here are the three combinations of components that I am going to test (the original post includes screenshots of the expression logic for each component):

    1. One Conditional Split - A single Conditional Split component, CSPL Split by Month of Birth and Income Category, that uses expressions on [YearlyIncome] & [BirthDate] to send each row to one of 24 outputs.

    2. Derived Column & Conditional Split - A Derived Column component, DER Income Category, adds a new column [IncomeCategory] which will contain one of two possible text values {"LessThan50000", "GreaterThan50000"} and uses [YearlyIncome] to determine which value each row should get. A Conditional Split component, CSPL Split by Month of Birth and Income Category, then uses that new column in conjunction with [BirthDate] to determine which of the same 24 outputs to send each row to. Put more simply, I am separating the Conditional Split of #1 into a Derived Column and a Conditional Split.

    3. Three Conditional Splits - A Conditional Split component, CSPL Split by Income Category, that produces two outputs based on [YearlyIncome], one for each income category. Each of those outputs goes to a further Conditional Split (CSPL Split by Month of Birth 1 & 2) that splits its input into 12 outputs, one for each month of birth (identical logic in each). In this case, then, I am separating the single Conditional Split of #1 into three Conditional Split components.

    Each of these combinations will provide an input to one of the 24 rowcount components, just the same as before. As you can see, these dataflows have a fair bit of work to do, and remember that they're doing that work for 4.7 million rows. I will execute each dataflow 10 times and use the average for comparison. I foresee three possible outcomes: (1) the dataflow containing just one Conditional Split will be quicker; (2) there is no significant difference between any of them; (3) one of the two dataflows containing multiple transformation components will be quicker. Regardless of which of those outcomes comes to pass, we will have learnt something, and that makes this an interesting test to carry out. Note that I will be executing the dataflows using dtexec.exe rather than hitting F5 within BIDS.

    The Results and Analysis The table in the original post (pasted there as a screenshot) shows all of the executions, 10 for each dataflow, along with the average for each and a standard deviation; all durations are in seconds. It is plain to see from the averages that the dataflow containing three Conditional Splits is significantly faster, the other two taking 43% and 52% longer respectively. This seems strange though, right? Why does the dataflow containing the most components outperform the other two by such a big margin? The answer is actually quite logical when you put some thought into it, as I'll explain below. Before progressing, a side note: the standard deviation for the "Three Conditional Splits" dataflow is orders of magnitude smaller, indicating that its performance can also be predicted with much greater confidence.

    The Explanation In CSPL Split by Month of Birth and Income Category in the first dataflow there is a case for each combination of month of birth and income category, 24 in total. These expressions get evaluated in the order that they appear, hence if we assume that month of birth and income category are uniformly distributed in the dataset, the expected number of expression evaluations for each row is (1 (the minimum) + 24 (the maximum)) / 2 = 12.5.

    In the second dataflow we are doing one expression evaluation in DER Income Category and we have the same 24 cases in CSPL Split by Month of Birth and Income Category as we had before, only the expression differs slightly. In this case then we have 1 + 12.5 = 13.5 expected evaluations for each row, which accounts for the slightly longer average execution time for this dataflow.

    Now onto the third dataflow, the quick one. CSPL Split by Income Category does a maximum of 2 expression evaluations, thus the expected number of evaluations per row is 1.5. CSPL Split by Month of Birth 1 and CSPL Split by Month of Birth 2 both have less work to do than the previous Conditional Split components because they only have 12 cases to test for, thus the expected number of expression evaluations is 6.5 for each. There are two of them, so the total expected number of expression evaluations for this dataflow is 1.5 + 6.5 + 6.5 = 14.5.

    14.5 is still more than 12.5 and 13.5 though, so why is the third dataflow so much quicker? Simple: the conditional expressions in the first two dataflows have two boolean predicates to evaluate, one for income category and one for month of birth, whereas the expressions in the Conditional Splits of the third dataflow have only one predicate each, so they are doing a lot less work. To sum up, the difference in execution times can be attributed to the difference between:

        MONTH(BirthDate) == 1 && YearlyIncome <= 50000

    and

        MONTH(BirthDate) == 1

    In the first two dataflows YearlyIncome <= 50000 gets evaluated an average of 12.5 times for every row, whereas in the third dataflow it is evaluated once and once only. Multiply those 11.5 extra evaluations by 4.7 million rows and you get a significant number of extra CPU cycles; that's where our duration difference comes from.

    The Wrap-up The obvious point here is that adding new components to a dataflow isn't necessarily going to make it go any slower; moreover, you may be able to achieve significant improvements by splitting logic over multiple components rather than one. Performance tuning is all about reducing the amount of work that needs to be done, and that doesn't necessarily mean using fewer components; indeed, sometimes you may be able to reduce workload in ways that aren't immediately obvious, as I think I have proven here. Of course there are many variables in play and your mileage will most definitely vary. I encourage you to download the package and see if you get similar results; let me know in the comments. The package contains all three dataflows plus a fourth dataflow that will create the 2GB raw file for you (you will also need the [AdventureWorksDW2008] sample database from which to source the data); simply disable all dataflows except the one you want to test before executing the package and remember, execute using dtexec, not within BIDS. If you want to explore dataflow performance tuning in more detail then here are some links you might want to check out: Inequality joins, Asynchronous transformations and Lookups; Destination Adapter Comparison; Don't turn the dataflow into a cursor; SSIS Dataflow - Designing for performance (webinar). Any comments? Let me know! @Jamiet
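
    The expected-evaluation arithmetic above is easy to check mechanically. A small C# sketch, purely illustrative, that reproduces the post's numbers:

        using System;

        class ExpectedEvaluationsDemo
        {
            // Expected number of cases tested per row when a Conditional Split
            // checks n equally likely cases in order: (1 + n) / 2.
            static double ExpectedEvaluations(int caseCount)
            {
                return (1 + caseCount) / 2.0;
            }

            static void Main()
            {
                double oneSplit         = ExpectedEvaluations(24);      // 12.5
                double derivedPlusSplit = 1 + ExpectedEvaluations(24);  // 13.5
                double threeSplits      = ExpectedEvaluations(2)
                                        + 2 * ExpectedEvaluations(12);  // 1.5 + 13.0 = 14.5

                Console.WriteLine("{0} {1} {2}", oneSplit, derivedPlusSplit, threeSplits);
            }
        }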

    Read the article

  • C#/.NET - Finding an Item's Index in IEnumerable<T>

    - by James Michael Hare
    Sorry for the long blogging hiatus. First it was, of course, the holiday hustle and bustle, and then my brother and his wife had their son, so I've been away from my blogging for two weeks.

    Background: Finding an item's index in List<T> is easy... Many times in our day-to-day programming activities we want to find the index of an item in a collection. Now, if we have a List<T> and we're looking for the item itself, this is trivial:

        // assume we have a list of ints:
        var list = new List<int> { 1, 13, 42, 64, 121, 77, 5, 99, 132 };

        // can find the exact item using IndexOf()
        var pos = list.IndexOf(64);

    This will return the position of the item if it's found, or -1 if not. It's easy to see how this works for primitive types where equality is well defined. For complex types, however, it will attempt to compare them using EqualityComparer<T>.Default which, in a nutshell, relies on the object's Equals() method. So what if we want to search for a condition instead of equality? That's also easy in a List<T> with the FindIndex() method:

        // assume we have a list of ints:
        var list = new List<int> { 1, 13, 42, 64, 121, 77, 5, 99, 132 };

        // finds index of first even number, or -1 if not found.
        var pos = list.FindIndex(i => i % 2 == 0);

    Problem: Finding an item's index in IEnumerable<T> is not so easy... This is all well and good for lists, but what if we want to do the same thing for IEnumerable<T>? A collection of IEnumerable<T> has no indexing, so there's no direct method to find an item's index. LINQ, as powerful as it is, gives us many tools to get this information, but not in one step. As with almost any problem involving collections, there are several ways to accomplish the same goal, and once again the choice of solution somewhat depends on the situation. So let's look at a few possible alternatives. I'm going to express each of these as extension methods for simplicity and consistency.

    Solution: The TakeWhile() and Count() combo One of the things you can do is to perform a TakeWhile() on the list as long as your find condition is not true, and then do a Count() of the items it took. The only downside to this method is that if the item is not in the list, the index will be the full Count() of items and not -1. So if you don't know the size of the list beforehand, this can be confusing.

        // a collection of extra extension methods off IEnumerable<T>
        public static class EnumerableExtensions
        {
            // Finds an item in the collection, similar to List<T>.FindIndex()
            public static int FindIndex<T>(this IEnumerable<T> list, Predicate<T> finder)
            {
                // note if item not found, result is length and not -1!
                return list.TakeWhile(i => !finder(i)).Count();
            }
        }

    Personally, I don't like switching the paradigm of "not found" away from -1, so this is one of my least favorites.

    Solution: Select with index Many people don't realize that there is an alternative form of the LINQ Select() method that will provide you an index of the item being selected:

        list.Select( (item, index) => /* do something here with the item and/or index... */ )

    This can come in handy, but must be treated with care. This is because the index provided only pertains to the result of previous operations (if any).
    For example:

        // assume we have a list of ints:
        var list = new List<int> { 1, 13, 42, 64, 121, 77, 5, 99, 132 };

        // you'd hope this would give you the indexes of the even numbers,
        // which would be 2, 3, 8, but in reality it gives you 0, 1, 2
        list.Where(item => item % 2 == 0).Select((item, index) => index);

    The reason the example gives you the collection { 0, 1, 2 } is that the Where() clause passes over any items that are odd, and therefore only the even items are given to the Select() and only they are given indexes. Conversely, we can't select the index and then test the item in a Where() clause, because then the Where() clause would be operating on the index and not the item! So what we have to do is select the item and index and put them together in an anonymous type. It looks ugly, but it works:

        // extensions defined on IEnumerable<T>
        public static class EnumerableExtensions
        {
            // finds an item in a collection, similar to List<T>.FindIndex()
            public static int FindIndex<T>(this IEnumerable<T> list, Predicate<T> finder)
            {
                // if you don't name the anonymous properties, they take the variable names
                return list.Select((item, index) => new { item, index })
                    .Where(p => finder(p.item))
                    .Select(p => p.index + 1)
                    .FirstOrDefault() - 1;
            }
        }

    So let's walk through this, because I know it's convoluted:

    1. The first Select() joins the items and their indexes into an anonymous type.
    2. Where() filters that list to only the ones matching the predicate.
    3. The second Select() picks the index of the matches and adds 1; this is to distinguish between not found and the first item.
    4. FirstOrDefault() returns the first item found from the previous clauses, or default (zero) if not found.
    5. Subtract one so that not found (zero) becomes -1, and the first item (one) becomes zero.

    The bad thing is, this is ugly as hell and creates anonymous objects for each item tested until it finds the match. That concerns me a bit, but we'll defer judgment until we compare the relative performances below.

    Solution: Convert ToList() and use FindIndex() This solution is easy enough. We know any IEnumerable<T> can be converted to List<T> using the LINQ extension method ToList(), so we can easily convert the collection to a list and then just use the FindIndex() method baked into List<T>.

        // a collection of extension methods for IEnumerable<T>
        public static class EnumerableExtensions
        {
            // find the index of an item in the collection, similar to List<T>.FindIndex()
            public static int FindIndex<T>(this IEnumerable<T> list, Predicate<T> finder)
            {
                return list.ToList().FindIndex(finder);
            }
        }

    This solution is simplicity itself! It is very concise and elegant, and you need not worry about anyone misinterpreting what it's trying to do (as opposed to the more convoluted LINQ methods above). The main thing I'm concerned about here is the performance hit to allocate the List<T> in the ToList() call, but once again we'll explore that in a second.

    Solution: Roll your own FindIndex() for IEnumerable<T> Of course, you can always roll your own FindIndex() method for IEnumerable<T>. It would be a very simple for loop which scans for the item and counts as it goes.
    There are many ways to do this, but one such way might look like:

        // extension methods for IEnumerable<T>
        public static class EnumerableExtensions
        {
            // Finds an item matching a predicate in the enumeration, much like List<T>.FindIndex()
            public static int FindIndex<T>(this IEnumerable<T> list, Predicate<T> finder)
            {
                int index = 0;
                foreach (var item in list)
                {
                    if (finder(item))
                    {
                        return index;
                    }

                    index++;
                }

                return -1;
            }
        }

    Well, it's not quite simplicity, and those less familiar with LINQ may prefer it since it doesn't include all of the lambdas and behind-the-scenes iterators that come with deferred execution. But does having this long, blown-out method really gain us much in performance?

    Comparison of Proposed Solutions So we've now seen four solutions; let's analyze their collective performance. I took each of the four methods described above and ran them over 100,000 iterations of lists of size 10, 100, 1000, and 10000, looking for targets at the beginning of the list (best case), the middle of the list (average case), and not in the list (worst case, since it must scan all of the list). Each of the times below is the average time in milliseconds for one execution, as computed over the 100,000 iterations.

    Searches Matching First Item (Best Case)

                   10       100      1000     10000
        TakeWhile  0.0003   0.0003   0.0003   0.0003
        Select     0.0005   0.0005   0.0005   0.0005
        ToList     0.0002   0.0003   0.0013   0.0121
        Manual     0.0001   0.0001   0.0001   0.0001

    Searches Matching Middle Item (Average Case)

                   10       100      1000     10000
        TakeWhile  0.0004   0.0020   0.0191   0.1889
        Select     0.0008   0.0042   0.0387   0.3802
        ToList     0.0002   0.0007   0.0057   0.0562
        Manual     0.0002   0.0013   0.0129   0.1255

    Searches Where Not Found (Worst Case)

                   10       100      1000     10000
        TakeWhile  0.0006   0.0039   0.0381   0.3770
        Select     0.0012   0.0081   0.0758   0.7583
        ToList     0.0002   0.0012   0.0100   0.0996
        Manual     0.0003   0.0026   0.0253   0.2514

    Notice something interesting here: you'd think the "roll your own" loop would be the most efficient, but it only wins when the item is first (or very close to it), regardless of list size. In almost all other cases, and in particular the average and worst cases, the ToList()/FindIndex() combo wins on performance, even though it creates some temporary memory to hold the List<T>. If you examine the algorithm, the reason why is most likely that once it's in List<T> form, FindIndex() internally scans the backing array, which is much more efficient to iterate over. Thus it takes a one-time performance hit (not including any GC impact) to create the List<T>, but after that the performance is much better.

    Summary If you're concerned about too many throw-away objects, you can always roll your own FindIndex() method, but for sheer simplicity and overall performance, the ToList()/FindIndex() combo performs best on nearly all list sizes in the average and worst cases.

    Technorati Tags: C#, .NET, Little Wonders, BlackRabbitCoder, Software, LINQ, List

    Read the article

  • SQL SERVER – OLEDB – Link Server – Wait Type – Day 23 of 28

    - by pinaldave
    When I decided to start writing about this wait type, the very first question that came to my mind was, "What does 'OLEDB' stand for?" A quick search on Wikipedia tells me that OLEDB means Object Linking and Embedding Database. (How many of you knew this?) Anyway, I found it very interesting that this wait type was among the top 10 wait types in many of the systems I have come across in my performance tuning experience.

    Books On-Line: OLEDB occurs when SQL Server calls the SQL Server Native Client OLE DB Provider. This wait type is not used for synchronization. Instead, it indicates the duration of calls to the OLE DB provider.

    OLEDB Explanation: This wait type primarily happens when a Linked Server or Remote Query has been executed. The most common case in which this wait type is visible is during the execution of a Linked Server query. When SQL Server is retrieving data from the remote server, it uses the OLEDB API to retrieve the data. It is possible that the remote system is not quick enough or the connection between them is not fast enough, leading SQL Server to wait for the results to return from the remote (or external) server. This is when the OLEDB wait type occurs.

    Reducing OLEDB wait: Check the Linked Server configuration, and check these disk-related Perfmon counters:

    - Average Disk sec/Read (a consistent value higher than 4-8 milliseconds is not good)
    - Average Disk sec/Write (a consistent value higher than 4-8 milliseconds is not good)
    - Average Disk Read/Write Queue Length (a consistent value higher than your benchmark is not good)

    At this point in time, I am not able to think of any more ways of reducing this wait type. Do you have any opinion about this subject? Please share it here and I will share your comment with the rest of the community, and of course, with due credit unto you. Please read all the posts in the Wait Types and Queue series.

    Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Books On-Line for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • How to evaluate SEO/prominence improvement [on hold]

    - by Rober
    I will be working on a website's SEO, and before starting I would like to "take a snapshot" of its present status so that I will be able to compare it with the new situation in a few months and evaluate my work and the real improvement. I don't mean whether the website is well implemented or not, but how well it is seen by Google and others: what prominence it has. I am taking some variables from Google Analytics (average daily visits...), from Google Webmaster Tools (search traffic and average position...) and some other indicators, like automatic SEO audit figures (estimated website worth, real PageRank...). What would you look at before starting an SEO improvement?

    Read the article

  • Do you count a Masters in CS as a negative?

    - by Pete Hodgson
    In my experience interviewing developers, I feel like candidates who've achieved a Masters in Comp Sci tend to be worse programmers, on average, than those who don't have a Masters. Is that just me, or have others noticed this phenomenon? If so, why would that be the case? UPDATE: I appreciate the thoughtful comments. I think I should have been clearer about the comparison I'm making: given two candidates who graduated from college around the same time, someone who went on to gain a Masters seems, on average, to be a worse programmer than someone who spent all their time in industry.

    Read the article

  • Tangent basis calculation problem

    - by Kirill Daybov
    I have a problem with seams when calculating the tangent basis in my application. I'm using an algorithm that seems to be right, but it gives wrong results at the seams. What am I doing wrong? Is there a problem with the algorithm, or with the model? The designer says that our models with our normal maps render correctly in the Xoliul Shader plugin in 3ds Max, so there should be a way to calculate a correct tangent basis programmatically. Here's an example of the problem I'm talking about. Steps I've already taken:

    - Tried a different algorithm (from Gamasutra; I can't post the link because I don't have enough reputation yet). I got wrong, much worse, results.
    - Tried to average the basis vectors for vertices that are used in multiple faces.
    - Tried to average the basis vectors for vertices that have the same world coordinates (this would be an obviously wrong solution, but I tried it anyway).
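
    For reference, a common baseline for this, sketched in C# with System.Numerics as an illustration of the standard accumulate-then-orthogonalize approach (not the asker's actual code): accumulate each triangle's tangent into all three of its vertices, so vertices shared between faces average naturally, then Gram-Schmidt the result against the vertex normal. Seams then typically remain only where vertices are duplicated along UV splits, which is where averaging across identical positions (the third attempt above) comes in.

        using System.Numerics;

        static class TangentSpace
        {
            public static Vector3[] ComputeTangents(Vector3[] pos, Vector2[] uv, Vector3[] normal, int[] tri)
            {
                var tan = new Vector3[pos.Length];
                for (int i = 0; i < tri.Length; i += 3)
                {
                    int a = tri[i], b = tri[i + 1], c = tri[i + 2];
                    Vector3 e1 = pos[b] - pos[a], e2 = pos[c] - pos[a];
                    Vector2 d1 = uv[b] - uv[a], d2 = uv[c] - uv[a];

                    float det = d1.X * d2.Y - d2.X * d1.Y;
                    if (det == 0f) continue;                    // skip degenerate UV triangles

                    Vector3 t = (e1 * d2.Y - e2 * d1.Y) / det;  // per-face tangent
                    tan[a] += t; tan[b] += t; tan[c] += t;      // accumulation = averaging
                }

                for (int v = 0; v < pos.Length; v++)
                {
                    // Gram-Schmidt: make the tangent orthogonal to the normal, then normalize.
                    Vector3 n = normal[v], t = tan[v];
                    tan[v] = Vector3.Normalize(t - n * Vector3.Dot(n, t));
                }
                return tan;
            }
        }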

    Read the article

  • Expected time for a CakePHP MVC form/controller and db make-up

    - by hephestos
    I would like to know what the average time is for building a form in the MVC pattern with, for example, CakePHP. I built 8 functions, two of which run custom queries, return JSON data, split it, expand it in a model in memory and deliver it to the view. That makes three queries if you also count the array that feeds the view to populate some combo boxes. Why all of this? Because I receive data as JSON and I split it in order to lay out rows of data like a table. Along the way I changed edit.ctp a bit, but not a lot, and I created an external JavaScript file with three functions: one collects data, another returns the selected values when a combo changes and also handles some redirection flow. On average, how much time should all of this take?

    Read the article

  • easy visualization of usage statistics (web app)

    - by sova
    I have some usage queries for my web app's database, the results of which I want to display graphically. Is there an easy-to-use API for this purpose? I want to show things like average query time per user (for a small user base), average query time per day, and things like that. I think it would be cool to show these on a two-axis graph. I am displaying this data on my site, so a jQuery/JavaScript/HTML solution for rendering the information into graphs would be ideal. Thank you :) P.S. I wasn't sure if I should ask this on SO, but I am looking more for which product to use, not how to program with it.

    Read the article

  • How are Reads Distributed in a Workload

    - by Bill Graziano
    People have uploaded nearly one million rows of trace data to TraceTune. That's enough data to start looking at the results in aggregate. The first thing I want to look at is logical reads, since this is the easiest metric to identify and fix. When you upload a trace, I rank each statement based on its total number of logical reads, and I also calculate each statement's percentage of the total logical reads. I do the same thing for CPU, duration and logical writes. When you view a statement you can see all of these details; in the example shown in the original post, a single statement consumed 61.4% of the total logical reads on the system while we were tracing it. I also wanted to see the distribution of reads across statements; graphed, it shows that on average the highest-ranked statement consumed just under 50% of the reads on the system.

    When I tune a system, I'm usually starting in one of two modes: this "piece" is slow, or the whole system is slow. If a given piece (screen, report, query, etc.) is slow, you can usually find the specific statements behind it and tune them. You can make that individual piece faster, but you may not affect the whole system. When you're trying to speed up an entire server, you need to identify those queries that are using the most disk resources in aggregate. Fixing those will make them faster and leave more disk throughput for the rest of the queries.

    Here are some of the things I've learned querying this data:

    - The highest-ranked query averages just under 50% of the total reads on the system.
    - The top 3 ranked queries average 73% of the total reads on the system.
    - The top 10 ranked queries average 91% of the total reads on the system.

    Remember, these are averages across all the traces that have been uploaded, and I'm guessing that people mainly upload traces where there are performance problems, so your mileage may vary. I also learned that individually slow queries aren't the problem. Before I wrote ClearTrace I used to identify queries by filtering on high logical reads using Profiler. That picked out individual queries, but those rarely ran often enough to put a large load on the system. If you look at execution count by rank, you'd see that the highest-ranked queries also have the highest execution counts; the graph would look very similar to the one above, but flatter. These queries don't look that bad individually, but they run so often that they hog the disk capacity. The takeaway from all this is that you really should be tuning the top 10 queries if you want to make your system faster. Tuning individually slow queries will help those specific queries but won't have much impact on the system as a whole.
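
    The ranking described in the first paragraph is straightforward to reproduce. A small LINQ sketch, where the Statement type and its sample data are hypothetical stand-ins for the uploaded trace rows:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Statement
        {
            public string Text;
            public long LogicalReads;
        }

        class ReadRanking
        {
            static void Main()
            {
                // Hypothetical stand-in for a trace's statements.
                var stmts = new List<Statement>
                {
                    new Statement { Text = "q1", LogicalReads = 5000 },
                    new Statement { Text = "q2", LogicalReads = 3000 },
                    new Statement { Text = "q3", LogicalReads = 2000 },
                };

                double total = stmts.Sum(s => s.LogicalReads);
                var ranked = stmts
                    .OrderByDescending(s => s.LogicalReads)          // rank by logical reads
                    .Select((s, i) => new
                    {
                        Rank = i + 1,
                        s.Text,
                        s.LogicalReads,
                        PercentOfTotal = 100.0 * s.LogicalReads / total  // share of all reads
                    });

                foreach (var r in ranked)
                    Console.WriteLine("{0}. {1}: {2} reads ({3:F1}%)",
                        r.Rank, r.Text, r.LogicalReads, r.PercentOfTotal);
            }
        }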

    Read the article

  • Was a Big Fish in a Little Pond, Am Now a Little Fish in a Big Pond. How Do I Grow? [closed]

    - by Ziv
    I finished high school in the top three of my class, then studied a little, and there too I was pretty much a big fish in a pond only somewhat bigger than high school's. Now I've started my first job at a very big company. There are some incredibly talented programmers and researchers here (mostly in departments unrelated to mine), and for the first time I really feel incredibly average; I do not want to be average. I read technical books all the time and I try to code in my personal time, but I don't feel like that's enough. What can I do to become a leading programmer again in this big company? Is there anything specific I can do to make myself known here? This is a very big company, so in order to advance you must be very good and shine in your field.

    Read the article

  • How does one unit test an algorithm

    - by Asa Baylus
    I was recently working on a JS slideshow which rotates images using a weighted-average algorithm. Thankfully, timgilbert has written a weighted list script which implements the exact algorithm I needed. However, in his documentation he has noted under todos: "unit tests!". What I'd like to know is how one goes about unit testing an algorithm. In the case of a weighted average, how would you create a proof that the averages are accurate when there is an element of randomness? Code samples of something similar would be very helpful to my understanding.
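
    One common approach, sketched here in C# with NUnit since the idea is language-independent (WeightedList is a hypothetical stand-in for the script under test): make the randomness injectable so a fixed seed gives reproducible runs, then sample many picks and assert the observed frequencies land within a tolerance of the configured weights.

        using System;
        using NUnit.Framework;

        [TestFixture]
        public class WeightedPickTests
        {
            [Test]
            public void Pick_RespectsConfiguredWeights()
            {
                var rng = new Random(12345);               // fixed seed => deterministic test
                var list = new WeightedList<string>(rng);  // hypothetical class under test
                list.Add("a", 3);                          // weight 3
                list.Add("b", 1);                          // weight 1

                int hits = 0, samples = 100000;
                for (int i = 0; i < samples; i++)
                    if (list.Pick() == "a") hits++;

                double observed = (double)hits / samples;
                // "a" should come up ~75% of the time; allow statistical slack.
                Assert.That(observed, Is.EqualTo(0.75).Within(0.01));
            }
        }

    A deterministic complement is to test the non-random part in isolation, e.g. assert that the weighted-average computation itself returns exact expected values for hand-picked inputs.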

    Read the article
