Search Results

Search found 6805 results on 273 pages for 'fast formula'.

Page 23/273

  • Fast find object by string property

    - by Andrew Kalashnikov
    Hello, colleagues. I have a task: quickly find objects by a string property. The object:

        class DicDomain {
            public virtual string Id { get; set; }
            public virtual string Name { get; set; }
        }

    For storage I currently use a List<T>, where T is DicDomain. I have 5-10 such lists, each containing roughly 500-20,000 objects. The task is to find objects by their Name. This is the code I use now:

        List<T> entities = dictionary.FindAll(s => s.Name.Equals(word, StringComparison.OrdinalIgnoreCase));

    I have some questions. Is my search speed optimal? I think not. Is a List a good data structure for this task, or would a hashtable or a sorted collection be better? And for the Find itself, maybe I should use string interning? I don't have much experience with these tasks, so any advice on increasing performance would be appreciated. Thanks.
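
    A sketch of the hash-based indexing the question is circling around: build a dictionary keyed by Name once, after which each lookup is an O(1) hash probe on average instead of an O(n) scan. The DicDomain type and the case-insensitive comparison come from the question; the index class and its names are illustrative.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class NameIndex
        {
            // Built once per list; List<T>.FindAll rescans on every query.
            private readonly Dictionary<string, List<DicDomain>> byName;

            public NameIndex(IEnumerable<DicDomain> items)
            {
                byName = items
                    .GroupBy(i => i.Name, StringComparer.OrdinalIgnoreCase)
                    .ToDictionary(g => g.Key, g => g.ToList(),
                                  StringComparer.OrdinalIgnoreCase);
            }

            public List<DicDomain> Find(string word)
            {
                List<DicDomain> hits;
                return byName.TryGetValue(word, out hits) ? hits : new List<DicDomain>();
            }
        }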

    Read the article

  • I need a fast runtime expression parser

    - by Chris Lively
    I need to locate a fast, lightweight expression parser. Ideally I want to pass it a list of name/value pairs (e.g. variables) and a string containing the expression to evaluate. All I need back from it is a true/false value. The expressions should be along the lines of:

        varA == "xyz" and varB == 123

    Basically, just a simple logic engine whose expression is provided at runtime.

    UPDATE: At minimum it needs to support ==, !=, >, >=, <, <=. Regarding speed, I expect roughly 5 expressions to be executed per request. We'll see somewhere in the vicinity of 100 requests a second. Our current pages tend to execute in under 50 ms. Usually there will only be 2 or 3 variables involved in any expression. However, I'll need to load approximately 30 into the parser prior to execution.

    UPDATE 2012/11/5: An update about performance. We implemented nCalc nearly 2 years ago. Since then we've expanded its use such that we average 40+ expressions covering 300+ variables on postbacks. There are now thousands of postbacks occurring per second with absolutely zero performance degradation. We've also extended it to include a handful of additional functions, again with no performance loss. In short, nCalc met all of our needs and exceeded our expectations.
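
    For reference, a minimal NCalc evaluation of that kind of rule looks roughly like the sketch below. This is an illustration from memory of the NCalc API (Expression, Parameters, Evaluate), not code from the post, so verify the details against the library:

        using System;
        using System.Collections.Generic;
        using NCalc;

        class RuleEvaluator
        {
            // Parse the rule text, bind the variables, read back a boolean.
            public static bool Evaluate(string rule, IDictionary<string, object> vars)
            {
                Expression e = new Expression(rule);
                foreach (KeyValuePair<string, object> v in vars)
                    e.Parameters[v.Key] = v.Value;
                return (bool)e.Evaluate();
            }
        }

        // Usage (note NCalc uses single quotes for string literals):
        //   Evaluate("varA == 'xyz' and varB == 123",
        //            new Dictionary<string, object> { { "varA", "xyz" }, { "varB", 123 } });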

    Read the article

  • Very fast document similarity

    - by peyton
    Hello, I am trying to determine document similarity between a single document and each of a large number of documents (n ~= 1 million) as quickly as possible. More specifically, the documents I'm comparing are e-mails; they are grouped (i.e., there are folders or tags) and I'd like to determine which group is most appropriate for a new e-mail. Fast performance is critical. My a priori assumption is that the cosine similarity between term vectors is appropriate for this application; please comment on whether this is a good measure to use or not!

    I have already taken into account the following possibilities for speeding up performance:

      • Pre-normalize all the term vectors
      • Calculate a term vector for each group (n ~= 10,000) rather than each e-mail (n ~= 1,000,000); this would probably be acceptable for my application, but if you can think of a reason not to do it, let me know!

    I have a few questions:

      • If a new e-mail has a new term never before seen in any of the previous e-mails, does that mean I need to re-compute all of my term vectors? This seems expensive.
      • Is there some clever way to only consider vectors which are likely to be close to the query document?
      • Is there some way to be more frugal about the amount of memory I'm using for all these vectors?

    Thanks!
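
    For concreteness, the cosine measure itself is cheap once vectors are pre-normalized; here is a sketch with sparse vectors as term-id-to-weight maps (all naming is illustrative, not from the post):

        using System.Collections.Generic;

        static class Cosine
        {
            // With both vectors pre-normalized to unit length, cosine
            // similarity reduces to a sparse dot product over shared terms;
            // iterating the smaller map keeps the cost proportional to it.
            public static double Similarity(Dictionary<int, double> a,
                                            Dictionary<int, double> b)
            {
                if (b.Count < a.Count) { var t = a; a = b; b = t; }
                double dot = 0.0;
                foreach (KeyValuePair<int, double> term in a)
                {
                    double w;
                    if (b.TryGetValue(term.Key, out w))
                        dot += term.Value * w;
                }
                return dot; // equals the cosine, given unit-length inputs
            }
        }

    One observation on the first question: with plain term-frequency weights, a never-before-seen term adds a dimension on which every existing vector is zero, so nothing needs recomputing; it is IDF-style weighting that forces updates.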

    Read the article

  • Fast serialization/deserialization of structs

    - by user256890
    I have a huge amount of geographic data represented in a simple object structure consisting only of structs. All of my fields are of value type:

        public struct Child {
            readonly float X;
            readonly float Y;
            readonly int myField;
        }

        public struct Parent {
            readonly int id;
            readonly int field1;
            readonly int field2;
            readonly Child[] children;
        }

    The data is chunked up nicely into small portions of Parent[]-s. Each array contains a few thousand Parent instances. I have way too much data to keep it all in memory, so I need to swap these chunks to disk and back. (One file would come to approx. 200-300 KB.) What would be the most efficient way of serializing/deserializing a Parent[] to a byte[] for dumping to disk and reading back? Concerning speed, I am particularly interested in fast deserialization; write speed is not that critical. Would a simple BinarySerializer be good enough? Or should I hack around with StructLayout (see the accepted answer)? I am not sure if that would work with the array field Parent.children.

    UPDATE: Response to comments - Yes, the objects are immutable (code updated), and indeed the children field is not a value type. 300 KB does not sound like much, but I have zillions of files like that, so speed does matter.
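
    A hand-rolled BinaryWriter/BinaryReader pass is one common answer for fixed shapes like this, since it avoids reflection entirely. A sketch under the question's field layout (public fields and a constructor added so the sketch compiles on its own):

        using System.IO;

        public struct Child
        {
            public readonly float X;
            public readonly float Y;
            public readonly int MyField;
            public Child(float x, float y, int myField)
            {
                X = x; Y = y; MyField = myField;
            }
        }

        static class ChunkIO
        {
            // Layout per Parent: three ints, then child count, then children.
            public static void WriteParent(BinaryWriter w, int id, int field1,
                                           int field2, Child[] children)
            {
                w.Write(id); w.Write(field1); w.Write(field2);
                w.Write(children.Length);
                foreach (Child c in children)
                {
                    w.Write(c.X); w.Write(c.Y); w.Write(c.MyField);
                }
            }

            public static Child[] ReadChildren(BinaryReader r)
            {
                int n = r.ReadInt32();
                Child[] cs = new Child[n];
                for (int i = 0; i < n; i++)
                    cs[i] = new Child(r.ReadSingle(), r.ReadSingle(), r.ReadInt32());
                return cs;
            }
        }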

    Read the article

  • C code Error: free(): invalid next size (fast):

    - by user1436057
    I got an error from my code, but I'm not sure where to fix it. Here's an explanation of what the code does: it reads an input file and stores each line (as a char string) in an array. The first line of the input file is a number that tells me how many lines to read and store in the array. Here's my code:

        int main(int argc, char *argv[]){
            FILE *fp;
            char **path;
            int num, i;
            ...
            /* after reading the first line and storing the number value in num */
            path = malloc(num * sizeof(char));
            i = 0;
            while (!feof(fp)) {
                char buffer[500];
                int length = 0;
                for (ch = fgetc(fp); ch != EOF && ch != '\n'; ch = fgetc(fp)) {
                    buffer[length++] = ch;
                }
                if (ch == '\n' && ch != EOF) {
                    buffer[length] = '\0';
                    path[i] = malloc(strlen(buffer) + 1);
                    strcpy(path[i], buffer);
                    i++;
                }
            }
            ...
            free(path);
        }

    After running the code, I get this:

        *** glibc detected *** free(): invalid next size (fast):

    I have searched around and know this is a malloc/free error, but I don't know exactly how to fix it. Any help would be great. Thanks!

    Read the article

  • how to get data in sheet2 from sheet1 in excel

    - by romen-leung
    I have two worksheets:

        Sheet1: Column A = Deptname, Column B = Headname, Column C = Username
        Sheet2: Column A = Headname (blank), Column B = Username, Column C = UserID

    The Headname column in Sheet2 is blank, and what I want to do is get the Headname from Sheet1 by using the Username. I have tried VLOOKUP, but it does not work when the username in Sheet1 and Sheet2 is not exactly the same. E.g., given the two different usernames shown below:

        Username in Sheet1 is "Jenny Oh" and "Chan Shu Mei"
        Username in Sheet2 is "ITC - Jenny Ong" and "IA: Chan Shu Mei"

    Any ideas whether it can be done? Thanks in advance for any help.

    Read the article

  • Excel - Looping through data till it finds the right one

    - by DanoK
    I'm pulling data from one Excel table into another using an IF statement. I want it to check two fields: if they match, it should print something, and if they don't, it should continue searching. If there is no absolute match, the field should be left blank. I believe I'm running into a syntax issue, but after numerous iterations I can't get it to pull everything over. Here is my current syntax:

        =IF(BM5<>"External","",IF(AND(S5=VLOOKUP(A5,ExternalOnly,5,FALSE),A5=VLOOKUP(A5,ExternalOnly,1,FALSE)),S5,"")

    Read the article

  • Excel Match Bug in a sparse range with duplicate keys

    - by DangerMouse
    Data in TheRange is {1,"",1,"",1,"",1,"",2}

        =MATCH(2, TheRange, 1)    returns 9, as expected
        =MATCH(1.5, TheRange, 1)  returns 7, as expected
        =MATCH(1, TheRange, 1)    returns 5, which is not expected

    Has anyone come across this? Anyone got a fix? Additionally, if I use WorksheetFunction.Match in VBA I get more unexpected results.

    Read the article

  • SumProduct over sets of cells (not contiguous)

    - by Craig
    I have a total data set that covers 4 different groupings. One of the values is the average time, the other is a count. For the total I have to multiply these pairwise and then divide by the total of the counts. Currently I use:

        =SUM(D32*D2,D94*D64,D156*D126,D218*D188)/SUM(D32,D94,D156,D218)

    I would rather use SUMPRODUCT if I can, to make it more readable. I tried:

        =SUMPRODUCT((D2,D64,D126,D188),(D32,D94,D156,D218))/SUM(D32,D94,D156,D218)

    But as you can tell by my posting here, that did not work. Is there a way to make SUMPRODUCT work the way I want? Thoughts, answers, questions, comments? Craig

    Read the article

  • How to hide GetType() method from COM?

    - by ticky
    I made an automation Add-In for Excel, and in it I made several functions (formulas). I have a class whose header looks like this (it is COM visible):

        [ClassInterface(ClassInterfaceType.AutoDual)]
        [ComVisible(true)]
        public class Functions {}

    In the list of methods, I see ToString(), Equals(), GetHashCode() and GetType(). Since all methods of my class are COM visible, I need to somehow hide those 4. I succeeded with 3 of them - ToString(), Equals(), GetHashCode() - but GetType() cannot be overridden. Here is what I did with the first 3:

        [ComVisible(false)]
        public override string ToString()
        {
            return base.ToString();
        }

        [ComVisible(false)]
        public override bool Equals(object obj)
        {
            return base.Equals(obj);
        }

        [ComVisible(false)]
        public override int GetHashCode()
        {
            return base.GetHashCode();
        }

    This doesn't work:

        [ComVisible(false)]
        public override Type GetType()
        {
            return base.GetType();
        }

    Here is the error message from Visual Studio when compiling:

        ..GetType()': cannot override inherited member 'object.GetType()' because it is not marked virtual, abstract, or override

    So, what should I do to hide the GetType() method from COM?
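
    A commonly suggested workaround (a sketch, not verified here): since object.GetType() is not virtual, shadow it with the new modifier instead of overriding it, and mark the shadowing method invisible to COM:

        using System;
        using System.Runtime.InteropServices;

        [ClassInterface(ClassInterfaceType.AutoDual)]
        [ComVisible(true)]
        public class Functions
        {
            // 'new' hides the inherited non-virtual method rather than
            // overriding it, which lets ComVisible(false) take effect.
            [ComVisible(false)]
            public new Type GetType()
            {
                return base.GetType();
            }
        }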

    Read the article

  • Team matchups for Dota Bot

    - by Dan
    I have a Ghost++ bot that hosts games of DotA (a Warcraft 3 map played 5 players versus 5 players), and I'm trying to come up with good formulas to balance the players going into a match based on their records (I have game history for several thousand games). I'm familiar with some of the concepts required to match up players, like confidence based on the sample size of the number of games played, and also parameter approximation and degrees of freedom, and thus throwing out any variables that don't contribute enough to the r^2.

    My bot collects quite a few variables for each player from each game.

    The important ones:

      • Win/Lose/Game did not finish
      • # of player kills
      • # of player deaths
      • # of kills the player assisted

    The not so important ones:

      • # of enemy creep kills
      • # of creep sneak attacks
      • # of neutral creep kills
      • # of tower kills
      • # of rax kills
      • # of courier kills

    Quick explanation: the kills/deaths don't determine who wins, but the gold gained and lost from them is usually enough to tilt the game. Tower/rax kills are what the goal of the game is (once a team loses all their towers/rax, their throne can be attacked; if that is destroyed, they lose), but I don't really count these as important, because it is pretty random who gets the credit for the tower kill, and chances are that if you destroy a tower it is only because some other player is doing well and distracting the other team elsewhere on the map.

    I'm getting a bit confused when trying to deal with the fact that 5 players are on a team, so ultimately each individual isn't that responsible for the team winning or losing. Take a player that is really good at killing and has 40 kills and only 10 deaths, but in their 5 games they've only won 1. Should I give him extra credit for such a high kill score despite losing? (When losing, it is hard to keep a positive kill/death ratio.) Or should I dock him for losing, assuming that despite the nice kill/death ratio he probably plays in a really greedy way, only looking out for himself and not helping the team? Ultimately I don't think I have to guess at questions like this, because I have so much data... but I don't really know how to look at the data to answer them. Can anyone help me come up with formulas to help with team balance and predict the outcome? Thanks, Dan
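
    As one hedged starting point (a suggestion, not something from the post): a plain Elo-style team rating treats each side's strength as the mean of its players' ratings and updates everyone on the win/loss result; the per-player kill/death/assist stats can be layered on after the basic win prediction is calibrated against the game history.

        using System;
        using System.Linq;

        static class TeamElo
        {
            const double K = 32.0; // update step size; tune against game history

            // Probability that team A beats team B, from average ratings.
            public static double WinProbability(double[] teamA, double[] teamB)
            {
                double diff = teamB.Average() - teamA.Average();
                return 1.0 / (1.0 + Math.Pow(10.0, diff / 400.0));
            }

            // Shift every member of each team by the same amount:
            // an upset win moves ratings more than an expected win.
            public static void Update(double[] winners, double[] losers)
            {
                double delta = K * (1.0 - WinProbability(winners, losers));
                for (int i = 0; i < winners.Length; i++) winners[i] += delta;
                for (int i = 0; i < losers.Length; i++) losers[i] -= delta;
            }
        }

    Balancing a lobby then becomes a search over the 10 players' possible team assignments for the split whose WinProbability is closest to 0.5.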

    Read the article

  • populating a list from another worksheet with 1 criteria

    - by Arcadian
    Hello, here is my dilemma. I have two worksheets: one that has the names of clients, and one that I want to copy the names to depending on the city. For instance, associated with each row are last name, first name and city, in separate columns. I have hundreds of names associated with different cities, and what I would like is to copy all the New York clients from worksheet1.xls to worksheet2.xls, either when I open worksheet2 or via a macro, whatever is easier. And because the last name is in one cell and the first name in another, I would have to copy both. I saw that it's possible to link cells from one worksheet to another and then do a VLOOKUP depending on the criteria. Is that the easiest way, or is there another? Thanks in advance, and cheers. Oleg

    Read the article

  • Why are there 3 conflicting OpenCV camera calibration formulas?

    - by John
    I'm having a problem with OpenCV's various parameterizations of the coordinates used for camera calibration purposes. The problem is that three different sources of information on image distortion formulae apparently give three non-equivalent descriptions of the parameters and equations involved:

    (1) In their book "Learning OpenCV…" Bradski and Kaehler write regarding lens distortion (page 376):

        xcorrected = x * ( 1 + k1 * r^2 + k2 * r^4 + k3 * r^6 ) + [ 2 * p1 * x * y + p2 * ( r^2 + 2 * x^2 ) ]
        ycorrected = y * ( 1 + k1 * r^2 + k2 * r^4 + k3 * r^6 ) + [ p1 * ( r^2 + 2 * y^2 ) + 2 * p2 * x * y ]

    where r = sqrt( x^2 + y^2 ). Presumably, (x, y) are the coordinates of pixels in the uncorrected captured image corresponding to world-point objects with coordinates (X, Y, Z), referenced to the camera frame, for which

        xcorrected = fx * ( X / Z ) + cx
        ycorrected = fy * ( Y / Z ) + cy

    where fx, fy, cx, and cy are the camera's intrinsic parameters. So, having (x, y) from a captured image, we can obtain the desired coordinates ( xcorrected, ycorrected ) and produce an undistorted image of the captured world scene by applying the first two correction expressions above. However...

    (2) The complication arises as we look at the OpenCV 2.0 C Reference entry under the Camera Calibration and 3D Reconstruction section. For ease of comparison, we start with all world-point (X, Y, Z) coordinates being expressed with respect to the camera's reference frame, just as in #1. Consequently, the transformation matrix [ R | t ] is of no concern. In the C reference, it is expressed that:

        x' = X / Z
        y' = Y / Z
        x'' = x' * ( 1 + k1 * r'^2 + k2 * r'^4 + k3 * r'^6 ) + [ 2 * p1 * x' * y' + p2 * ( r'^2 + 2 * x'^2 ) ]
        y'' = y' * ( 1 + k1 * r'^2 + k2 * r'^4 + k3 * r'^6 ) + [ p1 * ( r'^2 + 2 * y'^2 ) + 2 * p2 * x' * y' ]

    where r' = sqrt( x'^2 + y'^2 ), and finally that

        u = fx * x'' + cx
        v = fy * y'' + cy

    As one can see, these expressions are not equivalent to those presented in #1, with the result that the two sets of corrected coordinates ( xcorrected, ycorrected ) and ( u, v ) are not the same. Why the contradiction? It seems to me the first set makes more sense, as I can attach physical meaning to each and every x and y in there, while I find no physical meaning in x' = X / Z and y' = Y / Z when the camera focal length is not exactly 1. Furthermore, one cannot compute x' and y', for we don't know (X, Y, Z).

    (3) Unfortunately, things get even murkier when we refer to the writings in Intel's Open Source Computer Vision Library Reference Manual, section Lens Distortion (page 6-4), which states in part:

    "Let ( u, v ) be true pixel image coordinates, that is, coordinates with ideal projection, and ( u~, v~ ) be corresponding real observed (distorted) image coordinates. Similarly, ( x, y ) are ideal (distortion-free) and ( x~, y~ ) are real (distorted) image physical coordinates. Taking into account two expansion terms gives the following:

        x~ = x * ( 1 + k1 * r^2 + k2 * r^4 ) + [ 2 * p1 * x * y + p2 * ( r^2 + 2 * x^2 ) ]
        y~ = y * ( 1 + k1 * r^2 + k2 * r^4 ) + [ 2 * p2 * x * y + p1 * ( r^2 + 2 * y^2 ) ]

    where r = sqrt( x^2 + y^2 ). ... Because u~ = cx + fx * u and v~ = cy + fy * v, ... the resultant system can be rewritten as follows:

        u~ = u + ( u - cx ) * [ k1 * r^2 + k2 * r^4 + 2 * p1 * y + p2 * ( r^2 / x + 2 * x ) ]
        v~ = v + ( v - cy ) * [ k1 * r^2 + k2 * r^4 + 2 * p2 * x + p1 * ( r^2 / y + 2 * y ) ]

    The latter relations are used to undistort images from the camera."

    Well, it would appear that the expressions involving x~ and y~ coincide with the two expressions given at the top of this post involving xcorrected and ycorrected. However, x~ and y~ do not refer to corrected coordinates, according to the given description. I don't understand the distinction between the meaning of the coordinates ( x~, y~ ) and ( u~, v~ ), or, for that matter, between the pairs ( x, y ) and ( u, v ). From their descriptions it appears their only distinction is that ( x~, y~ ) and ( x, y ) refer to 'physical' coordinates while ( u~, v~ ) and ( u, v ) do not. What is this distinction all about? Aren't they all physical coordinates? I'm lost! Thanks for any input!
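
    To make the comparison concrete, here is model (2) from the C reference written out directly as code (a sketch; the variable names are mine):

        using System;

        static class Distortion
        {
            // Model (2): camera-frame point in, distorted pixel coordinates out.
            public static void Project(
                double X, double Y, double Z,
                double k1, double k2, double k3, double p1, double p2,
                double fx, double fy, double cx, double cy,
                out double u, out double v)
            {
                double xp = X / Z, yp = Y / Z;        // x', y' (normalized coords)
                double r2 = xp * xp + yp * yp;        // r'^2
                double radial = 1 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2;
                double xpp = xp * radial + 2 * p1 * xp * yp + p2 * (r2 + 2 * xp * xp);
                double ypp = yp * radial + p1 * (r2 + 2 * yp * yp) + 2 * p2 * xp * yp;
                u = fx * xpp + cx;                    // pixel coordinates
                v = fy * ypp + cy;
            }
        }

    Written this way, one reading of the apparent conflict is that (1) applies the same polynomial to coordinates that are already pixel-scaled, which agrees with (2) only when fx = fy = 1 and cx = cy = 0; x' and y' are normalized (focal-length-free) coordinates rather than physical ones.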

    Read the article

  • Excel MAXIF function or emulation?

    - by Andre Boos
    I have a moderately sized dataset in Excel from which I wish to extract the maximum of the values in Column B, but only those that correspond to cells in Column A satisfying certain criteria. The desired functionality is similar to that of SUMIF or COUNTIF, but neither of those returns the data I need. There isn't a MAXIF function, so I ask the SO community: how do I emulate one?

    Read the article

  • Open an attachment for editing and save changes made to it

    - by Seitaridis
    My Lotus Notes document has a rich text item that stores an attachment. I want to edit the attachment and afterwards save it back to the Lotus Notes document. This is the script that launches the attachment:

        @Command([EditGotoField]; "Attachment");
        @Command([EditSelectAll]);
        @Command([AttachmentLaunch]);
        @Command([EditDeselectAll])

    This script opens the attachment, but the changes made are not reflected in the Lotus document. One way of solving this is to add AttachmentActionDefault=2 as an entry in notes.ini. This makes it possible to edit the attachment by double-clicking it; right-clicking the attachment and choosing the edit action produces the same result. In both cases, after saving, the changes are reflected back in the Lotus Notes document. The problem is that I want to use a button for opening the attachment.

    Read the article

  • How to allow users to define financial formulas in a C# app

    - by Peter Morris
    I need to allow my users to define formulas which will calculate values based on data. For example:

        // Example 1
        return GetMonetaryAmountFromDatabase("Amount due") * 1.2;

        // Example 2
        return GetMonetaryAmountFromDatabase("Amount due") * GetFactorFromDatabase("Discount");

    I will need to allow / * + - operations, and also to assign local variables and execute IF statements, like so:

        var amountDue = GetMonetaryAmountFromDatabase("Amount due");
        if (amountDue > 100000)
            return amountDue * 0.75;
        if (amountDue > 50000)
            return amountDue * 0.9;
        return amountDue;

    The scenario is complicated because I have the following structure:

        Customer (a few hundred)
        Configuration (about 10 per customer)
        Item (about 10,000 per customer configuration)

    So I will perform a three-level loop. At each Configuration level I will start a DB transaction and compile the formulas; each Item will use the same transaction plus the compiled formulas (there are about 20 formulas per configuration, and each item will use all of them). This further complicates things, because I can't just use the compiler services, as that would result in continued memory-usage growth. I can't use a new AppDomain per Configuration loop level, because some of the references I need to pass cannot be marshalled. Any suggestions?
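
    Given how well nCalc worked out in the runtime expression parser question above, one hedged sketch (a suggestion; only the GetMonetaryAmountFromDatabase-style lookups come from the post) is to parse each formula once per Configuration and re-evaluate it per Item. An interpreted evaluator emits no code into the AppDomain, so there is no memory growth to manage, and NCalc's built-in if(condition, then, else) can stand in for the if-statements:

        using System;
        using System.Collections.Generic;
        using NCalc;

        class FormulaEngine
        {
            // Parsed once per Configuration, reused for every Item.
            private readonly List<Expression> compiled = new List<Expression>();

            public FormulaEngine(IEnumerable<string> formulaTexts)
            {
                foreach (string text in formulaTexts)
                    compiled.Add(new Expression(text));
            }

            // itemValues holds the per-Item database figures, e.g. "AmountDue".
            public decimal RunAll(IDictionary<string, decimal> itemValues)
            {
                decimal total = 0m;
                foreach (Expression e in compiled)
                {
                    foreach (KeyValuePair<string, decimal> v in itemValues)
                        e.Parameters[v.Key] = v.Value;
                    total += Convert.ToDecimal(e.Evaluate());
                }
                return total;
            }
        }

        // A formula in this scheme would look like:
        //   if(AmountDue > 100000, AmountDue * 0.75,
        //      if(AmountDue > 50000, AmountDue * 0.9, AmountDue))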

    Read the article

  • How do I delete the 32k errored document?

    - by Ramkumar
    I have some documents that, when I try to open them, show an error like "Field is too large (32K) or view's column & selection formulas are too large". Whenever I try to delete such a document, I get the same error, so I am not able to delete it. We can try to get at the document via the back end, but there I cannot get a document handle: whatever I search for, the document collection count is 0. Important: I am using Notes 6.5.2. Thanks in advance.

    Read the article

  • Is there a macro or a way to conditionally copy rows from one or more worksheet to another in Excel 2007

    - by marison
    I'm pulling data from two or more Excel files into one, keeping only rows that meet a specific condition. For example:

        File 1:
        Date      Project  ID          Engineer
        8/2/2008  XYZ      T0908-5555  JS
        9/4/2008  ABC      T0908-6666  DF
        9/5/2008  ZZZ      T0908-7777  TS
        9/4/2008  ABC      T0908-1111  DF
        9/5/2008  POR      T0908-7777  MS
        9/4/2008  ABC      T0908-2222  DD

        File 2:
        Date      Project  ID          Engineer
        8/2/2008  ABC      T1908-5555  JS
        9/4/2008  XYZ      T1908-6666  DF
        9/5/2008  ABC      T1908-7777  TS
        9/4/2008  ZZZ      T1908-1111  DF
        9/5/2008  POR      T1908-7777  MS
        9/4/2008  ABC      T1908-2222  DD

    I want the data from both File 1 and File 2 in a new workbook, with only those rows whose Project is "ABC". The paths of File 1 and File 2 change on a daily basis. Kindly help.

    Read the article

  • Filtering and then counting distinct values

    - by Deon
    This is for Excel. I've been tasked with counting distinct records after I have filtered the data. I have 330 rows, with column A containing the student name and column B the name of a test taken by that student; each student may have taken several iterations of the same test. The test results are in column C:

        Col A - Student   Col B - Exam   Col C - Grade
        Student 1         Exam 1         .80
        Student 2         Exam 1         .50
        Student 3         Exam 1         .90
        Student 2         Exam 1         .75
        Student 4         Exam 1         .90
        Student 5         Exam 1         .55
        Student 2         Exam 2         .90
        Student 1         Exam 2         .90
        ...

    If I filter column B for Exam 1, I want to count the unique number of students that have taken Exam 1.
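
    To pin down the computation being asked for (expressed here as a C# LINQ sketch for clarity; the record shape is illustrative, and the Excel-side formula is a separate matter):

        using System.Collections.Generic;
        using System.Linq;

        class Row
        {
            public string Student;
            public string Exam;
            public double Grade;
        }

        static class DistinctCount
        {
            // Filter to one exam, then count each student at most once.
            public static int StudentsWhoTook(IEnumerable<Row> rows, string exam)
            {
                return rows.Where(r => r.Exam == exam)
                           .Select(r => r.Student)
                           .Distinct()
                           .Count();
            }
        }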

    Read the article
