Search Results

Search found 2750 results on 110 pages for 'recursive subquery factor'.


  • Screenshot utilities for Windows

    - by Factor Mystic
    What are some good screenshot programs for Windows, preferably free? Obviously I know I can hit the Print Screen key and paste that into Paint and save it, but I'm looking for something that makes it simple. Bonus points if it can automatically upload to an image host.

    Read the article

  • Is there an ActiveRecord equivalent to using a nested subquery i.e. ... where NOT IN(select...) ?

    - by Snorkpete
    I have three models: Category, Account, and SubAccount. The relations are:

      Account  has_many :sub_accounts
      Category has_many :sub_accounts

    I want to get a list of all Categories that are not used by a given account. My method in the Category model currently looks like:

      class Category < ActiveRecord::Base
        def self.not_used_by(account)
          Category.find_by_sql(
            "select * from categories
             where id not in (select category_id from sub_accounts
                              where account_id = #{account.id})")
        end
      end

    My question is: is there a cleaner alternative than dropping down to raw SQL? NB: I am currently using Rails 3 (beta).
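    A hedged sketch of one tidier option under Rails 3: keep the NOT IN subquery as a SQL fragment, but run it through where() with a bind parameter, which avoids both find_by_sql and interpolating account.id into the string. This assumes the standard Rails 3 query interface and the table names above:

      class Category < ActiveRecord::Base
        def self.not_used_by(account)
          # the subquery stays in SQL; the account id is bound, not interpolated
          where("categories.id NOT IN
                   (SELECT category_id FROM sub_accounts WHERE account_id = ?)",
                account.id)
        end
      end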

    Read the article

  • Flex/bison, error: undeclared

    - by Imran
    Hello, I have a problem: the following program produces the error "undeclared (first use in this function)". Why does this error appear when all the tokens are declared? Can anyone help me? Here are the lex and yacc files. Thanks.

    The lex file:

      %{
      int yylinenu = 1;
      int yycolno  = 1;
      %}
      %x STR
      DIGIT          [0-9]
      ALPHA          [a-zA-Z]
      ID             {ALPHA}(_?({ALPHA}|{DIGIT}))*_?
      GROUPED_NUMBER ({DIGIT}{1,3})(\.{DIGIT}{3})*
      SIMPLE_NUMBER  {DIGIT}+
      NUMMER         {GROUPED_NUMBER}|{SIMPLE_NUMBER}
      %%
      <INITIAL>{
        [\n]       {++yylinenu; yycolno=1;}
        [ ]+       {yycolno=yycolno+yyleng;}
        [\t]+      {yycolno=yycolno+(yyleng*8);}
        "*"        {return MAL;}
        "+"        {return PLUS;}
        "-"        {return MINUS;}
        "/"        {return SLASH;}
        "("        {return LINKEKLAMMER;}
        ")"        {return RECHTEKLAMMER;}
        "{"        {return LINKEGESCHWEIFTEKLAMMER;}
        "}"        {return RECHTEGESCHEIFTEKLAMMER;}
        "="        {return GLEICH;}
        "=="       {return GLEICHVERGLEICH;}
        "!="       {return UNGLEICH;}
        "<"        {return KLEINER;}
        ">"        {return GROSSER;}
        "<="       {return KLEINERGLEICH;}
        ">="       {return GROSSERGLEICH;}
        "while"    {return WHILE;}
        "if"       {return IF;}
        "else"     {return ELSE;}
        "printf"   {return PRINTF;}
        ";"        {return SEMIKOLON;}
        \/\/[^\n]* {;}
        {NUMMER}   {return NUMBER;}
        {ID}       {return IDENTIFIER;}
        \"         {BEGIN(STR);}
        .          {;}
      }
      <STR>{
        \n {++yylinenu; yycolno=1;}
        ([^\"\\]|"\\t"|"\\n"|"\\r"|"\\b"|"\\\"")+ {return STRING;}
        \" {BEGIN(INITIAL);}
      }
      %%
      yywrap() { }

    The yacc file:

      %{
      #include <stdio.h>
      #include <string.h>
      #include "lex.yy.c"
      void yyerror(char *err);
      int error = 0, linecnt = 1;
      %}
      %token IDENTIFIER NUMBER STRING COMMENT PLUS MINUS MAL SLASH
      %token LINKEKLAMMER RECHTEKLAMMER LINKEGESCHWEIFTEKLAMMER RECHTEGESCHEIFTEKLAMMER
      %token GLEICH GLEICHVERGLEICH UNGLEICH GROSSER KLEINER GROSSERGLEICH KLEINERGLEICH
      %token IF ELSE WHILE PRINTF SEMIKOLON
      %start Stmts
      %%
      Stmts : Stmt       {puts("\t\tStmts : Stmt");}
            | Stmt Stmts {puts("\t\tStmts : Stmt Stmts");}
            ;
      Stmt  : LINKEGESCHWEIFTEKLAMMER Stmts RECHTEGESCHEIFTEKLAMMER {puts("\t\tStmt : '{' Stmts '}'");}
            | IF LINKEKLAMMER Cond RECHTEKLAMMER Stmt               {puts("\t\tStmt : '(' Cond ')' Stmt");}
            | IF LINKEKLAMMER Cond RECHTEKLAMMER Stmt ELSE Stmt     {puts("\t\tStmt : '(' Cond ')' Stmt 'ELSE' Stmt");}
            | WHILE LINKEKLAMMER Cond RECHTEKLAMMER Stmt            {puts("\t\tStmt : 'PRINTF' Expr ';'");}
            | PRINTF Expr SEMIKOLON                                 {puts("\t\tStmt : 'PRINTF' Expr ';'");}
            | IDENTIFIER GLEICH Expr SEMIKOLON                      {puts("\t\tStmt : 'IDENTIFIER' '=' Expr ';'");}
            | SEMIKOLON                                             {puts("\t\tStmt : ';'");}
            ;
      Cond  : Expr GLEICHVERGLEICH Expr {puts("\t\tCond : '==' Expr");}
            | Expr UNGLEICH Expr        {puts("\t\tCond : '!=' Expr");}
            | Expr KLEINER Expr         {puts("\t\tCond : '<' Expr");}
            | Expr KLEINERGLEICH Expr   {puts("\t\tCond : '<=' Expr");}
            | Expr GROSSER Expr         {puts("\t\tCond : '>' Expr");}
            | Expr GROSSERGLEICH Expr   {puts("\t\tCond : '>=' Expr");}
            ;
      Expr  : Term            {puts("\t\tExpr : Term");}
            | Term PLUS Expr  {puts("\t\tExpr : Term '+' Expr");}
            | Term MINUS Expr {puts("\t\tExpr : Term '-' Expr");}
            ;
      Term  : Factor            {puts("\t\tTerm : Factor");}
            | Factor MAL Term   {puts("\t\tTerm : Factor '*' Term");}
            | Factor SLASH Term {puts("\t\tTerm : Factor '/' Term");}
            ;
      Factor : SimpleExpr       {puts("\t\tFactor : SimpleExpr");}
             | MINUS SimpleExpr {puts("\t\tFactor : '-' SimpleExpr");}
             ;
      SimpleExpr : LINKEKLAMMER Expr RECHTEKLAMMER {puts("\t\tSimpleExpr : '(' Expr ')'");}
                 | IDENTIFIER                      {puts("\t\tSimpleExpr : 'IDENTIFIER'");}
                 | NUMBER                          {puts("\t\tSimpleExpr : 'NUMBER'");}
                 | STRING                          {puts("\t\tSimpleExpr : 'String'");}
                 ;
      %%
      void yyerror(char *msg) {
          error = 1;
          printf("Line: %d, Column: %d: %s %s\n", yylinenu, yycolno, yytext, msg);
      }

      int main(int argc, char *argv[]) {
          while (yylex()) {
              printf("%s\n", yytext);
          }
          return yyparse();
      }
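    A hedged diagnosis, judging only from how the two files are combined: the scanner's actions return token macros such as MAL and PLUS, but those macros are only #defined once yacc has processed the %token lines. Because "lex.yy.c" is included in the yacc prologue, before the token definitions are emitted, every return TOKEN inside the lexer refers to an undeclared identifier. One minimal fix (file layout assumed) is to include the generated scanner in the epilogue instead:

      /* parser.y - sketch of the fix */
      %{
      #include <stdio.h>
      #include <string.h>
      void yyerror(char *err);
      /* no #include "lex.yy.c" here */
      %}
      %token IDENTIFIER NUMBER STRING /* ...rest of the tokens... */
      %%
      /* grammar rules unchanged */
      %%
      /* included here, after yacc has #defined all the tokens */
      #include "lex.yy.c"

    Alternatively, generate a header with 'yacc -d' (or 'bison -d'), put '#include "y.tab.h"' at the top of the .l file, and compile the two generated .c files separately. As an aside, the while(yylex()) loop in main consumes all input before yyparse runs, so once the undeclared errors are fixed the parser will see an empty token stream.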

    Read the article

  • sshd: How to enable PAM authentication for specific users under

    - by Brad
    I am using sshd and allow logins with public-key authentication. I want to allow select users to log in with a PAM two-factor authentication module. Is there any way I can enable PAM two-factor authentication for specific users only? By the same token, I only want to enable password authentication for specific accounts: the daemon should reject password authentication attempts outright, so would-be attackers think I will not accept password authentication at all - except for one heavily guarded, password-enabled secret account. I want this for the cases in which my SSH clients will not let me do either key-based or two-factor authentication.
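    A hedged sketch of how this is typically scoped per-user with a Match block in sshd_config (requires OpenSSH 4.4 or later; the username is a placeholder, and PAM modules normally ride on the challenge-response/keyboard-interactive path):

      # /etc/ssh/sshd_config - global policy: public keys only
      UsePAM yes
      PasswordAuthentication no
      ChallengeResponseAuthentication no

      # exceptions for the guarded account (name is hypothetical)
      Match User secretadmin
          PasswordAuthentication yes
          ChallengeResponseAuthentication yes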

    Read the article

  • Which type of motherboard should I buy, and why?

    - by metal gear solid
    Budget is not a problem; I just need the best performance with the least power consumption, and I can purchase any case, power supply, and motherboard. Does the power supply have any relation to the form factor? Are the size of the motherboard and the number of slots the only differences between form factors? Are there any differences between form factors that affect motherboard performance? Is a bigger (ATX) motherboard always better? Does a smaller motherboard consume less power? What are the pros and cons of each form factor? And why were so many form factors created?

    Read the article

  • Processing an audio WAV file with C

    - by sa125
    Hi - I'm working on processing the amplitude of a wav file and scaling it by some decimal factor. I'm trying to wrap my head around how to read and re-write the file in a memory-efficient way while also trying to tackle the nuances of the language (I'm new to C). The file can be in either an 8- or 16-bit format. The way I thought of doing this is by first reading the header data into some pre-defined struct, and then processing the actual data in a loop where I'll read a chunk of data into a buffer, do whatever is needed to it, and then write it to the output.

      #include <stdio.h>
      #include <stdlib.h>

      typedef struct header {
          char chunk_id[4];
          int chunk_size;
          char format[4];
          char subchunk1_id[4];
          int subchunk1_size;
          short int audio_format;
          short int num_channels;
          int sample_rate;
          int byte_rate;
          short int block_align;
          short int bits_per_sample;
          short int extra_param_size;
          char subchunk2_id[4];
          int subchunk2_size;
      } header;

      typedef struct header* header_p;

      void scale_wav_file(char * input, float factor, int is_8bit)
      {
          FILE * infile  = fopen(input, "rb");
          FILE * outfile = fopen("outfile.wav", "wb");

          int BUFSIZE = 4000, i, MAX_8BIT_AMP = 255, MAX_16BIT_AMP = 32678;

          // used for processing 8-bit file
          unsigned char inbuff8[BUFSIZE], outbuff8[BUFSIZE];
          // used for processing 16-bit file
          short int inbuff16[BUFSIZE], outbuff16[BUFSIZE];

          // header_p points to a header struct that contains the file's metadata fields
          header_p meta = (header_p)malloc(sizeof(header));

          if (infile) {
              // read and write header data
              fread(meta, 1, sizeof(header), infile);
              fwrite(meta, 1, sizeof(meta), outfile);

              while (!feof(infile)) {
                  if (is_8bit) {
                      fread(inbuff8, 1, BUFSIZE, infile);
                  } else {
                      fread(inbuff16, 1, BUFSIZE, infile);
                  }

                  // scale amplitude for 8/16 bits
                  for (i = 0; i < BUFSIZE; ++i) {
                      if (is_8bit) {
                          outbuff8[i] = factor * inbuff8[i];
                          if ((int)outbuff8[i] > MAX_8BIT_AMP) {
                              outbuff8[i] = MAX_8BIT_AMP;
                          }
                      } else {
                          outbuff16[i] = factor * inbuff16[i];
                          if ((int)outbuff16[i] > MAX_16BIT_AMP) {
                              outbuff16[i] = MAX_16BIT_AMP;
                          } else if ((int)outbuff16[i] < -MAX_16BIT_AMP) {
                              outbuff16[i] = -MAX_16BIT_AMP;
                          }
                      }
                  }

                  // write to output file for 8/16 bit
                  if (is_8bit) {
                      fwrite(outbuff8, 1, BUFSIZE, outfile);
                  } else {
                      fwrite(outbuff16, 1, BUFSIZE, outfile);
                  }
              }
          }

          // cleanup
          if (infile)  { fclose(infile); }
          if (outfile) { fclose(outfile); }
          if (meta)    { free(meta); }
      }

      int main (int argc, char const *argv[])
      {
          char infile[] = "file.wav";
          float factor = 0.5;
          scale_wav_file(infile, factor, 0);
          return 0;
      }

    I'm getting differing file sizes at the end (by 1k or so, for a 40Mb file), and I suspect this is due to the fact that I'm writing an entire buffer to the output, even though the file may have terminated before filling the entire buffer size. Also, the output file is messed up - won't play or open - so I'm probably doing the whole thing wrong. Any tips on where I'm messing up will be great. Thanks!
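    A hedged reading of the two most likely culprits, keeping the poster's names (and noting, as an aside, that MAX_16BIT_AMP is presumably meant to be 32767, not 32678): fwrite(meta, 1, sizeof(meta), outfile) writes only sizeof(header_p) bytes - the size of a pointer, 4 or 8 - instead of the whole header, which by itself breaks the output file; and the loop writes BUFSIZE samples even when the final fread returned fewer, which accounts for the size difference. Checking fread's return value fixes the short final chunk and replaces the problematic feof() loop:

      /* sketch for the 8-bit path; the 16-bit path is the same idea,
         but read/write with element size 2 instead of 1 */
      fwrite(meta, 1, sizeof(header), outfile);   /* not sizeof(meta) */

      size_t n;
      while ((n = fread(inbuff8, 1, BUFSIZE, infile)) > 0) {
          for (size_t i = 0; i < n; ++i) {
              /* ...scale inbuff8[i] into outbuff8[i] as before... */
          }
          fwrite(outbuff8, 1, n, outfile);        /* n, not BUFSIZE */
      }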

    Read the article

  • Parallelism in .NET – Part 11, Divide and Conquer via Parallel.Invoke

    - by Reed
    Many algorithms are easily written to work via recursion.  For example, most data-oriented tasks where a tree of data must be processed are much more easily handled by starting at the root, and recursively “walking” the tree.  Some algorithms work this way on flat data structures, such as arrays, as well.  This is a form of divide and conquer: an algorithm design which is based around breaking up a set of work recursively, “dividing” the total work in each recursive step, and “conquering” the work when the remaining work is small enough to be solved easily.

    Recursive algorithms, especially ones based on a form of divide and conquer, are often a very good candidate for parallelization.  This is apparent from a common sense standpoint.  Since we’re dividing up the total work in the algorithm, we have an obvious, built-in partitioning scheme.  Once partitioned, the data can be worked upon independently, so there is good, clean isolation of data.

    Implementing this type of algorithm is fairly simple.  The Parallel class in .NET 4 includes a method suited for this type of operation: Parallel.Invoke.  This method works by taking any number of delegates defined as an Action, and operating them all in parallel.  The method returns when every delegate has completed:

      Parallel.Invoke(
          () => { Console.WriteLine("Action 1 executing in thread {0}",
                      Thread.CurrentThread.ManagedThreadId); },
          () => { Console.WriteLine("Action 2 executing in thread {0}",
                      Thread.CurrentThread.ManagedThreadId); },
          () => { Console.WriteLine("Action 3 executing in thread {0}",
                      Thread.CurrentThread.ManagedThreadId); }
      );

    Running this simple example demonstrates the ease of using this method.  For example, on my system, I get three separate thread IDs when running the above code.  By allowing any number of delegates to be executed directly, concurrently, the Parallel.Invoke method provides us an easy way to parallelize any algorithm based on divide and conquer.  We can divide our work in each step, and execute each task in parallel, recursively.

    For example, suppose we wanted to implement our own quicksort routine.  The quicksort algorithm can be designed based on divide and conquer.  In each iteration, we pick a pivot point, and use that to partition the total array.  We swap the elements around the pivot, then recursively sort the lists on each side of the pivot.
    For example, let’s look at this simple, sequential implementation of quicksort:

      public static void QuickSort<T>(T[] array) where T : IComparable<T>
      {
          QuickSortInternal(array, 0, array.Length - 1);
      }

      private static void QuickSortInternal<T>(T[] array, int left, int right)
          where T : IComparable<T>
      {
          if (left >= right)
          {
              return;
          }

          SwapElements(array, left, (left + right) / 2);

          int last = left;
          for (int current = left + 1; current <= right; ++current)
          {
              if (array[current].CompareTo(array[left]) < 0)
              {
                  ++last;
                  SwapElements(array, last, current);
              }
          }

          SwapElements(array, left, last);

          QuickSortInternal(array, left, last - 1);
          QuickSortInternal(array, last + 1, right);
      }

      static void SwapElements<T>(T[] array, int i, int j)
      {
          T temp = array[i];
          array[i] = array[j];
          array[j] = temp;
      }

    Here, we implement the quicksort algorithm in a very common, divide and conquer approach.  Running this against the built-in Array.Sort routine shows that we get the exact same answers (although the framework’s sort routine is slightly faster).  On my system, for example, I can use the framework’s sort to sort ten million random doubles in about 7.3s, and this implementation takes about 9.3s on average.

    Looking at this routine, though, there is a clear opportunity to parallelize.  At the end of QuickSortInternal, we recursively call into QuickSortInternal with each partition of the array after the pivot is chosen.  This can be rewritten to use Parallel.Invoke by simply changing it to:

      // Code above is unchanged...
      SwapElements(array, left, last);

      Parallel.Invoke(
          () => QuickSortInternal(array, left, last - 1),
          () => QuickSortInternal(array, last + 1, right)
      );
      }

    This routine will now run in parallel.  When executing, we now see the CPU usage across all cores spike while it executes.  However, there is a significant problem here – by parallelizing this routine, we took it from an execution time of 9.3s to an execution time of approximately 14 seconds!  We’re using more resources as seen in the CPU usage, but the overall result is a dramatic slowdown in overall processing time.

    This occurs because parallelization adds overhead.  Each time we split this array, we spawn two new tasks to parallelize this algorithm!  This is far, far too many tasks for our cores to operate upon at a single time.  In effect, we’re “over-parallelizing” this routine.  This is a common problem when working with divide and conquer algorithms, and leads to an important observation:

    When parallelizing a recursive routine, take special care not to add more tasks than necessary to fully utilize your system.

    This can be done with a few different approaches, in this case.  Typically, the way to handle this is to stop parallelizing the routine at a certain point, and revert back to the serial approach.  Since the first few recursions will all still be parallelized, our “deeper” recursive tasks will be running in parallel, and can take full advantage of the machine.  This also dramatically reduces the overhead added by parallelizing, since we’re only adding overhead for the first few recursive calls.  There are two basic approaches we can take here.  The first approach would be to look at the total work size, and if it’s smaller than a specific threshold, revert to our serial implementation.  In this case, we could just check right - left, and if it’s under a threshold, call the methods directly instead of using Parallel.Invoke.
    The second approach is to track how “deep” in the “tree” we are currently at, and if we are below some number of levels, stop parallelizing.  This approach is a more general-purpose approach, since it works on routines which parse trees as well as routines working off of a single array, but may not work as well if a poor partitioning strategy is chosen or the tree is not balanced evenly.

    This can be written very easily.  If we pass a maxDepth parameter into our internal routine, we can restrict the amount of times we parallelize by changing the recursive call to:

      // Code above is unchanged...
      SwapElements(array, left, last);

      if (maxDepth < 1)
      {
          QuickSortInternal(array, left, last - 1, maxDepth);
          QuickSortInternal(array, last + 1, right, maxDepth);
      }
      else
      {
          --maxDepth;
          Parallel.Invoke(
              () => QuickSortInternal(array, left, last - 1, maxDepth),
              () => QuickSortInternal(array, last + 1, right, maxDepth));
      }

    We no longer allow this to parallelize indefinitely – only to a specific depth, at which time we revert to a serial implementation.  By starting the routine with a maxDepth equal to Environment.ProcessorCount, we can restrict the total amount of parallel operations significantly, but still provide adequate work for each processing core.

    With this final change, my timings are much better.  On average, I get the following timings:

      Framework via Array.Sort:                 7.3 seconds
      Serial Quicksort Implementation:          9.3 seconds
      Naive Parallel Implementation:            14 seconds
      Parallel Implementation Restricting Depth: 4.7 seconds

    Finally, we are now faster than the framework’s Array.Sort implementation.

    Read the article

  • Box 2d basic questions

    - by philipp
    I am a bit new to Box2D, and I am developing a game built around type and letters. I am using an SVG font and generating the Box2D bodies directly from the glyphs' path definitions, using their convex hulls; I also have a decomposition routine that splits a hull if necessary. All of this is more or less working, except that I get some strange errors which are definitely caused by the scale factors. The problem has two causes: first, the world scale of Box2D; second, the precision of the curve approximation of the glyph vectors. When I scale the input vertices down for Box2D, numerical precision can make two of them equal, which causes errors inside Box2D. Scaling my glyphs up a bit makes the errors go away, as does choosing a different world scale factor, but the latter slows the whole animation down quite a lot! So if my viewport is about 990px * 600px, and I want to animate glyphs in Box2D that should have a size from about 50px * 50px up to 300px * 300px, which scale factor should I choose for the b2World? And how small should the smallest vertex-to-vertex distance be while approximating the glyph vectors? Thanks for the help - greetings, philipp. EDIT: I continued reading the Box2D docs, and after rethinking the unit system, which is designed to handle objects from 0.1 up to 10 meters, I calculated a scale factor of 75. So objects 600px wide will be 8 meters wide in Box2D, and even small objects of about 20px width will become 0.26 meters wide. I will keep experimenting with these values, but if there is somebody out there who could give me clever advice, I would be happy!
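    For what it's worth, a minimal sketch of the conversion a factor of 75 implies (JavaScript assumed, since the post suggests a browser setting; the names are made up). The usual discipline is to keep all Box2D geometry in metres and convert only at the rendering edge:

      var SCALE = 75; // pixels per metre, from the edit above

      function toWorld(px) { return px / SCALE; }  // 600px -> 8 m
      function toScreen(m) { return m * SCALE; }   // 0.267 m -> ~20px

      // e.g. a 300px glyph becomes a 4 m body:
      var bodyWidth = toWorld(300);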

    Read the article

  • Is recursion really bad?

    - by dotneteer
    After my previous post about stack space, the feedback gave the impression that recursion is bad and we should avoid deep recursion. After writing a compiler, I know that modern computers and compilers are complex enough that one cannot automatically assume hand-crafted code will out-perform the compiler's optimization. The only way to know is to prototype and measure.

    So why might recursive code not perform well? Compilers place frames on a stack. In addition to arguments and local variables, compilers also need to place frame and program pointers on the frame, resulting in overhead.

    So why might hand-crafted code not perform well either? The stack used by a compiler is a simple data structure that can grow and shrink cleanly. To replace recursion with our own stack, we allocate that stack on the heap, which is far more complicated to manage: there can be overhead if the runtime needs to mark objects for garbage collection, and the compiler also needs to worry about memory fragmentation.

    Then there is additional complexity: CPUs have registers and multiple levels of cache. Register access is a few times faster than in-CPU cache access, which in turn is tens of times faster than on-board memory access. So it is up to the OS and compiler to maximize the use of registers and in-CPU cache.

    For my particular problem, I did an experiment, rewriting my C# recursive code with a loop-and-stack approach. Here are the outcomes of the two approaches:

                                          Recursive call    Loop and stack
      Lines of code for the algorithm           17                46
      Speed                                 Baseline          3% faster
      Readability                              Clean       Far more complex

    In the end, I was able to achieve 3% better performance, at the cost of the drawbacks above. My message: never assume your sophisticated approach will automatically work out better than a simpler approach on a modern computer and compiler. Gauge carefully before committing to the more complex approach.

    Read the article

  • What happens to C# 4 optional parameters when compiling against 3.5?

    - by Bertrand Le Roy
    Here’s a method declaration that uses optional parameters:

      public Path Copy(
          Path destination,
          bool overwrite = false,
          bool recursive = false)

    Something you may not know is that Visual Studio 2010 will let you compile this against .NET 3.5, with no error or warning. You may be wondering (as I was) how it does that. Well, it takes the easy and rather obvious way of not trying to be too smart and just ignores the optional parameters. So if you’re compiling against 3.5 from Visual Studio 2010, the above code is equivalent to:

      public Path Copy(
          Path destination,
          bool overwrite,
          bool recursive)

    The parameters are not optional (no such thing in C# 3), and no overload gets magically created for you. If you’re building a library that is going to have both 3.5 and 4.0 versions, and you want 3.5 users to have reasonable overloads of your methods, you’ll have to provide those yourself, which means that providing a version with optional parameters for the benefit of 4.0 users is not going to provide that much value, except for the ability to provide named parameters out of order. I guess that’s not so bad…

    Providing all of the following overloads will compile against both 3.5 and 4.0:

      public Path Copy(Path destination)
      public Path Copy(Path destination, bool overwrite)
      public Path Copy(
          Path destination,
          bool overwrite = false,
          bool recursive = false)

    Read the article

  • Is it just me or is this a baffling tech interview question

    - by Matthew Patrick Cashatt
    Background I was just asked in a tech interview to write an algorithm to traverse an "object" (notice the quotes) where A is equal to B and B is equal to C and A is equal to C. That's it. That is all the information I was given. I asked the interviewer what the goal was but apparently there wasn't one, just "traverse" the "object". I don't know about anyone else, but this seems like a silly question to me. I asked again, "am I searching for a value?". Nope. Just "traverse" it. Why would I ever want to endlessly loop through this "object"?? To melt my processor maybe?? The answer according to the interviewer was that I should have written a recursive function. OK, so why not simply ask me to write a recursive function? And who would write a recursive function that never ends? My question: Is this a valid question to the rest of you and, if so, can you provide a hint as to what I might be missing? Perhaps I am thinking too hard about solving real world problems. I have been successfully coding for a long time but this tech interview process makes me feel like I don't know anything. Final Answer: CLOWN TRAVERSAL!!! (See @Matt's answer below) Thanks! Matt

    Read the article

  • C# vector class - Interpolation design decision

    - by Benjamin
    Currently I'm working on a vector class in C#, and now I'm at the point where I have to figure out how I want to implement the functions for interpolation between two vectors. At first I came up with implementing the functions directly in the vector class:

      public class Vector3D
      {
          public static Vector3D LinearInterpolate(
              Vector3D vector1, Vector3D vector2, double factor) { ... }

          public Vector3D LinearInterpolate(Vector3D other, double factor) { ... }
      }

    (I always offer both: a static method with two vectors as parameters, and a non-static one with only one vector as parameter.) But then I got the idea to use extension methods (defined in a separate class called "Interpolation", for example), since interpolation isn't really something available only to vectors. So this could be another solution:

      public class Vector3D { ... }

      public static class Interpolation
      {
          public static Vector3D LinearInterpolate(
              this Vector3D vector, Vector3D other, double factor) { ... }
      }

    Here is an example of how you'd use the different possibilities:

      var vec1 = new Vector3D(5, 3, 1);
      var vec2 = new Vector3D(4, 2, 0);
      Vector3D vec3;

      vec3 = vec1.LinearInterpolate(vec2, 0.5);                // 1
      vec3 = Vector3D.LinearInterpolate(vec1, vec2, 0.5);      // 2

      // or with extension methods
      vec3 = vec1.LinearInterpolate(vec2, 0.5);                // 3 (same as 1)
      vec3 = Interpolation.LinearInterpolate(vec1, vec2, 0.5); // 4

    So I really don't know which design is better. Also, I don't know if there's an ultimate rule for things like this, or if it's just about what someone personally prefers. But I would really like to hear your opinions on what's better - and, if possible, why.
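    Whichever shape wins, the body is the same. A hedged sketch of the standard linear interpolation, assuming Vector3D exposes X, Y, Z components and a component-wise constructor (names assumed, not from the post):

      public static Vector3D LinearInterpolate(
          Vector3D v1, Vector3D v2, double factor)
      {
          // lerp: v1 + (v2 - v1) * factor, applied per component;
          // factor 0 yields v1, factor 1 yields v2
          return new Vector3D(
              v1.X + (v2.X - v1.X) * factor,
              v1.Y + (v2.Y - v1.Y) * factor,
              v1.Z + (v2.Z - v1.Z) * factor);
      }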

    Read the article

  • Multiplayer Network Game - Interpolation and Frame Rate

    - by J.C.
    Consider the following scenario: Let's say, for sake of example and simplicity, that you have an authoritative game server that sends state to its clients every 45ms. The clients are interpolating state with an interpolation delay of 100 ms. Finally, the clients are rendering a new frame every 15ms. When state is updated on the client, the client time is set from the incoming state update. Each time a frame renders, we take the render time (client time - interpolation delay) and identify a previous and target state to interpolate from. To calculate the interpolation amount/factor, we take the difference of the render time and previous state time and divide by the difference of the target state and previous state times: var factor = ((renderTime - previousStateTime) / (targetStateTime - previousStateTime)) Problem: In the example above, we are effectively displaying the same interpolated state for 3 frames before we collected the next server update and a new client (render) time is set. The rendering is mostly smooth, but there is a dash of jaggedness to it. Question: Given the example above, I'd like to think that the interpolation amount/factor should increase with each frame render to smooth out the movement. Should this be considered and, if so, what is the best way to achieve this given the information from above?
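    A hedged sketch of one common fix (pseudo-JavaScript, names assumed): drive renderTime from a local clock that advances every rendered frame, instead of from the client time that only changes when a state packet arrives. The factor then grows smoothly across the three frames between updates rather than repeating:

      // advance the clock each frame instead of freezing it between packets
      clientTime += frameDeltaTime; // ~15ms per rendered frame
      var renderTime = clientTime - interpolationDelay;

      var factor = (renderTime - previousStateTime) /
                   (targetStateTime - previousStateTime);
      factor = Math.min(Math.max(factor, 0), 1); // clamp to [0, 1]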

    Read the article

  • Looking for algorithms regarding scaling and moving

    - by user1806687
    I've been bashing my head for the past couple of weeks trying to find algorithms for what at first looks like a very easy task. I've got one object currently made out of 5 cuboids (2 sides, 1 top, 1 bottom, 1 back). This is just an example; later on there will be a whole range of different set-ups. I have included pictures of this object (again, just as an example). Now, when the user scales the whole object, this is what should happen:

    X scale: the top and bottom cuboids should get scaled by the scale factor; the sides should get moved so they are positioned just like they were before (in this case, at both ends of the top and bottom cuboids); the back should get scaled so it fits like before (if I simply scale it by the scale factor, it will leave gaps on each side).

    Y scale: the sides should get scaled by the scale factor; the top and bottom cuboids should get moved; and the back should also get scaled.

    Z scale: the sides and the top and bottom cuboids should get scaled; the back should get moved.

    Here is an image of the example object (a thick-walled box with one face missing, where each wall is made from a cuboid): [image: front of the object]

    Hope you can help,

    Read the article

  • How to move the rigidbody at the position of the mouse on release

    - by Edvin
    I'm making a "Can Knockdown" game, and I need the rigidbody to move to where the player released the mouse (OnMouseUp). At the moment the ball moves on mouse-up because of rigidbody.AddForce(force * factor); it moves toward the mouse position, but doesn't end up where the mouse position is. Here's what I have so far in the script:

      var factor = 20.0;
      var minSwipeDistY : float;

      private var startTime : float;
      private var startPos : Vector3;

      function OnMouseDown(){
          startTime = Time.time;
          startPos = Input.mousePosition;
          startPos.z = transform.position.z - Camera.main.transform.position.z;
          startPos = Camera.main.ScreenToWorldPoint(startPos);
      }

      function OnMouseUp(){
          var endPos = Input.mousePosition;
          endPos.z = transform.position.z - Camera.main.transform.position.z;
          endPos = Camera.main.ScreenToWorldPoint(endPos);

          var force = endPos - startPos;
          force.z = force.magnitude;
          force /= (Time.time - startTime);

          rigidbody.AddForce(force * factor);
      }

    Read the article

  • How to undo a changeset using tf.exe rollback

    - by Tarun Arora
    Technorati Tags: Team Foundation Server 2010, Team Foundation Utilities, TFS2010

    Oh no! Did you just check a changeset in to TFS and realize it was supposed to go to a different branch? Or did you just accidentally merge the wrong changeset into your release branch? There are several ways to undo the damage.

    Manual: Yes, we all just hate this word, but for the record you could manually roll back the changes. Get Specific Version on the branch and choose the changeset prior to the one you checked in. After that, check out all the files in the changeset and check them in. During the check-in you will receive a conflict; at this point, choose 'Keep local changes' in the conflict resolution window and check in the files.

    Automated: Yes, we just love it! TFS comes with a very powerful command line utility, 'tf.exe', that gives you the ability to roll back the effects of one or more changesets to one or more version-controlled items. This command does not remove the changesets from an item's version history. Instead, it creates in your workspace a set of pending changes that negate the effects of the changesets that you specify.

    Syntax:

      tf rollback /toversion:VersionSpec ItemSpec [/recursive]
         [/lock:none|checkin|checkout] [/version:versionspec]
         [/keepmergehistory] [/login:username,[password]] [/noprompt]

      tf rollback /changeset:ChangesetFrom~ChangesetTo [ItemSpec] [/recursive]
         [/lock:none|checkin|checkout] [/version:VersionSpec]
         [/keepmergehistory] [/noprompt] [/login:username,[password]]

    I'll explain this with an example. Your workspace is at the location C:\myWorkspace and you want to roll back changeset # 145621:

      C:\Workspace\MyBranch>tf.exe rollback /changeset:145621 /recursive

    How do I roll back/undo a series of changesets? You can roll back a range of changesets by using the following:

      C:\Workspace\MyBranch>tf.exe rollback /changeset:145601~145621 /recursive

    This will check out the files in version control, and you should be able to see them in the pending changes. Go on, check them in to undo the specific changeset that you just rolled back.

    Do you want to completely exclude the changeset from all future merges between the two branches? /keepmergehistory: this option has an effect only if one or more of the changesets that you are rolling back include a branch or merge change. Specify this option if you want future merges between the same source and the same target to exclude the changes that you are rolling back.

    Errors: if you get the message "Unable to determine the workspace", you are probably in the wrong directory - make sure that you run the 'tf rollback' command from a directory inside your workspace. You may also be able to correct this by running 'tf workspaces /collection:TeamProjectCollectionUrl'.

    Status exit codes:

      0    The operation rolled back all items successfully.
      1    The operation rolled back at least one item successfully but could not roll back one or more items.
      100  The operation could not roll back any items.

    To use the command you must have the Read, Check Out, and Check In permissions set to Allow. So, have you been in a rollback/undo situation before?

    Read the article

  • getting rid of filesort on WordPress MySQL query

    - by Hans
    An instance of WordPress that I manage goes down about once a day due to this monster MySQL query taking far too long:

      SELECT SQL_CALC_FOUND_ROWS DISTINCT wp_posts.*
      FROM wp_posts
      LEFT JOIN wp_term_relationships
        ON (wp_posts.ID = wp_term_relationships.object_id)
      LEFT JOIN wp_term_taxonomy
        ON wp_term_taxonomy.term_taxonomy_id = wp_term_relationships.term_taxonomy_id
      LEFT JOIN wp_ec3_schedule ec3_sch
        ON ec3_sch.post_id = id
      WHERE 1=1
        AND wp_posts.ID NOT IN (
          SELECT tr.object_id
          FROM wp_term_relationships AS tr
          INNER JOIN wp_term_taxonomy AS tt
            ON tr.term_taxonomy_id = tt.term_taxonomy_id
          WHERE tt.taxonomy = 'category'
            AND tt.term_id IN ('1050')
        )
        AND wp_posts.post_type = 'post'
        AND (wp_posts.post_status = 'publish')
        AND NOT EXISTS (
          SELECT *
          FROM wp_term_relationships
          JOIN wp_term_taxonomy
            ON wp_term_taxonomy.term_taxonomy_id = wp_term_relationships.term_taxonomy_id
          WHERE wp_term_relationships.object_id = wp_posts.ID
            AND wp_term_taxonomy.taxonomy = 'category'
            AND wp_term_taxonomy.term_id IN (533,3567)
        )
        AND ec3_sch.post_id IS NULL
      GROUP BY wp_posts.ID
      ORDER BY wp_posts.post_date DESC
      LIMIT 0, 10;

    What do I have to do to get rid of the very slow filesort? I would think that the multicolumn type_status_date index would be fast enough. The EXPLAIN EXTENDED output is below:

      id | select_type        | table                 | type   | possible_keys                     | key              | key_len | ref                                                                              | rows | Extra
       1 | PRIMARY            | wp_posts              | ref    | type_status_date                  | type_status_date | 124     | const,const                                                                      | 7034 | Using where; Using temporary; Using filesort
       1 | PRIMARY            | wp_term_relationships | ref    | PRIMARY                           | PRIMARY          | 8       | bwog_wordpress_w.wp_posts.ID                                                     |  373 | Using index
       1 | PRIMARY            | wp_term_taxonomy      | eq_ref | PRIMARY                           | PRIMARY          | 8       | bwog_wordpress_w.wp_term_relationships.term_taxonomy_id                          |    1 | Using index
       1 | PRIMARY            | ec3_sch               | ref    | post_id_index                     | post_id_index    | 9       | bwog_wordpress_w.wp_posts.ID                                                     |    1 | Using where; Using index
       3 | DEPENDENT SUBQUERY | wp_term_taxonomy      | range  | PRIMARY,term_id_taxonomy,taxonomy | term_id_taxonomy | 106     | NULL                                                                             |    2 | Using where
       3 | DEPENDENT SUBQUERY | wp_term_relationships | eq_ref | PRIMARY,term_taxonomy_id          | PRIMARY          | 16      | bwog_wordpress_w.wp_posts.ID,bwog_wordpress_w.wp_term_taxonomy.term_taxonomy_id |    1 | Using index
       2 | DEPENDENT SUBQUERY | tt                    | const  | PRIMARY,term_id_taxonomy,taxonomy | term_id_taxonomy | 106     | const,const                                                                      |    1 |
       2 | DEPENDENT SUBQUERY | tr                    | eq_ref | PRIMARY,term_taxonomy_id          | PRIMARY          | 16      | func,const                                                                       |    1 | Using index

      8 rows in set, 2 warnings (0.05 sec)

    And the CREATE TABLE:

      CREATE TABLE `wp_posts` (
        `ID` bigint(20) unsigned NOT NULL auto_increment,
        `post_author` bigint(20) unsigned NOT NULL default '0',
        `post_date` datetime NOT NULL default '0000-00-00 00:00:00',
        `post_date_gmt` datetime NOT NULL default '0000-00-00 00:00:00',
        `post_content` longtext NOT NULL,
        `post_title` text NOT NULL,
        `post_excerpt` text NOT NULL,
        `post_status` varchar(20) NOT NULL default 'publish',
        `comment_status` varchar(20) NOT NULL default 'open',
        `ping_status` varchar(20) NOT NULL default 'open',
        `post_password` varchar(20) NOT NULL default '',
        `post_name` varchar(200) NOT NULL default '',
        `to_ping` text NOT NULL,
        `pinged` text NOT NULL,
        `post_modified` datetime NOT NULL default '0000-00-00 00:00:00',
        `post_modified_gmt` datetime NOT NULL default '0000-00-00 00:00:00',
        `post_content_filtered` text NOT NULL,
        `post_parent` bigint(20) unsigned NOT NULL default '0',
        `guid` varchar(255) NOT NULL default '',
        `menu_order` int(11) NOT NULL default '0',
        `post_type` varchar(20) NOT NULL default 'post',
        `post_mime_type` varchar(100) NOT NULL default '',
        `comment_count` bigint(20) NOT NULL default '0',
        `robotsmeta` varchar(64) default NULL,
        PRIMARY KEY (`ID`),
        KEY `post_name` (`post_name`),
        KEY `type_status_date` (`post_type`,`post_status`,`post_date`,`ID`),
        KEY `post_parent` (`post_parent`),
        KEY `post_date` (`post_date`),
        FULLTEXT KEY `post_related` (`post_title`,`post_content`)
      )

    Read the article

  • Oracle Database Enforce CHECK on multiple tables

    - by GigaPr
    I am trying to enforce a CHECK constraint across multiple tables in an Oracle database:

      CREATE TABLE RollingStocks (
        Id NUMBER,
        Name Varchar2(80) NOT NULL,
        RollingStockCategoryId NUMBER NOT NULL,
        CONSTRAINT Pk_RollingStocks PRIMARY KEY (Id),
        CONSTRAINT Check_RollingStocks_CategoryId CHECK (
          (RollingStockCategoryId IN (SELECT Id FROM FreightWagonTypes))
          OR
          (RollingStockCategoryId IN (SELECT Id FROM LocomotiveClasses))
        )
      );

    ...but I get the following error:

      *Cause:    Subquery is not allowed here in the statement.
      *Action:   Remove the subquery from the statement.

    Can you help me understand what the problem is, or how to achieve the same result?
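    The error is Oracle refusing subqueries inside CHECK constraints, so the condition has to live elsewhere. The usual routes are a single parent category table that both FreightWagonTypes and LocomotiveClasses reference (so RollingStocks can use a plain foreign key), or a row-level trigger. A hedged sketch of the trigger route, using the question's table names (the trigger name and error number are made up):

      CREATE OR REPLACE TRIGGER Trg_RollingStocks_CategoryId
      BEFORE INSERT OR UPDATE OF RollingStockCategoryId ON RollingStocks
      FOR EACH ROW
      DECLARE
        n NUMBER;
      BEGIN
        -- count matches in either category table
        SELECT COUNT(*) INTO n FROM (
          SELECT Id FROM FreightWagonTypes WHERE Id = :NEW.RollingStockCategoryId
          UNION ALL
          SELECT Id FROM LocomotiveClasses WHERE Id = :NEW.RollingStockCategoryId
        );
        IF n = 0 THEN
          RAISE_APPLICATION_ERROR(-20001,
            'RollingStockCategoryId matches neither FreightWagonTypes nor LocomotiveClasses');
        END IF;
      END;
      /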

    Read the article

  • Which of these queries is preferable?

    - by bread
    I've written the same query as a subquery and as a self-join. Is there any obvious argument for one over the other here?

    Subquery:

      SELECT prod_id, prod_name
      FROM products
      WHERE vend_id = (SELECT vend_id
                       FROM products
                       WHERE prod_id = 'DTNTR');

    Self-join:

      SELECT p1.prod_id, p1.prod_name
      FROM products p1, products p2
      WHERE p1.vend_id = p2.vend_id
        AND p2.prod_id = 'DTNTR';
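    For what it's worth, the self-join can also be written in explicit JOIN syntax - a readability tweak rather than a third strategy, since it expresses the same query:

      SELECT p1.prod_id, p1.prod_name
      FROM products p1
      INNER JOIN products p2 ON p1.vend_id = p2.vend_id
      WHERE p2.prod_id = 'DTNTR';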

    Read the article

  • How to update a QPixmap in a QGraphicsView with PyQt

    - by pops
    I am trying to paint on a QPixmap inside a QGraphicsView. The painting works fine, but the QGraphicsView doesn't update it. Here is some working code:

      #!/usr/bin/env python
      from PyQt4 import QtCore
      from PyQt4 import QtGui

      class Canvas(QtGui.QPixmap):
          """ Canvas for drawing"""
          def __init__(self, parent=None):
              QtGui.QPixmap.__init__(self, 64, 64)
              self.parent = parent
              self.imH = 64
              self.imW = 64
              self.fill(QtGui.QColor(0, 255, 255))
              self.color = QtGui.QColor(0, 0, 0)

          def paintEvent(self, point=False):
              if point:
                  p = QtGui.QPainter(self)
                  p.setPen(QtGui.QPen(self.color, 1, QtCore.Qt.SolidLine))
                  p.drawPoints(point)

          def clic(self, mouseX, mouseY):
              self.paintEvent(QtCore.QPoint(mouseX, mouseY))

      class GraphWidget(QtGui.QGraphicsView):
          """ Display, zoom, pan..."""
          def __init__(self):
              QtGui.QGraphicsView.__init__(self)
              self.im = Canvas(self)
              self.imH = self.im.height()
              self.imW = self.im.width()
              self.zoomN = 1
              self.scene = QtGui.QGraphicsScene(self)
              self.scene.setItemIndexMethod(QtGui.QGraphicsScene.NoIndex)
              self.scene.setSceneRect(0, 0, self.imW, self.imH)
              self.scene.addPixmap(self.im)
              self.setScene(self.scene)
              self.setTransformationAnchor(QtGui.QGraphicsView.AnchorUnderMouse)
              self.setResizeAnchor(QtGui.QGraphicsView.AnchorViewCenter)
              self.setMinimumSize(400, 400)
              self.setWindowTitle("pix")

          def mousePressEvent(self, event):
              if event.buttons() == QtCore.Qt.LeftButton:
                  pos = self.mapToScene(event.pos())
                  self.im.clic(pos.x(), pos.y())
                  #~ self.scene.update(0,0,64,64)
                  #~ self.updateScene([QtCore.QRectF(0,0,64,64)])
                  self.scene.addPixmap(self.im)
                  print('items')
                  print(self.scene.items())
              else:
                  return QtGui.QGraphicsView.mousePressEvent(self, event)

          def wheelEvent(self, event):
              if event.delta() > 0:
                  self.scaleView(2)
              elif event.delta() < 0:
                  self.scaleView(0.5)

          def scaleView(self, factor):
              n = self.zoomN * factor
              if n < 1 or n > 16:
                  return
              self.zoomN = n
              self.scale(factor, factor)

      if __name__ == '__main__':
          import sys
          app = QtGui.QApplication(sys.argv)
          widget = GraphWidget()
          widget.show()
          sys.exit(app.exec_())

    The mousePressEvent does some painting on the QPixmap. But the only solution I have found to update the display is to make a new instance (which is not a good solution). How do I just update it?
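    A hedged sketch of the usual PyQt remedy: scene.addPixmap() returns a QGraphicsPixmapItem, so instead of adding a new pixmap item on every click (which stacks copies in the scene, as the growing items() list shows), keep that item and push the repainted pixmap back into it with setPixmap():

      # in GraphWidget.__init__ - keep the item that addPixmap() returns
      self.pixmap_item = self.scene.addPixmap(self.im)

      # in mousePressEvent - update the existing item in place
      pos = self.mapToScene(event.pos())
      self.im.clic(pos.x(), pos.y())
      self.pixmap_item.setPixmap(self.im)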

    Read the article

  • cakephp paginate using mysql SQL_CALC_FOUND_ROWS

    - by michael
    I'm trying to make CakePHP's paginate take advantage of the SQL_CALC_FOUND_ROWS feature in MySQL to return a count of total rows while using LIMIT. Hopefully, this can eliminate the double query of paginateCount(), then paginate(). See http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_found-rows

    I've put this in my app_model.php, and it basically works, but it could be done better. Can someone help me figure out how to override paginate/paginateCount so it executes only 1 SQL statement?

      /**
       * Overridden paginateCount method, using SQL_CALC_FOUND_ROWS
       */
      public function paginateCount($conditions = null, $recursive = 0, $extra = array()) {
          $options = array_merge(compact('conditions', 'recursive'), $extra);
          $options['fields'] = "SQL_CALC_FOUND_ROWS `{$this->alias}`.*";

          // Q: how do you get the SAME limit value used in paginate()?
          $options['limit'] = 1; // ideally, should return same rows as in paginate()

          // Q: can we somehow get the query results to paginate directly,
          //    without the extra query?
          $cache_results_from_paginateCount = $this->find('all', $options);

          /*
           * note: we must run the query to get the count, but it will be cached
           * for multiple paginates, so add a comment to the query
           */
          $found_rows = $this->query("/* do not cache for {$this->alias} */ SELECT FOUND_ROWS();");
          $count = array_shift($found_rows[0][0]);
          return $count;
      }

      /**
       * Overridden paginate method
       */
      public function paginate($conditions, $fields, $order, $limit, $page = 1, $recursive = null, $extra = array()) {
          $options = array_merge(compact('conditions', 'fields', 'order', 'limit', 'page', 'recursive'), $extra);

          // Q: can we somehow get $cache_results_from_paginateCount directly
          //    from paginateCount()?
          return $cache_results_from_paginateCount; // paginate in only 1 SQL stmt
          return $this->find('all', $options); // ideally, we could skip this call entirely
      }

    Read the article

  • ANOVA with 3 fixed factors in R

    - by TKBell
    I'm trying to run a model with a response variable p and 3 fixed factors to get an ANOVA table. This is what my code looks like:

      # run it as a 3 fixed factor model
      p1  = c(37,38,37,41,41,40,41,42,41)
      p2  = c(42,41,43,42,42,42,43,42,43)
      p3  = c(30,31,31,31,31,31,29,30,28)
      p4  = c(42,43,42,43,43,43,42,42,42)
      p5  = c(28,30,29,29,30,29,31,29,29)
      p6  = c(42,42,43,45,45,45,44,46,45)
      p7  = c(25,26,27,28,28,30,29,27,27)
      p8  = c(40,40,40,43,42,42,43,43,41)
      p9  = c(37,38,37,41,41,40,41,42,41)
      p10 = c(35,34,34,35,35,34,35,34,35)
      p = cbind(p1,p2,p3,p4,p5,p6,p7,p8,p9,p10)

      partnumber = c(rep(1,9),rep(2,9),rep(3,9),rep(4,9),rep(5,9),
                     rep(6,9),rep(7,9),rep(8,9),rep(9,9),rep(10,9))
      test = c(rep(c(rep(1:3,3)),10))
      inspector = rep(c(rep(1,3),rep(2,3),rep(3,3)),10)

      fpartnumber = factor(partnumber)
      ftest = factor(test)
      finspector = factor(inspector)

      model = lm(p ~ fpartnumber*ftest*finspector)
      summary(model)
      anova(model)

    But when I run it I get the error below. It says my variable length for fpartnumber differs, but when I check, the length of each variable is 90. What is going on?

      model = lm(y ~ fpartnumber*ftest*finspector)
      Error in model.frame.default(formula = yang ~ fpartnumber * ftest * finspector, :
        variable lengths differ (found for 'fpartnumber')
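    A hedged guess at the cause: cbind() builds a 9 x 10 matrix, so lm() sees a response with 9 rows against factors of length 90 - hence "variable lengths differ". Stacking the vectors with c() instead gives one response of length 90 whose order matches how partnumber, test, and inspector were constructed:

      # stack the ten part vectors into one response of length 90
      p = c(p1,p2,p3,p4,p5,p6,p7,p8,p9,p10)
      length(p)  # 90, same as the factors

      model = lm(p ~ fpartnumber*ftest*finspector)
      anova(model)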

    Read the article

  • Recognizing terminals in a CFG production previously not defined as tokens.

    - by kmels
    I'm making a generator of LL(1) parsers; my input is a CoCo/R language specification, and I already have a scanner generator for that input. Suppose I've got the following specification:

      COMPILER 1.

      CHARACTERS
        digit = "0123456789".

      TOKENS
        number    = digit{digit}.
        decnumber = digit{digit}"."digit{digit}.

      PRODUCTIONS
        Expression = Term{"+"Term|"-"Term}.
        Term       = Factor{"*"Factor|"/"Factor}.
        Factor     = ["-"](Number|"("Expression")").
        Number     = (number|decnumber).
      END 1.

    So, if the parser generated by this grammar receives the word "1+1", it would be accepted, i.e. a parse tree would be found. My question: the character "+" was never defined as a token, yet it appears in the production for the non-terminal "Expression". How should my generated scanner recognize it? It would not recognize it as a token. Is this valid input, then? Should I add this terminal in TOKENS and have the scanner's error routine skip it otherwise? How do typical language specifications handle this?

    Read the article

  • Return call from ggplot object

    - by aL3xa
    I've been using ggplot2 for a while now, and I can't find a way to get the call back from a ggplot object. Though I can get basic info with summary(<ggplot_object>), to recover the complete call I usually end up combing up and down through the .Rhistory file. This becomes frustrating when you experiment with new graphs, especially as the code gets a bit lengthy, so searching the history file isn't a convenient way of doing this. Is there a more efficient way? Just an illustration:

      p <- qplot(data = mtcars, x = factor(cyl), geom = "bar", fill = factor(cyl)) +
        scale_fill_manual(name = "Cylinders",
                          value = c("firebrick3", "gold2", "chartreuse3")) +
        stat_bin(aes(label = ..count..), vjust = -0.2,
                 geom = "text", position = "identity") +
        xlab("# of cylinders") +
        ylab("Frequency") +
        opts(title = "Barplot: # of cylinders")

    I can get some basic info with summary:

      > summary(p)
      data: mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb [32x11]
      mapping:  fill = factor(cyl), x = factor(cyl)
      scales:   fill
      faceting: facet_grid(. ~ ., FALSE)
      -----------------------------------
      geom_bar:
      stat_bin:
      position_stack: (width = NULL, height = NULL)
      mapping: label = ..count..
      geom_text: vjust = -0.2
      stat_bin: width = 0.9, drop = TRUE, right = TRUE
      position_identity: (width = NULL, height = NULL)

    But I want to get the code I typed in to produce the graph. I reckon I'm missing something essential here... it seems impossible that there's no way to get the call from a ggplot object!

    Read the article

  • Sorting in Lua, counting number of items

    - by Josh
    Two quick questions (I hope...) about the following code. The script below checks if a number is prime and, if not, returns all the factors of that number; otherwise it just reports that the number is prime. Pay no attention to the zs. stuff in the script - that is client specific and has no bearing on script functionality.

    The script itself works almost wonderfully, except for two minor details. The first is that the factor list doesn't come back sorted - that is, for 24 it returns 1, 2, 12, 3, 8, 4, 6, and 24 instead of 1, 2, 3, 4, 6, 8, 12, and 24. I can't print it as a table, so it does need to be returned as a list. If it has to be sorted as a table first and THEN turned into a list, I can deal with that; all that matters is that the end result is the list. The other detail is that I need to check whether there are only two numbers in the list or more. If there are only two numbers, the number is prime (1 and the number). The current way I have it does not work. Is there a way to accomplish this? I appreciate all the help!

      function get_all_factors(number)
        local factors = 1
        for possible_factor = 2, math.sqrt(number), 1 do
          local remainder = number % possible_factor
          if remainder == 0 then
            local factor, factor_pair = possible_factor, number / possible_factor
            factors = factors .. ", " .. factor
            if factor ~= factor_pair then
              factors = factors .. ", " .. factor_pair
            end
          end
        end
        factors = factors .. ", and " .. number
        return factors
      end

      local allfactors = get_all_factors(zs.param(1))
      if zs.func.numitems(allfactors) == 2 then
        return zs.param(1) .. " is prime."
      else
        return zs.param(1) .. " is not prime, and its factors are: " .. allfactors
      end
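    A hedged sketch that addresses both questions at once (the zs. client calls are left out): build the factors in a table instead of a string, so table.sort handles the ordering, the # length operator gives the count (exactly two entries - 1 and the number itself - means prime), and table.concat produces the final list string:

      local function get_all_factors(number)
        local factors = {1}
        for possible_factor = 2, math.sqrt(number) do
          if number % possible_factor == 0 then
            local pair = number / possible_factor
            factors[#factors + 1] = possible_factor
            if pair ~= possible_factor then
              factors[#factors + 1] = pair
            end
          end
        end
        factors[#factors + 1] = number
        table.sort(factors)
        return factors
      end

      local factors = get_all_factors(24)
      print(table.concat(factors, ", "))  -- "1, 2, 3, 4, 6, 8, 12, 24"
      print(#factors == 2 and "prime" or "not prime")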

    Read the article
