Search Results

Search found 2750 results on 110 pages for 'recursive subquery factor'.


  • Dynamically set generic type argument

    - by fearofawhackplanet
    Following on from my question here, I'm trying to create a generic value equality comparer. I've never played with reflection before, so I'm not sure if I'm on the right track, but anyway this is the idea I have so far:

        bool ContainSameValues<T>(T t1, T t2)
        {
            if (t1 is ValueType || t1 is string)
            {
                return t1.Equals(t2);
            }
            else
            {
                IEnumerable<PropertyInfo> properties = t1.GetType().GetProperties().Where(p => p.CanRead);
                foreach (var property in properties)
                {
                    var p1 = property.GetValue(t1, null);
                    var p2 = property.GetValue(t2, null);
                    if( !ContainSameValues<p1.GetType()>(p1, p2) ) return false;
                }
            }
            return true;
        }

    This doesn't compile because I can't work out how to set the type of T in the recursive call. Is it possible to do this dynamically at all? There are a couple of related questions on here which I have read, but I couldn't follow them well enough to work out how they might apply in my situation.
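    One possible direction (a sketch under the assumption that a plain deep comparison is the goal, not the poster's final code): since the concrete type is only known at runtime anyway, the recursive helper can drop the generic parameter and take object - GetType() and GetProperties() behave the same on a boxed value - so the recursive call needs no type argument at all (the fully generic alternative would go through MethodInfo.MakeGenericMethod):

        // A sketch, not the poster's final code; needs System and System.Linq.
        using System;
        using System.Linq;

        static class DeepCompare
        {
            public static bool ContainSameValues(object t1, object t2)
            {
                if (t1 == null || t2 == null) return ReferenceEquals(t1, t2); // equal only if both null
                if (t1 is ValueType || t1 is string) return t1.Equals(t2);

                var properties = t1.GetType().GetProperties().Where(p => p.CanRead);
                foreach (var property in properties)
                {
                    var p1 = property.GetValue(t1, null);
                    var p2 = property.GetValue(t2, null);
                    if (!ContainSameValues(p1, p2)) return false;   // recursion needs no type argument
                }
                return true;
            }
        }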

    Read the article

  • Is it magic or what?

    - by STRIDER
    I am writing a big C program. The code includes a recursive backtracking function named Branch() that is called very often. My goal is to write the fastest code possible, to get the best running time. I also have another function, Redundant():

        void Redundant()
        {
            int* A;
            A = (int*)malloc(100*sizeof(int));
        }

    I created two versions. Version A: Redundant() is included in Branch(). Version B: Redundant() is not included in Branch(). Version A runs 10 times faster than version B! Is it magic, or is it some kind of process scheduling, or what?

    Read the article

  • Parallel Programming. Boost's MPI, OpenMP, TBB, or something else?

    - by unknownthreat
    Hello, I am a total novice at parallel programming, but I do know how to program in C++. Now I am looking around for a parallel programming library. I just want to give it a try, just for fun, and right now I have found three APIs, but I am not sure which one I should stick with. Right now I see Boost's MPI, OpenMP and TBB. For anyone who has experience with any of these three APIs (or any other parallelism API), could you please tell me the differences between them? Are there any factors to consider, like AMD or Intel architecture?
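    For a feel of the differences, here is a minimal OpenMP sketch (illustrative only): OpenMP parallelizes shared-memory loops with pragmas, whereas Boost.MPI distributes work across processes with explicit messages and TBB expresses it through C++ task and algorithm templates:

        // Parallel reduction with OpenMP: one pragma, built with e.g. -fopenmp.
        #include <omp.h>
        #include <vector>
        #include <cstdio>

        int main() {
            std::vector<double> v(1000000, 1.0);
            double sum = 0.0;
            #pragma omp parallel for reduction(+:sum)   // loop iterations split across cores
            for (long i = 0; i < (long)v.size(); ++i)
                sum += v[i];
            std::printf("sum = %f using up to %d threads\n", sum, omp_get_max_threads());
            return 0;
        }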

    Read the article

  • Tabbed javascript widget for a Rails app

    - by neilc
    A user registers on our Rails app and is given JavaScript to embed a widget in their website. The widget has a tabbed interface, like the jQuery tabs at http://stilbuero.de/jquery/tabs_3/. iframes have been tested, but the widget form factor and the cross-domain policy negate the use of iframes. The widget is very dynamic and will often update the DOM with new content - and because of the cross-domain policy, it looks as though JSONP is necessary. I understand that 'widget.js.erb' needs to create the widget layout, reference a stylesheet, render the tabs, etc. - but once a tab is clicked, how does the widget request the content from the Rails app and render it in the DOM?
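    A rough sketch of the JSONP round trip being asked about (hypothetical URL, element ids and JSON shape): jQuery turns a "callback=?" request into a script tag, which is what lets the widget pull new tab content across domains and inject it into the DOM:

        // Hypothetical names throughout; the Rails action would render the tab
        // body and return it as JSON, e.g. { "html": "<ul>...</ul>" }.
        function loadTab(tabName) {
          $.getJSON('http://widgets.example.com/widget/' + tabName + '.json?callback=?',
            function (data) {
              $('#widget-tab-content').html(data.html);   // inject the tab body
            });
        }

        $('.widget-tab').click(function () {
          loadTab($(this).data('tab'));   // e.g. <li class="widget-tab" data-tab="news">
        });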

    Read the article

  • General zoom algorithm for drawing program

    - by Steven Sproat
    My GUI toolkit, wxPython, provides some methods for implementing a user zoom factor, but the quality isn't too good. I'm looking for ideas on how to create a zooming feature, which I know is complicated. I have a bitmap representing my canvas, which is drawn to; this is displayed inside a scrolled window. Problems I foresee:

        - performance when zoomed in and panning around the canvas
        - difficulties with "real" coordinates versus zoomed-in coordinates
        - image quality not degrading with the zoom

    Using wxPython's SetUserScale() on its device contexts gives image quality like this - this is with a 1px line at 30% zoom. I'm just wondering what general steps I'll need to take and what challenges I'll encounter. Thanks for any suggestions.
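    A minimal sketch (plain Python, independent of wxPython) of the usual way to keep coordinates and quality under control: store all geometry in "world" coordinates and apply the zoom/pan transform only when drawing, converting mouse positions back with the inverse transform so the stored data never degrades:

        class Viewport:
            """Maps between world coordinates (stored) and screen coordinates (drawn)."""
            def __init__(self, zoom=1.0, pan_x=0.0, pan_y=0.0):
                self.zoom, self.pan_x, self.pan_y = zoom, pan_x, pan_y

            def world_to_screen(self, x, y):
                return ((x - self.pan_x) * self.zoom, (y - self.pan_y) * self.zoom)

            def screen_to_world(self, sx, sy):
                return (sx / self.zoom + self.pan_x, sy / self.zoom + self.pan_y)

        # Mouse events arrive in screen coordinates; convert before storing strokes.
        vp = Viewport(zoom=0.3)
        print(vp.screen_to_world(150, 90))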

    Read the article

  • What's the best way to measure and track performance over various calls at runtime?

    - by bitcruncher
    Hello. I'm trying to optimize the performance of my code, but I'm not familiar with Xcode's debuggers, or debuggers in general. Is it possible to track the execution time and frequency of calls being made at runtime? Imagine a chain of events, with some recursive calls, over a fraction of a second. What's the best way to track where the CPU spends most of its time? Many thanks. Edit: maybe this is better asked by saying, how do I use the Xcode debug tools to do a stack trace?

    Read the article

  • PayPal subscription trial extra charge?

    - by DucDigital
    I tried to implement PayPal Pro for my site, which lets the user enter their info and charges $1 for the trial and $10 as the recurring fee. But when I check my merchant account, the $1 and the $10 show up as separate orders within one day (it charges the $10 that I don't want).

        PROFILEID=I%2d0xxxxxx1HCKEF
        &PROFILESTATUS=PendingProfile
        &TRANSACTIONID=0NP43842KS810000T
        &TIMESTAMP=2010%2d05%2d16T18%3a56%3a55Z
        &CORRELATIONID=89adac79d0d6
        &ACK=Success
        &VERSION=57%2e0
        &BUILD=1298200
        &METHOD=CreateRecurringPaymentsProfile
        &VERSION=57.0
        &PWD=1274sss7
        &USER=sand_12sdsad7629_biz_api1.dital.com
        &SIGNATURE=IacdATZe5XHmKJs1n2w3uWMRDWyaOGDb
        &PAYMENTACTION=Sale
        &AMT=10
        &CREDITCARDTYPE=Visa
        &ACCT=4804270925925835
        &EXPDATE=052015
        &CVV2=243
        &FIRSTNAME=
        &LASTNAME=
        &STREET=223232323
        &CITY=3232
        &STATE=IA
        &ZIP=5452
        &COUNTRYCODE=US
        &CURRENCYCODE=USD
        &BILLINGPERIOD=Month
        &BILLINGFREQUENCY=1
        &PROFILESTARTDATE=2010-05-6+02%3A56%3A57
        &INITAMT=10
        &FAILEDINITAMTACTION=ContinueOnFailure
        &DESC=Recurring+%2410
        &AUTOBILLAMT=AddToNextBilling
        &PROFILEREFERENCE=Anonymous
        &TRIALBILLINGPERIOD=Day
        &TRIALBILLINGFREQUENCY=5
        &TRIALAMT=1
        &TRIALTOTALBILLINGCYCLES=1
        &SALUTE=Mr.
        &EMAIL=dsads%40dsads.com

    Was there any problem with this query string?

    Read the article

  • Overcoming C limitations for large projects

    - by Francisco Garcia
    One aspect where C shows its age is the encapsulation of code. Many modern languages have classes, namespaces, packages... a much more convenient way to organize code than just a simple "include". Since C is still the main language for many huge projects, how do you overcome its limitations? I suppose that one main factor must be lots of discipline. I would like to know what you do to handle large quantities of C code, and which authors or books you can recommend.
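    A minimal sketch of the most common discipline for this (illustrative names, not from the question): treat each .c file as a module, and expose only an opaque type and functions in the header so callers never see the private layout:

        /* stack.h -- the public interface; the struct definition stays hidden. */
        typedef struct Stack Stack;            /* opaque type */
        Stack *stack_create(void);
        void   stack_push(Stack *s, int v);
        void   stack_destroy(Stack *s);

        /* stack.c -- the private layout and the implementation live here only. */
        #include <stdlib.h>
        struct Stack { int data[64]; int top; };             /* invisible to callers */
        Stack *stack_create(void)            { return calloc(1, sizeof(Stack)); }
        void   stack_push(Stack *s, int v)   { s->data[s->top++] = v; }  /* no bounds check in this sketch */
        void   stack_destroy(Stack *s)       { free(s); }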

    Read the article

  • What is the best Java numerical method package?

    - by Bob Cross
    I am looking for a Java-based numerical method package that provides functionality including:

        - solving systems of equations using different numerical analysis algorithms
        - matrix methods (e.g., inversion)
        - spline approximations
        - probability distributions and statistical methods

    In this case, "best" is defined as a package with a mature and usable API, solid performance and numerical accuracy. Edit: derick van brought up a good point in that cost is a factor. I am heavily biased in favor of free packages, but others may have a different emphasis.

    Read the article

  • Finding the Formula for a Curve

    - by Mystagogue
    Is there a program that will take "response curve" values from me and provide a formula that approximates the response curve? It would be cool if such a program would take a numeric "percent correct" (perhaps with a standard deviation), so that it returns simplified formulas when laxity is permissible, and more precise (viz. complex) formulas when the curve needs to be approximated closely. My interest is to play with the response curve values and the "laxity" factor until such a tool spits out a curve-fit formula simple enough that I know it will be high-performance during machine computations.
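    A minimal sketch of the underlying idea such tools implement (Python/NumPy assumed; the test function and threshold are illustrative): fit polynomials of increasing degree by least squares and stop at the first degree whose residual error is within the allowed "laxity":

        import numpy as np

        def fit_curve(xs, ys, tolerance=0.05, max_degree=8):
            """Return (coefficients, degree) of the simplest acceptable polynomial fit."""
            ys = np.asarray(ys, dtype=float)
            for degree in range(1, max_degree + 1):
                coeffs = np.polyfit(xs, ys, degree)          # least-squares fit
                resid = ys - np.polyval(coeffs, xs)
                rel_err = np.sqrt(np.mean(resid**2)) / (np.ptp(ys) or 1.0)
                if rel_err <= tolerance:                     # "laxity" satisfied
                    return coeffs, degree
            return coeffs, degree                            # best effort at max_degree

        xs = np.linspace(0, 10, 50)
        coeffs, deg = fit_curve(xs, np.tanh(xs - 5), tolerance=0.02)
        print(deg, coeffs)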

    Read the article

  • What are the reasons to use SQL Server instead of MySQL with a complex .Net project?

    - by cdeszaq
    We currently have a 10-year-old, nasty, spaghetti-code-style SQL Server database that we are soon looking to pretty much re-write from scratch as part of a re-write of a large web application. (The existing application will serve as the functional requirements for the next incarnation of the app.) The new version will be developed in .Net, so a large portion of the application stack will be based on Microsoft technologies (Visual Studio will be used and IIS will be the application server). One of the developers on the project has raised the possibility of switching to MySQL instead of SQL Server, to save on cost both for the DB server licence and for the tools to design and manipulate the DB (such as the wonderfully free MySQL Workbench). What are the various pros and cons of using SQL Server vs. MySQL as the database for a complex .Net project? Price is one factor we have identified, both in terms of the DB server licence and the tools to manipulate the DB, but what other factors come into play?

    Read the article

  • MySQL triggers cannot update rows in the same table the trigger is assigned to. Suggested workaround?

    - by Cory House
    MySQL doesn't currently support updating rows in the same table the trigger is assigned to, since the call could become recursive. Does anyone have suggestions for a good workaround/alternative? Right now my plan is to call a stored procedure that performs the logic I really wanted in a trigger, but I'd love to hear how others have gotten around this limitation. Edit: a little more background, as requested. I have a table that stores product attribute assignments. When a new parent product record is inserted, I'd like the trigger to perform a corresponding insert in the same table for each child record. This denormalization is necessary for performance. MySQL doesn't support this and throws:

        Can't update table 'mytable' in stored function/trigger because it is
        already used by statement which invoked this stored function/trigger.

    A long discussion of the issue on the MySQL forums basically led to: use a stored proc, which is what I went with for now. Thanks in advance!
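    A minimal sketch of the stored-procedure route mentioned above (illustrative table and column names, not the poster's schema): the procedure performs the parent insert and the denormalized per-child inserts in one place, which sidesteps the trigger restriction entirely:

        -- Hypothetical schema: product(product_id, parent_id), product_attr(product_id, attr_id).
        DELIMITER //
        CREATE PROCEDURE insert_product_attr(IN p_parent_id INT, IN p_attr_id INT)
        BEGIN
          -- the row the trigger would have reacted to
          INSERT INTO product_attr (product_id, attr_id)
          VALUES (p_parent_id, p_attr_id);

          -- the denormalized copies the trigger was supposed to create
          INSERT INTO product_attr (product_id, attr_id)
          SELECT c.product_id, p_attr_id
          FROM   product c
          WHERE  c.parent_id = p_parent_id;
        END //
        DELIMITER ;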

    Read the article

  • What application domains are CPU bound and will tend to benefit from multi-core technologies?

    - by Glomek
    I hear a lot of people talking about the revolution that is coming in programming due to multi-core processors and parallelism, but I can't shake the feeling that for most of us, CPU cycles aren't the bottleneck. Pretty much all of my programs have been I/O bound in one way or another (database, filesystem, network, user interaction, etc.) for a very long time. Now I can think of a few areas where CPU cycles are a limiting factor, like code breaking, graphics, sound, some forms of simulation (weather, physics, etc.), and some forms of mathematical research, but they all seem like fairly specialized application domains. My general impression is that most programs are still I/O bound and that for most of our industry CPUs have been plenty fast for quite a while now. Am I off my rocker? What other application domains are CPU bound today? Do any of them include a large portion of the programming population? In essence, I'm wondering whether the multi-core CPUs will impact very many of us, and if so, how?

    Read the article

  • Compiler error: Variable or field declared void [closed]

    - by ?? ?
    I get some errors when I try to compile this; could someone please tell me the mistakes? Thank you!

        [error:
         C:\Users\Ethan\Desktop\Untitled1.cpp In function `int main()':
         25 C:\Users\Ethan\Desktop\Untitled1.cpp variable or field `findfactors' declared void
         25 C:\Users\Ethan\Desktop\Untitled1.cpp initializer expression list treated as compound expression]

        #include<iostream>
        #include<cmath>
        using namespace std;

        void prompt(int&, int&, int&);
        int gcd(int , int , int );//3 input, 3 output
        void findfactors(int , int , int, int, int&, int&);//3 input, 2 output
        void display(int, int, int, int, int);//5 inputs

        int main()
        {
            int a, b, c;   //The coefficients of the quadratic polynomial
            int ag, bg, cg;//value of a, b, c after factor out gcd
            int f1, f2;    //The two factors of a*c which add to be b
            int g;         //The gcd of a, b, c
            prompt(a, b, c);//Call the prompt function
            g=gcd(a, b, c);//Calculation of g
            void findfactors(a, b, c, f1, f2);//Call findFactors on factored polynomial
            display(g, f1, f2, a, c);//Call display function to display the factored polynomial
            system("PAUSE");
            return 0;
        }

        void prompt(int& num1, int& num2, int& num3) //gets 3 ints from the user
        {
            cout << "This program factors polynomials of the form ax^2+bx+c."<<endl;
            while(num1==0)
            {
                cout << "Enter a value for a: ";
                cin >> num1;
                if(num1==0)
                {
                    cout<< "a must be non-zero."<<endl;
                }
            }
            while(num2==0 && num3==0)
            {
                cout << "Enter a value for b: ";
                cin >> num2;
                cout << "Enter a value for c: ";
                cin >> num3;
                if(num2==0 && num3==0)
                {
                    cout<< "b and c cannot both be 0."<<endl;
                }
            }
        }

        int gcd(int num1, int num2, int num3)
        {
            int k=2, gcd=1;
            while (k<=num1 && k<=num2 && k<=num3)
            {
                if (num1%k==0 && num2%k==0 && num3%k==0)
                    gcd=k;
                k++;
            }
            return gcd;
        }

        void findFactors(int Ag, int Bg, int Cg,int& F1, int& F2)
        {
            int y=Ag*Cg;
            int z=sqrt(abs(y));
            for(int i=-z; i<=z; i++) //from -sqrt(|y|) to sqrt(|y|)
            {
                if(i==0)i++; //skips 0
                if(y%i==0) //if i is a factor of y
                {
                    if(i+y/i==Bg) //if i and its partner add to be b
                        F1=i, F2=y/i;
                    else
                        F1=0, F2=0;
                }
            }
        }

        void display(int G, int factor1, int factor2, int A, int C)
        {
            int k=2, gcd1=1;
            while (k<=A && k<=factor1)
            {
                if (A%k==0 && factor1%k==0)
                    gcd1=k;
                k++;
            }
            int t=2, gcd2=1;
            while (t<=factor2 && t<=C)
            {
                if (C%t==0 && factor2%t==0)
                    gcd2=t;
                t++;
            }
            cout<<showpos<<G<<"*("<<gcd1<<"x"<<gcd2<<")("<<A/gcd1<<"x"<<C/gcd2<<")"<<endl;
        }
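    A hedged reading of the error (a sketch of the fix, not the poster's final code): writing `void findfactors(a, b, c, f1, f2);` inside main() is parsed as a declaration rather than a call, which is what produces "variable or field declared void"; on top of that, the prototype findfactors takes six parameters while the definition is named findFactors and takes five. Making the three agree looks roughly like this:

        // Prototype, definition and call need one name and one signature.
        void findFactors(int a, int b, int c, int& f1, int& f2);   // matches the definition

        // ... and inside main(), a call has no return type in front of it:
        //     findFactors(a, b, c, f1, f2);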

    Read the article

  • Unix: millionth number in the series 2 3 4 6 9 13 19 28 42 63 ... ?

    - by HH
    It takes about a minute to reach term 3000 on my computer, but I need to know the millionth number in the series. The definition is recursive, so I cannot see any shortcut except to calculate everything before the millionth number. How can you quickly calculate the millionth number in the series?

    Series definition: n_{i+1} = floor(3/2 * n_i), with n_0 = 2. Interestingly, only one site lists the series according to Google: this one.

    Too-slow Bash code:

        #!/bin/bash
        function serie {
            n=$( echo "3/2*$n" | bc -l | tr '\n' ' ' | sed -e 's@\\@@g' -e 's@ @@g' );
            # bc gives \ at very large numbers, sed-tr for it
            n=$( echo $n/1 | bc )   #DUMMY FLOOR func
        }

        n=2
        nth=1
        while [ true ]; #$nth -lt 500 ];
        do
            serie $n   # n gets its new value in the function through the global variable
            echo $nth $n
            nth=$( echo $nth + 1 | bc )   #n++
        done
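    A sketch of one way around the speed problem (a different tool than the Bash loop, assuming Python is available): Python integers have arbitrary precision, so the recurrence can be applied a million times with exact integer arithmetic and no bc subprocess per step. It is still a million big-integer multiplications, so it is not instant, but it avoids the per-step shell overhead entirely:

        n = 2
        for _ in range(1_000_000):
            n = 3 * n // 2            # floor(3/2 * n), exactly, no floating point
        # The millionth term is huge: roughly 10^6 * log2(1.5), i.e. on the
        # order of 585,000 bits, so print its size rather than the number.
        print(n.bit_length(), "bits")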

    Read the article

  • How many instructions to access a pointer in C?

    - by Derek
    Hi all, I am trying to figure out how many clock cycles or total instructions it takes to access a pointer in C. I don't think I know how to figure it out for, for example, p->x = d->a + f->b. I would assume two loads per pointer, just guessing that there would be a load for the pointer and a load for the value. So in this operation, the pointer resolution would be a much larger factor than the actual addition, as far as trying to speed this code up goes, right? This may depend on the compiler and the architecture, but am I on the right track? I have seen some code where each value used in, say, 3 additions came from a f2->sum = p1->p2->p3->x + p1->p2->p3->a + p1->p2->p3->m type of structure, and I am trying to determine how bad this is.
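    A minimal sketch (hypothetical struct names, not from the question) of the usual remedy for long pointer chains: each `->` is an extra dependent load, so hoisting the common sub-chain into a local pays for those loads once and lets the compiler keep the pointer in a register:

        /* Illustrative types only. */
        struct P3 { int x, a, m; };
        struct P2 { struct P3 *p3; };
        struct P1 { struct P2 *p2; };
        struct F  { int sum; };

        void add_fields(struct F *f2, struct P1 *p1)
        {
            const struct P3 *q = p1->p2->p3;   /* two dependent loads, done once */
            f2->sum = q->x + q->a + q->m;      /* three loads plus two adds      */
        }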

    Read the article

  • Multi-Core Programming. Boost's MPI, OpenMP, TBB, or something else?

    - by unknownthreat
    Hello, I am a total novice at multi-core programming, but I do know how to program in C++. Now I am looking around for a multi-core programming library. I just want to give it a try, just for fun, and right now I have found three APIs, but I am not sure which one I should stick with. Right now, I see Boost's MPI, OpenMP and TBB. For anyone who has experience with any of these three APIs (or any other API), could you please tell me the differences between them? Are there any factors to consider, like AMD or Intel architecture?

    Read the article

  • Creating a new column for date info with a specific date format

    - by Ayan
    Dear all, I am working with a file which has a few years of data, and I am trying to create an additional column that reads the year and month info from the date column (e.g. 01/01/1997 12:00) and creates a new column with month and year together (e.g. Jan-97). I am not sure how to proceed, but what I am trying to produce is the column named "new_date" in the following picture. My sample data:

        > dput(df)
        structure(list(date = structure(c(1L, 4L, 7L, 2L, 5L, 8L, 3L, 6L, 9L),
            .Label = c("01/01/1997 12:00", "01/01/1998 15:00", "01/01/1999 18:00",
                       "01/02/1997 13:00", "01/02/1998 16:00", "01/02/1999 19:00",
                       "01/03/1997 14:00", "01/03/1998 17:00", "01/03/1999 19:00"),
            class = "factor"),
            value = c(29L, 31L, 42L, 42L, 52L, 61L, 57L, 55L, 56L)),
            .Names = c("date", "value"), row.names = c(NA, -9L), class = "data.frame")

    I would really appreciate it if you could advise me on how to proceed. Many thanks, Ayan
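    A minimal sketch using the df from the question: parse the factor as a date-time, then format it with an abbreviated month and two-digit year (assuming an English locale for the month name):

        df$new_date <- format(as.POSIXct(as.character(df$date),
                                         format = "%m/%d/%Y %H:%M"),
                              "%b-%y")
        # e.g. "01/01/1997 12:00" becomes "Jan-97"
        head(df)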

    Read the article

  • A more Ruby way of gsub from an array

    - by aharon
    My goal is to have a method x such that x("? world. what ? you say...", ['hello', 'do']) returns "hello world. what do you say...". I have something that works, but it seems far from the "Ruby way":

        def x(str, arr, rep='?')
          i = 0
          str.gsub(rep) { i += 1; arr[i-1] }
        end

    Is there a more idiomatic way of doing this? (Let me note that speed is the most important factor, of course.)
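    One sketch of a more idiomatic version (same behaviour, assuming the array always supplies enough replacements): an Enumerator hands out the substitutes one by one, so there is no index bookkeeping:

        def x(str, arr, rep = '?')
          values = arr.each              # external enumerator over the replacements
          str.gsub(rep) { values.next }  # raises StopIteration if arr runs short
        end

        puts x("? world. what ? you say...", ['hello', 'do'])
        # => "hello world. what do you say..."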

    Read the article

  • Joining links together in a dictionary

    - by ptabatt
    Hi guys, I'm a student here, new to Python and programming in general. I have a dictionary, links, which maps a tuple to a number. How can I join the second URL in the second tuple together with the urljoin() function? What I'm trying to do is get complete links so I can run a recursive function search(), which takes a complete URL as an argument, finds all the links in each URL and stores the number of links mapped to the links in a database. So far, I have:

        links
        {('href', 'http://reed.cs.depaul.edu/lperkovic/csc242/test2.html'): 1,
         ('href', 'test3.html'): 1}

    I want http://reed.cs.depaul.edu/lperkovic/csc242/test3.html...
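    A minimal sketch of the urljoin() step (Python 3 import shown; in Python 2 it lives in urlparse): resolve each stored href against the page it was found on, so relative links become absolute while already-absolute ones pass through unchanged:

        from urllib.parse import urljoin   # Python 2: from urlparse import urljoin

        base = 'http://reed.cs.depaul.edu/lperkovic/csc242/test2.html'
        links = {('href', base): 1, ('href', 'test3.html'): 1}

        absolute = {(attr, urljoin(base, url)): count
                    for (attr, url), count in links.items()}

        print(absolute)
        # 'test3.html' resolves to
        # http://reed.cs.depaul.edu/lperkovic/csc242/test3.html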

    Read the article

  • How to save objects using Multi-Threading in Core Data?

    - by Konstantin
    I'm getting some data from a web service and saving it in Core Data. The workflow looks like this:

        1. get the XML feed
        2. go over every item in that feed, creating a new ManagedObject for every feed item
        3. download some big binary data for every item and save it into the ManagedObject
        4. call [managedObjectContext save:]

    Now, the problem is of course the performance - everything runs on the main thread. I'd like to refactor as much as possible onto another thread, but I'm not sure where I should start. Is it OK to put everything (1-4) on a separate thread?

    Read the article

  • Ant - using vssadd to add multiple files

    - by mamendex
    Hi, I'm trying to use the vssadd task to add a tree of source files to a recently created project in VSS, but it seems to be adding only the folder tree - all files are missing.

        <vsscp vsspath="$/DEV/APL_${version}" ssdir="${vssapl}" serverPath="${vsssvr}"/>

    The vssadd task displays the names of the folders it is creating:

        ...
        (vssadd) $/DEV/APL_0.0.10c/src/domain:
        (vssadd) $/DEV/APL_0.0.10c/src/mbeans:
        (vssadd) $/DEV/APL_0.0.10c/src/service:
        ...

    The script runs successfully, but the files never get into the repository. Trying to use wildcards is no good; the task says it found no matching files, and ss returns with a code of 100:

        <vssadd ssdir="${vssapl}" localPath="C:\Workspace\APL_Build*.*"
                recursive="true" serverPath="${vsssvr}"
                comment="Build ${versao} at ${to.timestamp}"/>

    I've noticed that vssadd does not accept a fileset tag either, so I'm kind of lost here. Any tips? Thanks.

    Read the article

  • How to analyse contents of binary serialization stream?

    - by Tao
    I'm using binary serialization (BinaryFormatter) as a temporary mechanism to store state information in a file for a relatively complex (game) object structure; the files are coming out much larger than I expect, and my data structure includes recursive references - so I'm wondering whether the BinaryFormatter is actually storing multiple copies of the same objects, or whether my basic "number of objects and values I should have" arithmetic is way off base, or where else the excessive size is coming from. Searching on Stack Overflow I was able to find the specification for Microsoft's binary remoting format: http://msdn.microsoft.com/en-us/library/cc236844(PROT.10).aspx What I can't find is any existing viewer that lets you "peek" into the contents of a BinaryFormatter output file - get object counts and total bytes for different object types in the file, etc.; I feel like this must be my "google-fu" failing me (what little I have) - can anyone help? This must have been done before, right?

    Read the article

  • Problem with Postgres FOR LOOP

    - by user341831
    Hi all, I have a problem with a Postgres function:

        CREATE OR REPLACE FUNCTION linkedRepoObjects(id bigint) RETURNS int AS $$
        DECLARE catNumber int DEFAULT 0;
        DECLARE cat RECORD;
        BEGIN
            WITH RECURSIVE children(categoryid, category_fk) AS (
                SELECT categoryid, category_fk FROM b2m.category_tab WHERE categoryid = 1
                UNION ALL
                SELECT c1.categoryid, c1.category_fk FROM b2m.category_tab c1, children
                WHERE children.categoryid = c1.category_fk
            )
            FOR cat IN SELECT * FROM children LOOP
                IF EXISTS (SELECT 1 FROM b2m.repoobject_tab WHERE category_fk = cat.categoryid) THEN
                    catNumber = catNumber + 1
                END IF;
            END LOOP;
            RETURN catNumber;
        END;
        $$ LANGUAGE 'plpgsql';

    I get this error (FEHLER: Syntaxfehler bei »FOR« is German for "ERROR: syntax error at »FOR«"):

        FEHLER: Syntaxfehler bei »FOR«
        LINE 1: ...dren WHERE children.categoryid = c1.category_fk ) FOR $2 I...

    I'm a newbie to Postgres. Please help. Thanks in advance.
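    A sketch of the usual fix (untested here, and it keeps the question's hard-coded categoryid = 1): in PL/pgSQL the WITH RECURSIVE clause has to be part of the query that feeds the FOR loop, and assignments use := with a terminating semicolon:

        CREATE OR REPLACE FUNCTION linkedRepoObjects(id bigint) RETURNS int AS $$
        DECLARE
            catNumber int DEFAULT 0;
            cat RECORD;
        BEGIN
            FOR cat IN
                WITH RECURSIVE children(categoryid, category_fk) AS (
                    SELECT categoryid, category_fk
                    FROM b2m.category_tab WHERE categoryid = 1
                    UNION ALL
                    SELECT c1.categoryid, c1.category_fk
                    FROM b2m.category_tab c1, children
                    WHERE children.categoryid = c1.category_fk
                )
                SELECT * FROM children
            LOOP
                IF EXISTS (SELECT 1 FROM b2m.repoobject_tab
                           WHERE category_fk = cat.categoryid) THEN
                    catNumber := catNumber + 1;   -- := and ';' are required here
                END IF;
            END LOOP;
            RETURN catNumber;
        END;
        $$ LANGUAGE plpgsql;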

    Read the article

  • R error message about variable lengths

    - by Abraham
    I ran the following code in order to recode the variable. Unfortunately, when I move on to run a logit model (using the Zelig package), I get an error message saying that the variable lengths differ for this variable.

        ## Independent Variable - Partisanship (ANES 2004)
        data04$V043114
        part <- data04$V043114
        attributes(part)
        summary(part)
        partb < part
        partb[part %in% levels(part)[4]] <- NA
        partb[part %in% levels(part)[5]] <- NA
        partb[part %in% levels(part)[6]] <- NA
        partb[part %in% levels(part)[7]] <- NA
        partb <- factor(partb)
        attributes(partb)
        summary(partb)
        table(partb)
        table(part, partb)
        cbind(part, partb)
        partisan041 <- partb
        partisan042 <- as.numeric(partb)
        summary(partisan041)
        summary(partisan042)

        ## Regression Model - ANES 2004 ##
        anes04one <- zelig(trade041a ~ age042 + education042 + personal042 + economy042 +
                           partisan042 + employment042 + union042 + home042 + market042 +
                           race042 + income042 + gender042,
                           model="logit", data=data04)
        summary(anes04one)

        #Error in model.frame.default(formula = trade041a ~ age042 + education042 + :
        #  variable lengths differ (found for 'partisan042')

    Read the article
