Search Results

Search found 1354 results on 55 pages for 'compute scalar'.


  • a nicer way to create structs in a loop

    - by sandra
    Hi guys, I haven't coded in C++ in ages, and recently I've been working on something involving structs, like this: typedef struct{ int x; int y; } Point; Then, in a loop, I'm trying to create new structs and put pointers to them in a list. Point* p; int i, j; while (condition){ // compute values for i and j with some function... p = new Point; *p = {i, j}; //initialize my struct. list.append(p); //append this pointer to my list. } Now, my question is: is it possible to simplify this? I mean declaring the pointer variable p outside of the loop and calling p = new Point inside the loop. Isn't there a better/nicer syntax for this?
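
    A minimal sketch of one common simplification, assuming C++11 (the loop bounds, the i/j computation and the container are placeholders standing in for the ones in the question): allocate and initialize in one step with new Point{i, j}, or drop the pointers entirely and store values.

        #include <vector>

        struct Point { int x; int y; };

        int main() {
            std::vector<Point> points;            // value semantics: no manual new/delete needed
            for (int k = 0; k < 10; ++k) {
                int i = k, j = 2 * k;             // stand-in for the real computation of i and j
                points.push_back(Point{i, j});    // construct and append in one step
                // if pointers are really required: list.append(new Point{i, j});
                // (list/append as in the question; with std::vector it would be push_back)
            }
        }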

    Read the article

  • Python 3: Most efficient way to create a [func(i) for i in range(N)] list comprehension

    - by mejiwa
    Say I have a function func(i) that creates an object for an integer i, and N is some nonnegative integer. Then what's the fastest way to create a list (not a range) equal to this list mylist = [func(i) for i in range(N)] without resorting to advanced methods like creating a function in C? My main concern with the above list comprehension is that I'm not sure if Python knows beforehand the length of range(N) to preallocate mylist, and therefore has to incrementally reallocate the list. Is that the case, or is Python clever enough to allocate mylist to length N first and then compute its elements? If not, what's the best way to create mylist? Maybe this? mylist = [None]*N for i in range(N): mylist[i] = func(i)
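
    The honest way to settle this is to time the alternatives; a minimal sketch, with func and N as stand-ins for the real ones, comparing the comprehension, a preallocated list, and list(map(...)):

        from timeit import timeit

        def func(i):                 # placeholder for the real object-building function
            return i * i

        N = 100_000

        def comprehension():
            return [func(i) for i in range(N)]

        def preallocated():
            mylist = [None] * N      # allocate the full list up front
            for i in range(N):
                mylist[i] = func(i)
            return mylist

        def mapped():
            return list(map(func, range(N)))   # often competitive: the loop itself runs in C

        for f in (comprehension, preallocated, mapped):
            print(f.__name__, timeit(f, number=20))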

    Read the article

  • Keeping connection to APNs open on App Engine using Modules in Go

    - by user3727820
    I'm trying to implement iOS push notifications for a message board app I've written (notifications for new messages, etc.) but have no real idea where to start. A lot of the current documentation seems to be out of date with regard to keeping persistent TLS connections open to APNs from App Engine, and links to articles about deprecated backends. I'm using the Go runtime and just keep getting stuck. For instance, creating the socket connection to APNs requires a Context, which can only be obtained from an HTTP request, but architecturally this doesn't seem to make a lot of sense because ideally the socket remains open regardless. Are there any clearer guides around that I'm missing, or is it a better idea right now to set up a separate VPS or compute instance to handle it?

    Read the article

  • Copy of Access mdb database being updated by live database

    - by James
    I'm trying to compute statistics for data held in an Access .mdb database. In order to avoid interfering with the live database, I'm working from a copy which I made by simply using copy-paste in Windows Explorer. The copy resides in the same directory, but with a different name. I'm using R and RODBC to connect to the copy of the file. The strange thing is that new data that is being updated on the original live database is appearing in my queries. This is despite the file timestamps of the copy not changing at all. It is also causing some slowdown in the live database. My understanding is that the .mdb files are standalone, or is this not the case? Should I have copied the database in a different way?

    Read the article

  • jQuery/JavaScript - Passing a function as a variable.

    - by Josh
    I was just curious whether I could pass a function as a variable. For example, I have a call $('#validate').makeFloat({x:671,y:70,limitY:700}); and I would like to do something like this: $('#validate').makeFloat({x:function(){ return $("#tabs").offset().left+$("#tabs").width();},y:70,limitY:700}); This does not work, but ideally, every time the variable is accessed it would compute a new value, so if the window were resized it would automatically adjust, as opposed to a passed-in variable being static. I realize I can implement this directly inside the function/widget, but I was wondering if there is some way to do something like the above.
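
    If you can touch makeFloat itself (or wrap it), the usual pattern is to accept either a number or a function for the option and evaluate it each time the position is recalculated; a rough JavaScript sketch, where opts and the helper name are hypothetical, not part of any existing plugin API:

        // Inside the widget, resolve the option every time it is needed:
        function resolveOption(value) {
            return (typeof value === 'function') ? value() : value;  // call it if it's a function
        }

        // wherever the widget currently reads opts.x:
        var x = resolveOption(opts.x);   // re-evaluated on every use, so window resizes are picked up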

    Read the article

  • Shortest command to calculate the sum of a column of output on Unix?

    - by Andrew
    I'm sure there is a quick and easy way to calculate the sum of a column of values on Unix systems (using something like awk or xargs perhaps), but writing a shell script to parse the rows line by line is the only thing that comes to mind at the moment. For example, what's the simplest way to modify the command below to compute and display the total for the SEGSZ column (70300)? ipcs -mb | head -6 IPC status from /dev/kmem as of Mon Nov 17 08:58:17 2008 T ID KEY MODE OWNER GROUP SEGSZ Shared Memory: m 0 0x411c322e --rw-rw-rw- root root 348 m 1 0x4e0c0002 --rw-rw-rw- root root 61760 m 2 0x412013f5 --rw-rw-rw- root root 8192
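
    For reference, the usual tool for this is awk; a sketch that sums the SEGSZ column of the shared-memory rows shown above (the /^m/ filter and the $7 column number are assumptions based on that output format, so adjust them for your ipcs flavor):

        ipcs -mb | awk '/^m/ { total += $7 } END { print total }'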

    Read the article

  • How to speed up this kind of for-loop?

    - by wok
    I would like to compute the maximum of translated images along the direction of a given axis. I know about ordfilt2, however I would like to avoid using the Image Processing Toolbox. So here is the code I have so far: imInput = imread('tire.tif'); n = 10; imMax = imInput(:, n:end); for i = 1:(n-1) imMax = max(imMax, imInput(:, i:end-(n-i))); end Is it possible to avoid using a for-loop in order to speed the computation up, and, if so, how?

    Read the article

  • measuring uncertainty in MATLAB's svmclassify

    - by Mark
    I'm doing contextual object recognition and I need a prior for my observations, e.g. this space was labeled "dog"; what's the probability that it was labeled correctly? Do you know if MATLAB's svmclassify has an argument to return this level of certainty with its classification? If not, MATLAB's SVM struct has the following fields in it: SVM = SupportVectors: [11x124 single] Alpha: [11x1 double] Bias: 0.0915 KernelFunction: @linear_kernel KernelFunctionArgs: {} GroupNames: {11x1 cell} SupportVectorIndices: [11x1 double] ScaleData: [1x1 struct] FigureHandles: [] Can you think of any way to compute a good measure of uncertainty from these? (Which support vector to use?) Papers/articles explaining uncertainty in SVMs are welcome. More in-depth explanations of MATLAB's SVM are also welcome. If you can't do it this way, can you think of any other libraries with SVMs that have this measure of uncertainty?

    Read the article

  • Ajax, not sending querystring data

    - by Tom Gullen
    var http = false; // Creates xmlhttp object if (navigator.appName == "Microsoft Internet Explorer") { http = new ActiveXObject("Microsoft.XMLHTTP"); } else { http = new XMLHttpRequest(); } http.onreadystatechange = function() { if (http.readyState == 4) { alert(http.responseText); } } // Functions to calculate optimum layout etc. function compute() { var statusSpan = document.getElementById("cwStatus"); document.getElementById("fader").style.display = ""; document.getElementById("computingWait").style.display = ""; statusSpan.innerHTML = "<b>Status:</b> Realigning sattelites" http.open("GET", "alg.aspx?cr=8&cc=7&sq=3,3", true); http.send(null); } This code sort of works, but the querystring data isn't being passed through. It keeps returning an ASPX error page which only happens when there is no querystring data. Thanks for any help

    Read the article

  • Best Design for creating Historic Reports on GAE

    - by charming30
    My app requires daily reports based on various user activities. My current design does not sum the daily totals in the database, which means I must compute them every time. For example, a report that shows the top 100 users based on the number of submissions they have made on a given day. For such a report, if I have 50,000 users, what is the best way to create a daily report? How would I create monthly and yearly reports with such data? If this is not a good design, then how should I deal with such a design decision when the metrics of the report are not clear during DB design, and by the time they are clear we already have a huge amount of data with limited parameters (fields)? Please advise.

    Read the article

  • move data from one table to another, postgresql edition

    - by IggShaman
    Hi all, I'd like to move some data from one table to another (with a possibly different schema). The straightforward solution that comes to mind is: start a transaction with serializable isolation level; INSERT INTO dest_table SELECT data FROM orig_table,other-tables WHERE <condition>; DELETE FROM orig_table USING other-tables WHERE <condition>; COMMIT; Now what if the amount of data is rather big, and the <condition> is expensive to compute? In PostgreSQL, a RULE or a stored procedure can be used to delete data on the fly, evaluating the condition only once. Which solution is better? Are there other options?
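
    On PostgreSQL 9.1 or later, one more option is a writable CTE, which evaluates <condition> once and feeds the deleted rows straight into the insert; a sketch using the table names and placeholders from the question as-is:

        WITH moved AS (
            DELETE FROM orig_table
            USING other-tables
            WHERE <condition>
            RETURNING orig_table.*      -- the rows that were just removed
        )
        INSERT INTO dest_table
        SELECT data FROM moved;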

    Read the article

  • Creating a computer with pencil and paper [closed]

    - by Justian Meyer
    This concept has been brought to my attention before, but many people might know it from this popular comic here, where he uses stones instead of points on paper. The concept is so abstract to me. A fully functioning computer that can manage algorithms, input, output, etc. without electricity? Surely it's difficult to visualize, because my definition of a computer is not exactly what the word sounds like: COMPUTE-ER. Can someone help me bring the concept to light - the understanding, how to make one, etc.? It seems like it'd be an interesting read, although I think I wouldn't be able to make it do anything. Many thanks in advance for responses. I've tried searching Google, but all I got were ways to diagram code and chart ideas. -.Justian.

    Read the article

  • python parallel computing: split keyspace to give each node a range to work on

    - by MatToufoutu
    My question is rather complicated for me to explain, as I'm not really good at maths, but I'll try to be as clear as possible. I'm trying to code a cluster in Python which will generate words given a charset (i.e. with lowercase: aaaa, aaab, aaac, ..., zzzz) and perform various operations on them. I'm trying to work out how to calculate, given the charset and the number of nodes, what range each node should work on (i.e. node1: aaaa-azzz, node2: baaa-czzz, node3: daaa-ezzz, ...). Is it possible to make an algorithm that could compute this, and if so, how could I implement it in Python? I really don't know how to do that, so any help would be much appreciated.
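
    One way to think about it: every word of length L over a charset of size B is a number in base B, so you can split range(B**L) into near-equal slices and convert each slice's endpoints back into words; a minimal sketch (the charset, word length and node count are example values):

        import string

        def index_to_word(idx, charset, length):
            """Spell idx as a word, treating it as a base-len(charset) number."""
            base = len(charset)
            chars = []
            for _ in range(length):
                idx, digit = divmod(idx, base)
                chars.append(charset[digit])
            return ''.join(reversed(chars))

        def node_ranges(charset, length, nodes):
            """Yield (first_word, last_word) for each node's share of the keyspace."""
            total = len(charset) ** length
            per_node = total // nodes
            for n in range(nodes):
                start = n * per_node
                end = total - 1 if n == nodes - 1 else (n + 1) * per_node - 1
                yield index_to_word(start, charset, length), index_to_word(end, charset, length)

        for rng in node_ranges(string.ascii_lowercase, 4, 3):
            print(rng)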

    Read the article

  • C++ converting hexadecimal md5 hash to decimal integer

    - by Zackery
    I'm implementing the ElGamal signature scheme and I need to use the decimal hash value of the message to compute S for signature generation. string hash = md5(message); cout << hash << endl; NTL::ZZ msgHash = strtol(hash.c_str(), NULL, 16); cout << msgHash << endl; There is no built-in integer type large enough to hold the value of a 32-character hexadecimal hash, so I tried the big integer from the NTL library, but it didn't work out because you cannot assign a long integer to the NTL::ZZ type. Is there any good solution to this? I'm doing this with Visual C++ in Visual Studio 2013.
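
    One route that avoids strtol entirely is to build the ZZ one hex digit at a time, relying only on NTL's mixed ZZ/long arithmetic operators; a sketch (the md5 function itself is whatever you are already using, and hexToZZ is just an illustrative name):

        #include <NTL/ZZ.h>
        #include <cctype>
        #include <string>

        // Convert a hex string such as an MD5 digest into an arbitrary-precision integer.
        NTL::ZZ hexToZZ(const std::string& hex) {
            NTL::ZZ value;                         // default-constructed ZZ is 0
            for (char c : hex) {
                long digit = std::isdigit(static_cast<unsigned char>(c))
                                 ? c - '0'
                                 : std::tolower(static_cast<unsigned char>(c)) - 'a' + 10;
                value = value * 16 + digit;        // shift in the next 4 bits
            }
            return value;
        }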

    Read the article

  • Can I use a vertex shader to display a model's normals?

    - by geowar
    I'm currently using a VBO for the texture coordinates, normals and the vertices of a (3DS) model I'm drawing with "glDrawArrays(GL_TRIANGLES, ...);". For debugging I want to (temporarily) show the normals when drawing my model. Do I have to use immediate mode to draw each line from vert to vert+normal -OR- stuff another VBO with vert and vert+normal to draw all the normals… -OR- is there a way for the vertex shader to use the vertex and normal data already passed in when drawing the model to compute the V+N used when drawing the normals?

    Read the article

  • C# regularly return values from a different thread

    - by sdds
    Hello, I'm very new to multithreading and lack experience. I need to compute some data in a different thread so the UI doesn't hang up, and then send the data as it is processed to a table on the main form. So, basically, the user can work with the data that is already computed, while other data is still being processed. What is the best way to achieve this? I would also be very grateful for any examples. Thanks in advance.

    Read the article

  • How to solve recursion relations in mathematica efficiently?

    - by Qiang Li
    I have a recursion to solve: f(m,n) = Sum[f[m - 1, n - 1 - i] + f[m - 3, n - 5 - i], {i, 2, n - 2*m + 2}] + f[m - 1, n - 3] + f[m - 3, n - 7], with f(0,n) = 1 and f(1,n) = n. However, the following Mathematica code is very inefficient: f[m_, n_] := Module[{}, If[m < 0, Return[0];]; If[m == 0, Return[1];]; If[m == 1, Return[n];]; Return[Sum[f[m - 1, n - 1 - i] + f[m - 3, n - 5 - i], {i, 2, n - 2*m + 2}] + f[m - 1, n - 3] + f[m - 3, n - 7]];] It takes unbearably long to compute f[40,20]. Could anyone please suggest an efficient way of doing this? Many thanks!

    Read the article

  • TCP: 30 small packets per second pollutes connection with server

    - by Denis Ermolin
    I'm testing the connection between a Flash client and a cloud server (boost::asio on the server side) over TCP. My connection to the server is already really poor: 120 ms ping on average. I found that when I start to send 2-byte packets (not counting the TCP header) at 30 packets/s, the ping grows to 170-200 ms on average. I think that's really bad, and that my poor connection and cloud provider are the reason for such a high ping without any load. What do you think? (I tested my software and it can handle about 50k packets/s, so the software is not the problem.)

    Read the article

  • Java: empty ArrayLists in a for loop

    - by Patrick
    Hi, I'm reusing the same ArrayLists in a for loop, and inside the loop I use results = new ArrayList<Integer>(); experts = new ArrayList<Integer>(); output = new ArrayList<String>(); .... to create new ones. I guess this is wrong, because I'm allocating new memory. Is this correct? If so, how can I empty them instead? Added: another example. I'm creating new local variables each time I call this method. Is this good practice? I mean creating new precision, relevantFound, etc.? Or should I declare them in my class, outside the method, so as not to allocate more and more memory? public static void computeMAP(ArrayList results, ArrayList experts) { //compute MAP double precision = 0; int relevantFound = 0; double sumprecision = 0; Thanks
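
    For the first part, a minimal sketch of the two options: clear() empties the same list object (keeping its backing array), while new ArrayList<>() allocates a fresh one and leaves the old contents to the garbage collector. Both are correct; clear() just skips the extra allocation.

        import java.util.ArrayList;
        import java.util.List;

        public class ReuseListDemo {
            public static void main(String[] args) {
                List<Integer> results = new ArrayList<>();
                for (int run = 0; run < 3; run++) {
                    results.clear();                 // reuse the existing list and its capacity
                    // results = new ArrayList<>();  // alternative: allocate a new list each pass
                    for (int i = 0; i < 5; i++) {
                        results.add(run * 10 + i);
                    }
                    System.out.println(results);
                }
            }
        }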

    Read the article

  • Population count of rightmost n integers

    - by Jason Baker
    I'm implementing Bagwell's Ideal Hash Trie in Haskell. To find an element in a sub-trie, he says to do the following: Finding the arc for a symbol s requires finding its corresponding bit in the bit map and then counting the one bits below it in the map to compute an index into the ordered sub-trie. What is the best way to do this? It sounds like the most straightforward way of doing this is to select the bits below that bit and do a population count on the resulting number. Is there a faster or better way to do this?
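
    The select-and-count approach is a one-liner with Data.Bits; a minimal sketch, where the 32-bit map width and the example values are assumptions, not anything from Bagwell's paper:

        import Data.Bits (bit, popCount, (.&.))
        import Data.Word (Word32)

        -- Index into the compressed sub-trie: count the one bits strictly below bit i.
        sparseIndex :: Word32 -> Int -> Int
        sparseIndex bitmap i = popCount (bitmap .&. (bit i - 1))

        main :: IO ()
        main = print (sparseIndex 22 4)   -- 22 is 10110 in binary: two bits set below bit 4, so prints 2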

    Read the article

  • SQL Server. Stored procedure to get the biweekly periods

    - by Yada
    I'm currently trying to write a stored procedure that can compute the biweekly periods when a date is passed in as a parameter. The business logic: the first Monday of the year is the first day of the first biweekly period. For example, in 2010: period period_start period_end 1 2010-01-04 2010-01-17 2 2010-01-18 2010-01-31 3 2010-02-01 2010-02-14 .... 26 2010-12-20 2011-01-02 Passing today's date of 2010-12-31 should return 26, 2010-12-20 and 2011-01-02. I'm not too strong in T-SQL. Any help is appreciated. Thanks.

    Read the article

  • For each level of a factor, aggregate values over all levels except the current one (in R)

    - by Andrey Chetverikov
    For each level of a factor I need to extract values aggregated over all subsets of a data.frame except the current one. For example, there are several subjects doing a reaction time task over several days, and I need to compute the mean reaction time for all subjects and all days, but not including the subject for whom the mean is computed. Currently, I do it like this: library(lme4) ddply(sleepstudy, .(Subject, Days), summarise , avg_rt=mean(sleepstudy[sleepstudy$Subject!=Subject&sleepstudy$Days==Days,"Reaction"]), .progress="text") It works fine for small data sets, but for large ones it can be very slow. Is there a way to do it faster?

    Read the article

  • Is there a way to intersect/diff a std::map and a std::set?

    - by Jack
    I'm wondering if there is a way to intersect or compute the difference between two structures defined as std::set<MyData*> and std::map<MyData*, MyValue> with standard algorithms (like std::set_intersection). The problem is that I need to compute the difference between the set and the key set of the map, but I would like to avoid reallocating a temporary container for the keys (since this is something that is done many times per second with large data structures). Is there a way to obtain a "key view" of the std::map? After all, what I'm looking for is to consider just the keys when doing the set operation, so from an implementation point of view it should be possible, but I haven't been able to find anything.
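
    There is no built-in key view before C++20's std::views::keys, but both containers already iterate their keys in the same sorted order (assuming the default std::less<MyData*> comparator on both), so you can walk them in lockstep and never materialize a temporary key set; a sketch of "set minus map keys" written to an output iterator, with MyData/MyValue as bare placeholders:

        #include <functional>
        #include <map>
        #include <set>

        struct MyData {};
        struct MyValue {};

        // Copy every element of s that is NOT a key of m, without building a key set.
        template <class OutIt>
        void setMinusMapKeys(const std::set<MyData*>& s,
                             const std::map<MyData*, MyValue>& m,
                             OutIt out) {
            std::less<MyData*> lt;                  // the ordering both containers use by default
            auto sit = s.begin();
            auto mit = m.begin();
            while (sit != s.end()) {
                if (mit == m.end() || lt(*sit, mit->first)) {
                    *out++ = *sit++;                // in the set only: part of the difference
                } else if (lt(mit->first, *sit)) {
                    ++mit;                          // in the map only: ignore
                } else {
                    ++sit; ++mit;                   // in both: not part of the difference
                }
            }
        }

    Call it with, say, std::back_inserter of a reusable vector; with C++20 you could instead feed std::views::keys(m) and the set to std::ranges::set_difference.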

    Read the article

  • MPI_Bsend and MPI_Isend. How do they work?

    - by GBBL
    Hi, I was wondering how buffered sends and non-blocking sends work, and whether they introduce a new level of parallelism in my application, possibly by spawning a thread. Imagine that a slave process generates a large amount of data and wants to send it to the master. My idea was to start a buffered or non-blocking send and then immediately begin to compute the next result. Only when I have to send the new data would I check whether I can reuse the buffer. This would introduce a new level of parallelism in my application, between computation and communication. Does anybody know how this is done in MPI? Does MPI spawn a new thread to handle the Bsend or Isend? Thanks.
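
    For the Isend part: MPI itself does not have to spawn a thread; many implementations only make progress inside later MPI calls, which is exactly why you complete the request with MPI_Test/MPI_Wait before reusing the buffer. A minimal double-buffered compute-while-sending sketch in C (the chunk size, tags, iteration count and compute_chunk are placeholders):

        #include <mpi.h>

        #define N 1024

        /* Stand-in for the real work that produces the next chunk of results. */
        static void compute_chunk(double *buf, int n, int step) {
            for (int i = 0; i < n; i++) buf[i] = step + i;
        }

        int main(int argc, char **argv) {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            if (rank != 0) {                            /* slave: overlap compute and send */
                double buf[2][N];
                MPI_Request req = MPI_REQUEST_NULL;
                for (int step = 0; step < 10; step++) {
                    int cur = step % 2;
                    compute_chunk(buf[cur], N, step);   /* fill the buffer whose send already finished */
                    MPI_Wait(&req, MPI_STATUS_IGNORE);  /* the other buffer's send must finish first */
                    MPI_Isend(buf[cur], N, MPI_DOUBLE, 0, step, MPI_COMM_WORLD, &req);
                }
                MPI_Wait(&req, MPI_STATUS_IGNORE);
            } else {                                    /* master: drain the messages */
                double recv[N];
                for (int src = 1; src < size; src++)
                    for (int step = 0; step < 10; step++)
                        MPI_Recv(recv, N, MPI_DOUBLE, src, step, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            }
            MPI_Finalize();
            return 0;
        }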

    Read the article

  • read text files containing binary data as a single matrix in matlab

    - by user1716595
    I have a text file which contains binary data in the following manner: 00000000000000000000000000000000001011111111111111111111111111111111111111111111111111111111110000000000000000000000000000000 00000000000000000000000000000000000000011111111111111111111111111111111111111111111111000111100000000000000000000000000000000 00000000000000000000000000000000000011111111111111111111111111111111111111111111111111111111100000000000000000000000000000000 00000000000000000000000000000000000111111111111111111111111111111111111111111111111111111111100000000000000000000000000000000 00000000000000000000000000000000000011111111111111111111111111111111111111111111111111111111100000000000000000000000000000000 00000000000000000000000000000000000000011111111111111111111111111111111111111111111111111111100000000000000000000000000000000 00000000000000000000000000000000000000011111111111111111111111111111111111111111111111000111110000000000000000000000000000000 00000000000000000000000000000000000000111111111111111111111111111111111111111111111111111111110000000000000000000000000000000 00000000000000000000000000000000000000000000111111111111111111111111111111111111110000000011100000000000000000000000000000000 00000000000000000000000000000000000000011111111111111111111111111111111111111111111111100111110000000000000000000000000000000 00000000000000000000000000000000000111111111111111111111111111111111111111111111111111110111110000000000000000000000000000000 00000000000000000000000000000000001111111111111111111111111111111111111111111111111111111111100000000000000000000000000000000 00000000000000000000000000000000000000001111111111111111111111111111111111111111111111000011100000000000000000000000000000000 00000000000000000000000000000000000000001111111111111111111111111111111111111111111111000011100000000000000000000000000000000 00000000000000000000000000000000000001111111111111111111111111111111111111111111111111111111000000000000000000000000000000000 00000000000000000000000000000000000000011111111111111111111111111111111111111111111110000011100000000000000000000000000000000 00000000000000000000000000000000000000000000011111111111111111111111111111111111100000000011100000000000000000000000000000000 00000000000000000000000000000000000000111111111111111111111111111111111111111111111111110111100000000000000000000000000000000 Plz note that each 1 or 0 is independent i.e the values are not decimal.I need to find the column wise sum of the file.There are 125 columns in all (here it is jumping onto the next line) and there are 840946 rows. I have tried textread,fscanf and a few other matlab commands but the result is that they all read each row in decimal format and create a 840946*1 array.I want to create a 840946*125 array to compute a column wise sum. Kindly help, Thanks!

    Read the article
