Search Results

Search found 1058 results on 43 pages for 'compute'.


  • adding array values

    - by christian
    Array ( [0] => Array ( [datas] => Array ( [name] => lorem [id] => 1 [type] => t1 [due_type] => Q1 [t1] => 1 [t2] => 1 [t3] => 1 ) ) [1] => Array ( [datas] => Array ( [name] => lorem [id] => 1 [type] => t2 [due_type] => Q1 [t1] => 0 [t2] => 1 [t3] => 0 ) ) [2] => Array ( [datas] => Array ( [name] => name [id] => 2 [type] => t1 [due_type] => Q1 [t1] => 1 [t2] => 0 [t3] => 1 ) ) [3] => Array ( [datas] => Array ( [name] => name [id] => 2 [type] => t2 [due_type] => Q1 [t1] => 1 [t2] => 0 [t3] => 0 ) ) ) I want to add the values of each array according to its id, but I am having problem getting the values using these code: I want to compute the sum of all type according to each due_type and combining them into one array. $totals = array(); $i = -1; foreach($datas as $key => $row){ $i += 1; $items[$i] = $row; if (isset($totals[$items[$i]['datas']['id']])){ if($totals[$items[$i]['datas']['id']]['due_type'] == 'Q1'){ if($totals[$items[$i]['datas']['id']]['type'] == 't1'){ $t1+=$totals[$items[$i]['datas']['id']]['t1']; }elseif($totals[$items[$i]['datas']['id']]['type'] == 't2'){ $t2+=$totals[$items[$i]['datas']['id']]['t2']; }elseif($totals[$items[$i]['datas']['id']]['type'] == 't3'){ $t3+=$totals[$items[$i]['datas']['id']]['t3']; } $totals[$items[$i]['datas']['id']]['t1_total'] = $t1; $totals[$items[$i]['datas']['id']]['t2_total'] = $t2; } } else { $totals[$items[$i]['datas']['id']] = $row['datas']; $totals[$items[$i]['datas']['id']]['t1_total'] = $items[$i]['datas']['t1']; $totals[$items[$i]['datas']['id']]['t2_total'] = $items[$i]['datas']['t2']; } }
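
    For reference, the accumulation the question describes ("key a totals array by id and add each row's t1/t2/t3 into it") is easier to see in a minimal sketch; part of the trouble in the posted PHP appears to be that the running sums read from $totals instead of from the current row. A sketch of the idea, written in Python for brevity (the PHP version has the same shape with $totals keyed by $row['datas']['id']):

        # Group rows by id and sum the t1/t2/t3 columns per id (data taken from the question).
        rows = [
            {"name": "lorem", "id": 1, "due_type": "Q1", "t1": 1, "t2": 1, "t3": 1},
            {"name": "lorem", "id": 1, "due_type": "Q1", "t1": 0, "t2": 1, "t3": 0},
            {"name": "name",  "id": 2, "due_type": "Q1", "t1": 1, "t2": 0, "t3": 1},
            {"name": "name",  "id": 2, "due_type": "Q1", "t1": 1, "t2": 0, "t3": 0},
        ]

        totals = {}
        for row in rows:
            key = row["id"]
            if key not in totals:
                # first row for this id: keep its fields and start the running sums at zero
                totals[key] = dict(row)
                totals[key].update(t1_total=0, t2_total=0, t3_total=0)
            totals[key]["t1_total"] += row["t1"]   # always add the *current row's* values
            totals[key]["t2_total"] += row["t2"]
            totals[key]["t3_total"] += row["t3"]

        print(totals)   # one entry per id: id 1 -> t1_total 1, t2_total 2, t3_total 1, etc.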

    Read the article

  • Python/Biophysics- Trying to code a simple stochastic simulation!

    - by user359597
    Hey guys- I'm trying to figure out what to make of the following code- this is not the clear, intuitive python I've been learning. Was it written in C or something then wrapped in a python fxn? The code I wrote (not shown) is using the same math, but I couldn't figure out how to write a conditional loop. If anyone could explain/decipher/clean this up, I'd be really appreciative. I mean- is this 'good' python- or does it look funky? I'm brand new to this- but it's like the order of the fxns is messed up? I understand Gillespie's- I've successfully coded several simpler simulations. So in a nutshell- good code-(pythonic)? order? c? improvements? am i being an idiot? The code shown is the 'answer,' to the following question from a biophysics text (petri-net not shown and honestly not necessary to understand problem): "In a programming language of your choice, implement Gillespie’s First Reaction Algorithm to study the temporal behaviour of the reaction A---B in which the transition from A to B can only take place if another compound, C, is present, and where C dynamically interconverts with D, as modelled in the Petri-net below. Assume that there are 100 molecules of A, 1 of C, and no B or D present at the start of the reaction. Set kAB to 0.1 s-1 and both kCD and kDC to 1.0 s-1. Simulate the behaviour of the system over 100 s." def sim(): # Set the rate constants for all transitions kAB = 0.1 kCD = 1.0 kDC = 1.0 # Set up the initial state A = 100 B = 0 C = 1 D = 0 # Set the start and end times t = 0.0 tEnd = 100.0 print "Time\t", "Transition\t", "A\t", "B\t", "C\t", "D" # Compute the first interval transition, interval = transitionData(A, B, C, D, kAB, kCD, kDC) # Loop until the end time is exceded or no transition can fire any more while t <= tEnd and transition >= 0: print t, '\t', transition, '\t', A, '\t', B, '\t', C, '\t', D t += interval if transition == 0: A -= 1 B += 1 if transition == 1: C -= 1 D += 1 if transition == 2: C += 1 D -= 1 transition, interval = transitionData(A, B, C, D, kAB, kCD, kDC) def transitionData(A, B, C, D, kAB, kCD, kDC): """ Returns nTransition, the number of the firing transition (0: A->B, 1: C->D, 2: D->C), and interval, the interval between the time of the previous transition and that of the current one. """ RAB = kAB * A * C RCD = kCD * C RDC = kDC * D dt = [-1.0, -1.0, -1.0] if RAB > 0.0: dt[0] = -math.log(1.0 - random.random())/RAB if RCD > 0.0: dt[1] = -math.log(1.0 - random.random())/RCD if RDC > 0.0: dt[2] = -math.log(1.0 - random.random())/RDC interval = 1e36 transition = -1 for n in range(len(dt)): if dt[n] > 0.0 and dt[n] < interval: interval = dt[n] transition = n return transition, interval if __name__ == '__main__': sim()

    Read the article

  • Variable sized packet structs with vectors

    - by Rev316
    Lately I've been diving into network programming, and I'm having some difficulty constructing a packet with a variable "data" property. Several prior questions have helped tremendously, but I'm still lacking some implementation details. I'm trying to avoid using variable sized arrays, and just use a vector. But I can't get it to be transmitted correctly, and I believe it's somewhere during serialization. Now for some code. Packet Header class Packet { public: void* Serialize(); bool Deserialize(void *message); unsigned int sender_id; unsigned int sequence_number; std::vector<char> data; }; Packet ImpL typedef struct { unsigned int sender_id; unsigned int sequence_number; std::vector<char> data; } Packet; void* Packet::Serialize(int size) { Packet* p = (Packet *) malloc(8 + 30); p->sender_id = htonl(this->sender_id); p->sequence_number = htonl(this->sequence_number); p->data.assign(size,'&'); //just for testing purposes } bool Packet::Deserialize(void *message) { Packet *s = (Packet*)message; this->sender_id = ntohl(s->sender_id); this->sequence_number = ntohl(s->sequence_number); this->data = s->data; } During execution, I simply create a packet, assign it's members, and send/receive accordingly. The above methods are only responsible for serialization. Unfortunately, the data never gets transferred. Couple of things to point out here. I'm guessing the malloc is wrong, but I'm not sure how else to compute it (i.e. what other value it would be). Other than that, I'm unsure of the proper way to use a vector in this fashion, and would love for someone to show me how (code examples please!) :) Edit: I've awarded the question to the most comprehensive answer regarding the implementation with a vector data property. Appreciate all the responses!
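
    The underlying issue is that a std::vector's elements live on the heap, not inside the object, so casting the struct to void* (or malloc'ing a guessed size) never captures the payload, and the receiver cannot reinterpret raw bytes as a vector. The usual pattern is to flatten everything into one contiguous byte buffer with an explicit length prefix and rebuild the vector on the other side; in C++ that means htonl for the integers plus memcpy (or std::copy) into a std::vector<char> buffer. A minimal sketch of such a wire layout, shown in Python only because it keeps the byte bookkeeping short (the three 32-bit header fields are just for illustration):

        import struct

        # Wire format sketch: sender_id, sequence_number and payload length as
        # 32-bit unsigned ints in network byte order, followed by the raw payload.
        def serialize(sender_id, sequence_number, data: bytes) -> bytes:
            return struct.pack("!III", sender_id, sequence_number, len(data)) + data

        def deserialize(message: bytes):
            sender_id, sequence_number, length = struct.unpack_from("!III", message)
            return sender_id, sequence_number, message[12:12 + length]

        packet = serialize(7, 42, b"hello")
        print(deserialize(packet))   # (7, 42, b'hello')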

    Read the article

  • difference equations in MATLAB - why the need to switch signs?

    - by jefflovejapan
    Perhaps this is more of a math question than a MATLAB one, not really sure. I'm using MATLAB to compute an economic model - the New Hybrid ISLM model - and there's a confusing step where the author switches the sign of the solution. First, the author declares symbolic variables and sets up a system of difference equations. Note that the suffixes "a" and "2t" both mean "time t+1", "2a" means "time t+2" and "t" means "time t": %% --------------------------[2] MODEL proc-----------------------------%% % Define endogenous vars ('a' denotes t+1 values) syms y2a pi2a ya pia va y2t pi2t yt pit vt ; % Monetary policy rule ia = q1*ya+q2*pia; % ia = q1*(ya-yt)+q2*pia; %%option speed limit policy % Model equations IS = rho*y2a+(1-rho)yt-sigma(ia-pi2a)-ya; AS = beta*pi2a+(1-beta)*pit+alpha*ya-pia+va; dum1 = ya-y2t; dum2 = pia-pi2t; MPs = phi*vt-va; optcon = [IS ; AS ; dum1 ; dum2; MPs]; He then computes the matrix A: %% ------------------ [3] Linearization proc ------------------------%% % Differentiation xx = [y2a pi2a ya pia va y2t pi2t yt pit vt] ; % define vars jopt = jacobian(optcon,xx); % Define Linear Coefficients coef = eval(jopt); B = [ -coef(:,1:5) ] ; C = [ coef(:,6:10) ] ; % B[c(t+1) l(t+1) k(t+1) z(t+1)] = C[c(t) l(t) k(t) z(t)] A = inv(C)*B ; %(Linearized reduced form ) As far as I understand, this A is the solution to the system. It's the matrix that turns time t+1 and t+2 variables into t and t+1 variables (it's a forward-looking model). My question is essentially why is it necessary to reverse the signs of all the partial derivatives in B in order to get this solution? I'm talking about this step: B = [ -coef(:,1:5) ] ; Reversing the sign here obviously reverses the sign of every component of A, but I don't have a clear understanding of why it's necessary. My apologies if the question is unclear or if this isn't the best place to ask.
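
    One way to see where the sign comes from, assuming each row of optcon is an equation of the form F(future variables, current variables) = 0: linearizing F = 0 and splitting the Jacobian into the block for the "future" (t+1, t+2) variables, which is coef(:,1:5), and the block for the "current" (t, t+1) variables, which is coef(:,6:10), gives

        J_future * x_future + J_current * x_current = 0
          =>  (-J_future) * x_future = J_current * x_current
          =>  B * x_future = C * x_current      with  B = -coef(:,1:5),  C = coef(:,6:10)
          =>  x_current = inv(C) * B * x_future = A * x_future

    So the minus sign is not an arbitrary convention flip; it is just the future-dated block being moved to the other side of the "= 0" so the system can be written as B * x_future = C * x_current before solving for A.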

    Read the article

  • Another C datatypes question

    - by b-gen-jack-o-neill
    Hello. I understand the basic numerical datatypes of C well enough: short, int, long, float, and so on. The compiler needs to know the type so it can perform the right operations on the right numbers, for example using the FPU to add two float values. But when it comes to characters I am a little bit lost. I know the basic C datatype char is there for encoding ASCII characters. What I don't know is why you even need another datatype for characters: why couldn't you just use a 1-byte integer value to store an ASCII character? If you call printf, you specify the datatype in the call, so you could tell printf that the integer represents an ASCII character. I don't know how cout resolves the datatype, but I guess you could specify it somehow. Another thing: when you want to use Unicode, you must use the wchar datatype. But what if I wanted to use some other encoding instead of UTF, for example an ISO or Windows code page? Because wchar encodes characters as UTF-16 or UTF-32 (I read it is compiler specific). And what if I wanted to use, say, some imaginary new 8-byte text encoding? What datatype should I use for it? I am quite confused by this, because I always expected that if I want to use UTF-32 instead of ASCII, I just tell the compiler "take the UTF-32 value of the character I typed and save it into a 4-char field." I thought text encoding was to be dealt with at the end, by the print function for example, and that I would only need to specify the encoding for the compiler to use; since Windows doesn't use ASCII in Win32 apps, I guess the C compiler must convert the characters I type to ASCII from whatever encoding Windows sends to the editor. And the last thing: what if I want to use, for example, a 25-byte integer for some high-precision math? C has no define-it-yourself datatype. I know this would be difficult, since all the math operations would need to change because the CPU cannot add 25-byte numbers directly. But is there a way to do it, or is there a math library for it? What if I want to compute Pi to 1000000000000000 digits? :) I know my question is pretty long, but I wanted to explain my thoughts as best I can in English; it is not my native language, so it is difficult. And I believe there is a simple answer to my question(s), something I missed that explains everything. I have read a lot about text encoding and C tutorials, but nothing about this. Thank you for your time.

    Read the article

  • Calculating a Sample Covariance Matrix for Groups with plyr

    - by John A. Ramey
    I'm going to use the sample code from http://gettinggeneticsdone.blogspot.com/2009/11/split-apply-and-combine-in-r-using-plyr.html for this example. So, first, let's copy their example data: mydata=data.frame(X1=rnorm(30), X2=rnorm(30,5,2), SNP1=c(rep("AA",10), rep("Aa",10), rep("aa",10)), SNP2=c(rep("BB",10), rep("Bb",10), rep("bb",10))) I am going to ignore SNP2 in this example and just pretend the values in SNP1 denote group membership. So then, I may want some summary statistics about each group in SNP1: "AA", "Aa", "aa". Then if I want to calculate the means for each variable, it makes sense (modifying their code slightly) to use: > ddply(mydata, c("SNP1"), function(df) data.frame(meanX1=mean(df$X1), meanX2=mean(df$X2))) SNP1 meanX1 meanX2 1 aa 0.05178028 4.812302 2 Aa 0.30586206 4.820739 3 AA -0.26862500 4.856006 But what if I want the sample covariance matrix for each group? Ideally, I would like a 3D array, where the I have the covariance matrix for each group, and the third dimension denotes the corresponding group. I tried a modified version of the previous code and got the following results that have convinced me that I'm doing something wrong. > daply(mydata, c("SNP1"), function(df) cov(cbind(df$X1, df$X2))) , , = 1 SNP1 1 2 aa 1.4961210 -0.9496134 Aa 0.8833190 -0.1640711 AA 0.9942357 -0.9955837 , , = 2 SNP1 1 2 aa -0.9496134 2.881515 Aa -0.1640711 2.466105 AA -0.9955837 4.938320 I was thinking that the dim() of the 3rd dimension would be 3, but instead, it is 2. Really this is a sliced up version of the covariance matrix for each group. If we manually compute the sample covariance matrix for aa, we get: [,1] [,2] [1,] 1.4961210 -0.9496134 [2,] -0.9496134 2.8815146 Using plyr, the following gives me what I want in list() form: > dlply(mydata, c("SNP1"), function(df) cov(cbind(df$X1, df$X2))) $aa [,1] [,2] [1,] 1.4961210 -0.9496134 [2,] -0.9496134 2.8815146 $Aa [,1] [,2] [1,] 0.8833190 -0.1640711 [2,] -0.1640711 2.4661046 $AA [,1] [,2] [1,] 0.9942357 -0.9955837 [2,] -0.9955837 4.9383196 attr(,"split_type") [1] "data.frame" attr(,"split_labels") SNP1 1 aa 2 Aa 3 AA But like I said earlier, I would really like this in a 3D array. Any thoughts on where I went wrong with daply() or suggestions? Of course, I could typecast the list from dlply() to a 3D array, but I'd rather not do this because I will be repeating this process many times in a simulation. As a side note, I found one method (http://www.mail-archive.com/[email protected]/msg86328.html) that provides the sample covariance matrix for each group, but the outputted object is bloated. Thanks in advance.

    Read the article

  • Job queue manager with RPC interface

    - by admr
    I need a job queue manager that I can control over the Internet. It should be able to execute and stop processes, check on their status (ideally notice and execute some code when a process exits), respond to commands and also be able to report back to a server. Background: I have a GWT application that allows to create jobs to execute on a cloud instance (currently EC2). I want to push a "job packet" (data for a process to operate on etc) to S3, start a Linux EC2 instance (or use one that's already running), and tell a job manager on the instance to execute that job (possibly parallel to other jobs). It should then pull the "job packet" from S3, run a process that operates on that data and report back to the server that is running the server part of my GWT application with some information (e.g. exit code, stdout, stderr). If I have to write e.g. stdour/err to a file from the process and read that file, that's OK too. I would really like the manager to be "close" to the processes it runs, meaning I want to avoid using something like Runtime.exec from the JDK. It seems like I would have to do that if I used Quartz for example. I'm fine with the calls in both directions being asynchronous. I'm fine with any reasonable technology for the calls as long as I can easily build an interface for that in my GWT server side (e.g. HTTP requests to a servlet over SSL would be nice and trivial). The job manager does not need to have a very sophisticated queueing system. Running several processes either sequentially or in parallel should be fine. Determining how much compute time a process received during its lifetime would be nice (AFAIK, this might be challenging). I did not yet find any existing software that does this, including http://java-source.net/open-source/job-schedulers. I suspect I might have to build an RPC interface (with authentication etc, of course) around a job manager; maybe use something like Apache Commons Exec. In that case, I would prefer Java or Python for the job manager part. I would be happy to hear suggestions for either the former or latter scenario!
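
    Not a pointer to an existing tool, just to make the shape concrete: below is a minimal sketch (Python standard library only; the URL scheme and JSON fields are invented for the sketch) of a job runner that stays "close" to its processes and exposes a tiny HTTP interface - POST a command to start a job in a background thread, then GET the job id to poll exit code, stdout and stderr. TLS, authentication, the S3 fetch and the callback to the GWT server side would all be layered on top of something like this.

        import json
        import subprocess
        import threading
        import uuid
        from http.server import BaseHTTPRequestHandler, HTTPServer

        jobs = {}   # job_id -> {"status": ..., "returncode": ..., "stdout": ..., "stderr": ...}

        def run_job(job_id, command):
            jobs[job_id]["status"] = "running"
            proc = subprocess.run(command, shell=True, capture_output=True, text=True)
            jobs[job_id].update(status="finished", returncode=proc.returncode,
                                stdout=proc.stdout, stderr=proc.stderr)

        class JobHandler(BaseHTTPRequestHandler):
            def do_POST(self):                    # POST / with body {"command": "..."}
                length = int(self.headers.get("Content-Length", 0))
                command = json.loads(self.rfile.read(length))["command"]
                job_id = str(uuid.uuid4())
                jobs[job_id] = {"status": "queued"}
                threading.Thread(target=run_job, args=(job_id, command), daemon=True).start()
                self._reply({"job_id": job_id})

            def do_GET(self):                     # GET /<job_id> to poll a job's status
                self._reply(jobs.get(self.path.strip("/"), {"status": "unknown"}))

            def _reply(self, payload):
                body = json.dumps(payload).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            HTTPServer(("0.0.0.0", 8080), JobHandler).serve_forever()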

    Read the article

  • Advice on software / database design to avoid using cursors when updating database

    - by Remnant
    I have a database that logs when an employee has attended a course and when they are next due to attend the course (courses tend to be annual). As an example, the following employee attended course '1' on 1st Jan 2010 and, as the course is annual, is due to attend next on the 1st Jan 2011. As today is 20th May 2010 the course status reads as 'Complete' i.e. they have done the course and do not need to do it again until next year: EmployeeID CourseID AttendanceDate DueDate Status 123456 1 01/01/2010 01/01/2011 Complete In terms of the DueDate I calculate this in SQL when I update the employee's record e.g. DueDate = AttendanceDate + CourseFrequency (I pull course frequency this from a separate table). In my web based app (asp.net mvc) I pull back this data for all employees and display it in a grid like format for HR managers to review. This allows HR to work out who needs to go on courses. The issue I have is as follows. Taking the example above, suppose today is 2nd Jan 2011. In this case, employee 123456 is now overdue for the course and I would like to set the Status to Incomplete so that the HR manager can see that they need to action this i.e. get employee on the course. I could build a trigger in the database to run overnight to update the Status field for all employees based on the current date. From what I have read I would need to use cursors to loop over each row to amend the status and this is considered bad practice / inefficient or at least something to avoid if you can??? Alternatively, I could compute the Status in my C# code after I have pulled back the data from the database and before I display it on screen. The issue with this is that the Status in the database would not necessarily match what is shown on screen which just feels plain wrong to me. Does anybody have any advice on the best practice approach to such an issue? It helps, if I did use a cursor I doubt I would be looping over more than 1000 records at any given time. Maybe this is such small volume that using cursors is okay?

    Read the article

  • How can I load a file into a DataBag from within a Yahoo PigLatin UDF?

    - by Cervo
    I have a Pig program where I am trying to compute the minimum center between two bags. In order for it to work, I found I need to COGROUP the bags into a single dataset. The entire operation takes a long time. I want to either open one of the bags from disk within the UDF, or to be able to pass another relation into the UDF without needing to COGROUP...... Code: # **** Load files for iteration **** register myudfs.jar; wordcounts = LOAD 'input/wordcounts.txt' USING PigStorage('\t') AS (PatentNumber:chararray, word:chararray, frequency:double); centerassignments = load 'input/centerassignments/part-*' USING PigStorage('\t') AS (PatentNumber: chararray, oldCenter: chararray, newCenter: chararray); kcenters = LOAD 'input/kcenters/part-*' USING PigStorage('\t') AS (CenterID:chararray, word:chararray, frequency:double); kcentersa1 = CROSS centerassignments, kcenters; kcentersa = FOREACH kcentersa1 GENERATE centerassignments::PatentNumber as PatentNumber, kcenters::CenterID as CenterID, kcenters::word as word, kcenters::frequency as frequency; #***** Assign to nearest k-mean ******* assignpre1 = COGROUP wordcounts by PatentNumber, kcentersa by PatentNumber; assignwork2 = FOREACH assignpre1 GENERATE group as PatentNumber, myudfs.kmeans(wordcounts, kcentersa) as CenterID; basically my issue is that for each patent I need to pass the sub relations (wordcounts, kcenters). In order to do this, I do a cross and then a COGROUP by PatentNumber in order to get the set PatentNumber, {wordcounts}, {kcenters}. If I could figure a way to pass a relation or open up the centers from within the UDF, then I could just GROUP wordcounts by PatentNumber and run myudfs.kmeans(wordcount) which is hopefully much faster without the CROSS/COGROUP. This is an expensive operation. Currently this takes about 20 minutes and appears to tack the CPU/RAM. I was thinking it might be more efficient without the CROSS. I'm not sure it will be faster, so I'd like to experiment. Anyway it looks like calling the Loading functions from within Pig needs a PigContext object which I don't get from an evalfunc. And to use the hadoop file system, I need some initial objects as well, which I don't see how to get. So my question is how can I open a file from the hadoop file system from within a PIG UDF? I also run the UDF via main for debugging. So I need to load from the normal filesystem when in debug mode. Another better idea would be if there was a way to pass a relation into a UDF without needing to CROSS/COGROUP. This would be ideal, particularly if the relation resides in memory.. ie being able to do myudfs.kmeans(wordcounts, kcenters) without needing the CROSS/COGROUP with kcenters... But the basic idea is to trade IO for RAM/CPU cycles. Anyway any help will be much appreciated, the PIG UDFs aren't super well documented beyond the most simple ones, even in the UDF manual.

    Read the article

  • Python/Biomolecular Physics- Trying to code a simple stochastic simulation of a system exhibiting co

    - by user359597
    *edited 6/17/10 I'm trying to understand how to improve my code (make it more pythonic). Also, I'm interested in writing more intuitive 'conditionals' that would describe scenarios that are commonplace in biochemistry. The conditional criteria in the below program is explained in Answer #2, but I am not satisfied with it- it is correct, but isn't obvious and isn't easy to implement for more complicated conditional scenarios. Ideas welcome. Comments/criticisms welcome. First posting experience @ stackoverflow- please comment on etiquette if needed. The code generates a list of values that are the solution to the following exercise: "In a programming language of your choice, implement Gillespie’s First Reaction Algorithm to study the temporal behaviour of the reaction A---B in which the transition from A to B can only take place if another compound, C, is present, and where C dynamically interconverts with D, as modelled in the Petri-net below. Assume that there are 100 molecules of A, 1 of C, and no B or D present at the start of the reaction. Set kAB to 0.1 s-1 and both kCD and kDC to 1.0 s-1. Simulate the behaviour of the system over 100 s." def sim(): # Set the rate constants for all transitions kAB = 0.1 kCD = 1.0 kDC = 1.0 # Set up the initial state A = 100 B = 0 C = 1 D = 0 # Set the start and end times t = 0.0 tEnd = 100.0 print "Time\t", "Transition\t", "A\t", "B\t", "C\t", "D" # Compute the first interval transition, interval = transitionData(A, B, C, D, kAB, kCD, kDC) # Loop until the end time is exceded or no transition can fire any more while t <= tEnd and transition >= 0: print t, '\t', transition, '\t', A, '\t', B, '\t', C, '\t', D t += interval if transition == 0: A -= 1 B += 1 if transition == 1: C -= 1 D += 1 if transition == 2: C += 1 D -= 1 transition, interval = transitionData(A, B, C, D, kAB, kCD, kDC) def transitionData(A, B, C, D, kAB, kCD, kDC): """ Returns nTransition, the number of the firing transition (0: A->B, 1: C->D, 2: D->C), and interval, the interval between the time of the previous transition and that of the current one. """ RAB = kAB * A * C RCD = kCD * C RDC = kDC * D dt = [-1.0, -1.0, -1.0] if RAB > 0.0: dt[0] = -math.log(1.0 - random.random())/RAB if RCD > 0.0: dt[1] = -math.log(1.0 - random.random())/RCD if RDC > 0.0: dt[2] = -math.log(1.0 - random.random())/RDC interval = 1e36 transition = -1 for n in range(len(dt)): if dt[n] > 0.0 and dt[n] < interval: interval = dt[n] transition = n return transition, interval if __name__ == '__main__': sim()
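
    The posted code is valid, working Python; mostly it just reads like a direct translation of pseudocode. For comparison, a sketch of a more idiomatic transitionData (same math: each enabled transition gets an exponentially distributed waiting time and the earliest one fires), assuming the same rate expressions:

        import random

        def next_transition(A, B, C, D, kAB, kCD, kDC):
            """Same contract as transitionData: returns (transition, interval),
            with transition == -1 when no transition can fire."""
            rates = [kAB * A * C,   # 0: A -> B (only possible while C is present)
                     kCD * C,       # 1: C -> D
                     kDC * D]       # 2: D -> C
            # random.expovariate(rate) is the library form of -log(1 - random())/rate
            waits = [(random.expovariate(rate), n)
                     for n, rate in enumerate(rates) if rate > 0.0]
            if not waits:
                return -1, 0.0
            interval, transition = min(waits)
            return transition, interval

    The chain of if transition == n: blocks in sim() can similarly become a lookup table mapping each transition number to its state change, e.g. {0: (-1, +1, 0, 0), 1: (0, 0, -1, +1), 2: (0, 0, +1, -1)} applied to (A, B, C, D).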

    Read the article

  • Calculating rotation and translation matrices between two odometry positions for monocular linear triangulation

    - by user1298891
    Recently I've been trying to implement a system to identify and triangulate the 3D position of an object in a robotic system. The general outline of the process goes as follows: Identify the object using SURF matching, from a set of "training" images to the actual live feed from the camera Move/rotate the robot a certain amount Identify the object using SURF again in this new view Now I have: a set of corresponding 2D points (same object from the two different views), two odometry locations (position + orientation), and camera intrinsics (focal length, principal point, etc.) since it's been calibrated beforehand, so I should be able to create the 2 projection matrices and triangulate using a basic linear triangulation method as in Hartley & Zissermann's book Multiple View Geometry, pg. 312. Solve the AX = 0 equation for each of the corresponding 2D points, then take the average In practice, the triangulation only works when there's almost no change in rotation; if the robot even rotates a slight bit while moving (due to e.g. wheel slippage) then the estimate is way off. This also applies for simulation. Since I can only post two hyperlinks, here's a link to a page with images from the simulation (on the map, the red square is simulated robot position and orientation, and the yellow square is estimated position of the object using linear triangulation.) So you can see that the estimate is thrown way off even by a little rotation, as in Position 2 on that page (that was 15 degrees; if I rotate it any more then the estimate is completely off the map), even in a simulated environment where a perfect calibration matrix is known. In a real environment when I actually move around with the robot, it's worse. There aren't any problems with obtaining point correspondences, nor with actually solving the AX = 0 equation once I compute the A matrix, so I figure it probably has to do with how I'm setting up the two camera projection matrices, specifically how I'm calculating the translation and rotation matrices from the position/orientation info I have relative to the world frame. How I'm doing that right now is: Rotation matrix is composed by creating a 1x3 matrix [0, (change in orientation angle), 0] and then converting that to a 3x3 one using OpenCV's Rodrigues function Translation matrix is composed by rotating the two points (start angle) degrees and then subtracting the final position from the initial position, in order to get the robot's straight and lateral movement relative to its starting orientation Which results in the first projection matrix being K [I | 0] and the second being K [R | T], with R and T calculated as described above. Is there anything I'm doing really wrong here? Or could it possibly be some other problem? Any help would be greatly appreciated.
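
    A common culprit with this symptom is the translation of the second projection matrix: it has to be the displacement between the two camera centres expressed in the second camera's frame, not the raw odometry difference (which is in world/start coordinates), so any rotation between the two poses corrupts it. A minimal sketch of the relative pose, assuming each odometry pose provides a world-to-camera rotation R_i and a camera centre C_i in world coordinates (i.e. X_cam_i = R_i (X_world - C_i)); whether this matches your exact convention depends on how the odometry reports orientation:

        import numpy as np

        def relative_pose(R1, C1, R2, C2):
            """[R | t] mapping camera-1 coordinates into camera-2 coordinates."""
            R = R2 @ R1.T            # rotation from camera-1 frame to camera-2 frame
            t = R2 @ (C1 - C2)       # camera-1 origin expressed in the camera-2 frame
            return R, t

        # With intrinsics K, the projection matrices for triangulation are then
        #   P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        #   P2 = K @ np.hstack([R, t.reshape(3, 1)])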

    Read the article

  • Need help converting Ruby code to php code

    - by newprog
    Yesterday I posted this queston. Today I found the code which I need but written in Ruby. Some parts of code I have understood (I don't know Ruby) but there is one part that I can't. I think people who know ruby and php can help me understand this code. def do_create(image) # Clear any old info in case of a re-submit FIELDS_TO_CLEAR.each { |field| image.send(field+'=', nil) } image.save # Compose request vm_params = Hash.new # Submitting a file in ruby requires opening it and then reading the contents into the post body file = File.open(image.filename_in, "rb") # Populate the parameters and compute the signature # Normally you would do this in a subroutine - for maximum clarity all # parameters are explicitly spelled out here. vm_params["image"] = file # Contents will be read by the multipart object created below vm_params["image_checksum"] = image.image_checksum vm_params["start_job"] = 'vectorize' vm_params["image_type"] = image.image_type if image.image_type != 'none' vm_params["image_complexity"] = image.image_complexity if image.image_complexity != 'none' vm_params["image_num_colors"] = image.image_num_colors if image.image_num_colors != '' vm_params["image_colors"] = image.image_colors if image.image_colors != '' vm_params["expire_at"] = image.expire_at if image.expire_at != '' vm_params["licensee_id"] = DEVELOPER_ID #in php it's like this $vm_params["sequence_number"] = -rand(100000000);????? vm_params["sequence_number"] = Kernel.rand(1000000000) # Use a negative value to force an error when calling the test server vm_params["timestamp"] = Time.new.utc.httpdate string_to_sign = CREATE_URL + # Start out with the URL being called... #vm_params["image"].to_s + # ... don't include the file per se - use the checksum instead vm_params["image_checksum"].to_s + # ... then include all regular parameters vm_params["start_job"].to_s + vm_params["image_type"].to_s + vm_params["image_complexity"].to_s + # (nil.to_s => '', so this is fine for vm_params we don't use) vm_params["image_num_colors"].to_s + vm_params["image_colors"].to_s + vm_params["expire_at"].to_s + vm_params["licensee_id"].to_s + # ... then do all the security parameters vm_params["sequence_number"].to_s + vm_params["timestamp"].to_s vm_params["signature"] = sign(string_to_sign) #no problem # Workaround class for handling multipart posts mp = Multipart::MultipartPost.new query, headers = mp.prepare_query(vm_params) # Handles the file parameter in a special way (see /lib/multipart.rb) file.close # mp has read the contents, we can close the file now response = post_form(URI.parse(CREATE_URL), query, headers) logger.info(response.body) response_hash = ActiveSupport::JSON.decode(response.body) # Decode the JSON response string ##I have understood below def sign(string_to_sign) #logger.info("String to sign: '#{string_to_sign}'") Base64.encode64(HMAC::SHA1.digest(DEVELOPER_KEY, string_to_sign)) end # Within Multipart modul I have this: class MultipartPost BOUNDARY = 'tarsiers-rule0000' HEADER = {"Content-type" => "multipart/form-data, boundary=" + BOUNDARY + " "} def prepare_query (params) fp = [] params.each {|k,v| if v.respond_to?(:read) fp.push(FileParam.new(k, v.path, v.read)) else fp.push(Param.new(k,v)) end } query = fp.collect {|p| "--" + BOUNDARY + "\r\n" + p.to_multipart }.join("") + "--" + BOUNDARY + "--" return query, HEADER end end end Thanks for your help.
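
    The piece that usually needs the most care when porting this is the sign helper: it is just a raw HMAC-SHA1 digest of the concatenated string, base64-encoded. In PHP the equivalent one-liner is base64_encode(hash_hmac('sha1', $string_to_sign, $developer_key, true)) (note the true for raw binary output). The same step as a small sketch, shown here in Python:

        import base64
        import hashlib
        import hmac

        def sign(string_to_sign: str, developer_key: str) -> str:
            """Equivalent of Base64.encode64(HMAC::SHA1.digest(DEVELOPER_KEY, s))."""
            digest = hmac.new(developer_key.encode(), string_to_sign.encode(),
                              hashlib.sha1).digest()
            return base64.b64encode(digest).decode()

    One caveat: Ruby's Base64.encode64 inserts line breaks and a trailing newline, while a plain base64 encode does not, so check which form the server actually verifies against.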

    Read the article

  • Is this implementation truely tail-recursive?

    - by CFP
    Hello everyone! I've come up with the following code to compute in a tail-recursive way the result of an expression such as 3 4 * 1 + cos 8 * (aka 8*cos(1+(3*4))) The code is in OCaml. I'm using a list refto emulate a stack. type token = Num of float | Fun of (float->float) | Op of (float->float->float);; let pop l = let top = (List.hd !l) in l := List.tl (!l); top;; let push x l = l := (x::!l);; let empty l = (l = []);; let pile = ref [];; let eval data = let stack = ref data in let rec _eval cont = match (pop stack) with | Num(n) -> cont n; | Fun(f) -> _eval (fun x -> cont (f x)); | Op(op) -> _eval (fun x -> cont (op x (_eval (fun y->y)))); in _eval (fun x->x) ;; eval [Fun(fun x -> x**2.); Op(fun x y -> x+.y); Num(1.); Num(3.)];; I've used continuations to ensure tail-recursion, but since my stack implements some sort of a tree, and therefore provides quite a bad interface to what should be handled as a disjoint union type, the call to my function to evaluate the left branch with an identity continuation somehow irks a little. Yet it's working perfectly, but I have the feeling than in calling the _eval (fun y->y) bit, there must be something wrong happening, since it doesn't seem that this call can replace the previous one in the stack structure... Am I misunderstanding something here? I mean, I understand that with only the first call to _eval there wouldn't be any problem optimizing the calls, but here it seems to me that evaluation the _eval (fun y->y) will require to be stacked up, and therefore will fill the stack, possibly leading to an overflow... Thanks!

    Read the article

  • Android library to get pitch from WAV file

    - by Sakura
    I have a list of sampled data from the WAV file. I would like to pass in these values into a library and get the frequency of the music played in the WAV file. For now, I will have 1 frequency in the WAV file and I would like to find a library that is compatible with Android. I understand that I need to use FFT to get the frequency domain. Is there any good libraries for that? I found that [KissFFT][1] is quite popular but I am not very sure how compatible it is on Android. Is there an easier and good library that can perform the task I want? EDIT: I tried to use JTransforms to get the FFT of the WAV file but always failed at getting the correct frequency of the file. Currently, the WAV file contains sine curve of 440Hz, music note A4. However, I got the result as 441. Then I tried to get the frequency of G4, I got the result as 882Hz which is incorrect. The frequency of G4 is supposed to be 783Hz. Could it be due to not enough samples? If yes, how much samples should I take? //DFT DoubleFFT_1D fft = new DoubleFFT_1D(numOfFrames); double max_fftval = -1; int max_i = -1; double[] fftData = new double[numOfFrames * 2]; for (int i = 0; i < numOfFrames; i++) { // copying audio data to the fft data buffer, imaginary part is 0 fftData[2 * i] = buffer[i]; fftData[2 * i + 1] = 0; } fft.complexForward(fftData); for (int i = 0; i < fftData.length; i += 2) { // complex numbers -> vectors, so we compute the length of the vector, which is sqrt(realpart^2+imaginarypart^2) double vlen = Math.sqrt((fftData[i] * fftData[i]) + (fftData[i + 1] * fftData[i + 1])); //fd.append(Double.toString(vlen)); // fd.append(","); if (max_fftval < vlen) { // if this length is bigger than our stored biggest length max_fftval = vlen; max_i = i; } } //double dominantFreq = ((double)max_i / fftData.length) * sampleRate; double dominantFreq = (max_i/2.0) * sampleRate / numOfFrames; fd.append(Double.toString(dominantFreq)); Can someone help me out? EDIT2: I manage to fix the problem mentioned above by increasing the number of samples to 100000, however, sometimes I am getting the overtones as the frequency. Any idea how to fix it? Should I use Harmonic Product Frequency or Autocorrelation algorithms?
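
    Two things worth noting. First, the frequency resolution of a plain FFT peak is sampleRate / numOfFrames Hz per bin, which is why 440 Hz can land on a 441 Hz bin; more samples (or interpolating around the peak) tightens it. Second, for the overtone problem the Harmonic Product Spectrum is a simple fix: multiply the magnitude spectrum by downsampled copies of itself, so a fundamental whose harmonics line up beats any single strong overtone. A minimal sketch of that idea (numpy used for brevity; the same loop ports directly to the JTransforms magnitude array):

        import numpy as np

        def dominant_frequency_hps(samples, sample_rate, harmonics=4):
            spectrum = np.abs(np.fft.rfft(samples))      # magnitude spectrum
            hps = spectrum.copy()
            for h in range(2, harmonics + 1):
                decimated = spectrum[::h]                # value at bin k is the magnitude at k*h
                hps[:len(decimated)] *= decimated
            peak_bin = int(np.argmax(hps[1:])) + 1       # skip the DC bin
            return peak_bin * sample_rate / len(samples)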

    Read the article

  • SSL Authentication with Certificates: Should the Certificates have a hostname?

    - by sixtyfootersdude
    Summary: JBoss allows clients and servers to authenticate using certificates and SSL. One thing that seems strange is that you are not required to give your hostname on the certificate. I think this means that if Server B is in your truststore, Server B can pretend to be any server it wants. (And likewise if Client B is in your truststore...) Am I missing something here?
    Authentication steps (summary of the Wikipedia page):
    1) Client Hello (encryption: none) - highest TLS protocol supported, random number, list of cipher suites, compression methods.
    2) Server Hello (encryption: none) - highest TLS protocol supported, random number, chosen cipher suite, chosen compression method.
    3) Certificate message (encryption: none).
    4) ServerHelloDone (encryption: none).
    5) Certificate message (encryption: none).
    6) ClientKeyExchange message (encrypted with the server's public key, so only the server can read it; if the server can read it, it must own the certificate) - may contain a PreMasterSecret, a public key, or nothing (depends on the cipher).
    7) CertificateVerify message (signed with the client's private key) - its purpose is to prove to the server that the client owns the certificate.
    8) Both client and server use the random numbers and the PreMasterSecret to compute a common secret.
    9) Finished message - contains a hash and MAC over the previous handshake messages (to ensure those unencrypted messages were not tampered with).
    10) Finished message - same thing in the other direction.
    Server knows: The client has the public key for the sent certificate (step 7). The client's certificate is valid because either it has been signed by a CA (Verisign) or it is self-signed BUT is in the server's truststore. It is not a replay attack because presumably the random number (step 1 or 2) is sent with each message.
    Client knows: The server has the public key for the sent certificate (step 6 with step 8). The server's certificate is valid because either it has been signed by a CA (Verisign) or it is self-signed BUT is in the client's truststore. It is not a replay attack because presumably the random number (step 1 or 2) is sent with each message.
    Potential problem: Suppose the client's truststore contains certificates for Server A and Server B (malicious), where Server A has hostname www.A.com and Server B has hostname www.B.com. Suppose the client tries to connect to Server A but Server B launches a man-in-the-middle attack. Since Server B has the public key for the certificate it will send to the client, and has a "valid certificate" (a cert in the truststore), and since certificates do not have a hostname field in them, it seems like Server B can pretend to be Server A easily. Is there something I am missing?

    Read the article

  • Efficient algorithm to distribute work?

    - by Zwei Steinen
    It's a bit complicated to explain but here we go. We have problems like this (code is pseudo-code, and is only for illustrating the problem. Sorry it's in java. If you don't understand, I'd be glad to explain.). class Problem { final Set<Integer> allSectionIds = { 1,2,4,6,7,8,10 }; final Data data = //Some data } And a subproblem is: class SubProblem { final Set<Integer> targetedSectionIds; final Data data; SubProblem(Set<Integer> targetedSectionsIds, Data data){ this.targetedSectionIds = targetedSectionIds; this.data = data; } } Work will look like this, then. class Work implements Runnable { final Set<Section> subSections; final Data data; final Result result; Work(Set<Section> subSections, Data data) { this.sections = SubSections; this.data = data; } @Override public void run(){ for(Section section : subSections){ result.addUp(compute(data, section)); } } } Now we have instances of 'Worker', that have their own state sections I have. class Worker implements ExecutorService { final Map<Integer,Section> sectionsIHave; { sectionsIHave = {1:section1, 5:section5, 8:section8 }; } final ExecutorService executor = //some executor. @Override public void execute(SubProblem problem){ Set<Section> sectionsNeeded = fetchSections(problem.targetedSectionIds); super.execute(new Work(sectionsNeeded, problem.data); } } phew. So, we have a lot of Problems and Workers are constantly asking for more SubProblems. My task is to break up Problems into SubProblem and give it to them. The difficulty is however, that I have to later collect all the results for the SubProblems and merge (reduce) them into a Result for the whole Problem. This is however, costly, so I want to give the workers "chunks" that are as big as possible (has as many targetedSections as possible). It doesn't have to be perfect (mathematically as efficient as possible or something). I mean, I guess that it is impossible to have a perfect solution, because you can't predict how long each computation will take, etc.. But is there a good heuristic solution for this? Or maybe some resources I can read up before I go into designing? Any advice is highly appreciated!

    Read the article

  • Output from OouraFFT correct sometimes but completely false other times. Why ?

    - by Yan
    Hi I am using Ooura FFT to compute the FFT of the accelerometer data in windows of 1024 samples. The code works fine, but then for some reason it produces very strange outputs, i.e. continuous spectrum with amplitudes of the order of 10^200. Here is the code: OouraFFT *myFFT=[[OouraFFT alloc] initForSignalsOfLength:1024 NumWindows:10]; // had to allocate it UIAcceleration *tempAccel = nil; double *input=(double *)malloc(1024 * sizeof(double)); double *frequency=(double *)malloc(1024*sizeof(double)); if (input) { //NSLog(@"%d",[array count]); for (int u=0; u<[array count]; u++) { tempAccel = (UIAcceleration *)[array objectAtIndex:u]; input[u]=tempAccel.z; //NSLog(@"%g",input[u]); } } myFFT.inputData=input; // specifies input data to myFFT [myFFT calculateWelchPeriodogramWithNewSignalSegment]; // calculates FFT for (int i=0;i<myFFT.dataLength;i++) // loop to copy output of myFFT, length of spectrumData is half of input data, so copy twice { if (i<myFFT.numFrequencies) { frequency[i]=myFFT.spectrumData[i]; // } else { frequency[i]=myFFT.spectrumData[myFFT.dataLength-i]; // copy twice } } for (int i=0;i<[array count];i++) { TransformedAcceleration *NewAcceleration=[[TransformedAcceleration alloc]init]; tempAccel=(UIAcceleration*)[array objectAtIndex:i]; NewAcceleration.timestamp=tempAccel.timestamp; NewAcceleration.x=tempAccel.x; NewAcceleration.y=tempAccel.z; NewAcceleration.z=frequency[i]; [newcurrentarray addObject:NewAcceleration]; // this does not work //[self replaceAcceleration:NewAcceleration]; //[NewAcceleration release]; [NewAcceleration release]; } TransformedAcceleration *a=nil;//[[TransformedAcceleration alloc]init]; // object containing fft of x,y,z accelerations for(int i=0; i<[newcurrentarray count]; i++) { a=(TransformedAcceleration *)[newcurrentarray objectAtIndex:i]; //NSLog(@"%d,%@",i,[a printAcceleration]); fprintf(fp,[[a printAcceleration] UTF8String]); //this is going wrong somewhow } fclose(fp); [array release]; [myFFT release]; //[array removeAllObjects]; [newcurrentarray release]; free(input); free(frequency);

    Read the article

  • objective C convert NSString to unsigned

    - by user1501354
    I have changed my question. I want to convert an NSString to an unsigned int. Why? Because I want to do parallel payment in PayPal. Below I have given my coding in which I want to convert the NSString to an unsigned int. My query is: //optional, set shippingEnabled to TRUE if you want to display shipping //options to the user, default: TRUE [PayPal getPayPalInst].shippingEnabled = TRUE; //optional, set dynamicAmountUpdateEnabled to TRUE if you want to compute //shipping and tax based on the user's address choice, default: FALSE [PayPal getPayPalInst].dynamicAmountUpdateEnabled = TRUE; //optional, choose who pays the fee, default: FEEPAYER_EACHRECEIVER [PayPal getPayPalInst].feePayer = FEEPAYER_EACHRECEIVER; //for a payment with multiple recipients, use a PayPalAdvancedPayment object PayPalAdvancedPayment *payment = [[PayPalAdvancedPayment alloc] init]; payment.paymentCurrency = @"USD"; // A payment note applied to all recipients. payment.memo = @"A Note applied to all recipients"; //receiverPaymentDetails is a list of PPReceiverPaymentDetails objects payment.receiverPaymentDetails = [NSMutableArray array]; NSArray *emailArray = [NSArray arrayWithObjects:@"[email protected]",@"[email protected]", nil]; for (int i = 1; i <= 2; i++) { PayPalReceiverPaymentDetails *details = [[PayPalReceiverPaymentDetails alloc] init]; // Customize the payment notes for one of the three recipient. if (i == 2) { details.description = [NSString stringWithFormat:@"Component %d", i]; } details.recipient = [NSString stringWithFormat:@"%@",[emailArray objectAtIndex:i-1]]; unsigned order; if (i==1) { order = [[feeArray objectAtIndex:0] unsignedIntValue]; } if (i==2) { order = [[amountArray objectAtIndex:0] unsignedIntValue]; } //subtotal of all items for this recipient, without tax and shipping details.subTotal = [NSDecimalNumber decimalNumberWithMantissa:order exponent:-4 isNegative:FALSE]; //invoiceData is a PayPalInvoiceData object which contains tax, shipping, and a list of PayPalInvoiceItem objects details.invoiceData = [[PayPalInvoiceData alloc] init]; //invoiceItems is a list of PayPalInvoiceItem objects //NOTE: sum of totalPrice for all items must equal details.subTotal //NOTE: example only shows a single item, but you can have more than one details.invoiceData.invoiceItems = [NSMutableArray array]; PayPalInvoiceItem *item = [[PayPalInvoiceItem alloc] init]; item.totalPrice = details.subTotal; [details.invoiceData.invoiceItems addObject:item]; [payment.receiverPaymentDetails addObject:details]; } [[PayPal getPayPalInst] advancedCheckoutWithPayment:payment]; Can anybody tell me how to do this conversion? Thanks and regards in advance.

    Read the article

  • RaphaelJS HTML5 Library pathIntersection() bug or alternative optimisation (screenshots)

    - by user1236048
    I have a chart generated using RaphaelJS library. It is just on long path: M 50 122 L 63.230769230769226 130 L 76.46153846153845 130 L 89.6923076923077 128 L 102.92307692307692 56 L 116.15384615384615 106 L 129.3846153846154 88 L 142.6153846153846 114 L 155.84615384615384 52 L 169.07692307692307 30 L 182.3076923076923 62 L 195.53846153846152 130 L 208.76923076923077 74 L 222 130 L 235.23076923076923 66 L 248.46153846153845 102 L 261.6923076923077 32 L 274.9230769230769 130 L 288.15384615384613 130 L 301.38461538461536 32 L 314.6153846153846 86 L 327.8461538461538 130 L 341.07692307692304 70 L 354.30769230769226 130 L 367.53846153846155 102 L 380.7692307692308 120 L 394 112 L 407.2307692307692 68 L 420.46153846153845 48 L 433.6923076923077 92 L 446.9230769230769 128 L 460.15384615384613 110 L 473.38461538461536 78 L 486.6153846153846 130 L 499.8461538461538 56 L 513.0769230769231 116 L 526.3076923076923 80 L 539.5384615384614 58 L 552.7692307692307 40 L 566 130 L 579.2307692307692 94 L 592.4615384615385 64 L 605.6923076923076 122 L 618.9230769230769 98 L 632.1538461538461 120 L 645.3846153846154 70 L 658.6153846153845 82 L 671.8461538461538 76 L 685.0769230769231 124 L 698.3076923076923 110 L 711.5384615384615 94 L 724.7692307692307 130 L 738 130 L 751.2307692307692 66 L 764.4615384615385 118 L 777.6923076923076 70 L 790.9230769230769 130 L 804.1538461538461 44 L 817.3846153846154 130 L 830.6153846153845 36 L 843.8461538461538 92 L 857.076923076923 130 L 870.3076923076923 76 L 883.5384615384614 130 L 896.7692307692307 60 L 910 88 Also below these chart I have a jqueryUI slider of the same width (860px) and centered with the chart. I want when I move the slider to move a dot on the chart accordingly with the slider position. See attached screenshot: As you can see it seems to work fine. I've implemented this behaviour using the pathIntersection() method. On the slide event at each ui.value (x coordinate) I intersect my chartPath (the one from above) with a vertical straight line at the x coordinate. But still there are some problems. One of them is that it runs very hard, and it kinda freezes sometimes.. and very weird sometimes it doesn't seem to intersect at all even it should.. I'll example below 2 cases I identified: M 499.8461538461538 0 L 499.8461538461538 140 M 910 0 L 910 140 Could you please explain why this intersect behaviour happens (it should return a dot).. and the worst part it seems like it happens randomly.. if I use another chartdata. Also if you can identify another (better) solution to syncronise the slider position with the dot on the chart.. would be perfect. I thought about using Element.getPointAtLength(length), but I don't know how. I think I should save the pathSegments and for each to compute the start Length and the finish Length.

    Read the article

  • Java program grading

    - by pasito15
    I've been working on this program for hours and I can't figure out how to get the program to actually print the grades from the scores Text file public class Assign7{ private double finalScore; private double private_quiz1; private double private_quiz2; private double private_midTerm; private double private_final; private final char grade; public Assign7(double finalScore){ private_quiz1 = 1.25; private_quiz2 = 1.25; private_midTerm = 0.25; private_final = 0.50; if (finalScore >= 90) { grade = 'A'; } else if (finalScore >= 80) { grade = 'B'; } else if (finalScore >= 70) { grade = 'C'; } else if (finalScore>= 60) { grade = 'D'; } else { grade = 'F'; } } public String toString(){ return finalScore+":"+private_quiz1+":"+private_quiz2+":"+private_midTerm+":"+private_final; } } this code compiles as well as this one import java.util.*; import java.io.*; public class Assign7Test{ public static void main(String[] args)throws Exception{ int q1,q2; int m = 0; int f = 0; int Record ; String name; Scanner myIn = new Scanner( new File("scores.txt") ); System.out.println( myIn.nextLine() +" avg "+"letter"); while( myIn.hasNext() ){ name = myIn.next(); q1 = myIn.nextInt(); q2 = myIn.nextInt(); m = myIn.nextInt(); f = myIn.nextInt(); Record myR = new Record( name, q1,q2,m,f); System.out.println(myR); } } public static class Record { public Record() { } public Record(String name, int q1, int q2, int m, int f) { } } } once a compile the code i get this which dosent exactly compute the numbers I have in the scores.txt Name quiz1 quiz2 midterm final avg letter Assign7Test$Record@4bcc946b Assign7Test$Record@642423 Exception in thread "main" java.until.InputMismatchException at java.until.Scanner.throwFor(Unknown Source) at java.until.Scanner.next(Unknown Source) at java.until.Scanner.nextInt(Unknown Source) at java.until.Scanner.nextInt(Unknown Source) at Assign7Test.main(Assign7Test.java:25)

    Read the article

  • hashing password giving different results

    - by geoff
    I am taking over a system that a previous developer wrote. The system has an administrator approve a user account and when they do that the system uses the following method to hash a password and save it to the database. It sends the unhashed password to the user. When the user logs in the system uses the exact same method to hash what the user enters and compares it to the database value. We've run into a couple of times when the database entry doesn't match the user's entry whey they should. So it appears that the method isn't always hashing the value the same. Does anyone know if this method of hashing isn't reliable and how to make it reliable? Thanks. private string HashPassword(string password) { string hashedPassword = string.Empty; // Convert plain text into a byte array. byte[] plainTextBytes = Encoding.UTF8.GetBytes(password); // Allocate array, which will hold plain text and salt. byte[] plainTextWithSaltBytes = new byte[plainTextBytes.Length + SALT.Length]; // Copy plain text bytes into resulting array. for(int i = 0; i < plainTextBytes.Length; i++) plainTextWithSaltBytes[i] = plainTextBytes[i]; // Append salt bytes to the resulting array. for(int i = 0; i < SALT.Length; i++) plainTextWithSaltBytes[plainTextBytes.Length + i] = SALT[i]; // Because we support multiple hashing algorithms, we must define // hash object as a common (abstract) base class. We will specify the // actual hashing algorithm class later during object creation. HashAlgorithm hash = new SHA256Managed(); // Compute hash value of our plain text with appended salt. byte[] hashBytes = hash.ComputeHash(plainTextWithSaltBytes); // Create array which will hold hash and original salt bytes. byte[] hashWithSaltBytes = new byte[hashBytes.Length + SALT.Length]; // Copy hash bytes into resulting array. for(int i = 0; i < hashBytes.Length; i++) hashWithSaltBytes[i] = hashBytes[i]; // Append salt bytes to the result. for(int i = 0; i < SALT.Length; i++) hashWithSaltBytes[hashBytes.Length + i] = SALT[i]; // Convert result into a base64-encoded string. hashedPassword = Convert.ToBase64String(hashWithSaltBytes); return hashedPassword; }

    Read the article

  • C++ problem with string stream istringstream

    - by user69514
    I am reading a file in the following format 1001 16000 300 12.50 2002 24000 360 10.50 3003 30000 300 9.50 where the items are: loan id, principal, months, interest rate. I'm not sure what it is that I am doing wrong with my input string stream, but I am not reading the values correctly because only the loan id is read correctly. Everything else is zero. Sorry this is a homework, but I just wanted to know if you could help me identify my error. if( inputstream.is_open() ){ /** print the results **/ cout << fixed << showpoint << setprecision(2); cout << "ID " << "\tPrincipal" << "\tDuration" << "\tInterest" << "\tPayment" <<"\tTotal Payment" << endl; cout << "---------------------------------------------------------------------------------------------" << endl; /** assign line read while we haven't reached end of file **/ string line; istringstream instream; while( inputstream >> line ){ instream.clear(); instream.str(line); /** assing values **/ instream >> loanid >> principal >> duration >> interest; /** compute monthly payment **/ double ratem = interest / 1200.0; double expm = (1.0 + ratem); payment = (ratem * pow(expm, duration) * principal) / (pow(expm, duration) - 1.0); /** computer total payment **/ totalPayment = payment * duration; /** print out calculations **/ cout << loanid << "\t$" << principal <<"\t" << duration << "mo" << "\t" << interest << "\t$" << payment << "\t$" << totalPayment << endl; } }

    Read the article

  • Code to generate random numbers in C++

    - by user1678927
    Basically I have to write a program that generates random numbers to simulate rolling a pair of dice. The program should be constructed in multiple files: the main function in one file, the other functions in a second source file, and their prototypes in a header file. First I write a short function that returns a random value between 1 and 6 to simulate rolling a single 6-sided die. Second, I write a function that rolls a pair of dice by calling this function twice. My program starts by asking the user how many rolls should be made. Then I write a function to simulate rolling the dice that many times, keeping a count in an array of exactly how many times each of the values 2,3,4,5,6,7,8,9,10,11,12 (each being the sum of a pair of dice) occurs. Later I write a function to display a small bar chart using these counts; ideally, for a sample of 144 rolls, it would print the headings 2 3 4 5 6 7 8 9 10 11 12 with a column of asterisks under each value, where the number of asterisks corresponds to that sum's count. Next, to see how well the random number generator is doing, I write a function to compute the average value rolled and compare it to the ideal average of 7. I also print a small table showing, in separate columns, the counts of each roll made by the program, the ideal count based on the frequencies above given the total number of rolls, and the difference between these values. This is my incomplete code so far (compiler: Visual Studio 2010): int rolling(){ //Function that returns a random value between 1 and 6 rand(unsigned(time(NULL))); int dice = 1 + (rand() %6); return dice; } int roll_dice(int num1,int num2){ //it calls 'rolling function' twice int result1,result2; num1 = rolling(); num2 = rolling(); result1 = num1; result2 = num2; return result1,result2; } int main(void){ int times,i,sum,n1,n2; int c1,c2,c3,c4,c5,c6,c7,c8,c9,c10,c11;//counters for each sum printf("Please enter how many times you want to roll the dice.\n") scanf_s("%i",&times); I plan to use counters to count each sum and store the count in an array. I know I need a loop (for) and some conditional statements (if), but my main problem is getting the values from roll_dice and storing them in n1 and n2 so that I can sum them up and store the sum in 'sum'.
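
    Two small C-specific notes on the posted fragment: rand() takes no argument, so the seeding should be a single srand((unsigned)time(NULL)) at the start of main rather than a call inside rolling(); and return result1,result2; uses the comma operator, so only result2 is returned - return the sum instead (or use pointer out-parameters). The overall plan is easy to prototype before finishing the C version; a minimal sketch of the same structure (roll, count the sums 2-12 in an array, bar chart, average), written in Python:

        import random

        def roll_die():
            return random.randint(1, 6)          # one 6-sided die

        def roll_dice():
            return roll_die() + roll_die()       # sum of a pair: 2..12

        def simulate(times):
            counts = [0] * 13                    # indices 2..12 are used
            total = 0
            for _ in range(times):
                s = roll_dice()
                counts[s] += 1
                total += s
            for s in range(2, 13):               # bar chart: one row per sum
                print(f"{s:2d} {'*' * counts[s]}")
            print(f"average = {total / times:.2f}  (ideal is 7)")

        simulate(144)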

    Read the article

  • c permutation of a number

    - by Pkp
    I am writing a program for magic box. As the only way around it is brute force, I wrote a program to compute permutations of a given array using bells algorithm. I wrote in the lines similar to http://programminggeeks.com/c-code-for-permutation/. It does work for array of 3 and 4. It does not work for an arryay of 8 numbers (1,2,3,4,5,6,7,8). I see that the combination 1 2 3 4 5 6 7 8 gets repeated couple of times. Also there are other combinations that gets repeated. I see that certain combinations don't get displayed even. So could someone tell me what is wrong in the program below. Code: include<stdio.h> int len,numperm=1,count=0; display(int a[]){ int i; for(i=0;i<len;i++) printf("%d ",a[i]); printf("\n"); count++; } swap(int *a,int *b){ int temp; temp=*a; *a=*b; *b=temp; } no_of_perm(){ int x; for(x=1;x<=len;x++) numperm=numperm*x; } perm(int a[]){ int x,y; while(count < numperm){ for(x=0;x<len-1;x++){ swap(&a[x],&a[x+1]); display(a); } swap(&a[0],&a[1]); display(a); for(y=len-1;y>0;y--){ swap(&a[y],&a[y-1]); display(a); } swap(&a[len-1],&a[len-2]); display(a); } } main(int argc, char *argv[]){ if(argc<2){ printf("Error\n"); exit(0); } int i,*a=malloc(sizeof(int)*atoi(argv[1])); len=atoi(argv[1]); for(i=0;i<len;i++) a[i]=i+1; no_of_perm(); perm(a); }
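
    The fixed sweep of adjacent swaps in perm() can cycle back through arrangements it has already printed, so stopping after numperm outputs does not guarantee that all n! distinct permutations appeared, which matches the repeats described above. A standard alternative that produces each of the n! permutations exactly once via single swaps is Heap's algorithm; a sketch in Python so the recursion is easy to follow (a C version has the same structure, with an output callback in place of yield):

        def heap_permutations(a, k=None):
            """Heap's algorithm: yields every permutation of list a exactly once."""
            if k is None:
                k = len(a)
            if k == 1:
                yield tuple(a)
                return
            for i in range(k - 1):
                yield from heap_permutations(a, k - 1)
                if k % 2 == 0:
                    a[i], a[k - 1] = a[k - 1], a[i]   # even k: swap i-th with last
                else:
                    a[0], a[k - 1] = a[k - 1], a[0]   # odd k: swap first with last
            yield from heap_permutations(a, k - 1)

        for p in heap_permutations([1, 2, 3]):
            print(p)                                   # 3! = 6 rows, no repeats

        print(sum(1 for _ in heap_permutations(list(range(1, 9)))))   # 40320 == 8!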

    Read the article

  • Solaris 11 Launch Blog Carnival Roundup

    - by constant
    Solaris 11 is here! And together with the official launch activities, a lot of Oracle and non-Oracle bloggers contributed helpful and informative blog articles to help your datacenter go to eleven. Here are some notable blog postings, sorted by category for your Solaris 11 blog-reading pleasure: Getting Started/Overview A lot of people speculated that the official launch of Solaris 11 would be on 11/11 (whatever way you want to turn it), but it actually happened two days earlier. Larry Wake himself offers 11 Reasons Why Oracle Solaris 11 11/11 Isn't Being Released on 11/11/11. Then, Larry goes on with a summary: Oracle Solaris 11: The First Cloud OS gives you a short and sweet rundown of what the major new features of Solaris 11 are. Jeff Victor has his own list of What's New in Oracle Solaris 11. A popular Solaris 11 meme is to write a blog post about 11 favourite features: Jim Laurent's 11 Reasons to Love Solaris 11, Darren Moffat's 11 Favourite Solaris 11 Features, Mike Gerdt's 11 of My Favourite Things! are just three examples of "11 Favourite Things..." type blog posts, I'm sure many more will follow... More official overview content for Solaris 11 is available from the Oracle Tech Network Solaris 11 Portal. Also, check out Rick Ramsey's blog post Solaris 11 Resources for System Administrators on the OTN Blog and his secret 5 Commands That Make Solaris Administration Easier post from the OTN Garage. (Automatic) Installation and the Image Packaging System (IPS) The brand new Image Packaging System (IPS) and the Automatic Installer (IPS), together with numerous other install/packaging/boot/patching features are among the most significant improvements in Solaris 11. But before installing, you may wonder whether Solaris 11 will support your particular set of hardware devices. Again, the OTN Garage comes to the rescue with Rick Ramsey's post How to Find Out Which Devices Are Supported By Solaris 11. Included is a useful guide to all the first steps to get your Solaris 11 system up and running. Tim Foster had a whole handful of blog posts lined up for the launch, teaching you everything you need to know about IPS but didn't dare to ask: The IPS System Repository, IPS Self-assembly - Part 1: Overlays and Part 2: Multiple Packages Delivering Configuration. Watch out for more IPS posts from Tim! If installing packages or upgrading your system from the net makes you uneasy, then you're not alone: Jim Laurent will tech you how Building a Solaris 11 Repository Without Network Connection will make your life easier. Many of you have already peeked into the future by installing Solaris 11 Express. If you're now wondering whether you can upgrade or whether a fresh install is necessary, then check out Alan Hargreaves's post Upgrading Solaris 11 Express b151a with support to Solaris 11. The trick is in upgrading your pkg(1M) first. Networking One of the first things to do after installing Solaris 11 (or any operating system for that matter), is to set it up for networking. Solaris 11 comes with the brand new "Network Auto-Magic" feature which can figure out everything by itself. For those cases where you want to exercise a little more control, Solaris 11 left a few people scratching their heads. Fortunately, Tschokko wrote up this cool blog post: Solaris 11 manual IPv4 & IPv6 configuration right after the launch ceremony. Thanks, Tschokko! 
    And Milek points out a long-awaited networking feature in his post Solaris 11 - hostmodel, one which I know for a fact many customers have been looking forward to: how to "bind" a Solaris 11 system to a specific gateway for a specific IP address it is using. Steffen Weiberle teaches us how to tune the Solaris 11 networking stack the proper way: ipadm(1M). No more fiddling with ndd(1M)! Check out his tutorial on Solaris 11 Network Tunables. And if you want to go even deeper into the networking stack, there's nothing better than DTrace. Alan Maguire teaches you in DTracing TCP Congestion Control how to probe deeply into the Solaris 11 TCP/IP stack, the TCP congestion control part in particular. Don't miss his other DTrace and TCP related blog posts!

    DTrace

    And there we are: DTrace, the king of all observability tools. Long-time DTrace veteran and co-author of The DTrace book*, Brendan Gregg, blogged about Solaris 11 DTrace syscall provider changes. BTW, after you install Solaris 11, check out the DTrace toolkit, which is installed by default in /usr/dtrace/DTT. It is chock full of handy DTrace scripts, many of which were contributed by Brendan himself!

    Security

    Another big theme in Solaris 11, and one that is crucial for the success of any operating system in the cloud, is security. Here are some notable posts in this category: Darren Moffat starts by showing us how to completely get rid of root: Completely Disabling Root Logins on Solaris 11. With no root user, that's one major entry point less to worry about. But that's only the start. In Immutable Zones on Encrypted ZFS, Darren shows us how to double the security of your services: first by locking them into the new Immutable Zones feature, then by encrypting their data using the new ZFS encryption feature. And if you're still missing sudo from your Linux days, Darren again has a solution: Password (PAM) caching for Solaris su - "a la sudo". If you're wondering how much compute power all this encryption will cost you, you're in luck: the Solaris X86 AESNI OpenSSL Engine will make sure you use your Intel CPU's embedded crypto support to its fullest. And if you own a brand new SPARC T4 machine, you're even luckier: it comes with its own SPARC T4 OpenSSL Engine. Dan Anderson's posts show how there really is no excuse not to encrypt any more...

    Developers

    Solaris 11 has a lot to offer developers as well. Ali Bahrami has a series of blog posts covering diverse developer topics: elffile: ELF Specific File Identification Utility, Using Stub Objects and The Stub Proto: Not Just For Stub Objects Anymore, to name a few. BTW, if you're a developer and want to shape the future of Solaris 11, Vijay Tatkar has a hint for you: Oracle (Sun Systems Group) is hiring!

    Desktop and Graphics

    Yes, Solaris 11 is a 100% server OS, but it can also offer a decent desktop environment, especially if you are a developer. Alan Coopersmith starts by discussing S11 X11: ye olde window system in today's new operating system, then Calum Benson shows us around What's new on the Solaris 11 Desktop. Even accessibility is a first-class citizen in the Solaris 11 user interface. Peter Korn celebrates: Accessible Oracle Solaris 11 - released!

    Performance

    Gone are the days of "Slowaris", when Solaris was among the few OSes that "did the right thing" while others cut corners just to win benchmarks. Today, Solaris continues doing the right thing, and it delivers the right performance at the same time. Need proof?
    Check out Brian's BestPerf blog with continuous updates from the benchmarking lab, including Recent Benchmarks Using Oracle Solaris 11!

    Send Me More Solaris 11 Launch Articles!

    These are just a few of the more interesting blog articles that came out around the Solaris 11 launch; I'm sure there are many more! Feel free to post a comment below if you find a particularly interesting blog post that hasn't been listed so far, and share your enthusiasm for Solaris 11!

    *Affiliate link: Buy cool stuff and support this blog at no extra cost. We both win!

    Read the article

< Previous Page | 34 35 36 37 38 39 40 41 42 43  | Next Page >