Search Results

Search found 1058 results on 43 pages for 'compute'.

Page 35/43

  • Matlab matrix translation and rotation multiple times

    - by pinnacler
    I have a map of individual trees from a forest stored as x,y points in a matrix. I call it fixedPositions. It's Cartesian, and (0,0) is the origin. I would like 0/360 degrees to be the top of the screen and 90 degrees to be to the right. Given a velocity and a heading, e.g. 0.5 m/s and 60 degrees (the 2 o'clock position on a watch), how do I rotate the x,y points so that the new origin is centered at (.5cos(60),.5sin(60)) and 60 degrees is now at the top of the screen? Then if I were to give you another heading and speed, e.g. 0 degrees and 2 m/s, it should calculate from the last point, not from the original fixedPositions origin. I've wasted my day trying to figure this out. I wish I had taken matrix algebra, but I'm at a loss. I tried computing cos(30), and even that wouldn't come out correctly; after an hour I realized the trig functions were working in radians.
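
    A sketch of the transformation being asked about, in Python/numpy rather than Matlab (the function and variable names, and the compass-to-math angle handling, are illustrative assumptions):

        import numpy as np

        def step(points, heading_deg, speed, dt=1.0):
            """Shift the map so the new pose is the origin, then rotate it so
            the heading direction points 'up' (+y). Compass convention:
            0 degrees = up, 90 degrees = right."""
            theta = np.radians(90.0 - heading_deg)     # compass bearing -> math angle
            d = speed * dt * np.array([np.cos(theta), np.sin(theta)])
            shifted = points - d                       # new origin at the new pose
            rot = np.radians(heading_deg)              # CCW rotation brings the
            R = np.array([[np.cos(rot), -np.sin(rot)], # heading direction to +y
                          [np.sin(rot),  np.cos(rot)]])
            return shifted @ R.T                       # rotate every row

        fixed_positions = np.array([[1.0, 2.0], [3.0, 4.0]])   # tree map (x, y)
        after = step(fixed_positions, heading_deg=60, speed=0.5)

    Chaining calls applies each step to the output of the previous one, which gives the "calculate from the last point" behaviour.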

    Read the article

  • SQL Server compare table entries for update

    - by Dave
    I have a trade table with several million rows. Each row represents a version of a trade. When I'm given a possibly-new trade, I compare it to the latest version in the trade table: if it has changed I add a new version, otherwise I do nothing. In order to compare the two trades I read the version from the trade table into my application. This doesn't work well when I'm given tens of thousands of possibly-new trades. Even when I batch the reads to pull in 1000 trades at a time and compare them, the whole process can take several minutes, and all the time is spent in the DB. I'm trying to find a way to compare the possibly-new trades to the ones in the trade table without so much I/O. What I've come up with so far is adding a hash column to each row in the trade table, where the hash covers all the trade fields. When I'm given possibly-new trades I compute their hashes, put the values into a temporary table, and then find the ones that are different. This feels very hacky. Is there a better way of doing it? Thanks
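
    As a sketch of that hashing idea (Python for illustration, with hypothetical trade fields; SQL Server itself could compute a comparable digest with HASHBYTES):

        import hashlib

        def trade_hash(trade: dict) -> str:
            """Digest all trade fields in a fixed order, with a separator that
            cannot occur inside a value, so adjacent fields can't blur together."""
            fields = ["trade_id", "quantity", "price", "counterparty"]  # hypothetical
            payload = "\x1f".join(str(trade[f]) for f in fields)
            return hashlib.sha256(payload.encode("utf-8")).hexdigest()

        incoming = {"trade_id": 42, "quantity": 100, "price": 9.95,
                    "counterparty": "ACME"}
        print(trade_hash(incoming))   # compare against the stored hash column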

    Read the article

  • Code Golf: Find the possible ways on a numpad

    - by ikar
    I was bored today at school, so I tried to amuse myself using my calculator and a "game" I've invented, which isn't really a game but keeps the boredom away. Also, some time has passed since the last real code golf here, so I decided to create this one. Imagine a simplified numpad like you know it from your phone (I'll leave the 0 out for this code golf, as it kind of destroys all the fun):

        1 2 3
        4 5 6
        7 8 9

    Now, the rules of the game have always been:

    - At the end, every digit must have been visited exactly once
    - You can start at any digit you want
    - You can always move one digit up, down, left or right. You can't move diagonally!

    There are quite a lot of possible ways (or not; I haven't found out yet). Here is a trivial example:

        > > v
        v < <
        > > |

    Symbols:

        >   Go right
        <   Go left
        ^   Go up
        v   Go down
        |   End of the way

    Example solutions (program output can either be the numbers pressed in the right order from beginning to end, or an (ASCII) picture like the one above):

        147852369
        569874123
        523698741

    So if we speak out the picture above, it would be: start at 1, move right to 2, move right to 3, go down to 6, go left to 5, go left to 4, go down to 7, go right to 8, then go right to 9, and we are finished! Many different ways are possible: you could as well start at 5 and go around it in a circle. So the task is: write a program that can compute (using brute force or whatever) the possible solutions for the numpad problem described above. (Friendly rhetorical question with smiley removed because it made some people think that this is homework.)
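
    A non-golfed sketch of that brute force (Python; a depth-first search over the pad's up/down/left/right adjacency, assuming the goal is to enumerate every path that visits each digit exactly once):

        def numpad_paths():
            # neighbours of each digit under up/down/left/right moves
            adj = {1: [2, 4], 2: [1, 3, 5], 3: [2, 6],
                   4: [1, 5, 7], 5: [2, 4, 6, 8], 6: [3, 5, 9],
                   7: [4, 8], 8: [5, 7, 9], 9: [6, 8]}
            paths = []

            def dfs(path):
                if len(path) == 9:                     # all nine digits visited once
                    paths.append("".join(map(str, path)))
                    return
                for nxt in adj[path[-1]]:
                    if nxt not in path:
                        dfs(path + [nxt])

            for start in range(1, 10):                 # any starting digit is allowed
                dfs([start])
            return paths

        solutions = numpad_paths()
        print(len(solutions))          # number of distinct key orders
        print(solutions[:3])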

    Read the article

  • Does "epsilon" really guarantees anything in floating-point computations?!

    - by Michal Czardybon
    To keep the problem short, let's say I want to compute the expression a / (b - c) on floats. To make sure the result is meaningful, I can check whether b and c are unequal:

        float eps = std::numeric_limits<float>::epsilon();
        if ((b - c) > eps || (c - b) > eps) {
            return a / (b - c);
        }

    But my tests show this is not enough to guarantee meaningful results, nor to avoid refusing a result when one is possible.

    Case 1: a = 1.0f; b = 0.00000003f; c = 0.00000002f. Result: the if condition is NOT met, but the expression would produce a correct result of 100000008 (within float precision).

    Case 2: a = 1e33f; b = 0.000003f; c = 0.000002f. Result: the if condition is met, but the expression produces the meaningless result +1.#INF00.

    I found it much more reliable to check the result, not the arguments:

        const float INF = std::numeric_limits<float>::infinity();
        float x = a / (b - c);
        if (-INF < x && x < INF) {
            return x;
        }

    But then what is epsilon for, and why does everyone say epsilon is good to use?
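
    One reading of the failures above: epsilon is the relative spacing of floating-point numbers near 1.0, so comparing a difference against it only means something after scaling by the operands' magnitude, and even then it says nothing about overflow in the final division. A sketch of the scaled check (Python doubles for brevity; the reasoning carries over to C++ floats):

        import sys

        def safe_divide(a, b, c):
            eps = sys.float_info.epsilon            # relative spacing near 1.0
            # scale epsilon by the operands before comparing the difference
            if abs(b - c) <= eps * max(abs(b), abs(c)):
                return None                         # b and c are indistinguishable
            return a / (b - c)

        # Case 1 from above: tiny absolute difference, large relative difference
        print(safe_divide(1.0, 3e-8, 2e-8))         # divides fine

    This fixes Case 1, but Case 2 is an overflow of the quotient itself, which no test on b and c alone can rule out; that is where checking the computed result for infinity remains reasonable.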

    Read the article

  • linear combinations in python/numpy

    - by nmaxwell
    Greetings! I'm not sure if this is a dumb question or not. Let's say I have 3 numpy arrays, A1, A2, A3, and 3 floats, c1, c2, c3, and I'd like to evaluate

        B = A1*c1 + A2*c2 + A3*c3

    Will numpy compute this as, for example,

        E1 = A1*c1
        E2 = A2*c2
        E3 = A3*c3
        D1 = E1+E2
        B = D1+E3

    or is it more clever than that? In C++ I had a neat way to abstract this kind of operation. I defined a series of general 'LC' template functions, LC for linear combination, like:

        template<class T, class D>
        void LC(T& R, T& L0, D C0, T& L1, D C1, T& L2, D C2) {
            R = L0*C0 + L1*C1 + L2*C2;
        }

    and then specialized this for various types. For instance, for an array the code looked like:

        for (int i = 0; i < L0.length; i++)
            R.array[i] = L0.array[i]*C0 + L1.array[i]*C1 + L2.array[i]*C2;

    thus avoiding having to create new intermediate arrays. This may look messy, but it worked really well. I could do something similar in Python, but I'm not sure if it's necessary. Thanks in advance for any insight. -nick
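
    For what it's worth, a plain B = A1*c1 + A2*c2 + A3*c3 in numpy does materialize intermediate arrays much like the step-by-step expansion above. A sketch of trimming the temporaries with out= arguments (illustrative; whether it pays off depends on array sizes):

        import numpy as np

        def lc3(a1, c1, a2, c2, a3, c3):
            """B = a1*c1 + a2*c2 + a3*c3 with one reusable scratch buffer
            instead of a fresh temporary per term."""
            out = a1 * c1                   # one allocation for the result
            tmp = np.empty_like(out)        # one scratch array, reused twice
            np.multiply(a2, c2, out=tmp)
            out += tmp
            np.multiply(a3, c3, out=tmp)
            out += tmp
            return out

        A1, A2, A3 = np.ones(5), np.arange(5.0), np.full(5, 2.0)
        print(lc3(A1, 0.5, A2, 2.0, A3, -1.0))

    Libraries such as numexpr go further, compiling the whole expression so the element-wise loop runs once with no temporaries at all.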

    Read the article

  • Object equality in context of hibernate / webapp

    - by bert
    How do you handle object equality for Java objects managed by Hibernate? In the 'Hibernate in Action' book they say that one should favor business keys over surrogate keys. Most of the time, I do not have a business key. Think of addresses mapped to a person. The addresses are kept in a Set and displayed in a Wicket RefreshingView (with a ReuseIfEquals strategy). I could either use the surrogate id or use all fields in the equals() and hashCode() functions. The problem is that those fields change during the lifetime of the object, either because the user entered some data or because the id changes due to JPA merge() being called inside the OSIV (Open Session in View) filter. My understanding of the equals() and hashCode() contract is that they should not change during the lifetime of an object. What I have tried so far:

    - equals() based on hashCode(), which uses the database id (or super.hashCode() if the id is null). Problem: new addresses start with a null id but get an id when attached to a person, and this person gets merged() (re-attached) in the OSIV filter.
    - Lazily compute the hashcode when hashCode() is first called and make that hashcode @Transient. Does not work, as merge() returns a new object and the hashcode does not get copied over.

    What I would need is an ID that gets assigned during object creation, I think. What would be my options here? I don't want to introduce some additional persistent property. Is there a way to explicitly tell JPA to assign an ID to an object? Regards

    Read the article

  • caching previous return values of procedures in scheme

    - by Brock
    In Chapter 16 of "The Seasoned Schemer", the authors define a recursive procedure "depth", which returns 'pizza nested in n lists, e.g. (depth 3) is (((pizza))). They then improve it as "depthM", which caches its return values using set! in the lists Ns and Rs, which together form a lookup table, so you don't have to recurse all the way down if you reach a return value you've seen before. E.g., if I've already computed (depthM 8), then when I later compute (depthM 9) I just look up the return value of (depthM 8) and cons it onto null, instead of recursing all the way down to (depthM 0). But then they move the Ns and Rs inside the procedure and initialize them to null with "let". Why doesn't this completely defeat the point of caching the return values? From a bit of experimentation, it appears that the Ns and Rs are being reinitialized on every call to "depthM". Am I misunderstanding their point? I guess my question is really this: is there a way in Scheme to have lexically scoped variables preserve their values between calls to a procedure, like you can do in Perl 5.10 with "state" variables?
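
    For comparison, the same idea in Python: a variable created once in an enclosing scope, with the worker function closed over it, survives across calls. That is what moving Ns and Rs inside a per-call "let" destroys; the Scheme analogue of the sketch below is defining depthM as the lambda returned from inside the let.

        def make_depth():
            cache = {}                     # created once, shared by every call
            def depth(n):
                if n == 0:
                    return "pizza"
                if n not in cache:
                    cache[n] = [depth(n - 1)]
                return cache[n]
            return depth

        depth_m = make_depth()
        print(depth_m(3))                  # [[['pizza']]]
        print(depth_m(4))                  # reuses the cached value for 3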

    Read the article

  • MS Access SQL - where clause to get year from date based on the year - data located in MS Access form

    - by primus285
    OK, back again. I have a problem getting a drop-down list to populate based on information in two fields. I have got the SQL correct to select just one year if I put DateValue('01/01/2001') in both places, but now I am trying to get it to grab the year from the MS Access form, another drop-down named "cboYear". I'd hate to have to do something in VB unless necessary. So far I have gotten this to pull up something (it's always incorrect):

        SELECT DISTINCT Database_New.ASEC FROM Database_New
        WHERE Database_New.Date >= DateValue('01/01/' & [cboYear])
          And Database_New.Date <= DateValue('12/31/' & [cboYear]);

    and

        SELECT DISTINCT Database_New.ASEC FROM Database_New
        WHERE Database_New.Date >= DateValue('01/01/' + [cboYear])
          And Database_New.Date <= DateValue('12/31/' + [cboYear]);

        SELECT DISTINCT Database_New.ASEC FROM Database_New
        WHERE Database_New.Date >= DateValue('01/01/' AND [cboYear])
          And Database_New.Date <= DateValue('12/31/' AND [cboYear]);

    The last two give errors saying the expression is too complex to compute. It's probably something simple, but where do I go from here?

    Read the article

  • Is XSLT worth investing time in and are there any actual alternatives?

    - by Keeno
    I realize there have been a few other questions on this topic, and people say "use your language of choice to manipulate the XML" etc., but none quite fit my question exactly. Firstly, the scope of the project: we want to develop platform-independent e-learning. Currently it's a bunch of HTML pages, but as they grow and develop they become hard to maintain. The idea: generate an XML file + schema, then produce some XSLT files that process the XML into the e-learning modules. XML to HTML via XSLT. Why:

    - We would like the flexibility to easily reformat the content (I realize CSS is a viable alternative here)
    - If we decide to alter the pages' layout or functionality in any way, I'm guessing altering the "shared" XSLT files would be easier than updating the HTML files
    - So far, we have about 30 modules, with up to 10-30 pages each
    - Depending on some "parameters" we could output drastically different page layouts/structures, above and beyond what CSS can do

    Now, all this has to be platform independent and able to run "offline", i.e. without a server powering the HTML. Negatives I've read so far for XSLT:

    - Overhead? Not exactly sure why... is it the compute power needed to convert to HTML?
    - Difficult to learn
    - Better alternatives exist

    Now, what I would like to know exactly is: are there actually any viable alternatives for this "offline"? Am I going about it in the correct manner? Do you guys have any advice or alternatives? Thanks!

    Read the article

  • Calculating distance from latitude, longitude and height using a geocentric co-ordinate system

    - by Sarge
    I've implemented this method in Javascript and I'm roughly 2.5% out, and I'd like to understand why. My input data is an array of points represented as latitude, longitude and height above the WGS84 ellipsoid. These points are taken from data collected from a wrist-mounted GPS device during a marathon race. My algorithm was to convert each point to Cartesian geocentric co-ordinates and then compute the Euclidean distance (cf. Pythagoras). Cartesian geocentric is also known as Earth Centred Earth Fixed, i.e. it's an X, Y, Z co-ordinate system which rotates with the earth. My test data was the data from a marathon, so the distance should be very close to 42.26km. However, the distance comes to about 43.4km. I've tried various approaches and nothing changes the result by more than a metre; e.g. I replaced the height data with data from the NASA SRTM mission, I've set the height to zero, etc. Using Google, I found two points in the literature where lat, lon, height had been transformed, and my transformation algorithm matches. What could explain this? Am I expecting too much from Javascript's double representation? (The X, Y, Z numbers are very big, but the differences between two points are very small.) My alternative is to move to computing the geodesic across the WGS84 ellipsoid using Vincenty's algorithm (or similar) and then calculating the Euclidean distance with the two heights, but this seems inaccurate. Thanks in advance for your help!
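
    For reference, a sketch of the geodetic-to-ECEF conversion described above, using the standard WGS84 constants (Python; the track coordinates are made-up placeholders). A separate thing worth checking: summing chords over many short, noisy GPS segments systematically overestimates path length, which could plausibly account for an error of a few percent.

        import math

        A  = 6378137.0              # WGS84 semi-major axis, metres
        F  = 1 / 298.257223563      # WGS84 flattening
        E2 = F * (2 - F)            # first eccentricity squared

        def geodetic_to_ecef(lat_deg, lon_deg, h):
            lat, lon = math.radians(lat_deg), math.radians(lon_deg)
            n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)   # prime vertical radius
            x = (n + h) * math.cos(lat) * math.cos(lon)
            y = (n + h) * math.cos(lat) * math.sin(lon)
            z = (n * (1 - E2) + h) * math.sin(lat)
            return x, y, z

        def chord(p, q):
            return math.dist(geodetic_to_ecef(*p), geodetic_to_ecef(*q))

        track = [(51.5000, -0.1000, 20.0),   # (lat, lon, height) placeholders
                 (51.5005, -0.1002, 21.0)]
        print(sum(chord(a, b) for a, b in zip(track, track[1:])))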

    Read the article

  • C++ Matrix class hierarchy

    - by bpw1621
    Should a matrix software library have a root class (e.g., MatrixBase) from which more specialized (or more constrained) matrix classes (e.g., SparseMatrix, UpperTriangularMatrix, etc.) derive? If so, should the derived classes derive publicly, protectedly or privately? If not, should they be composed with an implementation class encapsulating common functionality and be otherwise unrelated? Something else? I was having a conversation about this with a software developer colleague (I am not one, per se) who mentioned that it is a common design mistake to derive a more restricted class from a more general one; he used the example of how it is not a good idea to derive a Circle class from an Ellipse class, which is similar to the matrix design issue, even though it is true that a SparseMatrix "IS A" MatrixBase. The interface presented by both the base and derived classes should be the same for basic operations; for specialized operations, a derived class would have additional functionality that might not be possible to implement for an arbitrary MatrixBase object. For example, we can compute the Cholesky decomposition only for a PositiveDefiniteMatrix class object; however, multiplication by a scalar should work the same way for both the base and derived classes. Also, even if the underlying data storage implementation differs, operator()(int,int) should work as expected for any type of matrix class. I have started looking at a few open-source matrix libraries, and it appears this is a mixed bag (or maybe I'm looking at a mixed bag of libraries). I am planning on helping out with a refactoring of a math library where this has been a point of contention, and I'd like opinions (unless there really is an objective right answer to this question) as to which design philosophy would be best, and the pros and cons of any reasonable approach.

    Read the article

  • Merge computed data from two tables back into one of them

    - by Tyler McHenry
    I have the following situation (as a reduced example). Two tables, Measures1 and Measures2, each of which stores an ID, a Weight in grams, and optionally a Volume in fluid ounces. (In reality, Measures1 has a good deal of other data that is irrelevant here.)

    Contents of Measures1:

        +----+----------+--------+
        | ID | Weight   | Volume |
        +----+----------+--------+
        |  1 | 100.0000 | NULL   |
        |  2 | 200.0000 | NULL   |
        |  3 | 150.0000 | NULL   |
        |  4 | 325.0000 | NULL   |
        +----+----------+--------+

    Contents of Measures2:

        +----+----------+----------+
        | ID | Weight   | Volume   |
        +----+----------+----------+
        |  1 |  75.0000 |  10.0000 |
        |  2 | 400.0000 |  64.0000 |
        |  3 | 100.0000 |  22.0000 |
        |  4 | 500.0000 | 100.0000 |
        +----+----------+----------+

    These tables describe equivalent weights and volumes of a substance. E.g. 10 fluid ounces of substance 1 weighs 75 grams. The IDs are related: ID 1 in Measures1 is the same substance as ID 1 in Measures2. What I want to do is fill in the NULL volumes in Measures1 using the information in Measures2, but keeping the weights from Measures1 (then, ultimately, I can drop the Measures2 table, as it will be redundant). For the sake of simplicity, assume that all volumes in Measures1 are NULL and all volumes in Measures2 are not. I can compute the volumes I want to fill in with the following query:

        SELECT Measures1.ID, Measures1.Weight,
               (Measures2.Volume * (Measures1.Weight / Measures2.Weight)) AS DesiredVolume
        FROM Measures1 JOIN Measures2 ON Measures1.ID = Measures2.ID;

    Producing:

        +----+----------+-----------------+
        | ID | Weight   | DesiredVolume   |
        +----+----------+-----------------+
        |  4 | 325.0000 | 65.000000000000 |
        |  3 | 150.0000 | 33.000000000000 |
        |  2 | 200.0000 | 32.000000000000 |
        |  1 | 100.0000 | 13.333333333333 |
        +----+----------+-----------------+

    But I am at a loss for how to actually insert these computed values into the Measures1 table. Preferably, I would like to be able to do it with a single query, rather than writing a script or stored procedure that iterates through every ID in Measures1. But even then I am worried that this might not be possible, because the MySQL documentation says that you can't use a table in an UPDATE query and a SELECT subquery at the same time, and I think any solution would need to do that. I know that one workaround might be to create a new table with the results of the above query (also selecting all of the other non-Volume fields in Measures1) and then drop both tables and replace Measures1 with the newly created table, but I was wondering if there is a better way to do it that I am missing.

    Read the article

  • Java: how to handle the design when the template method throws an exception but the overridden method does not

    - by jiafu
    While coding, I'm trying to solve this puzzle: how should I design the class/methods when InputStreamDigestComputor needs to throw IOException? It seems we can't use this design structure, because the template method throws the exception but the overridden method does not. But if I change the overridden method to throw it, that will make every other subclass throw it too. Can anyone make a good suggestion for this case?

        abstract class DigestComputor {
            String compute(DigestAlgorithm algorithm) {
                MessageDigest instance;
                try {
                    instance = MessageDigest.getInstance(algorithm.toString());
                    updateMessageDigest(instance);
                    return hex(instance.digest());
                } catch (NoSuchAlgorithmException e) {
                    LOG.error(e.getMessage(), e);
                    throw new UnsupportedOperationException(e.getMessage(), e);
                }
            }

            abstract void updateMessageDigest(MessageDigest instance);
        }

        class ByteBufferDigestComputor extends DigestComputor {
            private final ByteBuffer byteBuffer;

            public ByteBufferDigestComputor(ByteBuffer byteBuffer) {
                super();
                this.byteBuffer = byteBuffer;
            }

            @Override
            void updateMessageDigest(MessageDigest instance) {
                instance.update(byteBuffer);
            }
        }

        class InputStreamDigestComputor extends DigestComputor {
            // This place has an error, due to the exception. If I change the
            // overridden method to throw it, every caller will have to handle
            // the exception.
            @Override
            void updateMessageDigest(MessageDigest instance) {
                throw new IOException();
            }
        }

    Read the article

  • Problem with Boost::Asio for C++

    - by Martin Lauridsen
    Hi there. For my bachelor's thesis, I am implementing a distributed version of an algorithm for factoring large integers (finding the prime factorisation). This has applications in e.g. the security of the RSA cryptosystem. My vision is that clients (Linux or Windows) will download an application and compute some numbers (these are independent, and thus suited for parallelization). The numbers (not found very often) will be sent to a master server, which collects them. Once enough numbers have been collected by the master server, it will do the rest of the computation, which cannot be easily parallelized. Anyhow, to the technicalities. I was thinking of using Boost::Asio for a socket client/server implementation, for the clients' communication with the master server. Since I want to compile for both Linux and Windows, I thought Windows would be as good a place to start as any. So I downloaded the Boost library and compiled it as it said on the Boost Getting Started page:

        bootstrap
        .\bjam

    It all compiled just fine. Then I tried to compile one of the tutorial examples, client.cpp, from Asio (here.. edit: can't post link because of restrictions). I am using the Visual C++ compiler from Microsoft Visual Studio 2008, like this:

        cl /EHsc /I D:\Downloads\boost_1_42_0 client.cpp

    But I get this error:

        /out:client.exe
        client.obj
        LINK : fatal error LNK1104: cannot open file 'libboost_system-vc90-mt-s-1_42.lib'

    Does anyone have any idea what could be wrong, or how I could move forward? I have been trying pretty much all week to get a simple client/server socket program for C++ working, but with no luck. Serious frustration kicking in. Thank you in advance.

    Read the article

  • How does the receiver of a cipher text know the IV used for encryption?

    - by PatrickL
    If a random IV is used in encrypting plain text, how does the receiver of the cipher text know what the IV is in order to decrypt it? This is a follow-up question to a response to a previous stackoverflow question on IVs, here:

        The IV allows for plaintext to be encrypted such that the encrypted text is harder to decrypt for an attacker. Each bit of IV you use will double the possibilities of encrypted text from a given plain text. The point is that the attacker doesn't know what the IV is and therefore must compute every possible IV for a given plain text to find the matching cipher text. In this way, the IV acts like a password salt. Most commonly, an IV is used with a chaining cipher (either a stream or block cipher). ... So, if you have a random IV used to encrypt the plain text, how do you decrypt it? Simple. Pass the IV (in plain text) along with your encrypted text.

    Wait. You just said the IV is randomly generated. Then why pass it as plain text along with the encrypted text?
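
    A sketch of that usual pattern, with a fresh random IV per message shipped in the clear in front of the ciphertext (Python with the third-party cryptography package; the key handling and the choice of CTR mode here are illustrative assumptions):

        import os
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        IV_LEN = 16  # AES block size in bytes

        def encrypt(key: bytes, plaintext: bytes) -> bytes:
            iv = os.urandom(IV_LEN)                    # fresh random IV per message
            enc = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()
            return iv + enc.update(plaintext) + enc.finalize()   # IV rides in the clear

        def decrypt(key: bytes, blob: bytes) -> bytes:
            iv, ct = blob[:IV_LEN], blob[IV_LEN:]      # receiver reads it back off
            dec = Cipher(algorithms.AES(key), modes.CTR(iv)).decryptor()
            return dec.update(ct) + dec.finalize()

        key = os.urandom(32)
        assert decrypt(key, encrypt(key, b"attack at dawn")) == b"attack at dawn"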

    Read the article

  • minimal cover for functional dependencies

    - by user2975836
    I have the following problem. Given the dependencies:

        AB -> CD
        H -> B
        G -> DA
        CD -> EF
        A -> HJ
        J -> G

    I understand the first step (break down the right-hand sides) and get the following result:

        AB -> C
        AB -> D
        H -> B
        G -> D
        G -> A
        CD -> E
        CD -> F
        A -> H
        A -> J
        J -> G

    I also understand that A -> H and H -> B, therefore I can remove the B from AB -> C and AB -> D to get:

        A -> C
        A -> D
        H -> B
        G -> D
        G -> A
        CD -> E
        CD -> F
        A -> H
        A -> J
        J -> G

    The step that follows (reducing the left-hand sides) is what I can't compute. Any help will be greatly appreciated.
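
    Both the left-side reduction and the final removal of redundant dependencies come down to attribute closures: a dependency X -> A is implied by the others exactly when A lands in the closure X+ computed without it. A small Python sketch of that check (the tuple encoding of the FDs is an illustrative assumption):

        def closure(attrs, fds):
            """X+ : every attribute determined by attrs under the given FDs."""
            result = set(attrs)
            changed = True
            while changed:
                changed = False
                for lhs, rhs in fds:
                    if set(lhs) <= result and not set(rhs) <= result:
                        result |= set(rhs)
                        changed = True
            return result

        fds = [("A", "C"), ("A", "D"), ("H", "B"), ("G", "D"), ("G", "A"),
               ("CD", "E"), ("CD", "F"), ("A", "H"), ("A", "J"), ("J", "G")]

        # e.g. is G -> D redundant?  Compute G+ from the remaining FDs:
        rest = [fd for fd in fds if fd != ("G", "D")]
        print("D" in closure("G", rest))   # True: G -> A -> D, so G -> D can go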

    Read the article

  • Quantifying the amount of change in a git diff?

    - by Alex Feinman
    I use git for a slightly unusual purpose--it stores my text as I write fiction. (I know, I know...geeky.) I am trying to keep track of productivity, and want to measure the degree of difference between subsequent commits. The writer's proxy for "work" is "words written", at least during the creation stage. I can't use straight word count as it ignores editing and compression, both vital parts of writing. I think I want to track: (words added)+(words removed) which will double-count (words changed), but I'm okay with that. It'd be great to type some magic incantation and have git report this distance metric for any two revisions. However, git diffs are patches, which show entire lines even if you've only twiddled one character on the line; I don't want that, especially since my 'lines' are paragraphs. Ideally I'd even be able to specify what I mean by "word" (though \W+ would probably be acceptable). Is there a flag to git-diff to give diffs on a word-by-word basis? Alternately, is there a solution using standard command-line tools to compute the metric above?
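
    git does have word-level output (git diff --word-diff), but the metric above is also easy to compute directly; here is a sketch with Python's difflib, given the old and new text of a file (the \w+ word pattern is a placeholder to adjust to taste):

        import difflib
        import re

        def word_distance(old_text: str, new_text: str) -> int:
            """(words added) + (words removed), ignoring line structure."""
            words = lambda t: re.findall(r"\w+", t)
            a, b = words(old_text), words(new_text)
            matcher = difflib.SequenceMatcher(a=a, b=b)
            added = removed = 0
            for op, i1, i2, j1, j2 in matcher.get_opcodes():
                if op in ("replace", "delete"):
                    removed += i2 - i1
                if op in ("replace", "insert"):
                    added += j2 - j1
            return added + removed

        print(word_distance("the cat sat", "the fat cat sat down"))   # 2

    Feeding it two revisions is then a matter of reading the file contents out of git, e.g. via git show <rev>:<path> in a subprocess.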

    Read the article

  • How to generate a monotone MART ROC in R?

    - by user1521587
    I am using R and applying MART (the algorithm for multiple additive regression trees) on a training set to build prediction models. When I look at the ROC curve, it is not monotone. I would be grateful if someone could help me with how to fix this. My guess is that the issue is that initially MART generates n trees, and if these trees are not the same for all the models I am building, the results will not be comparable. Here are the steps I take:

    1) Fix the false-negative cost, c_fn. Let cost = c(0, 1, c_fn, 0).

    2) Build the mart model with the following line:

        mart(x, y, lx, martmode='class', niter=2000, cost.mtx=cost)

    where x is the matrix of training-set variables, y is the observation matrix, and lx is the matrix which specifies which of the variables in x are numerical and which categorical.

    3) Predict the test-set observations using the mart model found in step 2:

        y_pred = martpred(x_test, probs=T)

    4) Compute the false-positive and true-positive rates as follows:

        t = 1/(1+c_fn)   # threshold from the Bayes optimal rule, with c_fp=1
        p_0  = length(which(y_test==1))/dim(y_test)[1]
        p_01 = sum(1*(y_pred[,2] > t & y_test==0))/dim(y_test)[1]
        p_11 = sum(1*(y_pred[,2] > t & y_test==1))/dim(y_test)[1]
        p_fp = p_01/(1-p_0)
        p_tp = p_11/p_0

    5) Repeat steps 1-4 for a new false-negative cost.

    Read the article

  • The correct usage of nested #pragma omp for directives

    - by GoldenLee
    The following code ran like a charm before OpenMP parallelization was applied; now, it is stuck in an endless loop! I'm sure that results from my incorrect use of the OpenMP directives. Would you please show me the correct way? Thank you very much.

        #pragma omp parallel for
        for (int nY = nYTop; nY <= nYBottom; nY++)
        {
            for (int nX = nXLeft; nX <= nXRight; nX++)
            {
                // Use look-up table for performance
                dLon = theApp.m_LonLatLUT.LonGrid()[nY][nX] + m_FavoriteSVISSRParams.m_dNadirLon;
                dLat = theApp.m_LonLatLUT.LatGrid()[nY][nX];

                // If you don't want to use the longitude/latitude look-up table,
                // uncomment the following line
                //NOMGeoLocate.XYToGEO(dLon, dLat, nX, nY);

                if (dLon > 180 || dLat > 180)
                {
                    continue;
                }
                if (Navigation.GeoToXY(dX, dY, dLon, dLat, 0) > 0)
                {
                    continue;
                }

                // Skip void data scanline
                dY = dY - nScanlineOffset;

                // Compute coefficients as well as the four neighboring points' values
                nX1 = int(dX);
                nX2 = nX1 + 1;
                nY1 = int(dY);
                nY2 = nY1 + 1;
                dCx = dX - nX1;
                dCy = dY - nY1;
                dP1 = pIRChannelData->operator [](nY1)[nX1];
                dP2 = pIRChannelData->operator [](nY1)[nX2];
                dP3 = pIRChannelData->operator [](nY2)[nX1];
                dP4 = pIRChannelData->operator [](nY2)[nX2];

                // Bilinear interpolation
                usNomDataBlock[nY][nX] = (unsigned short)BilinearInterpolation(dCx, dCy, dP1, dP2, dP3, dP4);
            }
        }

    Read the article

  • Permutations distinct under given symmetry (Mathematica 8 group theory)

    - by Yaroslav Bulatov
    Given a list of integers like {2,1,1,0}, I'd like to list all permutations of that list that are not equivalent under a given group. For instance, using the symmetry of the square, the result would be {{2, 1, 1, 0}, {2, 1, 0, 1}}. The approach below (Mathematica 8) generates all permutations, then weeds out the equivalent ones. I can't use it because I can't afford to generate all permutations; is there a more efficient way? Update: actually, the bottleneck is in DeleteCases. The following list

        {2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 0, 0, 0}

    has about a million permutations and takes 0.1 seconds to compute. Apparently there are supposed to be 1292 orderings after removing symmetries, but my approach doesn't finish in 10 minutes.

        removeEquivalent[{}] := {};
        removeEquivalent[list_] := (
          Sow[First[list]];
          equivalents = Permute[First[list], #] & /@ GroupElements[group];
          DeleteCases[list, Alternatives @@ equivalents]
        );

        nonequivalentPermutations[list_] := (
          reaped = Reap@FixedPoint[removeEquivalent, Permutations@list];
          reaped[[2, 1]]
        );

        group = DihedralGroup[4];
        nonequivalentPermutations[{2, 1, 1, 0}]

    Read the article

  • How can I programmatically position a view using relative points?

    - by Steve Madsen
    What is the best way to position a view relative to the size of its superview when the bounds of the superview are not yet known? I am trying to avoid hard-coding coordinates if at all possible. Perhaps this is silly, and if so, that's a perfectly acceptable answer. I've run into this many times when working with custom UI. The most recent example is that I'm trying to replace the UINavigationItem plain-text title with a custom view. I want that view to fill the superview, but in addition, I want a UIActivityIndicatorView on the right side, inset about 2 pixels and centered vertically. Here's the code:

        - (void) viewDidLoad
        {
            [super viewDidLoad];

            customTitleView = [[UIView alloc] initWithFrame:CGRectZero];
            customTitleView.autoresizingMask = UIViewAutoresizingFlexibleHeight | UIViewAutoresizingFlexibleWidth;

            titleLabel = [[UILabel alloc] initWithFrame:CGRectZero];
            titleLabel.autoresizingMask = UIViewAutoresizingFlexibleHeight | UIViewAutoresizingFlexibleWidth;
            titleLabel.lineBreakMode = UILineBreakModeWordWrap;
            titleLabel.numberOfLines = 2;
            titleLabel.minimumFontSize = 11.0;
            titleLabel.font = [UIFont systemFontOfSize:17.0];
            titleLabel.adjustsFontSizeToFitWidth = YES;
            [customTitleView addSubview:titleLabel];

            spinnerView = [[UIActivityIndicatorView alloc] initWithActivityIndicatorStyle:UIActivityIndicatorViewStyleWhite];
            spinnerView.center = CGPointMake(customTitleView.bounds.size.width - (spinnerView.bounds.size.width / 2) - 2,
                                             customTitleView.bounds.size.height / 2);
            spinnerView.hidesWhenStopped = YES;
            [customTitleView addSubview:spinnerView];

            self.navigationItem.titleView = customTitleView;
            [customTitleView release];
        }

    Here's my problem: at the time this code runs, customTitleView.bounds is still all zeroes. The autoresizing mask hasn't had a chance to do its thing yet, but I very much want those values so that I can compute the relative positions of other subviews (here, the activity indicator). Is this possible without being ugly?

    Read the article

  • MD5CryptoServiceProvider ComputeHash Issues between VS 2003 and VS 2008

    - by owensoroke
    I have a database application that generates an MD5 hash and compares the hash value to a value in our DB (SQL 2K). The original application was written in Visual Studio 2003, and a deployed version has been working for years. Recently, some new machines on the .NET Framework 3.5 have been having unrelated issues with our runtime. This has forced us to port our code path from Visual Studio 2003 to Visual Studio 2008. Since that time, the hash produced by the code is different from the values in the database. The original call to the function posted in the code is:

        RemoveInvalidPasswordCharactersFromHashedPassword(Text_Scrub(GenerateMD5Hash(strPSW)))

    I am looking for expert guidance as to whether the MD5 methods have changed since VS 2003 (causing this point of failure), or where other possible problems may be originating. I realize this may not be the best way to hash, but ultimately any change to the MD5 code would force us to change some 300 values in our DB table and would cost us a lot of time. In addition, I am trying to avoid having to redeploy all of the functioning versions of this application. I am more than happy to post other code, including the RemoveInvalidPasswordCharactersFromHashedPassword function or our Text_Scrub, if it is necessary to receive appropriate feedback. Thank you in advance for your input.

        Public Function GenerateMD5Hash(ByVal strInput As String) As String
            Dim md5Provider As MD5

            ' generate bytes for the input string
            Dim inputData() As Byte = ASCIIEncoding.ASCII.GetBytes(strInput)

            ' compute MD5 hash
            md5Provider = New MD5CryptoServiceProvider
            Dim hashResult() As Byte = md5Provider.ComputeHash(inputData)

            Return ASCIIEncoding.ASCII.GetString(hashResult)
        End Function

    Read the article

  • Triangulation & Direct linear transform

    - by srand
    Following Hartley/Zisserman's Multiple View Geometry, Algorithm 12: the optimal triangulation method (p. 318), I got the corresponding image points xhat1 and xhat2 (step 10). In step 11, one needs to compute the 3D point Xhat. One such method is the Direct Linear Transform (DLT), mentioned in 12.2 (p. 312) and 4.1 (p. 88). The homogeneous method (DLT), pp. 312-313, states that it finds the solution as the unit singular vector corresponding to the smallest singular value of A. Thus:

        A = [xhat1(1) * P1(3,:)' - P1(1,:)' ;
             xhat1(2) * P1(3,:)' - P1(2,:)' ;
             xhat2(1) * P2(3,:)' - P2(1,:)' ;
             xhat2(2) * P2(3,:)' - P2(2,:)' ];
        [Ua Ea Va] = svd(A);
        Xhat = Va(:,end);
        plot3(Xhat(1), Xhat(2), Xhat(3), 'r.');

    However, A is a 16x1 matrix, resulting in a Va that is 1x1. What am I doing wrong (and what is the fix) in getting the 3D point? For what it's worth, sample data:

        xhat1 = 1.0e+009 *
            4.9973
           -0.2024
            0.0027

        xhat2 = 1.0e+011 *
            2.0729
            2.6624
            0.0098

        P1 =  699.6674         0  392.1170         0
                    0  701.6136  304.0275         0
                    0         0    1.0000         0

        P2 = 1.0e+003 *
           -0.7845    0.0508   -0.1592    1.8619
           -0.1379    0.7338    0.1649    0.6825
           -0.0006    0.0001    0.0008    0.0010

        A =   <- my computation
            1.0e+011 *
           -0.0000         0    0.0500         0
                 0   -0.0000   -0.0020         0
           -1.3369    0.2563    1.5634    2.0729
           -1.7170    0.3292    2.0079    2.6624
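
    For comparison, a numpy sketch of the homogeneous DLT (assuming image points with the homogeneous scale already divided out): each image point contributes two 1x4 rows, so A should come out 4x4. The Matlab snippet's transposes turn those rows into 4x1 columns, and the semicolons then stack them into the 16x1 A observed above.

        import numpy as np

        def triangulate_dlt(x1, x2, P1, P2):
            """x1, x2: image points (2,), homogeneous scale divided out.
            P1, P2: 3x4 camera matrices. Returns the inhomogeneous 3D point."""
            A = np.vstack([
                x1[0] * P1[2] - P1[0],      # each entry is a 1x4 row
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])                               # A is 4x4, as the algorithm expects
            _, _, Vt = np.linalg.svd(A)
            X = Vt[-1]                       # unit singular vector, smallest sigma
            return X[:3] / X[3]              # back to inhomogeneous coordinates

        P1 = np.array([[699.6674, 0.0, 392.1170, 0.0],   # P1 from the question
                       [0.0, 701.6136, 304.0275, 0.0],
                       [0.0, 0.0, 1.0, 0.0]])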

    Read the article

  • Very fast document similarity

    - by peyton
    Hello, I am trying to determine document similarity between a single document and each of a large number of documents (n ~= 1 million) as quickly as possible. More specifically, the documents I'm comparing are e-mails; they are grouped (i.e., there are folders or tags) and I'd like to determine which group is most appropriate for a new e-mail. Fast performance is critical. My a priori assumption is that the cosine similarity between term vectors is appropriate for this application; please comment on whether this is a good measure to use or not! I have already taken into account the following possibilities for speeding up performance:

    - Pre-normalize all the term vectors
    - Calculate a term vector for each group (n ~= 10,000) rather than each e-mail (n ~= 1,000,000); this would probably be acceptable for my application, but if you can think of a reason not to do it, let me know!

    I have a few questions:

    - If a new e-mail has a new term never before seen in any of the previous e-mails, does that mean I need to re-compute all of my term vectors? This seems expensive.
    - Is there some clever way to only consider vectors which are likely to be close to the query document?
    - Is there some way to be more frugal about the amount of memory I'm using for all these vectors?

    Thanks!
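
    As a sketch of the cosine-similarity route: with pre-normalized group vectors, scoring a new message is one matrix-vector product (Python/numpy; the sizes are toy values and random data stands in for real term weights):

        import numpy as np

        rng = np.random.default_rng(0)

        def normalize_rows(m):
            return m / np.linalg.norm(m, axis=1, keepdims=True)

        # hypothetical: 1,000 groups x 5,000-term vocabulary
        group_vectors = normalize_rows(rng.random((1_000, 5_000), dtype=np.float32))

        def best_group(query_vec):
            q = query_vec / np.linalg.norm(query_vec)
            sims = group_vectors @ q        # cosine similarity against every group
            return int(np.argmax(sims))

        query = rng.random(5_000, dtype=np.float32)
        print(best_group(query))

    On the new-term question: a never-seen term only adds a dimension whose weight is zero in every existing vector, so with plain term counts nothing needs recomputing; idf-style weights are the part that would shift.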

    Read the article
