Search Results

Search found 148 results on 6 pages for 'permutation'.


  • Best way for launching html/jsp to communicate with GWT module

    - by h2g2java
    I asked this at the GWT forum, but I'm impatient for the answer and I seem to get rather good responses here. An HTML or JSP file is used to launch xxx.nocache.js, which then decides which browser "permutation" to use:

        <head>
          <meta http-equiv="content-type" content="text/html;charset=UTF-8">
          <title>xxx</title>
          <script type="text/javascript" language="javascript" src="xxx.nocache.js"></script>
        </head>

    In my case, I am using a JSP. When the JSP is executed, it discovers some conditions, and I wish to pass these conditions as values to the GWT module being launched. The "elegant" GWT way to pass these values would be to persist them as request/memcache attributes and then have the GWT module perform an RPC call to retrieve those values. For example, suppose the JSP discovers that the current user is Whoopy. Shouldn't I simply have the JSP generate JavaScript to store user = "Whoopy" as a top-level or named-frame-level JavaScript variable, and use JSNI within the module to retrieve the value of user? I have not tried it yet, but I would like to know how anyone might have done it without having to use RPC.

    Read the article

  • mysql: Average over multiple columns in one row, ignoring nulls

    - by Sai Emrys
    I have a large table (of sites) with several numeric columns - say a through f. (These are site rankings from different organizations, like Alexa, Google, Quantcast, etc. Each has a different range and format; they're straight dumps from the outside DBs.) For many of the records, one or more of these columns is null, because the outside DB doesn't have data for it. They all cover different subsets of my DB.

    I want column t to be their weighted average (each of a..f has a static weight which I assign), ignoring null values (which can occur in any of them), except being null if they're all null. I would prefer to do this with a simple SQL calculation, rather than doing it in app code or using some huge ugly nested if block to handle every permutation of nulls. (Given that I have an increasing number of columns to average over as I add in more outside DB sources, this would be exponentially more ugly and bug-prone.) I'd use AVG, but that's only for GROUP BY, and this is within one record. The data is semantically nullable, and I don't want to average in some "average" value in place of the nulls; I want to count only the columns for which data is there. Is there a good way to do this? Ideally, what I want is something like UPDATE sites SET t = AVG(a*a_weight, b*b_weight, ...) where any null values are just ignored and no grouping is happening.
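
    In SQL the usual trick is COALESCE over each weighted term plus a NULLIF-guarded sum of the weights of the non-null columns. The row-level logic, as a minimal Python sketch (the column values and weights here are illustrative, not from the question):

        def weighted_average(values, weights):
            # Average only the columns that actually have data; None stands in for SQL NULL.
            pairs = [(v, w) for v, w in zip(values, weights) if v is not None]
            if not pairs:                 # all columns NULL -> result is NULL
                return None
            return sum(v * w for v, w in pairs) / sum(w for _, w in pairs)

        # One row with columns a..c, using hypothetical static per-column weights.
        print(weighted_average([120.0, None, 3.5], [0.5, 2.0, 1.0]))  # the None is ignored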

    Read the article

  • What does O(log n) mean exactly?

    - by Andreas Grech
    I am currently learning about Big O notation running times and amortized times. I understand the notion of O(n) linear time, meaning that the size of the input affects the growth of the algorithm proportionally... and the same goes for, for example, quadratic time O(n^2), etc. Even algorithms, such as permutation generators, with O(n!) times, grow by factorials. For example, the following function is O(n) because the algorithm grows in proportion to its input n:

        f(int n) {
          int i;
          for (i = 0; i < n; ++i)
            printf("%d", i);
        }

    Similarly, if there was a nested loop, the time would be O(n^2). But what exactly is O(log n)? For example, what does it mean to say that the height of a complete binary tree is O(log n)? I do know (maybe not in great detail) what a logarithm is, in the sense that log10(100) = 2, but I cannot understand how to identify a function with a logarithmic time.
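
    To make the halving concrete, here is a minimal Python sketch of binary search, the textbook O(log n) algorithm: each pass discards half of the remaining input, so doubling n adds only one more step.

        def binary_search(items, target):
            # items must be sorted; each iteration halves the search range,
            # so at most ~log2(len(items)) iterations run.
            lo, hi = 0, len(items) - 1
            while lo <= hi:
                mid = (lo + hi) // 2
                if items[mid] == target:
                    return mid
                if items[mid] < target:
                    lo = mid + 1
                else:
                    hi = mid - 1
            return -1

        # 1,024 elements -> at most ~10 comparisons; 1,048,576 -> ~20.
        print(binary_search(list(range(1024)), 700))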

    Read the article

  • How should I generate the partitions / pairs for the Chinese Postman problem?

    - by Simucal
    I'm working on a program for class that involves solving the Chinese Postman problem. Our assignment only requires us to write a program to solve it for a hard-coded graph, but I'm attempting to solve it for the general case on my own. The part that is giving me trouble is generating the partitions of pairings for the odd vertices. For example, if I had the following labeled odd vertices in a graph:

        1 2 3 4 5 6

    I need to find all the possible pairings / partitions I can make with these vertices. I've figured out I'll have i partitions given:

        n = number of odd vertices
        k = n / 2
        i = (2k)! / (2^k * k!) = (2k-1)(2k-3)...(3)(1)

    So, given the 6 odd vertices above, we know that we need to generate i = 15 partitions. The 15 partitions would look like:

        (1 2) (3 4) (5 6)
        (1 2) (3 5) (4 6)
        (1 2) (3 6) (4 5)
        ...
        (1 6) ...

    Then, for each partition, I take each pair, find the shortest distance between its vertices, and sum these for the partition. The partition with the smallest total distance between its pairs is selected, and I then double all the edges along the shortest paths between the odd vertices found in the selected partition. These represent the edges the postman will have to walk twice. At first I thought I had worked out an appropriate algorithm for generating these partitions / pairs, but it is flawed; I found it isn't a simple permutation/combination problem. Does anyone who has studied this problem before have any tips that can help point me in the right direction for generating these partitions?
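
    For reference, the pairings can be generated without going through permutations at all: fix the lowest remaining vertex, pair it with each other remaining vertex, and recurse on what is left. A minimal Python sketch of that idea (not from the question):

        def pairings(vertices):
            # Yield every way to split an even-sized list into unordered pairs.
            # Always pairing off vertices[0] first avoids duplicate partitions,
            # giving (n-1)(n-3)...(1) results.
            if not vertices:
                yield []
                return
            first, rest = vertices[0], vertices[1:]
            for i, partner in enumerate(rest):
                remaining = rest[:i] + rest[i + 1:]
                for tail in pairings(remaining):
                    yield [(first, partner)] + tail

        print(sum(1 for _ in pairings([1, 2, 3, 4, 5, 6])))  # 15, as expected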

    Read the article

  • Password hashing, salt and storage of hashed values

    - by Jonathan Leffler
    Suppose you were at liberty to decide how hashed passwords were to be stored in a DBMS. Are there obvious weaknesses in a scheme like this one? To create the hash value stored in the DBMS:

        - Take a value that is unique to the DBMS server instance as part of the salt,
        - and the username as a second part of the salt,
        - and create the concatenation of the salt with the actual password,
        - and hash the whole string using the SHA-256 algorithm,
        - and store the result in the DBMS.

    This would mean that anyone wanting to come up with a collision would have to do the work separately for each user name and each DBMS server instance. I'd plan to keep the actual hash mechanism somewhat flexible to allow for the use of the new NIST standard hash algorithm (SHA-3) that is still being worked on. The 'value that is unique to the DBMS server instance' need not be secret, though it wouldn't be divulged casually. The intention is to ensure that if someone uses the same password in different DBMS server instances, the recorded hashes would be different. Likewise, the user name would not be secret - just the password proper.

    Would there be any advantage to having the password first and the user name and 'unique value' second, or any other permutation of the three sources of data? Or what about interleaving the strings? Do I need to add (and record) a random salt value (per password) as well as the information above? (Advantage: the user can re-use a password and still, probably, get a different hash recorded in the database. Disadvantage: the salt has to be recorded. I suspect the advantage considerably outweighs the disadvantage.)

    There are quite a lot of related SO questions - this list is unlikely to be comprehensive:

        - Encrypting/Hashing plain text passwords in database
        - Secure hash and salt for PHP passwords
        - The necessity of hiding the salt for a hash
        - Client-side MD5 hash with time salt
        - Simple password encryption
        - Salt generation and open source software

    I think that the answers to these questions support my algorithm (though if you simply use a random salt, then the 'unique value per server' and username components are less important).
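
    A minimal Python sketch of the scheme as described (server-unique value + username + random per-password salt, one SHA-256 pass). Note this is a sketch of the question's proposal, not a recommendation: in practice a deliberately slow KDF such as PBKDF2 or bcrypt is preferable to a single hash round.

        import hashlib
        import os

        def hash_password(server_id, username, password):
            # Salt = server-unique value + username + random per-password salt.
            # The random salt is not secret, but it must be stored with the hash.
            salt = os.urandom(16)
            digest = hashlib.sha256(
                server_id.encode() + username.encode() + salt + password.encode()
            ).hexdigest()
            return salt.hex(), digest

        # Hypothetical identifiers, for illustration only.
        print(hash_password("dbms-instance-42", "whoopy", "s3cret"))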

    Read the article

  • Erlang ODBC parameter query with null parameters

    - by Schlomer
    Is it possible to pass null values to parameter queries? For example:

        Sql = "insert into TableX values (?,?)".
        Params = [{sql_integer, [Val1]},
                  {sql_float, [Val2]}].   % Val2 may be a float, or it may be the atom 'undefined'
        odbc:param_query(OdbcRef, Sql, Params).

    Now, of course odbc:param_query/3 is going to complain if Val2 is undefined when trying to match it to a sql_float, but my question is: is it possible to use a parameterized query, such as:

        Sql = "insert into TableY values (?,?,?,?,?,?,?,?,?)".

    with any null parameters? I have a use case where I am dumping a large amount of real-time data into a database by either inserting or updating. Some of the tables I am updating have a dozen or so nullable fields, and I do not have a guarantee that all of the data will be there. Concatenating SQL together for each query while checking for null values seems complex, and the wrong way to do it. Having a parameterized query for each permutation is simply not an option. Any thoughts or ideas would be fantastic! Thank you!

    Read the article

  • Javascript JQUERY AJAX: When Are These Implemented

    - by Michael Moreno
    I'm learning JavaScript and have poked around this excellent site to gather intel. I keep coming across questions / answers about JavaScript, jQuery, jQuery with AJAX, JavaScript with jQuery, and AJAX alone. My conclusion: these are all individually powerful and useful. My confusion: how does one determine which, or which combination, to use? I've concluded that JavaScript is readily available on most browsers. For example, I can extend a simple HTML page with:

        <html>
        <body>
          <script type="text/javascript">
            document.write("Hello World!");
          </script>
        </body>
        </html>

    However, within the scope of Python/Django, many of these questions are jQuery and AJAX related. At which point, or under what development circumstances, would I conclude that JavaScript alone isn't going to "cut it", and that I need to implement jQuery and/or AJAX and/or some other permutation?

    Read the article

  • What category of combinatorial problems appear on the logic games section of the LSAT?

    - by Merjit
    There's a category of logic problem on the LSAT that goes like this:

        Seven consecutive time slots for a broadcast, numbered in chronological order 1 through 7, will be filled by six song tapes (G, H, L, O, P, S) and exactly one news tape. Each tape is to be assigned to a different time slot, and no tape is longer than any other tape. The broadcast is subject to the following restrictions: L must be played immediately before O. The news tape must be played at some time after L. There must be exactly two time slots between G and P, regardless of whether G comes before P or whether G comes after P.

    I'm interested in generating a list of permutations that satisfy the conditions as a way of studying for the test and as a programming challenge. However, I'm not sure what class of permutation problem this is. I've generalized the type of problem as follows: given an n-length array A:

        1. How many ways can a set of n unique items be arranged within A? E.g., how many ways are there to rearrange ABCDEFG?
        2. If the length of the set of unique items is less than the length of A, how many ways can the set be arranged within A if items in the set may occur more than once? E.g., ABCDEF -> AABCDEF, ABBCDEF, etc.
        3. How many ways can a set of unique items be arranged within A if the items of the set are subject to "blocking conditions"?

    My thought is to encode the restrictions and then use something like Python's itertools to generate the permutations. Thoughts and suggestions are welcome.
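
    For example, the broadcast puzzle's restrictions can be encoded as predicates over tape positions and brute-forced with itertools.permutations (7! = 5040 candidates), along the lines of this sketch:

        from itertools import permutations

        TAPES = ['G', 'H', 'L', 'O', 'P', 'S', 'N']   # N stands for the news tape

        def valid(order):
            pos = {tape: i for i, tape in enumerate(order)}
            return (pos['O'] == pos['L'] + 1            # L immediately before O
                    and pos['N'] > pos['L']             # news some time after L
                    and abs(pos['G'] - pos['P']) == 3)  # exactly two slots between G and P

        solutions = [order for order in permutations(TAPES) if valid(order)]
        print(len(solutions))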

    Read the article

  • Find the order among tasks in a company by using prolog?

    - by Cem
    First of all, I wish everyone a happy new year. I have searched and worked a lot on this, but I could not solve the question. I am quite new to Prolog, and I must do this homework. The question in my homework is as follows:

    Write a Prolog program that determines a valid order for the tasks to be carried out in a company. The Prolog program will consist of a set of "before" predicates which denote the order between task pairs. Here is an example:

        before(a,b).
        before(a,e).
        before(d,c).
        before(b,c).
        before(c,e).

    Here, task a should be carried out before tasks b and e, d before c, and so on. Hence a valid ordering of the tasks would be [a, b, d, c, e]. The order predicate in your program will be queried as follows:

        ?- order([a,b,c,d,e],X).
        X = [a, b, d, c, e] ;
        X = [a, d, b, c, e] ;
        X = [d, a, b, c, e] ;
        false.

    Hint: Try to generate different orders for the tasks (permutations) and then check if the order is consistent with the "before" relationships given. Even if you can only generate a single valid order, you will get reasonable partial credit.
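
    The hint amounts to generate-and-test: enumerate permutations and keep those consistent with every before/2 fact. Here is that strategy as a Python sketch (the Prolog version would pair permutation/2 with an analogous consistency check):

        from itertools import permutations

        BEFORE = [('a', 'b'), ('a', 'e'), ('d', 'c'), ('b', 'c'), ('c', 'e')]

        def orders(tasks):
            for candidate in permutations(tasks):
                pos = {task: i for i, task in enumerate(candidate)}
                if all(pos[x] < pos[y] for x, y in BEFORE):
                    yield list(candidate)

        print(list(orders(['a', 'b', 'c', 'd', 'e'])))
        # [['a','b','d','c','e'], ['a','d','b','c','e'], ['d','a','b','c','e']]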

    Read the article

  • Standard term for a thread I/O reorder buffer?

    - by Crashworks
    I have a case where many threads all concurrently generate data that is ultimately written to one long, serial file. I need to somehow serialize these writes so that the file gets written in the right order. I.e., I have an input queue of 2048 jobs j_0..j_n, each of which produces a chunk of data o_i. The jobs run in parallel on, say, eight threads, but the output blocks have to appear in the file in the same order as the corresponding input blocks: the output file has to be in the order o_0 o_1 o_2 ...

    The solution to this is pretty self-evident: I need some kind of buffer that accumulates and writes the output blocks in the correct order, similar to a CPU reorder buffer in Tomasulo's algorithm, or to the way that TCP reassembles out-of-order packets before passing them to the application layer. Before I go code it, I'd like to do a quick literature search to see if there are any papers that have solved this problem in a particularly clever or efficient way, since I have severe realtime and memory constraints. I can't seem to find any papers describing this, though; a Scholar search on every permutation of [threads, concurrent, reorder buffer, reassembly, io, serialize] hasn't yielded anything useful. I feel like I must just not be searching the right terms. Is there a common academic name or keyword for this kind of pattern that I can search on?
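
    For reference, the enterprise-integration-patterns literature calls this a "resequencer", and the core mechanism is small. A Python sketch, assuming (index, block) completion events drained from a thread-safe queue:

        def resequence(completions, write):
            # completions: iterable of (index, block) pairs in completion order.
            # Blocks are held until every lower-indexed block has been written,
            # so `write` sees them strictly in order 0, 1, 2, ...
            pending = {}
            next_index = 0
            for index, block in completions:
                pending[index] = block
                while next_index in pending:       # flush each ready run
                    write(pending.pop(next_index))
                    next_index += 1

        out = []
        resequence([(1, 'b'), (0, 'a'), (3, 'd'), (2, 'c')], out.append)
        print(out)  # ['a', 'b', 'c', 'd']

    As written, `pending` is unbounded; a version honoring the realtime and memory constraints in the question would cap it and stall producers when full.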

    Read the article

  • Void* array casting to float, int32, int16, etc.

    - by Griffin
    Hey guys, I've got an array of PCM data. It could be 16 bit, 24 bit packed, 32 bit, etc. It could be signed or unsigned, and it could be 32 or 64 bit floating point. It is currently stored as a "void**" matrix, indexed by channel, then by frame. The goal is to allow my library to take in any PCM format and buffer it, without requiring manipulation of the data to fit a designated structure. If the A/D converter spits out 24 bit packed arrays of interleaved PCM, I need to accept it gracefully. I also need to support 16 bit non-interleaved, as well as any permutation of the above formats. I know the bit depth and other information at runtime, and I'm trying to code efficiently while not duplicating code.

    What I need is an effective way to cast the matrix, put PCM data into the matrix, and then pull it out later. I can cast the matrix to int32_t or int16_t for the 32 and 16 bit signed PCM respectively; I'll probably have to store the 24 bit PCM in an int32_t for 32 bit, 8 bit byte systems as well. Can anyone recommend a good way to put data into this array and pull it out later? I'd like to avoid large sections of code which look like:

        switch( mFormat ) {
        case 1: // unsigned 8 bit
            for( int i = 0; i < mChannels; i++ )
                framesArray = (uint8_t*)pcm[i];
            break;
        case 2: // signed 8 bit
            for( int i = 0; i < mChannels; i++ )
                framesArray = (int8_t*)pcm[i];
            break;
        case 3: // unsigned 16 bit
        ...

    Limitations: I'm working in C/C++; no templates, no RTTI, no STL. Think embedded. Things get trickier when I have to port this to a DSP with 16 bit bytes. Does anybody have any useful macros they might be willing to share? Thanks, -Griff

    Read the article

  • Unit testing a method with many possible outcomes

    - by Cthulhu
    I've built a simple~ish method that constructs a URL out of approximately 5 parts: base address, port, path, 'action', and a set of parameters. Out of these, only the address part is mandatory; the other parts are all optional. A valid URL has to come out of the method for each permutation of input parameters, such as:

        address
        address port
        address port path
        address path
        address action
        address path action
        address port action
        address port path action
        address action params
        address path action params
        address port action params
        address port path action params

    and so forth. The basic approach for this is to write one unit test for each of these possible outcomes, each unit test passing the address and any of the optional parameters to the method, and testing the outcome against the expected output. However, I wonder, is there a Better (tm) way to handle a case like this? Are there any (good) unit test patterns for this?

    (rant) I only now realize that I learned to write unit tests a few years ago but never really (feel like I) advanced in the area, and that every unit test is a repeat of building parameters, expected outcome, filling mock objects, calling a method and testing the outcome against the expected outcome. I'm pretty sure this is the way to go in unit testing, but it gets kinda tedious, yanno. Advice on that matter is always welcome. (/rant)

    (note) christmas weekend approaching, probably won't reply to suggestions until next week. (/note)
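
    One pattern that helps here is a single data-driven (table-driven) test that enumerates all combinations of the optional parts instead of one hand-written test per permutation. A Python sketch of the idea; build_url is a hypothetical stand-in for the method under test:

        import itertools
        import unittest

        def build_url(address, port=None, path=None, action=None, params=None):
            # Hypothetical stand-in for the URL-constructing method under test.
            url = address + (":%d" % port if port else "")
            url += "/" + path if path else ""
            url += "/" + action if action else ""
            if params:
                url += "?" + "&".join("%s=%s" % kv for kv in sorted(params.items()))
            return url

        class BuildUrlTest(unittest.TestCase):
            def test_every_combination_of_optional_parts(self):
                optional = {"port": 8080, "path": "api", "action": "run",
                            "params": {"q": "1"}}
                for r in range(len(optional) + 1):
                    for names in itertools.combinations(sorted(optional), r):
                        kwargs = {name: optional[name] for name in names}
                        with self.subTest(parts=names):
                            url = build_url("http://host", **kwargs)
                            # Cheap invariants; a real table would pin exact expected strings.
                            self.assertTrue(url.startswith("http://host"))
                            self.assertEqual("?" in url, "params" in kwargs)

        if __name__ == "__main__":
            unittest.main()

    The loop generates all 16 combinations of the 4 optional parts, and subTest reports each failing combination separately, so one test method replaces the hand-written permutation suite.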

    Read the article

  • C++ Iterator Pipelining Designs

    - by Kirakun
    Suppose we want to apply a series of transformations, int f1(int), int f2(int), int f3(int), to a list of objects. A naive way would be:

        SourceContainer source;
        TempContainer1 temp1;
        transform(source.begin(), source.end(), back_inserter(temp1), f1);
        TempContainer2 temp2;
        transform(temp1.begin(), temp1.end(), back_inserter(temp2), f2);
        TargetContainer target;
        transform(temp2.begin(), temp2.end(), back_inserter(target), f3);

    This first solution is not optimal because of the extra space requirement for temp1 and temp2. So, let's get smarter with this:

        int f123(int n) { return f3(f2(f1(n))); }
        ...
        SourceContainer source;
        TargetContainer target;
        transform(source.begin(), source.end(), back_inserter(target), f123);

    This second solution is much better, because not only is the code simpler but, more importantly, there is less space required, with no intermediate storage. However, the composition f123 must be determined at compile time and is thus fixed at run time. How would I try to do this efficiently if the composition is to be determined at run time? For example, if this code was in an RPC service and the actual composition - which can be any permutation of f1, f2, and f3 - is based on arguments from the RPC call.
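
    In C++ this boils down to folding a runtime-chosen sequence of callables (e.g. std::function<int(int)> objects looped over inside a single functor) into one function that transform applies once per element. The shape of the idea in a Python sketch:

        from functools import reduce

        def compose(*funcs):
            # Apply funcs left to right: compose(f1, f2, f3)(n) == f3(f2(f1(n))).
            # The sequence can be assembled at run time, e.g. from RPC arguments.
            return lambda x: reduce(lambda acc, f: f(acc), funcs, x)

        f1 = lambda n: n + 1
        f2 = lambda n: n * 2
        f3 = lambda n: n - 3

        pipeline = compose(f1, f2, f3)
        print([pipeline(n) for n in [1, 2, 3]])  # [1, 3, 5] -- one pass, no temporaries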

    Read the article

  • iphone: re-sizing gradient after shift from portrait to landscape

    - by d_CFO
    In viewDidLoad, I can create a gradient with no problem:

        CAGradientLayer *blueGradient = [[CAGradientLayer layer] retain];
        blueGradient.frame = CGRectMake(gradientStartX, gradientStartY, gradientWidth, gradientHeight);

    where gradientWidth is device-defined as 320 or 1024 as appropriate. What I can't do is resize it inside willRotateToInterfaceOrientation: (and thus get rid of that empty black space off to the right) after the user changes to landscape mode. (The nav bar and tab bar behave nicely.) (1) Recalibrating the gradient's new dimensions according to the new mid-point, (2) using kCALayerMaxXMargin, and (3) employing bounds all looked like they would do the job. bounds looked a little more intuitive, so I tried that. I don't want to admit that I have made zero progress. I will say that I've been reduced to the brute-force method of trying every permutation of self, view, layer, bounds, blueGradient, and CGRect(gradientStartX, gradientStartY, newGradientWidth, newGradientHeight) with zero success. This is not difficult; my lack of understanding is making it difficult. Anyone out there "been there, done that"?

    Read the article

  • Java: How do I implement a method that takes 2 arrays and returns 2 arrays?

    - by HansDampf
    Okay, here is what I want to do: I want to implement a crossover method for arrays. It is supposed to take 2 arrays of the same size and return two new arrays that are a kind of mix of the two input arrays, as in:

        [a,a,a,a]  [b,b,b,b]  ->  [a,a,b,b]  [b,b,a,a]

    Now I wonder what would be the suggested way to do that in Java, since I cannot return more than one value. My ideas are:

        - Returning a Collection (or array) containing both new arrays. I don't really like that one because I think it would result in harder-to-understand code.
        - Avoiding the need to return two results by calling the method for each case but only getting one of the results each time. I don't like that one either, because there would be no natural order about which solution should be returned. This would need to be specified, again resulting in harder-to-understand code. Plus, this will work only for this basic case, but I will want to shuffle the array before the crossover and reverse that afterwards. I cannot do the shuffling in isolation from the crossover, since I won't want to actually perform the operation; instead I want to use the information about the permutation while doing the crossover, which will be a more efficient way, I think.
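
    As a reference point for the operation itself, here is a one-point crossover in Python, where returning both children at once is natural; a Java port would hand the pair back via a small holder class, a 2-row array, or a List of two arrays (a sketch, not the poster's code):

        def crossover(parent_a, parent_b):
            # One-point crossover: swap the tails of two equal-length sequences.
            assert len(parent_a) == len(parent_b)
            cut = len(parent_a) // 2
            child_1 = parent_a[:cut] + parent_b[cut:]
            child_2 = parent_b[:cut] + parent_a[cut:]
            return child_1, child_2

        print(crossover(['a'] * 4, ['b'] * 4))
        # (['a', 'a', 'b', 'b'], ['b', 'b', 'a', 'a'])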

    Read the article

  • How to get the best LINPACK result and conquer the Top500?

    - by knweiss
    Given a large Linux HPC cluster with hundreds or thousands of nodes, what are your best practices for getting the best possible LINPACK benchmark (HPL) result to submit for the Top500 supercomputer list? To give you an idea what kind of answers I would appreciate, here are some sub-questions (with links):

        - How do you tune the parameters (N, NB, P, Q, memory alignment, etc.) for the HPL.dat file (without spending too much time trying each possible permutation, especially with large problem sizes N)?
        - Are there any Top500 submission rules to be aware of? What is allowed, what isn't?
        - Which MPI product, which version? Does it make a difference?
        - Any special host order in your MPI machine file? Do you use CPU pinning?
        - How do you configure your interconnect? Which interconnect?
        - Which BLAS package do you use for which CPU model? (Intel MKL, AMD ACML, GotoBLAS2, etc.)
        - How do you prepare for the big run (on all nodes)? Start with small runs on a subset of nodes and then scale up? Is it really necessary to run LINPACK with a big run on all of the nodes (or is extrapolation allowed)?
        - How do you optimize for the latest Intel/AMD CPUs? Hyperthreading? NUMA?
        - Is it worth it to recompile the software stack, or do you use precompiled binaries? Which settings? Which compiler optimizations, which compiler? (What about profile-based compilation?)
        - How do you get the best result given only a limited amount of time to do the benchmark run? (You can block a huge cluster forever.)
        - How do you prepare the individual nodes (stopping system daemons, freeing memory, etc.)?
        - How do you deal with hardware faults (ruining a huge run)?
        - Are there any must-read documents or websites about this topic? E.g., I would love to hear some background stories of some of the current Top500 systems and how they did their LINPACK benchmark.

    I deliberately don't want to mention concrete hardware details or discuss hardware recommendations because I don't want to limit the answers. However, feel free to mention hints, e.g., for specific CPU models.

    Read the article

  • Storing a set of criteria in another table

    - by bendataclear
    I have a large table with sales data; the useful columns are below:

        RowID   Date        Customer  Salesperson  Product_Type  Manufacturer  Quantity  Value
        1       01-06-2004  James     Ian          Taps          Tap Ltd       200       £850
        2       02-06-2004  Apple     Fran         Hats          Hats Inc      30        £350
        3       04-06-2004  James     Lawrence     Pencils       ABC Ltd       2000      £980
        ...     (many rows later)
        185352  03-09-2012  Apple     Ian          Washers       Tap Ltd       600       £80

    I need to calculate a large set of targets from a table containing values of different types; the target table is under my control and so far looks like:

        TargetID  Year  Month  Salesperson  Target_Type  Quantity
        1         2012  7      Ian          1            6000
        2         2012  8      James        2            2000
        3         2012  9      Ian          2            6500

    At present I am working out target types using a view of the first table which has a lot of extra columns:

        SELECT YEAR(Date), MONTH(Date), Salesperson, Quantity,
               CASE WHEN Manufacturer IN ('Tap Ltd','Hats Inc') AND Product_Type = 'Hats'
                    THEN True ELSE False END AS IsType1,
               CASE WHEN Manufacturer = 'Hats Inc' AND Product_Type IN ('Hats','Coats')
                    THEN True ELSE False END AS IsType2,
               ...
               CASE WHEN Manufacturer IN ('Tap Ltd','Hats Inc') AND Product_Type = 'Hats'
                    THEN True ELSE False END AS IsType24,
               CASE WHEN Manufacturer IN ('Tap Ltd','Hats Inc') AND Product_Type = 'Hats'
                    THEN True ELSE False END AS IsType25
        FROM SalesTable
        WHERE [some stuff here]

    This is horrible to read/debug and I hate it!! I've tried a few different ways of simplifying it but have been unable to get them to work. The closest I have come is to have a third table holding the definition of the types, with the values for each field and the type number; this can be joined to the tables to give me the full values, but I can't work out a way to cope with multiple values for each field.

    Finally, the question: is there a standard way this can be done, or an easier/neater method other than one column for each type of target? I know this is a complex problem, so if anything is unclear please let me know.

    Edit - what I need to get: at the very end of the process I need to have targets displayed with actual sales:

        Type  Year  Month  Salesperson  TargetQty  ActualQty
        2     2012  8      James        2000       2809
        2     2012  9      Ian          6500       6251

    Each row of the sales table could potentially satisfy 8 of the types. Some more points:

        1. I have 5 different columns that need to be defined against the targets (or set to NULL to include any value).
        2. I have between 30 and 40 different types that need to be defined; several of the columns could contain as many as 10 different values.
        3. For point 2, if I am using a row for each permutation of values, 2 columns with 10 values each would give me 100 rows for each salesperson for each month, which is a lot, but if this is the only way to define multiple values I will have to do this.

    Sorry if this makes no sense!

    Read the article

  • Trying to use Digest Authentication for Folder Protection

    - by Jon Hazlett
    StackOverflow users suggested I try my question here. I'm using Server 2008 EE and IIS 7. I've got a site that I've migrated over from XP Pro using IIS 5. On the old system, I was using IIS Password with simple .htaccess files to control a couple of folders that I didn't want to be publicly viewable. Now that I'm running a full-blown DC with a more powerful version of IIS, I decided it'd be a good idea to start using something slightly more sophisticated. After doing my research, and trying to keep things as cheap as possible with a touch of extra security, I decided that Digest Authentication would be the best way to go. My issue is this: with Anonymous access disabled and Digest enabled, I am never prompted for credentials.

        - When on the server, viewing domain[dot]com/example simply shows my 401.htm page without prompting me for credentials.
        - When on a different network/computer, viewing domain[dot]com/example again shows my 401.htm without prompting for credentials.

    At the site level I only have Anonymous enabled. Every subfolder, unless I want it protected, has just Anonymous enabled. Only the folders I want protected have Anonymous disabled and Digest enabled. I have tried editing the bindings to see if that would spark any kind of change: www.domain.com, domain.com, and localhost have all been tried. There was never a change in behavior for any permutation (aside from the page not being found when I un-bound localhost from the site). I might have screwed up when I deleted the default site from IIS. I didn't think I'd actually need it for anything, but some of what I have read online is telling me otherwise now.

    As for Digest settings, I have it pointed to local.domain.com, which is the name assigned to my AD domain. I'm guessing that's right, but I honestly have no clue what a realm actually is. Would it matter that I have an A record for local.domain.com pointing to my IP address? I had problems initially with an absolute link for 401.htm pages, but have since resolved that: instead of D:\HTTP\401.htm I've used /401.htm and all is well. I used to get error 500s because it couldn't find the custom 401.htm file, but now it loads just fine. As for some data, I was getting entries like this from the access logs:

        2009-07-10 17:34:12 10.0.0.10 GET /example/ - 80 - [workip] Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.1;+.NET+CLR+1.1.4322;+.NET+CLR+2.0.50727;+InfoPath.2) 401 2 5 132

    But after correcting my 401.htm links I now get logs like this:

        2009-07-10 18:56:25 10.0.0.10 GET /example - 80 - [workip] Mozilla/5.0+(Windows;+U;+Windows+NT+5.1;+en-US;+rv:1.9.0.11)+Gecko/2009060215+Firefox/3.0.11 200 0 0 146

    I don't know if that means anything or not. I still don't get any credential challenges, regardless of where I try to sign in from (my workstation, my server, even my cellphone). The only thing that has seemed to work is viewing localhost, and I don't know what could be preventing authentication from finding its way out of the server. Thanks for any help! Jon

    Read the article

  • can't login to new install of SQL 2008 x64 via SSMS

    - by tpcolson
    I have performed a fresh install of SQL 2008 x64 on a fresh install of Server 2008 R2 x64 in an AD environment. Upon install completion, I cannot log in to the SQL instance via SSMS, with the following error:

        Login failed for user domain\user. Reason: Token-based server access validation failed with an infrastructure error. Check for previous errors. [CLIENT: ]

    Background: the server is correctly joined to the AD domain, the install was performed with defaults, Windows authentication only (per organizational rules), the SQL install completes with no errors, domain\user was added as SQL admin during setup account provisioning, I am logged into the console as domain\user when this error occurs, Windows Firewall is OFF, and UAC is ON (and will never be turned off, in accordance with organizational policy). To troubleshoot this error I have tried:

        - Run SSMS as administrator: fail.
        - Start SQL in single-user mode, run SSMS: fail.
        - Start SQL in single-user mode, run SSMS as administrator: success.
        - Start SQL in single-user mode, run SSMS as administrator, remove domain\user from the sysadmin group, re-add, run SSMS: fail.
        - Any combination and permutation of log off and log on, reboot, and chanting gregorian prayers: fail.
        - Reimage the server with 2008 x64, slipstream SP2 into the SQL 2008 install: all of the above troubleshooting steps are exactly repeatable, so I've narrowed this down to not being an SP issue. (This is NOT SQL 2008 R2.)

    Any suggestion on how to grant management access to this fresh install of SQL 2008 via SSMS? Our organizational policy is no console access to servers; management will be done via management tools installed on client workstations. domain\user is a group of 8 users who will have SSMS installed on workstations. However, we can't even access SQL via SSMS from the console! We cannot deploy this in an environment where these 8 users will have to sneak into the server closet on the weekends and have console access to SQL and run SSMS as administrator.

    EDIT: domain\group is a replacement for the actual object; the queries indicate that domain\group does indeed have the right privileges...!?

        1> EXEC xp_logininfo 'domain\group'
        2> go
        account name    type   privilege  mapped login name  permission path
        'domain\group'  group  admin      'domain\group'     NULL

    xp_logininfo seems to show 'domain\group' in the SQL admin group;

        1> SELECT A.name AS 'Role', B.name AS 'Login'
        2> FROM sys.server_role_members C
        3> INNER JOIN sys.server_principals A ON A.principal_id = C.role_principal_id
        4> INNER JOIN sys.server_principals B ON B.principal_id = C.member_principal_id
        5> go
        Role      Login
        sysadmin  sa
        sysadmin  NT AUTHORITY\SYSTEM
        sysadmin  NT SERVICE\MSSQLSERVER
        sysadmin  NT SERVICE\SQLSERVERAGENT
        sysadmin  domain\group

        1> SELECT PRINCIPAL_ID AS [Principal ID], NAME AS [User],
        2>        TYPE_DESC AS [Type Description], IS_DISABLED AS [Status]
        3> FROM sys.server_principals
        4> GO
        Principal ID  User                                     Type Description          Status
        1             sa                                       SQL_LOGIN                 1
        2             public                                   SERVER_ROLE               0
        3             sysadmin                                 SERVER_ROLE               0
        4             securityadmin                            SERVER_ROLE               0
        5             serveradmin                              SERVER_ROLE               0
        6             setupadmin                               SERVER_ROLE               0
        7             processadmin                             SERVER_ROLE               0
        8             diskadmin                                SERVER_ROLE               0
        9             dbcreator                                SERVER_ROLE               0
        10            bulkadmin                                SERVER_ROLE               0
        101           ##MS_SQLResourceSigningCertificate##     CERTIFICATE_MAPPED_LOGIN  0
        102           ##MS_SQLReplicationSigningCertificate##  CERTIFICATE_MAPPED_LOGIN  0
        103           ##MS_SQLAuthenticatorCertificate##       CERTIFICATE_MAPPED_LOGIN  0
        105           ##MS_PolicySigningCertificate##          CERTIFICATE_MAPPED_LOGIN  0
        257           ##MS_PolicyTsqlExecutionLogin##          SQL_LOGIN                 1
        259           NT AUTHORITY\SYSTEM                      WINDOWS_LOGIN             0
        260           NT SERVICE\MSSQLSERVER                   WINDOWS_GROUP             0
        262           NT SERVICE\SQLSERVERAGENT                WINDOWS_GROUP             0
        263           ##MS_PolicyEventProcessingLogin##        SQL_LOGIN                 1
        264           ##MS_AgentSigningCertificate##           CERTIFICATE_MAPPED_LOGIN  0
        265           domain\group                             WINDOWS_GROUP             0
        (21 rows affected)

    Read the article

  • .NET 4.5 is an in-place replacement for .NET 4.0

    - by Rick Strahl
    With the betas for .NET 4.5, Visual Studio 11, and Windows 8 shipping, many people will be installing .NET 4.5 and hacking away on it. There are a number of great enhancements that are fairly transparent, but it's important to understand what .NET 4.5 actually is in terms of the CLR running on your machine. When .NET 4.5 is installed, it effectively replaces .NET 4.0 on the machine: .NET 4.0 gets overwritten by a new version of .NET 4.5 which - according to Microsoft - is supposed to be 100% backwards compatible. While 100% backwards compatible sounds great, we all know that 100% is a hard number to hit, and even the aforementioned blog post at the Microsoft site acknowledges this. But there's so much more than backwards compatibility that makes this awkward at best and confusing at worst.

    What does 'Replacement' mean?

    When you install .NET 4.5, your .NET 4.0 assemblies in the \Windows\.NET Framework\V4.0.30319 folder are overwritten with a new set of assemblies. You end up with overwritten assemblies as well as a bunch of new ones (like the new System.Net.Http assemblies, for example).

    [Screenshot: system.dll properties compared side by side - .NET 4.5 on a test machine vs. stock .NET 4.0 on a production laptop. Clearly they are different files, with a difference in file sizes (interesting that the 4.5 version is actually smaller).]

    That's not all. If you query the runtime version when .NET 4.5 is installed with Environment.Version, you still get:

        4.0.30319

    If you open the properties of the System.dll assembly in .NET 4.5, you'll also see that the file version is left at 4.0.xxx. There are differences in build numbers: .NET 4.0 shows 261 and the current .NET 4.5 beta build is 17379. I suppose you can assume a build number greater than 17000 is .NET 4.5, but that's pretty hokey, to say the least. There's no easy or obvious way to tell whether you are running on 4.0 or 4.5 - to the application they appear to be the same runtime version. And that is what Microsoft intends here: .NET 4.5 is intended as an in-place upgrade.

    Compile to 4.5, run on 4.0 - not quite!

    You can compile an application for .NET 4.5 and run it on the 4.0 runtime - that is, until you hit a new feature that doesn't exist on 4.0, at which point the app bombs at runtime. Say you write some code that is mostly .NET 4.0 but has a few of the new features of .NET 4.5, like async/await, buried deep in the bowels of the application where they only fire occasionally. .NET will happily start your application and run everything 4.0 fine, until it hits that 4.5 code - and then crash unceremoniously at runtime. Oh joy! You can run .NET 4.0 applications on .NET 4.5 of course, and that should work without much fanfare.

    Different than .NET 3.0/3.5

    Note that this in-place replacement is very different from the side-by-side installs of .NET 2.0 and 3.0/3.5, which all ran on the 2.0 version of the CLR. The two 3.x versions were basically library enhancements on top of the core .NET 2.0 runtime. Both versions ran under the .NET 2.0 runtime, which wasn't changed (other than for security patches and bug fixes) for the whole 3.x cycle. The 4.5 update instead completely replaces the .NET 4.0 runtime and leaves the actual version number set at v4.0.30319. When you build a new project with Visual Studio 2011, you can still target .NET 4.0 or you can target .NET 4.5, but you are in effect referencing the same set of assemblies for both, regardless of which version you use. What's different is the compiler used to compile and link your code: compiling with .NET 4.0 gives you just the subset of the functionality that is available in .NET 4.0, but when you use the 4.5 compiler you get the full functionality of what's actually available in the assemblies and extra libraries. It doesn't look like you will be able to use Visual Studio 2010 to develop .NET 4.5 applications.

    Good news - bad news

    Microsoft is apparently trying hard to experiment with every possible permutation of releasing new versions of the .NET framework: no two updates have been the same. Clearly, updating to a full new version of .NET (i.e. the .NET 2.0, 4.0 and at some point 5.0 runtimes) has its own set of challenges, but doing an in-place update of the runtime and then not even providing a good way to tell which version is installed is pretty whacky, even by Microsoft's standards. Especially given that .NET 4.5 includes a fairly significant update, with all the async functionality baked into the runtime. Most of the IO APIs have been updated to support task-based async operation, which significantly affects many existing APIs. To make things worse, .NET 4.5 will be the initial version of .NET that ships with Windows 8, so it will be with us for a long time to come, unless Microsoft finally decides to push .NET versions onto Windows machines as part of system upgrades (which currently doesn't happen). This is the same story we had when Vista launched with .NET 3.0, which was a minor version that was quickly replaced by 3.5, which was more long-lived and practical. People had enough problems dealing with the confusing versioning of the 3.x versions, which ran on .NET 2.0. I can't count the number of support calls and questions I've fielded because people couldn't find a .NET 3.5 entry in the IIS version dialog. The same is likely to happen with .NET 4.5. It's all well and good when we know that .NET 4.5 is an in-place replacement, but administrators and IT folks not intimately familiar with .NET are unlikely to understand this nuance and will end up thoroughly confused about which version is installed. It's hard for me to see any upside to an in-place update, and I haven't really seen a good explanation of why this approach was decided on. Sure, if the version stays the same, existing assembly bindings don't break, so applications can stay running through an update. I suppose this is useful for some component vendors and strongly signed assemblies in corporate environments. But seriously, if you are going to throw .NET 4.5 into the mix, who won't be recompiling all code and thoroughly testing that it works on .NET 4.5? A recompile requirement doesn't seem that serious in light of a major version upgrade.

    Resources

        http://blogs.msdn.com/b/dotnet/archive/2011/09/26/compatibility-of-net-framework-4-5.aspx
        http://www.devproconnections.com/article/net-framework/net-framework-45-versioning-faces-problems-141160

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET.

    Read the article

  • AngularJs ng-cloak Problems on large Pages

    - by Rick Strahl
    I've been working on a rather complex and large Angular page. Unlike a typical AngularJs SPA style 'application', this particular page is just that: a single page with a large amount of data on it that has to be visible all at once. The problem is that when this large page loads, it flickers and displays template markup briefly before kicking into its actual content rendering. This is what the Angular ng-cloak directive is supposed to address, but in this case I had no luck getting it to work properly.

    This application is a shop floor app where workers need to see all related information in one big screen view, so some of the benefits of Angular's routing and view swapping features couldn't be applied. Instead, we decided to have one very big view but lots of ng-controllers and directives to break out the logic for code separation. For code separation this works great - there are a number of small controllers that deal with their own individual and isolated application concerns. For HTML separation we used partial ASP.NET MVC Razor views, which made breaking the HTML into manageable pieces super easy and made migration of this page from a previous server-side Razor page much easier. We were also able to leverage most of our server-side localization without a lot of changes as a bonus. But as a result of this choice, the initial HTML document that loads is rather large, even without any data loaded into it, resulting in a fairly large DOM tree that Angular must manage.

    Large Page and Angular Startup

    The problem on this particular page is that there's quite a bit of markup - 35k's worth of markup without any data loaded, in fact. It's a large HTML page with a complex DOM tree, and there are quite a lot of Angular {{ }} markup expressions in the document. Angular provides the ng-cloak directive to try and hide the element it cloaks, so that you don't see the flash of these markup expressions when the page initially loads, before Angular has a chance to render the data into the markup expressions:

        <div id="mainContainer" class="mainContainer boxshadow"
             ng-app="app" ng-cloak>

    Note the ng-cloak attribute on this element, which here is an outer wrapper element of most of this large page's content. ng-cloak is supposed to prevent displaying the content below it until Angular has taken control and is ready to render the data into the templates. Alas, with this large page the end result unfortunately is a brief flash of the un-rendered markup, with raw {{ }} template expressions briefly visible on screen. It's brief, but plenty ugly - right? And depending on the speed of the machine, this flash gets more noticeable on slow machines that take longer to process the initial HTML DOM.

    ng-cloak Styles

    ng-cloak works by temporarily hiding the marked-up element, which it does by essentially applying this style:

        [ng\:cloak], [ng-cloak], [data-ng-cloak], [x-ng-cloak], .ng-cloak, .x-ng-cloak {
            display: none !important;
        }

    This style is inlined as part of AngularJs itself. If you look at the angular.js source file, you'll find this at the very end of the file:

        !angular.$$csp() && angular.element(document)
          .find('head')
          .prepend('<style type="text/css">@charset "UTF-8";[ng\\:cloak],[ng-cloak],' +
            '[data-ng-cloak],[x-ng-cloak],.ng-cloak,.x-ng-cloak,' +
            '.ng-hide{display:none !important;}ng\\:form{display:block;}' +
            '.ng-animate-block-transitions{transition:0s all!important;-webkit-transition:0s all!important;}' +
            '</style>');

    This is meant to initially hide any elements that carry the ng-cloak attribute or one of the other permutations of the directive markup. Unfortunately, on this particular web page ng-cloak had no effect - I still saw the flicker.

    Why doesn't ng-cloak work?

    The problem is, of course, timing. Angular needs to get control of the page before it ever starts processing even the ng-cloak attribute (or style, etc.). Because this page is rather large (about 35k of non-data HTML), it takes a while for the DOM to actually plow through the HTML. With the Angular <script> tag defined at the bottom of the page, after the HTML DOM content, there's a slight delay, which causes the flicker. For smaller pages the initial DOM load/parse cycle is so fast that the markup never shows, but with larger content pages it may show and become an annoying problem.

    Workarounds

    There are a number of simple ways around this issue, and some of them are hinted at in the Angular documentation.

    Load Angular Sooner

    One obvious thing that would help is loading Angular at the top of the page, BEFORE the DOM loads, which would give it much earlier control. The old ng-cloak documentation actually recommended putting the Angular.js script into the header of the page (apparently this was recently removed), but generally it's not a good practice to load scripts in the header, for page load performance. This is especially true if you load other libraries like jQuery, which should be loaded prior to Angular so it can use jQuery rather than its own jqLite subset. This is not something I would normally like to do, and also something that I'd likely forget in the future and end up right back here :-).

    Use ng-include for Child Content

    Angular supports nesting of child templates via the ng-include directive, which essentially delay-loads HTML content. This helps by removing a lot of the template content from the main page, and so getting control to Angular a lot sooner, in order to hide the markup template content. In the application in question, I realize in hindsight that it might have been smarter to break this page out with client-side ng-include directives instead of the MVC Razor partial views we used to break up the page sections. Razor partial views give that nice separation as well, but in the end Razor puts humpty dumpty (i.e. the HTML) back together into a single, rather large HTML document. Razor provides the logical separation but still results in a large physical result document. But Razor also ended up being helpful for a few security-related blocks handled via server-side template logic that simply excludes certain parts of the UI the user is not allowed to see - something that you can't really do with client-side exclusion like ng-hide/ng-show: client-side content is always there, whereas on the server side you can simply not send it to the client. Another reason I'm not a huge fan of ng-include is that it adds another HTTP hit to a request, as templates are loaded from the server dynamically as needed. Given that this page was already heavy with resources, adding another 10 separate ng-include directives wouldn't be beneficial :-). ng-include is a valid option if you start from scratch and partition your logic. Of course, if you don't have complex pages, having completely separate views that are swapped in as they are accessed is even better, but we didn't have this option due to the information having to be on screen all at once.

    Avoid using {{ }} Expressions

    The biggest issue that ng-cloak attempts to address isn't so much displaying the original content - it's displaying empty {{ }} markup expression tags that get embedded into content. It gives you the dreaded "now you see it, now you don't" effect, where you sometimes see three separate rendering states: markup junk, empty views, then views filled with data. If you can remove {{ }} expressions from the page, you remove most of the perceived double-draw effect, as you would effectively start with a blank form and go straight to a filled form. To do this you can forego {{ }} expressions and replace them with ng-bind directives on DOM elements. For example, you can turn:

        <div class="list-item-name listViewOrderNo">
            <a href='#'>{{lineItem.MpsOrderNo}}</a>
        </div>

    into:

        <div class="list-item-name listViewOrderNo">
            <a href="#" ng-bind="lineItem.MpsOrderNo"></a>
        </div>

    to get identical results, but because the {{ }} expression has been removed, there's no double-draw effect for this element. Again, not a great solution. The {{ }} syntax sure reads cleaner and is more fluent to type, IMHO. In some cases you may also not have an outer element to attach ng-bind to, which then requires you to artificially inject DOM elements into the page. This is especially painful if you have several consecutive values like {{Firstname}} {{Lastname}}, for example. It's an option, though, especially if you think of this issue up front and you don't have a ton of expressions to deal with.

    Add the ng-cloak Styles manually

    You can also explicitly define the CSS styles that Angular injects via code in your application's own style sheet. By doing so, the styles become immediately available and are applied right when the page loads - no flicker. I use the minimal:

        [ng-cloak] { display: none !important; }

    which works for:

        <div id="mainContainer" class="mainContainer dialog boxshadow"
             ng-app="app" ng-cloak>

    If you use one of the other combinations, add the other CSS selectors as well, or use the full style shown earlier. Angular will still load its version of the ng-cloak styling and override those settings later, but this does the trick of hiding the content before that CSS is injected into the page. Adding the CSS to your own style sheet works well and is IMHO by far the best option.

    The nuclear option: Hiding the Content manually

    Using the explicit CSS is the best choice, so the following shouldn't ever be necessary. But I'll mention it here as it gives some insight into how you can hide/show content manually on load for other frameworks or in your own markup-based templates. Before I figured out that I could explicitly embed the CSS style into the page, I had tried to figure out why ng-cloak wasn't doing its job. After wasting an hour getting nowhere, I finally decided to just manually hide and show the container. The idea is simple: initially hide the container, then show it once Angular has done its initial processing and removal of the template markup from the page.

        <div id="mainContainer" class="mainContainer boxshadow"
             ng-app="app" style="display:none">

    Notice the display:none style that explicitly hides the element initially on the page. Then, once Angular has run its initialization and effectively processed the template markup on the page, you can show the content. For Angular, this 'ready' event is the app.run() function:

        app.run(function ($rootScope, $location, cellService) {
            $("#mainContainer").show();
            ...
        });

    This effectively removes the display:none style and the content displays. By the time app.run() fires, the DOM is ready to be displayed, with filled data or at least empty data - Angular has gotten control.

    Edge Case

    Clearly this is an edge case. In general, initial HTML pages tend to be reasonably sized, and the load time for the HTML and Angular is fast enough that there's no flicker between the rendering times. This only becomes an issue as the initial pages get rather large. Regardless - if you have an Angular application, it's probably a good idea to add the CSS style to your application's CSS (or a common shared one) just to make sure that content is always hidden. You never know how slow a browser somebody might be running, and while your super fast dev machine might not show any flicker, grandma's old XP box very well might...

    © Rick Strahl, West Wind Technologies, 2005-2014. Posted in Angular, JavaScript, CSS, HTML.

    Read the article

  • A Taxonomy of Numerical Methods v1

    - by JoshReuben
    Numerical Analysis - When, What, (but not how)

    Once you understand the math and know C++, numerical methods are basically blocks of iterative and conditional math code. I found the real trick was seeing the forest for the trees: knowing which method to use for which situation. It's pretty easy to get lost in the details, so I've tried to organize these methods in a way that I can quickly look them up. I've included links to detailed explanations and to C++ code examples. I've tried to classify numerical methods into the following broad categories:

        - Solving Systems of Linear Equations
        - Solving Non-Linear Equations Iteratively
        - Interpolation
        - Curve Fitting
        - Optimization
        - Numerical Differentiation & Integration
        - Solving ODEs
        - Boundary Problems
        - Solving EigenValue problems

    Enjoy - I did!

    Solving Systems of Linear Equations

    Overview: solve sets of algebraic equations with x unknowns. The set is commonly in matrix form.

    Gauss-Jordan Elimination
    http://en.wikipedia.org/wiki/Gauss%E2%80%93Jordan_elimination
    C++: http://www.codekeep.net/snippets/623f1923-e03c-4636-8c92-c9dc7aa0d3c0.aspx
    Produces the solution of the equations and the coefficient matrix. Efficient, stable. 2 steps:
        - Forward elimination - matrix decomposition: reduce the set to triangular form (0s below the diagonal), or row echelon form. If degenerate, then there is no solution.
        - Backward elimination - write the original matrix as the product of its inverse matrix and its reduced row-echelon matrix to reduce the set to row canonical form, and use back-substitution to find the solution to the set.
    Elementary ops for matrix decomposition:
        - Row multiplication
        - Row switching
        - Adding multiples of rows to other rows
    Use pivoting to ensure rows are ordered for achieving triangular form.

    LU Decomposition
    http://en.wikipedia.org/wiki/LU_decomposition
    C++: http://ganeshtiwaridotcomdotnp.blogspot.co.il/2009/12/c-c-code-lu-decomposition-for-solving.html
    Represent the matrix as a product of lower and upper triangular matrices. A modified version of Gauss-Jordan elimination. Advantage: forward and backward elimination can easily be applied to solve triangular matrices. Techniques:
        - Doolittle method - sets the L matrix diagonal to unity
        - Crout method - sets the U matrix diagonal to unity
    Note: both the L and U matrices share the same unity diagonal and can be stored compactly in the same matrix.

    Gauss-Seidel Iteration
    http://en.wikipedia.org/wiki/Gauss%E2%80%93Seidel_method
    C++: http://www.nr.com/forum/showthread.php?t=722
    Transform the linear set of equations into a single equation and then use numerical integration (as integration formulas have sums, it is implemented iteratively). An optimization of Gauss-Jacobi: 1.5 times faster, requiring a quarter of the iterations to achieve the same tolerance.

    Solving Non-Linear Equations Iteratively

    Overview: find roots of polynomials - there may be 0, 1 or n solutions for an n-th order polynomial - using iterative techniques. Iterative methods:
        - are used when there are no known analytical techniques;
        - require the functions in the set to be continuous and differentiable;
        - require an initial seed value - the choice is critical to convergence, so conduct multiple runs with different starting points and then select the best result;
        - are systematic - iterate until diminishing returns, tolerance or max-iteration conditions are met;
        - bracketing techniques will always yield convergent solutions; non-bracketing methods may fail to converge.

    Incremental method
    If a nonlinear function has opposite signs at the 2 ends of a small interval x1 & x2, then there is likely to be a solution in their interval. Solutions are detected by evaluating the function over interval steps, looking for a change in sign, and adjusting the step size dynamically. Limitations: can miss closely spaced solutions in large intervals, cannot detect degenerate (coinciding) solutions, is limited to functions that cross the x-axis, and gives false positives for singularities.

    Fixed point method
    http://en.wikipedia.org/wiki/Fixed-point_iteration
    C++: http://books.google.co.il/books?id=weYj75E_t6MC&pg=PA79&lpg=PA79&dq=fixed+point+method++c%2B%2B&source=bl&ots=LQ-5P_taoC&sig=lENUUIYBK53tZtTwNfHLy5PEWDk&hl=en&sa=X&ei=wezDUPW1J5DptQaMsIHQCw&redir_esc=y#v=onepage&q=fixed%20point%20method%20%20c%2B%2B&f=false
    Algebraically rearrange the equation to isolate a variable, then apply the incremental method.

    Bisection method
    http://en.wikipedia.org/wiki/Bisection_method
    C++: http://numericalcomputing.wordpress.com/category/algorithms/
    Bracketed - select an initial interval, keep bisecting it at the midpoint into sub-intervals, and then apply the incremental method on smaller and smaller intervals - zoom in. Advantage: unaffected by the function's gradient, hence reliable. Disadvantage: slow convergence.

    False Position Method
    http://en.wikipedia.org/wiki/False_position_method
    C++: http://www.dreamincode.net/forums/topic/126100-bisection-and-false-position-methods/
    Bracketed - select an initial interval, and use the relative value of the function at the interval end points to select the next sub-intervals (estimate how far between the end points the solution might be, and subdivide based on this).

    Newton-Raphson method
    http://en.wikipedia.org/wiki/Newton's_method
    C++: http://www-users.cselabs.umn.edu/classes/Summer-2012/csci1113/index.php?page=./newt3
    Also known as Newton's method. Convenient and efficient. Not bracketed - only a single initial guess is required to start the iteration - but requires an analytical expression for the first derivative of the function as input, and evaluates the function and its derivative at each step. Can be extended to the Newton MultiRoot method for solving multiple roots. Can easily be applied to an n-coupled set of non-linear equations: conduct a Taylor series expansion of a function, dropping terms of order n, rewrite as a Jacobian matrix of partial derivatives, and convert to simultaneous linear equations!!!

    Secant Method
    http://en.wikipedia.org/wiki/Secant_method
    C++: http://forum.vcoderz.com/showthread.php?p=205230
    Unlike Newton-Raphson, it can estimate the first derivative from an initial interval (and does not require the root to be bracketed) instead of taking the derivative as input. Since the derivative is approximated, it may converge more slowly, but it is fast in practice as it does not have to evaluate the derivative at each step. Similar implementation to the False Position method.
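
    As a pocket reference, a minimal Python sketch of the Newton-Raphson iteration just described (single seed, analytical derivative supplied by the caller):

        def newton_raphson(f, dfdx, x0, tol=1e-12, max_iter=50):
            # Follow the tangent line to the next estimate until the step is tiny.
            x = x0
            for _ in range(max_iter):
                step = f(x) / dfdx(x)
                x -= step
                if abs(step) < tol:
                    return x
            raise ArithmeticError("did not converge")

        # Root of x^2 - 2 near 1.0 -> sqrt(2) ~= 1.41421356...
        print(newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.0))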
Birge-Vieta Method
http://mat.iitm.ac.in/home/sryedida/public_html/caimna/transcendental/polynomial%20methods/bv%20method.html
C++: http://books.google.co.il/books?id=cL1boM2uyQwC&pg=SA3-PA51&lpg=SA3-PA51&dq=Birge-Vieta+Method+c%2B%2B&source=bl&ots=QZmnDTK3rC&sig=BPNcHHbpR_DKVoZXrLi4nVXD-gg&hl=en&sa=X&ei=R-_DUK2iNIjzsgbE5ID4Dg&redir_esc=y#v=onepage&q=Birge-Vieta%20Method%20c%2B%2B&f=false
Combines Horner's method of polynomial evaluation (transforming the polynomial into lesser-degree polynomials that are more computationally efficient to process) with Newton-Raphson to provide a computational speed-up

Interpolation

Overview
Construct new data points that fit as closely as possible within the range of a discrete set of known points (obtained via sampling or experimentation)
Based on a Taylor series expansion of a function f(x) around a specific value of x

Linear Interpolation
http://en.wikipedia.org/wiki/Linear_interpolation
C++: http://www.hamaluik.com/?p=289
Straight line between 2 points → concatenate interpolants between each pair of data points

Bilinear Interpolation
http://en.wikipedia.org/wiki/Bilinear_interpolation
C++: http://supercomputingblog.com/graphics/coding-bilinear-interpolation/2/
Extension of the linear function for interpolating functions of 2 variables – perform linear interpolation first in 1 direction, then in the other.
Used in image processing – e.g. texture-mapping filters. Uses 4 vertices to interpolate a value within a unit cell.

Lagrange Interpolation
http://en.wikipedia.org/wiki/Lagrange_polynomial
C++: http://www.codecogs.com/code/maths/approximation/interpolation/lagrange.php
For polynomials
Requires recomputation of all terms for each distinct x value – can only be applied to a small number of nodes
Numerically unstable

Barycentric Interpolation
http://epubs.siam.org/doi/pdf/10.1137/S0036144502417715
C++: http://www.gamedev.net/topic/621445-barycentric-coordinates-c-code-check/
Rearrange the terms in the equation of the Lagrange interpolation by defining weight functions that are independent of the interpolated value of x

Newton Divided Difference Interpolation
http://en.wikipedia.org/wiki/Newton_polynomial
C++: http://jee-appy.blogspot.co.il/2011/12/newton-divided-difference-interpolation.html
Interpolation polynomial approximation for a given set of data points, expressed in Newton form – divided differences are used to approximately calculate the various difference terms (cf. Hermite divided differences)
For a given set of 3 data points, fit a quadratic interpolant through the data
Bracketed functions allow Newton divided differences to be calculated recursively, building up a difference table
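Here is a minimal Lagrange interpolation sketch – my own toy code, with illustrative sample points, not the linked implementation. It also shows why every term must be recomputed for each new x:

#include <iostream>
#include <vector>

// Evaluate the Lagrange interpolating polynomial through (xs[i], ys[i]) at x.
// Each basis polynomial L_i(x) is 1 at xs[i] and 0 at every other node.
double lagrange(const std::vector<double>& xs,
                const std::vector<double>& ys, double x) {
    double sum = 0.0;
    for (size_t i = 0; i < xs.size(); ++i) {
        double basis = 1.0;
        for (size_t j = 0; j < xs.size(); ++j)
            if (j != i) basis *= (x - xs[j]) / (xs[i] - xs[j]);
        sum += ys[i] * basis;
    }
    return sum;
}

int main() {
    // 3 samples of y = x^2; interpolating at 1.5 recovers 2.25 exactly.
    std::vector<double> xs = {0, 1, 2}, ys = {0, 1, 4};
    std::cout << lagrange(xs, ys, 1.5) << '\n';
}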
Cubic Spline Interpolation
http://en.wikipedia.org/wiki/Spline_interpolation
C++: https://www.marcusbannerman.co.uk/index.php/home/latestarticles/42-articles/96-cubic-spline-class.html
A spline is a piecewise polynomial
Provides smoothness – for interpolations with significantly varying data
Uses weighted coefficients to bend the function so that it is smooth & its 1st & 2nd derivatives are continuous through the edge points of each interval

Curve Fitting
A generalization of interpolation whereby the given data points may contain noise → the curve does not necessarily pass through all the points

Least Squares Fit
http://en.wikipedia.org/wiki/Least_squares
C++: http://www.ccas.ru/mmes/educat/lab04k/02/least-squares.c
Residual – the difference between an observed value & the expected value
The model function is often chosen as a linear combination of specified functions
Determines:
A) the model instance in which the sum of squared residuals has the least value
B) the parameter values for which the model best fits the data

Straight Line Fit
Linear correlation between an independent variable and a dependent variable

Linear Regression
http://en.wikipedia.org/wiki/Linear_regression
C++: http://www.oocities.org/david_swaim/cpp/linregc.htm
A special case of statistically exact extrapolation
Leverages least squares
Given a basis function, the sum of the residuals is determined and the corresponding gradient equation is expressed as a set of normal linear equations in matrix form that can be solved (e.g. using LU Decomposition)
Can be weighted – drop the assumption that all errors have the same significance → the confidence of accuracy is different for each data point; fit the function closer to points with higher weights

Polynomial Fit – use a polynomial basis function

Moving Average
http://en.wikipedia.org/wiki/Moving_average
C++: http://www.codeproject.com/Articles/17860/A-Simple-Moving-Average-Algorithm
Used for smoothing (cancelling fluctuations to highlight longer-term trends & cycles), time-series data analysis, signal-processing filters
Replace each data point with the average of its neighbors. Can be simple (SMA), weighted (WMA) or exponential (EMA). Lags behind the latest data points – extra weight can be given to more recent data points. Weights can decrease arithmetically or exponentially with distance from the point.
Parameters: smoothing factor, period, weight basis
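A minimal least-squares straight-line fit, using the closed-form normal equations for y = a + b*x. This is my own toy sketch with made-up noisy data, not the linked code:

#include <iostream>
#include <vector>

// Fit y = a + b*x by ordinary least squares, via the closed-form
// normal-equation solution for the straight-line case.
void lsqLine(const std::vector<double>& x, const std::vector<double>& y,
             double& a, double& b) {
    const double n = static_cast<double>(x.size());
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (size_t i = 0; i < x.size(); ++i) {
        sx += x[i]; sy += y[i];
        sxx += x[i] * x[i]; sxy += x[i] * y[i];
    }
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx); // slope
    a = (sy - b * sx) / n;                         // intercept
}

int main() {
    // Noisy samples around y = 2x + 1.
    std::vector<double> x = {0, 1, 2, 3}, y = {1.1, 2.9, 5.2, 6.9};
    double a, b;
    lsqLine(x, y, a, b);
    std::cout << "a=" << a << " b=" << b << '\n'; // roughly a=1, b=2
}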
Optimization

Overview
Given a function of multiple variables, find a minimum (or a maximum, by minimizing –f(x))
Iterative approach
Efficient, but not necessarily reliable
Complicating conditions: noisy data, constraints, non-linear models
Detect candidate extrema via the sign of the first derivative – note that the derivative at saddle points will also be 0

Local minima

Bisection method
Similar to the bisection method for finding a root of a non-linear equation
Start with an interval that contains a minimum

Golden Search method
http://en.wikipedia.org/wiki/Golden_section_search
C++: http://www.codecogs.com/code/maths/optimization/golden.php
Bisect intervals according to the golden ratio 0.618..
Achieves the interval reduction with a single new function evaluation per iteration instead of 2

Newton-Raphson Method

Brent method
http://en.wikipedia.org/wiki/Brent's_method
C++: http://people.sc.fsu.edu/~jburkardt/cpp_src/brent/brent.cpp
Based on quadratic (parabolic) interpolation – if the function is smooth & parabolic near the minimum, then a parabola fitted through any 3 points should approximate the minimum – fails when the 3 points are collinear, in which case the denominator is 0

Simplex Method
http://en.wikipedia.org/wiki/Simplex_algorithm
C++: http://www.codeguru.com/cpp/article.php/c17505/Simplex-Optimization-Algorithm-and-Implemetation-in-C-Programming.htm
Finds the global minimum of a multi-variable function
Direct search – no derivatives required
At each step it maintains a non-degenerate simplex – a convex hull of n+1 vertices. It obtains the minimum of a function of n variables by evaluating the function at the n+1 vertices, iteratively replacing the worst-result point with a better one, shrinking the multidimensional simplex around the best point.
Point replacement involves expanding & contracting the simplex near the worst-value point to determine a better replacement point
Oscillation can be avoided by choosing the 2nd-worst result
Restart if it gets stuck
Parameters: contraction & expansion factors

Simulated Annealing
http://en.wikipedia.org/wiki/Simulated_annealing
C++: http://code.google.com/p/cppsimulatedannealing/
Analogy to heating & cooling metal to strengthen its structure
Stochastic method – apply a random permutation search for the global minimum, avoiding entrapment in local minima via hill climbing
Annealing (heating) schedule parameters: temperature, iterations at each temperature, temperature delta
Cooling schedule – can be linear, step-wise or exponential

Differential Evolution
http://en.wikipedia.org/wiki/Differential_evolution
C++: http://www.amichel.com/de/doc/html/
One of the more advanced stochastic methods, analogous to biological processes (genetic algorithms, evolution strategies)
Parallel direct-search method over multiple discrete or continuous variables
The initial population of variable vectors is chosen randomly – if the weighted difference vector of 2 vectors yields a lower objective-function value, it replaces the comparison vector
Many parameters: #parents, #variables, step size, crossover constant, etc.
Convergence is slow – many more function evaluations than simulated annealing

Numerical Differentiation

Overview
2 approaches to finite difference methods:
· A) approximate the function via polynomial interpolation, then differentiate
· B) Taylor series approximation – additionally provides an error estimate

Finite Difference methods
http://en.wikipedia.org/wiki/Finite_difference_method
C++: http://www.wpi.edu/Pubs/ETD/Available/etd-051807-164436/unrestricted/EAMPADU.pdf
Find differences between high-order derivative values – approximate differential equations by finite differences at evenly spaced data points
Based on the forward & backward Taylor series expansions of f(x) about x plus or minus multiples of a delta h. The sum of the two series contains the even derivatives and their difference contains the odd derivatives – coupled equations that can be solved.
This provides an approximation of the derivative to within O(h^2) accuracy. There are also the central difference & the extended central difference, the latter accurate to O(h^4).
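As a sketch of the Taylor-series idea, here is a central-difference first derivative. This is toy code of my own; the function and the step size h are illustrative:

#include <cmath>
#include <iostream>

// Central difference: f'(x) ~ (f(x+h) - f(x-h)) / (2h).
// The even-order terms of the forward & backward Taylor expansions cancel,
// leaving an error of order O(h^2).
double f(double x) { return std::sin(x); }

double dcentral(double x, double h) {
    return (f(x + h) - f(x - h)) / (2.0 * h);
}

int main() {
    // d/dx sin(x) at x = 1 is cos(1) ~ 0.540302.
    std::cout << dcentral(1.0, 1e-4) << '\n';
}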
Richardson Extrapolation
http://en.wikipedia.org/wiki/Richardson_extrapolation
C++: http://mathscoding.blogspot.co.il/2012/02/introduction-richardson-extrapolation.html
A sequence-acceleration method applied to finite differences
Fast convergence, high accuracy – O(h^4)

Derivatives via Interpolation
The finite difference method cannot be applied to discrete data points at uneven intervals – so approximate the derivative of f(x) using the derivative of an interpolant, e.g. via 3-point Lagrange interpolation
Note: the higher the order of the derivative, the lower the approximation precision

Numerical Integration
Estimate finite & infinite integrals of functions
A more accurate procedure than numerical differentiation
Use when it is not possible to obtain the integral of a function analytically, or when the function is not given and only the data points are

Newton-Cotes Methods
http://en.wikipedia.org/wiki/Newton%E2%80%93Cotes_formulas
C++: http://www.siafoo.net/snippet/324
For equally spaced data points
Computationally easy – based on local interpolation of n rectangular strip areas, piecewise fitted to a polynomial to get the total area
Evaluate the integrand at n+1 evenly spaced points & approximate the definite integral by a sum
The weights are derived from the Lagrange basis polynomials
Use the Trapezoidal Rule for 2-point formulas, Simpson's 1/3 Rule for 3-point formulas, Simpson's 3/8 Rule for 4-point formulas and Boole's Rule for 5-point formulas. Higher orders yield more accurate results.
The Trapezoidal Rule uses simple strip areas; Simpson's Rule replaces the integrand f(x) with a quadratic polynomial p(x) that takes the same values as f(x) at the end points, but adds a midpoint

Romberg Integration
http://en.wikipedia.org/wiki/Romberg's_method
C++: http://code.google.com/p/romberg-integration/downloads/detail?name=romberg.cpp&can=2&q=
Combines the trapezoidal rule with Richardson Extrapolation
Evaluates the integrand at equally spaced points
The integrand must have continuous derivatives
Each R(n,m) extrapolation uses a higher-order integrand-polynomial replacement rule (the zeroth starts with the trapezoidal rule) → a lower-triangular matrix of equation coefficients, where the bottom-right term has the most accurate approximation. The process continues until the difference between 2 successive diagonal terms becomes sufficiently small.

Gaussian Quadrature
http://en.wikipedia.org/wiki/Gaussian_quadrature
C++: http://www.alglib.net/integration/gaussianquadratures.php
The data points are chosen to yield the best possible accuracy – requires fewer evaluations
Can handle singularities & functions that are difficult to evaluate
The integrand can include a weighting function determined by a set of orthogonal polynomials
The points & weights are selected so that the quadrature yields the exact integral if f(x) is a polynomial of degree <= 2n+1
Techniques (basically different weighting functions):
· Gauss-Legendre Integration: w(x) = 1
· Gauss-Laguerre Integration: w(x) = e^-x
· Gauss-Hermite Integration: w(x) = e^-x^2
· Gauss-Chebyshev Integration: w(x) = 1 / sqrt(1-x^2)
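For the Newton-Cotes family, here is a composite Simpson's 1/3 rule sketch – my own toy code (the integrand & interval are illustrative, not from the linked snippet):

#include <cmath>
#include <iostream>

// Composite Simpson's 1/3 rule over [a, b] with n subintervals (n must be even):
// each pair of strips is fitted with a quadratic through its end & mid points.
double simpson(double (*g)(double), double a, double b, int n) {
    double h = (b - a) / n, sum = g(a) + g(b);
    for (int i = 1; i < n; ++i)
        sum += g(a + i * h) * ((i % 2) ? 4.0 : 2.0); // 4x at midpoints, 2x at joins
    return sum * h / 3.0;
}

double f(double x) { return std::exp(-x * x); }

int main() {
    // Integral of e^(-x^2) from 0 to 1 ~ 0.746824.
    std::cout << simpson(f, 0.0, 1.0, 100) << '\n';
}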
Solving ODEs
Use when high-order differential equations cannot be solved analytically
Evaluated under boundary conditions
RK for systems – a high-order differential equation can always be transformed into a coupled first-order system of equations

Euler method
http://en.wikipedia.org/wiki/Euler_method
C++: http://rosettacode.org/wiki/Euler_method
First-order Runge-Kutta method
Simple recursive method – given an initial value, step forward using the derivative at each point
Unstable & not very accurate (O(h) error) – not used in practice
As a first-order method, the local error (truncation error per step) is proportional to the square of the step size, and the global error (error at a given time) is proportional to the step size
In evolving the solution between data points xn & xn+1, it only evaluates derivatives at the beginning of the interval, xn → asymmetric at boundaries

Higher order Runge-Kutta
http://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods
C++: http://www.dreamincode.net/code/snippet1441.htm
2nd & 4th order RK – introduce parameterized midpoints for more symmetric solutions → accuracy at higher computational cost
Adaptive RK (RK-Fehlberg) – estimate the truncation error at each integration step & automatically adjust the step size to keep the error within prescribed limits. At each step 2 approximations are compared – if they disagree beyond a specified accuracy, the step size is reduced

Boundary Value Problems
The solution conditions of the differential equation are located at 2 different values of the independent variable x → more difficult, because you cannot just start at a point of initial value – there may not be enough starting conditions available at either end point to produce a unique solution
An n-order equation requires n boundary conditions – the task is to determine the missing n-1 conditions that cause the given conditions at the other boundary to be satisfied

Shooting Method
http://en.wikipedia.org/wiki/Shooting_method
C++: http://ganeshtiwaridotcomdotnp.blogspot.co.il/2009/12/c-c-code-shooting-method-for-solving.html
Iteratively guess the missing values at one end & integrate, then inspect the discrepancy with the boundary values at the other end to adjust the estimate
Given starting boundary values u1 & u2 that bracket the root u, solve for u via the false position method (solving the differential equation as an initial value problem with 4th-order RK), then use u to solve the differential equations

Finite Difference Method
For linear & non-linear systems
Higher-order derivatives require more computational steps – some combinations of boundary conditions may not work though
Improve the accuracy by increasing the number of mesh points
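Here is a classic 4th-order Runge-Kutta step, sketched on the toy ODE y' = y (my own illustrative code, not the linked snippet):

#include <iostream>

// Classic RK4 step for y' = f(x, y): combine four slope samples
// (beginning, two midpoints, end) into a weighted average.
double rk4step(double (*f)(double, double), double x, double y, double h) {
    double k1 = f(x, y);
    double k2 = f(x + h / 2, y + h / 2 * k1);
    double k3 = f(x + h / 2, y + h / 2 * k2);
    double k4 = f(x + h, y + h * k3);
    return y + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4);
}

double f(double x, double y) { return y; } // y' = y, so y(x) = e^x for y(0) = 1

int main() {
    double x = 0.0, y = 1.0, h = 0.1;
    while (x < 1.0 - 1e-9) { y = rk4step(f, x, y, h); x += h; }
    std::cout << y << '\n'; // ~2.718281 (e), accurate to O(h^4) per unit interval
}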
Solving Eigenvalue Problems
For an eigenvector, multiplication by the matrix reduces to multiplication by a scalar (Av = λv) → the matrix problem converts into a polynomial one: the eigenvalues are the roots of the characteristic polynomial det(A – λI) = 0
Eigenvalue problem: for a given set of equations in matrix form, determine the solution eigenvalues & eigenvectors
Similar matrices have the same eigenvalues
Use orthogonal similarity transforms to reduce a matrix to diagonal form, from which the eigenvalues & eigenvectors can be computed iteratively

Jacobi method
http://en.wikipedia.org/wiki/Jacobi_method
C++: http://people.sc.fsu.edu/~jburkardt/classes/acs2_2008/openmp/jacobi/jacobi.html
Robust but computationally intense – use for small matrices, < 10x10

Power Iteration
http://en.wikipedia.org/wiki/Power_iteration
For a given real symmetric matrix, generates the largest single eigenvalue & its eigenvector
Simplest method – does not compute a matrix decomposition → suitable for large, sparse matrices

Inverse Iteration
Variation of the power iteration method – generates the smallest eigenvalue, by iterating with the inverse matrix

Rayleigh Method
http://en.wikipedia.org/wiki/Rayleigh's_method_of_dimensional_analysis
Variation of the power iteration method

Rayleigh Quotient Method
Variation of the inverse iteration method

Matrix Tri-diagonalization Method
Use the Householder algorithm to reduce an NxN symmetric matrix to a tridiagonal real symmetric matrix via N-2 orthogonal transforms

What's Next
Outside of numerical methods, there are lots of different types of algorithms that I've learned over the decades:
Data Mining (I covered this briefly in a previous post: http://geekswithblogs.net/JoshReuben/archive/2007/12/31/ssas-dm-algorithms.aspx )
Search & Sort
Routing
Problem Solving
Logical Theorem Proving
Planning
Probabilistic Reasoning
Machine Learning
Solvers (e.g. MIP)
Bioinformatics (Sequence Alignment, Protein Folding)
Quant Finance (I read Wilmott's books – interesting)
Sooner or later, I'll cover the above topics as well.

    Read the article

  • Knight movement: how to output all possible moves

    - by josh kant
    Hi, I tried the following code and it is still not working. It has a problem with backtracking: it just fills the squares of the board with numbers, but not in the expected order. The code is as follows:

#include <iostream>
#include <cstdlib>
using namespace std;

int permuteno = 0; // no. of board positions tried

// the 8 possible knight move offsets
const int dx[8] = {-1, +1, +2, +2, +1, -1, -2, -2};
const int dy[8] = {-2, -2, -1, +1, +2, +2, +1, -1};

void output(int *p[], int n) {
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++)
            cout << p[i][j] << '\t';
        cout << endl;
    }
}

// is (x, y) on the board and not yet used?
bool canMove(int *p[], int *used[], int x, int y, int n) {
    if (x < 0 || x >= n) return false;
    if (y < 0 || y >= n) return false;
    if (used[x][y] != 0) return false;
    if (p[x][y] != 0) return false;
    return true;
}

// place move no. `count` at (x, y), then try the 8 knight moves recursively;
// un-mark the square on failure (backtrack)
bool knights(int *p[], int *used[], int x, int y, int n, int count) {
    if (!canMove(p, used, x, y, n)) return false;
    p[x][y] = count;
    used[x][y] = 1;
    permuteno++;
    if (count == n * n) return true; // every square visited
    for (int k = 0; k < 8; k++)
        if (knights(p, used, x + dx[k], y + dy[k], n, count + 1))
            return true;
    used[x][y] = 0; // backtrack
    p[x][y] = 0;
    return false;
}

int main(int argc, char *argv[]) {
    if (argc != 4) {
        cout << "usage: knights <board size> <start x> <start y>" << endl;
        return 0;
    }
    int n = atoi(argv[1]); // size of board
    int x = atoi(argv[2]); // starting pos
    int y = atoi(argv[3]);
    if (n <= 0) {
        cout << "Invalid board size." << endl;
        return 0;
    }
    cout << "board size: " << n << ", " << n << endl;
    cout << "starting pos: " << x << ", " << y << endl;

    // dynamic allocation of the board & the used-square tracker, all zeroed
    int **p = new int *[n];
    int **used = new int *[n];
    for (int i = 0; i < n; i++) {
        p[i] = new int[n];
        used[i] = new int[n];
        for (int j = 0; j < n; j++) { p[i][j] = 0; used[i][j] = 0; }
    }

    if (knights(p, used, x, y, n, 1)) {
        cout << "solution found:" << endl;
        output(p, n);
    } else {
        cout << "Solution not found" << endl;
        output(p, n);
    }
    cout << "no. perm " << permuteno << endl;
    return 0;
}

It has to output all the possible moves of the knight on an n x n board. Any help would be appreciated.

    Read the article
