Search Results

Search found 5655 results on 227 pages for 'stl algorithm'.


  • Linking LLVM JIT Code to Static LLVM Libraries?

    - by inflector
    I'm in the process of implementing a cross-platform (Mac OS X, Windows, and Linux) application which will do lots of CPU-intensive analysis of financial data. The bulk of the analysis engine will be written in C++ for speed, with a user-accessible scripting engine interfacing with the C++ testing engine. I want to write several scripting front-ends over time to emulate other popular software with existing large user bases; the first front-end will be a VisualBasic-like scripting language.

    I'm thinking that LLVM would be perfect for my needs. Performance is very important because of the sheer amount of data; it can take hours or days to run a single batch of tests to get an answer. I believe that using LLVM will also allow me to use a single back-end solution while I implement different front-ends for different flavors of the scripting language over time.

    The testing engine itself will be separated from the interface, and testing will even take place in a separate process, with progress and results being reported to the testing management interface. Tests will consist of scripting code integrated with the testing engine code. In a previous implementation of a similar commercial testing system I wrote, I built a fast interpreter which easily interfaced with the testing library because it was written in C++ and linked directly to the testing engine library. Callbacks from scripting code to testing library objects involved translating between the formats, with significant overhead.

    I'm imagining that with LLVM I could implement the callbacks into C++ directly, so that I could make the scripting code work almost as if it had been written in C++. Likewise, if all the code were compiled to LLVM bitcode format, it seems like the LLVM optimizers could optimize across the boundaries between the scripting language and the testing engine code that was written in C++. I don't want to have to compile the testing engine every time; ideally, I'd like to JIT-compile only the scripting code. For small tests I'd skip some optimization passes, while for large tests I'd perform full optimizations during the link.

    So is this possible? Can I precompile the testing engine to a .o object file or .a library file and then link in the scripting code using the JIT?

    Finally, ideally, I'd like to have the scripting code implement specific methods as subclasses of a specific C++ class. The C++ testing engine would then only see C++ objects, while the JIT setup code compiled scripting code that implemented some of the methods for those objects. It seems that if I used the right name-mangling algorithm, it would be relatively easy to make the LLVM code generated for the scripting language look like a C++ method call, which could then be linked into the testing engine. The linking stage would thus go in two directions: calls from the scripting language into the testing engine objects to retrieve pricing and test-state information, and calls from the testing engine to methods of particular C++ objects whose code was supplied not by C++ but by the scripting language.

    In summary:

    1) Can I link in precompiled (either .bc, .o, or .a) files as part of the JIT compilation and code-generation process?

    2) Can I link in code using the process in 1) in such a way that I am able to create code that acts as if it was all written in C++?
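
    On question 1): LLVM's JIT execution engines can generally consume precompiled bitcode (.bc) modules directly, alongside modules generated at runtime, which is exactly the split described above (ahead-of-time compiled testing engine, JIT-compiled scripts); native .o/.a files, by contrast, are opaque machine code, and only bitcode modules participate in cross-module optimization. As a rough sketch of the pattern - written here in Python with the llvmlite bindings rather than the C++ API, with engine.bc and script_entry as assumed names:

        import llvmlite.binding as llvm

        llvm.initialize()
        llvm.initialize_native_target()
        llvm.initialize_native_asmprinter()

        # A JIT engine backed by an empty module; real modules are added below.
        backing = llvm.parse_assembly("")
        target_machine = llvm.Target.from_default_triple().create_target_machine()
        engine = llvm.create_mcjit_compiler(backing, target_machine)

        # The testing engine, compiled ahead of time to bitcode (e.g. with
        # clang -emit-llvm -c); "engine.bc" is an assumed file name.
        with open("engine.bc", "rb") as f:
            engine.add_module(llvm.parse_bitcode(f.read()))

        # Freshly generated IR for the scripting front-end (toy example).
        script_mod = llvm.parse_assembly("""
            define i32 @script_entry() {
              ret i32 42
            }
        """)
        engine.add_module(script_mod)

        engine.finalize_object()
        addr = engine.get_function_address("script_entry")  # callable via ctypes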

  • curl multipart/form-data help

    - by user253530
    Hi, I am trying to post some data to a website using cURL. The posting process has three steps:

    1. Enter a URL, submit, and arrive at the second step with some fields already completed.
    2. Submit again, after entering some more data, and preview the form.
    3. Submit the final data.

    The problem is that after the second step, the form data looks like this:

        POSTDATA = -----------------------------12249266671528
        Content-Disposition: form-data; name="title"

        Filme 2010, filme 2009, filme noi, programe TV, program cinema, premiere cinema, trailere filme - CineMagia.ro
        -----------------------------12249266671528
        Content-Disposition: form-data; name="category"

        3
        -----------------------------12249266671528
        Content-Disposition: form-data; name="tags"

        filme, programe tv, program cinema
        -----------------------------12249266671528
        Content-Disposition: form-data; name="bodytext"

        Filme 2010, filme 2009, filme noi, programe TV, program cinema, premiere cinema, trailere filme
        -----------------------------12249266671528
        Content-Disposition: form-data; name="trackback"

        -----------------------------12249266671528
        Content-Disposition: form-data; name="url"

        http://cinemagia.ro
        -----------------------------12249266671528
        Content-Disposition: form-data; name="phase"

        2
        -----------------------------12249266671528
        Content-Disposition: form-data; name="randkey"

        9510520
        -----------------------------12249266671528
        Content-Disposition: form-data; name="id"

        17753
        -----------------------------12249266671528--

    I am stuck trying to devise an algorithm that will generate this kind of POST data for the second step. Just to mention: the URL of the form never changes; it is always http://www.xxx.com/submit. There is only a hidden input called "phase" that changes according to the step I am currently on (phase = 1, phase = 2, phase = 3). Any help, be it code, pseudo-code or just guidance, would be greatly appreciated. My code so far:

        function postBlvsocialbookmarkingcom($curl, $vars)
        {
            extract($vars);
            $baseUrl = "http://www.blv-socialbookmarking.com/";

            // Step 0: login
            $curl->setRedirect();
            $page = $curl->post($baseUrl.'login.php?return=/index.php',
                array('username' => $username,
                      'password' => $password,
                      'processlogin' => '1',
                      'return' => '/index.php'));
            if ($err = $curl->getError()) {
                return $err;
            }

            // Post step 1: fetch the form and scrape the random key
            $page = $curl->post($baseUrl.'/submit', array());
            $randomKey = explode('<input type="hidden" name="randkey" value="', $page);
            $randKey = explode('"', $randomKey[1]);

            $page = $curl->post($baseUrl.'/submit',
                array('url' => $address,
                      'phase' => '1',
                      'randkey' => $randKey[0],
                      'id' => 'c_1'));
            if ($err = $curl->getError()) {
                return $err;
            }
            //echo $page;

            // Post step 2
            $page = $curl->post($baseUrl.'/submit',
                array('title' => $title,
                      'category' => '1',
                      'tags' => $tags,
                      'bodytext' => $description,
                      'phase' => '2'));
            if ($err = $curl->getError()) {
                return $err;
            }
            echo $page;

            // Post step 3
            $page = $curl->post($baseUrl.'/submit', array('phase' => '3'));
            if ($err = $curl->getError()) {
                return $err;
            }
            echo $page;
        }
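
    The same three-phase flow is easy to prototype outside PHP; below is a sketch in Python with the requests library, which builds the multipart/form-data body automatically when fields are passed via files=. The field names come from the dump above, while the login URL and the randkey-scraping regex are assumptions about the site's markup:

        import re
        import requests

        BASE = "http://www.blv-socialbookmarking.com"  # site from the question

        session = requests.Session()

        # Login (field names as in the PHP snippet above).
        session.post(BASE + "/login.php?return=/index.php",
                     data={"username": "user", "password": "pass",
                           "processlogin": "1", "return": "/index.php"})

        # Phase 1: scrape the hidden randkey, then submit the URL.
        page = session.get(BASE + "/submit").text
        randkey = re.search(r'name="randkey" value="([^"]*)"', page).group(1)
        session.post(BASE + "/submit",
                     data={"url": "http://cinemagia.ro", "phase": "1",
                           "randkey": randkey, "id": "c_1"})

        # Phase 2: passing each field as a (None, value) "file" makes requests
        # emit a multipart/form-data body like the POSTDATA dump above.
        fields = {"title": "...", "category": "3", "tags": "...",
                  "bodytext": "...", "trackback": "",
                  "url": "http://cinemagia.ro", "phase": "2",
                  "randkey": randkey, "id": "17753"}
        session.post(BASE + "/submit",
                     files={k: (None, v) for k, v in fields.items()})

        # Phase 3: confirm the preview.
        session.post(BASE + "/submit", data={"phase": "3"})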

  • Mobile App Data Synchronization

    - by Matt Rogish
    Let's say I have a mobile app that uses an HTML5 SQLite DB (and/or the HTML5 key-value store). Assets (media files, PDFs, etc.) are stored locally on the mobile device. Luckily, the mobile device is a read-only copy of the "centralized" storage, so the mobile device won't have to propagate changes upstream. However, as the server changes assets (creates new ones, modifies existing ones, deletes old ones), I need to propagate those changes back to the mobile app. Assume that server changes are grouped into changesets (version number n) that contain some information (added element XYZ, deleted id = 45, etc.), and that the mobile device has limited CPU/bandwidth, so most of the processing has to take place on the server.

    I can think of a couple of methods to do this. All have trade-offs, and at this point I'm unsure which is the right course of action...

    Method 1: For changeset n, store the "diff" of the current n and the previous n-1. When a client with version y asks if there have been any changes, send the changesets from version y up to the current version, e.g. "added item 334, contents: xxx; deleted picture 44; deleted PDF 11; changed 33; added picture 99."

    Characteristics:

    - Diffs take up space, although in theory they would be kept small. However, all diffs must be kept around indefinitely (should a v1 app go un-updated for a year, it must apply v2..v100).
    - High-latency devices (mobile apps) will incur a penalty for fetching lots of small files (assume they cannot be zipped or tarred up into one file).
    - Very few server CPU resources are required, as all the server does is send the client a list of files.
    - "Dumb": if I change an item in changeset 3, and change it to something else in 4, the client is going to perform both actions, even though #3 is rendered moot by #4. Or, if an asset is added in #4 and removed in #5, the client will download a file just to delete it later.

    Method 2: Very similar to method 1, except that on the server, do some sort of diff between the changesets represented by the app version and the server version. Package that up and send that single changeset to the client. (A sketch of this squashing step follows after this list.)

    Characteristics:

    - Client-efficient: the client only has to process one file, and duplicate or irrelevant changes are stripped out.
    - Server CPU/space intensive: the changesets must be diffed and then written out to a file that is then sent to the client. This makes diff-server scalability an issue. There are possibly ways to cache the results and re-use them, but in the wild there are likely to be a lot of different versions, so diff re-use has a limit.
    - The diff algorithm is complicated; the changesets must be structured in such a way that an efficient and effective diff can be performed.

    Method 3: Instead of keeping diffs, write out the entire versioned asset collection to a mobile-database import file. When a client requests an update, send the entire database and have the client replace its assets wholesale.

    Characteristics:

    - Conceptually simple: easy to develop and deploy.
    - Very inefficient, as the client database is restored on every update; if only one new thing was added, the whole database is refreshed.
    - Server space and CPU efficient: only the latest version of the DB needs to be kept around, and the server just throws the file at the client.

    Others?? Thoughts? Thanks!!
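
    As a rough sketch of the server-side squash in Method 2, under the assumption that each changeset is a list of (op, asset_id, payload) tuples (a hypothetical wire format): replaying the changesets in order into a dict keyed by asset id means later operations supersede earlier ones, and an add followed by a delete cancels out entirely:

        def squash(changesets):
            """Collapse changesets [y+1..n] into one minimal changeset.

            Each changeset is a list of (op, asset_id, payload) tuples,
            where op is "add", "modify" or "delete" (hypothetical format).
            """
            net = {}  # asset_id -> final (op, payload)
            for changeset in changesets:
                for op, asset_id, payload in changeset:
                    prev = net.get(asset_id)
                    if op == "delete" and prev and prev[0] == "add":
                        del net[asset_id]                # added then deleted: a no-op
                    elif prev and prev[0] == "add":
                        net[asset_id] = ("add", payload)  # still brand-new to the client
                    else:
                        net[asset_id] = (op, payload)     # last write wins
            return [(op, asset_id, payload)
                    for asset_id, (op, payload) in net.items()]

        # Example: item 33 changed twice, picture 99 added then deleted.
        sets = [
            [("modify", 33, "v1"), ("add", 99, "pic")],
            [("modify", 33, "v2"), ("delete", 99, None)],
        ]
        print(squash(sets))  # [('modify', 33, 'v2')]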

  • Python: Improving long cumulative sum

    - by Bo102010
    I have a program that operates on a large set of experimental data. The data is stored as a list of objects that are instances of a class with the following attributes:

    - time_point - the time of the sample
    - cluster - the name of the cluster of nodes from which the sample was taken
    - node - the name of the node from which the sample was taken
    - qty1 - the value of the sample for the first quantity
    - qty2 - the value of the sample for the second quantity

    I need to derive some values from the data set, grouped in three ways: once for the sample as a whole, once for each cluster of nodes, and once for each node. The values I need to derive depend on the (time-sorted) cumulative sums of qty1 and qty2: the maximum value of the element-wise sum of the cumulative sums of qty1 and qty2, the time point at which that maximum value occurred, and the values of qty1 and qty2 at that time point.

    I came up with the following solution:

        dataset.sort(key=operator.attrgetter('time_point'))

        # For the whole set
        sys_qty1 = 0
        sys_qty2 = 0
        sys_combo = 0
        sys_max = 0

        # For the cluster grouping
        cluster_qty1 = defaultdict(int)
        cluster_qty2 = defaultdict(int)
        cluster_combo = defaultdict(int)
        cluster_max = defaultdict(int)
        cluster_peak = defaultdict(int)

        # For the node grouping
        node_qty1 = defaultdict(int)
        node_qty2 = defaultdict(int)
        node_combo = defaultdict(int)
        node_max = defaultdict(int)
        node_peak = defaultdict(int)

        for t in dataset:
            # For the whole system #############################################
            sys_qty1 += t.qty1
            sys_qty2 += t.qty2
            sys_combo = sys_qty1 + sys_qty2
            if sys_combo > sys_max:
                sys_max = sys_combo
                # The Peak class records the time point and the cumulative quantities
                system_peak = Peak(time_point=t.time_point,
                                   qty1=sys_qty1, qty2=sys_qty2)

            # For the cluster grouping #########################################
            cluster_qty1[t.cluster] += t.qty1
            cluster_qty2[t.cluster] += t.qty2
            cluster_combo[t.cluster] = cluster_qty1[t.cluster] + cluster_qty2[t.cluster]
            if cluster_combo[t.cluster] > cluster_max[t.cluster]:
                cluster_max[t.cluster] = cluster_combo[t.cluster]
                cluster_peak[t.cluster] = Peak(time_point=t.time_point,
                                               qty1=cluster_qty1[t.cluster],
                                               qty2=cluster_qty2[t.cluster])

            # For the node grouping ############################################
            node_qty1[t.node] += t.qty1
            node_qty2[t.node] += t.qty2
            node_combo[t.node] = node_qty1[t.node] + node_qty2[t.node]
            if node_combo[t.node] > node_max[t.node]:
                node_max[t.node] = node_combo[t.node]
                node_peak[t.node] = Peak(time_point=t.time_point,
                                         qty1=node_qty1[t.node],
                                         qty2=node_qty2[t.node])

    This produces the correct output, but I'm wondering if it can be made more readable/Pythonic, and/or faster/more scalable. The above is attractive in that it only loops through the (large) dataset once, but unattractive in that I've essentially copied and pasted three copies of the same algorithm.

    To avoid the copy/paste issues of the above, I tried this also:

        def find_peaks(level, dataset):

            def grouping(object, attr_name):
                if attr_name == 'system':
                    return attr_name
                else:
                    return object.__dict__[attr_name]

            cuml_qty1 = defaultdict(int)
            cuml_qty2 = defaultdict(int)
            cuml_combo = defaultdict(int)
            level_max = defaultdict(int)
            level_peak = defaultdict(int)

            for t in dataset:
                key = grouping(t, level)
                cuml_qty1[key] += t.qty1
                cuml_qty2[key] += t.qty2
                cuml_combo[key] = cuml_qty1[key] + cuml_qty2[key]
                if cuml_combo[key] > level_max[key]:
                    level_max[key] = cuml_combo[key]
                    level_peak[key] = Peak(time_point=t.time_point,
                                           qty1=cuml_qty1[key],
                                           qty2=cuml_qty2[key])
            return level_peak

        system_peak = find_peaks('system', dataset)
        cluster_peak = find_peaks('cluster', dataset)
        node_peak = find_peaks('node', dataset)

    For the (non-grouped) system-level calculations, I also came up with this, which is pretty:

        dataset.sort(key=operator.attrgetter('time_point'))

        def cuml_sum(seq):
            rseq = []
            t = 0
            for i in seq:
                t += i
                rseq.append(t)
            return rseq

        time_get = operator.attrgetter('time_point')
        q1_get = operator.attrgetter('qty1')
        q2_get = operator.attrgetter('qty2')

        timeline = [time_get(t) for t in dataset]
        cuml_qty1 = cuml_sum([q1_get(t) for t in dataset])
        cuml_qty2 = cuml_sum([q2_get(t) for t in dataset])
        cuml_combo = [q1 + q2 for q1, q2 in zip(cuml_qty1, cuml_qty2)]

        combo_max = max(cuml_combo)
        max_idx = cuml_combo.index(combo_max)
        time_max = timeline[max_idx]
        q1_at_max = cuml_qty1[max_idx]
        q2_at_max = cuml_qty2[max_idx]

    However, despite this version's cool use of list comprehensions and zip(), it loops through the dataset three times just for the system-level calculations, and I can't think of a good way to do the cluster-level and node-level calculations without doing something slow like:

        timeline = defaultdict(int)
        cuml_qty1 = defaultdict(int)
        # ...etc.

        for c in cluster_list:
            timeline[c] = [time_get(t) for t in dataset if t.cluster == c]
            cuml_qty1[c] = [q1_get(t) for t in dataset if t.cluster == c]
            # ...etc.

    Does anyone here at Stack Overflow have suggestions for improvements? The first snippet above runs well for my initial dataset (on the order of a million records), but later datasets will have more records and clusters/nodes, so scalability is a concern.

    This is my first non-trivial use of Python, and I want to make sure I'm taking proper advantage of the language (this is replacing a very convoluted set of SQL queries, and earlier versions of the Python code were essentially very inefficient, straight translations of what those did). I don't normally do much programming, so I may be missing something elementary. Many thanks!
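
    One way to keep the single pass while collapsing the three near-identical loops (a sketch; the namedtuple stands in for the Peak class used above) is to drive one loop from a table of key functions, one per grouping level:

        from collections import defaultdict, namedtuple
        import operator

        Peak = namedtuple('Peak', 'time_point qty1 qty2')  # stand-in for the Peak class above

        def find_all_peaks(dataset):
            """Single pass over dataset; returns {level: {group_key: Peak}}."""
            levels = {
                'system': lambda t: 'system',             # one group for everything
                'cluster': operator.attrgetter('cluster'),
                'node': operator.attrgetter('node'),
            }
            qty1 = {lvl: defaultdict(int) for lvl in levels}
            qty2 = {lvl: defaultdict(int) for lvl in levels}
            best = {lvl: defaultdict(int) for lvl in levels}
            peaks = {lvl: {} for lvl in levels}

            for t in sorted(dataset, key=operator.attrgetter('time_point')):
                for lvl, keyfunc in levels.items():
                    k = keyfunc(t)
                    qty1[lvl][k] += t.qty1
                    qty2[lvl][k] += t.qty2
                    combo = qty1[lvl][k] + qty2[lvl][k]
                    if combo > best[lvl][k]:
                        best[lvl][k] = combo
                        peaks[lvl][k] = Peak(t.time_point, qty1[lvl][k], qty2[lvl][k])
            return peaks

    (The dict comprehensions need Python 2.7+; on older versions, dict((lvl, defaultdict(int)) for lvl in levels) does the same job.)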

  • Python TEA implementation

    - by Gaks
    Does anybody know of a proper Python implementation of TEA (the Tiny Encryption Algorithm)? I tried the one I found here: http://sysadminco.com/code/python-tea/ - but it does not seem to work properly. It returns different results than other implementations in C or Java. I guess it's caused by the completely different data types in Python (or the lack of declared types, in fact). Here's the code and an example:

        def encipher(v, k):
            y = v[0]; z = v[1]; sum = 0; delta = 0x9E3779B9; n = 32
            w = [0, 0]
            while(n > 0):
                y += (z << 4 ^ z >> 5) + z ^ sum + k[sum & 3]
                y &= 4294967295L  # max size of a 32-bit integer
                sum += delta
                z += (y << 4 ^ y >> 5) + y ^ sum + k[sum >> 11 & 3]
                z &= 4294967295L
                n -= 1
            w[0] = y; w[1] = z
            return w

        def decipher(v, k):
            y = v[0]
            z = v[1]
            sum = 0xC6EF3720
            delta = 0x9E3779B9
            n = 32
            w = [0, 0]
            # sum = delta << 5; in general, sum = delta * n
            while(n > 0):
                z -= (y << 4 ^ y >> 5) + y ^ sum + k[sum >> 11 & 3]
                z &= 4294967295L
                sum -= delta
                y -= (z << 4 ^ z >> 5) + z ^ sum + k[sum & 3]
                y &= 4294967295L
                n -= 1
            w[0] = y; w[1] = z
            return w

    Python example:

        >>> import tea
        >>> key = [0xbe168aa1, 0x16c498a3, 0x5e87b018, 0x56de7805]
        >>> v = [0xe15034c8, 0x260fd6d5]
        >>> res = tea.encipher(v, key)
        >>> "%X %X" % (res[0], res[1])
        '70D16811 F935148F'

    C example:

        #include <unistd.h>
        #include <stdio.h>

        void encipher(unsigned long *const v, unsigned long *const w,
                      const unsigned long *const k)
        {
            register unsigned long y = v[0], z = v[1], sum = 0, delta = 0x9E3779B9,
                     a = k[0], b = k[1], c = k[2], d = k[3], n = 32;

            while (n-- > 0) {
                sum += delta;
                y += (z << 4)+a ^ z+sum ^ (z >> 5)+b;
                z += (y << 4)+c ^ y+sum ^ (y >> 5)+d;
            }

            w[0] = y; w[1] = z;
        }

        int main()
        {
            unsigned long v[] = {0xe15034c8, 0x260fd6d5};
            unsigned long key[] = {0xbe168aa1, 0x16c498a3, 0x5e87b018, 0x56de7805};
            unsigned long res[2];

            encipher(v, res, key);
            printf("%X %X\n", res[0], res[1]);
            return 0;
        }

        $ ./tea
        D6942D68 6F87870D

    Please note that both examples were run with the same input data (v and key), but the results were different. I'm pretty sure the C implementation is correct; it comes from a site referenced by Wikipedia (I couldn't post a link to it because I don't have enough reputation points yet; some antispam thing).
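
    For what it's worth, the round function in the Python snippet above is not TEA's. Because of Python's operator precedence, y += (z << 4 ^ z >> 5) + z ^ sum + k[sum & 3] computes (((z << 4) ^ (z >> 5)) + z) ^ (sum + k[sum & 3]), and sum is incremented between the two half-rounds; that is XTEA's round and key schedule. The C reference instead computes ((z << 4) + a) ^ (z + sum) ^ ((z >> 5) + b), since + binds tighter than ^ in C, with sum incremented before both half-rounds. A sketch of an encipher that mirrors the C semantics (with explicit 32-bit wrap-around) would be:

        def encipher(v, k):
            """TEA encryption mirroring the C reference above (a sketch)."""
            y, z = v
            total, delta, mask = 0, 0x9E3779B9, 0xFFFFFFFF
            for _ in range(32):
                total = (total + delta) & mask
                y = (y + ((((z << 4) + k[0]) & mask)
                          ^ ((z + total) & mask)
                          ^ (((z >> 5) + k[1]) & mask))) & mask
                z = (z + ((((y << 4) + k[2]) & mask)
                          ^ ((y + total) & mask)
                          ^ (((y >> 5) + k[3]) & mask))) & mask
            return [y, z]

        key = [0xbe168aa1, 0x16c498a3, 0x5e87b018, 0x56de7805]
        v = [0xe15034c8, 0x260fd6d5]
        res = encipher(v, key)
        print("%X %X" % (res[0], res[1]))  # should agree with the C build's output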

  • Managing highly repetitive code and documentation in Java

    - by polygenelubricants
    Highly repetitive code is generally a bad thing, and there are design patterns that can help minimize it. However, sometimes it's simply inevitable due to the constraints of the language itself. Take the following example from java.util.Arrays:

        /**
         * Assigns the specified long value to each element of the specified
         * range of the specified array of longs.  The range to be filled
         * extends from index <tt>fromIndex</tt>, inclusive, to index
         * <tt>toIndex</tt>, exclusive.  (If <tt>fromIndex==toIndex</tt>, the
         * range to be filled is empty.)
         *
         * @param a the array to be filled
         * @param fromIndex the index of the first element (inclusive) to be
         *        filled with the specified value
         * @param toIndex the index of the last element (exclusive) to be
         *        filled with the specified value
         * @param val the value to be stored in all elements of the array
         * @throws IllegalArgumentException if <tt>fromIndex &gt; toIndex</tt>
         * @throws ArrayIndexOutOfBoundsException if <tt>fromIndex &lt; 0</tt> or
         *         <tt>toIndex &gt; a.length</tt>
         */
        public static void fill(long[] a, int fromIndex, int toIndex, long val) {
            rangeCheck(a.length, fromIndex, toIndex);
            for (int i = fromIndex; i < toIndex; i++)
                a[i] = val;
        }

    The above snippet appears in the source code 8 times, with very little variation in the documentation/method signature but exactly the same method body, one for each of the root array types int[], short[], char[], byte[], boolean[], double[], float[], and Object[].

    I believe that unless one resorts to reflection (which is an entirely different subject in itself), this repetition is inevitable. I understand that, as a utility class, such a high concentration of repetitive Java code is highly atypical, but even with the best practice, repetition does happen! Refactoring doesn't always work, because it's not always possible (the obvious case is when the repetition is in the documentation).

    Obviously, maintaining this source code is a nightmare. A slight typo in the documentation, or a minor bug in the implementation, is multiplied by however many repetitions were made. In fact, the best example happens to involve this exact class:

    Google Research Blog - Extra, Extra - Read All About It: Nearly All Binary Searches and Mergesorts are Broken (by Joshua Bloch, Software Engineer)

    The bug is a surprisingly subtle one, occurring in what many thought to be just a simple and straightforward algorithm:

        // int mid = (low + high) / 2; // the bug
        int mid = (low + high) >>> 1;  // the fix

    The above line appears 11 times in the source code! So my questions are:

    1. How are these kinds of repetitive Java code/documentation handled in practice? How are they developed, maintained, and tested? Do you start with "the original", make it as mature as possible, and then copy and paste as necessary, hoping you didn't make a mistake? And if you did make a mistake in the original, then just fix it everywhere, unless you're comfortable with deleting the copies and repeating the whole replication process? And do you apply this same process to the testing code as well?

    2. Would Java benefit from some sort of limited-use source-code preprocessing for this kind of thing? Perhaps Sun has their own preprocessor to help write, maintain, document and test this kind of repetitive library code? (A toy generator along these lines is sketched below.)

    A comment requested another example, so I pulled this one from Google Collections: com.google.common.base.Predicates, lines 276-310 (AndPredicate) vs. lines 312-346 (OrPredicate). The source for these two classes is identical, except for:

    - AndPredicate vs OrPredicate (each appears 5 times in its class)
    - "And(" vs "Or(" (in the respective toString() methods)
    - #and vs #or (in the @see Javadoc comments)
    - true vs false (in apply; the ! can be rewritten out of the expression)
    - -1 /* all bits on */ vs 0 /* all bits off */ in hashCode()
    - &= vs |= in hashCode()
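
    On question 2: the "limited-use preprocessing" asked about is essentially template expansion, and a toy generator is easy to write. The sketch below (purely illustrative; the real JDK sources are maintained by hand) stamps out a fill() overload per array type:

        TYPES = ["long", "int", "short", "char", "byte",
                 "boolean", "double", "float", "Object"]

        TEMPLATE = """\\
        /**
         * Assigns the specified {t} value to each element of the specified
         * range of the specified array of {t}s.
         */
        public static void fill({t}[] a, int fromIndex, int toIndex, {t} val) {{
            rangeCheck(a.length, fromIndex, toIndex);
            for (int i = fromIndex; i < toIndex; i++)
                a[i] = val;
        }}
        """

        for t in TYPES:
            print(TEMPLATE.format(t=t))

    A typo fixed in the template is fixed in all nine variants at once, which is exactly the property the hand-maintained copies lack.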

  • Find optimal strategy and AI for the game 'Proximity'?

    - by smci
    'Proximity' is a strategy game of territorial domination, similar to Othello, Go and Risk. Two players; it uses a 10x12 hex grid. The game was invented by Brian Cable in 2007. It seems to be a worthy game for discussing a) an optimal algorithm and then b) how to build an AI. Strategies are going to be probabilistic or heuristic-based, due to the randomness factor and the insane branching factor (20^120), so it will be kind of hard to compare objectively. A compute time limit of 5s per turn seems reasonable.

    Game: Flash version here, and many copies elsewhere on the web. Rules: here. Object: to have control of the most armies after all tiles have been placed. Each turn you receive a randomly numbered tile (value between 1 and 20 armies) to place on any vacant board space. If this tile is adjacent to any ally tiles, it will strengthen each tile's defenses +1 (up to a max value of 20). If it is adjacent to any enemy tiles, it will take control over them if its number is higher than the number on the enemy tile.

    Thoughts on strategy: here are some initial thoughts; setting the computer AI to Expert will probably teach a lot:

    1. Minimizing your perimeter seems to be a good strategy, to prevent flips and minimize worst-case damage.
    2. Like in Go, leaving holes inside your formation is lethal, only more so with the hex grid, because you can lose armies on up to 6 squares in one move.
    3. Low-numbered tiles are a liability, so place them away from your main territory, near the board edges, and scattered. You can also use low-numbered tiles to plug holes in your formation, or to make small gains along the perimeter which the opponent will not tend to bother attacking.
    4. A triangle formation of three pieces is strong, since the pieces mutually reinforce and also reduce the perimeter.
    5. Each tile can be flipped at most 6 times, i.e. when its neighbor tiles are occupied. Control of a formation can flow back and forth. Sometimes you lose part of a formation and plug any holes to render that part of the board 'dead' and lock in your territory/prevent further losses.
    6. Low-numbered tiles are obvious-but-low-valued liabilities, but high-numbered tiles can be bigger liabilities if they get flipped (which is harder). One lucky play with a 20-army tile can cause a swing of 200 (from +100 to -100 armies). So tile placement will have both offensive and defensive considerations.

    Comments 1, 2 and 4 seem to resemble a minimax strategy where we minimize the maximum expected possible loss (modified by some probabilistic consideration of the value ß the opponent can get from 1..20, i.e. a structure which can only be flipped by a ß=20 tile is 'nearly impregnable'). I'm not clear what the implications of comments 3, 5 and 6 are for optimal strategy. Interested in comments from Go, Chess or Othello players. (A rough one-ply evaluation along these lines is sketched below.)

    (The sequel, ProximityHD for Xbox Live, allows 4-player cooperative or competitive local multiplayer, which increases the branching factor, since you now have 5 tiles in your hand at any given time, of which you can play only one. Reinforcement of ally tiles is increased to +2 per ally.)
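
    Building on the minimax observation above, a first-cut AI could score each candidate placement greedily: armies captured now, plus reinforcement, minus the expected loss if a uniformly random 1..20 enemy tile later flips the newly placed tile. Everything below is a hypothetical sketch; in a real AI the neighbour lists would come from the actual board representation:

        def expected_flip_loss(value, open_enemy_slots):
            """Chance that a uniform 1..20 enemy tile beats `value`, times the
            swing if it does (a flip swings roughly 2x the tile's armies)."""
            p_flip = (20 - value) / 20.0
            return p_flip * open_enemy_slots * 2 * value

        def score_placement(value, enemy_neighbor_armies, ally_count, vacant_count):
            """Greedy one-ply score for placing a tile of `value` on one hex."""
            captured = sum(v for v in enemy_neighbor_armies if v < value)
            reinforced = ally_count  # +1 defense per adjacent ally tile
            return captured + reinforced - expected_flip_loss(value, vacant_count)

        # Toy example: a 15 placed next to enemy tiles of 12 and 18, one ally,
        # and three vacant neighbours left open.
        print(score_placement(15, [12, 18], 1, 3))  # 12 + 1 - 22.5 = -9.5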

  • How can I read the verbose output from a Cmdlet in C# using Exchange PowerShell

    - by mrkeith
    Environment: Exchange 2007 SP3 (2003 SP2 mixed mode), Visual Studio 2008, .NET 3.5

    Hello, I'm working with the Exchange PowerShell Move-Mailbox cmdlet, and have noted that when I run it from the Exchange Management Shell (using the -Verbose switch) there is a ton of real-time information provided. To provide a little context: I'm attempting to create a UI application that moves mailboxes similarly to the Exchange Management Console, but I want to support an input file and specific server/database destinations for each entry (and threading). Here's roughly what I have at present, but I'm not sure if there is an event I need to register for, or what. And to be clear, I want to get this information in real time, so I can update my UI to reflect what's occurring in the move sequence for the appropriate user (pretty much like the native functionality offered in the Management Console).

    In case you are wondering, the reason I'm not content with the Management Console functionality is that I have an algorithm which I'm using to balance users depending on storage limit, BlackBerry use, journaling, exception mailbox size, etc., which demands users be mapped to specific locations, and I do not want to create many move groups for each common destination or to hunt for lists of users individually through the Management Console UI.

    I cannot seem to find any good documentation or examples of how to tie into reading the verbose messages that are provided within the console using C# (I see value in being able to read this kind of information in many different scenarios). I've explored the Invoke and InvokeAsync methods and the StateChanged and DataReady events, but none of these seem to provide the information (the verbose comments) that I'm after. Any direction or examples would be very appreciated!

    A code sample, which is little more than how I would ordinarily call any other PowerShell command, follows:

        // Configure to use the Exchange Management shell; create a runspace and open it
        RunspaceConfiguration rsConfig = RunspaceConfiguration.Create();
        PSSnapInException snapInException = null;
        PSSnapInInfo info = rsConfig.AddPSSnapIn(
            "Microsoft.Exchange.Management.PowerShell.Admin", out snapInException);
        if (snapInException != null)
            throw snapInException;

        Runspace runspace = RunspaceFactory.CreateRunspace(rsConfig);
        try
        {
            runspace.Open();

            // Create a pipeline and feed it the command
            Pipeline pipeline = runspace.CreatePipeline();
            string targetDatabase = @"myServer\myStorageGroup\myDB";
            string mbxOwner = "[email protected]";

            Command myMoveMailbox = new Command("Move-Mailbox", false, false);
            myMoveMailbox.Parameters.Add("Identity", mbxOwner);
            myMoveMailbox.Parameters.Add("TargetDatabase", targetDatabase);
            myMoveMailbox.Parameters.Add("Verbose");
            myMoveMailbox.Parameters.Add("ValidateOnly");
            myMoveMailbox.Parameters.Add("Confirm", false);
            pipeline.Commands.Add(myMoveMailbox);

            System.Collections.ObjectModel.Collection<PSObject> output = null;

            // These next few commented-out lines are where I've tried
            // registering for events and invoking asynchronously, but this
            // doesn't seem to get me anywhere closer:
            //pipeline.StateChanged += new EventHandler<PipelineStateEventArgs>(pipeline_StateChanged);
            //pipeline.Output.DataReady += new EventHandler(Output_DataReady);
            //pipeline.InvokeAsync();
            //pipeline.Input.Close();
            //return;
            // I tried these variations, but none seem to be useful.

            output = pipeline.Invoke();

            // Check for errors in the pipeline and throw an exception if necessary
            if (pipeline.Error != null && pipeline.Error.Count > 0)
            {
                StringBuilder pipelineError = new StringBuilder();
                pipelineError.AppendFormat("Error calling Test() Cmdlet. ");
                foreach (object item in pipeline.Error.ReadToEnd())
                    pipelineError.AppendFormat("{0}\n", item.ToString());
                throw new Exception(pipelineError.ToString());
            }

            foreach (PSObject psObject in output)
            {
                // This is normally where I would read details about a
                // particular PS command, but it really pertains to a command
                // once it has finished, and has nothing to do with the verbose
                // messages I'm after. Since this part of the method deals with
                // the after-effects of a command having run, I suspect I need
                // to look to the async Invoke method, but I'm not certain how.
            }
        }
        finally
        {
            runspace.Close();
        }

    Thanks! Keith

  • DataGridView CheckBox events

    - by Kevin
    I'm making a DataGridView with a series of checkboxes in it, with the same labels horizontally and vertically. Wherever the labels are the same, the checkboxes will be inactive, and I only want one of the two "checks" for each combination to be valid. [Screenshot of the grid omitted.] Anything that's checked in the lower half, I want un-checked in the upper half. So if [quux, spam] (or [7, 8] in zero-based coordinates) is checked, I want [spam, quux] ([8, 7]) un-checked. What I have so far is the following:

        dgvSysGrid.RowHeadersWidthSizeMode = DataGridViewRowHeadersWidthSizeMode.AutoSizeToAllHeaders;
        dgvSysGrid.AutoSizeColumnsMode = DataGridViewAutoSizeColumnsMode.AllCells;

        string[] allsysNames = { "heya", "there", "lots", "of", "names",
                                 "foo", "bar", "quux", "spam", "eggs", "bacon" };

        // Add a column and a row for each entry, and mark the "diagonals" as read-only
        for (int i = 0; i < allsysNames.Length; i++)
        {
            dgvSysGrid.Columns.Add(new DataGridViewCheckBoxColumn(false));
            dgvSysGrid.Columns[i].HeaderText = allsysNames[i];
            dgvSysGrid.Rows.Add();
            dgvSysGrid.Rows[i].HeaderCell.Value = allsysNames[i];

            // Mark all of the "diagonals" as unable to change
            DataGridViewCell curDiagonal = dgvSysGrid[i, i];
            curDiagonal.ReadOnly = true;
            curDiagonal.Style.BackColor = Color.Black;
            curDiagonal.Style.ForeColor = Color.Black;
        }

        // Hook up the event handler so that we can change the "corresponding"
        // checkboxes as needed
        //dgvSysGrid.CurrentCellDirtyStateChanged += new EventHandler(dgvSysGrid_CurrentCellDirtyStateChanged);
        dgvSysGrid.CellValueChanged += new DataGridViewCellEventHandler(dgvSysGrid_CellValueChanged);
    }

    void dgvSysGrid_CellValueChanged(object sender, DataGridViewCellEventArgs e)
    {
        Point cur = new Point(e.ColumnIndex, e.RowIndex);

        // Change the diagonal checkbox to the opposite state
        DataGridViewCheckBoxCell curCell = (DataGridViewCheckBoxCell)dgvSysGrid[cur.X, cur.Y];
        DataGridViewCheckBoxCell diagCell = (DataGridViewCheckBoxCell)dgvSysGrid[cur.Y, cur.X];
        if ((bool)(curCell.Value) == true)
        {
            diagCell.Value = false;
        }
        else
        {
            diagCell.Value = true;
        }
    }

    /// <summary>
    /// Change the corresponding checkbox to the opposite state of the current one
    /// </summary>
    /// <param name="sender"></param>
    /// <param name="e"></param>
    void dgvSysGrid_CurrentCellDirtyStateChanged(object sender, EventArgs e)
    {
        Point cur = dgvSysGrid.CurrentCellAddress;

        // Change the diagonal checkbox to the opposite state
        DataGridViewCheckBoxCell curCell = (DataGridViewCheckBoxCell)dgvSysGrid[cur.X, cur.Y];
        DataGridViewCheckBoxCell diagCell = (DataGridViewCheckBoxCell)dgvSysGrid[cur.Y, cur.X];
        if ((bool)(curCell.Value) == true)
        {
            diagCell.Value = false;
        }
        else
        {
            diagCell.Value = true;
        }
    }

    The problem is that the cell reported as changed always seems to be "one behind" where you actually click when I use the CellValueChanged event. And I'm not sure how to get the current cell when I'm in the "dirty" state, as curCell comes in as null (suggesting the current cell address is wrong somehow, but I didn't try to get that value out), meaning that path isn't working at all. Basically, how do I get the "right" address with the right boolean value so that my flipping algorithm will work?

  • Running an allocation simulation repeatedly breaks after the first run.

    - by Az
    Background

    I have a bunch of students, their desired projects, and the supervisors for the respective projects. I'm running a battery of simulations to see which projects the students end up with, which will allow me to get some useful statistics required for feedback. So this is essentially a Monte Carlo simulation, where I'm randomising the list of students and then iterating through it, allocating projects until I hit the end of the list. Then the process is repeated again.

    Note that, within a single session, after each successful allocation of a project the following take place:

    - the project is set to allocated and cannot be given to another student
    - the supervisor has a fixed quota of students he can supervise; this is decremented by 1
    - once the quota hits 0, all the projects from that supervisor become blocked, and this has the same effect as a project being allocated

    Code

        def resetData():
            for student in students.itervalues():
                student.allocated_project = None
            for supervisor in supervisors.itervalues():
                supervisor.quota = 0
            for project in projects.itervalues():
                project.allocated = False
                project.blocked = False

    The role of resetData() is to "reset" certain bits of the data. For example, when a project is successfully allocated, project.allocated for that project is flipped to True. While that's useful for a single run, for the next run I need it deallocated. Above, I'm iterating through the three dictionaries - one each for students, projects and supervisors - where the information is stored.

    The next bit is the "Monte Carlo" simulation for the allocation algorithm:

        sesh_id = 1
        for trial in range(50):
            for id in randomiseStudents(1):
                stud_id = id
                student = students[id]
                if not student.preferences:
                    continue  # Ignoring the students who've not entered any preferences
                for rank in ranks:
                    temp_proj = random.choice(list(student.preferences[rank]))
                    if not (temp_proj.allocated or temp_proj.blocked):
                        alloc_proj = student.allocated_proj_ref = temp_proj.proj_id
                        alloc_proj_rank = student.allocated_rank = rank
                        successActions(temp_proj)
                        temp_alloc = Allocated(sesh_id, stud_id, alloc_proj, alloc_proj_rank)
                        print temp_alloc  # Explained below
                        break
            sesh_id += 1
            resetData()  # Refer to def resetData() above

    All randomiseStudents(1) does is randomise the order of students. Allocated is a class defined as such:

        class Allocated(object):
            def __init__(self, sesh_id, stud_id, alloc_proj, alloc_proj_rank):
                self.sesh_id = sesh_id
                self.stud_id = stud_id
                self.alloc_proj = alloc_proj
                self.alloc_proj_rank = alloc_proj_rank

            def __repr__(self):
                return str(self)

            def __str__(self):
                return "%s - Student: %s (Project: %s - Rank: %s)" % \
                       (self.sesh_id, self.stud_id, self.alloc_proj, self.alloc_proj_rank)

    Output and problem

    Now if I run this I get output such as this (truncated):

        1 - Student: 7720 (Project: 1100241 - Rank: 1)
        1 - Student: 7832 (Project: 1100339 - Rank: 1)
        1 - Student: 7743 (Project: 1100359 - Rank: 1)
        1 - Student: 7820 (Project: 1100261 - Rank: 2)
        1 - Student: 7829 (Project: 1100270 - Rank: 1)
        ...
        1 - Student: 7822 (Project: 1100280 - Rank: 1)
        1 - Student: 7792 (Project: 1100141 - Rank: 7)
        2 - Student: 7739 (Project: 1100267 - Rank: 1)
        3 - Student: 7806 (Project: 1100272 - Rank: 1)
        ...
        45 - Student: 7806 (Project: 1100272 - Rank: 1)
        46 - Student: 7714 (Project: 1100317 - Rank: 1)
        47 - Student: 7930 (Project: 1100343 - Rank: 1)
        48 - Student: 7757 (Project: 1100358 - Rank: 1)
        49 - Student: 7759 (Project: 1100269 - Rank: 1)
        50 - Student: 7778 (Project: 1100301 - Rank: 1)

    Basically, it works perfectly for the first run, but on subsequent runs leading up to the nth run, in this case 50, only a single student-project allocation pair is returned. Thus, the main issue I'm having trouble with is figuring out what is causing this anomalous behaviour, especially since the first run works smoothly.

    Thanks in advance, Az

  • Python/numpy tricky slicing problem

    - by daver
    Hi Stack Overflow, I have a problem with some numpy stuff. I need a numpy array to behave in an unusual manner by returning a slice as a view of the data I have sliced, not a copy. So here's an example of what I want to do. Say we have a simple array like this:

        a = array([1, 0, 0, 0])

    I would like to update consecutive entries in the array (moving left to right) with the previous entry from the array, using syntax like this:

        a[1:] = a[0:3]

    This would give the following result:

        a = array([1, 1, 1, 1])

    Or something like this:

        a[1:] = 2*a[:3]
        # a = [1, 2, 4, 8]

    To illustrate further, I want the following kind of behaviour:

        for i in range(len(a)):
            if i == 0 or i + 1 == len(a):
                continue
            a[i+1] = a[i]

    Except I want the speed of numpy. The default behavior of numpy is to take a copy of the slice, so what I actually get is this:

        a = array([1, 1, 0, 0])

    I already have this array as a subclass of the ndarray, so I can make further changes to it if need be; I just need the slice on the right-hand side to be continually updated as it updates the slice on the left-hand side. Am I dreaming, or is this magic possible?

    Update: This is all because I am trying to use Gauss-Seidel iteration to solve a linear algebra problem, more or less. It is a special case involving harmonic functions; I was trying to avoid going into this because it's really not necessary and likely to confuse things further, but here goes. The algorithm is this:

        while not converged:
            for i in range(len(u[:,0])):
                for j in range(len(u[0,:])):
                    # skip over boundary entries, i,j == 0 or len(u)
                    u[i,j] = 0.25*(u[i-1,j] + u[i+1,j] + u[i,j-1] + u[i,j+1])

    Right? But you can do this two ways. Jacobi involves updating each element with its neighbours without considering updates you have already made until the while loop cycles; to do it in loops, you would copy the array and then update one array from the copied array. However, Gauss-Seidel uses information you have already updated for each of the i-1 and j-1 entries, so no copy is needed; the loop essentially 'knows', since the array is re-evaluated after each single-element update. That is to say, every time we call up an entry like u[i-1,j] or u[i,j-1], the information calculated in the previous loop iteration is already there.

    I want to replace this slow and ugly nested-loop situation with one nice clean line of code using numpy slicing:

        u[1:-1,1:-1] = 0.25*(u[:-2,1:-1] + u[2:,1:-1] + u[1:-1,:-2] + u[1:-1,2:])

    But the result is Jacobi iteration, because when you take a slice like u[:-2,1:-1] you copy the data, so the slice is not aware of any updates made. Now numpy still loops, right? It's not parallel; it's just a faster way to loop that looks like a parallel operation in python. I want to exploit this behaviour by sort of hacking numpy to return a pointer instead of a copy when I take a slice. Then, every time numpy loops, that slice would 'update', or really just replicate whatever happened in the update. To do this, I need slices on both sides of the array to be pointers. Anyway, if there is some really, really clever person out there, that's awesome, but I've pretty much resigned myself to believing the only answer is to loop in C.
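
    For the Gauss-Seidel update specifically, the standard way to keep pure numpy slicing while still consuming already-updated neighbours is red-black (checkerboard) ordering: update all "red" interior points from their "black" neighbours, then update all black points from the freshly written red values. A sketch (u is assumed to be a float array; convergence checking is omitted):

        import numpy as np

        def gauss_seidel_redblack(u, sweeps=100):
            """In-place red-black Gauss-Seidel for the Laplace stencil above.
            Boundary rows/columns are left untouched (a sketch)."""
            i, j = np.indices(u.shape)
            interior = (i > 0) & (i < u.shape[0] - 1) & (j > 0) & (j < u.shape[1] - 1)
            red = interior & ((i + j) % 2 == 0)
            black = interior & ((i + j) % 2 == 1)

            def relax(mask):
                avg = np.zeros(u.shape)
                avg[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                          + u[1:-1, :-2] + u[1:-1, 2:])
                u[mask] = avg[mask]  # write back only the masked colour

            for _ in range(sweeps):
                relax(red)    # red points read only (current) black neighbours
                relax(black)  # black points read the freshly updated red values
            return u

    Because a red point's four neighbours are all black (and vice versa), each half-sweep reads only values that are already up to date, which is the Gauss-Seidel property the nested loops provide; two masked half-sweeps replace one full sweep.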

  • JAVA BubbleSort Output Plotting

    - by John Smith
    I'm not sure how to plot the output I get with my runtime results for BubbleSort. Here's the thing: I've written a working BubbleSort algorithm that does exactly as it should, but I wish to plot the output to show the best case, worst case and average case. How would I go about plotting it on a graph? Here is the code:

        public class BubbleSort {

            static double bestTime = 10000000, worstTime = 0;

            public static void main(String[] args) {
                int BubArray[] = new int[]{13981, 6793, 2662, 10986, 733, /* ... 1000 integers in total */};

                System.out.println("Unsorted List Before Bubble Sort");
                for (int a = 0; a < BubArray.length; a++) {
                    System.out.print(BubArray[a] + " ");
                }

                System.out.println("\n Bubble Sort Execution ...");
                for (int i = 0; i < 10000; i++) {
                    bubbleSortTimeTaken(BubArray, i);
                }

                int itrs = bubbleSort(BubArray);
                System.out.println("");
                System.out.println("Array After Bubble Sort");
                System.out.println("Moves Taken for Sort : " + itrs + " Moves.");
                System.out.println("BestTime: " + bestTime + " WorstTime: " + worstTime);
                System.out.print("Sorted Array: \n");
                for (int a = 0; a < BubArray.length; a++) {
                    System.out.print(BubArray[a] + " ");
                }
            }

            private static int bubbleSort(int[] BubArray) {
                int z = BubArray.length;
                int temp = 0;
                int itrs = 0;
                for (int a = 0; a < z; a++) {
                    for (int x = 1; x < (z - a); x++) {
                        if (BubArray[x - 1] > BubArray[x]) {
                            temp = BubArray[x - 1];
                            BubArray[x - 1] = BubArray[x];
                            BubArray[x] = temp;
                        }
                        itrs++;
                    }
                }
                return itrs;
            }

            public static void bubbleSortTimeTaken(int[] BubArray, int n) {
                long startTime = System.nanoTime();
                bubbleSort(BubArray);
                double timeTaken = (System.nanoTime() - startTime) / 1000000d;
                if (timeTaken > 0) {
                    worstTime = timeTaken;
                } else if (timeTaken < bestTime) {
                    bestTime = timeTaken;
                }
                System.out.println(n + "," + timeTaken);
            }
        }

    The output is as follows (execution number, time in ms = nanoseconds/10^6):

        Unsorted List Before Bubble Sort
        13981 6793 2662 ... (1000 integers)

        Bubble Sort Execution ...
        0, 18.319891
        1, 4.728978
        2, 3.670697
        3, 3.648922
        4, 4.161576
        5, 3.824369
        ....
        9995, 4.331423
        9996, 3.692473
        9997, 3.709893
        9998, 6.16055
        9999, 4.32209

        Array After Bubble Sort
        Moves Taken for Sort : 541320 Moves.
        BestTime: 1.0E7 WorstTime: 4.32209
        Sorted Array:
        10 11 17 24 57 60 83 128 141 145 ... (1000 integers)

    I am looking for graphs to represent the average, best and worst cases based on this output, but my current graphs don't look correct. Any help would be appreciated, thanks.
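
    Two things in the code explain the odd numbers before any plotting. First, bubbleSortTimeTaken compares timeTaken > 0, which is always true, so worstTime just ends up holding the last run's time and the else-if branch never runs, leaving bestTime stuck at its 1.0E7 sentinel; that matches the "BestTime: 1.0E7 WorstTime: 4.32209" line above, and comparing against the running extremes (timeTaken > worstTime, timeTaken < bestTime) is presumably what was intended. Second, the first call sorts the array in place, so runs 1..9999 all time bubble sort on already-sorted input; measuring genuine worst/average cases needs a fresh shuffled or reversed copy per run. For the plot itself, a sketch in Python with matplotlib, assuming the "n,timeTaken" lines have been redirected into a hypothetical file times.csv:

        import numpy as np
        import matplotlib.pyplot as plt

        # Each line of times.csv is "run,milliseconds", exactly as printed
        # by bubbleSortTimeTaken above (hypothetical file name).
        runs, ms = np.loadtxt("times.csv", delimiter=",", unpack=True)

        plt.plot(runs, ms, lw=0.5, label="per-run time")
        plt.axhline(ms.min(), color="green", ls="--", label="best: %.3f ms" % ms.min())
        plt.axhline(ms.max(), color="red", ls="--", label="worst: %.3f ms" % ms.max())
        plt.axhline(ms.mean(), color="orange", ls="--", label="mean: %.3f ms" % ms.mean())
        plt.xlabel("run number")
        plt.ylabel("time (ms)")
        plt.title("Bubble sort: 10,000 timed runs")
        plt.legend()
        plt.show()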

  • TripleDES in Perl/PHP/ColdFusion

    - by Seidr
    Recently a problem arose regarding hooking up an API with a payment processor, who were requesting a string to be encrypted, to be used as a token, using the TripleDES standard. Our applications run on ColdFusion, which has an Encrypt tag that supports TripleDES; however, the result we were getting back was not what the payment processor expected.

    First of all, here is the token the payment processor were expecting:

        AYOF+kRtg239Mnyc8QIarw==

    And below is the snippet of ColdFusion we were using, and the resulting string:

        <!--- ColdFusion Crypt (here be monsters) --->
        <cfset theKey="123412341234123412341234">
        <cfset theString = "username=test123">
        <cfset strEncodedEnc = Encrypt(theString, theKey, "DESEDE", "Base64")>
        <!--- resulting string (strEncodedEnc): tc/Jb7E9w+HpU2Yvn5dA7ILGmyNTQM0h --->

    As you can see, this was not returning the string we were hoping for. Seeking a solution, we ditched ColdFusion for this process and attempted to reproduce the token in PHP. Now, I'm aware that various languages implement encryption in different ways - for example, in the past, managing encryption between a C# application and a PHP back-end, I've had to play about with padding in order to get the two to talk - but my experience has been that PHP generally behaves when it comes to encryption standards. Anyway, on to the PHP source we tried, and the resulting string:

        /* PHP Circus (here be Elephants) */
        $theKey = "123412341234123412341234";
        $theString = "username=test123";
        $strEncodedEnc = base64_encode(mcrypt_ecb(MCRYPT_3DES, $theKey, $theString, MCRYPT_ENCRYPT));
        /* resulting string (strEncodedEnc): sfiSu4mVggia8Ysw98x0uw== */

    As you can plainly see, we've got another string that differs from both the string expected by the payment processor AND the one produced by ColdFusion. Cue head-against-wall integration techniques.

    After many to-and-fro communications with the payment processor (lots and lots of reps stating 'we can't help with coding issues, you must be doing it incorrectly, read the manual'), we were finally escalated to someone with more than a couple of brain cells to rub together, who was able to step back and actually look at and diagnose the issue. He agreed that our CF and PHP attempts were not resulting in the correct string. After a quick search, he also agreed that it was not necessarily our source, but rather how the two languages implemented their vision of the TripleDES standard.

    Coming into the office this morning, we were met by an email with a snippet of source code, in Perl. This is the code they were directly using on their end to produce the expected token:

        #!/usr/bin/perl
        # Perl Crypt Calamity (here be...something)
        use strict;
        use CGI;
        use MIME::Base64;
        use Crypt::TripleDES;

        my $cgi = CGI->new();
        my $param = $cgi->Vars();

        $param->{key} = "123412341234123412341234";
        $param->{string} = "username=test123";
        my $des = Crypt::TripleDES->new();

        my $enc = $des->encrypt3($param->{string}, $param->{key});
        $enc = encode_base64($enc);
        $enc =~ s/\n//gs;
        # resulting string (enc): AYOF+kRtg239Mnyc8QIarw==

    So, there we have it. Three languages, three implementations of what each documents as standard TripleDES encryption, and three totally different resulting strings.

    My question is: from your experience of these three languages and their implementations of the TripleDES algorithm, have you been able to get any two of them to give the same response, and if so, what tweaks to the code did you have to make in order to arrive at that result?

    I understand this is a very drawn-out question, but I wanted to give a clear and precise setting for each stage of the testing we had to perform. I'll also be doing some more investigatory work on this subject later, and will post any findings to this question, so that others may avoid this headache.
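
    One way to referee the three implementations is to compare each against an independent 3DES-ECB baseline; a sketch with PyCryptodome follows. One wrinkle: the key used throughout the question is "1234" repeated six times, so its three 8-byte subkeys are identical, which degenerates DES-EDE3 to single DES, and PyCryptodome rejects such keys outright. The sketch therefore uses a non-degenerate stand-in key; a real comparison would need the actual key material run through a library that permits it.

        import base64
        from Crypto.Cipher import DES3  # pip install pycryptodome

        # NB: the question's key ("1234" * 6) has three identical 8-byte
        # subkeys, degenerating EDE3 to single DES; PyCryptodome refuses it,
        # so a non-degenerate stand-in key is used for this sketch.
        key = b"0123456789abcdefghijklmn"  # 24 bytes, three distinct subkeys
        plaintext = b"username=test123"    # 16 bytes, already block-aligned

        cipher = DES3.new(key, DES3.MODE_ECB)
        print(base64.b64encode(cipher.encrypt(plaintext)).decode())

    Whichever of the three implementations agrees with a baseline like this (given the same key) is the one doing unadorned DES-EDE3 with no key or plaintext massaging; the outliers are padding or transforming their inputs.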

  • Recover RAID 5 data after created new array instead of re-using

    - by Brigadieren
    Folks, please help - I am a newb with a major headache at hand (perfect-storm situation).

    I have three 1 TB HDDs on my Ubuntu 11.04 box, configured as software RAID 5. The data had been copied weekly onto a separate, off-the-computer hard drive until that failed completely and was thrown away. A few days back we had a power outage, and after rebooting, my box wouldn't mount the RAID. In my infinite wisdom I entered

        mdadm --create -f...

    instead of

        mdadm --assemble

    and didn't notice the travesty I had done until after. It started the array degraded and proceeded with building and syncing it, which took ~10 hours. After I was back, I saw that the array is successfully up and running, but the RAID is not. I mean, the individual drives are partitioned (partition type fd), but the md0 device is not. Realizing in horror what I had done, I am trying to find some solutions. I just pray that --create didn't overwrite the entire content of the hard drives.

    Could someone PLEASE help me out with this - the data that's on the drives is very important and unique: ~10 years of photos, docs, etc.

    Is it possible that, by specifying the participating hard drives in the wrong order, I made mdadm overwrite them? When I do

        mdadm --examine --scan

    I get something like:

        ARRAY /dev/md/0 metadata=1.2 UUID=f1b4084a:720b5712:6d03b9e9:43afe51b name=<hostname>:0

    Interestingly enough, the name used to be 'raid' and not the host name with :0 appended. Here are the 'sanitized' config entries:

        DEVICE /dev/sdf1 /dev/sde1 /dev/sdd1
        CREATE owner=root group=disk mode=0660 auto=yes
        HOMEHOST <system>
        MAILADDR root
        ARRAY /dev/md0 metadata=1.2 name=tanserv:0 UUID=f1b4084a:720b5712:6d03b9e9:43afe51b

    Here is the output from mdstat:

        cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active raid5 sdd1[0] sdf1[3] sde1[1]
              1953517568 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

        unused devices: <none>

    fdisk shows the following:

        fdisk -l

        Disk /dev/sda: 80.0 GB, 80026361856 bytes
        255 heads, 63 sectors/track, 9729 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000bf62e

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1        9443    75846656   83  Linux
        /dev/sda2            9443        9730     2301953    5  Extended
        /dev/sda5            9443        9730     2301952   82  Linux swap / Solaris

        Disk /dev/sdb: 750.2 GB, 750156374016 bytes
        255 heads, 63 sectors/track, 91201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000de8dd

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1               1       91201   732572001   8e  Linux LVM

        Disk /dev/sdc: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00056a17

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1               1       60801   488384001   8e  Linux LVM

        Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000ca948

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdd1               1      121601   976760001   fd  Linux raid autodetect

        Disk /dev/dm-0: 1250.3 GB, 1250254913536 bytes
        255 heads, 63 sectors/track, 152001 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/dm-0 doesn't contain a valid partition table

        Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x93a66687

           Device Boot      Start         End      Blocks   Id  System
        /dev/sde1               1      121601   976760001   fd  Linux raid autodetect

        Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xe6edc059

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdf1               1      121601   976760001   fd  Linux raid autodetect

        Disk /dev/md0: 2000.4 GB, 2000401989632 bytes
        2 heads, 4 sectors/track, 488379392 cylinders
        Units = cylinders of 8 * 512 = 4096 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
        Disk identifier: 0x00000000

        Disk /dev/md0 doesn't contain a valid partition table

    Per suggestions, I did clean up the superblocks and re-created the array with the --assume-clean option, but with no luck at all.

    Is there any tool that will help me to revive at least some of the data? Can someone tell me what mdadm --create does when it syncs, and how it destroys the data, so I can write a tool to undo whatever was done?

    After re-creating the RAID, I ran fsck.ext4 /dev/md0 and here is the output:

        root@tanserv:/etc/mdadm# fsck.ext4 /dev/md0
        e2fsck 1.41.14 (22-Dec-2010)
        fsck.ext4: Superblock invalid, trying backup blocks...
        fsck.ext4: Bad magic number in super-block while trying to open /dev/md0

        The superblock could not be read or does not describe a correct ext2
        filesystem. If the device is valid and it really contains an ext2
        filesystem (and not swap or ufs or something else), then the superblock
        is corrupt, and you might try running e2fsck with an alternate superblock:
            e2fsck -b 8193 <device>

    Per Shane's suggestion, I tried:

        root@tanserv:/home/mushegh# mkfs.ext4 -n /dev/md0
        mke2fs 1.41.14 (22-Dec-2010)
        Filesystem label=
        OS type: Linux
        Block size=4096 (log=2)
        Fragment size=4096 (log=2)
        Stride=128 blocks, Stripe width=256 blocks
        122101760 inodes, 488379392 blocks
        24418969 blocks (5.00%) reserved for the super user
        First data block=0
        Maximum filesystem blocks=0
        14905 block groups
        32768 blocks per group, 32768 fragments per group
        8192 inodes per group
        Superblock backups stored on blocks:
            32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
            2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616,
            78675968, 102400000, 214990848

    and ran fsck.ext4 with every backup block, but all returned the following:

        root@tanserv:/home/mushegh# fsck.ext4 -b 214990848 /dev/md0
        e2fsck 1.41.14 (22-Dec-2010)
        fsck.ext4: Invalid argument while trying to open /dev/md0

        The superblock could not be read or does not describe a correct ext2
        filesystem. If the device is valid and it really contains an ext2
        filesystem (and not swap or ufs or something else), then the superblock
        is corrupt, and you might try running e2fsck with an alternate superblock:
            e2fsck -b 8193 <device>

    Any suggestions? Regards!

  • mdadm raid5 recover double disk failure - with a twist (drive order)

    - by Peter Bos
    Let me acknowledge first off that I have made mistakes, and that I have a backup for most but not all of the data on this RAID. I still have hope of recovering the rest of the data. I don't have the kind of money to take the drives to a recovery expert company. Mistake #0: not having a 100% backup. I know.

    I have an mdadm RAID5 system of 4x3TB, drives /dev/sd[b-e], all with one partition /dev/sd[b-e]1. I'm aware that RAID5 on very large drives is risky, yet I did it anyway.

    Recent events: The RAID became degraded after a two-drive failure. One drive [/dev/sdc] is really gone; the other [/dev/sde] came back up after a power cycle, but was not automatically re-added to the RAID. So I was left with a 4-device RAID with only 2 active drives [/dev/sdb and /dev/sdd]. Mistake #1: not using dd copies of the drives for restoring the RAID. I did not have the drives or the time. Mistake #2: not making a backup of the superblock and mdadm -E of the remaining drives.

    Recovery attempt: I reassembled the RAID in degraded mode with mdadm --assemble --force /dev/md0, using /dev/sd[bde]1. I could then access my data. I replaced /dev/sdc with a spare, empty, identical drive. I removed the old /dev/sdc1 from the RAID:

    mdadm --fail /dev/md0 /dev/sdc1

    Mistake #3: not doing this before replacing the drive. I then partitioned the new /dev/sdc and added it to the RAID:

    mdadm --add /dev/md0 /dev/sdc1

    It then began to restore the RAID. ETA 300 mins. I followed the process via /proc/mdstat to 2% and then went to do other stuff.

    Checking the result: Several hours (but less than 300 mins) later, I checked the process. It had stopped due to a read error on /dev/sde1.

    Here is where the trouble really starts: I then removed /dev/sde1 from the RAID and re-added it. I can't remember why I did this; it was late.

    mdadm --manage /dev/md0 --remove /dev/sde1
    mdadm --manage /dev/md0 --add /dev/sde1

    However, /dev/sde1 was now marked as spare. So I decided to recreate the whole array using --assume-clean, using what I thought was the right order, and with /dev/sdc1 missing:

    mdadm --create /dev/md0 --assume-clean -l5 -n4 /dev/sdb1 missing /dev/sdd1 /dev/sde1

    That worked, but the filesystem was not recognized while trying to mount. (It should have been EXT4.)

    Device order: I then checked a recent backup I had of /proc/mdstat, and I found the drive order:

    md0 : active raid5 sdb1[0] sde1[4] sdd1[2] sdc1[1]
          8790402048 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

    I then remembered this RAID had suffered a drive loss about a year ago, and recovered from it by replacing the faulty drive with a spare one. That may have scrambled the device order a bit... so there was no drive [3] but only [0], [1], [2] and [4]. I tried to find the drive order with the Permute_array script (https://raid.wiki.kernel.org/index.php/Permute_array.pl), but that did not find the right order.

    Questions: I now have two main questions:

    1. I screwed up all the superblocks on the drives, but only gave mdadm --create --assume-clean commands (so I should not have overwritten the data itself on /dev/sd[bde]1). Am I right that in theory the RAID can be restored [assuming for a moment that /dev/sde1 is ok] if I just find the right device order?

    2. Is it important that /dev/sde1 be given the device number [4] in the RAID? When I create it with

    mdadm --create /dev/md0 --assume-clean -l5 -n4 \
      /dev/sdb1 missing /dev/sdd1 /dev/sde1

    it is assigned the number [3]. I wonder if that is relevant to the calculation of the parity blocks. If it turns out to be important, how can I recreate the array with /dev/sdb1[0] missing[1] /dev/sdd1[2] /dev/sde1[4]? If I could get that to work, I could start it in degraded mode, add the new drive /dev/sdc1 and let it resync again.

    It's OK if you would like to point out to me that this may not have been the best course of action, but you'll find that I realized this. It would be great if anyone has any suggestions.
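    On question 1, one way to make the search concrete: with four slots, one of them "missing", there are only 4! = 24 candidate orders, which is few enough to enumerate exhaustively. A minimal C++ sketch that only prints the candidate mdadm --create command lines built from the devices named in the question (it deliberately executes nothing, since each real attempt would need to be followed by a read-only fsck and an mdadm --stop by hand):

    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
        // The four RAID slots: three surviving members plus the failed one.
        std::vector<std::string> devs = {"/dev/sdb1", "/dev/sdd1", "/dev/sde1", "missing"};
        std::sort(devs.begin(), devs.end());  // next_permutation needs sorted input

        // 4! = 24 candidate orders; each printed command would be followed by a
        // read-only `fsck.ext4 -n /dev/md0` and `mdadm --stop /dev/md0` to test it.
        do {
            std::cout << "mdadm --create /dev/md0 --assume-clean -l5 -n4";
            for (const auto& d : devs) std::cout << ' ' << d;
            std::cout << '\n';
        } while (std::next_permutation(devs.begin(), devs.end()));
    }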

    Read the article

  • C#, AES encryption check!

    - by Data-Base
    I have this code for AES encryption; can someone verify that this code is good and not wrong? It works fine, but I'm more concerned about the implementation of the algorithm.

    // Plaintext value to be encrypted.
    // Passphrase from which a pseudo-random password will be derived.
    // The derived password will be used to generate the encryption key.
    // Password can be any string. In this example we assume that this passphrase is an ASCII string.
    // Salt value used along with passphrase to generate password.
    // Salt can be any string. In this example we assume that salt is an ASCII string.
    // HashAlgorithm used to generate password. Allowed values are: "MD5" and "SHA1".
    // SHA1 hashes are a bit slower, but more secure than MD5 hashes.
    // PasswordIterations used to generate password. One or two iterations should be enough.
    // InitialVector (or IV). This value is required to encrypt the first block of plaintext data.
    // For RijndaelManaged class IV must be exactly 16 ASCII characters long.
    // KeySize. Allowed values are: 128, 192, and 256.
    // Longer keys are more secure than shorter keys.
    // Encrypted value formatted as a base64-encoded string.
    public static string Encrypt(string PlainText, string Password, string Salt, string HashAlgorithm,
                                 int PasswordIterations, string InitialVector, int KeySize)
    {
        byte[] InitialVectorBytes = Encoding.ASCII.GetBytes(InitialVector);
        byte[] SaltValueBytes = Encoding.ASCII.GetBytes(Salt);
        byte[] PlainTextBytes = Encoding.UTF8.GetBytes(PlainText);
        PasswordDeriveBytes DerivedPassword = new PasswordDeriveBytes(Password, SaltValueBytes, HashAlgorithm, PasswordIterations);
        byte[] KeyBytes = DerivedPassword.GetBytes(KeySize / 8);
        RijndaelManaged SymmetricKey = new RijndaelManaged();
        SymmetricKey.Mode = CipherMode.CBC;
        ICryptoTransform Encryptor = SymmetricKey.CreateEncryptor(KeyBytes, InitialVectorBytes);
        MemoryStream MemStream = new MemoryStream();
        CryptoStream CryptoStream = new CryptoStream(MemStream, Encryptor, CryptoStreamMode.Write);
        CryptoStream.Write(PlainTextBytes, 0, PlainTextBytes.Length);
        CryptoStream.FlushFinalBlock();
        byte[] CipherTextBytes = MemStream.ToArray();
        MemStream.Close();
        CryptoStream.Close();
        return Convert.ToBase64String(CipherTextBytes);
    }

    public static string Decrypt(string CipherText, string Password, string Salt, string HashAlgorithm,
                                 int PasswordIterations, string InitialVector, int KeySize)
    {
        byte[] InitialVectorBytes = Encoding.ASCII.GetBytes(InitialVector);
        byte[] SaltValueBytes = Encoding.ASCII.GetBytes(Salt);
        byte[] CipherTextBytes = Convert.FromBase64String(CipherText);
        PasswordDeriveBytes DerivedPassword = new PasswordDeriveBytes(Password, SaltValueBytes, HashAlgorithm, PasswordIterations);
        byte[] KeyBytes = DerivedPassword.GetBytes(KeySize / 8);
        RijndaelManaged SymmetricKey = new RijndaelManaged();
        SymmetricKey.Mode = CipherMode.CBC;
        ICryptoTransform Decryptor = SymmetricKey.CreateDecryptor(KeyBytes, InitialVectorBytes);
        MemoryStream MemStream = new MemoryStream(CipherTextBytes);
        CryptoStream cryptoStream = new CryptoStream(MemStream, Decryptor, CryptoStreamMode.Read);
        byte[] PlainTextBytes = new byte[CipherTextBytes.Length];
        int ByteCount = cryptoStream.Read(PlainTextBytes, 0, PlainTextBytes.Length);
        MemStream.Close();
        cryptoStream.Close();
        return Encoding.UTF8.GetString(PlainTextBytes, 0, ByteCount);
    }

    Thank you
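    For comparison, here is roughly the same derive-key-then-CBC-encrypt structure written against OpenSSL's EVP API in C++. One deliberate difference to flag: this sketch derives the key with PBKDF2 (PKCS5_PBKDF2_HMAC), which is not byte-compatible with .NET's PasswordDeriveBytes, so it will not interoperate with the code above; it only illustrates the shape of the algorithm. As an aside on the comments above, PBKDF2 deployments typically use thousands of iterations rather than one or two.

    #include <openssl/evp.h>
    #include <string>
    #include <vector>

    // Sketch: PBKDF2 (SHA-1) key derivation + AES-256-CBC encryption.
    // NOT byte-compatible with PasswordDeriveBytes above; structure only.
    std::vector<unsigned char> encryptAesCbc(const std::string& plain,
                                             const std::string& password,
                                             const std::string& salt,
                                             const unsigned char iv[16],
                                             int iterations) {
        unsigned char key[32];  // 256-bit key
        PKCS5_PBKDF2_HMAC(password.c_str(), (int)password.size(),
                          (const unsigned char*)salt.c_str(), (int)salt.size(),
                          iterations, EVP_sha1(), sizeof(key), key);

        std::vector<unsigned char> out(plain.size() + 16);  // room for padding
        int len1 = 0, len2 = 0;
        EVP_CIPHER_CTX* ctx = EVP_CIPHER_CTX_new();
        EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), nullptr, key, iv);
        EVP_EncryptUpdate(ctx, out.data(), &len1,
                          (const unsigned char*)plain.data(), (int)plain.size());
        EVP_EncryptFinal_ex(ctx, out.data() + len1, &len2);  // applies PKCS#7 padding
        EVP_CIPHER_CTX_free(ctx);
        out.resize(len1 + len2);
        return out;
    }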

    Read the article

  • being able to solve google code jam problem sets

    - by JPro
    This is not a homework question, but rather my intention to know if this is what it takes to learn programming. I keep logging into TopCoder, not to actually participate but to get a basic understanding of how the problems are solved. But to my knowledge I don't understand what the problem is and how to translate the problem into an algorithm that can solve it. Just now I happened to look at the ACM ICPC 2010 World Finals, which is being held in China. The teams were given problem sets, and one of them is this:

    Given at most 100 points on a plane with distinct x-coordinates, find the shortest cycle that passes through each point exactly once, goes from the leftmost point always to the right until it reaches the rightmost point, then goes always to the left until it gets back to the leftmost point. Additionally, two points are given such that the path from left to right contains the first point, and the path from right to left contains the second point.

    This seems to be a very simple DP: after processing the last k points, and with the first path ending in point a and the second path ending in point b, what is the smallest total length to achieve that? This is O(n^2) states, transitions in O(n). We deal with the two special points by forcing the first path to contain the first one, and the second path to contain the second one.

    Now I have no idea what I am supposed to solve after reading the problem set. And there's another one from Google Code Jam:

    Problem: In a big, square room there are two point light sources: one is red and the other is green. There are also n circular pillars. Light travels in straight lines and is absorbed by walls and pillars. The pillars therefore cast shadows: they do not let light through. There are places in the room where no light reaches (black), where only one of the two light sources reaches (red or green), and places where both lights reach (yellow). Compute the total area of each of the four colors in the room. Do not include the area of the pillars.

    Input:
    * One line containing the number of test cases, T. Each test case contains, in order:
    * One line containing the coordinates x, y of the red light source.
    * One line containing the coordinates x, y of the green light source.
    * One line containing the number of pillars n.
    * n lines describing the pillars. Each contains 3 numbers x, y, r. The pillar is a disk with the center (x, y) and radius r.
    The room is the square described by 0 <= x, y <= 100. Pillars, room walls and light sources are all disjoint; they do not overlap or touch.

    Output: For each test case, output:
    Case #X: black area red area green area yellow area

    Is it required that people who program should be able to solve these types of problems? I would appreciate it if anyone can help me interpret the Google Code Jam problem set, as I wish to participate in this year's Code Jam to see if I can do anything or not. Thanks.
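    For what it's worth, the "very simple DP" quoted above for the ICPC problem is the classic bitonic-tour recurrence. Here is a minimal C++ sketch of the plain version, ignoring the two forced waypoints (the quoted solution handles those by forbidding transitions that put a forced point on the wrong path):

    #include <cmath>
    #include <vector>

    struct Pt { double x, y; };

    // Classic bitonic tour. Points must already be sorted by (distinct) x.
    // dp[i][j] (j < i) = min length of two left-to-right paths that together
    // cover points 0..i, one ending at i, the other at j.
    double bitonicTour(const std::vector<Pt>& p) {
        int n = (int)p.size();
        if (n < 2) return 0.0;
        auto d = [&](int a, int b) {
            return std::hypot(p[a].x - p[b].x, p[a].y - p[b].y);
        };
        const double INF = 1e18;
        std::vector<std::vector<double>> dp(n, std::vector<double>(n, INF));
        dp[1][0] = d(0, 1);
        for (int i = 1; i + 1 < n; ++i) {
            for (int j = 0; j < i; ++j) {
                if (dp[i][j] >= INF) continue;
                // Point i+1 extends the path currently ending at i ...
                dp[i + 1][j] = std::min(dp[i + 1][j], dp[i][j] + d(i, i + 1));
                // ... or the path currently ending at j (which then ends at i+1).
                dp[i + 1][i] = std::min(dp[i + 1][i], dp[i][j] + d(j, i + 1));
            }
        }
        return dp[n - 1][n - 2] + d(n - 2, n - 1);  // close the cycle on the right
    }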

    Read the article

  • Problem with memset after an instance of a user defined class is created and a file is opened

    - by Liberalkid
    I'm having a weird problem with memset that has something to do with a class I'm creating before it and a file I'm opening in the constructor. The class I'm working with normally reads in an array and transforms it into another array, but that's not important. The class I'm working with is:

    #include <vector>
    #include <algorithm>
    using namespace std;

    class PreProcess
    {
    public:
        PreProcess(char* fileName, char* outFileName);
        void SortedOrder();
    private:
        vector< vector<double> > matrix;
        void SortRow(vector<double> &row);
        char* newFileName;
        vector< pair<double,int> > rowSorted;
    };

    The other functions aren't important, because I've stopped calling them and the problem persists. Essentially I've narrowed it down to my constructor:

    PreProcess::PreProcess(char* fileName, char* outFileName) : newFileName(outFileName)
    {
        ifstream input(fileName);
        input.close(); //this statement is inconsequential
    }

    I also read in the file in my constructor, but I've found that the problem persists if I don't read in the matrix and just open the file. Essentially I've narrowed it down to this: if I comment out those two lines, the memset works properly; otherwise it doesn't.

    Now to the context of the problem I'm having with it: I wrote my own simple wrapper class for matrices. It doesn't have much functionality; I just need 2D arrays in the next part of my project, and having a class handle everything makes more sense to me. The header file:

    #include <iostream>
    using namespace std;

    class Matrix
    {
    public:
        Matrix(int r, int c);
        int &operator()(int i, int j)
        { //I know I should check my bounds here
            return matrix[i*columns + j];
        }
        ~Matrix();
        const void Display();
    private:
        int *matrix;
        const int rows;
        const int columns;
    };

    Driver:

    #include "Matrix.h"
    #include <string>
    using namespace std;

    Matrix::Matrix(int r, int c) : rows(r), columns(c)
    {
        matrix = new int[rows*columns];
        memset(matrix, 0, sizeof(matrix));
    }

    const void Matrix::Display()
    {
        for(int i = 0; i < rows; i++)
        {
            for(int j = 0; j < columns; j++)
                cout << (*this)(i,j) << " ";
            cout << endl;
        }
    }

    Matrix::~Matrix()
    {
        delete matrix;
    }

    My main program runs:

    PreProcess test1(argv[1], argv[2]);
    //test1.SortedOrder();
    Matrix test(10,10);
    test.Display();

    And when I run this with the input line uncommented I get:

    0 0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 -1371727776 32698 -1 0
    0 0 0 0 6332656 0 -1 -1 0 0
    6332672 0 0 0 0 0 0 0 0 0
    0 0 0 0 -1371732704 32698 0 0 0 0
    0 0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0 0

    I really don't have a clue what's going on in memory to cause this. On a side note, if I replace the memset with:

    for(int i = 0; i < rows*columns; i++)
        *(matrix + i) &= 0x0;

    then it works perfectly. It also works if I don't open the file. If it helps, I'm running GCC 64-bit version 4.2.4 on Ubuntu. I assume there's some functionality of memset that I'm not properly understanding.
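    For what it's worth, one detail in the constructor above would produce exactly this kind of garbage regardless of the file I/O: matrix is an int*, so sizeof(matrix) is the size of the pointer (4 or 8 bytes), not of the allocation. The memset therefore zeroes only the first one or two ints, and the rest keep whatever heap garbage is already there; constructing PreProcess and opening a file merely changes what that garbage happens to be. A sketch of the corrected constructor and matching array delete:

    #include <cstring>

    Matrix::Matrix(int r, int c) : rows(r), columns(c)
    {
        matrix = new int[rows * columns];
        // sizeof(matrix) is sizeof(int*); the full byte count is needed here.
        std::memset(matrix, 0, rows * columns * sizeof(int));
        // or simply: matrix = new int[rows * columns]();  // value-initialized
    }

    Matrix::~Matrix()
    {
        delete[] matrix;  // array form to match new[]
    }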

    Read the article

  • fd partitions gone from 2 discs, md happy with it and resyncs. How to recover?

    - by d0nd
    Hey gurus, need some help badly with this one. I run a server with a 6Tb md raid5 volume built over 7*1Tb disks. I've had to shut down the server lately, and when it went back up, 2 out of the 7 disks used for the raid volume had lost their configuration:

    dmesg:
    [ 10.184167] sda: sda1 sda2 sda3 // System disk
    [ 10.202072] sdb: sdb1
    [ 10.210073] sdc: sdc1
    [ 10.222073] sdd: sdd1
    [ 10.229330] sde: sde1
    [ 10.239449] sdf: sdf1
    [ 11.099896] sdg: unknown partition table
    [ 11.255641] sdh: unknown partition table

    All 7 disks have the same geometry and were configured alike:

    Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0x1e7481a5

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1               1      121601   976760001   fd  Linux raid autodetect

    All 7 disks (sdb1, sdc1, sdd1, sde1, sdf1, sdg1, sdh1) were used in a md raid5 xfs volume. When booting, md, which was (obviously) out of sync, kicked in and automatically started rebuilding over the 7 disks, including the two "faulty" ones; xfs tried to do some shenanigans as well:

    dmesg:
    [ 19.566941] md: md0 stopped.
    [ 19.817038] md: bind<sdc1>
    [ 19.817339] md: bind<sdd1>
    [ 19.817465] md: bind<sde1>
    [ 19.817739] md: bind<sdf1>
    [ 19.817917] md: bind<sdh>
    [ 19.818079] md: bind<sdg>
    [ 19.818198] md: bind<sdb1>
    [ 19.818248] md: md0: raid array is not clean -- starting background reconstruction
    [ 19.825259] raid5: device sdb1 operational as raid disk 0
    [ 19.825261] raid5: device sdg operational as raid disk 6
    [ 19.825262] raid5: device sdh operational as raid disk 5
    [ 19.825264] raid5: device sdf1 operational as raid disk 4
    [ 19.825265] raid5: device sde1 operational as raid disk 3
    [ 19.825267] raid5: device sdd1 operational as raid disk 2
    [ 19.825268] raid5: device sdc1 operational as raid disk 1
    [ 19.825665] raid5: allocated 7334kB for md0
    [ 19.825667] raid5: raid level 5 set md0 active with 7 out of 7 devices, algorithm 2
    [ 19.825669] RAID5 conf printout:
    [ 19.825670]  --- rd:7 wd:7
    [ 19.825671]  disk 0, o:1, dev:sdb1
    [ 19.825672]  disk 1, o:1, dev:sdc1
    [ 19.825673]  disk 2, o:1, dev:sdd1
    [ 19.825675]  disk 3, o:1, dev:sde1
    [ 19.825676]  disk 4, o:1, dev:sdf1
    [ 19.825677]  disk 5, o:1, dev:sdh
    [ 19.825679]  disk 6, o:1, dev:sdg
    [ 19.899787] PM: Starting manual resume from disk
    [ 28.663228] Filesystem "md0": Disabling barriers, not supported by the underlying device
    [ 28.663228] XFS mounting filesystem md0
    [ 28.884433] md: resync of RAID array md0
    [ 28.884433] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
    [ 28.884433] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
    [ 28.884433] md: using 128k window, over a total of 976759936 blocks.
    [ 29.025980] Starting XFS recovery on filesystem: md0 (logdev: internal)
    [ 32.680486] XFS: xlog_recover_process_data: bad clientid
    [ 32.680495] XFS: log mount/recovery failed: error 5
    [ 32.682773] XFS: log mount failed

    I ran fdisk and flagged sdg1 and sdh1 as fd. I tried to reassemble the array, but it didn't work: no matter what was in mdadm.conf, it still uses sdg and sdh instead of sdg1 and sdh1. I checked in /dev and I see no sdg1 and sdh1, which explains why it won't use them. I just don't know why those partitions are gone from /dev and how to re-add them...

    blkid:
    /dev/sda1: LABEL="boot" UUID="519790ae-32fe-4c15-a7f6-f1bea8139409" TYPE="ext2"
    /dev/sda2: TYPE="swap"
    /dev/sda3: LABEL="root" UUID="91390d23-ed31-4af0-917e-e599457f6155" TYPE="ext3"
    /dev/sdb1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"
    /dev/sdc1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"
    /dev/sdd1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"
    /dev/sde1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"
    /dev/sdf1: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"
    /dev/sdg: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"
    /dev/sdh: UUID="2802e68a-dd11-c519-e8af-0d8f4ed72889" TYPE="mdraid"

    fdisk -l:

    Disk /dev/sda: 40.0 GB, 40020664320 bytes
    255 heads, 63 sectors/track, 4865 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0x8c878c87

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           1          12       96358+  83  Linux
    /dev/sda2              13         134      979965   82  Linux swap / Solaris
    /dev/sda3             135        4865    38001757+  83  Linux

    Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0x1e7481a5

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1               1      121601   976760001   fd  Linux raid autodetect

    Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0xc9bdc1e9

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdc1               1      121601   976760001   fd  Linux raid autodetect

    Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0xcc356c30

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdd1               1      121601   976760001   fd  Linux raid autodetect

    Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0xe87f7a3d

       Device Boot      Start         End      Blocks   Id  System
    /dev/sde1               1      121601   976760001   fd  Linux raid autodetect

    Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0xb17a2d22

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdf1               1      121601   976760001   fd  Linux raid autodetect

    Disk /dev/sdg: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0x8f3bce61

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdg1               1      121601   976760001   fd  Linux raid autodetect

    Disk /dev/sdh: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0xa98062ce

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdh1               1      121601   976760001   fd  Linux raid autodetect

    I really don't know what happened nor how to recover from this mess. Needless to say, the 5TB or so worth of data sitting on those disks is very valuable to me... Any ideas, anyone? Did anybody ever experience a similar situation, or know how to recover from it? Can someone help me? I'm really desperate... :x

    Read the article

  • I don't understand how call_once works

    - by SABROG
    Please help me understand how call_once works. Here is thread-safe code. I don't understand why it needs the Thread Local Storage and global_epoch variables. Couldn't the variable _fast_pthread_once_per_thread_epoch be changed to a constant/enum like {FAST_PTHREAD_ONCE_INIT, BEING_INITIALIZED, FINISH_INITIALIZED}? Why does global_epoch need to count calls? I think this code could be rewritten with the logic: if the flag is FINISH_INITIALIZED do nothing, else go to the block with mutexes, and that's all.

    #ifndef FAST_PTHREAD_ONCE_H
    #define FAST_PTHREAD_ONCE_H

    #include <signal.h>
    #include <pthread.h>

    typedef sig_atomic_t fast_pthread_once_t;
    #define FAST_PTHREAD_ONCE_INIT SIG_ATOMIC_MAX

    extern __thread fast_pthread_once_t _fast_pthread_once_per_thread_epoch;

    #ifdef __cplusplus
    extern "C" {
    #endif

    extern void fast_pthread_once( pthread_once_t *once, void (*func)(void) );

    inline static void fast_pthread_once_inline( fast_pthread_once_t *once, void (*func)(void) )
    {
        fast_pthread_once_t x = *once; /* unprotected access */
        if ( x > _fast_pthread_once_per_thread_epoch ) {
            fast_pthread_once( once, func );
        }
    }

    #ifdef __cplusplus
    }
    #endif

    #endif /* FAST_PTHREAD_ONCE_H */

    Source fast_pthread_once.c. The source is written in C. The lines of the primary function are numbered for reference in the subsequent correctness argument.

    #include "fast_pthread_once.h"
    #include <stdlib.h>

    static pthread_mutex_t mu = PTHREAD_MUTEX_INITIALIZER; /* protects global_epoch and all fast_pthread_once_t writes */
    static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;   /* signalled whenever a fast_pthread_once_t is finalized */

    #define BEING_INITIALIZED (FAST_PTHREAD_ONCE_INIT - 1)

    static fast_pthread_once_t global_epoch = 0; /* under mu */
    __thread fast_pthread_once_t _fast_pthread_once_per_thread_epoch;

    static void check( int x )
    {
        if ( x == 0 ) abort();
    }

    void fast_pthread_once( fast_pthread_once_t *once, void (*func)(void) )
    {
    /*01*/  fast_pthread_once_t x = *once; /* unprotected access */
    /*02*/  if ( x > _fast_pthread_once_per_thread_epoch ) {
    /*03*/      check( pthread_mutex_lock(&mu) == 0 );
    /*04*/      if ( *once == FAST_PTHREAD_ONCE_INIT ) {
    /*05*/          *once = BEING_INITIALIZED;
    /*06*/          check( pthread_mutex_unlock(&mu) == 0 );
    /*07*/          (*func)();
    /*08*/          check( pthread_mutex_lock(&mu) == 0 );
    /*09*/          global_epoch++;
    /*10*/          *once = global_epoch;
    /*11*/          check( pthread_cond_broadcast(&cv) == 0 );
    /*12*/      } else {
    /*13*/          while ( *once == BEING_INITIALIZED ) {
    /*14*/              check( pthread_cond_wait(&cv, &mu) == 0 );
    /*15*/          }
    /*16*/      }
    /*17*/      _fast_pthread_once_per_thread_epoch = global_epoch;
    /*18*/      check( pthread_mutex_unlock(&mu) == 0 );
        }
    }

    This code is from BOOST:

    #ifndef BOOST_THREAD_PTHREAD_ONCE_HPP
    #define BOOST_THREAD_PTHREAD_ONCE_HPP

    // once.hpp
    //
    // (C) Copyright 2007-8 Anthony Williams
    //
    // Distributed under the Boost Software License, Version 1.0.
    // (See accompanying file LICENSE_1_0.txt or copy at
    // http://www.boost.org/LICENSE_1_0.txt)

    #include
    #include
    #include
    #include "pthread_mutex_scoped_lock.hpp"
    #include
    #include
    #include

    namespace boost
    {
        struct once_flag
        {
            boost::uintmax_t epoch;
        };

        namespace detail
        {
            BOOST_THREAD_DECL boost::uintmax_t& get_once_per_thread_epoch();
            BOOST_THREAD_DECL extern boost::uintmax_t once_global_epoch;
            BOOST_THREAD_DECL extern pthread_mutex_t once_epoch_mutex;
            BOOST_THREAD_DECL extern pthread_cond_t once_epoch_cv;
        }

    #define BOOST_ONCE_INITIAL_FLAG_VALUE 0
    #define BOOST_ONCE_INIT {BOOST_ONCE_INITIAL_FLAG_VALUE}

        // Based on Mike Burrows fast_pthread_once algorithm as described in
        // http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2444.html
        template<typename Function>
        void call_once(once_flag& flag, Function f)
        {
            static boost::uintmax_t const uninitialized_flag = BOOST_ONCE_INITIAL_FLAG_VALUE;
            static boost::uintmax_t const being_initialized = uninitialized_flag + 1;
            boost::uintmax_t const epoch = flag.epoch;
            boost::uintmax_t& this_thread_epoch = detail::get_once_per_thread_epoch();
            if(epoch

    #endif

    Do I understand right that Boost doesn't use atomic operations, so the code from Boost is not thread-safe? (The paste above cut off the rest of the Boost function body and some of the include names.)
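    A rough restatement of the same epoch scheme in C++11 may make the roles of the two variables clearer: the thread-local value caches the last global_epoch this thread has synchronized with under the mutex, so the fast path can skip locking entirely, while the global counter exists so that a stale per-thread value can never be mistaken for "this flag is done". A three-state flag alone cannot give that guarantee, because a thread that has never taken the mutex has no happens-before edge to the initializer's writes. This is a sketch of the idea only, not of Boost's actual internals:

    #include <atomic>
    #include <condition_variable>
    #include <cstdint>
    #include <mutex>

    namespace sketch {
    constexpr std::uintmax_t kUninit = 0, kBusy = 1;
    std::mutex mu;
    std::condition_variable cv;
    std::uintmax_t global_epoch = 1;               // counts completed call_once's
    thread_local std::uintmax_t thread_epoch = 1;  // last epoch this thread synced with

    struct OnceFlag { std::atomic<std::uintmax_t> epoch{kUninit}; };

    template <class F>
    void call_once(OnceFlag& flag, F f) {
        std::uintmax_t e = flag.epoch.load(std::memory_order_acquire);
        if (e != kUninit && e != kBusy && e <= thread_epoch) return;  // fast path
        std::unique_lock<std::mutex> lk(mu);
        if (flag.epoch.load() == kUninit) {
            flag.epoch.store(kBusy);
            lk.unlock();
            f();                                   // run the initializer unlocked
            lk.lock();
            flag.epoch.store(++global_epoch, std::memory_order_release);
            cv.notify_all();
        } else {
            while (flag.epoch.load() == kBusy) cv.wait(lk);
        }
        thread_epoch = global_epoch;               // cache what we have synced with
    }
    }  // namespace sketch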

    Read the article

  • A* (A-star) implementation in AS3

    - by Bryan Hare
    Hey, I am putting together a project for a class that requires me to put AI in a top-down tactical strategy game in Flash AS3. I decided that I would use a node-based pathfinding approach because the game is based on a circular movement scheme. When a player moves a unit, he essentially draws a series of connected line segments that the unit will follow along. I am trying to put together a similar operation for the AI units in our game by creating a list of nodes to traverse to a target node. Hence my use of A* (the resulting path can be used to create this line). Here is my algorithm:

    function findShortestPath(startN:node, goalN:node)
    {
        var openSet:Array = new Array();
        var closedSet:Array = new Array();
        var pathFound:Boolean = false;
        startN.g_score = 0;
        startN.h_score = distFunction(startN, goalN);
        startN.f_score = startN.h_score;
        startN.fromNode = null;
        openSet.push(startN);
        var i:int = 0;
        for (i = 0; i < nodeArray.length; i++)
        {
            for (var j:int = 0; j < nodeArray[0].length; j++)
            {
                if (!nodeArray[i][j].isPathable)
                {
                    closedSet.push(nodeArray[i][j]);
                }
            }
        }
        while (openSet.length != 0)
        {
            var cNode:node = openSet.shift();
            if (cNode == goalN)
            {
                resolvePath(cNode);
                return true;
            }
            closedSet.push(cNode);
            for (i = 0; i < cNode.dirArray.length; i++)
            {
                var neighborNode:node = cNode.nodeArray[cNode.dirArray[i]];
                if (!(closedSet.indexOf(neighborNode) == -1))
                {
                    continue;
                }
                neighborNode.fromNode = cNode;
                var tenativeg_score:Number = cNode.gscore + distFunction(neighborNode.fromNode, neighborNode);
                if (openSet.indexOf(neighborNode) == -1)
                {
                    neighborNode.g_score = neighborNode.fromNode.g_score + distFunction(neighborNode, cNode);
                    if (cNode.dirArray[i] >= 4)
                    {
                        neighborNode.g_score -= 4;
                    }
                    neighborNode.h_score = distFunction(neighborNode, goalN);
                    neighborNode.f_score = neighborNode.g_score + neighborNode.h_score;
                    insertIntoPQ(neighborNode, openSet);
                    //trace(" F Score of neighbor: " + neighborNode.f_score + " H score of Neighbor: " + neighborNode.h_score + " G_score or neighbor: " + neighborNode.g_score);
                }
                else if (tenativeg_score <= neighborNode.g_score)
                {
                    neighborNode.fromNode = cNode;
                    neighborNode.g_score = cNode.g_score + distFunction(neighborNode, cNode);
                    if (cNode.dirArray[i] >= 4)
                    {
                        neighborNode.g_score -= 4;
                    }
                    neighborNode.f_score = neighborNode.g_score + neighborNode.h_score;
                    openSet.splice(openSet.indexOf(neighborNode), 1);
                    //trace(" F Score of neighbor: " + neighborNode.f_score + " H score of Neighbor: " + neighborNode.h_score + " G_score or neighbor: " + neighborNode.g_score);
                    insertIntoPQ(neighborNode, openSet);
                }
            }
        }
        trace("fail");
        return false;
    }

    Right now this function creates paths that are often not optimal or wholly inaccurate given the target, and this generally happens when I have nodes that are not pathable; I am not quite sure what I am doing wrong right now. If someone could help me correct this I would appreciate it greatly.

    Some notes: my openSet is essentially a priority queue, so that's how I sort my nodes by cost. Here is that function:

    function insertIntoPQ(iNode:node, pq:Array)
    {
        var inserted:Boolean = true;
        var iterater:int = 0;
        while (inserted)
        {
            if (iterater == pq.length)
            {
                pq.push(iNode);
                inserted = false;
            }
            else if (pq[iterater].f_score >= iNode.f_score)
            {
                pq.splice(iterater, 0, iNode);
                inserted = false;
            }
            ++iterater;
        }
    }

    Thanks!
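    For reference, here is a compact textbook A* in C++ (grid-agnostic; the adjacency list and heuristic table are stand-ins for whatever node model the game uses). Two spots worth comparing against the AS3 version above: the tentative score is computed and compared before any of the neighbour's fields are mutated, whereas above fromNode is assigned before the check; and note that the AS3 code reads cNode.gscore in one place but g_score everywhere else.

    #include <functional>
    #include <queue>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    using Node = int;
    using Cost = double;

    // Textbook A*: adj[n] gives (neighbor, edge cost) pairs, h[n] the heuristic.
    std::vector<Node> aStar(Node start, Node goal,
                            const std::vector<std::vector<std::pair<Node, Cost>>>& adj,
                            const std::vector<Cost>& h) {
        using QE = std::pair<Cost, Node>;                      // (f, node)
        std::priority_queue<QE, std::vector<QE>, std::greater<QE>> open;
        std::unordered_map<Node, Cost> g;
        std::unordered_map<Node, Node> from;
        g[start] = 0;
        open.push({h[start], start});
        while (!open.empty()) {
            auto [f, cur] = open.top();
            open.pop();
            if (cur == goal) {                                 // reconstruct path
                std::vector<Node> path{goal};
                while (from.count(path.back())) path.push_back(from[path.back()]);
                return {path.rbegin(), path.rend()};
            }
            if (f > g[cur] + h[cur]) continue;                 // stale queue entry
            for (auto [nb, w] : adj[cur]) {
                Cost tentative = g[cur] + w;
                // Compare BEFORE mutating anything on the neighbour.
                if (!g.count(nb) || tentative < g[nb]) {
                    g[nb] = tentative;
                    from[nb] = cur;
                    open.push({tentative + h[nb], nb});
                }
            }
        }
        return {};                                             // no path found
    }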

    Read the article

  • Why does my finite state machine take so long to execute?

    - by BillyONeal
    Hello all :) I'm working on a state machine which is supposed to extract, from a file, function calls of the form

    /* I am a comment */
    //I am a comment
    perf("this.is.a.string.which\"can have QUOTES\"", 123456);

    where the extracted data would be

    perf("this.is.a.string.which\"can have QUOTES\"", 123456);

    Currently, to process a 41kb file, this process is taking close to a minute and a half. Is there something I'm seriously misunderstanding here about this finite state machine?

    #include <boost/algorithm/string.hpp>

    std::vector<std::string> Foo()
    {
        std::string fileData;
        //Fill fileData with the contents of a file

        std::vector<std::string> results;
        std::string::iterator begin = fileData.begin();
        std::string::iterator end = fileData.end();
        std::string::iterator stateZeroFoundLocation = fileData.begin();
        std::size_t state = 0;
        for(; begin < end; begin++)
        {
            switch (state)
            {
            case 0:
                if (boost::starts_with(boost::make_iterator_range(begin, end), "pref("))
                {
                    stateZeroFoundLocation = begin;
                    begin += 4;
                    state = 2;
                }
                else if (*begin == '/')
                    state = 1;
                break;
            case 1:
                state = 0;
                switch (*begin)
                {
                case '*':
                    begin = boost::find_first(boost::make_iterator_range(begin, end), "*/").end();
                    break;
                case '/':
                    begin = std::find(begin, end, L'\n');
                }
                break;
            case 2:
                if (*begin == '"') state = 3;
                break;
            case 3:
                switch(*begin)
                {
                case '\\': state = 4; break;
                case '"': state = 5;
                }
                break;
            case 4:
                state = 3;
                break;
            case 5:
                if (*begin == ',') state = 6;
                break;
            case 6:
                if (*begin != ' ') state = 7;
                break;
            case 7:
                switch(*begin)
                {
                case '"': state = 8; break;
                default: state = 10; break;
                }
                break;
            case 8:
                switch(*begin)
                {
                case '\\': state = 9; break;
                case '"': state = 10;
                }
                break;
            case 9:
                state = 8;
                break;
            case 10:
                if (*begin == ')') state = 11;
                break;
            case 11:
                if (*begin == ';') state = 12;
                break;
            case 12:
                state = 0;
                results.push_back(std::string(stateZeroFoundLocation, begin));
            };
        }
        return results;
    }

    Billy3
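    Not an answer, but one thing that might be worth measuring: in state 0, boost::starts_with is called on a freshly constructed iterator_range for every single byte of the input. A plain std::string::compare against the literal avoids building any temporaries in that hot loop; a minimal sketch of just that test (the helper name is made up for illustration):

    #include <string>

    // Hypothetical replacement for the state-0 test: no ranges, no temporaries.
    // Returns true when the literal "pref(" begins at position pos. Safe for
    // pos <= data.size(); a short tail simply fails the comparison.
    inline bool atCallStart(const std::string& data, std::size_t pos)
    {
        return data.compare(pos, 5, "pref(") == 0;
    }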

    Read the article

  • help setting up an IPSEC vpn from my linux box

    - by robthewolf
    I have an office with a router and a remote server (Linux - Ubuntu 10.10). Both locations need to connect to a data supplier through a VPN. The VPN is an IPSEC gateway. I was able to configure my Linksys rv42 router to create a VPN connection successfully, and now I need to do the same for the Linux server. I have been messing around with this for too long. First I tried OpenVPN, but that is SSL and not IPSEC. Then I tried Shrew. I think I have the settings correct, but I haven't been able to create the connection. It may be that I have to use something else, like a direct IPSEC config or something like that. If someone knows of a way to turn the following settings that I have been given into a working IPSEC VPN connection, I would be very grateful.

    Here are the settings I was given that must be used to connect to my supplier:

    Local destination network: 192.168.4.0/24
    Local destination hosts: 192.168.4.100
    Remote destination network: 192.167.40.0/24
    Remote destination hosts: 192.168.40.27
    VPN peering point: xxx.xxx.xxx.xxx

    Then they have given me the following details:

    IPSEC/ISAKMP Phase 1 parameters:
    Authentication method: pre-shared secret
    Diffie-Hellman group: group 2
    Encryption algorithm: 3DES
    Lifetime in seconds: 28800

    Phase 2 parameters:
    IPSEC security: ESP
    Encryption algorithms: 3DES
    Authentication algorithms: MD5
    Lifetime in seconds: 28800
    PFS: disabled

    Here are the settings from my attempt to use Shrew:

    n:version:2
    n:network-ike-port:500
    n:network-mtu-size:1380
    n:client-addr-auto:0
    n:network-frag-size:540
    n:network-dpd-enable:1
    n:network-notify-enable:1
    n:client-banner-enable:1
    n:client-dns-used:1
    b:auth-mutual-psk:YjJzN2QzdDhyN2EyZDNpNG42ZzQ=
    n:phase1-dhgroup:2
    n:phase1-keylen:0
    n:phase1-life-secs:28800
    n:phase1-life-kbytes:0
    n:vendor-chkpt-enable:0
    n:phase2-keylen:0
    n:phase2-pfsgroup:-1
    n:phase2-life-secs:28800
    n:phase2-life-kbytes:0
    n:policy-nailed:0
    n:policy-list-auto:1
    n:client-dns-auto:1
    n:network-natt-port:4500
    n:network-natt-rate:15
    s:client-dns-addr:0.0.0.0
    s:client-dns-suffix:
    s:network-host:xxx.xxx.xxx.xxx
    s:client-auto-mode:pull
    s:client-iface:virtual
    s:client-ip-addr:192.168.4.0
    s:client-ip-mask:255.255.255.0
    s:network-natt-mode:enable
    s:network-frag-mode:disable
    s:auth-method:mutual-psk
    s:ident-client-type:address
    s:ident-client-data:192.168.4.0
    s:ident-server-type:address
    s:ident-server-data:192.168.40.0
    s:phase1-exchange:aggressive
    s:phase1-cipher:3des
    s:phase1-hash:md5
    s:phase2-transform:3des
    s:phase2-hmac:md5
    s:ipcomp-transform:disabled

    Finally, here is the debug output from the Shrew log:

    10/12/22 17:22:18 ii : ipc client process thread begin ...
    10/12/22 17:22:18 < A : peer config add message
    10/12/22 17:22:18 DB : peer added ( obj count = 1 )
    10/12/22 17:22:18 ii : local address 217.xxx.xxx.xxx selected for peer
    10/12/22 17:22:18 DB : tunnel added ( obj count = 1 )
    10/12/22 17:22:18 < A : proposal config message
    10/12/22 17:22:18 < A : proposal config message
    10/12/22 17:22:18 < A : client config message
    10/12/22 17:22:18 < A : local id '192.168.4.0' message
    10/12/22 17:22:18 < A : remote id '192.168.40.0' message
    10/12/22 17:22:18 < A : preshared key message
    10/12/22 17:22:18 < A : peer tunnel enable message
    10/12/22 17:22:18 DB : new phase1 ( ISAKMP initiator )
    10/12/22 17:22:18 DB : exchange type is aggressive
    10/12/22 17:22:18 DB : 217.xxx.xxx.xxx:500 <- 206.xxx.xxx.xxx:500
    10/12/22 17:22:18 DB : c1a8b31ac860995d:0000000000000000
    10/12/22 17:22:18 DB : phase1 added ( obj count = 1 )
    10/12/22 17:22:18 : security association payload
    10/12/22 17:22:18 : - proposal #1 payload
    10/12/22 17:22:18 : -- transform #1 payload
    10/12/22 17:22:18 : key exchange payload
    10/12/22 17:22:18 : nonce payload
    10/12/22 17:22:18 : identification payload
    10/12/22 17:22:18 : vendor id payload
    10/12/22 17:22:18 ii : local supports nat-t ( draft v00 )
    10/12/22 17:22:18 : vendor id payload
    10/12/22 17:22:18 ii : local supports nat-t ( draft v01 )
    10/12/22 17:22:18 : vendor id payload
    10/12/22 17:22:18 ii : local supports nat-t ( draft v02 )
    10/12/22 17:22:18 : vendor id payload
    10/12/22 17:22:18 ii : local supports nat-t ( draft v03 )
    10/12/22 17:22:18 : vendor id payload
    10/12/22 17:22:18 ii : local supports nat-t ( rfc )
    10/12/22 17:22:18 : vendor id payload
    10/12/22 17:22:18 ii : local supports DPDv1
    10/12/22 17:22:18 : vendor id payload
    10/12/22 17:22:18 ii : local is SHREW SOFT compatible
    10/12/22 17:22:18 : vendor id payload
    10/12/22 17:22:18 ii : local is NETSCREEN compatible
    10/12/22 17:22:18 : vendor id payload
    10/12/22 17:22:18 ii : local is SIDEWINDER compatible
    10/12/22 17:22:18 : vendor id payload
    10/12/22 17:22:18 ii : local is CISCO UNITY compatible
    10/12/22 17:22:18 = : cookies c1a8b31ac860995d:0000000000000000
    10/12/22 17:22:18 = : message 00000000
    10/12/22 17:22:18 - : send IKE packet 217.xxx.xxx.xxx:500 - 206.xxx.xxx.xxx:500 ( 484 bytes )
    10/12/22 17:22:18 DB : phase1 resend event scheduled ( ref count = 2 )
    10/12/22 17:22:18 ii : opened tap device tap0
    10/12/22 17:22:28 - : resend 1 phase1 packet(s) 217.xxx.xxx.xxx:500 - 206.xxx.xxx.xxx:500
    10/12/22 17:22:38 - : resend 1 phase1 packet(s) 217.xxx.xxx.xxx:500 - 206.xxx.xxx.xxx:500
    10/12/22 17:22:48 - : resend 1 phase1 packet(s) 217.xxx.xxx.xxx:500 - 206.xxx.xxx.xxx:500
    10/12/22 17:22:58 ii : resend limit exceeded for phase1 exchange
    10/12/22 17:22:58 ii : phase1 removal before expire time
    10/12/22 17:22:58 DB : phase1 deleted ( obj count = 0 )
    10/12/22 17:22:58 ii : closed tap device tap0
    10/12/22 17:22:58 DB : tunnel stats event canceled ( ref count = 1 )
    10/12/22 17:22:58 DB : removing tunnel config references
    10/12/22 17:22:58 DB : removing tunnel phase2 references
    10/12/22 17:22:58 DB : removing tunnel phase1 references
    10/12/22 17:22:58 DB : tunnel deleted ( obj count = 0 )
    10/12/22 17:22:58 DB : removing all peer tunnel refrences
    10/12/22 17:22:58 DB : peer deleted ( obj count = 0 )
    10/12/22 17:22:58 ii : ipc client process thread exit ...

    Read the article
