Search Results

Search found 2497 results on 100 pages for 'pitch tracking'.


  • Pushing to bare Git repository (remote) causes it to stop being bare

    - by NSD
    I have a local repository called TestRepo. I clone it with the --bare option, zip this clone up, and throw it on my server. Unzip it, and it's still bare. I then clone the bare remote repository locally over ssh with something like

        git clone ssh://[email protected]/~/TestRepo.git TestRepoCloned

    The local TestRepoCloned is not bare and has a remote called "origin." It appears to be tracking correctly from the looks of its config file:

        [core]
            repositoryformatversion = 0
            filemode = true
            bare = false
            logallrefupdates = true
            ignorecase = true
        [remote "origin"]
            fetch = +refs/heads/*:refs/remotes/origin/*
            url = ssh://[email protected]/~/TestRepo.git
        [branch "master"]
            remote = origin
            merge = refs/heads/master

    I edit an existing file. I commit the change to the current branch (master) via git commit -a -m "Edited a file." The commit succeeds and all is well. I decide to push this change to the remote repository via SSH with a git push. The remote repository is now no longer bare, but has a complete working directory, and I get continuous error messages on all further attempts to push to it. Everything I've read seems to suggest that what I'm doing is correct, but it simply is not working. How am I supposed to push changes to a bare remote repo and actually keep it bare?

  • Stock management of assemblies and its sub parts (relations)

    - by The Disintegrator
    I have to track the stock of individual parts and kits (assemblies) and can't find a satisfactory way of doing this. Sample bogus and hyper-simplified database:

    Table prod:
        prodID 1, prodName Flux capacitor, prodCost 900, prodPrice 1350 (900*1.5), prodStock 3
        prodID 2, prodName Mr Fusion, prodCost 300, prodPrice 600 (300*2), prodStock 2
        prodID 3, prodName Time travel kit, prodCost 1650 (1350+300), prodPrice 2145 (1650*1.3), prodStock 2

    Table rels:
        relID 1, relSrc 1 (Flux capacitor), relType 4 (is a subpart of), relDst 3 (Time travel kit)
        relID 2, relSrc 2 (Mr Fusion), relType 4 (is a subpart of), relDst 3 (Time travel kit)

    prodPrice is calculated from the cost, but not in a linear way. In this example, for costs of 500 or less the markup is 200%; for costs of 500-1000 the markup is 150%; for costs of 1000+ the markup is 130%. That's why the time travel kit is much cheaper than the individual parts.

    prodStock is my problem. I can sell kits or the individual parts, so the stock of the kits is virtual. The problem when I buy: some providers sell me the Time Travel kit as a whole (with one barcode) and some sell me the individual parts (with a different barcode), so when I load the stock I don't know how to record it. The problem when I sell: if I only sold kits, calculating the stock would be easy ("I have 3 Flux capacitors and 2 Mr Fusions, so I have 2 Time travel kits and a Flux Capacitor"), but I can sell kits or individual parts. So I have to track the stock of the individual parts and the possible kits at the same time (and I have to compensate for the sale price).

    Probably this is really simple, but I can't see a simple solution. To sum up: I have to find a way of tracking the stock, and the database/program has to do it (I can't ask the clerk to correct the stock). I'm using PHP+MySQL, but this is more a logical problem than a programming one.
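
    A minimal Python sketch of the "virtual stock" logic described above; the product names, quantities, and the kit_parts bill-of-materials mapping are illustrative assumptions, not the asker's schema:

        # Hypothetical illustration of "virtual" kit stock: a kit is sellable either
        # from pre-assembled stock or from whatever complete sets of parts are on hand.

        part_stock = {"flux_capacitor": 3, "mr_fusion": 2}       # loose parts on the shelf
        kit_stock = {"time_travel_kit": 2}                        # kits bought pre-assembled
        kit_parts = {"time_travel_kit": {"flux_capacitor": 1, "mr_fusion": 1}}  # bill of materials

        def sellable_kits(kit):
            """Pre-assembled kits plus kits that could be built from loose parts."""
            buildable = min(part_stock[p] // qty for p, qty in kit_parts[kit].items())
            return kit_stock[kit] + buildable

        def sell_kit(kit):
            """Prefer pre-assembled stock; otherwise consume loose parts."""
            if kit_stock[kit] > 0:
                kit_stock[kit] -= 1
            else:
                for p, qty in kit_parts[kit].items():
                    part_stock[p] -= qty

        print(sellable_kits("time_travel_kit"))  # 2 pre-assembled + 2 buildable = 4

    The same split (physical kit stock plus a derived "buildable" count) could equally be computed in SQL from the prod and rels tables at query time, which avoids the clerk ever correcting stock by hand.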

  • What version control system is best designed to *prevent* concurrent editing?

    - by Fred Hamilton
    We've been using CVS (with the TortoiseCVS interface) for years for both source control and wide-ranging document control (including binaries such as Word, Excel, FrameMaker, test data, simulation results, etc.). Unlike typical version control systems, 99% of the time we want to prevent concurrent editing - when a user starts editing a file, the pre-edit version of the file becomes read-only to everyone else.

    Many of the people who will be using this are not programmers or even that computer savvy, so we're also looking for a system that lets people simply add documents to the repository, check out and edit a document (unless someone else is currently editing it), and check it back in with a minimum of fuss. We've gotten this to work reasonably well with CVS + TortoiseCVS, but we're now considering Subversion and Mercurial (and are open to others if they're a better fit) for their better version tracking, so I was wondering which one supports locking files most transparently.

    For example, we'd like exclusive locking enabled as the default, and we want to make it as difficult as possible for someone to accidentally start editing a file that someone else has checked out. For example, when someone checks out a file for editing, it checks with the master database first even if they have not recently updated their sandbox. Maybe it even won't let a user check out a document if it's off the network and can't check in with the mothership.

  • Get directory path by fd

    - by tylerl
    I've run into the need to be able to refer to a directory by path given its file descriptor in Linux. The path doesn't have to be canonical, it just has to be functional so that I can pass it to other functions. So, taking the same parameters as passed to a function like fstatat(), I need to be able to call a function like getxattr() which doesn't have an f-XYZ-at() variant. So far I've come up with these solutions, though none are particularly elegant.

    The simplest solution is to avoid the problem by calling openat() and then using a function like fgetxattr(). This works, but not in every situation, so another method is needed to fill the gaps.

    The next solution involves looking up the information in proc:

        if (!access("/proc/self/fd", X_OK)) {
            sprintf(path, "/proc/self/fd/%i/", fd);
        }

    This, of course, totally breaks on systems without proc, including some chroot environments.

    The last option, a more portable but potentially race-condition-prone solution, looks like this:

        DIR* save = opendir(".");
        fchdir(fd);
        getcwd(path, PATH_MAX);
        fchdir(dirfd(save));
        closedir(save);

    The obvious problem here is that in a multithreaded app, changing the working directory around could have side effects. However, the fact that it works is compelling: if I can get the path of a directory by calling fchdir() followed by getcwd(), why shouldn't I be able to just get the information directly - fgetcwd() or something. Clearly the kernel is tracking the necessary information. So how do I get to it?

  • trackPageView on Google Analytics for iPhone Not Working

    - by DigitalZombieKid
    I'm trying to get Google Analytics working on an iPhone application without much luck. I've followed all the instructions on their website (google/apis/analytics/docs/tracking/mobileAppsTracking.html) and studied their sample application (google/gaformobileapps/GoogleAnalyticsIphone_0.7.tar.gz). When I run my application and go to Google Analytics' website (https://www.google.com/analytics/reporting/), the only page that is recorded is /app_entry_point. I see one count in my Google Analytics detailed report every time my app fires up. However, I have added other pages to be tracked, and they are not working. Here is a sample of two pages I've added to be tracked:

        trackPageview:@"/calculator";
        trackPageview:@"/tellafriend";

    I call them from various ViewControllers in the app. In each of those view controllers I import the GANTracker header:

        #import "GANTracker.h"

    I'll admit it: I'm an Objective-C newbie. Any help you can offer is greatly appreciated! Do I need to physically dispatch them to get trackPageview working? If so, why is /app_entry_point the only page that is recorded by Google Analytics?

  • Way to automate setting of MergeOptions

    - by Nix
    I am looking for an automated way to iterate over all ObjectQueries and set the merge option to NoTracking (a read-only context). Once I find out how to do it, I will be able to generate a default read-only context using a T4 template. Is this possible?

    For example, let's say I have these tables in my object context SampleContext: TableA, TableB, TableC. I would have to go through and do the below:

        SampleContext sc = new SampleContext();
        sc.TableA.MergeOption = MergeOption.NoTracking;
        sc.TableB.MergeOption = MergeOption.NoTracking;
        sc.TableC.MergeOption = MergeOption.NoTracking;

    I am trying to find a way to generalize this using the object context. I want to get it down to something like:

        foreach(var objectQuery in sc){
            objectQuery.MergeOption = MergeOption.NoTracking;
        }

    Preferably I would like to do it using the base class (ObjectContext):

        ObjectContext baseClass = sc as ObjectContext;
        var objectQueries = sc.MetadataWorkspace.GetItem("Magic Object Query Option");

    But I am not sure I can even get access to the queries. Any help would be appreciated.

  • A copy of ApplicationController has been removed from the module tree but is still active

    - by Matchu
    Whenever two concurrent HTTP requests go to my Rails app, the second always returns the following error:

        A copy of ApplicationController has been removed from the module tree but is still active!

    From there it gives an unhelpful stack trace to the effect of "we went through the standard server stuff, ran your first before_filter on ApplicationController (and I checked; it's just whichever filter runs first)", then offers the following:

        /home/matchu/rails/torch/vendor/rails/activesupport/lib/active_support/dependencies.rb:414:in `load_missing_constant'
        /home/matchu/rails/torch/vendor/rails/activesupport/lib/active_support/dependencies.rb:96:in `const_missing'

    which I'm assuming is a generic response and doesn't really say much. Google seems to tell me that people developing Rails Engines will encounter this, but I don't do that. All I've done is upgrade my Rails app from 2.2 (2.1?) to 2.3.

    What are some possible causes for this error, and how can I go about tracking down what's really going on? I know this question is vague, so would any other information be helpful? More importantly: I tried doing a test run in a "production" environment just now, and the error doesn't seem to persist. Does this only affect development, then, and need I not worry too much?

  • Deserializing XML to Objects in C#

    - by Justin Bozonier
    So I have XML that looks like this:

        <todo-list>
          <id type="integer">#{id}</id>
          <name>#{name}</name>
          <description>#{description}</description>
          <project-id type="integer">#{project_id}</project-id>
          <milestone-id type="integer">#{milestone_id}</milestone-id>
          <position type="integer">#{position}</position>
          <!-- if user can see private lists -->
          <private type="boolean">#{private}</private>
          <!-- if the account supports time tracking -->
          <tracked type="boolean">#{tracked}</tracked>
          <!-- if todo-items are included in the response -->
          <todo-items type="array">
            <todo-item> ... </todo-item>
            <todo-item> ... </todo-item>
            ...
          </todo-items>
        </todo-list>

    How would I go about using .NET's serialization library to deserialize this into C# objects? Currently I'm using reflection, and I map between the XML and my objects using the naming conventions.

  • Help me understand entity framework 4 caching for lazy loading

    - by Chris
    I am getting some unexpected behaviour with Entity Framework 4.0 and I am hoping someone can help me understand this. I am using the Northwind database for the purposes of this question. I am also using the default code generator (not POCO or self-tracking). I am expecting that any time I query the context, the framework only makes a round trip if I have not already fetched those objects. I do get this behaviour if I turn off lazy loading. Currently in my application I am briefly turning on lazy loading and then turning it back off so I can get the desired behaviour. That pretty much sucks, so please help. Here is a good code example that can demonstrate my problem:

        Public Sub ManyRoundTrips()
            context.ContextOptions.LazyLoadingEnabled = True
            Dim employees As List(Of Employee) = context.Employees.Execute(System.Data.Objects.MergeOption.AppendOnly).ToList()
            'makes unnecessary round trip to the database, I just loaded the employees'
            MessageBox.Show(context.Employees.Where(Function(x) x.EmployeeID < 10).ToList().Count)
            context.Orders.Execute(System.Data.Objects.MergeOption.AppendOnly)
            For Each emp As Employee In employees
                'makes unnecessary trip to database every time despite orders being pre-loaded.'
                Dim i As Integer = emp.Orders.Count
            Next
        End Sub

        Public Sub OneRoundTrip()
            context.ContextOptions.LazyLoadingEnabled = True
            Dim employees As List(Of Employee) = context.Employees.Include("Orders").Execute(System.Data.Objects.MergeOption.AppendOnly).ToList()
            MessageBox.Show(employees.Where(Function(x) x.EmployeeID < 10).ToList().Count)
            For Each emp As Employee In employees
                Dim i As Integer = emp.Orders.Count
            Next
        End Sub

    Why is the first block of code making unnecessary round trips?

  • Javascript: Multiple mouseout events triggered

    - by Channel72
    I'm aware of the different event models in Javascript (the W3C model versus the Microsoft model), as well as the difference between bubbling and capturing. However, after a few hours reading various articles about this issue, I'm still unsure how to properly code the following seemingly simple behavior.

    If I have an outer div and an inner div element, I want a single mouse-out event to be triggered when the mouse leaves the outer div. When the mouse crosses from the inner div to the outer div, nothing should happen, and when the mouse crosses from the outer div to the inner div nothing should happen. The event should only fire if the mouse moves from the outer div to the surrounding page.

        <div id="outer"
             style="width:20em; height:20em; border:1px solid #F00"
             align="center"
             onmouseout="alert('mouseout event!')">
          <div id="inner" style="width:18em; height:18em; border:1px solid #000"></div>
        </div>

    Now, if I place the "mouseout" event on the outer div, two mouse-out events are fired when the mouse moves from the inner div to the surrounding page, because the event fires once when the mouse moves from inner to outer, and then again when it moves from outer to the surrounding page. I know I can cancel the event using ev.stopPropagation(), so I tried registering an event handler with the inner div to cancel the event propagation. However, this won't prevent the event from firing when the mouse moves from the outer div to the inner div.

    So, unless I'm overlooking something, it seems to me this behavior can't be accomplished without complex mouse-tracking functions. In the future, I plan to reimplement a lot of this code using a more advanced framework, like jQuery, but for now, I'm wondering if there is a simple way to implement the above behavior in regular Javascript.

  • WCF service returns error 500 on /js request

    - by Cine
    I have a WCF service that randomly begins to fail when requesting the autogenerated JavaScript that WCF supports producing, and I have had no luck tracking down why. The js endpoint is part of the WCF feature set, so I don't know how it can suddenly begin to fail and remain broken until IIS is recycled. The HTTP log gives me:

        2010-06-10 09:11:49 W3SVC2095255988 myip GET /path/myservice.svc/js _=1276161113900 80 - ip browser 500 0 0

    So it's an error 500, and that is about the only thing I can figure out. The event log contains no information. Requests to /path/myservice.svc work just fine. After recycling IIS it works again, and some days later it begins to fail until I recycle IIS.

        <service name="path.myservice" behaviorConfiguration="b">
          <endpoint address="" behaviorConfiguration="eb"
                    binding="webHttpBinding"
                    contract="path.Imyservice" />
        </service>
        ...
        <endpointBehaviors>
          <behavior name="eb">
            <enableWebScript />
          </behavior>
        </endpointBehaviors>
        <serviceBehaviors>
          <behavior name="b">
            <dataContractSerializer maxItemsInObjectGraph="2147483647" />
            <serviceMetadata httpGetEnabled="true" />
            <serviceDebug includeExceptionDetailInFaults="true" />
          </behavior>
        </serviceBehaviors>

    I don't see any problems in the web.config settings either. Any clues how I can track down what the problem is?

  • continueTrackingWithTouch: withEvent: not being called continuously.

    - by Steven Noyes
    I have a very simple subclass of UIButton that will fire off a custom event when the button has been held for 2 seconds. To accomplish this, I overrode:

        //
        // Mouse tracking
        //
        - (BOOL)beginTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event
        {
            [super beginTrackingWithTouch:touch withEvent:event];
            [self setTimeButtonPressed:[NSDate date]];
            return (YES);
        }

        - (BOOL)continueTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event
        {
            [super continueTrackingWithTouch:touch withEvent:event];
            if ([[self timeButtonPressed] timeIntervalSinceNow] > 2.0)
            {
                //
                // disable the button. This will eventually fire the event
                // but this is just stubbed to act as a break point area.
                //
                [self setEnabled:NO];
                [self setSelected:NO];
            }
            return (YES);
        }

    My problem is (this is in the simulator; I can't do on-device work quite yet) that "continueTrackingWithTouch: withEvent:" is not being called unless the mouse is actually moved. It does not take much motion, but it does take some motion. By returning "YES" in both of these, I should be set up to receive continuous events. Is this an oddity of the simulator or am I doing something wrong?

    NOTE: userInteractionEnabled is set to YES.
    NOTE2: I could set up a timer in beginTrackingWithTouch: withEvent: but that seems like more effort for something that should be simple.

  • Advice on designing and building distributed application to track vehicles

    - by dario-g
    I'm working on an application for tracking vehicles. There will be about 10k or more vehicles. Each will send ~250 bytes every minute. The data contain the GPS location and everything from the CAN bus (every piece of data that we can read from the vehicle computer and dashboard). Data are sent by GSM/GPRS (using the UDP protocol). The estimated number of rows of this data per day is ~2000k.

    I see 3 main blocks here.

    1. Multithreaded Socket Server (MSS) - I have it. MSS stores received data to the queue (using NServiceBus).

    2. Rule Processor Server (RPS) - this is the core of the system. This block is responsible for parsing received data, storing it in the database, processing rules, and sending messages to the Notifier Server (which will send e-mails/SMS texts). Rule example: as I said, among the received bytes there will be information about current speed. When the speed is above 120 then: show an alert in the web application for specified users, send an e-mail, send an SMS text. (There can be more than one instance of RPS.)

    3. Web application - allows reporting and defining rules by users, monitoring alerts, etc.

    I'm looking for advice on how to design the communication between RPS and the web application. Some questions: Should the web application and RPS have separate databases, or will one central database be enough? I have one domain model in the web application; if there is one central database, can I use the same model (objects) on RPS? And how do I send changed rules to RPS? I'm trying to decouple these blocks as much as possible. I'm planning to create a different instance of the application for each client (each client will have a separate database). One client will have 10k vehicles, others only 100 vehicles.

  • Collecting high-volume video viewing data

    - by DanK
    I want to add tracking to our Flash-based media player so that we can provide analytics that show what sections of videos are being watched (at the moment, we just register a view when a video starts playing).

    For example, if a viewer watches the first 30 seconds of a video and then clicks away to something else, we want the data to reflect that. Likewise, if someone watches the first 10 seconds, then scrubs the timeline to the last minute of the video and watches that, we want to register viewing on the parts watched and not the middle section.

    My first thought was to collect up the viewing data in the player and send it all to the server at the end of a viewing session. Unfortunately, Flash does not seem to have an event that you can hook into when a viewer clicks away from the page the movie is on (probably a good thing - it would be open to abuse). So, it looks like we're going to have to make regular requests to the server as the video is playing. This is obviously going to lead to a high volume of requests when there are large numbers of simultaneous viewers. The simple approach of dumping all these 'heartbeat' events from clients to a database feels like it will quickly become unmanageable, so I'm wondering whether I should be taking an approach where viewing sessions are cached in memory and flushed to the database when they become inactive (based on a timeout). That way, the data could be stored as time spans rather than individual heartbeats.

    So, to the question - what is the best way to approach dealing with this kind of high-volume viewing data? Are there any good existing architectures/patterns? Thanks, Dan.
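
    A minimal Python sketch of the "time spans rather than individual heartbeats" idea described above; the heartbeat interval, function names, and sample data are illustrative assumptions, not an existing design:

        # Hypothetical sketch: collapse per-client heartbeats (each reporting the current
        # playhead position) into watched time spans before they hit the database.

        HEARTBEAT_SECS = 5  # assumed reporting interval from the player

        def spans_from_heartbeats(positions, gap=HEARTBEAT_SECS * 2):
            """Merge sorted playhead positions into (start, end) spans.

            A new span starts whenever consecutive positions are further apart than
            `gap`, which is what happens when the viewer scrubs the timeline.
            """
            spans = []
            for pos in sorted(positions):
                if spans and pos - spans[-1][1] <= gap:
                    spans[-1][1] = pos          # extend the current span
                else:
                    spans.append([pos, pos])    # start a new span
            return [(start, end) for start, end in spans]

        # Viewer watched 0-30s, then scrubbed to 180s and watched to the end (240s).
        heartbeats = list(range(0, 31, 5)) + list(range(180, 241, 5))
        print(spans_from_heartbeats(heartbeats))  # [(0, 30), (180, 240)]

    The same merge could run in the in-memory session cache mentioned in the question, so only the final span list is written out when a session times out.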

  • Libtool versioning of a library that depends on other libraries.

    - by Artyom
    Hello,

    I have a framework that uses Boost and CgiCC in the core application and in its interface. How should I version the library binary interface (a.k.a. libtool -version-info)? I have no problem tracking the changes in the library itself when I make various changes, as it is clear to me how I should version those. But...

    Both the Boost and CgiCC libraries do not provide any backward-compatible API/ABI, and my library may be linked with quite arbitrary versions of Boost and CgiCC, so I can't make any promises about the interfaces. So I can't really specify -version-info, because even the same library compiled against different versions of Boost and CgiCC would not be compatible.

    So... What should I do? How should I version the library? I know that I should not depend on the Boost and CgiCC interfaces in the first place, but this is what I have so far for the existing stable version. This issue is addressed in the next major release, but I still have and want to maintain the current release, as it is very valuable.

  • Codeplex/Sourceforge for internal use

    - by Josh
    I'm looking for a free/open source collaborative project manager that can be deployed internally in my workplace and would act similarly to Codeplex or Sourceforge. Does anyone know of something like this, and if so, do you have experience with it?

    Requirements:
        - Open Source or Free
        - Locally Deployable
        - Has the same types of features found in Sourceforge / Codeplex
        - Issue/Feature Tracking
        - Community Interaction (i.e. Voting, Roles, etc.)
        - SCM Integration (Optional)
        - .NET/Windows Friendly (Optional)

    Every business ends up having internal utilities and domain-specific apps that developers create to make life easier. Given the input of the internal developer community they have the potential to become much better (can you say GMail...), and I would simply like to foster such an environment internally by providing an easy place for that interaction to take place.

    UPDATE: So I like what I am seeing in both Trac and GForge, but both are heavily geared towards UNIX/Subversion environments. I should have specified this, but we are an MS shop from top to bottom. How practical do you think it is going to be to try and use these in an MS .NET environment? Would that be like trying to shove a square peg through a round hole?

  • Vim, LaTeX, word-wrapping, and version control

    - by Bkkbrad
    I'm writing a LaTeX document in vim, and I have it hard-wrapping at 80 characters to make reading easier. However, this causes problems with tracking changes within version control. For example, inserting "Lorem ipsum" at the beginning of this text:

        1 Dolor sit amet, consectetur adipiscing elit. Phasellus bibendum lobortis lectus
        2 quis porta. Aenean vestibulum magna vel purus laoreet at molestie massa
        3 suscipit. Vestibulum vestibulum, mauris nec convallis ultrices, tellus sapien
        4 ullamcorper elit, dignissim consectetur justo tellus et nunc.

    results in:

        1 Lorum ipsum dolor sit amet, consectetur adipiscing elit. Phasellus bibendum
        2 lobortis lectus quis porta. Aenean vestibulum magna vel purus laoreet at
        3 molestie massa suscipit. Vestibulum vestibulum, mauris nec convallis ultrices,
        4 tellus sapien ullamcorper elit, dignissim consectetur justo tellus et nunc.

    When I review this change in git, it tells me that all the lines of the paragraph have changed because of the wrapping, even though only one semantic change has occurred.

    One way around this problem is to have every sentence on its own line. This looks the same in the rendered document, but the source is now harder to read, because each line has quite a different length:

        1 Lorum ipsum dolor sit amet, consectetur adipiscing elit.
        2 Phasellus bibendum lobortis lectus quis porta.
        3 Aenean vestibulum magna vel purus laoreet at molestie massa suscipit.
        4 Vestibulum vestibulum, mauris nec convallis ultrices, tellus sapien ullamcorper elit, dignissim consectetur justo tellus et nunc.

    (If I soft wrap at 80, things still look bad, just in a different way.)

    Is it possible to have my text on disk with one newline per sentence, but display and edit it in vim as if the text of each paragraph were one long line, soft-wrapped at 80 characters? I assume it requires some vim-foo rather than tweaking git or LaTeX.

  • Divide a path into N sections using Java or PostgreSQL/PostGIS

    - by Guido
    Imagine a GPS tracking system that is following the position of several objects. The points are stored in a database (PostgreSQL + PostGIS). Each path is composed of a different number of points. That is the reason why, in order to compare a pair of paths, I need to divide every path into a set of 100 points.

    Do you know any PostGIS function that already implements this algorithm? I've not been able to find one. If not, I'd like to solve it using Java. In this case I'd like to know an efficient and easy-to-implement algorithm to divide a path into N points. The simplest example would be to divide this path into three points:

        position 1 : x=1, y=2
        position 2 : x=1, y=3

    And the result should be:

        position 1 : x=1, y=2 (starting point)
        position 2 : x=5, y=2.5
        position 3 : x=9, y=3 (end point)

    Edit: By 'compare a pair of paths' I mean to calculate the distance between two paths. I plan to divide each path into 100 points, and sum the Euclidean distance between each one of these points as the distance between the two paths.
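
    A minimal Python sketch of one way to do the resampling described above (straight-line interpolation along the cumulative length of the path); it assumes the example path runs from (1, 2) to (9, 3), which matches the expected output rather than the second input position as printed. For what it's worth, PostGIS also ships a point-at-fraction function (ST_Line_Interpolate_Point) that can be combined with generate_series to the same effect.

        import math

        def resample(points, n):
            """Divide a polyline into n evenly spaced points (endpoints included).

            `points` is a list of (x, y) tuples; straight-line interpolation between
            consecutive points is assumed.
            """
            # cumulative distance along the path at each original vertex
            dists = [0.0]
            for (x0, y0), (x1, y1) in zip(points, points[1:]):
                dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
            total = dists[-1]

            out = []
            seg = 0
            for i in range(n):
                target = total * i / (n - 1)
                # advance to the segment containing the target distance
                while seg < len(points) - 2 and dists[seg + 1] < target:
                    seg += 1
                seg_len = dists[seg + 1] - dists[seg]
                t = 0.0 if seg_len == 0 else (target - dists[seg]) / seg_len
                (x0, y0), (x1, y1) = points[seg], points[seg + 1]
                out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            return out

        print(resample([(1, 2), (9, 3)], 3))  # [(1.0, 2.0), (5.0, 2.5), (9.0, 3.0)]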

  • Can parser combinators be made efficient?

    - by Jon Harrop
    Around 6 years ago, I benchmarked my own parser combinators in OCaml and found that they were ~5× slower than the parser generators on offer at the time. I recently revisited this subject and benchmarked Haskell's Parsec vs a simple hand-rolled precedence-climbing parser written in F#, and was surprised to find the F# to be 25× faster than the Haskell. Here's the Haskell code I used to read a large mathematical expression from file, parse and evaluate it:

        import Control.Applicative
        import Text.Parsec hiding ((<|>))

        expr = chainl1 term ((+) <$ char '+' <|> (-) <$ char '-')
        term = chainl1 fact ((*) <$ char '*' <|> div <$ char '/')
        fact = read <$> many1 digit <|> char '(' *> expr <* char ')'

        eval :: String -> Int
        eval = either (error . show) id . parse expr "" . filter (/= ' ')

        main :: IO ()
        main = do
          file <- readFile "expr"
          putStr $ show $ eval file
          putStr "\n"

    and here's my self-contained precedence-climbing parser in F#:

        let rec (|Expr|) (P(f, xs)) = Expr(loop (' ', f, xs))
        and loop = function
          | ' ' as oop, f, ('+' | '-' as op)::P(g, xs)
          | (' ' | '+' | '-' as oop), f, ('*' | '/' as op)::P(g, xs) ->
              let h, xs = loop (op, g, xs)
              let op = match op with
                       | '+' -> (+)
                       | '-' -> (-)
                       | '*' -> (*)
                       | '/' -> (/)
              loop (oop, op f h, xs)
          | _, f, xs -> f, xs
        and (|P|) = function
          | '('::Expr(f, ')'::xs) -> P(f, xs)
          | c::xs when '0' <= c && c <= '9' -> P(int(string c), xs)

    My impression is that even state-of-the-art parser combinators waste a lot of time backtracking. Is that correct? If so, is it possible to write parser combinators that generate state machines to obtain competitive performance, or is it necessary to use code generation?
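
    For readers who don't read F# active patterns, here is roughly what "precedence climbing" means for this grammar - a minimal Python sketch of the same idea (integers, + - * /, parentheses, integer division); it is illustrative only and not the benchmarked code:

        # Hypothetical sketch of precedence climbing: evaluate left to right,
        # recursing only when an operator of higher precedence must bind tighter.

        PREC = {'+': 1, '-': 1, '*': 2, '/': 2}
        OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
               '*': lambda a, b: a * b, '/': lambda a, b: a // b}

        def evaluate(s):
            s = s.replace(' ', '')
            pos = 0

            def primary():
                nonlocal pos
                if s[pos] == '(':
                    pos += 1                      # consume '('
                    v = expr(1)
                    pos += 1                      # consume ')'
                    return v
                start = pos
                while pos < len(s) and s[pos].isdigit():
                    pos += 1
                return int(s[start:pos])

            def expr(min_prec):
                nonlocal pos
                lhs = primary()
                while pos < len(s) and s[pos] in PREC and PREC[s[pos]] >= min_prec:
                    op = s[pos]
                    pos += 1
                    rhs = expr(PREC[op] + 1)      # climb: the right side takes tighter ops
                    lhs = OPS[op](lhs, rhs)
                return lhs

            return expr(1)

        print(evaluate("1+2*(3+4)-6/2"))  # 12

    The point of the technique is that it never backtracks: each character is inspected once, which is one plausible reason the hand-rolled F# parser outpaces a combinator library.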

  • How can I make this method more Scalalicious

    - by Neil Chambers
    I have a function that calculates the left and right node values for some collection of treeNodes given a simple node.id, node.parentId association. It's very simple and works well enough... but, well, I am wondering if there is a more idiomatic approach. Specifically, is there a way to track the left/right values without using some externally tracked value but still keep the tasty recursion?

        /*
         * A tree node
         */
        case class TreeNode(val id: String, val parentId: String) {
          var left: Int = 0
          var right: Int = 0
        }

        /*
         * a method to compute the left/right node values
         */
        def walktree(node: TreeNode) = {
          /*
           * increment state for the inner function
           */
          var c = 0

          /*
           * A method to set the increment state
           */
          def increment = { c += 1; c } // poo

          /*
           * the tasty inner method
           * treeNodes is a List[TreeNode]
           */
          def walk(node: TreeNode): Unit = {
            node.left = increment
            /*
             * recurse on all direct descendants
             */
            treeNodes filter (_.parentId == node.id) foreach (walk(_))
            node.right = increment
          }
          walk(node)
        }

        walktree(someRootNode)

    Edit - The list of nodes is taken from a database. Pulling the nodes into a proper tree would take too much time. I am pulling a flat list into memory and all I have is an association via node ids as pertains to parents and children. Adding left/right node values allows me to get a snapshot of all children (and children's children) with a single SQL query. The calculation needs to run very quickly in order to maintain data integrity should parent-child associations change (which they do very frequently). In addition to using the awesome Scala collections I've also boosted speed by using parallel processing for some pre/post filtering on the tree nodes.

    I wanted to find a more idiomatic way of tracking the left/right node values. After looking at the answers listed I have settled on this synthesised version:

        def walktree(node: TreeNode) = {
          def walk(node: TreeNode, counter: Int): Int = {
            node.left = counter
            node.right = treeNodes
              .filter(_.parentId == node.id)
              .foldLeft(counter + 1) { (counter, curnode) =>
                walk(curnode, counter) + 1
              }
            node.right
          }
          walk(node, 1)
        }

  • Is there a Post-Build Extensible Installer System

    - by Will Hughes
    We have a product that we need to create an installer for. It has a number of components which can be installed or not as the situation demands. When we ship our installation package, we want to be able to have it include any number of additional components to be installed.

    For example, Foo Manager Pro contains:
        Foo Manager Console
        Foo Manager Database
        Foo Manager Services

    That might be shipped as something like:
        FooManagerInstaller.exe
        FMPConsole.pkg
        FMPDatabase.pkg
        FMPServices.pkg

    A package might consist of something like:
        Manifest
        Files to be deployed
        Additional scripts to be executed (e.g. find file foo.config, do some XML manipulation)

    If a client wants to add custom skins and a series of plugins as part of the install, they create their own packages:
        FMPConsoleSkins.pkg
        ClientWebservices.pkg

    If that client then ships it to someone else who wants to add more customisation, they can do so in the same way.

    We can build this from scratch - but wanted to check if this sort of install system already exists. We already have a set of NAnt scripts which do something not too far from this, but they're difficult to maintain and quite complex. They don't offer any of the 'niceties' that we'd expect from an installer (like tracking deployed files and removing them if the install fails). We've been looking a little bit at NSIS and building MSIs using WiX, but it's not clear that these can offer us the capability for downstream consumers to provide additional packages without inventing our own installer language.

  • Finding Common Phrases in MS SQL TEXT Column

    - by regex
    Hello All,

    Short Desc: I'm curious to see if I can use SQL Server Analysis Services or some other MS SQL service to mine some data for me that will show commonalities between SQL TEXT fields in a dataset.

    Long Desc: I am looking at a subset of data that consists of about 10,000 rows of TEXT blobs which are used as a notes column in issue-tracking (ticketing) software. I would like to use something out of the box (without having to build something) that might be able to parse through all of the rows and find commonly used byte sequences in the "Notes" column. In other words, I want to find commonly used phrases (two- to three-word phrases, so 9 - 20 character sections of the TEXT blob). This will help me better determine whether associates' notes contain similar phrases (troubleshooting techniques) that we could standardize in our troubleshooting process flow.

    Closing Note: I'd really rather not build an application to do this, as my method would probably not be the most efficient way to do it. Hopefully all this makes sense. Please let me know in the comments if anything needs clarification. Thanks in advance for your help.
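
    As an illustration of the kind of phrase mining described above (done outside Analysis Services), a minimal Python sketch that counts two- and three-word phrases across a set of notes; the sample notes, regex, and thresholds are made up for illustration:

        from collections import Counter
        import re

        def common_phrases(notes, sizes=(2, 3), min_count=2):
            """Count word n-grams (2- and 3-word phrases by default) across all notes."""
            counts = Counter()
            for note in notes:
                words = re.findall(r"[a-z']+", note.lower())
                for n in sizes:
                    for i in range(len(words) - n + 1):
                        counts[" ".join(words[i:i + n])] += 1
            return [(phrase, c) for phrase, c in counts.most_common() if c >= min_count]

        notes = [
            "Rebooted the router and asked customer to clear browser cache",
            "Asked customer to clear browser cache, issue resolved",
            "Cleared browser cache remotely",
        ]
        for phrase, count in common_phrases(notes)[:5]:
            print(count, phrase)

    The same idea (shred the notes into n-grams, then group and count) can be expressed in T-SQL with a string-splitting step, but the principle of counting repeated short phrases is the same.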

  • ConcurrentLinkedQueue$Node remains in heap after remove()

    - by action8
    I have a multithreaded app writing and reading a ConcurrentLinkedQueue, which is conceptually used to back entries in a list/table. I originally used a ConcurrentHashMap for this, which worked well. A new requirement required tracking the order entries came in, so they could be removed in oldest-first order, depending on some conditions. ConcurrentLinkedQueue appeared to be a good choice, and functionally it works well.

    A configurable amount of entries are held in memory, and when a new entry is offered once the limit is reached, the queue is searched in oldest-first order for one that can be removed. Certain entries are not to be removed by the system and wait for client interaction.

    What appears to be happening is that I have an entry at the front of the queue that occurred, say, 100K entries ago. The queue appears to have the limited number of configured entries (size() == 100), but when profiling, I found that there were ~100K ConcurrentLinkedQueue$Node objects in memory. This appears to be by design; just glancing at the source for ConcurrentLinkedQueue, a remove merely removes the reference to the object being stored but leaves the linked list in place for iteration.

    Finally, my question: Is there a "better" lazy way to handle a collection of this nature? I love the speed of the ConcurrentLinkedQueue, I just can't afford the unbounded leak that appears to be possible in this case. If not, it seems like I'd have to create a second structure to track order and may have the same issues, plus a synchronization concern.

  • fastest SCM tool available for Embedded software development

    - by wrapperm
    Hi All,

    In my company, we are presently using Rational ClearCase as the software configuration management tool for our embedded software development. The software is basically for automobiles, to be specific for engines (I don't think this information really matters). But I find ClearCase to be very slow in performing any of the activities (accessing files, branching and labelling), in addition to which there are various other limitations.

    We have recently decided to research some free and open-source, distributed version control systems that could handle our large projects with speed and efficiency. This tool should be a full-fledged repository with complete history and full revision tracking capabilities, not dependent on network access or a central server. Branching and merging should be fast and easy to do. It should have a multisite development facility.

    With these above-mentioned requirements, we have come up with some of the tools that are presently available in the market: Git, Mercurial, Bazaar, Subversion, CVS, Perforce, and Visual SourceSafe. I need everybody's help in finding an appropriate SCM tool which meets the above-mentioned requirements.

    Thanking you in advance,
    Rahamath.
