Search Results

Search found 4369 results on 175 pages for 'merge tracking'.


  • Way to automate setting of MergeOptions

    - by Nix
    I am looking for an automated way to iterate over all ObjectQueries in a context and set the merge option to NoTracking (a read-only context). Once I know how to do that, I can generate a default read-only context using a T4 template. Is this possible? For example, say my object context SampleContext contains the tables TableA, TableB and TableC. I currently have to go through and do the following:

        SampleContext sc = new SampleContext();
        sc.TableA.MergeOption = MergeOption.NoTracking;
        sc.TableB.MergeOption = MergeOption.NoTracking;
        sc.TableC.MergeOption = MergeOption.NoTracking;

    I am trying to generalize this, to get it down to something like:

        foreach (var objectQuery in sc) {
            objectQuery.MergeOption = MergeOption.NoTracking;
        }

    Preferably I would do it through the base class, ObjectContext:

        ObjectContext baseClass = sc as ObjectContext;
        var objectQueries = baseClass.MetadataWorkspace.GetItem("Magic Object Query Option");

    But I am not sure I can even get access to the queries. Any help would be appreciated.
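
    For illustration, a reflection-based sketch of the runtime half (an assumption, not a confirmed API: it relies on the default EF4 code generator exposing each entity set as a public ObjectQuery<T> property, and SetMergeOption is an invented helper name):

        // Sketch: reflect over the context's public properties and set the
        // MergeOption on every ObjectQuery it exposes.
        using System;
        using System.Data.Objects;
        using System.Reflection;

        public static class ContextExtensions
        {
            public static void SetMergeOption(ObjectContext context, MergeOption option)
            {
                foreach (PropertyInfo prop in context.GetType().GetProperties())
                {
                    // ObjectQuery is the non-generic base of ObjectQuery<T>.
                    if (typeof(ObjectQuery).IsAssignableFrom(prop.PropertyType))
                    {
                        var query = (ObjectQuery)prop.GetValue(context, null);
                        query.MergeOption = option;
                    }
                }
            }
        }

    Usage would then be a single call: ContextExtensions.SetMergeOption(sc, MergeOption.NoTracking);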


  • Get directory path by fd

    - by tylerl
    I've run into the need to be able to refer to a directory by path, given its file descriptor, in Linux. The path doesn't have to be canonical, it just has to be functional so that I can pass it to other functions. So, taking the same parameters as are passed to a function like fstatat(), I need to be able to call a function like getxattr() which doesn't have an f-XYZ-at() variant. So far I've come up with these solutions, though none are particularly elegant.

    The simplest solution is to avoid the problem by calling openat() and then using a function like fgetxattr(). This works, but not in every situation, so another method is needed to fill the gaps. The next solution involves looking up the information in proc:

        if (!access("/proc/self/fd", X_OK)) {
            sprintf(path, "/proc/self/fd/%i/", fd);
        }

    This, of course, totally breaks on systems without proc, including some chroot environments. The last option, a more portable but potentially race-condition-prone solution, looks like this:

        DIR* save = opendir(".");
        fchdir(fd);
        getcwd(path, PATH_MAX);
        fchdir(dirfd(save));
        closedir(save);

    The obvious problem here is that in a multithreaded app, changing the working directory around could have side effects. However, the fact that it works is compelling: if I can get the path of a directory by calling fchdir() followed by getcwd(), why shouldn't I be able to just get the information directly: fgetcwd() or something. Clearly the kernel is tracking the necessary information. So how do I get to it?
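
    A minimal C sketch combining the two fallbacks into a single helper (fd_to_path is an invented name, and the fchdir() branch keeps the thread-safety caveat described above):

        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        /* Sketch: resolve a directory fd to a usable path. Tries the /proc
         * symlink first, then falls back to the fchdir()/getcwd() trick,
         * which is not safe in multithreaded programs. Returns 0 on success. */
        static int fd_to_path(int fd, char *path, size_t len)
        {
            int save;

            if (access("/proc/self/fd", X_OK) == 0) {
                snprintf(path, len, "/proc/self/fd/%i/", fd);
                return 0;
            }
            save = open(".", O_RDONLY);      /* remember where we were */
            if (save < 0 || fchdir(fd) != 0)
                return -1;
            if (getcwd(path, len) == NULL) {
                fchdir(save);
                close(save);
                return -1;
            }
            fchdir(save);                    /* restore the original cwd */
            close(save);
            return 0;
        }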


  • A copy of ApplicationController has been removed from the module tree but is still active

    - by Matchu
    Whenever two concurrent HTTP requests go to my Rails app, the second always returns the following error:

        A copy of ApplicationController has been removed from the module tree but is still active!

    From there it gives an unhelpful stack trace to the effect of "we went through the standard server stuff, ran your first before_filter on ApplicationController" (and I checked; it's just whichever filter runs first), then offers the following:

        /home/matchu/rails/torch/vendor/rails/activesupport/lib/active_support/dependencies.rb:414:in `load_missing_constant'
        /home/matchu/rails/torch/vendor/rails/activesupport/lib/active_support/dependencies.rb:96:in `const_missing'

    which I'm assuming is a generic response that doesn't really say much. Google seems to tell me that people developing Rails Engines will encounter this, but I don't do that. All I've done is upgrade my Rails app from 2.2 (2.1?) to 2.3. What are some possible causes for this error, and how can I go about tracking down what's really going on? I know this question is vague, so would any other information be helpful?

    More importantly: I tried doing a test run in a "production" environment just now, and the error doesn't seem to persist. Does this only affect development, then, and need I not worry too much?
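
    The production/development difference points at class reloading; a quick way to test that hypothesis (an assumption, not a confirmed fix) is to enable class caching in development and see whether the error disappears:

        # config/environments/development.rb
        # Rails 2.3 reloads classes on every request in development
        # (cache_classes = false), which is exactly the machinery that the
        # unloader in active_support/dependencies.rb drives. Turning caching
        # on makes development behave like production for this mechanism.
        config.cache_classes = true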


  • How to create an XML document from a .NET object?

    - by JL
    I have the following code that accepts a file name:

        var xtr = new XmlTextReader(xmlFileName) { WhitespaceHandling = WhitespaceHandling.None };
        var xd = new XmlDocument();
        xd.Load(xtr);

    I would like to change it so that I can pass in an object. I don't want to have to serialize the object to a file first. Is this possible?

    Update: My original intention was to take an XML document, merge in some XSLT (stored in a file), then output and return HTML, like this:

        public string TransformXml(string xmlFileName, string xslFileName)
        {
            var xtr = new XmlTextReader(xmlFileName) { WhitespaceHandling = WhitespaceHandling.None };
            var xd = new XmlDocument();
            xd.Load(xtr);
            var xslt = new System.Xml.Xsl.XslCompiledTransform();
            xslt.Load(xslFileName);
            var stm = new MemoryStream();
            xslt.Transform(xd, null, stm);
            stm.Position = 0;
            var sr = new StreamReader(stm);
            xtr.Close();
            return sr.ReadToEnd();
        }

    In the above code I am reading the XML from a file. Now I would like to work with the object directly, before it is serialized to the file. Let me illustrate the problem in code:

        public string TransformXMLFromObject(myObjType myobj, string xsltFileName)
        {
            // Notice the XSLT stays the same. It's in these next few lines that
            // I can't figure out how to load the XML document (xd) from an
            // object rather than from a file...
            var xtr = new XmlTextReader(xmlFileName) { WhitespaceHandling = WhitespaceHandling.None };
            var xd = new XmlDocument();
            xd.Load(xtr);
        }
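
    For what it's worth, a sketch of one way to get an XmlDocument from an object without touching disk, using XmlSerializer over an in-memory stream (ToXmlDocument is an invented helper name):

        using System.IO;
        using System.Xml;
        using System.Xml.Serialization;

        public static class XmlHelper
        {
            public static XmlDocument ToXmlDocument(object obj)
            {
                var serializer = new XmlSerializer(obj.GetType());
                using (var stream = new MemoryStream())
                {
                    serializer.Serialize(stream, obj); // write XML into memory
                    stream.Position = 0;               // rewind before reading back
                    var doc = new XmlDocument();
                    doc.Load(stream);
                    return doc;
                }
            }
        }

    TransformXMLFromObject could then call XmlHelper.ToXmlDocument(myobj) and hand the result to XslCompiledTransform.Transform exactly as in the file-based version.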


  • Basic Subversion questions

    - by Epitaph
    I've just started using Subversion, and have read the official documentation (the svn book), a cheat sheet and a couple of guides. I know how to install Subversion (on Linux), create a repository (svnadmin create), import my Eclipse project into the repository (svn import), and view the repository files (svn list). But I am unable to understand some of the other terminology. For example, after importing my Eclipse project into the newly created repository, I have made changes to my Eclipse project (more than one file). Now, how should I update the repository with these added files and changes? The svn update command brings changes from the repository into your working copy - which is the opposite of what I want, i.e. bringing the changes I made in my Eclipse project into the previously imported project in the repository. If I am correct, you send changes to the repository more often (as you keep extending your project) than you bring them in with update. Also, I do not understand when you would use svn merge. The svn book states it applies the differences between two sources to a working copy. Is there a scenario which would explain this? Finally, can I have more than one project checked into the repository? Or is it better to create a new repository for each project?
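
    For reference, a minimal sketch of the usual round trip (the command the first question is circling is svn commit; the branch path in the merge example is illustrative):

        # send your local edits to the repository
        svn commit -m "Describe the change"

        # bring others' changes into your working copy
        svn update

        # apply the differences between two sources to your working copy,
        # e.g. pulling a feature branch's changes back into trunk
        svn merge ^/branches/feature-x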


  • Merging: hg/git vs. svn

    - by stmax
    I often read that hg (and git and...) are better at merging than svn, but I have never seen practical examples of where hg/git can merge something where svn fails (or where svn needs manual intervention). Could you post a few step-by-step lists of branch/modify/commit/...-operations that show where svn would fail while hg/git happily moves on? Practical, not highly exceptional cases, please... Some background: we have a few dozen developers working on projects using svn, with each project (or group of similar projects) in its own repo. We know how to apply release and feature branches, so we don't run into problems very often (i.e. we've been there, but we've learned to overcome Joel's problems of "one programmer causing trauma to the whole team" or "needing six developers for two weeks to reintegrate a branch"). We have release branches that are very stable and only used to apply bugfixes. We have trunks that should be stable enough to be able to create a release within one week. And we have feature branches that single developers or groups of developers can work on. Yes, they are deleted after reintegration so they don't clutter up the repository. ;) So I'm still trying to find the advantages of hg/git over svn. I'd love to get some hands-on experience, but there aren't any bigger projects we could move to hg/git yet, so I'm stuck with playing with small artificial projects that only contain a few made-up files. And I'm looking for a few cases where you can feel the impressive power of hg/git, since so far I have often read about them but failed to find them myself.
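
    One commonly cited scenario, sketched as shell steps (file names illustrative): rename a file on a branch while editing it on the mainline. Subversion records the rename as delete-plus-add and typically reports a tree conflict on merge, while git compares whole-tree snapshots, detects the rename, and carries the edit over:

        # git: edit on master, rename on a branch, then merge
        git init demo && cd demo
        echo "line" > util.c
        git add util.c && git commit -m "initial"
        git checkout -b rename-branch
        git mv util.c helpers.c && git commit -m "rename util.c to helpers.c"
        git checkout master
        echo "fix" >> util.c && git commit -am "edit util.c"
        git merge rename-branch   # the edit lands in helpers.c

        # svn: the same sequence done with 'svn move' on a branch usually
        # ends in a tree conflict when merged back, requiring manual
        # resolution (behaviour as of the svn 1.6 era; verify on your version).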


  • Algorithm to find the percentage of how much two texts are identical

    - by qster
    What algorithm would you suggest to identify how much two texts are identical, from 0 to 1 (as a float)? Note that I don't mean similar (i.e. they say the same thing but in a different way), I mean the exact same words - but one of the two texts could have extra words, words slightly different, extra new lines and stuff like that. A good example of the algorithm I want is the one Google uses to identify duplicate content in websites ("X search results very similar to the ones shown have been omitted, click here to see them"). The reason I need it is that my website lets users post comments; similar but different pages currently have their own comments, so many users ended up copy-pasting their comments onto all the similar pages. Now I want to merge them (all similar pages will "share" the comments, and if you post on page A it will appear on the similar page B), and I would like to programmatically erase all those copy-pasted comments from the same user. I have quite a few million comments, but speed shouldn't be an issue since this is a one-time thing that will run in the background. The programming language doesn't really matter (as long as it can interface with a MySQL database), but I was thinking of doing it in C++.
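
    A sketch of one standard approach in C++, using word-level shingles and Jaccard similarity (the shingle size 3 is an arbitrary choice; near-duplicate detectors of the kind described are usually variations on this idea):

        #include <algorithm>
        #include <iostream>
        #include <iterator>
        #include <set>
        #include <sstream>
        #include <string>
        #include <vector>

        // Split text into overlapping runs of k consecutive words ("shingles").
        std::set<std::string> shingles(const std::string& text, size_t k = 3) {
            std::istringstream in(text);
            std::vector<std::string> words;
            for (std::string w; in >> w; ) words.push_back(w);
            std::set<std::string> out;
            if (words.size() < k) { out.insert(text); return out; }
            for (size_t i = 0; i + k <= words.size(); ++i) {
                std::string s = words[i];
                for (size_t j = 1; j < k; ++j) s += " " + words[i + j];
                out.insert(s);
            }
            return out;
        }

        // Jaccard similarity: |A intersect B| / |A union B|, in [0, 1].
        double similarity(const std::string& a, const std::string& b) {
            std::set<std::string> sa = shingles(a), sb = shingles(b);
            std::vector<std::string> common;
            std::set_intersection(sa.begin(), sa.end(), sb.begin(), sb.end(),
                                  std::back_inserter(common));
            size_t uni = sa.size() + sb.size() - common.size();
            return uni == 0 ? 1.0 : static_cast<double>(common.size()) / uni;
        }

        int main() {
            // One changed word out of nine: prints a value between 0 and 1.
            std::cout << similarity("the quick brown fox jumps over the lazy dog",
                                    "the quick brown fox leaps over the lazy dog")
                      << "\n";
        }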


  • Deserializing XML to Objects in C#

    - by Justin Bozonier
    So I have XML that looks like this:

        <todo-list>
          <id type="integer">#{id}</id>
          <name>#{name}</name>
          <description>#{description}</description>
          <project-id type="integer">#{project_id}</project-id>
          <milestone-id type="integer">#{milestone_id}</milestone-id>
          <position type="integer">#{position}</position>
          <!-- if user can see private lists -->
          <private type="boolean">#{private}</private>
          <!-- if the account supports time tracking -->
          <tracked type="boolean">#{tracked}</tracked>
          <!-- if todo-items are included in the response -->
          <todo-items type="array">
            <todo-item> ... </todo-item>
            <todo-item> ... </todo-item>
            ...
          </todo-items>
        </todo-list>

    How would I go about using .NET's serialization library to deserialize this into C# objects? Currently I'm using reflection, and I map between the XML and my objects using naming conventions.
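
    A sketch using System.Xml.Serialization attributes to map the hyphenated element names onto C# properties (the class and property names are invented for illustration; TodoItem's fields are elided in the sample XML, so they are elided here too):

        using System.IO;
        using System.Xml.Serialization;

        [XmlRoot("todo-list")]
        public class TodoList
        {
            [XmlElement("id")] public int Id { get; set; }
            [XmlElement("name")] public string Name { get; set; }
            [XmlElement("description")] public string Description { get; set; }
            [XmlElement("project-id")] public int ProjectId { get; set; }
            [XmlElement("milestone-id")] public int MilestoneId { get; set; }
            [XmlElement("position")] public int Position { get; set; }
            [XmlElement("private")] public bool Private { get; set; }
            [XmlElement("tracked")] public bool Tracked { get; set; }

            // Maps the <todo-items> wrapper and its repeated <todo-item> children.
            [XmlArray("todo-items"), XmlArrayItem("todo-item")]
            public TodoItem[] Items { get; set; }
        }

        public class TodoItem { /* fields elided in the sample XML */ }

        public static class TodoListReader
        {
            public static TodoList Parse(string xml)
            {
                var serializer = new XmlSerializer(typeof(TodoList));
                using (var reader = new StringReader(xml))
                    return (TodoList)serializer.Deserialize(reader);
            }
        }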


  • Adding trend lines/boxplots (by group) in ggplot2

    - by Tal Galili
    Hi all, I have 40 subjects, from two groups, over 15 weeks, with some measured variable (Y). I wish to have a plot where x = time, y = Y, lines are by subject and colours by group. I found it can be done like this:

        TIME <- paste("week", 5:20)
        ID <- 1:40
        GROUP <- sample(c("a", "b"), length(ID), replace = T)
        group.id <- data.frame(GROUP, ID)
        a <- expand.grid(TIME, ID)
        colnames(a) <- c("TIME", "ID")
        group.id.time <- merge(a, group.id)
        Y <- rnorm(dim(group.id.time)[1], mean = ifelse(group.id.time$GROUP == "a", 1, 3))
        DATA <- cbind(group.id.time, Y)
        qplot(data = DATA, x = TIME, y = Y, group = ID, geom = c("line"), colour = GROUP)

    But now I wish to add something to the plot to show the difference between the two groups (for example, a trend line for each group, with some CI shading) - how can it be done? I remember once seeing that ggplot2 can (easily) do this with geom_smooth, but I am missing something about how to make it work. Also, I wondered about having the lines act like a boxplot for each group (with lines for the different quantiles, fences and so on). But I imagine answering the first question would help me resolve the second. Thanks.
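
    A sketch of the geom_smooth route, assuming the DATA frame above (the key is grouping the smoother by GROUP rather than by ID, and TIME needs to be numeric for the smoother to make sense):

        library(ggplot2)

        # Use a numeric week so the smoother has a continuous x axis.
        DATA$WEEK <- as.numeric(sub("week ", "", DATA$TIME))

        ggplot(DATA, aes(x = WEEK, y = Y, colour = GROUP)) +
          geom_line(aes(group = ID), alpha = 0.3) +          # one line per subject
          geom_smooth(aes(group = GROUP), method = "loess",  # one trend per group
                      se = TRUE)                             # shaded CI band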


  • Help me understand entity framework 4 caching for lazy loading

    - by Chris
    I am getting some unexpected behaviour with Entity Framework 4.0, and I am hoping someone can help me understand it. I am using the Northwind database for the purposes of this question. I am also using the default code generator (not POCO or self-tracking). I expect that any time I query the context, the framework will only make a round trip if I have not already fetched those objects. I do get this behaviour if I turn off lazy loading. Currently in my application I briefly turn on lazy loading and then turn it back off so I can get the desired behaviour. That pretty much sucks, so please help. Here is a code example that demonstrates my problem:

        Public Sub ManyRoundTrips()
            context.ContextOptions.LazyLoadingEnabled = True
            Dim employees As List(Of Employee) = context.Employees.Execute(System.Data.Objects.MergeOption.AppendOnly).ToList()
            'Makes an unnecessary round trip to the database; I just loaded the employees.
            MessageBox.Show(context.Employees.Where(Function(x) x.EmployeeID < 10).ToList().Count)
            context.Orders.Execute(System.Data.Objects.MergeOption.AppendOnly)
            For Each emp As Employee In employees
                'Makes an unnecessary trip to the database every time, despite orders being preloaded.
                Dim i As Integer = emp.Orders.Count
            Next
        End Sub

        Public Sub OneRoundTrip()
            context.ContextOptions.LazyLoadingEnabled = True
            Dim employees As List(Of Employee) = context.Employees.Include("Orders").Execute(System.Data.Objects.MergeOption.AppendOnly).ToList()
            MessageBox.Show(employees.Where(Function(x) x.EmployeeID < 10).ToList().Count)
            For Each emp As Employee In employees
                Dim i As Integer = emp.Orders.Count
            Next
        End Sub

    Why is the first block of code making unnecessary round trips?


  • Solr associations

    - by Tom
    Hi all, the last couple of days we have been thinking of using Solr as our search engine of choice. Most of the features we need are out of the box or can be easily configured. There is, however, one feature we absolutely need that seems to be well hidden (or missing) in Solr. I'll try to explain with an example. We have lots of documents that are actually businesses:

        <document>
          <name>Apache</name>
          <cat>1</cat>
          ...
        </document>
        <document>
          <name>McDonalds</name>
          <cat>2</cat>
          ...
        </document>

    In addition we have another XML file with all the categories and synonyms:

        <cat id="1">
          <name>software</name>
          <synonym>IT</synonym>
        </cat>
        <cat id="2">
          <name>fast food</name>
          <synonym>restaurant</synonym>
        </cat>

    We want to associate businesses with categories so we can search using the name and/or synonyms of the category. But we do not want to merge these files at indexing time, because we need to be able to update the categories (adding/removing synonyms...) without indexing all the businesses again. Is there anything in Solr that does this kind of association, or do we need to develop some specific pieces? All feedback and suggestions are welcome. Thanks in advance, Tom
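
    One avenue worth checking (a sketch, not a confirmed fit for the requirement): Solr's query-time synonym expansion. The synonyms live in a text file next to the schema, so they can be edited without reindexing the business documents; the open assumption to verify is that a core reload suffices to pick up changes. If the category names were copied into a searchable field at index time, the schema.xml part would look roughly like this:

        <!-- schema.xml sketch: a field type that expands synonyms at query
             time only, so synonyms.txt can change without a full reindex. -->
        <fieldType name="text_cat" class="solr.TextField">
          <analyzer type="index">
            <tokenizer class="solr.WhitespaceTokenizerFactory"/>
            <filter class="solr.LowerCaseFilterFactory"/>
          </analyzer>
          <analyzer type="query">
            <tokenizer class="solr.WhitespaceTokenizerFactory"/>
            <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
                    ignoreCase="true" expand="true"/>
            <filter class="solr.LowerCaseFilterFactory"/>
          </analyzer>
        </fieldType>

        <field name="category" type="text_cat" indexed="true" stored="true"/>

    with synonyms.txt containing lines such as "software, IT" and "fast food, restaurant".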


  • Merging Passed Parameters

    - by Josh Crowder
    I have two data arrays sent in from a form: one called transload and the other called video, which is the actual form data for the model. I need to get [:video_encode][:url] and save it to [:video][:flash_url]. Below are the passed arguments for transload. When I try to access params[:transload][:results][:video_encode] I get nil.

        print params[:transload]
        {
          "assembly_id": "d59b4293b3d79d2ccd1948c02421c6a6",
          "status": "success",
          "uploads": {
            "video": {
              "name": "bbc_one.mp4", "mime": "video/mp4", "ext": "mp4", "size": 601104,
              "meta": { "width": 720, "height": 404, "video_fps": 25, "video_bitrate": null,
                        "video_format": "avc1", "video_codec": "ffh264", "audio_bitrate": "128k",
                        "audio_codec": "faad", "duration": 3.07, "device_vendor": null,
                        "device_name": null, "device_software": null, "latitude": null,
                        "longitude": null },
              "url": "http://tmp.transloadit.com/"
            }
          },
          "results": {
            "video_encode": {
              "name": "bbc_one.flv", "mime": "video/x-flv", "steps": ["encode", "export"],
              "ext": "flv", "size": 388317,
              "meta": { "width": 480, "height": 320, "video_fps": 25, "video_bitrate": "512k",
                        "video_format": "FLV1", "video_codec": "ffflv", "audio_bitrate": "64k",
                        "audio_codec": "mp3", "duration": 3.11, "device_vendor": null,
                        "device_name": null, "device_software": null, "latitude": null,
                        "longitude": null },
              "url": "http://s3.transloadit.com/b7deac9c96af6c745e914e25d0350baa/7a/2b09e822265ac2328789b40dcc02ae/bbc_one.flv"
            },
            "video_encode_iphone": {
              "name": "bbc_one.qt", "mime": "video/quicktime", "steps": ["encode_iphone", "export"],
              "ext": "qt", "size": 218236,
              "meta": { "width": 480, "height": 320, "video_fps": 25, "video_bitrate": null,
                        "video_format": "avc1", "video_codec": "ffh264", "audio_bitrate": "128k",
                        "audio_codec": "faad", "duration": 3.04, "device_vendor": null,
                        "device_name": null, "device_software": null, "latitude": null,
                        "longitude": null },
              "url": "http://s3.transloadit.com/31/58bcc80d5345e52a42c9773125e8f0/bbc_one.qt"
            }
          }
        }

    Here is what I am trying to use:

        video_links = {
          :flash_url => params[:transload][:results][:video_encode][:url],
          :mp4_url   => params[:transload][:results][:video_encode_iphone][:url]
        }
        params[:video].merge(video_links)
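
    Two things worth checking, sketched under the assumption that the payload arrives as a raw JSON string: params[:transload] may need parsing before it can be indexed into (and parsed JSON has string keys, not symbols), and Hash#merge returns a new hash rather than mutating the receiver:

        require 'json'

        # If params[:transload] is a JSON string, indexing into it returns nil
        # until it is parsed; after parsing, use string keys.
        transload = params[:transload].is_a?(String) ? JSON.parse(params[:transload]) : params[:transload]

        video_links = {
          :flash_url => transload['results']['video_encode']['url'],
          :mp4_url   => transload['results']['video_encode_iphone']['url']
        }

        # merge returns a new hash; merge! (or reassignment) keeps the result.
        params[:video].merge!(video_links)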


  • Merging changes to a workspace with uncommitted changes

    - by Kim L
    We've just recently switched over from SVN to Mercurial, but now we are running into problems with our workflow. Example: I have my local clone of the repository which I work on. I'm making some highly experimental changes to our code base - something that I don't want to commit before I'm sure it works the way it is supposed to; I don't want to commit it even locally. Now, simultaneously, my co-worker has made some significant improvements/bug fixes which I need. He pushes his commits to our main repository. The question is: how can I merge his changes into my workspace without being required to commit all my changes, given that I need his changes to test my own code? A more day-to-day problem we have with the exact same workflow is that we have a couple of configuration files in the repository. Each developer makes a couple of small environment-specific changes to the configuration files, but does not commit the changes. These few uncommitted files prevent us from making any merges into our workspace, just like in the example above. Ideally, the configuration files probably shouldn't be in the repository; unfortunately, that's just how it has to be here, for reasons I won't go into.
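
    One workable pattern, sketched below (extensions such as shelve or attic offer more polished variants of the same idea): stash the uncommitted work as a patch file, pull and update, then reapply it on top of the new code:

        # save uncommitted changes as a patch, then clean the working copy
        hg diff > ~/wip.patch
        hg revert --all

        # bring in the co-worker's changes
        hg pull -u

        # reapply the experimental work without committing it
        hg import --no-commit ~/wip.patch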


  • Combining multiple content types into a single search result with Drupal 6 and Views 2

    - by Chaulky
    Hi all, I need to create somewhat advanced search functionality for my Drupal 6 site. I have a one-to-many relationship between two content types and need to search them, respecting that relationship. To make things more clear... I have content types TypeX and TypeY. TypeY has a node-reference CCK field that relates it to a single node of TypeX. So, many nodes of TypeY reference the same node of TypeX. I want to use Views 2 to create a search page for these nodes. I want each search result to be a node of TypeX, along with all the nodes of TypeY that reference it. I know I could theme the individual results and use a view to add the nodes of TypeY to the single node of TypeX... but that won't allow users to actually search TypeY... it would only search TypeX and merely display some nodes of TypeY along with it. Is there any way to get the search to account for content in nodes of both content types, but merge the TypeY results into the "parent" node of TypeX? In database terms, it seems like I need to do a join, then filter by the search terms. But I can't figure out how to do this in Views. Thanks for any help I can get!
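
    In raw SQL terms, the join being described would look something like the sketch below (the table and column names follow Drupal 6's CCK conventions but are assumptions, and in Views the equivalent is a relationship on the node-reference field, which is what would need reversing here):

        -- One row per TypeX node, matched with the TypeY nodes that reference
        -- it, filtered so the search term can hit either side.
        SELECT x.nid AS typex_nid, y.nid AS typey_nid
        FROM node x
        JOIN content_type_typey cy ON cy.field_typex_ref_nid = x.nid
        JOIN node y ON y.vid = cy.vid
        WHERE x.type = 'typex'
          AND (x.title LIKE '%term%' OR y.title LIKE '%term%');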


  • Membership Provider users in different tables

    - by Mike
    I have an existing database with users and administrators in different tables. I am rewriting an existing website in ASP.NET and need to decide: should I merge the two tables into one users table and have one provider, or leave the tables separate and have two different providers? Administrators need the ability to create, edit and delete users. I am thinking that the membership/profile provider way of editing users, i.e.

        System.Web.Profile.ProfileBase pro = System.Web.Profile.ProfileBase.Create("User1");
        pro.Initialize("User1", true);
        txtEmail.Text = pro["SecondaryEmail"].ToString();

    is the best way to edit users, because the provider handles it. You cannot use this if you have two separate providers (because they are both looking at different tables). Or should I write a whole lot of methods for the administrators to edit users?

    Update: Making a custom membership provider look at both tables is fine, but then what about the profile provider? The profile provider's GetPropertyValues and SetPropertyValues would be operating on the same set of properties for users and admins.
    Mike
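
    A compact sketch of the custom-provider direction, with only the dispatch idea shown (the class is left abstract because a complete provider must override every abstract member of MembershipProvider; the table helpers are hypothetical stand-ins for the existing data access):

        using System.Web.Security;

        // Sketch: one provider facade over two existing user tables.
        public abstract class DualTableMembershipProvider : MembershipProvider
        {
            public override bool ValidateUser(string username, string password)
            {
                // Check the users table first, then the administrators table.
                return UserTable.CheckCredentials(username, password)
                    || AdminTable.CheckCredentials(username, password);
            }
        }

        // Hypothetical data-access stubs standing in for the two tables.
        public static class UserTable
        {
            public static bool CheckCredentials(string u, string p) { /* query users table */ return false; }
        }
        public static class AdminTable
        {
            public static bool CheckCredentials(string u, string p) { /* query admins table */ return false; }
        }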


  • WCF service returns error 500 on /js request

    - by Cine
    I have a WCF service that randomly begins to fail when the autogenerated JavaScript that WCF supports is requested, and I have had no luck tracking down why. The js endpoint is part of the WCF feature set, so I don't understand how it can suddenly begin to fail and stay broken until IIS is recycled. The HTTP log gives me:

        2010-06-10 09:11:49 W3SVC2095255988 myip GET /path/myservice.svc/js _=1276161113900 80 - ip browser 500 0 0

    So it's an error 500, and that is about the only thing I can figure out. The event log contains no information. Requests to /path/myservice.svc work just fine. After recycling IIS it works again; some days later it begins to fail until I recycle IIS.

        <service name="path.myservice" behaviorConfiguration="b">
          <endpoint address="" behaviorConfiguration="eb"
                    binding="webHttpBinding" contract="path.Imyservice" />
        </service>
        ...
        <endpointBehaviors>
          <behavior name="eb">
            <enableWebScript />
          </behavior>
        </endpointBehaviors>
        <serviceBehaviors>
          <behavior name="b">
            <dataContractSerializer maxItemsInObjectGraph="2147483647" />
            <serviceMetadata httpGetEnabled="true" />
            <serviceDebug includeExceptionDetailInFaults="true" />
          </behavior>
        </serviceBehaviors>

    I don't see any problems in the web.config settings either. Any clues how I can track down what the problem is?
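
    One way to get past the bare 500 (standard WCF diagnostics; whether it catches this particular failure is an open question) is to enable tracing in web.config and open the resulting .svclog in the Service Trace Viewer when the endpoint next breaks:

        <system.diagnostics>
          <sources>
            <source name="System.ServiceModel"
                    switchValue="Warning, ActivityTracing"
                    propagateActivity="true">
              <listeners>
                <add name="traceListener"
                     type="System.Diagnostics.XmlWriterTraceListener"
                     initializeData="c:\logs\wcf-trace.svclog" />
              </listeners>
            </source>
          </sources>
        </system.diagnostics>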


  • Javascript: Multiple mouseout events triggered

    - by Channel72
    I'm aware of the different event models in JavaScript (the W3C model versus the Microsoft model), as well as the difference between bubbling and capturing. However, after a few hours reading various articles about this issue, I'm still unsure how to properly code the following seemingly simple behavior: I have an outer div and an inner div, and I want a single mouseout event to be triggered when the mouse leaves the outer div. When the mouse crosses from the inner div to the outer div, nothing should happen; when the mouse crosses from the outer div to the inner div, nothing should happen. The event should only fire if the mouse moves from the outer div to the surrounding page.

        <div id="outer" style="width:20em; height:20em; border:1px solid #F00" align="center"
             onmouseout="alert('mouseout event!')">
          <div id="inner" style="width:18em; height:18em; border:1px solid #000"></div>
        </div>

    Now, if I place the mouseout event on the outer div, two mouseout events fire when the mouse moves from the inner div to the surrounding page: the event fires once when the mouse moves from inner to outer, and then again when it moves from outer to the surrounding page. I know I can cancel the event using ev.stopPropagation(), so I tried registering an event handler on the inner div to cancel the event propagation. However, this won't prevent the event from firing when the mouse moves from the outer div to the inner div. So, unless I'm overlooking something, it seems to me this behavior can't be accomplished without complex mouse-tracking functions. In the future, I plan to reimplement a lot of this code using a more advanced framework like jQuery, but for now, I'm wondering if there is a simple way to implement the above behavior in regular JavaScript.
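
    There is a simpler route than full mouse tracking, sketched below: on mouseout, check where the pointer went (relatedTarget in the W3C model, toElement in the Microsoft model) and ignore the event unless the destination lies outside the outer div entirely:

        var outer = document.getElementById('outer');

        outer.onmouseout = function (ev) {
          ev = ev || window.event;
          // Element the pointer moved to: W3C vs. Microsoft model.
          var to = ev.relatedTarget || ev.toElement;

          // Walk up from the destination; if we reach 'outer', the pointer is
          // still inside it (e.g. it moved onto the inner div), so do nothing.
          while (to && to !== outer) {
            to = to.parentNode;
          }
          if (to !== outer) {
            alert('mouseout event!'); // the pointer genuinely left the outer div
          }
        };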


  • Add multiple entities to Javascript namespace from different files

    - by Brian M. Hunt
    Given a namespace ns used in two different files:

    abc.js

        var ns = ns || (function () {
          var foo = function () { ... };
          return { abc : foo };
        }());

    def.js

        // is this correct?
        var ns = ns || {};
        ns.def = ns.def || (function () {
          var defoo = function () { ... };
          return { deFoo: defoo };
        }());

    Is this the proper way to add def to the namespace ns? In other words, how does one merge two contributions to a namespace in JavaScript? If abc.js comes before def.js I'd expect this to work. If def.js comes before abc.js I'd expect ns.abc not to exist, because ns is already defined at that point. It seems there ought to be a design pattern that eliminates any uncertainty when doing inclusions with the JavaScript namespace pattern. I'd appreciate thoughts and input on how best to go about this sort of 'inclusion'. Thanks for reading. Brian
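
    A common order-independent idiom, sketched here: have every file augment the shared object rather than attempt to create it wholesale, so each contribution survives regardless of load order (the function bodies remain placeholders as above):

        // abc.js - augment, never replace.
        var ns = ns || {};
        ns.abc = ns.abc || (function () {
          var foo = function () { /* ... */ };
          return foo;
        }());

        // def.js - same idiom; load order no longer matters.
        var ns = ns || {};
        ns.def = ns.def || (function () {
          var defoo = function () { /* ... */ };
          return { deFoo: defoo };
        }());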


  • _Painless_ integration of Eclipse with Vim?

    - by Adnan
    Hi, has anyone managed to get Vim integrated into Eclipse painlessly? I just want to use Vim for the editor while retaining the general Eclipse interface. I have tried the Eclim plugin, but the editor seemed to crash more often than work (the site said that the editor-replacement functionality is still beta). On the flip side, is there any IDE which matches Eclipse's functionality - mainly the integration with SVN, ant, etc. - and is also able to use Vim? I mostly use Eclipse for SAS SCL, Java and JavaScript programming and find the Eclipse editor too "mouse-y". I'd also like, in a perfect world, to use vimdiff as a diff viewer for SVN (we use TortoiseSVN) when checking diffs or conflicts during merges. I admit I haven't spent a lot of time trying to get these things to work. I feel guilty about spending too much time on potential wild-goose chases while my other team members are working away at their code, perfectly content with all that Eclipse has to offer. Edit: Just found this while desperately browsing around: Vim plugin. Any experience using this? It's Saturday afternoon in Kiwi land, so I'll try it later and update if no one has an opinion. From the claims on the site, it sounds perfect.


  • Database sharing/versioning

    - by DarkJaff
    Hi everyone, I have a question, but I'm not sure of the word to use. My problem: I have an application using a database to store information. The database can be in Access (local) or on a server (SQL Server or Oracle); we support these three kinds of database. We want to give the user the ability to do what I think we can call versioning. Let me explain: we have a database 1. This is the master. We want to be able to create a database 2 that is the same thing as database 1, but which we can give to someone else. They each work on their own side, adding, modifying and deleting records in this very complex database. After that, we want database 1 to include the changes from database 2, but with the possibility to dismiss some of the changes. For your information, our application is already multi-user, so why don't we just use that and forget about this versioning? Because sometimes we need to give a copy of the database to another company at another site, and they can't connect to our server. They work on their side and then we want to merge. Is there anyone here with experience with this type of requirement? We have a lot of ideas, but most of them require a LOT of work, or massive modification to the database or the existing queries. This is a two-million-line (and growing) C++ app, so rewriting it is not possible! Thanks for any ideas that you may give us! J-F


  • continueTrackingWithTouch: withEvent: not being called continuously.

    - by Steven Noyes
    I have a very simple subclass of UIButton that should fire off a custom event when the button has been held for 2 seconds. To accomplish this, I overrode:

        //
        // Mouse tracking
        //
        - (BOOL)beginTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event
        {
            [super beginTrackingWithTouch:touch withEvent:event];
            [self setTimeButtonPressed:[NSDate date]];
            return (YES);
        }

        - (BOOL)continueTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event
        {
            [super continueTrackingWithTouch:touch withEvent:event];
            // timeIntervalSinceNow is negative for dates in the past,
            // hence the comparison against -2.0.
            if ([[self timeButtonPressed] timeIntervalSinceNow] < -2.0)
            {
                //
                // Disable the button. This will eventually fire the event,
                // but this is just stubbed to act as a break point area.
                //
                [self setEnabled:NO];
                [self setSelected:NO];
            }
            return (YES);
        }

    My problem is that (in the simulator; I can't do on-device work quite yet) continueTrackingWithTouch:withEvent: is not being called unless the mouse is actually moved. It does not take much motion, but it does take some motion. By returning YES in both of these, I should be set up to receive continuous events. Is this an oddity of the simulator, or am I doing something wrong?

    NOTE: userInteractionEnabled is set to YES.
    NOTE2: I could set up a timer in beginTrackingWithTouch:withEvent: but that seems like more effort for something that should be simple.
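
    For what it's worth, since the tracking callbacks are movement-driven, the timer route mentioned in NOTE2 is only a few lines (a sketch; buttonHeldLongEnough is an invented selector name):

        // Schedule the "held for 2 seconds" event on press, cancel on release.
        - (BOOL)beginTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event
        {
            [super beginTrackingWithTouch:touch withEvent:event];
            [self performSelector:@selector(buttonHeldLongEnough)
                       withObject:nil
                       afterDelay:2.0];
            return YES;
        }

        - (void)endTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event
        {
            [super endTrackingWithTouch:touch withEvent:event];
            // Released before the delay elapsed: cancel the pending event.
            [NSObject cancelPreviousPerformRequestsWithTarget:self
                                                     selector:@selector(buttonHeldLongEnough)
                                                       object:nil];
        }

        - (void)buttonHeldLongEnough
        {
            [self setEnabled:NO];
            [self setSelected:NO];
        }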


  • Advice on designing and building distributed application to track vehicles

    - by dario-g
    I'm working on an application for tracking vehicles. There will be about 10k or more vehicles. Each will send ~250 bytes every minute. The data contain a GPS location and everything from the CAN bus (all the data we can read from the vehicle computer and dashboard). Data are sent over GSM/GPRS (using the UDP protocol). The estimated volume is about two million rows of this data per day. I see three main blocks:

        1. Multithreaded Socket Server (MSS) - I have it. MSS stores received data in a queue (using NServiceBus).
        2. Rule Processor Server (RPS) - this is the core of the system. This block is responsible for parsing received data, storing it in the database, processing rules, and sending messages to the Notifier Server (which will send the e-mails/SMS texts). An example rule: the received bytes include the current speed; when the speed goes above 120, show an alert in the web application for specified users, send an e-mail, and send an SMS text. (There can be more than one instance of RPS.)
        3. Web application - allows reporting and defining rules by users, monitoring alerts, etc.

    I'm looking for advice on how to design the communication between RPS and the web application. Some questions: Should the web application and RPS have separate databases, or will one central database be enough? I have one domain model in the web application; if there is one central database, can I use the same model (objects) on RPS? And then, how do I send changed rules to RPS? I'm trying to decouple these blocks as much as possible. I'm planning to create a separate instance of the application for each client (each client will have a separate database). One client will have 10k vehicles, others only 100 vehicles.


  • i18n redirection breaks my tests ....

    - by Mike
    I have a big application covered by more than a thousand tests via RSpec. We just made the choice to redirect any page like:

        /
        /foo
        /foo/4/bar/34

    to:

        /en
        /en/foo
        /fr/foo/4/bar/34

    So I made a before filter in application.rb like so:

        if params[:locale].blank?
          headers["Status"] = "301 Moved Permanently"
          redirect_to request.env['REQUEST_URI'].sub!(%r(^(http.?://[^/]*)?(.*))) { "#{$1}/#{I18n.locale}#{$2}" }
        end

    It's working great, but it breaks a lot of my tests. For example:

        it "should return 404" do
          Video.should_receive(:failed_encodings).and_return([])
          get :last_failed_encoding
          response.status.should == "404 Not Found"
        end

    To fix this test, I would do:

        get :last_failed_encoding, :locale => "en"

    But seriously, I don't want to fix all my tests one by one... I tried to make the locale a default parameter like this:

        class ActionController::TestCase
          alias_method(:old_get, :get) unless method_defined?(:old_get)
          def get(path, parameters = {}, headers = nil)
            parameters.merge({:locale => "fr"}) if parameters[:locale].blank?
            old_get(path, parameters, headers)
          end
        end

    ...but couldn't make this work... Any idea??
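
    For what it's worth, the monkey-patch above is probably failing because Hash#merge returns a new hash and the result is discarded. A sketch of the same patch with the result kept, plus a nil guard since get can be called with parameters = nil:

        class ActionController::TestCase
          alias_method(:old_get, :get) unless method_defined?(:old_get)

          def get(path, parameters = {}, headers = nil)
            parameters ||= {}
            # merge! mutates in place; plain merge discards its result here.
            parameters.merge!(:locale => "fr") if parameters[:locale].blank?
            old_get(path, parameters, headers)
          end
        end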


  • MVVM pattern: ViewModel updates after Model server roundtrip

    - by Pavel Savara
    I have stateless services and anemic domain objects on the server side. The model between server and client is a POCO DTO. The client is to become MVVM. The model could be a graph of about 100 instances of 20 different classes. The client editor contains diverse tab pages, all of them live-connected to the model/viewmodel. My problem is how to propagate changes after a server round trip in a nice way. It's quite easy to propagate changes from the ViewModel to the DTO. For the way back, it would be possible to throw away the old DTO and replace it wholesale with the new one, but that would cause a lot of redrawing for lists/DataTemplates. I could gather the server-side changes and transmit them to the client side, but the names of the changed fields would be domain/DTO-specific, not ViewModel-specific, and the mapping seems nontrivial to me. If I did it imperatively after the round trip, it would break the separation of concerns/modularity of the ViewModels. I'm thinking about some kind of mapping rule engine, something like AutoMapper or EmitMapper, but those solve only the very plain use cases. I don't see how such a tool would map/propagate/convert adding items to a list, or removal. How would it identify instances in collections so it could merge values into existing instances? It should also propagate validation/error info. Maybe I should implement INotifyPropertyChanged on the DTO and try to replay server-side events on it, and then bind the ViewModel to it? Would binding solve the problems with collection merges in a nice way? Is the EventAggregator from Prism useful for that? Is there any event record-replay component? Is there a better client-side pattern for an architecture with server-side logic?


  • Collecting high-volume video viewing data

    - by DanK
    I want to add tracking to our Flash-based media player so that we can provide analytics that show what sections of videos are being watched (at the moment, we just register a view when a video starts playing) For example, if a viewer watches the first 30 seconds of a video and then clicks away to something else, we want the data to reflect that. Likewise, if someone watches the first 10 seconds, then scrubs the timeline to the last minute of the video and watches that, we want to register viewing on the parts watched and not the middle section. My first thought was to collect up the viewing data in the player and send it all to the server at the end of a viewing session. Unfortunately, Flash does not seem to have an event that you can hook into when a viewer clicks away from the page the movie is on (probably a good thing - it would be open to abuse) So, it looks like we're going to have to make regular requests to the server as the video is playing. This is obviously going to lead to a high volume of requests when there are large numbers of simultaneous viewers. The simple approach of dumping all these 'heartbeat' events from clients to a database feels like it will quickly become unmanageable so I'm wondering whether I should be taking an approach where viewing sessions are cached in memory and flushed to database when they become inactive (based on a timeout). That way, the data could be stored as time spans rather than individual heartbeats. So, to the question - what is the best way to approach dealing with this kind of high-volume viewing data? Are there any good existing architectures/patterns? Thanks, Dan.
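
    As a sketch of the time-span idea (assumptions: heartbeats arrive every few seconds carrying the current playhead position; the interval and tolerance values are invented), coalescing each session's heartbeats into watched intervals keeps the stored volume proportional to the number of distinct spans rather than the number of pings:

        # Sketch: fold a session's heartbeat positions (seconds into the video)
        # into watched spans. A gap larger than the tolerance means the viewer
        # scrubbed, so a new span starts.
        HEARTBEAT = 5  # assumed seconds between pings

        def to_spans(positions, tolerance=2 * HEARTBEAT):
            spans = []
            for p in sorted(positions):
                if spans and p - spans[-1][1] <= tolerance:
                    spans[-1][1] = p          # extend the current span
                else:
                    spans.append([p, p])      # scrub detected: open a new span
            return spans

        # e.g. 10s watched, scrub to the last minute, watch 20s more:
        print(to_spans([0, 5, 10, 180, 185, 190, 195, 200]))
        # -> [[0, 10], [180, 200]]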

