Search Results

Search found 66916 results on 2677 pages for 'real time strategy'.


  • Almost Realtime Data and Web application

    - by Chris G.
    I have a computer that is recording 100 different data points into an OPC server. I've written a simple OPC client that can read all of this data. I have a front-end website on a different network that I would like to have consume this data. I could easily set the OPC client to send the data to a SQL server and have the website read from it, but that would be a lot of writes. If I wanted the data to be updated every 10 seconds I'd be writing to the database every 10 seconds. (I could probably just serialize the 100 points to get 1 write / 10 seconds, but that would also limit my ability to search the data later.) This solution wouldn't scale very well. If I had 100 of these computers the situation would quickly grow out of hand. Obviously I am well out of my league here and I have no experience working with this much data. What are my options and what should I research?
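    A minimal sketch of the batching idea mentioned in the question (buffer the 100 points in memory and flush them as one batched insert every 10 seconds), written in Java/JDBC purely for illustration; the DataPoint type, table name, and column layout are assumptions, not part of the original setup:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.Timestamp;
        import java.util.ArrayList;
        import java.util.List;
        import javax.sql.DataSource;

        // Hypothetical reading produced by the OPC client.
        record DataPoint(String tag, double value, long epochMillis) {}

        class BatchWriter {
            private final DataSource ds;
            private final List<DataPoint> buffer = new ArrayList<>();

            BatchWriter(DataSource ds) { this.ds = ds; }

            // Called by the OPC client for each of the 100 points.
            synchronized void add(DataPoint p) { buffer.add(p); }

            // Called every 10 seconds (e.g. from a ScheduledExecutorService):
            // one round trip with 100 rows instead of 100 individual INSERTs,
            // while each point remains an individually searchable row.
            synchronized void flush() throws Exception {
                if (buffer.isEmpty()) return;
                try (Connection c = ds.getConnection();
                     PreparedStatement ps = c.prepareStatement(
                         "INSERT INTO readings (tag, value, read_at) VALUES (?, ?, ?)")) {
                    for (DataPoint p : buffer) {
                        ps.setString(1, p.tag());
                        ps.setDouble(2, p.value());
                        ps.setTimestamp(3, new Timestamp(p.epochMillis()));
                        ps.addBatch();
                    }
                    ps.executeBatch();
                }
                buffer.clear();
            }
        }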

    Read the article

  • Is realtime validation of username good or bad?

    - by iamserious
    I have a simple form for the user to sign up to my site, with email, username and password fields. We are now trying to implement ajax validation so the user doesn't have to post the form to find out if the username is already taken. I can do this either on the keyup event or on the text blur event. My question is, which of these is really the better approach? Keyup From the user's POV, it would be good if the validation happens as they type (on the keyup event) - of course, I am waiting for half a second to see if the user stops typing before firing off the request, and the user can make any adjustments immediately. But this means I am sending far more requests than if I validated the username on the blur event. Blur The number of requests will be much lower when the validation is done on the blur event, but this means the user has to actually leave the textbox, look at the validation result, and if necessary go back to it to make changes and repeat the whole process until they get it right. I had a quick look at google, tumblr and twitter, and none of them actually do username validation on keyup events (heck, tumblr waits for the form to be posted), but I could swear I have seen keyup validation in a lot of places too. So, coming back to the question: will keyup validation generate too many requests for the server - is it unnecessary overhead? Or is it worth taking these hits to give the user a better experience? ps: all my regex validations etc. are already done in javascript, and only when the input passes all these other criteria does it send a request to the server to check if the username already exists. (And the server is doing a select count(1) from user where username = '' - nothing substantial, but still enough to occupy some resource.) pps: I'm on the asp.net, MS SQL stack, if that matters.

    Read the article

  • Battery icon constantly empty when discharging Vaio VGN-FZ210CE

    - by Alex
    I have a Sony Vaio VGN-FZ210CE laptop, and its battery does not report the drain rate properly. In 11.10 and earlier, the battery icon would update with the percentage, not the estimated time. In 12.04 and 12.10, it estimates by time, which is always some very low value because the estimated drain rate is 700W. I currently have 89% in my battery but the icon is red and empty. If there is an application I should install, or a setting I should change, please let me know. Thanks!

    Read the article

  • How to balance programming projects between feasibility and usefulness

    - by tyjkenn
    I've become fairly competent as a programmer, but I would not say I am a master. I work independently, mostly as a hobby, although I have done some freelance PHP work. I tend to find myself dabbling in a lot of things: the Java Android SDK, Arduino, game scripting, Lua, etc. I've reached the point where I want to start a real software project, but cannot think of a small enough project that gives me enough practice while still letting me publish a decent piece of software in a reasonable amount of time and build up a portfolio. More specifically, I was looking at Ubuntu development, in Python, using the Quickly toolset, which includes the PyGTK libraries. So the question is, what is the best way to come up with a small project that is still useful, as a starting point to a software development career?

    Read the article

  • Agile Tools For Handling Multiple Projects

    - by f1dave
    Currently I'm leading our agile team in an iteration manager role as well as doing my regular dev work. One of the difficulties I'm facing as an IM is tracking burn-down/burn-up; not because I can't produce graphs, but because there are multiple projects that this team is working on at one time. At present I have an excel workbook with sheets that contain a whole bunch of graphs, both at an overall team and a by-project level. It's clunky and I spend more time tweaking formulas and double-checking calculations than I'd really like. As such, I'm interested to know if anyone has used a tool that can effectively produce these sorts of reports, burn-downs, and predictions across multiple projects. I've seen http://www.pivotaltracker.com/ do some nice things, and of course there's JIRA/Greenhopper, but I'm not aware of those being used to track the progress of multiple projects within one team. If anyone's got an idea of some tools, or has faced a similar problem before, I'd love to hear from you.

    Read the article

  • Variable-step update() in game loop is falling behind, how can I get around this?

    - by ThatsGobbles
    I'm working on a minimal game engine for my next game. I'm using the delta update method as shown: void update(double delta) { // Update code that uses `delta` goes here } I have a deep hierarchy of updatable objects, with a root updatable that contains several updatables, each of which contains more updatables, etc. Normally I'd just iterate through each of the root's children and update each one, which would then do the same for its children, and so on. However, passing a fixed value of delta to the root means that by the time the leaf updatables are reached, more than delta seconds have actually elapsed. This is causing noticeable desyncing in my game, and time synchronization is very important in my case (I'm working on a rhythm game). Any ideas on how I should tackle this? I've considered using StopWatches and a global readable timer, but any advice would be helpful. I'm also open to moving to fixed timesteps as opposed to variable.
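    Since the question mentions being open to fixed timesteps, here is a minimal sketch of the usual accumulator loop, written in Java only for illustration (the Game interface is a stand-in for the real updatable hierarchy). Every update() in the tree receives exactly the same fixed delta, so leaf updatables can no longer drift relative to the clock:

        public class FixedStepLoop {
            private static final double STEP = 1.0 / 120.0; // fixed logic step in seconds

            public void run(Game game) {
                long previous = System.nanoTime();
                double accumulator = 0.0;

                while (game.isRunning()) {
                    long now = System.nanoTime();
                    accumulator += (now - previous) / 1_000_000_000.0;
                    previous = now;

                    // Consume elapsed real time in fixed-size slices; every updatable
                    // in the hierarchy is stepped by exactly STEP seconds.
                    while (accumulator >= STEP) {
                        game.update(STEP);
                        accumulator -= STEP;
                    }

                    // Optional: pass the leftover fraction for render interpolation.
                    game.render(accumulator / STEP);
                }
            }
        }

        interface Game {
            boolean isRunning();
            void update(double delta);
            void render(double alpha);
        }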

    Read the article

  • How many hours do you spend programming outside of work if you are married (with children) [closed]

    - by eterps
    I recently attended a conference and met some insanely talented developers with mile-long lists of accomplishments. One puzzling thing I noticed about many of these wildly successful people is that a lot of their output (books, blogs, games, mobile apps) is done as hobby projects outside of their normal jobs. Not only this, but most of them were married, some even with children. I'm absolutely baffled by this. My question is targeted at folks who are married, working full time, and also spend time outside of work on hobby projects or extra programming. The question is: how many hours do you put into programming outside of your normal job, and how do you balance it with your family life?

    Read the article

  • ORACLE MASTER Platinum: Oracle Database operations seminar (original Japanese title garbled by encoding)

    - by Yusuke.Yamamoto
    Japanese seminar announcement; most of the original text was lost to character encoding. Recoverable details: a session in which ORACLE MASTER Platinum holders discuss Oracle Database operations and management, including SQL-related topics, held 2011-03-11 (Friday) 18:30-20:30.

    Read the article

  • What strategy do you use to sync your code when working from home

    - by Ben Daniel
    At my work I currently have my development environment inside a Virtual Machine. When I need to do work from home I copy my VM and any databases I need onto a laptop-drive-sized external USB drive. After about 10 minutes of copying I put the drive in my pocket and head home, copy the VM and databases back onto my personal computer, and I'm ready to work. I follow the same steps to take the work back with me. So if I count the total amount of time I spend waiting around for files to finish copying in order for me to take work home and bring it back again, it comes to around 40 minutes! I do have a VPN connection to my work from home (providing the internet is up at both sites) and a decent internet speed (8mbits down/?up), but I find Remote Desktop sessions to my work machine laggy enough that I want to work on my VM directly. So in looking at what other options I have, or how I could improve my existing option, I'm interested in what strategy you use or recommend to do work at home while keeping your code/environment in sync. EDIT: I'd prefer an option where I don't have to commit my changes into version control before I leave work - as I like to make meaningful descriptive comments in my commits, committing would take longer than just copying my VM onto a portable drive! lol Also I'd prefer a solution where my dev environment stays in sync too. Having said that, I'm still very interested in your own solutions even if they don't solve my problem exactly as I'd like. :)

    Read the article

  • PHP XML Strategy: Parsing DOM to fill "Bean"

    - by Mike
    I have a question concerning a good strategy for filling a data "bean" with data from inside an XML file. The bean might look like this: class Person { var $id; var $forename = ""; var $surname = ""; var $bio; function __construct() { $this->bio = new Biography(); } } class Biography { var $url = ""; var $id; } The XML subtree containing the info might look like this: <root> <!-- some more parent elements before node(s) of interest --> <person> <name pre="forename"> Foo </name> <name pre="surname"> Bar </name> <id> 1254 </id> <biography> <url> http://www.someurl.com </url> <id> 5488 </id> </biography> </person> </root> At the moment I have one approach using DOMDocument. A method iterates over the entries and fills the bean by "remembering" the last node. I think that's not a good approach. What I have in mind is something like preconstructing some xpath expression(s) and then iterating over the subtrees/nodeLists, eventually returning an array containing the beans as defined above. However, it does not seem to be possible to reuse a subtree/DOMNode as the DOMXPath constructor parameter. Has anyone of you encountered such a problem?

    Read the article

  • Getter and Setter vs. Builder strategy

    - by Extrakun
    I was reading a JavaWorld article on getters and setters whose basic premise is that getters expose the internal content of an object, thereby tightening coupling, and which goes on to provide examples using builder objects. I was rather leery of abolishing getters/setters, but on a second reading of the article I have come to quite like the idea. However, sometimes I just need one crucial element of an entity class, such as the user's id, and writing a whole class just to extract that crucial element seems like overkill. It also implies that for each different view, a different type of importer/exporter must be implemented (or the whole data of the class must be exported, thus resulting in waste). Usually I tend towards filtering the result of a getter - for example, if I need to output the price of a product in a different currency, I would code it as: return CurrencyOutput::convertTo($product->price(), 'USD'); This is with the understanding that the raw output of a getter is not necessarily the final result to be pushed onto a screen or into a database. Is getter/setter really as bad as it is portrayed to be? When should one adopt a builder strategy, or a 'get the result and filter it' approach? And how do you avoid having a class that needs to know about every other object if you are not using getters/setters?
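    For comparison, a rough Java sketch of the exporter/builder idea alongside the "get the result and filter it" style; the Product, ProductExporter, and CurrencyOutput names here are only illustrative stand-ins, not taken from the article:

        import java.math.BigDecimal;

        // Builder/exporter style: Product pushes its data out; callers never see raw fields.
        interface ProductExporter {
            void addName(String name);
            void addPrice(BigDecimal price, String currency);
        }

        class Product {
            private final String name = "Widget";
            private final BigDecimal price = new BigDecimal("9.99"); // stored in EUR, say

            void export(ProductExporter exporter) {
                exporter.addName(name);
                exporter.addPrice(price, "EUR");
            }

            // Getter style: expose the value and let the caller filter/convert it.
            BigDecimal price() {
                return price;
            }
        }

        // A view-specific exporter: the only class that knows how USD labels are rendered.
        class UsdLabelExporter implements ProductExporter {
            private final StringBuilder label = new StringBuilder();

            public void addName(String name) { label.append(name).append(": "); }
            public void addPrice(BigDecimal price, String currency) {
                // CurrencyOutput is hypothetical, standing in for whatever conversion exists.
                label.append(CurrencyOutput.convertTo(price, currency, "USD"));
            }
            String label() { return label.toString(); }
        }

        class CurrencyOutput {
            static String convertTo(BigDecimal amount, String from, String to) {
                return to + " " + amount; // placeholder conversion for the sketch
            }
        }

    The exporter keeps currency-formatting knowledge out of Product at the cost of one small interface per view, while the getter-plus-filter style keeps the class surface small but lets conversion logic spread to every caller.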

    Read the article

  • Memcache key generation strategy

    - by Maxim Veksler
    Given a function f1 which receives n String arguments, which of the following would be the better key generation strategy for memcache in the scenario described below? Our memcache client does internal md5sum hashing on the keys it gets: public class MemcacheClient { public Object get(String key) { String md5 = Md5sum.md5(key); // Talk to memcached to get the serialization... return memcached(md5); } } First option: public static String f1(String s1, String s2, String s3, String s4) { String key = s1 + s2 + s3 + s4; return get(key); } Second option: /** * Calculate hash from Strings * * @param strings vararg list of String's * * @return calculated md5sum hash */ public static String stringHash(Object... strings) { if(strings == null) throw new NullPointerException("D'oh! Can't calculate hash for null"); MD5 md5sum = new MD5(); // if(prevHash != null) // md5sum.Update(prevHash); for(int i = 0; i < strings.length; i++) { if(strings[i] != null) { md5sum.Update("_" + strings[i] + "_"); // Convert to String... } else { // If the object is null, allow minimum entropy by hashing its position md5sum.Update("_" + i + "_"); } } return md5sum.asHex(); } public static String f1(String s1, String s2, String s3, String s4) { String key = stringHash(s1, s2, s3, s4); return get(key); } Note the possible problem with the second option: we end up doing a second md5sum (in the memcache client) on an already md5sum'ed digest. Also note that the first option cannot distinguish ("ab", "c") from ("a", "bc"), since the arguments are concatenated without a delimiter. Thanks for reading, Maxim.
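    One further variant worth considering, sketched here with only the JDK's MessageDigest (an assumption - the MD5 helper class above comes from the question, not the JDK): build a single delimited key string and let the memcache client perform the one and only md5. This avoids hashing a digest twice and also removes the concatenation ambiguity of the first option:

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;

        public final class KeyBuilder {

            // Join the arguments with an unambiguous delimiter; a null keeps its position,
            // so ("a", null) and (null, "a") still produce different keys.
            public static String plainKey(String... parts) {
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < parts.length; i++) {
                    sb.append('_')
                      .append(parts[i] == null ? "null@" + i : parts[i].replace("_", "__"))
                      .append('_');
                }
                return sb.toString();
            }

            // Only needed if a pre-hashed key is required before it reaches the client.
            public static String md5Hex(String key) {
                try {
                    MessageDigest md = MessageDigest.getInstance("MD5");
                    byte[] digest = md.digest(key.getBytes(StandardCharsets.UTF_8));
                    StringBuilder hex = new StringBuilder();
                    for (byte b : digest) hex.append(String.format("%02x", b & 0xff));
                    return hex.toString();
                } catch (NoSuchAlgorithmException e) {
                    throw new IllegalStateException(e);
                }
            }
        }

    With this, f1 would simply return get(plainKey(s1, s2, s3, s4)), and the client's internal md5sum remains the only hashing step.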

    Read the article

  • ID3D10Device Memory Allocation Strategy and E_OUTOFMEMORY

    - by Buzz
    Hi guys, I want to know more about the memory allocation strategy of ID3D10Device. Could you give me some help? First question: I know D3D10 has done some work on memory virtualization, which means the client doesn't need to care where a buffer is placed - GPU memory, AGP memory or process system memory. Is this correct? Second question: when I use ID3D10Device to CreateBuffer repeatedly, no matter what the buffer desc type is, for example ID3D10Device::CreateBuffer( ... D3D10_USAGE_DEFAULT ... ); ID3D10Device::CreateBuffer( ... D3D10_USAGE_IMMUTABLE ... ); ID3D10Device::CreateBuffer( ... D3D10_USAGE_DYNAMIC ... ); ID3D10Device::CreateBuffer( ... D3D10_USAGE_STAGING ... ); etc., if CreateBuffer returns the error code "E_OUTOFMEMORY", does that mean the process virtual memory is exhausted? And at that point, would memory allocation on the process default heap also fail? Thanks in advance!

    Read the article

  • Best strategy for moving data between physical tiers in ASP.net

    - by Pete Lunenfeld
    Building a new ASP.net application, and planning to separate DB, 'service' tier and Web/UI tier into separate physical layers. What is the best/easiest strategy to move serialized objects between the service tier and the UI tier? I was considering serializing POCOs into JSON using simple ASP.net pages to serve the middle tier. Meaning that the UI/Web tier will request data from a (hidden to the outside user) web server that will return a JSON string. This kind of JSON 'emitter' seems easily testable. It also seems easily compressible for efficiently moving data over the WAN between tiers. I know that some folks use .asmx webservices for this kind of task, but this seems like there is excess overhead with SOAP, and the package is not as human readable (testable) as POCOs serialized as JSON. Others are using more complex technology like WCF which we have never used. Does anyone have advice for choosing a method for moving data/objects between the data (db) tier and the web (UI) tier over the WAN using .net technologies? Thanks!!!

    Read the article

  • Strategy for locale sensitive sort with pagination

    - by Thom Birkeland
    Hi, I work on an application that is deployed on the web. Part of the app is search functionality where the result is presented in a sorted list. The application targets users in several countries using different locales (= sorting rules). I need to find a solution for sorting correctly for all users. I currently sort with ORDER BY in my SQL query, so the sorting is done according to the locale (or LC_COLLATE) set for the database. These rules are incorrect for users with a locale different from the one set for the database. Also, to further complicate the issue, I use pagination in the application, so when I query the database I ask for rows 1 - 15, 16 - 30, etc. depending on the page I need. However, since the sorting is wrong, each page contains entries that are incorrectly sorted. In a worst case scenario, the entire result set for a given page could be out of order, depending on the locale/sorting rules of the current user. If I were to sort in (server side) code, I would need to retrieve all rows from the database and then sort, which would be a tremendous performance hit given the amount of data, so I would like to avoid this. Does anyone have a strategy (or even a technical solution) for attacking this problem that will result in correctly sorted lists without the performance hit of loading all data? Tech details: the database is PostgreSQL 8.3, the application an EJB3 app using EJB QL for data queries, running on JBoss 4.5.

    Read the article

  • Game engine deployment strategy for the Android?

    - by Jeremy Bell
    In college, my senior project was to create a simple 2D game engine complete with a scripting language which compiled to bytecode, which was interpreted. For fun, I'd like to port the engine to android. I'm new to android development, so I'm not sure which way to go as far as deploying the engine on the phone. The easiest way I suppose would be to require the engine/interpreter to be bundled with every game that uses it. This solves any versioning issues. There are two problems with this. One: this makes each game app larger and two: I originally released the engine under the LGPL license (unfortunately), but this deployment strategy makes it difficult to conform to the rules of that license, particularly with respect to allowing users to replace the lib easily with another version. So, my other option is to somehow have the engine stand alone as an Activity or service that somehow responds to intents raised by game apps, and somehow give the engine app permissions to read the scripts and other assets to "run" the game. The user could then be able to replace the engine app with a different version (possibly one they made themselves). Is this even possible? What would you recommend? How could I handle it in a secure way?
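    As a rough illustration of the intent-based option (the action string, MIME type, and URI-permission approach below are assumptions for the sketch, not an established design): the game app would ship only scripts and assets and fire an intent that a separately installed engine app declares a filter for.

        // In the game app: ask the separately installed engine to run this game.
        // The action string and MIME type are hypothetical.
        import android.content.Intent;
        import android.net.Uri;

        public final class GameLauncher {
            public static final String ACTION_RUN_GAME = "com.example.engine.action.RUN_GAME";

            public static Intent buildLaunchIntent(Uri bytecodeUri) {
                Intent intent = new Intent(ACTION_RUN_GAME);
                intent.setDataAndType(bytecodeUri, "application/x-engine-bytecode");
                // Grant the engine app read access to the game's assets without broad permissions.
                intent.addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION);
                return intent;
            }
        }

    On the engine side, the launcher Activity would declare a matching <intent-filter> for that action and MIME type in its manifest and read the granted content:// URI; replacing the engine then amounts to installing a different app that answers the same intent, which helps with the LGPL "replaceable library" concern.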

    Read the article

  • Resource placement (optimal strategy)

    - by blackened
    I know that this is not exactly the right place to ask this question, but maybe a wise guy comes across and has the solution. I'm trying to write a computer game and I need an algorithm to solve this question: The game is played between 2 players. Each side has 1,000 dollars. There are three "boxes" and each player writes down the amount of money he is going to place into each of those boxes. Then the amounts are compared. Whoever placed more money in a box scores 1 point (on a draw, half a point each). Whoever scores more points wins his opponent's 1,000 dollars. Example game: Player A: [500, 500, 0] Player B: [333, 333, 334] Player A wins because he won Box A and Box B (but lost Box C). Question: What is the optimal strategy to place the money? I have more questions to ask (algorithm related, not math related) but I need to know the answer to this one first. Update (1): After some more research I've learned that this type of problem/game is called a Colonel Blotto game. I did my best and found a few (highly technical) documents on the subject. Cutting it short, the problem I have (as described above) is called a simple Blotto game (only three battlefields with symmetric resources). The difficult ones are the ones with, say, 10+ battlefields and non-symmetric resources. All the documents I've read say that the simple Blotto game is easy to solve. The thing is, none of them actually says what that "easy" solution is.
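    For what it's worth, the classical result for the symmetric continuous Blotto game (equal budgets B, n >= 3 battlefields) is that no pure strategy is optimal; equilibrium strategies are mixed, and each battlefield's allocation x_i has a marginal distribution uniform on [0, 2B/n]. Stated for the three-box, 1,000-dollar game above - treat this as a pointer into the literature rather than a worked proof:

        x_i \sim \mathrm{Uniform}\!\left[0,\ \tfrac{2B}{3}\right] = \mathrm{Uniform}\left[0,\ 666.67\right], \quad i = 1,2,3, \qquad x_1 + x_2 + x_3 = B = 1000

    Any fixed split such as [500, 500, 0] can be learned and beaten by an opponent, which is why the optimum is necessarily randomized.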

    Read the article

  • Connection Pool Strategy: Good, Bad or Ugly?

    - by Drew
    I'm in charge of developing and maintaining a group of web applications that are centered around similar data. The architecture I decided on at the time was that each application would have its own database and web-root application. Each application maintains a connection pool to its own database and to a central database for shared data (logins, etc.). A co-worker has been positing that this strategy will not scale, because having so many different connection pools is not scalable, and that we should refactor the database so that all of the different applications use a single central database. Any modifications that may be unique to a system would then need to be reflected in that one database, and everything would use a single pool powered by Tomcat. He has posited that there is a lot of "metadata" that goes back and forth across the network to maintain a connection pool. My understanding is that with proper tuning, using only as many connections as necessary across the different pools (low-volume apps getting fewer connections, high-volume apps getting more, etc.), the number of pools doesn't matter compared to the number of connections - or, more formally, that the difference in overhead required to maintain 3 pools of 10 connections is negligible compared to 1 pool of 30 connections. The reasoning behind initially breaking the systems into a one-app-one-database design was that there are likely to be differences between the apps and that each system could modify its schema as needed. Similarly, it eliminated the possibility of system data bleeding through to other apps. Unfortunately there is not strong leadership in the company to make a hard decision. Although my co-worker is backing up his worries only with vagueness, I want to make sure I understand the ramifications of multiple small databases/connection pools versus one large database/connection pool.
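    If it helps frame the discussion, here is a minimal sketch of the per-application tuning described above using the Tomcat JDBC pool (the question mentions a Tomcat-backed pool; the URLs, driver, credentials and sizes below are made-up examples). The point is that three pools capped at 5, 10 and 15 connections hold the same 30 server-side connections as one pool of 30, so the comparison is really about connection counts, not about the number of pool objects:

        import org.apache.tomcat.jdbc.pool.DataSource;
        import org.apache.tomcat.jdbc.pool.PoolProperties;

        public final class Pools {

            // One modest pool per application, sized to that application's traffic.
            static DataSource pool(String jdbcUrl, int maxActive) {
                PoolProperties p = new PoolProperties();
                p.setUrl(jdbcUrl);
                p.setDriverClassName("org.postgresql.Driver"); // assumption: any JDBC driver works here
                p.setUsername("app");
                p.setPassword("secret");
                p.setInitialSize(1);
                p.setMaxActive(maxActive);   // hard cap for this application
                p.setMaxIdle(maxActive);
                p.setMinIdle(1);
                p.setTestOnBorrow(true);
                p.setValidationQuery("SELECT 1");
                DataSource ds = new DataSource();
                ds.setPoolProperties(p);
                return ds;
            }

            // Low-volume app gets few connections, high-volume app gets more;
            // the database servers still see at most 5 + 10 + 15 = 30 connections in total.
            static final DataSource REPORTS = pool("jdbc:postgresql://db/reports", 5);
            static final DataSource CENTRAL = pool("jdbc:postgresql://db/central", 10);
            static final DataSource ORDERS  = pool("jdbc:postgresql://db/orders", 15);
        }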

    Read the article

  • Should a C++ constructor do real work?

    - by Wade Williams
    I'm struggling with some advice I have in the back of my mind but for which I can't remember the reasoning. I seem to remember at some point reading some advice (can't remember the source) that C++ constructors should not do real work. Rather, they should initialize variables only. The advice went on to explain that real work should be done in some sort of init() method, to be called separately after the instance was created. The situation is that I have a class that represents a hardware device. It makes logical sense to me for the constructor to call the routines that query the device in order to build up the instance variables that describe it. In other words, once new instantiates the object, the developer receives an object which is ready to be used, with no separate call to init() required. Is there a good reason why constructors shouldn't do real work? Obviously it could slow allocation time, but that wouldn't be any different from calling a separate method immediately after allocation. I'm just trying to figure out what gotchas I'm not currently considering that might have led to such advice.

    Read the article

  • JPA - Real primary key vs. generated ID for references

    - by Val
    I have ~10 classes, each of which has a composite key consisting of 2-4 values. One of the classes is the main one (let's call it "Center") and it is related to the others as one-to-one or one-to-many. Thinking about the correct way of describing this in JPA, I think I need to describe all the primary keys using @Embedded / @PrimaryKey annotations. Question #1: My concern is - does it mean that at the database level I will have a number of additional columns in each table referring to "Center" equal to the number of columns in "Center"'s PK? If yes, is it possible to avoid this by using some artificial unique key for references? Could you please give an idea of how the real PK and the artificial one would need to be described in this case? Note: the reason why I would like to keep the real PK, and not just use a unique id as the PK, is that my application loads data from external data sources, and sometimes they return records which I already have in the local database. If a unique ID were used as the PK, then for new records I wouldn't be able to do a data update, since the unique ID is not available for freshly downloaded records. At the same time this is a normal scenario for the application, and it just needs to update or insert records depending on whether the real composite primary key matches. Question #2: All of the 10 classes have a common field "date", which I described in an abstract class that each of them extends. The "date" itself is never a key on its own, but it is always part of the composite key for each class, and the composite key is different for each class. To be able to use this field as part of the PK, should I describe it in each class, or is there any way to use it as is? I experimented with the @Embedded and @PrimaryKey annotations and always got an error that EclipseLink can't find the field described in the abstract class. Thank you in advance! PS. I'm using the latest version of EclipseLink and an H2 database.
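    On question #2 specifically, a minimal sketch of one standard JPA way to express this, assuming the shared date sits in a @MappedSuperclass and each concrete entity combines it with its own key columns via @IdClass (the Center/regionCode names are invented for the example):

        import java.io.Serializable;
        import java.util.Date;
        import java.util.Objects;
        import javax.persistence.Entity;
        import javax.persistence.Id;
        import javax.persistence.IdClass;
        import javax.persistence.MappedSuperclass;
        import javax.persistence.Temporal;
        import javax.persistence.TemporalType;

        // The shared "date" component of every composite key lives here once.
        @MappedSuperclass
        abstract class DatedEntity {
            @Id
            @Temporal(TemporalType.DATE)
            protected Date date;
        }

        // Illustrative entity whose natural key is (date, regionCode).
        @Entity
        @IdClass(CenterPk.class)
        class Center extends DatedEntity {
            @Id
            private String regionCode;

            private String name; // non-key data
        }

        // The id class mirrors every @Id attribute, including the inherited "date".
        class CenterPk implements Serializable {
            private Date date;
            private String regionCode;

            @Override public boolean equals(Object o) {
                if (!(o instanceof CenterPk)) return false;
                CenterPk p = (CenterPk) o;
                return Objects.equals(date, p.date) && Objects.equals(regionCode, p.regionCode);
            }
            @Override public int hashCode() { return Objects.hash(date, regionCode); }
        }

    On question #1, a foreign key that targets a composite primary key does have to repeat all of its columns; the usual way around that is an extra surrogate column with a unique constraint and a @JoinColumn whose referencedColumnName points at it, though provider support for non-PK join columns is worth verifying for EclipseLink.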

    Read the article

  • Which design pattern fits - strategy makes sense?

    - by user554833
    --Bump-- one desperate try to get someone's attention. I have a simple database table that stores a list of users who have subscribed to folders, either by email OR to show up on the site (only in the web UI). In the storage table this is controlled by a number (1 = show on site, 2 = by email). When rendering the UI I need to show a checkbox next to each of the folders to which the user has subscribed (both email and on site). There is a separate table which stores a set of default subscriptions that apply to a user if that user has not expressed a subscription; this is basically a folder ID and a virtual group name. However, email subscriptions do not count when applying these default groups, so if there is no "on site" subscription, the default group applies. That's the rule. How about a strategy pattern here (pseudo code): Interface ISubscription public ArrayList GetSubscriptionData(Pass query object) Public class SubscriptionWithDefaultGroup Implement ArrayList GetSubscriptionData(Pass query object) Public class SubscriptionWithoutDefaultGroup Implement ArrayList GetSubscriptionData(Pass query object) Public class SubscriptionOnlyDefaultGroup Implement ArrayList GetSubscriptionData(Pass query object) Does this even make sense? I would be more than glad to receive any criticism / help / notes. I am learning. Cheers
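    The pseudo code above does map fairly directly onto the Strategy pattern. A minimal Java sketch, keeping the class names from the question while inventing everything else (SubscriptionQuery, the row type, the factory) purely for illustration:

        import java.util.ArrayList;
        import java.util.List;

        // Stand-ins invented for the sketch; the real query object and row type
        // would come from the existing data layer.
        class SubscriptionQuery { long userId; }
        class Subscription { long folderId; boolean onSite; }

        interface ISubscription {
            List<Subscription> getSubscriptionData(SubscriptionQuery query);
        }

        // Explicit "on site" subscriptions only; the default group is ignored.
        class SubscriptionWithoutDefaultGroup implements ISubscription {
            public List<Subscription> getSubscriptionData(SubscriptionQuery query) {
                return new ArrayList<>(); // stub: load rows where type = 1 for query.userId
            }
        }

        // No "on site" subscription exists, so only the default-group folders apply.
        class SubscriptionOnlyDefaultGroup implements ISubscription {
            public List<Subscription> getSubscriptionData(SubscriptionQuery query) {
                return new ArrayList<>(); // stub: load the default group's folder ids
            }
        }

        // Explicit subscriptions merged with the default group, if that case is ever needed.
        class SubscriptionWithDefaultGroup implements ISubscription {
            public List<Subscription> getSubscriptionData(SubscriptionQuery query) {
                return new ArrayList<>(); // stub: union of the two queries above
            }
        }

        // Encodes the rule from the question: email subscriptions do not suppress
        // the default group; only an "on site" subscription does.
        class SubscriptionStrategyFactory {
            ISubscription forUser(boolean hasOnSiteSubscription) {
                return hasOnSiteSubscription
                        ? new SubscriptionWithoutDefaultGroup()
                        : new SubscriptionOnlyDefaultGroup();
            }
        }

    The UI code then only ever calls getSubscriptionData on whatever the factory hands back, and the "default group vs. explicit subscription" rule stays in one place.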

    Read the article

  • How are these numbers converted to a readable Date/Time string?

    - by duckwizzle
    I have 2 XML files I'm reading - one has a date/time attribute that's readable (e.g. May 1, 2010 12:03:14 AM) and the other... not so much (e.g. 1272686594492). Both files have the complicated date/time format, but only the newer one has the readable version. I cannot figure out how to make the complicated version readable. Any ideas? The numbers are in the pastebin below. http://pastebin.com/HMLEAGhf Thanks!
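    The long value looks like a Unix timestamp in milliseconds: 1272686594492 ms after 1970-01-01 UTC falls on May 1, 2010, which lines up with the readable attribute once a UTC-4 offset is applied. A minimal Java sketch of the conversion (the output pattern and the America/New_York zone are assumptions for the example):

        import java.time.Instant;
        import java.time.ZoneId;
        import java.time.format.DateTimeFormatter;
        import java.util.Locale;

        public class EpochMillisDemo {
            public static void main(String[] args) {
                long raw = 1272686594492L; // value from the second XML file

                // Assumption: the readable file was produced in a UTC-4 zone (e.g. US Eastern DST).
                DateTimeFormatter fmt = DateTimeFormatter
                        .ofPattern("MMMM d, yyyy h:mm:ss a", Locale.US)
                        .withZone(ZoneId.of("America/New_York"));

                System.out.println(fmt.format(Instant.ofEpochMilli(raw)));
                // prints: May 1, 2010 12:03:14 AM
            }
        }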

    Read the article

  • Is it possible to cast the Elapsed Time function to Integer?

    - by nuvio
    I have the following function: (def elapsedtime (with-out-str (time (run-my-function)))) and I was wondering if it is possible to store only the integer value of the time, as I can only store a String at the moment... Any suggestions? Thanks a lot. UPDATE So I used this: (defmacro nsecs [expr] `(let [start# (. System (nanoTime))] ~expr (- (. System (nanoTime)) start#))) and then modified this: (def elapsedtime (nsecs (run-my-function argument1 argument2))) but it doesn't work; what am I doing wrong? "Exception in thread "AWT-EventQueue-0" java.lang.IllegalArgumentException: Wrong number of args (1) passed to: main$fn--105$nsecs"

    Read the article

  • How do I get the response time from a jQuery ajax call?

    - by Dumpen
    So I am working on a tool that can show how long a request to a page is taking. I am doing this by using jQuery Ajax (http://api.jquery.com/jQuery.ajax/) and I want to figure out the best way to get the response time. I found a thread (http://forum.jquery.com/topic/jquery-get-time-of-ajax-post) which describes using the Date object in JavaScript, but is this method really reliable? An example of my code could be this: $.ajax({ type: "POST", url: "some.php" }).done(function () { // Here I want to get how long it took to load some.php and use it further });

    Read the article
