Search Results


  • C# Keeping DLLs with the associated library

    - by SimonN
    We have a library built on the back of EldoS' SecureBlackbox. We use Copy Local to ensure that the appropriate runtime DLLs are included. If we now reference our library in another project, also with Copy Local, our library is copied into the bin folder of the main project but the EldoS SBB libraries aren't. We could reference SBB in the main project directly, but there are no direct calls to SBB, so any time the code is refactored the references may be removed as unused. What is the best way of handling this issue? Simon

  • Windows Phone: Updating backend datastore (via web service) while keeping UI very responsive

    - by will
    I am developing a Windows Phone app where users can update a list. Each update, delete, or add needs to be stored in a database that sits behind a web service. As well as ensuring all the operations made on the phone end up in the cloud, I need to make sure the app is really responsive and the user doesn't feel any lag time whatsoever. What's the best design to use here? Should each checkbox change and each text box edit fire a new thread to contact the web service? Or should I locally store a list of things that need to be updated and send them to the server in batches every so often (and what about the back button)? Am I missing another, even easier implementation? Thanks in advance,
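
    If the batching route is chosen, the core of it is an "outbox" that UI handlers append to and a background worker drains. A minimal sketch of that shape in Python (the queue, the flush interval and the send_batch callback are illustrative assumptions, not Windows Phone APIs):

         import queue
         import threading

         outbox = queue.Queue()           # pending edits, appended by UI handlers

         def record_edit(op, payload):
             outbox.put((op, payload))    # returns immediately; the UI never blocks

         def flush_worker(send_batch, interval=5.0):
             # Drain the outbox every few seconds and push one batched call to
             # the web service; a real app would re-queue batches that fail.
             while True:
                 batch = []
                 try:
                     batch.append(outbox.get(timeout=interval))
                     while True:
                         batch.append(outbox.get_nowait())
                 except queue.Empty:
                     pass
                 if batch:
                     send_batch(batch)

         threading.Thread(target=flush_worker, args=(print,), daemon=True).start()
         record_edit("update", {"item": 3, "done": True})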

  • OSGi bundle (or service) - how to register for a given time period?

    - by Alec
    Hello, all! Searching did not give me a hint, so: how can I handle the following situation? I'd love to have two OSGi implementations of the same interface: one is regular, the other should work (be active/present/whatever) only during a given time period (e.g. for the Christmas weeks :)). The main goal is to call the same interface without specifying any flags or properties and without manually switching the service ranking. The application should somehow switch implementations for this special period, doing the regular job before and after :) I'm a newbie, and maybe I don't completely understand the OSGi concept somewhere; sorry for that, and please give me a hint or a link. Sorry for my English. Using Felix/Equinox with Apache Aries.
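
    The calendar-window dispatch itself is simple; the OSGi-specific part would be registering the seasonal implementation with a higher service ranking only for that period. A language-neutral sketch of the selection logic in Python (class names and dates invented for illustration):

         from datetime import date

         class RegularGreeter:
             def greet(self):
                 return "Hello"

         class ChristmasGreeter:
             def greet(self):
                 return "Merry Christmas"

         def active_service(today=None, start=date(2010, 12, 20), end=date(2010, 12, 27)):
             # Return the seasonal implementation only inside the window;
             # callers use the same interface either way.
             today = today or date.today()
             return ChristmasGreeter() if start <= today <= end else RegularGreeter()

         print(active_service(date(2010, 12, 25)).greet())   # Merry Christmas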

  • How do you calculate expanding mean on time series using pandas?

    - by mlo
    How would you create a column (or columns) in the pandas DataFrame below where the new columns are the expanding mean/median of 'val' for each 'Mod_ID_x'? Imagine this as if it were time series data, where 'ID' 1-2 was on Day 1 and 'ID' 3-4 was on Day 2. I have tried every way I could think of but just can't seem to get it right.

         import pandas as pd

         left4 = pd.DataFrame({'ID': [1, 2, 3, 4],
                               'val': [10000, 25000, 20000, 40000],
                               'Mod_ID': [15, 35, 15, 42],
                               'car': ['ford', 'honda', 'ford', 'lexus']})
         right4 = pd.DataFrame({'ID': [3, 1, 2, 4],
                                'color': ['red', 'green', 'blue', 'grey'],
                                'wheel': ['4wheel', '4wheel', '2wheel', '2wheel'],
                                'Mod_ID': [15, 15, 35, 42]})
         df1 = pd.merge(left4, right4, on='ID').drop('Mod_ID_y', axis=1)
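
    A sketch of one way to get there, assuming a pandas version where Series.expanding is available (older releases spelled this pd.expanding_mean):

         df1 = df1.sort_values('ID')   # make sure rows are in time order
         grp = df1.groupby('Mod_ID_x')['val']
         df1['val_exp_mean'] = grp.transform(lambda s: s.expanding().mean())
         df1['val_exp_median'] = grp.transform(lambda s: s.expanding().median())
         # Use s.expanding().mean().shift(1) instead if each row should only
         # see strictly earlier rows.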

  • Best practices for storing & selecting time/date with php & mysql?

    - by Adam
    I often find myself storing data in a MySQL database and then wanting to display all sorts of stats about that data, specifically things like "how many rows do I have for this date, or that date". Does anyone know of (or could write) a good tutorial on this subject? Ideally a good tutorial would cover:

     - Best practices when storing the data (i.e. what formats to use and how to use them, server time vs. generated time, etc.)
     - Best practices when selecting data from the database with PHP (i.e. how to sort rows by date, how to retrieve only rows from a certain date or a certain hour, etc.)
     - Timezones and other issues that might come up.

     Thanks in advance.

  • Get "2:35pm" instead of "02:35PM" from Python date/time?

    - by anonymous coward
    I'm still a bit slow with Python, so I haven't got this figured out beyond what's obviously in the docs, etc. I've worked with Django a bit, where they've added some datetime formatting options via template tags, but in regular Python code how can I get the 12-hour format without a leading zero? Is there a straightforward way to do this? I'm looking at the 2.5 and 2.6 docs for strftime() and there doesn't seem to be a formatting option for this case. Should I be using something else? Feel free to include any other time-formatting tips that aren't obvious from the docs. =)
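
    A sketch of the two usual workarounds (the lstrip approach is portable and works on 2.5/2.6; %-I is a glibc extension rather than standard strftime):

         from datetime import datetime

         t = datetime(2010, 5, 1, 14, 35)
         print(t.strftime("%I:%M%p").lstrip("0").lower())   # 2:35pm, portable
         # On Linux/glibc only, %-I suppresses the leading zero directly:
         #     t.strftime("%-I:%M%p")   ->   '2:35PM'
         # (Windows uses %#I for the same thing.)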

  • Sort vector<int>(n) in O(n) time using O(m) space?

    - by Adam
    I have a vector<unsigned int> vec of size n. Each element in vec is in the range [0, m], there are no duplicates, and I want to sort vec. Is it possible to do better than O(n log n) time if you're allowed to use O(m) space? In the average case m is much larger than n; in the worst case m == n. Ideally I want something O(n). I get the feeling that there's a bucket-sort-ish way to do this:

     1. unsigned int aux[m];
     2. aux[vec[i]] = i;
     3. Somehow extract the permutation and permute vec.

     I'm stuck on how to do 3. In my application m is on the order of 16k. However, this sort is in the inner loops and accounts for a significant portion of my runtime.
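
    Since the values are unique and bounded by m, the permutation step can be skipped entirely: mark which values occur, then read them back in ascending order. A sketch of that idea in Python, O(n + m) time and O(m) space; translating it to the vector<unsigned int> case is mechanical:

         def sort_bounded_unique(vec, m):
             present = [False] * (m + 1)   # O(m) auxiliary space
             for v in vec:                 # one pass to mark occupied values
                 present[v] = True
             # One pass over [0, m] to emit the values in sorted order.
             return [v for v in range(m + 1) if present[v]]

         print(sort_bounded_unique([5, 2, 9, 0], 9))   # [0, 2, 5, 9]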

  • Cisco ASA5505 won't sync with NTP

    - by Martijn Heemels
    Today I noticed that the clock on my Cisco ASA 5505 firewall was running about 15 minutes late, which surprised me since I've set up the NTP client. My two NTP servers, 10.10.0.1 and 10.10.0.2, are virtualized Windows Server 2008 R2 domain controllers, and both have the correct time. As shown below, the ASA knows about the two servers, can ping them and seems to poll them periodically, so I suppose it can reach them both. The ASA claims its time source is NTP; however, the clock is unsynchronized and neither host is marked as synced.

     Result of the command: "ping 10.10.0.1"

         Type escape sequence to abort.
         Sending 5, 100-byte ICMP Echos to 10.10.0.1, timeout is 2 seconds:
         !!!!!
         Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms

     Result of the command: "sh ntp ass"

         address         ref clock    st   when   poll  reach  delay  offset    disp
         ~10.10.0.1      .LOCL.        1     78   1024    377    0.5  643.69    17.0
         ~10.10.0.2      10.10.0.1     2    190   1024    377    0.9  655.91    58.4
         * master (synced), # master (unsynced), + selected, - candidate, ~ configured

     Result of the command: "sh ntp stat"

         Clock is unsynchronized, stratum 16, no reference clock
         nominal freq is 99.9984 Hz, actual freq is 99.9984 Hz, precision is 2**6
         reference time is 00000000.00000000 (07:28:16.000 CEST Thu Feb 7 2036)
         clock offset is 0.0000 msec, root delay is 0.00 msec
         root dispersion is 0.00 msec, peer dispersion is 0.00 msec

     Result of the command: "sh clock detail"

         10:33:23.769 CEDT Tue Jun 26 2012
         Time source is NTP
         UTC time is: 08:33:23 UTC Tue Jun 26 2012
         Summer time starts 02:00:00 CEST Sun Mar 25 2012
         Summer time ends 03:00:00 CEDT Sun Oct 28 2012

     I've tried the basic steps of manually setting the time and removing and re-adding the timeservers, to no avail. My ASA's NTP config is simply:

         ntp server 10.10.0.1
         ntp server 10.10.0.2

     Do I need to enable authentication to use a Windows NTP server? Any thoughts?

  • Remove duplicates from a sorted ArrayList while keeping some elements from the duplicates

    - by js82
    Okay, at first I thought this would be pretty straightforward, but I can't think of an efficient way to solve it. I figured out a brute-force way, but that's not very elegant. I have an ArrayList<Contacts>. Contacts is a VO class that has multiple members: name, region, id. There are duplicates in the ArrayList because different regions appear multiple times, and the list is sorted by ID. Here is an example:

     Entry 0 - Name: John Smith; Region: N; ID: 1
     Entry 1 - Name: John Smith; Region: MW; ID: 1
     Entry 2 - Name: John Smith; Region: S; ID: 1
     Entry 3 - Name: Jane Doe; Region: NULL; ID: 2
     Entry 4 - Name: Jack Black; Region: N; ID: 3
     Entry 6 - Name: Jack Black; Region: MW; ID: 3
     Entry 7 - Name: Joe Don; Region: NE; ID: 4

     I want to transform the list to the one below by combining the regions for the same ID, so the final list has only 4 distinct elements with the regions merged:

     Entry 0 - Name: John Smith; Region: N,MW,S; ID: 1
     Entry 1 - Name: Jane Doe; Region: NULL; ID: 2
     Entry 2 - Name: Jack Black; Region: N,MW; ID: 3
     Entry 3 - Name: Joe Don; Region: NE; ID: 4

     What are your thoughts on the optimal way to solve this? I am not looking for actual code, but ideas or tips on the best way to get it done. Thanks for your time!!!
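
    Since the list is already sorted by ID, one linear pass is enough: compare each element to the last merged one and concatenate regions on a match. A sketch of the idea in Python (field names assumed from the example above):

         def merge_contacts(contacts):
             # contacts: list of dicts sorted by 'id', e.g.
             #     {'name': 'John Smith', 'region': 'N', 'id': 1}
             merged = []
             for c in contacts:
                 if merged and merged[-1]['id'] == c['id']:
                     merged[-1]['region'] += ',' + c['region']   # same ID: combine
                 else:
                     merged.append(dict(c))                      # new ID: keep a copy
             return merged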

  • Rails: keeping DRY with ActiveRecord models that share similar complex attributes

    - by Greg
    This seems like it should have a straightforward answer, but after much time on Google and SO I can't find it. It might be a case of missing the right keywords. In my RoR application I have several models that share a specific kind of string attribute with special validation and other functionality. The closest similar example I can think of is a string that represents a URL. This leads to a lot of duplication in the models (and even more duplication in the unit tests), but I'm not sure how to make it more DRY. I can think of several possible directions:

     1. Create a plugin along the lines of the "validates_url_format_of" plugin, but that would only make the validations DRY.
     2. Give this special string its own model, but this seems like a very heavy solution.
     3. Create a Ruby class for this special string, but how do I get ActiveRecord to associate this class with the model attribute that is a string in the db?

     Number 3 seems the most reasonable, but I can't figure out how to extend ActiveRecord to handle anything other than the base data types. Any pointers? Finally, if there is a way to do this, where in the folder hierarchy would you put the new class that is not a model? Many thanks.

  • Keeping the UI responsive while parsing a very large logfile

    - by Carlos
    I'm writing an app that parses a very large logfile, so that the user can see the contents in a treeview format. I've used a BackGroundWorker to read the file, and as it parses each message, I use a BeginInvoke to get the GUI thread to add a node to my treeview. Unfortunately, there's two issues:

     1. The treeview is unresponsive to clicks or scrolls while the file is being parsed. I would like users to be able to examine (ie expand) nodes while the file is parsing, so that they don't have to wait for the whole file to finish parsing.
     2. The treeview flickers each time a new node is added.

     Here's the code inside the form:

         private void btnChangeDir_Click(object sender, EventArgs e)
         {
             OpenFileDialog browser = new OpenFileDialog();
             if (browser.ShowDialog() == DialogResult.OK)
             {
                 tbSearchDir.Text = browser.FileName;
                 BackgroundWorker bgw = new BackgroundWorker();
                 bgw.DoWork += (ob, evArgs) => ParseFile(tbSearchDir.Text);
                 bgw.RunWorkerAsync();
             }
         }

         private void ParseFile(string inputfile)
         {
             FileStream logFileStream = new FileStream(inputfile, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
             StreamReader LogsFile = new StreamReader(logFileStream);
             while (!LogsFile.EndOfStream)
             {
                 string Msgtxt = LogsFile.ReadLine();
                 Message msg = new Message(Msgtxt.Substring(26)); // Reads the text into a class with appropriate members
                 AddTreeViewNode(msg);
             }
         }

         private void AddTreeViewNode(Message msg)
         {
             TreeNode newNode = new TreeNode(msg.SeqNum);
             BeginInvoke(new Action(() =>
             {
                 treeView1.BeginUpdate();
                 treeView1.Nodes.Add(newNode);
                 treeView1.EndUpdate();
                 Refresh();
             }));
         }

     What needs to be changed?

  • Netty options for real-time distribution of small messages to a large number of clients?

    - by user439407
    I am designing a (near) real-time Netty server to distribute a large number of very small messages to a large number of clients across the internet. In internal go-as-fast-as-you-can testing, I found that I could do 10k clients, no sweat, but now that we are trying to go across the internet, where the latency, bandwidth, etc. vary pretty wildly, we are running into the dreaded OutOfMemory issues, even with 2 gigs of RAM. I have tried various workarounds (setting the socket stack sizes smaller, setting high and low water marks, cancelling things that are too old), but they seem to only help a little bit. What would be some good ways to optimize Netty for sending large numbers of small messages without significant delays? Also, the bulk of the traffic consists of one kind of message that I don't particularly care about if it doesn't arrive. I would use UDP, but because we don't control the clients, that's not really a possibility. Is it possible to set a separate timeout solely for this kind of message without affecting the other messages? Any insight you could offer would be greatly appreciated.
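
    Independent of Netty's specifics, the usual cure for this kind of OutOfMemory is a bounded per-client outbound queue that evicts droppable messages instead of buffering without limit. A sketch of the bookkeeping in Python; the size limit and the droppable flag are illustrative assumptions:

         from collections import deque

         class ClientQueue:
             # Bounded outbound queue: when full, drop the oldest droppable
             # message rather than growing the buffer (the OOM risk).
             def __init__(self, maxlen=1000):
                 self.q = deque()
                 self.maxlen = maxlen

             def offer(self, msg, droppable=True):
                 if len(self.q) >= self.maxlen:
                     for i, (m, d) in enumerate(self.q):
                         if d:
                             del self.q[i]   # evict one stale droppable message
                             break
                     else:
                         return False        # nothing droppable: reject the new one
                 self.q.append((msg, droppable))
                 return True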

  • Git repos over multiple machines - backups and keeping in sync

    - by a-or-b
    I'm new to git, so please feel free to RTFM me... I have multiple development sites (none of which can communicate with each other via a network) and am working on a few projects (with a few people) at any one time. What I would ideally have at each site is a centralized repository that can be pulled from, while development occurs in our own (personal) repos. Then I would like to be able to sync the centralized repos across sites (via USB key, for example). I want a centralized repo at each location because (1) I'm new to git and do break my (personal) local repo by playing around, and (2) some projects get put on hold, so I want to be able to free up disk space by deleting them. This is the "backup" part of my question. I was also hoping to be able to use 'git clone --bare' for my centralized repos (and the USB key repos too?) since we don't need the full checkout, just the git benefits. However, I can't seem to get a bare repo to work as a repo I can push from. I've used 'git remote' to set up a remote origin (similar to http://toolmantim.com/thoughts/setting_up_a_new_remote_git_repository), but I can't get 'git push' to work - it seems I need a checked-out repo. Does anyone else use this sort of repo/development structure, or is there something fundamental about git usage that I'm missing? A solution I thought about that might not work: if I had a 'git clone --bare' at each site, and a git repo on my removable media with remotes set up for each site, then I could sync my USB key with each repo (via 'pull'). But then can I update the site repo from my USB key? Could I push from the USB key?

  • Keeping track of dependency revisions

    - by Samaursa
    I have a project with several dependencies that live in various repositories. Each time I commit changes to my project, I write down the revision numbers of all the dependent repositories, so that if I ever have to come back to this revision (let's call it 5), I immediately know which revisions of the dependent repositories revision 5 is guaranteed to work with, can update the dependencies to the specified revisions, and can compile and run the project. So for example if I have:

     Dep1 @ revision 10
     Dep2 @ revision 20
     Dep3 @ revision 10
     Proj @ revision 35

     and let's say that when Proj was on revision 17, the Dep1 revision was 5, the Dep2 revision was 13 and the Dep3 revision was 3, then in my SVN logs I recorded something like this:

     !! Works with Dep1 Rev 5, Dep2 Rev 13, Dep3 Rev 3

     To me this seems primitive, and it makes me believe that there is a better way to do it. In one of my other questions, the Ivy dependency manager was recommended. I have not looked at it in detail yet (it seems complicated, and is yet another thing I must learn). To me it seems like the log of SVN (and Mercurial etc.) could have been split into Log and Dependencies (if any), where the latter could be switched off if there were no dependencies (unless of course I am unaware of an easier/better solution). This would allow for a cleaner log that maybe even warned at each new commit to check the previously defined dependencies again and make sure they have not changed. So, I was wondering how everyone manages this situation, and whether you have any tips, techniques, or programs you can suggest. Thank you.
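
    A lightweight middle ground, short of adopting Ivy, is to commit a small pin file alongside the project instead of burying the revisions in log messages. A sketch of such a (hypothetical) manifest and a reader in Python:

         # deps.txt, committed with the project:
         #     Dep1 5
         #     Dep2 13
         #     Dep3 3
         def read_pins(path="deps.txt"):
             pins = {}
             with open(path) as f:
                 for line in f:
                     if line.strip():
                         name, rev = line.split()
                         pins[name] = int(rev)
             return pins   # e.g. {'Dep1': 5, 'Dep2': 13, 'Dep3': 3}

    Subversion users can get a similar effect natively by pinning svn:externals entries to fixed revisions, so updating the working copy pulls the matching dependency revisions automatically.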

  • Keeping the number of objects and event-listeners on stage as low as possible

    - by DevEight
    Hello. I am creating a site with lots of big scrollable text boxes. Each text-box object contains some text and two buttons to scroll up/down with. Each scroll button has an event listener so the text moves when you click it. These text boxes are stacked on top of each other, with all except one having an alpha of 0. When I want to change which text box is active, I move it to the front and call a small TweenLite animation. To the left (outside of the text-box objects) I have an object similar to a menu. It also has about 12 or so event listeners (one for every button). This turns out to cause A LOT of lag, and it's very troublesome for my laptop to run. What I need help with is reducing the number of event listeners on the stage and also the number of text boxes. I was thinking of adding the text boxes using ActionScript so that I only have one on the stage at a time, but I couldn't figure out how to do it. I also thought it might be better to use just one big event listener and decide from mouseX and mouseY which button the user is trying to push. Are there any better alternatives? If so, please elaborate on how to do them.

  • Keeping Linq to SQL alive when using ViewModels (ASP.NET MVC)

    - by Kohan
    I have recently started using custom ViewModels (for example, CustomerViewModel):

         public class CustomerViewModel
         {
             public IList<Customer> Customers { get; set; }
             public int ProductId { get; set; }

             public CustomerViewModel(IList<Customer> customers, int productId)
             {
                 this.Customers = customers;
                 this.ProductId = productId;
             }

             public CustomerViewModel() { }
         }

     ... and am now passing them to my views instead of the entities themselves (for example, var custs = repository.getAllCusts(id)), as it seems good practice to do so. The problem I have encountered is that when using ViewModels, by the time it gets to the view I have lost the ability to lazy load on Customers. I do not believe this was the case before using them. Is it possible to retain the ability to lazy load while still using ViewModels, or do I have to eager load when using this method? Thanks, Kohan.

  • Abstraction: The War between solving the problem and a general solution.

    - by Bryan Harrington
    As a programmer, I find myself in a dilemma: I want to make my program as abstract and as general as possible. Doing so usually allows me to reuse my code and have a more general solution for a problem that might (or might not) come up again. Then this voice in my head says, "Just solve the problem, dummy, it's that easy! Why spend more time than you have to?" We have all indeed faced this question, with Abstraction on your right shoulder and Solve-it-stupid sitting on the left. Which do you listen to, and how often? What is your strategy for this? Should you abstract everything?

  • Keeping the DI-container usage in the composition root in Silverlight and MVVM

    - by adrian hara
    It's not quite clear to me how I can design so that I keep the reference to the DI container in the composition root for a Silverlight + MVVM application. I have the following simple usage scenario: there's a main view (perhaps a list of items) and an action to open an edit view for one single item. So the main view has to create and show the edit view when the user takes the action (e.g. clicks some button). For this I have the following code:

         public interface IView
         {
             IViewModel ViewModel { get; set; }
         }

     Then, for each view that I need to be able to create, I have an abstract factory, like so:

         public interface ISomeViewFactory
         {
             IView CreateView();
         }

     This factory is then declared a dependency of the "parent" view model, like so:

         public class SomeParentViewModel
         {
             public SomeParentViewModel(ISomeViewFactory viewFactory)
             {
                 // store it
             }

             private void OnSomeUserAction()
             {
                 IView view = viewFactory.CreateView();
                 dialogService.ShowDialog(view);
             }
         }

     So all is well until here, no DI container in sight :). Now comes the implementation of ISomeViewFactory:

         public class SomeViewFactory : ISomeViewFactory
         {
             public IView CreateView()
             {
                 IView view = new SomeView();
                 view.ViewModel = ????
             }
         }

     The "????" part is my problem, because the view model for the view needs to be resolved from the DI container so that it gets its dependencies injected. What I don't know is how I can do this without having a dependency on the DI container anywhere except the composition root. One possible solution would be to have a dependency on the view model injected into the factory, like so:

         public class SomeViewFactory : ISomeViewFactory
         {
             public SomeViewFactory(ISomeViewModel viewModel)
             {
                 // store it
             }

             public IView CreateView()
             {
                 IView view = new SomeView();
                 view.ViewModel = viewModel;
                 return view;
             }
         }

     While this works, it has a problem: since the whole object graph is wired up "statically" (i.e. the "parent" view model gets an instance of SomeViewFactory, which gets an instance of SomeViewModel, and these live as long as the "parent" view model lives), the injected view model implementation is stateful, and if the user opens the child view twice, the second time the view model will be the same instance and still have the state from before. I guess I could work around this with an "Initialize" method or something similar, but it doesn't smell quite right. Another solution might be to wrap the DI container and have the factories depend on the wrapper, but it'd still be a DI container "in disguise" there :) Any thoughts on this are greatly appreciated. Also, please forgive any mistakes or rule-breaking, since this is my first post on Stack Overflow :) Thanks! PS: my current solution is that the factories know about the DI container, and it's only they and the composition root that have this dependency.
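
    One way out, sketched here in Python rather than C#/Silverlight, is to let the composition root hand the factory a closure over the container, so only the root ever touches it. The names below are illustrative, not from any particular DI library:

         class Container:
             # Stand-in for a real DI container: maps a key to a provider.
             def __init__(self):
                 self._providers = {}

             def register(self, key, provider):
                 self._providers[key] = provider

             def resolve(self, key):
                 return self._providers[key]()   # fresh instance per call

         # --- composition root: the only place that sees the container ---
         container = Container()
         container.register("ISomeViewModel", lambda: {"state": []})

         def some_view_factory():
             # Each call resolves a new view model, so no state leaks
             # between two openings of the child view.
             return {"view_model": container.resolve("ISomeViewModel")}

         # The parent view model receives some_view_factory, never the container.
         v1, v2 = some_view_factory(), some_view_factory()
         assert v1["view_model"] is not v2["view_model"]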

  • Keeping files or database records? Java and Python

    - by danpalmer
    My website will use a neural network to predict things based on user data. The user can select the data to be used in training the network and then use their trained network to predict things. I am using a framework to create, train and query the networks; this uses Java. The framework has persistence for saving a network to an XML file. What is the best way to store these files? I can see several potential ideas, but I need help choosing which is best:

     1. Save each network to a separate XML file with a name that is stored in the database, and load this each time.
     2. Save all the networks to the same XML file, with each network having a different name that is stored in the database.
     3. Somehow pass what would normally be written to an XML file to the Django site for writing to the database. This would need to be returned to the Java code when a prediction needs to be made.

     I am able to do 1 or 2, but I think their performance will be quite limited, and I am on shared hosting at the moment, so I don't know how pleased the hosts would be with thousands of files. Also, after adding a few thousand records to one XML file, I noticed a massive performance hit on saving to it. If I were able to implement option 3 somehow, I think it would be best: no issues with separate processes accessing the database, better performance, and no files lying around. However, the part of the neural network framework I am using (Encog) that saves to a file needs access to a Java File object, not a string that could be saved to a database. Unless there is some Java magic I can do here (I know very little Java), the only way I can see of doing this would be with temporary files, but I don't know if this is the correct way to do it. I would appreciate any ideas on the best way to implement any of the above three ideas, or any alternatives. Thanks!
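
    The temporary-file route is workable: write through the framework to a temp file, read the bytes back, and store them as a blob. Here is the shape of it in Python, with the framework's save call stubbed out (the Java side would mirror this with File.createTempFile):

         import os
         import tempfile

         def network_to_bytes(save_to_path):
             # save_to_path: stand-in for the framework call that writes
             # the network's XML to a given file path.
             fd, path = tempfile.mkstemp(suffix=".xml")
             os.close(fd)
             try:
                 save_to_path(path)          # framework writes its XML here
                 with open(path, "rb") as f:
                     return f.read()         # blob ready for a database column
             finally:
                 os.remove(path)

         # Usage with a stub "framework":
         blob = network_to_bytes(lambda p: open(p, "w").write("<network/>"))
         print(len(blob))                    # 10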

  • Closed-loop Recommendation Engines: Analyst Insight report on Oracle Real-Time Decisions (RTD)

    - by Mike.Hallett(at)Oracle-BI&EPM
    In November 2011, Helena Schwenk of MWD Advisors, published her analysis on Oracle Real-Time Decisions.  She summarizes as follows: "In contrast to other popular approaches to implementing predictive analytics, RTD focuses on learning from each interaction and using these insights to adjust what is presented, offered or displayed to a customer. Likewise its capabilities for optimising decisions within the context of specific business goals and a report-driven framework for assessing the performance of models and decisions make it a strong contender for organisations that want to continuously improve decision making as part of a customer experience marketing, e-commerce optimisation and operational process efficiency initiative." This is an outstanding report to share with a prospect or client as it goes into great detail about the product and its capabilities.  It also highlights the differences in Oracle's Real-Time Decisions product vs. other closed loop recommendation engines. I encourage you to share this report with your clients and prospects. It can be downloaded directly from here - MWD Advisors Vendor Profile: Oracle Real-Time Decisions. (expires in November 2012) Highlights: "At the core of RTD lies a learning engine that combines business rules and adaptive predictive models to deliver recommendations to operational systems while simultaneously learning from experiences." "While closed-loop recommendation engines are becoming more prevalent... there are a number of features that distinguish RTD: It makes its decisions in the context of the business objectives, such as maximising customer revenue or reducing service costs Its support for operational integration offers organisations some flexibility in how they implement the offering."

  • Hiding the Flash Message After a Time Delay

    - by Madhan ayyasamy
    Hi Friends! The flash hash is a great way to provide feedback to your users. Here is a quick tip for hiding the flash message after a period of time if you don't want to leave it lingering around.

     First, add this line to the head of your layout to ensure the Prototype and script.aculo.us JavaScript libraries are loaded:

         <%= javascript_include_tag :defaults %>

     Next, add the following to either your layout (recommended), your view templates or a partial, depending on your needs. I usually add this to a partial and include the partial in my layouts.

         <%= content_tag :div, flash[flash_type], :class => "flash", :id => flash_type %>
         <% content_tag :script, :type => "text/javascript" do %>
             setTimeout("new Effect.Fade('<%= flash_type %>');", 10000);
         <% end %>

     This will wrap the flash message in a div with class='flash' and id='error', 'notice' or 'warn', depending on the flash key specified. The value '10000' is the time in milliseconds before the flash will disappear - in this case, 10 seconds. This function looks pretty good, and little JavaScript stunts like this can help make your site feel more professional. It's also worth bearing in mind, though, that not everybody can see well or read as quickly as others, so this may not be suitable for every application.

     Update: As Mitchell has pointed out (see comments below), it may be better to set the flash_type as the div class rather than its id. If there is the possibility that you'll be showing more than one flash message per page, setting the flash_type as the div id will result in your HTML/XHTML becoming invalid, because the same unique identifier will be used more than once per page. Here is a slightly more complex version of the method shown above that hides all divs with class 'flash' after a time delay, achieving the same effect while keeping your code valid with more than one flash message:

         <%= content_tag :div, flash[flash_type], :class => "flash #{flash_type}" %>
         <% content_tag :script, :type => "text/javascript" do %>
             setTimeout("$$('div.flash').each(function(flash){ flash.hide(); })", 10000);
         <% end %>

     In this example the div id is not set at all. Instead, each flash div has class 'flash' and also a class for the type of flash message ('error', 'warning' etc.). Have a Great Day..:)

  • 10 Tech Products Ahead of Their Time [Video]

    - by Jason Fitzpatrick
    Sometimes a product just can't help but be too far ahead of its time to be adopted. Check out these 10 products that had their moment of glory a moment (or a decade) too soon. At Mashable they've gathered up 10 products that hit the market too soon for people to really appreciate them. Among them, as seen in the video above, is a super simple internet-focused computer. At the time it hit the market, people simply didn't get the value of having a cheap, easy-to-use internet terminal. It probably didn't help much that the 1990s internet didn't have the plethora of powerful and useful web-based applications we have now. Nonetheless, we now have tons of lightweight and "underpowered" devices focused on the internet experience (like netbooks, iPads, smartphones, Chromebooks, and more). Hit up the link below to see the 9 other gems from their collection of products ahead of their time. 10 Tech Products Ahead of Their Time [Mashable]

  • MMORPG design for time-limited players

    - by Philipp
    I believe that there is a significant market of players who would enjoy the exploration and interaction aspects of MMORPGs but simply don't have the time for the endless grinding marathons that are part of the average MMORPG. MMORPGs are all about interaction between players, but when different players have different amounts of time to invest in a game, those with less time to spend will soon lag behind their power-leveling friends and won't be able to interact with them anymore. One way to solve this would be to limit the progress a player can achieve per day, so that it simply doesn't make sense to play more than one or two hours a day. But even the busiest casual players sometimes like to spend a whole Sunday afternoon playing a video game, and just stopping them after two hours would be really frustrating. It also creates pressure to use the daily progress limit every day, because otherwise the player would feel like they were wasting something; this pressure would be detrimental for casual gamers. What else could be done to level the playing field between those players who play 40+ hours a week and those who can't play more than 10?
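
    One widely used alternative inverts the cap: the "rested bonus" design, where time spent offline accrues a pool of boosted progress instead of playtime being cut off. Nobody is ever locked out, and the progress gap between light and heavy players narrows. A toy Python sketch of the idea, with every constant invented for illustration:

         def effective_progress(hours_played, hours_offline, rate=0.5, pool_cap=30.0):
             # Offline time fills a "rested" pool (capped); while it lasts,
             # play earns double progress. The cap preserves some edge for
             # heavy players while compressing the overall gap.
             pool = min(hours_offline * rate, pool_cap)
             boosted = min(hours_played, pool)    # hours consumed at 2x
             return hours_played + boosted

         print(effective_progress(10, 158))   # casual week: 20.0 progress
         print(effective_progress(40, 128))   # hardcore week: 70.0 progress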

  • Writing Large Portions Of Code Then Debugging?

    - by The Floating Brain
    Lately I have been writing a game engine, and I have been writing a lot of "foundation stuff" (standard interfaces, modules, a message system, etc.). But I have noticed a pattern: a lot of the stuff is interdependent, and I cannot debug until everything is done, hence I do not debug for about 3 to 5 hours at a time. I am wondering whether this is an acceptable practice for this part of the project, and if not, whether anyone can give me some advice. Update: I downloaded some code metrics tools, and my program's cyclomatic complexity is 1.52, which as I understand it is good and should correlate with high cohesion. If I am wrong, please correct me.

  • Keeping track of business rules within IT department?

    - by evaldas-alexander
    I am looking for the best way to keep track of business rules for both developers and everybody else (support staff/management) in a startup environment. The challenge is that our business model requires quite a lot of different business rules, which are created pretty much on the fly and evolve organically after that. After running this project for 3+ years, we have so many such rules that often the only way to be sure what the application is supposed to do in a certain situation is to find the module responsible for that process and analyze its code and comments. That is all fine as long as you have one single developer who created the entire application from scratch, but every new developer needs to go over pretty much the entire codebase in order to understand how the application works. An even bigger problem is that non-technical employees don't even have that option and are therefore forced to ask me pretty much every day how a certain case would be handled by the application.

     A quick example: we only start charging for our customer campaigns once they have been active for at least 72 hours, but at the same time we stop creating invoices for campaigns that belong to insolvent accounts, and we close such accounts within a month of the first failed charge. That does not apply to accounts that are set to "non-chargeable", which most commonly belong to us, since we use the service ourselves. The invoices are created on the 1st of each month and include charges from the previous month plus any current balance the account might have. However, some customers are charged only 4 days after their invoice has been generated, due to issues with their billing department. In addition, invoices are also created when a customer deactivates his campaign, but that can only be done once the campaign is no longer under the mandatory 6-month contract, unless an account manager approves early deactivation.

     I know that's quite a lot of rules to take into account when answering the question "when do we bill our customers", but actually I could still append an asterisk to the end of each sentence to disclose some rare exceptions. Of course, it would be easiest to keep the business rules to a minimum, but we need to adapt to a changing marketplace - i.e. less than a year ago we had no contracts whatsoever.

     One idea I've had so far is a simplistic wiki with categories corresponding to areas such as "Account activation", "Invoicing", "Collection procedures" and so on. Another idea would be a giant interactive flowchart showing the entire customer life cycle, from prospecting to account deactivation. What are your experiences/suggestions?
