Search Results

Search found 1068 results on 43 pages for 'strategies'.


  • Laptop Backup Synch to the Data Center Without VPN

    - by Sameer
    We would like to back up (or synchronize) our users' laptops to the data center, and we're looking for suggestions/alternatives for syncing them to the data center in a way the users don't have to know about. Blue-sky "like to haves":
      • No VPN, but the transfer still needs to be secure
      • Admin can access all files
      • Global dedup
      • Selected file types only – MS Office documents, PSTs, PDFs
      • Incremental changes only
      • Right now 60 users, but it needs to scale (all Windows 7, 64-bit)
      • Budget can be allocated if we have to
    I don't mean to be vague, but I'm hoping to get some proven places to start.

    Read the article

  • Backup / Disaster Recovery, should I store RAR-compressed files?

    - by moraleida
    I'm in the process of recovering files from an accidentally formatted ext4 partition using PhotoRec. It had about 300 GB of data, of which I've already recovered about 30 GB. So far, the recovery of RAR-compressed files has been much more successful than the recovery of individual uncompressed files and ZIP-compressed files – in the sense that a lot of the recovered files/ZIPs were unreadable, while pretty much all of the RAR files were intact. Is there really such a relationship? Are RAR-compressed files less prone to corruption and thus easier to recover?

    Read the article

  • How can one keep secure, regular backups of his desktop on a remote server over ADSL? [on hold]

    - by Antonis Christofides
    I'm a system administrator and I use rsnapshot to back up some servers and duplicity for some others. Both work fine, each with advantages and disadvantages. Despite that, I'm at a loss on how to back up my own private files. I'd use duplicity to automatically back up my files to a remote server, but the problem is that once in a while I must do a full backup. My emails and important files are 9 GB, and I expect this to increase; uploading over ADSL at 1 Mbit/s would take 20 hours. Too much. rsnapshot doesn't require periodic full backups (only the first time), but it must run on the remote server and have a means to connect to my computer; if the server is compromised (or simply if the NSA decides to use it), my own machine is also compromised. Not good. The only solution I've come up with is to use encfs, use unison to synchronize the (encrypted) files to a remote server, and use duplicity or rsnapshot on the remote server to back up those files. In that case, the question is whether I can sync the files on many computers; is it possible for encfs to be used with the same key on many computers? I also suspect that if I append one character to the unencrypted file, its encrypted encfs counterpart might change a lot, so incrementals with duplicity would be less efficient – but that's not a big deal. Also, when I need to restore a file, finding the correct file to restore could be a pain because of filename encryption. I wonder whether there is any other possibility I've overlooked. Maybe I'm asking too much for my personal use, and I should settle for an external disk?
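
    A minimal sketch of the "encrypt locally, sync only ciphertext" variant of this idea, using encfs's reverse mode plus rsync (unison would work the same way). The paths, hostname and mount point below are made-up examples, not from the question, and the script assumes encfs, rsync and FUSE are installed:

    ```python
    #!/usr/bin/env python3
    """Sketch: back up only ciphertext to an untrusted remote server.

    Assumptions (not from the original post): /home/me/private holds the real
    files, /home/me/.cipher is a scratch mount point, and backup@example.org
    is a hypothetical remote host. `encfs --reverse` presents an encrypted
    view of the plaintext tree, so the remote server never sees cleartext.
    """
    import os
    import subprocess

    PLAIN = "/home/me/private"
    CIPHER_VIEW = "/home/me/.cipher"     # reverse-encrypted view of PLAIN
    REMOTE = "backup@example.org:backups/desktop/"

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    os.makedirs(CIPHER_VIEW, exist_ok=True)

    # 1. Expose an encrypted view of the plaintext (prompts for the encfs
    #    passphrase; the .encfs6.xml config lives alongside the plaintext).
    run(["encfs", "--reverse", PLAIN, CIPHER_VIEW])
    try:
        # 2. Push only ciphertext; rsync transfers deltas, so there is no
        #    recurring 20-hour full upload.
        run(["rsync", "-a", "--delete", CIPHER_VIEW + "/", REMOTE])
    finally:
        # 3. Unmount the encrypted view.
        run(["fusermount", "-u", CIPHER_VIEW])
    ```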

    Read the article

  • How can I incrementally back up a large amount of data [with rsync]?

    - by Annan
    A website contains ~40 GB of images and files which need to be backed up. Rollbacks need to be possible daily for the last 30 days, and the backup server has less than 1.2 TB available. My idea is to have one full backup from 30 days ago, then incremental backups for the last 30 days. Each day, the oldest incremental backup is merged into the full backup and a new incremental backup is added. Can this strategy be implemented with rsync, and if so, how? Are there any problems with this plan? Is there a better plan? PS: I mean incremental backups, not backing up incrementally (which rsync does automatically).
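
    For the "how" with rsync: the usual trick is --link-dest, which makes each day's backup a complete directory tree while hard-linking unchanged files against the previous day's snapshot, so you get 30 independent restore points for roughly the cost of one full copy plus the daily churn. A rough sketch, assuming the script runs on the backup server and pulls from the web host (paths, hostname and retention below are illustrative, not from the question):

    ```python
    #!/usr/bin/env python3
    """Daily hard-link snapshots with rsync --link-dest (illustrative sketch).

    Assumed layout, not from the question: snapshots are stored as
    /backup/site/YYYY-MM-DD on the backup server and the site lives at
    web-host:/var/www/site/. Restoring any of the last 30 days is just a
    copy from the matching snapshot directory.
    """
    import datetime
    import os
    import shutil
    import subprocess

    SOURCE = "web-host:/var/www/site/"   # hypothetical source
    DEST_ROOT = "/backup/site"
    KEEP_DAYS = 30

    os.makedirs(DEST_ROOT, exist_ok=True)
    today = datetime.date.today()
    dest = os.path.join(DEST_ROOT, today.isoformat())
    snapshots = sorted(d for d in os.listdir(DEST_ROOT)
                       if os.path.isdir(os.path.join(DEST_ROOT, d)))

    cmd = ["rsync", "-a", "--delete"]
    if snapshots:
        # Unchanged files become hard links into the most recent snapshot,
        # so they cost no extra disk space.
        cmd.append("--link-dest=" + os.path.join(DEST_ROOT, snapshots[-1]))
    subprocess.run(cmd + [SOURCE, dest], check=True)

    # Prune snapshots outside the retention window; a file's blocks are freed
    # only when the last snapshot that hard-links it is removed.
    cutoff = (today - datetime.timedelta(days=KEEP_DAYS)).isoformat()
    for d in snapshots:
        if d < cutoff:
            shutil.rmtree(os.path.join(DEST_ROOT, d))
    ```

    With this layout there is never a need to "merge" an incremental into the full backup: deleting the oldest snapshot directory has the same effect.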

    Read the article

  • How can I automatically take daily HDD backups?

    - by user13107
    I think my HDD might have crashed, and I don't want to lose data if this happens again. I have a dual-boot Windows/Ubuntu system. What is the best way to back up data and other software settings in Ubuntu? I don't care much about the Windows partition; the important stuff is in Ubuntu. I have a 1 TB external HDD (the laptop HDD is 500 GB total). One way would be to run rsync every day (e.g. via a cron job) to back up everything to the external HDD. What might be better ways of achieving this? Also, is instant (continuous) backup software recommended? Are there any disadvantages to instant backup as opposed to a daily rsync?
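
    One hedged example of the rsync-from-cron approach: the script below mirrors the home directory and /etc to the external drive and saves the installed-package list so software can be reinstalled after a disk failure. The mount point and paths are assumptions for illustration; run it from root's crontab so /etc is fully readable.

    ```python
    #!/usr/bin/env python3
    """Minimal daily backup sketch for the Ubuntu side of a dual-boot laptop.

    Assumptions (not from the question): the external HDD is mounted at
    /media/backup and rsync/dpkg are available. Example crontab entry:
        0 2 * * * /usr/local/bin/daily_backup.py
    """
    import subprocess
    from pathlib import Path

    DEST = Path("/media/backup/laptop")
    DEST.mkdir(parents=True, exist_ok=True)

    # Mirror the home directory and system configuration; --delete keeps the
    # copy an exact mirror (no history - see rsnapshot or --link-dest for that).
    for src in ("/home/me/", "/etc/"):
        target = DEST / src.strip("/")
        target.mkdir(parents=True, exist_ok=True)
        subprocess.run(["rsync", "-a", "--delete", src, str(target)],
                       check=True)

    # Record installed packages so they can be restored with
    # `dpkg --set-selections` + `apt-get dselect-upgrade` after a reinstall.
    with open(DEST / "package-selections.txt", "w") as f:
        subprocess.run(["dpkg", "--get-selections"], stdout=f, check=True)
    ```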

    Read the article

  • Rails deployment strategies with Bundler and JRuby

    - by brad
    I have a JRuby Rails app and I've just started using Bundler for gem dependency management. I'm interested in hearing people's opinions on deployment strategies. The docs say that bundle package will package your gems locally so you don't have to fetch them on the server (and I believe Warbler does this by default), but I personally think this is not the way to go for us, as our deployed code (in our case a WAR file) becomes much larger. My preference would be to mimic our Maven setup, which fetches all dependencies directly on the server AFTER the code has been copied there. Here's what I'm thinking; all comments are appreciated:
      Step 1: Build the WAR file, copy it to the server
      Step 2: Unpack the WAR on the server, fetch the Java dependencies with mvn
      Step 3: Use Bundler to fetch the gem dependencies (where should these be placed??)
      Step 4: Restart Tomcat
    Step 3 is the step I'm a bit unclear on. Do I run bundle install with a particular target in mind?? Again, my reasoning here is that I'd like to keep the dependencies separate from the code at deploy time. I'd also like to place all gem dependencies in the app itself so they are contained, rather than installing them in the app user's home directory (which, again, I believe is the default for Bundler).
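
    For step 3, one hedged possibility (a sketch, assuming Bundler 1.x and an exploded WAR on the server) is to run bundle install with an application-local --path, so the gems land inside the app rather than in the app user's home directory. Illustrative post-deploy script; the directory layout, Maven goal and Tomcat restart command are assumptions, not part of the question:

    ```python
    #!/usr/bin/env python3
    """Sketch of steps 2-4, run on the server after the WAR has been copied.

    Everything here is illustrative: APP_DIR, the Maven goal and the service
    name will differ per setup. The key idea for step 3 is
    `bundle install --path <dir-inside-the-app> --deployment`.
    """
    import subprocess

    APP_DIR = "/opt/tomcat/webapps/myapp"   # hypothetical exploded WAR

    def run(cmd, cwd=None):
        print("+", " ".join(cmd))
        subprocess.run(cmd, cwd=cwd, check=True)

    # Step 2: fetch the Java dependencies next to the unpacked code.
    run(["mvn", "dependency:copy-dependencies"], cwd=APP_DIR)

    # Step 3: install gems into the app itself; --deployment additionally
    # requires a checked-in Gemfile.lock and refuses to change it.
    run(["jruby", "-S", "bundle", "install",
         "--path", "WEB-INF/gems", "--deployment"], cwd=APP_DIR)

    # Step 4: restart Tomcat (exact command varies by distribution).
    run(["sudo", "service", "tomcat6", "restart"])
    ```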

    Read the article

  • BlackBerry Deployment Strategies

    - by cagreen
    I'm new to large-scale BlackBerry app deployment and I'm looking for some clarification on the various methods of deployment. Please bear with me, as I'm sure there is more to it than my naive view would lead me to believe. My app is very targeted at corporate users and requires a subscription to some additional services before it can be used. In other words, it's not targeted at the consumer market, so I'm not worried about people not being able to easily find it online. What do I need to be aware of when looking at deployment strategies? Any gotchas? From my understanding, my choices are:
      • App World
        - small upfront vendor fee
        - users can easily search for and find my app
        - billing handled by RIM
        - 4 licensing models (static, single, pool, dynamic), though I'm not sure I've seen enough info on the pool and dynamic models to fully appreciate how they might help me
      • Download from my website
        - billing is handled by me
        - can I enforce the number of licenses in use within an organization?
        - is this easier/harder for a user?
      • What else am I missing?

    Read the article

  • Recognizing when to use the mod operator

    - by Will
    I have a quick question about the mod operator. I know what it does; it calculates the remainder of a division. My question is, how can I identify a situation where I would need to use the mod operator? I know I can use the mod operator to see whether a number is even or odd and prime or composite, but that's about it. I don't often think in terms of remainders. I'm sure the mod operator is useful and I would like to learn to take advantage of it. I just have problems identifying where the mod operator is applicable. In various programming situations, it is difficult for me to see a problem and realize "hey! the remainder of division would work here!" Any tips or strategies? Thanks
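
    A few concrete situations where the remainder naturally appears – the common thread is anything cyclic, periodic, or bucketed. A short illustrative sketch:

    ```python
    # Everyday uses of the mod operator: "wrap around" and "equal-sized buckets".

    # 1. Wrap-around indexing: cycle through a fixed set of items forever.
    colors = ["red", "green", "blue"]
    for i in range(7):
        print(i, colors[i % len(colors)])       # red, green, blue, red, ...

    # 2. Do something every Nth iteration (progress output, batching, ...).
    for i in range(1, 1001):
        if i % 100 == 0:
            print(f"processed {i} records")

    # 3. Convert a flat count into cyclic units, e.g. seconds -> h:m:s.
    total_seconds = 7385
    hours, rest = divmod(total_seconds, 3600)   # divmod = (quotient, remainder)
    minutes, seconds = divmod(rest, 60)
    print(f"{hours}h {minutes}m {seconds}s")    # 2h 3m 5s

    # 4. Hashing into a fixed number of buckets (hash tables, sharding).
    num_buckets = 8
    print(hash("some key") % num_buckets)

    # 5. Alternating/checkerboard patterns: (row + col) % 2 picks the colour.
    for row in range(3):
        print("".join("#" if (row + col) % 2 == 0 else "." for col in range(8)))
    ```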

    Read the article

  • Fiscal year handling strategies in database design

    - by Sapphire
    By fiscal year I mean all the data in the database (in all tables) that occurred in a particular year. Let's say we are building an application that allows the user to choose from different years. Which way of implementing this would you prefer, and why:
      1. Separate the fiscal-year data into multiple separate database instances (for example, at every fiscal year start you could create a new instance with no data), or
      2. Have everything in one database, but with logic that automatically separates records from different years.
    Personally, I have "seen" both methods, and I would choose the second. The only argument I can think of for the first method is to have fewer records in case these are really big databases – but even then, you could "archive" old records by rolling them up into summaries or by some other means. What do you think?
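
    For what it's worth, option 2 usually stays simple because the "logic that separates records" is just an indexed column (or date range) in every query. A tiny self-contained sketch – the table and column names are invented, and SQLite is used only to keep the example runnable:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE invoice (
            id          INTEGER PRIMARY KEY,
            issued_on   TEXT    NOT NULL,   -- ISO date
            fiscal_year INTEGER NOT NULL,   -- denormalised for fast filtering
            amount      REAL    NOT NULL
        )
    """)
    conn.execute("CREATE INDEX idx_invoice_fy ON invoice (fiscal_year)")

    conn.executemany(
        "INSERT INTO invoice (issued_on, fiscal_year, amount) VALUES (?, ?, ?)",
        [("2009-03-14", 2009, 120.0),
         ("2010-07-01", 2010, 80.0),
         ("2010-11-30", 2010, 45.5)],
    )

    # The user's year selection simply becomes a WHERE clause on every query.
    selected_year = 2010
    (total,) = conn.execute(
        "SELECT SUM(amount) FROM invoice WHERE fiscal_year = ?",
        (selected_year,),
    ).fetchone()
    print(f"FY{selected_year} total: {total}")   # FY2010 total: 125.5
    ```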

    Read the article

  • Need advice or pointers on Release Management Strategies

    - by Murray
    I look after an internal web-based (Java, JSP, Mediasurface, etc.) system that is in constant use (24/5). Users raise tickets for enhancements, bug fixes and other business changes. These issues are signed off individually and assigned to one of three or four developers. Once an issue is complete it is built and only then is the code committed to SVN. The changed files (templates, HTML, classes, JSPs) are then copied to a dev server and committed to a different repository, from where they are checked out to the UAT server for testing (this often requires the Tomcat service to be restarted, and occasionally the Mediasurface service as well). The users then test and either reject or approve the release. If approved, the edited files are checked out to the Live server and the same process as with UAT is undertaken. If rejected, the developer makes the relevant changes and starts the release process again. This is all done manually without much control. Where different developers are working on similar files, changes sometimes get overwritten by builds done on out-of-sync code; in other cases, changes in UAT are moved to Live in error because they are mixed up in files associated with a signed-off release. I would like to move this to a more controlled and automated process where all source code and output files are held in SVN and releases to Dev, UAT and Live are managed by a CI system (we have TeamCity in house for our .NET applications). My question is how to manage the releases of multiple changes where some will be signed off and moved on and others rejected and returned to the developer. The changes may be on overlapping files, and simply merging each release into a Release branch means that the rejected changes would have to be backed out of the branch. Is there a way to manage this using SVN and CI, or will I simply have to live with the current system?

    Read the article

  • WCF Data Services implementation strategies.

    - by Nix
    Microsoft has done a savvy job of not outlining the actual place for Data Services in the wonderful world of SOA/web dev. So my question is simple: are WCF Data Services designed to be used only by clients? Or has anyone ever heard of someone using them on the server side? A simple scenario: a general layered architecture using business objects (BO), where parentheses indicate what is being passed between layers:
      (XML) WCF Service - (BO) Business Logic - (BO) DAO - Entity Framework
    or, using Data Services (where "DS BO" are modeled business entities to be used in the data service):
      (XML) WCF Service - (BO) Business Logic - (BO) WCF Data Service - (DS BO) Server
    I can't see a use for the latter, unless there are going to be a lot of cases where people would be accessing your data via your Data Service layer rather than the Service layer. Thoughts, anyone? I have not seen any mention of using Data Services from within a service layer...

    Read the article

  • AntFarm anti-pattern -- strategies to avoid, antidotes to help heal from

    - by alchemical
    I'm working on a 10-page web site with a database back-end. There are 500+ objects in use, trying to implement the MVP pattern in ASP.NET. I'm tracing the code execution from a single page; my finger has been on F11 in Visual Studio for about 40 minutes and there seems to be no end – possibly 1000+ method calls for one web page! If it was just 50 objects that would be one thing; however, code execution snakes through all these objects just like millions of ants frantically working in their giant dirt-mound house, riddled with object tunnels. Hence, a new anti-pattern is born: AntFarm. AntFarm is also known as "OO-Madness", "OO-Fever", "OO-ADD", or simply being a design-pattern junkie. This is not the first time I've seen this, nor have my associates at other companies. It seems that this style is being actively propagated, or in any case is a misunderstanding of the numerous OO/DP gospels going around... I'd like to introduce an anti-pattern to counter the anti-pattern: GSD or "Get Stuff Done", AKA "Get Sh** Done", AKA GRD (GetRDone). This pattern focuses on just what it says: getting stuff done, in a simple way. I may try to outline it more in a later post, or please share your ideas on this antidote pattern. Anyway, I'm in the midst of a great example of the AntFarm anti-pattern as I write (as a bonus, there is no documentation or comments). Please share your thoughts on how this anti-pattern has become so prevalent, how we can avoid it, and how one can undo or deal with this pattern in a live system one must work with!

    Read the article

  • nHibernate strategies in a web farm

    - by Pete Nelson
    Our current project at work is a new MVC web site that will use a WCF service, primarily to access a 3rd-party billing system via a web service, as well as a small SQL database for user personalization. The WCF service uses NHibernate for the SQL database. We'd like to implement some sort of web farm for load balancing as well as failover and maintenance. I'm trying to decide the best way to handle NHibernate's caching and database concurrency if there are multiple WCF services running. Some scenarios I've been thinking about:
      1. Multiple IIS servers, one WCF server. With this setup, the WCF server would be a single point of failure, but there would be no issues with NHibernate caching or database concurrency.
      2. Multiple IIS servers, each with its own WCF service. This removes the single point of failure, but now NHibernate on one machine would not know about database changes made by another machine.
    One solution to number 2 would be to use an IStatelessSession so we're not doing any caching and NHibernate is always fetching directly from the database. This might be the most feasible, as our personalization database has very few objects in it. I'm also considering a 2nd-level cache such as memcached or Velocity, but it may be overkill for this system. I'm putting this out there to see if anyone has experience doing this sort of architecture and to get some ideas for a solution. Thanks!

    Read the article

  • Looking for DoS/DDoS protection tools and strategies

    - by Alexandre Victoor
    I am working on a Java application that exposes web services for a Flash client. Any ideas on how to prevent DoS/DDoS attacks? I cannot use mechanisms that are unfriendly to the end user, such as CAPTCHAs. So far I have found mod_evasive, an Apache module which looks quite promising... Any suggestions, best practices, or tools I might use? Thanks in advance.
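
    Besides mod_evasive at the Apache layer, a common CAPTCHA-free measure is per-client rate limiting inside the service itself. A small token-bucket sketch (written in Python purely for illustration – the question's app is Java – and the per-IP keying and limits are assumptions):

    ```python
    import time
    from collections import defaultdict

    class TokenBucket:
        """Allow `rate` requests/second on average, with bursts up to `capacity`."""
        def __init__(self, rate=5.0, capacity=20.0):
            self.rate, self.capacity = rate, capacity
            self.tokens, self.last = capacity, time.monotonic()

        def allow(self):
            now = time.monotonic()
            # Refill according to elapsed time, capped at the bucket capacity.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False

    buckets = defaultdict(TokenBucket)   # one bucket per client IP

    def handle_request(client_ip):
        if not buckets[client_ip].allow():
            return 429, "Too Many Requests"   # or drop the request silently
        return 200, "OK"
    ```

    This only helps against abuse from individual sources; a genuinely distributed DDoS still has to be absorbed further upstream (firewall, load balancer, or the hosting provider).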

    Read the article

  • Strategies for Accessing an Application with a COM API From PHP

    - by Alan Storm
    Background: I'm an experienced PHP developer with a mostly *nix background. I'm writing a PHP application that needs to interact with a proprietary 3rd-party system. The 3rd-party system is Windows-only; the PHP application will live on a separate Linux-based system. The 3rd-party application has been described as having a "COM API" that I'll need to talk to from the PHP application. What does this look like, architecturally speaking? I'm starting with the COM section of the PHP manual, but I have specific questions:
      1. Can I talk directly to a COM API from a PHP application running on another server?
      2. If so, how? (What PHP extensions would I need, or what protocols/PHP functions would I be using to talk to the API?)
      3. If the answer to number 2 is no, I'd assume I'd need some kind of application on the Windows machine that can talk to COM, and then a service on the Windows machine I can hit with PHP. Are there prebuilt frameworks for this kind of thing?
      4. Is this all nonsense and/or did I say something exceedingly stupid? (Quite possible, as I'm a little fuzzy on what "COM" does and doesn't cover.)
    I'm obviously not looking for a full solution here; I'm just trying to get a general idea of what is and isn't possible and what kind of things I'll want to Google for. Thanks!

    Read the article

  • Performance analysis strategies

    - by Bernd
    I've been assigned a performance-tuning/debugging/troubleshooting task. Scenario: a multi-application environment running on several networked machines using databases. The OS is Unix, the DB is Oracle. Business logic is implemented across applications using synchronous/asynchronous communication. The applications are multi-user, with several hundred call-center users at peak time. User interfaces are web-based. The applications are third-party; I can get access to the developers and the source code. I only have the production system and a functional test environment, no load-test environment. The problem: bad performance! I need fast results; management is going crazy. I have symptom examples like these: user interface actions taking minutes to complete; searching for a customer usually takes 6 seconds, but an immediate subsequent search with the same parameters may take 6 minutes. What would be your strategy for finding the root causes?

    Read the article

  • New design patterns/design strategies

    - by steven
    I've studied and implemented design patterns for a few years now, and I'm wondering: what are some of the newer design patterns (since the GoF)? Also, what should one, similar to myself, study next in the way of software design? Note: I've been using TDD and UML for some time now. I'm curious about the newer paradigm shifts and/or newer design patterns.

    Read the article

  • DLL Deployment Strategies

    - by Filip Ekberg
    If you have a modular application that depends on its modules being in separate libraries (DLLs), what kind of re-deployment strategy would be good to follow? The application is installed using the Setup Project that is available in Visual Studio. I would like to avoid the copy-and-paste approach!

    Read the article

  • What are some popular Git layout strategies?

    - by CodexArcanum
    A fellow developer recently showed me a blog post with a nice visual representation of a git layout. He implied that this particular strategy was gaining a lot of popularity, but numerous searches here and through the Google have yet to turn up the blog post. The gist of it was that you had a trunk for main development, and a "side-trunk" for immediate customer-driven bug fixes. Main development had a branch, which was merged to trunk periodically for major releases, and then you had feature branches. There was a lovely diagram that clearly showed all this. Since I'd like to learn git better, I'd love to have that diagram available as an aid. It'd also be useful as a visual for trying to convince coworkers to switch to git. Does anyone happen to know what I'm talking about and can provide a link?

    Read the article

  • iPhone OS: Strategies for high density image work

    - by Jasconius
    I have a project coming around the bend this summer that is going to involve a potentially extremely high volume of image data for display. We are talking hundreds of 640x480-ish images in a given application session (scaled to a smaller resolution when displayed), and handfuls of very large (1280x1024 or higher) images at a time. I've already done some preliminary work and I've found that a typical 640x480-ish image is just a shade under 1 MB in memory when placed into a UIImageView and displayed... but the very large images can be a whopping 5+ MB in some cases. This project is actually targeted at the iPad, which, in my Instruments tests, seems to cap out at about 80-100 MB of addressable physical memory. Details aside, I need to start thinking about how to move huge volumes of image data between virtual and physical memory while preserving the fluidity and responsiveness of the application, which will be highly visible. I'm probably on the higher end of intermediate at Objective-C... so I am looking for some solid articles and advice on the following:
      1. Responsible management of UIImage and UIImageView in the name of conserving physical RAM
      2. The merits of using CGImage over UIImage, particularly for the huge images, and whether there will be any performance gain
      3. Anything dealing with memory paging, particularly as it pertains to images
    I'll conclude by saying that the numbers I have above may be off by about 10 or 15%. Images may or may not end up being bundled into the actual app itself, as opposed to being loaded in from an external server.

    Read the article

  • Windows Phone 7, MVVM, Silverlight and navigation best practice / patterns and strategies

    - by Matt F
    While building a Windows Phone 7 app using the MVVM pattern, we've struggled to get to grips with a pattern or technique for centralising navigation logic in a way that fits with MVVM. To give an example: every time the app calls our web service, we check that the logon token we assigned the app earlier hasn't expired. We always return some status to the phone from the web service, and one of those might be Enum.AuthenticationExpired. If we receive that, I'd imagine we'd alert the user and navigate back to the login screen. (This is one of many examples of status we might receive.) Now, wanting to keep things DRY, that sort of logic feels like it should be in one place. Therein lies my question: how should I go about modelling navigation that relies on (essentially) switch or if statements to tell us where to navigate to next, without repeating that in every view? Are there recognised patterns or techniques that someone could recommend? Thanks.

    Read the article

  • string categorization strategies

    - by Andrew Heath
    I'm the one-man dev team on a fledgling military history website. One aspect of the site is a catalog of ~1,200 individual battles, including the nations and formations (regiments, divisions, etc.) which took part. The formation information (as well as the other battle info) was manually imported from a series of books by a 10-man volunteer team. The formations were listed in groups with varying formatting and abbreviation patterns. At the time I set up the data collection forms I couldn't think of a good way to process that data... and elected to store it all as strings in the MySQL database and sort it out later. Well, "later" – as it tends to happen – has arrived. :-) Each battle has 2+ records in the database, one for each nation that participated. Each record has a formations text string listing the formations present, as the volunteer chose to enter them. Some real examples:
      • 39th Grenadier Rgmt, 26th Volksgrenadier Division
      • 2nd Luftwaffe Field Division, 246th Infantry Division
      • 247th Rifle Division, 255th Tank Brigade
      • 2nd Luftwaffe Field Division, SS Cavalry Division
      • 28th Tank Brigade, 158th Rifle Division, 135th Rifle Division, 81st Tank Brigade, 242nd Tank Brigade
      • 78th Infantry Division
      • 3rd Kure Special Naval Landing Force, Tulagi Seaplane Base personnel
      • 1st Battalion 505th Infantry Regiment
    The ultimate goal is for each individual force to have an ID, so that its participation can be traced throughout the battle database. Formation hierarchy, such as the final item above – 1st Battalion (of the) 505th Infantry Regiment – also needs to be preserved. In that case, 1st Battalion and 505th Infantry Regiment would be split, but 1st Battalion would be flagged as belonging to the 505th. In database terms, I think I want to pull the formation field out of the current battle info table and create three new tables:
      • FORMATION [id] [name]
      • FORMATION_HIERARCHY [id] [parent] [child]
      • FORMATION_BATTLE [f_id] [battle_id]
    It's simple to explain, but complicated to enact. What I'm looking for from the SO community is just some tips on how best to tackle this problem. Ideally there's some sort of method to solving this that I'm not aware of. However, as a last resort, I could always code a classification framework and call my volunteers back to sort through 2,500+ records...
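
    Before calling the volunteers back, a scripted first pass can do a lot of the splitting: break each stored string on commas, then use a small set of regexes to spot the "Nth Battalion <parent regiment>" style nesting and emit candidate FORMATION / FORMATION_HIERARCHY rows for human review. A rough sketch – the patterns below only cover the kinds of examples shown above and would need extending for other abbreviation styles:

    ```python
    import re

    # First-pass splitter for the stored formation strings. It only recognises
    # the "<sub-unit> <parent regiment>" nesting seen in the examples
    # (e.g. "1st Battalion 505th Infantry Regiment"); everything else is
    # emitted as a flat formation name for manual review.
    SUBUNIT = re.compile(
        r"^(?P<child>\d+(?:st|nd|rd|th) (?:Battalion|Company|Squadron))\s+"
        r"(?P<parent>.+(?:Regiment|Rgmt))$"
    )

    def parse_record(raw):
        """Yield (formation_name, parent_or_None) pairs for one record."""
        for part in (p.strip() for p in raw.split(",")):
            m = SUBUNIT.match(part)
            if m:
                yield m.group("parent"), None                 # the regiment itself
                yield m.group("child"), m.group("parent")     # battalion -> regiment
            else:
                yield part, None

    for raw in ["39th Grenadier Rgmt, 26th Volksgrenadier Division",
                "1st Battalion 505th Infantry Regiment"]:
        for name, parent in parse_record(raw):
            print(f"{name!r:40} parent={parent!r}")
    ```

    Distinct names collected this way become candidate FORMATION rows, the (child, parent) pairs become FORMATION_HIERARCHY candidates, and repeats like "2nd Luftwaffe Field Division" across battles collapse onto a single ID; anything the patterns can't classify goes into a review queue for the volunteers.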

    Read the article

  • archiving strategies and limitations of data in a table

    - by Samuel
    Environment: JBoss, MySQL, JPA, Hibernate. Our web application will be catering to a large number of users (~1,000,000) and there are lots of child tables where user-specific data is stored (e.g. personal, health, forum contributions...). What would be the best practice for archiving users and user-specific information?
      a. Would it be wise to move the archived user and user-specific information to respective archive tables within the same database (e.g. user_archive, user_forum_comments_archive...), or
      b. Would you just mark the database entries with a flag in the original table(s) and query only non-archived entries?
    We have a unique constraint on User.loginid; how do you handle this requirement if users are archived via option (a)? (I.e. if a user with loginid 'samuel' gets moved into the archive table and a new user gets added with the same name in the original table, how would you prevent this?) What would be the best strategy to address the unique key constraints? We also have a requirement to selectively archive records and bring them back if necessary; would you rely on database tools, or would you handle this via the persistence APIs exposed by the JPA entity model?

    Read the article

  • Ajax data two-way data binding strategies?

    - by morgancodes
    I'd like to: 1) draw/create form fields and populate them with data from JavaScript objects, and 2) update those backing objects whenever the value of the form field changes. Number 1 is easy – I have a few JS template systems I've been using that work quite nicely. Number 2 may require a bit of thought. A quick Google search on "ajax data binding" turned up a few systems which seem basically one-way: they're designed to update a UI based on backing JS objects, but don't seem to address the question of how to update those backing objects when changes are made to the UI. Can anyone recommend any libraries which will do this for me? It's something I can write myself without too much trouble, but if this question has already been thought through, I'd rather not duplicate the work.

    Read the article

  • Pros and cons of escaping strategies in symfony

    - by zergu
    I am still not sure about this matter. With escaping turned on we're quite safe, but some other problems appear (with passing template variables or counting characters). On the other hand, with the magic turned off everything is clear, but we have to manually escape every variable (that comes from an untrusted source) in the templates. By the way, the non-magic solution is what Ruby on Rails uses. So the question is: when starting a new project in symfony, do you disable escaping_strategy, and why?

    Read the article
