Search Results

Search found 13713 results on 549 pages for 'production environment'.

Page 23/549 | < Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >

  • Can Stopwatch be used in production code?

    - by Adrian
    Hi, I need an accurate timer, and DateTime.Now doesn't seem accurate enough. From the descriptions I've read, System.Diagnostics.Stopwatch seems to be exactly what I want. But I have a phobia: I'm nervous about using anything from System.Diagnostics in actual production code. (I use it extensively for debugging with asserts, print statements and so on, but never yet for production work.) I'm not merely trying to use a timer to benchmark my functions - my app needs an actual timer. I've read on another forum that System.Diagnostics.Stopwatch is only for benchmarking and shouldn't be used in retail code, though no reason was given. Is this correct, or am I (and whoever posted that advice) being too closed-minded about System.Diagnostics? I.e., is it OK to use System.Diagnostics.Stopwatch in production code? Thanks, Adrian
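
    For what it's worth, a minimal sketch of the usage in question (nothing in Stopwatch's documented contract limits it to debug builds; it simply wraps the OS high-resolution counter when one is available):

        using System;
        using System.Diagnostics;

        class TimerDemo
        {
            static void Main()
            {
                // Reports whether the high-resolution performance counter backs this timer.
                Console.WriteLine("High resolution: {0}, ticks per second: {1}",
                                  Stopwatch.IsHighResolution, Stopwatch.Frequency);

                Stopwatch sw = Stopwatch.StartNew();
                System.Threading.Thread.Sleep(100);   // stand-in for real work
                sw.Stop();

                Console.WriteLine("Elapsed: {0:F3} ms", sw.Elapsed.TotalMilliseconds);
            }
        }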

    Read the article

  • iOS - Application logging test and production code

    - by Peter Warbo
    I do a lot of logging when I'm testing my application, which is useful for getting information about variable state and such. However, I have read that you should use logging sparingly in production code (because it can potentially slow down your application). My question is: if my app is in production and people are using it, whenever a crash (god forbid) occurs, how will I be able to interpret the crash information if I have removed the logging statements? Then I suppose I will only have a stack trace to interpret? Does this mean I should leave logging in production code only WHERE it's really essential for me to interpret what has happened? Also, how will the logging statements relate to the crash reports? Will they be combined? I'm thinking of using Flurry for analytics and crash reports...
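
    One common compromise is a macro that keeps verbose logging in debug builds only, while the handful of essential statements stay as plain NSLog calls in release builds. A sketch (it assumes DEBUG is defined in your Debug build configuration, which is the Xcode default; how such logs get attached to crash reports depends entirely on the analytics service):

        // Compiled away entirely in release builds.
        #ifdef DEBUG
        #   define DLog(fmt, ...) NSLog((@"%s [line %d] " fmt), __PRETTY_FUNCTION__, __LINE__, ##__VA_ARGS__)
        #else
        #   define DLog(...)
        #endif

        // Verbose, debug-only:
        DLog(@"view state: %@", self.description);

        // Essential, deliberately kept in production:
        NSLog(@"checkout started, order %@", orderId);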

    Read the article

  • jQuery cookie behaves differently on local machine vs production server

    - by user1810414
    I'm setting a jQuery cookie for some site JavaScript effects, with the expiration date set to 30 days. On my local development machine the cookie expiration is set properly, whereas on production the expiration is being set to end of session. After inspecting the created cookies with web developer tools, everything is identical except that the production machine's cookie is marked as a "session cookie" and setting the expiration is disallowed, while the cookie created on the local development machine is not marked "session" and the expiration date is editable. Is there a way to make the production machine's cookie accept the expiration date? This is how I set the cookie: $.cookie('skip_animation','skip', 2592000); Using jQuery 1.6.4 (legacy site). All browsers behave the same way.
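
    A likely cause, assuming the widely used jquery-cookie plugin: its third argument is an options object, and expires is a number of days (or a Date), not seconds. A bare number like 2592000 isn't read as an options object, so no Expires attribute is written and the browser files it as a session cookie. A sketch of the corrected call:

        // expires is days, inside an options object
        $.cookie('skip_animation', 'skip', { expires: 30, path: '/' });

        // or with an explicit Date, if you prefer
        var d = new Date();
        d.setTime(d.getTime() + 30 * 24 * 60 * 60 * 1000);
        $.cookie('skip_animation', 'skip', { expires: d, path: '/' });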

    Read the article

  • setting apache environment variable

    - by Kiran
    My hosting environment is running Server version: Apache/2.2.14 (Unix). I am modifying /usr/local/apache/conf/httpd.conf to set an environment variable and restarting the server: SetEnv XML-RPC-IPs 193.45.32.21 I set it as the first entry in the file and restarted the server. But even after restarting, if I try to print it, it still comes back blank. Am I missing anything? echo "My IP address ".$_SERVER['XML-RPC-IPs']; Thanks for your help. Regards, Kiran
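
    Two things worth checking (a sketch, with placeholder names): SetEnv needs mod_env, and its values generally only reach PHP's $_SERVER when PHP runs as an Apache module - under the CGI/FastCGI setups common on shared hosting they may never be passed through. Scoping the directive to the site's <VirtualHost> (or an .htaccess file) is also more reliable than the top of httpd.conf:

        # httpd.conf - SetEnv is provided by mod_env
        LoadModule env_module modules/mod_env.so

        <VirtualHost *:80>
            ServerName example.com
            SetEnv XML-RPC-IPs 193.45.32.21
        </VirtualHost>

    Then compare getenv('XML-RPC-IPs') with $_SERVER['XML-RPC-IPs'] on the same page; if both come back empty, the host's PHP handler is probably not passing Apache environment variables through at all.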

    Read the article

  • Is there any small Linux distribution which comes with a complete C development environment?

    - by hits_lucky
    Hi, I have installed "Damn Small Linux" on my home computer for doing C development in Unix. But the distribution doesn't come with a C development environment by default, and I am facing some issues when trying to install gcc. Is there any other small Linux distribution which by default has the required packages for C development? I don't want additional software that takes up a lot of space, but I would still like to have a graphical environment. Thanks

    Read the article

  • putting folders above public_html in a shared hosting environment

    - by redconservatory
    On my local environment, the following settings work in my index.php file: $system_path = '../../ci/system/'; $application_folder = '../../ci/application'; My folder structure looks like this:

        - ci
          - system
          - application
        - public_html
          - site
            - index.php

    This works on my local environment, but when I upload my files, I get an error message: "Your system folder path does not appear to be set correctly. Please open the following file and correct this: index.php" I tried the following: $system_path = dirname(__FILE__).'../../ci/system/'; $application_folder = dirname(__FILE__).'../../ci/application';
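
    A probable culprit in that last attempt (a sketch, assuming the stock CodeIgniter index.php): dirname(__FILE__) has no trailing slash, so the concatenation produces a path like /home/you/public_html/site../../ci/system/. Adding the slash, and letting realpath() verify the result, usually sorts this out:

        // dirname(__FILE__) is e.g. /home/you/public_html/site - no trailing
        // slash, so the relative part must begin with one.
        $system_path        = dirname(__FILE__) . '/../../ci/system';
        $application_folder = dirname(__FILE__) . '/../../ci/application';

        // realpath() collapses the ../ segments and returns FALSE if the
        // directory doesn't actually exist on the server - a handy check
        // when local and production layouts differ.
        if (realpath($system_path) === false) {
            exit('system folder not found relative to ' . dirname(__FILE__));
        }
        $system_path = realpath($system_path) . '/';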

    Read the article

  • Test Environment for Microsoft SQL Cluster

    - by user195643
    I have the following test environment:

    1 - Windows 2008 (DC Edition), role: Active Directory
    2 - Windows 2008 (Enterprise Servers)

    I would like to create an MSSQL cluster in this test environment. I am using a desktop PC, and I would like to know how I can configure a second network card for the MSSQL cluster, and how I can use shared storage without using any external drives.

    Read the article

  • heroku logs --ps run showing nothing

    - by Zarne Dravitzki
    I have two running apps on Heroku, staging and production. They are near-identical environments (staging has extra configs, i.e. RailsFootnotes and the Bullet gem). When I run heroku logs --ps run --app jl-staging it returns logs like 2012-08-30T01:30:42+00:00 heroku[run.1]: Starting process with command `bundle exec rake jewellover:warn_users` This log is from a task set to run with the free Heroku Scheduler. Everything works perfectly, but when I do the same with heroku logs --ps run --app jl-production there are no results - no heroku[run.1] process logs. Both environments have the same scheduled tasks, albeit at different times, but nonetheless both run scheduled tasks at specified times. Is there something I'm missing about heroku[run.1] processes in the production env? Does Heroku only keep the --ps logs for a certain amount of time? It seems to show less activity than the normal logs - maybe only 24 hours' worth rather than the last 100 lines... I need to log and debug the [run.1] process from the production env, specifically the jewellover:warn_users task. Any ideas?
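
    A couple of things worth trying (a sketch; Heroku's log store only retains a short window - on the order of the last 1,500 lines - so a brief scheduled task on a busy app can scroll out of view before you look):

        # ask for the maximum retained history, not the default 100 lines
        heroku logs -n 1500 --ps run --app jl-production

        # tail live, then trigger the task by hand to confirm it logs at all
        heroku logs --tail --ps run --app jl-production
        heroku run rake jewellover:warn_users --app jl-production

    For anything longer-term, a logging add-on or a syslog drain keeps the history outside Heroku's rolling buffer.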

    Read the article

  • Push Trunk or Push Branch to Production

    - by coffeeaddict
    I'd like to get an idea of how others handle the build process with Tortoise SVN. Primarily I'm wondering, do you push: the mainline trunk, or a branch, after QA has grabbed it into a working copy locally and tested it, and then some build pushes that branch?

    The problem I have is I work at a craphole (hey, it is what it is and I'm venting on Stack Overflow, you better believe it... good way to relieve stress due to complete utter chaos) and we have no formal process for pushing anything. In fact, even worse, my boss codes directly against production. When I have changes, he pushes the mainline trunk.

    The problem comes when I make database changes on our dev database for, let's say, branch A. Well... that breaks branches B and C. I have like 4 projects going on at once! Why? Well, I will not get into that (chaos). So consequently I rename a table field, or add a field or whatever in SQL Server and voila, now my other branches have stale code pointing to previous field names. So what happens? I have to merge certain changes to this branch, to that branch, etc. It feels like a war zone. Finally, what happens is I try to only merge the minimum. Let's say I made DB changes for branch A's code but now I have to jump back on branch B's project. Well, I need to merge "some" of A's changes over for those database changes so that B's code is not going to bomb out and is able to work with the new table changes.

    Finally, boss pushes the mainline trunk to production. Now I get an email: "you forgot to remove the hyperlink for this". That hyperlink was actually a feature I added in branch A. But what he's talking about here is that he just pushed the mainline trunk to production, which now has my merged changes from branch B and any database scripts for branch A - because remember, I had some DB changes, and if he pushes code, it's got to reflect those changes, thus some partial database changes must also be pushed even if they're not related to this project. Well... I missed the hyperlink, so kill me. Maybe that's why we need a build process, boss? (Sorry, it's been a nightmare working here, which is why this thread is getting so detailed.)

    Anyway, obviously this is a nightmare. And he dictates almost everything. The only reason we have source control is because I've worked on hard-core teams and that's the first thing you set up. Well, there was none here. The problem is I can't dictate the structure... he does, but he's never really used source control!! My God. So we have no QA. This is an e-commerce website. That's another huge issue. So consequently I'm expected to be perfect. That means the mainline trunk needs to be perfect for whatever we're pushing, whatever branch feature. Is this ludicrous? wtf do I do?

    I could go off on him after trying so many times, tactfully, to explain that we need a freakin' build process (not just copying the local mainline trunk to production!), but I've tried to push for it before and got yelled at. So I gave up on that. So it will help me tremendously to know how others are pushing their source from Tortoise to production. I was not the person pushing when I was on previous teams, so really I'm not too versed in build processes. We are a fairly good-size e-commerce site and get a couple million hits a month.

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices – STAGE 4: AUTOMATED DEPLOYMENT

    If you've been fortunate enough to get to the stage where you've implemented some sort of continuous integration process for your database updates, then hopefully you're seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it's going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear.

    Our Database Delivery Learning Program consists of four stages – really three: source controlling a database, running continuous integration processes, then setting up automated deployment (the middle stage is split in two – basic and advanced continuous integration – making four stages in total). If you've managed to work through the first three of these stages – source control, basic, then advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn't going to wipe our 4Tb of production data with a single DROP TABLE. But how do you get this (fully tested) release live?

    Continuous delivery means being always ready to release your software at any point in time. There's a significant gap between your latest version being tested, and it being easily releasable. Just a quick note on terminology – there's a nice piece from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment, which also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at ThoughtWorks as: "Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users." There's another really useful piece on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app).

    So, hopefully you're convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or "release management") process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can't I just install one of the many release management tools available and, hey presto, I'm ready? If only it were that simple. Below I list some of the areas that are worth spending a little time on, where a little planning and prep could go a long way.

    It's also worth pointing out that this should really be an evolving process. Depending on your starting point, it can be a long journey from your current setup to a full continuous delivery pipeline. If you've got a CI mechanism in place, you're certainly a long way down that path. Nevertheless, we'd recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912

    For now, in this post, we'll look at the following areas for your checklist:

    - You and Your Team
    - Environments
    - The Deployment Process
    - Rollback and Recovery
    - Development Practices

    You and Your Team

    It's a cliché in the DevOps community that "it's not all about processes and tools, really it's all about culture". As stated in the DevOps report from Puppet Labs: "DevOps processes and tooling contribute to high performance, but these practices alone aren't enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn't understood outside of a specific group." Like most clichés, there's truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it's an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as:

    - 2008 to present: overall development costs reduced by 40%
    - Number of programs under development increased by 140%
    - Development costs per program down 78%
    - Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%)

    But what does this mean? It means that, when moving to the next stage and making that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing; that they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you're ever struggling to convince someone of the value, I'd strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org.

    I've spoken to many customers who have implemented database CI who describe their deployment process as "the point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that's finished we revert to manual." This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA "We're changing everything you do and your toolset next week, to automate most of your role – that's okay, isn't it?" isn't likely to go down well.

    There's some work here to bring him/her onside – to explain what you're doing, why there will still be control of the deployment process, and so on. Or of course, if you're the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you'd like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out.

    As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager's manager too. As mentioned, unless there's buy-in "from the top", you're going to hit problems when the implementation starts to get rocky (and what tool/process implementations don't get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress.

    Actions:

    - Get your DBA involved (or whoever looks after live deployments) and discuss what you're planning to do – or, if you're the DBA yourself, get the dev team up to speed with your plans.
    - Get your boss involved too, and make sure he/she is bought in to the investment.

    Environments

    Where are you going to deploy to? Really this question is: what environments do you want set up for your deployment pipeline? Assume everyone has "Production", but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I've seen every setup under the sun, and there is often a big difference between "what we want, to do continuous delivery properly" and "what we're currently stuck with". Some of these differences are:

    - What we want: each developer with their own dedicated database environment. What we've got: a single shared "development" environment, used by everyone at once.
    - What we want: an integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit tests running on that machine. What we've got: in fact, if you have a CI process running, you're likely to have some sort of integration server running (even if you don't call it that!). Whether you have a full suite of unit tests running is a different question…
    - What we want: a separate QA environment used explicitly for manual testing prior to release. What we've got: "We just test on the dev environments, or maybe pre-production."
    - What we want: a proper pre-production (or "staging") box that matches production as closely as possible. What we've got: hopefully a pre-production box of some sort. But does it match production closely!?
    - What we want: a production environment reproducible from source control. What we've got: a production box which has drifted significantly from anything in source control.

    The big question is: how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you're going to create and where they'll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you're working on a new, greenfield project, or trying to update an existing, brownfield application. There's a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control, of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:

    - Dedicated development databases.
    - An integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action.]
    - QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing.
    - Pre-production – the environment you use to test the production release process.
    - Production.

    * A note on the use of the word "automatic": when carrying out automated deployments, this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic, in that it's not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user.

    Actions:

    - Get your environments set up and ready.
    - Set access permissions appropriately.
    - Make sure everyone understands what the environments will be used for (it's not a "free-for-all", with all environments to be accessed, played with and changed by development).

    The Deployment Process

    As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers "How do your database changes get live? How does your manual process work?":

    1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring it into pre-prod.
    2. Again using a schema compare tool, find the differences between the latest version of the database ready to go live (i.e. what the team have been developing) and pre-production. This generates an upgrade script.
    3. A user (generally, the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar.
    4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped).
    5. If all is working, run the script on production.*

    * This assumes there's no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something into the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up at www.sqllighthouse.com if you're interested in testing early versions.

    There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can't automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment: whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!) before the deployment process kicks in and automatically deploys that change to the live box. Not for the faint-hearted – and really not something we recommend. At the other extreme, you might be more comfortable with a semi-automated process: the pre-production/production matching process is automated (with an error thrown if these environments don't match), followed by a manual intervention allowing for script approval by the DBA. Once he/she clicks "Okay, I'm happy for that to go live", the latter stages automatically take the script through to live. And anything in between, of course – and other variations. But we'd strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out "What do we do now?", "What do we actually want?", "What will satisfy our needs for continuous delivery, while still maintaining some sort of continuous control over the process?"

    NB: Most of what we're discussing here is about production deployments. It's important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for those boxes.

    Actions:

    - Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments: "What do we do now?", "What do we actually want?", "What will satisfy our needs for continuous delivery, while still maintaining some sort of continuous control over the process?"
    - Repeat for earlier environments (QA and so on).

    Rollback and Recovery

    If only every deployment went according to plan! Unfortunately they don't, and when things go wrong you need a rollback or recovery plan for what you're going to do in that situation. Once you move to a more automated database deployment process, you're far more likely to be deploying more frequently than before – no longer once every 6 months, but maybe once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem. There are various options, which we'll explore in subsequent articles, such as:

    - Immediately restore from backup.
    - Have a pre-tested rollback script (remembering that really this is a "roll-forward" script – there's not really such a thing as a rollback script for a database!).
    - Have fallback environments – for example, using a blue-green deployment pattern.

    Different options have pros and cons – some are easier to set up, some require more investment in infrastructure, and of course some work better than others (the key issue with using backups is loss of the interim transaction data added between the failed deployment and the restore). The best mechanism will depend primarily on how your application works and how much you need a cast-iron failsafe mechanism.

    Actions:

    - Work out an appropriate rollback strategy based on how your application and business work, your appetite for investment, and your requirements for a completely failsafe process.

    Development Practices

    This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is intrinsically linked with the patterns and practices used to develop that database and the linked application. So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern "branch by abstraction". Explained nicely by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner, so that you can always roll back without data loss, by making incremental updates to the database backward compatible. Slides 103-108 of the following slide deck, from Niek Bartholomeus, explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace

    As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – the release team has the opportunity to achieve zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515

    But the question is: how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there's a difference here between migrating old projects and starting afresh – with the latter it's much easier to instigate best practice from the start.

    Actions:

    - For your business, work out how far down the path you want to go, amending your database development patterns towards "best practice". It's a trade-off between implementing quality processes and the necessity to do so (depending on how often you make complex changes).
    - Socialise these changes with your development group. No one likes having "best practice" changes imposed on them, so it's good to introduce these ideas and the rationale behind them early.

    Summary

    The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning if you want to get the most out of the work and for the implementation to go smoothly. We've covered some of the checklist of areas to consider – mainly "getting the team ready for the changes that are coming" and "planning out your pipeline, environments, patterns and practices for development" – though there will be more detail, depending on where you're coming from and where you want to get to.

    This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.
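
    To make the semi-automated option concrete, here is a minimal sketch of that approval gate in PowerShell. The script path, server name, database name and the use of Invoke-Sqlcmd (from the SQL Server PowerShell module) are illustrative assumptions, not part of the article's tooling:

        # The upgrade script has already been generated and verified on
        # pre-production; a human approves before production is touched.
        $script = "C:\releases\upgrade_2013_06.sql"   # hypothetical path
        $target = "PRODSQL01"                         # hypothetical server

        Write-Host "Ready to deploy $script to $target."
        $answer = Read-Host "Okay, I'm happy for that to go live? (yes/no)"

        if ($answer -eq "yes") {
            Invoke-Sqlcmd -ServerInstance $target -Database "Shop" -InputFile $script
            Write-Host "Deployment complete."
        }
        else {
            Write-Host "Aborted; nothing was run against $target."
        }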

    Read the article

  • Office lights on or off in programming department? How to decide? [closed]

    - by smp7d
    At my company, the programmers who sit in the same area are constantly fighting over whether the lights stay on or off. Because there is no official policy it makes it a particularly sticky situation. We are a typical cube-farm and we have those typical cube-farm fluorescent lights and smaller ones at our desks. With the lights off, it is difficult to read and you would probably need to turn on your desk light (which some people do anyway). All programmers in our department do most of their reading on their monitor because of the nature of our business. Some feel that we should have a vote to decide whether the lights stay on or off. A couple who prefer 'lights on' feel that the vote would need to be unanimous to turn them off as having them on is the more natural office setting. Those who want them off point out that all other departments keep their lights off. I have heard all of the arguments: -Fluorescent lights cause eye strain -Reading in dark causes eye strain -The desk lights can be used if light is needed -People from other departments feel uncomfortable approaching us in the "dark" -The monitors are harder to see in the light ... Right now, some of the developers turn off the lights and some turn them on. It really just depends who last walked by the switch. I am a bit sick of the controversy as it feels a bit childish at the moment. I'm tired of hearing about it and I'm tired of having to talk about it. I tried to help them decide but as I explained, voting wasn't enough. Do other programming departments have this same argument? What is the standard or traditionally accepted option in a programming area? Are there any good reasons for one way or the other outside of preference? How can we decide fairly? EDIT Just a little more info... We do not have clients/visitors come into our office. We do have windows and hall lights that make our environment plenty bearable with the lights off. It kind of resembles a meeting room that has the lights off during a powerpoint presentation.

    Read the article

  • Is anyone using Node.js as an actual web server?

    - by Jeremy
    I am trying to convince myself to pick it up and start developing with it, but I want to know if anyone has experienced stability issues or anything of the sort. I understand it isn't "production" quality like Apache or IIS. I figure for a small site it should be fine (max of 200 concurrent connections). Should I assume this?
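
    For scale, the sort of server being discussed is only a few lines with the core http module (a sketch; 200 concurrent connections is comfortably within what a single event loop handles, since each connection is a socket plus a callback rather than a thread):

        var http = require('http');

        // One process, one event loop; concurrency comes from non-blocking I/O.
        http.createServer(function (req, res) {
            res.writeHead(200, { 'Content-Type': 'text/plain' });
            res.end('hello\n');
        }).listen(8080);

        console.log('listening on http://127.0.0.1:8080/');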

    Read the article

  • Samba4 advice for production use

    - by pgb
    I have an old Samba 3 + LDAP server installed that needs to be rebuilt. I'm weighing my options: Windows Server seems too expensive at the moment, and Samba 4 appears to be a nice option, coupled with the latest BIND 9, which can dynamically add computers to the DNS. I have about 30 workstations, so I still consider it a small network. My questions are: Is Samba 4 stable enough for production? It seems as if the Samba team is too cautious about when to call their version final, or even beta, compared with other open source projects. What Linux distribution would you recommend to set it up? I usually use Ubuntu Server, but may use another one if installing / maintaining Samba 4 is better on it.

    Read the article

  • PostgreSQL 9.1 Database Replication Between Two Production Environments with Load Balancer

    - by littleK
    I'm investigating different solutions for database replication between two PostgreSQL 9.1 databases. The setup will include two production servers on the cloud (Amazon EC2 X-Large instances), with an elastic load balancer. What is the typical database implementation for this type of setup? Master-master replication (with Bucardo or rubyrep)? Or perhaps only one shared database between the two environments, with shared-disk failover? I've been getting some ideas from http://www.postgresql.org/docs/9.0/static/different-replication-solutions.html. Since I don't have a lot of experience in database replication, I figured I would ask the experts. What would you recommend for the described setup?
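
    Worth noting: 9.1 also ships built-in streaming replication (single master, read-only hot standby, with optional synchronous commit), which is often simpler to operate than trigger-based master-master. A minimal sketch - the addresses and the replication role are placeholders:

        # postgresql.conf on the primary
        wal_level = hot_standby
        max_wal_senders = 3
        wal_keep_segments = 64

        # pg_hba.conf on the primary
        host  replication  replicator  10.0.0.2/32  md5

        # recovery.conf on the standby
        standby_mode = 'on'
        primary_conninfo = 'host=10.0.0.1 port=5432 user=replicator'

        # postgresql.conf on the standby, to allow read-only queries
        hot_standby = on

    The trade-off: the standby is read-only, so the load balancer can only send writes to the primary; master-master (Bucardo/rubyrep) avoids that but adds conflict handling.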

    Read the article

  • Create new partition on live production CentOS server

    - by Kimmel
    I have a production server that is running CentOS. I'd like to create a partition on the server without having to reinstall everything. I have CLI and VNC access to the remote server. Is there any way that I can create a partition safely? Here's my output from fdisk -l:

        Disk /dev/sda: 85.9 GB, 85899345920 bytes
        255 heads, 63 sectors/track, 10443 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00033d5e

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1       10444    83885056   83  Linux

    Thanks.
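
    A cautious sequence for this (a sketch - note that in the output above /dev/sda1 already spans all of the disk's cylinders, so there is no unallocated space to carve a partition from without first shrinking the existing filesystem, which cannot be done safely while it is mounted):

        # confirm whether any unallocated space exists
        parted /dev/sda unit GB print free

        # only if free space exists: add a partition with fdisk
        # (n = new, accept the bounds, w = write the table)
        fdisk /dev/sda

        # ask the kernel to re-read the partition table without rebooting;
        # if the disk's partitions are in use this can fail, and a reboot
        # is then the only safe option
        partprobe /dev/sda

        # format the hypothetical new partition
        mkfs.ext4 /dev/sda2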

    Read the article

  • XPP-32 over W7-64 on music production laptop

    - by quarlo
    I need to upgrade my laptop and need high performance for music production (recording and mixing). My audio interface manufacturer seems unable to successfully convert their drivers to 64-bit. I do not trust a virtual machine to handle real-time audio recording at low enough latency, so... I would like to install XP Pro 32-bit on a separate partition and dual-boot, since most of the machines that can handle this application now ship with Windows 7 64-bit flavors. I'd like to transition to 64-bit over time, assuming M-Audio does eventually get a handle on 64-bit drivers, but I really need to ensure that I can stay on 32-bit for now. Does anyone have any experience with this or something similar?

    Read the article

  • VirtualBox in production?

    - by MrG
    I'm planning to move a service which is currently powered by Debian into a VirtualBox. That would allow us to easily port it, i.e. to a faster machine if required. The setup would be:

    - Debian host
      - VirtualBox #1: Debian instance #1 running Apache & the application
      - VirtualBox #2: Debian instance #2 containing the database

    Do you have any experience with a production setup based on VirtualBox? Is it stable and fast enough? Many thanks!
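
    On a headless server the workflow would be VBoxManage-driven (a sketch; the VM names are placeholders). One point in favour of the portability goal: a saved state or an exported appliance moves to a faster host with the guest unchanged:

        # start each guest without a GUI
        VBoxManage startvm "debian-web" --type headless
        VBoxManage startvm "debian-db"  --type headless

        # clean shutdown via ACPI, or freeze the VM to disk before a move
        VBoxManage controlvm "debian-web" acpipowerbutton
        VBoxManage controlvm "debian-db"  savestate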

    Read the article

  • Setting up Tornado with Nginx on Ubuntu 10.04 for production use

    - by DjangoRocks
    Hi all, I understand that there's an nginx configuration file at http://www.friendfeed.com, but I don't really know how to set up Tornado for production use on Ubuntu 10.04 with Nginx. Here's my situation and assumptions:

    1) Assuming my Tornado project is set up as such:

        project/
            src/
            static/
            templates/
            project.py

    and I have installed Tornado by downloading the repository from GitHub and then running sudo python setup.py install.

    2) I've installed Nginx and started it based on the instructions here: http://library.linode.com/web-servers/nginx/installation/ubuntu-10.04-lucid

    My questions are: Where does my nginx configuration file go? Within the src/ folder? After configuring Nginx, how do I start my Tornado project?
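
    On the nginx side, the config does not live in the project at all - on Ubuntu it goes under /etc/nginx/ (e.g. a file in /etc/nginx/sites-available/, symlinked into sites-enabled/). A trimmed sketch of the friendfeed-style setup, with placeholder paths and port:

        upstream tornado {
            server 127.0.0.1:8888;
        }

        server {
            listen 80;
            server_name example.com;

            # serve static files directly; only dynamic requests reach Tornado
            location /static/ {
                root /home/you/project;
            }

            location / {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_pass http://tornado;
            }
        }

    The Tornado app itself is then started as an ordinary process listening on the proxied port (python project.py, however your script reads its port), ideally kept alive by supervisord or an upstart job rather than run by hand.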

    Read the article

  • Is vSphere's Data Recovery appliance 'production-ready'?

    - by Chopper3
    I have a smallish lab environment (16 x ESX4iU1 hosts and VC4U1) that I periodically want to back up. Normally in production we snap to secondary SAN boxes, then have disk-based VTL backups via NetBackup which eventually migrate to off-site removable disks, but this seems like overkill for my own kit. I've spent a bit of time with vSphere's 'Data Recovery' appliance; it was easy enough to set up and I've not really run into any issues with it, but that doesn't mean I trust it fully. Have you had any experiences with it, positive or negative, that would help me decide whether to trust it or pay Symantec for more licences? Thank you in advance.

    Read the article

  • Unable to logoff, disconnect, or reset terminal server user in production environment

    - by l0c0b0x
    I'm looking for some ideas on how to disconnect, log off, or reset a user's session on a 2008 Terminal Server (we're unable to log in as the user either, as the session is completely locked up). This is a production environment, so rebooting the server or doing something system-wide is out of the question for now. Any PowerShell tricks to help us with this? We've tried disconnecting, logging the user off and resetting the session, as well as killing the session's processes, directly from the same terminal server (from Task Manager, Terminal Services Manager and the Resource Monitor), with no results. Help! UPDATE: We ended up rebooting the server, as nothing else we could think of worked. I'll leave this question open, hoping someone might have more information about this issue and its potential fixes.
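
    For reference, the stock command-line tools for this (they ship with Windows and can be run from another machine; the server name and session ID are placeholders):

        :: list the sessions on the affected server, with their IDs
        query session /server:TS01

        :: force a logoff of the stuck session by ID
        logoff 3 /server:TS01 /v

        :: or tear the session down without a graceful logoff
        reset session 3 /server:TS01

    When even these hang, the session is usually pinned by a process stuck in a kernel-mode wait, which is why nothing short of a reboot releases it.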

    Read the article

  • Production Instance: CLOSE_WAIT Connection Issue

    - by rajnikant
    I am using 10 EC2 instances behind 1 ELB, with the ELB configured to map port 80 to 8080 and 443 to 8080. All 10 EC2 instances have Apache Tomcat installed, and the ELB takes around 8,000 to 10,000 requests per minute. I am facing a problem with CLOSE_WAIT connections on the 10 EC2 instances running Tomcat. EC2 instance type: m1.xlarge. When we restart Apache Tomcat, all CLOSE_WAIT connections go away, but that's not a proper way to operate production instances. Please help me out.
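
    Sockets stuck in CLOSE_WAIT mean the peer (here, the ELB closing idle connections) has hung up but the local application never called close() on its end - so the fix usually lies in the webapp (a connection or stream leak) or in aligning Tomcat's keep-alive settings with the ELB's idle timeout (typically 60 seconds), not in the OS. A quick sketch to confirm where they accumulate:

        # count CLOSE_WAIT sockets per local port - expect them on 8080
        netstat -ant | awk '$6 == "CLOSE_WAIT" { n = split($4, a, ":"); print a[n] }' \
            | sort | uniq -c | sort -rn

    If they all sit on 8080, audit the application for unclosed connections and consider setting connectionTimeout / keepAliveTimeout on the Tomcat HTTP connector below the ELB's idle timeout.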

    Read the article

  • Simplest way to shrink transaction log files on a mirrored production database

    - by MGOwen
    What's the simplest way to shrink the transaction log file on a mirrored production database? I have to, as my disk space is running out. I will make a full database backup before I do this, so I don't need to keep anything from the transaction log (right? I take a daily full database backup and will probably never need a point-in-time restore, though I'll keep the option open if I can - that's all the .ldf is really for, correct?). (Hope this isn't an exact duplicate; I read a lot of questions but couldn't find this exact scenario elsewhere.)
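
    A sketch of the usual sequence (names and paths are placeholders). One caveat on the premise: mirroring requires the FULL recovery model, so the log is only truncated by log backups - full database backups alone don't free it, which is often why the .ldf grows in the first place:

        USE MyDatabase;
        GO

        -- find the log file's logical name
        SELECT name, type_desc FROM sys.database_files;
        GO

        -- back up the log so its inactive portion can be reused
        BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase_log.trn';
        GO

        -- then shrink the physical file to a sane target (size in MB)
        DBCC SHRINKFILE (MyDatabase_log, 1024);
        GO

    Scheduling regular log backups afterwards keeps the file from ballooning again.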

    Read the article
