Search Results

Search found 4561 results on 183 pages for 'production'.

  • ATG Live Webcast June 28: Scrambling Sensitive Data in EBS 12 Cloned Environments

    - by BillSawyer
    Securing the Oracle E-Business Suite includes protecting the underlying E-Business data in production and non-production databases. While steps can be taken to provide a secure configuration that limits EBS access, a better approach to protecting non-production data is simply to scramble (mask) the data in the non-production copy. The Oracle E-Business Suite Template for Data Masking Pack can be used in situations where confidential or regulated data needs to be shared with non-production users who need access to some of the original data, but not necessarily every table. Examples of non-production users include internal application developers or external business partners such as offshore testing companies, suppliers or customers. The Oracle E-Business Suite Template for Data Masking Pack is applied to a non-production environment with the Enterprise Manager Grid Control Data Masking Pack. When applied, it creates an irreversibly scrambled version of your production database for development and testing. This ATG Live Webcast is your chance to learn about the Oracle E-Business Suite Release 12.1.3 Template for Data Masking Pack from the experts. The agenda for the webcast includes the following topics:
    - What does data masking do in E-Business Suite environments? (de-identify the data, mask sensitive data, maintain data validity)
    - How can EBS customers use data masking?
    - References
    Join Eric Bing, Senior Director, and Elke Phelps, Senior Principal Product Manager, as they discuss the Oracle E-Business Suite Template for Data Masking Pack.
    Date: Thursday, June 28, 2012
    Time: 8:00 AM Pacific Standard Time
    Presenters: Eric Bing, Senior Director; Elke Phelps, Senior Principal Product Manager
    Webcast Registration Link (preregistration is optional but encouraged)
    To hear the audio feed:
    - Domestic Participant Dial-In Number: 877-697-8128
    - International Participant Dial-In Number: 706-634-9568
    - Additional International Dial-In Numbers Link
    - Dial-In Passcode: 100865
    To see the presentation, the direct-access web conference details are:
    - Website URL: https://ouweb.webex.com
    - Meeting Number: 599097152
    If you miss this webcast, or have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay, and you can find our archive of past webcasts and training here. If you have any questions or comments, feel free to email Bill Sawyer (Senior Manager, Applications Technology Curriculum) at BilldotSawyer-AT-Oracle-DOT-com.

    Read the article

  • MySQL-Cluster or Multi-Master for production? Performance issues?

    - by Phillip Oldham
    We are expanding our network of webservers on EC2 to a number of different regions and currently use master/slave replication. We've found that over the past couple of months our slave has stopped replicating a number of times, which required us to clear the db and initialise the replication again. As we're now looking to have servers in 3 different regions, we're a little concerned about these MySQL replication errors. We believe they're due to auto_increment values, so we're considering a number of approaches to quell these errors and stabilise replication:
    - Multi-Master replication: 3 masters (one in each region), with the relevant auto_increment offsets, regularly backing up to S3.
    - MySQL-Cluster: 3 nodes (one in each region) with a separate management node which will also aggregate logs and statistics.
    After investigating, it seems they both have downsides (replication errors for the former, performance issues for the latter). We believe the cluster approach would allow us to manage and add new nodes more easily than the Multi-Master route, and would reduce/eliminate the replication issues we're currently seeing. But performance is a priority. Are the performance issues of MySQL-Cluster as bad as people say?
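    For reference, a minimal sketch of the "relevant auto_increment offsets" mentioned above, assuming three masters; the server IDs and values are illustrative, not taken from the question:

        # my.cnf on the region-A master (illustrative values)
        [mysqld]
        server-id                = 1
        auto_increment_increment = 3   # total number of masters
        auto_increment_offset    = 1   # unique per master: 1, 2 or 3

        # region-B master: server-id = 2, auto_increment_offset = 2
        # region-C master: server-id = 3, auto_increment_offset = 3

    With a common increment of 3 and distinct offsets, the three masters generate interleaved, non-colliding auto_increment values (1, 4, 7, ..., 2, 5, 8, ..., 3, 6, 9, ...).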

    Read the article

  • SQL SERVER – Unable to DELETE Project in Data Quality Projects (DQS)

    - by pinaldave
    Here is the email which made me write this blog post. When I write a blog post, I write keeping in mind that a developer who is not familiar with the concept will attempt it on a development server. If for any reason you attempt it on any server other than your personal server, you should have complete confidence in your own expertise and understand the risk behind it. Well, let us read the email which I received. I have modified it a bit to remove information identifying the organization and the individual.
    "I just read your blog post on Beginning DQS. I went ahead and followed every single screenshot and it worked fine. I was able to execute the DQS project successfully. However, the same blog post got me in trouble -- serious trouble. After the first successful deployment I went ahead and created a few of my own knowledge bases and projects. I played around a bit and then decided to get back to real work. We had deployed DQS on the production server only, so I experimented on the production server. When I got back to my work, I forgot to close all the windows. My manager found the window open and saw my test projects. He asked me to delete my experiments immediately and said words which I cannot write to you. Here is the problem: I am not able to delete the project which I created earlier. I am able to open it and play with it, but the delete option is disabled and grayed out (see attached image). I believe there is nothing wrong with this project as it was just a test project. Would you please write to my manager that it is not harmful to leave that project there as it is? It is also not using any resources. I think he will believe you."
    As I said, this kind of email makes me uncomfortable. I do not want someone to execute anything on a production server. I often write notes and disclaimers on my posts when something is dangerous to execute on a production server. However, if someone who is not an expert with SQL Server attempts something new on a production server, I think the major issue lies with the person (admin) who gave the new developer permission to the production server. This has to be carefully avoided. Here was my response to the individual:
    "I cannot write anything to your manager, as he has not asked me anything. Honestly, I believe his reaction is correct: you should not have executed anything on the production server without prior approval and testing on a development server. Any R&D must be done on a local box or a development box. I suggest you request your manager to remove access from users who do not need it; if he is a good manager, he may already have done so after this recent event. I also looked at your screenshot. Here is the issue: while you were playing with the project, you may have closed it halfway through, without completing it. For that reason it is locked. You can open it and continue from where you left off. If you do not need the project any more, right-click on it and click on unlock the project. This will enable the DELETE option and you can then delete the project. Next time, be safe out there. It can be dangerous to have admin access to a production server when it is not needed."
    I have not yet heard from him, but I believe he will take my words positively. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Data Quality Services, DQS

    Read the article

  • Can installing URL Rewrite 2.0 on production IIS 7.5 have a negative impact?

    - by Olaf
    We have a Windows 2008 R2 server with IIS 7.5 and a lot of customer websites. Being no sys-admin but a developer, I hate to touch that wonderfully running system. However, I'd like to update the IIS plugin URL Rewrite to version 2.0 via Web Platform Installer. I do not want to do anything that might compromise the server, hence the question: can I expect it to work with at least a 99% chance? I did not find any bug reports or reviews on the web, so I'd like to hear an expert on that.

    Read the article

  • Do you use different wallpaper or background colors for dev, production servers?

    - by crosenblum
    I want to make it easier to distinguish between servers connected to via RDP, without draining resources too much. Do any of you use wallpaper for this, and if so, can you show any examples? Or do you use a custom desktop background color? Or do you use something like BGInfo? Are there any good wallpaper sites, specifically for server wallpaper? To help distinguish what server you are on... Thanks.

    Read the article

  • Using MongoDB + Redis + Apache on the same server in production?

    - by Dayson
    I intend to launch my web app on an 8 GB VPS. It uses MongoDB + Redis for storage/caching and Apache + PHP-FPM for serving requests. Could there be any issues with running Mongo + Redis + Apache on the same server? Would it make more sense to set up 2 x 4 GB VPS servers and keep Mongo on one and Redis + Apache on the other? Should I just start with one server and worry about scaling horizontally later, by delegating the existing server to Mongo in the future (due to its large RAM) and moving the web servers onto multiple smaller VPSes?

    Read the article

  • EMERGENCY! UPDATE statement on critical MySQL production database now running for 18 hours, need help

    - by Tim
    We have a table with 500 million rows. Unfortunately, one of the columns was int(11), a signed int, holding an incrementing value that just rolled over the 2.1 billion magic number. This immediately caused downtime for about 10,000 users. We discussed many solutions and decided that we could safely roll this value back by, say, a billion. But we had to roll it back for every row. Here is what we did:
        update Table1 Set MessageId = case when MessageId < 1073741824 then 0 else MessageId - 1073741824 end;
    I tested this on a table with 10 million rows and it took 11 minutes, so I assumed the larger table would take 550 minutes, or about 9 hours. This was going to be our biggest downtime in 3 years (we're a startup). It's now going on 18 hours. What should we do? Please don't say what we should have done; I think we should have updated a few million rows at a time. Is there a way we can see progress? Could MySQL have hung? Using MySQL 5.0.22. Thanks!
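    In hindsight, a minimal sketch of the chunked approach mentioned above ("a few million rows at a time"), assuming the table has an indexed integer primary key, called id here purely for illustration; the batch bounds are placeholders:

        -- assumption: Table1 has an indexed integer primary key `id`
        -- run repeatedly, sliding the window forward, so each statement
        -- touches only a few million rows and progress stays visible
        UPDATE Table1
        SET MessageId = CASE
                WHEN MessageId < 1073741824 THEN 0
                ELSE MessageId - 1073741824
            END
        WHERE id BETWEEN 1 AND 5000000;
        -- next batch: WHERE id BETWEEN 5000001 AND 10000000, and so on

    For the statement that is already running, SHOW PROCESSLIST at least confirms whether it is still executing rather than hung.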

    Read the article

  • Why does heat production increase as the clock rate of a CPU increases?

    - by Nils
    This is probably a bit off-topic, but the whole multi-core debate got me thinking. It's much easier to produce two cores (in one package) than to speed up one core by a factor of two. Why exactly is this? I googled a bit, but found mostly very imprecise answers from overclocking boards which do not explain the underlying physics. The voltage seems to have the most impact (quadratic), but do I need to run a CPU at a higher voltage if I want a faster clock rate? I'd also like to know exactly why (and how much) heat a semiconductor circuit produces when it runs at a certain clock speed.
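    A back-of-the-envelope sketch of the usual first-order answer, using the standard CMOS dynamic-power model; the cubic claim at the end assumes the supply voltage has to rise roughly in proportion to frequency, which is an approximation rather than something stated in the question:

        % dynamic (switching) power of a CMOS circuit
        P_{\text{dyn}} \approx \alpha \, C \, V^{2} \, f
        % \alpha: activity factor, C: switched capacitance,
        % V: supply voltage, f: clock frequency.
        % If V must scale roughly with f, then P_{\text{dyn}} \propto f^{3},
        % whereas two cores at the same f only double the power.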

    Read the article

  • Should Production Windows Web Servers (IIS & SQL) be in a domain?

    - by tlianza
    We have a few web servers and a few database servers. To date, they've been standalone machines that are not part of a domain. The web servers don't talk to each other, and the web servers talk to the database servers via SQL auth. My concerns with putting the machines in a domain together were:
    - Added complexity: it's one more "thing" running, and doing "things" that could go wrong.
    - Risk: if a domain controller fails, am I now putting other machines at risk?
    However, in certain scenarios it does seem convenient for them to be on a domain, sharing credentials. For example, if I want to give the "services" control on one machine access to another machine (because Remote Desktop craps out), I need to go in and assign privileges on multiple machines - something that I believe Active Directory and domain accounts are meant to simplify. My question: I'm sure there are things I'm not considering here. Is there a best practice?

    Read the article

  • How to handle SalesForce WSDL files for sandbox and production sites in ASP.Net?

    - by Traveling Tech Guy
    I need to authenticate users and get info about them from an ASP.Net application. Since I have 2 sites (sandbox, production) and 2 org IDs, I needed to generate 2 SalesForce WSDL files. I diffed the 2 files (each about 600 KB in size) and while they are 95% the same, there are enough differences strewn all over the place for me to need to use them both. I added both as web references to my solution, and here's where my problem starts. Obviously, I cannot use both references in the same file, as they contain the same classes/functions. I had to write a quick-and-dirty solution over the weekend, so I just created 2 classes, each using a different web reference but otherwise with the exact same functionality, and I use the appropriate one based on the URL the user is coming from. This works well, but strikes me as a bad (read: quick-and-dirty) solution. My question: is there any way to do one or more of the following?
    - Change the web reference on the fly?
    - Use both web references in the same file, but put one in a different namespace?
    - Find a better solution to the whole situation?
    I also end up with a huge XmlSerializer.dll (3 MB!), probably due to using both huge WSDL files. Thanks for your time.
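    A minimal sketch of the second option in the list above, assuming the two web references were added under namespaces called MyApp.SalesforceProd and MyApp.SalesforceSandbox; those names, the SforceService proxy class and the method shown are illustrative, not taken from the question:

        // Each web reference already lives in its own namespace (the
        // "Web reference name" chosen when adding it), so aliases let
        // both proxies appear in one file without clashing.
        using ProdSF = MyApp.SalesforceProd;
        using SandboxSF = MyApp.SalesforceSandbox;

        namespace MyApp
        {
            public static class SalesforceGateway
            {
                // Pick the proxy at runtime based on where the user came from.
                public static string GetEndpointUrl(bool isSandbox)
                {
                    if (isSandbox)
                    {
                        SandboxSF.SforceService svc = new SandboxSF.SforceService();
                        return svc.Url;
                    }
                    ProdSF.SforceService prod = new ProdSF.SforceService();
                    return prod.Url;
                }
            }
        }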

    Read the article

  • How to transform a production to LL(1) for a list separated by a semicolon?

    - by Subb
    Hi, I'm reading this introductory book on parsing (which is pretty good btw) and one of the exercises is to "build a parser for your favorite language." Since I don't want to die today, I thought I could write a parser for something relatively simple, i.e. a simplified CSS. Note: this book teaches you how to write an LL(1) parser using the recursive-descent algorithm. So, as a sub-exercise, I am building the grammar from what I know of CSS. But I'm stuck on a production that I can't transform into LL(1):
        //EBNF
        block = "{", declaration, {";", declaration}, [";"], "}"
        //BNF
        <block> ::= "{" <declaration> "}"
        <declaration> ::= <single-declaration> <opt-end> | <single-declaration> ";" <declaration>
        <opt-end> ::= "" | ";"
    This describes a CSS block. Valid blocks can have the form:
        { property : value }
        { property : value; }
        { property : value; property : value }
        { property : value; property : value; }
        ...
    The problem is the optional ";" at the end, because it overlaps with the starting character of {";", declaration}, so when my parser meets a semicolon in this context, it doesn't know what to do. The book talks about this problem, but in its example the semicolon is obligatory, so the rule can be modified like this:
        block = "{", declaration, ";", {declaration, ";"}, "}"
    So, is it possible to achieve what I'm trying to do using an LL(1) parser?
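    One possible rewrite, sketched in the question's own EBNF notation (my reading, not the book's answer): always consume the ";" after a declaration, and only then decide, from the next token, whether another declaration follows.

        //EBNF -- generates exactly the same blocks as the original rule
        block     = "{", declaration, decl-rest, "}"
        decl-rest = [ ";", [ declaration, decl-rest ] ]

    This stays LL(1) as long as neither ";" nor "}" can start a declaration: at the outer option a single lookahead token (";" vs "}") decides whether to continue, and at the inner option a property name vs "}" decides whether another declaration follows the semicolon.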

    Read the article

  • Getting CS1502 compiler error in the dev environment but not in production

    - by nw
    When I try to run my ASP.NET app from my development environment I get the following error message: Compiler Error Message: CS1502: The best overloaded method match for 'mmars.Printing.printFunctions.SetPrintSummaryProperties(mmars.contextInfo, ref mmars.Printing.printObjSummary)' has some invalid arguments. When I publish and run on our production server I don't get this error. It seems to compile fine when I build from the build menu (in fact, if I change the second argument of the SetPrintSummaryProperties call below, I get a compiler error in Visual Studio), but now I've suddenly started getting this error message at runtime. So another question I have, in addition to getting rid of the error, is: why is the .NET development server even trying to do JIT compilation on my project if it is already compiled into a DLL?
        Printing.printObjSummary myPrintObj = new Printing.printObjSummary();
        Printing.printFunctions.SetPrintSummaryProperties(ci, ref myPrintObj);
        printObjects.Add(myPrintObj);
    This seems to have just suddenly appeared from nowhere today and it's extremely frustrating. Also, though there are no warnings at compile time, when I get redirected to the page with that first compilation error there are many warnings like the following: Warning: CS0436: The type 'mmars.MMARSSummaryDataItem' in 'c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\3dad423c\40569048\App_Code.b0rgpkzr.4.cs' conflicts with the imported type 'mmars.MMARSSummaryDataItem' in 'c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\3dad423c\40569048\assembly\dl3\7179c19a\345f948c_ece7ca01\mmars.DLL'. Using the type defined in 'c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\3dad423c\40569048\App_Code.b0rgpkzr.4.cs'. What's the deal with that? Is the web server complaining about a name conflict between the type in the source file and the same type in the DLL built from that source?

    Read the article

  • Restoring dev db from production: Running a set of SQL scripts based on a list stored in a table?

    - by mattley
    I need to restore a backup from a production database and then automatically reapply SQL scripts (e.g. ALTER TABLE, INSERT, etc.) to bring that db schema back to what was under development. There will be lots of scripts, from a handful of different developers, and they won't all be in the same directory. My current plan is to list the scripts, with their full filesystem paths, in a table in a pseudo-system database, then create a stored procedure in this database which will first run RESTORE DATABASE and then run a cursor over the list of scripts, building a SQLCMD command string for each script and executing it with xp_cmdshell. The sequence of cursor-sqlstring-xp_cmdshell-sqlcmd feels clumsy to me. Also, it requires turning on xp_cmdshell. I can't be the only one who has done something like this. Is there a cleaner way to run a set of scripts that are scattered around the filesystem on the server? Especially a way that doesn't require xp_cmdshell?
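    For reference, a minimal sketch of the cursor/xp_cmdshell plan described above; the table and column names (ScriptList, RunOrder, ScriptPath) and the target database name are illustrative, not from the question:

        -- assumption: ScriptList(RunOrder int, ScriptPath nvarchar(4000))
        -- holds the full paths in the order the scripts must run
        DECLARE @path nvarchar(4000), @cmd nvarchar(4000);
        DECLARE script_cursor CURSOR FOR
            SELECT ScriptPath FROM ScriptList ORDER BY RunOrder;
        OPEN script_cursor;
        FETCH NEXT FROM script_cursor INTO @path;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- -b makes sqlcmd return a non-zero exit code if a script fails
            SET @cmd = 'sqlcmd -S . -d DevDb -b -i "' + @path + '"';
            EXEC master..xp_cmdshell @cmd;
            FETCH NEXT FROM script_cursor INTO @path;
        END
        CLOSE script_cursor;
        DEALLOCATE script_cursor;

    The usual way to avoid xp_cmdshell is to drive the same list from outside the database, for example a small PowerShell or batch loop that calls sqlcmd for each path, so the server itself never has to shell out.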

    Read the article

  • How to deploy to multiple redundant production servers with "cap deploy"?

    - by Chad Johnson
    Capistrano is working great to deploy to a single server. However, I have multiple production API servers for my web application, and when I deploy, my code needs to go to every API server at once. Specifying each server manually is NOT the solution I am looking for (e.g. I don't want to do "cap api1 deploy; cap api2 deploy"). Is there a way, using Capistrano, to deploy to all servers at once with just a simple "cap deploy"? I'm wondering what changes I would need to make to a typical deploy.rb file, whether I'd need to create a separate file for each server, and whether and how the Capfile would need to be changed. Also, I need to be able to specify a different deploy_to path for each server. And ideally, I wouldn't have to repeat things in different config files for different servers (e.g. wouldn't have to specify :repository, :application, etc. multiple times). I have spent hours searching Google on this and looking through tutorials, but I have found nothing helpful. Here is a snippet from my current deploy.rb file:
        set :application, "testapplication"
        set :repository, "ssh://domain.com//srv/hg/#{application}"
        set :scm, :mercurial
        set :deploy_to, "/srv/www/#{application}"
        role :web, "domain.com"
        role :app, "domain.com"
        role :db, "domain.com", :primary => true, :norelease => true
    Should I just use the multistage extension and do this?
        task :deploy_everything do
          system "cap api1 deploy"
          system "cap api2 deploy"
          system "cap api3 deploy"
        end
    That could work, but I feel like this isn't what this extension is meant for...
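    A minimal sketch of the usual answer (list every host under the relevant roles in one deploy.rb, so a single "cap deploy" runs against them all); the api1/api2 hostnames are illustrative:

        # deploy.rb -- one "cap deploy" now targets every server listed below
        set :application, "testapplication"
        set :repository,  "ssh://domain.com//srv/hg/#{application}"
        set :scm,         :mercurial
        set :deploy_to,   "/srv/www/#{application}"

        # a role can name several servers; they are deployed to in parallel
        role :web, "api1.domain.com", "api2.domain.com"
        role :app, "api1.domain.com", "api2.domain.com"
        role :db,  "domain.com", :primary => true, :norelease => true

    Note this keeps a single deploy_to for every host; wanting a different path per server is the part that usually pushes people toward the multistage extension.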

    Read the article

  • Git development-to-production workflow – how to set up repo?

    - by Blixt
    I'm working on a relatively small, but fast-changing project (a web application) with a few other developers. We're using Git for source control. We started out creating a stable branch which is what is deployed to the live production web server. The master branch is what is deployed to a secondary "unstable" server for testing purposes. Whenever we felt that the master branch was ready to go live, we merged it into stable. However, we came to a point where we wanted one of the later master commits, but not some of the commits before it, so we used cherry-pick to pull that change into stable. This creates a new commit with the same change as the one in master, and it feels as if we're losing the nice history that Git otherwise provides. Are there better ways of handling this type of unstable/stable deployment model? One solution I thought of was using feature branches, and only ever merging a feature branch into master once we want it to go live. Then we'll tag every deployment instead of having a stable branch.
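    A minimal sketch of the feature-branch-plus-tags model proposed at the end, with made-up branch and tag names:

        # work happens on short-lived feature branches off master
        git checkout -b feature/search-filters master
        # ... commits ...
        git checkout master
        git merge --no-ff feature/search-filters   # merge only when it should go live

        # mark each production deployment with a tag instead of a stable branch
        git tag -a deploy-2012-06-28 -m "production deploy"
        git push origin master --tags

    This keeps history intact per feature and makes "what is in production" a question of which tag is deployed, rather than which cherry-picks landed on a stable branch.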

    Read the article

  • StarTeam: what should be the "trunk", development or release?

    - by Nix
    I have the unfortunate opportunity of source control via Borland's StarTeam. It unfortunately does very few things well, and one supreme weakness is its view management. I love SVN and come from an SVN mindset. Our issue is that post production release we are spending countless hours merging changes into a "production support" environment. Please do not harass me; this was not my doing. I inherited it and am trying to present a better way of managing the repository. It is not an option to switch to a different SCM tool. Current setup:
    - Product.1.0 (TRUNK: current production code; pending bug fixes live at this level)
    - Product.2.0 (true trunk: anything checked in gets tested, then released in the next production cycle; a lot of changes occur in this view)
    My proposal is going to be to swap them: have all development done on the trunk (Production), tag on releases, and as needed create child views to represent production support bug fixes:
    - Production
    - Production.2.0.SP.1
    I cannot find any documentation to support the above proposal, so I am trying to get feedback on whether or not the change is a good idea and whether there is anything you would recommend doing differently.

    Read the article

  • Setting up staging with multiple SVN repositories

    - by Kapil Sharma
    We are a startup, setting up new environments for a product to be released soon. The planned server structure, with the planned release flow, is shown in the image below. It ideally has a local server (or staging server, shown in green) in the local office, without a public IP address, and a production server (red) at Amazon EC2. Both the local and production servers have their own SVN copy. Management here wants the production server updated from the production SVN, without giving access to it to developers (including freelancers/contract employees). So for developers there is a local SVN on the local server. Another purpose of the local SVN is to keep a copy of the code on a local server which is under our direct control. Although there are some technical concerns, like how code on the local server will be updated from the local SVN and committed to the production SVN, the bigger question is: is this structure correct? The major requirement remains: don't give developers access to the production SVN. What are other possible options to achieve that? Another minor question, if suitable here: if the above structure is correct, is it possible for an SVN checkout to be updated from one SVN (the local SVN) but commit to the other (the production SVN)? If yes, how? Edit: an answer has been accepted, but for the bounty I'm still looking for an answer to: is this structure correct, and what are its pros/cons? The technical solution is already provided by the accepted answer.

    Read the article

  • Slow performance of MySQL database on one server and fast on another one, with similar configurations

    - by Alon_A
    We have a web application that runs on two GoDaddy servers. We experience slow performance on our production server, although it has stronger hardware than the testing one, and it is dedicated. I'll start with the configurations.
    Testing: CentOS Linux 5.8, Linux 2.6.18-028stab101.1 on i686; Intel(R) Xeon(R) CPU L5609 @ 1.87GHz, 8 cores; 60 GB total, 6.03 GB used; Apache/2.2.3 (CentOS); MySQL 5.5.21-log; PHP 5.3.15
    Production: CentOS Linux 6.2, Linux 2.6.18-028stab101.1 on x86_64; Intel(R) Xeon(R) CPU L5410 @ 2.33GHz, 8 cores; 120 GB total, 2.12 GB used; Apache/2.2.15 (CentOS); MySQL 5.5.27-log - MySQL Community Server (GPL) by Remi; PHP 5.3.15
    We are running the same code on both servers. The problem: we have a function that executes ~30,000 PDO exec commands. On our testing server it takes about 1.5-2 minutes to complete, and on our production server it can take more than 15 minutes to complete (as seen in qcachegrind). Researching the problem, we checked the live graphs in phpMyAdmin and discovered that the MySQL server on our testing server was performing at a steady level of 1,000 executed statements per 2 seconds, while the slow production MySQL server managed only 250 statements per 2 seconds, and not steadily at all, jumping between 0 and 250 every second; you can clearly see it in the graphs for the testing and production servers. Comparing the configurations of both MySQL servers (fast testing on the left, slow production on the right), the differences are highlighted, but I can't find anything that could cause such a difference in behavior, as the configs are mostly the same. Maybe you can see something that I can't see. Note that our tables are all InnoDB, so the MyISAM differences are (probably) not relevant. Maybe it is the MySQL Community Server (GPL) installed on the production server that causes the slow performance? Or maybe it needs to be configured differently for 64-bit? I'm currently out of ideas...

    Read the article

  • Managing persistent data on an Amazon EC2 web server

    - by Derek
    I've just started trying out Amazon's EC2 service for running an ASP.NET web app which uses a SQL Server 2005 Express database. I have some questions about how to configure and operate it best for reliability, and I'm hoping to tap into some collective wisdom here as this is my first foray into EC2. Here's how I have it configured currently:
    - OS: Windows 2003
    - SQL Server Express 2005
    - Web content stored on an EBS volume (E drive)
    - Database data stored on an EBS volume (E drive)
    - Database backups to the C drive, then copied off to S3
    - Elastic IP address attached to the production instance
    Now, when I make a change to the OS configuration, I make a new AMI using the bundle feature. Unfortunately, I found that this results in significant downtime while the bundle is created and the new instance is started. It seems that when I'm ready to make a new AMI, I should:
    1. Start up a new temporary instance.
    2. Detach the EBS volume from the production instance.
    3. Detach the IP address from the production instance.
    4. Attach the IP address to the temporary instance.
    5. Attach the EBS volume to the temporary instance.
    6. Create an AMI from the production instance.
    7. After the production instance restarts, reverse the attach/detach steps to put it back in production.
    Is this the right order of events to prevent any chance of corrupting the EBS volume? Will the EBS volume become corrupt if I detach it while a database write is taking place? Should I snapshot the EBS volume of the production instance and attach that to the temporary instance instead? Or could taking a snapshot of the EBS volume while it's in use cause corruption? Any suggestions to improve reliability and operations?

    Read the article

  • GIT and Django Projects

    - by Garfonzo
    I have two servers, a Dev server and a Production server. The Production server runs a live Django site, while the Dev server has a copy of the Django project. I use the Dev server to work on the Django site, make improvements, fix bugs, etc. Once I am satisfied with how the Dev version is working, I move the whole Django directory from the Dev server and replace the same directory on the Production server. The two servers are not on the same LAN, so the process is not straightforward. There are a few issues with this so far:
    - Moving the whole directory is laborious and time consuming.
    - If I only change a few files, it is even more tedious to replace just those files than the whole directory, since the project is getting fairly large and I worry that I'll miss something.
    - I often run into permission issues after I've moved things.
    - It's super inefficient, and, due to lack of time, I haven't bothered figuring out a new method.
    Now it's just getting out of hand and I need to address the situation. I am thinking I need to move to a Git repository for this process. But my question is how would I set this all up? Do I host the repository on the Production server, pull from the Dev server, do work, then commit? Then I would pull from the Production server (the same server the repo is hosted on) to run the current working version? Do I host the repo on the Dev server, pulling from the same server to do work on the repo, then pull a working version onto the Production server? Should I be hosting the repo on a different server than the Production and Dev servers (a third server)? Are there any special considerations with Django and repos that I need to worry about? Thanks for the help :)
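    A minimal sketch of one common layout (a bare repository that both servers talk to over SSH); the hostnames and paths are made up for illustration:

        # one-time: create a bare repo on the Dev server (or a third host)
        ssh dev.example.com "git init --bare /srv/git/mysite.git"

        # on the Dev server working copy: publish work to the bare repo
        git remote add origin ssh://dev.example.com/srv/git/mysite.git
        git push -u origin master

        # on the Production server: deploy by cloning once, then pulling
        git clone ssh://dev.example.com/srv/git/mysite.git /srv/www/mysite
        cd /srv/www/mysite && git pull origin master

    The live site then only ever pulls a state that has already been pushed from Dev, and only the changed files travel over the WAN, which sidesteps most of the copy-the-whole-directory problems above.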

    Read the article

  • Adding a clustered index to a SQL table: what dangers exist for a live production system?

    - by MoSlo
    Right, keep in mind I need to describe this while abstracting away all possible confidential info: I've been put in charge of a 10-year-old transactional system in which the majority of the business logic is implemented at database level (triggers, stored procedures, etc.). Win2000 server, MSSQL 2000 Enterprise. No immediate plans for replacing/updating the system are being considered :( The core process is a program that executes transactions - specifically, it executes a stored procedure with various parameters, let's call it sp_ProcessTrans. The program executes the stored procedure at asynchronous intervals. By itself, things work fine. But there are 30 instances of this program on remotely located workstations, all of them asynchronously executing sp_ProcessTrans and then retrieving data from the SQL server (execution is pretty regular, ranging from 0 to 60 times a minute, depending on what items the program instance is responsible for). Performance of the system has dropped considerably with 10 years of data growth; the reason is deadlocks, and specifically deadlock wait times. The deadlock is on the Employee table. I have discovered:
    - In sp_ProcessTrans' execution, it selects from the Employee table 7 times (don't ask).
    - The select is done on a field that is NOT the primary key.
    - No index exists on this field, so a table scan is performed. 7 times. Per transaction.
    So the reason for deadlocks is clear. I created a non-unique ordered clustered index on the field (the field looks good: almost unique, NUM(7), very rarely changes). Immediate improvement in the test environment. The problem is that I cannot simulate the deadlocks in a test environment (I'd need 30 workstations, and I'd need to simulate 'realistic' activity on those stations, so virtualisation is out). I need to know if I must schedule downtime. Creating an index shouldn't be a risky operation for MSSQL, but is there any danger (data corruption in transactions/select statements, extra wait time, etc.) in creating this index on the production database while transactions are still taking place? (I can select a time when transactions are fairly quiet across the 30 stations.) Are there any hidden dangers I'm not seeing? (I'm not looking forward to needing to restore the DB if something goes wrong; restoring would take a lot of time with 10 years of data.)
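    For concreteness, the kind of statement being discussed, with made-up column and index names; note that SQL Server 2000 has no online index build (that option arrived in later versions), so the table is effectively unavailable while this runs:

        -- illustrative names; building a clustered index rewrites the table,
        -- so Employee is locked against the 30 workstations for the duration
        CREATE CLUSTERED INDEX IX_Employee_LookupField
            ON dbo.Employee (LookupField);

    That lock, rather than corruption, is the realistic risk: in-flight calls to sp_ProcessTrans will block (or time out) until the build finishes, which is why a quiet window or a short scheduled outage is still the safer choice.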

    Read the article
