Search Results

Search found 7957 results on 319 pages for 'production databases'.

  • sa account gets locked out frequently

    - by bala
    I have SQL Server 2008 installed, with about 6 databases created and running on the server. I often get a message like "SA account is locked out". Is there any specific reason why this account gets locked out? Is there anywhere else I should check for the cause? (I checked the Event Viewer but could not get much information.) Edit: I found this information in the Event Viewer backups. Is this the cause? "SQL Server failed to communicate with filter daemon launch service (Windows error: The service cannot be started, either because it is disabled or because it has no enabled devices associated with it.). Full-Text filter daemon process failed to start. Full-text search functionality will not be available."
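
    The lockout itself usually comes from repeated failed sa logins tripping the Windows password policy ("Enforce password policy" on the login), not from the full-text error, which is a separate, unrelated failure. A minimal sketch for checking and clearing the lockout from the command line, assuming you can connect with another sysadmin login (server name and new password are placeholders):
      # Is sa currently locked out?
      sqlcmd -S localhost -E -Q "SELECT LOGINPROPERTY('sa', 'IsLocked') AS IsLocked;"
      # Unlock it (UNLOCK has to be combined with a password reset)
      sqlcmd -S localhost -E -Q "ALTER LOGIN sa WITH PASSWORD = 'N3wStr0ngPassw0rd' UNLOCK;"
      # Optionally stop the Windows lockout policy from applying to sa at all
      sqlcmd -S localhost -E -Q "ALTER LOGIN sa WITH CHECK_POLICY = OFF;"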

  • How do I resolve the error "The file exists" when restoring a cube backup?

    - by Ant
    I'm trying to restore a cube backup (a .abf file) using SQL Server Management Studio, but I'm getting the error message: "The following system error occurred: The file exists. . (Microsoft SQL Server 2005 Analysis Services)" (yes, there really are two dots). Does anyone know how to resolve this so I can restore the backup? Here are the steps I'm using:
      1. Open Microsoft SQL Server Management Studio
      2. Make the connection to the AS server
      3. Right-click on the Databases node on the server tree view
      4. Choose Restore...
      5. Type in a new database name in Restore database
      6. Select the backup file in From backup file
      7. Enter the correct password
      8. Optionally tick Allow database overwrite (it happens both ways)
      9. Press OK -- get the above error message

  • Get a file from a load balanced server in Windows Server

    - by Leandro
    I have a load-balanced server in a production environment for my application. The server is on Windows Server 2008 R2. I'm running a web application that creates and saves a file into a folder on the web path, so I need to create a job that copies this file to another server. The main idea is that a file watcher checks for the file and then copies it instantly. But how can I know which server the file is on? Please avoid "why don't you" answers; I'd like a direct answer, if there is one.

  • How can I set up VLANs in a way that won't put me at risk for VLAN hopping?

    - by hobodave
    We're planning to migrate our production network from a VLAN-less configuration to a tagged VLAN (802.1q) configuration. I have a diagram summarizing the planned configuration. One significant detail is that a large portion of these hosts will actually be VMs on a single bare-metal machine. In fact, the only physical machines will be DB01, DB02, the firewalls and the switches. All other machines will be virtualized on a single host. One concern that has been raised is that this approach is complicated (overcomplicated, by implication), and that the VLANs are only providing an illusion of security, because "VLAN hopping is easy". Is this a valid concern, given that multiple VLANs will be used for a single physical switch port due to virtualization? How would I set up my VLANs appropriately to prevent this risk? Also, I've heard that VMware ESX has something called "virtual switches". Is this unique to the VMware hypervisor? If not, is it available with KVM (my planned hypervisor of choice)? How does that come into play?

  • How to maintain an EXT3 filesystem

    - by Reinoud van Santen
    Lately I have had several servers encounter a write error on an EXT3 filesystem and, as a result, remount the filesystem read-only. Understandably, on a production server this causes severe problems. On reboot the filesystems were fixed, but on large partitions this takes a lot of time. After the filesystem was fixed, correcting several errors, the server ran well again. What can I do to minimize the rate at which this happens? I can't seem to find much information on periodically checking the filesystem(s) on a running server. Is it possible to change the way in which EXT3 / the system handles write errors? What would be a sane solution? All the servers this concerns are running CentOS Linux 5.4 or 5.5.
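
    A minimal sketch of the ext3 tunables involved, assuming the affected filesystem is on /dev/sda1 (adjust the device to your layout):
      # Show current settings, including error behaviour and fsck intervals
      tune2fs -l /dev/sda1
      # Force a periodic check every 30 mounts or every month, whichever comes first
      tune2fs -c 30 -i 1m /dev/sda1
      # Change what ext3 does when it hits an error: continue, remount-ro or panic
      tune2fs -e remount-ro /dev/sda1
      # Schedule a full fsck on the next reboot instead of waiting for an error
      touch /forcefsck
    A full fsck on a mounted, running filesystem isn't safe, so the periodic checks still happen at boot time; the -c/-i knobs only control how often that is.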

  • Backup and restore MySQL database without system access

    - by Sencerd
    Hi guys, I am trying to move a database from one provider to another. The problem is that I don't have system access at either end (i.e. no SSH), so I cannot run mysqldump on the servers themselves. I have already tried using MySQL Administrator; the backup took about 45 minutes, but when it came to restoring it was moving at a snail's pace and estimating 12+ hours. This is a live app, so I need to keep the downtime to an absolute minimum. The database consists of 35 tables, a mixture of MyISAM and InnoDB, and the whole thing comes to about 4.4 GB. The source and destination databases are both running on very powerful servers. Any suggestions on a quick way of doing this will be gratefully received. Thanks
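
    If both providers allow remote MySQL connections (TCP port 3306 reachable from your workstation), mysqldump does not need shell access on either server; it can run anywhere the client tools are installed. A rough sketch, with host names, user names and the database name as placeholders:
      # Dump from the old provider (no SSH needed, only a MySQL connection)
      mysqldump -h old-provider.example.com -u olduser -p \
          --quick --routines --single-transaction appdb > appdb.sql
      # Load into the new provider
      mysql -h new-provider.example.com -u newuser -p appdb < appdb.sql
    Note that --single-transaction only gives a consistent snapshot for the InnoDB tables; with MyISAM tables mixed in, you may still want a short write freeze on the application while the dump runs.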

  • phpredis + pconnect

    - by john smith
    I am using phpredis on my PHP-based website. The web server I am using is the simplest apt-get Apache installation, with no configuration involved, as this is only a development environment. The issue I am facing is that, basically, while using phpredis there is no difference between the "connect" and "pconnect" commands: they both create a new connection every time, as I can see from the "info" command in redis-cli. Now, I am pretty sure it is because of the Apache configuration and the fact that it probably (most likely) is a multi-threaded environment, and therefore can't establish a single connection. My question is basically for when I move to production: what would be the best choice of web server to avoid this problem? I remember using lighttpd with thousands of users and still getting only a very few (like 2 or 3) connections on MongoDB. Any ideas? Thanks in advance.

  • Rsync plugin to many local wordpress installs via script or cli

    - by Nick Abbey
    I am maintaining a large number of WordPress installs on a production server, and we are looking to deploy InfiniteWP for managing these installs. I am looking for a way to script the distribution of the plugin folder to all of these installs. On server wp-prod, all sites are stored in /srv/<sitename>/site/. The plugin needs to be copied from ~/iws-plugin to /srv/<sitename>/site/wp-content/plugins/. Here's some pseudo code to explain what I need to do:
      array dirs = <all folders in /srv>
      for each d in dirs
        if exists "/srv/d/site/wp-content/plugins"
          rsync -avzh --log-file=~/d.log ~/plugin_base_folder /srv/d/site/wp-content/plugins/
        else
          touch d.log
          echo 'plugin folder for "d" not found' >> ~/d.log
        end
      end
    I just don't know how to make it happen from the CLI or via bash. I can (and will) tinker with a bash or ruby script on my test server, but I'm thinking the command-line-fu here on SF is strong enough to handle this issue much more quickly than I can hack together a solution. Thanks!
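
    A minimal bash sketch of that loop, assuming the plugin folder is ~/iws-plugin and the layout is exactly /srv/<sitename>/site/ as described (paths and log naming are taken from the question; adjust to taste):
      #!/bin/bash
      # Push the plugin into every WordPress install under /srv
      for d in /srv/*/; do
          site=$(basename "$d")
          plugins="/srv/$site/site/wp-content/plugins"
          if [ -d "$plugins" ]; then
              rsync -avzh --log-file="$HOME/$site.log" "$HOME/iws-plugin" "$plugins/"
          else
              echo "plugin folder for \"$site\" not found" >> "$HOME/$site.log"
          fi
      done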

  • How do I apply a minor upgrade to MySQL on Windows?

    - by TruMan1
    I currently have MySQL 5.1.35 installed on a Windows 2008 server via the MSI installer. I need to upgrade to the latest 5.1.44 to fix a bug, but the docs were not clear on how to do this. I ran the MSI installer, but it did not give me any upgrade option, so I quit it. I am wary because it's a production machine with many PHP websites running on it. Also, my data directory is not the default one; it's kept on another partition. How can I upgrade it? Thanks for any help.

  • Odd domain switching behavior in Firefox and Chrome

    - by Jeremy Detrempe
    We have different development servers and a production server. Testing is done on the development servers. As a QA engineer, I'm switching between these servers quite often throughout the day. In Chrome, I sometimes need to reload a page a few times to get it to pull from the newly switched server. In Firefox, I sometimes need to quit the browser to get it to pull from the newly switched server. (We have small tags that indicate which server you are pulling from, which is how I can tell in-browser.) Why does that happen? I'd love to know how that happens (and maybe what it's called?) and what the best way to deal with it is. (I know that Firefox has an extension for domain switching; is that the best solution?)

  • SQL Server backup/restore error: The Media Family on Device is Incorrectly Formed.

    - by Chris
    Basically, I'm having this issue: http://www.sqlcoffee.com/Troubleshooting047.htm What I'm doing is running a script I found online (http://pastebin.com/3n0ZfybL) to do a full backup, then rar'ing up the file and moving it to my computer. The CRC of the backup file inside the rar is correct on both computers, so there is no problem with the data being corrupted during the transfer. But when I go and try to restore the database on my dev computer here, I get the errors "sql server cannot process this media family" ... "msg 3013". Why is this happening? I'd test the backup on the server I'm getting it from, but it's a production server.
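
    The "media family" error usually means the destination server can't read the backup at all: either the file was damaged in transit (for example an ASCII-mode FTP transfer) or it was written by a newer SQL Server version than the one trying to restore it. A rough sketch for interrogating the file on the dev box, assuming it sits at C:\temp\db.bak:
      # Read the backup header; version and corruption problems show up here
      sqlcmd -S localhost -E -Q "RESTORE HEADERONLY FROM DISK = N'C:\temp\db.bak'"
      # Verify the backup without actually restoring it
      sqlcmd -S localhost -E -Q "RESTORE VERIFYONLY FROM DISK = N'C:\temp\db.bak'"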

  • Load Balancing a UDP server

    - by Hellfrost
    Hello StackOverflow, I have a UDP server; it is a central part of my business process. In order to handle the loads I'm expecting in the production environment, I'll probably need 2 or 3 instances of the server. The server is almost entirely stateless; it mostly collects data, and the layer above it knows how to handle the minimal amount of stale data that can arise from the multiple server instances. My question is: how can I implement load balancing between the servers? I would prefer to distribute the requests as evenly as possible between the servers. I would also like to have some affinity: I mean, if client X was routed to server Y, then I want all of X's subsequent requests to go to server Y, as long as that is sensible and does not overload Y. By the way, it is a .NET system... what would you recommend?

  • Looking for a good Web Server that is cheap

    - by SoLoGHoST
    I am a Project Manager, and former Lead Developer, for a software portal system that requires forum software to run. I am in need of a server that is cheap, reliable, and supports the latest PHP (5.2+), MySQL, unlimited e-mail accounts (preferably), cPanel, and multiple subdomains (at least 3). Currently I am paying $34.95 USD/month (approx. $420 USD/year). This is too high for me to pay to keep the site running. I just recently became Project Manager, in charge of finances, and I'm extremely concerned for the future of Dream Portal. With those prices I'm not sure I'll be able to keep it running for much longer. Can someone please tell me of a good server that meets all of the requirements I listed above and is cheaper on a yearly basis? Note: we are currently on a dedicated server with limited disk space at 15000 MB (15 GB), monthly bandwidth of 500000 MB, a 50-email limit, a 20-subdomain limit, 30 FTP accounts, and 25 SQL databases.

  • AXFR problem using gradwell secondary DNS

    - by Roaders
    Hi all, I use Gradwell.com to provide secondary DNS, but I keep getting e-mails along the lines of the following saying that it's not working: "You have asked us to provide a secondary DNS service for the following domain(s). Unfortunately, the primary DNS server(s) you specified are not permitting the necessary zone transfers from our servers, or they are not answering "SOA" queries for your domain correctly." I have gone through the support procedure and they weren't that helpful. They have suggested the following: "Our secondline team have suggested setting the AXFR to use another machine. This will ensure that the transfer is not locked down to one machine and should allow any machine to make the request." I don't really know what AXFR is, and I only have one production machine, so I can't set the AXFR to use another one! In previous support correspondence we confirmed that I am allowing transfers to the correct IP and that I have the correct ports open on the firewall. I am running Windows Server 2003. What can I do to try and get these zone transfers working? Thanks
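
    An AXFR is just a full zone transfer: the secondary asks your primary for the SOA record and then for the whole zone. A quick way to test what Gradwell's servers will see, run from any machine with dig (domain and name-server names are placeholders):
      # Can the primary answer an SOA query for the zone?
      dig @ns1.example.com example.com SOA
      # Will it hand over the whole zone (AXFR) to this client?
      dig @ns1.example.com example.com AXFR
    On Windows Server 2003 the list of hosts allowed to transfer the zone lives in the DNS console, under the zone's properties on the Zone Transfers tab; the secondary provider's server addresses need to be allowed there.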

  • Determine / set Puppet environment

    - by quickshiftin
    I'm trying to determine what Puppet thinks the environment is on my agent nodes. Per the documentation I've configured the agent's environment in /etc/puppet/puppet.conf as such:
      [agent]
      environment = development
    In order to view the environment I've found this code to add an environment fact to facter:
      require 'puppet'
      Facter.add("environment") do
        setcode do
          Puppet[:environment]
        end
      end
    However, on one of my agent nodes, if I run sudo facter -p environment, the result is production. I've tried to manually set the environment temporarily via sudo puppet agent --environment development, however the result from facter is the same. Any idea what's going on?
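
    One thing worth ruling out, as a sketch: standalone facter does not necessarily pick up the [agent] section of puppet.conf, so Puppet[:environment] can fall back to its default of production even though the agent itself runs with development. Asking Puppet what it resolved is more direct (both commands are read-only):
      # Newer Puppet versions
      puppet config print environment --section agent
      # Older versions
      puppet agent --configprint environment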

  • How to Script a backup for each database on an MSSQL Engine?

    - by Geo
    We need to back up 40 databases inside an MS SQL Server engine. We back up each database with the following script:
      BACKUP DATABASE [dbname1] TO DISK = N'J:\SQLBACKUPS\dbname1.bak'
        WITH NOFORMAT, INIT, NAME = N'dbname1-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
      GO
      declare @backupSetId as int
      select @backupSetId = position from msdb..backupset
        where database_name=N'dbname1'
          and backup_set_id=(select max(backup_set_id) from msdb..backupset where database_name=N'dbname1')
      if @backupSetId is null
      begin
        raiserror(N'Verify failed. Backup information for database ''dbname1'' not found.', 16, 1)
      end
      RESTORE VERIFYONLY FROM DISK = N'J:\SQLBACKUPS\dbname1.bak' WITH FILE = @backupSetId, NOUNLOAD, NOREWIND
      GO
    We would like to add to the script the functionality of taking each database and substituting it into the script above. Basically, we want a script that will create and verify a backup of each database on the engine. I am looking for something like this:
      For each database in database-list
        sp_backup(database)   // this is the call to the script above
      End For
    Any ideas?
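
    One way to drive that loop from outside T-SQL is a single sqlcmd call using the long-standing (though undocumented) sp_MSforeachdb helper, where '?' stands in for each database name; a sketch, assuming Windows authentication, the local default instance and the same backup path (quoting is the fiddly part):
      sqlcmd -S localhost -E -Q "EXEC sp_MSforeachdb N'
        IF N''?'' NOT IN (N''master'', N''model'', N''msdb'', N''tempdb'')
        BEGIN
          BACKUP DATABASE [?] TO DISK = N''J:\SQLBACKUPS\?.bak''
            WITH NOFORMAT, INIT, SKIP, NOREWIND, NOUNLOAD, STATS = 10;
          RESTORE VERIFYONLY FROM DISK = N''J:\SQLBACKUPS\?.bak'';
        END'"
    The same IF/BEGIN/END body can just as easily be run as a SQL Agent job step if you would rather keep it inside the server.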

  • How to migrate an SQLServer 2000 database from one machine to another

    - by Saiyine
    This January I'm migrating our main SQL Server 2000 based database to a beefier server. Is there any standard procedure or documentation on how to do it? I need to replicate everything on the new server (databases, jobs, DTS packages, linked servers, etc.). Edit: I mean SQL Server 2000 on both ends! Edit: Be calm, people, I just mixed up the versions with another piece of software I posted about at the same time as this. In fact, I even checked Wikipedia to be sure version 8 was 2000. There's no need to flame so much about what is just an erratum.
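
    For the databases themselves, plain backup and restore is the usual route between two SQL Server 2000 machines; a rough sketch using osql (the 2000-era command-line tool), with server names, paths, the database name and the logical file names as placeholders (the real logical names come from RESTORE FILELISTONLY):
      # On the old server: take a full backup
      osql -S oldserver -E -Q "BACKUP DATABASE [mydb] TO DISK = N'D:\backups\mydb.bak' WITH INIT"
      # Copy the .bak across, then on the new server restore it, relocating the files
      osql -S newserver -E -Q "RESTORE DATABASE [mydb] FROM DISK = N'E:\backups\mydb.bak'
        WITH MOVE 'mydb_Data' TO N'E:\data\mydb.mdf',
             MOVE 'mydb_Log'  TO N'E:\data\mydb_log.ldf'"
    Logins are commonly carried over with Microsoft's sp_help_revlogin script, while jobs, DTS packages and linked servers have to be scripted or saved out and recreated separately.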

  • How to configure two URLs for one server using wildcard-supported certificates?

    - by Amit
    Hi, we have a wildcard certificate installed in our production environment. One of our clients wants his name to appear in the URL (e.g. companyname.sitename.net). How should we facilitate this? Do we need to make any entries for this in DNS? If yes, can you please let me know what they are? I need to set this up before Friday PST; any help with this is highly appreciated. Thanks.
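
    Assuming the certificate is for *.sitename.net, then yes: you need a DNS entry (an A or CNAME record) pointing companyname.sitename.net at the same server, plus a matching host header or virtual-host binding on the web server if you use them, and the wildcard certificate will cover the new name since it is a single-level subdomain. A quick sketch for checking the setup afterwards (the host names are the ones from the question):
      # Does the client's name resolve, and to the same address as the main site?
      nslookup companyname.sitename.net
      nslookup www.sitename.net
      # Is the wildcard certificate actually presented for that name?
      openssl s_client -connect companyname.sitename.net:443 \
        -servername companyname.sitename.net </dev/null 2>/dev/null | openssl x509 -noout -subject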

  • rsync - How to exclude one .htaccess but not all of them

    - by Cory Gagliardi
    I have an rsync command for copying my files from dev to production. I don't want to copy the .htaccess file that's in the root of the HTML directory, but I do want to copy the few .htaccess files that are in its subdirectories. I'm using the argument --exclude .htaccess, which is stopping all of the .htaccess files from getting copied. The other arguments I'm including are -a --recursive --times --perms. Is it possible to configure rsync to do this? Edit: Here is my full command:
      rsync -a --recursive --times --perms \
        --exclude prop_images --exclude tracking --exclude vtours \
        --exclude .htaccess --exclude .htaccess_backup --exclude "*~" \
        /home/user/dev_html/* /home/user/public_html/
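
    A sketch of one way to do it: rsync exclude patterns that begin with a slash are anchored to the top of the transfer, so if the source is the directory itself (a trailing slash instead of /*), --exclude /.htaccess matches only the root copy and leaves the ones in subdirectories alone. The only changes from the command above are the source path and the anchored exclude:
      rsync -a --recursive --times --perms \
        --exclude prop_images --exclude tracking --exclude vtours \
        --exclude /.htaccess --exclude .htaccess_backup --exclude "*~" \
        /home/user/dev_html/ /home/user/public_html/
    Note that the trailing-slash source also picks up any other top-level dotfiles that the old * glob was silently skipping.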

  • Unicorn_init.sh cannot find app root on capistrano cold deploy

    - by oFca
    I am deploying a Rails app, and upon running cap deploy:cold I get an error saying:
      * 2012-11-02 23:53:26 executing `deploy:migrate'
      * executing "cd /home/mr_deployer/apps/prjct_mngr/releases/20121102225224 && bundle exec rake RAILS_ENV=production db:migrate"
        servers: ["xxxxxxxxxx"]
        [xxxxxxxxxx] executing command
        command finished in 7464ms
      * 2012-11-02 23:53:34 executing `deploy:start'
      * executing "/etc/init.d/unicorn_prjct_mngr start"
        servers: ["xxxxxxxxxx"]
        [xxxxxxxxxx] executing command
     ** [out :: xxxxxxxxxx] /etc/init.d/unicorn_prjct_mngr: 33: cd: can't cd to /home/mr_deployer/apps/prjct_mngr/current;
        command finished in 694ms
      failed: "rvm_path=$HOME/.rvm/ $HOME/.rvm/bin/rvm-shell '1.9.3-p125@prjct_mngr' -c '/etc/init.d/unicorn_prjct_mngr start'" on xxxxxxxxxx
    but my app root is there! Why can't it find it? Here's part of my unicorn_init.sh file:
      1  #!/bin/sh
      2  set -e
      3  # Example init script, this can be used with nginx, too,
      4  # since nginx and unicorn accept the same signals
      5
      6  # Feel free to change any of the following variables for your app:
      7  TIMEOUT=${TIMEOUT-60}
      8  APP_ROOT=/home/mr_deployer/apps/prjct_mngr/current
      9  PID=$APP_ROOT/tmp/pids/unicorn.pid
      10 CMD="cd $APP_ROOT; bundle exec unicorn -D -c $APP_ROOT/config/unicorn.rb -E production"
      11 # INIT_CONF=$APP_ROOT/config/init.conf
      12 AS_USER=mr_deployer
      13 action="$1"
      14 set -u
      15
      16 # test -f "$INIT_CONF" && . $INIT_CONF
      17
      18 old_pid="$PID.oldbin"
      19
      20 cd $APP_ROOT || exit 1
      21
      22 sig () {
      23   test -s "$PID" && kill -$1 `cat $PID`
      24 }
      25
      26 oldsig () {
      27   test -s $old_pid && kill -$1 `cat $old_pid`
      28 }
      29 case $action in
      30
      31 start)
      32   sig 0 && echo >&2 "Already running" && exit 0
      33   $CMD
      34   ;;
      35
      36 stop)
      37   sig QUIT && exit 0
      38   echo >&2 "Not running"
      39   ;;
      40
      41 force-stop)
      42   sig TERM && exit 0
      43   echo >&2 "Not running"
      44   ;;
      45
      46 restart|reload)
      47   sig HUP && echo reloaded OK && exit 0
      48   echo >&2 "Couldn't reload, starting '$CMD' instead"
      49   $CMD
      50   ;;
      51
      52 upgrade)
      53   if sig USR2 && sleep 2 && sig 0 && oldsig QUIT
      54   then
      55     n=$TIMEOUT
      56     while test -s $old_pid && test $n -ge 0
      57     do
      58       printf '.' && sleep 1 && n=$(( $n - 1 ))
      59     done
      60     echo
      61
      62     if test $n -lt 0 && test -s $old_pid
      63     then
      64       echo >&2 "$old_pid still exists after $TIMEOUT seconds"
      65       exit 1
      66     fi
      67     exit 0
      68   fi
      69   echo >&2 "Couldn't upgrade, starting '$CMD' instead"
      70   $CMD
      71   ;;
      72
      73 reopen-logs)
      74   sig USR1
      75   ;;
      76
      77 *)
      78   echo >&2 "Usage: $0 <start|stop|restart|upgrade|force-stop|reopen-logs>"
      79   exit 1
      80   ;;
      81 esac

  • Install multiple PHP environments on OS X Snow Leopard

    - by Darren Newton
    I just upgraded my MBP to Snow Leopard (OS X 10.6), which took PHP to 5.3. This is great, except that I use my MBP as my development machine and I use a lot of PHP libs and frameworks (namely CakePHP 1.2) which are not compatible at the moment with PHP 5.3. CakePHP in particular does not have a stable version for PHP 5.3, so it's not a matter of upgrading the framework (and the production servers are on PHP 5.2 anyway). Is there a way to install PHP 5.2.9 alongside PHP 5.3 and then, using httpd.conf or .htaccess, tell Apache which version of PHP to use for a particular directory? Alternatively, is there a way to do this with MacPorts? Thanks!

  • Debug.Assert has locked / hung my IIS6 site - how do I bring it back?

    - by George
    I'm running some C# / ASP.NET code on an IIS6 server. The code has triggered a Debug.Assert, and now my IIS6 server is not responding for this site — at least that's what I strongly suspect, given that this was the cause the last time this happened. How do I bring the site back up? Stopping and then starting the site doesn't bring it back — I continue getting timeouts from the proxy, which I think is caused by the site itself crashing. How do I disable Debug.Assert permanently in production? Do I just edit the web.config: <compilation defaultLanguage="c#" debug="false"... — is there anything else I need to do?

  • MySQL Server Not Starting on Boot

    - by Brian
    I have installed MySQL on a RHEL 5 server and I want to set it up so that the server starts on boot. I've run the "chkconfig --list mysqld" command and it's currently enabled on runlevels 3, 4 and 5. However, when I reboot the server, no mysqld process is started. I've also tried manually starting the server by executing "/usr/bin/mysqld_safe" and I get the following output:
      Starting mysqld daemon with databases from /var/lib/mysql
      STOPPING server from pid file /var/run/mysqld/mysqld.pid
      100319 10:31:30  mysqld ended
    I looked in /var/log/mysqld.log and I found the following:
      100319 10:29:01  mysqld started
      100319 10:29:02  InnoDB: Started; log sequence number 0 29752204
      100319 10:29:02 [ERROR] Can't start server : Bind on unix socket: Permission denied
      100319 10:29:02 [ERROR] Do you already have another mysqld server running on socket: /var/lib/mysql/mysql.sock ?
      100319 10:29:02 [ERROR] Aborting
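
    The "Bind on unix socket: Permission denied" line points at ownership or permissions on the socket's directory rather than at the boot configuration. A quick sketch of things to check, assuming the stock mysql user and the default /var/lib/mysql datadir:
      # Who owns the data directory, and is there a stale socket left behind?
      ls -ld /var/lib/mysql
      ls -l /var/lib/mysql/mysql.sock
      # Hand the datadir back to the mysql user, clear any stale socket, retry
      chown -R mysql:mysql /var/lib/mysql
      rm -f /var/lib/mysql/mysql.sock
      service mysqld start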

  • Fewer reboots on Windows Server Core: is this true or just a myth?

    - by Peter Hahndorf
    Because there are fewer components installed on a Windows Server Core OS, it needs fewer patches than the full OS. I have read in several places that it therefore needs fewer reboots after patching. I have been running Server 2012 Core in production since September 2012, and I don't remember a single Patch Tuesday when I did not have to reboot the server after installing Windows updates. Are there any hard numbers out there that compare the required reboots for Core vs. the full OS? Fewer reboots may be the main reason why people choose to go with Server Core. If it actually requires just as many reboots as a full OS install, they may think again the next time they set up a server.

  • Upgrade a legacy, expired RHEL 3.4.6 server to modern CentOS/Scientific Linux

    - by Gabriel Tasiopoulos
    I have a machine running a legacy J2EE app. The code is not Maven-ized, and it works with pretty old Java and Postgres versions. I have converted it to a VM in ESXi, and I'd like to try to upgrade it to a modern, binary-compatible version of RHEL (CentOS or Scientific Linux) and see if things still work. Where should I start? Am I being too optimistic with this one? It's more of an experiment, and I'm not doing it on a production machine. But given how old the OS is, I am looking for a way to do this eventually. Many thanks
