Search Results

Search found 7957 results on 319 pages for 'production databases'.

Page 170 of 319

  • Should I anticipate any problems trying to use the same SSL Cert on 2 computers (primary, backup)?

    - by Matt
    We have a production machine running IIS6 with a wildcard SSL certificate. The certificate that was installed is not exportable. We want to upgrade the system to IIS7. As part of this venture, we're creating a backup/failover server that will serve the exact same websites - when we take the primary down for upgrade, the secondary will take over. As such, the secondary also needs the SSL certificate. However, since the certificate was not exportable, this means re-keying it from Go Daddy. Per http://help.godaddy.com/article/867, I know that by re-keying the certificate the original will stop working. I'm still pretty new to SSL certificates, so are there any problems I should anticipate when installing the same SSL certificate on 2 different machines?

  • phpredis + pconnect

    - by john smith
    I am using phpredis on my PHP-based website. The webserver is the simplest Apache apt-get installation, with no configuration involved, as this is only a development environment. The issue I am facing is that, while using phpredis, there is no difference between the "connect" and "pconnect" commands: they both create a new connection every time, as I can see from the "info" command in redis-cli. Now, I am pretty sure it is because of the Apache configuration, and the fact that it probably (most likely) is a multi-threaded environment and therefore can't establish a single connection. My question is for when I move to production: what would be the best choice of webserver to avoid this problem? I remember using lighttpd with thousands of users and still getting only a very few (like 2 or 3) connections on MongoDB. Any ideas? Thanks in advance.
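    For what it's worth, one way to see what is actually happening before switching webservers is to watch Redis's client count under load; a sketch follows (host, port, and URL are placeholders). Note that with a prefork Apache, pconnect can at best hold one connection open per worker process, so a handful of connections, rather than a single one, is the expected best case:

        # Watch how many clients Redis reports while the page is under load;
        # with a working pconnect the count should plateau at roughly one
        # connection per Apache worker, not keep growing with every request.
        watch -n1 "redis-cli -h 127.0.0.1 -p 6379 info clients | grep connected_clients"

        # In a second terminal, generate some traffic (apache2-utils):
        ab -n 1000 -c 10 http://localhost/test.php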

  • How do I apply a minor upgrade to MySQL on Windows?

    - by TruMan1
    I currently have MySQL 5.1.35 installed on a Windows 2008 server via the MSI installer. I need to upgrade to the latest 5.1.44 to fix a bug, but the docs were not clear on how to do this. I ran the MSI installer, but it did not give me any upgrade option, so I quit it. I am wary because it's a production machine with many PHP websites running on it. Also, my data directory is not the default one; it's kept on another partition. How can I upgrade it? Thanks for any help.
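    For same-series upgrades (5.1.x to 5.1.y), one commonly documented path is a stop / backup / reinstall / mysql_upgrade sequence; a sketch from an elevated command prompt (the service name, paths, and MSI file name are assumptions for your setup):

        rem Stop the server and take a copy of the data directory first:
        net stop MySQL
        xcopy /E /I D:\mysql\data D:\mysql\data_backup_5135
        rem Install the new 5.1.44 MSI (some builds want the old version
        rem uninstalled first; the data directory is left in place either way):
        msiexec /i mysql-5.1.44-win32.msi
        net start MySQL
        rem From the MySQL bin directory, let it fix up the system tables:
        mysql_upgrade -u root -p

    Since your data directory is non-default, it is worth confirming that my.ini still points at it before restarting the service.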

  • Implications of renaming sql server 2000 instance

    - by peg_leg
    I have a SQL Server 2000 instance whose data I want to replicate to a SQL Server 2008 server. However, the output of select @@servername and select serverproperty('servername') on the 2000 server differ, and that prevents replication. There is a process to resolve this at http://support.microsoft.com/kb/818334. Has anyone done this? What implications are there for following this process? Any 'gotchas' that need to be guarded against? My 2000 server is a lone production server... very scary. Please advise.
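    For reference, the core of the KB 818334 procedure is a one-time rename plus a service restart; a sketch (OLD_NAME and NEW_NAME are placeholders):

        -- run on the SQL Server 2000 instance:
        EXEC sp_dropserver 'OLD_NAME'
        GO
        EXEC sp_addserver 'NEW_NAME', 'local'
        GO
        -- restart the SQL Server service, then verify the two names agree:
        SELECT @@SERVERNAME, SERVERPROPERTY('servername')

    The usual gotchas are objects that reference the server name: sp_dropserver will refuse to run if remote logins exist for the old name, and any existing replication, linked-server definitions, or jobs pointing at the old name should be checked first.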

  • Fewer reboots on Windows Server Core: is this true or just a myth?

    - by Peter Hahndorf
    Because there are fewer components installed on a Windows Server Core OS, it needs fewer patches than the full OS. I have read in several places that it therefore needs fewer reboots after patching. I have been running Server 2012 Core in production since September 2012, and I don't remember a single Patch Tuesday when I did not have to reboot the server after installing Windows updates. Are there any hard numbers out there that compare the required reboots for Core vs. the full OS? Fewer reboots may be the main reason why people choose Server Core. If it actually requires just as many reboots as a full OS install, they may think again the next time they set up a server.

  • CD/DVD cataloging software?

    - by NoCanDo
    I'm looking for freeware, or preferably open-source, CD/DVD cataloging applications. Right now I'm testing http://www.gentibus.com/us/Download.htm; does anyone have any other suggestions? I'd like the software to be actively maintained, not released and then left behind. I'm looking for software that allows me to catalog, sort, and search my DVDs/CDs in databases. For example, I have one group of 10 DVDs containing only fonts; I want to read their contents into a database called "Fonts". Another group of 5 DVDs holds stock images, and I want to read all 5 DVDs into a database called "Stock Images". Then I want the software to be able to open the "Stock Images" database so I can browse DVDs 1-5 and see their contents without having the discs in the optical drive.

  • ASP.NET Session Management - which SQL Server option?

    - by frumious
    We're developing some custom web parts for our WSS 3 intranet and have just run into something we'd like to use ASP.NET sessions for. This isn't currently enabled on the development server. We'd like to use SQL Server as the storage mechanism, because the production environment is a web farm with very simple load balancing. There are three options you can choose from when setting up SQL Server session storage: tempdb, the default separate DB, or a named DB. Both the tempdb and default-separate-DB options create a new database to store certain information in; tempdb stores the actual session info in tempdb, which doesn't survive a reboot, while the default separate DB stores everything in the new database. Since you've got to create the new database either way, my question is this: why would you ever choose to store the session info in tempdb? The only thing I can think of is if you'd like the ability to wipe the sessions by rebooting the server, but that seems quite apocalyptic!
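    For reference, the three modes are provisioned with aspnet_regsql.exe from the .NET Framework directory; a sketch (the server name and custom database name are placeholders):

        cd /d %WINDIR%\Microsoft.NET\Framework\v2.0.50727
        rem tempdb-backed session state:
        aspnet_regsql.exe -S SQLSERVER01 -E -ssadd -sstype t
        rem persisted, in the default ASPState database:
        aspnet_regsql.exe -S SQLSERVER01 -E -ssadd -sstype p
        rem persisted, in a named custom database:
        aspnet_regsql.exe -S SQLSERVER01 -E -ssadd -sstype c -d MySessionDB

    The usual argument for tempdb is raw speed: tempdb is minimally logged and rebuilt on every restart, and session data is inherently disposable. On a web farm, though, a persisted store is generally worth the small overhead.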

  • IIS 6 on x64 and long URLs

    - by mausch
    I have a very long URL on a site hosted on Windows 2003 x64 that looks like this: http://myhost/a_very_very_long_url_around_300_chars_long (i.e. a single segment around 300 characters long). The problem is, I'm getting a 400 Bad Request response from HTTP.SYS (the request doesn't even reach IIS). I can tell because these requests show up in system32\LogFiles\HTTPERR, e.g.:

        2009-09-17 19:51:29 200.123.179.9 3636 192.168.129.50 80 HTTP/1.1 GET /a_very_very_long_url_around_300_chars_long 400 - URL -

    I tried setting UrlSegmentMaxLength in the registry, and this fixes the issue on my Windows 2003 x86 box but not on the x64 production server. I tried it on another Win2k3 x64 server and it also failed. Any hints?
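    For reference, a 300-character segment exceeds UrlSegmentMaxLength's default of 260, which matches the symptom; HTTP.sys reads these values only at startup, so if the x64 box failed after the change, it may simply never have picked the value up. A sketch of the relevant knobs (the numbers are examples, not recommendations):

        reg add HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters /v UrlSegmentMaxLength /t REG_DWORD /d 2048 /f
        reg add HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters /v MaxFieldLength /t REG_DWORD /d 65534 /f
        reg add HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters /v MaxRequestBytes /t REG_DWORD /d 1048576 /f
        rem restart HTTP.sys and the services that depend on it (or reboot):
        net stop http /y
        net start w3svc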

  • sa account gets locked out frequently

    - by bala
    I have a SQL Server 2008 instance with about six databases created and running on the server. I often get a message like "SA account is locked out". Are there any specific reasons for this account being locked out? Is there anywhere else I should check for the cause? (I checked the Event Viewer but could not get much information.) Edit: I found this information in the Event Viewer backups. Is this the cause? SQL Server failed to communicate with filter daemon launch service (Windows error: The service cannot be started, either because it is disabled or because it has no enabled devices associated with it.). Full-Text filter daemon process failed to start. Full-text search functionality will not be available.
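    A locked-out sa usually means something is repeatedly failing password authentication as sa (a brute-force attempt, or an application with a stale password) while the login has CHECK_POLICY enabled, so the Windows lockout policy applies; the full-text error looks unrelated. A sketch of the checks and fixes (the new password is obviously a placeholder):

        -- Is it locked, and how many bad attempts were recorded?
        SELECT LOGINPROPERTY('sa', 'IsLocked')         AS is_locked,
               LOGINPROPERTY('sa', 'BadPasswordCount') AS bad_password_count;
        -- Unlock it (resetting the password also clears the lockout):
        ALTER LOGIN sa WITH PASSWORD = 'new_strong_password' UNLOCK;
        -- If the Windows lockout policy is the culprit, you can opt sa out:
        ALTER LOGIN sa WITH CHECK_POLICY = OFF;

    The SQL Server error log ("Login failed" entries with client IPs) should show where the failed attempts are coming from.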

  • using munin-plugins-rails to monitor rails app performance

    - by user2099762
    I have been trying to configure munin-plugins-rails to monitor the performance of our Rails apps from Munin. The graphs appear, but no data is shown in them. The log files show error output like:

        2013/06/27-15:39:06 [5540] Request-log-analyzer, by Willem van Bergen and $
        2013/06/27-15:39:06 [5540] Website: http://railsdoctors.com

    I have tried running request-log-analyzer manually, pointing it at the production log file, and it reports just "%" for every item, even though there is data in the log file. I have tried changing the version of the installed gems and the type of the log file, but no luck. Any ideas, anyone? Thanks

  • How to maintain an EXT3 filesystem

    - by Reinoud van Santen
    Lately several of my servers have encountered a write error on an EXT3 filesystem and, as a result, remounted the filesystem read-only. Understandably, on a production server this causes severe problems. On reboot the filesystems were fixed, but on large partitions this takes a lot of time. After the filesystem was fixed, correcting several errors, the server runs well again. What can I do to minimize the rate at which this happens? I can't seem to find much information on periodically checking the filesystem(s) on a running server. Is it possible to change the way in which EXT3 / the system handles write errors? What would be a sane solution? All of the servers in question are running CentOS Linux 5.4 or 5.5.
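    For reference, both behaviours are tunable with tune2fs; a sketch (the device name is a placeholder):

        # Inspect the current error behaviour and check schedule:
        tune2fs -l /dev/sda1 | grep -iE 'errors|mount count|check'
        # Force a fsck every 30 mounts or every 30 days, whichever comes first:
        tune2fs -c 30 -i 30d /dev/sda1
        # Choose what a write error triggers: continue, remount-ro, or panic:
        tune2fs -e remount-ro /dev/sda1

    Since recurring write errors are often a failing disk rather than a filesystem problem, checking SMART data (smartctl from smartmontools) is worth adding to the routine.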

  • Unicorn_init.sh cannot find app root on capistrano cold deploy

    - by oFca
    I am deploying a Rails app and upon running cap deploy:cold I get an error saying:

        * 2012-11-02 23:53:26 executing `deploy:migrate'
        * executing "cd /home/mr_deployer/apps/prjct_mngr/releases/20121102225224 && bundle exec rake RAILS_ENV=production db:migrate"
          servers: ["xxxxxxxxxx"]
          [xxxxxxxxxx] executing command
          command finished in 7464ms
        * 2012-11-02 23:53:34 executing `deploy:start'
        * executing "/etc/init.d/unicorn_prjct_mngr start"
          servers: ["xxxxxxxxxx"]
          [xxxxxxxxxx] executing command
        ** [out :: xxxxxxxxxx] /etc/init.d/unicorn_prjct_mngr: 33: cd: can't cd to /home/mr_deployer/apps/prjct_mngr/current;
          command finished in 694ms
          failed: "rvm_path=$HOME/.rvm/ $HOME/.rvm/bin/rvm-shell '1.9.3-p125@prjct_mngr' -c '/etc/init.d/unicorn_prjct_mngr start'" on xxxxxxxxxx

    but my app root is there! Why can't it find it? Here's part of my unicorn_init.sh file:

         1 #!/bin/sh
         2 set -e
         3 # Example init script, this can be used with nginx, too,
         4 # since nginx and unicorn accept the same signals
         5
         6 # Feel free to change any of the following variables for your app:
         7 TIMEOUT=${TIMEOUT-60}
         8 APP_ROOT=/home/mr_deployer/apps/prjct_mngr/current
         9 PID=$APP_ROOT/tmp/pids/unicorn.pid
        10 CMD="cd $APP_ROOT; bundle exec unicorn -D -c $APP_ROOT/config/unicorn.rb -E production"
        11 # INIT_CONF=$APP_ROOT/config/init.conf
        12 AS_USER=mr_deployer
        13 action="$1"
        14 set -u
        15
        16 # test -f "$INIT_CONF" && . $INIT_CONF
        17
        18 old_pid="$PID.oldbin"
        19
        20 cd $APP_ROOT || exit 1
        21
        22 sig () {
        23   test -s "$PID" && kill -$1 `cat $PID`
        24 }
        25
        26 oldsig () {
        27   test -s $old_pid && kill -$1 `cat $old_pid`
        28 }
        29 case $action in
        30
        31 start)
        32   sig 0 && echo >&2 "Already running" && exit 0
        33   $CMD
        34   ;;
        35
        36 stop)
        37   sig QUIT && exit 0
        38   echo >&2 "Not running"
        39   ;;
        40
        41 force-stop)
        42   sig TERM && exit 0
        43   echo >&2 "Not running"
        44   ;;
        45
        46 restart|reload)
        47   sig HUP && echo reloaded OK && exit 0
        48   echo >&2 "Couldn't reload, starting '$CMD' instead"
        49   $CMD
        50   ;;
        51
        52 upgrade)
        53   if sig USR2 && sleep 2 && sig 0 && oldsig QUIT
        54   then
        55     n=$TIMEOUT
        56     while test -s $old_pid && test $n -ge 0
        57     do
        58       printf '.' && sleep 1 && n=$(( $n - 1 ))
        59     done
        60     echo
        61
        62     if test $n -lt 0 && test -s $old_pid
        63     then
        64       echo >&2 "$old_pid still exists after $TIMEOUT seconds"
        65       exit 1
        66     fi
        67     exit 0
        68   fi
        69   echo >&2 "Couldn't upgrade, starting '$CMD' instead"
        70   $CMD
        71   ;;
        72
        73 reopen-logs)
        74   sig USR1
        75   ;;
        76
        77 *)
        78   echo >&2 "Usage: $0 <start|stop|restart|upgrade|force-stop|reopen-logs>"
        79   exit 1
        80   ;;
        81 esac

  • AXFR problem using Gradwell secondary DNS

    - by Roaders
    Hi all, I use Gradwell.com to provide secondary DNS, but I keep getting e-mails along the lines of the following, saying that it's not working:

        You have asked us to provide a secondary DNS service for the following domain(s)
        Unfortunately, the primary DNS server(s) you specified are not permitting the necessary zone transfers from our servers, or they are not answering "SOA" queries for your domain correctly.

    I have gone through the support procedure and they weren't that helpful. They have suggested the following:

        Our secondline team have suggested setting the AXFR to use another machine. This will ensure that the transfer is not locked down to one machine and should allow any machine to make the request

    I don't really know what AXFR is, and I only have one production machine, so I can't set the AXFR to use another one! In previous support correspondence we confirmed that I am allowing transfers to the correct IP and that I have the correct ports open on the firewall. I am running Windows Server 2003. What can I do to try to get these zone transfers working? Thanks
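    (AXFR is simply the DNS query type for a full zone transfer, carried over TCP port 53.) To see exactly what Gradwell's servers experience, a sketch using dig from any outside Unix host (zone and server names are placeholders):

        # Should get an authoritative answer:
        dig @your-primary-ns.example.com example.com SOA
        # Should list the entire zone:
        dig @your-primary-ns.example.com example.com AXFR

    Two things worth checking on the Windows 2003 side: that TCP 53 (not just UDP) is open on the firewall, and that the zone's "Zone Transfers" tab allows all of Gradwell's transfer servers rather than a single IP, since they may transfer from more than one machine, which appears to be what their support is hinting at.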

  • Odd domain switching behavior in Firefox and Chrome

    - by Jeremy Detrempe
    We have different development servers and a production server. Testing is done on the development servers. As a QA engineer, I'm switching between these servers quite often throughout the day. In Chrome, sometimes I need to reload a page a few times to get it to pull from the newly switched server. In Firefox, sometimes I need to quit the browser entirely to get it to pull from the newly switched server. (We have small tags that indicate which server you are pulling from, which is how I know in-browser.) Why does this happen? I'd love to know how it happens (maybe what it's called?) and the best way to deal with it. (I know that Firefox has an extension for domain switching; is that the best solution?)
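    This is typically DNS caching at several layers: the OS resolver plus each browser's internal DNS cache. A sketch of clearing the OS-level cache on Windows (assuming the server switch is done by repointing a hostname):

        rem Flush the Windows resolver cache:
        ipconfig /flushdns

    Chrome additionally keeps its own in-process DNS cache, which can be cleared from chrome://net-internals/#dns, and Firefox caches lookups for a configurable time (network.dnsCacheExpiration in about:config), which is why restarting it "fixes" things.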

  • Determine / set Puppet environment

    - by quickshiftin
    I'm trying to determine what Puppet thinks the environment is on my agent nodes. Per the documentation I've configured the agent's environment in /etc/puppet/puppet.conf as such:

        [agent]
        environment = development

    In order to view the environment I've found this code to add an environment fact to facter:

        require 'puppet'
        Facter.add("environment") do
          setcode do
            Puppet[:environment]
          end
        end

    However, on one of my agent nodes, if I run sudo facter -p environment, the result is production. I've tried to manually set the environment temporarily via sudo puppet agent --environment development, however the result from facter is the same. Any idea what's going on?
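    One plausible explanation (an assumption, not a certainty): when facter evaluates the custom fact outside of an agent run, Puppet's settings are initialized with their defaults, and the default environment is production, so Puppet[:environment] never reflects the [agent] section. A sketch comparing the two views:

        # What the agent itself resolves (this does read puppet.conf):
        sudo puppet agent --configprint environment

        # What is actually in the config file's [agent] section:
        sudo grep -A2 '\[agent\]' /etc/puppet/puppet.conf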

  • rsync - How to exclude one .htaccess but not all of them

    - by Cory Gagliardi
    I have an rsync command for copying my files from dev to production. I don't want to copy the .htaccess file that's in the root of the HTML directory, but I do want to copy the few .htaccess files that are in its subdirectories. I'm using the argument --exclude .htaccess, which is stopping all of the .htaccess files from being copied. The other arguments I'm including are -a --recursive --times --perms. Is it possible to configure rsync to do this? Edit: Here is my full command:

        rsync -a --recursive --times --perms \
            --exclude prop_images --exclude tracking --exclude vtours \
            --exclude .htaccess --exclude .htaccess_backup --exclude "*~" \
            /home/user/dev_html/* /home/user/public_html/
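    For reference, rsync exclude patterns that start with a / are anchored to the transfer root, so syncing the directory itself (no * glob, trailing slash) lets you exclude only the top-level .htaccess; a sketch:

        # /.htaccess matches only at the root of the transfer, so
        # subdirectory .htaccess files still get copied:
        rsync -a \
            --exclude prop_images --exclude tracking --exclude vtours \
            --exclude "/.htaccess" --exclude .htaccess_backup --exclude "*~" \
            /home/user/dev_html/ /home/user/public_html/

    Note that -a already implies --recursive, --times, and --perms, so those flags were redundant.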

  • How to migrate a SQL Server 2000 database from one machine to another

    - by Saiyine
    This January I'm migrating our main SQL Server 2000 database to a beefier server. Is there any standard procedure or documentation on how to do it? I need to replicate everything on the new server (databases, jobs, DTS packages, linked servers, etc.). Edit: I mean SQL Server 2000 on both ends! Edit: Be calm, people; I just crossed the versions with another piece of software I posted about at the same time as this. In fact, I even checked Wikipedia to be sure version 8 was 2000. There's no need to flame so much over what is just an erratum.
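    A sketch of the per-database move via detach/attach, which works fine between two SQL Server 2000 machines (names and paths are placeholders):

        -- On the old server, detach:
        EXEC sp_detach_db 'MyDatabase'
        -- Copy the .mdf/.ldf files to the new server, then attach there:
        EXEC sp_attach_db 'MyDatabase',
            'E:\Data\MyDatabase.mdf',
            'E:\Data\MyDatabase_log.ldf'

    That covers the databases only: jobs live in msdb (script them out from Enterprise Manager), DTS packages can be saved as structured storage files and reopened on the new server, and logins are best moved with Microsoft's sp_help_revlogin script so the passwords and SIDs survive.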

  • Shared storage solution for our SQL Server backups

    - by Gokhan
    We have 3 clustered SQL Servers. We have 5+ multi-terabyte databases, and their backup files (compressed using Quest LiteSpeed) are hitting over 600 GB each. We are required to keep at least one week (two if we can) of weekly full backups, plus 6 days of differential backups and one or two weeks' worth of log backups, locally. We are currently limited to 2 TB volumes by our SAN team; we can have multiple volumes, but they are expensive ($200 per raw TB per month), and having to deal with many backup volumes instead of a single big volume is difficult. I think it would be good if we could have shared network storage of 20 TB+ (RAID 10 or so) for all our servers to keep the backups on; another department would then copy them to tape from the network storage and delete files according to the retention period. Ideally this box would have a built-in operating system (even Unix, as a complete file-storage system). What do you think; does this make sense to you? Is there any manufacturer that sells a storage product like this that works in a clustered environment? Thank you

  • Looking for a good Web Server that is cheap

    - by SoLoGHoST
    I am a Project Manager, and former Lead Developer, for a software portal system that requires forum software to run. I am in need of a host that is cheap and reliable and supports the latest PHP (5.2+), MySQL, unlimited e-mail accounts (preferably), a cPanel, and multiple subdomains (at least 3). Currently I am paying $34.95 USD/month (approx. $420 USD/year). This is too high for me to pay to keep the site running. I just recently became Project Manager, in charge of finances, and I'm extremely concerned for the future of Dream Portal. With those prices I'm not sure I'll be able to keep it running for long. Can someone please tell me about a good host that meets all of the requirements listed above and is cheaper on a yearly basis? Note: Currently on a dedicated server with disk space limited to 15000 MB (15 GB), monthly bandwidth = 500000 MB, a 50-email limit, a 20-subdomain limit, 30 FTP accounts, and 25 SQL databases.

  • Upgrade a legacy, expired RHEL 3.4.6 server to modern CentOS/Scientific Linux

    - by Gabriel Tasiopoulos
    I have a machine running a legacy J2EE app. The code is not Maven-ized, and it works with pretty old Java and Postgres versions. I have converted it to a VM in ESXi, and I'd like to try to upgrade it to a modern, binary-compatible version of RHEL (CentOS or Scientific Linux) and see if things would still work. Where should I start? Am I being too optimistic with this one? It's more of an experiment, and I'm not doing it on a production machine, but given how old the OS is, I am looking for a way to do this eventually. Many thanks

  • Backup and restore MySQL database without system access

    - by Sencerd
    Hi guys, I am trying to move a database from one provider to another. The problem is that I don't have system access at either end (i.e., no SSH), so I cannot run mysqldump on the servers. I have already tried using MySQL Administrator; the backup took about 45 minutes, but when it came to restoring, it was moving at a snail's pace, with 12+ hours estimated. This is a live app, so I need to keep the downtime to an absolute minimum. The database consists of 35 tables, a mixture of MyISAM and InnoDB, and the whole thing comes to about 4.4 GB. The source and destination databases are both running on very powerful servers. Any suggestions on a quick way of doing this will be gratefully received. Thanks
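    One hedged suggestion: mysqldump does not have to run on the database server itself. If both providers allow remote MySQL connections, the dump can run from any third machine and be piped straight across; a sketch (hosts and credentials are placeholders):

        # Dump from the old host and load into the new one in a single pass:
        mysqldump -h old-host.example.com -u user1 -pSECRET1 \
            --single-transaction --routines mydb \
        | mysql -h new-host.example.com -u user2 -pSECRET2 mydb

    Note that --single-transaction only gives a consistent snapshot for the InnoDB tables; the MyISAM tables still need a quiet moment, so the cut-over is best scheduled in a low-traffic window.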

  • NT4 server generates too many weird DNS queries; how can I see the source PID?

    - by Hanan N.
    I have an NT4 server that in the last two weeks has started to generate too many weird DNS queries to the DNS server it is set to use. I have got warnings from the IPS system that it has blocked the responses from the DNS server back to the NT4 server. The queries it generates don't relate to any computer in the network; they look like 120624100088.xxxxxxx.net, where xxxxxxx is the internal domain, and the numbers are just random in each query. I have done some research on how to get the PID that is generating the queries, and I found that only Process Monitor could give me that information; but since it is an NT4 system, Process Monitor doesn't work on it. It is a production server, and I can't just stop services as I please. I would like your advice on how I can get the PID that is generating these queries. Thanks.

  • How to configure two URLs for one server using a wildcard certificate?

    - by Amit
    Hi, we have a wildcard certificate installed in our production environment. One of our clients wants his name to appear in the URL (e.g. companyname.sitename.net). How should we facilitate this? Do we need to make any entries for this in DNS? If so, can you please let me know what they are? I need to set this up before Friday PST; any help with this is highly appreciated. Thanks.
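    Yes, DNS needs a record for the new name. A sketch in BIND-style zone syntax, assuming sitename.net itself already resolves to the production server (an A record pointing at the server's IP works just as well):

        ; point the client-branded name at the existing site:
        companyname.sitename.net.  IN  CNAME  sitename.net.

    A wildcard certificate for *.sitename.net covers companyname.sitename.net (wildcards match a single label), so no new certificate is needed; the web server just has to answer for the new host name (e.g. a host-header binding in IIS).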

  • How can I set up VLANs in a way that won't put me at risk for VLAN hopping?

    - by hobodave
    We're planning to migrate our production network from a VLAN-less configuration to a tagged VLAN (802.1q) configuration. This diagram summarizes the planned configuration: One significant detail is that a large portion of these hosts will actually be VMs on a single bare-metal machine. In fact, the only physical machines will be DB01, DB02, the firewalls, and the switches. All other machines will be virtualized on a single host. One concern that has been raised is that this approach is complicated (overcomplicated, it's implied), and that the VLANs provide only an illusion of security, because "VLAN hopping is easy". Is this a valid concern, given that multiple VLANs will be used for a single physical switch port due to virtualization? How would I set up my VLANs appropriately to prevent this risk? Also, I've heard that VMware ESX has something called "virtual switches". Is this unique to the VMware hypervisor? If not, is it available with KVM (my planned hypervisor of choice)? How does that come into play?
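    For reference, the two classic hopping techniques are DTP auto-negotiation (tricking a port into becoming a trunk) and native-VLAN double-tagging, and both are mitigated with a few lines of switch configuration. A sketch in Cisco-style syntax (an assumption; your switch OS may differ):

        interface GigabitEthernet0/1
         description trunk to the VM host
         switchport mode trunk
         ! disable DTP so the mode cannot be negotiated:
         switchport nonegotiate
         ! use an unused VLAN as native, so double-tagged frames go nowhere:
         switchport trunk native vlan 999
         ! prune the trunk to only the VLANs this port actually needs:
         switchport trunk allowed vlan 10,20,30
        !
        interface GigabitEthernet0/2
         description ordinary host port
         switchport mode access
         switchport access vlan 10

    Double-tagging only works when the attacker sits on the trunk's native VLAN, so an unused native VLAN plus a pruned allowed list removes the classic attacks. As for virtual switches: they are not unique to VMware; with KVM the equivalent role is played by a Linux bridge or Open vSwitch, which handle the 802.1q tagging on the hypervisor side of the trunk.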

  • How to script a backup for each database on an MSSQL engine?

    - by Geo
    We need to back up 40 databases on an MS SQL Server engine. We back up each database with the following script:

        BACKUP DATABASE [dbname1] TO DISK = N'J:\SQLBACKUPS\dbname1.bak'
            WITH NOFORMAT, INIT, NAME = N'dbname1-Full Database Backup',
            SKIP, NOREWIND, NOUNLOAD, STATS = 10
        GO
        declare @backupSetId as int
        select @backupSetId = position from msdb..backupset
        where database_name = N'dbname1'
          and backup_set_id = (select max(backup_set_id) from msdb..backupset
                               where database_name = N'dbname1')
        if @backupSetId is null
        begin
            raiserror(N'Verify failed. Backup information for database ''dbname1'' not found.', 16, 1)
        end
        RESTORE VERIFYONLY FROM DISK = N'J:\SQLBACKUPS\dbname1.bak'
            WITH FILE = @backupSetId, NOUNLOAD, NOREWIND
        GO

    We would like to extend the script so that it takes each database in turn and substitutes its name into the statements above. Basically, we're looking for something like this pseudocode:

        For each database in database-list
            sp_backup(database)   -- i.e. run the script above for that database
        End For

    Any ideas?
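    A minimal sketch of that loop, assuming SQL Server 2005 or later for sys.databases (on 2000, master..sysdatabases is the equivalent). It builds the backup statement dynamically per database; the verify batch can be appended to @sql in the same way:

        DECLARE @name sysname, @sql nvarchar(max);
        DECLARE db_cursor CURSOR FOR
            SELECT name FROM sys.databases
            WHERE name NOT IN ('master', 'model', 'msdb', 'tempdb')
              AND state_desc = 'ONLINE';
        OPEN db_cursor;
        FETCH NEXT FROM db_cursor INTO @name;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- same statement as above, with the database name substituted in:
            SET @sql = N'BACKUP DATABASE ' + QUOTENAME(@name)
                     + N' TO DISK = N''J:\SQLBACKUPS\' + @name + N'.bak'''
                     + N' WITH NOFORMAT, INIT, NAME = N''' + @name + N'-Full Database Backup'','
                     + N' SKIP, NOREWIND, NOUNLOAD, STATS = 10';
            EXEC sp_executesql @sql;
            FETCH NEXT FROM db_cursor INTO @name;
        END
        CLOSE db_cursor;
        DEALLOCATE db_cursor;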
