Search Results

Search found 13713 results on 549 pages for 'production environment'.


  • Cherrypy web application won't communicate outside localhost via VPN

    - by Geoffrey Shea
    I'm trying to run a Python 2.7/CherryPy web server on Win 7 which is connected to a VPN to establish a dedicated IP address. (If I run the exact same application on Win XP connected to the VPN it works fine.) On Win 7 I tried configuring it to use port 8080, 8005, or 80 with no improvement. I turned off Windows Firewall altogether to test and there was still no improvement. If I run Apache on the Win 7 machine on port 80 it works fine, so I'm pretty sure it's not the VPN service or router. If I go to WhatIsMyIP.com it shows that I have the IP address being provided by the VPN. Here is the Python code, but I suspect the problem is the network configuration:

        import cherrypy

        class HelloWorld:
            def index(self):
                return "Hello world!3"
            index.exposed = True

        cherrypy.root = HelloWorld()
        cherrypy.config.update({"global": {
            "server.environment": "production",
            "server.socketPort": 8005
        }})
        cherrypy.server.start()

    This will return a web page if I go to localhost:8005, but not if I go to the VPN IP address:8005 from another machine. As I said, if I run Apache on the Win 7 machine on port 80 I can see it at localhost:80 AND at the VPN IP address:80 from another machine. Thanks for any light you can shed! Geoffrey
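
    A likely culprit is the bind address rather than the port: CherryPy versions that use the server.socketPort key also accept a server.socketHost key (an assumption worth checking against the installed version), and binding explicitly to 0.0.0.0 makes the listener reachable on every interface instead of only loopback. A minimal sketch of that change:

        import cherrypy

        class HelloWorld:
            def index(self):
                return "Hello world!"
            index.exposed = True

        cherrypy.root = HelloWorld()
        cherrypy.config.update({"global": {
            "server.environment": "production",
            "server.socketHost": "0.0.0.0",   # assumed key name: bind to all interfaces, not just localhost
            "server.socketPort": 8005
        }})
        cherrypy.server.start()

    After restarting, running netstat -an on the Win 7 box shows whether the listener is on 0.0.0.0:8005 or 127.0.0.1:8005, which confirms which interface it is actually bound to.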

    Read the article

  • Backup solution to backup terabytes and lots of static files on linux server?

    - by user28679
    Which backup tool or solution would you use to back up terabytes and lots of files on a production Linux server? Note that the files are all different and almost never modified, and usage is mostly adding files, so the data volume is today 3TB, growing all the time at around +15GB/day. Please do not reply rsync. Basic Unix tools are not enough: rsync does not keep history, and rdiff-backup miserably fails from time to time and screws up the history. Moreover these are all file-based backups, which put a lot of IO wait on the system just to browse directories and query stat(). But I guess, except for R1Soft CDP, there is no way around that. We tried R1Soft CDP backup, which is block-level backup, and it proved good and efficient for all our other servers, but it systematically fails on the server with 3 terabytes and gazillions of files. The engineers at R1Soft and the datacenter have been playing a hot-ball game for more than 2 months now... and still no backup except regular rsync. We never tried the big commercial solutions, except R1Soft CDP, since it was provided as an optional service by the datacenter hosting our servers.

    Read the article

  • Certificate enrollment request chain not trusted

    - by makerofthings7
    I am working on a MSFT lab for DirectAccess, and need to create a Web certificate. The instructions ask me to do the following:

        1. On EDGE1, click Start, type mmc, and then press ENTER. Click Yes at the User Account Control prompt.
        2. Click File, and then click Add/Remove Snap-ins.
        3. Click Certificates, click Add, click Computer account, click Next, select Local computer, click Finish, and then click OK.
        4. In the console tree of the Certificates snap-in, open Certificates (Local Computer)\Personal\Certificates.
        5. Right-click Certificates, point to All Tasks, and then click Request New Certificate.
        6. Click Next twice.
        7. On the Request Certificates page, click Web Server, and then click "More information is required to enroll for this certificate".
        8. On the Subject tab of the Certificate Properties dialog box, in Subject name, for Type, select Common Name.
        9. In Value, type edge1.contoso.com, and then click Add. Click OK, click Enroll, and then click Finish.
        10. In the details pane of the Certificates snap-in, verify that a new certificate with the name edge1.contoso.com was enrolled with Intended Purposes of Server Authentication.
        11. Right-click the certificate, and then click Properties. In Friendly Name, type IP-HTTPS Certificate, and then click OK.
        12. Close the console window. If you are prompted to save settings, click No.

    In production, our company has overridden the Web Server template and it doesn't seem to be issuing certificates with the full CA chain. When I look at the issued certificate's properties, both tiers of the two-tier CA hierarchy are missing. How can I fix this? I'm not sure where to look outside the GUI.

    Read the article

  • Proper web server setup

    - by DMin
    I just got myself a Slicehost basic slice to play around with so I can learn how to set up web servers. I have Ubuntu 10.04.2 installed on the server. I was able to successfully get the server up and running from scratch; these were the things I did, following this tutorial. I know this is probably just a starter's tutorial, so I was wondering if you guys can tell me what you like to do while setting up production servers. These are the steps that were followed:

        1. Update and upgrade Ubuntu
        2. sudo apt-get install apache2 php5-mysql libapache2-mod-php5 mysql-server
        3. Back up a copy of apache2.conf, then edit it: set 'ServerTokens Full' to 'ServerTokens Prod' and 'ServerSignature On' to 'ServerSignature Off'
        4. Back up php.ini, then change "expose_php = On" to "expose_php = Off"
        5. Restart Apache
        6. Install the Shorewall firewall
        7. Configure Shorewall to only accept HTTP and SSH connections (in the rules file)
        8. Enable Shorewall on startup
        9. Add the website to the server:
           sudo usermod -g www-data root
           sudo chown -R www-data:www-data /var/www
           sudo chmod -R 775 /var/www

    I want to make this Community Wiki but can't seem to find the option to do it. Please feel free to add any feedback on the processes and things I am doing right/wrong. Much appreciated, thanks! :)

    Read the article

  • Best NIC config when virtual servers need iSCSI storage?

    - by icky2000
    I have a Windows 2008 server running Hyper-V. There are 6 NICs on the server configured like this:

        NIC01 & NIC02: teamed administrative interface (RDP, mgmt, etc)
        NIC03: connected to iSCSI VLAN #1
        NIC04: connected to iSCSI VLAN #2
        NIC05: dedicated to one virtual switch for VMs
        NIC06: dedicated to another virtual switch for VMs

    The iSCSI NICs are used obviously for storage to host the VMs. I put half the VMs on the host on the switch assigned to NIC05 and the other half on the switch assigned to NIC06. We have multiple production networks that the VMs could appear on, so the switch ports that NIC05 & NIC06 are connected to are trunked and we then tag the NIC on the VM for the appropriate VLAN. No clustering on this host. Now I wish to assign some iSCSI storage direct to a VM. As I see it I have 2 options:

        1. Add the iSCSI VLANs to the trunked ports (NIC05 and NIC06), add two NICs to the VM that needs iSCSI storage, and tag them for the iSCSI VLANs.
        2. Create two additional virtual switches on the host. Assign one to NIC03 and one to NIC04. Add two NICs to the VM that needs iSCSI storage and let them share that path to the SAN with the host.

    I'm wondering about how much overhead the VLAN tagging in Hyper-V has and haven't seen any discussion about that. I'm also a bit concerned that something funky on the iSCSI-connected VM could saturate the iSCSI NICs or cause some other problem that could threaten storage access for the entire host, which would be bad. Any thoughts or suggestions? How do you configure your hosts when VMs connect direct to iSCSI?

    Read the article

  • Small office network setups

    - by user39822
    I work at a small office and we're overhauling our network setup there. We're a web dev company and at the moment we have 50+ production sites running on the same machine that runs our internal email, which is just plain stupid. We're moving all our client hosting off-site and are now looking for something to run our internal office requirements. Below is a brain dump:

        - Equal number of Macs & PCs, about 25 machines in total.
        - We need a central "server" to host files that should be accessible to everyone as a "network drive". If possible we'd like to use low-cost hardware for this (Mac or Win based). Disk space should be upward of 1TB.
        - Ideally we should also be able to run a small web server on this machine (LAMP stack) to run some planning and billing applications we wrote ourselves.
        - We need some sort of MS Exchange alternative for things like a shared calendar and especially being able to set Out of Office replies.
        - We have one printer that is connected to the network.
        - The setup should preferably be manageable via a graphical interface and NOT require command line skills.
        - Users want to keep using Apple Mail or MS Outlook.

    After a quick Google I came across the Zimbra collaboration suite; can anyone recommend this or any other solution for our office?

    Read the article

  • LAMP Setup, PHP's session_start permission denied

    - by Andrew
    I'm trying to set up a development environment for a legacy system that runs CentOS 4.8, PHP 4.3.9, and MySQL 4.1.22. I'm matching OS and software versions to keep the development server as close to the production server as possible. When I fire up phpMyAdmin's setup script (version 2.11.10.1, of course) the installation errors out and I see these errors in my error log:

        [client 172.18.141.74] PHP Warning:  session_start(): open(/var/lib/php/session/sess_b5b90f86bd3dcfad315ff24cb7483a79, O_RDWR) failed: Permission denied (13) in /home/www/intranet/phpmyadmin/libraries/session.inc.php on line 87
        [client 172.18.141.74] PHP Warning:  Unknown(): open(/var/lib/php/session/sess_b5b90f86bd3dcfad315ff24cb7483a79, O_RDWR) failed: Permission denied (13) in Unknown on line 0
        [client 172.18.141.74] PHP Warning:  Unknown(): Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/lib/php/session) in Unknown on line 0

    I've done some searching on ServerFault and on teh Googles and I see that a common reason for this error is that the session.save_path isn't writable by the www user. I also found where this path is set in /etc/php.ini. My session.save_path is set to:

        session.save_path = /var/lib/php/session

    I've since changed the owner and the group of /var/lib/php/session and still have the same error. Here's the result of ls -la for /var/lib/php:

        [root@localhost php]# ls -la
        total 24
        drwxrwxr-x   3 www  www  4096 Oct 23 20:21 .
        drwxr-xr-x  17 root root 4096 Oct 23 20:31 ..
        drwxrwx---   2 www  www  4096 Jun  1  2009 session

    ...But I'm still getting the same error. Is there another possibility for why I'm getting this error?

    Read the article

  • C# sends SQL data 4 times less from one box than from another

    - by Bobb
    W2003, .NET 3.5, SQL 2008. I have prod and UAT app servers deployed in 2 different data centres. I have a C# app which reads a text file, parses the text and sends the data to SQL in bulk. The SQL server is in the US and the app servers are in London (but in different places). All POPs have dedicated network connections; there is no public internet involved. When the app runs on the UAT server I can see in Perfmon that the Send bytes/sec is 4x higher than from the production server. My estimate is that one server outputs at 1 MB/s and the other at a 250 KB/s rate. My immediate suspicion is that there is a router in one of the DCs which shapes traffic or does QoS limitation on traffic from London to the US. However the support, Windows, and networking teams are all saying that there are no differences in either the networking config of the 2 DCs or the NIC config of the 2 app servers... How do I find out why the networking bottleneck is 4 times tighter in one place than in the other? What can I do about it?
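
    One way to take the application and SQL out of the picture is to compare raw TCP throughput from each app server to the same US-side endpoint. A rough sketch (host and port are hypothetical; run the receiver on a test box in the US data centre and the sender from each London server):

        import socket
        import sys
        import time

        CHUNK = 64 * 1024          # 64 KB send buffer
        TOTAL = 100 * 1024 * 1024  # send 100 MB per test

        def receiver(port=5001):
            # Run this on the US-side test box: accept one connection and drain it.
            srv = socket.socket()
            srv.bind(("0.0.0.0", port))
            srv.listen(1)
            conn, addr = srv.accept()
            received = 0
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            print("received %d bytes from %s" % (received, addr))

        def sender(host, port=5001):
            # Run this from each London app server and compare the two rates.
            payload = b"x" * CHUNK
            sock = socket.create_connection((host, port))
            sent = 0
            start = time.time()
            while sent < TOTAL:
                sock.sendall(payload)
                sent += len(payload)
            sock.close()
            elapsed = time.time() - start
            print("%.1f MB in %.1f s = %.2f MB/s" % (sent / 1e6, elapsed, sent / 1e6 / elapsed))

        if __name__ == "__main__":
            if sys.argv[1] == "recv":
                receiver()
            else:
                sender(sys.argv[2])

    If both servers push roughly the same raw rate, the difference lies in the app or SQL layer (packet size, batch size, network library settings); if the rates differ in the same 4:1 way, the path itself is being shaped and the networking teams have something concrete to chase.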

    Read the article

  • SQL Server Installation: Is it 32 or 64 bit?

    - by CapBBeard
    Hi, recently I was performing an OS upgrade on one of our DB servers, moving from Server 2003 to Server 2008. The DBMS is SQL Server 2005. While reinstalling SQL on the new Windows installation, I went to another of our DB servers to verify a couple of settings. Now, I always thought this second server was Server 2003 x64 + SQL 2005 x64 (from what I'd been told), but I now have my doubts about this. I now suspect that it is in fact only 32 bit SQL, however I'd like to verify this. Here are some details:

        - The OS is definitely 64 bit.
        - xp_msver shows Platform as NT INTEL X86.
        - SELECT @@VERSION shows Microsoft SQL Server 2005 - 9.00.4035.00 (Intel X86)...
        - However sqlservr.exe is not shown with '*32' in taskmgr; does anyone know why this is the case, if it is in fact 32 bit as claimed? Despite this, it does seem to be running out of the x86 program files folder.

    If I do the same checks on a confirmed 64 bit installation, it does give back the expected 64 bit readings, which can only prove that this server in question is running in 32 bit. Now, that being the case, the question arises about how much memory this '32 bit' install can use. Task Manager reports about 3.5GB memory usage for sqlservr.exe (the server has 16GB physical). I suspect that AWE has not been configured at all, and therefore the server will be significantly under-utilised (remembering that the OS is 64 bit) if SQL is simply using a 32 bit address space. Is this assumption correct? I feel the server should have SQL reinstalled as 64 bit in order to fully utilise the hardware platform, however it is currently heavily in production; this will be no easy task. I suspect we may just have to configure AWE correctly and let it be for the time being (unless this is a bad idea?). I apologise that this question is a little vague/lost; I'm no SQL expert, just trying to get a handle on what's going on here.

    Read the article

  • Extract Distinct restful MVC routes from IIS logs

    - by Grummle
    This is a cross post from Stack Overflow that, after some consideration, I believe can be asked here (not getting anything on SO). My shop is using MVC3/FUBU on IIS 7. I recently put something into production and I wanted to gather metrics from the IIS logs using Log Parser. I've done this many times before with file endpoints, but because the MVC3 routes are of the form /api/person/{personid}/address/{addressid}, the log saves /api/person/123/address/456 in the cs-uri-stem column. Does anyone have any ideas on how to get data about specific routes from IIS logs? As an example, with a log like this:

        cs-uri-stem
        /api/person/123/address/456
        /api/person/121/address/33
        /api/person/3555
        /api/person/1555/address/5555

    I want information about all requests where the route used was /api/person/{personid}, so the count would be 1 in this case. Ideally what I'd like to figure out is how to have IIS log the route template that was chosen for a particular URL, so the logs would carry /api/person/{personid}/address/{addressid} in a column in addition to the cs-uri-stem /api/person/1555/address/5555.
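
    If logging the route template itself turns out not to be feasible, a post-processing pass over the logs can collapse each cs-uri-stem back to a template by replacing numeric segments, then count per template. A minimal sketch (the log file name is hypothetical, and the field position depends on the configured W3C fields):

        import re
        from collections import Counter

        def normalize(uri_stem):
            # Replace purely numeric path segments with a placeholder so
            # /api/person/123/address/456 -> /api/person/{id}/address/{id}
            return re.sub(r"/\d+", "/{id}", uri_stem)

        counts = Counter()
        with open("u_ex120326.log") as log:          # hypothetical IIS log file name
            for line in log:
                if line.startswith("#"):             # skip IIS log header lines
                    continue
                fields = line.split()
                uri_stem = fields[4]                 # cs-uri-stem position depends on the configured fields
                counts[normalize(uri_stem)] += 1

        for route, count in counts.most_common():
            print(count, route)

    Against the four sample lines above this yields a count of 3 for /api/person/{id}/address/{id} and 1 for /api/person/{id}, which is the per-route breakdown being asked for.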

    Read the article

  • What is the fastest way to clone an INNODB table within the same server?

    - by Vic
    Our development server is a replication slave of our production server. We have a script that developers use if they want to run their applications/bug fixes against fresh data. That script looks like this:

        dbs=( analytics auth logs users )
        server=localhost
        conn="-h ${server} -u ${username} --password=${password}"

        # Stop the replication client so we don't encounter weird data.
        echo "STOP SLAVE" | mysql ${conn}

        # Bunch of bulk insert optimizations
        echo "SET autocommit=0" | mysql ${conn}
        echo "SET unique_checks=0" | mysql ${conn}
        echo "SET foreign_key_checks=0" | mysql ${conn}

        # Restore all databases and tables.
        for sourcedb in ${dbs[*]}
        do
            destdb=${prefix}${sourcedb}
            echo "Dropping database ${destdb}..."
            echo "DROP DATABASE IF EXISTS ${destdb}" | mysql ${conn}
            echo "CREATE DATABASE ${destdb}" | mysql ${conn}
            # First, all the tables.
            for table in `echo "SHOW FULL TABLES WHERE Table_type <> 'VIEW'" | mysql $conn $sourcedb | tail -n +2`; do
                if [[ "${table}" != 'BASE' && "${table}" != 'TABLE' && "${table}" != 'VIEW' ]] ; then
                    createTable=`echo "SHOW CREATE TABLE ${table}"|mysql -B -r $conn $sourcedb|tail -n +2|cut -f 2-`
                    echo "Restoring ${destdb}/${table}..."
                    echo "$createTable ;" | mysql $conn $destdb
                    insertData="INSERT INTO ${destdb}.${table} SELECT * FROM ${sourcedb}.${table}"
                    echo "$insertData" | mysql $conn $destdb
                fi
            done
        done

        echo "SET foreign_key_checks=1" | mysql ${conn}
        echo "SET unique_checks=1" | mysql ${conn}
        echo "COMMIT" | mysql ${conn}

        # Restart the replication client
        echo "START SLAVE" | mysql ${conn}

    All of these operations are, as I mentioned, within the same server. Is there a faster way to clone the tables that I'm not seeing? They're all InnoDB tables. Thanks!
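
    A small simplification of the same approach, sketched in Python: CREATE TABLE ... LIKE copies each table definition directly, which avoids parsing SHOW CREATE TABLE output, and running everything over one connection avoids spawning a mysql client per statement. This is only a sketch; it assumes the pymysql driver is installed, and the prefix, credentials, and database list are placeholders mirroring the script above (the slave stop/start and bulk-load settings still apply around it):

        import pymysql

        DBS = ["analytics", "auth", "logs", "users"]
        PREFIX = "dev_"   # assumed destination prefix

        conn = pymysql.connect(host="localhost", user="user", password="secret", autocommit=True)
        cur = conn.cursor()

        for sourcedb in DBS:
            destdb = PREFIX + sourcedb
            cur.execute("DROP DATABASE IF EXISTS `%s`" % destdb)
            cur.execute("CREATE DATABASE `%s`" % destdb)

            cur.execute(
                "SELECT table_name FROM information_schema.tables "
                "WHERE table_schema = %s AND table_type = 'BASE TABLE'", (sourcedb,))
            for (table,) in cur.fetchall():
                # Copy the definition, then bulk-copy the rows server-side.
                cur.execute("CREATE TABLE `%s`.`%s` LIKE `%s`.`%s`" % (destdb, table, sourcedb, table))
                cur.execute("INSERT INTO `%s`.`%s` SELECT * FROM `%s`.`%s`" % (destdb, table, sourcedb, table))

        cur.close()
        conn.close()

    One caveat: CREATE TABLE ... LIKE copies indexes but not foreign key definitions, so if the dev copies need those constraints the original SHOW CREATE TABLE route is still the right one. Either way the copy time is dominated by the INSERT ... SELECT, so gains here are mostly in simplicity rather than speed.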

    Read the article

  • Disk usage on IIS, PHP5, performance problems.

    - by Jacob84
    Hi everybody, I'm quite worried about a performance problem that I'm facing on one of our production servers. I'm working for a hosting company, so you can imagine how heterogeneous the applications running here are. It all started with a call from a client complaining about the speed of loading a Joomla site. The setup is IIS 6 (Windows 2003) with PHP5 and FastCGI, which normally works pretty well. I've tested the loading time and indeed, he was right: 7 or 8 seconds to load, when usually this can be accomplished in 2. Seeing these results, I started by checking CPU and RAM. Everything normal: 2GB of RAM free, 3%-8% of CPU activity. That's what I call a relaxed server ;). Unfortunately, digging a little deeper I found the 'PhysicalDisk' counters quite high (above 10), especially the read queues. I've used Process Explorer to see which of those processes has the higher deltas, but everything seemed normal. As the problem is specifically related to PHP pages, I've checked specific IIS counters, such as actual connections, number of CGI requests and number of ISAPI requests:

        CGI -> 3 to 7
        ISAPI -> 5 to 9
        Connections -> 90 to 120 (which appears at the top of the graph)

    More than a solution (I know this is hard to find), I would like to know if you have a specific methodology for facing this kind of problem. Thanks a lot, as always.

    Read the article

  • How can I avoid permission denied errors when attempting to deploy a rails app with capistrano?

    - by joshee
    Total noob here. I'm attempting to deploy an app through Capistrano. I'm getting relentless permission denied errors when I attempt to run cap deploy:update. Seemingly at least some of these errors are due to missing directories that trigger a "Permission Denied" error. (I'm doing setup on root just temporarily.)

        set :user, 'root'
        set :domain, 'domainname.com'
        set :application, 'appname'

        # adjust if you are using RVM, remove if you are not
        $:.unshift(File.expand_path('./lib', ENV['rvm_path']))
        require "rvm/capistrano"
        set :rvm_ruby_string, '1.9.2'

        # file paths
        set :repository, "ssh://[email protected]/~/git/appname.git"
        set :deploy_to, "/var/rails/appname"

        # distribute your applications across servers (the instructions below put them
        # all on the same server, defined above as 'domain', adjust as necessary)
        role :app, domain
        role :web, domain
        role :db, domain, :primary => true

        set :deploy_via, :remote_cache
        set :scm, 'git'
        set :branch, 'master'
        set :scm_verbose, true
        set :use_sudo, false
        set :rails_env, :production

        namespace :deploy do
          desc "cause Passenger to initiate a restart"
          task :restart do
            run "touch #{current_path}/tmp/restart.txt"
          end

          desc "reload the database with seed data"
          task :seed do
            run "cd #{current_path}; rake db:seed RAILS_ENV=#{rails_env}"
          end
        end

        after "deploy:update_code", :bundle_install

        desc "install the necessary prerequisites"
        task :bundle_install, :roles => :app do
          run "cd #{release_path} && bundle install"
        end

    Here's my result:

        ** [domainname.com :: out] Cloning into '/var/rails/appname/shared/cached-copy'...
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied (publickey,gssapi-with-mic,password).
        ** [domainname.com :: err] fatal: The remote end hung up unexpectedly

    I'm able to ssh without a password, so not sure about that publickey error. By the way, if I run cap deploy:update without set :deploy_via, :remote_cache, here's my result:

        ** [domainname.com :: out] Cloning into '/var/rails/appname/releases/20120326204237'...
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied (publickey,gssapi-with-mic,password).
        ** [domainname.com :: err] fatal: The remote end hung up unexpectedly
        command finished

    Thanks a lot for your help with this.

    Read the article

  • LDAP Account Locked Out Sporadically after Password change - Finding the source of invalid attempts

    - by CityView
    On a small network of machines (<1000) we have a user whose account is being locked out after an indeterminate interval following a password change. We are having severe difficulty finding the source of the invalid logon attempts and I would appreciate it greatly if some of you could go through your thought process and the checks you would perform in order to fix the problem. All I know for sure is that the account is locked out several (5+) times a day; I can't even be sure it's due to failed login attempts, as there is no record of failure until the account is locked. So far I have tried:

        - Logging the account out of everything we can think of and back in with the new password
        - Scanning the user's box for any non-standard software which might perform an LDAP lookup
        - Checking all installed services on our production boxes to check none are attempting to run under the account
        - Changing the user back to their old password (the problem persists, so perhaps the password change is a red herring)
        - Wireshark on a box where lots of LDAP authentication is performed - rejects only occur after the account is already locked out
        - Clearing the credential cache in Control Panel - User Accounts - Advanced
        - Looking at the local

    I'm at a loss for what to try. I am happy to try any suggestions you have in order to diagnose the issue. I think my question boils down to a simple request: I need a technique for deriving the source (application/host) of the invalid login attempts which are causing the account to be locked. I'm not sure if that's even possible, but I suspect there must be more I can try. Many thanks, CityView

    Read the article

  • SWATCH - what am I doing wrong?

    - by Brian Dunbar
    What I want/need/desire is to log when a user logs into my FTP server. Problem: I can't make swatch work the way I should be able to. This data is logged to a file - but of course these logs are not kept very long. I can't keep the logs around forever, but I can extract data from them, analyze it, and store the results elsewhere. If there is a better way to do this than the following, I'm all ears.

        Swatch version 3.2.3
        Perl 5.12
        FTP: VSFTP
        OS (Test): OS X 10.6.8
        OS (Production): Solaris

    From man I see I can pass contents to a command, so I should be able to echo those values to a file and do a sed/cut/uniq thing on them for stats.

        $ man swatch
        (snip)
        exec command
            Execute command. The command may contain variables which are substituted with
            fields from the matched line. A $N will be replaced by the Nth field in the
            line. A $0 or $* will be replaced by the entire line.

    Swatch file .swatchrc:

        watchfor /OK LOGIN/
            echo=red
            pipe "echo "0: $0 1:$1 2:$2 3:$3 4:$4 5:$5" >> /Users/bdunbar/dev/ftplog/output.txt"

    Launch with:

        $ swatch -c /Users/bdunbar/.swatchrc --script-dir /Users/bdunbar/dev/ftplog -t /Users/bdunbar/dev/ftplog/vsftpd.log &

    Test:

        echo "Mon July 9 03:11:07 2012 [pid 14938] [aetech] OK LOGIN: Client "206.209.255.227"" >> vsftpd.log

    Results - it's echoing to TTY. This is not needed or desired on the server, but it does tell me things are working.

        ftplog *** swatch version 3.2.3 (pid:25780) started at Mon Jul 9 15:23:33 CDT 2012
        Mon July 9 03:11:07 2012 [pid 14938] [aetech] OK LOGIN: Client 206.209.255.227

    Results - bad! I appear not to be sending the variables to text.

        $ tail -f output.txt
        0: /Users/bdunbar/dev/ftplog/.swatch_script.25780 1: 2: 3: 4: 5:
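
    Since the end goal is per-user login stats rather than swatch itself, one alternative is to skip the $N substitution entirely and post-process the vsftpd log directly. A minimal sketch (the regular expression is an assumption based on the sample line above):

        import re
        from collections import Counter

        # Matches lines like:
        # Mon Jul  9 03:11:07 2012 [pid 14938] [aetech] OK LOGIN: Client "206.209.255.227"
        LOGIN_RE = re.compile(r'\[(?P<user>[^\]]+)\] OK LOGIN: Client "(?P<ip>[^"]+)"')

        logins = Counter()
        with open("/Users/bdunbar/dev/ftplog/vsftpd.log") as log:
            for line in log:
                match = LOGIN_RE.search(line)
                if match:
                    logins[(match.group("user"), match.group("ip"))] += 1

        for (user, ip), count in logins.most_common():
            print("%-12s %-16s %d" % (user, ip, count))

    Running that from cron (or against rotated copies of the log) also sidesteps the quoting problem in the pipe line: in the man page excerpt above the $N substitution is only documented for exec, and the nested double quotes inside the pipe string end the string early, which would explain why only the script path reaches output.txt.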

    Read the article

  • Spreadsheet application that can handle big data OS X

    - by Peter
    I've been working with Excel for quite a while for some statistical analysis that I do regularly. The size of the data that I'm working with has gotten much larger as of late, however. The layout of the databases in question is quite simple, usually just a few columns: a UNIX timestamp, an EST value, a proprietary numeric value, and finally an average of the rows whose timestamps are within +/- 1000 of that row's timestamp (a little AVERAGEIFS() formula). That formula and the EST conversion are the only formulas in the sheet. I'm beginning to work with files with 500,000+ rows, and running the average formula down the entire column takes forever. The end result is the production of print-worthy graphs. I'm looking for either a UNIX command-line utility or a separate spreadsheet/database application that can handle this amount of data without melting my CPU or making me wait an hour. Is there anything out there? TL;DR: a simple Excel sheet with over half a million rows is getting too slow to work with. OS X alternatives?
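
    For this particular workload the AVERAGEIFS() is the expensive part, and a command-line pass over the file handles it comfortably. A minimal sketch using Python with pandas and NumPy (the column names and CSV layout are assumptions; the +/- 1000 timestamp window matches the formula described above):

        import numpy as np
        import pandas as pd

        # Assumed layout: a CSV with columns 'timestamp' (UNIX seconds) and 'value'.
        df = pd.read_csv("data.csv")
        df = df.sort_values("timestamp").reset_index(drop=True)

        ts = df["timestamp"].to_numpy()
        cumsum = df["value"].cumsum().to_numpy()

        # For each row, find the index range of rows whose timestamp is within +/- 1000.
        lo = np.searchsorted(ts, ts - 1000, side="left")
        hi = np.searchsorted(ts, ts + 1000, side="right")

        # Windowed sums via cumulative sums, then divide by each window's size.
        padded = np.concatenate(([0.0], cumsum))
        df["window_avg"] = (padded[hi] - padded[lo]) / (hi - lo)

        df.to_csv("data_with_avg.csv", index=False)

    Because each row's window is found by binary search instead of rescanning the column, this avoids the per-row rescan that makes AVERAGEIFS crawl; on half a million rows it should run in seconds rather than hours, and the output CSV can go straight into a plotting tool for the print-worthy graphs.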

    Read the article

  • Painless deployment of a Django app (port from Drupal). Do I have to switch to a VPS?

    - by Monden
    I'm about to complete porting my Drupal-based community site to Django. My Drupal site has been hosted at a shared host (Dreamhost) for the last 4 years, and stability & performance have been satisfactory. The site gets around 5k unique visitors with 70-80k page views a day. This will be my first deployment of a Django application and I'm not comfortable with managing my own VPS. I use Ubuntu as a dev server, but I don't have experience with it in a production environment. I have an unrelated internal CRM app (Django) that I host with Webfaction; however, security and performance aren't an issue there as it's only accessed by 5 people. Unfortunately, I don't have much time to learn and maintain a VPS at this moment. I would like to know if I can host a site with this much traffic in Webfaction's shared environment. How would performance differ in comparison to Linode or Slicehost? Google App Engine isn't an option at the moment as I'll be using my current PostgreSQL database.

    Read the article

  • Methods to transfer files from Windows server to linux server

    - by Raze2dust
    Hi, I need to transfer web-server-log-like files, written periodically, from Windows production servers in the US to Linux servers here in India. The files are ~4 MB in size each and I get about 1 file per minute. I can accept about 5 minutes of lag between the files getting written on Windows and them being available on the Linux machines. I am a bit confused between the various options here as I am quite inexperienced in such design:

        1. I am thinking of writing a service in C#.NET which will periodically archive, compress and send them over to the Linux machines. These files are pretty compressible - WinRAR can convert 32 MB of these files into a 1.2 MB archive - so that should solve the network transfer speed issue. But then how exactly do I transfer files to Linux? I could mount a Linux drive on the Windows server using Samba, or I could create an FTP server, or send the file serialized as a POST request. Which one would be good? Also, I have to minimize the load on the Windows server.
        2. Mount the Windows drive on Linux instead. I could use the mount command or I could use Samba here (what are the pros and cons of these two?). I can then write the compressing and copying part on Linux itself.

    I don't trust the internet connection to be very stable, so there should be a good retry mechanism and failure protection too. What are the potential gotchas in these situations, and other points that I must be worried about? Thanks, Hari
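
    A minimal sketch of the Linux-side variant (option 2 above), assuming the Windows log directory has already been mounted, for example via CIFS at a path like /mnt/winlogs (the paths and age threshold are placeholders): pull only files that have not been written to for a while, so in-progress files are skipped, compress them locally, and make the copy step idempotent so interrupted runs simply retry on the next pass.

        import gzip
        import os
        import shutil
        import time

        SRC = "/mnt/winlogs"          # mount of the Windows log share (assumed path)
        DEST = "/data/uslogs"         # local destination on the Linux server
        MIN_AGE = 120                 # seconds a file must be untouched before we pull it

        os.makedirs(DEST, exist_ok=True)
        now = time.time()

        for name in sorted(os.listdir(SRC)):
            src_path = os.path.join(SRC, name)
            dest_path = os.path.join(DEST, name + ".gz")
            tmp_path = dest_path + ".part"

            # Skip files still being written, and files we already copied.
            if now - os.path.getmtime(src_path) < MIN_AGE or os.path.exists(dest_path):
                continue

            # Compress to a temporary name, then rename: a crash mid-copy leaves only
            # a .part file, which gets redone on the next run.
            with open(src_path, "rb") as src, gzip.open(tmp_path, "wb") as dst:
                shutil.copyfileobj(src, dst)
            os.rename(tmp_path, dest_path)

    Run from cron every minute or two; because completed files are detected by the presence of the finished .gz, the script is safe to rerun after a network drop, which covers the retry requirement without any state beyond the filesystem.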

    Read the article

  • Installing Ubuntu guest crashes Hyper-V host

    - by Grant
    I have a weird problem that I don't even know where to begin diagnosing: trying to install Ubuntu in a VM locks up the host system! My setup is:

        Dell R715 server, dual 16-core AMD Opteron processors, 96GB RAM
        Dell MD3600f SAN
        Server 2008 R2 Datacenter
        System Center VMM 2012

    There are 5 Windows virtual machines running that have had no problems. This is the first Linux VM I've tried to create. I set up a VM through Virtual Machine Manager, set the CD drive to an Ubuntu 12.04 Server x64 ISO, and started it up. It boots to the normal Ubuntu install menu, but the second I hit enter on "Install Ubuntu Server", I get disconnected. The HOST machine stops responding to pings, and so do all virtual machines on it. It locks up entirely - the keyboard on the host won't work, the mouse won't move, the numlock light won't change. There's no blue screen - the host is sitting at the login screen completely unresponsive. I can't find any relevant logs in Event Viewer after rebooting. What could cause the host machine to freeze like that? It's not a one-time occurrence - it happens every time at the exact same point. Thank god this server isn't in production yet!

    Read the article

  • Move files from ftp server to s3

    - by lev
    I would like to set up an FTP server where users will upload files, and for each file, put it on S3 storage and delete it from the FTP server. (The server runs on EC2 Ubuntu.) Here are the things I've already tried, with no success:

        1. Mount the S3 bucket using s3fs. I followed those instructions, but there is a bug in the latest version of s3fs that prevents it from working. The bug was fixed on the develop branch, but I don't want to use an unstable version in my production.
        2. Use vsftpd and s3cmd sync via cron to sync the files periodically. The problem with that approach is that s3cmd can start running in the middle of a file upload, and start syncing the incomplete file. Also s3cmd doesn't give any feedback if the sync fails, so I have no way of knowing if I can delete the files after the sync command has finished running.
        3. Use pure-ftpd's upload script feature (which allows a script to run after a file has finished uploading), but I noticed that if the file upload failed in the middle, the script runs anyway, and I have no way of knowing if the upload was successful or not.

    I've been at it for a few days now, and I'm at a loss here. Any suggestions will be welcomed.
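
    A third variant of the cron approach addresses both concerns directly: only touch files that have not been written to for a little while (so in-progress FTP uploads are skipped), and only delete the local copy after the S3 upload has returned successfully and the remote size matches. A minimal sketch, assuming the boto3 library is available (the bucket name, directory, and age threshold are placeholders):

        import os
        import time

        import boto3

        UPLOAD_DIR = "/home/ftp/incoming"   # where the FTP server drops files (assumed path)
        BUCKET = "my-ftp-archive"           # placeholder bucket name
        MIN_AGE = 120                       # seconds since last write before a file is considered complete

        s3 = boto3.client("s3")
        now = time.time()

        for name in os.listdir(UPLOAD_DIR):
            path = os.path.join(UPLOAD_DIR, name)
            if not os.path.isfile(path) or now - os.path.getmtime(path) < MIN_AGE:
                continue  # still being uploaded, or not a regular file

            size = os.path.getsize(path)
            s3.upload_file(path, BUCKET, name)  # raises on failure, so a failed upload never deletes anything

            # Double-check the object landed with the expected size before removing the local copy.
            head = s3.head_object(Bucket=BUCKET, Key=name)
            if head["ContentLength"] == size:
                os.remove(path)

    Because the delete only happens after a verified upload, a flaky run just leaves the file in place for the next cron pass, which gives the retry behaviour that s3cmd sync and the upload-script hook don't.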

    Read the article

  • Play framework 2.2 using Upstart 1.5 (Ubuntu 12.04)

    - by Leon Radley
    I'm trying to get Play 2.2 working with upstart. I've been running Play 2.x with upstart since its release and it's never been a problem, but since the release of 2.2 and the change to the sbt-native-packager (http://www.scala-sbt.org/sbt-native-packager/), Play doesn't want to start any more. Here's the config I'm using:

        description "PlayFramework 2.2"
        version "2.2"

        env APP=myapp
        env USER=myuser
        env GROUP=www-data
        env HOME=/home/myuser/app
        env PORT=9000
        env ADDRESS=127.0.0.1
        env CONFIG=production.conf
        env JAVAOPTS="-J-Xms128M -J-Xmx512m -J-server"

        start on runlevel [2345]
        stop on runlevel [06]

        respawn
        respawn limit 10 5
        expect daemon

        # If you want the upstart script to build play with sbt
        pre-start script
            chdir $HOME
            sbt clean compile stage -mem $SBTMEM
        end script

        exec start-stop-daemon --pidfile ${HOME}/RUNNING_PID --chuid $USER:$GROUP --exec ${HOME}/bin/${APP} --background --start -- -Dconfig.resource=$CONFIG -Dhttp.address=$ADDRESS -Dhttp.port=$PORT $JAVAOPTS

    I've changed JAVAOPTS to include the -J- prefix and I've also changed the path to use the new start script located in the bin/ dir. I've read that upstart 1.4 has setuid and setgid. I've tried removing the start-stop-daemon but I haven't got that working either. Any suggestions would be appreciated.

    Read the article

  • SQL Server Subscriber Migration

    - by SuperCoolMoss
    We currently have one-way transactional replication from a SQL Server 2005 OLTP publisher/distributor to two subscribers (one SQL 2005 and the other SQL 2008 R2). Replication security is via the SQL Agents' domain service account (the same account is used on all boxes). The SQL 2008 R2 subscriber is used for BI purposes and hosts a database that has a subset of the production publisher database tables, with different security and indexes. We need to migrate this BI subscriber to a newer box with more performant hardware. The plan is as follows:

        1. Stop replicating to the BI box (continue replicating to the other subscriber).
        2. Back up all databases on the BI box (including system databases).
        3. Restore all databases (including master, in single user mode) to the new BI box (this has SQL Server 2008 R2 already installed).
        4. Take the old BI box off the network and shut it down.
        5. Rename and re-IP the new BI box to be the same as the old box.
        6. Switch replication back on.

    Are there any flaws in this approach?

    Read the article

  • Error code 1005 (errno: 121) upon create table while restoring MySQL database from a dump

    - by Jonathan
    I have a Linux prod machine and a Win7 64-bit dev machine. My workflow includes dumping the production MySQL database on the Linux machine and restoring it into my local MySQL database on the Windows machine (using SQLyog). This worked fine for a long time. Following some trouble, I formatted and reinstalled my Windows dev machine. Since then I'm unable to restore the db on it. I keep receiving the following error:

        Query: CREATE TABLE `auth_group` (
          `id` int(11) NOT NULL auto_increment,
          `name` varchar(80) collate utf8_unicode_ci NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `name` (`name`)
        ) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

        Error occured at: 2010-06-26 17:16:14
        Line no.: 30
        Error Code: 1005 - Can't create table 'ap_site.auth_group' (errno: 121)

    Notice that this is the first CREATE TABLE statement in the SQL dump file. This error occurs both on MySQL Community Server 5.1.41 and 5.1.48 and with SQLyog Community 8.0.4 and 8.5.1. I really don't know what's different in my configuration from before the reinstall and now, or why it has this effect. Restoring from a SQL dump is something I need to keep on doing, so I need a permanent fix and not a tailored workaround.

    Read the article

  • CloudFront with Custom Origin and ELB

    - by kmfk
    We are using CloudFront for our static assets but also wanted to allow for gzip. We set up a new distribution with a custom origin pointing back to our application servers, which are behind an elastic load balancer. We manually keep the files in sync across the cluster and update them when we publish. However, with this setup, we get nothing but Miss and RefreshHits from CloudFront, which so far has defeated the purpose. Are there any additional settings needed in order to use an ELB as your custom origin? In the docs, it is referenced as a viable solution. When we point the distribution to a single server in our production cluster, CloudFront properly caches our assets. Is it possible that the sticky-sessions cookie and the subsequent header that gets added by it could be an issue?

        Cache-Control: no-cache="set-cookie"  //Added by load balancer

    Any ideas? FYI - currently, we have our custom origin pointing to a single EC2 instance, so caching is working correctly - in case you try to curl the file below. Example headers:

        curl -I http://static.quick-cdn.com/css/9850999.css
        HTTP/1.0 200 OK
        Accept-Ranges: bytes
        Cache-Control: max-age=3700
        Cache-Control: no-cache="set-cookie"
        Content-Length: 23038
        Content-Type: text/css
        Date: Thu, 12 Apr 2012 23:03:52 GMT
        Last-Modified: Thu, 12 Apr 2012 23:00:14 GMT
        Server: Apache/2.2.17 (Ubuntu)
        Vary: Accept-Encoding
        X-Cache: RefreshHit from cloudfront
        X-Amz-Cf-Id: K_q7Zy3_jdzlEJ85ukELVtdx1GmuXqApAbZZ7G0fPt0mxRMqPKX5pQ==,RzJmPku-rEIO9WlvuSoKa8hiAaR3dLk5KC4cQMWWrf_MDhmjWe8n6A==
        Via: 1.0 28c34f9fbf559a21ee16594849e4fc9c.cloudfront.net (CloudFront)
        Connection: close
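
    A quick way to narrow this down is to fetch the same asset through both distributions (one with the single-instance origin, one with the ELB origin) and compare the caching-related response headers, in particular whether Cache-Control: no-cache="set-cookie" only appears behind the ELB. A minimal sketch (the second URL is a hypothetical placeholder for the ELB-backed distribution):

        import urllib.request

        URLS = [
            "http://static.quick-cdn.com/css/9850999.css",      # distribution with single-instance origin
            "http://static-elb.example.com/css/9850999.css",    # hypothetical distribution with ELB origin
        ]

        INTERESTING = ("cache-control", "x-cache", "vary", "set-cookie", "age")

        for url in URLS:
            resp = urllib.request.urlopen(url)
            print(url)
            for header, value in resp.getheaders():
                if header.lower() in INTERESTING:
                    print("  %s: %s" % (header, value))
            print()

    If the no-cache="set-cookie" directive shows up only behind the ELB, the stickiness cookie on the CloudFront-facing listener (or whatever is adding that header) is the first thing to test removing.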

    Read the article

  • Possible disk IO issue

    - by Tim Meers
    I've been trying to really figure out what the IOPS are on my DB server array and see if it's just too much. The array is four 72.6GB 15k RPM drives in RAID 5. To calculate IOPS for RAID 5 the following formula is used: (reads + (4 * writes)) / number of disks = total IOPS. The formula is from MSDN. I also want to calculate the average queue length, but I'm not sure where they are getting the formula from; I think it reads on that page as average queue length / number of disks = actual queue. To populate the formulas I used Perfmon to gather the needed information. I came up with this, under normal production load: (873.982 + (4 * 28.999)) / 4 = 247.495. Also the disk queue length of 14.454 / 4 = 3.614. So, to the question: am I wrong in thinking this array has very high disk IO?

    Edit: I got the chance to review it again this morning under normal/high load. This time with even bigger numbers, and IOPS in excess of 600 for about 5 minutes before it died down again. But I also took a look at the Avg sec/Transfer, %Disk Time, and %Idle Time. These numbers were taken when the reads/writes per sec were only 332.997/17.999 respectively:

        %Disk Time: 219.436
        %Idle Time: 0.300
        Avg Disk Queue Length: 2.194
        Avg Disk sec/Transfer: 0.006
        Pages/sec: 2927.802
        % Processor Time: 21.877

    Edit (again): Looks like I have that issue solved. Thanks for the help. Also, for a pretty slick parser I found this: http://pal.codeplex.com/ - it works pretty well for breaking down the data into something usable.
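
    The arithmetic above, wrapped up so different Perfmon samples can be dropped in. Dividing by the number of spindles gives the per-disk backend load, which is what gets compared against a 15k drive's rough 150-200 IOPS ceiling (that ceiling is a rule-of-thumb assumption, not something from the original post):

        def raid5_per_disk_iops(reads_per_sec, writes_per_sec, disks):
            # RAID 5 turns each logical write into 4 backend I/Os (read data, read parity,
            # write data, write parity), hence the 4x write penalty.
            return (reads_per_sec + 4 * writes_per_sec) / disks

        def per_disk_queue(avg_queue_length, disks):
            return avg_queue_length / disks

        # Figures from the original post: normal production load on the 4-disk array.
        print(raid5_per_disk_iops(873.982, 28.999, 4))   # ~247.5
        print(per_disk_queue(14.454, 4))                 # ~3.6

        # Second sample (the 'Edit' above).
        print(raid5_per_disk_iops(332.997, 17.999, 4))   # ~101.2

    If that rule-of-thumb ceiling is roughly right, ~247 backend I/Os per disk at "normal" load is indeed more than a single 15k spindle comfortably sustains, which is consistent with the per-disk queue depth of ~3.6.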

    Read the article
