Search Results

Search found 7957 results on 319 pages for 'production databases'.

  • Can't remove route from routing table

    - by anon
    (I am on Windows Server 2003.) I see a couple of unusual entries in my routing table that I would like to remove:

        Network Destination  Netmask          Gateway      Interface    Metric
        XXX.27.44.1          255.255.255.255  127.0.0.1    127.0.0.1    20
        XXX.27.255.255       255.255.255.255  XXX.27.44.1  XXX.27.44.1  20

    All the "XXX"s are the same octet. I would strongly prefer NOT to clear the routing table, since this is a production server. Here is what I've tried:

        route delete XXX.27.44.1
        The route specified was not found.

        route delete XXX.27.44.1 mask 255.255.255.255 127.0.0.1 metric 20
        The route specified was not found.

        route delete XXX.27.255.255
        The route specified was not found.

        route delete XXX.27.255.255 mask 255.255.255.255 XXX.27.44.1 metric 20
        The route specified was not found.

    I also tried adding the routes in hopes that I could delete them:

        route add XXX.27.44.1 mask 255.255.255.255 127.0.0.1 metric 20
        The route addition failed: The parameter is incorrect.

        route add XXX.27.255.255 mask 255.255.255.255 XXX.27.44.1 metric 20
        The route addition failed: The parameter is incorrect.

    Bonus question: What do these entries do, and how did they get there?
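    One avenue worth trying (a hedged sketch, not verified on this box): route delete on Windows also accepts an explicit interface index, and stack-managed routes sometimes only match when it is given. The index 1 below is a placeholder; read the real one from the interface list at the top of route print.

        route print
        route delete XXX.27.44.1 mask 255.255.255.255 127.0.0.1 if 1

    On the bonus question: entries of exactly this shape (a /32 host route for a configured address via 127.0.0.1, plus a route for its broadcast address) are normally created by the TCP/IP stack itself whenever an IP address is bound to an interface, and they disappear when that address is removed, which would also explain why they resist manual route add/delete.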

  • Redmine plug-in fails at rake db:migrate_plugins

    - by Drew
    Hey, first post, so I hope I'm in the right place. While trying to install the Redmine plug-in 'Wiki Extensions', I keep getting stuck when I try to run the "rake db:migrate_plugins RAILS_ENV=production" command. I am moving servers and I'm a bit in over my head. I haven't found anything on Google that has helped me much, though I might have missed something. I have pasted in the output with --trace:

        (in /srv/www/vastpark.org/redmine)
        ** Invoke db:migrate_plugins (first_time)
        ** Invoke environment (first_time)
        ** Execute environment
        ** Execute db:migrate_plugins
        Migrating engines...
        Migrating acts_as_activity_provider...
        Migrating acts_as_attachable...
        Migrating acts_as_customizable...
        Migrating acts_as_event...
        Migrating acts_as_list...
        Migrating acts_as_searchable...
        Migrating acts_as_tree...
        Migrating acts_as_versioned...
        Migrating acts_as_watchable...
        Migrating awesome_nested_set...
        Migrating classic_pagination...
        Migrating coderay-0.9.2...
        Migrating gravatar...
        Migrating open_id_authentication...
        Migrating prepend_engine_views...
        Migrating redmine_wiki_extensions...
        == CreateWikiExtensionsComments: migrating ===================================
        -- create_table(:wiki_extensions_comments)
        rake aborted!
        An error has occurred, all later migrations canceled:

        Mysql::Error: Table 'wiki_extensions_comments' already exists: CREATE TABLE 'wiki_extensions_comments' ('id' int(11) DEFAULT NULL auto_increment PRIMARY KEY, 'wiki_page_id' int(11), 'key_word' varchar(255), 'user_id' int(11), 'comment' text, 'created_at' datetime, 'updated_at' datetime) ENGINE=InnoDB
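    A hedged sketch of the usual way out of "table already exists" during a plugin migration: an earlier failed run created the table but never recorded the migration as applied, so back the table up, drop it, and re-run. The database name redmine below is an assumption; use whatever database.yml points at.

        mysqldump -u root -p redmine wiki_extensions_comments > wiki_extensions_comments.sql
        mysql -u root -p redmine -e "DROP TABLE wiki_extensions_comments;"
        rake db:migrate_plugins RAILS_ENV=production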

  • Restarting shell script with &disown using Monit

    - by Solas Admin
    I have a shell script that runs a C++ backend mail system (PluginHandler). I need to monitor this process in Monit and restart it if it fails. The script:

        export LD_LIBRARY_PATH=/usr/local/lib/:/CONFIDENTAL/CONFIDENTAL/Common/
        cd PluginHandler/
        ./PluginHandler

    This script does not have a PID file and we run it by executing ./rundaemon.sh &disown. ./pluginhandler starts the process and starts logging into /etc/output/output.log. I stop the process by identifying the process ID with [ps -f | grep PluginHandler] and then killing the process. I can check the process in Monit just fine, but I think Monit is starting the process if it is not running; however, it can't do &disown, so the process ends as soon as it starts. This is the code in the monitrc file for checking this process:

        check process Backend matching "PluginHandler"
            if not exist then alert
            start "PATH/TO/SCRIPT/rundaemon.sh &disown"
            alert [email protected] only on {timeout} with mail-format {subject: "[BLAH"}

    I tried to stop the script from terminating by modifying it like the following, but this does not work either:

        export LD_LIBRARY_PATH=/usr/local/lib/:/home/CONFIDENTAL/production/CONFIDENTAL/Common/
        cd PluginHandler/
        (nohup ./PluginHandler &)
        return

    Any help writing proper Monit rules to resolve this issue would be greatly appreciated :)
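    A hedged sketch of the usual pattern (paths and the PID file location are placeholders): have the wrapper detach the daemon itself and write a PID file, then let Monit track the PID file instead of relying on shell-level disown:

        #!/bin/sh
        # rundaemon.sh -- detach PluginHandler and record its PID
        export LD_LIBRARY_PATH=/usr/local/lib/:/CONFIDENTAL/CONFIDENTAL/Common/
        cd /path/to/PluginHandler || exit 1
        nohup ./PluginHandler >> /etc/output/output.log 2>&1 &
        echo $! > /var/run/pluginhandler.pid

    and in monitrc:

        check process Backend with pidfile /var/run/pluginhandler.pid
            start program = "/PATH/TO/SCRIPT/rundaemon.sh"
            stop program = "/bin/sh -c 'kill `cat /var/run/pluginhandler.pid`'"
            alert [email protected] only on {timeout}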

  • How can I write automated tests for iptables?

    - by Phil Frost
    I am configuring a Linux router with iptables. I want to write acceptance tests for the configuration that assert things like: traffic from some guy on the internet is not forwarded, and TCP to port 80 on the webserver in the DMZ from hosts on the corporate LAN is forwarded. An ancient FAQ alludes to an iptables -C option which allows one to ask something like, "given a packet from X, to Y, on port Z, would it be accepted or dropped?" Although the FAQ suggests it works like this, for iptables (though maybe not for the ipchains it uses in its examples) the -C option does not seem to simulate a test packet running through all the rules, but rather checks for the existence of an exactly matching rule. This has little value as a test. I want to assert that the rules have the desired effect, not just that they exist. I've considered creating yet more test VMs and a virtual network, then probing with tools like nmap for effects. However, I'm avoiding this solution due to the complexity of creating all those additional virtual machines, which is really quite a heavy way to generate some test traffic. It would also be nice to have an automated testing methodology which can also work on a real server in production. How else might I solve this problem? Is there some mechanism I might use to generate or simulate arbitrary traffic, then know if it was (or would be) dropped or accepted by iptables?
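    One lighter-weight approach, sketched here under the assumption that hping3 is available on a test host on the relevant network segment: inject a crafted packet with a spoofed source, then read the per-rule packet counters on the router to see what actually matched. The addresses below are placeholders for "corporate LAN host" and "DMZ web server".

        iptables -Z FORWARD                              # zero the FORWARD chain counters
        hping3 -c 1 -S -p 80 -a 10.0.0.5 192.168.1.80    # one SYN, spoofed LAN source
        iptables -L FORWARD -v -n                        # which rule's counter moved?

    Wrapping that pair in a script per test case gives a crude but automatable assertion: the expected ACCEPT or DROP rule's counter incremented, everything else stayed at zero.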

  • Virtual Fileserver

    - by Sergei
    Hi, We are planning to move our production servers to the datacenter and virtualize the remaining servers in the process. The datacenter will have HP blades with vSphere on top. Currently we are using a Celerra NS20 as fileserver. Since the datacenter is using HP kit and an EVA 4400 as SAN, we cannot have the Celerra there, as EMC support for Celerra does not cover non-EMC arrays. I have searched for possible options and one of them was to have an HP NAS blade X3800sb instead of the Celerra. However, this seems like overkill to me. We are only using the Celerra for about 100 users and 50 servers, and I think the X3800sb could be a waste of resources. The other option would be to have a virtual fileserver as part of the vmware environment in the datacenter. We only need CIFS to be provided. The only option I can think of is Windows Storage Server. We had a bad experience with Windows servers used as fileservers (memory leaks, for one thing) in the past, and this was one of the reasons we moved to the Celerra. What are the other options? We need something as reliable as the Celerra with as many options as possible. For example, the Celerra has per-folder quotas, deduplication, dynamic volume allocation, automatic failover, VTLU, and replication. Also we would need to replicate the NAS data to the failover site. We could use block-level replication, SAN-to-SAN, but this would mean wasted bandwidth, as we need only a subset of folders to be replicated. We used CA XOsoft for Windows servers in the past, and the Celerra has an option for Celerra replication. Thank you very much in advance. Please ask me if I missed any details!

  • How to recover the Plesk database?

    - by Kau-Boy
    When I try to launch the Plesk administration page of my server I get the following error:

        ERROR: PleskMainDBException
        MySQL query failed: MySQL server has gone away

    The MySQL server is working well. However, it seems that the Plesk database is somehow corrupt, and any action on this database results in a restart of the mysql process, so even queries to other databases on the same MySQL server will be lost. If I try to connect to the Plesk database using phpMyAdmin, I can only see the number of tables the database had originally, but I am not able to open the tables listing; as soon as I try, the mysql process crashes again. Connecting to the database over ssh works. I can even run a SELECT statement against any table and get a result. I don't know if it is a Plesk error, an error in the psa database, or even the MySQL server. Can you give me any tips on how to recover the Plesk system? Should I try to repair the Plesk installation first? And if so, how can I do it, and will all my settings get lost doing it?

    EDIT: Trying to dump the database, I get the following error:

        mysqldump: Got error: 2013: Lost connection to MySQL server during query when using LOCK TABLES
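    Two hedged first steps that often get further with a crashing table (standard MySQL tooling, nothing Plesk-specific; psa is the conventional name of the Plesk database): dump without LOCK TABLES, and let mysqlcheck attempt a repair.

        mysqldump --single-transaction --skip-lock-tables psa > psa_backup.sql
        mysqlcheck --repair --databases psa

    If the crash persists for a specific table, running myisamchk on the offending .MYI files (with mysqld stopped) is the next conventional stop.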

  • How to configure mod_proxy_balancer to gracefully fail under high load

    - by bramp
    We have a system which has one Apache instance in front of multiple tomcats. These tomcats then connect to various databases. We balance the load to the tomcats with mod_proxy_balancer. Currently we are receiving 100 requests a second; the load on the Apache server is quite low, but due to database-heavy operations on the tomcats, the load there is roughly 25% (of what I estimate they can handle). In a few weeks there is an event happening and we estimate that our requests will jump significantly, maybe by a factor of 10. I'm doing everything I can to reduce the load on our tomcats, but I know we are going to run out of capacity, so I would like to fail gracefully. By this I mean that, instead of trying to deal with too many connections which all time out, I would like Apache to somehow monitor average response time, and as soon as the response time to Tomcat gets above some threshold, I would like an error page displayed. This means that users who are lucky still get a page rendered quickly, and those who are unlucky get an error page quickly, instead of everyone waiting far too long for their page, eventually everyone timing out, and the database being swamped with queries which are never used. Hopefully this makes sense, so I was looking for suggestions on how I could achieve this. Thanks
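    A hedged sketch of the closest built-in knobs (directive names are from the mod_proxy docs; exact availability depends on your Apache version, and failontimeout in particular needs a reasonably recent 2.4): cap how long Apache waits on a backend and serve a static 503 page once members are marked failed.

        <Proxy balancer://tomcats>
            BalancerMember http://tomcat1:8080 timeout=5 retry=30
            BalancerMember http://tomcat2:8080 timeout=5 retry=30
        </Proxy>
        ProxyPass /app balancer://tomcats/ failontimeout=On
        ErrorDocument 503 /busy.html

    Note this sheds load on a fixed timeout rather than a rolling average response time; true latency-based shedding usually means putting something like HAProxy, or a custom health check that flips a flag, in front of the pool.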

  • Shared files folder in Amazon Elastic Beanstalk environment

    - by por
    I'm working on a Drupal application, which is planned to be hosted in the Amazon Elastic Beanstalk environment. Basically, Elastic Beanstalk enables the application to scale automatically by starting additional web server instances based on predefined rules. The shared database is running on an Amazon RDS instance, which all instances can access properly. The problem is the shared files folder (sites/default/files). We're using git as SCM, and with it we're able to deploy new versions by executing $ git aws.push. In the background, Elastic Beanstalk automatically deletes ($ rm -rf) the current codebase from all servers running in the environment, and deploys the new version. The plan was to use S3 (s3fs) for shared files in the staging environment, and NFS in the production environment. We've managed to set up the environment to the extent that the shared files folder is properly mounted after a reboot. But... The problem is that, in this setup, the deployment of new versions on running instances fails because $ rm -rf can't remove the mounted directory, and as a result the entire environment goes down and we need to restart the environment, which isn't really an elegant solution. Question #1: what would be the proper way to manage shared files in this kind of deployment? Are you running such an environment? How did you solve the problem? By looking at the Elastic Beanstalk Hostmanager code (Ruby) there seems to be a way to hook our functionality (unmount if mounted in pre-deploy, and mount in post-deploy) into Hostmanager (/opt/hostmanager/srv/lib/elasticbeanstalk/hostmanager/applications/phpapplication.rb), but the scripts defined in the file (i.e. /tmp/php_post_deploy_app.sh) don't seem to be working. That might be because our Ruby skills are non-existent. Question #2: did you manage to hook your functionality into Hostmanager in a portable way (i.e. by not changing the core Hostmanager files)?
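    For what it's worth, a hedged sketch of the hook pair described above (the Drupal files path is a placeholder, and whether your Hostmanager version actually invokes these scripts is exactly the open question):

        #!/bin/sh
        # /tmp/php_pre_deploy_app.sh -- unmount so rm -rf of the old release succeeds
        mountpoint -q /var/app/current/sites/default/files \
            && umount /var/app/current/sites/default/files
        exit 0

    and the post-deploy counterpart simply remounts, assuming an fstab entry exists:

        #!/bin/sh
        # /tmp/php_post_deploy_app.sh -- remount the shared files dir
        mount -a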

  • Resolve another domain from current AD domain

    - by faulty
    We have two AD domains set up in our office. The first is the primary domain for our office and Exchange. The second one is for development use, to simulate the production environment of our clients. Both domains are hosted on Windows 2008 R2 Enterprise. We, the development team, have no access to the office domain other than for login and email purposes. DNS is running on the PDC of both domains. Neither domain uses a public domain name. Now, our machines are joined to the development domain and we use Outlook to access our office's Exchange. We've added DNS entries for both domains. From time to time we have problems resolving our office domain (i.e. during Outlook login), and then we need to edit our NIC's DNS settings to list only the DNS server from our office and flush DNS, then switch back once it's able to resolve. Is there a permanent solution for this scenario, like specifying that the office domain be resolved via another DNS server when requested from the development domain? Thanks
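    This sounds like the textbook case for a conditional forwarder on the development domain's DNS server. A hedged sketch (the zone name and forwarder IP are placeholders for your office domain and its DNS server):

        dnscmd /ZoneAdd office.local /Forwarder 10.0.0.10

    The same thing can be configured in the DNS Manager GUI under the "Conditional Forwarders" node on Server 2008 R2; with it in place, clients keep the development DNS server as their only resolver and the office domain still resolves.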

  • VMware Converter performance

    - by bellocarico
    Hello, I have a question about my test lab. It's more to understand the concept than to apply this in production: I have an ESXi host with a few Linux/Windows VMs configured, and I'd like to use VMware Converter to create backups. To speed up the process I decided to create a Windows VM on the same ESXi host, where I've installed Windows 7 and VMware Converter. The host has a gigabit card, but it's currently connected to a 100Mb FD port. Windows 7 sees a 1Gb card connected. When I do the backup using VMware Converter I specify the host IP as source and destination, so I thought the copy would be faster than going across the network from my laptop. Well, to cut a long story short: I get dreadful performance (4Mb/sec). I'm a bit confused by this because, despite the host uplink running at 100Mb, communication between VMs and the host shouldn't (correct me if I'm wrong) have any such limitation. I did tweak Windows 7 to optimise network performance but got just a little improvement. I still need 4 hours to back up a 50Gb (thin) VM. Additionally I wanted to ask: would jumbo frames help with this? I know that jumbo frames have to be supported end to end, and the network switch where the host is currently connected doesn't support them, but I was wondering: 1) Does the ESXi host support jumbo frames at all? 2) Can I enable them somehow? 3) If I do so, I guess bulk transfers between VMs and the host would improve, but would this affect the communication going through the real switch, as that doesn't do jumbo? Thanks for reading
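    On questions 1 and 2: ESXi of that era does support jumbo frames, enabled per vSwitch from the CLI. A hedged sketch (the vSwitch name is a placeholder, and the guest vNICs need their MTU raised too, e.g. on vmxnet3):

        esxcfg-vswitch -m 9000 vSwitch0    # set the vSwitch MTU to 9000
        esxcfg-vswitch -l                  # list vSwitches and confirm the MTU

    Traffic that stays on the same vSwitch never touches the physical switch, so intra-host jumbo traffic shouldn't be affected by the switch's lack of support; anything leaving the uplink still has to fit that path's MTU.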

  • Moving Zend Framework 2 from apache to nginx

    - by Aleksander
    I would like to move a site that uses Zend Framework 2 from Apache to Nginx. The problem is that the site has 6 modules, and Apache handles them via aliases defined in httpd-vhosts.conf:

        #httpd-vhosts.conf
        <VirtualHost _default_:443>
            ServerName localhost:443
            Alias /develop/cpanel "C:/webapps/develop/mil_catele_cp/public"
            Alias /develop/docs/tech "C:/webapps/develop/mil_catele_tech_docs/public"
            Alias /develop/docs "C:/webapps/develop/mil_catele_docs/public"
            Alias /develop/auth "C:/webapps/develop/mil_catele_auth/public"
            Alias /develop "C:/webapps/develop/mil_web_dicom_viewer/public"
            DocumentRoot "C:/webapps/mil_catele_homepage"
        </VirtualHost>

    In httpd.conf, DocumentRoot is set to C:/webapps. Sites are available at, for example, localhost/develop/cpanel; the framework handles further routing. In Nginx I was able to make only one site available, by specifying root C:/webapps/develop/mil_catele_tech_docs/public; in the server block. It works only because the docs module doesn't depend on auth like the others, and that site was at localhost/. In the next attempt:

        root C:/webapps;
        location /develop/auth {
            root C:/webapps/develop/mil_catele_auth/public;
            try_files $uri $uri/ /develop/mil_catele_auth/public/index.php$is_args$args;
        }

    Now when I enter localhost/develop/cpanel it gets to the correct index.php but can't find any resources (css/js files). I have no idea why the reference paths in the browser's GET requests changed to https://localhost/css/bootstrap.css from https://localhost/develop/auth/css/bootstrap.css as they were on Apache. This root directive seems not to be working. Nginx handles PHP using FastCGI:

        location ~ \.(php|phtml)?$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param APPLICATION_ENV production;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

    I googled the whole day and found nothing useful. Can someone help me make this configuration work like it did on Apache?
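    A hedged sketch of the Alias-equivalent in Nginx: inside a prefix location, alias (unlike root) replaces the matched URI prefix with the target path, which is what the Apache Alias lines above were doing; root appends the full URI to the path instead. Repeated per module, something like:

        location /develop/cpanel {
            alias C:/webapps/develop/mil_catele_cp/public;
            index index.php;
            try_files $uri $uri/ /develop/cpanel/index.php$is_args$args;
        }

    Be aware that try_files combined with alias has known quirks in some Nginx versions, so the internal fallback URI may need tuning.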

  • Write permissions LAMP (Debian Lenny)

    - by letseatfood
    I am working on a PHP script that transfers files using FTP functions. It has always worked on my production server (which is a hosting service). The development server I have just set up (I am a novice to servers) is Debian Lenny with Apache2, PHP5, and MySQL5. The file transfer works correctly, but once the file has been written to the server, it has permissions of 600. This makes it impossible for me to view the file (JPEG) in the web browser, as permission is denied. I have scoured the internet and even broken my server installation and reinstalled it trying to figure this out (which has been fun, nonetheless!). I know it is unwise to set 777 permissions on publicly accessible files, but even that will not solve the problem. The only thing that works is if I chmod 777 thefile.jpg after it has been transferred, which is not a working solution. I tried changing the owner of my site files to www-data per this post, but that also does not work. My user is mike, and it still does not work whether the owner of the files is mike or root. Would somebody point me in the right direction? Thanks! And, of course, let me know if I can clarify anything.
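    One hedged guess worth checking: a 600 on freshly uploaded files is usually the FTP daemon's umask, not PHP's doing. If the box runs vsftpd (an assumption; check which daemon is actually installed), the relevant knob looks like this, and proftpd has an equivalent Umask directive. Alternatively, the PHP script itself can call ftp_chmod() right after each upload.

        # /etc/vsftpd.conf  (assumes vsftpd; restart the daemon afterwards)
        local_umask=022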

  • Ubuntu karmic, chroot, Ubuntu server 8.04 LTS and Plesk

    - by morpheous
    I am pretty new to the Linux environment, and although I have been using the Ubuntu Karmic desktop for development work, I have always preferred the GUI tools and very rarely use the terminal. I am about to launch a site (a VPS) which will be running Ubuntu Server 8.04 LTS and Plesk. I want to install both the server (8.04 LTS) and Plesk in a 'chroot' on my Karmic desktop. I also want to test-install some packages I have written for the server, and generally familiarize myself with the environment before I sign up to the hosting service. I will be running a streamlined server (no GUI) with only the following software: PHP, Python, MySQL, PostgreSQL, Apache, Plesk, and third-party packages I developed. In terms of mail, I think the hosting provider provides a mail service, so I don't know if I will need to run my own mail server. I would like some advice on the following: How can I install Ubuntu 8.04 Server LTS in a chroot? How can I install Plesk in Ubuntu 8.04 Server LTS (from the command line)? Can anyone recommend how I back up the remote server when I go live (i.e. what tools to use)? I will need to back up both databases, as well as some config data and installation packages.
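    On the first question, a hedged sketch using debootstrap (the target directory is a placeholder; hardy is the 8.04 codename, and the mirror URL may need to change to the old-releases archive depending on when you try this):

        sudo apt-get install debootstrap
        sudo debootstrap hardy /srv/chroot/hardy http://archive.ubuntu.com/ubuntu/
        sudo chroot /srv/chroot/hardy /bin/bash

    Plesk would then be installed from inside the chroot with its command-line autoinstaller, per the vendor's own docs.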

  • Oracle logical standby fails with ORA-01919

    - by DCookie
    I have an Oracle logical standby database being managed via Data Guard. Just this morning the redo apply process began failing with an ORA-01919 error, indicating one of our application roles did not exist. However, I can see the role on both primary and standby databases. We also have a physical standby that has long since applied the redo where this is happening on the logical, without issue. I have opened an SR with Oracle. I was wondering if anyone out there has seen this before. I guess I should mention: Oracle 10.2.0.4, Win2003 Server SP2.

    UPDATE: So far, Oracle Support has not provided an answer. I thought I'd post here what I have learned so far. It appears that a grant of DBA on the primary host to a role works fine for users granted the role. It does not work on the logical standby. IOW:

        create role TEST;
        grant dba to TEST;
        grant TEST to auser;
        connect auser
        set role TEST;
        grant <existing role> to <existing user>;

    This works on the primary instance but fails on the logical. A workaround appears to be to grant each role on the primary to the role TEST with admin option on the logical standby:

        grant <existing role> to TEST with admin option;  -- do this on the logical standby

    Then the command works on the logical standby.

  • IIS Digest repeatedly asking for authentication

    - by David Budiac
    I have a development copy of an ASP.NET intranet site checked out and running on my local machine. We're using Digest authentication to allow users to log in using their Active Directory accounts. On my development copy only, Digest sometimes repeatedly prompts for login information, usually ~9 times per page request. After repeatedly logging in (it also works to cancel out of 8 of the 9 prompts), I can use the site as normal. I cannot pinpoint what is triggering the issue. Sometimes this problem triggers on the next page request, sometimes after I edit/save/refresh a page, and sometimes it doesn't happen at all. Each prompt triggers several logon (Event ID 4624 & 4672) security events in the Event Viewer. Shortly after each burst of logon events, I'll see a burst of logoff events. A co-worker who has a nearly identical setup (Windows 7, IIS 7) is not experiencing the issue. Our production copy (running on a different server) also does not experience the issue. We've tried to compare our settings in IIS, not really finding any differences. I'm using Chrome, but I've experienced the issue in other browsers.

  • How to discover true identity of hard disk?

    - by F21
    I have 2 fake external hard drives that claim to have a storage capacity of 2TB. I pulled the enclosures apart and the hard drives seem to be refurbished ones with their labels replaced by Barracuda LP 2000 GB labels (the serial numbers on both labels are the same). Interestingly, one of the drives has 160G written on it in pencil. However, the counterfeiters seem to have done something to the firmware, because CrystalDiskInfo reports them as 2TB ST2000DL003 drives. I then deleted the 1.81 TB partition in Windows disk management and tried to create a new one and format it. Once I get to this point, the drives make some noise that is common to dying drives. I am not interested in using these drives for production, but I am interested in finding their true identity (manufacturer, serial number, model number, etc.) and restoring them to their factory defaults with the right capacity. Can this be done without any special equipment? This would be an interesting learning exercise. Some pictures of the drives in question: Here are the screens from CrystalDiskInfo: Note the serial numbers are the same (these are 2 different drives!). How is this done? Did they have to tamper with the controller board? I would assume that changing the firmware doesn't change the serial number at all.
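    A hedged starting point that needs no special equipment: boot a Linux live CD and read what the drive itself reports over ATA IDENTIFY, since different fields sometimes disagree on a doctored drive (smartmontools/hdparm assumed available; /dev/sdb is a placeholder):

        smartctl -i /dev/sdb    # model, serial, firmware, capacity per the drive
        hdparm -I /dev/sdb      # the full ATA IDENTIFY block

    If the capacity was faked with an HPA or DCO, hdparm -N /dev/sdb and hdparm --dco-identify /dev/sdb will show the native versus advertised sector counts, and those can be reset; an identity rewritten at the firmware/controller level, though, is generally not recoverable with host-side tools.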

  • Best tool for monitoring backups, etc. and trending statistics from that data

    - by Randy Syring
    I have done some research on Nagios, OpenNMS, and Zenoss but am not confident that I have found what I am looking for. The main driving force for me right now is being able to monitor backups. This includes MySQL, MSSQL, and eventually some file system backups. We have a tool that wraps the backup process for these different systems and collects statistics, items like:

        - number of databases backed up
        - size of db backup file
        - size of db backup file compressed
        - time to make backup
        - time to zip file

    I want to be able to A) have notifications if the jobs are not run according to schedule, B) set thresholds on the statistics which would trigger notifications, and C) trend and graph the statistics. I am planning on sending this information to the monitoring application through an HTTP POST. Or, the monitoring application could pull it from a log file as well. However, we will have other processes with other "arbitrary" (from the monitoring system's perspective) statistics that we will want to monitor and trend, so flexibility is very important. The tool or tools should also be able to do general monitoring and trending of network interfaces, server load, etc. Once we get the backup monitoring in place, we will want to include those items as well. Thanks.
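    As one concrete illustration of the push model (hedged: this assumes you land on Zabbix, whose trapper items accept arbitrary keys; the host and key names are placeholders):

        zabbix_sender -z monitor.example.com -s backuphost \
                      -k backup.mysql.size_bytes -o 1073741824

    Nagios has a similar passive-check path via NSCA, and both can alert on freshness, i.e. "no value received on schedule", which covers the missed-job case in requirement A.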

  • SQL Server Database In Single User Mode after Failover

    - by jlichauc
    Here is a weird situation we experienced with a SQL Server 2008 Database Mirroring failover. We have a pair of mirrored databases running in high-availability mode, and both the principal and mirror showed as synchronized. As part of some maintenance I triggered a manual failover of the principal to the mirror. However, after the failover the principal was now in single-user mode instead of the expected "Principal/Synchronized" state we usually get. The database had been in multi-user mode on the previous principal before this happened. We ended up stopping all applications, restarting the SQL Server instances, and executing "ALTER DATABASE ... SET MULTI_USER" to bring the database back to the expected "Principal/Synchronized" state in multi-user mode. Question: does anyone know where SQL Server stores information about whether a database should be in single-user mode or not? I'm wondering if there is some system database or table that has this setting recorded somewhere. In particular, we once had an incident with the database on the original principal (the one I was failing over to) where, when trying to detach the database, it was put into single-user mode. I'm wondering if that setting is cached somewhere and is the reason that SQL Server put it back into single-user mode after a failover.
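    On the "where is it stored" part: the user-access setting is per-database metadata surfaced through the sys.databases catalog view rather than a separate system table, which is consistent with it riding along through a failover. A hedged way to inspect it (the server name is a placeholder):

        sqlcmd -S myserver -E -Q "SELECT name, user_access_desc FROM sys.databases"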

  • OS X: Storing MySQL data securely, on an encrypted FileVault image using a soft link

    - by GJ
    I am trying to get a macports-installed MySQL to use a data directory stored inside my FileVault-protected home dir. I used

        sudo cp -a /opt/local/var/db/mysql5 ~/db/

    (the -a to ensure file permissions remain intact) and then replaced the original mysql5 directory with a soft link:

        sudo ln -s ~/db/mysql5 /opt/local/var/db/mysql5

    However, when I now try to start MySQL it fails. It follows the soft link at least to the extent that it modifies some files in the ~/db/mysql5 dir, notably the error log, which gets this appended to it:

        110108 15:33:08 mysqld_safe Starting mysqld daemon with databases from /opt/local/var/db/mysql5
        110108 15:33:08 [Warning] '--skip-locking' is deprecated and will be removed in a future release. Please use '--skip-external-locking' instead.
        110108 15:33:08 [Warning] '--log_slow_queries' is deprecated and will be removed in a future release. Please use ''--slow_query_log'/'--slow_query_log_file'' instead.
        110108 15:33:08 [Warning] '--default-character-set' is deprecated and will be removed in a future release. Please use '--character-set-server' instead.
        110108 15:33:08 [Warning] Setting lower_case_table_names=2 because file system for /opt/local/var/db/mysql5/ is case insensitive
        110108 15:33:08 [Note] Plugin 'FEDERATED' is disabled.
        110108 15:33:08 [Note] Plugin 'ndbcluster' is disabled.
        /opt/local/libexec/mysqld: Table 'mysql.plugin' doesn't exist
        110108 15:33:08 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
        110108 15:33:09 InnoDB: Started; log sequence number 4 1596664332
        110108 15:33:09 [ERROR] /opt/local/libexec/mysqld: Can't create/write to file '/opt/local/var/db/mysql5/mac.local.pid' (Errcode: 13)
        110108 15:33:09 [ERROR] Can't start server: can't create PID file: Permission denied
        110108 15:33:09 mysqld_safe mysqld from pid file /opt/local/var/db/mysql5/gPod.local.pid ended

    I can't see why MySQL can't create the pid file, since manually creating it using the _mysql user succeeds (sudo -u _mysql touch mac.local.pid from inside ~/db/mysql5). Any ideas how to resolve this?
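    A hedged thing to check first: Errcode 13 on a path under a home directory is often a directory-traversal problem rather than ownership of the datadir itself, since a FileVault home is typically mode 700 and the _mysql daemon user cannot descend into it. A minimal test and fix, with paths as above:

        sudo -u _mysql ls ~/db/mysql5      # fails if a parent dir blocks traversal
        chmod o+x ~ ~/db                   # grant execute (traverse) only, not read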

  • IIS 7.0 - responses throttled to 500ms blocks?

    - by Julia Hayward
    Scenario: an ASP.NET MVC web app sitting on my local machine (Vista Ultimate, IIS 7.0), nothing going on except one user (me) logged in and viewing an index page. The page includes 9 dynamic images drawn from the underlying DB and returned from a controller action. I have got the actual processing time for these images down to 15ms each. Turn on Firebug and watch the page load. What I see is 9 requests for images firing off together – no surprise – but four come back to me almost immediately; two more after 0.5s; another after 1s; then at 1.5s and 2s. Logging on the server side suggests the individual responses are still only taking 15ms. So it appears IIS is queueing things up into 500ms chunks. (Repeating the experiment produces different results, but each time the images return in similar blocks – you might get three in the first group, then three at 0.5s, two at 1s, etc., for example – and it's always at 500ms intervals, not anything else.) It's also repeatable cross-browser, and it's not repeatable with other forms of content. I haven't found any particular mention of this problem out there, so I'm sort of assuming it's not an IIS bug, so is it: i) IIS on desktop OSs deliberately does it, to make you use server OSs in production? ii) There is some magical setting that has eluded me for as long as I've known IIS? iii) Something peculiar to MVC or SQL Server 2008? Or something else?

  • Vyatta server reboots by itself

    - by Fernando
    I have an issue regarding some hardware; maybe you can help me. First, I set up a Supermicro SuperServer SYS-5016I-NTF with an Intel Xeon X3470 and 4 GB of RAM, with a Hotlava Tambora 64G4 card with the Intel 82599EB chipset and 4x10G SFP+ ports. I installed Vyatta Community Edition 6.3 and used it as a router making BGP connections with 2 operators. No load at all; temperature ranges normal. But the issue is that it reboots by itself in a random way. Not very often, once every few days, but that is unacceptable for production purposes. So I tried it on different hardware and installed Vyatta Community Edition 6.3 on a Dell PowerEdge 2950, with a Xeon(R) E5345 @ 2.33GHz and 4 GB of RAM. Same Vyatta configuration as the Supermicro server, with the same Hotlava card model (I bought two of them). Well, I have reboots with this equipment as well, at the same frequency as above. I have checked syslog: no strange logs until the boot process starts to be logged, so it seems the server reboots suddenly. I have installed the latest driver for the chipset of the Hotlava card. The servers are placed in a datacenter with UPS. So finally, two things in common in both servers: the Hotlava card. Anyone with issues with this card, or the chipset? Could it be this card? And Vyatta 6.3 Community Edition, which I don't think is the problem; it's a regular Debian with packages to glue together different services. Or maybe it's something I am missing. Any ideas or suggestions? Thank you very much... Fernando
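    Two hedged avenues for chasing a silent reboot like this (commands assume the tools are installed, and that the board has a BMC, which keeps its own event log independently of the OS):

        ipmitool sel elist              # BMC event log: power, thermal, ECC events
        apt-get install mcelog          # record machine-check exceptions, if any

    If both logs stay empty across a reboot, a sudden power cut or a hard lockup in a NIC driver becomes more likely than a kernel panic, which would fit the "nothing in syslog" pattern.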

  • FastCGI Error when installing PHP on IIS7.5

    - by ytoledano
    I'm trying to install MediaWiki on a Win2008 R2 server, but can't manage to install PHP. Here's what I did:

        1. Grabbed a Zip archive of PHP and unzipped it into C:\PHP.
        2. Created two subdirs: C:\PHP\sessiondata and C:\PHP\uploadtemp.
        3. Granted modify rights to the IUSR account for the subdirs.
        4. Copied php.ini-production as php.ini.
        5. Edited php.ini and made the following changes:

            fastcgi.impersonate = 1
            cgi.fix_pathinfo = 1
            cgi.force_redirect = 0
            open_basedir = "c:\inetpub\wwwroot;c:\PHP\uploadtemp;C:\PHP\sessiondata"
            extension = php_mysql.dll
            extension_dir = "./ext"
            upload_tmp_dir = C:\PHP\uploadtemp
            session.save_path = C:\php\sessiondata

        6. Installed the Web Server role with the CGI and HTTP Redirection options.
        7. In Handler Mappings, added a Module Mapping with the following values: Path = *.php, Module = FastCgiModule, Executable = c:\php\php-cgi.exe, Name = PHP via FastCGI.
        8. Created a test page in the wwwroot directory, phpinfo.php, with the contents: <?php phpinfo(); ?>
        9. Browsed to http://localhost/phpinfo.php

    But then I get:

        HTTP Error 500.0 - Internal Server Error
        An unknown FastCGI error occured

        Detailed Error Information
        Module: FastCgiModule
        Notification: ExecuteRequestHandler
        Handler: PHP via FastCGI
        Error Code: 0x800736b1
        Requested URL: http://localhost:80/phpinfo.php
        Physical Path: C:\inetpub\wwwroot\phpinfo.php
        Logon Method: Anonymous
        Logon User: Anonymous

    Does anyone know what I'm doing wrong here? Thanks.
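    For what it's worth, error code 0x800736b1 is the Windows side-by-side error (ERROR_SXS_CANT_GEN_ACTCTX), which with PHP usually means the Visual C++ runtime the PHP build was compiled against is missing. A hedged sanity check before touching IIS:

        :: run the binary directly; if this also fails with a side-by-side dialog,
        :: install the VC++ redistributable matching your PHP build (e.g. VC9 for
        :: PHP 5.3 builds) or switch to a build matching a runtime you already have
        C:\PHP\php-cgi.exe -v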

  • How to move Mdadm RAID drive (EBS based) to different AWS Instance

    - by Stanley
    We have a media-rich web application that is hosted on AWS. We have several web servers and we have an NFS server. On the NFS server (a Linux server) we have several EBS volumes that are mounted, and we've used mdadm to combine the mounted volumes into a single RAID volume. The web servers simply access the NFS storage through a mount point. Amazon has now let us know that they will be performing power maintenance on this server in a couple of days' time. Since all our media is on it, this would render our site unusable for the hours while Amazon is working on it. We want to try and prevent this downtime. I was thinking that we could prevent server downtime by setting up a new server temporarily, attaching the EBS drives (RAID volume) to that server, and having our web servers point there during the maintenance. This is a very high-risk operation since it involves several terabytes of our production data. What would be the safe way to move over our logical RAID drive (md0) to a new Amazon instance? I was hoping that I could start by building the new server, mounting the EBS volumes, and assembling the RAID partition using mdadm --assemble --scan before unmounting from the existing instance, so that I could first test that everything works while having it mounted on two instances at the same time, but I don't believe that is possible with the way that filesystems work. "How do I move a Linux software RAID to a new machine?" suggests a way to move drives, but isn't really a cloud-based question. Perhaps there are simpler ways to prevent system downtime with our solution being hosted on the cloud? I have considered taking an EBS snapshot, but that tries to replicate all the many terabytes of mounted storage, so this is not a practical solution. Any ideas?
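    A hedged sketch of the cut-over itself (volume IDs, instance ID, and device names are placeholders; EBS volumes can only be attached to one instance at a time, which is why the parallel-mount test described above isn't possible):

        # on the old instance: quiesce NFS clients, unmount, stop the array cleanly
        umount /export/media
        mdadm --stop /dev/md0

        # move each member volume (classic EC2 API tools syntax)
        ec2-detach-volume vol-aaaa1111
        ec2-attach-volume vol-aaaa1111 -i i-newserver -d /dev/sdf

        # on the new instance: reassemble from the superblocks and remount
        mdadm --assemble --scan
        mount /dev/md0 /export/media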

  • Linux - real-world hardware RAID controller tuning (scsi and cciss)

    - by ewwhite
    Most of the Linux systems I manage feature hardware RAID controllers (mostly HP Smart Array). They're all running RHEL or CentOS. I'm looking for real-world tunables to help optimize performance for setups that incorporate hardware RAID controllers with SAS disks (Smart Array, Perc, LSI, etc.) and battery-backed or flash-backed cache. Assume RAID 1+0 and multiple spindles (4+ disks). I spend a considerable amount of time tuning Linux network settings for low-latency and financial trading applications. But many of those options are well-documented (changing send/receive buffers, modifying TCP window settings, etc.). What are engineers doing on the storage side? Historically, I've made changes to the I/O scheduling elevator, recently opting for the deadline and noop schedulers to improve performance within my applications. As RHEL versions have progressed, I've also noticed that the compiled-in defaults for SCSI and CCISS block devices have changed as well. This has had an impact on the recommended storage subsystem settings over time. However, it's been a while since I've seen any clear recommendations. And I know that the OS defaults aren't optimal. For example, it seems that the default read-ahead buffer of 128kb is extremely small for a deployment on server-class hardware. The following articles explore the performance impact of changing read-ahead cache and nr_requests values on the block queues:

        http://zackreed.me/articles/54-hp-smart-array-p410-controller-tuning
        http://www.overclock.net/t/515068/tuning-a-hp-smart-array-p400-with-linux-why-tuning-really-matters
        http://yoshinorimatsunobu.blogspot.com/2009/04/linux-io-scheduler-queue-size-and.html

    For example, these are suggested changes for an HP Smart Array RAID controller:

        echo "noop" > /sys/block/cciss\!c0d0/queue/scheduler
        blockdev --setra 65536 /dev/cciss/c0d0
        echo 512 > /sys/block/cciss\!c0d0/queue/nr_requests
        echo 2048 > /sys/block/cciss\!c0d0/queue/read_ahead_kb

    What else can be reliably tuned to improve storage performance? I'm specifically looking for sysctl and sysfs options in production scenarios.
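    One practical footnote, sketched here as a hedged suggestion rather than a recommendation: values echoed into /sys are lost on reboot, so production boxes usually persist them somewhere that runs at boot. The simplest era-appropriate spot on RHEL/CentOS is rc.local, mirroring the values above (a udev rule is the cleaner alternative when devices come and go):

        # appended to /etc/rc.local so the tuning survives reboots
        echo noop > /sys/block/cciss\!c0d0/queue/scheduler
        echo 512 > /sys/block/cciss\!c0d0/queue/nr_requests
        echo 2048 > /sys/block/cciss\!c0d0/queue/read_ahead_kb
        blockdev --setra 65536 /dev/cciss/c0d0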
