Search Results

Search found 30804 results on 1233 pages for 'hardware test'.


  • migrating puppet clients to a new puppet master (old puppet master server gone, only using backup)

    - by user47650
    My puppet master server had a hardware failure, and I have restored it to another box. However, this box has different hardware and a different hostname. If I restore the existing /etc/puppet directory to the new server, the puppetmaster will not start, failing with the following error:

        # puppetmasterd --debug --verbose
        Could not prepare for execution: Retrieved certificate does not match private key; please remove certificate from server and regenerate it with the current key

    So what steps do I need to take to allow the new puppetmaster to start, and to generate a new puppetmaster certificate using the old CA? Also, will the puppet clients actually report in to a different puppet server using a server certificate that has been generated with the old CA?
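    A hedged sketch of the usual recovery sequence for Puppet of this vintage (the puppetca tool that shipped alongside puppetmasterd); the hostnames, init script name, and ssldir layout are assumptions, and the restored CA files must already be in place under the ssldir:

        # stop the master, then discard the old master certificate whose key no longer matches
        /etc/init.d/puppetmaster stop
        puppetca --clean oldmaster.example.com

        # have the restored CA issue a certificate for the new hostname
        puppetca --generate newmaster.example.com

        # start the master under the new certname
        puppetmasterd --certname newmaster.example.com

    As long as the clients still trust the restored CA (their local ca.pem is unchanged), they should accept the new server's certificate, provided the server name they connect to matches a name in that certificate.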


  • Windows 7 randomly installs an "Unknown Device" successfully

    - by Amazed
    Rarely (several days to weeks between occurrences) and seemingly at random, I get a balloon notification from Windows 7 (x64 SP1 Home Premium) that it is installing hardware for me. Whatever is being installed does so without error. However, no new hardware has been installed or plugged in! When I click the balloon it doesn't give me any useful information. Looking in the event log, I find this entry:

        Event ID: 20001
        Source: UserPnp
        Task Category: 7005
        Message: Driver Management concluded the process to install driver FileRepository\usb.inf_amd64_neutral_153b489118ee37b8\usb.inf for Device Instance ID USB\VID_0000&PID_0000\6&3AF9A177&0&0060&&02 with the following status: 0x0.

    It appears to be USB related. My motherboard has both USB 2.0 and 3.0 controllers. My keyboard and mouse are plugged into the 2.0 ports, and the data/recharge cable for a tablet (but not the tablet itself) was plugged into the 3.0 port. No other USB devices have been attached for several days/reboots. Why is Windows doing this?
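    One way to see how often this has actually been happening is to pull the recent UserPnp installation events from the System log. A hedged one-liner — the provider name Microsoft-Windows-UserPnp is the usual one behind the "UserPnp" source shown above, but treat it as an assumption:

        wevtutil qe System /q:"*[System[Provider[@Name='Microsoft-Windows-UserPnp'] and (EventID=20001)]]" /f:text /c:10 /rd:true

    The Device Instance IDs in the resulting messages should reveal whether it is always the same phantom USB device (VID_0000&PID_0000 suggests a device that never enumerates properly) being reinstalled.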


  • Windows 2003 IIS FTP Server Migration w/ User Accounts

    - by Brad
    I'm trying to figure out the best way to migrate an FTP server from old hardware to new hardware. The server is on a domain, but not all the users set up on the server (to use FTP) are domain accounts; some are local to the server. For example, I have users both ways:

        domain\username
        machinename\username

    The new machine name will be different. So I need to copy all the files, with permissions intact, from the old server to the new server. Then I need to convert all the user accounts from the old server to the new server. Then I need to change the file permissions so that they are no longer oldserver\username but newserver\username. Can this be accomplished entirely with CACLS? Is there an easy way that perhaps I'm missing?
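    CACLS can grant and revoke rights, but it cannot re-map one account to another. A hedged sketch of the usual two-step approach with tools of the Windows Server 2003 era — robocopy to carry the ACLs across, then SubInACL (from the downloadable resource kit tools) to rewrite the local-account entries. The server names, share, path, and account name are all placeholders:

        :: copy data plus security descriptors (NTFS permissions, owner, auditing)
        robocopy \\OLDSERVER\d$\ftproot D:\ftproot /E /COPYALL /R:1 /W:1

        :: rewrite ACL entries from an old local account to the new one
        :: (run once per local account, after re-creating it on the new server)
        subinacl /subdirectories "D:\ftproot\*" /replace=OLDSERVER\ftpuser=NEWSERVER\ftpuser

    Domain accounts need no rewriting, since their SIDs are unchanged by the move.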


  • Virtualizor + VPS Backup (Bare Metal Restore capable) Using rsync 3

    - by Gaia
    I am using Virtualizor to manage 3 Xen VPSes. The hardware node and each VPS run CentOS 5.x. My backup needs are as follows:

    1) I need to be able to bare-metal restore the entire hardware node, excluding the VPSes (which would be restored via #2 below).

    2) I need a complete backup of each VPS, ideally one that can be deployed on any other Xen host if the need arises. Naturally, I would also need to use this backup to restore an entire VPS to an earlier state within the same host.

    Which folders does rsync need to back up in order to accomplish the above? The rsync specialists aren't sure of it either. Thanks
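    A hedged sketch of the node side of requirement #1, assuming the guest disks live as image files under a single directory; if they are LVM volumes instead, rsync cannot capture them consistently while running, and snapshots or xm save images would be needed for #2. The exclude list and destination are assumptions:

        # bare-metal backup of the hardware node, skipping pseudo-filesystems
        # and the guest images (handled by the per-VPS backup instead)
        rsync -aHAXv --numeric-ids \
            --exclude="/proc/*" --exclude="/sys/*" --exclude="/dev/*" \
            --exclude="/tmp/*" --exclude="/mnt/*" \
            --exclude="/path/to/vps/images/*" \
            / backupuser@backuphost:/backups/node/

    The -H, -A, and -X flags (hard links, ACLs, xattrs) want rsync 3.0+ on both ends, which matches the rsync 3 in the title; --numeric-ids matters for a bare-metal restore so UIDs aren't remapped.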


  • Which hypervisors support non-homogeneous clusters?

    - by edude05
    I've been using Citrix XenServer for a while on a few machines that don't support hardware virtualization, as a test for various small servers. I recently have been experimenting with moving the PV VMs between machines, but XenServer gives me errors that roughly say I need homogeneous hardware for this to work. Because of this I haven't been able to set up XenMotion or any of the nice features that come with server pooling in XenServer. I'm considering moving away from XenServer; however, I can't seem to find a hypervisor that explicitly supports non-homogeneous clusters. On a side note, we do have a few identically configured Dell 1950s that haven't had any VM solution set up on them yet, so if we can find a solution that allows us to move PVs to those as well, that would be great. Non-free solutions are OK too. What hypervisor will allow this? Thanks!


  • How can I broadcast video live (preferably wirelessly)?

    - by Blixt
    Update: I've gotten plenty of feedback on the software solutions, and the unanimous recommendation for a handheld recording device seems to be a mobile phone (I was hoping there'd be some webcam-like device with wifi support...). I'd appreciate more hardware suggestions now. That is, which mobile phones have good video recording quality (and battery time)?

    I'm looking for a solution to broadcast video live on the internet from a location (an apartment), with a device that can be carried around. What options are there? I'm looking for complete solutions (i.e., what hardware to use, what software to use, and how it should all be set up). Currently, I have my mobile phone (Nokia N95 8GB) with Qik installed, connected to wifi, but unfortunately the videos are of poor quality (especially since it's indoors with poor lighting), plus the battery gets used up quickly.


  • Postfix SASL Authentication using PAM_Python

    - by Christian Joudrey
    Cross-post from: http://stackoverflow.com/questions/4337995/postfix-sasl-authentication-using-pam-python

    Hey guys, I just set up a Postfix server in Ubuntu and I want to add SASL authentication using pam_python. I've compiled pam_python.so and made sure that it is in /lib/security. I've also created the /etc/pam.d/smtp file and added:

        auth required pam_python.so test.py

    The test.py file has been placed in /lib/security and contains:

        #
        # Duplicates pam_permit.c
        #
        DEFAULT_USER = "nobody"

        def pam_sm_authenticate(pamh, flags, argv):
            try:
                user = pamh.get_user(None)
            except pamh.exception, e:
                return e.pam_result
            if user == None:
                pamh.user = DEFAULT_USER  # the original read "pam.user", which would raise a NameError
            return pamh.PAM_SUCCESS

        def pam_sm_setcred(pamh, flags, argv):
            return pamh.PAM_SUCCESS

        def pam_sm_acct_mgmt(pamh, flags, argv):
            return pamh.PAM_SUCCESS

        def pam_sm_open_session(pamh, flags, argv):
            return pamh.PAM_SUCCESS

        def pam_sm_close_session(pamh, flags, argv):
            return pamh.PAM_SUCCESS

        def pam_sm_chauthtok(pamh, flags, argv):
            return pamh.PAM_SUCCESS

    When I test the authentication using:

        auth plain amltbXkAamltbXkAcmVhbC1zZWNyZXQ=

    I get the following response:

        535 5.7.8 Error: authentication failed: no mechanism available

    In the Postfix logs I have this:

        Dec 2 00:37:19 duo postfix/smtpd[16487]: warning: SASL authentication problem: unknown password verifier
        Dec 2 00:37:19 duo postfix/smtpd[16487]: warning: SASL authentication failure: Password verification failed
        Dec 2 00:37:19 duo postfix/smtpd[16487]: warning: localhost.localdomain[127.0.0.1]: SASL plain authentication failed: no mechanism available

    Any ideas? tl;dr: Does anyone have step-by-step instructions on how to set up pam_python with Postfix?

    Christian
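    For what it's worth, both warnings ("unknown password verifier", "no mechanism available") come from the Cyrus SASL layer that sits between Postfix and PAM, so the PAM stack is probably never reached. A hedged sketch of the glue that is usually missing — Cyrus told to verify passwords through saslauthd, and saslauthd told to use PAM, which is where /etc/pam.d/smtp and pam_python finally come in. The Debian/Ubuntu file locations here are assumptions:

        # /etc/postfix/sasl/smtpd.conf
        pwcheck_method: saslauthd
        mech_list: plain login

        # /etc/default/saslauthd
        START=yes
        MECHANISMS="pam"

        # and main.cf must have SASL switched on:
        # smtpd_sasl_auth_enable = yes

    After restarting saslauthd and Postfix, testsaslauthd -u user -p pass -s smtp exercises the PAM path directly, without SMTP in the loop, which makes it much easier to see whether pam_python itself is being invoked.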


  • HTTP Upload Problems

    - by jfoster
    We are running a marketplace on ColdFusion 8 and IIS with a widely geographically distributed user base, and have been receiving complaints of issues with some HTTP uploads. Most of the complaints come from locations geographically distant from our main datacenter on the US east coast. I've attempted to upload the same 70MB file from a US west coast test server to both our main site and a backup running the same code on a different network route, and I saw the same issues fairly consistently in both places, so I've ruled out the code, the route, and internal network errors. I've also tested uploads using both the native cf upload tag and a third-party tool called SaFileUp. I saw the same issues with both upload tools, so I also don't think this is necessarily a ColdFusion problem. I don't have any problems uploading the test file from the east coast to other east coast servers, so I'm beginning to think that the distance between our users and our equipment is a factor. I've also found that smaller files (< 10MB) are more likely to succeed than large ones. I tried the test upload with both IE and FF and did notice a difference in the way the browsers seemed to handle packet errors: IE seemed to have a tough time continuing an upload after dropped/bad packets, whereas FF seemed able to gracefully resume an upload after experiencing packet problems. Has anyone experienced similar issues? Is there anything we can do on our side to make uploads more forgiving of packet loss, or resumable after an error — a different upload tool, etc.? Do we need upload servers in more than one location to shorten the network routes between clients and servers? Does anyone think that switching uploads to SSL will help (no layer-7 packet sniffing might lead to a smoother upload)? Thanks.


  • Resource reference passing in puppet

    - by paweloque
    Is it possible to pass puppet resource references to other resources? My use case is to build a Jenkins build pipeline with puppet. To chain Jenkins jobs into a pipeline I need to pass the successor job to a job. A subset of the definition is:

        jobs::build { "Build ${release_name}":
            release           => $release_name,
            jenkins_jobs_path => $jenkins_jobs_path,
            successors        => 'Deploy',
        }

        jobs::deploy { "Deploy ${release_name}":
            release           => $release_name,
            jenkins_jobs_path => $jenkins_jobs_path,
            successors        => 'Smoke Test',
        }

    In the definition you see that I define the successors by name, i.e. 'Deploy' and, in the case of the second job, 'Smoke Test'. What I'd like to do is pass a reference to a resource and extract the name from it:

        jobs::build { "Build ${release_name}":
            release           => $release_name,
            jenkins_jobs_path => $jenkins_jobs_path,
            successors        => Jobs::Deploy["Deploy ${release_name}"],
        }

        jobs::deploy { "Deploy ${release_name}":
            release           => $release_name,
            jenkins_jobs_path => $jenkins_jobs_path,
            successors        => Jobs::Smoke_test["Smoke Test ${release_name}"],
        }

    And then within the jobs::deploy and jobs::build definitions I'd access the resource by reference and query for its type, etc. Is it possible to achieve this in puppet?


  • maemo - n900 - SIP call quality

    - by Walter White
    Hi all, I have been using SIP/VoIP on my N900 to make calls, and my problem is that after about 15 minutes of talk time (more recently, exactly 18 minutes) my connection dies: I can no longer hear them, nor they me. I have tested this with various VoIP providers to confirm that it is not specific to any one provider but is instead my phone. I have also tested this on my laptop. I sent my phone to a place that tests hardware, and no problems were found with it. What can I do to fix the 15-minute call barrier with SIP on my phone? The other problem is that for the wireless broadband to start working again, I need to restart the phone; it appears the network driver gets overloaded. The one thing that works fine is making cellular calls: I have yet to have call quality drop off after 15 minutes on a cellular connection. Thanks, Walter


  • Calibrating Displays in Boot Camp 3.0 on MacBook with external display

    - by Brian Reiter
    The LED display on my MacBook Pro is very blue-ish without correction. In OS X the advanced mode of the display color calibration tool is excellent and I can largely color-correct the display. Windows 7 incorporates a color calibration tool but it is less powerful. It largely consists of a software gamma correction tool and color charts to use with the hardware controls on your display (which don't exist on a notebook or an apple external display). How can I color-correct Windows in Boot Camp to match the OS X correction without using a Spyder or other special calibration hardware?


  • Hyper-V performance comparisons vs physical client?

    - by rwmnau
    Are there any comparisons between Hyper-V client machines and their physical equivalents? I've looked around and can find 4000 articles about improving Hyper-V performance, but I can't find any that actually do a side-by-side comparison or give benchmarking numbers. Ideally, I'm interested in a comparison of CPU, memory, disk, and graphics performance between something like the following: a powerful workstation (with plenty of RAM) with Windows 7 installed on it directly, versus the same exact workstation with Hyper-V Server 2008 R2 (the bare server role) and a full-screen Windows 7 client machine. Virtual Server 2005 had performance that didn't compare at all with actual hardware, but with the advances in CPU and hardware-level virtualization, has performance improved significantly? How obvious would it be to a user of the two scenarios above that one of them was virtualized, and does anybody know of actual benchmarking of this type?


  • How to set up Git on remote instance using keys from local machine?

    - by Lucas
    I have a setup where I can ssh into my remote server (i.e. a Google Compute instance) from my local machine. I used to be able to clone, push, and pull from a repository on my remote instance without adding any keys to the remote instance, and without adding any new keys to my repository online (just the public key from my local machine). I believe the remote instance was using the keys from my local machine to authenticate my Git pushes and pulls. However, the system broke when I reinstalled the OS on my local machine. Now when I try to connect with the GitHub server from my remote instance, I get the following.

    Cannot clone:

        [lucas@ecoinstance]~/node$ git clone git@github.com:lucasExample/test.git test
        Cloning into 'test'...
        Permission denied (publickey).
        fatal: The remote end hung up unexpectedly

    Cannot push:

        [lucas@ecoinstance]~/node/nodetest1$ git status
        # On branch master
        # Your branch is ahead of 'origin/master' by 1 commit.
        # nothing to commit (working directory clean)
        [lucas@ecoinstance]~/node/nodetest1$ git push
        Permission denied (publickey).
        fatal: The remote end hung up unexpectedly

    Additional info:

        [lucas@ecoinstance]~/node/nodetest1$ ssh-add -l
        Could not open a connection to your authentication agent.
        [lucas@ecoinstance]~/.ssh$ ls
        authorized_keys  known_hosts

    As you can see, I have no keys on my remote instance. I have never had keys on the remote, and it would push and pull just fine until I reinstalled my local OS. I can still clone, push, and pull on my local machine; it is just my remote machine that cannot get authentication. My local OS is Ubuntu 14.04 and my remote OS is Debian Wheezy. Any suggestions would be great. I am not sure how to search for this concept, where I can authenticate from a remote instance via my local machine, so any references are appreciated as well.
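    The concept being described is SSH agent forwarding: the remote instance asks the ssh-agent on the local machine to sign the authentication challenge, so no private key ever lives on the instance. Reinstalling the local OS removed the key pair and the client config that enabled forwarding. A hedged sketch of restoring it; the host alias, IP, and key path are assumptions:

        # on the local machine: restore or regenerate a key pair, register the
        # public key with GitHub, then load the key into the agent
        ssh-keygen -t rsa -f ~/.ssh/id_rsa     # skip if the old key was restored from backup
        ssh-add ~/.ssh/id_rsa

        # ~/.ssh/config on the local machine: forward the agent to the instance
        Host ecoinstance
            HostName 203.0.113.10
            User lucas
            ForwardAgent yes

        # after reconnecting, the forwarded agent should be visible remotely
        [lucas@ecoinstance]~$ ssh-add -l

    If ssh-add -l on the instance now lists the local key, git push will authenticate again. Note that ForwardAgent should only be enabled toward hosts you trust, since root on the remote host can use the forwarded agent while you are connected.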


  • Gateway GT5220 Boot/POST Failure

    - by John Rudy
    I have a Gateway GT5220 I'm troubleshooting. It is, in fact, the machine I just gave my father for his birthday a couple months ago. (Prior to that, it was my home PC. My home PC is now the MacBook on which I'm writing this.) Before going any further, I suspect that the answer will be, "It's worse than that, it's dead, Jim, it's dead, Jim, it's dead, Jim" — at least the mobo and/or CPU.

    The initial symptoms were as follows:

        Turn on power
        All fans fire up (thus making it so I can't hear if the hard drive is spinning or not, nor are my hands sensitive enough anymore to feel it)
        No LEDs remained lit on the front panel (initially, the hard drive indicator flashed briefly)
        No beep, no video, no nothing

    Following some advice I found here, I tried to "drain the stored power." After following those steps, the new symptoms were:

        Turn on power
        All fans fire up
        The front panel LEDs remained lit!
        After about 20, maybe 30 seconds, we had video! Sort of.

    We got to the Gateway splash/POST screen, which appeared thoroughly corrupted. How corrupted? Well, I imagine it's what a POST screen would look like after reading the wrong passage out of the Necronomicon. It stayed there: I gave it at least 5, maybe 6 minutes, and it didn't move. So I shut her down, started her up again, and now we have this (which is where we currently stand, symptomatically):

        Turn on power
        All fans fire up
        The front panel LEDs remain lit
        No video, no beep, no nothing

    I'm a software guy; I haven't done real hardware troubleshooting in years. My gut tells me that the mobo and/or CPU is fried, and unfortunately my gut didn't get to be as big as it is by being wrong all the time. :( In addition to the link above, I have read all of the following (trying to save you some LMGTFY trouble): the Gateway support pages on POST error messages and handling, about a zillion (useless) POST beep code sites, and a kioskea.net post indicating that most likely we're at what I consider "total loss" (mobo and/or CPU).

    My questions:

        Are there any conditions other than mobo/CPU failure that could cause symptoms like these?
        Is it worth my time to try the next hardware troubleshooting step? (i.e., remove all non-critical hardware from the machine, try to boot, and systematically replace components one by one until we find the failing one)
        Which mobos will fit in the Gateway GT5220 case (with rear ports correctly aligned)?

    (Why this is not a dupe: I wouldn't have posted this question if it hadn't been for the funkadelic possessed video display on the one occasion we got video out. I think that justifies this not being an exact dupe. Of course, if the community overrules, I will understand.)


  • MS SQL to MySQL using MySQL Migration Toolkit: permission issue

    - by Zeno
    I have a MS SQL database imported into SQL Server 2008 from a .bak, and I set the server to mixed-mode authentication. I have a SQL user (called "test") that can correctly access the database using SQL Server. I need to convert this to a MySQL database, so I got the MySQL Migration Toolkit. I pick "MS SQL Server" and then it asks for the hostname/username/password/database. I'm not 100% sure on these, but I used "localhost" (running on the same computer), left the port as is (1433), and the username/password ("test") for the SQL Server. And I used the database name for the SQL Server database I'm looking to import. I click next, enter my MySQL database details, attempt to run it, and I get this error:

        Connecting to source database and retrieve schemata names.
        Initializing JDBC driver ...
        Driver class MS SQL JDBC Driver
        Opening connection ...
        Connection jdbc:jtds:sqlserver://localhost:1433/Orders;user=test;password=blah;charset=utf-8;domain=
        The list of schema names could not be retrieved (error: 0).
        ReverseEngineeringMssql.getSchemata :Network error IOException: Connection refused: connect
        Details:
        net.sourceforge.jtds.jdbc.ConnectionJDBC2.<init>(ConnectionJDBC2.java:372)
        net.sourceforge.jtds.jdbc.ConnectionJDBC3.<init>(ConnectionJDBC3.java:50)
        net.sourceforge.jtds.jdbc.Driver.connect(Driver.java:178)
        java.sql.DriverManager.getConnection(Unknown Source)
        java.sql.DriverManager.getConnection(Unknown Source)
        com.mysql.grt.modules.ReverseEngineeringGeneric.establishConnection(ReverseEngineeringGeneric.java:141)
        com.mysql.grt.modules.ReverseEngineeringMssql.getSchemata(ReverseEngineeringMssql.java:99)
        sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        java.lang.reflect.Method.invoke(Unknown Source)
        com.mysql.grt.Grt.callModuleFunction(Unknown Source)
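    Despite the "permission issue" in the title, "Connection refused: connect" at the root of that stack is a plain TCP failure: nothing is listening on localhost:1433. SQL Server 2008 frequently ships with the TCP/IP protocol disabled (Management Studio can still connect over shared memory, which is why the "test" login works there). A hedged pair of checks from the same machine:

        rem is anything listening on the default SQL Server port?
        netstat -an | find "1433"

        rem does a raw TCP connection succeed?
        telnet localhost 1433

    If both fail, the usual fix is enabling TCP/IP under SQL Server Configuration Manager (SQL Server Network Configuration > Protocols) and restarting the SQL Server service; named instances may additionally need the SQL Server Browser service or an explicit port.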


  • How do I configure PHP5 and Apache2 on Ubuntu Server?

    - by rofls
    I'm trying to follow these instructions (under the "Troubleshooting PHP 5" heading). I have PHP installed, and when I run a2enmod php5 it says "Module php5 already enabled". The problem is I created a file, test.php, containing just this:

        <?php phpinfo(); ?>

    and put it in /var/www, like the instructions tell me to, but running curl http://localhost/test.php produces an Apache-generated 404 that says it can't find that file. I have:

        ServerName localhost
        DocumentRoot /var/www

    in one of the files under sites-available in the /etc/apache2 directory. I should probably figure this out on my own, but for troubleshooting the instructions say: "If the problem persists, check your PHP file authorisations (it should be readable at least by Ubuntu user 'apache'), and check if the PHP code is correct. For instance, copy your PHP file, then replace your whole PHP file content by '<?php phpinfo(); ?>' (without the quotation marks): if you get the PHP test page in your web browser, then the problem is in your PHP code, not in Apache or PHP configuration nor in file permissions. If this doesn't work, then it is a problem of file authorisation, Apache or PHP configuration, cache not emptied, or Apache not running or not restarted." And I don't know where the PHP file authorisations are or how to check them.
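    A 404 (rather than a 403 or a blank page) means Apache never looked at the file, so the vhost that answered the request is probably using a different DocumentRoot, or the intended site isn't enabled. A hedged sequence of checks; the site name and paths are the Ubuntu defaults of that era:

        # which vhosts are actually enabled, and with which DocumentRoot?
        apache2ctl -S

        # enable the default site if nothing is enabled, then reload
        sudo a2ensite default
        sudo /etc/init.d/apache2 reload

        # confirm the file sits where the active vhost looks, and is world-readable
        ls -l /var/www/test.php

    One more wrinkle: on Ubuntu the Apache user is www-data, not "apache", so "readable by user apache" in the quoted instructions should be read as "readable by www-data".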


  • Weirdness After Reinstalling the Windows Operating System

    - by Eka Anggraini
    I want to ask for your advice. I reinstalled my OS successfully, and turning it off and restarting a few times was fine. A few hours later, when I turned it on, there was suddenly an error:

        Windows could not start because the following file is missing or corrupt:
        \WINDOWS\SYSTEM32\CONFIG\SYSTEM
        You can attempt to repair this file by starting Windows Setup using the original Setup CD-ROM.
        Select 'R' at the first screen to start repair.

    So I set out to repair and reinstall everything again. I booted the Windows CD, pressed a key, and got this text on a blue background:

        Setup is loading files (Windows Executive)
        Setup is loading files (Hardware Abstraction Layer)

    Then I waited half an hour with no change, and repeating the process several times also got me nowhere. Please advise: where is the problem — the hardware, or the Windows CD?
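    A corrupt SYSTEM registry hive right after a working install, combined with Setup hanging while loading files, often points at a failing disk (or bad RAM) rather than at the CD — a hedged reading, not a certain diagnosis. If the Recovery Console can still be reached by pressing R, the classic first steps from Microsoft's KB307545 procedure are worth trying; drive letters assume a default single-disk install:

        chkdsk /r
        copy C:\windows\repair\system C:\windows\system32\config\system

    chkdsk /r maps out bad sectors; the copy replaces the corrupt hive with the one saved at install time (losing configuration changes made since, so treat it as a last resort before reinstalling). Running the disk vendor's diagnostics and a memory test would settle the hardware question either way.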


  • can't get php mail() working on Ubuntu desktop version with sendmail and postfix

    - by user36428
    I'm running Ubuntu 9.10 LAMP and trying to do a simple email test with PHP, and no emails get sent:

        mail("[email protected]", "eric-linux test", "test") or die("can't send mail");

    I get no errors from PHP when running that script. In my php.ini file is:

        sendmail_path = /usr/lib/sendmail -t -i

    And here is what's running:

        $ sudo ps aux | grep sendmail
        eric  2486  0.0  0.4  8368 2344 pts/0 T  14:52 0:00 sendmail -s "Hello world" [email protected]
        eric  8747  0.0  0.3  5692 1616 pts/2 T  16:18 0:00 sendmail
        eric  8749  0.0  0.3  5692 1636 pts/2 T  16:18 0:00 sendmail start
        eric  9190  0.0  0.3  5692 1636 pts/2 T  19:12 0:00 sendmail start
        eric  9192  0.0  0.3  5692 1616 pts/2 T  19:12 0:00 sendmail
        eric  9425  0.0  0.3  5692 1620 pts/1 T  19:37 0:00 sendmail
        eric  9427  0.0  0.3  6584 1636 pts/1 T  19:37 0:00 sendmail restart
        eric  9429  0.0  0.3  5692 1636 pts/1 T  19:38 0:00 /usr/lib/sendmail restart
        eric  9432  0.0  0.1  3040  804 pts/1 R+ 19:38 0:00 grep --color=auto sendmail

    When I run $ sendmail start it just hangs there doing nothing. I also installed postfix to see if it would help, but it didn't. I tried port 25:

        eric@eric-linux:~$ telnet localhost 25
        Trying ::1...
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        220 eric-linux ESMTP Postfix (Ubuntu)

    thanks
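    A reading of that ps listing: every sendmail process is in state T (stopped — typically a job suspended with Ctrl+Z), and "start"/"restart" are not sendmail subcommands; sendmail invoked that way just waits for a message body on stdin, which is why it appears to hang. Since telnet shows Postfix answering on port 25, the delivery path can be tested without PHP at all; the recipient address below is a placeholder:

        # clear out the stopped sendmail jobs (from the shell that launched them)
        jobs -l        # list them
        kill %1        # repeat per job number

        # feed a message through the sendmail binary Postfix provides
        printf "To: you@example.com\nSubject: cli test\n\nhello\n" | /usr/sbin/sendmail -t -i

        # watch the Postfix log for the delivery attempt
        tail -f /var/log/mail.log

    If that delivers, pointing php.ini at the same binary (sendmail_path = "/usr/sbin/sendmail -t -i" — /usr/lib/sendmail may not exist on Ubuntu) and restarting Apache should make mail() follow suit.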


  • kernel software trap handling

    - by Tony
    I'm reading a book on Windows internals and there's something I don't understand: "The kernel handles software interrupts either as part of hardware interrupt handling or synchronously when a thread invokes kernel functions related to the software interrupt." So does this mean that software interrupts or exceptions will only be handled under these conditions:

        a. when the kernel is executing a function of that thread related to the software exception (trap), or
        b. when it is already handling a hardware trap?

    Is my understanding of this correct? The next bit: "In most cases, the kernel installs front-end trap handling functions that perform general trap handling tasks before and after transferring control to other functions that field the trap." I don't quite understand what is meant by "front-end trap handling functions" and "field the trap". Can anyone help me?

    Read the article

  • Async ignored on AJAX requests on Nginx server

    - by eComEvo
    Despite sending an async request to the server over AJAX, the server will not respond until the previous, unrelated request has finished. The following code is only broken in this way on Nginx; it runs perfectly on Apache. This call starts a background process and waits for it to complete so it can display the final result:

        $.ajax({
            type: 'GET',
            async: true,
            url: $(this).data('route'),
            data: $('input[name=data]').val(),
            dataType: 'json',
            success: function (data) { /* do stuff */ },
            error: function (data) { /* handle errors */ }
        });

    The below is called after the above, which on Apache requires 100ms to execute, and repeats itself, showing progress for the data being written in the background:

        checkStatusInterval = setInterval(function () {
            $.ajax({
                type: 'GET',
                async: false,
                cache: false,
                url: '/process-status?process=' + currentElement.attr('id'),
                dataType: 'json',
                success: function (data) { /* update progress bar and status message */ }
            });
        }, 1000);

    Unfortunately, when this script is run on nginx, the progress request above never finishes even a single request until the first AJAX request that sent the data is done. If I change async to true in the above, it fires one per interval, but none of them complete until that very first AJAX request finishes. Here is the main nginx conf file:

        #user nobody;
        worker_processes 1;

        #error_log logs/error.log;
        #error_log logs/error.log notice;
        #error_log logs/error.log info;
        #pid logs/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            server_names_hash_bucket_size 64;

            # configure temporary paths
            # nginx is started with param -p, setting nginx path to serverpack installdir
            fastcgi_temp_path temp/fastcgi;
            uwsgi_temp_path temp/uwsgi;
            scgi_temp_path temp/scgi;
            client_body_temp_path temp/client-body 1 2;
            proxy_temp_path temp/proxy;

            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';

            #access_log logs/access.log main;

            # Sendfile copies data between one FD and another from within the kernel.
            # More efficient than read() + write(), which require transferring data to and from user space.
            sendfile on;

            # Tcp_nopush causes nginx to attempt to send its HTTP response head in one packet,
            # instead of using partial frames. This is useful for prepending headers before calling sendfile,
            # or for throughput optimization.
            tcp_nopush on;

            # don't buffer data-sends (disable Nagle algorithm). Good for sending frequent small bursts of data in real time.
            tcp_nodelay on;

            types_hash_max_size 2048;

            # Timeout for keep-alive connections. Server will close connections after this time.
            keepalive_timeout 90;

            # Number of requests a client can make over the keep-alive connection. This is set high for testing.
            keepalive_requests 100000;

            # allow the server to close the connection after a client stops responding. Frees up socket-associated memory.
            reset_timedout_connection on;

            # send the client a "request timed out" if the body is not loaded by this time. Default 60.
            client_header_timeout 20;
            client_body_timeout 60;

            # If the client stops reading data, free up the stale client connection after this much time. Default 60.
            send_timeout 60;

            # Size Limits
            client_body_buffer_size 64k;
            client_header_buffer_size 4k;
            client_max_body_size 8M;

            # FastCGI
            fastcgi_connect_timeout 60;
            fastcgi_send_timeout 120;
            fastcgi_read_timeout 300; # default: 60 secs; when step debugging with XDEBUG, you need to increase this value
            fastcgi_buffer_size 64k;
            fastcgi_buffers 4 64k;
            fastcgi_busy_buffers_size 128k;
            fastcgi_temp_file_write_size 128k;

            # Caches information about open FDs, frequently accessed files.
            open_file_cache max=200000 inactive=20s;
            open_file_cache_valid 30s;
            open_file_cache_min_uses 2;
            open_file_cache_errors on;

            # Turn on gzip output compression to save bandwidth.
            # http://wiki.nginx.org/HttpGzipModule
            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";
            gzip_http_version 1.1;
            gzip_vary on;
            gzip_proxied any;
            #gzip_proxied expired no-cache no-store private auth;
            gzip_comp_level 6;
            gzip_buffers 16 8k;
            gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;

            # show all files and folders
            autoindex on;

            server {
                # access from localhost only
                listen 127.0.0.1:80;
                server_name localhost;
                root www;

                # the following default "catch-all" configuration allows access to the server from outside.
                # please ensure your firewall allows access to tcp/port 80. check your "skype" config.
                # listen 80;
                # server_name _;

                log_not_found off;
                charset utf-8;
                access_log logs/access.log main;

                # handle files in the root path /www
                location / {
                    index index.php index.html index.htm;
                }

                #error_page 404 /404.html;

                # redirect server error pages to the static page /50x.html
                error_page 500 502 503 504 /50x.html;
                location = /50x.html {
                    root www;
                }

                # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
                location ~ \.php$ {
                    try_files $uri =404;
                    fastcgi_pass 127.0.0.1:9100;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include fastcgi_params;
                }

                # add expire headers
                location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt|js|css)$ {
                    expires 30d;
                }

                # deny access to .htaccess files (if Apache's document root concurs with nginx's one)
                # deny access to git & svn repositories
                location ~ /(\.ht|\.git|\.svn) {
                    deny all;
                }
            }

            # include config files of "enabled" domains
            include domains-enabled/*.conf;
        }

    Here is the enabled domain conf file:

        access_log off;
        access_log C:/server/www/test.dev/logs/access.log;
        error_log C:/server/www/test.dev/logs/error.log;

        # HTTP Server
        server {
            listen 127.0.0.1:80;
            server_name test.dev;
            root C:/server/www/test.dev/public;
            index index.php;
            rewrite_log on;
            default_type application/octet-stream;
            #include /etc/nginx/mime.types;

            # Include common configurations.
            include domains-common/location.conf;
        }

        # HTTPS server
        server {
            listen 443 ssl;
            server_name test.dev;
            root C:/server/www/test.dev/public;
            index index.php;
            rewrite_log on;
            default_type application/octet-stream;
            #include /etc/nginx/mime.types;

            # Include common configurations.
            include domains-common/location.conf;
            include domains-common/ssl.conf;
        }

    Contents of ssl.conf:

        # OpenSSL for HTTPS connections.
        ssl on;
        ssl_certificate C:/server/bin/openssl/certs/cert.pem;
        ssl_certificate_key C:/server/bin/openssl/certs/cert.key;
        ssl_session_timeout 5m;
        ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_param HTTPS on;
            fastcgi_pass 127.0.0.1:9100;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

    Contents of location.conf:

        # Remove trailing slash to please Laravel routing system.
        if (!-d $request_filename) {
            rewrite ^/(.+)/$ /$1 permanent;
        }

        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }

        # We don't need .ht files with nginx.
        location ~ /(\.ht|\.git|\.svn) {
            deny all;
        }

        # Added cache headers for images.
        location ~* \.(png|jpg|jpeg|gif)$ {
            expires 30d;
            log_not_found off;
        }

        # Only 3 hours on CSS/JS to allow me to roll out fixes during early weeks.
        location ~* \.(js|css)$ {
            expires 3h;
            log_not_found off;
        }

        # Add expire headers.
        location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt)$ {
            expires 30d;
        }

        # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
        location ~ \.php$ {
            try_files $uri /index.php =404;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
            fastcgi_pass 127.0.0.1:9100;
        }

    Any ideas where this is going wrong?
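    Two patterns worth checking with these symptoms — hedged suggestions, not a confirmed diagnosis. First, if both requests run PHP and share the same session, PHP's default file-based session handler holds an exclusive lock for the whole request, so the status polls queue behind the long-running request no matter what nginx does (a different worker model on the Apache box could easily mask this). The standard mitigation in the long-running handler:

        <?php
        // read what's needed from the session, then release the lock
        // before the slow work begins, so the status polls are not serialized
        session_start();
        $processId = isset($_SESSION['process_id']) ? $_SESSION['process_id'] : null; // hypothetical key
        session_write_close();   // releases the session file lock

        // ... long-running export/import work happens here ...

    Second, both vhosts pass PHP to a single backend at 127.0.0.1:9100; if that is one php-cgi worker, it can only service one request at a time, serializing everything. Spawning several FastCGI workers, or using a process manager that does, removes that bottleneck.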


  • Limit Windows PC Network/Internet Throughput

    - by Jon Cram
    I have a Vista x64 machine on a fairly fast Internet connection and either buggy drivers for the onboard Ethernet or faulty onboard Ethernet hardware. If I sustain too high a throughput on the Ethernet connection the network connection within Windows fails and I have to restart the machine to restore connectivity. I don't believe I can fix this issue (I'm erring towards faulty hardware) but would like to mitigate the effects by limiting my network throughput. I'm in a position where I would like to download a 5GB file from the Internet (a game install via Steam) and am certain that as this will take a few hours I will not be able to complete the download before my network connection within Windows fails. From downloading content through a BitTorrent client I have found that by limiting the download throughput to around 150 kilobytes per second I can maintain a steady network connection. I can't directly limit the throughput of the download through the Steam client and would instead like to find out how I can limit the throughput of my Ethernet connection within Windows. Any suggestions on how I can achieve this?


  • Does VMware ESXi 4.0 U1 support the Promise SuperTrak EX8650 SATA card?

    - by RTNN
    Hi, can anyone tell me if VMware ESXi 4.0 U1 has support for the Promise SuperTrak EX8650 SATA card? In the hardware compatibility guide I find that VMware supports the Promise SuperTrak EX8650, but only in version ESX 3.5. Is this card not supported on ESXi 4.0 U1, or what? From the hardware guide:

        Partner Name: Promise | Model: SuperTrak EX8650  | Manufacturer: Promise Technology Inc | Device Type: SAS-RAID | Supported Releases: ESX 3.5 U5*1, ESX 3.5 U4*1
        Partner Name: Promise | Model: SuperTrak EX8760T | Manufacturer: Promise Technology Inc | Device Type: SAS      | Supported Releases: ESX/ESXi 4.0 U1*2, ESX/ESXi 4.0*2


  • How to create custom content for the nginx error 502 page while keeping the original URL in the browser

    - by user123862
    I'm trying to serve the nginx error pages with a custom message in my own language while keeping the original URL in the browser, without success. For example, I go to the URL xaluan.com/aaa/bbb.html while the backend is down; nginx should show the 502 error with the same URL in the address bar, but with my custom message.

    Test 1: I created a custom page at /usr/local/nginx/html/502.html with the following config, but when an error occurs the site shows the default nginx error page (the content is not what I created):

        error_page 502 /502.html;
        location = /502.html {
            root /usr/local/nginx/html;
        }

    Test 2: I then created the same page in my www domain folder, /home/xaluano/public_html/502.html. This keeps redirecting me to domain.com/502.html. The content is now what I created, but the URL is still not what I need:

        error_page 502 /502.html;
        location = /502.html {
            root /home/xaluano/public_html;
            internal;
        }

    EDIT/UPDATE with more detail, 10/06/2012: please see my nginx config at http://pastebin.com/7iLD6WQq and the vhost config at http://pastebin.com/ZZ91KiY6

    The test case: if the Apache httpd service is stopped (service httpd stop) and I open a browser at xaluan.com/modules.php?name=News&file=article&sid=123456, I see the 502 error with the same URL in the browser's address bar.

    Custom error page: I need a config which, when Apache fails, shows a custom message telling the user to wait one minute for the service to come back, then refreshes the current page with the same URL (the refresh I can do easily with JavaScript). As long as nginx doesn't change the URL, the JavaScript can work it out. Any help would be great; thanks in advance.
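    A hedged sketch of the piece that is often missing in this kind of setup: when requests are proxied to an Apache backend, nginx only substitutes its own error_page for backend-generated errors if it is told to intercept them. Paths follow the question; error_page performs an internal redirect, so the browser URL stays unchanged whether the 502 is generated by nginx itself (backend down) or relayed from the backend:

        proxy_intercept_errors on;      # replace backend-generated error bodies too

        error_page 502 /502.html;
        location = /502.html {
            root /home/xaluano/public_html;
            internal;                   # the page cannot be requested directly
        }

    If the address bar still changes, something else is issuing an external redirect — worth checking the vhost for a "rewrite ... permanent" or a return 301 that fires on the error path.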


  • Server freezes at XX:25

    - by Karevan
    We've ordered a 50 euro/month server at hetzner.de; it runs Debian. The problem is that the server freezes at a random time of day and nothing appears in the log — only a hardware reboot helps. Part of the log file from around one freeze:

        Aug 17 22:38:26 Debian-60-squeeze-64-minimal pure-ftpd: ([email protected]) [INFO] New connection from 95.211.120.220
        Aug 17 22:38:26 Debian-60-squeeze-64-minimal pure-ftpd: ([email protected]) [INFO] Logout.
        Aug 17 22:39:01 Debian-60-squeeze-64-minimal /USR/SBIN/CRON[22828]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type$
        Aug 17 23:09:01 Debian-60-squeeze-64-minimal /USR/SBIN/CRON[22835]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type$
        Aug 17 23:17:01 Debian-60-squeeze-64-minimal /USR/SBIN/CRON[22842]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Aug 17 23:39:01 Debian-60-squeeze-64-minimal /USR/SBIN/CRON[22847]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type$
        Aug 18 09:47:47 Debian-60-squeeze-64-minimal kernel: imklog 4.6.4, log source = /proc/kmsg started.
        Aug 18 09:47:47 Debian-60-squeeze-64-minimal rsyslogd: [origin software="rsyslogd" swVersion="4.6.4" x-pid="1229" x-info="http://www.rsyslog.com"] (re)start
        Aug 18 09:47:47 Debian-60-squeeze-64-minimal kernel: [ 0.000000] Initializing cgroup subsys cpuset
        Aug 18 09:47:47 Debian-60-squeeze-64-minimal kernel: [ 0.000000] Initializing cgroup subsys cpu
        Aug 18 09:47:47 Debian-60-squeeze-64-minimal kernel: [ 0.000000] Linux version 2.6.32-5-amd64 (Debian 2.6.32-45) ([email protected]) (gcc version 4.3.5 (Debian 4.3.5$
        Aug 18 09:47:47 Debian-60-squeeze-64-minimal kernel: [ 0.000000] Command line: BOOT_IMAGE=/vmlinuz-2.6.32-5-amd64 root=/dev/md2 ro

    As you can see, the log records nothing but the next boot. Sadly, there's no way to look at the server's console right when it freezes, and the datacenter support staff don't really want to help with that. The server was installed on July 30th; the freezes happened at:

        6 August, 0:25
        18 August, 2:27
        21 August, 1:25
        26 August, 23:26

    We decided that freezing around ??:25 wasn't a hardware fault and reinstalled the OS. On 31 August our admin backed up all files, reinstalled Debian, and restored the backup. But then, on 7 September, the server went down again, at 5:05. We thought it was related to "Anyone else experiencing high rates of Linux server crashes during a leap second day?" and turned ntp off. But then the server went down twice more: 21 September at 17:29 and 24 September at 20:27. I called all the Linux admins I know to help solve this, and they said the OS configuration is fine, so it could only be hardware. But they don't know why it always freezes at XX:25-30. Maybe some of you know something related to that?


  • Cisco Multi-DMZ firewall

    - by BParker
    I need to find a firewall that will give me 1 LAN port and 5-7 DMZ ports. I have a requirement to replace some FreeBSD systems that are used to run testing equipment. It is essential that the DMZ ports cannot communicate with each other, but the LAN port can communicate with everyone. That way a user on the LAN can connect to the test systems, but the test systems are isolated entirely and cannot interfere with each other. One of the DMZs will be connected to a VMware ESXi server, one to a standard server, and the rest to various types of equipment. The LAN port will be connected to the corporate LAN switch. Sorry if I am a little vague; I am just trying to work all this out myself! Currently we have a FreeBSD box configured, but the quad-port NICs are pretty expensive, and the PC itself is old, so I would prefer to replace it with a dedicated piece of kit that can do the same job more reliably. These test rigs are used all over the place and get moved quite often, so I am aiming for Cisco kit for ease of configuration and reliability of the hardware itself. Thanks

