Search Results

Search found 21130 results on 846 pages for 'nfs client'.

Page 709/846

  • Durability of Websockets Server

    - by smitchell360
    I am starting to experiment with websockets. Does anyone know of a websockets server (open source or paid) that provides a durable store of the websocket "channel"? All of the examples that I have found do not address durability -- if a websockets server goes down, all "channel" data is lost. Services such as Pusher do not really discuss whether they address the durability issue (and I have not received a response from tech support yet). Happy to roll my own, but would rather not reinvent the wheel. EDIT: I'm not looking for websockets 101 information. That is readily available and understood. I'm looking for a server (open source or paid) that supports websockets and has a durable store for the websocket data so that, in the event that a server fails, a new server can take over where the original one left off. Two main purposes: 1. support failover scenarios contemplated by the websockets Network Working Group http://tools.ietf.org/html/draft-ibc-websocket-dns-srv-02#section-5.1 (most importantly so that missed messages are sent when a client connects to a failover server) 2. support scenarios where new subscribers must receive all past messages that were published. Of course this can be handled at the application layer...but that is not what I am looking for.

    Read the article

  • Postfix: Errors using Google Apps for SMTP

    - by Zed Said
    I am using postfix and need to send the mail using google apps smtp. I am getting errors after I thought I had set everything up correctly: May 11 09:50:57 zedsaid postfix/error[22214]: 00E009693FB: to=<[email protected]>, relay=none, delay=2466, delays=2462/3.4/0/0.06, dsn=4.7.0, status=deferred (delivery temporarily suspended: SASL authentication failed; cannot authenticate to server smtp.gmail.com[74.125.155.109]: no mechanism available) May 11 09:50:57 zedsaid postfix/error[22213]: 0ACB36D1B94: to=<[email protected]>, relay=none, delay=2486, delays=2482/3.4/0/0.06, dsn=4.7.0, status=deferred (delivery temporarily suspended: SASL authentication failed; cannot authenticate to server smtp.gmail.com[74.125.155.109]: no mechanism available) May 11 09:50:57 zedsaid postfix/error[22232]: 067379693D3: to=<[email protected]>, relay=none, delay=2421, delays=2417/3.4/0/0.06, dsn=4.7.0, status=deferred (delivery temporarily suspended: SASL authentication failed; cannot authenticate to server smtp.gmail.com[74.125.155.109]: no mechanism available) main.cf: # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. #myorigin = /etc/mailname smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU) biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = no # TLS parameters #smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem #smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client. myhostname = zedsaid.com alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = #relayhost = mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_command = procmail -a "$EXTENSION" mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all delay_warning_time = 4h smtpd_recipient_limit = 16 # how many error before back off. smtpd_soft_error_limit = 3 # how many max errors before blocking it. smtpd_hard_error_limit = 12 ## Gmail Relay relayhost = [smtp.gmail.com]:587 smtp_use_tls = yes smtp_sasl_auth_enable = yes smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd smtp_sasl_security_options = noanonymous smtp_sasl_tls_security_options = noanonymous smtp_sasl_mechanism_filter = login smtp_tls_eccert_file = smtp_tls_eckey_file = smtp_use_tls = yes smtp_enforce_tls = no smtp_tls_CAfile = /etc/postfix/cacert.pem smtpd_tls_received_header = yes tls_random_source = dev:/dev/urandom transport_maps = hash:/etc/postfix/transport debug_peer_list = smtp.gmail.com debug_peer_level = 3 What am I doing wrong?
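
    The "no mechanism available" SASL failure usually means the Postfix SMTP client has no usable SASL mechanism at all: either the Cyrus SASL client modules (LOGIN/PLAIN) are not installed, or the password map referenced by smtp_sasl_password_maps is missing or unreadable. A minimal sketch, assuming a Debian-style box and that /etc/postfix/sasl_passwd is the map named in main.cf (the address and password below are placeholders):

        # install the SASL client mechanisms Postfix needs for LOGIN/PLAIN
        apt-get install libsasl2-modules

        # (re)create the credential map; use your real Google Apps account
        echo "[smtp.gmail.com]:587 [email protected]:yourpassword" > /etc/postfix/sasl_passwd
        chmod 600 /etc/postfix/sasl_passwd
        postmap hash:/etc/postfix/sasl_passwd

        # Gmail offers PLAIN and LOGIN; a filter restricted to "login" alone can fail
        postconf -e 'smtp_sasl_mechanism_filter = plain, login'
        postfix reload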

    Read the article

  • Forcing a particular SSL protocol for an nginx proxying server

    - by vitch
    I am developing an application against a remote https web service. While developing I need to proxy requests from my local development server (running nginx on ubuntu) to the remote https web server. Here is the relevant nginx config: server { server_name project.dev; listen 443; ssl on; ssl_certificate /etc/nginx/ssl/server.crt; ssl_certificate_key /etc/nginx/ssl/server.key; location / { proxy_pass https://remote.server.com; proxy_set_header Host remote.server.com; proxy_redirect off; } } The problem is that the remote HTTPS server can only accept connections over SSLv3 as can be seen from the following openssl calls. Not working: $ openssl s_client -connect remote.server.com:443 CONNECTED(00000003) 139849073899168:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:177: --- no peer certificate available --- No client certificate CA names sent --- SSL handshake has read 0 bytes and written 226 bytes --- New, (NONE), Cipher is (NONE) Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE --- Working: $ openssl s_client -connect remote.server.com:443 -ssl3 CONNECTED(00000003) <snip> --- SSL handshake has read 1562 bytes and written 359 bytes --- New, TLSv1/SSLv3, Cipher is RC4-SHA Server public key is 1024 bit Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE SSL-Session: Protocol : SSLv3 Cipher : RC4-SHA <snip> With the current setup my nginx proxy gives a 502 Bad Gateway when I connect to it in a browser. Enabling debug in the error log I can see the message: [info] 1451#0: *16 peer closed connection in SSL handshake while SSL handshaking to upstream. I tried adding ssl_protocols SSLv3; to the nginx configuration but that didn't help. Does anyone know how I can set this up to work correctly?
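
    One hedged possibility: recent nginx releases (1.5.6 and later) have a proxy_ssl_protocols directive that controls the protocol nginx speaks to an HTTPS upstream; ssl_protocols only affects the listening side, which is why adding it changed nothing. A rough sketch, assuming your nginx is new enough (check first; on older builds a local stunnel in front of the upstream is the usual workaround):

        # confirm the nginx version and that the upstream really is SSLv3-only
        nginx -v
        openssl s_client -connect remote.server.com:443 -ssl3 < /dev/null

        # if nginx >= 1.5.6, add the directive inside the proxying location:
        #     location / {
        #         proxy_pass https://remote.server.com;
        #         proxy_ssl_protocols SSLv3;
        #     }
        nginx -t && service nginx reload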

    Read the article

  • Installing mod_pagespeed (Apache module) on CentOS

    - by Sid B
    I have a CentOS (5.7 Final) system on which I already have Apache (2.2.3) installed. I have installed mod_pagespeed by following the instructions on: http://code.google.com/speed/page-speed/download.html and got the following while installing: # rpm -U mod-pagespeed-*.rpm warning: mod-pagespeed-beta_current_x86_64.rpm: Header V4 DSA signature: NOKEY, key ID 7fac5991 [ OK ] atd: [ OK ] It does appear to be installed properly: # apachectl -t -D DUMP_MODULES Loaded Modules: ... pagespeed_module (shared) And I've made the following changes in /etc/httpd/conf.d/pagespeed.conf Added: ModPagespeedEnableFilters collapse_whitespace,elide_attributes ModPagespeedEnableFilters combine_css,rewrite_css,move_css_to_head,inline_css ModPagespeedEnableFilters rewrite_javascript,inline_javascript ModPagespeedEnableFilters rewrite_images,insert_img_dimensions ModPagespeedEnableFilters extend_cache ModPagespeedEnableFilters remove_quotes,remove_comments ModPagespeedEnableFilters add_instrumentation Commented out the following lines in mod_pagespeed_statistics <Location /mod_pagespeed_statistics> **# Order allow,deny** # You may insert other "Allow from" lines to add hosts you want to # allow to look at generated statistics. Another possibility is # to comment out the "Order" and "Allow" options from the config # file, to allow any client that can reach your server to examine # statistics. This might be appropriate in an experimental setup or # if the Apache server is protected by a reverse proxy that will # filter URLs in some fashion. **# Allow from localhost** **# Allow from 127.0.0.1** SetHandler mod_pagespeed_statistics </Location> As a separate note, I'm trying to run the prescribed system tests as specified on google's site, but it gives the following error. I'm averse to updating wget on my server, as I'm sure there's no need for it for the actual module to function correctly. ./system_test.sh www.domain.com You have the wrong version of wget. 1.12 is required.
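
    The system test only needs a wget 1.12 binary on its PATH, so one way around it, without touching the system wget, is to build a private copy under your home directory. A sketch, assuming outbound HTTP access and a working build toolchain (the mirror URL is a guess; any GNU mirror carries wget-1.12):

        cd /tmp
        wget http://ftp.gnu.org/gnu/wget/wget-1.12.tar.gz
        tar xzf wget-1.12.tar.gz && cd wget-1.12
        ./configure --prefix=$HOME/wget-1.12 --without-ssl && make && make install

        # run the pagespeed tests with the private wget first on the PATH
        PATH=$HOME/wget-1.12/bin:$PATH ./system_test.sh www.domain.com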

    Read the article

  • Getting a "403 access denied" error instead of serving file (using django, gunicorn nginx)

    - by Finglish
    Getting a "403 access denied" error instead of serving file (using django, gunicorn nginx) I am attempting to use nginx to serve private files from django. For X-Access-Redirect settings I followed the following guide http://www.chicagodjango.com/blog/permission-based-file-serving/ Here is my site config file (/etc/nginx/site-available/sitename): server { listen 80; listen 443 default_server ssl; server_name localhost; client_max_body_size 50M; ssl_certificate /home/user/site.crt; ssl_certificate_key /home/user/site.key; access_log /home/user/nginx/access.log; error_log /home/user/nginx/error.log; location / { access_log /home/user/gunicorn/access.log; error_log /home/user/gunicorn/error.log; alias /path_to/app; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Scheme $scheme; proxy_pass http://127.0.0.1:8000; proxy_connect_timeout 100s; proxy_send_timeout 100s; proxy_read_timeout 100s; } location /protected/ { internal; alias /home/user/protected; } } I then tried using the following in my django view to test the download: response = HttpResponse() response['Content-Type'] = "application/zip" response['X-Accel-Redirect'] = '/protected/test.zip' return response but instead of the file download I get: 403 Forbidden nginx/1.1.19 Please note: I have removed all the personal data from the the config file, so if there are any obvious mistakes not related to my error that is probably why. My nginx error log gives me the following: 2012/09/18 13:44:36 [error] 23705#0: *44 directory index of "/home/user/protected/" is forbidden, client: 80.221.147.225, server: localhost, request: "GET /icbdazzled/tmpdir/ HTTP/1.1", host: "www.icb.fi"

    Read the article

  • What does it mean when a file name is shown with a red background?

    - by user56614
    I'm trying to install Cisco VPN client on Linux Ubuntu 10.04. The installer creates the directory, places all the necessary files in it, and then fails to launch the binary. I tried to launch it myself, the system rebukes me too. Closer inspection yields the following: eugene@eugene-desktop:/opt/cisco/vpn/bin$ sudo chmod u+x vpnagentd eugene@eugene-desktop:/opt/cisco/vpn/bin$ ls -la total 5124 drwxr-xr-x 2 root root 4096 2010-10-23 11:51 . drwxr-xr-x 6 root root 4096 2010-10-23 11:51 .. -rwxr-xr-x 1 root root 1607236 2010-10-23 11:51 vpn -rwsr-xr-x 1 root root 1204692 2010-10-23 11:51 vpnagentd -r--r--r-- 1 root root 697380 2010-10-23 11:51 vpndownloader.sh -rwxr-xr-x 1 root root 1712708 2010-10-23 11:51 vpnui -rwxr-xr-x 1 root root 3654 2010-10-23 11:51 vpn_uninstall.sh eugene@eugene-desktop:/opt/cisco/vpn/bin$ ./vpnagentd bash: ./vpnagentd: No such file or directory eugene@eugene-desktop:/opt/cisco/vpn/bin$ sudo ./vpnagentd sudo: unable to execute ./vpnagentd: No such file or directory The file name "vpnagentd" is shown in white letters with red background. The other three executables are in green letters with black background, as expected. Any ideas?
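
    Two separate things appear to be going on. The red background is just ls colouring: in the default dircolors scheme setuid binaries are shown white on red, and vpnagentd is -rwsr-xr-x, so that part is harmless. "No such file or directory" for a file that plainly exists usually means the kernel cannot find the binary's runtime loader, which is typical of a 32-bit binary on 64-bit Ubuntu without the 32-bit compatibility libraries. A sketch of how to confirm and, on 10.04, fix it, assuming the Cisco client really is a 32-bit build:

        # the SETUID 37;41 entry is the white-on-red colour for setuid files
        dircolors -p | grep -i setuid

        # check the binary's architecture and required interpreter
        file /opt/cisco/vpn/bin/vpnagentd

        # on 64-bit Ubuntu 10.04, pull in the 32-bit compatibility libraries
        sudo apt-get install ia32-libs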

    Read the article

  • SFTP chroot results in broken pipe

    - by Patrick Pruneau
    I have a website that I want to add some restricted access to a sub-folder. For this, I've decided to use CHROOT with SFTP (I mostly followed this link : http://shapeshed.com/chroot_sftp_users_on_ubuntu_intrepid/) For now, I've created a user (sio2104) and a group (magento).After following the guide, my folder list look like this : -rw-r--r-- 1 root root 27 2012-02-01 14:23 index.html -rw-r--r-- 1 root root 21 2012-02-01 14:24 info.php drwx------ 15 root root 4096 2012-02-25 00:31 magento As you can see, i've chown root:root the folder magento I wanted to jail-in the user and ...everything else by the way. Also in the magento folder, I chown sio2104:magento everything so they can access what they want. Finally, I've added this to sshd_config file : #Subsystem sftp /usr/lib/openssh/sftp-server Subsystem sftp internal-sftp Match Group magento ChrootDirectory /usr/share/nginx/www/magento ForceCommand internal-sftp AllowTCPForwarding no X11Forwarding no PasswordAuthentication yes #UsePAM yes And the result is...well, I can enter my login, password and it's all finished with a "broken pipe" error. $ sftp [email protected] [....some debug....] [email protected]'s password: debug1: Authentication succeeded (password). Authenticated to 10.20.0.50 ([10.20.0.50]:22). debug1: channel 0: new [client-session] debug1: Requesting [email protected] debug1: Entering interactive session. Write failed: Broken pipe Connection closed Verbose mode gives nothing to help. Anyone have an idea of what I've done wrong? If I try to login with ssh or sftp with my personnal user, everything works fine.
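
    A hedged way to debug this: run a throwaway sshd in debug mode and watch both ends, and re-check the ChrootDirectory rules. Every path component must be owned by root and not group- or world-writable, but the jail directory itself still has to be enterable by the jailed user (0755 rather than the 0700 shown in your listing), and the Match block has to sit at the end of sshd_config with sshd restarted afterwards. Roughly:

        # server side: second sshd on another port, full debug to the console
        sudo /usr/sbin/sshd -ddd -p 2222

        # client side
        sftp -vvv -oPort=2222 [email protected]

        # permissions along the chroot path, plus the auth log
        ls -ld /usr /usr/share /usr/share/nginx /usr/share/nginx/www /usr/share/nginx/www/magento
        sudo tail -f /var/log/auth.log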

    Read the article

  • dhclient requests filling memory?

    - by shanethehat
    Dammit Jim, I'm a web developer, not a sys admin. With that out of the way, my client has a CentOS server (6.2) that is only serving a single Magento site (and the associated MySQL server) and it is frequently running out of memory, despite the site only currently being open to 5 users. I'm investigating the logs to try to figure out why the memory usage is so high, but I don't really know what I'm looking at. It seems that there are a lot of entries in /var/log/messages concerning DHCP requests, approximately one every 15 seconds, that look like this: Apr 7 14:23:06 s15940039 dhclient[815]: DHCPREQUEST on eth0 to 172.30.102.85 port 67 (xid=0x6b5cd2a7) Is this normal? I don't see anything else in here that I don't recognise, but then I'm not sure I'd know the problem if I did see it. Four days ago the server ran out of memory completely and locked up, requiring a restart. The DHCP messages did not start up again for 23 hours, but then carried on as before. I have read this question which describes the same issue, but in my case a fresh DHCP lease does not ever seem to be issued. Is this something I should push back to the hosting provider, or have I not yet found the source of the memory problem?
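
    The DHCPREQUEST lines mean the client keeps retrying a renewal that never gets answered, which is worth raising with the provider, but those requests use almost no memory by themselves. To find what is actually consuming RAM, a quick sketch (paths are CentOS 6 defaults; the lease file name depends on the interface):

        # what is using memory right now?
        free -m
        ps aux --sort=-rss | head -n 15

        # is a lease ever acknowledged?
        grep -E 'DHCP(ACK|NAK|REQUEST)' /var/log/messages | tail -n 20
        cat /var/lib/dhclient/dhclient-eth0.leases

        # MySQL and PHP are the usual suspects on a small Magento box
        mysql -e 'SHOW VARIABLES LIKE "innodb_buffer_pool_size";'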

    Read the article

  • Apache2 VirtualHost on Debian not working

    - by milo5b
    I am having some problems with Apache2 configuration. I have already tried to look for documentation on the web (Apache's site, Debian's site, here on serverfault, etc), but nothing really helps. I have tried different configurations, but my current configuration is the following (/etc/apache2/sites-available/default): <VirtualHost *:80> ServerAdmin [email protected] ServerName mysite.dev ServerAlias mysite.dev DocumentRoot /var/www/mysite.dev/httpdocs/ ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined </VirtualHost> <VirtualHost *:80> ServerAdmin [email protected] ServerName livesite.com ServerAlias www.livesite.com DocumentRoot /var/www/livesite.com/httpdocs/ <Directory /var/www/livesite.com/httpdocs/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined </VirtualHost> mysite.dev it's just an entry in hosts file on my client machine, while livesite.com it's an actual DNS record which would resolve to the same IP as the IP set in hosts file for mysite.dev. The problem is that when i try to type mysite.dev in my browser, it would automatically go to livesite.com. I tried to have different /etc/apache2/sites-enabled/ files (/etc/apache2/sites-enabled/mysite.dev , /etc/apache2/sites-enabled/livesite.com ) - and of course with the actual sites-available related files, but achieving the same results. I have tried to have a peak on error.log and access.log but there's nothing I can see. My httpd.conf contains: AccessFileName .htaccess And I have no /etc/apache2/conf.d/virtual.conf file. Any help would be greatly appreciated - if I did not provide enough info please let me know I will do my best to provide all necessary info. Thanks
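
    Before changing the config further it is worth asking Apache how it is matching names; on Apache 2.2 a missing NameVirtualHost *:80 line, or a site file that never got enabled, produces exactly this "everything lands on one vhost" behaviour. A short sketch (site file names are whatever you used under sites-available):

        # show how Apache parsed the vhosts and which one is the default
        apache2ctl -S

        # on Apache 2.2, name-based matching needs this (usually in ports.conf)
        grep -R "NameVirtualHost" /etc/apache2/

        # make sure both sites are enabled, then reload
        a2ensite mysite.dev livesite.com
        /etc/init.d/apache2 reload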

    Read the article

  • Microsoft Windows DHCP: Steering IPv4 clients into specific scopes based on MAC

    - by Easter Sunshine
    We have visitors on our campus who bring their own laptops and devices and use our wireless and wired networks. When we receive a copyright infringement notice (typically BitTorrenting), we are required to quarantine that MAC address so that it no longer has Internet access. No matter what website it tries to visit, it is sent to a web page explaining to the user that the device has been quarantined. We have thus far implemented this in ISC DHCP on Linux. We have multiple VLANs with one or more public-IP subnets and one RFC1918 quarantine subnet each. All clients are leased IPs in the public-IP subnet(s) unless you're in a list of known bad MACs. Then, you are sent to the quarantine subnet so that your traffic is unroutable on the Internet (you are isolated by subnet only, not by VLAN). We would like to move to Windows DHCP in light of the IPAM role but I cannot figure out how to replicate this in Windows DHCP 2012 (Assign DHCP IPs for specific MAC prefixes on Windows Server 2008 R2 suggests it was not possible in 2008 R2), even while using policies. So here's what I'd like: The administrator/help desk provides and maintains a list of MAC addresses that are to be quarantined. The DHCP server places those MACs into the quarantine subnet on the respective VLAN, no matter which VLAN the client is in. I don't think reservations would work: We currently have about 300 registered bad MACs and about 12 VLANs. I don't want to make 300 x 12 reservations nor have to add 12 reservations per new MAC address. Not to mention all of the quarantine subnets are /24s. We do not have NPS/NAC. You do not have to register your MAC address get network access. We use Cisco routers/switches. Thanks.

    Read the article

  • Postfix not working

    - by user1488723
    A while ago I installed the postfix mail server on my ubuntu 10.04 VPS. At the time it was working good but now it's just stopped working. I was trying to enable SASL authentification and somewhere it must have went really wrong. I've studied the postfix main.cf and done everything in an orderly fashion to ensure that it is nothing wrong. I also have Dovecot installed and configured dovecot.conf to run with Postfix. If I try to do telnet localhost 25 while logged in on the server I just get: Connection closed by foreign host. If I try to do telnet mail.example.com 25 "from the outside" I get: telnet: Unable to connect to remote host: No route to host And when I check the server log after the failed attempts I see this: Jun 28 15:49:31 msv postfix/smtpd[11839]: initializing the server-side TLS engine Jun 28 15:49:31 msv postfix/smtpd[11839]: connect from localhost.localdomain[127.0.0.1] Jun 28 15:49:31 msv postfix/smtpd[11839]: warning: SASL: Connect to /var/spool/postfix/private/auth failed: Connection refused Jun 28 15:49:31 msv postfix/smtpd[11839]: fatal: no SASL authentication mechanisms Jun 28 15:49:32 msv postfix/master[11598]: warning: process /usr/lib/postfix/smtpd pid 11839 exit status 1 Jun 28 15:49:32 msv postfix/master[11598]: warning: /usr/lib/postfix/smtpd: bad command startup -- throttling main.cf file looks like this: smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no append_dot_mydomain = no delay_warning_time = 4h myhostname = mail.example.com alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases mydomain = example.com myorigin = $mydomain mydestination = $mydomain relayhost = mynetworks = 127.0.0.1 mailbox_command = procmail -a "$EXTENSION" mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all smtpd_use_tls = yes smtpd_tls_loglevel = 2 smtpd_tls_cert_file = /etc/postfix/ssl/smtpd.crt smtpd_tls_key_file = /etc/postfix/ssl/smtpd.key smtpd_tls_CAfile = /etc/postfix/ssl/cacert.pem smtpd_sasl_auth_enable = yes smtpd_client_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unauth_destination smtpd_sender_restrictions = permit_sasl_authenticated, permit_mynetworks smtpd_recipient_restrictions = permit_sasl_authenticated, reject_unauth_destination broken_sasl_auth_clients = yes smtpd_sasl_type = dovecot smtpd_sasl_path = /var/spool/postfix/private/auth smtpd_sasl_security_options = noanonymous Dovecot.conf file looks like this: protocols = imap imaps disable_plaintext_auth = no log_timestamp = "%b %d %H:%M:%S " ssl = yes ssl_cert_file = /etc/postfix/ssl/smtpd.crt ssl_key_file = /etc/postfix/ssl/smtpd.key mail_location = maildir:~/mail mail_access_groups = mail auth_username_chars = abcdefghijklmnopqrstuvwxyz protocol imap { imap_client_workarounds = delay-newmail tb-extra-mailbox-sep } auth default { mechanisms = plain login passdb pam { } userdb passwd { } socket listen { client { path = /var/spool/postfix/private/auth user = postfix group = postfix mode = 0660 } } }
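
    The log line that matters is "Connect to /var/spool/postfix/private/auth failed: Connection refused": Postfix is configured for Dovecot SASL but nothing is listening on that socket, so smtpd bails out with "no SASL authentication mechanisms" and the master throttles it. A hedged checklist:

        # is dovecot running and creating the auth socket postfix expects?
        /etc/init.d/dovecot restart
        ls -l /var/spool/postfix/private/auth

        # the client socket path in dovecot.conf must match smtpd_sasl_path
        # (which is relative to the postfix queue directory)
        postconf smtpd_sasl_type smtpd_sasl_path

        # then restart postfix and watch the log while testing "telnet localhost 25"
        /etc/init.d/postfix restart
        tail -f /var/log/mail.log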

    Read the article

  • Either a bad nginx+php-fpm config, or nginx+php-fpm cannot handle the load?

    - by The Wolf
    I have wordpress installed in my server configured(hopefully with nginx+php-fpm+mariaDB). I am trying to import using wordpress importer a 1.5MB xml file. Everytime I try to upload it using the importer, it got cut of... meaning just blank screen result.. Here is my error log: actually I just posted 2 of the errors [error] 858#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.xx.xx, server: xxx.com, request: "GET xxxx.html HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "xxx.com" [error] 858#0: *13 connect() failed (111: Connection refused) while connecting to upstream, client: xxx.x.xx.xx, server: xxx.com, request: "GET xxxx.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "xxx.com" I don't know what is the reason why it can't process the wordpress export .xml. I already increased max_file_upload & etc., but nothing happens. Hope somebody can help me. Here are my conf: nginx.conf user nginx; worker_processes 8; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; server_tokens off; keepalive_timeout 65; fastcgi_read_timeout 500; #gzip on; client_max_body_size 2M; php-fpm.conf ;;;;;;;;;;;;;;;;;;;;; ; FPM Configuration ; ;;;;;;;;;;;;;;;;;;;;; ; All relative paths in this configuration file are relative to PHP's install ; prefix. ; Include one or more files. If glob(3) exists, it is used to include a bunch of ; files from a glob(3) pattern. This directive can be used everywhere in the ; file. include=/etc/php-fpm.d/*.conf ;;;;;;;;;;;;;;;;;; ; Global Options ; ;;;;;;;;;;;;;;;;;; [global] ; Pid file ; Default Value: none pid = /var/run/php-fpm/php-fpm.pid ; Error log file ; Default Value: /var/log/php-fpm.log error_log = /var/log/php-fpm/error.log ; Log level ; Possible Values: alert, error, warning, notice, debug ; Default Value: notice ;log_level = notice ; If this number of child processes exit with SIGSEGV or SIGBUS within the time ; interval set by emergency_restart_interval then FPM will restart. A value ; of '0' means 'Off'. ; Default Value: 0 ;emergency_restart_threshold = 0 ; Interval of time used by emergency_restart_interval to determine when ; a graceful restart will be initiated. This can be useful to work around ; accidental corruptions in an accelerator's shared memory. ; Available Units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;emergency_restart_interval = 0 ; Time limit for child processes to wait for a reaction on signals from master. ; Available units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;process_control_timeout = 0 ; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging. ; Default Value: yes daemonize = no ;;;;;;;;;;;;;;;;;;;; ; Pool Definitions ; ;;;;;;;;;;;;;;;;;;;; ; See /etc/php-fpm.d/*.conf [root@host etc]# vim php-fpm.conf [root@host etc]# vim php-fpm.conf ; Default Value: notice ;log_level = notice ; If this number of child processes exit with SIGSEGV or SIGBUS within the time ; interval set by emergency_restart_interval then FPM will restart. A value ; of '0' means 'Off'. 
; Default Value: 0 ;emergency_restart_threshold = 0 ; Interval of time used by emergency_restart_interval to determine when ; a graceful restart will be initiated. This can be useful to work around ; accidental corruptions in an accelerator's shared memory. ; Available Units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;emergency_restart_interval = 0 ; Time limit for child processes to wait for a reaction on signals from master. ; Available units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;process_control_timeout = 0 ; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging. ; Default Value: yes daemonize = no ;;;;;;;;;;;;;;;;;;;; ; Pool Definitions ; ;;;;;;;;;;;;;;;;;;;; ; See /etc/php-fpm.d/*.conf ps aux [root@host etc]# ps aux USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.1 2900 1380 ? Ss Jun02 0:00 init root 2 0.0 0.0 0 0 ? S Jun02 0:00 [kthreadd/9308] root 3 0.0 0.0 0 0 ? S Jun02 0:00 [khelper/9308] root 124 0.0 0.0 2464 576 ? S<s Jun02 0:00 /sbin/udevd -d root 460 0.0 0.1 35976 1308 ? Sl Jun02 0:00 /sbin/rsyslogd -i /var/run/syslogd.pid -c 5 root 474 0.0 0.0 8940 1028 ? Ss Jun02 0:00 /usr/sbin/sshd root 481 0.0 0.0 3264 876 ? Ss Jun02 0:00 xinetd -stayalive -pidfile /var/run/xinetd.pid root 491 0.0 0.1 6268 1432 ? S Jun02 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/host.busilak.com. mysql 584 0.1 6.8 679072 71456 ? Sl Jun02 0:04 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --use root 586 0.0 0.3 12008 3820 ? Ss Jun02 0:01 sshd: root@pts/0 root 629 0.0 0.0 9140 756 ? Ss Jun02 0:00 /usr/sbin/saslauthd -m /var/run/saslauthd -a pam -n 2 root 630 0.0 0.0 9140 520 ? S Jun02 0:00 /usr/sbin/saslauthd -m /var/run/saslauthd -a pam -n 2 root 645 0.0 0.1 12788 1928 ? Ss Jun02 0:01 sendmail: accepting connections smmsp 653 0.0 0.1 12576 1728 ? Ss Jun02 0:00 sendmail: Queue runner@01:00:00 for /var/spool/clientmqueue root 691 0.0 0.1 7148 1184 ? Ss Jun02 0:00 crond root 698 0.0 0.1 6272 1688 pts/0 Ss Jun02 0:00 -bash root 1006 0.0 0.0 7828 924 ? Ss 00:30 0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf nginx 1007 0.0 0.1 8156 1724 ? S 00:30 0:00 nginx: worker process nginx 1008 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process nginx 1009 0.0 0.1 8020 1356 ? S 00:30 0:00 nginx: worker process nginx 1011 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process nginx 1012 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process nginx 1013 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process nginx 1014 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process nginx 1015 0.0 0.1 8024 1344 ? S 00:30 0:00 nginx: worker process root 1030 0.0 0.2 25396 2904 ? Ss 00:30 0:00 php-fpm: master process (/etc/php-fpm.conf) apache 1031 0.0 1.9 40700 20624 ? S 00:30 0:00 php-fpm: pool www apache 1032 0.0 2.0 41924 21888 ? S 00:30 0:01 php-fpm: pool www apache 1033 0.0 1.9 41212 20848 ? S 00:30 0:01 php-fpm: pool www apache 1034 0.0 1.9 40956 20792 ? S 00:30 0:01 php-fpm: pool www apache 1035 0.0 2.0 41560 21556 ? S 00:30 0:02 php-fpm: pool www apache 1040 0.0 1.8 39292 19120 ? 
S 00:30 0:00 php-fpm: pool www root 1125 0.0 0.0 6080 1040 pts/0 R+ 01:04 0:00 ps aux netstat -l [root@host etc]# netstat -l Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 *:ssh *:* LISTEN tcp 0 0 localhost.localdomain:smtp *:* LISTEN tcp 0 0 localhost.locald:cslistener *:* LISTEN tcp 0 0 *:mysql *:* LISTEN tcp 0 0 *:http *:* LISTEN tcp 0 0 *:ssh *:* LISTEN Active UNIX domain sockets (only servers) Proto RefCnt Flags Type State I-Node Path unix 2 [ ACC ] STREAM LISTENING 60575947 /var/run/saslauthd/mux unix 2 [ ACC ] STREAM LISTENING 60574168 @/com/ubuntu/upstart unix 2 [ ACC ] STREAM LISTENING 60575873 /var/lib/mysql/mysql.sock Hope somebody can help me to figure out what is the problem.
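
    The "connect() failed (111: Connection refused) ... 127.0.0.1:9000" errors mean nginx could not reach php-fpm at those moments, and a WordPress import that ends in a blank page is very often a PHP limit (execution time or memory) rather than the 1.5 MB upload itself. A rough checklist, assuming stock php-fpm/php.ini locations:

        # is php-fpm really listening where nginx expects it?
        netstat -lnp | grep :9000
        tail -n 50 /var/log/php-fpm/error.log

        # limits the importer tends to hit; raise them in php.ini, e.g.
        #     upload_max_filesize = 8M
        #     post_max_size = 8M
        #     max_execution_time = 300
        #     memory_limit = 256M
        grep -E '^(upload_max_filesize|post_max_size|max_execution_time|memory_limit)' /etc/php.ini

        service php-fpm restart && service nginx reload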

    Read the article

  • Database mirroring login failure attempts on mirror server

    - by Chandan
    I have configured database mirroring between two servers about 40 miles apart. Server specifications: SQL Server 2008, Standard Edition, 64-bit; this is the same for the principal, mirror and witness. The configuration is high-safety with automatic failover. Initially we tested our .NET web application on both the principal and the mirror and made sure that the login is not orphaned. Things generally run fine, but sometimes on the mirror server I see failed login attempts: Login failed for user 'd0main\user'. Reason: Failed to open the explicitly specified database. [CLIENT: xx.xx.x.x] Message Error: 18456, Severity: 14, State: 38. This error appears 3-4 times a day but not more than that. My question to the experts is: if the principal is alive, why does the application try to connect to the mirror? The default time-out for a .NET web page is 30 seconds, so is it possible that the application tries to connect to the principal and, after 30 seconds, even if the principal is alive, assumes that it is dead and thus tries to open a connection to the mirror, where it fails? Please help me with this problem.

    Read the article

  • How do I increase the buffer size for domain sockets in OS X 10.6

    - by Chas. Owens
    In Linux I have no problem dumping tons of data into a domain socket, but the same code on OS X 10.6.2 blows up after about 65 records. The socket reader code looks like #!/usr/bin/perl use strict; use warnings; use IO::Socket; unlink "foo"; my $sock = IO::Socket::UNIX->new ( Local => 'foo', Type => SOCK_DGRAM, Timeout => 600, ) or die "Could not create socket: $!\n"; while (<$sock>) { chomp; print "[$_]\n"; } And the client code looks like #!/usr/bin/perl use strict; use warnings; use IO::Socket; my $sock = IO::Socket::UNIX->new ( Peer => 'foo', Type => SOCK_DGRAM, Timeout => 600, ) or die "Could not create socket: $!\n"; for my $i (1 .. 1_000_000) { print $sock "$i\n" or die $!; } close $sock; The error message I get is No buffer space available at write.pl line 15.. It seems fairly obvious that there is a difference in the buffer size between Linux and OS X, but I don't know how to set it OS X (or what the possible negative side effects might be).
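
    On OS X the limits for local (UNIX-domain) datagram sockets are exposed as sysctls, so one hedged option is simply to raise them; the other is to have the writer catch the "No buffer space available" error (ENOBUFS) and retry after a short sleep. A sketch of the sysctl route (names as on 10.6; values reset at reboot unless added to /etc/sysctl.conf):

        # current datagram limits for UNIX-domain sockets
        sysctl net.local.dgram.maxdgram net.local.dgram.recvspace

        # raise the receive buffer and, if needed, the maximum datagram size
        sudo sysctl -w net.local.dgram.recvspace=65536
        sudo sysctl -w net.local.dgram.maxdgram=16384

    Switching the Perl client and server from SOCK_DGRAM to SOCK_STREAM also sidesteps the per-datagram buffering entirely, if you do not need record boundaries.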

    Read the article

  • Merge changes in Microsoft Word documents

    - by Álvaro G. Vicario
    I'm using Microsoft Word 2002 to maintain some documentation. The documents are stored in a version control repository (Subversion) together with the source code it documents. My Subversion client (TortoiseSVN) comes with a little VBA script that allows to leverage the built-in revisions feature when merging different branches. In other words, when I want to copy changes from one document to another, Word compares both documents (source and target) and builds a third document that has the contents of the source doc tagged as revisions, so I can then review differences one by one and confirm or discard changes. While this is handy, it also means that making a single change to the source document forces me to review all the differences between both documents and discard all of them except the only actual change. My questions is... Do you know about an application or plug-in that's able to find the differences between two Word documents and apply those differences to a third document? (I know 2002 is very old but that's what my company gives me; I'm open to solutions that use newer versions though.)

    Read the article

  • Puppet Decentralized Setup

    - by paul.tw
    I want to migrate my existing Puppet setup from master/client to a decentralized solution. I know other solutions, such as Ansible are easier to setup for that purpose, but I really want to stay with Puppet. I found "supply_drop"(https://github.com/pitluga/supply_drop) on github, so I followed the instructions and did the following: rvm gemset create testing rvm use 1.9.3@testing gem install supply_drop The output is the following: [m@ms-MacBook-Pro:~ $ irb 1.9.3-p547 :001 require 'supply_drop' NameError: uninitialized constant Capistrano from /Users/m/.rvm/gems/ruby-1.9.3-p547@testing/gems/supply_drop-0.17.0/lib/supply_drop/tasks.rb:1:in `' from /Users/m/.rvm/rubies/ruby-1.9.3-p547/lib/ruby/site_ruby/1.9.1/rubygems/core_ext/kernel_require.rb:55:in `require' from /Users/m/.rvm/rubies/ruby-1.9.3-p547/lib/ruby/site_ruby/1.9.1/rubygems/core_ext/kernel_require.rb:55:in `require' from /Users/m/.rvm/gems/ruby-1.9.3-p547@testing/gems/supply_drop-0.17.0/lib/supply_drop.rb:10:in `' from /Users/m/.rvm/rubies/ruby-1.9.3-p547/lib/ruby/site_ruby/1.9.1/rubygems/core_ext/kernel_require.rb:135:in `require' from /Users/m/.rvm/rubies/ruby-1.9.3-p547/lib/ruby/site_ruby/1.9.1/rubygems/core_ext/kernel_require.rb:135:in `rescue in require' from /Users/m/.rvm/rubies/ruby-1.9.3-p547/lib/ruby/site_ruby/1.9.1/rubygems/core_ext/kernel_require.rb:144:in `require' from (irb):1 from /Users/m/.rvm/rubies/ruby-1.9.3-p547/bin/irb:12:in `' Since that doesn't work without problems, I was wondering which alternatives are there available to do the same. Do you have any suggestings?
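
    The NameError is a usage hint rather than a packaging problem: supply_drop is a Capistrano plugin, so it expects to be required from a Capfile with capistrano already loaded, not from a bare irb session. A minimal sketch of the usual wiring (task names are as documented in the supply_drop README; the host name is a placeholder):

        gem install capistrano supply_drop

        # minimal Capfile in the directory that holds your puppet manifests
        printf "require 'supply_drop'\nserver 'node1.example.com', :puppet\n" > Capfile

        # dry-run, then apply the manifests over plain ssh (no puppet master)
        cap puppet:noop
        cap puppet:apply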

    Read the article

  • Wireless disconnects at random after upgrade to Ubuntu 10.04

    - by Daniel Elessedil Kjeserud
    After upgrading my home server from Ubuntu 8.10 to 10.4 my wireless seemingly drops out, even though my IRC client keeps it's connection to the servers, so it looks like the machine just stops taking wireless requests. A ping will give a me this Request timeout for icmp_seq 27 ping: sendto: Host is down After a while the machine just starts responding again, without any interaction from me. When the machine comes back, this is what dmesg gives me [ 18.296288] wlan0: direct probe to AP 00:1b:63:22:a4:5f (try 1) [ 18.296350] wlan0: deauthenticating from 00:1b:63:22:a4:5f by local choice (reason=3) [ 18.296440] wlan0: direct probe to AP 00:1b:63:22:a4:5f (try 1) [ 18.298697] wlan0: direct probe responded [ 18.298706] wlan0: authenticate with AP 00:1b:63:22:a4:5f (try 1) [ 18.306836] wlan0: authenticated [ 18.306886] wlan0: associate with AP 00:1b:63:22:a4:5f (try 1) [ 18.309396] wlan0: RX AssocResp from 00:1b:63:22:a4:5f (capab=0x411 status=0 aid=2) [ 18.309402] wlan0: associated [ 18.310187] ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready [ 18.447742] apm: BIOS version 1.2 Flags 0x03 (Driver version 1.16ac) [ 18.447748] apm: overridden by ACPI. [ 19.163282] padlock: VIA PadLock not detected. [ 28.352022] wlan0: no IPv6 routers present kjes@brin:~$ lspci 02:07.0 Network controller: RaLink RT2561/RT61 rev B 802.11g It's on a wireless network with WPA2, the machine worked without any problems on the same wireless network since Ubuntu 8.10 was the most resent version, and there have been no changes to my network recently. Even though the server drops out, everything else on the network keeps working like normal.
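
    With rt61-based cards on 10.04 the usual suspects are wireless power management and the rt61pci driver itself, so a hedged first step is to rule those out and capture what the driver says at the moment of a drop:

        # is power management on? if so, turn it off and see whether the drops stop
        iwconfig wlan0
        sudo iwconfig wlan0 power off

        # around the time of a drop, check the driver and wpa_supplicant
        dmesg | tail -n 30
        tail -f /var/log/syslog | grep -iE 'wlan0|wpa'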

    Read the article

  • Problems installing MySQL-python via yum / missing dependency / incompatibility problem?

    - by bs0
    I have come up against problems installing MySQL-python via yum. Our server is running Centos 5.5 and MySQL Version 5.1.45, Python-dev is installed. Yum complains about the missing dependency libmysqlclient_r.so.15: Missing Dependency: libmysqlclient_r.so.15()(64bit) is needed by package MySQL-python-1.2.1-1.x86_64 (base) The server is up to date and the packages mysql mysql-devel python-devel are installed. The missing dependency is nowhere on the system: locate libmysqlclient /usr/lib64/libmysqlclient.so /usr/lib64/libmysqlclient.so.15 /usr/lib64/libmysqlclient.so.16 /usr/lib64/libmysqlclient.so.16.0.0 /usr/lib64/libmysqlclient_r.so /usr/lib64/libmysqlclient_r.so.16 /usr/lib64/libmysqlclient_r.so.16.0.0 /usr/lib64/mysql/libmysqlclient.a /usr/lib64/mysql/libmysqlclient.la /usr/lib64/mysql/libmysqlclient.so /usr/lib64/mysql/libmysqlclient_r.a /usr/lib64/mysql/libmysqlclient_r.la /usr/lib64/mysql/libmysqlclient_r.so /usr/local/cpanel/lib64/libmysqlclient.so.14 rpm -qa | grep -i mysql MySQL-devel-5.1.45-0.glibc23 MySQL-bench-5.0.89-0.glibc23 MySQL-shared-5.1.45-0.glibc23 MySQL-server-5.1.45-0.glibc23 MySQL-test-5.1.45-0.glibc23 MySQL-client-5.1.45-0.glibc23 The Python version is python-2.4.3-27.el5.x86_64: Python 2.4.3 (#1, Sep 3 2009, 15:37:37) [GCC 4.1.2 20080704 (Red Hat 4.1.2-46)] on linux2 Any suggestions would be greatly appreciated.
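
    The packaged MySQL-python was built against the MySQL 5.0 client library (libmysqlclient*.so.15), while the 5.1 RPMs you have only ship the .so.16 sonames. Two hedged ways out: install the shared-compat package that carries the old sonames alongside the new ones, or skip the yum package and build MySQL-python against the installed 5.1 headers:

        # option 1: old client sonames alongside the new ones
        yum install MySQL-shared-compat

        # option 2: build the module against the 5.1 client already installed
        yum install gcc python-setuptools   # python-devel and MySQL-devel are already there
        easy_install MySQL-python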

    Read the article

  • Cannot access firewalled JBoss server from Internet Explorer

    - by Simon Gibbs
    I've produced a website for a client One Single Menu using JBoss and hosted it on Rackspace Cloud Servers running Ubuntu's Maverick Meerkat. Following advice, I esablished some iptables rule to protect jboss: iptables -I INPUT 1 -i lo -j ACCEPT iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT iptables -A INPUT -p tcp --dport 22 -j ACCEPT iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080 iptables -I INPUT -p tcp --dport 8080 -j ACCEPT iptables -t nat -A OUTPUT -o lo -p tcp --dport 80 -j REDIRECT --to-port 8080 iptables -A INPUT -j DROP Now, several versions of IE on several computers on at least two different ISPs cannot access the onesinglemenu.com. Curl from within the datacenter, Firefox, and Safari on the same ISPs can all access the server fine. I even tried IE and Firefox on the same computer and IE failed but Firefox worked. The error behaviour is that IE hangs on connecting without reporting an error, even after a minute or so. No page is displayed at all. I find it quite odd that I'm having a browser specific connection issue, but it appears to be the case. Help!
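
    A hedged guess: the rule set ends with a blanket DROP and never accepts ICMP, and broken path-MTU discovery is a classic "works in Firefox, hangs in IE on some networks" symptom. It costs nothing to allow ICMP and re-test, and to confirm from outside that port 80 itself answers regardless of browser:

        # allow ICMP ahead of the final DROP rule, then re-test from an IE machine
        iptables -I INPUT -p icmp -j ACCEPT
        iptables -L INPUT -n -v --line-numbers

        # from an outside host: does the raw HTTP port answer at all?
        curl -v http://onesinglemenu.com/ >/dev/null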

    Read the article

  • YUM error. Is this a cert error?

    - by Julia Roberts
    Nov 13 13:38:57 host abrt: detected unhandled Python exception in '/usr/bin/yum' Nov 13 13:38:57 host abrtd: New client connected Nov 13 13:38:57 host abrt-server[3508]: Saved Python crash dump of pid 3151 to /var/spool/abrt/pyhook-2012-11-13-13:38:57-3151 Nov 13 13:38:57 host abrtd: Directory 'pyhook-2012-11-13-13:38:57-3151' creation detected Nov 13 13:38:57 host abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta Nov 13 13:38:57 host abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-legacy-former Nov 13 13:38:57 host abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-legacy-release Nov 13 13:38:57 host abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-legacy-rhx Nov 13 13:38:57 host abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release Nov 13 13:38:57 host abrtd: Package 'yum' isn't signed with proper key Nov 13 13:38:57 host abrtd: 'post-create' on '/var/spool/abrt/pyhook-2012-11-13-13:38:57-3151' exited with 1 Nov 13 13:38:57 host abrtd: Corrupted or bad directory /var/spool/abrt/pyhook-2012-11-13-13:38:57-3151, deleting There is also nothing in the crash dump file. Ideas? yum update Loaded plugins: fastestmirror, rhnplugin, security An error has occurred: Internal Server Error See /var/log/up2date for more information Is yum broken
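
    The abrt lines about unsigned packages are a side effect of the crash handler, not the cause; the part that stops yum is the "Internal Server Error" coming back from RHN. A hedged sequence for untangling it:

        # what did RHN actually return?
        tail -n 50 /var/log/up2date

        # import the release key abrt complains about, clear metadata, retry
        rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
        yum clean all
        yum update

        # if the server-side error persists, re-register the system with RHN
        rhn_register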

    Read the article

  • How to find the real IP to which IPVS is routing a virtual IP

    - by Wayne Conrad
    I'm trying to find a problem server hiding behind a virtual IP (using LVS/ipvs). I've got a test program that sends requests to the virtual IP until it gets the bad response, but how can I tell to which real IP a request to the virtual IP got routed? On the box doing the virtual IP magic, here's the virtual IP configuration (for the service I care about): IP Virtual Server version 1.2.1 (size=4096) Prot LocalAddress:Port Scheduler Flags -> RemoteAddress:Port Forward Weight ActiveConn InActConn ... TCP 10.1.0.254:5025 nq -> 10.1.0.5:5025 Route 1 0 1 -> 10.1.0.6:5025 Route 1 0 5 -> 10.1.0.7:5025 Route 1 0 2 -> 10.1.0.9:5025 Local 1 0 3 -> 10.1.0.11:5025 Route 1 0 3 ... My client program is sending TCP requests to 10.1.0.254:5025, usually getting a good response but sometimes a bad response. With this few servers, I could send my request to each server in turn until I discover the culprit, but I wonder if that technique will scale as we add servers. What means exist for me to find out where requests got routed? Kernel: Linux 2.6.32 OS: Debian testing (whatever that's called these days). ipvsadm is version 1.25, compiled with ipvs v1.2.1
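
    ipvsadm can dump the live connection table, which shows each client address, the virtual service it hit and the real server it was scheduled to, so the test client can look up its own connection as soon as it sees a bad reply (the client IP below is a placeholder):

        # connection table: client -> virtual address -> real server
        ipvsadm -L -c -n | grep '10.1.0.254:5025'

        # narrow it down to the test client's source address
        ipvsadm -L -c -n | grep '192.0.2.10'

        # per-real-server packet/byte counters can also single out the odd host
        ipvsadm -L -n --stats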

    Read the article

  • Serving and caching content from Amazon S3 with Tomcat

    - by Rob
    Hi all, We're looking to serve a range of content using Amazon S3 as a store for the content and Tomcat to host the web application. The content is divided into free and paid for content. We intend to authenticate the users when they access the web application running in Tomcat. Based around their authentication we are able to tell if the user has access to paid for content or simply free stuff. So I envision the flow of a request being something like this: Authenticated request to Tomcat If user is "paid" user, display links to premium content Direct requests for paid content back through Tomcat to prevent direct access to it by non-paying users. Tomcat makes request to S3 through a web cache to keep our costs down Content is returned to user. As we have to pay for each request to S3, I'd ideally like to cache content locally to the Tomcat instance after it has been requested for the first time to keep costs to a minimum and to speed things up. I would also like to be able to invalidate this cache if we publish fresh content to S3. So to confirm my proposal: Client Request - Tomcat - Web Cache - S3 To invalidate the cache, I was thinking of using something like PubSubHubbub with the cache waiting for updates to the feed for content that it should invalidate. I'd appreciate some general feedback on this approach as I've no real experience of caching and I'm sure I've made some invalid assumptions. I'd also appreciate any recommendations for caching technologies. Thanks.

    Read the article

  • Poor SSL performance with vsftpd

    - by petrus
    I'm trying to tweak vsftpd to achieve maximum performance for my usage: I have only one or two clients that connect to the server. File size is between ~15MB and 1GB. A typical transfer batch represents between 1 and 2GB of data. For testing purposes, I'm using a tmpfs on both sides (thus eliminating any disk bottleneck) with a single 1GB file. When SSL is disabled, performance is good, with a transfer rate of ~120MB/s (reaching the limits of gigabit networking). With SSL enabled only for control traffic (and not data traffic), performance drops to about 112MB/s, which is still within acceptable limits. However, when SSL is enabled for data flows, the transfer speed drops dramatically: 6.7MB/s using 3DES & SHA (ssl_ciphers=DES-CBC3-SHA in vsftpd.conf) 16MB/s using DES & SHA (ssl_ciphers=DES-CBC-SHA) I didn't test other ciphers, but from what I can see of the CPU usage during the transfer, it seems that vsftpd is only using a single CPU/core per client. While this may be fine for large FTP sites with hundreds of clients, I'd like to avoid this behavior and use more resources on the server. On a side note, if you have any ideas regarding other openssl ciphers...
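
    Single DES and 3DES are among the slowest ciphers OpenSSL has, and vsftpd does the TLS work in one process per session, so the quickest win is usually just choosing a faster cipher for the data connection; a sketch, assuming your clients accept AES:

        # compare raw cipher throughput on this CPU
        openssl speed des-cbc des-ede3 aes-128-cbc rc4

        # in vsftpd.conf, prefer a fast cipher, e.g.
        #     ssl_ciphers=AES128-SHA
        # then restart vsftpd and repeat the tmpfs transfer test
        service vsftpd restart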

    Read the article

  • Internet-based sync software that will keep working after Windows Live Sync stops doing PC-to-PC syncs?

    - by Warren P
    According to the wikipedia page, Microsoft Live Sync will shortly stop offering the PC-to-PC sync service. There are lots of apps to sync two PCs on the same LAN, but I want to sync two PCs that are in different cities, across the internet, traversing two different NATs, and that requires some kind of service running in the internet that both connect into. There is already a few questions about syncing folders and files, but this is not a duplicate because none of them answer this basic question: Microsoft Live Sync works better than RSYNC, or any of the linked SYNC solutions in any of the "not really duplicates" because it works even when the two PCs have NAT and firewalls between them that forbid direct connectivity, because Windows Live Sync has a free always-on internet server that all the client PCs connect into. I'm looking for a FREE (no-fees) Microsoft Live Sync work-alike PC-to-PC sync solution that works between PCs and Macs, at least, as well as between PCs, and works behind NAT and firewalls at least as well as Microsoft's solution. (Note that Microsoft's solution makes only outbound socket calls to a microsoft server, so this solution must necessarily include a server-hub component that is hosted publically on a free site and which does not require that I set up and manage and pay for my own public internet hosting site) Hint: None of the answers in the linked duplicate are equivalent (PureSync,FreeFileSync,BestSync 2010,SyncButler,Comodo BackUp,QuickShadow,Gbridge) in that none of them work for the PC to Mac situation, where firewalls and nats prevent direct connection, or else they require money to be paid. When Microsoft Live Sync / Live Mesh finally kills direct PC-to-PC mode, the limitation will be that you will have to pay for more than 25 GB of cloud service, and you can then only sync PC #1 to PC #2 if you first sync to the cloud, then down to other clients. I can currently sync 100 gb of data from one computer to another, only temporarily "moving the data" through Microsoft's data servers without using up my Skydrive storage quota.

    Read the article

  • SQL Server 2008 Cluster Installation - First network name always fails

    - by boflynn
    I'm testing failover clustering in Windows Server 2008 to host a SQL Server 2008 installation using this installation guide. My base cluster is installed and working properly, as well as clustering the DTC service. However, when it comes time to install SQL Server, my first attempt at installation always fails with the same message and seems to "taint" the network name. For example, with my previous cluster attempt, I was installing SQL Server as VSQL. After approximately 15 attempts of installation and trying to resolve the errors, e.g. changing domain accounts for SQL, setting SPNs, etc., I typoed the network name as VQSL and the installation worked. Similarly on my current cluster, I tried installing with the SQL service named PROD-C1-DB and got the same errors as last time until I tried changing the name to anything else, e.g. PROD-C1-DB1, SQL, TEST, etc., at which point the install works. It will even install to VSQL now. While testing, my install routine was: Run setup.exe from patched media, selecting appropriate options After the install fails, I'd chose "Remove node from a SQL Server failover cluster" and remove the single, failed, node Attempt to diagnose problem, inspect event logs, etc. Delete the computer account that was created for the SQL Service from Active Directory Delete the MSSQL10.MSSQLSERVER folder from the shared data drive The error message I receive from the SQL Server installer is: The following error has occurred: The cluster resource 'SQL Server' could not be brought online. Error: The group or resource is not in the correct state to perform the requested operation. (Exception from HRESULT: 0x8007139F) Along with hundreds of the following errors in the Application event log: [sqsrvres] checkODBCConnectError: sqlstate = 28000; native error = 4818; message = [Microsoft][SQL Server Native Client 10.0][SQL Server]Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. System configuration notes: Windows Server 2008 Enterprise Edition x64 SQL Server 2008 Enterprise Edition x64 using slipstreamed SP1+CU1 media Dell PowerEdge servers Fibre attached storage

    Read the article
