Search Results

Search found 3548 results on 142 pages for 'unix'.


  • Turn off gzip for a location in Nginx

    - by Nyxynyx
    How can gzip be turned off for a particular location and all its sub-directories? My main site is at http://mydomain.com and I want to turn gzip off for both http://mydomain.com/foo and http://mydomain.com/foo/bar. gzip is turned on in nginx.conf. I tried turning gzip off as shown below, but the response headers in Chrome's dev tools still show Content-Encoding: gzip. How should gzip/output buffering be disabled properly? My attempt: server { listen 80; server_name www.mydomain.com mydomain.com; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; root /var/www/mydomain/public; index index.php index.html; location / { gzip on; try_files $uri $uri/ /index.php?$args ; } location ~ \.php$ { fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; fastcgi_read_timeout 300; } location /foo/ { gzip off; try_files $uri $uri/ /index.php?$args ; } }
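
    One thing worth checking: a request such as /foo/index.php (and the /index.php fallback from try_files) is matched by the regex location ~ \.php$, not by location /foo/, so the PHP response still inherits gzip on from the server level. A possible workaround is to nest a PHP handler with gzip off inside the /foo/ block; this is only a sketch reusing the paths and socket from the question, not a verified fix:

        location /foo/ {
            gzip off;
            try_files $uri $uri/ /index.php?$args;

            # PHP requests under /foo/ are handled here instead of falling
            # through to the outer \.php$ location that still has gzip on
            location ~ \.php$ {
                gzip off;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_read_timeout 300;
            }
        }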

    Read the article

  • PostgreSQL pg_hba.conf with "password" auth wouldn't work with PHP pg_connect?

    - by tftd
    I've recently experimented with the settings in pg_hba.conf. I read the PostgreSQL documentation and I thought that the "password" auth method is what I want. There are many people who have access to the server PostgreSQL is running on, so I don't want the "trust" method. So I changed it. But then PHP stopped working with the database. The message I get is "Warning: pg_connect(): Unable to connect to PostgreSQL server: FATAL: password authentication failed for user "myuser" in /my/path/to/connection/class.php on line 35". It is kind of strange because I can connect via phppgadmin without any problems and also I can connect from my home computer with psql - again without any problems. This is my pg_hba.conf: # TYPE DATABASE USER CIDR-ADDRESS METHOD # "local" is for Unix domain socket connections only local all all password # IPv4 local connections: host all all 127.0.0.1/32 password # IPv6 local connections: host all all ::1/128 password The connection string I'm using with pg_connect is: $connect_string = "host=localhost port=5432 dbname=mydbname user=auser password=apassword"; $dbConnection = pg_connect($connection_string); Does anybody know why this is happening? Did I misconfigure something?
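
    Worth a second look before touching the server config: the snippet above builds $connect_string but passes $connection_string to pg_connect(), so the call may not be receiving those credentials at all. A minimal sketch of the intended call, with the values copied from the question:

        <?php
        // one variable name throughout; an undefined $connection_string would
        // hand pg_connect() an empty string instead of these credentials
        $connect_string = "host=localhost port=5432 dbname=mydbname user=auser password=apassword";
        $dbConnection = pg_connect($connect_string);
        if ($dbConnection === false) {
            die('pg_connect failed - check the password and the pg_hba.conf method');
        }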

    Read the article

  • append $myorigin to localpart of 'from', append different domain to localpart of incomplete recipient address

    - by PJ P
    We have been having some trouble getting Postfix to behave in a very specific fashion in which sender and recipient addresses with only a localpart (i.e. no @domain) are handled differently. We have a number of applications that use mailx to send messages. We would like to know the username and hostname of the sending party. For example, if root sends an email from db001.company.local, we would like the email to be addressed from [email protected]. This is accomplished by ensuring $myorigin is set to $myhostname. We also want unqualified recipients to have a different domain appended. For example, if a message is sent to 'dbadmin' it should qualify to '[email protected]'. However, by the nature of Postfix and $myorigin, an unqualified recipient would instead qualify to [email protected]. We do not want to adjust the aliases on all servers to forward appropriately. (In fact, not every possible recipient has an entry in /etc/passwd.) All company employees have mailboxes on Exchange, which Postfix eventually routes to, and no local Linux/Unix mailboxes are used or accessed. We would love to tell our application owners to ensure they use a fully qualified email address for all recipients, but the powers that be dictate that any negligence must be accommodated. If we were to keep $myorigin equal to $myhostname, we could resolve this issue by having an entry such as the following in 'recipient_canonical_maps': @$myorigin @company.com However, unfortunately, we cannot use variables in these map files. We also want to avoid having to manually enter and maintain the actual hostname in 'recipient_canonical_maps' for each server. Perhaps once our servers are 'puppetized' we can dynamically adjust this file, but we're not there yet. After an afternoon of fiddling I've decided to reach out. Any thoughts? Thanks in advance.
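
    A sketch of the workaround hinted at above, using a pcre table so the left-hand side can at least be a pattern rather than a literal lookup key. The hostname still has to be baked into the file per server (exactly the maintenance burden the question wants to avoid), so treat this as an assumption-laden stopgap until the file can be templated:

        # main.cf
        recipient_canonical_classes = envelope_recipient, header_recipient
        recipient_canonical_maps = pcre:/etc/postfix/recipient_canonical.pcre

        # /etc/postfix/recipient_canonical.pcre
        # db001.company.local is this host's $myhostname; rewrite only
        # recipients that were qualified with the local hostname, leaving
        # already-correct addresses untouched
        /^(.+)@db001\.company\.local$/    ${1}@company.com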

    Read the article

  • How to get rid of "Maxback Engine" for good?

    - by Jonik
    I used to have a Maxtor Shared Storage II network drive; it broke down long ago already. (Later I tried to recover some data from it, and partially succeeded, but haven't yet fully documented it on that question.) Anyway, I just noticed there are still some lingering bits remaining of the (thoroughly crappy) software that came with the Maxtor device: a background process called "MaxBack Engine". I googled around a bit and found something related but not very useful: http://www.straitmac.com/jforum/posts/list/600.page http://discussions.apple.com/thread.jspa?threadID=725692 Under /Applications I found "Maxtor EasyManage.app" which I used to use for controlling the drive, and showed it some "rm -rf". Before deleting, I noted that the bundle did contain "MaxBack Engine.app" under Content/Resources. But still, after reboot, the "MaxBack Engine" process is back. I did notice though that it only appears when logging in with my usual user account; with another account it wasn't launched. So, dear Mac gurus, what could I do about this pest? I guess I could fall back to some Unix hackery and write a cron job that kills any process with that name, but obviously it'd be nicer to be able to clean up from my computer everything left behind by Maxtor's piece of software.
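
    Since the process only appears for one user, a per-user launch mechanism is the likely culprit. A few standard places to look (which of them Maxtor actually used is an assumption):

        # per-user and system-wide auto-start locations on Mac OS X
        ls ~/Library/LaunchAgents
        ls /Library/LaunchAgents /Library/LaunchDaemons /Library/StartupItems
        # GUI equivalent for the per-user case:
        # System Preferences > Accounts > (your user) > Login Items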

    Read the article

  • Can't write to samba share

    - by Tiddo
    I'm trying to set up a Samba file server, but whatever I do I can't get write access to work (reading works fine). This is my current situation: I have a local fileserver with 3 harddisks mounted at /mnt/share/disk<nr>. Two of these use the ext4 filesystem, the third one is NTFS. This file server runs Fedora 18 32-bit. The root folders of these harddisks are owned by superman:superman, and testparm outputs the following: [global] workgroup = WORKGROUP netbios name = FILE_SERVER server string = Samba Server Version %v interfaces = lo, eth0, 192.168.123.191/8 log file = /var/log/samba/log.%m max log size = 50 unix extensions = No load printers = No idmap config * : backend = tdb hosts allow = 192.168.123. cups options = raw wide links = Yes [share] comment = Home Directories path = /home/share/ write list = superman, @users force user = superman read only = No create mask = 0777 directory mask = 0777 inherit permissions = Yes guest ok = Yes I've tried a lot to get this to work: the disks are chmodded to 777, I've tried turning off SELinux, I've added the samba_share_t label to the disks and as can be seen in the above output I tried to make the smb config as permissive as I could, but still I cannot write to the share (tried from Windows 7 and another Fedora installation). What can I try to be able to write to the shares? EDIT: The replies I got so far are mostly concerned with the smb.conf. I have however tried a lot of different setups, ready-made configs, and solutions to similar problems for the smb.conf file, so I suspect that the real problem is somewhere else.
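
    Two things stand out, neither certain to be the cause: the share exports path = /home/share/ while the disks are said to be mounted under /mnt/share/disk<nr>, so it is worth confirming which tree is actually being exported; and on Fedora the SELinux booleans for Samba matter as much as the file labels. A sketch of the usual checks (adjust the path to wherever the share really lives):

        getsebool -a | grep samba
        setsebool -P samba_export_all_rw on
        # persistent labelling instead of a one-off chcon:
        semanage fcontext -a -t samba_share_t "/mnt/share(/.*)?"
        restorecon -Rv /mnt/share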

    Read the article

  • What am I doing wrong in my config for MySql?

    - by Knight Hawk3
    When I load my my.conf with the config at the bottom Mysql fails to start and prints no errors. I am running Arch Linux (Updated) with the latest MySQL (5.5) and the latest nginx (Well latest in the repository, Not sure how to check. Only installed it today) I will give you any info you ask for. Thanks for helping! # The following options will be passed to all MySQL clients [client] #password = your_password port = 3306 socket = /var/run/mysqld/mysqld.sock # Here follows entries for some specific programs # The MySQL server [mysqld] port = 3306 socket = /var/run/mysqld/mysqld.sock skip-locking key_buffer = 16K max_allowed_packet = 1M table_cache = 4 sort_buffer_size = 64K read_buffer_size = 256K read_rnd_buffer_size = 256K net_buffer_length = 2K thread_stack = 64K # Don’t listen on a TCP/IP port at all. This can be a security enhancement, # if all processes that need to connect to mysqld run on the same host. # All interaction with mysqld must be made via Unix sockets or named pipes. # Note that using this option without enabling named pipes on Windows # (using the “enable-named-pipe” option) will render mysqld useless! # #skip-networking server-id = 1 # Uncomment the following if you want to log updates #log-bin=mysql-bin # Uncomment the following if you are NOT using BDB tables skip-bdb # Uncomment the following if you are using InnoDB tables #innodb_data_home_dir = /var/lib/mysql/ #innodb_data_file_path = ibdata1:10M:autoextend #innodb_log_group_home_dir = /var/lib/mysql/ #innodb_log_arch_dir = /var/lib/mysql/ # You can set .._buffer_pool_size up to 50 – 80 % # of RAM but beware of setting memory usage too high #innodb_buffer_pool_size = 16M #innodb_additional_mem_pool_size = 2M # Set .._log_file_size to 25 % of buffer pool size #innodb_log_file_size = 5M #innodb_log_buffer_size = 8M #innodb_flush_log_at_trx_commit = 1 #innodb_lock_wait_timeout = 50 skip-innodb [mysqldump] quick max_allowed_packet = 16M [mysql] no-auto-rehash # Remove the next comment character if you are not familiar with SQL #safe-updates [isamchk] key_buffer = 1M sort_buffer_size = 1M [myisamchk] key_buffer = 1M sort_buffer_size = 1M [mysqlhotcopy] interactive-timeout So what is my silly error?
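
    A sketch of how to make the failure visible, plus the legacy options that MySQL 5.5 is known to choke on; whether one of them is the trigger here is an assumption until the error output confirms it:

        # run the server in the foreground so the rejected option is printed
        # (the config path is an assumption for Arch)
        sudo -u mysql mysqld --defaults-file=/etc/mysql/my.cnf

        # pre-5.1 directives that 5.5 no longer accepts or has renamed:
        #   skip-locking  ->  skip-external-locking
        #   skip-bdb      ->  remove entirely (the BDB engine is gone)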

    Read the article

  • Enabling mod_wsgi in Apache for a Django app on Gentoo

    - by hobbes3
    I installed Apache, Django, and mod_wsgi on Gentoo using emerge (on Amazon EC2). I know that the mod_wsgi is configured in /etc/apache2/modules.d/70_mod_wsgi.conf: <IfDefine WSGI> LoadModule wsgi_module modules/mod_wsgi.so </IfDefine> # vim: ts=4 filetype=apache So in my /etc/conf.d/apache I added the WSGI module: APACHE2_OPTS="-D DEFAULT_VHOST -D INFO -D SSL -D SSL_DEFAULT_VHOST -D LANGUAGE -D WSGI" But when I try to list the loaded module, mod_wsgi isn't listed. root ~ # apache2 -M | grep wsgi Syntax OK I also know that mod_wsgi isn't loading properly because the Apache configuration file doesn't recognize WSGIScriptAlias. By the way for Django to work I need to include a custom Apache configuration file. Where should I insert the line below? Include "/var/www/localhost/htdocs/mysite/apache/apache_django_wsgi.conf" I currently have that in the httpd.conf file but I feel like that file will get reseted whenever I upgrade Gentoo or related package. EDIT: it seems the mod_wsgi file is located in /usr/lib64/apache2/modules/mod_wsgi.so. Here is my detailed Apache settings: root@ip-99-99-99-99 /usr/portage/eclass # apache2 -V Server version: Apache/2.2.21 (Unix) Server built: Mar 7 2012 06:52:30 Server's Module Magic Number: 20051115:30 Server loaded: APR 1.4.5, APR-Util 1.3.12 Compiled using: APR 1.4.5, APR-Util 1.3.12 Architecture: 64-bit Server MPM: Prefork threaded: no forked: yes (variable process count) Server compiled with.... -D APACHE_MPM_DIR="server/mpm/prefork" -D APR_HAS_SENDFILE -D APR_HAS_MMAP -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled) -D APR_USE_SYSVSEM_SERIALIZE -D APR_USE_PTHREAD_SERIALIZE -D APR_HAS_OTHER_CHILD -D AP_HAVE_RELIABLE_PIPED_LOGS -D DYNAMIC_MODULE_LIMIT=128 -D HTTPD_ROOT="/usr" -D SUEXEC_BIN="/usr/sbin/suexec" -D DEFAULT_PIDLOG="/var/run/httpd.pid" -D DEFAULT_SCOREBOARD="logs/apache_runtime_status" -D DEFAULT_LOCKFILE="/var/run/accept.lock" -D DEFAULT_ERRORLOG="logs/error_log" -D AP_TYPES_CONFIG_FILE="/etc/apache2/mime.types" -D SERVER_CONFIG_FILE="/etc/apache2/httpd.conf"
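
    One detail that can make the module look missing when it isn't: the defines in /etc/conf.d/apache are only passed in by the init script, so a bare apache2 -M on the command line never sees -D WSGI. A sketch of how to check and reload (and, per the usual Gentoo layout, the Django Include line is probably safer in a file under /etc/apache2/vhosts.d/ than in httpd.conf, though that placement is a suggestion rather than a requirement):

        # pass the define explicitly when inspecting from a shell
        apache2 -D WSGI -M | grep -i wsgi

        # then test and restart through the init script, which does read
        # APACHE2_OPTS from /etc/conf.d/apache
        /etc/init.d/apache2 configtest
        /etc/init.d/apache2 restart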

    Read the article

  • Configure php mail() on Windows/IIS

    - by Adam Tuttle
    I have a Windows Server 2003 / IIS web server running various application servers, and ended up begrudgingly adding PHP into the mix. I know Win/IIS isn't the ideal environment for PHP, but it's what I've got and I need to make it work. From phpinfo(): Configuration File (php.ini) Path: C:\WINDOWS Loaded Configuration File: C:\php\php.ini From C:\php\php.ini: [mail function] ; For Win32 only. SMTP = localhost smtp_port = 25 ; For Win32 only. ;sendmail_from = [email protected] ; For Unix only. You may supply arguments as well (default: "sendmail -t -i"). ;sendmail_path = ; Force the addition of the specified parameters to be passed as extra parameters ; to the sendmail binary. These parameters will always replace the value of ; the 5th parameter to mail(), even in safe mode. ;mail.force_extra_parameters = Lastly, I have IIS setup to run an SMTP relay that allows connection and relay, but only from localhost. But when I try something that uses mail(), I get this error: The e-mail could not be sent. Possible reason: your host may have disabled the mail() function... Any ideas?
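
    One commonly overlooked piece on Windows: mail() needs a sender, either from sendmail_from in php.ini or a From: header supplied in the call, and it needs the relay on localhost:25 to accept mail from the web server. A sketch of the php.ini block with the sender filled in (the address is a placeholder, not a value from this server):

        [mail function]
        ; For Win32 only.
        SMTP = localhost
        smtp_port = 25
        sendmail_from = webserver@yourdomain.example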

    Read the article

  • How to troubleshoot when one has no idea where to start?

    - by Chris Walton
    I am looking for hints, tips and answers on how to get started on troubleshooting when: The problem is intermittent The problem could lie literally anywhere - operating system; free source software; my own software developments; purchased software; crumbs on the keyboard; the specific combination of software I am currently running; Maxwell's demon; the little blue men actually running the machine have gone on strike; etc. I have expertise only in a few of the areas that are potential candidates for the cause of the problem. The specific problem I am having is detailed below as an example, but I am not seeking answers to my current problem, but rather where and how to start on tackling such problems. I am currently encountering a problem with my new machine. On a few occasions the machine has just frozen; not accepting keystrokes, mouseclicks, or anything except the power on/off switch. Invariably I have been merely browsing the web; I have had a few (<= 6 other applications) running. None of these applications are major; and represent a mix of commercial programs and open source programs, typically migrated from Unix of some variety. My machine is a Windows 7 I7 quad core laptop.

    Read the article

  • Nginx won't send POST to fastcgi backend, but GET works fine?

    - by xyld
    Not sure why, but it is happy sending a GET to the fastcgi backend (Mercurial hgwebdir in this case), but simply resorts to the filesystem if the request is a POST. Relevant parts of nginx.conf: location / { root /var/www/htdocs/; index index.html; autoindex on; } location /hg { fastcgi_pass unix:/var/run/hg-fastcgi.socket; include fastcgi_params; if ($request_uri ~ ^/hg([^?#]*)) { set $rewritten_uri $1; } limit_except GET { allow all; deny all; auth_basic "hg secured repos"; auth_basic_user_file /var/trac.htpasswd; } fastcgi_param SCRIPT_NAME "/hg"; fastcgi_param PATH_INFO $rewritten_uri; # for authentication fastcgi_param AUTH_USER $remote_user; fastcgi_param REMOTE_USER $remote_user; #fastcgi_pass_header Authorization; #fastcgi_intercept_errors on; } GET's work fine, but POST delivers this error to the error_log: 2010/05/17 14:12:27 [error] 18736#0: *1601 open() "/usr/html/hg/test" failed (2: No such file or directory), client: XX.XX.XX.XX, server: domain.com, request: "POST /hg/test HTTP/1.1", host: "domain.com" What could possibly be the issue? I'm trying to allow read-only access via GET's to the page, but require authorization when using hg push to the same url which sends a POST request.
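
    Hard to say from the log alone why the POST falls through to the default root (/usr/html), but the allow all/deny all pair inside limit_except is doing nothing useful, and a layout where the fastcgi handler is unconditional while only non-GET methods require auth is the one commonly used for "anonymous pull, authenticated push" with hgwebdir. A sketch only, untested against this exact setup:

        location /hg {
            include fastcgi_params;
            if ($request_uri ~ ^/hg([^?#]*)) {
                set $rewritten_uri $1;
            }
            fastcgi_param SCRIPT_NAME "/hg";
            fastcgi_param PATH_INFO $rewritten_uri;
            fastcgi_param AUTH_USER $remote_user;
            fastcgi_param REMOTE_USER $remote_user;
            fastcgi_pass unix:/var/run/hg-fastcgi.socket;

            # only push (POST etc.) requires credentials; GET/HEAD stay open
            limit_except GET HEAD {
                auth_basic "hg secured repos";
                auth_basic_user_file /var/trac.htpasswd;
            }
        }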

    Read the article

  • chrooted sftp user with write permissions to /var/www

    - by matthew
    I am getting confused about this setup that I am trying to deploy. I hope someone of you folks can lend me a hand: much much appreciated. Background info Server is Debian 6.0, ext3, with Apache2/SSL and Nginx at the front as reverse proxy. I need to provide sftp access to the Apache root directory (/var/www), making sure that the sftp user is chrooted to that path with RWX permissions. All this without modifying any default permission in /var/www. drwxr-xr-x 9 root root 4096 Nov 4 22:46 www Inside /var/www -rw-r----- 1 www-data www-data 177 Mar 11 2012 file1 drwxr-x--- 6 www-data www-data 4096 Sep 10 2012 dir1 drwxr-xr-x 7 www-data www-data 4096 Sep 28 2012 dir2 -rw------- 1 root root 19 Apr 6 2012 file2 -rw------- 1 root root 3548528 Sep 28 2012 file3 drwxr-x--- 6 www-data www-data 4096 Aug 22 00:11 dir3 drwxr-x--- 5 www-data www-data 4096 Jul 15 2012 dir4 drwxr-x--- 2 www-data www-data 536576 Nov 24 2012 dir5 drwxr-x--- 2 www-data www-data 4096 Nov 5 00:00 dir6 drwxr-x--- 2 www-data www-data 4096 Nov 4 13:24 dir7 What I have tried created a new group secureftp created a new sftp user, joined to secureftp and www-data groups also with nologin shell. Homedir is / edited sshd_config with Subsystem sftp internal-sftp AllowTcpForwarding no Match Group <secureftp> ChrootDirectory /var/www ForceCommand internal-sftp I can login with the sftp user, list files but no write action is allowed. Sftp user is in the www-data group but permissions in /var/www are read/read+x for the group bit so... It doesn't work. I've also tried with ACL, but as I apply ACL RWX permissions for the sftp user to /var/www (dirs and files recursively), it will change the unix permissions as well which is what I don't want. What can I do here? I was thinking I could enable the user www-data to login as sftp, so that it'll be able to modify files/dirs that www-data owns in /var/www. But for some reason I think this would be a stupid move securitywise.
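
    A constraint worth keeping in mind: sshd requires the ChrootDirectory itself, and every directory above it, to be owned by root and not group- or other-writable, so /var/www can never be directly writable by the sftp user at the top level; write access has to come from group (or ACL) permissions on the directories inside the chroot. A sketch of the Match block with a umask so new uploads stay group-readable (the -u option needs OpenSSH 5.4 or later, which Debian 6 should satisfy, but verify on the host):

        Subsystem sftp internal-sftp

        Match Group secureftp
            ChrootDirectory /var/www
            ForceCommand internal-sftp -u 0002
            AllowTcpForwarding no
            X11Forwarding no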

    Read the article

  • Can't seem to stop Postfix backscatter

    - by Ian
    I've just migrated to a Postfix system and can't seem to stop the backscatter messages to unknown addresses on the site. I have a file, validrcpt, that lists all the valid emails on the site - about eight of them. Yet when a message is sent to a non-existent address, instead of just dropping it, postfix is replying with a "Recipient address rejected: User unknown in virtual mailbox table" email. Do I have something set wrong? I've read http://www.postfix.org/BACKSCATTER_README.html but unless I'm caffeine deficient, I don't see what's happening and perhaps I'm just to used to my old qmail setup. Here's postconf -n: alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases append_dot_mydomain = no biff = no broken_sasl_auth_clients = yes config_directory = /etc/postfix content_filter = smtp-amavis:[127.0.0.1]:10024 home_mailbox = Maildir/ inet_interfaces = all inet_protocols = ipv4 local_recipient_maps = hash:/etc/postfix/validrcpt mailbox_command = /usr/lib/dovecot/deliver -c /etc/dovecot/dovecot.conf -m "${EXTENSION}" mailbox_size_limit = 0 mydestination = localhost myhostname = localhost mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 myorigin = /etc/mailname policy-spf_time_limit = 3600s readme_directory = no recipient_bcc_maps = hash:/etc/postfix/recipient_bcc recipient_delimiter = + relay_recipient_maps = hash:/etc/postfix/relay_recipients relayhost = smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtp_use_tls = yes smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) smtpd_recipient_restrictions = permit_mynetworks,permit_sasl_authenticated,reject_unauth_destination,check_policy_service unix:private/policy-spf,reject_rbl_client zen.spamhaus.org,reject_rbl_client bl.spamcop.net,reject_rbl_client cbl.abuseat.org,check_policy_service inet:127.0.0.1:10023 smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination smtpd_sasl_auth_enable = yes smtpd_sasl_authenticated_header = yes smtpd_sasl_local_domain = $myhostname smtpd_sasl_path = private/dovecot-auth smtpd_sasl_security_options = noanonymous smtpd_sasl_type = dovecot smtpd_sender_restrictions = reject_unknown_sender_domain smtpd_tls_auth_only = yes smtpd_tls_cert_file = /etc/dovecot/dovecot.pem smtpd_tls_key_file = /etc/dovecot/private/dovecot.pem smtpd_tls_mandatory_ciphers = medium smtpd_tls_mandatory_protocols = SSLv3, TLSv1 smtpd_tls_received_header = yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtpd_use_tls = yes tls_random_source = dev:/dev/urandom virtual_gid_maps = static:5000 virtual_mailbox_base = /home/vmail virtual_mailbox_domains = digitalhit.com virtual_mailbox_maps = hash:/etc/postfix/vmaps virtual_minimum_uid = 1000 virtual_uid_maps = static:5000
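
    A quick way to tell whether this is really backscatter: if the "User unknown in virtual mailbox table" answer comes back during the SMTP dialogue, Postfix is rejecting up front and the bounce the sender sees is generated by their own MTA, not by this server. A sketch of the manual test (sender and recipient are placeholders):

        telnet localhost 25
        HELO test.example
        MAIL FROM:<someone@test.example>
        RCPT TO:<nosuchuser@digitalhit.com>
        # a 550 reply to the RCPT command here means the message is being
        # refused in-session rather than accepted and bounced later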

    Read the article

  • SMTP message rate control on Ubuntu 8.04, preferably with postfix

    - by TimDaMan
    Maybe I am chasing a bug, but I am trying to set up an SMTP proxy of sorts. I have a postfix server which receives all the email for a collection of servers/clients. It then uses a smarthost (relayhost=...) to forward its mail to our corporate MTA. I would like to limit the number of messages an individual server can relay to prevent swamping the corporate MTA. Postfix has a program called "anvil" that is capable of tracking stats about mail to be used for such things but it doesn't seem to be executed. I ran "inotifywait -m /usr/lib/postfix/anvil" while I started postfix and sent a number of messages through it from a remote server. inotifywait indicated anvil was never run. Has anyone gotten postfix/anvil rate controls to work? main.cf smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no append_dot_mydomain = no readme_directory = no myhostname = site-server-q9 alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = localhost relayhost = Out outgoing mail relay mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 10.0.0.0/8 mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = 10.X.X.X smtpd_client_message_rate_limit = 1 anvil_rate_time_unit = 1h master.cf extract anvil unix - - - - 1 anvil smtp inet n - - - - smtpd
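
    One behaviour that would explain anvil never firing: clients listed in $mynetworks are exempt from the smtpd_client_* limits by default (smtpd_client_event_limit_exceptions defaults to $mynetworks), and the config above puts 10.0.0.0/8 in mynetworks, which presumably covers the sending servers. A sketch of narrowing the exemption; the values are examples, not a tested policy:

        # main.cf
        smtpd_client_event_limit_exceptions = 127.0.0.0/8
        smtpd_client_message_rate_limit = 100
        anvil_rate_time_unit = 60s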

    Read the article

  • How to cache authentication in Linux using PAM/Kerberos authentication (for CVS)?

    - by Calonthar
    We have several Linux servers that authenticate Linux user passwords on our Windows Active Directory server using PAM and Kerberos 5. The Linux distro we use is CentOS 6. On one system, we have several version control systems like CVS and Subversion, both of which authenticate users through PAM, so that users can use their normal Unix/Windows AD accounts. Since we started using Kerberos for password authentication, we have noticed that CVS on a client machine is often much slower to establish a connection. CVS authenticates the user on every request (e.g. cvs diff, log, update...). Is it possible to cache the credentials that Kerberos uses, such that it does not need to ask the Windows AD server every time a user executes a cvs action? Our PAM config /etc/pam.d/system-auth looks like the following: auth required pam_env.so auth sufficient pam_unix.so nullok try_first_pass auth requisite pam_succeed_if.so uid >= 500 quiet auth sufficient pam_krb5.so use_first_pass auth required pam_deny.so account required pam_unix.so broken_shadow account sufficient pam_succeed_if.so uid < 500 quiet account [default=bad success=ok user_unknown=ignore] pam_krb5.so account required pam_permit.so password requisite pam_cracklib.so try_first_pass retry=3 password sufficient pam_unix.so md5 shadow nullok try_first_pass use_authtok password sufficient pam_krb5.so use_authtok password required pam_deny.so session optional pam_keyinit.so revoke session required pam_limits.so session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid session required pam_unix.so session optional pam_krb5.so
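
    One approach sometimes used for this on CentOS 6 is to put SSSD between PAM and the AD server, since it keeps its own cache of identities and, optionally, credentials; whether it actually removes the per-command round trip for CVS pserver logins in this setup is an assumption, not a verified result. A minimal sketch (realm and server names are placeholders, and the PAM stack would use pam_sss.so instead of pam_krb5.so):

        # /etc/sssd/sssd.conf
        [sssd]
        config_file_version = 2
        services = nss, pam
        domains = EXAMPLE

        [domain/EXAMPLE]
        id_provider = ldap
        auth_provider = krb5
        krb5_realm = EXAMPLE.COM
        krb5_server = ad.example.com
        cache_credentials = true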

    Read the article

  • Spring-mvc project can't select from a particular mysql table

    - by Dan Ray
    I'm building a Spring-mvc project (using JPA and Hibernate for DB access) that is running just great locally, on my dev box, with a local MySQL database. Now I'm trying to put a snapshot up on a staging server for my client to play with, and I'm having trouble. Tomcat (after some wrestling) deploys my war file without complaint, and I can get some response from the application over the browser. When I hit my main page, which is behind Spring Security authentication, it redirects me to the login page, which works perfectly. I have Security configured to query the database for user details, and that works fine. In fact, a change to a password in the database is reflected in the behavior of the login form, so I'm confident it IS reaching the database and querying the user table. Once authenticated, we go to the first "real" page of the app, and I get a "data access failure" error. The server's console log gets this line (redacted): ERROR org.hibernate.util.JDBCExceptionReporter - SELECT command denied to user 'myDbUser'@'localhost' for table 'asset' However, if I go to MySQL from the shell using exactly the same creds, I have no problem at all selecting from the asset table: [development@tomcat01stg]$ mysql -u myDbUser -pmyDbPwd dbName ... mysql> \s -------------- mysql Ver 14.12 Distrib 5.0.77, for redhat-linux-gnu (i686) using readline 5.1 Connection id: 199 Current database: dbName Current user: myDbUser@localhost ... UNIX socket: /var/lib/mysql/mysql.sock -------------- mysql> select count(*) from asset; +----------+ | count(*) | +----------+ | 19 | +----------+ 1 row in set (0.00 sec) I've broken down my MySQL access settings, cleaned out the user and re-run the grant commands, set up a version of the user from 'localhost' and another from '%', making sure to flush permissions.... Nothing is changing the behavior of this thing. What gives?
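
    Since the same credentials behave differently from the shell and from the app, a useful next step is to compare the account rows and grants MySQL actually holds for that user. The statements below are a diagnostic sketch using the names from the question; the final GRANT is only needed if the table-level privileges really turn out to be missing:

        SELECT user, host FROM mysql.user WHERE user = 'myDbUser';
        SHOW GRANTS FOR 'myDbUser'@'localhost';
        SHOW GRANTS FOR 'myDbUser'@'%';

        GRANT SELECT, INSERT, UPDATE, DELETE ON dbName.* TO 'myDbUser'@'localhost';
        FLUSH PRIVILEGES;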

    Read the article

  • Why do hosts prefer Linux to Windows Server?

    - by iconiK
    So far I see a HUGE majority of hosts provide only Linux shared hosting, providing Windows only to VPS (or even only to dedicated servers). Why is it so? While Windows is a lot more expensive than Linux (though it depends on a lot of factors, not just initial and support license cost), it also provides ASP.NET, IIS and of course, Microsoft SQL Server. I know in the past it might have been because of cPanel being Linux only but now they have a Windows version. But still, why is Linux predominantly used on shared hosting? PHP works on both systems. IIS can be (and probably is) faster. MySQL runs on both systems as well. cPanel has a Windows version. Python, Perl, Ruby, all run on Windows as well. You even have MS SQL Server Express, which I find superior to MySQL in both speed and features. Access is there for low usage requirements, as is SQLite (which is so great for quick small stuff). And with PowerShell you have a good alternative to the Unix shell. EDIT: I am looking for common reasons; I realize each hosting company (and/or its clients) may have different needs. This becomes very important when you get to VPS or Cloud, which give you a full operating system to use.

    Read the article

  • Hardware choice: ASUS Eee Pad Slider or ASUS Eee Pad Transformer for web development?

    - by JamesM
    I was just wondering which of the following tablets seems better to get. I am a web developer, always using Unix/Linux/BSD, and I want a tablet that has a keyboard. http://gdgt.com/asus/eee/pad/slider/ http://gdgt.com/asus/eee/pad/transformer/ http://www.tweaktown.com/news/18311/asus_eee_pad_slider_transformer_tablets_with_physical_keyboard/index.html I know both are similar, but I'm not sure which one I should get. The Slider seems very nice, but again the keyboard is fixed to the tablet, unlike the Transformer. P.S.: I'm going to use one of the above to showcase my programming work at school, as well as just being used as a cheaper notebook than the $300 Windows 7 locked-down notebooks. By locked down, I mean we pay $300 for them and after 3 years we can do whatever with them; they are Lenovo ThinkPad Mini 10s, and what they have installed is all you get - they don't let us install whatever OS we want on them. And with the question on both of those links, I think that the Transformer would be better, but that is only taking into account the fact of it being both a tablet and a notebook. What I really care about is power; which one is more powerful? It will be running kFreeBSD-Debian-Squeeze with a Linux Mint theme and several other packages. Though I'm not going to run Windows (which I feel is bloated), I still want power. To help keep my computer from slowing down with cache, I will have a cron.d/hourly script cleaning out the cache memory.

    Read the article

  • NetBackup's bplist doesn't get user/group info for Windows files

    - by Gnustavo
    I'm trying to get information about storage consumption from NetBackup's bplist output. I'm running NBU 6.0MP5 on a RHEL 3 server. The server is backing up several Solaris, Linux, and Windows machines. When I use bplist to get information about files backed up on any UNIX machine I get something like this: # bplist -C unixclient -R 99 -l -s 01/28/2006 -e 01/29/2006 / drwxr-xr-x test ccase 0 Nov 16 09:28 /l/home2/test/ -rw------- test ccase 4737 Jan 06 17:54 /l/home2/test/.bash_history -rw-rw-r-- test ccase 104 Nov 11 2004 /l/home2/test/.bashrc However, when I use it to list files backed up on any Windows client I can't get the user and group information. They both always appear as 'root'. Like this: # bplist -C winclient -t 13 -R 99 -l -s 02/20/2006 / drwx------ root root 0 Feb 20 14:26 /C/temp/ -rwx------ root root 41 Feb 20 14:26 /C/temp/asdf.txt drwx------ root root 0 May 25 2004 /C/temp/CTRMNGR/ Does anyone know why bplist doesn't show the correct user/group for Windows files? If it can't, is there a way to get that information using another command? Thanks. Gustavo.

    Read the article

  • Varnish returning 503, FetchError (could not get storage)

    - by Archan
    On the current setup we're running into a problem with Varnish, we're running a CentOS 5.7 x86_64 xenpv, with Cpanel WHM, hosted at VPS.net. Sometimes we will recieve a Guru Meditation from Varnish, and when we look in the varnishlog with the following command varnishlog -d -c -m TxStatus:503 it returns output similar to the following: 15 VCL_call c recv 15 VCL_acl c NO_MATCH devs 15 VCL_return c pass 15 VCL_call c hash 15 Hash c **** 15 Hash c ************* 15 VCL_return c hash 15 VCL_call c pass pass 15 Backend c 12 default default 15 TTL c 1835862523 RFC 0 -1 -1 1332454056 0 1332454055 375007920 0 15 VCL_call c fetch hit_for_pass 15 ObjProtocol c HTTP/1.1 15 ObjResponse c OK 15 ObjHeader c Date: Thu, 22 Mar 2012 22:07:35 GMT 15 ObjHeader c Server: Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 mod_fcgid/2.3.6 15 ObjHeader c X-Powered-By: PHP/5.3.9 15 ObjHeader c Expires: Thu, 19 Nov 1981 08:52:00 GMT 15 ObjHeader c Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 15 ObjHeader c Pragma: no-cache 15 ObjHeader c Content-Type: text/html; charset=utf-8 15 ObjHeader c X-Cacheable: NO:Cache-Control=private 15 FetchError c chunked read_error: 12 (Could not get storage) 15 VCL_call c error deliver 15 VCL_call c deliver deliver As far as I have could gather, we could try increasing the nuke_limit, but currently we have a nuke_limit of 500, and when running varnishstat -1 -f n_lru_nuked we "only" get a total of 1031, even though we have seen the error happen on several pages. When we then run top to see how much memory Varnish is using, it only shows that it is using 763m, although we've set it to be allowed to use 1200m. Any ideas of what the problem can be?
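
    "Could not get storage" generally means the fetch could not obtain space in the cache (or in transient storage) for the object being pulled from the backend, so it is worth confirming how varnishd was actually started and how full the malloc storage is. The commands below are a sketch, and whether raising the 1200m allocation fixes this particular case is an assumption:

        ps aux | grep varnishd            # confirm the -s malloc,<size> in use
        varnishstat -1 | grep -E 'SMA|n_lru_nuked|n_obj'
        # if storage really is the bottleneck, a larger allocation, e.g.
        # -s malloc,2G (or a file backend), is the usual next step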

    Read the article

  • Why is it necessary to chmod o+r parent directory to fix 403 access forbidden error with Nginx and Passenger?

    - by davenolan
    This may be an Nginx wrinkle, or it may be because I don't understand Unix permissions. We're using Hudson CI to deploy our staging instance. So RAILS_ROOT is /var/lib/hudson/jobs/JOBNAME/workspace. Hudson runs as hudson user Nginx runs as www-data user hudson and nginx are both members of the www group root of my nginx conf points to RAILS_ROOT/public as per normal. RAILS_ROOT/config/environment.rb is owned by www-data (so Passenger runs as www-data) RAILS_ROOT and everything in it is owned by the www group and group has r/w/x permissions As it stood, Nginx threw 403 permission denied when requesting any URL. error.log contained entries like this: public/index.html" is forbidden (13: Permission denied). These did not fix or change the error (each with a stop/start of Nginx): chmod 777 -R RAILS_ROOT chgrp www -R /var/lib/hudson I also tried Nginx as root, and Passenger complained that it could not find config/environment (despite the path displayed on the error page being correct). The fix was to ensure everybody has read permissions on each directory in the hierarchy. In this case chmod o+r /var/lib/hudson. But if the group has read permissions on the directory, and nginx is a member of the owner group of the directory, why was it necessary to allow everyone read permissions? Is there something I have not grokked about permissions? $nginx -V nginx version: nginx/0.7.61 built by gcc 4.4.1 (Ubuntu 4.4.1-4ubuntu8) configure arguments: --prefix=/opt/nginx --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-2.2.5/ext/nginx --with-http_ssl_module --with-pcre=~/src/pcre-8.00/ --with-http_stub_status_module $cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=9.10 DISTRIB_CODENAME=karmic DISTRIB_DESCRIPTION="Ubuntu 9.10"
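
    The underlying rule: to reach public/index.html the worker needs the execute (search) bit on every directory in the path, and for each directory exactly one permission class applies - owner, group, or other - so group membership only helps on components whose owning group really is www. A sketch of how to see which class applies at each level:

        # show the mode of every component of the path (add -o for the
        # owner/group columns if your namei build supports it)
        namei -m /var/lib/hudson/jobs/JOBNAME/workspace/public/index.html
        # confirm the groups the nginx worker actually runs with
        id www-data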

    Read the article

  • xauth, ssh and missing home directory

    - by flolo
    We have several servers, and normally everything works fine, except now... we are getting new air conditioning installed. This takes 36 hours, and for this time almost all servers got shut down; only 2 remaining servers run the most important tasks (i.e. accepting incoming email, delivering some important websites, login server). Everybody was informed that if they needed data from the home directories they should fetch it before the shutdown. Long story short: someone realized that he has to run a certain program on one of the servers. No problem: he can log in remotely to our login server and run the program there without a home directory (binaries are local and necessary information can be copied to /tmp). That works like a charm until... the user needs to run a GUI program. I find no easy way to make it run; usually ssh -Y honk@loginserver is enough, but now the home directory is missing and ssh is not able to copy the cookies into ~/.Xauthority (as the file server with the home directories is down). Paranoid as all sysadmins are, all X servers listen only locally, not on TCP ports, so no direct remote X connection is possible. The SSH config is waterproof - i.e. no way to set environment variables. My problem is that the proxy MIT cookie generated by ssh gets lost, as the .Xauthority doesn't exist. If I could retrieve it somehow I could re-enter it into a .Xauthority in /tmp. The only other option (besides changing the config) which came to my mind is making a tunnel (netcat, or better ssh) from the remote host to the login server and copying the cookie manually (not sure whether the TCP/Unix domain socket stuff works as expected). Any good suggestions (for the future - now our servers are already up)?
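
    One avenue for next time, grounded in sshd(8): if /etc/ssh/sshrc (or ~/.ssh/rc) exists, sshd stops running xauth itself and instead hands the script the X11 "proto cookie" pair on standard input, so the cookie can be written somewhere other than the missing ~/.Xauthority. A simplified sketch adapted from the example in the man page (the sshd(8) version also rewrites localhost: displays to unix:, omitted here):

        # /etc/ssh/sshrc on the login server
        XAUTHFILE="/tmp/.Xauthority.$USER"
        if read proto cookie && [ -n "$DISPLAY" ]; then
            echo add "$DISPLAY" "$proto" "$cookie" | xauth -q -f "$XAUTHFILE" -
        fi
        # users then run their X clients with, e.g.:
        #   XAUTHORITY=/tmp/.Xauthority.$USER xclock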

    Read the article

  • Installing Windows 7 over PXE, preferably with domain autojoin

    - by Ivan Vucica
    At an educational non-profit, I've inherited a previously set-up Windows domain that, after the first reinstall of the machines, we ended up not using by simply not joining machines back into the domain. Over last summer, before the annual reinstall for shipping machines to the summer school, I toyed with the idea of installing Windows 7 over network, instead of just imaging the machines. It took a bit longer than I expected to figure out the basics; honestly, I expected that Windows would be more friendly for PXE installation out of the box. What I'm interested in is best practices for installing Windows 7 over PXE with domain autojoin. I'd love it if the whole setup could optionally be hosted on a UNIX based system as well. I've had some success by preparing an ISO using Windows Deployment Kit, and loading the ISO into memory. This was needed since I wanted a menu, and I think I couldn't get PXELINUX to chainload into Windows' bootloader. Unfortunately, I couldn't figure out much about customization of the Windows setup in that timeframe nor could I get Samba to work properly; studying the stuff ended up being too lengthy, especially the portion where I edited a disk image on Windows and copied it outside. WDK didn't make things easier by mounting the disk image into RAM, and writing it in its entirety when done with it, making me a very sad boy. I've recently found a different approach, too, that appears to be closer to Microsoft's original idea for netboot deployment and does not involve ISOs. So my question boils down to the following. What exact approach do you use for netbooting Windows 7 setup? How can Windows 7 setup be best customized to be completely unattended, including installation on specific system partition and not destroying the data partition, creation of passworded admin and default user, choice of MAC-address-based hostname, and joining a domain? As much details as possible for everyone's future reference would be appreciated. WDS isn't a bad choice, but if a Linux-based install can be used, that'd be better.
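
    On the autojoin part specifically, the piece of an unattended install that handles it is the Microsoft-Windows-UnattendedJoin component in the specialize pass; the fragment below follows the Windows AIK documentation, but the architecture, domain and join account are placeholders and the whole thing should be read as a sketch rather than a tested answer file:

        <component name="Microsoft-Windows-UnattendedJoin"
                   processorArchitecture="amd64"
                   publicKeyToken="31bf3856ad364e35"
                   language="neutral" versionScope="nonSxS">
          <Identification>
            <JoinDomain>EXAMPLE.LOCAL</JoinDomain>
            <Credentials>
              <Domain>EXAMPLE.LOCAL</Domain>
              <Username>joinaccount</Username>
              <Password>join-password</Password>
            </Credentials>
          </Identification>
        </component>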

    Read the article

  • How to encourage Windows administrators to pick up scripting?

    - by icelava
    When I worked as an administrator in my first job, I was frustrated that our administration processes with Windows servers were a series of point-and-clicks; we could never match the level of efficiency with the Unix servers which had a group of shell scripts to automate a lot of the work. I soon read about WSH and ADSI and wasted no time learning just how much automation I was able to achieve with scripting. There was a huge problem though - almost none of my Windows colleagues were really interested in learning scripting. They seemed happy with the manually mouse-clicking chores and were never excited at the prospect of using scripts to do the work on their behalf. I struggled to convince them to pick up scripting skills despite the evident increases in efficiency. I left that job in pursuit of a full-time software development career thereafter. Almost a decade on working in various environments and different customers, I still encounter Windows administrators mainly possessing this general "mood" where they would avoid scripting as much as possible. Despite the increasing level of accessibility Windows server technologies are opening up for scripting and automation. I am almost certain the majority of administrators are administrators precisely because they absolutely hate performing any kind of programming duties. What are some means to encourage and motivate administrators that scripting can really help them in the long run?

    Read the article

  • Postfix "mail-to-script" pipe only delivers empty messages

    - by user68202
    I have a problem here. I want incoming email to be piped to a PHP script in the system through Postfix. My system is running ISPConfig 3, Postfix and Dovecot (virtual mailbox users are saved in MySQL). I already looked into this one: How to configure postfix to pipe all incoming email to a script? ... the script is executed, but no "message" is delivered to the script. My setup so far: In ISPConfig 3 I have set up the following email route: Active Server Domain Transport Sort by Yes example.com pipe.example.com piper: 5 Excerpt from my postfix master.cf: piper unix - n n - - pipe user=piper:piper directory=/home/piper argv=php -q /home/piper/mail.php So far it is working great (mail sent to [email protected]) (mail.log): Jun 21 16:07:11 example postfix/pipe[10948]: 235CF7613E2: to=<[email protected]>, relay=piper, delay=0.04, delays=0.01/0.01/0/0.02, dsn=2.0.0, status=sent (delivered via piper service) ... and no errors in mail.err. The mail.php is successfully executed (it's chmod 777 and chown'ed to piper), but it creates an empty .txt file (normally it should contain the email message): -rw------- 1 piper piper 0 Jun 21 16:07 mailtext_1340287631.txt The mail.php script I've used is the one from http://www.email2php.com/HowItWorks If I use their (commercial) service to pipe an email to the mail.php (in an apache2 environment) through a provided "pipe-email", the message is saved successfully and completely. But as you can see, I don't want to use external services. -rw-r--r-- 1 web2 client0 1959 Jun 21 16:19 mailtext_1340288377.txt So, what's wrong here? I think it has something to do with the "delivering configuration" in my system...
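
    Two things worth checking, neither guaranteed to be the culprit: the pipe(8) transport hands the complete message to the command on standard input, so the script must read STDIN (a script that looks only at argv or at web-style superglobals sees nothing), and since pipe runs the command without a shell it is safer to give the full path to the PHP binary in master.cf. A minimal sketch of the stdin-reading side (file names are examples):

        <?php
        // read the raw message that the Postfix pipe transport writes to
        // this script's standard input, then save it to a timestamped file
        $message = file_get_contents('php://stdin');
        file_put_contents('/home/piper/mailtext_' . time() . '.txt', $message);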

    Read the article

  • Setting the default permissions for files uploaded via FTP to a directory

    - by Kerri
    Disclaimer: I'm just a web designer/coder, and server admin stuff is my weakest point of them all. So be easy on me (and very specific). I'm using a simple CMS (Unify) on a site, where part of the functionality is that the client can upload files to a specified directory (using FTP). The permissions for the upload directory are set to 755. But when files are uploaded through the interface, they are uploaded with permissions set to 640 (instead of 644), so site visitors cannot access the files. When I emailed the CMS's support about this, they told me that it was a server setting, and I need to make sure that files uploaded through FTP are set to 644. Makes perfect sense, but I have no idea how to do this. Any help would be greatly appreciated. This site is a shared site hosted by Network Solutions (Unix), so my access options are limited. I can edit .htaccess files, and php.ini, but that's about all I have access to. It appears I can't even log on via shell. ETA: 11/11/2010 Thanks all. I was able to work around this problem by setting up the CMS's settings in a different way. I'd be interested in following up on Nick O'Niel's suggestions, because I think he's on the right track, but unfortunately I can't access the necessary files on this particular server. So, anyway, I'm leaving this open, since the original question isn't exactly resolved. Unfortunately, I probably can't put a correct answer to the test, since the shared server in question has nearly all of its config files tightly locked down.
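
    The 640 almost certainly comes from the FTP daemon's umask: uploads start from 666, and a umask of 027 yields exactly 640, while 022 would give the desired 644. Whether Network Solutions exposes this setting is another matter, but the knob itself looks like one of the following, depending on the daemon (both lines are examples, not a confirmed fix for this host):

        # vsftpd.conf
        local_umask=022

        # proftpd.conf
        Umask 022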

    Read the article
