Search Results

Search found 5240 results on 210 pages for 'smb conf'.


  • How can I configure Cyrus IMAP to submit a default realm to SASL?

    - by piwi
    I have configured Postfix to work with SASL using plain text, where the former automatically submits a default realm to the latter when requesting authentication. Assuming the domain name is example.com and the user is foo, here is how I configured it on my Debian system so far. In the postfix configuration file /etc/main.cf: smtpd_sasl_auth_enable = yes smtpd_sasl_local_domain = $mydomain The SMTP configuration file /etc/postfix/smtpd.conf contains: pwcheck_method: saslauthd mech_list: PLAIN The SASL daemon is configured with the sasldb mechanism in /etc/default/saslauthd: MECHANIMS="sasldb" The SASL database file contains a single user, shown by sasldblistusers2: foo@example.com: userPassword The authentication works well without having to provide a realm, as postfix does that for me. However, I cannot find out how to tell the Cyrus IMAP daemon to do the same. I created a user cyrus in my SASL database, which uses the realm of the host domain name, not example.com, for administrative purposes. I used this account to create a mailbox through cyradm for the user foo: cm user.foo IMAP is configured in /etc/imapd.conf this way: allowplaintext: yes sasl_minimum_layer: 0 sasl_pwcheck_method: saslauthd sasl_mech_list: PLAIN servername: mail.example.com If I enable cross-realm authentication (loginrealms: example.com), trying to authenticate using imtest works with these options: imtest -m login -a foo@example.com localhost However, I would like to be able to authenticate without having to specify the realm, like this: imtest -m login -a foo localhost I thought that using virtdomains (setting it either to userid or on) and defaultdomain: example.com would do just that, but I cannot make it work. I always end up with this error: cyrus/imap[11012]: badlogin: localhost [127.0.0.1] plaintext foo SASL(-13): authentication failure: checkpass failed What I understand is that cyrus-imapd never tries to submit the realm when trying to authenticate the user foo. My question: how can I tell cyrus-imapd to send the domain name as the realm automatically? Thanks for your insights!
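    A starting point, not verified against the poster's setup: confirm with testsaslauthd whether the missing realm is really the failing piece, then either let Cyrus canonify bare logins via virtdomains/defaultdomain or have saslauthd append the realm itself with its -r flag. The option names are standard; the values below are assumptions.

        # does authentication only succeed when a realm is supplied?
        testsaslauthd -u foo -r example.com -p secret
        testsaslauthd -u foo -p secret

        # /etc/imapd.conf -- sketch
        virtdomains: userid
        defaultdomain: example.com

        # /etc/default/saslauthd -- -r makes saslauthd combine realm and login
        # ("foo" plus realm becomes "foo@example.com") before checking sasldb
        MECHANISMS="sasldb"
        OPTIONS="-c -m /var/run/saslauthd -r"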


  • Facing application redirection issue on nginx+tomcat

    - by Sunny Thakur
    I am facing a strange issue with an application that is deployed on Tomcat, with nginx in front of Tomcat to access the application from the browser. I deployed the application on Tomcat and then set up the virtual host on nginx under the conf.d directory [the file I created is virtual.conf]; below is the content I am using for it. server { listen 81; server_name domain.com; error_log /var/log/nginx/domain-admin-error.log; location / { proxy_pass http://localhost:100; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; } error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } Now the issue: when I use rewrite ^(.*) http://$server_name$1 permanent; in the server section and access the URL, it redirects to https://domain.com and I am able to log in to the app and access the links [I am not using SSL redirection in this host file and I don't know why this is happening]. When I remove this from the server section, I can access the application on :81 and log in, but when I click on any link in the app it redirects me to the login page. I am not getting anything in the application logs or the Tomcat logs. Please help if this is an nginx redirection issue. Thanks, Sunny
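    One thing worth testing, offered as a sketch rather than a confirmed fix: Tomcat builds redirects and session cookies from the host/port it sees, so a login redirect can bounce the client back to localhost:100 or drop the session. proxy_redirect rewrites Tomcat's Location headers back to the public name; setting proxyName/proxyPort on the Tomcat connector is the alternative on the Tomcat side.

        location / {
            proxy_pass         http://localhost:100;
            proxy_set_header   X-Real-IP       $remote_addr;
            proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header   Host            $http_host;
            # rewrite Location: headers that Tomcat emits for its own host/port
            proxy_redirect     http://localhost:100/ http://domain.com:81/;
        }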


  • Linux authentication via ADS -- allowing only specific groups in PAM

    - by Kenaniah
    I'm taking the samba / winbind / PAM route to authenticate users on our Linux servers from our Active Directory domain. Everything works, but I want to limit what AD groups are allowed to authenticate. Winbind / PAM currently allows any enabled user account in Active Directory, and pam_winbind.so doesn't seem to heed the require_membership_of=MYDOMAIN\\mygroup parameter. Doesn't matter if I set it in the /etc/pam.d/system-auth or /etc/security/pam_winbind.conf files. How can I force winbind to honor the require_membership_of setting? Using CentOS 5.5 with up-to-date packages. Update: turns out that PAM always allows root to pass through auth, by virtue of the fact that it's root. So as long as the account exists, root will pass auth. Any other account is subjected to the auth constraints. Update 2: require_membership_of seems to be working, except for when the requesting user has the root uid. In that case, the login succeeds regardless of the require_membership_of setting. This is not an issue for any other account. How can I configure PAM to force the require_membership_of check even when the current user is root? Current PAM config is below: auth sufficient pam_winbind.so auth sufficient pam_unix.so nullok try_first_pass auth requisite pam_succeed_if.so uid >= 500 quiet auth required pam_deny.so account sufficient pam_winbind.so account sufficient pam_localuser.so account required pam_unix.so broken_shadow password ..... (excluded for brevity) session required pam_winbind.so session required pam_mkhomedir.so skel=/etc/skel umask=0077 session required pam_limits.so session required pam_unix.so require_membership_of is currently set in the /etc/security/pam_winbind.conf file, and is working (except for the root case outlined above).
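    For the require_membership_of part specifically, the pam_winbind documentation recommends using the group SID rather than the name, since name lookups and backslash escaping are frequent failure points. A sketch with a placeholder SID:

        # get the SID once:
        #   wbinfo -n 'MYDOMAIN\mygroup'

        # /etc/security/pam_winbind.conf
        [global]
        require_membership_of = S-1-5-21-XXXXXXXXXX-XXXXXXXXXX-XXXXXXXXXX-1234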


  • Location directive in nginx configuration

    - by ryan
    I have an nginx server set up to act as a fileserver. I want to set the expires directive on images. This is what part of my config file looks like. http { include /etc/nginx/mime.types; access_log /var/log/nginx/access.log; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; tcp_nodelay on; gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; location ~* \.(ico|jpg|jpeg|png)$ { expires 1y; } include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } I get the following error when I reload the config - "Location directive not allowed here". Can someone tell me what the right syntax for this is? Thanks in advance. EDIT : Found the answer myself. Added it in a comment. Closing this.
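    The answer the poster alludes to: location blocks are only valid inside a server (or another location) block, not directly under http. A minimal sketch, with an assumed server name and document root:

        http {
            ...
            server {
                listen 80;
                server_name files.example.com;   # assumption
                root /var/www/files;             # assumption

                location ~* \.(ico|jpg|jpeg|png)$ {
                    expires 1y;
                }
            }
        }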


  • Xinerama creates a panning viewport

    - by iblue
    EDIT: I've created a bug report: https://bugs.freedesktop.org/show_bug.cgi?id=48458 My Setup I have 4 monitors, 1920x1080, which are in portrait mode (rotated left). They are connected to two radeon graphic cards. As usual, a picture says more than a thousand words. The problem Everything works fine, when Xinerama is disabled. But when I enable Xinerama, things get weird. When I move the mouse of the screen and return, the screen contents begin to move with the mouse, only on this monitor. It seems like the virtual display size does not match the real screen size, which activates a panning viewport. Any idea how to stop this? The video I created a video to demonstrate the issue: http://www.youtube.com/watch?v=zq_XHji1P24 xorg.conf This is my xorg.conf: Section "ServerLayout" ##################[ Evilness begins here ]############# Option "Xinerama" "on" # <--- Makes it go b0rked! ##################[ End of all evil ]############# Identifier "BOFH Console of Doom" Screen 0 "Screen-0" 0 0 Screen 1 "Screen-1" RightOf "Screen-0" Screen 2 "Screen-2" RightOf "Screen-1" Screen 3 "Screen-3" RightOf "Screen-2" EndSection Section "ServerFlags" Option "RandR" "false" EndSection Section "Module" Load "dbe" Load "dri" Load "extmod" Load "dri2" Load "record" Load "glx" EndSection Section "Monitor" Identifier "Monitor-0" Option "Rotate" "left" EndSection Section "Monitor" Identifier "Monitor-1" Option "Rotate" "left" EndSection Section "Monitor" Identifier "Monitor-2" Option "Rotate" "left" EndSection Section "Monitor" Identifier "Monitor-3" Option "Rotate" "left" EndSection Section "Device" Identifier "Radeon-0-0" Driver "radeon" BusID "PCI:9:0:0" Option "ZaphodHeads" "DVI-0" Screen 0 EndSection Section "Device" Identifier "Radeon-0-1" Driver "radeon" BusID "PCI:9:0:0" Option "ZaphodHeads" "DVI-1" Screen 1 EndSection Section "Device" Identifier "Radeon-1-0" Driver "radeon" BusID "PCI:4:0:0" Option "ZaphodHeads" "DVI-2" Screen 0 EndSection Section "Device" Identifier "Radeon-1-1" Driver "radeon" BusID "PCI:4:0:0" Option "ZaphodHeads" "DVI-3" Screen 1 EndSection Section "Screen" Identifier "Screen-0" Device "Radeon-0-0" Monitor "Monitor-0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection Section "Screen" Identifier "Screen-1" Device "Radeon-0-1" Monitor "Monitor-1" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection Section "Screen" Identifier "Screen-2" Device "Radeon-1-0" Monitor "Monitor-2" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection Section "Screen" Identifier "Screen-3" Device "Radeon-1-1" Monitor "Monitor-3" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection
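    An untested guess at a workaround: with rotated ZaphodHeads screens the driver can compute the virtual desktop size from the unrotated mode, which is exactly the mismatch that triggers a panning viewport. Pinning the virtual size to the rotated panel geometry in each Display subsection may stop the panning:

        SubSection "Display"
            Viewport 0 0
            Depth    24
            Virtual  1080 1920   # rotated 1920x1080 panel
        EndSubSection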


  • Hugepages not utilized by MySQL 5.0, CentOS 5

    - by TechZilla
    I've set up Hugepages, but I'm not seeing any of them reserved. Have I missed a step, or is MySQL for some particular reason unable to utilize the Hugepages? I have not created a mount of hugetlbfs, although from what I read, MySQL would not call pages in such a manner. If I'm wrong, please let me know, as that would be a trivial solution. Almost all my MySQL tables are using InnoDB. NOTE: I created a hugetlbfs, no change as expected. Is it possible that rebooting would rectify this situation? I would not want to go through the procedure, as this is high availability, but would do so if necessary. These are the configurations which I believe are relevant. /etc/sysctl.conf ... ## Huge Pages vm.nr_hugepages = 4096 vm.hugetlb_shm_group = 27 ## SHM kernel.shmmax = 34359738368 kernel.shmall = 8589934592 ... /etc/security/limits.conf ... mysql soft nofile 12888 mysql hard nofile 51552 @mysql soft memlock unlimited @mysql hard memlock unlimited /etc/my.cnf [mysqld] large-pages ... grep Huge /proc/meminfo HugePages_Total: 4096 HugePages_Free: 4096 HugePages_Rsvd: 0 Hugepagesize: 2048 kB id mysql uid=27(mysql) gid=27(mysql) groups=27(mysql) context=root:system_r:unconfined_t:SystemLow-SystemHigh tail -6 /var/log/mysqld.log InnoDB: HugeTLB: Warning: Failed to allocate 1342193664 bytes. errno 12 InnoDB HugeTLB: Warning: Using conventional memory pool 120808 15:49:25 InnoDB: Started; log sequence number 0 1729804158 120808 15:49:25 [Note] /usr/libexec/mysqld: ready for connections. Version: '5.0.95' socket: '/var/lib/mysql/mysql.sock' port: 3306 Source distribution I would really appreciate any help, I'm completely out of ideas. If I missed any more relevant configs, or diagnostics, please comment and I'll add it to the question.
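    A likely culprit, worth checking but not confirmed: /etc/security/limits.conf is applied by PAM at login, so a daemon started from the init script at boot never picks up the memlock entries, and InnoDB then falls back to the conventional pool, which is exactly the warning in mysqld.log. A sketch of the workaround:

        # in /etc/init.d/mysqld (or /etc/sysconfig/mysqld, if the init script
        # sources it -- check the script first), before mysqld is launched:
        ulimit -l unlimited

        # then restart and re-check the reservation:
        service mysqld restart
        grep Huge /proc/meminfo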


  • apache using mod_auth_kerb always asks for the password twice

    - by DrStalker
    (Debian Squeeze) I'm trying to set apache up to use Kerberos authentication to allow AD users to log in. It is working, but prompts the user twice for a username and password, with the first time being ignored (no matter what is put it in.) Only the second prompt includes the AuthName string from the config (i.e.: the first windows is a generic username/password one, the second includes the title "Kerberos Login") I'm not worried about integrated windows authentication working at this stage, I just want users to be able to login with their AD account so we don't need to set up a second repository of user accounts. How do I fix this to eliminate that first useless prompt? The directives in the apache2.conf file: <Directory /var/www/kerberos> AuthType Kerberos AuthName "Kerberos Login" KrbMethodNegotiate On KrbMethodK5Passwd On KrbAuthRealms ONEVUE.COM.AU.LOCAL Krb5KeyTab /etc/krb5.keytab KrbServiceName HTTP/[email protected] require valid-user </Directory> krb5.conf: [libdefaults] default_realm = ONEVUE.COM.AU.LOCAL [realms] ONEVUE.COM.AU.LOCAL = { kdc = SYD01PWDC01.ONEVUE.COM.AU.LOCAL master_kdc = SYD01PWDC01.ONEVUE.COM.AU.LOCAL admin_server = SYD01PWDC01.ONEVUE.COM.AU.LOCAL default_domain = ONEVUE.COM.AU.LOCAL } [login] krb4_convert = true krb4_get_tickets = false The access log when accessing the secured directory (note the two seperate 401's) 192.168.10.115 - - [24/Aug/2012:15:52:01 +1000] "GET /kerberos/ HTTP/1.1" 401 710 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.83 Safari/537.1" 192.168.10.115 - - [24/Aug/2012:15:52:06 +1000] "GET /kerberos/ HTTP/1.1" 401 680 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.83 Safari/537.1" 192.168.10.115 - [email protected] [24/Aug/2012:15:52:10 +1000] "GET /kerberos/ HTTP/1.1" 200 375 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.83 Safari/537.1" And one line in error.log [Fri Aug 24 15:52:06 2012] [error] [client 192.168.0.115] gss_accept_sec_context(2) failed: An unsupported mechanism was requested (, Unknown error)
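    The gss_accept_sec_context error in the log suggests the first, ignored prompt comes from a failed Negotiate (SPNEGO) round-trip. If only password logins are needed for now, disabling Negotiate should leave a single Basic prompt; a sketch of the same block with that one change:

        <Directory /var/www/kerberos>
            AuthType Kerberos
            AuthName "Kerberos Login"
            KrbMethodNegotiate Off      # skip the SPNEGO attempt that 401s first
            KrbMethodK5Passwd On
            KrbAuthRealms ONEVUE.COM.AU.LOCAL
            Krb5KeyTab /etc/krb5.keytab
            # KrbServiceName as in the original config
            require valid-user
        </Directory>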


  • Apache can't be restarted after changes to the configuration file

    - by Sharifhs
    Hello, I can't successfully configure the Apache and PHP configuration files; can anybody help me with this? Apache 2.2.16 (win32-x86-no_ssl.msi) was installed into the "C:\Apache2.2" location. Then the PHP 5.3.3 (VC9 x86 Thread Safe) zip file was downloaded and extracted to the "C:\php" location. In "C:\php" I renamed the "php.ini-development" file to "php.ini". The "php.ini" file was opened with Notepad and modified as follows: doc_root = "C:\Apache2.2\htdocs" extension_dir = "C:\php\ext" The following lines were added to Apache's configuration file "httpd.conf": LoadModule php5_module "C:/php/php5apache2_2.dll" AddType application/x-httpd-php .php PHPIniDir "C:/php" N.B.: Thanks, everyone, for the comments and the answer, but I can't reply to any of your comments; I don't know why. Maybe I'm not privileged to post comments as I'm new here (is that the case?)! That's why I'm editing my post to reply to you all. Tell me, what can I do? @ jer.salamon: do you want me to post the full httpd.conf file? It'll be longer then! @ davr: the server started first, but when I configured those files, it never started again. @jer.salamon: did you mean keeping it this way: doc_root = extension_dir = "ext" It has not restarted yet!
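    Two quick checks that usually narrow this down (the commands are standard for the Windows Apache 2.2 build): run the config syntax test, and start Apache in a console so startup errors print directly. Also note a frequent cause at the time: the VC9 builds of PHP are not compatible with the VC6 binaries from apache.org, so either a VC9-compiled Apache build or the VC6 thread-safe PHP build is needed for php5apache2_2.dll to load.

        REM syntax check; prints the first offending line of httpd.conf
        C:\Apache2.2\bin\httpd.exe -t
        REM run in the console so startup errors appear on screen
        C:\Apache2.2\bin\httpd.exe -k start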


  • How to best tune my SAN/Initiators for best performance?

    - by Disco
    Recent owner of a Dell PowerVault MD3600i i'm experiencing some weird results. I have a dedicated 24x 10GbE Switch (PowerConnect 8024), setup to jumbo frames 9K. The MD3600 has 2 RAID controllers, each has 2x 10GbE ethernet nics. There's nothing else on the switch; one VLAN for SAN traffic. Here's my multipath.conf defaults { udev_dir /dev polling_interval 5 selector "round-robin 0" path_grouping_policy multibus getuid_callout "/sbin/scsi_id -g -u -s /block/%n" prio_callout none path_checker readsector0 rr_min_io 100 max_fds 8192 rr_weight priorities failback immediate no_path_retry fail user_friendly_names yes # prio rdac } blacklist { device { vendor "*" product "Universal Xport" } # devnode "^sd[a-z]" } devices { device { vendor "DELL" product "MD36xxi" path_grouping_policy group_by_prio prio rdac # polling_interval 5 path_checker rdac path_selector "round-robin 0" hardware_handler "1 rdac" failback immediate features "2 pg_init_retries 50" no_path_retry 30 rr_min_io 100 prio_callout "/sbin/mpath_prio_rdac /dev/%n" } } And iscsid.conf : node.startup = automatic node.session.timeo.replacement_timeout = 15 node.conn[0].timeo.login_timeout = 15 node.conn[0].timeo.logout_timeout = 15 node.conn[0].timeo.noop_out_interval = 5 node.conn[0].timeo.noop_out_timeout = 10 node.session.iscsi.InitialR2T = No node.session.iscsi.ImmediateData = Yes node.session.iscsi.FirstBurstLength = 262144 node.session.iscsi.MaxBurstLength = 16776192 node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144 After my tests; i can barely come to 200 Mb/s read/write. Should I expect more than that ? Providing it has dual 10 GbE my thoughts where to come around the 400 Mb/s. Any ideas ? Guidelines ? Troubleshooting tips ?
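    A few initiator-side checks that are generic rather than MD3600i-specific (the values below are assumptions, not tuned recommendations): verify that jumbo frames really pass end-to-end, and raise the TCP buffer ceilings, which default far too low for 10GbE links.

        # succeeds only if every hop passes 9000-byte frames (8972 payload + 28 header)
        ping -M do -s 8972 <SAN portal IP>

        # /etc/sysctl.conf -- starting values for a 10GbE iSCSI initiator
        net.core.rmem_max = 16777216
        net.core.wmem_max = 16777216
        net.ipv4.tcp_rmem = 4096 87380 16777216
        net.ipv4.tcp_wmem = 4096 65536 16777216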


  • su not giving proper message for restricted LDAP groups

    - by user1743881
    I have configured PAM authentication on a Linux box to allow only a particular group to log in. I have enabled PAM and LDAP through authconfig and modified access.conf as below: [root@test root]# tail -1 /etc/security/access.conf - : ALL EXCEPT root test-auth : ALL I also modified the sudoers file to allow su for this group: [root@test ~]# tail -1 /etc/sudoers %test-auth ALL=/bin/su Now, only members of this LDAP group can log in to the system. However, when I try su from any of these authorized users, it asks for a password and then, even though I enter the correct password, it gives a message like "Incorrect password" and the login fails. /var/log/secure shows that the user does not have permission to get access, but then it should print a message like "Access denied", the way it does for console login. The functionality is working but it is not giving proper messages. Could anyone please help with this? My /etc/pam.d/su file: [root@test root]# cat /etc/pam.d/su #%PAM-1.0 auth sufficient pam_rootok.so # Uncomment the following line to implicitly trust users in the "wheel" group. #auth sufficient pam_wheel.so trust use_uid # Uncomment the following line to require a user to be in the "wheel" group. #auth required pam_wheel.so use_uid auth include system-auth account sufficient pam_succeed_if.so uid = 0 use_uid quiet account include system-auth password include system-auth session include system-auth session optional pam_xauth.so


  • Display with intel integrated graphics, bitcoin mine with Radeon 6950

    - by karategeek6
    I'm on Ubuntu Linux 11.04 64 bit. I have an intel i5 with integrated graphics and a Radeon 6950, with one monitor. I would like to run my graphics on the integrated card, and run bitcoin mining on the 6950. I have bitcoin mining working when I use the 6950 for both display and mining. Every time I try and and use the integrated graphics instead, OpenCL doesn't recognize my 6950. Using aticonfig --initial when using the integrated graphics for display breaks things. So I used the xorg.conf it created as a basis and tried to manually edit it. I really don't know what I'm doing, though. My last attempt is given below. The graphics ran off the integrated card, but the 6950 wasn't recognized. Any help would be greatly appreciated! xorg.conf: #Section "ServerLayout" # Identifier "Intel Layout" # Screen "Default Screen" # Identifier "aticonfig Layout" # Screen "aticonfig-Screen[0]-0" # Screen 0 "aticonfig-Screen[0]-0" 0 0 #EndSection Section "Module" Load "glx" EndSection # Intel Section "Device" Identifier "Intel Integrated Graphics" Driver "intel" BusID "PCI:0:2:0" EndSection Section "Monitor" Identifier "Default Monitor" Option "VendorName" "Monitor Vendor" Option "ModelName" "Monitor Name" Option "DPMS" "true" EndSection Section "Screen" Identifier "Default Screen" Device "Intel Integrated Graphics" Monitor "Default Monitor" DefaultDepth 24 EndSection # ATI Section "Device" Identifier "aticonfig-Device[0]-0" Driver "fglrx" BusID "PCI:1:0:0" EndSection Section "Monitor" Identifier "aticonfig-Monitor[0]-0" Option "VendorName" "ATI Proprietary Driver" Option "ModelName" "Generic Autodetecting Monitor" Option "DPMS" "true" EndSection Section "Screen" Identifier "aticonfig-Screen[0]-0" Device "aticonfig-Device[0]-0" Monitor "aticonfig-Monitor[0]-0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection


  • Translating debian network configuration to gentoo

    - by thpetrus
    I just got rid of Debian on my VPS (OpenVZ) and installed Gentoo on it; however it is a plain Gentoo image without further configuration, i.e. no working network. I'm not familiar with Debian and couldn't figure out how to get the network set up. These are the Debian network files. /etc/network/interfaces: auto venet0 iface venet0 inet manual up ifconfig venet0 up up ifconfig venet0 127.0.0.2 up route add default dev venet0 down route del default dev venet0 down ifconfig venet0 down iface venet0 inet6 manual up ifconfig venet0 add ipv6addr/128 down ifconfig venet0 del ipv6addr/128 up route -A inet6 add default dev venet0 down route -A inet6 del default dev venet0 auto venet0:0 iface venet0:0 inet static address external_ip netmask 255.255.255.255 auto venet0:1 iface venet0:1 inet static address internal_ip netmask 255.255.255.255 Please note that external_ip, internal_ip and ipv6addr are placeholders. I copied the /etc/resolv.conf, know the gateway_ip and also have another output of ifconfig, if necessary. This is what I came up with, /etc/conf.d/net: config_venet0="127.0.0.2 netmask 255.255.255.255 brd 0.0.0.0" config_venet0:0="external_ip netmask 255.255.255.255 brd 0.0.0.0" route_venet0:0="default via gateway_ip" config_venet0:1="internal_ip netmask 255.255.255.255 brd 0.0.0.0" The broadcast IP is taken from the Debian ifconfig output - however it doesn't work. A symbolic link net.venet0:0 -> net.lo in /etc/init.d/ was created and I added net.venet0:0 to the boot runlevel.
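    A sketch of the equivalent /etc/conf.d/net for a baselayout-2/OpenRC system, with the syntax assumed from the OpenRC/netifrc documentation (substitute real addresses for the placeholders). The OpenVZ venet0:0 / venet0:1 aliases become additional addresses on venet0 rather than separate interfaces, and the init script uses the plain interface name:

        # /etc/conf.d/net
        config_venet0="127.0.0.2/32
        external_ip/32
        internal_ip/32"
        routes_venet0="default dev venet0"

        # init script and boot entry use net.venet0, not net.venet0:0
        ln -s /etc/init.d/net.lo /etc/init.d/net.venet0
        rc-update add net.venet0 default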


  • Apache virtual server httpd-vhosts undocumented issue

    - by Ethon Bridges
    I have read the Apache documentation on the httpd-vhosts.conf file and after a couple of hours fighting this problem, figured it out on my own. Here's the situation: We have a domain that ends in .ws. Apparently you can't do this in the conf file. You MUST use the ? wildcard or it will not work. The * wildcard will not work either. Further, in the ServerAlias directive, anything past the first entry will not work if the first entry in the ServerAlias directive is not correct. Here is an example of an entry that does NOT work. Note that anotherdomain.com and yetanotherdomain.com will fail because thedomain.ws is not configured correctly: <VirtualHost *:80> DocumentRoot /opt/local/apache2/sites/ourdomain ServerName www.thedomain.ws ServerAlias thedomain.ws anotherdomain.com yetanotherdomain.com <Directory /opt/local/apache2/sites/ourdomain> allow from all </Directory> </VirtualHost> Here is an example of our working entry: <VirtualHost *:80> DocumentRoot /opt/local/apache2/sites/ourdomain ServerName www.thedomain.ws? ServerAlias thedomain.ws? anotherdomain.com yetanotherdomain.com <Directory /opt/local/apache2/sites/ourdomain> allow from all </Directory> </VirtualHost> If there is documentation of this, I sure didn't see it.
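    Independent of the wildcard question, Apache can report exactly how it parsed the name-based vhosts, which makes it easy to see whether a given hostname is being matched at all. Also worth remembering that ServerAlias is a space-separated list, so every token must be a complete hostname on its own.

        # dump the parsed virtual-host table and any conflicts
        # (command name varies by distro: apachectl, apache2ctl, or httpd -S)
        apachectl -S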


  • Using gitlab behind Apache proxy all generated urls are wrong

    - by Hippyjim
    I've set up Gitlab on Ubuntu 12.04 using the default package from https://about.gitlab.com/downloads/ {edit to clarify} I've set up Apache to proxy and run the nginx server the package installed on port 8888 (or so I thought). As I had Apache installed already I have to run nginx on localhost:8888. The problem is, all images (such as avatars) are now served from http://localhost:8888, and all the checkout urls Gitlab gives are also localhost - instead of using my domain name. If I change /etc/gitlab/gitlab.rb to use that url, then Gitlab stops working and gives a 503. Any ideas how I can tell Gitlab what URL to present to the world, even though it's really running on localhost? /etc/gitlab/gitlab.rb looks like: # Change the external_url to the address your users will type in their browser external_url 'http://my.local.domain' redis['port'] = 6379 postgresql['port'] = 2345 unicorn['port'] = 3456 and /opt/gitlab/embedded/conf/nginx.conf looks like: server { listen localhost:8888; server_name my.local.domain; [Update] It looks like nginx is still listening on the wrong port if I don't specify localhost:8888 as the external_url. I found this in /var/log/gitlab/nginx/error.log 2014/08/19 14:29:58 [emerg] 2526#0: bind() to 0.0.0.0:80 failed (98: Address already in use) 2014/08/19 14:29:58 [emerg] 2526#0: bind() to 0.0.0.0:80 failed (98: Address already in use) 2014/08/19 14:29:58 [emerg] 2526#0: bind() to 0.0.0.0:80 failed (98: Address already in use) 2014/08/19 14:29:58 [emerg] 2526#0: bind() to 0.0.0.0:80 failed (98: Address already in use) 2014/08/19 14:29:58 [emerg] 2526#0: bind() to 0.0.0.0:80 failed (98: Address already in use) 2014/08/19 14:29:58 [emerg] 2526#0: still could not bind() Apache setup looks like: <VirtualHost *:80> ServerName my.local.domain ServerSignature Off ProxyPreserveHost On AllowEncodedSlashes NoDecode <Location /> ProxyPass http://localhost:8888/ ProxyPassReverse http://127.0.0.1:8888 ProxyPassReverse http://my.local.domain </Location> </VirtualHost> Which seems to proxy everything back ok if Gitlab listens on localhost:8888 - I just need Gitlab to start displaying the right URL, instead of localhost:8888.
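    A sketch of the split the omnibus package is designed for (option names as documented for omnibus GitLab; verify them against the installed gitlab.rb template for this version): external_url stays the public name users should see, while the bundled nginx is told separately which port and address to bind.

        # /etc/gitlab/gitlab.rb
        external_url 'http://my.local.domain'       # what users and clone URLs see
        nginx['listen_port'] = 8888                 # keep the bundled nginx off port 80
        nginx['listen_addresses'] = ['127.0.0.1']   # only accept traffic from the Apache proxy

        # then apply the change:
        sudo gitlab-ctl reconfigure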


  • CSS and JS files not being updated, supposedly because of Nginx Caching

    - by Alberto Elias
    I have my web app working with AppCache and I would like that when I modify my html/css/js files, and then update my Cache Manifest, that when the user accesses my web app, they will have an updated version of those files. If I change an HTML file, it works perfectly, but when I change CSS and JS files, the old version is still being used. I've been checking everything out and I think it's related to my nginx configuration. I have a cache.conf file that contains the following: gzip on; gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon; location ~ \.(css|js|htc)$ { expires 31536000s; add_header Pragma "public"; add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; } location ~ \.(html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$ { expires 3600s; add_header Pragma "public"; add_header Cache-Control "max-age=3600, public, must-revalidate, proxy-revalidate"; } And in default.conf I have my locations. I would like to have this caching working on all locations except one, how could I configure this? I've tried the following and it isn't working: location /dir1/dir2/ { root /var/www/dir1; add_header Pragma "no-cache"; add_header Cache-Control "private"; expires off; } Thanks
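    The regex block for \.(css|js|htc)$ is evaluated ahead of an ordinary prefix location, which is why the exception never takes effect. The ^~ modifier makes the longest-prefix match final, so nothing under that path falls through to the caching regex. A sketch reusing the paths from the question:

        location ^~ /dir1/dir2/ {
            root /var/www/dir1;
            add_header Cache-Control "private, no-cache";
            expires off;
        }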


  • Setting up SSL on JBoss 5

    - by socal_javaguy
    How can I enable SSL on JBoss 5 on a Linux (Red Hat - Fedora 8) box? What I've done so far is: (1) Create a test keystore. (2) Placed the newly generated server.keystore in $JBOSS_HOME/server/default/conf (3) Make the following change in the server.xml in $JBOSS_HOME/server/default/deploy/jbossweb.sar to include this: <!-- SSL/TLS Connector configuration using the admin devl guide keystore --> <Connector protocol="HTTP/1.1" SSLEnabled="true" port="8443" address="${jboss.bind.address}" scheme="https" secure="true" clientAuth="false" keystoreFile="${jboss.server.home.dir}/conf/server.keystore" keystorePass="mypassword" sslProtocol = "TLS" /> (4) The problem is that when JBoss starts it logs this exception (during start-up) (but I am still able to view everything under http://localhost:8080/): 03:59:54,780 ERROR [Http11Protocol] Error initializing endpoint java.io.IOException: Cannot recover key at org.apache.tomcat.util.net.jsse.JSSESocketFactory.init(JSSESocketFactory.java:456) at org.apache.tomcat.util.net.jsse.JSSESocketFactory.createSocket(JSSESocketFactory.java:139) at org.apache.tomcat.util.net.JIoEndpoint.init(JIoEndpoint.java:498) at org.apache.coyote.http11.Http11Protocol.init(Http11Protocol.java:175) at org.apache.catalina.connector.Connector.initialize(Connector.java:1029) at org.apache.catalina.core.StandardService.initialize(StandardService.java:683) at org.apache.catalina.core.StandardServer.initialize(StandardServer.java:821) at org.jboss.web.tomcat.service.deployers.TomcatService.startService(TomcatService.java:313) I do know that's there's more to be done to enable full SSL client authentication....
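    "Cannot recover key" from JSSE almost always means the private key inside the keystore has a different password from the keystore itself: keystorePass unlocks the store but not the key. One way to bring them in line, assuming the default alias "mykey" (list the aliases first if unsure):

        keytool -list -keystore server.keystore
        # prompts for the keystore password, the old key password, and a new
        # key password -- set the new one equal to the keystore password
        keytool -keypasswd -alias mykey -keystore server.keystore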


  • Apache httpd processes and PHP out of memory

    - by Ofri
    I have a VPS running apache-php-mysql on centos and a single drupal website installed. The VPS has 256MB of RAM (could be the root cause of all my problems... maybe I just need more). Whenever I try to open my website from multiple browser tabs (about 8... not 800) all at once, apache crashes! I have this on the log: [Wed Oct 24 11:26:31 2012] [error] [client xxx] PHP Fatal error: Out of memory (allocated 28049408) (tried to allocate 201335 bytes) in xxx on line 2139, referer: xxx I have read many many posts here, but I think there is something fundamental that I'm missing - If I understand correctly some php script tried to allocate 200K after allocating 28MB, and fails to do so. First question is: should this cause the apache to crash??? Next, I tried to look at 'top' command while I do my little test. Indeed I see 7 httpd processes, each reserving about 30MB - which explains why my RAM runs out. How do I prevent apache from creating new processes until it's out of memory? I tried configuring /etc/httpd/conf/httpd.conf like this: <IfModule prefork.c> StartServers 1 MinSpareServers 1 MaxSpareServers 1 ServerLimit 1 MaxClients 1 MaxRequestsPerChild 100 </IfModule> But got the same exact result! What am I missing? Thanks a lot!
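    The "Out of memory" here is the OS refusing more memory, not PHP's memory_limit, so the box genuinely runs out with roughly eight 30 MB children on 256 MB; and MaxClients/ServerLimit of 1 serializes every request, which is why several tabs appear to hang. A sketch of middle-ground prefork values; the numbers are assumptions, to be sized against measured per-child RES and whatever MySQL needs:

        <IfModule prefork.c>
            StartServers          2
            MinSpareServers       1
            MaxSpareServers       3
            ServerLimit           5
            MaxClients            5
            MaxRequestsPerChild 200
        </IfModule>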


  • Asus K50I sound issues

    - by MrStatic
    I have an Asus K50IJ (Bestbuy) laptop and have issues with my sound. Speakers themselves work fine but when I plug into the headphone jack it auto mutes the front channel and no sounds comes out of either the speakers or the headphones. If I then unmute the channel I get sound from both the speakers and the headphones. alsamixer shows the Headphone channel as all grayed out. /etc/modprobe.d/alsa-base.conf I have tried snd-hda-intel model="asus-laptop" and snd-hda-intel model="asus" In Sound Preferences I have gone to output and changed the Connector to 'Analog Headphones' that results in no sound from either speakers or headphones. As one forum suggested I tried to comment out blacklist snd_pcsp in the blacklist.conf which resulted in no change. lspci -v shows: 00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03) Subsystem: Santa Cruz Operation Device 1043 Flags: bus master, fast devsel, latency 0, IRQ 45 Memory at fe9f4000 (64-bit, non-prefetchable) [size=16K] Capabilities: [50] Power Management version 2 Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit+ Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00 Capabilities: [100] Virtual Channel Capabilities: [130] Root Complex Link Kernel driver in use: HDA Intel Kernel modules: snd-hda-intel


  • DRBD setup problem

    - by cuthieu
    I'm so new to DRBD, please help me fixing the problem below. Enclosed my drbd.conf. Many thanks [root@skonkwerks1 ~]# drbdadm create-md all open(/dev/hdb3) failed: No such file or directory Command 'drbdmeta /dev/drbd0 v08 /dev/hdb3 internal create-md' terminated with exit code 20 drbdsetup exited with code 20 [root@skonkwerks1 ~]# vi /etc/drbd.conf global { usage-count no; } resource repdata { protocol C; startup { wfc-timeout 0; degr-wfc-timeout 120; } disk { on-io-error detach; } # or panic # net { cram-hmac-alg "hdd1"; shared-secret "testing"; } syncer { rate 10M; } on skonkwerks1 { device /dev/drbd0; disk /dev/hdb1; address 172.29.156.1:7788; meta-disk internal; } on skonkwerks2 { device /dev/drbd0; disk /dev/hdb1; address 172.29.156.2:7788; meta-disk internal; } }
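    Two things stand out, offered as observations rather than a verified fix: the error names /dev/hdb3 while this drbd.conf only references /dev/hdb1, which suggests drbdadm is reading an older or different resource definition; and cram-hmac-alg expects a digest algorithm such as sha1, not a device-like string. A cleaned-up sketch of the resource:

        resource repdata {
          protocol C;
          startup { wfc-timeout 0; degr-wfc-timeout 120; }
          disk    { on-io-error detach; }
          net     { cram-hmac-alg "sha1"; shared-secret "testing"; }
          syncer  { rate 10M; }
          on skonkwerks1 {             # must match `uname -n` on that host
            device    /dev/drbd0;
            disk      /dev/hdb1;       # a partition that actually exists there
            address   172.29.156.1:7788;
            meta-disk internal;
          }
          on skonkwerks2 {
            device    /dev/drbd0;
            disk      /dev/hdb1;
            address   172.29.156.2:7788;
            meta-disk internal;
          }
        }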


  • How can I prevent Apache from asking for credentials on non SSL site

    - by Scott
    I have a web server with several virtual hosts. Some of those hosts have an associated ssl site. I have a DirectoryMatch directive in my main config file which requires basic authentication to any directory with secured as part of the directory path. On sites that have an SSL site, I have a rewrite rule (located in the non ssl config for that site), that redirects to the SSL site, same uri. The problem is the http (80) site asks for credentials first, and then the https (443) site asks for credentials again. I would like to prevent the http site from asking and thus avoid the potential for someone entering credentials and having them sent in clear text. I know I could move the DirectoryMatch down to the specific site, and just put the auth statement in the SSL config, but that would introduce the possibility of forgetting to protect critical directories when creating new sites. Here are the pertinent declarations: httpd.conf (all sites): <DirectoryMatch "_secured_"> AuthType Basic AuthName "+ + + Restrcted Area on Server + + +" AuthUserFile /home/websvr/.auth/std.auth Require valid-user </DirectoryMatch> site.conf (specific to individual site) <DirectoryMatch "_secured_"> RewriteEngine On RewriteRule .*(_secured_.*) https://site.com/$1 </DirectoryMatch> Is there a way to leave DirectoryMatch in the main config file and prevent the request for authorization from the http site? Running Apache 2 on Ubuntu 10.04 server from the default package. I have AllowOverride set to none - I prefer to handle things in the config files instead of .htaccess.
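    One approach, a sketch rather than a tested fix for this setup: do the HTTPS redirect in the virtual-host (server) context. Rewrite rules there run during URL translation, before the authentication phase that the DirectoryMatch triggers, so the port-80 vhost never gets far enough to issue a Basic-auth challenge, while the global DirectoryMatch stays in place for the SSL side:

        # inside the *:80 VirtualHost for site.com
        RewriteEngine On
        RewriteCond %{HTTPS} !=on
        RewriteRule ^/(.*_secured_.*)$ https://site.com/$1 [R=301,L]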


  • Lots of Apache processes are using my CPU; usage is always more than 70%

    - by Barkat Ullah
    I am running a Plesk panel on 1and1. I have 120 sites running, all using Pligg CMS; each site has 600 visitors per day. Please see the details of my server below: HDD-1000GB RAM-16GB Processor-6 Core I always see a lot of Apache processes running in my # top view, so the server seems overloaded. If I can reduce the number of Apache processes I think the server will be OK. But I don't know why so many Apache processes are running. Please see the link below for the screenshot of my # top view: http://dl.dropbox.com/u/26967109/%23Top-2.jpg Sometimes I saw too many connection errors in my Plesk control panel, so I added the line below in my [mysqld] section: set-variable=max_connections=416 But I haven't found a solution yet. I have also added MaxClients and ServerLimit 416 in /etc/httpd/conf/httpd.conf, but no solution yet. I have been researching for more than 7 days but haven't found a solution. Please help me to solve the problem. During peak hours my sites take too much time to load, but during off-peak hours they are OK. Please help me to find out the actual problem.
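    A hedged starting point rather than a tuned recommendation: with prefork, a sustainable MaxClients is roughly (RAM left after MySQL and the rest of Plesk) divided by the average Apache RES shown in top, and MaxRequestsPerChild recycles children that grow over time. The numbers below are placeholders to size from that calculation:

        <IfModule prefork.c>
            StartServers         10
            MinSpareServers      10
            MaxSpareServers      30
            ServerLimit         200
            MaxClients          200
            MaxRequestsPerChild 500
        </IfModule>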


  • Default /server-status location not inheriting in Apache

    - by rmalayter
    I'm having a problem getting /server-status to work on Apache 2.2.14 on Ubuntu Server 10.04.1. The default symlinks for status.load and status.conf are present in /etc/apache2/mods-enabled. The status.conf does include the location /server-status and appropriate allow/deny directives. However, the only vhost I have in sites-enabled looks like this. The idea is to proxy anything with a Tomcat URL to a cluster of Tomcats, and anything else to an IIS box. However, this seems to result in requests to /server-status being sent to IIS. Copying the /server-status location explicitly into the vhost configuration doesn't seem to help, no matter what order I use. Is it possible to make /server-status work within a vhost configuration that has a "default" proxy rule?: <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED <Proxy balancer://tomcatCluster> BalancerMember ajp://qa-app1:8009 route=1 BalancerMember ajp://qa-app2:8009 route=2 ProxySet stickysession=ROUTEID </Proxy> <ProxyMatch "^/(mytomcatappA|mytomcatappB)/(.*)" > ProxyPassMatch balancer://tomcatCluster/$1/$2 </ProxyMatch> #proxy anything that's not a tomcat URL to IIS on port 80 <Proxy /> ProxyPass http://qa-web1/ </Proxy>
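    mod_proxy supports explicit exclusions: a ProxyPass with "!" as the target tells Apache to handle that path itself, and exclusions must appear before the broader proxy rules. A sketch for the vhost; the access restrictions shown are assumptions:

        ProxyPass /server-status !

        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>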


  • SASL - Plaintext password not accepted - Encrypted works

    - by leviathanus
    I have a very strange issue! SASL does not work properly, as it does not accept plain-text passwords (like Outlook sends them) Oct 2 10:35:09 srf cyrus/imap[4119]: accepted connection Oct 2 10:35:09 srf cyrus/imap[4119]: badlogin: [217.XX.XXX.140] plaintext [email protected] SASL(-1): generic failure: checkpass failed Now I switch to "Encrypted password" in Thunderbird. I have the same issue as Outlook above on Thunderbird if I turn on "Plain Password"): Oct 2 10:40:40 srf cyrus/imap[14644]: accepted connection Oct 2 10:40:41 srf cyrus/imap[14622]: login: [217.XX.XXX.140] [email protected] CRAM-MD5 User logged in Same with Postfix: Without Oct 2 10:42:48 srf postfix/smtpd[17980]: connect from unknown[217.XX.XXX.140] Oct 2 10:42:48 srf postfix/smtpd[17980]: warning: SASL authentication failure: cannot connect to saslauthd server: Permission denied Oct 2 10:42:48 srf postfix/smtpd[17980]: warning: SASL authentication failure: Password verification failed Oct 2 10:42:48 srf postfix/smtpd[17980]: warning: unknown[217.XX.XXX.140]: SASL PLAIN authentication failed: generic failure With "Encrypted password": Oct 2 10:45:27 srf postfix/smtpd[21872]: connect from unknown[217.XX.XXX.140] Oct 2 10:45:28 srf postfix/smtpd[21872]: 50B3A332AAB: client=unknown[217.XX.XXX.140], sasl_method=CRAM-MD5, [email protected] Oct 2 10:45:28 srf postfix/cleanup[21899]: 50B3A332AAB: message-id=<[email protected]> Oct 2 10:45:28 srf postfix/qmgr[6181]: 50B3A332AAB: from=<[email protected]>, size=398, nrcpt=1 (queue active) Oct 2 10:45:28 srf postfix/smtpd[21872]: disconnect from unknown[217.XX.XXX.140] Config: /etc/imapd.conf:sasl_mech_list:LOGIN PLAIN CRAM-MD5 and /etc/postfix/sasl/smtpd.conf:mech_list: LOGIN PLAIN CRAM-MD5 I have no idea where to dig. Please advise.
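    The Postfix log line ("cannot connect to saslauthd server: Permission denied") points at socket access rather than the password itself. The usual Debian fixes are adding postfix to the sasl group and, because smtpd runs chrooted under /var/spool/postfix by default, moving the saslauthd socket into the chroot. A sketch using the Debian default paths:

        adduser postfix sasl

        # /etc/default/saslauthd
        OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd"

        dpkg-statoverride --add root sasl 710 /var/spool/postfix/var/run/saslauthd
        service saslauthd restart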


  • Service nginx reload: unexpected error

    - by Anna
    I'm trying to install wordpress on my nginx server by following this tutorial: http://premium.wpmudev.org/blog/how-to-setup-your-own-nginx-powered-wordpress-server/ However, the last command at step 7 gave me a strange error: service nginx reload A copy-paste from my terminal: root@server:~# service nginx reload Reloading nginx configuration: nginx: [emerg] unexpected "o" in /etc/nginx/sites-enabled/wordpress:7 nginx: configuration file /etc/nginx/nginx.conf test failed When I nano into sites-enabled/wordpress, on the 7th line I can't find anything strange: <!DOCTYPE html> <html class=" "> <head prefix="og: http://ogp.me/ns# fb: http://ogp.me/ns/fb# object: http://ogp.me/ns/object# article: http://ogp.me/ns/article# profile: http://ogp.me/ns/profile#"> <meta charset='utf-8'> <meta http-equiv="X-UA-Compatible" content="IE=edge"> Also, I don't see any obvious errors in my nginx.conf file, but maybe I'm not checking something? The first couple of lines of the nginx config file: user www-data; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } Any help is appreciated, thanks a lot in advance!
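    The "unexpected" character errors mean nginx is parsing HTML: the file in sites-enabled/wordpress contains a saved web page (the <!DOCTYPE html> head quoted above) rather than nginx configuration. Replacing its contents with an actual server block should clear the reload error. A minimal sketch; the root path and PHP-FPM socket are assumptions, so use the values from the tutorial:

        server {
            listen 80;
            server_name example.com;
            root /var/www/wordpress;
            index index.php index.html;

            location / {
                try_files $uri $uri/ /index.php?$args;
            }

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
            }
        }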


  • site to listen on port 88

    - by JohnMerlino
    I want to get one of my sites to listen on port 88. In ports.conf in /etc/apache2 on the Ubuntu server, I added the following so the web app can listen on port 88: NameVirtualHost *:80 Listen 80 NameVirtualHost *:88 Listen 88 I have this in my /etc/apache2/apache2.conf: # Include the virtual host configurations: Include sites-enabled/ Under sites-enabled, I have a file that looks like this: Listen *:88 NameVirtualHost *:88 <VirtualHost *:88> ServerName dogtracking.com DocumentRoot /home/doggps/public_html/eaglegps.com/current/public <Directory /home/doggps/public_html/eaglegps.com/current/public> AllowOverride all Options -MultiViews </Directory> <LocationMatch "^/assets/.*$"> Header unset ETag FileETag None # RFC says only cache for 1 year ExpiresActive On ExpiresDefault "access plus 1 year" </LocationMatch> </VirtualHost> Then I try to restart Apache: /etc/init.d/apache2 restart And I get: * Restarting web server apache2 /usr/sbin/apache2ctl: line 87: ulimit: open files: cannot modify limit: Operation not permitted Warning: DocumentRoot [/home/xtreme/Sites/DogGPS-CMS] does not exist apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName [Thu Oct 18 18:04:21 2012] [warn] NameVirtualHost *:88 has no VirtualHosts /usr/sbin/apache2ctl: line 87: ulimit: open files: cannot modify limit: Operation not permitted Warning: DocumentRoot [/home/xtreme/Sites/DogGPS-CMS] does not exist apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName [Thu Oct 18 18:04:22 2012] [warn] NameVirtualHost *:88 has no VirtualHosts (13)Permission denied: make_sock: could not bind to address 0.0.0.0:80 no listening sockets available, shutting down Unable to open logs Action 'start' failed.
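    Two things stand out in that output, noted as likely causes rather than certainties: the site file repeats the Listen *:88 / NameVirtualHost *:88 lines that ports.conf already declares, and the "Permission denied ... 0.0.0.0:80" plus the ulimit warnings are what restarting without root privileges looks like (the missing /home/xtreme/Sites/DogGPS-CMS DocumentRoot belongs to some other enabled site). A sketch:

        # /etc/apache2/sites-enabled/<site> -- drop the duplicated lines and keep
        # only the vhost; Listen/NameVirtualHost for *:88 stay in ports.conf
        <VirtualHost *:88>
            ServerName dogtracking.com
            DocumentRoot /home/doggps/public_html/eaglegps.com/current/public
            ...
        </VirtualHost>

        # restart with root privileges
        sudo /etc/init.d/apache2 restart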

