Search Results

Search found 5559 results on 223 pages for 'httpd conf'.


  • Directory permissions on Ubuntu Server 10.04 LTS

    - by SebastianOpperman
    I have set up a second drive on Ubuntu Server. The directory displays correctly, but Windows users cannot write or create files in it. I have Samba set up so Windows can access the drives. Here is the last bit of my /etc/samba/smb.conf:

        [personeel]
        path = /media/windows
        browsable = yes
        guest ok = yes
        writable = yes
        read only = no
        create mask = 0775
        directory mask = 0775

    I want the directory to be shared with writable permissions for everyone who can access the Ubuntu Server. I have tried sudo chmod but with no success. Any help would be appreciated.
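
    A minimal sketch of one commonly suggested fix, assuming the share really should be writable by everyone; the nobody/nogroup ownership and the force user/force group lines are illustrative, not taken from the question:

        sudo chown -R nobody:nogroup /media/windows
        sudo chmod -R 0775 /media/windows

        # and, inside the [personeel] share definition:
        force user = nobody
        force group = nogroup

    After editing smb.conf, Samba has to be restarted (e.g. sudo service smbd restart, or sudo /etc/init.d/samba restart depending on the release) for the change to take effect.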

    Read the article

  • SOAP and NHibernate Session in C#

    - by Anonymous Coward
    In a set of SOAP web services, the user is authenticated with a custom SOAP header (username/password). Each time the user calls a WS, the following Auth method is called to authenticate and retrieve the User object from the NHibernate session:

        [...]
        public class Services : Base
        {
            private User user;
            [...]
            public string myWS(string username, string password)
            {
                if (Auth(username, password))
                {
                    [...]
                }
            }
        }

        public class Base : WebService
        {
            protected static ISessionFactory sesFactory;
            protected static ISession session;

            static Base()
            {
                Configuration conf = new Configuration();
                [...]
                sesFactory = conf.BuildSessionFactory();
            }

            private bool Auth(...)
            {
                session = sesFactory.OpenSession();
                MembershipUser luser = null;
                if (UserCredentials != null && Membership.ValidateUser(username, password))
                {
                    luser = Membership.GetUser(username);
                }
                ...
                try
                {
                    user = (User)session.Get(typeof(User), luser.ProviderUserKey.ToString());
                }
                catch
                {
                    user = null;
                    throw new [...]
                }
                return user != null;
            }
        }

    When the WS work is done, the session is cleaned up nicely and everything works: the WSs create, modify and change objects, and NHibernate saves them in the DB. The problems come when a user (same username/password) calls the same WS at the same time from different clients (machines): the state of the saved objects becomes inconsistent. How do I manage the session correctly to avoid this? I searched, and the documentation about session management in NHibernate is really vast. Should I lock on the user object? Should I set up some "session share" management between WS calls from the same user? Should I use transactions in some savvy way? Thanks.

    Update 1: Yes, mSession is 'session'.

    Update 2: Even with a non-static session object, the data saved in the DB is inconsistent. The pattern I use to insert/save objects is the following:

        var return_value = [...];
        try
        {
            using (ITransaction tx = session.Transaction)
            {
                tx.Begin();
                MyType obj = new MyType();
                user.field = user.field - obj.field; // The field names are examples, but this is actually what happens.
                session.Save(user);
                session.Save(obj);
                tx.Commit();
                return_value = obj.another_field;
            }
        }
        catch ([...])
        {
            // Handling exceptions...
        }
        finally
        {
            // Clean up
            session.Flush();
            session.Close();
        }
        return return_value;

    All new objects (MyType) are correctly saved, but the user.field value is not what I would expect. Even obj.another_field is correct (the field is an ID with a generated=on-save policy). It is as if 'user.field = user.field - obj.field;' is executed more times than necessary.

    Read the article

  • Yum through http proxy

    - by eodchop
    I have several Fedora 13 servers that have to connect through an HTTP proxy for yum updates. All port 80 traffic has to be routed through this proxy. I have set up the proxy server in the network settings GUI, and I can browse the internet just fine. I have also set up my proxy information in /etc/yum.conf as follows:

        proxy=http:proxy.largecorp.corp/accelerated_pac_base.pac
        proxy_user=user
        proxy_password=password

    I then added export HTTP_PROXY="http:proxy.largecorp.corp/accelerated_pac_base.pac" to /etc/bashrc and sourced the file. When I run yum update:

        Loaded plugins: presto, refresh-packagekit
        Error: Cannot retrieve repository metadata (repomd.xml) for repository: fedora. Please verify its path and try again.

    All of the repo URLs are the defaults, as this is a fresh install.
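
    For reference, yum cannot evaluate a .pac (proxy auto-config) file; a working /etc/yum.conf proxy block normally points straight at the proxy host and port. A sketch, with an assumed hostname and port:

        proxy=http://proxy.largecorp.corp:8080
        proxy_username=user
        proxy_password=password

    The same goes for the shell variable, e.g. export http_proxy="http://user:password@proxy.largecorp.corp:8080".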

    Read the article

  • How to "debug" a keyboard in Linux? Like pressing a key and seeing a code in a terminal.

    - by Somebody still uses you MS-DOS
    I didn't get an answer to my problem about adding additional keyboards in my Ubuntu 10.04. The question mark key is not working on my keyboard, except via Alt Gr + W. So I don't know if this is a problem with Ubuntu or with VirtualBox itself (I'm running it inside a VM). I would like to debug this problem. The keyboard is plugged in, so when I press a key I believe something is being sent to my operating system, some code, I don't know. I would like to dig into this problem, find some damn key code and some damn *.conf file, and manually fix my problem. So, does an application like this exist in Linux?
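
    For what it's worth, a couple of standard tools do exactly this kind of per-key dumping (a sketch; run whichever matches the environment):

        # Inside an X session: opens a small window and prints a KeyPress
        # event, keycode and keysym for every key you hit.
        xev

        # On a text console (as root): prints raw kernel keycodes and exits
        # after ten seconds without a keypress.
        sudo showkey -k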

    Read the article

  • Linux distro for Acer 4741G laptop

    - by sandundhammikaperera
    Hi all, I need to install Linux on my Acer 4741 laptop. Anyone who has done this before and managed to solve the device driver problems, please share your experience with me. I have already installed BackTrack Linux and was able to get both the wireless and wired network connections working, and the sound card works as well. But the problem is that I am unable to configure the 1360x768 resolution of the display; it looks really flat and ugly under that Linux. Any help? Can you guide me on how to correctly configure /etc/X11/xorg.conf? --Thanks in advance--
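
    A minimal xorg.conf sketch for forcing a specific mode; the Identifier names are illustrative, and the Modeline values come from running cvt 1360 768 60 on one machine, so they should be regenerated locally:

        Section "Monitor"
            Identifier "LaptopPanel"
            # Output of `cvt 1360 768 60`:
            Modeline "1360x768_60.00"  84.75  1360 1432 1568 1776  768 771 781 798 -hsync +vsync
        EndSection

        Section "Screen"
            Identifier "Default Screen"
            Monitor    "LaptopPanel"
            SubSection "Display"
                Modes  "1360x768_60.00"
            EndSubSection
        EndSection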

    Read the article

  • /etc/environment and cron

    - by clorz
    I've got two machines, Fedora and CentOS, and a cronjob:

        0-59 * * * * env > /home/me/env.log

    On CentOS I can see that /etc/environment affects the output, while on Fedora it does not. I want Fedora to behave like CentOS. What do I need to do to make that happen?

    /etc/pam.d/crond on Fedora:

        auth       sufficient pam_rootok.so
        auth       required   pam_env.so
        auth       include    system-auth
        account    required   pam_access.so
        account    include    system-auth
        session    required   pam_loginuid.so
        session    include    system-auth

    /etc/pam.d/crond on CentOS:

        auth       sufficient pam_env.so
        auth       required   pam_rootok.so
        auth       include    system-auth
        account    required   pam_access.so
        account    include    system-auth
        session    required   pam_loginuid.so
        session    include    system-auth

    /etc/security/pam_env.conf is the same on both systems and consists of commented-out lines. Even if I make the /etc/pam.d/crond files the same, the problem still persists.
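
    One thing worth checking (a sketch; the line below mirrors the Fedora file above): pam_env only pulls in /etc/environment when its readenv option is in effect, so making the invocation explicit rules that variable out:

        # /etc/pam.d/crond -- make pam_env read /etc/environment explicitly
        auth       required   pam_env.so readenv=1 envfile=/etc/environment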

    Read the article

  • Is there any trick to join and use Windows 8/8.1 with Samba 4 (4.1.6)?

    - by tenshimsm
    It seems that Samba doesn't like Windows 8 at all. I've followed various tutorials and I can't get Windows 8 to work properly with an Ubuntu Server as domain controller. This week I downloaded Ubuntu 14.04 LTS and set up a quick domain configuration. As usual, all the other Windows versions (XP and 7) work, but the newest M$ nightmare doesn't. This time it doesn't even join the domain; it keeps saying my username or password is wrong. My /etc/samba/smb.conf:

        # Global parameters
        [global]
            workgroup = DOMAIN
            realm = DOMAIN.LAN
            netbios name = DOM
            server role = active directory domain controller
            dns forwarder = 8.8.8.8
            idmap_ldb:use rfc2307 = yes

        [netlogon]
            path = /var/lib/samba/sysvol/domain.lan/scripts
            read only = No

        [sysvol]
            path = /var/lib/samba/sysvol
            read only = No

        [test]
            directory mode = 0750
            path = /SHARES/test
            read only = no

    Does anyone have a tutorial that really works? I've tried many, each with a different configuration that only seems to work for the people who wrote it. And is there a way to import my old AD users, computers and IDs so that I won't need to rejoin all the computers?
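
    A couple of sanity checks commonly suggested before blaming the client, using the realm from the smb.conf above (administrator is assumed as the join account):

        # The client must use the DC as its DNS server; these records have to resolve:
        host -t SRV _ldap._tcp.domain.lan
        host -t SRV _kerberos._udp.domain.lan

        # Kerberos and the netlogon share should both accept the same credentials
        # that the Windows 8 join dialog is given:
        kinit administrator@DOMAIN.LAN
        smbclient //localhost/netlogon -U administrator -c 'ls'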

    Read the article

  • Apache: Stealth 404 the admin area until authenticated via basic auth, then allow access

    - by Kzqai
    Given an administrative area with URLs like this:

        wp-admin/
        wp-admin/whatever
        wp-admin/another-page
        wp-adminsecretlogin/

    Standard basic-auth coverage would present a username and password prompt on all three wp-admin URLs and return a 403 on all failed auth attempts. That is a pretty obvious signal that something exists there, and thus an invitation to scripted brute-force attempts. I would like instead to require basic auth everywhere but, when the client is not authenticated, not prompt for a username and password, and return a 404 Not Found for every URL except the wp-adminsecretlogin/ URL. At that site-specific URL, basic auth could go through and unlock the rest of the administrative functionality (though the standard application login would still be necessary). How would I do that via Apache .htaccess or .conf directives?
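
    A partial sketch in vhost/.conf terms (the htpasswd path is an assumption): the secret URL is the only one that ever challenges, and everything else under wp-admin answers 404 outright. Fully unlocking wp-admin after a successful basic-auth login would still need some extra state (e.g. a cookie checked by mod_rewrite), which this sketch does not attempt:

        <Location "/wp-adminsecretlogin/">
            AuthType Basic
            AuthName "admin"
            AuthUserFile "/etc/httpd/conf/.htpasswd"
            Require valid-user
        </Location>

        # Everything else under wp-admin simply claims not to exist:
        RedirectMatch 404 ^/wp-admin($|/)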

    Read the article

  • Sudo asks for password twice with LDAP authentication

    - by Gnudiff
    I have an Ubuntu 8.04 LTS machine and a Windows 2003 AD domain. I have successfully set things up so that I can log in with a domain username and password, using a domain prefix like "domain+username". Logging in to the machine works on the first try; however, for some reason, when I sudo as my logged-in user it asks for the password twice every time. It accepts the password the second time, but never the first. Once or twice I might believe I simply typed the wrong password the first time, but this happens every single time. Any ideas what's wrong?

    pam.conf is empty. pam.d/sudo only includes common-auth and common-account, and common-auth is:

        auth sufficient pam_unix.so nullok_secure
        auth sufficient pam_winbind.so
        auth requisite pam_deny.so
        auth required pam_permit.so
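
    For reference, with two 'sufficient' modules stacked like this, each one can prompt on its own: pam_unix asks first, fails for a domain account, and then pam_winbind asks again. A commonly suggested variant (a sketch, not tested here) hands the first password through to pam_winbind:

        auth sufficient pam_unix.so nullok_secure
        auth sufficient pam_winbind.so use_first_pass
        auth requisite pam_deny.so
        auth required pam_permit.so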

    Read the article

  • RedirectPermanent vs RewriteRule [R]

    - by notbrain
    I currently have a perm_redirects.conf file that gets included into my Apache config stack, with lines in the format:

        RedirectPermanent /old/url/path /new/url/path

    It looks like I'm required to use an absolute URL for the new path, e.g. http://example.com/new/url/path; in the logs I'm getting "incomplete redirect target /new/url/path was corrected to http://example.com/new/url/path" (paraphrased). In the 2.2 docs for RewriteRule, at the bottom, they show the following as a valid redirect, with only URL paths instead of an absolute URL on the right-hand side:

        RewriteRule ^/old/url/path(.*) /new/url/path$1 [R]

    But I can't seem to get that format to replicate the functionality of the RedirectPermanent version. Is this possible?
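
    A sketch of the RewriteRule equivalent, for comparison: [R] on its own issues a temporary (302) redirect, so matching RedirectPermanent needs the status spelled out, and mod_rewrite has to be switched on in that context:

        RewriteEngine On
        RewriteRule ^/old/url/path(.*)$ /new/url/path$1 [R=301,L]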

    Read the article

  • Lighttpd-based server issues crop up when port forwarding

    - by michael
    I have four host computers running lighttpd webservers. They sit behind an HSPA modem, each occupying an HTTP port between 81 and 84; port 80 is taken by the modem itself. The port forwarding is set up correctly; however, only a portion of any webpage I request from any of the hosts comes through (they all fail after about 20% of the page). If I put the host on port 81 into the DMZ, it serves pages fine. The others do not respond to the DMZ treatment. Is it possible the web content on the hosts somehow requires ports other than their respective HTTP port? Or is it possible that, even though server.port is set in the lighttpd_ssl.conf file, the individual hosts are still expecting to serve on port 80? I am not familiar with lighttpd, nor did I set them up; they are running on video encoders I purchased. I can grab any files from them required for further information on the problem.
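
    Two quick checks that would answer the "still serving on port 80?" question directly (the config path is a guess, since these are embedded encoders):

        # What port does the config claim?
        grep -R "server.port" /etc/lighttpd/

        # What is lighttpd actually listening on?
        netstat -tlnp | grep lighttpd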

    Read the article

  • Having two FTP ports for the user

    - by user1663896
    I'm running vsftpd on RedHat 6.4 using TLS/SSL on port 990, and it works great. I have been tasked with having my vsftpd server listen on unencrypted port 21 as well, giving my users the choice of either clear-text FTP on port 21 or TLS/SSL on port 990. I have tried the following in my vsftpd.conf file, and it did not work:

        listen_port=990
        listen_port=21

    My config file has the following SSL parameters:

        chroot_local_user=YES
        ssl_enable=YES
        allow_anon_ssl=NO
        anonymous_enable=NO
        anon_world_readable_only=NO
        force_local_data_ssl=NO
        force_local_logins_ssl=NO
        require_ssl_reuse=NO

    Can vsftpd run on ports 21 and 990? Thanks in advance.
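
    For what it's worth, a vsftpd instance only honors a single listen_port, so one commonly suggested approach is to run two instances with separate config files (paths below are illustrative):

        # Second config for the clear-text listener:
        cp /etc/vsftpd/vsftpd.conf /etc/vsftpd/vsftpd-plain.conf
        # In vsftpd-plain.conf set: listen_port=21 and ssl_enable=NO

        # vsftpd takes the config file as its first argument:
        vsftpd /etc/vsftpd/vsftpd.conf &
        vsftpd /etc/vsftpd/vsftpd-plain.conf &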

    Read the article

  • Firewall is blocking internet traffic to OpenVPN clients

    - by user268905
    I have a virtual network set up with a Linux router/firewall connected to two private networks. An OpenVPN server in routing mode and a web server are in one of the networks; in the other are Linux client machines that access the webserver and the Internet through the OpenVPN server. External clients can also reach the OpenVPN server from the Internet. The OpenVPN server.conf is set up for routing mode over UDP, and pushes DNS and routes for the network it is in so clients can access the webserver. Here are my very strict firewall rules. After connecting to the OpenVPN server, my clients cannot access the Internet or the web server. When I allow FORWARD traffic to go through, it works just fine. The OpenVPN server has full Internet connectivity. What firewall rule do I need to add to allow Internet traffic to reach my clients?
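
    A sketch of the forwarding rules usually needed on the router for this layout; the tun0/eth0 interface names and the 10.8.0.0/24 tunnel subnet are the OpenVPN defaults and are assumptions here:

        # Let new connections from the VPN subnet out, and replies back in:
        iptables -A FORWARD -i tun0 -s 10.8.0.0/24 -m state --state NEW -j ACCEPT
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

        # NAT the VPN subnet out of the Internet-facing interface:
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE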

    Read the article

  • The requested operation has failed! (cannot find answer)

    - by Geoff
    I know this problem is plastered all over the web, but I've been searching and trying for hours with no luck; can someone please give me some help? I originally installed Apache 2.0.64 along with PHP 5.2.17 and went through all of the steps in this tutorial with no luck. I found that the culprit was the LoadModule line. After looking around on the internet I found a whole bunch of material, but a lot of it referred to PHP 5 with Apache 2.2. Since there seemed to be more info on Apache 2.2, I removed Apache 2.0.64 and installed 2.2. I added the LoadModule line to the conf file, but I got the same problem. I then followed the steps in this tutorial, because it was slightly different and covered some things I hadn't tried yet, but still I get the same problem. If I comment out the LoadModule line it works fine, but otherwise I get "The requested operation has failed!". This is what I ended up keeping, since it works when only the one line is commented out:

        LoadModule php5_module "c:/php/php5apache2_2.dll"
        <IfModule mod_php5.c>
            AddType application/x-httpd-php .php
            PHPIniDir "c:/php"
            DirectoryIndex index.php
        </IfModule>

    EDIT: How can I stop getting this error message?

    UPDATE: Also, please note that I took note of the message on the PHP site stating that if PHP 5.2 is to be run with Apache, the VC6 build should be used, not VC9. I had VC9, so I replaced it with VC6; the file is labeled php-5.2.17-nts-Win32-VC6-x86.zip
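
    One way to see the real reason behind the generic "The requested operation has failed!" popup is to run Apache's config test from a console, which prints the underlying module-load error (the install path below is the typical default and may differ):

        cd "C:\Program Files\Apache Software Foundation\Apache2.2\bin"
        httpd.exe -t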

    Read the article

  • How can I change the default keyboard layout in Ubuntu 10.10?

    - by startoftext
    Ubuntu keeps detecting my keyboard layout as Romanian. It almost always does this on a reboot, sometimes on waking from sleep, and sometimes it just randomly changes back to Romanian while the computer is on. I always set it back to "USA" in the keyboard preferences and delete the Romanian layout. Which files control this in Ubuntu? I looked in xorg.conf and did not find any keyboard settings. How can I set this to USA permanently? I have a laptop with a typical US-layout built-in keyboard, and I also have an Apple keyboard, also US layout, connected via USB. I have run other distributions on this same setup before and never had this happen.
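
    For reference, a sketch of where the layout can be pinned outside the GNOME preferences; file names vary a little between releases, so treat the paths as assumptions:

        # Immediate, per-session fix:
        setxkbmap us

        # Persistent console/X default -- /etc/default/keyboard on newer
        # releases, /etc/default/console-setup on some older ones:
        XKBLAYOUT="us"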

    Read the article

  • Setting Environment Variable for Tomcat 6 Servlet

    - by amaevis
    I'm using Ubuntu's default installation of Tomcat 6. I'm deploying a ROOT.war and trying to set an environment variable specific to it, i.e. accessible from System.getenv() in the servlet's init(config). According to the docs (http://tomcat.apache.org/tomcat-6.0-doc/config/context.html), I can specify this in a Context element in conf/Catalina/localhost/ROOT.xml. I've created that with these contents:

        <Context>
            <Environment name="FOO" value="bar" type="java.lang.String" override="false"/>
        </Context>

    And I've deployed the webapp as usual, i.e. to webapps/ROOT.war. System.getenv("FOO") in the servlet's init(config) still returns null. What am I missing?
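
    Worth noting: a Context <Environment> element defines a JNDI env-entry (looked up through InitialContext), not an OS-level variable, so System.getenv() will never see it. An OS-level variable has to be exported by whatever launches Tomcat; on Ubuntu's packaging that is usually /etc/default/tomcat6 (a sketch, assuming the init script on this release passes its environment through):

        # /etc/default/tomcat6
        export FOO=bar

        # then restart the service:
        sudo service tomcat6 restart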

    Read the article

  • Postgres Remote Access

    - by boot-baby-boot
    I am trying to connect to Postgres remotely. I have followed this tutorial http://www.cyberciti.biz/faq/howto-fedora-linux-install-postgresql-server/ and have executed the following commands to see if remote access is possible.

        [root@printmyworld ~]# egrep -i "(listen_addresses|port|tcpip_socket).*=.+" /var/lib/pgsql/data/postgresql.conf
        #listen_addresses = '*'         # what IP address(es) to listen on;
        #port = 5432

        [root@printmyworld ~]# lsof +c0 -anPiTCP -upostgres
        COMMAND    PID  USER     FD  TYPE DEVICE     SIZE NODE NAME
        postmaster 9323 postgres 3u  IPv4 2875987353      TCP  127.0.0.1:5432 (LISTEN)
        postmaster 9323 postgres 4u  IPv6 2875987354      TCP  [::1]:5432 (LISTEN)

    I am suspicious of this line:

        postmaster 9323 postgres 3u IPv4 2875987353 TCP 127.0.0.1:5432 (LISTEN)

    My server IP address is 1yy.000.1xx.000. Should it be 1yy.000.1xx.000:5432?
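
    The output above shows the server bound only to localhost. A sketch of the usual steps to open it up, using the Fedora paths from the question and an example client network:

        # 1) /var/lib/pgsql/data/postgresql.conf -- uncomment and set:
        listen_addresses = '*'

        # 2) /var/lib/pgsql/data/pg_hba.conf -- allow the remote network
        #    (the CIDR below is illustrative):
        host    all    all    192.168.1.0/24    md5

        # 3) restart PostgreSQL:
        service postgresql restart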

    Read the article

  • How do I set up an Alias on Apache with XAMPP on Linux? (Permission problem)

    - by knarf
    XAMPP works fine, but I want http://localhost/f to point to /home/knarf/prog/php/fwyxz. I've run chmod -R 777 /home/knarf/prog/php/fwyxz and added Alias /f /home/knarf/prog/php/fwyxz at the end of httpd.conf. When I try to access it, I get a 403. From the Apache error_log:

        [error] [client 127.0.0.1] (13)Permission denied: access to /f denied.

    I've already tried several solutions (userdir and symlinks), but they both failed with the same error. I've also tried adding this after the Alias:

        <Directory "/home/knarf/prog/php/fwyxz">
            Order allow,deny
            Allow from all
        </Directory>

    But again, permission denied. Now, if I change the user/group under which Apache runs from nobody to knarf, it seems to work (static files are OK), but PHP can't use/initialize sessions:

        [error] [client 127.0.0.1] PHP Warning: session_start() [function.session-start]: open(/tmp/sess_r5nrmu4ugqguqqe83rs53lq6k0, O_RDWR) failed: Permission denied (13) in /home/knarf/prog/php/fwyxz/index.php on line 3
        [error] [client 127.0.0.1] PHP Warning: Unknown: open(/tmp/sess_r5nrmu4ugqguqqe83rs53lq6k0, O_RDWR) failed: Permission denied (13) in Unknown on line 0
        [error] [client 127.0.0.1] PHP Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct () in Unknown on line 0

    This is really frustrating.
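
    One frequent cause of the first error is a parent directory the Apache user cannot traverse: /home/knarf itself is often mode 700, and the 777 on the target doesn't help then. A quick diagnostic and fix sketch (paths from the question):

        # Show the permissions of every component along the path:
        namei -m /home/knarf/prog/php/fwyxz

        # Grant traversal (execute) on the components that lack it, e.g.:
        chmod o+x /home/knarf /home/knarf/prog /home/knarf/prog/php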

    Read the article

  • Activating SSL on Tomcat

    - by toom
    I want to encrypt the HTTP traffic on a Tomcat instance via SSL, so I followed the most simplistic approach described on various webpages, but it simply does not work. Here is what I did:

    1. Ran "keytool -genkey -alias tomcat -keyalg RSA" and entered "changeit" as the password (since this is the default chosen by Tomcat).
    2. Altered $CATALINA_HOME/conf/server.xml by uncommenting the following line:

        <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                   maxThreads="150" scheme="https" secure="true"
                   clientAuth="false" sslProtocol="TLS" />

    3. Restarted Tomcat.

    Opening https://localhost:8443 does not work. However, I can still access the page via normal HTTP at http://localhost:8080. The logfile does not contain any suspicious information. What is going wrong here?
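
    A sketch of the connector with the keystore spelled out explicitly, which is the piece most walkthroughs add next; the keystore path shown is keytool's default location for the user running Tomcat, and the password matches the one entered above:

        <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                   maxThreads="150" scheme="https" secure="true"
                   clientAuth="false" sslProtocol="TLS"
                   keystoreFile="${user.home}/.keystore"
                   keystorePass="changeit" />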

    Read the article

  • Why does my CentOS logrotate run at random times?

    - by Mike Pennington
    I put a logrotate configuration file in /etc/logrotate.d/ and expected the logs to rotate at a consistent time; however, they do not... log rotation times are seemingly random, +/- one hour. Why are the log rotation start times random, and how can I change this?

    Informational: my logrotate config file looks like this...

        /opt/backups/network/*.conf {
            copytruncate
            rotate 30
            daily
            create 644 root root
            dateext
            maxage 30
            missingok
            notifempty
            compress
            delaycompress
            postrotate
                ## Create symbolic links in daily/
                PATH=`/usr/bin/dirname $1`;
                FILE=`/bin/basename $1`;
                /bin/ln -s $1 $PATH/daily/$FILE
            endscript
        }
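
    For context, on CentOS 6 (if that is the release in play) the /etc/cron.daily jobs, which include logrotate, are launched by anacron rather than at a fixed minute, and anacron adds a randomized delay. The knobs live in /etc/anacrontab; the values below are roughly the distribution defaults and can be reduced to get a predictable start time:

        # /etc/anacrontab
        RANDOM_DELAY=45          # up to 45 extra minutes, chosen at random
        START_HOURS_RANGE=3-22   # window during which jobs are allowed to start

        # period  delay  job-id      command
        1         5      cron.daily  nice run-parts /etc/cron.daily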

    Read the article

  • mod_wsgi -apache configuration file

    - by Kevin
    Guys, sorry, I'm a newbie at this, but I've been following the mod_wsgi configuration tutorial and it's very spotty. In my httpd.conf file I add the virtual host like so:

        ### 'Main' server configuration
        #
        # The directives in this section set up the values used by the 'main'
        # server, which responds to any requests that aren't handled by a
        # <VirtualHost> definition.  These values also provide defaults for
        # any <VirtualHost> containers you may define later in the file.
        #
        # All of these directives may appear inside <VirtualHost> containers,
        # in which case these default settings will be overridden for the
        # virtual host being defined.
        #
        ServerName wsgihost
        DocumentRoot "/Library/WebServer/Documents"
        <Directory "/Library/WebServer/Documents">
            Order allow,deny
            Allow from all
        </Directory>
        WSGIScriptAlias /myapp /Users/KL/modwsgi/env/myapp.wsgi
        <Directory "/Users/KL/modwsgi/env">
            <Files myapp.wsgi>
                Order allow,deny
                Allow from all
            </Files>
        </Directory>

    I also added the following to my hosts file:

        127.0.1.1 wsgihost

    but I can't seem to connect. Am I doing something terribly wrong?
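
    For comparison, a sketch of what an explicit name-based virtual host block (rather than the main-server section above) would look like, reusing the paths from the question; the port and the NameVirtualHost line are assumptions for an Apache 2.2-era setup:

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName wsgihost
            DocumentRoot "/Library/WebServer/Documents"
            WSGIScriptAlias /myapp /Users/KL/modwsgi/env/myapp.wsgi
            <Directory "/Users/KL/modwsgi/env">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>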

    Read the article

  • Cloned CentOS 6.4 webserver for test purposes: virtual host, .htaccess, URL redirect issue

    - by Shogoot
    I see similar questions, but not my exact challenge.

    What I have done so far: I cloned a prod server onto a VMware VM to use as a test server for new functionality I'm going to write. I'm not a sysadmin by trade, but I'm new to this company and I have to do some things that are outside my comfort zone (that's a good thing :) ).

    The prod server has 2 sites on it, s1.com and s2.com. In /html/s1/ and /html/s2/ there's an .htaccess file under each s*/, looking like this:

        RewriteEngine ON
        RewriteBase /
        RewriteCond %{QUERY_STRING} id=([0-9]+)
        RewriteRule ^.* %1.htm
        RewriteCond %{QUERY_STRING} page=modules/checkout
        RewriteRule ^.* order.php
        RewriteCond %{QUERY_STRING} page=pages/sidekart
        RewriteRule ^.* pages/sidekart.htm

    The issue is that s1 has a lot of pages that really belong under a third domain, s3; the rule on lines 4 and 5 redirects them to /html/s1/. An example of such a URL is s3.com/?page=modules/product&id=521614.

    I'm trying to get those URLs (without modifying the URL itself) to land in s3's /html/s3/ directory structure instead. I set that up by adding a new virtual host, s3, to the test server's httpd.conf with tests3.com as the server name (renaming the other sites to tests1.com and tests2.com), adding an .htaccess to the s3 root directory as well, and creating an /html/s3/ directory structure populated with an index.html, etc.

    But when I take the same URL (s3.com/?page=modules/product&id=521614) and change it to tests3.com/?page=modules/product&id=521614, I get s1's index page in my browser. I've poked around for about a day now and I can't figure out why this happens.
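
    When Apache cannot match the request's Host header to any configured virtual host, it silently serves the first <VirtualHost> defined, which is one plausible explanation for s1's index page showing up. A checking sketch, with the test hostnames from the question:

        # Show how Apache has parsed the vhosts and which one is the default:
        apachectl -S

        # A name-based vhost for the new site needs, at minimum, something like:
        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName   tests3.com
            ServerAlias  www.tests3.com
            DocumentRoot /html/s3
        </VirtualHost>

        # ...and tests3.com has to resolve to the test VM, e.g. via /etc/hosts
        # on the client machine.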

    Read the article

  • Different user group cannot upload files on the server

    - by Dallal
    I have a CentOS server running in Thailand, and I'm in Canada. The guy at the computer center who set up the server for me doesn't really understand much about Linux and left me an issue to solve myself. I just moved from Mac Server to a Linux server, and the first problem I'm facing is:

        `file name` has failed to upload due to an error
        The uploaded file could not be moved to `location name`

    From my experience I know these problems are all about permissions. So I went ahead and checked my whole folder and found that everything in it is owned as myusername mygroupname; then I checked the httpd process on the server and it defaults to apache apache. My question is: how can I make my user be in the same group as the apache group, so that I don't have any problems uploading or changing data in my files, without affecting other users on the same server? I hold an administrator account, not the root account, but I can change stuff on the server root with no problem. When I was with godaddy.com there was never any problem with permissions, and I wish I knew how they configure that :(
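
    A sketch of the usual shared-group arrangement (the upload path is a placeholder; only that path's group changes, so other users are untouched):

        # Put the shell user into the apache group:
        sudo usermod -a -G apache myusername

        # Hand the upload directory to that group and make it group-writable,
        # with new files inheriting the group:
        sudo chgrp -R apache /path/to/site/uploads
        sudo chmod -R g+rwX  /path/to/site/uploads
        sudo chmod g+s       /path/to/site/uploads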

    Read the article

  • Two 24" monitors displayed vertically in Ubuntu

    - by QuinnBaetz
    I just got two 24-inch monitors and want them both displayed vertically, side by side. I got one to work, but I can't figure out the command to get them both vertical and displaying.

    xorg.conf:

        Section "Monitor"
            Identifier "Configured Monitor"
        EndSection

        Section "Screen"
            Identifier "Default Screen"
            Monitor "Configured Monitor"
            Device "Configured Video Device"
            SubSection "Display"
                Virtual 3840 1200
            EndSubSection
        EndSection

        Section "Device"
            Identifier "Configured Video Device"
        EndSection

    xrandr:

        Screen 0: minimum 320 x 200, current 3840 x 1200, maximum 3840 x 1200
        VGA connected 1920x1200+0+0 (normal left inverted right x axis y axis) 519mm x 324mm
           1920x1200      60.0*+
           1600x1200      60.0
           1280x1024      75.0     60.0
           1152x864       75.0
           1024x768       75.0     60.0
           800x600        75.0     60.3
           640x480        75.0     59.9
           720x400        70.1
        TMDS-1 connected 1920x1200+1920+0 (normal left inverted right x axis y axis) 518mm x 324mm
           1920x1200      60.0*+
           1600x1200      60.0
           1280x1024      75.0     60.0
           1152x864       75.0
           1024x768       75.0     60.0
           800x600        75.0     60.3
           640x480        75.0     59.9
           720x400        70.1

    Any clue on the command to make them vertical? Thanks!
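
    A sketch using the output names reported by xrandr above; note that two portrait 1920x1200 panels need a Virtual size of at least 2400x1920, so the Virtual line in xorg.conf would have to grow accordingly:

        xrandr --output VGA    --rotate left --pos 0x0 \
               --output TMDS-1 --rotate left --pos 1200x0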

    Read the article

  • Can't boot Windows after installing Linux

    - by user4035
    I have a partition, /dev/sdb1, where my old Windows XP resides. All the files are there intact, and I can see them when mounting the disk from Linux. Linux is on /dev/sdb2. But when I choose Windows at the LILO prompt, it doesn't load. I have the following lilo.conf:

        boot = /dev/sdb

        # Linux bootable partition config begins
        image = /boot/vmlinuz
          root = /dev/sdb2
          label = Linux
          read-only    # Partitions should be mounted read-only for checking
        # Linux bootable partition config ends

        # Windows bootable partition config begins
        other = /dev/sdb1
          label = Windows
          table = /dev/sdb
        # Windows bootable partition config ends

    What can be wrong?
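
    One commonly suggested tweak when Windows XP sits on the second BIOS disk is to have LILO swap the drive mapping so XP believes it is booting from the first disk; a sketch (untested here, and lilo must be re-run after any change to the config):

        other = /dev/sdb1
          label = Windows
          table = /dev/sdb
          map-drive = 0x80
             to = 0x81
          map-drive = 0x81
             to = 0x80

        # then reinstall the boot loader:
        lilo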

    Read the article
