Search Results

Search found 10445 results on 418 pages for 'basic'.

Page 363/418

  • CUPS Web Admin Error 500 Unknown

    - by Floyd Resler
    I keep getting a 500 Unknown error whenever I navigate off the home page of my CUPS web admin. I'm sure I have something misconfigured but I'm not sure what. Here's my configuration: # # "$Id: cupsd.conf.in 8805 2009-08-31 16:34:06Z mike $" # # Sample configuration file for the CUPS scheduler. See "man cupsd.conf" for a # complete description of this file. # # Log general information in error_log - change "warn" to "debug" # for troubleshooting... LogLevel warn # Administrator user group... SystemGroup lpadmin sys root # Only listen for connections from the local machine. Listen 192.168.6.101:631 Listen /var/run/cups/cups.sock ServerName 192.168.6.101 # Show shared printers on the local network. Browsing On BrowseOrder allow,deny BrowseAllow all BrowseLocalProtocols CUPS BrowseAddress 192.168.6.255 # Default authentication type, when authentication is required... DefaultAuthType Basic # Restrict access to the server... Order allow,deny Allow From All Allow From 127.0.0.1 # Restrict access to the admin pages... AuthType Default Require user @SYSTEM Order allow,deny Allow From All Allow From 127.0.0.1 # Restrict access to configuration files... AuthType Default Require user @SYSTEM Order allow,deny Allow From All Allow From 127.0.0.1 # Set the default printer/job policies... # Job-related operations must be done by the owner or an administrator... Require user @OWNER @SYSTEM Order deny,allow # All administration operations require an administrator to authenticate... AuthType Default Require user @SYSTEM Order deny,allow # All printer operations require a printer operator to authenticate... AuthType Default Require user @SYSTEM Order deny,allow # Only the owner or an administrator can cancel or authenticate a job... Require user @OWNER @SYSTEM Order deny,allow Order deny,allow # Set the authenticated printer/job policies... # Job-related operations must be done by the owner or an administrator... AuthType Default Order deny,allow AuthType Default Require user @OWNER @SYSTEM Order deny,allow # All administration operations require an administrator to authenticate... AuthType Default Require user @SYSTEM Order deny,allow # All printer operations require a printer operator to authenticate... AuthType Default Require user @SYSTEM Order deny,allow # Only the owner or an administrator can cancel or authenticate a job... AuthType Default Require user @OWNER @SYSTEM Order deny,allow Order deny,allow # # End of "$Id: cupsd.conf.in 8805 2009-08-31 16:34:06Z mike $". #
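
    A note on diagnosing this: as pasted, the configuration appears to have lost its <Location ...> and <Policy ...> wrappers, and cupsd often answers with a generic 500 when cupsd.conf does not parse cleanly or an access rule blocks a page. Raising the log level usually reveals the specific reason. A minimal sketch, assuming Debian/Ubuntu-style paths and service names:

        sudo sed -i 's/^LogLevel warn/LogLevel debug/' /etc/cups/cupsd.conf
        sudo /etc/init.d/cups restart
        tail -f /var/log/cups/error_log    # reproduce the 500 and read the reason here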

    Read the article

  • Problem connecting to SSH in office network

    - by Jeune
    I have trouble connecting via SSH to a server whenever I am in the office. I get as far as being prompted for my password and then after that there's a long wait which always ends in a Write failed: Broken pipe This is only for connecting via SSH. I use svn to commit files to a repository hosted on the same server and there are no hitches. Furthermore, this only happens in our office. When I go the university or whenever I am at home or at the coffee shop I am able to connect seamlessly. There are no firewalls in our office. It's just a basic wireless router connected to a modem setup. It's the same setup I have at home and I guess the same setup in the coffee shop. What are the causes for a broken pipe and why does this phenomenon only happen when I try connect via SSH and not when I work with svn on the same server? Updated: Some debug logs after authentication: debug3: packet_send2: adding 48 (len 64 padlen 16 extra_pad 64) debug2: we sent a password packet, wait for reply debug1: Authentication succeeded (password). debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Entering interactive session. debug2: callback start debug2: client_session2_setup: id 0 debug2: channel 0: request pty-req confirm 1 debug1: Sending environment. debug3: Ignored env ORBIT_SOCKETDIR debug3: Ignored env SSH_AGENT_PID debug3: Ignored env TERM debug3: Ignored env SHELL debug3: Ignored env XDG_SESSION_COOKIE debug3: Ignored env WINDOWID debug3: Ignored env GNOME_KEYRING_CONTROL debug3: Ignored env GTK_MODULES debug3: Ignored env USER debug3: Ignored env LS_COLORS debug3: Ignored env LIBGL_DRIVERS_PATH debug3: Ignored env SSH_AUTH_SOCK debug3: Ignored env DEFAULTS_PATH debug3: Ignored env SESSION_MANAGER debug3: Ignored env USERNAME debug3: Ignored env XDG_CONFIG_DIRS debug3: Ignored env DESKTOP_SESSION debug3: Ignored env LIBGL_ALWAYS_INDIRECT debug3: Ignored env PATH debug3: Ignored env PWD debug3: Ignored env GDM_KEYBOARD_LAYOUT debug1: Sending env LANG = en_PH.utf8 debug2: channel 0: request env confirm 0 debug3: Ignored env GNOME_KEYRING_PID debug3: Ignored env MANDATORY_PATH debug3: Ignored env GDM_LANG debug3: Ignored env GDMSESSION debug3: Ignored env SHLVL debug3: Ignored env HOME debug3: Ignored env GNOME_DESKTOP_SESSION_ID debug3: Ignored env LOGNAME debug3: Ignored env XDG_DATA_DIRS debug3: Ignored env DBUS_SESSION_BUS_ADDRESS debug3: Ignored env LESSOPEN debug3: Ignored env WINDOWPATH debug3: Ignored env DISPLAY debug3: Ignored env LESSCLOSE debug3: Ignored env XAUTHORITY debug3: Ignored env COLORTERM debug3: Ignored env OLDPWD debug3: Ignored env _ debug2: channel 0: request shell confirm 1 debug2: fd 3 setting TCP_NODELAY debug2: callback done debug2: channel 0: open confirm rwindow 0 rmax 32768 UPDATE 2011-14-07: I am able to connect to the server via SSH now. I didn't do anything but that's because there is no one in the office but me! Having said that, is it possible that it has something to do with the number of sessions an SSH server can handle? UPDATE 2011-14-07: I try to login via SSH through Putty on another machine running windows together with my current SSH session in Ubuntu and now it seems my SSH session in Ubuntu has been dropped. I can't type into the terminal. Is Putty the culprit now?
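
    One common cause of "Write failed: Broken pipe" on long-lived interactive sessions (while short svn requests keep working) is a router or NAT device silently dropping idle TCP connections. A quick way to test that theory is to have the client send keepalives; the host name below is a placeholder:

        ssh -v -o ServerAliveInterval=30 -o ServerAliveCountMax=4 user@yourserver

    If that keeps the session alive, the same two options can be made permanent in ~/.ssh/config.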

    Read the article

  • PHP at the root directory using Nginx on Linode and Ubuntu 12.04

    - by Steve Kinney
    I originally set up my Linode to serve the Sinatra applications I was developing with Phusion Passenger, and I have it working great for that. However, as time goes on, I find myself needing just a wee bit of PHP to do a server-side thing here or there. My basic setup was based on this Linode recipe (I copied and pasted the parts that I needed; I did not install Redis and Node). If I go to http://scholarsnyc.com/index.php everything works great. If I just go to the base URL, however, I get a 403 Forbidden error (I have a vanilla HTML page there for now). I've played with file permissions and the same file will work if I call it directly. I've done my homework and nothing I try seems to work. I'm sure there is an obvious error. I'm also sure that there are some rookie mistakes in my Nginx configuration (some of those mistakes are artifacts of trying different fixes from my research):
        user www-data www-data;
        worker_processes 1;
        events {
            worker_connections 1024;
        }
        upstream php {
            server 127.0.0.1:9001;
        }
        http {
            passenger_root /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.12;
            passenger_ruby /usr/local/bin/ruby;
            include mime.types;
            default_type application/octet-stream;
            index index.php index.html index.htm;
            sendfile on;
            keepalive_timeout 65;
            server {
                server_name localhost scholarsnyc.com www.scholarsnyc.com;
                root /srv/www/scholarsnyc.com/public;
                location / {
                    index index.php;
                }
                location ~ \.php$ {
                    fastcgi_pass 127.0.0.1:9000;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include fastcgi_params;
                }
            }
            server {
                server_name data.scholarsnyc.com;
                root /srv/www/data.scholarsnyc.com/public;
                passenger_enabled on;
            }
            server {
                server_name tech.scholarsnyc.com;
                root /srv/www/tech.scholarsnyc.com/public;
                location / {
                    root /srv/www/tech.scholarsnyc.com/public;
                    index index.php index.html index.htm;
                }
            }
        }
    Any other optimizations are also appreciated. I literally don't know what to do at this point.
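
    Two quick checks usually pinpoint a 403 like this: the nginx error log states whether the request died on permissions or on a missing index file, and namei walks the permissions on every directory in the path. Paths below are taken from the question:

        sudo tail -n 20 /var/log/nginx/error.log
        namei -l /srv/www/scholarsnyc.com/public/index.html

    If the log reports "directory index of ... is forbidden", the likely fix is that location / only lists index.php, so the plain HTML page is never used as an index there; changing that directive to index index.php index.html; (or dropping it so the http-level index line applies) should resolve it.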

    Read the article

  • How do I enable InnoDB on Ubuntu Server 10.04?

    - by Matt
    Here is my entire my.cnf [client] port = 3306 socket = /var/run/mysqld/mysqld.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # This was formally known as [safe_mysqld]. Both versions are currently parsed. [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] key_buffer = 224M sort_buffer_size = 4M read_buffer_size = 4M read_rnd_buffer_size = 4M myisam_sort_buffer_size = 12M query_cache_size = 44M # # * Basic Settings # # # * IMPORTANT # If you make changes to these settings and your system uses apparmor, you may # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld. # user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. bind-address = 127.0.0.1 # # * Fine Tuning # #key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP #max_connections = 100 #table_cache = 64 #thread_concurrency = 10 # # * Query Cache Configuration # query_cache_limit = 1M #query_cache_size = 16M # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. # As of 5.1 you can enable the log at runtime! #general_log_file = /var/log/mysql/mysql.log #general_log = 1 log_error = /var/log/mysql/error.log # Here you can see queries with especially long duration #log_slow_queries = /var/log/mysql/mysql-slow.log #long_query_time = 2 #log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log expire_logs_days = 10 max_binlog_size = 100M #binlog_do_db = include_database_name #binlog_ignore_db = include_database_name # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # # * Security Features # # Read the manual, too, if you want chroot! # chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 16M # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. 
    # !includedir /etc/mysql/conf.d/

    And here is my show engines call... I have no idea what I need to do to enable InnoDB:

        show engines;
        +------------+---------+----------------------------------------------------------------+--------------+------+------------+
        | Engine     | Support | Comment                                                        | Transactions | XA   | Savepoints |
        +------------+---------+----------------------------------------------------------------+--------------+------+------------+
        | MyISAM     | DEFAULT | Default engine as of MySQL 3.23 with great performance         | NO           | NO   | NO         |
        | MRG_MYISAM | YES     | Collection of identical MyISAM tables                          | NO           | NO   | NO         |
        | BLACKHOLE  | YES     | /dev/null storage engine (anything you write to it disappears) | NO           | NO   | NO         |
        | CSV        | YES     | CSV storage engine                                             | NO           | NO   | NO         |
        | MEMORY     | YES     | Hash based, stored in memory, useful for temporary tables      | NO           | NO   | NO         |
        | FEDERATED  | NO      | Federated MySQL storage engine                                 | NULL         | NULL | NULL       |
        | ARCHIVE    | YES     | Archive storage engine                                         | NO           | NO   | NO         |
        +------------+---------+----------------------------------------------------------------+--------------+------+------------+
        7 rows in set (0.00 sec)
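
    For reference, InnoDB is compiled into the stock MySQL 5.1 packages on Ubuntu 10.04 and enabled by default, so when it disappears from SHOW ENGINES it has usually failed to initialize at startup (a changed innodb_log_file_size with old ib_logfile0/ib_logfile1 still in the datadir is a classic cause) or has been disabled by a skip-innodb line somewhere. A few hedged checks, assuming the stock Ubuntu paths:

        mysql -u root -p -e "SHOW VARIABLES LIKE 'have_innodb';"
        grep -Rni innodb /etc/mysql/my.cnf /etc/mysql/conf.d/
        sudo tail -n 50 /var/log/mysql/error.log    # InnoDB logs the reason it refused to start here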

    Read the article

  • How Do I Restrict Repository Access via WebSVN?

    - by kaybenleroll
    I have multiple subversion repositories which are served up through Apache 2.2 and WebDAV. They are all located in a central place, and I used this debian-administration.org article as the basis (I dropped the use of the database authentication for a simple htpasswd file though). Since then, I have also started using WebSVN. My issue is that not all users on the system should be able to access the different repositories, and the default setup of WebSVN is to allow anyone who can authenticate. According to the WebSVN documentation, the best way around this is to use subversion's path access system, so I looked to create this, using the AuthzSVNAccessFile directive. When I do this though, I keep getting "403 Forbidden" messages. My files look like the following: I have default policy settings in a file: <Location /svn/> DAV svn SVNParentPath /var/lib/svn/repository Order deny,allow Deny from all </Location> Each repository gets a policy file like below: <Location /svn/sysadmin/> Include /var/lib/svn/conf/default_auth.conf AuthName "Repository for sysadmin" require user joebloggs jimsmith mickmurphy </Location> The default_auth.conf file contains this: SVNParentPath /var/lib/svn/repository AuthType basic AuthUserFile /var/lib/svn/conf/.dav_svn.passwd AuthzSVNAccessFile /var/lib/svn/conf/svnaccess.conf I am not fully sure why I need the second SVNParentPath in default_auth.conf, but I just added that today as I was getting error messages as a result of adding the AuthzSVNAccessFile directive. With a totally permissive access file [/] joebloggs = rw the system worked fine (and was essentially unchanged), but as I soon as I start trying to add any kind of restrictions such as [sysadmin:/] joebloggs = rw instead, I get the 'Permission denied' errors again. The log file entries are: [Thu May 28 10:40:17 2009] [error] [client 89.100.219.180] Access denied: 'joebloggs' GET websvn:/ [Thu May 28 10:40:20 2009] [error] [client 89.100.219.180] Access denied: 'joebloggs' GET svn:/sysadmin What do I need to do to get this to work? Have configured apache wrong, or is my understanding of the svnaccess.conf file incorrect? If I am going about this the wrong way, I have no particular attachment to my overall approach, so feel free to offer alternatives as well. UPDATE (20090528-1600): I attempted to implement this answer, but I still cannot get it to work properly. I know most of the configuration is correct, as I have added [/] joebloggs = rw at the start and 'joebloggs' then has all the correct access. When I try to go repository-specific though, doing something like [/] joebloggs = rw [sysadmin:/] mickmurphy = rw then I got a permission denied error for mickmurphy (joebloggs still works), with an error similar to what I already had previously [Thu May 28 10:40:20 2009] [error] [client 89.100.219.180] Access denied: 'mickmurphy' GET svn:/sysadmin Also, I forgot to explain previously that all my repositories are underneath /var/lib/svn/repository UPDATE (20090529-1245): Still no luck getting this to work, but all the signs seem to be pointing to the issue being with path-access control in subversion not working properly. My assumption is that I have not conf

    Read the article

  • Dangers of Running Computers w/o Air Conditioning

    - by Daniel Bingham
    I recently moved into an apartment without air conditioning. This is fine most of the time, as I am in upstate New York; it only ever gets above the high 70s during the hottest of the summer months, and when it does, I'm stubborn enough that I'll just deal with wearing minimal clothing around the house. However, I'm worried about my computers. I'm a software developer and gamer, so many of my machines are very high powered, and at least one of them is a server that must be left on 24/7 (not just a game server - it also serves multiple websites). I've never had to worry about the heat too much before, as I always lived in buildings with central air. The in-building temperature rarely got much above 70 F, and all of the machines I built had good enough air cooling that I never saw a problem. Now the temperature in the building is pushing 100 F and I'm worried that the machines will not be able to keep themselves cool enough by simply blowing already hot air over themselves. I've turned off the hottest of them; the server, however, I cannot. It's an old Dell (not a custom build) that runs on a Pentium 4 (2.2GHz). It only has a single hard drive and integrated video, and it's not running any processor-intensive services - just basic LAMP. It used to run a MUD server, but that's off for now, so it should be idling most of the time. I haven't been able to find any sort of built-in temperature sensors in the hardware... at least not any that the programs I've found in the Debian repository can read. And it's an inherited machine for which I do not have the full specs, so I don't know the tolerances anyway. How worried should I be about it melting down on me? How worried should I be about the hard drive melting or becoming corrupted? To generalize the question for other people: what are the safe temperature tolerances for most machines? How widely do they vary, and how does one go about determining when a machine is running too hot and needs to be shut down?
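
    On the monitoring side, the usual Debian tools are lm-sensors for board/CPU sensors and hddtemp or smartmontools for the drive, which on an older Dell is often the only readable (and the most heat-sensitive) sensor. A sketch, assuming the disk is /dev/sda; an old Pentium 4 board may simply have no supported sensor chip:

        sudo apt-get install lm-sensors hddtemp smartmontools
        sudo sensors-detect    # probe for a supported sensor chip, then run: sensors
        sudo hddtemp /dev/sda
        sudo smartctl -A /dev/sda | grep -i temp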

    Read the article

  • PXE net install of CentOS with a static IP

    - by Stu2000
    I seem to be unable to perform a kickstart installation of centos5.8 with a netinstall. It correctly gets into the text installer, but keeps sending out a request for the dhcp server and failing. I have tried to manually set the IP everywhere. Here is my pxelinux.cfg file DEFAULT menu PROMPT 0 MENU TITLE Ubuntu MAAS TIMEOUT 200 TOTALTIMEOUT 6000 ONTIMEOUT local LABEL centos5.8-net kernel /images/centos5.8-net/vmlinuz MENU LABEL centos5.8-net append initrd=/images/centos5.8-net/initrd.img ip=192.168.1.163 netmask=255.255.255.0 hostname=client101 gateway=192.168.1.1 ksdevice=eth0 dns=8.8.8.8 ks=http://192.168.1.125/cblr/svc/op/ks/profile/centos5.8-net MENU end and here is my kickstart file: # Kickstart file for a very basic Centos 5.8 system # Assigns the server ip: 192.211.48.163 # DNS 8.8.8.8, 8.8.4.4 # London TZ install url --url http://mirror.centos.org/centos-5/5.8/os/i386 lang en_US.UTF-8 keyboard us network --device=eth0 --bootproto=static --ip=192.168.1.163 --netmask=255.255.255.0 --gateway=192.168.1.1 --nameserver=8.8.8.8,8.8.4.4 --hostname=client1-server --onboot=on rootpw --iscrypted $1$Snrd2bB6$CuD/07AX2r/lHgVTPZyAz/ firewall --enabled --port=22:tcp authconfig --enableshadow --enablemd5 selinux --enforcing timezone --utc Europe/London bootloader --location=mbr --driveorder=xvda --append="console=xvc0" # The following is the partition information you requested # Note that any partitions you deleted are not expressed # here so unless you clear all partitions first, this is # not guaranteed to work part /boot --fstype ext3 --size=100 --ondisk=xvda part pv.2 --size=0 --grow --ondisk=xvda volgroup VolGroup00 --pesize=32768 pv.2 logvol swap --fstype swap --name=LogVol01 --vgname=VolGroup00 --size=528 --grow --maxsize=1056 logvol / --fstype ext3 --name=LogVol00 --vgname=VolGroup00 --size=1024 --grow %packages @base @core @dialup @editors @text-internet keyutils iscsi-initiator-utils trousers bridge-utils fipscheck device-mapper-multipath sgpio emacs Here is my dhcp file: ddns-update-style interim; allow booting; allow bootp; ignore client-updates; set vendorclass = option vendor-class-identifier; subnet 192.168.1.0 netmask 255.255.255.0 { host tower { hardware ethernet 50:E5:49:18:D5:C6; fixed-address 192.168.1.163; option routers 192.168.1.1; option domain-name-servers 8.8.8.8,8.8.4.4; option subnet-mask 255.255.255.0; filename "/pxelinux.0"; default-lease-time 21600; max-lease-time 43200; next-server 192.168.1.125; } } Is it impossible to prevent it asking for a dynamic ip before trying to install from the net? Perhaps there is an error in of my files? My dhcp server is set to ignore client-updates, and is set to only works with one mac address whilst testing.

    Read the article

  • Unable to telnet out on port 25 on Windows Server 2008

    - by NickGPS
    Hi All, I just setup a Windows 2008 R2 server and am trying to get a basic mail server up and running so that I can send emails from my applications. I setup a virtual SMTP server in IIS6 and tried doing a local telnet to port 25, which seemed to work fine. There were no errors during this stage and I can see the mail message appear in the Queue folder. The problem is that mail never leaves the Queue folder. I then tried to telnet to a remote mail server on port 25 but couldn't connect:- telnet 209.85.227.27 25 Could not open connection to the host, on port 25: Connection failed) I checked my firewall and there is a default setting to allow all outgoing TCP traffic with no restriction. I even setup a specific rule for outgoing port 25 traffic but to no avail. I then ran a SmtpDiag.exe command .\SmtpDiag.exe [email protected] [email protected] and received the following output Searching for Exchange external DNS settings. Computer name is WIN-SERVERNAME. Failed to connect to the domain controller. Error: 8007054b Checking SOA for gmail.com. Checking external DNS servers. Checking internal DNS servers. SOA serial number match: Passed. Checking local domain records. Checking MX records using TCP: gmail.com. Checking MX records using UDP: gmail.com. Both TCP and UDP queries succeeded. Local DNS test passed. Checking remote domain records. Checking MX records using TCP: gmail.com. Checking MX records using UDP: gmail.com. Both TCP and UDP queries succeeded. Remote DNS test passed. Checking MX servers listed for [email protected]. Connecting to gmail-smtp-in.l.google.com [209.85.227.27] on port 25. Connecting to the server failed. Error: 10060 Failed to submit mail to gmail-smtp-in.l.google.com. Is there any other diagnostics I can do to figure out if it's my firewall or something else? I have removed antivirus to make sure that it wasn't causing the problem. Any ideas would be much appreciated.
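
    The SmtpDiag output (DNS resolves, but the TCP connection to gmail-smtp-in on port 25 fails with error 10060 even with the local firewall wide open) is the classic signature of outbound port 25 being filtered upstream by the ISP or hosting provider. One hedged way to confirm before digging further locally:

        telnet gmail-smtp-in.l.google.com 25
        telnet smtp.gmail.com 587

    If the first always times out from this server while the second connects, port 25 is blocked upstream, and the options are to have the provider unblock it or to relay outgoing mail through a smarthost on port 587.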

    Read the article

  • Cannot connect to WEBrick on home network

    - by Chris Stewart
    I'm an Android developer and often my applications require server-side code. I typically use Ruby on Rails for the web app, and during development will run the server on my local machine (Mac OS X) with WEBrick. In the morning when I get to the office, I'll run ifconfig in the console to see what IP my laptop has been given that day. I'll use that IP in my Android app when making requests to the web app in question. This all works fine, when I'm in my office. When I get home, I attempt to do the same thing, find my laptop's IP via ifconfig, set it in my app's config file, but the destination can never be found. To exclude my app from the set of hurdles, I attempt to visit the web server IP (e.g., http://192.168.1.4:3000) from my phone's browser, and it cannot connect. If I try from my laptop, which is running the web server, it works fine. If I try from another machine, on the same network, it also is unable to connect. Given this, I think I've narrowed it down to some kind of configuration in my home network, but I frankly have no idea what the cause could be. I don't have anything special at home, your basic Verizon FiOS router/modem with everything connected via Wi-Fi (Wi-Fi for both phone and laptop at work as well, fyi). I've tried disabling the firewall on my Verizon router, enabling port forwarding, and just about everything else I could do for port 3000, and nothing has changed. Dear Server Fault geniuses, please help a poor developer out. :) Edit: Some follow up items to add. My Mac's firewall is not active, and all incoming requests are allowed. I've also verified on my phone and laptop, that they're on the same network (192.168.1.4 Mac, 192.168.1.9 Phone). I have no idea why this isn't working. Edit 2: I went into System Preferences, enabled Web Sharing, and tried to view the website from my phone and it didn't connect. So it's not WEBrick or related to Rails. The firewall on my machine is off and the firewall on my router is off. Edit 3: Some progress. I set up port forwarding for port 3000 to my laptop, found the external IP, and used that and it connected fine. So, there's definitely something not quite set up correctly on my internal network.
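
    Since even Apache Web Sharing on port 80 was unreachable until port forwarding was added, this looks less like WEBrick and more like the router isolating wireless clients from one another (often labelled "wireless isolation" or "AP isolation" in consumer router firmware). Some quick checks, run from the other laptop except where noted:

        ping -c 3 192.168.1.4
        nc -vz 192.168.1.4 3000
        netstat -an | grep LISTEN | grep 3000    # run on the Mac itself to confirm the server is listening on all interfaces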

    Read the article

  • nginx can't see MySQL

    - by user135235
    I have a fully working Joomla 2.5.6 install driven by a local MySQL server, but I'd like to test nginx to see if it's a faster web serving experience than Apache. \ PHP 5.4.6 (PHP54w) \ CentOS 6.2 \ Joomla 2.5.6 \ PHP54w-fpm.i386 (FastCGI process manager) \ php -m shows: mysql & mysqli modules loaded Nginx seems to have installed fine via yum, it can process a PHP-info file via FastCGI perfectly OK (http://37.128.190.241/php.php) but when I stop Apache, start nginx instead and visit my site I get: "Database connection error (1): The MySQL adapter 'mysqli' is not available." I've tried adjusting my Joomla configuration.php to use mysql instead of mysqli but I get the same basic error, only this time "Database connection error (1): The MySQL adapter 'mysql' is not available" of course! Can anyone think what the problem might be please? I did try explicitly setting extension = mysqli.so and extension = mysql.so in my php.ini to try and force the issue (despite php -m showing they were both successfully loaded anyway) - no difference. I have a pretty standard nginx default.conf: server { listen 80; server_name www.MYDOMAIN.com; server_name_in_redirect off; access_log /var/log/nginx/localhost.access_log main; error_log /var/log/nginx/localhost.error_log info; root /var/www/html/MYROOT_DIR; index index.php index.html index.htm default.html default.htm; # Support Clean (aka Search Engine Friendly) URLs location / { try_files $uri $uri/ /index.php?q=$uri&$args; } # deny running scripts inside writable directories location ~* /(images|cache|media|logs|tmp)/.*\.(php|pl|py|jsp|asp|sh|cgi)$ { return 403; error_page 403 /403_error.html; } location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi.conf; } # caching of files location ~* \.(ico|pdf|flv)$ { expires 1y; } location ~* \.(js|css|png|jpg|jpeg|gif|swf|xml|txt)$ { expires 14d; } } Snip of output from phpinfo under nginx: Server API FPM/FastCGI Virtual Directory Support disabled Configuration File (php.ini) Path /etc Loaded Configuration File /etc/php.ini Scan this dir for additional .ini files /etc/php.d Additional .ini files parsed /etc/php.d/curl.ini, /etc/php.d/fileinfo.ini, /etc/php.d/json.ini, /etc/php.d/phar.ini, /etc/php.d/zip.ini Snip of output from phpinfo under Apache: Server API Apache 2.0 Handler Virtual Directory Support disabled Configuration File (php.ini) Path /etc Loaded Configuration File /etc/php.ini Scan this dir for additional .ini files /etc/php.d Additional .ini files parsed /etc/php.d/curl.ini, /etc/php.d/fileinfo.ini, /etc/php.d/json.ini, /etc/php.d/mysql.ini, /etc/php.d/mysqli.ini, /etc/php.d/pdo.ini, /etc/php.d/pdo_mysql.ini, /etc/php.d/pdo_sqlite.ini, /etc/php.d/phar.ini, /etc/php.d/sqlite3.ini, /etc/php.d/zip.ini Seems that with Apache, PHP is loading substantially more additional .ini files, including ones relating to mysql (mysql.ini, mysqli.ini, pdo_mysql.ini) than nginx. Any ideas how I get nginix to also call these additional .ini's ? Thanks in advance, Steve
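
    The telling difference is in the two phpinfo dumps: Apache's PHP parses mysql.ini, mysqli.ini and the pdo_* files from /etc/php.d, while the FPM process does not. PHP-FPM only scans /etc/php.d when the service starts, so the usual causes are an FPM service that has not been restarted since the MySQL extension was installed, or a missing extension package for this PHP build. Hedged commands, assuming the Webtatic php54w packages on CentOS 6:

        yum list installed | grep -i php54w
        sudo yum install php54w-mysql    # only if it is missing from the list above
        sudo service php-fpm restart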

    Read the article

  • BIND DNS server (Windows) - Unable to access my local domain from other computers on LAN

    - by Ricardo Saraiva
    I have a BIND DNS server running on my Windows 7 development machine and I'm serving pages with WAMPSERVER. My ideia is to develop some tools (in PHP) for my intranet at work and I want them to be accessible via LAN in this format: http://tools.mycompany.com I've already placed BIND and I can access http://tools.mycompany.com on the machine that holds BIND server, but I cannot access it from other LAN computers. I've done the following on my router: defined static IP's for all LAN computers set Port Forwarding to my server (remember: it serves DNS and Web pages) set DNS server configuration to point to my LAN server On LAN computers, I went to Local Area Network properties and also changed the DNS server IP in order to point to my local DNS server. If it helps, here is my named.conf file: options { directory "c:\windows\SysWOW64\dns\etc"; forwarders {127.0.0.1; 8.8.8.8; 8.8.4.4;}; pid-file "run\named.pid"; allow-transfer { none; }; recursion no; }; logging{ channel my_log{ file "log\named.log" versions 3 size 2m; severity info; print-time yes; print-severity yes; print-category yes; }; category default{ my_log; }; }; zone "mycompany.com" IN { type master; file "zones\db.mycompany.com.txt"; allow-transfer { none; }; }; key "rndc-key" { algorithm hmac-md5; secret "qfApxn0NxXiaacFHpI86Rg=="; }; controls { inet 127.0.0.1 port 953 allow { 127.0.0.1; } keys { "rndc-key"; }; }; ...and a single zone I've defined - file db.mycompany.com.txt: $TTL 6h @ IN SOA tools.mycompany.com. hostmaster.mycompany.com. ( 2014042601 10800 3600 604800 86400 ) @ NS tools.mycompany.com. tools IN A 192.168.1.4 www IN A 192.168.1.4 On the file above 192.168.1.4 is the IP of the local machine inside my LAN. Can someone help me here? I need my web pages to be accessible from other computers inside my LAN using my custom domain name. I've tried on other computers and they can access my server via http://192.168.1.4/, but no able when using http://tools.mycompany.com . Please, consider the following: I'm completely new to BIND I have basic knowledge in Apache configuration Thanks a lot for your help.
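
    Two things are worth ruling out from one of the other LAN machines: whether their queries are actually reaching the BIND box (the Windows Firewall on the Windows 7 host must allow inbound UDP/TCP 53), and whether the clients are really using 192.168.1.4 as their resolver rather than the router or the ISP. From a Windows client, for example:

        nslookup tools.mycompany.com 192.168.1.4
        nslookup tools.mycompany.com
        ipconfig /all

    If the first query answers but the second does not, the client is still sending its lookups to another DNS server; if both fail, look at the firewall on the BIND machine.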

    Read the article

  • Hyper-V guest machine loads slowly

    - by Dani Avni
    this is by far one of the strangest things I have seen. I have a win 2008R2 cluster with a CSV. the CSV itself is on an iSCSI storage (hitachi HUS 110) basic config of the two hosts in the cluster is Dell R610 Win 2008 R2 with all patches 64GB 1 NIC for host access 2 NICs for guest access 2 NICs for iSCSI these machine work great and I can load a 2008R2 test guest machine on them in less than 90 seconds after the above config is running for over a year, I now need to add a new host. now the host is Dell R620 (Still intel but different CPU) Win 2008 R2 with all patches 64GB 1 NIC for host access 2 NICs for guest access 2 NICs for iSCSI I added this new host to the domain and to the cluster, I gave it access to the CSV and I tried loading the same guest machine that loads in 90 seconds in the other hosts. the machine loads in about 6 minutes. no matter how many times I try this the old hosts load the machine in about 90 seconds and this new host in around 6 minutes to eliminate any problems with the iSCSI connection, I added a new LUN and directly accessed it from the new host and I was working at around 300MB/s so no problem there. I also tested the connection between the other hosts and the new one and network is working fine there too. to eliminate problems in HyperV, I copied the machine to the local disk of the new host and it loaded in less than 20 seconds. now is the point were things get a lot stranger: in my tests I tried installing a fresh windows guest machine to the CSV from the new host. I noticed that while the fresh windows was installing, my test guest was loading in less than 90 seconds even on the new host (I repeated this a few times). If I paused the fresh install guest and tried loading the test guest again it loaded in 6 minutes. and again after I resumed the guest installation the test guest loaded fast. after the fresh windows was also loaded, I ran tests loading the fresh window and my test machine. each one of them loaded in about 5 minutes when I tried loading them separately. however when I started both of them in the same time they both loaded in around 2.5 minutes it seems that the iSCSI disk access is only working if it is under some load (although I never got to above 10% utilization according to the task manager) does anyone have any idea what could be the problem?

    Read the article

  • Mac + Parallels and HTTPS site test = router restarts

    - by Erik
    Ok I have an interesting & very frustrating problem happening. I'm going to explain it the best I can. I work as a graphic designer & web designer on a mac and have a Comcast internet connection that comes through a Comcast branded router (SMC8014) which then ties into an Airport Extreme Base Station which runs my office network. I run OS 10.5.7 and also run Parallels 4.0.3 (running Windows XP) for testing websites in Internet Explorer and so on. Ok, so that the basic background. Here's my issue. I've been collaborating an ecommerce website with another designer/developer and when testing the site on the PC side we have started to run into some sort of network problem. The site is https if that matters at all I suspect it may. Basically when I run parallels for testing I am constantly having to restart the router in order to connect to the test site (it's hosted). Funny thing is I can access the rest of the internet fine, just not this site I'm working on until I restart the router (It's sorta like the site is timing out). This never happens when just running the Mac side of things. It only becomes an issue when Parallels is open and I am doing page refreshes while making css or HTML edits via something like Coda or CSS edit (connected to the hosting server via ftp). The real problem is that once the problem starts I only get about 2 or 3 page loads before I have to restart the router again. It's absolutely crippling. I cannot get any work done when I have to restart the router every couple of minutes. So, if you think this problem is isolated to me, the answer is no. The designer/developer I'm collaborating with has an office a couple miles away and experiences very similar problems under slightly different setup. He also has Comcast as his internet provider and connects his router to an Airport and primarily works on a mac. The main difference is that rather than using a visualizer like parallels to test the website on the PC he uses a real live PC that is on his network. Once he fire up the PC to do testing he runs into the same issue described above. After a couple of page refreshes in Internet Explorer or other browser on the PC the site becomes unresponsive and the router has to get restarted. Any thoughts on what is going on here would be greatly appreciated. Thanks in advance.

    Read the article

  • DPM 2010 PowerShell Script to Easily Restore Multiple Files

    - by bmccleary
    I’ve got what I thought would be a simple task with Data Protection Manager 2010 that is turning out to be quite frustrating. I have a file server on one server and it is the only server in a protection group. This file server is the repository for a document management application which stores the files according to the data within a SQL database. Sometimes users inadvertently delete files from within our application and we need to restore them. We have all the information needed to restore the files to include the file name, the folder that the file was stored in and the exact date that the file was deleted. It is easy for me to restore the file from within the DPM console since we have a recovery point created every day, I simply go to the day before the delete, browse to the proper folder and restore the file. The problem is that using the DPM console, the cumbersome wizard requires about 20 mouse clicks to restore a single file and it takes 2-4 minutes to get through all the windows. This becomes very irritating when a client needs 100’s of files restored… it takes all day of redundant mouse clicks to restore the files. Therefore, I want to use a PowerShell script (and I’m a novice at PowerShell) to automate this process. I want to be able to create a script that I pass in a file name, a folder, a recovery point date (and a protection group/server name if needed) and simply have the file restored back to its original location with some sort of success/failure notification. I thought it was a simple basic task of a backup solution, but I am having a heck of a time finding the right code. I have seen the sample code at http://social.technet.microsoft.com/wiki/contents/articles/how-to-use-a-windows-powershell-script-to-recover-an-item-in-data-protection-manager.aspx that I have tried to follow, but it doesn’t accomplish what I really want to do (it’s too simplistic) and there are errors in the sample code. Therefore, I would like to get some help writing a script to restore these files. An example of the known values to restore the data are: DPM Server: BACKUP01 Protection Group: Document Repository Data Protected Server: FILER01 File Path: R:\DocumentRepository\ToBackup\ClientName\Repository\2010\07\24\filename.pdf Date Deleted: 8/2/2010 (last recovery point = 8/1/2010) Bonus Points: If you can help me not only create this script, but also show me how to automate by providing a text file with the above information that the PowerShell script loops through, or even better, is able to query our SQL server for the needed data, then I would be more than willing to pay for this development.

    Read the article

  • Loss of wireless network connectivity when playing video via HDMI cable

    - by Jeff Fohl
    Hi Folks - New to Super User, so I hope this question fits in with the guidelines. Very strange problem I am having, and I am at a loss as to how to continue troubleshooting this one. The basic problem is that when I attempt to watch streamed video on a particular display device (an Optoma HD180 projector), my network connectivity drops like a stone to barely measurable levels. This is my setup: I have a Dell H2C 730x running Windows 7 64bit. This particular computer has two ATI Radeon HD 4800 video cards. I have two Samsung 22" monitors connected to one card, and an Optoma HD180 digital projector connected to the other card via an HDMI cable. My internet connection is normally a reliable 6Mbps. The problem I am having occurs when I stream video (or even just browse the web) on the Optoma Projector. When I do this, my internet connection drops to practically zero (just a few kilobits per second). When I move the browser away from the projector, and over to one of my Samsung monitors, the internet connection comes right back. Note that the Optoma projector is on and enabled as a third monitor all this time. I can move the mouse around on the projector without triggering the problem. I tried pinging my router when I was playing a movie on one of the monitors, and I get a 1 millisecond response. However, when I have the movie playing on the Optoma projecter, pinging the router gives me response times in the hundreds of milliseconds, or times out completely. So, it clearly is something local to my machine - and not some sort of throttling occurring down the line. I would think that it is possibly something to do with the HDMI driver conflicting somehow with my network driver (which is a USB-based wireless connection). This one has me really stumped. Anyone have any ideas? EDIT: I am now leaning towards the possibility that the HDMI cable is somehow interfering with the wireless network, when large amounts of data are being pushed through the cable. Is this possible?

    Read the article

  • Random Computer Crashes

    - by Josh W.
    OK, here's a weird one for you all. Occasionally my PC here at work will crash in a very peculiar way. My dual monitors will suddenly go blank as if there is no longer a video signal, the USB mouse light will go dark and the mouse stays unresponsive, and the keyboard lights will not change status when the appropriate keys are pressed (Num/Caps/Scroll Lock). The CD tray WILL open and close, but the computer will not respond to a ping request. For all intents and purposes it's as if the computer is off, except it wasn't intentional on my part: the power light and internal fans are still on, and I've now lost any unsaved work. Now here's where it gets weird. This PC is part of a batch of PCs we got from a local vendor who does our initial system builds. Mine and six other co-workers' PCs all have the same issue. Originally we thought it was a bad combination of hardware, but through trial and error the only things we haven't eliminated are the OS, motherboard and CPU. The problem was so bad for some of them that they ended up going back to their 5-year-old dinosaurs in order to get some work done. For me the problem isn't as bad - maybe once every other day or so - but still enough to bite me in the ass if I've forgotten my ritualistic pressing of CTRL-S every 1-5 minutes. In this case we've tried two different video cards, two different power supplies, two different memory configurations, running on a UPS/not on a UPS, updating/rolling back video drivers, and three different BIOS revisions. The only things we haven't swapped are the motherboard and CPU, mainly because a new motherboard means a new Hardware Abstraction Layer, i.e. a reinstall of Windows, and there's a lot of other software on this PC that takes forever to reload by hand. There was a base image that our systems team created with all the drivers installed and the basic setup of software our company uses, but they then must customize the setup for us programmers, so it takes a while to get a new configuration up and going. I'm a programmer by day and am usually pretty good at diagnosing computer problems, whether through trial and error or not. We've pretty much exhausted all the ideas we can think of here, short of a new motherboard/CPU. Was hoping someone out there might have anything else we can try.
    Relevant parts:
        OS: XP Pro 32-bit
        Motherboard: Intel DG41RQ
        CPU: Intel Core 2 Quad Q9400 @ 2.66GHz
        Current BIOS version/date: Intel Corp. RQG4110H.86A.0014.2010.0306.1151, 3/6/2010
        Dual LCDs: ViewSonic VG930m & Samsung SyncMaster 910v (other people have different models, but listed in case there's some very weird problem with the signals being sent/received)
        PS/2 keyboard, USB Microsoft IntelliMouse
        BIOS versions tried: R 0013 12/23/2009, R 0014 3/6/2010
        Video cards tried: GeForce 8400 GS, Radeon HD 4350 - ASUS EAH4350
        Two different power supplies: a 380W & a 550W
        RAM configurations: 2GB - 1 x 2GB, 4GB - 2 x 2GB

    Read the article

  • Moving default web site to another drive

    - by Chadworthington
    I set the default location from c:\inetpub\wwwroot to d:\inetpub\wwwroot, but when I access my .NET 4.0 site I get this error:
        Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately.
        Parser Error Message: Unrecognized attribute 'targetFramework'. Note that attribute names are case-sensitive.
        Source Error:
        Line 105: Set explicit="true" to force declaration of all variables.
        Line 106: -->
        Line 107: <compilation debug="true" strict="true" explicit="true" targetFramework="4.0">
        Line 108: <assemblies>
        Line 109: <add assembly="System.Web.Extensions.Design, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
    When I try to manage the Basic Settings on the site and click the "Test Settings" button, I see that I have a problem under "authorization":
        The server is configured to use pass-through authentication with a built-in account to access the specified physical path. However, IIS Manager cannot verify whether the built-in account has access. Make sure that the application pool identity has Read access to the physical path. If this server is joined to a domain, and the application pool identity is NetworkService or LocalSystem, verify that <domain>\<computer_name>$ has Read access to the physical path. Then test these settings again.
    1) Do I need to grant rights to IIS to the new folder? Which user? I thought it was something like IIS_USER or something similar, but I cannot determine the correct name of the user.
    2) Also, do I need to set the default version of the framework somewhere at the Default Site level or at the virtual folder level? How is this done in IIS6? I am used to IIS5 or whatever came with XP Pro.
    3) My original site had a subfolder under wwwroot called "aspnet_client." How was this created? I manually copied it to the corresponding new location. My app was using separate ASP-specific databases for storing session state and role info, if that is relevant.
    Thanks
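
    For what it's worth, the parser error quoted above is the standard symptom of the site running in an application pool whose .NET framework version is still 2.0 rather than 4.0, and the "Test Settings" warning is about NTFS read access for the pool identity (the IIS_IUSRS group covers it on IIS 7.x). A hedged sketch of both fixes from an elevated command prompt, assuming the site uses DefaultAppPool:

        %windir%\system32\inetsrv\appcmd set apppool "DefaultAppPool" /managedRuntimeVersion:v4.0
        icacls D:\inetpub\wwwroot /grant "IIS_IUSRS:(OI)(CI)RX" /T

    (The aspnet_client folder, incidentally, is created by aspnet_regiis when client-side script resources are installed; copying it over manually is harmless.)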

    Read the article

  • Windows Server 2008 Software Raid 5 - Data integrity issues

    - by Fopedush
    I've got a server running Windows Server 2008 R2, with a (windows native) software raid-5 array. The array consists of 7x 1TB Western Digital RE3 and RE4 drives. I have offline backups of this array. The problem is this: I noticed a few days ago after copying a large file to the disk that there was an integrity issue with that file - it was a ~12GB file that I had downloaded via uTorrent. After moving it to the raid array, I used uTorrent to relocate the download location, and performed a re-check so I could seed it from that location. The recheck found that only 6308/6310 chunks of the copied file were intact. My next step was to write a quick powershell script that would copy files to the array, while performing a SHA1 hash of the original and resultant files and comparing them. Smaller files (100-1000MB) copied over just fine. When I started copying larger data (~15GB), I found that the hash check failed about 2/3rds of the time. The corrupt files had very, very small inconsistencies - less than .01%. I further eliminated the possibility of networking or client issues by placing this large file on the C:\ of the server, and copying it repeatedly from there to the array, seeing similar results. Copying the data via explorer, powershell, or the standard windows command prompt yield the same results. None of the copies fail or report any problems. The raid array itself is listed as healthy in disk management. After a few experiments, I shut down the server and ran memtest overnight. No errors were detected. A basic run of chkdsk found no problems, but I did not use the /R flag, as I was unsure how that might affect a software raid-5 volume. I next ran Crystal Disk Info to check the smart data on the drives - but found that CDI only detected 5 out of 7 of the disks in the array. I have no idea why. Nevertheless, CDI shows the following "caution" flags on a single one of the drives: 05 199 199 140 000000000001 Reallocated Sectors Count C5 200 200 __0 000000000001 Current Pending Sector Count Which is a little bit alarming, but I don't really know what to do with the information. I hardly feel like one reallocated sector could be causing this. At this point, I'm looking for some guidance on what to do next. I need to determine the cause of this issue, but I'm hesitant to run chkdsk /R or any bootable disk health checkers because I'm afraid they might break the array. I've considered triggering a re-sync of the array, but I'm not actually sure how to do that without doing something silly like manually dropping a disk and then restoring it. Any advice that could help me ferret out the precise cause of this issue would be greatly appreciated.

    Read the article

  • Problems setting up Single Sign-On using Kerberos authentication

    - by user1124133
    I need to set up authentication for a Ruby on Rails application against Active Directory using Kerberos. Some technical information: I am using Apache with mod_auth_kerb installed. In httpd.conf I added:
        LoadModule auth_kerb_module modules/mod_auth_kerb.so
    In /etc/krb5.conf I added the following configuration:
        [logging]
        default = FILE:/var/log/krb5libs.log
        kdc = FILE:/var/log/krb5kdc.log
        admin_server = FILE:/var/log/kadmind.log
        [libdefaults]
        default_realm = EU.ORG.COM
        dns_lookup_realm = false
        dns_lookup_kdc = false
        ticket_lifetime = 24h
        forwardable = yes
        [realms]
        EU.ORG.COM = {
            kdc = eudc05.eu.org.com:88
            admin_server = eudc05.eu.org.com:749
            default_domain = eu.org.com
        }
        [domain_realm]
        .eu.org.com = EU.ORG.COM
        eu.org.com = EU.ORG.COM
        [appdefaults]
        pam = {
            debug = true
            ticket_lifetime = 36000
            renew_lifetime = 36000
            forwardable = true
            krb4_convert = false
        }
    When I test with kinit validuser and enter the password, authentication succeeds. klist returns:
        Ticket cache: FILE:/tmp/krb5cc_600
        Default principal: [email protected]
        Valid starting     Expires            Service principal
        02/08/13 13:46:40  02/08/13 23:46:47  krbtgt/[email protected]
                renew until 02/09/13 13:46:40
        Kerberos 4 ticket cache: /tmp/tkt600
        klist: You have no tickets cached
    In the application's Apache configuration I added:
        <IfModule mod_auth_kerb.c>
        <Location /winlogin>
        AuthType Kerberos
        AuthName "Kerberos Loginsss"
        KrbMethodNegotiate off
        KrbAuthoritative on
        KrbVerifyKDC off
        KrbAuthRealms EU.ORG.COM
        Krb5Keytab /home/crmdata/httpd/apache.keytab
        KrbSaveCredentials off
        Require valid-user
        </Location>
        </IfModule>
    I restarted Apache. Now some tests: when I try to access the application from Win7, I get a pop-up message box with the text: "Warning: This server is requesting that your username and password be sent in an insecure manner (basic authentication without a secure connection)". When I enter valid credentials, the application opens successfully and everything works fine. Questions:
    1. Is it OK that users get such a pop-up window? With NTLM authentication there is no such pop-up. I checked IE's Internet Options and 'Enable Integrated Windows Authentication' is checked. Why does IE try to send a username and password to the Apache application? If I understand correctly, Windows itself should authenticate via Active Directory using the Kerberos protocol.
    2. When I access the application from Win7 and enter incorrect credentials in the pop-up box, the application says authentication failed (this is OK). In the Apache error log I see:
        [error] [client 192.168.56.1] krb5_get_init_creds_password() failed: Client not found in Kerberos database
    But now I don't get a chance to enter valid credentials again; only when I restart IE do I get the pop-up box back.
    What could be incorrect or missing in my Kerberos setup? I read in a blog post that something probably needs to be done on the Active Directory side. What exactly?
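
    One observation on the configuration quoted above: with KrbMethodNegotiate off, mod_auth_kerb can only fall back to its password method (KrbMethodK5Passwd), which is exactly the Basic-auth pop-up IE shows; silent single sign-on needs Negotiate turned on plus a keytab holding an HTTP/ service principal for the exact host name in the URL (a missing or mismatched SPN is a typical cause of "Client not found in Kerberos database"). As a hedged illustration, with made-up host and account names: on the web server, list what the keytab really contains; on the AD side, create and map the SPN:

        klist -k /home/crmdata/httpd/apache.keytab
        setspn -A HTTP/crm.eu.org.com svc_apache
        ktpass -princ HTTP/crm.eu.org.com@EU.ORG.COM -mapuser svc_apache -crypto RC4-HMAC-NT -ptype KRB5_NT_PRINCIPAL -pass * -out apache.keytab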

    Read the article

  • Nginx + Passenger running a RoR app is returning 401 when 302 is expected

    - by DBruns
    I've got a RoR app running on Passenger on top of Nginx. I'm using devise for my authentication method and have a link that gets sent in an email to users that requires authentication to view. If a user clicks the link from Outlook, and IE is the default browser, IE makes an HTTP request using the following headers: GET http://www.company.com/custom_layouts/108 HTTP/1.1 Accept: */* Accept-Language: en-us User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; .NET4.0C; .NET4.0E) Accept-Encoding: gzip, deflate Connection: Keep-Alive Host: www.company.com Returning: HTTP/1.1 401 Unauthorized Content-Type: /; charset=utf-8 Transfer-Encoding: chunked Connection: keep-alive Status: 401 X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 2.2.15 WWW-Authenticate: Basic realm="Application" Cache-Control: no-cache X-UA-Compatible: IE=Edge,chrome=1 Set-Cookie: _vxwer_session=[sessionstr]; path=/; HttpOnly X-Runtime: 0.011918 Server: nginx/0.7.67 + Phusion Passenger 2.2.15 (mod_rails/mod_rack) 31 You need to sign in or sign up before continuing. 0 When the exact same URL is typed into the address bar, it does this: GET http://www.company.com/custom_layouts/108 HTTP/1.1 Accept: image/jpeg, application/x-ms-application, image/gif, application/xaml+xml, image/pjpeg, application/x-ms-xbap, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, */* Accept-Language: en-US User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; .NET4.0C; .NET4.0E) Accept-Encoding: gzip, deflate Connection: Keep-Alive Host: www.company.com Returning: HTTP/1.1 302 Found Content-Type: text/html; charset=utf-8 Transfer-Encoding: chunked Connection: keep-alive Status: 302 X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 2.2.15 Location: http://www.company.com/users/sign_in Cache-Control: no-cache X-UA-Compatible: IE=Edge,chrome=1 Set-Cookie: _xswer_session=[session_info_here]; path=/; HttpOnly X-Runtime: 0.010798 Server: nginx/0.7.67 + Phusion Passenger 2.2.15 (mod_rails/mod_rack) 6f <html><body>You are being <a href="http://www.company.com/users/sign_in">redirected</a>.</body></html> 0 I expect them to return the same thing regardless.

    Read the article

  • LdapErr: DSID-0C0903AA, data 52e: authenticating against AD '08 with pam_ldap

    - by Stefan M
    I have full admin access to the AD '08 server I'm trying to authenticate towards. The error code means invalid credentials, but I wish this was as simple as me typing in the wrong password. First of all, I have a working Apache mod_ldap configuration against the same domain. AuthType basic AuthName "MYDOMAIN" AuthBasicProvider ldap AuthLDAPUrl "ldap://10.220.100.10/OU=Companies,MYCOMPANY,DC=southit,DC=inet?sAMAccountName?sub?(objectClass=user)" AuthLDAPBindDN svc_webaccess_auth AuthLDAPBindPassword mySvcWebAccessPassword Require ldap-group CN=Service_WebAccess,OU=Groups,OU=MYCOMPANY,DC=southit,DC=inet I'm showing this because it works without the use of any Kerberos, as so many other guides out there recommend for system authentication to AD. Now I want to translate this into pam_ldap.conf for use with OpenSSH. The /etc/pam.d/common-auth part is simple. auth sufficient pam_ldap.so debug This line is processed before any other. I believe the real issue is configuring pam_ldap.conf. host 10.220.100.10 base OU=Companies,MYCOMPANY,DC=southit,DC=inet ldap_version 3 binddn svc_webaccess_auth bindpw mySvcWebAccessPassword scope sub timelimit 30 pam_filter objectclass=User nss_map_attribute uid sAMAccountName pam_login_attribute sAMAccountName pam_password ad Now I've been monitoring ldap traffic on the AD host using wireshark. I've captured a successful session from Apache's mod_ldap and compared it to a failed session from pam_ldap. The first bindrequest is a success using the svc_webaccess_auth account, the searchrequest is a success and returns a result of 1. The last bindrequest using my user is a failure and returns the above error code. Everything looks identical except for this one line in the filter for the searchrequest, here showing mod_ldap. Filter: (&(objectClass=user)(sAMAccountName=ivasta)) The second one is pam_ldap. Filter: (&(&(objectclass=User)(objectclass=User))(sAMAccountName=ivasta)) My user is named ivasta. However, the searchrequest does not return failure, it does return 1 result. I've also tried this with ldapsearch on the cli. It's the bindrequest that follows the searchrequest that fails with the above error code 52e. Here is the failure message of the final bindrequest. resultcode: invalidcredentials (49) 80090308: LdapErr: DSID-0C0903AA, comment: AcceptSecurityContext error, data 52e, v1772 This should mean invalid password but I've tried with other users and with very simple passwords. Does anyone recognize this from their own struggles with pam_ldap and AD? Edit: Worth noting is that I've also tried pam_password crypt, and pam_filter sAMAccountName=User because this worked when using ldapsearch. ldapsearch -LLL -h 10.220.100.10 -x -b "ou=Users,ou=mycompany,dc=southit,dc=inet" -v -s sub -D svc_webaccess_auth -W '(sAMAccountName=ivasta)' This works using the svc_webaccess_auth account password. This account has scan access to that OU for use with apache's mod_ldap.
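
    Since the search succeeds and only the second bind (as the user) is rejected with data 52e, one useful step is to reproduce that second bind by hand, outside PAM, using the DN the search returned. The DN below is a made-up example; substitute the real one:

        ldapsearch -LLL -x -h 10.220.100.10 \
          -D "CN=Example User,OU=Users,OU=mycompany,DC=southit,DC=inet" -W \
          -b "OU=Users,OU=mycompany,DC=southit,DC=inet" "(sAMAccountName=ivasta)" dn

    If that bind also fails with 52e, the problem is the credential itself (AD returns related data codes for expired passwords, "must change at next logon" and lockouts); if it succeeds, the problem is in what pam_ldap sends.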

    Read the article

  • I want to virtualize my workstation (Tier 1), Looking for Bare Metal Hypervisor for consumer grade components

    - by Chase Florell
    I find myself in this similar bind at least once a year. The bind whereby I'm either upgrading a motherboard, or an OS hard drive. It drives me crazy to have to reinstall Windows, Visual Studio, all my addins, reconfigure my settings etc... every single time. I have a layout and I like and I want to stick with it. My question is... Is there a Bare Metal Hypervisor on the market that will enable me to virtualize my consumer grade workstation? I really want to avoid Host/Client virtualization. Bare Metal is definitely a better way to go for my needs. Is this a good approach, or am I going to suffer some other undesirable side effects by doing this? Clarification My machine has very limited purposes. My primary use is Visual Studio 2010 Professional where I develop ASP.NET MVC Web Applications. The second piece of software that I use (that's system intensive) is Photoshop CS3. Beyond that, my applications are limited to Outlook, Internet Explorer, Firefox, Opera, Chrome, LinqPad, and various other (small) apps. Beyond this, I'm considering working on a node.js project and might run ubuntu on the same hypervisor if possible. System Specs: Gigabyte Motherboard Intel i7 920 12 GB Ram basic 500GB 7200RPM HDD for OS 4 VelociRaptors in Raid 1/0 for build disk Dual GTS250 (512MB) Graphics cards (non SLI) for quad monitors On a side note I also wouldn't be opposed to an alternative suggestion if the limitations are too great. I could install the ESXi (or Zen Server) on my box, and build a separate "thin client" to RDP into the virtual machine. It appears as though RDP supports dual monitors. Edit (Dec 9, 2011) It's been nearly a year since I first asked this question. Since then, there have been a lot of great strides in Hypervisor technology... AND MokaFive is now released for corporate use. I'd love to dig into this question a little more and find out if there is a solid BareMetal Hypervisor for workstations running consumer grade components (IE: not Dell, HP, Lenovo, Etc).

  • Trouble creating FTP in Server 2008

    - by Saariko
    I have been trying to create an FTP server on my new Server 2008 machine, following both of these very detailed guides (frequently recommended here):

    For setting up IIS Manager authentication:
    http://learn.iis.net/page.aspx/321/configure-ftp-with-iis-7-manager-authentication/

    For anonymous FTP:
    http://www.trainsignaltraining.com/windows-server-2008-ftp-iis7

    I am able to log in as an anonymous user. My need is to use a named user, so I need to use IIS Manager authentication. I get error 530 when trying to log in as a user:

        Connected to 127.0.0.1.
        220 Microsoft FTP Service
        User (127.0.0.1:(none)): ftpmanager
        331 Password required for ftpmanager.
        Password:
        530-User cannot log in. Win32 error: Logon failure: unknown user name or bad password.
        Error details: Filename:
        Error:
        530 End
        Login failed.
        ftp>

    I cannot learn anything from this message. My password is set to 1234 (so I don't think I'm making a mistake here; it's for testing purposes only, of course). Thank you.

    Note - I went over other posts on SE, but couldn't get a result from them:

    IIS7 Windows Server 2008 FTP -> Response: 530 User cannot log in.
    FTP Error 530, User cannot log in, home directory inaccessible.
    Having trouble setting up FTP server on Windows Server 2008

    EDIT: I think I found some errors with the physical path. Going to Basic Settings and running Test Connection on the physical path gave me the following error:

        The server is configured to use pass-through authentication with a built-in account to access the specified physical path. However, IIS Manager cannot verify whether the built-in account has access. Make sure that the application pool identity has Read access to the physical path. If this server is joined to a domain, and the application pool identity is NetworkService or LocalSystem, verify that <domain>\<computer_name>$ has Read access to the physical path. Then test these settings again.

    I am not sure which account should get access to the root folder. I want to point out that I managed to log in with a domain user (after changing the authorization and authentication methods), but this is NOT the requested solution. I checked to make sure that the FTP service, folders, and access are working properly. I am a bit lost here.

    More tries: I have enabled another Allow rule for All Users. I still get the same error. It seems that it doesn't matter whether I use the correct or a wrong password; I still get error 530.
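    Since the Test Connection warning points at pass-through authentication, one thing I plan to try is explicitly granting Read access on the FTP root to the built-in IIS group and the application pool identity. This is a rough sketch only; the path C:\inetpub\ftproot and the use of IIS_IUSRS are assumptions about my layout, not verified values:

        rem Grant read access (inherited by subfolders and files) on the FTP root
        icacls "C:\inetpub\ftproot" /grant "IIS_IUSRS:(OI)(CI)R"
        icacls "C:\inetpub\ftproot" /grant "NETWORK SERVICE:(OI)(CI)R"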

  • Triple (3) Monitors under Linux

    - by widgisoft
    I have a 3-monitor setup (each 1680x1050) via an Nvidia NVS440 (2 GPUs, 2 outputs per GPU, totalling 4 outputs); this works fine under Windows XP and 7 but caused considerable headaches under Linux (Ubuntu 9.04). I had previously used an XFX 9600GT and the onboard XFX 9300GS to produce the same result, but that card was noisy and power hungry, and I was hoping there was some magical switch in the NVS440 that got rid of this annoying problem - it turns out the NVS440 is just 2 cards on one physical PCB :-p (I searched the net high and low for people using this card under Linux but found nothing; if anything the card uses less power and is fanless, so I was to benefit from it either way.)

    Anyway, using either setup there were 5 solutions available:

    1. Have 3 separate X instances, all unjoined
    2. Have 3 separate X instances, adjoined by Xinerama
    3. Have 2 separate X instances - one using TwinView, both adjoined by Xinerama
    4. Have 2 separate X instances - one using TwinView, but no Xinerama
    5. Have a single TwinView setup and leave the 3rd screen unplugged :-p

    The 4th option, using 2 separate X instances and TwinView (but no Xinerama), was the best balance in terms of performance and usability, but it caused some really annoying issues:

    You couldn't control (without altering the shortcuts) which screen an application opened onto - and once it was opened you couldn't move it to another screen without opening up a terminal and forcing it to move.
    Nvidia's overriding or falsifying of Xinerama breaks things, and the 2 screens joined by TwinView behave like a single huge screen, causing popups to open in the middle of both screens and maximising of windows to stretch across the width of the first 2 screens.
    Firefox can only run one instance as the same user, so having multiple Firefox windows requires at least 2 users.

    The second option "feels" like the right option, but OpenGL is basically disabled, and playing any sort of game or even running anything graphical causes a huge performance drop and instability - even trying to run a basic emulator for GBA or Gens just causes the system to fall over. It works just enough to stare at your desktop and do nothing, but as soon as you start doing some work - opening windows, dragging things around, running multiple copies of Firefox - it just really feels slow.

    The last option, only going dual screen, works perfectly and everything performs as required: full GPU acceleration, two logical screen spaces. Perfect, just make it work across GPUs like Windows does! :-p

    Anyway, I know RandR was supposed to pick up the slack when it introduced GPU objects of sorts, to allow multiple GPUs to be stitched together to create one huge desktop at a much deeper layer than Xinerama. I was wondering if this has now been fixed (I noticed X server 1.7 is out) and whether anyone has got it running successfully?

    Again, my requirements are:

    One huge desktop to drag any window across
    Maximising of windows to each screen (as XP does)
    Running fullscreen apps on the primary screen, with the mouse either blocked from moving onto the others or the app stretched across all 3

    Finally, as a side note: I am aware of the Matrox triple (and dual) head splitter, but even the price they go for on eBay is more than I can afford at the moment. My argument: I shouldn't have to buy extra hardware to get something to work on Linux when it's something that's existed in the Windows world for a long time (can you tell I don't get on with X? :-p). If I had the cash I'd have bought the latest version of this box already (the new version finally supports large resolutions; the displays I have are 1680x1050 each).
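    For reference, the third arrangement above (TwinView on the first GPU plus a separate screen on the second GPU, glued together with Xinerama) boils down to an xorg.conf layout roughly like the sketch below. This is from memory rather than a working config; the BusIDs are placeholders to be taken from lspci, and as noted above the Xinerama join is exactly what kills OpenGL performance:

        Section "ServerLayout"
            Identifier "ThreeHeads"
            Screen     0 "TwinScreen" 0 0
            Screen     1 "ThirdScreen" RightOf "TwinScreen"
            Option     "Xinerama" "1"
        EndSection

        Section "Device"
            Identifier "GPU0"
            Driver     "nvidia"
            BusID      "PCI:3:0:0"    # placeholder, check lspci
        EndSection

        Section "Device"
            Identifier "GPU1"
            Driver     "nvidia"
            BusID      "PCI:4:0:0"    # placeholder, check lspci
        EndSection

        Section "Screen"
            Identifier "TwinScreen"
            Device     "GPU0"
            Option     "TwinView" "1"
            Option     "MetaModes" "1680x1050 +0+0, 1680x1050 +1680+0"
        EndSection

        Section "Screen"
            Identifier "ThirdScreen"
            Device     "GPU1"
            Option     "MetaModes" "1680x1050 +0+0"
        EndSection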

  • How can I create an appointment on a shared Outlook calendar

    - by roryhewitt
    This isn't as basic a question as it may seem, hence all the descriptive text...

    My team and I use Outlook 2007. I have my own personal calendar, and I also share a calendar with others in my team (which is mainly used to notify everyone of vacations etc.). The others in my team are NOT technical people.

    I would like to create a shortcut or template that any of us can use to create an appointment in the shared calendar. My initial thought was to create a new appointment, but rather than actually put it in the calendar, save it as an Outlook Template (.oft) file. Once created, I would send this to my team and tell them to put it in their Templates folder and put a shortcut on their desktop. Then, if they want to put a vacation in the shared calendar, they just double-click the shortcut, change the dates etc., and then save and close it.

    However, when I do that, it doesn't save the fact that it's an appointment on the shared calendar; it just adds the appointment to the team member's personal calendar. There doesn't seem to be a way to specify a calendar in the template. I've also tried saving the template as a .ics or .vcs file, with no better luck.

    Additionally, if a team member adds an appointment to the shared calendar, the other 'sharees' aren't notified unless the appointment is actually created as a meeting and the other sharees are explicitly invited. I found this online (http://office.microsoft.com/en-us/outlook-help/keep-everyone-informed-about-time-away-from-the-office-HA010209819.aspx), which APPEARS to say that what I want to do isn't built-in functionality (since it shows a bunch of steps to go through). I'd PREFER not to have to add this stuff to everyone else's personal calendar directly.

    So:

    Is this possible to do natively (i.e. directly in Outlook)?
    Would a SharePoint calendar make more sense and allow this functionality?
    Is there a way to do what I want that will allow the other team members to be notified?

    Like I said, I'm looking for as simple an interface as possible; these people aren't going to want to do much more than open something and change dates. Additionally, they're probably not going to have any fancy software on their PCs, although they will be up to date with Java and (maybe) .NET frameworks. Also, before anyone gets funny, yes, this has to work with Outlook 2007, as it's a corporate standard; we're not able to change that, even though e.g. Google Calendar would do this wonderfully. Obviously if this functionality is available in Outlook 2010, then fantastic; we might be able to upgrade. Thanks!
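    One programmatic angle that might sidestep the template limitation: the Outlook object model can open a shared calendar explicitly through GetSharedDefaultFolder, so a small VBA macro behind a toolbar button could create the appointment in the right place without anyone picking a calendar by hand. This is only a sketch under assumptions; it assumes the shared calendar is the default Calendar folder of a mailbox everyone can resolve (the name "Team Calendar" below is a placeholder), and macro security or deployment may rule it out for non-technical users:

        ' Minimal sketch: create an appointment directly in a shared calendar
        Sub NewSharedCalendarAppointment()
            Dim ns As Outlook.NameSpace
            Dim owner As Outlook.Recipient
            Dim sharedCal As Outlook.MAPIFolder
            Dim appt As Outlook.AppointmentItem

            Set ns = Application.GetNamespace("MAPI")
            ' "Team Calendar" is a placeholder for the mailbox that owns the shared calendar
            Set owner = ns.CreateRecipient("Team Calendar")
            owner.Resolve
            If Not owner.Resolved Then Exit Sub

            Set sharedCal = ns.GetSharedDefaultFolder(owner, olFolderCalendar)
            Set appt = sharedCal.Items.Add(olAppointmentItem)
            appt.Subject = "Vacation - " & Environ("USERNAME")
            appt.AllDayEvent = True
            appt.Display   ' let the user adjust the dates, then Save & Close
        End Sub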
