Search Results

Search found 6942 results on 278 pages for 'enabled'.

Page 185/278

  • Windows 2003 print services for unix causing CUPS "lpd_command returning 1"

    - by Stephen P. Schaefer
    We have several Windows 2003 servers with print services for Unix on them, and which allow Linux machines running CUPS to use printers defined to CUPS with the URI lpd://printer_server/printer_queue_name - they work. An attempt to provide different printers on a different Windows 2003 server with print services for Unix newly enabled causes CUPS to behave like this: a newly defined printer will be in state "Idle". An attempt to print causes CUPS to change the printer state to "Disabled". In /var/log/cups/error_log, the relevant messages appear to be:

        D [01/Dec/2012:06:14:18 -0800] [Job 16] lpd_command 02 hp775cm_ps
        D [01/Dec/2012:06:14:18 -0800] [Job 16] Sending command string (16 bytes)...
        D [01/Dec/2012:06:14:18 -0800] [Job 16] Reading command status...
        D [01/Dec/2012:06:14:18 -0800] [Job 16] lpd_command returning 1
        E [01/Dec/2012:06:14:18 -0800] PID 18786 stopped with status 1!

    Since my Linux boxes can print to other printers via other Windows 2003 print spoolers, I'm wondering what obscure Windows component could be causing this. I don't think it is Windows firewall, since nmap sees the lpd port (515) open on the server. telnet to the server at port 515 declares:

        Connected to server.internal.example.com (10.22.33.44).
        Escape character is '^]'
        Connection closed by foreign host.

    Windows clients successfully print to the CIFS/SMB share of the hp755cm_ps printer. What other reasons are there for Windows to refuse an lpd request?
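
    Not from the original post, but one quick way to see whether the Windows LPD service itself is rejecting the queue is to speak a little RFC 1179 by hand and look at the one-byte acknowledgement. A minimal sketch, assuming `nc` is available; the host and queue names are placeholders:

        #!/usr/bin/env bash
        # RFC 1179 command 0x02 = "receive a printer job" for the named queue.
        # A single 0 byte back means the queue accepted the request; anything else
        # (or an immediate close) lines up with CUPS logging "lpd_command returning 1".
        # Note: strict LPD servers require a privileged source port (721-731), so a
        # refusal here may only mean that policy is being enforced.
        HOST=printer_server      # placeholder
        QUEUE=hp775cm_ps         # queue name from the CUPS URI
        printf '\002%s\n' "$QUEUE" | nc -w 5 "$HOST" 515 | od -An -tu1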

    Read the article

  • Could not continue scan with NOLOCK due to data movement during installation

    - by dbdev1
    I am running Windows Server 2008 Standard Edition R2 x64 and I installed SQL Server 2008 Developer Edition. All of the preliminary checks run fine (apart from a warning about Windows Firewall and opening ports which is unrelated to this and shouldn't be an issue - I can open those ports). Half way through the actual installation, I get a popup with this error:

        Could not continue scan with NOLOCK due to data movement.

    The installation still runs to completion when I press ok. However, at the end, it states that the following services "failed":

        database engine services
        sql server replication
        full-text search
        reporting services

    How do I know if this actually means that anything from my installation (which is on a clean Windows Server setup - nothing else on there, no previous SQL Servers, no upgrades, etc) is missing? I know from my programming experience that locks are for concurrency control and the Microsoft help on this issue points to changing my query's lock/transactions in a certain way to fix the issue. But I am not touching any queries? Also, now that I have installed the app, when I login, I keep getting this message:

        TITLE: Connect to Server
        ------------------------------
        Cannot connect to MSSQLSERVER.
        ------------------------------
        ADDITIONAL INFORMATION:
        A network-related or instance-specific error occurred while establishing a connection to SQL Server.
        The server was not found or was not accessible. Verify that the instance name is correct and that
        SQL Server is configured to allow remote connections.
        (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)
        (Microsoft SQL Server, Error: 67)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=67&LinkId=20476
        ------------------------------
        BUTTONS: OK
        ------------------------------

    I went into the Configuration Manager and enabled named pipes and restarted the service (this is something I have done before as this message is common and not serious). I have disabled Windows Firewall temporarily. I have checked the instance name against the error logs. Please advise on both of these errors. I think these two errors are related. Thanks

    Read the article

  • Over 200 active requests like "OPTIONS * HTTP/1.0" 200 - "-" "Apache (internal dummy connection)"

    - by Stefan Lasiewski
    Some details:

        Webserver: Apache/2.2.13 (FreeBSD) mod_ssl/2.2.13 OpenSSL/0.9.8e
        OS: FreeBSD 7.2-RELEASE (this is a FreeBSD jail)
        I believe I use the Apache 'prefork' MPM (I run the default for FreeBSD).
        I use the default value for MaxClients (256).

    I have enabled mod_status, with "ExtendedStatus On". When I view /server-status, I see a handful of regular requests. I also see over 230 requests from the 'localhost', like these:

        37-0 - 0/0/1 . 0.00 1510 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0
        38-0 - 0/0/1 . 0.00 1509 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0
        39-0 - 0/0/3 . 0.00 1482 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0
        40-0 - 0/0/6 . 0.00 1445 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0

    I also saw about 2417 requests yesterday from the localhost, like these:

        Apr 14 11:16:40 192.168.16.127 httpd[431]: www.example.gov 127.0.0.2 - - [15/Apr/2010:11:16:40 -0700] "OPTIONS * HTTP/1.0" 200 - "-" "Apache (internal dummy connection)"

    The page at http://wiki.apache.org/httpd/InternalDummyConnection says "These requests are perfectly normal and you do not, in general, need to worry about them", but I'm not so sure. Why are there over 230 of these? Are these active connections? If I have "MaxClients 256" and over 230 of these connections, it seems that my webserver is dangerously close to running out of available connections. It also seems like Apache should only need a handful of these "internal dummy connections". We actually had two unexplained outages last night, and I am wondering if these "internal dummy connections" caused us to run out of available connections.
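
    An aside not in the original question: mod_status can report busy vs. idle workers directly through its machine-readable view, which sidesteps reading the per-slot table. A rough sketch, assuming /server-status is reachable from the box itself:

        #!/usr/bin/env bash
        # The "." in the scoreboard column of the HTML view marks an open slot with no
        # current process, i.e. history rather than a live connection; the ?auto view
        # gives the live worker counts in one place.
        curl -s 'http://localhost/server-status?auto' | egrep 'BusyWorkers|IdleWorkers'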

    Read the article

  • How to make memcached authenticate connections via SASL and PAM?

    - by user199216
    I use memcached on an untrusted network, so I am trying to use SASL and PAM to authenticate connections to memcached. I installed the SASL and PAM modules, and compiled and installed memcached with SASL enabled. I also created a db and table for the PAM user. I run:

        $ sudo testsaslauthd -u tester -p abc123 -s /etc/pam.d/memcached
        0: OK "Success."

    where tester and abc123 are the authed user credentials in the db, which I inserted. But my python script cannot be authed; authentication failed is always returned. It seems it does not use PAM for authentication and still uses sasldb, because when I add the user with:

        $ sudo saslpasswd2 -a memcached -c tester

    and input the password abc123, it passes. Python script:

        client = bmemcached.Client(('localhost:11211'), 'tester', 'abc123')

    and error:

        bmemcached.exceptions.MemcachedException: Code: 32 Message: Auth failure.

    memcached log:

        authenticated() in cmd 0x21 is true
        mech: ``PLAIN'' with 14 bytes of data
        SASL (severity 2): Password verification failed
        sasl result code: -20
        Unknown sasl response: -20
        >30 Writing an error: Auth failure.
        >30 Writing bin response:

    No auth log is found in /var/log/auth.log. Configurations:

        vi /etc/default/saslauthd
        MECHANISMS="pam"

        vi /etc/pam.d/memcached
        auth sufficient pam_mysql.so user=sasl passwd=abc123 host=localhost db=sasldb table=sasl_user usercolumn=user_name passwdcolumn=password crypt=0 sqllog=1 verbose=1
        account required pam_mysql.so user=sasl passwd=abc123 host=localhost db=sasldb table=sasl_user usercolumn=user_name passwdcolumn=password crypt=0 sqllog=1 verbose=1

        vi /etc/sasl2/memcached.conf
        pwcheck_method: saslauthd

    English is not my native language, sorry if my question is unclear. Any tips will be appreciated!

    Read the article

  • Nginx case-insensitive reverse proxy rewrites

    - by BrianM
    I'm looking to set up an nginx reverse proxy to make some upcoming server moves and load balanced implementations much easier within our apps. Since our servers are all IIS, case sensitivity hasn't been an issue, but now with nginx it's becoming one for me. I am simply looking to do a rewrite regardless of case.

    Infrastructure notes:

        All backend servers are IIS
        Most services are WCF services
        I am trying to simplify the URLs so I can move services around as we continue to build out

    I can't set my location to case insensitive due to the following error:

        nginx: [emerg] "proxy_pass" cannot have URI part in location given by regular expression, or inside named location, or inside "if" statement, or inside "limit_except" block in /etc/nginx/sites-enabled/test.conf:101

    The main part of my conf file where I am trying to handle the rewrite is as follows:

        location /svc_test {
            proxy_set_header x-real-ip $remote_addr;
            proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
            proxy_set_header host $http_host;
            proxy_pass http://backend/serviceSite/WFCService.svc;
        }

        location ~* /test {
            rewrite ^/(.*)/$ /svc_test/$1 last;
        }

    It's the /test location that I can't get figured out. If I call http://nginxserver/svc_test/help I get the WCF help page to display correctly and I can make all available REST calls. This HAS to be a boneheaded regex issue on my part, but I have tried several variations and all I can get are 404 or 500 errors from nginx. This is NOT rocket science so can someone point me in the right direction so I can look like an idiot and just move on?
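
    Not from the original question: one thing that stands out is that the rewrite in the /test block only matches URIs ending in a slash (^/(.*)/$), so a request like /test/help falls through unrewritten. A possible replacement for that block, sketched and untested against this setup - it strips the first path segment whatever its case and lets the /svc_test block above do the proxying:

        location ~* ^/test(/.*)?$ {
            # Drop the first segment (any case) and re-run location matching with
            # "last"; the rewritten /svc_test/... URI is then handled by the
            # prefix location above, which appends the remainder to the WCF path.
            rewrite ^/[^/]+(/.*)?$ /svc_test$1 last;
        }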

    Read the article

  • Amazon EC2: Problem in setting up FTP server

    - by Muntasir
    After setting up my vsftpd server on EC2 I am facing a problem. My client is FileZilla and I am getting this error:

        Response: 230 Login successful.
        Command: OPTS UTF8 ON
        Response: 200 Always in UTF8 mode.
        Status: Connected
        Status: Retrieving directory listing...
        Command: PWD
        Response: 257 "/"
        Command: TYPE I
        Response: 200 Switching to Binary mode.
        Command: PASV
        Response: 500 OOPS: invalid pasv_address
        Command: PORT 10,130,8,44,240,50
        Response: 500 OOPS: priv_sock_get_cmd
        Error: Failed to retrieve directory listing
        Error: Connection closed by server

    This is the current setting in my vsftpd.conf:

        #nopriv_user=ftpsecure
        #async_abor_enable=YES
        # ASCII mangling is a horrible feature of the protocol.
        #ascii_upload_enable=YES
        #ascii_download_enable=YES
        # You may specify a file of disallowed anonymous e-mail addresses. Apparently
        # useful for combatting certain DoS attacks.
        #deny_email_enable=YES
        # (default follows)
        #banned_email_file=/etc/vsftpd/banned_emails
        #
        chroot_local_user=YES
        #chroot_list_enable=YES
        # (default follows)
        #chroot_list_file=/etc/vsftpd/chroot_list
        #
        #ls_recurse_enable=YES
        #
        # When "listen" directive is enabled, vsftpd runs in standalone mode and
        # listens on IPv4 sockets. This directive cannot be used in conjunction
        # with the listen_ipv6 directive.
        listen=YES
        #
        # This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6
        # sockets, you must run two copies of vsftpd with two configuration files.
        # Make sure, that one of the listen options is commented !!
        #listen_ipv6=YES
        pam_service_name=vsftpd
        userlist_enable=YES
        tcp_wrappers=YES
        pasv_enable=YES
        pasv_min_port=2345
        pasv_max_port=2355
        listen_port=1024
        pasv_address=ec2-xxxxxxx.compute-1.amazonaws.com
        pasv_promiscuous=YES

    Note: I have already opened those ports in the security group, I mean the listen port and the passive min/max range. If someone shows me how to fix this I will be very grateful. Thanks
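
    For what it's worth (editor's note, not from the original post): vsftpd expects pasv_address to be a literal IP unless pasv_addr_resolve is turned on, which is one plausible reading of the "invalid pasv_address" OOPS above. A possible variant of the passive section, keeping the asker's placeholder hostname:

        # Either give the public/elastic IP directly ...
        #pasv_address=203.0.113.10        # placeholder public IP
        # ... or keep the DNS name and let vsftpd resolve it at startup
        pasv_address=ec2-xxxxxxx.compute-1.amazonaws.com
        pasv_addr_resolve=YES
        pasv_enable=YES
        pasv_min_port=2345
        pasv_max_port=2355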

    Read the article

  • directory with 980MB of metadata, millions of files, how to delete it? (ext3)

    - by Alexandre
    Hello, so I'm stuck with this directory:

        drwxrwxrwx 2 dan users 980M 2010-12-22 18:38 sessions2

    The directory's contents are small - just millions of tiny little files. I want to wipe it from the filesystem but have been unable to. My first try was:

        find sessions2 -type f -delete

    and

        find sessions2 -type f -print0 | xargs -0 rm -f

    but I had to stop because both caused escalating memory usage. At one point it was using 65% of the system's memory. So I thought (no doubt incorrectly) that it had to do with the fact that dir_index was enabled on the system. Perhaps find was trying to read the entire index into memory? So I did this (foolishly):

        tune2fs -O^dir_index /dev/xxx

    Alright, so that should do it. Ran the find command above again and... same thing. Crazy memory usage. I hurriedly ran tune2fs -Odir_index /dev/xxx to reenable dir_index, and ran to Server Fault!

    Two questions:

    1) How do I get rid of this directory on my live system? I don't care how long it takes, as long as it uses little memory and little CPU. By the way, using nice find ... I was able to reduce CPU usage, so my problem right now is only memory usage.

    2) I disabled dir_index for about 20 minutes. No doubt new files were written to the filesystem in the meanwhile. I reenabled dir_index. Does that mean the system will not find the files that were written before dir_index was reenabled, since their filenames will be missing from the old indexes? If so, and I know these new files aren't important, can I maintain the old indexes? If not, how do I rebuild the indexes? Can it be done on a live system?

    Thanks!
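
    Editor's sketch, not something the poster tried: a low-memory approach that is often suggested for huge flat directories is letting rsync delete against an empty directory, combined with nice/ionice to keep the impact on a live box down. Assuming the layout shown above and that util-linux ionice is available:

        #!/usr/bin/env bash
        # Empty sessions2 by syncing an empty directory over it, then remove it.
        mkdir /tmp/empty
        ionice -c3 nice rsync -a --delete /tmp/empty/ sessions2/
        rmdir sessions2 /tmp/empty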

    Read the article

  • Connection speed drops from 1 Gbps to 10 Mbps (Vista 64)

    - by Kevin Hakanson
    I recently got a Windows Home Server (HP MediaSmart Server EX490) set up so I could do backups and other things. However, I am having trouble on my Vista 64 PC. The backup will be making great progress, then it will just slow down. At one point, I noticed the lights on my Netgear GS105 indicated it was not using a 1000 Mbps connection, but a 10 Mbps one. I checked the Status of Local Area Connection (Intel(R) 82567V-2 Gigabit Network Connection) and that also showed the same slow speed. This has happened several times in the last couple days. When I disabled the network device and then enabled it, it established the 1 Gbps connection again. However, some of the times the Sent Bytes Activity on the Status window indicates that the data flow is still slow (100 to 1000 bytes every couple seconds). Obviously, at this rate I could back up faster to floppy disk. :)

    My question is how to diagnose and fix this problem. When I look at the Administrative Events, I see an Error:

        Bonjour Service 456: ERROR: read_msg errno 10054 (An existing connection was forcibly closed by the remote host.)

    and a Warning:

        e1yexpress Intel(R) 82567V-2 Gigabit Network Connection Link has been disconnected.

    I am suspicious there is some power saving mode. I found a post suggesting System Idle Power Saver (SIPS) may be the issue. I am going to try that, but I'm looking for other suggestions or diagnostic advice. I have several new items in this configuration: server, client software, switch and Cat6 cables.

    Read the article

  • iSCSI, failover and XenServer

    - by jemmille
    I have an iSCSI failover implementation set up so if one of my storage units fails the other takes over immediately (it also runs the NFS shares). When failover occurs, volumes are exported, the IP is switched to the other machine and the targets are reconfigured. The failover of the storage system itself works just fine. I use NexentaStor for my filer. When I do a test (manual) failover of my storage the following occurs (note: I run the admin VMs on NFS and customer-based VMs on iSCSI):

        All NFS based VMs remain up and working perfectly through the failover and after.
        All VMs running on iSCSI eventually report the following:
            An error about not being able to write to a particular block
            An error about journaling not working
            Then the file system goes RO

    To get the VMs working again I have to do the following:

        1. Force shutdown of the "broken" VMs.
        2. Detach the iSCSI SR
        3. Re-attach the iSCSI SR
        4. Boot the VM on a different server (5 in my pool)

    If I don't boot on a different server I get this error:

        Internal error: Failure("The VDI <uuid> is already attached in RW mode; it can't be attached in RO mode!")

    The only way I have found to fix that error is to reboot the entire server it was running on previously, which is obviously a huge pain. Currently multipathing is NOT enabled (but it can be, and the same thing still occurs). I have edited much of the /etc/iscsid.conf file to work with the timeout settings but to no avail. In short, my storage fails over properly but XenServer does not keep the connection alive. As a thought, the error that shows up in #4 above might be the ultimate cause and fixing that would fix everything? Any help would be appreciated more than you know.

    Read the article

  • Nginx upload PUT and POST

    - by w00t
    I am trying to make nginx accept POST and PUT methods to upload files. I have compiled nginx_upload_module-2.2.0. I can't find any how-to. I simply want to use only nginx for this - no reverse proxy, no other backend and no PHP. Is this achievable? This is my conf:

        nginx version: nginx/1.2.3
        TLS SNI support enabled
        configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g' --add-module=/usr/src/nginx-1.2.3/nginx_upload_module-2.2.0

        server {
            listen 80;
            server_name example.com;

            location / {
                root /html;
                autoindex on;
            }

            location /upload {
                root /html;
                autoindex on;
                upload_store /html/upload 1;
                upload_set_form_field $upload_field_name.name "$upload_file_name";
                upload_set_form_field $upload_field_name.content_type "$upload_content_type";
                upload_set_form_field $upload_field_name.path "$upload_tmp_path";
                upload_aggregate_form_field "$upload_field_name.md5" "$upload_file_md5";
                upload_aggregate_form_field "$upload_field_name.size" "$upload_file_size";
                upload_pass_form_field "^submit$|^description$";
                upload_cleanup 400 404 499 500-505;
            }
        }

    And as an upload form I'm trying to use the one listed at the end of this page: http://grid.net.ru/nginx/upload.en.html
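
    Editor's sketch, not from the original post: once the config above is loaded, the /upload location can be exercised without any HTML form by sending a multipart POST from curl. The field names here simply mirror the upload_pass_form_field pattern in the config:

        #!/usr/bin/env bash
        # Send a multipart/form-data POST to the /upload location.
        # "file" is an arbitrary field name; "submit" and "description" match the
        # upload_pass_form_field regex in the config above.
        dd if=/dev/urandom of=/tmp/test.bin bs=1k count=64
        curl -v -F "file=@/tmp/test.bin" -F "submit=ok" -F "description=test upload" \
             http://example.com/upload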

    Read the article

  • Using %v in Apache LogFormat definition matches ServerName instead of specific vhost requested

    - by Graeme Donaldson
    We have an application which uses a DNS wildcard, i.e. *.app.example.com. We're using Apache 2.2 on Ubuntu Hardy. The relevant parts of the Apache config are as follows.

    In /etc/apache2/httpd.conf:

        LogFormat "%v %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog

    In /etc/apache2/sites-enabled/app.example.com:

        ServerName app.example.com
        ServerAlias *.app.example.com
        ...
        CustomLog "|/usr/sbin/vlogger -s access.log /var/log/apache2/vlogger" vlog

    Clients access this application using their own URL, e.g. company1.app.example.com, company2.app.example.com, etc. Previously, the %v in the LogFormat directive would match the hostname of the client request, and we'd get several subdirectories under /var/log/apache2/vlogger corresponding to the various client URLs in use. Now, %v appears to be matching the ServerName value, so we only get one log under /var/log/apache2/vlogger/app.example.com. This breaks our logfile analysis because the log file has no indication of which client the log relates to. I can fix this easily by changing the LogFormat to this:

        LogFormat "%{Host}i %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog

    This will use the HTTP Host: header to tell vlogger which subdirectory to create the logs in and everything will be fine. The only concern I have is that this has worked in the past and I can't find any indication that this has changed recently. Is anyone else using a similar config, i.e. wildcard + vlogger and using %v? Is it working fine?
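
    A side note not in the original question: mod_log_config distinguishes %v (the canonical ServerName of the serving vhost) from %V (the server name according to the UseCanonicalName setting). A possible variant along those lines, assuming UseCanonicalName is Off so that %V tracks the hostname the client actually asked for:

        # in httpd.conf
        UseCanonicalName Off
        LogFormat "%V %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog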

    Read the article

  • Can't access Dell BMC IPMI Over IP

    - by Bobb
    I have a Dell R210 with an iDRAC BMC (the new name for the old BMC), which is an on-board feature with a shared NIC (I believe). The server is at a colocation facility and I didn't set it up before it was sent there... So I asked the remote hands to set up IPMI Over IP. They enabled it, set the IP and everything. The IP is different than the main box IP. Also, the box is cabled to NIC1 and the BMC is supposed to share it (am I right?). I can see the new IP in the Open Server Administrator (installed on the box). I tried the Supermicro IPMI tool and I tried the Dell ipmish.exe command like this:

        ipmish -ip xxx -u root -p calvin sysinfo

    which gives:

        BMC is not detected

    What could be wrong? Is there a diagnostics tool I can try? It must be something obvious, I just never used things like that before....

    P.S. I read something about encryption keys in the Dell docs, but I understand that is for encrypted IPMI 2.0 and ipmish can use IPMI 1.5 without encryption.
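
    Not part of the original post - as a second opinion from any machine with ipmitool installed and network reach to the BMC, the LAN interface can be poked directly. The IP below is a placeholder; root/calvin are the defaults mentioned above:

        #!/usr/bin/env bash
        BMC_IP=192.0.2.10          # placeholder for the iDRAC/BMC IP
        # IPMI v1.5 session ("lan"); use -I lanplus for an IPMI v2.0 session instead
        ipmitool -I lan -H "$BMC_IP" -U root -P calvin chassis status
        ipmitool -I lan -H "$BMC_IP" -U root -P calvin lan print 1
        # If these time out, reachability of UDP/623 to the BMC IP is worth checking.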

    Read the article

  • Confusion about Kerberos, delegation and SPNs.

    - by Vilx-
    I already posted this question on SO, but the nature of it is between programming and server configuration, so I'll re-post it here as well. I'm trying to write a proof-of-concept application that performs Kerberos delegation. I've written all the code, and it seems to be working (I'm authenticating fine), but the resulting security context doesn't have the ISC_REQ_DELEGATE flag set. So I'm thinking that maybe one of the endpoints (client or server) is forbidden to delegate. However, I'm not authenticating against an SPN - just one domain user against another domain user. As the SPN for InitializeSecurityContext() I'm passing "[email protected]" (which is the user account under which the server application is running). As I understand it, domain users have delegation enabled by default. Anyway, I asked the admin to check, and the "account is sensitive and cannot be delegated" checkbox is off.

    I know that if my server was running as NETWORK SERVICE and I used an SPN to connect to it, then I'd need the computer account in AD to have the "Trust computer for delegation" checkbox checked (off by default), but... this is not the case, right? Or is it? Also - when the checkbox in the computer account is set, do the changes take place immediately, or must I reboot the server PC or wait for a while?

    Read the article

  • Why am I getting 403 Forbidden after enabling HTTPS for Apache on Mac OS X?

    - by Daryl Spitzer
    I enabled HTTPS on the Apache server built in to Mac OS X 10.6 (on my MacBook Pro) by uncommenting:

        Include /private/etc/apache2/extra/httpd-ssl.conf

    ...in /etc/apache2/httpd.conf and modifying /etc/apache2/extra/httpd-ssl.conf to include:

        DocumentRoot "/Users/dspitzer/foo/bar"
        ServerName dot.com:443
        ServerAdmin [email protected]
        ...
        SSLCertificateFile "/private/etc/apache2/siab_cert.pem"
        SSLCertificateKeyFile "/private/etc/apache2/siab_key.pem"

    Then I restart Apache (with sudo apachectl restart) and go to https://localhost/ in Safari, where I get:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>403 Forbidden</title>
        </head><body>
        <h1>Forbidden</h1>
        <p>You don't have permission to access / on this server.</p>
        </body></html>

    I've tried changing 443 in /etc/apache2/extra/httpd-ssl.conf to 8443 and going to https://localhost:8443/ and I get the same error. I read http://serverfault.com/questions/88037/why-am-i-getting-this-403-forbidden-error and confirmed that execute permission is given for all parent directories of the vhost dir: /Users/dspitzer/foo/bar. Is there a log file somewhere that might give me a clue?
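
    Editor's note, not from the original question: on Apache 2.2 a DocumentRoot outside the directories already opened up in httpd.conf is denied by the default restrictive <Directory /> policy, so a missing <Directory> block in httpd-ssl.conf is worth ruling out (the bundled Apache on OS X logs the denial in /var/log/apache2/error_log). A sketch only, reusing the path quoted above:

        DocumentRoot "/Users/dspitzer/foo/bar"
        <Directory "/Users/dspitzer/foo/bar">
            Options Indexes FollowSymLinks
            AllowOverride None
            # Apache 2.2 syntax (2.4 would use "Require all granted")
            Order allow,deny
            Allow from all
        </Directory>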

    Read the article

  • How can I forward an application with X11 in grayscale

    - by ??????? ???????????
    I am trying to run a graphical application at home and display it on a laptop which is located about six routing hops away. The problem is that the connection is so slow (or rather there is so much GOOEY being transferred) that the mouse is unresponsive and it takes a "long time" to redraw the window even at a resolution of 800x600 pixels. The connection speeds are 10 Mbit up at home and about 1 Mbit down on the laptop, which I think should be sufficient for looking at some GUI in (almost) real time. Since this traffic is sent over a secure shell, I have enabled Compression with the highest CompressionLevel, along with Ciphers set to blowfish-cbc. This has substantially improved the responsiveness of the application, making it nearly usable. However, my goal is to improve the performance even further by sacrificing colors and even frame rate.

    The application to be displayed is a Qemu SDL window with a graphically-oriented OS in it. This is not strictly relevant, but perhaps there are options to tweak the SDL output which I am not aware of. A possible workaround would be to run the application in a "hidden" X server and enable TigerVNC on that X server. This would automatically give me the benefits of an optimized VNC viewport, but the goal is to do without (reduce complexity). The question I'm asking is: what are my options for reducing the data rate generated on the server in order to make the graphical application more usable on the client? As mentioned, colors are not important and I could probably work with 5-16 fps.

    Both machines are running Gentoo with the software in question being:

        workstation
            X.Org X Server 1.10.4
            OpenSSH_5.8p1-hpn13v10, OpenSSL 1.0.0e
            QEMU emulator version 0.15.1 (qemu-kvm-0.15.1)
        laptop
            X.Org X Server 1.12.2
            OpenSSH_5.8p1-hpn13v10lpk, OpenSSL 1.0.0j

    Read the article

  • Why am I getting a warning that Windows is logging on with a temporary profile to run a Task Scheduler task?

    - by Dan C
    I am having a strange problem with the Windows Server 2008 Task Scheduler. I have to run a small command-line application every few minutes. This application just executes a quick web service call on the localhost and adds an entry to a log file, so it should not need anything special in terms of permissions.

    First, I created a new user account "my_scheduler" just for the task. This account is a member of the Users group (not sure what other settings I should turn on/off) and its password is set to not expire. I then created a task to run the application every few minutes. I set it to "Run whether user is logged on or not" and turned on "Do not store password. The task will only have access to local resources" (I did this since it's not hitting anything on the network). I did not turn on "Run with highest privileges" since it does not seem to need them. I set the schedule to "After triggered, repeat every 30 minutes for a duration of 1 day" and "Allow task to be run on demand" (no other settings enabled).

    However, I notice that in the Event Log, I see a bunch of these warnings whenever the task is run:

        "Windows cannot find the local profile and is logging you on with a temporary profile. Changes you make to this profile will be lost when you log off."

    Even though I get the warning, the task is executing (I see the log entries appearing). Another (possibly related) issue is that it also starts multiple copies of the task (within a few seconds of each other) even though it should only start one. This is also a big problem. Any idea how I can fix this? Thanks in advance, Dan

    Read the article

  • Personal VPN Solutions

    - by dragonmantank
    I want to set up a VPN for my laptop to connect back home so that I don't have to directly expose my desktop computer to the internet. Here is what I have:

        Internet -> DD-WRT v24sp1-mega -> Desktop PC w/ Windows 7 Ultimate -> MacBook w/ OSX 10.6

    What would be the easiest thing to do? DD-WRT has PPTP and OpenVPN built in, and Windows 7 has RRAS itself, but thus far I've run into some problems. Are there any other alternatives, or suggestions on getting these to work?

    PPTP: I tried setting up PPTP directly on DD-WRT using these directions. When I tried connecting using my external IP from the MacBook I just kept getting that the remote server did not respond.

    OpenVPN: According to the instructions here I don't have enough open nvram to set up OpenVPN.

    RRAS: I got RRAS set up without a problem and can connect from the MacBook to the Windows 7 box while I'm on the same network. I port forwarded 1723 on the DD-WRT back to the Windows 7 box and made sure that PPTP Passthrough was enabled. Again, like PPTP, it just kept timing out.

    Read the article

  • Prosody mod auth external not working

    - by Yang
    I installed mod_auth_external for 0.8.2 on Ubuntu 12.04 but it's not working. I have:

        external_auth_command = "/home/yang/chat/testing"

    but it's not getting invoked. I enabled debug logging and see no messages from that mod. Any help? I'm using the Candy example client. Here's what's written to the log after I submit a login request (and nothing in the err log):

        Oct 24 21:02:43 socket debug server.lua: accepted new client connection from 127.0.0.1:40527 to 5280
        Oct 24 21:02:43 mod_bosh debug BOSH body open (sid: %s)
        Oct 24 21:02:43 boshb344ba85-fbf5-4a26-b5f5-5bd35d5ed372 debug BOSH session created for request from 169.254.11.255
        Oct 24 21:02:43 mod_bosh info New BOSH session, assigned it sid 'b344ba85-fbf5-4a26-b5f5-5bd35d5ed372'
        Oct 24 21:02:43 httpserver debug Sending response to bf9120
        Oct 24 21:02:43 httpserver debug Destroying request bf9120
        Oct 24 21:02:43 httpserver debug Request has destroy callback
        Oct 24 21:02:43 socket debug server.lua: closed client handler and removed socket from list
        Oct 24 21:02:43 mod_bosh debug Session b344ba85-fbf5-4a26-b5f5-5bd35d5ed372 has 0 out of 1 requests open
        Oct 24 21:02:43 mod_bosh debug and there are 0 things in the send_buffer
        Oct 24 21:02:43 socket debug server.lua: accepted new client connection from 127.0.0.1:40528 to 5280
        Oct 24 21:02:43 mod_bosh debug BOSH body open (sid: b344ba85-fbf5-4a26-b5f5-5bd35d5ed372)
        Oct 24 21:02:43 mod_bosh debug Session b344ba85-fbf5-4a26-b5f5-5bd35d5ed372 has 1 out of 1 requests open
        Oct 24 21:02:43 mod_bosh debug and there are 0 things in the send_buffer
        Oct 24 21:02:43 mod_bosh debug Have nothing to say, so leaving request unanswered for now
        Oct 24 21:02:43 httpserver debug Request c295d0 left open, on_destroy is function(mod_bosh.lua:81)

    Here's the config I added:

        modules_enabled = {
            ...
            "bosh"; -- Enable BOSH clients, aka "Jabber over HTTP"
            ...
        }

        authentication = "external"
        external_auth_protocol = "generic"
        external_auth_command = "/home/yang/chat/testing"
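
    Editor's sketch, not from the original post, and based on an assumption about the module's "generic" protocol (one request per line on stdin, colon-separated as auth:user:host:password, answered with "1" or "0" on stdout): a deliberately dumb executable handler like the one below can at least prove whether the hook is ever invoked. The script must be executable by the prosody user; the credentials are hard-coded test values. It is also worth checking that the authentication options sit in the section of the config that applies to the VirtualHost the client logs in to.

        #!/usr/bin/env bash
        # /home/yang/chat/testing - assumed shape of a mod_auth_external "generic" handler.
        # Note: a password containing ":" would be split by this simple read.
        while IFS=: read -r method user host pass; do
            case "$method" in
                auth)
                    if [ "$user" = "tester" ] && [ "$pass" = "abc123" ]; then
                        echo 1
                    else
                        echo 0
                    fi
                    ;;
                isuser) echo 1 ;;
                *)      echo 0 ;;
            esac
        done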

    Read the article

  • Apache Simple Configuration Issue: per-user directory is accessing /~user instead of ~user

    - by Huckphin
    Hello. I am just getting Apache 2.2 running on Fedora 13 Beta 64-bit. I am running into issues setting up my per-user directory. The goal is to make localhost/~user map to /home/~user/public_html. I think that I have the permissions right because I have 755 on /home/~user, 755 on /home/~user/public_html/, and 777 set recursively for all contents inside of /home/~user/public_html/. My mod_userdir configuration looks like this:

        <IfModule mod_userdir.c>
            #
            # UserDir is disabled by default since it can confirm the presence
            # of a username on the system (depending on home directory
            # permissions).
            #
            UserDir disabled root
            UserDir enabled huckphin

            #
            # To enable requests to /~user/ to serve the user's public_html
            # directory, remove the "UserDir disabled" line above, and uncomment
            # the following line instead:
            #
            UserDir public_html

    The error that I am seeing in the error log is this:

        [Sat May 15 09:54:29 2010] [error] [client 127.0.0.1] (13)Permission denied: access to /~huckphin/index.html denied

    When I log in as the apache user, I know that /~huckphin does not exist, and this is not what I want. I want it to be accessing ~huckphin, not /~huckphin. What do I need to change in my configuration for this to work?
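
    Not something mentioned in the question, but on Fedora a "(13)Permission denied" from Apache with filesystem modes that look correct is frequently SELinux rather than the mode bits. A hedged sketch of the usual checks, assuming SELinux is in enforcing mode:

        #!/usr/bin/env bash
        # Is SELinux enforcing, and are home directories currently allowed for httpd?
        getenforce
        getsebool httpd_enable_homedirs

        # Allow userdir access and label the content for user web serving
        sudo setsebool -P httpd_enable_homedirs on
        sudo chcon -R -t httpd_user_content_t /home/huckphin/public_html

        # Recent AVC denials, if any
        sudo grep httpd /var/log/audit/audit.log | tail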

    Read the article

  • Samba: share home directories when home directories are symbolic links

    - by Owen
    I have set up a new Ubuntu 9.10 system for five users. In the system is a large LVM volume where all the data is to be kept. The main system disk is not for this purpose, so I attempted to move the home directories using:

        usermod -d /var/data/username -m

    and started creating my shares for these new home locations. But then I thought: hey, Samba has built-in home directory sharing! So I enabled that, and it didn't work. The shares were not published to the network. Only the share for user 'owen' was published; his folder hadn't been moved. So I thought: maybe Samba home sharing only works for default home locations, so how about I move the home directories back to where they were, and then make them symlinks.

        root@boxenmkiv:/home# ls -l
        total 4
        lrwxrwxrwx 1 brett brett   25 2010-04-03 08:48 brett -> /var/data/brett/
        lrwxrwxrwx 1 carly carly   23 2010-04-03 08:48 carly -> /var/data/carly/
        lrwxrwxrwx 1 dave  dave    21 2010-04-03 08:48 dave -> /var/data/dave/
        lrwxrwxrwx 1 kate  kate    23 2010-04-03 08:47 kate -> /var/data/kate/
        drwxr-xr-x 4 owen  owen  4096 2010-04-03 08:44 owen

    Like so. Still no go. The only user share which is published to the network is 'owen', who as you can see above has not had his home directory moved. I have also added the following to my smb.conf:

        [global]
        follow symlinks = yes
        wide symlinks = yes
        unix extensions = no

    With no luck. Am I going about doing this the entirely wrong way? Should I just give up and manually create shares for the users? Thanks in advance.
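
    For comparison (editor's note, not from the original post): the smb.conf parameter is normally spelled "wide links" rather than "wide symlinks", and symlink traversal also depends on "unix extensions" being off. A minimal sketch of the conventionally named settings in a [homes]-based setup:

        [global]
            unix extensions = no

        [homes]
            comment = Home Directories
            browseable = no
            read only = no
            follow symlinks = yes
            wide links = yes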

    Read the article

  • APC fragmentation woes on Apache AWS EC2 Small instance with WordPress and W3TC

    - by two7s_clash
    AWS EC2 Small instance, Apache 2 running WordPress and W3TC. Within an hour, my APC fragmentation hits 100%. My APC settings (/etc/php.d/apc.ini) are:

        apc.enabled = 1
        apc.shm_segments = 1
        apc.shm_size = 100M
        apc.optimization = 0
        apc.num_files_hint = 512
        apc.user_entries_hint = 1024
        apc.ttl = 7200
        apc.user_ttl = 7200
        apc.gc_ttl = 3600
        apc.cache_by_default = 1
        apc.use_request_time = 1
        apc.filters = "apc\.php$"
        apc.mmap_file_mask = "/tmp/apc.XXXXXX"
        apc.slam_defense = 0
        apc.file_update_protection = 2
        apc.enable_cli = 0
        apc.max_file_size = 2M
        apc.stat = 1
        apc.write_lock = 1
        apc.report_autofilter = 0
        apc.include_once_override = 0
        apc.rfc1867 = 0
        apc.rfc1867_prefix = "upload_"
        apc.rfc1867_name = "APC_UPLOAD_PROGRESS"
        apc.rfc1867_freq = 0
        apc.localcache = 0
        apc.localcache.size = 256M
        apc.coredump_unmap = 0
        apc.stat_ctime = 0
        apc.canonicalize = 1
        apc.lazy_functions = 0
        apc.lazy_classes = 0

    More poop can be seen here. Mostly cribbed settings from here. The shm was meant to be whittled down from such a high value after some observation, but apparently such a large value isn't even high enough.... I found a similar question/answer here. I do have some virtual hosts set up, but they aren't being touched much at all. Having users logged into the admin panel of WP does make things worse, but that's certainly not the main culprit. The question asker seems to suggest that it turns out W3TC is probably causing the problem, which the plugin author seems to agree with, but there aren't any helpful details beyond that. Why is it causing the problem? Do I just take it for now and turn off object caching with APC? Is there nothing I can do? Does having it turned on without being used for object caching actually help anything? Would memcache be an OK substitute just for object caching here? Finally, maybe I just shouldn't worry so much about the fragmentation?

    Read the article

  • How does a web server/the HTTP protocol handle version control and compression?

    - by Sune Rasmussen
    When a client browser requests a file from the web server, I know that some kind of check is performed, because the files needed to serve the web page may already be cached by the web browser. So, if a file exists in the cache, no files are sent. But if the file on the server has changed since the file was cached in the browser, the file is sent and updated anyhow.

    Then, if you have compression like gzipping enabled on the server, the files that are to be provided to the client must be gzipped on the way, requiring some amount of server-side processing. But how is this managed? The logical approach seems to me that the web server should have a cache as well, containing the newest version of all files that have been requested within a certain time span, and thus a compressed version of these files, so that compression would not have to be done each time a file is requested.

    And also, how are files eventually requested? Does the browser ask for files each time it encounters one in the HTML code and the specific file is not stored in the local cache, or does it sum up all the files that are needed and ask for the whole bunch at the same time? But that's only guessing from a programming point of view, and I don't really know. If the answers are very different among web server systems, I'm primarily interested in Apache, but other answers are appreciated, too.
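
    Editor's illustration, not part of the original question: the two mechanisms being described are easy to watch from the command line - conditional requests (If-Modified-Since / If-None-Match answered with 304 Not Modified) and negotiated compression (Accept-Encoding answered with Content-Encoding). A small sketch against a placeholder URL:

        #!/usr/bin/env bash
        URL=http://www.example.com/          # placeholder

        # 1) Ask for gzip: the server states in the response what it actually applied
        curl -sI -H 'Accept-Encoding: gzip' "$URL" | egrep -i 'content-encoding|vary'

        # 2) Conditional request: replay the validator from a first response and
        #    expect "304 Not Modified" with no body if the file is unchanged
        ETAG=$(curl -sI "$URL" | awk -F': ' 'tolower($1)=="etag"{print $2}' | tr -d '\r')
        curl -sI -H "If-None-Match: $ETAG" "$URL" | head -n 1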

    Read the article

  • PHP upgrade to 5.3 from 5.2, sessions no longer get stored

    - by Damo
    Background link: http://stackoverflow.com/questions/7014945/php-upgrade-5-2-to-5-3-session-issue

    I have upgraded PHP on my 2008 Std server from PHP 5.2 to PHP 5.3. Following the upgrade, sessions no longer work correctly. I have copied over the settings from my PHP.ini files which are applicable and configured new settings in line with the server's or PHP's recommendations. PHP executes fine, however session data does not get saved. I have session data stored in c:\temp. For each session created, I can see the session file in this folder. However no information gets written into the session file. Permissions wise, IUSR and EVERYONE have write access to this folder. If I downgrade to PHP 5.2, sessions are saved correctly and the site functions correctly. I have followed advice to ensure my code is optimised, closing session files correctly and forcing a session reset. I'm stumped.

    phpinfo() session section:

        Session Support: enabled
        Registered save handlers: files user sqlite
        Registered serializer handlers: php php_binary wddx

        Directive                         Local Value     Master Value
        session.auto_start                Off             Off
        session.bug_compat_42             On              On
        session.bug_compat_warn           On              On
        session.cache_expire              180             180
        session.cache_limiter             nocache         nocache
        session.cookie_domain             no value        no value
        session.cookie_httponly           Off             Off
        session.cookie_lifetime           0               0
        session.cookie_path               /               /
        session.cookie_secure             Off             Off
        session.entropy_file              no value        no value
        session.entropy_length            0               0
        session.gc_divisor                100             100
        session.gc_maxlifetime            1440            1440
        session.gc_probability            1               1
        session.hash_bits_per_character   4               4
        session.hash_function             0               0
        session.name                      PHPSESSID53     PHPSESSID53
        session.referer_check             no value        no value
        session.save_handler              files           files
        session.save_path                 /temp           /temp
        session.serialize_handler         php             php
        session.use_cookies               On              On
        session.use_only_cookies          On              On
        session.use_trans_sid             0               0

    Read the article

  • Excel cannot access the file with IIS7 & Windows Server 2008 R2 (64-bit)

    - by user838204
    I have a web project (.NET 4) that needs to access an Excel file, but it ends up with the following error message:

        Error occured during file generation. Microsoft Excel cannot access the file 'D:\xx\xx\abc.xls'. There are several possible reasons:
        • The file name or path does not exist. (Actually it's there)
        • The file is being used by another program. (It can't happen)
        • The workbook you are trying to save has the same name as a currently open workbook.

    In IIS7, I use DefaultAppPool with the Identity "myservice", who is a member of the Administrators group. On the Authentication page of my website under IIS, Anonymous Authentication is enabled and set to "Application pool identity", and ASP.NET Impersonation is disabled. After searching for a solution for hours, I found the following, but NONE of them work:

        1. Create the folder C:\Windows\SysWOW64\config\systemprofile\Desktop. Plz refer: this
        2. Grant rights to "myservice" in Component Services. Plz refer: this

    One thing strange: there is nothing in the IIS_IUSRS group. Is that normal? Cause I remember at least two users (DefaultAppPool & Classic .NET AppPool). Plz tell me how to fix the access problem. I assume it's a permission problem of IIS but I can't solve it. Thank you.

    Read the article

  • Postfix mail server: can't connect via POP/IMAP

    - by MelkerOVan
    I've followed this guide on setting up a mail server on my dedicated server. I've been able to send mails from the PHP application I'm using and from the Linux command line (using telnet, php, etc). The problem is that I cannot connect to the server via IMAP/POP, which I've set up using Courier. I've tried using Thunderbird but it complains that the username or password is wrong. I doubt it is the username/password, but I don't know how to troubleshoot this.

    Edit: Here are the messages in mail.log:

        Jan 9 22:43:38 mail authdaemond: received auth request, service=imap, authtype=login
        Jan 9 22:43:38 mail authdaemond: authmysql: trying this module
        Jan 9 22:43:38 mail authdaemond: SQL query: SELECT id, crypt, "", uid, gid, home, "", "", name, "" FROM users WHERE id = '[email protected]' AND (enabled=1)
        Jan 9 22:43:38 mail authdaemond: password matches successfully
        Jan 9 22:43:38 mail authdaemond: authmysql: sysusername=<null>, sysuserid=5000, sysgroupid=5000, homedir=/var/spool/mail/virtual, [email protected], fullname=peter, maildir=<null>, quota=<null>, options=<null>
        Jan 9 22:43:38 mail authdaemond: authmysql: clearpasswd=<null>, passwd=6SrBcYq65l8QU
        Jan 9 22:43:38 mail authdaemond: Authenticated: sysusername=<null>, sysuserid=5000, sysgroupid=5000, homedir=/var/spool/mail/virtual, [email protected], fullname=peter, maildir=<null>, quota=<null>, options=<null>
        Jan 9 22:43:38 mail authdaemond: Authenticated: clearpasswd=peter, passwd=6SrBcYq65l8QU
        Jan 9 22:43:38 mail imapd: chdir Maildir: No such file or directory
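
    Editor's reading, not from the original post: the last log line shows authentication succeeding but imapd failing to chdir into a Maildir under the home directory it was handed, which usually means the maildir simply does not exist yet (or the maildir column should point at a per-user path). A sketch, assuming Courier's maildirmake is installed; the per-user path below is hypothetical and the uid/gid 5000 comes from the log lines above:

        #!/usr/bin/env bash
        # Hypothetical per-user maildir under the virtual mail root seen in the log;
        # adjust to whatever the users table's maildir column is meant to contain.
        MAILDIR=/var/spool/mail/virtual/example.com/peter/Maildir

        sudo maildirmake "$MAILDIR"
        sudo chown -R 5000:5000 "$(dirname "$MAILDIR")"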

    Read the article
