Search Results

Search found 11268 results on 451 pages for 'shweta simply'.

Page 384 of 451

  • connect() failed (111: Connection refused) while connecting to upstream

    - by Burning the Codeigniter
    I'm experiencing 502 gateway errors when accessing a PHP file in a directory (http://domain.com/dev/index.php), the log simply says this: 2011/09/30 23:47:54 [error] 31160#0: *35 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xx.xx.xx, server: domain.com, request: "GET /dev/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "domain.com" I've never experienced this before; how do I fix this type of 502 gateway error? This is the nginx.conf: user www-data; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } #mail { # # See sample authentication script at: # # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript # # # auth_http localhost/auth.php; # # pop3_capabilities "TOP" "USER"; # # imap_capabilities "IMAP4rev1" "UIDPLUS"; # # server { # listen localhost:110; # protocol pop3; # proxy on; # } # # server { # listen localhost:143; # protocol imap; # proxy on; # } #}
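
    A 502 with "connect() failed (111: Connection refused)" on fastcgi://127.0.0.1:9000 usually means nothing is listening on that port, i.e. the PHP FastCGI backend (php-fpm or php-cgi) is stopped or is listening somewhere else, such as a unix socket. A quick check along those lines, assuming the php5-fpm package on Ubuntu (service name and paths may differ on your box):
      # is anything listening on 127.0.0.1:9000?
      netstat -tlnp | grep :9000
      # where is PHP-FPM configured to listen?
      grep -r "^listen" /etc/php5/fpm/pool.d/
      # (re)start the backend
      sudo service php5-fpm restart
    If PHP-FPM turns out to listen on a unix socket instead, point fastcgi_pass in the nginx vhost at that socket rather than at 127.0.0.1:9000.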

    Read the article

  • Prevent Windows 7 User Accounts from accessing files in other User Accounts

    - by Mantis
    I'm trying to set up another User Account on my Windows 7 Professional laptop for use by another person. I do not want that person to have access to any of the files in my User Account on the same machine. This machine has a single hard disk formatted with NTFS. User accounts data is stored in the default location, C:\Users. I use the computer with a Standard Account (not an Administrator). Let's call my user account "User A." I have given the new user a Standard Account. Let's call the new user's account "User B." To be clear, I want User B to have the ability to log in to her account, to use the computer, but to be unable to access any of the files in the User A account on the same machine. Currently, User B cannot use Windows Explorer to navigate to the location C:\Users\User A. However, by simply using Windows Search, User B can easily find and open documents saved in C:\Users\User A\Documents. After opening a document, that document's full path appears in "Recent Places" in Windows Explorer, and the document appears as a file that can be opened using the "Recent" feature in Word 2010. This is not the desired behavior. User B should not have the ability to see any documents using Windows Search or anything else. I have attempted to set permissions using the following procedure. Using an Administrator account, navigate to C:\Users and right-click on the "User A" folder. Select "Properties." In the "User A Properties" window that appears, click the "Security" tab. Click the "Edit..." button to change permissions. IN the "Permissions for User B" window that appears, under "Group or User Names," select User B. Under "Permissions for User B", check the box under the "Deny" column for the "Full Control" row. Ensure that the "Deny" box is automatically checked for all the other rows, and then click "OK." The system should then begin working. The process could take several minutes. When I followed this procedure, I received several "Access Denied" errors, suggesting that the system was unable to set the permissions as I had directed. I think this might be one of the reasons why User B is still able to access files in User A's account folders. Is there any other way I could accomplish my goal here? Thank you.
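
    As a sketch of the same deny rule applied from an elevated command prompt (account names are placeholders), icacls reports per file which entries it could and could not set, which makes the "Access Denied" failures easier to pin down:
      icacls "C:\Users\User A" /deny "User B":(OI)(CI)F /T
    Here (OI)(CI)F means full control, inherited by subfolders and files, and /T applies the entry to everything already under the folder. Files User B has already opened may also remain visible through Windows Search until the index is rebuilt for that location.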

    Read the article

  • How to change radius of rounded rectangle in Photoshop

    - by MattDiPasquale
    At this point, I'm thinking of just using CSS 3, esp. since I'm a programmer, but I'd like to do this with Photoshop because I think it's nicer since I'm working with images anyway, among other reasons... Before I move on, my first question is: Is there a place like SuperUser for designers (or for Photoshop-like or questions)? What I Want: I want the icons on http://www.mattdipasquale.com/ to look like those on http://about.me/mattdipasquale. About.me has an outdated Twitter icon and does not have icons for GitHub, StackOverflow, etc. So, although I like the look of their icons, I want to be able to create these icons myself instead of using their versions. What I Have: I have different iphone icons, like the Facebook iPhone icon, Twitter iPhone icon, etc., that I got from iTunes, using Firebug to find the URL of the background image. I opened them up in Photoshop and pressed option + command + i to reduce the image size to 32px x 32px with Bicubic Sharper (best for reduction). I now have a square icon layer. Closing the Gap: In addition to the icon layer, I want to have a clipping-mask layer that will apply the 5px rounded-corners, 1px stroke, and 1px bevel. (Note: I just want to apply effects to the edges of the icon because the gloss and other effects are already encoded in the iTunes image. Also, I'm just guessing about the pixel values, but I want it to look good, like the icons on about.me.) What settings should I use for the blend options to make the icons look good, like iphone icons or those used by about.me? Why a Clipping Mask? The reason I want to use a clipping mask is that I want ease of reproducibility. I want to be able to apply the same styling to other square icon layers by simply replacing the square icon layer and then saving for web. If there is a better way to achieve such ease of reproducibility, please suggest it. I've seen Photoshop iPhone icon templates, but I couldn't figure out how to use them with my own images. Thanks! Matt

    Read the article

  • No network upsets gnome

    - by Darren Cook
    An issue that has been bothering me for over a year now. My notebook, running ubuntu 10.04, is almost all the time using a wired connection, with static IP address. And a remote DNS server. Network is configured with entries in /etc/network/interfaces and /etc/resolv.conf, rather than whatever the gnome UI tool was (*) But if I'm out, or simply unplug the network cable, a few things get weird. Specifically the gnome-panel stops working - it is still there, but isn't updating. And opening a nautilus window (e.g. to look at files on the local disk) has huge time-outs. By that I mean it will not open the window for something like 30 or 60 seconds; but when it does finally open it I can see the files and it is perfectly usable. Everything else works fine, alt-tab between windows, etc. I use the commandline to find the pid of gnome-panel, kill it, wait a couple of seconds, and it opens up a fresh panel which is normally usable. (Something like 10 minutes later it will have locked/crashed again; the same for the nautilus windows.) I'm guessing this is a DNS issue? Would setting up a local DNS server help? Guess number 2 was related to having a file server mount (samba, though running on another linux box), and symbolic links to files and directories on that file server on my desktop. My question is a bit vague... Does anyone recognize these symptoms, and have a suggestion? Or do you have some troubleshooting suggestions for narrowing down the problem? My /etc/hosts: 127.0.0.1 localhost 127.0.1.1 myhost # The following lines are desirable for IPv6 capable hosts ::1 localhost ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters ff02::3 ip6-allhosts 127.0.0.1 testsite.local #Other test website URLs here UPDATE: Some timings to open some desktop folder icons. This is after pulling out the network cable. A sub-directory of the desktop took 23 secs to open up. Content appears immediately (just 8 files, it has no further subdirectories). The home directory icon took 12 seconds to open up, but then took about 30 seconds for the files to appear. I closed it and tried again. This time it took 18 seconds to open up, but then 70 seconds before anything appeared. *: I couldn't work out how to use the gnome network tool for my needs, which include 3-4 static IPs for testing virtual hosts locally.
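
    If the hangs are DNS lookups timing out against the unreachable remote nameserver, making the resolver fail fast is a cheap test; a minimal sketch for /etc/resolv.conf (values are illustrative):
      nameserver 192.168.0.1          # your usual remote DNS server
      options timeout:1 attempts:1    # give up quickly when it is unreachable
    If that makes the panel and nautilus responsive while unplugged, a small local caching resolver (e.g. dnsmasq), a hosts entry for the file server, and unmounting the samba share before going offline would be the longer-term fixes.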

    Read the article

  • Ubuntu - Ruby Daemon script creates two processes - sh and ruby - PID file points at sh, not ruby

    - by Jonathan Scoles
    The PID file for a ruby process I have running as a daemon is getting the wrong PID. It appears that running /etc/init.d/sinatra start creates two processes - sh and ruby, and the PID that ends up in the PID file is that of the sh process. This means that when I then run /etc/init.d/sinatra stop or /etc/init.d/sinatra restart, it is killing sh and leaving the ruby process still running. I'd like to know a) why is my script launching two processes - sh and ruby, and not just ruby, and b) how do I fix it to just launch ruby? Details of the setup: I have a small Sinatra server set up on an Ubuntu server, running as a daemon. It is set to automatically run a script named sinatra in /etc/init.d at server startup, which launches a control script, control.rb, which then runs a ruby daemon command to start the server. The script is run under the 'sinatrauser' account, which has permissions for the directories the script needs. contents of /etc/init.d/sinatra #!/bin/bash # sinatra Startup script for Sinatra server. sudo -u sinatrauser ruby /var/www/sinatra/control.rb $1 RETVAL=$? exit $RETVAL To install this script, I simply copied it to /etc/init.d/ and ran sudo update-rc.d sinatra defaults contents of /var/www/sinatra/control.rb require 'rubygems' require 'daemons' pwd = Dir.pwd Daemons.run_proc('sinatraserver.rb', {:dir_mode => :normal, :dir => "/opt/pids/sinatra"}) do Dir.chdir(pwd) exec 'ruby /var/www/sinatra/sintraserver.rb >> /var/log/sinatra/sinatraOutput.log 2>&1' end portion of output from ps -A 6967 ? 00:00:00 apache2 10181 ? 00:00:00 sh <--- PID file gets this PID 10182 ? 00:00:02 ruby <--- Actual ruby process running Sinatra 12172 ? 00:00:00 sshd The PID file gets created in /opt/pids/sinatra/sinatraserver.rb.pid, and always contains the PID of the sh instance, which is always one less than the PID of the ruby process. EDIT: I tried micke's solution, but it had no effect on the behavior I am seeing. This is the output from ps -A f. This output looks the same whether I use sudo -u sinatrauser ... or su sinatrauser -c ... in the service start script in /etc/init.d. 1146 ? S 0:00 sh -c ruby /var/www/sinatra/sinatraserver.rb >> /var/log/sinatra/sinatraOutput.log 2>&1 1147 ? S 0:00 \_ ruby /var/www/sinatra/sinatraserver.rb
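
    The extra sh comes from passing exec a single command string: Ruby hands the string to /bin/sh (which is what the "sh -c ruby ..." line in ps shows), so the daemon's PID file records the shell's PID. Passing the program and its arguments separately skips the shell; the redirection then has to be done in Ruby. A sketch of control.rb along those lines:
      Daemons.run_proc('sinatraserver.rb', {:dir_mode => :normal, :dir => "/opt/pids/sinatra"}) do
        Dir.chdir(pwd)
        $stdout.reopen('/var/log/sinatra/sinatraOutput.log', 'a')  # replaces the >> redirection
        $stderr.reopen($stdout)
        exec 'ruby', '/var/www/sinatra/sinatraserver.rb'           # multi-argument exec: no intermediate sh
      end
    With no shell in between, the PID written to /opt/pids/sinatra/sinatraserver.rb.pid should be the ruby process itself, so stop and restart kill the right thing.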

    Read the article

  • Windows Question: RunOnce/Second Boot Issues

    - by Greg
    I am attempting to create a Windows XP SP3 image that will run my application on Second Boot. Here is the intended workflow. 1) Run Image Prep Utility (I wrote) on windows to add my runonce entries and clean a few things up. 2) Reboot to ghost, make image file. 3) Package into my ISO and distribute. 4) System will be imaged by user. 5) On first boot, I have about 5 things that run, one of which includes a driver updater (I wrote) for my own specific devices. 6) One of the entries inside of HKCU/../runonce is a reg file, which adds another key to HKLM/../runonce. This is how second boot is acquired. 7) As a result of the driver updater, user is prompted to reboot. 8) My application is then launched from HKLM/../runonce on second boot. This workflow works perfectly, except for a select few legacy systems that contain devices that cause the add hardware wizard to pop up. When the add hardware wizard pops up is when I begin to see problems. It's important to note, that if I manually inspect the registry after the add hardware wizard pops up, it appears as I would expect, with all the first boot scripts having run, and it's sitting in a state I would correctly expect it to be in for a second boot scenario. The problem comes when I click next on the add hardware wizard, it seems to re-run the single entry I've added, and re-executes the runonce scripts. (only one script now as it's already executed and cleared out the initial entries). This causes my application to open as if it were a second boot, only when next is clicked on the add hardware wizard. If I click cancel, and reboot, then it also works as expected. I don't care as much about other solutions, because I could design a system that doesn't fully rely on Microsoft's registry. I simply can't find any information as to WHY this is happening. I believe this is some type of Microsoft issue that's presenting itself as a result of an overstretched image that's expected to support too many legacy platforms, but any help that can be provided would be appreciated. Thanks,
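
    For reference, the step 6 hand-off is typically just a .reg file merged by the first-boot RunOnce; a minimal sketch with hypothetical value and path names:
      Windows Registry Editor Version 5.00

      [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce]
      "LaunchMyApp"="C:\\MyApp\\MyApp.exe /secondboot"
    As for the wizard itself: if memory serves, finishing a device installation can cause Windows to process RunOnce entries again (the same mechanism as runonce.exe /r), which would explain why clicking Next on the Add Hardware wizard fires your remaining entry early. Having the driver updater itself write or re-arm the HKLM RunOnce value just before it requests the reboot, rather than relying on the first-boot .reg merge, may sidestep that.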

    Read the article

  • GRUB 2 freezing at OS selection screen, what could be the cause?

    - by Michael Kjörling
    Mains power is somewhat unreliable where I live, so every now and then, the computer gets rebooted when the PSU can't maintain proper voltage during a brown-out or momentary black-out. It's happened a few times recently that when power is restored, the BIOS POST completes successfully, GRUB starts to load and then freezes. I've seen this at the Welcome to GRUB! message, but it seems to happen more often just past the switch to the graphical OS list. At this point, the computer will not respond to anything (arrow keys, control commands, Ctrl+Alt+Del, ...) - it simply sits there displaying this image, seemingly doing nothing more. At that point, turning the computer off using the power button and letting it sit for a while (cooling down?) has allowed it to boot successfully. Turning the computer off and immediately back on seems to give the same result (successful POST then freeze in GRUB). This behavior began recently, although does not seem to be directly correlated with my hard disk woes (although it may be relevant that GRUB resides on that physical disk, I don't know). Once the computer has booted, it runs without a hitch. I know that a "proper" solution would be to invest in a UPS, but what might be causing behavior like this? I was thinking in terms of perhaps the CPU shutting down as a thermal control measure, but if that was the cause then wouldn't I see similar freezes during use (which I do not)? What else could cause freezes apparently closely but not perfectly related to the BIOS handover from POST to OS bootloader? The BIOS settings are to reset to previous power status after a power loss. Since the PC in question is almost always turned on, this means restore to full power status. I have no expansion cards installed that make any BIOS extensions known by screen output during the boot process, at least, but I do have a few expansion cards installed. Haven't made any changes in that regard in a long time, now. I haven't touched GRUB itself for a long time, whether configuration or binaries, so I don't think that's the problem. Also, it doesn't really make sense that a bug in GRUB would manifest itself only once in a blue moon but significantly more often after a power failure.

    Read the article

  • How can I troubleshoot a "Hardware Malfunction" blue screen?

    - by AaronSieb
    My computer has suddenly started crashing to a blue screen with the following text: hardware malfunction call your hardware vendor for support *the system has halted* The crash occurs randomly during normal use. I have thus far always been able to reproduce it by transferring the contents of a large folder... But I'm not sure if this is caused by the file transfer, or simply because the transfer takes long enough for something else to trigger it. A bit about my hardware I have a dual core Intel CPU and an Asus motherboard. Video card is by nVidia, and connects via PCIe. My hard drives are in pairs, and connect via SATA to a RAID controller on the motherboard. They are configured to use a RAID0 configuration. What I've tried so far There is nothing in the Windows Event Log. WhoCrashed was unable to find any crash records. ScanDisk runs to completion (it launches prior to Windows load) and reports no errors. MemTest reports no errors (to 200% coverage). System temperatures are in the range of 40 to 50 degrees Celsius, with video card temperatures in the range of 60 to 80 degrees Celsius. I have stripped the system down to a minimal configuration (hard drive, video card, one memory module, motherboard, CPU, power supply). The problem still occurs. However, this has allowed me to rule out a few components: It is not the video card because the problem still occurred after replacing the video card with another one I had on hand. It is not the hard drive or anything software related because the problem occurred after a fresh installation of Windows on a replacement hard drive. It is not the hard drive cables because I replaced those with new ones and still had the problem. It is not the power supply because the problem still occurred after replacing the power supply with another one I had on hand. It is probably not the memory because I've tried three different memory modules in three different memory slots and was still able to replicate the issue. Is there anything I can do to confirm what's causing the issue? At the moment it seems as though it must be either the motherboard or CPU, but those are both difficult components to replace... In addition, both components are relatively new (two to three years old). I will gladly edit in any additional information I can get my hands on, and/or focus the question as I can find more details...

    Read the article

  • Data loss through permissions change?

    - by charliehorse55
    I seem to have deleted some files on my media drive, simply by changing the permissions. The Story I have many operating systems installed on my computer, and constantly switch between them. I bought a 1TB HD and formatted it as HFS+ (not journaled). It worked well between OSX and all of my linux installations while having much better metadata support than NTFS. I never synced the UIDs for my operating systems so the permissions were always doing funny things. Yesterday I tried to fix the permissions by first changing the UIDs of the other operating systems to match OSX, and then changing the file ownership of all files on the drive to match OSX. About 50% of the files on the drive were originally owned by OSX, the other half were owned by the various linux installations. I started to try and change the file permissions for the folders, and that's when it went south. The Commands These commands were run recursively on the one section of the drive. sudo chflags nouchg sudo chflags -N sudo chown myusername sudo chmod 666 sudo chgrp staff The Bad Sometime during the execution of these commands, all of the files belonging to OSX were deleted. If a folder had linux based files it would remain intact but any folder containing exclusively OSX files was erased. If a folder containing linux files also contained a subfolder with only OSX files, the sub folder would remain but is inaccesible and displays a file size of 0 bytes. Luckily these commands were only run on the videos folder, I also have a music folder with the same issue but I did not execute any of these commands on it. Effectively I have examples of the file permissions for all 3 states - the linux files before and after, and the OSX files before. OSX File Before -rw-r--r--@ 1 charliehorse 1000 3634241 15 Nov 2008 /path/to/file com.apple.FinderInfo 32 Linux File before: -rw-r--r--@ 1 charliehorse 1000 5321776 20 Sep 2002 /path/to/file/ com.apple.FinderInfo 32 Linux File After (Read only): (Different file, but I believe the same permissions originally) -rw-rw-rw-@ 1 charliehorse staff 366982610 17 Jun 2008 /path/to/file com.apple.FinderInfo 32 These files still exist so if there are any other commands to run on them to determine what has happened here, I can do that. EDIT Running ls on one of the "empty" deleted OSX folders yields this: ls: .: Permission denied ls: ..: Permission denied ls: subdirA: Permission denied ls: subdirB: Permission denied ls: subdirC: Permission denied ls: subdirD: Permission denied I believe my files might still be there, but the permissions are screwed.

    Read the article

  • Tuning Linux IP routing parameters -- secret_interval and tcp_mem

    - by Jeff Atwood
    We had a little failover problem with one of our HAProxy VMs today. When we dug into it, we found this: Jan 26 07:41:45 haproxy2 kernel: [226818.070059] __ratelimit: 10 callbacks suppressed Jan 26 07:41:45 haproxy2 kernel: [226818.070064] Out of socket memory Jan 26 07:41:47 haproxy2 kernel: [226819.560048] Out of socket memory Jan 26 07:41:49 haproxy2 kernel: [226822.030044] Out of socket memory Which, per this link, apparently has to do with low default settings for net.ipv4.tcp_mem. So we increased them by 4x from their defaults (this is Ubuntu Server, not sure if the Linux flavor matters): current values are: 45984 61312 91968 new values are: 183936 245248 367872 After that, we started seeing a bizarre error message: Jan 26 08:18:49 haproxy1 kernel: [ 2291.579726] Route hash chain too long! Jan 26 08:18:49 haproxy1 kernel: [ 2291.579732] Adjust your secret_interval! Shh.. it's a secret!! This apparently has to do with /proc/sys/net/ipv4/route/secret_interval which defaults to 600 and controls periodic flushing of the route cache The secret_interval instructs the kernel how often to blow away ALL route hash entries regardless of how new/old they are. In our environment this is generally bad. The CPU will be busy rebuilding thousands of entries per second every time the cache is cleared. However we set this to run once a day to keep memory leaks at bay (though we've never had one). While we are happy to reduce this, it seems odd to recommend dropping the entire route cache at regular intervals, rather than simply pushing old values out of the route cache faster. After some investigation, we found /proc/sys/net/ipv4/route/gc_elasticity which seems to be a better option for keeping the route table size in check: gc_elasticity can best be described as the average bucket depth the kernel will accept before it starts expiring route hash entries. This will help maintain the upper limit of active routes. We adjusted elasticity from 8 to 4, in the hopes of the route cache pruning itself more aggressively. The secret_interval does not feel correct to us. But there are a bunch of settings and it's unclear which are really the right way to go here. /proc/sys/net/ipv4/route/gc_elasticity (8) /proc/sys/net/ipv4/route/gc_interval (60) /proc/sys/net/ipv4/route/gc_min_interval (0) /proc/sys/net/ipv4/route/gc_timeout (300) /proc/sys/net/ipv4/route/secret_interval (600) /proc/sys/net/ipv4/route/gc_thresh (?) rhash_entries (kernel parameter, default unknown?) We don't want to make the Linux routing worse, so we're kind of afraid to mess with some of these settings. Can anyone advise which routing parameters are best to tune, for a high traffic HAProxy instance?
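
    For experimenting, these can all be changed at runtime with sysctl and made persistent in /etc/sysctl.conf; a sketch using the values discussed above (tune to taste):
      # more aggressive route-cache pruning instead of periodic full flushes
      sysctl -w net.ipv4.route.gc_elasticity=4
      # the enlarged TCP memory limits (in pages)
      sysctl -w net.ipv4.tcp_mem="183936 245248 367872"
      # persist across reboots
      echo "net.ipv4.route.gc_elasticity = 4" >> /etc/sysctl.conf
      echo "net.ipv4.tcp_mem = 183936 245248 367872" >> /etc/sysctl.conf
    rhash_entries, by contrast, is a boot-time kernel parameter (rhash_entries=N on the kernel command line), not a sysctl.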

    Read the article

  • sftp and public keys

    - by Lizard
    I am trying to sftp into a server hosted by someone else. To make sure this worked I did the standard sftp [email protected]; I was prompted for the password and that worked fine. I am setting up a cron script to send a file once a week so have given them our public key which they claim to have added to their authorized_keys file. I now try sftp [email protected] again and I am still prompted for a password, but now the password doesn't work... Connecting to [email protected]... [email protected]'s password: Permission denied, please try again. [email protected]'s password: Permission denied, please try again. [email protected]'s password: Permission denied (publickey,password). Couldn't read packet: Connection reset by peer I did notice however that if I simply pressed enter (no password) it logged me in fine... So here are my questions: Is there a way to check what private key/public key pair my sftp connection is using? Is it possible to specify what key pair to use? If all is set up correctly (using the correct key pair and added to authorized files) why am I being asked to enter a blank password? Thanks for your help in advance! UPDATE I have just run sftp -vvv [email protected] .... debug1: Authentications that can continue: publickey,password debug3: start over, passed a different list publickey,password debug3: preferred gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering public key: /root/.ssh/id_rsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Server accepts key: pkalg ssh-rsa blen 277 debug2: input_userauth_pk_ok: SHA1 fp 45:1b:e7:b6:33:41:1c:bb:0f:e3:c1:0f:1b:b0:d5:e4:28:a3:3f:0e debug3: sign_and_send_pubkey debug1: read PEM private key done: type RSA debug1: Authentications that can continue: publickey,password debug1: Trying private key: /root/.ssh/id_dsa debug3: no such identity: /root/.ssh/id_dsa debug2: we did not send a packet, disable method debug3: authmethod_lookup password debug3: remaining preferred: ,password debug3: authmethod_is_enabled password debug1: Next authentication method: password It seems to suggest that it tries to use the public key... What am I missing?
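
    To see exactly which key the client offers, and to force it to offer only the one you handed over, the identity file can be pinned on the command line; a sketch assuming the pair is /root/.ssh/id_rsa:
      # offer only this key, with verbose output
      sftp -oIdentityFile=/root/.ssh/id_rsa -oIdentitiesOnly=yes -v [email protected]
      # fingerprint of the local public key, to compare against what was added server-side
      ssh-keygen -lf /root/.ssh/id_rsa.pub
    In the -vvv output, "Server accepts key" followed by a fall-back to password normally points at a server-side problem (the wrong key pasted into authorized_keys, or bad permissions on their ~/.ssh) rather than the client picking the wrong identity.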

    Read the article

  • getfacl command and Linux file permissions - getting 403 error when accessing Wordpress

    - by tommytwoeyes
    I'm configuring Wordpress for a friend, and I just screwed up the Wordpress directory permissions (I suspect) using setfacl. Webfaction doesn't allow sudo or allow me to change the directory group ownership using chown. Now it appears that something I did is causing the entire application to give me 403 errors when I try to access it. The current directory listing looks like this (I set the whole thing to 777 temporarily to try to recover access to it): drwxrwsr-x+ 6 myusername myusername 4096 Mar 2 07:07 ./ drwxr-xr-x 3 root root 4096 Feb 25 19:48 ../ -rwxrwxr-x+ 1 myusername myusername 286 Mar 2 06:33 gzip.php -rwxrwxr-x+ 1 myusername myusername 4831 Mar 4 20:02 .htaccess -rwxrwxr-x+ 1 myusername myusername 397 Feb 25 19:49 index.php -rw-rw-r--+ 1 myusername myusername 15606 Feb 25 19:49 license.txt -rw-rw-r--+ 1 myusername myusername 9200 Feb 25 19:49 readme.html drwxrwsr-x+ 6 myusername myusername 4096 Feb 25 19:49 .svn/ -rwxrwxr-x+ 1 myusername myusername 4337 Feb 25 19:49 wp-activate.php drwxr-xr-x+ 10 myusername myusername 4096 Mar 4 20:03 wp-admin/ -rwxrwxr-x+ 1 myusername myusername 40283 Feb 25 19:49 wp-app.php -rwxrwxr-x+ 1 myusername myusername 226 Feb 25 19:49 wp-atom.php -rwxrwxr-x+ 1 myusername myusername 274 Feb 25 19:49 wp-blog-header.php -rwxrwxr-x+ 1 myusername myusername 3931 Feb 25 19:49 wp-comments-post.php -rwxrwxr-x+ 1 myusername myusername 244 Feb 25 19:49 wp-commentsrss2.php -rwxrwxr-x+ 1 myusername myusername 3485 Feb 25 20:15 wp-config.php drwxr-xr-x+ 6 myusername myusername 4096 Feb 26 08:52 wp-content/ -rwxrwxr-x+ 1 myusername myusername 1255 Feb 25 19:49 wp-cron.php -rwxrwxr-x+ 1 myusername myusername 246 Feb 25 19:49 wp-feed.php drwxrwxr-x+ 9 myusername myusername 4096 Feb 25 19:49 wp-includes/ -rwxrwxr-x+ 1 myusername myusername 1997 Feb 25 19:49 wp-links-opml.php -rwxrwxr-x+ 1 myusername myusername 2453 Feb 25 19:49 wp-load.php -rwxrwxr-x+ 1 myusername myusername 27787 Feb 25 19:49 wp-login.php -rwxrwxr-x+ 1 myusername myusername 7774 Feb 25 19:49 wp-mail.php -rwxrwxr-x+ 1 myusername myusername 494 Feb 25 19:49 wp-pass.php -rwxrwxr-x+ 1 myusername myusername 224 Feb 25 19:49 wp-rdf.php -rwxrwxr-x+ 1 myusername myusername 334 Feb 25 19:49 wp-register.php -rwxrwxr-x+ 1 myusername myusername 226 Feb 25 19:49 wp-rss2.php -rwxrwxr-x+ 1 myusername myusername 224 Feb 25 19:49 wp-rss.php -rwxrwxr-x+ 1 myusername myusername 9655 Feb 25 19:49 wp-settings.php -rwxrwxr-x+ 1 myusername myusername 18644 Feb 25 19:49 wp-signup.php -rwxrwxr-x+ 1 myusername myusername 3702 Feb 25 19:49 wp-trackback.php -rwxrwxr-x+ 1 myusername myusername 3210 Feb 25 19:49 xmlrpc.php The getfacl output looks like this: # file: . # owner: myusername # group: myusername user::rwx group::r-x group:apache:rw- mask::rwx other::r-x I simply wanted to change the ownership to myusername:apache and the file permissions to 755. I have no idea how to fix the permissions now. Any help would be really appreciated! Thanks, Tom
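
    One way back to a known state without sudo is to strip the extended ACLs (the trailing + in the listing) and re-apply plain modes; a sketch run from inside the WordPress directory:
      # drop all extended ACL entries, keeping the ordinary owner/group/other bits
      setfacl -R -b .
      # conventional WordPress permissions
      find . -type d -exec chmod 755 {} \;
      find . -type f -exec chmod 644 {} \;
    After that, getfacl should show only user::, group:: and other:: lines, and any remaining 403 is more likely to come from .htaccess or the server configuration than from the filesystem.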

    Read the article

  • How to deny the web access to some files?

    - by Strae
    I need to do a slightly strange operation. First, I run on Debian, with apache2 (which 'runs' as user www-data). I have simple text files with a .txt or .ini extension, or whatever; it doesn't matter. These files are located in subfolders with a structure like this: www.example.com/folder1/car/foobar.txt www.example.com/folder1/cycle/foobar.txt www.example.com/folder1/fish/foobar.txt www.example.com/folder1/fruit/foobar.txt So the file name is always the same, ditto for the 'hierarchy'; only the name of the folder changes: /folder-name-static/folder-name-dynamic/file-name-static.txt What I need to do is (I think) relatively simple: I must be able to read those files from programs on the server (python or php, for example), but if I try to retrieve the file contents from a browser (entering the url www.example.com/folder1/car/foobar.txt, or via cURL, etc.) I must get a forbidden error, or whatever, but not access the file. It would also be nice if those files were 'hidden' even when accessed via FTP, or at least couldn't be downloaded (at least with the ftp root and user data I use). How can I do this? I found this online, to be put in the .htaccess file: <Files File.txt> Order allow,deny Deny from all </Files> It seems to work, but only if the file is in the web root (www.example.com/myfile.txt), and not in subfolders. Moreover, the folders at the second level (www.example.com/folder1/fruit/foobar.txt) will be dynamically created, and I would like to avoid having to change the .htaccess file from time to time. Is it possible to create a rule, something like that, that applies to all files with a given name located at www.example.com/folder-name-static/folder-name-dynamic/file-name-static.txt, where those parts are always the same and only that one part changes? EDIT: As Dave Drager said, I could simplify this by keeping those files outside the web accessible directory. But those directories will contain other files too - images, and stuff used by my users - so I'm simply trying not to have a duplicated folder structure, like: /var/www/vhosts/example.com/httpdocs/folder1/car/[other folders and files here] /var/www/vhosts/example.com/httpdocs/folder1/cycle/[other folders and files here] /var/www/vhosts/example.com/httpdocs/folder1/fish/[other folders and files here] //and, then for the 'secret' files: /folder1/data/car/foobar.txt /folder1/data/cycle/foobar.txt /folder1/data/fish/foobar.txt
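
    For what it's worth, a <Files> section in the .htaccess at the document root is inherited by every subdirectory, so a single rule keyed on the static file name should already cover the dynamically created folders; a sketch in Apache 2.2 syntax:
      # .htaccess in the web root
      <Files "foobar.txt">
          Order allow,deny
          Deny from all
      </Files>
    This blocks web access to any file named foobar.txt anywhere under the root, while PHP/Python scripts on the server can still read it from disk; the FTP side has to be handled separately with filesystem permissions or the FTP server's own configuration.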

    Read the article

  • Can an administration extraction of an MSI file perform registry and/or system wide changes?

    - by Wil
    I am always getting MSI (or setup EXEs which are basically MSI) files, and half the time they really do not need to be a setup. Microsoft is probably one of the biggest sources - almost every time I want to download a little source code sample, it has a MSI which if you install, only usually has three files. I would rather not do an install and add it to the add/remove programs and who knows what else (although I am sure it wouldn't be that bad) for the sake of three files! For this reason, I always use the following command: MSIEXEC /a <filename.msi> /qb TARGETDIR=<directory name> Now, this works fine and I have never had problems... However, I was just browsing some articles on Technet and found the following resource about administration installs. Apparently, MSI files can have two sequences: The AdminUISequence Table and the AdminExecuteSequence Table. I am not so worried about the AdminUISequence Table as it states that "The installer skips the actions in this table if the user interface level is set to basic UI or no UI", and this is what the /qb switch I use does. However, there is nothing similar written against AdminExecuteSequence Table. I realise that many people who write MSI files simply do it for a single end user and probably do not even touch the admin install options, however, is it possible for them to set items that can affect the system and if so, is there a fail proof way of extracting? I do already use 7-zip, however despite it being on the "supported" page, MSI support is lacking... well... completely sucks. It looses the file names and is generally useless. They have a bug which was closed with no reason/resolution over three years ago, and I opened a forum post and haven't had a reply. I would not really want to install any additional programs if I could help it and just want peoples opinions on this. Thanks. edit - Should also say, I run with UAC on, and I have never ever had a elevation prompt whilst performing the MSIEXEC operation, so I am guessing I have never had a system wide change, however, I am still curious as to if it is possible... As if changes (even just to the user) are possible I would do this locally/in a VM and never on a server or place of importance!

    Read the article

  • Distinction between an extranet and a DMZ

    - by Markus Yrjölä
    I've been reading about intranets, extranets, DMZs and VPNs now, and I'd need some clarifications related to extranets and DMZs. I understand that they are different types of concepts - extranet allows limited access to some intranet resources, while DMZ is a subnet that sits between the internet and intranet and hosts the external-faced services. However, I'd like to know what is their distinction in practice in a usual setup? The Wikipedia article on extranets says that extranets are similar to DMZs because they are used for the same purpose (providing access to some services/resources without exposing the whole intranet). The article also states that an extranet is a part of a VPN, and this TechNet article also states that extranet access is often implemented similarly to remote intranet access, e.g. with a VPN. The TechNet article also says that commonly the extranet is hosted inside the DMZ. This Pearson article says "Although [the DMZ] is technically located within the intranet, [it] can serve as the extranet as well". This is slightly confusing. Consider this scenario: A company has a B2C website hosted in the DMZ. The website can be accessed from anywhere, but requires user authentication. The underlying web app has its database inside the intranet and also interacts with some web services that are hosted inside the intranet (i.e. it accesses intranet resources). The way I see it, the website does effectively offer a restricted access to the intranet. But can it be considered an extranet? If we take the Wikipedia definition of an extranet literally - "An extranet is a computer network that allows controlled access from outside of an organization's intranet" - I think it can. Let's say that the above can't be considered an extranet. What if we change the scenario slightly, and say it's a B2B website, where the access is e.g. limited to connections coming from a specific business partner (by using site-to-site VPN, for example). In this case it surely is an extranet, right? If this is the case, then the difference between extranet services and any other services hosted in the DMZ is simply access restrictions?

    Read the article

  • Completely remove user account and create another with same name in Windows 7

    - by TeaJay
    Here's my question simply and then the details in case they help to get me an appropriate answer. Question: How can I completely and permanently delete a user account in Windows 7 so that I can create another one with the same user name without the computer name extension added, eg Jane Smith not Jane Smith.computer name? The details: I just did a clean install of Windows 7 Professional 32 bit. (My laptop crashed, I reinstalled Vista and restored backup files but things weren't working so I decided to just get Windows 7 since I had to start over anyway). I used Windows Easy Transfer to save just about everything, even customizing to include a user's appdata from Windows.old which was created when I reinstalled Vista -- not knowing that another windows.old file would be created with the installation of Windows 7. After installing Windows 7, I used Windows Easy Transfer to transfer the user file, appdata, to the new user account which I gave the same name (Jane Smith) in case having a different name would cause problems with reading files or something. Afterwards, I realized that I did not want ALL of that junk. So, I thought no problem, I'll just delete the user account I just created, nothing lost, and create another one this time transferring only the files I wanted (using the customize option in windows easy transfer). I wanted to keep the same user name, e.g. Jane Smith, so after I deleted the user account I checked the files, and I didn't see. It was late so I went to bed and the next morning I created a new user with that same name (Jane Smith). The files looked fine if I remember correctly. Meanwhile, I updated the computer and it restarted a couple times. As I was moving files to the "Jane Smith" user account file, things weren't working as they should. I was actually moving files to the deleted user account and that the current user account was named "Jane Smith.computer name" and that's where the files needed to go. I don't like this. It's too confusing. I want just "Jane Smith". How can I do this without just changing the user name (which doesn't change it in the file path etc)? I want the first one GONE. If I can't do this, is it a problem to create an account with another name and still transfer files to it without path or other problems? I hope this question makes sense and that someone can help me. Thank you in advance!
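
    A sketch of the usual clean-up from an elevated command prompt, assuming the half-deleted account is the problem (names are placeholders; the supported route is Control Panel > System > Advanced system settings > User Profiles > Settings, which removes both the folder and its registry entry):
      REM remove the account itself, if it still exists
      net user "Jane Smith" /delete
      REM remove the leftover profile folder
      rd /s /q "C:\Users\Jane Smith"
      REM list profile registry entries; delete any whose ProfileImagePath points at the old folder
      reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" /s
    Once no folder and no ProfileList entry reference C:\Users\Jane Smith, a newly created "Jane Smith" account should get that plain path again instead of Jane Smith.computername.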

    Read the article

  • apache mod_cache in v2.2 - enable cache based on url

    - by Janning
    We are using apache2.2 as a front-end server with application servers as reverse proxies behind apache. We are using mod_cache for some images and enabled it like this: <IfModule mod_disk_cache.c> CacheEnable disk / CacheRoot /var/cache/apache2/mod_disk_cache CacheIgnoreCacheControl On CacheMaxFileSize 2500000 CacheIgnoreURLSessionIdentifiers jsessionid CacheIgnoreHeaders Set-Cookie </IfModule> The image urls vary completely and have no common start pattern, but they all end in ".png". That's why we used the root in CacheEnable /. If not served from the cache, the request is forwarded to an application server via reverse proxy. So far so good, the cache is working fine. But I really only need to cache image requests ending in ".png". My above configuration still works, as my application server sends an appropriate Cache-Control: no-cache header on the way back to apache. So most pages send a no-cache header back and they don't get cached at all. My ".png" responses don't send a Cache-Control header, so apache only ends up caching urls ending in ".png". Fine. But when a new request enters apache, apache does not know that only .png requests should be considered, so every request checks a file on disk (recorded with strace -e trace=file -p pid): [pid 19063] open("/var/cache/apache2/mod_disk_cache/zK/q8/Kd/g6OIv@woJRC_ba_A.header", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) I don't want to have apache going to disk on every request, as the majority of requests are not cached at all. And we have up to 10,000 requests/s at peak time. Sometimes our read IO wait spikes. It is not getting really slow, but we are trying to tweak it for better performance. In apache 2.4 you can say: <LocationMatch .png$> CacheEnable disk </LocationMatch> This is not possible in 2.2, and as I see no backports for debian I am not going to upgrade. So I tried to tweak apache2.2 to follow my rules: <IfModule mod_disk_cache.c> SetEnvIf Request_URI "\.png$" image RequestHeader unset Cache-Control RequestHeader append Cache-Control no-cache env=!image CacheEnable disk / CacheRoot /var/cache/apache2/mod_disk_cache #CacheIgnoreCacheControl on CacheMaxFileSize 2500000 CacheIgnoreURLSessionIdentifiers jsessionid CacheIgnoreHeaders Set-Cookie </IfModule> The idea is to let apache decide whether to serve a request from the cache based on the Cache-Control header (CacheIgnoreCacheControl defaults to off). And, beforehand, simply set a RequestHeader based on the request. If it is not an image request, set a Cache-Control header so it bypasses the cache entirely. This does not work, I guess because of late processing of the RequestHeader directive, see https://httpd.apache.org/docs/2.2/mod/mod_headers.html#early I can't add early processing, as the "early" keyword can't be used together with a conditional "env=!image". I can't change the url requesting the images and I know there are of course other solutions. But I am only interested in configuring apache2.2 to reach my goal. Does anybody have an idea how to achieve my goal?

    Read the article

  • Getting Classic ASP to work in .js files under IIS 7

    - by Abdullah Ahmed
    I am moving a clients classic asp webapp to a new IIS7 based server. The site contains some .js files which have javascript but also classic asp in <% % tags which contains a bunch of conditional statements designed to spit out pieces of javascript based on session state variables. Here's a brief example of what the file could be like.... var arrHOFFSET = -1; var arrLeft ="<"; var arrRight = ">"; <% If ((Session("dashInv") = "True") And ((Session("systemLevelStaff") = "4") Or (Session("systemLevelCompany") = "4"))) Then %> addMainItem("/MgmtTools/WelcomeInventory.asp?wherefrom=salesMan","",81,"center","","",0,0,"","","","",""); <% Else %> <% If (Session("dashInv") = "False") And ((Session("systemLevelStaff") = "4") Or (Session("systemLevelCompany") = "4")) Then %> <% Else %> addMainItem("/calendar/welcome.asp","",81,"center","","",0,0,"","","","",""); <% End If %> <% End If %> defineSubmenuProperties(135,"center","center",-3,0,"","","","","","",""); Currently this file (named custom.js for example) will start throwing js errors, because the server doesnt seem to recognize the asp code in it and therefore does not parse it. I know I need to somehow specify that a .js file should also be treated like an .asp file and run through parsing it. However I am not sure how to go about doing this. Here is what I've tried so far... Under the Server node in IIS under HANDLER MAPPINGS I created a new Script Map with the following settings. Request Path: *.js Executable: C:\Windows\System32\inetsrv\asp.dll Name: ASPClassicInJSFiles Mapping: Invoke Handler only if request is mapped to : File Verbs: All verbs Access: Script I also created a similar handler under the site node itself. Under MIME Types .js is defined as application/x-javascript None of these work. If I simply rename the file to have .asp extension then things work, however this app is poorly coded and has literally 100's of files with the .js files included in them under various names and locations, so rename, search and replace is the last option I have.

    Read the article

  • Using Amazon S3 for multiple remote data site uploads, securely

    - by Aitch
    I've been playing about with Amazon S3 a little for the first time and like what I see for various reasons relating to my potential use case. We have multiple (online) remote server boxes harvesting sensor data that is regularly uploaded every hour or so (rsync'ed) to a VPS server. The number of remote server boxes is growing regularly and forecast to keep growing (hundreds). The servers are geographically dispersed. The servers are also automatically built, therefore generic with standard tools and not bespoke per location. The data is many hundreds of files per day. I want to avoid a situation where I need to provision more VPS storage, or additional servers every time we hit the VPS capacity limit, after every N server deployments, whatever N might be. The remote servers can never be considered fully secure due to us not knowing what might happen to them when we are not looking. Our current solution is a bit naive and simply restricts inbound rsync only over ssh to known mac address directories and a known public key. There are plenty of holes to pick in this, I know. Let's say I write or use a script like s3cmd/s3sync to potentially push up the files. Would I need to manage hundreds of access keys and have each server customized to include this (do-able, but key management becomes nightmarish?) Could I restrict inbound connections somehow (eg by mac address), or just allow write-only to any client that was running the script? (I could deal with a flood of data if someone got into a system?) Having a bucket per remote machine does not seem feasible due to bucket limits? I don't think I want to use a single common key, as if one machine is breached then, potentially, a malicious hacker could get access to the filestore key and start deleting for all clients, correct? I hope my inexperience has not blinded me to some other solution that might be suggested! I've read lots of examples of people using S3 for backup, but can't really find anything about this sort of data collection, unless my google terminology is wrong... I've written more than I should here, perhaps it can be summarised thus: In a perfect world I just want to have one of our techs install a new remote server into a location and it automagically starts sending files home with little or no intervention, and minimises risk? Pipedream or feasible? TIA, Aitch
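
    One pattern that avoids a single shared key is to create one IAM user per box at build time and attach a policy that only allows uploads under that box's own prefix; a sketch of such a policy, with the bucket and prefix names as placeholders:
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::sensor-data-bucket/box-0011223344/*"
          }
        ]
      }
    A compromised box can then only write into its own prefix - it cannot list, read or delete what the other clients have uploaded - and revoking that one box is just a matter of deleting its access key.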

    Read the article

  • Windows 7 remains powered on when restarting

    - by BombDefused
    I'm running Windows 7 x64 on an MSI P67A-GD53 motherboard, in an Antec P280 Super Midi Tower case with a Corsair 650W PSU. I've just installed a second instance of Windows 7 x64 on a separate disk (this is to keep my games separate from my work OS). The problem is that it appears now that I cannot restart from either instance of Windows 7. The shut down and sleep commands work as expected. When I try to restart, the shutdown happens but the system never reboots. Everything remains powered on, until I hold down the power button to force the power off. I think (but am not 100% sure) this has only started since I installed the second OS, and am assuming this has something to do with the motherboard needing to know which OS to run up again? Some other forums I've read suggest that the PSU has a major role in restart and could be at fault. Changing the boot order of the disks in the BIOS does not change anything. Any suggestions gratefully received! Update: I now have a reproducible issue: I think the secondary OS install may have been a red herring. It was when Windows tried to reboot during the install that I noticed the issue. After playing around with installing drivers, and rebooting many many times, I have found that it is the OC genie setting on the MSI motherboard that seems to trigger the problem. This makes sense as I only started using the OC genie feature a couple of weeks ago, and probably hadn't used restart in that time. However... simply turning off OC genie does not make the issue go away. I have to turn off OC genie, shut down, start up and enter the BIOS, go to the "Save and Exit" menu, choose "Restore Defaults", and answer yes to "Load optimized defaults", which resets things and clears the problem. Now when the PC boots into Windows, I can restart as normal (and from the OS on either HDD). I only know how to control the issue, and still don't know the root cause. I'd like to be able to use the OC genie function, if anyone can suggest why I'm seeing this problem. Could it be that I'm drawing too much power when using the OC feature?

    Read the article

  • Is dual-booting an OS more or less secure than running a virtual machine?

    - by Mark
    I run two operating systems on two separate disk partitions on the same physical machine (a modern MacBook Pro). In order to isolate them from each other, I've taken the following steps: Configured /etc/fstab with ro,noauto (read-only, no auto-mount) Fully encrypted each partition with a separate encryption key (committed to memory) Let's assume that a virus infects my first partition unbeknownst to me. I log out of the first partition (which encrypts the volume), and then turn off the machine to clear the RAM. I then un-encrypt and boot into the second partition. Can I be reasonably confident that the virus has not / cannot infect both partitions, or am I playing with fire here? I realize that MBPs don't ship with a TPM, so a boot-loader infection going unnoticed is still a theoretical possibility. However, this risk seems about equal to the risk of the VMWare/VirtualBox Hypervisor being exploited when running a guest OS, especially since the MBP line uses UEFI instead of BIOS. This leads to my question: is the dual-partitioning approach outlined above more or less secure than using a Virtual Machine for isolation of services? Would that change if my computer had a TPM installed? Background: Note that I am of course taking all the usual additional precautions, such as checking for OS software updates daily, not logging in as an Admin user unless absolutely necessary, running real-time antivirus programs on both partitions, running a host-based firewall, monitoring outgoing network connections, etc. My question is really a public check to see if I'm overlooking anything here and try to figure out if my dual-boot scheme actually is more secure than the Virtual Machine route. Most importantly, I'm just looking to learn more about security issues. EDIT #1: As pointed out in the comments, the scenario is a bit on the paranoid side for my particular use-case. But think about people who may be in corporate or government settings and are considering using a Virtual Machine to run services or applications that are considered "high risk". Are they better off using a VM or a dual-boot scenario as I outlined? An answer that effectively weighs any pros/cons to that trade-off is what I'm really looking for in an answer to this post. EDIT #2: This question was partially fueled by debate about whether a Virtual Machine actually protects a host OS at all. Personally, I think it does, but consider this quote from Theo de Raadt on the OpenBSD mailing list: x86 virtualization is about basically placing another nearly full kernel, full of new bugs, on top of a nasty x86 architecture which barely has correct page protection. Then running your operating system on the other side of this brand new pile of shit. You are absolutely deluded, if not stupid, if you think that a worldwide collection of software engineers who can't write operating systems or applications without security holes, can then turn around and suddenly write virtualization layers without security holes. -http://kerneltrap.org/OpenBSD/Virtualization_Security By quoting Theo's argument, I'm not endorsing it. I'm simply pointing out that there are multiple perspectives here, so I'm trying to find out more about the issue.

    Read the article

  • Cannot access a very specific site from my router

    - by DJDarkViper
    This is a problem for me because this site is important to me. It's MY website. And sadly my email is hosted on my site (which I cant access either) When I try to access my website when connected to my Linksys E3000 router, these days it simply just doesn't go through. When I ping it, its all Request Timed Out, and when I tracert C:\Users\Kyle>tracert blackjaguarstudios.com Tracing route to blackjaguarstudios.com [199.188.204.228] over a maximum of 30 hops: 1 <1 ms <1 ms <1 ms CISCO26565 [192.168.1.1] 2 16 ms 15 ms 11 ms 11.4.64.1 3 11 ms 9 ms 11 ms rd1cs-ge1-2-1.ok.shawcable.net [64.59.169.2] 4 20 ms 21 ms 22 ms 66.163.76.98 5 37 ms 36 ms 35 ms rc1nr-tge0-9-2-0.wp.shawcable.net [66.163.77.54] 6 112 ms 84 ms 85 ms rc2ch-pos9-0.il.shawcable.net [66.163.76.174] 7 86 ms 89 ms 90 ms rc4as-ge12-0-0.vx.shawcable.net [66.163.64.46] 8 90 ms 84 ms 85 ms eqix.xe-3-3-0.cr2.iad1.us.nlayer.net [206.223.115.61] 9 97 ms 97 ms 99 ms xe-3-3-0.cr1.atl1.us.nlayer.net [69.22.142.105] 10 128 ms 128 ms 126 ms ae1-40g.ar1.atl1.us.nlayer.net [69.31.135.130] 11 101 ms 97 ms 96 ms as16626.xe-2-0-5-102.ar1.atl1.us.nlayer.net [69.31.135.46] 12 100 ms 97 ms 197 ms 6509-sc1.abstractdns.com [207.210.114.166] 13 * * * Request timed out. 14 * * * Request timed out. 15 * * * Request timed out. 16 * * * Request timed out. 17 * * * Request timed out. 18 * * * Request timed out. 19 * * * Request timed out. 20 * * * Request timed out. 21 * * * Request timed out. 22 * * * Request timed out. 23 * * * Request timed out. 24 * * * Request timed out. 25 * * * Request timed out. 26 * * * Request timed out. 27 * * * Request timed out. 28 * * * Request timed out. 29 * * * Request timed out. 30 * * * Request timed out. Trace complete. C:\Users\Kyle> SHAW Cable being my ISP. Figuring this was all something to do with some setting I made on the router, I reset the thing back to factory defaults. Nope. So I'm at a bit of a loss what to do here, as NO device (Computers, Laptops, Tablets, Phones, PS3/ 360, etc) can access my site or its features, so it's not just my computer either. But every other site is just fine. When I connect to my neighbors router, the site comes up just fine. And shes with SHAW as well. What should I do?!

    Read the article

  • Persuading openldap to work with SSL on Ubuntu with cn=config

    - by Roger
    I simply cannot get this (TLS connection to openldap) to work and would appreciate some assistance. I have a working openldap server on ubuntu 10.04 LTS, it is configured to use cn=config and most of the info I can find for TLS seems to use the older slapd.conf file :-( I've been largely following the instructions here https://help.ubuntu.com/10.04/serverguide/C/openldap-server.html plus stuff I've read here and elsewhere - which of course could be part of the problem as I don't totally understand all of this yet! I have created an ssl.ldif file as follows; dn:cn=config add: olcTLSCipherSuite olcTLSCipherSuite: TLSV1+RSA:!NULL add: olcTLSCRLCheck olcTLSCRLCheck: none add: olcTLSVerifyClient olcTLSVerifyClient: never add: olcTLSCACertificateFile olcTLSCACertificateFile: /etc/ssl/certs/ldap_cacert.pem add: olcTLSCertificateFile olcTLSCertificateFile: /etc/ssl/certs/my.domain.com_slapd_cert.pem add: olcTLSCertificateKeyFile olcTLSCertificateKeyFile: /etc/ssl/private/my.domain.com_slapd_key.pem and I import it using the following command line ldapmodify -x -D cn=admin,dc=mydomain,dc=com -W -f ssl.ldif I have edited /etc/default/slapd so that it has the following services line; SLAPD_SERVICES="ldap:/// ldapi:/// ldaps:///" And everytime I'm making a change, I'm restarting slapd with /etc/init.d/slapd restart The following command line to test out the non TLS connection works fine; ldapsearch -d 9 -D cn=admin,dc=mydomain,dc=com -w mypassword \ -b dc=mydomain,dc=com -H "ldap://mydomain.com" "cn=roger*" But when I switch to ldaps using this command line; ldapsearch -d 9 -D cn=admin,dc=mydomain,dc=com -w mypassword \ -b dc=mydomain,dc=com -H "ldaps://mydomain.com" "cn=roger*" This is what I get; ldap_url_parse_ext(ldaps://mydomain.com) ldap_create ldap_url_parse_ext(ldaps://mydomain.com:636/??base) ldap_sasl_bind ldap_send_initial_request ldap_new_connection 1 1 0 ldap_int_open_connection ldap_connect_to_host: TCP mydomain.com:636 ldap_new_socket: 3 ldap_prepare_socket: 3 ldap_connect_to_host: Trying 127.0.0.1:636 ldap_pvt_connect: fd: 3 tm: -1 async: 0 TLS: can't connect: A TLS packet with unexpected length was received.. ldap_err2string ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1) Now if I check netstat -al I can see; tcp 0 0 *:www *:* LISTEN tcp 0 0 *:ssh *:* LISTEN tcp 0 0 *:https *:* LISTEN tcp 0 0 *:ldaps *:* LISTEN tcp 0 0 *:ldap *:* LISTEN I'm not sure if this is significant as well ... I suspect it is; openssl s_client -connect mydomain.com:636 -showcerts CONNECTED(00000003) 916:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:188: I think I've made all my certificates etc OK and here are the results of some checks; If I do this; certtool -e --infile /etc/ssl/certs/ldap_cacert.pem I get Chain verification output: Verified. certtool -e --infile /etc/ssl/certs/mydomain.com_slapd_cert.pem Gives "certtool: the last certificate is not self signed" but it otherwise seems OK? Where have I gone wrong? Surely getting openldap to run securely on ubuntu should be easy and not require a degree in rocket science! Any ideas?
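
    One thing worth double-checking is the LDIF itself: a cn=config change is normally written with changetype: modify, and each add: block has to be separated from the next by a line containing a single "-", otherwise ldapmodify will not apply the attributes the way you expect. A sketch of the TLS part in that form (paths as in the question; use replace: instead of add: if the attributes already exist):
      dn: cn=config
      changetype: modify
      add: olcTLSCACertificateFile
      olcTLSCACertificateFile: /etc/ssl/certs/ldap_cacert.pem
      -
      add: olcTLSCertificateFile
      olcTLSCertificateFile: /etc/ssl/certs/my.domain.com_slapd_cert.pem
      -
      add: olcTLSCertificateKeyFile
      olcTLSCertificateKeyFile: /etc/ssl/private/my.domain.com_slapd_key.pem
    The "TLS packet with unexpected length" error on port 636 is also what you see when slapd could not load its certificates at all, so it is worth confirming the olcTLS* attributes really landed (e.g. with ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config olcTLSCertificateFile) and that the key file is readable by the openldap user.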

    Read the article

  • My server freezes within a few hours of logging out. Staying logged in keeps the server running

    - by HappyEngineer
    I have an Ubuntu Godaddy server I use to host mail and webapps. It started having problems a couple months ago. It would lock up and stop responding to anything. I couldn't ssh into it, so I'd have godaddy power cycle the server. I have never seen anything that looked suspicious in the var logs (although I'm no expert at reading them). An fsck turned up no problems. Godaddy replaced the ram, but found no hardware problems. I started logging the output from "top" to a log file and found that even that stops running when the server freezes. Now, here is the crazy part: It got so bad that it would actually go down every few hours, but then it stopped going down. I eventually realized I had left an ssh terminal logged into the machine running top. This seemed unlikely to be a reason, but after the server was up with no problems for a full week (remember, it had been going down after just a few hours), I disconnected from the ssh session. Lo and behold, within a few hours the server froze again! I had them power cycle again and then left another ssh session open with top. It has been going without problems for 8 days now. I told others about this and they hardly believe me. I simply can't imagine what is going on. I don't know what else to try other than to just get a new server and reinstall everything. Does anyone have any ideas about what I can look for to determine what the cause is? Is it possible there's some sort of exploit on the server which only runs if everyone is logged out of the system? EDIT: The power management gone haywire sounds plausible, so I've modified the /boot/grub/menu.lst to boot with acpi=off and apm=off. It appears to have prevented kacpid and kacpid_notify from being in the process list, so I assume I did that right. I've disconnected all my sessions from the server. I'll check later tonight to see if it's still up. If it goes down then I'll try the pinging process idea. EDIT: It went down again. It lasted about a day. I've had them reboot, so now I'll try running "nohup ping -i 5 google.com &" and then disconnect. If it goes down again I'll come back. Hopefully someone will have some more ideas.

    Read the article

  • Ubuntu Newbie Needs Assistance!!

    - by Steve Greene
    New Ubuntu User Needs Help! - version 9.10 does not communicate with laptop Hello folks, Several days ago, I installed Ubuntu 9.10 onto my Acer Aspire 3100 laptop, running it alongside Windows Vista as a dual-bootable system. Creation of the Ubuntu boot CD went fine, and the installation onto my hard drive was flawless. Ubuntu opens and behaves as I would expect, except for one little problem. For reasons unknown to me, Ubuntu is not communicating with my laptop's networking hardware, and I have no internet connectivity, even when sitting directly under the wireless router at the local library (literally), which puts out a wickedly-fast signal that my Windows Vista OS auto-detects and immediately connects to. Up in the right side of the Ubuntu desktop, I click on the network icon and it does not show a wireless connection at all, even though I am only a few feet from the router. At home, where I use a dialup modem, I also see no means of getting online. My modem is an HDAUDIO Soft Data Fax Modem with Smart CP, manufactured by CXT (Conexant Systems Inc., file version 4.0.13.0, and the driver version is 7.58.0.0). I desperately wish to convert to Ubuntu. I used Mac for ten years, and then Windows for ten years. Now, after 20 years, I want to live out my days as an open-source Ubuntu fanatic. I am ready to give the old status quo the boot! I am an advanced computer user, but I am not a programmer. I seek a solution that is user-friendly for normal people, something equivalent to a driver that I can easily install or activate that will allow Ubuntu to see my hardware and get me connected. Can anyone help me over this hopefully-little glitch so that I can move on in total Ubuntu bliss? My processor is a Mobile AMD Sempron Processor 3500+ at 1.80 GHz, 1.50 GB RAM, and a 32-bit Operating System. I am running Windows Vista Home Basic, Service Pack 2. My current email is [email protected] if you have a workable solution that does not require programmer status to implement. Surely this must be a simple fix that I simply am overlooking, but being the new guy on the block, I have yet to be enlightened. Thanks for your help in coming up to speed!! Steve Wanna' be Ubuntu Fanatic "If you're not living on the edge, you're taking up too much space."

    Read the article
