Search Results

Search found 18729 results on 750 pages for 'edit'.

Page 528/750

  • Windows 7 Aero themes "greyed out" - no fix found

    - by Robsta
    Brand new machine that was working fine; then, randomly, it switched on boot into a sort of "basic" theme (white task bar, no see-through windows, etc.). I've attempted many fixes and I still don't understand why it doesn't work. I've tried these two solutions: "How to enable Windows 7 Aero Theme" and "Windows 7 Aero Themes Greyed out". These solutions included registry changes, stopping/starting services, and force-starting the Aero theme. The closest I got was when I went into: Control Panel (category view) > Find and fix problems (System and Security) > Display Aero Desktop Effects. I follow through the wizard, let it do its thing, and then I get an error window that pops up: Personalization - "This theme can't be applied to the desktop. Try clicking a different theme." That's what I get from the wizard. What can I do? My drivers are all up to date, there are no viruses on the computer, DirectX is installed and updated, and the registry is all correct. EDIT: When I boot the computer, I get a notification stating that Windows failed to communicate with the Windows desktop services.
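
    A hedged first step, since the boot-time error points at the desktop services: restart the Desktop Window Manager Session Manager (UxSms) and Themes services from an elevated command prompt and make sure they are set to start automatically. The service names are the Windows 7 defaults and the diagnosis is only a guess; adjust if yours differ.

        net stop uxsms
        net start uxsms
        sc config Themes start= auto
        net start Themes
        sfc /scannow    :: optional: verify no protected system files are corrupted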

    Read the article

  • Disable CTRL+mouse wheel zooming in Chrome?

    - by Peter Nore
    I'm a normal-sighted person and I would like to view pages at 100% all the time. I use keyboard shortcuts that involve CTRL a lot, so about twenty times a day I accidentally hit CTRL at the same time that I'm scrolling, which results in the page being reflowed and repainted. This is annoying because it can take up to 30 seconds to fix the issue, depending on how complex the site layout is. On sites with dynamic layout such as Google Docs the problem is more serious; accidentally hitting CTRL+mouse wheel corrupts the display and forces me to refresh the page entirely, sometimes causing me to lose information in the process. I would like to either decouple CTRL+mouse wheel from zoom, or disable zoom functionality altogether. This is possible on Firefox by using about:config; is there a similar way to edit detailed settings in Chrome? Would I have access to the detailed settings if I used Chromium instead of Chrome? I'll probably jump ship back to Firefox if I can't solve this problem. There is a superuser question that asks basically the same thing I'm asking, but for Firefox and Internet Explorer exclusively. Other people on the Chrome forum have had related issues, but none have the same problem. One post, "I would really like it if I could deactivate the auto zoom in/out", was about "something with laptops and Windows 7", not the feature built into Chrome. Other people have had PDF-specific issues, which doesn't concern me. I've also tried searching for extensions that allow you to disable the scroll; I had hoped that "Zoom Lock" would have the ability to lock the zoom at 100% and prevent CTRL+scroll wheel from distorting the display, but it doesn't work for my use case. Google Chrome version 9.0.597.84 (Official Build 72991) Operating System: Ubuntu 10.10

    Read the article

  • How do I identify Blackberry / OWA users in my IIS logs?

    - by Quinten
    We just rolled out a Blackberry Express Server, and would like to make sure that all Blackberry devices that our users own are connecting SOLELY through the BES server. We are running Exchange 2010 SP1. I've read some links that discuss blocking BIS at the firewall level. Before doing that, however, I'd like to individually contact all users with Blackberries and make sure that they have a chance to switch to the BES server. I've sent a company-wide email, but unsurprisingly folks tend to tune these out until they are forced into action. Is there an easy way to identify the users with Blackberries by searching IIS logs, or perhaps using the Exchange Management Shell? Especially some automated way? I've tried searching for the Blackberry identifier, but it does not appear next to any user name, so it's not as helpful as it could be. Edit: to clarify, what I'm talking about is the fact that Blackberries can use OWA to download mail to the phone. We do not allow IMAP or POP access through our firewall so that's not a concern--just folks with Blackberries using Blackberry's hack to allow it to connect to Exchange without a BES server. As far as I know, Blackberries are the only popular phones that use this method to download mail.
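
    One hedged way to get a user list, assuming the IIS logs are in W3C format with the cs-username and cs(User-Agent) fields enabled and you can run standard Unix tools over a copy of them. The column number is an assumption; check it against the "#Fields:" header in your own logs.

        # print the distinct authenticated users whose requests carry a BlackBerry user agent
        grep -hi "blackberry" u_ex*.log | awk '{ print $8 }' | sort -u
        # $8 is cs-username in the default W3C field layout; adjust to match your #Fields: line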

    Read the article

  • What program sent which packet to the network [closed]

    - by Erik Johansson
    I would like to have a tcpdump-like program that shows which program sent a specific packet, instead of just getting the port number. This is a generic problem I've had on and off: sometimes you have an old tcpdump file lying around and no way to find out which program was sending that data. The solution in "How can I identify which process is making UDP traffic on Linux?" is an indication that I can solve this with auditd, dTrace, OProfile or SystemTap, but it doesn't show how to do it. I.e. it doesn't show the source port of the program calling bind(). The problem I had was strange UDP packets, and since those ports are so short-lived it took me a while to solve this issue. I solved this by running an ugly hack similar to: while true; do date +%s.%N; netstat -panut; done So I'm looking for either a method better than this hack, a replacement for tcpdump, or some way to get this info from the kernel so I can patch tcpdump. EDIT: This was asked on superuser as "tracking what programs sends to net", no good solution though.
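
    A hedged auditd sketch that ties outbound traffic to a process, which is roughly what the linked answer hints at; the syscall list and arch flag are assumptions to adjust for your kernel (see auditctl(8)).

        # log every connect()/sendto() along with the PID and executable that made it
        auditctl -a always,exit -F arch=b64 -S connect -S sendto -k netout
        # later, correlate the audit records with the timestamps in your capture
        ausearch -k netout -i | less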

    Read the article

  • How can I control which sound card Ubuntu uses for playback?

    - by GorillaSandwich
    I am dual-booting Ubuntu 9.04 and Windows XP but am new to Ubuntu. In Windows, I use an M-Audio Audiophile 2496 sound card for recording (because it has RCA input jacks for my mixer), but I don't use it for playback (because my speakers use a 1/8 inch jack); instead, I use the motherboard's built-in sound card. I tried to recreate this arrangement in Ubuntu, but despite selecting the built-in card for all playback under System > Preferences > Sound, I still have inconsistent results. Rhythmbox plays back through the integrated card, but Flash content in the browser and games in the OS send their audio to the Audiophile card. I have seen recommendations to use a program called "Jack" to control this, but I installed it and found it baffling. How can I control which card is used for playback, other than disabling one card (as I discovered how to do and explain below)? Also, is there a GUI for disabling hardware, or is it necessary to edit a configuration file?
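
    A hedged workaround at the ALSA level: pin the default playback card in ~/.asoundrc so applications that talk to ALSA directly (Flash and many games often do) hit the built-in card even when they ignore the Sound preferences. The card index 0 below is an assumption; check the output of aplay -l first.

        aplay -l                                                   # find the index of the built-in card
        printf 'defaults.pcm.card 0\ndefaults.ctl.card 0\n' > ~/.asoundrc   # 0 is an assumption

    This will not override applications that select a device explicitly, but it avoids disabling the M-Audio card outright.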

    Read the article

  • Connect Chrome to TOR

    - by Jack M
    I'm having difficulty connecting Chrome to TOR. I started trying yesterday. I started Vidalia and the TOR Browser and then followed the advice at http://lifehacker.com/5614732/create-a-tor-button-in-chrome-for-on+demand-anonymous-browsing - downloading Proxy Switchy and setting it up as stated. This resulted in Error 130 (net::ERR_PROXY_CONNECTION_FAILED) (in Chrome, when I tried to load a webpage). So I looked into Vidalia's settings and noticed that it appeared to be using port 9051, so I set that instead of 8118 as everyone on the internet seems to be suggesting. Then I got a new error: Error 111 (net::ERR_TUNNEL_CONNECTION_FAILED). Digging a bit, I found that Tor should be set as a SOCKS proxy, not an HTTP proxy, so I unticked "use same settings for all protocols" in Proxy Switchy and just set localhost:9051 for SOCKS. That got me Error 7 (net::ERR_TIMED_OUT). And that's when I came here for help. I typed up the above question, but then at the last minute decided to do a bit more reading and found someone here suggested using some command line arguments via a Windows shortcut: "C:\snip\chrome.exe" --proxy-server=";socks=127.0.0.1:9051;sock4=127.0.0.1:9051;sock5=127.0.0.1:9051" --incognito check.torproject.org And that worked perfectly. Yesterday. Today it doesn't, so I'm having to post this question after all. check.torproject.org gives me a "no" with Chrome, but a "yes" with the default Tor Browser. I tried closing Chrome and restarting it (yes, with the correct shortcut) after Vidalia started, but still nothing. The port number hasn't changed or anything. What gives? EDIT: I realized I had a "non-Tor" instance of Chrome running and that possibly that was causing the command-line args to be ignored when I started the new instance. Closed all instances of Chrome and ran my Chrome Tor shortcut, and it did get rid of the "not using Tor" message -- because I got another Time Out error instead. Vidalia's bandwidth graph didn't even blink.
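
    For reference, a hedged version of that shortcut: Tor's SOCKS listener normally sits on port 9050 (9051 is the control port that Vidalia itself talks to), and Chrome expects a single socks5:// URL rather than the semicolon-separated list, so something like the following is worth trying. The chrome.exe path is a placeholder.

        "C:\path\to\chrome.exe" --proxy-server="socks5://127.0.0.1:9050" --incognito check.torproject.org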

    Read the article

  • nginx config woes for multiple subdomains & domains

    - by Peter Hanneman
    I'm finally moving away from Apache and I've got the latest development version of nginx running on a fully updated Ubuntu 10.04 VPS. I've got a single dedicated IP for the box (1.2.3.4) but I've got two separate domains pointing to the server: www.example1.com and www.example2.net. I would like to map the following relationships between URLs and document roots in the config:
        www.example1.com / example1.com  -> /var/www/pub/example1.com/
        subdomain.example1.com           -> /var/www/dev/subdomain/example1.com/
        www.example2.net / example2.net  -> /var/www/pub/example2.net/
        subdomain.example2.net           -> /var/www/dev/subdomain/example2.net/
    where the name of the requested subdomain is a folder under /var/www/dev/. Ideally a request for a non-existent subdomain (no matching folder found) would result in a rewrite to the public site (e.g. invalid.example1.com -> www.example1.com), however a mere "404 Not Found" wouldn't be the worst thing in the world. It would also be nice if I didn't need to modify the config every time I mkdir a new subdomain folder - even better if I don't need to edit it for a new domain either... but now I'm getting greedy... :p Although in my defense Apache did all of this with a single directive. Does anyone know how I can efficiently mimic this behavior in nginx? Thanks in advance, Peter Hanneman
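
    A hedged sketch of the kind of server block that avoids per-subdomain edits, using a named regex capture in server_name plus a directory-exists check; the paths follow the layout above, and the redirect to the public site is one option (return 404 is the other). nginx matches exact server_name entries before regex ones, so www and the bare domain never fall into the dev block.

        server {
            listen 1.2.3.4:80;
            server_name ~^(?<sub>[^.]+)\.example1\.com$;
            root /var/www/dev/$sub/example1.com;
            if (!-d /var/www/dev/$sub/example1.com) {
                return 301 http://www.example1.com$request_uri;   # or: return 404;
            }
        }
        server {
            listen 1.2.3.4:80;
            server_name example1.com www.example1.com;
            root /var/www/pub/example1.com;
        }

    The same pair is repeated for example2.net; capturing the domain portion in a second named group would collapse it further, at the cost of a harder-to-read pattern.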

    Read the article

  • Android web browser returns code 500 for webpage on Nginx web server

    - by Paxxil
    I've come across some very weird behavior of the web browser on Android mobile phones (I've tried HTC Wildfire and HTC Desire phones). I have a web server with Nginx v0.8.54. When I try to open a web page on the phone it shows me the error: The requested item could not be loaded! (Status code: 500) BUT it only happens when I am requesting the page through the mobile network. On wifi it works just fine... but there is more... if I stop Nginx and start the Apache web server it works just fine on both the mobile network and wifi. I've also tried another mobile network and it is the same behavior. Some server stats:
        Firewall is OFF
        SELinux is OFF
        The web page (served by Nginx) opens normally in any other browser (IE, FF, Opera, Chrome, Safari) on a laptop or PC
        Nothing in nginx error.log
    This is the only entry in access.log when the page is requested:
        xxx.xxx.xxx.xxx - - [17/Mar/2011:11:19:49 -0500] 200 "GET / HTTP/1.1" 27405 "-" "Mozilla/5.0 (Linux; U; Android 2.2; en-gb; Desire_A8181 Build/FRF91) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1" "-"
    index.html has only a "Hello World" string in it. There is no fishy javascript or anything else... but there is even more... if I open the same page on another server, with the same Nginx build, with the same server and web server configuration... it opens just fine. If anyone has any idea what may be going on, I would really appreciate it if you let me know. Thanks! EDIT: I forgot to mention that the page opens OK on iPhone and Nokia.
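
    A hedged way to narrow it down from the server side, since the difference between the carrier path and the wifi path is often a transparent proxy in between: replay the phone's exact request with curl (user-agent copied from the access.log entry above, URL is a placeholder) and compare what nginx and Apache send back, or watch the raw exchange while the phone retries over 3G.

        curl -v --compressed -H "Accept-Encoding: gzip" \
             -A "Mozilla/5.0 (Linux; U; Android 2.2; en-gb; Desire_A8181 Build/FRF91) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1" \
             http://your.server.example/
        tcpdump -i eth0 -s0 -A port 80        # interface name is an assumption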

    Read the article

  • EMC VNX iSCSI setup - unsure about SP/port assignment

    - by pauska
    We have a new VNX5300 waiting to get configured, and I need to plan out the network infrastructure before the EMC tech arrives. It has 4x1gbit iSCSI per SP (8 ports in total), and I'd like to get the most out of the performance until we jump over to 10gig iSCSI. From what I can read from the docs, the recommendation is to use only two ports per SP, with 1 active and 1 passive. Why is this? It seems kind of pointless to have quad-port I/O modules and then recommend not using more than two of them. Also - I'm a bit unsure about the zoning. The best practices guide states that you should separate each port on each SP from each other on different logical networks. Does this mean that I have to create 4 logical networks to be able to use all 8 ports? It also gives the following example: Does this mean that A0 and B0 should sit on the same physical switch as well? Won't this make all traffic go on one switch (if both A1 and B1 are passive)? Edit: Another brain puzzle I don't get: each host (as in server) should not have more iSCSI bandwidth available than the storage processor. Why on earth does this matter? If serverA has 1gbit and serverB has 100mbit, then the resulting bandwidth between them is 100mbit. How can this result in some kind of oversubscription? Edit4: Wait, what. Active and passive ports? The VNX runs in an ALUA configuration with asymmetrical active/active... there shouldn't be any passive ports, only preferred ones.

    Read the article

  • Shut Out of XP - No Admin Password or CDR

    - by ashes999
    I inherited an old WinXP/Linux dual-boot machine from the stone age. Because it has Linux, the regular boot process is replaced with the Fedora boot loader; I cannot, therefore, press F8 strategically to tell my PC to boot from CD. Even if I could, it's a moot point; the CDR doesn't seem to recognize any CDs. To make things worse, there's no option to network boot. The original user is probably long gone; I don't know the password for any of the Administrator group users. I can log in using my corp account, but that's unprivileged on this machine. Since I'm not an admin, I can't do crazy things, like looking at boot.ini. Or deleting files. I only have 500MB free on my C drive. I'm pretty sure I can't boot from a USB, since I didn't see any settings for this in my BIOS. How can I get admin access for my user? Edit: Things I've tried: booting from CD (CD not recognized); launching the CD from XP (CD not recognized); installing Daemon Tools Lite so I can install from an ISO -- don't have admin privileges; an XP password recovery tool -- requires admin privileges; adding an admin user -- no access to Control Panel > Users since I'm not an admin; logging in as both of the admin users on the system (trying some standard passwords); using Fedora to run chntpw (the Fedora version installed is ancient -- 2.7).

    Read the article

  • Using gitlab behind Apache proxy all generated urls are wrong

    - by Hippyjim
    I've set up Gitlab on Ubuntu 12.04 using the default package from https://about.gitlab.com/downloads/ {edit to clarify} I've set up Apache to proxy and run the nginx server the package installed on port 8888 (or so I thought). As I had Apache installed already I have to run nginx on localhost:8888. The problem is, all images (such as avatars) are now served from http://localhost:8888, and all the checkout urls Gitlab gives are also localhost - instead of using my domain name. If I change /etc/gitlab/gitlab.rb to use that url, then Gitlab stops working and gives a 503. Any ideas how I can tell Gitlab what URL to present to the world, even though it's really running on localhost? /etc/gitlab/gitlab.rb looks like: # Change the external_url to the address your users will type in their browser external_url 'http://my.local.domain' redis['port'] = 6379 postgresql['port'] = 2345 unicorn['port'] = 3456 and /opt/gitlab/embedded/conf/nginx.conf looks like: server { listen localhost:8888; server_name my.local.domain; [Update] It looks like nginx is still listening on the wrong port if I don't specify localhost:8888 as the external_url. I found this in /var/log/gitlab/nginx/error.log 2014/08/19 14:29:58 [emerg] 2526#0: bind() to 0.0.0.0:80 failed (98: Address already in use) 2014/08/19 14:29:58 [emerg] 2526#0: bind() to 0.0.0.0:80 failed (98: Address already in use) 2014/08/19 14:29:58 [emerg] 2526#0: bind() to 0.0.0.0:80 failed (98: Address already in use) 2014/08/19 14:29:58 [emerg] 2526#0: bind() to 0.0.0.0:80 failed (98: Address already in use) 2014/08/19 14:29:58 [emerg] 2526#0: bind() to 0.0.0.0:80 failed (98: Address already in use) 2014/08/19 14:29:58 [emerg] 2526#0: still could not bind() Apache setup looks like: <VirtualHost *:80> ServerName my.local.domain ServerSignature Off ProxyPreserveHost On AllowEncodedSlashes NoDecode <Location /> ProxyPass http://localhost:8888/ ProxyPassReverse http://127.0.0.1:8888 ProxyPassReverse http://my.local.domain </Location> </VirtualHost> Which seems to proxy everything back ok if Gitlab listens on localhost:8888 - I just need Gitlab to start displaying the right URL, instead of localhost:8888.
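
    A hedged gitlab.rb sketch for the omnibus package behind a reverse proxy: keep external_url set to the public name (that is what Gitlab uses to build avatar and clone URLs) and move only the bundled nginx onto 8888, then reconfigure. The option names below are as shipped in recent omnibus packages; double-check them against the comments in your own /etc/gitlab/gitlab.rb.

        external_url 'http://my.local.domain'
        nginx['listen_addresses'] = ['127.0.0.1']   # only the local Apache needs to reach it
        nginx['listen_port'] = 8888                 # stops nginx from trying to bind port 80 from external_url
        # then apply it:
        #   sudo gitlab-ctl reconfigure

    That also explains the 503 and the "bind() to 0.0.0.0:80 failed" log lines: with external_url changed but no listen_port override, the bundled nginx tries to take port 80, which Apache already holds.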

    Read the article

  • rsnapshot - not correctly archiving mysql databases

    - by Tiffany Walker
    My rsnapshot configuration: snapshot_root /.snapshots/ backup /home/user localhost/ backup_script /usr/local/backup_mysql.sh localhost/mysql/ Using this file: NOW=$(date +"%m-%d-%Y") # mm-dd-yyyy format FILE="" # used in a loop ### Server Setup ### #* MySQL login user name *# MUSER="root" #* MySQL login PASSWORD name *# MPASS="YOUR-PASSWORD" #* MySQL login HOST name *# MHOST="127.0.0.1" #* MySQL binaries *# MYSQL="$(which mysql)" MYSQLDUMP="$(which mysqldump)" GZIP="$(which gzip)" # get all database listing DBS="$($MYSQL -u $MUSER -h $MHOST -p$MPASS -Bse 'show databases')" # start to dump database one by one for db in $DBS do FILE=$BAK/mysql-$db.$NOW-$(date +"%T").gz # gzip compression for each backup file $MYSQLDUMP --single-transaction -u $MUSER -h $MHOST -p$MPASS $db | $GZIP -9 > $FILE done It dumps the databases under / I then tried with the following: http://bash.cyberciti.biz/backup/rsnapshot-remote-mysql-backup-shell-script/ I got: rsnapshot hourly ---------------------------------------------------------------------------- rsnapshot encountered an error! The program was invoked with these options: /usr/bin/rsnapshot hourly ---------------------------------------------------------------------------- ERROR: backup_script /usr/local/backup_mysql.sh returned 1 WARNING: Rolling back "localhost/mysql/" ls -la /.snapshots/hourly.0/localhost/mysql total 8 drwxr-xr-x 2 root root 4096 Nov 23 17:43 ./ drwxr-xr-x 4 root root 4096 Nov 23 18:20 ../ What exactly am I doing wrong? EDIT: # /usr/local/backup_mysql.sh *** Dumping MySQL Database *** Database> information_schema..cphulkd..eximstats..horde..leechprotect..logaholicDB_ns1..modsec..mysql..performance_schema..roundcube..test.. *** Backup done [ files wrote to /.snapshots/tmp/mysql] *** root@ns1 [~]# ls -la /.snapshots/tmp/mysql total 8040 drwxr-xr-x 2 root root 4096 Nov 23 18:41 ./ drwxr-xr-x 3 root root 4096 Nov 23 18:41 ../ -rw-r--r-- 1 root root 1409 Nov 23 18:41 cphulkd.18_41_45pm.gz -rw-r--r-- 1 root root 113522 Nov 23 18:41 eximstats.18_41_45pm.gz -rw-r--r-- 1 root root 4583 Nov 23 18:41 horde.18_41_45pm.gz -rw-r--r-- 1 root root 71757 Nov 23 18:41 information_schema.18_41_45pm.gz -rw-r--r-- 1 root root 692 Nov 23 18:41 leechprotect.18_41_45pm.gz -rw-r--r-- 1 root root 2603 Nov 23 18:41 logaholicDB_ns1.18_41_45pm.gz -rw-r--r-- 1 root root 745 Nov 23 18:41 modsec.18_41_45pm.gz -rw-r--r-- 1 root root 138928 Nov 23 18:41 mysql.18_41_45pm.gz -rw-r--r-- 1 root root 1831 Nov 23 18:41 performance_schema.18_41_45pm.gz -rw-r--r-- 1 root root 3610 Nov 23 18:41 roundcube.18_41_45pm.gz -rw-r--r-- 1 root root 436 Nov 23 18:41 test.18_41_47pm.gz MySQL Backup seems fine.
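
    The likely culprit in the first script is that $BAK is never set, so the dumps land under / instead of inside the snapshot. A hedged rewrite, relying on the documented rsnapshot behaviour of running a backup_script inside a scratch directory and copying whatever it leaves there into localhost/mysql/, with the exit status made explicit so rsnapshot does not roll the snapshot back:

        #!/bin/bash
        # Write dumps into the current directory; rsnapshot runs this script in a
        # temp dir and moves its contents into <snapshot_root>/hourly.0/localhost/mysql/
        set -e
        NOW=$(date +"%m-%d-%Y")
        MUSER="root"; MPASS="YOUR-PASSWORD"; MHOST="127.0.0.1"
        DBS="$(mysql -u $MUSER -h $MHOST -p$MPASS -Bse 'show databases')"
        for db in $DBS; do
            mysqldump --single-transaction -u $MUSER -h $MHOST -p$MPASS "$db" \
                | gzip -9 > "./mysql-$db.$NOW.gz"
        done
        exit 0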

    Read the article

  • BIND returns SERVFAIL when querying for its authoritative domain

    - by estol
    Hi there Serverfault folks! First of all: sorry about the title, I had some problem coming up with the proper title. I have a little home server set up, for internet sharing, samba, basic http, dlna mediaserver and what not, and I happend to have a domain at hand, so I thought why not direct it to this computer? I have a BIND 9.8.0 installed, and - afaik - configured it properly. For a few days, the public view did not worked, and I really did not cared, since the local view worked. But now suddenly, even the local view fails. If I try to query the nameserver for anything in my domain, it returns the following error: $ nslookup andromeda.dafaces.com ;; Got SERVFAIL reply from ::1, trying next server ;; Got SERVFAIL reply from ::1, trying next server Server: 127.0.0.1 Address: 127.0.0.1#53 ** server can't find andromeda.dafaces.com.dafaces.com: SERVFAIL Also, the public view points to the old ip address of the domain, probably because of the same error. Some information about the system: $ uname -a Linux tressis 2.6.37-ARCH #1 SMP PREEMPT Tue Mar 15 09:21:17 CET 2011 x86_64 AMD Athlon(tm) 64 X2 Dual Core Processor 5000+ AuthenticAMD GNU/Linux $ named -v BIND 9.8.0 And the named.conf file: # cat /etc/named.conf // // /etc/named.conf // include "/etc/rndc.key"; #controls { # inet 127.0.0.1 allow {localhost; } keys { "dnskulcs"; }; #}; options { directory "/var/named"; pid-file "/var/run/named/named.pid"; auth-nxdomain yes; datasize default; // Uncomment these to enable IPv6 connections support // IPv4 will still work: listen-on-v6 { any; }; listen-on { any; }; // Add this for no IPv4: // listen-on { none; }; // Default security settings. // allow-recursion { 127.0.0.1; ::1; 192.168.1.0/24; }; // allow-recursion { any; }; allow-query { any; }; allow-transfer { 127.0.0.1; ::1; 92.243.14.172; 87.98.164.164; 88.191.64.64; }; allow-update { key "dnskulcs"; }; version none; hostname none; server-id none; zone-statistics yes; forwarders { 213.46.246.53; 213.26.246.54; 8.8.8.8; 8.8.4.4; 192.188.242.65; 193.227.196.3; 2001:470:20::2; }; }; view "local" { match-clients { 192.168.1.0/24; 127.0.0.1; ::1; fec0:0:0:ffff::/64; }; recursion yes; zone "localhost" IN { type master; file "localhost.zone"; allow-transfer { any; }; }; zone "0.0.127.in-addr.arpa" IN { type master; file "127.0.0.zone"; allow-transfer { any; }; }; zone "." IN { type hint; file "root.hint"; }; zone "dafaces.com" IN { type master; file "internal/dafaces.com.fw"; allow-update { key "dnskulcs"; }; }; zone "1.168.192.in-addr.arpa" IN { type master; file "internal/dafaces.com.rev"; allow-update { key "dnskulcs"; }; }; }; view "public" { match-clients { any;}; recursion no; zone "dafaces.com" IN { type master; file "external/dafaces.com.fw"; allow-transfer { 87.98.164.164; 195.234.42.1; 88.191.64.64; }; }; }; //zone "example.org" IN { // type slave; // file "example.zone"; // masters { // 192.168.1.100; // }; // allow-query { any; }; // allow-transfer { any; }; //}; logging { channel xfer-log { file "/var/log/named.log"; print-category yes; print-severity yes; print-time yes; severity info; }; category xfer-in { xfer-log; }; category xfer-out { xfer-log; }; category notify { xfer-log; }; }; All help would be highly appreciated! EDIT: Zone files: # cat /var/named/internal/dafaces.com.fw $ORIGIN . $TTL 3600 ; 1 hour dafaces.com IN SOA tressis.dafaces.com. postmaster.dafaces.com. ( 2011032201 ; serial 28800 ; refresh (8 hours) 7200 ; retry (2 hours) 2419200 ; expire (4 weeks) 3600 ; minimum (1 hour) ) NS tressis.dafaces.com. 
A 192.168.1.1 MX 10 mail.dafaces.com. $ORIGIN _tcp.dafaces.com. _http SRV 0 5 80 www.dafaces.com. _ssh SRV 0 5 22 tressis.dafaces.com. $ORIGIN dafaces.com. acrisius A 192.168.1.230 andromeda A 192.168.1.7 andromeda-win7 CNAME andromeda aspasia A 192.168.1.233 athena A 192.168.1.232 callisto A 192.168.1.102 db A 192.168.1.1 management A 192.168.1.1 ; web management for the router functions haley A 192.168.1.5 hoth A 192.168.1.101 mail A 192.168.1.1 satelite A 192.168.1.20 sony-player A 192.168.1.103 TXT "310f16de2d2712dfc4ae6e5c54f60f828e" torrent A 192.168.1.1 tracker A 192.168.1.1 tressis A 192.168.1.1 www A 192.168.1.1 zeus A 192.168.1.231 and # cat /var/named/external/dafaces.com.fw $ORIGIN . $TTL 3600 dafaces.com IN SOA ns.dafaces.com. postmaster.dafaces.com. ( 2011032405; serial 28800; refresh 7200; retry 2419200; expire 3600; minimum ) NS ns.dafaces.com. NS ns0.xname.org. NS ns1.xname.org. NS ns2.xname.org. A 89.135.129.37 MX 10 mail.dafaces.com. $ORIGIN dafaces.com. ;Szolgaltatasok _ssh._tcp SRV 0 5 22 tressis _http._tcp SRV 0 5 80 www ns A 89.135.129.37 hoth A 89.135.129.37 www A 89.135.129.37 mail A 89.135.129.37 db A 89.135.129.37 torrent A 89.135.129.37 tracker A 89.135.129.37 Edit: Ohh, hell I almost forgot. Since the node is connected to the internet via a residential connection, there is a possibility, that the public ipv4 address will change(but thank god, it is a very rare case), so I daily update the external IP address in the zone file with a shellscript: # cat /etc/cron.daily/dnsupdate #!/bin/sh FILE="/var/named/external/dafaces.com.fw" SERIAL=$(date +%Y%m%d05) PUBLIC_IP=$(ifconfig internet |sed -n "/inet addr:.*255.255.255.255/{s/.*inet addr://; s/ .*//; p}") cat $FILE | sed --posix 's/^.* serial$/\t\t\t\t\t'$SERIAL'; serial/' | sed --posix 's/[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*/'$PUBLIC_IP'/' > /tmp/ujzona mv /tmp/ujzona $FILE /etc/rc.d/named reload
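
    Before digging further, a hedged sanity check worth running: a SERVFAIL for a zone the server is authoritative for usually means named refused to load that zone (syntax error, missing trailing dot, journal/serial trouble) and logged why at startup, and the check tools will point straight at the offending line.

        named-checkconf /etc/named.conf
        named-checkzone dafaces.com /var/named/internal/dafaces.com.fw
        named-checkzone dafaces.com /var/named/external/dafaces.com.fw
        # then reload and watch for "loaded serial" / "not loaded" lines
        rndc reload && tail -f /var/log/named.log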

    Read the article

  • Courier-imap login problem after upgrading / enabling verbose logging

    - by halka
    I've updated my mail server last night, from Debian etch to lenny. So far I've encountered a problem with my postfix installation, mainly that I managed to broke the IMAP access somehow. When trying to connect to the IMAP server with Thunderbird, all I get in mail.log is: Feb 12 11:57:16 mail imapd-ssl: Connection, ip=[::ffff:10.100.200.65] Feb 12 11:57:16 mail imapd-ssl: LOGIN: ip=[::ffff:10.100.200.65], command=AUTHENTICATE Feb 12 11:57:16 mail authdaemond: received auth request, service=imap, authtype=login Feb 12 11:57:16 mail authdaemond: authmysql: trying this module Feb 12 11:57:16 mail authdaemond: SQL query: SELECT username, password, "", '105', '105', '/var/virtual', maildir, "", name, "" FROM mailbox WHERE username = '[email protected]' AND (active=1) Feb 12 11:57:16 mail authdaemond: password matches successfully Feb 12 11:57:16 mail authdaemond: authmysql: sysusername=<null>, sysuserid=105, sysgroupid=105, homedir=/var/virtual, [email protected], fullname=<null>, maildir=xoxo.sk/[email protected]/, quota=<null>, options=<null> Feb 12 11:57:16 mail authdaemond: Authenticated: sysusername=<null>, sysuserid=105, sysgroupid=105, homedir=/var/virtual, [email protected], fullname=<null>, maildir=xoxo.sk/[email protected]/, quota=<null>, options=<null> ...and then Thunderbird proceeds to complain that it cant' login / lost connection. Thunderbird is definitely not configured to connect through SSL/TLS. POP3 (also provided by Courier) is working fine. I've been mainly looking for a way to make the courier-imap logging more verbose, like can be seen for example here. Edit: Sorry about the mess, I've found that I've been funneling the log through grep imap, which naturally didn't display entries for authdaemond. The verbose logging configuration entry is found in /etc/courier/imapd under DEBUG_LOGIN=1 (set to 1 to enable verbose logging, set to 2 to enable dumping plaintext passwords to logfile. Careful.)
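
    For reference, a hedged way to flip that switch and watch the result; the paths and service names are the Debian defaults, and DEBUG_LOGIN=2 also dumps plaintext passwords to the log, so set it back when done.

        sed -i 's/^DEBUG_LOGIN=.*/DEBUG_LOGIN=1/' /etc/courier/imapd
        /etc/init.d/courier-imap restart
        /etc/init.d/courier-imap-ssl restart   # the log above shows the connection hitting imapd-ssl
        tail -f /var/log/mail.log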

    Read the article

  • SSH connection problem - allowed from LAN but not WAN

    - by Kerem Ulutas
    I tried to set up my Arch Linux installation to be an SSH host, but here is the thing: I can ssh localhost; it fails to log in via public key and asks for username and password, but I am still able to log in. When I try ssh my_wan_ip it gives an ssh_exchange_identification: Connection closed by remote host error. I've read all the topics about this error and none helped me. By the way, just confirmed, it gives ssh: connect to host my_dyndns_hostname port 22: Connection refused from another machine (outside of my network, with a different WAN IP). I have sshd: ALL in "hosts.allow", ALL:ALL in "hosts.deny". I am able to connect to my own PC via ssh and ping my own PC, but my ssh setup seems to be the problem; it gives that annoying error when I try to ssh from the WAN. /etc/ssh/ssh_config /etc/ssh/sshd_config And finally, here is the debug output for both sshd and ssh (I ran the ssh command and captured the sshd debug output after that): sshd debug ssh debug I can edit my question according to your needs. Just ask for any more information needed. BTW I have no iptables running. I have one cable DSL modem connected to an Asus WL-330gE wireless access point; they both have their firewall disabled. I configured NAT so port 22 is directed to the PC I'm having this trouble with. Any help appreciated, thanks.
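
    A hedged checklist from the command line, since "connection refused" from the WAN side with a working LAN login usually means the forwarded port never reaches sshd (listening address, NAT rule, or the modem answering port 22 itself). Hostnames and IPs below are the placeholders from the question.

        # on the Arch box: is sshd really listening on all interfaces, port 22?
        ss -tlnp | grep :22            # or: netstat -tlnp | grep :22
        # from an outside host: verbose client output shows where the handshake dies
        ssh -vvv -p 22 user@my_dyndns_hostname
        # quick check that anything at all answers on the WAN IP
        nc -vz my_wan_ip 22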

    Read the article

  • How do I diagnose the cause of a freeze after resuming in Windows XP (SP3)?

    - by Software Monkey
    I have just built a new computer from parts. Whenever I resume from any sleep mode (S1, S3 or S4) the computer freezes within about 60 seconds of the welcome screen appearing. I have updated the BIOS and all drivers to current from the motherboard manufacturer's site. I have reset BIOS settings to default, including disabling AMD Cool n Quiet. The windows event logs are not helpful at all. Other than immediately after resuming the system is stable as long as AMD CnQ is disabled. The system is: Mobo : MSI 790GX-G65 CPU : AMD Phenom II 965 BE at 3.6 GHz Memory : Corsair DDR3 1600, at 1333 MHz and 9-9-9-21 HDDs : 1 EIDE, 2 SATA in RAID-0 DVD : 1 Card Reader: 1 multi-card reader Keyboard is attached via PS2 and mouse is USB. Any thoughts or pointers would be most welcome. EDIT: It appears that the computer may not freeze if a program is left running which puts it under significant load. I left a stress test running which keeps all cores under 85% load, and my son put the computer to sleep - while this program is running it I have been able to resume from S3 successfully 4 times, compared against about 20 tests with the computer idle which have all frozen. So this may be related to being in an idle state when it resumes.

    Read the article

  • Is VGA port hot-pluggable?

    - by Martin Bøgelund
    In meetings, I often see people detaching the VGA connector from one running laptop and connecting it to another, while the projector is still on. Is this 100% risk-free, and OK by design of the VGA standard? If there's a risk involved in hot-plugging VGA, can it be removed by turning off or suspending either laptop, display, or both? I see this being done all the time without causing disaster, so clearly I'm not interested in answers stating "we do it all the time, so it should be OK!". I want to know if there's a risk - real or in theory - that something breaks when doing this. EDIT: I did an internet search on the topic, and I never found a clear statement as to why it is safe or unsafe to hot swap VGA devices. The typical form is a forum question asking basically the same question as I did, and the following types of statements: "Yes it's hot swappable! I do it all the time!", "It involves some kind of risk, so don't do it!", "You're some kind of moron if you think there's a risk, so just do it!" But no explanation as to why it is safe or not... Joe Taylor's answer below contains a link to a forum post and answers that basically give me the same statements as mentioned above. But again, no good explanation why. So I looked for an actual manual for a projector, and found the "Lenovo C500 Projector User's Guide". It states on page 3-1: "Connecting devices: Computers and video devices can be connected to the projector at the same time. Check the user's manual of the connecting device to confirm that it has the appropriate output connector. [image] Attention: As a safety precaution, disconnect all power to the projector and devices before making connections." But again, no good explanation.

    Read the article

  • Is it possible to add/register an MIB for the Windows built-in SNMP service?

    - by michielvoo
    I need to build monitoring into an existing .NET application. I will use SNMP to send the application's status to the Windows SNMP service. I have used a .NET library to create the SNMP SET request according to the MIB that I have been provided with, and with the correct community. My code now sends multiple 'variables' in a SET request, for example: Id: ".1.3.6.1.4.1.43607.1.1.1.1.1" (ObjectIdentifier) Data: 42 (Integer32) On my machine I have enabled the SNMP service, configured a community with READ/WRITE permissions, and added localhost to the list of hosts to accept requests from. When I send the SET request I get a response, but it has error status 17 which, according to MSDN means SNMP_ERRORSTATUS_NOTWRITABLE. The response also has error index set to 8, which is the number of variables I send. If I send 7 variables, the error index is set to 7. I think the problem is that the Windows SNMP service is preconfigured to only accept SET requests for a fixed set of MIBs. How can I get the Windows SNMP service to 'accept' my custom MIB SET request? Edit: I downloaded and installed the Windows Server 2003 Resource Kit and tried to 'compile' the MIB file with mibcc.exe ("SNMP MIB Compiler") but I have not been able to compile any MIB files (even the most basic ones like SNMPv2-SMI.mib).

    Read the article

  • What was SPX from the IPX/SPX stack ever used for?

    - by Kumba
    Been trying to learn about older networking protocols a bit, and figured that I would start with IPX/SPX. So I built two MS-DOS virtual machines in VirtualBox, and got IPX communications working (after much trial and error). The idea being to get several old DOS games to run, link up to a multiplayer match, interact with each game window, and capture the traffic using Wireshark from the host machine. From this, I got Quake, Masters of Orion 2, and MechWarrior 2 to communicate back and forth. Doom, Doom2, Duke3d, Warcraft, and several others either buggered up under the VM or just couldn't see the other VM on the IPX network. What did I discover? None of the working games used SPX. Not even Microsoft's NET DIAG used SPX. They all ran ONLY on top of IPX. I can't even find SPX examples or use-cases of SPX traffic running over IEEE 802.3 Ethernet II framing. I did find references that it was in abundant use on token ring, but that's it. Yet any IPX-aware application that I've hunted down so far usually advertises itself as "IPX/SPX", which seems to be a bit of a misnomer, since it doesn't seem to use SPX. So what was SPX used for? Any DOS applications out there that use it which will run under my VM setup? Edit: I am aware that IPX is to SPX as IP is to TCP (layer 3 to layer 4), so I expected to see an SPX layer underneath the IPX layer in Wireshark when I ran my tests.

    Read the article

  • VPN Setup: Mac OS X and SonicWall

    - by noloader
    I'm trying to get VPN access up and running. The company has a SonicWall firewall/concentrator and I'm working on a Mac. I'm not sure of the SonicWall's hardware or software level. My MacBook Pro is OS X 10.8, x64, fully patched. The Mac Networking applet claims the remote server is not responding. The connection attempt subsequently fails: This is utter bullshit, as a Wireshark trace shows the Protected Mode negotiation, and then the fallback to Quick Mode: I have two questions (1) does Mac OS X VPN work in real life? (2) Are there any trustworthy (non-Apple) tools to test and diagnose the connection problem (Wireshark is a cannon and I have to interpret the results)? And a third question (off topic): what is broken in Cupertino such that so much broken software gets past their QA department? EDIT (12/14/2012): The network guy sent me "VPN Configuration Guide" (Equinox document SonicOS_Standard-6-EN). It seems an IPSec VPN now requires a Firewall Unique Identifier. Just to be sure, I revisited RFC 2409, where Main Mode, Aggressive Mode, and Quick Mode are discussed. I cannot find a reference to Firewall Unique Identifier. I think I am screwed here: I am trying to connect to a broken (non-standard) firewall, with a broken Mac OS X client. Fortunately, I can purchase VPN Tracker Personal (a {SonicWall|Equinox}-authored client) for $129US from Equinox. So much for standards....

    Read the article

  • Five stars of open data - example and review

    - by Joe
    (there may be a better-suited SE site for this question so feel free to shift) I have some data I'd like to make open to the public - it's a synthesis of some related data retrieved from freedom of information requests over the last year. The data itself is at http://www.cs.rhul.ac.uk/home/joseph/domesday/Domesday-Scotland.csv or, for fans of Excel, at http://www.cs.rhul.ac.uk/home/joseph/domesday/Domesday-Scotland.xlsx . It's no more than a table with about five columns. I'd like to make this properly open data, so I was looking at the 5-star deployment scheme for Open Data. Much of it is fine, but I'm confused towards the end and I could do with an explanation from people who know the answers. So to achieve the star levels I need to: "make your stuff available on the Web (whatever format) under an open license" - trivial - all I have to do is put the notes up on the page that will give the provenance of the data. "make it available as structured data (e.g., Excel instead of image scan of a table)"… done… "use non-proprietary formats (e.g., CSV instead of Excel)" - done… "use URIs to identify things, so that people can point at your stuff" - this is where I start to get a bit hazy - does this mean there should be a URI for every line in the table? "link your data to other data to provide context" - this isn't massively clear to me - does this mean to give the provenance of the data? One column of the data I've put out is a link to where the data came from - is that the sort of thing we're looking at? Any and all information and answers welcome… EDIT - or if anyone wants to recommend an SE site or other place to ask the question - that would be cool...

    Read the article

  • SSH attack CentOS Amazon EC2

    - by user37143
    Hi, I run a few Rightscale CentOS AMI-based instances on Amazon EC2. Two months back I found that our SSHD security was compromised (I had added hosts.allow and hosts.deny entries for ssh). So I created new instances, set up IP-based ssh access that allows only our IPs through the AWS firewall (ec2-authorize), and changed the default ssh port 22 to some other port. But two days back I found I was not able to log in to the server, and when I tried on port 22 the ssh got connected, and I found that sshd_config had been changed. When I tried to edit sshd_config I found root had no write permission on the file. So I tried a chmod and it said access denied for the 'root' user. This is very strange. I checked the secure log and history and found nothing informative. I have PHP, Ruby on Rails, Java, and Wordpress apps running on these servers. This time I did a chkrootkit scan and found nothing. I renamed the /etc/ssh folder and reinstalled openssh through yum. I have faced this on 3 instances on CentOS (5.2, 5.4). I have instances on Debian as well; those are working fine. Is this a CentOS/Rightscale issue? What security measures should I take to prevent this? Please support me, this is very critical. Thanks
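
    One hedged explanation for root being unable to edit or chmod sshd_config is the ext2/ext3 immutable attribute, which rootkits commonly set on the files they replace and which makes even root get "Operation not permitted"; worth checking before rebuilding the instance.

        lsattr /etc/ssh/sshd_config /usr/sbin/sshd
        # an 'i' flag in the output means immutable; clear it, then re-inspect the file
        chattr -i /etc/ssh/sshd_config
        # and compare the installed openssh files against the rpm database
        rpm -V openssh-server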

    Read the article

  • Sudoers file allow sudo on specific file for active directory group

    - by tubaguy50035
    I have active directory sign in working on an Ubuntu 12.04 box. When the user signs in, I have a script that runs that needs sudo permission (since it modifies the samba config file). How would I specify this in my sudoer's file? I've tried: %DOMAIN\\AD+Programmers ALL=NOPASSWD: /usr/local/bin/createSambaShare.php I've found various resources on the internet stating that this is how it would be done, but I'm not sure that I have the first part right. What are they using as the DOMAIN? The workgroup or the realm? I use Samba + winbind for active directory integration. Here's my smb.conf: [global] security = ads netbios name = hostname realm = COMPANYNAME.COM password server = passwordserver workgroup = COMPANYNAME idmap uid = 1000-10000 idmap gid = 1000-10000 winbind separator = + winbind enum users = no winbind enum groups = no winbind use default domain = yes template homedir = /home/%D/%U template shell = /bin/bash client use spnego = yes domain master = no EDIT: The users that should have access to run that script are all part of the Programmers group which has an Active Directory Domain Services Folder of Company.com/Staff/Security Groups (not sure if that matters or not).
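
    A hedged pair of checks: with winbind use default domain = yes, winbind normally exposes the group without any DOMAIN\ prefix, so the sudoers entry should match exactly what getent prints (spaces escaped with backslashes). The group name below is an assumption taken from the question.

        getent group | grep -i programmers        # see the exact name winbind reports
        id someADuser                             # confirm the group shows up for a member
        # then in visudo, matching the getent output:
        %Programmers ALL=(ALL) NOPASSWD: /usr/local/bin/createSambaShare.php
        # if the name getent prints contains spaces, escape them: %Domain\ Programmers ...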

    Read the article

  • Virtual Windows 2008 Server Activation with ESX

    - by Logman
    I had a decommissioned server (Dell PE2950) that we could still use. It had OEM Windows 2003 Std on it, but I wanted to use it as a new host with VMware ESX5 to put a couple of legacy servers on it. I wiped it clean and maxed out the memory. But when I added the memory I noticed the product key sticker was a "WindowsServer08 Std 1-4cpu" product key, and it also had a Virtual Key. Not sure why it had Win2003 and not Win2008 from the start, but I would like to use that license if I can. The virtual host would stay on the same physical server, so there shouldn't be a problem with licensing... but I do not want to use Hyper-V unless I cannot help it. I have installed ESX5 on the server, but I cannot get the Windows 2008 server to activate. The product key is hard to read, and I have checked the key quite a few times. But my question is... Is it because Hyper-V was not installed on the host? But I thought you could use the product key alone on a virtual host? Maybe because I am not using a Dell Windows 2008 disk but an ISO from MS directly via the Volume Licensing site? EDIT: well, I'm pretty sure I got the product key correct. If it's not the product key, could the activation problem be because I'm not using Hyper-V or maybe not the correct install DVD? EDIT2: maybe because I added 28GB of memory? Originally 4GB...

    Read the article

  • Online computer not responding to pings

    - by mastercork889
    I was doing a bit of scanning on my network lately and knew the hostnames of each connected computer. But whilst pinging one of them, ping returned Request timed out. This is strange, as I know the computer is online and that it responds correctly to pinging on a different (enterprise) network. Is there something on that computer, my network, or my computer that is causing this? - That's just a sub-question, I don't expect this to be the main answer. The real question: Why does this happen? Why does pinging the IPv4 address not work? EDIT: Pinging the hostname used to default to the IPv4 address, but now it defaults to the IPv6 address. Why does this happen? And now that it pings using IPv6, how come it all of a sudden works?
    > ping -6 THE_COMPUTER
    Pinging THE_COMPUTER [lengthy IPv6 address] with 32 bytes of data:
    Reply from [lengthy IPv6 address]: time=1ms
    Reply from [lengthy IPv6 address]: time=1ms
    Reply from [lengthy IPv6 address]: time=1ms
    Reply from [lengthy IPv6 address]: time=1ms
    Ping stats: Sent = 4, Received = 4, Lost = 0 (0% loss)
    But when this is done using IPv4 it doesn't work. So there are now two questions: How come IPv6 works and not IPv4? Why does IPv4 not work?
    Read the article
