Search Results

Search found 13995 results on 560 pages for 'home'.

Page 515/560

  • Mysql server fails to start

    - by Nicolas Thery
    I've been googling for two hours, and now I require your assistance. I'm on a Debian virtual machine and I cloned it. The only change is the new IP address it has. MySQL doesn't start any more:

      Starting MySQL database server: mysqld . . . . . . . . . . . . . . failed!

    There is no process called mysql. All the mysql log files in /var/log are empty. Here is my my.cnf file:

      [client]
      port            = 3306
      socket          = /var/run/mysqld/mysqld.sock

      [mysqld_safe]
      socket          = /var/run/mysqld/mysqld.sock
      nice            = 0

      [mysqld]
      user            = mysql
      pid-file        = /var/run/mysqld/mysqld.pid
      socket          = /var/run/mysqld/mysqld.sock
      port            = 3306
      basedir         = /usr
      datadir         = /var/lib/mysql
      tmpdir          = /tmp
      language        = /usr/share/mysql/english
      skip-external-locking
      bind-address    = 127.0.0.1
      key_buffer      = 16M
      max_allowed_packet = 16M
      thread_stack    = 192K
      thread_cache_size = 8
      myisam-recover  = BACKUP
      query_cache_limit = 1M
      query_cache_size  = 16M
      general_log_file  = /var/log/mysql/mysql.log
      log_bin           = /var/log/mysql/mysql-bin.log
      expire_logs_days  = 10
      max_binlog_size   = 100M

      [mysqldump]
      quick
      quote-names
      max_allowed_packet = 16M

      [mysql]

      [isamchk]
      key_buffer      = 16M

      [mysqld_safe]
      syslog

    Here is the result of ifconfig:

      eth0      Link encap:Ethernet  HWaddr 00:0c:29:12:98:9a
                inet adr:192.168.1.138  Bcast:192.168.1.255  Masque:255.255.255.0
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:754 errors:0 dropped:0 overruns:0 frame:0
                TX packets:106 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 lg file transmission:1000
                RX bytes:101177 (98.8 KiB)  TX bytes:17719 (17.3 KiB)

      lo        Link encap:Boucle locale
                inet adr:127.0.0.1  Masque:255.0.0.0
                adr inet6: ::1/128 Scope:Hôte
                UP LOOPBACK RUNNING  MTU:16436  Metric:1
                RX packets:8 errors:0 dropped:0 overruns:0 frame:0
                TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 lg file transmission:0
                RX bytes:560 (560.0 B)  TX bytes:560 (560.0 B)

    As requested, here is the result of sudo -u mysql mysqld:

      root@debian:/home/nicolas/Bureau# sudo -u mysql mysqld
      121004 14:26:57 [Note] Plugin 'FEDERATED' is disabled.
      mysqld: Can't find file: './mysql/plugin.frm' (errno: 13)
      121004 14:26:57 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
      121004 14:26:57 InnoDB: Initializing buffer pool, size = 8.0M
      121004 14:26:57 InnoDB: Completed initialization of buffer pool
      121004 14:26:57 InnoDB: Started; log sequence number 0 70822697
      121004 14:26:57 [Note] Recovering after a crash using /var/log/mysql/mysql-bin
      121004 14:26:57 [Note] Starting crash recovery...
      121004 14:26:57 [Note] Crash recovery finished.
      121004 14:26:57 [ERROR] mysqld: Can't find file: './mysql/host.frm' (errno: 13)
      121004 14:26:57 [ERROR] Fatal error: Can't open and lock privilege tables: Can't find file: './mysql/host.frm' (errno: 13)
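
    A sanity check that follows from the log above: errno 13 is EACCES (permission denied), and the datadir in my.cnf is /var/lib/mysql, so it is worth confirming that directory is still owned by the mysql user on the clone. A minimal sketch, assuming the stock Debian paths; the chown is only a guess at the fix, not a confirmed cause:

      ls -ld /var/lib/mysql /var/lib/mysql/mysql     # who owns the datadir and the mysql schema directory?
      sudo chown -R mysql:mysql /var/lib/mysql
      sudo /etc/init.d/mysql start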


  • Can't ping devices by IP address for devices allocated IPs by DHCP

    - by GiddyUpHorsey
    I have a home network with a Trendnet wireless router and a Windows Domain. The Domain Controller/DNS server is a Windows 2000 Server and is configured to forward queries to DNS servers provided by the ISP. The router provides DHCP and is configured with the Windows 2000 Server as the DNS server. The network has been set up for a couple of years and usually works fine.
    When I connect iPhones to the network over WiFi, the router can ping the iPhones through its browser based admin interface, but Windows machines that are part of the Windows Domain cannot. A laptop was connected to the network over WiFi that wasn't joined to the domain and it could see the iPhones. The router UI shows that the laptop has a reserved IP allocated via DHCP. All machines either have a static or DHCP allocated IP on the 192.168.0.* subnet:

      Router                    - 192.168.0.1   - Static        - Wired
      Windows Domain Controller - 192.168.0.8   - Static        - Virtual
      Windows 7 Workstation     - 192.168.0.200 - DHCP Auto     - Wired
      VMWare ESXi Host          - 192.168.0.201 - Static?       - Wired
      iPhone 1                  - 192.168.0.202 - DHCP Auto     - WiFi
      iPhone 2                  - 192.168.0.203 - DHCP Auto     - WiFi
      Windows Vista Laptop      - 192.168.0.204 - DHCP Reserved - WiFi

    Using the Windows 7 machine (200), I try to ping each machine and the only DHCP machine that responds is itself. The other DHCP machines fail with "Reply from 192.168.0.200: Destination host unreachable." Using nslookup fails with "*** domain.controller.name can't find 192.168.0.203: Non-existent domain."
    Using the Windows 2000 Domain Controller (8), I try to ping each machine and the only DHCP machine that responds is the Windows 7 machine (200). Pinging the other DHCP machines fails with "Request timed out." Using nslookup also fails with "*** domain.controller.name can't find 192.168.0.203: Non-existent domain."
    Using the iPhone 2 (203), I try to ping (Network Ping Lite) the machines with static IP addresses and that works fine. When I try to ping the Windows 7 machine (200) it is unable to get a response.
    How do I configure the DNS server/Windows Domain/Router properly so that the Windows Domain machines can see the IPs allocated via DHCP?
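
    Because the failures above are "Destination host unreachable" rather than plain timeouts, one narrowing step is to check whether the wired domain machines ever learn the iPhones' MAC addresses at all; if the ARP table stays empty, that points at layer 2 (for example wireless client isolation on the Trendnet) rather than at DNS or the domain. A purely diagnostic sketch, run from the Windows 7 machine (the same commands exist there and on the server):

      ping 192.168.0.202
      arp -a
      nslookup 192.168.0.202 192.168.0.8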


  • Is there a way to permanently arrange 2 displays under XP?

    - by rumtscho
    When I am home, on a business trip, or in a meeting, I use my laptop in the usual way. When I get to work, I put it on the docking station and boot it with the lid closed. The image appears on the two displays connected to the docking station. On the left, there is an old monitor connected over VGA; on the right, a big widescreen connected over DVI. Obviously, the video card seems to think that the DVI is the primary output and the VGA the secondary one. Thus Windows always displays the widescreen on the left and the old FSC monitor on the right. So when I want to move the mouse pointer from the (physically) left display to the (physically) right display, I have to move it from right to left, which is a usability nightmare.
    Of course, I can just drag one display over the other one in the display properties, and then everything is as it should be. The catch: Windows remembers this only as long as it has the two displays. Every time it runs on the laptop display, it forgets the setting.
    Physically switching the monitors isn't an option, for ergonomic reasons. I prefer to run the more important applications on the bigger screen with the better colourspace, and the shape of my desk forces me to sit off-center, so the more important applications should be shown on the right display. Just switching the video ports doesn't help either: when I connect the big monitor over VGA, image quality deteriorates visibly.
    So what I do now is: every time I bring the laptop to my desk, I boot it. I wait the whole 7 minutes of XP booting, syncing network drives, etc. Then I fire up the display properties, switch to the last tab, drag the widescreen display to the right, and close. Only then can I start working. Does anyone have a better idea?
    The laptop is a Dell Latitude 630 with Windows XP SP3. It has an nVidia graphics card (not an onboard chip).


  • Mounting a drive in Ubuntu 9.10 (Karmic Koala)

    - by morpheous
    I have just installed Ubuntu on a machine that previously had XP installed on it. The machine has 2 HDDs (hard disk drives). I opted to install Ubuntu completely over XP. I am new to Linux, and I am still learning how to navigate the file structure. However, AFAICT, there is only one drive. I want to be able to store programs etc. on the first drive, and store data (program output etc.) on the second drive. It appears Ubuntu is not aware that I have 2 drives (on XP, these were drives C and D).
    How can I mount the second drive? Ideally, I want to do this automatically on login, so that the drive is available to me whenever I log in, without manual intervention from me.
    In XP, I could refer to files on a specific drive by prefixing with the drive letter (e.g. c:\foobar.cpp and d:\foobar.dat). I suspect the notation on Ubuntu is different. How may I specify specific files on different drives?
    Last but not least (a bit unrelated to the previous questions), this relates to directory structure again. I am a developer (C++ for desktops and PHP for websites), and I want to install the following apps/libraries:

      i).    Apache 2.2
      ii).   PHP 5.2.11
      iii).  MySQL (5.1)
      iv).   SVN
      v).    Netbeans
      vi).   C++ development tools (gcc, gdb, emacs etc)
      vii).  QT toolkit
      viii). Some miscellaneous scientific software (e.g. www.r-project.org, www.gnu.org/software/octave/)

    I would be grateful if someone could recommend a directory layout for these applications. Regarding development, I would also be grateful if someone could point out where to store my project and source files, i.e. (i) *.cpp, *.hpp, *.mak files for C++ projects and (ii) individual websites. On my XP machine the layout for C++ development was like this:

      c:\dev\devtools           (common libs and headers etc)
      c:\dev\workarea           (root folder for projects)
      c:\dev\workarea\c++       (c++ projects)
      c:\dev\workarea\websites  (web projects)

    I would like to create a similar folder structure on the Linux machine, but it's not clear whether to place these folders under /, /usr, /home or somewhere else (there seems to be a baffling number of choices, so I want to get it "right" the first time - i.e. have a directory structure that most developers use, so it is easier when communicating with other Ubuntu/Linux developers).
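
    On the mounting question specifically, the usual approach is to find the second disk's device name, mount it somewhere under the filesystem root, and then make that permanent with an /etc/fstab entry so it comes back on every boot (which covers the "available whenever I log in" requirement). A minimal sketch; /dev/sdb1, the ext3 type and the /data mount point are assumptions to be replaced with what fdisk actually reports:

      sudo fdisk -l                      # list disks/partitions; the second drive is often /dev/sdb
      sudo mkdir -p /data
      sudo mount /dev/sdb1 /data         # one-off mount to check it works
      # to make it permanent, a line like this would go in /etc/fstab:
      #   /dev/sdb1   /data   ext3   defaults   0   2

    Files are then referred to by path rather than drive letter, e.g. /data/foobar.dat alongside /home/you/foobar.cpp.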


  • How to connect to a Virtualbox guest from the host when network cable unplugged

    - by Greg K
    I'd like to work offline (I'm flying to the US twice this month), to do this I need access to a linux development server. When I work from home I boot a VirtualBox VM and that acts as my dev server for the day (providing Apache, PHP & MySQL to run my server side code). However, I'd like to work with my VM when I'm not connected to a network. I have my Ubuntu VM guest set up with a bridge connection so it can serve HTTP and provide SSH access from inside my local network. I've tried to manually configure my network settings on both Mac OSX (the host) and Ubuntu (the guest) but I can't even ping my own NIC address (127.0.0.1 can, 192.168.21.x I can't) in OS X when I unplug the cable. Manual network settings: $ ifconfig en0 en0: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500 ether 00:xx:xx:xx:xx:xx inet 192.168.21.5 netmask 0xffffff00 broadcast 192.168.21.255 media: autoselect (100baseTX <full-duplex,flow-control>) status: active I can ping localhost fine, as well as my VM (.20) and SSH too. $ ping 192.168.21.5 PING 192.168.21.5 (192.168.21.5): 56 data bytes 64 bytes from 192.168.21.5: icmp_seq=0 ttl=64 time=0.085 ms 64 bytes from 192.168.21.5: icmp_seq=1 ttl=64 time=0.102 ms 64 bytes from 192.168.21.5: icmp_seq=2 ttl=64 time=0.100 ms 64 bytes from 192.168.21.5: icmp_seq=3 ttl=64 time=0.094 ms $ ping 192.168.21.20 PING 192.168.21.20 (192.168.21.20): 56 data bytes 64 bytes from 192.168.21.20: icmp_seq=0 ttl=64 time=0.910 ms 64 bytes from 192.168.21.20: icmp_seq=1 ttl=64 time=1.181 ms 64 bytes from 192.168.21.20: icmp_seq=2 ttl=64 time=1.159 ms 64 bytes from 192.168.21.20: icmp_seq=3 ttl=64 time=1.320 ms Network cable unplugged: $ ifconfig en0 en0: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500 ether 00:xx:xx:xx:xx:xx media: autoselect status: inactive $ ping 192.168.21.5 PING 192.168.21.5 (192.168.21.5): 56 data bytes ping: sendto: No route to host ping: sendto: No route to host Request timeout for icmp_seq 0 ping: sendto: No route to host Request timeout for icmp_seq 1 Does OS X disable the NIC when the network cable is unplugged? Any way to stop it doing this?
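
    Not a fix for the bridged setup described above, but a common way to reach a guest with no cable plugged in is to add a second, host-only adapter alongside the bridged one, so host and guest share a private subnet that exists regardless of the physical link. A rough sketch with VBoxManage; the VM name, adapter slot and vboxnet0 name are assumptions:

      VBoxManage hostonlyif create                                          # usually creates vboxnet0
      VBoxManage modifyvm "Ubuntu Dev VM" --nic2 hostonly --hostonlyadapter2 vboxnet0

    The guest's second interface then gets a static address on the vboxnet0 subnet, and SSH/HTTP stay reachable on that address whether or not en0 has a link.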


  • samba 3.5 "force user" doesn't seem to be sticking

    - by myCubeIsMyCell
    After installing a new OS with a newer version of Samba, I'm having trouble accessing my shares. I can browse to the specific share, but only to the top level. As best I can tell from the logs, it seems the "force user" in the Samba config isn't sticking beyond the initial connection. Details below.
    I installed a new version of CentOS on my storage server. My old CentOS (4?) install had Samba version 3.0.33; the new CentOS is using 3.5.10. No domain/AD involved ... just a home workgroup. No real security... just some shares hidden & some defined as read-only. Here's my config:

      [global]
      workgroup = WORKGROUP
      server string = Samba Server Version %v
      netbios name = luna
      security = share
      # logs split per machine
      log file = /var/log/samba/log.%m
      log level = 2
      # max 50KB per log file, then rotate
      max log size = 50
      winbind use default domain = Yes

      [strge]
      comment = please
      path = /storage
      browseable = yes
      read only = no
      force user = windowsguest
      force group = users
      guest ok = yes

    So... the problem I'm running into is that the 'force user' only seems to hold for the initial connection & I see all the top level folders fine. When I drill into a folder I get access denied - which appears to be due to my Windows user info being sent (it tries to authenticate xuser - a non-existent user to Samba, so it maps to nobody & fails). Here's the smb error msg:

      [2012/11/29 14:30:27.326195, 2] auth/auth.c:314(check_ntlm_password)
        check_ntlm_password: Authentication for user [xuser] -> [xuser] FAILED with error NT_STATUS_NO_SUCH_USER
      [2012/11/29 14:30:27.326251, 2] auth/auth.c:314(check_ntlm_password)
        check_ntlm_password: Authentication for user [nobody] -> [nobody] FAILED with error NT_STATUS_NO_SUCH_USER

    Most of the top level directories are 755, some 777. Either way, I cannot access them. If I do a chown -R windowsguest.users ... no change... but if I do a chmod -R to 777 or 755 they become browsable... but I still can't create files (even for the 777 ones). Not sure what role it plays, if any... but I had to recreate the user windowsguest under the new OS install; the uid & gid match the old user. Seems the main issue as far as I can tell is that Samba isn't maintaining the 'force user' - but I could be wildly off base. Client OS is Win7 Pro x64. Thanks for any suggestions or advice!
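
    Based purely on the log above (smbd trying and failing to authenticate the Windows account xuser before force user would ever apply), one hedged thing to try with security = share is telling Samba to map unknown users to the guest account explicitly, and then confirming the effective configuration with testparm. The two parameters below go under [global] and are an assumption, not a confirmed fix for this box:

      # add under [global] in /etc/samba/smb.conf (placement matters - not at the end of the file):
      #     map to guest = Bad User
      #     guest account = windowsguest
      testparm -s                 # check how smbd actually parses the result
      sudo service smb restart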


  • File/printer sharing issues on network with multiple OSes

    - by DanZ
    My workplace consists of computers running a variety of different operating systems, and I have been running into problems getting some of them to connect to a shared drive and printer over the network. Here is a brief description of the computers involved and the issues I have encountered:

      1: Dell desktop, Windows Vista Business -- This is the computer I want the others to connect to. It has a USB printer and eSATA hard drive enclosure that I have set up for sharing, with different accounts for the various users.
      2: Fujitsu laptop, Windows XP Tablet edition -- No problems. Can connect to both the shared printer and hard drive.
      3: Lenovo laptop, Windows Vista Business 64 bit -- No problems. Can connect to both the shared printer and drive.
      4: Apple MacBook, OS 10.4 -- Can connect to the shared drive, but not to the shared printer. I am aware that the printer issue is due to a known incompatibility between Vista and OS 10.4 and earlier with regards to Samba. It is not a big problem, however, as this computer can access a network printer.
      5: Sony laptop, Windows Vista Home Premium -- Can connect to the shared printer, but not the shared drive. It can see computer 1 and its shared drive on the network, and appears to successfully log in to user accounts. However, if you try to access the shared drive, it says you do not have permission. I have tried both standard and administrator accounts, and none can access the drive from this computer.
      6: MacBook Pro, OS 10.5 (there are two of these) -- Can connect to the shared printer, but not the shared drive. They can't see computer 1 on the network. For that matter, they also can't see each other or the older Mac, but can see and access shared folders on the XP machine (computer 2) and can see other PCs in the building. I was able to add the shared printer manually by typing in its network location, but was unable to manually add the shared drive in the same way.

    So, what I am looking for is suggestions on how to get computers 5 and 6 to connect to the shared drive. Since they can already connect to the shared printer (which is on the same computer as the shared drive), it seems reasonable that they should be able to access the drive as well.
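
    For the two OS 10.5 MacBooks that cannot even see computer 1, it can help to separate network browsing (NetBIOS name resolution) from actual SMB access by connecting straight to the Vista machine's address; if that works, the share itself is reachable and only discovery is broken. A diagnostic sketch from one of the Macs - the IP is a placeholder, not taken from the question:

      smbutil view //192.168.1.10          # list the shares computer 1 is offering
      # or in the Finder: Go > Connect to Server > smb://192.168.1.10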


  • Are file access times not properly maintained in Mac OS X?

    - by Ether
    I'm trying to determine how file access times are maintained by default in Mac OS X, as I'm trying to diagnose some odd behaviour I'm seeing in a new MBP Unibody (running Snow Leopard, 10.6.2): The symptoms (drilling down to the specific behaviour that seems to be causing the issue): mutt is unable to switch to mailboxes which have recently received new mail mail is delivered by procmail, which updates the mtime of the mbox folder it is updating, but does not alter the atime (this is how new mail detection works: by comparing atime to mtime) however, both the mtime and atime of the mbox file is getting updated Through testing, it does not appear that atimes can be set separately in the filesystem: : [ether@tequila ~]$; touch test : [ether@tequila ~]$; touch -m -t 200801010000 test2 : [ether@tequila ~]$; touch -a -t 200801010000 test3 : [ether@tequila ~]$; ls -l test* -rw------- 1 ether staff 0 Dec 30 11:42 test -rw------- 1 ether staff 0 Jan 1 2008 test2 -rw------- 1 ether staff 0 Dec 30 11:43 test3 : [ether@tequila ~]$; ls -lu test* -rw------- 1 ether staff 0 Dec 30 11:42 test -rw------- 1 ether staff 0 Dec 30 11:43 test2 -rw------- 1 ether staff 0 Dec 30 11:43 test3 The test2 file is created with an old mtime, and the atime is set to now (as it is a new file), which is correct. However, test3 is created with an old atime, but is not set properly on the file. To be sure this is not just behaviour seen with new files, let's modify an old file: : [ether@tequila ~]$; touch -a -t 200801010000 test : [ether@tequila ~]$; ls -l test -rw------- 1 ether staff 0 Dec 30 11:42 test : [ether@tequila ~]$; ls -lu test -rw------- 1 ether staff 0 Dec 30 11:45 test So it would seem that atimes cannot be set explicitly (it is always reset to "now" when either mtime or atime modifications are submitted). Is this something inherent to the filesystem itself, is it something that can be changed, or am I totally crazy and looking in the wrong place? PS. the output of mount is: : [ether@tequila ~]$; mount /dev/disk0s2 on / (hfs, local, journaled) devfs on /dev (devfs, local, nobrowse) map -hosts on /net (autofs, nosuid, automounted, nobrowse) map auto_home on /home (autofs, automounted, nobrowse) ...and Disk Utility says that the drive is of type "Mac OS Extended (Journaled)".
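
    Two shell-level checks that bear directly on the test above: stat can print atime and mtime side by side (harder to misread than comparing ls -l with ls -lu), and the mount options show whether the volume was mounted noatime. Both commands are standard on 10.6; how to interpret the result is left open:

      stat -f 'atime=%Sa   mtime=%Sm   %N' test*
      mount | grep ' / '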


  • Why did I fail to build rtorrent?

    - by hugemeow
    I am not root, so I have to build rtorrent from source and hope to install it in my home directory, but I failed. Why?

      [mirror@hugemeow rtorrent]$ ls
      AUTHORS  autogen.sh  ChangeLog  configure.ac  COPYING  doc  INSTALL  Makefile.am  NEWS  rak  README  scripts  src  test
      [mirror@hugemeow rtorrent]$ ./autogen.sh
      aclocal...
      aclocal:configure.ac:7: warning: macro `AM_PATH_CPPUNIT' not found in library
      autoheader...
      libtoolize... using libtoolize
      automake...
      configure.ac: installing `./install-sh'
      configure.ac: installing `./missing'
      src/Makefile.am: installing `./depcomp'
      autoconf...
      configure.ac:7: error: possibly undefined macro: AM_PATH_CPPUNIT
            If this token and others are legitimate, please use m4_pattern_allow.
            See the Autoconf documentation.

    Though autogen failed, the configure script is created:

      [mirror@hugemeow rtorrent]$ ls
      aclocal.m4  autogen.sh      ChangeLog     config.h.in  configure     COPYING  doc      install-sh  Makefile.am  missing  rak     scripts  test
      AUTHORS     autom4te.cache  config.guess  config.sub   configure.ac  depcomp  INSTALL  ltmain.sh   Makefile.in  NEWS     README  src

    Running configure then fails with a syntax error near the unexpected token `1.9.6'. What's wrong, and what should I do in order to build rtorrent for my CentOS machine?

      [mirror@hugemeow rtorrent]$ ./configure
      checking for a BSD-compatible install... /usr/bin/install -c
      checking whether build environment is sane... yes
      checking for gawk... gawk
      checking whether make sets $(MAKE)... yes
      ./configure: line 2016: syntax error near unexpected token `1.9.6'
      ./configure: line 2016: `AM_PATH_CPPUNIT(1.9.6)'
      [mirror@hugemeow rtorrent]$ git branch
      * master
      [mirror@hugemeow rtorrent]$ git branch -a
      * master
        remotes/origin/HEAD -> origin/master
        remotes/origin/c++11
        remotes/origin/master
      [mirror@hugemeow rtorrent]$ git tag
      0.9.0
      0.9.1
      [mirror@hugemeow rtorrent]$ git clean -dfx
      Removing Makefile.in
      Removing aclocal.m4
      Removing autom4te.cache/
      Removing config.guess
      Removing config.h.in
      Removing config.log
      Removing config.sub
      Removing configure
      Removing depcomp
      Removing doc/Makefile.in
      Removing install-sh
      Removing ltmain.sh
      Removing missing
      Removing src/Makefile.in
      Removing src/core/Makefile.in
      Removing src/display/Makefile.in
      Removing src/input/Makefile.in
      Removing src/rpc/Makefile.in
      Removing src/ui/Makefile.in
      Removing test/Makefile.in

    Edit 1: details about libtoolize and libtool

      [mirror@hugemeow rtorrent]$ libtoolize --version
      libtoolize (GNU libtool) 1.5.22
      Copyright (C) 2005  Free Software Foundation, Inc.
      This is free software; see the source for copying conditions.  There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
      [mirror@hugemeow rtorrent]$ libtool --version
      ltmain.sh (GNU libtool) 1.5.22 (1.1220.2.365 2005/12/18 22:14:06)
      Copyright (C) 2005  Free Software Foundation, Inc.
      This is free software; see the source for copying conditions.  There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
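
    For context on the failing macro: AM_PATH_CPPUNIT is defined in cppunit.m4, which ships with cppunit's development files, so aclocal reporting it "not found in library" usually just means that .m4 file is not on aclocal's search path. A non-root sketch, assuming cppunit has already been built and installed under $HOME/local (every path here is an assumption):

      # regenerate the build system with cppunit.m4 visible to aclocal
      aclocal -I $HOME/local/share/aclocal
      autoheader
      libtoolize --automake --copy
      automake --add-missing --copy
      autoconf
      ./configure --prefix=$HOME/local && make && make install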


  • ubuntu 8.04lts + rdiff-backup: Should I install from source instead of using apt repositories?

    - by egarcia
    I'm trying to use rdiff-backup in order to make backup copies of some folders inside an Ubuntu 8.04LTS server. I'm attempting to do the backup on another server with a more modern Ubuntu distro (9.10). I'll call this one the "client". rdiff-backup needs to be installed on both the client and the server. It is available on the apt repositories on both machines, so I installed it using sudo apt-get install rdiff-backup. The problem is that the version installed on the server is older than the one on the client (1.1.15 vs 1.2.8). Thus I get errors when I try do make them work together. So I need both versions to be the same. What is the standard procedure in these cases? Should I attempt to upgrade the version on the server, or downgrade the version on the client? And how whould I do that? In case it is useful, I'd like to point out that the rdiff-backup apt-package has some dependencies - librsync1 & python-support Attaching the errors I got in case they help: rdiff-backup egarcia@test::/var/rails/ohwr/backup /home/kikito/backup/files Warning: Local version 1.2.8 does not match remote version 1.1.15. Exception ' Warning Security Violation! Bad request for function: rpath.make_file_dict with arguments: ['/var/rails/ohwr/backup'] ' raised of class '<class 'rdiff_backup.Security.Violation'>': File "/usr/lib/pymodules/python2.6/rdiff_backup/Main.py", line 304, in error_check_Main try: Main(arglist) File "/usr/lib/pymodules/python2.6/rdiff_backup/Main.py", line 321, in Main rps = map(SetConnections.cmdpair2rp, cmdpairs) File "/usr/lib/pymodules/python2.6/rdiff_backup/SetConnections.py", line 78, in cmdpair2rp return rpath.RPath(conn, filename).normalize() File "/usr/lib/pymodules/python2.6/rdiff_backup/rpath.py", line 884, in __init__ else: self.setdata() File "/usr/lib/pymodules/python2.6/rdiff_backup/rpath.py", line 908, in setdata self.data = self.conn.rpath.make_file_dict(self.path) File "/usr/lib/pymodules/python2.6/rdiff_backup/connection.py", line 450, in __call__ return apply(self.connection.reval, (self.name,) + args) File "/usr/lib/pymodules/python2.6/rdiff_backup/connection.py", line 370, in reval if isinstance(result, Exception): raise result Traceback (most recent call last): File "/usr/bin/rdiff-backup", line 30, in <module> rdiff_backup.Main.error_check_Main(sys.argv[1:]) File "/usr/lib/pymodules/python2.6/rdiff_backup/Main.py", line 304, in error_check_Main try: Main(arglist) File "/usr/lib/pymodules/python2.6/rdiff_backup/Main.py", line 321, in Main rps = map(SetConnections.cmdpair2rp, cmdpairs) File "/usr/lib/pymodules/python2.6/rdiff_backup/SetConnections.py", line 78, in cmdpair2rp return rpath.RPath(conn, filename).normalize() File "/usr/lib/pymodules/python2.6/rdiff_backup/rpath.py", line 884, in __init__ else: self.setdata() File "/usr/lib/pymodules/python2.6/rdiff_backup/rpath.py", line 908, in setdata self.data = self.conn.rpath.make_file_dict(self.path) File "/usr/lib/pymodules/python2.6/rdiff_backup/connection.py", line 450, in __call__ return apply(self.connection.reval, (self.name,) + args) File "/usr/lib/pymodules/python2.6/rdiff_backup/connection.py", line 370, in reval if isinstance(result, Exception): raise result rdiff_backup.Security.Violation: Warning Security Violation! Bad request for function: rpath.make_file_dict with arguments: ['/var/rails/ohwr/backup']
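
    One workaround that avoids touching the 8.04 server's packages at all is to build the release that matches the client (1.2.8) into the server user's home directory and point the client at that copy with --remote-schema, which replaces the plain "rdiff-backup --server" invocation on the remote side. A sketch only; the /home/egarcia/local prefix is an assumption:

      rdiff-backup --remote-schema 'ssh -C %s /home/egarcia/local/bin/rdiff-backup --server' \
          egarcia@test::/var/rails/ohwr/backup /home/kikito/backup/files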


  • 401 - Unauthorized On Server 2008 R2 IIS 7.5

    - by mxmissile
    I have a web application deployed to a Server 2008 IIS 7.5 box. From remote it gives this error: 401 - Unauthorized: Access is denied due to invalid credentials. (Remote = desktops on the same LAN.) I have tried several remote clients using different browsers (IE, FF, and Chrome), all with the same result. Hitting the application from the desktop of the server itself works flawlessly. However, I have not tried Firebug on the server desktop. I would assume it's still issuing a 401 status code yet returning the content anyway. See Update #2.
    The application is using Anonymous Authentication. The application is written in .NET 4.0 ASP.NET using the MVC framework. Static content works fine, example: http://server.com/content/image.jpg. Sysinternals procmon returns these 2 results for each request: FAST IO DISALLOWED and PATH NOT FOUND. I have 2 other MVC apps running fine on the same server. I have checked the security on the folders and they all match. The app runs fine on a Server 2008 IIS 7.0 box. Nothing shows up in the Event log on the server related to this. Pulling my hair out here, any troubleshooting tips?
    UPDATE #1: This just gets more WTF as I dig. If I click on the Application in IIS Manager - Error Pages - Edit Feature Settings and select Detailed Errors, the app works remotely. Not leaving this on, so the problem is not solved yet, it's just more confusing.
    UPDATE #2: Using Firebug, I see that the Status is still 401 Unauthorized, but the Response is returning the application's correct HTML.
    UPDATE #3: Playing around with Failed Request Tracing, here is the WARNING Request Trace that is causing the 401:

      ModuleName           ManagedPipelineHandler
      Notification         128
      HttpStatus           401
      HttpReason           Unauthorized
      HttpSubStatus        0
      ErrorCode            0
      ConfigExceptionInfo
      Notification         EXECUTE_REQUEST_HANDLER
      ErrorCode            The operation completed successfully. (0x0)

    UPDATE #4: The regular IIS log is showing this:

      #Software: Microsoft Internet Information Services 7.5
      #Version: 1.0
      #Date: 2010-07-20 19:17:22
      #Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status time-taken
      2010-07-20 19:17:22 10.10.1.10 GET /Purchasing/Home - 80 - 10.10.1.12 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US;+rv:1.9.2.6)+Gecko/20100625+Firefox/3.6.6 401 0 0 4414


  • Add Mirror for volumes other than the last one in Windows 7 (disk "not up-to-date")

    - by rakslice
    I'm using Windows 7 x64 Ultimate. I have an existing 4TB disk with 3 NTFS volumes, a new 3TB blank disk, and I'm trying to mirror the volumes onto the new disk. My Windows install is on an SSD which is Disk 0. The 4TB disk with volumes is Disk 1, and the new blank disk is Disk 2. I can add a mirror successfully for the last volume, but when I try to add a mirror for the first volume I immediately get errors (see below). Is there something I special I need to do to add a mirror for a volume other than the last one? More info: I opened Disk Management, right-clicked on the first volume on the existing disk, went to Add Mirror, and selected the new disk. The first time I did this I was prompted to convert the new disk to a Dynamic Disk, which I approved. Subsequently I got a message: The operation failed to complete because the Disk Management console view is not up-to-date. Refresh the view by using the refresh task. If the problem persists close the Disk Management console, then restart Disk Management or restart the computer. I've refreshed disk management, restarted the computer, and converted the new disk to basic and back to dynamic, but I still get that error message. Looking around for suggestions of a workaround, I saw a suggestion to use the diskpart command line tool. Running diskpart from the Start Menu as Administrator, I did select volume 2 (the first volume I want to mirror) and then add disk 2 (the new disk), and received a somewhat similar error: Virtual Disk Service error: The disk's extent information is corrupted. DiskPart has referenced an object which is not up-to-date. Refresh the object by using the RESCAN command. If the problem persists exit DiskPart, then restart DiskPart or restart the computer. A rescan appears to be successful: DISKPART> select disk 2 Disk 2 is now the selected disk. DISKPART> rescan Please wait while DiskPart scans your configuration... DiskPart has finished scanning your configuration. but attempting to add the mirror again resulted in the same error. The only similar report I found online was this: http://www.sevenforums.com/hardware-devices/335780-unable-mirror-all-but-last-partition-drive.html Based on that I attempted to mirror the last volume on the disk to the new disk using diskpart, and that started successfully -- it is currently resynchronizing. More Background: In the course of dealing with a failing 3TB hard drive, I bought a replacement 4TB drive and installed it, then copied the partitions from the failing drive to it using Minitool Partition Wizard Home, and then removed the failing drive and was up and running again normally. Now I've received a warranty replacement for the failing drive, and installed it, and now I'm attempting to mirror my partitions to it.


  • Flickering issue in external monitor when used with Acer Aspire One D260 netbook (Intel GMA 3150)

    - by seyenne
    I recently purchased a Acer netbook, Aspire One D260. It runs flawlessly. Yesterday I bought a Samsung 23" TFT with a native resolution of 1920x1080. According to the information found in the internet and my local computer dealer, the Intel chipset can handle the native resolution of the monitor. However, this is only partly the case. I use the VGA cable to connect, the monitor instantly switches to the native resolution and now the problem: Occasionally, especially the first 2 hours after booting up, I have a flickering all over the screen, sometimes the entire screen is shaking and spinning around like crazy. I figured out that lowering the resolution avoids the flicker but this helps only for some time. I can rule out that it's the monitor's problem since I found no issues with another notebook. Right now, I have no problems with the netbook, for about 30 minutes I didn't experience any issues... But I don't know for how long, it occurs without warning :-) I'm worried that if I would bring the netbook back to the dealer and explain my problem, after testing it on an external screen in the local shop, everything works just fine... And I won't get helped with the problem because I can't prove it. (I'm currently in Thailand and over here, customer service is nothing like back home in Germany) What can I do? Is this a driver related issue? (I installed the latest GPU driver) Is it because of the VGA cable? (But why does it work sometimes without any problems and with no issues on the other notebook) I monitored the GPU/CPU temperature, nothing changes really over time.. Can it simply be a faulty GPU and is a replacement justifiable? I'm really stressed now because for the time I'm writing, the flickering didn't occur... but for sure, soon or later it will happen again.. I forgot to mention, the problem also happens if the netbook runs on battery, unplugged. So the only hardware that is plugged is the TFT screen. ...........and here it comes again, flickering has just begun. NEED HELP! Thank you all for reading through this and giving any suggestions if possible. Cheers


  • Providing DNS redirection to honeypot server for known bad domains

    - by syn-
    Currently running BIND on RHEL 5.4 and am looking for a more efficient manner of providing DNS redirection to a honeypot server for a large (30,000+) list of forbidden domains. Our current solution for this requirement is to include a file containing a zone master declaration for each blocked domain in named.conf. Subsequently, each of these zone declarations points to the same zone file, which resolves all hosts in that domain to our honeypot servers. ...basically this allows us to capture any "phone home" attempts by malware that may infiltrate the internal systems.
    The problem with this configuration is the large amount of time taken to load all 30,000+ domains as well as management of the domain list configuration file itself... if any errors creep into this file, the BIND server will fail to start, thereby making automation of the process a little frightening. So I'm looking for something more efficient and potentially less error prone.

    named.conf entry:

      include "blackholes.conf";

    blackholes.conf entry example:

      zone "bad-domain.com" IN {
        type master;
        file "/var/named/blackhole.zone";
        allow-query { any; };
        notify no;
      };

    blackhole.zone entries:

      $INCLUDE std.soa
      @    NS    ns1.ourdomain.com.
      @    NS    ns2.ourdomain.com.
      @    NS    ns3.ourdomain.com.
           IN    A     192.168.0.99
      *    IN    A     192.168.0.99
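
    On the fragility/automation concern specifically, one way to make regeneration safer is to build the zone-declaration file from a plain one-domain-per-line list and refuse to activate it unless it validates, so a bad entry can never take named down. A sketch only; bad-domains.txt and the /etc/blackholes.conf path are assumptions:

      awk '{ printf "zone \"%s\" IN { type master; file \"/var/named/blackhole.zone\"; allow-query { any; }; notify no; };\n", $1 }' \
          bad-domains.txt > /etc/blackholes.conf.new
      named-checkconf /etc/blackholes.conf.new \
          && mv /etc/blackholes.conf.new /etc/blackholes.conf \
          && rndc reconfig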


  • IE and Google Chrome timeout on an IIS6 hosted SSL page that Firefox handles well.

    - by Thomas
    Ok, here's the scenario: Up until a few weeks ago, none of us noticed anything wrong with the corporate website. People were using it without complaint. Then, a client complained that a specific page on the site was timing out for him, and only when he committed a POST action on a form filled with data. I checked it out, and it timed out for me, too. But it only timed out in Google Chrome and IE, not in Firefox. Additionally, the same page, on the same server, but served from a different domain name (one not under the protection of SSL, either) does not time out under any browser.
    To clarify: https://www.mysite.com/changes.php times out on POST, but the same with http works fine. That distinction (SSL vs. non-SSL) seems to be important, as nothing else has changed. Our certificate is valid, and Firefox detects no errors thrown by the page. I've looked at the Request and Response headers from the page, and they all follow the correct formats.
    Then, after wandering through the site, I noticed a few other things. Both IE and Chrome will frequently time out on any page that is PHP-based. They never time out on static images or html files. I've looked at the site from a variety of different servers, my home and work workstations, and my netbook. Because of that, I've discounted a viral infection, as I highly doubt a virus is going to hit every one of the machines to which I have access in exactly the same manner.
    My setup is:

      Server:  Win2k3, IIS 6, PHP 5.2.9-1.
      Clients: IE7, IE8, Chrome (regular and dev channel): frequent timeouts on PHP pages. Firefox 2, Firefox 3: no timeouts. Firebug shows no errors or even lengthy periods serving the pages.

    I've spent 2 days searching for any tech knowledge that I can find, and my search parameters are all too general. Everyone has problems loading SSL pages in IE and Chrome for a wide variety of reasons. The infrequent nature of the timeouts and the fact that there are no errors being reported anywhere is starting to drive me insane. Does anyone have any insight on a problem like this?
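
    A way to take the browsers (and their SSL stacks) out of the equation while still reproducing the POST is to drive the handshake and the request from the command line and watch the timing directly; www.mysite.com and the form field below are placeholders from the question, not real values:

      openssl s_client -state -connect www.mysite.com:443
      time curl -vk -d 'field=value' https://www.mysite.com/changes.php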


  • GeForce 8800GT not even giving basic output

    - by Sam
    My Dad bought a GeForce 8800GT graphics card quite a long time ago now. It has never worked in his PC. Print out from a dxdiag: System Information Time of this report: 4/13/2010, 19:52:40 Machine name: USER-PC Operating System: Windows Vista™ Home Premium (6.0, Build 6001) Service Pack 1 (6001.vistasp1_gdr.091208-0542) Language: English (Regional Setting: English) System Manufacturer: To Be Filled By O.E.M. System Model: To Be Filled By O.E.M. BIOS: Default System BIOS Processor: Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz (4 CPUs), ~2.3GHz Memory: 2046MB RAM Page File: 1045MB used, 3296MB available Windows Dir: C:\Windows DirectX Version: DirectX 10 DX Setup Parameters: Not found DxDiag Version: 6.00.6001.18000 32bit Unicode DxDiag Notes Display Tab 1: No problems found. Sound Tab 1: No problems found. Sound Tab 2: No problems found. Sound Tab 3: No problems found. Input Tab: No problems found. DirectX Debug Levels Direct3D: 0/4 (retail) DirectDraw: 0/4 (retail) DirectInput: 0/5 (retail) DirectMusic: 0/5 (retail) DirectPlay: 0/9 (retail) DirectSound: 0/5 (retail) DirectShow: 0/6 (retail) Display Devices Card name: ATI Radeon HD 2400 PRO Manufacturer: ATI Technologies Inc. Chip type: ATI Radeon Graphics Processor (0x94C3) DAC type: Internal DAC(400MHz) Device Key: Enum\PCI\VEN_1002&DEV_94C3&SUBSYS_00000000&REV_00 Display Memory: 1012 MB Dedicated Memory: 245 MB Shared Memory: 767 MB Current Mode: 1280 x 960 (32 bit) (75Hz) Monitor: Generic PnP Monitor Driver Name: atiumdag.dll,atiumdva.dat,atitmmxx.dll Driver Version: 7.14.0010.0523 (English) DDI Version: 10 Driver Attributes: Final Retail Driver Date/Size: 8/22/2007 02:43:14, 3021312 bytes That info is from the current card that is installed in it and has been installed since its purchase roughly 3-4 years ago. When I physically install the card I put it into a purple slot on the motherboard that the old card was in (if I go into the device manager and select properties on the current card it confirms that the slot is a "PCI Slot 16 (PCI bus 2, device 0, function 0)") and boot up the computer but get absolutely no output. The screen that we have registers that it is connected to something (by not displaying the screen it does when the cable is unplugged) but just remains blank, no output at all. I recently took the card to my University and one of my friends who is better with hardware issues than I am tried it in his system and it worked perfectly. No issues whatsoever. I do not have a spec list for his system but I could get one if you need it. If you need any more information on this issue I will be happy to supply you with it as I am starting to get very annoyed with this problem ^_^


  • XP Pro product keys

    - by Bill
    I have a very serious problem. After my XP Home OS was trashed by rogue software - a trial of a thing named TuneUp - I did a clean install, including HDD reformat of Windows XP Pro from a purchased OEM disk. This was Service Pack 2, subsequently upgraded to SP3. I had conclusively mislaid the product key. I had to access data on the machine VERY urgently and I did not then know that under some circumstances Microsoft might agree to provide a replacement. I found what I now know must have been a pirate key on the Net which enabled installation but NOT activation. This of course left me functional but 30 days before meltdown - about 20 days left as I write this. Various retailers want around £100 for retail with matching product key. - this would be paying twice over just to continue use on the same computer. I have neither need nor intention of installing XP Pro on any other computer. I have tried a number of applications claiming to deal with this problem but none of them work. A Belarc profile shows that the pirate key has replaced the original one on the system. I have now found two keys, one of which might be the original, but neither work. I am about to upgrade the HDD and it looks like I will just be passing the problem on when I install XP. I have retrieved a key from the disk, but it is seemingly one Microsoft use in production and does not work. It is 76487-OEM-0015242-71798. The keys I have, one of which which might or might not be the original, are CD87T-HFP4G-V7X7H-8VY68-W7D7M and FC8GV-8Y7G7-XKD7P-Y47XF-P829W (or P819W - I believe it to be the latter, but the box will not accept it). The pirate key which has enabled this install and which is now stored on the system but will not activate is QQHHK-T4DKG-74KG7-BQB9G-W47KG. In these circumstances is it likely that Microsoft would issue a replacement? Is there any other solution? I am not trying to defraud anyone, just to keep on using the product I legitimately bought. Bill


  • Building NanoBSD inside a jail

    - by ptomli
    I'm trying to setup a jail to enable building a NanoBSD image. It's actually a jail on top of a NanoBSD install. The problem I have is that I'm unable to mount the md device in order to do the 'build image' part. Is it simply not possible to mount an md device inside a jail, or is there some other knob I need to twiddle? On the host /etc/rc.conf.local jail_enable="YES" jail_mount_enable="YES" jail_list="build" jail_set_hostname_allow="NO" jail_build_hostname="build.vm" jail_build_ip="192.168.0.100" jail_build_rootdir="/mnt/zpool0/jails/build/home" jail_build_devfs_enable="YES" jail_build_devfs_ruleset="devfsrules_jail_build" /etc/devfs.rules [devfsrules_jail_build=5] # nothing Inside the jail [root@build /usr/obj/nanobsd.PROLIANT_MICROSERVER]# sysctl security.jail security.jail.param.cpuset.id: 0 security.jail.param.host.hostid: 0 security.jail.param.host.hostuuid: 64 security.jail.param.host.domainname: 256 security.jail.param.host.hostname: 256 security.jail.param.children.max: 0 security.jail.param.children.cur: 0 security.jail.param.enforce_statfs: 0 security.jail.param.securelevel: 0 security.jail.param.path: 1024 security.jail.param.name: 256 security.jail.param.parent: 0 security.jail.param.jid: 0 security.jail.enforce_statfs: 1 security.jail.mount_allowed: 1 security.jail.chflags_allowed: 1 security.jail.allow_raw_sockets: 0 security.jail.sysvipc_allowed: 0 security.jail.socket_unixiproute_only: 1 security.jail.set_hostname_allowed: 0 security.jail.jail_max_af_ips: 255 security.jail.jailed: 1 [root@build /usr/obj/nanobsd.PROLIANT_MICROSERVER]# mdconfig -l md2 md0 md1 md0 and md1 are the ramdisks of the host. bsdlabel looks sensible [root@build /usr/obj/nanobsd.PROLIANT_MICROSERVER]# bsdlabel /dev/md2s1 # /dev/md2s1: 8 partitions: # size offset fstype [fsize bsize bps/cpg] a: 1012016 16 4.2BSD 0 0 0 c: 1012032 0 unused 0 0 # "raw" part, don't edit newfs runs ok [root@build /usr/obj/nanobsd.PROLIANT_MICROSERVER]# newfs -U /dev/md2s1a /dev/md2s1a: 494.1MB (1012016 sectors) block size 16384, fragment size 2048 using 4 cylinder groups of 123.55MB, 7907 blks, 15872 inodes. with soft updates super-block backups (for fsck -b #) at: 160, 253184, 506208, 759232 mount fails [root@build /usr/obj/nanobsd.PROLIANT_MICROSERVER]# mount /dev/md2s1a _.mnt/ mount: /dev/md2s1a : Operation not permitted UPDATE: One of my colleagues pointed out There are some file systems types that can't be securely mounted within a jail no matter what, like UFS, MSDOFS, EXTFS, XFS, REISERFS, NTFS, etc. because the user mounting it has access to raw storage and can corrupt it in a way that it will panic entire system. From http://www.mail-archive.com/[email protected]/msg160389.html So it seems that the standard nanobsd.sh won't run inside a jail while it uses the md device to build the image. One potential solution I'll try is to chroot from the host into the build jail, rather than jexec a shell.
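
    Following up on the chroot idea at the end of the question: the restriction is on mounting UFS-on-md inside the jail, so running just the image-build step from the host, chrooted into the build environment, sidesteps it while keeping the toolchain in the jail's filesystem. A rough sketch only; the chroot path comes from jail_build_rootdir above, while the nanobsd.sh location and config file name are assumptions:

      chroot /mnt/zpool0/jails/build/home /bin/sh -c \
          'cd /usr/src/tools/tools/nanobsd && sh nanobsd.sh -c PROLIANT_MICROSERVER.conf'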


  • Backing Up vs. Redundancy

    - by TK Kocheran
    I'm currently in stage 2 of 3 of building my home workstation. What this means is that my RAID-0 array of solid state disks will be backed up nightly to a RAID-5 or RAID-6 array of traditional spinning hard disks. However, it recently dawned on me that redundancy is not backup. The main reason for setting up a RAID array with redundancy was to protect myself in the event of a drive failure to serve as an effective backup solution. Wait. What if a bolt of lightning finds a way to travel into my house, through my surge-protector, into my power supply and physically destroys all of my hard disks and SSDs? Well, in that case, I guess I'd be fine because I generally keep most important files (music, pictures, videos) stored in multiple places like on my laptop, my wife's laptop, and an encrypted USB hard drive. Wait. What if a giant hedgehog meteor attacks my house from space traveling at mach 3 and all machines and hard disks are blown to smithereens. Well, I guess I could find a way to do ridiculously slow and cumbersome rsyncs or backups to Amazon's Glacier. Wait. What if there's a nuclear apocalypse... and at this point I start laughing hysterically. At what point does backing up become irrelevant? I completely understand situation one (mechanical drive failure), situation two (workstation compromised or destroyed somehow), possibly even situation three (all machines and disks destroyed), but situation four? There's no questioning the need for backups. None. However, there are three questions I'd really like addressed: To what level should one backup? I definitely understand the merits of physical disk redundancy. I also believe in keeping important files on multiple machines and thinning out the possibility of losing all of my files. Online backups make sense, but they beg the following question. What should I be backing up remotely and how often? It's no problem storage-wise to back up important files (music, pictures, videos) and even configuration and temporal data for all of the machines in my network (all Linux based)... albeit locally. Transferring to the cloud is another story. Worst-case scenario, if I lost all of my configuration for my individual computers, the reality is that I probably lost the machines too. The cloud is a long way away from here; I can run backups over CAT-6 here and see 100MB/s easily, but I'm afraid that I'm only going to see 2MB/s at best when transferring up to the cloud.


  • Authenticate by libpam-mysql and libnss-mysql (CentOS)

    - by Chris
    I'm trying to get MySQL to function as a backend for authenticating users on CentOS 6.3. So far I have successfully installed and configured libnss-mysql. I can test this by doing: # groups testuser testuser : sftp Testuser is a member of the sftp group in fact, all MySQL based useraccounts will be hardcoded to it. The sftp group is chrooted and forced to use internal-sftp so they cannot do anything but access their home directory. Then I configured pam-mysql and PAM to allow mysql logins. This also works.. When SELinux is not enforcing. When I do setenforce 1 users can no longer login. Error: Permission denied, please try again. This is my pam_mysql.conf file: users.host=localhost users.db_user=nss-pam-user users.db_passwd=*********** users.database=sftpusers users.table=users users.user_column=username users.password_column=password users.password_crypt=6 verbose=1 My /etc/pam.d/sshd: #%PAM-1.0 auth sufficient pam_sepermit.so auth include password-auth auth required pam_mysql.so config_file=/etc/pam_mysql.conf account sufficient pam_nologin.so account include password-auth account required pam_mysql.so config_file=/etc/pam_mysql.conf password include password-auth # pam_selinux.so close should be the first session rule session required pam_selinux.so close session required pam_loginuid.so # pam_selinux.so open should only be followed by sessions to be executed in the user context session required pam_selinux.so open env_params session optional pam_keyinit.so force revoke session include password-auth And to be complete the contents of some log files.. /var/logs/secure Nov 20 14:52:20 hostname unix_chkpwd[4891]: check pass; user unknown Nov 20 14:52:20 hostname unix_chkpwd[4891]: password check failed for user (testuser) Nov 20 14:52:20 hostname sshd[4880]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.10.107 user=testuser Nov 20 14:52:22 sftpusers sshd[4880]: Failed password for testuser from 192.168.10.107 port 51849 ssh2 /var/logs/audit/audit.log type=USER_AUTH msg=audit(1353420107.070:812): user pid=5285 uid=0 auid=500 ses=24 subj=unconfined_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=pubkey acct="testuser" exe="/usr/sbin/sshd" hostname=? addr=192.168.10.107 terminal=ssh res=failed' type=USER_AUTH msg=audit(1353420112.312:813): user pid=5285 uid=0 auid=500 ses=24 subj=unconfined_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=PAM:authentication acct="testuser" exe="/usr/sbin/sshd" hostname=192.168.10.107 addr=192.168.10.107 terminal=ssh res=failed' type=USER_AUTH msg=audit(1353420112.456:814): user pid=5285 uid=0 auid=500 ses=24 subj=unconfined_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=password acct="testuser" exe="/usr/sbin/sshd" hostname=? addr=192.168.10.107 terminal=ssh res=failed' I tried to let audit2why explain the problem but it remains silent even though there are some errors. Does anyone see the problem? Thanks! EDIT: Turns out it's almost working with setenforce 0 I can mkdir foobar but if I do a single ls I get an error: Received message too long 16777216
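
    Since everything works the moment SELinux stops enforcing, the missing piece is almost certainly an AVC denial that audit2why simply isn't being shown. Pulling the recent AVCs explicitly and, if they look reasonable, generating a small local policy module is one hedged way forward; pam_mysql_local is just a made-up module name:

      ausearch -m avc -ts recent                                    # show the actual denials
      grep denied /var/log/audit/audit.log | audit2allow -M pam_mysql_local
      semodule -i pam_mysql_local.pp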


  • How Do I Properly Run OfflineIMAP in a Crontab

    - by alharaka
    Installed Fedora. # cat /etc/redhat_release | awk ' { print F "> " $0; print ""; }' Fedora release 14 (Laughlin) Installed offlineimap from yum, cuz I'm lazy these days. # yum info offlineimap | awk ' { print F "> " $0; print ""; }' Loaded plugins: langpacks, presto, refresh-packagekit Adding en_US to language list Installed Packages Name : offlineimap Arch : noarch Version : 6.2.0 Release : 2.fc14 Size : 611 k Repo : installed From repo : fedora Summary : Powerful IMAP/Maildir synchronization and reader support URL : http://software.complete.org/offlineimap/ License : GPLv2+ Description : OfflineIMAP is a tool to simplify your e-mail reading. With : OfflineIMAP, you can read the same mailbox from multiple : computers. You get a current copy of your messages on each : computer, and changes you make one place will be visible on all : other systems. For instance, you can delete a message on your home : computer, and it will appear deleted on your work computer as : well. OfflineIMAP is also useful if you want to use a mail reader : that does not have IMAP support, has poor IMAP support, or does : not provide disconnected operation. And, lo and behold, every time I run offlineimap and try to redirect output in a crontab, it does not work. Below is my .offlineimaprc. [general] ui = TTY.TTYUI accounts = Personal, Work maxsyncaccounts = 3 [Account Personal] localrepository = Local.Personal remoterepository = Remote.Personal [Account Work] localrepository = Local.Work remoterepository = Remote.Work [Repository Local.Personal] type = Maildir localfolders = ~/mail/gmail [Repository Local.Work] type = Maildir localfolders = ~/mail/companymail [Repository Remote.Personal] type = IMAP remotehost = imap.gmail.com remoteuser = [email protected] remotepass = password ssl = yes maxconnections = 4 # Otherwise "deleting" a message will just remove any labels and # retain the message in the All Mail folder. realdelete = no [Repository Remote.Work] type = IMAP remotehost = server.company.tld remoteuser = username remotepass = password ssl = yes maxconnections = 4 I have tried TTY.TTYUI, NonInteractive.Quiet and NonInteractive.Basic with different variations. With or without redirection, the crontab entries I try cause problems. $ crontab -l | awk ' { print F "> " $0; print ""; }' */5 * * * * offlineimap >> ~/mail/logs/offlineimap.log 2>&1 */5 * * * * offlineimap I always get the same damn error ERROR: No UIs were found usable!. What am I doing wrong!?
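
    "ERROR: No UIs were found usable!" is what offlineimap prints when the UI it was asked for needs a terminal and none is attached, which is exactly the situation under cron. One thing to try, therefore, is forcing a non-interactive UI on the command line rather than relying on the config file; the option names below match the 6.2-era flags but should be treated as an assumption:

      */5 * * * * offlineimap -o -u Noninteractive.Basic >> $HOME/mail/logs/offlineimap.log 2>&1

    (-o runs a single sync and exits, which also keeps overlapping five-minute runs from piling up.)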


  • Wireless 802.11x Disconnects

    - by BillP3rd
    I've looked at (and read) all of the similar questions and none of them get exactly to the issue I'm having at home. I have an 802.11g access point (two, actually, with different SSIDs and on different channels). One is an Airlink AR525W. The other is a Linksys WRT54G v.2. The issue is that at random times, my laptop will lose its wireless connection. This occurs regardless of which access point I'm connected to. When I lose the connection, the affected AP no longer appears in the list of available APs. Also, it doesn't have anything to do with walls or distance. It can happen within 30' and when my laptop is literally within line-of-sight. When it loses the signal, it can take from 10 to 30 minutes to reconnect and it always will without intervention. I've done all the “standard” things to troubleshoot the problem and it has improved. For example, I surveyed other access points in my vicinity and have selected a different channel for each of my APs that no one else nearby is using. Both APs are configured WPA2/AES. I'm down to wondering [Note: This is not a shopping question. I'm not buying a new AP] if the fact that I didn't drop two bills on my APs and instead opted for more modest solutions has anything to do with it? I've oft wondered why anyone would go for the high-end AP when they didn't have to. Also, I am aware of DD-WRT and have chosen not to go there because only one of my APs is supported. Oh, and one final thing. It an HP x64 laptop running Windows 7 Ultimate. The wireless interface is an Atheros AR9285 802.11b/g/n WiFi Adapter. All the latest drivers and service packs have been applied. It did the same thing with my old laptop (a Lenovo) so I don't the problem is in the laptop. It's really annoying when this happens and suggestions of things I haven't thought of or may have overlooked (No, really. As unlikely as it is, I admit that I may have overlooked something :-)) are appreciated.


  • Rails 3 shows 404 error instead of index.html (nginx + unicorn)

    - by Miko
    I have an index.html in public/ that should be loading by default but instead I get a 404 error when I try to access http://example.com/ The page you were looking for doesn't exist. You may have mistyped the address or the page may have moved. This has something to do with nginx and unicorn which I am using to power Rails 3 When take unicorn out of the nginx configuration file, the problem goes away and index.html loads just fine. Here is my nginx configuration file: upstream unicorn { server unix:/tmp/.sock fail_timeout=0; } server { server_name example.com; root /www/example.com/current/public; index index.html; keepalive_timeout 5; location / { try_files $uri @unicorn; } location @unicorn { proxy_pass http://unicorn; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_redirect off; } } My config/routes.rb is pretty much empty: Advertise::Application.routes.draw do |map| resources :users end The index.html file is located in public/index.html and it loads fine if I request it directly: http://example.com/index.html To reiterate, when I remove all references to unicorn from the nginx conf, index.html loads without any problems, I have a hard time understanding why this occurs because nginx should be trying to load that file on its own by default. -- Here is the error stack from production.log: Started GET "/" for 68.107.80.21 at 2010-08-08 12:06:29 -0700 Processing by HomeController#index as HTML Completed in 1ms ActionView::MissingTemplate (Missing template home/index with {:handlers=>[:erb, :rjs, :builder, :rhtml, :rxml, :haml], :formats=>[:html], :locale=>[:en, :en]} in view paths "/www/example.com/releases/20100808170224/app/views", "/www/example.com/releases/20100808170224/vendor/plugins/paperclip/app/views", "/www/example.com/releases/20100808170224/vendor/plugins/haml/app/views"): /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/paths.rb:14:in `find' /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/lookup_context.rb:79:in `find' /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/base.rb:186:in `find_template' /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/render/rendering.rb:45:in `_determine_template' /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/render/rendering.rb:23:in `render' /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/haml-3.0.15/lib/haml/helpers/action_view_mods.rb:13:in `render_with_haml' etc... -- nginx error log for this virtualhost comes up empty: 2010/08/08 12:40:22 [info] 3118#0: *1 client 68.107.80.21 closed keepalive connection My guess is unicorn is intercepting the request to index.html before nginx gets to process it.
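
    The production log above shows the request for / reaching Rails (HomeController#index) instead of being served from public/, so one commonly suggested tweak is to let try_files consider the directory index before falling back to the app. A sketch of that one-line change; the sites-available path is an assumption:

      sudo sed -i 's|try_files $uri @unicorn;|try_files $uri $uri/index.html @unicorn;|' /etc/nginx/sites-available/example.com
      sudo nginx -t && sudo /etc/init.d/nginx reload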


  • How to configure Transparent IP Address Sharing (TAS) on a Mediatrix 4102 with DGW 2.0 firmware?

    - by Pascal Bourque
    I am making the switch to VoIP. I chose voip.ms as my service provider and the Mediatrix 4102 as my ATA. One reason why I chose the Mediatrix over other popular consumer ATAs is that it's supposed to be easy to place it in front of the router, so it can give priority to its own upstream traffic over the home network's upstream traffic. This is supposed to work transparently, with the ATA and router sharing the same public IP address (the one obtained from the modem). They call this feature Transparent IP Address Sharing, or TAS. Their promotional brochure describes it like this:

      The Mediatrix 4102 also uses its innovative TAS (Transparent IP Address Sharing) technology and an embedded PPPoE client to allow the PC (or router) connected to the second Ethernet port to have the same public IP address, eliminating the need for private IP addresses or address translations.

    I am interested in this feature because my router, an Apple Time Capsule, doesn't support QoS and cannot give priority to the voice packets if the ATA is behind the router. However, after hours of searching the web, reading the documentation, and good ol' trial and error, I haven't been able to configure the Mediatrix to run in this mode.
    Then I found a version of the manual that looks like it was for a previous version of the firmware (SIP), where there is an entire section dedicated to configuring TAS (starting at page 209). But my Mediatrix comes with the DGW 2.0 firmware, whose documentation does not mention TAS at all. So I tried to follow the TAS setup instructions from the SIP documentation and apply them to my DGW firmware, using the Variable Mapping Between SIP v5.0 and DGW v2.0 document as a reference, but with no success. Some required SIP variables don't have an equivalent in DGW. So it looks like the DGW firmware does not support TAS at all, or if it does they are not doing anything to help us set it up.
    So right now, the Mediatrix is behind the router and VoIP works perfectly except when my upstream bandwidth is saturated. My questions are:

      1. Is downgrading to the SIP firmware the only way to have my Mediatrix 4102 run in TAS mode?
      2. If not, does anybody know how to set up TAS on the DGW firmware?
      3. Is TAS mode the only way to give priority to the voice packets if I want to keep my current router (Apple Time Capsule)?

    Thanks!


  • PHPMyAdmin works with https Only (not http)

    - by 01010011
    Hi, I've been having a problem getting phpMyAdmin to work consistently on my XP desktop and laptop computers for months now. When I type localhost/phpmyadmin into Chrome on both machines, I kept getting Error #1045 Access Denied for user at root@localhost (using password yes). Eventually, I realized that I had two (2) versions of MySQL installed (XAMPP and MySQL Server 5.1) on both machines. So I uninstalled MySQL Server 5.1 from the desktop and phpMyAdmin worked. But when I uninstalled MySQL Server 5.1 from my laptop, it did not work. I realized I could still get into the MySQL command-line client using my password and that my databases were still intact. So I uninstalled and reinstalled XAMPP on the laptop and phpMyAdmin worked after that.
    Now I have a new problem. phpMyAdmin's home page has a message at the bottom: "Your configuration file contains settings (root with no password) that correspond to the default MySQL privileged account. Your MySQL server is running with this default, is open to intrusion, and you really should fix this security hole by setting a password for user 'root'." So I located the following lines in the config.inc.php file:

      /* Authentication type and info */
      $cfg['Servers'][$i]['auth_type'] = 'config';
      $cfg['Servers'][$i]['user'] = 'root';
      $cfg['Servers'][$i]['password'] = '';
      $cfg['Servers'][$i]['AllowNoPassword'] = true;

    and I just changed the last 2 lines as follows:

      $cfg['Servers'][$i]['password'] = 'mypassword';
      $cfg['Servers'][$i]['AllowNoPassword'] = false;

    As soon as I did that and tried to access phpMyAdmin again, I got the Error #1045 message again, but when I tried https://localhost/phpmyadmin/ I got a red page saying this site's certificate is not trusted, would you like to proceed anyway. And now it only works using https. I would really like to settle all my phpMyAdmin problems once and for all, so here are my questions:

      1. Why does my laptop only access phpMyAdmin via https?
      2. How do I change my password in my configuration file?

    Also, if you have any other tips regarding phpMyAdmin, they are very welcome. Thanks in advance.
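
    One detail worth calling out for question 2: editing config.inc.php only changes the password phpMyAdmin sends; the MySQL root account itself still has to be given that same password, otherwise the #1045 "using password: YES" error comes straight back. A short sketch using the command-line tools that ship with the MySQL install ('mypassword' stands in for whatever goes into config.inc.php):

      mysqladmin -u root password 'mypassword'
      # or, from the MySQL command-line client:
      #   SET PASSWORD FOR 'root'@'localhost' = PASSWORD('mypassword');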

