Search Results

Search found 13450 results on 538 pages for 'recent items'.


  • Acronis Disk Director AFTER Clone Disk error: PXE-E61: Media test failure, check cable

    - by Kairan
    Used Acronis Disk Director on my desktop, plugged in the laptop drive, a 240 GB SSD (USB), and the new hard drive, a 500 GB SSD (USB), and the copy seemed to be fine. I didn't see any error messages, but I didn't stare at it for three hours either. The clone of course covered the Toshiba hidden restore partition, the primary partition (the C: drive) and the active (boot?) partition, and yes, I did check the box to copy the NT signature. The computer boots up fine most of the time, but it seems that when the computer goes to sleep (I believe it's sleep; it's hard to do much testing during school), hibernates or reboots, it will sometimes display this message:

      Intel(R) Boot Agent GE v1.3.52
      Copyright (C) 1997-2010, Intel Corporation
      PXE-E61: Media test failure, check cable
      PXE-M0F: Exiting Intel Boot Agent
      Insert system disk in drive. Press any key when ready...

    Of course, pressing any key does nothing but repeat a similar message. However, if I press the power button on the laptop (Toshiba Portege R705, Win 7 Pro 64-bit), it puts the computer into hibernation. After hibernating, I press the power button again and it comes out of hibernation without any of the odd messages or problems described above... so apparently that is my temporary fix. Another recent issue I noticed: on occasion, when creating a new folder or modifying something in the system variables or other random areas, I will get the message "The stub received bad data"; I simply retry the task and it works. Perhaps these two issues are linked.

    Read the article

  • Restoring a failed SBS 2000 box...

    - by Brad Pears
    Hi there, I read a post where you mentioned you had a PERC card blow up on you in an SBS box... I've got a similar situation: one of my RAID drives failed, and then the power supply failed before I could replace the drive. I replaced the power supply and the failed drive and reconfigured the RAID array. I had a recent full backup of my Win2k SBS's C: drive stored on my Symantec Backup Exec server, so I installed Win2k Server on the C: partition and then, once I had that up and running, installed the Backup Exec agent so I could restore the entire C: drive, including the system state. This all worked just fine until I had to reboot: I received an "incorrect drive configuration" error, and then it hangs. I figure that likely makes sense, because my RAID array is configured slightly differently now, in that the partitions may be sized ever so slightly differently than they were before. Is there a way I can restore from my backup but exclude some of the registry and hidden boot files it wants to restore, so that it boots with the configuration now active on that machine rather than the pre-blow-up configuration files? I also read a post indicating you might have to install the exact same service pack, etc., before attempting a restore, but that does not make sense to me, since the entire C: drive's contents are going to be overwritten by the restore anyway; the basic OS install is just there to get the Backup Exec agent installed. I can't understand why one would need to install the exact same SP level. Can you shed some light on what I might be able to do to get this thing up and running? Thanks, Brad

    Read the article

  • Why can't we treat SSL certs like PGP keys instead of trusting CAs?

    - by yarun can
    I do not know all the technical aspects of SSL and the server/client-side implications and implementations, but I understand them well enough from a user's point of view to use SSL and encryption daily. I was thinking about how silly it is to trust known and unknown CAs when it comes to certificates for our servers. There have been many cases of misconduct, misuse, compromise and theft of certificates and CA keys at those companies, and on top of those known issues we also have to pay these guys regularly. I am wondering: why can't we use and treat web server certificates the way we use our PGP keys? So I would sign an SSL certificate and send it to a central server, and then each user accessing my site would check the validity and the keys against some central server (like PGP key servers). Is this a stupid idea? If so, what could be a better idea than the current system of issuing valid certificates? I am looking for a better, more secure idea. Naturally this is not a solution to an existing problem; rather, it would be a hypothetical solution for some future implementation of the currently messed-up web of trust on the internet, prompted by recent news about the NSA and its criminal buddies around the world. Thanks
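
    A PGP-style alternative is essentially trust-on-first-use pinning. Below is a minimal Python sketch of the idea, fetching a server's certificate, hashing it, and comparing it against a locally stored fingerprint; the hostname and pin directory are hypothetical, and a real deployment would consult a key-server-like authority rather than a local file:

      import hashlib
      import os
      import ssl

      PIN_DIR = os.path.expanduser("~/.cert-pins")  # hypothetical local pin store

      def fingerprint(host, port=443):
          # Fetch the server certificate (PEM) and hash its DER encoding.
          pem = ssl.get_server_certificate((host, port))
          der = ssl.PEM_cert_to_DER_cert(pem)
          return hashlib.sha256(der).hexdigest()

      def check_pin(host):
          os.makedirs(PIN_DIR, exist_ok=True)
          path = os.path.join(PIN_DIR, host)
          fp = fingerprint(host)
          if not os.path.exists(path):
              with open(path, "w") as f:  # first use: trust and record
                  f.write(fp)
              return "pinned on first use"
          with open(path) as f:
              return "ok" if f.read() == fp else "MISMATCH: possible MITM"

      print(check_pin("example.com"))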

    Read the article

  • Tuning Windows 7 for use in a VM

    - by intuited
    I'm running Windows 7 in a VirtualBox virtual machine and would like to make it run in a more streamlined fashion. I'll be using the install primarily for testing web apps, and have no need for it to run quickly. I would like it to run with minimal memory requirements, and with minimal changes to its virtual hard drive's contents, since changes to the hard drive contents (for example, the paging file) result in larger snapshot sizes. Another recent post of mine seems to be related to this issue, but does not directly address issues with Windows. One concern I have is that Windows seems to be using 17% of its paging file even with over 900 MB of memory marked "Standby" or "Free". My uneducated guess is that this is being used to store indexes or some other data that helps to speed up the system but is not really necessary. I'm also wondering whether it's normal for Windows to use over 500 MB of "In Use" memory with no apps running. Will this amount decrease if I reduce the amount of "installed" memory in the VM? What steps can I take to reduce the system's memory footprint without incurring an increase in paging file usage?

    Read the article

  • Best practices for settings for Oracle database creation

    - by Gary
    When installing an Oracle database, what non-default settings would you normally apply (or consider applying)? I'm not after hardware-dependent settings (e.g. memory allocation) or file locations, but more general items. Similarly, anything that is a requirement for one specific application rather than generally applicable isn't really useful. Do you separate code/API schemas (PL/SQL owners) from data schemas (table owners)? Do you use default or non-default roles, and if the latter, do you password-protect the role? I'm also interested in whether there are any places where you REVOKE a GRANT that is installed by default; that may be version-dependent, as 11g seems more locked down in its default install. These are the ones I used in a recent setup. I'd like to know whether I missed anything, or where you disagree (and why).

    Database parameters:
      Auditing (AUDIT_TRAIL to DB and AUDIT_SYS_OPERATIONS to YES)
      DB_BLOCK_CHECKSUM and DB_BLOCK_CHECKING (both to FULL)
      GLOBAL_NAMES to TRUE
      OPEN_LINKS to 0 (did not expect them to be used in this environment)
      Character set: AL32UTF8

    Profiles: I created an amended password verify function that used the APEX dictionary table (FLOWS_030000.wwv_flow_dictionary$) as an extra check to prevent simple passwords.

    Developer logins:
      CREATE PROFILE profile_dev LIMIT
        FAILED_LOGIN_ATTEMPTS 8 PASSWORD_LIFE_TIME 32 PASSWORD_REUSE_TIME 366
        PASSWORD_REUSE_MAX 12 PASSWORD_LOCK_TIME 6 PASSWORD_GRACE_TIME 8
        PASSWORD_VERIFY_FUNCTION verify_function_11g SESSIONS_PER_USER unlimited
        CPU_PER_SESSION unlimited CPU_PER_CALL unlimited PRIVATE_SGA unlimited
        CONNECT_TIME 1080 IDLE_TIME 180
        LOGICAL_READS_PER_SESSION unlimited LOGICAL_READS_PER_CALL unlimited;

    Application login:
      CREATE PROFILE profile_app LIMIT
        FAILED_LOGIN_ATTEMPTS 3 PASSWORD_LIFE_TIME 999 PASSWORD_REUSE_TIME 999
        PASSWORD_REUSE_MAX 1 PASSWORD_LOCK_TIME 999 PASSWORD_GRACE_TIME 999
        PASSWORD_VERIFY_FUNCTION verify_function_11g SESSIONS_PER_USER unlimited
        CPU_PER_SESSION unlimited CPU_PER_CALL unlimited PRIVATE_SGA unlimited
        CONNECT_TIME unlimited IDLE_TIME unlimited
        LOGICAL_READS_PER_SESSION unlimited LOGICAL_READS_PER_CALL unlimited;

    Privileges for a standard schema-owner account:
      CREATE CLUSTER, CREATE TYPE, CREATE TABLE, CREATE VIEW, CREATE PROCEDURE,
      CREATE JOB, CREATE MATERIALIZED VIEW, CREATE SEQUENCE, CREATE SYNONYM,
      CREATE TRIGGER
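
    On the REVOKE question, one way to see what a default install has handed out is to query the data dictionary. This is a minimal sketch, assuming the cx_Oracle driver and a DBA-level account; the connect string is a placeholder:

      import cx_Oracle

      # Connect as a privileged account; replace the placeholder credentials.
      conn = cx_Oracle.connect("system/password@dbhost/orcl")
      cur = conn.cursor()

      # Object privileges granted to PUBLIC on SYS objects by default;
      # these are the usual candidates to review and possibly REVOKE.
      cur.execute("""
          SELECT table_name, privilege
            FROM dba_tab_privs
           WHERE grantee = 'PUBLIC' AND owner = 'SYS'
           ORDER BY table_name""")
      for table_name, privilege in cur:
          print(table_name, privilege)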

    Read the article

  • Web hosting for multiple web sites providing system isolation

    - by Justin
    We have a small number of projects where we expect the client will not be maintaining the installed versions of the applications we install to power the site (such as Drupal). Given that an important part of security is keeping things updated, we don't want to host these projects on our Plesk-powered dedicated servers that currently host lots of our other clients' websites. Our goal is to find a host where we can deploy isolated instances (be these slices, virtual servers, grid servers, etc.) for each individual web site (or group of 2-3 sites) as we need them. These instances would be completely separate, so that if one web site were hacked it would not impact any other site.

    Typical hosting requirements:
      Linux
      Apache
      PHP 5
      MySQL
      Supports Drupal
      Ability to set up a cron task (but we don't need SSH access)
      Daily backups
      Virtualized/cloud hosting (we want to avoid shared)
      Pricing per site around $25/month
      OS is patched automatically

    Some options we have considered but won't work:
      MediaTemple: Two major data-center-wide security incidents and recent downtime foster doubt about this host's technical ability.
      Slicehost: This would require us to manage the entire server, which we don't want to do.
      Rackspace Cloud Sites (formerly Mosso): No backup options.

    Do you have any recommended hosting options given these requirements?

    Read the article

  • Updating Dell PowerEdge firmware on Ubuntu?

    - by Shtééf
    The company I work for recently got its hands on a batch of second-hand PowerEdge SC1425 machines, and we'd like to put them to good use. Our operating system of choice is Ubuntu Server 10.04 64-bit, which installs just peachy on this type of machine. Now I'd like to install the firmware updates from Dell, which are apparently marked as recommended; this includes updates for the BIOS, the BMC, and possibly some other hardware. I find it incredibly difficult to locate the files on the Dell website and to install any of them on an Ubuntu system. I downloaded the file OM_6.2.0_SUU_A01.iso. I believe I've read that the SUU DVD should be able to update any recent PowerEdge. Is this correct? Is this the latest version? Besides the version number, does A01 have any meaning? Is this image bootable? (At the moment, I've just nosed around with a loop-device mount.) Running /bin/bash ./suu from the DVD, I get:

      # /bin/bash ./suu
      ./suu: line 262: ./java/linux/i386/bin/java: No such file or directory

    The file exists and is executable, though, but I cannot execute it directly from the shell either.
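
    That "No such file or directory" for a file that plainly exists is the classic symptom of a 32-bit ELF binary on a 64-bit system that lacks the 32-bit loader and libraries (note the i386 in the path; on that Ubuntu release, installing ia32-libs is the usual cure). A quick Python sketch to confirm, reading the ELF class byte from the header; the path is the one from the error above:

      def elf_class(path):
          # Byte 4 of an ELF header (EI_CLASS) is 1 for 32-bit, 2 for 64-bit.
          with open(path, "rb") as f:
              header = f.read(5)
          if header[:4] != b"\x7fELF":
              return "not an ELF binary"
          return {1: "32-bit", 2: "64-bit"}.get(header[4], "unknown")

      print(elf_class("./java/linux/i386/bin/java"))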

    Read the article

  • FreeNAS AFP Doesn't Authenticate

    - by Timothy R. Butler
    I just set up a FreeNAS 8.0.3 server and am trying to use its AFP (Netatalk) service to access it from a Mac OS X Lion system. I created the ZFS volume, set its permissions to include my user in its owner group (and set group write permissions), created an AFP share with AFP3, and told that share to "allow" @uninet (my group). I have a user on the server named tbutler, matching the user on my Mac. I can see the server, "Beatrice," in Finder. When I try to log in via Finder using "Connect As...", the user "tbutler" and the proper password, I am returned to the main Finder window, with the black bar now saying "Connection Failed."

    Here's the most recent data from /var/messages on the server, which shows me trying to log in both as a "Registered User" and a "Guest":

      Jul 30 00:29:07 freenas afpd[8972]: AFP3.3 Login by nobody
      Jul 30 00:29:08 freenas afpd[8972]: AFP logout by nobody
      Jul 30 00:29:08 freenas afpd[8972]: dsi_stream_read: len:0, unexpected EOF
      Jul 30 00:29:08 freenas afpd[8972]: afp_over_dsi: client logged out, terminating DSI session
      Jul 30 00:29:08 freenas afpd[8972]: AFP statistics: 0.14 KB read, 0.12 KB written
      Jul 30 00:29:14 freenas afpd[8975]: AFP3.3 Login by tbutler
      Jul 30 00:29:14 freenas afpd[8975]: AFP logout by tbutler
      Jul 30 00:29:14 freenas afpd[8975]: dsi_stream_read: len:0, unexpected EOF
      Jul 30 00:29:14 freenas afpd[8975]: afp_over_dsi: client logged out, terminating DSI session
      Jul 30 00:29:14 freenas afpd[8975]: AFP statistics: 0.62 KB read, 0.48 KB written
      Jul 30 00:29:20 freenas afpd[8978]: AFP3.3 Login by tbutler
      Jul 30 00:29:20 freenas afpd[8978]: AFP logout by tbutler
      Jul 30 00:29:20 freenas afpd[8978]: dsi_stream_read: len:0, unexpected EOF
      Jul 30 00:29:20 freenas afpd[8978]: afp_over_dsi: client logged out, terminating DSI session
      Jul 30 00:29:20 freenas afpd[8978]: AFP statistics: 0.62 KB read, 0.48 KB written
      Jul 30 00:29:27 freenas afpd[8983]: AFP3.3 Login by nobody

    (My clock is clearly not properly set, but be that as it may...) Any suggestions?

    UPDATE: Apparently this problem occurs if one gives the AFP share a password in the AFP share settings box. When I removed the password and tried to log in using a user account again, it worked just fine.

    Read the article

  • Networking problem: MrxSmb event 50 "Delayed Write Failed" errors occurring all of a sudden

    - by Johnny Musso
    JUST THIS MONTH, we have started getting reports from a number of very stable clients that MrxSmb event ID 50 errors keep appearing in their system event logs. Otherwise, they do not appear to have any networking problems, except that there is a critical legacy application which seems to either be generating the MrxSmb errors or having errors occur because of them. The legacy application is comprised of 16-bit and 32-bit code and has not been changed or recompiled in many years; it has always been stable on Windows XP systems. The customers that have the problem usually have a small (5 clients or fewer) peer-to-peer network of all Windows XP systems, with all service packs loaded on the XP machines.

    Note: The only thing that seems to correct the problem is disabling opportunistic locking. I don't like this solution because it seems to slow down the network and sometimes causes record-locking issues between users (on some networks). Also, this seems to have just started happening, as if a Windows update for XP had caused it; however, I have removed recent updates and it did not correct the issue. Thanks in advance for any help you can provide.
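
    For reference, disabling opportunistic locking on the machine that serves the files is a registry change (the EnableOplocks value under the LanmanServer parameters). A Python sketch of the equivalent edit is below, offered as illustration only, since registry changes deserve care and a reboot afterwards:

      import winreg  # named _winreg on Python 2

      key = winreg.OpenKey(
          winreg.HKEY_LOCAL_MACHINE,
          r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters",
          0, winreg.KEY_SET_VALUE)
      # 0 disables opportunistic locking on the serving side.
      winreg.SetValueEx(key, "EnableOplocks", 0, winreg.REG_DWORD, 0)
      winreg.CloseKey(key)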

    Read the article

  • Kerberos & localhost

    - by Alex Leach
    I've got a Kerberos v5 server set up on a Linux machine, and it's working very well when connecting to other hosts (using Samba, LDAP or SSH) for which there are principals in my Kerberos database. Can I use Kerberos to authenticate against localhost, though? And if I can, are there reasons why I shouldn't? I haven't made a Kerberos principal for localhost. I don't think I should; instead, I think the principal should resolve to the machine's full hostname. Is that possible? I'd ideally like a way to configure this on just one server (whether Kerberos, DNS, or SSH), but if each machine needs some custom configuration, that'd work too. e.g.

      $ ssh -v localhost
      ...
      debug1: Unspecified GSS failure.  Minor code may provide more information
      Server host/[email protected] not found in Kerberos database
      ...

    EDIT: So I had a bad /etc/hosts file. If I remember correctly, the original version I got with Ubuntu had two 127.0.x.x IP addresses, something like:

      127.0.0.1 localhost
      127.0.1.1 hostname

    For no good reason, I'd changed mine a long time ago to:

      127.0.0.1 localhost
      127.0.0.1 hostname.example.com hostname

    This seemed to work fine with everything until I tried out SSH with Kerberos (a recent endeavour). Somehow this configuration led to sshd resolving the machine's Kerberos principal to "host/localhost@\n", which I suppose makes sense if it uses /etc/hosts for forward and reverse DNS lookups in preference to external DNS. So I commented out the latter line, and sshd magically started authenticating with gssapi-with-mic. Awesome. (Then I investigated localhost and asked the question.)
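
    A quick way to see which name a host will build its principal from is to replicate the forward and reverse lookups sshd performs. A small sketch using Python's socket module; the results are simply whatever /etc/hosts and DNS return on that machine:

      import socket

      # Forward lookup: what does "localhost" canonicalise to?
      print(socket.getfqdn("localhost"))

      # Reverse lookup on 127.0.0.1: the first name returned here is
      # typically what GSSAPI uses to form the host/... principal.
      name, aliases, addrs = socket.gethostbyaddr("127.0.0.1")
      print(name, aliases)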

    Read the article

  • What could cause these "failed to authenticate" logs other than failed login attempts (OSX)?

    - by Tom
    I've found this in the Console logs:

      10/03/10 3:53:58 PM SecurityAgent[156] User info context values set for tom
      10/03/10 3:53:58 PM authorizationhost[154] Failed to authenticate user <tom> (tDirStatus: -14090).
      10/03/10 3:54:00 PM SecurityAgent[156] User info context values set for tom
      10/03/10 3:54:00 PM authorizationhost[154] Failed to authenticate user <tom> (tDirStatus: -14090).
      10/03/10 3:54:03 PM SecurityAgent[156] User info context values set for tom
      10/03/10 3:54:03 PM authorizationhost[154] Failed to authenticate user <tom> (tDirStatus: -14090).

    There are about 11 of these "failed to authenticate" messages logged in quick succession. It looks to me like someone is sitting there trying to guess the password. However, when I tried to replicate this I got the same log messages, except that this extra message appears after five attempts:

      13/03/10 1:18:48 PM DirectoryService[11] Failed Authentication return is being delayed due to over five recent auth failures for username: tom.

    I don't want to accuse someone of trying to break into an account without being sure that they were actually trying to break in. My question is this: is it almost definitely someone guessing a password, or could the 11 "failed to authenticate" messages be caused by something else?

    Read the article

  • Servers in DMZ will not communicate with each other

    - by Tukaro
    (Full disclosure: I rate barely above "noob" when it comes to networking.) My workplace recently got a new web server. Since we're nearing the end of an overhaul of our website, we're doing a slooooow migration between the old web server and the new one. The old web server (we'll call it SERVOLD) is Windows Server 2008 with IIS 7; it does not have SQL Server installed. The new server (SERVNEW) is Windows Server 2008 R2, IIS 7.5, with SQL Server installed. Both are located in the DMZ for our network, and both have their own outward-facing IP address (.3 and .4, respectively). Each server can communicate fine with computers within the domain (not in the DMZ), and those same computers have no trouble communicating with either server. Both servers are also accessible from the internet just fine. However, no matter what, these two servers just refuse to recognize each other. They have the same workgroup name listed (WORKGROUP), and I thought that would be enough for them to recognize each other. What needs to happen so that I can get these two servers to communicate with each other? We want to do a gradual roll-over to the new website (the new one uses ASP.NET, the old one uses CFMX), so being able to use one database from both servers is a necessity. Thanks!
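
    Since the goal is for the new server to reach SQL Server on the other box, a direct port test is more telling than name resolution: if TCP 1433 is blocked between the two DMZ hosts (many firewalls forbid DMZ-to-DMZ traffic by default), no workgroup setting will help. A small sketch, with a placeholder address standing in for SERVOLD's DMZ IP:

      import socket

      def port_open(host, port, timeout=3):
          # A plain TCP connect; no protocol handshake is needed.
          try:
              socket.create_connection((host, port), timeout).close()
              return True
          except socket.error:
              return False

      # 1433 is SQL Server's default port; the address is a placeholder.
      print(port_open("192.0.2.3", 1433))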

    Read the article

  • Local SSL connections are causing redirect loop (after Ubuntu update)

    - by codeinthehole
    Following a recent Ubuntu update, my local websites are no longer serving their pages over SSL. For example, my .htaccess file attempts to ensure /sign-in is always served over HTTPS:

      RewriteEngine On
      RewriteCond %{HTTPS} off
      RewriteCond %{REQUEST_URI} /sign-in
      RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [L,QSA,R=301]

    However, when I make a request to /sign-in on the domain site2-local, I get the error "The page isn't redirecting properly", with the following in /var/log/apache2/error.log:

      [Tue Jun 08 12:20:57 2010] [info] [client 127.0.1.1] Connection to child 0 established (server site1-local:443)
      [Tue Jun 08 12:20:57 2010] [info] Seeding PRNG with 656 bytes of entropy
      [Tue Jun 08 12:20:57 2010] [info] Initial (No.1) HTTPS request received for child 0 (server site2-local:443)
      [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.2) HTTPS request received for child 0 (server site2-local:443)
      [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.3) HTTPS request received for child 0 (server site2-local:443)
      [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.4) HTTPS request received for child 0 (server site2-local:443)
      [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.5) HTTPS request received for child 0 (server site2-local:443)
      [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.6) HTTPS request received for child 0 (server site2-local:443)
      [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.7) HTTPS request received for child 0 (server site2-local:443)
      [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.8) HTTPS request received for child 0 (server site2-local:443)
      [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.9) HTTPS request received for child 0 (server site2-local:443)
      [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.10) HTTPS request received for child 0 (server site2-local:443)
      [Tue Jun 08 12:21:12 2010] [info] [client 127.0.1.1] (70007)The timeout specified has expired: SSL input filter read failed.
      [Tue Jun 08 12:21:12 2010] [info] [client 127.0.1.1] Connection closed to child 0 with standard shutdown (server site2-local:443)

    There is a connection to site1-local (another site on my machine which shares the certificate), which I don't understand. Does anyone know what is causing this issue?
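
    When diagnosing a loop like this, it helps to see the exact chain of Location headers rather than the browser's summary. A small sketch with the requests library; the URL is a placeholder for the local vhost, and verify=False assumes a self-signed local certificate:

      import requests

      url = "https://site2-local/sign-in"  # placeholder local vhost
      seen = []
      for _ in range(10):
          resp = requests.get(url, allow_redirects=False, verify=False)
          print(resp.status_code, url)
          if resp.status_code not in (301, 302, 303, 307, 308):
              break
          url = resp.headers["Location"]
          if url in seen:
              print("redirect loop back to", url)
              break
          seen.append(url)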

    Read the article

  • Gwibber on Ubuntu doesn't open

    - by Radian
    Gwibber doesn't open. When I try to open it from the command line, I get this error:

      ** (gwibber:3752): WARNING **: Trying to register gtype 'WnckWindowState' as enum when in fact it is of type 'GFlags'
      ** (gwibber:3752): WARNING **: Trying to register gtype 'WnckWindowActions' as enum when in fact it is of type 'GFlags'
      ** (gwibber:3752): WARNING **: Trying to register gtype 'WnckWindowMoveResizeMask' as enum when in fact it is of type 'GFlags'
      No dbus monitor yet
      Updating...
      ERROR:dbus.proxies:Introspect error on com.Gwibber.Service:/com/gwibber/Service: dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoReply: Message did not receive a reply (timeout by message bus)
      Traceback (most recent call last):
        File "/usr/bin/gwibber", line 67, in <module>
          client.Client()
        File "/usr/lib/python2.6/dist-packages/gwibber/client.py", line 447, in __init__
          self.w = GwibberClient()
        File "/usr/lib/python2.6/dist-packages/gwibber/client.py", line 29, in __init__
          self.model = gwui.Model()
        File "/usr/lib/python2.6/dist-packages/gwibber/gwui.py", line 43, in __init__
          self.services = json.loads(self.daemon.GetServices())
        File "/usr/lib/pymodules/python2.6/dbus/proxies.py", line 68, in __call__
          return self._proxy_method(*args, **keywords)
        File "/usr/lib/pymodules/python2.6/dbus/proxies.py", line 140, in __call__
          **keywords)
        File "/usr/lib/pymodules/python2.6/dbus/connection.py", line 620, in call_blocking
          message, timeout)
      dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoReply: Message did not receive a reply (timeout by message bus)

    I tried to remove it and install it again, but it has the same error. It was working properly, then suddenly it didn't.
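
    The traceback says the com.Gwibber.Service backend never replied, so the daemon, not the client, is the thing to test. A minimal probe of the same D-Bus service, reusing the names that appear in the traceback (the interface name is assumed to match the service name):

      import dbus

      bus = dbus.SessionBus()
      try:
          # Same service and object path the Gwibber client calls.
          proxy = bus.get_object("com.Gwibber.Service", "/com/gwibber/Service")
          print(proxy.GetServices(dbus_interface="com.Gwibber.Service"))
      except dbus.exceptions.DBusException as e:
          print("service did not respond:", e)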

    Read the article

  • Unknown user in terminal

    - by Giles B
    I'm having a strange problem with the Terminal in OS X. When I open the Terminal, the username at the command prompt is:

      unknown-04-0c-ce-e3-0d-c2: ~

    I can't pinpoint when this first started or why, unfortunately. I usually use iTerm for web development purposes, but this also occurs in the normal OS X Terminal app. Any ideas/help would be really appreciated. Thanks

    Update: Thanks to @fayadfami and @aliasgar for the correct answers and for steering me in the right direction. This forum post also helped: http://forums.macrumors.com/showthread.php?t=152407. The extract from the right post:

    "Having run into the exact same issue myself, and having come across this thread while attempting to figure it out, I thought I'd post the answer. OS X is initially setting your hostname to what's set for your Computer Name in Sharing; however, if you're set up for DHCP and you match a current lease on your DHCP server (i.e., match the IP address of another recent user), OS X will then set your hostname to whatever the DHCP server currently has for that lease. This freaked me out incredibly at first, as I had just reformatted (having just purchased my first Mac and wanting to see how the installer worked) and knew I had not yet changed the Computer Name in Sharing -- yet my system hostname at the Terminal prompt was indeed changed to what I had previously set, pre-format. I grepped around, not finding the name anywhere save log entries; I thought either the format didn't actually properly wipe everything, or I was losing my mind. Finally I logged into my router (it's a Linksys WRT54GS running OpenWRT) and found the hostname in the current leases file. I then manually set my Mac's IP to something different, and voila! -- the hostname was back to what I expected. I hope this helps save someone from the same paranoia I went through."

    Read the article

  • Missing Data on VMWare Virtual Disk

    - by Lachlan McDonald
    Evening all. I've got a considerable problem I'm hoping to get some resolution on. I had two VMware 6.5 virtual machines, one running Ubuntu 9.10 and the other Ubuntu 10.04. I used 9.10 as a testing server, so I could install a LAMP environment to prepare some code. Over the months I took a number of snapshots of this VM just in case something went wrong, and did a full copy of the entire VM a month ago. I created the 10.04 VM when Lucid Lynx launched so I could continue development on a fresh install. To get the files over, I simply added the 9.10 virtual disk to the 10.04 VM, grabbed some of the files I needed, and dismounted it. Unknown to me at the time, the changes to the 9.10 virtual disk meant that I could no longer boot it with the 9.10 VM: I'd always get the "The parent virtual disk has been modified since the child was created." error. I decided this was a good time to back up all the critical files, but now, whenever I open the 9.10 disk to get the data, it isn't in the same state as it was earlier. My question is: is it possible that when I'm mounting the virtual disk I'm not seeing the most recent snapshot, or, in my blundering, have I lost the virtual disk? Cheers

    Read the article

  • ImportError: No module named _socket? WSGI Deployment into Apache

    - by Sxkaur
    I am using mod_wsgi 3.3 for Python 2.7.3 (32-bit) with Apache 2.2. I got the binary module from http://code.google.com/p/modwsgi/downloads/detail?name=mod_wsgi-win32-ap22py27-3.3.so. I have been trying to deploy an application but keep on receiving ImportError: No module named _socket. I have included my WSGI script and error logs.

    Apache config:

      #LoadModule vhost_alias_module modules/mod_vhost_alias.so
      LoadModule wsgi_module modules/mod_wsgi.so
      <Directory C:/Users/xxxxd/Documents/cahd>
          AllowOverride None
          Options None
          Order deny,allow
          Allow from all
      </Directory>
      WSGIScriptAlias / C:/Users/xxxxd/Documents/cahd/cahd/django.wsgi

    django.wsgi:

      import os, sys
      sys.path.append('C:/Users/xxxxd/Documents')
      sys.path.append('C:/Users/xxxxd/Documents/cahd/')
      os.environ['DJANGO_SETTINGS_MODULE'] = 'cahd.settings'
      import django.core.handlers.wsgi
      application = django.core.handlers.wsgi.WSGIHandler()

    The error was:

      [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] Traceback (most recent call last):
      [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1]   File "C:/Users/xxxxd/Documents/cahd/django.wsgi", line 10, in <module>
      [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1]     import django.core.handlers.wsgi
      [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1]   File "C:\\django\\Django-1.4.1\\django\\core\\handlers\\wsgi.py", line 8, in <module>
      [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1]     from django import http
      [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1]   File "C:\\django\\Django-1.4.1\\django\\http\\__init__.py", line 11, in <module>
      [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1]     from urllib import urlencode, quote
      [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1]   File "C:\\Python27\\Lib\\urllib.py", line 26, in <module>
      [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1]     import socket
      [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1]   File "C:\\Python27\\Lib\\socket.py", line 47, in <module>
      [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1]     import _socket
      [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] ImportError: No module named _socket
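
    The usual culprit with this error is mod_wsgi embedding a different Python installation (or architecture) than expected, since _socket is a C extension that must match the interpreter exactly. A throwaway WSGI app like this sketch, dropped in place of the Django one, reveals which interpreter and paths mod_wsgi is actually using:

      import platform
      import sys

      def application(environ, start_response):
          # Report the embedded interpreter's details instead of serving Django.
          body = "\n".join([
              "executable: %s" % sys.executable,
              "version: %s" % sys.version,
              "architecture: %s" % platform.architecture()[0],
              "prefix: %s" % sys.prefix,
              "path:",
          ] + sys.path)
          start_response("200 OK", [("Content-Type", "text/plain")])
          return [body]  # a plain str is fine on the Python 2.7 build used here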

    Read the article

  • Apache no longer starts at Windows boot-up

    - by w3d
    I have Apache installed as part of XAMPP, a local test server, and it is configured as a Windows (XP) service with startup type "Automatic". For a long time it always started when Windows booted, but recently this has stopped happening. I now need to start it manually via the XAMPP Control Panel, at which point it appears to start up perfectly OK. The only recent updates to the machine (that I recall) are Windows updates, none of which appear to have "known issues" that relate to this, and updates to Google Chrome. Any ideas what could prevent Apache from starting automatically at Windows (XP) boot-up?

    EDIT #1: There are two related errors in my system event log regarding the Service Control Manager:

      Timeout (30000 milliseconds) waiting for the Apache2.2 service to connect.
      The Apache2.2 service failed to start due to the following error: The service did not respond to the start or control request in a timely fashion.

    When I manually start the Apache server after boot-up, there are two "information" events stating that it was "sent a start control" and that it "entered the running state". However, I notice it appears to take 19 seconds between the start control being sent and entering a running state, according to the event log. So maybe 30 seconds during boot-up isn't long enough (anymore) for Apache to start?
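
    If 30 seconds genuinely isn't enough anymore, Windows exposes the service start timeout as the ServicesPipeTimeout registry value (in milliseconds; the value may need to be created, and a reboot is required). A Python sketch of the edit, assuming a 60-second timeout is wanted:

      import winreg  # named _winreg on Python 2

      key = winreg.OpenKey(
          winreg.HKEY_LOCAL_MACHINE,
          r"SYSTEM\CurrentControlSet\Control",
          0, winreg.KEY_SET_VALUE)
      # 60000 ms = 60 s; the default service start timeout is 30000 ms.
      winreg.SetValueEx(key, "ServicesPipeTimeout", 0, winreg.REG_DWORD, 60000)
      winreg.CloseKey(key)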

    Read the article

  • How to check that all ZFS snapshots within a pool are without holds before destroying that pool

    - by Graham Perrin
    Question: Already I can check each snapshot of a filesystem individually, manually. I would prefer to check all of them at once, with a single command or script. Please: can that be done with a script?

    Background: From the man page for zfs(8):

      zfs holds [-H] [-r] snapshot…
      …
      -r   Specifies that a hold with the given tag is applied recursively to the snapshots of all descendent file systems.

    I wondered whether recent snapshots are treated as descendants of older snapshots. No:

      Last login: Sat Dec 8 09:02:26 on ttys003
      macbookpro08-centrim:~ gjp22$ zfs holds -r gjp22@2012-12-08-081957
      NAME  TAG  TIMESTAMP
      macbookpro08-centrim:~ gjp22$ zfs holds -r gjp22@2012-10-28-212255
      NAME                     TAG                                            TIMESTAMP
      gjp22@2012-10-28-212255  problem with LocalStorage for WOT for Safari  Mon Oct 29  6:44 2012
      macbookpro08-centrim:~ gjp22$ zfs hold experiment gjp22@2012-12-08-081957
      macbookpro08-centrim:~ gjp22$ zfs holds -r gjp22@2012-10-28-212255
      NAME                     TAG                                            TIMESTAMP
      gjp22@2012-10-28-212255  problem with LocalStorage for WOT for Safari  Mon Oct 29  6:44 2012
      macbookpro08-centrim:~ gjp22$ zfs holds -r gjp22@2012-12-08-081957
      NAME                     TAG         TIMESTAMP
      gjp22@2012-12-08-081957  experiment  Sat Dec  8  9:04 2012
      macbookpro08-centrim:~ gjp22$ zfs holds -r gjp22@2012-10-28-212255
      NAME                     TAG                                            TIMESTAMP
      gjp22@2012-10-28-212255  problem with LocalStorage for WOT for Safari  Mon Oct 29  6:44 2012
      macbookpro08-centrim:~ gjp22$
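
    It does script cleanly: enumerate every snapshot under the pool, ask zfs holds about each one, and report those with holds. A sketch, with the filesystem name as a placeholder; both commands use -H for header-free, tab-separated output:

      import subprocess

      POOL = "gjp22"  # placeholder pool/filesystem name

      # All snapshots under the pool, one name per line.
      snaps = subprocess.check_output(
          ["zfs", "list", "-H", "-r", "-t", "snapshot", "-o", "name", POOL],
          universal_newlines=True).splitlines()

      held = []
      for snap in snaps:
          # Any output from zfs holds -H means at least one hold exists.
          out = subprocess.check_output(["zfs", "holds", "-H", snap],
                                        universal_newlines=True)
          if out.strip():
              held.append(snap)

      print("snapshots with holds:" if held else "no holds anywhere")
      for snap in held:
          print(" ", snap)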

    Read the article

  • Winamp question: Generating 'dynamic playlists' from file playlists -OR- mass-tagging by file playlist

    - by Daddy Warbox
    I'm trying to think of a way to do this. I sort my songs into a variety of playlists corresponding to the different 'moods' I might be in as I listen to them, and some songs fit more than one kind of mood (e.g. a jazz song might be 'stylish' and 'emotional', or something to that effect). I also give them star ratings for a general sort of opinion about them. I want to be able to filter and sort my media library by the moods I want or don't want, as well as by star rating. Does anyone have a good way to do something like this? I can't seem to use Winamp's dynamic playlists to generate lists from existing filesystem playlists (e.g. songs in given .m3u files), and hand-tagging files with Winamp's tag editor is a royal pain; it's trouble enough just giving a star rating and sorting into playlists as it is. If there is a way to mass-tag the songs within each playlist with mood words, to allow me to create dynamic playlists, I'd be fine (for now). It'd be nice if I could do this via some kind of hotkey for each song, too; I'm looking to see if I can use a macro program or something to do that. Thanks in advance.

    P.S.: Alternatively, would something like Foobar have functions like this?
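
    One practical route is to fold each playlist's mood into a tag on its member files, and then filter the media library on that tag. A sketch using the mutagen library; reusing the genre field for moods is an assumption, the playlist path is hypothetical, and Winamp would need to rescan its library afterwards:

      import os
      from mutagen import File

      def tag_playlist(m3u_path, mood):
          base = os.path.dirname(m3u_path)
          with open(m3u_path) as playlist:
              for line in playlist:
                  line = line.strip()
                  if not line or line.startswith("#"):  # skip comments/blanks
                      continue
                  audio = File(os.path.join(base, line), easy=True)
                  if audio is None:  # unrecognised file type
                      continue
                  if audio.tags is None:
                      audio.add_tags()
                  try:
                      moods = set(audio["genre"])
                  except KeyError:
                      moods = set()
                  moods.add(mood)
                  audio["genre"] = sorted(moods)  # genre doubles as a mood field
                  audio.save()

      tag_playlist("stylish.m3u", "stylish")  # hypothetical playlist file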

    Read the article

  • Interrupted system call during "hg convert"

    - by Aaron Digulla
    When I run "hg convert" to convert a Subversion repository to Mercurial, I get this error: fetching revision log for "/trunk" from 1538 to 0 run hg sink post-conversion action Traceback (most recent call last): File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 46, in _runcatch return _dispatch(ui, args) File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 454, in _dispatch return runcommand(lui, repo, cmd, fullargs, ui, options, d) File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 324, in runcommand ret = _runcommand(ui, options, cmd, d) File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 505, in _runcommand return checkargs() File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 459, in checkargs return cmdfunc() File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 453, in <lambda> d = lambda: util.checksignature(func)(ui, *args, **cmdoptions) File "/usr/lib/pymodules/python2.6/mercurial/util.py", line 386, in check return func(*args, **kwargs) File "/usr/lib/pymodules/python2.6/hgext/convert/__init__.py", line 229, in convert return convcmd.convert(ui, src, dest, revmapfile, **opts) File "/usr/lib/pymodules/python2.6/hgext/convert/convcmd.py", line 398, in convert c.convert(sortmode) File "/usr/lib/pymodules/python2.6/hgext/convert/convcmd.py", line 312, in convert parents = self.walktree(heads) File "/usr/lib/pymodules/python2.6/hgext/convert/convcmd.py", line 109, in walktree commit = self.cachecommit(n) File "/usr/lib/pymodules/python2.6/hgext/convert/convcmd.py", line 267, in cachecommit commit = self.source.getcommit(rev) File "/usr/lib/pymodules/python2.6/hgext/convert/subversion.py", line 433, in getcommit self._fetch_revisions(revnum, stop) File "/usr/lib/pymodules/python2.6/hgext/convert/subversion.py", line 814, in _fetch_revisions for entry in stream: File "/usr/lib/pymodules/python2.6/hgext/convert/subversion.py", line 122, in __iter__ entry = pickle.load(self._stdout) IOError: [Errno 4] Interrupted system call abort: Interrupted system call Apparently, it is possible to restart a read on EINTR but how would I do that with pickle.load()? Also I wonder where that signal comes from? I suspect it's SIGCHILD but shouldn't popen() handle that?

    Read the article

  • Why are SMART error rates going down?

    - by Jeff Shattock
    I have a hard drive that's part of a Linux software RAID5 array. SMART reported that its multi_zone_error_rate was 0, then 1, then 3, so I figured I had better start backing up more frequently and prepare to replace the drive. Now, today, the multi_zone_error_rate of that very same drive is back down to 1. It seems that two errors unhappened while I wasn't looking. I've also seen similar behaviour by inspecting the syslog on the server:

      Jun 7 21:01:17 FS1 smartd[25593]: Device: /dev/sdc, SMART Usage Attribute: 7 Seek_Error_Rate changed from 200 to 100
      Jun 7 21:01:17 FS1 smartd[25593]: Device: /dev/sde, SMART Usage Attribute: 7 Seek_Error_Rate changed from 200 to 100
      Jun 7 21:01:18 FS1 smartd[25593]: Device: /dev/sdg, SMART Usage Attribute: 7 Seek_Error_Rate changed from 200 to 100
      Jun 8 02:31:18 FS1 smartd[25593]: Device: /dev/sdg, SMART Usage Attribute: 7 Seek_Error_Rate changed from 100 to 200
      Jun 8 03:01:17 FS1 smartd[25593]: Device: /dev/sdc, SMART Usage Attribute: 7 Seek_Error_Rate changed from 100 to 200
      Jun 8 03:01:17 FS1 smartd[25593]: Device: /dev/sde, SMART Usage Attribute: 7 Seek_Error_Rate changed from 100 to 200

    These are raw values, not the human-useful values that smartctl -a produces, but the behaviour is similar: error rates changing, then undoing the change. None of these is the drive that had the multi_zone weirdness. I haven't seen any problems from the RAID; its most recent scrub (less than 24 hours ago) came back totally clean. The only thing I can think of is that the SMART reporting circuitry on the drive isn't working properly all the time. The cables are in tight on the drive and board. What's going on here?
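
    Normalized attribute values are scaled so that bigger is better, and they can legitimately move in both directions as the firmware recalculates them, so logging the raw values over time gives a clearer trend. A sketch around smartctl; the device path is a placeholder:

      import csv
      import subprocess
      import time

      DEVICE = "/dev/sdc"  # placeholder

      # smartctl -A prints the attribute table; raw values are the last column.
      out = subprocess.check_output(["smartctl", "-A", DEVICE],
                                    universal_newlines=True)

      with open("smart_log.csv", "a") as f:
          writer = csv.writer(f)
          for line in out.splitlines():
              fields = line.split()
              if len(fields) >= 10 and fields[0].isdigit():
                  # timestamp, attribute id, name, normalized value, raw value
                  writer.writerow([int(time.time()), fields[0], fields[1],
                                   fields[3], fields[9]])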

    Read the article

  • Cannot build digiKam

    - by Tichomir Mitkov
    I'm trying to compile digiKam 2.8.0. I have installed the required libraries, but CMake seems to get stuck for no meaningful reason. Here is the output of CMake:

      $ cmake -DCMAKE_BUILD_TYPE=relwithdebinfo -DCMAKE_INSTALL_PREFIX=/usr/local .
      -- Found Qt-Version 4.7.1 (using /usr/bin/qmake)
      -- Found X11: /usr/lib64/libX11.so
      -- Found KDE 4.6 include dir: /usr/include
      -- Found KDE 4.6 library dir: /usr/lib64
      -- Found the KDE4 kconfig_compiler preprocessor: /usr/bin/kconfig_compiler
      -- Found automoc4: /usr/bin/automoc4
      -- Local kdegraphics libraries will be compiled... YES
      -- Handbooks will be compiled..................... YES
      -- Extract translations files..................... NO
      -- Translations will be compiled.................. YES
      -- ----------------------------------------------------------------------------------
      -- Starting CMake configuration for: libmediawiki
      -----------------------------------------------------------------------------
      -- The following external packages were located on your system.
      -- This installation will have the extra features provided by these packages.
      -----------------------------------------------------------------------------
       * QJSON - Qt library for handling JSON data
      -----------------------------------------------------------------------------
      -- Congratulations! All external packages have been found.
      -----------------------------------------------------------------------------
      -- ----------------------------------------------------------------------------------
      -- Starting CMake configuration for: libkgeomap
      -- Found Qt-Version 4.7.1 (using /usr/bin/qmake)
      -- Found X11: /usr/lib64/libX11.so
      -- Check Kexiv2 library in local sub-folder...
      -- Found Kexiv2 library in local sub-folder: /home/tichomir/Downloads/digikam-2.8.0/extra/libkexiv2
      -- kexiv2 found, the demo application will be compiled.
      -- ----------------------------------------------------------------------------------
      -- Starting CMake configuration for: libkface
      -- Found Qt-Version 4.7.1 (using /usr/bin/qmake)
      -- Found X11: /usr/lib64/libX11.so
      -- First try at finding OpenCV...
      -- Great, found OpenCV on the first try.
      -- OpenCV Root directory is /usr/share/opencv
      -- External libface was not found, use internal version instead...
      -- ----------------------------------------------------------------------------------
      -- Starting CMake configuration for: kipi-plugins
      -- Check Kexiv2 library in local sub-folder...
      -- Found Kexiv2 library in local sub-folder: /home/tichomir/Downloads/digikam-2.8.0/extra/libkexiv2
      -- Check for Kdcraw library in local sub-folder...
      -- Found Kdcraw library in local sub-folder: /home/tichomir/Downloads/digikam-2.8.0/extra/libkdcraw
      CMake Error at extra/libkdcraw/cmake/modules/FindKdcraw.cmake:137 (file):
        file Internal CMake error when trying to open file:
        /home/tichomir/Downloads/digikam-2.8.0/extra/libkdcraw/libkdcraw/version.h for reading.
      Call Stack (most recent call first):
        extra/kipi-plugins/CMakeLists.txt:123 (FIND_PACKAGE)
      -- Check Kipi library in local sub-folder...
      -- Found Kipi library in local sub-folder: /home/tichomir/Downloads/digikam-2.8.0/extra/libkipi
      CMake Warning at extra/kipi-plugins/CMakeLists.txt:139 (MESSAGE):
        libkdcraw: Version information not found, your version is probably too old.
      -- Found GObject libraries: /usr/lib64/libgobject-2.0.so;/usr/lib64/libgmodule-2.0.so;/usr/lib64/libgthread-2.0.so;/usr/lib64/libglib-2.0.so
      -- Found GObject includes : /usr/include/glib-2.0/gobject
      -- Check for Ksane library in local sub-folder...
      -- Found Ksane library in local sub-folder: /home/tichomir/Downloads/digikam-2.8.0/extra/libksane
      -- Check for KGeoMap library in local sub-folder...
      -- Found KGeoMap library in local sub-folder: /home/tichomir/Downloads/digikam-2.8.0/extra/libkgeomap
      -- Check Mediawiki library in local sub-folder...
      -- Found Mediawiki library in local sub-folder: /home/tichomir/Downloads/digikam-2.8.0/extra/libmediawiki
      -- Check Vkontakte library in local sub-folder...
      -- Found Vkontakte library in local sub-folder: /home/tichomir/Downloads/digikam-2.8.0/extra/libkvkontakte
      -- Boost version: 1.38.0
      -- libkgeomap: Found version 2.0.0
      -- Found X11: /usr/lib64/libX11.so
      -- CMake version: cmake version 2.8.9
      -- CMake version (cleaned): cmake version 2.8.9
      --
      -- ----------------------------------------------------------------------------------
      -- kipi-plugins 2.8.0 dependencies results <http://www.digikam.org>
      --
      -- libjpeg library found.................... YES
      -- libtiff library found.................... YES
      -- libpng library found..................... YES
      -- libkipi library found.................... YES
      -- libkexiv2 library found.................. YES
      -- libkdcraw library found.................. YES
      -- libxml2 library found.................... YES (optional)
      -- libxslt library found.................... YES (optional)
      -- libexpat library found................... YES (optional)
      -- native threads support library found..... YES (optional)
      -- libopengl library found.................. YES (optional)
      -- Qt4 OpenGL module found.................. YES
      -- libopencv library found.................. YES (optional)
      -- QJson library found...................... YES (optional)
      -- libgpod library found.................... YES (optional)
      -- Gdk library found........................ YES (optional)
      -- libkdepim library found.................. YES (optional)
      -- qca2 library found....................... YES (optional)
      -- libkgeomap library found................. YES (optional)
      -- libmediawiki library found............... YES (optional)
      -- libkvkontakte library found.............. YES (optional)
      -- boost library found...................... YES (optional)
      -- OpenMP library found..................... YES (optional)
      -- libX11 library found..................... YES (optional)
      -- libksane library found................... YES (optional)
      --
      -- kipi-plugins will be compiled............ YES
      -- Shwup will be compiled................... YES (optional)
      -- YandexFotki will be compiled............. YES (optional)
      -- HtmlExport will be compiled.............. YES (optional)
      -- AdvancedSlideshow will be compiled....... YES (optional)
      -- ImageViewer will be compiled............. YES (optional)
      -- AcquireImages will be compiled........... YES (optional)
      -- DNGConverter will be compiled............ YES (optional)
      -- RemoveRedEyes will be compiled........... YES (optional)
      -- Debian Screenshots will be compiled...... YES (optional)
      -- Facebook will be compiled................ YES (optional)
      -- Imgur will be compiled................... YES (optional)
      -- VKontakte will be compiled............... YES (optional)
      -- IpodExport will be compiled.............. YES (optional)
      -- Calendar will be compiled................ YES (optional)
      -- GPSSync will be compiled................. YES (optional)
      -- Mediawiki will be compiled............... YES (optional)
      -- Panorama will be compiled................ YES (optional)
      -- ----------------------------------------------------------------------------------
      --
      -- ----------------------------------------------------------------------------------
      -- Starting CMake configuration for: digiKam
      -- Check for Kdcraw library in local sub-folder...
      -- Found Kdcraw library in local sub-folder: /home/tichomir/Downloads/digikam-2.8.0/extra/libkdcraw
      CMake Error at extra/libkdcraw/cmake/modules/FindKdcraw.cmake:137 (file):
        file Internal CMake error when trying to open file:
        /home/tichomir/Downloads/digikam-2.8.0/extra/libkdcraw/libkdcraw/version.h for reading.
      Call Stack (most recent call first):
        core/CMakeLists.txt:156 (FIND_PACKAGE)
      -- Check Kexiv2 library in local sub-folder...
      -- Found Kexiv2 library in local sub-folder: /home/tichomir/Downloads/digikam-2.8.0/extra/libkexiv2
      -- Check Kipi library in local sub-folder...
      -- Found Kipi library in local sub-folder: /home/tichomir/Downloads/digikam-2.8.0/extra/libkipi
      -- Check Kface library in local sub-folder...
      -- Found Kface library in local sub-folder: /home/tichomir/Downloads/digikam-2.8.0/extra/libkface
      -- Check for KGeoMap library in local sub-folder...
      -- Found KGeoMap library in local sub-folder: /home/tichomir/Downloads/digikam-2.8.0/extra/libkgeomap
      -- PGF_INCLUDE_DIRS = /usr/local/include/libpgf
      -- PGF_INCLUDEDIR = /usr/local/include/libpgf
      -- PGF_LIBRARIES = pgf
      -- PGF_LDFLAGS = -L/usr/local/lib;-lpgf
      -- PGF_CFLAGS = -I/usr/local/include/libpgf
      -- PGF_VERSION = 6.12.24
      -- PGF_CODEC_VERSION_ID = 61224
      -- Could NOT find any working clapack installation
      -- Boost version: 1.38.0
      -- Check for LCMS1 availability...
      -- Found LCMS1: /usr/lib64/liblcms.so /usr/include
      -- Paralelized PGF codec disabled...
      -- Identified libjpeg version: 62
      -- Found MySQL server executable at: /usr/sbin/mysqld
      -- Found MySQL install_db executable at: /usr/bin/mysql_install_db
      CMake Warning at core/CMakeLists.txt:310 (MESSAGE):
        libkdcraw: Version information not found, your version is probably too old.
      -- libkgeomap: Found version 2.0.0
      -- Found gphoto2: -L/usr/lib64 -lgphoto2_port;-L/usr/lib64 -lgphoto2 -lgphoto2_port -lm
      -- WARNING: you are using the obsolete 'PKGCONFIG' macro, use FindPkgConfig
      -- WARNING: you are using the obsolete 'PKGCONFIG' macro, use FindPkgConfig
      -- PKGCONFIG() indicates that lqr-1 is not installed (install the package which contains lqr-1.pc if you want to support this feature)
      -- Could NOT find Lqr-1 (missing: LQR-1_INCLUDE_DIRS LQR-1_LIBRARIES)
      -- Found SharedDesktopOntologies: /usr/share/ontology
      -- Found SharedDesktopOntologies: /usr/share/ontology (found version "0.5.0", required is "0.2")
      --
      -- ----------------------------------------------------------------------------------
      -- digiKam 2.8.0 dependencies results <http://www.digikam.org>
      --
      -- Qt4 SQL module found..................... YES
      -- MySQL Server found....................... YES
      -- MySQL install_db tool found.............. YES
      -- libtiff library found.................... YES
      -- libpng library found..................... YES
      -- libjasper library found.................. YES
      -- liblcms library found.................... YES
      -- Boost Graph library found................ YES
      -- libkipi library found.................... YES
      -- libkexiv2 library found.................. YES
      -- libkdcraw library found.................. YES
      -- libkface library found................... YES
      -- libkgeomap library found................. YES
      -- libpgf library found..................... YES (optional)
      -- libclapack library found................. NO (optional - internal version used instead)
      -- libgphoto2 and libusb libraries found.... YES (optional)
      -- libkdepimlibs library found.............. YES (optional)
      -- Nepomuk libraries found.................. YES (optional)
      -- libglib2 library found................... YES (optional)
      -- liblqr-1 library found................... NO (optional - internal version used instead)
      -- liblensfun library found................. YES (optional)
      -- Doxygen found............................ YES (optional)
      -- digiKam can be compiled.................. YES
      -- ----------------------------------------------------------------------------------
      --
      -- Adjusting compilation flags for GCC version ( 4.5.1 )
      -- Configuring incomplete, errors occurred!

    This line in particular shows the actual error:

      CMake Error at extra/libkdcraw/cmake/modules/FindKdcraw.cmake:137 (file):
        file Internal CMake error when trying to open file:
        /home/tichomir/Downloads/digikam-2.8.0/extra/libkdcraw/libkdcraw/version.h for reading.

    version.h doesn't exist; instead there is a file version.h.cmake. I have installed libkdcraw (64-bit) from source. I'm using openSUSE.

    Read the article

  • Win 2008 R2 - copying TO disk is very slow, copying FROM is more or less okay

    - by avs099
    I have Windows 2008 R2 SP1 with 4 identical SATA disks (Seagate Barracuda 7200) in a RAID5 array. It has 4 GB of memory, and all recent updates are installed. Problem: when I copy a large file from one folder to another, I get about 10 MB/s average speed. When I read this file from a network share via a 1 Gbps connection, I get about 25-30 MB/s. Both numbers seem low to me, but I'm specifically frustrated by the low write speed. There is no antivirus and no Hyper-V; it's just a file server, and when I do my tests nobody else reads from or writes to it (we have only 4 people in the team, so I'm sure). Not sure if it matters, but there is only one logical disk, "C", with all available space (1400 GB). I'm not an admin at all, so I have no idea where to look or what other information to provide. I did run Performance Monitor with the "% idle time", "avg bytes read" and "avg bytes write" counters; the screenshot shows obvious spikes, and I'm not sure why. Any idea? Please let me know if you need me to provide more information, e.g. which counters I should check. I'm very eager to get this solved. Thank you.

    UPDATE: We have another Windows 2008 R2 SP1 server with two RAID1 arrays: one is disk C (where Windows is installed), the other is disk E. It is running Hyper-V and does not have antivirus. I noticed the following behavior when I copy a large file (a few GBs):

      C - C: about 50 MB/sec
      C - E: about 55 MB/sec
      E - E: 8 MB/sec!!!
      E - C: 8 MB/sec!!!

    What could cause this?? The E drive is a RAID1 array of the same Seagate Barracuda 1 TB drives.

    Read the article

  • Firefox 29 - how do I delete history entries visited fewer than x times

    - by lousyuser
    Context: I've been using my Firefox profile for a couple of years now, so my history file has become huge, naturally. I have Firefox Sync set up between my main desktop PC and my laptop.

    HW configs:
      PC: i5-3450, 8 GB DDR3 RAM, Crucial M4 128 GB SSD
      laptop: Pentium SU4100, 4 GB DDR3 RAM, WD 5400 rpm HDD

    Accessing history entries when typing into the Awesome Bar on my desktop takes quite a long time despite the decent config, and the laptop is even slower; the experience is quite unresponsive. I figured that if I cleared the history up a little, I might avoid creating a new profile to speed things up.

    The question itself, to illustrate: is there a way to delete all history entries that have been visited fewer than x (let's say 5) times and, at the same time, whose most recent visit is fewer than y (let's say 120) days old? As far as I know, the history file is some kind of SQL database, but I'm not really sure how the data is saved, whether there's a "safe way" to edit it, or what the query to do what I need would look like. Thanks in advance for any help.

    I kept browsing through previous Super User questions to see if I could find relevant information: "In my Firefox profile directory, there is a file named places.sqlite. Opening it with sqlite reveals (amongst others) the tables moz_places and moz_historyvisits. It seems that moz_historyvisits uses the primary key of moz_places to refer to the URLs." As I'm unfamiliar with databases, I don't really understand the way the two tables mentioned in the quote are related (see the screenshot of a part of the tables). I've noticed the visit_count is in a standard format, making it easy to work with. The last_visit_date looks encrypted to my naked eye, but I can't see in which way. Hope that helps; I'm at my wits' end.
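
    For what it's worth, last_visit_date isn't encrypted: it's a PRTime value, microseconds since the Unix epoch. A sketch of the deletion with Python's built-in sqlite3 module, matching the x = 5 and y = 120 figures above; work on a copy of places.sqlite with Firefox closed, and note that the table and column names are from the standard places schema:

      import sqlite3
      import time

      X_VISITS = 5
      Y_DAYS = 120
      cutoff = (time.time() - Y_DAYS * 86400) * 1000000  # PRTime is microseconds

      conn = sqlite3.connect("places.sqlite")  # operate on a COPY of the file
      cur = conn.cursor()

      # moz_historyvisits.place_id references moz_places.id, so the child
      # rows go first. Flip the > to < if "older than y days" is intended.
      condition = "visit_count < ? AND last_visit_date > ?"
      cur.execute("DELETE FROM moz_historyvisits WHERE place_id IN "
                  "(SELECT id FROM moz_places WHERE %s)" % condition,
                  (X_VISITS, cutoff))
      cur.execute("DELETE FROM moz_places WHERE %s" % condition,
                  (X_VISITS, cutoff))
      conn.commit()
      conn.close()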

    Read the article
