Search Results

Search found 14539 results on 582 pages for 'date conversion'.


  • Virus cleanup + dying drive = XP Automatic Updates crashing in esent.dll

    - by quack quixote
    Background: I'm doing system recovery on an old WinXP SP1 system brought to me on suspicion of virus infection. After taking preliminary backups, I used MalwareBytes to detect and clean the infection. I might've even gotten it all. In the process, I've discovered (a) the system drive is showing signs of impending failure, and (b) the owner has been using the system's old crusty IE6 instead of the up-to-date Firefox I've provided for him. So naturally, thinking I had a relatively stable system, I tried to hit the Windows Update site to install IE8, in case further training doesn't stick. The update site told me it needed to update the installer, and I started that process. Soon after, wuauclt.exe started crashing, reporting addresses in module esent.dll. There's a Microsoft KB (910437) on a problem with that DLL, so I downloaded the hotfix and installed it. The crashing did not stop. I attempted to install SP3 from the offline installer, but that didn't fix the issue either. The system is reporting a few hard drive / IDE controller errors, but they don't correlate to the crashes, so they aren't the direct cause. I've also attempted to roll back to the time between the infection removal and the first crashes, but that doesn't help.

    Question: The hotfix I tried to install dealt with a problem in the transaction logs of the Extensible Storage Engine (ESE) database. I suspect this issue is similar, but that the database itself (whatever the ESE database is) is corrupted. Is there a way to clean or clear this database so that system operation returns to normal? Can someone enlighten me as to what the ESE database actually is, and where it resides? Can I just locate some files and delete them to bring this under control?

    Read the article

  • Setting cfengine3 class based on command output

    - by gnomie
    This question is very similar to "How can I use the output of a command in cfengine3", but I believe the answer does not apply in my case. I want to update a git repository via "git pull" and, based on whether that led to changes, trigger some follow-up action. Simplified: if there were something like "match output and set class" via some body if_output_matches, I would want to use something like this:

        bundle agent updateRepo {
          commands:
            "/usr/bin/git pull"
              contain => setuidgiddir_sh("$(globals.user)","$(globals.group)","$(target)"),
              classes => if_output_matches("Already up-to-date.","no_update");
          reports:
            no_update:: "nothing updated";
        }

        body contain setuidgiddir_sh(owner,group,folder) {
          exec_owner => "$(owner)";
          exec_group => "$(group)";
          useshell   => "true";
          chdir      => "$(folder)";
        }

    So, is it possible to use the output of a possibly expensive command and base some decision on it? The execresult function is no good choice for me, as (a) the pull may become expensive at times (not recommended, per the cfengine3 reference), and (b) it does not allow specifying user, group, or working directory, which matters here: the repository is in user space and not owned by root.
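    For illustration, one route that avoids execresult is to run the pull through a small wrapper script invoked as a CFEngine module (module => "true" on the commands promise) and let the wrapper raise the class itself; lines a module prints that start with "+" define classes. A rough bash sketch, in which the script path, repository location, and class names are hypothetical:

        #!/bin/bash
        # hypothetical module script, e.g. /var/cfengine/modules/git_update
        out=$(cd /home/user/repo && /usr/bin/git pull 2>&1)
        if printf '%s' "$out" | grep -q 'Already up-to-date.'; then
            echo "+no_update"        # '+' defines a CFEngine class
        else
            echo "+repo_updated"
        fi

    The updateRepo bundle would then run "git_update" with module => "true" (keeping the existing contain body for user, group, and chdir), and the no_update:: report works unchanged.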

    Read the article

  • Windows 7 doesn't start anymore

    - by martani_net
    Hi, I've experienced some BSODs on Windows 7 RC, and some freezing at startup, but today came the big surprise: it doesn't start anymore. I tried to start in safe mode with no result either: it shows the starting animation, then a blue screen for less than a second, and turns off immediately. The only thing I remember doing today is updating Flash Player under Firefox; then Chrome stopped working even after logging off, and after restarting, Windows doesn't start anymore. Anyone experienced the same issue? Any hints?

    [EDIT 3] Solved: Windows 7 has a very smart repair strategy; it works automatically, and it tried every possible fix. What fixed my problem was a system restore to a previous date, and all of this happened automatically.

    [EDIT 2] These are the last lines in the ntbtlog.txt file:

        Did not load driver \SystemRoot\System32\drivers\vga.sys
        Loaded driver \SystemRoot\System32\Drivers\NDProxy.SYS
        Did not load driver \SystemRoot\System32\Drivers\NDProxy.SYS
        Did not load driver \SystemRoot\System32\Drivers\NDProxy.SYS
        Did not load driver \SystemRoot\System32\Drivers\NDProxy.SYS
        Did not load driver \SystemRoot\System32\Drivers\NDProxy.SYS
        Loaded driver \SystemRoot\system32\drivers\CHDRT32.sys
        Loaded driver \SystemRoot\system32\DRIVERS\VSTAZL3.SYS
        Loaded driver \SystemRoot\system32\DRIVERS\VSTDPV3.SYS
        Loaded driver \SystemRoot\system32\DRIVERS\VSTCNXT3.SYS
        Loaded driver \SystemRoot\system32\drivers\modem.sys
        Loaded driver \SystemRoot\system32\DRIVERS\usbccgp.sys
        Loaded driver \SystemRoot\System32\Drivers\usbvideo.sys

    [EDIT] This is the BSOD I get: http://twitpic.com/i87cx

    Thank you.

    Read the article

  • Varnish returning 503, FetchError (could not get storage)

    - by Archan
    On the current setup we're running into a problem with Varnish. We're running a CentOS 5.7 x86_64 xenpv VPS with cPanel WHM, hosted at VPS.net. Sometimes we receive a Guru Meditation from Varnish, and when we look in the varnishlog with the following command:

        varnishlog -d -c -m TxStatus:503

    it returns output similar to the following:

        15 VCL_call     c recv
        15 VCL_acl      c NO_MATCH devs
        15 VCL_return   c pass
        15 VCL_call     c hash
        15 Hash         c ****
        15 Hash         c *************
        15 VCL_return   c hash
        15 VCL_call     c pass pass
        15 Backend      c 12 default default
        15 TTL          c 1835862523 RFC 0 -1 -1 1332454056 0 1332454055 375007920 0
        15 VCL_call     c fetch hit_for_pass
        15 ObjProtocol  c HTTP/1.1
        15 ObjResponse  c OK
        15 ObjHeader    c Date: Thu, 22 Mar 2012 22:07:35 GMT
        15 ObjHeader    c Server: Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 mod_fcgid/2.3.6
        15 ObjHeader    c X-Powered-By: PHP/5.3.9
        15 ObjHeader    c Expires: Thu, 19 Nov 1981 08:52:00 GMT
        15 ObjHeader    c Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        15 ObjHeader    c Pragma: no-cache
        15 ObjHeader    c Content-Type: text/html; charset=utf-8
        15 ObjHeader    c X-Cacheable: NO:Cache-Control=private
        15 FetchError   c chunked read_error: 12 (Could not get storage)
        15 VCL_call     c error deliver
        15 VCL_call     c deliver deliver

    As far as I could gather, we could try increasing the nuke_limit, but we currently have a nuke_limit of 500, and when running varnishstat -1 -f n_lru_nuked we "only" get a total of 1031, even though we have seen the error happen on several pages. When we then run top to see how much memory Varnish is using, it shows only 763m in use, although we've allowed it to use 1200m. Any ideas what the problem can be?
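    Two quick shell checks may narrow this down; treat them as a sketch, since counter and parameter names vary between Varnish releases:

        # is any storage backend full, and is LRU nuking failing?
        varnishstat -1 | grep -Ei 'nuke|g_bytes|g_space'

        # confirm the limits the running child actually has
        varnishadm -T localhost:6082 param.show nuke_limit

    One thing to keep in mind with Varnish 3 is that hit_for_pass and other short-lived objects land in the separate, unbounded Transient malloc storage rather than in the 1200m main storage, which can make the numbers from top and varnishstat hard to reconcile; Transient can be sized explicitly with -s Transient=malloc,256M.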

    Read the article

  • How to use Cisco AnyConnect VPN Client?

    - by ktm5124
    I wrote a related question earlier, which is still unresolved. This question is much more specific. So I installed Cisco AnyConnect VPN Client on Snow Leopard. I connect to my work VPN. Once connected, I can't ping my work machine. I don't see any computers on the network. If the client were not running, I wouldn't believe myself to be connected to the VPN. Is there something that I am doing wrong? Do I have to route my network traffic through the tunnel manually? (ifconfig route comes to mind) Is the POST request that I am about to submit going to go through the tunnel created by my VPN? I guess the main question is: why do I feel so in the dark? Cisco says I am connected to my VPN, but for all I know it is invisible. N.B. I do have the up-to-date Cisco VPN Client: version 2.3.2016. I installed it about a week ago.
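    A few Terminal commands can make the tunnel less invisible; this is a generic sketch, and the internal address is a placeholder:

        # did the client create a virtual tunnel interface, and does it have an IP?
        ifconfig

        # which destinations are routed through that interface?
        netstat -rn

        # is a known work-side host reachable through it?
        ping -c 3 10.10.0.1

    If netstat -rn shows no routes pointing at the VPN interface, traffic isn't being steered into the tunnel, which would explain the empty-looking network.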

    Read the article

  • MacBook Pro and Backtrack 5R1 Configuration

    - by user119346
    I have a MacBook Pro quad core (2.2 GHz / 8 GB RAM / 750 GB HDD). I have gone through tons of forums on the Internet, but none of them seem to be updated for the current BackTrack 5 R1, or to address getting it to work correctly on the MBP. Can anyone help? I don't have a USB dongle, and I want to be able to use the MBP's internal AirPort Extreme wireless with BT 5R1. I have downloaded BackTrack 5 R1 onto VMware Fusion and got it up and running, but to no avail: it keeps recognizing my card as an Ethernet connection, and KisMAC won't recognize the card either. So what I am asking for is this:

    1. The proper "download method" for BackTrack 5 R1 on my MacBook Pro (yes, I am willing to re-download BT 5R1).
    2. The complete process from start to finish, up to date, from someone who has done this using an MBP running OS X Lion.
    3. The proper tweaks, settings, or commands to get my AirPort Extreme wireless card to work (it is a Broadcom 4331, I think). I need the wireless connection to use the tools in both BackTrack 5 R1 and KisMAC; I mainly need to test WEP cracking on my own network for security.
    4. The difference, if there is one, between running BT 5R1 in VMware Fusion and installing it directly on the MBP, and how to download it directly to the MBP.

    Read the article

  • Move the uploads folder in Wordpress

    - by Victor Hurdugaci
    Currently, my WordPress upload folder is located in \wp-content\uploads. Initially there was no structure, so all files were put straight in there. After a while this was changed so files are uploaded into \wp-content\uploads\YEAR\MONTH. Now that folder contains a mix of files (those starting with + are folders) like:

        +wp-content
        | +2010
        | | +02
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | | +01
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | +2009
        | | +12
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | | +11
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | +..
        | | | ..
        | Unstructured-file-1
        | Unstructured-file-2
        | ...
        | Unstructured-file-n

    Based on the dates of the unstructured files, I would like to move them into the structured hierarchy (based on date, move each file to a folder \wp-content\uploads\YEAR\MONTH). Now, my questions are:

    1. Where do I write and execute a script to do the moving (I don't have full access to the server, just to cPanel and to the WordPress admin page)?
    2. What must be updated so that the links in posts that reference the unstructured files point to the new location of those files?
    3. Not fully related to the previous description: is it alright to move the whole uploads folder to another location, like \uploads?

    PS: Moving the files/updating the database manually is not an option :)
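    Since only cPanel is available, a one-off script along these lines could be saved in the home directory and run as a cPanel cron job. It is a sketch only: the database credentials, table prefix, and all paths are placeholders, and it should be rehearsed against a copy of the files and database first.

        #!/bin/bash
        # sort loose files in uploads/ into YEAR/MONTH by modification time,
        # then rewrite references to them in the posts table
        cd /home/account/public_html/wp-content/uploads || exit 1

        for f in *; do
            [ -f "$f" ] || continue              # skip the existing YEAR folders
            d=$(date -r "$f" +%Y/%m)             # GNU date: file's mtime as YYYY/MM
            mkdir -p "$d" && mv "$f" "$d/$f"
            mysql -u dbuser -p'dbpass' wpdb -e "
                UPDATE wp_posts
                   SET post_content = REPLACE(post_content,
                       'wp-content/uploads/$f',
                       'wp-content/uploads/$d/$f');"
        done

    Attachment rows keep their own copies of the path (the guid column and the _wp_attached_file entries in wp_postmeta), so those need the same REPLACE treatment. As for question 3, WordPress of this era does let you point the upload folder elsewhere via the upload path setting, but every existing URL stored in the database would still need the same rewrite.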

    Read the article

  • How should I perform database maintenance on a 24x7 system

    - by solublefish
    I'm a software developer who inherited a part-time DBA role. I'm responsible for an application backed by a small, high-volume 24x7 database on SQL Server 2008. While there's other stuff in the DB, the critical piece is a 50GB, 7.5M-row table that serves 100K requests/sec during peak load, and about half that at "night". This is 99%+ read traffic, but the writes are constant, and required. I need to be able to perform periodic maintenance without a maintenance window: say, an index rebuild, a job to purge old data, Windows Update, or a hardware upgrade. Most of the advice I've seen is along the lines of "MAKE a maintenance window." While I appreciate the sentiment, I hope there's another way. If it will solve this problem, I do have the ability to purchase new hardware or modify the database, the clients (a set of web services servers), and much of the application code (ADO.NET + ASP.NET). I've been thinking along the lines of using the warm spare (or a third server) to do the maintenance, and then "swapping" it into production:

    1. Synchronize the spare by restoring backups, including a current transaction log.
    2. Perform the maintenance tasks.
    3. Reconfigure clients to connect to the spare server. Existing connections finish within a minute or so.
    4. The spare server is now the production server.

    The problem remaining is that the new production server is now out of date by however long it took to perform maintenance. Is there some way that the original production server can be made to queue up changes and merge them to the spare between steps 2 and 3? Any other ideas?

    Read the article

  • Windows Network File Transfer to Samba server: “Are you sure you want to copy this file without its properties?”

    - by jimp
    I am transferring a lot of files to a new NAS based on OpenMediaVault, with the Samba 3.5.6 service running. I am transferring from Windows 7 64-bit to the NAS, and on some media files Windows prompts about losing some property data in the transfer. I have never seen this before when transferring to Samba boxes I have built myself (vs. this turnkey solution), so I'm guessing there must be a Samba setting I can change to preserve the file properties in question, instead of permanently losing whatever they contain (Date Taken? Exposure? Flash Fired? etc.). Or maybe I've just never encountered this before; I'm really not sure. I tried adding ea support = yes and store dos attributes = yes to the [global] section, but the problem remains. The Linux filesystem is ext4 mounted with user_xattr (full options: defaults,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0), as Samba requires. Any ideas would be greatly appreciated. Thank you!

    Samba config:

        [global]
        workgroup = WORKGROUP
        server string = %h server
        include = /etc/samba/dhcp.conf
        dns proxy = no
        log level = 2
        syslog = 2
        log file = /var/log/samba/log.%m
        max log size = 1000
        syslog only = yes
        panic action = /usr/share/samba/panic-action %d
        encrypt passwords = true
        passdb backend = tdbsam
        obey pam restrictions = yes
        unix password sync = no
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        pam password change = yes
        socket options = TCP_NODELAY IPTOS_LOWDELAY
        guest account = nobody
        load printers = no
        disable spoolss = yes
        printing = bsd
        printcap name = /dev/null
        unix extensions = yes
        wide links = no
        create mask = 0777
        directory mask = 0777
        use sendfile = no
        null passwords = no
        local master = yes
        time server = yes
        wins support = yes
        ea support = yes
        store dos attributes = yes

    Note: I found a related question, but it explains the loss as due to transferring from NTFS to FAT32.
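    One thing worth verifying on the NAS itself is that extended attributes actually work on that mount, since store dos attributes keeps the DOS attribute bits in a user.DOSATTRIB xattr. A quick test (the mount point is a placeholder; setfattr/getfattr come with the attr package):

        cd /media/share
        touch xattr-test
        setfattr -n user.test -v hello xattr-test
        getfattr -d xattr-test      # should print user.test="hello"
        rm xattr-test

    If that fails, Samba has no way to store the extra metadata no matter what the config says; if it succeeds, the prompt may concern Windows-specific properties carried in alternate data streams, which need streams support (e.g. vfs objects = streams_xattr) rather than plain EA support.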

    Read the article

  • Excel data representation: show me all people who did not pass the exam

    - by dreftymac
    Background: I have an Excel spreadsheet with the results of a pass/no-pass exam. Students are allowed to take the exam as often as they want until they either pass or give up trying.

        student             ;; result  ;; date
        [email protected] ;; no-pass ;; 2000-06-07
        [email protected] ;; pass    ;; 2000-06-07
        [email protected] ;; pass    ;; 2000-06-07
        [email protected] ;; no-pass ;; 2000-06-07
        [email protected] ;; pass    ;; 2000-06-07
        [email protected] ;; pass    ;; 2000-06-08
        [email protected] ;; no-pass ;; 2000-06-08

    Question: Using a pivot table or something else, how can I get Excel to show me a clean report or representation of this data on another sheet that answers the question: who are all the people who took the exam but never got a passing grade? In the above example it would just show me [email protected] ;; no-pass ;; with all the dates that delta took the exam. I know Excel is not a database nor a reporting tool per se, but it would be great if I could get it to do this.
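    A pivot table with student as the row field and result as the column field gets close: anyone with a zero pass count qualifies. For comparison, here is the same logic outside Excel, run against a ";;"-delimited export of the three columns; the file name is a placeholder:

        # print every student whose rows never contain a 'pass' result
        awk -F';;' '
            NR == 1 { next }                     # skip the header row
            { gsub(/[[:space:]]/, "", $1); gsub(/[[:space:]]/, "", $2) }
            $2 == "pass" { passed[$1] = 1 }
            { seen[$1] = 1 }
            END { for (s in seen) if (!(s in passed)) print s }
        ' exam.csv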

    Read the article

  • How to make project auto-estimate duration based on work?

    - by Bruno Brant
    This one has bothered me for a long while. I like to do estimates by thinking about how much work a certain task will take (I'm in the IT business), so, let's say, it takes 12 hours of work to build a program. Now, say I tell Project that my beginning date is today. If I allocate one resource to this task, that should mean the task lasts 1.5 days, implying that it will end tomorrow. But right now, that is not what it's doing. I say that the task will take 1 hour, and when I add a resource to it, it allocates the resource on a [13%] basis, which means the duration is still fixed: Project is trying to make the task last for a day. I have, on many occasions, accomplished this. What I do is build a plan based on these rough estimates for effort, then allocate tasks to resources. Times conflict, so I level resources, and then Project magically tells me how long, in days, it will take. But every time I start estimating again, I end up having trouble working out how to make Project behave like that.

    Read the article

  • WSUS KB978338 Chain of Supersession Incorrect?

    - by Kasius
    The chain appears to be KB978338 to KB978886 to KB2563894 to KB2588516 (newest). All four of these updates are approved on our WSUS server. KB978338 is listed as Not Applicable on all machines, because it has been superseded. This is the behavior I would expect. However, our security office is reporting that KB978338 should still be installed on all machines because its actual effect is not replicated by any of the updates that follow it. Here is the analysis I was sent:

    - KB978886 applies to Vista SP1 only. The rollout of SP2 did not address the ISATAP vulnerability and reintroduces it.
    - KB2563894 only updates two files (Tcpip.sys and Tcpipreg.sys). It does not update the 12 other affected ISATAP, UDP, and NUD .sys and .dll files. (MS11-064)
    - KB2588516 addresses malformed continuous UDP packet overflow, but does not address the ISATAP-related NUD and TCP .sys and .dll files. (MS11-083)

    So yes, many IP vulnerabilities, but each KB addresses specific issues that do not cross over to the other KBs. We can install KB978338 by manually running the .MSU file, but we aren't certain whether that will overwrite the couple of files that get updated by later patches, since we would be installing the patch out of order. Is the above analysis correct? Is the chain of supersession incorrectly defined? If it is, what is the proper way to report it so that it can be changed by the correct Microsoft team? We are currently using 32-bit and 64-bit installations of Vista SP2. Note: I have posted this on TechNet as well, and will keep this question up-to-date with any information I get there.

    Read the article

  • wget crawling search results of news website

    - by kiltek
    I am trying to crawl the search results of a news website, www.voanews.com, using wget. After typing in my search keyword and clicking search, the site proceeds to the results. Then I can specify a "to" and a "from" date and hit search again. After this the URL becomes:

        http://www.voanews.com/search/?st=article&k=mykeyword&df=10%2F01%2F2013&dt=09%2F20%2F2013&ob=dt#article

    and the actual content of the results is what I want to download. To achieve this I created the following wget command:

        wget --reject=js,txt,gif,jpeg,jpg \
             --accept=html \
             --user-agent=My-Browser \
             --recursive --level=2 \
             www.voanews.com/search/?st=article&k=germany&df=08%2F21%2F2013&dt=09%2F20%2F2013&ob=dt#article

    Unfortunately, the crawler doesn't download the search results. It only gets into the upper link bar, which contains the "Home, USA, Africa, Asia, ..." links, and saves the articles they link to. It seems like the crawler doesn't check the search result links at all. What am I doing wrong, and how can I modify the wget command to download only the links in the search results list (and of course the sites they link to)?
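    Part of the problem may be shell quoting rather than wget itself: unquoted, each & sends the command to the background, so wget only ever sees the URL up to st=article and every date parameter is lost (the #article fragment is never sent to the server in any case). Quoting the URL is the first thing to try:

        wget --reject=js,txt,gif,jpeg,jpg \
             --accept=html \
             --user-agent=My-Browser \
             --recursive --level=2 \
             "http://www.voanews.com/search/?st=article&k=germany&df=08%2F21%2F2013&dt=09%2F20%2F2013&ob=dt"

    If results are still missing after that, note that --accept matches file-name suffixes, so result pages whose URLs don't end in .html can be discarded by the very filter meant to keep them.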

    Read the article

  • Exchange 2010 sends out spam.

    - by Magnus Gladh
    Hi. I have an Exchange Server 2010 that uses a smart host to send out mail. A day ago the owner of the smart host contacted us and told us that we are sending out spam. I have tried different open relay tests on the net, and all of them come back saying that this server is secured and cannot be used as a relay server. But I can see in my Exchange Queue Viewer that new messages keep coming in. Here is an example of how one looks:

        Identity: mailserver\3874\13128
        Subject: Olevererbart:: [email protected] Pfizer -75% now
        Internet Message ID: <[email protected]>
        From Address: <>
        Status: Ready
        Size (KB): 6
        Message Source Name: DSN
        Source IP: 255.255.255.255
        SCL: -1
        Date Received: 2010-12-09 21:46:22
        Expiration Time: 2010-12-11 21:46:22
        Last Error:
        Queue ID: mailserver\3874
        Recipients: [email protected]

    How can I secure our Exchange server further, to stop this from happening? Could I have caught a virus that hooks into our Exchange server and sends mail through it? Since the From Address is always <>, is there some way I can stop sending mails that don't have a from address? Please help.

    Read the article

  • Backup script to FTP with timed subfolders

    - by Frederik Nielsen
    I want to make a backup script that makes a .tar.gz of a folder I define, say, for example, /root/tekkit/world. This .tar.gz file should then be uploaded to an FTP server and named by the time it was uploaded, for example: 07-10-2012-13-00.tar.gz. How should such a backup script be written? I already figured out the .tar.gz part; I just need the naming and the uploading to FTP. I know that FTP is not the most secure way to do it, but as it is non-sensitive data, and FTP is the only option I have, it will do.

    Edit: I ended up with this script:

        #!/bin/bash

        # have some path predefined for backup unless one is provided as first argument
        BACKUP_DIR="/root/tekkit/world/"
        TMP_DIR="/tmp/tekkitbackup/"
        FINISH_DIR="/tmp/tekkitfinished/"

        # construct name for our archive
        TIME=$(date +%d-%m-%Y-%H-%M)

        if [ $1 ]; then
            BACKUP_DIR="$1"
        fi

        echo "Backing up dir ... $BACKUP_DIR"
        mkdir -p $TMP_DIR $FINISH_DIR    # both work dirs must exist before use
        cp -R $BACKUP_DIR $TMP_DIR
        cd $FINISH_DIR
        tar czvfp tekkit-$TIME.tar.gz -C $TMP_DIR .

        # create upload script for lftp
        cat <<EOF> lftp.upload.script
        open server
        user user password
        lcd $FINISH_DIR
        mput tekkit-$TIME.tar.gz
        exit
        EOF

        # start backup using lftp and script we created; if all went well print simple message and clean up
        lftp -f lftp.upload.script && ( echo Upload successful ; rm lftp.upload.script )
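    As a side note, lftp can also take the commands inline via -e, which avoids leaving a script file containing credentials on disk (same placeholder names as above):

        lftp -u user,password -e "lcd $FINISH_DIR; put tekkit-$TIME.tar.gz; bye" server

    Cleaning up $TMP_DIR at the end of the script would also keep /tmp from accumulating a copy of the world per run.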

    Read the article

  • How to move your Windows User Profile to another drive in Windows 8

    - by Mark
    I like to have my user folder on a different drive (D:) than the OS (C:). Reading the following post, I decided to give it a try. All went quite well, until I found out that my Windows 8 apps won't execute anymore (other than that, I haven't noticed any problems). The apps do work when using an account that hasn't been moved. In the Event Viewer I've found error messages like these:

        App <Microsoft.MicrosoftSkyDrive> crashed with an unhandled Javascript exception. App details are as follows:
        Display Name: <SkyDrive>
        AppUserModelId: <microsoft.microsoftskydrive_8wekyb3d8bbwe!Microsoft.MicrosoftSkyDrive>
        Package Identity: <microsoft.microsoftskydrive_16.4.4204.712_x64__8wekyb3d8bbwe>
        PID: <4452>
        The details of the JavaScript exception are as follows:
        Exception Name: <WinRT error>
        Description: <Loading the state store failed.>
        HTML Document Path: </modernskydrive/product/skydrive/App.html>
        Source File Name: <ms-appx://microsoft.microsoftskydrive/jx/jx.js>
        Source Line Number: <1>
        Source Column Number: <27246>
        Stack Trace:
            ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:27246 localSettings()
            ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:51544 _initSettings()
            ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:54710 getApplicationStatus(boolean)
            ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:48180 init(object)
            ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:45583 Application(number, boolean)
            ms-appx://microsoft.microsoftskydrive/modernskydrive/product/skydrive/App.html:216:13 Anonymous function(object)

    Using ProcMon, I see a lot of ACCESS DENIED messages, like this one:

        Date & Time:      12-9-2012 9:32:20
        Event Class:      File System
        Operation:        CreateFile
        Result:           ACCESS DENIED
        Path:             D:\Users\John\AppData\Local\Packages\microsoft.microsoftskydrive_8wekyb3d8bbwe\Settings\settings.dat
        TID:              2520
        Duration:         0.0000149
        Desired Access:   Read Data/List Directory, Write Data/Add File, Read Control
        Disposition:      OpenIf
        Options:          Sequential Access, Synchronous IO Non-Alert, No Compression
        Attributes:       N
        ShareMode:        None
        AllocationSize:   0

    Any idea how to solve this? I noticed that the app folders, e.g. D:\Users\john\AppData\Local\Packages\microsoft.microsoftskydrive_8wekyb3d8bbwe, had a different owner than the old profile folder: the old profile folder had john as owner, where the new profile folder had the Administrators group as owner. Changing this didn't help, unfortunately.

    Read the article

  • SQL Error (1064) when importing data from SQL file

    - by mejpark
    I have a MySQL database which was originally set up with the default latin1 character set and latin1_swedish_ci collation. I was using the database like this for some time, until I noticed strange characters on my production web site, which is powered by a database exported from my development machine. At this point, I changed the default character set of the database and tables to utf8 and the collation to utf8_unicode_ci, converted the latin1 data inside each table to utf8 (using the 'convert data' option), and exported the database as a single SQL file using HeidiSQL. When the resulting SQL file is opened in Notepad++, several characters are rendered incorrectly. For example, en dashes (-) are displayed as – and e with accent (é) is displayed as é. I changed the encoding of the file from ANSI to UTF-8 (using the encoding menu option in Notepad++) and the offending characters are rendered correctly. I saved the new utf8-encoded SQL file and attempted to import the contents into the MySQL database on my production server. The import process fails with the following error:

        /* SQL Error (1064): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '?# -------------------------------------------------------- # Host: ' at line 1 */
        /* Error with snippets directory: The specified path was not found */

    The head of the SQL file:

        # --------------------------------------------------------
        # Host:              127.0.0.1
        # Server version:    5.1.33-community
        # Server OS:         Win32
        # HeidiSQL version:  6.0.0.3773
        # Date/time:         2011-04-20 09:48:36
        # --------------------------------------------------------

    It chokes on the first line of the file, which is commented out. Why is this happening? I didn't have a problem loading data from SQL files until I changed the character set and collation of the database. I came up with an ugly workaround by performing the following steps:

    1. Export the database as a single SQL file using HeidiSQL.
    2. Open the resulting file in Notepad++ and convert it from ANSI to UTF-8 encoding.
    3. Create a new empty file in Notepad++, paste in the UTF-8 content and save the file normally.

    What am I missing here?
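    The '?#' in the error text is the classic signature of a UTF-8 byte-order mark (the bytes EF BB BF) that Notepad++'s "convert to UTF-8" writes at the start of the file; the MySQL client reads those bytes as garbage in front of the first comment, hence the failure at line 1. Saving as "UTF-8 without BOM" avoids it; alternatively the BOM can be stripped before import. A sketch assuming GNU sed and placeholder names:

        # remove a UTF-8 BOM from the start of the dump, in place
        sed -i '1s/^\xEF\xBB\xBF//' dump.sql

        # import with an explicit client character set
        mysql --default-character-set=utf8 -u dbuser -p dbname < dump.sql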

    Read the article

  • Debian Stable: Can't update kernel, libc won't update.

    - by pascal
    I use Debian stable (squeeze) on a virtual host where I can't touch the kernel; it's stuck (and will be for some time, as support told me) at:

        Linux 2.6.18-028stab070.3 #1 SMP Wed Jul 21 18:33:27 MSD 2010 x86_64

    So when I try to update, several packages fail with FATAL: kernel too old, for example:

        Preparing to replace libgcc1 1:4.6.0-11 (using .../libgcc1_1%3a4.6.1-1_amd64.deb) ...
        Unpacking replacement libgcc1 ...
        Setting up libgcc1 (1:4.6.1-1) ...
        FATAL: kernel too old
        Segmentation fault
        dpkg: error processing libgcc1 (--configure):
         subprocess installed post-installation script returned error exit status 139

    and some version chaos ensued:

        The following packages have unmet dependencies:
         libc-dev-bin : Depends: libc6 (> 2.13) but 2.11.2-13 is installed
         libc6 : Depends: libc-bin (= 2.11.2-13) but 2.13-5 is installed
         libc6-dev : Depends: libc6 (= 2.13-5) but 2.11.2-13 is installed
         libquadmath0 : Depends: gcc-4.6-base (= 4.6.0-2) but 4.6.0-11 is installed
         libstdc++6 : Depends: gcc-4.6-base (= 4.6.0-2) but 4.6.0-11 is installed
         locales : Depends: glibc-2.13-1

    What should I do? I want to keep the system up-to-date, so I want to pin as few packages as possible, but I also don't want to have to compile anything manually. While trying to pin the status quo, I figured out where the error comes from: ldconfig segfaults, and -v doesn't print anything more, so I can't tell what the actual problem is.

        # ldconfig
        FATAL: kernel too old
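    The glibc 2.13 packages being pulled in were built to require a newer kernel than the host's 2.6.18, so on a container whose kernel can't change, the practical move is holding glibc and friends at the squeeze versions. An untested sketch; the package list should be adjusted to match whatever apt complains about:

        printf '%s\n' 'Package: libc6' 'Pin: release a=stable' 'Pin-Priority: 1001' \
            > /etc/apt/preferences.d/glibc-pin
        # add an identical stanza for libc6-dev, libc-bin, libc-dev-bin and locales

        apt-get update
        apt-get -f install     # priority 1001 permits the downgrade back to 2.11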

    Read the article

  • ExpressCache not working after Windows 8 reinstall on Samsung Series 7 Gamer

    - by Morven
    I have a Samsung Series 7 Gamer laptop which came with Windows 8. After a reinstall of Windows, the ExpressCache software is no longer caching. Running eccmd -info shows me that the software is present and that it has the mSATA drive partition configured; however, it's not actually caching anything. These are the results after the system has been booted for days:

        C:\windows\system32> eccmd -info
        ExpressCache Command Version 1.0.94.0
        Copyright (c) 2010-2012 Condusiv Technologies.
        Date Time: 11/3/2013 12:26:20:263 (JAMETHIEL #36)

        EC Cache Info
        ==================================================
        Mounted                   : Yes
        Partition Size            : 7.46 GB
        Reserved Size             : 3.00 MB
        Volume Size               : 7.46 GB
        Total Used Size           : 86.50 MB
        Total Free Space          : 7.38 GB
        Used Data Size            : 16.63 MB
        Used Data Size on Disk    : 84.38 MB

        Tiered Cache Stats
        ==================================================
        Memory in use             : 32.00 MB
        Blocks in use             : 136
        Read Percent              : 0.02%

        Cache Stats
        ==================================================
        Cache Volume Drive Number : 1
        Total Read Count          : 97242
        Total Read Size           : 4.13 GB
        Total Cache Read Count    : 0
        Total Cache Read Size     : 595.50 KB
        Total Write Count         : 161546
        Total Write Size          : 5.89 GB
        Total Cache Write Count   : 0
        Total Cache Write Size    : 0 Bytes
        Cache Read Percent        : 0.01%
        Cache Write Percent       : 0.00%

    As you can see in the last two lines, the cache read and write percentages are close to zero. Anyone know where to look next? The only guides I can find deal with ExpressCache not being present or not having a configured drive.

    Read the article

  • Why Wireshark does not recognize this HTTP response?

    - by Alois Mahdal
    I have a trivial CGI script that outputs simple text content. It's written in Perl using the CGI module, and it specifies only the most basic headers:

        print $q->header(
            -type           => 'text/plain',
            -Content_length => $length,
        );
        print $stuff;

    There's no apparent issue with functionality, but I'm confused by the fact that Wireshark does not recognize the HTTP response as HTTP; it's marked as TCP. Here are the request and response:

        GET /cgi-bin/memfile/memfile.pl?mbytes=1 HTTP/1.1
        Host: 10.6.130.38
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:11.0) Gecko/20100101 Firefox/11.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: cs,en-us;q=0.7,en;q=0.3
        Accept-Encoding: gzip, deflate
        Connection: keep-alive

        HTTP/1.1 200 OK
        Date: Thu, 05 Apr 2012 18:52:23 GMT
        Server: Apache/2.2.15 (Win32) mod_ssl/2.2.15 OpenSSL/0.9.8m
        Content-length: 1048616
        Keep-Alive: timeout=5, max=100
        Connection: Keep-Alive
        Content-Type: text/plain; charset=ISO-8859-1

        XXXXXXXX...

    And here is the packet overview (the full packet is here on pastebin):

        No. Time     Source      srcp Destination dstp  Protocol Info                               tcp.stream abstime
        5   0.112749 10.6.130.38 80   10.6.130.53 48072 TCP      [TCP segment of a reassembled PDU] 0          20:52:23.228063

        Frame 5: 1514 bytes on wire (12112 bits), 1514 bytes captured (12112 bits)
        Ethernet II, Src: Dell_97:29:ac (00:1e:4f:97:29:ac), Dst: Dell_3b:fe:70 (00:24:e8:3b:fe:70)
        Internet Protocol Version 4, Src: 10.6.130.38 (10.6.130.38), Dst: 10.6.130.53 (10.6.130.53)
        Transmission Control Protocol, Src Port: http (80), Dst Port: 48072 (48072), Seq: 1, Ack: 330, Len: 1460

    Now, when I look at this in Wireshark, there's the usual TCP handshake, then the GET request shown as HTTP with a preview, and then the next packet contains the response, but it is not marked as an HTTP response, just as a generic "[TCP segment of a reassembled PDU]", and it is not caught by the http.response filter. Can somebody explain why Wireshark does not recognize it? Is there something wrong with the response?
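    This looks like reassembly presentation rather than a malformed response: with "Allow subdissector to reassemble TCP streams" enabled, Wireshark attributes the whole HTTP response to the frame that carries the last segment of the ~1 MB body, so the frame where the headers actually arrive shows only as a TCP segment, and http.response matches a much later frame. Turning desegmentation off shows the headers where they arrive; a sketch with tshark (older versions take -R instead of -Y):

        tshark -r capture.pcap -o tcp.desegment_tcp_streams:FALSE -Y http.response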

    Read the article

  • How should I configure nginx caching headers for a "baked" static file blog? (Octopress)

    - by Doug Stephen
    I recently deployed an Octopress blog (which is a blogging platform built around Jekyll). It's a static-site blog generator, with no dynamic content or databases to muck about with. It's being served up by nginx. My question is, what is the appropriate expires directive or Cache-Control header that I should set to make sure that visitors get the most up-to-date version of the site when they visit without having to manually refresh? Since the site is just .html files it seems to get cached pretty aggressively. I've tried a million different combinations of expires modified + xxxx and even straight up expires off but I can't seem to wrap my head around it. I'm very new to dealing with caching like this, specifically, on static files that change frequently, and obviously if the site hasn't been changed then I'd like for it to be served up out of the cache. Update (still not solved though): I found open_file_cache, tweaked that. Still no dice. It seems like what I might want to do is use nginx as a proxy cache and use Apache with ETags? Is there really no convenient way to make nginx play nicer with conditional requests from the client? TL;DR: I'm running a static-file blog and I'd like to set up nginx to only serve from the cache if the blog hasn't been updated recently, but I'm too stupid to figure it out myself because I'm relatively new to web servers.
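    Conditional requests may already be closer than they seem: for static files nginx sends Last-Modified on its own (and ETag in newer releases), so a directive such as expires epoch, which emits "Cache-Control: no-cache", makes browsers revalidate on every visit and receive a cheap 304 until the file changes. A quick way to watch this from the command line, with a placeholder hostname:

        # what validators and caching headers is nginx sending?
        curl -sI http://blog.example.com/index.html

        # replay the validator; an unchanged file should return 304 Not Modified
        curl -sI http://blog.example.com/index.html \
             -H 'If-Modified-Since: Thu, 05 Apr 2012 18:52:23 GMT'

    (The date is whatever Last-Modified value the first request printed.)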

    Read the article

  • Cisco 7206 error trying to copy running-config (Bad file number)

    - by jasondewitt
    I have a Cisco 7206 that terminates a bunch of PPPoA sessions for DSL users. Today I noticed that if I try to "show run", nothing happens: it doesn't show anything and just sends me right back to the command prompt. I decided I should probably try to back up the config, and that is where I'm stuck. Any time I try to copy the running-config to tftp, or to a PCMCIA card that I know is not full, I get the following error:

        %Error opening system:/running-config (Bad file number)

    I get this error whenever I try to do anything with the running-config. I've been googling around, but I haven't found anything else that talks about this error. I've seen people say to erase the NVRAM and then try "copy run start", but I don't want to erase the NVRAM until I can pull off a copy of the running-config. I would try rebooting it, but the startup-config in NVRAM looks to be woefully out of date (good job, me!). Any ideas what might be wrong, or how I can get the running-config off the router?

    Read the article

  • GUI interfaces to ATI card behave weirdly out of the box and after updates.

    - by jdk
    My Lenovo W500 came with an ATI Mobility FireGL V5700, and both the Catalyst Control Center software and the Vista display manager show four monitors. What's really annoying is the behaviour: my two active displays (the laptop display plus my external monitor) are always #3 and #4 respectively, which doesn't make sense. This is out of the box. Additionally, drag and drop is jumpy, and displays #1 and #2 (always inactive, because they don't exist to the software) often prevent me from dragging #3 and #4 to the rightmost side. They also auto-snap to weird positions, and certain sensible arrangements, like placing one directly on top of the other, are not possible. The exact same annoyances are present when using the Windows display manager too. In other words, the interface is crap, and I'm looking for a fix that's not wishing I had gone with nVidia instead. I've updated the drivers and Catalyst Control Center, and have the latest Windows and AMD/ATI updates. Any thoughts?

        Graphics Software
        Driver Packaging Version     8.563.2.1-090401a-079160C-Lenovo
        Provider                     ATI Technologies Inc.
        2D Driver Version            7.01.01.849
        2D Driver File Path          /REGISTRY/MACHINE/SYSTEM/ControlSet001/Control/Class/{4D36E968-E325-11CE-BFC1-08002BE10318}/0001
        Direct3D Version             7.14.10.0630
        OpenGL Version               6.14.10.8306
        Catalyst Control Center Version  2009.0401.1328.22301

        Graphics Hardware (Primary Adapter)
        Graphics Card Manufacturer   Powered by ATI
        Graphics Chipset             ATI Mobility FireGL V5700
        Device ID                    9591
        Vendor                       1002
        Subsystem ID                 2126
        Subsystem Vendor ID          17AA
        Graphics Bus Capability      PCI Express 2.0
        Maximum Bus Setting          PCI Express 2.0 x16
        BIOS Version                 010.088.000.021
        BIOS Part Number             BK-ATI VER010.088.000.021.034663
        BIOS Date                    2009/09/30
        Memory Size                  512 MB
        Memory Type                  DDR3
        Core Clock in MHz            600 MHz
        Memory Clock in MHz          700 MHz

    Read the article

  • How to increase the speed between two external hard drives on my laptop?

    - by Roman
    Hello, I own a Sony Vaio Z laptop with two external USB ports. It's quite new and has USB 2.0 support. I'm using Vista x64 on it. I also have two external USB hard drives, an Iomega 500 GB and a WD 1 TB; both support USB 2.0. I connect the two devices to my laptop and try to copy data from one hard drive to the other, but it takes a lot of time! The speed is about 15 megabytes per second, and I have to wait far too long to copy all the information from one drive to the other. When I copy from my internal (SSD) hard drive instead, it works fine with both external drives: the speed is very high, something like 100 megabytes per second, which makes me think USB 2.0 is working properly on both drives. I checked Device Manager, and here are the settings I have (sorry, I can't upload the image because of my rating; see this URL: http://picbite.com/image/122073daljo/ ). I think it's because my two external drives use the same USB 2.0 controller. Is there any way to make this work faster? Is it possible to move one of my USB ports to the other USB 2.0 controller? Or is there any software which can help me automate copying all the files through my internal drive? I only have about 3 gigabytes of free space on the internal drive, so it's quite difficult to manually move every file from one hard drive to the internal one and then again to the other drive.

    Read the article
