Search Results

Search found 3241 results on 130 pages for 'extract'.


  • Office 2007 constantly crashes, logged as Event ID 1000

    - by Nori
    I have a user who, despite my best efforts, is having constant Office 2007 crashes. I've tried deleting their profile and setting it up again, repairing Office, uninstalling completely and then reinstalling, and swapping out memory sticks. One event log error I keep getting is the following (note that all the Office errors are event ID 1000):
      Faulting application name: OUTLOOK.EXE, version: 12.0.6539.5000, time stamp: 0x4c12486d
      Faulting module name: EMSMDB32.DLL, version: 12.0.6539.5000, time stamp: 0x4c1246f8
      Exception code: 0xc0000005
      Fault offset: 0x0005d8e2
      Faulting process id: 0xf6c
      Faulting application start time: 0x01cb6633f33384f3
      Faulting application path: C:\Program Files (x86)\Microsoft Office\Office12\OUTLOOK.EXE
      Faulting module path: c:\progra~2\micros~1\office12\EMSMDB32.DLL
      Report Id: 0d4a2eab-d231-11df-80a0-4061868f5d10
    I also get this:
      Faulting application name: OUTLOOK.EXE, version: 12.0.6539.5000, time stamp: 0x4c12486d
      Faulting module name: olmapi32.dll, version: 12.0.6538.5000, time stamp: 0x4bfc6ad9
      Exception code: 0xc0000005
      Fault offset: 0x002357a9
      Faulting process id: 0x5e4
      Faulting application start time: 0x01cb661f4546aa77
      Faulting application path: C:\Program Files (x86)\Microsoft Office\Office12\OUTLOOK.EXE
      Faulting module path: c:\progra~2\micros~1\office12\olmapi32.dll
      Report Id: a4a90658-d224-11df-80a0-4061868f5d10
    The Excel error is this:
      Faulting application name: EXCEL.EXE, version: 12.0.6535.5002, time stamp: 0x4bd2a7f1
      Faulting module name: KERNELBASE.dll, version: 6.1.7600.16385, time stamp: 0x4a5bdbdf
      Exception code: 0xe06d7363
      Fault offset: 0x0000b727
      Faulting process id: 0x14a8
      Faulting application start time: 0x01cb61ab7bc0abab
      Faulting application path: C:\Program Files (x86)\Microsoft Office\Office12\EXCEL.EXE
      Faulting module path: C:\Windows\syswow64\KERNELBASE.dll
      Report Id: ba0c454b-cd9e-11df-80a0-4061868f5d10
    I have also gotten this for PowerPoint:
      Faulting application name: POWERPNT.EXE, version: 12.0.6500.5000, time stamp: 0x49a68f9d
      Faulting module name: COMShim.dll, version: 2010.3.325.110, time stamp: 0x4c51e0b1
      Exception code: 0x40000015
      Fault offset: 0x0001e388
      Faulting process id: 0x1480
      Faulting application start time: 0x01cb5fe9a0660e81
      Faulting application path: C:\Program Files (x86)\Microsoft Office\Office12\POWERPNT.EXE
      Faulting module path: C:\Program Files (x86)\FactSet\COMShim.dll
      Report Id: e03d2a21-cbdc-11df-9bc8-4061868f5d10
    (Some of the above lines have been edited to keep you from scrolling horizontally.)
    Lastly, I get this error several times a day. I don't think it is related, but maybe it is:
      Failed extract of third-party root list from auto update cab at: http://www.download.windowsupdate.com/msdownload/update/v3/static/trustedr/en/authrootstl.cab with error: A required certificate is not within its validity period when verifying against the current system clock or the timestamp in the signed file.
    Any ideas? This is driving me nuts.


  • nginx error page and internal directives not working as expected

    - by Romain
    I'd like to set up my nginx server to return a specific error page on HTTP 50x status codes, and I'd like this page to be unavailable via a direct request from users (e.g., http://mysite/internalerror). For that, I'm using nginx's internal directive, but I must be missing something: when I put that directive on my /internalerror location, nginx returns a custom 404 error (which isn't even my own 404 error page) when a page crashes.
    So, to summarize, here's what seems to happen:
    - GET /Home
    - nginx passes the query to Python
    - I'm simulating an application bug to get the 502 error code
    - nginx tries to return /InternalError from its error_page rule
    - because of the internal rule, it finally falls back to a custom 404 error code <-- why? The documentation says error_page directives are not affected by internal: http://wiki.nginx.org/HttpCoreModule#internal
    Here's an extract from nginx.conf with a few comments to point things out:
      error_page 404 /NotFound;
      error_page 500 502 503 504 =500 /InternalError; # HTTP 500 Error page declaration
      location / {
          try_files /Maintenance.html $uri @pythonbackend;
      }
      location @pythonbackend {
          include uwsgi_params;
          uwsgi_pass unix:///tmp/uwsgi.sock;
      }
      location ~* \.(py|pyc)$ {
          # This internal location works OK and returns my own 404 error page
          internal;
      }
      location /__Maintenance.html {
          # This one also works fine
          internal;
      }
      location ~* /internalerror {
          # This one doesn't work and returns nginx's 404 error page when I trigger an error somewhere on my site
          internal;
      }
    Thanks very much for your help!!


  • unreadable corrupted ntfs partition - lost clusters reported

    - by Eduardo Martinez
    Hi, Partition Magic is reporting multiple 'bad file record signature' and 'lost clusters' errors on my 250 GB Samsung SATA disk (connected via USB on an XP SP3 machine). Unfortunately PM is unable to fix them. PM shows the drive as being NTFS, detects the used space OK and also the drive name, but the PM browser (right click on partition, browse...) won't show anything (as if the disk were empty). Windows Explorer is not even picking up the drive name and reports 'the file or directory is corrupted and unreadable'. The PTDD Partition Table Doctor demo tells me the boot sector is fine, and I can see all the disk content in its browser - but crucially I cannot copy that content over to a new disk (the PTDD browser is pretty arid, to say the least). I have also tried:
    - photorec 6.11.3 - it actually started to extract files but wouldn't keep file names or any folder structure (maybe I missed something in the configuration options)
    - Find and Mount - the intellectual scan went well and the only partition on the disk was detected; I then tried to mount it as p: but got this error in Windows Explorer: 'p:\ is not accessible. The media is write protected'. Find and Mount allows you to create an image from a partition, but I don't have a disk big enough at hand. Does anyone know if this will keep the extracted file/folder structure intact?
    I'm starting to think the disk is pretty screwed and my chances of recovering this data are slim. Please someone enlighten me with that marvellous piece of software I am missing :-) Thanks in advance


  • Problem restoring from tar backup: why are there /dev/disk/by-id/ symlinks and how can I avoid them?

    - by SK.
    Hello, I'm trying to make a bare-bones backup system with the most basic tools available on openSUSE 11.3 (in this case: bash, fdisk, tar & grub legacy). Here's the workflow for my scripts:
    backup.sh (run from an external system, e.g. a LiveCD):
    - make an fdisk script ($fscript) from fdisk -l's output [works]
    - mount the partitions from the system's fstab [works]
    - tar the crucial stuff into file.tgz [works]
    restore.sh (run from an external system, e.g. a LiveCD):
    - run fdisk $dest < $fscript to restore partitioning [works]
    - format and mount partitions from the system's fstab [fails]
    - extract from file.tgz [works when mounting manually]
    - restore grub [fails]
    I have recently noticed that openSUSE (though I'm sure it has nothing to do with the distro) has different output in /etc/fstab and /boot/grub/menu.lst; more precisely, the partition name is, for example, "/dev/disk/by-id/numbers-brandname-morenumbers-part2" instead of "/dev/sda2" -- but it basically is a simple symlink. My questions about this:
    - what is the point of such symlinks, especially if we're restoring on a different disk?
    - is there a way to cleanly prevent the creation of those symlinks and use the "true" /dev/sdx everywhere instead?
    - if the previous is no, do you know a way to replace those symlinks on the fly in a text file? I tried this script, but it only works if the line starts with the symlink (the case for fstab, not menu.lst):
      ### search and replace /dev/disk/by-id/... to /dev/sdx
      while read oldVolume rest; do  # get first element, ignore rest of line
          if [[ "$oldVolume" =~ ^/dev/disk/by-id/.*(-part[0-9]*$)? ]]; then
              newVolume=$(readlink $oldVolume)       # replace pointer by pointee, returns "../../sdx"
              echo /dev/${newVolume##*/} $rest >> TMP  # format to "/dev/sdx", write line
          else
              echo $oldVolume $rest >> TMP           # nothing to do
          fi
      done < $file
      mv -f TMP $file  # save changes
    I've had trouble finding a solution to this on Google, so I was hoping some of the members here could help me. Thank you.
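    Not part of the original post, but here is a minimal sketch of an alternative that rewrites by-id references wherever they appear on a line (not only at the start), so it should handle menu.lst as well as fstab. It assumes $file is set as in the script above and that the /dev/disk/by-id symlinks actually resolve on the system where it runs:
      # collect each distinct by-id path in the file, resolve it, and substitute in place
      for dev in $(grep -o '/dev/disk/by-id/[^[:space:]"]*' "$file" | sort -u); do
          real=$(readlink -f "$dev")   # e.g. /dev/sda2
          [ -n "$real" ] && sed -i "s|$dev|$real|g" "$file"
      done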


  • openSSL tutorial not fully working - Can sign but cannot restore original file

    - by djechelon
    I'm writing, and testing, a little tutorial for my groupmates involved in an OpenSSL homework. We have a bunch of PDF files; I'm the CA and each one should send me a signed PDF for me to verify. I've told them to do the following (and tried to do it myself):
    1. Request and obtain a certificate (I'll skip this part)
    2. Create a MIME message with the PDF file in it:
       makemime -c "text/pdf" -a "Content-Disposition: attachment; filename="Elaborato.pdf" Elaborato.pdf > Elaborato.pdf.msg
    3. Sign with OpenSSL:
       openssl smime -sign -in Elaborato.pdf.msg -out Elaborato.pdf.p7m -certfile ca.pem -certfile nomegruppo.crt -inkey nomegruppo.key -signer nomegruppo.crt
    4. Verify with:
       openssl smime -verify -in Elaborato.pdf.p7m -out Elaborato-verified.msg -CAfile ca.pem -signer nomegruppo.crt
    5. Extract the attachment with:
       munpack Elaborato-verified.msg
    6. View with Acrobat Reader
    The problem is that even if I get a file that (from its binary content) resembles a PDF file, my current Ubuntu PDF viewer doesn't read it. The XXXElaborato.pdf extracted by munpack is a little bit smaller than the original. What's the problem with this procedure? In theory, they should send me the signed S/MIME message and I should be able to read the PDF within it. Why can't I restore the original content of the PDF file?
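    One thing worth checking - an assumption on my part, not something from the original post: openssl smime treats its input as text by default and canonicalizes line endings, which can silently alter binary payloads like PDFs and would explain the size difference after verification. A minimal sketch that skips makemime and signs the PDF in binary mode:
      # sign the raw PDF directly; -binary prevents MIME text canonicalization
      openssl smime -sign -binary -in Elaborato.pdf -out Elaborato.pdf.p7m \
          -signer nomegruppo.crt -inkey nomegruppo.key -certfile ca.pem
      # verification then writes the untouched PDF back out
      openssl smime -verify -in Elaborato.pdf.p7m -CAfile ca.pem -out Elaborato-verified.pdf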


  • Tool to Save a Range of Disk Clusters to a File

    - by Synetech inc.
    Hi, yesterday I deleted a (fragmented) archive file, only to find that it had not extracted correctly, so I was left stranded. Fortunately there was not much free space on the drive, so most of the space marked as free was from the now-deleted archive. I pulled up a disk editor and - painfully - managed to get a list of cluster ranges from the FAT that were marked as unused. My task then was to save these ranges of clusters to files so that I could examine them, try to determine which parts were from the archive, and recombine them to attempt to restore the deleted file. This turned out to be a huge pain in the butt because the disk editor did not have the ability to select a range of clusters, so I had to navigate to the start of each range and hold down Ctrl+Shift+PgDn until I reached the end of the range (which usually took forever!). I did a quick Google search to see if I could find a command-line tool (preferably with Windows and DOS versions) that would let me issue commands such as:
      SAVESECT -c 0xBEEF 0xCAFE FOO.BAR   ::save clusters 0xBEEF-0xCAFE to FOO.BAR
      SAVESECT -s 1111 9876 BAZ.BIN       ::save sectors 1111-9876 to BAZ.BIN
    Sadly my search came up empty. Any ideas? Thanks!
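    Not from the post, just a sketch of how something close to this can be approximated with dd (ports of which exist for Windows and DOS). All the numbers are hypothetical: it assumes 4096-byte clusters, and on FAT the cluster numbers count from the data area (cluster 2 is the first data cluster), so that offset has to be folded into skip:
      CLUSTER=4096
      DATA_OFFSET=250     # hypothetical: start of the FAT data area, expressed in cluster-sized blocks
      FIRST=$((0xBEEF)); LAST=$((0xCAFE))
      # dump the chosen cluster range of the volume into FOO.BAR, one cluster per block
      dd if=/dev/sdb1 of=FOO.BAR bs=$CLUSTER skip=$((DATA_OFFSET + FIRST - 2)) count=$((LAST - FIRST + 1))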


  • Linux drivers for laser printer Konica Minolta Magicolor 4750DN

    - by user51166
    I would like to install the Konica Minolta Magicolor 4750DN in Linux (Debian 64-bit; I know it's not really supported, but that's not the issue right now), but all the manual says is "put the CD-ROM in and copy the drivers and PPD file". However, I did not get the CD! On their "fantastic" internet site (...) only drivers for Windows and Mac OS X are available. I tried to extract the PPD file from the .dmg file for Mac OS X 10.7: the PPD file works, but a compiled file (Mac-only; a Mach binary for 4 architectures, says the "file" command) does not (obviously "cannot execute binary file", since I'm trying to run a Mac binary on Linux).
    Is there anybody who has the same printer who could lend me the Linux drivers from the CD-ROM? I couldn't find them anywhere on the internet. Is there any way to execute a Mach (or BSD) binary file on Linux? (I don't think it is possible, although some "emulators" may exist.)
    Thank you very much. I bought this printer partly because it was advertised as "Linux compatible", only to get this bad surprise. I would be grateful if you could help me solve this problem.


  • Directory tree in a Resource without extraction...

    - by Corelgott
    Hi all, I am looking for a way to store a complete directory, including subdirectories, in an application's resources and use it without having to extract it. Details: we would like to use GeckoFX (Gecko as a C# component) in one of our applications. GeckoFX needs the XULRunner and needs to find its folder structure. We also have some other data which I would prefer not to extract onto the customer's PC - at least not onto something persistent like an HDD... Getting the complete directory into the resources is not that big of a deal: compress it to one file and done. But using it without writing it to disk is something else. I have a strong dislike of temp folders and such things. Would anything like a RAM drive be possible - some part of the RAM being mounted? Does something like this even exist as a library, or would it only be possible with a device driver? Any thoughts on this? Thanks in advance! Corelgott


  • Maintaining "Portability" Between Linux and Windows 7

    - by lokheart
    I use the following approach on my office Windows 7 machine to maintain my "portability" for when disaster strikes and I need to switch computers without the luxury of time to reinstall all my programs on the new PC:
    - The majority of the programs I use are portable, mostly from portableapps.com (Notepad++, GIMP, even R). I extract them and store them in a folder in My Documents, in a structure similar to the default PortableApps layout when they are installed to a thumb drive.
    - Only a few programs have no portable version, and those I install as usual.
    - All of my working files are stored in a folder in My Documents.
    - I regularly back everything up using SyncBack, because it can keep versions of my backups, and the backup is stored on a portable drive.
    The day I need to switch computers, the operation is relatively simple: I just move the two folders mentioned above into the My Documents folder of the new PC, install those few "non-portable" programs, and it is almost done; some minor hiccups can be solved by reinstalling the portable app into the folder. Overall it is a smooth process.
    I would like to maintain the same degree of "portability" on my home Linux desktop (Ubuntu or Mint, I'm still deciding), that is: if my Linux install crashes and I need to reinstall it, all I need to do is move the two folders back into the new Linux system and most of my work is almost ready to be worked on again. But I don't know how to find a Linux alternative to PortableApps. Being new to Linux, can anyone tell me whether this is possible?


  • File Sync Solution for Batch Processing (ETL)

    - by KenFar
    I'm looking for a slightly different kind of sync utility - not one designed to keep two directories identical, but rather one intended to keep files flowing from one host to another. The context is a data warehouse that currently has a custom-developed solution that moves 10,000 files a day, some of which are 1+ GB gzipped files, between Linux servers via ssh. Files are produced by the extract process, then moved to the transform server where a transform daemon is waiting to pick them up. The same process happens between transform & load. Once the files are moved, they are typically archived on the source for a week, and the downstream process likewise moves them to temp and then archive as it consumes them. So, my requirements & desires:
    - It is never used to refresh updated files - only used to deliver new files.
    - Because it's delivering files to downstream processes, it needs to rename the file once done so that a partial file doesn't get picked up (a sketch of this handoff follows below).
    - In order to simplify recovery, it should keep a copy of the source files - but rename them or move them to another directory.
    - If the transfer fails (network down, file system full, permissions, file locked, etc.), it should retry periodically - and never fail in a non-recoverable way, or in a way that sends the file twice or never sends it.
    - Should be able to copy files to 2+ destinations.
    - Should have a consolidated log so that it's easy to find problems.
    - Should have an optional checksum feature.
    Any recommendations? Can Unison do this well?
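    Not from the post - just a sketch of the copy-then-rename handoff described in the second requirement, so a consumer never sees a partial file; the hostnames, paths and file name are made up:
      f=/data/extract/outbound/orders_20120101.gz      # hypothetical outbound file
      scp "$f" transform-host:"/incoming/.$(basename "$f").part" \
        && ssh transform-host "mv '/incoming/.$(basename "$f").part' '/incoming/$(basename "$f")'" \
        && mv "$f" /data/extract/archive/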


  • SMTP message rate control on Ubuntu 8.04, preferably with postfix

    - by TimDaMan
    Maybe I am chasing a bug, but I am trying to set up an SMTP proxy of sorts. I have a Postfix server which receives all the email for a collection of servers/clients. It then uses a smarthost (relayhost=...) to forward its mail to our corporate MTA. I would like to limit the number of messages an individual server can relay, to prevent swamping the corporate MTA. Postfix has a program called "anvil" that is capable of tracking stats about mail to be used for such things, but it doesn't seem to be executed. I ran "inotifywait -m /usr/lib/postfix/anvil" while I started Postfix and sent a number of messages through it from a remote server; inotifywait indicated anvil was never run. Has anyone gotten postfix/anvil rate controls to work?
    main.cf:
      smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
      biff = no
      append_dot_mydomain = no
      readme_directory = no
      myhostname = site-server-q9
      alias_maps = hash:/etc/aliases
      alias_database = hash:/etc/aliases
      myorigin = /etc/mailname
      mydestination = localhost
      relayhost = Our outgoing mail relay
      mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 10.0.0.0/8
      mailbox_size_limit = 0
      recipient_delimiter = +
      inet_interfaces = 10.X.X.X
      smtpd_client_message_rate_limit = 1
      anvil_rate_time_unit = 1h
    master.cf extract:
      anvil     unix  -       -       -       -       1       anvil
      smtp      inet  n       -       -       -       -       smtpd
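    A guess on my part, not something from the post: anvil's limits do not apply to clients listed in smtpd_client_event_limit_exceptions, which defaults to $mynetworks, and the relaying servers here fall inside 10.0.0.0/8 - that would explain why anvil never even gets started. A sketch of how to test that theory (the values are illustrative only):
      # stop exempting LAN clients from the rate limits, and use a saner limit than 1/hour
      postconf -e 'smtpd_client_event_limit_exceptions = 127.0.0.0/8'
      postconf -e 'smtpd_client_message_rate_limit = 60'
      postconf -e 'anvil_rate_time_unit = 60s'
      postfix reload
      # anvil logs its statistics periodically; watch for them
      tail -f /var/log/mail.log | grep -i anvil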


  • How can I avoid a few seconds of blank video when using -vcodec copy?

    - by arlomedia
    I'm processing user-uploaded videos on a CentOS web server with ffmpeg. I need to convert each video to a standard size and format, then extract a 30-second sample clip from each video. I want to use the "-vcodec copy" flag in the extraction command to avoid encoding a second time. This command works for my initial conversion:
      ffmpeg -i uploaded.mov -f mp4 -vcodec libx264 -vpre medium -acodec libfaac -r 15 -b 360k -ab 48k -ar 22050 -s 480x320 formatted.mp4
    And this sometimes works for the extraction:
      ffmpeg -i formatted.mp4 -vcodec copy -acodec copy -ss 0 -t 30 formatted_sample.mp4
    However, when I run the extraction command on some videos, the extracted sample clip starts with several seconds of blank video. The audio starts right away but the video doesn't start for 3-6 seconds. To demonstrate the problem, I've uploaded two video clips and run the above commands on them. I created the first clip in Final Cut Express and encoded it with Handbrake before uploading to the web server:
    - 1a) uploaded clip
    - 1b) converted with first command
    - 1c) extracted with second command, missing first six seconds
    By comparison, this second clip comes from Apple's website and does not show the problem:
    - 2a) uploaded clip
    - 2b) converted with first command
    - 2c) extracted with second command, no problem
    Can anyone see what's different about the two source clips? And if so, is there anything I can do in my conversion command so that when the extraction command runs, the clip is set up to avoid the missing video? By the way, I initially had the problem with ffmpeg 0.6.1 installed from yum, but I upgraded to the latest git version and the problem remains.
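    An assumption on my part, not something from the post: with -vcodec copy the cut has to begin on a keyframe, so if the source's first keyframe after the requested start point is several seconds in, players show blank video until it arrives. A sketch of a conversion command that forces frequent keyframes so the copy-mode cut has somewhere clean to start (the -g value of 15, one keyframe per second at 15 fps, is a guess to tune):
      ffmpeg -i uploaded.mov -f mp4 -vcodec libx264 -vpre medium -g 15 \
             -acodec libfaac -r 15 -b 360k -ab 48k -ar 22050 -s 480x320 formatted.mp4
      # extraction unchanged; it should now find a keyframe at (or very near) -ss 0
      ffmpeg -i formatted.mp4 -vcodec copy -acodec copy -ss 0 -t 30 formatted_sample.mp4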


  • Varnish gets in a restart loop and causes the system to lock up; how can I fix?

    - by chrism2671
    Here is an extract from the syslog:
      Mar 2 14:01:10 ip-10-226-34-17 varnishd[20204]: Child (20205) not responding to ping, killing it.
      Mar 2 14:01:16 ip-10-226-34-17 varnishd[20204]: Child (20205) not responding to ping, killing it.
      Mar 2 14:01:16 ip-10-226-34-17 varnishd[20204]: Child (20205) died signal=3
      Mar 2 14:01:21 ip-10-226-34-17 varnishd[20204]: Child cleanup complete
      Mar 2 14:01:21 ip-10-226-34-17 varnishd[20204]: child (13224) Started
      Mar 2 14:01:21 ip-10-226-34-17 varnishd[20204]: Child (13224) said Closed fds: 4 5 6 10 11 13 14
      Mar 2 14:01:21 ip-10-226-34-17 varnishd[20204]: Child (13224) said Child starts
      Mar 2 14:01:21 ip-10-226-34-17 varnishd[20204]: Child (13224) said managed to mmap 536870912 bytes of 536870912
      Mar 2 14:01:21 ip-10-226-34-17 varnishd[20204]: Child (13224) said Ready
      Mar 2 14:01:35 ip-10-226-34-17 varnishd[20204]: Child (13224) not responding to ping, killing it.
      Mar 2 14:02:10 ip-10-226-34-17 last message repeated 7 times
      Mar 2 14:03:15 ip-10-226-34-17 last message repeated 13 times
      Mar 2 14:03:20 ip-10-226-34-17 varnishd[20204]: Child (13224) not responding to ping, killing it.
      Mar 2 14:05:53 ip-10-226-34-17 varnishd[20204]: Child (13224) not responding to ping, killing it.
      Mar 2 14:05:53 ip-10-226-34-17 varnishd[20204]: Child (13224) not responding to ping, killing it.
      Mar 2 14:05:53 ip-10-226-34-17 varnishd[20204]: Child (13224) died signal=3
      Mar 2 14:05:53 ip-10-226-34-17 varnishd[20204]: Child cleanup complete
      Mar 2 14:05:53 ip-10-226-34-17 varnishd[20204]: child (13288) Started
    I'm not expecting a solution here, but any help just to decode what each line is doing would be very instructive. Many thanks!
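    Not from the post, but a rough reading of the lines plus one low-risk experiment. The manager process (varnishd[20204]) pings its child worker over the internal CLI; when the child misses pings it is killed (died signal=3), cleaned up, and a new child is started, which then fails pings again - hence the loop. If the child is merely too busy or too starved to answer in time, lengthening the ping deadline can break the loop long enough to investigate. A sketch, assuming varnishd's start-up options can be edited in the init script (the invocation below is hypothetical, mirroring the 512 MB malloc storage the log suggests):
      # raise the CLI ping deadline from the default of about 10 seconds
      varnishd -f /etc/varnish/default.vcl -a :80 -s malloc,512M -p cli_timeout=30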


  • Exchange 2003: recover mailbox when migrated to 2007

    - by lyngsie
    We have migrated almost all of our mailboxes from Exchange 2003 to 2007, so both are still up and co-existing in our domain. One mailbox is missing some mails, and I tried to recover it from an Exchange 2003 backup using ExMerge as usual, but it throws an error:
      Error opening message store (MSEMS). Verify that the Microsoft Exchange Information Store service is running and that you have the correct permissions to log on. (0x8004011d)
    This is apparently normal when a mailbox has been migrated to 2007 and you try to recover a 2003 backup, as stated here: http://www.petri.co.il/restoring-exchange-2003-mailboxes-exchange-2007-exmerge.htm
    We then tried the solution given in the article (copying the value "homeMDB" for the user and pasting it into "msExchOrigMDB" on the Recovery Storage Group object in ADSI Edit). The error unfortunately persists.
    So my question is: how can I extract a .pst from a mailbox in the Recovery Storage Group when both Exchange 2003 and 2007 are running? I assume this is why the solution in the article didn't work. If no solution is possible using the regular tools (ExMerge, eseutil, etc.), which third-party tool should we choose - preferably the cheapest, as we only need this one mailbox recovered.


  • LVM2 volume group lost

    - by MrG
    I updated one of my servers, and - although I took care not to modify it - the volume groups on /dev/sdb1 were lost, although the physical volumes seem to still be there:
      [root@server ~]# pvscan
        PV /dev/sda2   VG VolGroup   lvm2 [465,16 GiB / 0    free]
        PV /dev/sdb1               lvm2 [1,82 TiB]
        Total: 2 [2,27 TiB] / in use: 1 [465,16 GiB] / in no VG: 1 [1,82 TiB]
      [root@server ~]# pvs -v
        Scanning for physical volume names
        PV         VG       Fmt  Attr PSize   PFree DevSize PV UUID
        /dev/sda2  VolGroup lvm2 a--  465,16g     0 465,16g HftbaD-MBs0-3p7D-6O13-CrzU-T9Gb-6W0ofB
        /dev/sdb1           lvm2 a--    1,82t 1,82t   1,82t dD4XZP-WStA-61xV-5Sff-ifmW-R4rR-JenHoU
      [root@server ~]# pvck -d -v /dev/sdb1
        Scanning /dev/sdb1
        Found label on /dev/sdb1, sector 1, type=LVM2 001
        Found text metadata area: offset=4096, size=1044480
        Found LVM2 metadata record at offset=10752, size=1037824, offset2=0 size2=0
        Found LVM2 metadata record at offset=9216, size=1536, offset2=0 size2=0
        Found LVM2 metadata record at offset=7168, size=2048, offset2=0 size2=0
        Found LVM2 metadata record at offset=5632, size=1536, offset2=0 size2=0
    I attempted to fix it as described here and was able to extract the 4 metadata sets listed above (using e.g. dd bs=1 skip=5632 count=1536 if=/dev/sdb1 of=output.file), but none of them includes the lv_data which I'm missing. Please advise how I can access the files which should be on /dev/sdb1. Any help is appreciated!
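    Not from the post, and only a sketch of the generic LVM metadata-recovery path rather than a tested fix: the volume group name and archive file below are hypothetical, and pvcreate --uuid rewrites the PV label, so this should only be tried on a copy of the disk or after imaging it:
      # 1. look for an archived copy of the VG metadata (or reuse the text extracted with dd)
      ls /etc/lvm/archive/ /etc/lvm/backup/
      # 2. recreate the PV label with its old UUID, pointing at that metadata file
      pvcreate --uuid dD4XZP-WStA-61xV-5Sff-ifmW-R4rR-JenHoU \
               --restorefile /etc/lvm/archive/MyVG_00001-123456789.vg /dev/sdb1
      # 3. restore the VG metadata and activate the logical volumes
      vgcfgrestore -f /etc/lvm/archive/MyVG_00001-123456789.vg MyVG
      vgchange -ay MyVG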


  • Getting Win32_Service security descriptor using VBScript

    - by invictus
    Hi, I am using VBScript to retrieve the security descriptor of a Win32_Service. I am using the following code:
      SE_DACL_PRESENT = &h4
      ACCESS_ALLOWED_ACE_TYPE = &h0
      ACCESS_DENIED_ACE_TYPE = &h1
      strComputer = "."
      Set objWMIService = GetObject("winmgmts:" _
          & "{impersonationLevel=impersonate, (Security)}!\\" & strComputer & "\root\cimv2")
      Set colInstalledPrinters = objWMIService.ExecQuery _
          ("Select * from Win32_Service")
      For Each objPrinter in colInstalledPrinters
          Wscript.Echo "Name: " & objPrinter.Name
          ' Get security descriptor for printer
          Return = objPrinter.GetSecurityDescriptor( objSD )
          If ( return <> 0 ) Then
              WScript.Echo "Could not get security descriptor: " & Return
              wscript.Quit Return
          End If
          ' Extract the security descriptor flags
          intControlFlags = objSD.ControlFlags
          If intControlFlags AND SE_DACL_PRESENT Then
              ' Get the ACE entries from security descriptor
              colACEs = objSD.DACL
              For Each objACE in colACEs
                  ' Get all the trustees and determine which have access to printer
                  WScript.Echo objACE.Trustee.Domain & "\" & objACE.Trustee.Name
                  If objACE.AceType = ACCESS_ALLOWED_ACE_TYPE Then
                      WScript.Echo vbTab & "User has access to printer"
                  ElseIf objACE.AceType = ACCESS_DENIED_ACE_TYPE Then
                      WScript.Echo vbTab & "User does not have access to the printer"
                  End If
              Next
          Else
              WScript.Echo "No DACL found in security descriptor"
          End If
      Next
    However, every time I run it I get a message saying the resulting code is -2165236532 or something, rather than one of the error codes defined in the manual. Anyone got any ideas? I am using Windows 7 Professional.


  • Fast extraction of a time range from syslog logfile?

    - by mike
    I've got a logfile in the standard syslog format. It looks like this, except with hundreds of lines per second:
      Jan 11 07:48:46 blahblahblah...
      Jan 11 07:49:00 blahblahblah...
      Jan 11 07:50:13 blahblahblah...
      Jan 11 07:51:22 blahblahblah...
      Jan 11 07:58:04 blahblahblah...
    It doesn't roll at exactly midnight, but it'll never have more than two days in it. I often have to extract a timeslice from this file. I'd like to write a general-purpose script for this, that I can call like:
      $ timegrep 22:30-02:00 /logs/something.log
    ...and have it pull out the lines from 22:30, onward across the midnight boundary, until 2am the next day. There are a few caveats:
    - I don't want to have to bother typing the date(s) on the command line, just the times. The program should be smart enough to figure them out.
    - The log date format doesn't include the year, so it should guess based on the current year, but nonetheless do the right thing around New Year's Day.
    - I want it to be fast -- it should use the fact that the lines are in order to seek around in the file and use a binary search.
    Before I spend a bunch of time writing this, does it already exist?
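    No such tool comes to mind, but here is a minimal sketch of just the filtering part, with none of the wished-for cleverness: no binary search (it reads the whole file), no year handling, a slightly different invocation, and the assumption that the timestamp is always the third whitespace-separated field. It handles at most one midnight rollover by noticing when the clock goes backwards:
      #!/bin/sh
      # usage: timegrep 22:30:00 02:00:00 /logs/something.log
      start=$1; end=$2; file=$3
      awk -v start="$start" -v end="$end" '
          { t = $3 }
          prev != "" && t < prev { wrapped = 1 }   # time went backwards: we crossed midnight
          { prev = t }
          (start <= end && t >= start && t <= end) ||
          (start >  end && ((!wrapped && t >= start) || (wrapped && t <= end)))
      ' "$file"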


  • Changing a set-cookie header using mod_rewrite/mod_proxy

    - by olrehm
    I have a bunch of CGI scripts which are served using HTTPS. They can only be reached on the intranet, not from the outside. They set a cookie with the attribute 'Secure', so that it can only be sent via HTTPS. There is also a reverse proxy to one of these scripts, unfortunately using plain HTTP. When a response comes in from my CGI script with a secure cookie, it is not being passed on via HTTP (after all, that is what that attribute is for). I need, however, an exception to this rule.
    Is it possible to use mod_rewrite/mod_proxy or something similar to change the Set-Cookie header in the response coming from my CGI script and remove the Secure attribute, such that the cookie can be passed back to the user over the unsafe HTTP connection? I understand that this defeats the purpose of Secure in the first place, but I need this as a temporary workaround.
    I have searched the web and found how to add a Set-Cookie header using mod_rewrite, and I have also found how to retrieve the value of a cookie coming from the client in a Cookie header. What I have not yet found is how to extract the Set-Cookie header received in the response of a script I am proxying for. Is that possible? How would I do that? Ole


  • SWATCH - what am I doing wrong?

    - by Brian Dunbar
    What I want/need/desire is to log when a user logs into my FTP server. Problem: I can't make swatch work the way I should be able to. This data is logged to a file - but of course those logs are not kept very long. I can't keep the logs around forever, but I can extract data from them, analyze it, and store the results elsewhere. If there is a better way to do this than the following, I'm all ears.
    Swatch version 3.2.3, Perl 5.12, FTP: VSFTP, OS (test): OS X 10.6.8, OS (production): Solaris.
    From man I see I can pass contents to a command... so I should be able to echo those values to a file and do a sed/cut/uniq thing on them for stats:
      $ man swatch
      (snip)
      exec command
          Execute command. The command may contain variables which are substituted with fields
          from the matched line. A $N will be replaced by the Nth field in the line. A $0 or $*
          will be replaced by the entire line.
    Swatch file .swatchrc:
      watchfor /OK LOGIN/
          echo=red
          pipe "echo "0: $0 1:$1 2:$2 3:$3 4:$4 5:$5" >> /Users/bdunbar/dev/ftplog/output.txt"
    Launch with:
      $ swatch -c /Users/bdunbar/.swatchrc --script-dir /Users/bdunbar/dev/ftplog -t /Users/bdunbar/dev/ftplog/vsftpd.log &
    Test:
      echo "Mon July 9 03:11:07 2012 [pid 14938] [aetech] OK LOGIN: Client "206.209.255.227"" >> vsftpd.log
    Results - it's echoing to TTY. This is not needed or desired on the server, but it does tell me things are working:
      ftplog *** swatch version 3.2.3 (pid:25780) started at Mon Jul 9 15:23:33 CDT 2012
      Mon July 9 03:11:07 2012 [pid 14938] [aetech] OK LOGIN: Client 206.209.255.227
    Results - bad! I appear not to be sending the variables to the text file:
      $ tail -f output.txt
      0: /Users/bdunbar/dev/ftplog/.swatch_script.25780 1: 2: 3: 4: 5:
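    A guess rather than something from the post: the man excerpt above documents the $N substitution for exec, whereas pipe hands the matched line to the command's standard input. In the pipe line, $0 and $1-$5 therefore end up being expanded as ordinary shell positional parameters of swatch's generated script, which matches the output shown ($0 printing as .swatch_script.25780 and the rest empty); the nested double quotes inside the pipe string also terminate it early. A sketch of the same stanza using exec and simpler quoting:
      watchfor /OK LOGIN/
          echo=red
          exec "echo $0 >> /Users/bdunbar/dev/ftplog/output.txt"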


  • how to remove background layer of djvu file

    - by Jon
    Hello, I've downloaded some files from the Internet Archive. They come in different file formats, and most of the time I use PDF. However, sometimes the scans are saved in colour instead of b/w, which makes them difficult or impossible to read on a dedicated ebook reader. In those cases I downloaded the DjVu files, as on the PC you can select which layer (colour, b/w, foreground, background) you would like to see; selecting the b/w layer gives excellent results. However, the ebook reader does not have this option. The question is: how can I remove or extract a layer from the DjVu file and save only that layer? So far I've tried the following two approaches:
    1) Select b/w in the DjVu viewer on the PC and print to a PostScript file, followed by a ps2pdf conversion. This works, but generates a fairly large PDF file. Sure, I can again upload it to any2djvu, but it just seems too much manual work for each file.
    2) I tried the shared annotation feature and set (mode bw). This works on the PC as desired but is ignored on the ebook reader, as the other layers are still present.
    Any help or suggestions would be greatly appreciated.
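    Not something from the post: DjVuLibre's ddjvu tool can render a single layer, so a third approach worth trying (a sketch only - the file names are made up, and the output size would need checking) is to render just the black/foreground layer straight to a PDF the ebook reader can open:
      ddjvu -format=pdf -mode=black book.djvu book_bw.pdf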


  • Size of modules within initrd

    - by LiKao
    I am currently trying to manually replace the kernel within Ubuntu Core on an embedded device with a custom kernel. However, when I try to update the initrd, my initrd becomes much bigger. Here is what I did:
    - Extract the initrd that was shipped with Ubuntu
    - Make a list of all modules within the old initrd
    - Get the same modules from the new module directory at /lib/modules/new_kernel_version
    - Add these modules to the initrd and remove the old ones
    If I do this, my initrd becomes quite a bit bigger than the original one, so I checked the individual modules. I picked the btrfs.ko filesystem driver as an example. By comparing these two modules I noticed the one I just put into the initrd was much bigger, which caused the difference in size:
      -rw-r--r-- 1 root root 999K Nov 14 15:06 btrfs.ko
    for the btrfs.ko within the shipped initrd, and
      -rw-r--r-- 1 root root 7.2M Nov 14 15:08 btrfs.ko
    for the new btrfs.ko.
    What is different between these two modules? Could this be caused by some faulty setting for the new kernel? When producing the kernel I copied /proc/config.gz and used make oldconfig to update it, so all optimisations should be the same for both kernels. Or is there something else which is done to the modules before they are put into the initrd? Maybe there is even some better way to build a new initrd for the new kernel in Ubuntu altogether.
    Update: I also tested with an initrd which I created from scratch using the mkinitramfs command within Ubuntu, and it has the same size difference that I found in the initrd I manually updated.
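    Not from the post, but the 999K vs 7.2M gap is the classic signature of modules built with debug info: the shipped initrd contains stripped modules, while a fresh build keeps the debug sections. A quick way to check, and the usual fix when installing the modules (paths and version strings are placeholders in the same style the question uses):
      # an unstripped module reports "not stripped" and shrinks a lot when stripped
      cd /lib/modules/new_kernel_version/kernel/fs/btrfs
      file btrfs.ko
      objcopy --strip-debug btrfs.ko /tmp/btrfs.stripped.ko && ls -lh btrfs.ko /tmp/btrfs.stripped.ko
      # the usual fix: strip modules while installing them from the kernel build tree
      make INSTALL_MOD_STRIP=1 modules_install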


  • Vim: How to Install Airline?

    - by MunkyCheez
    I'm just getting started with Vim. It's a fun experience, but I've found it to be kind of overwhelming. I'm trying to get this plugin installed, vim-airline, but I'm having a lot of trouble. The Installation section on the GitHub page simply states:
      copy all of the files into your ~/.vim directory
    Presumably, this means download the .zip, extract it, and copy all of those files into ~/.vim/. I did this, but Vim just starts up like normal, and running :help airline just gives:
      Sorry, no help for airline
    I assume that this means it isn't getting installed. Also, the status bar remains the same. I'm new to Vim and would really like to get this working. Thanks in advance!
    EDIT: I also tried putting the files into /usr/share/vim/vim73/. No dice.
    EDIT 2: I ran :helptags ~/.vim/doc and now the help page displays when I type :help airline, but I'm still not getting the plugin itself (the status bar). Vim looks the same, but it can now display the help page.
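    A guess, not something from the EDITs above: since :help airline now works, the files are probably in place, and the remaining symptom may simply be that Vim does not draw the statusline in a single window unless 'laststatus' is set to 2 (the default shows it only when there are split windows). A quick sketch to test from the shell:
      # make the statusline always visible, then restart Vim
      echo 'set laststatus=2' >> ~/.vimrc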


  • email attachments [closed]

    - by Alan Doolan
    My company currently uses software on a local machine that takes an email from the email server, extracts the attachment, renames it, and then adds it to a folder on a webserver using FTP. This works well, but they are now asking if it can be done 'in the cloud' - or, what they really mean, not locally. Is there anything that would do this on the server itself?
    I should clarify a bit. The attachments are various reports being sent to different email addresses (mostly Google corporate and free accounts). We need the reports to end up in a folder on a webserver so that internal pages can take the information in the reports (CSV) and use it on the web pages or add it to a separate database. The key part is that the files need to be in the particular folders. Though it does work to have a computer running software that takes the files, renames them to the required names and uploads them to the folder, it relies too heavily on one computer working all the time. That is not something we can depend on at this point.
    I'll be honest: I'm a web developer and not strong with server systems beyond my standard requirements, so this is beyond me. Though yes, I am aware that my boss is not 100% sure what 'cloud' means but likes the word.


  • Storing Windows Updates for reuse

    - by Saiyine
    At work we update lots of computers using Windows Update - Windows XP and 7 all day long, rarely some Vista. We do it through a corporate proxy, as connecting them to a domain to build a WSUS server is out of the question, so we download about two gigabytes of the very same updates every day.
    I've tried WSUS Offline. It's pretty complete, but when it finishes it's common to still be missing hundreds of megabytes of updates, because its intention is not to fully update a system but to install the critical updates, as the developers explain in the forums. Now I'm trying Autoupdater. It's far worse, with poor capabilities for non-English Windows XP, but at least it gives the option to install non-critical updates on Windows 7. It still misses hundreds of megabytes of updates after fully updating the system. And finally, neither installs the driver-related updates of Windows 7, so at most they save us a couple of hundred megabytes and a reboot (with the associated login to the computer and to the proxy) out of three or four.
    So, is it possible to somehow extract the installed updates from a Windows 7 system so that we don't have to download the same updates again and again, at least for machines with the same hardware? Or, even better, is there a generic package with all the updates?


  • "Dictionary problem." Error with VMPlayer

    - by George Mauer
    I'm pretty new to using VMware virtualization (I've been a VirtualBox user), so I'm hoping you guys can help me out. I recently got an external USB disk containing a VM for a client, downloaded VMware Player, set it up with "Open a Virtual Machine", ran it - easy as pie. After working with it a bit this morning, I shut the VM down, and now trying to start it back up again I get this:
    I tried removing the VM from my library; now it happens whenever I try to add it back in. In the meantime, I can still access other virtual machines, so it seems like the problem might be with the virtual disk. So, two questions:
    - This is obviously not a very helpful error message. Where can I go to get more information? My Application event log doesn't contain anything from VMware.
    - What steps can I take to fix the problem?
    Edit: a couple more pieces of information. I did not take any snapshots; I don't think VMware Player even has that ability. I have a zip file of (what I assume is) the state of the VM when it was sent to me. I cannot unzip it, as it is huge and simply requires more HD space than I have available, but I did extract the vmx file and examine it. Other than the UUIDs and the fact that mine reads cleanShutdown = "FALSE", they are identical. The log contains the following lines:
      Jun 23 10:11:18.080: vmx| SNAPSHOT: SnapshotConfigInfoRead: Unable to load dict from 'E:....\MachineName.vmsd'.
      Jun 23 10:11:18.080: vmx| SNAPSHOT: SnapshotConfigInfoRead failed for file 'E:....\MachineName.vmx': Dictionary problem (6)
      Jun 23 10:11:18.082: vmx| SNAPSHOT: Snapshot_TimeStampTiers failed: Dictionary problem (6)
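    A guess on my part, not from the post: the log lines point at the snapshot dictionary file (MachineName.vmsd), and since no snapshots were ever taken, one low-risk experiment is to move that file (and any stale lock folders left by the unclean shutdown) aside and let Player recreate them. Sketch only - the path is hypothetical, and on Windows the same renames can be done in Explorer:
      cd /path/to/vm-on-external-disk
      mv MachineName.vmsd MachineName.vmsd.bak
      # stale .lck folders are also worth moving out of the way
      mkdir -p lck-backup && mv *.lck lck-backup/ 2>/dev/null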

