Search Results


  • mount error 5 = Input/output error

    - by alharaka
    I am running out of ideas. After a long morning of testing, I cannot get this to work, and I have no idea why. I want to mount a Windows SMB/CIFS share from a Debian 5.0.4 VM, and it is not cooperating. This is the command I am using:

        debianvm:/home/me# whoami
        root
        debianvm:/home/me# smbclient --version
        Version 3.2.5
        debianvm:/home/me# mount -t cifs //hostname.domain.tld/share /mnt/hostname.domain.tld/share --verbose -o user=SUBADDOMAIN.ADDOMAIN.DOMAIN.TLD/username
        mount.cifs kernel mount options: unc=//hostname.domain.tld\share,ip=10.212.15.53,domain=SUBADDOMAIN.ADDOMAIN.DOMAIN.TLD,ver=1,rw,user=username,pass=*********
        mount error 5 = Input/output error
        Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
        debianvm:/home/me#

    The word on the nets has not been very specific, and unfortunately it is almost always environment-specific. I receive no authentication errors. I have tried mount -t smbfs and mount -t cifs, along with smbmount and such, and I get the same error each time. I doubt it is a problem with DNS resolution, because logging shows the correct IP address. dmesg | tail -f no longer shows authentication errors once I format the domain and username accordingly. I have played a little with iocharset=utf8, file_mode, and dir_mode as described here; that did not help either. I have also tried ntlm and ntlmv2, assuming it might be a minimum-auth-method problem, and even without forcing sec=ntlmv2 it still authenticates without errors. smbclient -L hostname.domain.tld -W SUBADDOMAIN.ADDOMAIN.DOMAIN.TLD -U username correctly lists all the shares and shows the following:

        Domain=[SUBADDOMAIN] OS=[Windows 5.0] Server=[Windows 2000 LAN Manager]

        Sharename   Type   Comment
        ---------   ----   -------
        IPC$        IPC    Remote IPC
        ETC$        Disk   Remote Administration
        C$          Disk   Remote Administration
        Share       Disk

        Connection to hostname.domain.tld failed (Error NT_STATUS_CONNECTION_REFUSED)
        NetBIOS over TCP disabled -- no workgroup available

    I find the last line intriguing/alarming. Does anyone have any pointers? Maybe I misread the effin manual.
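    (A hypothetical avenue, not a confirmed fix: the server identifies itself as Windows 2000, which speaks only SMB1 and is picky about security modes, so forcing the security mode and the legacy NetBIOS port 139 is worth ruling out. Both options exist in mount.cifs; whether they help this particular server is an assumption.)

        mount -t cifs //hostname.domain.tld/share /mnt/hostname.domain.tld/share \
            -o user=username,domain=SUBADDOMAIN.ADDOMAIN.DOMAIN.TLD,sec=ntlm,port=139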

  • How to turn off Excel "Header Row" without losing data in it?

    - by Ken
    I've been sent an Excel spreadsheet with a weird first row. Some of the cells say "Column1", "Column2", etc., but I can't delete their contents. If I select a cell and hit backspace, it goes blank, but when I press return, it goes right back to saying "Column1". I found another answer here that suggested this could be caused by cell validation, but the validation window says "Any value" and also "show alert" (and I'm not seeing an alert), so I don't think that's it. The first row is white text on a blue background, if that means anything. The spreadsheet was sent to me in XLSX format, but I tried resaving as XLS and opening that, and it made no difference. This is with the "ribbon" version of Excel (they got rid of the Help menu, so I don't know how to see what version number it is!). Thanks!

    Update: The Excel online help says to use ribbon Home tab - Cells - Delete - ... to delete cells. When I select anything in the first row, this pop-up menu is dimmed. So maybe Excel doesn't think row 1 consists of "cells"? Though I don't know what else it would call them.

    Update 2: I found it, kind of. If I click the "Design" tab in the ribbon and then uncheck "Header Row", the first row becomes a normal row of cells again. Unfortunately, the contents disappear entirely. I want to delete a few cells, not all 50+! And if I copy the first row before turning off "Header Row", it disappears from the clipboard when I uncheck that. So I kind of know what mode it's stuck in, but not a good way out of it.

  • rsnapshot - not correctly archiving mysql databases

    - by Tiffany Walker
    My rsnapshot configuration:

        snapshot_root   /.snapshots/
        backup          /home/user      localhost/
        backup_script   /usr/local/backup_mysql.sh      localhost/mysql/

    backup_mysql.sh contains this:

        NOW=$(date +"%m-%d-%Y")   # mm-dd-yyyy format
        FILE=""                   # used in a loop
        ### Server Setup ###
        #* MySQL login user name *#
        MUSER="root"
        #* MySQL login PASSWORD name *#
        MPASS="YOUR-PASSWORD"
        #* MySQL login HOST name *#
        MHOST="127.0.0.1"
        #* MySQL binaries *#
        MYSQL="$(which mysql)"
        MYSQLDUMP="$(which mysqldump)"
        GZIP="$(which gzip)"
        # get all database listing
        DBS="$($MYSQL -u $MUSER -h $MHOST -p$MPASS -Bse 'show databases')"
        # start to dump database one by one
        for db in $DBS
        do
            FILE=$BAK/mysql-$db.$NOW-$(date +"%T").gz
            # gzip compression for each backup file
            $MYSQLDUMP --single-transaction -u $MUSER -h $MHOST -p$MPASS $db | $GZIP -9 > $FILE
        done

    It dumps the databases under /. I then tried the script from http://bash.cyberciti.biz/backup/rsnapshot-remote-mysql-backup-shell-script/ and got:

        rsnapshot hourly
        ----------------------------------------------------------------------------
        rsnapshot encountered an error! The program was invoked with these options:
        /usr/bin/rsnapshot hourly
        ----------------------------------------------------------------------------
        ERROR: backup_script /usr/local/backup_mysql.sh returned 1
        WARNING: Rolling back "localhost/mysql/"

        ls -la /.snapshots/hourly.0/localhost/mysql
        total 8
        drwxr-xr-x 2 root root 4096 Nov 23 17:43 ./
        drwxr-xr-x 4 root root 4096 Nov 23 18:20 ../

    What exactly am I doing wrong?

    EDIT:

        # /usr/local/backup_mysql.sh
        *** Dumping MySQL Database ***
        Database> information_schema..cphulkd..eximstats..horde..leechprotect..logaholicDB_ns1..modsec..mysql..performance_schema..roundcube..test..
        *** Backup done [ files wrote to /.snapshots/tmp/mysql] ***

        root@ns1 [~]# ls -la /.snapshots/tmp/mysql
        total 8040
        drwxr-xr-x 2 root root   4096 Nov 23 18:41 ./
        drwxr-xr-x 3 root root   4096 Nov 23 18:41 ../
        -rw-r--r-- 1 root root   1409 Nov 23 18:41 cphulkd.18_41_45pm.gz
        -rw-r--r-- 1 root root 113522 Nov 23 18:41 eximstats.18_41_45pm.gz
        -rw-r--r-- 1 root root   4583 Nov 23 18:41 horde.18_41_45pm.gz
        -rw-r--r-- 1 root root  71757 Nov 23 18:41 information_schema.18_41_45pm.gz
        -rw-r--r-- 1 root root    692 Nov 23 18:41 leechprotect.18_41_45pm.gz
        -rw-r--r-- 1 root root   2603 Nov 23 18:41 logaholicDB_ns1.18_41_45pm.gz
        -rw-r--r-- 1 root root    745 Nov 23 18:41 modsec.18_41_45pm.gz
        -rw-r--r-- 1 root root 138928 Nov 23 18:41 mysql.18_41_45pm.gz
        -rw-r--r-- 1 root root   1831 Nov 23 18:41 performance_schema.18_41_45pm.gz
        -rw-r--r-- 1 root root   3610 Nov 23 18:41 roundcube.18_41_45pm.gz
        -rw-r--r-- 1 root root    436 Nov 23 18:41 test.18_41_47pm.gz

    The MySQL backup itself seems fine.
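    (A possible explanation, offered as an assumption rather than a verified diagnosis: $BAK is never defined in the first script, so $FILE expands to /mysql-..., which is exactly why the dumps land under /. rsnapshot runs a backup_script inside a temporary destination directory, expects the files to be written to the current directory, and treats any non-zero exit as a failure to roll back. A minimal corrected sketch:)

        #!/bin/bash
        # Write dumps into the directory rsnapshot invokes the script from;
        # BAK was undefined in the original script (hypothetical fix).
        BAK="$(pwd)"
        NOW=$(date +"%m-%d-%Y")
        MUSER="root"; MPASS="YOUR-PASSWORD"; MHOST="127.0.0.1"
        DBS="$(mysql -u "$MUSER" -h "$MHOST" -p"$MPASS" -Bse 'show databases')"
        for db in $DBS; do
            mysqldump --single-transaction -u "$MUSER" -h "$MHOST" -p"$MPASS" "$db" \
                | gzip -9 > "$BAK/mysql-$db.$NOW.gz" || exit 1
        done
        exit 0   # a non-zero exit makes rsnapshot roll back localhost/mysql/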

  • How to Create an XML File from an Excel File

    - by nicorellius
    I have an Excel spreadsheet that has five or so columns and hundreds of rows. I need to convert (export) these data to an XML file. I'm interested in three of the columns, and they correspond to these XML tags, where info1 can be followed by info2, info3, and so on:

        <?xml version="1.0" encoding="UTF-8" ?>
        <list>
            <info1>
                <id>111</id>
                <value>222</value>
                <des>333</des>
            </info1>
        </list>

    If possible, I would like to avoid building this XML manually. It wouldn't be too much trouble to rearrange the Excel file so that the three columns I'm interested in were in their own file. But then I would need to export those data into an XML file of the above format. Any ideas?
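    (One sketch, under assumptions: the three columns are saved from Excel as a plain comma-separated file with no embedded commas, and the element names info1, info2, ... simply follow the row number. The file name rows.csv is hypothetical.)

        awk -F',' '
        BEGIN { print "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>"; print "<list>" }
        {
            printf "    <info%d>\n", NR
            printf "        <id>%s</id>\n", $1
            printf "        <value>%s</value>\n", $2
            printf "        <des>%s</des>\n", $3
            printf "    </info%d>\n", NR
        }
        END { print "</list>" }' rows.csv > out.xml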

  • How to play 24 fps video smoothly on a 60Hz display?

    - by netvope
    I use mpc-hc to play videos on Windows 7 x64. With the default settings (#1), video playback is great most of the time, but panning shots are not smooth. I stepped through the video frame by frame and found that the panning movement itself is smooth (e.g. each frame shifts horizontally by 10 pixels), so the problem is how the 23.976 fps video is interpolated to 60 Hz. The judder looks like what a "2:3 pulldown" would cause, where the frames are played unevenly: frame 1, 1, 2, 2, 2, 3, 3, 4, 4, 4, etc. (#2)

    Using "optimal renderer settings" (#3) instead of the default disables the Aero theme and causes tearing. Setting my LCD display to 50 Hz may have improved the judder slightly (but I can't really tell). My display does not support 24 Hz or 48 Hz, and forcing them in the Nvidia control panel gives a blurry screen. I've tried other video players (VLC and KMPlayer), the ReClock DirectShow filter, video files from different sources (#4), turning DXVA on and off, and a computer with a different GPU, but the judder in playback is similar. None of them solved the problem.

    So, how can I play 23.976 or 24 fps video smoothly on a 60 Hz display? I think a video player could make the video smoother by doing linear interpolation, such as:

        1. 100% frame 1
        2.  60% frame 1 + 40% frame 2
        3.  20% frame 1 + 80% frame 2
        4.  80% frame 2 + 20% frame 3
        5.  40% frame 2 + 60% frame 3
        6. 100% frame 3
        7.  60% frame 3 + 40% frame 4
        ... etc.

    Can any existing video player do this?

    Footnotes:
    (#1) Video renderer: EVR Custom Pres.
    (#2) This example converts a 24 fps video into 30 fps.
    (#3) View > Renderer settings > Reset > Reset to optimal renderer settings
    (#4) The files I have are all H.264 mkv files, but I don't think the file format/encoding matters.
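    (As an aside for later readers: recent ffmpeg builds ship a frame-interpolation filter that can pre-render a 60 fps copy offline, sidestepping the player entirely. Treating the filter's availability in your build as an assumption, a sketch:)

        # mi_mode=blend does the linear frame mixing described above;
        # mi_mode=mci would attempt motion-compensated interpolation instead.
        ffmpeg -i input.mkv -vf "minterpolate=fps=60:mi_mode=blend" -c:a copy output.mkv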

  • Five stars of open data - example and review

    - by Joe
    (There may be a better-suited SE site for this question, so feel free to shift it.) I have some data I'd like to make open to the public. It's a synthesis of related data retrieved from freedom of information requests over the last year. The data itself is at http://www.cs.rhul.ac.uk/home/joseph/domesday/Domesday-Scotland.csv or, for fans of Excel, at http://www.cs.rhul.ac.uk/home/joseph/domesday/Domesday-Scotland.xlsx . It's no more than a table with about five columns.

    I'd like to make this properly open data, so I was looking at the 5-star deployment scheme for Open Data. Much of it is fine, but I'm confused towards the end and could do with an explanation from people who know the answers. To achieve the star levels I need to:

    1. "Make your stuff available on the Web (whatever format) under an open license" - trivial; all I have to do is put up notes on the page giving the provenance of the data.
    2. "Make it available as structured data (e.g., Excel instead of image scan of a table)" - done.
    3. "Use non-proprietary formats (e.g., CSV instead of Excel)" - done.
    4. "Use URIs to identify things, so that people can point at your stuff" - this is where I start to get a bit hazy. Does this mean there should be a URI for every line in the table?
    5. "Link your data to other data to provide context" - this isn't massively clear to me either. Does it mean giving the provenance of the data? One column of the data I've put out is a link to where the data came from; is that the sort of thing we're looking at?

    Any and all information and answers welcome.

    EDIT: Or if anyone wants to recommend a place, SE or otherwise, to ask the question, that would be cool...

  • Evernote from vim

    - by juanpablo
    I'm looking for a way to edit Evernote notes from vim. I began with this:

        #!/bin/bash
        evernoteDir="$HOME/Library/Application*Support/Evernote/data"
        dataDir=$(ls -trlh $evernoteDir | tail -n 1 | awk '{print $NF}')
        contentDir="$evernoteDir/$dataDir/content"
        file=$(ls -trlh $contentDir | tail -n 1 | awk '{print $NF}')
        vim -c 's/div>/div>\r/g' $contentDir/$file/content.html

    https://gist.github.com/1256416

    Or maybe I should create a vim plugin for this... do you have any suggestions?

    EDIT: To make editing the HTML of an Evernote note simpler, I wrote this vim function:

        " Markup function {{{
        fun! MkdToHtml() "{{{
            " markdown to html
            silent! execute '%s/ $/<br\/>/g'
            silent! execute '%s/\*\*\(.*\)\*\*/<b>\1<\/b>/g'
            silent! execute '%s/\t*###\(.*\)/<H3>\1<\/H3>/g'
        endf "}}}
        command! -complete=command MkdToHtml call MkdToHtml()
        nn <silent> <leader>mm :MkdToHtml<CR>
        " }}}

    and a vim function to open the last-edited note:

        fun! LastEvernote() "{{{
            " a better solution is with the evernote api
            let evernoteDir=expand("$HOME")."/Library/Application*Support/Evernote/data"
            let dataDir=system("ls -trlh ".evernoteDir."| tail -n 1| awk '{print $NF}'")
            let contentDir=evernoteDir."/".dataDir."/content"
            let contentDir=substitute(contentDir,"\n","",'g')
            let note=system("ls -trlh ".contentDir." | tail -n 1| awk '{print $NF}'")
            let note=substitute(note,"\n","",'g')
            sil! exec 'sp '.contentDir.'/'.note.'/content.html'
            sil! exec '1s/>/>\r/g'
            sil! exec '%s/<br.*\/>/<br\/>\r/g'
            sil! exec '%s/<\//\r<\//g'
            sil! exec 'g/^\s*$/d'
            normal gg
            sil! exec '1,4fo'
            sil! exec '$-1,$fo'
        endf

    https://gist.github.com/1289727

  • Word 2010, Multiple Columns, Vertical center one column only

    - by Nancy N Jones
    I am creating a document with two columns in Microsoft Word 2010. I want the first column to be centered vertically, and the second column to be on the same page with its vertical placement starting from the top.

    I highlight the text in the first column that I want centered vertically, then go to Page Layout > Margins > Custom Margins > Layout, where you can choose to center the vertical alignment. I have chosen the "Section Start" to be "Column" and have also tried "Continuous". In all cases it shifts all of my second-column content to a new page. I don't want my second-column text on a new page; I want it on the same page, vertically aligned from the top, not the center.

    Am I understanding the functionality of the Section Start on the Layout tab correctly? Maybe page layout isn't the right formatting tool to use. What I am really doing is formatting columns, and I haven't found anywhere to format the columns for this. Am I missing some important column-formatting features?

    I know that I can use paragraph formatting and add space above the first line of text to make it look centered vertically. However, this is a template for a master document and will be changed frequently, so I really would like the first-column text to be centered vertically automatically, without having to adjust the space above the paragraph every time. Your assistance would be greatly appreciated.

  • How do I install Windows XP from an external hard drive?

    - by Plasmer
    I'm trying to install Windows XP Media Center Edition by copying the install disc image to an external hard drive and making it bootable. Has anyone had success getting this to work on systems that can't boot from DVDs/floppies? I'm basically working from this guide: http://www.dl4all.com/other/21495-install-windows-xp-from-usb.html

    Update (2/15/10): I used WinToFlash on my laptop to prepare the USB hard drive from my install DVD (Windows XP Media Center Version 2005 with Update Rollup 2, from Dell), selected "boot from usb device" at the boot selection menu, and the Windows installer started up. However, an error message came up saying: "A problem has been detected and windows has been shut down to prevent damage to your computer."

    Originally my desktop machine had one standalone 150 GB SATA drive plus two 150 GB SATA drives striped together with RAID. From the hard drive diagnostics, it appears the Windows install on one of the RAIDed disks lost a block, and this has been preventing me from booting up. I replaced the standalone drive with a new 1 TB SATA drive and disconnected the other hard drives. Could the message indicate a virus on the unformatted drive, or on the USB hard drive?

    Update 2 (2/15/10): The external hard drive didn't find any viruses when scanned. I tried installing Vista Home Premium 64-bit SP1 using WinToFlash, and that installed successfully onto the new 1 TB drive. WinToFlash was really easy to use and helped a lot, thanks!

  • Copy files from subdirectories into one directory.

    - by Derek Organ
    OK, I have a bunch of files in this structure:

        /backup/daily/database1/database1-2011-01-01.sql
        /backup/daily/database1/database1-2011-01-02.sql
        /backup/daily/database1/database1-2011-01-03.sql
        /backup/daily/database1/database1-2011-01-04.sql
        /backup/daily/database1/database1-2011-01-05.sql
        /backup/daily/database1/database1-2011-01-06.sql
        /backup/daily/database1/database1-2011-01-07.sql
        /backup/daily/anotherdb/anotherdb-2011-01-01.sql
        /backup/daily/anotherdb/anotherdb-2011-01-02.sql
        /backup/daily/anotherdb/anotherdb-2011-01-03.sql
        /backup/daily/anotherdb/anotherdb-2011-01-04.sql
        /backup/daily/anotherdb/anotherdb-2011-01-05.sql
        /backup/daily/anotherdb/anotherdb-2011-01-06.sql
        /backup/daily/anotherdb/anotherdb-2011-01-07.sql
        /backup/daily/stuff/stuff-2011-01-01.sql
        /backup/daily/stuff/stuff-2011-01-02.sql
        /backup/daily/stuff/stuff-2011-01-03.sql
        /backup/daily/stuff/stuff-2011-01-04.sql
        /backup/daily/stuff/stuff-2011-01-05.sql
        /backup/daily/stuff/stuff-2011-01-06.sql
        /backup/daily/stuff/stuff-2011-01-07.sql

    And there are lots more. Ultimately I want to import all the 2011-01-07.sql files into my MySQL database. This works for one:

        mysql -u root -ppassword < /backup/daily/database1/database1-2011-01-07.sql

    That nicely restores that database from the backup file. I want to run a process that does this for all databases, so my plan is to first copy all the 2011-01-07 sql files into a tmp dir, e.g.:

        cp /backup/daily/*/*2011-01-07*.sql /tmp/all

    Unfortunately the command above isn't working; I get an error:

        cp: cannot stat ..... No such file or directory

    So can you guys help me out with this? For bonus points, if you can tell me how to do the next step (importing all those files into MySQL, one at a time), that would be great too. I really want to do these as two separate steps because I need to delete a few sql files manually from the tmp dir before I run the restore command. So I need:

    1) a command to copy all 2011-01-07 sql files to a tmp dir
    2) a command to import all the files in that dir into MySQL

    I know it's possible to do this in one step, but for lots of reasons I'd really prefer to do it in two.
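    (A sketch of both steps, under two assumptions: the cp error comes from either a non-existent /tmp/all or a subdirectory with no matching file, and each dump restores into the database named by its filename prefix.)

        # Step 1: collect the day's dumps. find tolerates directories
        # without a match, unlike a bare cp glob.
        mkdir -p /tmp/all
        find /backup/daily -name '*2011-01-07*.sql' -exec cp {} /tmp/all/ \;

        # Step 2: after pruning /tmp/all by hand, import one file at a time.
        for f in /tmp/all/*.sql; do
            db=$(basename "$f" | sed 's/-2011-01-07.*//')
            mysql -u root -ppassword "$db" < "$f"
        done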

  • h264 inside FLV container vs. MP4 container?

    - by Gotys
    I am developing a tube site and currently having issues with the h264 format. Looking at YouTube, I noticed they put their hi-def videos into an MP4 container, so logically I did the same. Next, I installed mod_h264_streaming for lighttpd to make streaming and timeline scrubbing work. The problem is that large files (500 MB+ at somewhat high resolution) take forever to even start buffering (I read that Flowplayer and other Flash players need to download the metadata first). I moved the moov atom to the front of the file with MP4Box (I tried qt-faststart too), and the problem didn't go away. Next I read online that I need to interleave the audio tracks, so I did that too. No change in slowness.

    So I tried putting the same exact h264 movie into an FLV container, and playback buffering starts almost instantly, with no slowness. So what am I missing here? Why would I choose an MP4 container with the mod_h264_streaming module, which seems super slow, over a regular FLV container with lighttpd's built-in mod_flv_streaming? Obviously many websites pick the MP4 container, but I fail to understand why.

    And as a side question: I tried using HTML5's VIDEO tag to play the same h264 MP4 movie, and the scrubbing is lightning fast! I looked into lighttpd's log file and noticed that Flash players append video.mp4?start=234 each time the timeline is scrubbed, whereas HTML5's video tag does no such thing. Is this some sort of limitation of Flash? Why can't Flash streaming be as fast as HTML5 streaming? Thanks to all who can help; I very much appreciate this community.
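    (For reference, a sketch of the moov relocation and interleaving in one pass; tool availability on the server is an assumption, and whether this fixes buffering on files this large is untested here.)

        # Interleave in 500 ms chunks and write the moov atom up front,
        # so the player gets the index before the media data.
        MP4Box -inter 500 movie.mp4

        # Alternative from the ffmpeg tree, same goal:
        qt-faststart movie.mp4 movie-fast.mp4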

  • Knowledge and user generated content management system to track files, research, proposals, etc.?

    - by Eshwar
    I'll try to keep it short. Here's the scenario: we have employees all over the world performing similar work, i.e. research, generating PowerPoint slides, Word documents, graphics, etc. A lot of this previous work can often be reused for future projects. The current arrangement is email and phone calls, which, as you would agree, is quick if you know where to look, but otherwise archaic and very, very inefficient. So I am looking for software that will allow me to do the following:

    1. Tag files, e.g. an investor presentation on cellphone usage in Kenya would be tagged investor, cellphone, kenya.
    2. Manage references, e.g. if we read something on the internet, we should be able to paste that link in some fashion and tag it as above.
    3. Preferably cloud-based, so that it can be accessed by anybody; access levels (director, manager, everyone) would additionally be nice, though not a must.
    4. A nice interface that non-technically-savvy folks can warm up to ;)
    5. A desktop app would be handy, so that people don't always have to click upload or something.

    A tree-based system is inefficient in this case because content is usually linked across branches, and people might not quite agree on one layout for the tree. Tagging works around this very nicely. What I have considered so far:

    - Evernote (for its more professional look)
    - Springpad (for its versatility with content)
    - Mendeley (a research manager, and in some ways ideal, but I fear it's limited to PDFs)

    The goal is that when somebody wants a document, they don't have to ask a colleague; they can just search with keywords and all relevant information shows up. Thanks!

  • Loading a big database dump into PostgreSQL using cat

    - by RussH
    I have a pair of very large (~17 GB) database dumps that I want to load into PostgreSQL 9.3. After installing the database packages, learning more or less how to use them, and fiddling around a little on various StackExchange pages (particularly this question), it looks like a proper command for me to use is something like:

        cat mydb.pgdump | psql mydb

    because of the format the dump is in. My machine has 16 GB of RAM. I'm not that familiar with the cat command, but I do know that my RAM is 99% exhausted and the database is taking a while to load. My machine isn't non-responsive to the point of hanging; I can run other commands in other terminal windows and have them execute at a reasonable clip. But I am wondering whether cat is the best way to pipe in the file, or if something else is more efficient? My concern is that maybe cat could be using up all the RAM, so the database doesn't have much to work with, throttling its performance. But I'm new to thinking about RAM issues like this and don't know if I'm worrying about nothing.

    Now that I think about it, this seems to be more of a question about cat and its memory usage than anything else. If there is a more appropriate forum for this question, please let me know. Thanks!
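    (For comparison, a sketch of the cat-free equivalents. cat itself only buffers a pipe's worth of data, so the RAM pressure more plausibly comes from the server side (an assumption worth checking), but these variants drop the extra process and give better error reporting.)

        # Shell redirection: psql reads the file directly, no cat process.
        psql mydb < mydb.pgdump

        # Or let psql open the file itself (errors come with line numbers).
        psql -f mydb.pgdump mydb

        # Wrapping the load in one transaction cuts per-statement overhead.
        psql -1 -f mydb.pgdump mydb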

  • Optimal video resolution and encoding for recording games for YouTube?

    - by Rookie
    I want to record video from games, so I cannot use a very large recording resolution, but I still want the large player view to look as sharp as the encoded video did before upload. I tried YouTube's recommended 854x640 resolution, but it wasn't possible with h264: the encoding software I used (HandBrake) rounded the width to the nearest multiple of 4, which I think is a limitation of the h264 format. The video I encoded was sharp and of fine quality, but when I uploaded it to YouTube, it lost a lot of quality, and the preferred large view looks almost as bad as a 320p video. I waited a few days in case YouTube hadn't finished processing it, but it never got sharper.

    So which resolution and encoding options should I use if I want the large player to show the sharpest possible video, retaining the original quality as well as possible? I noticed that recordings at 640x480 came out sharper than at 1280x720, so I'm not sure what I'm doing wrong; both were h264. Is it at all possible to prevent YouTube from re-encoding the videos? I just wonder how people can make such sharp videos, while mine are all blurry after upload even though they looked fine before. I also tried YouTube's suggested bitrates with h264, but that didn't work any better.

  • How can I avoid a few seconds of blank video when using -vcodec copy?

    - by arlomedia
    I'm processing user-uploaded videos on a CentOS web server with ffmpeg. I need to convert each video to a standard size and format, then extract a 30-second sample clip from each video. I want to use the "-vcodec copy" flag in the extraction command to avoid encoding a second time. This command works for my initial conversion:

        ffmpeg -i uploaded.mov -f mp4 -vcodec libx264 -vpre medium -acodec libfaac -r 15 -b 360k -ab 48k -ar 22050 -s 480x320 formatted.mp4

    And this sometimes works for the extraction:

        ffmpeg -i formatted.mp4 -vcodec copy -acodec copy -ss 0 -t 30 formatted_sample.mp4

    However, when I run the extraction command on some videos, the extracted sample clip starts with several seconds of blank video: the audio starts right away, but the video doesn't start for 3-6 seconds. To demonstrate the problem, I've uploaded two video clips and run the above commands on them. I created the first clip in Final Cut Express and encoded it with HandBrake before uploading to the web server:

        1a) uploaded clip
        1b) converted with the first command
        1c) extracted with the second command; missing the first six seconds

    By comparison, this second clip comes from Apple's website and does not show the problem:

        2a) uploaded clip
        2b) converted with the first command
        2c) extracted with the second command; no problem

    Can anyone see what's different about the two source clips? And if so, is there anything I can do in my conversion command so that when the extraction command runs, the clip is set up to avoid the missing video? By the way, I initially had the problem with ffmpeg 0.6.1 installed from yum, but I upgraded to the latest git version and the problem remains.
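    (One hypothesis, offered as an assumption: a stream copy can only begin at a keyframe, so if the first keyframe after -ss 0 sits several seconds in, the clip plays audio over blank video until it arrives. A sketch of a conversion that pins the keyframe interval down:)

        # -g 15 asks for a keyframe at least every 15 frames (1 s at -r 15),
        # so a stream-copied cut never waits long for a decodable frame.
        ffmpeg -i uploaded.mov -f mp4 -vcodec libx264 -vpre medium -g 15 \
            -acodec libfaac -r 15 -b 360k -ab 48k -ar 22050 -s 480x320 formatted.mp4

    If that doesn't take (presets can override GOP settings), re-encoding just the 30-second sample instead of stream-copying it is the fallback.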

  • KVM and libvirt: How to configure a new disc device to an existing VM?

    - by initall
    I've got an Ubuntu 9.04 server running two VMs. In /etc/libvirt/qemu/machine1.xml, two disk devices are defined like this:

        <devices>
          <emulator>/usr/bin/kvm</emulator>
          <disk type='file' device='disk'>
            <source file='/vserver/machine1/disk0.qcow2'/>
            <target dev='hda' bus='ide'/>
          </disk>
          <disk type='file' device='disk'>
            <source file='/vserver/machine1/disk1.qcow2'/>
            <target dev='hdb' bus='ide'/>
          </disk>

    I need more storage space in at least one of the devices and thought about adding a third hdc device by simply adding one in the same style as above and reorganising my mount structure (the virtual sizes of the current qcow2 files are unfortunately limited). My problem is that reloading libvirtd and restarting the VM do not result in a new visible device (checked with fdisk). I'm aware of extending an existing qcow2 file (converting to raw format, cat-ing/adding the new one, using something like gparted), but only as a last resort. Hopefully it's something very simple I'm missing?
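    (A sketch of the usual route. The assumption here: libvirt keeps its own idea of the domain definition, so a hand-edit of the file under /etc/libvirt/qemu plus a daemon reload can be silently overwritten, which would explain the invisible device.)

        # Create the new backing file (the 50G size is an example).
        qemu-img create -f qcow2 /vserver/machine1/disk2.qcow2 50G

        # Edit the domain through libvirt, adding a third <disk> block
        # with target dev='hdc', so the change actually persists.
        virsh edit machine1

        # A full shutdown and start (not a reboot from inside the guest)
        # makes qemu pick up the new hardware.
        virsh shutdown machine1
        virsh start machine1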

  • Extract and view Outlook contacts attachment sent to Gmail

    - by matt wilkie
    A friend forwarded a contact list to my Gmail account from Outlook (2007 or 2010, not sure which). I can see there is an attachment in Gmail, but when I save it to my local drive it's just a plain text file containing the text:

        This attachment is a MAPI 1.0 embedded message and is not supported by this mail system.

    If I use Gmail's "show original message", it contains in part:

        This is a multipart message in MIME format.

        ------=_NextPart_000_0016_01CC6656.CE12F030
        Content-Type: text/plain; charset="us-ascii"
        Content-Transfer-Encoding: 7bit

        ------=_NextPart_000_0016_01CC6656.CE12F030
        Content-Type: application/ms-tnef; name="winmail.dat"
        Content-Transfer-Encoding: base64
        Content-Disposition: attachment; filename="winmail.dat"

        eJ8+Ih0VAQaQCAAEAAAAAAABAAEAAQeQBgAIAAAA5AQAAAAAAADoAAEIgAcAGAAAAElQTS5NaWNy
        b3NvZnQgTWFpbC5Ob3RlADEIAQgABQAEAAAAAAAAAAAAAQkABAACAAAAAAAAAAEDkAYASAgAACgA
        --8<---snip---8<--
        GUC/9NKH95rABgMA/g8HAAAAAwANNP0/pQ4DAA80/T+lDvAm
        ------=_NextPart_000_0016_01CC6656.CE12F030--

    How do I save the attached winmail.dat properly, and then open it and extract the contact list? I'm running Windows 7 x64, but have access to an Ubuntu Linux VMware appliance if needed. I have Outlook 2010, but can't use it to connect directly to Gmail, as POP3 and IMAP are blocked by the corporate firewall.
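    (On the Ubuntu appliance, a sketch using the tnef unpacker; package availability is an assumption, and whatever falls out (often an embedded message or vCard) still has to be imported separately.)

        sudo apt-get install tnef
        mkdir extracted
        tnef -C extracted/ winmail.dat
        ls -l extracted/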

  • Allowing non-admin users to unstick the print spooler

    - by Reafidy
    I currently have an issue where the print queue gets stuck on a central print server (Windows Server 2008). The "Clear all documents" function does not clear it; it gets stuck too. I need non-admin users to be able to clear the print queue from their workstations. I have tried the following WinForms program, which I created; it lets a user stop the print spooler, delete the printer files in the C:\Windows\System32\spool\PRINTERS folder, and then start the print spooler again. But this functionality requires the program to run as an administrator. How can I let my normal users execute this program without giving them admin privileges? Or is there another way to let normal users clear the print queue on the server?

        Imports System.ServiceProcess

        Public Class Form1

            Private Sub Button1_Click(sender As System.Object, e As System.EventArgs) Handles Button1.Click
                ClearJammedPrinter()
            End Sub

            Public Sub ClearJammedPrinter()
                Dim tspTimeOut As TimeSpan = New TimeSpan(0, 0, 5)
                Dim controllerStatus As ServiceControllerStatus = ServiceController1.Status
                Try
                    If ServiceController1.Status <> ServiceProcess.ServiceControllerStatus.Stopped Then
                        ServiceController1.Stop()
                    End If
                    Try
                        ServiceController1.WaitForStatus(ServiceProcess.ServiceControllerStatus.Stopped, tspTimeOut)
                    Catch
                        Throw New Exception("The controller could not be stopped")
                    End Try
                    Dim strSpoolerFolder As String = "C:\Windows\System32\spool\PRINTERS"
                    Dim s As String
                    For Each s In System.IO.Directory.GetFiles(strSpoolerFolder)
                        System.IO.File.Delete(s)
                    Next s
                Catch ex As Exception
                    MsgBox(ex.Message)
                Finally
                    Try
                        Select Case controllerStatus
                            Case ServiceControllerStatus.Running
                                If ServiceController1.Status <> ServiceControllerStatus.Running Then ServiceController1.Start()
                            Case ServiceControllerStatus.Stopped
                                If ServiceController1.Status <> ServiceControllerStatus.Stopped Then ServiceController1.Stop()
                        End Select
                        ServiceController1.WaitForStatus(controllerStatus, tspTimeOut)
                    Catch
                        MsgBox(String.Format("{0}{1}", "The print spooler service could not be returned to its original setting and is currently: ", ServiceController1.Status))
                    End Try
                End Try
            End Sub

        End Class
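    (One route that avoids elevating the program at all, sketched under assumptions: subinacl comes from the Windows Resource Kit, and the group name below is hypothetical. Granting the group control of the Spooler service lets the existing code stop and start it unelevated; deleting files under C:\Windows\System32\spool\PRINTERS still needs a separate NTFS grant on that folder.)

        rem Run once, as an administrator, on the print server.
        rem F = full control over the service; verify in a test environment.
        subinacl /service Spooler /grant=YOURDOMAIN\HelpdeskUsers=F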

  • SharePoint 2010 Enterprise wiki - [New page] missing

    - by icelava
    I am trying to ramp up my knowledge of SharePoint deployment and usage (I have never done either before), because of a direction to use SharePoint 2010 as a repository platform (wiki format) for our customer's infrastructure documentation. In my test virtual server, I set up a new site from the Enterprise Wiki template, then went into Site Actions > Manage Site Features and activated "Wiki Page Home Page". The default sub-web then went from /Pages to /SitePages and looks like the default Team template.

    The odd thing is that Site Actions is missing the New Page option. My colleague does not understand why, as it ought to be there. The original /Pages sub-web does have the option. What conditions influence whether that option appears?

    UPDATE: Another phenomenon I observed: in the Site Actions > View All Site Content view, the wiki document libraries listed in the grid have hyperlinks (e.g. "Site Pages") that lead straight to the default page. They do not show a table listing the pages in that document library, unlike the original Pages document library, which shows a listing as expected. I wonder if this hints at any problems.

  • How do I `SUM` by multiple columns in Excel

    - by dwwilson66
    I have a comma-delimited file with two columns: a date/time (which imports in Excel's mm/dd/yyyy hh:mm custom format) and a status of 1 or 0. The status represents a piece of equipment being on or off. I'm trying to generate a graph that shows hours up vs. down, by day. Consider:

        1/1/2012 00:00, 1
        1/1/2012 03:00, 0
        1/1/2012 14:00, 1
        1/3/2012 00:00, 0

    This tells me the equipment was up for three hours, down for eleven hours, and then up for thirty-four hours (across two calendar days). However, I would like a graph that shows how many hours PER DAY we were up or down. Consider:

        1/1  XXXXXXXXXXXXX-----------  (up 13, down 11)
        1/2  XXXXXXXXXXXXXXXXXXXXXXXX  (up 24)

    It seems I need to generate a dataset summing HOURS by STATUS by CALENDAR DAY, but I can't find a flavor of pivot table or nested SUM(IF(SUMIF(...))) combination that works. Most troubling is accounting for date changes: in my example above, since the uptime starting at 14:00 on 1/1/2012 crosses midnight, 10 uptime hours must be totalled with 1/1/2012 and 24 uptime hours with 1/2/2012. I may be able to use a calendar list to drive the date summation, but then I need a way to compare 01/01/2012 to 01/01/2012 03:00 as equal. There's got to be a way along the lines of IF(INTEGER-PORTIONS-OF-SERIAL-DATES-ARE-EQUAL, TOTAL-HOURS-IF-VALUE-IS-1, 0), but nothing has worked so far. Any suggestions? I've been battling this most of the day and need a fresh perspective. Thanks.
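    (A sketch of one layout that makes the day totals fall out, under assumptions about the sheet: timestamps in column A, status in B, and every interval that crosses midnight pre-split into two rows, one ending at midnight and one starting at 00:00 the next day. INT() strips the time from a date serial, which is exactly the "integer portions are equal" comparison.)

        C2  hours until the next event:  =(A3-A2)*24
        D2  calendar day of the event:   =INT(A2)
        F2  a report date, e.g. 1/1/2012
        G2  up-hours for that day:       =SUMIFS($C$2:$C$100,$D$2:$D$100,$F2,$B$2:$B$100,1)
        H2  down-hours for that day:     =24-G2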

  • Codec Problems with trying to edit videos with VirtualDub

    - by Roy Rico
    So, I'm a little frustrated. According to this post and various other internet sources, VirtualDub is supposed to let users quickly split and join video files. I am using Windows 7 64-bit and the latest version of VirtualDub (64-bit). I have tried to edit various movie files, and each attempt has failed:

    1. File A.avi won't load; VirtualDub says it can't locate the decompressor for the "FMP4" format. I have tried this solution and this one, and neither works. I have tried setting the VFW decompressor for the 'Other MPEG4' setting to XVID or LIBAVCODEC, with no change in VirtualDub.

    2. File C.avi will load in VirtualDub, but any attempt to split it gives an error that I don't have XviD codecs installed. I've attempted to install the proper codecs (Shark's Windows 7 Codecs, CCCP), with no change.

    3. File C.avi will load and will split, but not with "Direct Stream Copy", which claims the compression algorithm is incompatible. I tried the "Fast Recompress" option, and it created a 27 GB file out of what was supposed to be a 300-400 MB file.

    Can someone please give me some insight into what I'm messing up?

  • Missing whole disk device in OpenSolaris

    - by Jeff Mc
    I have begun experimenting with Solaris and ZFS as a NAS. All was going very smoothly until I had a drive failure. When I replaced the drive, I no longer had a device file mapped to the whole disk: /dev/dsk/c7t3d0 does not exist, but c7t2d0 and c7t4d0 both do. The sd@3,0:wd file under the /devices/ tree is also non-existent. Do I have to prepare/partition the disk somehow for the whole-disk device to exist? Here are a few outputs that might be useful:

        jeffmc@ats-ds2:/dev/dsk$ zpool status
          pool: datapool
         state: DEGRADED
        status: One or more devices could not be opened. Sufficient replicas exist for
                the pool to continue functioning in a degraded state.
        action: Attach the missing device and online it using 'zpool online'.
           see: http://www.sun.com/msg/ZFS-8000-2Q
         scrub: none requested
        config:

                NAME        STATE     READ WRITE CKSUM
                datapool    DEGRADED     0     0     0
                  mirror-0  DEGRADED     0     0     0
                    c7t2d0  ONLINE       0     0     0
                    c7t3d0  UNAVAIL      0     0     0  cannot open
                  mirror-1  ONLINE       0     0     0
                    c7t4d0  ONLINE       0     0     0
                    c7t5d0  ONLINE       0     0     0

        jeffmc@ats-ds2:/dev/dsk$ zpool replace datapool c7t3d0
        cannot open 'c7t3d0': no such device in /dev/dsk
        must be a full path or shorthand device name

        jeffmc@ats-ds2:/dev/dsk$ sudo format
        Searching for disks...done

        AVAILABLE DISK SELECTIONS:
               0. c7t0d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@0,0
               1. c7t1d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@1,0
               2. c7t2d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@2,0
               3. c7t3d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@3,0
               4. c7t4d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@4,0
               5. c7t5d0 /pci@0,0/pci8086,3599@6/pci8086,330@0/pci1014,2cc@7,1/sd@5,0
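    (Worth noting that format does see the disk at sd@3,0, so a sketch of the usual link-rebuild sequence may apply; the cfgadm attachment point below is an example, not taken from this system.)

        # Rebuild stale /devices entries and /dev links.
        devfsadm -Cv

        # If c7t3d0 still doesn't appear, inspect the controller and
        # configure the disk's attachment point.
        cfgadm -al
        cfgadm -c configure c7::dsk/c7t3d0

        # Then retry the pool repair.
        zpool replace datapool c7t3d0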

  • free up unused space on a qcow2 image file on kvm/qemu

    - by bmaeser
    We are using KVM/QEMU with qcow2 images for our virtual machines. qcow2 has the nice feature that the image file only allocates the space actually needed by the virtual machine. But how do I shrink the image file back down if the space used inside the virtual machine gets smaller? An example:

    1. I create a new image in qcow2 format, size 100 GB.
    2. I use this image to install Ubuntu. The installation needs about 10 GB, and the image file grows to about 10 GB. Nothing unexpected so far.
    3. I fill the image with about 40 GB of additional data. The image file grows to 50 GB. I am OK with that :-)
    4. This is where it gets strange: I delete all 40 GB of that data from the image, but the image file still eats up 50 GB.

    How do I free up those 40 GB and shrink the image to the 10 GB actually needed? Thanks in advance, berni
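    (For reference, the usual sketch, under two assumptions: the guest can temporarily fill its free space with zeros, and the host has room for a second copy. qcow2 doesn't release space in place, so the image gets rewritten.)

        # Inside the guest: zero the free space, then remove the filler.
        dd if=/dev/zero of=/zerofile bs=1M; rm /zerofile; sync

        # On the host, with the VM shut down: rewrite the image, which
        # skips the zeroed blocks, then swap it into place.
        qemu-img convert -O qcow2 image.qcow2 image-compact.qcow2
        mv image-compact.qcow2 image.qcow2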

  • How can I back up entire installations of a program, instead of just manually backing up individual files?

    - by NoCatharsis
    It seems pretty straightforward to back up individual files, such as pictures, saved games, or settings files: just copy them straight over to your second HDD or to an online service like Dropbox. But is there any way to back up an entire installation of a program? For instance, my Firefox directory has a lot of personal customizations and add-ons. I don't want to go through each item and decide whether to back it up or let it go, so my next option is to copy out the entire directory for backup. But if I copy the entire directory back onto the HD after a format, it is not an integrated installation, and that seems like it could be troublesome. I would assume Windows could not detect the directory for uninstallation, or would not let you choose Firefox as your default browser, right? I'm no pro, but this sounds like a bad idea.

    So my question is whether there is a good way to preserve all the necessary files while also preserving the full installation state of an application. This is not specific to Firefox; I would like to know how to do this for any application. Thank you.

  • Apache and Virtual Hosts Problem on OS X

    - by Charles Chadwick
    I recently formatted and reinstalled my iMac. I am running 10.6.5. Prior to the format, I had the default Apache web server up and running with several virtual hosts, and everything ran beautifully. After formatting, I set everything back up again, and now Apache is acting funny. Here is what I have going on. The document root for the Apache web server points to an external hard drive. In my httpd.conf:

        DocumentRoot "/Storage/Sites"

    A few lines beneath that:

        <Directory />
            Options FollowSymLinks
            AllowOverride All
            Order deny,allow
            Allow from all
        </Directory>

    And beneath that:

        <Directory "/Storage/Sites">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            Allow from All
        </Directory>

    At the end of this file, I have commented out the userdir include:

        #Include /private/etc/apache2/extra/httpd-userdir.conf

    and uncommented the virtual hosts include:

        Include /private/etc/apache2/extra/httpd-vhosts.conf

    Moving on, I have the following entry in my vhosts file:

        <VirtualHost *:80>
            DocumentRoot "/Storage/Sites/mysite"
            ServerName mysite.dev
        </VirtualHost>

    I also have a record in /etc/hosts that points mysite.dev to 127.0.0.1 (I also tried my router IP, 192.168.1.2). The problem I am coming across is that, despite there being PHP files in /Storage/Sites/mysite, the server is still serving /Storage/Sites. I know this because the DocumentRoot contains a PHP file with phpinfo(), whereas the index.php in mysite has different code. I have tried setting up other virtual hosts, but they all do the same thing. Also, "NameVirtualHost *:80" is in my vhosts file, which I saw suggested as a solution on another thread here; it doesn't seem to make a difference. Any ideas on this? Let me know if this is not enough information.
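    (Two quick checks worth running from Terminal, as a sketch; the working hypothesis, which is only an assumption, is that the vhost never matches and Apache falls back to the first or default host.)

        # Show how Apache actually parsed the vhosts: mysite.dev should be
        # listed as a name-based host on *:80.
        apachectl -S

        # Confirm the name resolves to the machine you expect.
        ping -c 1 mysite.dev

        # After any edit, verify syntax and restart.
        apachectl configtest && sudo apachectl restart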
