Search Results

Search found 19245 results on 770 pages for 'paper size'.


  • Quarter turn pdf document

    - by Rogier
    We have created thousands of PDF files that are printed as labels on a special label printer. Printing these labels works fine, but some of the label paper is quarter-turned, so those PDFs print incorrectly. The page can be rotated at print time, but is it possible to rotate a PDF file and save it again as a PDF? And since there are thousands of PDF files, can this be done as a batch job?
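
    One hedged way to do the batch rotation (a sketch, not from the original question; it assumes the pdftk tool is installed and that every page needs the same quarter turn):

      # Rotate every page of every PDF in the current directory by 90 degrees (east)
      # and write the result next to the original as NAME.rotated.pdf
      for f in *.pdf; do
          pdftk "$f" cat 1-endeast output "${f%.pdf}.rotated.pdf"
      done

    qpdf or a small pypdf script would be reasonable alternatives if pdftk is not available.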

    Read the article

  • What advantage does a 2400x600 dpi printer have over a 1200x1200 dpi printer?

    - by Cygon
    I've seen laser printers with resolutions of 1200x1200 dpi and, strangely, 2400x600 dpi. As the measure is dots per inch, not kilodots per page or something (where a higher vertical resolution might make sense because paper is rectangular, not square), I'm wondering what the uneven resolution is good for. Why print one square inch with 2400 dots vertically but only 600 horizontally? Does this look more detailed than 1200 by 1200 dots? Or is it better for textile printing or some other special case?

    Read the article

  • Where do vendors publish internal transfer rates of HDDs?

    - by red888
    So I've started to dig into storage fundamentals and found that in order to calculate the IOPS of an HDD you need to know the internal transfer rate of the drive (the time it takes data to move from the platters to the disk's internal cache). I went on Newegg and even a few vendor sites and could not find this figure published for any HDD. Is it sometimes called something else? Take this link to a Seagate HDD, for instance: nowhere do I see "internal transfer rate", but I do see something called "Sustained Data Rate OD". Is that the same thing? Just so you know where I'm getting this (book: "Information Storage and Management: Storing, Managing..."): Consider an example with the following specifications provided for a disk. The average seek time is 5 ms in a random I/O environment; therefore, T = 5 ms. The disk rotation speed is 15,000 revolutions per minute, or 250 revolutions per second, from which the rotational latency (L) can be determined; this is one-half of the time taken for a full rotation, so L = (0.5/250) s, expressed in ms. The internal data transfer rate is 40 MB/s, from which the internal transfer time (X) is derived based on the block size of the I/O; for example, for an I/O with a block size of 32 KB, X = 32 KB / 40 MB/s. Consequently, the time taken by the I/O controller to serve an I/O of block size 32 KB is TS = 5 ms + (0.5/250) s + 32 KB/40 MB/s = 7.8 ms. Therefore, the maximum number of I/Os serviced per second, or IOPS, is 1/TS = 1/(7.8 × 10^-3) = 128 IOPS.
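
    A quick sanity check of the book's arithmetic (a sketch; the figures are just the ones quoted above):

      awk 'BEGIN {
          seek_ms = 5                          # average seek time
          rot_ms  = 0.5 / 250 * 1000           # half a rotation at 250 rps -> 2 ms
          xfer_ms = 32 / (40 * 1024) * 1000    # 32 KB at 40 MB/s -> ~0.8 ms
          ts_ms   = seek_ms + rot_ms + xfer_ms
          printf "Ts = %.1f ms, max IOPS = %d\n", ts_ms, 1000 / ts_ms
      }'
      # prints: Ts = 7.8 ms, max IOPS = 128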

    Read the article

  • How to force mdadm to stop RAID5 array?

    - by lucek
    I have a /dev/md127 RAID5 array that consisted of four drives. I managed to hot-remove them from the array and currently /dev/md127 does not have any drives:

      cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md0 : active raid1 sdd1[0] sda1[1]
            304052032 blocks super 1.2 [2/2] [UU]
      md1 : active raid0 sda5[1] sdd5[0]
            16770048 blocks super 1.2 512k chunks
      md127 : active raid5 super 1.2 level 5, 512k chunk, algorithm 2 [4/0] [____]
      unused devices: <none>

    and:

      mdadm --detail /dev/md127
      /dev/md127:
              Version : 1.2
        Creation Time : Thu Sep 6 10:39:57 2012
           Raid Level : raid5
           Array Size : 8790402048 (8383.18 GiB 9001.37 GB)
        Used Dev Size : 2930134016 (2794.39 GiB 3000.46 GB)
         Raid Devices : 4
        Total Devices : 0
          Persistence : Superblock is persistent
          Update Time : Fri Sep 7 17:19:47 2012
                State : clean, FAILED
       Active Devices : 0
      Working Devices : 0
       Failed Devices : 0
        Spare Devices : 0
               Layout : left-symmetric
           Chunk Size : 512K

          Number   Major   Minor   RaidDevice   State
             0       0       0         0        removed
             1       0       0         1        removed
             2       0       0         2        removed
             3       0       0         3        removed

    I've tried to stop it, but:

      mdadm --stop /dev/md127
      mdadm: Cannot get exclusive access to /dev/md127: Perhaps a running process, mounted filesystem or active volume group?

    I made sure that it's unmounted (umount -l /dev/md127) and confirmed that it indeed is not mounted:

      umount /dev/md127
      umount: /dev/md127: not mounted

    I've tried to zero the superblock of each drive and I get (for each drive):

      mdadm --zero-superblock /dev/sde1
      mdadm: Unrecognised md component device - /dev/sde1

    Here's the output of lsof | grep md127:

      md127_rai 276 root cwd DIR 9,0 4096 2 /
      md127_rai 276 root rtd DIR 9,0 4096 2 /
      md127_rai 276 root txt unknown /proc/276/exe

    What else can I do? LVM is not even installed, so it can't be a factor.

    Read the article

  • Cannot send email to info@ or support@

    - by user3022598
    I am trying to send email from my Gmail account to a couple of user accounts I have on my new CentOS server. Email is set up correctly and I can send and receive for all accounts except info and support. I tried to set up two users, "info" and "support". I have a PHP form that sends out email and works fine for all users except info and support. To test this and make sure that nothing changed since yesterday, I just created a new user "frank" and tried the submit form, and it worked fine. From my Gmail account I can email "frank"; however, I cannot email "info" or "support". The logs I pulled are as follows, and I think I see the issue but have no idea how to fix it:

      Aug 15 12:20:55 mail postfix/qmgr[1568]: 1815C20A83: from=, size=1815, nrcpt=1 (queue active)
      Aug 15 12:20:55 mail postfix/local[2270]: 1815C20A83: to=, relay=local, delay=0.28, delays=0.26/0.01/0/0.01, dsn=2.0.0, status=sent (delivered to maildir)
      Aug 15 12:17:13 mail postfix/qmgr[1568]: 3C18520A7F: from=, size=1818, nrcpt=1 (queue active)
      Aug 15 12:17:13 mail postfix/local[2201]: 3C18520A7F: to=, orig_to=, relay=local, delay=0.28, delays=0.25/0.01/0/0.01, dsn=2.0.0, status=sent (delivered to maildir)
      Aug 15 12:15:24 mail postfix/qmgr[1568]: 2F79420A79: from=, size=1813, nrcpt=1 (queue active)
      Aug 15 12:15:24 mail postfix/local[2155]: 2F79420A79: to=, orig_to=, relay=local, delay=0.29, delays=0.27/0.01/0/0.01, dsn=2.0.0, status=sent (delivered to maildir)

    For some reason frank goes out fine, however support and info go to root. Why?
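
    One hedged thing to check (a sketch, not a confirmed diagnosis): on a stock CentOS install the default /etc/aliases maps common role names such as info and support to postmaster, which in turn goes to root, and local aliases take precedence over real accounts with the same name. That would match the orig_to= rewriting and the delivery to root's maildir in the log above. Something like this would show and undo it:

      # See whether info/support are aliased away from the real accounts
      grep -E '^(info|support|postmaster):' /etc/aliases

      # If they are, comment those lines out (or point them at the right mailboxes),
      # then rebuild the alias database
      sed -i -E 's/^(info|support):/#&/' /etc/aliases
      newaliases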

    Read the article

  • Problems when pasting Outlook 2010 signature logo into message body

    - by Austin 'Danger' Powers
    Whenever I paste my company logo into a message in Outlook 2010, I run into a variety of complications and anomalies. The dimensions of my original logo image are 315x174 (the source image is a PNG file). I am scaling this image down in Photoshop CS6 to a variety of smaller sizes for testing my Outlook signature (300x166, 250x138, 200x110, 150x83 and 100x55 pixels).

    300x166 = no distortion. This looks the same as in Photoshop (but far too large to use in my signature).
    250x138 = distorted (gets stretched much wider by Outlook when pasting into the message body).
    200x110 = looks reasonable, but seems to have been scaled to a different (smaller) size by Outlook for no obvious reason.
    150x83 = for some reason, this is scaled by Outlook to the exact same size that 200x110 was scaled to. In fact, a large range of similar dimensions are scaled to the exact same image size by Outlook. This is very frustrating. Why is this happening and what can be done to prevent it?
    100x55 = when pasting my logo from Photoshop to Outlook at these dimensions, all that happens is that the cursor jumps forward about an inch on the screen, leaving a blank space where the image was supposed to go.

    Any advice would be much appreciated.

    Read the article

  • Bad performance with Linux software RAID5 and LUKS encryption

    - by Philipp Wendler
    I have set up a Linux software RAID5 on three hard drives and want to encrypt it with cryptsetup/LUKS. My tests showed that the encryption leads to a massive performance decrease that I cannot explain.

    The RAID5 is able to write 187 MB/s [1] without encryption. With encryption on top of it, the write speed is down to about 40 MB/s. The RAID has a chunk size of 512K and a write-intent bitmap. I used -c aes-xts-plain -s 512 --align-payload=2048 as the parameters for cryptsetup luksFormat, so the payload should be aligned to 2048 blocks of 512 bytes (i.e., 1 MB). cryptsetup luksDump shows a payload offset of 4096, so I think the alignment is correct and fits the RAID chunk size.

    The CPU is not the bottleneck, as it has hardware support for AES (aesni_intel). If I write to another drive (an SSD with LVM) that is also encrypted, I do get a write speed of 150 MB/s. top shows that the CPU usage is indeed very low; only the RAID5 xor takes 14%.

    I also tried putting a filesystem (ext4) directly on the unencrypted RAID to see if the layering is the problem. The filesystem decreases the performance a little bit as expected, but by far not that much (write speed varying, but around 100 MB/s).

    Summary:
    Disks + RAID5: good
    Disks + RAID5 + ext4: good
    Disks + RAID5 + encryption: bad
    SSD + encryption + LVM + ext4: good

    The read performance is not affected by the encryption; it is 207 MB/s without and 205 MB/s with encryption (also showing that CPU power is not the problem). What can I do to improve the write performance of the encrypted RAID?

    [1] All speed measurements were done with several runs of dd if=/dev/zero of=DEV bs=100M count=100 (i.e., writing 10G in blocks of 100M).

    Edit: If this helps: I'm using Ubuntu 11.04 64-bit with Linux 2.6.38.

    Edit 2: The performance stays approximately the same if I pass a block size of 4KB, 1MB or 10MB to dd.

    Read the article

  • How to display/define Mirror/Striping pairs with mdadm

    - by Chris
    I want to make a standard Linux software RAID10 over 4 HDDs. The server has 4 HDDs, 2 pairs from different vendors, in order to avoid batch problems. I want to have each mirror span the two different vendors, and then the stripe over the mirror pairs. I could do that by manually creating RAID1/0, but mdadm supports RAID level 10 directly. I just can't figure out how the RAID10 is then handled and how the data is distributed.

      mdadm --detail /dev/md10
      /dev/md10:
              Version : 1.2
        Creation Time : Wed May 28 11:06:23 2014
           Raid Level : raid10
           Array Size : 1953260544 (1862.77 GiB 2000.14 GB)
        Used Dev Size : 976630272 (931.39 GiB 1000.07 GB)
         Raid Devices : 4
        Total Devices : 4
          Persistence : Superblock is persistent
          Update Time : Wed May 28 11:06:23 2014
                State : clean, resyncing (PENDING)
       Active Devices : 4
      Working Devices : 4
       Failed Devices : 0
        Spare Devices : 0
               Layout : near=2
           Chunk Size : 512K
                 Name : pdwhost:10 (local to host pdwhost)
                 UUID : a3de0ad5:9e694ee1:addc6786:c4449e40
               Events : 0

          Number   Major   Minor   RaidDevice   State
             0       8       1         0        active sync   /dev/sda1
             1       8      81         1        active sync   /dev/sdf1
             2       8      97         2        active sync   /dev/sdg1
             3       8     113         3        active sync   /dev/sdh1

    does not really give any information about that. How it should be: RAID1 / mirror over /dev/sda1 + /dev/sdf1 and over /dev/sdg1 + /dev/sdh1, then RAID0 over the two RAID1 pairs. Is it possible to do that with the built-in "level=10", and how can I see which pairs are mirrored? Thanks a lot for your help.
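
    For what it's worth, a hedged sketch (device names taken from the question): with mdadm's standard near=2 layout, devices that are adjacent in the order given at creation time form the mirror pairs, so listing the drives vendor-pair by vendor-pair gives the desired pairing. The explicit two-layer alternative is also shown.

      # RAID10 with near=2: (sda1,sdf1) and (sdg1,sdh1) become the mirrored pairs
      mdadm --create /dev/md10 --level=10 --layout=n2 --raid-devices=4 \
          /dev/sda1 /dev/sdf1 /dev/sdg1 /dev/sdh1

      # Explicit alternative: two RAID1 pairs with a RAID0 stripe on top
      mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdf1
      mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdg1 /dev/sdh1
      mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/md1 /dev/md2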

    Read the article

  • MOSS 2007 SP2 DB index maintenance

    - by Mike H
    I've read in the "About Service Pack 2 for SharePoint Products and Technologies" paper that SP2 includes an update for the Update Statistics Timer Job that causes SharePoint to run SQL Server's online index rebuild feature (p.4). I'm uncertain of the terminology here but is this the rebuild that SQL Server uses for minor fragmentation (up to around 40%) and leaves the DB online? I'm also guessing that this will therefore not rebuild severely fragmented indexes as I think this requires the DB to come offline. Can someone please confirm my belief here?

    Read the article

  • Some Emails incoming to Outlook 2007 are blank, same emails work fine on webmail, iphone, etc

    - by Funran
    This is a pretty easy problem to describe. Basically, users who have just been upgraded to Outlook 2007 (yeah, I know 2010 is out) are not receiving SOME emails (from outside our domain, i.e. Hotmail, Yahoo). Receiving is not the correct word; these emails come in, along with their attachments, subjects, to/from lines, etc., but the body is blank. If the same user goes into their webmail, iPhone or BlackBerry instead, they can read the message fine. It's clear to me that something in Outlook 2007 is not generating the body correctly, so it just strips it. I just don't know WHY. Our mail server was recently upgraded to Exchange 2010; users on 2010 running Outlook 2003 are working fine, it's just the random emails for users using 2007. I hope I made that clear enough, thank you for any future help guys.

    EDIT: I don't see RTF, but I swear I've seen it before. Here is the view source on a recent email:

      <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"><html><head>
      <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
      <meta name="GENERATOR" content="MSHTML 8.00.6001.19120">
      <DEFANGED_style_0 <="" style=""> </head>
      <body bgcolor="#ffffff">
      <p><DEFANGED_DIV><font color="#0000ff" size="2" face="Calibri">MS,</font></p><DEFANGED_DIV>
      <p><DEFANGED_DIV><font color="#0000ff" size="2" face="Calibri">Could you tell me please what the legal descrip &amp; Topo Quad name is for this Monroe P.ID Site?</font></p><DEFANGED_DIV>
      <p><DEFANGED_DIV><em><font color="#0000ff" size="2" face="Calibri">Thanks, Henry Roye</font></em></p><DEFANGED_DIV></body></html>

    Read the article

  • Is there a way to programmatically set the printer properties in Windows?

    - by panzerschreck
    Hello. As part of a paper-saving drive throughout the organization, we plan to set 2-page printing as the default setting on all the Windows machines. I would like to contribute to this by writing a small script that can do that for all the machines, or maybe send an email to all the users and let them run the batch file. Is that possible? Can you please guide me? I have no knowledge of Windows scripting; I program in Java for a living. Thanks for your time.

    Read the article

  • Unable to resize EC2 EBS root volume

    - by nathanjosiah
    I have followed many of the tutorials, which pretty much all say the same thing, basically:

    1. Stop the instance
    2. Detach the volume
    3. Create a snapshot of the volume
    4. Create a bigger volume from the snapshot
    5. Attach the new volume to the instance
    6. Start the instance back up
    7. Run resize2fs /dev/xxx

    However, step 7 is where the problems start. Running resize2fs always tells me that the filesystem is already xxxxx blocks long and does nothing, even with -f passed. So I continue with the tutorials, which all basically say the same thing:

    1. Delete all partitions
    2. Recreate them back to what they were, except with the bigger sizes
    3. Reboot the instance and run resize2fs

    (I have tried these steps both from the live instance and by attaching the volume to another instance and running the commands there.) The main problem is that the instance won't start back up again, and the system error log provided in the AWS console doesn't show any errors (it does, however, stop at the GRUB bootloader, which to me indicates that it doesn't like the partitions; yes, the boot flag was toggled on the partition with no effect). The other thing that happens, regardless of what changes I make to the partitions, is that the instance the volume is attached to says that the partition has an invalid magic number and the super-block is corrupt. However, if I make no changes and reattach the volume, the instance runs without a problem. Can anybody shed some light on what I could be doing wrong?

    Edit: On my new volume of 20GB with the 6GB image, df -h says:

      Filesystem      Size  Used Avail Use% Mounted on
      /dev/xvde1      5.8G  877M  4.7G  16% /
      tmpfs           836M     0  836M   0% /dev/shm

    And fdisk -l /dev/xvde says:

      Disk /dev/xvde: 21.5 GB, 21474836480 bytes
      255 heads, 63 sectors/track, 2610 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x7d833f39

         Device Boot      Start         End      Blocks   Id  System
      /dev/xvde1               1         766     6144000   83  Linux
      Partition 1 does not end on cylinder boundary.
      /dev/xvde2             766         784      146432   82  Linux swap / Solaris
      Partition 2 does not end on cylinder boundary.

    Also, sudo resize2fs /dev/xvde1 says:

      resize2fs 1.41.12 (17-May-2010)
      The filesystem is already 1536000 blocks long. Nothing to do!
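
    A hedged sketch of an approach that avoids recreating partitions by hand (it assumes the growpart tool from the cloud-utils package and the device names shown above; note that the swap partition /dev/xvde2 sits directly after the root partition, so it would have to be swapped off and recreated at the end of the disk first, and a snapshot should be taken before any of this):

      swapoff /dev/xvde2        # stop using the swap partition that blocks the resize
      # ...delete xvde2 and recreate it at the end of the disk with fdisk/parted...
      growpart /dev/xvde 1      # grow partition 1 into the freed space
      resize2fs /dev/xvde1      # grow the ext4 filesystem to match the partition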

    Read the article

  • Can Haproxy deny a request by IP if its stick-table is full?

    - by bantic
    In my haproxy configs I'm setting a stick-table of size 5 that stores every incoming IP address (for 1 minute), and it is set as nopurge so new entries won't get stored in the table. What I'd like to have happen is that they would get denied, but that isn't happening. The stick-table line is:

      stick-table type ip size 5 expire 1m nopurge store gpc0

    And the whole config is:

      global
          maxconn 30000
          ulimit-n 65536
          log 127.0.0.1 local0
          log 127.0.0.1 local1 debug
          stats socket /var/run/haproxy.stat mode 600 level operator

      defaults
          mode http
          timeout connect 5000ms
          timeout client 50000ms
          timeout server 50000ms

      backend fragile_backend
          tcp-request content track-sc2 src
          stick-table type ip size 5 expire 1m nopurge store gpc0
          server fragile_backend1 A.B.C.D:80

      frontend http_proxy
          bind *:80
          mode http
          option forwardfor
          default_backend fragile_backend

    I have confirmed (connecting to haproxy's stats using socat readline /var/run/haproxy.stat) that the stick-table fills up with 5 IP addresses, but then every request after that from a new IP just goes straight through: it isn't added to the stick-table, nothing is removed from the stick-table, and the request is not denied. What I'd like to do is deny the request if the stick-table is full. Is this possible? I'm using haproxy 1.5.

    Read the article

  • How do I make a PPT file as small as possible?

    - by grunwald2.0
    Currently I am agonizing over several large presentation files, which I happened to reprint to PDFs... One thing I wondered: do PPTs (from Microsoft PowerPoint) always have to be that big? And what would be the strategies to make a PPT smaller? (Say "ceteris paribus" at e.g. 25 slides, and assuming that one isn't allowed to use a cloud-based service like GDocs, rocketslide or Prezi.)

    Of course there are the obvious "bad guys": images and graphics. But how about roll-over animations etc.? Who knows how much space they take? How about SmartArt? Could one save file size by using OpenOffice or LibreOffice Impress? (I didn't try it yet.) And "what if": what if we need to include e.g. five images (or charts that can't be remade in Excel in time), how would we best reduce the file-size impact of those five images if we needed to?

    I ask all this from an honest "business" perspective. I am no nerd or "Microsoft MVP" and I don't intend on delving into LaTeX or similar yet. But that doesn't mean that I am not curious and very willing to learn. I am basically interested in (proven) best practices. Yes, I know this question is lacking "initial research", but I think the perspective of my question is interesting and unique to a lot of people, and if we intend to make SE a "Q&A" / wiki kind of reference site, this question might be a good way to "collect" advice on a question that has a very defined goal: minimum file size.
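
    If the decks are in the newer .pptx format (which is just a ZIP container), a quick way to see what is actually eating the space before deciding what to shrink (a sketch; the file name is hypothetical and unzip is assumed to be available):

      # List the embedded parts of a .pptx sorted by size; ppt/media/* usually dominates
      unzip -l presentation.pptx | sort -n -r | head -20

    The older binary .ppt format is not a ZIP container, so this trick only applies to .pptx files.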

    Read the article

  • rsync --remove-source-files but only those that match a pattern

    - by Daniel
    Is this possible with rsync? Transfer everything from src:path/to/dir to dest:/path/to/other/dir and delete some of the source files in src:path/to/dir that match a pattern (or size limit), but keep all other files. I couldn't find a way to limit --remove-source-files with a regexp or size limit.

    Update 1 (clarification): I'd like all files in src:path/to/dir to be copied to dest:/path/to/other/dir. Once this is done, I'd like to have some files (those that match a regexp or size limit) in src:path/to/dir deleted, but I don't want to have anything deleted in dest:/path/to/other/dir.

    Update 2 (more clarification): Unfortunately, I can't simply rsync everything and then manually delete the files matching my regexp from src:. The files to be deleted are continuously created. So let's say there are N files of the type I'd like to delete after the transfer in src: when rsync starts. By the time rsync finishes there will be N+M such files there. If I now delete them manually, I'll lose the M files that were created while rsync was running. Hence I'd like to have a solution that guarantees that the only files deleted from src: are those known to be successfully copied over to dest:. I could fetch a file list from dest: after the rsync is complete, compare that list of files with what I have in src:, and then do the removal manually. But I was wondering if rsync can do this by itself.
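
    A hedged sketch of one way to get this with two rsync passes (the '*.log' pattern is just a placeholder; --remove-source-files only deletes source files that have been successfully reproduced on the receiving side, which is what guards against deleting files created during the run):

      # Pass 1: transfer only the files matching the pattern and delete those sources
      rsync -av --include='*/' --include='*.log' --exclude='*' \
            --remove-source-files src:path/to/dir/ dest:/path/to/other/dir/

      # Pass 2: transfer everything else, deleting nothing on either side
      rsync -av src:path/to/dir/ dest:/path/to/other/dir/

    For a size limit instead of a pattern, rsync's --min-size/--max-size options can play the same role as the include/exclude filters in the first pass.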

    Read the article

  • Website Reference about Server Placement

    - by Manuel Faux
    I have to do a student research project about "Server Placement in a Server Room". The paper should contain guidance like "place the racks about 3 meters away from any wall", "mind the maximum load capacity of the (false) floor", and other placement strategies. I have been searching for a while, but I did not find any reliable reference I can use in my work. Does anyone know some useful websites about server placement?

    Read the article

  • Subsequent runs of rsync locally don't reduce data transferred

    - by sharakan
    I have an EC2 instance with data I want to sync to a mounted, but remote, volume as a backup. rsync seems like the way to go with this, so as a test I took my test file (a Postgres pg_dump file) and used rsync -v to copy it to the mounted volume:

      [ec2-user work]$ rsync -v dump.sql.1 ../backup/dump.sql
      dump.sql.1
      sent 821704315 bytes  received 31 bytes  3416650.09 bytes/sec
      total size is 821603948  speedup is 1.00

    Then I ran it again, expecting to see minimal sent/received numbers because it would just be checksums. Instead...

      [ec2-user work]$ rsync -v dump.sql.1 ../backup/dump.sql
      dump.sql.1
      sent 821704315 bytes  received 31 bytes  3402502.47 bytes/sec
      total size is 821603948  speedup is 1.00

    I'm new to rsync, so perhaps I'm missing something, but isn't the idea that the source and destination files are checked for differences, and then a patch is generated and applied to the destination? Why is this not reducing the amount of data 'sent' to just the size of the checksums? Some background, if it's relevant: the mounted volume is using s3fs, mounted with s3fs <bucketname> backup.
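
    One hedged thing to try (a sketch): when both the source and destination are given as local paths, rsync defaults to --whole-file and skips the delta-transfer algorithm entirely, so forcing it back on may change those numbers dramatically:

      # Re-enable the delta-transfer algorithm even for "local" copies
      rsync -v --no-whole-file dump.sql.1 ../backup/dump.sql

    Note that the delta algorithm still has to read both files in full and write out a new destination file, so with an s3fs-backed destination the wall-clock time may not improve even if the 'sent' byte count drops.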

    Read the article

  • How to clean green gunk off case fans?

    - by Wesley
    Hi all, I just bought a used custom-built computer locally. In the process of checking the hardware, I noticed that the case fans have accumulated a lot of green "gunk". I've tried wetting the edge of a paper towel and wiping and rubbing it off, but there is always some residue remaining. What's the best way to clean off this gunk? Thanks in advance. No pictures, but I can take some if needed.

    Read the article

  • Throughput and why do ISPs sell too much bandwidth?

    - by jonescb
    I hope the question made sense how I worded it. :) I've been wondering, maximum theoretical bandwidth is measured as RWIN/RTT (Window size / round trip time) Source 1 and Souce 2 So if a major city only 100 miles away gives me a ping of 50ms, and I have the default 64kb TCP window size then my maximum throughput will be 12.5Mb/s. Everything further away would give me a higher ping and therefore a lower throughput. Is there any reason to buy something like FiOS with a 50Mb/s or greater connection? Will you ever be able to reach that kind of speed? I know you can increase the TCP window size to increase throughput, but it has to be at both ends which is a deal breaker because you can't control the server. I'm assuming other network protocols like UDP aren't quite as affected by latency as TCP is, but how much of overall network traffic does non-TCP make up vs TCP. Am I just misguided about how throughput works? But if the above is correct, then why should a consumer like me buy way more bandwidth than can be realistically used. Maybe the only reason is for downloading multiple things at once, or one thing from multiple servers/peers?

    Read the article

  • Is there a simple context-menu add-in that could make up for the Windows 7 status bar deficiency?

    - by DanO
    Edit: I initially asked about free disk space and selected item size. It has since been pointed out that the selected item size summary is still available natively in the details pane. I had read elsewhere (Wikipedia) that this was removed along with free disk space, which is not the case: only free disk space has been completely removed; selection size is still available.

    Is there a context-menu add-in out there that could show the free disk space of the relevant drive when you right-click? This would go a long way toward compensating for one of the only steps backward I've discovered in Windows 7 so far. I doubt anyone had created one specifically for this need before Windows 7, because this information was previously easily accessible in the status bar. I thought about creating one, but it has been a while since I have messed with the Shell API, and I know there are coders out there who could do it faster and better. If you've heard of one, or know of something else to make up for this Microsoft misstep, I'd appreciate hearing about it. If MS were listening to the community they would already have a PowerToy or add-in of some kind to un-break this (they could even release it unsupported), as there seem to be many power users that are extremely annoyed by this feature removal decision. If anyone has seen something, please post it here. As it has been only 4 days since the official Windows 7 release, I'll wait at least a week to choose an answer. Here's a prototype screenshot: SU question 19232 is related.

    Read the article

  • Virtual Server HDD shrinks without apparent reason

    - by Christian
    We have a hosted virtual Linux server, and in the last few months, every now and then, the HDD shrinks from 400GB down to the exact byte count that is in use. All existing data can be downloaded and displayed without a problem, but we can't upload or edit any files because of the "full" hard drive. Here is a screenshot, where "size" should be 400GB:

    This has happened twice before, and again today. The last times, when I reported the issue to the host, they said "that isn't possible, you must be doing it wrong", but soon after the call the problem vanished without us doing anything, so I suppose they have some kind of problem they're not willing to admit. Even after the fact, they acted like nothing was wrong and wrote me a mail in which they explained that I can use "df -h" to view available disk space (well, duh, how do you think I noticed this particular issue?). Questions about if and what they had done were ignored. It has happened around the 25th to 28th of the month, so I suspect that they might have a cronjob running every 30 days or so which wreaks havoc with some VM configs. I just want to understand the problem, but the host's support hasn't been very helpful in that regard. I have tried Googling the issue, but any combination of search terms I can come up with just gives me tutorials on how to change the HDD size in a virtual machine.

    a) What could be the cause of shrinking HDD size in an Ubuntu 12.04.3 LTS server? Could there be anything in our virtual machine, or is it more likely to be an issue with the VM host?
    b) Can I do anything about it without needing to contact the host's support?
    c) Is there any way I can prevent this from happening at all?

    Read the article

  • Hashed patterns in bar graphs - MS Excel

    - by user1189851
    I am drawing graphs for a paper that supports only black-and-white figures. I need to show more than 3 histograms and want to give them different fill patterns, like hashed, dotted, double-hashed etc., instead of different colors in the legend. I am using MS Excel 2007. I tried but don't find a way, except for the options available in the Design tab that I find when I double-click on the chart area (these are shades of grey, and I want patterns like hashing, dots, etc.). Thanks in advance.

    Read the article

  • Windows 7 start menu showing incorrect data

    - by madmik3
    Hi, I've tried to rebuild my search index but it does not seem to help. When I search for anything, even "command", I either get an empty list or a list of shortcuts with the names Programs, Documents, Files... They all have the default white paper icon. If I click on them, I get an error message that says "Internet security settings prevented one or more files from being opened." Any ideas? Thanks.

    Read the article
