Search Results

Search found 19406 results on 777 pages for 'satellite image'.

Page 428 of 777

  • Possible to host CentOS netinstall files on a local HTTP/FTP?

    - by garlicman
    I'm running XenServer on a Dell R610 and am running into a catch-22. During install from DVD, CentOS can't find the DVD package catalogue. It's a reported error for some: XenServer + CentOS 6 + DVD install in some hardware configurations = failed install. Yes, I checked the MD5 and let the disc test pass. In every reported case, the netinstall was the solution. The issue is that my net access has to go through a web proxy that prompts before you can download a file, which naturally breaks any download automation. I've been waiting on our IT to put in an exception rule to allow my lab to bypass the prompt, but it's been over 3 weeks now and they don't seem responsive. (I've been working on this a day or two a week.) I want to try to host the netinstall files locally in my Xen network. Right now I only have a bunch of Windows-based VMs; CentOS won't install, so I don't have any Linux tools. I tried simply hosting all the DVD contents off one of the Windows servers using Mongoose (I didn't want to set up IIS). I copied them to a hosted sub-directory laid out like all the mirrors out there (e.g. http:///centos/6.2/os/i386/) with no auth or anything, and in the netinstall I pointed to it correctly. I now realize just copying the DVD files over won't work: the repodata will point to a local device, not the site I'm hosting (e.g. the DVD repodata includes XML that points to where the packages are), and clearly I'm hosting them over HTTP, not from a DVD. Is there an easy way to sort this out? I'm just trying to install CentOS 6 on Xen. If there's a turnkey downloadable Xen image with CentOS 6.2 on it, or a downloadable repo image, I'll take that too! Thank you in advance!
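
    A minimal sketch of one way to address the repodata problem, assuming you can borrow any Linux machine or live environment that has the createrepo package, and that the path below is a placeholder for wherever the DVD contents were copied:

        # regenerate the repository metadata so all package references are
        # relative to the directory that will be served over HTTP
        cd /path/to/centos/6.2/os/i386
        createrepo .

    After copying the refreshed tree back to the Mongoose document root, the netinstall URL can point at it exactly as before.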

    Read the article

  • Blending Background for Polar Distortion in GIMP

    - by Chris S
    I followed a tutorial to perform a polar distortion on a panoramic image. The instructions are geared toward Photoshop but seem to mostly apply to GIMP as well. The only thing I couldn't really figure out was how they were able to automatically fill in the area around the circle by "extending" the border of the circle. For example, in GIMP, performing the polar distortion leaves a blank white canvas around the circle, not the attractive blended background shown in the tutorial. Is there an easy way to implement this? The only way I found was to reserve half of the "square" as blank canvas and then manually copy the image's top row of pixels over this empty portion. Then, after the polar distortion, I crop out the extra area. Although this achieves the effect, it seems a bit awkward. How do you stretch selections? Ideally, I just want to select the top row and stretch it vertically until it fills half of the canvas. Instead I had to manually copy, paste, translate, etc.

    Read the article

  • Mounting a VirtualBox shared folder on boot with fstab in OpenSuse 11.3

    - by ccook
    I have followed the steps found here; however, the share is not mounted on boot. The share will mount if I run 'mount -a' after booting. Why would the share not mount on boot? The steps I followed:
        1. Set up a virtual machine and install OpenSUSE 11.2.
        2. Create a shared folder on the host (HostFolder).
        3. Set up the shared folder in VirtualBox, via the virtual machine details or via Devices > Shared Folders...
        4. Install the dependencies for running the VirtualBox installer. You need to install the right development kernel package for your machine type (use 'zypper search -i kernel' to see what's installed):
               sudo zypper install make gcc kernel-source kernel-hosttype/default-devel
        5. Run the virtual machine and go to Devices > Guest Additions. This mounts an ISO image in your OpenSUSE guest.
        6. Open a root terminal and run:
               cd /usr/src/linux
               make oldconfig && make prepare && make scripts && make dep
               cp ../linux-obj/$HOSTTYPE/default/Module.symvers .
               make prepare
           A commenter on the previously mentioned thread says this step is unnecessary, but it doesn't work without it on my system. I suggest trying step 7 first and returning to step 6 if that fails.
        7. Run ./VirtualboxLinux<yourhosttype>.run from the mounted ISO image.
        8. Create the shared folder in OpenSUSE (GuestFolder).
        9. Test with:
               sudo mount -t vboxsf HostFolder /home/user/GuestFolder
           It works? Great! Let's set up the system so it automounts for your regular user account instead of root-only access.
        10. Add this line to /etc/fstab:
               HostFolder /home/user/GuestFolder vboxsf defaults,uid=1000,gid=1000 0 0
        11. It works for me, but if it still doesn't automount after a reboot: sudo mount -a
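
    One possible explanation, offered as an assumption rather than a confirmed diagnosis: the vboxsf kernel module may not be loaded yet when /etc/fstab is processed at boot, while it is loaded by the time 'mount -a' is run by hand. A minimal sketch for checking and working around this; the "late" boot script location is itself an assumption and varies by release:

        # diagnosis: is vboxsf already loaded, and does dmesg show the fstab
        # mount failing before the Guest Additions service started?
        lsmod | grep vboxsf
        dmesg | grep -i vboxsf

        # possible workaround (assumption: a local boot script that runs after
        # all services, e.g. /etc/init.d/after.local on openSUSE, re-uses the
        # existing fstab entry)
        mount /home/user/GuestFolder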

    Read the article

  • CDN Rerouting on 404 (file not yet in synch with original storage)

    - by Alan Ristic
    Here is the problem. I've set up my app (on EC2) to store uploaded images directly on Amazon S3. I'd like to be able to serve static files (CDN-style) from my 'home' server, so I wrote a script that syncs from S3. But there is a window of (at least) one minute before things are in sync. I see two solutions to the problem of pics not yet being available on the 'home' server:
        1. Write a script on EC2 (where the app resides) to fetch from the DB the pics that have a status of "not-yet-synch", which is the default state when a user uploads a picture. The script then pings each picture and, if it gets an OK response, updates the DB from "not-yet-synch" to "synch".
        2. The preferred solution would be to let Apache (in this case) redirect a request for an image to S3 if it sees a 404 (i.e. it doesn't find the requested image). This way I wouldn't need the script from solution 1.
    So what approach do you suggest I take in solving this redundancy problem? And what is the practice in production environments? To further clarify: I'd like to serve images first from the 'home' server and, if that fails, serve them from S3. Tnx, Alan
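
    A minimal sketch of the Apache fallback described in solution 2, assuming mod_rewrite is enabled; the document root, bucket name and URL are placeholders:

        # if the requested file does not exist locally yet, send the client to S3
        <Directory "/var/www/static">
            Options +FollowSymLinks
            RewriteEngine On
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^(.*)$ http://my-bucket.s3.amazonaws.com/$1 [R=302,L]
        </Directory>

    A 302 (rather than 301) keeps clients and caches coming back to the 'home' server, so they pick up the local copy once the sync completes.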

    Read the article

  • Windows Photo Viewer can't open this picture because you don't have the correct permissions to access the file location

    - by Software Monkey
    My system is Windows 7, fully up to date with all patches and options (except for Microsoft Silverlight, which I refuse to install). I get this error whenever I try to open an image using Windows Photo Viewer, such as when previewing from Explorer or when opening an image attachment to an email. I have already verified correct permissions on the file and all folders in the path. The strange thing is that every other program I have seems to open the images fine, including "Slideshow" from Windows Explorer. Even stranger, in WPV there is an "Open" menu that lists the other programs for images, including GIMP and MS Paint, and they open the very file that WPV is complaining about just fine. That should eliminate permissions as the problem, especially since (logically at least) they are read/write while WPV is read-only. I have even edited and saved the images that WPV does not open. I am out of ideas, and searching for an answer on the Web has resulted only in the same tired repetition of some flavor of "take ownership and reset permissions for the entire drive", which I have already done, and which is contradicted by the fact that only Windows Photo Viewer seems to have a problem. The one thing which is slightly unusual is that the normal files are all on a second HDD mounted into C:, however for email attachments the temporary folder is C:\Temp\, which is directly on that drive.

    Read the article

  • Plone site randomly serving wrong content

    - by Chris Miller
    I have a Plone site that has begun to randomly serve up the wrong content. Any given piece of content suddenly shows something else: sometimes a JPEG request returns a stylesheet, or a stylesheet loads as a page, or a page as an image. The images move around; sometimes our site logo shows a bullet, or one of the other site images. Fiddler shows the wrong content in the response, and the Apache logs show the content type of the incorrect file (so if an image loads in place of a stylesheet, Apache shows that). We thought mod_proxy was the source of our grief, but we get the problem hitting Zope directly. I never get the wrong content when using the Medusa Monitor to repeatedly hit the content. I do see ConflictErrors in the instance.log file, and they seem to be correlated to the problem, but not 100%: ZPublisher.Conflict ConflictError at \path\to\object: database conflict error (oid 0x3586, class BTrees._OIBTree.OIBTree, serial this txn started with blah, serial currently committed blah) (X conflicts (0 unresolved) since startup blah) -- I pulled that off the web, it's not from our logs, but it's the same message. This may be a red herring; it sounds like those messages are normal. We've updated to 3.3.5, same problems. I'm at a loss. I'm wondering if there is a good way to intercept what is being served. Secondly, is there a way to increase the verbosity of the access log to include the content type? I've even seen the problem manifest in the ZMI. It happens more often when we're authenticated. Sometimes it can take a thousand reloads to see the problem; other times it happens in different ways every time we reload. I believe we've seen this problem for a couple of years, but it was very intermittent: a page would show the content of a GIF, then after a reload it wouldn't happen again for a long time. Now it's a huge problem.
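
    On the question of intercepting what is actually served, one low-level option is to capture the HTTP responses on the wire; a minimal sketch, assuming Zope is listening on port 8080 on the same host (adjust the port and interface to your setup):

        # print request and response headers/bodies in ASCII for traffic to/from Zope
        tcpdump -A -s 0 -i lo 'tcp port 8080'

    Comparing the Content-Type headers captured here against the Apache access log entries would show whether the mismatch originates in Zope or somewhere in front of it.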

    Read the article

  • Web page layout becomes broken when moved to live.

    - by Moses
    I want to preface my question with the fact that I'm only a front-end web developer, so please excuse my gross lack of knowledge in this area. My company has three web servers: one for development (IIS v5), one for staging (IIS v5), and one for live deployment (IIS v6). Staging is an exact mirror of live. When I compare the staging and live web pages side by side in Firefox (3.6), the pages are identical. However, when I compare the staging and live pages with Internet Explorer (8), there are major differences:
        - In staging, the squares for bulleted lists are small. In live, the squares are big.
        - In staging, the borders for tables are thick. In live, the borders are thin.
        - In staging, an ASP-generated image is the proper height. In live, the image is cropped at the bottom by about 10px.
    In the end, the layout on live became broken because of these tiny differences, but why? Does the fact that live is on IIS 6 and staging is on IIS 5 account for the small variance in display? And is there any way I can change this server-side? Also, is there any reason why Firefox displays both correctly and IE displays both incorrectly?

    Read the article

  • How to figure out what VirtualBox did?

    - by AndrejaKo
    I'm trying to boot a custom made-in-ASM OS on my recent laptop. The OS is intended to be installed on a floppy, and during make it creates a bootable floppy. Since I don't have a floppy drive, I installed it on a virtual floppy. After that I used WinToFlash's "create bootable MS-DOS USB drive" option to transfer the floppy image to a USB flash drive. Then I tried to boot my computer from it but got only a repeating broken string on screen. After all that I made a virtual hard disk image from the flash drive using this tutorial and tried to boot a virtual machine from it. The first time I got the same problem as on the real computer. I then used the reset option, and the next time, and every time after that, the OS booted correctly. My question is: how do I figure out what exactly happened to the virtual machine between the first and second boot? UPDATE: I just created a new VM with default settings for Windows XP and it has the same problem that I have on the real computer. I was unable to reproduce the procedure which made the first VM work correctly.

    Read the article

  • Generating new SID for Windows 7 cloned partition in Linux?

    - by Jack
    So I've read that the proper way to clone a Windows 7 partition is to run a Sysprep after the clone is complete. For MANY reasons, this is not possible the way we are cloning these drives (long story short, the drive should be fully up and running after we clone it, with all the settings already there and requiring no user intervention; and no, not even an answer file would work because the way we customize all the Win7 settings is complex and we do not want the user touching the settings). I understand Microsoft will not support Windows 7 clones if it is not sysprepped and that is fine for us. Acronis recovery tools get around this by ticking an option called "Create new NT signature", which resets the SID and GUID on any restore. Symantec has a tool called Ghostwalker which does the same thing. However, we are looking for a way to do this in Linux because we want to use open source tools to do the imaging (fsarchiver, partclone, etc. basically the same tools Clonezilla uses internally to clone NTFS partitions). The question is, if we clone using these tools in Linux, how would we generate a new SID thereafter (without the use of sysprep)? Is there any way to do it within a Linux environment? The whole image process is automated so if it is a simple command that I can just throw in my shell script, that would be even better. Of course, it would be nice to know if this is even possible. Any ideas? EDIT: Forgot to mention that the target machines we are restoring the image on are EXACTLY the same.

    Read the article

  • Save and restore multiple layers within a Photoshop action that flattens

    - by SuitCase
    I'm editing comic pages with layers - "background", "foreground", "lineart" and "over lineart". I have a Photoshop action that includes a Mode-Bitmap command, which requires the image to be flattened. I need this part of the action because I use the Halftone Screen method of reducing the greyscale image to bitmap on the "background" layer, creating a certain effect. I am pretty sure there is no filter or anything else that gives the same effect. After the mode is changed to bitmap, my action changes things back to greyscale for further changes. This poses a problem. I only want to do the bitmap mode change on the background layer, and after I do the change I want to restore the layer structure as it was - with the foreground, lineart and over lineart layers back above the now-halftoned background. My current method of saving these layers and restoring them is clumsy. My action is able to automatically save the "foreground" layer by selecting it, cutting it, then pasting it back in after the mode changing is over. But for the "lineart" and "over lineart" layers, I have to manually cut them, paste them into a new document, and later re-cut and re-paste after running my action. This is so clunky! What I would like to know is whether it's possible to set aside my layers in an automated way, and then bring them back in, also in an automated way. An ugly (but functional) solution would be to replicate my manual steps of creating new documents and pasting the layers there temporarily, but I don't think Photoshop allows an action to do things outside of the current document. It seems to me that the only way to do what I want is to incorporate the clipboard into the action as a clever hack, but that leaves me stuck, as I have two more layers that can't fit onto that same clipboard. Help or suggestions would be appreciated. I can keep on doing it manually, but a comprehensive action would save me a ton of time.

    Read the article

  • What are some of the best answer file settings for a WDS Deployment?

    - by drpcken
    I've had my head buried in answer files for days now and have gotten quite comfortable setting them up, testing them, etc. I use a handful of components to help my migrations. For my unattend.xml I like:
        - Windows-International-Core-WinPE -- good for setting locales in the preboot environment (en-US for US English speakers). Keeps me from having to set these on the initial image boot.
        - Windows-Setup_neutral -- I like the WindowsDeploymentServices -> ImageSelection setting, especially if I'm only pushing a single image. This keeps me from having to select it each time.
    My OOBE_Unattend.xml is really useful and I barely have to touch anything during this part of the installation:
        - Windows-Shell-Setup_neutral -- This lets me put in a ProductKey for my MAK volume license (very useful and time saving). I can also set the TimeZone for the installation.
        - Windows-UnattendedJoin_neutral -- I couldn't live without this component. It joins the machine to my domain before logging in as a domain administrator. I would hate to not have this ability.
        - Windows-International-Core -- Again, this component really speeds up the OOBE process. I configure my locales and time zone so I don't have to do it by hand when the machine enters OOBE.
        - Windows-Shell-Setup -- Allows you to configure an autologon when the new machine is finished. I like to log on as a domain admin automatically for customizing and troubleshooting the new machine immediately after it is imaged. Also, the OOBE component under here lets me skip the EULA, hide the wireless setup, and set my default NetworkLocation. All of this makes the entire OOBE totally automated.
    What are some other good components I am missing, as far as helping me get these images pushed and configured as quickly as possible?
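
    A minimal sketch of the Shell-Setup pieces described above, offered as an illustration rather than a drop-in file; the password, logon count and the pass/architecture attributes are placeholder assumptions and would need to match your own unattend layout:

        <!-- hypothetical fragment for the oobeSystem pass -->
        <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64"
                   publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
            <AutoLogon>
                <Enabled>true</Enabled>
                <Username>Administrator</Username>
                <Domain>EXAMPLE</Domain>
                <Password><Value>PlaceholderPassword</Value><PlainText>true</PlainText></Password>
                <LogonCount>1</LogonCount>
            </AutoLogon>
            <OOBE>
                <HideEULAPage>true</HideEULAPage>
                <HideWirelessSetupInOOBE>true</HideWirelessSetupInOOBE>
                <NetworkLocation>Work</NetworkLocation>
            </OOBE>
        </component>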

    Read the article

  • Install and enforce a scheduled task across a Windows domain

    - by Ricket
    We have a small domain of about 70 Windows computers (XP and 7). We want to schedule a command (an update mechanism) to run on all computers periodically, and we want the task to run regardless of the computer's connection to our network (i.e. the task should run even on a laptop that isn't connected to our VPN). We have a Microsoft System Center Essentials 2010 server, so that might come in handy. The options I see are these:
        - Do it completely manually. Install the scheduled task by hand or remotely using psexec (and the at command?) for each computer in our network. Enforce that newly imaged computers have the task installed on them before being deployed to the employee, or build the task into the image. High initial cost (having to do this for each of 70 computers), but building it into the image might work... There is still some maintenance in making sure the task is added to everything, and I fear that a year or two down the road we will have forgotten about it, or gotten sloppy, or had new IT employees who miss this step, and some computers won't have the task.
        - Have one of our servers run a script that loops through all computers and psexec's the command on each computer in the network -- but it would only run on running, connected computers, so this solution wouldn't work. I suspect SCE could do something like this too, but again this is not a good solution.
    Neither of these is ideal, and I'm certain there is a better way to do it -- right? What is the best way to accomplish this task?
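
    For the "install by hand or remotely" option, a minimal sketch of the command that could be pushed to each machine (schtasks is the modern replacement for the at command; the target name, task name, command path, schedule and any /U /P credentials are placeholder assumptions):

        REM create a daily task on one target machine; loop over a list of hostnames to cover the domain
        schtasks /Create /S TARGET-PC /RU SYSTEM /SC DAILY /ST 03:00 /TN "NightlyUpdate" /TR "C:\Scripts\update.cmd"

    Because the task is registered on the machine itself, it keeps running on schedule even when the laptop is off the VPN, which is the requirement that rules out the run-from-a-central-server loop.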

    Read the article

  • PS3 controller -> PC -> emulators -> TV

    - by abrereton
    I'm researching a media PC for the living room. Playing videos, audio and streaming Internet content is straightforward enough. I would also like to run a gaming console system on it, and I was wondering if anyone has any thoughts on this. So far I've discovered that a PS3 controller (thankfully it uses USB and Bluetooth) can be connected to a PC. I've also found that MAME, MESS and PCSX2 are all the emulators I need (I can even emulate a TI-83 calculator with MESS). These emulators can re-map keys, so for example I can map the Nintendo A button to the PS3 X button, or the SNES keypad to the PS3 keypad or the analog stick. There are also front-ends to these emulators which can do fancy things like image scaling, anti-aliasing and double-buffering to improve the image quality of an 8-bit Mario on a 50 inch plasma. My setup would be this:
        - PS3 controller connecting over Bluetooth to the PC
        - PC with Windows, PS3 controller drivers, and all my emulators
        - Network drive with all my ROMs
        - PC connected to the TV via HDMI
        - TV playing Super Mario Kart
    Does this sound feasible? Does anyone have experience of doing anything like this? Is this a good idea, or should I grow up and stop living in the past?

    Read the article

  • Installing Windows 7 over PXE, preferably with domain autojoin

    - by Ivan Vucica
    At an educational non-profit, I've inherited a previously set-up Windows domain that, after the first reinstall of the machines, we ended up not using, simply by not joining machines back into the domain. Last summer, before the annual reinstall for shipping machines to the summer school, I toyed with the idea of installing Windows 7 over the network instead of just imaging the machines. It took a bit longer than I expected to figure out the basics; honestly, I expected Windows to be more friendly toward PXE installation out of the box. What I'm interested in is best practices for installing Windows 7 over PXE with domain autojoin. I'd love it if the whole setup could optionally be hosted on a UNIX-based system as well. I've had some success by preparing an ISO using the Windows Deployment Kit and loading the ISO into memory. This was needed since I wanted a menu, and I think I couldn't get PXELINUX to chainload into Windows' bootloader. Unfortunately, I couldn't figure out much about customization of the Windows setup in that timeframe, nor could I get Samba to work properly; studying the material ended up being too lengthy, especially the portion where I edited a disk image on Windows and copied it back out. WDK didn't make things easier by mounting the disk image into RAM and writing it out in its entirety when done with it, making me a very sad boy. I've recently found a different approach, too, that appears to be closer to Microsoft's original idea for netboot deployment and does not involve ISOs. So my question boils down to the following. What exact approach do you use for netbooting Windows 7 setup? How can Windows 7 setup be best customized to be completely unattended, including installation on a specific system partition without destroying the data partition, creation of a passworded admin and a default user, choice of a MAC-address-based hostname, and joining a domain? As much detail as possible for everyone's future reference would be appreciated. WDS isn't a bad choice, but if a Linux-based install can be used, that'd be better.
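
    On the domain-autojoin part specifically, a minimal sketch of the unattend component that handles it, shown only as an illustration; the domain names, join account and architecture attributes are placeholder assumptions, and the component belongs in the specialize pass:

        <!-- hypothetical fragment for the specialize pass -->
        <component name="Microsoft-Windows-UnattendedJoin" processorArchitecture="amd64"
                   publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
            <Identification>
                <JoinDomain>example.local</JoinDomain>
                <Credentials>
                    <Domain>EXAMPLE</Domain>
                    <Username>joinaccount</Username>
                    <Password>PlaceholderPassword</Password>
                </Credentials>
            </Identification>
        </component>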

    Read the article

  • scalable yet doable small-medium office network

    - by Jared
    Hello, I'm studying up with both Microsoft and Cisco literature and I must say, my head is starting to get clustered up (pun intended). I've made a quick network diagram of a theoretical company... Company 1 owns Company 2 and Company 3, which are all in separate rooms and on separate networks, but must be able to share a few resources such as files or printers. Given the amount of info out there and best practices, I thought about posting here to get suggestions and see what the pros would do. I can read and read all day and implement on my own, but if I don't get some outside input, how will I know if I'm doing something wrong, right? Anyway, please take a look and see if this is an over-complicated network or a lackluster design for a small-medium company of about 35 people -- and let's say they will be double that number by the end of the year... :) Using Win2k3, ESXi and Windows XP. FCS - Forefront Client Security, ACS - access control system, SPCWK - Spiceworks, XCH - Exchange. I'm not allowed to post an image yet, so here's the link ---- GLIFFY IMAGE. Flame suit is on just in case people get mad at me for making an "abomination". I'd really want to get the general overview right before I dive into the more complicated things.

    Read the article

  • CSS and JS files not being updated, supposedly because of Nginx Caching

    - by Alberto Elias
    I have my web app working with AppCache, and I would like that when I modify my HTML/CSS/JS files and then update my cache manifest, users accessing my web app get the updated version of those files. If I change an HTML file, it works perfectly, but when I change CSS and JS files, the old version is still being used. I've been checking everything and I think it's related to my nginx configuration. I have a cache.conf file that contains the following:
        gzip on;
        gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon;
        location ~ \.(css|js|htc)$ {
            expires 31536000s;
            add_header Pragma "public";
            add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
        }
        location ~ \.(html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$ {
            expires 3600s;
            add_header Pragma "public";
            add_header Cache-Control "max-age=3600, public, must-revalidate, proxy-revalidate";
        }
    And in default.conf I have my locations. I would like to have this caching working on all locations except one; how could I configure this? I've tried the following and it isn't working:
        location /dir1/dir2/ {
            root /var/www/dir1;
            add_header Pragma "no-cache";
            add_header Cache-Control "private";
            expires off;
        }
    Thanks
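
    A minimal sketch of one way to exempt a single location, assuming the regex locations from cache.conf live in the same server block: in nginx, a prefix location marked with ^~ takes priority over regex locations, so the css/js caching rules above are skipped for anything under it (paths and headers are placeholders):

        location ^~ /dir1/dir2/ {
            root /var/www/dir1;
            expires off;
            add_header Cache-Control "private, no-cache";
        }

    Without the ^~ modifier, a plain prefix location loses to the \.(css|js|htc)$ regex, which would explain why the attempted block had no effect.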

    Read the article

  • Best practices for thin-provisioning Linux servers (on VMware)

    - by nbr
    I have a setup of about 20 Linux machines, each with about 30-150 gigabytes of customer data. The size of the data will probably grow significantly faster on some machines than on others. These are virtual machines on a VMware vSphere cluster, and the disk images are stored on a SAN system. I'm trying to find a solution that uses disk space sparingly while still allowing individual machines to grow easily. In theory, I would just create big disks for each machine and use thin provisioning; each disk would grow as needed. However, it seems that a 500 GB ext3 filesystem with only 50 GB of data and quite a low number of writes still easily grows the disk image to, e.g., 250 GB over time. Or maybe I'm doing something wrong here? (I was surprised how little I found on the subject with Google. BTW, there's not even a thin-provisioning tag on serverfault.com.) Currently I'm planning to create big, thin-provisioned disks, but with a small LVM volume on them -- for example, a 100 GB volume on a 500 GB disk. That way I could more easily grow the LVM volume and the filesystem size as needed, even online. Now for the actual question: are there better ways to do this (that is, to grow the data size as needed without downtime)? Possible solutions include:
        - Using a thin-provisioning-friendly filesystem that tries to occupy the same spots over and over again, thus not growing the image size.
        - Finding an easy method of reclaiming free space on the partition (re-thinning?).
        - Something else?
    A bonus question: if I go with my current plan, would you recommend creating partitions on the disks (pvcreate /dev/sdX1 vs pvcreate /dev/sdX)? I think it's against convention to use raw disks without partitions, but it would make it a bit easier to grow the disks, if that is ever needed. This is all just a matter of taste, right?
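
    For reference, a minimal sketch of the online-grow step in the current plan, assuming the volume group already has free extents and the filesystem is ext3/ext4; the volume names and size are placeholders:

        # grow the logical volume by 50 GB, then grow the mounted filesystem to match
        lvextend -L +50G /dev/vg_data/lv_data
        resize2fs /dev/vg_data/lv_data

    Both steps work while the filesystem is mounted, which is what makes the small-LVM-volume-on-a-big-thin-disk approach attractive for avoiding downtime.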

    Read the article

  • Green flickering pixels that move with black images

    - by user568458
    Strange question... Occasionally, on my LCD screen, pixels that should be black flicker rapidly and constantly between black and green, about 4 flickers a second. The crazy part is that, unlike dead or stuck pixels, they are relative to content on the screen and move with it. For example, I might be looking at a web page with a picture that has lots of black; there might be a couple of green flashing pixels in that black that shouldn't be there. I scroll the page, and the green flickering pixels move with the image. It seems that every physical pixel is fine, but somehow something interprets part of the image in a way that causes flickering green... It's not just in a web browser. My first thought was to blame a trolling blogger cunningly uploading an animated GIF that simulates a failing pixel... but it happens in a wide range of applications. It seems to occur randomly, other than that it seems to only occur in areas of pure black, and it's always pure 100% green. It happens rarely enough that it's not a big deal, but it's such a strange problem that it bugs me. I can't find any info on anything like this. I'm not even sure if it's hardware or software. Any ideas? (Windows 7 laptop connected to the LCD by a DVI-to-HDMI cable)

    Read the article

  • Apache caching with mod_headers mod_expires

    - by Aaron Moodie
    Hi, I'm working on homework for uni and was hoping someone could clarify something for me. I need to set up the following:
        - Configure the response header "Cache-Control" to have a "max-age" value of 7 days since access for all image files.
        - Configure the response header "Cache-Control" to have a "max-age" value of 5 days since modification for all static HTML files.
        - Configure the response header "Cache-Control" to have a value of "public" for all static HTML and image files.
        - Configure the response header "Cache-Control" to have a value of "private" for all PHP files.
    My question is whether it is better to use a FilesMatch, or the mod_expires ExpiresByType, to best achieve this? I've so far used the following:
        <FilesMatch "\.(gif|jpe?g|png)$">
            ExpiresDefault "access plus 7 days"
            Header set Cache-Control "public"
        </FilesMatch>
        <FilesMatch "\.(html)$">
            ExpiresDefault "modification plus 5 days"
            Header set Cache-Control "public"
        </FilesMatch>
        <FilesMatch "\.(php)$">
            Header set Cache-Control "private"
        </FilesMatch>
    Thanks.
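
    A minimal sketch of the ExpiresByType alternative mentioned in the question, assuming the default MIME type mappings for the file extensions involved; note that mod_expires only emits Expires and the max-age part of Cache-Control, so the public/private values would still come from Header directives:

        ExpiresActive On
        ExpiresByType image/gif  "access plus 7 days"
        ExpiresByType image/jpeg "access plus 7 days"
        ExpiresByType image/png  "access plus 7 days"
        ExpiresByType text/html  "modification plus 5 days"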

    Read the article

  • How to increase the speed between two external hard drives on my laptop?

    - by Roman
    Hello, I own a Sony Vaio Z laptop with two external USB ports. It's quite new and has USB 2.0 support. I'm using Vista x64 on it. I also have two external USB hard drives, an Iomega 500 GB and a WD 1 TB; each has USB 2.0 support. I connect the two devices to my laptop and try to copy data from one hard drive to the other. But it takes a lot of time! The speed is about 15 megabytes per second, and I have to wait far too long to copy all the information from one hard drive to the other. When I copy information from my internal (SSD) hard drive, it works fine for both external drives: the speed is very high, something like 100 megabytes per second. That makes me think USB 2.0 is OK on both drives. But when I'm copying from one external drive to the other external drive, I still get very low speed. I checked Device Manager and here are the settings I have: (sorry, can't upload an image because of my rating, check this URL: http://picbite.com/image/122073daljo/ ) I think it's because both of my external drives use the same USB 2.0 controller. Is there any way to make it work faster? Is it possible to move one of my USB ports to another USB 2.0 controller? Or is there any software which can help me automate copying all the files through my internal drive? I have only about 3 gigabytes of free space on the internal drive, so it's quite difficult to manually move every file from one external drive to the internal drive and then on to the other external drive.

    Read the article

  • Aging SBS needs updates / Thoughts for one-off, off-line complete backup?

    - by tcv
    Hey guys, So we checked out the status of an SBS 2003 box at one of our more recent, spend-averse clients and found it to be woefully out of date. Scary out of date. I think it's running IE2. OK, maybe not that far back. Anyway, I was thinking that I could use some kind of disk-imaging software to image the four IDE drives within and, in the event the server gets some kind of Update Induced Indigestion, I could completely restore. Usually my go-to software for this is Acronis, but my client will likely balk at a $500 price tag for a one-off backup with their server product. I had thought we could use the boot media from, say, Backup & Recovery 10 to take an off-line image of all the drives. According to their chat tech support, however, it will not work. I pressed for the technical reasons and they said they'd email me. They haven't emailed me. They still might. This server is running SBS 2003, pre-SP2. It's got four IDE disks. One is a basic disk, which contains the O/S. The others are bound as a dynamic disk. You might ask: "Don't they already have backup software?" They do! Backup Exec, a very low-end version that won't even do VSS. I don't know much about BE, but it seems to me that if the worst were to happen, it would mean building a new server O/S, installing BE (if the media is available), then restoring. Would it even work? I can take the system down for hours to do a backup, and my goal here is a pretty dead-simple restore if the worst happens. Any and all suggestions are exciting. m

    Read the article

  • Certificates required for WHQL-certified drivers

    - by Kasius
    The 64-bit Windows 7 image that we deploy to machines at our site does not contain all of the certificates included in a default Windows image. Automatic root certificate installation is also disabled per policy from higher in the organization. We have had a lot of trouble installing many WHQL-certified drivers from reputable companies (e.g. HP, Lexmark, Dell, etc.), and I hypothesize that a required certificate is missing from one of the certificate stores on the machine. The error we typically get is: "The driver cannot be installed because it is either not digitally signed or not signed in the appropriate manner." I know that it is signed. A .CAT file is included, and its certificate chain is the following, from top to bottom:
        - Microsoft Root Authority (thumbprint a4 34 89 15 9a 52 0f 0d 93 d0 32 cc af 37 e7 fe 20 a8 b4 19)
        - Microsoft Windows Hardware Compatibility PCA (thumbprint 93 b8 d8 82 0a 32 db 20 a5 ea b6 8d 86 ad 67 8e fa 14 ea 41)
        - Microsoft Windows Hardware Compatibility Publisher (thumbprint b0 50 45 45 42 4e be 2c 16 2f 62 5b bf 5a e6 9b 96 bf 0b 0b)
    What certificates are required to install WHQL-certified drivers? Or is it possibly something other than certificates? Thanks! NOTE: I have posted this question on TechNet as well, but honestly, I've never had a lot of luck posting questions on the TechNet forums.
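
    If a missing certificate turns out to be the cause, one way to add it to the image without re-enabling automatic root updates is certutil; a minimal sketch, assuming the root and intermediate certificates have been exported to .cer files (the file names are placeholders):

        REM add the root CA to the machine's Trusted Root store and the intermediate CA to the CA store
        certutil -addstore Root MicrosoftRootAuthority.cer
        certutil -addstore CA WinHwCompatPCA.cer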

    Read the article

  • Sync desktop Mac environment to laptop

    - by Andrew Vit
    I spend the majority of my time working at my desktop Mac, which I have configured for my web development environment. My spouse has a MacBook for casual use, and I occasionally steal it back when I need to work off-site or when travelling. The question is how to best synchronize the two so I can switch between them more readily. I've solved a few obvious things by using online services:
        - Email is hosted on IMAP.
        - Working files are in Dropbox.
        - Source code is managed in git.
    However, the following are things I always miss when jumping on the laptop:
        - Installed applications (current versions)
        - Installed libraries & utilities (/usr/local)
        - Apache VirtualHosts & other configurations (/etc)
        - Disk image files for VMs
    My current method is to connect the MacBook via FireWire target mode and rsync the /Users/me home directory, and then cherry-pick the other items I need from Applications, /etc and /usr/local. The problem with this method is that it can be very time consuming due to things like my virtual machine image files, cached emails, etc. How can I make this faster & easier? Can you recommend a solution for configuration management (so I can repeatably install & configure the same software on both), or for synchronization (so I can bring the MacBook up to date nightly, over our home network)?
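
    A minimal sketch of the nightly-sync idea over the home network instead of FireWire target mode, assuming Remote Login (SSH) is enabled on the desktop; the host name, excludes and paths are placeholder assumptions:

        # pull the home directory from the desktop, preserving extended attributes,
        # and skip the heavyweight items that make the sync slow
        rsync -aE --delete \
            --exclude 'Library/Caches/' \
            --exclude 'Documents/Virtual Machines/' \
            me@desktop.local:/Users/me/ /Users/me/

        # cherry-pick the other locations the same way
        rsync -aE --delete me@desktop.local:/usr/local/ /usr/local/

    Run from the MacBook on a nightly schedule, this covers the synchronization half of the question; it does not address installed applications, which is more of a configuration-management problem.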

    Read the article

  • How do I calculate the cost of printing a given page?

    - by Alenanno
    I have seen questions like "How much does a square inch of ink cost?" and "How much more will a high-dpi image cost to print?", but mine isn't asking about a specific case, nor about how much something costs, as that would depend on the toner, for example. Rather, I was wondering how I should go about calculating the cost of printing a given page. Note that "given page" should be seen as a sort of x, i.e. the answer should be applicable in any case; I'd like this question to provide a good reference for those who want to calculate this cost. What should be taken into consideration? The cost of a single page (the paper only) is easy to check, since you divide the cost of the whole package by the number of pages in the package. But how do I calculate the cost of the ink/toner? Which could translate to: how do I calculate the ink density [1] for a given printer? I know it depends on the quality of the printer itself, the type, the quality of the image being printed, the very nature of what I'm going to print, etc. But again, the focus of my question is not on the variables of this case, but rather the constants, hoping the math simile works for this case too.
    [1] Total amount of ink in one area of the page.
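
    As a rough illustration of the arithmetic only (every number below is an invented assumption, not a measured figure): toner and ink cartridges are usually rated for a page yield at a standard 5% coverage, so the consumable cost of a page scales with its actual coverage relative to that baseline.

        # cost per page = paper_cost/sheets + (cartridge_price/rated_yield) * (coverage / 5)
        # assumed: $5.00 ream of 500 sheets, $60 cartridge rated 2,000 pages at 5%, page at 20% coverage
        echo 'scale=3; 5.00/500 + (60/2000) * (20/5)' | bc
        # -> .130  (about 13 cents for that page)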

    Read the article
