Search Results

Search found 7216 results on 289 pages for 'low cost'.


  • 10GE network: Is it still deadly expensive? Any options?

    - by BarsMonster
    Hi! I am building a home cluster where I'm going to have about 16 nodes which can live with 1G ports, but I really want to have 10GE on the file server & central node. It's all local, so no need for cables longer than 3-5m. And of course I want to spend as little money as possible (not going to spend more than the whole cluster costs) :-) What are my options? 1) The legacy solution is to take some 24-48 port 1GE switch and connect to the file/central nodes via 4-8 aggregated links. This will work I guess, the cost is very acceptable, but I am not sure if it's OK to use that many aggregated links. And of course it would be hard to double the bandwidth when needed... :-D 2) A switch with several 10GE uplink 'ports'. As far as I see, they all require modules which cost about $1000, so I will need 4 10G modules and 2 10GE cards... Smells like way more than $5000... 3) Connect the file & central node via 2 10G cards directly, and put 4 quad-port 1GE NICs on the file server. I'm saving on 2 10G modules and a switch, the file server will have to do packet routing, but it's still gonna have a lot of CPU left :-) 4) Any other options? InfiniBand? 5) Do Myrinet adapters work fine? I guess there are no cheaper options? 6) Hmm... Scrap the file server, put it all on the central node and provide a dedicated 1GE port for each of the nodes... This is sad...
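
    A hedged sketch of option 1 from the Linux side, assuming a switch that supports 802.3ad/LACP and that eth1-eth4 (placeholder names) are the file server's gigabit ports:

        # Bond four 1GE ports into one logical link (iproute2)
        ip link add bond0 type bond mode 802.3ad
        for nic in eth1 eth2 eth3 eth4; do
            ip link set "$nic" down          # a NIC must be down before enslaving
            ip link set "$nic" master bond0
        done
        ip link set bond0 up
        ip addr add 10.0.0.10/24 dev bond0   # example address

    One caveat worth knowing: LACP hashes each flow onto a single member link, so any one TCP stream still tops out at roughly 1 Gb/s; aggregation only helps across many concurrent client flows, which does fit a 16-node cluster.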


  • Why do hosts prefer Linux to Windows Server?

    - by iconiK
    So far I see a HUGE majority of hosts providing only Linux shared hosting, offering Windows only for VPS (or even only for dedicated servers). Why is that? While Windows is a lot more expensive than Linux (though it depends on a lot of factors, not just the initial and support license cost), it also provides ASP.NET, IIS and, of course, Microsoft SQL Server. I know in the past it might have been because cPanel was Linux-only, but now they have a Windows version. So why is Linux still predominantly used for shared hosting? PHP works on both systems. IIS can be (and probably is) faster. MySQL runs on both systems as well. cPanel has a Windows version. Python, Perl and Ruby all run on Windows as well. You even have MS SQL Server Express, which I find superior to MySQL in both speed and features. Access is there for low usage requirements, as is SQLite (which is so great for quick small stuff). And with PowerShell you have a good alternative to the Unix shell. EDIT: I am looking for common reasons; I realize each hosting company (and/or its clients) may have different needs. This becomes very important when you get to VPS or cloud offerings, which give you a full operating system to use.


  • What's needed in a complete ASP.NET environment?

    - by Christian W
    We have an ASP 3.0 application with a few ASP.NET (2.0) ditties mixed in. (Our long-term goal is to migrate everything to ASP.NET, but that's not important for this issue.) Our current test/deploy workflow is like this: 1) use Notepad++ or VS2008 to fix a bug/feature (depending on what I have open); 2) open my virtual test server; 3) copy the fixed file over, either with Explorer or, if I can be bothered to open it, WinMerge; 4) test that the fix works; 5) close the virtual test server; 6) connect to our host with VPN; 7) use WinMerge to update the necessary files; 8) pray to higher powers that the production environment is not so different that something bombs. To make things worse, only I have access to my "test server", so I'm the only one testing it. I really want to make this a bit more robust; I even have a Subversion setup running. But I always forget to commit changes... and I don't even work in my checked-out folder, but in a copy of what is currently in production... Can someone recommend some good reading on deploying, testing, staging and stuff like that? I currently use VS2008 and want to use Subversion or Git (or any other free VCS). Since I'm the only developer, Team System is not really an option (cost-related). I have found myself developing an "improved" feature, only to find a bug in the same feature in the production system. And since my "improved" feature involved deleting some old functionality, I have to fix bugs directly in production... That's not a fun feeling... (I have inherited this system recently... so it's not directly my fault that it is like this ;) )
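
    Since a Subversion setup is already running, a rough sketch of a commit-then-deploy loop that removes the "copy of production" habit; the repository URL and server shares are hypothetical, and this assumes a command-line svn client plus robocopy (built into Vista/2008 and later):

        rem Work in a real checkout, not a copy of production
        svn checkout https://repo.example/webapp/trunk C:\work\webapp
        rem ... edit in C:\work\webapp, test on the virtual test server, then:
        svn commit -m "fix order-page crash" C:\work\webapp
        rem Deploy the same committed revision everywhere from a clean export
        svn export --force https://repo.example/webapp/trunk C:\deploy\webapp
        robocopy C:\deploy\webapp \\testserver\webapp /MIR
        robocopy C:\deploy\webapp \\prodserver\webapp /MIR

    The point of the export step is that test and production always receive the identical, versioned tree, so step 8 becomes much less of a prayer.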


  • Data Store/Volume disconnecting. How to resume copy of VMDK?

    - by Serge
    I'm having an issue with my ESXi 4.1 hosts losing their FC SAN datastore after a power outage. All 3 hosts disconnect, so it's definitely a SAN issue. I've tried to resolve the issue on the SAN side with the SAN software support and Adaptec hardware support. No luck there. So I'm stuck with a SAN that will randomly disconnect the volume. I need to get the virtual machines (VMDK files) off the datastore. The problem is I can only get 5-20% before the datastore disconnects. I have backups that are slightly older that I can replicate the VMDK differences to. What has not worked so far: Powering up the VMs: they will boot for 5-15 minutes, then freeze. vCenter migrate or clone of a VM: will fail after a similar period of time. vCenter copy/paste of the VMDK: was able to get one 30 GB VMDK and no luck after that. VMware Data Recovery: fails at a low %, can't resume, so the next backup starts from the beginning. Veeam Backup & Recovery: same as above, no resume function. If I can just find a backup solution that will resume from the failed spot, that would solve my issue. Anyone have any ideas that I could try? EDIT 1: The SAN is Open-E DSS 6 running on a Supermicro 24-drive enclosure with a 4-port QLogic FC card and an Adaptec 52445 RAID card.
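
    One avenue that may be worth a try, sketched under assumptions: an ESXi host exposes its datastores over HTTPS through the datastore browser, and a client that can resume ranged downloads can pick a pull up where the last disconnect left off - assuming the host's web server honors HTTP Range requests (wget fails cleanly if it does not). Host, datastore and VM names below are examples:

        # Run from a separate Linux box; rerun the same command after each
        # datastore drop and -c resumes at the last completed byte.
        wget -c --no-check-certificate --user=root --ask-password \
          "https://esxi01/folder/vm1/vm1-flat.vmdk?dcPath=ha-datacenter&dsName=san1"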


  • What is causing sudden freezes while running a real-time program?

    - by Trevor Boyd Smith
    So I run a highly intensive (CPU/GPU) real-time program. During normal execution, suddenly everything freezes for 1-4 seconds. I opened Process Explorer in the background to help gain insight and maybe identify something. Here is what the CPU/GPU graphs look like when I align them in time: Notice the 4 distinct drops in both the CPU/GPU. You can see that it goes from some sort of positive CPU/GPU usage to almost zero. These drops in the graph align with when the real-time program suddenly freezes. How do I find what is causing these sudden drops? NOTE: When you put your mouse over the graph it tells you the time, accurate to the second, for where your cursor is. Maybe this mouse-over feature could be helpful in some way (e.g. what if you had a log of all processes every 100 ms). EDIT: The real-time program is a video game, so I can't watch some sort of instrumentation while the game is running. I need a solution that lets you look back in time somehow to see what was happening when the slowdown occurred. EDIT: Re: recording data vs. using a real-time monitor: Windows Performance Recorder is for some reason not recording what I expect it to record, so I switched to using perfmon and opening its Resource Monitor. Re: setting it up so I can view a real-time monitor: in the video game I set it to spectate and put the game in windowed mode so that I can view the real-time display that Resource Monitor has. Now that I can get semi-real-time data (only once per second... how do you get more than once per second?) I started looking at the various real-time data readouts. Getting to the cause: I noticed a strong correlation between high disk IO and low CPU usage (which is also when the in-game freezing shows up). How do you use Resource Monitor to find out who is doing all this offending disk IO?
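
    For the look-back-in-time requirement, one option is to log per-process disk I/O counters to a file while playing and inspect the rows around a freeze afterwards. A minimal sketch with the built-in typeperf (counter names vary by Windows locale, and 1 second is its finest sampling interval):

        :: Log every process's disk I/O once a second to a CSV
        typeperf "\Process(*)\IO Read Bytes/sec" "\Process(*)\IO Write Bytes/sec" ^
                 -si 1 -o io-log.csv
        :: "typeperf -q Process" lists the exact counter names on this machine

    After a freeze, the CSV rows whose timestamps bracket the stall show which process's I/O spiked while everything else went quiet.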


  • Software/hardware to build video streaming server?

    - by Sasha Yanovets
    I am looking for a video streaming server solution, something like an online TV server, with the ability to make live broadcasts on the internet. What software could you recommend for that? What kind of hardware should it run on; is there anything special needed? I am looking for a solution that could be scaled up to at least 1000 simultaneous users online with good video resolution. I think it is good to have a general answer on what direction to choose. But here are more details on my specific case: I am looking for a solution almost from scratch. We have some video content that we've produced, but it is not delivered over the internet yet. We are not tied to any particular vendor for now. We want to run 24 hours of streaming as three 8-hour blocks with a change of content every day. We want the ability to make regular live broadcasts. I guess we will need several streaming quality options (low ~56 kb/s, mid ~273 kb/s). Some terms are just foreign to me (like play-truncation rate); if you could point out what parameters we should be aware of, that would be great. The uplink to the internet is to be determined. We plan to start with something and scale up along the way. If you already have some kind of media streaming server, just describe its configuration here (hardware, OS, software) and the peak number of concurrent users it serves. I think it could help people approaching this task.
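
    Purely as an illustrative sketch (not a vendor recommendation): one common shape for this is an RTMP ingest point that repackages each live input into a few bitrates delivered over plain HTTP, which caches and scales well. The config below assumes nginx built with the third-party RTMP module:

        rtmp {
            server {
                listen 1935;
                application live {
                    live on;               # accept live streams from an encoder
                    hls on;                # repackage as HLS for HTTP delivery
                    hls_path /var/hls;
                    hls_fragment 5s;
                }
            }
        }

    With HTTP delivery, the 1000-user question largely reduces to uplink bandwidth (1000 users x ~273 kb/s is roughly 273 Mb/s) rather than exotic server hardware.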


  • Can any iSCSI NAS appliance replicate / clone a LUN to an external drive?

    - by Boden
    I would like to back up using Windows Imaging to some kind of NAS appliance. I believe this will require the NAS to support iSCSI. I would then like the appliance to support replication of the iSCSI LUN to an external eSATA or USB disk connected directly to the appliance. I've found plenty of NAS appliances that can do iSCSI and replicate to an external drive, but none that I've found thus far can do both at once. That is, the devices can do iSCSI, but then the replication feature doesn't work. The idea here is to back up to an appliance located in a secure office far away from the server room. Offsite backups to an external hard drive could be managed from the appliance. The benefits of such a setup would be: 1) it is very unlikely that fire or random theft would affect both the server-room backup and the "remote" backup appliance; 2) offsite backups could be managed by multiple trusted people without granting access to the server room; 3) Windows imaging provides a poor man's deduplication, so each backup volume can contain a decent backup history. I understand why this would be a non-trivial thing to implement, but I'm wondering if such a thing exists? Preferably a tabletop, low- to medium-cost device. Alternative solutions welcome. NOTE: I'm backing up very few but very large files, so file replication is not a good option.


  • Setting up MySQL Linux slave with a Windows master

    - by philwilks
    I'm running a MySQL 5.0 database server on Windows Server 2008. The total size of the database is about 1Gb. I make daily backups, but I'd like to step up to having a slave server for extra protection. My thinking was that I wouldn't need the expense of a Windows machine to do this, and a Linux "cloud server" from RackSpace would do the job well for quite a low cost. However I have little experience with Linux, so I have a few questions... Does this sound like a good idea? Is there anything wrong with linking Windows and Linux MySQL servers? Does Linux have the equivalent of Remote Desktop Connection? If so can I use this from a Windows machine? Would a particular Linux distro be well suited to this task? RackSpace offer ArchLinux, CentOS, Debian, Fedora and Ubuntu. My immediate thinking is to go with Ubuntu as I've heard it's more friendly for people coming from a Windows background. Any comments you have would be very appreciated! Phil
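
    For what it's worth, the replication wiring is the same regardless of which OS sits on each end. A hedged sketch for MySQL 5.0, where host names, the password, and the binlog coordinates are placeholders to be read off the actual master:

        # On the Windows master (my.ini), then restart MySQL:
        #   [mysqld]
        #   server-id = 1
        #   log-bin   = mysql-bin
        # Create a replication account on the master:
        mysql -u root -p -e "GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'secret';"
        # On the Linux slave (server-id = 2 in my.cnf): load a consistent dump,
        # then point it at the coordinates from SHOW MASTER STATUS on the master:
        mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='windows-master',
            MASTER_USER='repl', MASTER_PASSWORD='secret',
            MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=98;
            START SLAVE;"

    As for the Remote Desktop question: the everyday Linux equivalent for server work is an SSH shell, which can be reached from Windows with a client such as PuTTY.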


  • Windows Action Center notification icon says Backup in Progress when no backup is occurring

    - by allquixotic
    How can I get Windows Action Center's little flag in the notification area to stop saying "Backup In Progress" on Windows 8? It's driving me nuts. I disabled the Windows Backup service completely and turned off File Recovery. Nothing that I can tell is using any disk I/O whatsoever, by examining Task Manager's resource monitor. It's just a visual cue that seems totally wrong considering my disk is only using about 50 KB/s of sporadic writes for superfetch etc. This wouldn't be a problem for me, since I rely on the knowledge that the Backup service is disabled and there's no disk activity, but I am trying to support a more traditional user who relies on visual cues from the operating system and trusts them over low-level observations like "...but the Windows Backup service is disabled!" Therefore this user still thinks that the backup is going on, even after the service is disabled. Technically, I think this is a bug in Windows 8. It really should not be displaying "Backup in Progress" if ... you know ... a backup, is not, in progress. Which it isn't. Is there a workaround?


  • Why do you use a 3PAR SAN? [closed]

    - by Starfish
    If you use a 3PAR SAN, I’d like to hear what you think about it, particularly compared to the HP EVA. What do you see as its advantages over other SANs like the EVA? What’s so special about the ASIC? We had HP quote us an EVA P6500 and 3PAR V400 with equivalent storage and the 3PAR was nearly twice the cost. My site has two EVA SANs with a combined capacity of ~80 TB. We want to replace the older and larger of the two. We’ve been looking at the EVA and the 3PAR to see which would be a better fit for us. I’m struggling to understand how the 3PAR differs from the EVA from a practical technical standpoint. When I read the sales literature and speak with the HP sales engineers, they spend a lot of time talking about how the 3PAR is better because of its ASIC. It’s ASIC this and ASIC that, but when I press them on how a 3PAR with thin provisioning is better than an EVA with thin provisioning, I can’t get a straight answer. Meanwhile, one of my colleagues, who has more say regarding which SAN we get, is enamored by the 3PAR, and he can’t explain clearly to me why he wants it over the EVA. Our needs are pretty simple. We have 10 servers running VMware and ~100 VMs. We use VMware’s thin provisioning currently, but we would like to start using thin provisioning on the new SAN. We don’t have a need for SSDs or migration between storage tiers. We plan on having FC or SAS drives for our most used data and SATA/FATA drives for the lesser used data which is how we have the EVAs configured. We also do not need any SAN-level snapshotting or replication.


  • Intermittent extremely long response times when downloading documents

    - by pap
    I have a Java web application running on Tomcat 7 with an Apache httpd 2.2 front end connected via mod_jk/AJP. One part of the application serves files (up to 4 MB in size). Now, normally this all runs very smoothly with stable, low response times. However, in rare instances (<0.1% of downloads), the download time will go beyond 1 minute. After activating the StuckThreadDetectionValve in Tomcat, I can see that the long responses seem to be stuck at org.apache.tomcat.jni.Socket.sendbb (native method), i.e. network I/O. At most, these long-running downloads take 5 minutes, which I strongly suspect is because of the default 300-second timeout in Apache 2.2 (http://httpd.apache.org/docs/2.2/mod/core.html, "TimeOut directive"). To me, this looks like network problems. The Apache timeout (if that is what is kicking in at the 5-minute mark) indicates that ACK packets are not being transmitted correctly. My questions are: what could be causing this? A browser closed at the receiving end but the socket not signaled as closed properly? Packet loss or some other network failure in transit? Where would I start troubleshooting this? We're running Tomcat and Apache on Windows Server 2008 R2 in a VMware virtualized server.
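
    For reference, the two knobs mentioned above look roughly like this; the threshold and timeout values are examples to tune while troubleshooting:

        <!-- Tomcat server.xml, inside <Host> or <Context> (Tomcat 7+) -->
        <Valve className="org.apache.catalina.valves.StuckThreadDetectionValve"
               threshold="60" />

        # Apache httpd.conf: shorten the 300 s default so stuck sends are
        # reaped (and logged) sooner while investigating
        Timeout 120

    Lowering the Apache Timeout does not fix the underlying stall, but it makes the failure window small enough to correlate against client IPs and network captures.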


  • Docking Station Sound Doesn't Work on Dell D830 with Windows 7

    - by cisellis
    I have a Dell Latitude D830 laptop running Windows 7 Enterprise x64. I connect to a docking station during the day with multiple monitors, a keyboard and a mouse. Everything runs with no problems, including most of the docking station ports (USB, monitors, etc.). However, the sound port on the docking station has not worked since the upgrade to Windows 7: even with the laptop docked, the sound always comes out of the laptop, not the headphones plugged into the docking station. Here's what I've tried: I've seen other issues like this via Google that seem to be mostly unanswered. I found one or two that referenced using the Vista x64 drivers, especially the Nvidia drivers. I do not have an Nvidia chipset, but I've reinstalled the sound drivers and that has not helped. I don't have a support contract, and considering the cost is usually high to call Dell, that's not an option. Dell's forums are pretty much a wasteland and I've found no help there. Since this is a docking station, I thought I might need to try the SATA or Intel chipset drivers from the Dell site instead; however, I'm not really sure, and I need to work on this laptop in the meantime. I can't really afford the downtime to experiment with random drivers all day in case they turn out to be incompatible (Dell still hasn't added Windows 7 to their support site as far as I can tell). Does anyone have any other ideas? Has anyone had this issue and solved it? If so, how? Thanks in advance for your help.


  • What are the frequencies of current in computers' external peripheral cables and internal buses?

    - by Tim
    From Wikipedia, three different cases of current frequency are discussed, along with the types of cables that are suitable for them: Ordinary electrical cables suffice to carry low-frequency AC, such as mains power, which reverses direction 100 to 120 times per second (cycling 50 to 60 times per second). However, they cannot be used to carry currents in the radio frequency range or higher, which reverse direction millions to billions of times per second, because the energy tends to radiate off the cable as radio waves, causing power losses. Radio frequency currents also tend to reflect from discontinuities in the cable such as connectors, and travel back down the cable toward the source. These reflections act as bottlenecks, preventing the power from reaching the destination. Transmission lines use specialized construction such as precise conductor dimensions and spacing, and impedance matching, to carry electromagnetic signals with minimal reflections and power losses. Types of transmission line include ladder line, coaxial cable, dielectric slabs, stripline, optical fiber, and waveguides. The higher the frequency, the shorter are the waves in a transmission medium. Transmission lines must be used when the frequency is high enough that the wavelength of the waves begins to approach the length of the cable used. To conduct energy at frequencies above the radio range, such as millimeter waves, infrared, and light, the waves become much smaller than the dimensions of the structures used to guide them, so transmission line techniques become inadequate and the methods of optics are used. I wonder what the frequencies are for the currents in computers' external peripheral cables, such as Ethernet and USB cables, and in computers' internal buses? Are the cables also made specially for those frequencies? Thanks!
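
    A worked instance of that last rule for the Ethernet case, assuming a velocity factor of about 0.66 for twisted pair (a typical value) and the 125 Mbaud per-pair symbol rate of 1000BASE-T:

        \lambda = \frac{v}{f}
                \approx \frac{0.66 \times 3 \times 10^{8}\ \text{m/s}}{125 \times 10^{6}\ \text{Hz}}
                \approx 1.6\ \text{m}

    That is shorter than many Ethernet runs, which is why Cat5e/Cat6 pairs are built as 100-ohm controlled-impedance transmission lines; USB 2.0 (480 Mbit/s signaling) likewise specifies 90-ohm differential pairs. So yes: these cables are engineered as transmission lines for their operating frequencies, not as ordinary wire.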


  • Best practices for thin-provisioning Linux servers (on VMware)

    - by nbr
    I have a setup of about 20 Linux machines, each with about 30-150 gigabytes of customer data. Probably the size of data will grow significantly faster on some machines than others. These are virtual machines on a VMware vSphere cluster. The disk images are stored on a SAN system. I'm trying to find a solution that would use disk space sparingly, while still allowing for easy growing of individual machines. In theory, I would just create big disks for each machine and use thin provisioning. Each disk would grow as needed. However, it seems that a 500 GB ext3 filesystem with only 50 GB of data and quite a low number of writes still easily grows the disk image to e.g. 250 GB over time. Or maybe I'm doing something wrong here? (I was surprised how little I found on the subject with Google. BTW, there's not even a thin-provisioning tag on serverfault.com.) Currently I'm planning to create big, thin-provisioned disks - but with a small LVM volume on them. For example: a 100 GB volume on a 500 GB disk. That way I could more easily grow the LVM volume and the filesystem size as needed, even online. Now for the actual question: Are there better ways to do this? (that is, to grow the data size as needed without downtime.) Possible solutions include: Using a thin-provisioning-friendly filesystem that tries to occupy the same spots over and over again, thus not growing the image size. Finding an easy method of reclaiming free space on the partition (re-thinning?). Something else? A bonus question: If I go with my current plan, would you recommend creating partitions on the disks (pvcreate /dev/sdX1 vs pvcreate /dev/sdX)? I think it's against conventions to use raw disks without partitions, but it would make it a bit easier to grow the disks, if that is ever needed. This is all just a matter of taste, right?
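
    For reference, the grow path the plan above relies on is short, and ext3 can be grown while mounted; device and volume names here are examples:

        pvcreate /dev/sdb            # or /dev/sdb1 if you prefer a partition table
        vgcreate data /dev/sdb
        lvcreate -L 100G -n vol data
        mkfs.ext3 /dev/data/vol
        # later, when more space is needed (both steps work online):
        lvextend -L +50G /dev/data/vol
        resize2fs /dev/data/vol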


  • nginx: dump HTTP requests for debugging

    - by Alexander Gladysh
    Ubuntu 10.04.2, nginx 0.7.65. I see some weird HTTP requests coming to my nginx server. To better understand what is going on, I want to dump the whole HTTP request data for such queries (i.e. dump all request headers and the body somewhere I can read them). Can I do this with nginx? Alternatively, is there some HTTP server that allows me to do this out of the box, to which I can proxy these requests by means of nginx? Update: Note that this box has a bunch of normal traffic, and I would like to avoid capturing all of it at a low level (say, with tcpdump) and filtering it out later. I think it would be much easier to filter the good traffic first in a rewrite rule (fortunately I can write one quite easily in this case), and then deal with the bogus traffic only. And I do not want to channel the bogus traffic to another box just to be able to capture it there with tcpdump. Update 2: To give a bit more detail: the bogus requests have a parameter named (say) foo in their GET query (the value of the parameter can differ). Good requests are guaranteed never to have this parameter. If I can filter by this in tcpdump or ngrep somehow — no problem, I'll use these.
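
    Given that guarantee, ngrep alone may do the job, since it applies a regex to packet payloads before anything is written out. A minimal sketch (the interface name is an example, and the match only fires when the request line fits in one packet, which it normally does):

        # Print full headers + body of requests carrying the bogus parameter
        ngrep -q -d eth0 -W byline '[?&]foo=' 'tcp dst port 80'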


  • Why does my PC successfully boot only when unplugged for more than a few minutes?

    - by philg
    I have an HP Pavilion Elite desktop computer, model HPE-490t. I like it because it didn’t cost too much, boots itself from an SSD, came with 16 GB of RAM, and has 6 CPU cores for editing video and camera RAW images. It has one behavioral quirk that I cannot explain, however. The recent power interruptions here in the Northeast got the machine into a state where it could not be restarted. It would power up for a second or two, shut down, and then power up again, never being able to get to the point of showing anything on the monitor. I unplugged it for about 10 seconds and plugged it back in. Same behavior (fails to boot). I unplugged it and walked away for an hour, then plugged it back in and it worked perfectly! I think something similar happened after installing a second hard disk drive into this machine. So the question is why does the computer behave differently depending on how long it has been unplugged? Where is energy stored that affects the machine’s ability to boot? Capacitors in the power supply? Battery on the motherboard (there is one for the clock, but that wouldn’t be exhausted by being unplugged for an hour, I don’t think)?


  • Why do manufacturers not show all hardware power usage?

    - by Drew
    I find it slightly more difficult to build a computer when I do not know how much power is needed for a component. When selecting a power supply for a computer, it is difficult to know how large of one to get. You don't want to go too large for cost reasons and circuit reasons, but you don't want to go too low and not be able to properly use every component. For instance, a graphics card might say "Minimum of a 500 Watt power supply. (Minimum recommended power supply with +12 Volt current rating of 30 Amps.)" But it really needs 360W (12V * 30A). So why don't they just say "Uses 360W max and xxxW peak"? Processors, I have noticed, are good at reporting their power usage, but aside from processors and sometimes graphics cards, power usage is not easily found. What is the power consumed by the Blu-ray / DVD drives? By the HDDs/SSDs? By the mobo? etc. Why are these questions not easily answered when building a machine?


  • Will wear induced by turning computers off in the evening be offset by energy savings?

    - by sharptooth
    I'm asking this here because this is primarily a huge-office scenario, and administrators will more likely have the answer I'm looking for. Employees' desktop computers can either be left turned on for the whole night or switched off in the evening and turned back on in the morning. The latter will surely save energy. At the same time, turning equipment on and off is quite harmful to it - hardware often breaks specifically when being turned on. Both energy and hardware replacements cost money. With energy it's quite obvious - you pay every month according to what your power meter shows. With hardware replacements it's worse - you need qualified staff to quickly diagnose the problems, and once something breaks the affected employee will have to wait for some time while his computer is fixed or replaced and the data is recovered. So the company has to choose between saving money on energy and saving money on computer maintenance and lost hours. Such decisions must be well thought out. Is there any detailed study of how turning computers off each evening affects their lifetime?


  • Setting up a low-cost image storage server with a 24x SSD array to get high IOPS?

    - by Nenad
    I want to build - let's name it - a low-cost Ra*san which would host the images for our social site (many millions); we have 5 sizes of every photo, at 3 KB, 7 KB, 15 KB, 25 KB and 80 KB per image. My idea is to build a server with 24x consumer 240 GB SSDs in RAID 6, which will give me some 5 TB of disk space for the photo storage. To have HA I can add a 2nd one and use DRBD. I'm looking to get above 150,000 IOPS (4K random reads). As we mostly have read-only access and rarely delete photos, I think I'll go with consumer MLC SSDs. I read many endurance reviews and don't see a problem there as long as we don't rewrite the cells. What do you think about my idea? - I'm not sure between RAID 6 and RAID 10 (more IOPS, SSD cost). - Is ext4 OK for the filesystem? - Would you use 1 or 2 RAID controllers, with an expander backplane? If anyone has realized something similar, I would be happy to get real-world numbers. UPDATE: I have bought 12 (plus some spares) OCZ Talos 480 GB SAS SSD drives; they will be placed in a 12-bay DAS and attached to a PERC H800 controller (1 GB NV cache, manufactured by LSI, with FastPath). I plan to set up RAID 50 with ext4. If someone is wondering about benchmarks, let me know what you would like to see.
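
    As a rough sanity check on the 150,000 IOPS target, assuming ~20,000 4K random-read IOPS per consumer SATA SSD of that generation (an assumed figure, not a measurement):

        24 drives x 20,000 IOPS/drive = 480,000 IOPS theoretical read ceiling

    Random reads carry no RAID 6 penalty (the usual ~6-I/O penalty applies to small writes), so on a read-mostly workload the RAID controller and its queue depths, not the flash, will likely be the first bottleneck - which argues for benchmarking one controller before deciding on two.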


  • Block SMTP sessions whose sender domain doesn't itself accept SMTP connections

    - by bignose
    I'm administrating a mail service for a small business. Their mail host's internet connection is an ADSL service with a permanent IP address. Unfortunately, many misconfigured mail systems will happily deliver to this host, but, when the host attempts to send mail back (e.g. a bounce notice, or a normal response from someone), the declared sender's domain has an MX which refuses to receive connections from this host. That misconfiguration makes their system a one-way mail sender, which is a problem. How can I configure Postfix on this customer's mail host to refuse SMTP sessions that declare a sender domain which itself refuses SMTP from this host? That is, if the SMTP client declares a domain that we can't make SMTP connections back to, then there's not much point accepting the incoming connection in the first place. Note that I'm not, as some commenters have assumed, talking about checking whether the SMTP client will receive messages. The check I want is whether the declared sender's domain (regardless of who the current SMTP client is) will accept SMTP connections from here. In other words: when we get around to sending a message back, we'll need the sender's domain to accept SMTP connections; I want to do that check before accepting the incoming session. I'm imagining a late check (after the low-cost checks to winnow most of the rubbish connections) that keeps the client on the other end while it attempts an SMTP client connection back to the declared domain of the sender. If that connection is rejected, the incoming one is also rejected. I'm also open to other suggestions for how this problem might be addressed (short of not using this mail host at all, which isn't an option).
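
    For what it's worth, Postfix ships a feature aimed at almost exactly this check: sender address verification (reject_unverified_sender, Postfix 2.1+), which probes the declared sender's MX with a MAIL FROM/RCPT TO callout before accepting the session, and caches the verdict. A hedged main.cf sketch:

        smtpd_sender_restrictions =
            permit_mynetworks,
            reject_unverified_sender
        unverified_sender_reject_code = 550
        # cache results so each sender domain is probed only occasionally
        # (Postfix 2.5+ layout; adjust the path on older releases)
        address_verify_map = btree:$data_directory/verify_cache

    Note that it verifies slightly more than asked - that the specific sender address is accepted, not merely that the domain takes connections - and some sites treat callouts as abuse, so the cache and probe-rate settings matter.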


  • How to increase the speed between two external hard drives on my laptop?

    - by Roman
    Hello, I own a Sony Vaio Z laptop with two external USB ports. It's quite new and has USB 2.0 support. I'm using Vista x64 on it. I also have two external USB hard drives, a 500 GB Iomega and a 1 TB WD. Each hard drive has USB 2.0 support. I connect the two devices to my laptop and try to copy data from one hard drive to the other. But it takes a lot of time! The speed is about 15 megabytes per second, and I have to wait toooooo long to copy all the information from one hard drive to another. When I copy information from my internal (SSD) hard drive, it works fine for both external drives: the speed is very high, and it shows me something about 100 megabytes per second. That makes me feel that USB 2.0 is OK on both drives. But when I'm trying to copy from one external drive to the other external one, I still get very low speed. I checked out Device Manager and here are the settings I have: (sorry, can't upload an image because of my rating; check this URL: http://picbite.com/image/122073daljo/ ) I think it's because my two external drives use the same USB 2.0 controller. Is there any way to make it work faster? Is it possible to move one of my USB ports to another USB 2.0 controller? Or is there any software which can help me automate copying all the files through my internal drive? I have only about 3 gigabytes of free space on the internal drive, so it's quite difficult to manually move every file from one external drive to the internal one and then from there to the other external drive.


  • Aging SBS needs updates / Thoughts for one-off, off-line complete backup?

    - by tcv
    Hey guys, so we checked out the status of an SBS 2003 at one of our more recent, spend-averse clients and found it to be woefully out of date. Scary out of date. I think it's running IE2. OK, maybe not that far back. Anyway, I was thinking that I could use some kind of disk-imaging software to image the four IDE drives within and, in the event the server gets some kind of Update-Induced Indigestion, I could completely restore. Usually my go-to software for this is Acronis, but my client will likely balk at a $500 price tag for a one-off backup with their server product. I had thought we could use the boot media from, say, Backup & Recovery 10 to take an off-line image of all the drives. According to their chat tech support, however, it will not work. I pressed for the technical reasons and they said they'd email me. They haven't emailed me. They still might. This server is running SBS 2003, pre-SP2. It's got four IDE disks. One is a basic disk, which contains the O/S. The others are bound as a dynamic disk. You might ask: "Don't they already have backup software?" They do! Backup Exec, a very low-end version that won't even do VSS. I don't know much about BE, but it seems to me that if the worst were to happen, it would mean building a new server O/S, installing BE (if the media is available), then restoring. Would it even work? I can take the system down for hours to do a backup, and my goal here is a pretty dead-simple restore if the worst happens. Any and all suggestions are exciting. m
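
    If hours of downtime are acceptable, one zero-cost route is a raw block-level image from a Linux live CD; the dynamic-disk binding survives because the copy sits below the volume layer. A hedged sketch - the share name is an example, and device names (hdX vs. sdX) depend on the live CD's kernel:

        mount -t cifs //backuphost/images /mnt -o user=backup
        for d in sda sdb sdc sdd; do          # the four IDE disks
            dd if=/dev/$d bs=1M conv=noerror,sync | gzip -c > /mnt/sbs-$d.img.gz
        done
        # restore later with:  gunzip -c sbs-sda.img.gz | dd of=/dev/sda bs=1M

    Restoring all four images to same-size (or larger) disks puts the box back exactly as it was, which matches the "dead-simple restore" goal.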


  • "Network Error - 53" while trying to mount NFS share in Windows Server 2008 client

    - by Mike B
    CentOS | Windows 2008. I've got a CentOS 5.5 server running nfsd. On the Windows side, I'm running Windows Server 2008 R2 Enterprise. I have the "File Services" server role enabled and both Client for NFS and Server for NFS are on. I'm able to successfully connect/mount to the CentOS NFS share from other Linux systems but am experiencing errors connecting to it from Windows. When I try to connect, I get the following: C:\Users\fooadmin>mount -o anon 10.10.10.10:/share/ z: Network Error - 53 Type 'NET HELPMSG 53' for more information. (IP and share name have been changed to protect the innocent :-) ) Additional information: I've verified low-level network connectivity between the Windows client and the NFS server with telnet (to TCP port 2049 on the NFS server), so I know the port is open. I've further confirmed that the inbound and outbound firewall rules are present and enabled. I came across a Microsoft tech note that suggested changing the "Provider Order" so "NFS Network" is above other items like Microsoft Windows Network. I changed this and restarted the NFS client - no luck. I've confirmed that the share folder on the NFS server is readable/writable by all (777). I've tried other variations of the mount command like: mount 10.10.10.10:/share/ z: and mount 10.10.10.10:/share z: and mount -o anon mtype=hard \\10.10.10.10:/share * No luck. As per the command output, I tried typing NET HELPMSG 53, but that doesn't tell me much - just "The network path was not found". I'm lost on how to proceed with troubleshooting. Any ideas?
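
    Worth checking before anything else: mounting involves more than nfsd on 2049, so a reachable 2049 does not prove much. A short sketch from the Windows side (showmount.exe and rpcinfo.exe install alongside Services for NFS):

        :: Does the export list come back?
        showmount -e 10.10.10.10
        :: Are portmapper (111) and mountd registered and reachable?
        rpcinfo -p 10.10.10.10

    On the CentOS side, mountd sits on a dynamic port unless pinned in /etc/sysconfig/nfs, so a firewall that passes only 111 and 2049 can produce exactly this kind of error.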


  • how to block spam email using Microsoft Outlook 2011 (Mac)?

    - by tim8691
    I'm using Microsoft Outlook 2011 for Mac and I'm getting so much spam I'm not sure how to control it. In the past, I always applied "Block Sender" and "Mark as Junk" to any spam email messages I received. This doesn't seem to be enough nowadays. Then I started using Tools > Rules to create rules based on subject, but the same spammer keeps changing subject lines, so this isn't working. I've also been tracking the IP addresses; they seem to change with each email as well. Is there any key information in the email I can use to build a rule that successfully places these spam emails in the junk folder? I'm using the "Low" level of junk email protection; the next higher level, "High", says it may eliminate valid emails, so I prefer not to use it. There are maybe one or two spammers sending me emails, but the volume is very high now. I'm getting variations of the following Facebook email spam: "Hi, Here's some activity you have missed. No matter how far away you are from friends and family, we can help you stay connected. Other people have asked to be your friend. Accept this invitation to see your previous friend requests." Some variations on the subject line they've used include: Account Info Change, Account Sender Mail, Pending ticket notification, Pending ticket status, Support Center, Support med center, Pending Notification, and Reminder: Pending Notification. How do people address this? Can it be done within Outlook, or is it better to get third-party commercial software to plug in or otherwise manage it? If so, why would the third party be better than Outlook's internal tools (e.g. what does it look for in the incoming email that Outlook doesn't look at)?

