Search Results

Search found 16404 results on 657 pages for 'easy transfer'.

  • iperf max udp multicast performance peaking at 10Mbit/s?

    - by Tom Frey
    I'm trying to test UDP multicast throughput via iperf, but it seems like it's not sending more than 10 Mbit/s from my dev machine:

        C:\> iperf -c 224.0.166.111 -u -T 1 -t 100 -i 1 -b 1000000000
        ------------------------------------------------------------
        Client connecting to 224.0.166.111, UDP port 5001
        Sending 1470 byte datagrams
        Setting multicast TTL to 1
        UDP buffer size: 8.00 KByte (default)
        ------------------------------------------------------------
        [156] local 192.168.1.99 port 49693 connected with 224.0.166.111 port 5001
        [ ID] Interval       Transfer     Bandwidth
        [156]  0.0- 1.0 sec  1.22 MBytes  10.2 Mbits/sec
        [156]  1.0- 2.0 sec  1.14 MBytes  9.57 Mbits/sec
        [156]  2.0- 3.0 sec  1.14 MBytes  9.55 Mbits/sec
        [156]  3.0- 4.0 sec  1.14 MBytes  9.56 Mbits/sec
        [156]  4.0- 5.0 sec  1.14 MBytes  9.56 Mbits/sec
        [156]  5.0- 6.0 sec  1.15 MBytes  9.62 Mbits/sec
        [156]  6.0- 7.0 sec  1.14 MBytes  9.53 Mbits/sec

    When I run it on another server, I get ~80 Mbit/s, which is quite a bit better but still nowhere near the 1 Gbps limit I should be getting:

        C:\> iperf -c 224.0.166.111 -u -T 1 -t 100 -i 1 -b 1000000000
        ------------------------------------------------------------
        Client connecting to 224.0.166.111, UDP port 5001
        Sending 1470 byte datagrams
        Setting multicast TTL to 1
        UDP buffer size: 8.00 KByte (default)
        ------------------------------------------------------------
        [180] local 10.0.101.102 port 51559 connected with 224.0.166.111 port 5001
        [ ID] Interval       Transfer     Bandwidth
        [180]  0.0- 1.0 sec  8.60 MBytes  72.1 Mbits/sec
        [180]  1.0- 2.0 sec  8.73 MBytes  73.2 Mbits/sec
        [180]  2.0- 3.0 sec  8.76 MBytes  73.5 Mbits/sec
        [180]  3.0- 4.0 sec  9.58 MBytes  80.3 Mbits/sec
        [180]  4.0- 5.0 sec  9.95 MBytes  83.4 Mbits/sec
        [180]  5.0- 6.0 sec  10.5 MBytes  87.9 Mbits/sec
        [180]  6.0- 7.0 sec  10.9 MBytes  91.1 Mbits/sec
        [180]  7.0- 8.0 sec  11.2 MBytes  94.0 Mbits/sec

    Does anybody have any idea why this is not achieving close to the link limit (1 Gbps)? Thanks, Tom
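
    The 8 KByte default UDP buffer shown in both runs is one plausible ceiling. iperf's -w option raises the socket buffer size; a sketch, with 1M as an untested starting value rather than a known-good setting:

        C:\> iperf -c 224.0.166.111 -u -T 1 -t 100 -i 1 -b 1000000000 -w 1M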

  • BIND zones and named files

    - by preethika
    I've installed BIND on my Windows Server 2003 machine. I've configured the named file in C:\named\etc\named.conf as:

        options {
            directory "c:\named\zones";
            allow-transfer { none; };
            recursion no;
        };
        zone "tisdns.com" IN {
            type master;
            file "db.tisdns.com.txt";
            allow-transfer { none; };
        };

    My zone file is configured in C:\named\zones\db.tisdns.com.txt as:

        $TTL 6h
        @    IN SOA ns1.tisdns.com. hostmaster.tisdns.co… (
                 2010010901
                 10800
                 3600
                 604800
                 86400 )
        @    NS ns1.tisdns.com.
        ns1  IN A 192.168.0.17
        mug  IN A 192.168.0.103

    followed by:

        key "rndc-key" {
            algorithm hmac-md5;
            secret "M0oW24WFQZhMu9wTq8qepw==";
        };
        controls {
            inet 127.0.0.1 port 53 allow { 127.0.0.1; } keys { "rndc-key"; };
        };

    In the above I've named the domain "tisdns". I want to create a new domain name in a different zone file. How can I create it?
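
    A minimal sketch of what that usually involves; the domain newdomain.com and its file name below are placeholders, not values from this setup. Add a second zone stanza to named.conf:

        zone "newdomain.com" IN {
            type master;
            file "db.newdomain.com.txt";
            allow-transfer { none; };
        };

    and create a matching C:\named\zones\db.newdomain.com.txt:

        $TTL 6h
        @    IN SOA ns1.newdomain.com. hostmaster.newdomain.com. (
                 2010010901 10800 3600 604800 86400 )
        @    NS ns1.newdomain.com.
        ns1  IN A 192.168.0.17

    then restart the BIND service (or reload with rndc) so the new zone is picked up.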

  • AWS elastic load balancer basic issues

    - by Jones
    I have an array of EC2 t1.micro instances behind a load balancer, and each node can manage ~100 concurrent users before it starts to get wonky. I would think that with 2 such instances my network could manage 200 concurrent users... apparently not. When I really slam the server (blitz.io) with a full 275 concurrent users, it behaves the same as if there were just one node: it goes from a 400 ms response time to 1.6 seconds (which is expected for a single t1.micro, but not for 6). So the question is: am I simply not doing something right, or is ELB effectively worthless? Anyone have some wisdom on this?

    ab logs, load balancer (3x m1.medium):

        Document Path:          /ping/index.html
        Document Length:        185 bytes
        Concurrency Level:      100
        Time taken for tests:   11.668 seconds
        Complete requests:      50000
        Failed requests:        0
        Write errors:           0
        Non-2xx responses:      50001
        Total transferred:      19850397 bytes
        HTML transferred:       9250185 bytes
        Requests per second:    4285.10 [#/sec] (mean)
        Time per request:       23.337 [ms] (mean)
        Time per request:       0.233 [ms] (mean, across all concurrent requests)
        Transfer rate:          1661.35 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        1    2    4.3      2      63
        Processing:     2   21   15.1     19     302
        Waiting:        2   21   15.0     19     261
        Total:          3   23   15.7     21     304

    Single instance (1x m1.medium, direct connection):

        Document Path:          /ping/index.html
        Document Length:        185 bytes
        Concurrency Level:      100
        Time taken for tests:   9.597 seconds
        Complete requests:      50000
        Failed requests:        0
        Write errors:           0
        Non-2xx responses:      50001
        Total transferred:      19850397 bytes
        HTML transferred:       9250185 bytes
        Requests per second:    5210.19 [#/sec] (mean)
        Time per request:       19.193 [ms] (mean)
        Time per request:       0.192 [ms] (mean, across all concurrent requests)
        Transfer rate:          2020.01 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        1    9  128.9      3    3010
        Processing:     1   10    8.7      9     141
        Waiting:        1    9    8.7      8     140
        Total:          2   19  129.0     12    3020
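
    For reference, an ApacheBench invocation matching the figures above would look roughly like this (the hostname is a placeholder, not the actual ELB name):

        ab -n 50000 -c 100 http://my-elb-1234.us-east-1.elb.amazonaws.com/ping/index.html

    One detail worth noticing in both runs: Non-2xx responses roughly equals the request count, so these benchmarks were timing error responses rather than the page itself.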

  • TFTP Timing Out on Ubuntu VM

    - by valsidalv
    I'm running a Windows 7 PC with VMware installed, which hosts my Ubuntu VM (10.04 Lucid Lynx). I recently installed a DHCP server and TFTP (xinetd tftpd) using these instructions. I've mapped a network drive so that Windows has access to all the files in my VM through a 192.x.x.x IP address. I'm trying to flash custom firmware onto a router. The router has its own built-in TFTP utility that will download the image; it successfully manages to do everything, but it is slow because it writes the image to flash memory. There is another method that is much quicker because it writes to RAM directly, but it must use the TFTP server in Ubuntu. The issue I'm facing is that the Ubuntu TFTP transfer seems to be timing out: the transfer starts but never gets past ~60%. Here's my /etc/xinetd.d/tftp file (similar to a known working config):

        service tftp
        {
            protocol    = udp
            port        = 69
            socket_type = dgram
            wait        = yes
            user        = nobody
            server      = /usr/sbin/in.tftpd
            server_args = -s /home/user/tftp/
            disable     = no
            cps         = 300 2
            per_source  = 60
        }

    I've done some searching but can't find any parameters in this file to control the timeout or the number of retries. The last two entries (cps, per_source) are completely alien to me (can anyone explain them?). I have a few possible solutions, but the easiest would be to get this TFTP server working. Can anyone help, either with a timeout configuration or a recommendation for a different TFTP server? Thanks!
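
    For what it's worth, cps and per_source are generic xinetd limits rather than tftpd options: cps = 300 2 caps incoming connections at 300 per second and disables the service for 2 seconds whenever that rate is exceeded, and per_source = 60 caps the number of service instances a single source IP may have. Retransmission timing belongs to the TFTP daemon itself; if the in.tftpd here is tftpd-hpa, its man page documents a --retransmit (-T) option taking a timeout in microseconds. A sketch, assuming tftpd-hpa (the 1-second value is an arbitrary starting point):

        server_args = -s /home/user/tftp/ --retransmit 1000000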

  • Small TCP Window on WAN between 2 Locations

    - by Brent
    Site A: Denver datacenter, 60 Mbps. Site B: Chicago, 100 Mbps.

    ICMP pings:

        Packets: Sent = 176, Received = 176, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
        Minimum = 74ms, Maximum = 94ms, Average = 75ms

    File transfers between the sites never go past ~7 Mbps, while Windows Update downloads run at 60 Mbps+. Site to site there is an IPsec VPN using two Cisco 5520s, with CPU at 3-4% and lots of memory to spare. The latency between the two sites is very acceptable, so I can't see why transfers between them perform so slowly. I have found that any type of transfer (FTP, HTTP, Windows file shares) never goes above ~7 Mbps. When the WAN was first set up, I was getting transfers at 50-60 Mbps, which is what is expected given Site A's 60 Mbps WAN connection. Then, a few days later, I could not get anything faster than ~7 Mbps. Is an upstream router between Denver and Chicago causing this? I want to take the blame away from our setup, as downloads from Windows Update go blazing fast, and for the first few days after the site-to-site VPN came up I was transferring VM images at 50-60 Mbps. Our stack: HP P2000 MSA - HP C7000 Chassis - HP Flex-10 - Cisco Gigabit switch - Cisco ASA - WAN.
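
    The title's hunch checks out arithmetically: sustained TCP throughput is bounded by window size divided by round-trip time, and a default 64 KB window over this 75 ms path gives almost exactly the observed ceiling. A sketch of the arithmetic:

        # throughput <= window / RTT
        # 65535 bytes / 0.075 s ~= 874 KB/s ~= 7 Mbit/s
        echo "65535 / 0.075 * 8 / 1000000" | bc -l    # ~= 6.99 (Mbit/s)

    If something on the path (the VPN, or a device between the sites) started stripping the TCP window-scaling option, transfers would drop to this 64 KB-window ceiling while unaffected paths, like the one to Windows Update, stayed fast; that is one hypothesis consistent with "fast at first, ~7 Mbps a few days later".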

  • Extract and view Outlook contacts attachment sent to Gmail

    - by matt wilkie
    A friend forwarded a contact list to my Gmail account from Outlook (2007 or 2010, not sure which). I can see there is an attachment in Gmail, but when I save it to my local drive it's just a plain text file containing the text "This attachment is a MAPI 1.0 embedded message and is not supported by this mail system." If I use Gmail's "show original message", it contains in part:

        This is a multipart message in MIME format.

        ------=_NextPart_000_0016_01CC6656.CE12F030
        Content-Type: text/plain; charset="us-ascii"
        Content-Transfer-Encoding: 7bit

        ------=_NextPart_000_0016_01CC6656.CE12F030
        Content-Type: application/ms-tnef; name="winmail.dat"
        Content-Transfer-Encoding: base64
        Content-Disposition: attachment; filename="winmail.dat"

        eJ8+Ih0VAQaQCAAEAAAAAAABAAEAAQeQBgAIAAAA5AQAAAAAAADoAAEIgAcAGAAAAElQTS5NaWNy
        b3NvZnQgTWFpbC5Ob3RlADEIAQgABQAEAAAAAAAAAAAAAQkABAACAAAAAAAAAAEDkAYASAgAACgA
        --8<---snip---8<--
        GUC/9NKH95rABgMA/g8HAAAAAwANNP0/pQ4DAA80/T+lDvAm
        ------=_NextPart_000_0016_01CC6656.CE12F030--

    How do I save the attached winmail.dat properly, and then open winmail.dat and extract the contact list? I'm running Windows 7 x64, but have access to an Ubuntu Linux VMware appliance if needed. I have Outlook 2010, but can't use it to connect directly to Gmail, as POP3 and IMAP are blocked by the corporate firewall.
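
    Since an Ubuntu VM is available, one workable path is to save the raw message (the "show original message" content) to a file and let standard tools do the decoding. A sketch, assuming the raw message is saved as message.eml; mpack and tnef are real Ubuntu packages, but the exact output files depend on what Outlook packed into the TNEF container:

        sudo apt-get install mpack tnef
        munpack message.eml   # decodes the MIME parts, writing winmail.dat to disk
        tnef winmail.dat      # unpacks the TNEF container into its real attachments

    The contact list should come out as ordinary files (often .vcf or .msg), which can then be imported or inspected.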

  • Content not being compressed even though I'm using zlib in php.ini

    - by Tola Odejayi
    I've edited my php.ini file so that it has these two entries:

        zlib.output_compression = On
        zlib.output_compression_level = 4

    However, after restarting Apache, when I request PHP pages the headers returned in the response indicate that my server is still NOT serving compressed pages (here are selected headers as viewed using Chrome's Network feature):

        Cache-Control: no-cache, must-revalidate, max-age=0
        Connection: Keep-Alive
        Content-Type: text/html; charset=UTF-8
        Date: Mon, 17 Sep 2012 23:46:13 GMT
        Expires: Wed, 11 Jan 1984 05:00:00 GMT
        Last-Modified: Mon, 17 Sep 2012 23:46:13 GMT
        Pragma: no-cache
        Proxy-Connection: Keep-Alive
        Server: Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8e-fips-rhel5 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 PHP/5.2.17
        Transfer-Encoding: chunked
        Via: 1.1 XXX-PRXY-07
        X-Powered-By: PHP/5.2.17

    What might I be doing wrong? Is there any other setting that I need to change?

    EDIT: Here is another set of headers returned to another computer:

        Cache-Control: no-cache, must-revalidate, max-age=0
        Connection: close
        Content-Type: text/html; charset=UTF-8
        Date: Thu, 20 Sep 2012 09:45:26 GMT
        Expires: Wed, 11 Jan 1984 05:00:00 GMT
        Last-Modified: Thu, 20 Sep 2012 09:45:26 GMT
        Pragma: no-cache
        Server: Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8e-fips-rhel5 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 PHP/5.2.17
        Transfer-Encoding: chunked
        Vary: Cookie
        X-Powered-By: PHP/5.2.17
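
    Two things stand out in those headers: zlib.output_compression only kicks in when the client sends Accept-Encoding: gzip, and the Via: 1.1 XXX-PRXY-07 line shows a proxy in the path that may be stripping that request header. A quick hedged test that bypasses the browser (the URL is a placeholder):

        # Request the page with and without Accept-Encoding and compare:
        curl -s -D - -o /dev/null -H 'Accept-Encoding: gzip' http://example.com/page.php | grep -i '^content-encoding'
        curl -s -D - -o /dev/null http://example.com/page.php | grep -i '^content-encoding'

    If the first command shows Content-Encoding: gzip when run on the web server itself but not when run through the proxy, the proxy is the culprit rather than the php.ini settings.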

  • Very slow connection to Xserve via AFP or SMB

    - by Mhoffman13
    Help! File transfer and connection speeds to our Xserve are painfully slow from newly purchased iMacs. The Xserve is only used as a file server and runs 10.4.11. The problem seems to happen only on brand-new iMacs running 10.6.3: when connected over either AFP or SMB, copying files is many times slower than usual. Other machines on the network running either 10.4 or 10.5 have normal connection speeds. To try to rule out OS incompatibility, I connected a new iMac running 10.6 to another computer running 10.4 over the network; the file transfer speed was as fast as normal. So the problem seems to lie with the Xserve (maybe). The AFP logs (both access and error) don't show anything unusual. One thing that did look different: when the iMac was connected to the Xserve, the user had its ID listed as its IP address, while the other connected machines had the ID broadcasthost. I also noticed that when connected from the new iMac I can only see one of the mirrors; when any other computer connects, both mirrors are shown. I tried a restart of the Xserve, but the problem persists. Thanks in advance for any advice.
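
    One frequently suggested experiment for slow AFP/SMB copies from 10.6 clients to older servers is toggling the client's delayed-ACK behavior, which interacts badly with some server TCP stacks. A sketch to try on a single iMac; this is a diagnostic test, not a confirmed fix for this particular setup:

        # On the 10.6 iMac; takes effect immediately and reverts on reboot
        sudo sysctl -w net.inet.tcp.delayed_ack=0

    If copies speed up, the value can be made permanent in /etc/sysctl.conf; if not, it points the investigation back at the Xserve side.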

  • DNS Zone file and virtual host question (repost because my question got moved and now I can't comment on the original for some reason)

    - by Jake
    Sorry for the repost: the original question got moved here from Stack Overflow, and for some reason I can no longer comment or respond to answers on it. I'm trying to set up a virtual host for redmine.SITENAME.com. I've edited the httpd.conf file and now I'm trying to edit my DNS settings, but I'm not sure exactly what to do. Here's a snippet of what's already in the named.conf file (the file was made by someone else, who is unreachable):

        zone "SITENAME.com" {
            type master;
            file "SITENAME.com";
            allow-transfer {
                ip.address.here.00;
                common-allow-transfer;
            };
        };

    I figure if I want to get redmine.SITENAME.com working I need to copy that entry and just replace SITENAME.com with redmine.SITENAME.com, but will that work? I was under the impression I needed a .db file, but I don't see any reference to one in the current named.conf file. I also don't see any .db files, or files named SITENAME, in named.conf's directory. Any ideas where these elusive pre-existing db files could be?
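
    For what it's worth, a subdomain normally doesn't need its own zone: an extra record inside the existing SITENAME.com zone data file (the file "SITENAME.com" named above, which lives in the directory set by the directory option in named.conf) is enough, and that file is the ".db file" in everything but extension. A sketch; the address is a placeholder:

        ; inside the SITENAME.com zone data file
        redmine    IN  A      192.0.2.10
        ; or, to point at the same host as the main site:
        ; redmine  IN  CNAME  SITENAME.com.

    Remember to increment the zone's serial number and reload named afterwards.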

  • Can a website company that builds 4-5 websites a year afford dedicated hosting?

    - by Petras
    We manage about 30 websites that use shared ASP.NET/SQL Server web hosting. These are typical small/medium business websites and they perform fine in this environment. Recently I was looking at VPS hosting in this thread: http://serverfault.com/questions/128329/how-do-you-host-multiple-public-facing-websites-on-a-vps After contacting a provider from one of the replies, I was told that VPS hosting is not recommended for 30 sites, even if they are small; the resource requirements might be too great even for a VPS, so I should turn to dedicated hosting. The lowest-cost dedicated hosting is $219 per month (see http://www.serverintellect.com/dedicated/pentiumdservers.aspx), but this is only for a single processor, which seems too light for a machine running both IIS and SQL. In our office all the developers work on quad cores, so I assume I'd really need the quad processor; however, that starts at $599 monthly. Now, I won't be able to transfer all of our 30 sites to this machine; I'd only be able to transfer, say, 5 or 6. However, moving forward, I'd be able to host all future sites on this machine, which amounts to 4-5 per year. Let's look at the economics. Shared hosting costs are typically $16.95 monthly (see http://www.crystaltech.com/dotnet.aspx). So here's the dilemma:

        First month costs:   $599
        First month revenue: 6 x $16.95 = $101.70
        Loss in first month: $497.30

        First year costs:   $599 x 12 = $7,188
        First year revenue: 6 x $16.95 x 12 + 5 x $16.95 x 6 (averaged) = $1,728.90
        Loss in first year: $5,459.10

    Clearly it is going to take years for this server to pay for itself; it just doesn't seem economical! Am I missing something here, or is dedicated simply not the way to go given the number of sites we build?

  • Copy-paste speed very slow for a large number of tiny files on Windows but not on Linux

    - by Arno2501
    I've got a folder which contains 15,000 tiny images (around 400 bytes each). If I copy-paste this folder on my laptop (Windows 7, latest-gen i7, superfast SSD), it takes about 30 seconds (yes, for 7 megs!); the average transfer rate is 400 KBytes/second, which is very slow given that my usual transfer rate is more like hundreds of MBytes per second. I get the same problem on my servers (Windows 2003, 2008/R2) and on every Windows box that I could get my hands on. On the other hand, if I do the same on a Linux box (Debian-based, ext3 filesystem, running on the same SAN as all the Windows servers I've tested), it's nearly instantaneous! I'm pretty sure the size/number of the files may stress one filesystem more than another, but such a difference!? Why is it so slow on the Windows boxes (more than 30 sec for 7 MB) and so fast on the Linux ones (a second or so)? (And no, this was not a hardlink that I'd created; it was a true copy.) Is this normal behaviour or something unusual?

  • How to download large files when the download size is restricted?

    - by Rahul
    In my office, the network admin has restricted downloads to a maximum size of 1.8 MB for any file. This applies only to subordinates' accounts; there are no restrictions on my manager's PC. Is there any way to download files on my PC by using my manager's IP address? I tried using his IP on my PC, but had the same problem. Earlier I was given access to our Linux server from my PC using PuTTY; I used to download large files onto the server and then transfer them from the server to my machine using FireFTP, which worked perfectly fine. But now I don't have any access to the server, so can I download large files using FireFTP from my own PC? I'm using a Windows XP machine. Please suggest a solution by any possible combination. Thanks.
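
    If the limit is enforced per file rather than per connection, and the server supports HTTP range requests, one workaround is to fetch a large file in pieces smaller than 1.8 MB and stitch them together. A sketch using curl's -r/--range option (the URL and offsets are placeholders, and this assumes the restriction doesn't inspect Range headers):

        curl -r 0-1699999       -o part1 http://example.com/big.zip
        curl -r 1700000-3399999 -o part2 http://example.com/big.zip
        curl -r 3400000-        -o part3 http://example.com/big.zip
        cat part1 part2 part3 > big.zip

    Download managers that split files into parallel segments work on the same principle.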

  • Virtualize SBS 2003 - P2V vs migrating to new VM

    - by jlehtinen
    I need to virtualize an SBS 2003 server in my work environment and could use some tips on the best way to proceed.

    Background: The SBS 2003 server is the primary DC for the domain and also hosts FTP, RRAS (VPN), DNS, and file shares. Exchange is NOT used, and neither is SQL Server; DHCP is done via a firewall appliance. I have added a Server 2003 VM to the domain and promoted it to the DC role, and AD/DNS is replicating to it correctly. This was mainly done to provide fault tolerance for the domain; I was not intending to make this VM the primary DC. I've already asked about buying upgraded licensing for Server 2008/2012 but was refused due to cost.

    Options: I see (at least) two routes I could take. From what I've read, option 2 is the "preferred" method, but there are a few steps where I'm not clear on what to expect.

    Option 1: P2V the primary DC
        1. Power off the primary DC.
        2. Power off the secondary DC (to prevent USN rollback in case the P2V has issues).
        3. P2V (cold clone) the primary DC.
        4. Boot the new PDC VM.
        5. Allow the new hardware to be detected.
        6. Remove the old NIC hardware from Device Manager.
        7. Assign the old IPs to the new virtual NICs.
        8. Reboot the PDC VM; confirm connectivity and no major issues.
        9. Power on the secondary DC; confirm replication.

    Option 2: Create a new VM, transfer roles, remove the original DC from the domain
        1. Create a new VM and install SBS 2003. (Do I need the original SBS install discs for this? The MS migration doc mentions them.)
        2. Add the VM to the domain and promote it to the DC role. (Does this start the 7-day timer during which two SBS servers can coexist in the same domain?)
        3. Set up RRAS on the new VM.
        4. Set up IIS/FTP on the new VM.
        5. Move the file shares to the new VM.
        6. Transfer the FSMO roles to the new VM.
        7. dcpromo the original primary DC out of the domain.

  • Does migrating 2 domain controllers between 2 datacentres require both virtual machines to be shut down at the same time?

    - by Imagineer
    I was attempting to migrate 2 virtual machines that are domain controllers between 2 datacentres running ESX 3.5 and ESX 4.1. I was advised to shut down both domain controllers at the same time during the migration process, to avoid USN rollback and other replication issues. These are the steps I was planning to perform:

        1. Shut down both DCs.
        2. Copy both VMs' files across to the new datacentre using Veeam FastSCP (connecting to both vCenters through IP address instead of hostname).
        3. Power them up at the new datacentre.
        4. Configure the network interface/DNS/DHCP for both DCs in the new datacentre.

    I chose Veeam FastSCP over VMware Standalone Converter because it copies rather than converts. Someone also suggested that I use a backup-and-restore app like Veeam Backup & Replication. It sounds like a simple job, but after shutting down both DCs, the transfer rate using FastSCP was so slow it registered only 1 KB/s, as opposed to the normal 1 MB/s (or more). When that transfer attempt failed, I tried to cold-clone both DCs, which resulted in both ESX hosts getting disconnected. I tried troubleshooting by referring to this: VMware KB - Diagnosing an ESX Server that is Disconnected or Not Responding in VirtualCenter. It seems that DNS being down was the cause of all the unusual occurrences; the moment I powered up the DCs via the VMware console command, the ESX hosts were able to connect to vCenter again. How can I avoid such a pitfall again? Am I doing it correctly? Any help would be greatly appreciated! Thank you.

  • SQL Server 2005 to 2008 DB attach help please!

    - by Brandon
    I have SQL Server 2005 Standard on my personal machine, where I created a very big DB, about 21 GB. I made a backup and transferred the .bak file via an FTP program to my dedicated server, which runs SQL Server 2008 Enterprise Edition. I tried to restore the transferred .bak file but got an error; I posted the error on here and was told the database is corrupt. How? I don't know; the connection was not interrupted during the FTP transfer, and the DB works on my own machine. So then I detached the DB on my own machine and transferred the .mdf and .ldf files to my dedicated server, again through FTP and again without interruptions. Now I try to attach the DB and get this error:

        The header for file 'DB.mdf' is not a valid database file header. The FILE SIZE property is incorrect. (Microsoft SQL Server, Error: 5172)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.00.1442&EvtSrc=MSSQLServer&EvtID=5172&LinkId=20476

    I already used 21 GB of transfer on the .bak file, and another 21 on the .mdf and .ldf files. Please tell me there's a solution. The DB detaches and attaches fine on my machine in SQL Server 2005, but not in SQL Server 2008 on my server.
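
    A file that is healthy at the source but arrives corrupt, twice, across two independent transfers points at the transfer itself, and the classic culprit is FTP defaulting to ASCII mode, which silently rewrites bytes in binary files. A hedged check before burning another 21 GB (paths are placeholders; certutil -hashfile is available on recent Windows versions, and any standalone md5 tool works too):

        rem In a command-line FTP client, force binary mode before transferring:
        rem   ftp> binary
        rem   ftp> put DB.bak
        rem Then compare a hash of the file on both machines:
        certutil -hashfile C:\backups\DB.bak MD5

    If the hashes match on both ends and the restore still fails, the transfer is exonerated and the problem is worth re-examining on the SQL Server side.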

  • Possible disk IO issue

    - by Tim Meers
    I've been trying to figure out what the IOPS on my DB server array really are, and whether the load is just too much. The array is four 72.6 GB 15k RPM drives in RAID 5. To calculate IOPS for RAID 5, the following formula is used (it comes from MSDN): (Reads + (4 * Writes)) / Number of disks = total IOPS. I also want to calculate the average queue length; I'm not sure where they get the formula, but I think it reads on that page as: avg queue length / number of disks = actual queue. To populate these formulas I gathered the needed information with perfmon. Under normal production load I came up with (873.982 + (4 * 28.999)) / 4 = 247.495, and a disk queue length of 14.454 / 4 = 3.614. So, to the question: am I wrong in thinking this array has very high disk IO?

    Edit: I got the chance to review it again this morning under normal/high load, this time with even bigger numbers: IOPS in excess of 600 for about 5 minutes, then it died down again. But I also took a look at Avg. Disk sec/Transfer, % Disk Time, and % Idle Time. These numbers were taken when the reads/writes per second were only 332.997/17.999 respectively:

        % Disk Time:            219.436
        % Idle Time:            0.300
        Avg. Disk Queue Length: 2.194
        Avg. Disk sec/Transfer: 0.006
        Pages/sec:              2927.802
        % Processor Time:       21.877

    Edit (again): Looks like I have the issue solved; thanks for the help. Also, for a pretty slick parser I found this: http://pal.codeplex.com/ - it works pretty well for breaking the data down into something usable.

  • How do I prevent a tar pipe from causing swapping?

    - by Jeff Shattock
    I have a rather large filesystem that I need to transfer from one Linux server to another. I figured the best way to do this was via a tar/netcat pipe arrangement, something like:

        tar c . | pv | nc blah blah blah

    And it works great - the network stays fairly saturated, life is good - until the source machine starts swapping. The files are on a RAID on the source system, so the read speed is much faster than the write speed on the other end. Since the destination machine hasn't picked up the data yet, the source machine needs to stick it somewhere, so into RAM it goes, until there is no more free RAM. It then starts swapping, which is horribly painful since that machine has its OS installed on a somewhat slow CF card. Both machines have 4 GB of physical RAM and run 64-bit Ubuntu 9.04 Server, with a GigE link between them. How do I prevent this swapping? Can I put a "speed limit" on the tar or netcat process so that the transfer speed doesn't overwhelm the write throughput on the destination end? The man pages didn't list anything, but there might be something I'm overlooking.
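
    pv, already in the pipe above, can do exactly this: its -L option caps the throughput flowing through it. A sketch; the 20 MB/s figure is a guess at the destination's sustainable write speed and would need tuning:

        # Cap the pipe at ~20 MB/s so reads can't outrun the remote writes
        tar c . | pv -L 20m | nc blah blah blah

    With the pipe capped below the receiver's write throughput, data stops accumulating on the sender and the swapping should disappear.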

  • How is network mounted software executed?

    - by CptSupermrkt
    I would like to understand how network-mounted software works. For example, at my place of work we have a software server. Each client machine (hundreds of them) automatically mounts directories from the software server on boot. For example, a program like Matlab is installed just once, on the software server, but each client machine can start up an instance of Matlab. What is going on under the hood? Let's say I run /opt/bin/matlab, where /opt is mounted from the software server: what happens when I press Enter to execute matlab on a client machine? The process runs on the client machine, and I've already narrowed down that there isn't any implicit or hidden file transfer (i.e., copying Matlab to my machine temporarily for that session) by running matlab on a computer with nearly zero disk space (i.e., not enough room for such a copy). Since Matlab is installed on the server, how is my client computer executing it? What mechanism is controlling this? What is happening behind the scenes?
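
    The short version, assuming a conventional NFS-style mount: the client's kernel opens the binary over the network, memory-maps it, and pulls pages across the wire on demand as they are executed. The CPU and the process are entirely local; the server acts only as a file store, which is why a client with nearly zero free disk can still run the program. A way to observe this on a client (assuming Linux, and that the process name is MATLAB):

        # Which files does the running process have mapped into memory?
        # Paths under the network mount show up directly in the map:
        cat /proc/$(pgrep -n MATLAB)/maps | head -n 20
        # Confirm /opt really is a network mount:
        mount | grep ' /opt '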

  • Using the same Windows 8 Upgrade installer on multiple PCs

    - by Karan
    As per this article: "You may transfer the software to another computer that belongs to you. … You may not transfer the software to share licenses between computers." But what if I have a bunch of PCs with a mix of XP/Vista/Windows 7? Can I purchase either the $40 Windows 8 Pro Upgrade (download only) or the $70 (DVD) version, both of which come without a key, only once and use it to upgrade all the PCs? Since I'm not sharing the license, and each PC has its own valid, genuine license, it should be allowed, right? Or is it illegal? Even if they want people to shell out $40/$70 for each PC, how would they enforce the use of the installer/media on only one PC each? EDIT: I have been given to believe by a source that the installer will only check for the previous OS's key, which is what is confusing me (I have never purchased an upgrade version before this, only full retail or pre-installed versions). Is this true, or will I need to enter two keys to make the upgrade work, one for the previous version and one for Windows 8? If the latter is the case, then the issue is solved, since obviously the same Windows 8 key will not be valid for multiple PCs.

  • RAID 6 that can read at least 1000 Mbit/s?

    - by Diblo Dk
    I purchased a Dell PERC 6/i, which I expected to be able to read at 1000 Mbps. There is not much to be done about it now, but there are some things I'd like to understand for another time. I have configured it with four 2 TB drives in RAID 6. It has 256 MB of RAM and a transfer rate of 300 Mbps. The benchmark test showed:

        Min read rate: 136.3 Mbps
        Max read rate: 329.6 Mbps
        Avg read rate: 242.2 Mbps

    What could I have done to get at least 1000 Mbps? Is it normal for internal and external RAID controllers to have a lower transfer rate, e.g. 300 Mbps? (I did not notice at the time that it was not 3 Gbps.) How would a RAID 10 have performed compared to RAID 6 or 5? Would it have been better to use software RAID (Linux) with the internal 3 Gbps SATA controller? UPDATE: The drives are SATA III, 6 Gbps: http://www.seagate.com/files/staticfiles/docs/pdf/datasheet/disc/desktop-hdd-data-sheet-ds1770-1-1212us.pdf (2TB)

  • Why is my rsync so slow compared to pure cp or even scp?

    - by nfm
    I'm transferring files from Linux to Windows 7 via a mounted share (the share is mounted from Windows onto Linux). I'm copying lots of data (nearly a TB) from the old to the new machine within my LAN, and unfortunately I only have 100 Mbit. Naturally I blindly used rsync, but after a day I started wondering why it felt so slow: enabling the progress meter showed a transfer rate of about 2 Mbit/s. So I took a reasonably big file (800 MB) and tracked the transfer timing:

        cp     : 05:33
        scp (*): 06:33
        rsync  : 21:51

        (*) scp via localhost to the same Linux machine, directly onto the share;
            completely useless, but it provided a progress meter

    The tests were as simple as (cp|scp|rsync) <source> <destination>, with no special arguments except host/port for scp. I even tried the -W switch for rsync but cancelled after ten minutes. rsync is 3.0.3 running on Lenny. Being able to interrupt the copy process at any time and resume it is what led me to rsync, but now I think I seriously need to reconsider this requirement. How is such a big difference possible?

  • BIND: forward 1st level zone

    - by raven
    First of all, sorry for the language; English is not my primary language. I have a star-like DNS structure with many filials (more than 2), with ns.main.city.local as the hub:

        filialNS_1.filial_1.city.local  <---->  ns.main.city.local  <---->  filialNS_2.filial_2.city.local
                                                        ^
                                                        |
                                                        v
                                          filialNS_N.filial_N.city.local

        ns.main.city.local is a slave for all filial zones
        filialNS_1 is master of filial_1.city.local
        filialNS_2 is master of filial_2.city.local
        filialNS_N is master of filial_N.city.local

    I want to:

        1. serve DNS queries for xxx.filial_N.city.local with filialNS_N.filial_N.city.local
        2. forward all queries for xxx.xxx.xxx.local from filialNS_N to ns.main.city.local
        3. forward all other queries to our provider's DNS at the filial (or Google Public DNS, or anything else)

    FILIAL CONFIG - named.conf:

        zone "filial_1.city.local" {
            type master;
            file "/etc/namedb/dynamic/filial_1.city.local";
            allow-update { key DHCP_UPDATER; };
            allow-transfer { <ns.main.city.local IP address>; };
        };

        zone "2.76.10.in-addr.arpa" {
            type master;
            file "/etc/namedb/dynamic/2.76.10.in-addr.arpa";
            allow-update { key DHCP_UPDATER; };
            allow-transfer { <ns.main.city.local IP address>; };
        };

        zone "local." {
            type forward;
            forward only;
            forwarders { <ns.main.city.local IP address>; };
        };

    nslookup server.filial_1.city.local works fine, but:

        nslookup server.main.city.local
        Server:  127.0.0.1
        Address: 127.0.0.1#53

        ** server can't find server.main.city.local: NXDOMAIN

    Where am I going wrong?
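
    One thing worth checking: BIND only consults a type forward zone while performing recursion, so if recursion is disabled or not allowed for the querying clients, the forward declaration is never used. And if the query escapes to the root servers instead, a .local name gets NXDOMAIN from the roots, which is exactly the reply shown above. A sketch of the relevant options block on the filial servers, with placeholder networks:

        options {
            recursion yes;
            allow-recursion { 127.0.0.1; 10.76.2.0/24; };   // placeholder addresses
        };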

  • rsync --remove-source-files but only those that match a pattern

    - by Daniel
    Is this possible with rsync? Transfer everything from src:path/to/dir to dest:/path/to/other/dir, and delete some of the source files in src:path/to/dir that match a pattern (or size limit) but keep all other files. I couldn't find a way to limit --remove-source-files with a regexp or size limit.

    Update 1 (clarification): I'd like all files in src:path/to/dir to be copied to dest:/path/to/other/dir. Once this is done, I'd like some files (those that match a regexp or size limit) in src:path/to/dir deleted, but I don't want anything deleted in dest:/path/to/other/dir.

    Update 2 (more clarification): Unfortunately, I can't simply rsync everything and then manually delete the files matching my regexp from src:, because the files to be deleted are continuously created. So let's say there are N files of the type I'd like to delete in src: when rsync starts; by the time rsync finishes, there will be N+M such files there. If I delete them manually now, I'll lose the M files that were created while rsync was running. Hence I'd like a solution that guarantees that the only files deleted from src: are those known to have been successfully copied over to dest:. I could fetch a file list from dest: after the rsync is complete, compare that list with what I have in src:, and then do the removal manually, but I was wondering if rsync can do this by itself.
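
    One approach worth sketching: run rsync twice, since --remove-source-files only deletes files that the run in question actually transferred. rsync has real --min-size/--max-size options for the size-limit case and --include/--exclude filters for name patterns (full regexps are not supported). The *.log pattern and 100M threshold below are placeholders:

        # Pass 1: transfer and delete only the matching files
        rsync -av --remove-source-files --include='*/' --include='*.log' --exclude='*' \
            src:path/to/dir/ dest:/path/to/other/dir/
        # (size-limit variant: rsync -av --remove-source-files --min-size=100M src/ dest/)

        # Pass 2: copy everything else, deleting nothing
        rsync -av src:path/to/dir/ dest:/path/to/other/dir/

    Files created while pass 1 runs are not transferred by it, so they are never removed; they simply get picked up by a later run, which matches the guarantee asked for above.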

  • Is my "Generic" USB Flash Drive broken?

    - by Jesse J.
    So here is the situation: I consider myself technologically knowledgeable (I love to code, whether it's websites, C#, C++, and so on). My wife and 2 toddlers bought me a "generic" 128 GB USB storage device (USB flash drive) for Father's Day. I thought "awesome" at first... WRONG! Nothing but problems with it: 3-4 MB/s max transfer speed, which I can bear with. BUT! When I went to reformat my computer, I transferred the save files from my games over to the stick, and then the USB stick managed to become corrupted. A simple format wouldn't fix it either; it's screwed. I ran chkdsk X: /X /F /R with administrator rights (after manually changing the USB drive letter to X for troubleshooting), and after a long session got it to finish with no errors (I had to delete the log). I finally recovered the files, but now when I use the drive (transfer to or from), it transfers a couple of KB and then freezes. Windows 7 shows:

        Name: From: Folder (X:\File\Location) To: Folder (C:\Users\Username\Desktop)
        Items Remaining: 0 (0 bytes)
        Speed: 0 bytes/second

    It stays like that forever... and ever... and ever... It transferred 3 files at least, and then stopped. This is a new USB stick bought from a "high" reputation company on eBay. Is the USB stick screwed?
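
    A pattern like this - generic high-capacity stick, slow writes, corruption once enough data lands on it - is the signature of counterfeit flash, where a small chip is reprogrammed to report 128 GB. A hedged way to verify, using the f3 tool if a Linux machine is handy (h2testw is the usual Windows equivalent); the mount point is a placeholder:

        sudo apt-get install f3
        f3write /media/usb    # fills the drive with test files
        f3read /media/usb     # reads them back; fake capacity shows up as corrupted data

    If f3read reports corrupted data past a certain point, the stick's real capacity ends there, and the drive is effectively unusable beyond it.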

  • Viability of Mac OS X 10.9 Time Machine Server in office environment

    - by user197609
    Currently we have about 20 Mac OS X 10.9 MacBook Pros (almost all with SSDs) backing up to individual USB drives. I'd like to consolidate these to one Drobo Thunderbolt drive array attached to a Mac mini server (running 10.9 Server) using Time Machine Server. My question is: will this scale to 20 users? The examples I have seen are 5 or 6 users tops, and this isn't easy for me to test (I'd rather not ask everyone to back up to the array and then switch back to USB drives if it brings our network to its knees). My primary concern is saturating our gigabit network: Time Machine backs up every hour for every machine, so there would usually be a couple of people backing up at any given time. We also occasionally have people on our 802.11ac network rather than on Ethernet (usually connected via 802.11n until they upgrade to newer machines), but most of the time people are connected to our Thunderbolt displays, which have a gigabit Ethernet port. Our network topology is one 32-port gigabit switch with 5 smaller gigabit switches at the desk clusters; the Mac mini server is connected directly to the top-level switch. Update: Failing information from someone who has done this in practice, I suppose my question is really about how switches work: if three or four people are backing up simultaneously, and two other (different) users then transfer a file between each other, will they be able to transfer the file at gigabit speeds?
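
    On the switching question: a gigabit switch forwards each port pair independently, so two users on different ports can exchange a file at full wire speed regardless of other flows, until streams converge on a shared link, such as the server's single port or the uplink from a desk-cluster switch to the core switch. Since every backup targets the one Mac mini port, that link is the likely bottleneck rather than the switch fabric. An easy empirical check is to run iperf between two clients while a few backups are in flight (the hostname is a placeholder):

        # On one client:
        iperf -s
        # On another client:
        iperf -c client1.local -t 30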
