Search Results

Search found 1666 results on 67 pages for 'andrew'.

Page 17 of 67

  • DFS-R (2008 and R2) 2 node server cluster, all file writes end in conflictAndDeleted

    - by Andrew Gauger
    Both servers in a 2-server cluster are reporting event 4412 some 20,000 times per day. If I sit in the ConflictAndDeleted folder I can watch files appearing and disappearing. Users report that files saved by peers at the same location are overwriting each other. The configuration began with a single server; DFS-R was then set up using the 2008 R2 wizard, which created the share on the second server. DFSN was set up independently. Windows users have drives mapped via the domain-based namespace (\domain.com\share). Mac users point directly at the new server share created by DFS-R. PC users report most of the lost files, but there have been two reports from Mac users of files reverting.
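
    A first diagnostic step (a sketch; the replication group and folder names below are placeholders for this setup) is to measure the replication backlog in each direction, since a large or oscillating backlog usually accompanies this kind of conflict storm:

        rem smem/rmem are the sending and receiving members for the direction tested
        dfsrdiag backlog /rgname:OfficeShare /rfname:Files /smem:SERVER1 /rmem:SERVER2
        dfsrdiag backlog /rgname:OfficeShare /rfname:Files /smem:SERVER2 /rmem:SERVER1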

    Read the article

  • SSL FTP fails on Windows 7 but not Windows XP clients

    - by Andrew Neely
    We currently use a free SSL-FTP client called Move-It-Freely to transmit data from a custom data-entry program at over forty facilities scattered around the state to our central server. Under XP it works flawlessly. Some facilities have upgraded to Windows 7. On these machines, uploads (transfers to us) work, but downloads (transfers from us to them) fail. Replacing the Windows 7 machine with an XP machine solves the problem. We have verified that the network firewall settings have not changed, and the problem persists even when Windows Firewall is not running; we remoted into one of the Windows 7 machines to confirm the firewall was indeed turned off. We cannot replicate the problem on our own Windows 7 machines and are at a loss as to how to fix this for our customers. The data contain health-related information and need to be encrypted (hence SSL-FTP). Despite hours spent on Google, we cannot find a solution.
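
    One XP-to-7 difference worth checking (an assumption, not a confirmed fix): Windows 7 performs stateful FTP inspection in its firewall service, and since it cannot read the encrypted control channel of an SSL-FTP session it can silently block the data connection a download needs, even when the firewall profile looks off. It can be switched off from an elevated prompt:

        netsh advfirewall set global StatefulFtp disable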

    Read the article

  • Laptop connected to external monitor results in desktop overflow

    - by Andrew Fox
    I have a laptop (MSi FX600) connected to an external monitor (Toshiba 19AV500U) with the screen resolution set to the monitor's recommended 1440x900. The problem is that the desktop overflows the display: all edges are cut off, so I cannot see the close and minimize buttons at the top, and only half of the taskbar is visible at the bottom. I have tried many different resolutions but none of them solve the problem. I have had the same problem connecting to other external monitors in the past, though not all of them; it seems fairly random. I have looked for a way to adjust the image manually but cannot find one in my settings. All Windows updates are installed, I am connecting through an HDMI cable, and I have it set to display only on the external monitor rather than an extended desktop.

    Read the article

  • How to select text in Pico when I don't have a caret character?

    - by Andrew Swift
    I am using Pico via Terminal/SSH on an iPad with a French Bluetooth keyboard. There is no way to type the caret (^) needed to select text (Ctrl-^ starts a selection). The caret key on this keyboard types a circumflex accent (as in août) and only produces a character once a letter is typed after it, so there is no way to send Ctrl-^; when I do try Ctrl-^ in Pico, the cursor simply moves to the previous line. Is there an alternate way to select text in Pico? I can't see how to use it without finding a solution.

    Read the article

  • Install Windows 8 64-bit over the top of Windows 8 32-bit

    - by Andrew Gee
    I currently have Windows 8 32-bit installed from MSDN (I didn't realise at the time that my processor supports 64-bit). I understand that you can't upgrade from 32-bit to 64-bit from within Windows using the ISO directly, so I have burned the ISO to a DVD and attempted to boot from it. The problem I am encountering:

    The operating system couldn't be loaded because a required file is missing or contains errors.
    File: CI.dll
    Error code: 0xc0000221
    You'll need to use the recovery tools on your installation media. If you don't have any installation media (like a disc or USB device), contact your system administrator or PC manufacturer.

    Additional info:
    Computer: HP Pavilion m9280.uk-a
    Processor: AMD Phenom 9600 Quad-Core
    RAM: 3 x 1 GB sticks

    Thanks in advance!

    Read the article

  • Block p2p downloading in my office?

    - by Andrew
    I work in an education office in a third-world country. We pay for Internet by the megabyte (no other choice) and have lately been using an incredible amount of bandwidth, because the office staff have discovered p2p sharing. As far as I know, LimeWire is the only program they're using, but I'm sure it's just a matter of time before they discover the wider world of BitTorrent. Using only a Linksys router (which I could flash), is there any way for me to prevent the office from destroying our bandwidth cap by downloading personal items (against policy)? Even partial fixes would be better than nothing.
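
    If the router is flashed with a Linux-based firmware such as DD-WRT or Tomato (an assumption; stock Linksys firmware has no such hooks), one blunt but effective sketch is to whitelist the handful of ports the office actually needs and drop everything else forwarded from the LAN, which stops most p2p clients cold:

        # let replies to already-established connections through first
        iptables -I FORWARD 1 -m state --state ESTABLISHED,RELATED -j ACCEPT
        # allow new outbound DNS, web and mail
        iptables -I FORWARD 2 -p udp --dport 53 -j ACCEPT
        iptables -I FORWARD 3 -p tcp -m multiport --dports 80,443,25,110,143,993,995 -j ACCEPT
        # drop anything else arriving from the LAN bridge (br0 on DD-WRT)
        iptables -I FORWARD 4 -i br0 -j DROP

    Determined users can still tunnel over port 443, so this works best paired with the policy rather than in place of it.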

    Read the article

  • Add 802.11n to existing 802.11g environment

    - by Andrew Robinson
    I have a small home network that is currently running 802.11g: two computers that are capable of 802.11n, and two devices (a BlackBerry and a Skype phone) that are limited to 802.11g. A few neighbors run 802.11g but their signals are very weak. How big an impact will the two G devices have on N speeds? Will they pull the whole network back down to G? These two devices are hardly ever used, whereas the N devices are heavily used. If I add an N router to the network (instead of replacing the G router), set the existing G router to channel 1 with 20 MHz bandwidth, and set the N router to use 6 and 11 for 40 MHz, will I eliminate the overlap and allow full speed on both networks?
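
    For reference, the arithmetic behind that channel plan: 2.4 GHz channel centers sit 5 MHz apart, so a 20 MHz channel spans roughly two channel numbers either side of its center, which is why 1, 6 and 11 are the classic non-overlapping set. A 40 MHz pair bonded across 6 and 11 occupies roughly channels 4 through 13, while channel 1 at 20 MHz spans only about 1 through 3 — so on paper the two networks should stay clear of each other, though not of any neighbors parked mid-band.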

    Read the article

  • HAproxy roundrobin balancing does not appear to be distributing evenly

    - by andrew
    I know that with loaded servers, roundrobin in HAproxy (1.4.4) does not distribute perfectly evenly, but my servers are currently getting NO traffic (test setup), and roundrobin balancing goes www1, www1, www1, www1, www1, ... www2, www2, www2, ..., www1, ... I'm verifying this by having the script on each server run cat /etc/HOSTNAME (Slackware). I need it to switch back and forth on each request to test some session handling (stored in shared memcached), but I'm having trouble getting it to alternate between my two web servers. My configuration:

        global
            log 127.0.0.1 local0 warning
            maxconn 4096
            chroot /usr/share/haproxy
            pidfile /var/run/haproxy.pid
            uid 99
            gid 99
            daemon

        defaults
            balance roundrobin
            fullconn 100
            maxconn 4096
            mode http
            option dontlognull
            option http-server-close
            option forwardfor
            option redispatch
            retries 3
            timeout connect 5000
            timeout client 20000
            timeout server 60000
            timeout queue 60000
            stats enable
            stats uri /haproxy
            stats auth ***:***

        frontend www *:80
            log global
            acl is_upload hdr_dom(host) -i uploads.site.com
            acl is_api hdr_dom(host) -i api.site.com
            acl is_dev hdr_dom(host) -i dev.site.com
            acl is_apidev hdr_dom(host) -i apidev.site.com
            use_backend uploads.site.com if is_upload
            use_backend api.site.com if is_api
            use_backend dev.site.com if is_dev !is_apidev
            default_backend site.com

        backend site.com
            option httpchk HEAD /alive.php HTTP/1.1\r\nHost:site.com
            server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
            server www2 1.1.1.2:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

        backend api.site.com
            option httpchk HEAD /alive.php HTTP/1.1\r\nHost:api.site.com
            server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
            server www2 1.1.1.2:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

        backend dev.site.com
            option httpchk HEAD /alive.php HTTP/1.1\r\nHost:dev.site.com
            server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
            server www2 1.1.1.2:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

        backend uploads.site.com
            option httpchk HEAD /alive.php HTTP/1.1\r\nHost:uploads.site.com
            server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
            server www2 1.1.1.2:8080 backup weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

    So basically, I have several back-ends (I've verified the ACLs are working) with the default "roundrobin" balancing selected. I've tried removing weights, removing the minconn/maxconn/fullconn attributes for all servers (not just the backend I'm testing), removing the ACLs, etc. I've been testing on dev.site.com, by the way. Can anyone see a reason why I can't get something like www1, www2, www1, www2, ...? Also, this is one of my first questions on here, so please let me know if I left anything needed out of my post. Thanks!
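
    For what it's worth, a quick way to watch the rotation from a client (a sketch; it assumes each backend serves something host-identifying at /, like the cat /etc/HOSTNAME output above):

        # ten requests over fresh connections; print which backend answered each
        for i in $(seq 1 10); do curl -s http://dev.site.com/; done

    Each curl invocation opens a new TCP connection, which rules out client-side keep-alive as a source of apparent stickiness.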

    Read the article

  • Coffeeshop limits Internet connection to 30 minutes -- how does it recognize me if I delete my cookies?

    - by Andrew
    I was connected to the Internet in a coffeeshop earlier today, but I was only allowed 30 minutes of access. I tried deleting my cookies after my time was up (though admittedly I didn't delete my Flash cookies -- would that have solved the problem?), but the connection still recognized that I'd already used my 30 minutes, so I couldn't connect again. How did it still recognize me? The wireless was unprotected (no code or password); it just had a portal you had to pass through on initial connection. I'm not terribly familiar with web development or computer networks, so I'm just trying to get a better idea of what's happening (and possibly what to do next time I use up my minutes =)).

    Read the article

  • htaccess rewrite different folder url, two index files

    - by Andrew
    I've been searching for a while now and haven't found anything that comes close to what I'm trying to accomplish. Right now my URLs look like this: www.website.com/something, all handled by /index.php in the root folder. I have now created plugins in their own folders: /plugins/PLUGINNAME/index.php. I want URLs like www.website.com/plugins/PLUGINNAME/anything/iwant/here to be handled entirely by /plugins/PLUGINNAME/index.php rather than by the root index.php. Currently www.website.com/plugins/PLUGINNAME/ works, but anything after /PLUGINNAME/xxx falls through to the root /index.php.
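
    A minimal mod_rewrite sketch of one way to do this (assumptions: Apache with mod_rewrite enabled, AllowOverride permitting rewrites, and PLUGINNAME standing in for the real folder name):

        # /plugins/PLUGINNAME/.htaccess
        RewriteEngine On
        RewriteBase /plugins/PLUGINNAME/
        # leave real files and directories (assets, etc.) untouched
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # route everything else under this folder to the plugin's own index.php
        RewriteRule ^ index.php [L]

    With that in place, /plugins/PLUGINNAME/anything/iwant/here reaches the plugin's index.php, which can recover the original path from the request URI.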

    Read the article

  • Win 7 Home Premium 64 bit running Cobian Backup 11 (Gravity)

    - by Andrew
    I'm really enjoying Cobian 11, but am fairly new to it. My question is this. I back up a pretty large folder on a regular basis. I started off by doing a Full backup, and have followed that monthly using differential backups. I was told that, to restore my computer after a crash, I need to copy back the original full backup AND copy back the latest differential over the full. That's fine. However, over the months there are quite a few large differential backups dated between the original Full one and the latest differential one. To free space on my backup HD, can I every now and then delete the differential backups that lie between the original Full and the latest differential, and just leave the original Full and the latest differential backup on the HD?

    Read the article

  • How to create a password-less service account in AD?

    - by Andrew White
    Is it possible to create domain accounts that can only be used via a domain administrator or similar access? The goal is to create domain users that have certain network access appropriate to their task, but that are only meant for automated jobs. As such, they don't need passwords; a domain admin can always do a run-as to drop down to the correct user to run the job. No password means no chance of someone guessing it, writing it down, or losing it. This may belong on Super User rather than Server Fault, but I am going to try here first since it sits on the fuzzy border for me. I am also open to constructive alternatives.
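
    One mechanism worth naming here (a sketch, and an assumption that it fits the jobs in question): Windows Server 2008 R2's managed service accounts, whose passwords are generated and rotated by the domain rather than by a person, so nobody ever knows or types one. With the ActiveDirectory PowerShell module and a hypothetical account name:

        # create the managed service account in the domain
        New-ADServiceAccount -Name "svcNightlyJob"
        # on the machine that runs the automated job, bind the account to that host
        Install-ADServiceAccount -Identity "svcNightlyJob"

    Services on that host can then run as DOMAIN\svcNightlyJob$ with no stored password.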

    Read the article

  • Configure IIS Web Site for alternate Port and receive Access Permission error

    - by Andrew J. Brehm
    When I configure IIS to run a Web site on port 1414, I get the following error:

        ---------------------------
        Internet Information Services (IIS) Manager
        ---------------------------
        The process cannot access the file because it is being used by another process. (Exception from HRESULT: 0x80070020)

    However, according to netstat the port is not in use. Completely aside from IIS, I wrote a test program just to open and test the port:

        TcpListener tcpListener;
        tcpListener = new TcpListener(IPAddress.Any, port);
        try
        {
            tcpListener.Start();
            Console.WriteLine("Press \"q\" key to quit.");
            ConsoleKeyInfo key;
            do
            {
                key = Console.ReadKey();
            } while (key.KeyChar != 'q');
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
        tcpListener.Stop();

    The result was an exception with the following ex.Message:

        An attempt was made to access a socket in a way forbidden by its access permissions

    So the port was available, but its "access permissions" do not allow me access. This persists across several restarts. The port is not reserved or in use as far as I know; while IIS says it is in use, netstat and my test program say it is not, and the test program (run elevated) is told it is not allowed to access the port. The IIS site is for MQSeries, and the MQ listener also cannot start on port 1414 because of this issue. A quick search of my registry found nothing interesting for port 1414. What are socket access permissions, and how can I correct mine to allow access?
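
    One check that matches these symptoms (a suggestion, not a confirmed diagnosis): Windows can exclude whole TCP port ranges at the stack level, outside any one process, and a port inside such a range produces exactly this "access permissions" error while looking free in netstat. From an elevated prompt:

        rem list TCP port ranges reserved away from applications;
        rem if 1414 falls inside one, that explains the error
        netsh int ipv4 show excludedportrange protocol=tcp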

    Read the article

  • Running a bash script from an HTML link or button

    - by Andrew
    I have a webserver that's hosting lots of images. I want the client to be able to press a button or a link, which will run a bash script on the server, which in turn creates a video from all those pictures. The script I'm trying to run is this:

        #!/bin/bash
        # cd to the directory
        cd /var/www/gallery
        # use ffmpeg to make video
        ffmpeg -pattern_type glob -i 'img-*jpg' -r 1 video.mp4
        # take the first file in the directory and name it video.mp4.jpg (for thumbnail)
        cp `ls | sort -n | head -1` video.mp4.jpg

    The script is located on the server, so when the client clicks the link or button, the script runs and the video is created. I've tried both solutions listed here but I can't seem to get either to work. I have PHP installed on my server.
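
    A minimal sketch of the PHP glue, assuming the script is saved at /var/www/scripts/makevideo.sh and is executable by the web server user (the path and file names here are placeholders):

        <?php
        // makevideo.php -- runs the script and shows its output.
        // Note: exposing shell execution to visitors is risky; restrict who can reach this.
        $output = shell_exec('/var/www/scripts/makevideo.sh 2>&1');
        echo '<pre>' . htmlspecialchars($output) . '</pre>';

    The link is then simply <a href="makevideo.php">Build video</a>. For a long ffmpeg run, the request will block until the script finishes, so backgrounding the job is worth considering.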

    Read the article

  • Cannot properly read files on the local server

    - by Andrew Bestic
    I'm running a RedHat 6.2 Amazon EC2 instance using stock Apache and IUS PHP53u+MySQL (+mbstring, +mysqli, +mcrypt), plus phpMyAdmin from git. All configuration is near-vanilla, assuming the described installation procedure. I've been trying to import SQL files into the database by having phpMyAdmin read them from a directory on the server. phpMyAdmin lists the files fine in the drop-down, but returns a "File could not be read" error when actually importing. Likewise, file_get_contents() on a file returns a "failed to open stream: Permission denied" error. In fact, when my brother attempted to import the SQL files using MySQL's SOURCE as an authenticated MySQL user with ALL PRIVILEGES, he also got an error reading the file. It seems we cannot read or import these files by ANY method other than as root over SSH (although I can't say I've tried every possible method). I have never had this issue on regular CentOS (5, 6, 6.2) installations with the same LAMP stack. Some things I've tried after searching Google and StackExchange:

    - chmod 0777 on both directory and files
    - chown root and apache (the only two users I can think of that PHP would use)
    - importing SQL files with total size under both upload_max_filesize and post_max_size
    - PHP open_basedir commented out, or set to "/var/www" (my sites are Apache VirtualHosts inside that directory, and all the SQL files are deep within it)
    - PHP safe mode OFF (it was never on)

    For the moment I have worked around this for the smaller files by using the file-upload method directly in phpMyAdmin, but that won't do for my 200+ MiB SQL files, as I don't have a stable Internet connection. Any light you could shed on this situation would be greatly appreciated. I'm fair with Linux, and for the things that do stump me, Google usually has an answer. Not this time, though!
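
    Two quick checks that fit the pattern of 0777 files still refusing to read (a sketch; the path is a placeholder):

        # reproduce the failure as the web server user, outside PHP entirely
        sudo -u apache head -n 1 /var/www/site/dumps/backup.sql
        # rule SELinux in or out: Enforcing plus a wrong file context will
        # deny reads regardless of classic permissions
        getenforce
        ls -Z /var/www/site/dumps/backup.sql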

    Read the article

  • Can I list file names (or their parent directories) that were recently deleted using rm in OS X?

    - by Andrew Grimm
    Is it possible to find out which files and directories were recently deleted by rm on OS X? Failing that, is it possible to find which parent directories have had files or directories deleted from them? The OS version is Snow Leopard. Background: last night, rvm (Ruby version manager) did an rm -rf of the ~/ruby directory in my home directory (the bug has since been fixed). Ideally I'd like to know which files within ~/ruby were deleted, but failing that, I'd like to know whether rvm deleted anything outside ~/ruby. In case anyone's wondering about backups: just about everything in ~/ruby is a git project with a remote repo, and I have a fairly recent Time Machine backup (only 20 days old).

    Read the article

  • Can't ping through default gateway

    - by Andrew G.H.
    I have the following configuration. The routing table on M3 is:

        Destination     Gateway         Genmask          Flags  MSS Window  irtt  Iface
        0.0.0.0         192.168.2.1     0.0.0.0          UG     0   0       0     eth1
        192.168.2.0     0.0.0.0         255.255.255.0    U      0   0       0     eth1
        192.168.3.0     0.0.0.0         255.255.255.192  U      0   0       0     eth0

    The routing table on M1 is:

        Destination     Gateway         Genmask          Flags  MSS Window  irtt  Iface
        0.0.0.0         192.168.0.1     0.0.0.0          UG     0   0       0     eth0
        169.254.0.0     0.0.0.0         255.255.0.0      U      0   0       0     eth1
        192.168.0.0     0.0.0.0         255.255.255.0    U      0   0       0     eth0
        192.168.2.0     0.0.0.0         255.255.255.0    U      0   0       0     eth1

    So basically M3's gateway is M1, and M1's gateway is M2's wireless Internet interface. If I ping 8.8.8.8 from M1, everything is OK and replies are received. Pinging between M1 and M3 also works in both directions. I have configured M1 to forward traffic as a gateway using the Firestarter package, and then stopped the firewall with it; the iptables policies are ACCEPT for everything. Problem: pinging 8.8.8.8 from M3 gets no replies. What could be the source of this problem?
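
    Two things worth verifying on M1 for this topology (a sketch; interface names as in the tables above):

        # is kernel forwarding actually on? (1 = yes)
        cat /proc/sys/net/ipv4/ip_forward
        # if M2 has no route back to 192.168.2.0/24, replies from 8.8.8.8 die at M2;
        # masquerading M3's traffic behind M1's outward (eth0) address sidesteps that
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE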

    Read the article

  • Ubuntu Pound Reverse Proxy Load Balancing Based off active server load?

    - by Andrew
    I have Pound installed on a loadbalancer. It seems to work okay, except that it randomly assigns the backend server each request is forwarded to. I put one backend machine under so much load that it went into swap, and I can't even ssh in to test this scenario; I would like the loadbalancer to realize that the machine is overloaded and send requests to a different backend, but it doesn't. I've read the man page, and the "DynScale 1" directive seems to be what should monitor this, yet Pound still redirects to the overloaded server. I also added "HAport 22" to the backends, figuring that since I can't ssh in, neither could the loadbalancer, and it would consider that backend dead until it sheds the load and responds again -- but that didn't help either. If anyone could help with this, I'd appreciate it. My current config is below.

        ######################################################################
        ## global options:

        User            "www-data"
        Group           "www-data"
        #RootJail       "/chroot/pound"

        ## Logging: (goes to syslog by default)
        ##      0       no logging
        ##      1       normal
        ##      2       extended
        ##      3       Apache-style (common log format)
        LogLevel        3

        ## check backend every X secs:
        Alive           5
        DynScale        1
        Client          1200
        TimeOut         1500

        # poundctl control socket
        Control "/var/run/pound/poundctl.socket"

        ######################################################################
        ## listen, redirect and ... to:

        ## redirect all requests on port 80 to SSL
        ListenHTTP
            Address 192.168.1.XX
            Port    80
            Service
                Redirect "https://xxx.com/"
            End
        End

        ListenHTTPS
            Address 192.168.1.XX
            Port    443
            Cert    "/files/www.xxx.com.pem"
            Service
                BackEnd
                    Address 192.168.1.1
                    Port    80
                    HAport  22
                End
                BackEnd
                    Address 192.168.1.2
                    Port    80
                    HAport  22
                End
            End
        End

    Read the article

  • Cannot load from raid with grub

    - by Andrew Answer
    I have a RAID1 array on my Ubuntu 12.04 LTS machine, and the /dev/sda HDD was replaced several days ago. I used these commands for the replacement:

        # go to superuser
        sudo bash
        # see RAID state
        mdadm -Q -D /dev/md0
        # State should be "clean, degraded"

        # remove broken disk from RAID
        mdadm /dev/md0 --fail /dev/sda1
        mdadm /dev/md0 --remove /dev/sda1
        # see partitions
        fdisk -l
        # shut down the computer
        shutdown now

        # physically replace old disk with new, start system again

        # see partitions
        fdisk -l
        # copy partition table from sdb to sda
        sfdisk -d /dev/sdb | sfdisk /dev/sda
        # set the partition type for sda1
        sfdisk --change-id /dev/sda 1 fd
        # add sda1 to RAID
        mdadm /dev/md0 --add /dev/sda1
        # see RAID state
        mdadm -Q -D /dev/md0
        # State should be "clean, degraded, recovering"
        # to watch progress:
        cat /proc/mdstat

    After the rebuild completed, "fdisk -l" says /dev/md0 does not have a valid partition table. As a result: (1) "update-grub" finds only the /dev/sda and /dev/sdb Linux installs, not /dev/md0; (2) "dpkg-reconfigure grub-pc" says "GRUB failed to install the following devices: /dev/md0". I cannot boot my system except from /dev/sdb1 or /dev/sda1, and then only in DEGRADED mode... This is my partial fdisk -l output:

        Disk /dev/sdb: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000667ca

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *          63   940910984   470455461   fd  Linux raid autodetect
        /dev/sdb2       940910985   976768064    17928540    5  Extended
        /dev/sdb5       940911048   976768064    17928508+  82  Linux swap / Solaris

        Disk /dev/md0: 481.7 GB, 481746288640 bytes
        2 heads, 4 sectors/track, 117613840 cylinders, total 940910720 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/md0 doesn't contain a valid partition table

    Can anybody resolve this issue? It is giving me a big headache.
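
    A note on where GRUB actually lives in this layout (a sketch of the usual repair, assuming the array members are sda1/sdb1 as shown): /dev/md0 is assembled from partitions and carries a filesystem directly, not a partition table, so "doesn't contain a valid partition table" on md0 is expected and not itself a fault. GRUB belongs in the MBR of each physical disk instead:

        # install the boot loader to both members so either disk can boot alone
        grub-install /dev/sda
        grub-install /dev/sdb
        update-grub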

    Read the article

  • SUSE Linux and Xen on Mac Pro - How best to prepare and configure?

    - by Andrew J. Brehm
    This is a long-winded question, so bear with me please. I have a 2009 Mac Pro with two CPUs and 8 GB of memory, which is totally overpowered for Mac OS X, and I am in the process of slowly moving away from Mac OS X as my main platform. Since the Mac Pro is really new and nice, I have finally decided to use it for another platform. I am familiar with Linux and SUSE Linux, and ultimately I want to run some version of SUSE Linux (recommend one; it doesn't have to be free as in no money) and Xen. Here are the individual questions:

    1. Which version of SUSE Linux should I use, and how do I install it on a Mac Pro? The distribution must come with usable Xen; I am willing to pay.
    2. I assume Xen will work on my computer (it has VT support etc.). Is my assumption correct?
    3. I want to run Windows 7 and another instance of SUSE Linux under Xen. Is it also possible to run Mac OS X Server under Xen (on a Mac Pro)?
    4. Which email client under Linux supports IMAP and is best suited to integrating with MobileMe?
    5. Does SUSE Linux support the ATI Radeon HD 4870 and the Apple Cinema Display's 1920 x 1200 resolution?
    6. What else should I take into account?

    Read the article

  • How to set the subversion repository root in Debian?

    - by Andrew Whitehouse
    I have just switched from an old Fedora Core server to Debian Linux 5.0.4. Having migrated the old repository and configured access through svn+ssh, I now want the repository reachable at the same path on the client as before. On Fedora you could specify the repository root with "svnserve -r <root>", but having checked the config files and svnadmin options, I'm stuck as to how to do this on Debian. Is there a way to set the repository root in Debian?
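
    For svn+ssh specifically there is no single Debian config knob, because sshd launches svnserve directly for each connection; one common sketch is a wrapper placed ahead of the real binary on the PATH (the /var/svn root below is a placeholder):

        #!/bin/sh
        # /usr/local/bin/svnserve -- force a repository root for svn+ssh sessions
        exec /usr/bin/svnserve -r /var/svn "$@"

    This assumes /usr/local/bin precedes /usr/bin in the PATH that sshd hands to commands (the Debian default); mark the wrapper executable with chmod +x.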

    Read the article

  • Generate a limited amount of random network traffic between 2 hosts

    - by Andrew S
    I'm trying to find a utility that will allow me to generate a constant flow of random network traffic at a specified rate between 2 hosts. The utility needs to run on Windows and OSX. I've tried iperf but it seems to be more oriented toward short-term testing/statistics and it really taxes the CPU even at slower rates. I want something that will generate traffic for a few weeks at say 10Mbps while I use other tools to monitor the impact of that level of traffic on the network.

    Read the article

  • Does /NOCANDY avoid any adware-related activities with OpenCandy?

    - by Andrew Grimm
    OpenCandy claims that using the /NOCANDY switch with an OpenCandy-affiliated installer lets you avoid OpenCandy. Should I take their word for it? If not, can anyone independent of OpenCandy and its affiliates verify that /NOCANDY works? Background: I was about to install WinSCP onto a fresh Windows installation and found out that new versions bundle OpenCandy with their installer. For the sake of balance, here's a link to WinSCP's FAQ on OpenCandy. The claim that /NOCANDY works appears on WinSCP's web site, but the same boilerplate appears on other OpenCandy-affiliated sites. If the OpenCandy people are offended by the tag "spyware": sorry, but it's the main tag here, rather than "adware".

    Read the article

  • What is good usage scenario for Rackspace Cloud Files CDN (powered by AKAMAI) [closed]

    - by Andrew Smith
    I have just set up my website as a static page served via Rackspace CDN / Akamai:

        www.example.co.uk is an alias for d9771e6f24423091aebc-345678991111238fabcdef6114258d0e1.r61.cf3.rackcdn.com.
        d9771e6f24423091aebc-345678991111238fabcdef6114258d0e1.r61.cf3.rackcdn.com is an alias for a61.rackcdn.com.
        a61.rackcdn.com is an alias for a61.rackcdn.com.mdc.edgesuite.net.
        a61.rackcdn.com.mdc.edgesuite.net is an alias for a63.dscg10.akamai.net.
        a63.dscg10.akamai.net has address 63.166.98.41
        a63.dscg10.akamai.net has address 63.166.98.40
        a63.dscg10.akamai.net has IPv6 address 2001:428:4c02::cda8:ecb9
        a63.dscg10.akamai.net has IPv6 address 2001:428:4c02::cda8:ed09

    The HTTP headers:

        HTTP/1.0 200 OK
        Last-Modified: Fri, 19 Oct 2012 23:27:41 GMT
        ETag: fdf9e14b77def799e09e8ce815a521da
        X-Timestamp: 1350689261.23382
        Content-Type: text/html
        X-Trans-Id: tx457979be3bd746c2b4e5403a1189cdbc
        Cache-Control: public, max-age=900
        Expires: Sat, 27 Oct 2012 22:18:56 GMT
        Date: Sat, 27 Oct 2012 22:03:56 GMT
        Content-Length: 7124
        Connection: keep-alive

    I am wondering whether this is really the fastest way to serve the website. Testing through http://www.just-ping.com/, it seems that from many places the ping is very high, and on quick investigation the CDN appears to resolve addresses with GeoIP based on WHOIS data, which is not accurate; because of that, the ping from many places stays above 300 ms (for example, if an ISP's registration data points at the wrong city, its requests can be routed to a distant node and stay at 300 ms for a month at a time). By contrast, with Amazon Web Services, Route 53 anycast DNS, and only four EC2 instances, India for example seems to stay below 100 ms, while through Akamai it goes above 300 ms in some cases, because Route 53 relies on BGP. From my quick checks, Akamai does not seem to take feedback from the traffic: the high ping stays constant even while I keep downloading large files and videos, which is the opposite of what their website claims about optimizing performance from request feedback; it looks like plain GeoIP with per-city resolution (mostly big cities). Because of this, AWS with Route 53 anycast DNS seems much more reliable, as does EdgeCast, which uses BGP, though I don't know what a static site costs to deploy there -- and I'm not sure about EdgeCast either, since from isolated places there are many errors, so their performance may come at the cost of delivery quality when BGP switches routes during large transfers. So I am wondering what Akamai is really good for, because it doesn't seem to stand out in anything I understand so far, apart from the software-based WAF advertised on their website; what I care about is the core distribution. Is Akamai really good for video? For static websites? So far I have found AWS the most usable, with the most consistent ping and stable transfers.
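
    A fairer per-request measurement than ping alone (a sketch using curl's built-in timing variables; the hostname is the aliased one from above):

        # connect time, time to first byte, and total time for one CDN-served request
        curl -o /dev/null -s -w 'connect: %{time_connect}s  ttfb: %{time_starttransfer}s  total: %{time_total}s\n' http://www.example.co.uk/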

    Read the article
