Search Results

Search found 10860 results on 435 pages for 'bad blocks'.

Page 24/435

  • iOS: Assignment to iVar in Block (ARC)

    - by manmal
    I have a readonly property isFinished in my interface file:

        typedef void (^MyFinishedBlock)(BOOL success, NSError *e);

        @interface TMSyncBase : NSObject {
            BOOL isFinished_;
        }

        @property (nonatomic, readonly) BOOL isFinished;

    and I want to set it to YES in a block at some point later, without creating a retain cycle to self:

        - (void)doSomethingWithFinishedBlock:(MyFinishedBlock)theFinishedBlock {
            __weak MyClass *weakSelf = self;
            MyFinishedBlock finishedBlockWrapper = ^(BOOL success, NSError *e) {
                [weakSelf willChangeValueForKey:@"isFinished"];
                weakSelf->isFinished_ = YES;
                [weakSelf didChangeValueForKey:@"isFinished"];
                theFinishedBlock(success, e);
            };
            self.finishedBlock = finishedBlockWrapper; // finishedBlock is a class ext. property
        }

    I'm unsure whether this is the right way to do it (I hope I'm not embarrassing myself here ^^). Will this code leak, or break, or is it fine? Perhaps there is an easier way I have overlooked?

    SOLUTION: Thanks to the answers below (especially Krzysztof Zablocki), I was shown the way to go here: define isFinished as a readwrite property in the class extension (somehow I missed that one) so no direct ivar assignment is needed, and change the code to:

        - (void)doSomethingWithFinishedBlock:(MyFinishedBlock)theFinishedBlock {
            __weak MyClass *weakSelf = self;
            MyFinishedBlock finishedBlockWrapper = ^(BOOL success, NSError *e) {
                MyClass *strongSelf = weakSelf;
                strongSelf.isFinished = YES;
                theFinishedBlock(success, e);
            };
            self.finishedBlock = finishedBlockWrapper; // finishedBlock is a class ext. property
        }

    Read the article

  • Unity 1.2 Dependency injection of internal types

    - by qvin
    I have a facade in a library that exposes some complex functionality through a simple interface. My question is how do I do dependency injection for the internal types used in the facade. Let's say my C# library code looks like this:

        public class XYZfacade : IFacade
        {
            [Dependency]
            internal IType1 type1 { get; set; }

            [Dependency]
            internal IType2 type2 { get; set; }

            public string SomeFunction()
            {
                return type1.someString();
            }
        }

        internal class TypeA { .... }
        internal class TypeB { .... }

    And my website code is like this:

        IUnityContainer container = new UnityContainer();
        container.RegisterType<IType1, TypeA>();
        container.RegisterType<IType2, TypeB>();
        container.RegisterType<IFacade, XYZFacade>();
        ...
        ...
        IFacade facade = container.Resolve<IFacade>();

    Here facade.SomeFunction() throws an exception because facade.type1 and facade.type2 are null. Any help is appreciated.
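    One avenue worth trying (a sketch, not verified against Unity 1.2): Unity's property injection generally requires a setter it can reach from outside the assembly, so the internal [Dependency] properties are a likely culprit. Constructor injection through a public constructor sidesteps the property visibility question; the parameter types below are the same IType1/IType2 interfaces registered above, nothing new is introduced.

        public class XYZfacade : IFacade
        {
            private readonly IType1 type1;
            private readonly IType2 type2;

            // Unity resolves registered types for public constructor parameters.
            public XYZfacade(IType1 type1, IType2 type2)
            {
                this.type1 = type1;
                this.type2 = type2;
            }

            public string SomeFunction()
            {
                return type1.someString();
            }
        }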

    Read the article

  • Detecting if youtube is blocked by company / ISP

    - by Simon_Weaver
    We have YouTube videos on a site and want to detect whether it is likely that visitors will not be able to view them, due (most likely) to company policy or otherwise. We have two sites: 1) Flex / Flash, 2) HTML. I think with Flex I can attempt to download http://youtube.com/crossdomain.xml and, if it is valid XML, assume the site is available. But with HTML I don't know how to do it. I can't even think of a 'nice hack'.
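    A possible client-side probe for the HTML site (a hedged sketch, not from the original thread): try to load a small image from YouTube's thumbnail host and treat a load failure or timeout as "probably blocked". The function name, the thumbnail URL pattern and the 5-second timeout are assumptions, and a corporate proxy could still serve a block page, so this is only a heuristic.

        // Heuristic check: can the browser fetch a YouTube-hosted image?
        function checkYouTubeReachable(callback) {
            var img = new Image();
            var done = false;
            var timer = setTimeout(function () {   // assume blocked if nothing loads in 5s
                if (!done) { done = true; callback(false); }
            }, 5000);
            img.onload = function () {
                if (!done) { done = true; clearTimeout(timer); callback(true); }
            };
            img.onerror = function () {
                if (!done) { done = true; clearTimeout(timer); callback(false); }
            };
            // Hypothetical probe URL: thumbnail of a known video ID, cache-busted.
            img.src = 'http://img.youtube.com/vi/dQw4w9WgXcQ/default.jpg?' + new Date().getTime();
        }

        checkYouTubeReachable(function (reachable) {
            if (!reachable) {
                // e.g. swap the embedded player for a "video unavailable on this network" note
            }
        });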

    Read the article

  • Is it a good idea to define a variable in a local block for a case of a switch statement?

    - by Paperflyer
    I have a rather long switch-case statement. Some of the cases are really short and trivial. A few are longer and need some variables that are never used anywhere else, like this:

        switch (action) {
            case kSimpleAction:
                // Do something simple
                break;
            case kComplexAction:
            {
                int specialVariable = 5;
                // Do something complex with specialVariable
            }
                break;
        }

    The alternative would be to declare that variable before going into the switch, like this:

        int specialVariable = 5;
        switch (action) {
            case kSimpleAction:
                // Do something simple
                break;
            case kComplexAction:
                // Do something complex with specialVariable
                break;
        }

    This can get rather confusing, since it is not clear to which case the variable belongs, and it uses some unnecessary memory. However, I have never seen this usage anywhere else. Do you think it is a good idea to declare variables locally in a block for a single case?
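    An aside that may help weigh the options (not from the original question): the braces are more than style. In C++, jumping to a later case label past an initialized variable declared without its own block is ill-formed, and in C before C23 a label cannot be followed directly by a declaration at all, so the block is the usual way to give a single case its own variables. A minimal sketch reusing the names above:

        switch (action) {
            case kSimpleAction:
                /* Do something simple */
                break;
            case kComplexAction: {          /* braces open a scope for this case only */
                int specialVariable = 5;
                /* Do something complex with specialVariable */
                break;                      /* break may sit inside or after the block */
            }
            default:
                break;
        }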

    Read the article

  • How to prevent bad formatted data input in DataGridViewCell

    - by JuanNunez
    I have an automatically bound DataGridView that obtains and updates data directly from a strongly typed DataSet and its TableAdapter. The DataGridView allows data editing, but I'm having issues dealing with badly formatted data input. For example, one of the columns is a date, stored in the database as datetime, e.g. 11/05/2010. When you edit the date, the DataGridView opens a TextBox in which you can enter letters, symbols and other unauthorised characters. When you finish editing the cell, if it contains such bad data it throws a System.FormatException. How can I prevent such data from being entered? Is there a way to "filter" that data before it is sent back to the DataGridView?
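    Two DataGridView hooks worth a look here (a hedged sketch, not from the article): CellValidating lets you reject the typed text before it is committed back to the bound DataSet, and DataError catches the FormatException the binding would otherwise throw. The grid name dataGridView1 and column index 1 standing in for the date column are assumptions.

        // Wire these up once, e.g. in the form's constructor:
        // dataGridView1.CellValidating += dataGridView1_CellValidating;
        // dataGridView1.DataError     += dataGridView1_DataError;

        private void dataGridView1_CellValidating(object sender, DataGridViewCellValidatingEventArgs e)
        {
            if (e.ColumnIndex == 1)                       // hypothetical: the date column
            {
                DateTime parsed;
                if (!DateTime.TryParse(Convert.ToString(e.FormattedValue), out parsed))
                {
                    dataGridView1.Rows[e.RowIndex].ErrorText = "Please enter a valid date.";
                    e.Cancel = true;                      // keep the cell in edit mode
                }
            }
        }

        private void dataGridView1_DataError(object sender, DataGridViewDataErrorEventArgs e)
        {
            // Reached only if a bad value still slips through to the binding layer.
            e.ThrowException = false;
        }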

    Read the article

  • MyEntity.findAllByNameNotLike('bad%')

    - by Richard Paul
    I'm attempting to pull up all entities that have a name that doesn't partially match a given string:

        MyEntity.findAllByNameNotLike('bad%')

    This gives me the following error:

        No such property: nameNot for class: MyEntity
        Possible solutions: name (groovy.lang.MissingPropertyException)

    I had a quick look at the criteria style, but I can't seem to get that going either:

        def results = MyEntity.withCritieria {
            not(like('name', 'bad%'))
        }

        No signature of method: MyEntity.withCritieria() is applicable for argument types: (MyService$_doSomething_closure1)

    Ideally I would like to be able to apply this restriction at the finder level, as the database contains a large number of entities that I don't want to load up and then exclude, for performance reasons. [grails 1.3.1]
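    One detail that jumps out of the second attempt (an observation, not from the article): withCriteria is misspelled there, which on its own would explain the "No signature of method" error. A sketch of the criteria query with the spelling corrected, still evaluated in the database rather than in memory:

        def results = MyEntity.withCriteria {
            not {
                like('name', 'bad%')
            }
        }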

    Read the article

  • Nginx+Passenger: 502 Bad Gateway from Nginx when passing urlencoded URLs in GET vars

    - by jimeh
    Here's an example of the URLs that don't work:

        http://domain/do?url=http%3A%2F%2Fwww.linkedin.com%2Fin%2Fperson
        http://domain/do?url=http%3A%2F%2Fwww.linkedin.com%2F

    However, the following URL does work:

        http://domain/do?url=http%3A%2F%2Fwww.linkedin.com

    Also, this only happens with Nginx; using Passenger with Apache it works fine, but we use Nginx on our production machines. Here's the entry in Nginx's error log:

        2009/12/01 09:30:51 [error] 6407#0: *136 upstream prematurely closed connection while reading response header from upstream, client: xxx.xxx.xxx.xxx, server: domain, request: "GET /do?url=http%3A%2F%2Fwww.linkedin.com%2F HTTP/1.1", upstream: "passenger://unix:/tmp/passenger.6335/master/helper_server.sock:", host: "domain"

    Read the article

  • Getting 400 Bad Request when requesting by server name on nginx/uwsgi

    - by Marc Hughes
    I'm trying to run 2 different sites on nginx via different ports (they each have a load balancer that points to the appropriate port). The first site works perfectly. The second site...

        If I access http://localhost:81/ it works correctly
        If I access http://127.0.0.1:81/ it works correctly
        If I access the hostname http://THEHOSTNAME:81/ it fails with a 400 error
        If I access the public IP http://x.x.x.x:81/ it fails with a 400 error

    I've set the error_log to info, but the only lines I get in the log when this happens are:

        ==> /var/log/nginx/access.log <==
        10.183.38.141 - - [24/Aug/2014:21:03:28 +0000] "GET / HTTP/1.1" 400 37 "-" "curl/7.36.0" "-"

        ==> /var/log/nginx/error.log <==
        2014/08/24 21:03:28 [info] 7029#0: *5 client 10.183.38.141 closed keepalive connection

    In my uwsgi log, I only see this:

        [pid: 6870|app: 0|req: 87/92] 10.28.23.224 () {32 vars in 380 bytes} [Sun Aug 24 21:05:21 2014] GET / => generated 26 bytes in 1 msecs (HTTP/1.1 400) 2 headers in 82 bytes (1 switches on core 2)

    What should be my next step in debugging this?
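    One reading of those logs (a guess, not from the article): the uwsgi line shows the application itself generating the 400, so nginx is passing the request through and the app is rejecting the unfamiliar Host header. If the application happens to be Django (an assumption; the question doesn't say), the usual suspect is ALLOWED_HOSTS, sketched here with placeholder entries taken from the question:

        # settings.py -- hypothetical Django settings for the second site
        ALLOWED_HOSTS = [
            "localhost",
            "127.0.0.1",
            "THEHOSTNAME",     # the server's hostname, as used in the failing requests
            "x.x.x.x",         # the public IP, if it should be reachable directly
        ]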

    Read the article

  • memcached append() php ubuntu - bad protocol

    - by awongh
    I am running Ubuntu Gutsy (7.10), PHP 5, and I am trying to get memcached running locally. I installed everything as per the docs: memcached daemon, PHP PECL extension, libevent, etc. But now I can only run half of the example script for memcached append():

        <?php
        $m = new Memcached();
        $m->addServer('localhost', 11211);
        $m->setOption(Memcached::OPT_COMPRESSION, false);
        $m->set('foo', 'abc');
        $m->append('foo', 'def');
        var_dump($m->get('foo'));
        ?>

    The script terminates at append() with an RES_BAD_PROTOCOL error message. It still runs the get(). I don't know why memcached would otherwise be working fine (connect, set, get - with the correct value of 'abc') and not work for append. It also doesn't work with prepend. I believe I have the setup correct, but I am not sure. Maybe there are compatibility problems between the versions of the dependencies? Thanks much.
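    A quick thing to rule out (a hedged suggestion, not from the article): the append and prepend commands were only added to the memcached server in version 1.2.4, so an older server will answer them with a protocol error even though set/get work, and Gutsy-era packages may well predate that. A small check from the same client:

        <?php
        $m = new Memcached();
        $m->addServer('localhost', 11211);

        // Memcached::getVersion() returns an array of host => server version.
        var_dump($m->getVersion());

        // Anything older than 1.2.4 cannot service append()/prepend().
        ?>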

    Read the article

  • Bad response from freeSSHd server.

    - by Kirill
    I'm using an SSH client called Granados to connect to servers. When I use CopSSH as the SSH server, everything works fine, but when I use freeSSHd as the SSH server I get a strange response from the server that contains something like this: "[4;41H [4;49H [4;42H [4;49H [4;43H [4;49H [4;44H [4;49H [4;45H [4;49H [4;46H [4;49H [4;47H [4;49H [4;48H [4;49H [4;1HC:\Users\Administrator\Desktopcat /proc/meminfot [4;52H [4;50H [4;1HC" Does anybody know what these symbols mean? Thanks.

    Read the article

  • I get a 502 bad gateway ONLY with a specific combination of domain/root folders - NGINX

    - by Patrick De Amorim
    I have a VPS running NGINX and virtual hosts, with a configuration such as this.

    Domains directing to it:

        lolpics.no
        smscloud.no
        idmag.no

    Root folders:

        /home/vds/www/lolpics
        /home/vds/www/smscloud
        /home/vds/www/idmag

    SMSCloud.no is the site that keeps getting 502 errors, but if I make the domain direct to any of the other folders, the site works, or if I make any other domain name direct to the /home/vds/www/smscloud folder, it works. Only smscloud.no with /home/vds/www/smscloud breaks. I tried putting this between the http{} in my nginx.conf and it didn't help:

        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;

    EDIT: Well, that was slightly silly; if anyone from Google stumbles on this, here's how I fixed it. I just added this to the http{}:

        fastcgi_buffer_size 16k;
        fastcgi_buffers 16 16k;

    So that the start of my http block is:

        http {
            include /etc/nginx/mime.types;
            proxy_buffer_size 128k;
            proxy_buffers 4 256k;
            proxy_busy_buffers_size 256k;
            fastcgi_buffer_size 16k;
            fastcgi_buffers 16 16k;

    Read the article

  • Bad Performance when SQL Server hits 99% Memory Usage

    - by user15863
    I've got a server that reports 8 GB of RAM used up at 99%. When I restart SQL Server, it drops down to about 5% usage, but gradually builds back up to 99% over about 2 hours. When I look at the sqlserver process, it's reported as only using 100k of RAM, and generally never goes above or below that number by very much. In fact, if I add up all the processes in my Task Manager, it's barely scratching the surface of my total available (yet Task Manager still shows 99% memory usage with "All processes shown"). It appears that SQL Server has a huge memory leak going on but it's not reporting it. The server has run fine for nearly two years, with this only starting to manifest itself in the last 3-4 weeks. Anyone seen this or have any insight into the problem?

    EDIT: When the server hits 99%, performance goes downhill. All queries to the server, apps, etc. come to a crawl. Restarting the service makes things zippy again, until 2 hours have passed and the server hits 99% once again.
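    Two things that may be worth checking (a hedged aside, not from the article): SQL Server's buffer pool is designed to keep growing until it fills whatever memory it is allowed, and when "lock pages in memory"/AWE is in use that memory can fail to show up against sqlservr.exe in Task Manager, which would match the tiny reading here. Capping the instance leaves headroom for the OS; the 6144 MB figure below is only an example for an 8 GB box.

        -- Inspect the current memory settings
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)';

        -- Example cap: leave roughly 2 GB for the OS on an 8 GB server (adjust to taste)
        EXEC sp_configure 'max server memory (MB)', 6144;
        RECONFIGURE;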

    Read the article

  • monitoring a /21 for potential bad guys with snort and port mirroring

    - by Adeodatus
    Hi all, I want/need to start monitoring our network a bit better. It's an odd network in that it comprises two /22 public IP blocks and a slew of private admin IPs. I do have one point in the network where it all comes together, and I can turn on port mirroring on the Catalyst. From that port, I'd like to turn up a box running various utilities. Snort is high on my list, but it'd be nice to also get some networking statistics with something like NetFlow. So, what are people's thoughts? I can turn up a box for this with a bit of ease; we have the hardware available. What should I run? I'd love to know what kind of nasty things are potentially going on, but I'd also like to see statistics on what people are doing on the network, so I can tweak our systems to handle it better and improve performance. I'm open, so please give me some ideas to go along with what I've got.

    Read the article

  • Restarting nginx with Capistrano results in 502 Bad Gateway

    - by blee
    Here's what cap deploy does:

        sudo -p 'sudo password: ' -u root /var/rails_apps/fooapp/current/script/process/reaper

    reaper simply contains:

        /etc/init.d/nginx restart

    When I run the same command from the shell, I do not get a 502 -- everything is fine. The nginx error.log is empty. Any thoughts on how to troubleshoot? Thanks in advance for your thoughts.

    Read the article

  • Bad I/O scheduler?

    - by user62367
    OS: up-to-date Fedora 14, working as a normal desktop. It's doing very well, but if I start VirtualBox and, for example, install a guest on it, it just "freezes". I mean, if there is disk activity on a VirtualBox guest, then the computer becomes unresponsive - even the mouse is lagging - for about 50 minutes. What could be the bottleneck? What could be the problem? If anyone has any tips/howtos to speed it up, please help! It has a normal 2.5" HDD at 5400 RPM. Would it be worth buying a 2.5" HDD with 7200 RPM? T7200 CPU, 4 GB RAM, "vm.swappiness = 0". Thank you!
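    Since the title points at the I/O scheduler, here is how the elevator can be inspected and switched at runtime (a sketch; sda is assumed to be the disk holding the VirtualBox images, and whether deadline actually beats CFQ on this workload is something to test, not a given):

        # Show the available schedulers; the bracketed one is active
        cat /sys/block/sda/queue/scheduler

        # Try the deadline elevator for this boot only (as root)
        echo deadline > /sys/block/sda/queue/scheduler

        # To make it permanent, add "elevator=deadline" to the kernel
        # command line in the GRUB configuration.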

    Read the article

  • New CAT5 cable run is unstable - bad jacks? Bad cable?

    - by BeemerGuy
    This is a little project I'm doing at home. I wanted to wire two rooms together (basically, the router is in one room, and the switch is in the second room). So I ran a CAT5 cable between the two rooms and wired an RJ45 jack in each room. I then hooked up the two jacks with two CAT5 cables to run it through the cable tester, and all 8 wires seem good. Now, when I connect the switch and the router, the connection is unstable -- I ping the router and it barely holds on for two pings before it disconnects, and it stays in that unstable state. Just to make sure the router and the switch are OK, I connected them with a long cable between the two rooms and the connection is absolutely stable, and it pings continuously. What could be the cause of the unstable connection? Especially since it pings a few times, so there IS a connection. But why is it unstable? And how come the cable tester says it's OK, but it's unstable?

    Read the article

  • nginx 502 Bad Gateway on every external site

    - by Leandros
    I just installed nginx and followed the guides on the official site to set it up with php5-fpm, but it just won't work. Not even the default site, without PHP, is working outside of my server. I tried:

        listen = 127.0.0.1:7777

    and

        listen = /var/run/php5-fpm.sock

    Both don't work. I can access http://localhost with lynx on my server, but not from somewhere else (with the external IP, obviously). Yes, the php5-fpm daemons are running; yes, the ports (80 and 7777) are open. It doesn't work with php-cgi either. My config:

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            # server_tokens off;
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            ##
            # Gzip Settings
            ##
            gzip on;
            gzip_disable "msie6";
            # gzip_vary on;
            # gzip_proxied any;
            # gzip_comp_level 6;
            # gzip_buffers 16 8k;
            # gzip_http_version 1.1;
            # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            ##
            # nginx-naxsi config
            ##
            # Uncomment it if you installed nginx-naxsi
            ##
            #include /etc/nginx/naxsi_core.rules;

            ##
            # nginx-passenger config
            ##
            # Uncomment it if you installed nginx-passenger
            ##
            #passenger_root /usr;
            #passenger_ruby /usr/bin/ruby;

            ##
            # Virtual Host Configs
            ##
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;

            proxy_buffers 16 16k;
            proxy_buffer_size 32k;
            fastcgi_buffers 16 16k;
            fastcgi_buffer_size 32k;
            fastcgi_connect_timeout 300;
            fastcgi_send_timeout 300;
            fastcgi_read_timeout 300;
        }

    Server config (symlinked to sites-enabled):

        server {
            server_name skilloverflow.de *.skilloverflow.de;
            root /var/www/blog.skilloverflow.de/htdocs;
            index index.php;

            error_log /var/log/nginx/skilloverflow.error.log;
            access_log /var/log/nginx/skilloverflow.access.log;

            location = /favicon.ico {
                log_not_found off;
                access_log off;
            }

            location = /robots.txt {
                allow all;
                log_not_found off;
                access_log off;
            }

            location / {
                # This is cool because no php is touched for static content.
                # include the "?$args" part so non-default permalinks doesn't break when using query string
                try_files $uri $uri/ /index.php?$args;
            }

            location ~ [^/]\.php(/|$) {
                fastcgi_split_path_info ^(.+?\.php)(/.*)$;
                if (!-f $document_root$fastcgi_script_name) {
                    return 404;
                }
                fastcgi_pass 127.0.0.1:7777;
                fastcgi_index index.php;
                include fastcgi_params;
            }

            location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
                expires max;
                log_not_found off;
            }

            # deny access to apache .htaccess files
            location ~ /\.ht {
                deny all;
            }

            # deny access to apache .htaccess files
            location ~ /\.ht {
                deny all;
            }
        }

    PHP version: 5.4.17-1
    nginx version: 1.2.1
    Debian 6.0.7, Linux 2.6.32

    Edit: Lighttpd is still installed; does that matter? It's not running, though.
    Edit 2: No error or access log is generated. They're all empty.
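    Before touching the PHP side at all, a first step worth taking (a hedged suggestion, not from the article) is to confirm what nginx and php5-fpm are actually bound to and whether a packet filter is in the way; the external address below is obviously a placeholder:

        # Which addresses/ports are nginx and php5-fpm listening on? 0.0.0.0:80 means all interfaces.
        netstat -tlnp | egrep 'nginx|php'

        # Is a packet filter dropping inbound port 80?
        iptables -L -n -v

        # From another machine: does nginx answer at all, and with what status?
        curl -v http://EXTERNAL.IP.OF.SERVER/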

    Read the article

  • Bad results converting PDF to EPS on Linux

    - by Tim
    I'm having some trouble converting PDFs (created by Adobe Illustrator on a Mac) to EPS. I have tried several things, but I am wondering if there is a better option. The following list is ordered by decreasing quality:

        inkscape --export-area-page --export-eps=out.eps in.pdf   # the graphical program Inkscape works best, but is a bit slow
        pdftops -eps in.pdf out.eps                               # uses Poppler; works well and is fast
        pdf2ps in.pdf out.eps                                     # uses Ghostscript; works OK for simple documents
        convert in.pdf out.eps                                    # uses ImageMagick; always rasterizes the image

    I haven't tested the following:

        acroread -toPostScript                                    # uses acroread (Linux only)

    Some issues I've found:

        Transparency is not supported in EPS, but instead of flattening the layers, most programs rasterize the image, producing big files and ugly graphs. Inkscape does this best by only rasterizing the unsupported area.
        Gradients are rendered properly by Inkscape, but Poppler somehow chops up the gradient into many shapes of different colors.
        Greek symbols are seemingly not supported by Ghostscript and are rasterized (using pdf2ps).

    What are your experiences with this kind of task? Did I forget certain programs and/or command-line options that improve quality? I found some posts on this, but not a (thorough) comparison of possibilities; please correct me if I'm wrong.

    Related posts: How to convert PDF to EPS? on TeX

    Read the article

  • bash completion processing gone bad, how to debug?

    - by msw
    It all started with a simple alias:

        alias gv='gvim --remote-quiet'

    and now gv Space Tab gives nothing, where it normally should give filenames. Oddly, alias gvi='gvim --remote-quiet' works as expected. I clearly have a workaround, but I'd like to know what is catching my gv for special processing. compopt is no help, as gv shares the same settings as ls, which does filename completion correctly:

        $ compopt gv
        compopt +o bashdefault +o default +o dirnames -o filenames +o nospace +o plusdirs gv
        $ compopt ls
        compopt +o bashdefault +o default +o dirnames -o filenames +o nospace +o plusdirs ls

    The complete command is slightly more helpful, but it doesn't tell me why my two characters got singled out for alteration:

        $ complete -p gv
        complete -o filenames -F _filedir_xspec gv
        $ complete -p ls
        complete -o filenames -F _longopt ls
        $ complete -p echo
        bash: complete: echo: no completion specification
        $ alias gvi='gvim --remote-silent'
        msw@tallguy:~/.gnupg$ complete -p gvi
        bash: complete: gvi: no completion specification

    Where did complete -o filenames -F _filedir_xspec gv come from?
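    A possible explanation worth checking (an aside, not from the article): gv is also the name of the Ghostview PostScript viewer, and the bash-completion package ships an extension-restricted completion for that command via _filedir_xspec, which an alias of the same name then inherits. If that turns out to be the cause, a sketch of a workaround:

        # See the full spec bash-completion attached to "gv"
        complete -p gv

        # Drop it for this shell; bash then falls back to ordinary filename completion
        complete -r gv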

    Read the article

  • Mustek 1200 CP driver SFC4.SYS bluescreens with BAD_POOL_HEADER

    - by Slink84
    I have Windows XP SP2. Recently it started bluescreening right after starting up, with a 'BAD_POOL_HEADER' 0x00000019 error caused by the SFC4.SYS driver. After googling for a while I found out that this is my Mustek 1200 CP scanner's driver. Booting in safe mode and uninstalling it solved the problem... and created another one: now I can't use my scanner. The weird thing is that it had been working for a while on this PC without any problems. It all started suddenly, and I can't remember installing anything that might have affected it. Reverting to several earlier system restore points didn't help. I've tried re-installing it from the Mustek website, just in case my copy got corrupted or infected by a virus, but it did not help - it still bluescreens. Also, I've installed Avast and scanned my PC - there were no viruses found. If anyone has had such a problem before or has an idea what might have caused it, please help.

    EDIT: @Michael Todd: "...try installing on another PC..." I've installed it on my friend's PC. He has the same OS version, with the latest updates, just like mine (he wasn't too happy, even after I assured him that it is easy to fix by uninstalling that driver :] ). It worked fine - no bluescreens or anything. So I think I've narrowed it down to either BIOS settings or some wicked driver conflict. The next thing I'm going to try is to re-install XP, or install Windows 7. I'm not too happy with the prospect of mucking about with BIOS settings...

    Read the article

  • Has the hardware in my modem gone bad?

    - by Tyler Scott
    I contacted CenturyLink about my modem recently and received useless and unrelated information. The problem seems to be that the modem will no longer save settings, the web interface is unusable except in Internet Explorer for some reason, and the modem keeps resetting. CenturyLink claimed it had to do with signal strength, but I checked and it is currently between good and outstanding according to this. All of the lights remain green even when it starts acting up and I lose internet, and shortly before it crashes and reboots. Does anyone have any idea what is going on or what I can do to fix it? (Asking CenturyLink again is obviously not going to help.)

    Update 1: Accessing the syslog from the web interface causes a crash. After it reboots, the log looks as follows:

        01/01/1970 12:01:29 AM Ethernet Ethernet client connected ,ip(192.168.0.2), mac(1c:6f:65:4c:6d:3b)
        01/01/1970 12:01:38 AM Wireless 802.11 client connected ,ip(192.168.0.18), mac(d0:df:c7:c2:73:ca)
        01/01/1970 12:01:41 AM System Event Line 0: VDSL2 link up, Bearer 0, us=20128, ds=40127
        01/01/1970 12:01:43 AM dhcp6s[2028] dhcp6_ctl_authinit: failed to open /etc/dhcp6sctlkey: No such file or directory
        01/01/1970 12:01:50 AM dhcp6s[2469] dhcp6_ctl_authinit: failed to open /etc/dhcp6sctlkey: No such file or directory
        01/01/1970 12:01:52 AM radvd[2306] poll error: Interrupted system call
        01/01/1970 12:01:56 AM PPP Link PPP server detected.
        01/01/1970 12:01:56 AM PPP Link PPP session established.
        01/01/1970 12:01:56 AM PPP Link PPP LCP UP.
        01/01/1970 12:01:56 AM System Event Received valid IP address from server. Connection UP.
        06/05/2014 08:16:01 AM radvd[2511] poll error: Interrupted system call
        06/05/2014 08:16:03 AM System Event Dead loop on virtual device tun6rd, fix it urgently!
        06/05/2014 08:16:03 AM System Event Dead loop on virtual device tun6rd, fix it urgently!
        06/05/2014 08:16:03 AM System Event Dead loop on virtual device tun6rd, fix it urgently!
        06/05/2014 08:16:03 AM System Event Dead loop on virtual device tun6rd, fix it urgently!
        06/05/2014 08:16:03 AM System Event Dead loop on virtual device tun6rd, fix it urgently!
        06/05/2014 08:16:03 AM System Event Dead loop on virtual device tun6rd, fix it urgently!
        06/05/2014 08:16:03 AM System Event Dead loop on virtual device tun6rd, fix it urgently!
        06/05/2014 08:16:03 AM System Event Dead loop on virtual device tun6rd, fix it urgently!
        06/05/2014 08:16:03 AM System Event Dead loop on virtual device tun6rd, fix it urgently!
        06/05/2014 08:16:04 AM System Event Dead loop on virtual device tun6rd, fix it urgently!
        06/05/2014 08:16:04 AM dhcp6s[3236] dhcp6_ctl_authinit: failed to open /etc/dhcp6sctlkey: No such file or directory
        06/05/2014 08:16:08 AM Wireless 802.11 client connected ,ip(192.168.0.7), mac(44:6d:57:c4:d7:08)

    I also get it to crash on various other pages. I am guessing the web server is unstable.

    Read the article

  • Bad resolution from Displayport converter on third of three screens

    - by Carl
    I am currently using three screens at the same time with an ATI 5770 and an active DisplayPort converter. The thing is that the third screen (the one using the active DisplayPort converter) is showing terrible resolution compared to my other two screens. The third screen is a Samsung SyncMaster P23. Two of my screens have a max resolution of 1920x1080, while the third one is only capable of 1600x1200. Do any of you know a solution to this problem?

    Read the article

  • public key infrastructure - distribute bad root certificates

    - by iamrohitbanga
    Suppose a hacker launches a new Linux distro with Firefox provided with it. A browser contains the certificates of the root certification authorities of the PKI, and because Firefox is a free browser, anyone can package it with fake root certificates. Can this be used to falsely authenticate some websites? How? Many existing Linux distros are mirrored by people, who could easily package software containing certificates that lead to such attacks. Is the above possible? Has such an attack taken place before?

    Read the article

  • Bad network performance on KVM guest

    - by Devator
    I have a dedicated server connected to a 1000 Mbit port. However, the Debian guest is only getting half to a quarter of the speed.

    On the node itself (Linux node 2.6.32-279.9.1.el6.x86_64 #1 SMP Tue Sep 25 21:43:11 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux):

        wget http://www.bbned.nl/scripts/speedtest/download/file1000mb.bin -O /dev/null
        --2012-11-11 23:10:11--  http://www.bbned.nl/scripts/speedtest/download/file1000mb.bin
        Resolving www.bbned.nl... 62.177.144.181
        Connecting to www.bbned.nl|62.177.144.181|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 1048576000 (1000M) [application/octet-stream]
        Saving to: '/dev/null'

        100%[====================================>] 1,048,576,000 100M/s in 10s

        2012-11-11 23:10:21 (100 MB/s) - '/dev/null'

    On the guest (Debian 6.0.5, x64: Linux debian 2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 x86_64 GNU/Linux):

        wget http://www.bbned.nl/scripts/speedtest/download/file1000mb.bin -O /dev/null
        --2012-11-11 23:10:41--  http://www.bbned.nl/scripts/speedtest/download/file1000mb.bin
        Resolving www.bbned.nl... 62.177.144.181
        Connecting to www.bbned.nl|62.177.144.181|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 1048576000 (1000M) [application/octet-stream]
        Saving to: '/dev/null'

        100%[=================================================================================================================================================================================================>] 1,048,576,000 16.5M/s in 42s

        2012-11-11 23:11:23 (23.8 MB/s) - '/dev/null'

    I use the virtio NIC. I tried some more NICs: e1000 and the Realtek 8139, but those yield even worse results. Does anyone have an idea how to improve these speeds?
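    Two things commonly checked for virtio throughput on this kind of setup (a hedged sketch, not from the article): that the host's vhost-net module is loaded (lsmod | grep vhost_net on the node) so packets skip the userspace QEMU path, and that the guest's libvirt interface definition actually requests it. The bridge name br0 is an assumption.

        <!-- Guest interface definition (virsh edit <guest>), sketch only -->
        <interface type='bridge'>
          <source bridge='br0'/>
          <model type='virtio'/>
          <driver name='vhost'/>
        </interface>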

    Read the article

  • Is opening ports in the firewall bad?

    - by Steven
    From what little I know about networking, opening ports lets external data get sent in. But how that data is handled is entirely up to the applications running on my machine. So if I'm not running any malicious applications, there should be nothing wrong with disabling the firewall, right? Also, how do applications work when ports aren't forwarded? For example, I need to forward TCP port 6112 to host Blizzard games, and I've heard that HTTP uses port 80, yet I haven't forwarded that port and Firefox still works. Btw, I'm using Windows Vista.

    Read the article
