Search Results

Search found 6055 results on 243 pages for 'ryan max'.

Page 151/243 | < Previous Page | 147 148 149 150 151 152 153 154 155 156 157 158  | Next Page >

  • new pc..noisy fan

    - by BRQ
    It's a new build, but it has always had noisy fans; from the moment it starts they will not stop running. The case is a Cooler Master which I believe comes with a fan that is not controlled by the BIOS (according to a technician), so that may be the source of the problem, but my lack of knowledge on the matter prevents me from making a reasonable assessment. Here are the readings from CoreTemp:

      Model:     Intel Core i7 870 (Lynnfield)
      Platform:  LGA 1156 (Socket H)
      Frequency: 1658.23 MHz (132.66 x 12.5)
      Tj. Max:   99 C
      Core #0: low = 34 C; high = 42 C; load = 0%
      Core #1: low = 31 C; high = 42 C; load = 0%
      Core #3: low = 35 C; high = 42 C; load = 0%

    Any input will be appreciated.

    Read the article

  • Service haproxy error

    - by user128296
    I want to configure HAProxy for outgoing mail load balancing. My configuration file /etc/haproxy.cfg is:

      global
          maxconn 4096   # Total max connections. This is dependent on ulimit
          daemon
          nbproc 4       # Number of processing cores. Dual dual-core Opteron is 4 cores, for example.

      defaults
          mode tcp

      listen smtp_proxy 199.83.95.71:25
          mode tcp
          option tcplog
          balance roundrobin   # Load balancing algorithm
          ## Define your servers to balance
          server r23.lbsmtp.org 74.117.x.x:25 weight 1 maxconn 512 check
          server r15.lbsmtp.org 199.71.x.x:25 weight 1 maxconn 512 check

    And when I start the haproxy service I get this error:

      Starting HAproxy: [ALERT] 244/172148 (7354) : cannot bind socket for proxy smtp_proxy. Aborting.

    Please tell me where I am making a mistake. Help will be appreciated.
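
    A quick way to narrow down a "cannot bind socket" error is to try binding the same address and port directly and look at the errno: EADDRINUSE means another process (often a local MTA) already listens there, while EADDRNOTAVAIL means the IP is not configured on this host. A minimal sketch in Python, not from the original post; the address and port are copied from the question, and binding to a port below 1024 requires root:

      import socket

      addr, port = "199.83.95.71", 25   # listen address taken from the question

      s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
      try:
          s.bind((addr, port))          # fails the same way haproxy does
          print("bind succeeded - the address/port is free on this host")
      except OSError as e:
          # e.errno distinguishes "already in use" from "IP not local"
          print("bind failed:", e)
      finally:
          s.close()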

    Read the article

  • Roguelike FOV problem

    - by Manderin87
    I am working on a college compsci project and I would like some help with a field of view algorithm. It works mostly, but in some situations the algorithm sees through walls and highlights walls the player should not be able to see.

      void cMap::los(int x0, int y0, int radius) { // Does line of sight from any particular tile
          for (int x = 0; x < m_Height; x++) {
              for (int y = 0; y < m_Width; y++) {
                  getTile(x,y)->setVisible(false);
              }
          }
          double xdif = 0;
          double ydif = 0;
          bool visible = false;
          float dist = 0;
          for (int x = MAX(x0 - radius, 0); x < MIN(x0 + radius, m_Height); x++) {      // Loops through x values within view radius
              for (int y = MAX(y0 - radius, 0); y < MIN(y0 + radius, m_Width); y++) {   // Loops through y values within view radius
                  xdif = pow((double) x - x0, 2);
                  ydif = pow((double) y - y0, 2);
                  dist = (float) sqrt(xdif + ydif);    // Gets the distance between the two points
                  if (dist <= radius) {                // If the tile is within view distance,
                      visible = line(x0, y0, x, y);    // check if it can be seen.
                      if (visible) {                   // If it can be seen,
                          getTile(x,y)->setVisible(true);   // mark that tile as viewable
                      }
                  }
              }
          }
      }

      bool cMap::line(int x0, int y0, int x1, int y1) {
          bool steep = abs(y1-y0) > abs(x1-x0);
          if (steep) {
              swap(x0, y0);
              swap(x1, y1);
          }
          if (x0 > x1) {
              swap(x0, x1);
              swap(y0, y1);
          }
          int deltax = x1-x0;
          int deltay = abs(y1-y0);
          int error = deltax/2;
          int ystep;
          int y = y0;
          if (y0 < y1) ystep = 1; else ystep = -1;
          for (int x = x0; x < x1; x++) {
              if (steep && getTile(y,x)->isBlocked()) {
                  getTile(y,x)->setVisible(true);
                  getTile(y,x)->setDiscovered(true);
                  return false;
              } else if (!steep && getTile(x,y)->isBlocked()) {
                  getTile(x,y)->setVisible(true);
                  getTile(x,y)->setDiscovered(true);
                  return false;
              }
              error -= deltay;
              if (error < 0) {
                  y = y + ystep;
                  error = error + deltax;
              }
          }
          return true;
      }

    If anyone could help me make the first blocked tile visible while stopping the rest, I would appreciate it. Thanks, Manderin87
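
    For reference, the usual shape of this kind of raycast is to walk the line outward from the viewer, reveal every tile as it is visited, and stop as soon as a blocking tile has been revealed, so the wall itself is lit but nothing behind it. A minimal sketch of that idea in Python; the grid and its visible/blocked attributes are illustrative, not the poster's actual API:

      def line_of_sight(grid, x0, y0, x1, y1):
          """Walk a Bresenham line from the viewer (x0, y0) toward (x1, y1).

          Every visited tile is marked visible; the walk stops at the first
          blocking tile, so walls are revealed but nothing behind them is.
          Returns True if (x1, y1) was reached unobstructed.
          """
          dx, dy = abs(x1 - x0), abs(y1 - y0)
          sx = 1 if x0 < x1 else -1
          sy = 1 if y0 < y1 else -1
          err = dx - dy
          x, y = x0, y0
          while True:
              grid[x][y].visible = True                     # reveal the tile we reached
              if grid[x][y].blocked and (x, y) != (x0, y0):
                  return False                              # first wall is lit; stop here
              if (x, y) == (x1, y1):
                  return True                               # target reached with a clear line
              e2 = 2 * err
              if e2 > -dy:
                  err -= dy
                  x += sx
              if e2 < dx:
                  err += dx
                  y += sy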

    Read the article

  • Is SATA bandwidth per Port or per Controller?

    - by instanceofTom
    I always assumed that it was per controller channel, and that if I have 4x SATA 3.0 Gb/s ports on my motherboard then I should have a potential 12.0 Gb/s of bandwidth. However, after doing some searching I found conflicting information suggesting that if I had 4x SATA drives connected to my MB and were using them simultaneously, each drive would get only 3.0 Gb/s / 4 = 768 Mb/s max bandwidth. So I wanted to clear up my understanding. Side question: are there other HDD/SSD bandwidth bottlenecks to be aware of? (Links to already answered questions are more than welcome.)
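
    Just to make the arithmetic behind the two readings explicit, here is the difference as a small Python sketch; it simply restates the question's own numbers (using its binary-prefix convention of 3.0 Gb/s = 3072 Mb/s) and is not a statement about how any particular chipset actually behaves:

      link_rate_gbps = 3.0   # SATA II link rate per port, as in the question
      ports = 4

      # Reading 1: each port has its own full-rate link to the controller
      per_drive_independent_gbps = link_rate_gbps             # 3.0 Gb/s per drive

      # Reading 2: one 3.0 Gb/s budget shared across all four ports
      per_drive_shared_mbps = link_rate_gbps * 1024 / ports    # 3072 / 4 = 768 Mb/s

      print(per_drive_independent_gbps, "Gb/s vs", per_drive_shared_mbps, "Mb/s")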

    Read the article

  • Is there a way to make scp run faster on a Mac OS X?

    - by paul_sns
    I'm trying to upload a Flex-generated SWF file from my MacBook (running Snow Leopard) using the command scp main.swf server.com:/ and I have set up key authentication to avoid typing the user/pass every time. This process normally takes up to two minutes using my connection at home (768 kbps down / 300+ kbps up). The interesting part is that when I use WinSCP on my Windows XP machine, the process only takes 30 seconds max. Both my MacBook and the Windows XP machine use the same internet connection. The MacBook is connected to the router via cable (which should be faster, right?) while the Windows XP machine connects through Wifi. Let me know if you need additional information in order to diagnose the problem. Thanks!

    Read the article

  • [Grails] How to list children of a specified Parent in a treeview column (table)

    - by Rehman
    I am a newbie in Grails and tried to implement a treeview using the RichUI plugin, which shows all parents with their individual children in Parent.list.gsp.

    XML for the parents and their children:

      <parents name='Parents'>
        <Parent id='1' name='Parent_1'>
          <Children name='Children'>
            <child name='Child_2' id='2' />
            <child name='Child_4' id='4' />
            <child name='Child_1' id='3' />
            <child name='Child_3' id='1' />
          </Children>
        </Parent>
        <Parent id='2' name='Parent_2'>
          <Children name='Children'>
            <child name='Child_1' id='8' />
            <child name='Child_2' id='7' />
            <child name='Child_4' id='6' />
            <child name='Child_3' id='5' />
          </Children>
        </Parent>
      </parents>

    Parent domain class:

      class Parent {
          String name
          static hasMany = [children:Child]
      }

    Child domain class:

      class Child {
          String name
          Parent parent
          static belongsTo = [parent:Parent]
      }

    Parent controller:

      def list = {
          def writer = new StringWriter()
          def xml = new MarkupBuilder(writer)
          xml.parents(name: "Parents") {
              Parent.list().each {
                  Parent parentt = it
                  Parent(id: parentt.id, name: parentt.name) {
                      Children(name: 'Children') {
                          parentt.children.each {
                              Child childd = it
                              child(name: childd.name, id: childd.id)
                          }
                      }
                  }
              }
          }
          if (!params.max) params.max = 10
          ["data": writer.toString(), parentInstanceList: Parent.list(params), parentInstanceTotal: Parent.count()]
      }

    Parent.list.gsp:

      <head>
        <resource:treeView/>
        ...
      </head>
      <body>
        <table>
          <thead>
            <tr>
              <g:sortableColumn property="id" title="${message(code: 'parent.id.label', default: 'Id')}" />
              <g:sortableColumn property="name" title="${message(code: 'parent.name.label', default: 'Name')}" />
              <g:sortableColumn property="relationship" title="${message(code: 'parent.relationhsip.label', default: 'Relationship')}" />
            </tr>
          </thead>
          <tbody>
            <g:each in="${parentInstanceList}" status="i" var="parentInstance">
              <tr class="${(i % 2) == 0 ? 'odd' : 'even'}">
                <td><g:link action="show" id="${parentInstance.id}">${fieldValue(bean: parentInstance, field: "id")}</g:link></td>
                <td>${fieldValue(bean: parentInstance, field: "name")}</td>
                <td><richui:treeView xml="${data}" /></td>
              </tr>
            </g:each>
          </tbody>
        </table>
      </body>

    Problem: currently, in the list view, every Parent entry shows the list of all parents and their children under the Relationship column (Parent list view snapshot: link text).

    Question: how can I list only each parent's own children, instead of listing all parents with their children in every Parent entry? Thanks in advance, Rehman

    Read the article

  • Mount encrypted hfs in ubuntu

    - by pagid
    I am trying to mount an encrypted HFS+ partition in Ubuntu. An older post described quite well how to do it, but lacks the information on how to use encrypted partitions. What I have found so far is:

      # install required packages
      sudo apt-get install hfsprogs hfsutils hfsplus loop-aes-utils

      # try to mount it
      mount -t hfsplus -o encryption=aes-256 /dev/xyz /mount/xyz

    But once I run this I get the following error:

      Error: Password must be at least 20 characters.

    So I tried to type it in twice, but that results in this:

      ioctl: LOOP_SET_STATUS: Invalid argument, requested cipher or key (256 bits) not supported by kernel

    Any suggestions? Thx

    Edit: One thing I'm not sure about is whether I am using the right password. My assumption is that it is my default one for these situations, but I'm not sure whether Mac OS X chose another password (internally) for that.

    Read the article

  • YSlow says certain CSS files are not gzipped

    - by rhand
    YSlow keeps telling me that files like http://www.example.com/wp-content/plugins/q-and-a/css/q-a-plus.css?ver=1.0.6.2 are not gzipped, while the gzip test tool at Feed the Bot says I am all good:

      Compressed?              Yes
      Compression type         gzip
      Page size (Bytes)        32,493
      Compressed size (Bytes)  -1
      Saving (Bytes)           32,494
      Compression %            100%

    I added this to my .htaccess:

      # Gzip
      <ifModule mod_gzip.c>
        mod_gzip_on Yes
        mod_gzip_dechunk Yes
        mod_gzip_item_include file .(html?|txt|css|js|php|pl)$
        mod_gzip_item_include handler ^cgi-script$
        mod_gzip_item_include mime ^text/.*
        mod_gzip_item_include mime ^application/x-javascript.*
        mod_gzip_item_exclude mime ^image/.*
        mod_gzip_item_exclude rspheader ^Content-Encoding:.*gzip.*
      </ifModule>

      # Deflate
      <ifmodule mod_deflate.c>
        AddOutputFilterByType DEFLATE text/text text/html text/plain text/xml text/css application/x-javascript application/javascript
      </ifmodule>

    The headers for the file mentioned state:

      CF-Cache-Status    MISS
      CF-RAY             13945df90a9a0c1d-AMS
      Cache-Control      public, max-age=2592000
      Connection         keep-alive
      Content-Encoding   gzip
      Content-Type       application/javascript
      Date               Thu, 12 Jun 2014 07:34:38 GMT
      Expires            Sat, 12 Jul 2014 07:34:38 GMT
      Last-Modified      Thu, 21 Feb 2013 01:29:18 GMT
      Server             cloudflare-nginx
      Transfer-Encoding  chunked
      Vary               Accept-Encoding

    Any ideas what I am missing here?
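
    When two test tools disagree like this, it can help to fetch the URL yourself, once advertising gzip support and once not, since the response above varies on Accept-Encoding. A small sketch (not from the original post) using the Python requests library, with the URL taken from the question:

      import requests

      url = "http://www.example.com/wp-content/plugins/q-and-a/css/q-a-plus.css?ver=1.0.6.2"

      for accept in ("gzip, deflate", "identity"):
          r = requests.get(url, headers={"Accept-Encoding": accept})
          print("Accept-Encoding:", accept)
          print("  Content-Encoding:", r.headers.get("Content-Encoding", "(none)"))
          print("  Content-Type:    ", r.headers.get("Content-Type"))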

    Read the article

  • Virtual PC 2007 Full Screen Resolution

    - by Swami
    I have a laptop (1440 x 900) and an external monitor (1920 x 1200), and I'm running Virtual PC 2007. Initially, my VPC could not switch to fullscreen mode on my external monitor because the max resolution for VPC was only 1600 x 1200. I installed a hotfix (http://support.microsoft.com/kb/958162) and after the hotfix I was able to view VPC in fullscreen mode on my external monitor, but now I'm unable to view fullscreen on my laptop. It says "please check that the resolution of the guest is not higher than that of the host". I even tried to decrease the resolution of my Virtual PC to less than that of my laptop, but it always resets itself back up. So now, after the hotfix, I can view fullscreen mode only when my external monitor is plugged in. Is there any way I can resolve this?

    Read the article

  • gzip compression using varnish cache

    - by Ali Raza
    I'm trying to provide gzip compression using Varnish cache, but when I set Content-Encoding to gzip using the configuration below (default.vcl), the browser fails to download the content for which I set Content-Encoding to gzip.

    Varnish configuration file:

      backend default {
          .host = "127.0.0.1";
          .port = "9000";
      }

      backend socketIO {
          .host = "127.0.0.1";
          .port = "8083";
      }

      acl purge {
          "127.0.0.1";
          "192.168.15.0"/24;
      }

      sub vcl_fetch {
          /* If the request is for pictures, javascript, css, etc */
          if (req.url ~ "^/public/" || req.url ~ "\.js") {
              unset req.http.cookie;
              set beresp.http.Content-Encoding = "gzip";
              set beresp.ttl = 86400s;
              set beresp.http.Cache-Control = "public, max-age=3600";
              /* set the expires time in the response header */
              set beresp.http.expires = beresp.ttl;
              /* marker for vcl_deliver to reset Age: */
              set beresp.http.magicmarker = "1";
          }
          if (!beresp.cacheable) {
              return (pass);
          }
          return (deliver);
      }

      sub vcl_deliver {
          if (resp.http.magicmarker) {
              /* Remove the magic marker */
              unset resp.http.magicmarker;
              /* By definition we have a fresh object */
              set resp.http.age = "0";
          }
          if (obj.hits > 0) {
              set resp.http.X-Varnish-Cache = "HIT";
          } else {
              set resp.http.X-Varnish-Cache = "MISS";
          }
          return (deliver);
      }

      sub vcl_recv {
          if (req.http.x-forwarded-for) {
              set req.http.X-Forwarded-For = req.http.X-Forwarded-For ", " client.ip;
          } else {
              set req.http.X-Forwarded-For = client.ip;
          }
          if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" &&
              req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" &&
              req.request != "DELETE") {
              /* Non-RFC2616 or CONNECT which is weird. */
              return (pipe);
          }
          # Pass requests that are not GET or HEAD
          if (req.request != "GET" && req.request != "HEAD") {
              return (pass);
          }
          # pipe websocket connections directly to Node.js
          if (req.http.Upgrade ~ "(?i)websocket") {
              set req.backend = socketIO;
              return (pipe);
          }
          # Properly handle different encoding types
          if (req.http.Accept-Encoding) {
              if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|js|css)$") {
                  # No point in compressing these
                  remove req.http.Accept-Encoding;
              } elsif (req.http.Accept-Encoding ~ "gzip") {
                  set req.http.Accept-Encoding = "gzip";
              } elsif (req.http.Accept-Encoding ~ "deflate") {
                  set req.http.Accept-Encoding = "deflate";
              } else {
                  # unknown algorithm
                  remove req.http.Accept-Encoding;
              }
          }
          # allow PURGE from localhost and 192.168.15...
          if (req.request == "PURGE") {
              if (!client.ip ~ purge) {
                  error 405 "Not allowed.";
              }
              return (lookup);
          }
          return (lookup);
      }

      sub vcl_hit {
          if (req.request == "PURGE") {
              purge_url(req.url);
              error 200 "Purged.";
          }
      }

      sub vcl_miss {
          if (req.request == "PURGE") {
              purge_url(req.url);
              error 200 "Purged.";
          }
      }

      sub vcl_pipe {
          if (req.http.upgrade) {
              set bereq.http.upgrade = req.http.upgrade;
          }
      }

    Response headers:

      Cache-Control: public, max-age=3600
      Connection: keep-alive
      Content-Encoding: gzip
      Content-Length: 11520
      Content-Type: application/javascript
      Date: Fri, 06 Apr 2012 04:53:41 GMT
      ETag: "1330493670000--987570445"
      Last-Modified: Wed, 29 Feb 2012 05:34:30 GMT
      Server: Play! Framework;1.2.x-localbuild;dev
      Via: 1.1 varnish
      X-Varnish: 118464579 118464571
      X-Varnish-Cache: HIT
      age: 0
      expires: 86400.000

    Any suggestion on how to fix this and how to provide gzip compression using Varnish?

    Read the article

  • Site-to-site VPN

    - by ronadona
    We are a small business company based in Sydney that has opened a new office in London. The Sydney office has 25 employees and the London office has 6, so the traffic isn't that high; the files to be transferred are Excel sheets of 15 MB max. Both locations have MS Server 2008 and Fortigate gateways. I set up a site-to-site VPN but it's extremely slow. Maybe this is because our upload speed is only 1 Mbps. We will increase the upload speed to 20 Mbps in both locations, but I am afraid this will not solve the problem, as the two locations are far from each other. What's the best way to go? Should we find a provider for the VPN, or is there another technology that can be used over the internet without paying extra costs? Many thanks!

    Read the article

  • Multiple IP Addresses on a Traceroute Line

    - by Paul
    I'm doing a traceroute from my box to, say, stackoverflow.com, and I see a couple of instances where there are multiple IPs on one line. For instance, in the output below, line #2 has two IPs: 10.1.6.5 and 10.1.4.5. Also, on line #4 there are two timestamps after 216.182.236.96: 0.653 ms and 0.637 ms. What are these? This is on Linux. Traceroute example:

      traceroute to www.stackoverflow.com (198.252.206.16), 30 hops max, 60 byte packets
       2  ip-10-1-6-5.us-west-1.compute.internal (10.1.6.5)  0.329 ms  0.425 ms  ip-10-1-4-5.us-west-1.compute.internal (10.1.4.5)  0.471 ms
       4  216.182.236.104 (216.182.236.104)  0.554 ms  216.182.236.96 (216.182.236.96)  0.653 ms  0.637 ms
       5  205.251.230.64 (205.251.230.64)  0.616 ms  205.251.229.232 (205.251.229.232)  1.305 ms  205.251.230.64 (205.251.230.64)  0.573 ms

    Read the article

  • Which Windows OS Supports 8 GB RAM in a Laptop and Suggestions for a Better Laptop for Personal & De

    - by Ellen
    I am about to purchase a laptop and have zeroed in on the following two:

      Toshiba L500-ST2544
      Toshiba L505-ES5034

    The common specifications for both of them are as follows:

      RAM:             4 GB DDR3 memory
      HDD:             320 GB
      Processor:       Intel® Core™ i3-330M
      Webcam and mic:  available
      HDMI port:       available
      Numeric keypad:  available
      OS:              Windows 7 (64-bit) Home Premium

    Now, the only difference between the ST2544 and the ES5034 is that the ST2544 has a maximum of 2 slots with 2 GB in each, so you can have a max of 4 GB RAM in it. The ES5034 can support 8 GB RAM, so in a couple of years, if I want to add another 4 GB of RAM, I will be able to do it. The price for the ST2544 is USD 629.00, whereas the price for the ES5034 is USD 685.00, a difference of about USD 55.00 (not a major amount, but still something extra). Is it worthwhile going for the ES5034? Which Windows operating system supports 8 GB of RAM?

    Read the article

  • Computer turns off and on after start, then goes dead

    - by Shiki
    I built a new PC from the following components:

      CPU: Intel Core i7 950
      MB:  Gigabyte X58A-UD3R
      RAM: 2 x 2 GB Corsair (i7) memory
      VGA: Zotac AMP2 GTX260
      HDD: 1 GreenSATA HDD (Western Digital 500 GB RE2)

    When I turn it on, it runs for a few seconds with the fans at maximum speed, then turns off. Then it starts again by itself and just sits there with the fans at max speed; nothing happens. At first I suspected my PSU, a Chieftec 450AA. After I borrowed a Chieftec 550AA PSU, I tried to start with that: exact same story. Any idea? Do I need a bigger PSU? The reason I haven't localized the fault further is that I have never seen this turn-on, off, on behaviour before; if you can explain that, it would already help people like me with the same problem.

    Read the article

  • backgroundworker+wpf -> frozen window

    - by Valetudox
    - The progress bar always shows 0%
    - The window is frozen while DoWork runs
    - If System.Threading.Thread.Sleep(1) is enabled, it works perfectly

    What's the problem?

      private void btnNext_Click(object sender, RoutedEventArgs e)
      {
          this._worker = new BackgroundWorker();
          this._worker.DoWork += delegate(object s, DoWorkEventArgs args)
          {
              long current = 1;
              long max = generalMaxSzam();
              for (int i = 1; i <= 30; i++) {
                  for (int j = i+1; j <= 30; j++) {
                      for (int c = j+1; c <= 30; c++) {
                          for (int h = c+1; h <= 30; h++) {
                              for (int d = h+1; d <= 30; d++) {
                                  int percent = Convert.ToInt32(((decimal)current / (decimal)max) * 100);
                                  this._worker.ReportProgress(percent);
                                  current++;
                                  //System.Threading.Thread.Sleep(1); - it works well
                              }
                          }
                      }
                  }
              }
          };
          this._worker.WorkerReportsProgress = true;
          this._worker.RunWorkerCompleted += delegate(object s, RunWorkerCompletedEventArgs args)
          {
              this.Close();
          };
          this._worker.ProgressChanged += delegate(object s, ProgressChangedEventArgs args)
          {
              this.statusPG.Value = args.ProgressPercentage;
          };
          this._worker.RunWorkerAsync();
      }

      <Window x:Class="SzerencsejatekProgram.Create"
              xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
              xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
              Title="Létrehozás" mc:Ignorable="d"
              xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
              xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
              Height="500" Width="700">
          <DockPanel>
              <Button DockPanel.Dock="Right" Name="btnNext" Width="80" Click="btnNext_Click">Tovább</Button>
              <StatusBar DockPanel.Dock="Bottom">
                  <StatusBar.ItemsPanel>
                      <ItemsPanelTemplate>
                          <Grid>
                              <Grid.ColumnDefinitions>
                                  <ColumnDefinition Width="*"/>
                                  <ColumnDefinition Width="Auto"/>
                                  <ColumnDefinition Width="auto"/>
                                  <ColumnDefinition Width="auto"/>
                              </Grid.ColumnDefinitions>
                          </Grid>
                      </ItemsPanelTemplate>
                  </StatusBar.ItemsPanel>
                  <StatusBarItem Grid.Column="1">
                      <TextBlock Name="statusText"></TextBlock>
                  </StatusBarItem>
                  <StatusBarItem Grid.Column="2">
                      <ProgressBar Name="statusPG" Width="80" Height="18" IsEnabled="False" />
                  </StatusBarItem>
                  <StatusBarItem Grid.Column="3">
                      <Button Name="statusB" IsCancel="True" IsEnabled="False">Cancel</Button>
                  </StatusBarItem>
              </StatusBar>
          </DockPanel>
      </Window>
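
    Worth noting when reading the loop above: the five nested loops run 142,506 times (30 choose 5) and report progress on every single iteration, so the UI thread is flooded with progress messages faster than it can process them. The usual pattern, in any framework, is to report only when the displayed value would actually change. A small language-agnostic sketch of that throttling idea in Python; the report callback is a placeholder standing in for whatever marshals the value to the UI thread:

      def run_work(total_iterations, report_progress):
          """Do the work, but only push a progress update when the integer
          percentage changes - at most 100 updates instead of ~142,506."""
          last_percent = -1
          for current in range(1, total_iterations + 1):
              percent = current * 100 // total_iterations
              if percent != last_percent:          # throttle redundant updates
                  report_progress(percent)
                  last_percent = percent

      # usage sketch
      run_work(142_506, lambda p: print(f"\r{p}%", end=""))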

    Read the article

  • Using a time to set XY chart axis scaling like in 2003

    - by CookieOfFortune
    In Excel 2003, when you created an XY chart using time as an axis, you could set the scaling of those axes by typing in the date. In Excel 2007, you have to use the decimal version of the time (i.e. how many days since some arbitrary earlier date). I was wondering if there is a way to avoid having to make that calculation. A developer posted on a blog that this issue would be fixed in a future release, but none of the versions of Excel 2007 I have tried resolve it. The relevant quote: "Those of you familiar with this technique of converting time to a decimal may recall that Excel 2003 allowed you to enter a date and time like “1/1/07 11:00 AM” directly in the axis option min/max fields and Excel would calculate the appropriate decimal representation. This currently does not work in Excel 2007 but will be fixed in a subsequent release."
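
    For anyone needing the workaround in the meantime: the decimal the 2007 axis fields expect is Excel's date serial number, i.e. whole days since the 1900-system epoch plus the time of day as a fraction. A small sketch of that conversion in Python (the 1899-12-30 epoch compensates for Excel's phantom 29 Feb 1900 and is valid for dates after 28 Feb 1900):

      from datetime import datetime

      def excel_serial(dt: datetime) -> float:
          """Convert a datetime to an Excel 1900-date-system serial number."""
          epoch = datetime(1899, 12, 30)
          delta = dt - epoch
          return delta.days + delta.seconds / 86400.0

      # the example from the quote: "1/1/07 11:00 AM"
      print(excel_serial(datetime(2007, 1, 1, 11, 0)))   # 39083.45833...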

    Read the article

  • NTFS write speed really slow (<15MB/s)

    - by Zulakis
    I got a new Seagate 4 TB hard drive and formatted it with NTFS using:

      parted /dev/sda
      > mklabel gpt
      > mkpart pri 1 -1
      mkfs.ntfs /dev/sda1

    When copying files or testing write speed with dd, the maximum write speed I can get is about 12 MB/s. The hard drive should be capable of at least 100 MB/s. top shows high CPU usage for the mount.ntfs process. The system has an AMD dual core. This is the output of parted /dev/sda unit s print:

      Model: ATA ST4000DM000-1F21 (scsi)
      Disk /dev/sda: 7814037168s
      Sector size (logical/physical): 512B/4096B
      Partition Table: gpt

      Number  Start  End          Size         File system  Name  Flags
       1      2048s  7814035455s  7814033408s               pri

    The kernel in use is 3.5.0-23-generic. The ntfs-3g versions I tried are ntfs-3g 2012.1.15AR.1 (the Ubuntu 12.04 default) and the newest version, ntfs-3g 2013.1.13AR.2. When formatted with ext4 I get good write speeds of about 140 MB/s. How can I fix the write speed?
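
    As a cross-check that is independent of dd's options, the sequential write rate can also be measured by timing a large buffered write followed by an fsync. A rough sketch in Python; the mount point /mnt/ntfs is a hypothetical path standing in for wherever the NTFS volume is mounted:

      import os, time

      path = "/mnt/ntfs/speedtest.bin"    # hypothetical mount point of the NTFS volume
      block = b"\0" * (4 * 1024 * 1024)   # 4 MiB blocks
      total_mib = 1024                    # write 1 GiB in total

      start = time.monotonic()
      with open(path, "wb") as f:
          for _ in range(total_mib * 1024 * 1024 // len(block)):
              f.write(block)
          f.flush()
          os.fsync(f.fileno())            # make sure the data actually reaches the disk
      elapsed = time.monotonic() - start

      print(f"{total_mib / elapsed:.1f} MiB/s")
      os.remove(path)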

    Read the article

  • How can I add leading zeros between delimiters in Excel 2010

    - by Gregory Biernacki
    I am trying to convert a list of property ID numbers that has a standard format of 0000-A-00000-00-00, where my worksheet has various combinations such as:

      1-A-123
      10-B-1234

    Ideally they would read as follows:

      0001-A-00123-0000-00
      0010-B-01234-0000-00

    I've tried using custom number formatting, but it doesn't like the letter in the middle of the number. I don't know if my only option is to break them apart and then put them back together again. I would accept a solution that merely puts the leading zeros at the front of the number (the max is 4 characters), so the result could look like:

      0001-A-123
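
    If breaking the values apart does turn out to be the only route, the transformation itself is just padding each delimited segment to the width of the matching segment in the mask. A sketch of that logic in Python; the widths are read off the stated standard format, and whether missing trailing segments should be appended as zeros (as the desired output suggests) is an assumption:

      MASK = "0000-A-00000-00-00"
      WIDTHS = [len(part) for part in MASK.split("-")]    # [4, 1, 5, 2, 2]

      def normalize(pid: str) -> str:
          parts = pid.split("-")
          out = []
          for i, width in enumerate(WIDTHS):
              seg = parts[i] if i < len(parts) else ""    # assume missing segments become zeros
              out.append(seg if seg.isalpha() else seg.rjust(width, "0"))
          return "-".join(out)

      print(normalize("1-A-123"))     # 0001-A-00123-00-00
      print(normalize("10-B-1234"))   # 0010-B-01234-00-00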

    Read the article

  • iis php internal server error

    - by user1633206
    I developed a website using PHP/MySQL, running on an IIS server via the CGI server API. Suddenly, after two weeks, it started giving me error 500. The site has a lot of scripts; the index.php home page is working, but the other scripts, which use header redirection, give the error. What's wrong with my scripts? The Live HTTP Headers addon for Firefox shows:

      GET /allplans.php?lang=ar&cat=1 HTTP/1.1
      Host: www.myhost.com... [edited]
      User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:14.0) Gecko/20100101 Firefox/14.0.1
      Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
      Accept-Language: en-us,en;q=0.5
      Accept-Encoding: gzip, deflate
      Connection: keep-alive
      Cache-Control: max-age=0

      HTTP/1.1 500 Internal Server Error
      Content-Type: text/html
      Server: Microsoft-IIS/7.5
      X-Powered-By: ASP.NET
      Date: Mon, 03 Sep 2012 08:09:13 GMT
      Content-Length: 1208
      Connection: Keep-Alive

    Read the article

  • Limit number of simultaneous connections squid makes to a single server

    - by Ben Voigt
    Note: I am asking about outbound concurrent connection limits, not inbound, which is sufficiently covered on existing questions Modern browsers typically open a large number of simultaneous connections, to take advantage of the fact that TCP fairly shares bandwidth between connections. Of course, this doesn't result in fair sharing between users, so some servers have started penalizing hosts which open too many connections. This limit can be configured client-side (e.g. IE MaxConnectionsPerServer, Firefox network.http.max-connections-per-server), but the method differs for each browser and version, and many users aren't competent to adjust it themselves. So we turn to a squid transparent HTTP proxy for central management of HTTP download. How can the number of simultaneous connections from squid to a remote webserver be limited, so the webserver doesn't perceive it as abuse of concurrent connections? Ideally the limit would be per source address. Squid should accept virtually unlimited concurrent requests from the client browser, and issue them sequentially to the remote server, only N at a time, delaying (but not dropping) the others.

    Read the article

  • Feeding the kernel's entropy source from other machines and/or increasing its maximum size

    - by David Spillett
    We have had a little trouble with a small box that acts as a VPN endpoint and mail relay for our network, caused by the available entropy for /dev/random being too low (which causes TLS connection attempts by exim to fail). The machine doesn't do anything else, so the normal feed into the entropy pool (interrupt timings from things like disk access) is not enough. As a quick hack I've set up a looping script that reads from /dev/hda at a couple of Mbyte/sec, which keeps it topped up. Other than buying a hardware RNG, is there a clean way of piping in entropy from elsewhere, such as a copy of the data our file server uses for its entropy source? I've spotted several tips for using rng-tools to feed it from /dev/urandom on the same machine, but that "feels dirty". Also, is it possible to increase the maximum pool size? It currently seems to max out at 3585.

    Read the article

  • Which method of SQL Server 2005 or 2008 Replication is best for ease of field changes?

    - by Rick
    We need 15-minute warm updates from one SQL Server to another. Log shipping looks good and appears easy to set up. We are also looking into transactional replication. The data only needs to be copied one way. We have two main requirements: 1) The destination database needs to be at most a 15-minute-old copy of the source, and it needs to retry and catch up if a network cable is unplugged for a while. 2) We would really like table changes in the source (fields added or modified) to be as easy as possible. Thanks in advance for all suggestions.

    Read the article

  • how to force 1440x900 resolution on acer al1702w lcd monitor

    - by ashishsony
    I have an Acer AL1702w LCD monitor in my office. At some point I restarted the system using the reset button because the OS had stopped responding; since then the 1440x900 resolution option no longer appears in the display settings. The Intel graphics is an 82945G, and as always the hardware guys at the office are of no help: they start blurting out theories that this is the best resolution, even though I show them that the max resolution for the monitor is indeed 1440x900, and it's not that the 82945G can't handle it, as it was working before. I installed the latest drivers for this graphics chip but I still can't find that option. How can I force the 1440x900 resolution? Oh yes, the OS is XP Pro SP2. Thanks.

    Read the article

  • Caching DNS server (bind9.2) CPU usage is so so so high

    - by Gk.
    I have a caching-only DNS server which gets ~3k queries per second. Here are the specs:

      Xeon dual-core 2.8 GHz
      4 GB of RAM
      CentOS 5.x (kernel 2.6.18-164.15.1.el5PAE)
      bind 9.4.2

    rndc status:

      recursive clients: 666/4900/5000

    There are about 300 new queries (not in cache) per second. With the single-threaded config, bind always uses 100% of one core. After I recompiled it as multi-threaded, it uses nearly 200% across two cores :( There is no iowait, only sys and user. I searched around but didn't see any info about how bind uses CPU. Why does it become the bottleneck? One more thing, here is the RAM usage:

      cat /proc/meminfo
      MemTotal:    4147876 kB
      MemFree:     1863972 kB
      Buffers:      143632 kB
      Cached:       372792 kB
      SwapCached:        0 kB
      Active:      1916804 kB
      Inactive:     276056 kB

    I've set max-cache-size to 0 to make sure bind can use as much RAM as it wants, but it always stops at ~2 GB. Since we get non-cached queries every second, theoretically the RAM should eventually be exhausted, but it isn't. Do you have any idea? TIA, -Gk

    Read the article

  • How do I analyze an Apache Bench result?

    - by Alan Hoffmeister
    I need some help with analyzing a log from Apache Bench:

      Benchmarking texteli.com (be patient)
      Completed 100 requests
      Completed 200 requests
      Completed 300 requests
      Completed 400 requests
      Completed 500 requests
      Completed 600 requests
      Completed 700 requests
      Completed 800 requests
      Completed 900 requests
      Completed 1000 requests
      Finished 1000 requests

      Server Software:
      Server Hostname:        texteli.com
      Server Port:            80

      Document Path:          /4f84b59c557eb79321000dfa
      Document Length:        13400 bytes

      Concurrency Level:      200
      Time taken for tests:   37.030 seconds
      Complete requests:      1000
      Failed requests:        0
      Write errors:           0
      Total transferred:      13524000 bytes
      HTML transferred:       13400000 bytes
      Requests per second:    27.01 [#/sec] (mean)
      Time per request:       7406.024 [ms] (mean)
      Time per request:       37.030 [ms] (mean, across all concurrent requests)
      Transfer rate:          356.66 [Kbytes/sec] received

      Connection Times (ms)
                    min  mean[+/-sd] median   max
      Connect:       27   37  19.5     34     319
      Processing:    80 6273 1673.7   6907    8987
      Waiting:       47 3436 2085.2   3345    8856
      Total:        115 6310 1675.8   6940    9022

      Percentage of the requests served within a certain time (ms)
        50%   6940
        66%   6968
        75%   6988
        80%   7007
        90%   7025
        95%   7078
        98%   8410
        99%   8876
       100%   9022 (longest request)

    What can these results tell me? Isn't 27 rps too slow?
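
    The headline numbers in that report are all derived from the same few totals, which makes them easier to interpret. A quick sketch of the relationships in Python, using the values from the output above:

      requests = 1000
      concurrency = 200
      time_taken = 37.030          # seconds
      total_bytes = 13_524_000

      rps = requests / time_taken                                      # 27.01 requests/second
      per_request_concurrent_ms = time_taken / requests * 1000         # 37.03 ms
      per_request_mean_ms = per_request_concurrent_ms * concurrency    # ~7406 ms
      transfer_rate_kbs = total_bytes / 1024 / time_taken              # ~356.66 KB/s

      print(f"{rps:.2f} req/s, {per_request_mean_ms:.0f} ms mean per request, "
            f"{transfer_rate_kbs:.2f} KB/s received")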

    Read the article
