Search Results

Search found 66916 results on 2677 pages for 'real time strategy'.


  • SSD cache to minimize HDD spin-up time?

    - by sirprize
    Short version first: I'm looking for Linux-compatible software which is able to transparently cache HDD writes using an SSD. However, I only want to spin up the HDD once or twice a day (to write the cached data to the HDD). The rest of the time, the HDD should not be spinning, due to noise concerns.

    Now the longer version: I have built a completely silent computer running Xubuntu. It has an A10-6700T APU, a huge fanless cooler, a fanless PSU and an SSD. The problem is: it also has (and needs) a noisy HDD, and I want to forbid spinning it up during the night. All writes should be cached on the SSD; reads are not needed in the night. Throughout every day, this computer will automatically download about 5 GB of data which will be retained for about a year, giving a total needed disk capacity of slightly less than 2 TB. This data is currently stored on a 3 TB noisy hard disk drive which is spinning day and night. Sometimes I'll need to access some data from several months ago, but most of the time I'll only need data from the last 14 days, which would fit on the SSD.

    Ideally, I'd like a transparent solution (all data on one filesystem) which caches all writes to the SSD, writing to the HDD only once a day. Reads would be served by the cache if they were still on the SSD; otherwise the HDD would have to spin up. I have tried bcache without much success (using cache_mode=writeback, writeback_running=0, writeback_delay=86400, sequential_cutoff=0, congested_write_threshold_us=0 - anything missing?) and I have read about ZFS ZIL/L2ARC, but I'm not sure I can achieve my goal with ZFS. Any pointers? If all else fails, I will simply use some scripts to automatically copy files over to the big drive while deleting the oldest files from the SSD.
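    If it comes to the script-based fallback mentioned above, a minimal sketch could look like the following (the mount points /mnt/ssd-cache and /mnt/hdd-archive are placeholders, and the 14-day window matches the figure given in the question); run once a day from cron so the HDD only spins up for the copy:

      #!/bin/sh
      # Hypothetical daily flush: mirror new files from the SSD staging area to
      # the HDD archive, then trim files older than 14 days from the SSD.
      SSD=/mnt/ssd-cache
      HDD=/mnt/hdd-archive

      # rsync spins the HDD up, copies only what is new, and preserves attributes.
      if rsync -a "$SSD"/ "$HDD"/; then
          # Only prune the SSD after a successful copy.
          find "$SSD" -type f -mtime +14 -delete
      fi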

    Read the article

  • first time setting up ssl, running into a strange problem, tutorials haven't been too helpful

    - by pedalpete
    This is my first time trying to set up SSL for a site, and I'm running it on a server that already hosts 3 other sites. I'm running apache2.?? and the install came with an ssl.conf file. The ssl.conf has the following settings:

      LoadModule ssl_module modules/mod_ssl.so
      Listen 443
      AddType application/x-x509-ca-cert .crt
      AddType application/x-pkcs7-crl .crl
      <VirtualHost *:443>
          ServerAdmin [email protected]
          DocumentRoot /var/www/html/securesite
          ServerName securesite.com
          ErrorLog logs/securesite-error_log
          CustomLog logs/securesite-access_log common
          SSLEngine on
          SSLCertificateFile /etc/httpd/ssl.crt/securesite.com.crt
          SSLCertificateKeyFile /etc/httpd/ssl.key/server.key
          SSLCertificateChainFile /etc/httpd/ssl.crt/gd_bundle.crt
      </VirtualHost>

    When I run 'apachectl configtest' I don't get any errors, but running 'apachectl -k restart' gives 'httpd not running, trying to start'. I have two questions:
    1) Is there an error in the way I'm defining my virtual host for 443? The rest of my entries point to <VirtualHost *:80>. When I comment out the above entry, Apache runs fine.
    2) Do I need to set up a redirect from port 80 for the secure site? Most users are going to go to http:// or www., and I need to send them to https:// - does Apache do this automatically, or do I need to create an entry with a redirect?
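    As a hedged first step for question 1 (paths assume a stock CentOS/Red Hat Apache layout), it may help to see whether the restart fails because port 443 is already taken or because of a certificate/key problem logged at startup:

      # Is anything already listening on 443 before the restart?
      netstat -lnt | grep ':443'

      # The reason behind "httpd not running, trying to start" usually lands here.
      tail -n 50 /etc/httpd/logs/error_log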

    Read the article

  • Very long (>300s) request processing time on Apache Server serving static content from particular IP

    - by Ron Bieber
    We are running an Apache 2.2 server for a very large web site. Over the past few months we have had some users reporting slow response times, while others (including our own people, both on the internal network and our home networks) do not see any degradation in performance. After a ton of investigation we finally found a "Deny from none" statement in our configuration that was causing reverse DNS lookups (which were timing out); removing it solved the bulk of our issues. However, we still have some customers that we see in the Apache logs (using %D in the log format) with request processing times of 300s for images, CSS, JavaScript and other static content. We've checked all Deny / Allow statements for a recurrence of "none", as well as all the other things we know of that would cause reverse DNS lookups (such as using "REMOTE_HOST" in rewrite rules, or using %a instead of %h in our log format configuration), and verified that HostnameLookups is set to "Off". As an aside, we've also validated that reverse DNS lookups for the folks having this problem do not time out - so I'm fairly certain DNS is not an issue in this case. I've run out of ideas. Are there any Apache configuration scenarios that someone can point me to that I might be missing, that would cause request times for static content to take so long only for certain users? Thank you in advance.
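    A small, hedged log-mining sketch that sometimes helps in this situation - it assumes %D was appended as the last field of the log line (adjust the field numbers to the actual LogFormat): pull out the slowest static requests and see whether the >300s entries cluster on particular client IPs or paths:

      # Duration in microseconds, client IP and URL for the 20 slowest requests.
      awk '{print $NF, $1, $7}' access_log | sort -rn | head -20

      # How many of the >=300s entries come from each client IP.
      awk '$NF >= 300000000 {print $1}' access_log | sort | uniq -c | sort -rn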

    Read the article

  • PHP include() through HTTP makes Apache time out

    - by Adam Interact
    I have a problem with ExpressionEngine 2 after moving from an old server to WHM/cPanel running on CentOS 6.4. Simple test code to reproduce the issue:

      <?php
      $protocol = strpos(strtolower($_SERVER['SERVER_PROTOCOL']),'https') === FALSE ? 'http' : 'https';
      $host = $_SERVER['HTTP_HOST'];
      include($protocol . '://' . $host . '/header.html');
      ?>
      <p> Main text...</p>
      <?php
      include($protocol . '://' . $host . '/footer.html');
      ?>

    Where header.html looks like:

      <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
      <html xmlns="http://www.w3.org/1999/xhtml">
      <head>
      <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
      <title>Untitled Document</title>
      </head>
      <body>

    and footer.html looks like:

      </body>
      </html>

    This makes Apache time out:

      Warning: include(http://www.domain.com/header.html) [function.include]: failed to open stream: Connection timed out in /home/domain/public_html/test/index.php on line 5
      Warning: include() [function.include]: Failed opening 'http://www.domain.com/header.html' for inclusion (include_path='.:/usr/lib/php:/usr/local/lib/php') in /home/domain/public_html/test/index.php on line 5
      Main text...
      Warning: include(http://www.domain.com/footer.html) [function.include]: failed to open stream: Connection timed out in /home/domain/public_html/test/index.php on line 12
      Warning: include() [function.include]: Failed opening 'http://www.domain.com/footer.html' for inclusion (include_path='.:/usr/lib/php:/usr/local/lib/php') in /home/domain/public_html/test/index.php on line 12

    Any clue what could be wrong with the Apache or PHP configuration? Thanks

    Read the article

  • winbind failing after a semi-random amount of time

    - by The Digital Ninja
    I have winbind set up to authenticate to our AD for Samba shares. This is the third such server, and the only one having any issues. It seems that after a random amount of time Samba shares will just stop working. The winbind processes seem to be running, but restarting them fixes the issue for a while. Looking at the logs has been kind of hit and miss, and I don't know exactly when it fails. One interesting thing is that it seems to be pulling from another domain controller that it shouldn't. I censored out the domain name in this example. Isn't there some way to block authentication to a domain? I'm not sure if this is a symptom or a cause of the issue.

      [2010/10/18 08:02:10, 0] winbindd/winbindd_cache.c:initialize_winbindd_cache(2577)
        initialize_winbindd_cache: clearing cache and re-creating with version number 1
      [2010/10/18 09:15:54, 1] libsmb/clikrb5.c:ads_krb5_mk_req(686)
        ads_krb5_mk_req: krb5_get_credentials failed for [email protected] (Cannot find KDC for requested realm)
      [2010/10/18 09:15:54, 1] libsmb/cliconnect.c:cli_session_setup_kerberos(624)
        cli_session_setup_kerberos: spnego_gen_negTokenTarg failed: Cannot find KDC for requested realm
      [2010/10/18 09:15:54, 0] lib/util_sock.c:write_data(1139)
        write_data: write failure. Error = Connection reset by peer
      [2010/10/18 09:15:54, 0] libsmb/clientgen.c:write_socket(242)
        write_socket: Error writing 108 bytes to socket 18: ERRNO = Connection reset by peer
      [2010/10/18 09:15:54, 0] libsmb/clientgen.c:cli_send_smb(290)
        Error writing 108 bytes to client. -1 (Connection reset by peer)

    Read the article

  • Browser says "Waiting for www.xyz.com" for a very long time

    - by Phil
    When I load my website (hosted with iPage), the browser often takes an incredibly long time saying "Waiting for www.xyz.com ..." before any elements of the site actually appear. After this "Waiting for" phase, the text, images and everything else actually load quite fast. I contacted my host with my tracert result and they said they optimized my website database and increased the memory available to PHP on my account to 64 MB. They also said they have checked the issue by accessing my website and found that it is loading fine without any slowness, that it seems to be a temporary issue, and that I should try to access my website with a different browser and network. I tried different browsers and networks, but this "Waiting for" phase always takes too long. My website is http://www.surreyextra.com/ . It's WordPress and BuddyPress. I'm in the UK while the iPage host is in the USA - can this potentially be a problem? I have tried a number of optimizations, like minifying my CSS and JS files and using caching, but the problem hasn't improved. So is it my host's fault, and should I contact them again?
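    One hedged way to quantify that "Waiting for" phase is to measure time-to-first-byte with curl, from the UK connection and, if possible, from a US-based machine, and compare:

      # Break one request into DNS, connect, first-byte and total time (in seconds).
      curl -o /dev/null -s -w 'dns:%{time_namelookup} connect:%{time_connect} ttfb:%{time_starttransfer} total:%{time_total}\n' http://www.surreyextra.com/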

    Read the article

  • Time Machine vs Source Control?

    - by Blub
    I finally got convinced to start using some kind of version control for my code instead of zipping up a copy of the project at the end of each day. I downloaded TortoiseSVN and used it to create a repository locally on my HDD. I've been using it for 2 days now, but I have to say that using it is actually more hassle than just copying the project manually in Explorer. Sure, you only store incremental changes, but with the cheap disks of today I can't really say that's an argument when you only have small projects. I haven't really found a quick way to browse the older versions of my files either. What I want is an infinite undo that is completely transparent while I code: if I save the file, I want a backup. I don't want to check out and check in, and don't even get me started on moving files. I haven't tried Time Machine for OS X, but it looks like it's exactly what I'm looking for. Does such a program exist for Windows? Preferably free and with some kind of tagging system, so I can tag a timestamp when the project is working etc. Maybe I should add that I mostly work alone on a single computer. Update: Some of you asked why I want backup. Since I work alone it's mostly to allow me to quickly hack up a solution without worrying that something will screw up.

    Read the article

  • Firefox takes a really long time to load some sites on Ubuntu

    - by Dave
    Hello guys, I have an issue here. Some sites - just a few - take a really long time to load in Firefox. One example is A List Apart (http://www.alistapart.com/), which takes more than 30 minutes (yes, minutes, not seconds). In Opera, or even through a telnet session, the problematic sites load without problem, as fast as expected. I am using Ubuntu 8.04, running Firefox 3.6.3 downloaded from the Mozilla site, with a 10M ADSL connection. I tried many tweaks I found while googling, like disabling IPv6 and changing the HTTP pipelining settings in Firefox's about:config. None worked. I also used Firebug to find which phase of the negotiation is the bottleneck; findings are in the screenshot. Well guys, any idea what the issue is? And how to solve it? I repeat, this only happens with Firefox (3.6.3 and prior versions), for a few sites only (even sites with many more requests, images, javascripts and stylesheets work fine), and the HTTP pipelining and IPv6 tweaks in about:config didn't work. Thanks

    Read the article

  • VPS goes slow at more than 20 users online at the same time

    - by hachiari
    I have a 512 MB VPS (burstable to 1 GB). Somehow, the site goes slow when there are about 10 users, and becomes impossible to load with 20 users online at the same time. I wonder what could be the problem. The bandwidth connection of the VPS is 1 Gbps. Here are some settings on my VPS:

      KeepAlive Off

      <IfModule prefork.c>
          StartServers          7
          MinSpareServers       7
          MaxSpareServers      10
          ServerLimit          64
          MaxClients           64
          MaxRequestsPerChild   0
      </IfModule>

    my.cnf settings - calculated Max Memory 300MB

    Output from UNIXBENCH (INDEX VALUES):

      TEST                                        BASELINE      RESULT    INDEX
      Dhrystone 2 using register variables        376783.7  13429727.4    356.4
      Double-Precision Whetstone                      83.1      1137.5    136.9
      Execl Throughput                               188.3      1637.4     87.0
      File Copy 1024 bufsize 2000 maxblocks         2672.0    148868.0    557.1
      File Copy 256 bufsize 500 maxblocks           1077.0     79430.0    737.5
      File Read 4096 bufsize 8000 maxblocks        15382.0   1410009.0    916.7
      Pipe Throughput                             111814.6   4419722.0    395.3
      Pipe-based Context Switching                 15448.6    561505.1    363.5
      Process Creation                               569.3     10272.7    180.4
      Shell Scripts (8 concurrent)                    44.8       514.3    114.8
      System Call Overhead                        114433.5   3537373.8    309.1
                                                                        =========
      FINAL SCORE                                                          295.0

    I am afraid that the VPS company limits the number of connections to the VPS... is that possible? The server is in Japan, but the site has global traffic (some of the traffic is from countries with low-speed connections). Could this be the problem? This is a serious problem :( my site just can't grow if this keeps happening... please tell me if you have any idea. Thank You, Bryant
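    One hedged thing worth ruling out: with MaxClients 64 and MySQL allowed up to ~300 MB, each prefork child would have to stay under roughly 3 MB of resident memory to fit in 512 MB, which is far below what a typical PHP-enabled Apache child uses - so the slowdown at 10-20 users may simply be the VPS starting to swap. Watching memory while the site is slow would confirm or rule that out:

      # Free memory, swap usage and the biggest memory consumers, every 5 seconds.
      watch -n 5 'free -m; echo; ps aux --sort=-rss | head -15'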

    Read the article

  • Ubuntu: Network connection seems to fail after some time

    - by chrischu
    I just bought a Shuttle XS-35 barebone mini-PC, put a 1 TB WD hard drive and 2 GB of RAM into it and installed Ubuntu onto it. The machine will act as a media server (streaming videos to my PS3) and as a webserver for some small private projects. Now I wanted to copy my videos from my Windows 7 machine to the Ubuntu machine and therefore created a Samba share on the Ubuntu machine. I tried copying the files with the standard Windows copy function and with SyncToy, but after some time (sometimes 5 copied files, sometimes 120 copied files) the Samba share just disappears. When that happens I can't reach the internet from the Ubuntu machine, although the network connection still seems to be fine (IP still there etc.). Between the machines sits a LinkSys router. When I try to ping my router from the Ubuntu machine (after the connection stops working), only a very small fraction of the packets actually get there (something around 20%). When I restart the Ubuntu machine everything seems to work normally again. I have no idea where the problem lies here. Does anybody have a clue? Thanks in advance!
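    A few hedged checks that might show whether the NIC or its driver is falling over rather than Samba itself (the interface name eth0 is an assumption - adjust to the actual one), run right after the share disappears:

      # Kernel messages often show NIC resets or driver errors around the failure.
      dmesg | tail -30

      # RX/TX error and drop counters for the interface.
      ifconfig eth0 | grep -iE 'errors|dropped'

      # Negotiated link speed/duplex towards the LinkSys router.
      ethtool eth0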

    Read the article

  • processes slow after some time of actively running

    - by Yervand Aghababyan
    I have several cron jobs running on an Ubuntu machine. Each one does some pretty heavy-load stuff. The cron jobs are parsing files, and the bigger the file the longer it takes them to parse it. The strange thing is that if I make the files too big (like 30 MB) the script kind of hangs. It starts processing them really enthusiastically, but after some time (something like 5-10 minutes) the CPU usage of the process drops a lot and it gets into some "zombie" state. If prior to this the process in htop was using 70-80% of the CPU, then after this drop occurs it slows down to something like 5-10%. The load average drops as well. The status of the processes sometimes changes to D in htop, which AFAIR stands for zombie. Today I noticed the same behavior in MySQL processes when executing heavy queries (a query took something like 4 hours to execute). The cron jobs are mostly PHP, and during their processing most of the CPU is eaten by the PHP process and not MySQL, so I think the issue is not with a specific language/program but with the way the processes are "managed". The only other place I've seen similar behavior was on my Amazon EC2 micro instance, when after some aggressive use of CPU the CPU quota took effect and everything slowed down dramatically. This is a dedicated machine running Ubuntu. What may be the cause?

    Read the article

  • Why does my ftp(e)s server fail about half of the time

    - by user1092608
    I am having a discussion at work regarding our FTP server running via vsftpd. Initially we opted to serve FTPES instead of SFTP because this seemed the most flexible and straightforward solution for our server to have secure file transmission. Since then, our FTP server seems to be a source of issues for our end users. Half of the time, users complain about FTP connections not working. I must say, I tested our FTP through different infrastructures (= in the field, at random times and random places) and indeed, sometimes behind some configurations (= no idea how they are configured, because of the 'field' testing) I receive errors. Some of them are:

      Error: Failed to retrieve directory listing (FileZilla)

    Furthermore, behind my basic home configuration, everything seems to run fine. I (think I) did all the basic configuration checks (passive mode? firewall for all ports? ...) and can't seem to find the source. Being a bunch of techies at our small office, yet knowing nothing about infrastructure, some have started suggesting that the FTPS protocol could be the source of the issues ("No, I only knew SFTP so far", "FTPS is not widespread"). I, however, strongly doubt this hypothesis, since from reading around on the web and asking questions on Server Fault, everyone seems to deny it. So, as I would like to avoid reconfiguring (this involves messing around in our SSH service, our virtual user setup and the FTP service), I would need some advice on:
    1) what could potentially be the general cause?
    2) do you have some general tips?
    3) would you mind having a look at my configuration file?

      ----- General Settings -----
      write_enable=YES
      dirmessage_enable=YES
      nopriv_user=ftpsecure
      ftpd_banner="Welcome to XXXX FTP!"
      hide_ids=YES
      hide_file=.*
      max_per_ip=10
      max_clients=10
      local_enable=YES
      local_umask=022
      chroot_local_user=YES
      secure_chroot_dir=/usr/share/empty
      userlist_enable=NO
      userlist_deny=YES
      userlist_file=/etc/vsftp_deny_users
      guest_enable=YES
      guest_username=ftpvirtual
      virtual_use_local_privs=YES
      user_sub_token=$USER
      local_root=/srv/ftp/ftpvirtual/$USER
      anonymous_enable=NO
      syslog_enable=NO
      xferlog_enable=YES
      xferlog_file=/var/log/vsftpd_xfer.log
      connect_from_port_20=YES
      pam_service_name=vsftpd
      listen=YES
      listen_port=21
      pasv_enable=YES
      pasv_min_port=30000
      pasv_max_port=30030
      pasv_address=foo
      ssl_enable=YES
      rsa_cert_file=/etc/vsftpd.pem
      rsa_private_key_file=/etc/vsftpd.pem
      force_local_data_ssl=YES
      force_local_logins_ssl=YES
      ssl_tlsv1=YES
      ssl_sslv2=YES
      ssl_sslv3=YES
      ssl_ciphers=HIGH
      anon_mkdir_write_enable=NO
      anon_root=/srv/ftp
      anon_upload_enable=NO
      idle_session_timeout=900
      log_ftp_protocol=NO
      dsa_cert_file=/etc/vsftpd.pem

    Thanks
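    As a hedged way to reproduce the failure from "the field" without a full client (this assumes a curl build with FTP and SSL support; user and host are placeholders): force explicit FTPS in passive mode and watch the verbose output - if the 30000-30030 passive port range is blocked between the remote network and the server, the directory listing is exactly the step that fails:

      # Verbose explicit FTPS (AUTH TLS) directory listing over passive mode.
      curl -v --ssl-reqd --ftp-pasv --user someuser:somepass ftp://ftp.example.com/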

    Read the article

  • Getting Run-time error 1004 in code

    - by krishna123
    I tried the code provided by VBA Express for combining sheets; during execution it displays Run-time error 1004: Application-defined or object-defined error. My scenario is: I have an Excel workbook in which the first sheet is "Connection", followed by Sheet1, Sheet2 and so on. I am combining all sheets except the sheet "Connection" by starting with sheet 2. I tried the following line of code to exclude the "Connection" sheet:

      If Not Sheet.Name = "Connection" Then

    but it did not work. Some of the sheets contain large amounts of data in some cells. Here is the code I am using; the failing line is highlighted:

      Sub CopyFromWorksheets()
          Dim wrk As Workbook     'Workbook object - Always good to work with object variables
          Dim sht As Worksheet    'Object for handling worksheets in loop
          Dim trg As Worksheet    'Master Worksheet
          Dim rng As Range        'Range object
          Dim colCount As Integer 'Column count in tables in the worksheets

          Set wrk = ActiveWorkbook 'Working in active workbook

          For Each sht In wrk.Worksheets
              If sht.Name = "Master" Then
                  sht.Delete
                  Exit Sub
              End If
          Next sht

          'We don't want screen updating
          Application.ScreenUpdating = False

          'trg.SaveAs "C:\temp\CPReport1.xls"
          'Add new worksheet as the last worksheet
          Set trg = wrk.Worksheets.Add(After:=wrk.Worksheets(wrk.Worksheets.Count))
          'Rename the new worksheet
          trg.Name = "Master"

          'Get column headers from the first worksheet
          'Column count first
          Set sht = wrk.Worksheets(2)
          colCount = sht.Cells(1, 255).End(xlToLeft).Column
          'Now retrieve headers, no copy&paste needed
          With trg.Cells(1, 1).Resize(1, colCount)
              .Value = sht.Cells(1, 1).Resize(1, colCount).Value
              'Set font as bold
              .Font.Bold = True
          End With

          trg.SaveAs "C:\temp\CPReport1.xls"

          'We can start loop
          'Skip Sheet - Connection
          If Not sht.Name = "Connection" Then
              For Each sht In wrk.Worksheets
                  'If worksheet in loop is the last one, stop execution (it is Master worksheet)
                  If sht.Index = wrk.Worksheets.Count Then
                      Exit For
                  End If
                  'Data range in worksheet - starts from second row as first rows are the header rows in all worksheets
                  Set rng = sht.Range(sht.Cells(2, 1), sht.Cells(65536, 1).End(xlUp).Resize(, colCount))
                  'Put data into the Master worksheet
                  '----------------- Error in below line --------------------------------------------------
                  trg.Cells(65536, 1).End(xlUp).Offset(1).Resize(rng.Rows.Count, rng.Columns.Count).Value = rng.Value
                  '----------------------------------------------------------------------------------------
              Next sht
          End If

          'Fit the columns in Master worksheet
          trg.Columns.AutoFit

          'Dim dest, destyfile
          'dest = "E:\Test_Merge\"
          'destyfile = dest & "_" & trg.Name
          'trg.SaveAs (destyfile)

          'Screen updating should be activated
          Application.ScreenUpdating = True
      End Sub

    Read the article

  • GWT Query fails the second time only

    - by Koran
    Hi, I have a visualization function in GWT which creates two instances of the same panel - two queries. Now, suppose one URL is A and the other URL is B. The issue I am facing is that if A is called first, then both A and B work. If B is called first, then only B works and A times out. If I call A both times, only the first call works and the second times out. If I call B twice, it works both times without a hitch. Even though the error says it timed out, it is not actually timing out - the FF status bar shows "transferring data from A" and then it gets stuck. This doesn't even show up the first time the query runs. The only difference between A and B is that B returns very fast, while A returns comparatively slowly. The sample code is given below:

      public Panel() {
          Runnable onLoadCallback = new Runnable() {
              public void run() {
                  Query query = Query.create(dataUrl);
                  query.setTimeout(60);
                  query.send(new Callback() {
                      public void onResponse(QueryResponse response) {
                          if (response.isError()) {
                              Window.alert(response.getMessage());
                          }
                      }
                  });
              }
          };
          VisualizationUtils.loadVisualizationApi(onLoadCallback, PieChart.PACKAGE);
      }

    What could be the reason for this? I cannot think of any reason why this should happen. Why is this happening only for A and not for B?

    EDIT: More research. The query which works all the time (i.e. B) is the example URL given on the GWT visualization site (see comment [1]). So I tried to reproduce it on my App Engine instance the following way:

      s = "google.visualization.Query.setResponse({version:'0.6',status:'ok',sig:'106459472',table:{cols:[{id:'A',label:'Source',type:'string',pattern:''},{id:'B',label:'Percent',type:'number',pattern:'#0.01%'}],rows:[{c:[{v:'Oil'},{v:0.37,f:'37.00%'}]},{c:[{v:'Coal'},{v:0.25,f:'25.00%'}]},{c:[{v:'Natural Gas'},{v:0.23,f:'23.00%'}]},{c:[{v:'Nuclear'},{v:0.06,f:'6.00%'}]},{c:[{v:'Biomass'},{v:0.04,f:'4.00%'}]},{c:[{v:'Hydro'},{v:0.03,f:'3.00%'}]},{c:[{v:'Solar Heat'},{v:0.005,f:'0.50%'}]},{c:[{v:'Wind'},{v:0.003,f:'0.30%'}]},{c:[{v:'Geothermal'},{v:0.002,f:'0.20%'}]},{c:[{v:'Biofuels'},{v:0.002,f:'0.20%'}]},{c:[{v:'Solar photovoltaic'},{v:4.0E-4,f:'0.04%'}]}]}});"
      response = HttpResponse(s, content_type="text/plain; charset=utf-8")
      response['Expires'] = time.strftime('%a, %d %b %Y %H:%M:%S GMT', time.gmtime())
      return response

    where s is the data returned when the query for B is run. I tried to add Expires etc. too, since that seems to be the only header that differs, but now the query fails all the time. For more info, here is the difference between my server's response and the working server's response - they seem to be pretty similar.

    My server:

      HTTP/1.0 200 OK
      Content-Type: text/plain
      Date: Wed, 16 Jun 2010 11:07:12 GMT
      Server: Google Frontend
      Cache-Control: private, x-gzip-ok=""

      google.visualization.Query.setResponse({version:'0.6',status:'ok',sig:'106459472',table:{cols:[{id:'A',label:'Source',type:'string',pattern:''},{id:'B',label:'Percent',type:'number',pattern:'#0.01%'}],rows:[{c:[{v:'Oil'},{v:0.37,f:'37.00%'}]},{c:[{v:'Coal'},{v:0.25,f:'25.00%'}]},{c:[{v:'Natural Gas'},{v:0.23,f:'23.00%'}]},{c:[{v:'Nuclear'},{v:0.06,f:'6.00%'}]},{c:[{v:'Biomass'},{v:0.04,f:'4.00%'}]},{c:[{v:'Hydro'},{v:0.03,f:'3.00%'}]},{c:[{v:'Solar Heat'},{v:0.005,f:'0.50%'}]},{c:[{v:'Wind'},{v:0.003,f:'0.30%'}]},{c:[{v:'Geothermal'},{v:0.002,f:'0.20%'}]},{c:[{v:'Biofuels'},{v:0.002,f:'0.20%'}]},{c:[{v:'Solar photovoltaic'},{v:4.0E-4,f:'0.04%'}]}]}});
      Connection closed by foreign host.

    The working server:

      Mac$ telnet spreadsheets.google.com 80
      Trying 209.85.231.100...
      Connected to spreadsheets.l.google.com.
      Escape character is '^]'.
      GET http://spreadsheets.google.com/tq?key=pWiorx-0l9mwIuwX5CbEALA&range=A1:B12&gid=0&headers=-1

      HTTP/1.0 200 OK
      Content-Type: text/plain; charset=UTF-8
      Date: Wed, 16 Jun 2010 11:07:58 GMT
      Expires: Wed, 16 Jun 2010 11:07:58 GMT
      Cache-Control: private, max-age=0
      X-Content-Type-Options: nosniff
      X-XSS-Protection: 1; mode=block
      Server: GSE

      google.visualization.Query.setResponse({version:'0.6',status:'ok',sig:'106459472',table:{cols:[{id:'A',label:'Source',type:'string',pattern:''},{id:'B',label:'Percent',type:'number',pattern:'#0.01%'}],rows:[{c:[{v:'Oil'},{v:0.37,f:'37.00%'}]},{c:[{v:'Coal'},{v:0.25,f:'25.00%'}]},{c:[{v:'Natural Gas'},{v:0.23,f:'23.00%'}]},{c:[{v:'Nuclear'},{v:0.06,f:'6.00%'}]},{c:[{v:'Biomass'},{v:0.04,f:'4.00%'}]},{c:[{v:'Hydro'},{v:0.03,f:'3.00%'}]},{c:[{v:'Solar Heat'},{v:0.005,f:'0.50%'}]},{c:[{v:'Wind'},{v:0.003,f:'0.30%'}]},{c:[{v:'Geothermal'},{v:0.002,f:'0.20%'}]},{c:[{v:'Biofuels'},{v:0.002,f:'0.20%'}]},{c:[{v:'Solar photovoltaic'},{v:4.0E-4,f:'0.04%'}]}]}});
      Connection closed by foreign host.

    Also, please note that App Engine did not allow the Expires header to go through - can that be the reason? But if that were the reason, then it should not fail when B is sent first and then A.

    Comment [1]: http://spreadsheets.google.com/tq?key=pWiorx-0l9mwIuwX5CbEALA&range=A1:B12&gid=0&headers=-1

    Read the article

  • Is an Ethernet point-to-point connection without a switch real-time capable?

    - by funksoulbrother
    In automation and control it is commonly stated that Ethernet can't be used as a bus because it is not real-time capable due to packet collisions. If important control packets collide, they often can't meet the hard real-time conditions needed for control. But what if I have a single point-to-point connection with Ethernet, no switch in between? To be more precise, I have an FPGA board with a Gigabit Ethernet port that is connected directly to my control PC. I think the benefits of Gigabit Ethernet over CAN or USB for a point-to-point connection are huge, especially for high sampling rates and lots of data generation on the FPGA board. Am I correct that with a point-to-point connection there can't be any packet collisions, and that therefore a real-time environment is possible even with Ethernet? Thanks in advance! ~fsb

    Read the article

  • nginx connection time issue on some IPs

    - by sheldon
    I have recently moved my server to nginx and php-fpm, getting rid of Apache. This has improved the speed of my website. Everything seemed to work fine until I came across this issue: I noticed that nginx keeps throwing connection timed out errors for only certain IPs. One of the IPs is my office IP; we have a backend that is accessed from our office throughout the day. I use supervisord to launch 3 php-fpm processes with workers. This is my typical php-fpm config:

      pm.max_children = 50
      pm.start_servers = 20
      pm.min_spare_servers = 5
      pm.max_spare_servers = 35
      pm.max_requests = 300

    Since I have a server with 4 cores and 2 GB of RAM, this is my nginx setup:

      worker_processes 4;
      worker_rlimit_nofile 8192;

      events {
          worker_connections 1024;
          use epoll;
          multi_accept off;
      }

      sendfile on;
      tcp_nopush on;
      tcp_nodelay on;
      keepalive_timeout 55;
      recursive_error_pages on;
      server_name_in_redirect off;
      server_tokens off;
      client_header_timeout 3m;
      client_body_timeout 3m;
      send_timeout 3m;
      connection_pool_size 256;
      client_header_buffer_size 8k;
      large_client_header_buffers 4 32k;
      request_pool_size 4k;
      output_buffers 4 32k;
      postpone_output 1460;
      proxy_buffer_size 32k;
      proxy_buffers 4 32k;
      proxy_busy_buffers_size 64k;
      proxy_temp_file_write_size 64k;
      fastcgi_connect_timeout 120;
      fastcgi_send_timeout 120;
      fastcgi_read_timeout 180;
      fastcgi_buffer_size 128k;
      fastcgi_buffers 4 256k;
      fastcgi_busy_buffers_size 256k;
      fastcgi_temp_file_write_size 256k;
      fastcgi_intercept_errors on;
      fastcgi_ignore_client_abort off;

    Where am I going wrong with the config? I have tried various settings but the issue still persists. These are the errors I keep getting:

      2011/11/13 18:20:33 [error] 21583#0: *311683 upstream timed out (110: Connection timed out) while reading response header from upstream, client: IP, server: tastykhana.in, request: "GET url HTTP/1.1", upstream: "fastcgi://unix:/var/run/php-fpm.socket:", host: "tastykhana.in", referrer: "url"

    Read the article

  • Which Wine DLLs should I *not* overwrite with the real thing?

    - by endolith
    I have a legit installation of Windows XP and dual boot with Ubuntu (currently Karmic). WineHQ says it's possible to use DLLs from a real installation of Windows in place of "DLLs that Wine does not currently implement very well". So I'd like to just create softlinks that point to all of the DLLs in my real Windows System32 folder, under the theory that this would help things function better and behave in a less buggy, more native way. But should I go as far as replacing the Wine DLLs with the real ones? If so, are there any DLLs that need to remain the way they are for compatibility with the Linux world? Which ones are safe to replace?

    Read the article

  • PHP hits 100% CPU and eats RAM at the same time Monday to Friday

    - by Daniel Samuels
    We run a learning platform for primary schools here in the UK and it's all been running extremely well. However, at around 4PM Monday to Friday we see the same issue arise -- 1-2 PHP threads will spike to 100% CPU and gradually start eating up RAM until the server(s) fall over. 98%+ of our requests are HTTPS; these come into our Layer 7 load balancer, which decrypts the SSL data, adds the X-HTTP-Forwarded-For header and forwards the data on to an application server (we have 2 of those at the moment) on port 80. Our application servers run Varnish on port 80, which takes in the request from the load balancer and passes it through to Nginx on port 81. Nginx then works out which 'vhost' it needs to use and passes any PHP processing through to PHP-CGI, which is listening on a socket (managed through spawn-fcgi). There's an instance of Memcached running too; MySQL runs on a separate server / slave setup. Throughout the day the load will typically go no higher than 0.8 on either of the application servers, however at around 4PM our problem arises. I've managed to run strace on a few of the actual threads when they cause the problem and I always see the same thing:

      stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644, st_size=3661, ...}) = 0
      stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644, st_size=3661, ...}) = 0

    This is repeated infinitely and never stops until you SIGKILL the process or the OOM killer kills it. There are no cron jobs scheduled to run at that time and I don't have any way of seeing exactly which Nginx request is associated with the PHP process which is running. We are running PHP 5.3.14, which we upgraded to from 5.3.8 last week to rule out the older version being the problem. This issue has been going on for a few months now and we have no idea what is causing it. We deploy our software very frequently, so it's difficult to track down a specific release which may have started the problem - especially as we do not know the date of the first occurrence of this issue. Varnish is version 3.0.1, Nginx is 1.0.6 (which I understand is about a year old now), our servers are running CentOS release 5.7 (Final) and they have Intel i3 540s at 3.07 GHz and 8 GB of RAM. There's a discussion on the Debian mailing list about something very similar, you can find that here. Has anyone seen anything like this in the past? Does anyone have any ideas or suggestions? Is there a way of linking an Nginx request directly to a PHP thread? Is there a better way of seeing what the PHP process is doing? (I've seen GDB mentioned, though I'll have to recompile PHP.) Thanks!
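    A hedged sketch of what can be pulled from a stuck worker without recompiling PHP (the PID below is a placeholder for the runaway PHP-CGI process; gdb only gives a C-level backtrace unless PHP debug symbols are installed):

      PID=12345   # the PHP-CGI process pinned at 100% CPU

      # Summarise which syscalls it is spending its time in (Ctrl-C to stop).
      strace -c -p "$PID"

      # Open files and sockets often hint at which script or connection it is serving.
      lsof -p "$PID"

      # C-level backtrace of the stuck process.
      gdb -batch -ex bt -p "$PID"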

    Read the article

  • First time installing Linux/Apache - unable to connect

    - by bob
    It's my first time installing Linux/Apache. I loaded CentOS and LAMPP on a machine attached to a LAN, then turned off httpd and mysqld (because I didn't want them to conflict with LAMPP):

      chkconfig httpd off
      chkconfig mysqld off

    LAMPP then started successfully with /opt/lampp/lampp start:

      Starting XAMPP for Linux 1.7.3a...
      XAMPP: Starting Apache with SSL (and PHP5)...
      XAMPP: Starting MySQL...
      XAMPP: Starting ProFTPD...
      XAMPP for Linux started.

    Problem: Unable to connect - Firefox can't establish a connection to the server at 179.16.51.36. I need some pointers as to where to look next. There are no errors in the error_log file (just some warnings), and I can ping the server. httpd.conf looks like this:

      ServerRoot "/opt/lampp"
      Listen 80
      ServerAdmin [email protected]
      ServerName 179.16.51.36
      DocumentRoot "/opt/lampp/htdocs"

      <Directory />
          Options FollowSymLinks
          AllowOverride None
      </Directory>

      <Directory "/opt/lampp/htdocs">
          Options Indexes FollowSymLinks ExecCGI Includes
          Order allow,deny
          Allow from all
      </Directory>

      ErrorLog logs/error_log
      LogLevel warn

      <IfModule log_config_module>
          LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
          LogFormat "%h %l %u %t \"%r\" %>s %b" common
          <IfModule logio_module>
              LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
          </IfModule>
          CustomLog logs/access_log common
      </IfModule>

      <IfModule alias_module>
          ScriptAlias /cgi-bin/ "/opt/lampp/cgi-bin/"
      </IfModule>

      <IfModule cgid_module>
      </IfModule>

      <Directory "/opt/lampp/cgi-bin">
          AllowOverride None
          Options None
          Order allow,deny
          Allow from all
      </Directory>

      DefaultType text/plain

      <IfModule mime_module>
          TypesConfig etc/mime.types
          AddType application/x-compress .Z
          AddType application/x-gzip .gz .tgz
          AddHandler cgi-script .cgi .pl
          AddType text/html .shtml
          AddOutputFilter INCLUDES .shtml
      </IfModule>

      EnableMMAP off
      EnableSendfile off
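    A couple of hedged first checks, run on the CentOS box itself (paths and names assume the LAMPP defaults): confirm something is actually listening on port 80, check whether the distribution firewall is dropping the connection, and see whether the server at least answers locally:

      # Is LAMPP's Apache really bound to port 80 (and 443)?
      netstat -lntp | grep -E ':(80|443) '

      # CentOS's default iptables rules usually allow only SSH, so port 80 may be blocked.
      iptables -L -n

      # Does the server answer a request made from itself, even if remote machines can't connect?
      curl -I http://localhost/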

    Read the article

  • All internet requests in Windows time out

    - by Brandon
    So, I've run into a very strange problem with my home wireless network. Previously, at seemingly random times, the router seemed to disconnect all wireless hosts and cause all of the wired hosts to have a "limited connection" according to Windows. In order to fix this, I had to unplug all of the wired hosts from the router, unplug the modem from the router, and power-cycle the router. This seemed to solve the problem for a while, until the exact same thing happened a day later and I had to go through the same process again. That's where I noticed something weird happening. There was one wireless host (a Windows Vista laptop) that seemed to be causing the router to disconnect the other hosts whenever it connected. When this happened, only that laptop was able to use the wireless from the router. I then disconnected it from the wireless (by disabling the wireless adapter), reconnected it (by re-enabling it), and now it, like the other hosts, couldn't connect. I've never really seen anything this strange happen on our network before. So, I restored the router to factory settings and the problem seems to have vanished, except for one crucial issue. There's another host (a Windows 7 laptop) that was perfectly able to connect before all of the router issues, and even in between the crashing and power-cycling events, but now says it's connected and able to reach the Internet, yet all requests time out. In any browser I've tried, the tab says "connecting to [site]..." for a solid minute and then tells me the request timed out. When I try to ping google.com in cmd it also says the request timed out. In frustration, I booted into a dual-boot Ubuntu installation on the Windows 7 host and the connection works fine, to my surprise - Ubuntu is where I am now typing this rather long question. Some notes: I haven't looked through the event log in Windows, but will post anything I find in an edit. I haven't tried connecting (in Windows 7) to any other wireless network, since the fact that it works in Ubuntu suggests it's Windows and not the router. I didn't change any wireless settings in Windows between it being able to reach the Internet and not. Does anyone have any clue what could have happened? I'm open to buying another router, as this one is almost a year old :) but I would like to know what's going on here. Thanks in advance! P.S. Sorry for how long my question is, I'm a little anxious (:

    Read the article

  • C# calendar needs Business Logic for real-time reminders? [on hold]

    - by lazfish
    I am not a super experienced C# user, though I have some experience in .NET and VB.NET. I just got this new job and my first assignment is a mission-critical part of the business. It is pretty important that I get it right, so I was hoping for some sage advice. We have created a calendar using jQuery, C#, .NET and SQL Server 08. The calendar works, but now we want to add email, SMS and voice-call reminders. We are a small company and can do whatever we need to do, with no restrictions on our IT environment. I have some baseline experience with Unix servers but would prefer to stay in the Micro$oft universe if it is prudent to do so. I know how to add the API calls and services to initiate these reminders (using the built-in email services for email and Twilio.com for SMS). What I am asking for is advice on how to approach a reliable and timely listener service that knows when to call the service or API for the reminder before an appointment, e.g. SMS: "You have a conference call in 30 minutes." I have done some research, but it is hard to know what is a proven, reliable approach. What I am looking for is an experienced opinion on a good (or the best) strategy for implementing a solution that will reliably listen for appointments and dispatch the reminders when needed.

    Read the article

  • OTP or S/KEY - Conversion of Hex string into 6 readable words

    - by Garbit
    As seen in RFC 2289 (S/KEY), there is a list of words that must be used when converting the hexadecimal string into a readable format. How would I go about doing so? The RFC says:

      The one-time password is therefore converted to, and accepted as, a
      sequence of six short (1 to 4 letter) English words. Each word is
      chosen from a dictionary of 2048 words; at 11 bits per word, all
      one-time passwords may be encoded.

      Read more: http://www.faqs.org/rfcs/rfc1760.html#ixzz0fu7QvXfe

    Does this mean converting the hex into decimal and then using that as an index into an array of words? The other thing it could be is using a text encoding, e.g. 1111 might equal "dog" in UTF-8 encoding. Thanks in advance for your help!

    Read the article

  • Sun Java Realtime System on VirtualMachine / cloud

    - by portoalet
    Just wondering if anybody can run/compile applications for the Sun Java Real-Time System on a VM such as VMware, or in the cloud, such as on Amazon EC2? I know it is not ideal to run real-time Java on virtualized infrastructure, but it makes things easier. (Otherwise I just have to install SLES SP2 on physical hardware.)

    Read the article

  • apostrophe in mysql/php

    - by fusion
    I'm trying to learn PHP/MySQL. Inserting data into MySQL works fine, but inserting values that contain an apostrophe generates an error. I tried using mysql_real_escape_string, yet this doesn't work. I would appreciate any help.

      <?php
      include 'config.php';
      echo "Connected <br />";

      $auth = $_POST['author'];
      $quo  = $_POST['quote'];

      $author = mysql_real_escape_string($auth);
      $quote  = mysql_real_escape_string($quo);

      //**************************
      //inserting data
      $sql = "INSERT INTO Quotes (vauthor, cquotes) VALUES ($author, $quote)";

      if (!mysql_query($sql, $conn)) {
          die('Error: ' . mysql_error());
      }
      echo "1 record added";

    ... what am I doing wrong?

    Read the article

  • Get seconds since epoch in any POSIX compliant shell

    - by mattbh
    I'd like to know if there's a way to get the number of seconds since the UNIX epoch in any POSIX-compliant shell, without resorting to non-POSIX languages like Perl, or using non-POSIX extensions like GNU awk's strftime function. Here are some solutions I've already ruled out:

      date +%s    // Doesn't work on Solaris

    I've seen some shell scripts suggested before which parse the output of date and then derive seconds from the formatted Gregorian calendar date, but they don't seem to take details like leap seconds into account. GNU awk has the strftime function, but this isn't available in standard awk. I could write a small C program which calls the time function, but the binary would be specific to a particular architecture. Is there a cross-platform way to do this using only POSIX-compliant tools? I'm tempted to give up and accept a dependency on Perl, which is at least widely deployed.

      perl -e 'print time'    // Cheating (non-POSIX), but should work on most platforms
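    One hedged possibility worth testing (it is not guaranteed by POSIX, which only says srand() with no argument seeds from the time of day and returns the previous seed, but many awk implementations expose epoch seconds this way):

      # The second srand() call returns the seed set by the first call, which on
      # many awks is the current time in seconds since the UNIX epoch.
      awk 'BEGIN { srand(); print srand() }'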

    Read the article
