Search Results

Search found 2021 results on 81 pages for 'pm paresh mayani'.

Page 18 of 81

  • php-fpm + persistent sockets = 502 bad gateway

    - by leeoniya
    Put on your reading glasses - this will be a long-ish one. First, what I'm doing. I'm building a web-app interface for some particularly slow tcp devices. Opening a socket to them takes 200ms and an fwrite/fread cycle takes another 300ms. To reduce the need for both of these actions on each request, I'm opening a persistent tcp socket which reduces the response time by the aforementioned 200ms. I was hoping PHP-FPM would share the persistent connections between requests from different clients (and indeed it does!), but there are some issues which I haven't been able to resolve after 2 days of searching the internet, reading logs and modifying settings. I have somewhat narrowed it down though. Setup: Ubuntu 13.04 x64 Server (fully updated) on Linode PHP 5.5.0-6~raring+1 (fpm-fcgi) nginx/1.5.2 Relevant config: nginx worker_processes 4; php-fpm/pool.d pm = dynamic pm.max_children = 2 pm.start_servers = 2 pm.min_spare_servers = 2 Let's go from coarse to fine detail of what happens. After a fresh start I have 4x nginx processes and 2x php5-fpm processes waiting to handle requests. Then I send requests every couple of seconds to the script. The first takes a while to open the socket connection and returns with the data in about 500ms, the second returns data in 300ms (yay, it's re-using the socket), the third also succeeds in about 300ms, the fourth request = 502 Bad Gateway, same with the 5th. The sixth request once again returns data, except now it took 500ms again. The process repeats for several cycles, after which every 4 requests result in 2x 502 Bad Gateways and 2x 500ms data responses. If I double all the fpm pool values and have 4x php-fpm processes running, the cycle settles in with 4x successful 500ms responses followed by 4x Bad Gateway errors. If I don't use persistent sockets, this issue goes away, but then every request is 500ms. What I suspect is happening is that the persistent socket keeps each php-fpm process from idling and ties it up, so the next one gets chosen until none are left; as they error out, maybe they are restarted and become available on the next round-robin loop, but the socket dies with the process. I haven't yet checked the 'slowlog', but the nginx error log shows lots of this: *188 recv() failed (104: Connection reset by peer) while reading response header from upstream, client:... All the suggestions on the internet regarding fixing nginx/php-fpm/502 bad gateway relate to high load or fcgi_pass misconfiguration. This is not the case here. Increasing buffers/sizes, changing timeouts, switching from a unix socket to a tcp socket for fcgi_pass, upping connection limits on the system... none of this stuff applies here. I've had some other success with setting pm = ondemand rather than dynamic, but as soon as the initial fpm process gets killed off after idling, the persistent socket is gone for all subsequent php-fpm spawns. For the PHP script, I'm using stream_socket_client() with the STREAM_CLIENT_PERSISTENT flag, a while/stream_select() loop to detect socket data, and fread($sock, 4096) to grab the data. I don't call fclose(), obviously. If anyone has some additional questions or advice on how to get a persistent socket without tying up the php-fpm processes beyond the request completion, or maybe some other things to try, I'd appreciate it.
some useful links: Nginx + php-fpm - recv() error Nginx + php-fpm "504 Gateway Time-out" error with almost zero load (on a test-server) Nginx + PHP-FPM "error 104 Connection reset by peer" causes occasional duplicate posts http://www.linuxquestions.org/questions/programming-9/php-pfsockopen-552084/ http://stackoverflow.com/questions/14268018/concurrent-use-of-a-persistent-php-socket http://devzone.zend.com/303/extension-writing-part-i-introduction-to-php-and-zend/#Heading3 http://stackoverflow.com/questions/242316/how-to-keep-a-php-stream-socket-alive http://php.net/manual/en/install.fpm.configuration.php https://www.google.com/search?q=recv%28%29+failed+%28104:+Connection+reset+by+peer%29+while+reading+response+header+from+upstream+%22502%22&ei=mC1XUrm7F4WQyAHbv4H4AQ&start=10&sa=N&biw=1920&bih=953&dpr=1
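    A minimal sketch of the persistent-socket pattern described above (the device address, timeout and command are placeholders, not values from this setup):

    ```php
    <?php
    // Hypothetical device endpoint and command; adjust for the real hardware.
    $command = "STATUS\n";
    $sock = stream_socket_client(
        'tcp://192.0.2.10:4000',
        $errno, $errstr,
        5,                                                // connect timeout in seconds
        STREAM_CLIENT_CONNECT | STREAM_CLIENT_PERSISTENT  // keep the socket across requests
    );
    if ($sock === false) {
        die("connect failed: $errstr ($errno)");
    }
    fwrite($sock, $command);

    $response = '';
    while (true) {
        $read   = [$sock];
        $write  = $except = [];
        $ready  = stream_select($read, $write, $except, 1);
        if ($ready === false || $ready === 0) {
            break;                                        // error, or nothing more to read within 1s
        }
        $chunk = fread($sock, 4096);
        if ($chunk === '' || $chunk === false) {
            break;
        }
        $response .= $chunk;
    }
    echo $response;
    // deliberately no fclose(): closing would defeat STREAM_CLIENT_PERSISTENT
    ```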

    Read the article

  • DataTable.Select Behaves Strangely Using ISNULL Operator on NULL DateTime Column

    - by Paul Williams
    I have a DataTable with a DateTime column, "DateCol", that can be DBNull. The DataTable has one row in it with a NULL value in this column. I am trying to query rows that have either a DBNull value in this column or a date that is greater than today's date. Today's date is 5/11/2010. I built a query to select the rows I want, but it did not work as expected. The query was: string query = "ISNULL(DateCol, '" + DateTime.MaxValue + "') > '" + DateTime.Today + "'"; This results in the following query: "ISNULL(DateCol, '12/31/9999 11:59:59 PM') > '5/11/2010'" When I run this query, I get no results. It took me a while to figure out why. What follows is my investigation in the Visual Studio immediate window: > dt.Rows.Count 1 > dt.Rows[0]["DateCol"] {} > dt.Rows[0]["DateCol"] == DBNull.Value true > dt.Select("ISNULL(DateCol,'12/31/9999 11:59:59 PM') > '5/11/2010'").Length 0 <-- I expected 1 Trial and error showed a difference in the date checks at the following boundary: > dt.Select("ISNULL(DateCol, '12/31/9999 11:59:59 PM') > '2/1/2000'").Length 0 > dt.Select("ISNULL(DateCol, '12/31/9999 11:59:59 PM') > '1/31/2000'").Length 1 <-- this was the expected answer The query works fine if I wrap the DateTime field in # instead of quotes. > dt.Select("ISNULL(DateCol, #12/31/9999#) > #5/11/2010#").Length 1 My machine's regional settings are currently set to EN-US, and the short date format is M/d/yyyy. Why did the original query return the wrong results? Why would it work fine if the date was compared against 1/31/2000 but not against 2/1/2000?
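    A small illustration of the variant the poster found to work (a sketch: '#' delimiters plus an invariant-culture format keep the literals independent of regional settings):

    ```csharp
    using System;
    using System.Data;
    using System.Globalization;

    class IsNullFilterSketch
    {
        static void Main()
        {
            var dt = new DataTable();
            dt.Columns.Add("DateCol", typeof(DateTime));
            dt.Rows.Add(DBNull.Value);                      // the single NULL row from the question

            // Build the literals with '#' delimiters and an invariant format
            // rather than culture-dependent quoted strings.
            string max   = DateTime.MaxValue.ToString("MM/dd/yyyy", CultureInfo.InvariantCulture);
            string today = DateTime.Today.ToString("MM/dd/yyyy", CultureInfo.InvariantCulture);
            string query = "ISNULL(DateCol, #" + max + "#) > #" + today + "#";

            Console.WriteLine(dt.Select(query).Length);     // prints 1, as expected
        }
    }
    ```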

    Read the article

  • How to Import XML generated by TFPT into Excel 2007?

    - by keerthivasanp
    Below is the XML content generated by TFPT for the WIQL issued. When I try to import this XML into Excel 2007, the XML Source pane shows only "Id", "RefName" and "Value" as fields to be mapped, whereas I would like to display System.Id, System.State, Microsoft.VSTS.Common.ResolvedDate, Microsoft.VSTS.Common.StateChangeDate as column headings and the corresponding values as rows. Any idea how to achieve this using XML import in Excel 2007? <?xml version="1.0" encoding="utf-8"?> <WorkItems> <WorkItem Id="40100"> <Field RefName="System.Id" Value="40100" /> <Field RefName="System.State" Value="Closed" /> <Field RefName="Microsoft.VSTS.Common.ResolvedDate" Value="3/17/2010 9:39:04 PM" /> <Field RefName="Microsoft.VSTS.Common.StateChangeDate" Value="4/20/2010 9:15:32 PM" /> </WorkItem> <WorkItem Id="44077"> <Field RefName="System.Id" Value="44077" /> <Field RefName="System.State" Value="Closed" /> <Field RefName="Microsoft.VSTS.Common.ResolvedDate" Value="3/1/2010 4:26:47 PM" /> <Field RefName="Microsoft.VSTS.Common.StateChangeDate" Value="4/20/2010 7:32:12 PM" /> </WorkItem> </WorkItems> Thanks
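    One possible workaround (untested sketch): pre-process the file with a small XSLT that turns each Field's RefName into an element name, so Excel's XML Source pane offers one mappable node per field instead of the generic Id/RefName/Value trio:

    ```xml
    <!-- pivot.xslt (hypothetical name): <Field RefName="System.Id" Value="40100"/>
         becomes <System.Id>40100</System.Id>, one element per desired column -->
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="xml" indent="yes"/>
      <xsl:template match="/WorkItems">
        <WorkItems>
          <xsl:for-each select="WorkItem">
            <WorkItem>
              <xsl:for-each select="Field">
                <xsl:element name="{@RefName}">
                  <xsl:value-of select="@Value"/>
                </xsl:element>
              </xsl:for-each>
            </WorkItem>
          </xsl:for-each>
        </WorkItems>
      </xsl:template>
    </xsl:stylesheet>
    ```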

    Read the article

  • How do I stop cpan from reconfiguring each time? + More

    - by Leonard
    I'm running on a Mac (version 10.6.3) and am struggling to understand what is going on with my Perl installation. I let the system do a copy from my previous mac, and I appear to have a second perl installed, which appears earlier in my path. I can't tell (or remember) if I might have installed it with fink, macports or CPAN or what. type -a cpan cpan is /opt/local/bin/cpan cpan is /usr/bin/cpan I'm seeing two oddities. (To start with!) When I run cpan, and let it configure in ~lcuff/.cpan, each time I run it, it wants to reconfigure, giving the message: Sorry, we have to rerun the configuration dialog for CPAN.pm due to some missing parameters... Also, when I try to install File::Find::Rule (so I can list my CPAN modules, per the FAQ) I end up with an error message that I can't decipher or Google a solution for: Use of inherited AUTOLOAD for non-method Digest::SHA::shaopen() is deprecated at /opt/local/lib/perl5/vendor_perl/5.8.9/darwin-2level/Digest/SHA.pm line 55. Catching error: "Can't locate auto/Digest/SHA/shaopen.al in \@INC (\@INC contains: /sw/lib/perl5 /sw/lib/perl5/darwin /opt/local/lib/perl5/site_perl/5.8.9/darwin-2level /opt/local/lib/perl5/site_perl/5.8.9 /opt/local/lib/perl5/site_perl /opt/local/lib/perl5/vendor_perl/5.8.9/darwin-2level /opt/local/lib/perl5/vendor_perl/5.8.9 /opt/local/lib/perl5/vendor_perl /opt/local/lib/perl5/5.8.9/darwin-2level /opt/local/lib/perl5/5.8.9 /Users/lcuff) at /opt/local/lib/perl5/vendor_perl/5.8.9/darwin-2level/Digest/SHA.pm line 55\cJ" at /opt/local/lib/perl5/5.8.9/CPAN.pm line 359 CPAN::shell() called at /opt/local/bin/cpan line 198
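    A couple of low-risk checks that may help untangle the two installs (the paths come from the question's own type -a output; the per-user config location is the usual place CPAN.pm keeps it, so treat that as an assumption):

    ```sh
    # Which perl does each cpan wrapper belong to?
    /opt/local/bin/perl -V:version    # the MacPorts/fink perl that appears first in PATH
    /usr/bin/perl -V:version          # Apple's system perl

    # Does the per-user CPAN config exist, and is it writable by this user?
    ls -l ~/.cpan/CPAN/MyConfig.pm
    ```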

    Read the article

  • Filtering records in app-engine (Java)

    - by Manjoor
    I have the following code running perfectly. It filters records based on a single parameter. public List<Orders> GetOrders(String email) { PersistenceManager pm = PMF.get().getPersistenceManager(); Query query = pm.newQuery(Orders.class); query.setFilter("Email == pEmail"); query.setOrdering("Id desc"); query.declareParameters("String pEmail"); query.setRange(0,50); return (List<Orders>) query.execute(email); } Now I want to filter on multiple parameters. sdate and edate are the start date and end date. In the datastore they are saved as Date (not String). public List<Orders> GetOrders(String email,String icode,String sdate, String edate) { PersistenceManager pm = PMF.get().getPersistenceManager(); Query query = pm.newQuery(Orders.class); query.setFilter("Email == pEmail"); query.setFilter("ItemCode == pItemCode"); query.declareParameters("String pEmail"); query.declareParameters("String pItemCode"); .....//Set filter and declare other 2 parameters .....// ...... query.setRange(0,50); query.setOrdering("Id desc"); return (List<Orders>) query.execute(email,icode,sdate,edate); } Any clue?
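    A possible shape for the multi-parameter version (a sketch only: "OrderDate" is a guess at the Date field's name and the yyyy-MM-dd input format is assumed). Note that each setFilter()/declareParameters() call replaces the previous one, so everything goes into a single string, and the App Engine datastore requires the first sort order to be on the property carrying the inequality filter, hence the changed ordering; such a query may also need a composite index:

    ```java
    public List<Orders> GetOrders(String email, String icode, String sdate, String edate)
            throws java.text.ParseException {
        // convert the incoming strings to the Date type stored in the datastore
        java.text.SimpleDateFormat fmt = new java.text.SimpleDateFormat("yyyy-MM-dd");
        java.util.Date start = fmt.parse(sdate);
        java.util.Date end   = fmt.parse(edate);

        PersistenceManager pm = PMF.get().getPersistenceManager();
        Query query = pm.newQuery(Orders.class);
        query.setFilter("Email == pEmail && ItemCode == pItemCode"
                + " && OrderDate >= pStart && OrderDate <= pEnd");
        query.declareParameters(
                "String pEmail, String pItemCode, java.util.Date pStart, java.util.Date pEnd");
        query.setOrdering("OrderDate desc, Id desc");
        query.setRange(0, 50);
        return (List<Orders>) query.executeWithArray(email, icode, start, end);
    }
    ```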

    Read the article

  • Perl Parallel::ForkManager wait_all_children() takes excessively long time

    - by zhang18
    I have a script that uses Parallel::ForkManager. However, the wait_all_children() process takes an incredibly long time even after all child processes are completed. The way I know is by printing out some timestamps (see below). Does anyone have any idea what might be causing this (I have 16 CPU cores on my machine)? my $pm = Parallel::ForkManager->new(16); for my $i (1..16) { $pm->start($i) and next; ... do something within the child-process ... print (scalar localtime), " Process $i completed.\n"; $pm->finish(); } print (scalar localtime), " Waiting for some child process to finish.\n"; $pm->wait_all_children(); print (scalar localtime), " All processes finished.\n"; Clearly, I'll get the Waiting for some child process to finish message first, with a timestamp of, say, 7:08:35. Then I'll get a list of Process i completed messages, with the last one at 7:10:30. However, I do not receive the message All Processes finished until 7:16:33(!). Why is there a 6-minute delay between 7:10:30 and 7:16:33? Thx!
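    A small sketch (assuming nothing beyond Parallel::ForkManager itself) that logs the moment each child is actually reaped, which helps separate "the child finished its work" from "the parent collected it". As an aside, print (scalar localtime), "..." outputs only the timestamp, because the parentheses are taken as print's whole argument list:

    ```perl
    #!/usr/bin/env perl
    use strict;
    use warnings;
    use Parallel::ForkManager;

    my $pm = Parallel::ForkManager->new(16);

    # called in the parent each time a child is reaped
    $pm->run_on_finish(sub {
        my ($pid, $exit_code, $ident) = @_;
        print scalar(localtime), " Process $ident (pid $pid) reaped.\n";
    });

    for my $i (1 .. 16) {
        $pm->start($i) and next;
        sleep 2;    # stand-in for the real per-child work
        print scalar(localtime), " Process $i completed.\n";
        $pm->finish(0);
    }

    print scalar(localtime), " Waiting for remaining children.\n";
    $pm->wait_all_children();
    print scalar(localtime), " All processes finished.\n";
    ```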

    Read the article

  • TimeZone change to UTC while updating the Appointment

    - by Firoz Ansari
    I am using EWS 1.2 to send appointments. On creating new appointments, the TimeZone shows properly in the notification mail, but on updating the same appointment, its TimeZone resets to UTC. Could anyone help me fix this issue? Here is sample code to replicate the issue: ExchangeService service = new ExchangeService(ExchangeVersion.Exchange2010_SP1, TimeZoneInfo.FindSystemTimeZoneById("Eastern Standard Time")); service.Credentials = new WebCredentials("ews_calendar", PASSWORD, "acme"); service.Url = new Uri("https://acme.com/EWS/Exchange.asmx"); Appointment newAppointment = new Appointment(service); newAppointment.Subject = "Test Subject"; newAppointment.Body = "Test Body"; newAppointment.Start = new DateTime(2012, 03, 27, 17, 00, 0); newAppointment.End = newAppointment.Start.AddMinutes(30); newAppointment.RequiredAttendees.Add("[email protected]"); //Attendees get notification mail for this appointment using (UTC-05:00) Eastern Time (US & Canada) timezone //Here is the notification content received by attendees: //When: Tuesday, March 27, 2012 5:00 PM-5:30 PM. (UTC-05:00) Eastern Time (US & Canada) newAppointment.Save(SendInvitationsMode.SendToAllAndSaveCopy); // Pull existing appointment string itemId = newAppointment.Id.ToString(); Appointment existingAppointment = Appointment.Bind(service, new ItemId(itemId)); //Attendees get notification mail for this appointment using UTC timezone //Here is the notification content received by attendees: //When: Tuesday, March 27, 2012 11:00 PM-11:30 PM. UTC existingAppointment.Update(ConflictResolutionMode.AlwaysOverwrite, SendInvitationsOrCancellationsMode.SendToAllAndSaveCopy);
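    One thing worth trying (a hedged sketch against the EWS Managed API, not verified in this environment): re-bind with the time-zone properties loaded and set them explicitly before Update(), so the update request carries the Eastern time zone instead of falling back to UTC:

    ```csharp
    TimeZoneInfo eastern = TimeZoneInfo.FindSystemTimeZoneById("Eastern Standard Time");

    // load the time-zone properties alongside the first-class ones
    PropertySet props = new PropertySet(BasePropertySet.FirstClassProperties,
                                        AppointmentSchema.StartTimeZone,
                                        AppointmentSchema.EndTimeZone);
    Appointment existingAppointment = Appointment.Bind(service, new ItemId(itemId), props);

    existingAppointment.StartTimeZone = eastern;
    existingAppointment.EndTimeZone = eastern;     // EndTimeZone needs Exchange2010_SP1, which is used above
    existingAppointment.Update(ConflictResolutionMode.AlwaysOverwrite,
                               SendInvitationsOrCancellationsMode.SendToAllAndSaveCopy);
    ```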

    Read the article

  • Spreadsheet::WriteExcel - data_validation

    - by sid_com
    #! /usr/bin/env perl use warnings; use 5.012; use Spreadsheet::WriteExcel; my $workbook = Spreadsheet::WriteExcel->new( 'test_test.xls' ) or die $!; my $sheet = $workbook->add_worksheet(); my $format_in = $workbook->add_format( align => 'center', valign => 'vcenter' ); my $format_st = $workbook->add_format( align => 'center', valign => 'vcenter' ); $format_in->set_num_format ( 'hh:mm' ); $format_st->set_num_format ( '[h]:mm' ); $sheet->set_row( 0, 22 ); $sheet->set_row( 1, 22 ); $sheet->set_column( 'A:D', 20, $format_in ); $sheet->set_column( 'E:E', 20, $format_st ); $sheet->write( 'A1', 'begin am' ); $sheet->write( 'B1', 'end am' ); $sheet->write( 'C1', 'begin pm' ); $sheet->write( 'D1', 'end pm' ); $sheet->write( 'E1', 'time' ); $sheet->data_validation( 'A2:D2', { validate => 'time', criteria => 'between', minimum => 'T06:00', maximum => 'T20:00', }); $sheet->write_formula( 'E2', '=(B2-A2)+(D2-C2)' ); $workbook->close() or die $!; Which kind of data_validation would check if the "end am"-value is greater than the "begin am"-value (and "end pm" greater than "begin pm")? I tried this, but it didn't work: $sheet->data_validation( 'B2', { validate => 'time', criteria => '>=', value => '=A2', }); $sheet->data_validation( 'D2', { validate => 'time', criteria => '>=', value => '=C2', });
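    One option to experiment with (an untested sketch that drops into the script above, using the module's 'custom' validation type): compare the two cells directly with a formula instead of validating against a fixed time range:

    ```perl
    # 'end am' must not be earlier than 'begin am'
    $sheet->data_validation( 'B2', {
        validate      => 'custom',
        value         => '=B2>=A2',
        error_message => 'end am is earlier than begin am',
    });

    # 'end pm' must not be earlier than 'begin pm'
    $sheet->data_validation( 'D2', {
        validate      => 'custom',
        value         => '=D2>=C2',
        error_message => 'end pm is earlier than begin pm',
    });
    ```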

    Read the article

  • Google App Engine error Object Manager has been closed

    - by newbie
    I had the following error from Google App Engine when I was trying to iterate a list in a JSP page with EL: Object Manager has been closed I solved the problem with the following code, but I don't think it is a very good solution to this problem: public List<Item> getItems() { PersistenceManager pm = getPersistenceManager(); Query query = pm.newQuery("select from " + Item.class.getName()); List<Item> items = (List<Item>) query.execute(); List<Item> items2 = new ArrayList<Item>(); // This line solved my problem Collections.copy(items, items2); // and this also pm.close(); return (List<Item>) items; } When I tried to use pm.detachCopyAll(items) it gave the same error. I understood that the detachCopyAll() method should do the same as what I did, but that method should be part of DataNucleus, so it should be used instead of my own methods. So why doesn't detachCopyAll() work at all?
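    A hedged sketch of the usual pattern: detachCopyAll() returns the detached copies rather than detaching the originals in place, and the lazy query result has to be resolved before the PersistenceManager closes, so the returned list is what should be handed to the JSP:

    ```java
    public List<Item> getItems() {
        PersistenceManager pm = getPersistenceManager();
        try {
            Query query = pm.newQuery("select from " + Item.class.getName());
            @SuppressWarnings("unchecked")
            List<Item> items = (List<Item>) query.execute();
            items.size();                                        // force the lazy result to load now
            // use the *returned* copies; needs java.util.ArrayList imported
            return new ArrayList<Item>(pm.detachCopyAll(items));
        } finally {
            pm.close();
        }
    }
    ```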

    Read the article

  • Textbox time validations (javascript)

    - by unos
    I have a textbox in which the user can enter a time (e.g. 01:00) and also a drop down box for entering AM/PM fields. (Since the AM/PM field is used, the 12-hour time format is used.) The text box allows a max entry of 5 chars only (e.g. 01:00). Please let me know how I can set the 3rd character to a default colon (:), so that the user simply has to enter only the time. How do I check whether the time entered by the user is numeric? Autocompletion feature: for example, if the user enters 1, it would automatically be set to 01:00. JavaScript validations for the 12-hour format, e.g. if the user enters 13:00 then it should change to 01:00. How can I append the text box time values with the AM/PM value selected in the drop down box? Once the values are appended, automatically populate another text box (text box 2) with the result, e.g. 01:00 + pm should be set as 01:00p in the new text box (text box 2). Any help would be appreciated.
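    A rough sketch of the normalisation steps described above (plain JavaScript, not a drop-in widget; it would be wired to the text box's onchange/onblur and the result written into text box 2):

    ```javascript
    // normalizeTime('1', 'PM')     -> "01:00p"
    // normalizeTime('13:00', 'PM') -> "01:00p"
    // normalizeTime('1:3x', 'AM')  -> null (non-numeric input rejected)
    function normalizeTime(raw, ampm) {
        var digits = raw.replace(/:/g, '');
        if (!/^[0-9]{1,4}$/.test(digits)) return null;    // numeric check
        if (digits.length <= 2) digits += '00';           // "1" -> "100" -> 01:00
        while (digits.length < 4) digits = '0' + digits;
        var h = parseInt(digits.slice(0, 2), 10);
        var m = parseInt(digits.slice(2), 10);
        if (h > 23 || m > 59) return null;
        if (h > 12) h -= 12;                              // 13:00 -> 01:00
        if (h === 0) h = 12;
        var hh = (h < 10 ? '0' : '') + h;
        var mm = (m < 10 ? '0' : '') + m;
        return hh + ':' + mm + ampm.charAt(0).toLowerCase();  // "01:00" + "PM" -> "01:00p"
    }
    ```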

    Read the article

  • Webserver horribly slow, sometimes incredibly fast

    - by dhanke
    i am running a small community ( 6000+ Members ) on a non-virtual 64-bit ubuntu 11.04 system. I am not a Linux-pro, not even advanced, i just tried to setup a webserver, which does nothing special actually. Delivering some dynamic PHP and RoR websites is its task. So it might be that my configuration files do look horrible bad. Also, i might use the wrong vocabulary, so in doubt, please ask. Having a current all-time record of 520 registered users (board-accounts, no system-users) online at same time, average server-load is about 2.0 - 5.0. Meantime (~250 users) average server load value is at about 0.4 - 0.8, sometimes, on some expensive searches a bit higher. everything fine. From time to time however, the load increases up to 120 (120.0, not 12.0 ;) ). In this time, its hard to even connect via SSH, but when i reach the server, and use top/htop/iotop to see whats happening, i cannot identify any process causing high CPU load. iotop tells me about a current reading/writing speed of about approx. 70kb/s, which is quite equal to power-off i think. Memory-Usage is max. at ~ 12GB of 16GB, so swap remains empty. now the odd (at least for me:) waiting some minutes ( since i always get a bit into a panic when this happens, it feels like 5 minutes, but i suppose its more like 20-30 minutes) and the server is back to normal. everything continues as normal. another odd fact: when i run hdparm -tT /dev/sda, i get answer like: /dev/sda: Timing cached reads: 7180 MB in 2.00 seconds = 3591.13 MB/sec Timing buffered disk reads: 348 MB in 3.02 seconds = 115.41 MB/sec when i run the same command while the server is "frozen", the answer is like /dev/sda: <- takes about 5 minutes until this line appears Timing cached reads: 7180 MB in 2.00 seconds = 3591.13 MB/sec <- 5 more minutes Timing buffered disk reads: 348 MB in 3.02 seconds = 115.41 MB/sec <- another 5 minutes so the values are the same, but the quoted time is completely wrong. using time command as prefix also tells me that ~ 15 minutes were used. I searched in dmesg, /var/log/[messages|syslog] - nothing found. /var/log/errors however tells me that: Jul 4 20:28:30 localhost kernel: [19080.671415] INFO: task php5-fpm:27728 blocked for more than 120 seconds. Jul 4 20:28:30 localhost kernel: [19080.671419] "echo 0 /proc/sys/kernel/hung_task_timeout_secs" disables this message. multiple times. now that message does tell me that php5-fpm task was blocked or did block ? - but not if that is the cause or just one of the results of that "freeze". Anyone? to cut the long story short, i dont know where even to start analyzing. So if you can give me any advice by looking at following specs and configs, or ask me to provide more information, i`d be glad. Specs: 6 Core AMD Phenom(tm) II X6 1055T Processor * 16 Gigabyte Ram 2x 1.5 TB Seagate ST1500DL003-9VT16L via SATA 3 via SoftwareRaid (i suppose) Services: (due to service --status-all, those with [ + ]) nginx Webserver 1.0.14 mySQL 5.1.63 Server Ruby on Rails 2.3.11 ( passenger-nginx-module ) php5-fpm 5.3.6-13ubuntu3.7 SSH ido2db Further services: default crontab + nightly backup. syslog-ng Website consists of 2 subdomains, forum. and www. where forum is a phpBB3.x PHP-Board, and www a Ruby on Rails 2.3.11 application (portal). Mini-Note: sometimes i notice that the forum is pretty slow, in contrast to the always-fast (except for this "freeze") portal. Both share the same Database, but the portal is using it read-only. 
The Webserver is nginx, using phusion passenger module to communicate with the ruby-application. Also, for the forum it communicates with php5-fpm via socket: relevant nginx configuration parts ( with comments/questions starting by ; ) ; in case of freeze due to too high Filesystem activity, maybe adding a limit? #worker_rlimit_nofile 50000; user www-data; ; 6 cores, so i read 6 fits. maybe already wrong? worker_processes 6; pid /var/run/nginx.pid; events { worker_connections 1024; } http { passenger_root /var/lib/gems/1.8/gems/passenger-3.0.11; passenger_ruby /usr/bin/ruby1.8; ; the forum once featured a chat, which was working w/o websockets. ; so it was a hell of pull requests (deactivated now, freeze still happening) keepalive_timeout 65; keepalive_requests 50; gzip on; server { listen 80; server_name www.domain.tld; root /var/www/domain/rails/public; passenger_enabled on; } server { listen 80; server_name forum.domain.tld; location / { root /var/www/domain/forum; index index.php; } ; satic stuff to be handled by nginx location ~* ^/style/.+.(jpg|jpeg|gif|css|png|js|ico|xml)$ { access_log off; expires 30d; root /var/www/domain/forum/; } ; now the php magic, note the "backend"-fcgi_pass location ~ .php$ { fastcgi_split_path_info ^(.+\.php)(.*)$; fastcgi_pass backend; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /var/www/domain/forum$fastcgi_script_name; include fastcgi_params; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_intercept_errors on; fastcgi_ignore_client_abort off; fastcgi_connect_timeout 60; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 256 16k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_max_temp_file_size 0; } location ~ /\.ht { deny all; } } ;the php5-fpm socket. i read that /dev/shm/ whould be the fastes place for this. bad idea in general? upstream backend { server unix:/dev/shm/phpfpm; } ... } php5-fpm settings (i changed this values due to php5-fpm error log messages higher and higher.. (freeze-problem was there before as well)* listen = /dev/shm/phpfpm user = www-data group = www-data pm = dynamic ; holy, 4000! well, shinking this value to earth-level gave me ; 100s of 502 bad gateway commands. this values were quite stable. ; since there are only max 520 users online i dont get it, why i would need ; as many children as configured here. due to keep-alive maybe? ; asking questions is easier for me since restarting server will make ; my community-members angry ;) pm.max_children = 4000 pm.start_servers = 100 pm.min_spare_servers = 50 pm.max_spare_servers = 150 pm.max_requests = 10 pm.status_path = /status ping.path = /ping ping.response = pong slowlog = log/$pool.log.slow ;should i use rlimit? ;rlimit_files = 1024 chdir = / mysql/my.cnf [client] port = 3306 socket = /var/run/mysqld/mysqld.sock [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking bind-address = 127.0.0.1 key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 myisam-recover = BACKUP ; high number, but less gives some phpBB errors. max_connections = 450 table_cache = 512 ; i read twice the cpu cores, bad? 
thread_concurrency = 12 join_buffer_size = 2084K concurrent_insert = 3 query_cache_limit = 64M query_cache_size = 512M query_cache_type = 1 log_error = /var/log/mysql/error.log log_slow_queries = /var/log/mysql/mysql-slow.log long_query_time = 2 expire_logs_days = 10 max_binlog_size = 100M low_priority_updates=1 [mysqldump] quick quote-names max_allowed_packet = 16M [isamchk] key_buffer = 16M !includedir /etc/mysql/conf.d/ I used smartctl already, hdds seem to be fine. /proc/mdstatus quotes: Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] md3 : active raid1 sda3[1] 1459264192 blocks [2/1] [_U] md1 : active raid1 sda1[0] 3911680 blocks [2/1] [U_] unused devices: ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 127727 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 127727 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited I quote some questions in my configuration files, these are not (intentional) directly problem-related, but would be nice for me to know wether they are indeed questionable or done right. One additional Fact: my MYSQL-database is at 12GB size. i dont know if that does matter, but mytop sometimes shows me 4-5 seconds long insert queries, some are 20-30 seconds long. Its just a feeling that i am unable to prove (because i dont know how), but when i disable the database, the freeze seems not to happen. Example: i created a dummy rails application to see the development log. the app made some sql-queries, reads and inserts. the log quite often was like: DbTest Load (0.3ms) SELECT * FROM `db_test` WHERE (`db_test`.`id` = 31722) LIMIT 1 SQL (0.1ms) BEGIN DbTest Update (0.3ms) UPDATE `db_test` SET `updated_at` = '2012-07-04 23:32:34' WHERE `id` = 31722 - now the log stands still for 5-60 seconds. SQL (49.1ms) COMMIT - SQL-Update time in the log does not include freeze time Rendering test/index Completed in 96ms (View: 16, DB: 59) | 200 OK [http://localhost:9000/test] Bad part is: this mini-freeze here only happens from time to time as well. note: meanwhile i cannot even upload files via scp. I currently feel like running form bad to worse and back by googling for my server-problem due to immense lack of knowledge regarding server configurations. It still makes me wonder, why those problems even appear, since 250 users a time is not such a high amount, right? So my questions: whats wrong and how to fix? ;) or: what information can i provide to make the situation more clear? can you point at some critical bad configuration-line which i should consider to catch up in the documentation? are there any tools i can run to see some possible bottlenecks? any further advice? (next to: "pay someone who knows what he does" - its a private project, server costs enough already. :)) Thanks for your time and help. Best Regards, Daniel P.S.: i renamed the configfiles to domain.tld since i dont want to have any % more load to the server until its fixed. might be a exaggeratedly thought.. P.P.S: if i asked a complete duplicate question, sorry. my search results seemed to be quite specific in their own way.
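    Since the question explicitly asks which tools to run: a few low-overhead probes (ideally from a root shell opened in advance, so they can be run while the box is frozen) that usually show what blocked tasks are waiting on. The /proc/mdstat output above also shows both md1 and md3 running degraded on a single disk ([U_] / [_U]), which is worth ruling out as the source of the I/O stall:

    ```sh
    vmstat 1 10        # watch the 'b' (blocked) and 'wa' (I/O wait) columns during a freeze
    iostat -x 1 5      # per-device utilisation and await times (needs the sysstat package)
    dmesg | grep -B 2 -A 25 'blocked for more than 120 seconds'   # kernel stack of the hung php5-fpm task
    cat /proc/mdstat   # is a rebuild/resync or a failing member behind the degraded arrays?
    ```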

    Read the article

  • Optimize php-fpm and varnish for a powerful server

    - by Jim
    My setup is: Intel® Core™ i7-2600 and RAM 16 GB DDR3 RAM varnish+nginx+php-fpm+apc for a not very heavy WordPress blog with W3 Total Cache and CDN My problem is that after 55 hits per second according to blitz.io varnish starts giving out timeouts. CPU usage at this time is hardly 1%. Free memory at all time remains 10GB+. I tried benchmarking php-fpm directly with result of 150hits/s without any timeouts. But after that the CPU usage goes 100% and it stops responding. Can you help me optimize it to handle more? As i understand nginx has nothing to do over here so i dont put its config. php-fpm config listen = /tmp/php5-fpm.sock listen.allowed_clients = 127.0.0.1 user = nginx group = nginx pm = dynamic pm.max_children = 150 pm.start_servers = 7 pm.min_spare_servers = 2 pm.max_spare_servers = 15 pm.max_requests = 500 slowlog = /var/log/php-fpm/www-slow.log php_admin_value[error_log] = /var/log/php-fpm/www-error.log php_admin_flag[log_errors] = on apc extension = apc.so apc.enabled=1 apc.shm_size=512MB apc.num_files_hint=0 apc.user_entries_hint=0 apc.ttl=7200 apc.use_request_time=1 apc.user_ttl=7200 apc.gc_ttl=3600 apc.cache_by_default=1 apc.filters apc.mmap_file_mask=/tmp/apc.XXXXXX apc.file_update_protection=2 apc.enable_cli=0 apc.max_file_size=1M apc.stat=1 apc.stat_ctime=0 apc.canonicalize=0 apc.write_lock=1 apc.report_autofilter=0 apc.rfc1867=0 apc.rfc1867_prefix =upload_ apc.rfc1867_name=APC_UPLOAD_PROGRESS apc.rfc1867_freq=0 apc.rfc1867_ttl=3600 apc.include_once_override=0 apc.lazy_classes=0 apc.lazy_functions=0 apc.coredump_unmap=0 apc.file_md5=0 apc.preload_path Varnish VCL backend default { .host = "127.0.0.1"; .port = "8080"; .connect_timeout = 6s; .first_byte_timeout = 6s; .between_bytes_timeout = 60s; } acl purgehosts { "localhost"; "127.0.0.1"; } # Called after a document has been successfully retrieved from the backend. sub vcl_fetch { # Uncomment to make the default cache "time to live" is 5 minutes, handy # but it may cache stale pages unless purged. (TODO) # By default Varnish will use the headers sent to it by Apache (the backend server) # to figure out the correct TTL. # WP Super Cache sends a TTL of 3 seconds, set in wp-content/cache/.htaccess set beresp.ttl = 24h; # Strip cookies for static files and set a long cache expiry time. 
if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") { unset beresp.http.set-cookie; set beresp.ttl = 24h; } # If WordPress cookies found then page is not cacheable if (req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)") { # set beresp.cacheable = false;#versions less than 3 #beresp.ttl>0 is cacheable so 0 will not be cached set beresp.ttl = 0s; } else { #set beresp.cacheable = true; set beresp.ttl=24h;#cache for 24hrs } # Varnish determined the object was not cacheable #if ttl is not > 0 seconds then it is cachebale if (!beresp.ttl > 0s) { # set beresp.http.X-Cacheable = "NO:Not Cacheable"; } else if ( req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)" ) { # You don't wish to cache content for logged in users set beresp.http.X-Cacheable = "NO:Got Session"; return(hit_for_pass); #previously just pass but changed in v3+ } else if ( beresp.http.Cache-Control ~ "private") { # You are respecting the Cache-Control=private header from the backend set beresp.http.X-Cacheable = "NO:Cache-Control=private"; return(hit_for_pass); } else if ( beresp.ttl < 1s ) { # You are extending the lifetime of the object artificially set beresp.ttl = 300s; set beresp.grace = 300s; set beresp.http.X-Cacheable = "YES:Forced"; } else { # Varnish determined the object was cacheable set beresp.http.X-Cacheable = "YES"; if (beresp.status == 404 || beresp.status >= 500) { set beresp.ttl = 0s; } # Deliver the content return(deliver); } sub vcl_hash { # Each cached page has to be identified by a key that unlocks it. # Add the browser cookie only if a WordPress cookie found. if ( req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)" ) { #set req.hash += req.http.Cookie; hash_data(req.http.Cookie); } } # vcl_recv is called whenever a request is received sub vcl_recv { # remove ?ver=xxxxx strings from urls so css and js files are cached. # Watch out when upgrading WordPress, need to restart Varnish or flush cache. set req.url = regsub(req.url, "\?ver=.*$", ""); # Remove "replytocom" from requests to make caching better. set req.url = regsub(req.url, "\?replytocom=.*$", ""); remove req.http.X-Forwarded-For; set req.http.X-Forwarded-For = client.ip; # Exclude this site because it breaks if cached if ( req.http.host == "sr.ituts.gr" ) { return( pass ); } # Serve objects up to 2 minutes past their expiry if the backend is slow to respond. set req.grace = 120s; # Strip cookies for static files: if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") { unset req.http.Cookie; return(lookup); } # Remove has_js and Google Analytics __* cookies. set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|has_js)=[^;]*", ""); # Remove a ";" prefix, if present. set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", ""); # Remove empty cookies. if (req.http.Cookie ~ "^\s*$") { unset req.http.Cookie; } if (req.request == "PURGE") { if (!client.ip ~ purgehosts) { error 405 "Not allowed."; } #previous version ban() was purge() ban("req.url ~ " + req.url + " && req.http.host == " + req.http.host); error 200 "Purged."; } # Pass anything other than GET and HEAD directly. if (req.request != "GET" && req.request != "HEAD") { return( pass ); } /* We only deal with GET and HEAD by default */ # remove cookies for comments cookie to make caching better. 
set req.http.cookie = regsub(req.http.cookie, "1231111111111111122222222333333=[^;]+(; )?", ""); # never cache the admin pages, or the server-status page, or your feed? you may want to..i don't if (req.request == "GET" && (req.url ~ "(wp-admin|bb-admin|server-status|feed)")) { return(pipe); } # don't cache authenticated sessions if (req.http.Cookie && req.http.Cookie ~ "(wordpress_|PHPSESSID)") { return(lookup); } # don't cache ajax requests if(req.http.X-Requested-With == "XMLHttpRequest" || req.url ~ "nocache" || req.url ~ "(control.php|wp-comments-post.php|wp-login.php|bb-login.php|bb-reset-password.php|register.php)") { return (pass); } return( lookup ); } Varnish Daemon options DAEMON_OPTS="-a :80 \ -T 127.0.0.1:6082 \ -f /etc/varnish/ituts.vcl \ -u varnish -g varnish \ -S /etc/varnish/secret \ -p thread_pool_add_delay=2 \ -p thread_pools=8 \ -p thread_pool_min=100 \ -p thread_pool_max=1000 \ -p session_linger=50 \ -p session_max=150000 \ -p sess_workspace=262144 \ -s malloc,5G" Im not sure where to start, should i for start optimize php-fpm and then go to varnish or php-fpm is at its max right now so i should start looking for the problem in varnish?

    Read the article

  • How can I control when Microsoft Security Essentials Updates Itself?

    - by David
    I'm using Microsoft Security Essentials (MSE) on a Windows Vista SP2 box. Every day in the 4 PM hour MSE updates itself: The green fortress icon in the notification area displays an animated download arrow, and my computer becomes unusably sluggish for five minutes (or more). I'm generally forced to take a coffee break or read a magazine. How can I control the time of day when this update occurs? Sometime after 9 PM would be ideal. Thanks.
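    One workaround to experiment with, sketched below: MSE itself does not appear to expose the update hour, but the definition download can be triggered on your own schedule through Task Scheduler. The MpCmdRun.exe path differs between MSE versions, so treat it as an assumption and verify it first:

    ```bat
    rem Run once from an elevated command prompt; 21:00 = 9 PM daily.
    schtasks /Create /TN "MSE signature update" /SC DAILY /ST 21:00 /RL HIGHEST ^
      /TR "\"C:\Program Files\Microsoft Security Client\MpCmdRun.exe\" -SignatureUpdate"
    ```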

    Read the article

  • Collapse Tables with specific Table ID? JavaScript

    - by medoix
    I have the below JS at the top of my page and it successfully collapses ALL tables on load. However, I am trying to figure out how to collapse only the tables with the ID of "ctable", or is there some other way of specifying which tables to make collapsible? <script type="text/javascript"> var ELpntr=false; function hideall() { locl = document.getElementsByTagName('tbody'); for (i=0;i<locl.length;i++) { locl[i].style.display='none'; } } function showHide(EL,PM) { ELpntr=document.getElementById(EL); if (ELpntr.style.display=='none') { document.getElementById(PM).innerHTML=' - '; ELpntr.style.display='block'; } else { document.getElementById(PM).innerHTML=' + '; ELpntr.style.display='none'; } } onload=hideall; </script>
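    One possible direction (a sketch): since an id has to be unique per document, mark the collapsible tables with class="ctable" instead and restrict hideall() to tbody elements inside those tables:

    ```javascript
    function hideall() {
        var tables = document.getElementsByTagName('table');
        for (var t = 0; t < tables.length; t++) {
            // only touch tables marked collapsible, e.g. <table class="ctable">
            if ((' ' + tables[t].className + ' ').indexOf(' ctable ') === -1) continue;
            var bodies = tables[t].getElementsByTagName('tbody');
            for (var i = 0; i < bodies.length; i++) {
                bodies[i].style.display = 'none';
            }
        }
    }
    ```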

    Read the article

  • perl TemplateToolkit - Can't locate object method "new" via package

    - by simnom
    Hi, I have inherited a web project that is Perl-based, and I'm attempting to set up a local testing server so changes can be made internally to the project. The server architecture: Ubuntu 9.10, php 5.2.10, mysql 5.1.37, perl 5.10.0-24ubuntu4. All the dependent modules and packages are installed, such as DateTime.pm and TemplateToolkit.pm, but running the application throws the following error message: Can't locate object method "new" via package "Template" (perhaps you forgot to load "Template"?) at ../lib//KPS/TemplateToolkit.pm line 51 The code block that this refers to is: sub new { return Template->new( INCLUDE_PATH => $KPS::Config::templatepath, ABSOLUTE => 1, DEBUG => 1, ); } If anybody is able to shed any light on this or point me in the right direction, it would be greatly appreciated. Thanks Simnom
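    The error text itself ("perhaps you forgot to load Template") points at the most likely fix, sketched here: KPS/TemplateToolkit.pm calls Template->new() but may never load the Template module, which only works when some other file happens to load it first. The surrounding module layout below is assumed; the key line is the use statement:

    ```perl
    package KPS::TemplateToolkit;

    use strict;
    use warnings;
    use Template;          # the missing load suggested by the error message

    sub new {
        return Template->new(
            INCLUDE_PATH => $KPS::Config::templatepath,
            ABSOLUTE     => 1,
            DEBUG        => 1,
        );
    }

    1;
    ```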

    Read the article

  • MembershipProvider user id

    - by niao
    Greetings, I wrote a custom MembershipProvider for my ASP.NET MVC application. I get the user as follows: public override MembershipUser GetUser(string username, bool userIsOnline) { using (CPersistanceManager pm = new CPersistanceManager()) { pm.EnsureConnectionOpen(); MembershipUser membershipUser = null; COperator oper = pm.OperatorRepository.Get(username); membershipUser = new MembershipUser(ApplicationName, oper.Username, oper.ROWGUID, oper.Email, string.Empty, string.Empty, oper.IsActive, false, DateTime.Today, DateTime.Today, DateTime.Today, DateTime.Today, DateTime.Today ); return membershipUser; } } How can I then retrieve the logged-in user's id (rowguid) in any controller?
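    A minimal sketch of the retrieval side: the third constructor argument above (oper.ROWGUID) becomes MembershipUser.ProviderUserKey, so inside any controller action something like this should work (the ViewData key is illustrative):

    ```csharp
    using System;
    using System.Web.Mvc;
    using System.Web.Security;

    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            MembershipUser user = Membership.GetUser();        // currently logged-in user
            if (user != null && user.ProviderUserKey != null)
            {
                Guid rowGuid = (Guid)user.ProviderUserKey;     // the oper.ROWGUID set by GetUser()
                ViewData["UserId"] = rowGuid;
            }
            return View();
        }
    }
    ```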

    Read the article

  • Why does a module compile by itself but fail when used from elsewhere?

    - by chrisribe
    I have a Perl module that appears to compile fine by itself, but is causing other programs to fail compilation when it is included: me@host:~/code $ perl -c -Imodules modules/Rebat/Store.pm modules/Rebat/Store.pm syntax OK me@host:~/code $ perl -c -Imodules bin/rebat-report-status Attempt to reload Rebat/Store.pm aborted Compilation failed in require at bin/rebat-report-status line 4. BEGIN failed--compilation aborted at bin/rebat-report-status line 4. The first few lines of rebat-report-status are ... 3 use Rebat; 4 use Rebat::Store; 5 use strict; ...

    Read the article

  • Oracle: set timezone for column

    - by dbf
    Hi, I need to do a date to timestamp-with-timezone migration similar to the one described here: http://stackoverflow.com/questions/1664627/migrating-oracle-date-columns-to-timestamp-with-timezone. But I need to make an additional conversion (needed to work correctly with legacy apps): for all dates we need to change the timezone to UTC and set the time to 12:00 PM. Right now dates are stored in the local database (New York) timezone. I need to convert them this way: 25/12/2009 09:12 AM (local timezone) in the date column = 25/12/2009 12:00 PM UTC in the timestamp with local time zone column. Could you advise how to set the timezone for a date value in Oracle (I found only suggestions on how to convert from one timezone to another)? (For example, in Java there is the setTimeZone method for Calendar objects.) We want to make the conversion this way: rename the old date column to NAME_BAK, create a new timestamp with local time zone column, iterate over the old column for not-null values, set the timezone to UTC and the time to 12:00 PM, and drop the old column after testing of this migration.
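    An illustrative shape for those steps (table and column names are placeholders; test on a copy first): keep the calendar day from the old DATE, force the time-of-day to 12:00, and stamp that wall-clock time as UTC with FROM_TZ:

    ```sql
    ALTER TABLE orders RENAME COLUMN order_date TO order_date_bak;
    ALTER TABLE orders ADD (order_date TIMESTAMP WITH LOCAL TIME ZONE);

    UPDATE orders
       SET order_date = FROM_TZ(CAST(TRUNC(order_date_bak) + 12/24 AS TIMESTAMP), 'UTC')
     WHERE order_date_bak IS NOT NULL;

    -- after the migration has been verified:
    -- ALTER TABLE orders DROP COLUMN order_date_bak;
    ```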

    Read the article

  • Is strftime (hour) showing wrong time?

    - by ander163
    I'm using this line to get the beginning time of the first day of the month. t = Time.now.to_date.beginning_of_month.beginning_of_day When I display this using t.strftime("%A %b %e @ %l:%m %p") it shows: Monday Feb 1 @ 12:02 AM The hour is always 12 (instead of 00), and, more weird, the minute changes to match the month in integers. For the February date, it shows 12:02 AM. I use .prior_month and .next_month on the variable to move forward or backwards in time. So when I move to June, this would display as Tuesday June 1 @ 12:06 AM. But when I just show the value of "t" using a straight t.to_s, I get this time of 00:00:00, which is what I expect: Mon Feb 01 00:00:00 -0700 2010 A similar error occurs using end_of_day, but the hour is always 11 PM and the minute is the same integer value that matches the month in integers, i.e. the time is 11:06 PM in June, 11:02 PM in February. Quirky? Admittedly a noob to Rails. Thanks for any comments or explanations.
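    For what it's worth, the time value looks correct and the format string is the likely culprit: in strftime, %m is the month and %M is the minute, which is exactly why the "minutes" track the month; %l showing 12 is the normal 12-hour rendering of midnight (use %H for a 24-hour clock). A quick sketch using the same ActiveSupport helpers as above:

    ```ruby
    t = Time.now.to_date.beginning_of_month.beginning_of_day

    t.strftime("%A %b %e @ %l:%m %p")   # => "Monday Feb 1 @ 12:02 AM"  (%m = month, hence "02")
    t.strftime("%A %b %e @ %l:%M %p")   # => "Monday Feb 1 @ 12:00 AM"  (%M = minute, as intended)
    ```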

    Read the article

  • Getting a value from a row at particular time

    - by Swetha Bindu
    I had a row in my database: starttime:4/6/2012 2:00pm, Endtime:31/12/9999, name:"swetha", status:"open"..... When I update this row I change the starttime to the current time (getdate()) and have no issues. I am running a Windows Service each day at 1am to modify a value in the row. I would like to know the status of my row at 4/6/2012 11:59 pm when my service runs. There is no need to do an update at 4/6/2012 11:59 pm, and the last update may be at any time of the day; however, my requirement is to get the status value at 4/6/2012 11:59 pm. I would like to have the query in SQL Server 2008. Can anyone please help me find a solution?
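    Since the row itself only ever holds its latest values, one common way to answer "what was the status at 11:59 PM" is to record every change in a history table and query that table by time. A sketch only: the base table name (dbo.Rows) and the columns are guesses based on the question:

    ```sql
    CREATE TABLE dbo.RowHistory (
        HistoryId  INT IDENTITY(1,1) PRIMARY KEY,
        Name       NVARCHAR(100),
        Status     NVARCHAR(20),
        ValidFrom  DATETIME NOT NULL
    );
    GO
    CREATE TRIGGER trg_Rows_Audit ON dbo.Rows AFTER INSERT, UPDATE
    AS
        INSERT INTO dbo.RowHistory (Name, Status, ValidFrom)
        SELECT Name, Status, GETDATE() FROM inserted;
    GO
    -- status as it stood at 2012-06-04 23:59
    SELECT TOP (1) Status
    FROM dbo.RowHistory
    WHERE Name = 'swetha' AND ValidFrom <= '2012-06-04T23:59:00'
    ORDER BY ValidFrom DESC;
    ```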

    Read the article

  • Wrong file encoding after Dist::Zilla

    - by xenoterracide
    How can I get the Mojibake test to pass? This might be a bug in the contributors plugin. The character does not render correctly in perldoc, but does in my vim and in the extracted git log. # Failed test 'Mojibake test for blib/lib/Pod/Spell.pm' # at /home/xenoterracide/perl5/perlbrew/perls/perl-5.18.1/lib/site_perl/5.18.1/Test/Mojibake.pm line 168. # Non-UTF-8 unexpected in blib/lib/Pod/Spell.pm, line 431 (POD) Here's a snippet from the source, which should probably be looked at directly since copy-paste may not preserve the encoding issue. =item * Olivier Mengué <[email protected]> =back A little more vim exploration shows that :set fileencoding is being changed to latin1. Editing the file in vim seems to fix this, but since the file is being generated, how can I get it generated with the correct encoding?

    Read the article

  • Passing a formula field value into a Crystal Reports command

    - by DarkDeny
    Hello everybody! I have a report in which I calculate the starting date via Formula Fields. Then I need to pass the calculated value into a subreport, where I can use it as one of the parameters for the stored procedure in my database. The problem is that, for some reason, the value of the corresponding parameter is empty (or null, I am not sure exactly) when the database is queried. What can be wrong in my report template? The formula I use is: ToText( ShiftDateTime ({@StartDate}, {?TimeZone}, "UTC,0")) The name of this formula field is StartDate. I pass it into the subreport as Pm-?@StartDateUTC (also tried Pm-@StartDateUTC). I am sure that @StartDate has the correct value (tested in preview with this field in the report), and Pm-?@StartDateUTC does have a value, but when CR accesses the DB the result is an empty field value...

    Read the article

  • Database Table design for Weekdays and Values

    - by Ved
    Guys, I would love to hear your ideas on this. I am creating a table which will store weekly shift hours for employees. For example: Jon works 9:00am to 9:00pm Monday, 10:00 AM to 5:00PM on Tuesday, etc. How should I go about designing this table? I thought of 2 options. (1) For every record I can have two lines for AM and PM and have columns for Monday to Sunday ID | ParentID | Type | Monday | Tuesday ......Sunday 1 | 1 | AM | 9:00 | 9:00 12:00 2 | 1 | PM | 9:00 | 5:00 02:00 3 | 2 | AM | 10:00 | 4 | 2 | PM | 10:00 | (2) I can save the preference in XML format in one column ID | Info 1 | |Hours|Monday|9:00 AM - 9:00PM|Monday|Tuesday........|Hours Any better ideas? Thanks
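    A third option worth weighing (a sketch; names and types are illustrative): one row per employee per weekday with real time columns, which avoids both the AM/PM row pairs of option 1 and the hard-to-query XML blob of option 2:

    ```sql
    CREATE TABLE EmployeeShift (
        EmployeeId  INT      NOT NULL,
        Weekday     TINYINT  NOT NULL,   -- 1 = Monday ... 7 = Sunday
        StartTime   TIME     NOT NULL,   -- e.g. '09:00'
        EndTime     TIME     NOT NULL,   -- e.g. '21:00'
        CONSTRAINT PK_EmployeeShift PRIMARY KEY (EmployeeId, Weekday),
        CONSTRAINT CK_ShiftOrder CHECK (EndTime > StartTime)
    );
    ```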

    Read the article
