Search Results

Search found 10731 results on 430 pages for 'k day'.


  • SSIS - user variable used in derived column transform is not available - in some cases

    - by soo
    Unfortunately I don't have a repro for my issue, but I thought I would try to describe it in case it sounds familiar to someone... I am using SSIS 2005, SP2. My package has a package-scope user variable - let's call it user_var.

    The first step in the control flow is an Execute SQL task which runs a stored procedure. All that SP does is insert a record in a SQL table (with an identity column) and then go back and get the max ID value. The Execute SQL task saves this output into user_var. The control flow then has a Data Flow Task - it goes and gets some source data, has a derived column which sets a column called run_id to user_var - and saves the data to a SQL destination.

    In most cases (this template is used for many packages, running every day) this all works great. All of the destination records created get set with a correct run_id. However, in some cases, there is a set of the destination data that does not get run_id equal to user_var, but instead gets a value of 0 (0 is the default value for user_var). I have 2 instances where this has happened, but I can't make it happen on demand. In both cases, it was just less than 10,000 records that had run_id = 0. Since SSIS writes data out in 10,000-record blocks, this really makes me think that, for the first block of data written out, user_var was not yet set; then, after that first block, for the rest of the data, run_id was set to a correct value. But control passed on to my data flow from the Execute SQL task - it would have seemed reasonable to me that it wouldn't go on until the SP had completed and user_var was set. Maybe it just runs the SP, but doesn't wait for it to complete?

    In both cases where this has happened there seemed to be a few packages hitting the table to get a new user_var at about the same time. And in both cases lots of data was written (40 million rows, 60 million rows) - my thinking is that that means the writes were happening for a while. Sorry to be both long-winded AND vague. A winning combination! Does this sound familiar to anyone? Thanks.


  • Does this type of function or technique have a name?

    - by DHR
    Hi there, I'm slightly new to programming - it's more of a hobby for me. I am wondering if the following logic or technique has a specific name or term. My current project has 7 check boxes, one for each day of the week. I needed an easy way to save which boxes were checked. The following is the method I use to save the checked boxes as a single number: each checkbox gets a value that is double that of the previous checkbox. When I want to find out which boxes are checked, I work backwards and see how many times I can divide the total value by each checkbox value.

        private int SetSelectedDays()
        {
            int selectedDays = 0;
            selectedDays += (dayMon.Checked) ? 1 : 0;
            selectedDays += (dayTue.Checked) ? 2 : 0;
            selectedDays += (dayWed.Checked) ? 4 : 0;
            selectedDays += (dayThu.Checked) ? 8 : 0;
            selectedDays += (dayFri.Checked) ? 16 : 0;
            selectedDays += (daySat.Checked) ? 32 : 0;
            selectedDays += (daySun.Checked) ? 64 : 0;
            return selectedDays;
        }

        private void SelectedDays(int n)
        {
            if ((n / 64 >= 1) & !(n / 64 >= 2)) { n -= 64; daySun.Checked = true; }
            if ((n / 32 >= 1) & !(n / 32 >= 2)) { n -= 32; daySat.Checked = true; }
            if ((n / 16 >= 1) & !(n / 16 >= 2)) { n -= 16; dayFri.Checked = true; }
            if ((n / 8 >= 1) & !(n / 8 >= 2)) { n -= 8; dayThu.Checked = true; }
            if ((n / 4 >= 1) & !(n / 4 >= 2)) { n -= 4; dayWed.Checked = true; }
            if ((n / 2 >= 1) & !(n / 2 >= 2)) { n -= 2; dayTue.Checked = true; }
            if ((n / 1 >= 1) & !(n / 1 >= 2)) { n -= 1; dayMon.Checked = true; }
            if (n > 0)
            {
                //log event
            }
        }

    The method works well for what I need it for; however, if you do see another way of doing this, or a better way of writing it, I would be interested in your suggestions.
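
    This encoding has a standard name: it is a bit field (or bitmask), with each checkbox mapped to one bit of an integer. A minimal sketch of the same idea in Python, using bitwise operators instead of the paired division tests (the day names are illustrative, not from the original code):

        # Bitmask encode/decode sketch; names are illustrative, not from the post.
        DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

        def encode(checked):
            """Pack a set of day names into one integer, one bit per day."""
            value = 0
            for i, day in enumerate(DAYS):
                if day in checked:
                    value |= 1 << i          # set bit i: 1, 2, 4, 8, ...
            return value

        def decode(value):
            """Unpack the integer back into the set of checked days."""
            return {day for i, day in enumerate(DAYS) if value & (1 << i)}

        assert decode(encode({"Mon", "Wed", "Fri"})) == {"Mon", "Wed", "Fri"}

    The same membership test works in C# as (n & mask) != 0, which replaces each pair of division checks in the question with a single bitwise AND.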


  • Need advice about pointers and time elapsed program. How to fix invalid operands and cannot convert errors?

    - by user1781382
    I am trying to write a program that tells the difference between two times the user inputs, and I am not sure how to go about this. I get these errors:

        Line 27|error: invalid operands of types 'int' and 'const MyTime*' to binary 'operator-'
        Line 39|error: cannot convert 'MyTime' to 'const MyTime*' for argument '1' to 'int DetermineElapsedTime(const MyTime*, const MyTime*)'

    I also need a lot of help with this problem. I don't have a good curriculum, and my class textbook is like CliffsNotes for programming. This will be my last class at this university. The C++ textbook I use (my own, not for class) is Sams' C++ in One Hour a Day.

        #include <iostream>
        #include <cstdlib>
        #include <cstring>

        using namespace std;

        struct MyTime { int hours, minutes, seconds; };

        int DetermineElapsedTime(const MyTime *t1, const MyTime *t2);

        long t1, t2;

        int DetermineElapsedTime(const MyTime *t1, const MyTime *t2)
        {
            return ((int)t2 - t1);
        }

        int main(void)
        {
            char delim1, delim2;
            MyTime tm, tm2;
            cout << "Input two formats for the time. Separate each with a space. Ex: hr:min:sec\n";
            cin >> tm.hours >> delim1 >> tm.minutes >> delim2 >> tm.seconds;
            cin >> tm2.hours >> delim1 >> tm2.minutes >> delim2 >> tm2.seconds;
            DetermineElapsedTime(tm, tm2);
            return 0;
        }

    I have to fix the errors first. Anyone have any ideas?
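
    For what the two errors mean: DetermineElapsedTime takes pointers, so the call needs addresses (DetermineElapsedTime(&tm, &tm2)), and (int)t2 - t1 subtracts one pointer from a casted pointer rather than computing a duration. The usual approach is to convert each reading to total seconds first and subtract. A sketch of that arithmetic (shown in Python for brevity; the h, m, s field order matches the struct):

        # Elapsed-time arithmetic: convert h:m:s to total seconds, then subtract.
        def to_seconds(h, m, s):
            return h * 3600 + m * 60 + s

        def elapsed(t1, t2):
            """Difference between two clock readings, in seconds."""
            return to_seconds(*t2) - to_seconds(*t1)

        print(elapsed((9, 30, 0), (11, 45, 30)))   # 8130 seconds

    The C++ version is the same three multiplications inside DetermineElapsedTime, reading t1->hours, t1->minutes, and t1->seconds through the pointers.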


  • Changing a select input to a checkbox acting as an on/off toggle switch in Rails

    - by Ribena
    I have a set of 7 dropdown inputs allowing admins to say whether they are open or closed for business on a given day. I'd like that changed to 7 open/closed switches (presumably styled checkboxes?) but can't figure out how to do this! Here are the relevant bits of code I currently have (prior to any change):

    app/view/backend/inventory_pool/edit.html.haml:

        - content_for :title, @inventory_pool
        = form_for [:backend, @inventory_pool], html: {name: "form"} do |f|
          .content
            - if is_admin?
              %a.button{:href => root_path}= _("Cancel")
              %button.button{:type => :submit}= _("Save %s") % _("Inventory Pool")
            %section
              %h2= _("Basic Information")
              .inner
                .field.text
                  .key
                    %h3= "#{_("Print Contracts")}"
                    %p.description
                  .value
                    .input
                      %input{type: "checkbox", name: "inventory_pool[print_contracts]", checked: @inventory_pool.print_contracts}
            %section#workdays
              %h2= _("Workdays")
              .inner
                - [1,2,3,4,5,6,0].each do |i|
                  .field.text
                    .key
                      %h3= "#{I18n.t('date.day_names')[i]}"
                    .value
                      .input
                        %select{:name => "store[workday_attributes][workdays][]"}
                          %option{:label => _("Open"), :value => Workday::WORKDAYS[i]}= _("Open")
                          %option{:label => _("Closed"), :value => "", :selected => @store.workday.closed_days.include?(i) ? true : nil}= _("Closed")

    app/models/workday.rb:

        class Workday < ActiveRecord::Base
          belongs_to :inventory_pool

          WORKDAYS = ["sunday", "monday", "tuesday", "wednesday", "thursday", "friday", "saturday"]

          def is_open_on?(date)
            return false if date.nil?
            case date.wday
            when 1
              return monday
            when 2
              return tuesday
            when 3
              return wednesday
            when 4
              return thursday
            when 5
              return friday
            when 6
              return saturday
            when 0
              return sunday
            else
              return false # Should not be reached
            end
          end

          def closed_days
            days = []
            days << 0 unless sunday
            days << 1 unless monday
            days << 2 unless tuesday
            days << 3 unless wednesday
            days << 4 unless thursday
            days << 5 unless friday
            days << 6 unless saturday
            days
          end

          def workdays=(wdays)
            WORKDAYS.each {|workday| write_attribute(workday, wdays.include?(workday) ? true : false)}
          end
        end

    And in app/controllers/backend/inventory_pools_controller I have this (abridged):

        def update
          @inventory_pool ||= InventoryPool.find(params[:id])
          process_params params[:inventory_pool]
        end

        def process_params ip
          ip[:print_contracts] ||= "false" # unchecked checkboxes are *not* being sent
          ip[:workday_attributes][:workdays].delete "" if ip[:workday_attributes]
        end


  • IO::Pipe - close(<handle>) does not set $?

    - by danboo
    My understanding is that closing the handle for an IO::Pipe object should be done with the method ($fh->close) and not the built-in (close($fh)). The other day I goofed and used the built-in out of habit on an IO::Pipe object that was opened to a command that I expected to fail. I was surprised when $? was zero, and my error checking wasn't triggered. I realized my mistake: if I use the built-in, IO::Pipe can't perform the waitpid() and can't set $?. But what I was surprised by was that perl seemed to still close the pipe without setting $? via the core. I worked up a little test script to show what I mean:

        use 5.012;
        use warnings;
        use IO::Pipe;

        say 'init pipes:';
        pipes();

        my $fh = IO::Pipe->reader(q(false));

        say 'post open pipes:';
        pipes();

        say 'return: ' . $fh->close;
        #say 'return: ' . close($fh);
        say 'status: ' . $?;
        say q();

        say 'post close pipes:';
        pipes();

        sub pipes {
            for my $fd ( glob("/proc/self/fd/*") ) {
                say readlink($fd) if -p $fd;
            }
            say q();
        }

    When using the method, it shows the pipe being gone after the close and $? is set as I expected:

        init pipes:

        post open pipes:
        pipe:[992006]

        return: 1
        status: 256

        post close pipes:

    And when using the built-in, it also appears to close the pipe, but does not set $?:

        init pipes:

        post open pipes:
        pipe:[952618]

        return: 1
        status: 0

        post close pipes:

    It seems odd to me that the built-in results in the pipe closure, but doesn't set $?. Can anyone help explain the discrepancy? Thanks!
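
    The distinction at work: IO::Pipe's close method knows the handle came from its own fork, so it can waitpid on the child and store the exit status in $?, whereas with an IO::Pipe handle the built-in apparently just closes the descriptor without reaping the child (the pid is tracked by IO::Pipe itself, not by the core). A rough analogue of that reap-or-not behaviour, sketched in Python rather than Perl internals:

        # Closing the pipe alone does not reap the child; the exit status
        # only materialises after an explicit wait().
        import subprocess

        proc = subprocess.Popen(["false"], stdout=subprocess.PIPE)
        proc.stdout.close()      # like the core close(): the fd goes away...
        print(proc.returncode)   # None - nobody has waited on the child yet
        print(proc.wait())       # 1 - the wait is what collects the status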


  • Installing a clean Python 2.6 on SuSE (SLES) 11 using system-wide libraries

    - by optilude
    Hi, I've spent most of the day on this, and it is driving me absolutely insane. On all other Unixes I've used, this is a walk in the park, but SLES 11 has me dumbfounded. I need to build Zope on SLES 11 64-bit:

        Linux <name> 2.6.27.45-0.1-default #1 SMP 2010-02-22 16:49:47 +0100 x86_64 x86_64 x86_64 GNU/Linux

    I first tried to just use the YaST-installed Python 2.6. I've also installed python-devel, libjpeg-devel, readline-devel, libopenssl-devel, libz2-devel, zlib-devel, and libgcrypt-devel. The global python2.6 has a lot of cruft in it, and seems to execute stuff in /etc/pythonstart when I use it, which doesn't help. However, the error I get is this:

        Getting distribution for 'Zope2==2.12.3'.
        src/AccessControl/cAccessControl.c:596: warning: initialization from incompatible pointer type
        src/AccessControl/cAccessControl.c:598: warning: ‘intargfunc’ is deprecated
        src/AccessControl/cAccessControl.c:598: warning: initialization from incompatible pointer type
        src/AccessControl/cAccessControl.c:599: warning: ‘intargfunc’ is deprecated
        src/AccessControl/cAccessControl.c:599: warning: initialization from incompatible pointer type
        src/AccessControl/cAccessControl.c:600: warning: ‘intintargfunc’ is deprecated
        src/AccessControl/cAccessControl.c:600: warning: initialization from incompatible pointer type
        src/AccessControl/cAccessControl.c:601: warning: initialization from incompatible pointer type
        src/AccessControl/cAccessControl.c:602: warning: initialization from incompatible pointer type
        src/AccessControl/cAccessControl.c:606: warning: ‘intargfunc’ is deprecated
        src/AccessControl/cAccessControl.c:606: warning: initialization from incompatible pointer type
        /usr/lib64/gcc/x86_64-suse-linux/4.3/../../../../x86_64-suse-linux/bin/ld: skipping incompatible /usr/lib/libpython2.6.so when searching for -lpython2.6
        /usr/lib64/gcc/x86_64-suse-linux/4.3/../../../../x86_64-suse-linux/bin/ld: cannot find -lpython2.6
        collect2: ld returned 1 exit status
        error: Setup script exited with error: command 'gcc' failed with exit status 1
        An error occurred when trying to install Zope2 2.12.3. Look above this message for any errors that were output by easy_install.

    I don't know what "incompatible" is referring to here; my guess would be the hardware architecture, but I'm not sure what's incompatible with what in the statement above. I've had problems with system-installed Pythons before, so I tried to compile my own (hence the list of -devel packages above), downloading the Python 2.6 tarball and running:

        ./configure --disable-tk --prefix=${HOME}/python
        make
        make install

    This installs, but it seems to be unable to find any system-wide libraries. Here's a sample interpreter session:

        Python 2.6.5 (r265:79063, Mar 29 2010, 17:04:12)
        [GCC 4.3.2 [gcc-4_3-branch revision 141291]] on linux2
        Type "help", "copyright", "credits" or "license" for more information.
        Traceback (most recent call last):
          File "/etc/pythonstart", line 7, in <module>
            import readline
        ImportError: No module named readline
        >>> from hashlib import md5
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "/home/osc/python-2.6/lib/python2.6/hashlib.py", line 136, in <module>
            md5 = __get_builtin_constructor('md5')
          File "/home/osc/python-2.6/lib/python2.6/hashlib.py", line 63, in __get_builtin_constructor
            import _md5
        ImportError: No module named _md5

    Both readline and hashlib (via libgcrypt) should be installed, and the relevant -devel packages are also installed. On Ubuntu or OS X, this works just fine. On SuSE, no luck.
    Any help greatly appreciated! Martin
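
    The interpreter session above suggests the optional extension modules were silently skipped during make, which usually means configure did not find the matching headers (or, possibly, found only 32-bit libraries in /usr/lib rather than /usr/lib64, as the linker message hints). A quick way to enumerate what is missing is to run a self-check with the freshly built interpreter; the module list here is illustrative:

        # Run with the newly built interpreter to list extension modules that
        # failed to build (typically missing -devel headers at configure time).
        mods = ["readline", "zlib", "bz2", "_ssl", "_md5", "_hashlib", "crypt"]
        for name in mods:
            try:
                __import__(name)
                print("OK      " + name)
            except ImportError as exc:
                print("MISSING %s (%s)" % (name, exc))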


  • Dealing with HTTP w00tw00t attacks

    - by Saif Bechan
    I have a server with Apache, and I recently installed mod_security2 because I get attacked a lot by this. My Apache version is 2.2.3 and I use mod_security2.c.

    These were the entries from the error log:

        [Wed Mar 24 02:35:41 2010] [error] [client 88.191.109.38] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
        [Wed Mar 24 02:47:31 2010] [error] [client 202.75.211.90] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
        [Wed Mar 24 02:47:49 2010] [error] [client 95.228.153.177] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
        [Wed Mar 24 02:48:03 2010] [error] [client 88.191.109.38] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)

    Here are the errors from the access_log:

        202.75.211.90 - - [29/Mar/2010:10:43:15 +0200] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 392 "-" "-"
        211.155.228.169 - - [29/Mar/2010:11:40:41 +0200] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 392 "-" "-"
        211.155.228.169 - - [29/Mar/2010:12:37:19 +0200] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 392 "-" "-"

    I tried configuring mod_security2 like this:

        SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind"
        SecFilterSelective REQUEST_URI "\w00tw00t\.at\.ISC\.SANS"
        SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS"
        SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:"
        SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:\)"

    The thing in mod_security2 is that SecFilterSelective cannot be used; it gives me errors. Instead I use rules like this:

        SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind"
        SecRule REQUEST_URI "\w00tw00t\.at\.ISC\.SANS"
        SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS"
        SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:"
        SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:\)"

    Even this does not work. I don't know what to do anymore. Anyone have any advice?

    Update 1: I see that nobody can solve this problem using mod_security. So far using iptables seems like the best option, but I think the rule file will become extremely large because the IPs change several times a day. I came up with 2 other solutions; can someone comment on whether they are good or not? The first solution that comes to my mind is excluding these attacks from my Apache error logs. This will make it easier for me to spot other urgent errors as they occur, without having to sift through a long log. The second option is better, I think: blocking hosts whose requests are not sent in the correct way. In this example the w00tw00t attack is sent without a hostname, so I think I can block hosts whose requests are not in the correct form.

    Update 2: After going through the answers I came to the following conclusions. Custom logging for Apache will consume some unnecessary resources, and if there really is a problem you will probably want to look at the full log without anything missing. It is better to just ignore the hits and concentrate on a better way of analyzing your error logs; using filters on your logs is a good approach for this.

    Final thoughts on the subject: the attack mentioned above will not reach your machine if you at least have an up-to-date system, so there are basically no worries. It can be hard to separate all the bogus attacks from the real ones after a while, because both the error logs and access logs get extremely large. Preventing this from happening in any way will cost you resources, and it is good practice not to waste your resources on unimportant stuff. The solution I use now is Linux logwatch. It sends me summaries of the logs, filtered and grouped. This way you can easily separate the important from the unimportant. Thank you all for the help, and I hope this post can be helpful to someone else too.
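
    Following the log-filtering conclusion above, a minimal sketch of the idea: strip the w00tw00t noise out of an error log before reading it (the file paths are illustrative; logwatch or a simple grep -v achieve the same thing):

        # Strip w00tw00t noise from an Apache error log (paths are illustrative).
        import re

        NOISE = re.compile(r"w00tw00t\.at\.ISC\.SANS")

        with open("/var/log/apache2/error_log") as src, \
             open("error_log.filtered", "w") as dst:
            for line in src:
                if not NOISE.search(line):
                    dst.write(line)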


  • Does Apache ever give incorrect "out of threads" errors?

    - by Eli Courtwright
    Lately our Apache web server has been giving us this error multiple times per day:

        [Tue Apr 06 01:07:10 2010] [error] Server ran out of threads to serve requests. Consider raising the ThreadsPerChild setting

    We raised our ThreadsPerChild setting from 50 to 100, but we still get the error. Our access logs indicate that these errors never even happen at periods of high load. For example, here's an excerpt from our access log (IP addresses and some URLs are edited for privacy). As you can see, the above error happened at 1:07 and only a small handful of requests occurred in the several minutes leading up to the error:

        99.88.77.66 - - [06/Apr/2010:00:59:33 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/images/ui-icons_222222_256x240.png HTTP/1.1" 304 -
        99.88.77.66 - - [06/Apr/2010:00:59:34 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/images/ui-bg_glass_75_dadada_1x400.png HTTP/1.1" 200 111
        99.88.77.66 - - [06/Apr/2010:00:59:34 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/images/ui-bg_glass_75_dadada_1x400.png HTTP/1.1" 200 111
        99.88.77.66 - mpeu [06/Apr/2010:00:59:40 -0400] "GET /some/dynamic/content HTTP/1.1" 200 145049
        55.44.33.22 - mpeu [06/Apr/2010:01:06:56 -0400] "GET /other/dynamic/content HTTP/1.1" 200 12311
        55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/jquery-ui-1.7.1.custom.css HTTP/1.1" 304 -
        55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/js/jquery-1.3.2.min.js HTTP/1.1" 304 -
        55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/js/jquery-ui-1.7.1.custom.min.js HTTP/1.1" 304 -
        55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/jquery.tablesorter.min.js HTTP/1.1" 304 -
        55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/date.js HTTP/1.1" 304 -
        55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/pdfs/image1.gif HTTP/1.1" 304 -
        55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/pdfs/image2.png HTTP/1.1" 304 -
        55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/pdfs/image3.png HTTP/1.1" 304 -
        55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/pdfs/image4.png HTTP/1.1" 304 -
        55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/pdfs/image5.png HTTP/1.1" 304 -
        55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/pdfs/image6.png HTTP/1.1" 304 -
        55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/pdfs/image7.png HTTP/1.1" 304 -
        55.44.33.22 - - [06/Apr/2010:01:06:57 -0400] "GET /WebRepository/pdfs/image8.png HTTP/1.1" 304 -
        55.44.33.22 - - [06/Apr/2010:01:06:57 -0400] "GET /WebRepository/pdfs/image9.png HTTP/1.1" 304 -
        55.44.33.22 - - [06/Apr/2010:01:06:57 -0400] "GET /WebRepository/pdfs/imageA.png HTTP/1.1" 304 -
        55.44.33.22 - - [06/Apr/2010:01:06:57 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/images/ui-bg_flat_75_ffffff_40x100.png HTTP/1.1" 304 -
        55.44.33.22 - - [06/Apr/2010:01:06:59 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/images/ui-bg_highlight-soft_75_cccccc_1x100.png HTTP/1.1" 304 -
        55.44.33.22 - - [06/Apr/2010:01:06:59 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/images/ui-bg_glass_75_e6e6e6_1x400.png HTTP/1.1" 200 110
        55.44.33.22 - - [06/Apr/2010:01:06:59 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/images/ui-bg_glass_75_e6e6e6_1x400.png HTTP/1.1" 200 110
        11.22.33.44 - mpeu [06/Apr/2010:01:18:03 -0400] "GET /other/dynamic/content HTTP/1.1" 200 12311
        11.22.33.44 - - [06/Apr/2010:01:18:03 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/js/jquery-1.3.2.min.js HTTP/1.1" 304 -
        11.22.33.44 - - [06/Apr/2010:01:18:04 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/jquery-ui-1.7.1.custom.css HTTP/1.1" 200 27374
        11.22.33.44 - - [06/Apr/2010:01:18:04 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/js/jquery-ui-1.7.1.custom.min.js HTTP/1.1" 304 -
        11.22.33.44 - - [06/Apr/2010:01:18:04 -0400] "GET /WebRepository/jquery.tablesorter.min.js HTTP/1.1" 200 12795
        11.22.33.44 - - [06/Apr/2010:01:18:04 -0400] "GET /WebRepository/date.js HTTP/1.1" 200 25809

    For what it's worth, we're running the version of Apache that ships with Oracle 10g (some 2.0 version), and we're using mod_plsql to generate our dynamic content. Since the Apache server runs as a separate process and the database doesn't record any problems when this error occurs, I'm doubtful that Oracle is the problem. Unfortunately, the errors are freaking out our sysadmins, who are inclined to blame any and all problems which occur with the server on this error. Is this a known bug in Apache that I simply haven't been able to find any reference to through Google?
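
    One way to back the "never at high load" claim with numbers is to bucket the access log by minute and compare the request rate around each error. A rough sketch (the log path and the combined-log timestamp layout are assumed from the excerpt above):

        # Requests per minute from a combined-format access log (path assumed).
        from collections import Counter

        per_minute = Counter()
        with open("access_log") as f:
            for line in f:
                try:
                    stamp = line.split("[", 1)[1].split("]", 1)[0]  # e.g. 06/Apr/2010:01:06:56 -0400
                except IndexError:
                    continue
                per_minute[stamp[:17]] += 1   # truncate to dd/Mon/yyyy:hh:mm

        for minute, hits in sorted(per_minute.items()):
            print(minute, hits)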


  • pecl-ssh2-0.11 Freebsd Compile error after upgrading to php 5.3.2

    - by penfold45
    Hi, I've been looking for answers for this all day and can find nothing to solve my issue. I also came across a question about this port on Server Fault that I just answered, which will hopefully help someone else. My problem is this: while running "make" in /usr/ports/security/pecl-ssh2 I get this error:

        === Building for pecl-ssh2-0.11
        /bin/sh /usr/ports/security/pecl-ssh2/work/ssh2-0.11/libtool --mode=compile cc -I. -I/usr/ports/security/pecl-ssh2/work/ssh2-0.11 -DPHP_ATOM_INC -I/usr/ports/security/pecl-ssh2/work/ssh2-0.11/include -I/usr/ports/security/pecl-ssh2/work/ssh2-0.11/main -I/usr/ports/security/pecl-ssh2/work/ssh2-0.11 -I/usr/local/include/php -I/usr/local/include/php/main -I/usr/local/include/php/TSRM -I/usr/local/include/php/Zend -I/usr/local/include/php/ext -I/usr/local/include/php/ext/date/lib -I/usr/local/include -I/usr/local/include -DHAVE_CONFIG_H -O2 -pipe -fno-strict-aliasing -c /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c -o ssh2.lo
        cc -I. -I/usr/ports/security/pecl-ssh2/work/ssh2-0.11 -DPHP_ATOM_INC -I/usr/ports/security/pecl-ssh2/work/ssh2-0.11/include -I/usr/ports/security/pecl-ssh2/work/ssh2-0.11/main -I/usr/ports/security/pecl-ssh2/work/ssh2-0.11 -I/usr/local/include/php -I/usr/local/include/php/main -I/usr/local/include/php/TSRM -I/usr/local/include/php/Zend -I/usr/local/include/php/ext -I/usr/local/include/php/ext/date/lib -I/usr/local/include -I/usr/local/include -DHAVE_CONFIG_H -O2 -pipe -fno-strict-aliasing -c /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c -fPIC -DPIC -o .libs/ssh2.o
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c: In function 'zif_ssh2_methods_negotiated':
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:502: warning: passing argument 4 of 'add_assoc_string_ex' discards qualifiers from pointer target type
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:503: warning: passing argument 4 of 'add_assoc_string_ex' discards qualifiers from pointer target type
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:507: warning: passing argument 4 of 'add_assoc_string_ex' discards qualifiers from pointer target type
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:508: warning: passing argument 4 of 'add_assoc_string_ex' discards qualifiers from pointer target type
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:509: warning: passing argument 4 of 'add_assoc_string_ex' discards qualifiers from pointer target type
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:510: warning: passing argument 4 of 'add_assoc_string_ex' discards qualifiers from pointer target type
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:515: warning: passing argument 4 of 'add_assoc_string_ex' discards qualifiers from pointer target type
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:516: warning: passing argument 4 of 'add_assoc_string_ex' discards qualifiers from pointer target type
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:517: warning: passing argument 4 of 'add_assoc_string_ex' discards qualifiers from pointer target type
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:518: warning: passing argument 4 of 'add_assoc_string_ex' discards qualifiers from pointer target type
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c: In function 'zif_ssh2_poll':
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:891: error: 'zval' has no member named 'is_ref'
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:891: error: 'zval' has no member named 'refcount'
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:901: error: 'zval' has no member named 'is_ref'
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:902: error: 'zval' has no member named 'refcount'
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c: In function 'zif_ssh2_publickey_add':
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:1011: error: 'zval' has no member named 'is_ref'
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:1012: error: 'zval' has no member named 'refcount'
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:1044: warning: passing argument 1 of '_efree' discards qualifiers from pointer target type
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c: In function 'zif_ssh2_publickey_list':
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:1103: warning: passing argument 4 of 'add_assoc_stringl_ex' discards qualifiers from pointer target type
        /usr/ports/security/pecl-ssh2/work/ssh2-0.11/ssh2.c:1104: warning: passing argument 4 of 'add_assoc_stringl_ex' discards qualifiers from pointer target type
        *** Error code 1
        Stop in /usr/ports/security/pecl-ssh2/work/ssh2-0.11.
        *** Error code 1
        Stop in /usr/ports/security/pecl-ssh2.

    I am trying to recompile this port after upgrading from PHP 5.2.12 to PHP 5.3.2, which was released on FreeBSD over the weekend. I have run out of ideas and steam with this, so if anyone has any ideas about what this might be I would be truly grateful.


  • Why do GPUs overheat?

    - by JAD
    About a year ago, I added a 9800GT (1 GB version) and a Corsair CX500 PSU to an HP M8000N computer. A few weeks ago, the HDD overheated and I decided to transfer the GPU and PSU to a new build, which consists of:

        i3 @ 3.3GHz
        Gigabyte H61 Micro ATX Mobo
        4GB RAM
        500GB WD HDD
        DVD RW Drive
        Cooler Master Elite 430 Tower

    Once I had Win7 up and running, I installed all the essential drivers that came with the Gigabyte Mobo CD. However, whenever I tried installing the Graphics Media Accelerator driver, the computer would crash and enter an endless boot sequence on the next startup. I skipped installing this driver and installed the CD driver for the 9800GT, which by now is a year old. Everything was working fine; WEI rated my GPU at 6.6 for graphics and Aero performance. However, after updating my Nvidia drivers to the latest, WEI dropped my rating to 3.3 for Aero and 4.7 for graphics performance. Just to make sure that everything was OK, I ran Bad Company 2 on medium settings. The first few minutes ran just fine at a smooth framerate, so I dismissed this as Windows being Windows. About 6 hours later, I ran BC2 again. This time I averaged anywhere from 2-5 FPS. I checked the GPU temperature through GPU-Z, and it came back as 120C. The problem with this is that the computer had been on for six hours up to that point. Wouldn't the card have experienced a reactor core meltdown a lot sooner than that? Granted, the computer was "sleeping" some of the time, but still...

    The next day I took out a temperature gun and ran some tests. I would point the laser at a very specific area on the reverse side of the card (not the fan or "front"), and compare the temp reading with GPU-Z. After leaving the system on idle for a few minutes, I ran BC2 twice. Here are the results:

        GPU-Z Reading | Temp Gun Reading | Time
        Null          | 22.3°C           | Comp is off
        53°C          | 33.5°C           | 1:49
        78°C          | 46°C             | 1:53  (First BC2 run; good framerate)
        102°C         | 64.6°C           | 2:01  (System is again on idle)
        113°C         | 64.8°C           | 2:10
        119°C         | 71.8°C           | 2:17  (Second BC2 run; poor framerate)

    I should also mention that I took a temp recording of another part of the GPU from 2:01-2:17. The temp in this area jumped from 75°C to 82.9°C in that time frame. This pretty much confirms that GPU-Z is reporting the temperature accurately, and the card is overheating. But I'd like to know why; the card is doing nothing and still the temperature climbs at a steady rate. I thoroughly cleaned the GPU and PSU with a can of compressed air when I salvaged them from the old HP M8000N computer, so dust can't be the issue. Similarly, the rest of the computer is brand new. I installed various Nvidia drivers, but no luck. It seems strange to me that a year-old card is suddenly failing on me; aren't they supposed to last at least two years? Could this be a driver issue? Is the motherboard faulty? Could the PSU be overfeeding the card on voltage? Neither case seems likely, as the CPU, RAM, and otherwise the rest of the comp has worked flawlessly and has stayed well within respectable temp ranges (the i3 lingers around 50C, the HDD stays at 30C, and so does the PSU). How can I pinpoint the issue?
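
    As a side note, the table above is enough to quantify the idle heating described; a trivial sketch of that arithmetic (GPU-Z values re-entered from the table):

        # Heating rate between the logged samples (data from the table above).
        samples = [("1:49", 53), ("1:53", 78), ("2:01", 102), ("2:10", 113), ("2:17", 119)]

        def minutes(stamp):
            h, m = stamp.split(":")
            return int(h) * 60 + int(m)

        for (t0, c0), (t1, c1) in zip(samples, samples[1:]):
            rate = (c1 - c0) / float(minutes(t1) - minutes(t0))
            print("%s -> %s: %+.1f degC/min" % (t0, t1, rate))

    The output shows the card still gaining roughly 1 degC per minute between 2:01 and 2:10, while the system is nominally idle.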


  • High Load mysql on Debian server

    - by Oleg Abrazhaev
    I have a Debian server with 32 GB of memory, running apache2, memcached and nginx. Memory usage is always at maximum; only about 500 MB is free. MySQL is the biggest memory consumer: Apache is configured for only 70 clients, and the other services use little memory. When MySQL uses up all the memory it stops, and nothing works until MySQL is restarted. MySQL is configured to use a maximum of 24 GB of memory. I have heavyweight InnoDB databases (400,000 rows, 30 GB). The server also runs a multithreaded daemon that makes many inserts into these tables - that's why InnoDB. Here is my MySQL config:

        [mysqld]
        #
        # * Basic Settings
        #
        default-time-zone = "+04:00"
        user            = mysql
        pid-file        = /var/run/mysqld/mysqld.pid
        socket          = /var/run/mysqld/mysqld.sock
        port            = 3306
        basedir         = /usr
        datadir         = /var/lib/mysql
        tmpdir          = /tmp
        language        = /usr/share/mysql/english
        skip-external-locking
        default-time-zone='Europe/Moscow'
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        #
        # * Fine Tuning
        #
        #low_priority_updates = 1
        concurrent_insert = ALWAYS
        wait_timeout = 600
        interactive_timeout = 600
        #normal
        key_buffer_size = 2024M
        #key_buffer_size = 1512M
        #70% hot cache
        key_cache_division_limit= 70
        #16-32
        max_allowed_packet = 32M
        #1-16M
        thread_stack = 8M
        #40-50
        thread_cache_size = 50
        #orderby groupby sort
        sort_buffer_size = 64M
        #same
        myisam_sort_buffer_size = 400M
        #temp table creates when group_by
        tmp_table_size = 3000M
        #tables in memory
        max_heap_table_size = 3000M
        #on disk
        open_files_limit = 10000
        table_cache = 10000
        join_buffer_size = 5M
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        #myisam_use_mmap = 1
        max_connections = 200
        thread_concurrency = 8
        #
        # * Query Cache Configuration
        #
        #more ignored
        query_cache_limit = 50M
        query_cache_size = 210M
        #on query cache
        query_cache_type = 1
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        #log = /var/log/mysql/mysql.log
        #
        # Error logging goes to syslog. This is a Debian improvement :)
        #
        # Here you can see queries with especially long duration
        log_slow_queries = /var/log/mysql/mysql-slow.log
        long_query_time = 1
        log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        # other settings you may need to change.
        #server-id = 1
        #log_bin = /var/log/mysql/mysql-bin.log
        server-id = 1
        log-bin = /var/lib/mysql/mysql-bin
        #replicate-do-db = gate
        log-bin-index = /var/lib/mysql/mysql-bin.index
        log-error = /var/lib/mysql/mysql-bin.err
        relay-log = /var/lib/mysql/relay-bin
        relay-log-info-file = /var/lib/mysql/relay-bin.info
        relay-log-index = /var/lib/mysql/relay-bin.index
        binlog_do_db = 24avia
        expire_logs_days = 10
        max_binlog_size = 100M
        read_buffer_size = 4024288
        innodb_buffer_pool_size = 5000M
        innodb_flush_log_at_trx_commit = 2
        innodb_thread_concurrency = 8
        table_definition_cache = 2000
        group_concat_max_len = 16M
        #binlog_do_db = gate
        #binlog_ignore_db = include_database_name
        #
        # * BerkeleyDB
        #
        # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12.
        #skip-bdb
        #
        # * InnoDB
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        # You might want to disable InnoDB to shrink the mysqld process by circa 100MB.
        #skip-innodb
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 500M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer = 32M
        key_buffer_size = 512M

        #
        # * NDB Cluster
        #
        # See /usr/share/doc/mysql-server-*/README.Debian for more information.
        #
        # The following configuration is read by the NDB Data Nodes (ndbd processes)
        # not from the NDB Management Nodes (ndb_mgmd processes).
        #
        # [MYSQL_CLUSTER]
        # ndb-connectstring=127.0.0.1
        #
        # * IMPORTANT: Additional settings that can override those from this file!
        # The files must end with '.cnf', otherwise they'll be ignored.
        #
        !includedir /etc/mysql/conf.d/

    Please help me make it stable. Memory usage:

        /etc/mysql # free
                     total       used       free     shared    buffers     cached
        Mem:      32930800   32766424     164376          0     139208   23829196
        -/+ buffers/cache:    8798020   24132780
        Swap:     33553328      44660   33508668

    Maybe my problem is not memory after all, but MySQL still stops every day. As you can see, about 24 GB of that is cache memory that can be freed (thanks to Michael Hampton for the correction). Load average on the server is 3.5. Maybe the HDD, or another problem? Maybe my config is not optimal for 30 GB of InnoDB?
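
    For reference, a rough worst-case estimate of what a my.cnf like the one above allows mysqld to allocate can be computed from the global buffers plus the per-connection buffers. This is the standard approximation, not an exact model:

        # Rough worst-case memory estimate for the my.cnf above (values in MB).
        key_buffer  = 2024   # key_buffer_size
        query_cache = 210    # query_cache_size
        innodb_pool = 5000   # innodb_buffer_pool_size
        per_conn    = 64 + 5 + 8 + 4   # sort + join + thread_stack + ~read buffer
        max_conn    = 200              # max_connections

        total = key_buffer + query_cache + innodb_pool + per_conn * max_conn
        print("baseline worst case: about %.1f GB" % (total / 1024.0))   # ~22.9 GB
        # On top of that, each connection may build in-memory temp tables of up
        # to tmp_table_size / max_heap_table_size = 3000M apiece, which is what
        # can push a busy server past the physical 32 GB.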


  • Active Directory Time Synchronisation - Time-Service Event ID 50

    - by George
    I have an Active Directory domain with two DCs. The first DC in the forest/domain is Server 2012; the second is 2008 R2. The first DC holds the PDC Emulator role. I sporadically receive a warning from the Time-Service source, event ID 50:

        The time service detected a time difference of greater than %1 milliseconds for %2 seconds. The time difference might be caused by synchronization with low-accuracy time sources or by suboptimal network conditions. The time service is no longer synchronized and cannot provide the time to other clients or update the system clock. When a valid time stamp is received from a time service provider, the time service will correct itself.

    Time sync in the domain is configured with the second DC synchronising using the /syncfromflags:DOMHIER flag. The first DC is configured to sync time using /syncfromflags:MANUAL /reliable:YES, from a peer list consisting of a number of UK-based stratum 2 servers, such as ntp2d.mcc.ac.uk.

    I'm confused why I receive this event warning. It implies that my PDC emulator cannot synchronise time with a supposedly reliable external time source, and it quotes a time difference of 5 seconds for 900 seconds. It's worth also mentioning that I used to use a UK pool from ntp.org, but I would receive the warning much more often. Since switching to a number of UK-based academic time servers, it seems to be more reliable. Can someone with more experience shed some light on this - perhaps it is purely transient? Should I disregard the warning? Is my configuration sound?

    EDIT: I should add that the DCs are virtual, and installed on two separate VMware ESXi/vSphere physical hosts. I can also confirm that, as per MDMarra's comment and best practice, VMware timesync is disabled, since

        c:\Program Files\VMware\VMware Tools\VMwareToolboxCmd.exe timesync status

    returns Disabled.

    EDIT 2: Some strange new issue has cropped up, and I've noticed a pattern. Originally, the event ID 50 warnings would occur at about 12:30pm each day. This is interesting, since our Veeam backup happens at 12 midday. Since I made the changes discussed here, I now receive an event ID 51 instead of 50. The new warning says:

        The time sample received from peer server.ac.uk differs from the local time by -40 seconds (or approximately 40 seconds).

    This has happened two days in a row. Now I'm even more confused. Obviously the time never updates until I manually intervene. The issue seems to be related to virtualisation and Veeam - something may be occurring when Veeam is backing up the PDCe. Any suggestions?

    UPDATE & SUMMARY: msemack's excellent list of resources below (the accepted answer) provided enough information to correctly configure the time service in the domain. It should be the first port of call for anyone looking to verify their configuration. The final "40-second jump" issue I resolved (there are no more warnings) by adjusting the VMware time sync settings as noted in the Veeam knowledge base article here: http://www.veeam.com/kb1202. In any case, whether a future reader uses ESXi and Veeam or not, the resources there are an excellent source of information on the time sync topic, and msemack's answer is particularly invaluable.
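
    When chasing offsets like this, it can help to measure the PDCe's clock against an external source directly, independently of w32time. A minimal SNTP probe follows - a rough sketch only: it ignores the fractional-second field and network delay, and the server name is taken from the post above:

        # Minimal SNTP probe: report this machine's offset from an NTP server.
        import socket, struct, time

        NTP_DELTA = 2208988800  # seconds between 1900-01-01 and the Unix epoch

        def ntp_offset(server="ntp2d.mcc.ac.uk", timeout=5):
            pkt = b"\x1b" + 47 * b"\0"          # LI=0, version 3, client mode
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.settimeout(timeout)
            s.sendto(pkt, (server, 123))
            data, _ = s.recvfrom(48)
            s.close()
            server_secs = struct.unpack("!I", data[40:44])[0] - NTP_DELTA
            return server_secs - time.time()

        print("offset vs server: %+.1f s" % ntp_offset())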


  • Unable to make the session state request to the session state server

    - by Angry_IT_Guru
    For about 4-5 months now, I have been having a sporadic issue - mainly during our busiest time of the day, between 10:30-11:45 AM - where all my Windows 2003 web servers in a Microsoft NLB cluster start throwing session state server errors. A sample error is below:

        System.Web.HttpException: Unable to make the session state request to the session state server. Please ensure that the ASP.NET State service is started and that the client and server ports are the same. If the server is on a remote machine, please ensure that it accepts remote requests by checking the value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters\AllowRemoteConnection. If the server is on the local machine, and if the before mentioned registry value does not exist or is set to 0, then the state server connection string must use either 'localhost' or '127.0.0.1' as the server name.
           at System.Web.SessionState.OutOfProcSessionStateStore.MakeRequest(StateProtocolVerb verb, String id, StateProtocolExclusive exclusiveAccess, Int32 extraFlags, Int32 timeout, Int32 lockCookie, Byte[] buf, Int32 cb, Int32 networkTimeout, SessionNDMakeRequestResults& results)
           at System.Web.SessionState.OutOfProcSessionStateStore.SetAndReleaseItemExclusive(HttpContext context, String id, SessionStateStoreData item, Object lockId, Boolean newItem)
           at System.Web.SessionState.SessionStateModule.OnReleaseState(Object source, EventArgs eventArgs)
           at System.Web.HttpApplication.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
           at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    I'm now using the ASP.NET State service on a centralized back-end Windows 2003 server that all servers communicate with. I was originally using SQL Server session state for a couple of years prior to having this issue. The problem with SQL was that when the issue occurred, it created a blocking situation which essentially impacted all users across all servers. The product company recommended that I use the standard ASP.NET State service, as that is what they technically support. Why this would make a difference is beyond me - but I had no choice but to try it!

    I have attempted creating multiple application pools, adding additional servers, and changing the TCP/IP timeout from 20 to 30 seconds, all with very little success. I even called Microsoft ASP.NET product support. I also recommended that the vendor review whether they are using read-only session state instead of read/write per page request - as I understand it, read/write basically causes every page to make round trips to the state server even if state isn't being used on the page. Unfortunately, the application is developed by our product company, and they insist that it is something in my environment because other clients do not have these sorts of issues. However, I've talked to other clients, and they tell me that when they've seen issues like these, they've basically had to create another web farm. This issue almost seems like I've simply reached some architectural limit within the application...

    Microsoft's position on the issue is that the session state usage needs to be reduced, and that the return code being reported back from the state server indicates its buffers are full. To better understand the scope of the issue (rather than wait for customers to call and complain), I installed ELMAH and configured it to send me e-mails when unhandled exceptions occur. I basically get 500-1000 e-mails during the period of high activity!
    If anyone has any other ideas I could try, or better ways to troubleshoot, I'd appreciate it.


  • How to diagnose computer lockup/freezing problem

    - by Scott Mitchell
    I built a desktop computer a couple years back with the following specs:

        CPU: Intel Core 2 Quad Q9300 Yorkfield 2.5GHz 6MB L2 Cache LGA 775 95W Quad-Core Processor BX80580Q9300
        Motherboard: EVGA 122-CK-NF68-T1 LGA 775 NVIDIA nForce 680i SLI ATX Intel Motherboard
        Video Card: Two EVGA 256-P2-N758-TR GeForce 8600GT SCC 256MB 128-bit GDDR3 PCI Express x16 SLI Supported Video Cards
        PSU: SeaSonic S12 Energy Plus SS-550HT 550W ATX12V V2.3 / EPS12V V2.91 SLI Certified CrossFire Ready 80 PLUS Certified Active PFC Power Supply
        Memory: Two G.SKILL 4GB (2 x 2GB) 240-Pin DDR2 SDRAM DDR2 800 (PC2 6400) Dual Channel Kit Desktop Memory Model F2-6400CL5D-4GBPQ

    Since its inception, the machine has periodically locked up, the regularity having varied over the years from once a day to once a month. Typically, lockups happen once every few days. By "lockup" I mean my computer just freezes. The screen locks up and I can't move the mouse. Hitting keys on my keyboard that normally turn LEDs on or off (such as Caps Lock) no longer toggles the LEDs. If there was music playing at the time of the lockup, noise keeps coming out of the speakers, but it's just the current frequency/note playing indefinitely. There is no BSOD. When such a lockup occurs I have to do a hard reboot, either by turning off the computer or hitting the reset button.

    I have the most recent version of the NVIDIA hardware drivers, and update them semi-regularly, but that hasn't seemed to help. I am currently using Windows 7 x64, but was previously using Windows Server 2003 x64 and having the same lockup issues. My guess is that it's somehow video driver or motherboard related, but I don't know how to go about diagnosing this problem to narrow down which of the two is the culprit.

    Additional information re: cooling. I've not installed any after-market cooling systems aside from two regular fans I scavenged from an older computer. The fan atop the CPU is the one that shipped with it. One of the two scavenged fans is located at the bottom corner of the tower, in an attempt to create some airflow from front to back. The second fan is pointed directly at the two video cards.

    SpeedFan installation and readings: per studiohack's suggestion, I installed SpeedFan, which provided the following temperature readings:

        GPU: 63C
        GPU: 65C
        System: 76C
        CPU: 64C
        AUX: 36C
        Core 0: 78C
        Core 1: 76C
        Core 2: 79C
        Core 3: 79C

    Update #3: Another lockup :-( Well, I had another lockup last night. SpeedFan reported the CPU temp at 38C when it happened, and there was no spike in temperature leading up to the freeze. One thing I notice is that the freeze seems more likely to happen if I am watching a video. In fact, of the last 5 freezes over the past month, 4 of them have been while watching a video on Flickr. Not necessarily the same video, but a video nevertheless. I don't know if this is just coincidence or if it means anything. (As an aside, each night before bedtime my 2-year-old daughter sits on my lap and watches some home videos on Flickr and, in the last month, has learned the phrase, "Uh oh, computer broke.")

    Update #4: MemTest86 and 3DMark06 test results: per suggestions in the comments, I ran MemTest86 overnight and it cycled through the 8 GB of memory 5 times without error. I also ran the 3DMark06 test without a problem (see my scores at http://3dmark.com/3dm06/15163549). So... what now? :-) Any further suggestions on what to check?
    Is there some way to get a stack trace or something when the computer locks like that? Thanks


  • What do these messages in the Qmail maillog indicate?

    - by Griffo
    There seems to be an endless supply of messages in the Qmail maillog for a single address. Can anyone shed some light on why this might be and whether it is a problem? To me it looks like either spam or some sort of unhandled problem. It strikes me as unusual that the 'from=' field is blank. This is on a VPS using Plesk, in case that's important.

        Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23593]: [email protected]
        Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23586]: from=
        Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23586]: [email protected]
        Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23585]: from=
        Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23585]: [email protected]
        Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23584]: from=
        Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23584]: [email protected]
        Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23583]: from=
        Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23583]: [email protected]
        Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23600]: from=
        Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23600]: [email protected]
        Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23599]: from=
        Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23599]: [email protected]

    EDIT: Here's a sample of one of the emails:

        Received: (qmail 5603 invoked for bounce); 29 Jun 2011 07:46:31 +0100
        Date: 29 Jun 2011 07:46:31 +0100
        From: [email protected]
        To: [email protected]
        Subject: failure notice

        Hi. This is the qmail-send program at vps-1001108-595.cp.blacknight.com.
        I'm afraid I wasn't able to deliver your message to the following addresses.
        This is a permanent error; I've given up. Sorry it didn't work out.

        <[email protected]>:
        200.147.36.13 does not like recipient.
        Remote host said: 450 4.7.1 Client host rejected: cannot find your hostname, [78.153.208.195]
        Giving up on 200.147.36.13.
        I'm not going to try again; this message has been in the queue too long.

        --- Below this line is a copy of the message.

        Return-Path: <[email protected]>
        Received: (qmail 15585 invoked by uid 48); 22 Jun 2011 07:38:26 +0100
        Date: 22 Jun 2011 07:38:26 +0100
        Message-ID: <[email protected]>
        To: [email protected]
        Subject: Cadastre-se e Concorra ? um Carro!
        MIME-Version: 1.0
        Content-type: text/html; charset=iso-8859-1
        From: Cielo Fidelidade <[email protected]>

        <!DOCTYPE HTML>
        <html>
        ... <body text removed>
        <html>

    If I understand this correctly, this is saying that an email sent by my server, from address [email protected], could not be delivered. However, [email protected] is not a valid email address on my server, so how can email be sent from this address on my server? I have tested whether my server is acting as an open relay, and it isn't. So how else could this be happening? I am getting thousands of these every day. What can I do to prevent it?
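
    To gauge the scale of the backscatter, the maillog can be summarised by recipient. A quick sketch - the log path is a guess at the usual Plesk location, and it assumes the redacted lines are to=address entries, as the from= lines suggest:

        # Count qmail-remote-handlers deliveries per recipient address.
        from collections import Counter

        to_counts = Counter()
        with open("/usr/local/psa/var/log/maillog") as log:   # path assumed
            for line in log:
                if "qmail-remote-handlers" in line and "to=" in line:
                    to_counts[line.rsplit("to=", 1)[1].strip()] += 1

        for addr, n in to_counts.most_common(10):
            print("%6d  %s" % (n, addr))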


  • Windows Server 2012 Migration (DNS/AD DS Standard Eval to Essentials OEM) P2V -> Do I need a Secondary Domain Controller during migration?

    - by Aubrey Robertson
    This is my first post on this exchange (although not my first on Stack Exchange), so please have patience. I am a 3rd-year student intern, and I have been tasked with virtualizing the server systems at the company I work for. I have come a long way, and I am almost ready to install the VM server in migration mode. Here is some information.

    Source server: Windows Server 2012 Standard Evaluation, with
        DNS Server (local only)
        Active Directory Domain Services
        File and Storage stuff
        A few other server roles

    Destination server: Windows Server 2012 Essentials OEM (Hyper-V client)
        Running under a temporary Hyper-V host (I will migrate the Hyper-V host back to the old machine after the original server is virtualized as a client)
        Currently sitting at the "Select Installation Mode" screen

    I have been following the guides on Microsoft TechNet, and today I spent most of the day getting rid of issues in the Best Practices Analyzer on the source machine. I have 3 remaining issues, which are all related:

        ERROR: DNS: DNS servers on Ethernet (adapter name) should include the loopback address, but not as the first entry (the flavour text indicates that, during migration, the DNS server may not be found)
        WARNING: All domains should have at least two domain controllers for redundancy.
        WARNING: DNS: Ethernet should be configured to use both a preferred and an alternate DNS server.

    All of these issues can be resolved by deploying a secondary domain controller, but I have never done that before (see my concerns below). The main issue concerning me for installing in migration mode is the FIRST one (the error). If I try to set up the new server deployment and the adapter's domain controller is listed as localhost, then the installation may fail (at least, this is what the Microsoft documentation suggests). But I do not have another IP address to enter here, as I have no other local domain controllers. So I did the first obvious thing that came to my mind and tried to use Google DNS servers as my alternates. That did not work, because they couldn't recognize other computers in the "forest".

    Now I'm no expert when it comes to DNS, so please forgive my ignorance. This DNS server is concerned only with Active Directory stuff for the local network. If I go ahead with migration and it fails, then I will just have to go ahead and install a secondary DNS server, I suppose. The problem I have here is that I am limited by the number of Windows Server keys I have available (I have 2); however, I do have access to a Linux box running Debian Wheezy that I set up two weeks ago as a Mantis server. I could install Windows Server 2012 as a secondary DNS (I think) in a VM and use that, but then it seems like I would be wasting time, and probably the Windows key too, and if there's another way to do it with Linux, that would be much better. Even better still, do I even need a secondary DNS server for migration at all? The hints said that during migration the original machine "might" not be found. Thank you for your time and consideration.


  • How to disable proxy requests once a server has been added to spammers "open proxy" list?

    - by Matt
    Hello all, I've just started in a new company, and have been going over the setup of their Apache webserver conf files... only to find that they've had their Apache servers set up as open proxies, available to all the world, for the last two months. I've already set ProxyRequests Off in the httpd.conf file and restarted the web server, but the access log file is still growing at a horrendous rate (about a gig a day). I noticed that another question was posted on here about this (http://serverfault.com/questions/63715/apache-hit-with-proxy-request), but their access log was supposedly returning 404 errors, while mine appears to be returning 403 and 404 codes... Is this correct? Here are a few lines out of my access log:

        87.118.118.124 - - [16/Mar/2010:10:56:36 -0400] "GET http://www.c5interlude.ru/torrent/viewtopic.php?p=2501 HTTP/1.0" 404 219 "http://www.c5interlude.ru/torrent/viewtopic.php?p=2501" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322)"
        117.41.184.27 - - [16/Mar/2010:10:56:36 -0400] "GET http://ad.xtendmedia.com/st?ad_type=iframe&ad_size=300x250&section=790074 HTTP/1.0" 404 200 "http://www.newbiegamer.com" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; Alexa Toolbar)"
        122.224.55.222 - - [16/Mar/2010:10:56:36 -0400] "GET http://www.188woool.net/\xb4\xf3\xd4\xcb\xb4\xab\xca\xc0.rar HTTP/1.1" 403 214 "http://www.188woool.net/\xb4\xf3\xd4\xcb\xb4\xab\xca\xc0.rar" "Mozilla/4.0"
        58.55.21.40 - - [16/Mar/2010:10:56:36 -0400] "GET http://www.cpx24.com/ad1.js HTTP/1.0" 404 204 "http://thebighits.com/?id=aibux" "Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)"
        122.226.223.188 - - [16/Mar/2010:10:56:36 -0400] "GET http://ad.reduxmedia.com/st?ad_type=iframe&ad_size=160x600&section=798636 HTTP/1.0" 404 200 "http://www.gvvu.com" "Mozilla/4.0 (compatible; MSIE 5.5; AOL 6.0; Windows 98; Win 9x 4.90)"
        84.51.109.31 - - [16/Mar/2010:10:56:36 -0400] "GET http://www.kslp.ru/forum/index.php HTTP/1.0" 404 213 "http://www.kslp.ru/forum/index.php" "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 6.0 ; .NET CLR 2.0.50215; SL Commerce Client v1.0; Tablet PC 2.0"
        122.224.48.49 - - [16/Mar/2010:10:56:36 -0400] "GET http://www1.vip218.com/\xb2\xca\xba\xe7\xb4\xab\xca\xc0.exe HTTP/1.1" 403 214 "http://www1.vip218.com/\xb2\xca\xba\xe7\xb4\xab\xca\xc0.exe" "Mozilla/4.0"
        117.41.184.27 - - [16/Mar/2010:10:56:36 -0400] "GET http://ad.xtendmedia.com/st?ad_type=iframe&ad_size=728x90&section=657624 HTTP/1.0" 404 200 "http://www.raiseanimals.com" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98; Alexa Toolbar)"

    And my corresponding error log entries:

        [Tue Mar 16 10:56:36 2010] [error] [client 87.118.118.124] File does not exist: C:/public_html/torrent, referer: http://www.c5interlude.ru/torrent/viewtopic.php?p=2501
        [Tue Mar 16 10:56:36 2010] [error] [client 117.41.184.27] File does not exist: C:/public_html/st, referer: http://www.newbiegamer.com
        [Tue Mar 16 10:56:36 2010] [error] [client 122.224.55.222] (22)Invalid argument: Cannot map GET http://www.188woool.net/\xb4\xf3\xd4\xcb\xb4\xab\xca\xc0.rar HTTP/1.1 to file, referer: http://www.188woool.net/\xb4\xf3\xd4\xcb\xb4\xab\xca\xc0.rar
        [Tue Mar 16 10:56:36 2010] [error] [client 58.55.21.40] File does not exist: C:/public_html/ad1.js, referer: http://thebighits.com/?id=aibux
        [Tue Mar 16 10:56:36 2010] [error] [client 122.226.223.188] File does not exist: C:/public_html/st, referer: http://www.gvvu.com
        [Tue Mar 16 10:56:36 2010] [error] [client 84.51.109.31] File does not exist: C:/public_html/forum, referer: http://www.kslp.ru/forum/index.php
        [Tue Mar 16 10:56:36 2010] [error] [client 122.224.48.49] (22)Invalid argument: Cannot map GET http://www1.vip218.com/\xb2\xca\xba\xe7\xb4\xab\xca\xc0.exe HTTP/1.1 to file, referer: http://www1.vip218.com/\xb2\xca\xba\xe7\xb4\xab\xca\xc0.exe
        [Tue Mar 16 10:56:36 2010] [error] [client 117.41.184.27] File does not exist: C:/public_html/st, referer: http://www.raiseanimals.com

    Does this in fact look like the server is blocking them correctly, and is there anything else that I could do better to cut down on my access log size? (Perhaps block these requests from the server completely?) Thanks! Matt
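
    A quick way to confirm nothing is actually being proxied any more is to scan the access log for absolute-URI request lines and tally their status codes; if everything comes back 4xx, Apache is refusing them, as the 403/404 entries above suggest. A sketch (log path assumed):

        # Find proxy-style requests (absolute URIs) and tally their status codes.
        import re
        from collections import Counter

        pattern = re.compile(r'"[A-Z]+ (https?://\S+) HTTP/[\d.]+" (\d{3})')
        statuses = Counter()

        with open("access.log") as log:
            for line in log:
                m = pattern.search(line)
                if m:
                    statuses[m.group(2)] += 1

        print(statuses)   # nothing was proxied if there are no 2xx entries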


  • Moving users folder on Windows-7 to another partition - bad idea?

    - by Donat
    Hi, I'd like to re-submit here a question posted by Benjol on Aug 17 at 5:57, "Moving users folder on Windows Vista to another partition - bad idea?" (I can't post more than one link until I earn "10 reputation", and I removed my "answer" there to post my follow-up questions here).

    I am anxiously getting ready, at long last, to carry out a clean install (using the custom install option) from Vista to Windows 7 Home Premium 64-bit with the free upgrade I received late October. For my Vista system I successfully set up a multi-partition scheme last summer, with Users and Program Data on a different partition than the operating system (see the link below, and its subsequent links in my comment, for details).

        http://tuts4tech.net/2009/08/05/windows-7-move-the-users-and-program-files-directories-to-a-different-partition/comment-page-1/#comment-562

    I was planning a similar set-up for Windows 7, a little more streamlined, with the OS and Program Files on C:, Users and Program Data on D:, and TV media recording on a separate partition. Reading the question submitted by Benjol, I am second-guessing too. Is moving Users and Program Data to a different partition than the default primary partition (with the OS and Program Files) such a good idea? The couple of people I talked to at the official Microsoft Windows 7 booth at CES 2010 gave the same answer about moving the Users profile folder to another partition. In a nutshell, they all told me that they used to do this in XP, and less so in Vista, but not anymore with Windows 7: "It is stable; after two months, still no problem." I had the feeling it was a scripted answer to emphasize how stable and efficient Windows 7 is... (Will a Windows 7 system not become bogged down over the course of several months to a year or two? Only time will tell.)

    Long story short, I share the same view that Benjol expressed with respect to being "able to backup and restore system and user data independently." I just received a 2TB USB 2.0/eSATA external hard drive as a back-up drive, which includes NTI Shadow 4 (4.1.0.150) as a back-up solution. I took note of the issue with NTUSER.DAT and I will read more about the Volume Shadow Copy Service (VSS) for Windows 7. I am willing to put in the effort if placing Users and Program Data on a different partition would allow me to restore a fresher OS + Program image when the system gets bogged down.

    Questions:
        Is it such a bad idea? What is the "easy route" referred to by Benjol in his post?
        Is it to just relocate folders to another partition using the folder Properties tool? (That is not practical for several users and might not provide a straightforward restore process of just the OS and Program Files when needed.)
        I am starting to learn about Windows 7 libraries. Would Windows 7 libraries be another alternative to achieve this?

    All this reading to decide how to organize the partition scheme for my custom system is starting to be confusing. I apologize for this lengthy question. It is my first day here on SuperUser, and I am just learning how different from a discussion thread it is. Thank you in advance for all your suggestions and comments. Donat

    Read the article

  • htaccess on remote server issues - password prompt not accepting input

    - by pying saucepan
    EDIT: I will contact the university about my problem after Labor Day weekend, but I thought that if someone knew a quick fix I haven't tried, or if the problem has an obvious solution, I could try my luck here first. Thanks!

    TLDR: Sorry it's a long post, I thought I should be... thorough.

    I am having a common issue (I found a dead thread through Google with the same problem and no solution) with the prompt to enter a username and password via htaccess: the prompt keeps popping up, asking for a username and password, when I try to access my home directory on my university's server, which holds the .htaccess and .htpasswd files. It does not matter whether I enter correct or incorrect credentials; the prompt keeps asking for input without ever displaying my home directory. Ever since I included these ht files I have never once been able to get past the username/password prompt, no matter what I have tried, save for removing them from the directory I am trying to access (my top-level directory, the one I own). That kind of served my original goal of making the top-level directory inaccessible to casual users, but if I wanted to use this method in other places, I would want it to work as intended. And I also like it when computers do what I wish they would, so any help is appreciated.

    Some things I have tried. Changing the file/directory access rights; they told me to try these commands if people can't access my files:

        cd ~/public_html
        find ./ -type d -exec chmod 755 {} \;
        find ./ -type f -exec chmod 644 {} \;

    I also entered the single-character name/pw at least twenty times in a row, no cheddar. So I changed directory with cd ~ in the hope that this would be my home directory, since my home directory contains the "public_html" directory; logic tells me that the ~ tilde symbol is the top-level directory that I have ownership of. Then I ran those two commands to change the rights on the files inside. I am still having no luck.

    How I got to this point: I have been following the instructions given on my university's website for setting up my little directory. A link describing how they say to password-protect the home directory is given below:

    "Protect Web Directories" instructions

    I have everything in order except for one small detail that I feel probably does not matter. I am on Windows, so I am using WinSCP to remotely control my allocated server space. The small detail is that, as the instructions indicate (in step 3), I should use the command htpasswd -c .htpasswd {username}, where {username} is my folder that holds my allocated server space. But this command requires further input through the terminal, and unfortunately WinSCP does not offer this kind of functionality. So I looked up some basic instructions on using htaccess, and my .htaccess file is formatted as follows:

        AuthType Basic
        AuthName "Verify"
        AuthUserFile /correctpath/.htpasswd
        require valid-user

    This file is in the root directory of my server space, as is the .htpasswd file, which contains only this data:

        username:password

    I know for sure that these two files must be formatted correctly, at least according to their tutorial, because my path was at first incorrectly formatted, with some curly { braces } included before I knew the correct way to write it. Back then, the password prompt that showed up when accessing my directory responded by loading an error page saying to contact the OSU admin, or something not important. But now that I have everything like it 'should' be, the behavior is as described above: when I enter my credentials, the prompt pops up for my username and password again and again, whether or not I enter correct information. The only exception is that if I click cancel, it directs me to a page saying that I need to enter a username and password.

    Note that I am very inexperienced at server-related business; two days ago I couldn't have told you what a website actually consists of. So if you use some technical jargon I may need to look it up and get back to you before I actually understand what you mean, but I am a quick learner and it probably won't matter.
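    For reference, a non-interactive variant of step 3 should be possible without a terminal session. This is a sketch: it assumes the server's htpasswd build supports the -b (batch) flag, and ~/.htpasswd stands in for wherever the file actually belongs:

        # -b takes the password on the command line, so no interactive prompt is needed
        htpasswd -b -c ~/.htpasswd username password
        # confirm the entry was written
        cat ~/.htpasswd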

    Read the article

  • Long connection times from PHP to MySQL on EC2

    - by Erik Giberti
    I'm having an intermittent issue connecting to a database slave with InnoDB: intermittently, connections take longer than 2 seconds. These servers are hosted on Amazon's EC2. The app server is PHP 5.2/Apache running on Ubuntu. The DB slave is running Percona's XtraDB 5.1 on Ubuntu 9.10, using an EBS RAID array for data storage. We already use skip-name-resolve and bind to address 0.0.0.0. This is a stub of the PHP code that's failing:

        $tmp = mysqli_init();
        $start_time = microtime(true);
        $tmp->options(MYSQLI_OPT_CONNECT_TIMEOUT, 2);
        $tmp->real_connect($DB_SERVERS[$server]['server'],
                           $DB_SERVERS[$server]['username'],
                           $DB_SERVERS[$server]['password'],
                           $DB_SERVERS[$server]['schema'],
                           $DB_SERVERS[$server]['port']);
        if (mysqli_connect_errno()) {
            $timer = microtime(true) - $start_time;
            mail($errors_to, 'DB connection error', $timer);
        }

    There's more than 300MB available on the DB server for new connections, and the server is nowhere near the max allowed (60 of 1,200). Load on both servers is < 2 on 4-core m1.xlarge instances. Some highlights from the MySQL config:

        max_connections = 1200
        thread_stack = 512K
        thread_cache_size = 1024
        thread_concurrency = 16
        innodb-file-per-table
        innodb_additional_mem_pool_size = 16M
        innodb_buffer_pool_size = 13G

    Any help on tracing the source of the slowdown is appreciated.

    [EDIT] I have been updating the sysctl values for the network but they don't seem to be fixing the problem. I made the following adjustments on both the database and application servers:

        net.ipv4.tcp_window_scaling = 1
        net.ipv4.tcp_sack = 0
        net.ipv4.tcp_timestamps = 0
        net.ipv4.tcp_fin_timeout = 20
        net.ipv4.tcp_keepalive_time = 180
        net.ipv4.tcp_max_syn_backlog = 1280
        net.ipv4.tcp_synack_retries = 1
        net.core.rmem_max = 16777216
        net.core.wmem_max = 16777216
        net.ipv4.tcp_rmem = 4096 87380 16777216
        net.ipv4.tcp_wmem = 4096 87380 16777216

    [EDIT] Per jaimieb's suggestion, I added some tracing and captured the following data using time. This server handles about 51 queries/second at this time of day. The connection error was raised once (at 13:06:36) during the 3-minute window outlined below. Since there was 1 failure and roughly 9,200 successful connections, I don't think this is going to produce anything meaningful in terms of reporting. Script:

        date >> /root/database_server.txt
        (time mysql -h database_Server -D schema_name -u appuser -p apppassword -e '') > /dev/null 2>> /root/database_server.txt

    Results:

        === Application Server 1 ===
        Mon Feb 22 13:05:01 EST 2010  real 0m0.008s  user 0m0.001s  sys 0m0.000s
        Mon Feb 22 13:06:01 EST 2010  real 0m0.007s  user 0m0.002s  sys 0m0.000s
        Mon Feb 22 13:07:01 EST 2010  real 0m0.008s  user 0m0.000s  sys 0m0.001s

        === Application Server 2 ===
        Mon Feb 22 13:05:01 EST 2010  real 0m0.009s  user 0m0.000s  sys 0m0.002s
        Mon Feb 22 13:06:01 EST 2010  real 0m0.009s  user 0m0.001s  sys 0m0.003s
        Mon Feb 22 13:07:01 EST 2010  real 0m0.008s  user 0m0.000s  sys 0m0.001s

        === Database Server ===
        Mon Feb 22 13:05:01 EST 2010  real 0m0.016s  user 0m0.000s  sys 0m0.010s
        Mon Feb 22 13:06:01 EST 2010  real 0m0.006s  user 0m0.010s  sys 0m0.000s
        Mon Feb 22 13:07:01 EST 2010  real 0m0.016s  user 0m0.000s  sys 0m0.010s

    [EDIT] Per a suggestion received on a LinkedIn question, I tried setting the back_log value higher. We had been running the default value (50) and increased it to 150. We also raised the kernel value /proc/sys/net/core/somaxconn (maximum socket connections) to 256, from the default of 128, on both the application and database servers.
We did see some elevation in processor utilization as a result but still received connection timeouts.
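    One further check that would separate network delay from server-side delay is to watch for SYN retransmits during a slow connect. A sketch (the eth0 interface name and the default port 3306 are assumptions; the app actually reads the port from $DB_SERVERS):

        # watch only connection-setup packets to the DB slave; repeated SYNs point at packet loss
        tcpdump -i eth0 -nn 'tcp port 3306 and tcp[tcpflags] & tcp-syn != 0'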

    Read the article

  • Alternative Methods of Sharing Folders in Windows?

    - by Blaenk
    Hey guys. I'm running Windows 7, and as of now I simply share folders as one usually does in Windows. I then have a MacBook with Leopard (now Snow Leopard) which I use to connect to my computer and mount the shares: I go to Finder, then CMD + K, and type smb://BlaenkPC (the name of my PC) into the address box. This connects to my computer and mounts all of the shares.

    The problem is that sometimes, if for example I close my MacBook (which makes it go to sleep), and sometimes even without doing that, the connection somehow drops. Sometimes I close the MacBook and upon re-opening it everything still works; it's random. It still shows the computer as being connected, but it just shows 'loading' indefinitely. If I hit 'eject' with the intention of re-connecting to the computer, it disappears from the sidebar (the computer icon) in Finder, but I cannot re-connect. Activity Monitor (or ps aux, whichever) shows hung instances of umount, one for each share that was mounted. I cannot kill these processes with kill or killall (yes, even with sudo, and sending signal -9). This has happened to me before, and here is another person who has experienced this.

    My question boils down to this: is there an alternative method of sharing folders in Windows that my Mac can read and understand, that is possibly more reliable and preferably just as fast? I usually use the mounted shares to watch television episodes or movies off my computer (in other words, I open them in VLC and they automatically stream from my computer). As far as I can tell, this is a problem with the Samba protocol. I have heard of NFS, but I am not sure if I would have to re-format my drives, or what. I don't mind running a service or daemon to allow the sharing of the folders; I just want it done, and hopefully in a better way than typical Windows shares over Samba.

    Usually when I encounter this problem, which is often (read: every day), I have no other option but to restart the MacBook. As I stated in the first question I linked to, shutting down and restarting don't work; I have to force the shutdown by holding the power button. I have not modified my installation of Mac OS X in any hackish way, so I doubt it's something with the operating system, but worse comes to worst I might end up reformatting and doing a clean install to see if that fixes anything, as I am at a complete loss as to what may be causing the problem, and no one else seems to have any idea or care, despite there being quite a few people suffering from this problem, as my research has shown.

    Any pieces of information that can help are extremely appreciated. You don't have to answer every question on here, but maybe even some insight as to why it might not be possible to kill those hung umount instances, for example, or why I may not be able to reconnect using Samba (is it something regarding the way the protocol works?). One thing to note is that I have another computer in the home network that doesn't seem to have this problem. However, it is also running Windows 7 (note though that I am not using the homegroup feature, but the typical Windows sharing feature). My only deduction is that the problem is being caused by the way the Mac (or the Samba implementation, whichever) is handling things. Perhaps it is a limitation.
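    As a stopgap when a share hangs, a forced unmount sometimes clears the stuck mount point without a reboot. A sketch (the /Volumes/BlaenkPC path is an assumption; substitute whatever mount point Finder actually created):

        # force-unmount the dead SMB share
        sudo umount -f /Volumes/BlaenkPC
        # or the diskutil equivalent
        diskutil unmount force /Volumes/BlaenkPC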

    Read the article

  • apache webserver unresponsive with server-status showing all child processes waiting for connection

    - by Jeff
    My setup: I have 3 nearly identical web server machines serving the same high-load dynamic website, with simple load balancing over DNS. The service has been working for over two years with the same Apache config. Apache 2, PHP 5, Ubuntu 8.04, Linux 2.6.24-29-server.

    My problem: for about two weeks I have been experiencing problems with this config. Nearly every day there is one small moment, about 5 minutes long, in which the website is unreachable. I'm still able to log in to the servers over SSH. If I run htop, I see the machine simply doing nothing: I have about 1000 Apache processes running, but no CPU activity. I've used the Apache mod_status to debug this situation. The process scoreboard looks like this:

        _C.___K_______________________R._______.__K_K____K___C_______.__
        _______C__________.___________________________________.________C
        _.____K__________K___K_WK_____._K_____________________________._
        W______K__________K________.____________________._______C_______
        _C_.__K__K____.._.._____________________________________C_______
        _R___________K___.______C________.C_________.______._____C______
        ____________KKC____K_____K__WC_________________C_____.__.____.__
        _____________________C_________K______.____C______._____________
        _.___C____.___.___________________________.K______.____K________
        W__.___________________C.__.____K________K_______R_._.__._______
        __C__C_.__________C__C_______._____W______________C_.___C_______
        ____.______C_____________C________.____C____________.________._K
        __.__________.K_____________K_________._____C____.K__________KW_
        __K.W________R_________._______.___W___________.____.__K_____W__
        W___.___..________W____K

        Scoreboard Key:
        "_" Waiting for Connection, "S" Starting up, "R" Reading Request,
        "W" Sending Reply, "K" Keepalive (read), "D" DNS Lookup,
        "C" Closing connection, "L" Logging, "G" Gracefully finishing,
        "I" Idle cleanup of worker, "." Open slot with no current process

    So most of the processes are just waiting for a connection. After about 5 minutes the situation returns to normal: I have far fewer processes on every machine, most workers have the "." status (meaning they are open to process a request), and of course the website is reachable!

    So I'm trying to find something in the logs, but there is simply nothing... The Apache access log is silent for about 4 minutes, and the same goes for the error log. I also cannot figure out anything wrong in the other system logs. The situation is the same on all 3 web servers (all of them have this load peak and unresponsiveness at the same time), so I do not think this is hardware related. But I do think this might be related to some network (TCP) issue. Any ideas?

    EDIT: some more information that I have just discovered: it has just happened again, and I was able to verify that I'm also not able to connect locally when this problem occurs. I made some connection statistics with the following command after it happened:

        netstat -an | awk '/tcp/ {print $6}' | sort | uniq -c

          109 CLOSE_WAIT
         2652 ESTABLISHED
            2 FIN_WAIT1
           11 LAST_ACK
           12 LISTEN
           91 SYN_RECV
            1 SYN_SENT
           16 TIME_WAIT

    If I execute the same command some time later, I get something like this:

            4 CLOSING
          108 ESTABLISHED
           18 FIN_WAIT1
          182 FIN_WAIT2
           37 LAST_ACK
           12 LISTEN
           50 SYN_RECV
        11276 TIME_WAIT

    So in the normal situation I have only 100-200 open client connections being handled by Apache at a given moment. When I have this "crash", I have a lot more connections. What is the best way to analyse this?
    EDIT2: the important lines in apache2.conf are:

        KeepAlive On
        MaxKeepAliveRequests 20
        KeepAliveTimeout 1
        <IfModule mpm_prefork_module>
            ServerLimit          920
            StartServers          30
            MinSpareServers       80
            MaxSpareServers      120
            MaxClients           920
            MaxRequestsPerChild  700
        </IfModule>

    It is an Apache 2 prefork with mod_php. The server has 8GB of RAM and a 4GB swap partition.
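    To catch the next stall with data from both sides, a small capture loop can log the TCP states alongside the mod_status scoreboard for later comparison (a sketch; the log path and the 10-second interval are arbitrary, and it assumes /server-status is reachable from localhost):

        #!/bin/sh
        # record connection states and the mod_status scoreboard every 10 seconds
        while true; do
            date >> /var/log/conn-states.log
            netstat -an | awk '/tcp/ {print $6}' | sort | uniq -c >> /var/log/conn-states.log
            wget -q -O - http://localhost/server-status?auto >> /var/log/conn-states.log
            sleep 10
        done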

    Read the article

  • How to stop Apache from crashing my entire server?

    - by CyberShadow
    I maintain a Gentoo server with a few services, including Apache. It's fairly low-end (2GB of RAM and a low-end CPU with 2 cores). My problem is that, despite my best efforts, an overloaded Apache crashes the entire server. In fact, at this point I'm close to being convinced that Linux is a horrible operating system that isn't worth anyone's time looking for stability under load. Things I tried:

    - Adjusting oom_adj for the root Apache process (and thus all its children). That had close to no effect: when Apache was overloaded it would bring the system to a grind, as the system paged out everything else before it got around to killing anything.
    - Turning off swap. Didn't help; the kernel would instead evict memory mapped from process binaries and other files on /, causing the same effect.
    - Putting it in a memory-limited cgroup (limited to 512 MB of RAM, 1/4 of the total). This "worked", at least in my own stress tests - except the server keeps crashing under real load (basically stalling all other processes, inaccessible via SSH, etc.).
    - Running it with idle I/O priority. This wasn't a very good idea in the end, because it just caused the system load to climb indefinitely (into the thousands) with almost no visible effect - until you tried to access an unbuffered part of the disk, which caused the task to freeze. (So much for good I/O scheduling, eh?)
    - Limiting the number of concurrent connections to Apache. Setting the number too low caused web sites to become unresponsive, because most slots were occupied by long requests (file downloads).
    - Trying various Apache MPMs, without much success (prefork, event, itk).
    - Switching from prefork/event + php-cgi + suphp to itk + mod_php. This improved performance, but didn't solve the actual problem.
    - Switching I/O schedulers (cfq to deadline).

    Just to stress this: I don't care if Apache itself goes down under load, I just want the rest of my system to remain stable. Of course, having Apache recover quickly after a brief period of intensive load would be great to have, but one step at a time. Right now I am mostly dumbfounded by how humanity, in this day and age, can design an operating system where such a seemingly simple task (don't allow one system component to crash the entire system) seems practically impossible - or at least very hard to do. Please don't suggest things like VMs or "BUY MORE RAM".

    Some more information gathered with a friend's help: the processes hang when the cgroup OOM killer is invoked. Here's the call trace:

        [<ffffffff8104b94b>] ? prepare_to_wait+0x70/0x7b
        [<ffffffff810a9c73>] mem_cgroup_handle_oom+0xdf/0x180
        [<ffffffff810a9559>] ? memcg_oom_wake_function+0x0/0x6d
        [<ffffffff810aa041>] __mem_cgroup_try_charge+0x32d/0x478
        [<ffffffff810aac67>] mem_cgroup_charge_common+0x48/0x73
        [<ffffffff81081c98>] ? __lru_cache_add+0x60/0x62
        [<ffffffff810aadc3>] mem_cgroup_newpage_charge+0x3b/0x4a
        [<ffffffff8108ec38>] handle_mm_fault+0x305/0x8cf
        [<ffffffff813c6276>] ? schedule+0x6ae/0x6fb
        [<ffffffff8101f568>] do_page_fault+0x214/0x22b
        [<ffffffff813c7e1f>] page_fault+0x1f/0x30

    At this point, the Apache memory cgroup is practically deadlocked, burning CPU in syscalls (all with the above call trace). This seems like a problem in the cgroup implementation...
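    For reference, the 512 MB memory cgroup described above would, on a cgroup-v1 kernel of that era, be set up roughly like this (a sketch; the /sys/fs/cgroup/memory mount point, the group name "apache", and the apache2 process name are all assumptions):

        # create a memory cgroup for apache and cap it at 512 MB
        mkdir /sys/fs/cgroup/memory/apache
        echo 512M > /sys/fs/cgroup/memory/apache/memory.limit_in_bytes
        # move the root apache process into it; forked children inherit the group
        echo $(pidof -s apache2) > /sys/fs/cgroup/memory/apache/tasks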

    Read the article

  • LDAP object class violation: attribute ou not allowed in suffix?

    - by Paramaeleon
    I am about to set up an LDAP directory. It is used as a tool to communicate user permissions from a web application to WebDAV file system access; e.g., adding a user to the web platform shall allow login to the file system with the same credentials. There are no other usages intended. Following this German tutorial, which encourages the use of the attributes c, o, ou etc. over dc, I configured the following suffix and root:

        suffix "ou=webtool,o=myOrg,c=de"
        rootdn "cn=ldapadmin,ou=webtool,o=myOrg,c=de"

    The server starts and I can connect to it with LDAP Admin, which reports "LDAP error: Object lacks". Well, there aren't any objects yet. I now want to create the root and admin elements from the shell, so I created an init.ldif file:

        dn: ou=webtool,o=myOrg,c=de
        objectclass: dcObject
        objectclass: organization
        dc: webtool
        o: webtool

        dn: cn=ldapadmin,ou=webtool,o=myOrg,c=de
        objectclass: organizationalRole
        cn: ldapadmin

    Trying to load the file runs into an error telling me that ou is not allowed:

        server:~ # ldapadd -x -D "cn=ldapadmin,ou=webtool,o=myOrg,c=de" -W -f init.ldif
        Enter LDAP Password:
        adding new entry "ou=webtool,o=myOrg,c=de"
        ldap_add: Object class violation (65)
                additional info: attribute 'ou' not allowed

    I am not using ou anywhere except in the suffix, so the question: isn't it allowed here? What is allowed here?

    Here is my answer. I am not allowed to post it as an answer for 8 hours, so don't mind that it is part of the question by now. I will move it out some day, if I don't forget to do so.

    There are numerous dependencies for the creation of elements, and the error messages are rather confusing if you don't know the concept. The object class isn't necessarily dcObject for the database's root node, as one might guess from reading several tutorials. Instead, it must correspond to the object's type: here, for a name starting with ou=, it must be organizationalUnit. I found this piece of information in the tables at http://www.zytrax.com/books/ldap/ape/ ("Appendix E: LDAP - Object Classes and Attributes").

    Further on, the object class dictates which properties must and can appear in the record. Here, organizationalUnit must have an ou: entry and must have neither a dc: nor an o: entry. The healthy init.ldif file looks like this:

        dn: ou=webtool,o=myOrg,c=de
        objectclass: organizationalUnit
        ou: LDAP server for my webtool

        dn: cn=ldapadmin,ou=webtool,o=myOrg,c=de
        objectclass: organizationalRole
        cn: ldapadmin

    Note: the page also states: "While many objectClasses show no MUST attributes you must (ouch) follow any hierarchy [...] to determine if this is the really case." I thought that would mean my root record would have to provide the MUST fields for c= and o= (c: and o:, respectively), but this isn't the case.
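    Once the corrected init.ldif loads, a quick search can confirm both entries exist (a sketch; it assumes anonymous reads are permitted, otherwise bind with -D and -W as in the ldapadd call above):

        # list everything under the new suffix
        ldapsearch -x -b "ou=webtool,o=myOrg,c=de" '(objectclass=*)'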

    Read the article

  • Server overloaded: DBCP issue on Mysql.

    - by taras
    Hi, I have 2 setups: Tomcat 5.5.20 on Red Hat, and MySQL 4.1.22 on another Red Hat server. Recently I started getting this error repeating (every second) in catalina.out:

        DBCP object created 2010-12-22 13:33:12 by the following code was never closed: java.lang.Exception
            at org.apache.tomcat.dbcp.dbcp.AbandonedTrace.init(AbandonedTrace.java:96)
            at org.apache.tomcat.dbcp.dbcp.AbandonedTrace.<init>(AbandonedTrace.java:79)
            at org.apache.tomcat.dbcp.dbcp.DelegatingResultSet.<init>(DelegatingResultSet.java:71)
            at org.apache.tomcat.dbcp.dbcp.DelegatingResultSet.wrapResultSet(DelegatingResultSet.java:80)
            at org.apache.tomcat.dbcp.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:92)
            at org.apache.jsp.external_005fpage.signup_jsp._jspService(signup_jsp.java:1185)
            at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:97)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
            at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:334)
            at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:314)
            at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:264)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:178)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:126)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:105)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:107)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:148)
            at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:199)
            at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:282)
            at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:767)
            at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:697)
            at org.apache.jk.common.ChannelSocket$SocketConnection.runIt(ChannelSocket.java:889)
            at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:684)
            at java.lang.Thread.run(Thread.java:595)

    I have to restart Tomcat once a day, when the server load reaches 80-90%. The catalina.out file is also growing so fast that I need to clear the logs every few hours. My datasource config:

        <bean id="myDataSource" class="org.apache.tomcat.dbcp.dbcp.BasicDataSource" destroy-method="close">
            <property name="driverClassName">
                <value>com.mysql.jdbc.Driver</value>
            </property>
            <property name="url">
                <value>jdbc:mysql://XXX/XXX?autoReconnect=true</value>
            </property>
            20 20
            <property name="maxIdle">
                <value>50</value>
            </property>
            <property name="maxActive">
                <value>50</value>
            </property>
            <property name="removeAbandoned">
                <value>false</value>
            </property>
            <property name="removeAbandonedTimeout">
                <value>2400</value>
            </property>
            <property name="username">
                <value>XXX</value>
            </property>
            <property name="password">
                <value>XXX</value>
            </property>
        </bean>

    Any idea what the issue can be? Thanks for any direction.
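    Since catalina.out has to be cleared every few hours anyway, truncating it in place frees the space without touching the running JVM (a sketch; the path is an assumption for a typical standalone Tomcat 5.5 layout and may differ on a packaged Red Hat install):

        # empty the open log file; Tomcat keeps writing to the same inode
        cat /dev/null > /usr/local/tomcat/logs/catalina.out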

    Read the article

< Previous Page | 404 405 406 407 408 409 410 411 412 413 414 415  | Next Page >