Search Results

Search found 2210 results on 89 pages for 'magic plane'.


  • Setting default values for inherited property without using accessor in Objective-C?

    - by Ben Stock
    I always see people debating whether or not to use a property's setter in the -init method. I don't know enough about the Objective-C language yet to have an opinion one way or the other. With that said, lately I've been sticking to ivars exclusively. It seems cleaner in a way. I don't know. I digress. Anyway, here's my problem. Say we have a class called Dude with an interface that looks like this:

        @interface Dude : NSObject {
        @private
            NSUInteger _numberOfGirlfriends;
        }
        @property (nonatomic, assign) NSUInteger numberOfGirlfriends;
        @end

    And an implementation that looks like this:

        @implementation Dude
        - (instancetype)init {
            self = [super init];
            if (self) {
                _numberOfGirlfriends = 0;
            }
            return self;
        }
        @end

    Now let's say I want to extend Dude. My subclass will be called Playa. And since a playa should have mad girlfriends, when Playa gets initialized, I don't want him to start with 0; I want him to have 10. Here's Playa.m:

        @implementation Playa
        - (instancetype)init {
            self = [super init];
            if (self) {
                // Attempting to set the ivar directly will result in the compiler saying,
                // "Instance variable `_numberOfGirlfriends` is private."
                // _numberOfGirlfriends = 10; <- Can't do this.

                // Thus, the only way to set it is with the mutator:
                self.numberOfGirlfriends = 10;
                // Or:
                [self setNumberOfGirlfriends:10];
            }
            return self;
        }
        @end

    So what's an Objective-C newcomer to do? Well, I mean, there's only one thing I can do, and that's set the property. Unless there's something I'm missing. Any ideas, suggestions, tips, or tricks?

    Sidenote: The other thing that bugs me about setting the ivar directly — and what a lot of ivar-proponents say is a "plus" — is that there are no KVC notifications. A lot of times, I want the KVC magic to happen. 50% of my setters end in [self setNeedsDisplay:YES], so without the notification, my UI doesn't update unless I remember to manually add -setNeedsDisplay. That's a bad example, but the point stands. I utilize KVC all over the place, so without notifications, things can act wonky. Anyway, any info is much appreciated. Thanks!
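    One way around the compiler error, sketched here on the assumption that you control Dude's header, is to relax the ivar's visibility from @private to @protected, which makes it directly assignable from subclasses without going through the setter:

        @interface Dude : NSObject {
        @protected
            NSUInteger _numberOfGirlfriends; // subclasses may now touch this directly
        }
        @property (nonatomic, assign) NSUInteger numberOfGirlfriends;
        @end

        // Playa.m
        @implementation Playa
        - (instancetype)init {
            self = [super init];
            if (self) {
                _numberOfGirlfriends = 10; // legal once the ivar is @protected
            }
            return self;
        }
        @end

    The KVC/KVO sidenote still applies: a direct assignment like this fires no change notifications, so it is only appropriate inside init, where observers cannot exist yet.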


  • Which network protocol to use for lightweight notification of remote apps (Delphi 2005)

    - by Chris Thornton
    I have this situation: client-initiated SOAP 1.1 communication between one server and, let's say, tens of thousands of clients. Clients are external, coming in through our firewall, authenticated by certificate, https, etc. They can be anywhere, and usually have their own firewalls, NAT routers, etc. They're truly external, not just remote corporate offices. They could be in a corporate/campus network, DSL/Cable, even dialup.

    Currently, clients push new data to the server and pull new data from the server on a 15-minute polling loop. The server currently does not push data - the client hits the "messagecount" method to see if there is new data to pull. If 0, it sleeps for another 15 min and checks again. We're trying to get that down to 7 seconds.

    If this were an internal app, with one or just a few dozen clients, we'd write a client "listener" soap service and would push data to it. But since they're external, sit behind their own firewalls, and sometimes private networks behind NAT routers, this is not practical. So we're left with polling on a much quicker loop. 10K clients, each checking their messagecount every 10 seconds, is going to be 1000/sec messages that will mostly just waste bandwidth, server, firewall, and authenticator resources. So I'm trying to design something better than what would amount to a self-inflicted DoS attack. I don't think it's practical to have the server send soap messages to the client (push), as this would require too much configuration at the client end. But I think there are alternatives that I don't know about. Such as:

    1) Is there a way for the client to make a request for GetMessageCount() via SOAP 1.1, get the response, and then perhaps "stay on the line" for perhaps 5-10 minutes to get additional responses in case new data arrives? I.e. the server says "0", then a minute later, in response to some SQL trigger (the server is C# on Sql Server, btw), knows that this client is still "on the line" and sends the updated message count of "5"?

    2) Is there some other protocol that we could use to "ping" the client, using information gathered from their last GetMessageCount() request?

    3) I don't even know. I guess I'm looking for some magic protocol where the client can send a GetMessageCount() request, which would include info for "oh by the way, in case the answer changes in the next hour, ping me at this address...".

    Also, I'm assuming that any of these "keep the line open" schemes would seriously impact the server sizing, as it would need to keep many thousands of connections open simultaneously. That would likely impact the firewalls too, I think. Is there anything out there like that? Or am I pretty much stuck with polling? TIA, Chris
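    Option 1 is essentially what is now called long polling: the server parks the request until data arrives or a timeout expires. A minimal server-side sketch in C# (the method and helper names here are hypothetical, not part of the asker's code):

        // Holds the request open until a message appears or the window closes.
        public int GetMessageCount(string clientId, TimeSpan window)
        {
            DateTime deadline = DateTime.UtcNow + window;
            do
            {
                int count = QueryMessageCountFromDb(clientId); // assumed DB lookup
                if (count > 0)
                    return count;                              // answer immediately
                System.Threading.Thread.Sleep(1000);           // cheap internal re-check
            } while (DateTime.UtcNow < deadline);
            return 0; // nothing arrived in the window; the client simply re-requests
        }

    Each parked request still ties up a connection, so the asker's concern about server and firewall sizing at 10K clients applies to this approach as well.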


  • How does System.TraceListener prepend message with process name?

    - by btlog
    I have been looking at using System.Diagnostics.Trace for doing logging in a very basic app. Generally it does all I need it to do. The downside is that if I call

        Trace.TraceInformation("Some info");

    the output is "SomeApp.Exe Information: 0: Some info". Initially this entertained me, but no longer. I would like to just output "Some info" to the console. So I thought writing a custom TraceListener, rather than using the inbuilt ConsoleTraceListener, would solve the problem. I can see a specific format, so I want all the text after the second colon. Here is my attempt to see if this would work:

        class LogTraceListener : TraceListener
        {
            public override void Write(string message)
            {
                int firstColon = message.IndexOf(":");
                int secondColon = message.IndexOf(":", firstColon + 1);
                Console.Write(message);
            }

            public override void WriteLine(string message)
            {
                int firstColon = message.IndexOf(":");
                int secondColon = message.IndexOf(":", firstColon + 1);
                Console.WriteLine(message);
            }
        }

    If I output the value of firstColon, it is always -1. If I put a break point, the message is always just "Some info". Where does all the other information come from? So I had a look at the call stack at the point just before Console.WriteLine was called. The method that called my WriteLine method is:

        System.dll!System.Diagnostics.TraceListener.TraceEvent(System.Diagnostics.TraceEventCache eventCache, string source, System.Diagnostics.TraceEventType eventType, int id, string message) + 0x33 bytes

    When I use Reflector to look at this method, it all seems pretty straightforward. I can't see any code that changes the value of the string after I have sent it to Console.WriteLine. The only method that could possibly change the underlying string value is a call to UnsafeNativeMethods.EventWriteString, which has a parameter that is a pointer to the message. Does anyone understand what is going on here, and whether I can change the output to be just my message without the additional fluff? It seems like evil magic that I can pass a string "Some info" to Console.WriteLine (or any other method for that matter) and the string that is output is different.
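    The prefix never shows up inside the message because the base TraceListener.TraceEvent implementation writes the "SomeApp.Exe Information: 0:" header through a separate Write call before handing your text to WriteLine. A sketch of one way around it, overriding TraceEvent itself so the header is never emitted:

        class LogTraceListener : TraceListener
        {
            // The base TraceEvent writes the "AppName Information: 0:" header
            // and then the message; bypassing it drops the header entirely.
            public override void TraceEvent(TraceEventCache eventCache, string source,
                TraceEventType eventType, int id, string message)
            {
                WriteLine(message); // just the message, no fluff
            }

            public override void Write(string message) { Console.Write(message); }
            public override void WriteLine(string message) { Console.WriteLine(message); }
        }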


  • Is there some way to make variables like $a and $b in regard to strict?

    - by Axeman
    In light of Michael Carman's comment, I have decided to rewrite the question. Note that 11 comments appear before this edit, and give credence to Michael's observation that I did not write the question in a way that made it clear what I was asking.

    Question: What is the standard--or cleanest way--to fake the special status that $a and $b have in regard to strict by simply importing a module?

    First of all, some setup. The following works:

        #!/bin/perl
        use strict;
        print "\$a=$a\n";
        print "\$b=$b\n";

    If I add one more line:

        print "\$c=$c\n";

    I get an error at compile time, which means that none of my dazzling print code gets to run. If I comment out use strict; it runs fine.

    Outside of strictures, $a and $b are mainly special in that sort passes the two values to be compared with those names:

        my @reverse_order = sort { $b <=> $a } @unsorted;

    Thus the main functional difference about $a and $b--even though Perl "knows their names"--is that you'd better know this when you sort, or use some of the functions in List::Util. It's only when you use strict that $a and $b become special variables in a whole new way. They are the only variables that strict will pass over without complaining that they are not declared.

    Now, I like strict, but it strikes me that if TIMTOWTDI (There Is More Than One Way To Do It) is Rule #1 in Perl, this is not very TIMTOWTDI. It says that $a and $b are special and that's it. If you want to use variables you don't have to declare, $a and $b are your guys. If you want to have three variables by adding $c, suddenly there's a whole other way to do it. Never mind that in manipulating hashes $k and $v might make more sense:

        my %starts_upper_1_to_25
            = skim { $k =~ m/^\p{IsUpper}/ && ( 1 <= $v && $v <= 25 ) } %my_hash;

    Now, I use and like strict. But I just want $k and $v to be visible to skim for the most compact syntax. And I'd like them to be visible simply by

        use Hash::Helper qw<skim>;

    I'm not asking this question to know how to black-magic it. My "answer" below should let you know that I know enough Perl to be dangerous. I'm asking if there is a way to make strict accept other variables, or what is the cleanest solution. The answer could well be no. If that's the case, it simply does not seem very TIMTOWTDI.
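    For what it's worth, strict 'vars' exempts not only $a and $b but also any variable imported from a module, so one clean-looking route is for Hash::Helper to export the scalars themselves. A minimal sketch (the skim implementation here is illustrative, not any real module's code):

        package Hash::Helper;
        use strict;
        use warnings;
        use Exporter 'import';
        our @EXPORT_OK = qw(skim $k $v);  # export the variables along with the sub
        our ($k, $v);                     # package globals the caller's block will see

        # Run BLOCK once per pair, with $k/$v set to the key and value.
        sub skim (&\%) {
            my ($block, $hash) = @_;
            my %kept;
            for my $key (keys %$hash) {
                ($k, $v) = ($key, $hash->{$key});  # assign the shared globals
                $kept{$key} = $hash->{$key} if $block->();
            }
            return %kept;
        }
        1;

    A caller that does use Hash::Helper qw(skim $k $v); can then use $k and $v under strict without declaring them; the cost is that the variables must appear in the import list, which falls short of the bare use Hash::Helper qw<skim>; the question asks for.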


  • What is user gcc's purpose in requesting code possibly like this?

    - by James Morris
    In the question "between syntax, are there any equal function", the user gcc is requesting what I can only imagine to be the following code:

        #include <stdio.h>
        #include <string.h>

        /* estimated magic values */
        #define MAXFUNCS   8
        #define MAXFUNCLEN 3

        int the_mainp_compare_func(char** mainp)
        {
            char mainp0[MAXFUNCS][MAXFUNCLEN] = { 0 };
            char mainp1[MAXFUNCS][MAXFUNCLEN] = { 0 };
            char* psrc, *pdst;
            int i = 0;
            int func = 0;

            psrc = mainp[0];
            printf("scanning mainp[0] for functions...\n");
            while (*psrc) {
                if (*psrc == '\0')
                    break;
                else if (*psrc == ',')
                    ++psrc;
                else {
                    mainp0[func][0] = *psrc++;
                    if (*psrc == ',') {
                        mainp0[func][1] = '\0';
                        psrc++;
                    }
                    else if (*psrc != '\0') {
                        mainp0[func][1] = *psrc++;
                        mainp0[func][2] = '\0';
                    }
                    printf("function: '%s'\n", mainp0[func]);
                }
                ++func;
            }

            printf("\nscanning mainp[1] for functions...\n");
            psrc = mainp[1];
            func = 0;
            while (*psrc) {
                if (*psrc == '\0')
                    break;
                else if (*psrc == ',')
                    ++psrc;
                else {
                    mainp1[func][0] = *psrc++;
                    if (*psrc == ',') {
                        mainp1[func][1] = '\0';
                        psrc++;
                    }
                    else if (*psrc != '\0') {
                        mainp1[func][1] = *psrc++;
                        mainp1[func][2] = '\0';
                    }
                    printf("function: '%s'\n", mainp1[func]);
                }
                ++func;
            }

            printf("\ncomparing functions in '%s' with those in '%s'\n", mainp[0], mainp[1]);
            int func2;
            func = 0;
            while (*mainp0[func] != '\0') {
                func2 = 0;
                while (*mainp1[func2] != '\0') {
                    printf("comparing %s with %s\n", mainp0[func], mainp1[func2]);
                    if (strcmp(mainp0[func], mainp1[func2++]) == 0)
                        return 1; /* not sure what to return here */
                }
                ++func;
            }
            /* no matches == failure */
            return -1; /* not sure what to return on failure */
        }

        int main(int argc, char** argv)
        {
            char* mainp[] = { "P,-Q,Q,-R", "R,A,P,B,F" };
            if (the_mainp_compare_func(mainp) == 1)
                printf("a match was found, but I don't know what to do with it!\n");
            else
                printf("no match found, and I'm none the wiser!\n");
            return 0;
        }

    My question is: what is its purpose?


  • Using a large list of terms, search through page text and replace words with links

    - by dunc
    A while ago I posted this question asking if it's possible to convert text to HTML links if they match a list of terms from my database. I have a fairly huge list of terms - around 6,000. The accepted answer on that question was superb, but having never used XPath, I was at a loss when problems started occurring. At one point, after fiddling with code, I somehow managed to add over 40,000 random characters to our database - the majority of which required manual removal. Since then I've lost faith in that idea, and the more simple PHP solutions simply weren't efficient enough to deal with the amount of data and the quantity of terms.

    My next attempt at a solution is to write a JS script which, once the page has loaded, retrieves the terms and matches them against the text on a page. This answer has an idea which I'd like to attempt. I would use AJAX to retrieve the terms from the database, to build an object such as this:

        var words = [
            { word: 'Something',      link: 'http://www.something.com' },
            { word: 'Something Else', link: 'http://www.something.com/else' }
        ];

    When the object has been built, I'd use this kind of code:

        //for each array element
        $.each(words, function() {
            //store it ("this" is gonna become the dom element in the next function)
            var search = this;
            $('.message').each(function() {
                //if it's exactly the same
                if ($(this).text() === search.word) {
                    //do your magic tricks
                    $(this).html('<a href="' + search.link + '">' + search.link + '</a>');
                }
            });
        });

    Now, at first sight, there is a major issue here: with 6,000 terms, will this code be in any way efficient enough to do what I'm trying to do? One option would possibly be to perform some of the overhead within the PHP script that the AJAX communicates with. For instance, I could send the ID of the post, and then the PHP script could use SQL statements to retrieve all of the information from the post and match it against all 6,000 terms. Then the return call to the JavaScript could simply be the matching terms, which would significantly reduce the number of matches the above jQuery would make (around 50 at most). I have no problem with the script taking a few seconds to "load" on the user's browser, as long as it isn't impacting their CPU usage or anything like that.

    So, two questions in one: Can I make this work? What steps can I take to make it as efficient as possible? Thanks in advance,
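    If the PHP side does return only the ~50 terms that actually occur in the post, one combined regular expression can then link them in a single pass instead of looping per term. A sketch (function name illustrative), with the caveat that rewriting a container's HTML from its text is blunt: it assumes the messages are plain text, since any markup inside them would be lost:

        // matched: the reduced term list returned by the AJAX call.
        function linkTerms(matched, $container) {
            // Longest terms first, so 'Something Else' beats 'Something'.
            matched.sort(function (a, b) { return b.word.length - a.word.length; });
            var links = {};
            var parts = $.map(matched, function (t) {
                links[t.word.toLowerCase()] = t.link;
                return t.word.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'); // escape regex chars
            });
            var re = new RegExp('\\b(' + parts.join('|') + ')\\b', 'gi');
            $container.html($container.text().replace(re, function (m) {
                return '<a href="' + links[m.toLowerCase()] + '">' + m + '</a>';
            }));
        }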


  • recursion resulting in extra unwanted data

    - by spacerace
    I'm writing a module to handle dice rolling. Given x dice of y sides, I'm trying to come up with a list of all potential roll combinations. This code assumes 3 dice, each with 3 sides labeled 1, 2, and 3. (I realize I'm using "magic numbers", but this is just an attempt to simplify and get the base code working.)

        int[] set = { 1, 1, 1 };
        list = diceroll.recurse(0, 0, list, set);
        ...
        public ArrayList<Integer> recurse(int index, int i, ArrayList<Integer> list, int[] set) {
            if (index < 3) {
                // System.out.print("\n(looping on "+index+")\n");
                for (int k = 1; k <= 3; k++) {
                    // System.out.print("setting i"+index+" to "+k+" ");
                    set[index] = k;
                    dump(set);
                    recurse(index + 1, i, list, set);
                }
            }
            return list;
        }

    (dump() is a simple method to just display the contents of set[]. The variable i is not used at the moment.) What I'm attempting to do is increment set[index] by one, stepping through the entire length of the list and incrementing as I go. This is my "best attempt" code. Here is the output, assuming three dice, each with 3 sides. Bold output is what I'm looking for (the formatting was lost in this dump); I can't figure out how to get rid of the rest. I'm using recursion so I can scale it up to any x dice with y sides.

        [1][1][1] [1][1][1] [1][1][1] [1][1][2] [1][1][3] [1][2][3] [1][2][1] [1][2][2] [1][2][3]
        [1][3][3] [1][3][1] [1][3][2] [1][3][3] [2][3][3] [2][1][3] [2][1][1] [2][1][2] [2][1][3]
        [2][2][3] [2][2][1] [2][2][2] [2][2][3] [2][3][3] [2][3][1] [2][3][2] [2][3][3] [3][3][3]
        [3][1][3] [3][1][1] [3][1][2] [3][1][3] [3][2][3] [3][2][1] [3][2][2] [3][2][3] [3][3][3]
        [3][3][1] [3][3][2] [3][3][3]

    I apologize for the formatting; this was the best I could come up with. Any help would be greatly appreciated. (This method actually stemmed from using the data for something quite trivial, but it has turned into a personal challenge. :)

    edit: If there is another approach to solving this problem I'd be all ears, but I'd also like to solve my current problem and successfully use recursion for something useful.
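    The extra lines come from calling dump() at every level of the recursion, so each partial assignment gets printed on the way down. A minimal sketch of the usual shape, printing only once all positions have been set:

        // Only emit a combination once every die has been assigned,
        // i.e. at the bottom of the recursion.
        public void recurse(int index, int[] set) {
            if (index == set.length) { // all dice set: print exactly once
                dump(set);
                return;
            }
            for (int k = 1; k <= 3; k++) { // 3 = sides per die, as above
                set[index] = k;
                recurse(index + 1, set);
            }
        }

    That yields the 27 complete combinations for three 3-sided dice and nothing else.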


  • Firefox all but freezes during large file upload; Ajax progress bar infeasible; IE6 works fine

    - by Sean
    I want to provide a progress bar for my users who upload very large files. I did some reading and implemented what should be a pretty straightforward solution: I have a <form> element that contains a file input element; its target is set to the ID of a hidden iframe. On the server side, there's some Spring magic that attaches an object to the user's session; the progress of the upload can be queried from this object.

    After submitting the form, I start a repeating Ajax call using setInterval that queries the server for the percent-complete using the aforementioned session object. The call repeats every half-second, skipping the Ajax call if the previous call has not yet completed. I use the data from the call to update the width of an onscreen element. When the server call reports that the upload is complete, I clear the interval timer.

    I created a 100-megabyte file and uploaded it using my interface. This is using Firefox 3.6.3. What I found is that although the upload takes 20-25 seconds, the progress bar doesn't get updated until the very end. Moreover, the entire browser is basically frozen until the upload completes. I assumed that my method must be flawed, but I tried the same page using IE6, and was utterly amazed when it behaved as I had designed it to--the progress bar got updated every half second, and the whole upload only took about 15 seconds, much faster than Firefox.

    I don't have many add-ons installed, but I tried disabling Firebug and restarting my browser. This marginally improved the performance--I got perhaps a single additional progress bar update mid-upload--but still far from acceptable. Can anyone tell me what I can do to bring Firefox's performance up to the level of IE6? Ugh, I can't believe I actually typed that.

    EDIT: I just tried uploading a large file from a Firefox 3.6.3 browser on a different machine than the one that's running my web server, and it worked fine. Huh.


  • c permutation of a number

    - by Pkp
    I am writing a program for a magic box. As the only way around it is brute force, I wrote a program to compute permutations of a given array using Bell's algorithm, along the lines of http://programminggeeks.com/c-code-for-permutation/. It does work for arrays of 3 and 4. It does not work for an array of 8 numbers (1,2,3,4,5,6,7,8). I see that the combination 1 2 3 4 5 6 7 8 gets repeated a couple of times, and there are other combinations that get repeated too. I see that certain combinations don't even get displayed. So could someone tell me what is wrong in the program below?

    Code:

        #include <stdio.h>
        #include <stdlib.h>  /* added: needed for malloc(), atoi(), exit() */

        int len, numperm = 1, count = 0;

        void display(int a[]) {
            int i;
            for (i = 0; i < len; i++)
                printf("%d ", a[i]);
            printf("\n");
            count++;
        }

        void swap(int *a, int *b) {
            int temp;
            temp = *a;
            *a = *b;
            *b = temp;
        }

        void no_of_perm() {
            int x;
            for (x = 1; x <= len; x++)
                numperm = numperm * x;
        }

        void perm(int a[]) {
            int x, y;
            while (count < numperm) {
                for (x = 0; x < len - 1; x++) {
                    swap(&a[x], &a[x + 1]);
                    display(a);
                }
                swap(&a[0], &a[1]);
                display(a);
                for (y = len - 1; y > 0; y--) {
                    swap(&a[y], &a[y - 1]);
                    display(a);
                }
                swap(&a[len - 1], &a[len - 2]);
                display(a);
            }
        }

        int main(int argc, char *argv[]) {
            if (argc < 2) {
                printf("Error\n");
                exit(0);
            }
            int i, *a = malloc(sizeof(int) * atoi(argv[1]));
            len = atoi(argv[1]);
            for (i = 0; i < len; i++)
                a[i] = i + 1;
            no_of_perm();
            perm(a);
        }
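    For comparison, a sketch of Heap's algorithm, which visits each of the n! orderings exactly once and so avoids both the repeats and the missing combinations described above. It reuses the display() and swap() helpers and the global len from the program:

        /* Generate all permutations of a[0..n-1]; call with n == len. */
        void heap_perm(int a[], int n) {
            int i;
            if (n == 1) {
                display(a);  /* a complete permutation */
                return;
            }
            for (i = 0; i < n - 1; i++) {
                heap_perm(a, n - 1);
                /* even n: swap i-th and last; odd n: swap first and last */
                swap(n % 2 == 0 ? &a[i] : &a[0], &a[n - 1]);
            }
            heap_perm(a, n - 1);
        }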


  • Can I use accepts_nested_attributes_for with checkboxes in a _form to select potential 'links' from a list

    - by Ryan
    In Rails 3, I have the following models:

        class System
          has_many :input_modes # name of the table with the join in it
          has_many :imodes, :through => :input_modes, :source => 'mode', :class_name => "Mode"
          has_many :output_modes
          has_many :omodes, :through => :output_modes, :source => 'mode', :class_name => 'Mode'
        end

        class InputMode # OutputMode is identical
          belongs_to :mode
          belongs_to :system
        end

        class Mode
          ... fields, i.e. name ...
        end

    That works nicely, and I can assign lists of Modes to imodes and omodes as intended. What I'd like to do is use accepts_nested_attributes_for or some other such magic in the System model and build a view with a set of checkboxes. The set of valid Modes for a given System is defined elsewhere. I'm using checkboxes in the _form view to select which of the valid modes is actually set in imodes and omodes. I don't want to create new Modes from this view, just select from a list of pre-defined Modes.

    Below is what I'm currently using in my _form view. It generates a list of checkboxes, one for each allowed Mode for the System being edited. If the checkbox is ticked, then that Mode is to be included in the imodes list.

        <% @allowed_modes.each do |mode| %>
          <li>
            <%= check_box_tag :imode_ids, mode.id, @system.imodes.include?(mode),
                              :name => 'imode_ids[]' %>
            <%= mode.name %>
          </li>
        <% end %>

    Which passes this into the controller in params:

        { ..., "imode_ids" => ["2", "14"], ... }

    In controller#create I extract and assign the Modes that had their corresponding checkboxes ticked and add them to imodes with the following code:

        @system = System.new(params[:system])
        # Note the empty list that makes sure we clear the
        # list if none of the checkboxes are ticked
        if params.has_key?(:imode_ids)
          imodes = Mode.find(params[:imode_ids])
        else
          imodes = []
        end
        @system.imodes = imodes

    Once again, that all works nicely, but I'll have to copy that cludgey code into the other methods in the controller, and I'd much prefer to use something more magical if possible. I feel like I've passed off the path of nice clean rails code and into the forest of "hacking around" rails; it works but I don't like it. What should I have done?
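    One less hand-rolled route - sketched here, not tested against these exact models - leans on the imode_ids= writer that has_many already generates: name the checkboxes under system[imode_ids][] and let mass assignment do the rest.

        <% @allowed_modes.each do |mode| %>
          <li>
            <%= check_box_tag 'system[imode_ids][]', mode.id,
                              @system.imode_ids.include?(mode.id) %>
            <%= mode.name %>
          </li>
        <% end %>
        <%# blank entry so an all-unticked form still clears the list %>
        <%= hidden_field_tag 'system[imode_ids][]', '' %>

    With that, controller#create reduces to @system = System.new(params[:system]) (plus listing imode_ids in attr_accessible if the model uses it), and update gets the same behaviour for free.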


  • SSH X11 not working

    - by azat
    I have a home and a work computer; the home computer has a static IP address. If I ssh from my work computer to my home computer, the ssh connection works, but X11 applications are not displayed.

    In my /etc/ssh/sshd_config at home:

        X11Forwarding yes
        X11DisplayOffset 10
        X11UseLocalhost yes

    At work I have tried the following commands:

        xhost + home HOME_IP
        ssh -X home
        ssh -X HOME_IP
        ssh -Y home
        ssh -Y HOME_IP

    My /etc/ssh/ssh_config at work:

        Host *
            ForwardX11 yes
            ForwardX11Trusted yes

    My ~/.ssh/config at work:

        Host home
            HostName HOME_IP
            User azat
            PreferredAuthentications password
            ForwardX11 yes

    My ~/.Xauthority at work:

        -rw------- 1 azat azat 269 Jun 7 11:25 .Xauthority

    My ~/.Xauthority at home:

        -rw------- 1 azat azat 246 Jun 7 19:03 .Xauthority

    But it doesn't work. After I make an ssh connection to home:

        $ echo $DISPLAY
        localhost:10.0
        $ kate
        X11 connection rejected because of wrong authentication.
        (the line repeats 8 times)
        kate: cannot connect to X server localhost:10.0

    I use iptables at home, but I've allowed port 22. According to what I've read, that's all I need.

    UPD. With -vvv:

        ...
        debug2: callback start
        debug2: x11_get_proto: /usr/bin/xauth list :0 2>/dev/null
        debug1: Requesting X11 forwarding with authentication spoofing.
        debug2: channel 1: request x11-req confirm 1
        debug2: client_session2_setup: id 1
        debug2: fd 3 setting TCP_NODELAY
        debug2: channel 1: request pty-req confirm 1
        ...

    When I try to launch kate:

        debug1: client_input_channel_open: ctype x11 rchan 2 win 65536 max 16384
        debug1: client_request_x11: request from 127.0.0.1 55486
        debug2: fd 8 setting O_NONBLOCK
        debug3: fd 8 is O_NONBLOCK
        debug1: channel 2: new [x11]
        debug1: confirm x11
        debug2: X11 connection uses different authentication protocol.
        X11 connection rejected because of wrong authentication.
        debug2: X11 rejected 2 i0/o0
        debug2: channel 2: read failed
        debug2: channel 2: close_read
        debug2: channel 2: input open -> drain
        debug2: channel 2: ibuf empty
        debug2: channel 2: send eof
        debug2: channel 2: input drain -> closed
        debug2: channel 2: write failed
        debug2: channel 2: close_write
        debug2: channel 2: output open -> closed
        debug2: X11 closed 2 i3/o3
        debug2: channel 2: send close
        debug2: channel 2: rcvd close
        debug2: channel 2: is dead
        debug2: channel 2: garbage collecting
        debug1: channel 2: free: x11, nchannels 3
        debug3: channel 2: status: The following connections are open:
          #1 client-session (t4 r0 i0/0 o0/0 fd 5/6 cc -1)
          #2 x11 (t7 r2 i3/0 o3/0 fd 8/8 cc -1)
        (the same block repeats about 7 times)
        kate: cannot connect to X server localhost:10.0

    UPD2.

    Q: Please provide your Linux distribution & version number. Are you using a default GNOME or KDE environment for X or something else you customized yourself?
    A:

        azat:~$ kded4 -version
        Qt: 4.7.4
        KDE Development Platform: 4.6.5 (4.6.5)
        KDE Daemon: $Id$

    Q: Are you invoking ssh directly on a command line from a terminal window? What terminal are you using? xterm, gnome-terminal, or? How did you start the terminal running in the X environment? From a menu? Hotkey? or?
    A: From the terminal emulator yakuake. I manually press Ctrl+N and type the commands.

    Q: Can you run xeyes from the same terminal window where the ssh -X fails?
    A: xeyes is not installed, but kate or another KDE app runs.

    Q: Are you invoking the ssh command as the same user that you're logged into the X session as?
    A: From the same user.

    UPD3. I also downloaded the ssh sources and, using debug2(), wrote out why it reports that the protocol version is different: it sees some cookies, and one of them is empty; another is MIT-MAGIC-COOKIE-1.


  • Cisco VPN Client dropping connection

    - by IT Team
    Using Windows XP and Cisco VPN client version 5.0.4.xxx to connect to a remote customer site. We are able to establish the connection and start an RDP session, but within 1-2 minutes the connection drops and the VPN connection disconnects. The PC making the connection is on a DMZ which is NATed to a public IP address. If we move the PC directly onto the internet, without being on the DMZ, the connection works and we don't encounter any disconnects.

    We use a PIX 515E running 7.2.4 and don't have any problems with similar setups connecting to other customer sites from the DMZ. The VPN setup on the client side is pretty basic, using IPSec over TCP port 10000. Not sure what device they are using on the peer, but my guess would be an ASA. Any idea as to what the problem would be?

    Below are the logs from the VPN client when the problem occurs. The real IP address has been changed to RemotePeerIP.

        4  14:39:30.593 09/23/09 Sev=Info/4 CM/0x63100024 Attempt connection with server "RemotePeerIP"
        5  14:39:30.593 09/23/09 Sev=Info/6 CM/0x6310002F Allocated local TCP port 1942 for TCP connection.
        6  14:39:30.796 09/23/09 Sev=Info/4 IPSEC/0x63700008 IPSec driver successfully started
        7  14:39:30.796 09/23/09 Sev=Info/4 IPSEC/0x63700014 Deleted all keys
        8  14:39:30.796 09/23/09 Sev=Info/6 IPSEC/0x6370002C Sent 256 packets, 0 were fragmented.
        9  14:39:30.796 09/23/09 Sev=Info/6 IPSEC/0x63700020 TCP SYN sent to RemotePeerIP, src port 1942, dst port 10000
        10 14:39:30.796 09/23/09 Sev=Info/6 IPSEC/0x6370001C TCP SYN-ACK received from RemotePeerIP, src port 10000, dst port 1942
        11 14:39:30.796 09/23/09 Sev=Info/6 IPSEC/0x63700021 TCP ACK sent to RemotePeerIP, src port 1942, dst port 10000
        12 14:39:30.796 09/23/09 Sev=Warning/3 IPSEC/0xA370001C Bad cTCP trailer, Rsvd 26984, Magic# 63697672h, trailer len 101, MajorVer 13, MinorVer 10
        13 14:39:30.796 09/23/09 Sev=Info/4 CM/0x63100029 TCP connection established on port 10000 with server "RemotePeerIP"
        14 14:39:31.296 09/23/09 Sev=Info/4 CM/0x63100024 Attempt connection with server "RemotePeerIP"
        15 14:39:31.296 09/23/09 Sev=Info/6 IKE/0x6300003B Attempting to establish a connection with RemotePeerIP.
        16 14:39:31.296 09/23/09 Sev=Info/4 IKE/0x63000013 SENDING ISAKMP OAK AG (SA, KE, NON, ID, VID(Xauth), VID(dpd), VID(Frag), VID(Unity)) to RemotePeerIP
        17 14:39:36.296 09/23/09 Sev=Info/4 IKE/0x63000021 Retransmitting last packet!
        18 14:39:36.296 09/23/09 Sev=Info/4 IKE/0x63000013 SENDING ISAKMP OAK AG (Retransmission) to RemotePeerIP
        19 14:39:41.296 09/23/09 Sev=Info/4 IKE/0x63000021 Retransmitting last packet!
        20 14:39:41.296 09/23/09 Sev=Info/4 IKE/0x63000013 SENDING ISAKMP OAK AG (Retransmission) to RemotePeerIP
        21 14:39:46.296 09/23/09 Sev=Info/4 IKE/0x63000021 Retransmitting last packet!
        22 14:39:46.296 09/23/09 Sev=Info/4 IKE/0x63000013 SENDING ISAKMP OAK AG (Retransmission) to RemotePeerIP
        23 14:39:51.328 09/23/09 Sev=Info/4 IKE/0x63000017 Marking IKE SA for deletion (I_Cookie=AEFC3FFF0405BBD6 R_Cookie=0000000000000000) reason = DEL_REASON_PEER_NOT_RESPONDING
        24 14:39:51.828 09/23/09 Sev=Info/4 IKE/0x6300004B Discarding IKE SA negotiation (I_Cookie=AEFC3FFF0405BBD6 R_Cookie=0000000000000000) reason = DEL_REASON_PEER_NOT_RESPONDING
        25 14:39:51.828 09/23/09 Sev=Info/4 CM/0x63100014 Unable to establish Phase 1 SA with server "RemotePeerIP" because of "DEL_REASON_PEER_NOT_RESPONDING"
        26 14:39:51.828 09/23/09 Sev=Info/5 CM/0x63100025 Initializing CVPNDrv
        27 14:39:51.828 09/23/09 Sev=Info/4 CM/0x6310002D Resetting TCP connection on port 10000
        28 14:39:51.828 09/23/09 Sev=Info/6 CM/0x63100030 Removed local TCP port 1942 for TCP connection.
        29 14:39:51.828 09/23/09 Sev=Info/6 CM/0x63100046 Set tunnel established flag in registry to 0.
        30 14:39:51.828 09/23/09 Sev=Info/4 IKE/0x63000001 IKE received signal to terminate VPN connection
        31 14:39:52.328 09/23/09 Sev=Info/6 IPSEC/0x63700023 TCP RST sent to RemotePeerIP, src port 1942, dst port 10000
        32 14:39:52.328 09/23/09 Sev=Info/4 IPSEC/0x63700014 Deleted all keys
        33 14:39:52.328 09/23/09 Sev=Info/4 IPSEC/0x63700014 Deleted all keys
        34 14:39:52.328 09/23/09 Sev=Info/4 IPSEC/0x63700014 Deleted all keys
        35 14:39:52.328 09/23/09 Sev=Info/4 IPSEC/0x6370000A IPSec driver successfully stopped

    Thank you for any help you can provide.


  • PHP crashing (seg-fault) under mod_fcgi, apache

    - by Andras Gyomrey
    I've been programming a site using:

        Zend Framework 1.11.5 (complete MVC)
        PHP 5.3.6
        Apache 2.2.19
        CentOS 5.6 i686 virtuozzo on vps
        cPanel WHM 11.30.1 (build 4)
        Mysql 5.1.56-log
        Mysqli API 5.1.56

    The issue started here: http://stackoverflow.com/questions/6769515/php-programming-seg-fault. In brief, php is giving me random segmentation faults:

        [Wed Jul 20 17:45:34 2011] [error] mod_fcgid: process /usr/local/cpanel/cgi-sys/php5(11562) exit(communication error), get unexpected signal 11
        [Wed Jul 20 17:45:34 2011] [warn] [client 190.78.208.30] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Wed Jul 20 17:45:34 2011] [error] [client 190.78.208.30] Premature end of script headers: index.php

    About extensions: when I compile php with the "--enable-debug" flag, I have to disable this line:

        zend_extension="/usr/local/IonCube/ioncube_loader_lin_5.3.so"

    Otherwise, the server doesn't accept requests and I get a "The connection with the server was reset". It is possible that I have to disable eaccelerator too, for the same reason. I still don't get why apache runs with it some times and some others not:

        extension="eaccelerator.so"

    Anyway, after I get httpd running, seg-faults can occur randomly. If I don't compile php with the "--enable-debug" flag, I can get a php crash DETERMINISTICALLY:

        <?php
        class Admin_DbController extends Controller_BaseController
        {
            public function updateSqlDefinitionsAction()
            {
                $db = Zend_Registry::get('db');
                $row = $db->fetchRow("SHOW CREATE TABLE 222AFI");
            }
        }
        ?>

    BUT if I compile php with the "--enable-debug" flag, it's really hard to get this error. I must add some complexity to make it crash. I have to be doing many parallel requests for a few seconds to get a crash:

        <?php
        class Admin_DbController extends Controller_BaseController
        {
            public function updateSqlDefinitionsAction()
            {
                $db = Zend_Registry::get('db');
                $tableList = $db->listTables();
                foreach ($tableList as $tableName) {
                    $row = $db->fetchRow("SHOW CREATE TABLE " . $db->quoteIdentifier($tableName));
                    file_put_contents(
                        DB_DEFINITIONS_PATH . '/' . $tableName . '.sql',
                        $row['Create Table'] . ';'
                    );
                }
            }
        }
        ?>

    Please notice this is the same script, but creating DDL for all tables in the database rather than for one. It seems that when php is heavily loaded (with extensions and me doing many parallel requests) is when I get php to crash.

    About starting httpd with "-X": I've tried. The thing is, it is already hard to make php crash with --enable-debug, and with the "-X" option (which only enables one child process) I can't do parallel requests. So I haven't been able to create a proper debug backtrace: https://bugs.php.net/bugs-generating-backtrace.php

    My concrete question is, what do I do to get a coredump?

        root@GWT4 [~]# httpd -V
        Server version: Apache/2.2.19 (Unix)
        Server built:   Jul 20 2011 19:18:58
        Cpanel::Easy::Apache v3.4.2 rev9999
        Server's Module Magic Number: 20051115:28
        Server loaded:  APR 1.4.5, APR-Util 1.3.12
        Compiled using: APR 1.4.5, APR-Util 1.3.12
        Architecture:   32-bit
        Server MPM:     Prefork
          threaded:     no
            forked:     yes (variable process count)
        Server compiled with....
         -D APACHE_MPM_DIR="server/mpm/prefork"
         -D APR_HAS_SENDFILE
         -D APR_HAS_MMAP
         -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
         -D APR_USE_SYSVSEM_SERIALIZE
         -D APR_USE_PTHREAD_SERIALIZE
         -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
         -D APR_HAS_OTHER_CHILD
         -D AP_HAVE_RELIABLE_PIPED_LOGS
         -D DYNAMIC_MODULE_LIMIT=128
         -D HTTPD_ROOT="/usr/local/apache"
         -D SUEXEC_BIN="/usr/local/apache/bin/suexec"
         -D DEFAULT_PIDLOG="logs/httpd.pid"
         -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
         -D DEFAULT_LOCKFILE="logs/accept.lock"
         -D DEFAULT_ERRORLOG="logs/error_log"
         -D AP_TYPES_CONFIG_FILE="conf/mime.types"
         -D SERVER_CONFIG_FILE="conf/httpd.conf"
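    On the coredump question itself, a sketch of the usual knobs, assuming root on the VPS (the core filename and PID below are illustrative):

        # Allow cores and give them a known landing spot; run this before
        # starting httpd so the children inherit the limit.
        ulimit -c unlimited
        echo '/tmp/core.%e.%p' > /proc/sys/kernel/core_pattern

        # In httpd.conf, tell Apache where children may dump:
        #   CoreDumpDirectory /tmp

        # After restarting apache and reproducing the crash, load the core
        # into gdb against the binary that died (here the mod_fcgid child):
        gdb /usr/local/cpanel/cgi-sys/php5 /tmp/core.php5.12345
        (gdb) bt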


  • PPTP connection fails with errors 800/806

    - by Mark S. Rasmussen
    I've got a client (Server 2008 R2) that won't connect to our production environment PPTP VPN server (Server 2003, running RRAS). The server is behind a firewall that has TCP 1723 open as well as GRE. Other clients at our office are able to connect just fine. Our office is behind a Juniper SSG5-Serial firewall, but all outgoing traffic is allowed, and multiple other clients are able to connect to VPN servers without issues.

    I've also set up a completely different VPN server on another network outside of our office. The functioning clients connect just fine - the Server 2008 R2 machine doesn't. Thus it's definitely a problem with this machine in particular. I've rebooted it. I've disabled the firewall. No dice on either.

    I've run PPTPSRV and PPTPCLNT on the server/client and they're able to communicate perfectly - indicating there's no problem using either TCP 1723 or GRE. The Server 2008 R2 machine is also running as a VPN server itself (incoming connection), and that's working perfectly. We have the issues no matter if there are active incoming connections or not. I'm not sure what my next debugging step would be; any suggestions?

    EDIT: The event log on the server has the following warning from RasMan:

        A connection between the VPN server and the VPN client xxx.xxx.xxx.xxx has been established, but the VPN connection cannot be completed. The most common cause for this is that a firewall or router between the VPN server and the VPN client is not configured to allow Generic Routing Encapsulation (GRE) packets (protocol 47). Verify that the firewalls and routers between your VPN server and the Internet allow GRE packets. Make sure the firewalls and routers on the user's network are also configured to allow GRE packets. If the problem persists, have the user contact the Internet service provider (ISP) to determine whether the ISP might be blocking GRE packets.

    Obviously this points to GRE being a potential problem. But seeing as I have other clients connecting without problems, as well as PPTPSRV and PPTPCLNT being able to communicate, I'm suspecting this might be a red herring.

    EDIT: Here are the anonymized events logged by the client in chronological order:

        CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY has started dialing a VPN connection using a per-user connection profile named ZZZ. The connection settings are:
        Dial-in User = XXX\YYY
        VpnStrategy = PPTP
        DataEncryption = Require
        PrerequisiteEntry =
        AutoLogon = No
        UseRasCredentials = Yes
        Authentication Type = CHAP/MS-CHAPv2
        Ipv4DefaultGateway = No
        Ipv4AddressAssignment = By Server
        Ipv4DNSServerAssignment = By Server
        Ipv6DefaultGateway = Yes
        Ipv6AddressAssignment = By Server
        Ipv6DNSServerAssignment = By Server
        IpDnsFlags = Register primary domain suffix
        IpNBTEnabled = Yes
        UseFlags = Private Connection
        ConnectOnWinlogon = No.

        CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY is trying to establish a link to the Remote Access Server for the connection named ZZZ using the following device:
        Server address/Phone Number = XXX.YYY.ZZZ.KKK
        Device = WAN Miniport (PPTP)
        Port = VPN3-4
        MediaType = VPN.

        CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY has successfully established a link to the Remote Access Server using the following device:
        Server address/Phone Number = XXX.YYY.ZZZ.KKK
        Device = WAN Miniport (PPTP)
        Port = VPN3-4
        MediaType = VPN.

        CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The link to the Remote Access Server has been established by user XXX\YYY.

        CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY dialed a connection named ZZZ which has failed. The error code returned on failure is 806.

    Running Wireshark on the client shows it trying and retrying to send a "71 Configuration Request", while the server shows the incoming client requests but apparently never replies. Given that this is GRE traffic, I think that rules out the GRE traffic being blocked. The question is, why doesn't the server reply? Comparing the Configuration Request the server receives from the non-functioning client with the one it receives from the working client, they seem identical to me, except for differing keys and magic numbers, and the fact that one client receives a response while the other doesn't.


  • Upgraded Ubuntu, all drives in one zpool marked unavailable

    - by Matt Sieker
    I just upgraded to Ubuntu 14.04, and I had two ZFS pools on the server. There was some minor issue with me fighting with the ZFS driver and the kernel version, but that's worked out now. One pool came online and mounted fine. The other didn't. The main difference between the pools is one was just a pool of disks (video/music storage), and the other was a raidz set (documents, etc).

    I've already attempted exporting and re-importing the pool, to no avail. Attempting to import gets me this:

        root@kyou:/home/matt# zpool import -fFX -d /dev/disk/by-id/
           pool: storage
             id: 15855792916570596778
          state: UNAVAIL
         status: One or more devices contains corrupted data.
         action: The pool cannot be imported due to damaged devices or data.
           see: http://zfsonlinux.org/msg/ZFS-8000-5E
         config:

                storage                                      UNAVAIL  insufficient replicas
                  raidz1-0                                   UNAVAIL  insufficient replicas
                    ata-SAMSUNG_HD103SJ_S246J90B134910       UNAVAIL
                    ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523  UNAVAIL
                    ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969  UNAVAIL

    The symlinks for those in /dev/disk/by-id also exist:

        root@kyou:/home/matt# ls -l /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910* /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51*
        lrwxrwxrwx 1 root root  9 May 27 19:31 /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910 -> ../../sdb
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910-part1 -> ../../sdb1
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910-part9 -> ../../sdb9
        lrwxrwxrwx 1 root root  9 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523 -> ../../sdd
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523-part1 -> ../../sdd1
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523-part9 -> ../../sdd9
        lrwxrwxrwx 1 root root  9 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969 -> ../../sde
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969-part1 -> ../../sde1
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969-part9 -> ../../sde9

    Inspecting the various /dev/sd* devices listed, they appear to be the correct ones (the 3 1TB drives that were in the raidz array).

    I've run zdb -l on each drive, dumping it to a file, and run a diff. The only difference on the 3 are the guid fields (which I assume is expected). All 3 labels on each one are basically identical, and are as follows:

        version: 5000
        name: 'storage'
        state: 0
        txg: 4
        pool_guid: 15855792916570596778
        hostname: 'kyou'
        top_guid: 1683909657511667860
        guid: 8815283814047599968
        vdev_children: 1
        vdev_tree:
            type: 'raidz'
            id: 0
            guid: 1683909657511667860
            nparity: 1
            metaslab_array: 33
            metaslab_shift: 34
            ashift: 9
            asize: 3000569954304
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 8815283814047599968
                path: '/dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910-part1'
                whole_disk: 1
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 18036424618735999728
                path: '/dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523-part1'
                whole_disk: 1
                create_txg: 4
            children[2]:
                type: 'disk'
                id: 2
                guid: 10307555127976192266
                path: '/dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969-part1'
                whole_disk: 1
                create_txg: 4
        features_for_read:

    Stupidly, I do not have a recent backup of this pool. However, the pool was fine before reboot, and Linux sees the disks fine (I have smartctl running now to double check).

    So, in summary:

    - I upgraded Ubuntu, and lost access to one of my two zpools.
    - The difference between the pools is that the one that came up was JBOD, the other was raidz.
    - All drives in the unmountable zpool are marked UNAVAIL, with no notes for corrupted data.
    - The pools were both created with disks referenced from /dev/disk/by-id/.
    - Symlinks from /dev/disk/by-id to the various /dev/sd devices seem to be correct.
    - zdb can read the labels from the drives.
    - The pool has already been exported/imported, and isn't able to import again.

    Is there some sort of black magic I can invoke via zpool/zfs to bring these disks back into a reasonable array? Can I run zpool create zraid ... without losing my data? Is my data gone anyhow?
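    Not an answer to the data-loss question, but for the record, two non-destructive things sometimes tried before anything drastic (pool GUID and device paths as above; no guarantee they help here):

        # Attempt a read-only import by pool GUID; a read-only import
        # cannot write to the pool, so it is safe to try first.
        zpool import -o readonly=on -f -d /dev/disk/by-id/ 15855792916570596778

        # List the uberblocks on one member to see how far back the pool's
        # transaction history goes (purely informational).
        zdb -ul /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910-part1

    Running zpool create over these disks, by contrast, writes new labels and would destroy what is left of the old pool.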


  • How should I config Apache2.2?

    - by user1607641
    I tried to configure Apache 2.2.17 with PHP 5.4 using this manual. But when I restarted my PC and went to http://localhost/phpinfo.php, I received this message: "Not Found. The requested URL /phpinfo.php was not found on this server." Here is my httpd.conf (full version with comments) and a copy with the comments stripped for brevity:

        ServerRoot "C:/Program Files/Apache2.2"
        #Listen 12.34.56.78:80
        Listen 80

        LoadModule actions_module modules/mod_actions.so
        LoadModule alias_module modules/mod_alias.so
        LoadModule asis_module modules/mod_asis.so
        LoadModule auth_basic_module modules/mod_auth_basic.so
        LoadModule authn_default_module modules/mod_authn_default.so
        LoadModule authn_file_module modules/mod_authn_file.so
        LoadModule authz_default_module modules/mod_authz_default.so
        LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
        LoadModule authz_host_module modules/mod_authz_host.so
        LoadModule authz_user_module modules/mod_authz_user.so
        LoadModule autoindex_module modules/mod_autoindex.so
        LoadModule cgi_module modules/mod_cgi.so
        LoadModule dir_module modules/mod_dir.so
        LoadModule env_module modules/mod_env.so
        LoadModule include_module modules/mod_include.so
        LoadModule isapi_module modules/mod_isapi.so
        LoadModule log_config_module modules/mod_log_config.so
        LoadModule mime_module modules/mod_mime.so
        LoadModule negotiation_module modules/mod_negotiation.so
        LoadModule rewrite_module modules/mod_rewrite.so
        LoadModule setenvif_module modules/mod_setenvif.so
        LoadModule php5_module "C:/Languages/PHP/php5apache2_4.dll"

        # PHP
        AddHandler application/x-httpd-php .php
        PHPIniDir "C:/Languages/PHP"

        <IfModule !mpm_netware_module>
        <IfModule !mpm_winnt_module>
        User daemon
        Group daemon
        </IfModule>
        </IfModule>

        ServerAdmin [email protected]
        #ServerName localhost:80

        DocumentRoot "C:/Sites"

        <Directory />
            Options FollowSymLinks
            AllowOverride None
            Order deny,allow
            Deny from all
        </Directory>

        <Directory "C:/Sites">
            Options Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

        <IfModule dir_module>
            DirectoryIndex index.php index.html
        </IfModule>

        <FilesMatch "^\.ht">
            Order allow,deny
            Deny from all
            Satisfy All
        </FilesMatch>

        ErrorLog "logs/error.log"
        LogLevel warn

        <IfModule log_config_module>
            LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
            LogFormat "%h %l %u %t \"%r\" %>s %b" common
            <IfModule logio_module>
                # You need to enable mod_logio.c to use %I and %O
                LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
            </IfModule>
            CustomLog "logs/access.log" common
        </IfModule>

        <IfModule alias_module>
            ScriptAlias /cgi-bin/ "C:/Program Files/Apache2.2/cgi-bin/"
        </IfModule>

        <IfModule cgid_module>
            #Scriptsock logs/cgisock
        </IfModule>

        <Directory "C:/Program Files/Apache2.2/cgi-bin">
            AllowOverride None
            Options None
            Order allow,deny
            Allow from all
        </Directory>

        DefaultType text/plain

        <IfModule mime_module>
            TypesConfig conf/mime.types
            #AddType application/x-gzip .tgz
            #AddEncoding x-compress .Z
            #AddEncoding x-gzip .gz .tgz
            AddType application/x-compress .Z
            AddType application/x-gzip .gz .tgz
            #AddHandler cgi-script .cgi
            #AddHandler type-map var
            #AddType text/html .shtml
            #AddOutputFilter INCLUDES .shtml
        </IfModule>

        #MIMEMagicFile conf/magic

        # Some examples:
        #ErrorDocument 500 "The server made a boo boo."
        #ErrorDocument 404 /missing.html
        #ErrorDocument 404 "/cgi-bin/missing_handler.pl"
        #ErrorDocument 402 http://localhost/subscription_info.html

        #MaxRanges unlimited
        #EnableMMAP off
        #EnableSendfile off

        # Server-pool management (MPM specific)
        #Include conf/extra/httpd-mpm.conf
        # Multi-language error messages
        #Include conf/extra/httpd-multilang-errordoc.conf
        # Fancy directory listings
        #Include conf/extra/httpd-autoindex.conf
        # Language settings
        #Include conf/extra/httpd-languages.conf
        # User home directories
        #Include conf/extra/httpd-userdir.conf
        # Real-time info on requests and configuration
        #Include conf/extra/httpd-info.conf
        # Virtual hosts
        Include conf/extra/httpd-vhosts.conf
        # Local access to the Apache HTTP Server Manual
        #Include conf/extra/httpd-manual.conf
        # Distributed authoring and versioning (WebDAV)
        #Include conf/extra/httpd-dav.conf
        # Various default settings
        #Include conf/extra/httpd-default.conf

        <IfModule ssl_module>
            SSLRandomSeed startup builtin
            SSLRandomSeed connect builtin
        </IfModule>

    The directory with sites is C:/Sites and PHP is located at C:/Languages/PHP.


  • apache2.2 + php5 , process never die and stay blocked to LOCK_SH

    - by Givre
    Server info:

        Server version: Apache/2.2.22 (Unix)
        Server built:   Mar 28 2012 16:31:45
        Server's Module Magic Number: 20051115:30
        Server loaded:  APR 1.4.6, APR-Util 1.4.1
        Compiled using: APR 1.4.6, APR-Util 1.4.1
        Architecture:   64-bit
        Server MPM:     Prefork
          threaded:     no
            forked:     yes (variable process count)
        Server compiled with....
         -D APACHE_MPM_DIR="server/mpm/prefork"
         -D APR_HAS_SENDFILE
         -D APR_HAS_MMAP
         -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
         -D APR_USE_SYSVSEM_SERIALIZE
         -D APR_USE_PTHREAD_SERIALIZE
         -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
         -D APR_HAS_OTHER_CHILD
         -D AP_HAVE_RELIABLE_PIPED_LOGS
         -D DYNAMIC_MODULE_LIMIT=128
         -D HTTPD_ROOT="/opt/apache2"
         -D SUEXEC_BIN="/opt/apache2/bin/suexec"
         -D DEFAULT_PIDLOG="logs/httpd.pid"
         -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
         -D DEFAULT_LOCKFILE="logs/accept.lock"
         -D DEFAULT_ERRORLOG="logs/error_log"
         -D AP_TYPES_CONFIG_FILE="conf/mime.types"
         -D SERVER_CONFIG_FILE="conf/httpd.conf"

    PHP 5.2.17, using mod_php5 compiled as a DSO module.

    Problem: On shared webhosting, a lot of apache2 processes never stop or die; they wait like this until apache2 is restarted. Strace of one of these processes:

        access("tmp/meta_cache.txt", F_OK) = 0
        getcwd("/home/exemple.com/htdocs"..., 4096) = 34
        lstat("/var", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
        lstat("/var/www", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
        lstat("/home", {st_mode=S_IFDIR|0755, st_size=1715, ...}) = 0
        lstat("/home/exemple.com", {st_mode=S_IFDIR|0755, st_size=16, ...}) = 0
        lstat("/home/exemple.com/htdocs", {st_mode=S_IFDIR|0770, st_size=51, ...}) = 0
        lstat("/home/exemple.com/htdocs/tmp", {st_mode=S_IFDIR|0777, st_size=51, ...}) = 0
        lstat("/home/exemple.com/htdocs/tmp/meta_cache.txt", {st_mode=S_IFREG|0666, st_size=8901, ...}) = 0
        (the same lstat walk repeats a few more times, with one more getcwd)
        open("/home/exemple.com/htdocs/tmp/meta_cache.txt", O_RDONLY) = 10905
        fstat(10905, {st_mode=S_IFREG|0666, st_size=8901, ...}) = 0
        lseek(10905, 0, SEEK_CUR) = 0
        flock(10905, LOCK_SH) =

    The process never dies and stays like this. All files are on NFS v3. I don't know how to solve this problem or find more information. The effect is that all apache2 processes become used up and apache2 crashes totally. Thanks for your help.


  • Why am I getting a 403 error on a POST to a PHP script?

    - by John Gallagher
    Background

    I want to allow my users to submit a crash report which will get emailed to me. I'm using UKCrashReporter with the bundled PHP script I've modified. This code does a POST to a specified URL along with the crash report. I'm on a shared server running Linux. My main domain is synapticmishap.co.uk.

    The Problem

    When I send the crash report off, on the Cocoa side, it reports as having sent it successfully, but I don't receive an email. The code has been used in lots of other well established Cocoa projects, and it was working for me a few months ago. That leads me to conclude that the problems are related to my web server setup, something I know almost nothing about. When I look at my log files, I see entries like this:

        IP Redacted - - [10/Jun/2010:09:47:53 +0100] "POST /synapticmishap/crashreportform.php HTTP/1.1" 403 74 "-" "UKCrashReporter"

    What I've tried

    I've tried accessing the page at http://synapticmishap.co.uk/synapticmishap/crashreportform.php via a browser. It loads fine. I've made sure the permissions on this php script are set so anyone can execute it. I've tried removing the deny entries from the <Limit> sections of .htaccess at various levels, starting with root. I've downloaded the URLParams plugin for Firefox, which allows you to simulate POSTs. I put in the URL above and tried a POST with "crashlog" as the parameter and "test" as the value. This generated a 200 log entry in my log file - it seemed to work, although no mail message was sent.

    Code

    I've got the following at http://synapticmishap.co.uk/synapticmishap/crashreportform.php. I've simplified it to just the bare bones in an effort to get it working.

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
        <html>
        <head>
            <title>Crash Report</title>
        </head>
        <body>
        <p>This page contains super special magic which submits a crash report item to me.</p>
        <p>Nothing to see here - move along.</p>
        <?php
        mail("[email protected]", "Crash Report", "\r\n\r\nThis is a test.");
        ?>
        </body>
        </html>

    This is my top level .htaccess file:

        RewriteEngine on
        # -FrontPage-
        IndexIgnore .htaccess */.??* *~ *# */HEADER* */README* */_vti*

        <Limit GET POST>
            order deny,allow
            deny from all
            allow from all
        </Limit>
        <Limit PUT DELETE>
            order deny,allow
            deny from all
        </Limit>

        Options All -Indexes

        RewriteCond %{HTTP_HOST} ^synapticmishap.co.uk$ [OR]
        RewriteCond %{HTTP_HOST} ^www.synapticmishap.co.uk$
        RewriteCond %{HTTP_HOST} ^lapsusapp.co.uk$ [OR]
        RewriteCond %{HTTP_HOST} ^www.lapsusapp.co.uk$
        RewriteRule ^/?$ "http\:\/\/synapticmishap\.co\.uk\/synapticmishap\/lapsuspromo\/" [R=301,L]

        RewriteCond %{HTTP_HOST} ^jgtutoring.co.uk$ [OR]
        RewriteCond %{HTTP_HOST} ^www.jgtutoring.co.uk$
        RewriteRule ^/?$ "http\:\/\/synapticmishap\.co\.uk\/tutoring" [R=301,L]

        RewriteCond %{HTTP_HOST} ^synapticmishap.co.uk$ [OR]
        RewriteCond %{HTTP_HOST} ^www.synapticmishap.co.uk$
        RewriteRule ^/?$ "http\:\/\/synapticmishap\.co\.uk\/synapticmishap" [R=301,L]

        RewriteCond %{HTTP_HOST} ^jgediting.co.uk$ [OR]
        RewriteCond %{HTTP_HOST} ^www.jgediting.co.uk$
        RewriteRule ^/?$ "http\:\/\/synapticmishap\.co\.uk\/editing" [R=301,L]

        RewriteCond %{HTTP_REFERER} !^$
        RewriteCond %{HTTP_REFERER} !^http://synapticmishap.co.uk/.*$ [NC]
        RewriteCond %{HTTP_REFERER} !^http://synapticmishap.co.uk$ [NC]
        RewriteCond %{HTTP_REFERER} !^http://www.synapticmishap.co.uk/.*$ [NC]
        RewriteCond %{HTTP_REFERER} !^http://www.synapticmishap.co.uk$ [NC]
        RewriteCond %{HTTP_REFERER} !^http://synapticmishap.co.uk/synapticmishap/crashreportform.php/.*$ [NC]
        RewriteCond %{HTTP_REFERER} !^http://synapticmishap.co.uk/synapticmishap/crashreportform.php$ [NC]
        RewriteRule .*\.(jpg|jpeg|gif|png|bmp)$ - [F,NC]

    Help!

    I'm at the end of my tether with this, and I'm in a very unfamiliar space with all this web stuff. I'd be most appreciative of any thoughts people had on why this isn't working. Thanks.
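    One way to narrow down which rule is responsible is to reproduce the failing request from a shell, since the only visible difference from the working browser POST is the client itself. A sketch with curl (the parameter name is taken from the URLParams test above):

        # Mimic the crash reporter: same URL, same user agent, a POST body.
        curl -v -A "UKCrashReporter" \
             -d "crashlog=test" \
             http://synapticmishap.co.uk/synapticmishap/crashreportform.php

    If this also draws a 403, toggling .htaccess rules one at a time while re-running it will point at the culprit, e.g. a hosting-level mod_security rule keying on the unfamiliar User-Agent or the empty Referer.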


  • Unable to remove limit on memory usage for PHP script.

    - by Jess Telford
    The Situation I am having an issue with a PHP script getting the following error message:
    Fatal error: Out of memory (allocated 359923712) (tried to allocate 72 bytes) in /path/to/piwik/core/DataTable.php on line 969
    The script I'm running is /path/to/piwik/misc/cron/archive.sh. I am assuming the numbers are bytes, which means that total is approximately 360MB. For all intents and purposes, I have increased the memory limits on the server well above 360MB, yet this is the number (give or take a byte) it consistently errors out at. Please note: this question is not about fixing a memory leak in the script, nor about why the script itself is using so much memory. The script is part of the Piwik archiving process, so I cannot just fix any memory leaks, etc. For more info on this script and why I am increasing the memory limit, see "How to setup auto archiving".
    The question Given that the script is attempting to use over 360MB of memory, which I cannot change, why does it not seem possible for me to increase the amount of memory available to PHP on my server?
    What I've tried
    Increasing PHP's memory_limit. Given the php.ini file:
    php -i | grep php.ini
    Configuration File (php.ini) Path => /usr/local/lib
    Loaded Configuration File => /usr/local/lib/php.ini
    I have edited that file so the memory_limit directive reads:
    memory_limit = -1
    Restart Apache, and check the new value has stuck:
    $ php -i | grep memory_limit
    memory_limit => -1 => -1
    Run the script, and get the same error. I've also tried 1G, 768M, etc., all with the same result (i.e. no change). Update 22nd June: based on Vangel's help, I have attempted to set post_max_size to 20M in combination with setting memory_limit. Again, this has no effect.
    Removing the memory limit on child processes of Apache. I have found and edited the httpd.conf file to make sure there is no RLimitMEM directive. I then used WHM's Apache Configuration Memory Usage Restrictions to generate a restriction, which it claimed was at 1000M (and confirmed by checking httpd.conf). Both of these resulted in no change to the script erroring at 360MB.
    Increasing the per-process memory limits of Linux. The current limits set on the system:
    $ ulimit -m
    524288
    $ ulimit -v
    524288
    I have attempted to set both of these to unlimited:
    $ ulimit -m unlimited
    $ ulimit -v unlimited
    $ ulimit -m
    unlimited
    $ ulimit -v
    unlimited
    Once again, this has resulted in absolutely no improvement in my problem.
    My setup
    $ cat /etc/redhat-release
    CentOS release 5.5 (Final)
    $ uname -a
    Linux example.com 2.6.18-164.15.1.el5 #1 SMP Wed Mar 17 11:30:06 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
    $ php -i | grep "PHP Version"
    PHP Version => 5.2.9
    $ httpd -V
    Server version: Apache/2.0.63
    Server built: Feb 2 2011 01:25:12
    Cpanel::Easy::Apache v3.2.0 rev5291
    Server's Module Magic Number: 20020903:13
    Server loaded: APR 0.9.17, APR-UTIL 0.9.15
    Compiled using: APR 0.9.17, APR-UTIL 0.9.15
    Architecture: 64-bit
    Server compiled with....
    -D APACHE_MPM_DIR="server/mpm/prefork"
    -D APR_HAS_SENDFILE
    -D APR_HAS_MMAP
    -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
    -D APR_USE_SYSVSEM_SERIALIZE
    -D APR_USE_PTHREAD_SERIALIZE
    -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
    -D APR_HAS_OTHER_CHILD
    -D AP_HAVE_RELIABLE_PIPED_LOGS
    -D HTTPD_ROOT="/usr/local/apache"
    -D SUEXEC_BIN="/usr/local/apache/bin/suexec"
    -D DEFAULT_PIDLOG="logs/httpd.pid"
    -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
    -D DEFAULT_LOCKFILE="logs/accept.lock"
    -D DEFAULT_ERRORLOG="logs/error_log"
    -D AP_TYPES_CONFIG_FILE="conf/mime.types"
    -D SERVER_CONFIG_FILE="conf/httpd.conf"
    Output of $ php -i: http://pastebin.com/EiRut6Nm
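    Given that php -i already reports memory_limit => -1, the PHP layer looks clean, which leaves the process-level limits. One hedged check the question doesn't show: ulimit settings are per-shell and inherited at fork, so raising them in an interactive session does nothing for processes spawned by the cron daemon. Verifying what the script itself sees would settle it:
        # print the limits from inside the environment cron actually uses, e.g.
        # by adding this at the top of archive.sh temporarily (a hypothetical edit)
        ulimit -v
        ulimit -m
        # and what PHP believes its limit is when invoked the same way
        php -r 'var_dump(ini_get("memory_limit"));'
    A virtual-memory cap of 524288 KB is 512MB of total address space, shared with the binary and libraries, so malloc failing once PHP has allocated roughly 360MB is at least consistent. The persistent limits in /etc/security/limits.conf (or any hosting-panel per-user limits, if present) may be where the 360MB ceiling actually lives.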

    Read the article

  • Apache config that uses two document roots based on whether the requested resource exists in the first

    - by mattalexx
    Background I have a client site that consists of a CakePHP installation and a Magento installation:
    /web/example.com/
    /web/example.com/app/ <== CakePHP
    /web/example.com/app/webroot/ <== DocumentRoot
    /web/example.com/app/webroot/store/ <== Magento
    /web/example.com/config/ <== Site-wide config
    /web/example.com/vendors/ <== Site-wide libraries
    The server runs Apache 2.2.3.
    The problem The whole company has FTP access and got used to clogging up the /web/example.com/, /web/example.com/app/webroot/, and /web/example.com/app/webroot/store/ directories with their own files. Sometimes these files need HTTP access and sometimes they don't. In any case, this mess makes my job harder when it comes to maintaining the site. Code merges, tarring the live code, etc., are very complicated and usually require a bunch of filters.
    Abandoned solution At first, I thought I would set up a new subdomain on the same server, move all of their files there, and change their FTP chroot. But that wouldn't work, for these reasons: Firstly, I have no idea (and neither do they remember) what marketing materials they've sent out that contain URLs to certain resources they've uploaded to the server, using the main domain, and also using abstract subdomains that use the main virtual host because it has ServerAlias *.example.com. So suddenly having them only use static.example.com isn't feasible. Secondly, the PHP scripts in their projects are potentially very non-portable. I want their files to stay in an environment as similar to the one they were built in as I can manage. Also, I do not want to debug their code to make it portable.
    Half-baked solution After some thought, I decided to find a way to section off the actual website files into another directory that they would not touch. The company's uploaded files would stay where they were. This would ensure that I didn't break any of their projects that needed HTTP access. It would look something like this:
    /web/example.com/ <== A bunch of their files are in here
    /web/example.com/app/webroot/ <== 1st DocumentRoot; a bunch of their files are in here
    /web/example.com/app/webroot/store/ <== Some more are in here
    /web/example.com/site/ <== New dir; contains only site files
    /web/example.com/site/app/ <== CakePHP
    /web/example.com/site/app/webroot/ <== 2nd DocumentRoot
    /web/example.com/site/app/webroot/store/ <== Magento
    /web/example.com/site/config/ <== Site-wide config
    /web/example.com/site/vendors/ <== Site-wide libraries
    After I made this change, I would not need to pay attention to anything except the stuff within /web/example.com/site/, and my job would be a lot easier. I would be the only one changing stuff in there. So here's where the Apache magic would happen: I need an HTTP request to http://www.example.com/ to first use /web/example.com/app/webroot/ as the document root. If nothing is found there (no miscellaneous uploaded company projects match), try finding something within /web/example.com/site/app/webroot/. Another thing to keep in mind: the site might have some problems if the $_SERVER['DOCUMENT_ROOT'] variable reads /web/example.com/app/webroot/ but the actual files are within /web/example.com/site/app/webroot/. It would be better if the DOCUMENT_ROOT environment variable could be /web/example.com/site/app/webroot/ for anything within the /web/example.com/site/app/webroot/ directory.
    Conclusion Is my half-baked solution possible with Apache 2.2.3? Is there a better way to solve this problem?
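    A minimal sketch of the fallback logic, assuming mod_rewrite is available (untested against this exact setup; the paths are the ones from the layout above):
        # vhost config for example.com; the legacy tree stays DocumentRoot so
        # the company's uploads keep working unchanged
        DocumentRoot /web/example.com/app/webroot
        RewriteEngine On
        # if the request doesn't resolve to a real file or directory in the
        # legacy webroot...
        RewriteCond /web/example.com/app/webroot%{REQUEST_URI} !-f
        RewriteCond /web/example.com/app/webroot%{REQUEST_URI} !-d
        # ...serve it from the new site tree instead
        RewriteRule ^/(.*)$ /web/example.com/site/app/webroot/$1 [L]
    This doesn't address the DOCUMENT_ROOT concern - PHP served via the rewritten path would still see the legacy root - so the CakePHP and Magento front controllers may need their own absolute paths configured rather than trusting $_SERVER['DOCUMENT_ROOT'].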

    Read the article

  • iptables rule(s) to send openvpn traffic from clients over an sshuttle tunnel?

    - by Sam Martin
    I have an Ubuntu 12.04 box with OpenVPN. The VPN is working as expected -- clients can connect, browse the Web, etc. The OpenVPN server IP is 10.8.0.1 on tun0. On that same box, I can use sshuttle to tunnel into another network to access a Web server on 10.10.0.9. sshuttle does its magic using the following iptables commands: iptables -t nat -N sshuttle-12300 iptables -t nat -F sshuttle-12300 iptables -t nat -I OUTPUT 1 -j sshuttle-12300 iptables -t nat -I PREROUTING 1 -j sshuttle-12300 iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 10.10.0.0/24 -p tcp --to-ports 12300 -m ttl ! --ttl 42 iptables -t nat -A sshuttle-12300 -j RETURN --dest 127.0.0.0/8 -p tcp Is it possible to forward traffic from OpenVPN clients over the sshuttle tunnel to the remote Web server? I'd ultimately like to be able to set up any complicated tunneling on the server, and have relatively "dumb" clients (iPad, etc.) be able to access the remote servers via OpenVPN. Below is a basic diagram of the scenario: [Edit: added output from the OpenVPN box] $ sudo iptables -nL -v -t nat Chain PREROUTING (policy ACCEPT 1498 packets, 252K bytes) pkts bytes target prot opt in out source destination 1512 253K sshuttle-12300 all -- * * 0.0.0.0/0 0.0.0.0/0 Chain INPUT (policy ACCEPT 322 packets, 58984 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 584 packets, 43241 bytes) pkts bytes target prot opt in out source destination 587 43421 sshuttle-12300 all -- * * 0.0.0.0/0 0.0.0.0/0 Chain POSTROUTING (policy ACCEPT 589 packets, 43595 bytes) pkts bytes target prot opt in out source destination 1175 76298 MASQUERADE all -- * eth0 10.8.0.0/24 0.0.0.0/0 Chain sshuttle-12300 (2 references) pkts bytes target prot opt in out source destination 17 1076 REDIRECT tcp -- * * 0.0.0.0/0 10.10.0.0/24 TTL match TTL != 42 redir ports 12300 0 0 RETURN tcp -- * * 0.0.0.0/0 127.0.0.0/8 $ sudo iptables -nL -v -t filter Chain INPUT (policy ACCEPT 97493 packets, 30M bytes) pkts bytes target prot opt in out source destination Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 131K 109M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 1370 89160 ACCEPT all -- * * 10.8.0.0/24 0.0.0.0/0 0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable [Edit 2: more OpenVPN server output] $ netstat -r Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface default 192.168.1.1 0.0.0.0 UG 0 0 0 eth0 10.8.0.0 10.8.0.2 255.255.255.0 UG 0 0 0 tun0 10.8.0.2 * 255.255.255.255 UH 0 0 0 tun0 192.168.1.0 * 255.255.255.0 U 0 0 0 eth0 [Edit 3: still more debug output] IP forwarding appears to be enabled correctly on the OpenVPN server: # find /proc/sys/net/ipv4/conf/ -name forwarding -ls -execdir cat {} \; 18926 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/all/forwarding 1 18954 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/default/forwarding 1 18978 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/eth0/forwarding 1 19003 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/lo/forwarding 1 19028 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/tun0/forwarding 1 Client routing table: $ netstat -r Routing tables Internet: Destination Gateway Flags Refs Use Netif Expire 0/1 10.8.0.5 UGSc 8 48 tun0 default 192.168.1.1 UGSc 2 1652 en1 10.8.0.1/32 10.8.0.5 UGSc 1 0 tun0 10.8.0.5 10.8.0.6 UHr 13 0 tun0 10.10.0/24 10.8.0.5 UGSc 0 0 tun0 <snip> Traceroute from 
client: $ traceroute 10.10.0.9 traceroute to 10.10.0.9 (10.10.0.9), 64 hops max, 52 byte packets 1 10.8.0.1 (10.8.0.1) 5.403 ms 1.173 ms 1.086 ms 2 192.168.1.1 (192.168.1.1) 4.693 ms 2.110 ms 1.990 ms 3 l100.my-verizon-garbage (client-ext-ip) 7.453 ms 7.089 ms 6.248 ms 4 * * * 5 10.10.0.9 (10.10.0.9) 14.915 ms !N * 6.620 ms !N
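    One hedged observation (an assumption about this setup, not a verified fix): sshuttle's REDIRECT rule in PREROUTING only helps if the proxy it redirects to is reachable from the incoming interface, and by default sshuttle's transparent proxy listens on 127.0.0.1 only. Packets arriving from VPN clients on tun0 would then be redirected to port 12300 on an address where nothing answers. If the installed sshuttle supports its -l/--listen option, binding to all interfaces may be the missing piece:
        # hypothetical invocation - substitute the real SSH gateway; the subnet
        # is the one from the question
        sshuttle -l 0.0.0.0:12300 -r user@remote-gateway 10.10.0.0/24
    The FORWARD chain above already accepts traffic from 10.8.0.0/24 and IP forwarding is enabled on every interface, so the listening address looks like the likeliest gap.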

    Read the article

  • Reconstructing the disk order in RAID 6 with 7 disks

    - by rkotulla
    A little background to this question first: I am running a RAID-6 within a QNAP TS869L external RAID/NAS system. I started with 5 disks of 3 TB each back in the day, and later added another 2 disks of 3 TB to the RAID. The QNAP internals handled the growing and re-syncing etc., and everything seemed to be perfectly fine. About 2 weeks ago, I had one of the disks (disk #5; disk #2 has gone bad in the meantime) fail, and somehow (I have no idea why) disks 1 and 2 also got kicked out of the array. I replaced disk #5, but the RAID didn't start working again. After some calls to QNAP technical support, they re-created the array (using mdadm --create --force --assume-clean ...), but the resulting array couldn't find a filesystem, and I was kindly referred to contact a data recovery company that I can't afford. After some digging through old log files, resetting the disk to factory default, etc., I found a few errors that were made during this re-create - I wish I still had some of the original metadata, but unfortunately I don't (I definitely learned that lesson). I'm currently at the point where I know the correct chunk size (64K) and metadata version (1.0; the factory default was 0.9, but from what I read 0.9 doesn't handle disks over 2 TB, and mine are 3 TB), and I can now find the ext4 filesystem that should be on the disks. The only variable left to determine is the right disk order! I started using the description found in answer #4 of "Recover RAID 5 data after created new array instead of re-using", but am a little confused about what the order should be for a proper RAID-6. RAID-5 is pretty well documented in a number of places, but RAID-6 much less so. Also, does the layout, i.e. the distribution of parity and data chunks across the disks, change after growing the array from 5 to 7 disks, or does the re-sync re-organize them the way a native 7-disk RAID-6 would have been laid out?
    Thanks! Some more mdadm output that might be helpful:
    mdadm version:
    [~] # mdadm --version
    mdadm - v2.6.3 - 20th August 2007
    mdadm details from one of the disks in the array:
    [~] # mdadm --examine /dev/sda3
    /dev/sda3:
    Magic : a92b4efc
    Version : 1.0
    Feature Map : 0x0
    Array UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa
    Name : 0
    Creation Time : Tue Jun 10 10:27:58 2014
    Raid Level : raid6
    Raid Devices : 7
    Used Dev Size : 5857395112 (2793.02 GiB 2998.99 GB)
    Array Size : 29286975360 (13965.12 GiB 14994.93 GB)
    Used Size : 5857395072 (2793.02 GiB 2998.99 GB)
    Super Offset : 5857395368 sectors
    State : clean
    Device UUID : 7c572d8f:20c12727:7e88c888:c2c357af
    Update Time : Tue Jun 10 13:01:06 2014
    Checksum : d275c82d - correct
    Events : 7036
    Chunk Size : 64K
    Array Slot : 0 (0, 1, failed, 3, failed, 5, 6)
    Array State : Uu_u_uu 2 failed
    mdadm details for the array in the current disk order (based on my best guess, reconstructed from old log files):
    [~] # mdadm --detail /dev/md0
    /dev/md0:
    Version : 01.00.03
    Creation Time : Tue Jun 10 10:27:58 2014
    Raid Level : raid6
    Array Size : 14643487680 (13965.12 GiB 14994.93 GB)
    Used Dev Size : 2928697536 (2793.02 GiB 2998.99 GB)
    Raid Devices : 7
    Total Devices : 5
    Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Tue Jun 10 13:01:06 2014
    State : clean, degraded
    Active Devices : 5
    Working Devices : 5
    Failed Devices : 0
    Spare Devices : 0
    Chunk Size : 64K
    Name : 0
    UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa
    Events : 7036
    Number Major Minor RaidDevice State
    0 8 3 0 active sync /dev/sda3
    1 8 19 1 active sync /dev/sdb3
    2 0 0 2 removed
    3 8 51 3 active sync /dev/sdd3
    4 0 0 4 removed
    5 8 99 5 active sync /dev/sdg3
    6 8 83 6 active sync /dev/sdf3
    Output from /proc/mdstat (md8, md9, and md13 are internally used RAIDs holding swap, etc.; the one I'm after is md0):
    [~] # more /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
    md0 : active raid6 sdf3[6] sdg3[5] sdd3[3] sdb3[1] sda3[0]
    14643487680 blocks super 1.0 level 6, 64k chunk, algorithm 2 [7/5] [UU_U_UU]
    md8 : active raid1 sdg2[2](S) sdf2[3](S) sdd2[4](S) sdc2[5](S) sdb2[6](S) sda2[1] sde2[0]
    530048 blocks [2/2] [UU]
    md13 : active raid1 sdg4[3] sdf4[4] sde4[5] sdd4[6] sdc4[2] sdb4[1] sda4[0]
    458880 blocks [8/7] [UUUUUUU_]
    bitmap: 21/57 pages [84KB], 4KB chunk
    md9 : active raid1 sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sda1[0] sdb1[1]
    530048 blocks [8/7] [UUUUUUU_]
    bitmap: 37/65 pages [148KB], 4KB chunk
    unused devices: <none>
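    Given the information above, the usual way forward is a scripted search over candidate orders. The sketch below is hedged: the device names, the two "missing" slots (2 and 4, per the Array State), the chunk size, and the metadata version are taken from the output above, but the loop itself is an assumption about how you'd drive it - and since mdadm --create rewrites superblocks, it should be run against device-mapper overlays or image copies of the disks, not the originals.
        #!/bin/sh
        # try one candidate order: create the degraded array without touching
        # data chunks, then probe read-only for a sane ext4 superblock
        try_order() {
            mdadm --stop /dev/md0
            mdadm --create /dev/md0 --assume-clean --run --metadata=1.0 \
                  --level=6 --chunk=64 --raid-devices=7 "$@"
            dumpe2fs -h /dev/md0 > /dev/null 2>&1 && echo "order looks good: $*"
        }
        try_order /dev/sda3 /dev/sdb3 missing /dev/sdd3 missing /dev/sdg3 /dev/sdf3
        try_order /dev/sdb3 /dev/sda3 missing /dev/sdd3 missing /dev/sdg3 /dev/sdf3
        # ...and so on for the other plausible permutations
    On the layout question: md's default RAID-6 layout is left-symmetric, and my understanding (worth verifying rather than asserting) is that a completed reshape leaves the stripes organized the way a native 7-disk array would be, so the grow itself shouldn't add another unknown.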

    Read the article

  • Vim: Context sensitive code completion for PHP

    - by eddy147
    Vim gives me too many options when I use code completion. When I'm in a class and type $class->, it gives me about a zillion options - not only from the class itself but also from PHP and every global ever created; in short, a mess. I only want the options from the class itself (or the parent class it extends), i.e. context- or scope-sensitive code completion, just like NetBeans, for example. How can I do that? My current configuration is this: I am using ctags, and created one ctags file for our (big) application in the root. This is the .ctags file I used to create the ctags file:
    -R
    -h ".php"
    --exclude=.svn
    --languages=+PHP,-JavaScript
    --tag-relative=yes
    --regex-PHP=/abstract\s+class\s+([^ ]+)/\1/c/
    --regex-PHP=/interface\s+([^ ]+)/\1/c/
    --regex-PHP=/(public\s+|static\s+|protected\s+|private\s+)\$([^ \t=]+)/\2/p/
    --regex-PHP=/const\s+([^ \t=]+)/\1/d/
    --regex-PHP=/final\s+(public\s+|static\s+|abstract\s+|protected\s+|private\s+)function\s+\&?\s*([^ (]+)/\2/f/
    --PHP-kinds=+cdf
    --fields=+iaS
    This is the .vimrc file:
    " autocomplete funcs and identifiers for languages
    autocmd FileType php set omnifunc=phpcomplete#CompletePHP
    autocmd FileType python set omnifunc=pythoncomplete#Complete
    autocmd FileType javascript set omnifunc=javascriptcomplete#CompleteJS
    autocmd FileType html set omnifunc=htmlcomplete#CompleteTags
    autocmd FileType css set omnifunc=csscomplete#CompleteCSS
    autocmd FileType xml set omnifunc=xmlcomplete#CompleteTags
    autocmd FileType php set omnifunc=phpcomplete#CompletePHP
    autocmd FileType c set omnifunc=ccomplete#Complete
    " exuberant ctags
    " the magic is the ';' at the end: it makes vim's tags file search go up from the current directory until it finds one.
    set tags=projectrootdir/tags;
    map <F8> :!ctags
    " TagList
    " :tag getUser => Jump to getUser method
    " :tn (or tnext) => go to next search result
    " :tp (or tprev) => go to previous search result
    " :ts (or tselect) => List the current tags
    " => Go back to last tag location
    " +Left click => Go to definition of a method
    " More info:
    " http://vimdoc.sourceforge.net/htmldoc/tagsrch.html (official documentation)
    " http://www.vim.org/tips/tip.php?tip_id=94 (a vim tip)
    let Tlist_Ctags_Cmd = "~/bin/ctags"
    let Tlist_WinWidth = 50
    map <F4> :TlistToggle<cr>
    " see http://vim.wikia.com/wiki/Make_Vim_completion_popup_menu_work_just_like_in_an_IDE
    " will change the 'completeopt' option so that Vim's popup menu doesn't select the first completion item, but rather just inserts the longest common text of all matches
    :set completeopt=longest,menuone
    " will change the behavior of the <Enter> key when the popup menu is visible. In that case the Enter key will simply select the highlighted menu item, just as <C-Y> does
    :inoremap <expr> <CR> pumvisible() ? "\<C-y>" : "\<C-g>u\<CR>"
    " inoremap <expr> <C-n> pumvisible() ? '<C-n>' :
    \ '<C-n><C-r>=pumvisible() ? "\<lt>Down>" : ""<CR>'
    inoremap <expr> <M-,> pumvisible() ? '<C-n>' :
    \ '<C-x><C-o><C-n><C-p><C-r>=pumvisible() ? "\<lt>Down>" : ""<CR>'
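    A small hedged tweak while hunting for a real fix: plain keyword completion (<C-n>/<C-p>) scans every buffer, included file, and the tags file, which is where much of the noise comes from, while omni completion (<C-x><C-o>) is the path phpcomplete actually drives. Trimming the 'complete' sources quiets <C-n> without touching omni results:
        " keep plain keyword completion out of included files and the tags file;
        " omni completion still sees the ctags data via phpcomplete
        set complete-=i
        set complete-=t
    True class-scope filtering has to happen inside the omnifunc itself, though; the stock phpcomplete of this Vim generation matches against everything in the tags file, so a smarter phpcomplete variant (or per-project tags files limited to your own code, which the .ctags above already leans toward) is likely the bigger lever.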

    Read the article

  • Network update solutions for a company of ~20 (5 local, 15 remote)?

    - by Margaret
    Hi all. This is probably going to be a bit up in the air, because we're still in the "reaching towards solutions" phase, but I figured I'd see what you guys had to say. Plus, I honestly know very little about systems and what is good and bad practice. My organisation has always more or less worked on the concept of local machines; since it primarily employed contractors who were working from home, each of those people was largely responsible for their own machine and backup procedures and the like. We're now expanding, though we're still reasonably small (we're up to about 20 staff members). Most people still work remotely, but we have a central office where about five people are working. But we're getting large enough that we're starting to think it would be a good idea to have a central file server and things like that - if someone gets hit by a bus, we want someone else to know where to look for the files to continue their work. A lot of the people who work for us remotely work on projects for other companies as well, so I don't want to force them to log in to our server whenever they're on a network. But I do want to make connecting as painless as possible, to improve utilisation. The other thing is that we're getting more people who would like to remote into the office server and do their work there. Our current remote connection application is an SSH install that allows people access to the network; the problem is, it's a black box to me, and I've never understood how to even connect to it (despite supposedly being the de facto sysadmin). Thus far I've been able to bounce questions about how to get it working to the guy who does know it well, but he's leaving the company soon. So we probably need a solution for this that I actually understand. We were knocking around the idea of implementing a VPN with some form of remote desktop, and someone mentioned that this was largely a matter of purchasing a router capable of it; I'm not sure of the truth of that statement. This is what we have in the office:
    Two shiny new i7 servers, each running Windows Server 2008. The precise eventual layout is still being debated a little, but the current suggestion is that one does the primary database crunching, while the other is a warm backup of the databases, along with running Reporting Services. They currently have SQL Server 2008 installed on them, which is being connected to via the 'sa' account. We're hoping to make each person use their own account (preferably one tied to the 'central' password we set up, so we can use Windows Authentication).
    An older server, running XP Pro, that we are currently using as a test bed for a project that requires access to older versions of software. This machine is also being used to take backups, but I'm thinking of moving that functionality elsewhere.
    A spare desktop from a guy who left the company (XP Pro). We're thinking of bumping up the hard disk space and using it as the magical file server that's going to solve everything.
    Assorted desktops, laptops, etc., at least one for each person in the office (a mix of Win XP and Win 7; occasionally a person who normally works remotely might drop in to the office and bring a laptop bearing Vista, but it's pretty rare). All are set up as local user accounts at the moment; I don't know if that's the best arrangement.
    Purchasing more hardware is not a big problem, but we figure we might as well make use of what we've got first.
    Is Active Directory a big magic wand that's going to solve all the world's problems? Is there some other arrangement we should be looking at instead?

    Read the article

  • How do I allow mysqld to use more than 24.9% of my cpu?

    - by Joseph Yancey
    I have a Web server running on RHEL that is running Apache and MySQL. It has a Quad core 3.2Ghz Xeon CPU and 8 Gigs of RAM Most of the time, we don't have any issues at all. Our web application is very database intensive. When our usage gets pretty heavy MySQL will peg out at using 24.9% of the cpu. Most of the time, it hangs around below 5%. I have speculated that it is only using one core of the CPU and it is pegging out that core but TOP shows me in the cpu column that mysqld changes cores even while the usage stays at 24.9%. When it does this MySQL gets painfully slow as it is queuing up queries Is there some magic configuration that will tell mysql to use more cpu when it needs to? Also, any other advice on my configuration would be helpful. We run two applications on this server. One that runs Innodb but doesn't get much usage (it has been replaced by the other app), and one that runs MyIsam and gets lots of use. Overall, our whole mysql data directory is something like 13Gigs if that matters at all. Here is my config: [root@ProductionLinux root]# cat /etc/my.cnf [mysqld] server-id = 71 log-bin = /var/log/mysql/mysql-bin.log binlog-do-db = oldapplication binlog-do-db = newapplication binlog-do-db = support thread_cache_size = 30 key_buffer_size = 256M table_cache = 256 sort_buffer_size = 4M read_buffer_size = 1M skip-name-resolve innodb_data_home_dir = /usr/local/mysql/data/ innodb_data_file_path = InnoDB:100M:autoextend set-variable = innodb_buffer_pool_size=70M set-variable = innodb_additional_mem_pool_size=10M set-variable = max_connections=500 innodb_log_group_home_dir = /usr/local/mysql/data innodb_log_arch_dir = /usr/local/mysql/data set-variable = innodb_log_file_size=20M set-variable = innodb_log_buffer_size=8M innodb_flush_log_at_trx_commit = 1 log-queries-not-using-indexes log-error = /var/log/mysql/mysql-error.log mysql show variables; +---------------------------------+-----------------------------------------------------------------------------+ | Variable_name | Value | +---------------------------------+-----------------------------------------------------------------------------+ | auto_increment_increment | 1 | | auto_increment_offset | 1 | | automatic_sp_privileges | ON | | back_log | 50 | | basedir | /usr/local/mysql-standard-5.0.18-linux-x86_64-glibc23/ | | binlog_cache_size | 32768 | | bulk_insert_buffer_size | 8388608 | | character_set_client | latin1 | | character_set_connection | latin1 | | character_set_database | latin1 | | character_set_results | latin1 | | character_set_server | latin1 | | character_set_system | utf8 | | character_sets_dir | /usr/local/mysql-standard-5.0.18-linux-x86_64-glibc23/share/mysql/charsets/ | | collation_connection | latin1_swedish_ci | | collation_database | latin1_swedish_ci | | collation_server | latin1_swedish_ci | | completion_type | 0 | | concurrent_insert | 1 | | connect_timeout | 5 | | datadir | /usr/local/mysql/data/ | | date_format | %Y-%m-%d | | datetime_format | %Y-%m-%d %H:%i:%s | | default_week_format | 0 | | delay_key_write | ON | | delayed_insert_limit | 100 | | delayed_insert_timeout | 300 | | delayed_queue_size | 1000 | | div_precision_increment | 4 | | engine_condition_pushdown | OFF | | expire_logs_days | 0 | | flush | OFF | | flush_time | 0 | | | ft_max_word_len | 84 | | ft_min_word_len | 4 | | ft_query_expansion_limit | 20 | | ft_stopword_file | (built-in) | | group_concat_max_len | 1024 | | have_archive | YES | | have_bdb | NO | | have_blackhole_engine | NO | | have_compress | YES | | have_crypt | YES | 
| have_csv | NO | | have_example_engine | NO | | have_federated_engine | NO | | have_geometry | YES | | have_innodb | YES | | have_isam | NO | | have_ndbcluster | NO | | have_openssl | NO | | have_query_cache | YES | | have_raid | NO | | have_rtree_keys | YES | | have_symlink | YES | | init_connect | | | init_file | | | init_slave | | | innodb_additional_mem_pool_size | 10485760 | | innodb_autoextend_increment | 8 | | innodb_buffer_pool_awe_mem_mb | 0 | | innodb_buffer_pool_size | 73400320 | | innodb_checksums | ON | | innodb_commit_concurrency | 0 | | innodb_concurrency_tickets | 500 | | innodb_data_file_path | InnoDB:100M:autoextend | | innodb_data_home_dir | /usr/local/mysql/data/ | | innodb_doublewrite | ON | | innodb_fast_shutdown | 1 | | innodb_file_io_threads | 4 | | innodb_file_per_table | OFF | | innodb_flush_log_at_trx_commit | 1 | | innodb_flush_method | | | innodb_force_recovery | 0 | | innodb_lock_wait_timeout | 50 | | innodb_locks_unsafe_for_binlog | OFF | | innodb_log_arch_dir | /usr/local/mysql/data | | innodb_log_archive | OFF | | innodb_log_buffer_size | 8388608 | | innodb_log_file_size | 20971520 | | innodb_log_files_in_group | 2 | | innodb_log_group_home_dir | /usr/local/mysql/data | | innodb_max_dirty_pages_pct | 90 | | innodb_max_purge_lag | 0 | | innodb_mirrored_log_groups | 1 | | innodb_open_files | 300 | | innodb_support_xa | ON | | innodb_sync_spin_loops | 20 | | innodb_table_locks | ON | | innodb_thread_concurrency | 20 | | innodb_thread_sleep_delay | 10000 | | interactive_timeout | 28800 | | join_buffer_size | 131072 | | key_buffer_size | 268435456 | | key_cache_age_threshold | 300 | | key_cache_block_size | 1024 | | key_cache_division_limit | 100 | | language | /usr/local/mysql-standard-5.0.18-linux-x86_64-glibc23/share/mysql/english/ | | large_files_support | ON | | large_page_size | 0 | | large_pages | OFF | | license | GPL | | local_infile | ON | | locked_in_memory | OFF | | log | OFF | | log_bin | ON | | log_bin_trust_function_creators | OFF | | log_error | /var/log/mysql/mysql-error.log | | log_slave_updates | OFF | | log_slow_queries | OFF | | log_warnings | 1 | | long_query_time | 10 | | low_priority_updates | OFF | | lower_case_file_system | OFF | | lower_case_table_names | 0 | | max_allowed_packet | 1048576 | | max_binlog_cache_size | 18446744073709551615 | | max_binlog_size | 1073741824 | | max_connect_errors | 10 | | max_connections | 500 | | max_delayed_threads | 20 | | max_error_count | 64 | | max_heap_table_size | 16777216 | | max_insert_delayed_threads | 20 | | max_join_size | 18446744073709551615 | | max_length_for_sort_data | 1024 | | max_relay_log_size | 0 | | max_seeks_for_key | 18446744073709551615 | | max_sort_length | 1024 | | max_sp_recursion_depth | 0 | | max_tmp_tables | 32 | | max_user_connections | 0 | | max_write_lock_count | 18446744073709551615 | | multi_range_count | 256 | | myisam_data_pointer_size | 6 | | myisam_max_sort_file_size | 9223372036854775807 | | myisam_recover_options | OFF | | myisam_repair_threads | 1 | | myisam_sort_buffer_size | 8388608 | | myisam_stats_method | nulls_unequal | | net_buffer_length | 16384 | | net_read_timeout | 30 | | net_retry_count | 10 | | net_write_timeout | 60 | | new | OFF | | old_passwords | OFF | | open_files_limit | 2510 | | optimizer_prune_level | 1 | | optimizer_search_depth | 62 | | pid_file | /usr/local/mysql/data/ProductionLinux.pid | | port | 3306 | | preload_buffer_size | 32768 | | protocol_version | 10 | | query_alloc_block_size | 8192 | | query_cache_limit | 1048576 | | 
query_cache_min_res_unit | 4096 | | query_cache_size | 0 | | query_cache_type | ON | | query_cache_wlock_invalidate | OFF | | query_prealloc_size | 8192 | | range_alloc_block_size | 2048 | | read_buffer_size | 1044480 | | read_only | OFF | | read_rnd_buffer_size | 262144 | | relay_log_purge | ON | | relay_log_space_limit | 0 | | rpl_recovery_rank | 0 | | secure_auth | OFF | | server_id | 71 | | skip_external_locking | ON | | skip_networking | OFF | | skip_show_database | OFF | | slave_compressed_protocol | OFF | | slave_load_tmpdir | /tmp/ | | slave_net_timeout | 3600 | | slave_skip_errors | OFF | | slave_transaction_retries | 10 | | slow_launch_time | 2 | | socket | /tmp/mysql.sock | | sort_buffer_size | 4194296 | | sql_mode | | | sql_notes | ON | | sql_warnings | ON | | storage_engine | MyISAM | | sync_binlog | 0 | | sync_frm | ON | | sync_replication | 0 | | sync_replication_slave_id | 0 | | sync_replication_timeout | 10 | | system_time_zone | CST | | table_cache | 256 | | table_lock_wait_timeout | 50 | | table_type | MyISAM | | thread_cache_size | 30 | | thread_stack | 262144 | | time_format | %H:%i:%s | | time_zone | SYSTEM | | timed_mutexes | OFF | | tmp_table_size | 33554432 | | tmpdir | | | transaction_alloc_block_size | 8192 | | transaction_prealloc_size | 4096 | | tx_isolation | REPEATABLE-READ | | updatable_views_with_limit | YES | | version | 5.0.18-standard-log | | version_comment | MySQL Community Edition - Standard (GPL) | | version_compile_machine | x86_64 | | version_compile_os | unknown-linux-gnu | | wait_timeout | 28800 | +---------------------------------+-----------------------------------------------------------------------------+ 210 rows in set (0.00 sec)
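    A note that may reframe the question (general MySQL behaviour, not something visible in the output above): a single query executes on a single thread, so one busy connection on a quad-core box tops out at almost exactly 25% in top, which matches the 24.9% ceiling described. No configuration makes one query span cores in MySQL 5.0; the practical levers are finding the expensive queries and reducing per-query work. A hedged my.cnf sketch (variable names valid for 5.0; the sizes are starting points to tune, not recommendations):
        [mysqld]
        # the variables dump shows query_cache_size = 0 even though the cache
        # is compiled in; a modest cache can absorb repeated read-mostly queries
        query_cache_type = 1
        query_cache_size = 64M
        # surface the offenders instead of guessing (at the moment only queries
        # not using indexes are logged)
        log_slow_queries = /var/log/mysql/mysql-slow.log
        long_query_time = 2
        # more room for in-memory temp tables on a read-heavy MyISAM app
        tmp_table_size = 64M
        max_heap_table_size = 64M
    The queueing you see while one core is pegged also fits MyISAM's table-level locking - a single long write blocks every other query on that table - so the slow log will likely tell you more than any buffer size.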

    Read the article

< Previous Page | 76 77 78 79 80 81 82 83 84 85 86 87  | Next Page >