Search Results

Search found 4392 results on 176 pages for 'bind'.


  • SSH from external network refused

    - by wulfsdad
    I've installed openssh-server on my home computer (running Lubuntu 12.04.1) in order to connect to it from school. This is how I've set up the sshd_config file:

        # Package generated configuration file
        # See the sshd_config(5) manpage for details

        # What ports, IPs and protocols we listen for
        #Port 22
        Port 2222
        # Use these options to restrict which interfaces/protocols sshd will bind to
        #ListenAddress ::
        #ListenAddress 0.0.0.0
        Protocol 2
        # HostKeys for protocol version 2
        HostKey /etc/ssh/ssh_host_rsa_key
        HostKey /etc/ssh/ssh_host_dsa_key
        HostKey /etc/ssh/ssh_host_ecdsa_key
        #Privilege Separation is turned on for security
        UsePrivilegeSeparation yes

        # Lifetime and size of ephemeral version 1 server key
        KeyRegenerationInterval 3600
        ServerKeyBits 768

        # Logging
        SyslogFacility AUTH
        #LogLevel INFO
        LogLevel VERBOSE

        # Authentication:
        LoginGraceTime 120
        PermitRootLogin no
        StrictModes yes

        RSAAuthentication yes
        PubkeyAuthentication yes
        #AuthorizedKeysFile %h/.ssh/authorized_keys

        # Don't read the user's ~/.rhosts and ~/.shosts files
        IgnoreRhosts yes
        # For this to work you will also need host keys in /etc/ssh_known_hosts
        RhostsRSAAuthentication no
        # similar for protocol version 2
        HostbasedAuthentication no
        # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
        #IgnoreUserKnownHosts yes

        # To enable empty passwords, change to yes (NOT RECOMMENDED)
        PermitEmptyPasswords no

        # Change to yes to enable challenge-response passwords (beware issues with
        # some PAM modules and threads)
        ChallengeResponseAuthentication no

        # Change to no to disable tunnelled clear text passwords
        #PasswordAuthentication yes

        # Kerberos options
        #KerberosAuthentication no
        #KerberosGetAFSToken no
        #KerberosOrLocalPasswd yes
        #KerberosTicketCleanup yes

        # GSSAPI options
        #GSSAPIAuthentication no
        #GSSAPICleanupCredentials yes

        X11Forwarding no
        X11DisplayOffset 10
        PrintMotd no
        PrintLastLog yes
        TCPKeepAlive yes
        #UseLogin no

        #MaxStartups 10:30:60
        #Banner /etc/issue.net
        Banner /etc/sshbanner.net

        # Allow client to pass locale environment variables
        AcceptEnv LANG LC_*

        Subsystem sftp /usr/lib/openssh/sftp-server

        # Set this to 'yes' to enable PAM authentication, account processing,
        # and session processing. If this is enabled, PAM authentication will
        # be allowed through the ChallengeResponseAuthentication and
        # PasswordAuthentication. Depending on your PAM configuration,
        # PAM authentication via ChallengeResponseAuthentication may bypass
        # the setting of "PermitRootLogin without-password".
        # If you just want the PAM account and session checks to run without
        # PAM authentication, then enable this but set PasswordAuthentication
        # and ChallengeResponseAuthentication to 'no'.
        UsePAM yes

        #specify which accounts can use SSH
        AllowUsers onlyme

    I've also configured my router's port forwarding table to include:

        LAN Ports: 2222-2222
        Protocol: TCP
        LAN IP Address: the "IP Address" displayed by viewing "connection information" from the right-click menu of the system tray
        Remote Ports [optional]: n/a
        Remote IP Address [optional]: n/a

    I've tried various other configurations as well, using primary and secondary DNS, and also specifying remote ports 2222-2222. I've also tried TCP/UDP (actually two rules, because my router requires separate rules for each protocol). With any router port forwarding configuration, I am able to log in with:

        ssh -p 2222 -v localhost

    But when I try to log in from school using:

        ssh -p 2222 onlyme@IP_ADDRESS

    I get a "No route to host" message. Same thing when I use the "Broadcast Address" or "Default Route/Primary DNS". When I use the "subnet mask", ssh just hangs. However, when I use the "secondary DNS" I receive a "Connection refused" message. :^( Someone please help me figure out how to make this work.
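    A "No route to host" from outside usually implicates the router or ISP rather than sshd. One quick way to confirm whether the forwarded port is reachable at all (a diagnostic sketch; the target must be the router's public WAN address, not any of the LAN-side addresses listed above, and the lookup service shown is just one example):

        # from home: find the router's WAN address
        curl ifconfig.me

        # from school: probe the forwarded port without relying on ssh's error reporting
        nmap -Pn -p 2222 WAN_IP_HERE

    If nmap reports the port as filtered, the forwarding rule or an ISP-level block is the likely problem; if it reports open, attention shifts back to sshd.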

    Read the article

  • Speed up your Silverlight Debugging for large projects

    - by Aligned
    I'm working on a 5+ year old ASP.NET project that has 74+ projects, and we've been adding new Silverlight applications to run in the ASP.NET page islands. My machine at work isn't the most powerful, so I find myself waiting a lot for the whole thing to build. I'm using Visual Studio 2010, so that takes up a lot of resources as well. This causes me to get distracted and I start looking at the news... I need to combat that more :-). I can't get a new machine (that's up to someone else), so I've found a few tricks to help.

    1. Only build the Silverlight project you're working with. This builds all referenced projects (you can see these by right-clicking and choosing Project Dependencies) and packages a new XAP (you can see all the actions in your build output window). Then refresh the page hosting the Silverlight app and it's up-to-date.

    2. I was working with a co-worker (thanks Jordan) who was using the Debug -> Attach to Process window. In the "Attach to:" row there is a "Select..." button. In the dialog, click "Debug these code types:" and select Silverlight. Hit OK. Then all you need to do is find your process (you might need to click the refresh button). I'm usually debugging in IE, so I select the first one and push "i" on the keyboard. That jumps me to the open IE windows. Find the one whose type is Silverlight, x86; it is usually directly above one of type x86 that has the page title. Click Attach and watch your output window spit out messages about debug symbols loading and your breakpoints becoming enabled (if this doesn't happen you chose the wrong process; hit Stop and try again). Now you can debug the client code as normal; server code requires a full F5 or attaching to the correct process.

    To improve this even further, bind the menu item to a keystroke. I chose Ctrl+X, X (Tools -> Options -> Keyboard, search for Debug.AttachToProcess, set the shortcut key globally and assign). Most of the time I build the project, hit Ctrl+X, X, then "i", then Enter, and I'm debugging. The process I want is usually the first IE in the list.
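    Building just the one project can also be scripted, which avoids touching the 74-project solution at all; a minimal sketch (project path and configuration name are hypothetical) from a Visual Studio command prompt:

        msbuild MyApp.Silverlight\MyApp.Silverlight.csproj /t:Build /p:Configuration=Debug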

    Read the article

  • Does OO, TDD, and Refactoring to Smaller Functions affect Speed of Code?

    - by Dennis
    In the computer science field, I have noticed a notable shift in thinking when it comes to programming. The advice as it stands now is:

    - write smaller, more testable code
    - refactor existing code into smaller and smaller chunks of code until most of your methods/functions are just a few lines long
    - write functions that only do one thing (which makes them smaller again)

    This is a change compared to the "old" or "bad" code practices where you had methods spanning 2500 lines and big classes doing everything. My question is this: when it all comes down to machine code, to 1s and 0s, to assembly instructions, should I be at all concerned that my class-separated code with a variety of small-to-tiny functions generates too much extra overhead? While I am not exactly familiar with how OO code and function calls are handled in ASM in the end, I do have some idea. I assume that each extra function call, object call, or include call (in some languages) generates an extra set of instructions, thereby increasing the code's volume and adding various overhead without adding actual "useful" code. I also imagine that good optimizations can be done to the ASM before it is actually run on the hardware, but that optimization can only do so much too. Hence my question: how much overhead (in space and speed) does well-separated code (split up across hundreds of files, classes, and methods) actually introduce, compared to having "one big method that contains everything"?

    UPDATE for clarity: I am assuming that adding more and more functions and more and more objects and classes to the code will result in more and more parameter passing between smaller code pieces. It was said somewhere (quote TBD) that up to 70% of all code is made up of ASM's MOV instruction, loading CPU registers with the proper variables rather than doing actual computation. In my case, you load up the CPU's time with PUSH/POP instructions to provide linkage and parameter passing between various pieces of code. The smaller you make your pieces of code, the more "linkage" overhead is required. I am concerned that this linkage adds to software bloat and slow-down, and I am wondering whether I should be concerned about this, and how much, if at all, because current and future generations of programmers who are building software for the next century will have to live with and consume software built using these practices.

    UPDATE: Multiple files. I am writing new code now that is slowly replacing old code. In particular I've noted that one of the old classes was a ~3000 line file (as mentioned earlier). Now it is becoming a set of 15-20 files located across various directories, including test files and not including the PHP framework I am using to bind some things together. More files are coming as well. When it comes to disk I/O, loading multiple files is slower than loading one large file. Of course not all files are loaded; they are loaded as needed, and disk caching and memory caching options exist, and yet I still believe that loading multiple files takes more processing than loading a single file into memory. I am adding that to my concern.
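    For intuition, a tiny benchmark along these lines (a sketch; exact numbers depend on hardware and runtime, and a real measurement needs warm-up and repetition) usually shows the JIT inlining small methods, so the "many small functions" version ends up costing about the same as the hand-inlined one:

        using System;
        using System.Diagnostics;

        class CallOverheadDemo
        {
            // A tiny "does one thing" method: a typical candidate for JIT inlining.
            static int AddOne(int x) { return x + 1; }

            static void Main()
            {
                const int n = 100000000;

                var sw = Stopwatch.StartNew();
                int a = 0;
                for (int i = 0; i < n; i++) a = AddOne(a);   // many small calls
                sw.Stop();
                Console.WriteLine("small method:  {0} ms", sw.ElapsedMilliseconds);

                sw = Stopwatch.StartNew();
                int b = 0;
                for (int i = 0; i < n; i++) b = b + 1;       // "one big method" version
                sw.Stop();
                Console.WriteLine("inlined body:  {0} ms", sw.ElapsedMilliseconds);
            }
        }

    When the two timings match, it is because the optimizer has already removed the call/PUSH/POP linkage the question worries about.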

    Read the article

  • MVVM and Animations in Silverlight

    - by Aligned
    I wanted to spin an icon to show progress to my user while some content was downloading. I'm using MVVM (aren't you?) and made a satisfactory Storyboard to spin the icon. However, it took longer than expected to trigger that animation from my ViewModel's property. I used a combination of the GoToState action and the DataTrigger from the Microsoft.Expression.Interactions dll, as described here. Then I had problems getting it to start, until I found this approach that saved the day. The DataTrigger didn't fire right away because "the StateTarget property of the GotoStateAction is null at the time the DataTrigger fires." Here's my XAML; hopefully you can fill in the rest.

        <Image x:Name="StatusIcon"
               AutomationProperties.AutomationId="StatusIcon"
               Width="16" Height="16" Stretch="Fill" Source="inProgress.png"
               ToolTipService.ToolTip="{Binding StatusTooltip}">
            <i:Interaction.Triggers>
                <utilitiesBehaviors:DataTriggerWhichFiresOnLoad Value="True"
                        Binding="{Binding IsDownloading, Mode=OneWay, TargetNullValue=True}">
                    <ei:GoToStateAction StateName="Downloading" />
                </utilitiesBehaviors:DataTriggerWhichFiresOnLoad>
                <utilitiesBehaviors:DataTriggerWhichFiresOnLoad Value="False"
                        Binding="{Binding IsDownloading, Mode=OneWay, TargetNullValue=True}">
                    <ei:GoToStateAction StateName="Complete"/>
                </utilitiesBehaviors:DataTriggerWhichFiresOnLoad>
            </i:Interaction.Triggers>
            <Image.Projection>
                <PlaneProjection/>
            </Image.Projection>
            <VisualStateManager.VisualStateGroups>
                <VisualStateGroup x:Name="VisualStateGroup">
                    <VisualStateGroup.Transitions>
                        <VisualTransition GeneratedDuration="0" To="Downloading">
                            <VisualTransition.GeneratedEasingFunction>
                                <QuadraticEase EasingMode="EaseInOut"/>
                            </VisualTransition.GeneratedEasingFunction>
                            <Storyboard RepeatBehavior="Forever">
                                <DoubleAnimationUsingKeyFrames
                                        Storyboard.TargetProperty="(UIElement.Projection).(PlaneProjection.RotationZ)"
                                        Storyboard.TargetName="StatusIcon">
                                    <EasingDoubleKeyFrame KeyTime="0:0:1.5" Value="-360"/>
                                    <EasingDoubleKeyFrame KeyTime="0:0:2" Value="-360"/>
                                </DoubleAnimationUsingKeyFrames>
                            </Storyboard>
                        </VisualTransition>
                        <VisualTransition From="Downloading" GeneratedDuration="0"/>
                    </VisualStateGroup.Transitions>
                    <VisualState x:Name="Downloading"/>
                    <VisualState x:Name="Complete"/>
                </VisualStateGroup>
            </VisualStateManager.VisualStateGroups>
        </Image>

    MVVMAnimations.zip
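    The ViewModel side is not shown in the post; a minimal sketch of the property the triggers bind to (class and field names are assumptions inferred from the XAML) would look like:

        using System.ComponentModel;

        public class DownloadViewModel : INotifyPropertyChanged
        {
            private bool isDownloading;

            // The XAML DataTriggers react to changes of this property.
            public bool IsDownloading
            {
                get { return this.isDownloading; }
                set
                {
                    if (this.isDownloading != value)
                    {
                        this.isDownloading = value;
                        this.OnPropertyChanged("IsDownloading");
                    }
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            protected void OnPropertyChanged(string name)
            {
                var handler = this.PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }
        }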

    Read the article

  • Why can't Java/C# implement RAII?

    - by mike30
    Question: Why can't Java/C# implement RAII?

    Clarification: I am aware the garbage collector is not deterministic. So with the current language features it is not possible for an object's Dispose() method to be called automatically on scope exit. But could such a deterministic feature be added?

    My understanding: I feel an implementation of RAII must satisfy two requirements:

    1. The lifetime of a resource must be bound to a scope.
    2. Implicit. The freeing of the resource must happen without an explicit statement by the programmer, analogous to a garbage collector freeing memory without an explicit statement. The "implicitness" only needs to occur at the point of use of the class. The class library creator must of course explicitly implement a destructor or Dispose() method.

    Java/C# satisfy point 1. In C# a resource implementing IDisposable can be bound to a "using" scope:

        void test()
        {
            using (Resource r = new Resource())
            {
                r.foo();
            } // resource released on scope exit
        }

    This does not satisfy point 2. The programmer must explicitly tie the object to a special "using" scope. Programmers can (and do) forget to explicitly tie the resource to a scope, creating a leak. In fact the "using" blocks are converted to try-finally-dispose() code by the compiler. It has the same explicit nature as the try-finally-dispose() pattern. Without an implicit release, the hook to a scope is syntactic sugar.

        void test()
        {
            // Programmer forgot (or was not aware of the need) to explicitly
            // bind Resource to a scope.
            Resource r = new Resource();
            r.foo();
        } // resource leaked!!!

    I think it is worth creating a language feature in Java/C# allowing special objects that are hooked to the stack via a smart-pointer. The feature would allow you to flag a class as scope-bound, so that it is always created with a hook to the stack. There could be options for different types of smart pointers.

        class Resource : ScopeBound
        {
            /* class details */

            void Dispose()
            {
                // free resource
            }
        }

        void test()
        {
            // class Resource was flagged as ScopeBound so the tie to the stack is implicit.
            Resource r = new Resource(); // r is a smart-pointer
            r.foo();
        } // resource released on scope exit.

    I think implicitness is "worth it", just as the implicitness of garbage collection is "worth it". Explicit using blocks are refreshing on the eyes, but offer no semantic advantage over try-finally-dispose(). Is it impractical to implement such a feature in the Java/C# languages? Could it be introduced without breaking old code?
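    For reference, the lowering the post alludes to: the C# compiler turns a using block into roughly the following try/finally (a sketch of the documented expansion for a reference type):

        void test()
        {
            Resource r = new Resource();
            try
            {
                r.foo();
            }
            finally
            {
                // the compiler-generated release; explicit in the IL even
                // though the source never mentions Dispose()
                if (r != null) ((IDisposable)r).Dispose();
            }
        }

    This is why the post calls the using form syntactic sugar: the release is still an explicit statement, merely one the compiler writes for you at the site where you remembered to ask for it.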

    Read the article

  • What are the best practices to use NHibernate sessions in asp.net (mvc/web api)?

    - by mrt181
    I have the following setup in my project:

        public class WebApiApplication : System.Web.HttpApplication
        {
            public static ISessionFactory SessionFactory { get; private set; }

            public WebApiApplication()
            {
                this.BeginRequest += delegate
                {
                    var session = SessionFactory.OpenSession();
                    CurrentSessionContext.Bind(session);
                };
                this.EndRequest += delegate
                {
                    var session = SessionFactory.GetCurrentSession();
                    if (session == null)
                    {
                        return;
                    }
                    session = CurrentSessionContext.Unbind(SessionFactory);
                    session.Dispose();
                };
            }

            protected void Application_Start()
            {
                AreaRegistration.RegisterAllAreas();
                FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
                RouteConfig.RegisterRoutes(RouteTable.Routes);
                BundleConfig.RegisterBundles(BundleTable.Bundles);
                var assembly = Assembly.GetCallingAssembly();
                SessionFactory = new NHibernateHelper(assembly, Server.MapPath("/")).SessionFactory;
            }
        }

        public class PositionsController : ApiController
        {
            private readonly ISession session;

            public PositionsController()
            {
                this.session = WebApiApplication.SessionFactory.GetCurrentSession();
            }

            public IEnumerable<Position> Get()
            {
                var result = this.session.Query<Position>().Cacheable().ToList();
                if (!result.Any())
                {
                    throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.NotFound));
                }
                return result;
            }

            public HttpResponseMessage Post(PositionDataTransfer dto)
            {
                // TODO: Map dto to model
                IEnumerable<Position> positions = null;
                using (var transaction = this.session.BeginTransaction())
                {
                    this.session.SaveOrUpdate(positions);
                    try
                    {
                        transaction.Commit();
                    }
                    catch (StaleObjectStateException)
                    {
                        if (transaction != null && transaction.IsActive)
                        {
                            transaction.Rollback();
                        }
                    }
                }
                var response = this.Request.CreateResponse(HttpStatusCode.Created, dto);
                response.Headers.Location = new Uri(this.Request.RequestUri.AbsoluteUri + "/" + dto.Name);
                return response;
            }

            public void Put(int id, string value)
            {
                // TODO: Implement PUT
                throw new NotImplementedException();
            }

            public void Delete(int id)
            {
                // TODO: Implement DELETE
                throw new NotImplementedException();
            }
        }

    I am not sure if this is the recommended way to get the session into the controller. I was thinking about using DI, but I am not sure how to inject the session that is opened and bound in the BeginRequest delegate into the controller's constructor, to end up with this:

        public PositionsController(ISession session)
        {
            this.session = session;
        }

    Question: What is the recommended way to use NHibernate sessions in ASP.NET MVC / Web API?
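    One common pattern (a sketch, assuming Ninject with the Ninject.Web.Common request-scope extension; any container with per-request lifetimes works the same way) is to register the factory as a singleton and the session itself per request, so controllers can simply declare an ISession constructor parameter:

        // In the Ninject kernel configuration; NHibernateHelper and its
        // constructor arguments are taken from the post and assumed available here.
        kernel.Bind<ISessionFactory>()
              .ToMethod(ctx => new NHibernateHelper(assembly, basePath).SessionFactory)
              .InSingletonScope();

        kernel.Bind<ISession>()
              .ToMethod(ctx => ctx.Kernel.Get<ISessionFactory>().OpenSession())
              .InRequestScope();   // disposed automatically at end of request

    With this in place the BeginRequest/EndRequest delegates become unnecessary; the container opens and disposes the session around each request, and Web API resolves PositionsController through its dependency resolver.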

    Read the article

  • HAProxy reqrep remove URI on backend request

    - by Jim
    A real quick question regarding HAProxy reqrep. I am trying to rewrite/replace the request that gets sent to the backend. I have the following example domain and URIs:

        http://domain/web1
        http://domain/web2

    I want web1 to go to backend webfarm1, and web2 to go to webfarm2. Currently this does happen. However, I want to strip off the web1 or web2 URI prefix when the request is sent to the backend. Here is my haproxy.cfg:

        frontend webVIP_80
            mode http
            bind :80
            # acl routing to backend
            acl web1_path path_beg /web1
            acl web2_path path_beg /web2
            # which backend
            use_backend webfarm1 if web1_path
            use_backend webfarm2 if web2_path
            default_backend webfarm1

        backend webfarm1
            mode http
            reqrep ^([^\ ]*)\ /web1/(.*)     \1\ /\2
            balance roundrobin
            option httpchk HEAD /index HTTP/1.1\r\nHost:\ example.com
            server webtest1 10.0.0.10:80 weight 5 check slowstart 5000ms
            server webtest2 10.0.0.20:80 weight 5 check slowstart 5000ms

        backend webfarm2
            mode http
            reqrep ^([^\ ]*)\ /web2/(.*)     \1\ /\2
            balance roundrobin
            option httpchk HEAD /index HTTP/1.1\r\nHost:\ example.com
            server webtest1-farm2 10.0.0.110:80 weight 5 check slowstart 5000ms
            server webtest2-farm2 10.0.0.120:80 weight 5 check slowstart 5000ms

    If I go to http://domain/web1 or http://domain/web2, I can see in the error logs on a server in each backend that the request is still for the resource /web1 or /web2 respectively. Therefore I believe there is something wrong with my regular expression, even though I copied and pasted it from the documentation: http://code.google.com/p/haproxy-docs/wiki/reqrep

    Summary: I'm trying to route traffic based on URI, but I want to strip the URI prefix on the backend side. Go to http://domain/web1, and the backend request to webfarm1 should be for /. Thank you! -Jim
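    One thing worth checking (an observation, not verified against this exact setup): the reqrep pattern only matches /web1/ with a trailing slash, so a request for exactly /web1 sails through unchanged, which would produce exactly the log entries described. A variant that strips both forms might look like:

        reqrep ^([^\ ]*)\ /web1[/]?(.*)     \1\ /\2
        reqrep ^([^\ ]*)\ /web2[/]?(.*)     \1\ /\2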

    Read the article

  • Postgres - could not create any TCP/IP sockets

    - by Jacka
    I'm running a rails app in development with postgresql 9.3. When I tried to start the passenger server today, I got:

        PG::ConnectionBad - could not connect to server: Connection refused
            Is the server running on host "localhost" (217.74.65.145)
            and accepting TCP/IP connections on port 5432?

    No big deal, I thought; that has happened before, and restarting postgres always solved the problem. So I ran sudo service postgresql restart and got:

        * Restarting PostgreSQL 9.3 database server
        * The PostgreSQL server failed to start. Please check the log output:
        2014-06-11 10:32:41 CEST LOG:  could not bind IPv4 socket: Cannot assign requested address
        2014-06-11 10:32:41 CEST HINT:  Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
        2014-06-11 10:32:41 CEST WARNING:  could not create listen socket for "localhost"
        2014-06-11 10:32:41 CEST FATAL:  could not create any TCP/IP sockets
        ...fail!

    My postgresql.conf points to the defaults: localhost and port 5432. I tried changing the port, but the error message is the same (except for the port change). Both ps aux | grep postgresql and ps aux | grep postmaster return nothing.

    EDIT: In postgresql.conf I changed listen_addresses to 127.0.0.1 instead of localhost and it did the trick; the server restarted. I also had to edit my applications' db config to point to 127.0.0.1 instead of localhost. However, the question now is: why is localhost considered to be 217.74.65.145 and not 127.0.0.1? This is my /etc/hosts:

        127.0.0.1 local
        127.0.1.1 jacek-X501A1
        127.0.0.1 something.name.non.example.com
        127.0.0.1 company.something.name.non.example.com
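    Worth flagging (an observation from the pasted file, not a confirmed diagnosis): the hosts file maps local but has no entry for localhost itself, so the resolver falls through to DNS, which apparently answers 217.74.65.145 (e.g. a wildcard or search-domain record). A minimal fix sketch restores the conventional first line:

        # /etc/hosts
        127.0.0.1 localhost local
        127.0.1.1 jacek-X501A1

    After that, listen_addresses = 'localhost' should resolve to 127.0.0.1 again and the original config can be restored.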

    Read the article

  • Plesk SSL Certificate (Default cert when SSL enabled, CORRECT cert when SSL is disabled)

    - by hztetra
    I'm running Plesk 8.6.0. I have an SSL cert installed through Plesk's admin interface, but I have a bit of an issue: when I enable SSL for the site and select my cert, then restart httpd, Plesk defaults to using my self-signed default certificate. Conversely, when I disable SSL support for the domain, all of a sudden Plesk uses my new SSL certificate. Unfortunately, when I try to view any folder on the site (mydomain.tld/folder) I'm simply met with a 404 (with files placed both in httpdocs and httpsdocs). I switch SSL support back on, Plesk defaults back to the self-signed cert, and I can then view the folders that were previously inaccessible. Any ideas?

    One further note: I tried following http://kb.parallels.com/en/939. Once I tried to restart httpd with the edited ssl.conf file, I received an "httpd could not start" error. I restored the original ssl.conf file and still received the error, so as of now I am running without an ssl.conf file. The following is the error I receive when I attempt to reintroduce ssl.conf:

        Starting httpd: [Mon Aug 23 15:45:40 2010] [warn] module ssl_module is already loaded, skipping
        (98)Address already in use: make_sock: could not bind to address 0.0.0.0:443
        no listening sockets available, shutting down
        Unable to open logs
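    Two quick checks that may help narrow this down (generic Apache/SSL diagnostics, not Plesk-specific; paths assume a RHEL-style layout):

        # see which certificate is actually being served on 443
        openssl s_client -connect mydomain.tld:443 < /dev/null 2> /dev/null \
            | openssl x509 -noout -subject -issuer

        # "(98)Address already in use" on 443 usually means the port is declared
        # twice; look for duplicate Listen/VirtualHost entries
        grep -ri "443" /etc/httpd/conf /etc/httpd/conf.d | grep -i listen

    If 443 is listed both in the restored ssl.conf and in a Plesk-generated include, the module-already-loaded warning and the bind failure would both be explained.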

    Read the article

  • Configuring external SMTP server on Azure VM - messages staying in queue

    - by Steph Locke
    I have an external SMTP provider: auth.smtp.1and1.co.uk. I am trying to send SQL Server Reporting Services emails via this provider on a Windows 2012 Azure VM. It is configured sufficiently correctly for emails to be generated, but I've not configured something (or have mis-configured something), as the emails then stay in the queue.

    Setup details for the configured SMTP Virtual Server:

        General: IP Address: fixed value
        Access: Access Control: Authentication: ticked Anonymous access
        Access: Connection Control: All except the list below (which is empty)
        Access: Relay restrictions: Only the list below (which contains 127.0.0.1), ticked 'allow all..' option
        Delivery: Outbound Security...: Basic Authentication with username and password completed, ticked TLS encryption
        Delivery: Outbound connections...: TCP port=587
        Delivery: Advanced: FQDN=ServerName, smarthost=auth.smtp.1and1.co.uk

    I then set the following SSRS rsreportserver.config values:

        <SMTPServer>100.92.192.3</SMTPServer>
        <SendUsing>2</SendUsing>
        <SMTPServerPickupDirectory>c:\inetpub\mailroot\pickup</SMTPServerPickupDirectory>
        <From>[email protected]</From>

    Tried so far:

    1. Turning the SMTP service off and on again (just in case).
    2. Running SMTPDiag with no errors (also no emails).
    3. Turning off the firewall for the ports (and more generally, to see if it made a difference).
    4. Generating mail from PowerShell, which resulted in the message sitting in the queue.
    5. Adding 25 and 857 as endpoints.
    6. Perusing the event log, where I found some warnings that appear to be about the recipient:

        Message delivery to the remote domain 'gmail.com' failed for the following
        reason: Unable to bind to the destination server in DNS.

        Message delivery to the host '212.227.15.179' failed while delivering to the
        remote domain 'gmail.com' for the following reason: The remote server did
        not respond to a connection attempt.

    7. Pinging, but this appears to be blocked on Azure.
    8. More PowerShell sending on different domain variants (localhost, boxname, the internal IP used in the SMTP properties, 127.0.0.1), none resulting in success.
    9. Adding a remote domain, with no change.

    Could anyone recommend what step 10 should be in fixing this issue?
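    The event-log warnings show delivery being attempted directly to gmail.com's MX hosts rather than through the smarthost, so the messages are not actually leaving via auth.smtp.1and1.co.uk at all. A first check is whether the smarthost is even reachable on 587 from the VM (a sketch; assumes the Telnet Client feature is installed):

        telnet auth.smtp.1and1.co.uk 587

        rem A "220 ..." greeting means the smarthost is reachable over 587.
        rem A timeout means outbound 587 is blocked and the queue cannot drain,
        rem regardless of the virtual server's delivery settings.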

    Read the article

  • Restarting Haproxy Gracefully

    - by Anand Gupta
    As per various blogs, HAProxy can be gracefully restarted using the following command:

        sudo haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)

    To verify this, I set up an apache bench script which continuously sent requests to HAProxy. Ideally, restarting the server should not have affected the apache bench run, but it seems that whenever HAProxy is restarted the apache bench script terminates and the connection to the load balancer is lost. Here are the details of my HAProxy configuration file:

        global
            nbproc 4
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            #log loghost local0 info
            maxconn 4096
            #chroot /usr/share/haproxy
            user haproxy
            group haproxy
            daemon
            pidfile /var/run/haproxy.pid
            stats socket /home/ubuntu/haproxy.sock
            #debug
            #quiet

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        listen webstats
            bind 0.0.0.0:1000
            stats enable
            mode http
            stats uri /lb?stats
            stats auth anand:aaaaaaaa
            #stats refresh

        listen web-farm 0.0.0.0:80
            mode http
            balance roundrobin
            option httpchk HEAD /index.php HTTP/1.0
            server server2.com 1.1.1.1:80
            server serve1.com 1.1.1.2:80

    Please let me know what I am missing here.
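    Note that -sf is graceful for existing connections, not for the listener itself: the old process unbinds immediately, and a client arriving in the instant before the new process binds can see a reset, which is enough to abort an ab run (especially with keep-alive). One widely used mitigation (a sketch; port and config paths are assumptions) briefly holds SYNs at the firewall during the handover, relying on TCP retransmission so nothing is lost:

        # pause new connection attempts (SYNs are retried, not dropped for good)
        iptables -I INPUT -p tcp --dport 80 --syn -j DROP
        sleep 1
        haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)
        # resume
        iptables -D INPUT -p tcp --dport 80 --syn -j DROP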

    Read the article

  • Postfix: LDAP not working (warning: dict_ldap_lookup: Search base not found: 32: No such object)

    - by Heinzi
    I set up LDAP access with Postfix.

        ldapsearch -D "cn=postfix,ou=users,ou=system,[domain]" -w postfix \
            -b "ou=users,ou=people,[domain]" -s sub \
            "(&(objectclass=inetOrgPerson)(mail=[mailaddr]))"

    delivers the correct entry. The LDAP config file looks like:

        root@server2:/etc/postfix/ldap# cat mailbox_maps.cf
        server_host = localhost
        search_base = ou=users,ou=people,[domain]
        scope = sub
        bind = yes
        bind_dn = cn=postfix,ou=users,ou=system,[domain]
        bind_pw = postfix
        query_filter = (&(objectclass=inetOrgPerson)(mail=%s))
        result_attribute = uid
        debug_level = 2

    The bind_dn and bind_pw should be the same as I used above with ldapsearch. Nevertheless, calling postmap doesn't work:

        root@server2:/etc/postfix/ldap# postmap -q [mailaddr] ldap:/etc/postfix/ldap/mailbox_maps.cf
        postmap: warning: dict_ldap_lookup: /etc/postfix/ldap/mailbox_maps.cf:
            Search base 'ou=users,ou=people,[domain]' not found: 32: No such object

    If I change the LDAP configuration so that anonymous users have complete access to LDAP:

        olcAccess: {-1}to * by * read

    then it works:

        root@server2:/etc/postfix/ldap# postmap -q [mailaddr] ldap:/etc/postfix/ldap/mailbox_maps.cf
        [user-id]

    But when I restrict this access to the postfix user:

        olcAccess: {-1}to * by dn="cn=postfix,ou=users,ou=system,[domain]" read by * break

    it doesn't work, and produces the error printed above (although ldapsearch works; only postmap doesn't). Why doesn't it work when binding with the postfix DN? I think I set up the LDAP ACL for the postfix user correctly, as the ldapsearch command should prove. What can be the reason for this behaviour?
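    A quick way to see whether postmap is really binding as the postfix DN, or silently falling back to an anonymous bind (which would explain err=32 while ldapsearch succeeds), is to watch the bind on the server side. A diagnostic sketch using OpenLDAP's connection/operation logging:

        # temporarily raise slapd logging via cn=config
        ldapmodify -Y EXTERNAL -H ldapi:/// <<EOF
        dn: cn=config
        changetype: modify
        replace: olcLogLevel
        olcLogLevel: stats
        EOF

        # re-run the failing lookup, then inspect who actually bound
        postmap -q [mailaddr] ldap:/etc/postfix/ldap/mailbox_maps.cf
        grep "BIND dn=" /var/log/syslog | tail

    If the BIND line shows an empty dn, the problem is postmap's bind (e.g. the config file not being readable, or a chroot issue), not the ACL itself.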

    Read the article

  • How do I start nginx on port 80 at OS X login?

    - by Bryson
    I installed nginx using Homebrew, and after completing the installation the following message was displayed:

        In the interest of allowing you to run `nginx` without `sudo`, the default
        port is set to localhost:8080.

        If you want to host pages on your local machine to the public, you should
        change that to localhost:80, and run `sudo nginx`. You'll need to turn off
        any other web servers running port 80, of course.

        You can start nginx automatically on login running as your user with:
          mkdir -p ~/Library/LaunchAgents
          cp #{prefix}/org.nginx.nginx.plist ~/Library/LaunchAgents/
          launchctl load -w ~/Library/LaunchAgents/org.nginx.nginx.plist

        Though note that if running as your user, the launch agent will fail if you
        try to use a port below 1024 (such as http's default of 80.)

    But I want nginx, on port 80, running at login, and I don't want to have to open Terminal and type in sudo nginx to do it. I want it to load from a plist file like Redis and PostgreSQL do. I moved the plist to /Library/LaunchAgents/ from the user folder equivalent and changed its ownership; I also tried setting the user directive in the nginx.conf file, and still the same error message appears in Console.app:

        nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied)

    (along with another message telling me that since nginx was being run without super-user privileges, the user directive was being ignored)
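    Items in /Library/LaunchAgents still run as the logged-in user; binding a port below 1024 needs a LaunchDaemon, which launchd runs as root at boot. A sketch (the plist filename matches the Homebrew-generated one above):

        sudo cp ~/Library/LaunchAgents/org.nginx.nginx.plist /Library/LaunchDaemons/
        sudo chown root:wheel /Library/LaunchDaemons/org.nginx.nginx.plist
        sudo launchctl load -w /Library/LaunchDaemons/org.nginx.nginx.plist

    With the master process running as root, the user directive in nginx.conf then takes effect and the worker processes drop to an unprivileged account.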

    Read the article

  • Problem: Munin Graph

    - by Pablo
    I've been trying to install Munin for 15 days; I've looked for information, analyzed logs, and even deleted and reinstalled Munin using yum. I'm hosted at Media Temple on a VPS with CentOS. The problem is still there and it's driving me nuts. The graphs come out as in this screenshot: http://imageshack.us/photo/my-images/833/capturadepantalla201106u.png/

    This is the configuration of my munin.conf file:

        dbdir /var/lib/munin
        htmldir /var/www/munin
        logdir /var/log/munin
        rundir /var/run/munin

        [localhost]
        address **.**.***.***   #IP VPS

    This is the configuration of my munin-node.conf file:

        log_level 4
        log_file /var/log/munin/munin-node.log
        port 4949
        pid_file /var/run/munin/munin-node.pid
        background 1
        setseid 1
        # Which port to bind to;
        host *
        user root
        group root
        setsid yes
        # Regexps for files to ignore
        ignore_file ~$
        ignore_file \.bak$
        ignore_file %$
        ignore_file \.dpkg-(tmp|new|old|dist)$
        ignore_file \.rpm(save|new)$
        allow ^127\.0\.0\.1$

    Thanks so much, I appreciate all the answers.

    UPDATE: munin-graph.log

        Jun 22 16:30:02 - Starting munin-graph
        Jun 22 16:30:02 - Processing domain: localhost
        Jun 22 16:30:02 - Graphed service : open_inodes (0.14 sec * 4)
        Jun 22 16:30:02 - Graphed service : sendmail_mailtraffic (0.10 sec * 4)
        Jun 22 16:30:02 - Graphed service : apache_processes (0.12 sec * 4)
        Jun 22 16:30:02 - Graphed service : entropy (0.10 sec * 4)
        Jun 22 16:30:02 - Graphed service : sendmail_mailstats (0.14 sec * 4)
        Jun 22 16:30:02 - Graphed service : processes (0.14 sec * 4)
        Jun 22 16:30:03 - Graphed service : apache_accesses (0.27 sec * 4)
        Jun 22 16:30:03 - Graphed service : apache_volume (0.15 sec * 4)
        Jun 22 16:30:03 - Graphed service : df (0.21 sec * 4)
        Jun 22 16:30:03 - Graphed service : netstat (0.19 sec * 4)
        Jun 22 16:30:03 - Graphed service : interrupts (0.14 sec * 4)
        Jun 22 16:30:03 - Graphed service : swap (0.14 sec * 4)
        Jun 22 16:30:04 - Graphed service : load (0.11 sec * 4)
        Jun 22 16:30:04 - Graphed service : sendmail_mailqueue (0.13 sec * 4)
        Jun 22 16:30:04 - Graphed service : cpu (0.21 sec * 4)
        Jun 22 16:30:04 - Graphed service : df_inode (0.16 sec * 4)
        Jun 22 16:30:04 - Graphed service : open_files (0.16 sec * 4)
        Jun 22 16:30:04 - Graphed service : forks (0.13 sec * 4)
        Jun 22 16:30:05 - Graphed service : memory (0.26 sec * 4)
        Jun 22 16:30:05 - Graphed service : nfs_client (0.36 sec * 4)
        Jun 22 16:30:05 - Graphed service : vmstat (0.10 sec * 4)
        Jun 22 16:30:05 - Processed node: localhost (3.45 sec)
        Jun 22 16:30:05 - Processed domain: localhost (3.45 sec)
        Jun 22 16:30:05 - Munin-graph finished (3.46 sec)
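    Since munin-graph itself reports every service graphed without errors, the breakage is usually downstream: permissions on htmldir or the identity the cron job runs under. Running the pipeline by hand as the munin user often surfaces the real error (a diagnostic sketch; paths are the packaged CentOS defaults):

        sudo -u munin /usr/bin/munin-cron
        ls -l /var/www/munin           # generated PNGs should be owned/writable by munin
        tail /var/log/munin/munin-graph.log /var/log/munin/munin-html.log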

    Read the article

  • Access All VLANS over XenServer Interface

    - by Garrett
    For my current setup, I have a physical NIC on a XenServer machine that receives traffic tagged with various VLAN IDs. I have a virtual machine running Vyatta that needs to be able to access both tagged and untagged traffic in order to route it. Here's the problem:

    1. If I bind the NIC in XenCenter to the VM (which has no VLAN ID associated with it), the VM cannot see any tagged traffic. I have verified this using tcpdump. However, the tagged traffic is flowing into the XenServer machine perfectly fine.
    2. I have more than 7 VLANs, so adding each VLAN as an interface within XenCenter isn't an option.
    3. Even though tcpdump shows no tagged traffic coming in on the VM's NIC, I have tried adding VLAN interfaces within Vyatta. This also doesn't work.

    I have tried using both Linux bridge and openvswitch setups, and neither seems to work. I am running XenServer 6.0.3 free and Vyatta VC6.3. Please help! I've run out of ideas. I've googled for hours and can't seem to find anything.
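    For what it's worth, tcpdump only shows 802.1Q tags if asked; comparing the physical NIC in dom0 against the VM-facing interface makes it obvious where the tags disappear (a capture sketch; interface names are assumptions):

        # in dom0, against the physical NIC: should print "vlan <id>" headers
        tcpdump -e -n -i eth0 vlan

        # inside the Vyatta VM: if this shows nothing while dom0 shows tags,
        # the virtual switch is stripping or filtering them before the VIF
        tcpdump -e -n -i eth0 vlan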

    Read the article

  • Sending USR2 to mongrel_rails sometimes results in an “Address already in use” on the restart

    - by Ben
    We have a rolling-restart mode for our mongrel cluster that sends a USR2 signal to each running process. This works great most of the time, but very occasionally the mongrel process will shut down and then fail to restart, with the following error:

        /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/tcphack.rb:12:in
            `initialize_without_backlog': Address already in use - bind(2) (Errno::EADDRINUSE)
          from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/tcphack.rb:12:in `initialize'
          from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:93:in `new'
          from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:93:in `initialize'
          from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/configurator.rb:139:in `new'
          from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/configurator.rb:139:in `listener'

    Looking through the mongrel source, the USR2 handler calls a synchronous stop on the running server, so it ought to block until the socket has been released. Has anyone seen this error? Does anyone have any ideas what might cause it? (I asked this question over on StackOverflow initially, but thought it might be more appropriate here.)
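    When the failure does occur, it is worth capturing what still owns the port before retrying; even after a synchronous stop, a forked child or a lingering connection can hold the listening FD briefly. A diagnostic sketch (the port number is an assumption):

        # immediately after a failed restart, see who owns the listener
        lsof -i TCP:8000
        netstat -anp | grep :8000

    If nothing owns it but sockets sit in TIME_WAIT, a short wait-and-retry loop in the restart script is usually enough of a workaround.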

    Read the article

  • Errors when booting the Redis server

    - by Tylër
    Redis usually starts up with the following errors:

        tyler@tyler-vortex:~/redis$ ./src/redis-server
        [3690] 01 Dec 10:56:05 # Warning: no config file specified, using the default config.
            In order to specify a config file use 'redis-server /path/to/redis.conf'
        [3690] 01 Dec 10:56:05 # Unable to set the max number of files limit to 10032
            (Operation not permitted), setting the max clients configuration to 992.

    Other errors found:

        tyler@tyler-vortex:~/redis$ sudo ./utils/install_server.sh
        Welcome to the redis service installer
        This script will help you easily set up a running redis server

        Please select the redis port for this instance: [6379]
        Selecting default: 6379
        Please select the redis config file name [/etc/redis/6379.conf]
        Selected default - /etc/redis/6379.conf
        Please select the redis log file name [/var/log/redis_6379.log]
        Selected default - /var/log/redis_6379.log
        Please select the data directory for this instance [/var/lib/redis/6379]
        Selected default - /var/lib/redis/6379
        Please select the redis executable path [/usr/local/bin/redis-server]
        cat: ./redis.conf.tpl: No such file or directory
        cat: ./redis_init_script.tpl: No such file or directory
        ERROR: Could not write init script to /tmp/6379.conf. Aborting!

    Furthermore, I would like to know how to configure it not to consume so much RAM. I followed the memory configuration page for our website, but the "vm-*" settings do not exist in the redis.conf file: http://redis.io/topics/virtual-memory — do you have to create them?

    Edit: I got it installed. After that, I believe I can no longer run it via ./src/redis-server, because this happens:

        tyler@tyler-vortex:~$ cd redis/
        tyler@tyler-vortex:~/redis$ ./src/redis-server
        [2616] 01 Dec 22:29:30 # Warning: no config file specified, using the default config.
            In order to specify a config file use 'redis-server /path/to/redis.conf'
        [2616] 01 Dec 22:29:30 # Opening port 6379: bind: Address already in use

    But there's another detail: redis now starts with the system...

        redis 127.0.0.1:6379> exit
        tyler@tyler-vortex:~/redis$ ./src/redis-cli
        redis 127.0.0.1:6379> exit

    ...but how can I get the console output I had before installing it as a service via the .sh script?
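    On the "Address already in use" part: install_server.sh registers an init script (named /etc/init.d/redis_<port>, so redis_6379 here), which means an instance is already listening on 6379 at boot. A check-and-stop sketch before running a foreground server:

        # see what owns the port
        sudo netstat -lnp | grep 6379

        # stop the service instance, then run in the foreground with an explicit config
        sudo /etc/init.d/redis_6379 stop
        ./src/redis-server /etc/redis/6379.conf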

    Read the article

  • Move sendmail from Fedora 1 to a different server (Fedora 12)

    - by tanieboy4u
    We have a sendmail server that also works as DHCP, DNS, and a gateway to our ISP. It has three network interfaces: one for our ISP (static IP) and the other two for LANs on different subnets. The hardware is quite old and we've been experiencing downtime due to hardware failures, so we have decided to upgrade the hardware and, while at it, upgrade the Linux OS to Fedora 12. We're trying to do this with minimal downtime. We are planning to take these steps:

    1. Install the new OS (Fedora 12) on the new server with 3 network interfaces.
    2. Install DHCP, BIND, Sendmail, SpamAssassin, MailScanner, Dovecot, and Squirrelmail on the new server.
    3. Transfer settings from the old server to the new server (this is the hardest part, we know).

    For DHCP and DNS, we can just copy the dhcp leases and conf files and everything should work, right? How do we go about moving the users/email accounts from the old server to the new one? Thanks for all your help!
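    For local system accounts with mbox spools (the usual sendmail/Dovecot arrangement; paths are the Fedora defaults, and the UID cutoff is a sketch that needs reviewing so migrated UIDs don't collide with system accounts on the new box):

        # mail spools and home directories
        rsync -a /var/spool/mail/ newserver:/var/spool/mail/
        rsync -a /home/ newserver:/home/

        # list the regular user accounts (UID >= 500 on older Fedora) to re-create
        awk -F: '$3 >= 500 && $3 != 65534 {print $1}' /etc/passwd > users-to-migrate.txt

    Re-creating the accounts on the new server with the same UIDs (rather than blindly appending passwd/shadow lines) keeps ownership of the copied spools and homes intact.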

    Read the article

  • Multiple authoritative DNS server on same IPv4 address

    - by Adrien Clerc
    I'd like to maintain a DNS tunnel on my self-hosted server at example.com. I also have a DNS server on it, which serves everything for example.com. I'm currently using dns2tcp for DNS tunneling, on the domain tunnel.example.com. NSD3 is used for serving the authoritative zones, because it is both simple and secure. However, I have only one public IPv4 address, which means that NSD and dns2tcp can't listen on the same IP/port. So I'm currently using PowerDNS Recursor with the forward-zones parameters, like this:

        forward-zones-recurse=tunnel.example.com=1.2.3.4:5354
        forward-zones=example.com=1.2.3.4:5353

    This lets requests for the authoritative zone be sent to the correct server, as well as the tunnel requests. NSD is listening on port 5353 and dns2tcp on port 5354. However, this is bad, because the recursor needs to be open, and it actually answers any recursive query. Do you have any solution for that? I'd really prefer a solution that doesn't involve setting up BIND, but if you are in the mood to convince me, don't hesitate to do so ;)

    EDIT: I changed the title to be clearer.
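    If the goal is purely qname-based splitting on port 53 without running a recursor at all, a dedicated DNS proxy can route by suffix. A sketch using dnsdist (a tool newer than this setup, shown only as one possible direction; ports match those above):

        -- dnsdist configuration (Lua)
        newServer({address="127.0.0.1:5354", pool="tunnel"})  -- dns2tcp
        newServer({address="127.0.0.1:5353", pool="auth"})    -- NSD
        addAction("tunnel.example.com.", PoolAction("tunnel"))
        addAction(AllRule(), PoolAction("auth"))

    Neither backend recurses, so nothing on the public IP answers arbitrary recursive queries.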

    Read the article

  • Can ping IP address and nslookup hostname but cannot ping hostname

    - by Puddingfox
    I have a DNS server set up on one of my machines using BIND 9.7. Everything works fine with it. On my Windows 7 desktop, I have statically assigned all network values, with one DNS server set: my DNS server. On my desktop, I can ping a third machine by IP fine, and I can nslookup the hostname of the third machine fine. But when I ping the hostname, it says it cannot find the host.

        C:\Users\James>nslookup icecream
        Server:  cake.my.domain
        Address:  xxx.xxx.6.3

        Name:    icecream.my.domain
        Address:  xxx.xxx.6.9

        C:\Users\James>ping xxx.xxx.6.9

        Pinging xxx.xxx.6.9 with 32 bytes of data:
        Reply from xxx.xxx.6.9: bytes=32 time<1ms TTL=255
        Reply from xxx.xxx.6.9: bytes=32 time<1ms TTL=255
        Reply from xxx.xxx.6.9: bytes=32 time<1ms TTL=255
        Reply from xxx.xxx.6.9: bytes=32 time<1ms TTL=255

        Ping statistics for xxx.xxx.6.9:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 0ms, Maximum = 0ms, Average = 0ms

        C:\Users\James>ping icecream
        Ping request could not find host icecream. Please check the name and try again.

    I have also specified the search domain as my.domain. (xxx.xxx and my.domain are substituted for security.) Why can I not ping by hostname? I also cannot ping using the FQDN. The problem is shared by all applications that resolve hostnames: I cannot use PuTTY to SSH to my machines by hostname, only by IP.
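    nslookup talks straight to the configured DNS server, while ping (and PuTTY) go through the Windows DNS Client: the hosts file, the resolver cache, and the suffix search list. So when the two disagree, the mismatch is usually local. A few standard checks:

        rem confirm the "DNS Suffix Search List" actually contains my.domain
        ipconfig /all

        rem clear a stale (possibly negative) cache entry, then retry
        ipconfig /flushdns
        ping icecream.my.domain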

    Read the article

  • OpenVPN and TomatoVPN

    - by Bill Johnson
    Wondering if someone can help me with the following. I have updated my Linksys router with TomatoVPN and used the following config:

        Interface Type: TAP
        Protocol: UDP
        Port: 1195
        Firewall: Custom
        Authorization Mode: Static Key

    I then inserted the static key generated in OpenVPN, saved, and started the service. connect.ovpn:

        # Use the following to have your client computer send all traffic through your router
        # (remote gateway)
        remote (my DNS/DHCP server's external IP address entered here)
        port 1195
        dev tap
        secret static.key.txt
        proto udp
        comp-lzo
        route-gateway 192.168.1.1
        redirect-gateway
        float

    I've placed my static key in a file in the same directory as connect.ovpn (static.key.txt). Now, OpenVPN is installed on a laptop that I use at home. I plugged the laptop into my home connection and started connect.ovpn. The local area connection comes up as 'Home Network 3', and when I start OpenVPN it connects as 'Local Area Connection 2', which shows as 'Unidentified Network' and appears to have no network access. TAP-Win32 Adapter V9 is the adapter's name, and its IP and DNS properties are set to automatic. If I open the OpenVPN GUI, it shows an error message saying "Connecting to connect has failed". Looking at the error message behind this pop-up, one line says:

        TCP/UDP: Socket bind failed on local address [undef]:1195
        Address already in use (WSAEADDRINUSE)

    Could anyone possibly help me further with this please?
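    The bind error happens because a bare port directive makes the client bind its own local UDP port 1195 as well as target it remotely, and something on the laptop already holds that port (a second OpenVPN instance, for example). Clients normally don't need a fixed source port; a hedged tweak to connect.ovpn:

        # instead of "port 1195":
        rport 1195   # remote port only
        nobind       # let Windows pick an ephemeral local source port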

    Read the article

  • mysqladmin - Unknown MySQL server host

    - by ert
    I'm trying to connect to a MySQL server over a local network. The server is running and, I thought, listening on port 41322:

        dylan~$ netstat -ln | grep mysql
        unix  2      [ ACC ]     STREAM     LISTENING     41322    /var/run/mysqld/mysqld.sock

    My user is granted all rights from all addresses, and I can log in locally.

        dylan~$ mysqladmin -P 41322 -h [email protected] create database test
        mysqladmin: connect to server at '[email protected]' failed
        error: 'Unknown MySQL server host '[email protected]' (1)'
        Check that mysqld is running on [email protected] and that the port is 41322.
        You can check this by doing 'telnet [email protected] 41322'

    Adding a --verbose flag gives no additional output. I've commented out bind-address=127.0.0.1 in /etc/mysql/my.cnf on the server. I can ssh into the server without a problem.

        dylan~$ ps a | grep mysql
        11131 pts/3    S      0:00 /bin/sh /usr/bin/mysqld_safe
        11170 pts/3    Sl     0:03 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql
            --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --skip-external-locking
            --port=3306 --socket=/var/run/mysqld/mysqld.sock
        11171 pts/3    S      0:00 logger -p daemon.err -t mysqld_safe -i -t mysqld
        13710 pts/1    S+     0:00 grep mysq

    Any help or thoughts are appreciated.
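    Two details in the pasted output are worth flagging (observations, not tested against this box). First, 41322 is the inode column of a Unix-socket netstat line, not a TCP port; the ps output shows mysqld on the default --port=3306. Second, -h expects a bare hostname or IP, not user@host (the user goes in -u). A corrected invocation sketch (the IP is a placeholder):

        dylan~$ mysqladmin -u dylan -p -h 192.168.0.10 -P 3306 create test
        # -u/-p supply the credentials, -h/-P the host and real port,
        # and mysqladmin's "create" takes just the database name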

    Read the article

  • Converting Lighttpd config to NginX with php-fpm

    - by Le Dude
    I'm having a lot of trouble with the nginx configuration, since I'm new to nginx; I've been using lighttpd for quite some time. Here is the base info.

    New machine:
        - CentOS 6.3 64-bit
        - nginx 1.2.4-1.el6.ngx
        - PHP-FPM 5.3.18-1.el6.remi

    Old machine:
        - CentOS 6.2 64-bit
        - lighttpd 1.4.25-3.el6

    Original lighttpd config file:

        #######################################################################
        ##
        ##  /etc/lighttpd/lighttpd.conf
        ##
        ##  check /etc/lighttpd/conf.d/*.conf for the configuration of modules.
        ##
        #######################################################################

        #######################################################################
        ##
        ##  Some Variable definition which will make chrooting easier.
        ##
        ##  if you add a variable here. Add the corresponding variable in the
        ##  chroot example aswell.
        ##
        var.log_root    = "/var/log/lighttpd"
        var.server_root = "/var/www"
        var.state_dir   = "/var/run"
        var.home_dir    = "/var/lib/lighttpd"
        var.conf_dir    = "/etc/lighttpd"

        ##
        ## run the server chrooted.
        ##
        ## This requires root permissions during startup.
        ##
        ## If you run Chrooted set the the variables to directories relative to
        ## the chroot dir.
        ##
        ## example chroot configuration:
        ##
        #var.log_root    = "/logs"
        #var.server_root = "/"
        #var.state_dir   = "/run"
        #var.home_dir    = "/lib/lighttpd"
        #var.vhosts_dir  = "/vhosts"
        #var.conf_dir    = "/etc"
        #
        #server.chroot   = "/srv/www"

        ##
        ## Some additional variables to make the configuration easier
        ##

        ##
        ## Base directory for all virtual hosts
        ##
        ## used in:
        ## conf.d/evhost.conf
        ## conf.d/simple_vhost.conf
        ## vhosts.d/vhosts.template
        ##
        var.vhosts_dir  = server_root + "/vhosts"

        ##
        ## Cache for mod_compress
        ##
        ## used in:
        ## conf.d/compress.conf
        ##
        var.cache_dir   = "/var/cache/lighttpd"

        ##
        ## Base directory for sockets.
        ##
        ## used in:
        ## conf.d/fastcgi.conf
        ## conf.d/scgi.conf
        ##
        var.socket_dir  = home_dir + "/sockets"

        ##
        #######################################################################

        #######################################################################
        ##
        ##  Load the modules.
        include "modules.conf"
        ##
        #######################################################################

        #######################################################################
        ##
        ##  Basic Configuration
        ## ---------------------
        ##
        server.port = 80

        ##
        ## Use IPv6?
        ##
        #server.use-ipv6 = "enable"

        ##
        ## bind to a specific IP
        ##
        #server.bind = "localhost"

        ##
        ## Run as a different username/groupname.
        ## This requires root permissions during startup.
        ##
        server.username  = "lighttpd"
        server.groupname = "lighttpd"

        ##
        ## enable core files.
        ##
        #server.core-files = "disable"

        ##
        ## Document root
        ##
        server.document-root = server_root + "/lighttpd"

        ##
        ## The value for the "Server:" response field.
        ##
        ## It would be nice to keep it at "lighttpd".
        ##
        #server.tag = "lighttpd"

        ##
        ## store a pid file
        ##
        server.pid-file = state_dir + "/lighttpd.pid"

        ##
        #######################################################################

        #######################################################################
        ##
        ##  Logging Options
        ## ------------------
        ##
        ## all logging options can be overwritten per vhost.
        ##
        ## Path to the error log file
        ##
        server.errorlog = log_root + "/error.log"

        ##
        ## If you want to log to syslog you have to unset the
        ## server.errorlog setting and uncomment the next line.
        ##
        #server.errorlog-use-syslog = "enable"

        ##
        ## Access log config
        ##
        include "conf.d/access_log.conf"

        ##
        ## The debug options are moved into their own file.
        ## see conf.d/debug.conf for various options for request debugging.
        ##
        include "conf.d/debug.conf"

        ##
        #######################################################################

        #######################################################################
        ##
        ##  Tuning/Performance
        ## --------------------
        ##
        ## corresponding documentation:
        ## http://www.lighttpd.net/documentation/performance.html
        ##
        ## set the event-handler (read the performance section in the manual)
        ##
        ## possible options on linux are:
        ##
        ## select
        ## poll
        ## linux-sysepoll
        ##
        ## linux-sysepoll is recommended on kernel 2.6.
        ##
        server.event-handler = "linux-sysepoll"

        ##
        ## The basic network interface for all platforms at the syscalls read()
        ## and write(). Every modern OS provides its own syscall to help network
        ## servers transfer files as fast as possible
        ##
        ## linux-sendfile - is recommended for small files.
        ## writev - is recommended for sending many large files
        ##
        server.network-backend = "linux-sendfile"

        ##
        ## As lighttpd is a single-threaded server, its main resource limit is
        ## the number of file descriptors, which is set to 1024 by default (on
        ## most systems).
        ##
        ## If you are running a high-traffic site you might want to increase this
        ## limit by setting server.max-fds.
        ##
        ## Changing this setting requires root permissions on startup. see
        ## server.username/server.groupname.
        ##
        ## By default lighttpd would not change the operation system default.
        ## But setting it to 2048 is a better default for busy servers.
        ##
        ## With SELinux enabled, this is denied by default and needs to be allowed
        ## by running the following once : setsebool -P httpd_setrlimit on
        ##
        server.max-fds = 2048

        ##
        ## Stat() call caching.
        ##
        ## lighttpd can utilize FAM/Gamin to cache stat call.
        ##
        ## possible values are:
        ## disable, simple or fam.
        ##
        server.stat-cache-engine = "simple"

        ##
        ## Fine tuning for the request handling
        ##
        ## max-connections == max-fds/2 (maybe /3)
        ## means the other file handles are used for fastcgi/files
        ##
        server.max-connections = 1024

        ##
        ## How many seconds to keep a keep-alive connection open,
        ## until we consider it idle.
        ##
        ## Default: 5
        ##
        #server.max-keep-alive-idle = 5

        ##
        ## How many keep-alive requests until closing the connection.
        ##
        ## Default: 16
        ##
        #server.max-keep-alive-requests = 18

        ##
        ## Maximum size of a request in kilobytes.
        ## By default it is unlimited (0).
        ##
        ## Uploads to your server cant be larger than this value.
        ##
        #server.max-request-size = 0

        ##
        ## Time to read from a socket before we consider it idle.
        ##
        ## Default: 60
        ##
        #server.max-read-idle = 60

        ##
        ## Time to write to a socket before we consider it idle.
        ##
        ## Default: 360
        ##
        #server.max-write-idle = 360

        ##
        ## Traffic Shaping
        ## -----------------
        ##
        ## see /usr/share/doc/lighttpd/traffic-shaping.txt
        ##
        ## Values are in kilobyte per second.
        ##
        ## Keep in mind that a limit below 32kB/s might actually limit the
        ## traffic to 32kB/s. This is caused by the size of the TCP send
        ## buffer.
        ##
        ## per server:
        ##
        #server.kbytes-per-second = 128
        ##
        ## per connection:
        ##
        #connection.kbytes-per-second = 32
        ##
        #######################################################################

        #######################################################################
        ##
        ##  Filename/File handling
        ## ------------------------
        ##
        ## files to check for if .../ is requested
        ## index-file.names = ( "index.php", "index.rb", "index.html",
        ##                      "index.htm", "default.htm" )
        ##
        index-file.names += ( "index.xhtml", "index.html", "index.htm", "default.htm", "index.php" )

        ##
        ## deny access the file-extensions
        ##
        ## ~    is for backupfiles from vi, emacs, joe, ...
        ## .inc is often used for code includes which should in general not be part
        ## of the document-root
        url.access-deny = ( "~", ".inc" )

        ##
        ## disable range requests for pdf files
        ## workaround for a bug in the Acrobat Reader plugin.
        ##
        $HTTP["url"] =~ "\.pdf$" {
            server.range-requests = "disable"
        }

        ##
        ## url handling modules (rewrite, redirect)
        ##
        #url.rewrite = ( "^/$" => "/server-status" )
        #url.redirect = ( "^/wishlist/(.+)" => "http://www.example.com/$1" )

        ##
        ## both rewrite/redirect support back reference to regex conditional using %n
        ##
        #$HTTP["host"] =~ "^www\.(.*)" {
        #    url.redirect = ( "^/(.*)" => "http://%1/$1" )
        #}

        ##
        ## which extensions should not be handle via static-file transfer
        ##
        ## .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi
        ##
        static-file.exclude-extensions = ( ".php", ".pl", ".fcgi", ".scgi" )

        ##
        ## error-handler for status 404
        ##
        #server.error-handler-404 = "/error-handler.html"
        #server.error-handler-404 = "/error-handler.php"

        ##
        ## Format: <errorfile-prefix><status-code>.html
        ## -> ..../status-404.html for 'File not found'
        ##
        #server.errorfile-prefix = "/srv/www/htdocs/errors/status-"

        ##
        ## mimetype mapping
        ##
        include "conf.d/mime.conf"

        ##
        ## directory listing configuration
        ##
        include "conf.d/dirlisting.conf"

        ##
        ## Should lighttpd follow symlinks?
        ##
        server.follow-symlink = "enable"

        ##
        ## force all filenames to be lowercase?
        ##
        #server.force-lowercase-filenames = "disable"

        ##
        ## defaults to /var/tmp as we assume it is a local harddisk
        ##
        server.upload-dirs = ( "/var/tmp" )

        ##
        #######################################################################

        #######################################################################
        ##
        ##  SSL Support
        ## -------------
        ##
        ## To enable SSL for the whole server you have to provide a valid
        ## certificate and have to enable the SSL engine.::
        ##
        ##   ssl.engine = "enable"
        ##   ssl.pemfile = "/path/to/server.pem"
        ##
        ## The HTTPS protocol does not allow you to use name-based virtual
        ## hosting with SSL. If you want to run multiple SSL servers with
        ## one lighttpd instance you must use IP-based virtual hosting: ::
        ##
        ##   $SERVER["socket"] == "10.0.0.1:443" {
        ##     ssl.engine           = "enable"
        ##     ssl.pemfile          = "/etc/ssl/private/www.example.com.pem"
        ##     server.name          = "www.example.com"
        ##
        ##     server.document-root = "/srv/www/vhosts/example.com/www/"
        ##   }
        ##
        ## If you have a .crt and a .key file, cat them together into a
        ## single PEM file:
        ## $ cat /etc/ssl/private/lighttpd.key /etc/ssl/certs/lighttpd.crt \
        ##   > /etc/ssl/private/lighttpd.pem
        ##
        #ssl.pemfile = "/etc/ssl/private/lighttpd.pem"

        ##
        ## optionally pass the CA certificate here.
        ##
        #ssl.ca-file = ""

        ##
        #######################################################################

        #######################################################################
        ##
        ##  custom includes like vhosts.
        ##
        #include "conf.d/config.conf"
        #include_shell "cat /etc/lighttpd/vhosts.d/*.conf"
        ##
        #######################################################################

        #######################################################################
        ### Custom Added by me
        #url.rewrite-once = ( ".*\.(js|ico|gif|jpg|png|css|jar|class)$" => "$0",
        #                     "" => "/index.php" )
        url.rewrite-once = ( ".*\?(.*)$" => "/index.php?$1",
                             "^/js/.*$" => "$0",
                             "^.*\.(js|ico|gif|jpg|png|css|swf|jar|class)$" => "$0",
                             "" => "/index.php" )
        # expire.url = ( "" => "access 1 days" )
        include "myvhost-vhosts.conf"
        #######################################################################

    Here is my vhost file for lighttpd:

        $HTTP["host"] =~ "192.168.8.35$" {
            server.document-root = "/var/www/lighttpd/qc41022012/public"
            server.errorlog = "/var/log/lighttpd/error.log"
            accesslog.filename = "/var/log/lighttpd/access.log"
            server.error-handler-404 = "/e404.php"
        }

    And here is my nginx.conf file:

        user nginx;
        worker_processes 5;

        error_log /var/log/nginx/error.log warn;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';

            access_log /var/log/nginx/testsite/logs/access.log main;

            sendfile on;
            #tcp_nopush on;

            keepalive_timeout 65;

            #gzip on;

            # include /etc/nginx/conf.d/*.conf;
            ## I added this ##
            include /etc/nginx/sites-available/*;
        }

    Here is my nginx vhost file:

        server {
            server_name 192.168.8.91;

            access_log /var/log/nginx/myapps/logs/access.log;
            error_log /var/log/nginx/myapps/logs/error.log;

            root /var/www/html/myapps/public;

            location / {
                index index.html index.htm index.php;
            }

            location = /favicon.ico {
                return 204;
                access_log off;
                log_not_found off;
            }

            # location ~ \.php$ {
            #     try_files $uri /index.php;
            #     include /etc/nginx/fastcgi_params;
            #     fastcgi_pass 127.0.0.1:9000;
            #     fastcgi_index index.php;
            #     fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            #     fastcgi_param SCRIPT_NAME $fastcgi_script_name;

            location ~ \.php.*$ {
                rewrite ^(.*.php)/ $1 last;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                # fastcgi_intercept_errors on;
                # fastcgi_param SCRIPT_FILENAME $document_root/index.php;
                # fastcgi_param PATH_INFO $uri;
                # fastcgi_pass 127.0.0.1:9000;
                # include fastcgi_params;
            }
        }

    We have a custom app that works great with lighttpd. We went through some headaches back when we were figuring out how to make it work there; this is the line that makes it work in lighttpd:

        url.rewrite-once = ( ".*\?(.*)$" => "/index.php?$1",
                             "^/js/.*$" => "$0",
                             "^.*\.(js|ico|gif|jpg|png|css|swf|jar|class)$" => "$0",
                             "" => "/index.php" )

    But I couldn't figure out how to make it work in nginx. The web server runs just fine when we use the phpinfo.php test file; however, as soon as I point it at my app, nothing comes up. I checked the error.log file and there are no errors. Very mind-boggling. I spent over a week trying to figure it out with no luck. Please help?
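    The lighttpd rewrite-once above is a classic front controller: serve real static files directly and send everything else to /index.php, preserving the query string. In nginx the idiomatic equivalent is try_files rather than a rewrite chain; a sketch to replace the location / block (untested against this exact app):

        location / {
            # serve the file or directory if it exists, otherwise hand the
            # request to index.php with the original query string attached
            try_files $uri $uri/ /index.php?$query_string;
        }

        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }

    Because try_files checks the filesystem first, the explicit js/ico/gif/... pass-through rules from the lighttpd config are no longer needed.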

    Read the article

  • MySQL remote access not working - Port Closed?

    - by dave.zap
    I am not able to get a remote connection established to MySQL. From my PC I am able to telnet to 3306 on the existing server, but when I try the same with the new server, it hangs for a few minutes and then returns:

        # mysql -utest3 -h [server ip] -p
        Enter password:
        ERROR 2003 (HY000): Can't connect to MySQL server on '[server ip]' (110)

    Here is some output from the server:

        # nmap -sT -O localhost -p 3306
        ...
        PORT     STATE  SERVICE
        3306/tcp closed mysql
        ...

        # netstat -anp | grep mysql
        tcp   0  0 [server ip]:3306  0.0.0.0:*  LISTEN  6349/mysqld
        unix  2  [ ACC ]  STREAM  LISTENING  12286  6349/mysqld  /DATA/mysql/mysql.sock

        # netstat -anp | grep 3306
        tcp   0  0 [server ip]:3306  0.0.0.0:*  LISTEN  6349/mysqld
        unix  3  [ ]  STREAM  CONNECTED  3306  1411/audispd

        # lsof -i TCP:3306
        COMMAND  PID   USER   FD   TYPE  DEVICE  SIZE/OFF  NODE  NAME
        mysqld   6349  mysql  10u  IPv4  12285   0t0       TCP   [domain]:mysql (LISTEN)

    I am running:

        OS     CentOS release 5.8 (Final)
        mysql  5.5.28 (Remi)

    Note: internal connections to MySQL work fine. I have disabled iptables; the box has no other firewall; it runs Apache on port 80 and SSH with no problem. I had followed this tutorial: http://www.cyberciti.biz/tips/how-do-i-enable-remote-access-to-mysql-database-server.html

    I have bound the IP address in my.cnf:

        user=mysql
        bind-address = [server ip]
        port=3306

    I even started over by deleting the mysql folder in my datastore and running:

        mysql_install_db --datadir=/DATA/mysql --force

    Then I recreated all the users as per the manual (http://dev.mysql.com/doc/refman/5.5/en/adding-users.html). I have created one test user:

        CREATE USER 'test'@'%' IDENTIFIED BY '[password]';
        GRANT ALL PRIVILEGES ON *.* TO 'test'@'%' WITH GRANT OPTION;
        FLUSH PRIVILEGES;

    So all I can see is that the port is not really open. Where else might I look? Thanks.
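    One detail that can explain the nmap result (an observation, not verified here): mysqld is bound to the LAN IP only, so a scan of localhost (127.0.0.1) will report 3306 closed even when the listener is healthy. Re-testing against the address mysqld actually binds, plus checking for an upstream filter, narrows things down:

        # scan the bound address rather than 127.0.0.1
        nmap -sT -p 3306 [server ip]

        # confirm nothing local is still filtering
        iptables -L -n -v | grep 3306

        # from the remote PC: a long hang ending in error 110 (timeout), rather
        # than "connection refused", usually means a firewall between the hosts,
        # e.g. at the hosting provider
        telnet [server ip] 3306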

    Read the article

  • Why apache doesn't restart after configuring SSL?

    - by poz2k4444
    I've installed apache2 and then configured it to work with SSL following this and this tutorial. The problem comes when I try to restart the service; the following error is thrown:

        (98)Address already in use: make_sock: could not bind to address 0.0.0.0:443
        no listening sockets available, shutting down
        Unable to open logs

    The output of netstat -anp | grep 443 shows just Firefox listening and nothing else. How can I solve this and get the service running? The output of ps -Af | grep <firefox PID> is:

        root  1949  1     11  18:42  tty1  00:20:55  /opt/firefox/firefox-bin
        root  2025  1949  4   18:43  tty1  00:08:39  /opt/firefox/plugin-container
            /root/.mozilla/plugins/libflashplayer.so -greomni /opt/firefox/omni.ja 1949 true plugin

    After closing Firefox and checking again for port 443, the output is:

        tcp   0  0 10.32.208.179:38923  74.125.139.155:443  TIME_WAIT  -
        tcp   0  0 10.32.208.179:45706  74.125.139.113:443  TIME_WAIT  -
        tcp   0  0 10.32.208.179:40456  74.125.139.156:443  TIME_WAIT  -
        tcp   0  0 10.32.208.179:56823  69.171.227.62:443   FIN_WAIT2  -
        unix  3  [ ]  STREAM  CONNECTED  12443  1721/dbus-daemon  @/tmp/dbus-8ee35rmOOS

    Looking at the error logs (which are not from the time when I'm doing this), the last errors are:

        [Tue Oct 02 18:41:54 2012] [error] Init: Unable to read server certificate from file /etc/apache2/ssl/sever.crt
        [Tue Oct 02 18:41:54 2012] [error] SSL Library Error: 218529960 error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag
        [Tue Oct 02 18:41:54 2012] [error] SSL Library Error: 218595386 error:0D07803A:asn1 encoding routines:ASN1_ITEM_EX_D2I:nested asn1 error
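    Two independent checks may help here (generic diagnostics; the .crt path is taken from the log above). The "wrong tag" ASN.1 error typically means the file is not a PEM-encoded certificate (e.g. it is DER-encoded, or it is actually the key), and "Address already in use" on 443 is usually a duplicate Listen 443 between ports.conf and the SSL vhost rather than another process holding the port:

        # is the file a readable PEM certificate?
        openssl x509 -in /etc/apache2/ssl/sever.crt -noout -subject
        # if that fails, test for DER and convert if needed
        openssl x509 -inform der -in /etc/apache2/ssl/sever.crt -out /etc/apache2/ssl/sever.pem

        # look for 443 being declared twice
        grep -rn "Listen 443" /etc/apache2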

    Read the article
