Search Results

Search found 11453 results on 459 pages for 'apache axis'.


  • What mobile phones are viable for a "nerdy" person? [closed]

    - by Blixt
    This community wiki is for determining a list of good mobile phone choices if you are "nerdy" (definition follows). As a point of reference, the almost two-year-old N95 8GB will be used, mostly because that's what I can most relate to since I've had it since it came out. Today, the iPhone and other modern mobile phones really outshine it in usability and interface. However, it still does everything I want it to do (and here's the definition of "nerdy"; modify as needed):
    - Syncs contacts, calendar, tasks and mail in the background
    - Can run multiple installable applications in the background (Google Maps with Latitude, for example)
    - Good amount of space for music etc.
    - Lets you develop your own little toy applications (Python; not to mention it can also run an Apache server with a public URL!)
    - Tethering!
    - Supports Flash (maybe not very important, but it has its uses)
    - Has a manufacturer that believes in the nerdiness! (The people at Nokia Labs make lots of cool stuff, share early versions with the community and are generally open)
    - A high resolution screen (at least 320x240)
    - Special hardware features such as an accelerometer and GPS

    What's missing from the N95 8GB but still qualifies as good, "nerdy" qualities:
    - 7.2 Mbit/s (or faster) internet through HSDPA or HSPA+
    - A good web browser that can do most of the stuff a desktop browser can (especially render sites properly and run JavaScript correctly)
    - Touch (especially multi-touch)
    - More special hardware features such as a compass
    - Intuitive and fluent user interface (shiny stuff)
    - Ability to configure it to trust root certificates of my own choice
    - A processor fast enough to run Quake 3! =)

    Read the article

  • SSLVerifyClient optional with location-based exceptions

    - by Ian Dunn
    I have a site that requires authentication in order to access certain directories, but not others. (The "directories" are really just rewrite rules that all pass through /index.php.) In order to authenticate, the user can either log in with a standard username/password or submit a client-side x509 certificate. So, Apache's vhost conf looks something like this:

        SSLCACertificateFile /etc/pki/CA/certs/redacted-ca.crt
        SSLOptions +ExportCertData +StdEnvVars
        SSLVerifyClient none
        SSLVerifyDepth 1
        <LocationMatch "/(foo-one|foo-two|foo-three)">
            SSLVerifyClient optional
        </LocationMatch>

    That works fine, but then large file uploads fail because of the behavior documented in bug 12355. The workaround for that is to set SSLVerifyClient require (or optional) as the default, so now the conf looks like this:

        SSLCACertificateFile /etc/pki/CA/certs/redacted-ca.crt
        SSLOptions +ExportCertData +StdEnvVars
        SSLVerifyClient optional
        SSLVerifyDepth 1
        <LocationMatch "/(bar-one|bar-two|bar-three)">
            SSLVerifyClient none
        </LocationMatch>

    That fixes the upload problem, but SSLVerifyClient none doesn't take effect for bar-one, bar-two, etc.; visitors to those directories are still prompted to present a certificate. Additionally, I also need the root URL to be accessible without the user being prompted for a certificate. I'm afraid that will cancel out the workaround, though.
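    A hedged sketch of one possible direction, assuming the upload failures really come from the per-location renegotiation described in bug 12355: keep the original "none by default, optional per location" layout and raise SSLRenegBufferSize only for the locations that renegotiate. The paths and buffer size below are illustrative, not a confirmed fix.

        SSLVerifyClient none
        SSLVerifyDepth 1
        <LocationMatch "/(foo-one|foo-two|foo-three)">
            SSLVerifyClient optional
            # Buffer up to ~10 MB of request body during renegotiation
            # (mod_ssl's default is 131072 bytes); size this to the largest upload.
            SSLRenegBufferSize 10486000
        </LocationMatch>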

    Read the article

  • OpenVPN-based VPN server on same system it's "protecting": feasible?

    - by Johnny Utahh
    Scenario: a hosted machine (typically a VPS) serving a wiki, svn, git, forums, email lists (e.g. GNU Mailman), Bugzilla, etc. privately to fewer than 20 people. People not on the team are not allowed access. I'm seeking VPN-restricted access to said server. I have good user experience with OpenVPN-based servers/clients, but have yet to administer such systems; otherwise I'm an experienced Linux sysadmin. Target system: Ubuntu, probably 12.04.

    I'm looking to put an OpenVPN process on the above server to "protect" all the above-mentioned services, enabling only OpenVPN-authorized clients/processes to access them. (I can easily acquire additional IP addresses as needed for this setup.)

    Option: if absolutely needed, I can employ an additional, dedicated "VPN server" VPS simply to be my VPN front end, but I'd prefer to have all server processes (the VPN server plus the other server apps) running on the same machine if possible. I will consider the dedicated-VPN-machine setup further if it enables (1) easier installation/administration, (2) a better/easier end-user experience, and/or (3) a significantly more secure system. Is any of the above feasible?

    The main intention: create a VPN from purely hosted resources, and not spend all the effort to make a non-VPN, secure site -- which typically means "SSL wrapping" plus all the continual webserver-application-update management. Let the VPN server deal with access security, and spend less time pushing said security "down" into the other apps/Apache.
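    Running OpenVPN on the same box it protects is feasible; the usual pattern is to expose only the VPN (and perhaps SSH) on the public interface and leave every other service reachable only through the tunnel. A minimal, hypothetical iptables sketch - the interface names, port and tunnel subnet are assumptions to adapt, not a hardened ruleset:

        #!/bin/sh
        # Hypothetical: eth0 = public interface, tun0 = OpenVPN tunnel (e.g. 10.8.0.0/24)
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -i eth0 -p udp --dport 1194 -j ACCEPT   # OpenVPN itself
        iptables -A INPUT -i eth0 -p tcp --dport 22   -j ACCEPT   # optional: keep SSH reachable
        iptables -A INPUT -i eth0 -j DROP                         # everything else: only via the VPN
        iptables -A INPUT -i tun0 -j ACCEPT                       # wiki, svn, git, mail, etc.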

    Read the article

  • Am I able to forward traffic from an external subdomain to a specific local host?

    - by George Bowman
    I apologise in advance if the question doesn't make sense; please let me know. I've got a small LAN (~10 virtual servers) using Windows Server 2008 as a DNS server. This is behind a Smoothwall Express 3.0 firewall with ports forwarded for specific services. I have a domain (123-reg) with its nameservers set to those of afraid.org (dynamic DNS) and subdomains pointed to my (dynamic) IP address, e.g. subdomain1.example.com -> 123.456.789.101. I think that adequately explains my set up.

    My question is: am I able to have subdomains, e.g. subdomain1.example.com, only point to a specific local host? Like so:

        subdomain1.example.com:80 -> firewall (external facing) -> server1.example.com:80
        subdomain2.example.com:80 -> firewall (external facing) -> server2.example.com:80

    I don't actually necessarily want to use port 80, otherwise I would just use VirtualHosts on Apache; it is just an example port. Currently I can use either subdomain1.example.com or subdomain2.example.com and they will both point to server1.example.com:80.

    I do not have to stay with Windows Server 2008 for DNS; I am more than happy to move over to BIND if need be, it was just easier to use Windows Server 2008's DNS. I do not know if this is even possible - I have a feeling it isn't, as I've only got one external IP address - but any information is useful!
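    With a single external IP, the firewall cannot pick a backend by hostname; something at layer 7 has to read the HTTP Host header. One common approach is a small reverse proxy on one internal box that fans requests out by subdomain. A hedged Apache mod_proxy sketch, reusing the question's example names and assuming mod_proxy/mod_proxy_http are enabled:

        <VirtualHost *:80>
            ServerName subdomain1.example.com
            ProxyPreserveHost On
            ProxyPass        / http://server1.example.com:80/
            ProxyPassReverse / http://server1.example.com:80/
        </VirtualHost>

        <VirtualHost *:80>
            ServerName subdomain2.example.com
            ProxyPreserveHost On
            ProxyPass        / http://server2.example.com:80/
            ProxyPassReverse / http://server2.example.com:80/
        </VirtualHost>

    Note this only works for HTTP-like protocols that carry a hostname; arbitrary TCP services on the same port still cannot be split by subdomain behind one IP.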

    Read the article

  • SELinux - Allow multiple services access to same /home/dir

    - by Mike Purcell
    I currently have SELinux enabled and have been able to configure Apache to allow access to /home/src/web with a chcon command granting the 'httpd_sys_content_t' type. But now I am trying to serve the rsyslogd.conf file from the same directory, and every time I start rsyslogd I see an entry in my audit log saying that rsyslogd was denied access.

    My question is: is it possible to grant two applications the ability to access the same directory, while still keeping SELinux enabled?

    Current perms on /home/src:

        drwxr-xr-x. src src unconfined_u:object_r:httpd_sys_content_t:s0 src

    Audit log message:

        type=AVC msg=audit(1349113476.272:1154): avc: denied { search } for pid=9975 comm="rsyslogd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:syslogd_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir
        type=SYSCALL msg=audit(1349113476.272:1154): arch=c000003e syscall=2 success=no exit=-13 a0=7f9ef0c027f5 a1=0 a2=1b6 a3=0 items=0 ppid=9974 pid=9975 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="rsyslogd" exe="/sbin/rsyslogd" subj=unconfined_u:system_r:syslogd_t:s0 key=(null)

    -- Edit --

    I came across this post, which is sort of what I am trying to accomplish. However, when I viewed the list of allowed sebool params, the only one relating to syslog was syslogd_disable_trans (SELinux Service Protection). It seems like I could maintain the current SELinux type on the /home/src/ dir and set the syslogd_disable_trans bool to false. I wonder if there is a better approach?
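    Two confined domains can share a directory, but both domains need policy that lets them reach it. A hedged sketch of two common approaches - a persistent file-context rule via semanage (plain chcon labels are lost on a relabel), and a small local policy module generated from the actual denials with audit2allow. The module name is illustrative:

        # Option 1: label the tree persistently instead of with chcon
        semanage fcontext -a -t httpd_sys_content_t "/home/src(/.*)?"
        restorecon -Rv /home/src

        # Option 2: build a tiny local module granting rsyslogd exactly what the AVCs show it needs
        grep rsyslogd /var/log/audit/audit.log | audit2allow -M rsyslog_homesrc
        semodule -i rsyslog_homesrc.pp

    The AVC above is a 'search' denial on home_root_t (the parent directory), so option 2, driven by the real denials, tends to be the more precise fix than flipping syslogd_disable_trans.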

    Read the article

  • Strange MySQL problem moving a website from an Ubuntu server to a Mac server

    - by evan
    I'm moving a website (PHP/MySQL) from an Ubuntu server to an OS X 10.6 server. I've set up Apache to run PHP scripts and installed the newest version of MySQL on the Mac. I just copied all of the PHP files and dumped/imported all of the MySQL databases (including the mysql users database). When I visit the page being served on the Mac, the page is able to connect to the database, but not query. Specifically, mysql_error() is returning this error message:

        NO SUCH FILE OR DIRECTORY

    The reason it's strange is that I'm able to change the PHP connection strings for MySQL on the Ubuntu server so that it points to the Mac server, and the page works correctly (so it seems MySQL is correctly set up on the Mac and definitely contains all of the users and tables it should). Thinking it was something to do with file permissions on the Mac, I've changed all of the files to 755, and it hasn't helped. Any ideas? Thanks!!

    UPDATE: I've found this error, which I'm relatively certain is related to the problem, in /var/log/apache2/error_log:

        PHP Warning: mysql_query(): A link to the server could not be established
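    A hedged guess at the cause: "No such file or directory" from the mysql extension usually means PHP is looking for the Unix socket in a different place than MySQL creates it (on OS X that is commonly /tmp/mysql.sock, while PHP builds often default to /var/mysql/mysql.sock), which also explains why TCP connections from the Ubuntu box work fine. The paths below are the common defaults - verify with mysqladmin variables or mysql_config --socket:

        ; php.ini: point PHP's mysql extensions at the socket MySQL actually creates
        mysql.default_socket = /tmp/mysql.sock
        mysqli.default_socket = /tmp/mysql.sock
        pdo_mysql.default_socket = /tmp/mysql.sock

    Alternatively, connecting to 127.0.0.1 instead of localhost forces TCP and sidesteps the socket path entirely.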

    Read the article

  • CloudFront with Custom Origin and ELB

    - by kmfk
    We are using CloudFront for our static assets but also wanted to allow for gzip. We set up a new distribution with a custom origin pointing back to our application servers, which are behind an Elastic Load Balancer. We manually keep the files in sync across the cluster and update them when we publish. However, with this setup, we get nothing but Misses and RefreshHits from CloudFront, which so far has defeated the purpose.

    Are there any additional settings needed in order to use an ELB as your custom origin? The docs reference it as a viable solution, and when we point the distribution at a single server in our production cluster, CloudFront properly caches our assets. Is it possible that the sticky-sessions cookie and the subsequent header that gets added by it could be an issue?

        Cache-Control: no-cache="set-cookie"  //Added by load balancer

    Any ideas? FYI - currently we have our custom origin pointing to a single EC2 instance, so caching is working correctly, in case you try to curl the file below. Example headers:

        curl -I http://static.quick-cdn.com/css/9850999.css
        HTTP/1.0 200 OK
        Accept-Ranges: bytes
        Cache-Control: max-age=3700
        Cache-Control: no-cache="set-cookie"
        Content-Length: 23038
        Content-Type: text/css
        Date: Thu, 12 Apr 2012 23:03:52 GMT
        Last-Modified: Thu, 12 Apr 2012 23:00:14 GMT
        Server: Apache/2.2.17 (Ubuntu)
        Vary: Accept-Encoding
        X-Cache: RefreshHit from cloudfront
        X-Amz-Cf-Id: K_q7Zy3_jdzlEJ85ukELVtdx1GmuXqApAbZZ7G0fPt0mxRMqPKX5pQ==,RzJmPku-rEIO9WlvuSoKa8hiAaR3dLk5KC4cQMWWrf_MDhmjWe8n6A== 
        Via: 1.0 28c34f9fbf559a21ee16594849e4fc9c.cloudfront.net (CloudFront)
        Connection: close
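    If the ELB's sticky-session cookie is indeed what keeps CloudFront from caching, one hedged option is to make sure static responses leave the origin without Set-Cookie or the no-cache="set-cookie" directive (or to disable stickiness for the static paths on the ELB). A sketch for the Apache origin, assuming mod_headers is enabled and the extensions below match your assets:

        <FilesMatch "\.(css|js|png|jpg|jpeg|gif|ico)$">
            Header unset Set-Cookie
            # Replace the split Cache-Control values with one cacheable policy
            Header set Cache-Control "public, max-age=3700"
        </FilesMatch>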

    Read the article

  • nginx: Rewrite PHP does not work

    - by Ton Hoekstra
    I have a suffix proxy installed and I'm using the following rewrite, with wildcard-subdomain DNS turned on:

        location / {
            if (!-e $request_filename) {
                rewrite ^(.*)$ /index.php last;
                break;
            }
        }

    My suffix proxy has the following URL format: (subdomain and/or domain + domain extension to proxy).proxy.org/(request-uri to proxy). I have this PHP code in my index.php:

        if (preg_match('#([\w\.-]+)\.example\.com(.+)#', $_SERVER['SERVER_NAME'].$_SERVER['REQUEST_URI'], $match)) {
            header('Location: http://example.com/browse.php?u=http://'.$match[1].$match[2]);
            die;
        }

    But when a page with a .php extension is requested, I get a 404 Not Found error:

        http://www.php.net.proxy.org/docs.php - HTTP/1.1 404 Not Found
        http://www.utexas.edu.proxy.org/learn/php/ex3.php - HTTP/1.1 404 Not Found

    But everything else is working (index.php is also working):

        http://php.net.proxy.org/index.php - HTTP/1.1 200 OK
        http://www.php-scripts.com.proxy.org/php_diary/example2.php3 - HTTP/1.1 200 OK
        http://www.utexas.edu.proxy.org/learn/php/ex3.phps - HTTP/1.1 200 OK
        http://www.w3schools.com.proxy.org/html/default.asp - HTTP/1.1 200 OK

    Does somebody have an answer? I don't know why it's not working; on Apache it works fine. Thanks in advance.

    Edit: I've removed the location block and now it's working perfectly:

        if (!-e $request_filename) {
            rewrite ^(.*)$ /index.php last;
            break;
        }

    Read the article

  • Run a rails server on Amazon EC2 [on hold]

    - by Jashwant
    Context: I've tried the rubber gem, but it does not fulfill my requirements (I needed to deploy on an existing instance, so don't recommend rubber). Instead, I followed this excellent tutorial: http://stackoverflow.com/questions/15535140/installing-ruby-2-0-and-rails-4-0-0beta-on-aws-ec2

    Now I have Ruby 2.0 and Rails 4.0.0 running on AWS EC2. I successfully ran the server with RDS (MySQL) as the database and the default WEBrick as the server (using the command rails server). But I've read that WEBrick is a development server and shouldn't be used in production.

    What I tried: I googled and came up with some alternatives:
    - Capistrano
    - Nginx / Apache with Passenger
    - Passenger with Capistrano
    - Unicorn
    - Puma

    My questions: What exactly are Capistrano and Passenger? Are they middleware to ease my deployment process? I don't see any difficulty in running the rails server command. If they are just middleware, does nginx with Passenger and Capistrano make any sense? Why would I add a learning curve (learning nginx, Passenger and Capistrano configs) just to run my server? Can't I just use nginx to deploy my app? What combination should I use on Amazon EC2 (or maybe on some other production server)?
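    For context: Passenger is not middleware but an application server that embeds into nginx or Apache and manages the Rails worker processes for you, while Capistrano is a deployment tool that scripts the "push new code, migrate, restart" steps over SSH. Neither is strictly required, but WEBrick serves one request at a time. A hedged sketch of what serving the app through nginx + Passenger roughly looks like - the paths, hostname and Ruby location are assumptions, and it requires nginx built with the Passenger module:

        server {
            listen 80;
            server_name myapp.example.com;          # hypothetical
            root /home/deploy/myapp/current/public; # the app's public/ dir (Capistrano-style path)
            passenger_enabled on;
            passenger_ruby /usr/local/bin/ruby;     # adjust to your Ruby 2.0 install
        }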

    Read the article

  • Why won't PHP 5.3.3 compile libphp5.so on Red Hat Enterprise?

    - by spatel
    I'm trying to upgrade to PHP 5.3.3 from PHP 5.2.13. However, the Apache module, libphp5.so, will not be compiled. Below is the output I got, along with the configure options I used. The configure statement is a reduced version of what I normally use.

        './configure' '--disable-debug' '--disable-rpath' '--with-apxs2=/usr/local/apache2/bin/apxs' ...

        *** Warning: inter-library dependencies are not known to be supported.
        *** All declared inter-library dependencies are being dropped.
        *** Warning: libtool could not satisfy all declared inter-library
        *** dependencies of module libphp5. Therefore, libtool will create
        *** a static module, that should work as long as the dlopening
        *** application is linked with the -dlopen flag.
        copying selected object files to avoid basename conflicts...
        Generating phar.php
        Generating phar.phar
        PEAR package PHP_Archive not installed: generated phar will require PHP's phar extension be enabled.
        clicommand.inc
        pharcommand.inc
        directorytreeiterator.inc
        directorygraphiterator.inc
        invertedregexiterator.inc
        phar.inc
        Build complete.
        Don't forget to run 'make test'.

    PHP 5.2.13 recompiles just fine, so something is up with 5.3.3. Any help would be greatly appreciated!!

    Read the article

  • IIS not listening on port 80

    - by Holian
    Hello. We have Server 2003 and ISA 2004 with IIS 6 on the same machine. Everything worked well until yesterday, when we tried to make a new rule in ISA... but that is a long story. Unfortunately, something happened to our intranet site. Our site is on port 80, but if we try to open it on client machines we get an error page (the error page comes from our provider): 403 Forbidden; remote host not listening, the remote host is not prepared to accept the connection request.

    On the server I can open the site on port 80. If I change the port number in IIS and try to open the site with that port, it works fine. I tried shutting down IIS and starting Apache with a simple page: on the server it works well, but on the clients the problem is the same, so I think this is not an IIS-related problem. In ISA we have a web publishing rule for port 80 with no authentication. I'm pulling out my hair, please help.
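    A few hedged checks that may help narrow down whether ISA's own web listener (or something else) has claimed port 80 on the external interface before the publishing rule ever fires. These are standard Server 2003-era tools; httpcfg ships with the Windows Support Tools, and the PID below is a placeholder:

        rem Who is bound to port 80, and on which addresses?
        netstat -ano | findstr :80

        rem Map the owning PID from the netstat output to a process name
        tasklist /FI "PID eq 1234"

        rem List the addresses http.sys/IIS is configured to listen on
        httpcfg query iplisten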

    Read the article

  • Tomcat vs. full J2EE solutions

    - by jrhickey
    We are getting ready to make a major revision in our web application architecture, which currently runs on JBoss 4.2. At first we were looking at moving from 4.2 to JBoss 6, but after some research Tomcat may be a better solution for us. My first question is: is there anything that JBoss can do that Tomcat cannot, assuming you are using the correct plugins? We do not really use EJBs in our solution, and it would appear there are simple plugins for web services, JMX and other features. Tomcat appears to have much better support, faster upgrade cycles and many, many books. Since there is less to the system, it also seems much easier to support from an admin point of view. What am I missing?

    The main features we want to enable are better clustering support and session replication / persistence. We will consider other application servers as well, such as GlassFish / Geronimo.

    Quote from a web article: "Apache Tomcat is the world's most widely used web application server, with over one million downloads per month and over 70% penetration in the enterprise datacenter. Tomcat is used to power everything from simple one server sites to large enterprise networks."
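    On the clustering point: Tomcat ships with session replication out of the box; the single element below, added to conf/server.xml, turns on the default all-to-all DeltaManager replication (applications also need <distributable/> in web.xml). A minimal sketch, assuming Tomcat 6 or 7:

        <!-- inside the <Engine> or <Host> element of conf/server.xml -->
        <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>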

    Read the article

  • nginx giving 404 when accessing php from alias directory

    - by code90
    I am trying to migrate from Apache to nginx. The PHP sites that I am hosting need to access a shared library, which turns out to be an alias directory. Below is the configuration I came up with. HTML files work fine, but PHP files give a 404. I have read through and tried most (if not all) of the answers to similar questions without any success. Any hint on what might be causing the issue in my case?

        location /wtlib/ {
            alias /var/www/shared/wtlib_4/;
            index index.php;
        }
        location ~ /wtlib/.*\.php$ {
            alias /var/www/shared/wtlib_4/;
            try_files $uri =404;
            if ($fastcgi_script_name ~ /wtlib(/.*\.php)$) {
                set $valid_fastcgi_script_name $1;
            }
            fastcgi_pass 127.0.0.1:9013;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /var/www/shared/wtlib_4$valid_fastcgi_script_name;
            fastcgi_param REDIRECT_STATUS 200;
            include /etc/nginx/fastcgi_params;
        }

    Thanks all!

    Update: the following seems to be working fine:

        location /wtlib/ {
            alias /usr/share/php/wtlib_4/;
            location ~* .*\.php$ {
                try_files $uri @php_wtlib;
            }
            location ~* \.(html|htm|js|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air)$ {
                expires 7d;
                access_log off;
            }
        }
        location @php_wtlib {
            if ($fastcgi_script_name ~ /wtlib(/.*\.php)$) {
                set $valid_fastcgi_script_name $1;
            }
            fastcgi_pass $byr_pass;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /usr/share/php/wtlib_4$valid_fastcgi_script_name;
            fastcgi_param REDIRECT_STATUS 200;
            include /etc/nginx/fastcgi_params;
        }

    Read the article

  • NFS-shared file-system is locking up

    - by fredden
    Our NFS-shared file system is locking up. Please feel free to ask any questions you feel are relevant. :)

    When it happens, there are a lot of processes in "disk sleep" state, and the load averages on our machines sky-rocket. The machines are responsive over SSH, but the majority of our websites (Apache + mod_php) just hang, as does our email system (Exim + Dovecot). Any websites which don't require write access to the file system continue to operate. The load averages continue to rise until some kind of time-out is reached, but for at least 10-15 minutes. I've seen load averages over 800, yet the machines are still responsive for actions which don't require writing to the shared file system.

    I've been investigating a variety of options, which have all turned out to be red herrings: Nagios, ProFTPD, BIND, cron tasks. I'm seeing these messages in the file server's system log:

        Jul 30 09:37:17 fs0 kernel: [1810036.560046] statd: server localhost not responding, timed out
        Jul 30 09:37:17 fs0 kernel: [1810036.560053] nsm_mon_unmon: rpc failed, status=-5
        Jul 30 09:37:17 fs0 kernel: [1810036.560064] lockd: cannot monitor node2
        Jul 30 09:38:22 fs0 kernel: [1810101.384027] statd: server localhost not responding, timed out
        Jul 30 09:38:22 fs0 kernel: [1810101.384033] nsm_mon_unmon: rpc failed, status=-5
        Jul 30 09:38:22 fs0 kernel: [1810101.384044] lockd: cannot monitor node0

    Software involved: VMware, Debian lenny (64-bit), ancient Red Hat (32-bit, version 7 I believe), Debian etch (32-bit), NFS, apache2 + mod_php, exim, dovecot, bind, amanda, proftpd, nagios, cacti, drbd, heartbeat, keepalived, LVS, cron, ssmtp, NIS, svn, puppet, memcache, mysql, postgres, Joomla!, Magento, Typo3, Midgard, Symfony, custom PHP apps.
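    A hedged reading of those kernel messages: lockd on the file server cannot reach its own rpc.statd, so every NFS lock request from the clients stalls until it times out - which would match websites hanging only when they need writes/locks. A few checks to try on the file server (Debian-style service names assumed):

        # Is statd registered with the portmapper at all?
        rpcinfo -p localhost | grep -E 'status|nlockmgr'

        # Is the daemon actually running?
        ps ax | grep '[r]pc.statd'

        # On Debian, statd is started by nfs-common; restarting it often clears stale monitor state
        /etc/init.d/portmap restart
        /etc/init.d/nfs-common restart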

    Read the article

  • Amazon EC2: Instances, IPs and a wordpress blog (LAMP)

    - by JustinXXVII
    I had a link to my blog posted on Reddit yesterday, and MySQL crashed on my EC2 micro instance. I know I didn't have that many visitors, because I used a marketing link that tracks hits. The link got 167 hits over the course of the last 18 hours, and MySQL crashed twice. 167 visits is not a lot, so I've done some short-term optimizations like restricting the number of Apache threads to limit the MySQL calls. I also set up WP Super Cache to serve static content. Soon I'm going to offload all of my images to S3 or CloudFront.

    So this leads me to my question. If this doesn't seem to help, and if I have another traffic "spike", how do AMIs work when you have a MySQL database? I think I understand that if you have more than one instance and assign the same Elastic IP to both of them, the incoming traffic gets distributed among both. But what happens when the MySQL database gets updated on one of the instances? I just need to wrap my mind around what happens when I create an AMI and then launch a new instance to help with traffic. Thanks for your suggestions.

    Read the article

  • Web based KVM management for Ubuntu

    - by Tim
    We've got a single Ubuntu 9.10 root server on which we want to run multiple KVM virtual machines. To administer these virtual machines I'd like a web-based KVM management tool, but I don't know which one to choose from the list of tools mentioned on linux-kvm.org. I've used virsh & virt-manager on my desktop, but would like a web interface for the server.

    I tested ConVirt on my desktop, but it failed to pick up KVM machines from virsh / virt-manager, and I could not get KVM virtual machine import to work (only Xen). oVirt looks good, but I can't find out if and how I can install it on Ubuntu 9.10. (And I'd really rather not waste another few days on testing stuff that might not work in the end.)

    Can anyone recommend any good web-based KVM management tools that are easy to install on Ubuntu 9.10? I'm looking for something that will also allow me to run other services like Apache and PostgreSQL besides hosting virtual machines, so preferably something fairly lightweight that doesn't require a dedicated OS install. We don't need any professional clustering / migration or anything, just something that will let us create, start, inspect, administer & stop virtual machines from a web page.

    Best regards, Tim

    Update: Anyone have any suggestions? It's awfully quiet here...

    Read the article

  • Nginx rewrite is not working as expected

    - by SamFisher83
    I am trying to learn how to use nginx and its rewrite functionality. Nginx seems to be doing the rewrite:

        2012/03/27 16:30:26 [notice] 16216#0: *3 "foo.php" matches "/foo.php", client: 61.90.22.223, server: localhost, request: "GET /foo.php HTTP/1.1", host: "domain.com"
        2012/03/27 16:30:26 [notice] 16216#0: *3 rewritten data: "img.php", args: "", client: 61.90.22.223, server: localhost, request: "GET /foo.php HTTP/1.1", host: "domain.com"

    But in my access log I am getting the following:

        61.90.22.223 - - [27/Mar/2012:16:26:54 +0000] "GET /foo.php HTTP/1.1" 404 31 "-" "Mozilla/5.0 (Windows NT 6.1; rv:11.0) Gecko/20100101 Firefox/11.0"
        61.90.22.223 - - [27/Mar/2012:16:30:26 +0000] "GET /foo.php HTTP/1.1" 404 31 "-" "Mozilla/5.0 (Windows NT 6.1; rv:11.0) Gecko/20100101 Firefox/11.0"

    There is an img.php in the root directory, so I am not sure why I am getting a 404 error. Here is part of the configuration block:

        rewrite foo.php img.php last;

        location / {
            try_files $uri $uri/ /index.html;
        }

        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi_params;
        }

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        location ~ /\.ht {
            deny all;
        }
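    One hedged thing to check, suggested by the rewritten data: "img.php" log line: the replacement has no leading slash, so the rewritten URI is img.php rather than /img.php, and depending on how SCRIPT_FILENAME is built from fastcgi_params that concatenates badly with $document_root and FastCGI reports a 404. A sketch of the rule anchored and made absolute, with the script path passed explicitly:

        # anchored pattern, absolute replacement (leading slash)
        rewrite ^/foo\.php$ /img.php last;

        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            # make sure the script path actually reaches PHP (stock fastcgi_params may not set it)
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }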

    Read the article

  • How do you set max execution time of PHP's CLI component?

    - by cwd
    How do you set the max execution time of PHP's CLI component? I have a CLI script that has gone into an infinite loop and I'm not sure how to kill it without restarting. I used Quicksilver to launch it, so I can't press Ctrl+C at the command line. I tried running ps -A (show all processes), but php is not showing up in that list, so perhaps it has timed out on its own - but how do you manually set the time limit?

    I tried to find information about where I should set the max_execution_time setting. I'm used to setting this for the version of PHP that runs with Apache, but I have no idea where to set it for the version of PHP that lives in /usr/bin. I did see the following quote, which does seem to be accurate (see screenshot below), but having an unlimited execution time doesn't seem like a good idea:

        Keep in mind that for CLI SAPI max_execution_time is hardcoded to 0. So it seems to be changed by ini_set or set_time_limit but it isn't, actually. The only references I've found to this strange decision are deep in bugtracker (http://bugs.php.net/37306) and in php.ini (comments for 'max_execution_time' directive).
        (via http://php.net/manual/en/function.set-time-limit.php)

    ini_set('max_execution_time') has no effect. I also tried the same thing and got the same result with set_time_limit(7).
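    Since the CLI SAPI pins max_execution_time to 0, a hedged workaround is to enforce the limit from outside the script. The script path and the 300-second limit below are placeholders, and timeout is GNU coreutils (it may need installing on OS X):

        # find the runaway CLI process (it shows up as "php", not as the script name) and kill it
        ps ax | grep '[p]hp'
        kill <pid>

        # or launch with an external limit instead of relying on max_execution_time
        timeout 300 php /path/to/script.php

    Another option is to add an explicit elapsed-time check inside the script's main loop, which works regardless of SAPI.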

    Read the article

  • Multiple Rails apps on same subdomain?

    - by Derek
    I recently decided to try out Rails. When working with PHP, I simply had all of my PHP projects in the same directory; for example, I might have http://ubuntu/app1, http://ubuntu/app2, etc. I created a subdomain for Rails (http://ruby.ubuntu), installed Rails and Passenger, and everything is working. However, I may be wrong, but it looks like I can only have one Rails app per subdomain? My VirtualHost is as follows:

        <VirtualHost *:80>
            ServerName ruby.ubuntu
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/ruby/blog/public

            <Directory /var/www/ruby/blog/public>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
                RailsEnv development
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    All of my PHP and misc. files are stored in /var/www/main. I want to be able to store all of my Rails apps in /var/www/ruby. I tried changing DocumentRoot to /var/www/ruby, but I don't think it's as simple as that. When I browse to a Rails app's Welcome Aboard page and click on "About my application's environment," I get a 404 page, but when the DocumentRoot is set to the public directory, I get the expected result.

    I don't want to have to create a new subdomain every time I create a new project. Is there any way I can make it so I can store all of my apps in /var/www/ruby, and browsing to http://ruby.ubuntu will let me access all of my Rails apps there? That way, if I want to create a new app, all I have to do is rails new app - no Apache .htaccess or VirtualHost configuration required.
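    Passenger can serve several apps from one VirtualHost by mounting each one in a sub-URI: point DocumentRoot at a plain directory, symlink each app's public directory into it, and declare a RailsBaseURI (or RackBaseURI) per mount point. A hedged sketch with hypothetical app names and paths:

        # Assumes symlinks such as:
        #   ln -s /var/www/ruby/blog/public  /var/www/ruby/htdocs/blog
        #   ln -s /var/www/ruby/store/public /var/www/ruby/htdocs/store
        DocumentRoot /var/www/ruby/htdocs
        <Directory /var/www/ruby/htdocs>
            Options -MultiViews
            AllowOverride All
            Order allow,deny
            allow from all
        </Directory>
        RailsBaseURI /blog
        RailsBaseURI /store

    Each new app still needs one symlink and one RailsBaseURI line, but no new subdomain or VirtualHost.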

    Read the article

  • How to connect AD Explorer from Sysinternals to Global Catalog

    - by Oliver
    I'm using the Sysinternals AD Explorer quite frequently to search and inspect an Active Directory without any big problems. But now I'd like to connect not just to a single AD server; instead, I'd like to inspect the global catalog.

    If, in the AD Explorer connect dialog, I enter only the DNS name of the machine that serves the global catalog (e.g. dns.to.domain.controller), I only get the concrete domain it is responsible for, not the whole forest (that's normal behaviour and expected by me). If I add the default port number for the global catalog (3268), in the form dns.to.domain.controller:3268, AD Explorer simply crashes without any further message.

    The global catalog itself works as expected under the given name and port number, because our Apache server uses exactly this address and port number to authenticate some users. So, any hints or tips on accessing the global catalog from AD Explorer? Or are there any other nice tools like AD Explorer out there that don't have problems accessing the global catalog?

    Read the article

  • GIT Website Deployment

    - by Brian
    I am attempting to set up Git to deploy my project to different locations based on the branch. (I think this is what I want to do, anyway.) My current setup is this:

    - Local dev machine running NetBeans to make changes.
    - Remote server hosting the Git projects (the same server running Apache) - two subsites exist, test.FQDN.com and live.FQDN.com.

    What I would like to do is have one Git project (MyProject) and create a new feature branch. Any commits done to the new feature branch would push to test.FQDN.com. Once the features have been tested and merged into the master branch, it would push to live.FQDN.com.

    I have looked at Git's post-receive hooks and was able to use the "git checkout -f" command to pull on the test.FQDN.com site; however, that only pulls the master branch and not the new feature branch. I do not have any funding to use a third party to make this work, and would prefer to stay within Git, but I have full root access to the web server if there is a package to install which would help control this. Any suggestions would be great!
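    A post-receive hook can branch on the ref it receives and check each branch out into a different work tree, which covers this case without extra tooling. A hedged sketch - the deploy paths and the feature-branch naming are placeholders; the hook lives in hooks/post-receive of the server-side repository and must be executable:

        #!/bin/bash
        # hypothetical deployment targets
        LIVE_DIR=/var/www/live.FQDN.com
        TEST_DIR=/var/www/test.FQDN.com

        # post-receive reads "oldrev newrev ref" lines on stdin, one per pushed ref
        while read oldrev newrev ref; do
            branch=${ref#refs/heads/}
            case "$branch" in
                master)
                    GIT_WORK_TREE="$LIVE_DIR" git checkout -f master ;;
                feature-*)
                    GIT_WORK_TREE="$TEST_DIR" git checkout -f "$branch" ;;
            esac
        done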

    Read the article

  • Can Octopussy use messages other than syslog style?

    - by Lee Lowder
    I am currently exploring different options for a centralized log server. We use both Linux (Ubuntu 10.04 / 12.04, LTS for both) and Windows, though for this specific issue only Linux is relevant.

    I like the interface that Octopussy has, and its feature list, but I am hesitant due to a few things. One of my biggest concerns is that it seems to be syslog-only. The end goal is to have a centralized place for our devs and admins to search through the logs generated by Apache, Tomcat and 70+ web apps spread out among a cluster, for both our prod and test environments. While I did see that Octopussy has support for plugins, I haven't been able to find any sort of plugin repo or in-depth guides as to what can be done with them.

    Does anyone know if plugins can be used to allow Octopussy to accept non-syslog messages - specifically log4j-style log messages that may include multi-line stack traces and such? Also, is there a user community for this software, such as a mailing list or forum? I've been unable to locate any so far. Thank you.
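    Whether or not Octopussy grows a plugin for this, a common route is to make the applications speak syslog themselves: log4j ships a SyslogAppender that forwards events to a syslog host. A hedged log4j 1.x properties sketch - the host and facility are placeholders, and note that multi-line stack traces arrive as separate syslog messages unless you collapse them in the layout:

        log4j.rootLogger=INFO, SYSLOG
        log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
        log4j.appender.SYSLOG.SyslogHost=logserver.example.com
        log4j.appender.SYSLOG.Facility=LOCAL1
        log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
        log4j.appender.SYSLOG.layout.ConversionPattern=%d{ISO8601} %-5p [%c] %m%n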

    Read the article

  • Implementing an NGINX load balancer

    - by Alaa Alomari
    I have two servers (ServerA 192.168.1.10, ServerB 192.168.1.11), and the DNS for test.mysite.com points to ServerA.

    On ServerA I have this:

        upstream lb_units {
            server 192.168.1.10 weight=2 max_fails=3 fail_timeout=30s; # Reverse proxy to BES1
            server 192.168.1.11 weight=2 max_fails=3 fail_timeout=30s; # Reverse proxy to BES2
        }

        server {
            listen 80;                    # Listen on the external interface
            server_name test.mysite.com;  # The server name
            root /var/www/test;
            index index.php;

            location / {
                proxy_pass http://lb_units; # Load balance the URL location "/" to the upstream lb_units
            }

            location ~ \.php$ {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/test/$fastcgi_script_name;
            }
        }

    And ServerB is Apache, with the following:

        <VirtualHost *:80>
            RewriteEngine on
            <Directory "/var/www/test">
                AllowOverride all
            </Directory>
            DocumentRoot "/var/www/test"
            ServerName test.mysite.com
        </VirtualHost>

    But whenever I try to browse test.mysite.com, it serves me from ServerA. I also tried marking ServerA as down in lb_units (server 192.168.1.10 down;), and it still serves me from ServerA. Any idea what I have done wrong?
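    Two hedged observations about the ServerA config: the location ~ \.php$ block handles any PHP request locally via FastCGI and never consults the upstream at all (which would explain why marking 192.168.1.10 as down changes nothing), and 192.168.1.10:80 in lb_units is the proxying nginx itself, so requests sent there just come straight back. A sketch of one way to separate the roles - the local backend port is an assumption:

        upstream lb_units {
            server 127.0.0.1:8080   weight=2 max_fails=3 fail_timeout=30s;  # local backend moved off :80
            server 192.168.1.11:80  weight=2 max_fails=3 fail_timeout=30s;  # ServerB (Apache)
        }

        server {
            listen 80;
            server_name test.mysite.com;

            location / {
                proxy_pass http://lb_units;
                proxy_set_header Host $host;              # so ServerB's name-based vhost matches
                proxy_set_header X-Real-IP $remote_addr;
            }
        }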

    Read the article

  • Cisco Call Manager adding 7945's

    - by Will
    Hello. We currently have a Call Manager setup (an older one - we are working on upgrading it), but for now we are looking to add 7945 IP phones. We currently have 7960s all over the place, but we can't get those new anymore. Here is the info about our Call Manager:

        ace.dll 5.2.5.0 CCM4.1(3)
        aced.dll CCM4.1(3)
        AdministrativeReportingTool.exe 4.1(0.45) 4.1(3)sr4d
        Apache Tomcat 4.1 CCM4.1(3)
        ASTIsapi.dll 3.3.2.0 4.1(3)sr4d
        AudioTranslator.exe 4.0.0.3 CCM4.1(3)
        Aupair.exe 4.1.3.10472 4.1(3)sr4d
        AupairChangeNotify.dll 4.1.0.11 CCM4.1(3)
        AuthFilt.dll 4.0.0.0 4.1(3)sr4d
        AVVIDCustomerDirectoryConfigurationPlugin.exe 4.1.0.17(0) CCM4.1(3)
        bootp.exe 2.0.2.2 CCM4.1(3)
        BulkAdministrationTool.exe 5.1(4c) 4.1(3)sr4d
        CallBackService.exe 3.3.2.3 4.1(3)sr4d
        ccm.exe 4.1.3.17472 4.1(3)sr4d
        CcmPerfMon.dll 4.1(3)sr4d
        CCNTEST.EXE CCM4.1(3)
        cdpintf.dll 4.0.0.0 CCM4.1(3)
        Cisco CallManager 4.1(3)sr4d 4.1(3)sr4d

    One of the admins recommended downloading a device pack, which we did. However, when we ran it on the Call Manager server it gave the error "unable to read script". Any recommendations on how to get these phones working with our Call Manager? Thank you.

    Read the article

  • I cannot connect to home server after a few hours

    - by Iago
    I have an old PC and I decided to revive it as a LAMP server (for my own use) and a P2P server (torrent and ed2k). The old PC is an AMD Athlon XP (1400 MHz) with 384 MB of RAM.

    First of all I installed Ubuntu Server 11.10, SSH, FTP, Samba and LAMP. With this configuration my home server works well, with no problems. Then I moved on to the P2P server and tried rTorrent and then uTorrent Server Alpha, and here is my problem: after a few hours (maybe 10 hours, or maybe 30) with the torrent app running, I lose the connection to my home server. That is, I cannot access it via SSH, I cannot access the Apache server, etc., but I can still ping it. It seems that the server freezes, and all I can do is reboot it physically.

    So, I have two questions: What is the problem? And how can I solve it?

    Read the article
