Search Results

Search found 20157 results on 807 pages for 'friendly url'.


  • Anyone interested in obtaining a cable list from a visio Diagram? [closed]

    - by Alex
    It's my first post here. Just wondering if anyone has had to deal with the following problem: you have to install, say, over 100 network elements and servers; cabling is done by contractors, so you need to provide them with an accurate, error-free cable list; your inputs are a set of detailed, port-by-port Visio diagrams. The problem is to obtain the cable list and get the cabling started while you're busy crafting the switch/router configs. I coded a Visio plugin, which I plan to release under the GNU license, that returns a cable list from a diagram, and tested it on an intermediate-size infrastructure (2K+ cables). It works well. The tool needs a little work to be user-friendly, so before getting started I wanted to know whether that was worth the effort. Questions are welcome, let me know. -A PS: the tool is targeted at those who need a port-by-port description of their network, in the form Source/slot/port/Destination/slot/port.

    Read the article

  • Unable to specify parameters to cvlc in a script

    - by VxJasonxV
    I'm creating a script that issues a few curl commands in order to access a time-protected mms stream link, then sets up a relay using cvlc (VLC's command-line interface) for my own use in an unencumbered player. The curl part works: I can run a browser and curl side by side and get the same access URL. (It's time-locked, meaning the stream will work forever, but you have to connect quickly or the URL will time out.) The very end of the script prints the command I will run, followed by "exec $CMD". When I echo $CMD I get:

        cvlc --sout '#standard{access=http,mux=asf,dst=0.0.0.0:58194}' mms://[...]

    Manually copy/pasting this command in, verbatim, works perfectly fine, but as part of the script the cvlc execution output says:

        [0x9743d0] main interface error: no suitable interface module
        [0x962120] main libvlc error: interface "globalhotkeys,none" initialization failed
        [0x9743d0] dummy interface: using the dummy interface module...
        [0xb16e30] stream_out_standard stream out error: no mux specified or found by extension
        [0xb16ad0] main stream output error: stream chain failed for `standard{mux="",access="",dst="'#standard{access=http,mux=asf,dst=0.0.0.0:58194}'"}'
        [0xb11cd0] main input error: cannot start stream output instance, aborting
        [0xb11f70] signals interface error: Caught Interrupt signal, exiting...

    Why does --sout behave one way in a script (non-interactive shell?) and another way in the foreground (interactive shell)?
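
    The error output shows cvlc receiving the single quotes as literal characters: when an unquoted $CMD is expanded, the shell splits it into words but does not strip the embedded quotes, so the whole '#standard{...}' chunk reaches VLC still quoted and the mux is never parsed. A minimal sketch of two ways around that, assuming bash and a hypothetical $MMS_URL variable holding the stream link:

        # build the argument list as an array, so nothing needs re-quoting
        CMD=(cvlc --sout "#standard{access=http,mux=asf,dst=0.0.0.0:58194}" "$MMS_URL")
        exec "${CMD[@]}"

        # or, if $CMD has to stay a single string, let the shell re-parse it:
        # eval "exec $CMD"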

    Read the article

  • What's the best way to run Drupal and Django sites behind the same Varnish server?

    - by Alexis Bellido
    I have a high-traffic website running Drupal and Apache on five web servers behind a load-balancing Varnish server. Let's say this site is example.com. I'm using five backends and a director like this in my default.vcl:

        director balancer round-robin {
          { .backend = web1; }
          { .backend = web2; }
          { .backend = web3; }
          { .backend = web4; }
          { .backend = web5; }
        }

    Now I'm working on a new Django project that will be a new section of this site, running on example.com/new-section. After checking the documentation I found I can do something like this:

        sub vcl_recv {
          if (req.url ~ "^/new-section/") {
            set req.backend = newbackend;
          } else {
            set req.backend = default;
          }
        }

    That is, using a different backend for the subdirectory /new-section under the same domain. My question is, how do I make something like this work with my director and load-balancing setup? I'm probably going to run two or more web servers (backends) for the new Django project, each one with a mix of Gunicorn, Nginx, and a few Python packages, and I would like to put all of those in their own Varnish director to load balance. Is it possible to use the above approach to decide which director to use, like this:

        sub vcl_recv {
          if (req.url ~ "^/new-section/") {
            set req.director = newdirector;
          } else {
            set req.director = balancer;
          }
        }

    All suggestions welcome. Thanks!
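
    In Varnish 2.x VCL there is no req.director variable; a director is itself a legal value for req.backend, so choosing between directors works exactly like choosing between backends. A minimal sketch, assuming hypothetical Django backends named django1 and django2:

        director newsection round-robin {
          { .backend = django1; }
          { .backend = django2; }
        }

        sub vcl_recv {
          if (req.url ~ "^/new-section/") {
            set req.backend = newsection;
          } else {
            set req.backend = balancer;
          }
        }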

    Read the article

  • Apache-style multiviews with Nginx

    - by Kenn
    I'm interested in switching from Apache/mod_php to Nginx for some non-CMS sites I'm running. The sites in question are either completely static HTML files or simple PHP, but the one thing they have in common is that I'm currently using Apache's mod_negotiation to serve them up without file extensions. I'm not concerned with actual content negotiation; I'm using this just so I don't have to use file extensions in my URLs. For example, the file at /info/contact.php is accessed via a URL of just /info/contact. The actual file is a .php file in that location, but I don't use the extension in the URLs. This gives me slightly shorter, cleaner URLs and also doesn't expose what's essentially a meaningless implementation detail to the user. In Apache, all this takes is enabling mod_negotiation and adding +MultiViews to the Options for the site. In Nginx I gather I'll be rewriting somehow, but being new to Nginx, I'm not exactly sure how to do it. These sites are currently working fine proxied from Nginx to Apache, but I'd like to try running them solely with Nginx/fastcgi. They work fine this way as long as I'm using the extensions, so the fastcgi aspect is working great. My concern now is just with removing those extensions. It's important to keep in mind that the filename is not always in the URL, in the case of subdirectories. That is:

        /foo/bar should look for /foo/bar.php or /foo/bar/index.php
        /foo/    should look for /foo/index.php

    Is there a simple way to achieve this with Nginx, or should I stick with proxying to Apache?
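
    One way to do this with Nginx alone is try_files plus a named-location fallback that rewrites the extensionless URI to its .php counterpart. A sketch, assuming PHP is already answering FastCGI on 127.0.0.1:9000 (adjust fastcgi_pass to whatever the working extension-based setup uses):

        location / {
            index index.php index.html;
            try_files $uri $uri.html $uri/ @extensionless;
        }

        location @extensionless {
            # /foo/bar -> /foo/bar.php, re-evaluated against the locations below
            rewrite ^(.*)$ $1.php last;
        }

        location ~ \.php$ {
            try_files $uri =404;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass 127.0.0.1:9000;
        }

    Files that really exist are served by the first try_files match, /foo/ is handled by the index directive as /foo/index.php, and anything else falls through to the .php rewrite.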

    Read the article

  • Sparing level on HP EVA 4000

    - by Samuel
    One of the disks of our EVA4000 died today. This disk group (all volumes vraid5 with sparing level 1 and almost no space left for more volumes; 1TiB drives) is being rebuilt with "spare space" right now, and it will take at least 15 hours to do the leveling/rebuilding. We can't get a new disk until Friday. So, the question is: what would happen if another disk dies before the leveling completes? Would we lose data? And after that, how many additional disks could die before losing data, 1 or 2? In "usual" RAID we would be vulnerable to data loss while the rebuild takes place, but in this case the space reserved for sparing is two times the size of the largest disk, so at the very least the effect should be the same as having two spares. Thanks in advance.

    Update: I have found some interesting threads about this question but still can't answer it, so I'm starting a bounty.

        http://blog.thestoragearchitect.com/2008/10/27/understanding-eva/
        http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&url=http%3A%2F%2Fwww.experts-exchange.com%2FStorage%2FStorage_Technology%2FQ_25548177.html (Experts Exchange question found via Google)

    Read the article

  • DHCP server with database backend

    - by Cory J
    I have been looking around for something to replace my (ancient) ISC-DHCPd server. A DHCP server with a database backend sounds like a great idea to me, as I could then have a nice, friendly web interface to my server. Surprisingly, I can't find any major open-source projects that offer this. Does anyone know of one? I have also read about modifying ISC to use a database backend... can anyone tell me if this solution is stable enough for a busy production server? Or is using a database a Bad Idea™ altogether? PS - http://stackoverflow.com/questions/893887/dchp-with-database-backend looks like SO couldn't answer this old, similar question. EDIT: I am looking for something on a free OS platform, Linux or BSD. If there is something absolutely great that is Windows-only, though, I'm still interested.

    Read the article

  • My Ruby on Rails application only works if the address contains the port

    - by True Soft
    I have a Ruby on Rails application that works fine on my notebook (http://localhost:3000/). I uploaded it to my hosting server, created an application with cPanel X (its URL is http://example.com:12007/), created a rewrite from http://example.com/ to http://example.com:12007/, and started it. If I write http://example.com:12007/ or http://www.example.com:12007/ in my browser, all the pages work as expected. But if I write http://example.com/ or http://www.example.com/, the first page is displayed, but without any CSS or images (just as if it couldn't find them). I can see all the text (even the text from my MySQL database), but with no formatting. And if I click on any link, I get an error page like this:

        Not Found
        The requested URL /some_controller was not found on this server.
        Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.

    What should I do to make my website work without writing the port in the address bar? The content of my /public_html/.htaccess file, which I guess was generated by cPanel's Rewrites, is:

        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^example.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www.example.com$
        RewriteRule ^/?$ "http\:\/\/127\.0\.0\.1\:12007%{REQUEST_URI}" [P,QSA,L]
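
    The rewrite pattern ^/?$ only matches the site root, so only the front page is proxied to the Rails application; every stylesheet, image and other path is looked up in /public_html instead and 404s, which matches the unstyled first page and the broken links. A sketch of the rule widened to cover every path (assuming the proxy modules are available, as they already are for the root rule):

        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^(www\.)?example\.com$ [NC]
        RewriteRule ^(.*)$ http://127.0.0.1:12007/$1 [P,QSA,L]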

    Read the article

  • Enabling JMX for proxool with tomcat

    - by dialt0ne
    I am trying to get proxool's MBeans available so that I can see/manipulate them with jconsole. I have jconsole working, but I don't see anything related to proxool. The system is using Sun Java 1.5.0_17 (I know, I know... I'm working with the developers to upgrade). JMX is enabled by modifying $JAVA_OPTS in my Tomcat 5.5 startup script:

        SJO="$SJO -Dcom.sun.management.jmxremote"
        SJO="$SJO -Dcom.sun.management.jmxremote.port=4998"
        SJO="$SJO -Dcom.sun.management.jmxremote.authenticate=false"
        SJO="$SJO -Dcom.sun.management.jmxremote.ssl=false"
        JAVA_OPTS="$JAVA_OPTS $SJO"

    I have proxool configured with JNDI in server.xml:

        <GlobalNamingResources>
          <Resource name="jdbc/database"
                    auth="Container"
                    type="javax.sql.DataSource"
                    factory="org.logicalcobwebs.proxool.ProxoolDataSource"
                    user="username"
                    password="password"
                    proxool.driver-url="jdbc:oracle:thin:@fqdn.example.com:1521:MYSID"
                    proxool.driver-class="oracle.jdbc.driver.OracleDriver"
                    proxool.alias="mysid"
                    proxool.maximum-connection-count="20"
                    proxool.statistics="20s,5m,15m"
                    proxool.statistics-log-level="INFO"
                    proxool.jmx="true"
                    proxool.verbose="true" />
        </GlobalNamingResources>

    My test .jsp can run queries, and I can see it using the connections with the proxool admin servlet, but I'm unsure if there's more I need to configure in Tomcat or proxool to get JMX functioning. Advice?

    jmxproxy info edit: The jmxproxy servlet is working - when I go to the URL http://tomcatserver.example.com:4999/manager/jmxproxy/?qry=*:type%3DRequestProcessor,* the results are:

        OK - Number of results: 2

        Name: Catalina:type=RequestProcessor,worker=http-8080,name=HttpRequest0
        modelerType: org.apache.coyote.RequestInfo
        bytesSent: 0
        requestBytesSent: 0
        contentLength: -1
        bytesReceived: 0
        requestProcessingTime: 1297983483666
        globalProcessor: org.apache.coyote.RequestGroupInfo@32dc51c8
        requestBytesReceived: 0
        serverPort: -1
        stage: 0
        requestCount: 0
        maxTime: 0
        processingTime: 0
        errorCount: 0

        Name: Catalina:type=RequestProcessor,worker=jk-127.0.0.1-8009,name=JkRequest794
        modelerType: org.apache.coyote.RequestInfo
        virtualHost: tomcatserver.example.com
        bytesSent: 0
        method: GET
        remoteAddr: 172.30.3.51
        requestBytesSent: 0
        contentLength: -1
        workerThreadName: TP-Processor15
        bytesReceived: 0
        requestProcessingTime: 9
        globalProcessor: org.apache.coyote.RequestGroupInfo@1e7d3b8e
        protocol: HTTP/1.1
        currentQueryString: qry=*%3Atype%3DRequestProcessor%2C*
        requestBytesReceived: 0
        serverPort: 4999
        stage: 3
        requestCount: 0
        maxTime: 0
        processingTime: 0
        currentUri: /manager/jmxproxy/
        errorCount: 0

    And more to the point, http://tomcatserver.example.com:4999/manager/jmxproxy/?qry=Catalina:type%3DEnvironment,resourcetype%3DGlobal,name%3DProxool yields:

        OK - Number of results: 0

    Read the article

  • Android: getting data from SQL

    - by sagar
    Hello, I am new to Android. I want to connect to a SQL server to store and get data, so can someone help me with the Android code to do it? I had tried to do this with plain Java and it was working, but now I want to create an application for Android. My Java code is:

        import java.sql.*;

        public class MysqlTest {
            public static void main(String[] args) {
                Connection conn = null;
                try {
                    String userName = "pietro"; // change it to your username
                    String password = "pietro"; // change it to your password
                    String url = "jdbc:mysql://192.168.0.67:3306/registro";
                    Class.forName("com.mysql.jdbc.Driver").newInstance();
                    conn = DriverManager.getConnection(url, userName, password);
                    Statement s = (Statement) conn.createStatement();
                    // create a table on the server
                    s.execute("create table School2 (rolno integer,sub text)");
                    // insert a value into the table
                    s.execute("insert into School2(rolno,sub)values(10,'java')");
                    // read values back from the table
                    s.execute("select rolno,sub from School2");
                    s.close();
                    System.out.println("Database connection established");
                } catch (Exception e) {
                    System.err.println("Cannot connect to database server");
                } finally {
                    if (conn != null) {
                        try {
                            conn.close();
                            System.out.println("Database connection terminated");
                        } catch (Exception e) { /* ignore close errors */ }
                    }
                }
            }
        }
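
    On Android the usual pattern is not to open a JDBC connection from the device at all, but to put a small HTTP API (a PHP script, a servlet, etc.) in front of MySQL on the server and call it from the app. A minimal sketch of the device side, assuming a hypothetical endpoint http://192.168.0.67/school.php that returns the rows as JSON; it must be run off the UI thread:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import org.json.JSONArray;
        import org.json.JSONObject;

        public class SchoolClient {
            // fetch rows from the server-side script that queries MySQL
            public static void fetchRows() throws Exception {
                URL url = new URL("http://192.168.0.67/school.php");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream(), "UTF-8"));
                StringBuilder body = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) {
                    body.append(line);
                }
                in.close();
                conn.disconnect();

                // expecting something like [{"rolno":10,"sub":"java"}, ...]
                JSONArray rows = new JSONArray(body.toString());
                for (int i = 0; i < rows.length(); i++) {
                    JSONObject row = rows.getJSONObject(i);
                    System.out.println(row.getInt("rolno") + " " + row.getString("sub"));
                }
            }
        }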

    Read the article

  • non-GUI connection to local Hyper-V VM without network

    - by sandro
    I have a virtual machine on Hyper-V Manager (Windows 2008 R2) without a network configured on the VM. From a PowerShell script running on the host Windows server, I would like to query into the OS of that local VM for certain information (e.g. whether a given process has finished). I am using CodePlex's pshyperv module (https://pshyperv.codeplex.com/) to interact with Hyper-V Manager, but the only cmdlet to connect to the VM is 'New-VMConnectSession', which launches a 'vmconnect.exe' connection to the VM. Since vmconnect.exe is essentially RDP, this is not very script-friendly. From within a host's PowerShell script, is there any way to send a command to a local virtual machine's OS and receive output, if no network is configured on the VM? (I believe VMware's 'vmrun' utility has this capability.) Another way to ask this question: does Hyper-V have a non-GUI-based form of vmconnect.exe? (PS. Not sure if this was more Stack Overflow or Server Fault.)

    Read the article

  • Redirect before rewrite

    - by Kirk Strobeck
    I have an issue where I need to redirect old URLs without disabling the mod_rewrite rules that handle the page structure:

        redirect 301 /home.html http://www.url.com/

    It needs to live in the Symphony 2.0 .htaccess file:

        ### Symphony 2.0.x ###
        Options +FollowSymlinks -Indexes

        <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteBase /

        ### DO NOT APPLY RULES WHEN REQUESTING "favicon.ico"
        RewriteCond %{REQUEST_FILENAME} favicon.ico [NC]
        RewriteRule .* - [S=14]

        ### IMAGE RULES
        RewriteRule ^image\/(.+\.(jpg|gif|jpeg|png|bmp))$ extensions/jit_image_manipulation/lib/image.php?param=$1 [L,NC]

        ### CHECK FOR TRAILING SLASH - Will ignore files
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_URI} !/$
        RewriteCond %{REQUEST_URI} !(.*)/$
        RewriteRule ^(.*)$ $1/ [L,R=301]

        ### ADMIN REWRITE
        RewriteRule ^symphony\/?$ index.php?mode=administration&%{QUERY_STRING} [NC,L]
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^symphony(\/(.*\/?))?$ index.php?symphony-page=$1&mode=administration&%{QUERY_STRING} [NC,L]

        ### FRONTEND REWRITE - Will ignore files and folders
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*\/?)$ index.php?symphony-page=$1&%{QUERY_STRING} [L]
        </IfModule>
        ######

    Updated:

        redirect 301 ^home.html http://www.url.com/ [L]

        ### Symphony 2.0.x ###
        Options +FollowSymlinks -Indexes

        <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteCond %{HTTP_HOST} !^www\.krista-swim\.com [NC]
        RewriteRule ^(.*)$ http://www.krista-swim.com/$1 [L,R=301]
        RewriteBase /

        ### DO NOT APPLY RULES WHEN REQUESTING "favicon.ico"
        RewriteCond %{REQUEST_FILENAME} favicon.ico [NC]
        RewriteRule .* - [S=14]

        ### IMAGE RULES
        RewriteRule ^image\/(.+\.(jpg|gif|jpeg|png|bmp))$ extensions/jit_image_manipulation/lib/image.php?param=$1 [L,NC]

        ### CHECK FOR TRAILING SLASH - Will ignore files
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_URI} !/$
        RewriteCond %{REQUEST_URI} !(.*)/$
        RewriteRule ^(.*)$ $1/ [L,R=301]

        ### ADMIN REWRITE
        RewriteRule ^symphony\/?$ index.php?mode=administration&%{QUERY_STRING} [NC,L]
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^symphony(\/(.*\/?))?$ index.php?symphony-page=$1&mode=administration&%{QUERY_STRING} [NC,L]

        ### FRONTEND REWRITE - Will ignore files and folders
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*\/?)$ index.php?symphony-page=$1&%{QUERY_STRING} [L]
        </IfModule>
        ######
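
    mod_alias's Redirect and mod_rewrite run independently of each other, so placing the Redirect first in the file doesn't make it run "before" the rewrites, and Redirect takes a URL path rather than a regex or [L] flags. One way to keep everything inside mod_rewrite is to express the old-URL redirect as a RewriteRule placed right after RewriteBase, ahead of the Symphony rules; a sketch:

        RewriteEngine on
        RewriteBase /

        ### OLD URLS - redirect before the Symphony front-end rules run
        RewriteRule ^home\.html$ http://www.url.com/ [R=301,L]

        ### ... existing favicon / image / trailing-slash / Symphony rules follow unchanged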

    Read the article

  • Mod_rewrite to eliminate query strings

    - by Greg Frommer
    Hi everyone, I have been working on this for a while but I'm not finding exactly what I am looking for. I am writing a webapp to let my users create and publish pieces of HTML content in a domain and URL folder structure of their choosing. All of the content and requested URL structures are stored in a database. I have all of the code in my index.php (in the root folder) to access the database content, and based on the server name (and hopefully folder structure) will pick out the proper content from the DB and display it to the end-users browser. So my situation looks like this: www.test.com/index.php?id=123234345 ... will display the proper page, but I want my users to be able to define a unique "page name" instead of using the numeric index (also I want to hide the /index.php part) so what I would like the end-user to see is: www.test.com/arbitrary-unique-keyword/keyword2/keyword3 which will invoke the index.php page in the root folder. Then I will use the PHP $_SERVER['PATH_INFO'] variable to match the requested folder structure up with the proper content in my database and display that. All the material I have found so far expects me to hard code parts of the folder structure into the rules.... but I think I want something simpler (perhaps). So the question in a nutshell: How do I use mod_rewrite to allow all "non-existent" folder paths be passed through to a main index.php residing in the root folder? (For all paths that DO exist, like for calls to images... I want those to succeed and not be directed to the index.php obviously) Thanks everyone, please let me know if I can clear anything up.
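
    The usual front-controller pattern covers this without naming any folders: let requests for real files and directories through, and hand everything else to index.php with the original path appended as path info. A sketch for the site root's .htaccess, assuming path info is accepted for PHP (it normally is):

        RewriteEngine on
        RewriteBase /

        # serve real files and directories (images, css, ...) untouched
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # everything else goes to the front controller, path preserved
        RewriteRule ^(.*)$ index.php/$1 [L,QSA]

    Inside index.php, $_SERVER['PATH_INFO'] then holds /arbitrary-unique-keyword/keyword2/keyword3, ready to be matched against the database.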

    Read the article

  • Change date in a SQL query to reference a cell in Excel

    - by Adil
    I have the following query that returns the needed data into Excel, and manually changing the dates will change the returned data; however, I'd like to reference a cell instead, to make the query a bit more user-friendly. I've tried to reference a cell with my limited knowledge of how to do it, but nothing has worked. The query text below is in cell A1, and the query is run from cell A2 with the formula =wwQuery("STKAP03", $A$1):

        SET QUOTED_IDENTIFIER OFF
        SELECT * FROM OPENQUERY(INSQL, "SELECT DateTime, [40_MOTORS.MI436423.CIN], [40_MOTORS.MI436425.CIN]
        FROM WideHistory
        WHERE [40_MOTORS.MI436423.CIN] IS NOT NULL
        AND wwRetrievalMode = 'Delta'
        AND wwVersion = 'Latest'
        AND DateTime >= '20120409 07:00:00'
        AND DateTime <= '20120416 07:00:00'")

    The two dates/times are what I'd like to reference from cells on a different sheet.
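
    One approach that usually works is to build the query text itself with a formula, concatenating the SQL around TEXT() conversions of the date cells; a sketch for cell A1, assuming the start and end timestamps are real date/time values in Sheet2!A1 and Sheet2!A2 (placeholder addresses):

        ="SET QUOTED_IDENTIFIER OFF SELECT * FROM OPENQUERY(INSQL, ""SELECT DateTime, [40_MOTORS.MI436423.CIN], [40_MOTORS.MI436425.CIN] FROM WideHistory WHERE [40_MOTORS.MI436423.CIN] IS NOT NULL AND wwRetrievalMode = 'Delta' AND wwVersion = 'Latest' AND DateTime >= '" & TEXT(Sheet2!A1,"yyyymmdd hh:mm:ss") & "' AND DateTime <= '" & TEXT(Sheet2!A2,"yyyymmdd hh:mm:ss") & "'"")"

    The doubled quotes ("") are Excel's way of embedding a literal quote inside a string; =wwQuery("STKAP03", $A$1) in A2 stays as it is, though the query may need to be re-run/refreshed after the dates change.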

    Read the article

  • Disable CD eject button on Windows laptop

    - by Marc Gravell
    Most laptops have a CD eject button that is very sensitive, and placed such that it regularly gets triggered when handling the laptop. This is in particular a problem (for clumsy-handed me) when picking up the laptop to stow it in a laptop bag; I've lost count of the number of times it has ejected just as I am lowering it into the case! I rarely use a CD, but I am wondering whether some crafty software hack (or other trick) might be possible to make it less vulnerable. Perhaps trying to fool it into thinking it is busy (but ideally without destroying my battery). Otherwise, I might as well bow to the inevitable and snap the darned thing off. I'm not making this brand-specific, as I've seen this problem on a range of both branded and re-badged laptops. I am, however, mainly interested in Windows-friendly solutions.

    Read the article

  • Unable to run cvlc in a script

    - by VxJasonxV
    I'm creating a script that issues a few curl commands in order to access a time-protected mms stream link, then sets up a relay using cvlc (VLC's command-line interface) for my own use in an unencumbered player. The curl part works: I can run a browser and curl side by side and get the same access URL. (It's time-locked, meaning the stream will work forever, but you have to connect quickly or the URL will time out.) The very end of the script prints the command I will run, followed by "exec $CMD". When I echo $CMD I get:

        cvlc --sout '#standard{access=http,mux=asf,dst=0.0.0.0:58194}' mms://[...]

    But the cvlc execution output says:

        [0x9743d0] main interface error: no suitable interface module
        [0x962120] main libvlc error: interface "globalhotkeys,none" initialization failed
        [0x9743d0] dummy interface: using the dummy interface module...
        [0xb16e30] stream_out_standard stream out error: no mux specified or found by extension
        [0xb16ad0] main stream output error: stream chain failed for `standard{mux="",access="",dst="'#standard{access=http,mux=asf,dst=0.0.0.0:58194}'"}'
        [0xb11cd0] main input error: cannot start stream output instance, aborting
        [0xb11f70] signals interface error: Caught Interrupt signal, exiting...

    Why is it ignoring my --sout input?

    Read the article

  • Configuring Ubuntu 10.04's Greeter Screen

    - by Skizz
    I have an Ubuntu server (9.04 at the moment) and an Ubuntu desktop which I recently upgraded to 10.04. Once I'd set up the users and groups on the desktop to match the server (I'm new to this; I think LDAP would do this for me, but that's another question), the friendly greeter screen no longer displays the same set of users. In 9.04 (the previous version running on the desktop PC) there were four users shown. These had UIDs of 500 to 510. Changing the UIDs is one solution, but that would mean changing the UIDs on all my Linux PCs, and that is a mighty PITA (unless there's a tool to make it less painful). How can I get the greeter in 10.04 to show users with UIDs in the 500s without resorting to changing the UIDs? I use the greeter screen with user pictures as the PC is mainly for use by my young children, and clicking the picture is a bit easier (they still need to type a password though).

    Read the article

  • SharePoint web services not protected?

    - by Philipp Schmid
    Using WSS 3.0, we have noticed that while users can be restricted to access only certain sub-sites of a site collection through permission settings, the same doesn't seem to be true for web services, such as /_vti_bin/Lists.asmx! Here's our experimental setup:

        http://formal/test : 'test' site collection
        - site1 : first site in test site collection, user1 is member
        - site2 : second site in test site collection, user2 is member

    With this setup, using a web browser user2 can:

        - access http://formal/test/site2/Default.aspx
        - cannot access http://formal/test/site1/Default.aspx

    That's what is expected. To our surprise however, using the code below, user2 can retrieve the names of the lists in site1, something he should not have access to! Is that by (unfortunate) design, or is there a configuration setting we've missed that would prevent user2 from retrieving the names of lists in site1? Is this going to be different in SharePoint 2010? Here's the web service code used in the experiment:

        class Program
        {
            static readonly string _url = "http://formal/sites/research/site2/_vti_bin/Lists.asmx";
            static readonly string _user = "user2";
            static readonly string _password = "password";
            static readonly string _domain = "DOMAIN";

            static void Main(string[] args)
            {
                try
                {
                    ListsSoapClient service = GetServiceClient(_url, _user, _password, _domain);
                    var result = service.GetListCollection();
                    Console.WriteLine(result.Value);
                }
                catch (Exception ex)
                {
                    Console.WriteLine(ex.ToString());
                }
            }

            private static ListsSoapClient GetServiceClient(string url, string userName, string password, string domain)
            {
                BasicHttpBinding binding = new BasicHttpBinding(BasicHttpSecurityMode.TransportCredentialOnly);
                binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Ntlm;
                ListsSoapClient service = new ListsSoapClient(binding, new System.ServiceModel.EndpointAddress(url));
                service.ClientCredentials.UserName.Password = password;
                service.ClientCredentials.UserName.UserName = (!string.IsNullOrEmpty(domain)) ? domain + "\\" + userName : userName;
                return service;
            }
        }

    Read the article

  • Lightning fast forum based around metadata / tags? [closed]

    - by Dan W
    I wonder if anything like this exists. I'd like to add a forum to my site, but instead of the usual forum/subforum/sub-subforum structure, I'd like to use a metadata/tag approach where everything exists as a single directory, and where there's a search field at the top which instantly (<0.5 sec) filters the threads to a particular keyword or keywords. Also, as the admin, I would be able to add highly visible buttons at the top, which can be clicked on for the main categories I choose for the forum (nevertheless, users can also add tags to their own threads outside of these default main tags I supply if they wish). This approach, if done properly, is more powerful, efficient, maintenance free, scalable and friendly than a standard forum, so I was hoping someone had the same idea and made something out of it. It couldn't be that hard. I'd want the speed to be up to (or near) the standard of this: http://forum.dlang.org/ Other forums (e.g.: phpBB, shudder) are orders of magnitude worse than that in terms of latency (posting or browsing), and I think that is wrong, even in principle ;)

    Read the article

  • PHP displays blank white page even with all error reporting enabled

    - by Andy Shinn
    I am trying to debug a broken page in a Drupal application and am having a hard time getting PHP to spit anything useful out. I have the following set:

        error_reporting = E_ALL
        display_errors = On
        display_startup_errors = On
        log_errors = On
        error_log = /var/log/php/php_error.log

    I have a file showing me phpinfo() which confirms these variables are set correctly for the environment. I have increased memory_limit to 256M (which should be more than enough). Yet, the only indication I get is a status 500 code in the Apache access log and a blank white page from PHP. The Apache virtual host has LogLevel set to debug and the error log only outputs:

        [Sat Jun 16 20:03:03 2012] [debug] mod_deflate.c(615): [client 173.8.175.217] Zlib: Compressed 0 to 2 : URL /index.php, referer: http://ec2-174-129-192-237.compute-1.amazonaws.com/admin/reports/updates
        [Sat Jun 16 20:03:03 2012] [error] [client 173.8.175.217] File does not exist: /var/www/favicon.ico
        [Sat Jun 16 20:03:03 2012] [debug] mod_deflate.c(615): [client 173.8.175.217] Zlib: Compressed 42 to 44 : URL /favicon.ico

    The PHP error log outputs nothing at all. kernel and syslog show nothing related to Apache or PHP. I have also tried installing suphp, and checking its log just confirms the user is executing correctly:

        [Sat Jun 16 20:02:59 2012] [info] Executing "/var/www/index.php" as UID 1000, GID 1000
        [Sat Jun 16 20:05:03 2012] [info] Executing "/var/www/index.php" as UID 1000, GID 1000

    This is on Ubuntu 12.04 x86_64 with the following PHP modules:

        ii php5         5.3.10-1ubuntu3.1 server-side, HTML-embedded scripting language (metapackage)
        ii php5-cgi     5.3.10-1ubuntu3.1 server-side, HTML-embedded scripting language (CGI binary)
        ii php5-cli     5.3.10-1ubuntu3.1 command-line interpreter for the php5 scripting language
        ii php5-common  5.3.10-1ubuntu3.1 Common files for packages built from the php5 source
        ii php5-curl    5.3.10-1ubuntu3.1 CURL module for php5
        ii php5-gd      5.3.10-1ubuntu3.1 GD module for php5
        ii php5-mysql   5.3.10-1ubuntu3.1 MySQL module for php5

    So, what am I missing here? Why no error reporting?
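
    A blank page even with E_ALL in php.ini often means the application is overriding those settings at runtime (Drupal installs its own error handling), so forcing them at the very top of the entry script is a quick check; a minimal sketch to drop temporarily into the top of index.php:

        <?php
        // temporary debugging shim - remove once the fatal error is found
        ini_set('display_errors', '1');
        ini_set('display_startup_errors', '1');
        error_reporting(E_ALL);

    Running php -l against index.php (or the suspect module file) from the command line also surfaces parse errors that never make it into the web logs.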

    Read the article

  • Default /server-status location not inheriting in Apache

    - by rmalayter
    I'm having a problem getting /server-status to work with Apache 2.2.14 on Ubuntu Server 10.04.1. The default symlinks for status.load and status.conf are present in /etc/apache2/mods-enabled, and status.conf does include the location /server-status with the appropriate allow/deny directives. However, the only vhost I have in sites-enabled looks like this. The idea is to proxy anything with a Tomcat URL to a cluster of Tomcats, and anything else to an IIS box. However, this seems to result in requests to /server-status being sent to IIS. Copying /server-status explicitly into the vhost configuration doesn't seem to help, no matter what order I use. Is it possible to make /server-status work within a vhost configuration that has a "default" proxy rule like this?

        <VirtualHost *:80>
          ServerAdmin webmaster@localhost
          DocumentRoot /var/www

          Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED

          <Proxy balancer://tomcatCluster>
            BalancerMember ajp://qa-app1:8009 route=1
            BalancerMember ajp://qa-app2:8009 route=2
            ProxySet stickysession=ROUTEID
          </Proxy>

          <ProxyMatch "^/(mytomcatappA|mytomcatappB)/(.*)" >
            ProxyPassMatch balancer://tomcatCluster/$1/$2
          </ProxyMatch>

          #proxy anything that's not a tomcat URL to IIS on port 80
          <Proxy />
            ProxyPass http://qa-web1/
          </Proxy>
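
    One approach that usually works is to exclude /server-status from proxying with a "ProxyPass /server-status !" line placed before the catch-all rule, and to declare the handler in the vhost as well. A sketch of the relevant fragment, written with the more common vhost-level ProxyPass form (the Allow list is a placeholder):

        # keep the status page local - exclusions must precede the general rule
        ProxyPass /server-status !

        <Location /server-status>
          SetHandler server-status
          Order deny,allow
          Deny from all
          Allow from 127.0.0.1
        </Location>

        # everything else still goes to IIS
        ProxyPass / http://qa-web1/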

    Read the article

  • Use subpath internal proxy for subdomains, but redirect external clients if they ask for that subpath?

    - by HostileFork
    I have a VirtualHost that I'd like to have several subdomains on. (For the sake of clarity, let's say my domain is example.com and I'm just trying to get started by making foo.example.com work, and build from there.) The simplest way I found for a subdomain to work non-invasively with the framework I have was to proxy to a sub-path via mod_rewrite. Thus paths would appear in the client's URL bar as http://foo.example.com/(whatever) while they'd actually be served http://foo.example.com/foo/(whatever) under the hood. I've managed to do that inside my VirtualHost config file like this:

        ServerAlias *.example.com
        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^foo\.example\.com [NC]  # <---
        RewriteCond %{REQUEST_URI} !^/foo/.*$ [NC]        # AND is implicit with above
        RewriteRule ^/(.*)$ /foo/$1 [PT]

    (Note: It was surprisingly hard to find that particular working combination. Specifically, the [PT] seemed to be necessary on the RewriteRule. I could not get it to work with examples I saw elsewhere like [L] or trying just [P]. It would either not show anything or get in loops. Also some browsers seemed to cache the response pages for the bad loops once they got one... a page reload after fixing it wouldn't show it was working! Feedback welcome, in any case, if this part can be done better.)

    Now I'd like to make what http://foo.example.com/foo/(whatever) provides depend on who asked. If the request came from outside, I'd like the client to be permanently redirected by Apache so they get the URL http://foo.example.com/(whatever) in their browser. If it came internally from the mod_rewrite, I want the request to be handled by the web framework... which is unaware of subdomains. Is something like that possible?
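
    One way to tell the two cases apart is %{THE_REQUEST}, which holds the request line exactly as the client sent it and is not updated by internal rewrites, so a /foo/... the client typed can be bounced back to the clean URL while the internally rewritten one passes through untouched. A sketch, inside the same VirtualHost:

        RewriteEngine on

        # external client explicitly asked for /foo/... -> redirect to the clean URL
        RewriteCond %{HTTP_HOST} ^foo\.example\.com [NC]
        RewriteCond %{THE_REQUEST} ^[A-Z]+\ /foo/ [NC]
        RewriteRule ^/foo/(.*)$ http://foo.example.com/$1 [R=301,L]

        # internal pass-through (unchanged from the working rule)
        RewriteCond %{HTTP_HOST} ^foo\.example\.com [NC]
        RewriteCond %{REQUEST_URI} !^/foo/.*$ [NC]
        RewriteRule ^/(.*)$ /foo/$1 [PT]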

    Read the article

  • How can I prevent Ask.com Toolbar from being installed every time Java is updated?

    - by abstrask
    As many are painfully aware, Oracle continues to not only bundle the Java installation with the useless Internet browser toolbar from Ask.com, but also enable its installation by default. In addition to the toolbar, Ask also replaces your favourite search engine in your browser with Ask. Furthermore, the Java installation goes as far as to actually recommend installing this useless junk, meaning that any non-IT-savvy person is more than likely to leave it checked and install it (after all, it was enabled by default and the friendly Java installer did recommend it, right?). To add insult to injury, even if you remove the Ask Toolbar, you can be sure to see it again soon, when the next Java update hits you (which seems to happen quite often lately, due to loads of security fixes for Java, but that's another story). I'll duly remove the check-mark to install the Ask Toolbar whenever I update Java, but when supporting my family and friends, it's obvious they don't. How can I prevent the pesky Ask.com Toolbar from being installed in the first place?

    Read the article

  • Can I use @import to import Kod's default style sheet into my own?

    - by Thomas Upton
    I understand that Kod is being actively developed and is prone to drastic changes in any area. I would like to modify some small things (like font face and size, or certain colors) while still being able to benefit from any changes or updates to the default Kod stylesheet. I thought that I would be able to @import the default stylesheet into my own to achieve this. This is what ~/.kod/custom.css would look like:

        @import url("file:///Applications/Kod.app/Contents/Resources/style/default.css");

        /* Change the default font face and color. */
        body {
          font-family: Menlo, monospace;
          color: #efefef;
        }

    This stylesheet was set with the following defaults command, per the comments at the top of Kod's default CSS file:

        defaults write se.hunch.kod style/url ~/.kod/custom.css

    Unfortunately, this didn't work. When I first tried to reload the style, Kod crashed. It opened fine again, but the @import statement wasn't working, and Kod crashed every time I saved the custom.css file. Am I doing something wrong? Did I write my @import statement wrong? Is that not how @import is supposed to work? Did I miss some sort of documentation or Kod Google Groups post that mentions that Kod explicitly disallows this?

    Read the article

  • GIT : I keep having to merge my new branch

    - by mnml
    Hi, I have created a new branch and I'm working on it with other devs, but for some reason, whenever I want to push my new commits I first have to run:

        git merge origin/mynewbranch

    Otherwise I get errors like these:

        ! [rejected]        mynewbranch -> mynewbranch (non-fast-forward)
        error: failed to push some refs to '[email protected]/repo.git'
        To prevent you from losing history, non-fast-forward updates were rejected
        Merge the remote changes before pushing again. See the 'Note about
        fast-forwards' section of 'git push --help' for details.

        You asked me to pull without telling me which branch you want
        to merge with, and 'branch.mynewbranch.merge' in your configuration
        file does not tell me, either. Please specify which branch you want
        to use on the command line and try again (e.g. 'git pull <repository> <refspec>').
        See git-pull(1) for details.

        If you often merge with the same branch, you may want to use
        something like the following in your configuration file:

            [branch "mynewbranch"]
            remote = <nickname>
            merge = <remote-ref>

            [remote "<nickname>"]
            url = <url>
            fetch = <refspec>

        See git-config(1) for details.

    Why is it not automatic? Thanks
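
    The second block of output is git saying the branch has no upstream configured, so a bare git pull doesn't know what to merge; the rejected push just means the remote branch has commits that haven't been merged locally yet. Recording the upstream once makes the normal pull-then-push cycle work without the manual merge; a sketch, assuming a git recent enough to know push -u (1.7.0+):

        # record origin/mynewbranch as this branch's upstream while pushing (one time)
        git push -u origin mynewbranch

        # from then on the usual cycle is enough
        git pull        # fetch + merge origin/mynewbranch
        git push

    On older gits, git config branch.mynewbranch.remote origin and git config branch.mynewbranch.merge refs/heads/mynewbranch set the same two values by hand.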

    Read the article

  • PGP - MailCloak/Gpg4Win/etc

    - by ericl42
    Hello, I have used Gpg4Win in the past along with FireGPG to encrypt my emails. I need to roll this out to quite a few more people and was wondering if anyone has products they prefer. The criteria are: free, and as user-friendly as possible. I looked at MailCloak, but I have had some issues with it on one of the two computers I've been testing on. It seems like it might work pretty well as an overall solution, but on my computer I keep getting "Identity cannot be created." and I'm not sure what's going on with that. Thanks for the info.

    Read the article
