Search Results

Search found 50600 results on 2024 pages for 'application lifecycle'.


  • Squid SSL transparent proxy - SSL_connect:error in SSLv2/v3 read server hello A

    - by larryzhao
    I am trying to set up an SSL proxy with Squid so that one of my internal servers can reach https://www.googleapis.com, i.e. so the Rails application on that server reaches googleapis.com via the proxy. I am new to this, so my approach is to set up an SSL transparent proxy with Squid. I built Squid 3.3 on Ubuntu 12.04, generated an SSL key/certificate pair, and configured Squid like this, leaving almost everything else at its defaults:
        http_port 443 transparent cert=/home/larry/ssl/server.csr key=/home/larry/ssl/server.key
    The permissions of the directory holding the key/crt are:
        drwxrwxr-x 2 proxy proxy 4096 Oct 17 15:45 ssl
    Back on my dev laptop, I put "<proxy-server-ip> www.googleapis.com" in my /etc/hosts so that the call goes to my proxy server. But when I try it from my Rails application, I get:
        SSL_connect returned=1 errno=0 state=SSLv2/v3 read server hello A: unknown protocol
    I also tried openssl on the command line:
        openssl s_client -state -nbio -connect www.googleapis.com:443 2>&1 | grep "^SSL"
        SSL_connect:before/connect initialization
        SSL_connect:SSLv2/v3 write client hello A
        SSL_connect:error in SSLv2/v3 read server hello A
        SSL_connect:error in SSLv2/v3 read server hello A
    What did I do wrong?
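
    One thing worth checking (a sketch, not a confirmed fix): with http_port, Squid answers port 443 in plain HTTP, which is exactly what the "unknown protocol" error from OpenSSL suggests. Terminating TLS in Squid normally uses https_port, and cert= expects the signed certificate rather than the .csr. A minimal reverse-proxy style configuration for this case might look like the following - the file paths are taken from the question, and the directives should be verified against the Squid 3.3 documentation:

        https_port 443 accel defaultsite=www.googleapis.com cert=/home/larry/ssl/server.crt key=/home/larry/ssl/server.key
        cache_peer www.googleapis.com parent 443 0 no-query originserver ssl name=googleapis
        cache_peer_access googleapis allow all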

    Read the article

  • MySQL Master-Master w/ multiple read slave cost effective setup in AWS

    - by Ross
    I've been evaluating Amazon Web Services RDS for MySQL and costing out potential scenarios involving a simple multi-AZ read/write deployment vs. a multi-AZ MySQL master (hot standby) with additional read-only slaves. The cost I'm trying to optimize is reserved instances vs. on-demand instances.
    Situation 1: purchase a reserved multi-AZ setup for an extra-large high-memory (17 GB RAM) instance at $5200/yr and have my application query the master all the time. The problem is that I don't need all the resources of the 17 GB instance all the time, and therefore especially not a hot standby, so what savings could a better topology create, like potentially Situation 2 below?
    Situation 2: purchase a reserved multi-AZ setup using smaller master instances than above for the master-master hot standby, which receives the writes only. Then create and load balance several read-only slaves off the master, and add/remove and/or scale the read slaves up/down based on demand. This might only cost $1000 plus the on-demand usage of the read slaves.
    My thinking is that with a variable read-intensive application load and a low write load, the single-level topology in Situation 1 means I'm paying for a lot of resources at the write level of the topology where I don't need them. My hope is that Situation 2 yields cost savings from smaller reserved instances at the master-master level while allowing me to scale up/down and/or out at the read level according to demand. Does anyone see a downside to doing this, or know of some reason this isn't possible with RDS? Any other thoughts or advice always welcome, of course. Thanks in advance, R
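
    If it helps the comparison, RDS read replicas can be created and dropped on demand from the command line, so the read tier really can track load. A hedged sketch using the current AWS CLI - the instance identifiers and instance class below are made up:

        aws rds create-db-instance-read-replica \
            --db-instance-identifier myapp-read-1 \
            --source-db-instance-identifier myapp-master \
            --db-instance-class db.m1.large
        aws rds delete-db-instance --db-instance-identifier myapp-read-1 --skip-final-snapshot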

    Read the article

  • Oracle Database Upcoming Event dates to know

    - by mandy.ho
    February may be a short month, but it's not short of exciting Oracle events. From information-packed "Real Performance Days" to participation in one of the biggest IT security events - look out for Oracle Database and let us know if you are there with us!
    Feb 13-18, 2011 - Las Vegas, NV - TDWI World Conference Series. Join Oracle in highlighting Exadata X2-2 and X2-8, along with Oracle Business Intelligence, Enterprise Performance Management and Data Warehousing solutions. Oracle will be presenting a workshop - Oracle Data Integration: Best-of-Breed Solutions for the Enterprise - Wednesday, February 16, 2011, 7 p.m. - 9 p.m., with Glen Goodrich, Director of Product Management, and Christophe Dupupet, Director of Product Management, Data Integration. http://events.tdwi.org/events/las-vegas-world-conference-2011/sessions/session-list.aspx
    Feb 14-17, 2011 - Barcelona, Spain - Mobile World Congress. MWC is an event where Oracle showcases the near complete breadth and depth of value that our Communications Industry strategy and Hardware and Software Solutions can deliver. Oracle supports Communications Service Providers today and delivers platforms and flexibility primed for the future. Oracle will have a two-story Pavilion, along with an Oracle Java and Embedded Solutions Center - App Planet. The exhibition times are Monday, 14th February 09.00 - 19.00; Tuesday, 15th February 09.00 - 19.00; Wednesday, 16th February 09.00 - 19.00; Thursday, 17th February 09.00 - 16.00. Have questions? Meet with Oracle Sales representatives at the Oracle Café, open every day from 9:00 to 17:00. http://eventreg.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=109912&src=6973382&src=6973382&Act=4
    Feb 14-18, 2011 - San Francisco, CA - RSA Conference. As the world's most complete, open, integrated business software and hardware systems provider, Oracle can uniquely safeguard your information throughout its entire lifecycle. Learn more by attending these sessions: Cloud Computing: A Brave New World for Security and Privacy (CLD-201), Wednesday, February 16 at 8:30 a.m.; Databases Under Attack - Securing Heterogeneous Database Infrastructures (DAS-301), Thursday, February 17 at 8:30 a.m.; Seven Steps to Protecting Databases (DAS-402), Friday, February 18 at 10:10 a.m. RSA Conference attendees will also have the opportunity to meet with Oracle Security Solution experts, see live product demos and more by visiting booth #1559. Hours: Monday, February 14, 6:00 p.m. - 8:00 p.m.; Tuesday, February 15, 11:00 a.m. - 6:00 p.m. and 4:30 p.m. - 6:00 p.m.; Wednesday, February 16, 11:00 a.m. - 6:00 p.m.; Thursday, February 17, 11:00 a.m. - 3:00 p.m. http://eventreg.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=127657&src=6967733&src=6967733&Act=12
    Feb 21-25, 2011 - Various Locations - IOUG Presents: A Day of Real World Performance with Tom Kyte, Andrew Holdsworth and Graham Wood. These Oracle experts will debate, discuss and delineate the best practices for designing hardware architectures, deploying Oracle databases, and developing applications that deliver the fastest possible performance for your business. Topics are covered in a conversational format, with all three chiming in where appropriate. Each presenter has their own screen projector to demonstrate their individual points to the participants. Customers will have the opportunity to get their specific performance/tuning questions answered and learn how to balance all the different environmental requirements for their applications to improve performance. Register today for the following dates and locations:
    • February 21 in San Diego, CA
    • February 22 in Los Angeles, CA
    • February 23 in Seattle, WA
    • February 25 in Phoenix, AZ
    http://www.ioug.org/tabid/194/Default.aspx
    Feb 8-24, 2011 - Various Locations - Oracle Enterprise Cloud Summit. This series of full-day events with cloud experts, sharing real-world best practices, reference architectures and more, continues during the month of February. Attend the Oracle Enterprise Cloud Summit to learn how to:
    • Build a state-of-the-art cloud architecture
    • Leverage your existing IT investments
    • Optimize your IT management processes
    Whether you are considering a move to cloud computing or have already adopted a cloud model, this event offers you the insights you need to take full advantage of cloud computing. Check below to see if the event is coming to a city near you. http://www.oracle.com/us/corporate/events/cloud-events-214342.html

    Read the article

  • Losing JSESSIONID when using ProxyHTMLURLMap

    - by Matthew Schmitt
    I've set up a reverse proxy between an Apache front-end and multiple Tomcat backends. The block of configuration below includes the ProxyHTMLURLMap parameter so that the HTML can be rewritten to remove the Tomcat context path. With this setup in place, after logging into my application an initial JSESSIONID is set properly, but when navigating to any other page this JSESSIONID is lost and another one is set by the application. I should mention that the initial login directs to a URL that includes the current context path (i.e. https://app.domain.com/context/home), but when navigating to another page, that context path is not present in the URL (i.e. https://app.domain.com/page2).
        <Proxy balancer://happcluster>
            BalancerMember ajp://happ01.h.s.com:8009 route=worker1 loadfactor=10 timeout=15 retry=5
            BalancerMember ajp://happ02.h.s.com:8009 route=worker2 loadfactor=10 timeout=15 retry=5
            BalancerMember ajp://happ03.h.s.com:8009 route=worker3 loadfactor=5 timeout=15 retry=5
            BalancerMember ajp://happ04.h.s.com:8009 route=worker4 loadfactor=5 timeout=15 retry=5
            BalancerMember ajp://happ05.h.s.com:8009 route=worker5 loadfactor=5 timeout=15 retry=5
            ProxySet lbmethod=bytraffic
            ProxySet stickysession=JSESSIONID
        </Proxy>
        ProxyPass /context balancer://happcluster/context
        ProxyPass / balancer://happcluster/context/
        <Location /context/>
            # Rewrite HTTP headers and HTML/CSS links for everything else
            ProxyPassReverse /
            ProxyPassReverseCookieDomain / app.domain.com
            ProxyPassReverseCookiePath / /context
            ProxyHTMLURLMap /context/ /
            # Be prepared to rewrite the HTML/CSS files as they come back from Tomcat
            SetOutputFilter INFLATE;proxy-html;DEFLATE
        </Location>
    Has anyone ever run into a similar situation?
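
    One detail worth double-checking against the mod_proxy documentation (an observation, not a confirmed diagnosis): ProxyPassReverseCookiePath takes the internal path first and the public path second, so stripping the /context prefix from the session cookie's path would be written the other way around from the snippet above:

        ProxyPassReverseCookiePath /context /

    If the cookie keeps its /context path while the browser is on https://app.domain.com/page2, it will not be sent back, which would match the "lost" JSESSIONID symptom.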

    Read the article

  • NX Client for Windows 7 Opens Remote Desktop in Multiple Windows

    - by Corey Kennedy
    What I'm trying to do: access my Ubuntu desktop remotely via NX Client on my Windows 7 laptop. My environment:
    • Server: Ubuntu 10.10 on an AMD 1 GHz / 512 MB RAM PC
    • Client: Windows 7 on a ThinkPad SL510
    • Software: the server is running NX Server 3.4.0 with the xfce4 window manager; the laptop is using NX Client for Windows
    In my NX Client "Desktop" settings I've selected "Unix" and "Custom" for OS and environment. I've also specified "startxfce4" as the application to launch when NX connects. I am able to authenticate an NX session on my laptop. By this I mean I can start the client on my laptop, enter credentials for my Linux user, and NX establishes a connection to the server and attempts to open a remote desktop window. The problem, though, is that this remote desktop is "fragmented" into many windows. One window will display the bulk of my desktop (complete with desktop icons for "Home," "File System," and "Trash"), while another window will contain the taskbar, and another the application strip. I can select each of these windows individually, but I cannot click on any objects within them. I've searched Super User, Ubuntu Forums, NX help, Server Fault, and tried many Google searches - none have turned up another case of this particular problem. I'm stumped. Does anyone have any suggestions for what I might try? I'm guessing the problem has to do with my xfce config files, but I've only just set up this server - it's been a long time since I've used Linux and there's a lot I just don't know. What I am NOT trying to do: use desktop sharing from Ubuntu, whereby I VNC into a desktop that I've already established on the server. I am trying to configure this Linux box as a headless server that I can stash someplace out-of-the-way in my house, then interact with through my laptop. I don't want a monitor or keyboard connected to the Linux box. Thanks for your help!

    Read the article

  • first time setting up ssl, running into a strange problem, tutorials haven't been too helpful

    - by pedalpete
    This is my first time trying to set up SSL for one of my sites, and I'm running it on a server that already hosts 3 other sites. I'm running Apache 2.x and the install came with an ssl.conf file, which has the following settings:
        LoadModule ssl_module modules/mod_ssl.so
        Listen 443
        AddType application/x-x509-ca-cert .crt
        AddType application/x-pkcs7-crl .crl
        <VirtualHost *:443>
            ServerAdmin [email protected]
            DocumentRoot /var/www/html/securesite
            ServerName securesite.com
            ErrorLog logs/securesite-error_log
            CustomLog logs/securesite-access_log common
            SSLEngine on
            SSLCertificateFile /etc/httpd/ssl.crt/securesite.com.crt
            SSLCertificateKeyFile /etc/httpd/ssl.key/server.key
            SSLCertificateChainFile /etc/httpd/ssl.crt/gd_bundle.crt
        </VirtualHost>
    When I run 'apachectl configtest' I don't get any errors, but running 'apachectl -k restart' I get 'httpd not running, trying to start'. I have two questions: 1) Is there an error in the way I'm defining my virtual host for 443? The rest of my entries point to <VirtualHost *:80>, and when I comment out the above entry, Apache runs fine. 2) Do I need to set up a redirect from port 80 for the secure site? Most users are going to go to http:// or www., and I need to send them to https:// - does Apache do this automatically, or do I need to create an entry with a redirect?
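
    On question 2: Apache does not redirect port 80 to 443 by itself; the usual approach is a separate *:80 virtual host that issues the redirect. A minimal sketch, reusing the ServerName from the question:

        <VirtualHost *:80>
            ServerName securesite.com
            Redirect permanent / https://securesite.com/
        </VirtualHost>

    For question 1, the startup failure should leave a reason in the error log; it is also worth confirming that nothing else is already listening on 443 (for example with "netstat -tlnp | grep ':443'") and that Listen 443 appears only once across the configuration.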

    Read the article

  • SCCM 2012 Clients no longer detecting

    - by user3685428
    Here is the scenario: I had a fully functioning SCCM 2012 site server with the DP, MP, SUP, Application Catalog, etc. roles configured and working. There is only one server on this site. Everything was great, but I was not happy with SUP, so I decided to create a separate WSUS server and configure Windows Updates through GPOs. That setup worked great as well, so I went ahead and removed the SUP role from SCCM and removed the WSUS feature from my SCCM server (they were configured on the same SCCM server). I did not notice any problems right away. A couple of days later I noticed that the OSD deployments were giving errors, and after a couple of hours of trying suggestions from Google I was able to uninstall PXE, make a few changes, and reinstall with WDS to get it working again. Again, I thought everything was fine and continued on. The last couple of days I have noticed that any newly deployed machine, or any machine installing the client, will show in the SCCM console as "No" client. The client machines show as connected to a site, but Software Center shows "IT Organization" instead of our site like the previous clients. The existing clients all seem to be functioning normally; they still receive application distributions, configuration baselines, etc. Reinstalling, uninstalling and reinstalling, and repairing does not fix the problem, and this happens on all new clients. ClientLocation.log shows it connecting to the correct MP. Nothing odd in any of the logs except for ClientMessaging.log, which continuously repeats this line:
        <![LOG[Raising event: instance of CCM_CcmHttp_Status { ClientID = "GUID:0450fde3-ab82-41bf-9c33-87a18113744b"; DateTime = "20140528214824.993000+000"; HostName = "SOUNDWAVE.domain.org"; HRESULT = "0x00000000"; ProcessID = 4092; StatusCode = 0; ThreadID = 3720; }; ]LOG]!><time="16:48:24.994+300" date="05-28-2014" component="CcmMessaging" context="" type="1" thread="3720" file="event.cpp:706">
    Thanks

    Read the article

  • Effective backup and archive strategy for database and linked files

    - by busyspin
    I am using Postgres to store a variety of application data for a webapp. Part of the application involves storing and retrieving user-uploaded files. I am storing the files in the filesystem, with some associated metadata in the database. I am trying to come up with a backup and archive strategy so that I can effectively back up and archive/restore the database and the linked files. Here are the things I want to accomplish:
    • Perform routine backups that can be used for recovery from failures and which include all DB data and the linked files. Ideally, this backup would be done while the app is running. Live backup is certainly possible with a DB, but I am not sure how to keep the linked files consistent with the database during the backup process.
    • Archive chunks of data as they become "old". These chunks must include the database data plus any linked files. It should be possible to put the archived data back into production again. It would be ideal if it were easy to determine which ranges of objects were stored in each chunk.
    Do you have any advice for how to accomplish these goals? If the files were in the database as BLOBs these tasks would be much easier, since normal database backup and restore functionality would handle this. I am not sure how to accomplish the same thing when file data is linked to database rows.
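
    A rough sketch of one common approach - the database name and upload path below are assumptions, not taken from the question: dump the database and copy the file tree under the same timestamp so the pair can be archived and restored together. It does not by itself solve the consistency question for files that change mid-backup (filesystem or LVM snapshots are the usual answer there).

        #!/bin/sh
        # Hedged sketch: "appdb" and /srv/app/uploads are placeholder names.
        STAMP=$(date +%Y%m%d-%H%M)
        pg_dump -Fc appdb > /backup/appdb-$STAMP.dump        # custom-format dump, restorable with pg_restore
        rsync -a /srv/app/uploads/ /backup/uploads-$STAMP/   # copy of the linked files
        tar czf /backup/bundle-$STAMP.tar.gz \
            /backup/appdb-$STAMP.dump /backup/uploads-$STAMP # bundle both for archiving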

    Read the article

  • What could cause the file command in Linux to report a text file as data?

    - by Jonah Bishop
    I have a couple of C++ source files (one .cpp and one .h) that are being reported as type data by the file command in Linux. When I run the file -bi command against these files, I'm given this output (same output for each file): application/octet-stream; charset=binary Each file is clearly plain-text (I can view them in vi). What's causing file to misreport the type of these files? Could it be some sort of Unicode thing? Both of these files were created in Windows-land (using Visual Studio 2005), but they're being compiled in Linux (it's a cross-platform application). Any ideas would be appreciated. Update: I don't see any null characters in either file. I found some extended characters in the .cpp file (in a comment block), removed them, but file still reports the same encoding. I've tried forcing the encoding in SlickEdit, but that didn't seem to have an effect. When I open the file in vim, I see a [converted] line as soon as I open the file. Perhaps I can get vim to force the encoding?
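
    A few hedged checks that usually narrow this down (the file name is a placeholder): file falls back to "data" when it hits bytes it cannot classify, so looking at the raw bytes and, if needed, converting the encoding tends to settle it.

        file -bi suspect.cpp                     # reported MIME type and charset
        hexdump -C suspect.cpp | head            # look for a BOM (FF FE / FE FF) or stray control bytes
        iconv -f UTF-16 -t UTF-8 suspect.cpp -o suspect-utf8.cpp   # only if the bytes really are UTF-16

    The [converted] notice in vim means it re-encoded the file on load, so ":set fileencoding?" there will also show what vim thinks the file is.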

    Read the article

  • What do I need to consider when buying hardware to meet my needs?

    - by Darth Android
    I'm looking to build a new computer from the ground up. I'm not sure what to look out for and need guidance on how to pick the hardware needed to construct my new rig. How do I know what to buy?
    • How do I find out if a given CPU will be enough for a certain game or application that I want to run?
    • How do I find out if a given graphics card will be enough for a certain game or application?
    • What is important when looking at motherboards?
    • How much memory do I need?
    • How do I know how much wattage I need for a power supply?
    • What size case do I need?
    • What relevant standards do I need to read up on and be aware of? PCI, PCIe, SATA, USB 2.0, USB 3.0, etc.
    • What "gotchas" do I need to be on the lookout for?
    Please keep responses generation-agnostic to ensure they will be helpful to our future users. While Stack Exchange does not permit shopping recommendations, that doesn't mean we can't offer general advice on what to consider when buying hardware. So, instead of just telling those who ask what to buy that such questions aren't allowed, let's tell them how to figure out what they need. This question was Super User Question of the Week #20. Read the June 20, 2011 blog entry for more details or submit your own Question of the Week.

    Read the article

  • installing lots of perl modules

    - by Colin Pickard
    Hi, I've been landed with the job of documenting how to install a very complicated application onto a clean server. Part of the application requires a lot of Perl scripts, each of which seems to require lots of different Perl modules. I don't know much about Perl, and I only know one way to install the required modules. This means my documentation now looks like this: "Type each of these commands and accept all the defaults:"
        sudo perl -MCPAN -e 'install JSON'
        sudo perl -MCPAN -e 'install Date::Simple'
        sudo perl -MCPAN -e 'install Log::Log4perl'
        sudo perl -MCPAN -e 'install Email::Simple'
        (.... continues for 2 more pages... )
    Is there any way I can do all this in one line, like I can with aptitude, i.e. "Type the following command and go get a coffee:"
        sudo aptitude install openssh-server libapache2-mod-perl2 build-essential ...
    Thank you (on behalf of the long-suffering people who will be reading my document). EDIT: The best way to do this is to use the packaged versions. For the modules which were not packaged for Ubuntu 10.10, I ended up with a little Perl script (which I found elsewhere):
        #!/usr/bin/perl -w
        use CPANPLUS;
        use strict;
        CPANPLUS::Backend->new( conf => { prereqs => 1 } )->install(
            modules => [ qw(
                Date::Simple File::Slurp LWP::Simple MIME::Base64 MIME::Parser MIME::QuotedPrint
            ) ]
        );
    This means I can put a nice one-liner in my document:
        sudo perl installmodules.pl
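
    Another hedged option, if App::cpanminus can be installed first (it may not be packaged on Ubuntu 10.10, hence the bootstrap alternative; the module list is just an excerpt from above):

        sudo apt-get install cpanminus || curl -L https://cpanmin.us | sudo perl - App::cpanminus
        sudo cpanm JSON Date::Simple Log::Log4perl Email::Simple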

    Read the article

  • Managing multiple Apache proxies simultaneously (mod_proxy_balancer)

    - by Hank
    The frontend of my web application currently consists of two Apache reverse proxies, using mod_proxy_balancer to distribute traffic over a number of backend application servers. Both frontend reverse proxies, running on separate hosts, are accessible from the internet. DNS round robin distributes traffic over both. In the future, the number of reverse proxies is likely to grow, since the web application is very bandwidth-heavy. My question is: how do I keep the state of both reverse balancers/proxies in sync? For example, for maintenance purposes I might want to reduce the load on one of the backend appservers. Currently I can do that by accessing the Balancer-Manager web form on each proxy and changing the distribution rules. But I have to do that on each proxy manually and make sure I enter the same thing. Is it possible to "link" multiple instances of mod_proxy_balancer? Or is there a tool out there that connects to a number of instances and updates all of them with the same information? Update: The tool should retrieve the runtime status and make runtime changes, just like the existing Balancer-Manager, only for a number of proxies - not just one. Modification of configuration files is not what I'm interested in (as there are plenty of tools for that).

    Read the article

  • HTTP responses curl and wget different results

    - by Fab
    To check the HTTP response headers for a set of URLs, I send the following request headers with curl:
        foreach ( $urls as $url ) {
            // Setup headers - I used the same headers from Firefox version 2.0.0.6
            $header[ ] = "Accept: text/xml,application/xml,application/xhtml+xml,";
            $header[ ] = "text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5";
            $header[ ] = "Cache-Control: max-age=0";
            $header[ ] = "Connection: keep-alive";
            $header[ ] = "Keep-Alive: 300";
            $header[ ] = "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7";
            $header[ ] = "Accept-Language: en-us,en;q=0.5";
            $header[ ] = "Pragma: "; // browsers keep this blank.
            curl_setopt( $ch, CURLOPT_URL, $url );
            curl_setopt( $ch, CURLOPT_USERAGENT, 'Googlebot/2.1 (+http://www.google.com/bot.html)');
            curl_setopt( $ch, CURLOPT_HTTPHEADER, $header);
            curl_setopt( $ch, CURLOPT_REFERER, 'http://www.google.com');
            curl_setopt( $ch, CURLOPT_HEADER, true );
            curl_setopt( $ch, CURLOPT_NOBODY, true );
            curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );
            curl_setopt( $ch, CURLOPT_FOLLOWLOCATION, true );
            curl_setopt( $ch, CURLOPT_HTTPAUTH, CURLAUTH_ANY );
            curl_setopt( $ch, CURLOPT_TIMEOUT, 10 ); //timeout 10 seconds
        }
    Sometimes I receive 200 OK, which is good, and other times 301, 302 or 307, which I consider good as well. But sometimes I receive a weird status such as 406, 500 or 504, which should identify an invalid URL - yet when I open it in the browser it is fine. For example, the script returns:
        http://www.awe.co.uk/  =>  HTTP/1.1 406 Not Acceptable
    while wget returns:
        wget http://www.awe.co.uk/
        --2011-06-23 15:26:26--  http://www.awe.co.uk/
        Resolving www.awe.co.uk... 77.73.123.140
        Connecting to www.awe.co.uk|77.73.123.140|:80... connected.
        HTTP request sent, awaiting response... 200 OK
    Does anyone know which request header I am missing or sending in excess?
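
    One thing that stands out (worth verifying rather than a confirmed diagnosis): the Accept header is split across two $header[] entries, so the second element goes out as a header line with no name, and some servers answer malformed requests with 406. Reproducing the request from the shell with a single, intact Accept header makes for a quick comparison with wget:

        curl -sI http://www.awe.co.uk/ \
            -A 'Googlebot/2.1 (+http://www.google.com/bot.html)' \
            -e 'http://www.google.com' \
            -H 'Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5' \
            -H 'Accept-Language: en-us,en;q=0.5'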

    Read the article

  • Restore a single user's Exchange 2003 mailbox from backup

    - by Campo
    I take full weekly backups of Exchange, and I also take complete weekly backups of the entire server. It is a Server 2003 R2 box with AD and Exchange 2003 all on one machine. One user's inbox has disappeared. She now has 19,000+ junk items; it is possible the inbox got mixed into the junk, but regardless, it is such a huge mess that she is not going to go through all of that. I want to restore her mailbox from the backup. I followed this MS KB, http://support.microsoft.com/kb/823176, and had to use Method 3. I have a VM of Server 2003 R2 with Exchange, but I am having failures on the restore from NTBackup. The backup log just states to check the application log, and the application log points back to the backup log; the only information is that the restore failed. The only thing different is the computer name. The only error I can find is in the application log: "Information Store Database not found". All the others just say that the backup failed. Any assistance is greatly appreciated.

    Read the article

  • Apache + PHP via FastCGI

    - by Wilco
    I'm running into some problems while attempting to run PHP via FastCGI in Apache. I have the FastCGI module loaded, but get the following error when attempting to load a page: "The requested URL /fastcgi/php54.fcgi/index.php was not found on this server." Somewhere, it seems that the script to be executed is appended to the executable without any spaces. Is this where the problem likely is? Below I've included snippets from my Apache configuration (hopefully this is enough):
        LoadModule fastcgi_module libexec/apache2/mod_fastcgi.so
        FastCgiIpcDir /var/run/fastcgi
        AddHandler fastcgi-script .fcgi
        FastCgiConfig -autoUpdate -singleThreshold 100 -killInterval 300
        AddType application/x-httpd-php .php
        ScriptAlias /fastcgi/ /Library/WebServer/FCGI-Executables/
        <Directory "/Library/WebServer/FCGI-Executables">
            Options +ExecCGI
            SetHandler fastcgi-script
            Order allow,deny
            Allow from all
        </Directory>
        <VirtualHost *:80>
            ServerName www.somedomain.com
            ServerAdmin [email protected]
            DocumentRoot "/Web/www.somedomain.com"
            DirectoryIndex index.html index.php default.html
            CustomLog /var/log/apache2/access_log combinedvhost
            ErrorLog /var/log/apache2/error_log
            Action application/x-httpd-php /fastcgi/php54.fcgi
            <IfModule mod_ssl.c>
                SSLEngine Off
                SSLCipherSuite "ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM"
                SSLProtocol -ALL +SSLv3 +TLSv1
                SSLProxyEngine On
                SSLProxyProtocol -ALL +SSLv3 +TLSv1
            </IfModule>
            <Directory "/Web/www.somedomain.com">
                Options All -Indexes +ExecCGI +Includes +MultiViews
                AllowOverride All
                <IfModule mod_dav.c>
                    DAV Off
                </IfModule>
                <IfDefine !WEBSERVICE_ON>
                    Deny from all
                    ErrorDocument 403 /customerror/websitesoff403.html
                </IfDefine>
            </Directory>
        </VirtualHost>
    ... and this is the executable:
        #!/bin/sh
        PHP_FCGI_CHILDREN=1
        PHP_FCGI_MAX_REQUESTS=5000
        export PHP_FCGI_CHILDREN
        export PHP_FCGI_MAX_REQUESTS
        exec /opt/local/bin/php-cgi54
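
    A couple of hedged sanity checks before digging deeper - a 404 on /fastcgi/php54.fcgi/index.php often just means the wrapper path or its permissions are off. The paths come from the question; _www is assumed to be the user Apache runs as on OS X:

        ls -l /Library/WebServer/FCGI-Executables/php54.fcgi       # does the wrapper exist at this exact path?
        sudo chmod 755 /Library/WebServer/FCGI-Executables/php54.fcgi
        sudo mkdir -p /var/run/fastcgi && sudo chown _www /var/run/fastcgi   # FastCgiIpcDir must exist and be writable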

    Read the article

  • IIS 7 + Tomcat 7 - how to reach http://localhost:8080/my_app under e.g. http://my_app.local

    - by Sk8erPeter
    In brief: IIS 7 + Apache Tomcat 7 + isapi_redirect.dll: I have a deployed and working Tomcat application available under http://localhost:8080/my_app. I would like to see the same content under http://my_app.local (and NOT the default Tomcat site [which you can see below]). How can I do that? Explained at greater length: I have IIS 7 (7.5.7600.16385) and Apache Tomcat/7.0.22 installed. I deployed an application (let's call it "my_app") in Tomcat, which can now be reached at http://localhost:8080/my_app and works fine. I added a new web site in the IIS panel with the path of the Tomcat-deployed my_app, which looks like this: "c:\Program Files\Apache Software Foundation\Tomcat 7.0\webapps\my_app". I bound the host name my_app.local. After that, I configured isapi_redirect.dll like this (or that). Now, when I open http://my_app.local, I can see the default Tomcat site (see below), BUT under http://my_app.local I would like to see the same content as under http://localhost:8080/my_app. How can I do that? Thank you very much in advance! My config files: isapi_redirect.properties (I made a dir junction to c:\tomcat, so this also works), workers.properties, uriworkermap.properties, rewrites.properties (empty).

    Read the article

  • How can I configure apache to cache the images that it is serving? Right now it is giving headers t

    - by Tchalvak
    I'm serving up images that don't seem to cache. There's a LAPP stack (PostgreSQL instead of MySQL) running over on http://ninjawars.net. I just recently noticed that images don't seem to be caching with any kind of good frequency as I was reloading a page with a few images on it here: http://www.ninjawars.net/attack_player.php. Here is an example image (they're probably all being served exactly the same): http://www.ninjawars.net/images/characters/fighter.png. Checking the header, it seems that the caching is set to Cache-Control: max-age=0. The full header for this image (like all the others) is:
        Request URL: http://www.ninjawars.net/images/characters/fighter.png
        Request Method: GET
        Status Code: 200 OK
        Request Headers
            Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,/;q=0.5
            Cache-Control: max-age=0
            Referer: http://www.ninjawars.net/images/characters/fighter.png
            User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/533.4 (KHTML, like Gecko) Chrome/5.0.375.3 Safari/533.4
        Response Headers
            Accept-Ranges: bytes
            Content-Length: 938
            Content-Type: image/png
            Date: Thu, 13 May 2010 21:24:07 GMT
            ETag: "ffd4d-3aa-4837efc120540"
            Last-Modified: Mon, 05 Apr 2010 15:28:45 GMT
            Server: Apache
    So what modules or config or htaccess or whatever do I change to have it cache images, e.g. for 24 hours?
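
    For reference, the Cache-Control: max-age=0 shown above is in the request headers (a browser reload sends it), while the response carries no caching headers at all. A minimal mod_expires sketch that would add them for images - assuming mod_expires is available and enabled, with the 24 hours taken from the question:

        <IfModule mod_expires.c>
            ExpiresActive On
            ExpiresByType image/png "access plus 24 hours"
            ExpiresByType image/jpeg "access plus 24 hours"
            ExpiresByType image/gif "access plus 24 hours"
        </IfModule>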

    Read the article

  • Serving and caching content from Amazon S3 with Tomcat

    - by Rob
    Hi all, we're looking to serve a range of content using Amazon S3 as the store for the content and Tomcat to host the web application. The content is divided into free and paid-for content. We intend to authenticate the users when they access the web application running in Tomcat; based on their authentication we can tell whether the user has access to paid-for content or simply the free stuff. So I envision the flow of a request being something like this:
    • Authenticated request to Tomcat
    • If the user is a "paid" user, display links to premium content
    • Direct requests for paid content back through Tomcat to prevent direct access to it by non-paying users
    • Tomcat makes the request to S3 through a web cache to keep our costs down
    • Content is returned to the user
    As we have to pay for each request to S3, I'd ideally like to cache content locally to the Tomcat instance after it has been requested for the first time, to keep costs to a minimum and to speed things up. I would also like to be able to invalidate this cache if we publish fresh content to S3. So to confirm, my proposal is: Client Request -> Tomcat -> Web Cache -> S3. To invalidate the cache, I was thinking of using something like PubSubHubbub, with the cache waiting for updates to a feed for content that it should invalidate. I'd appreciate some general feedback on this approach, as I have no real experience of caching and I'm sure I've made some invalid assumptions. I'd also appreciate any recommendations for caching technologies. Thanks.

    Read the article

  • What does this error mean in my IIS7 Failed Request Tracing report?

    - by Pure.Krome
    Hi folks, when I attempt to go to any page in my web application (I'm migrating the code from an ASP.NET web site to a web application, and now testing it) I keep getting "not authenticated" errors. So I've turned on FREB (Failed Request Tracing), and this is what it says... I'm not sure what that means. Secondly, I've also made sure that my site (or at least the default document, which has been set up to be default.aspx) has anonymous authentication on and the rest off. Proof:
        C:\Windows\System32\inetsrv>appcmd list config "My Web App/default.aspx" -section:anonymousAuthentication
        <system.webServer>
          <security>
            <authentication>
              <anonymousAuthentication enabled="true" userName="IUSR" />
            </authentication>
          </security>
        </system.webServer>
        C:\Windows\System32\inetsrv>appcmd list config "My Web App" -section:anonymousAuthentication
        <system.webServer>
          <security>
            <authentication>
              <anonymousAuthentication enabled="true" userName="IUSR" />
            </authentication>
          </security>
        </system.webServer>
    Can someone please help?

    Read the article

  • Redirect local, not internal, requests using SuSEfirewall2 or an iptables rule

    - by James
    I have a server that is running a web application deployed on Tomcat and is sitting in a test network. We're running SuSE 11 SP1 and have some redirection rules for incoming requests. For example, we don't bind port 80 in Tomcat's server.xml file; instead we listen on port 9600 and have a configuration line in SuSEfirewall2 to redirect port 80 to 9640. This is because Tomcat doesn't run as root and can't open up port 80. My web application needs to be able to make requests to port 80, since that is the port it will be using when deployed. What rule can I add so that local requests get redirected by iptables? I tried looking at this question: "How do I redirect one port to another on a local computer using iptables?", but the suggestions there didn't seem to help me. I tried running tcpdump on eth0 and then connecting to my local IP address (not 127.0.0.1, but the actual address), but I didn't see any activity. I did see activity if I connected from an external machine. Then I ran tcpdump on lo, tried to connect again, and this time I saw activity. So this leads me to believe that any requests made to my own IP address locally aren't getting handled by iptables. Just for reference, here's what my NAT table looks like now:
        Chain PREROUTING (policy ACCEPT)
        target     prot opt source      destination
        REDIRECT   tcp  --  anywhere    anywhere     tcp dpt:http redir ports 9640
        REDIRECT   tcp  --  anywhere    anywhere     tcp dpt:xfer redir ports 9640
        REDIRECT   tcp  --  anywhere    anywhere     tcp dpt:https redir ports 8443
        Chain POSTROUTING (policy ACCEPT)
        target     prot opt source      destination
        Chain OUTPUT (policy ACCEPT)
        target     prot opt source      destination
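
    Locally generated traffic never passes through PREROUTING; it goes through the nat table's OUTPUT chain instead, which matches what the tcpdump comparison shows. A parallel rule there is the usual approach - a sketch, with 192.0.2.10 standing in for the server's real address:

        iptables -t nat -A OUTPUT -p tcp -d 192.0.2.10 --dport 80 -j REDIRECT --to-ports 9640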

    Read the article

  • Four disks - RAID 10 or two mirrored pairs?

    - by ewwhite
    I have this discussion with developers quite often. The context is an application running in Linux that has a medium amount of disk I/O. The servers are HP ProLiant DL3x0 G6 with four disks of equal size @ 15k rpm, backed with a P410 controller and 512MB of battery or flash-based cache. There are two schools of thought here, and I wanted some feedback... 1). I'm of the mind that it makes sense to create an array containing all four disks set up in a RAID 10 (1+0) and partition as necessary. This gives the greatest headroom for growth, has the benefit of leveraging the higher spindle count and better fault-tolerance without degradation. 2). The developers think that it's better to have multiple RAID 1 pairs. One for the OS and one for the application data, citing that the spindle separation would reduce resource contention. However, this limits throughput by halving the number of drives and in this case, the OS doesn't really do much other than regular system logging. Additionally, the fact that we have the battery RAID cache and substantial RAM seems to negate the impact of disk latency... What are your thoughts?

    Read the article

  • Guest blog: A Closer Look at Oracle Price Analytics by Will Hutchinson

    - by Takin Babaei
    Overview:  Price Analytics helps companies understand how much of each sale goes into discounts, special terms, and allowances. This visibility lets sales management see the panoply of discounts and start seeing whether each discount drives desired behavior. In Price Analytics monitors parts of the quote-to-order process, tracking quotes, including the whole price waterfall and seeing which result in orders. The “price waterfall” shows all discounts between list price and “pocket price”. Pocket price is the final price the vendor puts in its pocket after all discounts are taken. The value proposition: Based on benchmarks from leading consultancies and companies I have talked to, where they have studied the effects of discounting and started enforcing what many of them call “discount discipline”, they find they can increase the pocket price by 0.8-3%. Yes, in today’s zero or negative inflation environment, one can, through better monitoring of discounts, collect what amounts to a price rise of a few percent. We are not talking about selling more product, merely about collecting a higher pocket price without decreasing quantities sold. Higher prices fall straight to the bottom line. The best reference I have ever found for understanding this phenomenon comes from an article from the September-October 1992 issue of Harvard Business Review called “Managing Price, Gaining Profit” by Michael Marn and Robert Rosiello of McKinsey & Co. They describe the outsized impact price management has on bottom line performance compared to selling more product or cutting variable or fixed costs. Price Analytics manages what Marn and Rosiello call “transaction pricing”, namely the prices of a given transaction, as opposed to what is on the price list or pricing according to the value received. They make the point that if the vendor does not manage the price waterfall, customers will, to the vendor’s detriment. It also discusses its findings that in companies it studied, there was no correlation between discount levels and any indication of customer value. I urge you to read this article. What Price Analytics does: Price analytics looks at quotes the company issues and tracks them until either the quote is accepted or rejected or it expires. There are prebuilt adapters for EBS and Siebel as well as a universal adapter. The target audience includes pricing analysts, product managers, sales managers, and VP’s of sales, marketing, finance, and sales operations. It tracks how effective discounts have been, the win rate on quotes, how well pricing policies have been followed, customer and product profitability, and customer performance against commitments. It has the concept of price waterfall, the deal lifecycle, and price segmentation built into the product. These help product and sales managers understand their pricing and its effectiveness on driving revenue and profit. They also help understand how terms are adhered to during negotiations. They also help people understand what segments exist and how well they are adhered to. To help your company increase its profits and revenues, I urge you to look at this product. If you have questions, please contact me. Will HutchinsonMaster Principal Sales Consultant – Analytics, Oracle Corp. Will Hutchinson has worked in the business intelligence and data warehousing for over 25 years. He started building data warehouses in 1986 at Metaphor, advancing to running Metaphor UK’s sales consulting area. He also worked in A.T. 
Kearney’s business intelligence practice for over four years, running projects and providing training to new consultants in the IT practice. He also worked at Informatica and then Siebel, before coming to Oracle with the Siebel acquisition. He became Master Principal Sales Consultant in 2009. He has worked on developing ROI and TCO models for business intelligence for over ten years. Mr. Hutchinson has a BS degree in Chemical Engineering from Princeton University and an MBA in Finance from the University of Chicago.

    Read the article

  • JSP Content Issue in Tomcat

    - by gautam vegeta
    Where I work there is one application that still uses manual builds, i.e. manually moving the servlet classes and JSP files from Dev to QA and finally to Prod. This is the method used for this application and it can't be changed, for some weird reasons - but that is not the problem. We recently did a manual build where we transferred JSP files from QA to Prod, and we noticed that the JSP content did not correspond to the updated JSPs but was the same as the JSP files that were present on the server prior to the deployment. We did not restart Tomcat, since JSP files are normally recompiled automatically when their content changes. The problem persisted even six hours after deployment, even allowing for time-zone differences that might cause some delay. To fix it we had to go into every JSP file individually, type something, save it, delete that change and save it again - then it worked perfectly. But the JSP file content before and after was never actually changed; we did this only to change the modification date. If we think of it as a timestamp problem, how can this be possible, given that the old JSP files that were on the server prior to deployment were at least a month old and the ones being deployed were definitely newer than that? Why did this happen? This did not happen when we did the same type of deployments earlier. How can we prevent this from happening in the future?
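
    If this is Jasper's compiled-JSP cache going stale - which would fit, since Tomcat only recompiles a JSP whose timestamp is newer than its generated class - a hedged workaround is to clear the work directory or refresh the timestamps as part of the manual deploy. CATALINA_HOME and the context name below are placeholders:

        # force Jasper to regenerate the compiled JSPs on the next request
        rm -rf $CATALINA_HOME/work/Catalina/localhost/myapp
        # or: make sure the copied files carry a fresh modification time
        find $CATALINA_HOME/webapps/myapp -name '*.jsp' -exec touch {} +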

    Read the article

  • Windows 7 taskbar groups to use vertical menus only, not thumbnails?

    - by Justin Grant
    I frequently have 10 or more windows of the same application (e.g. Outlook, Word, IE, etc.) open at one time. Windows 7's new taskbar grouping thumbnail feature shows a preview of open windows (aka thumbnails) when you single-click on the taskbar group for that application. But when I have over 10 windows open (more thumbnails than will fit horizontally on my screen), Windows reverts to a vertical menu. This is disconcerting since I need to train myself to deal with two separate behaviors and can never anticipate what I'm going to see when I click on a taskbar group. Furthermore, I find the thumbnails more difficult to visually scan through (vs. the vertical menu) because only 1-2 words of each window's title are shown. Typically that's not enough text to disambiguate and help me find the right window. I'd like to force Windows 7's taskbar grouping to always show a vertical menu (like in XP) instead of sometimes showing thumbnail previews and sometimes showing a vertical menu. Anyone know how (whether?) this can be done? UPDATE: BTW, I'm running the RTM version (build 7600) of Windows 7. There are apparently other solutions out there which work on earlier builds, but which don't work on the RTM build.

    Read the article

  • What applications do people use for windows folder management? How do you switch between folders in

    - by 118184489189799176898
    I'm always switching between clients' folders in different applications like Photoshop, SQL Manager, Explorer, etc. It's slow to go between them and navigate to the folder, and it's still too slow to copy and paste the directory path, etc. It's annoying to do, and someone must have a good solution. I was thinking there might be a "recently accessed" folders list available within every folder/explorer window... so in any application, if I go "File > Open" it would have something, somewhere, that lists the recently accessed folders - that would be really helpful. I am aware of the Recent Places folder in Windows 7, but it's not great because it is not sorted by date accessed. Perhaps if there were a way to change this, it would become a decent feature? Is there some application that already does this? I'm sure someone has already solved this issue in a more elegant way than I can think of. I'm keen to know what programs people use or how people address this issue. Thanks...

    Read the article
