Search Results

Search found 6317 results on 253 pages for 'mod proxy ajp'.

Page 75 of 253

  • Redirecting HTTP requests to two different WebLogic servers using the WebLogic proxy and Apache2

    - by Jhon
    Hello All, I've read previous posts like "Redirecting https requests to two different weblogic servers using the Weblogic proxy and Apache2", but I have a different situation and I don't think I'm understanding this too well. I have an Apache 2 server (server1) that will receive HTTP requests for my application. Then I have two more servers (server2 and server3) with WebLogic 9.2 running on ports 7000 (server2) and 8000 (server3). I want the users to enter appname.domain.com and be redirected between the two WebLogic servers, always keeping appname.domain.com (this hides servername:port from the URL). How can I manage to do that? Thanks in advance! Jhon.
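    A minimal sketch of what the Apache side of such a setup often looks like, assuming the WebLogic proxy plugin (mod_wl) is installed; the module path and hostnames below are placeholders:

        LoadModule weblogic_module modules/mod_wl_22.so
        <Location />
            SetHandler weblogic-handler
            WebLogicCluster server2.domain.com:7000,server3.domain.com:8000
        </Location>

    With WebLogicCluster, the plugin balances requests across the listed servers while the browser only ever sees appname.domain.com.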

    Read the article

  • ANTLR parser hanging at proxy.handshake call

    - by Peter Boughton
    I am attempting to get a basic ECMAScript parser working, and found a complete ANTLR grammar for ECMAScript 3, which appears to compile OK and produces the appropriate Lexer/Parser/Walker Java files. (Running inside the ANTLR IDE plugin for Eclipse 3.5.) However, when actually trying to use it with some simple test code (following the guide on the ANTLR wiki), it just hangs when trying to create the parser:

        CharStream MyChars = new ANTLRFileStream(FileName); // FileName is valid
        ES3Lexer MyLexer = new ES3Lexer(MyChars);
        CommonTokenStream MyTokens = new CommonTokenStream(MyLexer);
        MyTokens.setTokenSource(MyLexer);
        ES3Parser MyParser = new ES3Parser(MyTokens); // hangs here
        ES3Parser.program_return MyReturn = MyParser.program();

    I've tracked down the problem to inside the ES3Parser constructor, where it's calling the function proxy.handshake() - before this line I can successfully do System.out.println("text"), but after it I get nothing. So, how do I go about finding out why it's hanging, and stopping it - or even just bypassing this section (can/should I disable debugging?) - so long as that lets it work and allows me to get on with doing useful stuff.

    Read the article

  • Custom Logic and Proxy Classes in ADO.NET Data Services

    - by rasx
    I've just read "Injecting Custom Logic in ADO.NET Data Services" and my next question is, How do you get your [WebGet] method to show up in the client-side proxy classes? Sure, I can call this directly (RESTfully) with, say, WebClient but I thought the strong typing features in ADO.NET Data Services would "hide" this from me auto-magically. So here we have: public class MyService : DataService<MyDataSource> { // This method is called only once to initialize service-wide policies. public static void InitializeService(IDataServiceConfiguration config) { config.SetEntitySetAccessRule("Customers", EntitySetRights.AllRead); config.SetServiceOperationAccessRule("CustomersInCity", ServiceOperationRights.All); } [WebGet] public IQueryable<MyDataSource.Customers> CustomersInCity(string city) { return from c in this.CurrentDataSource.Customers where c.City == city select c; } } How can I get CustomersInCity() to show up in my client-side class defintions?

    Read the article

  • ODBC Proxy for remotely accessing legacy resources?

    - by Winston Fassett
    Our project uses AcuCorp's AcuODBC driver to access a legacy Vision database. The problem is that we only have a 32-bit driver and the installer simply won't run on our 64-bit servers. I need a way to use SSIS to pull data from that system. As far as I can tell, there are 3 options:

    1. Set up a whole new SQL Server instance with SSIS and the AcuODBC drivers on a 32-bit VM (costly)
    2. Try to hack the 32-bit driver onto our 64-bit server manually (failure prone and unsupported)
    3. Set up a 32-bit VM with some sort of "proxy" service that our 64-bit SSIS can use to pull the data.

    The first option is the least desirable. If you have any suggestions for options 2 or 3, or anything else I haven't thought of, I'd love to hear them.

    Read the article

  • Django as S3 proxy

    - by schneck
    Hi there, I extended a ModelAdmin with a custom field "Download file", which is a link to a URL in my Django project, like: http://www.myproject.com/downloads/1 There, I want to serve a file which is stored in an S3 bucket. The files in the bucket are not publicly readable, and the user may not have direct access to them. Now I want to avoid the file having to be loaded into server memory (these are multi-GB files), and avoid having temp files on the server. The ideal solution would be to let Django act as a proxy that streams S3 chunks directly to the user. I use boto, but did not find a way to stream the chunks. Any ideas? Thanks.
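    A hedged sketch of the streaming-proxy idea, assuming boto's Key object can be iterated chunk-wise (it reads from the S3 socket in buffered chunks) and that Django is handed an iterator rather than a fully loaded string; the bucket name, credentials, and the key lookup are placeholders:

        from boto.s3.connection import S3Connection
        from django.http import HttpResponse

        def download(request, file_id):
            conn = S3Connection(AWS_ACCESS_KEY, AWS_SECRET_KEY)
            bucket = conn.get_bucket('my-private-bucket')
            key = bucket.get_key(key_name_for(file_id))  # hypothetical lookup
            # Passing an iterator lets Django flush chunks as they arrive
            # instead of buffering the whole multi-GB object in memory.
            response = HttpResponse(iter(key), content_type='application/octet-stream')
            response['Content-Length'] = key.size
            return response

    An alternative that avoids proxying entirely is boto's key.generate_url(expires_in=60), which hands the user a short-lived signed S3 URL instead.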

    Read the article

  • Problem installing mod_jk on Ubuntu Karmic with Apache httpd 2.2.12 and Tomcat 6

    - by Deny Prasetyo
    I have a problem configuring mod_jk on Ubuntu. I use Apache httpd 2.2.12 and Tomcat 6. I installed Apache httpd and the mod_jk library from Synaptic and use the default configuration. Here is my mod_jk.conf:

        # Load mod_jk module
        # Update this path to match your modules location
        LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so
        # Where to find workers.properties
        # Update this path to match your conf directory location
        JkWorkersFile /etc/apache2/workers.properties
        # Where to put jk logs
        # Update this path to match your logs directory location
        JkLogFile /etc/apache2/logs/mod_jk.log
        # Set the jk log level [debug/error/info]
        JkLogLevel info
        # Select the log format
        JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"
        # JkOptions indicate to send SSL KEY SIZE
        JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
        # JkRequestLogFormat set the request format
        JkRequestLogFormat "%w %V %T"
        # Send everything for context /ws to worker ajp13
        JkMount /themark ajp13
        JkMount /themark/* ajp13
        # Send everything for context /jsp-examples to worker ajp13
        JkMount /static ajp13
        JkMount /static/* ajp13
        # Send everything for context /servlets-examples to worker ajp13
        JkMount /servlets-examples ajp13
        JkMount /servlets-examples/* ajp13

    and this is my workers.properties:

        # Define 1 real worker named ajp13
        worker.list=ajp13
        # Set properties for worker named ajp13 to use ajp13 protocol,
        # and run on port 8009
        worker.ajp13.type=ajp13
        worker.ajp13.host=localhost
        worker.ajp13.port=8009
        worker.ajp13.lbfactor=50
        worker.ajp13.cachesize=10
        worker.ajp13.cache_timeout=600
        worker.ajp13.socket_keepalive=1
        worker.ajp13.socket_timeout=300

    When I start Apache httpd and Tomcat 6, it seems that mod_jk loads successfully and Tomcat 6 recognizes the AJP connector. Here is my Tomcat 6 log:

        INFO: Starting Coyote HTTP/1.1 on http-8080
        Mar 29, 2010 11:48:34 AM org.apache.jk.common.ChannelSocket init
        INFO: JK: ajp13 listening on /0.0.0.0:8009
        Mar 29, 2010 11:48:34 AM org.apache.jk.server.JkMain start
        INFO: Jk running ID=0 time=0/16 config=null

    Here is my mod_jk.log:

        [Mon Mar 29 11:06:53 2010][6688:3499775792] [info] init_jk::mod_jk.c (2830): mod_jk/1.2.26 initialized
        [Mon Mar 29 11:16:59 2010][18277:1983043376] [info] init_jk::mod_jk.c (2830): mod_jk/1.2.26 initialized
        [Mon Mar 29 11:16:59 2010][18278:1983043376] [info] init_jk::mod_jk.c (2830): mod_jk/1.2.26 initialized

    But when I access http://localhost/themark it won't work. It seems that Apache httpd can load the mod_jk module, but requests are not being forwarded over AJP. Has anybody had the same problem? NB: I use the same config on Windows using XAMPP Lite and it works well.

    Read the article

  • Zend_Soap_Client doesn't work with proxy

    - by understack
    I'm accessing a SOAP web service like this:

        $wsdl_url = 'http://abslive3.timesgroup.com:8888/clsRSchedule.soap?wsdl';
        $client = new Zend_Soap_Client($wsdl_url, array(
            'proxy_host' => "http://virtual-browser.25u.com",
            'proxy_port' => 80));

    Since my shared server blocks port 8888, I'm using this proxy server. But Zend_Soap_Client tries to connect directly:

        Exception information:
        Message: SOAP-ERROR: Parsing WSDL: Couldn't load from 'http://abslive3.timesgroup.com:8888/clsRSchedule.soap?wsdl' : failed to load external entity "http://abslive3.timesgroup.com:8888/clsRSchedule.soap?wsdl"
        Stack trace:
        #0 /home/..../library/Zend/Soap/Client/Common.php(51): SoapClient->SoapClient('http://abslive3...', Array)
        #1 /home/..../library/Zend/Soap/Client.php(1024): Zend_Soap_Client_Common->__construct(Array, 'http://abslive3...', Array)
        #2 /home/..../library/Zend/Soap/Client.php(1180): Zend_Soap_Client->_initSoapClientObject()
        #3 /home/..../library/Zend/Soap/Client.php(1104): Zend_Soap_Client->getSoapClient()
        #4 [internal function]: Zend_Soap_Client->__call('ReturnDataSet', Array)

    What am I doing wrong?
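    One hedged workaround sketch: the proxy options affect SOAP calls, but the WSDL itself is fetched through PHP's normal stream layer, so the WSDL can be pulled through the proxy manually and cached locally first. Note that proxy_host conventionally takes a bare hostname, not a URL; the cache path below is a placeholder:

        $context = stream_context_create(array('http' => array(
            'proxy' => 'tcp://virtual-browser.25u.com:80',
            'request_fulluri' => true,
        )));
        $wsdl = file_get_contents($wsdl_url, false, $context);
        file_put_contents('/tmp/clsRSchedule.wsdl', $wsdl);

        $client = new Zend_Soap_Client('/tmp/clsRSchedule.wsdl', array(
            'proxy_host' => 'virtual-browser.25u.com',  // bare hostname, no scheme
            'proxy_port' => 80));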

    Read the article

  • ASP.NET Request.ServerVariables["SERVER_PORT_SECURE"] and proxy SSL by load balancer

    - by frankadelic
    We have some legacy ASP.NET code that detects if a request is secure, and redirects to the https version of the page if required. This code uses Request.ServerVariables["SERVER_PORT_SECURE"] to detect if SSL is needed. Our operations team has suggested doing proxy SSL at the load balancer (F5 Big-IP) instead of on the web servers (assume for the purposes of this question that this is a requirement). The consequence would be that all requests appear as HTTP to the web server. My question: how can we let the web servers know that the incoming connection was secure when it hit the load balancer? Can we continue to use Request.ServerVariables["SERVER_PORT_SECURE"]? Do you know of a load balancer config that will send headers so that no application code changes are needed?
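    A hedged sketch of the usual header-based approach, assuming the load balancer can be configured to insert an X-Forwarded-Proto (or similar) header on SSL-terminated traffic; the header name is an assumption, not something the F5 sets by default:

        // Treat the request as secure if either IIS or the balancer says so.
        bool isSecure =
            Request.ServerVariables["SERVER_PORT_SECURE"] == "1"
            || string.Equals(Request.Headers["X-Forwarded-Proto"], "https",
                             StringComparison.OrdinalIgnoreCase);

    Wrapping this check in one helper would keep the legacy redirect code unchanged apart from the call site.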

    Read the article

  • NDIS or TDI for packet redirection to a local proxy

    - by Enrico Detoma
    I need to develop a transparent filter to redirect outgoing HTTP packets to a local proxy, to do transparent content filtering. Which is the better technology for this, TDI or NDIS IM? My main constraint is to avoid conflicts with antivirus software, which also does some kind of packet redirection to inspect HTTP content (I don't know whether antivirus programs use TDI, NDIS IM, or both). Rather than writing the driver myself, actually, I'm also considering two commercial SDKs for packet filtering/modification: one uses a TDI driver while the other uses an NDIS IM driver, so that's the origin of my question (I was only aware of NDIS IM before looking at the two SDKs).

    Read the article

  • NHibernate: uninitialized proxy passed to save() and cascade

    - by jonnii
    Hi, I keep getting an NHibernate.PersistentObjectException when calling session.Save(), which is due to an uninitialized proxy being passed to Save(). If I fiddle with my cascade settings I can make it go away, but then child objects aren't being saved. The only other fix I have found is adding the following to my DefaultSaveEventListener:

        protected override bool ReassociateIfUninitializedProxy(object obj,
            global::NHibernate.Engine.ISessionImplementor source)
        {
            if (!NHibernateUtil.IsInitialized(obj))
                NHibernateUtil.Initialize(obj);
            return base.ReassociateIfUninitializedProxy(obj, source);
        }

    This is obviously not an ideal solution. Any ideas?

    Read the article

  • WCF: Proxy open and close - whaaa?

    - by MikeMalter
    I am maintaining a Windows Forms application using WCF, and we are using net.tcp internally. The lifecycle of our connections is GET/USE/CLOSE. We are having a problem with the application pool crashing with no trace. Looking at netstat, I can see when I come into the application, as we have a login service. However, even though we are creating the proxy in a using statement, the connection in netstat does not go away until I physically close the application. Is this right? Should I be doing something different on the client to force the connection to close? So if the connection stays open, does it stay open for the duration of the openTimeout setting and then get torn down? Thanks.
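    A hedged sketch of the close/abort pattern commonly recommended over a plain using block for WCF clients, since Dispose() calls Close(), which can throw on a faulted channel and mask the original error; MyServiceClient and DoWork are hypothetical names for the generated proxy:

        var proxy = new MyServiceClient();
        try
        {
            proxy.DoWork();
            proxy.Close();   // graceful shutdown of the channel
        }
        catch (CommunicationException)
        {
            proxy.Abort();   // tear the channel down immediately
            throw;
        }
        catch (TimeoutException)
        {
            proxy.Abort();
            throw;
        }

    Note that with net.tcp the transport may still pool the underlying TCP connection after Close(), so netstat can show it as established for a while.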

    Read the article

  • Do not use IE browser settings when using a proxy with Indy

    - by JD
    Hi. At one of our customer sites, we have a Delphi 2007 application that makes a number of HTTPS requests using Indy components. All requests are made using the proxy settings the client provides. For this to work, in IE we have to put the URLs in the trusted zones section. After a month, due to security settings, the trusted zones are cleared. This means we have to re-add the URLs again to make our application work. Is there a way of bypassing the IE settings, or of using a client-side HTTP stack, so we do not go through the browser to make HTTPS requests? JD

    Read the article

  • Service contracts with Message cause duplicate proxy classes

    - by jaklucky
    Hi, I have a service contract with Message, as shown below:

        [OperationContract]
        Message MyMethodWithMessage(Message myMsgParam);

    Everything works fine, and I can host my services. But when I try to create proxies through "Add Service Reference", I get duplicate proxy classes. If I take out the above OperationContract, re-run my services, and try to create the proxies again, "Add Service Reference" does not produce duplicates. I am really confused about this! Any help is greatly appreciated... Thank you, Suresh

    Read the article

  • What is the use of creating a proxy for a web service?

    - by prince23
    Hi, I have a web service in which I do an insert operation on the DB. The path is http://localhost:1838/Ajax/WebService.asmx?wsdl and the name of the web service is localhost. I have added the web service reference to the project, and on a button click event I call the web service like this:

        localhost obj = new localhost();
        obj.insert();

    Now I am able to do the insertion fine, but I wanted to create a proxy for the web service, so what is the use of doing it like that? When I run this command in my command prompt in VS:

        wsdl /out:myProxyClass.cs http://localhost:1838/Ajax/WebService.asmx?WSDL

    I get an error: unable to connect to the remote server; no connection could be made because the target machine actively refused it. Looking forward to a solution; any help would be great. Thank you.

    Read the article

  • Proxy object references in MVC code

    - by krystan honour
    Hi there, I am just figuring out best practice with MVC, now that I have a project where we have chosen to use it in anger. My question is: if I create a list view which is bound to an IEnumerable, is this bad practice? Would it be better to separate the code generated by the WCF service reference into a data structure which essentially holds the same data but abstracts further from the service, meaning that the UI is totally unaware of the service implementation beneath? Or do people just bind to the proxy object types and have done with it? My personal feeling is to create an abstraction, but this seems to violate the DRY principle.
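    A hedged sketch of the abstraction being weighed here; CustomerProxy stands in for whatever type the WCF service reference generates, and the view model is what the MVC view would bind to instead:

        // View-model owned by the UI layer, independent of the service contract.
        public class CustomerViewModel
        {
            public int Id { get; set; }
            public string Name { get; set; }

            public static CustomerViewModel FromProxy(CustomerProxy p)
            {
                return new CustomerViewModel { Id = p.Id, Name = p.Name };
            }
        }

    The mapping code is repetitive, which is where the DRY concern comes from, but it isolates the views from service-contract changes.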

    Read the article

  • Need for TCP fine-tuning on a heavily used proxy server

    - by Vijay Gharge
    Hi all, I am using Squid as an Internet proxy server on RHEL 4 updates 6 and 8, with quite a heavy load, i.e. 8k established connections during peak hours. Without depending too much on the application provider's expertise, I want to get the maximum out of Linux. With respect to that, I have the following questions:

    1. How do I find out whether there is scope for further TCP fine-tuning (without exhausting available resources), given that the benchmark values supplied by the vendor look poor? Is there any parameter value available from the OS / network stack that will show me the results?
    2. If there is scope, how should I identify and configure the OS TCP stack parameters, i.e. using sysctl or any specific parameter?
    3. After tuning, how should I clearly measure performance enhancement / degradation?
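    A hedged sketch of the kind of sysctl knobs usually examined first on a busy Linux proxy; the values are illustrative only and would need to be validated against the actual workload:

        # Recycle sockets stuck in FIN-WAIT faster
        sysctl -w net.ipv4.tcp_fin_timeout=30
        # Widen the ephemeral port range for outbound connections
        sysctl -w net.ipv4.ip_local_port_range="1024 65000"
        # Deepen the accept and SYN backlogs for connection bursts
        sysctl -w net.core.somaxconn=1024
        sysctl -w net.ipv4.tcp_max_syn_backlog=4096

    Before/after comparisons can be made with the counters in netstat -s and /proc/net/sockstat.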

    Read the article

  • The remote server returned an error: (407) Proxy Authentication Required

    - by chris
    I'm getting this error when I call a web service: "The remote server returned an error: (407) Proxy Authentication Required". I get the general idea, and I can get the code to work by adding

        myProxy.Credentials = new NetworkCredential("user", "password", "domain");

    or by using DefaultCredentials in code. My problem is that the call to the web service works in production without this. It seems like there is a non-code solution involving machine.config, but what is it? At the moment I can't get to the production boxes' machine.config file to see what that looks like. I tried updating my machine.config, but I still get the 407 error.
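    A hedged sketch of the machine.config/web.config element usually meant by the "non-code solution", assuming .NET 2.0 or later; whether this matches the production setup is exactly what would need checking:

        <configuration>
          <system.net>
            <defaultProxy useDefaultCredentials="true">
              <proxy usesystemdefault="true" />
            </defaultProxy>
          </system.net>
        </configuration>

    With useDefaultCredentials="true", outgoing web requests send the process identity's Windows credentials to the proxy, which is one way production could work without any code changes.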

    Read the article

  • Custom service application - proxy stopped

    - by Jonesie
    I've created a custom service application using samples from Tony Bierman and MS. I can see the application in Central Admin, I can create a new service app from it, the create page works, the manage page is blank, and I don't have a properties page. I haven't yet tried using the beast; I just want to get the deployment and admin stuff working first. However, after creating it, I see the service app has started but the app proxy is stopped. I don't know if this is a problem or not, but I can't find anywhere to start it. Should I worry?

    Read the article

  • Why is Hibernate returning a proxy object?

    - by predhme
    I have a service method that returns an object from the database. This method is called from numerous parts of the system. However, one particular caller is getting a return type of ObjectClass_$$_javassist_somenumber, which is throwing things off. I call the service method exactly the same way as everywhere else, so why would Hibernate return the proxy as opposed to the natural object? I know there are ways to expose the "proxied" object, but I don't feel like I should have to do that. The query is simply:

        hibernateTemplate.find("from User u where u.username = ?", username)
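    For reference, a hedged sketch of the unwrapping technique the question alludes to, using Hibernate's own proxy API:

        import org.hibernate.Hibernate;
        import org.hibernate.proxy.HibernateProxy;

        // Returns the underlying entity if the argument is a lazy proxy.
        public static Object unproxy(Object entity) {
            if (entity instanceof HibernateProxy) {
                Hibernate.initialize(entity);
                return ((HibernateProxy) entity)
                        .getHibernateLazyInitializer()
                        .getImplementation();
            }
            return entity;
        }

    The proxy typically appears when the same entity was earlier referenced lazily (e.g. via load() or a lazy association) in the same session, so the session hands back the existing proxy instead of a fresh instance.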

    Read the article

  • Entity Framework: a proxy collection for displaying a subset of data

    - by Jefim
    Imagine I have an entity called Product and a repository for it:

        public class Product
        {
            public int Id { get; set; }
            public bool IsHidden { get; set; }
        }

        public class ProductRepository
        {
            public ObservableCollection<Product> AllProducts { get; set; }
            public ObservableCollection<Product> HiddenProducts { get; set; }
        }

    AllProducts contains every single Product in the database, while HiddenProducts must only contain those whose IsHidden == true. I wrote the type as ObservableCollection<Product>, but it does not have to be that. The goal is to have the HiddenProducts collection be like a proxy to AllProducts with filtering capabilities, and for it to refresh every time the IsHidden attribute of a Product is changed. Is there a normal way to do this? Or maybe my logic is wrong and this could be done in a better way?
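    Assuming a WPF client, a hedged sketch of one common answer: bind to a filtered ICollectionView over AllProducts instead of maintaining a second collection:

        // Live filtered view over the same underlying collection.
        var view = CollectionViewSource.GetDefaultView(repo.AllProducts);
        view.Filter = item => ((Product)item).IsHidden;

        // The view must be told when IsHidden changes; if Product implements
        // INotifyPropertyChanged, a PropertyChanged handler can call:
        view.Refresh();

    The caveat is exactly the refresh-on-property-change part, which is why Product implementing INotifyPropertyChanged (an assumption here) matters.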

    Read the article

  • Root certificate authority works on Windows/Linux but not Mac OS X (malformed)

    - by AKwhat
    I have created a self-signed root certificate authority which, if I install it onto Windows, Linux, or even into the certificate store in Firefox (Windows/Linux/Mac OS X), will work perfectly with my terminating proxy. I have installed it into the system keychain and I have set the certificate to always trust. Within the Chrome browser details it says: "The certificate that Chrome received during this connection attempt is not formatted correctly, so Chrome cannot use it to protect your information. Error type: Malformed certificate". I used this code to create the certificate:

        openssl genrsa -des3 -passout pass:***** -out private/server.key 4096
        openssl req -batch -passin pass:***** -new -x509 -nodes -sha1 -days 3600 -key private/server.key -out server.crt -config ../openssl.cnf

    If the issue is NOT that it is malformed (because it works everywhere else), then what else could it be? Am I installing it incorrectly? To be clear: within the Windows/Linux OS, all browsers work perfectly. On Mac, only Firefox works, and only if it uses its internal certificate store and not the keychain. It's the keychain method of importing a certificate that causes the issue. Thus, all browsers using the keychain will not work.

    Root CA Cert:

        -----BEGIN CERTIFICATE-----
        **some base64 stuff**
        -----END CERTIFICATE-----

    Intermediate CA Cert:

        Certificate:
            Data:
                Version: 3 (0x2)
                Serial Number: 1 (0x1)
                Signature Algorithm: sha1WithRSAEncryption
                Issuer: C=*****, ST=*******, L=******, O=*******, CN=******/emailAddress=******
                Validity
                    Not Before: May 21 13:57:32 2014 GMT
                    Not After : Jun 20 13:57:32 2014 GMT
                Subject: C=*****, ST=********, O=*******, CN=*******/emailAddress=*******
                Subject Public Key Info:
                    Public Key Algorithm: rsaEncryption
                    RSA Public Key: (4096 bit)
                        Modulus (4096 bit):
                            00:e7:2d:75:38:23:02:8e:b9:8d:2f:33:4c:2a:11:
                            6d:d4:f8:29:ab:f3:fc:12:00:0f:bb:34:ec:35:ed:
                            a5:38:10:1e:f3:54:c2:69:ae:3b:22:c0:0d:00:97:
                            08:da:b9:c9:32:c0:c6:b1:8b:22:7e:53:ea:69:e2:
                            6d:0f:bd:f5:96:b2:d0:0d:b2:db:07:ba:f1:ce:53:
                            8a:5e:e0:22:ce:3e:36:ed:51:63:21:e7:45:ad:f9:
                            4d:9b:8f:7f:33:4c:ed:fc:a6:ac:16:70:f5:96:36:
                            37:c8:65:47:d1:d3:12:70:3e:8d:2f:fb:9f:94:e0:
                            c9:5f:d0:8c:30:e0:04:23:38:22:e5:d9:84:15:b8:
                            31:e7:a7:28:51:b8:7f:01:49:fb:88:e9:6c:93:0e:
                            63:eb:66:2b:b4:a0:f0:31:33:8b:b4:04:84:1f:9e:
                            d5:ed:23:cc:bf:9b:8e:be:9a:5c:03:d6:4f:1a:6f:
                            2d:8f:47:60:6c:89:c5:f0:06:df:ac:cb:26:f8:1a:
                            48:52:5e:51:a0:47:6a:30:e8:bc:88:8b:fd:bb:6b:
                            c9:03:db:c2:46:86:c0:c5:a5:45:5b:a9:a3:61:35:
                            37:e9:fc:a1:7b:ae:71:3a:5c:9c:52:84:dd:b2:86:
                            b3:2e:2e:7a:5b:e1:40:34:4a:46:f0:f8:43:26:58:
                            30:87:f9:c6:c9:bc:b4:73:8b:fc:08:13:33:cc:d0:
                            b7:8a:31:e9:38:a3:a9:cc:01:e2:d4:c2:a5:c1:55:
                            52:72:52:2b:06:a3:36:30:0c:5c:29:1a:dd:14:93:
                            2b:9d:bf:ac:c1:2d:cd:3f:89:1f:bc:ad:a4:f2:bd:
                            81:77:a9:f4:f0:b9:50:9e:fb:f5:da:ee:4e:b7:66:
                            e5:ab:d1:00:74:29:6f:01:28:32:ea:7d:3f:b3:d7:
                            97:f2:60:63:41:0f:30:6a:aa:74:f4:63:4f:26:7b:
                            71:ed:57:f1:d4:99:72:61:f4:69:ad:31:82:76:67:
                            21:e1:32:2f:e8:46:d3:28:61:b1:10:df:4c:02:e5:
                            d3:cc:22:30:a4:bb:81:10:dc:7d:49:94:b2:02:2d:
                            96:7f:e5:61:fa:6b:bd:22:21:55:97:82:18:4e:b5:
                            a0:67:2b:57:93:1c:ef:e5:d2:fb:52:79:95:13:11:
                            20:06:8c:fb:e7:0b:fd:96:08:eb:17:e6:5b:b5:a0:
                            8d:dd:22:63:99:af:ad:ce:8c:76:14:9a:31:55:d7:
                            95:ea:ff:10:6f:7c:9c:21:00:5e:be:df:b0:87:75:
                            5d:a6:87:ca:18:94:e7:6a:15:fe:27:dd:28:5e:c0:
                            ad:d2:91:d3:2d:8e:c3:c0:9f:fb:ff:c0:36:7e:e2:
                            d7:bc:41
                        Exponent: 65537 (0x10001)
                X509v3 extensions:
                    X509v3 Subject Alternative Name:
                        DNS:localhost, DNS:dropbox.com, DNS:*.dropbox.com, DNS:filedropper.com, DNS:*.filedropper.com
                    X509v3 Subject Key Identifier:
                        F3:E5:38:5B:3C:AF:1C:73:C1:4C:7D:8B:C8:A1:03:82:65:0D:FF:45
                    X509v3 Authority Key Identifier:
                        keyid:2B:37:39:7B:9F:45:14:FE:F8:BC:CA:E0:6E:B4:5F:D6:1A:2B:D7:B0
                        DirName:/C=****/ST=******/L=*******/O=*******/CN=******/emailAddress=*******
                        serial:EE:8C:A3:B4:40:90:B0:62
                    X509v3 Basic Constraints:
                        CA:TRUE
            Signature Algorithm: sha1WithRSAEncryption
                46:2a:2c:e0:66:e3:fa:c6:80:b6:81:e7:db:c3:29:ab:e7:1c:
                f0:d9:a0:b7:a9:57:8c:81:3e:30:8f:7d:ef:f7:ed:3c:5f:1e:
                a5:f6:ae:09:ab:5e:63:b4:f6:d6:b6:ac:1c:a0:ec:10:19:ce:
                dd:5a:62:06:b4:88:5a:57:26:81:8e:38:b9:0f:26:cd:d9:36:
                83:52:ec:df:f4:63:ce:a1:ba:d4:1c:ec:b6:66:ed:f0:32:0e:
                25:87:79:fa:95:ee:0f:a0:c6:2d:8f:e9:fb:11:de:cf:26:fa:
                59:fa:bd:0b:74:76:a6:5d:41:0d:cd:35:4e:ca:80:58:2a:a8:
                5d:e4:d8:cf:ef:92:8d:52:f9:f2:bf:65:50:da:a8:10:1b:5e:
                50:a7:7e:57:7b:94:7f:5c:74:2e:80:ae:1e:24:5f:0b:7b:7e:
                19:b6:b5:bd:9d:46:5a:e8:47:43:aa:51:b3:4b:3f:12:df:7f:
                ef:65:21:85:c2:f6:83:84:d0:8d:8b:d9:6d:a8:f9:11:d4:65:
                7d:8f:28:22:3c:34:bb:99:4e:14:89:45:a4:62:ed:52:b1:64:
                9a:fd:08:cd:ff:ca:9e:3b:51:81:33:e6:37:aa:cb:76:01:90:
                d1:39:6f:6a:8b:2d:f5:07:f8:f4:2a:ce:01:37:ba:4b:7f:d4:
                62:d7:d6:66:b8:78:ad:0b:23:b6:2e:b0:9a:fc:0f:8c:4c:29:
                86:a0:bc:33:71:e5:7f:aa:3e:0e:ca:02:e1:f6:88:f0:ff:a2:
                04:5a:f5:d7:fe:7d:49:0a:d2:63:9c:24:ed:02:c7:4d:63:e6:
                0c:e1:04:cd:a4:bf:a8:31:d3:10:db:b4:71:48:f7:1a:1b:d9:
                eb:a7:2e:26:00:38:bd:a8:96:b4:83:09:c9:3d:79:90:e1:61:
                2c:fc:a0:2c:6b:7d:46:a8:d7:17:7f:ae:60:79:c1:b6:5c:f9:
                3c:84:64:7b:7f:db:e9:f1:55:04:6e:b5:d3:5e:d3:e3:13:29:
                3f:0b:03:f2:d7:a8:30:02:e1:12:f4:ae:61:6f:f5:4b:e9:ed:
                1d:33:af:cd:9b:43:42:35:1a:d4:f6:b9:fb:bf:c9:8d:6c:30:
                25:33:43:49:32:43:a5:a8:d8:82:ef:b0:a6:bd:8b:fb:b6:ed:
                72:fd:9a:8f:00:3b:97:a3:35:a4:ad:26:2f:a9:7d:74:08:82:
                26:71:40:f9:9b:01:14:2e:82:fb:2f:c0:11:51:00:51:07:f9:
                e1:f6:1f:13:6e:03:ee:d7:85:c2:64:ce:54:3f:15:d4:d7:92:
                5f:87:aa:1e:b4:df:51:77:12:04:d2:a5:59:b3:26:87:79:ce:
                ee:be:60:4e:87:20:5c:7f

        -----BEGIN CERTIFICATE-----
        **some base64 stuff**
        -----END CERTIFICATE-----

    Read the article

  • How to improve Varnish performance?

    - by Darkseal
    We're experiencing a strange problem with our current Varnish configuration.

    - 4x Web Servers (IIS 6.5 on Windows 2003 Server, each installed on an Intel(R) Xeon(R) CPU E5450 @ 3.00GHz Quad Core, 4GB RAM)
    - 3x Varnish Servers (varnish-3.0.3 revision 9e6a70f on Ubuntu 12.04.2 LTS - 64 bit/precise, Kernel Linux 3.2.0-29-generic, each installed on an Intel(R) Xeon(R) CPU E5450 @ 3.00GHz Quad Core, 4GB RAM)

    The Varnish servers' performance is awfully bad in general, to the point that if we shut down one of them, the other two are unable to fulfill all the requests and start to skip beats, resulting in pending requests, timeouts, 404s, etc. What can we do to improve our Varnish performance? Considering that we're getting fewer than 5k requests per second during our peak, we should be able to serve our pages even with a single one of them without any problem. We use a standard, vanilla CFG, as shown by this varnishadm param.show output:

        acceptor_sleep_decay 0.900000 []
        acceptor_sleep_incr 0.001000 [s]
        acceptor_sleep_max 0.050000 [s]
        auto_restart on [bool]
        ban_dups on [bool]
        ban_lurker_sleep 0.010000 [s]
        between_bytes_timeout 60.000000 [s]
        cc_command "exec gcc -std=gnu99 -g -O2 -pthread -fpic -shared -Wl,-x -o %o %s"
        cli_buffer 8192 [bytes]
        cli_timeout 20 [seconds]
        clock_skew 10 [s]
        connect_timeout 0.700000 [s]
        critbit_cooloff 180.000000 [s]
        default_grace 10.000000 [seconds]
        default_keep 0.000000 [seconds]
        default_ttl 120.000000 [seconds]
        diag_bitmap 0x0 [bitmap]
        esi_syntax 0 [bitmap]
        expiry_sleep 1.000000 [seconds]
        fetch_chunksize 128 [kilobytes]
        fetch_maxchunksize 262144 [kilobytes]
        first_byte_timeout 60.000000 [s]
        group varnish (113)
        gzip_level 6 []
        gzip_memlevel 8 []
        gzip_stack_buffer 32768 [Bytes]
        gzip_tmp_space 0 []
        gzip_window 15 []
        http_gzip_support off [bool]
        http_max_hdr 64 [header lines]
        http_range_support on [bool]
        http_req_hdr_len 8192 [bytes]
        http_req_size 32768 [bytes]
        http_resp_hdr_len 8192 [bytes]
        http_resp_size 32768 [bytes]
        idle_send_timeout 60 [seconds]
        listen_address :80
        listen_depth 1024 [connections]
        log_hashstring on [bool]
        log_local_address off [bool]
        lru_interval 2 [seconds]
        max_esi_depth 5 [levels]
        max_restarts 4 [restarts]
        nuke_limit 50 [allocations]
        pcre_match_limit 10000 []
        pcre_match_limit_recursion 10000 []
        ping_interval 3 [seconds]
        pipe_timeout 60 [seconds]
        prefer_ipv6 off [bool]
        queue_max 100 [%]
        rush_exponent 3 [requests per request]
        saintmode_threshold 10 [objects]
        send_timeout 600 [seconds]
        sess_timeout 5 [seconds]
        sess_workspace 16384 [bytes]
        session_linger 50 [ms]
        session_max 100000 [sessions]
        shm_reclen 255 [bytes]
        shm_workspace 8192 [bytes]
        shortlived 10.000000 [s]
        syslog_cli_traffic on [bool]
        thread_pool_add_delay 2 [milliseconds]
        thread_pool_add_threshold 2 [requests]
        thread_pool_fail_delay 200 [milliseconds]
        thread_pool_max 2000 [threads]
        thread_pool_min 5 [threads]
        thread_pool_purge_delay 1000 [milliseconds]
        thread_pool_stack unlimited [bytes]
        thread_pool_timeout 300 [seconds]
        thread_pool_workspace 65536 [bytes]
        thread_pools 2 [pools]
        thread_stats_rate 10 [requests]
        user varnish (106)
        vcc_err_unref on [bool]
        vcl_dir /etc/varnish
        vcl_trace off [bool]
        vmod_dir /usr/lib/varnish/vmods
        waiter default (epoll, poll)

    This is our default.vcl file:

        sub vcl_recv {
            # BASIC recv COMMANDS:
            #
            # lookup -> search the item in the cache
            # pass   -> always serve a fresh item (no-caching)
            # pipe   -> like pass but ensures a direct-connection with the backend
            #           (no-cache AND no-proxy)

            # Allow the backend to serve up stale content if it is responding slow.
            # This defines when Varnish should use a stale object if it has one in the cache.
            set req.grace = 30s;

            if (client.ip == "127.0.0.1") {
                # request from NGINX - do not alter X-Forwarded-For
                set req.http.HTTPS = "on";
            } else {
                # Add an X-Forwarded-For to keep track of original request
                unset req.http.HTTPS;
                unset req.http.X-Forwarded-For;
                set req.http.X-Forwarded-For = client.ip;
            }

            set req.backend = www_director;

            # Strip all cookies to force an anonymous request when the back-end servers are down.
            if (!req.backend.healthy) {
                unset req.http.Cookie;
            }

            ## HTTP Accept-Encoding
            if (req.http.Accept-Encoding) {
                if (req.http.Accept-Encoding ~ "gzip") {
                    set req.http.Accept-Encoding = "gzip";
                } else if (req.http.Accept-Encoding ~ "deflate") {
                    set req.http.Accept-Encoding = "deflate";
                } else {
                    unset req.http.Accept-Encoding;
                }
            }

            if (req.request != "GET" && req.request != "HEAD" &&
                req.request != "PUT" && req.request != "POST" &&
                req.request != "TRACE" && req.request != "OPTIONS" &&
                req.request != "DELETE") {
                /* non-RFC2616 or CONNECT */
                return (pipe);
            }

            if (req.request != "GET" && req.request != "HEAD") {
                /* only deal with GET and HEAD by default */
                return (pass);
            }

            if (req.http.Authorization) {
                return (pass);
            }

            if (req.http.HTTPS ~ "on") {
                return (pass);
            }

            ######################################################
            # COOKIE HANDLING
            ######################################################
            # METHOD 1: do not remove cookies, but pass the page if they contain TB_NC
            if (!(req.url ~ "(?i)\.(png|gif|ipeg|jpg|ico|swf|css|js)(\?[a-z0-9]+)?$")) {
                if (req.http.Cookie && req.http.Cookie ~ "TB_NC") {
                    return (pass);
                }
            }

            return (lookup);
        }

        # Code determining what to do when serving items from the IIS Server
        sub vcl_fetch {
            unset beresp.http.Server;
            set beresp.http.Server = "Server-1";

            # Allow items to be stale if needed. This is the maximum time Varnish
            # should keep an object.
            set beresp.grace = 1h;

            if (req.url ~ "(?i)\.(png|gif|ipeg|jpg|ico|swf|css|js)(\?[a-z0-9]+)?$") {
                unset beresp.http.set-cookie;
            }

            # Default Varnish VCL logic
            if (!beresp.cacheable || beresp.ttl <= 0s || beresp.http.Set-Cookie || beresp.http.Vary == "*") {
                set beresp.ttl = 120 s;
                return (hit_for_pass);
            }

            # Not Cacheable if it has specific TB_NC no-caching cookie
            if (req.http.Cookie && req.http.Cookie ~ "TB_NC") {
                set beresp.http.X-Cacheable = "NO:Got Cookie";
                set beresp.ttl = 120 s;
                return (hit_for_pass);
            }
            # Not Cacheable if it has Cache-Control private
            else if (beresp.http.Cache-Control ~ "private") {
                set beresp.http.X-Cacheable = "NO:Cache-Control=private";
                set beresp.ttl = 120 s;
                return (hit_for_pass);
            }
            # Not Cacheable if it has Cache-Control no-cache or Pragma no-cache
            else if (beresp.http.Cache-Control ~ "no-cache" || beresp.http.Pragma ~ "no-cache") {
                set beresp.http.X-Cacheable = "NO:Cache-Control=no-cache (or pragma no-cache)";
                set beresp.ttl = 120 s;
                return (hit_for_pass);
            }
            # If we reach this point, the object is cacheable.
            # Cacheable but with not enough ttl: we need to extend the lifetime
            # of the object artificially.
            # NOTE: Varnish default TTL is set in /etc/sysconfig/varnish
            # and can be checked using the following command:
            # varnishadm param.show default_ttl
            else if (beresp.ttl < 1s) {
                set beresp.ttl = 5s;
                set beresp.grace = 5s;
                set beresp.http.X-Cacheable = "YES:FORCED";
            }
            # Cacheable and with valid TTL.
            else {
                set beresp.http.X-Cacheable = "YES";
            }

            # DEBUG INFO (Cookies)
            # set beresp.http.X-Cookie-Debug = "Request cookie: " + req.http.Cookie;

            return (deliver);
        }

        sub vcl_error {
            set obj.http.Content-Type = "text/html; charset=utf-8";
            if (obj.status == 404) {
                synthetic {" <!-- Markup for the 404 page goes here --> "};
            } else if (obj.status == 500) {
                synthetic {" <!-- Markup for the 500 page goes here --> "};
            } else if (obj.status == 503) {
                if (req.restarts < 4) {
                    return (restart);
                } else {
                    synthetic {" <!-- Markup for the 503 page goes here --> "};
                }
            } else {
                synthetic {" <!-- Markup for a generic error page goes here --> "};
            }
        }

        sub vcl_deliver {
            if (obj.hits > 0) {
                set resp.http.X-Cache = "HIT";
            } else {
                set resp.http.X-Cache = "MISS";
            }
        }

    Thanks in advance,
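    For what it's worth, a hedged sketch of the first knobs usually inspected on an overloaded Varnish 3 box; the values are illustrative, not a verified fix for this setup:

        # More worker threads ready before traffic spikes (the defaults shown
        # above are thread_pools=2, thread_pool_min=5, thread_pool_max=2000)
        varnishadm param.set thread_pools 4
        varnishadm param.set thread_pool_min 200
        varnishadm param.set thread_pool_max 4000

    varnishstat counters such as n_wrk_queued and n_wrk_drop would confirm whether worker threads are actually the bottleneck.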

    Read the article

  • Obtaining the correct client IP address when a physical load balancer and a web server configured with the proxy plug-in are between the client and WebLogic

    - by adejuanc
    Some load balancers like Big-IP have built-in interoperability with WebLogic clusters; this means they know that WebLogic understands a header named 'WL-Proxy-Client-IP' to identify the original client IP. The problem comes when you have a web server configured with the WebLogic plug-in between the load balancer and the back-end WebLogic servers: WL-Proxy-Client-IP is not designed to go through the web server proxy plug-in. The plug-in will not use a WL-Proxy-Client-IP header that came in from the previous hop (which in this case is the physical load balancer, but could be anything) in order to prevent IP spoofing, so the plug-in won't pass on what the load balancer has set for it. Unfortunately, under this architecture the header will be useless. To get the client IP from WebLogic you need to configure the extended log format and create a custom field that gets the appropriate header containing the IP of the client. On WLS versions prior to 10.3.3 use these instructions: you can create user-defined fields for inclusion in an HTTP access log file that uses the extended log format. To create a custom field you identify the field in the ELF log file using the Fields directive and then you create a matching Java class that generates the desired output. You can create a separate Java class for each field, or the Java class can output multiple fields. For a sample of the Java source for such a class, see "Java Class for Creating a Custom ELF Field":

        import weblogic.servlet.logging.CustomELFLogger;
        import weblogic.servlet.logging.FormatStringBuffer;
        import weblogic.servlet.logging.HttpAccountingInfo;

        /* This example outputs the X-Forwarded-For field into a custom
           field called MyOriginalClientIPField */
        public class MyOriginalClientIPField implements CustomELFLogger {
            public void logField(HttpAccountingInfo metrics, FormatStringBuffer buff) {
                buff.appendValueOrDash(metrics.getHeader("X-Forwarded-For"));
            }
        }

    In this case we are using 'X-Forwarded-For', but it could be changed to whatever header contains the data you need to use. Compile the class, jar it, and prepend it to the classpath. In order to compile and package the class:

    1. Navigate to <WLS_HOME>/user_projects/domains/<SOME_DOMAIN>/bin
    2. Set up an environment by executing: $ . ./setDomainEnv.sh
       This will include weblogic.jar in the classpath, in order to use any of the libraries under the weblogic.* package.
    3. Copy the content of the code above into a file named MyOriginalClientIPField.java
    4. Run javac to compile the class: $ javac MyOriginalClientIPField.java
    5. Package the compiled class into a jar file by executing: $ jar cvf0 MyOriginalClientIPField.jar MyOriginalClientIPField.class
       Expected output is:
           added manifest
           adding: MyOriginalClientIPField.class(in = 711) (out= 711)(stored 0%)
    6. This will produce a file called MyOriginalClientIPField.jar

    This way you will be able to get the real client IP when the request passes through a load balancer and a web server before reaching WLS. Since 10.3.3 it is possible to configure a specific header that WLS will check when getRemoteAddr is called. That can be set on the WebServer MBean. In this case, set it to the X-Forwarded-For header coming from the load balancer as well.

    Read the article

  • Apache2: enabling the Includes module causes svn access to quit working

    - by Matthew Talbert
    I have dav_svn installed to provide HTTP access to my svn repos. The URL is directly under root, e.g. mywebsite.com/svn/individual-repo. This setup has been working great for some time. Now, I need SSI (server-side includes) for a project, so I enabled the module with a2enmod include. Now TortoiseSVN can't access the repo; it always returns a 301 permanent redirect. Some playing with it reveals I can access it in a browser if I'm sure to include the trailing /, but it still doesn't work in TortoiseSVN. I've looked at all of the FAQs for this problem with TortoiseSVN and Apache, and none of them seem to apply to my problem. Anyone have any insight into this problem? I'm running Ubuntu 9.10 with Apache 2.2.12. The only change I've made to my configuration is to enable the includes mod. Here's my dav_svn conf:

        <Location /svn>
            DAV svn
            SVNParentPath /home/matthew/svn
            AuthType Basic
            AuthName "Subversion repository"
            AuthUserFile /etc/subversion/passwd
            Require valid-user
        </Location>

    and here's the relevant part of my virtual host conf:

        <Location /svn>
            SetHandler None
            Order allow,deny
            Allow from all
        </Location>

    Edit: OK, I've discovered that the real conflict is between the include module and basic authentication. That is, if I disable the include module, browse to the Subversion repo, and enter my user/pass for basic authentication, I can browse it just fine. It even continues to work after I re-enable the include module. However, if I browse with another browser where I'm not already authenticated, then it no longer works.

    Read the article

  • Multiple webapps in Tomcat -- what is the optimal architecture?

    - by rvdb
    I am maintaining a growing base of mainly Cocoon-2.1-based web applications [http://cocoon.apache.org/2.1/], deployed in a Tomcat servlet container [http://tomcat.apache.org/], and proxied with an Apache http server [http://httpd.apache.org/docs/2.2/]. I am conceptually struggling with the best way to deploy multiple web applications in Tomcat. Since I'm not a Java programmer and we don't have any sysadmin staff, I have to figure out myself what is the most sensible way to do this. My setup has evolved through 2 scenarios and I'm considering a third for maximal separation of the distinct webapps.

    [1] 1 Tomcat instance, 1 Cocoon instance, multiple webapps

        -tomcat
         |_ webapps
            |_ webapp1
            |_ webapp2
            |_ webapp[n]
            |_ WEB-INF (with Cocoon libs)

    This was my first approach: just drop all web applications inside a single Cocoon webapps folder inside a single Tomcat container. This seemed to run fine; I did not encounter any memory issues. However, this poses a maintainability drawback, as some Cocoon components are subject to updates, which often affect the webapp coding. Hence, updating Cocoon becomes unwieldy: since all webapps share the same pool of Cocoon components, updating one of them would require the code in all web applications to be updated simultaneously. In order to isolate the web applications, I moved to the second scenario.

    [2] 1 Tomcat instance, each webapp in its dedicated Cocoon environment

        -tomcat
         |_ webapps
            |_ webapp1
            |  |_ WEB-INF (with Cocoon libs)
            |_ webapp2
            |  |_ WEB-INF (with Cocoon libs)
            |_ webapp[n]
               |_ WEB-INF (with Cocoon libs)

    This approach separates all webapps into their own Cocoon environment, run inside a single Tomcat container. In theory, this works fine: all webapps can be updated independently. However, this soon results in PermGenSpace errors. It seemed that I could manage the problem by increasing memory allocation for Tomcat, but I realise this isn't a structural solution, and that overloading a single Tomcat in this way is prone to future memory errors. This set me thinking about the third scenario.

    [3] multiple Tomcat instances, each with a single webapp in its dedicated Cocoon environment

        -tomcat
         |_ webapps
            |_ webapp1
               |_ WEB-INF (with Cocoon libs)
        -tomcat
         |_ webapps
            |_ webapp2
               |_ WEB-INF (with Cocoon libs)
        -tomcat
         |_ webapps
            |_ webapp[n]
               |_ WEB-INF (with Cocoon libs)

    I haven't tried this approach, but am thinking of the $CATALINA_BASE variable. A single Tomcat distribution can be instantiated multiple times with different $CATALINA_BASE environments, each pointing to a Cocoon instance with its own webapp (see the sketch below). I wonder whether such an approach could avoid the structural memory-related problems of approach [2], or will the same issues apply? On the other hand, this approach would complicate management of the Apache http frontend, as it will require the AJP connectors of the different Tomcat instances to be listening on different ports. Hence, Apache's worker configuration has to be updated and reloaded whenever a new webapp (in its own Tomcat instance) is added. And there seems to be no way to reload workers.properties without restarting the entire Apache http server. Is there perhaps another / more dynamic way of 'modularizing' multiple Tomcat-served webapps, or can one of these scenarios be refined? Any thoughts, suggestions, advice much appreciated. Ron
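    A hedged sketch of how scenario [3] is usually wired up, assuming a stock Tomcat layout; the paths and port numbers are placeholders:

        # One shared Tomcat distribution, one base directory per webapp
        export CATALINA_HOME=/opt/tomcat
        export CATALINA_BASE=/srv/tomcat-bases/webapp2   # has its own conf/, logs/, webapps/
        $CATALINA_HOME/bin/startup.sh

        # Each instance's conf/server.xml needs distinct ports, e.g.
        #   <Connector port="8010" protocol="AJP/1.3" />
        # and the Apache side then needs one worker per instance in workers.properties:
        #   worker.webapp2.type=ajp13
        #   worker.webapp2.host=localhost
        #   worker.webapp2.port=8010

    Memory-wise, each instance gets its own JVM (heap and PermGen set per base, e.g. via CATALINA_OPTS), so one webapp's PermGen usage no longer affects the others.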

    Read the article
