Search Results

Search found 20887 results on 836 pages for 'instance fields'.

Page 136/836

  • How to handle URLs with diacritic characters

    - by user359650
    I am wondering how to handle URLs that correspond to strings containing diacritics (á, ü, ö...). What we mostly see are URLs where diacritic characters were converted to their closest ASCII equivalent, for instance Rånades på Skyttis i Ö-vik converted to ranades-pa-skyttis-i-o-vik. However, depending on the language, such a conversion might be incorrect. For instance in German, ü should be converted to ue and not just u, as seen in the URL below, which represents the string Bayern München as bayern-muenchen: http://www.bundesliga.de/en/liga/clubs/fc-bayern-muenchen/index.php I have also noticed that browsers can render non-ASCII characters when they are percent-encoded in the URL, which is the approach Wikipedia has chosen, for instance http://de.wikipedia.org/wiki/FC_Bayern_M%C3%BCnchen, which is rendered with the umlaut intact. Therefore I'm considering the following approach for creating URL slugs:
    (1) convert strings while replacing non-ASCII characters with their recommended ASCII representation: Bayern München - bayern-muenchen
    (2) also convert strings to percent encoding: Bayern München - bayern_m%C3%BCnchen
    (3) create a 301 redirect from version (1) to version (2)
    Version (1) URLs could be used for marketing purposes (e.g. mywebsite.com/bayern-muenchen), but the URLs that would end up being displayed in the browser bar would be version (2) URLs (e.g. mywebsite.com/bayern-münchen). Can you foresee particular problems with this approach? (Wikipedia is not doing it and I wonder why, apart from the fact that they don't need to market their URLs.)
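
    For illustration, here is a minimal Java sketch of the two slug variants described above. The small transliteration table (ü to ue, etc.) and the method names are assumptions made for this example; a real slug generator would need a per-language mapping, which is exactly the difficulty the question raises.

        import java.net.URLEncoder;
        import java.nio.charset.StandardCharsets;
        import java.text.Normalizer;
        import java.util.Locale;

        public class Slugs {
            // Assumed transliteration for a few German characters; other languages need their own rules.
            private static String transliterate(String s) {
                return s.replace("ü", "ue").replace("ö", "oe").replace("ä", "ae").replace("ß", "ss");
            }

            // Variant (1): ASCII slug, e.g. "Bayern München" -> "bayern-muenchen"
            static String asciiSlug(String s) {
                String t = transliterate(s.toLowerCase(Locale.ROOT));
                t = Normalizer.normalize(t, Normalizer.Form.NFD).replaceAll("\\p{M}", ""); // strip remaining accents
                return t.replaceAll("[^a-z0-9]+", "-").replaceAll("(^-|-$)", "");
            }

            // Variant (2): percent-encoded slug, e.g. "Bayern München" -> "bayern-m%C3%BCnchen" (Java 10+ for the Charset overload)
            static String encodedSlug(String s) {
                String t = s.toLowerCase(Locale.ROOT).replaceAll("\\s+", "-");
                return URLEncoder.encode(t, StandardCharsets.UTF_8);
            }

            public static void main(String[] args) {
                System.out.println(asciiSlug("Bayern München"));   // bayern-muenchen
                System.out.println(encodedSlug("Bayern München")); // bayern-m%C3%BCnchen
            }
        }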

    Read the article

  • SQL SERVER – Fix Visual Studio Error : Connections to SQL Server files (.mdf) require SQL Server Express 2005 to function properly. Please verify the installation of the component or download from the URL

    - by pinaldave
    In one of my virtual environments, while I was trying to add a SQL Server database (.mdf) file to an ASP.NET project, I encountered the following error: Connections to SQL Server files (.mdf) require SQL Server Express 2005 to function properly. Please verify the installation of the component or download from the URL. I have been using SQL Server 2012 for a long time, so this error was a bit interesting to me, since there should not be any need for a SQL Server 2005 installation. I quickly figured out that I can remove this error by doing the following:
    - Open Microsoft Visual Studio
    - Select Tools >> Options >> Database Tools >> Data Connections
    - Enter the name of an installed instance in the “SQL Server Instance Name” field
    - Click OK
    If you do not know the instance name, you can follow either of these options: 1) use the command line sqlcmd utility, or 2) use SQL Server Management Studio. Is there any other way to resolve this error? Reference: Pinal Dave (http://blog.SQLAuthority.com)
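
    For reference, the sqlcmd route mentioned in option 1 above can be as simple as listing the visible instances from a command prompt:

        sqlcmd -L

    This prints the SQL Server instances that are locally configured or broadcasting on the network; a local named instance typically shows up as MACHINENAME\INSTANCENAME, which is the value to paste into the “SQL Server Instance Name” field.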

    Read the article

  • AWS RDS Timeout

    - by warder57
    I know next to nothing about networking/servers, so I'm assuming I'm missing something obvious. All of the resources I can find on this either don't work or are outdated. I created a brand new AWS account on the free plan. I created a Postgres RDS DB instance. I made sure that this RDS instance is set to publicly accessible. This RDS instance has the default VPC/security group settings. In order to connect to this DB from my local machine, I used pgAdmin3 and followed the instructions provided on the AWS documentation page, seen here: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ConnectToPostgreSQLInstance.html I've double checked all of the information required to connect:
    Host: whatever.whatever.us-west-2.rds.amazonaws.com
    Port: 5432
    Username: USERNAME
    Password: PASSWORD
    When I try to connect to the database, my connection fails due to a timeout (during step 4 in the above guide). Can anyone point me to whatever I am missing? Thanks in advance
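
    As a sanity check from code, the same connection parameters can be exercised with a short JDBC program. This is a hypothetical sketch (Java, placeholder endpoint and credentials, and it assumes the PostgreSQL JDBC driver is on the classpath); a timeout here, as opposed to "connection refused", usually points at the security group's inbound rules rather than at the credentials.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.util.Properties;

        public class RdsCheck {
            public static void main(String[] args) throws Exception {
                // Placeholder endpoint and database name -- substitute the values from the RDS console.
                String url = "jdbc:postgresql://whatever.whatever.us-west-2.rds.amazonaws.com:5432/postgres";
                Properties props = new Properties();
                props.setProperty("user", "USERNAME");
                props.setProperty("password", "PASSWORD");
                props.setProperty("connectTimeout", "10"); // seconds; fail fast instead of hanging
                try (Connection conn = DriverManager.getConnection(url, props)) {
                    System.out.println("Connected: " + conn.getMetaData().getDatabaseProductVersion());
                }
            }
        }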

    Read the article

  • Should the syntax for disabling code differ from that of normal comments?

    - by deltreme
    For several reasons during development I sometimes comment out code. As I am chaotic and sometimes in a hurry, some of these make it to source control. I also use comments to clarify blocks of code. For instance:

        MyClass MyFunction()
        {
            (...)
            // return null; // TODO: dummy for now
            return obj;
        }

    Even though it "works" and a lot of people do it this way, it annoys me that you cannot automatically distinguish commented-out code from "real" comments that clarify code: it adds noise when trying to read code, and you cannot search for commented-out code, for instance in an on-commit hook in source control. Some languages support multiple single-line comment styles - for instance in PHP you can use either // or # for a single-line comment - and developers can agree on using one of these for commented-out code:

        # return null; // TODO: dummy for now
        return obj;

    Other languages - like C#, which I am using today - have one style for single-line comments (right? I wish I were wrong). I have also seen examples of "commenting out" code using compiler directives, which is great for large blocks of code, but a bit overkill for single lines as two new lines are required for the directive:

        #if compile_commented_out
        return null; // TODO: dummy for now
        #endif
        return obj;

    So as commenting out code happens in every(?) language, shouldn't "disabled code" get its own syntax in language specifications? Are the pros (separation of comments / disabled code, editors / source control acting on them) strong enough, and are the cons ("shouldn't do commenting-out anyway", not a functional part of a language, potential IDE lag (thanks Thomas)) worth accepting? Edit: I realise the example I used is silly; the dummy code could easily be removed as it is replaced by the actual code.

    Read the article

  • When to use identity comparison instead of equals?

    - by maaartinus
    I wonder why anybody would want to use identity comparison for fields in equals, like here (Java syntax):

        class C {
            private A a;

            public boolean equals(Object other) {
                // standard boring prelude
                if (other == this) return true;
                if (other == null) return false;
                if (other.getClass() != this.getClass()) return false;
                C c = (C) other;
                // the relevant part
                if (c.a != this.a) return false;
                // more tests... and then
                return true;
            }
            // getters, setters, hashCode, ...
        }

    Using == is a bit faster than equals and a bit shorter too (due to there being no need for null tests), but in what cases (if any) would you say it's really better to use == for fields inside equals?

    Read the article

  • Facebook Game database design

    - by facebook-100000781341887
    Hi, I'm currently developing a Facebook mafia-like PHP game (a lightweight version, of course). Here is a simplified database (MySQL) for the game:

    id-a <int3> <for index>
    uid <chr15> <facebook uid>
    HP <int3> <health point>
    exp <int3> <experience>
    money <int3> <money>
    list_inventory <chr5> <the inventory the user holds... something special here, explained next>
    ... and 20 other fields such as reputation, number of combats...
    *the number next to the type is the size (bytes) of the type

    About list_inventory: there are 40 inventory items in my game (actually, I have 5 lists of this kind in my database), and each user can only hold 1 of each item, so I assign 5 chars to this field and treat each bit of each char as one item (5 chars * 8 bits = 40 slots). I then do some manipulation in PHP to extract the data from these 5 bytes. OK, I was thinking about this: if the game has 100,000 users and only 10% are active, then with my method the space used is 5 bytes * 100,000 = 500 KB.

    If I use another method and create a table user_hold_inventory, inserting a record whenever a user holds an item, then for the 10,000 active users I assume they hold every item, and for the others I assume they hold none. Here are the fields of the new table:

    id-b <int3> <for index>
    id-a <int3> <id of the user table>
    inv_no <int1> <inventory item that the user holds>

    The space used is ([id] (3+3) bytes + [inv_no] 1 byte) * [active users] 10,000 * [all inventory] 40 = 2.8 MB. Method 2 seems to use more space but consumes less CPU power. Please comment on these 2 methods, or correct me if there is a better method than what I have in mind. Another question: my table contains 26 fields, but I count 5 of them that do not change frequently; should I move them to a separate table or not? So many words, thanks for reading :)
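
    As an aside, here is a small illustration of the bit-packed inventory idea from method 1. The question does this manipulation in PHP; the sketch below is Java with invented helper names, purely to show how 5 bytes map onto 40 one-bit item slots.

        public class PackedInventory {
            // 5 bytes = 40 one-bit slots; slot n corresponds to inventory item n (0..39).
            private final byte[] bits = new byte[5];

            public boolean has(int item) {
                return (bits[item / 8] & (1 << (item % 8))) != 0;
            }

            public void give(int item) {
                bits[item / 8] |= (1 << (item % 8));
            }

            public void take(int item) {
                bits[item / 8] &= ~(1 << (item % 8));
            }

            // The packed form is what would be stored in the CHAR(5) column.
            public byte[] packed() {
                return bits.clone();
            }

            public static void main(String[] args) {
                PackedInventory inv = new PackedInventory();
                inv.give(3);
                inv.give(39);
                System.out.println(inv.has(3) + " " + inv.has(4)); // true false
            }
        }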

    Read the article

  • Unable to open up port 80 on EC2 using elasticfox

    - by uswaretech
    I have launched an instance of EC2. Initially the security group I created did not have port 80 open. I SSHed in and installed Apache etc., and now want to open port 80. I am using Elasticfox, so I go to Security Group -> [My group name] -> Grant new permission, and open up port 80 (HTTP over TCP) for the network range 0.0.0.0/0. Now my assumption is that this port should be open on the instance, but the instance is not responding on the allocated IPs / public DNS entry. What should I do next?

    Read the article

  • Java: Reflection Packet Builder using getField()

    - by Matchlighter
    So I just finished writing a packet builder that dynamically loads data into a data stream which is then sent out. Each builder operates by finding fields in its class (and its superclasses) that are marked with an @data annotation. Upon finishing the builder, I remembered that getFields() does not return in "any specific order". I quite like my builder because it allows for quite simple, yet hard-typed packets. Could this implementation be a problem? What would be the best next step to keep the simplicity - do alphabetical sorting of fields?
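
    For what it's worth, a common way to make reflection-based field discovery deterministic is simply to sort the fields by name before writing them. Below is a small illustrative sketch; the @Data annotation and the writing logic are placeholders standing in for the builder described above, not its actual code.

        import java.lang.annotation.ElementType;
        import java.lang.annotation.Retention;
        import java.lang.annotation.RetentionPolicy;
        import java.lang.annotation.Target;
        import java.lang.reflect.Field;
        import java.util.Arrays;
        import java.util.Comparator;

        @Retention(RetentionPolicy.RUNTIME)
        @Target(ElementType.FIELD)
        @interface Data {}

        public class PacketBuilder {
            // Collect @Data fields from the class and its superclasses, then sort by name
            // so the wire order no longer depends on the JVM's getFields() ordering.
            public static Field[] dataFields(Class<?> type) {
                return Arrays.stream(type.getFields())          // public fields, including inherited ones
                        .filter(f -> f.isAnnotationPresent(Data.class))
                        .sorted(Comparator.comparing(Field::getName))
                        .toArray(Field[]::new);
            }

            public static void write(Object packet, java.io.DataOutputStream out) throws Exception {
                for (Field f : dataFields(packet.getClass())) {
                    out.writeUTF(f.getName());                   // placeholder: a real builder would switch on the field type
                    out.writeUTF(String.valueOf(f.get(packet)));
                }
            }
        }

    Note that with name-based ordering, renaming a field silently changes the wire order, so both ends of the protocol need to be rebuilt together; an explicit order attribute on the annotation is the usual alternative.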

    Read the article

  • How to get httpd to forward to multiple tomcats for different urls, including / ?

    - by Nick Foote
    Ok, so I've got multiple Tomcat instances set up on several AJP ports, and I also have Apache httpd listening on port 8090 (because I've got another app already using 8080 at the moment). I've successfully mapped URLs such as mydomain.com:8090/demo and mydomain.com:8090/preprod to their respective Tomcat instances using JkMount and the following vhost config:

        <VirtualHost *:8090>
            JkMount /preprod* preprod
            JkMount /demo* demo
        </VirtualHost>

    But I also want the "root" address to map to another Tomcat instance, which will become live/production, i.e. I want mydomain.com:8090/ to map to a 3rd Tomcat instance. At the moment nothing happens or changes if I just add to the above config the line:

        JkMount /* rootwar

    If I browse to mydomain.com:8090 I just get the same boring Apache httpd landing page letting me know it's running (i.e. index.html in httpd/htdocs). Is it possible to use JkMount to direct the "root" address to a Tomcat instance? I can see that a rule like /* will also match URLs like mydomain.com/preprod, but I was hoping the rules are applied in order, so that if /* appears at the end it effectively means "if it's not one of the other environments, then direct to root/production". Just to be clear, I'm trying to set up the following:

    mydomain.com:8090/preprod --> myApp running in tomcat1
    mydomain.com:8090/demo --> myApp running in tomcat2
    mydomain.com:8090 --> myApp running in tomcat3

    Read the article

  • Why do I get "Permission denied (publickey)" when trying to SSH from local Ubuntu to an Amazon EC2 server?

    - by Vorleak Chy
    I have an instance of an application running in the cloud on an Amazon EC2 instance, and I need to connect to it from my local Ubuntu. It works fine from one local Ubuntu machine and also from a laptop. However, I get the message "Permission denied (publickey)" when trying to SSH to the EC2 instance from another local Ubuntu machine, which is very strange to me. I'm thinking it is some sort of problem with the security settings on the Amazon EC2 side, which may have limited which IPs can access the instance, or the certificate may need to be regenerated. Does anyone know a solution?
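
    One basic thing worth ruling out on the machine that fails: make sure ssh is actually offering the private key that matches the instance's key pair, for example by passing it explicitly. The path, user name and hostname below are placeholders, and the login user depends on the AMI:

        ssh -i ~/.ssh/my-keypair.pem ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com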

    Read the article

  • Windows Azure Horror Story: The Phantom App that Ate my Data

    - by jasont
    Originally posted on: http://geekswithblogs.net/jasont/archive/2013/10/31/windows-azure-horror-story-the-phantom-app-that-ate-my.aspx
    I’ve posted in the past about how my company, Stackify, makes some great tools that enhance the Windows Azure experience. By just using the Azure Management Portal, you don’t get a lot of insight into what is happening inside your instances, which are just Windows VMs. This morning, I noticed something quite scary, and fittingly enough, it’s Halloween. I deleted a deployment after doing a new deployment and VIP swap. But, after a few minutes, the instance still appeared in my Stackify dashboard. I thought this odd, as we monitor these operations and remove devices that have been deleted. Digging in showed that it wasn’t a problem with Stackify. This instance was still there, and still sending data. You’ll notice though that the “Status” check shows that the instance no longer exists. And it gets scarier. I quickly checked the running services, and sure enough, the Windows Azure Guest Agent (which runs worker roles) was still running. “Surely, this can’t be” I’m thinking at this point. Knowing that this worker role is in our QA environment, it has debug logging turned on. And, using our log file tailing feature, I can see that, yes, indeed, this deleted deployment is still executing code. Pulling messages off queues. Sending alerts. Changing data. Fortunately, our tools give me the ability to manually disable the Windows service that runs this code. In the meantime, I have contacted everyone I know (and support) at Microsoft to address this issue. Needless to say, this is concerning. When you delete a deployment, it needs to actually delete. Not continue to eat your data. As soon as I learn more, I’ll be posting an update.

    Read the article

  • How can I create multiple identical AWS EC2 server instances with large amounts of persistent data?

    - by mojones
    I have a CPU-intensive data-processing application that I want to run across many (~100,000) input files. The application needs a large (~20GB) data file in order to run. What I would like to do is:
    - create an EC2 machine image that has my application and associated data files installed
    - boot up a large number (e.g. 100) of instances of this image
    - split my input files up into 100 batches and send one batch to be processed on each instance
    I am having trouble figuring out the best way to ensure that each instance has access to the large data file. The data file is too big to fit on the root filesystem of an AMI. I could use Block Storage, but a given Block Storage volume can only be attached to a single instance, so I would need 100 clones. Is there some way to create a custom image that has more space on the root filesystem so that I can include my large data file? Or is there a better way to tackle this problem?

    Read the article

  • Microsoft Access 2010: How to Modify Tables

    As you work with Microsoft Access 2010, it is highly likely that you will run into times where you need to modify the fields contained within your tables. Luckily, this is a task that is not hard to accomplish, and this tutorial will teach you how to do so. Before you begin modifying tables, you should be aware that there are basically three different ways in which you can affect or control the type of data that enters your fields, which are data types, character limits, and validation rules. We will be taking a look at them today, so let's begin, shall we? Keep in mind that for this tutor...

    Read the article

  • Simple form validation

    - by ElendilTheTall
    Hi, I have a site with a simple contact form using ASP for customers to e-mail quote requests. However, I'm getting quite a few messages through with no contact information; I think people assume that their e-mail address is coming through automatically. I'd like a simple way to make the e-mail and/or telephone number fields required, preferably so that the fields are highlighted as such if they're submitted without anything in them. I've Googled for this, but the solutions I find seem either too simple (diverting people to a separate page and requiring a 'back click') or incredibly complicated, with massive reams of code. Any suggestions?

    Read the article

  • obout Super Form - forms on steroids!

    Create great looking forms without any code. Simply specify a data source, set a few default properties and Super Form will generate all the necessary fields.
    - Exciting unique feature: linked fields - specify dependency of form fields
      http://www.obout.com/SuperForm/aspnet_linked_show.aspx
      http://www.obout.com/SuperForm/aspnet_linked_conditional.aspx
    - Built-in validation
      http://www.obout.com/SuperForm/aspnet_validation_required.aspx
    - Support for masks and filters
      http://www.obout.com/SuperForm/aspnet_mask_masks.aspx
    - ...

    Read the article

  • Object oriented wrapper around a dll

    - by Tom Davies
    So, I'm writing a C# managed wrapper around a native dll. The dll contains several hundred functions. In most cases, the first argument to each function is an opaque handle to a type internal to the dll. So, an obvious starting point for defining some classes in the wrapper would be to define classes corresponding to each of these opaque types, with each instance holding and managing the opaque handle (passed to its constructor). Things are a little awkward when dealing with callbacks from the dll. Naturally, the callback handlers in my wrapper have to be static, but the callback arguments invariably contain an opaque handle. In order to get from the static callback back to an object instance, I've created a static dictionary in each class, associating handles with class instances. In the constructor of each class, an entry is put into the dictionary, and this entry is then removed in the destructor. When I receive a callback, I can then consult the dictionary to retrieve the class instance corresponding to the opaque reference. Are there any obvious flaws in this? Something that seems to be a problem is that the existence of the static dictionary means that the garbage collector will not act on my class instances that are otherwise unreachable. As they are never garbage collected, they never get removed from the dictionary, so the dictionary grows. It seems I might have to manually dispose of my objects, which is something I would absolutely like to avoid. Can anyone suggest a good design that allows me to avoid having to do this?
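
    As a language-neutral illustration of the pattern being described (sketched here in Java rather than C#, with invented names): a static map keyed by the opaque handle lets a static callback find the owning instance, but because the map holds strong references, instances stay reachable until they are explicitly closed.

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        // Hypothetical wrapper around one opaque native handle.
        public class NativeThing implements AutoCloseable {
            private static final Map<Long, NativeThing> REGISTRY = new ConcurrentHashMap<>();

            private final long handle;

            public NativeThing(long handle) {
                this.handle = handle;
                REGISTRY.put(handle, this);          // strong reference: blocks garbage collection
            }

            // Static callback entry point: the native layer only gives us the handle back.
            static void onNativeCallback(long handle, int eventCode) {
                NativeThing self = REGISTRY.get(handle);
                if (self != null) {
                    self.handleEvent(eventCode);
                }
            }

            private void handleEvent(int eventCode) {
                // instance-level handling of the event
            }

            @Override
            public void close() {
                REGISTRY.remove(handle);             // explicit disposal is what keeps the map from growing
                // release the native handle here as well
            }
        }

    One common refinement of this pattern is to have the map hold weak references to the wrappers, so unreachable instances can still be collected and their entries evicted; the trade-off is that the native handle then has to be released from a finalizer or cleaner rather than from deterministic disposal.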

    Read the article

  • What is the use of Association, Aggregation and Composition (Encapsulation) in Classes

    - by SahilMahajanMj
    I have gone through lots of theory about what encapsulation is and the three techniques of implementing it, which are association, aggregation and composition. What I found is:

    Encapsulation: Encapsulation is the technique of making the fields in a class private and providing access to the fields via public methods. If a field is declared private, it cannot be accessed by anyone outside the class, thereby hiding the fields within the class. For this reason, encapsulation is also referred to as data hiding. Encapsulation can be described as a protective barrier that prevents the code and data from being randomly accessed by other code defined outside the class. Access to the data and code is tightly controlled by an interface. The main benefit of encapsulation is the ability to modify our implemented code without breaking the code of others who use our code. With this feature, encapsulation gives maintainability, flexibility and extensibility to our code.

    Association: Association is a relationship where all objects have their own lifecycle and there is no owner. Let's take the example of Teacher and Student. Multiple students can associate with a single teacher and a single student can associate with multiple teachers, but there is no ownership between the objects and both have their own lifecycle. Both can be created and deleted independently.

    Aggregation: Aggregation is a specialized form of association where all objects have their own lifecycle, but there is ownership and a child object cannot belong to another parent object. Let's take the example of Department and Teacher. A single teacher cannot belong to multiple departments, but if we delete the department, the teacher object will not be destroyed. We can think of it as a "has-a" relationship.

    Composition: Composition is again a specialized form of aggregation, and we can call it a "death" relationship. It is a strong type of aggregation. The child object does not have its own lifecycle, and if the parent object is deleted, all child objects will also be deleted. Let's take again the example of the relationship between House and Rooms. A house can contain multiple rooms; a room has no independent life and cannot belong to two different houses. If we delete the house, the rooms will automatically be deleted.

    The question is: these are all real-world examples. I am looking for some description of how to use these techniques in actual class code. I mean, what is the point of using three different techniques for encapsulation, how can these techniques be implemented, and how do I choose which technique is applicable at a given time?
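
    A compact Java sketch of how these three relationships typically look in code (a hypothetical illustration using the class names from the examples above): association passes in independently managed objects, aggregation holds references it does not own, and composition creates and owns its parts internally.

        import java.util.ArrayList;
        import java.util.List;

        class Student {}

        class Teacher {
            private final List<Student> students = new ArrayList<>();

            // Association: Teacher and Student reference each other but neither owns the other.
            void enroll(Student s) {
                students.add(s);
            }
        }

        class Department {
            private final List<Teacher> teachers = new ArrayList<>();

            // Aggregation: the Department holds Teachers, but a Teacher is created elsewhere
            // and survives if the Department is deleted.
            void add(Teacher t) {
                teachers.add(t);
            }
        }

        class House {
            // Composition: Rooms are created and owned by the House; they have no life of their own.
            private final List<Room> rooms = new ArrayList<>();

            House(int roomCount) {
                for (int i = 0; i < roomCount; i++) {
                    rooms.add(new Room(this));
                }
            }

            static class Room {
                private final House owner;

                private Room(House owner) {   // private constructor: only House can create Rooms
                    this.owner = owner;
                }
            }
        }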

    Read the article

  • JavaFX - the right way to use Properties with domain objects

    - by pjm56
    JavaFX has provided a bunch of new Property objects, such as javafx.beans.property.DoubleProperty which allow you to define fields which can be automatically observed and synchronised. In many JFX examples, the MVC model class has a number of these Property fields, which can then bind automatically to the view. However, this seems to be encouraging us to put JFX properties into our Domain objects (if you assume that the Model class is going to be a domain object), which strikes me as a poor separation of concerns (i.e. putting GUI code in the Domain). Has anyone seen this problem being solved in 'real life' and, if so, how was it done?
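
    For context, this is roughly what such a Property-based model class looks like (a minimal sketch with an invented Person class), i.e. exactly the style the question is worried about leaking into domain objects:

        import javafx.beans.property.DoubleProperty;
        import javafx.beans.property.SimpleDoubleProperty;
        import javafx.beans.property.SimpleStringProperty;
        import javafx.beans.property.StringProperty;

        // A model written the "JavaFX way": every field is an observable Property,
        // so the view can bind to it directly, e.g. nameLabel.textProperty().bind(person.nameProperty()).
        public class Person {
            private final StringProperty name = new SimpleStringProperty(this, "name", "");
            private final DoubleProperty salary = new SimpleDoubleProperty(this, "salary", 0.0);

            public String getName()                { return name.get(); }
            public void setName(String value)      { name.set(value); }
            public StringProperty nameProperty()   { return name; }

            public double getSalary()              { return salary.get(); }
            public void setSalary(double value)    { salary.set(value); }
            public DoubleProperty salaryProperty() { return salary; }
        }

    A compromise often seen in practice is to keep the domain objects plain and put the Property fields only in a separate presentation model that wraps or copies the domain object, so the binding machinery stays on the GUI side.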

    Read the article

  • How can I give my client "full access" to their PHP application's MySQL database?

    - by Micah Delane Bolen
    I am building a PHP application for a client and I'm seriously considering WordPress or a simple framework that will allow me to quickly build out features like forums, etc. However, the client is adamant about having "full access" to the database and the ability to "mine the data." Unfortunately, I'm almost certain they will be disappointed when they realize they won't be able to easily glean meaningful insight by looking at serialized fields in wp_usermeta, etc. One thought I had was to replicate a variation on the live database where I flatten out all of those ambiguous and/or serialized fields into something that is then parsable by a mere mortal using a tool as simple as phpMyAdmin. Unfortunately, the client is not going to settle for a simple backend dashboard where I create the custom reports for them even though I know that would be the easiest and most sane approach.

    Read the article

  • How to put 1000 lightweight server applications in the cloud

    - by Dan Bird
    The company I work for sells a commercial desktop/server app that runs on any non dedicated Windows PC or server and uses Tomcat for all interactions with the application. Customers are asking that we host their instance of the application so they don't have to run it locally on their own servers. The app is lightweight and an average server, in theory, could handle 25-50 instances before users would notice a slowdown. However only 1 instance can run per Windows instance (because the application writes to a common registry branch) so we'd need something like VMWare to create 25-50 Windows instances. We know we eventually need to reprogram to make it truly cloud-worthy but what would you recommend for a server farm or whatever for this? We don't have the setup to purchase our own servers so we must use a 3rd party. We have budgeted $500 - $1000 per year per customer for this service. Thanks in advance for your suggestions, experiences and guidance.

    Read the article

  • Lucene best practice

    - by Dragos
    I am trying to understand how Lucene should be used. From what I have read, creating an IndexReader is costly, so using a SearcherManager should be the right choice. However, a SearcherManager should be produced by an NRTManager (which, by the way, should replace the IndexWriter for every add or delete operation performed). But in order to have an NRTManager, I should first have an IndexWriter, and here comes my problem. The documentation says:
    - an IndexWriter is thread-safe
    - the constructor of this class takes a Directory object, so it seems creating an instance should be costly (as in the case of an IndexReader)
    - all changes are buffered and flushed periodically (so they seem to encourage using a single instance)
    but:
    - the changes, although flushed, will only be visible after commit or close
    - after finishing making updates (add/delete), the instance should be closed
    I also found this: http://stackoverflow.com/questions/5374419/forgot-to-close-the-lucene-indexwriter-after-adding-documents-to-the-index where it is said that not closing a writer might ruin everything. So what am I really supposed to do? Is having a single IndexWriter instance a good idea (make only commits and never close it)? EDIT: What is more, if I use NRTManager, how can I make a commit? Is it even possible?
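
    To make the "single long-lived IndexWriter, commit but don't close" idea concrete, here is a rough sketch against a recent Lucene API. Treat it as a pattern rather than drop-in code: constructor signatures differ between Lucene versions, and newer releases no longer ship NRTManager at all, with SearcherManager filling that role.

        import java.nio.file.Paths;

        import org.apache.lucene.analysis.standard.StandardAnalyzer;
        import org.apache.lucene.document.Document;
        import org.apache.lucene.document.Field;
        import org.apache.lucene.document.TextField;
        import org.apache.lucene.index.IndexWriter;
        import org.apache.lucene.index.IndexWriterConfig;
        import org.apache.lucene.search.IndexSearcher;
        import org.apache.lucene.search.SearcherManager;
        import org.apache.lucene.store.Directory;
        import org.apache.lucene.store.FSDirectory;

        public class IndexHolder implements AutoCloseable {
            private final IndexWriter writer;              // single long-lived writer for the whole application
            private final SearcherManager searcherManager; // hands out (and refreshes) IndexSearchers cheaply

            public IndexHolder(String path) throws Exception {
                Directory dir = FSDirectory.open(Paths.get(path));
                writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));
                searcherManager = new SearcherManager(writer, null); // near-real-time searchers on top of the writer
            }

            public void add(String text) throws Exception {
                Document doc = new Document();
                doc.add(new TextField("body", text, Field.Store.YES));
                writer.addDocument(doc);
                writer.commit();                // make the change durable; the writer stays open
                searcherManager.maybeRefresh(); // let newly acquired searchers see the change
            }

            public IndexSearcher acquireSearcher() throws Exception { return searcherManager.acquire(); }
            public void releaseSearcher(IndexSearcher s) throws Exception { searcherManager.release(s); }

            @Override
            public void close() throws Exception {  // only at application shutdown
                searcherManager.close();
                writer.close();
            }
        }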

    Read the article

  • Login authentication vanished from MongoDB install

    - by Robert Oschler
    A few months ago I enabled password protection on my MongoDB install. Today I ran the Mongo client and forgot to use my login details. Instead of rejecting nearly everything I try to do from the shell, like it should, I had complete access to all the databases and collections. Fortunately this instance is only running a few test apps, so I quickly shutdown the MongoD instance until I figure this out. Has anybody ever seen this kind of behavior before and knows what is going on? The MongoD instance is running on a Linux VM hosted by Azure. The only thing I can think of is that perhaps Azure restored an old copy of the VM, but I received no E-mails to that effect and everything else on the server seems to be proper, including new daemon processes that I added after I enabled password protection on MongoD.

    Read the article

  • Languages with a clear distinction between subroutines that are purely functional, mutating, state-changing, etc?

    - by CPX
    Lately I've become more and more frustrated that in most modern programming languages I've worked with (C/C++, C#, F#, Ruby, Python, JS and more) there is very little, if any, language support for determining what a subroutine will actually do. Consider the following simple pseudo-code:
    var x = DoSomethingWith(y);
    How do I determine what the call to DoSomethingWith(y) will actually do? Will it mutate y, or will it return a copy of y? Does it depend on global or local state, or is it only dependent on y? Will it change the global or local state? How does closure affect the outcome of the call? In all languages I've encountered, almost none of these questions can be answered by merely looking at the signature of the subroutine, and there is almost never any compile-time or run-time support either. Usually, the only way is to put your trust in the author of the API, and hope that the documentation and/or naming conventions reveal what the subroutine will actually do. My question is this: do there exist any languages today that make symbolic distinctions between these types of scenarios and place compile-time constraints on what code you can actually write? (There is of course some support for this in most modern languages, such as different levels of scope and closure, the separation between static and instance code, lambda functions, et cetera. But too often these seem to come into conflict with each other. For instance, a lambda function will usually either be purely functional, and simply return a value based on input parameters, or mutate the input parameters in some way. But it is usually possible to access static variables from a lambda function, which in turn can give you access to instance variables, and then it all breaks apart.)

    Read the article

  • Scrolling through objects then creating a new instance of that object

    - by gopgop
    In my game, when pressing the right mouse button, you place an object on the ground. All objects have the same superclass (GameObject). I have a field called selected which will be equal to one particular GameObject at a time. When clicking the right mouse button, the code checks the instance type of selected, and that is how it determines which object to place on the ground. Code example (t is the "slot" which the object will go to):

        if (selected instanceof MapleTree) {
            t = new MapleTree(game, highLight);
        } else if (selected instanceof OakTree) {
            t = new OakTree(game, highLight);
        }

    It has to be a "new" instance of the object. Eventually my game will have hundreds of GameObjects and I don't want to have a huge if-else statement. How would I make it so that it scrolls through the possible kinds of objects and, if it's the correct type, creates a new instance of it...? When pressing E it will switch the type of selected, and that is an if-else statement as well. How would I do it for this too? Here is a code example:

        if (selected instanceof MapleTree) {
            selected = new OakTree(game);
        } else if (selected instanceof OakTree) {
            selected = new MapleTree(game);
        }
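
    One possible way to avoid the growing if-else chains (a sketch of a suggestion, not the only approach) is to keep an ordered list of factories, remember the index of the current selection, cycle through it when E is pressed, and have right-click simply invoke the current factory. The stub types below stand in for the question's classes so the sketch compiles on its own:

        import java.util.List;
        import java.util.function.BiFunction;

        // Stub types standing in for the question's classes, just so the sketch is self-contained.
        class Game {}
        class HighLight {}
        abstract class GameObject { GameObject(Game g, HighLight h) {} }
        class MapleTree extends GameObject { MapleTree(Game g, HighLight h) { super(g, h); } }
        class OakTree extends GameObject { OakTree(Game g, HighLight h) { super(g, h); } }

        public class Placer {
            // Ordered registry of placeable kinds; add new kinds here instead of adding if-else branches.
            private final List<BiFunction<Game, HighLight, GameObject>> factories =
                    List.of(MapleTree::new, OakTree::new);

            private final Game game;
            private int selectedIndex = 0;

            Placer(Game game) { this.game = game; }

            // Pressing E: cycle to the next kind of object.
            void cycleSelection() { selectedIndex = (selectedIndex + 1) % factories.size(); }

            // Right mouse button: create a fresh instance of the currently selected kind.
            GameObject place(HighLight highLight) {
                return factories.get(selectedIndex).apply(game, highLight);
            }
        }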

    Read the article

  • Obtaining the correct Client IP address when a Physical Load Balancer and a Web Server Configured With Proxy Plug-in Are Between The Client And Weblogic

    - by adejuanc
    Some load balancers, like BIG-IP, have built-in interoperability with WebLogic clusters; that is, they know that WebLogic understands a header named 'WL-Proxy-Client-IP' to identify the original client IP. The problem comes when you have a web server configured with the WebLogic proxy plug-in between the load balancer and the back-end WebLogic servers: WL-Proxy-Client-IP is not designed to go through the web server proxy plug-in. To prevent IP spoofing, the plug-in will not use a WL-Proxy-Client-IP header that came in from the previous hop (which in this case is the physical load balancer, but could be anything), so the plug-in won't pass on what the load balancer has set. Unfortunately, under this architecture the header is useless.

    To get the client IP from WebLogic, you need to configure the extended log format and create a custom field that reads the appropriate header containing the client's IP. On WLS versions prior to 10.3.3, use these instructions: you can create user-defined fields for inclusion in an HTTP access log file that uses the extended log format. To create a custom field, you identify the field in the ELF log file using the Fields directive and then create a matching Java class that generates the desired output. You can create a separate Java class for each field, or one Java class can output multiple fields. Here is a sample of the Java source for such a class:

        import weblogic.servlet.logging.CustomELFLogger;
        import weblogic.servlet.logging.FormatStringBuffer;
        import weblogic.servlet.logging.HttpAccountingInfo;

        /* This example outputs the X-Forwarded-For header into a custom field called MyOriginalClientIPField */
        public class MyOriginalClientIPField implements CustomELFLogger {
            public void logField(HttpAccountingInfo metrics, FormatStringBuffer buff) {
                buff.appendValueOrDash(metrics.getHeader("X-Forwarded-For"));
            }
        }

    In this case we are using 'X-Forwarded-For', but it could be changed to whichever header contains the data you need. Compile the class, jar it, and prepend it to the classpath. In order to compile and package the class:
    1. Navigate to <WLS_HOME>/user_projects/domains/<SOME_DOMAIN>/bin
    2. Set up the environment by executing: $ . ./setDomainEnv.sh  (this includes weblogic.jar in the classpath, in order to use the libraries under the weblogic.* package)
    3. Copy the code above into a file named MyOriginalClientIPField.java
    4. Run javac to compile the class: $ javac MyOriginalClientIPField.java
    5. Package the compiled class into a jar file by executing: $ jar cvf0 MyOriginalClientIPField.jar MyOriginalClientIPField.class
       Expected output is:
       added manifest
       adding: MyOriginalClientIPField.class(in = 711) (out= 711)(stored 0%)
    6. This produces a file called MyOriginalClientIPField.jar

    This way you will be able to get the real client IP when the request passes through a load balancer and a web server before reaching WLS. Since 10.3.3 it is possible to configure a specific header that WLS will check when getRemoteAddr is called; that can be set on the WebServer MBean. In this case, set it to the X-Forwarded-For header coming from the load balancer as well.

    Read the article
