Search Results

Search found 9254 results on 371 pages for 'approach'.

Page 202 of 371

  • Cannot set Character Encoding using sun-web.xml

    - by stck777
    I am trying to send special characters, such as Spanish characters, from my page to a JSP page as a form parameter. When I try to get the parameter I sent, it shows up as "?" (a question mark). After searching a java.net thread I learned that I should have the following entry in my sun-web.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE sun-web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Sun ONE Application Server 8.0 Servlet 2.4//EN" "http://www.sun.com/software/sunone/appserver/dtds/sun-web-app_2_4-0.dtd">
        <sun-web-app>
          <locale-charset-info default-locale="es">
            <locale-charset-map locale="es" charset="UTF-8"/>
            <parameter-encoding default-charset="UTF-8"/>
          </locale-charset-info>
        </sun-web-app>

    But this approach did not work; the character still arrives as "?".


  • How can I find out which version of emacs introduced a function?

    - by Chris R
    I want to write a .emacs that uses as much of the mainline emacs functionality as possible, falling back gracefully when run under older versions. Through trial and error I've found some functions that didn't exist in emacs 22 but do in emacs 23, on the rare occasions that I've ended up running my dotfiles under emacs 22. However, I'd like to take a more proactive approach and have subsets of my dotfiles that only take effect when the running version is at or above some threshold (for example). The function I'm focusing on right now is scroll-bar-mode, but I'd like a general solution. I have not seen a consistent source for this info; I've checked the gnu.org online docs and the function code itself, and so far found nothing. How can I determine this without keeping every version of emacs I want to support kicking around?


  • How should I use random.jumpahead in Python

    - by Peter Smit
    I have an application that runs a certain experiment 1000 times (multi-threaded, so that multiple experiments run at the same time). Every experiment needs approximately 50,000 random.random() calls. What is the best approach to make these properly random? I could give every experiment a copy of a random object and then do a jumpahead of 50,000 * expid. The documentation suggests that jumpahead(1) already scrambles the state, but is that really true? Or is there another way to do this 'the best way'? (No, the random numbers are not used for security but for a Metropolis-Hastings algorithm. The only requirement is that the experiments are independent, not that the random sequence is unpredictable.)
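
    One workable pattern (a minimal sketch, assuming a dedicated random.Random instance per experiment is acceptable) is to give each experiment its own generator rather than sharing the module-level one between threads; the names BASE_SEED and generator_for below are illustrative, not from the question:

        import random

        BASE_SEED = 20100401

        def generator_for(expid):
            # On Python 2 one could instead copy a shared random.Random and call
            # rng.jumpahead(50000 * expid); jumpahead was removed in Python 3, so
            # this sketch derives independence from distinct seeds instead.
            return random.Random(BASE_SEED + expid)

        rngs = [generator_for(expid) for expid in range(1000)]
        print(rngs[0].random(), rngs[1].random(), rngs[2].random())

    Because each thread owns its generator, the per-experiment streams are reproducible and never interleave through a shared generator's state.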


  • Proper way to assert type of variable in Python

    - by Morlock
    When using a function, I wish to ensure that the types of the variables are as expected. What is the right way to do it? Here is an example fake function trying to do just this before going on with its role:

        def my_print(text, begin, end):
            """Print text in UPPER between 'begin' and 'end' in lower """
            for i in (text, begin, end):
                assert type(i) == type("")
            out = begin.lower() + text.upper() + end.lower()
            print out

    Is this approach valid? Should I use something other than type(i) == type("")? Should I use try/except instead? Thanks, pythoneers.
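
    For comparison, a minimal sketch of the usual idiom: isinstance rather than a type() equality (it also accepts str subclasses), and an explicit TypeError rather than assert, since asserts are stripped under python -O. The sketch is written for Python 3's str and is not from the question itself:

        def my_print(text, begin, end):
            """Print text in UPPER between 'begin' and 'end' in lower."""
            for name, value in (("text", text), ("begin", begin), ("end", end)):
                if not isinstance(value, str):
                    raise TypeError("%s must be a string, got %r" % (name, type(value)))
            print(begin.lower() + text.upper() + end.lower())

        my_print("hello", "<< ", " >>")   # prints "<< HELLO >>"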


  • iPhone Text entry with completion and DONE button (not search)

    - by Adam Jack
    Using iPhone SDK 3.0, I wish to allow text entry with (optional) completion options that appear as the user types, i.e. while still allowing free-format entry. As such I am using a UISearchBar (which has the text change events) and a UISearchDisplayController to present options. The problem is that I want the DONE button to say DONE and not SEARCH, but I cannot find a way to set that. I feel I must be missing something, or Interface Builder or the SDK API would surely have some property to set. I have seen other apps (in the store) that achieve the result I want (free-format entry, completion, DONE button), so maybe there is an alternative approach I am missing. Thanks in advance for any pointers.


  • ASP.NET Web Service - Passing a base object with list/collection?

    - by schooner
    We need to create a simple web service in ASP.NET that can be called from PHP or other languages. This in turn will be used to update records in a database for an item submission. The core part is fairly simple: we have a base set of fields for the object (first name, last name, birth date, city, etc.). In addition, however, we need to accept a list of items associated with that object that can range from 0 to n, for example:

        Jan 1 2009, ABC
        May 1 2010, 123
        Jun 30 2010, XXXXX

    What would be the best way to structure this so it can be easily passed to the ASP.NET web service and processed as a single call for the entire object? Would passing the list of items as a single delimited string be a wise approach? Ex: Jan 1 2009, ABC|May 1 2010, 123|Jun 30 2010, XXXXX


  • How to enable Automatic Sorting of IEnumerable Data in GridView?

    - by ace
    How can I enable automatic sorting in a GridView for the data my BLL returns as a CustomerList? Customer is my own strongly typed class and CustomerList is a List of Customers. I know one approach is to set the AllowSorting property to true on the GridView, handle the OnSorting event, and call a sorting method defined in my CustomerList class. However, I would like a solution that is automatic in the sense that I do not have to handle the OnSorting event; it should work the way GridView handles automatic sorting for DataView, DataTable, and DataSet. Is there an interface I need to implement on my CustomerList or Customer class that will enable that functionality?


  • Tables with no Primary Key

    - by Matt Hamilton
    I have several tables whose only unique data is a uniqueidentifier (a Guid) column. Because guids are non-sequential (and they're client-side generated so I can't use newsequentialid()), I have made a non-primary, non-clustered index on this ID field rather than giving the tables a clustered primary key. I'm wondering what the performance implications are for this approach. I've seen some people suggest that tables should have an auto-incrementing ("identity") int as a clustered primary key even if it doesn't have any meaning, as it means that the database engine itself can use that value to quickly look up a row instead of having to use a bookmark. My database is merge-replicated across a bunch of servers, so I've shied away from identity int columns as they're a bit hairy to get right in replication. What are your thoughts? Should tables have primary keys? Or is it ok to not have any clustered indexes if there are no sensible columns to index that way?


  • How to properly update a feature branch from trunk?

    - by Pavel Radzivilovsky
    The SVN book says: "...Another way of thinking about this pattern is that your weekly sync of trunk to branch is analogous to running svn update in a working copy, while the final merge step is analogous to running svn commit from a working copy." I find this approach very impractical in large developments, for several reasons, mostly related to the reintegration step. As of SVN 1.5, merging is done revision by revision. Cherry-picking the areas to be merged would force us to resolve the trunk-branch conflicts twice (once when merging trunk revisions to the feature branch, and once more when merging back). Repository size is also a concern: trunk changes might be significant for a large code base, and copying the diffs (unlike an svn copy) from trunk elsewhere may be a significant overhead. Instead, we do what we call "re-branching": when a significant chunk of trunk changes is needed, a new feature branch is opened from the current trunk, and the merge is always downward (feature branches to trunk to stable branches). This does not follow the SVN book guidelines, and developers see it as extra pain. How do you handle this situation?


  • Using Hive in a maven project

    - by Jason
    I have a project that I am migrating from ant to maven. The project makes use of a lightly-customized Hive build. I figured I would just import this build into our internal maven repo and list it as a dependency in the project's pom file. The problem I'm running into is that the Hive build just generates a bunch of jars in build/dist/lib. Some of these are the core Hive jars themselves and some are jars that Hive depends on. What's the best way to deal with these? Should I put all the core hive jars into our internal repo and just deal with undocumented dependencies in the new project's pom file? Or just jar up everything as a jar of jars and deploy that to the repo? Would that approach even work? Kind of a maven newbie still, thanks for any help.


  • Finding the closest grid coordinate to the mouse onclick with javascript/jQuery

    - by Acorn
    What I'm trying to do is make a grid of invisible, equally spaced coordinates on the page. I then want a <div> to be placed at whatever grid coordinate is closest to the pointer when onclick is triggered. Here's the rough idea: I have the tracking of the mouse coordinates and the placing of the <div> worked out fine. What I'm stuck on is how to approach the grid of coordinates. First of all, should I keep all my coordinates in an array that I then compare my onclick coordinate to? Or, seeing as my grid coordinates follow a rule, could I do something like finding which multiple of my spacing is closest to the onclick coordinate? And then, where do I start with working out which grid point is closest? What's the best way of going about it? Thanks!
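
    The "multiple of the spacing" idea needs no array at all; a minimal sketch of the arithmetic, assuming the grid starts at (0, 0) and written in Python for brevity (in the JavaScript the question is about, Math.round plays the role of round):

        def snap_to_grid(x, y, spacing):
            """Return the grid point closest to (x, y) for a grid spaced
            `spacing` pixels apart in both directions."""
            return (round(x / spacing) * spacing,
                    round(y / spacing) * spacing)

        print(snap_to_grid(137, 212, 50))   # -> (150, 200)

    If the grid has an offset, subtract it before rounding and add it back afterwards.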


  • Any ideas on how to implement a 'touchMoveOver' event in Javascript?

    - by gargantaun
    I'm faffing around with SVG, specifically for web content aimed at iPad users. I've created a little dial-type thingy that I'm calling a "cheese board" that I'd like to use as an interface element. http://appliedworks.co.uk/files/times/SVGTests/raphael.html Clicking on a piece of cheese (to keep the analogy going) will do "something". That bit's easy. However, I'd like the user to be able to drag their finger around the 'cheese board', firing a new event (touchesMovedOver?) every time their finger moves over a new piece of cheese. But I can't figure out how to do it, since there's no 'mouseOver' equivalent for touch interfaces. If the whole thing were made of squares, I could have created some sort of 'rectContainsPoint' method to be called for every 'touchesMoved', but that approach wouldn't work here. If anyone has any idea how something like this could be achieved, I'd love to hear it.


  • LINQ to SQL Web Application Best Practices

    - by derek
    In my experience building web applications, I've always used an n-tier approach: a DAL that gets data from the db and populates the objects, a BLL that gets objects from the DAL and performs any business logic required on them, and the website that gets its display data from the BLL. I've recently started learning LINQ, and most of the examples show the queries occurring right in the web application code-behinds (it's possible that I've only seen overly simplified examples). In the n-tier architectures, this was always seen as a big no-no. I'm a bit unsure of how to architect a new web application. I've been using the Server Explorer and dbml designer in VS2008 to create the dbml and object relationships. It's unclear to me whether the dbml would be considered the DAL layer, whether the website should call methods in a BLL that would then run the LINQ queries, and so on. What are some general architecture best practices, or approaches, for creating a web application solution using LINQ to SQL?


  • How do you implement caching in Linq to SQL?

    - by Glenn Slaven
    We've just started using LINQ to SQL at work for our DAL and we haven't really come up with a standard for our caching model. Previously we had been using a base 'DAL' class that implemented a cache manager property that all our DAL classes inherited from, but now we don't have that. I'm wondering if anyone has come up with a 'standard' approach to caching LINQ to SQL results? We're working in a web environment (IIS), if that makes a difference. I know this may well end up being a subjective question, but I still think the info would be valuable. EDIT: To clarify, I'm not talking about caching an individual result; I'm after more of an architectural solution, as in how do you set up caching so that all your LINQ methods use the same caching architecture.


  • How to auto-increment a reference number persistently when NSManagedObjects are created in Core Data

    - by KayKay
    In my application I am using Core Data to store information, and this data is saved to the server over a web connection; on the server side I have to use MySQL. Basically, what I want to do is keep track of the number of NSManagedObjects already created, and whenever I add a new NSManagedObject, assign it an int value based on that count which will act as the primary key in MySQL. For example, if there are already 10 NSManagedObjects, when I add a new one it will be assigned "11" as its primary key. These values only ever increase, because NSManagedObjects are never deleted. My approach would be a static member in the application delegate whose initial value can be any integer and which is incremented by one (like auto-increment) every time a new NSManagedObject is created; it also has to be persistent. I am not clear on how to do this; please give me suggestions. Thanks in advance.


  • Migrating schema and stored procedures from Informix to MySQL

    - by zombiegx
    We need to redo in MySQL a database that has already been built on Informix. Is there a way to migrate not only the schema but the stored procedures as well? Thanks. Some background: we have a client for whom we built a web application that uses an Informix database. Now the client wants to implement the same software on multiple closed networks (around 20). Doing this with Informix would be very expensive (20 licences X_X), so the best approach is to redo the database on something like MySQL. The application was built using Flex, .NET (using ODBC) and Informix.


  • Multiple records with one request in RESTful system

    - by keithjgrant
    All the examples I've seen regarding a RESTful architecture have dealt with a single record. For example, a GET request to mydomain.com/foo/53 to get foo 53, or a POST to mydomain.com/foo to create a new Foo. But what about multiple records? Requesting a series of Foos by id, or posting an array of new Foos, would generally be more efficient as a single API request than as dozens of individual requests. Would you "overload" mydomain.com/foo to handle requests for both single and multiple records? Or would you add a mydomain.com/foo-multiple to handle plural POSTs and GETs? I'm designing a system that may need to get many records at once (something akin to mydomain.com/foo/53,54,66,86,87), but since I haven't seen any examples of this, I'm wondering if there's something I'm just not getting about RESTful architecture that makes this approach "wrong".
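
    Nothing in REST forbids a collection-valued resource; a minimal sketch of the "overloaded" endpoint idea, using Flask purely for illustration (the question names no framework, and FOOS is a stand-in data store), where /foo/53 and /foo/53,54,66,86,87 hit the same handler:

        from flask import Flask, jsonify

        app = Flask(__name__)

        # Hypothetical in-memory data store for the sketch.
        FOOS = {53: {"id": 53, "name": "alpha"},
                54: {"id": 54, "name": "beta"},
                66: {"id": 66, "name": "gamma"}}

        @app.route("/foo/<ids>")
        def get_foos(ids):
            # A single id is just the one-element case of the comma-separated list.
            wanted = [int(part) for part in ids.split(",")]
            found = [FOOS[i] for i in wanted if i in FOOS]
            return jsonify(items=found, requested=wanted)

        if __name__ == "__main__":
            app.run()

    Whether the comma-separated form is "RESTful enough" is a matter of taste; treating the list itself as one resource is a common reading.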


  • Multi-part gzip file random access (in Java)

    - by toluju
    This may fall in the realm of "not really feasible" or "not really worth the effort", but here goes. I'm trying to randomly access records stored inside a multi-part gzip file. Specifically, the files I'm interested in are compressed Heritrix ARC files. (In case you aren't familiar with multi-part gzip files, the gzip spec allows multiple gzip streams to be concatenated in a single gzip file. They do not share any dictionary information; it is simple binary appending.) I'm thinking it should be possible to do this by seeking to a certain offset within the file, scanning for the gzip magic header bytes (i.e. 0x1f8b, as per the RFC), and attempting to read the gzip stream from the following bytes. The problem with this approach is that those same bytes can also appear inside the actual data, so seeking for those bytes can lead to an invalid position to start reading a gzip stream from. Is there a better way to handle random access, given that the record offsets aren't known a priori?
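
    The question is about Java, but the scan-and-verify idea is language-neutral; a minimal Python sketch, assuming false positives of the magic bytes simply have to be tolerated and detected by a failed decode (for a large file, mmap or chunked reads would replace the in-memory bytes):

        import zlib

        GZIP_MAGIC = b"\x1f\x8b\x08"  # magic bytes plus the deflate method byte

        def try_read_member(data, start):
            """Attempt to decode one gzip member starting at `start`; return the
            decompressed bytes, or None if `start` was a false positive."""
            d = zlib.decompressobj(wbits=31)  # 31 = gzip wrapper, 15-bit window
            try:
                out = d.decompress(data[start:])
                return out if d.eof else None  # a real member ends cleanly
            except zlib.error:
                return None

        def members_from(data, seek_to):
            """Scan forward from `seek_to` for the magic and yield decoded members."""
            pos = data.find(GZIP_MAGIC, seek_to)
            while pos != -1:
                member = try_read_member(data, pos)
                if member is not None:
                    yield pos, member
                pos = data.find(GZIP_MAGIC, pos + 1)

    A false hit almost always fails the header or CRC check, so the cost of a wrong offset is one aborted decompression rather than corrupt output.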


  • Right recursive grammar or left recursive?

    - by user2485710
    I have little to no knowledge of what I'm about to ask, so I would like a suggestion based on the level of skill required to implement a parser for the given grammar (since I'm a beginner in this kind of formal approach to parsers and languages). Going back a couple of years, this situation reminds me a little of the Pascal grammar vs. the C/C++ grammar, this left vs. right stuff. But I'm not going to do any of that; my purpose is to implement a simple parser for a markup language for documents, like Markdown. So, considering that I'm starting with a markup language in mind and I want to keep things simple, which of these two options is the easier one to handle, and why? Could another kind of grammar be an easier option for me? If so, which one do you suggest?
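
    A minimal sketch of why the choice matters for a hand-written parser (the kind the question describes): a naive recursive-descent function for the left-recursive rule  expr -> expr '-' term  calls itself before consuming any input and never terminates, whereas the right-recursive/iterative form below consumes a token on every step. The toy expression grammar is illustrative only, not the markup grammar in question:

        def parse_expr(tokens, pos=0):
            """expr -> term ('-' term)*   (left recursion rewritten as iteration)"""
            value, pos = parse_term(tokens, pos)
            while pos < len(tokens) and tokens[pos] == "-":
                rhs, pos = parse_term(tokens, pos + 1)
                value -= rhs          # left-associative, as the original rule intends
            return value, pos

        def parse_term(tokens, pos):
            return int(tokens[pos]), pos + 1

        print(parse_expr(["7", "-", "2", "-", "1"]))   # -> (4, 5)

    LALR/LR parser generators handle left recursion fine; it is hand-written or LL/recursive-descent parsing that pushes you toward right recursion or iteration.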


  • Converting HTML special characters into their value using Python

    - by tipu
    I have a file that's littered with the entities listed here: http://www.utexas.edu/learn/html/spchar.html That link displays all sorts of HTML entities, such as &ndash; (–), &mdash; (—), &iexcl; (¡) and so on. Is it possible in Python to natively convert these back into their character values, so any occurrence of &ndash; will appear as – instead? My current approach was just to make a dict mapping HTML entities to their UTF-8 values and do search and replace, but I was wondering if there are any libraries that can take care of this for me.
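
    The standard library covers this; a minimal sketch using html.unescape (available since Python 3.4; on Python 2 the rough equivalent was HTMLParser.HTMLParser().unescape, and htmlentitydefs held the entity table for the hand-rolled dict approach):

        import html

        text = "3 &ndash; 5, then &mdash; done. &iexcl;Hola!"
        print(html.unescape(text))   # -> "3 – 5, then — done. ¡Hola!"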


  • Safe deployment of ASP.Net applications

    - by gatapia
    Hi all, I have an ASP.NET app that I want to deploy safely (with as little downtime as possible). I would love to do something like blue-green deployment, but without the need for a second web server. So, I know I can use load balancing, etc., but I need a quick and cheap approach. I was thinking of doing something like this: set up another website (a copy of the original) in IIS (currently I use host headers to direct traffic across sites). I could then view the new site locally until it is fully online (due to NHibernate startup and various other high-intensity tasks, this takes a while). Once site 2 has fully started I would then swap the host headers around, giving me a much, much smaller downtime. So my question is: has anyone done anything like this? Will IIS restart my app pool or application when the host headers change (making this useless)? Any other options? Thanks for your help all. Guido


  • In MVC2, how do I validate fields that aren't in my data model?

    - by Andy Evans
    I am playing with MVC2 in VS 2010 and am really getting to like it. In a sandbox application that I've started from scratch, my database is represented by an ADO.NET entity data model, and I have done much of the validation for fields in my data model using Scott Guthrie's "buddy class" approach, which has worked very well. However, in a user registration form that I have designed and am experimenting with, I'd like to add a 'confirm email address' or a 'confirm password' field. Since these fields obviously wouldn't exist in my data model, how would I validate them client-side and server-side? I would like to implement something like 'Html.ValidationMessageFor', but these fields don't exist in the data model. Any help would be greatly appreciated.


  • Implement a VPN

    - by jackson
    I want to build a client-server application (client.exe) that does the following: when clients run it, they are placed in a VPN and can communicate with each other within one application. For example: clients run client.exe and can then see each other in StarCraft's LAN-only mode. From what I have read, the right type of VPN for this situation is Secure Socket Tunneling Protocol: "Secure socket tunneling protocol, also referred to as SSTP, is by definition an application-layer protocol. It is designed to employ a synchronous communication in a back and forth motion between two programs. It allows many application endpoints over one network connection, between peer nodes, thereby enabling efficient usage of the communication resources that are available to that network." Question: I don't have experience with network programming, so my question for those who do is: is this the right approach? PS1: I don't want something ready-made like OpenVPN; I am doing this as a learning exercise. PS2: the application targets Windows and I plan to use .NET. Thanks for reading the whole story; I am waiting for your replies.


  • Using Parallel Extensions In Web Applications

    - by Greg
    I'd like to hear some opinions on what role, if any, parallel computing approaches, including the potential use of the Parallel Extensions (the June CTP, for example), have in web applications. What scenarios does this approach fit, and which does it not? My understanding of exactly how IIS and web browsers thread tasks is fairly limited, and I would appreciate some insight on that if someone out there has a good understanding of it. I'm more curious to know whether the way IIS and web browsers work limits the ROI of creating threaded and/or asynchronous tasks in web applications in general. Thanks in advance.


  • How to write async background workers that work on a WPF FlowDocument

    - by iBe
    I'm trying to write a background worker that processes a FlowDocument. I can't access the properties of FlowDocument objects because of the thread verification. I tried serializing the document and loading it on the worker thread, which actually solved the thread verification issue. However, once the processing is complete I also need to use things like TextPointer objects, and those now point to objects in the copy, not the original. Can anyone suggest the best way to approach this kind of background processing in WPF?

