Search Results

Search found 7338 results on 294 pages for 'useful'.


  • Is there any reasonable use of a function returning an anonymous struct?

    - by Akanksh
    Here is an (artificial) example of using a function that returns an anonymous struct and does "something" useful:

        #include <iostream>

        template<typename T>
        T* func( T* t, float a, float b )
        {
            if(!t) { t = new T; t->a = a; t->b = b; }
            else   { t->a += a; t->b += b; }
            return t;
        }

        struct { float a, b; }* foo(float a, float b)
        {
            if(a==0) return 0;
            return func(foo(a-1,b), a, b);
        }

        int main()
        {
            std::cout << foo(5,6)->a << std::endl;
            std::cout << foo(5,6)->b << std::endl;
            void* v = (void*)(foo(5,6));
            float* f = (float*)(v);
            //[1] delete f now because I know the struct is floats only.
            std::cout << f[0] << std::endl;
            std::cout << f[1] << std::endl;
            delete[] f;
            return 0;
        }

    There are a few points I would like to discuss:

    - As is apparent, this code leaks. Is there any way I can avoid leaking without knowing what the underlying struct definition is? See comment [1].
    - I have to return a pointer to an anonymous struct so I can create an instance of the object within the templatized function func. Can I do something similar without returning a pointer?
    - Most importantly: is there ANY (real-world) use for this at all? The example given above leaks and is admittedly contrived.

    By the way, what the function foo(a,b) does is return a struct containing two numbers: the sum of all numbers from 1 to a, and the product of a and b.

    EDIT: Maybe the line new T could use a boost::shared_ptr somehow to avoid leaks, but I haven't tried that. Would that work?
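
    Incidentally, a minimal sketch of how the leak disappears if the struct is given a name and returned by value (the name foo2 is hypothetical, and this assumes the goal really is just the two numbers described above, so it changes the question's premise rather than fixing the posted code):

        #include <iostream>

        // Named struct, returned by value: no new/delete, so nothing can leak.
        struct Result { float a, b; };

        Result foo2(float a, float b)
        {
            Result r = { 0.0f, 0.0f };
            for (float i = 1; i <= a; ++i)
                r.a += i;      // sum of 1..a
            r.b = a * b;       // product of a and b
            return r;
        }

        int main()
        {
            Result r = foo2(5, 6);
            std::cout << r.a << " " << r.b << std::endl; // prints 15 30
            return 0;
        }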


  • JavaScript Resource Management Design Pattern

    - by Adam
    As a web developer, a common problem I find myself tackling is waiting for something to load before doing something else. In particular, I often hide (using either display: none; or visibility: hidden;, depending on the situation) elements while waiting for a background image or a CSS file to load.

    Consider this example from Last.FM. They overlay a semi-transparent PNG over each album art image so that it looks like it's inside a jewel case. They let it load when it loads, so depending on your internet speed you may temporarily see the art image by itself, without the overlay. In this case the album art looks fine without the jewel-case effect. But in similar situations, I have found that I don't want the user to see the site's design mangled as resources incrementally load. So, in rare cases I have hidden everything from the user until the whole kit and caboodle has loaded. But this is often a pain to write out, and may force the user to wait a pretty long time to see anything (besides "loading..." text).

    I can think of (and have used on occasion) some obvious solutions/compromises:

    - Use some inline CSS so that as certain parts of the DOM load and render, they immediately have the correct size/position/etc.
    - Immediately render the navigation part of the site, so that if the user wants to use the current page purely to get somewhere else, they don't have to wait for the rest to load.
    - Load pixelated images first as placeholders for layout while lazy-loading higher-quality images as replacements.
    - Something quirky, like using a cute animated GIF to distract the user during a "loading..." phase.
    - Show useful information as a reference while loading the full UI (something akin to Gmail's Inbox Preview).

    (Sorry if my question was basically just asked and answered...) Despite all of these ideas, I still find myself hoping there are better ways of doing some of these things. So I guess what I'm looking for is some inspiration and/or any creative ways of dealing with this problem that you guys may have seen out in the wild.
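
    As one concrete illustration of the hide-until-ready approach, a minimal sketch in plain DOM JavaScript (the function and argument names are made up):

        // Keep a container hidden until both the album art and the overlay
        // have loaded, then reveal them together.
        function showWhenLoaded(artUrl, overlayUrl, container) {
          var remaining = 2;
          function done() {
            remaining -= 1;
            if (remaining === 0) {
              container.style.visibility = "visible";
            }
          }
          var art = new Image();
          var overlay = new Image();
          art.onload = done;
          overlay.onload = done;
          // assign src after onload so cached images still fire the handler
          art.src = artUrl;
          overlay.src = overlayUrl;
        }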


  • Classifying captured data in unknown format?

    - by monch1962
    I've got a large set of captured data (potentially hundreds of thousands of records), and I need to be able to break it down so I can both classify it and also produce "typical" data myself. Let me explain further...

    If I have the following strings of data:

        132T339G1P112S
        164T897F5A498S
        144T989B9B223T
        155T928X9Z554T

    ... you might start to infer the following:

    - possibly all strings are 14 characters long
    - the 4th, 8th, 10th and 14th characters may always be alphas, while the rest are numeric
    - the first character may always be a '1'
    - the 4th character may always be the letter 'T'
    - the 14th character may be limited to only being 'S' or 'T'

    and so on. As you get more and more samples of real data, some of these "rules" might disappear; if you see a 15-character-long string, then you have evidence that the 1st "rule" is incorrect. However, given a sufficiently large sample of strings that are exactly 14 characters long, you can start to assume that "all strings are 14 characters long" and assign a numeric figure to your degree of confidence (with an appropriate set of assumptions around the fact that you're seeing a suitably random set of all possible captured data).

    As you can probably tell, a human can do a lot of this classification by eye, but I'm not aware of libraries or algorithms that would allow a computer to do it. Given a set of captured data (significantly more complex than the above...), are there libraries that I can apply in my code to do this sort of classification for me, that will identify "rules" with a given degree of confidence?

    As a next step, I need to be able to take those rules and use them to create my own data that conforms to them. I assume this is a significantly easier step than the classification, but I've never had to perform a task like this before, so I'm really not sure how complex it is.

    At a guess, Python or Java (or possibly Perl or R) are the "common" languages most likely to have these sorts of libraries, and maybe some of the bioinformatics libraries do this sort of thing. I really don't care which language I have to use; I need to solve the problem in whatever way I can. Any sort of pointer to information would be very useful. As you can probably tell, I'm struggling to describe this problem clearly, and there may be a set of appropriate keywords I can plug into Google that will point me towards the solution. Thanks in advance.
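
    To make the induction step concrete, a toy sketch (hand-rolled Python, not a library recommendation) of inferring per-position alphabets from the samples above; the "confidence" here is just the sample count, not a rigorous statistic:

        from collections import defaultdict

        samples = ["132T339G1P112S", "164T897F5A498S",
                   "144T989B9B223T", "155T928X9Z554T"]

        # record every symbol seen at each position
        lengths = {len(s) for s in samples}
        position_alphabets = defaultdict(set)
        for s in samples:
            for i, ch in enumerate(s):
                position_alphabets[i].add(ch)

        print("observed lengths:", lengths)
        for i in sorted(position_alphabets):
            alphabet = position_alphabets[i]
            if len(alphabet) <= 3:  # arbitrary threshold for "looks like a rule"
                print("position %d is always one of %s (n=%d)"
                      % (i + 1, sorted(alphabet), len(samples)))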


  • How can I limit the cache used by copying so there is still memory available for other cache?

    - by Peter
    Basic situation: I am copying some NTFS disks in openSUSE. Each one is 2 TB. When I do this, the system runs slowly.

    My guesses: I believe it is likely due to caching. Linux decides to discard useful cache (e.g. KDE4 bloat, virtual machine disks, LibreOffice binaries, Thunderbird binaries, etc.) and instead fills all available memory (24 GB total) with stuff from the disks being copied, which will be read only once, then written and never used again. So then any time I use these apps (or KDE4), the disk needs to be read again, and reading the bloat off the disk again makes things freeze/hiccup. With the cache gone and these bloated applications needing lots of cache, the system becomes horribly slow.

    Since it is USB, the disk and disk controller are not the bottleneck, so using ionice does not make it faster. I believe it is the cache rather than just the motherboard going too slow, because if I stop everything copying, it still runs choppily for a while until it re-caches everything. And if I restart the copying, it takes a minute before it is choppy again. But also, I can limit the copy to around 40 MB/s, and it runs faster again (not because it has the right things cached, but because the motherboard buses have lots of extra bandwidth for the system disks). I can fully accept a performance loss from my motherboard's I/O capability being completely consumed (which is 100% used, meaning 0% wasted power, which makes me happy), but I can't accept that this caching mechanism performs so terribly in this specific use case.

        # free
                     total       used       free     shared    buffers     cached
        Mem:      24731556   24531876     199680          0    8834056   12998916
        -/+ buffers/cache:    2698904   22032652
        Swap:      4194300      24764    4169536

    I also tried the same thing on Ubuntu, which causes a total system hang instead. ;)

    And to clarify, I am not asking how to leave memory free for the "system", but for "cache". I know that cache memory is automatically given back to the system when needed, but my problem is that it is not reserved for caching specific things.

    Question: Is there some way to tell these copy operations to limit memory usage so some important things remain cached, and therefore any slowdowns are a result of normal disk usage and not rereading the same commonly used files? For example, is there a setting for the maximum memory per process/user/file system allowed to be used as cache/buffers?
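
    For what it's worth, a rough sketch (untested; tools such as nocache are built on the same idea) of how a copy program can opt its own pages out of the cache with posix_fadvise, which is about the closest Linux gets to a per-process cache limit:

        #define _XOPEN_SOURCE 600
        #include <fcntl.h>
        #include <unistd.h>

        /* Copy in 1 MiB chunks; every 64 MiB, flush and tell the kernel the
         * copied pages will not be needed again, so they are dropped instead
         * of evicting everything else. */
        static int copy_uncached(int in_fd, int out_fd)
        {
            static char buf[1 << 20];
            off_t done = 0;
            ssize_t n;
            while ((n = read(in_fd, buf, sizeof buf)) > 0) {
                if (write(out_fd, buf, (size_t)n) != n)
                    return -1;
                done += n;
                if (done % (64 << 20) == 0) {
                    fsync(out_fd); /* pages must be clean before they can be dropped */
                    posix_fadvise(out_fd, 0, done, POSIX_FADV_DONTNEED);
                    posix_fadvise(in_fd, 0, done, POSIX_FADV_DONTNEED);
                }
            }
            return (n < 0) ? -1 : 0;
        }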


  • Recursive N-way merge/diff algorithm for directory trees?

    - by BobMcGee
    What algorithms or Java libraries are available to do N-way, recursive diff/merge of directories? I need to be able to generate a list of folder trees that have many identical files and subdirectories with many similar files. I want to be able to use 2-way merge operations to quickly remove as much redundancy as possible.

    Goals:

    - Find pairs of directories that have many similar files between them.
    - Generate a short list of directory pairs that can be synchronized with a 2-way merge to eliminate duplicates.
    - Should operate recursively (there may be nested duplicates of higher-level directories).
    - Run time and storage should be O(n log n) in the number of directories and files.
    - Should be able to use an embedded DB or page to disk for processing more files than fit in memory (100,000+).
    - Optional: generate an ancestry and change-set between folders.
    - Optional: sort the merge operations by how many duplicates they can eliminate.

    I know how to use hashes to find duplicate files in roughly O(n) space (a sketch of that starting point follows below), but I'm at a loss for how to go from this to finding partially overlapping sets between folders and their children.

    EDIT: some clarification. The tricky part is the difference between "exact same" contents (otherwise hashing file hashes would work) and "similar" (which will not). Basically, I want to feed this algorithm a set of directories and have it return a set of 2-way merge operations I can perform in order to reduce duplicates as much as possible with as few conflicts as possible. It's effectively constructing an ancestry tree showing which folders are derived from each other.

    The end goal is to let me incorporate a bunch of different folders into one common tree. For example, I may have a folder holding programming projects, and then copy some of its contents to another computer to work on them. Then I might back up an intermediate version to a flash drive. Except I may have 8 or 10 different versions, with slightly different organizational structures or folder names. I need to be able to merge them one step at a time, so I can choose how to incorporate changes at each step of the way. This is actually more or less what I intend to do with my utility (bring together a bunch of scattered backups from different points in time). I figure if I can do it right I may as well release it as a small open source util. I think the same tricks might be useful for comparing XML trees, though.
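
    The exact-duplicate starting point mentioned above, as a rough Java sketch (class and method names invented; real code would stream file contents and page the index to disk rather than hold it in a HashMap):

        import java.io.File;
        import java.math.BigInteger;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.security.MessageDigest;
        import java.util.*;

        public class DirSimilarity {
            // content hash -> directories containing a file with that content
            private final Map<String, Set<Path>> index = new HashMap<>();

            private static String hashFile(Path p) throws Exception {
                MessageDigest md = MessageDigest.getInstance("SHA-1");
                md.update(Files.readAllBytes(p)); // fine for a sketch; stream large files
                return new BigInteger(1, md.digest()).toString(16);
            }

            public void indexTree(Path root) throws Exception {
                File[] children = root.toFile().listFiles();
                if (children == null) return;
                for (File f : children) {
                    if (f.isDirectory()) indexTree(f.toPath());
                    else index.computeIfAbsent(hashFile(f.toPath()), k -> new HashSet<>())
                              .add(f.toPath().getParent());
                }
            }

            // Score each directory pair by the number of identical files shared;
            // high-scoring pairs are candidates for a 2-way merge.
            public Map<Set<Path>, Integer> pairScores() {
                Map<Set<Path>, Integer> scores = new HashMap<>();
                for (Set<Path> dirs : index.values()) {
                    List<Path> ds = new ArrayList<>(dirs);
                    for (int i = 0; i < ds.size(); i++)
                        for (int j = i + 1; j < ds.size(); j++) {
                            Set<Path> pair = new HashSet<>(Arrays.asList(ds.get(i), ds.get(j)));
                            scores.merge(pair, 1, Integer::sum);
                        }
                }
                return scores;
            }
        }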


  • Avoiding stack overflows in wrapper DLLs

    - by peachykeen
    I have a program to which I'm adding fullscreen post-processing effects. I do not have the source for the program (it's proprietary, although a developer did send me a copy of the debug symbols in .map format). I have the code for the effects written and working, no problems. My issue now is linking the two.

    I've tried two methods so far. The first is to use Detours to modify the original program's import table. This works great and is guaranteed to be stable, but the users I've talked to aren't comfortable with it, it requires installation (beyond extracting an archive), and there's some question whether patching the program with Detours is valid under the terms of the EULA. So that option is out.

    The other option is traditional DLL replacement. I've wrapped OpenGL (opengl32.dll), and I need the program to load my DLL instead of the system copy (just drop it in the program folder with the right name; that's easy). I then need my DLL to load the Cg framework and runtime (which relies on OpenGL) and a few other things. When Cg loads, it calls some of my functions, which call Cg functions, and I tend to get stack overflows and infinite loops. I need to be able to either include the Cg DLLs in a subdirectory and still use their functions (not sure if it's possible to have my DLL's import table point to a DLL in a subdirectory), or I need to dynamically link them (which I'd rather not do, just to simplify the build process); in short, something to force them to refer to the system's file (not my custom replacement). The entire chain is:

    1. The program loads DLL A (named opengl32.dll).
    2. DLL A loads Cg.dll and dynamically links (GetProcAddress) to sysdir/opengl32.dll.
    3. I now need Cg.dll to also refer to sysdir/opengl32.dll, not DLL A.

    How would this be done?

    Edit: How would this be done easily without using GetProcAddress? If nothing else works, I'm willing to fall back to that, but I'd rather not if at all possible.

    Edit 2: I just stumbled across the function SetDllDirectory in the MSDN docs (on a totally unrelated search). At first glance, that looks like what I need. Is that right, or am I misjudging? (Off to test it now.)

    Edit 3: I've solved this problem by doing things a bit differently. Instead of dropping an OpenGL32.dll, I've renamed my DLL to DInput.dll. Not only does it have the advantage of exporting one function instead of well over 120 (for the program, Cg, and GLEW), I don't have to worry about functions calling back in (I can link to OpenGL as usual). To get into the calls I need to intercept, I'm using Detours. All in all, it works much better. This question, though, is still an interesting problem (and hopefully will be useful for anyone else trying to do crazy things in the future). Both the answers are good, so I'm not sure yet which to pick...
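
    For anyone attempting the original opengl32.dll route, a sketch of the classic wrapper pattern using only documented Win32 calls (it pins the wrapper's own forwarding to the system copy, though it cannot by itself control what Cg binds to):

        #include <windows.h>

        // Resolve the real opengl32.dll by full system path, so nothing can
        // accidentally bind back to the wrapper that shares its name.
        static HMODULE LoadRealOpenGL(void)
        {
            static HMODULE real = NULL;
            if (!real) {
                char path[MAX_PATH];
                GetSystemDirectoryA(path, MAX_PATH);
                lstrcatA(path, "\\opengl32.dll");
                real = LoadLibraryA(path);
            }
            return real;
        }

        // Example exported stub forwarding to the real entry point.
        typedef void (WINAPI *glFlush_t)(void);

        extern "C" __declspec(dllexport) void WINAPI glFlush(void)
        {
            static glFlush_t real_glFlush = NULL;
            if (!real_glFlush)
                real_glFlush = (glFlush_t)GetProcAddress(LoadRealOpenGL(), "glFlush");
            if (real_glFlush) real_glFlush();
            // ... post-processing hook would go here ...
        }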


  • Buy or Build for web deployment?

    - by Cannonade
    I have been evaluating the wide range of installation and web deployment solutions available for Windows applications. I will just clarify here (without too much detail; these tools have been covered in other questions) my understanding of the options:

    - NSIS - Free tool that generates setup executables. Small binary. Specialized, sometimes obtuse, scripting language.
    - Inno Setup - Free tool for setup executables. Various binary compression schemes. Pascal scripting engine.
    - WiX - Free toolset to generate MSI binaries. XML definition language.
    - WiX ClickThrough - Additional tools for packaging, web download and auto-update detection (now part of the WiX core).
    - InstallShield - Commercial development environment for installation packaging. Generates MSI binaries. C-like InstallScript language.
    - Wise - Commercial development environment for installation packaging. Generates MSI binaries.
    - ClickOnce - Visual Studio-supported framework for publishing applications to a web server, with automatic detection of updates. No support for custom installation requirements (INI files, registry, etc.). Packages setup as an MSI binary.
    - InstallAware - Commercial development environment for installation. Generates MSI binaries. Automatic update framework (Web Update).

    If I have missed any, please let me know. I also found some useful discussions of these technologies on Stack Overflow:

    - Best Simple Install System
    - Best choice for Windows installers
    - Alternatives to ClickOnce

    I have worked with a few of these solutions, as well as a handful of proprietary internal installation solutions. They are mostly concerned with packaging installations and providing a framework for developers to access the runtime environment. With the growing requirement for web deployment and automatic software updates, I expected to find more of a consensus among developers on a framework for web delivery of software and subsequent updates; I haven't really found that consensus. There are certainly solutions available (ClickOnce, ClickThrough, InstallShield Update Service), but each has considerable limitations (please correct me if I misrepresent any of these). I would be interested in a framework that provided some of the following:

    - Third-party hosting/management of updates.
    - Access to the client environment (INI files, registry, etc.).
    - User registration/activation.
    - Feedback/error reporting.

    This is leaving me with the strong impression that the best way to approach the web deployment problem is through a custom-built proprietary solution (possibly leveraging existing installer packaging). I have seen this sort of solution work well for a number of successful applications. For example, FileZilla makes an HTTP request to update.filezilla-project.org to check for updates, downloads an NSIS binary (I think), and then shuts down to run the install.


  • What Can I Do About One Of My Team Members (A Good Friend As Well) Who Lost His Passion?

    - by skyflyer
    It seems this question is not programming-related, but there are lots of similar questions here, so please bear with me! By the way, I am a programmer, and my team is in charge of a software project. SO is the only place that has solved a lot of thorny troubles for me! THANK YOU GUYS!

    I joined my company with him years ago. At that time he was quite passionate about his job, which is front-end development. He gave us lots of useful suggestions concerning his work, like design, and I believed he was a smart guy. I believe he still is smart, by the way. One year later, however, he seemed to have lost his passion. He fooled around every day, did not care about his work any more, and produced poor work. Even worse, he literally stopped learning new skills and honing his work-related skills. To me this is horrible; we have to keep abreast of new technology developments, otherwise we will be thrown out.

    Since we were just coworkers, I did not care about it too much, except for mentioning my thoughts several times. But last month we assembled a new group and were assigned a very important project, and I am the team leader (sadly!). My boss gave me a lot of support and expectations as well. I did a pretty good job before, and I am very optimistic about our future. But as a team, if my team does not work hard, we will be doomed to failure no matter how hard I work and push.

    In order to revitalize his passion, I tried a couple of things, like talking to him about my concerns and my boss's anger. I offered him a new task which was quite new to him. I even persuaded my boss to give him a new incentive package. But all of them hit a wall. His reaction was that he just did not care. Even worse, he did not want to talk about his situation. I want to be hard on him, but since we are friends and coworkers, I really cannot see that working. Even if it works, I cannot so quickly change myself from friend and coworker into manager. As a novice in management, I am really overwhelmed! I do not want to get him fired; we are friends, and I do not want to see my team member fired. What can I do? Thank you guys!


  • Should we denormalize database to improve performance?

    - by Groo
    We have a requirement to store 500 measurements per second, coming from several devices. Each measurement consists of a timestamp, a quantity type, and several vector values. Right now there are 8 vector values per measurement, and we may consider this number to be constant for the needs of our prototype project. We are using NHibernate. Tests are done in SQLite (disk file db, not in-memory), but production will probably be MSSQL.

    Our Measurement entity class is the one that holds a single measurement, and looks like this:

        public class Measurement
        {
            public virtual Guid Id { get; private set; }
            public virtual Device Device { get; private set; }
            public virtual Timestamp Timestamp { get; private set; }
            public virtual IList<VectorValue> Vectors { get; private set; }
        }

    Vector values are stored in a separate table, so that each of them references its parent measurement through a foreign key. We have done a couple of things to ensure that the generated SQL is (reasonably) efficient: we are using Guid.Comb for generating IDs, we are flushing around 500 items in a single transaction, and the ADO.NET batch size is set to 100 (I think SQLite does not support batch updates? But it might be useful later).

    The problem: right now we can insert 150-200 measurements per second (which is not fast enough, although this is SQLite we are talking about). Looking at the generated SQL, we can see that in a single transaction we insert (as expected):

    - 1 timestamp
    - 1 measurement
    - 8 vector values

    which means that we are actually doing 10x more single-table inserts: 1500-2000 per second. If we placed everything (all 8 vector values and the timestamp) into the measurement table (adding 9 dedicated columns), it seems that we could increase our insert speed up to 10 times.

    Switching to SQL Server will improve performance, but we would like to know if there might be a way to avoid the unnecessary performance costs related to the way the database is organized right now.

    [Edit] With in-memory SQLite I get around 350 items/sec (3500 single-table inserts), which I believe is about as good as it gets with NHibernate (taking this post for reference: http://ayende.com/Blog/archive/2009/08/22/nhibernate-perf-tricks.aspx). But I might as well switch to SQL Server and stop assuming things, right? I will update my post as soon as I test it.
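
    For reference, a sketch of what the denormalized layout under discussion might look like (all names assumed; whether this trade-off is wise is exactly the question):

        -- One row per measurement: the timestamp and all 8 vector values
        -- inlined, so each measurement costs a single INSERT instead of ten.
        CREATE TABLE Measurement (
            Id           UNIQUEIDENTIFIER PRIMARY KEY,
            DeviceId     UNIQUEIDENTIFIER NOT NULL,
            TakenAt      DATETIME NOT NULL,
            QuantityType INT NOT NULL,
            V1 FLOAT NOT NULL, V2 FLOAT NOT NULL, V3 FLOAT NOT NULL, V4 FLOAT NOT NULL,
            V5 FLOAT NOT NULL, V6 FLOAT NOT NULL, V7 FLOAT NOT NULL, V8 FLOAT NOT NULL
        );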


  • What is a good platform for building a game framework targeting both web and native languages?

    - by fuzzyTew
    I would like to develop (or find, if one is already in development) a framework with support for accelerated graphics and sound, built on a system flexible enough to compile to the following:

    - native ppc/x86/x86_64/arm binaries, or a language which compiles to them
    - javascript
    - actionscript bytecode, or a language which compiles to it (ActionScript 3, haXe)
    - optionally java

    I imagine, for example, creating an API where I can open windows and make OpenGL-like calls, and the framework maps this in a relatively efficient manner to either WebGL with a canvas object, 3D graphics in Flash, OpenGL ES 2 with EGL, or desktop OpenGL in an X11, Windows, or Cocoa window. I have so far looked into these avenues:

    Building the game library in haXe:
    - Pros: Targets exist for php, javascript, actionscript bytecode, and c++. High-level, object-oriented language.
    - Cons: No support for finally{} blocks or destructors, making resource cleanup difficult. The C++ target does not allow room for producing highly optimized libraries: the foreign function interface requires all primitive types to be boxed in a wrapper object, as if writing bindings for a scripting language; this feels less than ideal for real-time graphics and audio, especially when exporting low-level functions. Doesn't seem quite mature yet.

    Using the C preprocessor to create a translator, writing programs entirely with macros:
    - Pros: CPP is widespread and simple to use.
    - Cons: This is an arduous task and probably the wrong tool for the job. CPP implementations differ widely in support for features (e.g. Xcode's cpp has no variadic macros despite claiming C99 compliance). There is little to no room for optimization in this route.

    Using LLVM's support for multiple backends to target C/C++ to web languages:
    - Pros: Can code in C/C++. LLVM is a very mature, highly optimizing compiler performing e.g. global inlining. Targets exist for actionscript (Alchemy) and javascript (Emscripten).
    - Cons: The ActionScript target is closed source, unmaintained, and buggy. The JavaScript targets do not use features of HTML5 for appropriate optimization (e.g. linear memory with typed arrays) and are immature. An LLVM target must convert from low-level bytecode, so high-level constructs are lost, and bloated, unreadable code is created by translating individual instructions, which may be more difficult for an unprepared JIT to optimize. "jump" instructions cause problems for languages with no "goto" statement.

    Using libclang to write a translator from C/C++ to web languages:
    - Pros: A beautiful parsing library providing easy access to the code structure. Can code in C/C++. Has sponsored developer effort from Apple.
    - Cons: Incomplete; the current feature set targets IDEs. Basic operators are unexposed and must be manually parsed from the returned AST element to be identified. Translating code prior to compilation may forgo optimizations assumed in C/C++, such as inlining.

    Creating new code generators for clang to translate into web languages:
    - Pros: Can code in C/C++, as with libclang.
    - Cons: There is no API; the code structure is unstable. A much larger job than using libclang; the innards of clang are complex.

    Building the game library in Common Lisp:
    - Pros: Flexible, ancient, well-developed language. Extensive introspection should ease writing translators. Translators exist for at least javascript.
    - Cons: Unfamiliar language. No standardized library functions; widely varying implementations.

    Which of these avenues should I pursue? Do you know of any others, or of any systems that might be useful? Does a general project like this exist somewhere already?
Thank you for any input.


  • Postgresql: Implicit lock acquisition from foreign-key constraint evaluation

    - by fennec
    So, I'm confused about foreign key constraint handling in PostgreSQL (version 8.4.4, for what it's worth). We've got a couple of tables, mildly anonymized below:

        device: (id, blah, blah, blah, blah, blah x 50)
            primary key on id
            whooooole bunch of other junk

        device_foo: (id, device_id, left, right)
            foreign key (device_id) references device(id) on delete cascade
            primary key on id
            btree index on 'left' and 'right'

    So I set out with two database windows to run some queries:

        db1> begin; lock table device in exclusive mode;
        db2> begin; update device_foo set left = left + 1;

    The db2 connection blocks. It seems odd to me that an update of the 'left' column on device_foo should be affected by activity on the device table. But it is. In fact, if I go back to db1:

        db1> select * from device_foo for update;
        *** deadlock occurs ***

    The pgsql log has the following (blah blah blah deadlock blah):

        CONTEXT: SQL statement "SELECT 1 FROM ONLY "public"."device" x
                 WHERE "id" OPERATOR(pg_catalog.=) $1 FOR SHARE OF x"
        STATEMENT: update device_foo set left = left + 1;

    I suppose I've got two issues. The first is that I don't understand the precise mechanism by which this sort of locking occurs. I have a couple of useful queries against pg_locks to see what sort of locks a statement invokes, but I haven't been able to observe this particular sort of locking when I run the update device_foo command in isolation. (Perhaps I'm doing something wrong, though.) I also can't find any documentation on the lock acquisition behavior of foreign-key constraint checks. All I have is a log message. Am I to infer from this that any change to a row will acquire an update lock on all the tables it's foreign-keyed against?

    The second issue is that I'd like to find some way to make it not happen like that. I'm ending up with occasional deadlocks in the actual application. I'd like to be able to run big update statements that impact all rows in device_foo without acquiring a big lock on the device table. (There's a lot of access going on in the device table, and it's kind of an expensive lock to get.)
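
    On the first issue, a sketch of the kind of pg_locks query that makes the blocking visible (run from a third session while db2 is waiting; pg_locks is a standard system view in 8.4). Note that the row-level share locks taken by the RI trigger's SELECT ... FOR SHARE live in tuple headers, so what shows up here is mainly the table-level and transaction-id locks:

        SELECT l.locktype,
               l.relation::regclass AS relation,
               l.mode,
               l.granted,
               l.pid
        FROM   pg_locks l
        WHERE  l.relation IN ('device'::regclass, 'device_foo'::regclass)
        ORDER  BY l.pid;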


  • AD-DirectoryServices: .NET2.0 - Speaking architecture, approach and best practices... Suggestions?

    - by Will Marcouiller
    I've been mandated to write an application to migrate Active Directory access models to another environment. Here's the context:

    - I'm stuck with VB.NET 2005 and .NET Framework 2.0;
    - The application must use the Windows-authenticated user to manage AD;
    - The objects I have to handle are Groups, Users and OrganizationalUnits;
    - I intend to use the Façade design pattern to provide ease of use and fully reusable code;
    - I plan to write a factory for each of the objects managed (group, OU, user);
    - The use of Attributes should be useful here, I guess;
    - As everything is about the DirectoryEntry class when accessing the AD, it seems a good candidate for generic types.

    Obligatory features:

    1. User creates new OUs manually;
    2. User creates new groups manually;
    3. User creates new users (these users are service accounts) manually;
    4. Application reads an XML file which contains the OUs, groups and users to create;
    5. Application informs the user about the OUs, groups and users that shall be created;
    6. User specifies the domain environment where to migrate the XML input file's designated objects;
    7. User makes changes if needed, and launches the task operations;
    8. Application performs the operations required by the XML input file against the underlying AD, as specified by the user;
    9. Application informs the user upon completion.

    Linear features:

    - User fetches OUs, groups, users;
    - User changes OUs, groups, users;
    - User deletes OUs, groups, users;
    - The application logs AD entries and operations performed, plus errors and exceptions.

    Nice-to-have features:

    - Application rolls back operations on error or exception.

    I've been working for weeks now to get acquainted with AD and the System.DirectoryServices assembly, but I don't seem to find a way to be fully satisfied with what I'm doing, and I'm always looking for better. I have studied Bart De Smet's Linq to AD on CodePlex, but then again, I can't use it as I'm stuck with .NET 2.0, so no LINQ! But I've learned about Attributes, and seen that he works with generic types as he codes a DirectorySource class to perform the operations for OUs, groups and users.

    I have been able to add groups to the AD, and I have been able to add users to the AD (though the created user is automatically disabled?). I seem to get confused by the use of an LDAP path to add objects. For instance, one needs two instances of the System.DirectoryServices.DirectoryEntry class to add a group. Why is this?

    Any suggestions? Thanks for any help, code samples, ideas, architectural solutions, everything!
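
    On the "two instances" point, a minimal sketch (in C# for brevity; paths and names are placeholders) of the usual pattern: the first DirectoryEntry is bound to the parent container, and Children.Add returns the second one, the new child, which only exists in AD once CommitChanges is called:

        using System.DirectoryServices;

        static void CreateOuAndGroup()
        {
            using (DirectoryEntry parent =
                new DirectoryEntry("LDAP://OU=Root,DC=example,DC=com"))
            {
                // First entry: the bound parent. Second entry: the new child.
                DirectoryEntry ou = parent.Children.Add("OU=Migrated", "organizationalUnit");
                ou.CommitChanges();

                DirectoryEntry group = ou.Children.Add("CN=ServiceAccounts", "group");
                group.Properties["sAMAccountName"].Value = "ServiceAccounts";
                group.CommitChanges();
            }
        }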


  • Django Multi-Table Inheritance VS Specifying Explicit OneToOne Relationship in Models

    - by chefsmart
    Hope all this makes sense :) I'll clarify via comments if necessary. Also, I am experimenting with using bold text in this question, and will edit it out if I (or you) find it distracting. With that out of the way...

    Using django.contrib.auth gives us User and Group, among other useful things that I can't do without (like basic messaging). In my app I have several different types of users. A user can be of only one type. That would easily be handled by groups, with a little extra care. However, these different users are related to each other in hierarchies/relationships. Let's take a look at these users:

    - Principals - "top level" users
    - Administrators - each administrator reports to a Principal
    - Coordinators - each coordinator reports to an Administrator

    Apart from these there are other user types that are not directly related, but may get related later on. For example, "Company" is another type of user, and can have various "Products", and products may be supervised by a "Coordinator". "Buyer" is another kind of user that may buy products.

    Now all these users have various other attributes, some of which are common to all types of users and some of which are distinct to only one user type. For example, all types of users have to have an address. On the other hand, only the Principal user belongs to a "BranchOffice". Another point, which was stated above, is that a User can only ever be of one type. The app also needs to keep track of who created and/or modified Principals, Administrators, Coordinators, Companies, Products etc. (So that's two more links to the User model.)

    In this scenario, is it a good idea to use Django's multi-table inheritance as follows:

        from django.contrib.auth.models import User

        class Principal(User):
            # ...
            branchoffice = models.ForeignKey(BranchOffice)
            landline = models.CharField(blank=True, max_length=20)
            mobile = models.CharField(blank=True, max_length=20)
            created_by = models.ForeignKey(User, editable=False, blank=True,
                                           related_name="principalcreator")
            modified_by = models.ForeignKey(User, editable=False, blank=True,
                                            related_name="principalmodifier")
            # ...

    Or should I go about doing it like this:

        class Principal(models.Model):
            # ...
            user = models.OneToOneField(User, blank=True)
            branchoffice = models.ForeignKey(BranchOffice)
            landline = models.CharField(blank=True, max_length=20)
            mobile = models.CharField(blank=True, max_length=20)
            created_by = models.ForeignKey(User, editable=False, blank=True,
                                           related_name="principalcreator")
            modified_by = models.ForeignKey(User, editable=False, blank=True,
                                            related_name="principalmodifier")
            # ...

    Please keep in mind that there are other user types that are related via foreign keys, for example:

        class Administrator(models.Model):
            # ...
            principal = models.ForeignKey(Principal,
                help_text="The supervising principal for this Administrator")
            user = models.OneToOneField(User, blank=True)
            province = models.ForeignKey(Province)
            landline = models.CharField(blank=True, max_length=20)
            mobile = models.CharField(blank=True, max_length=20)
            created_by = models.ForeignKey(User, editable=False, blank=True,
                                           related_name="administratorcreator")
            modified_by = models.ForeignKey(User, editable=False, blank=True,
                                            related_name="administratormodifier")

    I am aware that Django uses a one-to-one relationship for multi-table inheritance behind the scenes. I am just not qualified enough to decide which is the more sound approach.
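
    A small usage sketch of the practical difference (field names taken from the models above): with multi-table inheritance the User fields are available directly on the subclass, while the explicit OneToOneField always goes through .user:

        # multi-table inheritance: a Principal IS a User (implicit user_ptr)
        p = Principal.objects.get(pk=1)
        print(p.username)        # inherited User field, no join spelled out

        # explicit one-to-one: a Principal HAS a User
        p = Principal.objects.get(pk=1)
        print(p.user.username)   # extra query unless select_related('user') is used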


  • Performance tuning of a Hibernate+Spring+MySQL project operation that stores images uploaded by user

    - by Umar
    Hi, I am working on a web project that is Spring+Hibernate+MySQL based. I am stuck at a point where I have to store images uploaded by a user into the database. Although I have written some code that works well for now, I believe that things will mess up when the project goes live. Here's my domain class that carries the image bytes:

        @Entity
        public class Picture implements java.io.Serializable {
            long id;
            byte[] data;
            ... // getters and setters
        }

    And here's my controller that saves the file on submit:

        public class PictureUploadFormController extends AbstractBaseFormController {
            ...
            protected ModelAndView onSubmit(HttpServletRequest request,
                    HttpServletResponse response, Object command,
                    BindException errors) throws Exception {
                MultipartFile file;
                // getting MultipartFile from the command object
                ...
                // beginning hibernate transaction
                ...
                Picture p = new Picture();
                p.setData(file.getBytes());
                pictureDAO.makePersistent(p); // simply calls getSession().saveOrUpdate(p)
                // committing hibernate transaction
                ...
            }
            ...
        }

    Obviously a bad piece of code. Is there any way I could use InputStream or Blob to save the data, instead of first loading all the bytes from the user into memory and then pushing them into the database?

    I did some research on Hibernate's support for Blob, and found this in the Hibernate in Action book:

        java.sql.Blob and java.sql.Clob are the most efficient way to handle large objects in Java. Unfortunately, an instance of Blob or Clob is only useable until the JDBC transaction completes. So if your persistent class defines a property of java.sql.Clob or java.sql.Blob (not a good idea anyway), you'll be restricted in how instances of the class may be used. In particular, you won't be able to use instances of that class as detached objects. Furthermore, many JDBC drivers don't feature working support for java.sql.Blob and java.sql.Clob. Therefore, it makes more sense to map large objects using the binary or text mapping type, assuming retrieval of the entire large object into memory isn't a performance killer. Note you can find up-to-date design patterns and tips for large object usage on the Hibernate website, with tricks for particular platforms.

    Now, apparently Blob cannot be used, as it is "not a good idea anyway"; what else could be used to improve the performance? I couldn't find any up-to-date design pattern or any useful information on the Hibernate website. So any help/recommendations from stackoverflowers will be much appreciated. Thanks.
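
    For the streaming route, a sketch of what this can look like on Hibernate 3.6+ with its LobCreator API (it assumes Picture.data is remapped to java.sql.Blob, with the detachment caveats the book quote warns about, and the variable names follow the controller above):

        import org.hibernate.Hibernate;
        import org.hibernate.Session;
        import org.hibernate.engine.jdbc.LobCreator;

        // inside the controller/DAO, within the transaction:
        Session session = pictureDAO.getSession();
        LobCreator lobCreator = Hibernate.getLobCreator(session);

        Picture p = new Picture();
        // stream the upload straight through to the JDBC driver; the bytes
        // are never buffered in a byte[] first
        p.setData(lobCreator.createBlob(file.getInputStream(), file.getSize()));
        session.saveOrUpdate(p);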


  • Executing a .NET Managed Assembly from SQL Server 2008 - Pros, Cons & Recommendations

    - by RPM1984
    Hi guys, looking for opinions/recommendations/links for the following scenario I'm currently facing.

    The platform:

    - .NET 4.0 web application
    - SQL Server 2008

    The task: overhaul a component of the system that performs (fairly) complex mathematical operations based on a specific user activity, and updates numerous tables in the database. A common user activity might be "Bob" deciding to post a forum topic. This results in (in the end solution) needing to look at various factors (about the post he made), then, after doing some math based on lookup values/ratios as well as other data in the database, inserting some other data as a result of these operations.

    The options: here's what I'm thinking. Although it would be much easier to do this in C# (LINQ to SQL), it doesn't make much sense, as the majority of the computations are based on values in the db, and it will get difficult to control/optimize/debug the LINQ over time. Hence, I'm leaning towards creating a managed assembly (a C# class library) that contains the lookup values (constants) as well as leveraging the math classes in the existing .NET BCL. Basically, I'd expose a few methods that can be called by the T-SQL stored procedures. This to me has the following advantages:

    - Simplicity of math. Do complex math in .NET vs complex math in T-SQL. No-brainer. =)
    - Abstraction of computations, configurable "lookup" values and business logic from raw T-SQL. T-SQL only needs to care about the data, simplifying the stored procedures and making them easier to maintain. When it needs to do math, it delegates to the managed assembly.

    So, having said that, I've never done this before (calling a .NET assembly from T-SQL), and after some googling the best site I could come up with is here, which is useful but outdated.

    So, what am I asking? Well, firstly, I need some better references on how to actually do this. "This" being how to call a C# .NET 4 assembly from within T-SQL stored procedures in SQL Server 2008. Secondly, who out there has done this, and what problems (if any) did you face? I realize it may be difficult to provide a "correct answer", so I'll give it to whoever answers with a combination of good links and a list of pros/cons/problems with this implementation. Cheers!
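
    For flavor, a minimal SQLCLR sketch (the formula is a placeholder; note also that SQL Server 2008 hosts the 2.0 CLR, so a .NET 4-targeted assembly won't load until SQL Server 2012, and the class library would need to target .NET 3.5 or lower):

        using System;
        using System.Data.SqlTypes;
        using Microsoft.SqlServer.Server;

        public static class PostMath
        {
            [SqlFunction(IsDeterministic = true, DataAccess = DataAccessKind.None)]
            public static SqlDouble ActivityScore(SqlDouble baseValue, SqlDouble ratio)
            {
                if (baseValue.IsNull || ratio.IsNull) return SqlDouble.Null;
                // placeholder math standing in for the real business rules
                return Math.Log(1 + baseValue.Value) * ratio.Value;
            }
        }

        /* Registration, roughly:
           CREATE ASSEMBLY PostMath FROM 'C:\libs\PostMath.dll' WITH PERMISSION_SET = SAFE;
           CREATE FUNCTION dbo.ActivityScore(@base FLOAT, @ratio FLOAT) RETURNS FLOAT
               AS EXTERNAL NAME PostMath.PostMath.ActivityScore;
        */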


  • CSS3's border-radius property and border-collapse:collapse don't mix. How can I use border-radius to create a collapsed, rounded-corner table?

    - by vamin
    Edit - Original title: Is there an alternative way to achieve border-collapse:collapse in CSS (in order to have a collapsed, rounded-corner table)? Since it turns out that simply getting the table's borders to collapse does not solve the root problem, I have updated the title to better reflect the discussion.

    I am trying to make a table with rounded corners using the CSS3 border-radius property. The table styles I'm using look something like this:

        table {
            -moz-border-radius: 10px;
            -webkit-border-radius: 10px;
            border-radius: 10px;
        }

    Here's the problem. I also want to set the border-collapse:collapse property, and when that is set, border-radius no longer works (at least in Firefox). (Edit: I thought this might just be a difference in Mozilla's implementation, but it turns out this is the way it's supposed to work according to the W3C.) Is there a CSS-based way I can get the same effect as border-collapse:collapse without actually using it?

    Edits: I've made a simple page to demonstrate the problem here (Firefox/Safari only). It seems that a large part of the problem is that setting the table to have rounded corners does not affect the corners of the corner td elements. If the table were all one color, this wouldn't be a problem, since I could just make the top and bottom td corners rounded for the first and last row respectively. However, I am using different background colors for the table to differentiate the headings and for striping, so the inner td elements would show their rounded corners as well.

    Summary of proposed solutions:

    - Surrounding the table with another element with round corners doesn't work, because the table's square corners "bleed through".
    - Specifying a border width of 0 doesn't collapse the table.
    - The bottom td corners are still square after setting cellspacing to zero.
    - Using JavaScript instead works, by avoiding the problem.

    Possible solutions: the tables are generated in PHP, so I could just apply a different class to each of the outer th/tds and style each corner separately. I'd rather not do this, since it's not very elegant and a bit of a pain to apply to multiple tables, so please keep the suggestions coming.

    Possible solution 2 is to use JavaScript (jQuery, specifically) to style the corners. This solution also works, but is still not quite what I'm looking for (I know I'm picky). I have two reservations: 1) this is a very lightweight site, and I'd like to keep JavaScript to the barest minimum; 2) part of the appeal that using border-radius has for me is graceful degradation and progressive enhancement. By using border-radius for all rounded corners, I hope to have a consistently rounded site in CSS3-capable browsers and a consistently square site in others (I'm looking at you, IE).

    I know that trying to do this with CSS3 today may seem needless, but I have my reasons. I would also like to point out that this problem is a result of the W3C specification, not poor CSS3 support, so any solution will still be relevant and useful when CSS3 has more widespread support.
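
    For reference, a sketch of the class-free variant of "possible solution 1" (it assumes :first-child/:last-child support, which rules out old IE): keep border-collapse:separate, remove the gaps with border-spacing, and round only the four corner cells:

        table {
            border-collapse: separate;
            border-spacing: 0;
            border-radius: 10px;
        }
        table tr:first-child th:first-child { border-top-left-radius: 10px; }
        table tr:first-child th:last-child  { border-top-right-radius: 10px; }
        table tr:last-child  td:first-child { border-bottom-left-radius: 10px; }
        table tr:last-child  td:last-child  { border-bottom-right-radius: 10px; }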


  • Is it possible to reliably auto-decode user files to Unicode? [C#]

    - by NVRAM
    I have a web application that allows users to upload their content for processing. The processing engine expects UTF-8 (and I'm composing XML from multiple users' files), so I need to ensure that I can properly decode the uploaded files.

    Since I'd be surprised if any of my users even knew their files were encoded, I have very little hope they'd be able to correctly specify the encoding (decoder) to use. And so, my application is left with the task of detecting before decoding. This seems like such a universal problem that I'm surprised not to find either a framework capability or a general recipe for the solution. Can it be I'm not searching with meaningful search terms?

    I've implemented BOM-aware detection (http://en.wikipedia.org/wiki/Byte_order_mark), but I'm not sure how often files will be uploaded without a BOM to indicate their encoding, and this isn't useful for most non-UTF files.

    My questions boil down to:

    1. Is BOM-aware detection sufficient for the vast majority of files?
    2. In the case where BOM detection fails, is it possible to try different decoders and determine if they are "valid"? (My attempts indicate the answer is "no.")
    3. Under what circumstances will a "valid" file fail with the C# encoder/decoder framework?
    4. Is there a repository anywhere that has a multitude of files with various encodings to use for testing?

    While I'm specifically asking about C#/.NET, I'd like to know the answer for Java, Python and other languages for the next time I have to do this.

    So far I've found:

    - A "valid" UTF-16 file with Ctrl-S characters caused encoding to UTF-8 to throw an exception (illegal character?) (that was an XML encoding exception).
    - Decoding a valid UTF-16 file with UTF-8 succeeds but gives text with null characters. Huh?

    Currently, I only expect UTF-8, UTF-16 and probably ISO-8859-1 files, but I want the solution to be extensible if possible. My existing set of input files isn't nearly broad enough to uncover all the problems that will occur with live files. Although the files I'm trying to decode are "text", I think they are often created with methods that leave garbage characters in them. Hence "valid" files may not be "pure". Oh joy.

    Thanks.
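
    On question 2, a small sketch of the "try a strict decoder" idea. It does work for UTF-8 specifically, because invalid byte sequences throw; single-byte codepages like ISO-8859-1 accept any input and can never be ruled out this way, and the embedded NULs observed in the second finding above are themselves valid UTF-8, so a NUL scan is a useful extra heuristic for BOM-less UTF-16:

        using System;
        using System.IO;
        using System.Text;

        class EncodingProbe
        {
            static bool TryDecode(byte[] data, Encoding strict, out string text)
            {
                try { text = strict.GetString(data); return true; }
                catch (DecoderFallbackException) { text = null; return false; }
            }

            static void Main(string[] args)
            {
                byte[] upload = File.ReadAllBytes(args[0]);
                // strict UTF-8: no BOM on output, throw on invalid input bytes
                Encoding utf8Strict = new UTF8Encoding(false, true);
                string text;
                Console.WriteLine(TryDecode(upload, utf8Strict, out text)
                    ? "decodes as valid UTF-8" : "not valid UTF-8");
            }
        }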


  • SpeechSynthesizer.SetOutputToAudioStream audio format problem

    - by Chris Kugler
    Hi, I'm currently working on an application which requires transmission of speech encoded in a specific audio format.

        System.Speech.AudioFormat.SpeechAudioFormatInfo synthFormat =
            new System.Speech.AudioFormat.SpeechAudioFormatInfo(
                System.Speech.AudioFormat.EncodingFormat.Pcm,
                8000, 16, 1, 16000, 2, null);

    This states that the audio is in PCM format, 8000 samples per second, 16 bits per sample, mono, 16000 average bytes per second, block alignment of 2. When I attempt to execute the following code, nothing is written to my MemoryStream instance; however, when I change from 8000 samples per second up to 11025, the audio data is written successfully.

        SpeechSynthesizer synthesizer = new SpeechSynthesizer();
        waveStream = new MemoryStream();

        PromptBuilder pbuilder = new PromptBuilder();
        PromptStyle pStyle = new PromptStyle();
        pStyle.Emphasis = PromptEmphasis.None;
        pStyle.Rate = PromptRate.Fast;
        pStyle.Volume = PromptVolume.ExtraLoud;

        pbuilder.StartStyle(pStyle);
        pbuilder.StartParagraph();
        pbuilder.StartVoice(VoiceGender.Male, VoiceAge.Teen, 2);
        pbuilder.StartSentence();
        pbuilder.AppendText("This is some text.");
        pbuilder.EndSentence();
        pbuilder.EndVoice();
        pbuilder.EndParagraph();
        pbuilder.EndStyle();

        synthesizer.SetOutputToAudioStream(waveStream, synthFormat);
        synthesizer.Speak(pbuilder);
        synthesizer.SetOutputToNull();

    There are no exceptions or errors recorded when using a sample rate of 8000, and I couldn't find anything useful in the documentation regarding SetOutputToAudioStream and why it succeeds at 11025 samples per second but not 8000. I have a workaround involving a wav file that I generated and converted to the correct sample rate using some sound editing tools, but I would like to generate the audio from within the application if I can. One particular point of interest was that the SpeechRecognitionEngine accepts that audio format and successfully recognized the speech in my synthesized wave file...

    Update: Recently discovered that this audio format succeeds for certain installed voices, but fails for others. It fails specifically for LH Michael and LH Michelle, and failure varies for certain voice settings defined in the PromptBuilder.


  • Realtime Twitter Replies?

    - by ejunker
    I have created Twitter bots for many geographic locations. I want to allow users to @-reply to the Twitter bot with commands and then have the bot respond with the results. I would like the bot to reply to the user as quickly as possible (realtime). Apparently, Twitter used to have an XMPP/Jabber interface that would provide this type of realtime feed of replies, but it was shut down.

    As I see it, my options are to use one of the following:

    - REST API: This would involve polling every X minutes for each bot. The problem is that it is not realtime, and each Twitter account would have to be polled separately.
    - Search API: The search API does allow specifying a "to:" operator in the search, so replies to all bots could be aggregated in a search such as "to:bot1 OR to:bot2...". Though if you have hundreds of bots, the search string would get very long and probably exceed the maximum length of a GET request.
    - Streaming API: The streaming API looks very promising, as it provides realtime results. The API allows you to specify follow and track parameters. follow is not useful, as the bot does not know who will be sending it commands. track allows you to specify keywords to track. This could possibly work by creating a daemon process that connects to the Streaming API and tracks all references to the bots' names. Once again, since there are lots of bots to track, the length and complexity of the query may be an issue. Another idea would be to track a special hashtag such as #botcommand; a user could then send a command using the syntax @bot1 weather #botcommand. Then, by using the Streaming API to track all references to #botcommand, you would get a realtime stream of all the commands. Further parsing could then be done to determine which bot to send each command to.
    - Third-party service: Are there any third-party companies that have access to the Twitter firehose and offer realtime data? I haven't investigated these, but here are a few that I have found: Gnip, Tweet.IM, excla.im, and TwitterSpy (which seems to use polling, not realtime).

    I'm leaning towards using the Streaming API. Is there a better way to get near-realtime @-replies for many (hundreds of) Twitter accounts?


  • IQueryable and lazy loading

    - by Nelson
    I'm having a hard time determining the best way to handle this... With Entity Framework (and L2S), LINQ queries return IQueryable. I have read various opinions on whether the DAL/BLL should return IQueryable, IEnumerable or IList. Assuming we go with IList, the query is run immediately, and control is not passed on to the next layer. This makes it easier to unit test, etc. You lose the ability to refine the query at higher levels, but you could simply create another method that allows you to refine the query and still return IList. And there are many more pros/cons.

    So far so good. Now come Entity Framework and lazy loading. I am using POCO objects with proxies in .NET 4/VS 2010. In the presentation layer I do:

        foreach (Order order in bll.GetOrders())
        {
            foreach (OrderLine orderLine in order.OrderLines)
            {
                // Do something
            }
        }

    In this case, GetOrders() returns IList, so it executes immediately, before returning to the PL. But in the nested foreach you have lazy loading, which executes multiple SQL queries as it gets all the OrderLines. So basically, the PL is running SQL queries "on demand" in the wrong layer.

    Is there any sensible way to avoid this? I could turn lazy loading off, but then what's the point of having this "feature" that everyone was complaining EF1 didn't have? And I'll admit it is very useful in many scenarios. So I see several options:

    1. Somehow remove all associations in the entities and add methods to return them. This goes against the default EF behavior/code generation and makes it harder to do some composite (multiple-entity) LINQ queries. It seems like a step backwards. I vote no.
    2. If we have lazy loading anyway, which makes it hard to unit test, then go all the way and return IQueryable. You'll have more control farther up the layers. I still don't think this is a good option, because IQueryable ties you to L2S, L2E, or your own full implementation of IQueryable. Lazy loading may run queries "on demand", but it doesn't tie you to any specific interface. I vote no.
    3. Turn off lazy loading. You'll have to handle your associations manually. This could be done with eager loading's .Include() (see the sketch below). I vote yes in some specific cases.
    4. Keep IList and lazy loading. I vote yes in many cases, only due to the troubles with the others.

    Any other options or suggestions? I haven't found an option that really convinces me.
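
    For option 3, a sketch of what the eager-loading variant of the example above might look like (the context and repository names are assumed):

        using System.Collections.Generic;
        using System.Linq;

        public class OrderRepository
        {
            public IList<Order> GetOrdersWithLines()
            {
                using (var context = new ShopEntities()) // hypothetical ObjectContext
                {
                    return context.Orders
                                  .Include("OrderLines") // children load in the same query
                                  .ToList();             // executes here, inside the DAL
                }
            }
        }

    The PL's nested foreach then iterates over in-memory collections and triggers no further SQL.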


  • XNA - Mouse coordinates to world space transformation

    - by Gabriel Butcher
    I have a pretty annoying problem. I would like to create a drawing program using a WinForms + XNA combo. The most important part would be to transform the mouse position into the XNA-drawn grid. I was able to make it work for translations, but only if I don't zoom in; when I do, the coordinates go horribly wrong. And I have no idea what I'm doing wrong. I tried transforming with the scaling matrix, transforming with the inverse scaling matrix, and multiplying by the zoom, but none of them seem to work.

    In the beginning (with a zoom value of 1) the grid starts at (0,0,0) and goes to (Width, Height, 0). I was able to get coordinates based on this grid as long as the zoom value didn't change at all. I am using a custom shader, with an orthographic projection matrix, an identity view matrix, and the transformed world matrix. Here are the two main methods:

        internal void Update(RenderData data)
        {
            KeyboardState keyS = Keyboard.GetState();
            MouseState mouS = Mouse.GetState();

            if (ButtonState.Pressed == mouS.RightButton)
            {
                camTarget.X -= (float)(mouS.X - oldMstate.X) / 2;
                camTarget.Y += (float)(mouS.Y - oldMstate.Y) / 2;
            }

            if (ButtonState.Pressed == mouS.MiddleButton || keyS.IsKeyDown(Keys.Space))
            {
                zVal += (float)(mouS.Y - oldMstate.Y) / 10;
                zoom = (float)Math.Pow(2, zVal);
            }

            oldKState = keyS;
            oldMstate = mouS;

            world = Matrix.CreateTranslation(new Vector3(-camTarget.X, -camTarget.Y, 0))
                  * Matrix.CreateScale(zoom / 2);
        }

        internal PointF MousePos
        {
            get
            {
                Vector2 mousePos = new Vector2(Mouse.GetState().X, Mouse.GetState().Y);
                Matrix trans = Matrix.CreateTranslation(
                    new Vector3(camTarget.X - (Width / 2), -camTarget.Y + (Height / 2), 0));
                mousePos = Vector2.Transform(mousePos, trans);
                return new PointF(mousePos.X, mousePos.Y);
            }
        }

    The second method should return the coordinates of the mouse cursor based on the grid (where the (0,0) point of the grid is the top-left corner), but it just doesn't work. I deleted the zoom transformation from the matrix trans, as I wasn't able to get any useful result with it (most of the time the coordinates were horribly wrong, often in the many thousands, when the grid's size is 500x500).

    Any idea or suggestion? I've been trying to solve this simple problem for two days now :\
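
    One direction worth sketching (untested against the setup above; it assumes the orthographic projection maps one world unit to one pixel with the origin at the window center): instead of hand-building a separate translation, invert the exact world matrix used for drawing, so zoom and pan can never drift out of sync:

        internal Vector2 MouseToWorld()
        {
            MouseState ms = Mouse.GetState();
            // center the screen coordinates and flip Y to match the ortho axes
            Vector2 centered = new Vector2(ms.X - (Width / 2f), (Height / 2f) - ms.Y);
            // undo the same world matrix that Update() builds for drawing
            return Vector2.Transform(centered, Matrix.Invert(world));
        }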


  • How to manage multiple versions of the same record

    - by Darvis Lombardo
    I am doing short-term contract work for a company that is trying to implement a check-in/check-out type of workflow for their database records. Here's how it should work:

    1. A user creates a new entity within the application. There are about 20 related tables that will be populated in addition to the main entity table.
    2. Once the entity is created, the user will mark it as the master.
    3. Another user can make changes to the master only by "checking out" the entity. Multiple users can check out the entity at the same time.
    4. Once the user has made all the necessary changes to the entity, they put it in a "needs approval" status.
    5. After an authorized user reviews the entity, they can promote it to master, which will put the original record in a tombstoned status.

    The way they are currently accomplishing the "check out" is by duplicating the entity records in all the tables. The primary keys include EntityID + EntityDate, so they duplicate the entity records in all related tables with the same EntityID and an updated EntityDate, and give it a status of "checked out". When the record is put into the next state (needs approval), the duplication occurs again. Eventually it will be promoted to master, at which time the final record is marked as master and the original master is marked as dead.

    This design seems hideous to me, but I understand why they've done it. When someone looks up an entity from within the application, they need to see all current versions of that entity. This was a very straightforward way of making that happen. But the fact that they are representing the same entity multiple times within the same table(s) doesn't sit well with me, nor does the fact that they are duplicating EVERY piece of data rather than only storing deltas.

    I would be interested in hearing your reaction to the design, whether positive or negative. I would also be grateful for any resources you can point me to that might be useful for seeing how someone else has implemented such a mechanism. Thanks!
    Darvis
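
    For comparison, one common alternative shape (a hypothetical sketch, not a prescription): keep one master row per entity and push the copies into a dedicated version table, so a check-out adds version rows instead of duplicating the entity across all ~20 tables:

        -- One row per logical entity; points at whichever version is master.
        CREATE TABLE Entity (
            EntityID         INT IDENTITY PRIMARY KEY,
            CurrentVersionID INT NULL
        );

        -- One row per checked-out/approved/dead copy.
        CREATE TABLE EntityVersion (
            VersionID INT IDENTITY PRIMARY KEY,
            EntityID  INT NOT NULL REFERENCES Entity(EntityID),
            Status    VARCHAR(20) NOT NULL, -- 'checked out', 'needs approval', 'master', 'dead'
            CreatedBy INT NOT NULL,
            CreatedAt DATETIME NOT NULL DEFAULT GETDATE()
            -- ...entity attributes, or deltas against the master, go here...
        );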


  • SQLAlchemy unsupported type error - and table design issues?

    - by Az
    Hi there, back again with some more SQLAlchemy shenanigans. Let me step through this. My table is now set up like so:

        engine = create_engine('sqlite:///:memory:', echo=False)
        metadata = MetaData()

        students_table = Table('studs', metadata,
            Column('sid', Integer, primary_key=True),
            Column('name', String),
            Column('preferences', Integer),
            Column('allocated_rank', Integer),
            Column('allocated_project', Integer)
        )
        metadata.create_all(engine)

        mapper(Student, students_table)

    Fairly simple, and for the most part I've been enjoying the ability to query almost any bit of information I want, provided I avoid the error cases below. The class it is mapped from is:

        class Student(object):
            def __init__(self, sid, name):
                self.sid = sid
                self.name = name
                self.preferences = collections.defaultdict(set)
                self.allocated_project = None
                self.allocated_rank = 0

            def __repr__(self):
                return str(self)

            def __str__(self):
                return "%s %s" % (self.sid, self.name)

    Explanation: preferences is basically a set of all the projects the student would prefer to be assigned. When the allocation algorithm kicks in, a student's allocated_project emerges from this preference set. Now if I try to do this:

        for student in students.itervalues():
            session.add(student)
        session.commit()

    it throws two errors, one for the allocated_project column (seen below) and a similar error for the preferences column:

        sqlalchemy.exc.InterfaceError: (InterfaceError) Error binding parameter 4 -
        probably unsupported type.
        u'INSERT INTO studs (sid, name, allocated_rank, allocated_project)
        VALUES (?, ?, ?, ?, ?, ?, ?)'
        [1101, 'Muffett,M.', 1, 888 Human-spider relationships (Supervisor id: 123)]

    If I go back into my code, I find that when I'm copying the preferences from the given text files, it actually refers to the Project class, which is mapped to a dictionary using the unique project ids (pid) as keys. Thus, as I iterate through each student via their rank and add to the preferences set, it adds not a project id, but the reference to that project from the projects dictionary:

        students[sid].preferences[int(rank)].add(projects[int(pid)])

    Now this is very useful to me, since I can find out all I want to about a student's preferred projects without having to run another check to pull up information about the project id. The form you see in the error has the object print information passed as:

        return "%s %s (Supervisor id: %s)" % (self.proj_id, self.proj_name, self.proj_sup)

    My questions are:

    1. I'm trying to store an object in a database field, aren't I?
    2. Would the correct way then be to copy the project information (project id, name, etc.) into its own table, referenced by the unique project id? That way I can just have the project id field in the student table be an integer id, and when I need more information, just join the tables? So on and so forth for other tables?
    3. If the above makes sense, then how does one maintain the relationship with a column of information in one table which is a key index in another table? Does this boil down to a database design problem? Are there any other elegant ways of accomplishing this?

    Apologies if this is a very long-winded question. It's rather crucial for me to solve this, so I've tried to explain as much as I can, whilst attempting to show that I'm trying (key word here, sadly) to understand what could be going wrong.
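
    On questions 2 and 3, a sketch of the usual ORM shape (classic-mapper style to match the code above; a Project class is assumed, and the metadata object is reused from the snippet at the top): give projects their own table and let a ForeignKey plus relation() keep Student.allocated_project a real Project object while the database stores only an integer:

        from sqlalchemy import Table, Column, Integer, String, ForeignKey
        from sqlalchemy.orm import mapper, relation

        projects_table = Table('projects', metadata,
            Column('pid', Integer, primary_key=True),
            Column('proj_name', String),
            Column('proj_sup', Integer)
        )

        students_table = Table('studs', metadata,
            Column('sid', Integer, primary_key=True),
            Column('name', String),
            Column('allocated_rank', Integer),
            Column('allocated_project_id', Integer, ForeignKey('projects.pid'))
        )

        mapper(Project, projects_table)
        mapper(Student, students_table, properties={
            # the attribute holds a Project instance; the column holds its pid
            'allocated_project': relation(Project)
        })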

    Read the article

  • MSSQL 2005: Update rows in a specified order (like ORDER BY)?

    - by JMTyler
    I want to update rows of a table in a specific order, like one would expect if including an ORDER BY clause, but MS SQL does not support the ORDER BY clause in UPDATE queries. I have checked out this question, which supplied a nice solution, but my query is a bit more complicated than the one specified there:

        UPDATE TableA AS Parent
        SET Parent.ColA = Parent.ColA +
            (SELECT TOP 1 Child.ColA
             FROM TableA AS Child
             WHERE Child.ParentColB = Parent.ColB
             ORDER BY Child.Priority)
        ORDER BY Parent.Depth DESC;

    What I'm hoping you'll notice is that a single table (TableA) contains a hierarchy of rows, wherein one row can be the parent or child of any other row. The rows need to be updated in order, from the deepest child up to the root parent. This is because TableA.ColA must contain an up-to-date concatenation of its own current value with the values of its children (I realize this query only concatenates with one child, but that is for the sake of simplicity; the example doesn't need to be any more verbose than that), therefore the query must update from the bottom up.

    The solution suggested in the question I noted above is as follows:

        UPDATE messages
        SET status = 10
        WHERE ID IN (SELECT TOP (10) Id
                     FROM Table
                     WHERE status = 0
                     ORDER BY priority DESC);

    The reason I don't think I can use this solution is that I reference column values from the parent table inside my subquery (see WHERE Child.ParentColB = Parent.ColB), and I don't think two sibling subqueries would have access to each other's data. So far I have determined only one way to merge that suggested solution with my current problem, and I don't think it works:

        UPDATE TableA AS Parent
        SET Parent.ColA = Parent.ColA +
            (SELECT TOP 1 Child.ColA
             FROM TableA AS Child
             WHERE Child.ParentColB = Parent.ColB
             ORDER BY Child.Priority)
        WHERE Parent.Id IN (SELECT Id FROM TableA ORDER BY Parent.Depth DESC);

    The WHERE..IN subquery will not actually return a subset of the rows; it will just return the full list of IDs in the order that I want. However (I don't know for sure, please tell me if I'm wrong), I think the WHERE..IN clause will not care about the order of IDs within the parentheses. It will just check the ID of each row it currently wants to update to see if it's in that list (which they all are), in whatever order it was already trying to update, which would be a total waste of cycles because it wouldn't change anything.

    In conclusion: I have looked around and can't seem to figure out a way to update in a specified order (and I've included the reason I need that order, because I'm sure I would otherwise get the ever-so-useful "why?" answers), so I am now hitting up Stack Overflow to see if any of you gurus out there who know more about SQL than I do (which isn't saying much) know of an efficient way to do this. It's particularly important that I only use a single query to complete this action. A long question, but I wanted to cover my bases and give you guys as much info to feed off of as possible. :) Any thoughts?
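
    If the single-query constraint can be bent to "a single batch", one common workaround is to drive the update level by level, deepest first. Within one depth level the order no longer matters, because every child sits at a strictly greater depth and has therefore already been updated. A sketch only, assuming the Depth column from the question is populated and ColA is a string being concatenated:

        DECLARE @Depth int;
        SELECT @Depth = MAX(Depth) FROM TableA;

        WHILE @Depth >= 0
        BEGIN
            -- Rows at this depth; all of their children are deeper and already updated.
            UPDATE Parent
            SET ColA = Parent.ColA + ISNULL(
                    (SELECT TOP 1 Child.ColA
                     FROM TableA AS Child
                     WHERE Child.ParentColB = Parent.ColB
                     ORDER BY Child.Priority), '')
            FROM TableA AS Parent
            WHERE Parent.Depth = @Depth;

            SET @Depth = @Depth - 1;
        END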

    Read the article

  • How to configure Jetty to reload a WebAppContext when classes are changed

    - by Guss
    I'm developing a web application and I run Jetty as the development and testing environment when I develop under Eclipse. When I make changes to Java classes, Eclipse automatically compiles them to the build directory, but Jetty won't see the changes until I stop and start the server. I know that Jetty supports "hot deployment" using a ContextDeployer that will refresh updated application contexts, but it relies on a context file in a context directory being updated, which is not very useful in my case.

    Is there a way to set up Jetty so that it will reload the web app when any of the classes it uses is updated? My current jetty.xml looks something like this:

        <?xml version="1.0"?>
        <!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd">
        <Configure id="Server" class="org.eclipse.jetty.server.Server">
          <Set name="ThreadPool"><!-- bla bla --></Set>
          <Call name="addConnector"><!-- bla bla --></Call>
          <Set name="handler">
            <New id="Handlers" class="org.eclipse.jetty.server.handler.HandlerCollection">
              <Set name="handlers">
                <Array type="org.eclipse.jetty.server.Handler">
                  <Item>
                    <New id="webapp" class="org.eclipse.jetty.webapp.WebAppContext">
                      <Set name="displayName">My Web App</Set>
                      <Set name="resourceBase">src/main/webapp</Set>
                      <Set name="descriptor">src/main/webapp/WEB-INF/web.xml</Set>
                      <Set name="contextPath">/mywebapp</Set>
                    </New>
                  </Item>
                  <Item>
                    <New id="DefaultHandler" class="org.eclipse.jetty.server.handler.DefaultHandler"/>
                  </Item>
                </Array>
              </Set>
            </New>
          </Set>
        </Configure>
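
    Not a jetty.xml answer, but if the project happens to build with Maven, the jetty-maven-plugin can watch the compiled classes and redeploy the context automatically. A sketch under that assumption (the scan interval and context path are illustrative):

        <plugin>
          <groupId>org.eclipse.jetty</groupId>
          <artifactId>jetty-maven-plugin</artifactId>
          <configuration>
            <!-- Rescan the build output every 5 seconds; redeploy on change -->
            <scanIntervalSeconds>5</scanIntervalSeconds>
            <webAppConfig>
              <contextPath>/mywebapp</contextPath>
            </webAppConfig>
          </configuration>
        </plugin>

    Running mvn jetty:run would then keep the running context in sync with Eclipse's build output, at the cost of a full context restart on each change.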

    Read the article
