Search Results

Search found 11409 results on 457 pages for 'large teams'.


  • Seeking suggestions on redesigning the interface

    - by ratkok
    As part of maintaining a large piece of legacy code, we need to change part of the design, mainly to make it more testable (unit testing). One of the issues we need to resolve is the existing interface between components. The interface between two components is a class that contains static methods only. Simplified example:

        class ABInterface {
            static void methodA();
            static void methodB();
            ...
            static void methodZ();
        };

    The interface is used by component A so that its various methods can call ABInterface::methodA() to prepare some input data and then invoke the appropriate functions within component B. Now we are trying to redesign this interface for several reasons:

    - Extending our unit test coverage - we need to break this dependency between the components and introduce stubs/mocks.
    - The interface between these components has diverged from the original design (i.e. a lot of the newer functions used for the inter-component interface were created outside this interface class).
    - The code is old, has changed a lot over time, and needs to be refactored.

    The change should not be disruptive for the rest of the system, and we want to avoid leaving many test-only artifacts in the production code. Performance is very important, so there should be no (or very minimal) degradation after the redesign. The code is OO C++. I am looking for ideas on what approach to take. Any suggestions on how to do this efficiently?
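    One option worth sketching (my example, not part of the original question; all names are made up) is to put a small abstract interface in front of the static API, so component A depends on an abstraction that tests can stub while production code pays only a virtual call to forward to the existing statics:

        // Hedged sketch: an abstract seam in front of the existing static interface.
        // IComponentB and ABInterfaceAdapter are illustrative names only.
        class IComponentB {
        public:
            virtual ~IComponentB() {}
            virtual void methodA() = 0;
            virtual void methodB() = 0;
        };

        // Production implementation forwards to the legacy static interface,
        // so the runtime cost is one virtual call per invocation.
        class ABInterfaceAdapter : public IComponentB {
        public:
            virtual void methodA() { ABInterface::methodA(); }
            virtual void methodB() { ABInterface::methodB(); }
        };

        // Component A takes an IComponentB& (injected at construction); unit tests
        // pass a mock or stub implementation instead of ABInterfaceAdapter.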

    Read the article

  • apache solr : sum of data resulted from group by

    - by Terance Dias
    Hi, we have a requirement where we need to group our records by a particular field and take the sum of a corresponding numeric field, e.g.:

        select userid, sum(click_count) from user_action group by userid;

    We are trying to do this using Apache Solr and found two ways of doing it:

    1. Using the field collapsing feature (http://blog.jteam.nl/2009/10/20/result-grouping-field-collapsing-with-solr/), but we found two problems with this:
       1.1. It is not part of a release and is only available as a patch, so we are not sure we can use it in production.
       1.2. We do not get the sum back, only individual counts, so we would need to sum them on the client side.
    2. Using the Stats Component along with faceted search (http://wiki.apache.org/solr/StatsComponent). This meets our requirement but is not fast enough for very large data sets.

    I just wanted to know if anybody knows of any other way to achieve this. Appreciate any help. Thanks, Terance.
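    For reference, the Stats Component approach from option 2 is driven by query parameters along these lines (field names taken from the example above; the exact syntax depends on the Solr version, so treat this as an illustration):

        q=*:*&stats=true&stats.field=click_count&stats.facet=userid

    This returns aggregate values (including the sum of click_count) for each userid facet value in a single request.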

    Read the article

  • Packet fragmentation when sending data via SSLStream

    - by Ive
    When using an SslStream to send a 'large' chunk of data (1 MB) to an already-authenticated client, the packet fragmentation/disassembly I'm seeing is FAR greater than when using a normal NetworkStream. Using an async read on the client (i.e. BeginRead()), the ReadCallback is repeatedly called with exactly the same size chunk of data up until the final packet (the remainder of the data). With the data I'm sending (a zip file), the segments happen to be 16363 bytes long. Note: my receive buffer is much bigger than this, and changing its size has no effect.

    I understand that SSL encrypts data in chunks no bigger than 18 KB, but since SSL sits on top of TCP, I wouldn't think the number of SSL chunks would have any relevance to the TCP packet fragmentation? Essentially, the data is taking about 20 times longer to be fully read by the client than with a standard NetworkStream (both on localhost!). What am I missing?

    EDIT: I'm beginning to suspect that the receive (or send) buffer size of an SslStream is limited. Even if I use synchronous reads (i.e. SslStream.Read()), no more data ever becomes available, regardless of how long I wait before attempting to read. This would be the same behavior as if I were to limit the receive buffer to 16363 bytes. Setting the underlying NetworkStream's SendBufferSize (on the server) and ReceiveBufferSize (on the client) has no effect.
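    A detail worth illustrating (my sketch, not from the original question): a single SslStream.Read generally returns at most one decrypted TLS record, which is why each callback sees roughly 16 KB, so client code normally loops until the expected number of bytes has arrived. Assuming the total length is known up front (for example, sent as a header before the payload):

        // Hedged sketch: accumulate reads until 'expected' bytes have arrived.
        // 'sslStream' and 'expected' are assumed to be provided by the caller.
        byte[] buffer = new byte[expected];
        int offset = 0;
        while (offset < expected)
        {
            int read = sslStream.Read(buffer, offset, expected - offset);
            if (read == 0)
                throw new System.IO.IOException("Connection closed before the full payload arrived.");
            offset += read;
        }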

    Read the article

  • Is there a more efficent way to randomise a set of linq results?

    - by Matthew De'Loughry
    Hi, just wondering if you could help. I've written a function to bring back a random set of submissions depending on the amount passed to it, but I worry that even though it works now with a small amount of data, it will become inefficient and cause problems when a large amount is passed through. Just wondering if you could suggest a more efficient way of doing the following:

        public List<Submission> GetRandomWinners(int id)
        {
            List<Submission> submissions = new List<Submission>();

            int amount = (DbContext().Competitions
                .Where(s => s.CompetitionId == id).FirstOrDefault()).NumberWinners;

            for (int i = 1; i <= amount; i++)
            {
                bool added = false;
                while (!added)
                {
                    bool found = false;
                    var randSubmissions = DbContext().Submissions
                        .Where(s => s.CompetitionId == id && s.CorrectAnswer).ToList();
                    int count = randSubmissions.Count();
                    int index = new Random().Next(count);
                    foreach (var sub in submissions)
                    {
                        if (sub == randSubmissions.Skip(index).FirstOrDefault())
                            found = true;
                    }
                    if (!found)
                    {
                        submissions.Add(randSubmissions.Skip(index).FirstOrDefault());
                        added = true;
                    }
                }
            }
            return submissions;
        }

    As I say, I have this fully working and bringing back the wanted result; I just don't like the foreach and while checks in there, and my head has turned to mush trying to come up with something better. Thanks, Matt
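    One possible rework (a sketch only, not the original author's final code) is to query the eligible submissions once, shuffle them in memory, and take the first N, which removes the nested retry loops and the repeated database round trips:

        // Hedged sketch: one query, one shuffle, no retry loop.
        public List<Submission> GetRandomWinners(int id)
        {
            var db = DbContext();

            int amount = db.Competitions
                .Where(c => c.CompetitionId == id)
                .Select(c => c.NumberWinners)
                .FirstOrDefault();

            return db.Submissions
                .Where(s => s.CompetitionId == id && s.CorrectAnswer)
                .ToList()                         // materialize, then shuffle in memory
                .OrderBy(_ => Guid.NewGuid())     // cheap random ordering
                .Take(amount)
                .ToList();
        }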

    Read the article

  • Advanced search engine or server for relational database [closed]

    - by Pawel
    In my current project we are storing a large volume of data in a relational database. One of the recent key requirements is to enrich the application by adding some advanced search capabilities. Performance is one of the important factors in this project, due to very large tables (10+ million records) with parent-child relations (for example, a multi-level parent-child relationship where I am looking for all parents with specific children). The search engine should also be able to check these references for hits.

    I have found some potential engines on Stack Overflow; however, it looks like all of them are geared towards text search rather than relational databases, and are hosted on Linux:

    - Lucene
    - Solr
    - Sphinx

    As I understand it, some of them use documents as the source for searching, but is it possible or efficient to create such documents programmatically from my relational data? As I am not familiar with all of their features and capabilities, can anyone make some recommendations or propose a different solution? To summarize my requirements:

    - a framework/engine to search a relational database, including descendants
    - support for Microsoft SQL Server
    - can be used in .NET applications
    - preferably hosted on Windows systems

    Do any of the engines mentioned above solve my problem, or do you know of a better solution?

    Read the article

  • [OT a bit] Flex+JEE what is it good for?

    - by Zenzen
    OK, so sorry for being, I guess, a bit off topic, but I still think this is the best place to ask. My new semester just started (don't worry, I won't ask you to do my homework) and this time we have a rather cool subject about web programming in general, where we have to build a web service, web app - whatever, as long as it's "web". Here's the problem though: my team and I want to do it with Flex and JEE, but we don't have much experience with what they are actually used for. I mean, we know you can do virtually anything with them, but we don't really want to lose time doing something useless.

    My first idea was to do a "brainstorming" 3D room/service - a place where people could log in and have a video conference, a whiteboard, a place to upload pictures everyone could see, some toolbars for Google, YouTube etc., plus some other features which would make real-time brainstorming easy when you can't get everyone in one place. But is Flex+JEE really suitable? I mean, I'm 99% sure it's doable, but is it really worth doing in Flex+JEE, or was the whole purpose of JEE completely different?

    EDIT: Well, this was only one of our ideas, obviously. I do know the basics of JSP, Servlets, JPA etc., but yes, the main goal of this project is to get some actual experience. The problem is we don't really know whether it is worth doing something like, say, a social network for gamers (something like an extended Facebook; it doesn't really matter if it already exists) in JEE, or whether it would only look ridiculous (because PHP or whatever would be a far better choice). The bottom line is that we are wondering whether only large-scale applications (for banks etc.) are written in JEE, or whether it is good for anything, even smaller projects.

    Read the article

  • Program structure in long running data processing python script

    - by fmark
    For my current job I am writing some long-running (think hours to days) scripts that do CPU-intensive data processing. The program flow is very simple - it proceeds into the main loop, completes the main loop, saves output and terminates. The basic structure of my programs tends to be like so:

        <import statements>
        <constant declarations>
        <misc function declarations>

        def main():
            for blah in blahs():
                <lots of local variables>
                <lots of tightly coupled computation>
                for something in somethings():
                    <lots more local variables>
                    <lots more computation>
                <etc., etc.>
            <save results>

        if __name__ == "__main__":
            main()

    This gets unmanageable quickly, so I want to refactor it into something more maintainable, without sacrificing execution speed. Each chunk of code relies on a large number of variables, however, so refactoring parts of the computation out into functions would make the parameter lists grow out of hand very quickly. Should I put this sort of code into a Python class, and change the local variables into class variables? It doesn't make a great deal of sense to me conceptually to turn the program into a class, as the class would never be reused and only one instance would ever be created per run. What is the best-practice structure for this kind of program? I am using Python, but the question is relatively language-agnostic, assuming modern object-oriented language features.
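    One possible shape (a sketch only; the names are placeholders, not taken from the original script) is to move each processing stage into methods on a small class, so values shared between stages become instance attributes instead of long parameter lists:

        # Hedged sketch: shared state lives on the instance, stages become methods.
        class Pipeline:
            def __init__(self, config):
                self.config = config      # constants / settings
                self.results = []         # accumulates output across stages

            def process_item(self, blah):
                # formerly the body of the outer loop; intermediate values that
                # several stages need are stored on self instead of being passed around
                intermediate = self.transform(blah)
                self.results.append(intermediate)

            def transform(self, blah):
                return blah               # placeholder for the real computation

            def run(self, blahs):
                for blah in blahs:
                    self.process_item(blah)
                return self.results

        if __name__ == "__main__":
            print(Pipeline(config={}).run(range(3)))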

    Read the article

  • How should I configure grub for booting linux kernel from a USB hard drive?

    - by skolima
    I have a laptop hard drive in an external enclosure which I use as a large pendrive. For an added twist, I have installed Linux on it, so I can boot any machine with my distribution of choice (e.g. for data recovery, repairing a b0rked system, or just using a borrowed laptop without destroying the preinstalled Windows). The problem is that, depending on the hardware configuration, the USB hard drive may be visible under different paths. For the grub configuration I just use (hd0,0), as it is relative to the device grub was launched from. I have UUID entries in /etc/fstab. I also specify rootwait in the kernel parameters so that the kernel waits for the USB subsystem to settle down before trying to mount the device.

    What should I pass to the kernel as root= ? Currently I boot from the pendrive once, check the debug messages to see which /dev/sdX device the kernel has assigned to the USB drive, then reboot and edit the grub configuration. I can't change anything on the PC besides enabling "Boot from USB hard drive" in the BIOS and setting it to a higher priority than the internal hard drives. There are various initrd-generating scripts which support UUIDs in the root device path; unfortunately the Gentoo native one (genkernel) does not support rootwait, and I had no luck trying to use others.

    The boot process goes like this (it is quite similar in Windows):

    1. The BIOS chooses the boot device and loads whatever is in its MBR (which happens to be grub stage 1).
    2. Grub loads its configuration and stage 2 files from the device it has set as root, using (hd0) for the device it was loaded from by the BIOS.
    3. Grub loads and starts a kernel (still the same numbering, so I can use (hd0,0) again).
    4. The kernel initializes all built-in devices (rootwait does its magic now).
    5. The kernel mounts the partition it was passed as root (this is a kernel parameter, not a grub parameter).
    6. init.d starts the userland booting process, including mounting things from /etc/fstab.

    Step 5 is the one giving me problems.

    Read the article

  • How does DateTime.Now affect query plan caching in SQL Server?

    - by Bill Paetzke
    Question: does passing DateTime.Now as a parameter to a proc prevent SQL Server from caching the query plan? If so, is the web app missing out on huge performance gains?

    Possible solution: I thought DateTime.Today.AddDays(1) would be a possible solution. It would pass the same end date to the SQL proc (per day), and the user would still get the latest data. Please speak to this as well.

    Given example: let's say we have a stored procedure. It reports data back to a user on a webpage. The user can set a date range. If the user sets today's date as the "end date," which includes today's data, the web app passes DateTime.Now to the SQL proc. Let's say one user runs a report - 5/1/2010 to now - over and over several times. On the webpage, the user sees 5/1/2010 to 5/4/2010, but the web app passes DateTime.Now to the SQL proc as the end date. So the end date in the proc will always be different, even though the user is querying a similar date range. Assume the number of records in the table and the number of users are large, so any performance gains matter - hence the importance of the question.

    Example proc and execution (if that helps to understand):

        CREATE PROCEDURE GetFooData
            @StartDate datetime,
            @EndDate datetime
        AS
        SELECT *
        FROM Foo
        WHERE LogDate >= @StartDate
          AND LogDate < @EndDate

    Here's a sample execution using DateTime.Now:

        EXEC GetFooData '2010-05-01', '2010-05-04 15:41:27' -- passed in DateTime.Now

    Here's a sample execution using DateTime.Today.AddDays(1):

        EXEC GetFooData '2010-05-01', '2010-05-05' -- passed in DateTime.Today.AddDays(1)

    The same data is returned for both, since the current time is 2010-05-04 15:41:27.
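    For what it's worth, the C# side of the Today-based alternative described above might look like this (a sketch only; the connection object and the rest of the data access code are assumed):

        // Hedged sketch: pass a date-only, exclusive upper bound so the parameter
        // value stays the same for the whole day instead of changing on every call.
        DateTime startDate = new DateTime(2010, 5, 1);
        DateTime endDateExclusive = DateTime.Today.AddDays(1);

        using (var cmd = new System.Data.SqlClient.SqlCommand("GetFooData", connection))
        {
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
            cmd.Parameters.Add("@StartDate", System.Data.SqlDbType.DateTime).Value = startDate;
            cmd.Parameters.Add("@EndDate", System.Data.SqlDbType.DateTime).Value = endDateExclusive;
            // ExecuteReader / fill a DataTable as usual.
        }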

    Read the article

  • Reading a binary file in perl: Bad File Descriptor

    - by Magicked
    I'm trying to read a binary file 40 bytes at a time, then check to see if all those bytes are 0x00, and if so ignore them. If not, it will write them back out to another file (basically just cutting out large blocks of null bytes). This may not be the most efficient way to do this, but I'm not worried about that. However, right now I'm getting a "Bad File Descriptor" error and I cannot figure out why.

        my $comp = "\x00" * 40;
        my $byte_count = 0;

        my $infile  = "/home/magicked/image1";
        my $outfile = "/home/magicked/image1_short";

        open IN, "<$infile";
        open OUT, ">$outfile";
        binmode IN;
        binmode OUT;

        my ($buf, $data, $n);
        while (read (IN, $buf, 40)) { ### Problem is here ###
            $boo = 1;
            for ($i = 0; $i < 40; $i++) {
                if ($comp[$i] != $buf[$i]) {
                    $i = 40;
                    print OUT $buf;
                    $byte_count += 40;
                }
            }
        }
        die "Problems! $!\n" if $!;
        close OUT;
        close IN;

    I marked with a comment where it is breaking. Thanks for any help!

    Read the article

  • Client-server synchronization pattern / algorithm?

    - by tm_lv
    I have a feeling that there must be client-server synchronization patterns out there, but I totally failed to google one up. The situation is quite simple - the server is the central node that multiple clients connect to, all manipulating the same data. The data can be split into atoms; in case of conflict, whatever is on the server has priority (to avoid dragging the user into conflict resolution). Partial synchronization is preferred due to the potentially large amounts of data.

    Are there any patterns or good practices for such a situation, or if you don't know of any - what would be your approach? Below is how I now think to solve it: in parallel to the data, a modification journal will be kept, with all transactions timestamped. When a client connects, it receives all changes since its last check, in consolidated form (the server goes through the lists and removes additions that are followed by deletions, merges updates for each atom, etc.). Et voila, we are up to date. An alternative would be keeping a modification date for each record and, instead of performing data deletes, just marking records as deleted. Any thoughts?

    Read the article

  • scrollLeft works but scrollTop doesn't work

    - by Xiao Jia
    I have the following HTML, with this CSS on the container:

        .container { overflow: scroll; }

        <body>
          <div class="container">
            <div id="markers"></div>
            <img id="map" src="/img/map.jpg"/>
          </div>
        </body>

    map.jpg is very large and I want to scroll to a fixed position like this:

        $(function(){
            console.log($('.container').scrollTop());
            $('.container').scrollTop(1000);
            console.log($('.container').scrollTop());

            console.log($('.container').scrollLeft());
            $('.container').scrollLeft(1750);
            console.log($('.container').scrollLeft());
        });

    scrollLeft works fine but scrollTop doesn't. Below is the console output:

        0
        0
        0
        1750

    I've been searching for half an hour but still don't know how to fix it...

    UPDATE: CSS for .container, #markers and #map:

        .container {
            overflow: scroll;
            width: 100%;
            max-width: 100%;
            height: 100%;
            max-height: 100%;
            margin: 0;
            padding: 0;
        }

        #map {
            width: 5000px;
            max-width: 5000px;
            height: 2907px;
            max-height: 2907px;
            cursor: crosshair;
        }

        #markers {
            position: relative;
            top: 0;
            left: 0;
            width: 0;
            height: 0;
            margin: 0;
            padding: 0;
        }
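    A hedged guess at the cause, not stated in the original question: height: 100% on .container only has an effect if its ancestors have an explicit height; otherwise the container simply grows to the image's height, there is no vertical overflow, and scrollTop stays at 0 (the 5000px-wide image still overflows horizontally, which is why scrollLeft works). Something along these lines would give the container a real height to overflow:

        /* Sketch only: give the ancestor chain a height so the 100% on .container
           resolves to the viewport height and vertical scrolling becomes possible. */
        html, body {
            height: 100%;
            margin: 0;
        }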

    Read the article

  • Django sub-applications & module structure

    - by Rob Golding
    I am developing a Django application, which is a large system that requires multiple sub-applications to keep things neat. Therefore, I have a top-level directory that is a Django app (as it has an empty models.py file) and multiple subdirectories, which are also applications in themselves. The reason I have laid my application out this way is that the sub-applications are separate, but they would never be used on their own outside the parent application, so it makes no sense to distribute them separately. When installing my application, the settings file has to include something like this:

        INSTALLED_APPS = (
            ...
            'myapp',
            'myapp.subapp1',
            'myapp.subapp2',
            ...
        )

    ...which is obviously suboptimal. This also has the slightly nasty result of requiring that all the sub-applications are referred to by their "inner" name (i.e. subapp1, subapp2, etc.). For example, if I want to reset the database tables for subapp1, I have to type:

        python manage.py reset subapp1

    This is annoying, especially because I have a sub-app called core - which is likely to conflict with another application's name when my application is installed in a user's project. Am I doing this completely wrongly, or is there a way to force these "inner" apps to be referred to by their full name?

    Read the article

  • HTTP Compression problems on IIS7

    - by Jonathan Wood
    I've spent quite a bit of time on this but seem to be going nowhere. I have a large page that I really want to speed up. The obvious place to start seems to be HTTP compression, but I just can't seem to get it to work for me. After considerable searching, I've tried several variations of the code below. It kind of works, but after refreshing the browser the results seem to fall apart. They were turning to garbage when the page used caching. If I turn off caching, then the page seems right, but I lose my CSS formatting (stored in a separate file) and get an error that an included JS file contains invalid characters. Most of the resources I've found on the web were either very old or focused on accessing IIS directly. My page is running on a shared hosting account and I do not have direct access to IIS7, which it's running on.

        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            // Implement HTTP compression
            if (Request["HTTP_X_MICROSOFTAJAX"] == null) // Avoid compressing AJAX calls
            {
                // Retrieve accepted encodings
                string encodings = Request.Headers.Get("Accept-Encoding");
                if (encodings != null)
                {
                    // Verify support for deflate or gzip (deflate takes preference)
                    encodings = encodings.ToLower();
                    if (encodings.Contains("gzip") || encodings == "*")
                    {
                        Response.Filter = new GZipStream(Response.Filter, CompressionMode.Compress);
                        Response.AppendHeader("Content-Encoding", "gzip");
                        Response.Cache.VaryByHeaders["Accept-encoding"] = true;
                    }
                    else if (encodings.Contains("deflate"))
                    {
                        Response.Filter = new DeflateStream(Response.Filter, CompressionMode.Compress);
                        Response.AppendHeader("Content-Encoding", "deflate");
                        Response.Cache.VaryByHeaders["Accept-encoding"] = true;
                    }
                }
            }
        }

    Is anyone having better success with this?
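    Since the site runs on IIS7, one alternative worth noting (an assumption on my part that the shared host allows it - it is not from the original post) is to let IIS handle compression through web.config instead of wrapping Response.Filter by hand, which can sidestep the interaction with output caching:

        <!-- Hedged sketch: IIS7 built-in compression enabled from web.config.
             Availability of dynamic compression depends on the hosting configuration. -->
        <configuration>
          <system.webServer>
            <urlCompression doStaticCompression="true" doDynamicCompression="true" />
          </system.webServer>
        </configuration>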

    Read the article

  • Where to store site settings: DB? XML? CONFIG? CLASS FILES?

    - by Emin
    I am rebuilding a news portal which already has a large number of visits every day. One of the major concerns when rebuilding this site was to maximize performance and speed. Having said this, we have done many things, from caching to all sorts of other measures, to ensure speed. Now, towards the end of the project, I am facing a dilemma about where to store my site settings so that performance is least affected. The site settings will include things such as: domain, DefaultImgPath, Google Analytics code, default emails of editors, as well as more dynamic design/display settings such as the background color of specific DIVs and the default color for links.

    As far as I know, I have four choices for storing all this information:

    1. Database: storing general settings in the DB and caching them may be a solution; however, I want to limit database access to only the necessary and essential functions of the project, which generally are insert/update/delete of news items, author articles, etc.
    2. XML: I could store these settings in an XML file, but I have not done this sort of thing before, so I don't know what kind of problems - if any - I might face in the future.
    3. CONFIG: I could also store these settings in web.config.
    4. CLASS FILE: I could hard-code all these settings in a SiteSettings class, but since the site admin himself will be able to edit these settings, it may not be the best solution.

    Currently I am closest to choosing web.config, but letting people fiddle with it too often is something I do not want. E.g. if somehow I miss a validation for something and it breaks web.config, the whole site will go down. My concern, basically, is that I cannot foresee all the possible consequences of using any of the methods above (or is there any other?), so I was hoping to put this question to more experienced people out here who can hopefully help me make my decision.

    Read the article

  • What is the correct stage to use for Google Guice in production in an application server?

    - by Yishai
    It seems like a strange question (the obvious answer would be Production, duh), but read the Javadoc:

        /**
         * We want fast startup times at the expense of runtime performance and some up front error
         * checking.
         */
        DEVELOPMENT,

        /**
         * We want to catch errors as early as possible and take performance hits up front.
         */
        PRODUCTION

    Assume a scenario where you have a stateless call to an application server, and the initial receiving method (or thereabouts) creates the injector anew on every call. If all of the module bindings are not needed in a given call, then it would seem better to use the Development stage (which is the default) and not take the performance hit up front, because you may never take it at all - and here the distinction between "up front" and "runtime performance" is rather moot, as it is one call. Of course, the downside would appear to be losing the error checking, so potential code paths could fail by surprise.

    So the question boils down to: are the assumptions above correct? Will you save performance on a large set of modules when the given lifetime of an injector is one call?
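    For context, the stage is picked when the injector is built; a minimal sketch (the module is an empty placeholder, not from the original question):

        // Hedged sketch: choosing the Guice stage explicitly at injector creation.
        import com.google.inject.AbstractModule;
        import com.google.inject.Guice;
        import com.google.inject.Injector;
        import com.google.inject.Stage;

        public class Bootstrap {
            // Placeholder module; real bindings would go in configure().
            static class AppModule extends AbstractModule {
                @Override
                protected void configure() { }
            }

            public static void main(String[] args) {
                // DEVELOPMENT defers work (lazy singletons, less up-front checking);
                // PRODUCTION eagerly builds singletons and validates more at startup.
                Injector injector = Guice.createInjector(Stage.PRODUCTION, new AppModule());
            }
        }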

    Read the article

  • changing the serialization procedure for a graph of objects (.net framework)

    - by pierusch
    Hello, I'm developing a scientific application using the .NET Framework. The application depends heavily on a large data structure (a tree-like structure) that has been serialized using a standard BinaryFormatter object. The graph structure looks like this:

        <Serializable()> Public Class BigObjet
            Inherits List(Of smallObject)
        End Class

        <Serializable()> Public Class smallObject
            Inherits List(Of otherSmallerObjects)
        End Class
        ...

    The BinaryFormatter object does a nice job, but it's not optimized at all and the entire data structure reaches around 100 MB on my filesystem. Deserialization works too, but it's pretty slow (around 30 seconds on my quad core). I found a nice .dll on CodeProject (see "optimizing serialization..."), so I wrote a modified version of the classes above, overriding the default serialization/deserialization procedure, with very good results.

    The problem is this: I can't lose the data previously serialized with the old version, and I'd like to be able to use the new serialization/deserialization method. I have some ideas, but I'm pretty sure someone will be able to give me better advice:

    - Use a "helper" graph of objects that takes care of the entire serialization/deserialization procedure, reading data in the old format and converting it into the classes I need. This could work, but the BinaryFormatter "needs" to know the types being serialized, so... :(
    - Modify the "old" graph to include a modified version of the serialization procedure, so I'll be able to deserialize old files and save them in the new format... this doesn't sound too good, IMHO.

    Well, any help will be highly appreciated :)
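    One way to sketch the second idea (this is my illustration; "FastSerializer" is a placeholder for whatever the optimized writer from the CodeProject library ends up being called) is a one-time migration pass that reads each old file with BinaryFormatter and immediately rewrites it in the new format:

        ' Hedged sketch: one-off migration from the old BinaryFormatter files.
        Imports System.IO
        Imports System.Runtime.Serialization.Formatters.Binary

        Module Migrator
            Sub Migrate(ByVal oldPath As String, ByVal newPath As String)
                Dim graph As BigObjet
                Using input As FileStream = File.OpenRead(oldPath)
                    Dim formatter As New BinaryFormatter()
                    ' The old classes must still be loadable for this cast to work.
                    graph = CType(formatter.Deserialize(input), BigObjet)
                End Using
                ' FastSerializer.Save is hypothetical - substitute the optimized writer.
                FastSerializer.Save(graph, newPath)
            End Sub
        End Module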

    Read the article

  • Wrapping unmanaged C++ with C++/CLI - a proper approach.

    - by Jamie
    Hi there. As stated in the title, I want to get my old C++ library working in managed .NET. I can think of two possibilities:

    1. I might try to compile the library with /clr and try the "It Just Works" approach.
    2. I might write a managed wrapper for the unmanaged library.

    First of all, I want my library to work FAST, as it did in the unmanaged environment, and I am not sure the first approach won't cause a large decrease in performance. On the other hand, it seems faster to implement (not the right word :-)), assuming it works for me at all. I can also think of some problems that might appear while writing a wrapper (e.g. how do I wrap an STL collection, such as a vector?). I am thinking of putting the wrapper in the same project as the unmanaged C++ code - is that a reasonable approach (e.g. MyUnmanagedClass and MyManagedClass in the same project, the second wrapping the first)?

    What would you suggest for this problem? Which solution will give me better performance in the resulting code? Thank you in advance for any suggestions and clues!
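    For the wrapper route, the usual shape (a sketch with made-up names, not tied to the actual library) is a C++/CLI ref class that owns a pointer to the native object and forwards calls to it:

        // Hedged sketch: managed wrapper around a native type (names are illustrative).
        // Compiled with /clr; MyNativeClass stands in for the existing unmanaged class.
        public ref class MyManagedClass
        {
        public:
            MyManagedClass() : native(new MyNativeClass()) {}
            ~MyManagedClass() { this->!MyManagedClass(); }          // destructor -> Dispose
            !MyManagedClass() { delete native; native = nullptr; }  // finalizer

            void DoWork() { native->DoWork(); }                     // thin forwarding call

        private:
            MyNativeClass* native;
        };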

    Read the article

  • How to display a busy message over a wpf screen

    - by dave
    Hey, I have a WPF application based on Prism 4. When performing slow operations, I want to show a busy screen. I will have a large number of screens, so I'm trying to build a single solution into the framework rather than adding a busy indicator to each screen. These long-running operations run in a background thread. This allows the UI to be updated (good) but does not stop the user from using the UI (bad).

    What I'd like to do is overlay a control with a spinning dial sort of thing and have that control cover the entire screen (the old HTML trick with DIVs). When the app is busy, the control would be displayed, blocking any further interaction as well as showing the spinny thing. To set this up, I thought I could just put my app screen in a canvas along with the spinny thing (with a greater ZIndex), then make the spinny thing visible as required. This, however, is getting hard. Canvases do not seem well suited to this, and I think I might be barking up the wrong tree. I would appreciate any help. Thanks.
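    A common alternative to a Canvas (sketched here with a hypothetical IsBusy property and converter resource - they are not part of the original application) is to stack the overlay in the same Grid cell as the content, since later children render on top:

        <!-- Hedged sketch: full-screen busy overlay layered in a Grid. -->
        <Grid>
            <!-- normal screen content goes here -->
            <ContentControl Content="{Binding CurrentView}" />

            <!-- overlay: covers the whole Grid and swallows mouse input while visible -->
            <Border Background="#80000000"
                    Visibility="{Binding IsBusy, Converter={StaticResource BoolToVis}}">
                <TextBlock Text="Working..."
                           Foreground="White"
                           HorizontalAlignment="Center"
                           VerticalAlignment="Center" />
            </Border>
        </Grid>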

    Read the article

  • Does .Net use Device Dependent or Device Independent Bitmaps?

    - by Brian
    When loading an image into memory, does .NET use DDBs (device-dependent bitmaps), DIBs (device-independent bitmaps), or something else entirely? If possible, please cite your sources. I'm wondering because we currently have a classic ASP application that uses a 3rd-party component to load images, and it occasionally produces a "Not enough storage is available to process this command." error. The error is very inconsistent but tends to happen on larger images (not always, but often). After resetting IIS, processing the same file again typically works just fine.

    After much research I have found that DDBs tend to have this problem when processing large images because they work out of video memory. Considering that we are running on a web server with an integrated video card and limited shared memory, this could certainly be our problem. We are in the early stages of converting our app to .NET and I am wondering whether using .NET for this might be a viable alternative to our current method, which is why I am asking the question. Any advice is welcome :) but out of curiosity if nothing else, I am really hoping for an answer to the question: does .NET use DDBs or DIBs?

    Read the article

  • How might one cope with the ambiguous value produced by GetDllDirectory?

    - by Integer Poet
    GetDllDirectory produces an ambiguous value: when the string this call produces is empty, it means one of the following:

    - nobody has called SetDllDirectory
    - somebody passed NULL to SetDllDirectory
    - somebody passed an empty string to SetDllDirectory

    The first two cases are equivalent for my purposes, but the third case is a problem. If I want to write save/restore code (call GetDllDirectory to save the "old" value, SetDllDirectory to set a "new" value temporarily, and later SetDllDirectory again to restore the "old" value), I run the risk of reversing some other programmer's intent. If the other programmer intended for the current working directory to be in the DLL search order (in other words, one of the first two bullets is true), and I pass an empty string to SetDllDirectory, I will be taking the current working directory out of the DLL search order, reversing the other programmer's intent.

    Can anyone suggest an approach to eliminate or work around this ambiguity?

    P.S. I know having the current working directory in the DLL search order could be interpreted as a security hole. Nevertheless, it is the default behavior, and my code is not in a position to undo that; my code needs to be compatible with the expectations of all potential callers, many of which are large and old and beyond my control.
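    To make the trade-off concrete, here is a sketch of the save/restore pattern under discussion (my code, not from the original post; the temporary path is hypothetical). It simply chooses to treat an empty result as "never set", which is exactly the guess the question is worried about:

        // Hedged sketch: save/restore around a temporary SetDllDirectory call.
        #include <windows.h>

        void LoadPluginsFromTempDirectory()
        {
            wchar_t saved[MAX_PATH] = L"";
            DWORD len = GetDllDirectoryW(MAX_PATH, saved);

            SetDllDirectoryW(L"C:\\Temp\\MyPlugins");   // hypothetical temporary value
            // ... LoadLibrary calls go here ...

            // Empty result is ambiguous: restoring NULL re-enables the default search
            // order (including the CWD), which may reverse an explicit empty-string set.
            SetDllDirectoryW(len > 0 ? saved : NULL);
        }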

    Read the article

  • Should I invest in GraniteDS for Flex + Java development?

    - by Boden
    I'm new to Flex development and RIAs in general. I've got a CRUD-style Java + Spring + Hibernate service, on top of which I'm writing a Flex UI. Currently I'm using BlazeDS. This is an internal application running on a local network.

    It has become apparent to me that RIAs work more like a desktop application than a web application, in that we load up the entire model and work with it directly on the client (or at least the portion we're interested in). This doesn't really jive well with BlazeDS, because it only supports remoting, not data management, so it can become a lot of extra work to make sure that clients are in sync and to avoid reloading the model, which can be large (especially since lazy loading is not possible). So it feels like what I'm left with is a situation where I have to treat my Flex application more like a regular old web application, where I do a lot of fine-grained loading of data.

    LiveCycle is too expensive. The free version of WebORB for Java really only does remoting. Enter GraniteDS. As far as I can determine, it's the only free solution out there that has many of the data management features of LiveCycle. I've started to go through its documentation a bit and suddenly feel like it's yet another quagmire of framework that I'll have to learn just to get an application running.

    So my questions to the Stack Overflow audience are: 1) do you recommend GraniteDS, especially if my current Java stack is Spring + Hibernate? 2) at what point do you feel it starts to pay off? That is, at what level of application complexity do you feel that using GraniteDS really starts to make development that much better, and in what ways?

    Read the article

  • Code Golf: Counting XML elements in file on Android

    - by CSharperWithJava
    Take a simple XML file formatted like this:

        <Lists>
          <List>
            <Note/>
            ...
            <Note/>
          </List>
          <List>
            <Note/>
            ...
            <Note/>
          </List>
        </Lists>

    Each node has some attributes that actually hold the data of the file. I need a very quick way to count the number of each type of element (List and Note). Lists is simply the root and doesn't matter. I can do this with a simple string search or something similar, but I need to make this as fast as possible. Design parameters:

    - Must be in Java (Android application).
    - Must avoid allocating memory as much as possible.
    - Must return the total number of Note elements and the number of List elements in the file, regardless of their location in the file.
    - The number of Lists will typically be small (1-4), and the number of Notes can potentially be very large (upwards of 1000, typically 100) per file.

    I look forward to your suggestions.
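    As a point of reference (my sketch, not the original poster's code), a streaming pull parser keeps allocation low because it never builds a DOM; counting start tags looks roughly like this:

        // Hedged sketch: counting elements with XmlPullParser (stream-based).
        // The InputStream is assumed to come from the caller (file, asset, etc.).
        import org.xmlpull.v1.XmlPullParser;
        import org.xmlpull.v1.XmlPullParserException;
        import java.io.IOException;
        import java.io.InputStream;

        public class NoteCounter {
            /** Returns {listCount, noteCount} for the given XML stream. */
            public static int[] count(InputStream in) throws XmlPullParserException, IOException {
                XmlPullParser parser = android.util.Xml.newPullParser();
                parser.setInput(in, null);
                int lists = 0, notes = 0;
                for (int event = parser.getEventType();
                     event != XmlPullParser.END_DOCUMENT;
                     event = parser.next()) {
                    if (event == XmlPullParser.START_TAG) {
                        if ("List".equals(parser.getName())) lists++;
                        else if ("Note".equals(parser.getName())) notes++;
                    }
                }
                return new int[] { lists, notes };
            }
        }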

    Read the article

  • Auditing front end performance on web application

    - by user1018494
    I am currently trying to performance-tune the UI of a company web application. The application is only ever going to be accessed by staff, so the speed of the connection between the server and client will always be considerably better than if it were on the internet. I have been using performance auditing tools such as YSlow and Google Chrome's profiling tool to try to highlight areas worth targeting for investigation. However, these tools are written with the internet in mind. For example, the current suggestions from a Google Chrome audit of the application are as follows:

    Network utilization:

    - Combine external CSS (red warning)
    - Combine external JavaScript (red warning)
    - Enable gzip compression (red warning)
    - Leverage browser caching (red warning)
    - Leverage proxy caching (amber warning)
    - Minimise cookie size (amber warning)
    - Parallelize downloads across hostnames (amber warning)
    - Serve static content from a cookieless domain (amber warning)

    Web page performance:

    - Remove unused CSS rules (amber warning)
    - Use normal CSS property names instead of vendor-prefixed ones (amber warning)

    Are any of these bits of advice totally redundant given the connection speed and usage pattern? The users will be using the application frequently throughout the day, so it doesn't matter if the initial hit is large (when they first visit the page and build their cache), so long as a minimal amount of work is done on future page views. For example, is it worth the effort of combining all of our CSS and JavaScript files? It may speed up the initial page view, but how much of a difference will it really make on subsequent page views throughout the working day? I've tried searching for this, but all I keep coming up with is the standard internet-facing performance advice. Any advice on what to focus my performance-tweaking efforts on in this scenario, or other auditing tool recommendations, would be much appreciated.

    Read the article
