Search Results

Search found 10098 results on 404 pages for 'per pixel'.


  • Storing PLSQL stored-procedure values in Oracle memory caches for extended periods

    - by Ira Baxter
    I am collecting runtime profiling data from PL/SQL stored procedures. The data is collected as certain stored procedures execute, but it needs to accumulate across multiple executions of those procedures. To minimize overhead, I'd like to store that profiling data in some PL/SQL-accessible, memory-resident Oracle storage for the duration of the data collection interval, and then dump out the accumulated values. The data collection interval might be seconds or hours; it's OK not to keep this data across system boots. Something like session state in web servers would do. What are my choices for storing such data? The only method I know about is contexts via dbms_session:

        procedure set_ctx (value in varchar2) as
        begin
          dbms_session.set_context(
            'Test_Ctx', 'AccumulatedValue', value, NULL, 'ProfilerSessionId');
        end set_ctx;

    This works, but takes some 50 milliseconds(!) per update to the accumulated value. What I'm hoping for is a way to access/store an array of values in some Oracle memory using vanilla PL/SQL statements, with access times typical of array accesses made to package-local arrays.
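
    A minimal sketch of the package-variable approach (my own illustration, not from the original post; all names are made up): package-level state lives in session memory for the lifetime of the session, and an associative array indexed by counter name can be read and written at ordinary PL/SQL speed, then dumped at the end of the collection interval.

        CREATE OR REPLACE PACKAGE profiler_state AS
          PROCEDURE add_sample(p_counter IN VARCHAR2, p_value IN NUMBER);
          PROCEDURE dump_samples;
        END profiler_state;
        /

        CREATE OR REPLACE PACKAGE BODY profiler_state AS
          -- Package-level variable: persists across calls for the life of the session.
          TYPE t_counts IS TABLE OF NUMBER INDEX BY VARCHAR2(128);
          g_counts t_counts;

          PROCEDURE add_sample(p_counter IN VARCHAR2, p_value IN NUMBER) IS
          BEGIN
            IF g_counts.EXISTS(p_counter) THEN
              g_counts(p_counter) := g_counts(p_counter) + p_value;
            ELSE
              g_counts(p_counter) := p_value;
            END IF;
          END add_sample;

          PROCEDURE dump_samples IS
            l_name VARCHAR2(128);
          BEGIN
            l_name := g_counts.FIRST;
            WHILE l_name IS NOT NULL LOOP
              DBMS_OUTPUT.PUT_LINE(l_name || ' = ' || g_counts(l_name));
              l_name := g_counts.NEXT(l_name);
            END LOOP;
          END dump_samples;
        END profiler_state;
        /

    The caveat is that this state is per-session: if the instrumented procedures run from many sessions, each session accumulates its own copy and the results have to be merged (for example, by having dump_samples write into a regular table at the end of the interval).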

    Read the article

  • singleton pattern in Windows Activation Service

    - by Joshua
    Hello, I have a few WCF services that are currently self-hosted in a very basic NT Service. I want to expand my application to add provisioning of WCF services, and updates, as well as isolation (I want each WCF service to be in its own AppDomain). These WCF services contain logic that needs to be run on a regular basis, pinging the database and getting information from external devices so that when a request comes in the data is readily available. I'm thinking about trying out Windows Activation Service, because I really like the provisioning and isolation that come with a managed services infrastructure. If I didn't use WAS I would essentially have to write the same code myself. From what I understand, though, WAS does not really support the model of having a service that is running before someone actually calls a method on the service. The article I read here (MSDN Article Link) states: "That means in essence that out-of-the-box WAS hosting is not something that is really suited for sessionful or singleton services. It is more suitable for stateless per-call services." It does say "out of the box", so I'm wondering if anyone has used WAS to host a WCF service that really behaves more like an NT Service (starting and stopping independently of having a method called upon it). Or any other ideas would be great. I was planning on writing this infrastructure myself: hosting WCF services in a custom ServiceHost, putting their execution in a separate AppDomain, and allowing for provisioning of these services after initial installation, along with updates. However, I would MUCH MUCH MUCH rather not own that code if I don't have to. Thanks, Joshua

    Read the article

  • In-document schema declarations and lxml

    - by shylent
    As per the official documentation of lxml, if one wants to validate an XML document against an XML schema document, one has to:

        1) construct the XMLSchema object (basically, parse the schema document)
        2) construct the XMLParser, passing the XMLSchema object as its schema argument
        3) parse the actual XML document (instance document) using the constructed parser

    There can be variations, but the essence is pretty much the same no matter how you do it - the schema is specified 'externally' (as opposed to specifying it inside the actual XML document). If you follow this procedure, the validation occurs, sure enough, but if I understand it correctly, that completely ignores the whole idea of the schemaLocation and noNamespaceSchemaLocation attributes from xsi. This introduces a whole bunch of limitations, starting with the fact that you have to deal with the instance<-schema relation all by yourself (either store it externally or write some hack to retrieve the schema location from the root element of the instance document), you cannot validate the document using multiple schemata (say, when each schema governs its own namespace), and so on. So the question is: maybe I am missing something completely trivial or doing it wrong? Or are my statements about lxml's limitations regarding schema validation true? To recap, I'd like to be able to:

        - have the parser use the schema location declarations in the instance document at parse/validation time
        - use multiple schemata to validate an XML document
        - declare schema locations on non-root elements (not of extreme importance)

    Maybe I should look for a different library? Although that'd be a real shame - lxml is the de-facto XML processing library for Python and is regarded by everyone as the best one in terms of performance/features/convenience (and rightfully so, to a certain extent).
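
    One possible workaround for the first item (my own sketch, not something lxml does out of the box; the file names are hypothetical): read the xsi:noNamespaceSchemaLocation / xsi:schemaLocation attribute off the root element yourself, then build the XMLSchema from it and validate.

        from lxml import etree

        XSI = "http://www.w3.org/2001/XMLSchema-instance"

        doc = etree.parse("instance.xml")
        root = doc.getroot()

        # Documents without a target namespace use noNamespaceSchemaLocation
        location = root.get("{%s}noNamespaceSchemaLocation" % XSI)
        if location is None:
            # xsi:schemaLocation is a whitespace-separated list of (namespace, location) pairs
            pairs = (root.get("{%s}schemaLocation" % XSI) or "").split()
            location = pairs[1] if len(pairs) >= 2 else None

        if location is not None:
            schema = etree.XMLSchema(etree.parse(location))
            schema.assertValid(doc)   # raises DocumentInvalid with details on failure

    This only handles a single schema per document and ignores declarations on non-root elements, so it addresses the first wish on the list rather than all three.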

    Read the article

  • Problem counting item frequency on T-SQL

    - by Raúl Roa
    I'm trying to count the frequency of numbers from 1 to 100 on different fields of a table. Let's say I have the table "Results" with the following data:

        LottoId   Winner    Second    Third
        --------- --------- --------- ---------
        1         1         2         3
        2         1         2         3

    I'd like to be able to get the frequency per number. For that I'm using the following code:

        --Creating numbers temp table
        CREATE TABLE #Numbers( Number int)

        --Inserting the numbers into the temp table
        declare @counter int
        set @counter = 0
        while @counter < 100
        begin
            set @counter = @counter + 1
            INSERT INTO #Numbers(Number) VALUES(@counter)
        end

        SELECT #Numbers.Number,
               Count(Results.Winner) as Winner,
               Count(Results.Second) as Second,
               Count(Results.Third) as Third
        FROM #Numbers
        LEFT JOIN Results
            ON #Numbers.Number = Results.Winner
            OR #Numbers.Number = Results.Second
            OR #Numbers.Number = Results.Third
        GROUP BY #Numbers.Number

    The problem is that the counts are repeating the same values for each number. In this particular case I'm getting the following result:

        Number    Winner    Second    Third
        --------- --------- --------- ---------
        1         2         2         2
        2         2         2         2
        3         2         2         2
        ...

    When I should get this:

        Number    Winner    Second    Third
        --------- --------- --------- ---------
        1         2         0         0
        2         0         2         0
        3         0         0         2
        ...

    What am I missing?
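
    What seems to be happening (my reading, not part of the original post): COUNT(Results.Winner) counts every joined row where Winner is non-NULL, regardless of which column actually matched the number. One way to count each column independently is conditional aggregation - a sketch:

        SELECT #Numbers.Number,
               SUM(CASE WHEN Results.Winner = #Numbers.Number THEN 1 ELSE 0 END) AS Winner,
               SUM(CASE WHEN Results.Second = #Numbers.Number THEN 1 ELSE 0 END) AS Second,
               SUM(CASE WHEN Results.Third  = #Numbers.Number THEN 1 ELSE 0 END) AS Third
        FROM #Numbers
        LEFT JOIN Results
            ON #Numbers.Number IN (Results.Winner, Results.Second, Results.Third)
        GROUP BY #Numbers.Number
        ORDER BY #Numbers.Number

    The join condition is unchanged in meaning (IN is just shorthand for the three OR comparisons); the only real change is that each output column now counts only the rows where its own position matched.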

    Read the article

  • Optimal Serialization of Primitive Types

    - by Greg Dean
    We are beginning to roll out more and more WAN deployments of our product (.Net fat client w/ IIS hosted Remoting backend). Because of this we are trying to reduce the size of the data on the wire. We have overridden the default serialization by implementing ISerializable (similar to this), and we are seeing anywhere from 12% to 50% gains. Most of our efforts focus on optimizing arrays of primitive types. I would like to know if anyone knows of any fancy way of serializing primitive types, beyond the obvious. For example, today we serialize an array of ints as follows: [4-bytes (array length)][4-bytes][4-bytes] Can anyone do significantly better? The most obvious example of a significant improvement, for boolean arrays, is putting 8 bools in each byte, which we already do. Note: Saving 7 bits per bool may seem like a waste of time, but when you are dealing with large magnitudes of data (which we are), it adds up very fast. Note: We want to avoid general compression algorithms because of the latency associated with them. Remoting only supports buffered requests/responses (no chunked encoding). I realize there is a fine line between compression and optimal serialization, but our tests indicate we can afford very specific serialization optimizations at very little cost in latency, whereas reprocessing the entire buffered response into a new compressed buffer is too expensive.
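
    One common trick along these lines (a sketch of the general idea only, not anything specific to Remoting or the product above): variable-length integer encoding, where small magnitudes cost one byte instead of four. Values are zig-zag mapped so small negative numbers stay small, then written 7 bits at a time with a continuation bit. This assumes the int arrays are dominated by small values; the worst case costs 5 bytes per int.

        using System.IO;

        static class VarIntCodec
        {
            public static void WriteArray(BinaryWriter writer, int[] values)
            {
                WriteVarUInt(writer, (uint)values.Length);          // length prefix, also variable-length
                foreach (int v in values)
                {
                    uint zigzag = (uint)((v << 1) ^ (v >> 31));     // map signed -> unsigned, small |v| stays small
                    WriteVarUInt(writer, zigzag);
                }
            }

            public static int[] ReadArray(BinaryReader reader)
            {
                int length = (int)ReadVarUInt(reader);
                var values = new int[length];
                for (int i = 0; i < length; i++)
                {
                    uint zigzag = ReadVarUInt(reader);
                    values[i] = (int)(zigzag >> 1) ^ -(int)(zigzag & 1);
                }
                return values;
            }

            private static void WriteVarUInt(BinaryWriter writer, uint value)
            {
                while (value >= 0x80)
                {
                    writer.Write((byte)(value | 0x80));             // low 7 bits plus continuation flag
                    value >>= 7;
                }
                writer.Write((byte)value);
            }

            private static uint ReadVarUInt(BinaryReader reader)
            {
                uint result = 0;
                int shift = 0;
                byte b;
                do
                {
                    b = reader.ReadByte();
                    result |= (uint)(b & 0x7F) << shift;
                    shift += 7;
                } while ((b & 0x80) != 0);
                return result;
            }
        }

    On the wire this turns [4-byte length][4 bytes per element] into roughly 1-5 bytes per value, which pays off when most values are small; for uniformly large ints it is a slight loss.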

    Read the article

  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    Ok so I've just been on a SQL Server course and we discussed the usage scenarios of multiple filegroups and files when in use over local RAID and local disks, but we didn't touch SAN scenarios, so my question is as follows: I currently have a 250 gig database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file. The log file is also on the same volume. My interpretation is that separate data files should be used across different disks to lessen disk contention and that filegroups should be used for partitioning of data. However, with a SAN you obviously don't really have the same issue of disk contention that you do with a small RAID setup (or at least we don't at the moment), and standard edition doesn't support partitioning. So in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act across each file separately. Which leads me to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you
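
    For reference, the mechanics of splitting objects out are straightforward; a sketch (database, filegroup, paths, sizes and table names are all made up):

        -- Add a filegroup and spread it across two data files
        ALTER DATABASE MyDatabase ADD FILEGROUP HighActivity;

        ALTER DATABASE MyDatabase
        ADD FILE (NAME = HighActivity1, FILENAME = N'E:\SQLData\HighActivity1.ndf', SIZE = 10GB),
                 (NAME = HighActivity2, FILENAME = N'F:\SQLData\HighActivity2.ndf', SIZE = 10GB)
        TO FILEGROUP HighActivity;

        -- Rebuild a hot table's clustered index onto the new filegroup (this moves the table)
        CREATE CLUSTERED INDEX IX_Orders_OrderId
            ON dbo.Orders (OrderId)
            WITH (DROP_EXISTING = ON)
            ON HighActivity;

    Whether more files actually helps on a SAN is the open question in the post; the often-quoted one-file-per-core guidance comes mainly from tempdb allocation contention, so for user databases it is worth benchmarking before multiplying files.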

    Read the article

  • Unit Testing a Java Chat Application

    - by Epitaph
    I have developed a basic Chat application in Java. It consists of a server and multiple clients. The server continually monitors for incoming messages and broadcasts them to all the clients. The client is made up of a Swing GUI with a text area (for messages sent by the server and other clients), a text field (to send text messages) and a button (SEND). The client also continually monitors for incoming messages from other clients (via the server). This is achieved with Threads and Event Listeners, and the application works as expected. But how do I go about unit testing my chat application? As the methods involve establishing a connection with the server and sending/receiving messages from the server, I am not sure if these methods should be unit tested. As per my understanding, unit testing shouldn't be done for tasks like connecting to a database or network. The few test cases that I could come up with are:

        1) The max limit of the text field
        2) Client can connect to the Server
        3) Server can connect to the Client
        4) Client can send message
        5) Client can receive message
        6) Server can send message
        7) Server can receive message
        8) Server can accept connections from multiple clients

    But since most of the above methods involve some kind of network communication, I cannot perform unit testing. How should I go about unit testing my chat application?
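
    One common way out (a sketch with entirely hypothetical class names - the post doesn't show its API): put the socket work behind a small interface, give the chat logic a fake implementation in tests, and unit test the logic without any real network. The real socket-backed implementation is then covered by a separate, slower integration test.

        import static org.junit.Assert.assertEquals;

        import java.util.ArrayList;
        import java.util.List;

        import org.junit.Test;

        public class ChatClientTest {

            /** Hypothetical seam: the production client talks to the server only through this. */
            interface MessageTransport {
                void send(String message);
            }

            /** Hypothetical chat logic under test; in the real app it would also drive the Swing UI. */
            static class ChatClient {
                private final MessageTransport transport;
                ChatClient(MessageTransport transport) { this.transport = transport; }

                void sendMessage(String text) {
                    if (text == null || text.trim().length() == 0) {
                        return;                       // don't send blank messages
                    }
                    transport.send(text.trim());
                }
            }

            /** Fake transport that just records what would have gone over the wire. */
            static class RecordingTransport implements MessageTransport {
                final List<String> sent = new ArrayList<String>();
                public void send(String message) { sent.add(message); }
            }

            @Test
            public void sendMessagePassesTrimmedTextToTransport() {
                RecordingTransport transport = new RecordingTransport();
                new ChatClient(transport).sendMessage("  hello  ");
                assertEquals(1, transport.sent.size());
                assertEquals("hello", transport.sent.get(0));
            }

            @Test
            public void blankMessagesAreNotSent() {
                RecordingTransport transport = new RecordingTransport();
                new ChatClient(transport).sendMessage("   ");
                assertEquals(0, transport.sent.size());
            }
        }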

    Read the article

  • Can a single developer still make money with shareware?

    - by Wouter van Nifterick
    I'm wondering if the shareware concept is dead nowadays. Like most developers, I've built up quite a collection of self-made tools and code libraries that help me to be productive. Some examples to give you an idea of the type of thing I'm talking about:

        - A self-learning program that renames and orders all my mp3 files and adds information to the id3 tags
        - A Delphi component that wraps the Google Maps API
        - A text-to-singing-voice converter for musical purposes
        - A program to control a music synthesizer
        - A Gps-log <- KML <- ESRI-shapefile converter

    I've got one of these already freely downloadable on my website, and on average it gets downloaded about 150 times per month. Let's say I'd start charging 15 euros for it; would there actually be people who buy it? How many? What would it depend on? If I could get some money for some of these, I'd finish them up a bit and put them online, but without that, I probably won't bother. Maintaining a SourceForge project is not very rewarding by itself. Is there anyone who is making money with shareware? How much? Any tips?

    Read the article

  • Is there a more easy way to create a WCF/OData Data Service Query Provider?

    - by routeNpingme
    I have a simple little data model resembling the following:

        InventoryContext {
            IEnumerable<Computer> GetComputers()
            IEnumerable<Printer> GetPrinters()
        }

        Computer {
            public string ComputerName { get; set; }
            public string Location { get; set; }
        }

        Printer {
            public string PrinterName { get; set; }
            public string Location { get; set; }
        }

    The results come from a non-SQL source, so this data does not come from Entity Framework connected up to a database. Now I want to expose the data through a WCF OData service. The only way I've found to do that thus far is creating my own Data Service Query Provider, per this blog tutorial: http://blogs.msdn.com/alexj/archive/2010/01/04/creating-a-data-service-provider-part-1-intro.aspx ... which is great, but seems like a pretty involved undertaking. The code for the provider would be 4 times longer than my whole data model to generate all of the resource sets and property definitions. Is there something like a generic provider in between Entity Framework and writing your own data source from zero? Maybe some way to build an object data source or something, so that the magical WCF unicorns can pick up my data and ride off into the sunset without having to explicitly code the provider?
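
    There is a middle ground worth checking before writing a full custom provider: the reflection provider, which WCF Data Services uses automatically when the context class exposes IQueryable<T> properties and the entity types carry a key. A rough sketch of what that shape looks like, adapted to the classes above (the attribute and base-class names come from System.Data.Services; the exact InitializeService signature differs slightly between the 3.5 SP1 and 4.0 versions):

        using System.Collections.Generic;
        using System.Data.Services;
        using System.Data.Services.Common;
        using System.Linq;

        // Entity types need a key for the reflection provider to pick them up.
        [DataServiceKey("ComputerName")]
        public class Computer
        {
            public string ComputerName { get; set; }
            public string Location { get; set; }
        }

        [DataServiceKey("PrinterName")]
        public class Printer
        {
            public string PrinterName { get; set; }
            public string Location { get; set; }
        }

        // Each IQueryable<T> property becomes an OData entity set (Computers, Printers).
        public class InventoryContext
        {
            public IQueryable<Computer> Computers
            {
                get { return GetComputers().AsQueryable(); }
            }

            public IQueryable<Printer> Printers
            {
                get { return GetPrinters().AsQueryable(); }
            }

            private IEnumerable<Computer> GetComputers() { /* non-SQL source goes here */ yield break; }
            private IEnumerable<Printer> GetPrinters() { /* non-SQL source goes here */ yield break; }
        }

        public class InventoryDataService : DataService<InventoryContext>
        {
            public static void InitializeService(DataServiceConfiguration config)
            {
                // Read-only access to every entity set; tighten as needed.
                config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            }
        }

    The trade-off is that queries arrive as LINQ expressions against those IQueryable properties, so unless the underlying source can translate them, the simplest (and potentially expensive) route is materializing the data and letting LINQ to Objects do the filtering.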

    Read the article

  • Find the flaws in the concept...

    - by Trindaz
    A web-based web browser. Sounds silly, right? Here's a use case. All comments about what could go wrong, and whether anyone has tried and failed at this, are very much wanted.

        1) User goes to www.theBrowser.com and logs in with credentials specific to theBrowser.com.
        2) User tells theBrowser what their username and password for various sites are.
        3) User goes to theBrowser.com/?uri=somesite.com
        4) theBrowser sends off the http request with User's log in details, then sends the http response back to User.

    This lets theBrowser do weird and wonderful things like changing colours / style sheets / etc. on every site that gets passed through it. From a technical standpoint, storing usernames and passwords and passing them along is not a challenge for one user, but if there were a few, I'd have to use some kind of server-based browser software to store a session per user logged in at theBrowser.com. How could I do that? Will I have to start from scratch? Obviously privacy and security are issues. Would theBrowser.com be too great a risk, even if users are fully warned? Cheers, Dave

    Read the article

  • Sync Vs. Async Sockets Performance in .NET

    - by Michael Covelli
    Everything that I read about sockets in .NET says that the asynchronous pattern gives better performance (especially with the new SocketAsyncEventArgs, which saves on the allocation). I think this makes sense if we're talking about a server with many client connections where it's not possible to allocate one thread per connection. Then I can see the advantage of using the ThreadPool threads and getting async callbacks on them. But in my app, I'm the client and I just need to listen to one server sending market tick data over one tcp connection. Right now, I create a single thread, set the priority to Highest, and call Socket.Receive() with it. My thread blocks on this call and wakes up once new data arrives. If I were to switch this to an async pattern so that I get a callback when there's new data, I see two issues:

        1) The threadpool threads will have default priority, so it seems they will be strictly worse than my own thread which has Highest priority.
        2) I'll still have to send everything through a single thread at some point. Say that I get N callbacks at almost the same time on N different threadpool threads notifying me that there's new data. The N byte arrays that they deliver can't be processed on the threadpool threads, because there's no guarantee that they represent N unique market data messages, because TCP is stream based. I'll have to lock and put the bytes into an array anyway and signal some other thread that can process what's in the array. So I'm not sure what having N threadpool threads is buying me.

    Am I thinking about this wrong? Is there a reason to use the async pattern in my specific case of one client connected to one server?

    Read the article

  • Using DPAPI / ProtectedData in a web farm environment with the User Store

    - by Lachman
    I was wondering if anyone had successfully used DPAPI with a user store in a web farm environment? Because our application is an ASP.NET app recently converted from 1.1 to 2.0, we're using a custom wrapper which directly calls the CryptUnprotect methods. But this should be the same as the ProtectedData method available in the 2.0 framework. Because we are operating in a web farm environment, we can't guarantee that the machine that did the encryption is going to be the one decrypting it. (Also because machine failures shouldn't destroy our encrypted data.) So what we have is a serviced component that runs in a service under a particular user account on each one of our web boxes. This user is set up to have a roaming profile, as per the recommendation. The problem we have is that info encrypted on one machine can not be decrypted on another; this fails with the win32 error 'Key not valid for use in specified state'. I suspect that this is because I've made a mistake by having the encryption service running as the user on multiple machines, hence keeping the user logged in on more than one machine at the same time. If this is the problem, how are others using DPAPI with the User Store in a web farm environment?

    Read the article

  • Should a Perl constructor return an undef or a "invalid" object?

    - by DVK
    Question: What is considered to be "best practice" - and why - for handling errors in a constructor? "Best practice" can be a quote from Schwartz, or "50% of CPAN modules use it", etc.; but I'm happy with well-reasoned opinion from anyone, even if it explains why the common best practice is not really the best approach. As far as my own view of the topic (informed by software development in Perl for many years), I have seen three main approaches to error handling in a Perl module (listed from best to worst in my opinion):

        1) Construct an object and set an invalid flag (usually an "is_valid" method). Often coupled with setting an error message via your class's error handling.
           Pros: Allows for standard (compared to other method calls) error handling, as it allows you to use $obj->errors() type calls after a bad constructor just like after any other method call. Allows for additional info to be passed (e.g. 1 error, warnings, etc...). Allows for lightweight "redo"/"fixme" functionality; in other words, if the object that is constructed is very heavy, with many complex attributes that are 100% always OK, and the only reason it is not valid is because someone entered an incorrect date, you can simply do "$obj->setDate()" instead of the overhead of re-executing the entire constructor again. This pattern is not always needed, but can be enormously useful in the right design.
           Cons: None that I'm aware of.
        2) Return "undef".
           Cons: Can not achieve any of the Pros of the first solution (per-object error messages outside of global variables and lightweight "fixme" capability for heavy objects).
        3) Die inside the constructor. Outside of some very narrow edge cases, I personally consider this an awful choice for too many reasons to list on the margins of this question.

    UPDATE: Just to be clear, I consider the (otherwise very worthy and a great design) solution of having a very simple constructor that can't fail at all and a heavy initializer method where all the error checking occurs to be merely a subset of either case #1 (if the initializer sets error flags) or case #3 (if the initializer dies) for the purposes of this question. Obviously, choosing such a design, you automatically reject option #2.
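
    A minimal sketch of option #1 (my own illustration, with made-up field names), showing the invalid-flag-plus-errors shape described above:

        package My::Widget;
        use strict;
        use warnings;

        sub new {
            my ($class, %args) = @_;
            my $self = bless { errors => [] }, $class;

            # Validate instead of dying; record problems on the object.
            if (!defined $args{date} || $args{date} !~ /^\d{4}-\d{2}-\d{2}$/) {
                push @{ $self->{errors} }, "invalid or missing date";
            }
            $self->{date} = $args{date};

            return $self;            # always returns an object, possibly invalid
        }

        sub is_valid { my $self = shift; return @{ $self->{errors} } ? 0 : 1 }
        sub errors   { my $self = shift; return @{ $self->{errors} } }

        # Lightweight "fixme": repair the one bad attribute without re-running new()
        sub set_date {
            my ($self, $date) = @_;
            if ($date =~ /^\d{4}-\d{2}-\d{2}$/) {
                $self->{date}   = $date;
                $self->{errors} = [ grep { $_ !~ /date/ } @{ $self->{errors} } ];
            }
            return $self->is_valid;
        }

        1;

    Typical usage of the sketch:

        my $w = My::Widget->new(date => "tomorrow");
        print join(", ", $w->errors), "\n" unless $w->is_valid;
        $w->set_date("2010-05-01");   # object becomes valid without rebuilding it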

    Read the article

  • Using ServletOutputStream to write very large files in a Java servlet without memory issues

    - by Martin
    I am using IBM Websphere Application Server v6 and Java 1.4 and am trying to write large CSV files to the ServletOutputStream for a user to download. Files range from 50-750MB at the moment. The smaller files aren't causing too much of a problem, but with the larger files it appears that the data is being written into the heap, which is then causing an OutOfMemory error and bringing down the entire server. These files can only be served out to authenticated users over https, which is why I am serving them through a Servlet instead of just sticking them in Apache. The code I am using is (some fluff removed around this):

        resp.setHeader("Content-length", "" + fileLength);
        resp.setContentType("application/vnd.ms-excel");
        resp.setHeader("Content-Disposition","attachment; filename=\"export.csv\"");

        FileInputStream inputStream = null;
        try {
            inputStream = new FileInputStream(path);
            byte[] buffer = new byte[1024];
            int bytesRead = 0;
            do {
                bytesRead = inputStream.read(buffer, offset, buffer.length);
                resp.getOutputStream().write(buffer, 0, bytesRead);
            } while (bytesRead == buffer.length);
            resp.getOutputStream().flush();
        } finally {
            if (inputStream != null)
                inputStream.close();
        }

    The FileInputStream doesn't seem to be causing a problem, as if I write to another file or just remove the write completely the memory usage doesn't appear to be a problem. What I am thinking is that the resp.getOutputStream().write is being stored in memory until the data can be sent through to the client. So the entire file might be read and stored in the resp.getOutputStream(), causing my memory issues and crashing! I have tried buffering these streams and also tried using Channels from java.nio, none of which seems to make any bit of difference to my memory issues. I have also flushed the outputstream once per iteration of the loop and after the loop, which didn't help.

    Read the article

  • MVC and repository pattern data efficiency

    - by Shawn Mclean
    My project is structured as follows:

        DAL

        public IQueryable<Post> GetPosts()
        {
            var posts = from p in context.Post
                        select p;
            return posts;
        }

        Service

        public IList<Post> GetPosts()
        {
            var posts = repository.GetPosts().ToList();
            return posts;
        }

        //Returns a list of the latest feeds, restricted by the count.
        public IList<PostFeed> GetPostFeeds(int latestCount)
        {
            List<Post> posts = GetPosts();

            //CODE TO CREATE FEEDS HERE

            return feeds;
        }

    Let's say GetPostFeeds(5) is supposed to return the 5 latest feeds. By going up the list, doesn't it pull down every single post from the database from GetPosts(), just to extract 5 from it? If each post is, say, 5kb from the database, and there are 1 million records, won't that be 5GB of RAM being used per call to GetPostFeeds()? Is this the way it happens? Should I go back to my DAL and write queries that return only what I need?
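
    For what it's worth, a sketch of the usual fix (the OrderBy column is made up, since the Post class isn't shown): keep the result as IQueryable until the restriction is applied, so the sort and Take are composed into the SQL instead of done in memory.

        // Service layer - compose Take() onto the repository's IQueryable before materializing.
        public IList<PostFeed> GetPostFeeds(int latestCount)
        {
            List<Post> latestPosts = repository.GetPosts()                          // still IQueryable<Post>, no SQL yet
                                               .OrderByDescending(p => p.CreatedDate) // hypothetical column
                                               .Take(latestCount)                   // becomes TOP (latestCount) in the SQL
                                               .ToList();                           // query executes here, returns 5 rows

            //CODE TO CREATE FEEDS HERE (unchanged)
            return feeds;
        }

    The key point is that ToList() is what triggers execution; as long as the service layer keeps the IQueryable and only calls ToList() after OrderBy/Take, only the requested rows come back from the database.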

    Read the article

  • Reporting System architecture for better performance

    - by pauloya
    Hi, We have a product that runs Sql Server Express 2005 and uses mainly ASP.NET. The database has around 200 tables, with a few (4 or 5) that can grow from 300 to 5000 rows per day and keep a history of 5 years, so they can grow to have 10 million rows. We have built a reporting platform that allows customers to build reports based on templates, fields and filters. We have faced performance problems almost since the beginning; we try to keep report display under 10 seconds but some of them go up to 25 seconds (especially for those customers with a long history). We keep checking indexes and trying to improve the queries, but we get the feeling that there's only so much we can do. Of course, the fact that the queries are generated dynamically doesn't help with the optimization. We also added a few tables that keep redundant data, but then we have the added problem of maintaining this data up to date, and also Sql Express has a limit on the size of databases. We are now facing a point where we have to decide if we want to give up real-time reports, or maybe cut the history to be able to have better performance. I would like to ask what is the recommended approach for this kind of system. Also, should we start looking for third-party tools/platforms? I know OLAP can be an option, but can we make it work on Sql Server Express, or at least with a license that is cheap enough to distribute to thousands of deployments? Thanks

    Read the article

  • Manipulate score/rank on query results from NHibernate.Search

    - by Fernando Figueiredo
    I've been working with NHibernate, NHibernate.Search and Lucene.Net to improve the search engine used on the website I develop. Basically, I use it to search contents of corporations specification documents. This is not to be confused with Lucene's notion of documents: in my case, a specification document (which I'll hereafter call a "specdoc") can contain many pages, and the content of these pages are the ones that are actually indexed (thus, the pages themselves are the ones that fall into Lucene's concept of documents). So, the pages belong to a specdoc, that in turn belong to a corporation (so, a corporation can have many specdocs). I'm using NHibernate.Search "IndexEmbedded" and "ContainedIn" attributes to associate the pages with their specdoc and the specdocs to their corporations, so I can query for terms in specdoc pages and have Lucene/NH.Search return either the pages themselves, the specdocs, or the corporations that match the query on the pages. I can query this way and get ranked results, thus presenting results (that is, corporations, specdocs or pages) by relevance, which is great. But now I need something more. Specifically in the case where I query terms and have NH.Search return the corporations that match, I need to manually/artificially tune the score of some of the results, because there are corporations that I want to show up on the top of the result set - think of "sponsored results". I'm thinking of doing it on my application, maybe creating an entity/database table that contain an association to the corporation entity, and a score boost value. But I don't know how to feed this to Lucene and have it boost the results accordingly at search time. Initially I thought about deriving a Similarity class to do this, but it doesn't look like Similarity can be used to modify result sets at search time. As per this page, it looks like what I need is to mess around with weight or scoring. But the docs are a little superficial in that there are no examples on how to implement a custom scoring, let alone integrate it with NH.Search. So, does anyone know how to do this, or point me to some documentation or working example on how to do something similar? Thanks!

    Read the article

  • Basic jUnit Questions

    - by Epitaph
    I was testing a String multiplier class with a multiply() method that takes 2 numbers as inputs (as String) and returns the result number (as String):

        public String multiply(String num1, String num2);

    I have done the implementation and created a test class with the following test cases involving the input String parameter as:

        1) valid numbers
        2) characters
        3) special symbol
        4) empty string
        5) Null value
        6) 0
        7) Negative number
        8) float
        9) Boundary values
        10) Numbers that are valid but their product is out of range
        11) numbers with a + sign (+23)

    1) I'd like to know if "each and every" assertEquals() should be in its own test method? Or can I group similar test cases, like a testInvalidArguments() that contains all asserts involving invalid characters, since ALL of them throw the same NumberFormatException?
    2) If testing an input value like a character ("a"), do I need to include test cases for ALL scenarios: "a" as the first argument, "a" as the second argument, "a" and "b" as the 2 arguments?
    3) As per my understanding, the benefit of these unit tests is to find out the cases where the input from a user might fail and result in an exception. And then we can give the user a meaningful message (asking them to provide valid input) instead of an exception. Is that correct? And is it the only benefit?
    4) Are the 11 test cases mentioned above sufficient? Did I miss something? Did I overdo it? When is enough?
    5) Following from the above point, have I successfully tested the multiply() method?
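
    On question 1, a common compromise (a sketch; the class and method names are hypothetical) is one focused test method per behaviour rather than per assert - valid inputs that share an expected outcome can live together, while the expected-exception cases read best as tiny individual tests:

        import static org.junit.Assert.assertEquals;

        import org.junit.Test;

        public class StringMultiplierTest {

            private final StringMultiplier multiplier = new StringMultiplier(); // hypothetical class under test

            @Test
            public void multipliesValidNumbers() {
                assertEquals("6", multiplier.multiply("2", "3"));
                assertEquals("0", multiplier.multiply("0", "123"));
                assertEquals("-6", multiplier.multiply("-2", "3"));
            }

            // One behaviour, one test: any non-numeric argument is rejected the same way.
            @Test(expected = NumberFormatException.class)
            public void rejectsNonNumericFirstArgument() {
                multiplier.multiply("a", "3");
            }

            @Test(expected = NumberFormatException.class)
            public void rejectsNonNumericSecondArgument() {
                multiplier.multiply("3", "a");
            }

            @Test(expected = NumberFormatException.class)
            public void rejectsEmptyString() {
                multiplier.multiply("", "3");
            }
        }

    The rule of thumb being sketched here: group asserts that verify the same behaviour, and split anything that could fail for a different reason, so a red test name tells you what broke without reading the body.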

    Read the article

  • How do you create a non-Thread-based Guice custom Scope?

    - by Russ
    It seems that all Guice's out-of-the-box Scope implementations are inherently Thread-based (or ignore Threads entirely): Scopes.SINGLETON and Scopes.NO_SCOPE ignore Threads and are the edge cases: global scope and no scope. ServletScopes.REQUEST and ServletScopes.SESSION ultimately depend on retrieving scoped objects from a ThreadLocal<Context>. The retrieved Context holds a reference to the HttpServletRequest that holds a reference to the scoped objects stored as named attributes (where name is derived from com.google.inject.Key). Class SimpleScope from the custom scope Guice wiki also provides a per-Thread implementation using a ThreadLocal<Map<Key<?>, Object>> member variable. With that preamble, my question is this: how does one go about creating a non-Thread-based Scope? It seems that something that I can use to look up a Map<Key<?>, Object> is missing, as the only things passed in to Scope.scope() are a Key<T> and a Provider<T>. Thanks in advance for your time.
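
    A sketch of what a non-thread-based scope can look like (my own illustration, not from the Guice docs): the missing Map<Key<?>, Object> simply has to live somewhere you control - here, an explicit context object that the caller enters and exits, so the scope can be driven from any thread, or from no particular thread at all.

        import com.google.inject.Key;
        import com.google.inject.Provider;
        import com.google.inject.Scope;

        import java.util.HashMap;
        import java.util.Map;

        /** Scope whose cache lives in an explicitly managed context, not a ThreadLocal. */
        public class ExplicitContextScope implements Scope {

            /** The per-unit-of-work cache; the caller decides what a unit of work is. */
            public static final class Context {
                private final Map<Key<?>, Object> values = new HashMap<Key<?>, Object>();
            }

            private Context current;   // guarded by 'this'

            public synchronized void enter(Context context) { current = context; }
            public synchronized void exit() { current = null; }

            public <T> Provider<T> scope(final Key<T> key, final Provider<T> unscoped) {
                return new Provider<T>() {
                    public T get() {
                        synchronized (ExplicitContextScope.this) {
                            if (current == null) {
                                throw new IllegalStateException("Not inside a context for " + key);
                            }
                            @SuppressWarnings("unchecked")
                            T value = (T) current.values.get(key);
                            if (value == null) {
                                value = unscoped.get();
                                current.values.put(key, value);
                            }
                            return value;
                        }
                    }
                };
            }
        }

    This variant allows only one active context at a time across the whole injector; if concurrent units of work are needed, the lookup key has to come from somewhere else entirely (a request object, a correlation id passed down the call chain, and so on), which is exactly why the built-in scopes fall back on ThreadLocal.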

    Read the article

  • Asynchronous daemon processing / ORM interaction with Django

    - by perrierism
    I'm looking for a way to do asynchronous data processing with a daemon that uses Django ORM. However, the ORM isn't thread-safe; it's not thread-safe to try to retrieve / modify django objects from within threads. So I'm wondering what the correct way to achieve asynchrony is? Basically what I need to accomplish is taking a list of users in the db, querying a third party api and then making updates to user-profile rows for those users. As a daemon or background process. Doing this in series per user is easy, but it takes too long to be at all scalable. If the daemon is retrieving and updating the users through the ORM, how do I achieve processing 10-20 users at a time? I would use a standard threading / queue system for this but you can't thread interactions like models.User.objects.get(id=foo) ... Django itself is an asynchronous processing system which makes asynchronous ORM calls(?) for each request, so there should be a way to do it? I haven't found anything in the documentation so far. Cheers
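
    One way to get the concurrency without sharing ORM objects across threads (a sketch; the third-party client and the field mapping are made up): let worker threads do only the slow API calls on plain values, and funnel results back through a queue so a single thread performs all the ORM reads and writes.

        import Queue
        import threading

        from django.contrib.auth.models import User
        from thirdparty import fetch_profile_data   # hypothetical third-party API client

        NUM_WORKERS = 10
        work_queue = Queue.Queue()
        result_queue = Queue.Queue()

        def worker():
            """Only touches plain values (user ids), never ORM objects."""
            while True:
                user_id = work_queue.get()
                try:
                    result_queue.put((user_id, fetch_profile_data(user_id)))
                finally:
                    work_queue.task_done()

        for _ in range(NUM_WORKERS):
            t = threading.Thread(target=worker)
            t.setDaemon(True)
            t.start()

        # Main thread: ORM read, fan out ids, wait for the API calls to finish.
        user_ids = list(User.objects.values_list('id', flat=True))
        for user_id in user_ids:
            work_queue.put(user_id)
        work_queue.join()

        # Main thread: all ORM writes happen here, one at a time.
        while not result_queue.empty():
            user_id, data = result_queue.get()
            User.objects.filter(id=user_id).update(first_name=data.get('name', ''))  # made-up field mapping

    Whether the ORM is truly unusable from threads is debatable (each thread gets its own database connection), but keeping all ORM access on one thread sidesteps the question entirely while still parallelizing the part that is actually slow - the third-party API calls.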

    Read the article

  • Surprising results with .NET multi-threading algorithm

    - by Myles J
    Hi, I recently wrote a C# console timetabling algorithm that is based on a combination of a genetic algorithm with a few brute force routines thrown in. The initial results were promising, but I figured I could improve the performance by splitting the brute force routines up to run in parallel on multi-processor architectures. To do this I used the well documented Producer/Consumer model (as documented in this fantastic article http://www.albahari.com/threading/part2.aspx#_ProducerConsumerQWaitHandle). I changed my code to create one thread per logical processor during the brute force routines. The performance gains on my workstation were very pleasing. I am running Windows XP on the following hardware:

        Intel Core 2 Quad CPU 2.33 GHz
        3.49 GB RAM

    Initial tests indicated average performance gains of approx 40% when using 4 threads. The next step was to deploy the new multi-threading version of the algorithm to our higher spec UAT server. Here is the spec of our UAT server:

        Windows 2003 Server R2 Enterprise x64
        8 CPU (Quad-Core) AMD Opteron 2.70 GHz
        255 GB RAM

    After running the first round of tests we were all extremely surprised to find that the algorithm actually runs slower on the high spec W2003 server than on my local XP workstation! In fact, the tests seem to indicate that it doesn't matter how many threads are generated (tests were run with the app spawning between 2 and 32 threads). The algorithm always runs significantly slower on the UAT W2003 server. How could this be? Surely the app should run faster on an 8 CPU (Quad-Core) server than on my quad-core workstation? Why are we seeing no performance gains with the multi-threading on the W2003 server whilst the XP workstation tests show gains of up to 40%? Any help or pointers would be appreciated. Regards Myles

    Read the article

  • Perl XML SAX parser emulating XML::Simple record for record

    - by DVK
    Short Q summary: I am looking for a fast XML parser (most likely a wrapper around some standard SAX parser) which will produce per-record data structures 100% identical to those produced by XML::Simple.

    Details: We have a large code infrastructure which depends on processing records one-by-one and expects the record to be a data structure in a format produced by XML::Simple, since it has always used XML::Simple since the early Jurassic era. An example simple XML is:

        <root>
          <rec><f1>v1</f1><f2>v2</f2></rec>
          <rec><f1>v1b</f1><f2>v2b</f2></rec>
          <rec><f1>v1c</f1><f2>v2c</f2></rec>
        </root>

    And example rough code is:

        sub process_record {
            my ($obj, $record_hash) = @_;
            # do_stuff
        }

        my $records = XML::Simple->XMLin(@args)->{root};
        foreach my $record (@$records) { $obj->process_record($record) };

    As everyone knows, XML::Simple is, well, simple. And more importantly, it is very slow and a memory hog - due to being a DOM parser and needing to build/store 100% of the data in memory. So it's not the best tool for parsing an XML file consisting of a large number of small records record-by-record. However, re-writing the entire code (which consists of a large number of "process_record"-like methods) to work with a standard SAX parser seems like a big task not worth the resources, even at the cost of living with XML::Simple. What I'm looking for is an existing module which will probably be based on a SAX parser (or anything fast with a small memory footprint) which can be used to produce $record hashrefs one by one, based on the XML pictured above, that can be passed to $obj->process_record($record) and be 100% identical to what XML::Simple's hashrefs would have been.
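
    One candidate worth evaluating (a sketch, not a guarantee of 100% identical hashrefs): XML::Twig parses in stream fashion on top of XML::Parser, and its simplify() method is designed to mimic XML::Simple's output, so a per-record handler can feed the existing process_record methods while purging each record after use. Whether the structures match exactly for the real documents (forcearray, key folding, etc.) has to be verified against the XML::Simple options currently in use.

        use strict;
        use warnings;
        use XML::Twig;

        our $obj;   # the existing processing object, assumed to be set up elsewhere

        my $twig = XML::Twig->new(
            twig_handlers => {
                'root/rec' => sub {
                    my ($t, $rec) = @_;
                    # simplify() returns an XML::Simple-style hashref for this element
                    my $record = $rec->simplify( forcearray => 0 );
                    $obj->process_record($record);
                    $t->purge;    # discard everything parsed so far - keeps memory flat
                },
            },
        );

        $twig->parsefile('records.xml');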

    Read the article

  • Safe to update separate regions of a BufferedImage in separate threads?

    - by finnw
    I have a collection of BufferedImage instances, one main image and some subimages created by calling getSubImage on the main image. The subimages do not overlap. I am also making modifications to the subimage and I want to split this into multiple threads, one per subimage. From my understanding of how BufferedImage, Raster and DataBuffer work, this should be safe because: Each instance of BufferedImage (and its respective WritableRaster) is accessed from only one thread. The shared ColorModel is immutable The DataBuffer has no fields that can be modified (the only thing that can change is elements of the backing array.) Modifying disjoint segments of an array in separate threads is safe. However I cannot find anything in the documentation that says that it is definitely safe to do this. Can I assume it is safe? I know that it is possible to work on copies of the child Rasters but I would prefer to avoid this because of memory constraints. Otherwise, is it possible to make the operation thread-safe without copying regions of the parent image?
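
    For the threading side, a sketch of the structure this implies (tile counts and the per-tile work are placeholders); each task touches only its own non-overlapping subimage, which is exactly the property the question is asking about:

        import java.awt.image.BufferedImage;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class TiledProcessor {

            public static void processInTiles(final BufferedImage image, int cols, int rows)
                    throws InterruptedException {
                ExecutorService pool =
                        Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
                List<Callable<Void>> tasks = new ArrayList<Callable<Void>>();

                int tileW = image.getWidth() / cols;   // edge pixels ignored if not evenly divisible
                int tileH = image.getHeight() / rows;
                for (int y = 0; y < rows; y++) {
                    for (int x = 0; x < cols; x++) {
                        // Non-overlapping view sharing the parent's pixel buffer
                        final BufferedImage tile = image.getSubimage(x * tileW, y * tileH, tileW, tileH);
                        tasks.add(new Callable<Void>() {
                            public Void call() {
                                // Placeholder per-tile work; only this tile's pixels are touched
                                for (int ty = 0; ty < tile.getHeight(); ty++) {
                                    for (int tx = 0; tx < tile.getWidth(); tx++) {
                                        int argb = tile.getRGB(tx, ty);
                                        tile.setRGB(tx, ty, argb ^ 0x00FFFFFF); // e.g. invert colours
                                    }
                                }
                                return null;
                            }
                        });
                    }
                }

                pool.invokeAll(tasks);  // waits for all tiles to finish
                pool.shutdown();
            }
        }

    This sketches the intended usage only and does not by itself settle the safety question in the post; the invokeAll() barrier does do real work here, though, since the executor's memory-consistency guarantees make the workers' writes visible to the caller once invokeAll returns.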

    Read the article

  • unable to use turboc through xp. why? solution?

    - by Fizz
    hi, I'm having problems opening the Turbo C++ compiler (DOS version) directly on XP. If I double-click on the TC icon through the Windows GUI, it opens for a second (a blank DOS screen) and shuts down. So I have to access it through cmd, changing to the Turbo C directory and running tc, i.e.:

        cmd (enter)
        c:\tc\bin (enter)
        tc.exe

    This way TC opens and I am able to do all my text-mode programming. Why do I always have to start TC through DOS - why can't I start it through XP? Also, after starting TC through DOS, I'm unable to execute any graphics program through it. I write a simple program for creating a circle using predefined functions, and all directories have been set as required, but when I compile and run the program, TC exits and returns to the DOS command prompt. Why does this happen? Solution? I have also tried using DOSBox to run Turbo C; it closes automatically on executing the graphics program. Please help...

    Read the article

  • Should programmers do Pro Bono work? where are the code public defenders?

    - by Tj Kellie
    How many projects are people doing based on pro bono publico ideals, versus working for the highest wage or the potential for a cash-in-buy-out payday? For years lawyers have been called out for excessive gathering of wealth from high bill rates and huge settlement deals, hiring out their knowledge and skills to the highest bidders. People call for them to do more for free, using the law and their time to defend or further some cause that's in the public's best interest. Is professional software development that different? So many bright people and so much knowledge of complex systems. Do you think that there is enough of a "pro bono" movement to solve the social and public problems in the industry right now? If so, what are the examples to point to? OLPC? NOTE: Saying that open source software is the same as pro bono misses the point completely. I was looking for specific projects with a social context, not just group-sourcing for free software. Just because you're not making anyone pay for your software does not mean it's doing anyone any good. I'm not calling for mandatory pro bono work for programmers; I really just want some objective opinions and concrete examples of social-minded software/tech development projects like the One Laptop Per Child project. I'm sure open source would be a natural tie-in for some.

    Read the article
