Search Results

Search found 13227 results on 530 pages for 'memory efficiency'.

  • Finding most efficient transmission size in varying network latency scenarios

    - by rwmnau
    I'm building a .NET remoting client/server that will be transmitting thousands of files of varying sizes (everything from a few bytes to hundreds of MB), and I'm curious about a general method for finding the appropriate transmission size. As I see it, there's the following tradeoff:

    - Serialize the entire file into a transmission object and transmit it at once, regardless of size. This is the fastest option, but a failure during transmission requires that the whole file be re-transmitted.
    - If the file is larger than something small (like 4 KB), break it into 4 KB chunks and transmit those, re-assembling on the server. Besides the added complexity, this is slower because of the continued round-trips and acknowledgements, though a failure of any one piece doesn't waste much time.

    The ideal transmission size (taking into account negotiation latency vs. failure rate) is somewhere in between, and I'm wondering how to find the best size for a particular client. Do I add a dynamic tuning step to my transmission that watches the current bytes/second average and raises the transmission size until the speed starts to drop (failures overwhelming the saved negotiation cost)? Or is there some other method for determining the ideal transmission size? The application will be multi-threaded, so the number of threads also factors into the calculation. I'm not looking for a formula (though I'll take one if you've got it), just what to consider as I create this process.
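
    One way to make the "dynamic tuning step" concrete is to borrow TCP's additive-increase / multiplicative-decrease feedback loop: grow the chunk size while throughput keeps improving, and back off sharply on a failed chunk. The question is about .NET remoting, but the loop itself is language-neutral; here is a minimal, hedged sketch in Java, with all class and method names hypothetical:

        public class ChunkSizeTuner {
            private static final int MIN_CHUNK = 4 * 1024;         // 4 KB floor
            private static final int MAX_CHUNK = 64 * 1024 * 1024; // 64 MB ceiling

            private int chunkSize = 64 * 1024;   // starting guess
            private double bestThroughput = 0.0; // best bytes/second seen so far

            public int currentChunkSize() { return chunkSize; }

            // Call after each chunk with how many bytes were sent, how long it
            // took, and whether the chunk had to be re-transmitted.
            public void report(long bytesSent, long millis, boolean failed) {
                if (failed) {
                    // A failure wastes the whole chunk: back off multiplicatively.
                    chunkSize = Math.max(MIN_CHUNK, chunkSize / 2);
                    return;
                }
                double throughput = bytesSent / (Math.max(1, millis) / 1000.0);
                if (throughput >= bestThroughput) {
                    // Per-chunk negotiation overhead still dominates: probe upward.
                    bestThroughput = throughput;
                    chunkSize = Math.min(MAX_CHUNK, chunkSize + chunkSize / 4);
                } else {
                    // Throughput stopped improving: ease back toward the sweet spot.
                    chunkSize = Math.max(MIN_CHUNK, chunkSize - chunkSize / 8);
                }
            }
        }

    Each sender thread could keep its own tuner (or share one behind synchronization), which also folds the thread count into the feedback naturally, since the threads compete for the same bandwidth.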

  • Writing an efficient cron job script utilizing Zend_Mail_Storage_Imap.

    - by fireeyedboy
    I'm new to the IMAP protocol and Zend_Mail_Storage, and I'm writing a small PHP script for a cron job that should regularly poll an IMAP account, check for new messages, and send an e-mail if new messages have arrived. As you can imagine, I only want to poll the IMAP account for relevant messages, and I only want to send a new e-mail if new messages have arrived since the last polled message. So I thought of keeping track of the last message I polled using some unique identifier for a message. But I'm a bit uncertain whether the methods I want to utilize for this actually do what I expect them to do. My questions:

    - Does the iterator position of Zend_Mail_Storage_Imap actually correspond to some IMAP unique identifier for messages, or is it simply an internal position of Zend_Mail_Storage_Abstract? For instance, if I tell it to seek() to message 5 (which I stored from an earlier session), will it indeed seek to the appropriate message on the IMAP server, even if messages have been deleted since the last session?
    - Would keeping track of this latest polled message id in a file suffice for a cron job that, say, polls the account every 5 or 10 minutes? Or is this too naive, and should I be using a database instead? Or is there maybe a much easier way to keep track of such state with Zend_Mail_Storage_Abstract?
    - Also, do I need to poll every IMAP folder, or is everything accumulated when I poll INBOX?

    If you could shed some light on any of these matters, I'd appreciate it. Thanks in advance.
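
    The protocol-level concept the first question is circling is the IMAP UID (together with the folder's UIDVALIDITY): UIDs are stable across sessions, while message sequence numbers are renumbered when messages are deleted, so a stored sequence position is not safe to seek back to. The question is about PHP/Zend, but purely as an illustration of the idea, here is the same "poll only what's new" loop in Java using the JavaMail UIDFolder interface (host, credentials, and the state file are placeholders):

        import java.nio.file.*;
        import java.util.Properties;
        import javax.mail.*;

        public class ImapPoller {
            public static void main(String[] args) throws Exception {
                Path state = Paths.get("last-uid.txt"); // persisted between cron runs
                long lastUid = Files.exists(state)
                        ? Long.parseLong(Files.readString(state).trim()) : 0L;

                Session session = Session.getInstance(new Properties());
                Store store = session.getStore("imaps");
                store.connect("imap.example.com", "user", "password"); // placeholders
                Folder inbox = store.getFolder("INBOX");
                inbox.open(Folder.READ_ONLY);

                // UIDs are stable, so "everything above lastUid" is exactly
                // "new since the previous poll".
                UIDFolder uf = (UIDFolder) inbox;
                Message[] fresh = uf.getMessagesByUID(lastUid + 1, UIDFolder.LASTUID);
                for (Message m : fresh) {
                    long uid = uf.getUID(m);
                    if (uid <= lastUid) continue; // servers may echo the last message
                    System.out.println("New: " + m.getSubject());
                    lastUid = uid;
                }

                Files.writeString(state, Long.toString(lastUid));
                inbox.close(false);
                store.close();
            }
        }

    A flat file is generally fine for a 5-10 minute cron interval: the state is a single number, and the worst case after a crash is one duplicate notification.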

  • Heap Dump Root Classes

    - by Adnan Memon
    We have a production system going into an infinite loop of full GCs, and free memory drops from 8 GB to about 1 MB in just 2 minutes. The heap dump tells me there is an array of java.lang.Object ([Ljava.lang.Object;) holding millions of java.lang.String objects that all contain the same string value, taking 99% of the heap. But it doesn't tell me which class references this array, so that I can fix it in the code. I took the heap dump using the jmap tool on JDK 6 and tried JProfiler, NetBeans, SAP Memory Analyzer, and IBM Memory Analyzer, but none of them tell me what is causing this huge array of objects, i.e. which class references or contains it. Do I have to take a different dump with a different configuration to get that info? Or is there anything else that can help me find the culprit class? It would help a lot.

  • Indy Write Buffering / Efficient TCP communication

    - by Smasher
    I know, I'm asking a lot of questions... but as a new Delphi developer I keep falling over all these questions :) This one deals with TCP communication using Indy 10. To make communication efficient, I code a client operation request as a single byte (in most scenarios followed by other data bytes, of course, but in this case only one single byte). The problem is that

        var
          Bytes : TBytes;
        ...
        SetLength (Bytes, 1);
        Bytes [0] := OpCode;
        FConnection.IOHandler.Write (Bytes, 1);
        ErrorCode := Connection.IOHandler.ReadByte;

    does not send that byte immediately (at least the server's execute handler is not invoked). If I change the '1' to a '9', for example, everything works fine. I assumed that Indy buffers the outgoing bytes and tried to disable write buffering with

        FConnection.IOHandler.WriteBufferClose;

    but it did not help. How can I send a single byte and make sure that it is immediately sent? And - I'll add another little question here - what is the best way to send an integer using Indy? Unfortunately I can't find a function like WriteInteger in the IOHandler of TIdTCPServer... and WriteLn (IntToStr (SomeIntVal)) seems not very efficient to me. Does it make a difference whether I use multiple write commands in a row or pack things together in a byte array and send that once? Thanks for any answers! EDIT: I added a hint that I'm using Indy 10, since there seem to be major changes concerning the read and write procedures.
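
    Indy specifics aside, the two usual suspects when "a small write doesn't arrive immediately" are an unflushed application-level write buffer and the OS-level Nagle algorithm coalescing small packets. As a hedged, language-neutral illustration (not Indy code), here is "send one opcode byte plus an integer, immediately" over a plain Java socket:

        import java.io.*;
        import java.net.Socket;

        public class SingleByteSender {
            public static void main(String[] args) throws IOException {
                // Host and port are placeholders.
                try (Socket socket = new Socket("server.example.com", 9000)) {
                    // Disable Nagle's algorithm so small writes are not held
                    // back waiting for more data or for an ACK.
                    socket.setTcpNoDelay(true);

                    DataOutputStream out = new DataOutputStream(
                            new BufferedOutputStream(socket.getOutputStream()));

                    out.writeByte(0x01); // the one-byte opcode
                    out.writeInt(42);    // a 4-byte big-endian integer
                    out.flush();         // push the buffered bytes onto the wire now

                    DataInputStream in =
                            new DataInputStream(socket.getInputStream());
                    int errorCode = in.readUnsignedByte(); // wait for the reply
                    System.out.println("Server replied: " + errorCode);
                }
            }
        }

    Note that with a buffered stream flushed once, several writes in a row and one pre-packed byte array end up in the same packet either way, so the difference is mostly style plus the cost of the extra calls.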

  • Database design in blogging systems

    - by Peter
    As a learning exercise I'm trying to build myself a blogging system. The goal is to code something that will let me create multiple blogs, like blogger.com or wordpress.com, but much simplified. I would like to ask you what you think is the best database design for this type of script. Is it better to have one big table containing the posts from all blogs of all users (like FriendFeed), or would it be better to create a separate table for each blog's posts? Big thanks in advance for your help, Peter.

  • JavaScript functional inheritance with prototypes

    - by cdmckay
    In Douglas Crockford's JavaScript: The Good Parts, he recommends that we use functional inheritance. Here's an example:

        var mammal = function(spec, my) {
            var that = {};
            my = my || {};

            // Protected
            my.clearThroat = function() {
                return "Ahem";
            };

            that.getName = function() {
                return spec.name;
            };

            that.says = function() {
                return my.clearThroat() + ' ' + spec.saying || '';
            };

            return that;
        }

        var cat = function(spec, my) {
            var that = {};
            my = my || {};
            spec.saying = spec.saying || 'meow';
            that = mammal(spec, my);

            that.purr = function() {
                return my.clearThroat() + " purr";
            };

            that.getName = function() {
                return that.says() + ' ' + spec.name + ' ' + that.says();
            };

            return that;
        };

        var kitty = cat({name: "Fluffy"});

    The main issue I have with this is that every time I make a mammal or a cat, the JavaScript interpreter has to create new function objects for every instance; that is, you don't get to share the methods between instances. My question is: how do I make this code more efficient? For example, if I were making thousands of cat objects, what is the best way to modify this pattern to take advantage of the prototype object?

  • Advice needed on best and most efficient practices for developing a Google Apps application

    - by Ali
    Hi guys, I'm getting my feet wet with developing my order management application for integration with Google Apps. However, there are certain aspects I need to take into consideration before proceeding any further. My application would upload documents to Google Documents and store contacts in Google Contacts. It requires that a single order can have a number of uploaded documents associated with it, as well as some associated contacts. My question is what would be the most efficient way to implement this. I could keep key tables for both contacts and documents which would contain just an ID and a link to the document/contact (their respective identification IDs on Google). Or I could maintain an exact replica of the information in my own database, as well as a link to the item on Google - but wouldn't that be too redundant? I also don't want my application to be really slow: I'm afraid that every time I make a call to Google Docs to retrieve a list of documents, or to Google Contacts, it would slow my application down - or am I getting worried for no reason? Any advice would be most appreciated.

  • A good book about software design

    - by Idan
    I'm looking for a book that talks about software design decisions like: when should I use a thread pool and when shouldn't I (and, in the first case, how); how I should access my DB and how big my transactions should be; how to read XML - whether to use DOM or SAX, what library to choose, and the best ways to parse; how to handle client-server apps in the most efficient way; and more stuff like that. Does a book like that exist? (Preferably with examples in C++, but that's not that important.)

  • MVC and repository pattern data efficiency

    - by Shawn Mclean
    My project is structured as follows:

    DAL:

        public IQueryable<Post> GetPosts()
        {
            var posts = from p in context.Post
                        select p;
            return posts;
        }

    Service:

        public IList<Post> GetPosts()
        {
            var posts = repository.GetPosts().ToList();
            return posts;
        }

        // Returns a list of the latest feeds, restricted by the count.
        public IList<PostFeed> GetPostFeeds(int latestCount)
        {
            List<Post> posts = GetPosts();
            // CODE TO CREATE FEEDS HERE
            return feeds;
        }

    Let's say GetPostFeeds(5) is supposed to return the 5 latest feeds. By going up the list, doesn't it pull down every single post from the database via GetPosts(), just to extract 5 from it? If each post is, say, 5 KB in the database and there are 1 million records, won't that be 5 GB of RAM used per call to GetPostFeeds()? Is this the way it happens? Should I go back to my DAL and write queries that return only what I need?
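
    The crux is where materialization happens: ToList() in the service forces the full result set, whereas composing on the still-deferred IQueryable (ordering and Take(latestCount) before materializing) lets a LINQ provider push the restriction into the generated SQL, e.g. as a TOP 5. The same "compose first, materialize last" shape, sketched loosely in Java with lazy streams (the Post type and its ordering field are hypothetical):

        import java.util.*;
        import java.util.stream.*;

        class Post {
            final int id;
            final long createdAt;
            Post(int id, long createdAt) { this.id = id; this.createdAt = createdAt; }
        }

        class PostService {
            private final List<Post> database = new ArrayList<>(); // stand-in store

            // DAL analogue: returns a lazy pipeline; nothing is materialized
            // yet, just as an IQueryable defers execution until enumerated.
            Stream<Post> posts() {
                return database.stream();
            }

            // Restrict BEFORE collecting, so only latestCount Posts are kept.
            List<Post> latestPosts(int latestCount) {
                return posts()
                        .sorted(Comparator.comparingLong((Post p) -> p.createdAt)
                                          .reversed())
                        .limit(latestCount)
                        .collect(Collectors.toList());
            }
        }

    The analogy is loose - an in-memory stream still scans every element to sort - but the point stands: with a database-backed IQueryable, applying the restriction before materializing is what keeps the 1 million rows on the server.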

  • How to find the right balance between "quick & dirty" and "nice & general" code?

    - by Frank
    This is not a direct programming question, but a little help from the programming community would be appreciated. I am suffering from an overgeneralization disease. I can't stop spending valuable time making my code as general and abstract as possible. I could also call it the toolkit/library disease. I tend to turn every programming task into a general problem and try to "write a toolkit" that would work for many similar problems. I know it's a good thing in general, if there is enough time, but sometimes I should be writing a quick prototype and just can't seem to write the quick and dirty code that just works for the special case. I often get excited about an idea that makes the code more general and user-configurable, and underestimate the time it takes to actually implement it that way. Does anyone else have this experience? How can I force myself to find the right balance between "quick hack" and "nice solution"?

  • Cache efficient code

    - by goldenmean
    This could sound like a subjective question, but what I am looking for are specific instances you have encountered related to this.

    1) How do you make code cache-effective / cache-friendly (more cache hits, as few cache misses as possible)? From both perspectives: the data cache and the program (instruction) cache. That is, what things in one's code, related to data structures and code constructs, should one take care of to make the code cache-effective?

    2) Are there particular data structures one must use or avoid, or particular ways of accessing the members of a structure, etc., to make code cache-effective?

    3) Are there any program constructs (if, for, switch, break, goto, ...) or kinds of code flow (a for inside an if, an if inside a for, etc.) one should follow or avoid in this matter?

    I am looking forward to hearing individual experiences related to making cache-efficient code in general. It can be any programming language (C, C++, Assembly, ...), any hardware target (ARM, Intel, PowerPC, ...), any OS (Windows, Linux, Symbian, ...), etc. The more variety, the better it will help to understand this deeply.
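
    One classic, concrete instance is traversal order: walking a 2-D array along its memory layout reuses the cache line you just fetched, while walking across it evicts lines before they're reused. A minimal sketch in Java (where a 2-D array is an array of row arrays, so row-major order is the friendly one); the effect is the same in C/C++:

        public class LoopOrder {
            static final int N = 4096;

            public static void main(String[] args) {
                int[][] grid = new int[N][N];
                long sum = 0;

                // Cache-friendly: the inner loop walks consecutive elements of
                // one row array, so each fetched cache line is fully used.
                long t0 = System.nanoTime();
                for (int i = 0; i < N; i++)
                    for (int j = 0; j < N; j++)
                        sum += grid[i][j];
                long rowMajor = System.nanoTime() - t0;

                // Cache-hostile: the inner loop jumps between rows, touching a
                // different cache line (and possibly page) on every access.
                t0 = System.nanoTime();
                for (int j = 0; j < N; j++)
                    for (int i = 0; i < N; i++)
                        sum += grid[i][j];
                long colMajor = System.nanoTime() - t0;

                System.out.println("row-major:    " + rowMajor / 1_000_000 + " ms");
                System.out.println("column-major: " + colMajor / 1_000_000
                        + " ms (sum=" + sum + ")");
            }
        }

    The same principle drives most data-structure advice in this area: contiguous arrays over pointer-chasing lists, structure-of-arrays when only some fields are hot, and keeping hot loops small enough to stay resident in the instruction cache.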

  • Should I Make These Vectors Classes or Structs in C#

    - by dewald
    I am creating a geometry library in C# and I will need the following immutable types:

    - Vector2f (2 floats - 8 bytes)
    - Vector2d (2 doubles - 16 bytes)
    - Vector3f (3 floats - 12 bytes)
    - Vector3d (3 doubles - 24 bytes)
    - Vector4f (4 floats - 16 bytes)
    - Vector4d (4 doubles - 32 bytes)

    I am trying to determine whether to make them structs or classes. MSDN suggests only using a struct if the size is going to be no greater than 16 bytes, but that reference seems to be from 2005. Is 16 bytes still the maximum suggested size? I am sure that using structs for the float vectors would be more efficient than using a class, but what should I do about the double vectors? Should I make them structs also, to be consistent, or should I make them classes?

  • What is the fastest way to pull a few element values out of XML files in Perl?

    - by Anon Guy
    I have a bunch of XML files that are about 1-2 megabytes in size. Actually, more than a bunch, there are millions. They're all well-formed and many are even validated against their schema (confirmed with libxml2). All were created by the same app, so they're in a consistent format (though this could theoretically change in the future). I want to check the values of one element in each file from within a Perl script. Speed is important (I'd like to take less than a second per file) and as noted I already know the files are well-formed. I am sorely tempted to simply 'open' the files in Perl and scan through until I see the element I am looking for, grab the value (which is near the start of the file), and close the file. On the other hand, I could use an XML parser (which might protect me from future changes to the XML formatting) but I suspect it will be slower than I'd like. Can anyone recommend an appropriate approach and/or parser? Thanks in advance.
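
    The middle ground between raw scanning and a full DOM parse is a streaming pull parser that stops as soon as the element has been read - in Perl, XML::LibXML::Reader or XML::Twig fill this role. As a language-neutral illustration of the early-exit pattern, here is a sketch using Java's StAX API (the element name is a placeholder):

        import java.io.FileInputStream;
        import javax.xml.stream.*;

        public class FirstElementValue {
            // Returns the text of the first <status> element, or null if absent.
            // Because StAX is a pull parser, we stop the moment we have the
            // value, so a file is only read up to the (early) target element.
            static String extract(String path) throws Exception {
                XMLInputFactory factory = XMLInputFactory.newInstance();
                try (FileInputStream in = new FileInputStream(path)) {
                    XMLStreamReader reader = factory.createXMLStreamReader(in);
                    try {
                        while (reader.hasNext()) {
                            if (reader.next() == XMLStreamConstants.START_ELEMENT
                                    && reader.getLocalName().equals("status")) {
                                return reader.getElementText(); // early exit
                            }
                        }
                        return null;
                    } finally {
                        reader.close();
                    }
                }
            }

            public static void main(String[] args) throws Exception {
                System.out.println(extract(args[0]));
            }
        }

    Since the target value is near the start of each file, this keeps the per-file cost close to the hand-rolled scan while still surviving formatting changes a regex would not.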

  • Doxygen, too heavy to maintain?

    - by Phong
    I am currently starting to use Doxygen to document my source code. I have noticed that the syntax is very heavy: every time I modify the source code, I also need to change the comment, and I really have the impression that I spend too much time updating comments for every change I make in the source code. Do you have some tips for documenting source code efficiently? Does an editor (or a plugin for an existing editor) exist that supports Doxygen in the following ways:

    - automatically track unsynchronized code/comments and warn the programmer about them;
    - automatically insert a Doxygen comment template (with the parameter names in it, for example) for every new item?

    PS: I am working on a C/C++ project.
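
    For what it's worth, the Javadoc-style block syntax (which Doxygen accepts in C and C++ sources as well as Java) keeps the per-function boilerplate down to what an editor template can generate. A small hypothetical example, written as Java here only for illustration - the comment syntax is the same in C++ under Doxygen:

        public class Receiver {
            /**
             * Transfers up to maxBytes from the connection into buffer.
             *
             * @param buffer   destination for the received bytes
             * @param maxBytes upper bound on how many bytes to read
             * @return the number of bytes actually read, or -1 at end of stream
             */
            public int readChunk(byte[] buffer, int maxBytes) {
                return -1; // stub body; the comment block is the point here
            }
        }

    On the synchronization worry: Doxygen itself can flag drift - with WARN_IF_DOC_ERROR enabled in the Doxyfile it warns when documented parameter names no longer match the actual signature, which catches the most common kind of stale comment.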

  • Is it possible to reuse subqueries?

    - by Gothmog
    Hello, I'm having some problems trying to perform a query. I have two tables: one with element information, and another one with records related to the elements of the first table. The idea is to get, in the same row, the element information plus the information of several records. The structure could be explained like this:

        table  [ id, name ]
          [1, '1']
          [2, '2']

        table2 [ id, type, value ]
          [1, 1, '2009-12-02']
          [1, 2, '2010-01-03']
          [1, 4, '2010-01-03']
          [2, 1, '2010-01-02']
          [2, 2, '2010-01-02']
          [2, 2, '2010-01-03']
          [2, 3, '2010-01-07']
          [2, 4, '2010-01-07']

    And this is what I would like to achieve:

        result [ id, name, Column1, Column2, Column3, Column4 ]
          [1, '1', '2009-12-02', '2010-01-03', ,             '2010-01-03']
          [2, '2', '2010-01-02', '2010-01-02', '2010-01-07', '2010-01-07']

    The following query gets the proper result, but it seems extremely inefficient to me, having to iterate over table2 for each column. Would it be possible in any way to do the subquery once and reuse it?

        SELECT a.id, a.name,
               (SELECT min(value) FROM table2 t
                WHERE t.id = subquery.id AND t.type = 1 GROUP BY t.type) AS Column1,
               (SELECT min(value) FROM table2 t
                WHERE t.id = subquery.id AND t.type = 2 GROUP BY t.type) AS Column2,
               (SELECT min(value) FROM table2 t
                WHERE t.id = subquery.id AND t.type = 3 GROUP BY t.type) AS Column3,
               (SELECT min(value) FROM table2 t
                WHERE t.id = subquery.id AND t.type = 4 GROUP BY t.type) AS Column4
          FROM (SELECT DISTINCT id FROM table2 t
                WHERE t.type IN (1, 2, 3, 4)
                  AND t.value BETWEEN '2010-01-01' AND '2010-01-07') AS subquery
          LEFT JOIN table a ON a.id = subquery.id

  • How to keep only duplicates efficiently?

    - by Marc Eaddy
    Given an STL vector, I'd like an algorithm that outputs only the duplicates, in sorted order, e.g.,

        INPUT : { 4, 4, 1, 2, 3, 2, 3 }
        OUTPUT: { 2, 3, 4 }

    The algorithm is trivial, but the goal is to make it as efficient as std::unique(). My naive implementation modifies the container in-place:

        void keep_duplicates(vector<int>* pv)
        {
            // Sort (in-place) so we can find duplicates in linear time
            sort(pv->begin(), pv->end());

            vector<int>::iterator it_start = pv->begin();
            while (it_start != pv->end())
            {
                size_t nKeep = 0;

                // Find the next different element
                vector<int>::iterator it_stop = it_start + 1;
                while (it_stop != pv->end() && *it_start == *it_stop)
                {
                    nKeep = 1; // This gets set redundantly
                    ++it_stop;
                }

                // If the element is a duplicate, keep only the first one (nKeep=1).
                // Otherwise, the element is not duplicated, so erase it (nKeep=0).
                it_start = pv->erase(it_start + nKeep, it_stop);
            }
        }

    If you can make this more efficient, elegant, or general, please let me know. For example: a custom sorting algorithm, or copying elements in the second loop to eliminate the erase() call.

  • Is writing a reference atomic on 64-bit VMs?

    - by Steffen Heil
    Hi. The Java memory model mandates that writing an int is atomic: that is, if you write a value to it (consisting of 4 bytes) in one thread and read it in another, you will get all bytes or none, but never 2 new bytes and 2 old bytes or such. This is not guaranteed for long. Here, writing 0x1122334455667788 to a variable holding 0 before could result in another thread reading 0x1122334400000000 or 0x0000000055667788. Now, the specification does not mandate object references to be either int-sized or long-sized. For type-safety reasons I suspect they are guaranteed to be written atomically, but on a 64-bit VM these references could very well be 64-bit values (merely memory addresses). Now here are my questions:

    - Are there any memory model specs covering this (that I haven't found)?
    - Are long writes expected to be atomic on 64-bit VMs?
    - Are VMs forced to map references to 32 bits?

    Regards, Steffen
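
    For reference, the Java Language Specification answers this directly: JLS §17.7 ("Non-atomic Treatment of double and long") states that writes to and reads of references are always atomic, regardless of whether the VM implements them as 32-bit or 64-bit values, and that long and double gain the same guarantee when declared volatile. A small sketch of both rules:

        public class AtomicityDemo {
            static final Object A = new Object();
            static final Object B = new Object();

            // Plain long: a reader in another thread may legally observe a
            // torn value (the two 32-bit halves from different writes).
            static long plainLong;

            // volatile restores single-access atomicity for long/double.
            static volatile long safeLong;

            // References are always read/written atomically, on 32-bit and
            // 64-bit VMs alike: a reader sees A or B, never a torn address.
            // (volatile would still be needed for visibility/ordering.)
            static Object ref = A;

            public static void main(String[] args) {
                Thread writer = new Thread(() -> {
                    for (int i = 0; i < 1_000_000; i++) {
                        plainLong = 0x1122334455667788L; // may tear
                        safeLong  = 0x1122334455667788L; // never tears
                        ref = (ref == A) ? B : A;        // never tears
                    }
                });
                writer.start();
            }
        }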

  • Open Source Utilization Questions: How do you lone wolf programmers best take advantage of open source?

    - by Funkyeah
    For clarity: so you come up with an idea for a new program and want to start hacking, but you also happen to be a one-man army. How do you programming dynamos best find and utilize existing open-source software to give yourself the highest jumping-off point possible when diving into a new project? And once you do jump in, where do you start? Any imaginary scenarios would be welcome; e.g., a rough example might be utilizing an open-source database plus an open-source IM client as a starting point to make a new client where you could tag and store conversations and query those tags at a later time.

  • What is considered a long execution time?

    - by stjowa
    I am trying to figure out just how "efficient" my server-side code is. Using start and end microtime(true) values, I am able to calculate the time it took my script to run. I am getting times from 0.3 to 0.5 seconds. These scripts do a number of database queries to return different values to the user. What is considered an efficient execution time for PHP scripts that will be run online for a website? Note: I know it depends on exactly what is being done, but just consider this a standard script that reads from a database and returns values to the user. Also, I look at Google and see them search the internet in 0.15 seconds, and I feel like my script is crap. Thanks.

  • More efficient way of updating UI from Service than intents?

    - by Donal Rafferty
    I currently have a Service in Android that is a sample VOIP client, so it listens out for SIP messages; if it receives one, it starts up an Activity screen with UI components, and the subsequent SIP messages determine what the Activity displays on the screen. For example, if it's an incoming call it will display Answer or Reject, and for an outgoing call it will show a dialling screen. At the minute I use Intents to let the Activity know what state it should display. An example is as follows:

        Intent i = new Intent();
        i.setAction(SIPEngine.SIP_TRYING_INTENT);
        i.putExtra("com.net.INCOMING", true);
        sendBroadcast(i);

        Intent x = new Intent();
        x.setAction(CallManager.SIP_INCOMING_CALL_INTENT);
        sendBroadcast(x);
        Log.d("INTENT SENT", "INTENT SENT INCOMING CALL AFTER PROCESSINVITE");

    So the Activity has a BroadcastReceiver registered for these Intents and switches its state according to the last Intent it received. Sample code as follows:

        SipCallListener = new BroadcastReceiver() {
            @Override
            public void onReceive(Context context, Intent intent) {
                String action = intent.getAction();
                if (SIPEngine.SIP_RINGING_INTENT.equals(action)) {
                    Log.d("cda ", "Got RINGING action SIPENGINE");
                    ringingSetup();
                }
                if (CallManager.SIP_INCOMING_CALL_INTENT.equals(action)) {
                    Log.d("cda ", "Got PHONE RINGING action");
                    incomingCallSetup();
                }
            }
        };

        IntentFilter filter = new IntentFilter(CallManager.SIP_INCOMING_CALL_INTENT);
        filter.addAction(CallManager.SIP_RINGING_CALL_INTENT);
        registerReceiver(SipCallListener, filter);

    This works, but it seems inefficient: the Intents get broadcast system-wide, and firing Intents for every state could become inefficient, and more complex, the more states I add. So I was wondering if there is a more efficient and cleaner way to do this. Is there a way to keep Intents broadcasting only inside an application? Would callbacks be a better idea? If so, why, and how should they be implemented?
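
    One concrete way to keep the broadcasts inside your own application is LocalBroadcastManager from the Android support library (android.support.v4.content.LocalBroadcastManager), which delivers Intents only within your process: nothing goes system-wide, no other app can snoop on or spoof the broadcasts, and delivery is cheaper. A hedged sketch of how the existing pieces would change - these fragments slot into the Service and Activity already shown, so they are not standalone:

        // In the Service: publish state changes locally instead of system-wide.
        Intent i = new Intent(SIPEngine.SIP_RINGING_INTENT);
        i.putExtra("com.net.INCOMING", true);
        LocalBroadcastManager.getInstance(this).sendBroadcast(i);

        // In the Activity: register and unregister with the same manager,
        // tied to the visible lifetime of the screen.
        @Override
        protected void onResume() {
            super.onResume();
            IntentFilter filter = new IntentFilter(SIPEngine.SIP_RINGING_INTENT);
            filter.addAction(CallManager.SIP_INCOMING_CALL_INTENT);
            LocalBroadcastManager.getInstance(this)
                    .registerReceiver(SipCallListener, filter);
        }

        @Override
        protected void onPause() {
            LocalBroadcastManager.getInstance(this)
                    .unregisterReceiver(SipCallListener);
            super.onPause();
        }

    The other common pattern is a bound Service exposing a plain listener interface, which skips Intent marshalling entirely; broadcasts remain the simpler fit when the Activity may not exist yet, as in the incoming-call case here.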

  • Why is Perl commonly used for writing CGI scripts?

    - by Michael Vasquez
    I plan to add a better search feature to my site, so I thought that I would write it in C and use the CGI as a means to access it. But it seems that Perl is the most popular language when it comes to CGI-based stuff. Why is that? Wouldn't it be faster programmed in C or machine code? What advantages, if any, are there to writing it in a scripting language? Thanks.

  • Time Saving Minutiae

    - by Dave Jarvis
    What little tricks do you use to save time when developing? For example: in JDeveloper, change the default editor for JSP files from Design view to Source view (Tools » Preferences » File Types » Default Editors » JSP Tag File). By default, JDeveloper opens JSP files in Design view, which can take 10 to 15 seconds before you can begin editing a file; opening files in Source view takes only 2 seconds.
