Search Results

Search found 13227 results on 530 pages for 'memory efficiency'.

Page 64 of 530

  • Simplicity-efficiency tradeoff

    - by sarepta
    The CTO called to inform me of a new project and in the process told me that my code is weird. He explained that my colleagues find it difficult to understand due to the overly complex, often new concepts and technologies used, which they are not familiar with. He asked me to maintain a simple code base and to think of the others who will inherit my changes. I've put considerable time into mastering LINQ and thread-safe coding. However, others don't seem to care, nor are they impressed by anything other than their paycheck. Do I have to keep it simple (stupid), just because others are not familiar with best practices and efficient coding? Or should I continue to do what I find best and write code my way?

    Read the article

  • What are some best practices for minifying code?

    - by CrystalBlue
    While maintaining the sites our development team has created, we have come across include files and plugins that have proven to be very useful to more than one part of our applications. Most of these modules have come with two different files, a normal source file and a min file. Seeing that the performance and speed of a page can be increased by minifying the size of the file, we're looking into doing that to our pages as well. The problem we run into is that a lot of our normal pages (written in classic ASP) are a mix of HTML, ASP, JavaScript, CSS, and include files. We have some pages that have their JS both in include files and in the page, depending on whether the function is only really used in that page or is used in many other pages. For example, we have a common.js and an ajax.js file; both are used in a lot of pages, but not all of them. We also have some functions in a page that don't really make sense to put into one master file. What I have seen a few other people do online is use one master JS file, place all of their JavaScript into it, minify it, gzip it, and only use that on their production server. Again, this would be great, but I don't know if that fully works for our purposes. What I'm looking for is some direction on this. I'm in favor of taking all of our JS, putting it in one include file, and just having it included in every page that is hit. However, not every page we have needs every bit of JS. So would it be worth compiling and minifying the files into one master file and including it everywhere, or would it be better to minify all the other files and still include them on a need-to-use basis?

    Read the article

  • Better way of calculating Project Euler's 2nd problem (Fibonacci sequence)

    - by firephil
        object Problem_2 extends App {
          def fibLoop(): Long = {
            var x = 1L
            var y = 2L
            var sum = 0L
            var swap = 0L
            while (x < 4000000) {
              if (x % 2 == 0) sum += x
              swap = x
              x = y
              y = swap + x
            }
            sum
          }

          def fib: Int = {
            lazy val fs: Stream[Int] = 0 #:: 1 #:: fs.zip(fs.tail).map(p => p._1 + p._2)
            fs.view.takeWhile(_ <= 4000000).filter(_ % 2 == 0).sum
          }

          val t1 = System.nanoTime()
          val res = fibLoop
          val t2 = (System.nanoTime() - t1) / 1000
          println(s"The result is: $res time taken $t2 ms ")
        }

    Is there a better functional way of calculating the Fibonacci sequence and taking the sum of the even values below 4 million (projecteuler.net, problem 2)? The imperative method is 1000x faster?
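
    The same functional idea can be sketched in Java 9+ streams as well (this is only an illustration of the pattern, not a claim about relative speed): generate consecutive Fibonacci pairs lazily with Stream.iterate, stop at four million, keep the even terms, and sum.

        import java.util.stream.Stream;

        public class EvenFibSum {
            public static void main(String[] args) {
                // Each element is a pair {current, next}; iterate() produces the sequence lazily.
                long sum = Stream.iterate(new long[]{1, 2}, p -> new long[]{p[1], p[0] + p[1]})
                        .map(p -> p[0])                  // keep the current Fibonacci number
                        .takeWhile(n -> n <= 4_000_000)  // stop once we pass four million
                        .filter(n -> n % 2 == 0)         // even terms only
                        .mapToLong(Long::longValue)
                        .sum();
                System.out.println(sum); // 4613732
            }
        }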

    Read the article

  • Passing class names or objects?

    - by nischayn22
    I have a switch statement:

        switch ($id) {
            case 'abc': return 'Animal';
            case 'xyz': return 'Human';
            // many more
        }

    I am returning class names, and I use them to call some of their static functions using call_user_func(). Instead, I could also create an object of that class, return it, and then call the static method from that object as $object::method($param):

        switch ($id) {
            case 'abc': return new Animal;
            case 'xyz': return new Human;
            // many more
        }

    Which way is more efficient? To make this question broader: I have classes with mostly static methods right now; putting the methods into classes is mainly a grouping idea here (for example, the DB table structure of Animal is given by class Animal, and likewise for the Human class). I need to access many functions from these classes, so the switch needs to give me access to the class.

    Read the article

  • Most efficient way to store this collection of moduli and remainders?

    - by Bryan
    I have a huge collection of different moduli and, associated with each modulus, a fairly large list of remainders. I want to store these values so that I can efficiently determine whether an integer is equivalent to any one of the remainders with respect to any of the moduli (it doesn't matter which; I just want a true/false return). I thought about storing these values as a linked list of balanced binary trees, but I was wondering if there is a better way? EDIT: Perhaps a little more detail would be helpful. As for the size of this structure, it will hold tens of thousands of (prime-1) moduli, and associated with each modulus will be a variable number of remainders. Most moduli will have only one or two remainders associated with them, but a very rare few will have a couple hundred. This is part of a larger program which handles numbers with a couple thousand (decimal) digits. The program will benefit from this table being as large as possible and from it being searchable quickly. Here's a small part of the dataset, where the moduli are in parentheses and the remainders are comma separated:

        (46) k = 20
        (58) k = 15, 44
        (70) k = 57
        (102) k = 36, 87
        (106) k = 66
        (156) k = 20, 59, 98, 137
        (190) k = 11, 30, 68, 87, 125, 144, 182
        (430) k = 234
        (520) k = 152, 282
        (576) k = 2, 11, 20, 29, 38, 47, 56, 65, 74, ...(add 9 each time), 569

    I had said that the moduli were prime, but I was wrong: they are each one below a prime.
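
    One structure that fits this access pattern is a hash map keyed by modulus, with the remainders for each modulus kept in a sorted array: a lookup is one modulo reduction plus a binary search per stored modulus. The sketch below is only an illustration of that layout in Java (the class name ResidueTable is made up for the example), not the asker's actual solution:

        import java.math.BigInteger;
        import java.util.Arrays;
        import java.util.HashMap;
        import java.util.Map;

        class ResidueTable {
            // modulus -> sorted array of remainders allowed for that modulus
            private final Map<Long, long[]> table = new HashMap<>();

            void add(long modulus, long... remainders) {
                long[] sorted = remainders.clone();
                Arrays.sort(sorted);
                table.put(modulus, sorted);
            }

            // true if n is congruent to any stored remainder for any stored modulus
            boolean matches(BigInteger n) {
                for (Map.Entry<Long, long[]> e : table.entrySet()) {
                    long r = n.mod(BigInteger.valueOf(e.getKey())).longValue();
                    if (Arrays.binarySearch(e.getValue(), r) >= 0) return true;
                }
                return false;
            }
        }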

    Read the article

  • Is there any hard data on the (dis-)advantages of working from home?

    - by peterchen
    Is there any hard data (studies, comparisons, not-just-gut-feel analysis) on the advantages and disadvantages of working from home? My devs asked about e.g. working from home one day per week. The boss doesn't like it for various reasons, some of which I agree with, but I think they don't necessarily apply in this case. We have real offices (2-3 people each), yet distractions are still common. IMO it would be beneficial for focus, and with 1 day / week there wouldn't be much loss of interaction and communication. In addition it would be a great perk, and it would save the commute. Related: Pros and Cons of working Remotely/from Home (interesting points, but no hard facts). [edit] Thanks for all the feedback! To clarify: it's not my decision to make, I agree that there are pros and cons depending on circumstances, and we are pushing for "just try it". I've asked this specific question because (a) facts are a good addition to opinions when arguing with an engineer boss, and (b) we, as developers, should build upon facts like every respectable trade.

    Read the article

  • What's your method for not forgetting the closing brackets and parentheses?

    - by JMC Creative
    Disclaimer: for simplicity's sake, "brackets" will refer to brackets, braces, quotes, and parentheses in the course of this question. Carry on. When writing code, I usually type the beginning and end elements first, and then go back and type the inner stuff. This gets to be a lot of backspacing, especially when doing something with many nested elements, like:

        jQuery(function($){ $('#element[input="file"]').hover(function(){ $(this).fadeOut(); }); });

    Is there a more efficient way of remembering how many brackets you've got open? Or a second example with quotes:

        <?php echo '<input value="'.$_POST['name'].'" />'; ?>

    Read the article

  • At what point is asynchronous disk I/O more efficient than synchronous?

    - by blesh
    Assuming there is some bit of code that reads files for multiple consumers, and the files are of any arbitrary size: at what size does it become more efficient to read the file asynchronously? Or, to put it another way, how small must a file be for it to be faster just to read it synchronously? I've noticed (and perhaps I'm incorrect) that when reading very small files, it takes longer to read them asynchronously than synchronously (in particular with .NET). I'm assuming this has to do with setup time for things like I/O completion ports, threads, etc. Is there any rule of thumb to help out here? Or is it dependent on the system and the environment?
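
    There is no universal cutoff, so the usual advice is to measure on the target machine. As a rough illustration of such a measurement (in Java rather than .NET, with a hypothetical file name), the sketch below times a plain synchronous read against an AsynchronousFileChannel read of the same file; for tiny files the fixed cost of setting up the asynchronous operation tends to dominate:

        import java.nio.ByteBuffer;
        import java.nio.channels.AsynchronousFileChannel;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.nio.file.StandardOpenOption;
        import java.util.concurrent.Future;

        public class SyncVsAsyncRead {
            public static void main(String[] args) throws Exception {
                Path path = Paths.get("sample.dat"); // hypothetical test file

                long t0 = System.nanoTime();
                byte[] all = Files.readAllBytes(path);          // synchronous read
                long syncNanos = System.nanoTime() - t0;

                t0 = System.nanoTime();
                try (AsynchronousFileChannel ch =
                         AsynchronousFileChannel.open(path, StandardOpenOption.READ)) {
                    ByteBuffer buf = ByteBuffer.allocate((int) Files.size(path));
                    Future<Integer> pending = ch.read(buf, 0);  // returns immediately
                    pending.get();                              // block until completion
                }
                long asyncNanos = System.nanoTime() - t0;

                System.out.printf("sync: %d us, async: %d us (%d bytes)%n",
                        syncNanos / 1_000, asyncNanos / 1_000, all.length);
            }
        }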

    Read the article

  • What is the best way to INSERT a large dataset into a MySQL database (or any database in general)?

    - by Joe
    As part of a PHP project, I have to insert a row into a MySQL database. I'm obviously used to doing this, but this required inserting into 90 columns in one query. The resulting query looks horrible and monolithic (especially with my PHP variables inserted as the values):

        INSERT INTO mytable (column1, column2, ..., column90)
        VALUES ('value1', 'value2', ..., 'value90')

    and I'm concerned that I'm not going about this in the right way. It also took me a long (boring) time just to type everything in, and writing the test code will be equally tedious, I fear. How do professionals go about quickly writing and testing these queries? Is there a way I can speed up the process?
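
    The question is about PHP, but the usual professional approach is language-independent: keep the column names in a single list, generate the SQL and its placeholders from that list, and bind the values through a prepared statement instead of typing 90 literals by hand. A rough Java/JDBC sketch of that idea (table, column, and connection details are placeholders):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.util.LinkedHashMap;
        import java.util.Map;
        import java.util.stream.Collectors;

        public class WideInsert {
            public static void main(String[] args) throws Exception {
                // Column -> value pairs; in real code this map is built from your data source.
                Map<String, Object> row = new LinkedHashMap<>();
                row.put("column1", "value1");
                row.put("column2", 42);
                // ... up to column90

                String columns = String.join(", ", row.keySet());
                String placeholders = row.keySet().stream()
                        .map(c -> "?").collect(Collectors.joining(", "));
                String sql = "INSERT INTO mytable (" + columns + ") VALUES (" + placeholders + ")";

                try (Connection conn = DriverManager.getConnection(
                             "jdbc:mysql://localhost/test", "user", "pass");
                     PreparedStatement ps = conn.prepareStatement(sql)) {
                    int i = 1;
                    for (Object value : row.values()) {
                        ps.setObject(i++, value); // the driver handles quoting/escaping
                    }
                    ps.executeUpdate();
                }
            }
        }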

    Read the article

  • Using replacements to generate possible outcomes, then searching through a HUGE amount of data

    - by Samuel Cambridge
    I have a database table holding 40 million records (table A). Each record has a string a user can search for. I also have a table with a list of character replacements (table B), i.e. i = Y, I = 1, etc. I need to be able to take the string a user is searching for, iterate through each letter, and create an array of every possible outcome (the user's string, then each outcome with alternative letters used). I need to check for alternatives on both lowercase and uppercase letters in the word. A search string can be no longer than 10 characters. I'm using PHP and a MySQL database. Does anyone have any thoughts / articles / guidance on doing this in an efficient way?
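
    One way to build the candidate list is a simple recursive expansion: at each position, branch on the original character plus any configured replacements for it, which bounds the output for a 10-character string. The Java sketch below uses a hypothetical replacement table standing in for table B:

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class VariantExpander {
            // Hypothetical replacement table; in practice this is loaded from table B.
            private static final Map<Character, char[]> REPLACEMENTS = new HashMap<>();
            static {
                REPLACEMENTS.put('i', new char[]{'Y'});
                REPLACEMENTS.put('I', new char[]{'1'});
            }

            static List<String> expand(String term) {
                List<String> results = new ArrayList<>();
                expand(term.toCharArray(), 0, new StringBuilder(), results);
                return results;
            }

            private static void expand(char[] chars, int pos, StringBuilder current, List<String> out) {
                if (pos == chars.length) {
                    out.add(current.toString());
                    return;
                }
                // Branch 1: keep the original character.
                current.append(chars[pos]);
                expand(chars, pos + 1, current, out);
                current.setLength(current.length() - 1);
                // Branch 2..n: each configured replacement for this character.
                for (char alt : REPLACEMENTS.getOrDefault(chars[pos], new char[0])) {
                    current.append(alt);
                    expand(chars, pos + 1, current, out);
                    current.setLength(current.length() - 1);
                }
            }

            public static void main(String[] args) {
                System.out.println(expand("hi")); // [hi, hY]
            }
        }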

    Read the article

  • What is an efficient algorithm for randomly assigning a pool of objects to a parent using specific rules

    - by maple_shaft
    I need some expert answers to help me determine the most efficient algorithm in this scenario. Consider the following data structures:

        type B {
            A parent;
        }
        type A {
            set<B> children;
            integer minimumChildrenAllowed;
            integer maximumChildrenAllowed;
        }

    I have a situation where I need to fetch all the orphan children (there could be hundreds of thousands of these) and assign them RANDOMLY to A-type parents based on the following rules:

    1. At the end of the job, there should be no orphans left.
    2. At the end of the job, no object A should have fewer children than its predesignated minimum.
    3. At the end of the job, no object A should have more children than its predesignated maximum.
    4. If we run out of A objects, then we should create a new A with default values for minimum and maximum and assign remaining orphans to these objects.
    5. The distribution of children should be as even as possible.
    6. There may already be some children assigned to A before the job starts.

    I was toying with how to do this, but I am afraid I would just end up looping across the parents sorted from smallest to largest and then grabbing an orphan for each parent. I was wondering if there is a more efficient way to handle this?
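
    One workable strategy is a two-phase greedy pass over a shuffled orphan list: first top every parent up to its minimum, then deal the remaining orphans out round-robin while respecting each maximum, creating default parents only when everyone is full. This is a sketch of that idea in Java with simplified types, not a tuned implementation:

        import java.util.ArrayDeque;
        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.Deque;
        import java.util.List;

        class Parent {
            final List<Child> children = new ArrayList<>();
            final int min, max;
            Parent(int min, int max) { this.min = min; this.max = max; }
            boolean belowMin() { return children.size() < min; }
            boolean belowMax() { return children.size() < max; }
        }

        class Child { Parent parent; }

        public class OrphanAssigner {
            static final int DEFAULT_MIN = 0, DEFAULT_MAX = 10; // assumed defaults

            static void assign(List<Parent> parents, List<Child> orphans) {
                Collections.shuffle(orphans);               // randomness requirement
                Deque<Child> queue = new ArrayDeque<>(orphans);

                // Phase 1: satisfy every parent's minimum first.
                for (Parent p : parents) {
                    while (p.belowMin() && !queue.isEmpty()) attach(p, queue.pop());
                }
                // Phase 2: spread the rest as evenly as possible, respecting maximums.
                while (!queue.isEmpty()) {
                    boolean placed = false;
                    for (Parent p : parents) {
                        if (queue.isEmpty()) break;
                        if (p.belowMax()) { attach(p, queue.pop()); placed = true; }
                    }
                    if (!placed) {                          // everyone is full: add a new default parent
                        parents.add(new Parent(DEFAULT_MIN, DEFAULT_MAX));
                    }
                }
            }

            static void attach(Parent p, Child c) { c.parent = p; p.children.add(c); }
        }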

    Read the article

  • Do teams get more productive by adding more developers? [duplicate]

    - by jgauffin
    This question already has an answer here: "Why does adding more resource to a late project make it later?" (12 answers). Suppose you've got a project that is running late. Is there any proof or argument that teams become much more productive by adding more people? I am looking for answers that can be supported by facts and references if possible. What I'm thinking about is that existing devs have to teach the new ones (thus losing overall development time), and then the new developers have to study the code (and tasks) before they can become fully productive.

    Read the article

  • URL Encryption vs. Encoding

    - by hozza
    At the moment non/semi-sensitive information is sent from one page to another via GET on our web application, such as user ID or page number requested. Sometimes slightly more sensitive information is passed, such as account type, user privileges, etc. We currently use base64_encode() and base64_decode() just to de-humanise the information so the end user is not concerned. Is it good practice or commonplace for a URL GET parameter to be encrypted rather than simply base64-encoded in PHP? Perhaps using something like this:

        $encrypted = base64_encode(mcrypt_encrypt(MCRYPT_RIJNDAEL_256, md5($key), $string, MCRYPT_MODE_CBC, md5(md5($key))));
        $decrypted = rtrim(mcrypt_decrypt(MCRYPT_RIJNDAEL_256, md5($key), base64_decode($encrypted), MCRYPT_MODE_CBC, md5(md5($key))), "\0");

    Or is this too much, or too power hungry, for something as common as a URL GET?

    Read the article

  • Is it possible to store pointers in shared memory without using offsets?

    - by Joseph Garvin
    When using shared memory, each process may mmap the shared region into a different area of its address space. This means that when storing pointers within the shared region, you need to store them as offsets from the start of the shared region. Unfortunately, this complicates use of atomic instructions (e.g. if you're trying to write a lock-free algorithm). For example, say you have a bunch of reference-counted nodes in shared memory, created by a single writer. The writer periodically, atomically updates a pointer 'p' to point to a valid node with a positive reference count. Readers want to atomically increment the reference count through 'p', because 'p' points to the beginning of a node (a struct) whose first element is a reference count. Since 'p' always points to a valid node, incrementing the ref count is safe, and makes it safe to dereference 'p' and access other members. However, this all only works when everything is in the same address space. If the nodes and the 'p' pointer are stored in shared memory, then clients suffer a race condition:

    1. x = read p
    2. y = x + offset
    3. increment refcount at y

    During step 2, p may change and x may no longer point to a valid node. The only workaround I can think of is somehow forcing all processes to agree on where to map the shared memory, so that real pointers rather than offsets can be stored in the mmap'd region. Is there any way to do that? I see MAP_FIXED in the mmap documentation, but I don't know how I could pick an address that would be safe.

    Read the article

  • How do I keep from running out of memory on graphics for an Android app?

    - by user279112
    I've been working on an Android app in Eclipse, and so far my program hasn't really grown past midget size. However, I've already run into an issue with an Out of Memory error. You see, I've been using graphics comprised solely of bitmaps and PNGs in this program, and recently, when I tried to add a little bit more functionality (mainly including a few more bitmaps and causing an extra sprite to be created), it started crashing in the graphics thread's constructor - the sprite's constructor. When I tracked the problem down, it turned out to be an Out of Memory error that is seemingly caused by adding too many picture files to the program and creating Drawables out of them. This would be a problem, as I really don't have that many picture resources in the program... maybe 20 or so. I haven't even started to include sound yet. These images aren't all that fancy. My questions are these: 1) Are programs for the Android phone really that limited in how much memory they can employ, or is it probably something other than the 20-30 resource pictures causing that error? 2) If the memory for Android apps is so tight that it can't even handle 20-30 picture resources being loaded into Drawables that exist at the same time, then how in the world are you supposed to make decent graphics and sound for that thing? Thanks.
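
    A very common cause of this on Android is decoding full-size bitmaps when the views only need a fraction of the pixels; the per-app heap is small, and every decoded bitmap is held uncompressed regardless of how small the PNG file is. A typical mitigation (sketched below, with the helper class and target sizes as assumptions) is to read the image bounds first and then decode with an inSampleSize so the in-memory bitmap is scaled down:

        import android.content.res.Resources;
        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;

        public final class BitmapLoader {
            // Decode a drawable resource scaled down to roughly the requested size.
            public static Bitmap decodeSampled(Resources res, int resId, int reqWidth, int reqHeight) {
                BitmapFactory.Options options = new BitmapFactory.Options();
                options.inJustDecodeBounds = true;           // read dimensions only, no pixel allocation
                BitmapFactory.decodeResource(res, resId, options);

                int sampleSize = 1;
                while (options.outWidth / (sampleSize * 2) >= reqWidth
                        && options.outHeight / (sampleSize * 2) >= reqHeight) {
                    sampleSize *= 2;                         // the decoder honours powers of two
                }

                options.inJustDecodeBounds = false;
                options.inSampleSize = sampleSize;
                return BitmapFactory.decodeResource(res, resId, options);
            }
        }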

    Read the article

  • GetIpAddrTable() leaks memory. How to resolve that?

    - by Stabledog
    On my Windows 7 box, this simple program causes the memory use of the application to creep up continuously, with no upper bound. I've stripped out everything non-essential, and it seems clear that the culprit is the Microsoft Iphlpapi function "GetIpAddrTable()". On each call, it leaks some memory. In a loop (e.g. checking for changes to the network interface list), it is unsustainable. There seems to be no async notification API which could do this job, so now I'm faced with possibly having to isolate this logic into a separate process and recycle the process periodically -- an ugly solution. Any ideas?

        // IphlpLeak.cpp - demonstrates that GetIpAddrTable leaks memory internally: run this and watch
        // the memory use of the app climb up continuously with no upper bound.
        #include <stdio.h>
        #include <windows.h>
        #include <assert.h>
        #include <Iphlpapi.h>
        #pragma comment(lib,"Iphlpapi.lib")

        void testLeak() {
            static unsigned char buf[16384];
            DWORD dwSize(sizeof(buf));
            if (GetIpAddrTable((PMIB_IPADDRTABLE)buf, &dwSize, false) == ERROR_INSUFFICIENT_BUFFER) {
                assert(0); // we never hit this branch.
                return;
            }
        }

        int main(int argc, char* argv[]) {
            for (int i = 0; true; i++) {
                testLeak();
                printf("i=%d\n", i);
                Sleep(1000);
            }
            return 0;
        }

    Read the article

  • How to detect a .NET WPF memory leak or a long GC run?

    - by Néstor Sánchez A.
    I have the following very strange situation and problem: a .NET 4.0 application for diagram editing (WPF). It runs fine on my PC: 8 GB RAM, 3.0 GHz, i7 quad-core. While creating objects (mostly diagram nodes and connectors, plus all the undo/redo information), Task Manager shows, as expected, some memory usage "jumps" (up and down). These mem-usage "jumps" also continue AFTER user interaction has ended. Maybe this is the GC cleaning/reorganizing memory? To see what is going on, I've used the ANTS memory profiler, but somehow it prevents those "jumps" from happening after user interaction. PROBLEM: It freezes/hangs after seconds or minutes of usage on some slow/weak laptops/netbooks of my beta testers (under 2 GHz of speed and under 2 GB of RAM). I was thinking of a memory leak, but... EDIT: Also, there is the case that the memory usage grows and grows until collapse (only on slow machines). On a Windows XP Mode machine (VM in Win 7) with only 512 MB of RAM assigned, it works fine without mem-usage "jumps" after user interaction (no GC cleaning?!). So, I really have a big problem because I cannot reproduce the error, only see this strange behaviour (mem jumps), and the tool that is supposed to show me what is happening hides the problem (like the "observer's paradox"). Any ideas on what's happening and how to solve it?

    Read the article

  • Huge burst of memory in a C# service, what could be the cause?

    - by Daniel
    I'm working on a C# service application and I have this problem where, out of nowhere and for no obvious reason, the memory for the process will climb from 150 MB to almost 2 GB in about 5 seconds and then back to 150 MB. But nothing in our system should be using anywhere near that amount of memory (so it's probably a bug somewhere). It might be a tight while-true loop somewhere, but the CPU usage at the time was very low, so I thought I'd look for other ideas. Now the weirder thing is that when I compile the service for 64-bit, the same massive burst occurs except it exceeds 10 GB of RAM (paging most of it), and it causes lots of problems for the computer and everything running on it. After a while it shuts down, but it looks like Windows is still willing to give it more memory. Would you have any ideas or tools that I can use to find this? Yes, it has lots of logging, but nothing in the logs stands out as to why this is happening. I can run the service in console app mode, so my next test was going to be running it in the Visual Studio debugger and seeing if I can find anything. It only happens occasionally, usually about 10-20 minutes after startup. In 32-bit mode it cleans up and continues on like normal. In 64-bit mode it crashes after a while and uses a stupid amount of memory. But I'm really stumped as to why this is happening!!!

    Read the article

  • Users take over a minute to log onto a Windows Server 2008 machine. lsm.exe running at 100 MB+ of memory.

    - by seanyboy
    We have a 64-bit Windows Server 2008 machine running Remote Desktop. The application lsm.exe (the Local Session Manager) appears to be leaking memory. Although the memory usage is quite low when the server is rebooted, it continues to climb until people can no longer log in. The server has no audio card and does not have any AV software installed. The server is fully service packed (Service Pack 2). What could be causing this, and how would I fix it?

    Read the article

  • SQL SERVER – Data Pages in Buffer Pool – Data Stored in Memory Cache

    - by pinaldave
    Have you ever wondered what types of data are in your cache? During SQL Server trainings, I am usually asked if there is any way one can know how much of a table's data is stored in the memory cache. The more detailed question I usually get is: if there are multiple indexes on a table (and they are used in a query), is the data of that single table stored in the memory cache multiple times, or only once? Here is a query you can run to figure out what kind of data is stored in the cache.

        USE AdventureWorks
        GO
        SELECT COUNT(*) AS cached_pages_count,
               name AS BaseTableName, IndexName, IndexTypeDesc
        FROM sys.dm_os_buffer_descriptors AS bd
        INNER JOIN (
            SELECT s_obj.name, s_obj.index_id, s_obj.allocation_unit_id,
                   s_obj.OBJECT_ID, i.name IndexName, i.type_desc IndexTypeDesc
            FROM (
                SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID
                FROM sys.allocation_units AS au
                INNER JOIN sys.partitions AS p
                    ON au.container_id = p.hobt_id AND (au.type = 1 OR au.type = 3)
                UNION ALL
                SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID
                FROM sys.allocation_units AS au
                INNER JOIN sys.partitions AS p
                    ON au.container_id = p.partition_id AND au.type = 2
            ) AS s_obj
            LEFT JOIN sys.indexes i
                ON i.index_id = s_obj.index_id AND i.OBJECT_ID = s_obj.OBJECT_ID
        ) AS obj
            ON bd.allocation_unit_id = obj.allocation_unit_id
        WHERE database_id = DB_ID()
        GROUP BY name, index_id, IndexName, IndexTypeDesc
        ORDER BY cached_pages_count DESC;
        GO

    Now let us run the query above and observe its output. We can see that there are four columns. cached_pages_count lists the pages cached in memory. BaseTableName lists the original base table from which data pages are cached. IndexName lists the name of the index from which pages are cached. IndexTypeDesc lists the type of index. Now, let us do one more experiment. Please note that you should not run this test on a production server, as it can severely reduce the performance of the database.

        DBCC DROPCLEANBUFFERS

    This will drop all the clean buffers and we will be able to start again from there. Now run the following script and check its execution plan.

        USE AdventureWorks
        GO
        SELECT UnitPrice, ModifiedDate
        FROM Sales.SalesOrderDetail
        WHERE SalesOrderDetailID BETWEEN 1 AND 100
        GO

    The execution plan contains the usage of two different indexes. Now, let us run the script that checks the pages cached in SQL Server. It is clear from the resultset that when more than one index is used, data pages related to both (or all) of the indexes are stored in the memory cache separately. Let me know what you think of this article. I had great pleasure writing it because I was able to write on the subject I like most. In the next article, we will see exactly what data is cached and what is not, using a few undocumented commands. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Game Object Factory: Fixing Memory Leaks

    - by Bunkai.Satori
    Dear all, this is going to be tough: I have created a game object factory that generates objects of my wish. However, I get memory leaks which I cannot fix. The memory leaks are generated by return new Object(); in the bottom part of the code sample:

        static BaseObject * CreateObjectFunc()
        {
            return new Object();
        }

    How and where should the pointers be deleted? I wrote bool ReleaseClassTypes(), and although the factory works well, ReleaseClassTypes() does not fix the memory leaks.

        bool ReleaseClassTypes()
        {
            unsigned int nRecordCount = vFactories.size();
            for (unsigned int nLoop = 0; nLoop < nRecordCount; nLoop++)
            {
                // if the object exists in the container and is valid, then render it
                if (vFactories[nLoop] != NULL)
                    delete vFactories[nLoop]();
            }
            return true;
        }

    Before taking a look at the full code below, let me explain: my CGameObjectFactory stores pointers to functions that create a particular object type. The pointers are stored within the vFactories vector container. I chose this approach because I parse an object map file. I have object type IDs (integer values) which I need to translate into real objects. Because I have over 100 different object data types, I wanted to avoid continuously traversing a very long switch() statement. Therefore, to create an object, I call vFactories[nEnumObjectTypeID] via CGameObjectFactory::create(), which calls the stored function that generates the desired object. The position of the appropriate function in vFactories is identical to nObjectTypeID, so I can use indexing to access the function. So the question remains: how do I proceed with garbage collection and avoid the reported memory leaks?

        #ifndef GAMEOBJECTFACTORY_H_UNIPIXELS
        #define GAMEOBJECTFACTORY_H_UNIPIXELS

        //#include "MemoryManager.h"
        #include <vector>

        template <typename BaseObject>
        class CGameObjectFactory
        {
        public:
            // cleanup and release registered object data types
            bool ReleaseClassTypes()
            {
                unsigned int nRecordCount = vFactories.size();
                for (unsigned int nLoop = 0; nLoop < nRecordCount; nLoop++)
                {
                    // if the object exists in the container and is valid, then render it
                    if (vFactories[nLoop] != NULL)
                        delete vFactories[nLoop]();
                }
                return true;
            }

            // register new object data type
            template <typename Object>
            bool RegisterClassType(unsigned int nObjectIDParam)
            {
                if (vFactories.size() < nObjectIDParam)
                    vFactories.resize(nObjectIDParam);
                vFactories[nObjectIDParam] = &CreateObjectFunc<Object>;
                return true;
            }

            // create new object by calling the pointer to the appropriate type function
            BaseObject* create(unsigned int nObjectIDParam) const
            {
                return vFactories[nObjectIDParam]();
            }

            // resize the vector array containing pointers to function calls
            bool resize(unsigned int nSizeParam)
            {
                vFactories.resize(nSizeParam);
                return true;
            }

        private:
            //DECLARE_HEAP;

            template <typename Object>
            static BaseObject * CreateObjectFunc()
            {
                return new Object();
            }

            typedef BaseObject* (*factory)();
            std::vector<factory> vFactories;
        };

        //DEFINE_HEAP_T(CGameObjectFactory, "Game Object Factory");

        #endif // GAMEOBJECTFACTORY_H_UNIPIXELS

    Read the article

  • Increasing Regex Efficiency

    - by cam
    I have about 100k Outlook mail items that have about 500-600 chars per body. I have a list of 580 keywords that must be searched for in each body, with the matching words then appended at the bottom. I believe I've increased the efficiency of the majority of the function, but it still takes a lot of time. Even for 100 emails it takes about 4 seconds. I run two functions for each keyword list (290 keywords each list).

        public List<string> Keyword_Search(HtmlNode nSearch)
        {
            var wordFound = new List<string>();
            foreach (string currWord in _keywordList)
            {
                bool isMatch = Regex.IsMatch(nSearch.InnerHtml,
                    "\\b" + @currWord + "\\b", RegexOptions.IgnoreCase);
                if (isMatch)
                {
                    wordFound.Add(currWord);
                }
            }
            return wordFound;
        }

    Is there any way I can increase the efficiency of this function? The other thing that might be slowing it down is that I use HTML Agility Pack to navigate through some nodes and pull out the body (nSearch.InnerHtml). The _keywordList is a List, not an array.
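
    One change that usually helps here is compiling a single alternation pattern over all keywords once, then scanning each body in one pass instead of running 290 separate Regex.IsMatch calls. The sketch below shows that idea in Java rather than the asker's C# (the sample keyword list is made up), but the pattern-building approach carries over directly:

        import java.util.ArrayList;
        import java.util.LinkedHashSet;
        import java.util.List;
        import java.util.Set;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class KeywordScan {
            // Build one alternation pattern up front instead of running one regex per keyword.
            static Pattern buildPattern(List<String> keywords) {
                StringBuilder sb = new StringBuilder("\\b(");
                for (int i = 0; i < keywords.size(); i++) {
                    if (i > 0) sb.append('|');
                    sb.append(Pattern.quote(keywords.get(i))); // escape regex metacharacters
                }
                sb.append(")\\b");
                return Pattern.compile(sb.toString(), Pattern.CASE_INSENSITIVE);
            }

            static Set<String> findKeywords(Pattern pattern, String body) {
                Set<String> found = new LinkedHashSet<>();
                Matcher m = pattern.matcher(body);
                while (m.find()) {
                    found.add(m.group(1).toLowerCase()); // collect each matched keyword once
                }
                return found;
            }

            public static void main(String[] args) {
                List<String> keywords = new ArrayList<>(List.of("invoice", "refund", "urgent"));
                Pattern p = buildPattern(keywords);   // compile once, reuse for every email body
                System.out.println(findKeywords(p, "Please send the REFUND for invoice 42."));
            }
        }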

    Read the article

  • Is it possible to share a C struct in shared memory between apps compiled with different compilers?

    - by Joseph Garvin
    I realize that in general the C and C++ standards give compiler writers a lot of latitude. But in particular they guarantee that POD types like C struct members have to be laid out in memory in the same order that they're listed in the struct's definition, and most compilers provide extensions letting you fix the alignment of members. So if you had a header that defined a struct and manually specified the alignment of its members, then compiled two apps with different compilers using the header, shouldn't one app be able to write an instance of the struct into shared memory and the other app be able to read it without errors? I am assuming, though, that the size of the types contained is consistent across two compilers on the same architecture (it has to be the same platform already, since we're talking about shared memory). I realize that this is not always true for some types (e.g. long vs. long long in GCC and MSVC 64-bit), but nowadays there are uint16_t, uint32_t, etc. types, and float and double are specified by IEEE standards.

    Read the article

  • What are some of the best JavaScript memory leak detection tools?

    - by Philip Fourie
    Our team is faced with a slow but serious JavaScript memory leak. We have read up on the usual causes of memory leaks in JavaScript (e.g. closures and circular references). We tried to avoid those pitfalls in the code, but it is likely we still have unknown mistakes left. I started my search for available tools but would like input from people with actual experience with them. Some of the tools I have found so far (but have no idea how good and useful they would be for our problem): Sieve, Drip, JavaScript Memory Leak Detector. Our search is not limited to free tools; free would be a bonus, but more importantly we want something that will get the job done. We do the following in our JavaScript code: AJAX calls to a .NET WCF back-end that sends back JSON data; manipulating the DOM; keeping a fairly sized object model in the JavaScript to store current state.

    Read the article

  • Can the SHA-1 algorithm be computed on a stream, with a low memory footprint?

    - by raoulsson
    I am looking for a way to compute SHA-1 checksums of very large files without having to load them fully into memory at once. I don't know the details of the SHA-1 implementation and therefore would like to know if it is even possible to do that. If you know the SAX XML parser, then what I'm looking for is something similar: computing the SHA-1 checksum by loading only a small part into memory at a time. All the examples I found, at least in Java, always depend on fully loading the file/byte array/string into memory. If you know of implementations (in any language), then please let me know!
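
    SHA-1 is a block-based hash, so it can be fed incrementally; the digest object keeps only a small fixed internal state no matter how large the input is. In Java, for example, MessageDigest supports exactly this kind of chunked update; a minimal sketch (the file name is just a placeholder):

        import java.io.FileInputStream;
        import java.io.InputStream;
        import java.security.MessageDigest;

        public class StreamingSha1 {
            public static void main(String[] args) throws Exception {
                MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
                byte[] buffer = new byte[8192];              // only this much of the file is in memory at once
                try (InputStream in = new FileInputStream("huge-file.bin")) {
                    int read;
                    while ((read = in.read(buffer)) != -1) {
                        sha1.update(buffer, 0, read);        // feed the file chunk by chunk
                    }
                }
                byte[] hash = sha1.digest();                 // finalize after the last chunk
                StringBuilder hex = new StringBuilder();
                for (byte b : hash) hex.append(String.format("%02x", b));
                System.out.println(hex);
            }
        }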

    Read the article
