Search Results

Search found 6207 results on 249 pages for 'slow'.

Page 207 of 249

  • Detecting Xml namespace fast

    - by Anna Tjsoken
    Hello there. This may be a very trivial problem I'm trying to solve, but I'm sure there's a better way of doing it, so please go easy on me. I have a bunch of XSD files that are internal to our application, and about 20-30 XML files that implement datasets based on those XSDs. Some XML files are small (<100 KB), others are about 3-4 MB, with a few over 10 MB. I need to find a way of working out what namespace these XML files use in order to provide (something like) IntelliSense based on the XSD. The implementation of that is not an issue - another developer has already written the code for it. What I'm not sure about is the best (and fastest!) way of detecting the namespace without using XmlDocument (which does a full parse). I'm using C# on .NET 3.5 and the documents come through as a Stream (some are remote files). All the files are *.xml (I could detect it if it were extension-based), but unfortunately the XML namespace is the only way to tell them apart. Right now I've tried XmlDocument, but I've found it inefficient and slow because it has to wait for the larger documents to be fully parsed (even the 100 KB docs). Something like the following is my method signature; overloads include a string for the raw content:

        public string GetNamespaceForDocument(Stream document);

    Would a compiled Regex pattern be a good fit? How does Visual Studio manage this so efficiently? Another colleague has told me to find a fast XML parser in C/C++, parse the content there, and expose a stub that gives back the namespace, since parsing is slower in .NET - is that a good idea?
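
    One possible approach, sketched with XmlReader: as a forward-only pull parser it reads only as far as it is asked to, so positioning it on the root element and returning that element's namespace touches only the first part of even a 10 MB stream. This is only a sketch of the idea, not a drop-in answer.

        using System.IO;
        using System.Xml;

        public static class XmlNamespaceProbe
        {
            // Reads only up to the root element and returns its namespace URI.
            // XmlReader never loads or validates the rest of the document.
            public static string GetNamespaceForDocument(Stream document)
            {
                using (XmlReader reader = XmlReader.Create(document))
                {
                    reader.MoveToContent();      // skips the declaration, comments and whitespace
                    return reader.NamespaceURI;  // "" if the root element has no namespace
                }
            }
        }

    The string-content overload can reuse the same body via XmlReader.Create(new StringReader(content)).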

    Read the article

  • Looking for a .Net ORM

    - by SLaks
    I'm looking for a .NET 3.5 ORM framework with a rather unusual set of requirements:
    - I need to create and alter tables at runtime with schemas defined by my end-users. (Obviously, that wouldn't be strongly typed; I'm looking for something like a DataTable there.)
    - I also want regular strongly-typed partial classes for rows in non-dynamic tables, with custom validation and other logic (like normal ORMs).
    - I want to load the entire database (or some entire tables) once and keep it in memory throughout the life of the (WinForms) GUI. (I have a shared SQL Server with a relatively slow connection.)
    - I also want regular LINQ support (like LINQ to SQL) for ASP.NET on the shared server (which has a fast connection to SQL Server).
    - In addition to SQL Server, I also want to be able to use a single-file database that would support XCopy deployment, without installing SQL CE on the end-user's machine (probably Access or SQLite).
    - Finally, it has to be free (unless it's OpenAccess).
    I'll probably have to write it myself, as I don't think there is an existing ORM that meets these requirements. However, I don't want to re-invent the wheel if there is one, hence this question. I'm using VS2010, but I don't know when my webhost (LFC) will upgrade to .NET 4.0.

    Read the article

  • Perl XML SAX parser emulating XML::Simple record for record

    - by DVK
    Short question summary: I am looking for a fast XML parser (most likely a wrapper around some standard SAX parser) which will produce per-record data structures 100% identical to those produced by XML::Simple.
    Details: We have a large code infrastructure which depends on processing records one by one and expects each record to be a data structure in the format produced by XML::Simple, since it has used XML::Simple since the early Jurassic era. An example simple XML is:

        <root>
          <rec><f1>v1</f1><f2>v2</f2></rec>
          <rec><f1>v1b</f1><f2>v2b</f2></rec>
          <rec><f1>v1c</f1><f2>v2c</f2></rec>
        </root>

    And example rough code is:

        sub process_record {
            my ($obj, $record_hash) = @_;
            # do_stuff
        }

        my $records = XML::Simple->XMLin(@args)->{root};
        foreach my $record (@$records) { $obj->process_record($record) };

    As everyone knows, XML::Simple is, well, simple. More importantly, it is very slow and a memory hog - being a DOM parser, it needs to build and store 100% of the data in memory. So it's not the best tool for parsing an XML file consisting of a large number of small records, record by record. However, rewriting the entire code base (which consists of a large number of "process_record"-like methods) to work with a standard SAX parser seems like too big a task to be worth the resources, even at the cost of living with XML::Simple. What I'm looking for is an existing module, probably based on a SAX parser (or anything fast with a small memory footprint), which can produce $record hashrefs one by one from XML like the above, so that each can be passed to $obj->process_record($record) and be 100% identical to what XML::Simple's hashrefs would have been.

    Read the article

  • jquery slideToggle() and unknown height?

    - by GaVrA
    Hello! I'm using jQuery 1.3.2 and this is the code:

        <script type="text/javascript">
        //<![CDATA[
        jQuery(function(){
            jQuery('.news_bullet a.show_preview').click(function() {
                jQuery(this).siblings('div').slideToggle('slow');
                return false;
            }).toggle(
                function() { jQuery(this).css({ 'background-position' : '0 -18px' }); },
                function() { jQuery(this).css({ 'background-position' : '0 0' }); }
            );
        });
        //]]>
        </script>

    As you can see, I have a bunch of small green "+" links; when you click one, some text is revealed and the background-position of that link is changed so it shows the other part of the image, a red "-". The problem I'm having is that I don't know the height of those hidden elements, because it depends on how much text there is, so when I click "+" to show them the animation is 'jumping'. One workaround I found is to give those hidden elements a fixed height and overflow:hidden. You can see how much smoother the animation runs in the top-left block (the one with 'Vesti iz sveta crtanog filma' at the top). All the other blocks don't have a fixed height, and the animation there is 'jumping'. At the moment the fixed height in the top-left block is 30px, but of course some elements require more height and some require less, so that is not a good solution... :) So how do I stop this animation from 'jumping' when there is no fixed height?

    Read the article

  • count on LINQ union

    - by brechtvhb
    I have this LINQ statement:

        List<UserGroup> domains = UserRepository.Instance.UserIsAdminOf(currentUser.User_ID);

        query = (from doc in _db.Repository<Document>()
                 join uug in _db.Repository<User_UserGroup>() on doc.DocumentFrom equals uug.User_ID
                 where domains.Contains(uug.UserGroup)
                 select doc)
                .Union(from doc in _db.Repository<Document>()
                       join uug in _db.Repository<User_UserGroup>() on doc.DocumentTo equals uug.User_ID
                       where domains.Contains(uug.UserGroup)
                       select doc);

    Running this statement doesn't cause any problems, but when I want to count the result set the query suddenly runs quite slowly:

        totalRecords = query.Count();

    The SQL generated for this count is:

        SELECT COUNT([t5].[DocumentID])
        FROM (
            SELECT [t4].[DocumentID], [t4].[DocumentFrom], [t4].[DocumentTo]
            FROM (
                SELECT [t0].[DocumentID], [t0].[DocumentFrom], [t0].[DocumentTo]
                FROM [dbo].[Document] AS [t0]
                INNER JOIN [dbo].[User_UserGroup] AS [t1] ON [t0].[DocumentFrom] = [t1].[User_ID]
                WHERE ([t1].[UserGroupID] = 2) OR ([t1].[UserGroupID] = 3) OR ([t1].[UserGroupID] = 6)
                UNION
                SELECT [t2].[DocumentID], [t2].[DocumentFrom], [t2].[DocumentTo]
                FROM [dbo].[Document] AS [t2]
                INNER JOIN [dbo].[User_UserGroup] AS [t3] ON [t2].[DocumentTo] = [t3].[User_ID]
                WHERE ([t3].[UserGroupID] = 2) OR ([t3].[UserGroupID] = 3) OR ([t3].[UserGroupID] = 6)
            ) AS [t4]
        ) AS [t5]

    Can anyone help me improve the speed of the count query? Thanks in advance!
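
    One hedged sketch of an alternative count, assuming the repository calls above return IQueryable and that filtering on UserGroupID is equivalent to the domains.Contains check: fold both branches into a single WHERE with an OR, so the count runs over one correlated EXISTS instead of a UNION subquery.

        // Hypothetical rewrite - the UserGroupID property and the domainIds list are
        // assumptions about the model; _db.Repository<T>() is the same call used above.
        List<int> domainIds = domains.Select(g => g.UserGroupID).ToList();

        int totalRecords =
            (from doc in _db.Repository<Document>()
             where _db.Repository<User_UserGroup>().Any(uug =>
                       domainIds.Contains(uug.UserGroupID) &&
                       (uug.User_ID == doc.DocumentFrom || uug.User_ID == doc.DocumentTo))
             select doc).Count();

    Whether this actually beats the UNION depends on the generated plan, so it is worth comparing both in SQL Profiler.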

    Read the article

  • Setup.exe files downloading without cab files over poor connections

    - by Colin
    We have customers who are trying to download a setup.exe file over mobile connections that appear to be very slow. They have reported that when they click the downloaded setup.exe, the install wizard starts up, but partway through they get an error message indicating that a cab file is corrupt or missing. They couriered a problem tablet to us and we downloaded the file on it without any trouble, but I could replicate the issue by downloading the file over https (https is normally used to access the rest of the site, although it is not necessary for the download). When I did this the downloaded file was 2.8 MB; it should be 8 MB. I don't think https is the root cause, because I can see the download link in the browser history using http, so I know the customer tried to download over http. I think the poor connection is preventing a complete download, but the browser is acting as if it is complete. Is there a way to ensure the file is downloaded fully, or not at all? Why does the browser not indicate that the download is incomplete?

    Read the article

  • Paging with Find using Active Record

    - by Brian Rizzo
    I can't seem to find an answer to this question or a good example of how to accomplish what I am trying to do. I'm sure it's been posted or explained somewhere, but I am having trouble finding the exact solution I need. I am using ActiveRecord in SubSonic 3.0.0.3. When I do something like

        recordset = VehicleModel.Find(x => x.Model.StartsWith(SearchText));

    I get back an IList of VehicleModel objects (or, more simply, a recordset). This is fine until I return too many records. I also cannot order the returned set of records (my grid will do this fine, but I'm sure it will be too slow if I have too many records). Since Find returns an IList, there isn't much I can run directly against it (again, I may be overlooking something simple, so please don't kill me). My question is: can someone explain how to find data like I am above, sort it, and get a page of data where a page is of size n? Am I going about this wrong? Am I even close to being on the right track?
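
    A hedged sketch of sorting and paging on the server rather than against the returned IList, assuming the generated ActiveRecord class exposes an IQueryable (SubSonic's templates typically generate an All() method alongside Find()) - All() and the Skip/Take translation are assumptions to verify against your generated code:

        int pageIndex = 0;   // zero-based page number
        int pageSize = 25;

        // Filter, order and page in one deferred query so only one page of
        // rows comes back from the database.
        List<VehicleModel> page = VehicleModel.All()
            .Where(x => x.Model.StartsWith(SearchText))
            .OrderBy(x => x.Model)
            .Skip(pageIndex * pageSize)
            .Take(pageSize)
            .ToList();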

    Read the article

  • XNA 2D mouse picking

    - by Corndog
    I'm working on a simple 2D real-time strategy game using XNA. I have now reached the point where I need to be able to click on the sprite for a unit or building and reference the object associated with that sprite. From the research I have done over the last three days I have found many references on how to do "mouse picking" in 3D, which does not seem to apply to my situation. I understand that another way to do this is to simply keep an array of all "selectable" objects in the world, and when the player clicks on a sprite, check the mouse location against the locations of all the objects in the array. The problem I have with this approach is that it would become rather slow if the number of units and buildings grows large (it also does not seem very elegant). So what are some other ways I could do this? (Please note that I have also considered using a hash table to associate each object with its sprite location, and using a two-dimensional array where each location in the array represents one pixel in the world. Once again, these seem like rather clunky ways of doing things.)
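
    One sketch of an alternative to the full array scan: a coarse spatial grid (sometimes called a spatial hash). Nothing below is XNA-specific; the cell size is something you would tune to roughly the size of a sprite, and the generic item type stands in for your own selectable unit/building class.

        using System.Collections.Generic;

        class SpatialGrid<T>
        {
            private readonly float cellSize;
            private readonly Dictionary<string, List<T>> cells = new Dictionary<string, List<T>>();

            public SpatialGrid(float cellSize) { this.cellSize = cellSize; }

            // Assumes non-negative world coordinates for simplicity.
            private string KeyFor(float x, float y)
            {
                return (int)(x / cellSize) + ":" + (int)(y / cellSize);
            }

            // Register an object under the cell containing its position
            // (re-register with the new position when it moves).
            public void Add(T item, float x, float y)
            {
                List<T> bucket;
                if (!cells.TryGetValue(KeyFor(x, y), out bucket))
                {
                    bucket = new List<T>();
                    cells[KeyFor(x, y)] = bucket;
                }
                bucket.Add(item);
            }

            // On a click, only the handful of objects in the clicked cell need a
            // precise sprite-rectangle test, however many units exist overall.
            public IList<T> Query(float mouseX, float mouseY)
            {
                List<T> bucket;
                return cells.TryGetValue(KeyFor(mouseX, mouseY), out bucket)
                    ? (IList<T>)bucket
                    : new T[0];
            }
        }

    Objects larger than a cell, or sitting on a cell boundary, would need to be registered in every cell their bounds overlap; that bookkeeping is omitted here.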

    Read the article

  • Why won't this DOM element disappear?

    - by George Edison
    I have a page that uses jQuery with a small glitch. I managed to get this down to a simple example that demonstrates the problem:

        <html>
        <head>
            <script type="text/javascript" src="jquery.js"></script>
            <script type="text/javascript">
                function hideIt() {
                    $('#hideme').fadeOut('slow', function() { $(this).remove(); } );
                }
            </script>
        </head>
        <body>
            <div id='#hideme'>Hide me!</div>
            <button onclick='hideIt();'>Hide</button>
        </body>
        </html>

    As you would expect, the problem is simple: the caption doesn't disappear. What simple thing did I overlook? (Or if it's not a simple thing, what complicated thing did I miss?)

    Read the article

  • Mysql latin1 turkish data and delphi 2010 utf8

    - by sabri.arslan
    Hello, I have tables collated as latin1_general_ci that contain Turkish character values, and I can use this data from Delphi 7 + ZEOS with no problem. I want to upgrade to Delphi 2010, but ZEOS seems too slow, so I want to use ODBC+ADO or a dbExpress solution instead. The dbExpress solution works fine - it displays my data as entered and writes it back as entered, without any change to the column charset - but dbExpress has problems of its own: for example, a select * from a table with columns of type varchar, decimal, int, tinyint and text gives AV errors on XP systems (Vista and 7 do not give any error and seem to work fine, though not fully tested). The ADO solution (dbGo) works fine but does not show my data as entered; it wants everything to be UTF. I don't want to convert my data to UTF before testing everything. How can I see my data as entered, writing UTF on the client side but storing latin1 (as ZEOS or dbExpress do)? I have tried many other options, e.g. MySQL-side collation and charset parameters. Sorry for my bad English; I hope someone understands me. Thanks.

    Read the article

  • Improve Efficiency for This Text Processing Code

    - by johnv
    I am writing a program that counts the number of words in a text file which is already lowercase and separated by spaces. I want to use a dictionary and only count a word if it's within the dictionary. The problem is that the dictionary is quite large (~100,000 words) and each text document also has ~50,000 words. As such, the code below is very slow (it takes about 15 seconds to process one document on a quad-core i7 machine). I'm wondering if there's something wrong with my code and whether the efficiency of the program can be improved. Thanks so much for your help. Code below:

        public static string WordCount(string countInput)
        {
            string[] keywords = ReadDic();   /* read dictionary txt file */

            /* then read the main text file */
            Dictionary<string, int> dict = ReadFile(countInput).Split(' ')
                .Select(c => c)
                .Where(c => keywords.Contains(c))
                .GroupBy(c => c)
                .Select(g => new { word = g.Key, count = g.Count() })
                .OrderBy(g => g.word)
                .ToDictionary(d => d.word, d => d.count);

            int s = dict.Sum(e => e.Value);
            string k = s.ToString();
            return k;
        }
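
    For what it's worth, the usual culprit in code shaped like this is keywords.Contains(c): on a string[] it is a linear scan over ~100,000 entries for every one of the ~50,000 words. A sketch of the same total using a HashSet lookup instead (ReadDic() and ReadFile() are the original helpers, assumed unchanged, and the method drops into the same class, which already uses LINQ):

        public static string WordCount(string countInput)
        {
            // O(1) membership tests instead of scanning the whole keyword array per word.
            HashSet<string> keywords = new HashSet<string>(ReadDic());

            int total = ReadFile(countInput)
                .Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries)
                .Count(word => keywords.Contains(word));

            return total.ToString();
        }

    If the per-word counts themselves are needed, keep the GroupBy pipeline but still swap the string[] for the HashSet in the Where clause.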

    Read the article

  • WF performance with new 20,000 persisted workflow instances each month

    - by Nikola Stjelja
    Windows Workflow Foundation has a known problem: it is slow when persisting WF instances. I'm planning a project whose business layer will be based on WF-exposed WCF services. The project will have 20,000 new workflow instances created each month, and each instance could take up to 2 months to finish. I have been led to believe that, given how slow WF persistence is, this load would be unattainable for performance reasons. I have the following questions: Is this true? Will my performance be unacceptable under that load, given WF's persistence speed limitations? How can I solve the problem? We currently have two possible solutions:
    1. Each new business process request (e.g. "Give me a new driver's license") becomes a new WF instance, and the number of persistence operations is limited by forwarding all status-request operations to saved state values in a separate database.
    2. Have only a small number of workflow instances up at any given time, with no persistence whatsoever (only in case of system crashes etc.), by breaking each workflow step into a separate workflow, with that workflow handling every business process request instance in the system that is currently at that step (e.g. I submit my driver's license request form, which is step one; we have 100 cases of that, and my step-one workflow handles every case simultaneously).
    I'm very interested in a solution to this problem. If you want to discuss it, please feel free to mail me at [email protected]

    Read the article

  • Project Euler 7 Scala Problem

    - by Nishu
    I was trying to solve Project Euler problem 7 using Scala 2.8. The first solution I implemented takes ~8 seconds:

        def problem_7: Int = {
            var num = 17
            var primes = new ArrayBuffer[Int]()
            primes += 2
            primes += 3
            primes += 5
            primes += 7
            primes += 11
            primes += 13
            while (primes.size < 10001) {
                if (isPrime(num, primes)) primes += num
                if (isPrime(num + 2, primes)) primes += num + 2
                num += 6
            }
            return primes.last
        }

        def isPrime(num: Int, primes: ArrayBuffer[Int]): Boolean = {
            // if n == 2 return false;
            // if n == 3 return false;
            var r = Math.sqrt(num)
            for (i <- primes) {
                if (i <= r) {
                    if (num % i == 0) return false
                }
            }
            return true
        }

    Later I tried the same problem without storing the prime numbers in an ArrayBuffer. This takes 0.118 seconds:

        def problem_7_alt: Int = {
            var limit = 10001
            var count = 6
            var num: Int = 17
            while (count < limit) {
                if (isPrime2(num)) count += 1
                if (isPrime2(num + 2)) count += 1
                num += 6
            }
            return num
        }

        def isPrime2(n: Int): Boolean = {
            // if n == 2 return false;
            // if n == 3 return false;
            var r = Math.sqrt(n)
            var f = 5
            while (f <= r) {
                if (n % f == 0) {
                    return false
                } else if (n % (f + 2) == 0) {
                    return false
                }
                f += 6
            }
            return true
        }

    I tried using various mutable array/list implementations in Scala but was not able to make the first solution faster. I do not think that storing Ints in an array of size 10001 should make the program this slow. Is there a better way to use lists/arrays in Scala?

    Read the article

  • Out of Core Implementation of a Quadtree

    - by Nima
    Hi, I am trying to build a quadtree data structure (or let's just say a tree) in secondary memory (the hard disk). I have a C++ program to do so and I use fopen to create the files. I am also using tesseral coding to name each cell's file with its corresponding code, storing all the cells in one directory on the disk. The problem is that after creating about 1,100 files, fopen just returns NULL and stops creating new files. I can create further files manually in that directory, but through C++ no further files can be created. I know about the max limit of inodes on the ext3 filesystem, which is (from Wikipedia) 32,000, but mine is way less than that; also note that I can create files manually on the disk, just not through fopen. I would also really appreciate any ideas regarding the best way to store a very dynamic quadtree on disk (I need the nodes to be in separate files, and the quadtree might have a depth of 50). Using nested directories is one idea, but I think it will slow down performance because of following the links on the filesystem to access each file. Thanks, Nima

    Read the article

  • Multiple connections in a single SSH SOCKS 5 Proxy

    - by Elie Zedeck
    Hey guys, my first question here on Stack Overflow: what do I need to do so that an SSH SOCKS 5 proxy (SSH2) will allow multiple connections? What I have noticed is that when I load a page in Firefox (already configured to use the SOCKS 5 proxy), it loads everything one by one. This can be perceived with the bare eye, and I also confirmed it through Firebug's NET tab, which logs the connections that have been made. I have already configured some of the directives in the about:config page, like pipelining, persistent proxy connections, and a few other things, but I still get this kind of sequential load of resources, which is noticeably very slow:

        network.http.pipelining;true
        network.http.pipelining.maxrequests;8
        network.http.pipelining.ssl;true
        network.http.proxy.pipelining;true
        network.http.max-persistent-connections-per-proxy;100
        network.proxy.socks_remote_dns;true

    My ISP sucks: during the day it intentionally breaks connections on a random basis, so it is impossible to get meaningful work done without a lot of browser refreshes or hitting the F5 key. That is why I started looking for solutions. SSH's dynamic port forwarding is the best solution I have found to date, because it has pretty good compression, which saves a lot of useless traffic, and it is also secure. The only thing remaining is to get it to allow multiple connections running through it. Thanks for all the input.

    Read the article

  • SQL Server - Multi-Column substring matching

    - by hamlin11
    One of my clients is hooked on multi-column substring matching. I understand that CONTAINS and FREETEXT search for words (and, at least in the case of CONTAINS, word prefixes). However, based on my understanding of this MSDN book, neither of these nor their variants is capable of searching substrings. I have used LIKE rather extensively (SELECT * FROM A WHERE A.B LIKE '%substr%').

    Sample table A:

        ID | Col1     | Col2     | Col3     |
        -------------------------------------
        1  | oklahoma | colorado | Utah     |
        2  | arkansas | colorado | oklahoma |
        3  | florida  | michigan | florida  |
        -------------------------------------

    The following query will give us rows 1 and 2:

        select * from A
        where Col1 like '%klah%'
           or Col2 like '%klah%'
           or Col3 like '%klah%'

    This is rather ugly, probably slow, and I just don't like it very much - probably because the implementations I'm dealing with have 10+ columns that need to be searched. The following may be a slight improvement as far as code readability goes, but performance-wise we're still in the same ballpark:

        select * from A
        where (Col1 + ' ' + Col2 + ' ' + Col3) like '%klah%'

    I have thought about simply adding insert, update and delete triggers that write the concatenated version of the above columns into a separate table that shadows this one.

    Sample Shadow_Table:

        ID | searchtext                  |
        ---------------------------------
        1  | oklahoma colorado Utah      |
        2  | arkansas colorado oklahoma  |
        3  | florida michigan florida    |
        ---------------------------------

    This would allow us to search for '%klah%' with:

        select * from Shadow_Table
        where searchtext like '%klah%'

    I really don't like having to remember that this shadow table exists and that I'm supposed to use it when performing multi-column substring matching, but it probably yields pretty quick reads at the expense of writes and storage space. My gut feeling tells me there is an existing solution built into SQL Server 2008, but I don't seem to be able to find anything other than research papers on the subject. Any help would be appreciated.

    Read the article

  • Hidden controls, iframes or divs

    - by user287745
    What happens to controls, iframes or divs that are hidden? Do they still get sent to the client? And a disabled control: does it get sent to the client? What I want is an aspx page with many iframes to display different pages, and many div tags to display CSS-formatted information. To understand what I mean by "many": I have to merge a complete website with 30 aspx pages into one single page! I have simply combined everything, resulting in one extremely huge page. My concern is that on localhost it loads fast, but on an online server accessed by numerous people for educational purposes, the site (ONE PAGE) WILL SLOW DOWN terribly. To overcome this I thought of using the hidden and disabled options. What is a better way of achieving the above? Yes, it sounds silly, but this is the requirement. Edit: Yes, I know the id and runat="server" must be set, but what I am asking is: will the div tag be sent to the user's browser? One answer is no. So can I enable them using JavaScript, like document.getElementById(id).style.visibility="visible"? What if I disable them and then enable them from JavaScript code - will they be loaded at the time of enabling?

    Read the article

  • C# .NET: Descending comparison of a SortedDictionary?

    - by Rosarch
    I want an IDictionary<float, foo> that returns the largest key values first:

        private IDictionary<float, foo> layers =
            new SortedDictionary<float, foo>(new DescendingComparer<float>());

        class DescendingComparer<T> : IComparer<T> where T : IComparable<T>
        {
            public int Compare(T x, T y)
            {
                return -y.CompareTo(x);
            }
        }

    However, this returns values in order of smallest first. I feel like I'm making a stupid mistake here. Just to see what would happen, I removed the minus sign from the comparer:

        public int Compare(T x, T y)
        {
            return y.CompareTo(x);
        }

    But I got the same result. This reinforces my intuition that I'm making a stupid error. This is the code that accesses the dictionary:

        foreach (KeyValuePair<float, foo> kv in sortedLayers)
        {
            // ...
        }

    UPDATE: This works, but is too slow to call as frequently as I need to call this method:

        IOrderedEnumerable<KeyValuePair<float, foo>> sortedLayers =
            layers.OrderByDescending(kv => kv.Key);

        foreach (KeyValuePair<float, ICollection<IGameObjectController>> kv in sortedLayers)
        {
            // ...
        }

    UPDATE: I put a breakpoint in the comparer that never gets hit as I add and remove key/value pairs from the dictionary. What could this mean?
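
    For reference, a self-contained sketch of the direction logic: y.CompareTo(x) on its own sorts descending, while negating it (as in the first snippet) flips it back to ascending, which matches the smallest-first output described. A SortedDictionary calls its comparer on every Add, so a breakpoint in Compare that is never hit suggests the dictionary actually being mutated was not constructed with this comparer - worth checking before anything else.

        using System;
        using System.Collections.Generic;

        class DescendingComparer<T> : IComparer<T> where T : IComparable<T>
        {
            // No negation: y.CompareTo(x) reverses the natural (ascending) order.
            public int Compare(T x, T y) { return y.CompareTo(x); }
        }

        static class ComparerCheck
        {
            static void Main()
            {
                var layers = new SortedDictionary<float, string>(new DescendingComparer<float>());
                layers.Add(0.1f, "back");
                layers.Add(0.9f, "front");
                layers.Add(0.5f, "middle");

                // Prints 0.9, 0.5, 0.1 - largest key first.
                foreach (KeyValuePair<float, string> kv in layers)
                    Console.WriteLine(kv.Key + " " + kv.Value);
            }
        }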

    Read the article

  • SQL Where Clause Against View

    - by Adam Carr
    I have a view (actually a table-valued function, but the observed behavior is the same in both) that inner joins and left outer joins several other tables. When I query this view with a WHERE clause similar to

        SELECT *
        FROM [v_MyView]
        WHERE [Name] LIKE '%Doe, John%'

    the query is very slow, but if I do the following

        SELECT *
        FROM [v_MyView]
        WHERE [ID] IN
        (
            SELECT [ID]
            FROM [v_MyView]
            WHERE [Name] LIKE '%Doe, John%'
        )

    it is MUCH faster. The first query takes at least 2 minutes to return, if not longer, while the second query returns in less than 5 seconds. Any suggestions on how I can improve this? If I run the whole thing as one SQL statement (without the use of a view) it is very fast as well. I believe this happens because a view has to behave like a table: if a view has OUTER JOINs, GROUP BYs or TOP ##, the results could differ depending on whether the WHERE clause is applied before or after the view executes. My question is: why wouldn't SQL Server optimize my first query into something as efficient as my second query?

    Read the article

  • Formatting inline many-to-many related models presented in django admin

    - by Jonathan
    I've got two Django models (simplified):

        class Product(models.Model):
            name = models.TextField()
            price = models.IntegerField()

        class Invoice(models.Model):
            company = models.TextField()
            customer = models.TextField()
            products = models.ManyToManyField(Product)

    I would like to see the relevant products as a nice table (of product fields) on an Invoice page in the admin, and be able to link to the individual Product pages. My first thought was using the admin's inlines, but Django used a select-box widget per related Product. That isn't linked to the Product pages, and also, since I have thousands of products and each select box independently downloads all the product names, it quickly becomes unreasonably slow. So I turned to using ModelAdmin.filter_horizontal as suggested here, which uses a single instance of a different widget, where you have a list of all Products and another list of the related Products and can add/remove products in the latter from the former. This solved the slowness, but it still doesn't show the relevant Product fields, and it isn't linkable. So, what should I do? Tweak views? Override ModelForms? I Googled around and couldn't find any example of such code...

    Read the article

  • Query optimization (OR based)

    - by john194
    I have Googled but I can't find answers to these questions; your advice is appreciated. Setup: CentOS on a VPS with 512 MB RAM, nginx, PHP 5 (FastCGI), MySQL 5 (MyISAM, not InnoDB). I need to optimize this app created by an ex-employee. The app works, but it's slow.

    Table: t1(id [bigint(20)], c1 [mediumtext], c2 [mediumtext], c3 [mediumtext], c4 [mediumtext])

    id is some random big number and is the PK. The mediumtext columns look like this:

        c1="|box-002877|"
        c2="|ct-2348|rd-11124854|hw-3949|wd-8872|hw-119037736|...etc.."
        c3="|fg-2448|wd-11172|hw-1656|...etc.."
        c4="|hg-2448|qd-16667|...etc."

    (Some columns contain a lot of data, around 900 KiB; the database is around 300 MiB.) Yes, mediumtext "is bad", and (20) is too big... but I didn't create this. Those codes can be found in any of the 4 mediumtext columns. He needs all the columns of the row containing $code, so he wrote this:

        function f1($code) {
            SELECT * FROM t1
            WHERE c1 LIKE '%$code%'
               OR c2 LIKE '%$code%'
               OR c3 LIKE '%$code%'
               OR c4 LIKE '%$code%';

    Questions:
    Q1. If $code is found in c1, does MySQL automatically stop checking and return the row (id+c1+c2+c3+c4), or will it continue (wasting time) checking c2, c3 and c4?
    Q2. MySQL is working with this table on disk (not in RAM) because of the mediumtext columns, right? Is this the primary cause of the slowness?
    Q3. Can that query be cached by MySQL (with a big query_cache_size=128M value in my.cnf), or is it not cacheable due to the mediumtexts, or due to the "OR LIKE"?
    Q4. Do you recommend rewriting this with MySQL's INSTR() / LOCATE() / MATCH..AGAINST [FULLTEXT]?

    Read the article

  • Locating memory leak in Apache httpd process, PHP/Doctrine-based application

    - by Sam
    I have a PHP application using these components:
    - Apache 2.2.3-31 on CentOS 5.4
    - PHP 5.2.10
    - Xdebug 2.0.5 with remote debugging enabled
    - APC 3.0.19
    - Doctrine ORM for PHP 1.2.1, using query caching and result caching via APC
    - MySQL 5.0.77, using query caching

    I've noticed that when I start up Apache, I eventually end up with 10 child processes. As time goes on, each process grows in memory until it approaches 10% of available memory, which begins to slow the server to a crawl since together they take up 100% of memory. Here is a snapshot of my top output:

        PID   USER    PR  NI  VIRT   RES   SHR  S %CPU %MEM   TIME+   COMMAND
        1471  apache  16   0  626m   201m  18m  S  0.0  10.2  1:11.02 httpd
        1470  apache  16   0  622m   198m  18m  S  0.0  10.1  1:14.49 httpd
        1469  apache  16   0  619m   197m  18m  S  0.0  10.0  1:11.98 httpd
        1462  apache  18   0  622m   197m  18m  S  0.0  10.0  1:11.27 httpd
        1460  apache  15   0  622m   195m  18m  S  0.0  10.0  1:12.73 httpd
        1459  apache  16   0  618m   191m  18m  S  0.0   9.7  1:13.00 httpd
        1461  apache  18   0  616m   190m  18m  S  0.0   9.7  1:14.09 httpd
        1468  apache  18   0  613m   190m  18m  S  0.0   9.7  1:12.67 httpd
        7919  apache  18   0  116m   75m   15m  S  0.0   3.8  0:19.86 httpd
        9486  apache  16   0  97.7m  56m   14m  S  0.0   2.9  0:13.51 httpd

    I have no long-running scripts (they all terminate eventually, the longest taking maybe 2 minutes), and I am working under the assumption that once each script terminates, the memory it uses gets deallocated (maybe someone can correct me on that). My hunch is that it could be APC, since it stores data between requests, but at the same time it seems weird that it would store data inside the httpd process. How can I track down which part of my app is causing the memory leak? What tools can I use to see how the memory usage is growing inside the httpd process and what is contributing to it?

    Read the article

  • Better ways to implement a modulo operation (algorithm question)

    - by ryxxui
    I've been trying to implement a modular exponentiator recently. I'm writing the code in VHDL, but I'm looking for advice of a more algorithmic nature. The main component of the modular exponentiator is a modular multiplier, which I also have to implement myself. I haven't had any problems with the multiplication algorithm - it's just adding and shifting, and I've done a good job of figuring out what all of my variables mean so that I can multiply in a pretty reasonable amount of time. The problem I'm having is with implementing the modulus operation in the multiplier. I know that performing repeated subtractions will work, but it will also be slow. I found out that I can shift the modulus to effectively subtract large multiples of the modulus, but I think there might still be better ways to do this. The algorithm that I'm using works something like this (weird pseudocode follows):

        result, modulus : integer (n bits) (previously defined)
        shiftcount : integer (initialized to zero)

        while( (modulus < result) and (modulus(n-1) != 1) ){
            modulus = modulus << 1
            shiftcount++
        }
        for(i = shiftcount; i >= 0; i--){
            if(modulus < result){ result = result - modulus }
            if(i != 0){ modulus = modulus >> 1 }
        }

    So... is this a good algorithm, or at least a good place to start? Wikipedia doesn't really discuss algorithms for implementing the modulo operation, and whenever I try to search elsewhere I find really interesting but incredibly complicated (and often unrelated) research papers and publications. If there's an obvious way to implement this that I'm not seeing, I'd really appreciate some feedback.
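
    To sanity-check the data flow, here is the same shift-and-subtract reduction written out in C# - only as an arithmetic model, a sketch rather than anything hardware-ready; the hardware version would keep the same shift-register-plus-comparator structure, one subtract-or-pass per cycle.

        static ulong Mod(ulong value, ulong modulus)
        {
            if (modulus == 0) throw new DivideByZeroException();

            // Shift the modulus up until it no longer fits below value
            // (or its top bit is set, so it cannot be shifted further).
            int shift = 0;
            while (modulus < value && (modulus & (1UL << 63)) == 0)
            {
                modulus <<= 1;
                shift++;
            }

            // Walk it back down, subtracting wherever the shifted modulus still fits.
            // Each step decides one quotient bit, exactly like binary long division.
            // Note <= rather than <, so value == modulus reduces to 0.
            for (int i = shift; i >= 0; i--)
            {
                if (modulus <= value) value -= modulus;
                if (i != 0) modulus >>= 1;
            }

            return value;   // now 0 <= value < original modulus
        }

    In hardware this is essentially a restoring divider that throws the quotient away; interleaving the reduction with the add-and-shift steps of the multiplication itself (keeping the partial product below twice the modulus at all times) avoids ever holding the full-width product, which is the idea behind interleaved modular multiplication and, further along, Montgomery multiplication.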

    Read the article

  • Efficient alternative to merge() when building dataframe from json files with R?

    - by Bryan
    I have written the following code, which works but is painfully slow once I start executing it over thousands of records:

        require("RJSONIO")

        people_data <- data.frame(person_id = numeric(0))

        json_data <- fromJSON(json_file)
        n_people <- length(json_data)

        for (lender in 1:n_people) {
            person_dataframe <- as.data.frame(t(unlist(json_data[[person]])))
            people_data <- merge(people_data, person_dataframe, all = TRUE)
        }

        output_file <- paste("people_data", ".csv")
        write.csv(people_data, file = output_file)

    I am attempting to build a unified data table from a series of JSON-formatted files. The fromJSON() function reads in the data as lists of lists. Each element of the list is a person, which then contains a list of the attributes for that person. For example:

        [[1]]
        person_id name gender hair_color
        [[2]]
        person_id name location gender height
        [[...]]

        structure(list(person_id = "Amy123", name = "Amy", gender = "F",
            hair_color = "brown"),
            .Names = c("person_id", "name", "gender", "hair_color"))

        structure(list(person_id = "matt53", name = "Matt",
            location = structure(c(47231, "IN"), .Names = c("zip_code", "state")),
            gender = "M", height = 172),
            .Names = c("person_id", "name", "location", "gender", "height"))

    The end result of the code above is a matrix where the columns are every person-attribute that appears in the structures above, and the rows are the relevant values for each person. As you can see, though, some data is missing for some of the people, so I need to ensure those show up as NA and make sure things end up in the right columns. Further, location itself is a vector with two components, state and zip_code, meaning it needs to be flattened to location.state and location.zip_code before it can be merged with another person record; this is what I use unlist() for. I then keep the running master table in people_data. The above code works, but do you know of a more efficient way to accomplish what I'm trying to do? It appears the merge() is slowing this to a crawl... I have hundreds of files with hundreds of people in each file. Thanks! Bryan

    Read the article

  • Updating permissions on Amazon S3 files that were uploaded via JungleDisk

    - by Simon_Weaver
    I am starting to use JungleDisk to upload files to an Amazon S3 bucket which corresponds to a CloudFront distribution, i.e. I can access it via an http:// URL and I am using Amazon as a CDN. The problem I am facing is that JungleDisk doesn't set 'read' permissions on the files, so when I go to the corresponding URL in a browser I get an Amazon 'AccessDenied' error. If I use a tool like BucketExplorer to set the ACL, that URL then returns a 200. I really, really like the simplicity of dragging files to a network drive, and JungleDisk is the best program I've found to do this reliably without tripping over itself and getting confused. However, it doesn't seem to have an option to make the files readable. I really don't want to have to go to a different tool (especially if I have to buy it) just to change the permissions - and this seems really slow anyway, because such tools generally seem to traverse the whole directory structure. JungleDisk provides some kind of 'web access', but this is a paid feature and I'm not sure whether it will work or not. S3 doesn't appear to propagate permissions down, which is a real pain. I'm considering writing a manual tool to traverse my tree and set everything to 'read', but I'd rather not do this if it's a problem someone else has already solved.

    Read the article
