Search Results

Search found 10417 results on 417 pages for 'large'.

  • CGContextDrawPDFPage taking up large amounts of memory

    - by Ed Marty
    I have a PDF file that I want to draw in outline form. I want to draw the first several pages of the document each in their own UIImage to use on a button, so that when clicked, the main display will navigate to the clicked page. However, CGContextDrawPDFPage seems to use copious amounts of memory when attempting to draw the page. Even though the image is only supposed to be around 100px tall, the application crashes while drawing one page in particular, which according to Instruments allocates about 13 MB of memory just for the one page. Here's the code for drawing:

        //Note: This is always called in a background thread, but the autorelease pool is set up elsewhere
        + (void) drawPage:(CGPDFPageRef)m_page inRect:(CGRect)rect inContext:(CGContextRef)g {
            CGPDFBox box = kCGPDFMediaBox;
            CGAffineTransform t = CGPDFPageGetDrawingTransform(m_page, box, rect, 0, YES);
            CGRect pageRect = CGPDFPageGetBoxRect(m_page, box);
            //Start the drawing
            CGContextSaveGState(g);
            //Clip to our bounding box
            CGContextClipToRect(g, pageRect);
            //Now we have to flip the origin to top-left instead of bottom-left
            //First: flip the y-axis
            CGContextScaleCTM(g, 1, -1);
            //Second: move the origin
            CGContextTranslateCTM(g, 0, -rect.size.height);
            //Now apply the transform to draw the page within the rect
            CGContextConcatCTM(g, t);
            //Finally, draw the page
            //The important bit. Commenting out the following line "fixes" the crashing issue.
            CGContextDrawPDFPage(g, m_page);
            CGContextRestoreGState(g);
        }

    Is there a better way to draw this image that doesn't take up huge amounts of memory?
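
    One way to keep memory proportional to the thumbnail rather than the page is to rasterize at a small scale factor. Purely as an illustration of that render-at-reduced-scale idea, here is a sketch in Python with the PyMuPDF library rather than CoreGraphics (the file name, page count, and scale factor are assumptions, and the method names follow recent PyMuPDF versions):

        import fitz  # PyMuPDF

        doc = fitz.open("document.pdf")          # hypothetical input file
        shrink = fitz.Matrix(0.1, 0.1)           # render at 10% scale, roughly thumbnail size
        for i in range(min(8, doc.page_count)):  # only the first several pages
            # the pixmap is allocated at output size, not full page resolution
            pix = doc.load_page(i).get_pixmap(matrix=shrink)
            pix.save("thumb_%d.png" % i)
        doc.close()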

  • Large Django application layout

    - by Rob Golding
    I am in a team developing a web-based university portal, which will be based on Django. We are still in the exploratory stages, and I am trying to find the best way to lay out the project and development environment.

    My initial idea is to develop the system as a Django "app" which contains sub-applications to separate out the different parts of the system. The reason I intend to make these "sub" applications is that they would not have any use outside the parent application whatsoever, so there would be little point in distributing them separately. We envisage that the portal will be installed in multiple locations (at different universities, for example), so the main app can be dropped into a number of Django projects to install it. We therefore have a different repository for each location's project, which is really just a settings.py file defining the installed portal applications and a urls.py routing the URLs to it.

    I have started to write some initial code, though, and I've come up against a problem. Some of the code that handles user authentication and profiles seems to be without a home. It doesn't conceptually belong in the portal application, as it doesn't relate to the portal's functionality. It also can't go in the project repository, however, as I would then be duplicating the code over each location's repository. If I then discovered a bug in this code, for example, I would have to manually replicate the fix across all of the locations' project files.

    My idea for a fix is to make all the project repos a fork of a "master" location project, so that I can pull any changes from that master. I think this is messy, though, and it means that I have one more repository to look after. I'm looking for a better way to structure this project. Can anyone recommend a solution, or a similar example I can take a look at? The problem seems to be that I am developing a Django project rather than just a Django application.
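
    One conventional answer is to give the "homeless" authentication/profile code its own reusable Django app, package it, and install it into every location's project, so a bug fix ships as a package upgrade instead of a repo merge. A hypothetical settings.py fragment for one location (all app names are invented for illustration):

        # settings.py for one university's thin project
        INSTALLED_APPS = [
            "django.contrib.auth",
            "django.contrib.contenttypes",
            "django.contrib.sessions",
            # installed with pip from an internal package index, not copied per site:
            "portal_accounts",   # shared authentication/profile code
            "portal",            # the main portal app
            "portal.courses",    # sub-apps with no use outside the portal
            "portal.timetable",
        ]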

  • Horizontal Scrolling Flash Game/Large Horizontal Scene

    - by Nathan
    Hello, I'm currently learning Flash (CS4, AS3) and am creating a game. I currently have one .fla file with 4 scenes; I move from left to right within a scene, then on to scene 2 and again go from left to right. This is a game where items pop up that need to be clicked on, and you get points. Is there any way I can combine these into 1 scene? Flash only allows a maximum stage width of 2880px. The reason for asking is that the transition between the scenes is rubbish, and my ActionScript is not working correctly across scenes (it loses values). Any help would be greatly appreciated! Nathan

  • 2D colliding n-body simulation (fast collision detection for a large number of balls)

    - by osgx
    Hello. I want to write a program simulating the motion of a large number (N = 1000 to 10^5 or more) of bodies (circles) on a 2D plane. All bodies have equal size, and the only force between them is elastic collision. I want to get something like the animation in the original post, but on a larger scale, with more balls and a denser filling of the plane (not a gas model as in that animation, but something like a boiling-water model). So I want a fast method of detecting whether ball i has any other ball on its path within a distance of 2*radius + V*delta_t. I don't want to do a full collision search against all N balls for each ball i (that search would be O(N^2)). PS: Sorry for the loop-animated GIF in the original post; press Esc to stop it (this will not work in Chrome).
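
    The usual way to avoid the O(N^2) search is spatial partitioning: hash each ball into a uniform grid whose cell size is at least the query distance, then test a ball only against balls in its own and the eight neighboring cells, which is roughly O(N) work per step when the balls are evenly spread. A minimal sketch of the idea in Python (the names and the pure-Python style are for illustration only):

        from collections import defaultdict

        def candidate_pairs(centers, reach):
            """centers: list of (x, y); reach: 2*radius + V*delta_t.
            Returns unordered index pairs of balls closer than reach."""
            grid = defaultdict(list)
            for i, (x, y) in enumerate(centers):
                grid[(int(x // reach), int(y // reach))].append(i)  # bin into reach-sized cells
            pairs = []
            for (cx, cy), members in grid.items():
                for dx in (-1, 0, 1):
                    for dy in (-1, 0, 1):
                        for j in grid.get((cx + dx, cy + dy), ()):
                            for i in members:
                                if i < j:  # count each unordered pair exactly once
                                    (x1, y1), (x2, y2) = centers[i], centers[j]
                                    if (x1 - x2) ** 2 + (y1 - y2) ** 2 < reach ** 2:
                                        pairs.append((i, j))
            return pairs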

  • Importing a large dataset into a database

    - by peaceful
    I'm a beginning programmer in the areas relevant to this question, so if possible, it'd be helpful to avoid assuming I already know a lot. I'm trying to import the OpenLibrary dataset into a local Postgres database. After it's imported, I plan to use it as a starting seed for a Ruby on Rails application that will include information on books. The OpenLibrary datasets are available here, in a modified JSON format: http://openlibrary.org/dev/docs/jsondump

    I only need very basic information for my application, much less than what is provided in the dumps. I'm only trying to get out book titles, author names, and relationships between books and authors. Below are two typical entries from their dataset, the first for an author and the second for a book (they seem to have an entry for each edition of a book). The entries lead off with a primary key and then a type, before the actual JSON database dump.

        /a/OL2A /type/author {"name": "U. Venkatakrishna Rao", "personal_name": "U. Venkatakrishna Rao", "last_modified": {"type": "/type/datetime", "value": "2008-09-10 08:44:01.978456"}, "key": "/a/OL2A", "birth_date": "1904", "type": {"key": "/type/author"}, "id": 99, "revision": 3}

        /b/OL345M /type/edition {"publishers": ["Social Science Research Project, Dept. of Geography, University of Dacca"], "pagination": "ii, 54 p.", "title": "Land use in Fayadabad area", "lccn": ["sa 65000491"], "subject_place": ["East Pakistan", "Dacca region."], "number_of_pages": 54, "languages": [{"comment": "initial import", "code": "eng", "name": "English", "key": "/l/eng"}], "lc_classifications": ["S471.P162 E23"], "publish_date": "1963", "publish_country": "pk ", "key": "/b/OL345M", "authors": [{"birth_date": "1911", "name": "Nafis Ahmad", "key": "/a/OL302A", "personal_name": "Nafis Ahmad"}], "publish_places": ["Dacca, East Pakistan"], "by_statement": "[by] Nafis Ahmad and F. Karim Khan.", "oclc_numbers": ["4671066"], "contributions": ["Khan, Fazle Karim, joint author."], "subjects": ["Land use -- East Pakistan -- Dacca region."]}

    The uncompressed dumps are enormous: about 2 GB for the authors list and 18 GB for the book editions list. OpenLibrary does not provide any tools for this themselves; they provide a simple unoptimized Python script for reading in sample data (which, unlike the actual dumps, comes in pure JSON format), but they estimate that if it were modified for use on their actual data, it would take two months (!) to finish loading. How can I read this into the database? I assume I'll need to write a program to do this. What language should I use, and do you have any guidance on how to finish in a reasonable amount of time? The only scripting language I have any experience with is Ruby.
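
    For scale: a two-month estimate usually implies one INSERT and one commit per record. Streaming the dump line by line and loading in large batches (or via Postgres's COPY) typically brings this down to hours. A rough sketch of the streaming approach in Python, where the table schema, the file name, and the "key type JSON" line format are assumptions taken from the samples above rather than from OpenLibrary documentation:

        import json
        import psycopg2  # PostgreSQL driver; the connection string is hypothetical

        conn = psycopg2.connect("dbname=openlibrary")
        cur = conn.cursor()
        batch = []
        with open("authors_dump.txt") as dump:
            for line in dump:
                key, rectype, payload = line.split(None, 2)  # "/a/OL2A /type/author {...}"
                if rectype != "/type/author":
                    continue
                record = json.loads(payload)
                batch.append((key, record.get("name")))
                if len(batch) >= 10000:  # commit in big batches, never row by row
                    cur.executemany("INSERT INTO authors (olid, name) VALUES (%s, %s)", batch)
                    conn.commit()
                    batch = []
        if batch:
            cur.executemany("INSERT INTO authors (olid, name) VALUES (%s, %s)", batch)
            conn.commit()

    For the 18 GB editions file, writing the extracted columns to a flat file and bulk-loading it with PostgreSQL's COPY command is faster still than batched INSERTs.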

  • Visualizing Undirected Graph That's Too Large for GraphViz?

    - by Gabe
    Hi Everyone, I was wondering if anyone has any advice for rendering an undirected graph with 178,000 nodes and 500,000 edges. I've tried Neato, Tulip, and Cytoscape. Neato doesn't even come remotely close, and Tulip and Cytoscape claim they can handle it but don't seem to be able to. (Tulip does nothing and Cytoscape claims to be working, and then just stops.) Does anyone have any ideas? I'd just like a vector format file (ps or pdf) with a remotely reasonable layout of the nodes. Thanks!

  • How to refactor large projects in Visual Studio

    - by Aaron
    I always run into a problem where my projects in Visual Studio (2008) become huge monstrosities and everything is generally thrown into a Web Application project. I know from checking out some open source stuff that they tend to have multiple projects within a solution, each with their own responsibilities. Does anyone have any advice for how to refactor this out? What should be in a separate project vs. part of the web project? Can you point me to any reference materials on the subject, or is it just something you become accustomed to with time?

  • Working with a large data object between ruby processes

    - by Gdeglin
    I have a Ruby hash that reaches approximately 10 megabytes if written to a file using Marshal.dump. After gzip compression it is approximately 500 kilobytes. Iterating through and altering this hash is very fast in Ruby (fractions of a millisecond). Even copying it is extremely fast. The problem is that I need to share the data in this hash between Ruby on Rails processes. In order to do this using the Rails cache (file_store or memcached) I need to Marshal.dump the hash first; however, this incurs a 1000 millisecond delay when serializing the hash and a 400 millisecond delay when deserializing it. Ideally I would want to be able to save and load this hash from each process in under 100 milliseconds.

    One idea is to spawn a new Ruby process to hold this hash and provide an API for the other processes to modify or process the data within it, but I want to avoid doing this unless I'm certain that there are no other ways to share this object quickly. Is there a way I can more directly share this hash between processes without needing to serialize or deserialize it?

    Here is the code I'm using to generate a hash similar to the one I'm working with:

        @a = []
        0.upto(500) do |r|
          @a[r] = []
          0.upto(10_000) do |c|
            if rand(10) == 0
              @a[r][c] = 1 # 10% chance of being 1
            else
              @a[r][c] = 0
            end
          end
        end

        @c = Marshal.dump(@a) # 1000 milliseconds
        Marshal.load(@c)      # 400 milliseconds

    Update: Since my original question did not receive many responses, I'm assuming there's no solution as easy as I would have hoped. Presently I'm considering two options:

        1. Create a Sinatra application to store this hash, with an API to modify/access it.
        2. Create a C application to do the same as #1, but a lot faster.

    The scope of my problem has increased such that the hash may be larger than my original example, so #2 may be necessary. But I have no idea where to start in terms of writing a C application that exposes an appropriate API. A good walkthrough of how best to implement #1 or #2 may receive best answer credit.

  • Slow select when inserting large amounts of data (MYSQL)

    - by siannopollo
    I have a process that imports a lot of data (950k rows) using inserts that insert 500 rows at a time. The process generally takes about 12 hours, which isn't too bad. Normally doing a query on the table is pretty quick (under 1 second) as I've put (what I think to be) the proper indexes in place. The problem I'm having is trying to run a query when the import process is running. It is making the query take almost 2 minutes! What can I do to make these two things not compete for resources (or whatever)? I've looked into "insert delayed" but not sure I want to change the table to MyISAM. Thanks for any help!

  • very large image manipulation and tiling

    - by Mohammad
    I need software, a program (Java), or a method for tiling very large images (more than 140 MB). I have used ImageMagick's convert tool, Photoshop, CorelDRAW, and MATLAB (on Windows), but I have problems with the amount of memory: there is not enough, ImageMagick is very slow, and the results are not desirable. I don't know how to load only a small part of the image from the hard disk into RAM, without loading the whole image from disk.
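
    Tools built around streaming I/O can tile an image without ever decoding it whole. As one possible approach, here is a sketch with pyvips, the Python binding for libvips (the file names are placeholders): "sequential" access reads the source in strips, so peak memory stays small, and dzsave writes out a complete tile pyramid.

        import pyvips

        # sequential access streams strips from disk instead of loading the full image
        image = pyvips.Image.new_from_file("huge_input.tif", access="sequential")
        image.dzsave("tiles_out")  # writes tiles_out.dzi plus a folder of image tiles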

  • Large file uploads from web pages

    - by jerrygarciuh
    Hi folks, I code primarily in PHP and Perl. I have a client who is insisting on seeking video submissions (any encoding) from the public via one of their pages rather than letting YouTube do its job. Server in question is a virtual machine and I can adjust ini settings for max post, max upload size etc as needed. My initial thought is to use a Flash based uploader with PHP on the back end but I wondered if someone might have useful advice and experience on the subject? Peace JG

  • Large scale storage for incrementally-appended documents?

    - by Ben Dilts
    I need to store hundreds of thousands (right now, potentially many millions) of documents that start out empty and are appended to frequently, but never otherwise updated or deleted. These documents are not interrelated in any way, and just need to be accessed by some unique ID. Read accesses fetch some subset of the document, which almost always starts midway through at some indexed location (e.g. "document #4324319, save #53 to the end"). These documents start very small, at several KB. They typically reach a final size around 500KB, but many reach 10MB or more.

    I'm currently using MySQL (InnoDB) to store these documents. Each of the incremental saves is just dumped into one big table with the document ID it belongs to, so reading part of a document looks like "select * from saves where document_id=14 and save_id >= 53 order by save_id", followed by manually concatenating it all together in code.

    Ideally, I'd like the storage solution to be easily horizontally scalable, with redundancy across servers (e.g. each document stored on at least 3 nodes) and easy recovery of crashed servers. I've looked at CouchDB and MongoDB as possible replacements for MySQL, but I'm not sure that either of them makes a whole lot of sense for this particular application, though I'm open to being convinced. Any input on a good storage solution?
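
    Whatever store replaces MySQL needs to preserve the property that makes this workload cheap: a composite key on (document_id, save_id) so that "from save N onward" is a single ordered range scan. A toy sqlite3 sketch of that access pattern, just to pin down the contract (all names are hypothetical):

        import sqlite3

        conn = sqlite3.connect(":memory:")  # stand-in for the real document store
        conn.execute("""CREATE TABLE saves (
            document_id INTEGER,
            save_id     INTEGER,
            body        BLOB,
            PRIMARY KEY (document_id, save_id))""")  # makes the range read an index scan

        def append_save(doc_id, save_id, chunk):
            # appends only; existing rows are never updated or deleted
            conn.execute("INSERT INTO saves VALUES (?, ?, ?)", (doc_id, save_id, chunk))

        def read_from(doc_id, first_save):
            rows = conn.execute(
                "SELECT body FROM saves WHERE document_id = ? AND save_id >= ? ORDER BY save_id",
                (doc_id, first_save))
            return b"".join(r[0] for r in rows)  # concatenate, as the poster does in code

        append_save(4324319, 53, b"...incremental content...")
        print(read_from(4324319, 53))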

  • Large number of simultaneous long-running operations in Qt

    - by Hostile Fork
    I have some long-running operations that number in the hundreds. At the moment they are each on their own thread. My main goal in using threads is not to speed these operations up; the more important thing in this case is that they appear to run simultaneously.

    I'm aware of cooperative multitasking and fibers. However, I'm trying to avoid anything that would require touching the code in the operations, e.g. peppering them with things like yieldToScheduler(). I also don't want to prescribe that these routines be stylized to emit queues of bite-sized task items... I want to treat them as black boxes.

    For the moment I can live with these downsides:

        - Maximum # of threads tends to be O(1000)
        - Cost per thread is O(1MB)

    To address the bad cache performance due to context switches, I did have the idea of a timer which would juggle the priorities such that only idealThreadCount() threads were ever at Normal priority, with all the rest set to Idle. This would let me widen the timeslices, which would mean fewer context switches and still be okay for my purposes.

    Question #1: Is that a good idea at all? One certain downside is that it won't work on Linux (the docs say no QThread::setPriority() there).

    Question #2: Any other ideas or approaches? Is QtConcurrent designed with this scenario in mind?

    (Some related reading: how-many-threads-does-it-take-to-make-them-a-bad-choice, many-threads-or-as-few-threads-as-possible, maximum-number-of-threads-per-process-in-linux)

  • Storing a large list in isolatedStorage on WP7

    - by Ra
    I'm storing a List with around 3,000 objects in IsolatedStorage using XML serialization. It takes too long to deserialize this, and I was wondering if you have any recommendations to speed it up. The time is tolerable when deserializing up to 500 objects, but it takes forever to deserialize 3,000. Does it take longer just on the emulator, and will it be faster on the phone? I did a whole bunch of searching, and one article said to use a binary stream reader, but I can't find it. Whether I store in binary or XML doesn't matter; I just want to persist the List. I don't want to look at asynchronous loading just yet...

  • How large should my recv buffer be when calling recv in the socket library

    - by Silmaril89
    Hi, I have a few questions about the socket library in C. Here is a snippet of code I'll refer to in my questions:

        char recv_buffer[3000];
        recv(socket, recv_buffer, 3000, 0);

    First, how do I decide how big to make recv_buffer? I'm using 3000, but it's arbitrary. Second, what happens if recv() receives a packet bigger than my recv_buffer? Third, how can I know if I have received the entire message without calling recv again and having it wait forever when there is nothing left to be received? And finally, is there a way I can make a buffer not have a fixed amount of space, so that I can keep adding to it without fear of running out of space? Maybe using strcat to concatenate the latest recv() response to the buffer? I know it's a lot of questions in one, but I would greatly appreciate any responses.
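
    For context on the underlying pattern: recv() never overruns the buffer you pass. On a TCP socket, a longer message is simply delivered across several calls, and a return value of 0 means the peer closed the connection, so the portable recipe is a loop that accumulates chunks until the message is complete. Sketched in Python for brevity; the same logic applies in C, where you would realloc a growing buffer instead of joining chunks:

        import socket

        def recv_all(sock: socket.socket) -> bytes:
            """Accumulate data until the sender closes the connection."""
            chunks = []
            while True:
                chunk = sock.recv(4096)  # at most 4096 bytes; the size is only an upper bound
                if not chunk:            # b"" signals that the peer closed the socket
                    break
                chunks.append(chunk)
            return b"".join(chunks)      # the assembled message, with no fixed size limit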

  • Encrypted AES key too large to Decrypt with RSA (Java)

    - by Petey B
    Hello, I am trying to make a program that encrypts data using AES, then encrypts the AES key with RSA, and then decrypts. However, once I encrypt the AES key it comes out to 128 bytes. RSA will only allow me to decrypt 117 bytes or less, so when I go to decrypt the AES key it throws an error. Relevant code:

        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(1024);
        KeyPair kpa = kpg.genKeyPair();
        pubKey = kpa.getPublic();
        privKey = kpa.getPrivate();
        updateText("Private Key: " + privKey + "\n\nPublic Key: " + pubKey);
        updateText("Encrypting " + infile);

        //Generate AES key
        KeyGenerator kgen = KeyGenerator.getInstance("AES");
        kgen.init(128); // 192/256
        SecretKey aeskey = kgen.generateKey();
        byte[] raw = aeskey.getEncoded();
        SecretKeySpec skeySpec = new SecretKeySpec(raw, "AES");

        updateText("Encrypting data with AES");
        //encrypt data with AES key
        Cipher aesCipher = Cipher.getInstance("AES");
        aesCipher.init(Cipher.ENCRYPT_MODE, skeySpec);
        SealedObject aesEncryptedData = new SealedObject(infile, aesCipher);

        updateText("Encrypting AES key with RSA");
        //encrypt AES key with RSA
        Cipher cipher = Cipher.getInstance("RSA");
        cipher.init(Cipher.ENCRYPT_MODE, pubKey);
        byte[] encryptedAesKey = cipher.doFinal(raw);

        updateText("Decrypting AES key with RSA. Encrypted AES key length: " + encryptedAesKey.length);
        //decrypt AES key with RSA
        Cipher decipher = Cipher.getInstance("RSA");
        decipher.init(Cipher.DECRYPT_MODE, privKey);
        byte[] decryptedRaw = cipher.doFinal(encryptedAesKey); //error thrown here because encryptedAesKey is 128 bytes
        SecretKeySpec decryptedSecKey = new SecretKeySpec(decryptedRaw, "AES");

        updateText("Decrypting data with AES");
        //decrypt data with AES key
        Cipher decipherAES = Cipher.getInstance("AES");
        decipherAES.init(Cipher.DECRYPT_MODE, decryptedSecKey);
        String decryptedText = (String) aesEncryptedData.getObject(decipherAES);
        updateText("Decrypted Text: " + decryptedText);

    Any idea on how to get around this?
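
    For a compact reference of the hybrid pattern being attempted here (encrypt the bulk data with a fresh AES key, RSA-encrypt only that small key, then reverse both steps with the matching decryption ciphers), here is a sketch in Python using the cryptography package. This is an illustration of the pattern, not a fix for the Java code above:

        import os
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import padding, rsa
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                            algorithm=hashes.SHA256(), label=None)

        aes_key = AESGCM.generate_key(bit_length=128)  # 16 bytes: far below the RSA input limit
        nonce = os.urandom(12)
        ciphertext = AESGCM(aes_key).encrypt(nonce, b"the plaintext document", None)
        wrapped_key = rsa_key.public_key().encrypt(aes_key, oaep)

        # decryption reverses both steps, using the private-key object this time
        unwrapped = rsa_key.decrypt(wrapped_key, oaep)
        assert AESGCM(unwrapped).decrypt(nonce, ciphertext, None) == b"the plaintext document"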

  • HTML table headers always visible at top of window when viewing a large table

    - by Craig McQueen
    I would like to be able to "tweak" an HTML table's presentation to add a single feature: when scrolling down through the page so that the table is on the screen but the header rows are off-screen, I would like the headers to remain visible at the top of the viewing area. This would be conceptually like the "freeze panes" feature in Excel. However, an HTML page might contain several tables in it and I only would want it to happen for the table that is currently in-view, only while it is in-view. Note: I've seen one solution where the table data area is made scrollable while the headers do not scroll. That's not the solution I'm looking for.

  • Large data processing in x86 C# gives System.OutOfMemory exception

    - by Cool
    I am processing XML coming from a server which contains both images and data, in one C# function (compiled 32-bit). When I try to parse this XML in memory, it throws a System.OutOfMemoryException. Is there any way to avoid this error? My guess is that the system cannot find a contiguous block of 50-100 MB of memory. (My PC has 8 GB of RAM and a quad-core CPU.)
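
    The generic escape from whole-document parsing is a streaming parser that visits one node at a time, so memory no longer depends on document size; in .NET that role is played by XmlReader. The same idea sketched in Python, since the technique is language-independent (the element and handler names are hypothetical):

        import xml.etree.ElementTree as ET

        def handle(elem):
            pass  # stand-in for real per-record processing

        # iterparse streams the file; clearing finished elements keeps memory flat
        for event, elem in ET.iterparse("big.xml", events=("end",)):
            if elem.tag == "record":  # hypothetical element of interest
                handle(elem)
                elem.clear()          # discard the processed subtree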

  • Status of Data in Rollback of Large Transaction in SQL Server

    - by Lloyd Banks
    I have a data warehousing procedure that downloads and replaces dozens of tables from a linked server to a local database. Every once in a while, the code gets stuck on one of the tables because the table on the linked server is in a state of transition. I am under the assumption that since the entire procedure is considered one transaction commit, when the procedure gets stuck, none of the changes made by the procedure so far would have committed. But the opposite seems to be true: tables that were "downloaded" before the procedure got stuck have been updated with today's versions on the local server. Shouldn't SQL Server wait for the entire procedure to finish before the changes are durable?

        CREATE PROCEDURE MYIMPORT
        AS
        BEGIN
            SET NOCOUNT ON

            IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TABLE1')
                DROP TABLE TABLE1
            SELECT COLUMN1, COLUMN2, COLUMN3
            INTO TABLE1
            FROM OPENQUERY(MYLINK, 'SELECT COLUMN1, COLUMN2, COLUMN3 FROM TABLE1')

            IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TABLE2')
                DROP TABLE TABLE2
            SELECT COLUMN1, COLUMN2, COLUMN3
            INTO TABLE2
            FROM OPENQUERY(MYLINK, 'SELECT COLUMN1, COLUMN2, COLUMN3 FROM TABLE2')

            --IF THE PROCEDURE GETS STUCK HERE, THEN CHANGES TO TABLE1 WOULD HAVE BEEN MADE
            --ON THE LOCAL SERVER WHILE NO CHANGES WOULD HAVE BEEN MADE TO TABLE3

            IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TABLE3')
                DROP TABLE TABLE3
            SELECT COLUMN1, COLUMN2, COLUMN3
            INTO TABLE3
            FROM OPENQUERY(MYLINK, 'SELECT COLUMN1, COLUMN2, COLUMN3 FROM TABLE3')
        END

  • ASP.NET: Large number of Session_Start with same session id

    - by Jaap
    I'm running an ASP.NET website on my development box (.NET 2.0 on Vista/IIS7). The Session_Start method in global.asax.cs logs every call to a file (log4net); the Session_End method also logs every call. I'm using InProc session state, and I set the session timeout to 5 mins (to avoid waiting for 20 mins).

    I hit the website and wait for 5 minutes until I see the Session_End logging. Then I hit F5 on the website. The browser still has the session cookie and sends it to the server. Session_Start is called and a new session is created using the same session id (btw: I need this to be the same session id, because it is used to store data in the database). Result: every time I hit F5 on a previously ended session, the Session_Start method is called. When I open a different browser, the Session_Start method is called just once; then, after 5 minutes, Session_End fires and each F5 causes the Session_Start method to execute again.

    Can anyone explain why this is happening?

    Update: After the session timeout, all subsequent requests trigger both a Session_Start and an immediate Session_End. So in the end my question is: why are the sessions on these subsequent requests closed immediately?

        2010-02-09 14:49:08,754 INFO Global.asax[7486] [(null)] - Session started. SID=nzponumvf1hbaniverffp4mq host=127.0.0.1
        2010-02-09 14:49:08,754 INFO Global.asax[7486] [nzponumvf1hbaniverffp4mq] - Request start: GET http://localhost:80/js/settings.js
        2010-02-09 14:49:08,756 INFO Global.asax[7486] [(null)] - Session ended. SID=nzponumvf1hbaniverffp4mq
        2010-02-09 14:49:08,760 INFO Global.asax[7486] [(null)] - Session started. SID=nzponumvf1hbaniverffp4mq host=127.0.0.1
        2010-02-09 14:49:08,760 INFO Global.asax[7486] [nzponumvf1hbaniverffp4mq] - Request start: GET /css/package.aspx?name=core
        2010-02-09 14:49:08,761 INFO Global.asax[7486] [(null)] - Session ended. SID=nzponumvf1hbaniverffp4mq
        2010-02-09 14:49:08,762 INFO Global.asax[7486] [(null)] - Session started. SID=nzponumvf1hbaniverffp4mq host=127.0.0.1
        2010-02-09 14:49:08,762 INFO Global.asax[7486] [nzponumvf1hbaniverffp4mq] - Request start: GET /js/package.aspx?name=all
        2010-02-09 14:49:08,763 INFO Global.asax[7486] [(null)] - Session ended. SID=nzponumvf1hbaniverffp4mq
        2010-02-09 14:49:08,763 INFO Global.asax[7486] [(null)] - Session started. SID=nzponumvf1hbaniverffp4mq host=127.0.0.1
        2010-02-09 14:49:08,763 INFO Global.asax[7486] [nzponumvf1hbaniverffp4mq] - Request start: GET /css/package.aspx?name=rest
        2010-02-09 14:49:08,764 INFO Global.asax[7486] [(null)] - Session ended. SID=nzponumvf1hbaniverffp4mq
        2010-02-09 14:49:08,764 INFO Global.asax[7486] [(null)] - Session started. SID=nzponumvf1hbaniverffp4mq host=127.0.0.1
        2010-02-09 14:49:08,765 INFO Global.asax[7486] [nzponumvf1hbaniverffp4mq] - Request start: GET /css/package.aspx?name=vacation
        2010-02-09 14:49:08,765 INFO Global.asax[7486] [(null)] - Session ended. SID=nzponumvf1hbaniverffp4mq

    The relevant web.config section:

        <system.web>
            <compilation debug="true" />
            <sessionState timeout="2" regenerateExpiredSessionId="false" />
        </system.web>

  • RichFaces rich:insert takes a long time to output large files

    - by Mark Lewis
    Hello, I'm using a RichFaces <rich:insert> like this:

        <rich:panel header="my head">
            <a4j:outputPanel ajaxRendered="true">
                <rich:insert src="#{MyBacking.myPath}" highlight="groovy" />
            </a4j:outputPanel>
        </rich:panel>

    If I have a 60k file to output, it takes 23 seconds. I've got a requirement to output the contents of some larger files than that, and obviously the larger the file, the longer the wait for content. The recommendation in the answer to another related question is to introduce paging. I will, but the question remains: why does it take so long to output 60k of text using JSF/RichFaces? (The file is read off a local disk on a Windows XP SP2 PC; I can see from the log that the data has already been written to disk from the network.) Other scripting languages appear to be faster than this. Is it something to do with the JSF lifecycle having to handle the text, maybe? Thanks

  • Python - CSV: Large file with rows of different lengths

    - by dassouki
    In short, I have a 20,000,000-line CSV file with rows of different lengths. This is due to archaic data loggers and proprietary formats; we get the end result as a CSV file in the following format. My goal is to insert this file into a Postgres database. How can I do the following:

        1. Keep the first 8 columns and my last 2 columns, to have a consistent CSV file
        2. Add a column to the file

    Sample rows:

        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0 img_id.jpg, -50
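
    Since only the middle run of zero columns varies, slicing each row from both ends normalizes the file in a single streaming pass, and the extra column can be appended at the same time. A sketch where the file names and the added value are placeholders:

        import csv

        with open("logger_dump.csv", newline="") as src, \
             open("normalized.csv", "w", newline="") as dst:
            reader = csv.reader(src, skipinitialspace=True)
            writer = csv.writer(dst)
            for row in reader:
                if len(row) < 10:
                    continue  # guard: too short for 8 leading + 2 trailing fields
                fixed = row[:8] + row[-2:]  # keep the first 8 and last 2 columns
                fixed.append("batch_42")    # the new column; the value is hypothetical
                writer.writerow(fixed)

    The normalized file can then be bulk-loaded with PostgreSQL's COPY command, which is far faster than row-by-row INSERTs for 20 million rows.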

  • Oracle performance problems with large batch of XSL operations

    - by FrustratedWithFormsDesigner
    I have a system that performs many XSL transformations on XMLType objects. The problem is that the system gradually slows down over time and sometimes crashes when it runs out of memory. The slowdown (and possibly the memory crash) seems to be around the dbms_xslprocessor.processXSL function call, which gradually takes longer and longer to complete. The code looks like this:

        v_doc          dbms_xmldom.DOMDocument;
        v_transformer  dbms_xmldom.DOMDocument;
        v_XSLprocessor dbms_xslprocessor.Processor;
        v_stylesheet   dbms_xslprocessor.Stylesheet;
        v_clob         clob;
        ...
        transformer := PKG_STUFF.getXSL();
        v_transformer := dbms_xmldom.newDOMDocument(transformer);
        v_XSLprocessor := Dbms_Xslprocessor.newProcessor;
        v_stylesheet := dbms_xslprocessor.newStylesheet(v_transformer, '');
        ...
        for source_data in (select id from source_tbl) loop
            begin
                v_doc := PKG_CONVERT.convert(in_id => source_data.id);

                --start time of operation
                v_begin_op_time := dbms_utility.get_time;

                --reset the CLOB
                v_clob := ' ';

                --Apply XSL Transform
                dbms_xslprocessor.processXSL(p => v_XSLprocessor, ss => v_stylesheet,
                                             xmldoc => v_Doc, cl => v_clob);
                v_doc := dbms_xmldom.newDOMDocument(XMLType(v_clob));

                --end time
                v_end_op_time := dbms_utility.get_time;

                --calculate duration
                v_time_taken := (((v_end_op_time - v_begin_op_time)));

                --log the duration
                PKG_LOG.log_message('Time taken to transform XML: ' || v_time_taken);
                ...
                ...
                DBMS_XMLDOM.freeDocument(v_Doc);
                DBMS_LOB.freetemporary(lob_loc => v_clob);
        end loop;

    The time taken to transform the XML is slowly creeping up (I suppose it might also be the call to dbms_xmldom.newDOMDocument, but I had thought that to be fairly straightforward). I have no idea why.... :( (Oracle 10g)

  • Method for finding memory leak in large Java heap dumps

    - by Rickard von Essen
    I have to find a memory leak in a Java application. I have some experience with this but would like advice on a methodology/strategy. Any reference and advice is welcome. About our situation:

        - Heap dumps are larger than 1 GB.
        - We have heap dumps from 5 occasions.
        - We don't have any test case to provoke this. It only happens in the (massive) system test environment, after at least a week's usage.
        - The system is built on an internally developed legacy framework with so many design flaws that it is impossible to count them all. Nobody understands the framework in depth. It has been transferred to one guy in India who barely keeps up with answering e-mails.
        - We have done snapshot heap dumps over time and concluded that there is no single component increasing over time; everything grows slowly.
        - The above points us in the direction that it is the framework's homegrown ORM system that increases its usage without limits. (This system maps objects to files?! So not really an ORM.)

    Question: What methodology has helped you succeed in hunting down leaks in an enterprise-scale application?
