Search Results

Search found 12333 results on 494 pages for 'memory leaks'.


  • Python SQLite FTS3 alternatives?

    - by Mike Cialowicz
    Are there any good alternatives to SQLite + FTS3 for Python? I'm iterating over a series of text documents and would like to categorize them according to some text queries. For example, I might want to know if a document mentions the words "rating" or "upgraded" within three words of "buy". The FTS3 syntax for this query is the following:

        (rating OR upgraded) NEAR/3 buy

    That's all well and good, but if I use FTS3, this operation seems rather expensive. The process goes something like this:

        # create an SQLite3 db in memory
        conn = sqlite3.connect(':memory:')
        c = conn.cursor()
        c.execute('CREATE VIRTUAL TABLE fts USING FTS3(content TEXT)')
        conn.commit()

    Then, for each document, do something like this:

        # insert the document text into the fts table, so I can run a query
        # (note: the parameters argument must be a sequence, hence (content,))
        c.execute('insert into fts(content) values (?)', (content,))
        conn.commit()
        # execute my FTS query here, look at the results, etc
        # remove the document text from the fts table before working on the next document
        c.execute('delete from fts')
        conn.commit()

    This seems rather expensive to me. The other problem I have with SQLite FTS is that it doesn't appear to work with Python 2.5.4: the 'CREATE VIRTUAL TABLE' syntax is unrecognized. This means that I'd have to upgrade to Python 2.6, which means re-testing numerous existing scripts and programs to make sure they work under 2.6. Is there a better way? Perhaps a different library? Something faster? Thank you.
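    If exact FTS semantics aren't required, one alternative is to approximate the NEAR/3 query in pure Python by tokenizing each document once and checking token distances. A minimal sketch follows; the tokenizer and the distance rule are assumptions and only approximate FTS3's actual behaviour:

        import re

        def mentions_near(text, left_words, right_word, distance=3):
            # Naive tokenizer; FTS3's own tokenizer differs, so this only
            # approximates (rating OR upgraded) NEAR/3 buy.
            tokens = re.findall(r"[a-z0-9']+", text.lower())
            left_pos = [i for i, t in enumerate(tokens) if t in left_words]
            right_pos = [i for i, t in enumerate(tokens) if t == right_word]
            # at most `distance` intervening tokens, on either side
            return any(abs(l - r) - 1 <= distance
                       for l in left_pos for r in right_pos)

        doc = "Analysts upgraded the stock to a buy rating this morning."
        print(mentions_near(doc, {"rating", "upgraded"}, "buy"))  # True

    This avoids the per-document insert/delete round trips entirely, at the cost of re-implementing (a subset of) the query language.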

  • x86_64 printf segfault after brk call

    - by gmb11
    While I was trying to use brk (int 0x80 with 45 in %rax) to implement a simple memory manager program in assembly and print the blocks in order, I kept getting a segfault. After a while I could only reproduce the error, but I have no idea why this is happening:

        .section .data
        helloworld:
            .ascii "hello world"
        .section .text
        .globl _start
        _start:
            push %rbp
            mov %rsp, %rbp
            movq $45, %rax
            movq $0, %rbx       # brk(0) should just return the current break of the program
            int $0x80
            #incq %rax          # segfault
            #addq $1, %rax      # segfault
            movq $0, %rax       # works fine?
            #addq $1, %rax      # segfault again?
            movq $helloworld, %rdi
            call printf
            movq $1, %rax       # exit
            int $0x80

    In the example here, if the commented lines are uncommented, I get a segfault, but some commands (like the movq $0, %rax) work just fine. In my other program, the first couple of printf calls work, but the third crashes... Looking at other questions, I read that printf sometimes allocates some memory, and that brk shouldn't be used, because in this case it corrupts the heap or something... I'm very confused; does anyone know something about that?

    EDIT: I've just found out that for printf to work you need %rax=0.

  • Submitting R jobs using PBS

    - by Tony
    I am submitting a job using qsub that runs parallelized R. My intention is to have the R program running on 4 cores rather than 8. Here are some of my settings in the PBS file:

        #PBS -l nodes=1:ppn=4
        ....
        time R --no-save < program1.R > program1.log

    I am issuing the command "ta job_id" and I'm seeing that 4 cores are listed. However, the job occupies a large amount of memory (31944900k used vs 32949628k total). If I were to use 8 cores, the job would hang due to the memory limitation.

        top - 21:03:53 up 77 days, 11:54, 0 users, load average: 3.99, 3.75, 3.37
        Tasks: 207 total, 5 running, 202 sleeping, 0 stopped, 0 zombie
        Cpu(s): 30.4%us, 1.6%sy, 0.0%ni, 66.8%id, 0.0%wa, 0.0%hi, 1.2%si, 0.0%st
        Mem:  32949628k total, 31944900k used, 1004728k free, 269812k buffers
        Swap:  2097136k total,     8360k used, 2088776k free, 6030856k cached

    Here is a snapshot when issuing the command ta job_id:

        PID  USER PR NI VIRT  RES  SHR  S %CPU %MEM TIME+   COMMAND
        1794 x    25  0 6247m 6.0g 1780 R 99.2 19.1 8:14.37 R
        1795 x    25  0 6332m 6.1g 1780 R 99.2 19.4 8:14.37 R
        1796 x    25  0 6242m 6.0g 1784 R 99.2 19.1 8:14.37 R
        1797 x    25  0 6322m 6.1g 1780 R 99.2 19.4 8:14.33 R
        1714 x    18  0 65932 1504 1248 S  0.0  0.0 0:00.00 bash
        1761 x    18  0 63840 1244 1052 S  0.0  0.0 0:00.00 20016.hpc
        1783 x    18  0  133m 7096 1128 S  0.0  0.0 0:00.00 python
        1786 x    18  0  137m  46m 2688 S  0.0  0.1 0:02.06 R

    How can I prevent other users from using the other 4 cores? I'd like to somehow make my job appear to use 8 cores, with 4 of them idling. Could anyone kindly help me out on this? Can this be solved using PBS? Many thanks.

  • OutOfMemoryException

    - by Andrew
    I have an application that is pretty memory hungry. It holds a large amount of data in some big arrays. I have recently been noticing the occasional OutOfMemoryException. These OutOfMemoryExceptions are occurring long before my application (ASP.NET) has used up the 800MB available to it. I have tracked the issue down to the area of code where the array is resized. The array contains a structure that is 74 bytes in size. (I know that you shouldn't create structs that are bigger than 16 bytes, but this application is a port from a VB6 application.) I have tried changing the struct to a class and this appears to have fixed the problem for now. I think the reason changing to a class solves the problem is that when using a struct and the array is resized, a contiguous segment of memory large enough to store the new array (e.g. (currentArraySize + increaseBySize) * 74 bytes) needs to be reserved, and sometimes it cannot be found. This leads to the OutOfMemoryException. This isn't the case with a class, as each element of the array only needs 8 bytes to store a pointer to the object. Is my thinking correct here?
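    For a feel of the numbers involved, a quick back-of-the-envelope calculation; the array sizes below are hypothetical, only the 74-byte and 8-byte figures come from the question:

        # Hypothetical resize: 5M elements growing by 1M.
        current, growth = 5000000, 1000000
        struct_size, ref_size = 74, 8  # bytes; 8-byte references on x64

        # Array of 74-byte structs: one contiguous block holds every element.
        struct_block = (current + growth) * struct_size
        # Array of class references: the contiguous block holds only pointers;
        # the 74-byte objects themselves can be scattered across the GC heap.
        ref_block = (current + growth) * ref_size

        print("struct array block:    %.1f MB contiguous" % (struct_block / 2.0**20))
        print("reference array block: %.1f MB contiguous" % (ref_block / 2.0**20))

    And since a resize copies the old array into the new one, both blocks exist at the same time during the copy, so address-space fragmentation bites even earlier than the raw sizes suggest.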

  • DDK/WDM development problem ... driver won't load on x64 Windows platform

    - by user295975
    Hi there! I am a beginner in the DDK/WDM driver development field. I have a task which involves porting a virtual device driver from x86 to x64 (Intel). I got the source code, modified it a bit, and compiled it successfully with the DDK (build environments). But when I tried to load it on an ia64 Windows 7 machine, it didn't load. Then I tried some simple example device drivers from http://www.codeproject.com/KB/system/driverdev.aspx and from other links, but still the same problem. I heard on a forum that some of the libraries you link against are not compatible with the new machines, and it was suggested to link against other, similar libraries... but that still didn't work. When I build, I use the "-cefw" command line parameters as suggested. I do not have an *.inf file associated with the driver, but I'm copying it into system32/drivers and using WinObj to see if it's loaded into memory after the next restart. I also tried this program (http://www.codeproject.com/KB/system/tdriver.aspx) to load the driver into memory, but that didn't work for me either. Please help me... I'm stuck on this and my deadline has already passed. I feel like I'm driving myself nuts trying to discover what I am doing wrong.

  • Rijndael managed: plaintext length detection

    - by sheepsimulator
    I am spending some time learning how to use the RijndaelManaged library in .NET, and developed the following function to test encrypting text, with slight modifications from the MSDN library:

        Function encryptBytesToBytes_AES(ByVal plainText As Byte(), ByVal Key() As Byte, ByVal IV() As Byte) As Byte()
            ' Check arguments.
            If plainText Is Nothing OrElse plainText.Length <= 0 Then
                Throw New ArgumentNullException("plainText")
            End If
            If Key Is Nothing OrElse Key.Length <= 0 Then
                Throw New ArgumentNullException("Key")
            End If
            If IV Is Nothing OrElse IV.Length <= 0 Then
                Throw New ArgumentNullException("IV")
            End If
            ' Declare the RijndaelManaged object used to encrypt the data.
            Dim aesAlg As RijndaelManaged = Nothing
            ' Declare the stream used to encrypt to an in-memory array of bytes.
            Dim msEncrypt As MemoryStream = Nothing
            Try
                ' Create a RijndaelManaged object with the specified key and IV.
                aesAlg = New RijndaelManaged()
                aesAlg.BlockSize = 128
                aesAlg.KeySize = 128
                aesAlg.Mode = CipherMode.ECB
                aesAlg.Padding = PaddingMode.None
                aesAlg.Key = Key
                aesAlg.IV = IV
                ' Create an encryptor to perform the stream transform.
                Dim encryptor As ICryptoTransform = aesAlg.CreateEncryptor(aesAlg.Key, aesAlg.IV)
                ' Create the streams used for encryption.
                msEncrypt = New MemoryStream()
                Using csEncrypt As New CryptoStream(msEncrypt, encryptor, CryptoStreamMode.Write)
                    Using swEncrypt As New StreamWriter(csEncrypt)
                        ' Write all data to the stream.
                        swEncrypt.Write(plainText)
                    End Using
                End Using
            Finally
                ' Clear the RijndaelManaged object.
                If Not (aesAlg Is Nothing) Then
                    aesAlg.Clear()
                End If
            End Try
            ' Return the encrypted bytes from the memory stream.
            Return msEncrypt.ToArray()
        End Function

    Here's the actual code I am calling encryptBytesToBytes_AES() with:

        Private Sub btnEncrypt_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnEncrypt.Click
            Dim bZeroKey As Byte() = {&H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0}
            PrintBytesToRTF(encryptBytesToBytes_AES(bZeroKey, bZeroKey, bZeroKey))
        End Sub

    However, I get an exception thrown on swEncrypt.Write(plainText) stating that the 'Length of the data to encrypt is invalid.' But I know that the size of my key, IV, and plaintext is 16 bytes == 128 bits == aesAlg.BlockSize. Why is it throwing this exception? Is it because the StreamWriter is trying to make a String (ostensibly with some encoding) and it doesn't like &H0 as a value?

  • Help decrypting in ColdFusion passwords created in .NET

    - by KnightStalker
    I have a SQL db storing passwords that were encrypted through a .NET application, which I need to decrypt through a ColdFusion app. I just can't seem to get things set up properly for the CF decryption to work. Any help would be appreciated. Thanks. The .NET decryption code is:

        public string Decrypt(string input)
        {
            try
            {
                DESCryptoServiceProvider des = new DESCryptoServiceProvider();
                int ZeroBasedByteCount = (input.Length / 2);

                // Put the input string into the byte array
                byte[] inputByteArray = new byte[ZeroBasedByteCount];
                int i;
                int x;
                for (x = 0; x < ZeroBasedByteCount; x++)
                {
                    i = (Convert.ToInt32(input.Substring(x * 2, 2), 16));
                    inputByteArray[x] = (byte)i;
                }

                // Create the crypto objects
                des.Key = ASCIIEncoding.ASCII.GetBytes(key);
                des.IV = ASCIIEncoding.ASCII.GetBytes(key);
                MemoryStream ms = new MemoryStream();
                CryptoStream cs = new CryptoStream(ms, des.CreateDecryptor(), CryptoStreamMode.Write);

                // Flush the data through the crypto stream into the memory stream
                cs.Write(inputByteArray, 0, inputByteArray.Length);
                cs.FlushFinalBlock();

                // Get the decrypted data back from the memory stream
                StringBuilder ret = new StringBuilder();
                foreach (byte b in ms.ToArray())
                {
                    ret.Append((char)b);
                }
                return ret.ToString();
            }
            catch (Exception ex)
            {
                throw (ex);
            }
        }

  • Linking the Linker script file to source code

    - by user304097
    Hello, I am new to the GNU toolchain. I have a C source file which contains some structures and variables that I need to place at particular locations. So I have written a linker script file and used __attribute__((section("SECTION"))) at the variable declarations in the C source. I am using a GNU compiler (Cygwin) to compile the source code and creating a .hex file using the objcopy option, but I cannot figure out how to pass my linker script at compilation so the variables are relocated accordingly. I am attaching the linker script file and the C source file for reference. Please help me link the linker script file to my source code while creating the .hex file using GNU.

        /* linker script file */
        /* defining memory regions */
        MEMORY
        {
            base_table_ram : org = 0x00700000, len = 0x00000100 /* base table area for BASE table */
            mem2           : org = 0x00800200, len = 0x00000300 /* other structure variables */
        }

        /* Sections directive definitions */
        SECTIONS
        {
            BASE_TABLE : { } > base_table_ram
            GROUP :
            {
                .text : { } { *(SEG_HEADER) }
                .data : { } { *(SEG_HEADER) }
                .bss  : { } { *(SEG_HEADER) }
            } > mem2
        }

    C source code:

        const UINT8 un8_Offset_1 __attribute__((section("BASE_TABLE"))) = 0x1A;
        const UINT8 un8_Offset_2 __attribute__((section("BASE_TABLE"))) = 0x2A;
        const UINT8 un8_Offset_3 __attribute__((section("BASE_TABLE"))) = 0x3A;
        const UINT8 un8_Offset_4 __attribute__((section("BASE_TABLE"))) = 0x4A;
        const UINT8 un8_Offset_5 __attribute__((section("BASE_TABLE"))) = 0x5A;
        const UINT8 un8_Offset_6 __attribute__((section("SEG_HEADER"))) = 0x6A;

    My intention is to place the variables of section "BASE_TABLE" at the address defined in the linker script file, and the remaining variables in the "SEG_HEADER" section defined there. But after compilation, when I look into the .hex file, the different section variables are located in different hex records at an address of 0x00, not the one given in the linker script file. Please help me in linking the linker script file to the source code. Are there any command line options to link the linker script? If so, please provide me with info on how to use them. Thanks in advance, SureshDN.

  • Technically, why are processes in Erlang more efficient than OS threads?

    - by Jonas
    Spawning processes seems to be much more efficient in Erlang than starting new threads using the OS (i.e. via Java or C). Erlang spawns thousands or millions of processes in a short period of time. I know that they are different, since Erlang does per-process GC, and Erlang processes are implemented the same way on all platforms, which isn't the case with OS threads. But I don't fully understand, technically, why Erlang processes are so much more efficient and have a much smaller memory footprint per process. Both the OS and the Erlang VM have to do scheduling, context switches, and keep track of the values in the registers and so on... Simply put, why aren't OS threads implemented the same way as processes in Erlang? Do they have to support something more? And why do they need a bigger memory footprint? Technically, why are processes in Erlang more efficient than OS threads? And why can't threads in the OS be implemented and managed in the same efficient way?

  • How to pass binary data between two apps using Content Provider?

    - by Viktor
    I need to pass some binary data between two Android apps using a Content Provider (sharedUserId is not an option). I would prefer not to pass the data (a savegame stored as a file, small in size, < 20k) as a file (i.e. by overriding openFile()), since this would necessitate some complicated temp-file scheme to cope with concurrent content provider accesses and a running game. I would like to read the file into memory under a mutex lock and then pass the binary array in the simplest way possible. How do I do this? It seems creating a file in memory is not a possibility due to the return type of openFile(). query() needs to return a Cursor. Using MatrixCursor is not possible since it applies toString() to all stored objects when reading it. What do I need to do? Implement a custom Cursor? This class has 30 abstract methods. Do I read the file, put it in a SQLite db, and return the cursor? The complexity of this seemingly simple task is mind-boggling.

  • SQLite/Fluent NHibernate integration test harness initialization not repeatable after large data set

    - by Mark Rogers
    In one of my main data integration test harnesses I create and use Fluent NHibernate's SingleConnectionSessionSourceForSQLiteInMemoryTesting to get a fresh session for each test. After each test, I close the connection, session, and session factory, and throw out the nested StructureMap container they came from. This works for almost any simple data integration test I can think of, including ones that utilize Fluent NHibernate's PersistenceSpecification object. When I test the application's lengthy database bootstrapping process, which creates and saves thousands of domain objects, I start seeing issues. It's not that the setup and teardown fail; in fact, the test successfully bootstraps the in-memory database just as the application would bootstrap the real database in the production environment. The problem occurs when the database bootstrapping occurs a second time on a new in-memory database, with a new session and session factory. The error is:

        NHibernate.StaleStateException : Unexpected row count: 0; expected: 1

    The row count is indeed unexpected; the row that the application under test is looking for should be in the session. You see, it's not that any data from the last integration test is sticking around, it's that for some reason the session just stops working mid-database-bootstrap. And I've looked everywhere for a place I might be holding on to an old session and I can't find one. I've searched through the code for static singleton objects, but there are none anywhere near the code in question. I have a couple of StructureMap InstanceScope singletons, but they are getting thrown out with each nested container that is lost after every test teardown. I've tried every possible variation on disposing and closing every object involved with each test teardown, and it still fails on this lengthy database bootstrap. But non-bootstrap-related database tests appear to work fine. I'm starting to run out of options and may have to surrender lengthy database integration tests in favor of WatiN-based acceptance tests. Can anyone give me any clue about how I can figure out why some of my SingleConnectionSessionSourceForSQLiteInMemoryTesting sessions aren't repeatable? Any advice at all about how to make an NHibernate SQLite database integration test harness repeatable?

  • using indexer to retrieve Linq to SQL object from datastore

    - by fearofawhackplanet
    class UserDatastore : IUserDatastore
    {
        ...
        public IUser this[Guid userId]
        {
            get
            {
                User user = (from u in _dataContext.Users
                             where u.Id == userId
                             select u).FirstOrDefault();
                return user;
            }
        }
        ...
    }

    One of the developers on our team is arguing that an indexer in the above situation is not appropriate and that a GetUser(Guid id) method should be preferred. The arguments being that:

    1) We aren't indexing into an in-memory collection; the indexer is basically performing a hidden SQL query
    2) Using a Guid in an indexer is bad (FxCop flagged this also)
    3) Returning null from an indexer isn't normal behaviour
    4) An API user generally wouldn't expect any of this behaviour

    I agree to an extent with (most of) these points. But I'm also inclined to argue that one of the characteristics of Linq is to abstract the database access to make it appear that you're simply working with a bunch of collections, even though the lazy evaluation paradigm means those collections aren't evaluated until you run a query over them. It doesn't seem inconsistent to me to access the datastore in the same manner as if it were a concrete in-memory collection. Also, bearing in mind this is an inherited codebase which uses this pattern extensively and consistently, is it worth the refactoring? I accept that it might have been better to use a Get method from the start, but I'm not yet convinced that it's completely incorrect to be using an indexer. I'd be interested to hear all opinions, thanks.
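    The same design tension shows up in any language with indexer syntax. A hypothetical Python sketch of the two APIs side by side (class and column names invented for illustration, using a DB-API-style connection):

        class UserDatastore:
            """Hypothetical datastore contrasting indexer vs. explicit getter."""

            def __init__(self, connection):
                self._conn = connection

            # Indexer style: looks like a dict lookup, but hides a query.
            def __getitem__(self, user_id):
                row = self._conn.execute(
                    "SELECT id, name FROM users WHERE id = ?", (user_id,)
                ).fetchone()
                return row  # may be None; dict-like callers expect KeyError

            # Explicit style: the name advertises that work is being done.
            def get_user(self, user_id):
                row = self._conn.execute(
                    "SELECT id, name FROM users WHERE id = ?", (user_id,)
                ).fetchone()
                return row  # returning None here surprises no one

    An indexer that can return null/None instead of raising quietly breaks the "in-memory collection" illusion it is trying to create, which is essentially argument 3 above.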

  • Should I release NSString before assigning a new value to it?

    - by Elliot Chen
    Hi, please give me some suggestions about how to change an NSString variable. In my class, I declare a member variable:

        NSString *m_movieName;
        ...
        @property(nonatomic, retain) NSString *m_movieName;

    In the viewDidLoad method, I assign a default name to this variable:

        - (void)viewDidLoad {
            NSString *s1 = [[NSString alloc] initWithFormat:@"Forrest Gump"];
            self.m_movieName = s1;
            ...
            [s1 release];
            [super viewDidLoad];
        }

    In some function, I want to give a new name to this variable, so I did this:

        - (void)SomeFunc {
            NSString *s2 = [[NSString alloc] initWithFormat:@"Brave Heart"];
            //[self.m_movieName release]; // ??????? Should I release here?
            self.m_movieName = s2;
            [s2 release];
        }

    I know an NSString * variable is just a pointer to an allocated memory block, and assignment through the retain property will increment that memory block's retain count. For my situation, should I release m_movieName before assigning a new value to it? If I do not release it (via [self.m_movieName release]), when and where will the previous block be released? Thanks for your help very much!

  • getting proxies of the correct type in nhibernate

    - by Nir
    I have a problem with uninitialized proxies in NHibernate.

    The Domain Model

    Let's say I have two parallel class hierarchies: Animal, Dog, Cat and AnimalOwner, DogOwner, CatOwner, where Dog and Cat both inherit from Animal, and DogOwner and CatOwner both inherit from AnimalOwner. AnimalOwner has a reference of type Animal called OwnedAnimal. Here are the classes in the example:

        public abstract class Animal
        {
            // some properties
        }

        public class Dog : Animal
        {
            // some more properties
        }

        public class Cat : Animal
        {
            // some more properties
        }

        public class AnimalOwner
        {
            public virtual Animal OwnedAnimal { get; set; }
            // more properties...
        }

        public class DogOwner : AnimalOwner
        {
            // even more properties
        }

        public class CatOwner : AnimalOwner
        {
            // even more properties
        }

    The classes have proper NHibernate mappings, all properties are persistent, and everything that can be lazy loaded is lazy loaded. The application business logic only lets you set a Dog in a DogOwner and a Cat in a CatOwner.

    The Problem

    I have code like this:

        public void ProcessDogOwner(DogOwner owner)
        {
            Dog dog = (Dog)owner.OwnedAnimal;
            ....
        }

    This method can be called by many different methods; in most cases the dog is already in memory and everything is OK, but rarely the dog isn't already in memory. In that case I get an NHibernate "uninitialized proxy", and the cast throws an exception because NHibernate generates a proxy for Animal and not for Dog. I understand that this is how NHibernate works, but I need to know the type without loading the object. Or, more correctly, I need the uninitialized proxy to be a proxy of Cat or Dog and not a proxy of Animal.

    Constraints

    I can't change the domain model; the model is handed to me by another department, and I tried to get them to change it and failed. The actual model is much more complicated than the example, and the classes have many references between them, so using eager loading or adding joins to the queries is out of the question for performance reasons. I have full control of the source code, the hbm mapping, and the database schema, and I can change them any way I want (as long as I don't change the relationships between the model classes). I have many methods like the one in the example and I don't want to modify all of them. Thanks, Nir

  • Fastest PNG decoder for .NET

    - by sboisse
    Our web server needs to process many compositions of large images together before sending the results to web clients. This process is performance critical because the server can receive several thousand requests per hour. Right now our solution loads PNG files (around 1MB each) from the HD and sends them to the video card so the composition is done on the GPU. We first tried loading our images using the PNG decoder exposed by the XNA API. We saw that the performance was not too good. To understand whether the problem was loading from the HD or the decoding of the PNG, we modified that by loading the file into a memory stream, and then sending that memory stream to the .NET PNG decoder. The difference in performance between using XNA and using the System.Windows.Media.Imaging.PngBitmapDecoder class is not significant; we roughly get the same levels of performance. Our benchmarks show the following performance results:

        Load images from disk:                        37.76ms    1%
        Decode PNGs:                                2816.97ms   77%
        Load images on video hardware:               196.67ms    5%
        Composition:                                  87.80ms    2%
        Get composition result from video hardware:  166.21ms    5%
        Encode to PNG:                               318.13ms    9%
        Store to disk:                                 3.96ms    0%
        Clean up:                                     53.00ms    1%
        Total:                                      3680.50ms  100%

    From these results we see that the slowest part is decoding the PNGs. So we are wondering if there is a PNG decoder we could use that would allow us to reduce the PNG decoding time. We also considered keeping the images uncompressed on the hard disk, but then each image would be 10MB in size instead of 1MB, and since there are several tens of thousands of these images stored on the hard disk, it is not possible to store them all without compression.
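    The isolation step described above (read the whole file first, then decode from memory, so I/O and decode time can be measured separately) generalizes to any stack. A minimal Python/Pillow sketch of the same measurement, where the file name and the use of Pillow are assumptions for illustration:

        import io
        import time
        from PIL import Image  # Pillow, assumed available

        def bench(path):
            t0 = time.perf_counter()
            raw = open(path, "rb").read()      # disk I/O only
            t1 = time.perf_counter()
            img = Image.open(io.BytesIO(raw))  # decode from memory...
            img.load()                         # ...forced here; Image.open is lazy
            t2 = time.perf_counter()
            return (t1 - t0) * 1000.0, (t2 - t1) * 1000.0

        read_ms, decode_ms = bench("layer.png")
        print("read: %.2f ms  decode: %.2f ms" % (read_ms, decode_ms))

    Separating the two timings this way is what justifies the conclusion that the decoder, not the disk, is the bottleneck.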

  • Looking for advice on importing large dataset in sqlite and Cocoa/Objective-C

    - by jluckyiv
    I have a fairly large hierarchical dataset I'm importing. The total size of the database after import is about 270MB in SQLite. My current method works, but I know I'm hogging memory as I do it. For instance, if I run with Zombies, my system freezes up (although it will execute just fine if I don't use that instrument). I was hoping for some algorithm advice. I have three hierarchical tables comprising about 400,000 records. The highest level has about 30 records, the next has about 20,000, and the last has the balance. Right now I'm using nested for loops to import. I know I'm creating an unreasonably large object graph, but I'm also looking to serialize to JSON or XML, because I want to break up the records into downloadable chunks for the end user to import a la carte. I have the code written to do the serialization, but I'm wondering if I can serialize the object graph if I only have pieces in memory. Here's pseudocode showing the basic process for the SQLite import. I left out the unnecessary detail.

        [database open];
        [database beginTransaction];
        NSArray *firstLevels = [[FirstLevel fetchFromURL:url] retain];
        for (FirstLevel *firstLevel in firstLevels)
        {
            [firstLevel save];
            int id1 = [firstLevel primaryKey];
            NSArray *secondLevels = [[SecondLevel fetchFromURL:url] retain];
            for (SecondLevel *secondLevel in secondLevels)
            {
                [secondLevel saveWithForeignKey:id1];
                int id2 = [secondLevel primaryKey];
                NSArray *thirdLevels = [[ThirdLevel fetchFromURL:url] retain];
                for (ThirdLevel *thirdLevel in thirdLevels)
                {
                    [thirdLevel saveWithForeignKey:id2];
                }
                [database commit];
                [database beginTransaction];
                [thirdLevels release];
            }
            [secondLevels release];
        }
        [database commit];
        [database release];
        [firstLevels release];
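    For what it's worth, the chunked-transaction pattern above (fetch one branch of the hierarchy at a time, commit per batch, drop each level as soon as it's written) is language-neutral. A sketch in Python with sqlite3, where the table names and the fetch_* helpers are hypothetical stand-ins for the network calls:

        import sqlite3

        def import_hierarchy(db_path, fetch_first, fetch_second, fetch_third):
            conn = sqlite3.connect(db_path)
            for first in fetch_first():
                with conn:  # one transaction committed per first-level branch
                    cur = conn.execute(
                        "INSERT INTO first_level (name) VALUES (?)",
                        (first["name"],))
                    id1 = cur.lastrowid
                    for second in fetch_second(id1):
                        cur = conn.execute(
                            "INSERT INTO second_level (fk, name) VALUES (?, ?)",
                            (id1, second["name"]))
                        id2 = cur.lastrowid
                        # third-level rows are written and discarded immediately,
                        # so memory stays bounded by one branch of the hierarchy
                        conn.executemany(
                            "INSERT INTO third_level (fk, name) VALUES (?, ?)",
                            [(id2, t["name"]) for t in fetch_third(id2)])
            conn.close()

    The key point is that nothing above the current branch ever needs to be alive at once, which also answers the serialization worry: each committed branch can be serialized and released before the next is fetched.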

  • New NSData with range of old NSData maintaining bytes.

    - by umop
    I have a fairly large NSData (or NSMutableData if necessary) object from which I want to take a small chunk and leave the rest. Since I'm working with large amounts of NSData bytes, I don't want to make a big copy, but instead just truncate the existing bytes. Basically:

        NSData *source:      [a few bytes I want to discard][big chunk of bytes I want to keep]
        NSData *destination: [big chunk of bytes I want to keep]

    There are truncation methods in NSMutableData, but they only truncate the end of it, whereas I want to truncate the beginning. My thought is to do this with the methods:

        - getBytes:range:
        - initWithBytesNoCopy:length:freeWhenDone:

    However, I'm trying to figure out how to manage memory with these. I'm guessing the process will be like this (I've placed ????s where I don't know what to do):

        void *buffer;

        // Get range of bytes
        [source getBytes:buffer range:NSMakeRange(myStart, myLength)];

        // Somehow (m)alloc the memory which will be freed up in the following step
        ?????

        // Release the source, now that I've allocated the bytes
        [source release];

        // Create a new data object, recycling the bytes so they don't have to be copied
        NSData *destination = [[NSData alloc] initWithBytesNoCopy:buffer length:myLength freeWhenDone:YES];

    Thanks for the help!
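    As an aside, the zero-copy subrange idea is easy to demonstrate outside Cocoa. A small Python sketch using memoryview, purely illustrative and not the NSData API:

        big = bytearray(b"HDR!" + b"payload " * 1000)  # header + bulk data

        view = memoryview(big)[4:]   # skip the 4 header bytes; no copy is made
        assert view[:7] == b"payload"

        # Slicing the bytearray itself (big[4:]) would copy the ~8KB payload;
        # slicing the memoryview only creates a small view object that
        # keeps the original buffer alive.

    The trade-off is the same in both worlds: the view pins the whole original allocation in memory, which is exactly the bookkeeping the ????? step above has to get right.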

  • Is there a disassembler + debugger for java (ala OllyDbg / SoftICE for assembler)?

    - by Ran Biron
    Is there a utility similar to OllyDbg / SoftICE for Java? I.e. execute a class (from a jar / with a class path) and, without source code, show the disassembly of the intermediate code, with the ability to step through / step over / search for references / edit specific intermediate code in memory / apply the edit to the file... If not, is it even possible to write something like this (assuming we're willing to live without HotSpot for the duration of the debug)?

    Edit: I'm not talking about JAD or JD or Cavaj. These are fine decompilers, but I don't want a decompiler for several reasons, the most notable being that their output is incorrect (at best; sometimes it's just plain wrong). I'm not looking for a magical "compiled bytes to Java code" tool. I want to see the actual bytes that are about to be executed. Also, I'd like the ability to change those bytes (just like in an assembly debugger) and, hopefully, write the changed part back to the class file.

    Edit 2: I know javap exists, but it only goes one way (and without any sort of analysis). Example (code taken from the vmspec documentation). From Java code, we use javac to compile this:

        void setIt(int value) {
            i = value;
        }
        int getIt() {
            return i;
        }

    to a Java .class file. Using javap -c I can get this output:

        Method void setIt(int)
           0 aload_0
           1 iload_1
           2 putfield #4
           5 return

        Method int getIt()
           0 aload_0
           1 getfield #4
           4 ireturn

    This is OK for the disassembly part (not really good without analysis; "field #4 is Example.i"), but I can't find the two other "tools":

    A debugger that goes over the instructions themselves (with stack, memory dumps, etc.), allowing me to examine the actual code and environment.
    A way to reverse the process: edit the disassembled code and recreate the .class file (with the edited code).

  • Problem executing script using Python and subprocess.call yet works in Bash

    - by Antoine Benkemoun
    Hello, for the first time I am asking for a little bit of help over here, as I am more of a ServerFault person. I am doing some scripting in Python, and I've been loving the language so far, but I have this little problem which is keeping my script from working. Here is the code line in question:

        subprocess.call('xen-create-image --hostname '+nom+' --memory '+memory+' --partitions=/root/scripts/part.tmp --ip '+ip+' --netmask '+netmask+' --gateway '+gateway+' --passwd', shell=True)

    I have tried the same thing with os.popen. All the variables are correctly set. When I execute the command in question in my regular Linux shell, it works perfectly fine, but when I execute it using my Python script, I get bizarre errors. I even replaced subprocess.call() with a print to make sure I am using the exact output of the command. I went looking into the environment variables of my shell, but they are pretty much the same... I'll post the error I am getting, but I'm not sure it's relevant to my problem.

        Use of uninitialized value $lines[0] in substitution (s///) at /usr/share/perl5/Config/IniFiles.pm line 614.
        Use of uninitialized value $_ in pattern match (m//) at /usr/share/perl5/Config/IniFiles.pm line 628.

    I am not a Python expert, so I'm most likely missing something here. Thank you in advance for your help, Antoine
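    A usual first step when debugging subprocess calls like this is to pass an argument list instead of a concatenated shell string, which removes shell parsing and quoting from the picture. A hedged sketch of that variant, reusing the variable names from the question (nom, memory, ip, netmask, gateway must already be set):

        import subprocess

        # Build the command as a list so no shell is involved at all.
        cmd = [
            "xen-create-image",
            "--hostname", nom,
            "--memory", memory,
            "--partitions=/root/scripts/part.tmp",
            "--ip", ip,
            "--netmask", netmask,
            "--gateway", gateway,
            "--passwd",
        ]
        returncode = subprocess.call(cmd)  # shell=True no longer needed
        print("exit status: %d" % returncode)

    If the errors persist even without the shell, that points to an environment difference (PATH, locale, etc.) rather than the command string itself.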

  • Using WCF to expose underlying process

    - by Steven
    I think I must be a little dull, because I'm having so much difficulty with this. I use WCF for pretty much everything in-house; it's the most appropriate technology. I have a new Silverlight 3 app that is connecting to the WCF service, and that's working fine. Where the problem begins: because of the expense of creating the objects within this service, and the high correlation of individual objects being shared between clients, I want to have a console application that basically gathers/calculates/caches all the data for the service 24/7, and the service connects to the console app (or whatever it is) and gets the pre-processed data. For example, think of it in terms of a stock reporting app (which it is). Person A has a portfolio of x, y, z. Person B has a portfolio of x, q, z, r. The service needs to provide updated metrics on how their portfolios are performing. So instead of processing person A, then person B, every second, the app independently gathers the stock price and position information into memory, and the service just queries the in-memory result. Thanks for your help, I really am feeling dumb right now.

  • J2ME cache issue

    - by kiennt
    I have to write a J2ME app to retrieve images from a server and display them on a mobile phone. I have seen and tested that Snaptu has a mechanism to cache images, even with 100 images (both normal size and zoom size). I wonder how they do that? I thought those guys used RMS to save the image stream to data. But when I check the working folder of the emulator (I use Windows XP and the Sun Wireless Toolkit 3.0; the emulator device I use to run my program is CLDC Device 1, so my working folder is C:\Document And Settings\Administrator\javame-sdk\3.0\work\6\appdb), I see some .db files. When I delete these files, I can still view the cached images in my emulator???? I also thought those guys used heap memory to save the images. But that can't be right either, because when I set the device memory limit to 2MB (like some mobile phones) and load and view 100 images in zoom size, it didn't cause an OutOfMemoryError? It's so weird. Can anyone help me? Thanks

  • How can I randomly iterate through a large Range?

    - by void
    I would like to randomly iterate through a range. Each value will be visited only once and all values will eventually be visited. For example:

        class Array
          def shuffle
            ret = dup
            j = length
            i = 0
            while j > 1
              r = i + rand(j)
              ret[i], ret[r] = ret[r], ret[i]
              i += 1
              j -= 1
            end
            ret
          end
        end

        (0..9).to_a.shuffle.each{|x| f(x)}

    where f(x) is some function that operates on each value. A Fisher-Yates shuffle is used to efficiently provide random ordering. My problem is that shuffle needs to operate on an array, which is not cool because I am working with astronomically large numbers. Ruby will quickly consume a large amount of RAM trying to create a monstrous array. Imagine replacing (0..9) with (0..99**99). This is also why the following code will not work:

        tried = {} # store previous attempts
        bigint = 99**99
        bigint.times {
          x = rand(bigint)
          redo if tried[x]
          tried[x] = true
          f(x) # some function
        }

    This code is very naive and quickly runs out of memory as tried obtains more entries. What sort of algorithm can accomplish what I am trying to do?

    [Edit 1]: Why do I want to do this? I'm trying to exhaust the search space of a hash algorithm for an N-length input string looking for partial collisions. Each number I generate is equivalent to a unique input string, entropy and all. Basically, I'm "counting" using a custom alphabet.

    [Edit 2]: This means that f(x) in the above examples is a method that generates a hash and compares it to a constant, target hash for partial collisions. I do not need to store the value of x after I call f(x), so memory should remain constant over time.

    [Edit 3/4/5/6]: Further clarification/fixes.
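    One standard answer to this class of problem is a full-period permutation generator, e.g. a linear congruential generator whose parameters satisfy the Hull-Dobell theorem, which visits every value in [0, m) exactly once using O(1) memory. A minimal Python sketch of the idea (the parameters are chosen for simplicity, not randomness quality; the low bits of a power-of-two LCG are notoriously regular):

        def full_cycle(m, seed=0):
            """Visit every integer in [0, m) exactly once, in O(1) memory.

            Runs a full-period LCG over the next power of two M >= m
            (a % 4 == 1 with odd c gives full period modulo a power of two,
            by the Hull-Dobell theorem) and skips values >= m ("cycle walking").
            """
            M = 1 << max(1, (m - 1).bit_length())  # power of two >= m
            a, c = 5, 3                            # a % 4 == 1, c odd
            x = seed % M
            for _ in range(M):
                x = (a * x + c) % M
                if x < m:       # cycle walking: discard out-of-range values
                    yield x

        # usage: iterate 0..m-1 in a scrambled, non-repeating order
        for v in full_cycle(10):
            print(v)

    Because the LCG is a bijection on [0, M), no visited-set is needed: the sequence cannot repeat until every value has been produced, so memory stays constant no matter how astronomically large m is.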

  • Use of auto_ptr in VC6 dll causes crash

    - by Yan Cheng CHEOK
        // dll
        #include <memory>

        __declspec(dllexport) std::auto_ptr<int> get();

        __declspec(dllexport) std::auto_ptr<int> get() {
            return std::auto_ptr<int>(new int());
        }

        // exe
        #include <iostream>
        #include <memory>

        __declspec(dllimport) std::auto_ptr<int> get();

        int main() {
            {
                std::auto_ptr<int> x = get();
            }
            std::cout << "done\n";
            getchar();
        }

    The above code runs perfectly OK under VC9. However, under VC6, I experience an immediate crash with the following message:

        Debug Assertion Failed!
        Program: C:\Projects\use_dynamic_link\Debug\use_dynamic_link.exe
        File: dbgheap.c
        Line: 1044
        Expression: _CrtIsValidHeapPointer(pUserData)

    Is exporting auto_ptr under VC6 not allowed? It is a known problem that exporting STL collection classes through a DLL causes trouble: http://stackoverflow.com/questions/2451714/access-violation-when-accessing-an-stl-object-through-a-pointer-or-reference-in-a However, I Googled around and do not see anything mentioned for std::auto_ptr. Any workaround?

  • Strange problem with vectors.

    - by Catalin Dumitru
    I have a really strange problem with STL vectors, in which the wrong destructor is called for the right object when I call the erase method, if that makes any sense. My code looks something like this:

        for(vector<Category>::iterator iter = this->children.begin();
            iter != this->children.end(); iter++)
        {
            if((*iter).item == item)
            {
                this->children.erase(iter);
                return;
            }
            -------------------------
        }

    It's just a simple function that finds the element in the vector which has some item to be searched, and removes said element from the vector. My problem is that when the erase function is called, and thus the object which the iterator is pointing at is being destroyed, the wrong destructor is called. More specifically, the destructor of the last element in the vector is called, not that of the actual object being removed. Thus the memory is removed from the wrong object, which will still be an element in the vector, while the actual object removed from the vector still has all of its memory intact. The copy constructor of the object looks like this:

        Category::Category(const Category &from)
        {
            this->name = from.name;
            for(vector<Category>::const_iterator iter = from.children.begin();
                iter != from.children.end(); iter++)
                this->children.push_back((*iter));
            this->item = new QTreeWidgetItem;
        }

    And the destructor:

        Category::~Category()
        {
            this->children.clear();
            if(this->item != NULL)
            {
                QTreeWidgetItem* parent = this->item->parent();
                if(parent != NULL)
                    parent->removeChild(this->item);
                delete this->item;
            }
        }

  • What is the best way to iterate over a large result set in JDBC/iBatis 3?

    - by paul_sns
    We're trying to iterate over a large number of rows from the database and convert them into objects. Behavior will be as follows: the result will be sorted by sequence id, and a new object will be created when the sequence id changes. The object created will be sent to an external service, and we will sometimes have to wait before sending another one (which means the next set of data will not be used immediately). We have already invested code in iBatis 3, so an iBatis solution would be the best approach for us (we've tried using RowBounds but haven't seen how it does the iteration under the hood). We'd like to balance minimizing memory usage and reducing the number of DB trips. We're also open to a pure JDBC approach, but we'd like the solution to work on different databases.

    UPDATE: We need to make as few calls to the DB as possible (one call would be the ideal scenario) while also preventing the application from using too much memory. Are there any other solutions out there for this type of problem, be it pure JDBC or any other technology? Thanks, and we hope to hear your insights on this.
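    The generic shape of the answer, one query with rows streamed in batches and grouped on the fly rather than materialized, is the same in any data-access layer. A Python DB-API sketch for illustration (table and column names are hypothetical; the iBatis 3 equivalent would use a ResultHandler, and raw JDBC would tune Statement.setFetchSize):

        def stream_objects(conn, batch_size=500):
            """One DB call; rows pulled in batches and grouped by sequence id."""
            cur = conn.cursor()
            cur.execute("SELECT seq_id, payload FROM rows ORDER BY seq_id")
            current_id, group = None, []
            while True:
                batch = cur.fetchmany(batch_size)   # bounded memory per batch
                if not batch:
                    break
                for seq_id, payload in batch:
                    if seq_id != current_id and group:
                        yield current_id, group     # a completed object
                        group = []
                    current_id = seq_id
                    group.append(payload)
            if group:
                yield current_id, group             # flush the final group

    Because the consumer pulls groups lazily, the wait before sending each object to the external service costs nothing in memory: only the current group and the current batch are ever held at once.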
