Search Results

Search found 21089 results on 844 pages for 'virtual memory'.

Page 422/844 | < Previous Page | 418 419 420 421 422 423 424 425 426 427 428 429  | Next Page >

  • Python SQLite FTS3 alternatives?

    - by Mike Cialowicz
    Are there any good alternatives to SQLite + FTS3 for Python? I'm iterating over a series of text documents and would like to categorize them according to some text queries. For example, I might want to know if a document mentions the words "rating" or "upgraded" within three words of "buy". The FTS3 syntax for this query is the following:

        (rating OR upgraded) NEAR/3 buy

    That's all well and good, but if I use FTS3, this operation seems rather expensive. The process goes something like this:

        # create an SQLite3 db in memory
        conn = sqlite3.connect(':memory:')
        c = conn.cursor()
        c.execute('CREATE VIRTUAL TABLE fts USING FTS3(content TEXT)')
        conn.commit()

    Then, for each document, do something like this:

        # insert the document text into the fts table, so I can run a query
        c.execute('insert into fts(content) values (?)', (content,))
        conn.commit()
        # execute my FTS query here, look at the results, etc.
        # remove the document text from the fts table before working on the next document
        c.execute('delete from fts')
        conn.commit()

    This seems rather expensive to me. The other problem I have with SQLite FTS is that it doesn't appear to work with Python 2.5.4: the 'CREATE VIRTUAL TABLE' syntax is unrecognized. This means that I'd have to upgrade to Python 2.6, which means re-testing numerous existing scripts and programs to make sure they work under 2.6. Is there a better way? Perhaps a different library? Something faster? Thank you.
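
    One way to sidestep the per-document insert/delete churn is to load the whole corpus into a single FTS table once and let one MATCH query do the categorization, using rowid to map hits back to documents. A minimal sketch, assuming the underlying SQLite build has FTS3 (and its enhanced query syntax) compiled in; the documents list here is hypothetical:

        import sqlite3

        conn = sqlite3.connect(':memory:')
        c = conn.cursor()
        c.execute('CREATE VIRTUAL TABLE fts USING FTS3(content TEXT)')

        documents = ["analysts upgraded the stock to buy", "nothing relevant here"]
        c.executemany('INSERT INTO fts(content) VALUES (?)', ((d,) for d in documents))
        conn.commit()

        # one query over the whole corpus instead of one insert/query/delete per document
        c.execute("SELECT rowid FROM fts WHERE content MATCH '(rating OR upgraded) NEAR/3 buy'")
        print([row[0] for row in c.fetchall()])  # rowids of matching documents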

  • x86_64 printf segfault after brk call

    - by gmb11
    While I was trying to use brk (int 0x80 with 45 in %rax) to implement a simple memory manager in assembly and print the blocks in order, I kept getting a segfault. After a while I could only reproduce the error, but I have no idea why it is happening:

        .section .data
        helloworld:
            .ascii "hello world"

        .section .text
        .globl _start
        _start:
            push %rbp
            mov %rsp, %rbp
            movq $45, %rax
            movq $0, %rbx       # brk(0) should just return the current break of the program
            int $0x80
            #incq %rax          # segfault
            #addq $1, %rax      # segfault
            movq $0, %rax       # works fine?
            #addq $1, %rax      # segfault again?
            movq $helloworld, %rdi
            call printf
            movq $1, %rax       # exit
            int $0x80

    In the example here, if the commented lines are uncommented, I get a segfault, but some instructions (like the movq $0, %rax) work just fine. In my other program, the first couple of printf calls work, but the third crashes... Looking at other questions, I read that printf sometimes allocates some memory, and that brk shouldn't be used, because in this case it corrupts the heap or something... I'm very confused; does anyone know something about this? EDIT: I've just found out that for printf to work you need %rax=0 - which makes sense, since in the x86-64 System V calling convention %al tells a variadic function like printf how many vector registers are used in the call, so a garbage value there sends it reading registers that were never saved.

  • How to pass binary data between two apps using Content Provider?

    - by Viktor
    I need to pass some binary data between two Android apps using a Content Provider (sharedUserId is not an option). I would prefer not to pass the data (a savegame stored as a file, small in size, < 20k) as a file (i.e. overriding openFile()), since this would necessitate some complicated temp-file scheme to cope with concurrency between several content provider accesses and a running game. I would like to read the file into memory under a mutex lock and then pass the binary array in the simplest way possible. How do I do this? It seems creating a file in memory is not a possibility due to the return type of openFile(). query() needs to return a Cursor. Using MatrixCursor is not possible, since it applies toString() to all stored objects when reading it. What do I need to do? Implement a custom Cursor? This class has 30 abstract methods. Do I read the file, put it in a SQLite db and return the cursor? The complexity of this seemingly simple task is mind-boggling.

  • Help decrypting passwords in ColdFusion that were created in .NET

    - by KnightStalker
    I have a SQL db storing passwords that were encrypted by a .NET application, which I need to decrypt from a ColdFusion app. I just can't seem to get things set up properly for the CF decryption to work. Any help would be appreciated. Thanks. The .NET decryption code is:

        public string Decrypt(string input)
        {
            try
            {
                DESCryptoServiceProvider des = new DESCryptoServiceProvider();
                int zeroBasedByteCount = (input.Length / 2);

                // Put the input string into the byte array
                byte[] inputByteArray = new byte[zeroBasedByteCount];
                for (int x = 0; x < zeroBasedByteCount; x++)
                {
                    int i = Convert.ToInt32(input.Substring(x * 2, 2), 16);
                    inputByteArray[x] = (byte)i;
                }

                // Create the crypto objects
                des.Key = ASCIIEncoding.ASCII.GetBytes(key);
                des.IV = ASCIIEncoding.ASCII.GetBytes(key);
                MemoryStream ms = new MemoryStream();
                CryptoStream cs = new CryptoStream(ms, des.CreateDecryptor(), CryptoStreamMode.Write);

                // Flush the data through the crypto stream into the memory stream
                cs.Write(inputByteArray, 0, inputByteArray.Length);
                cs.FlushFinalBlock();

                // Get the decrypted data back from the memory stream
                StringBuilder ret = new StringBuilder();
                foreach (byte b in ms.ToArray())
                {
                    ret.Append((char)b);
                }
                return ret.ToString();
            }
            catch (Exception ex)
            {
                throw ex;
            }
        }
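
    For cross-checking the scheme outside .NET: DESCryptoServiceProvider defaults to CBC mode with PKCS7 padding, the 8-byte key doubles as the IV, and the ciphertext arrives hex-encoded. A rough Python sketch using the third-party pyDes package (an assumption; any DES-CBC implementation would do) to mirror that pipeline:

        import pyDes  # pip install pyDes (assumed dependency)

        def decrypt_hex(cipher_hex, key):
            # key is the same 8 ASCII bytes the .NET code reuses as the IV
            raw = bytes.fromhex(cipher_hex)
            des = pyDes.des(key, pyDes.CBC, key, padmode=pyDes.PAD_PKCS5)
            return des.decrypt(raw)

        # hypothetical values for illustration; real input should come from the .NET side
        print(decrypt_hex("0123456789abcdef", b"8bytekey"))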

  • Submitting R jobs using PBS

    - by Tony
    I am submitting a job using qsub that runs parallelized R. My intention is to have the R program running on 4 cores rather than 8. Here are some of my settings in the PBS file:

        #PBS -l nodes=1:ppn=4
        ....
        time R --no-save < program1.R > program1.log

    I am issuing the command "ta job_id" and I see that 4 cores are listed. However, the job occupies a large amount of memory (31944900k used vs 32949628k total). If I were to use 8 cores, the job would hang due to the memory limitation.

        top - 21:03:53 up 77 days, 11:54, 0 users, load average: 3.99, 3.75, 3.37
        Tasks: 207 total, 5 running, 202 sleeping, 0 stopped, 0 zombie
        Cpu(s): 30.4%us, 1.6%sy, 0.0%ni, 66.8%id, 0.0%wa, 0.0%hi, 1.2%si, 0.0%st
        Mem: 32949628k total, 31944900k used, 1004728k free, 269812k buffers
        Swap: 2097136k total, 8360k used, 2088776k free, 6030856k cached

    Here is a snapshot when issuing the command ta job_id:

        PID  USER PR NI VIRT  RES  SHR  S %CPU %MEM TIME+   COMMAND
        1794 x    25  0 6247m 6.0g 1780 R 99.2 19.1 8:14.37 R
        1795 x    25  0 6332m 6.1g 1780 R 99.2 19.4 8:14.37 R
        1796 x    25  0 6242m 6.0g 1784 R 99.2 19.1 8:14.37 R
        1797 x    25  0 6322m 6.1g 1780 R 99.2 19.4 8:14.33 R
        1714 x    18  0 65932 1504 1248 S  0.0  0.0 0:00.00 bash
        1761 x    18  0 63840 1244 1052 S  0.0  0.0 0:00.00 20016.hpc
        1783 x    18  0 133m  7096 1128 S  0.0  0.0 0:00.00 python
        1786 x    18  0 137m  46m  2688 S  0.0  0.1 0:02.06 R

    How can I prevent other users from using the other 4 cores? I'd like to somehow make it look as if my job is using 8 cores, with 4 of them idling. Could anyone kindly help me out with this? Can this be solved using PBS? Many thanks.
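
    Two angles worth noting, stated as general practice rather than site-specific advice: reserving the idle cores against other users is normally done by requesting the whole node from PBS (e.g. ppn=8) while only starting 4 workers, and keeping the computation itself on 4 cores can be enforced from the launching process on Linux via CPU affinity, which children such as an R session inherit. A small sketch of the latter:

        import os

        # Linux-only sketch: pin this process (and any children it spawns,
        # e.g. an R session launched from here) to the first four cores.
        os.sched_setaffinity(0, {0, 1, 2, 3})
        print("running on cores:", sorted(os.sched_getaffinity(0)))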

  • Is there a disassembler + debugger for Java (a la OllyDbg / SoftICE for assembler)?

    - by Ran Biron
    Is there a utility similar to OllyDbg / SoftICE for Java? I.e., execute a class (from a jar / with a class path) and, without source code, show the disassembly of the intermediate code, with the ability to step through / step over / search for references / edit specific intermediate code in memory / apply edits to the file... If not, is it even possible to write something like this (assuming we're willing to live without HotSpot for the duration of the debugging)? Edit: I'm not talking about JAD or JD or Cavaj. These are fine decompilers, but I don't want a decompiler for several reasons, most notably that their output is incorrect (at best, sometimes just plain wrong). I'm not looking for a magical "compiled bytes to Java code" - I want to see the actual bytes that are about to be executed. Also, I'd like the ability to change those bytes (just like in an assembly debugger) and, hopefully, write the changed part back to the class file. Edit2: I know javap exists - but it only goes one way (and without any sort of analysis). Example (code taken from the vmspec documentation): from Java code, we use javac to compile this:

        void setIt(int value) {
            i = value;
        }

        int getIt() {
            return i;
        }

    to a Java .class file. Using javap -c I can get this output:

        Method void setIt(int)
           0 aload_0
           1 iload_1
           2 putfield #4
           5 return

        Method int getIt()
           0 aload_0
           1 getfield #4
           4 ireturn

    This is OK for the disassembly part (though not really good without analysis - "field #4 is Example.i"), but I can't find the two other "tools":

    1. A debugger that steps over the instructions themselves (with stack, memory dumps, etc.), allowing me to examine the actual code and environment.
    2. A way to reverse the process - edit the disassembled code and recreate the .class file (with the edited code).

  • DDK/WDM development problem: driver won't load on x64 Windows platform

    - by user295975
    Hi there! I am a beginner in the DDK/WDM driver development field. I have a task which involves porting a virtual device driver from x86 to x64 (Intel). I got the source code, modified it a bit, and compiled it successfully with the DDK (build environments). But when I tried to load it on an ia64 Windows 7 machine, it didn't load. Then I tried some simple example device drivers from --http://www.codeproject.com/KB/system/driverdev.aspx (I put '--' to be able to post the hyperlink) and from other links, but still the same problem. I heard on a forum that some of the libraries you link against are not compatible with the new machines, and it was suggested to link against similar replacement libraries... but that still didn't work. When I build, I use the "-cefw" command line parameters as suggested. I do not have an *.inf file associated with the driver, but I'm copying it into system32/drivers and using WinObj to see if it is loaded into memory on the next restart. I also tried this program ( http://www.codeproject.com/KB/system/tdriver.aspx ) to load the driver into memory, but that didn't work for me either. Please help me... I'm stuck on this and my deadline has already passed. I feel like I'm going nuts trying to discover what I am doing wrong.

  • OutOfMemoryException

    - by Andrew
    I have an application that is pretty memory hungry. It holds a large amount of data in some big arrays. I have recently been noticing the occasional OutOfMemoryException. These OutOfMemoryExceptions are occurring long before my application (ASP.NET) has used up the 800mb available to it. I have tracked the issue down to the area of code where the array is resized. The array contains a structure that is 74 bytes in size. (I know that you shouldn't create structs that are bigger than 16 bytes, but this application is a port from a VB6 application.) I have tried changing the struct to a class, and this appears to have fixed the problem for now. I think the reason that changing to a class solves the problem is that when using a struct and the array is resized, a segment of memory large enough to store the new array (e.g. (currentArraySize + increaseBySize) * 74 bytes) cannot be found, which leads to the OutOfMemoryException. This isn't the case with a class, as each element of the array only needs 8 bytes to store a pointer to the object. Is my thinking correct here?
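
    The arithmetic behind that reasoning, as a quick sketch (the element count is an illustrative assumption, not taken from the application):

        elements = 5_000_000

        struct_array_bytes = elements * 74  # structs stored inline: one big contiguous block
        class_array_bytes = elements * 8    # references only: the objects themselves can live anywhere

        print(f"struct array needs {struct_array_bytes / 2**20:.0f} MB contiguous")  # ~353 MB
        print(f"class array needs  {class_array_bytes / 2**20:.0f} MB contiguous")   # ~38 MB
        # During a resize the old and new arrays exist simultaneously, so the
        # struct version briefly needs roughly twice its size in free,
        # unfragmented address space - a much harder allocation to satisfy.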

  • using indexer to retrieve Linq to SQL object from datastore

    - by fearofawhackplanet
        class UserDatastore : IUserDatastore
        {
            ...
            public IUser this[Guid userId]
            {
                get
                {
                    User user = (from u in _dataContext.Users
                                 where u.Id == userId
                                 select u).FirstOrDefault();
                    return user;
                }
            }
            ...
        }

    One of the developers on our team is arguing that an indexer in the above situation is not appropriate and that a GetUser(Guid id) method should be preferred. The arguments are:

    1. We aren't indexing into an in-memory collection; the indexer is basically performing a hidden SQL query.
    2. Using a Guid in an indexer is bad (FxCop flagged this also).
    3. Returning null from an indexer isn't normal behaviour.
    4. An API user generally wouldn't expect any of this behaviour.

    I agree to an extent with (most of) these points. But I'm also inclined to argue that one of the characteristics of Linq is to abstract the database access to make it appear that you're simply working with a bunch of collections, even though the lazy evaluation paradigm means those collections aren't evaluated until you run a query over them. It doesn't seem inconsistent to me to access the datastore in the same manner as if it were a concrete in-memory collection. Also, bearing in mind this is an inherited codebase which uses this pattern extensively and consistently, is it worth the refactoring? I accept that it might have been better to use a Get method from the start, but I'm not yet convinced that it's completely incorrect to be using an indexer. I'd be interested to hear all opinions, thanks.
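
    Point 3 has a well-known parallel in other languages. Python's mapping protocol, for instance, draws exactly this line between indexer and method semantics (a sketch for comparison only, with a hypothetical dict-backed store):

        class UserDatastore:
            def __init__(self, users):
                self._users = users  # hypothetical {id: user} backing store

            def __getitem__(self, user_id):
                # indexer semantics: a missing key is exceptional
                return self._users[user_id]  # raises KeyError

            def get_user(self, user_id):
                # method semantics: absence is an expected, unexceptional outcome
                return self._users.get(user_id)  # returns None

        store = UserDatastore({"42": "alice"})
        print(store.get_user("missing"))  # None
        print(store["42"])                # alice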

  • Rijndael managed: plaintext length detection

    - by sheepsimulator
    I am spending some time learning how to use the RijndaelManaged library in .NET, and developed the following function to test encrypting text, with slight modifications from the MSDN library:

        Function encryptBytesToBytes_AES(ByVal plainText As Byte(), ByVal Key() As Byte, ByVal IV() As Byte) As Byte()
            ' Check arguments.
            If plainText Is Nothing OrElse plainText.Length <= 0 Then
                Throw New ArgumentNullException("plainText")
            End If
            If Key Is Nothing OrElse Key.Length <= 0 Then
                Throw New ArgumentNullException("Key")
            End If
            If IV Is Nothing OrElse IV.Length <= 0 Then
                Throw New ArgumentNullException("IV")
            End If

            ' Declare the RijndaelManaged object used to encrypt the data.
            Dim aesAlg As RijndaelManaged = Nothing

            ' Declare the stream used to encrypt to an in-memory array of bytes.
            Dim msEncrypt As MemoryStream = Nothing

            Try
                ' Create a RijndaelManaged object with the specified key and IV.
                aesAlg = New RijndaelManaged()
                aesAlg.BlockSize = 128
                aesAlg.KeySize = 128
                aesAlg.Mode = CipherMode.ECB
                aesAlg.Padding = PaddingMode.None
                aesAlg.Key = Key
                aesAlg.IV = IV

                ' Create an encryptor to perform the stream transform.
                Dim encryptor As ICryptoTransform = aesAlg.CreateEncryptor(aesAlg.Key, aesAlg.IV)

                ' Create the streams used for encryption.
                msEncrypt = New MemoryStream()
                Using csEncrypt As New CryptoStream(msEncrypt, encryptor, CryptoStreamMode.Write)
                    Using swEncrypt As New StreamWriter(csEncrypt)
                        ' Write all data to the stream.
                        swEncrypt.Write(plainText)
                    End Using
                End Using
            Finally
                ' Clear the RijndaelManaged object.
                If Not (aesAlg Is Nothing) Then
                    aesAlg.Clear()
                End If
            End Try

            ' Return the encrypted bytes from the memory stream.
            Return msEncrypt.ToArray()
        End Function

    Here's the actual code I am calling encryptBytesToBytes_AES() with:

        Private Sub btnEncrypt_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnEncrypt.Click
            Dim bZeroKey As Byte() = {&H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0}
            PrintBytesToRTF(encryptBytesToBytes_AES(bZeroKey, bZeroKey, bZeroKey))
        End Sub

    However, I get an exception thrown on swEncrypt.Write(plainText) stating that the 'Length of the data to encrypt is invalid.' But I know that the size of my key, IV, and plaintext are all 16 bytes == 128 bits == aesAlg.BlockSize. Why is it throwing this exception? Is it because the StreamWriter is trying to make a String (ostensibly with some encoding) and it doesn't like &H0 as a value?
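
    For a byte-level reference while debugging, the operation the function is aiming at (AES-128, ECB, no padding, all-zero key and single zero block) can be reproduced in a few lines of Python with the third-party cryptography package (an assumption, shown only to establish the expected output):

        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        key = bytes(16)        # 16 zero bytes, mirroring bZeroKey
        plaintext = bytes(16)  # exactly one 128-bit block, so PaddingMode.None is satisfied

        encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        ciphertext = encryptor.update(plaintext) + encryptor.finalize()
        print(ciphertext.hex())  # the well-known AES-128 zero-key, zero-block vector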

  • nhibernate not taking mappings from assembly

    - by cvista
    Hi, I'm using FNH and the Castle NHibernate facility. I followed the advice from Mike Hadlow here: http://mikehadlow.blogspot.com/2009/01/integrating-fluent-nhibernate-and.html. Here is my FluentNHibernateConfigurationBuilder:

        public Configuration GetConfiguration(IConfiguration facilityConfiguration)
        {
            var defaultConfigurationBuilder = new DefaultConfigurationBuilder();
            var configuration = defaultConfigurationBuilder.GetConfiguration(facilityConfiguration);
            configuration.AddMappingsFromAssembly(typeof(User).Assembly);
            return configuration;
        }

    I know the facility is picking it up, as I can break inside that method and step through it. However, when it's done, none of the mappings are created, and I get the following error when I try to save an entity:

        No persister for: IsItGd.Model.Entities.User

    Here is my User class:

        // simple model of web user
        public class User
        {
            public virtual int Id { get; set; }
            public virtual string FullName { get; set; }
        }

    and here is the mapping:

        public class UserMap : ClassMap<User>
        {
            public UserMap()
            {
                Id(x => x.Id);
                Map(x => x.FullName);
            }
        }

    I really can't see what the problem is. The strange thing is that if I use automapping it picks everything up - but I don't want to use automapping, as I can't do certain things in that scenario. Any clues? w://

  • How to include a child object's child object in Entity Framework 5

    - by Brendan Vogt
    I am using Entity Framework 5 code first and ASP.NET MVC 3. I am struggling to get a child object's child object to populate. Below are my classes.

    The Application class:

        public class Application
        {
            // Partial list of properties
            public virtual ICollection<Child> Children { get; set; }
        }

    The Child class:

        public class Child
        {
            // Partial list of properties
            public int ChildRelationshipTypeId { get; set; }
            public virtual ChildRelationshipType ChildRelationshipType { get; set; }
        }

    The ChildRelationshipType class:

        public class ChildRelationshipType
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

    Part of the GetAll method in the repository, which returns all the applications:

        return DatabaseContext.Applications
            .Include("Children");

    The Child class contains a reference to the ChildRelationshipType class. To work with an application's children I would have something like this:

        foreach (Child child in application.Children)
        {
            string childName = child.ChildRelationshipType.Name;
        }

    I get an error here that the object context is already closed. How do I specify that each child object must include the ChildRelationshipType object, like what I did above?

  • best way to set up a VM for development (regarding performance)

    - by raticulin
    I am trying to set up a clean VM that I will use in many of my development setups. Hopefully I will use it many times and for a long time, so I want to get it right and set it up so performance is as good as possible. I have searched for a list of things to do, but strangely found only older posts, and none here. My requirements are:

    - My host is Vista 32-bit, and the guest is Windows 2008 64-bit, using VMware Workstation.
    - The VM should also be able to run on a VMware ESX.
    - I cannot move to other products (VirtualBox etc.), but info about the performance of each one is welcome for reference. Anyway, I guess most advice would apply to other OSes and other VM products.
    - I need network connectivity to my LAN.
    - The guest will run many Java processes and a DB, and will perform lots of file I/O.

    What I have found so far:

    1. "HOWTO: Squeeze Every Last Drop of Performance Out of Your Virtual PCs": it's an old post, and about Virtual PC, but I guess most things still apply (and also apply to VMware).
    2. I guess it makes a difference to disable all unnecessary services, but the ones mentioned in 1 seem like too few; I specifically always disable Windows Search. Any other service I should disable?

  • Silverlight - how to render image from non-visible data bound user control?

    - by MagicMax
    Hello! I have the following situation: I'd like to build a timeline control. So I have a UserControl with an ItemsControl on it (every row represents some person). That ItemsControl contains another ItemsControl as its ItemsControl.ItemTemplate - it shows e.g. events for the person, arranged by event date. So it looks like a kind of grid, with dates as column headers and e.g. people as row headers:

        ........................|.2010.01.01.....2010.01.02.....2010.01.03
        Adam Smith....|......[some event#1].....[some event#2]......
        John Dow.......|...[some event#3].....[some event#4].........

    I can have a lot of persons (ItemsControl #1 - 100-200 items) and a lot of events occurring on a given day (1-10-30 events per person in one day). The problem is that when the user scrolls ItemsControl #1/#2, it happens too slowly, because a lot of elements have to be rendered in one go (as I have e.g. several text boxes and other elements in the description of a particular event). Question #1 - how can I improve this? Maybe somebody knows a better way to build such a user control? I should mention that I'm using a custom virtual panel, based on a custom virtual panel implementation found somewhere on the internet... Question #2 - I'd like to create an image with the help of WriteableBitmap, render the data-bound control to the image, and show the image instead of a lot of elements. The problem is that I'm trying to render an invisible data-bound control (created in code behind) and it has ActualWidth/Height equal to zero (so nothing is rendered). How can I solve this? Thank you very much for your help!

  • Linking the Linker script file to source code

    - by user304097
    Hello, I am new to the GNU compiler. I have a C source code file which contains some structures and variables, and I need to place certain variables at particular locations. So I have written a linker script file and used __attribute__((section("name"))) at the variable declarations in the C source code. I am using the GNU compiler (Cygwin) to compile the source code and create a .hex file using the -objcopy option, but I don't understand how to link my linker script file into the compilation so that the variables are relocated accordingly. I am attaching the linker script file and the C source file for reference. Please help me link the linker script file to my source code while creating the .hex file using GNU tools.

        /* linker script file */
        /* defining memory regions */
        MEMORY
        {
            base_table_ram : org = 0x00700000, len = 0x00000100  /* base table area for BASE table */
            mem2           : org = 0x00800200, len = 0x00000300  /* other structure variables */
        }

        /* Sections directive definitions */
        SECTIONS
        {
            BASE_TABLE : { } > base_table_ram
            GROUP :
            {
                .text : { } { *(SEG_HEADER) }
                .data : { } { *(SEG_HEADER) }
                .bss  : { } { *(SEG_HEADER) }
            } > mem2
        }

    C source code:

        const UINT8 un8_Offset_1 __attribute__((section("BASE_TABLE"))) = 0x1A;
        const UINT8 un8_Offset_2 __attribute__((section("BASE_TABLE"))) = 0x2A;
        const UINT8 un8_Offset_3 __attribute__((section("BASE_TABLE"))) = 0x3A;
        const UINT8 un8_Offset_4 __attribute__((section("BASE_TABLE"))) = 0x4A;
        const UINT8 un8_Offset_5 __attribute__((section("BASE_TABLE"))) = 0x5A;
        const UINT8 un8_Offset_6 __attribute__((section("SEG_HEADER"))) = 0x6A;

    My intention is to place the variables of section "BASE_TABLE" at the address defined in the linker script file, and the remaining variables at the "SEG_HEADER" location defined in the linker script file above. But after compilation, when I look into the .hex file, the different section variables are located in different hex records at an address of 0x00, not the one given in the linker script file. Please help me link the linker script file to the source code. Are there any command line options to link the linker script file? If so, please provide info on how to use those options. Thanks in advance, SureshDN.

  • Technically, why are processes in Erlang more efficient than OS threads?

    - by Jonas
    Spawning processes seems to be much more efficient in Erlang than starting new threads via the OS (i.e. in Java or C). Erlang spawns thousands and millions of processes in a short period of time. I know that they are different, since Erlang does per-process GC, and Erlang processes are implemented the same way on all platforms, which isn't the case with OS threads. But I don't fully understand, technically, why Erlang processes are so much more efficient and have a much smaller memory footprint per process. Both the OS and the Erlang VM have to do scheduling and context switches, and keep track of the values in the registers, and so on... Simply put: why aren't OS threads implemented in the same way as processes in Erlang? Do they have to support something more? And why do they need a bigger memory footprint? Technically, why are processes in Erlang more efficient than OS threads? And why can't threads in the OS be implemented and managed in the same efficient way?

  • SQLite/Fluent NHibernate integration test harness initialization not repeatable after large data set

    - by Mark Rogers
    In one of my main data integration test harnesses I create and use Fluent NHibernate's SingleConnectionSessionSourceForSQLiteInMemoryTesting to get a fresh session for each test. After each test, I close the connection, session, and session factory, and throw out the nested StructureMap container they came from. This works for almost any simple data integration test I can think of, including ones that utilize Fluent NHib's PersistenceSpecification object. When I test the application's lengthy database bootstrapping process, which creates and saves thousands of domain objects, I start seeing issues. It's not that the setup and teardown fail; in fact, the test successfully bootstraps the in-memory database just as the application would bootstrap the real database in the production environment. The problem occurs when the database bootstrapping runs a second time on a new in-memory database, with a new session and session factory. The error is:

        NHibernate.StaleStateException : Unexpected row count: 0; expected: 1

    The row count is indeed unexpected; the row that the application under test is looking for should be in the session. You see, it's not that any data from the last integration test is sticking around, it's that for some reason the session just stops working mid-database-bootstrap. And I've looked everywhere for a place I might be holding on to an old session and I can't find one. I've searched through the code for static singleton objects, but there are none anywhere near the code in question. I have a couple of StructureMap InstanceScope singletons, but they are getting thrown out with each nested container that is discarded after every test teardown. I've tried every possible variation on disposing and closing every object involved with each test teardown, and it still fails on this lengthy database bootstrap. Non-bootstrap-related database tests appear to work fine. I'm starting to run out of options and may have to surrender lengthy database integration tests in favor of WatiN-based acceptance tests. Can anyone give me any clue about how I can figure out why some of my SingleConnectionSessionSourceForSQLiteInMemoryTesting sessions aren't repeatable? Any advice at all about how to make an NHibernate SQLite database integration test harness repeatable?
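
    Worth ruling out here is a property of the engine itself: an SQLite :memory: database lives and dies with its connection, so any code path that quietly opens a second connection mid-run sees a brand-new, empty schema. A plain-Python sketch of that behaviour (illustration only, independent of NHibernate):

        import sqlite3

        a = sqlite3.connect(':memory:')
        a.execute('CREATE TABLE users (id INTEGER PRIMARY KEY)')
        a.execute('INSERT INTO users VALUES (1)')
        print(a.execute('SELECT COUNT(*) FROM users').fetchone())  # (1,)

        # A second connection to ':memory:' is a separate, empty database.
        b = sqlite3.connect(':memory:')
        try:
            b.execute('SELECT COUNT(*) FROM users')
        except sqlite3.OperationalError as e:
            print(e)  # no such table: users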

  • ASP.NET MVC 2 RTM + ModelState error at Id property

    - by Zote
    I have these classes:

        public class GroupMetadata
        {
            [HiddenInput(DisplayValue = false)]
            public int Id { get; set; }

            [Required]
            public string Name { get; set; }
        }

        [MetadataType(typeof(GroupMetadata))]
        public partial class Group
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
        }

    And this action:

        [HttpPost]
        public ActionResult Edit(Group group)
        {
            if (ModelState.IsValid)
            {
                // Logic to save
                return RedirectToAction("Index");
            }
            return View(group);
        }

    That's my view:

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<Group>" %>
        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
            <% using (Html.BeginForm()) {%>
                <fieldset>
                    <%= Html.EditorForModel() %>
                    <p>
                        <input type="submit" value="Save" />
                    </p>
                </fieldset>
            <% } %>
            <div>
                <%=Html.ActionLink("Back", "Index") %>
            </div>
        </asp:Content>

    But ModelState is always invalid! As far as I can see, for MVC validation 0 is invalid, but for me it is valid. How can I fix this, given that I didn't put any kind of validation on the Id property? UPDATE: I don't know how or why, but renaming Id (in my case to PK) solves the problem. Do you know if this is an issue in my logic/configuration, or is it a bug or expected behavior? Thank you.

  • Should I release NSString before assigning a new value to it?

    - by Elliot Chen
    Hi, please give me some suggestions about how to change an NSString variable. In my class, I declare a member variable:

        NSString *m_movieName;
        ...
        @property (nonatomic, retain) NSString *m_movieName;

    In the viewDidLoad method, I assign a default name to this variable:

        - (void)viewDidLoad {
            NSString *s1 = [[NSString alloc] initWithFormat:@"Forrest Gump"];
            self.m_movieName = s1;
            ...
            [s1 release];
            [super viewDidLoad];
        }

    In some other method, I want to give a new name to this variable, so I do this:

        - (void)someFunc {
            NSString *s2 = [[NSString alloc] initWithFormat:@"Brave Heart"];
            //[self.m_movieName release]; // ??????? Should this be performed here?
            self.m_movieName = s2;
            [s2 release];
        }

    I know an NSString * variable is just a pointer to an allocated memory block, and assignment through the retain property will increment that memory block's use count. For my situation, should I release m_movieName before assigning a new value to it? If I do not release it (via [self.m_movieName release]), when and where will the previous block be released? Thanks for your help very much!

  • getting proxies of the correct type in nhibernate

    - by Nir
    I have a problem with uninitialized proxies in NHibernate.

    The Domain Model

    Let's say I have two parallel class hierarchies: Animal, Dog, Cat and AnimalOwner, DogOwner, CatOwner, where Dog and Cat both inherit from Animal, and DogOwner and CatOwner both inherit from AnimalOwner. AnimalOwner has a reference of type Animal called OwnedAnimal. Here are the classes in the example:

        public abstract class Animal
        {
            // some properties
        }

        public class Dog : Animal
        {
            // some more properties
        }

        public class Cat : Animal
        {
            // some more properties
        }

        public class AnimalOwner
        {
            public virtual Animal OwnedAnimal { get; set; }
            // more properties...
        }

        public class DogOwner : AnimalOwner
        {
            // even more properties
        }

        public class CatOwner : AnimalOwner
        {
            // even more properties
        }

    The classes have proper NHibernate mappings, all properties are persistent, and everything that can be lazy loaded is lazy loaded. The application business logic only lets you set a Dog in a DogOwner and a Cat in a CatOwner.

    The Problem

    I have code like this:

        public void ProcessDogOwner(DogOwner owner)
        {
            Dog dog = (Dog)owner.OwnedAnimal;
            ....
        }

    This method can be called by many different methods. In most cases the dog is already in memory and everything is OK, but rarely the dog isn't already in memory - in this case I get an NHibernate "uninitialized proxy", and the cast throws an exception because NHibernate generates a proxy for Animal and not for Dog. I understand that this is how NHibernate works, but I need to know the type without loading the object - or, more correctly, I need the uninitialized proxy to be a proxy of Cat or Dog and not a proxy of Animal.

    Constraints

    - I can't change the domain model; the model is handed to me by another department. I tried to get them to change the model and failed.
    - The actual model is much more complicated than the example, and the classes have many references between them; using eager loading or adding joins to the queries is out of the question for performance reasons.
    - I have full control of the source code, the hbm mapping, and the database schema, and I can change them any way I want (as long as I don't change the relationships between the model classes).
    - I have many methods like the one in the example and I don't want to modify all of them.

    Thanks, Nir

  • Fastest PNG decoder for .NET

    - by sboisse
    Our web server needs to process many compositions of large images together before sending the results to web clients. This process is performance critical because the server can receive several thousand requests per hour. Right now our solution loads PNG files (around 1MB each) from the HD and sends them to the video card, so the composition is done on the GPU. We first tried loading our images using the PNG decoder exposed by the XNA API. We saw that the performance was not too good. To understand whether the problem was loading from the HD or decoding the PNG, we modified the test to load the file into a memory stream first, and then send that memory stream to the .NET PNG decoder. The difference in performance between using XNA and using the System.Windows.Media.Imaging.PngBitmapDecoder class is not significant; we get roughly the same levels of performance. Our benchmarks show the following results:

        Load images from disk:                        37.76ms    1%
        Decode PNGs:                                2816.97ms   77%
        Load images on video hardware:               196.67ms    5%
        Composition:                                  87.80ms    2%
        Get composition result from video hardware:  166.21ms    5%
        Encode to PNG:                               318.13ms    9%
        Store to disk:                                 3.96ms    0%
        Clean up:                                     53.00ms    1%
        Total:                                      3680.50ms  100%

    From these results we see that the slowest part is decoding the PNGs. So we are wondering if there is a PNG decoder we could use that would allow us to reduce the decoding time. We also considered keeping the images uncompressed on the hard disk, but then each image would be 10MB in size instead of 1MB, and since there are several tens of thousands of these images stored on the hard disk, it is not possible to store them all without compression.
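
    When evaluating decoder candidates, a decode-only micro-benchmark keeps disk I/O out of the measurement, much as the memory-stream experiment above does. A sketch of the shape of such a harness, here in Python with the Pillow package (an assumption; the same structure applies to any decoder under test, and 'sample.png' is a hypothetical input):

        import io
        import time
        from PIL import Image  # pip install Pillow (assumed dependency)

        def time_decode(png_bytes, iterations=50):
            start = time.perf_counter()
            for _ in range(iterations):
                img = Image.open(io.BytesIO(png_bytes))
                img.load()  # force the full decode; open() alone is lazy
            return (time.perf_counter() - start) / iterations

        with open('sample.png', 'rb') as f:  # hypothetical ~1MB input
            data = f.read()
        print(f"avg decode: {time_decode(data) * 1000:.2f} ms")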

  • What does "cpuid level" mean? Asking just out of curiosity

    - by ogzylz
    For example, here is the info for just 2 cores of a 16-core machine. What does the line "cpuid level : 6" mean? If you can also provide info about the lines "bogomips : 5992.10" and "clflush size : 64", I would appreciate it.

        processor       : 0
        vendor_id       : GenuineIntel
        cpu family      : 15
        model           : 6
        model name      : Intel(R) Xeon(TM) CPU 3.00GHz
        stepping        : 8
        cpu MHz         : 2992.689
        cache size      : 4096 KB
        physical id     : 0
        siblings        : 4
        core id         : 0
        cpu cores       : 2
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 6
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx cid cx16 xtpr lahf_lm
        bogomips        : 5992.10
        clflush size    : 64
        cache_alignment : 128
        address sizes   : 40 bits physical, 48 bits virtual
        power management:

        processor       : 1
        vendor_id       : GenuineIntel
        cpu family      : 15
        model           : 6
        model name      : Intel(R) Xeon(TM) CPU 3.00GHz
        stepping        : 8
        cpu MHz         : 2992.689
        cache size      : 4096 KB
        physical id     : 1
        siblings        : 4
        core id         : 0
        cpu cores       : 2
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 6
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx cid cx16 xtpr lahf_lm
        bogomips        : 5985.23
        clflush size    : 64
        cache_alignment : 128
        address sizes   : 40 bits physical, 48 bits virtual
        power management:
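
    For pulling just those fields out of /proc/cpuinfo programmatically, a small Linux-only sketch:

        fields = ('cpuid level', 'bogomips', 'clflush size')

        with open('/proc/cpuinfo') as f:
            for line in f:
                key, sep, value = line.partition(':')
                if sep and key.strip() in fields:
                    print(key.strip(), '=', value.strip())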

  • Problem executing script using Python and subprocess.call yet works in Bash

    - by Antoine Benkemoun
    Hello, for the first time I am asking for a little bit of help over here, as I am more of a ServerFault person. I am doing some scripting in Python and I've been loving the language so far, yet I have this little problem which is keeping my script from working. Here is the code line in question:

        subprocess.call('xen-create-image --hostname ' + nom + ' --memory ' + memory +
                        ' --partitions=/root/scripts/part.tmp --ip ' + ip +
                        ' --netmask ' + netmask + ' --gateway ' + gateway +
                        ' --passwd', shell=True)

    I have tried the same thing with os.popen. All the variables are correctly set. When I execute the command in question in my regular Linux shell, it works perfectly fine, but when I execute it from my Python script, I get bizarre errors. I even replaced subprocess.call() with the print function to make sure I am using the exact output of the command. I went looking into the environment variables of my shell, but they are pretty much the same... I'll post the error I am getting, but I'm not sure it's relevant to my problem:

        Use of uninitialized value $lines[0] in substitution (s///) at /usr/share/perl5/Config/IniFiles.pm line 614.
        Use of uninitialized value $_ in pattern match (m//) at /usr/share/perl5/Config/IniFiles.pm line 628.

    I am not a Python expert, so I'm most likely missing something here. Thank you in advance for your help, Antoine
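
    One thing worth trying before digging further into the environment: pass the command as an argument list instead of a concatenated shell string. This removes the shell from the picture and sidesteps any quoting or word-splitting differences between the interactive shell and the one subprocess spawns. A sketch with hypothetical values standing in for the real variables:

        import subprocess

        nom, memory = 'vm01', '256'  # hypothetical values
        ip, netmask, gateway = '10.0.0.2', '255.255.255.0', '10.0.0.1'

        cmd = [
            'xen-create-image',
            '--hostname', nom,
            '--memory', memory,
            '--partitions=/root/scripts/part.tmp',
            '--ip', ip,
            '--netmask', netmask,
            '--gateway', gateway,
            '--passwd',
        ]
        subprocess.call(cmd)  # no shell=True needed when passing a list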

  • Looking for advice on importing large dataset in sqlite and Cocoa/Objective-C

    - by jluckyiv
    I have a fairly large hierarchical dataset I'm importing. The total size of the database after import is about 270MB in SQLite. My current method works, but I know I'm hogging memory as I do it. For instance, if I run with Zombies, my system freezes up (although it will execute just fine if I don't use that instrument). I was hoping for some algorithm advice. I have three hierarchical tables comprising about 400,000 records. The highest level has about 30 records, the next has about 20,000, and the last has the balance. Right now I'm using nested for loops to import. I know I'm creating an unreasonably large object graph, but I'm also looking to serialize to JSON or XML, because I want to break up the records into downloadable chunks for the end user to import a la carte. I have the code written to do the serialization, but I'm wondering if I can serialize the object graph if I only have pieces in memory. Here's pseudocode showing the basic process for the SQLite import. I left out the unnecessary detail.

        [database open];
        [database beginTransaction];

        NSArray *firstLevels = [[FirstLevel fetchFromURL:url] retain];
        for (FirstLevel *firstLevel in firstLevels)
        {
            [firstLevel save];
            int id1 = [firstLevel primaryKey];

            NSArray *secondLevels = [[SecondLevel fetchFromURL:url] retain];
            for (SecondLevel *secondLevel in secondLevels)
            {
                [secondLevel saveWithForeignKey:id1];
                int id2 = [secondLevel primaryKey];

                NSArray *thirdLevels = [[ThirdLevel fetchFromURL:url] retain];
                for (ThirdLevel *thirdLevel in thirdLevels)
                {
                    [thirdLevel saveWithForeignKey:id2];
                }

                [database commit];
                [database beginTransaction];
                [thirdLevels release];
            }
            [secondLevels release];
        }

        [database commit];
        [database release];
        [firstLevels release];
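
    The transaction-batching pattern in the pseudocode, reduced to a language-neutral sketch (Python's sqlite3 here, with a hypothetical flat table, purely to show the commit-every-N-rows idea that bounds how much any one transaction holds in memory):

        import sqlite3

        conn = sqlite3.connect(':memory:')
        conn.execute('CREATE TABLE records (parent_id INTEGER, payload TEXT)')

        BATCH = 1000
        rows = ((i % 30, f'record {i}') for i in range(400_000))  # ~400k records, as in the post

        batch = []
        for row in rows:
            batch.append(row)
            if len(batch) == BATCH:
                conn.executemany('INSERT INTO records VALUES (?, ?)', batch)
                conn.commit()  # bound the memory held by any one transaction
                batch.clear()
        if batch:
            conn.executemany('INSERT INTO records VALUES (?, ?)', batch)
            conn.commit()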

  • strategy for choosing proper object and proper method

    - by zerkms
    In the code below, the first if/else block (there will be more conditions than just "worker") selects the proper filter object. Then, in the same conditional style, a second block selects which filter method the filter object should apply. This code is silly.

        public class Filter
        {
            public static List<data.Issue> fetch(string type, string filter)
            {
                Filter_Base filter_object = new Filter_Base(filter);
                if (type == "worker")
                {
                    filter_object = new Filter_Worker(filter);
                }
                else if (type == "dispatcher")
                {
                    filter_object = new Filter_Dispatcher(filter);
                }

                List<data.Issue> result = new List<data.Issue>();
                if (filter == "new")
                {
                    result = filter_object.new_issues();
                }
                else if (filter == "ended")
                {
                    result = filter_object.ended_issues();
                }
                return result;
            }
        }

        public class Filter_Base
        {
            protected string _filter;

            public Filter_Base(string filter)
            {
                _filter = filter;
            }

            public virtual List<data.Issue> new_issues()
            {
                return new List<data.Issue>();
            }

            public virtual List<data.Issue> ended_issues()
            {
                return new List<data.Issue>();
            }
        }

        public class Filter_Worker : Filter_Base
        {
            public Filter_Worker(string filter) : base(filter) { }

            public override List<data.Issue> new_issues()
            {
                return (from i in data.db.GetInstance().Issues
                        where (new int[] { 4, 5 }).Contains(i.RequestStatusId)
                        select i).Take(10).ToList();
            }
        }

        public class Filter_Dispatcher : Filter_Base
        {
            public Filter_Dispatcher(string filter) : base(filter) { }
        }

    It will be used something like this:

        Filter.fetch("worker", "new");

    This call means that for a user who belongs to the "worker" role, only "new" issues will be fetched (this is some kind of small and simple CRM). Or another:

        Filter.fetch("dispatcher", "ended"); // here we get finished issues for the dispatcher role

    Any proposals on how to improve it?
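
    One common way to collapse both if/else chains is a registry: a mapping from role name to filter class, with the method chosen by name. Sketched here in Python rather than C#, purely to illustrate the dispatch idea, with stubs standing in for the real queries:

        class FilterBase:
            def new_issues(self):
                return []

            def ended_issues(self):
                return []

        class WorkerFilter(FilterBase):
            def new_issues(self):
                return ['issue-4', 'issue-5']  # stub for the real database query

        class DispatcherFilter(FilterBase):
            pass

        FILTERS = {'worker': WorkerFilter, 'dispatcher': DispatcherFilter}

        def fetch(role, period):
            filter_obj = FILTERS.get(role, FilterBase)()      # role picks the class
            method = getattr(filter_obj, f'{period}_issues')  # period picks the method
            return method()

        print(fetch('worker', 'new'))        # ['issue-4', 'issue-5']
        print(fetch('dispatcher', 'ended'))  # []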
