Search Results

Search found 41025 results on 1641 pages for 'in memory database'.

  • Reusing a NSString variable - does it cause a memory leak?

    - by Chris S
    Coming from a .NET background I'm used to reusing string variables for storage, so is the code below likely to cause a memory leak? The code targets the iPhone/iPod touch (iPhone OS), so there is no automatic GC.

        -(NSString*) stringExample {
            NSString *result = @"example";
            result = [result stringByAppendingString:@" test"]; // where does "example" go?
            return result;
        }

    What confuses me is that NSStrings are immutable, but you can reuse an 'immutable' variable with no problem.

    Read the article

  • SQL Server many-to-many design recommendation

    - by Jean-Philippe Brabant
    I have a SQL Server database with two tables: Users and Achievements. My users can have multiple achievements, so it's a many-to-many relationship. At school we learned to create an associative table for that sort of relation, which means creating a table with a UserID and an AchievementID. But if I have 500 users and 50 achievements, that could lead to 25,000 rows. As an alternative, I could add a binary field to my Users table. For example, if that field contained 10010, that would mean this user unlocked the first and the fourth achievements. Is there another way? And which one should I use?
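
    A minimal sketch of the associative-table approach, shown here with SQLite via Python; the table and column names are illustrative, not taken from the actual schema, and SQL Server DDL would differ only in minor details:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE Users            (UserID INTEGER PRIMARY KEY, Name TEXT);
            CREATE TABLE Achievements     (AchievementID INTEGER PRIMARY KEY, Title TEXT);
            CREATE TABLE UserAchievements (
                UserID        INTEGER REFERENCES Users(UserID),
                AchievementID INTEGER REFERENCES Achievements(AchievementID),
                PRIMARY KEY (UserID, AchievementID)   -- one narrow row per unlock, no duplicates
            );
        """)

        conn.execute("INSERT INTO Users VALUES (1, 'alice')")
        conn.executemany("INSERT INTO Achievements VALUES (?, ?)",
                         [(1, 'First steps'), (4, 'Marathon')])
        # Recording that user 1 unlocked achievements 1 and 4 is just two rows.
        conn.executemany("INSERT INTO UserAchievements VALUES (?, ?)", [(1, 1), (1, 4)])

    Even at 25,000 rows the junction table is tiny by database standards, and unlike a bit-field it can be indexed, joined and extended later (for example with an unlock date).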

    Read the article

  • choose append to existing backup instead of overwrite

    - by aron
    Hello, I have a database and I made its first backup 2 days ago. Then yesterday I spent an entire day adding new records. This morning I ran a backup, but I selected "append to existing backup set", as pictured below. I just ran a restore and found that it wiped out all my data from yesterday and restored it from the backup of 2 days ago, not the version from this morning's backup. I zipped this backup file to be safe. I changed some data in the DB, then I ran the backup again, but this time I selected "overwrite all existing backup sets". Now when I restore the DB it seems to restore the data correctly. I think I learned a lesson here; correct me if I'm wrong. My question is: did I lose an entire day of work? I still have this morning's backup .bak file safe in a zip. Is there any way I can restore it with the right data?

    Read the article

  • Combining FileStream and MemoryStream to avoid disk accesses/paging while receiving gigabytes of data?

    - by w128
    I'm receiving a file as a stream of byte[] data packets (total size isn't known in advance) that I need to store somewhere before processing it immediately after it's been received (I can't do the processing on the fly). Total received file size can vary from as small as 10 KB to over 4 GB.

    One option for storing the received data is to use a MemoryStream, i.e. a sequence of MemoryStream.Write(bufferReceived, 0, count) calls to store the received packets. This is very simple, but will obviously result in an out-of-memory exception for large files. An alternative is to use a FileStream, i.e. FileStream.Write(bufferReceived, 0, count). This way, no out-of-memory exceptions will occur, but what I'm unsure about is bad performance due to disk writes (which I don't want to occur as long as plenty of memory is still available) - I'd like to avoid disk access as much as possible, but I don't know of a way to control this.

    I did some testing and most of the time there seems to be little performance difference between, say, 10,000 consecutive calls of MemoryStream.Write() vs FileStream.Write(), but a lot seems to depend on buffer size and the total amount of data in question (i.e. the number of writes). Obviously, MemoryStream size reallocation is also a factor.

    Does it make sense to use a combination of MemoryStream and FileStream, i.e. write to a memory stream by default, but once the total amount of data received is over e.g. 500 MB, write it to a FileStream; then read in chunks from both streams for processing the received data (first process 500 MB from the MemoryStream, dispose of it, then read from the FileStream)?

    Another solution is to use a custom memory stream implementation that doesn't require continuous address space for the internal array allocation (i.e. a linked list of memory streams); this way, at least on 64-bit environments, out-of-memory exceptions should no longer be an issue. Con: extra work, more room for mistakes.

    So how do FileStream and MemoryStream reads/writes behave in terms of disk access and memory caching, i.e. the data size/performance balance? I would expect that as long as enough RAM is available, FileStream would internally read/write from memory (cache) anyway, and virtual memory would take care of the rest. But I don't know how often FileStream will explicitly access the disk when being written to. Any help would be appreciated.
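
    For what it's worth, the "spill to disk past a threshold" idea exists as a ready-made primitive in some standard libraries. A minimal sketch using Python's tempfile.SpooledTemporaryFile; the 500 MB threshold and the packet source are illustrative assumptions, and .NET would need a hand-rolled equivalent of the same behaviour:

        import tempfile

        THRESHOLD = 500 * 1024 * 1024          # assumed spill-over point (500 MB)

        def receive_packets():
            """Stand-in for the real packet source (illustrative only)."""
            yield b"packet-1"
            yield b"packet-2"

        buf = tempfile.SpooledTemporaryFile(max_size=THRESHOLD)
        for packet in receive_packets():
            buf.write(packet)                  # stays in RAM until THRESHOLD, then rolls over to disk

        buf.seek(0)                            # rewind before processing
        total = 0
        while chunk := buf.read(64 * 1024):    # process in fixed-size chunks
            total += len(chunk)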

    Read the article

  • How to represent a Many-To-Many relationship in XML or other simple file format?

    - by CSharperWithJava
    I have a list management application that stores its data in a many-to-many relationship database, i.e. a note can be in any number of lists, and a list can have any number of notes. I can also export this data to an XML file and import it in another instance of my app for sharing lists between users. However, this is based on a legacy system where the list-to-note relationship was one-to-many (ideal for XML). Now a note that is in multiple lists is essentially split into two identical rows in the DB, and all relation between them is lost. Question: how can I represent this many-to-many relationship in a simple, standard file format? (Preferably XML, to maintain backwards compatibility.)
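
    One common pattern is to declare each note once with an id and have lists reference notes by that id. A minimal sketch with Python's xml.etree.ElementTree; the element and attribute names are illustrative assumptions, not the app's actual export format:

        import xml.etree.ElementTree as ET

        root = ET.Element("data")

        # Each note is declared exactly once, with a stable id.
        notes = ET.SubElement(root, "notes")
        ET.SubElement(notes, "note", id="n1").text = "Buy milk"
        ET.SubElement(notes, "note", id="n2").text = "Call Bob"

        # Lists only reference note ids, so one note can appear in many lists.
        lists = ET.SubElement(root, "lists")
        groceries = ET.SubElement(lists, "list", name="Groceries")
        ET.SubElement(groceries, "noteref", ref="n1")
        today = ET.SubElement(lists, "list", name="Today")
        ET.SubElement(today, "noteref", ref="n1")
        ET.SubElement(today, "noteref", ref="n2")

        print(ET.tostring(root, encoding="unicode"))

    On import, the note elements are loaded first and the noteref elements are resolved against them, which restores the shared-note relationship that the legacy one-to-many export loses.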

    Read the article

  • How can I run ARM code from external memory?

    - by samoz
    I am using an LPC2132 ARM chip to develop a program. However, my program has grown larger than the space on the chip. How can I connect my chip to some sort of external memory chip to hold additional executable code? Is this possible? If not, what do people normally do when they run out of chip space?

    Read the article

  • Unique constraint on more than 10 columns

    - by tk
    I have a time-series simulation model which has more than 10 input variables. The number of distinct simulation instances would be more than 1 million, and each simulation instance generates a few output rows every day. To save the simulation results in a relational database, I designed tables like this:

        Table SimulationModel {
            simul_id : integer (primary key),
            input0   : string or numeric,
            input1   : string or numeric,
            ...
        }
        Table SimulationOutput {
            dt       : DateTime (primary key),
            simul_id : integer (primary key),
            output0  : numeric,
            ...
        }

    My question is, is it fine to put a unique constraint on all of the input columns of the SimulationModel table? If it is not a good idea, what other options do I have to make sure each model is unique?
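
    One commonly suggested alternative is to store a hash of the (normalized) input values and put the unique constraint on that single column instead of on 10+ columns. A minimal sketch in Python/SQLite; the two-input schema and column names are illustrative assumptions:

        import hashlib
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE SimulationModel (
                simul_id   INTEGER PRIMARY KEY,
                input0     TEXT,
                input1     TEXT,
                input_hash TEXT UNIQUE        -- stands in for UNIQUE(input0, input1, ...)
            );
        """)

        def input_hash(*inputs):
            # Join with a separator so ('ab', 'c') and ('a', 'bc') cannot collide.
            return hashlib.sha256("\x1f".join(map(str, inputs)).encode()).hexdigest()

        conn.execute("INSERT INTO SimulationModel (input0, input1, input_hash) VALUES (?, ?, ?)",
                     ("a", "b", input_hash("a", "b")))

    The trade-off is that every writer must compute the hash the same way; a wide composite unique constraint remains perfectly legal if the database's index-width limits allow it.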

    Read the article

  • Best data structure for this relationship...

    - by Travis
    I have a question about database 'style'. I need a method of storing user accounts. Some users "own" other user accounts (sub-accounts); however, not all user accounts are owned, just some. Is it best to represent this using a table structure like so...

        TABLE accounts (
            ID
            ownerID -> ID
            name
        )

    ...even though there will be some NULL values in the ownerID column for accounts that do not have an owner? Or would it be stylistically preferable to have two tables, like so:

        TABLE accounts (
            ID
            name
        )
        TABLE ownedAccounts (
            accountID -> accounts(ID)
            ownerID   -> accounts(ID)
        )

    Thanks for the advice.
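
    A minimal sketch of the first option, a nullable self-referencing ownerID, in Python/SQLite; the column names mirror the pseudo-DDL above, and the sample rows are illustrative:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE accounts (
                ID      INTEGER PRIMARY KEY,
                ownerID INTEGER REFERENCES accounts(ID),  -- NULL means "not owned"
                name    TEXT NOT NULL
            );
        """)
        conn.execute("INSERT INTO accounts VALUES (1, NULL, 'alice')")      # top-level account
        conn.execute("INSERT INTO accounts VALUES (2, 1,    'alice-sub')")  # owned by account 1

        # Sub-accounts of a given owner:
        subs = conn.execute("SELECT name FROM accounts WHERE ownerID = ?", (1,)).fetchall()

    NULLs in a foreign-key column are common in practice; the separate ownedAccounts table tends to pay off mainly when ownership needs attributes of its own or could ever become many-to-many.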

    Read the article

  • 3G dongle and memory card detection

    - by user212632
    My question is about a 3G dongle (Huawei E1752) that I use for my internet on Ubuntu 12.04. The big issue is that Ubuntu only recognises the dongle if I plug it in before booting up. If I somehow lose the connection (due to a weak network signal), the only way to use the 3G dongle again is to reboot my machine, which becomes a pain. I have the same issue with the memory card reader of my laptop, which only reads a card inserted before booting up. My laptop is an Acer V3-571G. At first I thought it was an issue of the model being quite new, but I have been updating my Ubuntu for a while and the issue has remained the same.

    Read the article

  • Is valgrind crazy or is this a genuine std map iterator memory leak?

    - by Alberto Toglia
    Well, I'm very new to Valgrind and memory leak profilers in general. And I must say it is a bit scary when you start using them, because you can't stop wondering how many leaks you might have left unsolved before! To the point: as I'm not an experienced C++ programmer, I would like to check whether this is definitely a memory leak, or whether Valgrind is reporting a false positive.

        typedef std::vector<int> Vector;
        typedef std::vector<Vector> VectorVector;
        typedef std::map<std::string, Vector*> MapVector;
        typedef std::pair<std::string, Vector*> PairVector;
        typedef std::map<std::string, Vector*>::iterator IteratorVector;

        VectorVector vv;
        MapVector m1;
        MapVector m2;

        vv.push_back(Vector());
        m1.insert(PairVector("one", &vv.back()));
        vv.push_back(Vector());
        m2.insert(PairVector("two", &vv.back()));

        IteratorVector i = m1.find("one");
        i->second->push_back(10);
        m2.insert(PairVector("one", i->second));

        m2.clear();
        m1.clear();
        vv.clear();

    Why is that? Shouldn't the clear command call the destructor of every object and every vector? Now, after doing some tests, I found different solutions to the leak: 1) deleting the line i->second->push_back(10); 2) adding a delete i->second; after it's been used; 3) deleting the second vv.push_back(Vector()); and m2.insert(PairVector("two", &vv.back())); statements. Using solution 2) makes Valgrind print: 10 allocs, 11 frees. Is that OK? As I'm not using new, why should I delete? Thanks for any help!

    Read the article

  • Moving from a traditional in memory Java session to persistent storage sessions

    - by Benju
    We have decided to take the plunge and move from using a typical Java session provider in Tomcat/Jetty/etc. to persisting everything to a central datastore. We are looking at using MongoDB for this. A few options come to mind:

    http://wiki.eclipse.org/Jetty/Tutorial/MongoDB_Session_Clustering - This is nice because it will "auto-magically" persist our sessions to a Mongo installation. I am concerned, however, that we will not have fine-grained control over what is happening.

    https://github.com/mattinsler/com.lowereast.guiceymongo/ - GuiceMongo is interesting as it integrates with Guice. Perhaps we could persist everything via this ORM.

    Has anybody had to deal with this kind of move? It seems that moving from in-memory to persistent session storage has a lot of gotchas.

    Read the article

  • Should I Split Tables Relevant to X Module Into Different DBs? (MySQL)

    - by Michael Robinson
    I've inherited a rather large and somewhat messy codebase, and have been tasked with making it faster, less noodly and generally better. Currently we use one big database to hold all data for all aspects of the site. As we need to plan for significant growth in the future, I'm considering splitting tables relevant to specific sections of the site into different databases, so if/when one gets too large for a single server I can more easily migrate some user data to different MySQL servers while retaining overall integrity. I would still need to use joins on some tables across the new databases. Is this a normal thing to do? Would I incur a performance hit because of this?
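
    As long as the databases stay on the same MySQL server, cross-database joins keep working; you simply qualify tables as dbname.tablename in one query (it is the later move to separate servers that breaks plain joins). A minimal sketch of the same qualified-join shape using SQLite's ATTACH from Python, purely as an illustration; the file names and tables are assumptions:

        import sqlite3

        # Two separate database files, created up front for the example.
        with sqlite3.connect("users.db") as udb:
            udb.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")

        conn = sqlite3.connect("orders.db")
        conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, user_id INTEGER)")
        conn.execute("ATTACH DATABASE 'users.db' AS usersdb")

        # Qualified join across the two databases, analogous to `usersdb.users` in MySQL.
        rows = conn.execute("""
            SELECT u.name, o.id
            FROM usersdb.users AS u
            JOIN orders AS o ON o.user_id = u.id
        """).fetchall()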

    Read the article

  • How should I build a simple database package for my python application?

    - by Carson Myers
    I'm building a database library for my application using sqlite3 as the base. I want to structure it like so:

        db/
            __init__.py
            users.py
            blah.py
            etc.py

    So I would do this in Python:

        import db
        db.users.create('username', 'password')

    I'm suffering analysis paralysis (oh no!) about how to handle the database connection. I don't really want to use classes in these modules; it doesn't really seem appropriate to be able to create a bunch of "users" objects that can all manipulate the same database in the same ways -- so inheriting a connection is a no-go. Should I have one global connection to the database that all the modules use, and then put this in each module:

        # users.py
        from db_stuff import connection

    Or should I create a new connection for each module and keep that alive? Or should I create a new connection for every transaction? How are these database connections supposed to be used? The same goes for cursor objects: do I create a new cursor for each transaction, or just one for each database connection?
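
    A minimal sketch of the single-shared-connection option, shown as two small files laid out to match the package above; the connection module name, the database file name and the users table are all illustrative assumptions:

        # db/connection.py -- one lazily opened connection shared by the whole package
        import sqlite3

        _conn = None

        def get():
            global _conn
            if _conn is None:
                _conn = sqlite3.connect("app.sqlite3")
            return _conn


        # db/users.py -- each module imports the shared connection; cursors stay short-lived
        from . import connection

        def create(username, password):
            conn = connection.get()
            with conn:  # commits on success, rolls back on exception
                conn.execute(
                    "INSERT INTO users (username, password) VALUES (?, ?)",
                    (username, password),
                )

    One caveat: sqlite3 connections default to check_same_thread=True, so a single shared connection suits a single-threaded script; a multi-threaded app would want a connection per thread or per request instead.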

    Read the article

  • How to find foreign-key dependencies pointing to one record in Oracle?

    - by daveslab
    Hi folks, I have a very large Oracle database, with many many tables and millions of rows. I need to delete one of them, but want to make sure that dropping it will not break any other dependent rows that point to it as a foreign key record. Is there a way to get a list of all the other records, or at least table schemas, that point to this row? I know that I could just try to delete it myself, and catch the exception, but I won't be running the script myself and need it to run clean the first time through. I have the tools SQL Developer from Oracle, and PL/SQL Developer from AllRoundAutomations at my disposal. Thanks in advance!
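
    For reference, the usual way to answer this without trial-and-error deletes is Oracle's data dictionary: ALL_CONSTRAINTS lists the foreign keys ('R' constraints) that reference the parent table, and ALL_CONS_COLUMNS gives the referencing columns, after which each referencing table can be checked for rows pointing at the specific record. A hedged sketch that only builds the query as a string (the parent table name is a placeholder; run the output in SQL Developer or PL/SQL Developer):

        # Builds the data-dictionary query; it does not connect to Oracle itself.
        PARENT_TABLE = "MY_PARENT_TABLE"   # placeholder: the table that owns the row

        query = f"""
        SELECT c.owner, c.table_name, c.constraint_name, col.column_name
        FROM   all_constraints  c
        JOIN   all_constraints  p   ON c.r_constraint_name = p.constraint_name
                                   AND c.r_owner           = p.owner
        JOIN   all_cons_columns col ON col.constraint_name  = c.constraint_name
                                   AND col.owner            = c.owner
        WHERE  c.constraint_type = 'R'     -- 'R' = referential integrity (foreign key)
        AND    p.table_name = '{PARENT_TABLE}'
        ORDER  BY c.table_name
        """
        print(query)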

    Read the article

  • How can I load a SQLite database from a buffer with the C API?

    - by rockeye
    Hello, I am trying to load a database from memory instead of opening a .sqlite file. I have read the C/C++ API reference but I cannot find the proper method. The buffer I am trying to load is simply an sqlite file loaded into memory; I just want to use this buffer (a const char* array) without touching the filesystem (I could have saved this buffer to a file and then loaded the file, but no). First, I create a memory DB:

        mErrorCode = sqlite3_open_v2(":memory:", &mSqlDatabase, lMode, NULL);

    This returns SQLITE_OK. Then I tried to use the buffer as a statement and call sqlite3_prepare_v2(MyDB, MyBufferData, MyBufferLength, MyStatement, NULL), but it's not really a statement, and it returns an error. Same result if I call sqlite3_exec(MyDB, MyBufferData, NULL, NULL, NULL) directly. I guess there is an appropriate method to achieve this, as it must be common to load a DB from a stream or from decrypted data... Thanks.
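
    For what it's worth, later SQLite releases added exactly this: sqlite3_deserialize() in the C API takes a byte buffer holding a database image and makes it the content of an open connection, with sqlite3_serialize() as its inverse. A minimal sketch of the same pair via Python 3.11+, where they are exposed as Connection.serialize()/deserialize(); here the buffer is produced by serialize() purely so the example is self-contained:

        import sqlite3

        # Build a database image (stands in for the const char* buffer).
        src = sqlite3.connect(":memory:")
        src.execute("CREATE TABLE t (x)")
        src.execute("INSERT INTO t VALUES (42)")
        buffer = src.serialize()                 # bytes: the whole database file image

        # Load a fresh in-memory connection directly from those bytes.
        dst = sqlite3.connect(":memory:")
        dst.deserialize(buffer)
        print(dst.execute("SELECT x FROM t").fetchone())   # (42,)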

    Read the article

  • Options for storing large text blobs in/with an SQL database?

    - by kdt
    Hi, I have some large volumes of text (log files) which may be very large (up to gigabytes). They are associated with entities which I'm storing in a database, and I'm trying to figure out whether I should store them within the SQL database, or in external files. It seems like in-database storage may be limited to 4GB for LONGTEXT fields in MySQL, and presumably other DBs have similar limits. Also, storing in the database presumably precludes any kind of seeking when viewing this data -- I'd have to load the full length of the data to render any part of it, right? So it seems like I'm leaning towards storing this data out-of-DB: are my misgivings about storing large blobs in the database valid, and if I'm going to store them out of the database then are there any frameworks/libraries to help with that? (I'm working in python but am interested in technologies in other languages too)
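
    A minimal sketch of the out-of-database option in Python: the table keeps only a path plus metadata, and viewing a slice of a log becomes a seek into the file rather than a full load. The table layout and file handling are illustrative assumptions:

        import sqlite3

        conn = sqlite3.connect("entities.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS logs (
                            entity_id INTEGER PRIMARY KEY,
                            path      TEXT NOT NULL,   -- the log itself lives on disk
                            size      INTEGER)""")

        def read_slice(entity_id, offset, length):
            """Return `length` bytes of the log starting at `offset`, without loading it all."""
            row = conn.execute("SELECT path FROM logs WHERE entity_id = ?", (entity_id,)).fetchone()
            with open(row[0], "rb") as f:
                f.seek(offset)
                return f.read(length)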

    Read the article

  • C++ - Is it possible to implement memory leak testing in a unit test?

    - by sevaxx
    I'm trying to implement unit testing for my code and I'm having a hard time doing it. Ideally I would like to test some classes not only for correct functionality but also for proper memory allocation/deallocation. I wonder if this check can be done using a unit testing framework. I am using Visual Assert, by the way. I would love to see some sample code, if possible!

    Read the article

  • Does OpenCL allow concurrent writes to same memory address?

    - by Wonko
    Are two (or more) different threads allowed to write to the same memory location in global space in OpenCL? The write always changes a uchar from 0 to 1, so the outcome should be predictable, but I'm getting erratic results in my program, so I'm wondering whether the reason could be that some of the writes fail. Could it help to declare the buffer write-only and copy it to a read-only buffer afterwards?

    Read the article

  • What is a good metaphor for C memory management?

    - by fsmc
    I'm trying to find a good metaphor to explain memory allocation, initialization and freeing in C to a non-technical audience. I've heard pass-by-reference/value explained quite well using a postal-service analogy, but not so much for allocation/deallocation. So far I've thought that the idea of renting a space might work, but I wonder if the SO crew can provide something better.

    Read the article

  • How to save memory when reading a file in PHP?

    - by coolboycsaba
    I have a 200 KB file which I use on multiple pages, but on each page I need only 1-2 lines of that file. How can I read only the lines I need if I know the line number? For example, if I need only the 10th line, I don't want to load all the lines into memory, just the 10th line. Sorry for my bad English!
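
    The usual trick is to iterate the file lazily and stop at the wanted line instead of reading it all into memory; in PHP, SplFileObject's line seeking covers this. A minimal sketch of the idea (shown in Python rather than PHP, purely as an illustration); the file name is an assumption:

        from itertools import islice

        def read_line(path, n):
            """Return line `n` (1-based) without holding the whole file in memory."""
            with open(path, encoding="utf-8") as f:
                return next(islice(f, n - 1, n), None)

        tenth = read_line("data.txt", 10)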

    Read the article
