Search Results

Search found 22961 results on 919 pages for 'memory management'.


  • Insert multiple values using INSERT INTO

    - by Ben McCormack
    In SQL Server 2005, I'm trying to figure out why I'm not able to insert multiple rows into a table with a single statement. The following query, which inserts one record, works fine:

        INSERT INTO [MyDB].[dbo].[MyTable] ([FieldID], [Description])
        VALUES (1000, N'test')

    However, the following query, which specifies more than one row of values, fails:

        INSERT INTO [MyDB].[dbo].[MyTable] ([FieldID], [Description])
        VALUES (1000, N'test'), (1001, N'test2')

    I get this message:

        Msg 102, Level 15, State 1, Line 5
        Incorrect syntax near ','.

    When I looked up the help for INSERT in SQL Server Management Studio, one of the examples showed the VALUES syntax that I used (groups of values in parentheses, separated by commas). The help documentation I found in SQL Server Management Studio looks like it's for SQL Server 2008, so perhaps that's why the insert doesn't work. Either way, I can't figure out why it won't.
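
    Row constructors (multiple parenthesized groups after a single VALUES) were only added in SQL Server 2008, which matches the asker's suspicion: the documentation shows syntax the 2005 engine rejects. A sketch of a 2005-compatible equivalent using INSERT ... SELECT with UNION ALL:

        INSERT INTO [MyDB].[dbo].[MyTable] ([FieldID], [Description])
        SELECT 1000, N'test'
        UNION ALL
        SELECT 1001, N'test2';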


  • Texture allocations being doubled in iPhone OpenGL ES

    - by Kyle
    The couple of lines below are called 15 times during initialization. tx-size is reported as 512 every time, so this allocates a 512x512x4 (1 MB) image 15 times, for a total of 15 MB used. However, I noticed Instruments is reporting a total of 31 allocations: (15*2)+1.

        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tx-size, tx-size, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
        free(spriteData);

    Likewise, in another area of my program that allocates six 256x256x4 (256 kB) textures, I see 13 sitting there: (6*2)+1. Does anyone know what's going on here? It seems like awful memory management, and I really hope it's my fault. Just to let everyone know, I'm on the simulator.
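
    One plausible reading (an assumption, not a confirmed diagnosis): glTexImage2D copies the pixel data into GL-owned storage, so for a moment both the app-side buffer and the driver's copy exist, and the simulator's GL implementation keeps its copy in ordinary process memory where Instruments can see it. A sketch of the full lifecycle:

        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, spriteData);   /* GL makes its own copy here */
        free(spriteData);                                      /* frees only the app-side copy */
        /* ... later, when the texture is no longer needed ... */
        glDeleteTextures(1, &tex);                             /* frees the GL-side copy */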


  • what's faster: merging lists or dicts in python?

    - by tipu
    I'm working with an app that is CPU-bound more than memory-bound, and I'm trying to merge two collections, whether they be sets or dicts. Now, I can choose either one, but I'm wondering if merging dicts would be faster since it's all in memory? Or is it always going to be O(n), with n being the size of the smaller collection? The reason I asked about dicts rather than sets is that I can't convert a set to JSON, because that results in {key1, key2, key3} and JSON needs key/value pairs, so I'm using a dict so that json.dumps returns {key1: 1, key2: 1, key3: 1}. Yes, this is wasteful, but if it proves to be faster then I'm okay with it.
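
    Both structures are hash-based, so either merge is linear in the number of elements; the difference is a constant factor better measured than reasoned about. The JSON detour is avoidable, though: json.dumps happily takes a list built from a set. A minimal sketch:

        import json

        a = {"key1", "key2"}
        b = {"key2", "key3"}

        merged = a | b                      # set union; builds a new set in linear time
        print(json.dumps(sorted(merged)))   # ["key1", "key2", "key3"] -- no dummy values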


  • Algorithm: How to tell if an array is a permutation in O(n)?

    - by Iulian Serbanoiu
    Hello, Input: a read-only array of N elements containing integer values from 1 to N, and a memory zone of a fixed size (10, 100, 1000 etc. - not depending on N). How to tell in O(n) if the array represents a permutation? What I have achieved so far: I use the limited memory area to store the sum and the product of the array; I compare the sum with N*(N+1)/2 and the product with N!. I know that if these checks pass I might have a permutation. I'm wondering if there's a way to prove that this condition is sufficient to tell if I have a permutation. So far I haven't figured it out... Thanks, Iulian
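
    For what it's worth, the sum-and-product test alone is not sufficient: there are arrays that match both N*(N+1)/2 and N! without being permutations. A quick check of one counterexample for N = 9:

        import math

        n = 9
        candidate = [1, 2, 4, 4, 4, 5, 7, 9, 9]           # not a permutation of 1..9
        print(sum(candidate) == n * (n + 1) // 2)         # True: both sums are 45
        print(math.prod(candidate) == math.factorial(n))  # True: both products are 362880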


  • J2ME Reduce Image color-depth/ Compress Image size

    - by updateraj
    Hi, I need to transmit an image from the mobile phone to the server. I am able to reduce the image's screen size but not its memory size. I understand I have to deal with the color depth. J2ME does not seem to offer the scaling methods that are available in J2SE:

        Image rescaled = image.getScaledInstance(thumbWidth, thumbHeight, Image.SCALE_AREA_AVERAGING);
        BufferedImage biRescaled = toBufferedImage(rescaled, thumbWidth, thumbHeight, BufferedImage.TYPE_INT_RGB);

    How would I tackle this? I would like to reduce the image's memory size before I transmit it to the server. Thank you
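
    MIDP 2.0 has no built-in scaler, but Image.getRGB and Image.createRGBImage make a manual one possible. Note that fewer pixels only means fewer transmitted bytes if the result is re-encoded; MIDP itself has no JPEG/PNG encoder, so that step needs a third-party library or the raw RGB must be sent. A nearest-neighbour sketch under those assumptions:

        import javax.microedition.lcdui.Image;

        public final class ImageScaler {
            // Downscale src to newW x newH with nearest-neighbour sampling.
            public static Image scale(Image src, int newW, int newH) {
                int srcW = src.getWidth(), srcH = src.getHeight();
                int[] in = new int[srcW * srcH];
                src.getRGB(in, 0, srcW, 0, 0, srcW, srcH);   // pull ARGB pixels out
                int[] out = new int[newW * newH];
                for (int y = 0; y < newH; y++) {
                    for (int x = 0; x < newW; x++) {
                        out[y * newW + x] = in[(y * srcH / newH) * srcW + (x * srcW / newW)];
                    }
                }
                return Image.createRGBImage(out, newW, newH, true);
            }
        }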


  • Handling exceptions from an unmanaged DLL in C#

    - by StuffHappens
    Hello. I have the following function written in C#:

        public static string GetNominativeDeclension(string surnameNamePatronimic)
        {
            if (surnameNamePatronimic == null)
                throw new ArgumentNullException("surnameNamePatronimic");
            IntPtr[] ptrs = null;
            try
            {
                ptrs = StringsToIntPtrArray(surnameNamePatronimic);
                int resultLen = MaxResultBufSize;
                int err = decGetNominativePadeg(ptrs[0], ptrs[1], ref resultLen);
                ThrowException(err);
                return IntPtrToString(ptrs, resultLen);
            }
            catch
            {
                return surnameNamePatronimic;
            }
            finally
            {
                FreeIntPtr(ptrs);
            }
        }

    The function decGetNominativePadeg lives in an unmanaged DLL:

        [DllImport("Padeg.dll", EntryPoint = "GetNominativePadeg")]
        private static extern Int32 decGetNominativePadeg(IntPtr surnameNamePatronimic,
                                                          IntPtr result, ref Int32 resultLength);

    and it throws an exception: "Attempted to read or write protected memory. This is often an indication that other memory is corrupt." The catch block in the C# code doesn't actually catch it. Why? How do I handle this exception? Thank you for your help!
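
    On .NET 4 and later, AccessViolationException is a corrupted-state exception and bypasses ordinary catch blocks unless the method opts in; on earlier runtimes a bare catch should fire, which would instead point at the native call itself (for example, a result buffer smaller than what the DLL writes). A sketch of the opt-in, with a hypothetical wrapper name:

        using System.Runtime.ExceptionServices;
        using System.Security;

        [HandleProcessCorruptedStateExceptions]   // allow catching CSEs (.NET 4+)
        [SecurityCritical]
        public static string GetNominativeDeclensionGuarded(string s)
        {
            try
            {
                return GetNominativeDeclension(s);
            }
            catch (AccessViolationException)
            {
                return s;   // fall back to the unmodified input
            }
        }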


  • streaming XML serialization in .net

    - by Luca Martinetti
    Hello, I'm trying to serialize a very large IEnumerable<MyObject> using an XmlSerializer without keeping all the objects in memory. The IEnumerable<MyObject> is actually lazy. I'm looking for a streaming solution that will: take an object from the IEnumerable<MyObject>; serialize it to the underlying stream using the standard serialization (I don't want to handcraft the XML here!); discard the in-memory data and move to the next. I'm trying this code:

        using (var writer = new StreamWriter(filePath))
        {
            var xmlSerializer = new XmlSerializer(typeof(MyObject));
            foreach (var myObject in myObjectsIEnumerable)
            {
                xmlSerializer.Serialize(writer, myObject);
            }
        }

    but I'm getting multiple XML headers and I cannot specify a root tag <MyObjects>, so my XML is invalid. Any idea? Thanks
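
    Serializing to an XmlWriter rather than a StreamWriter addresses both symptoms: the writer emits a single declaration for the document, the root element is written by hand, and the serializer streams one element per object. A sketch, assuming the usual System.Xml and System.Xml.Serialization imports:

        using (var xw = XmlWriter.Create(filePath, new XmlWriterSettings { Indent = true }))
        {
            var ser = new XmlSerializer(typeof(MyObject));
            var ns = new XmlSerializerNamespaces();
            ns.Add("", "");                       // suppress the default xsi/xsd declarations
            xw.WriteStartElement("MyObjects");    // hand-written root tag
            foreach (var myObject in myObjectsIEnumerable)
            {
                ser.Serialize(xw, myObject, ns);  // one child element per object, streamed
            }
            xw.WriteEndElement();
        }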


  • gcc/g++: error when compiling large file

    - by Alexander
    Hi, I have an auto-generated C++ source file, around 40 MB in size. It largely consists of push_back commands for some vectors, plus the string constants that get pushed. When I try to compile this file, g++ exits and says that it couldn't reserve enough virtual memory (around 3 GB). Googling this problem, I found that the command-line switches

        --param ggc-min-expand=0 --param ggc-min-heapsize=4096

    may solve the problem. They, however, only seem to work when optimization is turned on. 1) Is this really the solution I am looking for? 2) Or is there a faster, better way (compiling takes ages with these options activated)? Best wishes, Alexander

    Update: Thanks for all the good ideas. I tried most of them. Using an array instead of several push_back() operations reduced memory usage, but since the file I was trying to compile was so big, it still crashed, only later. In a way, this behaviour is really interesting, as there is not much to optimize in such a setting; what does GCC do behind the scenes that costs so much memory? (I compiled with all optimizations deactivated as well and got the same results.) The solution I switched to now is reading the original data from a binary object file that I created from the original file using objcopy. This is what I originally did not want to do, because creating the data structures in a higher-level language (in this case Perl) was more convenient than doing it in C++. However, getting this running under Win32 was more complicated than expected: objcopy seems to generate files in the ELF format by default, and some of the problems I had disappeared once I manually set the output format to pe-i386. The symbols in the object file are by convention named after the file name, e.g. converting the file inbuilt_training_data.bin results in these two symbols: binary_inbuilt_training_data_bin_start and binary_inbuilt_training_data_bin_end. I found some tutorials on the web which claim that these symbols should be declared as

        extern char _binary_inbuilt_training_data_bin_start;

    but this does not seem to be right; only

        extern char binary_inbuilt_training_data_bin_start;

    worked for me.
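
    For reference, a sketch of the objcopy route described above; the exact flags are assumptions for a MinGW/Win32 build and may need adjusting to the local toolchain:

        # assumed build step: turn the raw data file into a linkable object
        objcopy -I binary -O pe-i386 -B i386 inbuilt_training_data.bin inbuilt_training_data.o

        // C++ side: the symbols objcopy generates from the input file name
        extern char binary_inbuilt_training_data_bin_start;
        extern char binary_inbuilt_training_data_bin_end;

        static const char*  data = &binary_inbuilt_training_data_bin_start;
        static const size_t size = static_cast<size_t>(
            &binary_inbuilt_training_data_bin_end - &binary_inbuilt_training_data_bin_start);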


  • Can GPU capabilities impact virtual machine performance?

    - by Dave White
    While this may not seem like a programming question directly, it impacts my development activities, and so it seems like it belongs here. It seems that more and more developers are turning to virtual environments for development activities on their computers, SharePoint development being a prime example. Also, as a trainer, I have virtual training environments for all of the classes that I teach. I recently purchased a new Dell E6510 to travel around with. It has the i7 620M (dual-core, hyper-threaded CPU running at 2.66 GHz) and 8 GB of memory. Reading the spec sheet, it sounded like it would be a great laptop to carry around and run virtual machines on. Since getting the laptop, though, I've been pretty disappointed with the user experience of developing in a virtual machine. Giving the virtual machine 4 GB of memory, it was slow: I could type complete sentences and watch the VM catch up. My company has training laptops that we provide for our classes. They are Dell Precision M6400s (Intel Core 2 Duo P8700 running at 2.54 GHz with 8 GB of memory), and the experience on these laptops is night and day compared to the E6510. They are crisp, and you are barely aware that you are running in a virtual environment. Since the E6510 should be faster than the M6400 in all categories, I couldn't understand why the new laptop was slower, so I did a component-by-component comparison, and the only place where the E6510 is less performant than the M6400 is the graphics department. The M6400 runs an nVidia FX 2700M GPU and the E6510 an nVidia 3100M GPU. Benchmarks of the two GPUs suggest that the FX 2700M is twice as fast as the 3100M (http://www.notebookcheck.net/Mobile-Graphics-Cards-Benchmark-List.844.0.html):

        3100M          = 111th (E6510)
        FX 2700M       = 47th  (Precision M6400)
        Radeon HD 5870 = 8th   (Alienware)

    The host OS is Windows 7 64-bit, as is the guest OS, running in VirtualBox 3.1.8 with Guest Additions installed on the guest. The IDE used in the virtual environment is VS 2010 Premium. So after that long setup, my question is: is the GPU significantly impacting the virtual machine's performance, or are there other factors that I'm not looking at that I can use to boost the VM's performance? Do we now have to consider GPU performance when purchasing laptops on which we expect to use virtualized development environments? Thanks in advance. Cheers, Dave


  • Efficient banner rotation with PHP

    - by reggie
    I rotate a banner on my site by selecting it randomly from an array of banners. Sample code as demonstration:

        <?php
        $banners = array(
            '<iframe>...</iframe>',
            '<a href="#"><img src="#.jpg" alt="" /></a>',
            // and so on
        );
        echo $banners[array_rand($banners)];
        ?>

    The array of banners has become quite big, and I am concerned with the amount of memory this array adds to the execution of my page. But I can't figure out a better way of showing a random banner without loading all the banners into memory.
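
    One way to keep only a single banner in memory is to store them in a flat file, one snippet per line, and seek to a random line. A sketch, with banners.txt as an assumed file name (and assuming no trailing blank line):

        <?php
        $file = new SplFileObject('banners.txt');
        $file->seek(PHP_INT_MAX);       // run to the end to learn the last line index
        $last = $file->key();
        $file->seek(rand(0, $last));    // jump straight to one random line
        echo $file->current();          // only this banner is ever read into memory
        ?>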


  • Node.js for lua?

    - by Shahbaz
    I've been playing around with node.js (nodejs) for the past few days and it is fantastic. As far as I can tell, Lua doesn't have a similar integration of libev and libeio that lets one avoid almost all blocking calls and interact with the network and the filesystem in an asynchronous manner. I'm slowly porting my Java implementation to nodejs, but I'm shocked that LuaJIT is much faster than V8 JavaScript AND uses far less memory! I imagine writing my server in such an environment (very fast and responsive, very low memory usage, very expressive) would improve my project immensely. Being new to Lua, I'm just not sure if such a thing exists. I'll appreciate any pointers. Thanks


  • Java Heap Overflow, Forcing Garbage Collection

    - by Nicholas
    I've created a trie with an array of children. When deleting a word, I set the children to null, which I assumed deletes the node ("delete" is a relative term). I know that setting null doesn't delete the child, just unlinks it, and when using a large number of words this overflows the heap. Running top on Linux, I can see my memory usage spike to 1 GB pretty quickly, but if I force garbage collection after the delete (Runtime.gc()) the memory usage goes to 50 MB and never above that. From what I'm told, Java by default runs garbage collection before a heap overflow happens, but I can't seem to make that happen.
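
    For context, a sketch of the usual reading of these symptoms (the field names below are assumptions): nulling a slot only unlinks the subtree, and the JVM does run a full collection before throwing OutOfMemoryError, so a genuine OOME normally means the nodes are still reachable through some other path. The figure top reports is also the heap the JVM has reserved from the OS, not live objects.

        // children is an assumed TrieNode[] field on the node
        node.children[index] = null;   // unlink the subtree; collectable only if
                                       // nothing else still references those nodes
        System.gc();                   // a hint only; same as Runtime.getRuntime().gc()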


  • Can you decode a mutable Bitmap from an InputStream?

    - by Daniel Lew
    Right now I've got an Android application that:

        1. Downloads an image.
        2. Does some pre-processing to that image.
        3. Displays the image.

    The dilemma is that I would like this process to use less memory, so that I can afford to download a higher-resolution image. However, when I download the image now, I use BitmapFactory.decodeStream(), which has the unfortunate side effect of returning an immutable Bitmap. As a result, I'm having to create a copy of the Bitmap before I can start operating on it, which means I have to have 2x the size of the Bitmap's memory allocated (at least for a brief period of time; once the copy is complete I can recycle the original). Is there a way to decode an InputStream into a mutable Bitmap?
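
    On API level 11 (Honeycomb) and later there is a direct flag for this; on earlier releases the copy-then-recycle approach is hard to avoid. A sketch:

        // a sketch, assuming API 11+
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inMutable = true;   // ask the decoder for a mutable bitmap
        Bitmap bmp = BitmapFactory.decodeStream(inputStream, null, opts);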


  • increasing amazon root volume size

    - by OCD
    I have a default Amazon EC2 instance with an 8 GB root volume, and I am running out of space. I have:

        1. Detached the current EBS volume in the AWS Management Console (web).
        2. Created a snapshot of this volume.
        3. Created a new 50 GB volume from my snapshot.
        4. Attached the new volume back to the instance at /dev/sda1.

    However, when I reconnect to the instance and run df -h, I see:

        Filesystem   1K-blocks    Used Available Use% Mounted on
        /dev/xvda1     8256952 8173624         0 100% /
        tmpfs           308508      40    308468   1% /dev/shm

    It's still not using my new volume's size; how do I make this work?
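
    The snapshot carries the old 8 GB filesystem along with it, so the filesystem has to be grown to fill the enlarged volume. A sketch, assuming an ext3/ext4 root filesystem:

        # on the instance, after attaching the new 50 GB volume and booting
        sudo resize2fs /dev/xvda1   # grow the filesystem to the volume size (online-safe for ext3/4)
        df -h                       # should now report roughly 50 GB on /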


  • Problem with connection to MS SQL Server database using SSMS

    - by Charles
    I have a database online with GoDaddy (who uses SQL Server 2005). They provide basic management tools, but tell you that for more advanced tools you can connect directly using SSMS. I followed their instructions to ensure my online database will accept remote connections, and I can apparently log in using SSMS with success (after giving my hostname and access data). However, when attempting to expand the "Databases" folder tree, I get the following error:

        Failed to retrieve data for this request. (Microsoft.SqlServer.Management.Sdk.Sfc)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&LinkId=20476

        ADDITIONAL INFORMATION:
        An exception occurred while executing a Transact-SQL statement or batch.
        (Microsoft.SqlServer.ConnectionInfo)
        The server principal "cmitchell" is not able to access the database "3pointdb"
        under the current security context. (Microsoft SQL Server, Error: 916)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=09.00.4262&EvtSrc=MSSQLServer&EvtID=916&LinkId=20476


  • How can I show SQL Server LOGS (2005)

    - by Marcin Rybacki
    Hello, I'm trying to track down an error thrown by SQL Server 2005. The problem is that SQL Server reports it in my native language, so it's hard for me to Google it. I think the core issue would be available in English in the SQL Server logs. I'm running SQL Server Management Studio Express, going to the "Management" node and then "SQL Server Logs". I can see the list of logs but I cannot open them; the only available option in the context menu is Refresh. Could you help me view the contents of those logs?
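
    When the Log File Viewer isn't available in the Express tooling, the same log can be read with T-SQL instead; a sketch using the long-standing (if undocumented) xp_readerrorlog:

        -- first argument:  0 = current log, 1 = ERRORLOG.1, and so on
        -- second argument: 1 = SQL Server log (2 would be the Agent log)
        EXEC master.sys.xp_readerrorlog 0, 1;

    The same text also lives on disk as plain files named ERRORLOG, ERRORLOG.1, ... in the instance's LOG directory.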


  • MySQL query cache vs caching result-sets in the application layer

    - by GetFree
    I'm running a PHP/MySQL-driven website with a lot of visits, and I'm considering the possibility of caching result-sets in shared memory in order to reduce database load. However, right now MySQL's query cache is enabled, and it seems to be doing a pretty good job, since if I disable query caching, CPU use jumps to 100% immediately. Given that situation, I don't know if caching result-sets (or even the generated HTML code) locally in shared memory with PHP would result in any noticeable performance improvement. Does anyone out there have any experience with this? PS: Please avoid suggesting heavy-artillery solutions like memcached. Right now I'm looking for simple solutions that don't require too much time to implement, deploy and maintain.
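
    In that simple-solutions spirit, the APC extension already provides a shared-memory key/value store inside PHP. A sketch, where run_expensive_query() is a hypothetical stand-in for the real query code:

        <?php
        $key  = 'front_page_rows';
        $rows = apc_fetch($key, $hit);       // $hit reports whether the key was found
        if (!$hit) {
            $rows = run_expensive_query();   // hypothetical helper
            apc_store($key, $rows, 300);     // keep the result-set for 5 minutes
        }
        ?>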


  • Java object caching, which is faster, reading from a file or from a remote machine?

    - by Kumar225
    I am at a point where I need to decide what to do when caching of objects reaches the configured threshold. Should I store the objects in an indexed file (like that provided by JCS) and read them from the file (file I/O) when required, or have the objects stored in a distributed cache (network, serialization, deserialization)? We are using Solaris as the OS.

    Adding some more information: I have this question so as to determine whether I can switch to distributed caching. The remote server that will hold the cache will have more memory and a better disk, and will only be used for caching. One of the reasons we cannot increase the number of locally cached objects is that they are stored in the JVM heap, which has limited memory (we use a 32-bit JVM).

    Update: Thanks, we finally ended up choosing Coherence as our cache product. It provides many cache configuration topologies: in-process vs. remote vs. disk, etc.


  • Do I need to force the GAC to reload an assembly? Is this possible?

    - by Ben McCormack
    I've added types to the .NET classes that I'm using for COM interop. To get it to work with my VB6 application, I unregistered the DLL and re-registered it (using regasm). I then uninstalled it from and reinstalled it to the GAC (using gacutil). The types show up in the VB6 object explorer, but when I run the application in the VB6 IDE, it breaks on the line that instantiates the new types with the error: "Automation Error - The system cannot find the file specified." I thought this odd since I had already updated the GAC, so I uninstalled the DLL from the GAC and got the exact same error, which seems to indicate that the older version of the DLL is already in memory and needs to be "reloaded" so that the newer DLL is used. Is this possible, and if so, what do I need to do?


  • Web-based document merge solution?

    - by rugcutter
    We are looking for a web-based document merge solution. Our application is a web-based project management tool built using Xataface (PHP on Windows IIS + MySQL). We have a function that allows the user to generate a status report in Microsoft Word format based on data in the tool. Currently this function is implemented using LiveDocx: we have a status report template, and LiveDocx performs the merge into the template using data from our project management tool. The main drawback is that LiveDocx is web-service based, and we are looking to replace it in order to reduce our dependence on the up-time of a third-party web service that we cannot control. Does anyone have any suggestions for a web-based document merge solution that I can install on my IIS- or PHP-based server?


  • How to insert HTML into a UIWebView

    - by Mohan Gulati
    I have HTML content which was being displayed in a text view. The next iteration of my app displays the HTML content in a UIWebView, so I basically replaced my UITextView with a UIWebView. However, I could not figure out how to insert my HTML snippet into the view. It seems to need a URLRequest, which I do not want. I have already stored the HTML content in memory and want to load and display it from there. Any ideas how I should proceed?
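
    UIWebView can load an in-memory string directly, with no URLRequest involved. A minimal sketch (webView and htmlContent are assumed existing variables):

        // baseURL resolves any relative paths in the HTML; nil is fine for
        // self-contained markup
        [webView loadHTMLString:htmlContent baseURL:nil];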


  • pushScene and popScene or replaceScene . which should we use and when ?

    - by srikanth rongali
    I am using pushScene to get to the next scene, but I read that with each pushScene the current scene is kept on a stack, so memory usage is higher. So I am using replaceScene in place of pushScene, but with replaceScene I am getting a memory-bad-access message in the debugger. So I want to popScene after using it, so that the retain count goes to zero, but I am confused about using popScene. Say I have Scene1 and Scene2, and I used the following to go into Scene2; now I need to remove Scene1 from the stack:

        [[CCDirector sharedDirector] pushScene:Scene2];

    Where should I call popScene to pop Scene1? And how do I get the previous scene from the currently running scene? Thank you.
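
    For reference, a sketch of how the calls pair up in cocos2d-iphone (the scene variable names are assumptions):

        // replaceScene releases the outgoing scene itself; there is nothing to pop afterwards
        [[CCDirector sharedDirector] replaceScene:scene2];

        // pushScene keeps scene1 alive on the stack; popScene is its counterpart
        [[CCDirector sharedDirector] pushScene:scene2];
        // ... later, from inside scene2 ...
        [[CCDirector sharedDirector] popScene];   // returns to scene1, releasing scene2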


  • Core dump utility for .NET

    - by Dave
    In my past life as a COBOL mainframe developer, I made extensive use of a tool called Abendaid which, in the event of an exception, would give me a complete memory dump, including a formatted list of every variable in memory as well as a complete stack trace of the program with the offending statement highlighted. This made pinpointing the cause of an error much simpler and saved a lot of step-through debugging and/or trace statements. Now that I've made the transition to C# and .NET web development, I find that the information provided by ASP.NET only tells half the story: it gives me a stack trace, but none of the variable or class information. This makes debugging more difficult, as you then have to run the process again with the debugger to try to reproduce the error, which is not easy with intermittent errors or with assemblies that run under the likes of SQL Server or CRM. I've looked around quite a lot for something that does this, but I can't find anything obvious. Does anyone have any idea if there is one, or if not, what I'd need to start with in order to write one?
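
    The closest .NET analogue may be a crash dump plus the SOS debugger extension, which can list arguments and locals per frame. A sketch of a typical session on a .NET 2.0-era process (the tool flags are from the Debugging Tools for Windows of that period):

        adplus -crash -pn w3wp.exe    # capture a full dump when the process faults

        # then, inside WinDbg with the dump open:
        .loadby sos mscorwks          # load SOS for the CLR in the dump
        !pe                           # print the exception that brought it down
        !clrstack -a                  # managed stack with arguments and locals
        !dumpheap -stat               # summary of what is on the managed heap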

