Search Results

Search found 15543 results on 622 pages for 'memory editing'.

Page 80/622 | < Previous Page | 76 77 78 79 80 81 82 83 84 85 86 87  | Next Page >

  • How do I throttle a command in a terminal window?

    - by To Do
    I needed to run convert with a lot of images at the same time. The command took quite a while, but that doesn't bother me. The issue is that this command rendered my computer unusable while it was running (for about 15 minutes). So is it possible to throttle the command by limiting resources (processor and memory) to it, directly from the command line? This can only work if I add something to the same line before pressing Enter, because once I start the process the computer slows so much that it is impossible, for example, to switch to "System Monitor" and reduce the priority. Edit: top and iotop results. I managed to run top and sudo iotop > iotop.txt while doing one of these convert operations. (The iotop.txt file produced is difficult to read.) Results of top:
      PID   USER     PR NI VIRT  RES  SHR  S %CPU %MEM TIME+   COMMAND
      14275 username 20 0  4043m 3.0g 1448 D 7.0  80.4 0:16.45 convert
    Results of iotop:
      Total DISK READ: 1269.04 K/s | Total DISK WRITE: 0.00 B/s
      TID   PRIO USER     DISK READ   DISK WRITE SWAPIN  IO     COMMAND
      2516  be/4 username 350.08 K/s  0.00 B/s   0.00 %  0.00 % zeitgeist-datahub
      7394  be/4 username 568.88 K/s  0.00 B/s   77.41 % 0.00 % --rendere~.530483991
      14275 idle username 350.08 K/s  0.00 B/s   37.49 % 0.00 % convert S~f test.pdf
      2048  be/4 root     0.00 B/s    0.00 B/s   0.00 %  0.00 % [kworker/3:2]
      1     be/4 root     0.00 B/s    0.00 B/s   0.00 %  0.00 % init
    Furthermore, even after the process ends, the computer does not return to its previous performance. I found a way around this by running sudo swapoff -a followed by sudo swapon -a.
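    A minimal sketch of the usual throttling options, assuming the ImageMagick convert call from the question (input.pdf and output.png are placeholder file names); nice and ionice ship with the base system, while cpulimit is a separate package:
      # lowest CPU priority plus idle I/O class
      nice -n 19 ionice -c3 convert input.pdf output.png
      # or cap CPU usage at roughly 20% of one core
      cpulimit -l 20 convert input.pdf output.png
      # ImageMagick can also cap its own memory use so it does not push everything into swap
      convert -limit memory 512MiB -limit map 1GiB input.pdf output.png
    The swapoff/swapon trick at the end works because it forces swapped-out pages back into RAM; capping convert's memory in the first place avoids the need for it.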

    Read the article

  • Storing large array of tiles, but allowing easy access to data

    - by Cyral
    I've been thinking about this for a while. I have a 2D tile-based platformer in XNA with a large array of tile data, and I've been running into memory problems with large maps. (I will add chunks soon!) Currently, each tile contains an Item along with other properties like how it's rotated, whether it has a foreground/background, etc. An Item is static and has properties like the name, tooltip, type of item, how much light it emits, the collision it does to the player, etc. For example: public class Item { public static List<Item> Items; public Collision blockCollisionType; public string nameOfItem; public bool someOtherVariable,etc,etc public static Item Air; public static Item Stone; public static Item Dirt; static Item() { Items = new List<Item>() { (Stone = new Item() { nameOfItem = "Stone", blockCollisionType = Collision.Solid, }), (Air = new Item() { nameOfItem = "Air", blockCollisionType = Collision.Passable, }), }; } } would be an Item. The array of tiles would contain a Tile for each point: public class Tile { public Item item; //What type it is public bool onBackground; public int someOtherVariables,etc,etc } Now, most would probably use an enum, or some form of ID, to identify blocks. My system is really nice just for finding out about an item: I can simply do tiles[x,y].item.Name to get the name, for example. But I realized my Item property of the tile is over 1000 bytes! Wow! What I'm looking for is a way to use an ID (int or byte, depending on how many items) instead of an Item, but still have a method for retrieving data about the type of item a tile contains.
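    One hedged sketch of the ID-based layout the question is asking for, reusing the static Items list from the post; the itemID field, the item property, and the reliance on the order of Item.Items being stable are assumptions, not part of the original code:
      public class Tile
      {
          // 1 byte per tile instead of an object reference
          public byte itemID;
          public bool onBackground;

          // Existing call sites such as tiles[x, y].item.nameOfItem keep working.
          public Item item
          {
              get { return Item.Items[itemID]; }
              set { itemID = (byte)Item.Items.IndexOf(value); } // IndexOf is O(n); fine for a small item list
          }
      }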

    Read the article

  • More CPU cores may not always lead to better performance – MAXDOP and query memory distribution in the spotlight

    - by sqlworkshops
    More hardware normally delivers better performance, but there are exceptions where it can hinder performance. Understanding these exceptions and working around them is a major part of SQL Server performance tuning. When a memory-allocating query executes in parallel, SQL Server distributes memory to each task that is executing part of the query in parallel. In our example the sort operator that executes in parallel divides the memory across all tasks assuming even distribution of rows. Common memory-allocating queries are those that perform Sort operations and Hash Match operations like Hash Join, Hash Aggregate or Hash Union. In reality, how often are column values evenly distributed? Think about an example: are the employees working for your company distributed evenly across all the Zip codes, or mainly concentrated in the headquarters? What happens when you sort a result set based on Zip codes? Do all products in the catalog sell equally, or are a few products hot-selling items? One of my customers tested the below example on a 24-core server with various MAXDOP settings and here are the results:
    MAXDOP 1: CPU time = 1185 ms, elapsed time = 1188 ms
    MAXDOP 4: CPU time = 1981 ms, elapsed time = 1568 ms
    MAXDOP 8: CPU time = 1918 ms, elapsed time = 1619 ms
    MAXDOP 12: CPU time = 2367 ms, elapsed time = 2258 ms
    MAXDOP 16: CPU time = 2540 ms, elapsed time = 2579 ms
    MAXDOP 20: CPU time = 2470 ms, elapsed time = 2534 ms
    MAXDOP 0: CPU time = 2809 ms, elapsed time = 2721 ms - all 24 cores
    In the above test, when the data was evenly distributed, the elapsed time of the parallel query was always lower than that of the serial query. Why does the query get slower and slower with more CPU cores / higher MAXDOP? Maybe you can answer this question after reading the article; let me know: [email protected]. Well, you get the point, let's see an example. The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Let's update the Employees table with 49 out of 50 employees located in Zip code 2001.
    update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1
    update Employees set Zip = 2001 where EmployeeID % 50 != 1
    go
    update statistics Employees with fullscan
    go
    Let's create the temporary table #FireDrill with all possible Zip codes.
    drop table #FireDrill
    go
    create table #FireDrill (Zip int primary key)
    insert into #FireDrill select distinct Zip from Employees
    update statistics #FireDrill with fullscan
    go
    Let's execute the query serially with MAXDOP 1.
    --Example provided by www.sqlworkshops.com
    --Execute query with uneven Zip code distribution
    --First serially with MAXDOP 1
    set statistics time on
    go
    declare @EmployeeID int, @EmployeeName varchar(48), @zip int
    select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
          inner join #FireDrill fd on (e.Zip = fd.Zip)
          order by e.Zip option (maxdop 1)
    go
    The query took 1011 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 799624. No Sort Warnings in SQL Server Profiler. Now let's execute the query in parallel with MAXDOP 0.
    --Example provided by www.sqlworkshops.com
    --Execute query with uneven Zip code distribution
    --In parallel with MAXDOP 0
    set statistics time on
    go
    declare @EmployeeID int, @EmployeeName varchar(48), @zip int
    select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
          inner join #FireDrill fd on (e.Zip = fd.Zip)
          order by e.Zip option (maxdop 0)
    go
    The query took 1912 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 799624. The estimated number of rows between the serial and parallel plans is the same. The parallel plan has slightly more memory granted due to additional overhead. The Sort properties show that the rows are unevenly distributed over the 4 threads. Sort Warnings in SQL Server Profiler.
    Intermediate Summary: The reason for the higher duration with the parallel plan was a sort spill. This is due to the uneven distribution of employees over Zip codes, especially the concentration of 49 out of 50 employees in Zip code 2001. Now let's update the Employees table and distribute employees evenly across all Zip codes.
    update Employees set Zip = EmployeeID / 400 + 1
    go
    update statistics Employees with fullscan
    go
    Let's execute the query serially with MAXDOP 1.
    --Example provided by www.sqlworkshops.com
    --Execute query with uneven Zip code distribution
    --Serially with MAXDOP 1
    set statistics time on
    go
    declare @EmployeeID int, @EmployeeName varchar(48), @zip int
    select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
          inner join #FireDrill fd on (e.Zip = fd.Zip)
          order by e.Zip option (maxdop 1)
    go
    The query took 751 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 784707. No Sort Warnings in SQL Server Profiler. Now let's execute the query in parallel with MAXDOP 0.
    --Example provided by www.sqlworkshops.com
    --Execute query with uneven Zip code distribution
    --In parallel with MAXDOP 0
    set statistics time on
    go
    declare @EmployeeID int, @EmployeeName varchar(48), @zip int
    select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
          inner join #FireDrill fd on (e.Zip = fd.Zip)
          order by e.Zip option (maxdop 0)
    go
    The query took 661 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 784707. The Sort properties show that the rows are evenly distributed over the 4 threads. No Sort Warnings in SQL Server Profiler.
    Intermediate Summary: When employees were distributed unevenly, concentrated in 1 Zip code, the parallel sort spilled while the serial sort performed well without spilling to tempdb. When the employees were distributed evenly across all Zip codes, neither the parallel sort nor the serial sort spilled to tempdb. This shows uneven data distribution may affect the performance of some parallel queries negatively. For a detailed discussion of memory allocation, refer to the webcasts available at www.sqlworkshops.com/webcasts. Some of you might conclude from the above execution times that parallel query is not faster even when there is no spill. Below you can see that when we are joining a limited number of Zip codes, the parallel query will be faster since it can use Bitmap Filtering. Let's update the Employees table with 49 out of 50 employees located in Zip code 2001.
    update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1
    update Employees set Zip = 2001 where EmployeeID % 50 != 1
    go
    update statistics Employees with fullscan
    go
    Let's create the temporary table #FireDrill with limited Zip codes.
    drop table #FireDrill
    go
    create table #FireDrill (Zip int primary key)
    insert into #FireDrill select distinct Zip
          from Employees where Zip between 1800 and 2001
    update statistics #FireDrill with fullscan
    go
    Let's execute the query serially with MAXDOP 1.
    --Example provided by www.sqlworkshops.com
    --Execute query with uneven Zip code distribution
    --Serially with MAXDOP 1
    set statistics time on
    go
    declare @EmployeeID int, @EmployeeName varchar(48), @zip int
    select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
          inner join #FireDrill fd on (e.Zip = fd.Zip)
          order by e.Zip option (maxdop 1)
    go
    The query took 989 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 785594. No Sort Warnings in SQL Server Profiler. Now let's execute the query in parallel with MAXDOP 0.
    --Example provided by www.sqlworkshops.com
    --Execute query with uneven Zip code distribution
    --In parallel with MAXDOP 0
    set statistics time on
    go
    declare @EmployeeID int, @EmployeeName varchar(48), @zip int
    select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
          inner join #FireDrill fd on (e.Zip = fd.Zip)
          order by e.Zip option (maxdop 0)
    go
    The query took 1799 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 785594. Sort Warnings in SQL Server Profiler. The estimated number of rows between the serial and parallel plans is the same. The parallel plan has slightly more memory granted due to additional overhead.
    Intermediate Summary: The reason for the higher duration with the parallel plan, even with a limited number of Zip codes, was a sort spill. This is due to the uneven distribution of employees over Zip codes, especially the concentration of 49 out of 50 employees in Zip code 2001. Now let's update the Employees table and distribute employees evenly across all Zip codes.
    update Employees set Zip = EmployeeID / 400 + 1
    go
    update statistics Employees with fullscan
    go
    Let's execute the query serially with MAXDOP 1.
    --Example provided by www.sqlworkshops.com
    --Execute query with uneven Zip code distribution
    --Serially with MAXDOP 1
    set statistics time on
    go
    declare @EmployeeID int, @EmployeeName varchar(48), @zip int
    select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
          inner join #FireDrill fd on (e.Zip = fd.Zip)
          order by e.Zip option (maxdop 1)
    go
    The query took 250 ms to complete. The execution plan shows that 9016 KB of memory was granted while the estimated rows were 79973.8. No Sort Warnings in SQL Server Profiler. Now let's execute the query in parallel with MAXDOP 0.
    --Example provided by www.sqlworkshops.com
    --Execute query with uneven Zip code distribution
    --In parallel with MAXDOP 0
    set statistics time on
    go
    declare @EmployeeID int, @EmployeeName varchar(48), @zip int
    select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
          inner join #FireDrill fd on (e.Zip = fd.Zip)
          order by e.Zip option (maxdop 0)
    go
    The query took 85 ms to complete. The execution plan shows that 13152 KB of memory was granted while the estimated rows were 784707. No Sort Warnings in SQL Server Profiler.
    Here you see the parallel query is much faster than the serial query, since SQL Server is using Bitmap Filtering to eliminate rows before the hash join. Parallel queries are very good for performance, but in some cases they can hinder performance. If one identifies the reason for these hindrances, then it is possible to get the best out of parallelism. I covered many aspects of monitoring and tuning parallel queries in webcasts (www.sqlworkshops.com/webcasts) and articles (www.sqlworkshops.com/articles). I suggest you watch the webcasts and read the articles to better understand how to identify and tune parallel query performance issues. Summary: One has to avoid sort spills to tempdb, and the chances of spills are higher when a query executes in parallel with uneven data distribution. Parallel query brings its own advantages: reduced elapsed time and reduced work with Bitmap Filtering. So it is important to understand how to avoid spills to tempdb and when to execute a query in parallel. I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts. Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011, click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants and not lectures. For consulting engagements click here. Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk. R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan
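    As a rough way of watching the memory-grant behaviour the article describes, you can run the test query with an explicit MAXDOP hint in one session and, while it executes, look at its grant and degree of parallelism from another session; this monitoring snippet is an addition for illustration, not part of the original scripts:
      --run the article's Employees / #FireDrill query with e.g. option (maxdop 4) in one session,
      --then in a second session inspect the grant while it is still running:
      select session_id, dop, requested_memory_kb, granted_memory_kb,
             used_memory_kb, max_used_memory_kb
      from sys.dm_exec_query_memory_grants;
    Sort spills themselves show up as the Sort Warnings event in Profiler, as used throughout the article.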

    Read the article

  • TVirtualStringTree - resetting non-visual nodes and memory consumption

    - by Remy Lebeau - TeamB
    I have an app that loads records from a binary log file and displays them in a virtual TListView. There are potentially millions of records in a file, and the display can be filtered by the user, so I do not load all of the records in memory at one time, and the ListView item indexes are not a 1-to-1 relation with the file record offsets (List item 1 may be file record 100, for instance). I use the ListView's OnDataHint event to load records for just the items the ListView is actually interested in. As the user scrolls around, the range specified by OnDataHint changes, allowing me to free records that are not in the new range, and allocate new records as needed. This works fine, speed is tolerable, and the memory footprint is very low. I am currently evaluating TVirtualStringTree as a replacement for the TListView, mainly because I want to add the ability to expand/collapse records that span multiple lines (I can fudge it with the TListView by incrementing/decrementing the item count dynamically, but this is not as straightforward as using a real tree). For the most part, I have been able to port the TListView logic and have everything work as I need. I notice that TVirtualStringTree's virtual paradigm is vastly different, though. It does not have the same kind of OnDataHint functionality that TListView does (I can use the OnScroll event to fake it, which allows my memory buffer logic to continue working), and I can use the OnInitializeNode event to associate nodes with records that are allocated. However, once a tree node is initialized, it seems that it remains initialized for the lifetime of the tree. That is not good for me. As the user scrolls around and I remove records from memory, I need to reset those non-visual nodes without removing them from the tree completely, or losing their expand/collapse states. When the user scrolls them back into view, I can re-allocate the records and re-initialize the nodes. Basically, I want to make TVirtualStringTree act as much like TListView as possible, as far as its virtualization is concerned. I have seen that TVirtualStringTree has a ResetNode() method, but I encounter various errors whenever I try to use it. I must be using it wrong. I also thought of just storing a data pointer inside each node to my record buffers and, as I allocate and free memory, updating those pointers accordingly. The end effect does not work so well, either. Worse, my largest test log file has ~5 million records in it. If I initialize the TVirtualStringTree with that many nodes at one time (when the log display is unfiltered), the tree's internal overhead for its nodes takes up a whopping 260MB of memory (without any records being allocated yet). Whereas with the TListView, loading the same log file and all the memory logic behind it, I can get away with using just a few MBs. Any ideas?

    Read the article

  • Where's the Swap File/Partition?

    - by chrisbunney
    I'm investigating the virtual memory configuration of a Debian based Amazon EC2 instance, and as my background isn't in system admin, I'm slightly confused by what I'm seeing. We're using MongoDB, and the monitoring server we have indicates that the Mongo process is using about 20GB of swap space, however I can't figure out where this is located on the server. As far as I can tell from using the various suggested methods from Google, there is either a much smaller amount, or none at all. top indicates that there is 1.8GB of swap memory: top - 15:35:21 up 6 days, 3:23, 1 user, load average: 1.60, 1.43, 1.37 Tasks: 47 total, 2 running, 45 sleeping, 0 stopped, 0 zombie Cpu(s): 0.0%us, 1.3%sy, 0.0%ni, 14.7%id, 83.8%wa, 0.0%hi, 0.0%si, 0.1%st Mem: 3928924k total, 2855572k used, 1073352k free, 640564k buffers Swap: 0k total, 0k used, 0k free, 1887788k cached swapon -s doesn't seem to think there's any swap space: Filename Type Size Used Priority free -m doesn't think there's any swap either: total used free shared buffers cached Mem: 3836 3663 172 0 626 2701 -/+ buffers/cache: 336 3500 Swap: 0 0 0 And neither does vmstat: procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 3 0 66224 641372 2874744 0 0 21 5012 21 33 2 2 76 19 But cat /etc/fstab thinks there is a swap partition: /dev/xvda1 / ext3 defaults 1 1 /dev/xvda2 /mnt ext3 defaults 0 0 /dev/xvda3 swap swap defaults 0 0 none /proc proc defaults 0 0 none /sys sysfs defaults 0 0 However df -k gives no indication of the xvda3 partition: Filesystem 1K-blocks Used Available Use% Mounted on /dev/xvda1 16513960 15675324 0 100% / tmpfs 1964460 8 1964452 1% /lib/init/rw udev 1914148 28 1914120 1% /dev tmpfs 1964460 4 1964456 1% /dev/shm So I really don't know what to make of this, because I appear to have a process using about 10 times more virtual memory than what might be available, and I have no idea where this virtual memory is on the system. I'm probably misinterpreting the output of the tools, so I'd be grateful if someone would be able to set me straight: What have I got wrong, what's the right interpretation, and how do you reach that interpretation? EDIT0: We use 10gen's MMS for monitoring the database, the relevant section for memory from the last data point is: "mem": { "virtual": 20749, "bits": 64, "supported": true, "mappedWithJournal": 20376, "mapped": 10188, "resident": 1219 }, This JSON is specific to the database process (I believe) rather than the system as a whole. fdisk -l /dev/xvda outputs... nothing? 
I tried each of the 3 xvda entries in /etc/fstab as well: root@ip:~# fdisk -l /dev/xvda1 Disk /dev/xvda1: 34.4 GB, 34359738368 bytes 255 heads, 63 sectors/track, 4177 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/xvda1 doesn't contain a valid partition table root@ip:~# fdisk -l /dev/xvda2 root@ip:~# fdisk -l /dev/xvda3 root@ip:~# Edit1: Output of cat /proc/meminfo for the sake of completeness: MemTotal: 3928924 kB MemFree: 726600 kB Buffers: 648368 kB Cached: 2216556 kB SwapCached: 0 kB Active: 1945100 kB Inactive: 994016 kB Active(anon): 60476 kB Inactive(anon): 12952 kB Active(file): 1884624 kB Inactive(file): 981064 kB Unevictable: 0 kB Mlocked: 0 kB SwapTotal: 0 kB SwapFree: 0 kB Dirty: 387180 kB Writeback: 0 kB AnonPages: 73380 kB Mapped: 1188260 kB Shmem: 48 kB Slab: 149768 kB SReclaimable: 146076 kB SUnreclaim: 3692 kB KernelStack: 1104 kB PageTables: 16096 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 1964460 kB Committed_AS: 305572 kB VmallocTotal: 34359738367 kB VmallocUsed: 16760 kB VmallocChunk: 34359721448 kB HardwareCorrupted: 0 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 3932160 kB DirectMap2M: 0 kB
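    For what it is worth, mongod's "virtual" figure mostly reflects memory-mapped data files (note the "mapped" and "mappedWithJournal" values right next to it), so it can legitimately dwarf both RAM and swap. If the instance really is supposed to have swap behind /dev/xvda3 but the device does not exist, a hedged sketch for adding a swap file instead (path and size are arbitrary examples, not taken from the question):
      # create and enable a 2 GB swap file
      sudo dd if=/dev/zero of=/var/swapfile bs=1M count=2048
      sudo chmod 600 /var/swapfile
      sudo mkswap /var/swapfile
      sudo swapon /var/swapfile
      # verify it now shows up
      swapon -s
      free -m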

    Read the article

  • Memory management and async operations: when does an object become nil?

    - by Kenny Winker
    I have a view that will be displaying downloaded images and text. I'd like to handle all the downloading asynchronously using ASIHTTPRequest, but I'm not sure how to go about notifying the view when downloads are finished... If I pass my view controller as the delegate of the ASIHTTPRequest, and then my view is destroyed (user navigates away) will it fail gracefully when it tries to message my view controller because the delegate is now nil? i.e. if i do this: UIViewController *myvc = [[UIViewController alloc] init]; request.delegate = myvc; [myvc release]; Do myvc, and request.delegate now == a pointer to nil? This is the problem with being self-taught... I'm kinda fuzzy on some basic concepts. Other ideas of how to handle this are welcome.
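    No - under manual reference counting nothing sets a non-retained (assign) delegate back to nil for you, so after [myvc release] deallocates the controller, request.delegate is a dangling pointer and the next delegate callback can crash. A hedged sketch of the usual pattern, assuming the controller keeps the request in a retained property/ivar named request (that name is an assumption):
      - (void)dealloc {
          // Stop callbacks from reaching a deallocated controller.
          [request clearDelegatesAndCancel];
          [request release];
          [super dealloc];
      }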

    Read the article

  • How long should it take for someone to be able to type code from memory?

    - by LordSnoutimus
    Hi, I understand that this question could be answered with a simple sentence and that it may be viewed as subjective; however, I am a young student who is interested in pursuing a career in programming, and I wondered how long it took some of you to get to the level of experience you are at now. I ask this because I am currently working on building an application in Java on the Android platform, and it bothers me that I am constantly having to look up how to write a certain section of code in my application, such as writing to a database or how an if statement should be structured. My question really is: how long did it take for you to become experienced enough to actually know exactly how your next line of code was going to look, before you even wrote it?

    Read the article

  • Removing a view from its superview causes memory error - why?

    - by mystify
    Xcode is throwing an error at me: malloc: * error for object 0x103f000: pointer being freed was not allocated * set a breakpoint in malloc_error_break to debug. I tracked down the code until a line where I do this: - (void)inputValueCommitted:(NSString *)animationID finished:(BOOL)finished context:(void *)context { // retainCount of myView is 2! (one for the retain property, one for being a subview) [self.myView removeFromSuperview]; // ERROR-LINE !! self.myView = nil; } When I remove that offending line, the error is gone. So in conclusion: I can't get rid of my view! It's a UIImageView with nothing else inside, just showing an image. What I do is this: I create a UIView animation block, create that UIImageView, assign it to a retain property with self.myView = ..., and after the animation is done, I just want to get rid of that view. So I remove it from its superview and then set my property to nil, which lets it go away - in theory. Has anyone else encountered such issues? iPhone SDK 3.0.
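    The message itself does not say which object was over-released; removing from the superview and nilling a retain property balances the two references shown, so the extra free is likely happening elsewhere. One hedged way to narrow it down (not from the original post) is to do what the message suggests and break on malloc_error_break, and to enable zombies so the bad release is reported where it happens:
      # in the gdb console (or as a symbolic breakpoint in Xcode):
      break malloc_error_break
      # in the executable's environment variables, enable zombie objects:
      NSZombieEnabled=YES
    With zombies on, an over-released view or image buffer is usually reported as a message sent to a deallocated instance, which points at the real culprit rather than at this removeFromSuperview line.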

    Read the article

  • How to solve memory leaks in the libxml parser in Objective-C where the list is returned?

    - by Madan Mohan
    Hi Guys, I got leaks in Lib Xml Parser while retrieving the data from the net, Here in the below code, I have allocated the list - (void)getCustomersList { // make an operation so we can push it into the queue SEL method = @selector(parseForData); NSInvocationOperation *op = [[NSInvocationOperation alloc] initWithTarget:self selector:method object:nil]; customersTempList = [[NSMutableArray alloc] initWithCapacity:20];// allocated list [self.retrieverQueue addOperation:op]; [op release]; } // return each recode // in parser .m class one of the condition in endElement where it shows a leak. else if(0 == strncmp((const char *)localname, kCustomerElement, kCustomerElementLength)) { [customersTempList addObject:customer]; printf("\n no of objects in temp list:%d", [customersTempList count]); if ([customersTempList count] == 20) { NSMutableArray* argsList = [customersTempList copy];//////////////////////here it is showing leak. printf("\n Calling reload data with %d new objects", [argsList count]); SEL selector = @selector(parser:addCustomerObject:); NSMethodSignature *sig = [(id)self.delegate methodSignatureForSelector:selector]; if(nil != sig && [self.delegate respondsToSelector:selector]) { NSInvocation *invocation = [NSInvocation invocationWithMethodSignature:sig]; [invocation retainArguments]; [invocation setTarget:self.delegate]; [invocation setSelector:selector]; [invocation setArgument:&self atIndex:2]; [invocation setArgument:&argsList atIndex:3]; [invocation performSelectorOnMainThread:@selector(invoke) withObject:NULL waitUntilDone:NO]; } [customersTempList removeAllObjects]; } } // returned the list after all the records are stored in the list else if(0 == strncmp((const char *)localname, kCustomersElement, kCustomersElementLength)) { printf("\n Calling reload data with %d new objects", [customersTempList count]); NSMutableArray* argsList = [customersTempList copy]; printf("\n Calling reload data with %d new objects", [argsList count]); SEL selector = @selector(parser:addCustomerObject:); NSMethodSignature *sig = [(id)self.delegate methodSignatureForSelector:selector]; if(nil != sig && [self.delegate respondsToSelector:selector]) { NSInvocation *invocation = [NSInvocation invocationWithMethodSignature:sig]; [invocation retainArguments]; [invocation setTarget:self.delegate]; [invocation setSelector:selector]; [invocation setArgument:&self atIndex:2]; [invocation setArgument:&argsList atIndex:3]; [invocation performSelectorOnMainThread:@selector(invoke) withObject:NULL waitUntilDone:NO]; } [customersTempList removeAllObjects]; } } please help me out of this, Thanks, Madan.
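    For what it is worth, the flagged line is consistent with the copy simply never being balanced: -copy returns an object the caller owns, and since -retainArguments makes the NSInvocation hold its own reference, the local copy can be autoreleased. A hedged fix under manual reference counting, applying to both places where the copy is made:
      // Balance the +1 from -copy; the invocation retains argsList itself via retainArguments.
      NSMutableArray* argsList = [[customersTempList copy] autorelease];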

    Read the article

  • Why can't I reclaim my dynamically allocated memory using the "delete" keyword?

    - by synaptik
    I have the following class: class Patient { public: Patient(int x); ~Patient(); private: int* RP; }; Patient::Patient(int x) { RP = new int [x]; } Patient::~Patient() { delete [] RP; } I create an instance of this class on the stack as follows: void f() { Patient p(10); } Now, when f() returns, I get a "double free or corruption" error, which signals to me that something is attempted to be deleted more than once. But I don't understand why that would be so. The space for the array is created on the heap, and just because the function from inside which the space was allocated returns, I wouldn't expect the space to be reclaimed. I thought that if I allocate space on the heap (using the new keyword), then the only way to reclaim that space is to use the delete keyword. Help! :)
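    The snippet as shown should not double-free by itself, so one hedged guess is that Patient objects are copied somewhere in the real code: the compiler-generated copy constructor duplicates the raw RP pointer, and both destructors then delete the same array. Following the Rule of Three, or simply holding the storage in a std::vector so copies are deep and no destructor is needed, avoids that:
      #include <vector>

      class Patient {
      public:
          explicit Patient(int x) : RP(x) {}  // allocates x ints, freed automatically
          // No destructor, copy constructor, or assignment operator required:
          // std::vector owns the memory and copies it element by element.
      private:
          std::vector<int> RP;
      };

      void f() {
          Patient p(10);
      }  // no double free when p, or any copy of it, is destroyed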

    Read the article

  • Huge Graph Structure

    - by Harph
    I'm developing an application in which I need a structure to represent a huge graph (between 1000000 and 6000000 nodes and 100 or 600 edges) in memory. The edge representation will contain some attributes of the relation. I have tried a memory-map representation, arrays, dictionaries and strings to represent that structure in memory, but it always crashes because of the memory limit. I would like to get some advice on how I can represent this, or something similar. By the way, I'm using Python.
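    A hedged sketch of one compact layout (a CSR-style adjacency kept in flat typed arrays from the standard library); the single integer attribute per edge and the requirement that edges arrive sorted by source node are simplifying assumptions for illustration:
      from array import array

      def build_graph(num_nodes, edge_iter):
          """edge_iter yields (source, target, attribute) with sources in ascending order."""
          offsets = array('q', [0] * (num_nodes + 1))
          targets = array('q')
          attrs = array('q')
          for src, dst, a in edge_iter:
              targets.append(dst)
              attrs.append(a)
              offsets[src + 1] += 1
          for n in range(1, num_nodes + 1):   # prefix sums turn per-node counts into start offsets
              offsets[n] += offsets[n - 1]
          return offsets, targets, attrs

      def neighbours(offsets, targets, node):
          # edges of 'node' live in targets[offsets[node]:offsets[node + 1]]
          return targets[offsets[node]:offsets[node + 1]]
    This costs a few bytes per edge instead of tens of bytes per Python object; scipy.sparse or an on-disk store are the next steps if even that does not fit.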

    Read the article

  • Concatenating a string and byte array into unmanaged memory.

    - by Scott Chamberlain
    This is a follow-up to my last question. I now have a byte[] of values for my bitmap image. Eventually I will be passing a string to the print spooler of the format String.Format("GW{0},{1},{2},{3},", X, Y, stride, _Bitmap.Height) + my binary data; I am using the SendBytesToPrinter command from here. Here is my code so far to send it to the printer: public static bool SendStringPlusByteBlockToPrinter(string szPrinterName, string szString, byte[] bytes) { IntPtr pBytes; Int32 dwCount; // How many characters are in the string? dwCount = szString.Length; // Assume that the printer is expecting ANSI text, and then convert // the string to ANSI text. pBytes = Marshal.StringToCoTaskMemAnsi(szString); pBytes = Marshal.ReAllocCoTaskMem(pBytes, szString.Length + bytes.Length); Marshal.Copy(bytes,0, SOMETHING GOES HERE,bytes.Length); // this is the problem line // Send the converted ANSI string + the concatenated bytes to the printer. SendBytesToPrinter(szPrinterName, pBytes, dwCount); Marshal.FreeCoTaskMem(pBytes); return true; } My issue is that I do not know how to append my data onto the end of the string. Any help would be greatly appreciated, and if I am doing this totally wrong I am fine with going an entirely different way (for example, somehow getting the binary data concatenated onto the string before the move to unmanaged space). P.S. As a second question, will ReAllocCoTaskMem move the data that was sitting in it before the call to the new location?
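    A hedged rework of the copy step (it needs using System.Runtime.InteropServices and System.Text), assuming the printer simply expects the ANSI string bytes immediately followed by the raw image bytes; it allocates one unmanaged block of the combined size instead of reallocating the string buffer:
      public static bool SendStringPlusByteBlockToPrinter(string szPrinterName, string szString, byte[] bytes)
      {
          // Convert the string to ANSI up front so both lengths are known.
          byte[] stringBytes = Encoding.Default.GetBytes(szString);
          int total = stringBytes.Length + bytes.Length;

          IntPtr pUnmanaged = Marshal.AllocCoTaskMem(total);
          try
          {
              // String bytes first, then the image bytes right behind them.
              Marshal.Copy(stringBytes, 0, pUnmanaged, stringBytes.Length);
              Marshal.Copy(bytes, 0, new IntPtr(pUnmanaged.ToInt64() + stringBytes.Length), bytes.Length);
              SendBytesToPrinter(szPrinterName, pUnmanaged, total);
          }
          finally
          {
              Marshal.FreeCoTaskMem(pUnmanaged);
          }
          return true;
      }
    On the P.S.: ReAllocCoTaskMem behaves like realloc, so the existing contents are preserved up to the smaller of the old and new sizes, but allocating the full block once, as above, sidesteps the question entirely.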

    Read the article

  • Do the 'up to date' guarantees provided by final field in Java's memory model extend to indirect ref

    - by mattbh
    The Java language spec defines semantics of final fields in section 17.5: The usage model for final fields is a simple one. Set the final fields for an object in that object's constructor. Do not write a reference to the object being constructed in a place where another thread can see it before the object's constructor is finished. If this is followed, then when the object is seen by another thread, that thread will always see the correctly constructed version of that object's final fields. It will also see versions of any object or array referenced by those final fields that are at least as up-to-date as the final fields are. My question is - does the 'up-to-date' guarantee extend to the contents of nested arrays, and nested objects? An example scenario: Thread A constructs a HashMap of ArrayLists, then assigns the HashMap to final field 'myFinal' in an instance of class 'MyClass' Thread B sees a (non-synchronized) reference to the MyClass instance and reads 'myFinal', and accesses and reads the contents of one of the ArrayLists In this scenario, are the members of the ArrayList as seen by Thread B guaranteed to be at least as up to date as they were when MyClass's constructor completed?
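    A small illustration of the scenario (class and field names invented), going by the wording quoted above: the final-field guarantee covers the map and the lists reachable through it as they were when the constructor finished, but anything added to them afterwards without synchronization gets no such guarantee:
      import java.util.ArrayList;
      import java.util.HashMap;
      import java.util.List;
      import java.util.Map;

      public class MyClass {
          private final Map<String, List<String>> myFinal;

          public MyClass() {
              Map<String, List<String>> m = new HashMap<String, List<String>>();
              List<String> series = new ArrayList<String>();
              series.add("observation-1");           // reachable through myFinal at construction time
              m.put("series-A", series);
              myFinal = m;                           // the freeze happens at the end of the constructor
          }

          public List<String> get(String key) {
              return myFinal.get(key);               // another thread sees the constructor-time contents
          }
      }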

    Read the article

  • How do you detach an array of strings from shared memory? C

    - by Tim
    I have: int array_id; char* records[10]; // get the shared segment if ((array_id = shmget(IPC_PRIVATE, 1, 0666)) == -1) { perror("Array Creating"); } // attach records[0] = (char*) shmat(array_id, (void*)0, 0); if ((int) *records == -1) { perror("Array Attachment"); } which works fine, but when I try to detach I get an "invalid argument" error: // detach int error; if( (error = shmdt((void*) records[0])) == -1) { perror("Array Detachment"); } Any ideas? Thank you.
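    For reference, a hedged sketch of the usual shmget/shmat/shmdt sequence with error checks; it compares the shmat result against (char *) -1 instead of casting it to int, asks for a realistically sized segment rather than 1 byte, and removes the segment afterwards. It is a generic example, not a diagnosis of the exact EINVAL above:
      #include <stdio.h>
      #include <sys/ipc.h>
      #include <sys/shm.h>

      int main(void)
      {
          char* records[10];

          int array_id = shmget(IPC_PRIVATE, 4096, 0666);   /* room for the strings */
          if (array_id == -1) {
              perror("Array Creation");
              return 1;
          }

          records[0] = shmat(array_id, NULL, 0);
          if (records[0] == (char *) -1) {                  /* compare the pointer itself */
              perror("Array Attachment");
              return 1;
          }

          /* ... use the segment ... */

          if (shmdt(records[0]) == -1)
              perror("Array Detachment");

          if (shmctl(array_id, IPC_RMID, NULL) == -1)       /* free the segment when done */
              perror("Array Removal");

          return 0;
      }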

    Read the article

  • Running out of memory when getting a bitmap from a server?

    - by ikky
    Hi! I'm making an application which uses MANY images. The application gets the images from a server, and downloads them one at a time. After many images the creation of a bitmap throws an exception, but I don't know how to solve this. Here is my function for downloading the images: public static Bitmap getImageFromWholeURL(String sURL) { HttpWebRequest myRequest = (HttpWebRequest)WebRequest.Create(sURL); myRequest.Method = "GET"; // If there is nothing at the URL, then return null try { HttpWebResponse myResponse = (HttpWebResponse)myRequest.GetResponse(); System.Drawing.Bitmap bmp = new System.Drawing.Bitmap(myResponse.GetResponseStream()); myResponse.Close(); return bmp; } catch (Exception e) { return null; } } Can anyone help me out here? Thanks in advance!
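    Whatever the exact trigger, the response stream and every bitmap that is no longer displayed need to be disposed, otherwise GDI+ handles and unmanaged pixel memory pile up until new Bitmap(...) throws. A hedged variant of the same helper with deterministic disposal (it needs System.Drawing, System.IO and System.Net; the copy is made because a Bitmap created from a stream keeps that stream in use):
      public static Bitmap getImageFromWholeURL(String sURL)
      {
          try
          {
              HttpWebRequest myRequest = (HttpWebRequest)WebRequest.Create(sURL);
              myRequest.Method = "GET";

              using (HttpWebResponse myResponse = (HttpWebResponse)myRequest.GetResponse())
              using (Stream stream = myResponse.GetResponseStream())
              using (Bitmap downloaded = new Bitmap(stream))
              {
                  return new Bitmap(downloaded);   // detached copy; the stream can now be closed
              }
          }
          catch (Exception)
          {
              return null;
          }
      }
    The caller should likewise Dispose each returned bitmap once it is no longer shown.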

    Read the article

  • Passing a pointer to preallocated memory as an input parameter and having the function fill it

    - by djones2010
    Test code: void modify_it(char * mystuff) { char test[7] = "123456"; //last element is null I presume for C-style strings here. //static char test[] = "123123"; //when I do this I thought I should be able to gain access to this bit of memory when the function is destroyed, but that does not seem to be the case. //char * test = new char[7]; //this is also creating memory on the stack and not the heap I reckon, and gets destroyed once the function is done with. strcpy_s(mystuff,7,test); //this does the job as long as memory for mystuff has been allocated outside the function. mystuff = test; //this does not work. I know with C-style strings you can't just do string assignments, they have to be actually copied. In this case I was using this in conjunction with static char test, thinking that by having it as static the memory would not get destroyed and I can then simply point mystuff to test and be done with it. I would later have to address the memory cleanup in the main function. But anyway this never worked. } int main(void) { char * mystuff = new char [7]; //allocate memory on the heap where the pointer will point modify_it(mystuff); std::string test_case(mystuff); std::cout<<test_case.c_str(); //this is the only way I know how to use cout, by making it into a C++ string. delete [] mystuff; return 0; } In the case of a static array in the function, why would it not work? In the case when I allocated memory using new in the function, does it get created on the stack or the heap? In the case where I have a string which needs to be copied into char * form, everything I see usually requires const char* instead of just char*. I know I could use a reference to take care of this easily, or char ** to send in the pointer and do it that way, but I just wanted to know if I could do it with just char *. Anyway, your thoughts and comments plus any examples would be very helpful.
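    To the main question: yes, a plain char * parameter is enough, as long as the caller owns the buffer and the function only writes through the pointer; mystuff = test never works because the pointer itself is passed by value, so the assignment only changes the function's local copy. A hedged, portable sketch (snprintf stands in for the MSVC-only strcpy_s, and the buffer size is an arbitrary example):
      #include <cstdio>
      #include <iostream>

      // Fill a caller-owned buffer; bufsize guards against overflow.
      void modify_it(char* mystuff, std::size_t bufsize)
      {
          const char test[] = "123456";                  // local array, dies with the function
          std::snprintf(mystuff, bufsize, "%s", test);   // copy characters into the caller's memory
      }

      int main()
      {
          char mystuff[7];                       // a stack buffer is fine; new char[7]/delete[] also works
          modify_it(mystuff, sizeof mystuff);
          std::cout << mystuff << std::endl;
          return 0;
      }
    The static char test variant does outlive the function, but assigning its address to the parameter still cannot be seen by the caller; to hand back a pointer that way you would need char** or a reference to the pointer.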

    Read the article

  • What in-memory database technology can do realtime materialized views?

    - by KA100
    What I'm looking for is something like materialized views in the front end that show my data in different ways without full recalculation. Let's say I have a stock watcher with many front-end views and dashboards, some based on aggregation, order by, or just filters with different criteria defined in real time by the user. Now, I receive online record updates from some web service, and it's not like a "data warehouse": every single record can be updated at any time, and it actually happens every second. Is there any technology that can help me, so that I can create something like a materialized view and have it update without doing a full recalculation every time the data changes? Thank you.

    Read the article

  • Iterating over a large data set in long running Python process - memory issues?

    - by user1094786
    I am working on a long-running Python program (a part of it is a Flask API, and the other a realtime data fetcher). Both my long-running processes iterate, quite often (the API one might even do so hundreds of times a second), over large data sets (second-by-second observations of certain economic series, for example 1-5MB worth of data or even more). They also interpolate, compare and do calculations between series, etc. What techniques, for the sake of keeping my processes alive, can I practice when iterating over / passing as parameters / processing these large data sets? For instance, should I use the gc module and collect manually? Any advice would be appreciated. Thanks!
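    A hedged sketch of the pattern that usually matters most here: stream observations with generators and keep only a bounded window in memory, so each request touches one slice of a series rather than whole lists (the field names and the rolling mean are invented for illustration):
      from collections import deque

      def observations(series_source):
          """Yield (timestamp, value) pairs one at a time instead of building a list."""
          for row in series_source:               # e.g. a DB cursor, a file object, or an API page
              yield row["timestamp"], row["value"]

      def rolling_mean(series_source, window=60):
          """Interpolate/compare over a bounded window while iterating a large series."""
          buf = deque(maxlen=window)
          for ts, value in observations(series_source):
              buf.append(value)
              yield ts, sum(buf) / len(buf)
    gc.collect() only reclaims reference cycles, so explicit calls rarely help unless profiling shows cyclic garbage accumulating between requests; bounding what each iteration holds on to is the bigger win.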

    Read the article
