Search Results

Search found 6387 results on 256 pages for 'cpu allocation'.


  • Mac OS X Server 10.6.6 "disables" CPU in VMware Fusion [closed]

    - by wjlafrance
    Hello! I installed Mac OS X Server 10.6.0 in VMware Fusion the other night and it worked perfectly, until I ran Software Update. I upgraded to 10.6.6 through the combo updater, and now when I start the VM it says: "The CPU has been disabled by the guest operating system. You will need to power off or reset the virtual machine at this point." I've switched the operating system in the options to OS X Server 32bit, 64bit, and even to Windows 7, and nothing has worked. Does anyone have any ideas?


  • MySQL process goes over 100% CPU usage

    - by Temnovit
    Hello! I'm experiencing some problems with my LAMP server. Recently, everything became very slow, even though the visitor count on my websites didn't change too much. When I run the top command, it says that the mysql process has taken 150-200% of CPU. How is that possible? I always thought that 100% was the maximum. I'm running Ubuntu 9.04 server edition with 1.5 GB RAM. my.cnf settings:

      key_buffer = 64M
      max_allowed_packet = 16M
      thread_stack = 192K
      thread_cache_size = 8
      myisam-recover = BACKUP
      max_connections = 200
      table_cache = 512
      table_definition_cache = 512
      thread_concurrency = 2
      read_buffer_size = 1M
      sort_buffer_size = 4M
      join_buffer_size = 1M
      query_cache_limit = 1M # the maximum size of individual query results
      query_cache_size = 128M

    Here is the output of MySQLTuner and of the top command (outputs not included in this excerpt). What could be the cause of this problem? Can I make changes to my my.cnf to prevent the server from hanging?
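
    (A side note on the "over 100%" figure: top normally accounts for CPU per core, so on a multi-core machine a multi-threaded process such as mysqld can legitimately report up to N x 100%. A minimal sketch of that accounting, using the third-party psutil package - assumed installed - and a placeholder PID:)

      # Illustrative only: per-core accounting means a busy multi-threaded
      # process can report more than 100% CPU (100% == one fully used core).
      import psutil  # third-party package, assumed installed

      MYSQL_PID = 1234  # placeholder; use the real mysqld PID

      per_core = psutil.cpu_percent(interval=1.0, percpu=True)
      proc = psutil.Process(MYSQL_PID)
      proc_pct = proc.cpu_percent(interval=1.0)

      print("per-core usage:", per_core)            # e.g. [90.0, 85.0] on 2 cores
      print("mysqld usage:  %.1f%%" % proc_pct)     # can exceed 100 on multi-core
      print("theoretical max: %d%%" % (100 * psutil.cpu_count()))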


  • MSSQL 2005 Snapshot Agent Uses 100% CPU

    - by matth1jd
    When setting up a new subscription to a publication (transactional replication) from 64-bit SQL Server 2005 to 64-bit SQL Server 2005, the Snapshot Agent on the publisher consumes 100% of the CPU. I am using SSMS to create the new subscription. My initial impression is that this could be from row locking occurring during the generation of the snapshot, but I have read that a concurrent snapshot is generated by default in SQL Server 2005, and that row locking shouldn't be a concern. As this is a production server I would like to be able to initialize replication without bringing the box to its knees. Any suggestions would be helpful and appreciated.


  • sqlserver.exe uses 100% CPU

    - by Markus
    I've created an application (ASP.NET) that once a day syncs an entire database through XML files. The sync first creates a transaction, then clears the database's tables, and then starts to parse and insert the new rows into the database. When all the parsing is complete it commits the transaction. This works fine on SQL Server 2005 (on another machine), but on SQL Server 2005 Express the process starts to use 100% CPU after a while, and as I log the inserts being made I can see that it just stops inserting. No exception, it just stops inserting. Anyone got any idea what this may be? I've previously run the synchronization on another SQL Server 2005 Express instance (also on another computer), and that worked. The server has only 2GB RAM; could this be the problem?


  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved.

    Background

    Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options:
    - shares, for proportional CPU allocation (if you have twice as many shares as me, and we are competing for CPU, you'll get about twice as many CPU cycles);
    - dedicated CPU allocation, in which a number of CPUs are exclusively dedicated to an application's use (you can say that a zone or project "owns" 8 CPUs on a 32 CPU machine, for example); and
    - capped CPU, in which you specify the upper bound, or cap, of how much CPU an application gets (for example, you can throttle an application to 0.125 of a CPU).
    (This isn't meant to be an exhaustive list of Solaris RM controls.)

    Workload management

    Useful as that is (and tragic that some other operating systems have little resource management and isolation, and frighten people into running only 1 app per OS instance - and wastefully size every server for the peak workload it might experience), that's not really workload management. With resource management one controls the resources, and hopes that's enough to meet application service objectives. In fact, we hold resource distribution constant, see if that was good enough, and adjust resource distribution if that didn't meet service level objectives. Here's an example of what happens today: Let's try 30% dedicated CPU. Not enough? Let's try 80%. Oh, that's too much, and we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again. It's not the process I object to - it's that we too often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with that to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost":
    Me: "It won't make any difference - there's plenty of spare CPU to be had, and your application is completely I/O bound."
    User: "Please do it anyway."
    Me: "Oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.)

    Prior art

    There are some operating environments that take a stab at workload management (rather than resource management), but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to make a target rate of service units consumed per second. But this seems to be missing a key point: what is the relationship between artificial 'service units' and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so am getting enough service units), but not enough of the resource that's needed to remove the bottleneck?

    Actual workload management

    That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, and have the workload manager figure out which resources are not being adequately provided, and then adjust them as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly. If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words: don't hold the number of CPU shares constant and watch the achievement of service level vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or amount of other resources like RAM or I/O bandwidth) in order to meet the objective.

    Instrumenting non-instrumented applications

    There's one little problem here: how do I measure application performance in a way that relates to a service level? I don't want to do it based on internal resources like the number of CPU seconds it received per minute - we need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can then measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing an application enough to keep it from meeting its objectives. I can then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities for both marking application progress and determining what factors cause application latency. The Solaris DTrace facility lets me introspect on application behavior: in particular I can see events like "receive a web hit" and "respond to that web hit", so I can get transaction rate and response time. DTrace (and tools like prstat) let me see where latency is being added to an application, so I know which resource to adjust.

    Summary

    After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resources constant and suffering variable levels of success meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove delays preventing service level attainment. I've done it by hand for a long time - I think that's what a computer should do for me.
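
    (The feedback loop the post describes is easy to picture in a short sketch. This is only an illustration of the idea, not the patented implementation; measure_response_time() and set_cpu_shares() are hypothetical stand-ins for real instrumentation and a resource-manager API, simulated here so the loop actually runs.)

      import random
      import time

      # Hypothetical hooks: in a real system these would be backed by application
      # instrumentation (e.g. DTrace probes) and a resource-manager call.
      def measure_response_time(shares):
          # Simulated: pretend response time improves as the workload gets more CPU.
          return 2.0 * 100.0 / shares + random.uniform(-0.05, 0.05)

      def set_cpu_shares(workload, shares):
          print("%s -> %d shares" % (workload, shares))

      def workload_manager(workload, objective_s, shares=100,
                           step=20, min_shares=10, max_shares=1000, rounds=20):
          """Hold the service level constant; vary the resource allocation."""
          for _ in range(rounds):
              observed = measure_response_time(shares)
              if observed > objective_s:            # missing the objective: add CPU
                  shares = min(shares + step, max_shares)
              elif observed < 0.8 * objective_s:    # comfortably ahead: give back
                  shares = max(shares - step, min_shares)
              set_cpu_shares(workload, shares)
              time.sleep(0.1)                       # re-evaluate periodically

      workload_manager("web-tier", objective_s=1.0)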


  • In plesk 9.3.0, which php.ini is in use?

    - by Gaia
    I have 3 (actually 4, but the 4th one is for installatron) php.ini files in my Virtuozzo Container running RHEL 5.x:

      /vz/root/1003/usr/local/psa/admin/conf/php.ini
      /vz/root/1003/etc/php.ini
      /vz/root/1003/etc/etc/php.ini

    Which one do I use to change the MEMORY_LIMIT for a wordpress app running in the container 1003? Thanks!


  • Would SSD drives benefit from a non-default allocation unit size?

    - by davebug
    The default allocation unit size recommended when formatting a drive in our current set-up is 4096 bytes. I understand the basics of the pros and cons of larger and smaller sizes (performance boost vs. space preservation), but it seems the benefits of a solid state drive (seek times massively lower than hard disks) may create a situation where a much smaller allocation size is not detrimental. Were this the case, it would at least partially help to overcome the disadvantage of SSDs (massively higher prices per GB). Is there a way to determine the 'cost' of smaller allocation sizes specifically related to seek times? Or are there any studies or articles recommending a change from the default based on this newer tech? (Assume the most average scattering of file sizes: program files, OS files, data, mp3s, text files, etc.)
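
    (One component of that 'cost' can be estimated without any benchmarking: the slack space lost to rounding every file up to a whole number of allocation units. A rough sketch, with an assumed and purely illustrative file-size mix:)

      def slack_bytes(file_sizes, unit):
          """Bytes wasted by rounding each file up to a whole allocation unit."""
          return sum((-size) % unit for size in file_sizes)

      # Hypothetical mix: many small text/config files, some documents, some media.
      files = [300] * 5000 + [40_000] * 2000 + [5_000_000] * 500 + [60_000_000] * 50

      for unit in (512, 4096, 65536):
          wasted = slack_bytes(files, unit)
          print("%6d-byte units: %8.1f MB slack" % (unit, wasted / 1e6))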


  • Daemon program that uses select() inside an infinite loop uses significantly more CPU when ported from Solaris to Linux

    - by Jake
    I have a daemon app written in C that is currently running with no known issues on a Solaris 10 machine. I am in the process of porting it over to Linux. I have had to make minimal changes. During testing it passes all test cases. There are no issues with its functionality. However, when I view its CPU usage when 'idle' on my Solaris machine it is using around 0.03% CPU. On the virtual machine running Red Hat Enterprise Linux 4.8, that same process uses all available CPU (usually somewhere in the 90%+ range). My first thought was that something must be wrong with the event loop. The event loop is an infinite loop ( while(1) ) with a call to select(). The timeval is set up so that timeval.tv_sec = 0 and timeval.tv_usec = 1000. This seems reasonable enough for what the process is doing. As a test I bumped timeval.tv_sec to 1. Even after doing that I saw the same issue. Is there something I am missing about how select works on Linux vs. Unix? Or does it work differently with an OS running on a virtual machine? Or maybe there is something else I am missing entirely? One more thing: I am not sure which version of VMware Server is being used. It was just updated about a month ago though. Thanks.
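
    (One commonly cited difference between the platforms: Linux's select() updates the struct timeval argument in place to reflect the time not slept, while Solaris leaves it untouched, so a timeout initialized once outside the loop can decay to zero and turn the wait into a busy spin. The sketch below - in Python only because the original C isn't shown - illustrates the defensive pattern of passing a fresh timeout on every iteration; handle_data is a hypothetical callback.)

      import select
      import socket

      def idle_loop(sock, handle_data):
          """Minimal select()-based event loop; handle_data is a hypothetical callback."""
          while True:
              # Pass a fresh timeout on every iteration.  In the C API this means
              # refilling struct timeval inside the loop, because Linux's select()
              # overwrites it with the time remaining (Solaris does not).
              readable, _, _ = select.select([sock], [], [], 0.001)
              if readable:
                  data = sock.recv(4096)
                  if data:
                      handle_data(data)

      # Example wiring (UDP socket just to make the sketch self-contained):
      # s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      # s.bind(("127.0.0.1", 9999))
      # idle_loop(s, print)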


  • File transfer eating a lot of CPU

    - by Dan C.
    I'm trying to transfer a file over an IHttpHandler; the code is pretty simple. However, when I start a single transfer it uses about 20% of the CPU. If I were to scale this to 20 simultaneous transfers, the CPU usage would be very high. Is there a better way I can be doing this to keep the CPU lower? The client code just sends over chunks of the file 64KB at a time.

      public void ProcessRequest(HttpContext context)
      {
          if (context.Request.Params["secretKey"] == null)
          {
          }
          else
          {
              accessCode = context.Request.Params["secretKey"].ToString();
          }

          if (accessCode == "test")
          {
              string fileName = context.Request.Params["fileName"].ToString();
              byte[] buffer = Convert.FromBase64String(context.Request.Form["data"]);
              string fileGuid = context.Request.Params["smGuid"].ToString();
              string user = context.Request.Params["user"].ToString();
              SaveFile(fileName, buffer, user);
          }
      }

      public void SaveFile(string fileName, byte[] buffer, string user)
      {
          string DirPath = @"E:\Filestorage\" + user + @"\";
          if (!Directory.Exists(DirPath))
          {
              Directory.CreateDirectory(DirPath);
          }

          string FilePath = @"E:\Filestorage\" + user + @"\" + fileName;
          FileStream writer = new FileStream(FilePath,
              File.Exists(FilePath) ? FileMode.Append : FileMode.Create,
              FileAccess.Write, FileShare.ReadWrite);
          writer.Write(buffer, 0, buffer.Length);
          writer.Close();
      }


  • threading in Python taking up too much CPU

    - by KevinShaffer
    I wrote a chat program and have a GUI running using Tkinter, and to go and check when new messages have arrived, I create a new thread so Tkinter keeps doing its thing without locking up while the new thread goes and grabs what I need and updates the Tkinter window. This however becomes a huge CPU hog, and my guess is that it has to do somehow with the fact that the Thread is started and never really released when the function is done. Here's the relevant code (it's ugly and not optimized at the moment, but it gets the job done, and itself does not use too much processing power, as when I run it not threaded, it doesn't take up much CPU but it locks up Tkinter). Note: This is inside of a class, hence the extra tab.

      def interim(self):
          threading.Thread(target=self.readLog).start()
          self.after(5000, self.interim)

      def readLog(self):
          print 'reading'
          try:
              length = len(str(self.readNumber))
              f = open('chatlog' + str(myport), 'r')
              temp = f.readline().replace('\n', '')
              while (temp[:length] != str(self.readNumber)) or temp[0] == '<':
                  temp = f.readline().replace('\n', '')
              while temp:
                  if temp[0] != '<':
                      self.updateChat(temp[length:])
                      self.readNumber += 1
                  else:
                      self.updateChat(temp)
                  temp = f.readline().replace('\n', '')
              f.close()

    Is there a way to better manage the threading so I don't consume 100% of the CPU very quickly?
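
    (A pattern often suggested for this situation, sketched below: one long-lived worker thread feeds a Queue, and Tk's after() callback merely drains the queue on the GUI thread, so no new threads are spawned every few seconds. Python 3 module names are used - under Python 2 they are Queue and Tkinter - and update_chat, the chatlog path and the poll intervals are placeholders; the log-parsing details from the code above are omitted.)

      import queue
      import threading
      import time
      import tkinter as tk

      incoming = queue.Queue()

      def log_reader(path):
          """Single long-lived worker: tail the log and queue new lines."""
          with open(path, "r") as f:
              while True:
                  line = f.readline()
                  if line:
                      incoming.put(line.rstrip("\n"))
                  else:
                      time.sleep(0.5)        # nothing new yet; sleep, don't spin

      def poll_queue(root, update_chat):
          """Runs on the Tk thread: drain whatever the worker has queued."""
          while not incoming.empty():
              update_chat(incoming.get_nowait())
          root.after(500, poll_queue, root, update_chat)

      # Example wiring:
      # root = tk.Tk()
      # threading.Thread(target=log_reader, args=("chatlog",), daemon=True).start()
      # poll_queue(root, print)
      # root.mainloop()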


  • cset as non-root to set cpu affinity for running processes

    - by RaveTheTadpole
    I've been playing with cset to set cpu affinity for running processes. I'm recreating the built-in "shield" function manually with set and proc, to add some subsets for specific threads of my application. I have a bash script that is calling cset to create the sets, and move the correct threads to the correct sets. It works when run with sudo. Now I'd like to make this script executable by another user, who does not have sudo powers. I trust this user enough to be responsible with cset, but don't want to open up the wide powers of root. I thought that CAP_SYS_NICE -- which is needed for sched_setaffinity, which I just assume cset must use -- on the script would be sufficient, but that didn't work. I tried extending CAP_SYS_NICE to the cset program (which is a thin python wrapper for the cset python library). No dice. The output of cap_to_text on my CAP_SYS_NICE'd scripts is "=cap_ipc_lock,cap_sys_nice,cap_sys_resource+eip" (it has ipc_lock and sys_resource for other reasons; I think only sys_nice is relevant). Any ideas?


  • Replacing stock Core 2 Duo heatsink fan (just the fan really) with a Dell CPU fan

    - by user647345
    My old heatsink fan broke and I'm trying to reconnect its plug to a new fan. My Dell CPU fan has some custom Dell plug. I snipped the old fan's wire in half and kept the plug on the end of it. I want to connect the Dell fan's wire to that plug. The motherboard is a P5Q-E; the stock Core 2 Duo fan was 0.20A and the Dell is 0.70A. Is that going to matter? The wire from the fan has four wires, and the wire with the plug has four wires. They share three of the four colors: red, black, and blue. Dell's fourth wire is white, while the plug's fourth wire is yellow. Is it safe to assume that I just connect the yellow and the white wires together and match the rest up? I don't want to take any risk of damaging anything. It runs fine passively without a fan, but I have SpeedStep on, so I would like to use this fan and just fasten it to the heatsink with some twist ties and paperclips and call it a day.


  • NFS high CPU usage

    - by user269836
    Hello, I have a very strange issue. Here is the server:

      Intel(R) Xeon(TM) MP CPU 3.16GHz

      cat /proc/cpuinfo | grep proce | wc -l
      8

      free -m
                   total       used       free     shared    buffers     cached
      Mem:         28203      27606        596          0      10789       9714
      -/+ buffers/cache:       7103      21100
      Swap:        24695          0      24695

    RAID card:

      *-storage
           description: RAID bus controller
           product: MegaRAID
           vendor: LSI Logic / Symbios Logic
           physical id: 7
           bus info: pci@0000:13:07.0
           logical name: scsi2
           version: 01
           width: 32 bits
           clock: 66MHz
           capabilities: storage pm bus_master cap_list rom
           configuration: driver=megaraid latency=32
           resources: irq:134 memory:d8ff0000-d8ffffff(prefetchable) memory:df600000-df60ffff(prefetchable)

    HDD: 10x148Gb SCSI U320 15k - RAID5

      /dev/sdb1 807G 674G 93G 88% /storage
      /dev/sdb1 /storage ext4 defaults,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,noatime,nodiratime,noacl,errors=remount-ro 0 1

    Network cards:

      ethtool -i eth0
      driver: tg3
      version: 3.116
      firmware-version: 5704-v3.36, ASFIPMIc v2.36
      bus-info: 0000:10:02.0

      ethtool -i eth1
      driver: tg3
      version: 3.116
      firmware-version: 5704-v3.36, ASFIPMIc v2.36
      bus-info: 0000:10:02.0

      ifconfig
      bond0     Link encap:Ethernet  HWaddr 00:0f:1f:ff:d6:4d
                inet addr:192.168.15.71  Bcast:192.168.15.255  Mask:255.255.255.0
                inet6 addr: fe80::20f:1fff:feff:d64d/64 Scope:Link
                UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
                RX packets:1062818202 errors:0 dropped:3918 overruns:0 frame:0
                TX packets:1041317321 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:10000
                RX bytes:258867684559 (241.0 GiB)  TX bytes:396569192650 (369.3 GiB)

    This server is running only nfs-kernel-server.

      uname -a
      Linux nas2-backup 2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 x86_64 GNU/Linux

    Debian 6. Here is what happens: once per day or two, the load average goes up and can reach around 40, but if I restart nfs-kernel-server, everything is OK again. Then, on the next day or a little bit later, the load average goes up again. The servers are connected to a D-Link DGS 1016D with 24 GBit ports. I have tried everything to find out what the problem is and why it's happening, but I still cannot resolve this issue. Any ideas on what is happening here?


  • High load average due to high system cpu load (%sys)

    - by Nick
    We have a server with a high-traffic website. Recently we moved from a 2 x 4 core server (8 cores in /proc/cpuinfo), 32 GB RAM, running CentOS 5.x, to a 2 x 4 core server (16 cores in /proc/cpuinfo), 32 GB RAM, running CentOS 6.3. The server runs nginx as a proxy, a MySQL server and sphinx-search. Traffic is high, but the MySQL and sphinx-search databases are relatively small, and usually everything works blazing fast. Today the server experienced a load average of 100+. Looking at top and sar, we noticed that system CPU time (%sys) is very high - 50 to 70%. Disk utilization was less than 1%. We tried to reboot, but the problem persisted after the reboot. At any moment the server had at least 3-4 GB free RAM. The only message shown by dmesg was "possible SYN flooding on port 80. Sending cookies.". Here is a snippet of sar:

      11:00:01     CPU     %user     %nice   %system   %iowait    %steal     %idle
      11:10:01     all     21.60      0.00     66.38      0.03      0.00     11.99

    We know that this is a traffic issue, but we do not know how to proceed further or where to look for a solution. Is there a way we can find where exactly those "66.38%" are being used? Any suggestions would be appreciated.


  • Bash Parallelization of CPU-intensive processes

    - by ehsanul
    tee forwards its stdin to every single file specified, while pee does the same, but for pipes. These programs send every single line of their stdin to each and every file/pipe specified. However, I was looking for a way to "load balance" the stdin to different pipes, so one line is sent to the first pipe, another line to the second, etc. It would also be nice if the stdout of the pipes were collected into one stream as well. The use case is simple parallelization of CPU-intensive processes that work on a line-by-line basis. I was doing a sed on a 14GB file, and it could have run much faster if I could use multiple sed processes. The command was like this:

      pv infile | sed 's/something//' > outfile

    To parallelize, the best would be if GNU parallel would support this functionality, like so (I made up the --demux-stdin option):

      pv infile | parallel -u -j4 --demux-stdin "sed 's/something//'" > outfile

    However, there's no option like this, and parallel always uses its stdin as arguments for the command it invokes, like xargs. So I tried this, but it's hopelessly slow, and it's clear why:

      pv infile | parallel -u -j4 "echo {} | sed 's/something//'" > outfile

    I just wanted to know if there's any other way to do this (short of coding it up myself). If there was a "load-balancing" tee (let's call it lee), I could do this:

      pv infile | lee >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile)

    Not pretty, so I'd definitely prefer something like the made-up parallel version, but this would work too.
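
    (In the absence of a --demux-stdin option, a small wrapper can approximate the same effect: start N worker pipelines and deal stdin out to them round-robin. The sketch below is only an illustration - the sed command and worker count are placeholders, and output ordering across workers is not preserved. Newer GNU parallel releases also provide a --pipe mode for splitting stdin across jobs, which may be worth checking.)

      # usage (assumed): pv infile | python3 demux.py > outfile
      import subprocess
      import sys

      CMD = ["sed", "s/something//"]   # placeholder worker command
      N = 4                            # number of parallel workers

      # Start N workers; each inherits our stdout, so their output is merged
      # (ordering across workers is not preserved).
      workers = [subprocess.Popen(CMD, stdin=subprocess.PIPE) for _ in range(N)]

      for i, line in enumerate(sys.stdin.buffer):
          workers[i % N].stdin.write(line)     # deal lines out round-robin

      for w in workers:
          w.stdin.close()
          w.wait()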


  • At what point does the performance gap between GPU & CPU become so great that the CPU is holding back a system?

    - by Matthew Galloway
    I know that, generally speaking, for gaming performance the GPU is the primary factor which holds back performance, with everything else such as RAM/motherboard/PSU/CPU being secondary in importance to the graphics card. But at some point the other components ARE going to be significant in holding back the whole system! For instance, nobody would be silly enough to play modern games with 512MB RAM and the very latest graphics card (such as an HD7970), as I bet the performance increase over the same 512MB system with only a mid-range card would be non-existent! Thus it would be a "waste" for such a person to buy any high-end graphics card without first resolving the system's other problems. The same point applies to other components: if a system only had a Pentium II, a current high-end graphics card would be wasted on it! So my core question is: how do you determine at what point, for your system, spending on extra GPU power becomes completely "wasted"? (Also, a slightly more nuanced question is trying to work out at what point the extra graphics power might not be "wasted" but would be "sub-optimal" value for money, when the expenditure should instead be split between the graphics card and other components. Obviously a gamer shouldn't always just spend on upgrading the graphics card, but needs to balance it out.)


  • Remote Desktop svchost (networkservice) & lsa.exe high cpu usage, hangs on welcome screen

    - by Rohan1
    We have deployed an RDS farm with 12 virtual RDS servers using Hyper-V. Currently some users are not able to log on. After passing credentials to the connection broker, the session hangs on the "Welcome" screen. Using Resource Monitor we've seen that svchost (the "networkservice" instance) has a CPU usage of 50%; viewing the wait chain on the process shows that it's waiting for lsa.exe to finish. We can't kill any of the user's processes, even when trying with taskkill /f. Suspending lsa.exe did work but didn't have any effect. The network service also couldn't be restarted. Also, when this happens, the users currently logged on to the RDS server can't be displayed: Task Manager crashes when viewing the users, RDS service manager crashes when viewing the users (even remotely), and the cmd command "query session" doesn't work. No antivirus is installed on the RDS server. The only thing we can do is reboot the server, which is not an option because other users are in active sessions. Does anyone have ANY idea what's going on? We didn't encounter this in our pre-production setup.


  • How powerful of a PC do you need to edit HD videos?

    - by Xeoncross
    I have a Core2Quad Q8200 (2.3GHz) with 4GB of RAM, a 512MB PCIe video card, and a SATA-2 HD. Yet it still isn't fast enough to edit 720i/p video in Sony Vegas or Adobe Premiere/After Effects. My RAM usage never peaks over 1.6GB, but my CPU cores hit 95% quickly! Right now the preview panes in all these programs lag too badly to actually work on the videos. I get to see 1-3 frames every second or two! So how fast do I have to go? At what point will my CPU be fast enough to actually edit these videos? I have to assume that regular people and their regular sub-$2k computers can actually work with this footage. Another way to answer this is: how fast is the PC you use to edit videos? Update: It's worth noting that now that I have Adobe Premiere/After Effects CS4, I am more interested in getting that working than my older Vegas 6. If you didn't have to re-run the RAM preview every, single, time you made one change, it would be my answer. But since I like to test many filters and effects before choosing one, I have to re-render a 1-sec section of footage over and over, and it drives me nuts waiting. Perhaps a motherboard with dual Xeon chips or something would be able to handle this. It would probably cost as much as a dual-CrossFire setup and would also speed up other applications.


  • Mountain Lion overheating issue have to do with launchd/Python?

    - by Christopher Jones
    Ever since I installed Mountain Lion, my MacBook Air has been running SUPER hot. I opened up Activity Monitor, and everything seemed to be pretty normal, until I had it refresh every 0.5 seconds... and then I started seeing some interesting things. A 'Python' process appears and is terminated several times a second, and uses TONS of CPU - 70-110%. Its parent process is 'launchd' - and when I sample the process, there is a lot going on with Python. http://db.tt/ovuX3hZM These appear and disappear too quickly to get one... this one only happened to be using 70-ish percent of CPU, but they consistently hit 100-110%. http://db.tt/ovuX3hZMg The parent process, launchd: lots of context switches and UNIX system calls... What is the deal here? (photo goes here when I earn the street cred) The sample of launchd. ANY help here could be of help to not only me, but possibly many others experiencing decreased battery life and warmer laps these days because of this Mountain Lion weirdness. PLEASE HELP! PS - I'd put the screen grabs inline, but I don't have enough street cred yet.


  • About Load average in htop, how to decide if it's still doing ok?

    - by Joe Huang
    I use 'htop' to monitor my web server. It's been quite loaded recently, and the load average is showing something like this: Load average: 3.10 2.56 1.63. I searched the web about these numbers and I found an article about it: http://blog.scoutapp.com/articles/2009/07/31/understanding-load-averages In the article, it says that if I have 2 CPUs, 2.0 means 100% CPU utilization. My VPS has two CPUs, so what does 3.1 mean? How could it exceed 100% CPU utilization? And from these numbers, does it mean I should be wary about the load now? But the performance seems totally fine, and this is a managed VPS; the hosting company has not notified me of any warning about it. During the daytime, the load average always shows these high numbers... here is another snapshot taken while writing:

      Load average: 3.03 2.77 1.97
      Load average: 0.41 1.29 1.60   <---- 5 more minutes later

    So I am wondering how much room is left for this site to grow with the current configuration. What kind of proactive actions should I take in advance? I don't want to wait until the server bursts. Thanks.
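
    (The usual way to read these numbers is relative to the CPU count: the load average counts runnable tasks - and, on Linux, also tasks in uninterruptible I/O wait - so dividing by the number of CPUs gives a rough utilization figure, and 3.1 on 2 CPUs means roughly one task waiting for a CPU on average. A tiny normalization sketch using only the standard library:)

      import os

      load1, load5, load15 = os.getloadavg()
      cpus = os.cpu_count() or 1

      for label, value in (("1min", load1), ("5min", load5), ("15min", load15)):
          print("%-5s load %.2f  -> %.0f%% of %d CPUs" %
                (label, value, 100.0 * value / cpus, cpus))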


  • What is the difference between NSString alloc/initWithCString and stringWithUTF8String?

    - by mobibob
    I thought these two methods were (memory allocation-wise) equivalent; however, I was seeing "out of scope" and "NSCFString" in the debugger if I used what I thought was the convenient method (commented out below), and when I switched to the more explicit method my code stopped crashing! Notice that I am getting the string that is being stored in my container from a sqlite3 query.

      p = (char*) sqlite3_column_text (queryStmt, 1);
      // GUID = (NSString*) [NSString stringWithUTF8String: (p!=NULL) ? p : ""];
      GUID = [[NSString alloc] initWithCString:(p!=NULL) ? p : "" encoding:NSUTF8StringEncoding];

    Also note that if I looked at the values in the debugger and printed them with NSLog they looked correct; however, I don't think new memory was allocated and the value copied. Instead the memory pointer was stored - went out of scope - was referenced later - crash!


  • Efficient algorithm for creating an ideal distribution of groups into containers?

    - by Inshim
    I have groups of students that need to be allocated into classrooms of a fixed capacity (say, 100 chairs in each). Each group must be allocated to a single classroom, even if it is larger than the capacity (i.e. there can be an overflow, with students standing up). I need an algorithm to make the allocations with minimum overflows and minimum under-capacity classrooms. A naive algorithm to do this allocation is horrendously slow with ~200 groups, with a distribution of about half of them being under 20% of the classroom size. Any ideas where I can find at least a good starting point for making this algorithm lightning fast? Thanks!
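
    (One common greedy heuristic - not guaranteed optimal, but fast - is to sort the groups in decreasing size and place each into the currently least-loaded room. A rough sketch, assuming a fixed number of rooms and plain lists of group sizes:)

      import heapq

      def allocate(group_sizes, n_rooms, capacity=100):
          """Greedy heuristic: biggest groups first, each into the least-loaded room.

          Returns a list of (total_students, [group_sizes]) pairs, one per room.
          Illustrative only; an optimal split would need something like ILP or DP.
          """
          heap = [(0, i) for i in range(n_rooms)]      # min-heap of (load, room)
          rooms = [[] for _ in range(n_rooms)]
          for size in sorted(group_sizes, reverse=True):
              load, idx = heapq.heappop(heap)
              rooms[idx].append(size)
              heapq.heappush(heap, (load + size, idx))
          return [(sum(r), r) for r in rooms]

      # Example: 8 groups, 3 rooms of 100 chairs each
      for total, groups in allocate([90, 70, 60, 40, 30, 15, 12, 8], 3):
          print("room load %3d (overflow %3d): %s" % (total, max(0, total - 100), groups))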


  • How well do zippers perform in practice, and when should they be used?

    - by Rob
    I think that the zipper is a beautiful idea; it elegantly provides a way to walk a list or tree and make what appear to be local updates in a functional way. Asymptotically, the costs appear to be reasonable. But traversing the data structure requires memory allocation at each iteration, whereas a normal list or tree traversal is just pointer chasing. This seems expensive (please correct me if I am wrong). Are the costs prohibitive? And under what circumstances would it be reasonable to use a zipper?
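
    (For a feel of the per-step allocation being discussed, here is a minimal persistent list zipper sketched in Python, with tuple-based cons cells standing in for the usual algebraic data type: every move allocates a couple of small cells and shares the rest of the structure.)

      # A persistent list zipper.  Cons cells are nested 2-tuples (head, tail),
      # with () as the empty list; each move allocates two small cells and
      # shares everything else, which is the per-step allocation in question.
      EMPTY = ()

      def cons(head, tail):
          return (head, tail)

      def from_list(xs):
          right_side = EMPTY
          for x in reversed(xs[1:]):
              right_side = cons(x, right_side)
          return (xs[0], EMPTY, right_side)        # (focus, left-stack, right-stack)

      def move_right(z):
          focus, left, right = z
          head, rest = right                       # raises if already at the end
          return (head, cons(focus, left), rest)

      def set_focus(z, value):                     # O(1) "local update", no mutation
          _, left, right = z
          return (value, left, right)

      z = from_list([1, 2, 3, 4])
      z = move_right(move_right(z))                # focus on 3
      z = set_focus(z, 99)                         # conceptually [1, 2, 99, 4]
      print(z)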


  • Can you tune C runtime heap segment reservation size on XP?

    - by Jason
    When the VC6 C runtime on XP can't serve an allocation request within an existing heap segment, it reserves a new segment. The size of these new segments increases by factors of 2 (until there are no longer free areas large enough to do that, at which point it falls back to smaller segments). In any case, is there any way to control this behavior on XP with the VC6 runtime? For example, doubling up to a point, but capping segments at 64MB. If there is no way on XP but there is on 7, that would be good to know too. And if there is no way with VC6 but there is with VC8 or later, that would be interesting as well.


  • My processor is running slower than it should

    - by Soham
    I have a Core2Duo E7400 2.80GHz processor on my Intel D945GCNL motherboard. From CPU-Z, I can see that my processor speed is 1596MHz with a x6 multiplier and a 266MHz bus speed on each core. Why is my processor operating at 1596MHz rather than 2.80GHz? For my part, I've tried to disable SpeedStep in my BIOS by setting EIST to 'Disabled' and also tried to change the power option to 'High Performance' in Windows 7. I've also tried what was suggested in this question: http://superuser.com/questions/119176/processor-not-running-at-max-speed But it gained me nothing. I've also tried running a few heavy applications together to check whether the speed was increasing at that time or not, but it remains the same. Should I increase my multiplier or overclock to regain that lost speed? Should I check my power supply for any problem, or anything else? Please help me on this. And yes, I have a desktop computer, so there is no problem caused by a battery. Here's my CPU-Z screenshot: http://i56.tinypic.com/2lk4mqc.jpg

