Search Results

Search found 31421 results on 1257 pages for 'software performance'.

Page 202/1257

  • Bypassing administrator restrictions on Windows

    - by Pennf0lio
    Hi, is there software that allows you to bypass administrator restrictions? The problem is that a virus has restricted me from doing anything: I can't install anything because I don't have administrator permission. Is there any software you can recommend to get around this? Thanks!

  • Performance of Cluster Shared Volume file copy from SAN

    - by Sequenzia
    I am hoping someone can help me out with a strange issue. We are running a Microsoft failover cluster with Server 2008 R2 and an EqualLogic PS4000 SAN. Our main configuration has 2 Dell PowerEdge T710 servers in the cluster, with CSV and quorum set up. The servers each have 10 Broadcom 1Gb NICs; right now 4 of the NICs are on the iSCSI network for accessing the SAN, using MPIO and the Dell HIT pack. We have 5 VMs running on each node and everything runs smoothly, with no noticeable performance issues. From the SAN I can see the 4 iSCSI connections from each server to each volume (CSV and quorum). Again, it seems to perform great.

    The problem I am running into is with backups. I have tried a few backup programs, like BackupChain and Veeam, and both of them are very, very slow backing up the VMs. For instance, I have a 500GB (fixed disk) VHD running on the cluster, and it takes over 18 hours to back up that VHD, and that's with compression and deduping turned off, which is supposed to be the fastest.

    We also have a separate server that is just for backups, with a lot of direct-attached storage. As part of the troubleshooting I decided to bring that server into the cluster as a node. It now has access to the CSV and can read from C:\clusterstorage\volume1, which is where our VHDs live. This backup server only has 2 NICs: 1 NIC on the iSCSI network and the other on the main network. It has Intel NICs without any sort of MPIO or teaming.

    With the 3rd server now in the cluster I started doing some benchmarking. I have a test VHD of about 7GB stored in the CSV, and I tested copying that file from all 3 servers to the direct-attached storage in the respective server. The 2 Dell servers that are the main nodes in the cluster (they house the VMs) read that file at about 20 MB/sec, which is way too slow for the backups. The other server, which has only 1 NIC to the SAN, reads at around 100 MB/sec.

    I spent a few hours on the phone with Dell today about this. We went through all kinds of tests and the rep was pretty dumbfounded; he really has no idea why the server with only 1 NIC reads about 5 times as fast as the servers with 4 NICs and MPIO. We looked at the network utilization of the NICs while the file copy was going on: the servers with the 4 NICs showed a small increase in activity during the copy, but only up to around 8-10% on all 4 NICs, while the server with the 1 NIC jumped to over 80%.

    I plan on doing some more testing after hours and calling Dell back tomorrow, but I really am confused (and so is Dell's support rep) about why I cannot get faster file copy access to the CSV on those servers. Anyone have any input on this? Any feedback would be greatly appreciated. Thanks in advance.
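    As a rough sanity check (not from the original post; it assumes the NICs are 1 Gb/s links and that the percentages are link utilization), the observed utilization figures already bound the copy rates:

        # throughput ceilings implied by the observed NIC utilization (1 Gb/s links assumed)
        echo "1 NIC  at 80%: $(( 1000 * 80 / 100 / 8 )) MB/s"      # ~100 MB/s, matches the backup node
        echo "4 NICs at 10%: $(( 4 * 1000 * 10 / 100 / 8 )) MB/s"  # ~50 MB/s ceiling, yet only ~20 MB/s is seen

    That the MPIO nodes move even less data than their own modest link usage implies suggests the bottleneck sits above the NICs (for example CSV redirected I/O or MPIO path selection) rather than raw bandwidth.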

  • Looking for a free morphing/aging program for Mac OS X 10.6

    - by waszkiewicz
    Hi! I'm looking for a free morphing and/or aging program for Mac OS X 10.6, not too hard to use, with the kind of Mac "one click" function, if you know what I mean. Cheap programs are also welcome; I paid for a Photoshop license, so why not another program... My goal is to be able to modify faces, make them look older or younger, add piercings or other funny stuff. Thanks for your tips.

  • Instant MSI deployment via GPO

    - by HannesFostie
    Is it possible to instantly deploy a certain piece of software by creating a GPO in Active Directory? I realize it's possible to do this, but only after rebooting the computer, and that is something we don't always want to do, especially since I want to deploy some of the software on servers. What are the options? One thing to note is that most of our users do NOT have admin rights, as I am talking not only about servers but also about workstations in the classrooms.

  • Can someone find me a reference to this quote?

    - by Robot
    I'm looking for a reasonable reference to a known software personality who said something along the lines of "make sure your software runs the most common cases fast/easy, but all cases are possible". I'm sure there are many 80/20 quotes, so I'm looking for the most famous one that gets that point across. -Robot

  • Linux software RAID 10 with no superblock

    - by Shoshomiga
    I have a software RAID 10 with 6 x 2TB hard drives (plus a RAID 1 for /boot), and Ubuntu 10.04 is the OS. I had a RAID controller failure that put 2 drives out of sync and crashed the system. Initially the OS didn't boot and dropped into initramfs instead, saying that the drives were busy, but I eventually managed to bring the RAID up by stopping and assembling the drives. The OS booted up and said that there were filesystem errors; I chose to ignore that, because it would remount the fs read-only if there was a problem. Everything seemed to be working fine and the 2 drives started to rebuild. I was sure that it was a SATA controller failure, because I had DMA errors in my log files. The OS crashed soon after that with ext errors.

    Now it's not bringing up the RAID; it says that there is no superblock on /dev/sda2. I tried to reassemble manually with all the device names, but it still would not bring up the RAID 10, complaining about the missing superblock on sda2, and sda1 was also dropped from the RAID 1. When I ran examine on the RAID 10, it said that 1 of the initially failed drives is a spare, the other is a spare rebuilding, and sda2 is removed. It seems that sda decided to fail right when the system was vulnerable to it, because when I boot up a live CD it spews out sda unrecoverable read failures.

    I have been trying to fix this all week, but I'm not sure where to go with it now. I ordered more hard drives because I didn't have a complete backup, but it's too late for that now, and the only thing I can do is mirror all the hard drives onto the new ones (I'm not sure whether sda was mirrored without errors). On the internet I read that you can recover from this by recreating the array with the same options as when it was made; however, because sda is failing I can't use it, and I don't want to risk using its mirror instead, so I'm waiting to get another hard drive. I'm also not sure whether to include the out-of-sync drives, or if I can actually use those instead to recover the array. I also did a memtest and changed the motherboard, in addition to everything else. Sorry if this is a mess to read, but I've been trying to fix this all day and it's late at night now; any thoughts would be greatly appreciated.

    EDIT: This is my partition layout:

        Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
        255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x0009c34a

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *        2048      511999      254976   83  Linux
        /dev/sdb2          512000  3904980991  1952234496   83  Linux
        /dev/sdb3      3904980992  3907028991     1024000   82  Linux swap / Solaris
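    For readers in the same situation, a minimal recovery sketch, not from the original question. It assumes the device names above, a blank replacement disk at /dev/sdg and an array device of /dev/md1 (both placeholders), and that the superblocks on the surviving members are intact; verify with --examine before assembling anything:

        # 1. image the failing disk first so nothing more is lost
        #    (ddrescue is in the gddrescue package; -n skips the slow scraping pass)
        ddrescue -f -n /dev/sda /dev/sdg /root/sda.map

        # 2. inspect the surviving RAID 10 members for superblocks and event counts
        mdadm --examine /dev/sd[bcdef]2

        # 3. try a forced assemble without the failed disk
        mdadm --assemble --force /dev/md1 /dev/sd[bcdef]2

        # 4. if it assembles, check the filesystem read-only before mounting read-write
        fsck -n /dev/md1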

  • DVD-Player "Simulation"

    - by SjoerdV
    This may sound like a strange question, but I was wondering if there is software available which can emulate the behaviour of standalone DVD players. I'm currently debugging a DVD we're creating, and I can't afford to go hopping to my house every time to check. The reason I'm asking is that the problems only appear on some DVD players, which I cannot predict. Alternatively, is there software that can check a VIDEO_TS folder or ISO file for errors?

  • I finished coding my program. What's next? What steps should I take to sell it online?

    - by Luay
    Hi, I finished developing a program and would like to sell it online. However, I am not really sure what to do next. Here is my current plan, in order:

    1. Add a 'deployment' project (I'm using Visual Studio) to my solution so I can create a setup file for my program.
    2. Use the Visual Studio testing add-ons to test the program. I have no idea how to do this, but will teach myself.
    3. When all is done, install the program on my wife's, parents' and in-laws' computers to further test it under different environments.
    4. Set up a small Ltd. company. (I might start this earlier, as it might take a few weeks to complete.)
    5. Build / finish building a website (actually I started on this step a few weeks ago, but haven't devoted enough time to complete it yet).
    6. Find a software / service to protect my program from piracy. I know it is impossible to protect the program from piracy completely; I understand that fully. But I still want to implement some sort of solution that wouldn't harm or deter honest customers.
    7. Set up a business PayPal account.
    8. Find a software / service to handle payments on my website.
    9. Find a software / service to handle issuing license codes.
    10. Set up all of the above and go 'live'.

    However, I have zero experience in this sort of thing and would like your guidance on some points:

    1. Is it necessary to set up a company? Can I sell as an individual? Will selling as an individual deter persons or companies from buying my software?
    2. What is a decent choice for a software / service to protect my program from piracy? A quick search found something called Quick License Manager by Interactive Studios. Is this the sort of thing I am looking for? Is it any good? At which stage do I use or implement their service (or any similar service you might point me to)? I guess I am really asking: how does this work? Would they give me a file I just add to my project, rebuild it, and that's it?
    3. What should I implement to handle payments online, and how complicated is it? Could you point me to any such services, please?
    4. What should I implement to handle issuing license codes, and how will that software / service coordinate with the software / service that handles the anti-piracy stuff? Could you point me to such services, please?
    5. In a different Stack Overflow question (by another user) someone suggested a CMS called Software Droid (http://www.softwaredroid.com). Is this what I am after? Will it help?
    6. If the steps I'm following are incorrect in order or logic, could you please advise me on what I should actually do?

    I guess these are questions for those of you who have 'been there and done that', so any guidance will be very much appreciated. Many thanks in advance for your help.

  • Improving SAS multipath to JBOD performance on Linux

    - by user36825
    Hello all, I'm trying to optimize a storage setup on some Sun hardware with Linux, and any thoughts would be greatly appreciated. We have the following hardware:

      - Sun Blade X6270
      - 2 x LSISAS1068E SAS controllers
      - 2 x Sun J4400 JBODs with 1 TB disks (24 disks per JBOD)
      - Fedora Core 12, with the 2.6.33 release kernel from FC13 (also tried the latest 2.6.31 kernel from FC12, same results)

    Here's the datasheet for the SAS hardware: http://www.sun.com/storage/storage_networking/hba/sas/PCIe.pdf

    It's using PCI Express 1.0a, 8x lanes. With a bandwidth of 250 MB/sec per lane, we should be able to do 2000 MB/sec per SAS controller. Each controller can do 3 Gb/sec per port and has two 4-port PHYs. We connect both PHYs from a controller to a JBOD, so between the JBOD and the controller we have 2 PHYs * 4 SAS ports * 3 Gb/sec = 24 Gb/sec of bandwidth, which is more than the PCI Express bandwidth. With write caching enabled and when doing big writes, each disk can sustain about 80 MB/sec (near the start of the disk). With 24 disks, that means we should be able to do 1920 MB/sec per JBOD.

    My multipath configuration:

        multipath {
                rr_min_io 100
                uid 0
                path_grouping_policy multibus
                failback manual
                path_selector "round-robin 0"
                rr_weight priorities
                alias somealias
                no_path_retry queue
                mode 0644
                gid 0
                wwid somewwid
        }

    I tried values of 50, 100, and 1000 for rr_min_io, but it doesn't seem to make much difference. Along with varying rr_min_io I tried adding some delay between starting the dd's to prevent all of them writing over the same PHY at the same time, but this didn't make any difference, so I think the I/Os are getting properly spread out.

    According to /proc/interrupts, the SAS controllers are using an "IR-IO-APIC-fasteoi" interrupt scheme. For some reason only core #0 in the machine is handling these interrupts. I can improve performance slightly by assigning a separate core to handle the interrupts for each SAS controller:

        echo 2 > /proc/irq/24/smp_affinity
        echo 4 > /proc/irq/26/smp_affinity

    Using dd to write to the disk generates "Function call interrupts" (no idea what these are), which are handled by core #4, so I keep other processes off this core too. I run 48 dd's (one for each disk), assigning them to cores not dealing with interrupts like so:

        taskset -c somecore dd if=/dev/zero of=/dev/mapper/mpathx oflag=direct bs=128M

    oflag=direct prevents any kind of buffer cache from getting involved. None of my cores seem maxed out; the cores dealing with interrupts are mostly idle, and all the other cores are waiting on I/O, as one would expect:

        Cpu0  :  0.0%us,  1.0%sy,  0.0%ni, 91.2%id,  7.5%wa,  0.0%hi,  0.2%si,  0.0%st
        Cpu1  :  0.0%us,  0.8%sy,  0.0%ni, 93.0%id,  0.2%wa,  0.0%hi,  6.0%si,  0.0%st
        Cpu2  :  0.0%us,  0.6%sy,  0.0%ni, 94.4%id,  0.1%wa,  0.0%hi,  4.8%si,  0.0%st
        Cpu3  :  0.0%us,  7.5%sy,  0.0%ni, 36.3%id, 56.1%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu4  :  0.0%us,  1.3%sy,  0.0%ni, 85.7%id,  4.9%wa,  0.0%hi,  8.1%si,  0.0%st
        Cpu5  :  0.1%us,  5.5%sy,  0.0%ni, 36.2%id, 58.3%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu6  :  0.0%us,  5.0%sy,  0.0%ni, 36.3%id, 58.7%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu7  :  0.0%us,  5.1%sy,  0.0%ni, 36.3%id, 58.5%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu8  :  0.1%us,  8.3%sy,  0.0%ni, 27.2%id, 64.4%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu9  :  0.1%us,  7.9%sy,  0.0%ni, 36.2%id, 55.8%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu10 :  0.0%us,  7.8%sy,  0.0%ni, 36.2%id, 56.0%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu11 :  0.0%us,  7.3%sy,  0.0%ni, 36.3%id, 56.4%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu12 :  0.0%us,  5.6%sy,  0.0%ni, 33.1%id, 61.2%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu13 :  0.1%us,  5.3%sy,  0.0%ni, 36.1%id, 58.5%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu14 :  0.0%us,  4.9%sy,  0.0%ni, 36.4%id, 58.7%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu15 :  0.1%us,  5.4%sy,  0.0%ni, 36.5%id, 58.1%wa,  0.0%hi,  0.0%si,  0.0%st

    Given all this, the throughput reported by running "dstat 10" is in the range of 2200-2300 MB/sec. Given the math above, I would expect something in the range of 2 * 1920 ~= 3600+ MB/sec. Does anybody have any idea where my missing bandwidth went? Thanks!
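    One test not shown above (a sketch, not from the original post; the sd* names are placeholders for the per-path devices behind a multipath map) is to write the raw paths directly, bypassing dm-multipath, to see whether a single PHY or path is the bottleneck:

        # DESTRUCTIVE benchmark: writes zeros straight to the raw path devices;
        # compare each reported rate against the ~80 MB/sec a single disk sustains
        for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
            dd if=/dev/zero of=$dev oflag=direct bs=128M count=8 &
        done
        wait

    If the raw paths are all fast but the multipath devices are not, the loss is in the round-robin layer; if one path is slow on its own, suspect that cable or PHY instead.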

  • How to install multiple versions of the same software?

    - by Matt Love
    I manage phone systems for multiple clients. Each system uses the same administrator software, but it runs on different versions depending on what version of firmware is installed on the system controller. The software is downloaded directly from the system controller, so it's always the right version. For example, if the controller runs firmware version 5.0.2, you have to run version 5.0.2 of the administrator software; you can't administer a 5.0.2 controller with a later version of the software. Bottom line: you have to have the right version of the software to log into the controller. The software is not executable on its own; you have to install it. So every time I want to log into a different controller, I have to reinstall the right software. Is there any way to get around this? I'm running Windows 7 Enterprise x86.

  • MySQL Memory usage

    - by Rob Stevenson-Leggett
    Our MySQL server seems to be using a lot of memory. I've tried looking for slow queries and queries with no index, and have halved the peak CPU usage and Apache memory usage, but the MySQL memory usage stays constant at 2.2GB (~51% of the available memory on the server). Here's the graph from Plesk; running top in the SSH window shows the same figures. Does anyone have any ideas on why the memory usage is constant like this, rather than rising and falling with usage of the app?

    Here's the output of the MySQL Tuning Primer script:

        -- MYSQL PERFORMANCE TUNING PRIMER --
             - By: Matthew Montgomery -

        MySQL Version 5.0.77-log x86_64

        Uptime = 1 days 14 hrs 4 min 21 sec
        Avg. qps = 22
        Total Questions = 3059456
        Threads Connected = 13

        Warning: Server has not been running for at least 48hrs.
        It may not be safe to use these recommendations

        To find out more information on how each of these
        runtime variables effects performance visit:
        http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html
        Visit http://www.mysql.com/products/enterprise/advisors.html
        for info about MySQL's Enterprise Monitoring and Advisory Service

        SLOW QUERIES
        The slow query log is enabled.
        Current long_query_time = 1 sec.
        You have 6 out of 3059477 that take longer than 1 sec. to complete
        Your long_query_time seems to be fine

        BINARY UPDATE LOG
        The binary update log is NOT enabled.
        You will not be able to do point in time recovery
        See http://dev.mysql.com/doc/refman/5.0/en/point-in-time-recovery.html

        WORKER THREADS
        Current thread_cache_size = 0
        Current threads_cached = 0
        Current threads_per_sec = 2
        Historic threads_per_sec = 0
        Threads created per/sec are overrunning threads cached
        You should raise thread_cache_size

        MAX CONNECTIONS
        Current max_connections = 100
        Current threads_connected = 14
        Historic max_used_connections = 20
        The number of used connections is 20% of the configured maximum.
        Your max_connections variable seems to be fine.

        INNODB STATUS
        Current InnoDB index space = 6 M
        Current InnoDB data space = 18 M
        Current InnoDB buffer pool free = 0 %
        Current innodb_buffer_pool_size = 8 M
        Depending on how much space your innodb indexes take up it may be safe
        to increase this value to up to 2 / 3 of total system memory

        MEMORY USAGE
        Max Memory Ever Allocated : 2.07 G
        Configured Max Per-thread Buffers : 274 M
        Configured Max Global Buffers : 2.01 G
        Configured Max Memory Limit : 2.28 G
        Physical Memory : 3.84 G
        Max memory limit seem to be within acceptable norms

        KEY BUFFER
        Current MyISAM index space = 4 M
        Current key_buffer_size = 7 M
        Key cache miss rate is 1 : 40
        Key buffer free ratio = 81 %
        Your key_buffer_size seems to be fine

        QUERY CACHE
        Query cache is supported but not enabled
        Perhaps you should set the query_cache_size

        SORT OPERATIONS
        Current sort_buffer_size = 2 M
        Current read_rnd_buffer_size = 256 K
        Sort buffer seems to be fine

        JOINS
        Current join_buffer_size = 132.00 K
        You have had 16 queries where a join could not use an index properly
        You should enable "log-queries-not-using-indexes"
        Then look for non indexed joins in the slow query log.
        If you are unable to optimize your queries you may want to increase your
        join_buffer_size to accommodate larger joins in one pass.
        Note! This script will still suggest raising the join_buffer_size when
        ANY joins not using indexes are found.

        OPEN FILES LIMIT
        Current open_files_limit = 1024 files
        The open_files_limit should typically be set to at least 2x-3x that of
        table_cache if you have heavy MyISAM usage.
        Your open_files_limit value seems to be fine

        TABLE CACHE
        Current table_cache value = 64 tables
        You have a total of 426 tables
        You have 64 open tables.
        Current table_cache hit rate is 1%, while 100% of your table cache is in use
        You should probably increase your table_cache

        TEMP TABLES
        Current max_heap_table_size = 16 M
        Current tmp_table_size = 32 M
        Of 15134 temp tables, 9% were created on disk
        Effective in-memory tmp_table_size is limited to max_heap_table_size.
        Created disk tmp tables ratio seems fine

        TABLE SCANS
        Current read_buffer_size = 128 K
        Current table scan ratio = 2915 : 1
        read_buffer_size seems to be fine

        TABLE LOCKING
        Current Lock Wait ratio = 1 : 142213
        Your table locking seems to be fine

    The app is a Facebook game with about 50-100 concurrent users. Thanks, Rob
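    A side note on where the flat 2.2 GB comes from: the primer's memory ceiling is plain arithmetic over the values above. A rough reconstruction in shell (figures taken from the report; which setting accounts for the ~2 GB of global buffers is not visible in this excerpt):

        # per-thread buffers from the report, in KB: sort + read + read_rnd + join
        per_thread_kb=$(( 2048 + 128 + 256 + 132 ))
        echo "per thread: ~${per_thread_kb} KB"
        echo "x 100 max_connections: ~$(( per_thread_kb * 100 / 1024 )) MB"  # + thread stacks ~ the 274 M reported
        # global buffers (~2.01 G per the report) are typically allocated up front and
        # held, so resident size stays near 2.01 G regardless of load: hence the flat line

    Per-thread buffers are only allocated as connections need them, which would explain why usage does not swing with app load once the global buffers are resident.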

  • Parallel processing slower than sequential?

    - by zebediah49
    EDIT: For anyone who stumbles upon this in the future: ImageMagick uses an MP library. It's faster to use available cores if they're around, but if you have parallel jobs, it's unhelpful. Do one of the following:

      - do your jobs serially (with ImageMagick in parallel mode), or
      - set MAGICK_THREAD_LIMIT=1 for your invocation of the ImageMagick binary in question.

    By making ImageMagick use only one thread, it slows down by 20-30% in my test cases, but I could then run one job per core without issues, for a significant net increase in performance.

    Original question: While converting some images using ImageMagick, I noticed a somewhat strange effect: using xargs was significantly slower than a standard for loop. Since xargs limited to a single process should act like a for loop, I tested that, and found it to be about the same. Thus, we have this demonstration:

      - Quad core (AMD Athlon X4, 2.6GHz)
      - Working entirely on a tmpfs (16GB RAM total; no swap)
      - No other major loads

    Results:

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 1 convert -auto-level
        real    0m3.784s
        user    0m2.240s
        sys     0m0.230s

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 2 convert -auto-level
        real    0m9.097s
        user    0m28.020s
        sys     0m0.910s

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 10 convert -auto-level
        real    0m9.844s
        user    0m33.200s
        sys     0m1.270s

    Can anyone think of a reason why running two instances of this program takes more than twice as long in real time, and more than ten times as long in processor time, to complete the same task? After that initial hit, more processes do not seem to have as significant an effect. I thought it might have to do with disk seeking, so I did the test entirely in RAM. Could it have something to do with how convert works, such that having more than one copy at once means it cannot use the processor cache as efficiently?

    EDIT: When done with 1000 x 769KB files, performance is as expected. Interesting.

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 1 convert -auto-level
        real    3m37.679s
        user    5m6.980s
        sys     0m6.340s

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 1 convert -auto-level
        real    3m37.152s
        user    5m6.140s
        sys     0m6.530s

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 2 convert -auto-level
        real    2m7.578s
        user    5m35.410s
        sys     0m6.050s

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 4 convert -auto-level
        real    1m36.959s
        user    5m48.900s
        sys     0m6.350s

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 10 convert -auto-level
        real    1m36.392s
        user    5m54.840s
        sys     0m5.650s
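    A concrete form of the fix described in the EDIT above (a sketch; -P 4 assumes the quad-core box from the question):

        # cap each convert at one MP worker thread, then run one job per core via xargs
        export MAGICK_THREAD_LIMIT=1
        time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 4 convert -auto-level

    Per the EDIT, each individual convert runs 20-30% slower this way, but the jobs no longer fight over cores, for a net win.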

  • Performance of the Silverlight DataGrid in Silverlight 3 vs Silverlight 4 on a Mac

    - by Simon
    I'm using the Silverlight 4 Beta for a LOB application. After finding out today that I'll have to wait perhaps 4 months to be able to develop with SL4 on Visual Studio 2010, I'm thinking I need to downgrade my application to SL3, but that's another question. The problem is that I'm noticing absolutely abysmal performance on a Mac for simple DataGrids that work just fine on a PC. These grids contain only 5-10 columns and maybe 50 rows, yet paging up and down sometimes takes 1-2 seconds. I would appreciate anybody's experience with which of the following is the best solution:

      - reverting to Silverlight 3 and hoping the DataGrid is faster
      - switching to a 3rd-party DataGrid such as Telerik's
      - forgetting Silverlight altogether

    I was hoping that the SL4 runtime might be updated, but that probably won't happen for 3-4 months. Just a reminder: this is specifically a Mac issue. Performance on my PC, while slightly slow to populate the grid initially, is fine.

  • RAD/Eclipse Test and Performance Tools Platform: exporting data to a text file

    - by Berlin Brown
    I am using the RAD (also Eclipse-based) Test and Performance Tools Platform. I monitor CPU performance time with it, on particular methods, etc. It is a good tool for monitoring my applications, but I can't copy/paste or export the output to a text file format so I can send it to others. There has to be a way to export this, no? Also, I can save the output to a file, but it is a '*.trcxml' binary file. Has anyone seen a parser for this file format?

  • Deploying software on compromised machines

    - by Martin
    I've been involved in a discussion about how to build internet voting software for a general election. We've reached a general consensus that there exist plenty of secure methods for two-way authentication and communication. However, someone came along and pointed out that in a general election some of the machines being used are almost certainly going to be compromised. To quote:

        Let me be an evil electoral fraudster. I want to sample people's votes as they vote and hope I get something scandalous. I hire a botnet from some really shady dudes who control 1000 compromised machines in the UK, just for election day. I capture the voting habits of 1000 voters on election day. I notice 5 of them have voted BNP. I look these users up and check out their machines; I look through the documents on their machines and find out their names and addresses. I find out one of them is the wife of a Tory MP. I leak 'wife of Tory MP is a fascist!' to some blogger I know. It hits the internet, goes viral, and swings an election.

    That's a serious problem! So, what are the best techniques for running software where user interactions with the software must be kept secret, on a machine which is possibly compromised?

  • iPhone Foundation - performance implications of mutable and xxxWithCapacity:0

    - by Adam Eberbach
    All of the collection classes have two versions, mutable and immutable, such as NSArray and NSMutableArray. Is the distinction merely to promote careful programming by providing a const collection, or is there some performance hit when using a mutable object as opposed to an immutable one? Similarly, each of the collection classes has a method xxxWithCapacity:, like [NSMutableArray arrayWithCapacity:0]. I often use zero as the argument because it seems a better choice than guessing wrongly how many objects might be added. Is there some performance advantage to creating a collection with capacity for enough objects in advance? If not, why isn't the method something like + (id)emptyArray?

  • Logging strategy vs. performance

    - by vtortola
    Hi, I'm developing a web application that has to support lots of simultaneous requests, and I'd like to keep it fast enough. I now have to implement a logging strategy; I'm going to use log4net, but... what and how should I log? I mean:

      - How does logging impact performance?
      - Is it possible/recommendable to log using async calls?
      - Is it better to use a text file or a database?
      - Is it possible to do it conditionally? For example, log to the database by default, and if that fails, switch to a text file.
      - What about multithreading? Should I care about synchronization when I use log4net, or is it thread-safe out of the box?

    The requirements say the application should cache a couple of things per request, and I'm afraid of the performance impact of that. Cheers.

  • .NET or Windows Synchronization Primitives Performance Specifications

    - by ovanes
    Hello *, I am currently writing a scientific article, where I need to be very exact with citations. Can someone point me to MSDN, an MSDN article, a published article, or a book where I can find a performance comparison of Windows or .NET synchronization primitives? I know that these are in descending order of performance: Interlocked API, critical section, .NET lock statement, Monitor, Mutex, EventWaitHandle, Semaphore. Many thanks, Ovanes

    P.S. I found a great book: Concurrent Programming on Windows by Joe Duffy. It is written by one of the lead concurrency developers for the .NET Framework and is simply brilliant, with lots of explanations of how things work or were implemented.

  • NSURLConnection performance

    - by oksk
    Hi all, I'm currently using NSURLConnection to download some images in my app. Before this, I implemented the downloads with NSData (dataWithContentsOfURL:) on an NSThread, but I wanted to be able to cancel downloads in progress, so I changed to NSURLConnection. That introduced another problem: performance became much worse after the change. For example, downloading the images takes at least 5 seconds with the NSThread/NSData (async) approach, but more than 2 or 3 times as long with NSURLConnection (async)! Can I improve the performance? How?

  • Managed language for scientific computing software

    - by heisen
    Scientific computing is algorithm-intensive and can also be data-intensive. It often needs a lot of memory to run an analysis, and must release that memory before continuing with the next one. Sometimes it also uses a memory pool to recycle memory across analyses. Managed languages are interesting here because they allow the developer to concentrate on the application logic. Since the application might need to deal with huge datasets, performance is important too. But how can we control memory and performance with a managed language?
