Search Results

Search found 20904 results on 837 pages for 'disk performance'.

  • HD video editing system with TrueCrypt

    - by Rob
    I'm looking to do hi-def video editing and transcoding on an unencrypted standard partition, with TrueCrypt on the system partition for sensitive data. I'm aiming to keep certain data private but still have performance where needed. Goals:

    - Maximum, unimpacted performance for hi-def video editing; encryption of the video itself is not required
    - Encrypt the system partition, using TrueCrypt, for web/email privacy, etc. in the event of loss

    In other words, I want to selectively encrypt the hard drive - i.e. encrypt the system partition without impacting the maximum performance that would otherwise be available to me for hi-def/HD video editing. The thinking is to use an unencrypted partition for the video and set the video applications to point at it. Assuming they use that partition only for their workspace, and not the encrypted system partition, I should expect no performance hit. Would I be correct?

    I guess it might depend on the application: whether the app is hard-wired to always use the system partition for temporary storage during editing and transcoding, or whether it has to be installed on the C: system partition. So some real data on how various apps behave in this respect would be useful, e.g. Adobe, Cyberlink, Nero, etc.

    I have an HP laptop with an Intel i7 quad-core (8 threads) at 1.6 GHz (up to 2.8 GHz with turbo boost), 4 GB of RAM, a 7200 rpm SATA drive, and nvidia graphics. I've read the excellent posting about the general performance impact of TrueCrypt, but the benchmarks weren't specific enough for my needs, where I'm dealing with HD video and using a non-encrypted partition to maintain maximum performance.
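
    One quick experiment worth trying (a sketch, not app-specific guidance): many Windows programs honor the TEMP/TMP environment variables for scratch files, so pointing those at the unencrypted partition - assumed here to be V: - lets you check whether a given app's temporary I/O actually moves off the encrypted system partition:

        rem V:\temp is a hypothetical folder on the unencrypted video partition
        setx TEMP "V:\temp"
        setx TMP "V:\temp"
        rem then watch where the app writes during a transcode (e.g. with Process Monitor)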

  • Why is my rsync so slow?

    - by iblue
    My laptop and my workstation are both connected to a Gigabit switch. Both are running Linux. But when I copy files with rsync, it performs badly: I get about 22 MB/s. Shouldn't I theoretically get about 125 MB/s? What is the limiting factor here?

    EDIT: I conducted some experiments.

    Write performance on the laptop

    The laptop has an xfs filesystem with full disk encryption. It uses the aes-cbc-essiv:sha256 cipher mode with a 256-bit key length. Disk write performance is 58.8 MB/s.

        iblue@nerdpol:~$ LANG=C dd if=/dev/zero of=test.img bs=1M count=1024
        1073741824 bytes (1.1 GB) copied, 18.2735 s, 58.8 MB/s

    Read performance on the workstation

    The files I copied are on a software RAID-5 over 5 HDDs. On top of the RAID is LVM. The volume itself is encrypted with the same cipher. The workstation has an FX-8150 CPU with the native AES-NI instruction set, which speeds up encryption. Disk read performance is 256 MB/s (the cache was cold).

        iblue@raven:/mnt/bytemachine/imgs$ dd if=backup-1333796266.tar.bz2 of=/dev/null bs=1M
        10213172008 bytes (10 GB) copied, 39.8882 s, 256 MB/s

    Network performance

    I ran iperf between the two clients. Network performance is 939 Mbit/s.

        iblue@raven $ iperf -c 94.135.XXX
        ------------------------------------------------------------
        Client connecting to 94.135.XXX, TCP port 5001
        TCP window size: 23.2 KByte (default)
        ------------------------------------------------------------
        [  3] local 94.135.XXX port 59385 connected with 94.135.YYY port 5001
        [ ID] Interval       Transfer     Bandwidth
        [  3]  0.0-10.0 sec  1.09 GBytes   939 Mbits/sec
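
    Since the disks and the network each individually beat 22 MB/s, one common suspect is the ssh transport rsync uses by default (cipher and protocol overhead on the CPU). A quick way to test that theory - a sketch, reusing the hostnames and paths from above:

        # try a cheaper ssh cipher first:
        rsync -av -e "ssh -c aes128-ctr" raven:/mnt/bytemachine/imgs/backup-1333796266.tar.bz2 /tmp/
        # or bypass ssh entirely via an rsync daemon ("imgs" is a hypothetical module name):
        rsync -av rsync://raven/imgs/backup-1333796266.tar.bz2 /tmp/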

  • OAM OVD integration - error encountered during performance testing: "LDAP response read timed out, timeout used:2000ms"

    - by siddhartha_sinha
    While working on an OAM-OVD integration for one of my clients, I was involved in performance testing of the products, during which I encountered OAM authentication failures when talking to OVD under heavy load. The OAM logs revealed the following:

        oracle.security.am.common.policy.common.response.ResponseException: oracle.security.am.engines.common.identity.provider.exceptions.IdentityProviderException: OAMSSA-20012: Exception in getting user attributes for user : dummy_user1, idstore MyIdentityStore with exception javax.naming.NamingException: LDAP response read timed out, timeout used:2000ms.; remaining name 'ou=people,dc=oracle,dc=com'
        at oracle.security.am.common.policy.common.response.IdentityValueProvider.getUserAttribute(IdentityValueProvider.java:271)
        ...

    During the authentication and authorization process, OAM complains that the LDAP repository is taking too long to return user attributes. The default value is 2 seconds, as can be seen from the exception ("2000ms"). While troubleshooting the issue, I found that the LDAP read timeout can be increased in oam-config.xml. For reference, the attribute to add to the oam-config.xml file is:

        <Setting Name="LdapReadTimeout" Type="xsd:string">2000</Setting>

    However, increasing the timeout is not recommended unless absolutely necessary, and you should first ensure the back-end directory servers are working fine. Instead, I took the path of tuning OVD in the following manner:

    1) Navigate to the ORACLE_INSTANCE/config/OPMN/opmn folder and edit opmn.xml. Search for <data id="java-options" ...> and edit the contents of the file with the highlighted items:

        <category id="start-options">
          <data id="java-bin" value="$ORACLE_HOME/jdk/bin/java"/>
          <data id="java-options" value="-server -Xms1024m -Xmx1024m -Dvde.soTimeoutBackend=0 -Didm.oracle.home=$ORACLE_HOME -Dcommon.components.home=$ORACLE_HOME/../oracle_common -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/opt/bea/Middleware/asinst_1/diagnostics/logs/OVD/ovd1/ovdGClog.log -XX:+UseConcMarkSweepGC -Doracle.security.jps.config=$ORACLE_INSTANCE/config/JPS/jps-config-jse.xml"/>
          <data id="java-classpath" value="$ORACLE_HOME/ovd/jlib/vde.jar$:$ORACLE_HOME/jdbc/lib/ojdbc6.jar"/>
        </category>
        </module-data>
        <stop timeout="120"/>
        <ping interval="60"/>
        </process-type>

    When the system is busy, a ping from the Oracle Process Manager and Notification Server (OPMN) to Oracle Virtual Directory may fail. As a result, OPMN will restart Oracle Virtual Directory after 20 seconds (the default ping interval). To avoid this, consider increasing the ping interval to 60 seconds or more.

    2) Navigate to the ORACLE_INSTANCE/config/OVD/ovd1 folder, open the listeners.os_xml file, and perform the following changes:

    - Search for <ldap id="Ldap Endpoint" ...> and point the cursor to that line.
    - Change the threads count to 200.
    - Change anonymous bind to Deny.
    - Change workQueueCapacity to 8096.
    - Add a new parameter <useNIO> and set its value to false, viz: <useNIO>false</useNIO>

    Snippet:

        <ldap version="8" id="LDAP Endpoint">
          .......
          <socketOptions>
            <backlog>128</backlog>
            <reuseAddress>false</reuseAddress>
            <keepAlive>false</keepAlive>
            <tcpNoDelay>true</tcpNoDelay>
            <readTimeout>0</readTimeout>
          </socketOptions>
          <useNIO>false</useNIO>
        </ldap>

    Restart the OVD server. For more information on OVD tuning, refer to http://docs.oracle.com/cd/E25054_01/core.1111/e10108/ovd.htm.

    Please note: a few patches have also been released on the OAM side for performance tuning. I will provide the updates shortly!

  • Unlocking High Performance with Policy Administration Replacement

    - by helen.pitts(at)oracle.com
    It is clear the insurance industry is undergoing significant changes as it consolidates and prepares for growth. The increasing focus on customer centricity, enhanced and speedier product development capabilities, and compliance with regulatory changes has forced companies to rethink well-entrenched policy administration processes. In previous Oracle Insurance blogs I've highlighted industry research pointing to policy administration replacement as a top IT priority for carriers. It is predicted that by 2013, global IT spend on policy administration alone is likely to be almost 22 percent of total insurance IT spend.

    To achieve growth, insurers are adopting new pricing models, enhancing distribution reach, and quickly launching new products and services - all of which depend on agile and effective policy administration processes and technologies. Next month, speakers from Oracle Insurance and Capgemini Financial Services will discuss how insurers can competitively drive high performance through policy administration replacement during a free, one-hour webcast hosted by LOMA. Roger Soppe, Oracle senior director, Insurance Strategy, together with Capgemini's Lars Ernsting, leader, Life & Pensions COE, and Scott Mampre, vice president, Insurance, will be the speakers. Specifically, they'll be highlighting:

    - How replacing a legacy policy administration system with a modern, flexible platform optimizes IT and operations costs, creates consistent processes, and eliminates resource redundancies
    - How selecting the right partner, with the best blend of technology, operational, and consulting capabilities, is an important prerequisite to unlocking high performance from policy administration transformation and achieving product, operational, and cost leadership
    - The value of outsourcing closed-block operations

    We look forward to your participation on Thursday, July 14, 11:00 a.m. ET. Please register now.

    Helen Pitts is senior product marketing manager for Oracle Insurance's life and annuities solutions.

  • GLSL Shader Texture Performance

    - by Austin
    I currently have a project that renders OpenGL video using a vertex and fragment shader. The shaders work fine as-is, but in trying to add texturing, I am running into performance issues and can't figure out why. Before adding texturing, my program ran just fine and loaded my CPU between 0% and 4%. When adding texturing (specifically textures AND color - noted by the comment below), my CPU is 100% loaded. The only code I have added is the relevant texturing code to the shader, and the glBindTexture() calls to the rendering code. Here are my shaders and the relevant rendering code.

    Vertex shader:

        #version 150

        uniform mat4 mvMatrix;
        uniform mat4 mvpMatrix;
        uniform mat3 normalMatrix;
        uniform vec4 lightPosition;
        uniform float diffuseValue;

        layout(location = 0) in vec3 vertex;
        layout(location = 1) in vec3 color;
        layout(location = 2) in vec3 normal;
        layout(location = 3) in vec2 texCoord;

        smooth out VertData {
            vec3 color;
            vec3 normal;
            vec3 toLight;
            float diffuseValue;
            vec2 texCoord;
        } VertOut;

        void main(void)
        {
            gl_Position = mvpMatrix * vec4(vertex, 1.0);
            VertOut.normal = normalize(normalMatrix * normal);
            VertOut.toLight = normalize(vec3(mvMatrix * lightPosition - gl_Position));
            VertOut.color = color;
            VertOut.diffuseValue = diffuseValue;
            VertOut.texCoord = texCoord;
        }

    Fragment shader:

        #version 150

        smooth in VertData {
            vec3 color;
            vec3 normal;
            vec3 toLight;
            float diffuseValue;
            vec2 texCoord;
        } VertIn;

        uniform sampler2D tex;

        layout(location = 0) out vec3 colorOut;

        void main(void)
        {
            float diffuseComp = max(dot(normalize(VertIn.normal), normalize(VertIn.toLight)), 0.0);
            vec4 color = texture2D(tex, VertIn.texCoord);
            colorOut = color.rgb * diffuseComp * VertIn.diffuseValue + color.rgb * (1 - VertIn.diffuseValue);
            // FOLLOWING LINE CAUSES PERFORMANCE ISSUES
            colorOut *= VertIn.color;
        }

    Relevant rendering code:

        // 3 textures have been successfully pre-loaded, and can be used
        // texture[0] is a 1x1 white texture to effectively turn off texturing
        glUseProgram(program);

        // Draw squares
        glBindTexture(GL_TEXTURE_2D, texture[1]);
        // Set attributes, uniforms, etc
        glDrawArrays(GL_QUADS, 0, 6*4);

        // Draw triangles
        glBindTexture(GL_TEXTURE_2D, texture[0]);
        // Set attributes, uniforms, etc
        glDrawArrays(GL_TRIANGLES, 0, 3*4);

        // Draw reference planes
        glBindTexture(GL_TEXTURE_2D, texture[0]);
        // Set attributes, uniforms, etc
        glDrawArrays(GL_LINES, 0, 4*81*2);

        // Draw terrain
        glBindTexture(GL_TEXTURE_2D, texture[2]);
        // Set attributes, uniforms, etc
        glDrawArrays(GL_TRIANGLES, 0, 501*501*6);

        // Release
        glBindTexture(GL_TEXTURE_2D, 0);
        glUseProgram(0);

    Any help is greatly appreciated!

  • Cost Comparison Hard Disk Drive to Solid State Drive on Price per Gigabyte - dispelling a myth!

    - by tonyrogerson
    It is often said that hard disk drive storage is significantly cheaper per GiByte than solid state devices - this is wholly inaccurate within the database space. People need to look at the cost of the complete solution, not just a single component part in isolation from what is really required to meet the business requirement.

    Buying a single Hitachi Ultrastar 600GB 3.5" SAS 15Krpm hard disk drive will cost approximately £239.60 (http://scan.co.uk, 22nd March 2012), compared to an OCZ 600GB Z-Drive R4 CM84 PCIe costing £2,316.54 (http://scan.co.uk, 22nd March 2012). I've not included the FusionIO ioDrive because there is no public pricing available for it - something I never understand; personally, when companies do this I immediately wonder what they are hiding. Luckily, in FusionIO's case the product is proven, though it is expensive compared to OCZ's enterprise offerings.

    On the face of it, the single 15Krpm hard disk has a price per GB of £0.39, the SSD £3.86; this is what you will see in the press and this is what sales people will use in comparing the two technologies - do not be fooled by this bullshit, people!

    What is the requirement?

    The requirement is a database with a static size of 400GB, kept static through archiving so that growth and trim balance the database size. The client requires resilience. There will be several hundred call centre staff querying the database; queries will read a small amount of data, but there will be no hot spot in the data, so the randomness will come across the entire 400GB of the database. Estimates predict that approximately 4,000 IOps will be required at peak times, and because it's a call centre system the IO latency is important and must remain below 5ms per IO. The balance between read and write is 70% read, 30% write.

    The requirement is now defined, and we have three of the most important pieces of the puzzle: space required, estimated IOps, and maximum latency per IO.

    Something to consider with regard to SQL Server: write activity requires synchronous IO to the storage media, specifically for the transaction log; that means the write thread will wait until the IO is completed and hardened off before the thread can continue execution. The requirement states that 30% of the system activity will be writes, so we can expect a high amount of synchronous activity.

    The hardware solution needs to be defined. There are two possible solutions, hard disk or solid state based; the real question now is how many hard disks are required to achieve the IO throughput, the latency, and the resilience - ditto for the solid state.

    Hard drive solution

    In a test on an HP DL380 with a P410i controller, using IOMeter against a single 15Krpm 146GB SAS drive, with a transfer size of 8KiB against a 40GiB file on a freshly formatted disk, where the partition is the only partition on the disk (thus the 40GiB file is on the outer edge of the drive, so more sectors can be read before head movement is required): for 100% sequential IO at a queue depth of 16 with 8 worker threads, 43,537 IOps at an average latency of 2.93ms (340 MiB/s); for 100% random IO at the same queue depth and worker threads, 3,733 IOps at an average latency of 34.06ms (34 MiB/s).

    The same test was done on the same disk, but with a 130GiB test file: for 100% sequential IO at a queue depth of 16 with 8 worker threads, 43,537 IOps at an average latency of 2.93ms (340 MiB/s); for 100% random IO at the same queue depth and worker threads, 528 IOps at an average latency of 217.49ms (4 MiB/s).

    From the results it is clear that random performance gets worse as the disk fills up - I'm currently writing an article on short stroking which will cover this in detail. Given the workload is random in nature, the random performance of the single drive when only 40 GiB of the 146 GB is used gets near the IOps required, but the latency is way out.

    Luckily, I have tested 6 x 15Krpm 146GB SAS drives in a RAID 0 using the same test methodology. For the same test above on a 130 GiB file, the performance boost is near linear: for each drive added, throughput goes up by 5 MiB/sec, IOps by 700, and latency reduces by nearly 50% (172 ms, 94 ms, 65 ms, 47 ms, 37 ms, 30 ms). This is because the same 130GiB is spread out more as you add drives (130 / 1, 130 / 2, 130 / 3, etc.), so implicit short stroking is occurring: there is less file on each drive, so less head movement is required. The best latency is still 30 ms, but we now have the IOps required - though that's on a 130GiB file and not the 400GiB we need.

    A reality check here: the drive randomness is more likely to be 50/50 and not a full 100%, but the above has highlighted the effect randomness has on the drive, and the more a drive fills with data, the worse the effect.

    For argument's sake, let us assume that for the given workload we need 8 disks to do the job; for resilience reasons we will need 16, because we need to RAID 1+0 them in order to get both the throughput and the resilience (RAID 5 would degrade performance). Cost for hard drives: 16 x £239.60 = £3,833.60.

    For the hard drives we will also need disk controllers and a separate external disk array, because the likelihood is that the server itself won't take the drives. A quick spec from DELL for a PowerVault MD1220, which gives dual pathing with 16 x 146GB 15Krpm 2.5" disks, is priced at £7,438.00 - and it's probably more once we add the two controller cards to sit in the server, racking, etc. The minimum cost, taking the DELL quote as an example, is therefore:

        {Cost of Hardware} / {Storage Required}
        £7,438.60 / 400 = £18.59 per GB

    £18.59 per GiB is a far cry from the £0.39 we had been told by the salesman and the myth. Yes, the storage array is composed of 16 x 146GB disks in RAID 10 (therefore 8 usable), giving an effective usable storage availability of 1168GB, but the actual storage requirement is only 400GB, and the extra disks had to be purchased to get the IOps up.

    Solid State Drive solution

    A single card significantly exceeds the IOps and latency required; for resilience, two will be required:

        (£2,316.54 * 2) / 400 = £11.58 per GB

    With the SSD solution, only two PCIe sockets are required: no external disk units, no additional controllers, no redundant controllers, etc.

    Conclusion

    I hope that by showing you an example, the myth that hard disk drives are cheaper per GiB than solid state has now been dispelled: £11.58 per GB for SSD compared to £18.59 for hard disk. I've not even touched on the running costs - compare the cost of running 16 hard disks; that's a lot of heat and power compared to two PCIe cards!

    Just a quick note: I've left a fair amount of information out due to this being a blog! If in doubt, email me :) I'll also deal with the myth that SSDs wear out at a later date as well - that's just way overdone; yes, 5 years ago, but now - no.
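
    For readers who want to reproduce this kind of test on Linux rather than with IOMeter, a roughly equivalent fio run is sketched below (the mount point, file size, and runtime are assumptions, not part of the original methodology):

        # 100% random 8KiB reads at queue depth 16 with 8 workers, against a 130GiB file
        fio --name=randread --filename=/mnt/test/fio.dat --size=130g \
            --rw=randread --bs=8k --ioengine=libaio --direct=1 \
            --iodepth=16 --numjobs=8 --runtime=60 --time_based --group_reporting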

  • Backup VM: "Copy virtual disk xxx.vmdk - The virtual disk is either corrupted or not a supported format"

    - by boiteavinc
    Hi, I'm French. I'm a student and I use ESXi 4 for my studies. When I try to back up my VMs with ghettoVCBg2.pl and the vSphere Management Assistant, I get the following error message in the vSphere Client: "Copy virtual disk xxx.vmdk. The virtual disk is either corrupted or not a supported format". I have no error in the VCB log:

        03-23-2010 08:31:56 -- debug: Main: Login by vi-fastpass to: esxi
        03-23-2010 08:31:56 -- debug: copyTask: Task START
        03-23-2010 08:31:56 -- debug: copyTask: waiting for next job and sleep ...
        03-23-2010 08:32:00 -- info: Initiate backup for AD_DNS_DHCP found on esxi
        03-23-2010 08:32:09 -- debug: AD_DNS_DHCP original powerState: poweredOn
        03-23-2010 08:32:09 -- debug: Creating Snapshot "ghettoVCBg2-snapshot-2010-03-23" for AD_DNS_DHCP
        03-23-2010 08:33:19 -- info: AD_DNS_DHCP has 1 VMDK(s)
        03-23-2010 08:33:19 -- debug: backupVMDK: Backing up "Raptor1 AD_DNS_DHCP/AD_DNS_DHCP.vmdk" to "Backup_VM VM/AD_DNS_DHCP/AD_DNS_DHCP-2010-03-23/$
        03-23-2010 08:33:19 -- debug: backupVMDK: Signal copyThread to start
        03-23-2010 08:33:19 -- debug: backupVMDK: Backup progress: Elapsed time 0 min
        03-23-2010 08:33:19 -- debug: copyTask: Wake up and follow the white rabbit, with status: doCopy
        03-23-2010 08:33:19 -- debug: CopyThread: Start backing up VMDK(s) ...
        03-23-2010 08:33:25 -- debug: copyTask: send copySuccess message ...
        03-23-2010 08:33:25 -- debug: copyTask: waiting for next job and sleep ...
        03-23-2010 08:34:20 -- debug: backupVMDK: Successfully completed backup for Raptor1 AD_DNS_DHCP/AD_DNS_DHCP.vmdk Elapsed time: 1 min
        03-23-2010 08:34:22 -- debug: Removing Snapshot "ghettoVCBg2-snapshot-2010-03-23" for AD_DNS_DHCP
        03-23-2010 08:34:24 -- debug: checkVMBackupRotation: Starting ...
        03-23-2010 08:34:26 -- debug: Purging Backup_VM VM/AD_DNS_DHCP/AD_DNS_DHCP-2010-03-23--1 due to rotation max
        03-23-2010 08:34:28 -- info: Backup completed for AD_DNS_DHCP!
        03-23-2010 08:34:28 -- debug: Main: Disconnect from: esxi
        03-23-2010 08:34:28 -- debug: Main: Calling final clean up
        03-23-2010 08:34:28 -- debug: cleanUP: Thread clean up starting ...
        03-23-2010 08:34:28 -- debug: cleanUp: Send exit to copyThread
        03-23-2010 08:34:28 -- debug: copyTask: Wake up and follow the white rabbit, with status: exit
        03-23-2010 08:34:28 -- debug: copyTask: die ...
        03-23-2010 08:34:28 -- debug: cleanUp: Join passed
        03-23-2010 08:34:28 -- info: ============================== ghettoVCBg2 LOG END ==============================

    My ghettoVCB conf file is:

        VM_BACKUP_DATASTORE = "Backup_VM"
        VM_BACKUP_DIRECTORY = "VM"
        VM_BACKUP_ROTATION_COUNT = "3"
        DISK_BACKUP_FORMAT = "thin"
        ADAPTER_FORMAT = "lsilogic"
        POWER_VM_DOWN_BEFORE_BACKUP = "0"
        VM_SNAPSHOT_MEMORY = "1"
        VM_SNAPSHOT_QUIESCE = "1"
        LOG_LEVEL = "info"
        VM_VMDK_FILES = "all"

    I tried several DISK_BACKUP_FORMAT settings and get the same error. Despite the error, I do get files on the NFS share. But when I try to open the vmx file with VMware Workstation, I get this error:

        Cannot open the disk 'G:\Backup ESX\VM\AD_DNS_DHCP\AD_DNS_DHCP-2010-03-23--1\AD_DNS_DHCP.vmdk' or one of the snapshot disks it depends on.
        Reason: The called function cannot be performed on partial chains. Please open the parent virtual disk.

    I have no snapshots on my VM on ESXi. Can you help me?

  • How do I improve my incremental-backup performance?

    - by Alistair Bell
    I'm currently using the traditional rsync + cp -al method to create incremental/snapshot backups of our server tree. The backups go onto a pair of eight-disk towers connected to the backup machine (a Sandy Bridge machine with 16 GB of RAM, running CentOS 5.5) via four eSATA connections (four disks per connection). Each disk is a regular 2 TB disk, so we have 32 TB of disk space connected to the backup machine. We're backing up about 20 TB of data on the servers with this.

    The problem is that each daily backup is taking more than 24 hours, and the real time-killer isn't the actual rsync, but the time it takes to perform a cp -al of the tree locally on the backup machine. It's taking more than 12 hours just to make the shadow copy of the tree, and as far as I can tell the performance bottleneck is the disk (top shows the cp using a lot of RAM but not a lot of CPU, and mostly in uninterruptible-sleep state).

    We have the server data split into four major volumes (and a few minor ones), and each of these backups runs in parallel (with some offsets in the cron to try to get some disks' cp done first). There are two volumes on the backup drive, both striped LVM volumes of 16 TB each.

    So obviously I need to improve the performance, because it's unusable as it stands. The first question is: when CentOS 6 comes out, with support for btrfs, will making snapshots of subvolumes with btrfs substantially increase this performance? The second is: is there a way, with ext3 or something else supported in CentOS 5 or 6, to 'encourage' it to put the directories/inodes in one part of a volume (which could happen to be the part that's on an SSD, via LVM) and the files in another? That would presumably solve the problem, but I don't know of ways to hint ext3 like that.
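
    One option worth knowing about (a sketch, with paths invented for illustration): rsync's --link-dest flag hard-links unchanged files against the previous snapshot during the transfer itself, which would eliminate the separate cp -al pass entirely:

        # /backup/daily.1 is yesterday's snapshot; daily.0 is built fresh.
        # Files unchanged since daily.1 become hard links instead of copies.
        rsync -a --delete --link-dest=/backup/daily.1 server:/srv/tree/ /backup/daily.0/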

  • Free Mac whole disk encryption

    - by stevekuo
    Are there any free solutions for encrypting the entire boot disk on a Mac? I'm aware of PGP's Whole Disk Encryption, but it's a bit steep at $149. TrueCrypt's System Encryption seems to only work with Windows.

  • Disk drives disappear in Windows Explorer

    - by Stan
    OS: Windows XP SP3

    When pressing Win+E or opening Windows Explorer, under My Computer there are usually several disk drives listed. But somehow they have just disappeared. If I type c:\ in the address bar, the disk re-appears in the tree, though. How do I get them back? Thanks.

  • Disk usage treemap software for headless Linux

    - by CyberShadow
    There are some programs which can display used disk space using a treemap, such as WinDirStat for Windows and KDirStat for KDE/Linux. I'm looking for something similar, but for a headless Linux box. Alternatively, what are other good ways to visualize used disk space with just SSH access?
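
    For what it's worth, two common SSH-friendly approaches (suggestions, not an exhaustive list):

        # ncdu: an ncurses disk-usage browser that works well over SSH
        ncdu -x /
        # or a one-off summary using only standard tools:
        du -xh --max-depth=2 / | sort -rh | head -25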

  • Improving RDP performance

    - by blade
    Hi,

    How can I improve RDP performance? I'm on an 8 Mb line, and I have disabled all the fancy features like visual styles. One page said that setting a lower connection speed would also increase performance - is there any truth in that? Also, there was an ad here for an application/technology which claims to increase RDP performance by 20x. Has anyone used it? Thanks

  • Poor WAMP performance when using SMB/UNC Paths

    - by Brad
    I've configured a WAMP (Windows, Apache, MySQL and PHP) stack which, when configured to use local storage, takes 3-4 seconds to load. When I use an SMB/UNC share, it takes 12-15 seconds to load. Here are the relevant lines in my httpd.conf:

        #DocumentRoot "//10.99.108.11/test_htdocs"
        #<Directory "//10.99.108.11/test_htdocs">
        #DocumentRoot "C:/www"
        #<Directory "C:/www">

    Is there performance tuning I can do on Windows Server 2008 R2 to improve performance, or is there another way to improve performance when using SMB?
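
    One Apache-side tweak commonly suggested when the DocumentRoot lives on a network filesystem (a sketch - verify against your Apache version's documentation) is to disable memory-mapping and sendfile, both of which can behave poorly over SMB:

        # httpd.conf - often recommended for network-mounted document roots
        EnableMMAP Off
        EnableSendfile Off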

  • Installing Windows 7 upgrade version on a clean disk

    - by BobMarley
    Is it possible to install the much cheaper Windows 7 upgrade version on a clean disk? What information will I need?

    1) Will the Windows 7 installer ask me for my XP license key? Or
    2) Will the Windows 7 installer only run if it can detect an existing XP installation?

    Furthermore, what will happen if my disk crashes and I need to reinstall in the future? Will I need my XP license key again?

  • Hard disk number in boot.ini?

    - by MA1
    Hi all,

    I need help clearing up a confusion. Suppose we have two hard disks. Referring to boot.ini, in the case of the MULTI and SCSI syntaxes, which parameter exactly tells us the hard disk number?

    Regards,
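
    For reference, a sketch of the two ARC path forms (the entries are illustrative, not from a real machine): with multi() syntax, rdisk() selects the physical disk (0-based) and disk() is always 0; with scsi() syntax, disk() carries the SCSI ID of the target disk and rdisk() holds the LUN.

        [operating systems]
        ; multi() syntax: rdisk(1) = the second disk on the controller
        multi(0)disk(0)rdisk(1)partition(1)\WINDOWS="Windows XP (second disk)" /fastdetect
        ; scsi() syntax: disk(1) = SCSI ID 1, rdisk(0) = LUN 0
        scsi(0)disk(1)rdisk(0)partition(1)\WINDOWS="Windows XP (SCSI ID 1)" /fastdetect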

  • Hard disk with trustworthy SMART support

    - by Paggas
    Which hard disk drive do you suggest with trustworthy SMART diagnostics? That is, a hard disk that can truthfully report sector reallocations and other pre-failure indicators. I'm asking this because I have seen quite a few hard disks with SMART support fail with no warning in the SMART diagnostics, so a hard drive that can report such problems with some degree of reliability would be much appreciated :)
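
    Whatever drive you choose, the pre-failure attributes are worth polling yourself with smartmontools rather than waiting for the drive to volunteer them; a minimal sketch (the device node is an assumption):

        # attribute 5 (Reallocated_Sector_Ct) and 197 (Current_Pending_Sector)
        # are the classic pre-failure indicators to watch
        smartctl -A /dev/sda
        # run a long self-test, then read the result later:
        smartctl -t long /dev/sda
        smartctl -l selftest /dev/sda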

  • When to use a RAM-Disk?

    - by Ice
    Hi,

    I know that RAM disks are fast - faster than any physical disk - but they lose their contents on a shutdown of the operating system, and their capacity is limited by the available RAM. Is there a useful application for one on a new 64-bit Windows 2008 server?

    Peace,
    Ice

  • RunDLL error when opening hard disk drive

    - by stanleyenriquez
    I can't open my hard disk drive. The error box says: "RunDLL: There was a problem starting ~$WCFLPM.FAT32. The specified module could not be found". My HDD had a virus before; now the hard disk shows a single shortcut, and that shortcut contains all my files. I don't want to format it just yet because it contains quite a lot of files. I tried troubleshooting it, but none of the solutions on the internet helped.
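
    This looks like the common "shortcut virus" pattern, where the real folders are marked hidden/system and replaced by a shortcut. A frequently suggested first step - a sketch, with E: as an assumed drive letter - is to clear those attributes so the original files become visible again:

        rem clear the hidden/read-only/system flags recursively on the affected drive
        attrib -h -r -s /s /d E:\*.*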

  • Can't export performance data from VirtualCenter

    - by kyrathy
    First, let me describe my environment: my VirtualCenter is installed on Windows 2008, and the database that VC uses is SQL 2008.

    What I really want to ask is this: when I use the vSphere Client to connect to VC, I have a problem - the performance chart can only show "realtime", whether I just want to view the chart or export the performance log. When I manually try to export performance data and select a time range of 1 hour, 1 day, 1 month, or from a to b, it shows "No performance data to report for selected objects". Only selecting realtime exports data normally.

    Before I installed vSphere 4, I installed SQL 2008 and used the schema from the install CD (I followed the steps to create the SQL DB for vSphere). Could anybody help me solve this problem? If you need any information, just tell me and I will provide it. Thanks a lot.

  • Mac external hard disk for "Software Update"

    - by Pietro
    When I run Software Update, the files are downloaded onto my MacBook's internal hard disk. How can I set a different hard disk as the default? I assume the files related to the software update are compressed packages that have to be saved, opened and decompressed. I would like to use the internal HD just to update Mac OS, without storing any temporary files on it. Thank you!

    Pietro

    MacBook Pro 2009, 256 GB SSD, Mac OS X 10.6.4

  • Blaze error: No disk in drive

    - by jruday
    I'm running Blaze on a Windows 7 64-bit PC and getting the following error: "There is no disk in the drive. Please insert a disk into drive \Device\Harddisk2\DR2." You can view the error message here: http://screencast.com/t/ZjQxNDc0NW. Any suggestions would be appreciated. Thanks.

  • external disk suddenly unmounting

    - by hasen j
    Platform: Ubuntu 9.10
    Disk brand/model: WD My Book

    The external hard disk suddenly unmounts after a while. I suspect it's due to the drive "sleeping" to save power. I don't recall the problem occurring before the upgrade to Karmic. How can this be fixed?
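
    If power-saving spindown is the culprit, one thing worth trying (a sketch; the device node is an assumption, and not every USB enclosure passes these commands through) is to disable the drive's standby timer with hdparm:

        # check the drive's current power state
        sudo hdparm -C /dev/sdb
        # disable the standby (spindown) timeout
        sudo hdparm -S 0 /dev/sdb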

  • squid cache disk configuration

    - by Gogonez
    I'm just wondering how much the drive configuration will affect squid cache performance:

    - What kind of drive configuration is fast enough for squid?
    - Is it true that a block-level parity striped RAID is faster than a byte-level one?
    - Will a mirrored drive configuration slow down squid's cache writes?
    - How much swap space does squid really need to store its cache (in reverse mode) for 200 MB of web documents?
    - What kind of benchmark should I run to analyze squid disk performance?
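
    On the swap-space question, a minimal squid.conf sketch (the size and path are assumptions, not recommendations):

        # cache_dir <type> <path> <size-MB> <L1-dirs> <L2-dirs>
        # aufs performs cache I/O in separate threads, so disk waits don't block squid
        cache_dir aufs /var/spool/squid 1024 16 256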

  • IE on Windows 7 not saving files to disk [closed]

    - by Gemini
    I am running Windows 7 build 7100. Since I restored this system, I have been facing peculiar issues, all effectively rendering the system unusable. The biggest peeve: any file downloaded from IE is never saved to disk. IE shows the entire download progress bar, and at the end of the download no file is saved anywhere on the disk!
