Search Results

Search found 25442 results on 1018 pages for 'disk size'.

  • How to make a disk image and restore from it later?

    - by torbengb
    I'm a new Linux user. I've reinstalled my Wubi from scratch at least ten times in the last few weeks, because while getting the system up and running (drivers, resolution, etc.) I've broken something (X, grub, unknowns) and couldn't get it working again. Especially for a newbie like me, it's easier (and much faster) to just reinstall the whole shebang than to troubleshoot several layers of failed "fixing" attempts. Coming from Windows, I expect that there is some "disk image" utility that I can run to make a snapshot of my Linux install (and of the boot partition!!) before I meddle with stuff. Then, after I've foobar'ed my machine, I would somehow restore my machine back to that working snapshot. What's the Linux equivalent of Windows disk imagers like Acronis True Image or Norton Ghost?
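
    The usual Linux answers to this are dd, partimage, or Clonezilla. As a rough illustration of what such a sector-level imager boils down to, here is a minimal Python sketch; the device and image paths are only examples, it must run as root, and the source disk should be unmounted (e.g. booted from a live CD):

    # Sketch of sector-level imaging: copy the raw device byte-for-byte
    # to a file, and write it back to restore. Same idea as dd.
    import shutil

    CHUNK = 4 * 1024 * 1024  # copy in 4 MiB chunks

    def make_image(device="/dev/sda", image="/mnt/backup/sda.img"):
        """Read the raw device end-to-end and save it as an image file."""
        with open(device, "rb") as src, open(image, "wb") as dst:
            shutil.copyfileobj(src, dst, CHUNK)

    def restore_image(image="/mnt/backup/sda.img", device="/dev/sda"):
        """Write the saved image back over the raw device (destructive!)."""
        with open(image, "rb") as src, open(device, "wb") as dst:
            shutil.copyfileobj(src, dst, CHUNK)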

  • Possibility of recovering files from a dd zero-filled hard disk

    - by unknownthreat
    I have "zero filled" (completely wiped) an external hard disk using dd, and from what I have heard, people say you should zero-fill at least 3 times to be sure that the data is really wiped and no one can recover anything. So I decided to scan the disk once more after I had zero-filled it. I was expecting the disk to still have some random binary left. It turned out that it had only a few sequential bytes at the very beginning, probably the file system structure and other header stuff. Other than that, it's all zeros and nothing else. So if we have to recover any file from a zero-filled disk, ...how? From what I've heard, even if you zero-fill the disk, there should still be some data left. ...or could dd really completely annihilate all data?
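
    For what it's worth, the scan described above can be reproduced with a short script. Here is a minimal Python sketch that walks the raw device and reports any chunk still containing non-zero bytes (the device path is an example; reading it needs root):

    import sys

    CHUNK = 4 * 1024 * 1024  # scan in 4 MiB chunks

    def find_nonzero(device):
        """Print the offsets of chunks that still contain non-zero bytes."""
        offset = 0
        with open(device, "rb") as disk:
            while True:
                chunk = disk.read(CHUNK)
                if not chunk:
                    break
                if chunk.count(0) != len(chunk):  # anything besides zeros?
                    print("non-zero data near offset", offset)
                offset += len(chunk)

    if __name__ == "__main__":
        find_nonzero(sys.argv[1] if len(sys.argv) > 1 else "/dev/sdb")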

  • Firefox: How to change the font size of items in the bookmarks sidebar

    - by TheClamkinator
    In Windows. I know I can open Desktop > Properties > Appearance > Advanced and figure out which item determines it (probably Menu), but I don't want other things changed - just the bookmarks. This is because I have so many that I want more of them to display. I did see the question on changing the font size of items in folders, on the bookmarks toolbar, in sub-folders, etc., but I'm talking about the sidebar in Firefox. I don't have Google Chrome and I am not a developer.

  • Calculating the Size (in Bytes and MB) of an Oracle Coherence Cache

    - by Ricardo Ferreira
    Data grids are becoming very popular these days, since this type of technology is evolving very fast, with some cool leading products like Oracle Coherence. Once in a while, developers need a programmatic way to calculate the total size of a specific cache residing in the data grid. In this post, I will show how to accomplish this using the Oracle Coherence API. This example has been tested with the 3.6, 3.7 and 3.7.1 versions of Oracle Coherence. To start the development of this example, you need to create a POJO ("Plain Old Java Object") that represents a data structure that will hold user data. This data structure will also carry some internal "fat", as I call it, which should increase considerably the size of each instance in heap memory. Create a Java class named "Person" as shown in the listing below.

    package com.oracle.coherence.domain;

    import java.io.Serializable;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Random;

    @SuppressWarnings("serial")
    public class Person implements Serializable {

        private String firstName;
        private String lastName;
        private List<Object> fat;
        private String email;

        public Person() {
            generateFat();
        }

        public Person(String firstName, String lastName, String email) {
            setFirstName(firstName);
            setLastName(lastName);
            setEmail(email);
            generateFat();
        }

        // Fills the "fat" list with a random number of maps, each holding
        // random entries, so every instance has a considerable heap footprint...
        private void generateFat() {
            fat = new ArrayList<Object>();
            Random random = new Random();
            for (int i = 0; i < random.nextInt(18000); i++) {
                HashMap<Long, Double> internalFat = new HashMap<Long, Double>();
                for (int j = 0; j < random.nextInt(10000); j++) {
                    internalFat.put(random.nextLong(), random.nextDouble());
                }
                fat.add(internalFat);
            }
        }

        public String getFirstName() {
            return firstName;
        }

        public void setFirstName(String firstName) {
            this.firstName = firstName;
        }

        public String getLastName() {
            return lastName;
        }

        public void setLastName(String lastName) {
            this.lastName = lastName;
        }

        public String getEmail() {
            return email;
        }

        public void setEmail(String email) {
            this.email = email;
        }
    }

    Now let's create a Java program that will start a data grid in Coherence and create a cache named "People" that will hold Person instances with sequential integer keys. Each person created in this program triggers the custom constructor of the Person class, which instantiates the internal fat (the random amount of data generated to increase the size of the object). Create a Java class named "CreatePeopleCacheAndPopulateWithData" as shown in the listing below.

    package com.oracle.coherence.demo;

    import com.oracle.coherence.domain.Person;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class CreatePeopleCacheAndPopulateWithData {

        public static void main(String[] args) {

            // Asks Coherence for a new cache named "People"...
            NamedCache people = CacheFactory.getCache("People");

            // Creates three people that will be put into the data grid. Each person
            // generates an internal fat that should increase its size in terms of bytes...
            Person pessoa1 = new Person("Ricardo", "Ferreira", "[email protected]");
            Person pessoa2 = new Person("Vitor", "Ferreira", "[email protected]");
            Person pessoa3 = new Person("Vivian", "Ferreira", "[email protected]");

            // Inserts the three people into the data grid...
            people.put(1, pessoa1);
            people.put(2, pessoa2);
            people.put(3, pessoa3);

            // Waits for 5 minutes until the user runs the Java program
            // that calculates the total size of the people cache...
            try {
                System.out.println("---> Waiting for 5 minutes for the cache size calculation...");
                Thread.sleep(300000);
            } catch (InterruptedException ie) {
                ie.printStackTrace();
            }
        }
    }

    Finally, let's create a Java program that, using the Coherence API and JMX, will calculate the total size of each cache that the data grid is currently managing. The approach used in this example retrieves every cache the data grid currently manages; if you are interested in one specific cache, the same approach can be used, you only need to filter which cache to look for. Create a Java class named "CalculateTheSizeOfPeopleCache" as shown in the listing below.

    package com.oracle.coherence.demo;

    import java.text.DecimalFormat;
    import java.util.Map;
    import java.util.Set;
    import java.util.TreeMap;

    import javax.management.MBeanServer;
    import javax.management.MBeanServerFactory;
    import javax.management.ObjectName;

    import com.tangosol.net.CacheFactory;

    public class CalculateTheSizeOfPeopleCache {

        @SuppressWarnings({ "unchecked", "rawtypes" })
        private void run() throws Exception {

            // Enable JMX support in this Coherence data grid session...
            System.setProperty("tangosol.coherence.management", "all");

            // Create a sample cache just to access the data grid...
            CacheFactory.getCache(MBeanServerFactory.class.getName());

            // Gets the JMX server from the Coherence data grid...
            MBeanServer jmxServer = getJMXServer();

            // Creates an internal data structure that will maintain
            // the statistics for each cache in the data grid...
            Map cacheList = new TreeMap();
            Set jmxObjectList = jmxServer.queryNames(new ObjectName("Coherence:type=Cache,*"), null);
            for (Object jmxObject : jmxObjectList) {
                ObjectName jmxObjectName = (ObjectName) jmxObject;
                String cacheName = jmxObjectName.getKeyProperty("name");
                if (cacheName.equals(MBeanServerFactory.class.getName())) {
                    continue;
                } else {
                    cacheList.put(cacheName, new Statistics(cacheName));
                }
            }

            // Updates the internal data structure with statistic data
            // retrieved from caches inside the in-memory data grid...
            Set<String> cacheNames = cacheList.keySet();
            for (String cacheName : cacheNames) {
                Set resultSet = jmxServer.queryNames(
                    new ObjectName("Coherence:type=Cache,name=" + cacheName + ",*"), null);
                for (Object resultSetRef : resultSet) {
                    ObjectName objectName = (ObjectName) resultSetRef;
                    if (objectName.getKeyProperty("tier").equals("back")) {
                        int unit = (Integer) jmxServer.getAttribute(objectName, "Units");
                        int size = (Integer) jmxServer.getAttribute(objectName, "Size");
                        Statistics statistics = (Statistics) cacheList.get(cacheName);
                        statistics.incrementUnit(unit);
                        statistics.incrementSize(size);
                        cacheList.put(cacheName, statistics);
                    }
                }
            }

            // Finally... prints the objects from the internal data
            // structure that represent the statistics of the caches...
            cacheNames = cacheList.keySet();
            for (String cacheName : cacheNames) {
                Statistics estatisticas = (Statistics) cacheList.get(cacheName);
                System.out.println(estatisticas);
            }
        }

        public MBeanServer getJMXServer() {
            MBeanServer jmxServer = null;
            for (Object jmxServerRef : MBeanServerFactory.findMBeanServer(null)) {
                jmxServer = (MBeanServer) jmxServerRef;
                if (jmxServer.getDefaultDomain().equals(DEFAULT_DOMAIN) || DEFAULT_DOMAIN.length() == 0) {
                    break;
                }
                jmxServer = null;
            }
            if (jmxServer == null) {
                jmxServer = MBeanServerFactory.createMBeanServer(DEFAULT_DOMAIN);
            }
            return jmxServer;
        }

        private class Statistics {

            private long unit;
            private long size;
            private String cacheName;

            public Statistics(String cacheName) {
                this.cacheName = cacheName;
            }

            public void incrementUnit(long unit) {
                this.unit += unit;
            }

            public void incrementSize(long size) {
                this.size += size;
            }

            public long getUnit() {
                return unit;
            }

            public long getSize() {
                return size;
            }

            public double getUnitInMB() {
                return unit / (1024.0 * 1024.0);
            }

            public double getAverageSize() {
                return size == 0 ? 0 : unit / size;
            }

            public String toString() {
                StringBuffer sb = new StringBuffer();
                sb.append("\nCache Statistics of '").append(cacheName).append("':\n");
                sb.append(" - Total Entries of Cache -----> " + getSize()).append("\n");
                sb.append(" - Used Memory (Bytes) --------> " + getUnit()).append("\n");
                sb.append(" - Used Memory (MB) -----------> " + FORMAT.format(getUnitInMB())).append("\n");
                sb.append(" - Object Average Size --------> " + FORMAT.format(getAverageSize())).append("\n");
                return sb.toString();
            }
        }

        public static void main(String[] args) throws Exception {
            new CalculateTheSizeOfPeopleCache().run();
        }

        public static final DecimalFormat FORMAT = new DecimalFormat("###.###");
        public static final String DEFAULT_DOMAIN = "";
        public static final String DOMAIN_NAME = "Coherence";
    }

    I've commented the overall example, so I don't think you should have any trouble understanding it. Basically, we are dealing with JMX. The first thing to do is enable JMX support for the Coherence client application (i.e., a JVM that will only retrieve values from the data grid and will not join the cluster). This can be done very easily using the "tangosol.coherence.management" runtime system property. Consult the Coherence documentation on JMX to understand the possible values that can be applied. The program creates an in-memory data structure that holds instances of a custom class called "Statistics". This class represents the information we are interested in, which in this case is the size in bytes and in MB of the caches. An instance of this class is created for each cache currently managed by the data grid. Using JMX-specific methods, we retrieve the information relevant to calculating the total size of the caches. To test this example, you should execute the CreatePeopleCacheAndPopulateWithData.java program first and the CalculateTheSizeOfPeopleCache.java program after.
    The results in the console should be something like this:

    2012-06-23 13:29:31.188/4.970 Oracle Coherence 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Loaded operational configuration from "jar:file:/E:/Oracle/Middleware/oepe_11gR1PS4/workspace/calcular-tamanho-cache-coherence/lib/coherence.jar!/tangosol-coherence.xml"
    2012-06-23 13:29:31.219/5.001 Oracle Coherence 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Loaded operational overrides from "jar:file:/E:/Oracle/Middleware/oepe_11gR1PS4/workspace/calcular-tamanho-cache-coherence/lib/coherence.jar!/tangosol-coherence-override-dev.xml"
    2012-06-23 13:29:31.219/5.001 Oracle Coherence 3.6.0.4 <D5> (thread=Main Thread, member=n/a): Optional configuration override "/tangosol-coherence-override.xml" is not specified
    2012-06-23 13:29:31.266/5.048 Oracle Coherence 3.6.0.4 <D5> (thread=Main Thread, member=n/a): Optional configuration override "/custom-mbeans.xml" is not specified

    Oracle Coherence Version 3.6.0.4 Build 19111
     Grid Edition: Development mode
    Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.

    2012-06-23 13:29:33.156/6.938 Oracle Coherence GE 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Loaded Reporter configuration from "jar:file:/E:/Oracle/Middleware/oepe_11gR1PS4/workspace/calcular-tamanho-cache-coherence/lib/coherence.jar!/reports/report-group.xml"
    2012-06-23 13:29:33.500/7.282 Oracle Coherence GE 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Loaded cache configuration from "jar:file:/E:/Oracle/Middleware/oepe_11gR1PS4/workspace/calcular-tamanho-cache-coherence/lib/coherence.jar!/coherence-cache-config.xml"
    2012-06-23 13:29:35.391/9.173 Oracle Coherence GE 3.6.0.4 <D4> (thread=Main Thread, member=n/a): TCMP bound to /192.168.177.133:8090 using SystemSocketProvider
    2012-06-23 13:29:37.062/10.844 Oracle Coherence GE 3.6.0.4 <Info> (thread=Cluster, member=n/a): This Member(Id=2, Timestamp=2012-06-23 13:29:36.899, Address=192.168.177.133:8090, MachineId=55685, Location=process:244, Role=Oracle, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=2) joined cluster "cluster:0xC4DB" with senior Member(Id=1, Timestamp=2012-06-23 13:29:14.031, Address=192.168.177.133:8088, MachineId=55685, Location=process:1128, Role=CreatePeopleCacheAndPopulateWith, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=2)
    2012-06-23 13:29:37.172/10.954 Oracle Coherence GE 3.6.0.4 <D5> (thread=Cluster, member=n/a): Member 1 joined Service Cluster with senior member 1
    2012-06-23 13:29:37.188/10.970 Oracle Coherence GE 3.6.0.4 <D5> (thread=Cluster, member=n/a): Member 1 joined Service Management with senior member 1
    2012-06-23 13:29:37.188/10.970 Oracle Coherence GE 3.6.0.4 <D5> (thread=Cluster, member=n/a): Member 1 joined Service DistributedCache with senior member 1
    2012-06-23 13:29:37.188/10.970 Oracle Coherence GE 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Started cluster Name=cluster:0xC4DB

    Group{Address=224.3.6.0, Port=36000, TTL=4}

    MasterMemberSet
      (
      ThisMember=Member(Id=2, Timestamp=2012-06-23 13:29:36.899, Address=192.168.177.133:8090, MachineId=55685, Location=process:244, Role=Oracle)
      OldestMember=Member(Id=1, Timestamp=2012-06-23 13:29:14.031, Address=192.168.177.133:8088, MachineId=55685, Location=process:1128, Role=CreatePeopleCacheAndPopulateWith)
      ActualMemberSet=MemberSet(Size=2, BitSetCount=2
        Member(Id=1, Timestamp=2012-06-23 13:29:14.031, Address=192.168.177.133:8088, MachineId=55685, Location=process:1128, Role=CreatePeopleCacheAndPopulateWith)
        Member(Id=2, Timestamp=2012-06-23 13:29:36.899, Address=192.168.177.133:8090, MachineId=55685, Location=process:244, Role=Oracle)
        )
      RecycleMillis=1200000
      RecycleSet=MemberSet(Size=0, BitSetCount=0
        )
      )

    TcpRing{Connections=[1]}
    IpMonitor{AddressListSize=0}

    2012-06-23 13:29:37.891/11.673 Oracle Coherence GE 3.6.0.4 <D5> (thread=Invocation:Management, member=2): Service Management joined the cluster with senior service member 1
    2012-06-23 13:29:39.203/12.985 Oracle Coherence GE 3.6.0.4 <D5> (thread=DistributedCache, member=2): Service DistributedCache joined the cluster with senior service member 1
    2012-06-23 13:29:39.297/13.079 Oracle Coherence GE 3.6.0.4 <D4> (thread=DistributedCache, member=2): Asking member 1 for 128 primary partitions

    Cache Statistics of 'People':
     - Total Entries of Cache -----> 3
     - Used Memory (Bytes) --------> 883920
     - Used Memory (MB) -----------> 0.843
     - Object Average Size --------> 294640

    I hope this post can save you some time when calculating the total size of a Coherence cache becomes a requirement for your highly scalable system using data grids. See you!

  • Which file stores os.environ, and where is it stored: disk C: or disk D:?

    - by zjm1126
    My code is:

    os.environ['ss'] = 'ssss'
    print os.environ

    and it shows:

    {'TMP': 'C:\\DOCUME~1\\ADMINI~1\\LOCALS~1\\Temp', 'COMPUTERNAME': 'PC-200908062210', 'USERDOMAIN': 'PC-200908062210', 'COMMONPROGRAMFILES': 'C:\\Program Files\\Common Files', 'PROCESSOR_IDENTIFIER': 'x86 Family 6 Model 15 Stepping 2, GenuineIntel', 'PROGRAMFILES': 'C:\\Program Files', 'PROCESSOR_REVISION': '0f02', 'SYSTEMROOT': 'C:\\WINDOWS', 'PATH': 'C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\Program Files\\Hewlett-Packard\\IAM\\bin;C:\\Program Files\\Common Files\\Thunder Network\\KanKan\\Codecs;D:\\Program Files\\TortoiseSVN\\bin;d:\\Program Files\\Mercurial\\;D:\\Program Files\\Graphviz2.26.3\\bin;D:\\TDDOWNLOAD\\ok\\gettext\\bin;D:\\Python25;C:\\Program Files\\StormII\\Codec;C:\\Program Files\\StormII;D:\\zjm_code\\;D:\\Python25\\Scripts;D:\\MinGW\\bin;d:\\Program Files\\Google\\google_appengine\\', 'TEMP': 'C:\\DOCUME~1\\ADMINI~1\\LOCALS~1\\Temp', 'BID': '56727834-D5C3-4EBF-BFAA-FA0933E4E721', 'PROCESSOR_ARCHITECTURE': 'x86', 'ALLUSERSPROFILE': 'C:\\Documents and Settings\\All Users', 'SESSIONNAME': 'Console', 'HOMEPATH': '\\Documents and Settings\\Administrator', 'USERNAME': 'Administrator', 'LOGONSERVER': '\\\\PC-200908062210', 'COMSPEC': 'C:\\WINDOWS\\system32\\cmd.exe', 'PATHEXT': '.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH', 'CLIENTNAME': 'Console', 'FP_NO_HOST_CHECK': 'NO', 'WINDIR': 'C:\\WINDOWS', 'APPDATA': 'C:\\Documents and Settings\\Administrator\\Application Data', 'HOMEDRIVE': 'C:', 'SS': 'ssss', 'SYSTEMDRIVE': 'C:', 'NUMBER_OF_PROCESSORS': '2', 'PROCESSOR_LEVEL': '6', 'OS': 'Windows_NT', 'USERPROFILE': 'C:\\Documents and Settings\\Administrator'}

    I see that google-app-engine sets user_id in os.environ, not in the session (look here at lines 96-100 and line 257, and aeoid at line 177), and I want to know: which file stores os.environ, and where is it stored, disk C: or disk D:? Thanks
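
    For the record, os.environ is not backed by any file on C: or D:; it is a copy of the process environment block that lives in the process's own memory (on Windows, the defaults are seeded from the registry and the parent process when the process starts). A minimal Python sketch showing that changes die with the process:

    import os
    import subprocess
    import sys

    os.environ["SS"] = "ssss"   # visible in this process...
    print(os.environ["SS"])     # -> ssss

    # ...and inherited by child processes, but never written to disk:
    subprocess.call([sys.executable, "-c",
                     "import os; print(os.environ.get('SS'))"])  # -> ssss

    # A shell started independently (not as a child of this process)
    # will not see SS at all: the variable vanishes with the process.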

  • How can I control disk numbering (enumeration) in Windows 7 Disk Management?

    - by tim11g
    A desktop system had two drives (assigned C and D, which were enumerated in Disk Management as Disk 0 and Disk 1). A new SSD was added as the boot drive, after copying the C drive to the SSD. The SSD was connected to the SATA 0 (master) port on the motherboard. The previous C drive was moved to SATA 2 and was reformatted as a non-booting NTFS partition. The D drive remained on SATA 1. The system boots and everything seems fine, and I was able to manually adjust the drive letters. However, the list in Disk Management is re-ordered: Disk 0 is the previous D drive on SATA 1, Disk 1 is the new boot drive (now C) on SATA 0, and Disk 2 is the former C drive (now assigned E) on SATA 2. Does the Disk 0, 1, 2 designation mean anything? I would prefer to have them display in Disk Management as drives C, D, and E from top to bottom. Is the disk enumeration based on the SATA port or something else? (If it were based on SATA port, they should be ordered C, D, E.) Is there any way to re-order the disk number assignments? What actually determines the disk number enumeration?

  • Win 7: something is causing excessive disk usage, maybe Chrome?

    - by camcam
    On my Win 7 computer, this happens:

    - I work for several hours without problems, also using Chrome
    - Suddenly, just after refreshing a page in Chrome, the disk starts being used excessively (at this point I can close Chrome or not, it doesn't matter, the disk won't stop)
    - The disk LED is on all the time and I hear it running like crazy; the whole computer is slowed down a little
    - It lasts 5-10 minutes
    - In the meantime, I go to Windows Task Manager, observe which processes are using the disk, and turn them off one by one - but no success in stopping the excessive disk usage
    - After approximately 10 minutes everything stops
    - I go to Chrome (or re-open it) and refresh the page, with mixed results - sometimes the whole process repeats immediately, sometimes not

    Basically, it is almost always Chrome refreshing a random page that starts the excessive disk usage, but killing the Chrome process does not stop the disk. Going to the same page in Firefox does not cause problems. Windows Search is turned off. I would like to know what is really happening. Perhaps there is a utility which would allow me to see which process is really using the disk, so that I can disable the service (not Chrome, because killing Chrome does not change anything)? Or even better, perhaps there is a way to fix it?
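
    One way to see which process is really hitting the disk is to read the per-process I/O counters that Windows keeps (Resource Monitor and Process Explorer expose the same counters graphically). A minimal sketch using the third-party psutil package, assuming it is installed:

    import psutil

    # Rank processes by cumulative bytes read + written since they started.
    totals = []
    for proc in psutil.process_iter(["name"]):
        try:
            io = proc.io_counters()
            totals.append((io.read_bytes + io.write_bytes,
                           proc.pid, proc.info["name"]))
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            continue  # some system processes refuse access

    for total, pid, name in sorted(totals, reverse=True)[:10]:
        print(f"{total / 2**20:10.1f} MB  pid={pid}  {name}")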

  • Application that will identify the percentage of your system's disk bandwidth used, application by application?

    - by Warren P
    I always (subjectively) feel my computer is far too slow (however fast it is), and so I'm always looking for ways to measure and understand what my computer is actually doing that makes it seem "slow" to me. It has been my observation that my software-developer workload is most often disk-bound (waiting for disk I/O) more than CPU-bound. What has made it worse is that I am using a corporate PC with in-memory active-scanning anti-virus software that I do not have control over, and also some IT-department-mandated services that seem to suck up a lot of the available hard-disk bandwidth. The best tool I have seen (in Windows 7) is the Resource Monitor, which I usually access from the button in the Task Manager. The disk I/O page, however, seems to label disk activity at a very low level (for example, showing the Volume Shadow Storage, which is flushing information obviously written by something ELSE other than VSS itself, and writes to pagefile.sys, which are obviously due to virtual-memory faults in some application). What I would like to know is if a utility exists that can add up all direct disk input and output by user-level process, or find the process or service that caused VM or VSS activity. In that way, I hope, you could establish a real idea of how much of your computer's precious disk-subsystem bandwidth is attributable to a particular application. Here's a scenario: MyApp.exe writes 100k/s and reads 100k/s directly. VSS ends up writing another 100k/s. Page faults caused inside MyApp.exe cause another 100k/s of writes. So the total "cost" of MyApp.exe running, during a period of time (let's say 1 second), is 400k/s, whereas you can only directly observe half of that in Resource Monitor. Is there a smarter disk-IO-watching piece of software I can use?
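
    I am not aware of a stock utility that folds VSS or pagefile traffic back onto the process that caused it, but the directly-attributable component can be sampled per process. A hedged sketch with the third-party psutil package (assuming it is installed), measuring per-process throughput over the 1-second window from the scenario above:

    import time
    import psutil

    def snapshot():
        """Capture cumulative read/write bytes for every accessible process."""
        counters = {}
        for proc in psutil.process_iter(["name"]):
            try:
                io = proc.io_counters()
                counters[proc.pid] = (proc.info["name"], io.read_bytes, io.write_bytes)
            except (psutil.AccessDenied, psutil.NoSuchProcess):
                pass
        return counters

    before = snapshot()
    time.sleep(1.0)  # the 1-second window from the scenario
    after = snapshot()

    for pid, (name, rb, wb) in after.items():
        if pid in before:
            _, rb0, wb0 = before[pid]
            rate = (rb - rb0) + (wb - wb0)  # direct I/O only; VSS and pagefile
            if rate:                        # activity stays unattributed
                print(f"{rate / 1024:8.1f} KB/s  {name} (pid {pid})")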

  • What does 'Highest active time' for disk activity in Windows resource monitor mean?

    - by Nick R
    I know what the disk I/O, disk queue length and other measures are, but what does 'Highest active time' mean? Is it the amount of time the disk is busy handling requests, or something else? When it is high, does it mean the CPU is busy doing I/O work, or is it just indicating that the disk is busy handling requests? I'm trying to work out whether 50% active time means that the disk spends 50% of the time seeking, reading or writing, rather than the kernel spending 50% of its time servicing I/O requests. Edit: Another quick data point here. If you look at the difference between an SSD and a physical disk, the SSD shows significantly less active time, so I guess this really means the amount of time the operating system is waiting for the disk to respond and return data.

  • Windows 7 Reading Proper Disk Usage Statistics on Mounted Volumes

    - by Troy Perkins
    I'm running Windows 7 with 2 x 1.5 TB drives. The second internal drive is set up as a volume mounted at C:\Archives. Clicking the Computer icon in Windows Explorer only shows capacity stats for C: and not for C:\Archives. Also, the usage stats that do show for C: indicate 100% capacity (red), yet the system runs fine, with no warnings. Can someone explain this? I do have a lot of stuff on the C: drive, but I'm sure it's not 1.5 TB worth, and C:\Archives hardly has anything in it. Thanks! Troy
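
    As a side note, Explorer's "Computer" view only draws capacity bars for drive letters, so a volume mounted into a folder such as C:\Archives never gets its own bar there, and the C: bar reflects only the C: volume itself. The per-volume numbers can still be queried by path; a minimal Python sketch (Python 3.3+; the paths are from the question):

    import shutil

    # disk_usage() reports on the volume that actually backs each path, so a
    # folder-mounted volume returns its own totals rather than those of C:.
    for path in ("C:\\", "C:\\Archives"):
        usage = shutil.disk_usage(path)
        print(f"{path}: {usage.used / 2**30:.1f} GiB used "
              f"of {usage.total / 2**30:.1f} GiB")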

  • Windows 7 disk backup and clone for deployment to multiple systems

    - by gregmac
    I'm in the process of deploying some new PCs (there are only 8), all with identical hardware. What I'd like to do is install Windows 7 (64-bit), join the domain, etc., install a bunch of other software, and then clone that drive to multiple other machines. I'd also like to be able to use it as a backup image, so a machine can be restored back to that image at some future date. I understand this involves at least sysprep, but I am confused after reading some tutorials that talk about using the Windows Automated Installation Kit, or hacks with the registry and custom-built batch files. This process seems overly complex to me: I did something similar 10+ years ago and don't remember it being this bad. Surely things have improved in a decade? There are also some products that involve having network servers running deployment software, network boot, etc... this is way more than I want to set up. My systems are all identical hardware. Is there a simplified way to clone PCs? Preferably (since I'm a lazy developer, and not an IT admin) I'd like to find some off-the-shelf product that I can run after I get the machine set up, that will spit out a bootable DVD I can run on all the other systems, which will boot up, ask for a computer name, join it to the domain, and that's it. Does such a product exist?

  • Windows 2003 Dynamic Disk error

    - by ChrisH
    Hi, I was trying to ghost a partition on a Windows 2003 server, using Ghost 2003. Unfortunately things went horribly wrong, and now I can't boot back into my system. As you can see, Ghost creates a wee little partition to do its dirty work, and has dislodged my other partitions. Partition 2 in the image below is my C drive. Any suggestions as to how I might get this active again so that it boots? Cheers, Chris

  • Idle hard disk makes noise.

    - by ULTRA_POROV
    Like a fan or something. I checked it: I stopped all fans (CPU, video, PSU) and the noise was still there. I read online that it might be the motor or something. I have put a great deal of effort into making my PC quiet: installed a quiet PSU and CPU fan, reduced the fan speed of my video card, bought an SSD... But my data drive makes this noise. I would never have expected that. Do all hard disks make this kind of noise? I guess most people won't notice it because of the other fans they have in the system; I, however, can hear it quite clearly because all my other fans are almost silent. So should I get a new one, or should I just live with it, considering that I might end up with a drive that also makes this noise?

  • Ubuntu 14.04 disk utility SMART self-test failed threshold not exceeded

    - by user2323470
    I'm using the "Disks" program in Ubuntu 14.04 (live DVD) to assess the health of a drive I suspect is failing. However, when I first opened the program, it showed that the overall health was OK and all assessments are OK as well. I then tried to run a short self-test, but now the overall assessment shows a red "SELF-TEST FAILED". In the details section it says "Last self-test failed (read)" and "threshold not exceeded". All individual assessments are still OK though!! What I don't understand is, does that mean that the test executed and determined that the drive is a goner, or does it mean that the test didn't actually execute properly?

  • How to limit disk space per user in a PHP web application on CentOS

    - by solid
    We have a web application written in PHP and we want every user to be able to upload images up to a total of, e.g., 50 MB. We will create a directory structure so that every user has their own folder, like:

    app/user1/images
    app/user2/images
    ...

    Now every time a user uploads an image, we need to check whether this is still allowed, but we don't want 1000 users to continuously scan our hard drive counting the file sizes in their directories. So writing a script that counts all file sizes in a user directory is not an option, I guess? Is there an easier way to calculate the used-up space per user and limit our app accordingly?
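
    A common pattern is to keep a running per-user byte counter that is updated on every upload and delete, so nothing ever has to rescan the directories. A minimal sketch of the idea in Python (the same logic carries straight over to PHP; the 50 MB limit is from the question, and in a real app the counter would live in the database rather than in memory):

    import os

    QUOTA = 50 * 1024 * 1024   # 50 MB per user
    usage = {}                 # user -> bytes used; persist this in your DB

    def try_upload(user, temp_path, dest_dir):
        """Accept the uploaded file only if it keeps the user under quota."""
        size = os.path.getsize(temp_path)
        if usage.get(user, 0) + size > QUOTA:
            return False       # reject: quota would be exceeded
        os.makedirs(dest_dir, exist_ok=True)
        os.replace(temp_path, os.path.join(dest_dir, os.path.basename(temp_path)))
        usage[user] = usage.get(user, 0) + size
        return True

    def delete_image(user, path):
        """Deleting a file credits the bytes back to the user."""
        usage[user] = max(0, usage.get(user, 0) - os.path.getsize(path))
        os.remove(path)

    Filesystem-level quotas are the OS-native alternative on CentOS, but an application-level counter like this is usually simpler for a web app.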

  • Save Website To Disk

    - by Christian
    Hello everyone! I have a very poor internet connection when I'm living at home; the only time I have good internet is at college. When I get home, the most mundane task like opening a web page becomes a five-minute stress test. So what I was thinking was to download the web page ahead of time, for example superdickery. I was wondering what the best method would be to download the entire image archive of the page? And would it be illegal if I did this? It's just that I don't want to be frustrated every time I just want to load a simple jpeg image.
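
    The stock answer for offline copies is wget with its mirroring options, but the image-archive part can also be scripted. A minimal Python sketch that saves every <img> referenced by a single page (the URL is the example from the question; whether bulk-downloading is allowed depends on the site's terms):

    import os
    import urllib.request
    from html.parser import HTMLParser
    from urllib.parse import urljoin

    class ImgCollector(HTMLParser):
        """Collect the src attribute of every <img> tag on the page."""
        def __init__(self):
            super().__init__()
            self.srcs = []

        def handle_starttag(self, tag, attrs):
            if tag == "img":
                src = dict(attrs).get("src")
                if src:
                    self.srcs.append(src)

    def grab_images(page_url, out_dir="images"):
        html = urllib.request.urlopen(page_url).read().decode("utf-8", "replace")
        parser = ImgCollector()
        parser.feed(html)
        os.makedirs(out_dir, exist_ok=True)
        for src in parser.srcs:
            full = urljoin(page_url, src)   # resolve relative image links
            name = os.path.basename(full.split("?")[0]) or "image"
            urllib.request.urlretrieve(full, os.path.join(out_dir, name))
            print("saved", name)

    # grab_images("http://www.superdickery.com/")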

  • Mac Disk-Utility input/output error

    - by Michelle
    A couple of other people have posted about this, but my specific problem has not been addressed. For months I have been backing up DVDs and home movies with no problem; then all of a sudden I get an "input/output" error. Yes, I have cleaned the disks. Actually, I have tried 8 different ones - they are not all bad, so it's obviously my computer. I have done a scan and cleaned up the HD a bit just in case, but nothing is helping. I don't want to download other programs, since this one works but seems to be having a problem. Any ideas?

  • Server nearly unusable when doing disk writes

    - by Wikser
    My question closely relates to my last question here on serverfault. I was copying about 5 GB from a 10-year-old desktop computer to the server. The copy was done in Windows Explorer. In this situation I would assume the server would be bored by the data flow, but as usual with this server, it really slowed down. At least I could work within the remote session, even though there was some serious latency. The copy took its time (20 min?). During this time I went to a colleague and he tried to log in to the same server via remote desktop (for some other reason). It took about a minute to get to the login screen, a minute to open the control panel, a minute to open the performance monitor... Icons were loading maybe one per second. We saw the following (from memory):

    - CPU: 2%
    - Avg. Queue Length: 50
    - Pages/sec: 115 (?)

    There was no other considerable activity on the server. The server seldom serves some ASP.NET pages, which also became very slow during this time. The relevant configuration is as follows:

    - Windows 2003
    - SEAGATE ST3500631NS (7200 rpm, 500 GB)
    - LSI MegaRAID based RAID 5, 4 disks, 1 hot spare
    - Write Through
    - No read-ahead
    - Direct Cache Mode
    - Harddisk-Cache-Mode: off

    Is this normal behaviour for such a configuration? What measurements could give further clues? Is it reasonable to reduce the priority of such copy I/O and favour other processes like the remote desktop? How would you do that? Many thanks!

  • Mounting an FTP as a virtual disk (FTPDrive analogue)

    - by axk
    FTPDrive has been a great utility for me, but it does not support 64-bit Windows 7. The feature of FTPDrive that is useful to me is accessing files on an FTP server as if they were local files, without pre-downloading, so that I can preview and watch movies from a local FTP server without waiting for the full movie to be downloaded first. Do you know of any software which allows accessing files over FTP without pre-downloading? Thanks!
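
    The underlying trick such tools perform is just consuming the file as it arrives instead of downloading it first. A minimal Python sketch of that idea with the standard ftplib module (host, credentials, and path are placeholders):

    from ftplib import FTP

    def handle_chunk(chunk):
        # Each chunk is usable immediately; a virtual-drive tool would hand
        # these bytes to the media player as the player asks for them.
        print("got", len(chunk), "bytes")

    def stream_file(host, path, user="anonymous", password=""):
        """Consume a remote file chunk by chunk instead of pre-downloading it."""
        ftp = FTP(host)
        ftp.login(user, password)
        ftp.retrbinary("RETR " + path, callback=handle_chunk,
                       blocksize=64 * 1024)
        ftp.quit()

    # stream_file("ftp.example.com", "/movies/sample.avi")

    The random seeking a real virtual drive also needs maps onto the rest (restart offset) argument of retrbinary.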

  • The effect of a large number of files on disk space in Unix filesystems

    - by user46976
    If I have a text file in Unix that contains N-many independent entries (e.g. records about employees, where each employee has a separate record), is it expected that this file will take up less space than if I split it into N files, each containing the entry for one employee? In other words, can one save significant space on Unix file systems by concatenating many files together, or is the difference negligible? Thanks.
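
    The usual answer is that each file occupies whole filesystem blocks (commonly 4 KiB) plus an inode, so small records waste the unused tail of their last block when stored one per file. A rough back-of-the-envelope sketch in Python (the 4 KiB block size is an assumption; check yours with stat -f or dumpe2fs):

    import math

    BLOCK = 4096  # typical block size; an assumption, check your filesystem

    def on_disk(size, block=BLOCK):
        """Space actually consumed: size rounded up to whole blocks."""
        return max(1, math.ceil(size / block)) * block

    n, record = 100_000, 200           # 100k records of ~200 bytes each
    split = n * on_disk(record)        # one tiny file per record
    merged = on_disk(n * record)       # all records concatenated

    print(f"split into files: {split / 2**20:.1f} MiB")   # ~390.6 MiB
    print(f"one big file:     {merged / 2**20:.1f} MiB")  # ~19.1 MiB

    So the difference is dramatic when records are much smaller than a block, and becomes negligible once each record approaches or exceeds the block size.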

  • Why is an Ext4 disk check so much faster than NTFS?

    - by Brendan Long
    I had a situation today where I restarted my computer and it said I needed to check the disk for consistency. About 10 minutes later (at "1%" complete), I gave up and decided to let it run when I got home. For comparison, my home computer uses Ext4 for all of its partitions, and the disk checks (which run around once a week) only take a couple of seconds. I remember reading that fast disk checks were a priority, but I don't know how they could do that. So, how does Ext4 do disk checks so fast? Was there some huge breakthrough in doing this after NTFS came out (~10 years ago)? Note: The NTFS disk is ~300 GB and the Ext4 disk is ~500 GB. Both are about half full.

  • GParted in UBUNTU shows entire disk as UNALLOCATED SPACE

    - by msPeachy
    Good day to everyone. I hope someone can help me with my problem. I have a dual-boot Windows and Ubuntu system. I recently encountered an "hd0 out of disk" error and wasn't able to boot Ubuntu. So I booted into Windows; after booting and rebooting Windows 2 or 3 times, I tried booting Ubuntu, but I still get the "hd0 out of disk" error. I decided to run Ubuntu from a live USB to try to fix my Ubuntu partition using GParted, but when I run GParted, it shows my entire disk as UNALLOCATED SPACE! The strange thing is that Nautilus still shows and mounts my partitions. Also, every time I boot into Windows, my partitions exist and I am able to read and write to them. I have no idea what is wrong. Please help! I can't stand using Windows since most of the tools I use are in Ubuntu. I don't mind reinstalling Ubuntu. In fact, I already tried reinstalling from the live USB, but I wasn't able to, since GParted and the Ubuntu installer do not recognize my partitions and show the entire disk as unallocated space. I am currently running Ubuntu from the live USB. Here's the output of sudo fdisk -l:

    Disk /dev/sda: 320.1 GB, 320072933376 bytes
    255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xb30ab30a

       Device Boot      Start        End     Blocks  Id  System
    /dev/sda1   *        2048  104869887   52433920  83  Linux
    /dev/sda2       104869888  105074687     102400   7  HPFS/NTFS/exFAT
    /dev/sda3       105074688  156149759   25537536   7  HPFS/NTFS/exFAT
    /dev/sda4       156151800  625153409  234500805   f  W95 Ext'd (LBA)
    /dev/sda5       156151808  169156591    6502392  82  Linux swap / Solaris
    /dev/sda6       169158656  294991871   62916608   7  HPFS/NTFS/exFAT
    /dev/sda7       294993920  471037944   88022012+  7  HPFS/NTFS/exFAT
    /dev/sda8       471041928  625121152   77039612+  7  HPFS/NTFS/exFAT

    When I run sudo parted -l, I get this error message:

    ubuntu@ubuntu:~$ sudo parted -l
    Error: Can't have a partition outside the disk!
