Search Results

Search found 607 results on 25 pages for 'tb'.


  • Convert FAT32 to NTFS, risk/time?

    - by Rakward
    After a quick search I found that through a command prompt I can convert a drive from FAT32 to NTFS without losing data (see here). What I want to ask is: how safe is this method on a 1.5 TB drive with 500 GB of data? What are the chances of this freezing up (or is there really nothing to worry about), and what is the probable time - a couple of minutes or a whole hour? Sorry if this seems like a stupid question, I just want to play it safe here...
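    As a rough sketch of the command the question refers to (the drive letter E: is an assumption, substitute your own; run from an elevated Command Prompt, and a disk check first is generally a good idea):

        rem E: is an assumed drive letter; adjust to the real volume.
        chkdsk E: /f            rem fix file system errors before converting
        convert E: /fs:ntfs     rem in-place FAT32 -> NTFS conversion; existing data is preserved

    The conversion itself typically takes minutes rather than hours, since it rewrites metadata rather than the data itself, but a current backup is still the safe play.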


  • Lots of dropped packets when tcpdumping on a busy interface

    - by Frands Hansen
    My challenge: I need to do tcpdumping of a lot of data - actually from 2 interfaces left in promiscuous mode that are able to see a lot of traffic. To sum it up:
    - Log all traffic in promiscuous mode from 2 interfaces
    - Those interfaces are not assigned an IP address
    - pcap files must be rotated per ~1G
    - When 10 TB of files are stored, start truncating the oldest
    What I currently do: Right now I use tcpdump like this: tcpdump -n -C 1000 -z /data/compress.sh -i any -w /data/livedump/capture.pcap $FILTER
    The $FILTER contains src/dst filters so that I can use -i any. The reason for this is that I have two interfaces and I would like to run the dump in a single thread rather than two. compress.sh takes care of assigning tar to another CPU core, compressing the data, giving it a reasonable filename and moving it to an archive location. I cannot specify two interfaces, so I have chosen to use filters and dump from any interface. Right now I do not do any housekeeping, but I plan on monitoring disk space, and when I have 100G left I will start wiping the oldest files - this should be fine.
    And now, my problem: I see dropped packets. This is from a dump that has been running for a few hours and collected roughly 250 gigs of pcap files:
    430083369 packets captured
    430115470 packets received by filter
    32057 packets dropped by kernel <-- This is my concern
    How can I avoid so many packets being dropped? Things I have already tried or looked at: I changed the value of /proc/sys/net/core/rmem_max and /proc/sys/net/core/rmem_default, which did indeed help - actually it took care of around half of the dropped packets. I have also looked at gulp - the problem with gulp is that it does not support multiple interfaces in one process and it gets angry if the interface does not have an IP address. Unfortunately, that is a deal breaker in my case. The next problem is that when the traffic flows through a pipe, I cannot get the automatic rotation going. Getting one huge 10 TB file is not very efficient, and I don't have a machine with 10 TB+ of RAM that I can run Wireshark on, so that's out. Do you have any suggestions? Maybe even a better way of doing my traffic dump altogether.
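    One knob not mentioned above is libpcap's own capture buffer, which tcpdump exposes as -B (in KiB). A sketch combining it with the rmem sysctls already tried - the sizes are assumptions to tune against available RAM, not recommendations:

        # Assumed values; adjust to taste and to the RAM you can spare.
        sysctl -w net.core.rmem_max=134217728       # 128 MiB ceiling for socket receive buffers
        sysctl -w net.core.rmem_default=134217728
        tcpdump -n -B 524288 -C 1000 -z /data/compress.sh \
            -i any -w /data/livedump/capture.pcap $FILTER   # -B 524288 = 512 MiB capture buffer

    A larger capture buffer mainly helps absorb bursts while the compressor or disk briefly falls behind; sustained drops usually point at disk throughput instead.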


  • Cannot see folders on my external drive [closed]

    - by Incognito
    I have a Windows Vista Home Premium machine and am using a WD portable drive. My external drive shows that it is occupying space, but I am no longer able to see any of my folders on there. Out of the 1 TB, it shows that 600 gigs are free, but the folders are gone! Please suggest some way of retrieving the files (I think there must be some way to retrieve them, since the disk utilization shows that something is on the disk). Appreciate your help!
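    If the space is still in use but the folders are invisible, one frequent cause worth ruling out is that something (often malware on a previously connected machine) has set the hidden/system attributes on them. A sketch that clears those attributes, assuming the external drive shows up as E::

        rem E: is an assumed drive letter; run from a Command Prompt.
        attrib -h -r -s /s /d E:\*.*    rem recursively clear hidden/read-only/system flags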


  • Raid 5 GPT Partitioning

    - by user39325
    I have a Dell PowerEdge R710 server with five 1 TB disks, all of them in RAID 5. I was trying to install CentOS, but it says "Your boot partition is on disk using GPT Partition..." I read somewhere that CentOS can't install on a disk larger than 2 TB, so I made some partitions smaller, but it's not working. PS: I am going to install Proxmox on that, but Proxmox also won't accept disks larger than 2 TB.
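    One common cause of that installer message is booting a GPT-labelled disk in legacy BIOS mode, which needs a tiny BIOS boot partition for GRUB. A sketch using parted - the device name and partition number are assumptions about this layout, not known values:

        # Assumes the RAID 5 virtual disk is /dev/sda and that partition 1 is the new one.
        parted /dev/sda print                        # confirm the disk label really is GPT
        parted /dev/sda mkpart biosboot 1MiB 2MiB    # ~1 MiB partition for GRUB's core image
        parted /dev/sda set 1 bios_grub on           # flag it so the installer accepts the layout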


  • How long would this file transfer take?

    - by CT
    I have 12 hours to back up 2 TB of data. I would like to back up to a network share on a computer using consumer WD 2 TB Black 7200 RPM hard drives, over Gigabit Ethernet. What other variables would I need to consider to see if this is feasible? How would I set up this calculation?
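    A rough feasibility check, using assumed figures (about 110 MB/s of sustained throughput for Gigabit Ethernet plus consumer disks; the real number depends on protocol overhead, file sizes and how fast the source can read):

        # 2 TB ~= 2,000,000 MB
        echo "scale=1; 2000000 / 110 / 3600" | bc    # ~5 hours at a sustained 110 MB/s
        echo "scale=1; 2000000 / (12 * 3600)" | bc   # ~46 MB/s needed to finish within 12 hours

    On paper the window is comfortable; the variables worth checking are whether the source can sustain roughly 50 MB/s of reads alongside normal use, and whether the data is mostly many small files, which lowers effective throughput considerably.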


  • Installing/dual-booting Fedora 17 on existing Windows 7 HDD

    - by Moose4
    I have a 64-bit Windows 7 install as the only partition on a 1 TB HDD, with about 350 GB free. I would like to install Fedora 17 as a dual-boot option on this system and give it about 100 GB to play with. If in the Fedora install utility I choose to shrink the W7 partition by 100 GB to give it space, will that cause me to lose my existing W7 data? And how do I go about setting up dual-boot (with Windows 7 as the default)?


  • C# WPF application is using too much memory while GC.GetTotalMemory() is low

    - by Dmitry
    I wrote a small WPF application with 2 threads: the main thread is the GUI thread and the other is a worker. The app has one WPF form with some controls. There is a button that allows selecting a directory. After a directory is selected, the application scans it for .jpg files and checks whether their thumbnails are already in a hashtable; if they are, it does nothing, otherwise it adds their full filenames to a queue for the worker. The worker takes filenames from this queue, loads the JPEG images (using WPF's JpegBitmapDecoder and BitmapFrame), makes thumbnails of them (using WPF's TransformedBitmap) and adds them to the hashtable. Everything works fine, except the memory consumption of this application when making thumbnails of big images (like 5000x5000 pixels). I added textboxes to my form to show memory consumption (GC.GetTotalMemory() and Process.GetCurrentProcess().PrivateMemorySize64) and was very surprised, because GC.GetTotalMemory() stays close to 1-2 MB, while the private memory size grows constantly, especially when loading a new image (roughly +100 MB per image). Even after loading all images, making thumbnails of them and freeing the original images, the private memory size stays at about 700-800 MB. My VirtualBox is limited to 512 MB of physical memory, and Windows inside VirtualBox starts to swap a lot to handle this huge memory consumption. I guess I'm doing something wrong, but I don't know how to investigate the problem, because according to the GC the allocated memory size is very low. The code of the thumbnail loader class is attached:

    class ThumbnailLoader {
        Hashtable thumbnails;
        Queue<string> taskqueue;
        EventWaitHandle wh;
        Thread[] workers;
        bool stop;
        object locker;
        int width, height, processed, added;

        public ThumbnailLoader() {
            int workercount, i;
            wh = new AutoResetEvent(false);
            thumbnails = new Hashtable();
            taskqueue = new Queue<string>();
            stop = false;
            locker = new object();
            width = height = 64;
            processed = added = 0;
            workercount = Environment.ProcessorCount;
            workers = new Thread[workercount];
            for (i = 0; i < workercount; i++) {
                workers[i] = new Thread(Worker);
                workers[i].IsBackground = true;
                workers[i].Priority = ThreadPriority.Highest;
                workers[i].Start();
            }
        }

        public void SetThumbnailSize(int twidth, int theight) {
            width = twidth;
            height = theight;
            if (thumbnails.Count != 0) AddTask("#resethash");
        }

        public void GetProgress(out int Added, out int Processed) {
            Added = added;
            Processed = processed;
        }

        private void AddTask(string filename) {
            lock (locker) {
                taskqueue.Enqueue(filename);
                wh.Set();
                added++;
            }
        }

        private string NextTask() {
            lock (locker) {
                if (taskqueue.Count == 0) return null;
                else {
                    processed++;
                    return taskqueue.Dequeue();
                }
            }
        }

        public static string FileNameToHash(string s) {
            return FormsAuthentication.HashPasswordForStoringInConfigFile(s, "MD5");
        }

        public bool GetThumbnail(string filename, out BitmapFrame thumbnail) {
            string hash;
            hash = FileNameToHash(filename);
            if (thumbnails.ContainsKey(hash)) {
                thumbnail = (BitmapFrame)thumbnails[hash];
                return true;
            }
            AddTask(filename);
            thumbnail = null;
            return false;
        }

        private BitmapFrame LoadThumbnail(string filename) {
            FileStream fs;
            JpegBitmapDecoder bd;
            BitmapFrame oldbf, bf;
            TransformedBitmap tb;
            double scale, dx, dy;
            fs = new FileStream(filename, FileMode.Open);
            bd = new JpegBitmapDecoder(fs, BitmapCreateOptions.None, BitmapCacheOption.OnLoad);
            oldbf = bd.Frames[0];
            dx = (double)oldbf.Width / width;
            dy = (double)oldbf.Height / height;
            if (dx > dy) scale = 1 / dx; else scale = 1 / dy;
            tb = new TransformedBitmap(oldbf, new ScaleTransform(scale, scale));
            bf = BitmapFrame.Create(tb);
            fs.Close();
            oldbf = null;
            bd = null;
            GC.Collect();
            return bf;
        }

        public void Dispose() {
            lock (locker) { stop = true; }
            AddTask(null);
            foreach (Thread worker in workers) { worker.Join(); }
            wh.Close();
        }

        private void Worker() {
            string curtask, hash;
            while (!stop) {
                curtask = NextTask();
                if (curtask == null) wh.WaitOne();
                else {
                    if (curtask == "#resethash") thumbnails.Clear();
                    else {
                        hash = FileNameToHash(curtask);
                        try { thumbnails[hash] = LoadThumbnail(curtask); }
                        catch { thumbnails[hash] = null; }
                    }
                }
            }
        }
    }


  • SSAS Multithreaded sync with Windows 2008 R2

    - by ACALVETT
    We have been happily running some of our systems on Windows 2003 and have had an upgrade to W2K8 R2 on the list for quite some time. The upgrade has now completed and we can start taking advantage of some of the new features, which is the reason for this post. For a long time we have used the sample Robocopy script from the SQLCat team to synchronize some of our larger SSAS databases. If you're wondering what I mean by large: around 5 TB with a good few thousand partitions. The script works like a dream...(read more)


  • Plex Media Server won't find media on external hard drive

    - by grayed01
    I am attempting to turn an older PC into a home media server with Ubuntu 12.04, using Plex Media Server. I have a newer WD 2 TB external USB HD with all of my media on it. I cannot for the life of me figure out why Plex will not recognize the files on this hard drive. It shows the external drive as there, and Ubuntu shows the files and allows me to play them, view them, etc. Plex shows the name "External", but when I click it, it is 100% empty and shows nothing to add. I can access the files on the external drive through file sharing just fine on my Windows computers, but I would love to be able to use Plex for streaming with our Roku. I am fairly new to Ubuntu; I have used Plex with the same HD on Windows and it worked fine. I have read multiple articles on this and nothing seems to do the trick. How can I solve this?


  • Apache Prefork Configuration

    - by user1618606
    I'm a newbie at VPS configuration. I've installed Apache, PHP and MySQL, and now I need to know how to configure prefork to optimize Apache. The system configuration is:
    - CPU: 2 cores x 2 GHz @ 4 GHz
    - RAM: 2304 MB DDR3 (burst memory 3 GB DDR3)
    - Disk space: 30 GB SSD
    - Bandwidth: 3 TB, switch port 1 Gbps
    Currently, after Linux, MySQL, Apache and PHP, about 250 MB of memory is in use. I have no idea how to calculate the values. I saw on some websites vars like:
    KeepAlive On
    KeepAliveTimeout 1
    MaxKeepAliveRequests 100
    StartServers 15
    MinSpareServers 15
    MaxSpareServers 15
    MaxClients 20
    MaxRequestsPerChild 0
    or
    StartServers 2
    MaxClients 150
    MinSpareThreads 25
    MaxSpareThreads 75
    ThreadsPerChild 25
    MaxRequestsPerChild 0
    Which should I use, prefork or worker? Where and how are these vars placed? In httpd.conf?
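    For prefork, a common starting point is to size MaxClients as the memory available to Apache divided by the average size of one Apache+PHP process. The figures below are assumptions for a 2304 MB VPS, not measurements:

        # Measure a real per-process size first (the binary may be httpd instead of apache2):
        ps -ylC apache2 --sort=rss        # RSS column is in KiB
        # Assumed: ~1500 MB left for Apache after the OS and MySQL, ~50 MB per process.
        echo $(( 1500 / 50 ))             # ~30 -> a conservative MaxClients to start from

    With MaxClients pinned that way, the spare-server settings can stay modest, and a non-zero MaxRequestsPerChild (a few thousand) helps if PHP leaks memory; where the directives live depends on the distribution - httpd.conf on many builds, or the mpm_prefork section of apache2.conf on Debian/Ubuntu.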



  • New Computer

    - by Matt Christian
    Last night I received my computer that was ordered with my tax return money. Here are the specs of my old computer:
    - Pentium 4 processor
    - 3-4 GB RAM
    - ~256 GB HDD space (2 drives)
    - nVidia card (AGP 8x)
    Sorry I can't be more specific, my memory is gone :p  Here are the new computer specs (mostly):
    - 2.8 GHz i7 quad-core
    - 6 GB RAM
    - 1 TB HDD space (1 drive)
    - 1 GB Radeon card (PCI-X)
    I also got a new monitor (22" Asus with HDMI), so I will be using my 19" widescreen as a secondary monitor. If I remember I'll hop on here and post the specifics later on...


  • How safe is GParted when resizing Linux and Windows partitions?

    - by Olivier Pons
    I want to resize my partitions. I have 3 partitions: Ubuntu 10.04, Windows Seven, Ubuntu 11.10. The system boots with the bootloader installed by the Ubuntu 11.10 version. I want to expand (only expand) all 3 partitions. My HD is 1.8 TB, so it's big, and I have no way to back it up before expanding. So my question is: if you tell me GParted works 99.99% of the time, I'm willing to take the risk. If you tell me GParted works 90% of the time, I won't take that risk.


  • No space on device

    - by user170810
    Today I tried to move a huge directory to a common directory for 2 users. I first tried to change the permissions of my home directory. Then something happened with my ~/.private directory and the disk was full! I have a 2 TB disk. I cannot make a backup; I cannot even enter graphics mode. Please help. Is there a way to restore my contents without making a backup? Even creating ~/Private is impossible -- "no space on the device". I have no Private directory, and don't tell me that I have erased it, because it is not true. Are there any utilities to restore the files and make them unencrypted?
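    A possible way forward, assuming the home directory is eCryptfs-encrypted (which the ~/.Private directory suggests): free a little space first from a text console or live CD, then recover the decrypted contents with the ecryptfs tools.

        df -h                                       # confirm which filesystem is actually full
        sudo du -xh --max-depth=1 /home | sort -h   # find what is eating the space
        # After deleting or moving something expendable, recover the encrypted data:
        sudo ecryptfs-recover-private               # finds .Private dirs and mounts them read-only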


  • Announcing StorageTek LTO 6

    - by uwes
    Announcing StorageTek LTO-6 Full Height 8 Gb Fibre Channel IBM Tape Drives! We're pleased to announce the availability of StorageTek LTO 6 tape drives in our StorageTek SL3000 and SL8500 modular tape libraries, which offer the following features:
    - Higher capacity - StorageTek LTO 6 drives can write 2.5 TB of native data to one LTO 6 cartridge, a 66% improvement over StorageTek LTO 5
    - Better performance - StorageTek LTO 6 drive performance is 160 MB/sec (uncompressed), 14% faster than LTO 5
    - Investment protection - StorageTek LTO 6 drives are backward read and write compatible with earlier generations for existing LTO customers
    StorageTek LTO 6 will be in the system and orderable for the StorageTek SL3000 and SL8500 on Tuesday, December 4!
    For more information go to: Oracle.com Tape Page, Oracle Technology Network Tape Page


  • Why does my Ubuntu freeze in the middle of something with a blank screen?

    - by mohamed
    I am facing a serious problem: my Dell Optiplex 745, loaded with Ubuntu 12.04 32-bit, freezes with a blank screen, no mouse cursor and no keyboard activity on many occasions, sometimes when I open Firefox or the Software Center. I really can't tell what the main problem is, but what I am certain about is that I have to do a hard reset to reboot. I tried booting from a live CD but it did the same; I tried the recovery console but couldn't find a way to fix it from there (I am a new user); I suspected a hardware problem, but Windows runs flawlessly, so I believe it's something related to Ubuntu. Specs: Intel Celeron D 3.06 GHz, 3.0 GB RAM, Intel built-in Q965 graphics, 1 TB HDD, Phoenix BIOS 2.6.4. Please help; I really love Ubuntu and I don't want to go back to Windows!


  • Need Help Changing Owner of External Hard Drive

    - by Thomas Ballew
    My understanding of code is about zero. I can open a terminal window and type commands that are given to me, but that's about it. If someone can help me with this question and explain at a level I'm likely to understand, thanks. If not, thanks anyway. I have an external hard drive with two partitions. I bought this drive when my operating system was Apple OS X 10.5 or so, and it was formatted as HFS+ with that system. Now, connecting the HD to my Linux system, I can read files, but I have about 1.5 TB of space that I can't use, because I am not the owner of the files and so can't write to the HD. Short of reformatting the HD, is there a way for me to set the permissions for the HD so I can write to it? Again, thank you.
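    A sketch of one approach, assuming the partition really is HFS+ and shows up as /dev/sdb2 (check the real name with lsblk -f). Journaled HFS+ is mounted read-only by Linux; the force option overrides that at some risk to the journal, so disabling journaling from OS X's Disk Utility first is the safer route:

        sudo umount /dev/sdb2
        sudo mount -t hfsplus -o force,rw,uid=$(id -u),gid=$(id -g) /dev/sdb2 /mnt
        # uid/gid map the files' ownership to the current user for this mount only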


  • Infiniband: a high-performance network fabric - Part I

    - by Karoly Vegh
    Introduction: At OpenWorld this year I managed to chat with interesting people again - one of them, answering InfiniBand deep-dive questions with ease over coffee, turned out to be one of Oracle's IB engineers, Ted Kim, who actively participates in the InfiniBand Trade Association and integrates Oracle solutions with this high-speed network. This is why I love attending OOW. He granted me an hour of his time to talk about IB. This post is mostly based on that tech interview.
    Start of the actual post: Traditionally, data transfer between servers and storage elements happens in networks of up to 10 gigabit/second, or in SANs with up to 8 gbps Fibre Channel connections. Happens. Well, data rather trickles through. But nowadays data amounts grow well over the terabyte order of magnitude, and multisocket/multicore/multithread servers hunger for data that these transfer technologies just can't deliver fast enough, causing all the CPUs of this world to do one thing at the same speed - waiting for data. And once again, I/O is the bottleneck in computing. FC and Ethernet can't keep up. We have half-TB SSDs and dozens of TB of RAM to hold the data to be modified, but we can't transfer it. Can't back up fast enough, can't replicate fast enough, can't synchronize fast enough, can't load fast enough. The bad news is, everyone is used to this, like back in the '80s everyone was used to starting compile jobs and going for a coffee. Or on vacation. The good news is, there's an alternative. Not so-called "bleeding-edge" 8 gbps, but (as of now) 56. Not layers of overhead, but low latency. And it is available now. It has been for a while, actually. Welcome to the world of InfiniBand.
    Short history: InfiniBand was born as a result of the joint efforts of HPAQ, IBM, Intel, Sun and Microsoft. They planned to implement a next-generation I/O fabric, in the 90s. In the 2000s InfiniBand (from now on: IB) was quite popular in the high-performance computing field, powering most of the top500 supercomputers. Then in the middle of the decade, Oracle realized its potential and used it as the interconnect backbone for the first Database Machine, the first Exadata. Since then, IB has been booming; Oracle utilizes and supports it in a large set of its HW products, and it is the backbone of the famous Engineered Systems: Exadata, SPARC SuperCluster, Exalogic, OVCA and even the new DB backup/recovery box. You can also use it to make servers talk high-speed IP to each other, or to a ZFS Storage Appliance. Following Oracle's lead, even IBM has jumped on the wagon and leverages IB in its PureFlex systems, their first InfiniBand machines.
    IB structural overview: If you want to use IB in your servers, the first thing you will need is PCI cards, in IB terms Host Channel Adapters, or HCAs. Just like NICs for Ethernet, or HBAs for FC. Into these you plug an IB cable, going to an IB switch that provides connections to other IB HCAs. Of course you're going to need drivers for those in your OS. Yes, these have long been available for Solaris and Linux. Now, what protocols can you talk over IB? There's a range of choices. See, IB doesn't accept packet loss the way Ethernet does, and hence doesn't need to rely on TCP/IP as a workaround for resends. That is, you still can run IP over IB (IPoIB), and that is used in various cases for control functionality, but the data transfer can run over more efficient protocols - like native IB.
    About PCI connectivity: IB cards, as you see, are fast. They bring low latency, which is just as important as their bandwidth. Current IB cards run at 56 gbit/s. That is slightly more than double the capacity of a PCI Gen2 slot (~25 gbit/s). And IB cards are usually equipped with two ports - that is, altogether you'd need 112 gbit/s of PCI bandwidth to be able to utilize FDR IB cards in an active-active fashion. PCI Gen3 slots provide you with around ~50 gbps. This is why most IB cards are configured in an active-standby way if both ports are used. Once again the PCI slot is the bottleneck. Anyway, the new Oracle servers are equipped with Gen3 PCI slots, and the new IB HCAs support those too. Oracle utilizes the QDR HCAs, running at 40 gbps gross, which translates to 32 gbps of net traffic due to the 10:8 signal-to-data ratio (8b/10b encoding).
    Consolidation techniques: Technology never stops evolving. Mellanox is already working on the 100 gbps (EDR) version, which will be optical, since signal technology doesn't allow EDR to be copper. Also, I hear you say "100 gbps? I will never use/need that much." Are you sure? Have you considered consolidation scenarios, where (for example with Oracle Virtual Network) you could consolidate your platform into a high-density virtualized solution providing many virtual 10 gbps interfaces through that 100 gbps? Technology never stops evolving. I still remember when a 10 mbps network was impressively fast. Back in those days, 16 MB of RAM was a lot. Now we usually run servers with around 100,000 times more RAM. If network infrastructure speeds could grow as fast as main memory capacities, we'd have a different landscape now :) You can utilize SRIOV as well for consolidation. That is, if you run LDoms (aka Oracle VM Server for SPARC) you do not have to add physical IB cards to all your guest LDoms, and you do not need to run VIO devices through the hypervisor either (avoiding overhead). You can enable SRIOV on those IB cards, which practically virtualizes the PCI bus, and you can dedicate Physical and Virtual Functions of the virtualized HCAs as native, physical HW devices to your guests. See Raghuram's excellent post explaining SRIOV. SRIOV for IB is supported since LDoms 3.1.
    This post is getting lengthy, so I will rename it to Part I and continue in a second post.


  • Ubuntu server 13 hangs on boot

    - by Slava Fomin II
    I formatted my WD30EFRX 3 TB SATA HDD as GPT with the following layout:
    - 200 MB FAT32 (bootable)
    - 50 GB ext4 (mount point: /)
    - 4 GB swap space
    - ~2.9 TB ext4 (with no mount point)
    Then I created a USB flash drive using Rufus 1.3.3 and the ubuntu-13.04-server-amd64.iso image, with the "GPT partition scheme for UEFI computer" and FAT32 (16 KB) options. I successfully booted from this USB on my Intel D510MO motherboard in UEFI mode (there was a black-and-white boot loader menu instead of a colorful GUI) and installed the OS. During the boot loader installation I saw it was grub-efi-amd64, as it must be. After installation I removed the flash drive and tried to boot the system, but just after the BIOS POST it hangs (there is a white blinking cursor at the top left corner of the black screen) and nothing happens. UEFI mode is enabled in the BIOS configuration and I've updated the BIOS to the latest firmware. What went wrong? How can I confirm that the system was installed properly? How can I boot the OS?


  • Corrupted File System on Dual HD/Dual Boot System

    - by Troy
    I have the following system set up: 2 drives, 1 TB each, one with Windows 7 and the other with what used to be Ubuntu 11.x. After an update my system became corrupted, and now the file system is apparently corrupt. The Ubuntu drive is /dev/sda2; the Windows 7 drive is /dev/sda1. I've tried fsck /dev/sda2 -t ext3 and that does nothing. I'm not sure what to do at this point. I don't even mind wiping the /dev/sda2 drive clean, so it will at least accept a completely new installation of Ubuntu; I just don't know how to do that. Please help. Thank you.
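    A sketch of both options, run from a live CD/USB with the partition unmounted, and assuming /dev/sda2 really is the Ubuntu partition (verify with sudo lsblk -f first):

        sudo fsck.ext4 -fy /dev/sda2    # force a full check and auto-repair (use fsck.ext3 if it is ext3)
        # If the repair fails and the data is expendable, recreate the filesystem (this erases it):
        sudo mkfs.ext4 /dev/sda2

    After recreating the filesystem, the Ubuntu installer can simply be pointed at /dev/sda2 as the root partition during a fresh install.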


  • Big Data Appliance

    - by David Dorf
    Today Oracle announced the next release of its Big Data Appliance, an engineered system composed of hardware and software targeting the efficient processing of big data. The solution leverages 288 Intel cores running Cloudera's distribution of Apache Hadoop in 1.1 TB of main memory. This monster helps companies acquire, organize, and analyze large volumes of structured and unstructured data. Additionally, new versions of the Oracle Big Data Connectors and Oracle NoSQL Database were released. Why is this important to retailers? As the accompanying infographic (by Monetate) conveys, mobile and social have added even more data to the already huge collections of POS transactions and e-commerce weblogs. Retailers know that mining that data will help them make better decisions that lead to increased sales, better customer service, and ultimately a successful retail business.


  • Save BIG on Storage - with Oracle Advanced Compression

    - by [email protected]
    Recently, we published a podcast revealing just how much Oracle benefits from its internal use of Oracle Database 11g and Advanced Compression. With hundreds of TB and millions of dollars saved, Oracle Advanced Compression is dramatically reducing storage costs and substantially improving efficiency across the company. Now, here's your chance: meet the experts, have your questions answered by them and immediately start using your storage more efficiently. On April 14th, join me for a live Webcast with Oracle's Tim Shetler, Vice President of Product Management, and Bill Hodak, Principal Product Manager, to learn just how Oracle Advanced Compression can:
    - Reduce disk space requirements for all types of data
    - Improve query and storage performance
    - Lower storage costs throughout the datacenter
    Register here!

