Search Results

Search found 9062 results on 363 pages for 'big o'.

Page 73 of 363

  • How to boot Linux from a 16 GB USB flash drive

    - by Chris Harris
    I'm trying to install Linux on a single partition of a USB flash drive that's larger than 4 GB. The first place I went to is http://pendrivelinux.com. I can follow these instructions for installing Xubuntu 9.04 perfectly, but they unfortunately break down when I try to scale them up beyond 4 GB. There are several other tools that do this (unetbootin and usb-creator) which follow a very similar formula.

    I figured out that a big problem of mine was that all of these tools assume the USB drive is formatted as FAT32, which cannot hold a single file larger than 4 GB. This is unfortunate because I want to use just one partition, so that my persistence file, casper-rw, looks like one big partition to the OS once I've booted off of the USB drive.

    I then tried following a myriad of instructions involving formatting the drive as one large ext2 filesystem and using extlinux to create a single bootable ext2 file system. This doesn't work for me, however; after about 20 attempts at verifying and slightly tweaking the formula, I cannot seem to get a "good" bootable ext2 file system built. I'm not entirely sure what's going on, but it seems as though no matter how hard I try, I cannot get the ext2 file system to remain coherent after copying the Linux ISO contents over, copying the MBR, and running extlinux to create the ext bootloader. Every time, after I follow these steps (in any order) and reboot, I get an unbootable USB drive. If I then mount the drive under Linux again, I see a mess of a file system (inodes have clearly been corrupted somewhere along the way). I suspected that the USB drive wasn't being fully flushed, so I tried using the sync and umount commands before rebooting, which didn't change anything.

    I guess I have several possible questions, but let's start with the obvious: is there something I'm missing to create a bootable ext2 USB flash drive that's large (e.g. 16 GB)?
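
    For reference, here is a minimal sketch of the ext2-plus-extlinux recipe being described (the device name, mount points, ISO name and mbr.bin path are assumptions; adjust them for your distribution):

        # /dev/sdX is the USB stick; the syslinux/extlinux package must be installed
        sudo mkfs.ext2 -L LIVE /dev/sdX1
        sudo mkdir -p /mnt/usb /mnt/iso
        sudo mount /dev/sdX1 /mnt/usb
        sudo mount -o loop xubuntu-9.04.iso /mnt/iso
        sudo cp -a /mnt/iso/. /mnt/usb/                      # copy the ISO contents
        # extlinux reads a syslinux-style config; the isolinux.cfg from the ISO
        # usually has to be copied/renamed to extlinux.conf in the same directory
        sudo extlinux --install /mnt/usb/boot
        sudo dd if=/usr/lib/syslinux/mbr.bin of=/dev/sdX bs=440 count=1   # MBR boot code
        sudo umount /mnt/usb && sync                         # flush before unplugging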

    Read the article

  • Change DPI setting in Windows 8.1 for the Logon Screen

    - by jmc302005
    How can the DPI setting be changed for the logon screen in Windows 8.1? Microsoft has added per-user DPI settings, but this means there is no adjustable DPI setting for the lock/logon screen. You can change the DPI setting to be the same across all displays, and this does affect the icons and font on the lock/logon screen. However, it does not affect any app/program that can run on the lock/logon screen.

    For example, I use a 44" flat-screen TV as the monitor on my desktop. It's big enough for me to sit in my recliner and use my computer, and I use the on-screen keyboard most of the time (I don't want to keep a keyboard next to me). The problem is that with the new DPI setup the on-screen keyboard takes up nearly half the screen, which is too big.

    I tried looking through the registry to see if I could find a setting for it. In the key HKEY_USERS\.DEFAULT\Control Panel\Desktop there is a string value named LogicalDPIOverride with a value of -1. I have a feeling this is where I can fix the issue. I tried changing the value to 0 and to 1 with no change in the result; instead I noticed that after logging out and back in, the -1 value was back in the registry.

    How can I change this default DPI? Can I use the LogPixels string that worked for DPI in Windows 7? I have two screenshots, one of the lock screen and one of the logon screen.

    Read the article

  • How to configure amavisd-new to scan only particular senders/servers?

    - by mailq
    I'd like to know how to configure amavisd-new to scan for spam only for particular clients (IPs, CIDRs or hostnames) or, alternatively, particular sender email domains. I know that it is possible to do this based on the recipient's mail address, but not how to do it based on the sender's mail address. It is even possible to do it on a recipient's IP address with policy banks. But my approach should be independent of the recipient and rely only on the sender.

    What I want to accomplish is to scan only mail originating from Yahoo, Google, Hotmail and the other big senders, so it is easier to configure which senders should be observed than which ones shouldn't. I know this is easier to achieve on the MTA side, but that is not part of the question because I already have a solution on the MTA side; I want to do it in amavisd-new. And it doesn't help to know how to put senders on a whitelist, as that still means the mail goes through all the scanning but then gets a high negative score. The mail shouldn't be scanned at all unless sent by the big players.

    So which parameters in amavisd-new are the right ones to enable scanning only for particular senders?

    Read the article

  • Stream video file in Debian?

    - by Rob
    I've tried ffserver with ffmpeg, I've tried VLC, and I'm not sure what else to try or what I've done wrong. With VLC I've gone through everything I could in the streaming section, but I can't get the stream to actually work:

        +-[ robert@s10 ]--[ ~ ]
        +[#!]¬ vlc --version
        VLC media player 2.0.0 Twoflower (revision 2.0.0-0-g421a4fc)
        VLC version 2.0.0 Twoflower (2.0.0-0-g421a4fc)
        Compiled by buildd on biber.debian.org (Mar 1 2012 22:21:37)
        Compiler: gcc version 4.6.2 (Debian 4.6.2-14)
        This program comes with NO WARRANTY, to the extent permitted by law.
        You may redistribute it under the terms of the GNU General Public License;
        see the file named COPYING for details.
        Written by the VideoLAN team; see the AUTHORS file.

    Looking around, apparently Debian strips the encoders from the package? I want to share some videos I've made with friends on IRC, and it would be easiest if I could just stream them so we can all watch at the same time and critique parts of them in real time. Has anyone done something similar?

        Linux s10 3.2.0-2-686-pae #1 SMP Tue Mar 20 19:48:26 UTC 2012 i686 GNU/Linux

    Basic home network: I am behind a NAT (192.168.1.*) and have dynamic DNS set up. That doesn't really matter too much, I can figure that out, but it's not even working locally. I have a file server set up and could just share the files that way, but I'd rather have everyone watching at the same time (or just about). Not worried about installing new packages or building something from source, that's not a big issue, I just want to get it working. Big plus if I can do it from the command line.
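
    For comparison, a minimal command-line HTTP streaming sketch with VLC (the file name and port are made up, and it assumes the Debian build still ships the MPEG-TS muxer; transcoding options are omitted):

        # serve the file over HTTP as an MPEG-TS stream on port 8080
        cvlc myvideo.mp4 --sout '#standard{access=http,mux=ts,dst=:8080}'
        # each viewer then opens the stream with something like:
        vlc http://your-dyndns-hostname:8080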

    Read the article

  • Performance of Virtual machines on very low end machines

    - by TheLQ
    I am managing a few cheap servers, as my user base isn't large enough to justify much more powerful hardware. I also don't have the money lying around to invest in a server to prepare for a larger user base, so I'm stuck with the old hardware I have. I am toying with the idea of virtualizing all the current OSes, most likely with Xen rather than VMware vSphere Hypervisor (a.k.a. ESXi), since ESXi has too strict an HCL and my hardware is too old. Big reasons for doing so:

    - Ability to upgrade and scale hardware rapidly. This is most likely what I'll be doing as I distribute services, get a bigger server, centralize (electricity bills are horrible), distribute, get a bigger server, etc. Doing this manually by reinstalling the entire OS would be a big pain.
    - Safety from me. I've made many rookie mistakes, like doing lots of risky work on a vital production server. With a VM I can just back up the state, work on my machine, test, and revert if necessary. No worries, and no OS reinstallation.
    - Safety from other factors. As I scale, servers might go down, and a backup VM can instantly be started.
    - Various other reasons.

    However, the limiting factor here is hardware, and I mean very depressing hardware. The current servers run off a Pentium 3 and a Pentium 4, and have 512 MB and 768 MB of RAM respectively (the RAM can be upgraded soon, however). Is the virtualization layer small enough to run itself and a Linux OS effectively? Will performance be acceptable (50% CPU overhead for every operation isn't acceptable)? Does it leave enough RAM for the Linux OS? Is this even feasible?

    Read the article

  • Migrating to Amazon AWS etc: What key statistics/questions should be analyzed and asked?

    - by cerd
    I searched SOverflow pretty extensively for something similar to this set of questions.

    BACKGROUND: We are a growing 'big(ish)' data chemical data company that is outgrowing our lab and our dedicated production workhorses. Make no mistake, we need to do some serious query optimization; our data comes from a certain govt. agency, so the schema and lack of indexing are atrocious. So yes, I know AWS or EC2 is not a silver bullet when faced with spending time to maybe rework your queries/code entirely 'out of the box'. With that said, I would appreciate any input on the following questions:

    - We produce on CentOS and lab on Ubuntu LTS, which I prefer, especially with their growing cloud/AWS integration. If we are MySQL-centric, and our biggest problem is big cartesian products that produce slow queries, should we roll out what we know, after more optimization, with Ubuntu/MySQL plus the added Amazon horsepower? Or is there some merit to NoSQL and the other technologies they offer?
    - What are the key metrics I need to gather from Apache and MySQL, other than things like disk I/O operations, data up/down averages and trends, and special high-usage periods/scenarios? I've reviewed the AWS/EC2 fine print, but want second opinions.
    - What other services aside from the basic web/database have proven valuable to you? I know nothing of Hadoop or many of the other technologies they offer. Echoing my previous question: do you sometimes find it worth it (initially a gamble, aside from basic homework) to dive into a whole new environment and end up finding a way of producing your data/site product more efficiently?
    - Anything I should watch out for in projecting costs, or any other general advice when working with AWS folks, from anyone else whose company is very niche and very, very technical (scientifically, or anybody for that matter)?

    Thanks very much for your input. I think this thread could be valuable to others as well.

    Read the article

  • Need help tuning MySQL and a Linux server

    - by Newtonx
    We have a multi-user application (like MailChimp or Constant Contact). Each of our customers has its own contact list (from 5 to 100,000 contacts). Everything is stored in one BIG database (currently 25 GB). Since we released our product we have accumulated the following data over 5 years:

    - users/customers (200+)
    - contacts (40 million records)
    - campaigns
    - campaign_deliveries (73,843,764 records)
    - campaign_queue (8 million currently)

    As we get more users and the tables grow, our system/web app is getting slower and slower. Some queries take too long to execute.

    SCHEMA

    Table contacts:

        +---------------------+------------------+------+-----+---------+----------------+
        | Field               | Type             | Null | Key | Default | Extra          |
        +---------------------+------------------+------+-----+---------+----------------+
        | contact_id          | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
        | client_id           | int(10) unsigned | YES  |     | NULL    |                |
        | name                | varchar(60)      | YES  |     | NULL    |                |
        | mail                | varchar(60)      | YES  | MUL | NULL    |                |
        | verified            | int(1)           | YES  |     | 0       |                |
        | owner               | int(10) unsigned | NO   | MUL | 0       |                |
        | date_created        | date             | YES  | MUL | NULL    |                |
        | geolocation         | varchar(100)     | YES  |     | NULL    |                |
        | ip                  | varchar(20)      | YES  | MUL | NULL    |                |
        +---------------------+------------------+------+-----+---------+----------------+

    Table campaign_deliveries:

        +---------------+------------------+------+-----+---------+----------------+
        | Field         | Type             | Null | Key | Default | Extra          |
        +---------------+------------------+------+-----+---------+----------------+
        | id            | int(11)          | NO   | PRI | NULL    | auto_increment |
        | newsletter_id | int(10) unsigned | NO   | MUL | 0       |                |
        | contact_id    | int(10) unsigned | NO   | MUL | 0       |                |
        | sent_date     | date             | YES  | MUL | NULL    |                |
        | sent_time     | time             | YES  | MUL | NULL    |                |
        | smtp_server   | varchar(20)      | YES  |     | NULL    |                |
        | owner         | int(5)           | YES  | MUL | NULL    |                |
        | ip            | varchar(20)      | YES  | MUL | NULL    |                |
        +---------------+------------------+------+-----+---------+----------------+

    Table campaign_queue:

        +---------------+------------------+------+-----+---------+----------------+
        | Field         | Type             | Null | Key | Default | Extra          |
        +---------------+------------------+------+-----+---------+----------------+
        | queue_id      | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
        | newsletter_id | int(10) unsigned | NO   | MUL | 0       |                |
        | owner         | int(10) unsigned | NO   | MUL | 0       |                |
        | date_to_send  | date             | YES  |     | NULL    |                |
        | contact_id    | int(11)          | NO   | MUL | NULL    |                |
        | date_created  | date             | YES  |     | NULL    |                |
        +---------------+------------------+------+-----+---------+----------------+

    Slow query log:

        # Query_time: 350  Lock_time: 1  Rows_sent: 1  Rows_examined: 971004
        SELECT COUNT(*) as total FROM contacts
        WHERE (contacts.owner = 70 AND contacts.verified = 1);

        # Query_time: 235  Lock_time: 1  Rows_sent: 1  Rows_examined: 4455209
        SELECT COUNT(*) as total FROM contacts WHERE (contacts.owner = 2);

    How can we optimize this? Queries should take no more than 30 seconds to execute. Can we optimize it and keep all the data in one BIG database, or should we change the app's structure and give each user a separate database? Thanks
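
    As a hedged first step for the two slow counts shown above, a composite index covering the WHERE clause is the usual suspect (the index name is made up; check the result with EXPLAIN, and expect the ALTER to take a while on a 40-million-row table):

        ALTER TABLE contacts ADD INDEX idx_owner_verified (owner, verified);

        EXPLAIN SELECT COUNT(*) AS total
        FROM contacts
        WHERE contacts.owner = 70 AND contacts.verified = 1;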

    Read the article

  • Why have trackballs almost disappeared? [closed]

    - by Gary M. Mugford
    One of the movement sensors in my Microsoft Trackball Explorer has failed, and right now I am using a mouse. Ugggh! I'll go steal one of the various Logitech trackballs spread around the house, but they all have issues. The Trackman has a horrible placement for the scroll wheel. Others have marbles for the thumb rather than a big ball for the fingers, and at least one trackball around here is working without a scroll wheel at all! (The one at the dinner table, for when I dine alone.)

    My question is: why have trackballs fallen into disfavour? It seems to me that trackballs are great for crowded desktops (you know, the ones with keyboards, notes, pens and coffee cups) and for laptops with those hated, overly sensitive touchpads. But right now, it seems to be a choice between that Logitech Trackman and some Kensington models that lack scroll wheels. All I want is a nice big ball to manipulate with the fingers, two buttons on the thumb side, and the scroll wheel between them. Placement of other buttons is completely optional. Is that asking too much?

    Read the article

  • How to compress PDFs in Word 2007?

    - by chobo2
    Hi, I am trying to send my cover letter and resume, but apparently they are too big to send through Craigslist (my computer says the total size is 500 KB), as it has a 600 KB limit (so small; it should be at least a meg):

        Hi there. You recently tried to email Some job Email, an anonymous craigslist
        address. However, your message was too big to be sent through our system.
        Craigslist has a 600KB limit on the messages we'll send. Please reduce the
        size of your mail and try again. Thanks for using craigslist.

    So when I convert my Word 2007 (.docx) files to PDF they become huge; they go from about 32 KB to 320 KB. Is there a way I can either get around Craigslist's limits or compress my PDFs a bit to make it happy? I don't want to send zips and such, since the person who gets it might not even know what to do with them. I'd rather not send .docx either, since I'm not sure the recipient will have Office 2007 or the compatibility pack installed, and I'd rather just send it as a PDF (some places require PDFs anyway). Thanks
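
    If Ghostscript is available (it also runs on Windows as gswin64c), one hedged way to shrink the exported PDF is to re-distill it with downsampled images; the file names here are made up:

        # /ebook targets roughly 150 dpi images, usually a big win over the default export
        gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook \
           -dNOPAUSE -dBATCH -o resume-small.pdf resume.pdf

    Word 2007's own "Save as PDF" dialog also has an "Optimize for: Minimum size (publishing online)" option, which is worth trying first.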

    Read the article

  • Disk IO slow on ESXi, even slower on a VM (freeNAS + iSCSI)

    - by varesa
    I have a server with ESXi 5 and iSCSI-attached network storage (4x 1 TB RAID-Z on FreeNAS 8.0.4). The two machines are connected to each other with gigabit ethernet. The RAID-Z volume is divided into three parts: two zvols, shared with iSCSI, and one directly on top of ZFS, shared with NFS and similar.

    I ssh'd into the FreeNAS box and did some testing on the disks. I used dd to test the third part of the disks (straight on top of ZFS). I copied a 4 GB block (2x the amount of RAM) from /dev/zero to the disk, and the speed was 80 MB/s.

    One of the iSCSI-shared zvols is a datastore for the ESXi host. I did a similar test with time dd there. Since dd there did not report the speed, I divided the amount of data transferred by the time shown by time. The result was around 30-40 MB/s. That's about half of the speed from the FreeNAS host!

    Then I tested the IO on a VM running on the same ESXi host. The VM was a light CentOS 6.0 machine which was not really doing anything else at that time. There were no other VMs running on the server, and the other two "parts" of the disk array were not in use. A similar dd test gave me a result of about 15-20 MB/s. That is again about half of the result one level lower!

    Of course there is some overhead in raid-z - zfs - zvolume - iSCSI - VMFS - VM, but I don't expect it to be that big. I believe there must be something wrong in my system. I have heard about bad performance of FreeNAS's iSCSI; is that it? I have not managed to get any other "big" SAN OS to run on the box (NexentaSTOR, Openfiler). Can you see any obvious problems with my setup?
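
    For what it's worth, a sketch of a more repeatable dd test on the Linux guest (GNU dd; the path is made up). Including the final flush in the timing and dropping caches before the read keeps the numbers comparable across the different layers:

        # write 4 GiB and include the final flush in the timing
        dd if=/dev/zero of=/mnt/test/ddfile bs=1M count=4096 conv=fdatasync
        # drop the page cache (as root), then read it back
        sync && echo 3 > /proc/sys/vm/drop_caches
        dd if=/mnt/test/ddfile of=/dev/null bs=1M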

    Read the article

  • Kernel hacking methodology - how to find out where to hack the Linux kernel

    - by Flavius
    I have a throw-away cheap laptop I'd like to twiddle around with, a ThinkPad SL 500. What bothers me are two LEDs, the one for wireless connectivity and the one for hibernation, which don't light up at all, although they're functional; I've tried them on Windows. So I would like to write a kernel driver for them. Nothing big, it just looks like a good excuse to play around with the kernel.

    My question is: what methodology should I follow, systematically, to find out which devices are responsible for those LEDs (in general, not necessarily specific to my hardware), and which drivers are responsible for the other two LEDs that do work, Bluetooth and the battery indicator? And when I say methodology, I really mean the methodology, step by step, with reasons for each step, like in the answer I gave to someone else over here: What does && mean in void *p = &&abc;

    I am proficient at fgrepping through big code repositories, using static code analysers & co, but I think my lack of hardware knowledge hinders me on this problem.

    PS: I'm using Arch Linux, so it's almost the latest kernel version.
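
    As one concrete starting point (a sketch, not the whole methodology), the LEDs that already have a driver bound show up under sysfs, which also tells you which module registered them; the exact LED names below are examples and vary per machine:

        # list the LED devices the kernel already knows about
        ls /sys/class/leds/
        # thinkpad_acpi typically registers names like tpacpi::power or tpacpi::standby
        echo 1 | sudo tee /sys/class/leds/tpacpi::standby/brightness   # try toggling one
        # see which platform modules are loaded and could own the missing LEDs
        lsmod | grep -i -e thinkpad -e led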

    Read the article

  • Messed up USB stick doesn't show in blkid

    - by Felix
    I was playing around with a USB stick (booting Arch Linux with qemu off of it and trying to perform an installation onto the same stick at the same time; brave, I know, but I was just messing around). Now, after failing to boot and install at the same time, it seems I have sort of messed up my stick. What I think happened is that I used cfdisk to wipe everything on it and create one big partition, but formatting it then failed, so now there's a big partition with no filesystem.

    Just to make it clear: I'm not worried about my stick, I know I can recover it at any point. What I find intriguing is that after plugging the stick into my computer (running Ubuntu), there's no (terminal) way to find out which block device (/dev/sdX) it has been assigned. The only way I could determine that was with GParted, which shows the stick as /dev/sdc. But blkid shows the following:

        /dev/sda1: UUID="12F695CFF695B387" LABEL="System Reserved" TYPE="ntfs"
        /dev/sda2: UUID="A0BAA6EABAA6BC62" TYPE="ntfs"
        /dev/sdb1: UUID="546aec8b-9ad6-4571-b07a-adba63e25820" TYPE="ext4"
        /dev/sdb2: UUID="2a8b82d8-6c6e-4053-a446-bab970d93d7c" TYPE="swap"
        /dev/sdb3: UUID="7cbede7d-c930-4e59-9d1b-01f2d79bd092" TYPE="ext4"

    No trace of /dev/sdc. My question is: if I didn't have a graphical interface (to use GParted), how would I have known which block device is my stick?
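
    A few terminal-side sketches that list block devices whether or not they carry a filesystem signature (blkid only reports devices it can identify):

        sudo fdisk -l            # partition tables of every disk, formatted or not
        cat /proc/partitions     # the kernel's list of block devices and sizes
        dmesg | tail -n 20       # shows the sdX name assigned right after plugging in
        lsblk                    # device tree with sizes (util-linux)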

    Read the article

  • Outlook 2007 font sizes

    - by Flack
    Hello. Something really strange seems to have happened to my Outlook 2007. Everything was working fine for a long time, but at the end of today, all of a sudden, all of the fonts in Outlook are messed up. The font size of mails I write is huge (I am not zoomed in), and the font sizes of the buttons are big too, specifically the "Send", "To" and "Cc" buttons. I tried changing the font sizes through Outlook, but some of the buttons on the "Mail Format" tab in Options are not working, mainly the "Stationery and Fonts" button; I hit it but no window opens.

    This is all happening on my x64 machine. I took a look at my 32-bit machine, which also has Outlook 2007 installed, and everything is OK there. Below is a link to an image comparing the broken, large-font Outlook (top of picture, with big-font buttons) and the normal, working Outlook. The text in the mails I compose is also abnormally large in the broken Outlook.

    Any ideas? This came out of nowhere after a few months of no problems. Thanks.

    Read the article

  • Powershell Copy-Item fails silently

    - by R W
    I have a PowerShell 2.0 script running on Windows Server 2008 R2 64-bit that copies some Hyper-V .vhd files to another server as a 'backup solution'. The script gets a list of the .vhd's to copy, then iterates over that list to copy them using Copy-Item. It also writes some logging info to a file. The files are copied to another server (Windows Server 2003 SP2), into a directory compressed with NTFS compression.

    One of the files isn't copied. It's relatively big, ~68 GB; the others are 20 GB or less. The weird thing is that during the copy process the file appears on the destination server, and the log file generated seems to indicate the file is copied, judging by the difference in the times of the log file entries. I see no error messages in the log file and nothing in the event log of either machine.

    Here's the code that does the copy:

        Get-ChildItem $VMSource *.vhd -Recurse | ForEach-Object {
            $time = Get-Date -Format HH.mm.ss
            Add-Content $logFileName "$time : File Copy ($_) started"
            $fullname = $_.FullName
            Add-Content $logFileName "$time : Copying $fullname to $VMDestination"
            Copy-Item $fullname $VMDestination -Force -ErrorAction SilentlyContinue -ErrorVariable errors
            foreach ($error in $errors) {
                if ($error.Exception -ne $null) {
                    Add-Content $logFileName "`tERROR COPYING FILE : $($error.Exception)"
                }
            }
            $time = Get-Date -Format HH.mm.ss
            Add-Content $logFileName "$time : File Copy ($_) finished"
        }

    I can only think there's some problem with copying a file that big to a compressed directory, maybe? Any ideas?

    Read the article

  • An easily customizable linux distribution using minimal disk space?

    - by Frank
    I'm looking for a Linux distribution that can easily be used to create my own distribution: the same system with some software pre-installed. So basically I should be able to create an ISO which, when installed, gives the base distribution with my desired software already on it. More specifically, I plan on installing MySQL and a bit of my own software, which shouldn't be too big.

    However, this distribution needs to be extremely small in terms of disk space. The distribution, including MySQL, should not exceed 100 MB. It should, of course, still be able to connect to the internet and perform other standard functions. I don't need X or any sort of window manager, and would prefer not to have one since it would increase disk usage.

    Currently I have tried ttylinux and Tiny Core Linux. I've found that ttylinux, while extremely small, has almost nothing, so that MySQL can't even be installed. Tiny Core Linux, on the other hand, is a bit too big. I've found OpenEmbedded and Linux From Scratch, but I would prefer the install and build process to be much easier. What other distribution would you recommend for my purposes? Minimizing disk usage is the most important thing, followed by ease of installing and creating the custom distribution.

    Read the article

  • AVCHD MTS h264 1080p file with choppy playback in Linux

    - by marc
    When I try to play video files from my camera on my Linux computer (Ubuntu 12.04), I get choppy playback. It's completely unusable. The stream looks like this:

        Seems stream 0 codec frame rate differs from container frame rate: 50.00 (50/1) -> 50.00 (50/1)
        Input #0, mpegts, from '00027.MTS':
          Duration: 00:00:38.88, start: 2.884289, bitrate: 16945 kb/s
          Program 1
            Stream #0.0[0x1011]: Video: h264 (High), yuv420p, 1920x1080 [PAR 1:1 DAR 16:9], 50 fps, 50 tbr, 90k tbn, 50 tbc
            Stream #0.1[0x1100]: Audio: ac3, 48000 Hz, stereo, s16, 256 kb/s

    I tried Totem, VLC and mplayer; the result is always the same issue. I sent the same video file to a friend who has Ubuntu 10.04 to test, and he also has the same issue. He has Windows 7 as well, and confirms that on Windows the video plays well. I have an Intel® Core™2 CPU 6300 @ 1.86GHz × 2 with a GF 9600 GT, with the closed NVIDIA drivers. This is not any kind of "big files play slowly from an HDD" issue; I have an SSD drive!

    I spent the last days and nights trying hundreds of commands for ffmpeg, HandBrake and mencoder, but none of them will let me create a file with enough quality. I downloaded a few movies from YouTube in 1080p, and playback worked well without any big pixels or choppiness. I would like the highest possible quality; I will put these files onto a Blu-ray disc, so I don't need to compress them to get a smaller size. I just want smooth playback on my Linux box. On Windows, the same file works well.
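
    One hedged thing to try on a GF 9600 GT with the closed NVIDIA driver is VDPAU hardware decoding, which moves the 1080p50 H.264 work off the old CPU (package names are assumptions for Ubuntu 12.04):

        sudo apt-get install mplayer libvdpau1
        # mplayer with VDPAU output and hardware H.264 decoding
        mplayer -vo vdpau -vc ffh264vdpau 00027.MTS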

    Read the article

  • Reliable appliance for routing IT emergency calls (SIP and ISDN)

    - by chiborg
    We have a fairly big IT installation and our IT staff needs to be reachable 24/7. At the moment we have the following setup for "emergency" calls to our IT staff on our main Asterisk box: an incoming emergency number (connected via a SIP trunk, plus a BRI card in case the SIP trunk goes down). When the number is called during office hours, all the SIP phones of the IT staff are called simultaneously. When the number is called outside office hours, a list of mobile phone numbers is called, one after another, until someone picks up. The list can be changed by the IT staff via a command-line script.

    The setup works well, but the Asterisk box is heavily used in a call center, has experienced some outages and misconfigurations, and each of them has brought down the IT emergency number. So we'd like to put the IT emergency call functionality on a separate device. It does not need to be a big server, it does not even need to be Asterisk; it only has one purpose and should do it reliably, and it should be very low-maintenance. Any suggestions for hardware and software?

    Read the article

  • Clipboard bug in Wordpad in Windows 7 (accidentally pasting large file into application)

    - by frenchglen
    In Win7 I use WordPad, and I really like it. For my needs it's lean and fast, yet has the formatting functionality I'm after when working on my TXT/RTF files on a daily basis. I don't intend to change text editors.

    There's a really bad bug which has ALWAYS plagued me. If you have a large file in the clipboard, like a 238 MB FLAC file, and you accidentally paste it into WordPad for whatever reason, it hangs the application for a VERY long time (like 2 hours; it depends on how big the file is, because it tries to 'handle' it). You either have to close the application and lose any unsaved changes, or go do something else until the item has finished pasting into WordPad (it eventually drops the file's icon into the document, just as it appears in Windows Explorer).

    It's a Windows bug, a WordPad bug. Is there some solution for this? Or is the problem fixed in Windows 8 (if anyone can tell me)? I'm not going to try out Win8 myself merely to answer this question; that's what I'm asking it on SuperUser for! I'm really hoping it's one of those little-yet-big things that they've fixed in Win8 (like removing the 255-character file path limit in Explorer, which is awesome). Thank you for your help, if you have Win8 handy and can test this. :)

    Read the article

  • Which is the fastest way to move 1 petabyte from one storage system to a new one?

    - by marc.riera
    First of all, thanks for reading, and sorry for asking something related to my job. I understand that this is something I should solve by myself, but as you will see it's a bit difficult. A small description:

    Now: 1 PB of storage using a DDN S2A9900 for the OSTs, 4 OSS, 10 GigE network (Lustre 1.6); 100 compute nodes with 2x InfiniBand; 1 InfiniBand switch with 36 ports.

    After: the previous storage plus another 1 PB using a DDN S2A 990 or an LSI E5400 (still to be decided), on Lustre 2.0; 8 OSS, 10 GigE network; the same 100 compute nodes with 2x InfiniBand.

    Previous experience: we transferred 120 TB in less than 3 days using the following command:

        tar -C /old --record-size 2048 -b 2048 -cf - dir | \
            tar -C /new --record-size 2048 -b 2048 -xvf - 2>&1 | tee /tmp/dir.log

    So, the big problem here: using big mathematical equations, I conclude that we are going to need about a month to transfer the data from the old storage to the new one. During this time the researchers would have to step back, and I'm personally not happy with this. I mention that we have InfiniBand connections because I think there may be a chance to use them, with 18 compute nodes (18 * 2 IB = 36 ports) transferring the data from one storage to the other. I'm trying to figure out whether the IB switch will handle all the traffic, but even if it just burns up, this should still go faster than using 10 GigE.

    Also, having Lustre 1.6 and 2.0 agents on the same server works quite well; with this there is no need to go through 1.8 and upgrade the metadata servers in two steps. Any ideas? Many thanks.

    Note 1: Zoredache, we can divide it into two blocks, (A) 600 TB and (B) 400 TB. The idea is to move (A) to the new storage, which is formatted with Lustre 2.0, then reformat the space where (A) was with Lustre 2.0, move (B) to that block, and extend it with the space where (B) was. This way we end up with (A) and (B) on separate filesystems, with 1 PB each.
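
    As a hedged sketch of the fan-out idea (node names, mount points and the 18-node count are assumptions; every node has to mount both the old and the new filesystem):

        # launch one rsync per top-level directory, round-robined across 18 client nodes
        i=0
        for dir in /old/*/; do
            node="node$(( i % 18 + 1 ))"
            ssh "$node" "rsync -aH --whole-file '$dir' '/new/$(basename "$dir")/'" &
            i=$(( i + 1 ))
        done
        wait   # then rerun rsync afterwards to pick up anything that changed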

    Read the article

  • Maximum number of connections in Squid

    - by Isaac
    I have a Squid proxy server that controls all internet traffic for my network. I need a way to stop users from downloading big files (say 50 MB) on my network. I banned some well-known ports (e.g. torrent), but some downloads are still possible over the HTTP port, and obviously I cannot ban port 80! A simple solution is limiting the maximum number of simultaneous connections for each IP (e.g. 3 connections). It's possible in Squid with this config:

        acl ACCOUNTSDEPT src 192.168.5.0/24
        acl limitusercon maxconn 3
        http_access deny ACCOUNTSDEPT limitusercon

    But this solution has a really bad impact on web browsing, because any smart browser fetches different parts of a website over several simultaneous connections to speed up browsing. With a cap on the number of connections, the browser will fail to get some parts, and the website will be shown partially, with some parts/images/frames missing.

    So, can we limit the maximum number of persistent connections instead? I think this policy would work: cap the number of connections that stay alive for more than 10 seconds, but leave the number of simultaneous connections for each IP unlimited. How can we implement this policy in Squid? With which config?

    UPDATE: artifex and Tom Newton offered a bandwidth-limiting approach to fight downloaders. But bandwidth limiting in Squid has a shortcoming: it's static and cannot change dynamically, so a person has a limited bandwidth no matter how many people are using the internet (maybe nobody!). Also, this solution doesn't stop people from downloading; they can still download, just at a lower speed. But if we find a way to terminate persistent connections (or any connection that stays alive longer than a specific time), downloading big files will be almost impossible (there is always some way!).
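
    For the stated goal (blocking big downloads rather than counting connections), a hedged alternative is Squid's reply_body_max_size directive, which refuses any response body over a given size; the exact syntax differs between Squid versions, so check the documentation for yours:

        # squid.conf, Squid 3.x style: refuse response bodies larger than 50 MB
        reply_body_max_size 50 MB
        # Squid 2.x used a bytes + allow/deny form, roughly:
        # reply_body_max_size 52428800 allow all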

    Read the article

  • PEAP validating a secondary domain suffix

    - by sam
    The title is probably a little bit confusing, so let me explain the situation. Our company wants to implement a corporate wireless LAN with PEAP authentication. Unfortunately, someone made a big mistake in our AD design 10 years ago: the domain name we are using, "company.ch", is not owned by the company but by someone else, so it is not possible to issue a public SSL certificate for the RADIUS server. Our AD is too big to rename. We already thought about using our private PKI and rolling out the CA certificate via GPO, but that would only cover our corporate-managed clients, not BYOD (smartphones, tablets, laptops...).

    Is there a way to add a secondary domain name like "company2.ch", issue a public certificate for it, join the RADIUS server to that secondary domain as well, and configure that secondary DNS suffix via DHCP for all the client pools? Or is there another way, for example a new RADIUS server with its own domain, company2.ch, connected with some kind of trust to the company.ch domain? Sorry, I'm not a client/server guy; hopefully you get my drift!

    Read the article

  • Best practices for thin-provisioning Linux servers (on VMware)

    - by nbr
    I have a setup of about 20 Linux machines, each with about 30-150 gigabytes of customer data. The amount of data will probably grow significantly faster on some machines than on others. These are virtual machines on a VMware vSphere cluster, and the disk images are stored on a SAN. I'm trying to find a solution that uses disk space sparingly, while still allowing easy growth of individual machines.

    In theory, I would just create big disks for each machine and use thin provisioning; each disk would grow as needed. However, it seems that a 500 GB ext3 filesystem with only 50 GB of data and quite a low number of writes still easily grows the disk image to e.g. 250 GB over time. Or maybe I'm doing something wrong here? (I was surprised how little I found on the subject with Google. BTW, there's not even a thin-provisioning tag on serverfault.com.)

    Currently I'm planning to create big, thin-provisioned disks, but with a small LVM volume on them. For example: a 100 GB volume on a 500 GB disk. That way I could more easily grow the LVM volume and the filesystem size as needed, even online. Now for the actual question: are there better ways to do this (that is, to grow the data size as needed without downtime)? Possible solutions include:

    - using a thin-provisioning-friendly filesystem that tries to occupy the same spots over and over again, thus not growing the image size
    - finding an easy method of reclaiming free space on the partition (re-thinning?)
    - something else?

    A bonus question: if I go with my current plan, would you recommend creating partitions on the disks (pvcreate /dev/sdX1 vs pvcreate /dev/sdX)? I think it's against convention to use raw disks without partitions, but it would make it a bit easier to grow the disks, if that is ever needed. This is all just a matter of taste, right?
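
    A rough sketch of the online-grow path that plan relies on (volume-group and LV names are made up; online growth works for ext3/ext4 with resize2fs, but verify on your kernel before trusting it with customer data):

        # after growing the virtual disk in vSphere, make LVM see the new size
        pvresize /dev/sdb              # or /dev/sdb1 if the PV sits on a partition
        # grow the logical volume and the filesystem, both online
        lvextend -L +50G /dev/vg_data/lv_customer
        resize2fs /dev/vg_data/lv_customer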

    Read the article

  • BIOS and Windows cannot detect CDROM device

    - by eman
    Hello! I have an HL-DT-ST RW/DVD GCC-4521B DVD-ROM drive and a big problem. A few days ago everything worked fine. Then a friend installed some software, and the drives in WinXP were marked as corrupt. I uninstalled the software, but the drives were still corrupt. The next step I took was running the current software, GCC-4521B101(E).exe. When I ran this software, the drives were automatically updated, but still marked as corrupt (in the Device Manager), even after a reboot.

    And then the big mistake: I tried to run this software once more, but during the update process the machine restarted and boom! The DVD-ROM device doesn't work anymore. The LED doesn't blink, and if I push the eject button, nothing happens. The BIOS and WinXP don't recognize the optical drive either. I then plugged in another optical drive and it worked, but my old drive seems to be dead. So, what happened, and how can I solve this problem? Please help. Regards!

    Read the article

  • Mass-migrating from POP3 to Exchange 2010, how do I copy mailboxes?

    - by Erik P. Skaalerud
    I'm in the process of planning our migration from an internally hosted POP3 server (Dovecot) to Exchange 2010. We're using Outlook 2003 for the moment, but will soon upgrade to Outlook 2010. The big problem is that we have about 50 computers here in our HQ, plus ~30 clients in branch offices (which will get their Exchange migration sometime later). I'm the only IT person, and going around manually setting up Outlook and copying over everyone's PST contents is not an option I'm looking for. Some users have set Outlook to keep messages for X number of days on the POP3 server; others have not. Using a POP3 connector to transfer the mail over is not a viable option.

    Here is what I've done so far:

    - created a transform for the Office 2003 administrative installation point
    - created a .PRF file to modify any existing e-mail account to switch over to Exchange (including the RPC-encryption hotfix described in MSKB 2006508)
    - tested both the transform and the PRF; both work
    - created a test OU and GPO containing the Office 2003 installation with the transform applied; this also works

    My big question is: how can I force Outlook to import any existing .PST into the new Exchange mailbox when the user starts Outlook for the first time after the MST/PRF have been applied? Is this possible?

    Read the article

  • Can a USB/IDE/SATA adapter be flaky?

    - by Ward
    I use USB/IDE/SATA converters a lot, and on the two that I have now, I sometimes get errors copying files to drives. It only happens when I'm copying big files to the drive (big can mean as little as 100 MB; I think it happens more often with bigger files, 300 MB or more). Basically the copy will fail and I'll get one or more error messages about "Delayed write failed", but if I disconnect the drive and re-connect it, I'll usually be able to continue. (The file that was being copied will be corrupt, but otherwise the drive is fine.)

    I just noticed a new type of flakiness: the data transfer rate can vary widely. I copied one set of files (5 x 300 MB) and it took 10+ minutes, then I copied another set (approximately the same sizes) and it took less than a minute. I haven't done systematic testing, the other things I'm doing on my laptop at the same time might have some impact, and I haven't cross-checked the two adapters and the 3 hard drives I'm working with to see if there's a pattern. I'm mostly wondering if anyone else has seen anything like this.

    Read the article
