Search Results

Search found 11409 results on 457 pages for 'large teams'.

  • How can I recover a zip password using CUDA (GPU)?

    - by marc
    How can I recover a zip password on Linux using CUDA (GPU)? For the past two days I have been trying "fcrackzip", but it's too slow. A few months back I saw an application that could use the GPU via CUDA and get a large performance boost compared to the CPU. If brute-forcing with CUDA is not possible, please tell me what the best application for performing a dictionary attack is, and where I can find the best (largest) dictionary. Regards
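
    For reference, a plain CPU-bound dictionary attack with fcrackzip might look like the sketch below; the archive name and the wordlist (rockyou.txt is a commonly used one) are placeholders.

        # dictionary mode (-D), candidates read from a wordlist (-p),
        # hits verified by actually unzipping (-u) to weed out false positives
        fcrackzip -u -D -p rockyou.txt secret.zip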

  • Move to next selection in Word 2007

    - by Arthur Ward
    When I have multiple selections in a Word 2007 document, such as after selecting all instances of a style, how can I move from one selection to the next? When you issue the select all instances command, the view snaps to the next instance of the style, but how can I find the other instances? Any cursor key will unselect everything. Using the mouse to scroll through the document is not feasible for large documents, plus the selection could be a single character -- very easy to miss!

  • What can I do to prevent BIND from outputting these logs?

    - by lacrosse1991
    I recently noticed that BIND has been producing a large number of log messages in /var/syslog relating to one particular server (ezdns). What can I do to prevent these messages from appearing, and why would this one server be the only one causing BIND to produce them? I've searched around Google and found a few different ways of hiding the messages, but I would like to know why this one server is so troublesome.
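
    If the noise turns out to belong to a standard logging category (lame-servers is a frequent offender), one common approach is to route that category to the null channel in named.conf; a minimal sketch, assuming the messages really do fall under lame-servers:

        logging {
            // discard this noisy category instead of passing it to syslog
            category lame-servers { null; };
        };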

  • Allow only certain files to be exposed to the web on Lighttpd?

    - by darkAsPitch
    I just installed it on my Linux desktop, and I only want 1 or 2 files accessible to the outside world. Everything else should only be accessible via http://localhost/ for various privacy/security reasons. It is just a test server; I don't want just anybody accessing my large batch files. How would you go about allowing only certain select files to be reachable from the internet, while making everything else available only via http://localhost/?
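
    A minimal lighttpd.conf sketch of one way to do it, assuming mod_access is loaded and public.html stands in for the one file to expose (the name is a placeholder):

        # non-local clients may fetch only the whitelisted file
        $HTTP["remoteip"] != "127.0.0.1" {
            $HTTP["url"] !~ "^/public\.html$" {
                url.access-deny = ( "" )
            }
        }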

  • Is there anything better than Microsoft Project?

    - by GuruAbyss
    I'll soon be knee-deep in a very large project, and I'm looking into project management software. I need users' opinions on desktop-based (not web-based) solutions that are equal to or better than MS Project. It can be open source or closed source. Thank you all in advance for your insight and opinions!

  • Page cache flushing behavior under heavy append load

    - by Bryce
    I'm trying to understand the behavior of the Linux pdflush daemon when:

      - The page cache is initially pretty much empty
      - There is a large amount of free memory
      - The system starts undergoing heavy write load

    My understanding right now is that the vm.dirty_ratio and vm.dirty_background_ratio settings that control page cache flushing behavior are with respect to the present size of the page cache, which means that my writes will flush earlier than they would if the page cache was pre-populated (even with dummy data from some random file), and thus throughput will be lower. Is this accurate?
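
    For what it's worth, the knobs in question can be inspected (and temporarily adjusted) with sysctl:

        # current thresholds, expressed as percentages
        sysctl vm.dirty_background_ratio vm.dirty_ratio
        # on newer kernels, absolute-byte variants override the ratios when set
        sysctl vm.dirty_background_bytes vm.dirty_bytes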

  • Where on disk is space allocated for new files inside an LVM LV with an ext4 file system?

    - by Jost
    I run a multi-disk server with LVM2. Several large disks serve as LVM2 physical volumes for one volume group, containing one logical volume formatted with ext4. Nothing fancy, just your standard linear setup.

    Recently an additional, very small disk was added as a physical volume to that volume group, and I expanded both the logical volume and the ext4 file system therein onto that disk. This LV is used to store incremental backups using rsync and is only about 30% full; there have rarely been any files deleted from it, only incremental writes.

    Now this new HDD I added to the pre-existing volume group has unexpectedly died on me, and the volume group won't come up because it is missing one physical volume. As fate would have it, this WAS the "in the event of catastrophic failure on the primary server" backup, the event happened, the boss is not happy, so this kinda has to work...

    According to this (Part 3): http://www.novell.com/coolsolutions/appnote/19386.html it is possible to trick LVM into starting anyway by creating a new PV with identical metadata to the failed disk, which will make the volume accessible but of course leave giant holes in the file system. I haven't tried it yet, because it involves repairing (writing to) the file system, which eliminates the possibility of trying other things if it fails.

    Now my questions are:

      - How does this setup actually allocate disk space for new data? Is it allocated linearly from beginning to end of the PVs, in the order they were added to the VG? Or is it striped somehow in order to increase performance / balance load?
      - Since this defective disk was added only later to an existing LVM2 VG and LV containing a half-empty ext4, what are the chances that there was never any data written to the defective disk? In other words: what are the chances of recovering all my data, even without the defective disk, by just starting the volume group as-is? Am I about to spend $1500 on having 250GB of empty space recovered when I send the defective disk in for repair?
      - Is there a way to check without mounting the file system and opening the files, hoping they contain something other than zeros? (Comparing the addresses of used data blocks inside ext4 to the address ranges that were on the missing PV, something like that, preferably easy to automate.)

    I know bitwise-copying the entire LV into an image file before trying to repair the ext4 would probably be a good idea, but since this LV is very large and I just suffered major file system failure on several systems, it is probably a luxury I don't have... Any suggestions?
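
    As a starting point for the "where did the extents land" question, LVM can report the logical-to-physical extent mapping without touching the data; a read-only sketch (the device and VG names are placeholders):

        # show which logical extents are mapped onto each physical volume
        pvdisplay --maps /dev/sdX
        # list LV segments together with the devices backing them
        lvs -o +seg_start_pe,seg_size,devices myvg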

  • How to copy a remote machine's text to the local machine's clipboard through SSH?

    - by recluze
    I work on a remote machine through SSH. I have a very large text file there (approx. 500 lines) which I usually need to modify, then copy the contents of that file and paste them into my local browser. The way I usually do this is cat filename and then select/copy the SSH output. That takes a lot of time. I was wondering if there is a utility that will put the remote file's contents in my local clipboard.
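
    A minimal sketch of one way to do this, assuming a local X session with xclip installed (pbcopy plays the same role on a Mac); the host and file names are placeholders:

        # run cat remotely, pipe the output straight into the local clipboard
        ssh user@remotehost cat filename | xclip -selection clipboard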

  • Hide toolbar buttons in Chrome / Chromium

    - by romant
    I'm a heavy keyboard user, and I never hit the back/forward/refresh or favourites buttons in my browsers. Within Safari, I can choose which of the buttons appear. In Chrome, I would like to see only the address bar and the page. Is this possible?

  • Load balancing in Tomcat

    - by Alvin
    Hi all, I want to implement load balancing in Tomcat 6.0, so that we can create more than one instance of Tomcat, and when any instance is down, another instance will keep our application running. That way our application will never be down, even when a large number of concurrent requests comes in. But I have no idea how to implement it. Please share your suggestions.
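
    One common pattern is to put Apache httpd with mod_proxy_balancer in front of two Tomcat instances; a minimal sketch, assuming AJP connectors listening on ports 8009/8010, jvmRoute values tomcat1/tomcat2, and an app deployed at /myapp (all of these names are placeholders):

        <Proxy balancer://mycluster>
            # one member per Tomcat instance; route must match each
            # instance's jvmRoute for sticky sessions to work
            BalancerMember ajp://localhost:8009 route=tomcat1
            BalancerMember ajp://localhost:8010 route=tomcat2
        </Proxy>
        ProxyPass /myapp balancer://mycluster/myapp stickysession=JSESSIONID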

  • Connecting a 2560x1440 display to a laptop?

    - by tjollans
    Having read Jeff Atwood's blog post on Korean 27" IPS LCDs, I've been wondering to what extent these are useful in a notebook + large display situation.

    I own a Lenovo Thinkpad Edge E320 with 2nd gen. integrated Intel graphics. According to the spec from Intel, this should support HDMI version 1.4 and, using DisplayPort, resolutions up to 2560x1600. HDMI version 1.4 supports resolutions up to 4096×2160; however, according to c't (German), the HDMI interface used with Intel chips only supports 1920x1200. The same goes for the DVI output: dual-link DVI-D, apparently, is not supported by Intel. It would appear that my laptop cannot digitally drive this kind of resolution.

    Now what about other laptops? According to the c't article above, AMD's integrated graphics chips have the same limitation as Intel's. NVIDIA graphics cards apparently only offer resolutions up to 1920x1200 over HDMI out of the box, but it's possible, when using Linux at least, to trick the driver into enabling higher resolutions. Is this still true? What's the situation on Windows and OS X? I found no information on whether discrete AMD chips support ultra-high resolutions over HDMI.

    Owners of laptops with (Mini) DisplayPort / Thunderbolt won't have any issues with displays this large, but if you're planning to go for a display with dual-link DVI-D input only (like the Korean ones), you're going to need an adapter, which will set you back something like €70-€100 (since the protocols are incompatible).

    The big question mark in this equation is VGA: a lot of laptops have it, and I don't see any reason to think this resolution is not supported by the hardware (an oft-quoted figure appears to be 2048x1536@75Hz, so 2560x1440@60Hz should be possible, right?), but are the drivers likely to cause problems? Perhaps more critically, you'd need a VGA to dual-link DVI-D adapter that converts analog to digital signals. Do these exist? How good are they? How expensive are they? Is there a performance penalty involved?

    Please correct me if I'm wrong on any points. In summary: what are the requirements on a laptop to drive an external LCD at 2560x1440, in particular one that supports dual-link DVI-D only, and what tools and adapters can be used to lower the bar?

  • How do you fix a MySQL “Incorrect key file” error when you can’t repair the table?

    - by Wayne M
    I'm trying to run a rather large query that is supposed to run nightly to populate a table. I'm getting an error saying "Incorrect key file for table '/var/tmp/#sql_201e_0.MYI'; try to repair it", but the storage engine I'm using (whatever the default is, I guess?) doesn't support repairing tables. How do I fix this so I can run the query? We are under pressure to get this table loaded for a client.
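
    Since the path points at an internal temporary table rather than one of your own tables, a frequent culprit is a full (or too small) directory for temporary tables; a quick diagnostic sketch:

        # is the filesystem backing MySQL's temp directory out of space?
        df -h /var/tmp
        # confirm which directory mysqld actually uses for temporary tables
        mysql -e "SHOW VARIABLES LIKE 'tmpdir';"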

  • Bridged NICs but only one active

    - by rockinthesixstring
    I've "bridged" the NICs in my Server 2003 box, but when I do a large file transfer, I see that only one is active at a time. What do I need to do to spread the love across both NICs? I'm hoping to increase transfer speeds from my server to my network. PS: I have a D-Link DGS-1016D switch.

  • Which is faster, copying everything at once or one thing at a time?

    - by fredley
    I am transferring a bunch (20+) of large (1GB+) files to my external flash drive over USB 2.0. Is it quicker to sling them all over at once (starting each transfer without waiting for the previous one to finish, so that multiple transfers are going on simultaneously), or to transfer one file, wait for it to finish, and then start the next? The files are coming from a variety of locations, so I can't do one single big transfer. Are there any other advantages to one way or the other that are worth considering?

  • How to copy files from HDD to HDD with integrity checking

    - by RafaelM
    I am moving data from an almost-dead HDD to an external USB drive using Linux, because for some reason Windows cannot see the data. I want to copy a large amount of data from the HDD to the USB drive with integrity checking. I thought about copying everything over and then checking with md5summer, but this would take a really long time because it's a lot of data and this is not a very powerful PC. What tool can I use to do this on Linux?
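
    One option that folds the copy and the verification into a single tool is rsync; a sketch, assuming the drives are mounted at the paths shown (both are placeholders):

        # copy, preserving attributes; rsync checksums each file in transit
        rsync -avh /mnt/dying-disk/ /mnt/usb-drive/
        # optional second pass: re-read both sides and compare full checksums
        rsync -avhc --dry-run /mnt/dying-disk/ /mnt/usb-drive/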

  • How do I remove the numerous Genius Playlists from my iPad?

    - by spilth
    I’m using the Music app on my iPad 1 with iOS 5.1.1. For some reason, over time my iPad’s playlists have been populated with more and more Genius Playlists, making the app extremely sluggish and my own playlists almost impossible to reach. As of right now there are 16 copies of an “Adult Contemporary Rock Mix”, and multiple copies exist for a large number of genres. How do I get these off my iPad so I can get to the playlists I actually care about?

  • RHEL5: Can't create sparse file bigger than 256GB in tmpfs

    - by John Kugelman
    /var/log/lastlog gets written to when you log in. The size of this file is based on the largest UID in the system: the larger the maximum UID, the larger this file is. Thankfully it's a sparse file, so the size on disk is much smaller than the size ls reports (ls -s reports the size on disk).

    On our system we're authenticating against an Active Directory server, and the UIDs users are assigned end up being really, really large. Like, say, UID 900,000,000 for the first AD user, 900,000,001 for the second, etc. That's strange but should be okay. It results in /var/log/lastlog being huuuuuge, though: once an AD user logs in, lastlog shows up as 280GB. Its real size is still small, thankfully.

    This works fine when /var/log/lastlog is stored on the hard drive on an ext3 filesystem. It breaks, however, if lastlog is stored in a tmpfs filesystem. Then it appears that the max file size for any file on the tmpfs is 256GB, so the sessreg program errors out trying to write to lastlog. Where is this 256GB limit coming from, and how can I increase it?

    As a simple test for creating large sparse files I've been doing:

        dd if=/dev/zero of=sparse-file bs=1 count=1 seek=300GB

    I've tried Googling for "tmpfs max file size", "256GB filesystem limit", "linux max file size", things like that. I haven't been able to find much. The only mention of 256GB I can find is that ext3 filesystems with 2KB blocks are limited to 256GB files. But our hard drives are formatted with 4K blocks, so that doesn't seem to be it; not to mention this is happening in a tmpfs mounted ON TOP of the hard drive, so the ext3 partition shouldn't be a factor.

    This is all happening on a 64-bit Red Hat Enterprise Linux 5.4 system. Interestingly, on my personal development machine, which is a 32-bit Fedora Core 6 box, I can create 300GB+ files in tmpfs filesystems no problem. On the RHEL 5.4 systems it is no go.

  • Debian minimum hard disk footprint

    - by user41072
    Hi, I found this http://serverfault.com/questions/29071/red-hat-server-minimal-install question while searching Google for a Debian minimal install. User shylent wrote that he uses a really basic Debian install, so small that you can count its processes on the fingers of one hand :D. So I'm searching and asking for a starting point to create a basic Linux distro, not from scratch like LFS, but based on Debian, for example. I used debootstrap, but the result is still 150M large.
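
    For a smaller starting point, debootstrap's minbase variant installs only the Priority: required packages plus apt; a sketch (the suite, target directory, and mirror are placeholders):

        # install only required-priority packages plus apt
        debootstrap --variant=minbase stable /srv/debian-min \
            http://deb.debian.org/debian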

  • How can I archive a 30GB file?

    - by Joel Coehoorn
    I have a 30GB zip file containing an archive of digital materials available in the school library that I want to burn to DVD. Of course, 30GB is far too large for a single DVD, and the content is already zipped. I'm open to ideas, but I'm leaning towards suggestions that will help me automatically spread the file over multiple DVDs, including a simple program to stitch it back together again later.
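
    A low-tech option is split from GNU coreutils; a sketch, assuming roughly 4.3GB usable per single-layer DVD (the file names are placeholders):

        # cut the archive into DVD-sized pieces
        split -b 4300m library.zip library.zip.part-
        # later, stitch the pieces back together in order
        cat library.zip.part-* > library.zip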

  • What text file search tool are you using?

    - by user156144
    I am working on legacy projects with thousands of files spanning more than 10 projects. I am wondering what other people are using for searching for a string in text files. I am using Windows, and I typically search across 10,000 files to make sure some code is not called from other places. I've tried numerous text search tools mentioned here, such as Actual Search & Replace, UltraEdit, and Notepad++, but they all take a very long time to search due to the large number of files they have to look into.
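
    As a baseline for comparison, Windows ships with findstr, which can at least show what a raw recursive scan of the tree costs; a sketch (the pattern and file masks are placeholders):

        rem recurse (/s), ignore case (/i), print matching file names only (/m)
        findstr /s /i /m "SomeLegacyFunction" *.cs *.vb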

  • Cannot delete audit logs with sudo

    - by DazSlayer
    I am using auditctl to log all commands run on my Ubuntu system, and I am working on a script that parses the log into a more readable format. Since these logs tend to become very large, I want to delete them periodically. I found that by running

        sudo rm /var/log/audit/*

    I would get

        rm: cannot remove `/var/log/audit/*': No such file or directory

    however, by running

        sudo su
        rm /var/log/audit/*

    the logs would be deleted without any problem. What could be the cause of this?
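
    A likely explanation, for what it's worth: with plain sudo the wildcard is expanded by the calling user's shell, which cannot read /var/log/audit, so rm receives the literal pattern. A sketch of a one-liner that lets a root shell expand the glob instead:

        # the root shell, not the unprivileged caller, expands the wildcard
        sudo sh -c 'rm /var/log/audit/*'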
