Search Results

Search found 11972 results on 479 pages for 'writing'.


  • How could load average numbers from 'htop' exceed 100% CPU utilization?

    - by Joe Huang
    I use 'htop' to monitor my web server. It has recently been quite loaded, and the load average shows something like this:

        Load average: 3.10 2.56 1.63

    I searched the web about these numbers and found this article: http://blog.scoutapp.com/articles/2009/07/31/understanding-load-averages. It says that with 2 CPUs, a load of 2.0 means 100% CPU utilization. My VPS has two CPUs, so what does 3.1 mean? How could it exceed 100% CPU utilization? And do these numbers mean I should be wary of the load now? The performance seems totally fine, this is a managed VPS, and the hosting company has not sent me any warning about it. During the daytime the load average always shows high numbers like these; here is another snapshot taken while writing:

        Load average: 3.03 2.77 1.97
        Load average: 0.41 1.29 1.60 <---- 5 minutes later

    So I am wondering how much room is left for this site to grow in its current configuration, and what kind of proactive actions I should take in advance. I don't want to wait until the server bursts. Thanks.
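    A load average counts runnable (and, on Linux, uninterruptible) processes rather than CPU time, which is why it can exceed the core count. As a quick illustration, here is a minimal Python sketch, assuming Linux and /proc/loadavg, that normalizes the three figures by the number of cores; anything persistently above 1.0 per core means work is queueing:

        # Minimal sketch, assuming Linux: compare load averages to core count.
        import os

        def load_per_core():
            with open("/proc/loadavg") as f:
                one, five, fifteen = (float(x) for x in f.read().split()[:3])
            cores = os.cpu_count() or 1
            return {"1min": one / cores, "5min": five / cores, "15min": fifteen / cores}

        for window, ratio in load_per_core().items():
            flag = "  <-- more runnable work than cores" if ratio > 1.0 else ""
            print(f"{window}: {ratio:.2f} load per core{flag}")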


  • Any Recommendations for a Web Based Large File Transfer System?

    - by Glen Richards
    I'm looking for a server software product that:

    - Allows my users to share large files with the general public, securely, with 1 or more people (notification via email, optionally with a token that gives them x period of time to download)
    - Allows anyone in the general public to share files with my users, perhaps by invitation
    - Is user-friendly enough that my users can use it without having to bug me as the admin
    - Can be installed on our own server (we don't want shared data sitting on anyone else's server)
    - Is a web-based solution; using some kind of secure comms channel would be good too, e.g. ssh
    - Handles files over 1 GB

    I found the question below; WebDAV does not sound user-friendly enough: http://serverfault.com/questions/86878/recommendations-for-a-secure-and-simple-dropbox-system

    I've done a lot of searching, but I can't get the search terms right. There are too many services that provide this, but I want something we can install on our own server. A last resort would be to roll my own. Any ideas appreciated. Glen

    EDIT (from another poster): Sorry Tom and Jeff, but Glen specifically says that he's looking for a 'product', so given that I specialise in this field, I thought that my expertise in this area may have been of use to him. I don't see how writing his own services is going to be easy for him to maintain going forward (large IT admin overhead) or simple for his users and the general public to work with.


  • Representing server state with a metric

    - by Sal
    I'm using Microsoft's Performance Monitor to dump logs of RAM, CPU, network, and disk usage from multiple servers. I'd like a single metric that captures the state of a given variable reasonably well. For instance, disk usage is pretty stable, so a single reading that says I have 50% remaining disk space gives me an accurate measure for the day. (The servers aren't doing heavy IO writing.)

    The tricky part, however, is monitoring CPU and network usage. The logs currently dump the % CPU usage every ten seconds. If I take a straight average of the numbers, it may not represent reality, as % CPU will be much lower during the night than during the day. (We host websites that sell appliance items.) I'd like to get an average over a span during peak hours (about 5 hours in the day) and present a daily peak-hour metric. Of course, some readings will most likely come in overly spiked (if multiple users hit the server at once) or be of no use (a momentary idle state). Is there a standard distribution/test industries use in these situations?
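    One common approach is a trimmed mean (or a percentile) over the peak window, which discards both the spikes and the idle dips. A hedged Python sketch, where the 9:00-14:00 window and the 5% trim are assumptions to adjust to your own traffic:

        # Sketch: daily peak-hour CPU metric from (timestamp, percent) samples.
        PEAK_START, PEAK_END = 9, 14   # assumed 5-hour peak window

        def peak_hour_metric(samples):
            """samples: iterable of (datetime, cpu_percent) tuples."""
            peak = sorted(cpu for ts, cpu in samples if PEAK_START <= ts.hour < PEAK_END)
            if not peak:
                return None
            k = len(peak) // 20            # trim top and bottom 5% of readings
            trimmed = peak[k:len(peak) - k] or peak
            return sum(trimmed) / len(trimmed)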


  • Download JDK onto a remote server

    - by itsadok
    I want to get the latest JDK onto a server in a remote location. Downloading the JDK from Sun's website requires jumping through all kinds of hoops before you actually get the file. I'm not sure exactly whether they use cookies or my IP address, but simply copying the file URL and trying wget on the server doesn't work. Googling for mirrors of the JDK, I could only find old versions. Right now I'm left with the option of downloading it onto my computer, then uploading it to the server. This feels slow and stupid. Anyone got a better idea?

    EDIT: Thanks for all the replies. Just to clarify: as I'm writing this I'm rsyncing the 78 MB file to my server. It should be done in about an hour, so it's not such a big deal. However, since this is not the first time I'm doing this, I was hoping for a better solution for next time.

    Solution: What I ended up doing was

        sudo aptitude install lynx-cur
        www-browser http://java.sun.com/javase/downloads/

    From there it's mostly using the arrow and enter keys, and answering "Yes" to a lot of lynx security questions (about cookies and certificates). Thanks to resonator.
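    The hoops are usually just cookies, so another option is to replay the browser's cookie header from the server itself. A hedged Python sketch; the URL is a placeholder, and the cookie name shown matched later Oracle downloads and is an assumption (copy the real Cookie header out of your browser's network tools):

        # Sketch: fetch a licence-gated file by replaying the browser's cookies.
        import shutil
        import urllib.request

        url = "https://example.com/path/to/jdk.bin"       # hypothetical URL
        req = urllib.request.Request(url, headers={
            "Cookie": "oraclelicense=accept-securebackup-cookie",  # assumption
            "User-Agent": "Mozilla/5.0",
        })
        with urllib.request.urlopen(req) as resp, open("jdk.bin", "wb") as out:
            shutil.copyfileobj(resp, out)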


  • How to email a photo from the Ubuntu F-Spot application via Gmail?

    - by Norman Ramsey
    My father runs Ubuntu and wants to be able to use the GNOME photo manager, F-Spot, to email photos. However, he must use Gmail as his client because (a) it's the only client he knows how to use and (b) his ISP refuses to reveal his SMTP password. I've got as far as setting up Firefox to use Gmail to handle mailto: links, and I've also configured Firefox as the system default mailer using gnome-default-applications-properties. F-Spot presents a mailto: URL with an attach=file:///tmp/mumble.jpg header.

    So here's the problem: the attachment never shows up. I can't tell whether Firefox is dropping the attachment header, Gmail doesn't support the header, or something else. I've learned that:

    - There's no official header in the mailto: URL RFC that explains how to add an attachment.
    - I can't find documentation on how Firefox handles mailto: URLs that would explain how to tell Firefox I want an attachment.
    - I can't find any documentation for Gmail's URL API that would let me tell Gmail directly to start composing a message with a given file as an attachment.

    I'm perfectly capable of writing a shell script that sits between F-Spot and the browser to massage the URL F-Spot presents into something that will coax Firefox into doing the right thing. But I can't figure out how to persuade Firefox to start composing a Gmail message with a local file attached. Any help would be greatly appreciated.
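    For the wrapper-script route, the first step is pulling the attach= header back out of the URL F-Spot hands over. A minimal Python sketch of just that parsing step (the example URL mirrors the one above; what to do with the extracted path remains the open question):

        # Sketch: split an F-Spot style mailto: URL into recipient and headers.
        from urllib.parse import parse_qs

        def parse_mailto(url):
            rest = url.split(":", 1)[1]            # drop the "mailto" scheme
            to, _, query = rest.partition("?")
            return to, parse_qs(query).get("attach", [])

        to, attachments = parse_mailto("mailto:dad@example.com?attach=file:///tmp/mumble.jpg")
        print(to, attachments)   # dad@example.com ['file:///tmp/mumble.jpg']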


  • SQL Query to update parent record with child record values

    - by Wells
    I need to create a trigger that fires when a child record (Codes) is added, updated or deleted. The trigger stuffs a string of comma-separated Code values from all child records (Codes) into a single field in the parent record (Projects) of the added, updated or deleted child record. I am stuck on writing a correct query to retrieve the Code values from just those child records that are the children of a single parent record.

        -- Create the test tables
        CREATE TABLE projects (
          ProjectId varchar(16) PRIMARY KEY,
          ProjectName varchar(100),
          Codestring nvarchar(100)
        )
        GO
        CREATE TABLE prcodes (
          CodeId varchar(16) PRIMARY KEY,
          Code varchar(4),
          ProjectId varchar(16)
        )
        GO
        -- Add sample data: two projects records, one with 3 child records, the other with 2.
        INSERT INTO projects (ProjectId, ProjectName)
        SELECT '101','Smith' UNION ALL
        SELECT '102','Jones'
        GO
        INSERT INTO prcodes (CodeId, Code, ProjectId)
        SELECT 'A1','Blue', '101' UNION ALL
        SELECT 'A2','Pink', '101' UNION ALL
        SELECT 'A3','Gray', '101' UNION ALL
        SELECT 'A4','Blue', '102' UNION ALL
        SELECT 'A5','Gray', '102'
        GO

    I am stuck on how to create a correct UPDATE query. Can you help fix this query?

        -- Partially working, but stuffs all values, not just the values from the
        -- child (prcodes) records of each parent (projects) record
        UPDATE proj
        SET proj.Codestring = (
          SELECT STUFF((
            SELECT ',' + prc.Code
            FROM projects proj
            INNER JOIN prcodes prc ON proj.ProjectId = prc.ProjectId
            ORDER BY 1 ASC
            FOR XML PATH('')), 1, 1, ''))

    The result I get for the Codestring field in Projects is:

        ProjectId  ProjectName  Codestring
        101        Smith        Blue,Blue,Gray,Gray,Pink
        ...

    But the result I need for the Codestring field in Projects is:

        ProjectId  ProjectName  Codestring
        101        Smith        Blue,Pink,Gray
        ...

    Here is my start on the trigger; the UPDATE query above will be added to it. Can you help me complete the trigger creation query?

        CREATE TRIGGER Update_Codestring ON prcodes
        AFTER INSERT, UPDATE, DELETE
        AS
        WITH CTE AS (
          SELECT ProjectId FROM inserted
          UNION
          SELECT ProjectId FROM deleted
        )


  • How to choose optimal RAID settings on a PE2950

    - by javano
    I have some Dell PowerEdge 2950s with 4x 15k 150 GB Cheetah SAS drives in them. They are going to be VM hosts running ESXi, with CentOS and Windows Server 2k8 guests. Some guests will host IIS servers, others MSSQL servers. I am trying to choose the RAID virtual disk settings and can't decide which are more optimal in this situation.

    Read policy: the options are Read-Ahead, No-Read-Ahead and Adaptive Read-Ahead; the default is Read-Ahead. I will be making large sequential writes initially, writing out blank images for virtual machine hard drives (let's say 30 GB from /dev/zero, for example), so Read-Ahead seems good at first. But within the virtual machines, reads could be random from anywhere within their file systems, as they are IIS and MSSQL servers, so perhaps No-Read-Ahead is a better idea? Adaptive Read-Ahead would then seem a good compromise, but I don't know much about this option; how does it compare in performance to the others?

    Write policy: the options are write-back caching and write-through caching; the default is write-back. Write-through caching is safer than the default write-back caching, but at a performance expense. My thinking here is that in the event of power loss, for example, it seems more likely in my head (this is why I need some clarification!) that damage will occur to a guest VM with write-back caching enabled, so should I favour write-through?

    I have searched around and there is obviously no definitive answer, so I would like to find out what is best for my situation.


  • What is the fastest way to copy all data to a new, larger hard drive?

    - by SUPER user
    I was certain this would have been covered before, but I cannot find an answer amongst all the almost-duplicates that come up; sorry if I've missed something obvious.

    I have a full 320 GB disk inside my machine, a new 1 TB disk to replace it, and a USB 2.0 chassis. The disk holds only data on a single partition, no OS or apps involved, and the old drive will be kept somewhere as a backup (no secure wiping etc.). The simple option would be to put the new disk in the USB chassis, copy the files, then swap the drives. But on USB pen drives, reading is around 4x faster than writing. If the same is true for a USB SATA chassis (is it?), then it would be significantly faster to swap the drives first and read from the old drive over USB, right?

    The other consideration is that copying lots of files is usually slower than copying a single file of equivalent size. Is Windows 7 smart enough to do everything in a single lump like that, or is there specialised software that should be used instead? (Even if SATA-to-SATA copying is faster than involving USB, knowing what to do when it isn't an option is useful information.)

    Summary:

    - Does a USB SATA chassis suffer from a read/write inequality (like a USB pen drive does, but unlike a direct SATA connection)?
    - Can Windows 7 do sequential access? (I can't find confirmation that Robocopy does this.)
    - Or is it necessary to use a bootable CD/USB with something like Clonezilla to achieve sequential copy speeds?


  • How to check DVD integrity at max read speed of DVD writer

    - by ashishsony
    I need to check the integrity of burned DVDs so that I can be sure about my backed-up data. I use DL DVDs for the backup. I used to use VSO Inspector for this, but since the day I switched to DL DVDs, VSO Inspector gives me errors when checking. I think the errors come from the switch of writing layers, which involves some dummy data somewhere. Secondly, it's damned slow at checking.

    I believe a utility that reads all the files (not the disc surface) and reports any that are unreadable would do the job. But it should be quick! Nobody wants to sit through a 3-4 hour disc check after a quick 30-minute data burn! I am looking for such a utility on Windows or Linux. Even scripts (Python, etc.) will do. I just want to be assured that the data is safe. Can someone help me with this? Thanks.
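    Since scripts are acceptable, here is a minimal Python sketch of exactly that: walk the mounted disc, read every file end to end (which runs at whatever speed the drive sustains), and report the unreadable ones. The default mount point is an assumption:

        # Sketch: read every file under a mount point and report read errors.
        import os
        import sys

        def verify_tree(root, chunk=1 << 20):
            bad = []
            for dirpath, _, files in os.walk(root):
                for name in files:
                    path = os.path.join(dirpath, name)
                    try:
                        with open(path, "rb") as f:
                            while f.read(chunk):
                                pass
                    except OSError as err:
                        bad.append(path)
                        print(f"UNREADABLE: {path} ({err})", file=sys.stderr)
            return bad

        bad = verify_tree(sys.argv[1] if len(sys.argv) > 1 else "/media/cdrom")
        print(f"{len(bad)} unreadable file(s)")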


  • Standalone WLST for both WebLogic 8.1 and 9.2?

    - by imiric
    Hi, I'm writing a simple script to facilitate changing JDBC connection URLs in several WebLogic environments, among them both v8.1 and v9.2. I want to create a standalone script, outside of any WebLogic installation, including just wlst.jar/jython.jar/weblogic.jar, that will work on both WL 8.1 and 9.2 (obviously by referencing different MBeans).

    Now, this works OK for WL 8.1: I copy weblogic.jar from the server, and I have managed to get hold of both wlst.jar and jython.jar (it wasn't easy; Oracle doesn't host them anymore). I also need to make sure to run locally under the same JRE as the server (WL 8.1 runs on Java 1.4.2). But if I try to connect to WL 9.2 from this setup, I get a NullPointerException when trying to access any MBean (probably because I'm running on JRE 1.4.2 and WL 9.2 uses 1.5.0).

    I am also unable to create a standalone environment for WL 9.2. If I copy weblogic.jar from 9.2 and run WLST like so:

        java -cp "wlst.jar:jython.jar:weblogic-92.jar" weblogic.WLST

    I get a java.lang.NoClassDefFoundError: weblogic/management/configuration/RepositoryMBean error. I can't find this class in weblogic92/server/lib, but it IS inside weblogic.jar from WL 8.1.

    So I'm really losing my patience here... Is there any way to create a standalone WLST client that can connect to any version of WebLogic (8.1 and 9.2 in the meantime)? I really wouldn't want to have to ssh into the WL environment to run my WLST script... Any ideas/suggestions are welcome. Thanks, Ivan
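    For the script itself, branching inside WLST keeps one script usable against both MBean trees. A hedged WLST (Jython) sketch: connect/cd/ls/disconnect are standard WLST commands, but the endpoint, credentials and the exact MBean paths below are placeholders and assumptions to check against each server version:

        # Sketch: one WLST script, two MBean trees, chosen by a script argument.
        import sys

        connect('weblogic', 'password', 't3://host:7001')   # hypothetical endpoint
        if len(sys.argv) > 1 and sys.argv[1] == '8.1':
            cd('/JDBCConnectionPools')       # assumed 8.1-era configuration path
        else:
            cd('/JDBCSystemResources')       # assumed 9.x-era configuration path
        ls()
        disconnect()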


  • Recover NTFS data from a ZFS pool that was exposed as an iSCSI target

    - by David
    This was me being stupid, and the data is by no means critical; this is a learning experience first and a time saver second.

    I set up a 100 GB iSCSI target (a volume LU) via the bare-bones instructions in napp-it. I then had my Windows 7 machine connect to the iSCSI target, formatted it to NTFS, and tested its performance with some large ISO file transfers. I then unmapped the drive, reconnected to the target, and was forced to format to NTFS again. It was then I realized the files I had transferred existed only on the iSCSI target. I threw a little fit and then went about my business.

    When I was cleaning up my experiment I noticed in this screenshot: http://imgur.com/1xlcu.jpg that my experimental target tank/iSCSI still has a lot of data in it. Assuming my ISOs are still in this pool, how would I go about recovering them? While writing this I tried GetDataBack for NTFS from www.runtime.org; while it found two previous NTFS partitions, there was no data.


  • Testing for disk write

    - by Montecristo
    I'm writing an application that stores lots of images (each <5 MB) on an ext3 filesystem; this is what I have for now. After some searching here on Server Fault I have decided on a directory structure like this:

        000/000/000000001.jpg
        ...
        236/519/236519107.jpg

    This structure will allow me to save up to 1,000,000,000 images, as I'll store at most 1,000 images in each leaf directory. I've created it, and from a theoretical point of view it seems fine to me (though I have no experience with this), but I want to find out what will happen when the directories start filling up with files.

    A question about creating this structure: is it better to create it all in one go (which takes approx 50 minutes on my PC) or to create directories as they are needed? From a developer's point of view I think the first option is better (no extra waiting time for the user), but from a sysadmin's point of view, is this OK?

    I thought I could test as if the filesystem were already under the running application: I'll write a script that saves images as fast as it can, monitoring the following:

    - How much time does it take for an image to be saved when there is no or little space used?
    - How does this change as the space starts to be used up?
    - How much time does it take for an image to be read from a random leaf? Does this change a lot when there are lots of files?

    Does launching this command

        sync; echo 3 | sudo tee /proc/sys/vm/drop_caches

    make any sense at all? Is it the only thing I have to do to get a clean start if I want to start my tests over again? Do you have any suggestions or corrections?
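    A minimal Python sketch of the measuring script, timing a single durable write into the three-level layout described above (the image-number-to-path mapping mirrors the 236/519/236519107.jpg example):

        # Sketch: time one fsync'd image write into the 000/000/xxxxxxxxx.jpg tree.
        import os
        import time

        def save_image(root, n, data):
            leaf = os.path.join(root, f"{n // 1000000:03d}", f"{(n // 1000) % 1000:03d}")
            os.makedirs(leaf, exist_ok=True)
            t0 = time.perf_counter()
            with open(os.path.join(leaf, f"{n:09d}.jpg"), "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())   # include the actual disk write in the timing
            return time.perf_counter() - t0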


  • Backup plan for a Linux webserver in a small business?

    - by radman
    Hi, I am currently in the process of writing a backup plan for the webserver used by my business. I am very new to this area; I have a few ideas about how things should work, but am unsure what tools to use and what sort of restore process is appropriate. I'm looking for something relatively simple; it doesn't have to be 100% paranoid, just enough to give me a reliable backup. Speed is not of the essence, and there is not going to be a live fallback in place. The backup will go onto a single HDD stored onsite (no option for offsite as yet). Backups will take place weekly. I am constrained by both time and money, which is why I'm aiming for a good-enough solution.

    - Is taking an image of the webserver's system drive periodically and using that as the backup appropriate?
    - Should I test that the backups restore correctly every time I perform one?
    - This is a bit broad, but what setup would you use in my place, given the services I am running? Should I add additional machines and split the services?

    Any advice is much appreciated! See below for server details.

    Webserver
    - Platform: Linux (Ubuntu Server)
    - Running: mail server, svn server, MediaWiki, WordPress, Apache webserver
    - Hardware: single 500 GB SATA drive
    - Architecture: single machine behind a router (with firewall), accessible from the internet


  • Constant beeping

    - by Bogi Juul Peterson
    My computer has a problem with beeping. I'll first tell you how it started. One night I decided to watch a cartoon, and I took some water and food with me to have while watching. I fell asleep, and when I woke up I had spilled water all over my keyboard. I didn't react quickly, because it went unnoticed for about an hour and a half. When I noticed, I quickly put my laptop on a radiator to try to dry it.

    When I turned the computer on, there was a constant beeping sound. I don't know what causes it, and I'm still not sure, but I think it has something to do with the USB hub. If I let the computer sit for some hours without any USB connections, it's fine, but when I plug in any kind of USB device, the beeping starts. The laptop keyboard doesn't work, so I have to have some kind of USB device connected to use the computer. The beeping is going on while I'm writing this, and it makes my computer lag so much that keyboard delays range from 1 second to 10 seconds. The beeping is constant, with less than a second between beeps.

    I need this computer because I don't have the money to buy a new one, and I have to use it for homework and games. If anyone has a solution, please help.


  • Optimizing Disk I/O & RAID on Windows SQL Server 2005

    - by David
    I've been monitoring our SQL Server for a while, and have noticed that I/O hits 100% every so often in Task Manager and Perfmon. I have usually been able to correlate this spike with SUSPENDED processes in SQL Server Management Studio when I execute exec sp_who2. The RAID controller is managed by LSI MegaRAID Storage Manager. We have the following setup:

    - System drive (Windows) on RAID 1, with two 280 GB drives
    - SQL on RAID 10 (two mirrored pairs of 280 GB drives, in two spans)

    This is a database that is hammered during the day but is pretty inactive at night. The DB size is currently about 13 GB, and it is used by approximately 200 (and growing) users a day. I have a couple of ideas I'm toying with:

    1. Checking the indexes and reindexing some tables
    2. Adding an additional RAID 1 (with 2 new, smaller HDs) and moving SQL's log data file (LDF) onto the new RAID

    For #2, my question is this: would we really be increasing disk performance (IO) by moving data off the RAID 10 onto a RAID 1? RAID 10 obviously has better performance than RAID 1. Furthermore, SQL must write to the transaction logs before writing to the database. But on the flip side, we'd be reducing the amount of data written to the RAID 10, which is where all of the "meat" is, thereby increasing that RAID's performance for read requests.

    Is there any way to find out what our current limiting factor is (the drives vs. the RAID controller)? If the limiting factor is the drives, then maybe adding the additional RAID 1 makes sense. But if the limiting factor is the controller itself, then I think we're approaching this thing wrong. Finally, are we just wasting our time? Should we instead focus our efforts on #1 (reindexing tables, reducing network latency where possible, etc.)?


  • Performance degrades for more than 2 threads on Xeon X5355

    - by zoolii
    Hi all, I am writing an application using Boost threads, with Boost barriers to synchronize them. I have two machines on which to test the application.

    Machine 1 has a Core 2 Duo (T8300) CPU (Windows XP Professional, 4 GB RAM), where I get the following performance figures:

        Number of threads: 1, TPS: 21
        Number of threads: 2, TPS: 35 (66% improvement)

    Further increases in the number of threads decrease the TPS, but that is understandable, as the machine has only two cores.

    Machine 2 has two quad-core Xeon X5355 CPUs (Windows Server 2003, 4 GB RAM), so it has 8 effective cores:

        Number of threads: 1, TPS: 21
        Number of threads: 2, TPS: 27 (28% improvement)
        Number of threads: 4, TPS: 25
        Number of threads: 8, TPS: 24

    As you can see, performance degrades after 2 threads (even though the machine has 8 cores). If the program had some bottleneck, it should have degraded at 2 threads as well. Any ideas? Explanations? Does the OS play some role in the performance? It seems like the Core 2 Duo (2.4 GHz) scales better than the Xeon X5355 (2.66 GHz), even though the Xeon has the higher clock speed. Thank you. -Zoolii


  • Terrible noises from subwoofer of ACER Aspire 6930 with Realtek sound chip

    - by OneWorld
    After approximately 5-15 minutes of listening to music, my subwoofer begins to make terrible noises; it just "coughs". This began after I had had this computer for 6 months. I have found that I can temporarily fix the problem by "restarting" the audio stream of the application that plays the music, for example by reloading the last.fm page (which reloads the Flash file). Another way to reset the audio playback is switching the speaker configuration shown below in the screenshot.

    According to many posts on the internet, like http://www.tomshardware.co.uk/forum/52918-20-acer-aspire-6935g-speaker-problem:

    - ACER support isn't any help
    - Exchanging hardware doesn't fix the problem
    - Even the later models have this problem

    Turning off the volume of the subwoofer is not an option for me. I still have a warranty (I bought a one-year extension). I have already tried about 15 versions of the Realtek driver with no success. I am not sure, but MAYBE the problem did not occur on the original Windows Vista that shipped with this computer. However, I removed the original Windows for good reasons (English).

    What do you suggest? Did anyone fix this problem? Maybe by writing a script that resets the audio streams every 5 minutes? Or shall I take the trouble of dealing with Acer support until they give me another model? (I would then be without a computer for quite a while, and would spend money on telephone hotlines (1.30 EUR/min)...)

    Here is some additional info, in case it is any help:

    - Windows 7 64-bit (original was Windows Vista Home Premium 32-bit)
    - All specs
    - Audio driver version:


  • How do you do a keyword search in the Services.msc (MMC) window in Windows 7?

    - by Warren P
    When you want to run a service, you have very limited search capabilities in all current Windows versions, as far as I can tell. I usually start Services by typing services.msc into the Start-Run box; on most versions of Windows this works. I know how to click the Name column in the MMC view of Windows services, and if you know the first few characters of a service name, you can sort by name and type the prefix to scroll the list down (to find Windows Search, for example).

    This seems pretty weak to me, so I spent some time searching the interwebs for tools that do a better job of managing services. Usually I have a keyword; I might know "fooWare" is the keyword, and I need to find the (usually badly named) service so I can start and stop it. This is often WAY too hard. The best I could do is NET START from the command line, maybe with a grep added, but that doesn't list every service, only a few of them. The MMC snap-in in Win7 also has an "Export List" button that exports to a CSV text file, which I have used from time to time to export and then search.

    I have thought about writing my own tool, but I'm hoping a better service-manager utility already exists out there that sysadmins use. I'd like a search box in the top right corner, much like the search facility in the Add/Remove Programs dialog in Win7 and Vista. Does such a services utility exist?
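    In the roll-your-own direction, a keyword search over services only takes a few lines of Python; this sketch assumes the third-party psutil package (pip install psutil) on Windows:

        # Sketch: keyword search over installed Windows services via psutil.
        import sys
        import psutil

        def find_services(keyword):
            kw = keyword.lower()
            for svc in psutil.win_service_iter():
                info = svc.as_dict()
                name, display = info["name"], info["display_name"] or ""
                if kw in name.lower() or kw in display.lower():
                    print(f"{name:30} {display:45} {info['status']}")

        find_services(sys.argv[1] if len(sys.argv) > 1 else "fooWare")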


  • Scriptable BitTorrent clients?

    - by James McMahon
    In an effort to further automate all the little computer housekeeping tasks that waste my time, I am looking into BitTorrent clients that can script common tasks. I've done some Googling, and it looks like Transmission might have some such capabilities, but their site wasn't very clear on the details. Things I am looking to do:

    - Prioritize and label torrents based on their trackers
    - Set seed length based on tracker and file size
    - Set additional seed time when a torrent's seed time expires, based on a number of factors, like time spent seeding, remaining disk space and ratio
    - Move torrents to appropriate places post-seeding, based on labels and tracker

    Basically, while I could write Python or Bash scripts for simple actions like moving torrents around, I need a way to talk to the client to find out things like a torrent's seed time, tracker, labels, file size, etc. Is there any client out there that would allow me all or a subset of these actions? I have access to Linux, Mac and Windows and am not tied to any particular torrent client. I am a programmer, so I have no problem writing scripts, but examples of torrent scripting would also be helpful.
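    For what it's worth, Transmission ships a command-line remote (transmission-remote) that scripts can drive, and its daemon also speaks a JSON-RPC protocol. A hedged Python sketch that shells out to list torrents; the host/port shown are the usual defaults, and the output parsing here is approximate:

        # Sketch: list torrents through Transmission's CLI remote.
        import subprocess

        def list_torrents(host="localhost:9091"):
            out = subprocess.run(["transmission-remote", host, "-l"],
                                 capture_output=True, text=True, check=True).stdout
            lines = out.splitlines()
            return lines[1:-1]      # drop the header row and the trailing "Sum:" row

        for line in list_torrents():
            print(line)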


  • OpenSSL tutorial not fully working - can sign but cannot restore the original file

    - by djechelon
    I'm writing, and testing, a little tutorial for my groupmates for an OpenSSL homework assignment. We have a bunch of PDF files; I'm the CA, and each groupmate should send me a signed PDF for me to verify. I've told them to do the following (and tried it myself):

    1. Request and obtain a certificate (I'll skip this part)
    2. Create a MIME message with the PDF file in it:

        makemime -c "text/pdf" -a "Content-Disposition: attachment; filename=Elaborato.pdf" Elaborato.pdf > Elaborato.pdf.msg

    3. Sign it with OpenSSL:

        openssl smime -sign -in Elaborato.pdf.msg -out Elaborato.pdf.p7m -certfile ca.pem -certfile nomegruppo.crt -inkey nomegruppo.key -signer nomegruppo.crt

    4. Verify it:

        openssl smime -verify -in Elaborato.pdf.p7m -out Elaborato-verified.msg -CAfile ca.pem -signer nomegruppo.crt

    5. Extract the attachment:

        munpack Elaborato-verified.msg

    6. View it with Acrobat Reader

    The problem is that even though I get a file whose binary content resembles a PDF, my current Ubuntu PDF viewer doesn't read it. The XXXElaborato.pdf extracted by munpack is a little smaller than the original. What's the problem with this procedure? In theory, they should send me the signed S/MIME message and I should be able to read the PDF within it. Why can't I restore the original content of the PDF file?


  • Can I get all active directory passwords in clear text using reversible encryption?

    - by christian123
    EDIT: Can anybody actually answer the question? Thanks. I don't need an audit trail: I WILL know all the passwords, users can't change them, and I will continue to operate this way. This is not for hacking!

    We recently migrated away from an old and rusty Linux/Samba domain to Active Directory. We had a custom little interface to manage accounts there. It always stored the passwords of all users and all service accounts in cleartext in a secure location (of course, many of you will certainly not think of this as secure, but without real exploits nobody could read it), and password changing was disabled on the Samba domain controller. In addition, no user can ever select his own password; we create them using pwgen. We don't change them every 40 days or so, but only every 2 years, to reward employees for really learning them and NOT writing them down. We need the passwords to, e.g., go into user accounts and modify settings that are too complicated for group policies, or to help users. These might certainly be controversial policies, but I want to continue them on AD.

    For now I save new accounts and their pwgen-generated passwords (pwgen creates nice-sounding random words with good proportions of vowels, consonants and numbers) manually into the old text file that the old scripts used to maintain automatically. How can I get this functionality back in AD?

    I see that there is "reversible encryption" on AD accounts, presumably for challenge-response authentication systems that need the cleartext password stored on the server. Is there a script that displays all these passwords? That would be great. (Again: I trust my DC not to be compromised.) Or can I have a plugin for AD Users & Computers that gets notified of every new password and stores it in a file? On clients that is possible with GINA DLLs; they can be notified about passwords and get the cleartext.


  • Mac OS X: Open up 3 terminals and run different commands in each of them, to set up a development environment

    - by taelor
    I'm a Ruby on Rails web developer, and there is a lot of repetition I go through to start up my development environment. I was wondering if there is any way to remove some of this repetition by writing a script, or using a program (like Quicksilver) or something, to get my work environment going. I know how to use Quicksilver to open up Terminal, and I even have a saved window group to get my 3 or 4 panes open. The next thing I would love to happen automatically is for all of them to go to a certain directory and each run different commands: one starts the local server, another tab starts a background process, another opens TextMate and then starts a console session, and the last one runs an svn (or git) status. Oh, and I would love it to also open Firefox with a few tabs going to a couple of locations. Does anyone have any suggestions on how I could make all this happen with one Quicksilver command, or a double-click on some kind of script on my desktop?
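    One scriptable route is AppleScript via osascript, which can tell Terminal.app to run a command (each "do script" opens a new window). A minimal Python sketch; the project path and the commands are placeholders for your own setup:

        # Sketch: open several Terminal windows, each running its own command.
        import subprocess

        def terminal_window(command):
            script = f'tell application "Terminal" to do script "{command}"'
            subprocess.run(["osascript", "-e", script], check=True)

        for cmd in ["cd ~/myapp && script/server",        # hypothetical commands
                    "cd ~/myapp && script/console",
                    "cd ~/myapp && svn status"]:
            terminal_window(cmd)

        # And a browser with a starting tab:
        subprocess.run(["open", "-a", "Firefox", "http://localhost:3000"])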


  • How does it hurt to use Linux (Ubuntu) as a guest OS for all my tasks?

    - by sauparna
    I have a machine running Windows, whose disk has two partitions: C (50 GB) and D (250 GB). I do research in information retrieval and need to work with a large corpus (more than 50 GB), and in Linux. So if I want to install Linux on the existing system, keeping the Windows installation intact, will it be fine to run it in a virtual machine (say QEMU, VMware, etc.)? An alternative is using Wubi, in which case the Linux installation has to be on drive C. If I then keep a small Linux installation (say 5 GB) on C, with my corpus on D (mounted in Linux), how will that affect the performance of my programs accessing the mounted Windows drive D? Is it feasible to use Linux this way? Which of the above is better, if either of them is a way out at all?

    Note: Since my post in July 2010, I have been using and have tried several ways of maintaining a disk image that I can mount in Linux. I had a 100 GB qcow2 disk and a 100 GB raw disk, both formatted with an ext3 filesystem. I mounted and connected to the qcow2 disk using qemu-nbd. The problem was that every now and then the connection to the disk would get lost and the running programs would throw disk I/O errors. The raw disk would mount and work fine as a loop-mounted device, but when writing data to it, the mount.ntfs program would hog the CPU and the process would take an enormous amount of time. I was in fact running make on a piece of software located on this raw disk, and after a point make was left waiting while mount.ntfs showed 100% CPU usage.


  • Directory tree in a Resource without extraction...

    - by Corelgott
    Hi all, I am looking for a way to store a complete directory, including subdirectories, in an application's resources and use it without having to extract it.

    Details: We would like to use GeckoFX (Gecko as a C# component) in one of our applications. GeckoFX needs the XULRunner and needs to find its folder structure. We also have some other data that I would prefer not to extract to the customer's PC, at least not onto something persistent like a HDD. Getting the complete directory into the resources is not that big a deal: compress it to one file and done. But using it without writing it to disk is something else. I have a strong dislike of temp folders and such things. Would anything like a RAM drive be possible, with some part of the RAM being mounted? Does something like this even exist as a library, or would it only be possible with a device driver? Any thoughts on this? Thanks in advance! Corelgott


  • Create a partition table on a hardware RAID1 drive with [c]fdisk

    - by Lev Levitsky
    My question is: is there a reason for this not to work?

    Details: I have two 500 GB drives, and my motherboard has RAID support, so I created a RAID 1 array and booted from a Linux live medium. I then listed the disks, and apart from the obvious /dev/sda, /dev/sdb, etc. there was /dev/md126, which, I figured, was the mirrored "virtual" drive. Its size was 475 GB; I had seen that the size of the array would be smaller than 500 GB when I was creating it, so no surprise there. I ran cfdisk /dev/md126, created the necessary partitions and chose write. It's been about half an hour now, I think, and it doesn't seem like it's ever going to finish. The only thing about cfdisk in dmesg is that it's "blocked for more than 120 seconds". Doing fdisk -l /dev/md126 in another terminal, I see all three partitions I created and a note that "Partition 1 does not start on a physical sector boundary". The table is lost after a reboot, though. I tried to partition /dev/sda individually, and it worked; the table was written in about a second. The "not on a physical sector boundary" message is there, too.

    EDIT: I tried fdisk on /dev/sda; there were no messages about sector boundaries then. After a reboot, I am able to use mkfs on /dev/md126p1, etc. fdisk shows that /dev/md126 has the same partitions as /dev/sda (but /dev/sdb doesn't have any). But at some point ("writing superblock and filesystem accounting information") mkfs also blocks. Using it on sda1 results in a "partition is used by the system" error. What can be the problem?

    EDIT 2: I booted a freshly updated system from a pendrive and was able to create the partition table and filesystems on /dev/md126 without any apparent problems. Was it an issue with hardware support? My MB is an Asus P9X79.

