Search Results

Search found 22982 results on 920 pages for 'users'.

  • Simulate a DFS share for a user not on domain with a folder in path

    - by user223655
    I have a consultant whose computer is not on the domain but who needs to access various network resources. Adding his computer to the domain is a difficult bureaucratic process (and the domain restrictions would prevent much of his development software from even running), but we can give him credentials for network resources, so he accesses them via NET USE and the like without using DFS.

    There is one piece of software that requires him to have the same hardcoded path as other domain users, but that path is a DFS path he can't map (the software checks the path at runtime and will only run if it matches the registered path; it rejects a conventional machine path in place of the DFS one).

    I was wondering if there is some method to simulate the DFS path without actually using DFS. For example, the path the software needs to see is \\ABC\DFS\software\app.exe, whereas the non-DFS path is \\DEF\Software\app.exe. I could make his hosts file point ABC at DEF, but I'm not sure I can make the DFS "folder" part of the path resolve as well. Are there any methods for this, short of changing AD to allow him to use DFS or adding him to the domain (both of which are politically/technically challenging, sadly)? Thanks, guys.

  • How to prevent an SSD from disappearing from BIOS

    - by Midimatt
    I only recently upgraded my old machine to a new one with a brand-new 60 GB SSD as the boot drive and a 1 TB main drive. Paranoid about completely breaking my SSD, I read up on the issues to watch out for, including making sure AHCI was turned on and TRIM enabled. The PC worked fine for a few weeks, until today. My wife was watching some TV on the machine when it started to act strangely and eventually blue-screened. She rebooted, and the boot manager was missing. When I got home from work I checked the BIOS, and the drive had disappeared.

    I panicked and looked up possible fixes, and discovered a large number of people having problems with drive firmware, especially on OCZ Vertex and Agility drives (mine is an Agility 3). The problems included blue screens followed by missing drives, and one suggested solution was to reset the CMOS and try again. That worked, and now everything seems to be working fine.

    My question is: is there any way to prevent this from happening? Am I missing a setting for my SSD? All of the posts I found were from early to mid-2011, nothing from late 2011 to 2012, so I am wondering if I've missed anything.

    EDIT: Checked my drive's firmware; it is version 2.15, which has had issues reported by users.

  • Transfer of ownership of Windows 7

    - by ziggy
    I am thinking of purchasing a copy of Windows 7 via either eBay or Gumtree, and I am unsure how the product key works. A close friend of mine is warning me against buying from eBay: he says that once the copy has been used, the operating system registers itself on Microsoft's servers using the serial number of the motherboard it was installed on, so once it has been installed on one machine you won't be able to install it on another.

    I am struggling to believe that an operating system can only ever be installed on one machine. Can someone please explain exactly how this works? I can see a lot of used copies being sold on eBay, and when I used the 'Ask a question' option, the majority of the sellers said I should be able to use it.

    If someone buys Windows 7 from a shop, installs it on his PC, but then decides he wants to sell it, can he not do so? Would the person buying it be unable to use it? Does the seller have to somehow unregister it first? What do I need to look out for if buying from eBay? Thanks.

  • How to add a writable folder to the PHP document root on linux

    - by Ron Whites
    We are building an example bash script for using our PHP test-coverage tool on Linux. The development environment is Ubuntu 12.04.1, but we intend the Linux example to work across as many Linux versions as possible without modification. The script requires a variable set to the PHP document root path and, by default, uses a small PHP example source to show the user how our GUI and text report display the covered and uncovered PHP code areas. The script is also intended to be easily altered by the user to automate the test-coverage display of their own PHP code.

    The problem we are having with Ubuntu 12.04 (any Linux?) is that the PHP Apache2 document root is defined in /etc/apache2/sites-available/default as /var/www, and /var/www defaults to "drwxr-xr-x" (read-only except for root). So in order to add our own folder as /var/www/SDTestCoverage, we must make /var/www writable. It seems our script (at least on Ubuntu) will need to:

    1. acquire and save the current /var/www permissions
    2. sudo chmod 777 /var/www (make it writable)
    3. mkdir -p /var/www/SDTestCoverage (create our folder under the document root)
    4. sudo chmod 777 /var/www/SDTestCoverage (make our subfolder writable)
    5. finally, restore the original /var/www permissions

    Thanks, and our questions are:

    1. Is this the standard way (on Ubuntu) to add a writable folder under the PHP document root? (See the sketch below.)
    2. Is this the most general-purpose way to add a writable folder under the PHP document root on other versions of Linux?
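
    A minimal sketch of the steps above, assuming GNU coreutils for stat and the Ubuntu default document root; the paths and folder name are taken from the question:

        #!/bin/bash
        # Create a writable subfolder under the Apache document root,
        # temporarily loosening /var/www and then restoring its original mode.
        DOCROOT=/var/www
        SUBDIR="$DOCROOT/SDTestCoverage"

        orig_mode=$(stat -c '%a' "$DOCROOT")   # 1. save current permissions, e.g. 755
        sudo chmod 777 "$DOCROOT"              # 2. make the document root writable
        mkdir -p "$SUBDIR"                     # 3. create our folder
        sudo chmod 777 "$SUBDIR"               # 4. make the subfolder writable
        sudo chmod "$orig_mode" "$DOCROOT"     # 5. restore the original permissions

    A narrower alternative on Ubuntu is to leave /var/www alone and hand only the new folder to the web-server user (sudo chown www-data: /var/www/SDTestCoverage), which avoids any window where the whole docroot is world-writable.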

  • Export files to remote server using TortoiseSVN

    - by Matt
    I'm using TortoiseSVN to keep revisions of my code. When I commit changes, I take note of which files have changed and upload them to my server using FTP. Here's my workflow:

    1. Edit files on the local computer (e.g. files in C:\Users\Me\web).
    2. Commit changes to the local repository using right-click > TortoiseSVN > SVN Commit.
    3. Open FileZilla (FTP client) and upload the changed files to the remote server.

    I was wondering if there is a way to omit step 3 from my workflow. Basically, I would like the changed files to be uploaded automatically to the remote server whenever I commit a version to the repository.

    Information about my environment:

    - Windows 7 Ultimate x64 with TortoiseSVN x64
    - Notepad++ text editor
    - files edited are PHP, CSS, JS, HTML, etc.
    - the server runs Linux with PHP 5.2 and MySQL
    - FileZilla is used to upload files; I can connect to the server via SSH if that is needed

    Thank you in advance.

  • How to set an executable white list?

    - by izabera
    Under Linux, is it possible to set a whitelist of executables for a certain group of users? I need them to be unable to use, for example, make, gcc, and executables on removable disks. How can this be done?

    Edit: let me explain better. I'm dealing with a high-school IT system: young geeks who (during lessons) want to play, surf the net, and damage the computers however they can. The major step toward fixing this was to remove the system they're familiar with and install Ubuntu on all the computers. That actually works quite well, but recent events proved it is not enough. I want to allow them to execute certain safe programs, like OpenOffice, and to deny any other program, whether it is preinstalled software, something they carry on USB drives, a downloaded program, or a script they write on site. It would be possible to remove the 'x' permission from any file on the PC, but of course that would be impractical, and they would still be able to run anything they download. I think the best solution would be a whitelist of safe programs with everything else denied, but I don't really know how to do it. Any idea is helpful.
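
    Short of a full MAC framework like AppArmor or SELinux, one common partial measure, sketched below with illustrative device names, is to mount every location the students can write to with noexec, so only binaries installed under root-owned paths such as /usr/bin remain runnable. This assumes /home and /tmp live on their own partitions:

        # append noexec mount options for the user-writable areas
        printf '%s\n' \
            '/dev/sda3  /home  ext4  defaults,noexec,nosuid,nodev  0 2' \
            '/dev/sda4  /tmp   ext4  defaults,noexec,nosuid,nodev  0 2' |
            sudo tee -a /etc/fstab
        sudo mount -o remount /home
        sudo mount -o remount /tmp

    Removable media can be covered similarly, depending on how the desktop automounter is configured. This does not stop interpreted code (python script.py still runs), so pair it with removing the compilers and interpreters you mentioned; a true per-executable whitelist needs a MAC framework.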

  • Adventures in Drupal multisite config with mod_rewrite and clean urls

    - by moexu
    The university where I work is planning to offer Drupal hosting to staff and faculty who want a Drupal site. We've set up Drupal multisite with clean URLs, and it's mostly working, except for some weird redirects. If you have two sites where one name is a substring of the other, you'll randomly be redirected to the other site. I tracked the problem to how mod_rewrite does path matching. With a config file like this:

        RewriteCond %{REQUEST_URI} ^/drupal
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

        RewriteCond %{REQUEST_URI} ^/drupaltest
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupaltest/index.php?q=$1 [last,qsappend]

    a request for /drupaltest will match the /drupal line, and all of the links on the /drupaltest page will be rewritten to point to /drupal. If you put the end-of-string character ($) at the end of each RewriteCond, it always matches the correct site and the links are rewritten correctly, but that breaks down as soon as a user logs in, because the query string is appended to the URL and the bare base URL no longer matches.

    You can also fix the problem by ordering the sites in the config file so that the shortest substring always comes last. I suggested storing all of the sites in a table and then querying, sorting, and rewriting the config file every time a Drupal site is requested, so that we could guarantee the order; the system administrator thought that was kludgy and didn't address the root problem. Disabling clean URLs would also fix it, but the users really want them, so I'd prefer to keep them if possible. I think we could also fix it with an .htaccess file in each site to handle the clean-URL rewriting, but that also seems suboptimal, since it would generate a higher load on a server that is intended to host the majority of the university's external-facing web content.

    Is there some magic I can do with mod_rewrite to get this to work? Would another solution be better? Am I doing something the wrong way to begin with?
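
    One sketch worth testing: anchor each condition at a path-segment boundary instead of at end-of-string. In Apache, %{REQUEST_URI} is the URL path only and never includes the query string, so a pattern like the one below should keep matching after login while still refusing to treat /drupaltest as a prefix match for /drupal:

        # match /drupal exactly, or /drupal/<anything>, but never /drupaltest
        RewriteCond %{REQUEST_URI} ^/drupal(/|$)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

    This would also remove the ordering sensitivity entirely, since no site's pattern can match another site's path segment.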

  • debian lenny email server

    - by Dal
    Hi, I am a newbie. I set up Debian Lenny at home with the web and email servers from the default installation. I followed the instructions for Exim, ran dpkg-reconfigure exim4-config, and set it up for mydomainhere.com. I created a one-line message file and attempted to test Exim by running the command exim [email protected] < msgfile. I also tried using exim4 and Exim, but I get the same error: -bash: Exim: command not found. Obviously I am ignorant of how to run and test Exim.

    I also tried to run a PHP file that sends a test mail, with no success. That script is tested and works fine when I send it from my hosting ISP on a different domain, so I know the PHP script is good. The Debian system sits behind a Netgear firewall and uses 192.168.1.x IPs. The web server works great and users can visit my site, but I lack the knowledge to get email working. I'd appreciate it if someone could guide me.
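
    On Debian the binary is exim4, all lowercase (the shell is case-sensitive, which is why Exim returns "command not found"). A minimal test along these lines, with a placeholder address, should show whether mail is being accepted and where it goes:

        # send a one-line test message, verbosely, and watch the main log
        echo "test body" | exim4 -v someone@example.com
        sudo tail -f /var/log/exim4/mainlog

        # inspect the queue and test how an address would be routed
        mailq
        sudo exim4 -bt someone@example.com    # -bt = address-testing mode

    Note that with the "local delivery only" choice in dpkg-reconfigure, exim4 will refuse to send off-host, and many consumer ISPs block outbound port 25, so relaying through your ISP's smarthost is often the next step.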

  • Disaster Recovery Standby Server

    - by user64300
    Hi, I work for a small business with 25 users and 2 servers. One server is the DC, running Windows Server 2003 and Exchange 2003. We want a reliable disaster-recovery strategy for this server without having to spend a lot of money. We take regular backups, but I have been advised that only an identical server will allow them to be restored easily. I'm trying to come up with a solution that doesn't require buying two servers at twice the cost every time we upgrade.

    I'm toying with the idea of upgrading our DC more frequently (say every 3 years) and then keeping the old server as the recovery server, to be used temporarily until we can source a replacement. However, I won't know whether the backups will restore on the old server until I try it! We're planning to upgrade to Server 2008 R2 in the near future, so I'm hoping the backup tools will give me some success in restoring to different hardware (and perhaps I can use Hyper-V if not). So what I am wondering is whether it is a good idea to use old hardware as a disaster-recovery strategy (provided we test it regularly, obviously!).

  • Is my "Generic" USB Flash Drive broken?

    - by Jesse J.
    So here is the situation. I consider myself technologically knowledgeable about many things (I love to code, whether it's websites, C#, C++, and so on). However: my wife, on behalf of our 2 toddlers, bought me a "Generic" 128 GB USB flash drive for Father's Day. I thought it was awesome at first... wrong! Nothing but problems with it: 3-4 MB/s maximum transfer speed, for a start, but I could bear that. Then, when I went to reformat my computer, I transferred my game save files over to the stick, and the stick managed to become corrupted. A simple format wouldn't fix it either; it's screwed.

    After manually changing the drive letter to X for troubleshooting, I ran "chkdsk X: /X /F /R" with administrator rights; after a long session it finally completed with no errors (I had to delete the log), and I recovered the files. However, when I now transfer anything to or from the stick, it copies a couple of KB and then freezes. Windows 7 says:

        Name: From: Folder (X:\File\Location)
        To: Folder (C:\Users\Username\Desktop)
        Items Remaining: 0 (0 bytes)
        Speed: 0 bytes/second

    And it stays that way forever... and ever. It transferred 3 files at most and then stopped. This is a new USB stick bought from a "high"-reputation company on eBay. Is the USB stick screwed?

  • printer assignments for windows xp workstations within an active directory environment

    - by another_netadmin
    I'm using the following script to remove any old networked printers from machines and then assign the proper ones, making one of them the default. The script is assigned to the OU the workstations reside in and uses Group Policy loopback, so all users who log in get the appropriate printers mapped for them. I tried to use the new Print Management console in Windows Server 2003 R2, but when assigning the default that way I get an error that the printer doesn't exist, so I'm back to using the script.

    One flaw I'm noticing is that it won't remove printers that happen to be mapped from an RDP session (we don't see this everywhere, but it occurs in a few locations). Is there any way to enumerate all RDP printers and remove them, similar to how I'm enumerating and removing networked printers?

        '
        ' Printers.vbs - Windows Logon Script.
        '
        RemovePrinters
        AddPrinters

        Sub RemovePrinters()
            On Error Resume Next
            Dim strPrinter
            Set objNetwork = WScript.CreateObject("WScript.Network")
            Set colPrinters = objNetwork.EnumPrinterConnections
            For i = 0 To colPrinters.Count - 1 Step 2
                strPrinter = CStr(colPrinters.Item(i + 1))
                If Not InStr(strPrinter, "\\") = 0 Then
                    objNetwork.RemovePrinterConnection strPrinter, True, True
                End If
            Next
        End Sub

        Sub AddPrinters()
            On Error GoTo 0
            Set objNetwork = CreateObject("WScript.Network")
            objNetwork.AddWindowsPrinterConnection "\\printers1\JH120-DELL5310"
            objNetwork.SetDefaultPrinter "\\printers1\JH120-DELL5310"
        End Sub

  • What is the risk of introducing non standard image machines to a corporate environment

    - by Troy Hunt
    I'm after some feedback from those in the managed-desktop or network-security space on the risks of introducing machines that are not built from a standard desktop image into a large corporate environment. The context: the standard corporate image (32-bit Windows XP) in a large multinational is not suitable for a particular segment of users. In short, I'm looking at what hurdles we might face in proposing the introduction of machines which are built and maintained by a handful of software developers and not based on the corporate desktop image (the proposal is 64-bit Windows 7). I suspect the barriers are primarily around virus-definition updates, the rollout of service packs and patches, and the compatibility of existing applications with the newer OS.

    In terms of viruses and software updates: if the machines used common virus-protection software with automated updates, and Windows Update for service packs and patches, would there still be a viable risk to the corporate environment? For that matter, are large corporate environments normally vulnerable to the introduction of a machine not built from a standard image? I'm trying to get my head around how real the risk of infection and other adverse events is when such machines are plugged into the network. There are multiple scenarios beyond the example above where this might happen (e.g. a vendor plugging in a machine for internet access during a presentation). Would a large corporate network normally be sufficiently hardened against such innocuous activity? I appreciate the theory behind policies such as standard desktop images; I'm interested in the actual, practical risk, and in how much a network should be protected by means other than what is managed on individual PCs.

  • Exchange Full Access issue

    - by Benjamin Jones
    I was just hired as a system admin for a small company that uses Exchange 2010 for its mail server. I've never had a permission issue like this with Exchange, because I previously worked for a larger firm with less responsibility, and the old system admin is long gone, so I can't ask him what he did.

    The issue: right now, anyone can open anyone else's mailbox and view the mail in it. You might say this is disabled by default and that users must be granted Full Access, and you would be right, but the old admin apparently didn't know what he was doing. User A can currently open user B's mailbox without ever being granted permission.

    Here is what I found: every user in the EMC has the Exchange Servers group listed under Full Access Permission; the Exchange Servers group has Domain Users as a member; and Domain Users contains every user. So my guess is that this is why everyone can access any mailbox. The good news is that the company is small (35 people) and not computer-savvy, so hopefully no one has figured out they can open anyone's mailbox (from what I can tell, no one has).

    Next, for my own mailbox in the EMC, I removed the Exchange Servers group from Full Access Permissions and granted access to my own user, making sure my user was a member of the Exchange Servers group. I then went to our OWA site, and now I don't have permission to my own mailbox. I put everything back the way it was, and now I'm stuck. Any help? I would have thought that granting a single user in the Exchange Servers group Full Access to a mailbox would let them open it, but I guess I am wrong.

  • pam_ldap.so before pam_unix.so? Is it ever possible?

    - by user1075993
    We have a couple of servers with PAM + LDAP. The configuration is standard (see http://arthurdejong.org/nss-pam-ldapd/setup or http://wiki.debian.org/LDAP/PAM). For example, /etc/pam.d/common-auth contains:

        auth sufficient pam_unix.so nullok_secure
        auth requisite pam_succeed_if.so uid >= 1000 quiet
        auth sufficient pam_ldap.so use_first_pass
        auth required pam_deny.so

    And, of course, it works for both LDAP and local users. But every login goes to pam_unix.so first, fails, and only then tries pam_ldap.so successfully. As a result, we get the well-known failure message for every single LDAP user login:

        pam_unix(<some_service>:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=<some_host> user=<some_user>

    I get up to 60,000 such log messages per day, and I want to change the configuration so that PAM tries LDAP authentication first and only falls back to pam_unix.so if that fails (I think it could also improve the server's I/O performance). But if I change common-auth to the following:

        auth sufficient pam_ldap.so use_first_pass
        auth sufficient pam_unix.so nullok_secure
        auth required pam_deny.so

    then I simply can't log in anymore with a local (non-LDAP) user, e.g. via SSH. Does somebody know the right configuration? Why do Debian and nss-pam-ldapd put pam_unix.so first by default? Is there really no way to change it? Thank you in advance.

    P.S. I don't want to disable the logs; I want LDAP authentication to come first.
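
    One plausible culprit, offered as a sketch to test rather than a verified fix: use_first_pass tells pam_ldap to reuse a password collected by an earlier module, and once pam_ldap is first in the stack there is no earlier module, so it fails before ever prompting. Letting the first module prompt and the second one reuse the password might behave better:

        auth sufficient pam_ldap.so
        auth sufficient pam_unix.so nullok_secure use_first_pass
        auth required pam_deny.so

    Local accounts would then incur a (usually fast) failed LDAP lookup per login, and the log noise moves from pam_unix to pam_ldap/nslcd. Test this from a console session you keep open, since a mistake in common-auth can lock out SSH entirely.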

  • Apache Virtual Host Issue

    - by Nik
    I think I hate Apache now, but on with the issue. It might be a configuration error on my end, or just my inability to see what's right in front of me, but I'm trying to configure a subdomain in Apache, and no matter what, it always redirects the subdomain to the web root of the main domain. My configuration is posted below (and yes, the domain-name information was purposefully modified):

        <VirtualHost *>
            DocumentRoot /var/www/root/
            ServerName example.com
            <Directory /var/www/root/>
                allow from all
                Options +Indexes
            </Directory>
        </VirtualHost>

        <Directory /usr/share/squirrelmail>
            Options Indexes FollowSymLinks
            <IfModule mod_php5.c>
                php_flag register_globals off
            </IfModule>
            <IfModule mod_dir.c>
                DirectoryIndex index.php
            </IfModule>
            # access to configtest is limited by default to prevent information leak
            <Files configtest.php>
                order deny,allow
                deny from all
                allow from 127.0.0.1
            </Files>
        </Directory>

        # users will prefer a simple URL like http://webmail.example.com
        <VirtualHost *>
            DocumentRoot /usr/share/squirrelmail/
            ServerName squirrelmail.example.com
        </VirtualHost>
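
    The symptom (every request landing in the first vhost's docroot) is exactly what Apache 2.2 does when name-based virtual hosting isn't switched on: without a NameVirtualHost directive matching the <VirtualHost> address, only the first block is ever used. A sketch of the check, assuming a Debian-style layout:

        # see how Apache is actually mapping vhosts
        apache2ctl -S                          # (httpd -S on non-Debian layouts)
        grep -ri NameVirtualHost /etc/apache2/

        # if nothing is declared, add this once (e.g. in ports.conf) and reload:
        #   NameVirtualHost *
        sudo apache2ctl graceful

    Also make sure the request really carries Host: squirrelmail.example.com (via DNS or a hosts entry), since name-based vhosts select on the Host header.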

  • Giving the root user priority to maintain Debian (while server collapsing under heavy load)

    - by Saix
    Is there any way to set up Debian to prioritize root activity (or any specific activity) over everything else? Several times a year something goes wrong (usually a human overstressing Apache/MySQL) and the system becomes unresponsive under a load of around 200 (on an 8-core CPU). I know there are limits that kill PHP scripts after a time, but that's not the answer, because the limit has to be at least 45 minutes long. The problem is that by the time I manage to log in via SSH and restart Apache/MySQL under this stress, it has nearly hit those 45 minutes anyway. A hardware restart usually means fsck runs at boot on all hard drives, since the box usually hasn't been restarted in a long time. I was told disabling fsck is a really bad idea, but then again, it takes more than an hour to complete.

    What is the fastest way to restart Apache/MySQL? Is there any way to give SSH users, or the root user, higher priority, so that logging in and completing these restart (or rather stop) commands doesn't take so long? Using nice on Apache/MySQL comes to mind, but I can't risk limiting those two vital apps 24/7, or could I? I'm a little scared that other system processes (any backup process, swap if any, etc.) would slow the pages down too much. It's a pretty heavy PHP framework with 20k visits a day, so it needs every hardware and software resource available. I can't throttle it the whole time, just at the points when the system becomes unresponsive, so I can maintain it.
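
    One approach that only bites under contention, sketched here for a cgroup-v1 kernel of the Debian 6/7 era (paths and the group name are illustrative, and the mount steps can be skipped if your distro already mounts cgroups): give the web stack a CPU weight below the default 1024. When the CPU is idle the stack still gets everything, but during a pile-up the scheduler keeps a slice free for everything else, including your SSH session.

        # mount the cgroup hierarchy (skip if already mounted) and create a group
        sudo mount -t tmpfs cgroup_root /sys/fs/cgroup
        sudo mkdir /sys/fs/cgroup/cpu
        sudo mount -t cgroup -o cpu cgroup /sys/fs/cgroup/cpu
        sudo mkdir /sys/fs/cgroup/cpu/webstack
        echo 256 | sudo tee /sys/fs/cgroup/cpu/webstack/cpu.shares    # default weight is 1024

        # move the existing apache and mysql processes into the group (children inherit it)
        for pid in $(pgrep -x apache2) $(pgrep -x mysqld); do
            echo "$pid" | sudo tee -a /sys/fs/cgroup/cpu/webstack/tasks >/dev/null
        done

    sshd stays in the root group at full weight, so a login shell should remain usable even at load 200; cpu.shares is a ratio rather than a cap, so normal performance is unaffected.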

  • Oracle: 1 Large Server vs. 2 Smaller Servers?

    - by nvahalik
    We are in the planning stages of setting up our production Oracle 10gR2 environment. Our budget gives us the ability to buy 2 processor licenses of Oracle DB Standard Edition. We have minimal experience with Oracle, so I'll defer to anyone who has used it. We are trying to decide between a single dual-socket quad-core box and two single-socket quad-core boxes in a RAC configuration. Our DB right now is about 60 GB, and at our peak we'll have up to 150 concurrent users. Most of the heavy work is done via batch processing at night.

    My gut tells me that having 2 boxes in a RAC configuration can't be a bad thing, because it provides a true hardware-failover solution, with the DB stored on a shared LUN on a SAN via iSCSI. Plus, if we ever need to add capacity, we would already have boxes in place that can be upgraded with extra processors (with zero downtime, I assume, since it's a RAC configuration) if we add extra licenses, or with more RAM. Does RAC have any performance penalties? Will it add extra latency? Is there any true advantage to having dual-processor boxes running these systems? If we build out the Oracle boxes with special hardware (hardware iSCSI cards, TOE NICs), will they be solid? We are deploying on 64-bit Windows. So what would you do: one box or two?

  • cheap gigabit switch for small business

    - by neoice
    My friend's business is currently borrowing my Adtran 1224R and is very happy with it. It's configured with a few VLANs to segment customers, internal traffic, and public wifi; port 1 is a "trunk" port to the router, a chunky Linux box doing iptables + NAT. They push a lot of traffic over the LAN (data backups) and really need gigabit. Besides, I'd like my Adtran back :P

    My goal is to find a cheap(ish) switch that can serve as a drop-in replacement. VLAN trunking is part of the 802.1Q spec, so anything with VLAN support should cover the current trunk-to-router setup. Having both a web interface and SSH is nice, but I can configure it either way if needed. Units like the Netgear GS724T have caught my eye, but it seems like none of the hardware in the $300-500 range has really solid reviews, and I'm concerned that "cheaper" hardware might not hold up for a network full of power users. Does anyone have a recommendation for the Netgear GS724T, or for another switch that will meet my needs?

  • How to modify a message, so it will be for 100% recognizable as spam by Exchange junk e-mail filter

    - by user71061
    Hi! I have a sendmail server sitting in front of my Exchange server. It filters spam with SpamAssassin (and does it incredibly well!), but it merely tags spam messages with header flags and by modifying the message subject. When such a message arrives in a user's mailbox on the Exchange server, it is examined by the Exchange/Outlook junk-email filter, which puts most of the spam in the Junk E-mail folder. And that is my problem: most, but not all!

    To put all of the spam in the junk folder, each user has to define a rule saying, for example: "If the header contains the text 'X-Spam-Flag: YES', then move the message to the 'Junk E-mail' folder". Fine, but it has to be done for every user (and for some users this task is too "complicated" to do themselves :-). So I want to know how I could modify the message header in such a way that the Exchange junk-email filter will recognize the message as spam 100% of the time, freeing users from defining their own rules. One solution could be to push such a rule out via AD and Group Policy, but I want to avoid that because of the many possible caveats: there are so many combinations of operating systems and Outlook versions that, to be honest, I doubt it is even possible.

  • SQL Server Replication Backup

    - by user18039
    Hi, we have a new system that runs on SQL Server 2008 R2 64-bit. A primary online transaction processing (OLTP) database accepts a high volume of updates from several thousand point-of-sale systems at stores around the country. In order to protect this vital function, I have decided to introduce a dedicated reporting database server, from which multiple users will run some pretty complex reports. I realise there were a number of choices, but I decided to use transactional replication as the mechanism for copying the data from the OLTP database to the new reporting database, with one-way replication. The solution has worked well in test.

    I'm now being asked what changes need to be made to the backup policy to cover the architectural changes. I have read pages such as MSDN's "Strategies for Backing Up and Restoring Snapshot and Transactional Replication", but I think these are overkill for my solution. In fact, my current thinking is that we simply continue making backups of the OLTP data and logs. If the reporting database or any of the replication system databases (e.g. distribution) fail, it's no big deal: we can clear everything down and re-create the replication. I realise that taking a complete snapshot of the OLTP database would be time-consuming (approx. 5 hours), but I'd be more relaxed about that than about trying to restore backups of the various data and log files in the correct sequence. My view is that the complex strategies set out in the MSDN article would only be the way to go for a more complex replication solution than mine, e.g. multiple subscribers with 2-way replication. Would you agree? I'd be grateful for any advice. Many thanks, Rob.

  • mail server checklist..

    - by Jeff
    We recently ran into some issues with our mail server setup, so I'm preparing a list of actions we should enforce in order to maintain a proper email solution within our company. We have around 80 Exchange users and send mass emails to 20,000+ customers almost monthly. The checklist I currently have:

    1) McAfee MX Logic 'cloud' anti-spam filtering for incoming messages
    2) antivirus on each computer in the company
    3) antivirus on the Exchange and DNS servers
    4) set up an SPF record (see the sketch below)
    5) set up DKIM
    6) set up DomainKeys
    7) set up Sender ID
    8) submit the SPF record to Microsoft, Yahoo, etc. for whitelisting purposes
    9) configure safe size limits for messages in Exchange
    10) keep 2 outside IPs for the mail server, so if one gets blacklisted we can switch to the backup
    11) host the website on a different IP than the mail server
    12) send all mass company emails through a third-party provider (listtrak.com)
    13) set up domain aliases (media, enews, and bounce) for the third-party mass-mail software
    14) verify the setup using [email protected]
    15) configure Group Policy and our opendns.org account to prevent unwanted actions and website viewing

    For the mass emails:

    1) schedule them in batches at different times (1,000 at 10am, 1,000 at 4pm, 1,000 at 10am the next day)
    2) set up user preferences so recipients can decide what they want to receive (their interests)
    3) send a steadier flow of email, maybe 100 a week featuring top new products, instead of 20,000 every other month

    If anyone has suggestions or additions/subtractions to this checklist, they are greatly appreciated. Thank you.
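
    For item 4, an SPF record is just a TXT record on the domain. A sketch with placeholder IPs and a hypothetical include for the bulk-mail provider (check the provider's documentation for the real include name):

        # check what is currently published
        dig +short TXT example.com

        # a typical record authorizing both mail-server IPs plus the provider:
        #   example.com.  IN TXT  "v=spf1 ip4:203.0.113.10 ip4:203.0.113.11 include:bulkmailprovider.com ~all"

    ~all (softfail) is the cautious choice while testing; switch to -all once you're confident every legitimate sending path is listed.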

  • Debian 100% cpu every 30 minutes but not loggable?

    - by user654123
    I have a Debian 7 x64 machine running at DigitalOcean that hits 100% CPU usage for about a minute every 30 minutes. A couple of days ago it stayed there for a couple of hours, the server finally crashed, and I had to repair my MySQL databases. The machine is a pure web server running Apache2 and MySQL.

    I've tried tracing which processes use the CPU, with no luck. The script I used:

        #!/bin/sh
        while true; do
            ps -A -eo pcpu,pid,user,args | sort -k 1 -r | head -3 >> proclog.txt
            echo "\n" >> proclog.txt
            sleep 2
        done

    I was monitoring htop as well while this was happening, but the top processes' CPU usage didn't add up to even 15%, even though htop's CPU meter showed a constant 100%. htop was configured to show all users' processes, including user and kernel threads.

    Edit: by stopping Apache2 and MySQL prior to an expected spike, I can tell that neither is responsible; the 100% usage occurred anyway. [The original post included a graph of CPU usage over the preceding hours.]
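
    Since per-process sampling shows nothing, it is worth checking whether the time is going to the hypervisor rather than to your processes; on a VPS, the %steal column exposes a noisy neighbour or host contention. A sketch using the sysstat tools (Debian package: sysstat):

        sudo apt-get install sysstat

        # run these across one expected spike:
        pidstat -u 5 > pidstat.log &     # per-process CPU every 5 seconds
        sar -u 5 > sar.log &             # system-wide %usr/%sys/%iowait/%steal

    If sar shows %steal spiking while pidstat stays quiet, the cycles are being consumed outside your VM and the fix is a support ticket or a different host, not a process hunt; a kernel-side eater (e.g. kworker I/O storms) would instead show up as %sys.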

  • postfix email gateway

    - by k-h
    I am setting up a Postfix email gateway. It will not hold any mail; it will accept email for my domain and forward it to an internal mail server, and relay outbound mail from the internal server. One of the main complications is that I am working on a live, running system and this is an upgrade, so I am using a test domain which I will change to the real domain at some point.

    I tried various methods, but the simplest way that worked was to use a script to create an aliases file from LDAP entries. There are various problems with this method, the main one being that the entries can't be of the simple form user@example.com, because then the gateway doesn't know where to send them; they have to be of the form user@realmailserver.example.com.

    What I would like doesn't seem hard, but I can't get my head around the Postfix documentation. There seem to be various ways, but none of them has worked for me, and most of the examples I have found on the web assume the mail will end up on the gateway itself. I want a list of users somewhere, preferably of the form user1, user2, etc. rather than user1@example.com (I can easily generate this list), and I would like Postfix to forward all email for example.com to a particular server, i.e. realmailserver.example.com. Can anyone suggest how I might do this?
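
    A sketch of the stock Postfix way to do this, using relay_domains plus a transport map and a relay recipient list (domain names are taken from the question; file paths are the Debian defaults). The relay_recipients file can be generated from your user1, user2 list with a trivial loop:

        # /etc/postfix/main.cf (relevant lines):
        #   relay_domains         = example.com
        #   transport_maps        = hash:/etc/postfix/transport
        #   relay_recipient_maps  = hash:/etc/postfix/relay_recipients

        # route the whole domain to the internal server
        echo 'example.com  smtp:[realmailserver.example.com]' > /etc/postfix/transport

        # accept only known users; everything else is rejected at the gate
        for u in user1 user2; do echo "$u@example.com OK"; done > /etc/postfix/relay_recipients

        postmap /etc/postfix/transport /etc/postfix/relay_recipients
        postfix reload

    The square brackets in the transport entry suppress MX lookups, so mail goes straight to that host; with this in place, no per-user aliases are needed at all.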

  • How to store Movies on a separate volume from the iTunes media folder?

    - by Manca Weeks
    I have a rather enormous music collection; the music alone is approaching the 1 TB mark, and I am already storing it on an external drive. My iTunes library files are in their default location (/Users/me/Music/iTunes), and my iTunes media folder is on an external drive at /Volumes/iTunes/iTunes Music. This has been working as expected.

    Now I would like to store just the contents of the Movies folder within the iTunes media folder on a separate drive. Apparently iTunes doesn't like aliases or symlinks, but I saw somewhere that one can mount a volume in a directory other than the default /Volumes. I would like to permanently mount my new Movies volume at /Volumes/iTunes/iTunes Music/Movies. I know there is a command to do this, but how does one configure Mac OS X 10.6.4 to always mount that volume in this directory automatically? I hope someone can enlighten me. If I find a solution, I can finally import all my movies into iTunes and be able to search them and so on; it would be a dream. Thanks, M
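
    On 10.6, a fixed mount point can be set in /etc/fstab, keyed by the volume's UUID; OS X honours it at mount time. A sketch (the UUID is a placeholder, and spaces in the path are escaped as \040):

        # find the volume's UUID
        diskutil info "Movies" | grep "Volume UUID"

        # edit fstab with the dedicated wrapper, then add the one line below
        sudo vifs

        #   UUID=12345678-ABCD-1234-ABCD-1234567890AB  /Volumes/iTunes/iTunes\040Music/Movies  hfs  rw

    The mount point directory must exist (and the parent iTunes volume must already be mounted) before the Movies drive comes up, so this works best when the iTunes drive is always connected.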

  • Siege - running a stress test benchmark

    - by morgoth84
    I need to benchmark an HTTPS server using Siege, to see how it behaves under massive load. I'm initiating the tests from another machine, which is quite powerful and connected to the same physical switch as the server. But when I run a test, I can't get it to make more than 170 requests per second. At this load, the server's CPU usage is at 15-20%, the average response time per request is approx. 0.03 seconds, and the load on the client machine is approx. 10%. As I gradually increase the number of users in Siege (the number of worker threads), the request rate increases linearly up to 170 reqs/sec, but it never gets over that. No matter how many more worker threads I start, the load on the server never exceeds 20% (and the client's load also doesn't increase any more).

    How can I overcome this? I've googled a bit and found that after a request completes, the socket associated with its ephemeral port remains in TIME_WAIT state for some time, during which it can't be reused. I tried to work around that with:

        sysctl -w net.ipv4.ip_local_port_range="1024 65535"
        echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle

    Oh, and the client machine is Linux (Red Hat, I think, but I'm not sure). Any help would be appreciated.
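
    A flat ceiling with both machines idle usually means the client tool, not the network, is the bottleneck; with HTTPS, each fresh connection also costs a TLS handshake. Some client-side knobs worth trying (the URL below is a placeholder, and Siege's concurrency cap lives in its config file):

        ulimit -n 65535                  # more file descriptors for sockets

        # benchmark mode removes siege's default think-time delay between requests
        siege -b -c 255 -t 60S https://server.example/

        # ~/.siege/siege.conf caps usable -c at 'limit' (default 255); raise it there,
        # and set connection = keep-alive to reuse TCP/TLS sessions

    If the rate still plateaus, try running several siege processes in parallel, or a second client box; if the aggregate rate scales, the server was never the limit.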
