Search Results

Search found 10731 results on 430 pages for 'k day'.

  • Server format & Reinstall while keeping Server & domain ID

    - by Chris
    Hi Everyone, I want to reinstall my 2008 R2 server from scratch, due to multiple Active Directory issues. I have only 1 server running AD and a spare machine to use if necessary. Is there a way to save just the user accounts and the domain SID, so that I can start with a clean server that uses the same name as before? I can reassign file security, but I do not want to have to rejoin all the users to a new domain. Also, all users are mapped to folders on the server. What I hope to do is a clean install of the server without having to mess with the users' machines. Can someone please tell me the procedure to accomplish this? Any help appreciated! Thanks guys, but I could be here all day telling you every error I am getting. Can we please keep this to the question of how to do a reinstall and keep the same SID? I just want to start over without having to rejoin all the clients to a new domain. Is there such a tool that can back up the server SID and the AD domain name so that I could restore them, without restoring any other data? I might not be using the correct terminology here, but hopefully you understand what I am asking. Thanks
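
    For anyone tackling this, one avenue worth investigating (a sketch, not a confirmed procedure): on 2008 R2, a Windows Server Backup system state backup captures the AD database, SYSVOL, and registry, so the accounts and domain identity could in principle be restored onto a freshly installed server with the same name. It restores more than just the SID and domain name, so treat it as an option rather than the exact tool asked for; E: below is a placeholder target.

        rem capture system state before wiping the box:
        wbadmin start systemstatebackup -backupTarget:E:
        rem after the clean install, list available backups and restore:
        wbadmin get versions
        wbadmin start systemstaterecovery -version:01/15/2013-09:00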

  • The Web Hosting Conundrum for "not quite" developers

    - by saltcod
    Hey all, Apologies if this post feels like it's been covered elsewhere, but I don't think it has. I've been down a winding web hosting road. To date, I've tried: Joyent, Media Temple, Bluehost, Hostgator, and finally Linode. The reasons for switching are likely obvious to everyone: speed. With the exception of the lightning-fast Linode, all of the shared hosts are absolutely sloooow. What to do when you're not really a "developer"? While I've grown addicted to the speed of Linode, I really don't feel like it's where I should be. I have this nagging feeling in the back of my mind that one of these days (likely soon), I'm going to run into something that I won't be able to figure out and I'll have days' worth of downtime. Just the other day, for example, I realized that one of my domains wasn't sending emails. After 4(!) hours looking into the problem, I still can't get sendmail or postfix to work. Four hours!! I want to be a Drupal expert, not an Ubuntu expert. That's really the heart of my problem: I spend way too much time learning Ubuntu's ins and outs, and not nearly enough time working on Drupal. So here goes: Is there a web host out there anywhere that offers the speed of Linode, but will let me focus on Drupal instead of sys-admin-ing? Thanks! [I know, I know. There are going to be lots of people who read this saying "just learn Ubuntu like a real developer". And I get that. I do. But when I work full-time and try to develop some of these sites in my evenings and weekends, I really feel like the sys-admin stuff gets in the way.]
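
    On the sendmail/postfix tangent above, a generic first pass that often saves those four hours (assuming Ubuntu with Postfix; paths vary by release):

        sudo service postfix status              # is the daemon even running?
        sudo tail -n 50 /var/log/mail.log        # recent delivery errors
        sudo postconf -n                         # the non-default bits of main.cf
        echo "test" | mail -s "test" you@example.com   # needs the mailutils package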

  • User unable to delete folder / files "File in use by another user" Server 2003

    - by Az
    I am administering a standalone Windows 2003 Terminal Server with no domain membership. Occasionally (about once a week or so) a user will attempt to delete a sub-folder in a shared folder and gets denied with "File in use by another user". I tried checking the shared folder snap-in, and that folder is not open. She has full control and is the owner of the folder as well. I even checked "Effective Permissions" for some of the folders/files she can't delete, and she truly has full control. I am able to delete the folder as Administrator with no problem. Another odd thing: she can delete the files IN the folder most of the time (this issue happens on both folders and files in the share). Sometimes merely waiting a day or two will allow her to delete the folder or files. I am curious as to why she gets the message that it is in use as creator/owner with full control, yet I don't get it simply as a member of the Admin group. If anyone out there has any ideas I'd love to hear them! THANK YOU.
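
    When it next happens, the built-in tools below can show whether the server really holds an open handle on that folder, which would at least separate a stale lock from a permissions quirk (standard Server 2003 commands; <ID> is whatever the listing reports):

        rem list files/folders currently open over the share, and by whom:
        net file
        rem the same data with filtering options:
        openfiles /query /fo table
        rem force-close a stale handle by its ID (use with care):
        net file <ID> /close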

  • Laptop freezing every few seconds, including screen + sound

    - by zenstealth
    Just a few days ago, my Windows 7 HP dv4170us (1.76GHz CPU, 1GB RAM) laptop started to freeze every other second, where everything on screen and all sound (such as a song playing in iTunes) would just freeze until I bash it violently (without actually breaking the laptop) or wait a couple more seconds. I think it started one night when I noticed that a USB mouse of mine stopped working, and it displayed random "Device was not recognized" errors. I just unplugged the mouse and ignored it. Skip forward to the next day: it started freezing, and as of today I can't get my computer to stop freezing. I tried to back up my files onto an external HDD, but it almost corrupted the drive. I ran 4 complete virus scans using MSSE and MalwareBytes (both quick and full scans), and they all came up clean. In the Task Manager, the CPU usage is at a constant max, and so is the RAM (if I have just a few apps running, I only have like 30MB of free RAM left). Also, the outside of my laptop, right above where the CPU is located, is very, very hot. I suspect that something is wrong internally inside the computer, but I'm not sure. It also does the same thing when booted into Ubuntu. Does anyone know what could be wrong with it?
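
    Given the heat and the fact that it freezes under Ubuntu too, the cooling system is a reasonable suspect; a quick way to put numbers on that theory from the Ubuntu side (package names assume Debian/Ubuntu):

        sudo apt-get install lm-sensors
        sudo sensors-detect      # accept the prompts to load the right modules
        sensors                  # sustained CPU temps near 90C+ under light load
                                 # point at dust-clogged cooling or a dead fan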

  • Clone a Windows Installation to a 3TB Hard Drive; MBR to GPT

    - by DanBlakemore
    I have Windows 7 Professional 64-bit installed on my desktop. Unfortunately for me and my wallet, my hard drive is failing. I have purchased a 3TB hard drive as a replacement for my current 2TB drive. I would like to avoid as much hassle as possible in moving to this new drive, so I would like to copy my current partition to the new drive using GParted. The problem is that I suspect that my current partition table is MBR, and I need GPT on my new drive since it is 3TB. Can I simply copy the MBR partition onto the new disk and then convert it to GPT after the fact (can you even convert the type of a partition table)? Or would I need to somehow copy the contents of the partition into a GPT partition on the new drive? How do I go about making this transition? Also, are there any issues I should be wary of booting from a GPT partition? If it matters, my motherboard is 1 year old as of May 2012. Edit: My motherboard is 1 day old. My old one did not have UEFI compatibility, so I decided to make an upgrade to Intel today, given that I would need a UEFI motherboard to use my new HDD. How much can I use a dying hard drive (bad sectors according to Hitachi Drive Fitness Test)? I have assumed not at all, to be safe.
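
    For the MBR-to-GPT step specifically: gdisk is commonly cited as able to convert a disk's partition table in place, so one plausible sequence (untested here, and worth a verified backup first given the failing source drive) is to copy the partition over with GParted and then convert the new disk:

        # after copying the partition to the new disk with gparted:
        sudo gdisk /dev/sdb      # /dev/sdb = the new 3TB disk (placeholder)
        # gdisk detects the MBR table and converts it to GPT in memory;
        # 'p' to review the result, 'w' to write the GPT table to disk

    Note that Windows 7 x64 will only boot from a GPT disk via UEFI, which lines up with the motherboard upgrade mentioned in the edit.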

  • crontab not running on VirtualBox unless I'm logged in

    - by Mike
    I am running Ubuntu Server 9.04 in VirtualBox on my work PC as a development environment. I have some scripts that I've put in my user's crontab that run throughout the day while I'm SSHed into the VM. Last night, I closed PuTTY and all of my other running applications (except for VirtualBox and the VM) and went home. I came back this morning to discover that my cron jobs didn't run at all, yet when I SSHed into the VM, the next scheduled job ran. I set the schedule to 5 min to test, disconnected again, and the jobs stopped running on schedule. They seem to only run if I'm logged in to the machine. Obviously, I want them to run on schedule even if I'm not logged in to the VM, otherwise there's no point. Is there something I've failed to configure correctly? New information: There are now 3 entries in /var/log/cron.log saying "Mount of private directory return code [256]"... the entries correspond to when the cron job is supposed to run. I thought the jobs were supposed to run as my user ID? Why would my own user ID be unable to run a script in my home directory?
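
    That "Mount of private directory" log line is a strong clue: it suggests the home directory is an eCryptfs-encrypted Private directory, which is only mounted while a session is open, so cron can only read scripts under it while someone is logged in. A way to test that theory (script path below is hypothetical):

        # move the job somewhere that exists whether or not anyone is logged in
        sudo mkdir -p /opt/cronjobs
        sudo cp ~/bin/myscript.sh /opt/cronjobs/
        crontab -e    # repoint the entry at /opt/cronjobs/myscript.sh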

  • Powershell Script Scheduled Task Stopped Running (Could not Start)

    - by Hatsune Yuki
    I'm running a scheduled task (for a PowerShell script) on a Windows 2003 server. I believe the script works fine. The task is scheduled to run every 10 minutes from 7:00am to 11:50pm every day. However, it never gets to run for more than a day. It always stops some time in the afternoon (between 2pm and 6pm). I'm not sure exactly what happened, but I always get the error The attempt to log on to the account associated with the task failed, therefore, the task did not run. The specific error is: 0x80070569: Logon failure: the user has not been granted the requested logon type at this computer. Verify that the task's Run-as name and password are valid and try again. It seems like most people with this error are saying that they need to give the user the "log on as a batch job" right. However, this option is greyed out for me. I searched other places where users have similar problems, but the solutions are not written in detail (some of them have something to do with GPO). I've only used the basic features of Windows Server and I have no clue how to get to the place they are referring to. Can someone please confirm whether "log on as a batch job" is indeed a solution and provide a detailed walkthrough on how to solve my problem? Thanks. P.S. Someone suggested the page http://technet.microsoft.com/en-us/library/cc755659(v=ws.10) and I tried to follow the method for a web server with a domain, but got stuck on the 6th step, where it mentions a Group Policy Object. I don't know where it is.
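
    For reference, "Log on as a batch job" is a user-rights assignment, and on a standalone (non-domain) box it is normally edited through Local Security Policy; if it is greyed out there, some policy is being enforced from elsewhere. The usual path looks like this:

        rem Start -> Run:
        secpol.msc
        rem Security Settings -> Local Policies -> User Rights Assignment
        rem   -> "Log on as a batch job" -> add the task's run-as account
        rem then refresh policy so the change applies immediately:
        gpupdate /force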

  • mysqld-nt.exe exists in the task list, but it's not actually running?

    - by PHP
    mysqld-nt.exe is showing in the Task Manager, but I cannot connect to it. I tried telnet localhost 3306 and it fails to connect. So I restarted the server, and it's OK. This happens every day. Any ideas? EDIT: Here is the error log (I didn't find anything abnormal though):

        100122 10:11:16 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: Normal shutdown
        100122 10:11:16 InnoDB: Starting shutdown...
        100122 10:11:18 InnoDB: Shutdown completed; log sequence number 0 22939338
        100122 10:11:18 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: Shutdown complete
        100122 10:12:40 InnoDB: Started; log sequence number 0 22939338
        100122 10:12:42 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: ready for connections.
        Version: '5.0.24-community-nt' socket: '' port: 3306 MySQL Community Edition (GPL)
        100123 16:20:44 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: Normal shutdown
        100123 16:20:44 InnoDB: Starting shutdown...
        100123 16:20:46 InnoDB: Shutdown completed; log sequence number 0 22939832
        100123 16:20:46 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: Shutdown complete
        100123 16:22:09 InnoDB: Started; log sequence number 0 22939832
        100123 16:22:11 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: ready for connections.
        Version: '5.0.24-community-nt' socket: '' port: 3306 MySQL Community Edition (GPL)
        100125  9:18:59 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: Normal shutdown
        100125  9:18:59 InnoDB: Starting shutdown...
        100125  9:19:00 InnoDB: Shutdown completed; log sequence number 0 22941001
        100125  9:19:00 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: Shutdown complete
        100125  9:20:22 InnoDB: Started; log sequence number 0 22941001
        100125  9:20:25 [Note] D:\MySQL\MySQL Server 5.0\bin\mysqld-nt: ready for connections.
        Version: '5.0.24-community-nt' socket: '' port: 3306 MySQL Community Edition (GPL)
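
    Next time it hangs, it may be worth checking whether the process is still listening at all before restarting, since "in the task list but refusing connections" and "not listening" point at different problems (built-in Windows tools):

        rem is anything bound to port 3306, and which PID owns it?
        netstat -ano | findstr :3306
        rem compare that PID against the mysqld-nt.exe entry:
        tasklist | findstr mysqld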

  • Rails application keeps timing out when attempting to connect to Postgresql DB

    - by Corillian
    I'm hosting a PostgreSQL database on a small Windows Azure Ubuntu 13.04 VM with a default postgresql.conf. I have a Rails application running on a medium Windows Azure Ubuntu 13.04 VM. When accessing the PostgreSQL database, the Rails application is constantly timing out. In its database.yml I have the connection pool size set to 120 and the timeout set to 15 seconds. Despite this, my Rails logs are full of the following error message: ActiveRecord::ConnectionTimeoutError: could not obtain a database connection within 5 seconds (waited 5.0023203 seconds). The max pool size is currently 120; consider increasing it. My postgresql.conf has a max connection limit of 120; making it any larger prevents the server from being able to successfully restart. I've also made sure that ssl was off in postgresql.conf per this article, but beyond that I have no idea what's going on. My PostgreSQL logs don't contain any info indicating something is going wrong. My website is getting ~1k hits per day, so perhaps a small VM instance just isn't powerful enough? I appreciate any assistance! [Edit1] The PostgreSQL database is in a separate cloud service within the same affinity group. For example: db small VM: mydatabase.cloudapp.net (Affinity Group US East); forums medium VM: myforums.cloudapp.net (Affinity Group US East). On the database server I have opened port 5432. The connection to the database server from the forums server uses its hostname. Is it possible that the DNS resolution is what's taking so long?
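
    The DNS theory at the end is cheap to test from the forums VM; if lookups turn out slow or flaky, a hosts-file entry or the database server's IP in database.yml would confirm it (hostname taken from the question; myuser is a placeholder, and psql comes from the postgresql-client package):

        time nslookup mydatabase.cloudapp.net        # repeat a few times
        time psql -h mydatabase.cloudapp.net -U myuser -c 'SELECT 1'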

  • Why are all of my ZFS snapshot directories empty?

    - by growse
    I'm running an Oracle Solaris 11 box as a ZFS storage appliance, and I'm taking regular snapshots of the ZFS filesystems via cron. In the past, I know that if I wanted to grab a particular file from a snapshot, a read-only copy was kept in .zfs/snapshot/{name}/ and I could just navigate there and pull the file out. This is documented on Oracle's website. However, I went to do this the other day, and noticed that the ZFS directories within the snapshot directories are all empty. zfs list -t snapshot correctly shows the list of snapshots that should be present, and .zfs/snapshot correctly contains a directory for each snapshot, and in each snapshot there is a directory present for each ZFS filesystem. However, these directories appear to be empty. I just tested a restore by touching a file in a little-used share and rolling back to the latest hourly snapshot, and this appears to have worked fine. So the rollback functionality is there. Did Oracle change how snapshots are done? Or is something seriously wrong here?
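
    Two quick checks that might narrow this down (dataset name is a placeholder): the snapdir property controls how .zfs behaves, and listing a snapshot by its full path sidesteps any directory-listing caching, which NFS clients in particular are known to get wrong:

        zfs get snapdir tank/data                   # 'hidden' vs 'visible'
        zfs list -t snapshot -r tank/data | head
        ls /tank/data/.zfs/snapshot/<snapname>/     # <snapname> from the list above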

  • ASA access lists and Egress Filtering

    - by Nate
    Hello. I'm trying to learn how to use a Cisco ASA firewall, and I don't really know what I'm doing. I'm trying to set up some egress filtering, with the goal of allowing only the minimal amount of traffic out of the network, even if it originated from within the inside interface. In other words, I'm trying to set up dmz_in and inside_in ACLs as if the inside interface is not too trustworthy. I haven't fully grasped all the concepts yet, so I have a few issues. Assume that we're working with three interfaces: inside, outside, and DMZ. Let's say I have a server (X.Y.Z.1) that has to respond to PING, HTTP, SSH, FTP, MySQL, and SMTP. My ACL looks something like this:

        access-list outside_in extended permit icmp any host X.Y.Z.1 echo-reply
        access-list outside_in extended permit tcp any host X.Y.Z.1 eq www
        access-list outside_in extended permit tcp any host X.Y.Z.1 eq ssh
        access-list outside_in extended permit tcp any host X.Y.Z.1 eq ftp
        access-list outside_in extended permit tcp any host X.Y.Z.1 eq ftp-data established
        access-list outside_in extended permit tcp any host X.Y.Z.1 eq 3306
        access-list outside_in extended permit tcp any host X.Y.Z.1 eq smtp

    and I apply it like this:

        access-group outside_in in interface outside

    My question is, what can I do for egress filtering? I want to only allow the minimal amount of traffic out. Do I just "reverse" the rules (i.e. the smtp rule becomes access-list inside_out extended permit tcp host X.Y.Z.1 any eq smtp) and call it a day, or can I further cull my options? What can I safely block? Furthermore, when doing egress filtering, is it enough to apply "inverted" rules to the outside interface, or should I also look into making dmz_in and inside_in ACLs? I've heard the term "egress filtering" thrown around a lot, but I don't really know what I'm doing. Any pointers towards good resources and reading would also be helpful; most of the ones I've found presume that I know a lot more than I do.
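
    For what a stricter egress policy might look like, here is a sketch of a dmz-side ACL in the same syntax as above (the interface name and the final deny are assumptions, not a vetted policy). Because the ASA is stateful, replies to the inbound connections permitted by outside_in are allowed automatically; an egress ACL only has to cover traffic the server itself initiates, and everything else can be denied and logged:

        access-list dmz_in extended permit tcp host X.Y.Z.1 any eq smtp
        access-list dmz_in extended permit udp host X.Y.Z.1 any eq domain
        access-list dmz_in extended deny ip any any log
        access-group dmz_in in interface dmz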

  • Windows 7 clock jumps back about every hour + internet disconnects

    - by IlyaD
    I have the weirdest problem: for the last few days, every hour or so (not exact), the clock jumps back by 15 to 60 minutes, and after some time the internet disconnects. I can reconnect the internet by unplugging and plugging back in the network cable. This is the second time this problem has happened to me; the first time was when I installed VirtualBox. When I removed it, the problem was gone. This time I have no idea what triggered it; the only thing I installed around the time the problem started was an iTunes update (10.5.1). I tried removing it but the problem remained. (I also tried to remove other programs I installed a day or two before, and that didn't help either.) Also, this is not a VM, and currently I don't have any VM software. Any ideas..? UPDATE: Since the solution is in the comments, I'll write it here. Apparently the Windows Time service got broken; this is the fix: http://technet.microsoft.com/en-us/library/cc738995(WS.10).aspx (run "fix it" from the IE browser only) UPDATE 2: The fix didn't work. About 2 hours passed, and first the internet disconnected and then the clock jumped back... This probably means that the problem is not only in the clock
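
    For reference, the linked "fix it" essentially re-registers the Windows Time service; the manual equivalent from an elevated prompt is below (standard commands, though per UPDATE 2 this ground has already been covered here):

        net stop w32time
        w32tm /unregister
        w32tm /register
        net start w32time
        w32tm /resync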

  • Low-traffic WordPress website on Apache keeps crashing server

    - by OC2PS
    I have recently moved my low-to-moderate traffic (1000 UAUs, 5000 pageviews on a busy day) website from shared hosting to a CentOS 6 64-bit VPS with Apache and cPanel, running on 4 quad-core processor (likely oversold) and 3GB memory (Xen). We've had problems from the beginning. The server keeps crashing. It seems PHP keeps expanding until it consumes all the memory and crashes the server. Some folks have suggested that I should abandon Apache/cPanel/PHP/MySQL and go with nginx/Varnish/PHP-FPM/SQLite. But that's just not possible for me, as I am not very tech savvy and need a simple GUI like cPanel to be able to manage the mundane management tasks (can't afford to hire a system administrator or get fully managed hosting). I have come across several posts discussing optimization of Apache for WordPress, but all of these lead to articles that are pretty dated, such as this ~4-year-old one from Jan 2009: http://thethemefoundry.com/blog/optimize-apache-wordpress/ The article is pretty detailed and seems helpful, but I stumble even on the first step. My httpd.conf only has 2 LoadModule commands:

        LoadModule fastinclude_module modules/mod_fastinclude.so
        LoadModule bwlimited_module modules/mod_bwlimited.so

    So I go totally bust right there. Further, my httpd.conf says: Direct modifications to the Apache configuration file may be lost upon subsequent regeneration of the configuration file. To have modifications retained, all modifications must be checked into the configuration system by running: /usr/local/cpanel/bin/apache_conf_distiller I am having trouble finding where to change the modules in WHM. Please can someone help me with updated guidelines on how to optimize Apache for WordPress? Many thanks! P.S. The WordPress installation also has WP Super Cache installed. P.P.S. I also have phpBB, OpenCart, and Menalto Gallery installed.
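
    The usual first lever on a 3GB cPanel box is capping how many PHP-heavy Apache children can exist at once, so they can never outgrow RAM. A sketch of the relevant prefork block (the numbers are rough assumptions to be tuned against the per-process memory that top actually shows; on cPanel these belong in an include file or must go through the distiller workflow the notice describes):

        <IfModule prefork.c>
            StartServers         4
            MinSpareServers      2
            MaxSpareServers      4
            MaxClients          25
            MaxRequestsPerChild 500
        </IfModule>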

  • Need a helpful/managed VPS to help transition from shared hosting

    - by Xeoncross
    I am looking for a VPS that can help me transition out of a shared hosting environment. My main OS is Ubuntu, although I am still new to the Linux world. I spend most of my day programming PHP applications using a git-over-SSH workflow. I want PHP, SSH, git, MySQL/PostgreSQL and Apache to work well. Someday, after I figure out server management, I'll move on to http://nginx.org/ or something. I don't really understand 1) Linux firewalls, 2) mail servers, or 3) proper daily package/lib update flow. I need a host that can help with these so I don't get hit with a security hole. (I monitor Apache access logs, so I think I can take it from there.) I want to know if there is a sub-$50/month VPS that can help me learn (or do for me) these three main things I need to run a server. I can't leave my shared hosts (plural shows my need!) until I am sure my sites will be safe despite my incompetence. To clarify again, I need the most helpful, supportive, walk-me-through, check-up-on-me, be-there-when-I-need-you VPS I can get. Learning isn't a problem when there is someone to turn to. ;)
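
    For what it's worth, the three worries have common Ubuntu starting points even on an unmanaged box (a sketch, assuming a stock Ubuntu server; mail is the genuinely fiddly one, and many people skip running a local mail server entirely in favor of a hosted relay):

        # 1) firewall: ufw wraps iptables with sane defaults
        sudo ufw allow OpenSSH && sudo ufw allow 80/tcp && sudo ufw enable
        # 3) updates: let security patches apply themselves
        sudo apt-get install unattended-upgrades
        sudo dpkg-reconfigure unattended-upgrades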

  • Bacula stops writing to disk volume after 2GB

    - by m.list
    Bacula Version: 5.2.5. I have configured Bacula to write volumes to disk; however, Bacula stops writing to the volume as soon as it reaches 2GB. The file system is not an issue, as I have stored files larger than 2GB on it.

        06-Dec 17:22 backup-sd JobId 8421: End of Volume "Full-Monthly-0005" at 0:2147475577 on device "FileStorage" (/nfs/backup-pool). Write of 64512 bytes got 8069.
        06-Dec 17:22 backup-sd JobId 8421: End of medium on Volume "Full-Monthly-0005" Bytes=2,147,475,578 Blocks=33,288 at 06-Dec-2012 17:22.

        backup1@backup:/nfs/backup-pool$ ls -alh Full-Monthly-0005
        -rw-r----- 1 bacula tape 2.0G Dec 3 16:14 Full-Monthly-0005

    bacula-dir.conf:

        Pool {
          Name = Full-Monthly
          Pool Type = Backup
          Recycle = yes
          Volume Retention = 5 months
          Volume Use Duration = 1 day
          Maximum Volumes = 5
          Maximum Volume Bytes = 12gb
        }

    bacula-sd.conf:

        Device {
          Name = FileStorage
          Media Type = File
          Archive Device = /nfs/backup-pool
          LabelMedia = yes      # lets Bacula label unlabeled media
          Random Access = Yes
          RemovableMedia = no
          AlwaysOpen = no
          Label media = yes
          Maximum Volume Size = 12gb
        }

    In my original configuration, Maximum Volume Bytes and Maximum Volume Size were not set at all, and so should have defaulted to no maximum, but that did not work either.
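
    Since the filesystem itself is ruled out, another classic source of a hard stop right at 2GB is a per-process file-size rlimit on the storage daemon (an assumption worth eliminating; 'unlimited' from the first command rules it out):

        su -s /bin/sh bacula -c 'ulimit -f'
        grep -i fsize /etc/security/limits.conf
        cat /proc/$(pidof bacula-sd)/limits | grep -i 'max file size'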

  • Unusual HEAD requests to nonsense URLs from Chrome

    - by JeremyDWill
    I have noticed unusual traffic coming from my workstation the last couple of days. I am seeing HEAD requests sent to random character URLs, usually three or four within a second, and they appear to be coming from my Chrome browser. The requests repeat only three or four times a day, but I have not identified a particular pattern. The URL characters are different for each request. Here is an example of the request as recorded by Fiddler 2:

        HEAD http://xqwvykjfei/ HTTP/1.1
        Host: xqwvykjfei
        Proxy-Connection: keep-alive
        Content-Length: 0
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.13 (KHTML, like Gecko) Chrome/9.0.597.98 Safari/534.13
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    The response to this request is as follows:

        HTTP/1.1 502 Fiddler - DNS Lookup Failed
        Content-Type: text/html
        Connection: close
        Timestamp: 08:15:45.283

        Fiddler: DNS Lookup for xqwvykjfei failed. No such host is known

    I have been unable to find any information through Google searches related to this issue. I do not remember seeing this kind of traffic before late last week, but it may be that I just missed it before. The one modification I made to my system last week that was unusual was adding the Delicious add-in/extension to both IE and Chrome. I have since removed both of these, but am still seeing the traffic. I have run virus scan (Trend Micro) and HiJackThis looking for malicious code, but I have not found any. I would appreciate any help tracking down the source of the requests, so I can determine if they are benign, or indicative of a bigger problem. Thanks.

  • Is my "Generic" USB Flash Drive broken?

    - by Jesse J.
    So here is the situation. I find myself technologically knowledgeable about many things (I love to code, whether it's websites, C#, C++ or so on). However: my 2 toddlers (my wife actually) bought me a "Generic" 128 GB USB storage device (USB flash drive) for Father's Day. I thought awesome at first..... WRONG! Nothing but problems with it. 3-4MB/s MAX transfer speed. I can bear with it. BUT! When I went to reformat my computer, I transferred my save files from my games over to the stick, and then the USB stick managed to become corrupted. Not just a simple format would work either. It's screwed. I tried to use "chkdsk X: /X /F /R" (I manually changed the USB drive letter to X for troubleshooting) with administrator rights; after a long session getting it to run with no errors (had to delete the log), I finally recovered the files. However, when I go to use it (transfer to or from), it transfers a couple KB to the stick or from it and then freezes. It says (Windows 7): Name: From: Folder (X:\File\Location) To: Folder (C:\Users\Username\Desktop) Items Remaining: 0 (0 bytes) Speed: 0 bytes/second It does this forever... and ever... and ever... It transferred 3 files at least, and then stopped. This is a new USB stick bought from a "high" reputation company on eBay. Is the USB stick screwed?
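
    One more possibility worth ruling out before writing it off as simple failure: bargain "generic" high-capacity sticks are frequently fake-capacity flash that silently corrupts anything written past the real size, which would fit both the slow speeds and the corruption. On Windows the usual tester is h2testw; on Linux the f3 tools do the same job (sketch below; the mount point is a placeholder):

        sudo apt-get install f3
        f3write /media/usbstick/    # fills the stick with known test data
        f3read /media/usbstick/     # reports how much reads back intact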

  • Is it possible to trace someone using Google during an online exam?

    - by George
    I happen to be a professor at a reputed college. I want to design an online exam for over 1000 students via around 50 computers right after the vacation ends. Now the problem is that I have heard that many students use Google in a different tab to find answers when no invigilator is around. I want to know if there is a way to backtrace it after the exams via some kind of history or any other possible way. In our university there is a standard system. I am not good with computers, but I will try to explain. Each computer uses Mozilla to connect to a centrally located server via an IP. The students open it and enter a unique ID and password to start the exam. Many questions are jumbled, and different groups of students take the exam in different time slots. Is there any way to trace it? I want to set an example for students so they won't cheat and will take exams honestly. Additional details: Since the number of computers is less than the number of students, more than 10 students are going to use a single computer on a single day over a period of 10 hours. After this, if I check the history (and let's say someone even forgot to delete the history and I see it), will I be able to figure out who among the 10 has done it? Moreover, is it even practical and feasible?
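
    Assuming "Mozilla" means Firefox: each OS user profile keeps its browsing history in a places.sqlite database, and the visit timestamps in it are what would let you map a visit to an exam slot. A sketch of pulling timed history (the profile path varies by OS; with one shared OS account, attribution still depends on seating and slot records):

        sqlite3 ~/.mozilla/firefox/*.default/places.sqlite \
          "SELECT datetime(v.visit_date/1000000,'unixepoch') AS visited_utc, p.url
             FROM moz_historyvisits v JOIN moz_places p ON v.place_id = p.id
            WHERE p.url LIKE '%google%'
            ORDER BY v.visit_date DESC LIMIT 100;"

    Bear in mind this proves nothing if a student clears history or uses private browsing, so a locked-down kiosk browser or proxy-level logging is the more reliable route going forward.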

  • Unwanted forced authentication after server restart (Win 2k3)

    - by Felthragar
    We're running a Win 2k3 R2 Standard 64-bit edition server. On this server we're running a file server and the ability to allow remote login to our network through VPN. We do not currently utilize a domain setup; all user accounts are local accounts on the server. Each employee is given a unique account to log in to the server. The password is a randomly generated 16-character string, which makes it hard to remember. What we've done is basically had the password stored on the client machine (standard "Remember Me" functionality). This has worked well. However, last night our server automatically restarted after an automatic update. After that, some of our employees, myself included, had to re-authenticate with the server, submitting our credentials again. Then again, some others did not have to re-authenticate. Do you guys have any idea why this is? Is there a setting to prevent this? I've checked the logs but I couldn't find anything of interest. Then again, I'm not really sure what I'm looking for. Thanks in advance, I'll try to answer any additional questions you may have. Edit: When I say "login" or "authenticate" I mean through the standard Windows file-sharing (SMB) protocol. Edit 2: OK, new day. Tonight the server restarted again, and the same two clients that had to re-authenticate yesterday had to re-authenticate today as well. The rest did not.
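
    If the clients are Vista/7, the saved credentials can be inspected per machine in Credential Manager; whether an entry was stored as session-only or persistent could explain why only some clients forget after a server reboot (a hypothesis, not a confirmed cause):

        rem on an affected client: list saved credentials and their persistence
        cmdkey /list
        rem re-add the server credential explicitly (prompts for the password):
        cmdkey /add:SERVERNAME /user:SERVERNAME\username /pass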

  • running a laptop continuously

    - by intuited
    I have an experienced laptop — a Dell Latitude D400, with a Pentium M CPU — that I'd like to run as an always-on server. This model was launched in 2004; I got mine second-hand in about 2007. I've heard that continuous operation is generally not a good idea with consumer hardware, but am lacking in specific knowledge about related problems, and have little idea of how much such usage patterns would reduce the lifespan of the machine. I'm mostly concerned with the unit's core components; parts such as the hard drive which are readily replaceable are, well, readily replaceable. What sorts of things can I do to increase the lifespan of this machine under such circumstances? For example, I'm guessing that it would be wise to limit the CPU frequency or take other steps to keep the internal temperature low. However, I'm not sure where the point of diminishing returns would lie with such an approach — 50°C? 40°C? Would it be useful to suspend the machine periodically, for perhaps an hour each day, or a few hours each week?
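
    On the frequency-limiting idea: the Pentium M supports SpeedStep, so the OS can cap the clock directly, and pairing that with temperature readings shows quickly where the diminishing returns set in. A sketch under Linux (package names assume Debian/Ubuntu; the available frequency steps are model-specific, so check cpufreq-info first):

        sudo apt-get install cpufrequtils lm-sensors
        cpufreq-info                      # available frequencies and governors
        sudo cpufreq-set -g powersave     # pin to the lowest speed, or cap it:
        sudo cpufreq-set -u 1000MHz
        sensors                           # then watch temperatures over a day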

  • Nginx, Apache, MySQL, Memcache on a server with 4GB RAM. How to optimize to have enough memory?

    - by TomSawyer
    i have 1 dedicated server with Nginx proxy for Apache. Memcache, mysql, 4G Ram. These day, my visitor on my site wasn't increased, but my server get overload always in some specified time. (9AM - 15PM) Ram in use is increased second by second to full. that's moment, my server will get overload. i have to kill all apache , mysql service and reboot it to get free memory. that's the circle. here is my ram in use at the moment 160(nginx) 220(apache) 512(memcache) 924(mysql) here's process number 4(nginx) 14(apache) 5(memcache) 20(mysql) and here's my my.cnf config. someone can help me to optimize it? [mysqld] datadir=/var/lib/mysql socket=/var/lib/mysql/mysql.sock user=mysql skip-locking skip-networking skip-name-resolve # enable log-slow-queries log-slow-queries = /var/log/mysql-slow-queries.log long_query_time=3 max_connections=200 wait_timeout=64 connect_timeout = 10 interactive_timeout = 25 thread_stack = 512K max_allowed_packet=16M table_cache=1500 read_buffer_size=4M join_buffer_size=4M sort_buffer_size=4M read_rnd_buffer_size = 4M max_heap_table_size=256M tmp_table_size=256M thread_cache=256 query_cache_type=1 query_cache_limit=4M query_cache_size=16M thread_concurrency=8 myisam_sort_buffer_size=128M # Disabling symbolic-links is recommended to prevent assorted security risks symbolic-links=0 [mysqldump] quick max_allowed_packet=16M [mysql] no-auto-rehash [isamchk] key_buffer=256M sort_buffer=256M read_buffer=64M write_buffer=64M [myisamchk] key_buffer=256M sort_buffer=256M read_buffer=64M write_buffer=64M [mysqlhotcopy] interactive-timeout [mysql.server] user=mysql basedir=/var/lib [mysqld_safe] log-error=/var/log/mysqld.log pid-file=/var/run/mysqld/mysqld.pid

  • mail server checklist..

    - by Jeff
    Currently we ran into some issues with our mail server setup. I'm preparing a list of actions that we should enforce and use in order to maintain a proper email solution within our company. We have around 80 Exchange users, and send mass emails out almost on a monthly basis to 20,000+ customers each time. The checklist I currently have:

    1) McAfee MX Logic 'cloud' anti-spam functionality for incoming messages
    2) antivirus on each computer in the company
    3) antivirus on the Exchange and DNS servers
    4) set up SPF record
    5) set up DKIM
    6) set up DomainKeys
    7) set up Sender ID
    8) submit SPF to Microsoft, Yahoo, etc. for 'whitelist' purposes
    9) configure size limits for messages in Exchange to safe numbers
    10) I have 2 outside IPs for my email server; in case one gets blacklisted, switch to the backup
    11) my internet site rests on a different IP than the mail server
    12) all mass emails for the company sent through a 3rd-party company (listtrak.com)
    13) set up domain aliases media, enews, and bounce for the 3rd-party mass mail software
    14) verify the setup using [email protected]
    15) configure group policy and our opendns.org account to prevent unwanted actions and website viewing

    Mass emails:

    1) schedule them to send different amounts at different times (1,000 at 10am, 1,000 at 4pm, 1,000 at 10am the next day)
    2) set up user preferences; decide what they want to receive, etc. (their interests)
    3) send a steadier flow of email, maybe 100 a week with top new products, instead of 20,000 every other month

    If anyone has suggestions or additions/subtractions to this checklist they are greatly appreciated. Thank you.
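
    For item 4, an SPF record is just a TXT record on the sending domain; a sketch of the shape (the IPs are documentation placeholders standing in for the two outside IPs in item 10, and the listtrak include only belongs there if they send mail as your domain, so check their documentation for the exact mechanism):

        example.com.  IN TXT  "v=spf1 ip4:203.0.113.10 ip4:203.0.113.11 include:listtrak.com ~all"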

  • Timeperiod Setting

    - by Alvin G. Matunog
    I am running Nagios 3.2.3 on CentOS 5.7 32-bit and I have a bit of a problem scheduling timeperiods. Please help me. Objective: to stop monitoring during server restarts (the server will be restarted, and restarted automatically, because of updates and system backup). The restart is scheduled every 1st Sunday of every month. My monitoring runs 24x7, but during restarts I want to stop the monitoring and resume after 30 mins. So every 1st Sunday I want my schedule to be 00:00-11:30,12:00-24:00. This means that it will stop at 11:30 and resume at 12:00nn. If I set this on every Sunday there is no problem. But if I set this time on every 1st Sunday, it stops at 11:30 but resumes on the next day (Monday) and not at 12:00nn Sunday. I don't know what I am missing. If I set it on regular weekdays there is no problem, but on offset weekdays (1st Sunday) it doesn't work the way it should. Here is my definition:

        define timeperiod{
                name            1st_sunday
                timeperiod_name 1st_sunday
                alias           No monitoring every 1st Sunday
                thursday 1      00:00-11:30,12:00-24:00
        }

        define timeperiod{
                timeperiod_name irregular
                alias           regular checking
                use             1st_sunday
                sunday          08:30-22:00
                monday          08:30-22:00
                tuesday         08:30-22:00
                wednesday       08:30-22:00
                thursday        08:30-19:10,19:20-22:00
                friday          08:30-22:00
                saturday        08:30-22:00
        }

    Can anyone help me? Please? Thank you.
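
    One discrepancy jumps out of the pasted definition (it may only be a transcription slip): the 1st_sunday period is keyed on "thursday 1", i.e. the first Thursday of the month, not the first Sunday. If that line is really in the running config, the literal fix would be:

        define timeperiod{
                name            1st_sunday
                timeperiod_name 1st_sunday
                alias           No monitoring every 1st Sunday
                sunday 1        00:00-11:30,12:00-24:00
        }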

  • nginx logrotate config

    - by TomOP
    What's the best way to rotate nginx logfiles? In my opinion, I should create a file "nginx" in /etc/logrotate.d/, fill it with the following code, and do a /etc/init.d/syslog restart after that. This would be my config (I haven't tested it yet):

        /usr/local/nginx/logs/*.log {
            # rotate the logfile(s) daily
            daily
            # adds extension like YYYYMMDD instead of simply adding a number
            dateext
            # If log file is missing, go on to next one without issuing an error msg
            missingok
            # Save logfiles for the last 49 days
            rotate 49
            # Old versions of log files are compressed with gzip
            compress
            # Postpone compression of the previous log file to the next rotation cycle
            delaycompress
            # Do not rotate the log if it is empty
            notifempty
            # create mode owner group
            create 644 nginx nginx
            # after logfile is rotated and nginx.pid exists, send the USR1 signal
            postrotate
                [ ! -f /usr/local/nginx/logs/nginx.pid ] || kill -USR1 `cat /usr/local/nginx/logs/nginx.pid`
            endscript
        }

    I have both the access.log and error.log files in /usr/local/nginx/logs/ and want to rotate both daily. Can anyone please tell me if "dateext" is correct? I want the log filename to be something like "access.log-2010-12-04". One more thing: Can I do the log rotation every day at a specific time (e.g. 11 pm)? If so, how? Thanks.
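
    On the two questions at the end (standard logrotate/cron behavior, worth verifying against man logrotate on the box): dateext by itself appends -YYYYMMDD with no dashes, i.e. access.log-20101204; getting access.log-2010-12-04 takes an extra dateformat line. And logrotate has no clock of its own: it runs whenever cron invokes it, so the time of day is set by the cron entry rather than by this config. A sketch of both:

        # inside the stanza above, next to dateext:
        dateformat -%Y-%m-%d
        # and a dedicated cron entry (instead of /etc/cron.daily) for 11pm:
        0 23 * * * root /usr/sbin/logrotate /etc/logrotate.d/nginx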

  • Giving the root user priority to maintain Debian (while the server collapses under heavy load)

    - by Saix
    Is there any way to set up Debian to prioritize any or specific root activity above everything else? For instance, several times per year something goes wrong (usually human error, overstressing Apache/MySQL) and the system becomes unresponsive under a heavy load of around 200 (8-core CPU). I know there are limits for PHP scripts to run and then be killed, but that's not the way, because this limit has to be at least 45 minutes long. The problem is, by the time I'm able to log in via SSH and restart Apache/MySQL under this server stress, it nearly hits those 45 minutes anyway. Also, a hardware restart usually causes fsck to run at boot time on all hard drives, since it's usually been quite a while since the box was last restarted. I was told it's really not a good idea to disable fsck, but then again, it takes more than an hour to complete. What is the fastest way to restart Apache/MySQL? Is there any way to give SSH users or the root user higher priority, so logging in and completing these restart (or rather stop) commands wouldn't take so long? One idea comes to my mind: use nice for Apache/MySQL, but no way. I can't risk limiting those two vital apps 24/7, or could I? I'm a little bit scared that other system processes would slow the pages down too much: any backup process, swap (if any), etc. There is a pretty heavy PHP framework with 20k visits a day, so it needs every hardware/software resource available. I can't throttle it the whole time, just at certain points when the system gets unresponsive, so I could maintain it.
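
    A variant of the nice idea that avoids throttling the apps around the clock (a sketch, not a tested recipe): leave Apache/MySQL at default priority and instead raise the priority of sshd, so logins and the root shell stay responsive when load spikes; shells spawned by sshd inherit its nice value, and the restart itself can be pushed ahead of the crowd too:

        # boost the listening sshd (the oldest sshd process is the master):
        sudo renice -10 -p $(pgrep -o sshd)
        # from that shell, run the restart at high priority as well
        # (init-script path assumes Debian):
        sudo nice -n -15 /etc/init.d/apache2 restart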
