Search Results

Search found 18096 results on 724 pages for 'let'.

Page 517/724 | < Previous Page | 513 514 515 516 517 518 519 520 521 522 523 524  | Next Page >

  • Is there any way to limit the turbo boost speed / intensity on an i7 laptop?

    - by Anonymous
    I've just got a used i7 laptop, one of these overheating Pavilions from HP with a quad core, and I really want to find a compromise between temperature and performance. If I use Linpack or some other heavy benchmark, the temperature easily reaches 95+°C, and with a TJunction of 100°C on the 2630QM model that means throttling that no cooling pad or even an industrial fan can solve. I later figured out that this is due to Turbo Boost: if I set my power settings to use 99% of the CPU instead of 100%, Turbo Boost seems to be disabled and the temperatures improve, but then it loses quite a bit of performance. The regular clock is 2GHz, and Turbo Boost takes it to 2.6GHz; if I could limit it to around 2.3GHz, that would be really nice. There is also another question I've had a hard time getting an answer to. It seems to me that the clocks boost up to maximum very quickly even when not needed: at 0% load the clocks sit at 800MHz, but at about 5% load they quickly jump to maximum and even pop into turbo, which seems very strange to me. So I wonder if there is any way to adjust the sensitivity of the SpeedStep feature. I believe it would be more logical to demand an increased clock only when it hits, let's say, 50% load. I do understand that most of these features are probably hardwired somewhere in the CPU itself or the motherboard, which has no tuning options, just like on many laptops. But I would appreciate it if you could recommend something, or some software. Thanks
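
    A hedged sketch of one knob worth trying (this assumes Windows, which the 99%-CPU power-plan trick suggests): the hidden "Maximum processor frequency" power setting caps the clock in MHz, which would hold the chip near 2.3GHz without giving up all of the boost. The PROCFREQMAX alias may not exist on older builds; if not, the setting's GUID is 75b0ae3f-bce0-45a7-8c89-c9611c25e100.

        :: run in an elevated command prompt, then re-apply the active plan
        powercfg -setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCFREQMAX 2300
        powercfg -setactive SCHEME_CURRENT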

    Read the article

  • Merely installing PHP5 causes my AWS Ubuntu server to die minutes later from a massive CPU spike

    - by Mark Amery
    I have an AWS server with Ubuntu 11.04 as the OS, running an Apache2 web server (incidentally Python-based, using Django). We recently needed to add PHP5 support so we could use a third-party PHP library (incidentally for serving minified versions of JS and CSS files). However, for no reason any of us can discern, if we simply run sudo apt-get install php5 on the server, the install appears to finish successfully, but a few minutes later the server becomes impossible to connect to - without us taking any further action (we had not yet run sudo apt-get install libapache2-mod-php5, which I think would be the next step, nor actually run any PHP scripts on the server). Looking at the 'Monitoring' tab for the server in the EC2 Management Console reveals that a while after the installation, CPU usage spikes to 100% and stays there permanently (until we reboot the server from the AWS Console). After rebooting, the server also reliably dies within a few (between 0 and 10) minutes. We restored the server to a pre-PHP state from an AMI image, observed that it was stable, and then tried installing PHP5 again and watched the server die in exactly the same way, so we're pretty much certain that installing PHP5 is what causes the symptoms. What on earth could be causing this behaviour, and how can we get PHP installed on the server without it dying?
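
    A minimal diagnostic sketch, not a fix (log path invented): started before the next install attempt, it names whatever eats the CPU in a file that survives the forced reboot.

        #!/bin/bash
        # append the five hungriest processes to a log every five seconds
        while true; do
            { date; ps -eo pid,ppid,pcpu,pmem,comm --sort=-pcpu | head -6; } \
                >> /var/log/cpu-watch.log
            sleep 5
        done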

    Read the article

  • Accidentally dd'ed an image to wrong drive / overwrote partition table + NTFS partition start

    - by Kento Locatelli
    I screwed up and set the wrong output device for dd when trying to copy a FreeNAS ISO, overwriting the wrong external hard drive. Ironically, I was trying to set up a FreeNAS server for data backup...

    - The external drive is only used for data storage; the system is entirely intact.
    - The drive had a single NTFS partition filling the entire device (2TB WD Elements).
    - The drive originally had an MBR partition table; it now shows as having a GPT, presumably from the FreeNAS image.
    - The drive was mounted at the time, with maybe a couple of kB of data written/read after running dd.
    - The drive is just a few months old and healthy (regular SMART / fs checks).
    - I have not rebooted the OS (CrunchBang); /proc/partitions still holds the correct information (and has been stored).
    - I have dd's output (records in / out / bytes).
    - TestDisk did not find any partitions on a quick or deep search.
    - I'm running PhotoRec to recover the more important data (a couple of recent plaintext files that hadn't been backed up yet). The vast majority of the disk content (~80%) is unnecessary media files.

    My current plan is to let PhotoRec do its thing, then recreate the MBR with GParted and use cfdisk to create another NTFS partition using the sector information from /sys/block/.../. Is that a good course of action (that is, does it have a chance of success)? Or is there anything else I should try first? Possibly relevant information:

        dd if=FreeNAS-8.0.4-RELEASE-p3-x86.iso of=/dev/sdc:
        194568+0 records in
        194568+0 records out
        99618816 bytes (100 MB) copied

        grep . /sys/block/sdc/sdc*/{start,size}:
        /sys/block/sdc/sdc1/start:2048
        /sys/block/sdc/sdc1/size:3907022848

        cat /proc/partitions:
        major minor  #blocks  name
        ** snipped **
        8       32  1953512448 sdc
        8       33  1953511424 sdc1

        current fdisk -l output:
        WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.
        Disk /dev/sdc: 2000.4 GB, 2000396746752 bytes
        255 heads, 63 sectors/track, 243201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Disk /dev/sdc doesn't contain a valid partition table
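
    Given those numbers, a hedged sketch of the recreate step (end sector = start + size - 1 = 2048 + 3907022848 - 1; verify everything before writing). Note that dd also overwrote roughly the first 95MB of the filesystem itself, so a boot-sector repair from the NTFS backup copy would still be needed afterwards, and some MFT damage may remain:

        parted -s /dev/sdc mklabel msdos
        parted -s /dev/sdc unit s mkpart primary ntfs 2048 3907024895
        ntfsfix /dev/sdc1    # tries the backup NTFS boot sector (ntfsprogs)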

    Read the article

  • How can I remove all drivers and other files related to a USB Mass Storage device?

    - by Bob
    I have a flash drive here that does not work with one OS on one computer - let's call it desktop Windows 7. It works fine on another computer - laptop Windows 7. It also works fine under Windows 8 on the same desktop computer, and other flash drives work fine under desktop Windows 7. So it's not a hardware issue, and not a generic USB Mass Storage driver issue; it's something specific to this drive. On desktop Windows 7, I can connect the drive, but no volume comes up in Windows Explorer. Ditto for Disk Management. With diskpart, loading hangs until I unplug the drive; if I replug it and try list disk, it hangs again. If I unplug the drive at this point, list disk prints out all attached drives - including the just-removed flash drive. The drive consistently appears under Device Manager, but uninstalling the drivers, restarting, and reinstalling the drivers (by inserting the drive) only works for the first insertion. After that it fails again. I get the feeling that the driver files are not actually removed, and are corrupted, meaning that on every reinstall it's the same corrupted drivers being installed. Is there any way to remove these drivers completely? Or perhaps some other setting Windows 7 retains? Formatting the drive through another computer/OS does not help. I've also tried a complete wipe and rebuild of the MBR and single partition. The allocation unit size makes no difference; neither does an NTFS format. This is a relatively small matter, and I would not like to reinstall the entire OS!
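
    A hedged sketch of a deeper cleanout: Windows keeps per-device state beyond the driver binaries, and showing non-present devices lets you uninstall the stale entries. The registry paths below are the standard ones, but export them before deleting anything.

        :: elevated command prompt; devmgmt.msc inherits the variable
        set devmgr_show_nonpresent_devices=1
        start devmgmt.msc
        :: View > Show hidden devices, then uninstall the greyed-out entries
        :: for this drive under "Disk drives" and the USB controllers.
        :: Per-device state also lives under:
        ::   HKLM\SYSTEM\CurrentControlSet\Enum\USBSTOR
        ::   HKLM\SYSTEM\CurrentControlSet\Control\DeviceClasses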

    Read the article

  • Enterprise class storage best practices

    - by churnd
    One thing that has always perplexed me is storage best practices. Filesystems brag about how they can be petabytes or exabytes in size. Yet I do not know many sysadmins who are willing to let a single volume grow over several terabytes. I do know the primary reason behind this is how long it would take to rebuild the array should a drive fail: the more drives in a single LUN, the longer this takes and the greater your risk of losing another drive while the rebuild is taking place. Then there are usage reasons: admins will carve out a LUN based on how much space they think needs to be allocated to the project. It seems more practical to me for the LUN to be one large array and to use quotas. I understand this wouldn't satisfy every requirement (iSCSI), but I see a lot of NAS systems (NFS) managed this way. I also understand that the underlying volumes can be grown/shrunk as needed quite easily, but wouldn't it be less "risky" to use quotas rather than manipulating volumes and bringing possible data loss into the equation? There may be some other reasons I'm missing, so please enlighten me. Can we not expect filesystems to ever be so large? Are we waiting for the hardware to get faster to cut down on rebuild times?
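
    For reference, the quota approach is a one-liner per project on ZFS (a sketch; pool and dataset names invented):

        # one big pool, per-project datasets with quotas instead of per-project LUNs
        zfs create tank/projects/alpha
        zfs set quota=2T tank/projects/alpha
        zfs set reservation=500G tank/projects/alpha   # optional guaranteed floor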

    Read the article

  • Write permissions on uploaded files - Linux, Apache, PHP

    - by letseatfood
    I am working on a PHP script that transfers files using FTP functions. It has always worked on my production server (which is a hosting service). The development server I have just set up (I am a novice with servers) is Debian Lenny with Apache2, PHP5, and MySQL5. The file transfer works correctly, but once the file has been written to the server, it has permissions of 600. This makes it impossible for me to view the file (a JPEG) in the web browser, as permission is denied. I have scoured the internet and even broken my server installation and reinstalled it trying to figure this out (which has been fun, nonetheless!). I know it is unwise to set 777 permissions on publicly accessible files, but even that will not solve the problem. The only thing that works is running chmod 777 thefile.jpg after it has been transferred, which is not a workable solution. I tried changing the owner of my site files to www-data per this post, but that also does not work. My user is mike, and it does not work whether the owner of the files is mike or root. Would somebody point me in the right direction? Thanks! And, of course, let me know if I can clarify anything.
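
    The 600 is usually the FTP daemon's upload umask rather than anything in PHP. A hedged sketch, assuming the Debian box runs vsftpd (an assumption; proftpd and pure-ftpd have equivalent Umask directives):

        # append to /etc/vsftpd.conf: 0666 masked by 022 yields 644, readable by Apache
        echo "local_umask=022"     >> /etc/vsftpd.conf
        echo "file_open_mode=0666" >> /etc/vsftpd.conf
        /etc/init.d/vsftpd restart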

    Read the article

  • Syncing two sheets, while being able to hide different data

    - by Joshua
    I'm pretty new to Excel, so please bear with me. I have created a spreadsheet to organize gear by serial number and by who has it. This list gets updated multiple times daily, as gear shuffles regularly. I have gear that is assigned and unassigned. On the main sheet I have all the data, organized the way I want it. What I'm trying to do is duplicate this sheet so that both sheets automatically keep the same data at all times, but on the first sheet I can hide all the unassigned gear and view only the assigned gear, and then narrow it down in groups using the hide function heavily. On the second sheet I want to be able to hide all of the assigned gear, plus all the columns of gear that have no unassigned gear. The end result will be that as gear is moved between individuals or is unassigned entirely, I make that adjustment on one sheet and the data stays the same on both, but the way I view that same sheet is different on both. If I'm making no sense, just let me know and I'll try to explain more clearly. Thanks
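
    A minimal sketch of one way to get this (the master sheet is assumed to be named Main): keep the data only on Main and fill the second sheet with links, since hiding rows or columns on one sheet never affects the sheets that link to it. In the second sheet's A1, enter the formula below, then fill right and down across the whole range:

        =IF(Main!A1="", "", Main!A1)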

    Read the article

  • FreeNAS pool configuration - RAID1 + other drives

    - by trnelson
    Simple questions, really. I found this answer with a similar setup, but I'm not sure it answers my question - and if it does, I'm curious why, since the answer seems a bit unsure: ZFS Hard Drive Configuration in FreeNAS. I'm building a server which will be used primarily for backup, plus some media streaming, possibly with Plex. I seem to understand most everything I need, but I'm still a bit confused about how pools work and how to configure them for my scenario. I will have 2x 2TB WD Red drives, which I plan on using as a mirrored setup (RAID1). This would be for backup, and I'd also like to do offsite backup from this array to my CrashPlan account. I also have a few other drives: 1.5TB, 320GB, 250GB. I'm not sure exactly what to do with them yet, but I'm looking for options. The FreeNAS OS will be running from a 16GB USB flash drive. Would it be wise to use the 1.5TB as a backup of the backup, essentially as a mirror or perhaps for snapshots of the 2TB RAID1? I'm still learning about snapshots. Should the 2TB mirrored drives be in their own pool? Should the other drives be set up in their own pools as well, or should they be JBOD in a single pool? They may or may not get much use, since the 2TB array is plenty for me. Does a dataset basically mimic the idea of a partition or a network share? In other words, would I map \\SERVER\Share to X: on my laptop? Let's say I wanted to use the 250GB drive as an encrypted drive to store all of my cat pictures - would it have to be in its own pool? If I use jails for apps, should they go on the backup RAID1, or somewhere else? Thank you!
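
    A sketch of one sane layout under those constraints (device names invented; the FreeNAS GUI builds the same shapes under Storage > Volumes): the mirror gets its own pool, the odd-sized drives become separate single-disk pools so losing one never degrades the mirror, and the 1.5TB can receive snapshots of the backup dataset.

        zpool create tank mirror ada0 ada1     # the 2TB pair
        zfs create tank/backups
        zpool create scratch ada2              # the 1.5TB drive
        zfs snapshot tank/backups@nightly
        zfs send tank/backups@nightly | zfs recv scratch/backups-copy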

    Read the article

  • SQL Server Installation: Is it 32 or 64 bit?

    - by CapBBeard
    Hi, recently I was performing an OS upgrade on one of our DB servers, moving from Server 2003 to Server 2008. The DBMS is SQL Server 2005. While reinstalling SQL on the new Windows installation, I went to another of our DB servers to verify a couple of settings. Now, I always thought this second server was Server 2003 x64 + SQL 2005 x64 (from what I'd been told), but I now have my doubts. I suspect that it is in fact only 32-bit SQL, and I'd like to verify this. Here are some details: The OS is definitely 64 bit. xp_msver shows Platform as NT INTEL X86. SELECT @@VERSION shows Microsoft SQL Server 2005 - 9.00.4035.00 (Intel X86)... However, sqlservr.exe is not shown with '*32' in Task Manager - does anyone know why that is, if it is in fact 32 bit as claimed? Despite this, it does seem to be running out of the x86 Program Files folder. If I do the same checks on a confirmed 64-bit installation, they give back the expected 64-bit readings, which suggests the server in question really is running 32-bit SQL. That being the case, the question arises of how much memory this 32-bit install can use. Task Manager reports about 3.5GB memory usage for sqlservr.exe (the server has 16GB physical). I suspect that AWE has not been configured at all, and therefore the server will be significantly under-utilised (remembering that the OS is 64 bit) if SQL is simply using a 32-bit address space. Is this assumption correct? I feel the server should have SQL reinstalled as 64 bit in order to fully utilise the hardware platform, but it is currently heavily in production; this will be no easy task. I suspect we may just have to configure AWE correctly and let it be for the time being (unless this is a bad idea?). I apologise that this question is a little vague/lost; I'm no SQL expert, just trying to get a handle on what's going on here.
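
    A hedged sketch of the check plus the interim AWE settings (server name invented; 'awe enabled' takes effect after a service restart and wants the Lock Pages in Memory right for the service account):

        :: 64-bit editions include "(64-bit)" in the Edition string
        sqlcmd -S DBSERVER -Q "SELECT SERVERPROPERTY('Edition')"
        sqlcmd -S DBSERVER -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
        sqlcmd -S DBSERVER -Q "EXEC sp_configure 'awe enabled', 1; RECONFIGURE;"
        sqlcmd -S DBSERVER -Q "EXEC sp_configure 'max server memory', 12288; RECONFIGURE;"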

    Read the article

  • Cisco access-list confusion

    - by LonelyLonelyNetworkN00b
    I'm having trouble implementing access lists on my ASA 5510 (8.2) in a way that makes sense to me. I have one access list for every interface on the device, and the access lists are applied to the interfaces via the access-group command. Let's say I have these:

        access-group WAN_access_in in interface WAN
        access-group INTERNAL_access_in in interface INTERNAL
        access-group Production_access_in in interface PRODUCTION

    WAN has security level 0, INTERNAL security level 100, and PRODUCTION security level 50. What I want is an easy way to poke holes from Production to Internal. That seems pretty easy, but then the whole notion of security levels doesn't seem to matter any more: I then can't exit out of the WAN interface without adding an ANY ANY entry, which in turn opens access completely to the INTERNAL net. I could solve this by issuing explicit DENY ACEs for my internal net, but that sounds like quite a hassle. How is this done in practice? In iptables I would use logic something like: if source equals production-subnet and outgoing interface equals WAN, ACCEPT.
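
    The explicit-deny pattern is the usual answer, and it's only a few lines per interface. A hedged sketch with invented addresses: permit the holes, block the rest of Internal, then allow everything else (which covers the WAN):

        access-list Production_access_in extended permit tcp 10.50.0.0 255.255.0.0 host 10.100.0.5 eq 1433
        access-list Production_access_in extended deny ip any 10.100.0.0 255.255.0.0
        access-list Production_access_in extended permit ip any any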

    Read the article

  • Using a NAT rule to translate 80/443 traffic to the web server, but internal users cannot access it using the external IP/domain name

    - by Josh
    I am using Cisco ASDM for the ASA. I have an internal network called soa; my outside interface is called outside. Let's say the outside IP given to me by my ISP is y.y.y.y. I have a web server inside my network with a static IP of x.x.x.110. I have configured two static NAT rules (one for http, the other for https): source x.x.x.110, interface outside, service http or https. Maybe I am doing this wrong, but when I run the packet tracer, choose the outside interface, use 8.8.8.8 as the source IP, and use my outside IP y.y.y.y as the destination, the packet traverses successfully in 9 steps. For my other test, I switch to the soa interface, input an IP on that network, and leave the destination the same. This test comes up with 2 steps and then fails on my access list. The rule that fails is my catch-all: source any, destination any, service ip, action deny. What rule do I need to make to allow my soa network to go out and come back in via my external IP address (using a domain name attached to that IP in my DNS, of course)?
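
    This looks like the classic hairpin / DNS-doctoring situation rather than a missing ACL. A hedged sketch of the 8.2-style CLI for the hairpin variant (it assumes y.y.y.y is a routable address on the outside; ASDM exposes the same options on the NAT rule):

        same-security-traffic permit intra-interface
        static (soa,soa) y.y.y.y x.x.x.110 netmask 255.255.255.255

    The alternative, if the ASA sees the DNS replies, is adding the dns keyword to the outside static so internal clients simply resolve the name straight to x.x.x.110.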

    Read the article

  • Max. Temp. on Intel Burn Test for Stock Dell Precision T3500

    - by HK1
    I'm troubleshooting an issue on a Dell Precision T3500. As part of my troubleshooting, I've decided to run a stress test using the Intel Burn Test software. This machine is a stock configuration with 12GB of RAM and a Xeon W3670 processor (nothing overclocked). When I run IBT in standard mode, SpeedFan reports a processor temperature in excess of 80C. I've seen numbers as high as 90C, but even at that temperature the machine does not become unstable or crash. Still, it seems way too high. This processor has a TCase of 67.9C according to Intel's website, and I'm guessing that means I'm in the danger zone any time I go over that temperature. I've checked the cooling system and everything looks fine. I even took out the heat sink and reinstalled it with new thermal compound, which did not appear to make the problem better or worse. Is there a discrepancy somewhere in the way temperatures are measured or displayed? I've also tried HWMonitor from CPUID, and it reports the same temperatures. Should I just let the standard test run and disregard the temperature outputs?

    Read the article

  • Windows 7 ICS client web failure

    - by n8wrl
    I have several Windows 7 PCs connected on a LAN via a hub. One has a Verizon 3G connection and works great. I have Internet Connection Sharing enabled on it, which automagically set the LAN connection to 192.168.137.1 and enabled DHCP. I am trying to get the client PCs working one at a time; the others are off. The client is able to: get an IP via DHCP with correct settings; ping any web address I can throw at it, so DNS and routing are working; and run Windows Update. But web sites hang in IE - all but google.com! I type www.msn.com, microsoft.com, amazon.com, etc. They all ping fine from a cmd window, but IE just hangs: it says the web site was found, but the green progress bar just slowly creeps and no content displays. www.google.com comes up even after clearing the browser and DNS caches. I am pulling my hair out - what am I missing? EDIT: After some more gyrations with a router, I'm back to ICS. Same symptoms, only now I have an answer to Andrew's question: YES, I can do Google searches, but clicking on any of the result links hangs! I let one sit for half an hour with no timeout or error.
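
    The google-works-but-everything-else-hangs pattern hints (a guess, but it fits) at a path-MTU problem on the 3G link: small responses get through, full-size packets silently die. A quick sketch to test and, if it fits, clamp the client NIC (adapter name assumed):

        :: find the largest payload that passes unfragmented
        ping www.msn.com -f -l 1400
        :: if the big pings fail, clamp the MTU (elevated prompt):
        netsh interface ipv4 set subinterface "Local Area Connection" mtu=1400 store=persistent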

    Read the article

  • DVI splitter not working as expected/confusion between DVI-D and -I

    - by Freakishly
    Hey guys, thanks for looking. I have an ATI FirePro™ V3700 in my desktop machine and have been running a dual-monitor setup quite effortlessly, thanks to the two DVI ports on the card. I came upon a third monitor and wanted to extend my desktop to 3 screens, so I purchased a DVI splitter from Amazon. Now I can only duplicate the second monitor onto the third, not extend it. I've tried all possible combinations of input to no avail. Here's the setup:

    - The ATI FirePro™ V3700 has two Dual-Link DVI-I outputs.
    - The splitter splits a single Dual-Link DVI-I port into two Dual-Link DVI-I outputs.
    - Two of the monitors are NEC E222W; the third monitor is a Dell 2001FP. Each monitor has one D-Sub and one Dual-Link DVI-D input.
    - Cables going from the video card to the monitors are two Dual-Link DVI-D to the NECs and one Single-Link DVI-D to the Dell.

    Is the problem likely with the DVI-D/DVI-I mismatch? Or is it with the cable on the Dell, which is only Single-Link? The cables are easily replaceable, the monitors not so much. Thanks for your time, I really appreciate it.

    http://www.amd.com/us/products/workstation/graphics/ati-firepro-3d/v3700/Pages/v3700-specs.aspx
    http://www.amazon.com/Cables-Unlimited-DVI-D-Splitter-PCM-2260/product-reviews/B000H09RFM/ref=dp_top_cm_cr_acr_txt?ie=UTF8&showViewpoints=1
    www dot newegg dot com/Product/Product.aspx?Item=N82E16824002495
    accessories dot us dot dell dot com/sna/PopupProductDetail.aspx?cs=19&l=en&c=us&sku=320-1578

    Apologies for the fudged links, I'm new here and they won't let me post more than two :P

    Read the article

  • Defining Virtual and Real User Directories with Dovecot & Postfix

    - by blankabout
    Following a wobble described in this question, we now have virtual and real users authenticating with Dovecot. The problem now is that the real users (who have been on the system for years) can no longer access their mail. I'm guessing that this is because Dovecot is configured to point to the virtual mailboxes but not the real mailboxes. These are snippets from the config files:

        /etc/dovecot/dovecot.conf:
        !include conf.d/*.conf

        /etc/dovecot/conf.d/10-auth.conf:
        passdb {
          driver = passwd-file
          # Path for passwd-file. Also set the default password scheme.
          args = scheme=cram-md5 /etc/cram-md5.pwd
        }
        userdb {
          driver = static
          #args = mail_uid=dovecot mail_gid=dovecot /etc/dovecot/userdb
          args = uid=vmail gid=vmail home=/var/spool/vhosts/%d/%n /etc/dovecot/userdb
        }

        /etc/cram-md5.pwd:
        [email protected]:::::/var/spool/vhosts/virtualdomain.com/:/bin/false::

    We think the problem is that the Dovecot file 10-auth.conf does not contain a method of accessing the mailboxes for the real users. We have looked around on this site and dovecot.org and done the usual googling, but cannot find anywhere that describes how to set up virtual users alongside legacy real users. Any help would be appreciated, especially by our real users who would like the contents of their inboxes to be available! If any further config is required, please let me know.
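
    A hedged sketch of the usual shape of the fix (Dovecot accepts several passdb/userdb blocks and tries them in order, so system users can fall through to PAM and /etc/passwd; verify against the version in use). Note the static userdb matches every user, so for fallthrough to work the virtual lookup has to become a real lookup, e.g. against the passwd-file with its uid/gid/home fields filled in:

        passdb {
          driver = passwd-file
          args = scheme=cram-md5 /etc/cram-md5.pwd
        }
        passdb {
          driver = pam
        }
        userdb {
          driver = passwd-file
          args = /etc/cram-md5.pwd
        }
        userdb {
          driver = passwd
        }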

    Read the article

  • Is there a historical computer peripherals or accessories museum or even just a current list?

    - by zimmer62
    Thinking about all the unique and different peripherals I've owned over the years, from ISA capture cards to parallel-port-controlled shutter glasses for 3D games, I've seen many an accessory or computer peripheral come and go. The nostalgia of these things is a lot of fun. I tried to find some sort of historical timeline or list, but what mostly turned up was computers themselves. I'm more interested in the mice, scanners, the weird adapters that shouldn't exist, short-run very rare products, strange devices from computer shows in the 80's and 90's... hardware you might find in a geek's basement that would be completely useless now, but was the coolest thing around when it was new. An example would be a drawing tablet I had for my TI-99 computer, or the audio tape player accessory for a C64 which let you save files to audio tapes, or an ISA card that did the same for PCs hooked up to a VCR. Remember that IBM PCjr upgrade kit that added a floppy drive, more memory, and the AT switch in the back? I'd love to find either a wiki or a list that has already been assembled which contains many of these weird (or common) accessories. I've had so many over the years; I suppose I could start a wiki here if such a list doesn't already exist.

    Read the article

  • Wireless card overheating?

    - by Sidney
    Ok, so I've had my laptop for several years (I wanna say 4, but possibly more); it's a Toshiba Satellite. I'm running Linux Mint 15 and am having a strange new issue: after several hours of running, my wireless stops. It can SEE wireless networks, but refuses to connect to any of them. (On a side note, connecting to a router with a cable at this point works fine.) The fact that it can SEE the networks makes me think the card is in good condition and it's software related; the fact that it works for several hours before booting me makes me think perhaps the transmitter is getting too hot. I don't use my laptop in dusty environments and keep it on an elevated surface (alternatively, I actively try not to let it sit on soft surfaces where the vents get covered). I spray out the CPU fan with compressed air about once a year, so I really don't think the insides should be too dirty. Finally, unfortunately, sensors only gives me CPU temps, but they run about 40-50 degrees C, which from my understanding is perfectly normal for an i3. Does anyone have any suggestions on what I can do to determine the root cause of this?
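
    A hedged first step to separate driver trouble from hardware heat (the module name below is a placeholder; lspci shows the real one): when the drop happens, save the kernel log and bounce the module instead of rebooting. If a reload fixes it instantly, it's almost certainly the driver, not the transmitter.

        lspci -nnk | grep -A3 -i network        # which driver is in use?
        dmesg | tail -n 50 > ~/wifi-drop.log    # capture the evidence
        sudo modprobe -r ath9k && sudo modprobe ath9k   # substitute the real module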

    Read the article

  • Tools required for a Web Development Project..

    - by RBA
    Hi, I want to design a project on Linux which would involve several programming languages (C, Perl, PHP, HTML, XML, etc.), basically a web-based project. I have chosen to build on Linux because it is open source, and many things can be automated through scripting languages, which I don't know how to do in Windows. So I have installed Linux on a virtual machine (host: Windows 2007, guest: CentOS Linux, command-line interface). Since I am a beginner, I want to know what tools can be used to facilitate and ease my development process. Some which I know are listed below; please share your experience on this.

    1) Using PuTTY, so that I can access the Linux machine from anywhere within the network.
    2) Since I want to develop on Linux but use Windows as the development platform, I have downloaded the Eclipse editor (C/PHP) on Windows. But I want to know: how can I access Linux files from there?
    3) I installed Samba, and am still trying to figure out how I can access Linux files remotely on Windows (see the sketch below).
    4) Please share your experience of how I can ease my development process, and what other tools I can use.

    Please let me know if you need any other clarification.
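
    On points 2 and 3, a minimal Samba sketch (share name, path, and user invented) that makes the CentOS project tree appear as a normal Windows drive for Eclipse:

        # append to /etc/samba/smb.conf on the CentOS guest:
        [projects]
            path = /home/dev/projects
            read only = no
            valid users = dev

        # then, still on the guest:
        smbpasswd -a dev        # set a Samba password for the user
        service smb restart

        # and on Windows:
        net use Z: \\centos-vm\projects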

    Read the article

  • Converting Audio To Video Output and Attaching Text?

    - by ZeeMan
    I am currently working on a project, and before I get started I thought it'd be nice to check with the Stack Overflow community and see if maybe they can help me with this.

    The idea: I have about a thousand MP3 files that I need to convert into video files to be uploaded to YouTube for my work. Here is where it gets tricky: I also need to attach the text associated with each audio file to the video as an image. I was thinking .ppt.

    The problem: I can do this one audio file at a time, but it would take me a zillion years. lol!!

    The question: Can I create some kind of program, using let's say XML or JavaScript or XHTML or some other programming language, to do a mass content creation where all I have to do is feed it the information? Possibly a script? Or is it possible to create an example .ppt file and then hack it so that I can have it reproduce itself with different information?

    The note: Thanks in advance for helping out!!! Regards, ZeeMan!!!
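
    For the conversion half, a hedged sketch (it assumes each track's slide has been exported as a same-named PNG, e.g. via PowerPoint's Save As; exact ffmpeg options vary a little between versions):

        #!/bin/bash
        # pair track.mp3 with track.png and render track.mp4 for YouTube
        for f in *.mp3; do
            ffmpeg -loop 1 -i "${f%.mp3}.png" -i "$f" \
                   -c:v libx264 -tune stillimage -c:a aac -shortest \
                   "${f%.mp3}.mp4"
        done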

    Read the article

  • EC2 configuration for medium load service on Django

    - by Luberg
    I have created a very basic Django application which puts an email address into the database (a coming-soon page for a startup). I launched a t1.micro instance to see how much load it can handle, running nginx + FastCGI from Django + SQLite/PostgreSQL (tried both). A blitz.io test gave me a pretty unhappy result (just 100 users within 1 minute): "This rush generated 542 successful hits in 1.0 min and we transferred 809.01 KB of data in and out of your app. The average hit rate of 8.81/second translates to about 761,612 hits/day. You got bigger problems though: 87.28% of the users during this rush experienced timeouts or errors!" I tried putting Varnish in front, disabled debug mode in Django, and started FastCGI in threaded mode - nothing helps. This is not going to be a super-highload page - just a coming-soon page to save subscribers' email addresses - but it should handle at least 500-1000 users at the same time at peak... I believe a t1.micro is super small for that, but I have also tried a small instance with no better result. Please let me know whether I should use something different from Amazon EC2, pick something bigger than t1.micro, or whether this is definitely a configuration issue...
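
    If it is configuration, the first knob on that stack is FastCGI concurrency, since the default is a single process. A hedged sketch (socket path and pool sizes invented) using the runfcgi command from that era's Django:

        # prefork a small worker pool; keep it modest, a t1.micro has little RAM
        python manage.py runfcgi method=prefork \
            maxchildren=6 minspare=2 maxspare=4 \
            socket=/tmp/django.sock pidfile=/var/run/django.pid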

    Read the article

  • Ubuntu server crashes; need help figuring how to figure out why

    - by neezer
    I have a 768 slice at slicehost.com running Ubuntu Server 8.04.2 LTS (hardy) with a LAMP stack on it that periodically crashes, though I am not sure why. From what I can tell, a process basically goes rogue and consumes all the memory on the slice, suffocating all the other programs until the whole thing comes to a grinding halt and I have to do a hard reboot of the slice to get it back up and running again. I can't detect any pattern to this (it seems to happen about once a month, more or less). Here's a screenshot of my console during the last crash: I would assume that a possible cause might be a PHP script or an Apache configuration rule that causes the crash when triggered? How would I be able to find out which one is the offender? I've checked and rechecked all my PHP scripts, and running them doesn't seem to trigger the crash. I've also been able to log on to my system during a crash and see what's running (with top), but I can't tell how the offending process was started, so I can't trace the root of the problem! I know my description is overly generic, but unfortunately my expertise in tracking down the source of these glitches is very limited. If you need any additional information about my system in order to help me figure this out, please let me know in the comments and I will append it to the question. My only other lead as to the culprit here is WordPress, which we have installed on this server. Here are the details: WordPress 3.0.3 with the following plugins installed and activated: Addmarx - Bookmark/Share/Email Dropdown, Akismet, All in One SEO Pack, Animated Banners, Automatically publish highlights of any website, directly to your Blog, Broken Link Checker, CMS Dashboard, Collapsing Categories, Status Updater, SubHeading, Ultimate Google Analytics, VastSubCat, WP-CMS Post Control, and WP Super Cache.
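
    To catch how the rogue process gets started, a hedged sketch (log path invented): a root cron entry that snapshots the process table every minute, including parent PIDs, so after the next freeze the log names the culprit and its parent.

        * * * * * { date; ps -eo pid,ppid,user,pmem,rss,args --sort=-rss | head -12; } >> /var/log/mem-watch.log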

    Read the article

  • Datacentre Rack naming convention with flexibility for reassignment of server roles

    - by g18c
    We are just shifting across to a new rack, and until now we have used the names of cartoon characters. That is not going to work anymore, and we need a better naming convention. Physically, I would like to name the servers by location and then have an alias indicating the actual function/customer.

    Physical name: LON1S1R1SVR1, meaning London, suite 1, rack 1, server 1.

    Customer alias: since the servers can be reassigned from time to time, for the above physical server name I would have an alias as a column in a spreadsheet, set to the customer's hostname, i.e. www.customerserver1.com.

    Patching: for patching, I am looking at labelling the physical connections, i.e.:

        LON1S1R1SVR1-PWR1
        LON1S1R1SVR1-PWR2
        LON1S1R1SVR1-ETH0
        LON1S1R1SVR1-KVM

    Ultimately, if I am labelling cables, I really want to avoid putting LON1S1R1SQLSVR on any patch cord, in case the server gets reformatted and changed from a SQL server to a WWW server, which would mean relabelling all the patch cords too. In addition, once virtual machines are thrown in, I got confused very quickly. I appreciate that it may be confusing to have a physical hostname and a customer alias. Please let me know what you run with, and any other standards or best practices that I can follow?

    Read the article

  • What's the easiest route to trying out Mono 2.6?

    - by E J
    We have several web applications built on Microsoft technologies (ASP.NET + MVC framework, built using VS2008, MS SQL Server). I have recently been playing with Ubuntu (9.10), installed using Wubi, and wanted to see if I can get our apps running on a FOSS software stack. I have got the hang of the very basics of PostgreSQL, and I have read that there is some support for LINQ to SQL in Mono (as of 2.6), as well as ASP.NET/MVC. However, I am unsure how to go about getting Mono 2.6 up and running. Here is what I have discovered so far:

    - Ubuntu is not meant for the 'cutting edge'; it is designed to be stable, hence it sometimes takes a release cycle or two for new software to make it to the repositories.
    - Mono is already installed by default, but it is likely to stay at version 2.4 for at least the 10.04 release.
    - You can install parallel environments of Mono, if you know what you're doing.

    I have had a go at setting up parallel environments, but haven't had any luck yet. (And TBH I am not certain that that will do what I think it's gonna do.) (tl;dr start here) Is there a distribution of Linux similar enough to Ubuntu that I wouldn't have to start the learning curve all over again, but that will let me install Mono 2.6, PostgreSQL (and possibly MonoDevelop 2.4)? Or should I persist with Ubuntu?
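
    For the parallel-environment route on the existing Ubuntu, a minimal sketch (prefix invented; it assumes a Mono 2.6.x source tarball is already unpacked) that leaves the system Mono 2.4 untouched:

        ./configure --prefix=/opt/mono-2.6
        make
        sudo make install
        # activate per-shell, without touching the system Mono:
        export PATH=/opt/mono-2.6/bin:$PATH
        export PKG_CONFIG_PATH=/opt/mono-2.6/lib/pkgconfig:$PKG_CONFIG_PATH
        mono --version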

    Read the article

  • Windows 7 hangs on black screen for a while after log in

    - by steini
    I get the welcome screen. I click on my user and get the "logging on" screen. After that, all I get is a black screen with a mouse cursor. I can't even start Task Manager - no Ctrl+Alt+Del or Ctrl+Shift+Esc. It stays like this for about 10 minutes, then the desktop finally starts loading. According to the HDD LED on my case, Windows isn't even trying to access the hard drive for that whole time; it's just hanging, doing nothing it seems. What I have tried:

    - Uninstalled the video driver and removed leftovers with Driver Sweeper
    - Disabled all startup programs and non-Microsoft services
    - Loaded "last known good configuration"
    - Ran the alleged "black screen fix" from Prevx, against my better judgement (I don't really like running random exes without knowing what they do at all)

    None of that works. I can boot into safe mode normally. My specs:

    - i7 920
    - Gigabyte X58-UD3R
    - Gigabyte HD5870 1GB
    - 12GB Mushkin Silverline 1333MHz
    - Windows 7 Ultimate x64

    I'm also having another problem which I suspect is related. After I have gotten the computer up and running, everything works perfectly, but when it's been on for a while it starts behaving strangely when changing display modes: when I start a game or anything that changes the screen resolution, the computer freezes for about a minute, every time, until I reboot again. I think this is probably related to the black screen problem. Just thought I'd check to see if anyone has had the same problem. Let me know if I should post any more details about my system to help diagnose this. Thanks in advance.
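
    One hedged place to look: the System event log often records service or driver timeouts covering exactly that window. Something like this (standard Windows 7 wevtutil syntax), run right after the desktop finally appears:

        :: the 30 most recent System events, newest first, as readable text
        wevtutil qe System /rd:true /c:30 /f:text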

    Read the article

  • Test script if host is back online

    - by brubelsabs
    E.g. system: Ubuntu/Debian. As many of you probably do this via ping and a terminal - and I always forget that terminal when switching to another task - a notification popup would be useful. So, can I do better than this?

        while true; do if ping -c 1 your.host.com > /dev/null; then notify-send "your.host.com back online"; break; fi; sleep 30; done

    You will need zsh and libnotify (for notify-send) to make the snippet work. As a script:

        #!/usr/bin/env zsh
        # poll every 30s until the host answers, notify once, then stop
        while true; do
            if ping -c 1 "$1" > /dev/null 2>&1; then
                notify-send "$1 back online"
                break
            fi
            sleep 30
        done

    Read the article
