Search Results

Search found 12824 results on 513 pages for 'glen little'.


  • Installing WindowsAuthentication breaks authentication / web.config?

    - by Ian Quigley
    I have a clean Windows 2008 R2 box (on a VM) and have installed IIS 7.5 with default options. I then copied a website to it (from Windows 7, IIS 7) and after a little tweaking the website is working fine. The website is currently using and working with Anonymous Authentication. I have gone back to Windows Components/Server Manager, Roles - Security and ticked and installed Windows Authentication. When I check my server in IIS (top level, above sites) - Authentication, I see Anonymous Authentication (enabled), ASP.NET Impersonation (disabled), Forms Authentication (disabled), Windows Authentication (enabled). When I check my default website - Authentication, I see the same as above but "Retrieving status" and an error dialog saying: There was an error while performing this operation. Details: Filename: c:\inetpub\wwwroot\screwturnwiki\web.config Line number: 96 Error: This configuration section cannot be used at this path. This happens when the section is locked at a parent level. Locking is either by default (overrideModeDefault="Deny"), or set explicitly by a location tag with overrideMode="Deny" or the legacy allowOverride="False". I have tried hand-editing the web.config with no success. Uninstalling Windows Authentication happily returns my site to working with Anonymous Authentication, and allows me to enable/disable these three options. FYI, I am using ScrewTurnWiki with the Active Directory plug-in.
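
    This particular IIS error usually means the authentication sections are locked at the server level, and unlocking them with appcmd is a common fix; a sketch, assuming the section named on line 96 is one of the authentication sections (run from an elevated prompt):

        %windir%\system32\inetsrv\appcmd.exe unlock config -section:system.webServer/security/authentication/windowsAuthentication
        %windir%\system32\inetsrv\appcmd.exe unlock config -section:system.webServer/security/authentication/anonymousAuthentication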

    Read the article

  • Private staff network within public network

    - by pianohacker
    I'm the sysadmin at a small public library. Since I got here a few years ago, I've been trying to set up the network in a secure and simple way. This is a little tricky: the staff and patron networks need to be kept separate. Even if I further isolated the public wireless, I'd still rather not trust the security of our public computers. However, the two networks also need to communicate; even if I set up enough VMs that they didn't share any servers, they need to use the same two printers at the very least. Currently, I'm solving this with some jerry-rigged commodity equipment. The patron network, linked together by switches, has a Windows server connected to it for DNS and DHCP, and a DSL modem for a gateway. Also on the patron network is the WAN side of a Linksys router. This router is the "top" of the staff network, and has the same Windows server connected on a different port, providing DNS and DHCP, and another, faster DSL modem (separate connections are very useful, especially as we heavily depend on some cloud-hosted software). tl;dr: We have a public network, and a NATed staff network within it. My question is: is this really the best way to do this? The right equipment would likely make my job easier, but anything with more than four ports and even rudimentary management quickly becomes a heavy hit on our budget. (My original question was about an ungodly frustrating DHCP routing issue, but I thought I'd ask whether my network was broken rather than asking about the DHCP problem and being told my network was broken.)
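
    For comparison, on a Linux-based router (or a managed switch doing inter-VLAN ACLs) the printer-only trust boundary comes down to a few forwarding rules. A sketch with hypothetical addresses (patron 192.168.1.0/24, staff 192.168.2.0/24, printers at 192.168.1.20-21):

        # staff may print (TCP 9100 is raw/JetDirect printing)
        iptables -A FORWARD -s 192.168.2.0/24 -d 192.168.1.20 -p tcp --dport 9100 -j ACCEPT
        iptables -A FORWARD -s 192.168.2.0/24 -d 192.168.1.21 -p tcp --dport 9100 -j ACCEPT
        # patron machines may not initiate anything toward the staff network
        iptables -A FORWARD -s 192.168.1.0/24 -d 192.168.2.0/24 -m state --state NEW -j DROP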

    Read the article

  • Clipboard bug in Wordpad in Windows 7 (accidentally pasting large file into application)

    - by frenchglen
    In Win7 I use WordPad, and I really like it. For my needs it's lean and fast, yet has the formatting functionality I'm after when working on my TXT/RTF files on a daily basis. I don't intend to change text editors. There's a really bad bug which has ALWAYS plagued me. If you have a large file in the clipboard, like a 238MB FLAC file, and you accidentally paste it into WordPad for whatever reason, it hangs the application for a VERY long time (like 2 hours; it depends on how big the file is, because it tries to 'handle' it). You either have to close the application and lose any unsaved changes, or go do something else until the item has finished pasting into WordPad (it eventually drops the file's icon into WordPad, just like how it appears in Windows Explorer). It's a Windows bug, a WordPad bug. Is there some solution for this? Or is the problem fixed in Windows 8 (if anyone can tell me)? I'm not going to try out Win8 myself merely to answer this question - that's what I'm asking it on SuperUser for! I'm really hoping it's one of those little-yet-big things that they've fixed in Win8 (like removing the 255-character file path limit in Explorer, which is awesome). Thank you for your help, if you have Win8 handy and can test this. :)

    Read the article

  • VMware Converter performance

    - by bellocarico
    Hello, I have a question about my test lab. It's more to understand the concept than to apply this in production: I have an ESXi host with a few Linux/Windows VMs configured, and I'd like to use VMware Converter to create backups. To speed up the process I decided to create a Windows VM on the same ESXi host, where I've installed Windows 7 and VMware Converter. The host has a gigabit card but it's currently connected to a 100Mb FD port. Windows 7 sees a 1Gb card connected. When I do the backup using VMware Converter I specify the host IP as source and destination, so I thought the copy would be faster than going across the network via my laptop. Well, to cut a long story short: I get dreadful performance (4Mb/sec). I'm a bit confused by this, because despite the fact that the host uplink is running at 100Mb, communication between VMs and the host shouldn't (correct me if I'm wrong) have any such limitation. I did tweak Windows 7 to optimise network performance but got just a little improvement. I still need 4 hours to back up a 50Gb (thin) VM. Additionally I wanted to ask: would jumbo frames help with this? I know that jumbo frames have to be supported end to end, and the network switch the host is currently connected to doesn't support them, but I was wondering: 1) Does the ESXi host support jumbo frames at all? 2) Can I enable them somehow? 3) If I do so, I guess bulk transfers between VMs and host would improve, but would this affect the communication going through the real switch, as this doesn't do jumbo? Thanks for reading
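
    For what it's worth, VM-to-VM traffic on the same vSwitch never touches the physical uplink, so the 100Mb port shouldn't cap a host-internal copy; the bottleneck is more likely Converter itself. As for jumbo frames: ESX/ESXi 4.x does support them, but they are enabled per vSwitch from the CLI rather than the GUI. A sketch, assuming Tech Support Mode (or vicfg-vswitch via the remote vSphere CLI) and a vSwitch named vSwitch0:

        esxcfg-vswitch -m 9000 vSwitch0    # raise the vSwitch MTU to 9000
        esxcfg-vswitch -l                  # list vSwitches and confirm the MTU took

    Since external TCP connections still negotiate their segment size, enabling a larger MTU on the vSwitch generally doesn't break traffic that crosses the non-jumbo physical switch.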

    Read the article

  • FTP FileWatcher

    - by Meiscooldude
    So, I am in this little predicament where I am stuck watching a few FTP folders to see if they have new files added to them. If they do, it needs to throw an event with the file name, thereby telling something else to download that file. This is a pretty simple object to make; I was just curious if anyone knew how expensive this operation would be. I plan on using the command NLIST because I don't need file size information, and there will be no sub-directories in the folder. Each file in the folder will have exactly 25 characters in its name. There could be anywhere from 10 to 'maybe' a couple thousand (max around 2000) files per folder (usually on the lower end, 100-300, but currently growing). The files are anywhere from 250kb to a very VERY unlikely 10mb (usually within the 250kb to 4mb range). There could possibly be up to a few hundred folders (in which case I could change the watch frequency depending on the number of folders), but currently there are only a few (6-10ish). There would also be multiple logins for the FTP server; different logins would have access to different folders. I am not asking for an implementation, just whether anyone has some first- or second-hand knowledge about FTP and how this could affect my network. I am not opposed to putting in file retention times or changing the frequency with which I check for new files.
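
    As a rough feel for the cost: an NLST reply is one line per name, so even 2000 25-character names (plus CRLF, about 27 bytes each) is only around 54KB per poll; connection setup is the expensive part, not the listing. A bash sketch of a single poll using curl, which issues NLST when asked for a name-only listing (hypothetical host and credentials):

        touch last.txt    # state file from the previous pass
        curl --silent --list-only ftp://user:pass@ftp.example.com/folder/ > now.txt
        comm -13 <(sort last.txt) <(sort now.txt)    # names present now but not last time
        mv now.txt last.txt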

    Read the article

  • What parts of a motherboard age, and how can I choose one with the longest possible life?

    - by Robert Harvey
    I have a home-built computer that's probably about four years old. I realize this probably seems ancient to some folks, but computers have no moving parts (except the fans), so theoretically they should last a long time, if I still have software to run on them. A few weeks ago, it began blue-screening and freezing up, with various error messages. It almost always happened about five minutes after startup. I assumed that the video card was overheating, since the cheap little fan on the heatsink died, so I replaced it. Long story short, after upgrading the video drivers a couple of times and performing some other troubleshooting, I remembered that the last time this happened, I took out the memory SIMMs and cleaned the contacts with a gum eraser, so I did that again (noting that the SATA cables were very close to the chips on the SIMMs). I re-routed the cables and reinstalled the SIMMs. So far, so good; the machine has been trouble-free since. But blue-screens are distressing; I never know what bits are being chewed up in my OS installation when something like this happens. So I'm wondering if I'm choosing my components properly. If it matters, it's an Intel D915GAG motherboard and Corsair memory, but what I'm wondering is: should I be looking for certain characteristics when I choose these parts for my next computer, so that I can avoid this problem in my next build?

    Read the article

  • How can I uninstall Broadcom Bluetooth software that won't disappear in Windows XP?

    - by T. Webster
    My situation is similar to this question for Windows 7, except my OS is Windows XP SP3. I have recently realized I made the mistake of buying a Bluetooth adapter and installing the Broadcom/Widcomm Bluetooth stack driver software. Now that I know that the software is no good, I want to uninstall it. Then I'll install the Toshiba stack for the Cirago adapter. Right now, when I double-click on the setup.exe file that came with the Cirago driver, nothing happens (it doesn't start setup). I've attempted to uninstall all Bluetooth driver software in Device Manager, and I don't see any remnants of any Bluetooth drivers there. But I do see that the little Bluetooth icon still persists. I don't see anything about Bluetooth, Broadcom, or Widcomm in Add/Remove Programs. I don't see any folders named Broadcom or Widcomm in the Program Files folder, either. But I do see that Broadcom does show up in the registry with respect to Bluetooth, as shown here. I also renamed every file in C:\Windows\inf that starts with "bth" so that it ends with ".old". What should I do now to completely wipe this persistent Broadcom Bluetooth software off my computer?

    Read the article

  • Per connection bandwidth limit

    - by Kyr
    Apparently, our server box running Windows Server 2008 R2 has a per-connection bandwidth limit of 0.2 MB/s. Meaning, while one TCP connection can pull at max 0.2 MB/s, 60 parallel connections can pull 12 MB/s. We first noticed this when trying to check out a large SVN repository from this server. I used a simple Java application to test this, transferring data from server to workstation using a variable number of threads (one connection per thread). The server part of the application simply writes a 1 MB memory buffer to the socket 100 times, so there is no disk involvement. Each connection topped out at 0.2 MB/s. The same per-connection limit applied for a single connection as for 60 parallel connections. The problem is that I have no idea where this limit comes from. I have very little experience administrating Windows Server, so I was mostly trying to find something by googling. I have checked the following: Local Computer Policy - QoS Packet Scheduler - Limit reservable bandwidth: it's Not Configured; Group Policy Management Console: we have two GPOs, but neither has any policy-based QoS defined; there isn't any bandwidth limiter program installed, as far as I can tell. We're using the standard Windows Firewall. I can update this question with any additional information if needed.
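
    One plausible mechanism for a flat per-connection ceiling is a fixed TCP window: per-connection throughput is window size divided by round-trip time, and, for instance, a fixed 64 KB window over a ~300 ms round trip tops out at about 0.2 MB/s (65536 / 0.3 ≈ 218 KB/s). A hedged place to look on the server:

        netsh interface tcp show global
        rem if "Receive Window Auto-Tuning Level" comes back "disabled", try:
        netsh interface tcp set global autotuninglevel=normal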

    Read the article

  • Trying to delete directory with "rm -rf", but get message that it's not empty

    - by Ben Hocking
    I've tried deleting a directory using "rm -rf" and I'm getting the message "Directory not empty":

        Bens-MacBook-Pro:please benjaminhocking$ ls -lart empty_directory/
        total 16
        drwxr-xr-x  5 benjaminhocking  staff  170 Aug 27 14:46 .
        drwxr-xr-x  3 benjaminhocking  staff  102 Aug 27 15:28 ..
        Bens-MacBook-Pro:please benjaminhocking$ rm -rf empty_directory/
        rm: empty_directory/: Directory not empty
        Bens-MacBook-Pro:please benjaminhocking$ rmdir empty_directory/
        rmdir: empty_directory/: Directory not empty

    If I try the same thing using Finder (dragging the folder to the Trash), I get the message "The operation can’t be completed because the item “empty_directory” is in use." I've tried doing xattr -d com.apple.quarantine, purely out of superstition, but it did no good. A probably important piece of context is that this directory was initially inside a directory that should've been deleted by a "make clean" command I issued prior to Terminal locking up on me, after which a little over half of the other programs I had running also locked up, including Skype, and eventually the OS itself. I ended up having to reboot the computer by pressing and holding the power key. Edit to add: another important piece of information I left off was that this was happening in an encrypted folder à la encfs. I was able to track down the corresponding folder on the encrypted side of things and delete it there. I still don't know why I couldn't do it from the decrypted side of things like I normally do. I'll leave this unanswered for now in case anyone has a good answer for that.
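
    Since Finder reports the item as "in use", a quick check before rebooting is to list who still holds handles under the directory; a sketch:

        lsof +D empty_directory/         # processes with files open anywhere under the directory
        sudo lsof +D empty_directory/    # again as root, in case the holder runs as another user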

    Read the article

  • KVM and JBoss Java Application Server

    - by Jason
    We have a large Xen deployment running on both RHEL and CentOS and have recently started looking at KVM, since this seems to be where the future of VMs on Linux is. We can load the server and get everything running without an issue. However, when we load up a new one with JBoss (4.2 Community Edition, Sun JDK 6) and load a large EAR, the server goes a little crazy. The %sy will jump to 80-99% and just hang for large periods of time, and we see a similar jump in %us on the host machine. We thought at first this might be I/O, as it seems to happen at the start of JBoss, but then it would "cool down" after everything got loaded. We did some tests by extracting some large tar.gz files and using jar -xvf on the EAR but could not re-create it. Then we started thinking this might be some type of memory access issue. We loaded a C program that would generate a lot of memory activity, and sure enough we saw the spikes again. Not as high, mind you, but we did see them. We then wrote a small Java program to do the same thing, and sure enough we saw the jump again. Any thoughts on what might be causing this? Is this just the way KVM works? As a side note, we do NOT see this behavior on any other setup: Xen, VMware and/or native iron. The system does seem a bit slower than our Xen / VMware ones.
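
    High %sy that tracks guest memory churn is the classic signature of shadow page tables, i.e. KVM running without hardware-assisted paging. A hedged check on the host:

        egrep -wc 'ept|npt' /proc/cpuinfo          # >0: the CPU advertises EPT (Intel) or NPT (AMD)
        cat /sys/module/kvm_intel/parameters/ept   # Intel hosts: Y means KVM is actually using EPT
        cat /sys/module/kvm_amd/parameters/npt     # AMD hosts: Y means KVM is actually using NPT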

    Read the article

  • Understanding NFS4 exports and pseudofilesystem

    - by Trevor Harrison
    I think I understand the way pre-NFSv4 exports work, specifically the namespace of the exported point (i.e. export /mnt/blah on the server, use mount server:/mnt/blah /my/mnt/point on the client). However, I'm having a hard time wrapping my head around NFSv4 exports. What I've been able to gather so far is that you export a 'root' by marking it with fsid=0, which you then import on the client side by referring to it as '/' (i.e. exportfs -o fsid=0 /mnt/blah on the server, mount server:/ on the client). However, after that, it gets a little weird. From my playing around, it seems I can't export anything else that's not under /mnt/blah. For example, exportfs /home/user1 fails when trying to mount from the client unless /mnt/blah/home/user1 exists on the server. If this is the case, what is the difference between exportfs /mnt/blah/subdir1 on the server plus mount server:/subdir1 on the client, and just skipping the exportfs and mounting whatever subdir of /mnt/blah you want? Why would you need to export anything other than the root? It's all in the same namespace anyway.
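
    The usual pattern is to graft extra trees into the pseudo-filesystem with bind mounts, so the fsid=0 root stays the single namespace the client sees. A sketch, assuming /mnt/blah is the pseudo-root:

        mkdir -p /mnt/blah/home/user1
        mount --bind /home/user1 /mnt/blah/home/user1

        # /etc/exports
        /mnt/blah             client(rw,fsid=0,no_subtree_check)
        /mnt/blah/home/user1  client(rw,no_subtree_check)

        # client side
        mount -t nfs4 server:/home/user1 /my/mnt/point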

    Read the article

  • Recover NTFS data from a ZFS pool that was exposed as an iSCSI target

    - by David
    This was me being stupid, and the data is by no means critical; this is a learning experience first, time saver second. I set up a 100GB iSCSI target via the bare-bones instructions in napp-it. It's a volume LU. I then had my Windows 7 machine connect to the iSCSI target, formatted it to NTFS, and tested its performance with some large ISO file transfers. I then unmapped the drive, reconnected to the target, and was forced to format to NTFS again. It was then I realized the files I had transferred only existed on the iSCSI target. I threw a little fit and then went about my business. When I was cleaning up my experiment I noticed in this screen: http://imgur.com/1xlcu.jpg that my experimental target tank/iSCSI still has a lot of data in it. Assuming my ISOs are still in this pool, how would I go about recovering them? While writing this I used GetDataBack for NTFS from www.runtime.org, and while it found two previous NTFS partitions, there was no data.
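
    Assuming the zvol still holds the blocks (a quick format mostly rewrites metadata), the safe order is to snapshot first and then aim the recovery tool at the raw device or an image of it. A sketch with this pool's names, on the OpenSolaris family that napp-it usually runs on:

        zfs snapshot tank/iSCSI@rescue    # freeze the current state before anything else
        zfs list -t snapshot              # confirm the snapshot exists
        # raw image the recovery tool can scan; a deep/raw scan mode is likely
        # needed, since the volume has been formatted twice
        dd if=/dev/zvol/rdsk/tank/iSCSI of=/export/rescue.img bs=1M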

    Read the article

  • Sending Mail from Web App to Google Apps won't work - internal routing? VPS

    - by Charlino
    I've got a web application, www.mysuperwebapp.com, which sends out emails for various reasons - the contact us page is a good example. I am using Google Apps on the domain and I've set up a Google Apps group, Support ([email protected]), which I want the emails from the contact us page to go to. But the emails don't seem to be arriving... I thought it could be that the group's security is a little tighter than normal email, so I changed the contact us email to go to [email protected] - but they still didn't appear. So I'm guessing that it has something to do with internal routing and the messages aren't leaving the server/network at all, e.g. sending an email from the mysuperwebapp.com computer to a mysuperwebapp.com email address. I put an entry into the hosts file for 123.123.123.123 mysuperwebapp.com but that doesn't seem to have helped. Also, there doesn't seem to be anything of interest in the event log. What do I need to do? Or what do I need to get my VPS hoster to do? TIA, Charles Ps. The VPS is a Windows 2008 box with IIS7 and the default SMTP (IIS6?) server. The web app is ASP.NET MVC - not that that should matter.
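
    The classic cause of this is the local SMTP service believing it is authoritative for the domain and delivering to a local mailbox instead of looking up MX records. Two hedged checks: confirm MX points at Google, and make sure the domain is not listed as a local domain in the IIS SMTP virtual server:

        nslookup -type=mx mysuperwebapp.com
        rem healthy output lists ASPMX.L.GOOGLE.COM and friends; if the IIS SMTP
        rem virtual server has mysuperwebapp.com under Domains as a local/alias
        rem domain, mail for it never leaves the box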

    Read the article

  • Coffee spilled inside computer, damaged hard drive

    - by Harpreet
    Today coffee spilled over my table, and some of it (very little) reached the PC case placed under the table. I think a little bit of it got inside the PC case through the front. As that happened the fan started running very fast and making noise. I tried restarting to see if it would come back fine, but the computer didn't start again. First it gave an error of "Alert! Air temperature sensor not detected" and didn't start. I then tried starting the computer multiple times, but it gave some memory error. I was not able to start the computer. In case there's a problem with the hard disk or something related to memory, is there any way we can extract our work or data? I am scared that I won't be able to extract my work if some problem like that occurs. What options would I have? Help!! EDIT: I have attached the photo here and you can see the spilt area in the red circle. The hard drive electronics have been affected and the internal speaker may also have been affected. Any advice on cleaning, and whether the hard drive can still work? EDIT 2: Are there any professional services offered to extract data from a blemished hard disk, like this one, in case I am not able to run it personally?

    Read the article

  • Testing for disk write

    - by Montecristo
    I'm writing an application for storing lots of images (size <5MB) on an ext3 filesystem; this is what I have for now. After some searching here on Serverfault I have decided on a structure of directories like this: 000/000/000000001.jpg ... 236/519/236519107.jpg This structure will allow me to save up to 1'000'000'000 images, as I'll store a max of 1'000 images in each leaf. I've created it; from a theoretical point of view it seems ok to me (though I've no experience with this), but I want to find out what will happen when there are directories full of files in there. A question about creating this structure: is it better to create it all in one go (takes approx 50 minutes on my PC) or should I create directories as they are needed? From a developer point of view I think the first option is better (no extra waiting time for the user), but from a sysadmin point of view, is this ok? I've thought I could act as if the filesystem were already under the running application: I'll make a script that will save images as fast as it can, monitoring things as follows: how much time does it take for an image to be saved when there is no or little space used? How does this change when the space starts to be used up? How much time does it take for an image to be read from a random leaf? Does this change a lot when there are lots of files? Does launching this command sync; echo 3 | sudo tee /proc/sys/vm/drop_caches make any sense at all? Is this the only thing I have to do to have a clean start if I want to start over again with my tests? Do you have any suggestions or corrections?
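
    On the drop_caches question: yes, it makes sense before the read tests specifically, since it evicts the page cache so the first read actually hits the disk (run sync first, as shown, so dirty pages are flushed and the drop is complete). A minimal bash sketch of the write-timing probe described above, using a 4MB dummy file:

        #!/bin/bash
        # pick a random leaf, write one 4MB file with an fsync, report elapsed time
        leaf=$(printf '%03d/%03d' $((RANDOM % 1000)) $((RANDOM % 1000)))
        mkdir -p "$leaf"
        t0=$(date +%s.%N)
        dd if=/dev/zero of="$leaf/test.jpg" bs=1M count=4 conv=fsync 2>/dev/null
        t1=$(date +%s.%N)
        echo "$leaf: $(echo "$t1 - $t0" | bc) s"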

    Read the article

  • Virus-ridden computer freezes on startup - can't access safe mode

    - by Eric
    Someone whom I love but who cannot be trusted with a live internet connection downloaded a particularly nasty virus that in turn downloaded a variety of unknown other viruses onto my home computer. The computer now freezes completely a few seconds after reaching the desktop and is unresponsive to any keyboard or mouse command. There are videos of my little kid on this hard drive that are not backed up and that I cannot bear to lose. But if I could get in there long enough to copy them off to an external drive, I would have no problem doing a clean Windows install to fix the problem; everything else is backed up online, but the videos were too large. Normally I would start by going into safe mode, but I have a large Dell monitor that doesn't show anything until the welcome screen appears. I think that I have gotten into the setup screen once or twice by mashing keys before I can see anything, but this monitor doesn't support that, so I can't see what I'm doing to get it to boot from CD or anything else. I'm at my wits' end. Any advice?

    Read the article

  • NAT confusion regarding Cisco ASA 5510

    - by LonelyLonelyNetworkN00b
    I'm setting up my first Cisco firewalls. A little information first: I have two ASA 5510s set up in a working active/standby pair. From my ISP I have two public subnets, a /29 and a /26. On my DMZ interface I have the /26 configured. On my WAN interface I have configured the /29 IPs. My ISP routes the /26 via the /29 primary IP. I'm running ASA 8.2. I've turned NAT-control off, because I don't want to use NAT for anything other than some internal interfaces. In essence I don't want to use NAT unless I specify it. I have an internal interface with the network 192.168.100.0/24. I've tried setting up NAT like this:

        nat (inside) 1 192.168.100.0 255.255.255.0
        global (WAN) 1 interface

    I was under the impression that this would let connections going from 192.168.100.0/24 out the WAN interface be port-address-translated. I'm not getting this to work for some reason. The inside interface has a security level of 100, and WAN has a security level of 0.
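
    One thing to keep in mind on 8.2: with nat-control disabled, traffic that doesn't match a nat statement is simply routed untranslated, so inside hosts would leave the WAN interface with their 192.168.100.x source addresses and get dropped upstream, which looks exactly like "NAT not working". A hedged checklist on the active unit:

        ! confirm the nat (inside) 1 / global (WAN) 1 pair is in the running config
        show run nat
        ! per-statement hit counters; zero hits means traffic isn't matching statement 1
        show nat
        ! live translations; empty while inside hosts browse means no PAT entry is built
        show xlate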

    Read the article

  • Backup of TFS with DPM 2007 - different backup times

    - by user46516
    I have DPM set up to back up my TFS server every 30 minutes, the reason being it's a way better interface than the quirky SQL backup interface. I also do a full backup nightly using a SQL maintenance job. My thinking is I would use DPM to restore my databases in case of losing my database, and the nightly full backup would be a "just in case" the DPM restore doesn't work. I was thinking a little harder about this setup today and started to think about the fact that the DPM backups of the individual databases happen in different 30-minute windows, i.e. one happens at 13h30, another at 13h34, etc. Would this difference in time be a problem when it comes to restoring the TFS server? If I restore the databases and they are from different times, will this create corruption, with pointers in one database pointing to missing items in the other database? Do the databases even rely on each other, or are they completely independent? Lastly, how would the SQL (log) backup cope with this?

    Read the article

  • Fedora 12 on VMware network disabled on restore

    - by Chaitanya
    I have a Fedora 12 guest running on VMware on Windows 7. I use it mainly for the occasional Linux dev. Whenever I restart the guest, networking works fine. But if I close the VMware player and save state, the next time I start the image, networking is disabled (red x on the network icon, message saying networking is disabled). I can't seem to find a way to restore networking. I have to reboot the guest to get my network access back again. My Ubuntu image doesn't have this problem: I can close the player, and when I re-run the image I can pick up where I left off, with all the open Firefox windows and application windows as I left them. Fedora saves state, but doesn't seem to enable networking. There is a relevant warning I have seen: "SELinux is preventing /sbin/ifconfig "read" access to /var/run/vmware-active-nics." But I am not sure how to solve it. I know Fedora isn't officially supported by VMware, but it seems to be working fine for the most part and meeting my needs, except for this one little issue. Any help would be much appreciated.
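
    That AVC denial is very likely the whole problem: the interface scripts can't read the state file left behind across the suspend, so the guest never re-enables its NICs after a resume. The standard first steps, as a sketch:

        restorecon -v /var/run/vmware-active-nics    # reset the file's SELinux context
        ausearch -m avc -ts recent                   # review recent denials (audit package)
        # while testing only: setenforce 0 puts SELinux in permissive mode to confirm the cause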

    Read the article

  • My Computer hangs for a few minutes just after startup, and then is fine.

    - by EvilChookie
    So I just built myself a reasonably beefy computer, and I installed Windows 7 on it. However, I start the machine up each morning and within a few minutes, the computer will semi-hang. That is, the mouse is responsive, and most of the time I can open Task Manager or a new tab in Chrome. Occasionally windows will be labelled as 'Not responding'. Then, the machine will get over its problem, and will be nice and quick until I turn it off. Here are my specs:

        CPU: AMD Phenom II X4 955 Black (quad core, 3.2GHz)
        RAM: 4GB of DDR3 1300
        MOBO: ASUS M4A785T-M (latest BIOS)
        HARD DRIVES: 2x1TB Western Digital Caviar Blacks in RAID-0
        OS: Windows 7 Ultimate x64
        GPU: ASUS GT240 1GB

    I believe this issue relates to the RAID array, as I didn't have the lockup problem before I created the array. I purchased a second drive and reformatted after creating a RAID array, since the single drive was a little on the pokey side (compared to the rest of the computer). What I have tried: updated RAID drivers, malware checks, Windows Updates, disabling unnecessary services. CPU and disk activity appear to be low (via Resource Monitor), and there are no strange errors in the error log. Any thoughts?

    Read the article

  • Can't connect to Synology DiskStation through HTTPS when using Windows 7 Import

    - by LeonidasFett
    A little background to my problem: I have a Synology DiskStation 213j that I use as a backup/data storage solution. When I'm at work, I would like to push and pull files from my DiskStation, but I can't use VPN, which is forbidden for outgoing connections. So I wanted to try HTTPS so I can at least connect securely to the web interface. I mostly use Chrome, which uses the Windows certificate store. So I tried importing a self-signed certificate into it, without success: I still get a warning in Chrome telling me the connection is not secure because it can't be verified. When I import the certificate into Firefox, though, it works and I can connect through HTTPS. I checked my domain on this site: http://www.sslshopper.com/ssl-checker.html It shows no errors, only a warning that the certificate is self-signed, which is OK in this case. Anyone got any idea why importing the certificate into Windows 7 doesn't work? I tried right-clicking the domain.mydomain.de.crt file --> Install Certificate --> Next --> both options here (in the case of "Place certificate in the following store:" I selected "Third Party Root Certificate Authorities"), to no avail.
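
    One common gotcha: the wizard's default imports into the current user's store, while Chrome on Windows validates server certificates against the machine's Trusted Root Certification Authorities store, and the certificate must also carry the exact hostname you browse to. A hedged retry from an elevated prompt:

        certutil -addstore -f Root domain.mydomain.de.crt
        certutil -store Root | findstr /i mydomain
        rem "Root" is the machine's Trusted Root CA store; the earlier import into
        rem "Third Party Root Certificate Authorities" may not behave the same way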

    Read the article

  • Can I use CNAME with an IP address? Why it works (sometimes)?

    - by Maciek Sawicki
    I believe that the easiest answer to the first question is "No, you have 'A' records for this", but I accidentally set up a subdomain using a CNAME pointing to an IP address and it worked on a few computers in my office. I wonder how that was possible. Now, when I'm checking it from home, I have the following error:

        beast:~ viroos$ host somesubdomain.somedomain.com
        Host somesubdomain.somedomain.com not found: 3(NXDOMAIN)

    I'm 100% sure it used to work at my office (currently it looks like it doesn't, but I'm checking it on a different machine). Therefore I'm not 100% sure whether it worked due to some special network setup or because I tested it just after adding the DNS entry. I know this story sounds a little crazy/incredible, but can someone help me solve this puzzle? //edit: I'm adding the dig output:

        ; <<>> DiG 9.6-ESV-R4-P3 <<>> somesubdomain.somedomain.com
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 60224
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;somesubdomain.somedomain.com.  IN  A

        ;; ANSWER SECTION:
        somesubdomain.somedomain.com.  67  IN  CNAME  xxx.xxx.xxx.xx1.

        ;; AUTHORITY SECTION:
        .  1800  IN  SOA  a.root-servers.net. nstld.verisign-grs.com. 2012040901 1800 900 604800 86400

        ;; Query time: 72 msec
        ;; SERVER: 8.8.8.8#53(8.8.8.8)
        ;; WHEN: Tue Apr 10 00:11:01 2012
        ;; MSG SIZE rcvd: 136
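
    The dig output explains the flakiness: the server returns a CNAME whose target is the dotted quad with a trailing dot, so resolvers then try to resolve "xxx.xxx.xxx.xx1." as a hostname (hence the NXDOMAIN); some stub resolvers or caches short-circuit that lookup, which is why it can appear to work. The record wanted here, as a zone-file sketch with a documentation-range address:

        ; wrong: a CNAME target must be a name, never an address
        somesubdomain    IN  CNAME  203.0.113.10.
        ; right:
        somesubdomain    IN  A      203.0.113.10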

    Read the article

  • Software way to cool down an old MacBook Pro

    - by notMacBookProSuperUser
    Hi all. First, a little background: I've got lots of computers, including Linux PCs and two MacBook Pros (and a Mac mini). My concern is with my 'old' MacBook Pro (Core Duo). It really does overheat. Warranty is long void. Years ago (I'd say 2.5 years ago or so) one day it overheated so badly that the battery inflated due to the heat. I got a new battery for free, but it's still getting incredibly hot (much hotter than any other computer I've got; my newer Core 2 Duo MacBook Pro doesn't get nearly as hot as the old one). It's really a pain, because I use my old MBP when I'm in front of the TV, with it on my lap, and it can really become unbearable. I don't want to open that old MBP. On Linux I can force a new CPU 'governor' that decides how the CPU is allowed to operate: it can be 'on demand', 'always max speed', 'always speed x', etc. Does the same exist under Mac OS X? Is there a way, say if a 1.86 GHz Core Duo can run at 1.6 GHz, to tell Mac OS X: "never run this CPU above 1.6 GHz"?
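
    For reference, the Linux behaviour described looks like this with cpufrequtils (a sketch). On Mac OS X there is no built-in equivalent knob; on Core Duo machines, third-party utilities such as CoolBook were the usual way to cap frequency:

        cpufreq-set -c 0 -g powersave    # select the powersave governor on CPU 0
        cpufreq-set -c 0 -u 1.60GHz      # cap the maximum frequency
        cpufreq-info                     # confirm the current policy and limits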

    Read the article

  • Ubuntu VM "read only file system" fix?

    - by David
    I was going to install VMware Tools on an Ubuntu Server virtual machine, but I ran into the issue of not being able to create a cdrom directory in the /mnt directory. I then tested to see if it was just a permissions issue, but I couldn't even create a folder in the home directory. It keeps stating that it is a read-only file system. I know a little about Linux, but I'm not comfortable with it yet. Any advice would be much appreciated. Requested information from a comment:

        username@servername:~$ mount
        /dev/sda1 on / type ext4 (rw,errors=remount-ro)
        proc on /proc type proc (rw)
        none on /sys type sysfs (rw,noexec,nosuid,nodev)
        none on /sys/fs/fuse/connections type fusectl (rw)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        udev on /dev type tmpfs (rw,mode=0755)
        none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        none on /dev/shm type tmpfs (rw,nosuid,nodev)
        none on /var/run type tmpfs (rw,nosuid,mode=0755)
        none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
        none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
        binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)

    And the same output as root, for certain:

        root@server01:~# mount
        /dev/sda1 on / type ext4 (rw,errors=remount-ro)
        proc on /proc type proc (rw)
        none on /sys type sysfs (rw,noexec,nosuid,nodev)
        none on /sys/fs/fuse/connections type fusectl (rw)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        udev on /dev type tmpfs (rw,mode=0755)
        none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        none on /dev/shm type tmpfs (rw,nosuid,nodev)
        none on /var/run type tmpfs (rw,nosuid,mode=0755)
        none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
        none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
        binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)
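
    A detail worth knowing here: the mount command prints /etc/mtab, which is not updated when the kernel flips a filesystem read-only after an ext4 error (errors=remount-ro, as shown on /dev/sda1), so the "rw" above can be stale. A hedged sequence to confirm and recover:

        cat /proc/mounts | grep ' / '     # the kernel's real view of the root mount
        dmesg | grep -iE 'ext4|remount'   # look for "Remounting filesystem read-only"
        sudo mount -o remount,rw /        # regain write access temporarily
        sudo touch /forcefsck && sudo reboot    # schedule a full fsck at the next boot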

    Read the article

  • Can't connect to vsftpd from outside network

    - by rick
    I know this has been asked many times before, but nothing seems to resolve my issue. I have vsftpd running on Ubuntu 10.04. I can connect with ftp localhost on the machine. I can connect from another machine in my network. I just cannot connect from outside. The machine is behind an AirPort Extreme managed by AirPort Utility on a Mac. Port 21 is open as per nmap:

        macmini:~$ nmap localhost
        Starting Nmap 5.21 ( http://nmap.org ) at 2011-04-10 23:49 EDT
        Nmap scan report for localhost (127.0.0.1)
        Host is up (0.00045s latency).
        Hostname localhost resolves to 2 IPs. Only scanned 127.0.0.1
        rDNS record for 127.0.0.1: localhost.localdomain
        Not shown: 997 closed ports
        PORT    STATE SERVICE
        21/tcp  open  ftp
        22/tcp  open  ssh
        631/tcp open  ipp

    netstat says 21 is listening:

        macmini:~$ netstat -lep --tcp | grep ftp
        (Not all processes could be identified, non-owned process info
         will not be shown, you would have to be root to see it all.)
        tcp    0    0 *:ftp    *:*    LISTEN

    iptables:

        macmini:~$ sudo iptables -L
        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination
        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination
        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

    When I try to connect from my external IP (or a DynDNS name which resolves there) it times out ("control connection timed out"). As I know very little about networking, I feel like something may jump out as clearly wrong?
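
    With port 21 open locally but timing out from outside, the first suspect is the AirPort Extreme not forwarding the port at all, and the second is passive mode: vsftpd hands the client a random high data port unless the range is pinned and forwarded too. A sketch of /etc/vsftpd.conf additions (forward 21 and 64000-64100 on the AirPort as well):

        pasv_enable=YES
        pasv_min_port=64000
        pasv_max_port=64100
        # the address below is a hypothetical external IP; pasv_addr_resolve=YES
        # allows a hostname (e.g. the DynDNS name) instead
        pasv_address=203.0.113.10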

    Read the article
