Search Results

Search found 19279 results on 772 pages for 'everything'.


  • Ballooning Mac OS X kernel_task and Wired memory usage. How to diagnose / fix?

    - by user28930
    I have a very strange issue, which I'm having a hard time diagnosing as to the root cause. I have a Mac Pro (2008, 8-core 2.8 GHz, 8800GT) with 14 GB of RAM (recently upgraded because of this issue!).

    When I boot my system and log in, vm_stat / top / Activity Monitor will show that kernel_task has about 150 MB allocated, and the machine has about 800 MB of Wired memory allocated. Even initially, 800 MB seems an awful lot of wired memory with no applications running - but it gets worse. (NB: Wired is locked, unswappable memory.)

    After a very short time, sometimes triggered by something as simple as launching a terminal, kernel_task will balloon to 800-900 MB of Real Mem (RSIZE), and Wired memory will accelerate to 1.6 GB (implying that all the extra memory requests are for wired RAM in the kernel). If I quit everything (i.e. no running applications, bar an activity monitor or terminal to view top), there is no appreciable reduction in either kernel_task RSIZE or Wired memory usage. Going the opposite way, loading the system with tasks also shows that wired memory does not get reduced - and, importantly, it is not reduced in preference to heavy swapping. If I log out and log back in again, it reduces a bit (450 MB kernel_task, 1.28 GB Wired), but not back to the start.

    I'm not running any wacky kexts - and furthermore, kextstat shows no huge memory allocations there; the largest is com.apple.nvidia.nv50hal at about 4 MB. The machine feels overall more sluggish when this has happened - unsurprisingly, because such a huge amount of RAM has been marked as non-pageable.

    So I have a few questions:

    1) Is there a good way to diagnose what has allocated all of this wired memory? It's often over 2 times the kernel_task size, with no applications running. The real memory total doesn't seem to add up - it seems that there is a bunch of RAM that isn't being accounted for anywhere.

    2) What is happening to cause the kernel to suddenly require 6 times as much memory?
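
    One way to watch where the wired memory goes is to sample vm_stat and kextstat over time and diff the results. A minimal sketch, assuming a 4096-byte page size and a 60-second sampling interval (both assumptions, not from the post):

        #!/bin/sh
        # Sample wired memory once a minute; 4096-byte pages are assumed.
        while true; do
            wired_pages=$(vm_stat | awk '/Pages wired down/ {gsub("\\.", ""); print $4}')
            echo "$(date '+%H:%M:%S') wired: $((wired_pages * 4096 / 1048576)) MB"
            kextstat >> kextstat.$(date '+%H%M%S').log   # Size/Wired columns are hex; diff two logs later
            sleep 60
        done

    Comparing a sample taken before the balloon with one taken after should show whether any kext grows along with the wired total, or whether the growth is entirely in the kernel proper.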

    Read the article

  • Basic IP address structure

    - by dannymcc
    We currently have a few servers, around 30-40 workstations and 16 phones. Each device has a static IP address. As an example, the standard settings for a new workstation are:

    IP: 192.168.1.XXX
    Subnet: 255.255.255.0
    Gateway: 192.168.1.99
    DNS: 192.168.1.50

    As I am slowly exploring new server OSes and virtualisation etc., I am getting close to wanting a wider range of IP addresses. What I would like to do is separate the devices by IP as follows:

    Servers 192.168.1.XXX
    Workstations 192.168.2.XXX
    Printers 192.168.3.XXX
    Phones 192.168.4.XXX
    VM's 192.168.5.XXX

    Is this a bad idea, or is this a common way of doing things? My biggest concern is the phones and subnet masks. The phones are managed by our provider, although I have access to the server that runs them. Would I need to change the subnet mask to 255.255.0.0 on all devices, or only on those that change? For example, the phones don't need to connect to any devices other than other phones and the phone server. So if I keep the phones on 192.168.1.XXX with a subnet mask of 255.255.255.0 and then move everything I have complete ownership/control of to 192.168.X.XXX with a new subnet mask of 255.255.0.0, would that work?
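
    A quick way to reason about the mask question is to AND each address with the mask and compare the results: two hosts are "local" to each other only if the network portions match. A small sketch using the example addresses from the question:

        #!/bin/bash
        # Print the network address for an IP and mask by ANDing each octet.
        network() {
            IFS=. read -r a b c d <<< "$1"
            IFS=. read -r m1 m2 m3 m4 <<< "$2"
            echo "$((a & m1)).$((b & m2)).$((c & m3)).$((d & m4))"
        }
        network 192.168.1.10 255.255.255.0   # 192.168.1.0
        network 192.168.2.10 255.255.255.0   # 192.168.2.0  (different network, traffic must be routed)
        network 192.168.1.10 255.255.0.0     # 192.168.0.0
        network 192.168.2.10 255.255.0.0     # 192.168.0.0  (same network under a /16 mask)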

    Read the article

  • iSCSI, failover and XenServer

    - by jemmille
    I have an iSCSI failover implementation set up so that if one of my storage units fails the other takes over immediately (it also runs the NFS shares). When failover occurs, volumes are exported, the IP is switched to the other machine and the targets are reconfigured. The failover of the storage system itself works just fine. I use NexentaStor for my filer.

    When I do a test (manual) failover of my storage the following occurs (note: I run the admin VMs on NFS and customer VMs on iSCSI):

    All NFS-based VMs remain up and working perfectly through the failover and after.
    All VMs running on iSCSI eventually report the following: an error about not being able to write to a particular block, an error about journaling not working, and then the file system goes read-only.

    To get the VMs working again I have to do the following:

    1) Force shutdown of the "broken" VMs.
    2) Detach the iSCSI SR.
    3) Re-attach the iSCSI SR.
    4) Boot the VM on a different server (5 in my pool).

    If I don't boot on a different server I get this error: Internal error: Failure("The VDI <uuid> is already attached in RW mode; it can't be attached in RO mode!"). The only way I have found to fix that error is to reboot the entire server it was running on previously, which is obviously a huge pain.

    Currently multipathing is NOT enabled (it can be, and the same thing still occurs). I have edited much of the /etc/iscsid.conf file to work with the timeout settings, but to no avail. In short, my storage fails over properly but XenServer does not keep the connection alive. As a thought, the error that shows up in step 4 above might be the ultimate cause, and fixing that would fix everything? Any help would be appreciated more than you know.
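
    For reference, the detach/re-attach cycle can be scripted with the xe CLI. A rough sketch, assuming the SR's name-label is "iscsi-sr" (a made-up name; adapt it to your pool before use):

        #!/bin/bash
        # Hypothetical SR name; substitute your own.
        SR_UUID=$(xe sr-list name-label="iscsi-sr" --minimal)

        # Unplug and re-plug the SR's PBD on each host (the "detach/re-attach" step).
        for PBD in $(xe pbd-list sr-uuid="$SR_UUID" --minimal | tr , ' '); do
            xe pbd-unplug uuid="$PBD"
            xe pbd-plug uuid="$PBD"
        done

    For the stuck RW attachment, listing the VDI's VBDs (xe vbd-list vdi-uuid=...) and force-unplugging them is sometimes enough to avoid the full host reboot, though that isn't guaranteed and won't address why XenServer drops the session in the first place.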

    Read the article

  • How do I host multiple independent, secured SharePoint sites (WSS 3.0) without using Active Directory?

    - by Kyle Noland
    I have a SharePoint site set up on one of my networks to service Active Directory users. To be clear, this is a Windows SharePoint Services 3.0 installation running on Windows Server 2003 Standard. It is not an option to upgrade the server or SharePoint version. Management would like to create several new sites, one for each of a handful of clients. These sites will be used like "dropboxes" or FTP sites so that my company can make large files available to outside contacts, and vice versa. Here are my requirements: I do not want to have to create Active Directory accounts for each external contact. If possible, I would like to store the external usernames and passwords in a database that I can write a small GUI for so that management can handle adding their own external contacts. Each client site must be sandboxed from each other and from my main company SharePoint site. I would like to keep everything running on port 80 and be able to access the sites as either clientname.mycompany.com or www.mycompany.com/clientname If anybody has ever done this I would really appreciate hearing about any lessons you learned and suggestions for how to set this up. Kyle

    Read the article

  • Solaris TCP stack tuning

    - by disserman
    We have a large web project (about 2-3k requests per second), using haproxy (http://haproxy.1wt.eu/) as a frontend and load balancer between the java application servers. The frontend (haproxy) is running on Linux, but we are going to migrate it to Solaris 10, as all our other servers run under Solaris. After switching the traffic over I see two things:

    a) the web site loads more slowly (5-10 seconds with images, compared to 2-3 seconds on Linux)
    b) sometimes haproxy fails to perform a "lifecheck" (get a special web page and analyze the http response code) due to a socket timeout.

    After switching traffic back to Linux everything is okay. I've tried to tune all the params I found in /dev/tcp, but no progress. I believe the problem is in some open socket limitation. If someone can point me to the answer, it would be greatly appreciated.

    p.s. haproxy is running under Xen DomU on Linux (kernel 2.6.18, Debian 5), and under a zone on Solaris (10 u8). The only thing we did on Linux was increase ip_conntrack_max (I believe the Solaris option tcp_conn_req_max_q is the equivalent).
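
    The /dev/tcp parameters are read and set with ndd. A minimal sketch of the listen-queue and timer settings most often raised for a busy proxy (the values shown are examples only, not recommendations from the post):

        #!/bin/sh
        # Inspect current values first.
        ndd -get /dev/tcp tcp_conn_req_max_q
        ndd -get /dev/tcp tcp_conn_req_max_q0
        ndd -get /dev/tcp tcp_time_wait_interval

        # Raise the accept backlogs and adjust TIME_WAIT (example values only).
        ndd -set /dev/tcp tcp_conn_req_max_q 4096
        ndd -set /dev/tcp tcp_conn_req_max_q0 4096
        ndd -set /dev/tcp tcp_time_wait_interval 30000

    ndd settings do not survive a reboot, so they are usually reapplied from an init script. Note also that on Solaris 10 these are system-wide TCP settings and generally have to be set from the global zone, not from inside the zone running haproxy.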

    Read the article

  • How to bypass firewall to connect to a proxy server?

    - by Bruce
    I am conducting a small experiment on my office network. I have set up a proxy server on my desktop machine (connected to my LAN) and I have volunteers access the internet via my proxy server. Everything is working well. The problem is that people cannot connect to the proxy server from their laptops. I asked my network admin and he said the wireless network has a firewall which prevents users from connecting to my proxy. He said I could tunnel the traffic or use SSH, though. I am afraid I do not understand fully what is going on. Is there a way by which users connected to the wireless network can connect to my desktop?

    I am using FreeProxy on Windows as my proxy server: http://www.handcraftedsoftware.org/index.php?page=download FreeProxy allows me to create a SOCKS 4/4a/5 proxy. Is that what I need? Part of the experiment involves logging the URL requests of the users. I am doing a measurement study, so any solution must allow me to log the URL requests of users. Also, what changes do I need to make in the browser configuration?
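
    If the desktop is reachable over SSH, a local port forward is the usual way around a firewall like this. A minimal sketch, assuming the proxy listens on port 8080 and an SSH server runs on the desktop (both assumptions, not details from the post):

        # Run on each laptop: forward local port 8080 to the proxy running on the desktop.
        ssh -N -L 8080:localhost:8080 user@desktop.example.com

        # Then point the laptop's browser at HTTP proxy localhost:8080.
        # Requests still pass through FreeProxy on the desktop, so URL logging is unaffected.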

    Read the article

  • Data Protection Manager System Protection Backups Failing

    - by TrueDuality
    I'm just starting to set up DPM 2010 in a test environment with a Domain Controller and a File Server. Everything seems to be working fairly well and I can get all of my backup jobs to succeed except for the "Computer\System Protection" backups. Both servers are running fully up to date 64 bit Windows Server 2008 R2 Enterprise with Service Pack 1. The error that is being provided is:

    DPM cannot create a backup because Windows Server Backup (WSB) on the protected computer encountered an error (WSB Event ID: 517, WSB Error Code: 0x8078001D). (ID 30229 Details: Internal error code: 0x809909FB)

    This Microsoft Knowledge Base article describes the issue perfectly and provides a hotfix. I downloaded the hotfix, moved it onto the affected server, attempted to run it and received the following error: "The update is not applicable to your computer." I've verified that I have indeed downloaded the 64 bit version. According to this thread the hotfix got rolled into Service Pack 1, yet I'm still experiencing the issue. Both machines do have the Windows Server Backup feature installed. Can anybody point me in the right direction? What am I missing?

    Read the article

  • Backing up VMs to a tape drive

    - by Aljoscha Vollmerhaus
    I've got myself one of these fancy tape drives, an HP LTO2 with 200/400 GB cartridges. The st driver reports it like this:

    scsi 1:0:0:0: Sequential-Access HP Ultrium 2-SCSI T65D

    I can store and retrieve files like a charm using tar; both tar cf /dev/st0 somedirectory and tar xf /dev/st0 work flawlessly. However, what I really would like to back up are LVM LVs. They contain entire virtual machines with varying partition layouts, so using mount and tar is not an option. I've tried using something like dd if=/dev/VG/LV bs=64k of=/dev/st0 to achieve this, but there seem to be various problems associated with this approach.

    Firstly, I would like to be able to store more than one LV on a single tape. Now I guess I could seek to concatenate the data on the tape, but I think this would not work very well in an automated scenario with many different LVs of various sizes.

    Secondly, I would like to store a small XML file along with the raw data that contains some information about the VM contained in the LV. I could dump everything to a directory and tar it up - not very desirable, as I would have to set aside huge amounts of scratch space. Is there an easier way to achieve this?

    Thirdly, from googling around it seems like it would be wise to use something like mbuffer when writing to the tape, to prevent what Wikipedia calls "shoe-shining" the tape. However, I can't get anything useful done with mbuffer. The mbuffer man page suggests this for writing to a tape device:

    mbuffer -t -m 10M -p 80 -f -o $TAPE

    So I've tried this:

    dd if=/dev/VG/LV | mbuffer -t -m 10M -p 80 -f -d 64k -o /dev/st0

    Note the added "-d 64k" to account for the 64k block size of the tape. However, reading data back from a tape written in this way never seems to yield any useful results - dd has been running for ages now, and has managed to transfer only 361M of data from the tape. What's wrong here?
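
    One way to put several LVs (plus a small metadata file each) on one cartridge is to use the non-rewinding device /dev/nst0: each close writes a filemark and leaves the tape positioned after it, so every write becomes its own tape file. This is an assumption-laden outline rather than a tested recipe - the LV names, snapshot size, block size and mbuffer buffer size are all made up:

        #!/bin/bash
        TAPE=/dev/nst0          # non-rewinding device: stays positioned after each file it writes
        mt -f "$TAPE" rewind

        for LV in /dev/VG/vm1 /dev/VG/vm2; do
            NAME=$(basename "$LV")

            # Tape file N: the per-VM metadata, as a tiny tar archive.
            echo "<vm name=\"$NAME\"/>" > /tmp/"$NAME".xml
            tar -cf "$TAPE" -C /tmp "$NAME".xml

            # Tape file N+1: the LV image, read via a snapshot so the VM can keep running,
            # buffered with mbuffer and written in fixed 64k blocks.
            lvcreate -s -L 2G -n "${NAME}_snap" "$LV"
            dd if="/dev/VG/${NAME}_snap" bs=64k | mbuffer -m 256M -s 64k -o "$TAPE"
            lvremove -f "/dev/VG/${NAME}_snap"
        done

        # Reading back: skip to the wanted tape file and use the same block size.
        #   mt -f /dev/nst0 rewind; mt -f /dev/nst0 fsf 3
        #   dd if=/dev/nst0 bs=64k of=/restore/vm2.img

    The painfully slow read-back described in the question is also consistent with dd's default 512-byte block size: reading with the same bs=64k that was used for writing is usually the first thing to check.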

    Read the article

  • Trying to run QEMU with a file as hda

    - by Felix
    I'm trying to run QEMU and use a simple file on the host system as the guest's hard drive. Here's what I attempted so far:

    $ dd if=/dev/zero of=/home/felix/vm/archlinux.img bs=1MB count=8192
    8192+0 records in
    8192+0 records out
    8192000000 bytes (8.2 GB) copied, 86.6054 s, 94.6 MB/s

    $ qemu -hda /home/felix/vm/archlinux.img -cdrom archlinux-2009.08-netinstall-i686.iso -boot d

    Then I try to install Archlinux to that file. It goes pretty well (it's able to format it, from what I can tell) until I start installing packages, at which point I get errors like this:

    And, of course, everything goes downhill from there (unable to mount the partition, corrupted files, ...). What am I doing wrong?

    Note: I'm just doing this for entertainment purposes. I don't intend to actually use this on servers or anything. The only use I can think of for this kind of installation would be to actually get an 8GB USB stick and dd that file to it and wham! You have a bootable stick with a fully fledged and customized OS, without torturing the stick through the installation.
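
    A common alternative to dd'ing a zero-filled file is to let QEMU create the image and to be explicit about how it is attached. A brief sketch of that approach - the qcow2 format and writethrough cache are just one reasonable choice, not a known fix for the errors above:

        # Create a sparse 8 GB image instead of writing 8 GB of zeros.
        qemu-img create -f qcow2 /home/felix/vm/archlinux.img 8G

        # Boot the installer against it, stating the image format and cache mode explicitly.
        qemu -drive file=/home/felix/vm/archlinux.img,format=qcow2,cache=writethrough \
             -cdrom archlinux-2009.08-netinstall-i686.iso -boot d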

    Read the article

  • Juniper router dropping pings to external interface

    - by Alexander Garden
    My organization has a Juniper SSG20-WLAN that routes our traffic to the outside world. We've been having intermittent problems with our internet connection, so I wrote up a Python script to ping the internal interface of the router, the external interface, a couple of our internal servers, the ISP router our router talks to, their upstream provider, and Google and Yahoo for good measure. It does that about every minute.

    What I have found is that when our internet goes out, our Juniper router ceases responding to pings on the external interface. Everything past that is, of course, unreachable. The internal interface and our internal servers continue to echo back without interruption. None of the counters indicate dropped packets of any type; they all look normal. The logs complain about VIP servers being unavailable but otherwise show nothing indicative of network issues.

    My questions are these: Does this exonerate our ISP? Or, contrariwise, might a problem with the connection be causing the external interface to go down? Is there somewhere else in the SSG20, besides the system log and counters, that might help me track down info on the problem?

    UPDATE: It turned out that one of the switches between my monitoring box and the router was a router itself, and was occasionally diverting traffic from the gateway to itself. Kudos to those who made suggestions along those lines. Not really sure which answer to mark as accepted, as it was really stuff in the comments that turned out to be right. Thanks for the suggestions.
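
    Given the eventual cause (a second device quietly inserting itself into the path), logging the route and the gateway's MAC address alongside the pings is a cheap way to catch this kind of thing early. A small sketch, with the gateway address made up for the example:

        #!/bin/bash
        GATEWAY=192.168.1.1        # placeholder; substitute the router's internal IP

        while true; do
            date
            # Which MAC address is answering for the gateway right now?
            arp -n "$GATEWAY"
            # Has the first few hops of the path to the outside world changed?
            traceroute -n -m 5 8.8.8.8
            sleep 60
        done >> path-watch.log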

    Read the article

  • client flips between internal and external IP addresses??

    - by jmiller-miramontes
    I have what seems like a not-particularly-complicated home network, all things considered: a DSL line comes in to a modem/router, which goes off to a switch, which supports a bunch of machines. My machines live in a 192.168.0.x address space; however, I'm running some public servers on the network, so I have a block of 8 (5, really) static IP addresses that are mapped to the servers by the router. The non-servers get 192.168.0.x addresses via NAT; some machines have static addresses and some get addresses from DHCP. Locally, I'm running a DNS server (named) to map between the domain names and the 192.168 address space. Somewhat messy, but everything basically works. Except: One of my local non-server clients occasionally switches from its internal address to its external address. That is, if I check the logs of a website I'm running internally, the hits coming from this client sometimes show up with the internal 192.168 address, and sometimes with the external (216.103...) address. It will flip back and forth for no apparent reason, without my doing anything. This can be a problem in terms of how the clients interact with the way I have some of the clients' SSH systems configured (e.g., allowing access from the internal network but not the external network), but it also Just Seems Wrong. I will confess that I'm kinda skating on the very edge of my networking competence here, but I can't for the life of me figure out what's going on. If it helps, the client in question is running Mac OS X / 10.6; its address is statically assigned, is not one of the five externally-accessible addresses, and gets its DNS from (first) the internal DNS server and (second) my ISP's DNS servers. I can't swear that none of the other NAT clients are also showing this problem; the one I'm dealing with is my everyday machine, so this is where I run into it. Does anybody out there have any advice? This is driving me crazy...
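
    Since the client resolves names against both the internal named and the ISP's servers, one quick check is whether the two give different answers for the server's name, and which resolver the OS is actually consulting at the moment. A short sketch - the hostname and resolver addresses are placeholders:

        # What does the internal DNS server say vs. the ISP's resolver?
        dig +short www.example.home @192.168.0.53
        dig +short www.example.home @203.0.113.53

        # On the Mac OS X client, show the resolver order the system is using right now.
        scutil --dns | grep -A3 'resolver #'

    If the ISP's resolver returns the external 216.103.x.x address and the client periodically falls back to it, that alone would explain the flipping in the web server logs.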

    Read the article

  • My hard drive seems to be overheating... what should I do?

    - by George Edison
    After a cold boot, the hard drive in my notebook jumps to 56°C within an hour or so of idling. Is 56°C a cause for alarm? Notes: The notebook is on a flat desk and none of the vents are obstructed. The video card is currently at 55°C and the CPU at 50°C. It's a Western Digital 250GB hard drive. SMART reports the drive healthy but does warn that:

    Edit: this problem had a very surprising ending. I inverted the notebook and unscrewed some of the panels on the back (there was one covering the hard drive, and one that provided access to the memory). I couldn't see any dust, so I simply screwed everything back together and powered it on... and it worked! The temperature is now staying at 46°C, and it feels notably cooler to the touch. So I can only assume that some internal fan was malfunctioning or something. Whatever the case, it's working now so I won't complain.

    Edit: I have an SSD now, so temperature isn't as big an issue as it was when I had a mechanical drive.
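
    For ongoing monitoring, the drive's own temperature sensor can be read directly over SMART with smartmontools. A minimal sketch - the device name and the attribute's field position are assumptions that may need adjusting for a given drive:

        # Current temperature attribute (usually ID 194, Temperature_Celsius).
        smartctl -A /dev/sda | grep -i temperature

        # Or poll it every five minutes and keep a log.
        while true; do
            echo "$(date '+%F %T') $(smartctl -A /dev/sda | awk '/Temperature_Celsius/ {print $10}')"
            sleep 300
        done >> hdd-temp.log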

    Read the article

  • No Outbound Internet on Windows Home Server

    - by Kyle B.
    Could someone provide some steps for me to check my internet connection on my Windows Home Server? It seems to have intermittent connectivity issues and I am unsure of how to diagnose the problem because it is a headless (no monitor, no keyboard) machine, so the only way to get to the device is via remote desktop (which works fine). When connected to the machine, it doesn't pull up any microsoft.com sites; some other sites it does pull up (e.g. gmail.com) and some it doesn't (stackoverflow.com). To make matters more complicated, it has worked intermittently in the past for reasons unknown. Are there tools I can use to properly diagnose the reason for the connection failure? I can ping 127.0.0.1 just fine, and I have internet working on my other router-connected machines, so I'm not sure why this one would fail. Any suggestions would be much appreciated and up-voted :)

    ** edit - thanks for the suggestions guys, I'm going to try these tonight and will update my post.

    ** edit #2 - I'm hoping this is a more permanent fix, but I have both changed my port on the router and restarted the router at the same time. The internet (for the moment) appears to be working. I will be sure to try everything we have discussed should this problem persist. Thanks, Kyle
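
    A reasonable first pass from the remote desktop session is to separate DNS problems from routing problems. A short sketch using the standard Windows command-line tools (the gateway address is a placeholder):

        rem Confirm the server's IP, gateway and DNS servers.
        ipconfig /all
        rem Can we reach the gateway at all? (192.168.1.1 is a placeholder.)
        ping 192.168.1.1
        rem Does name resolution work via the configured DNS, and via a known-good external resolver?
        nslookup www.microsoft.com
        nslookup www.microsoft.com 8.8.8.8
        rem Where along the path do packets stop?
        tracert -d www.microsoft.com

    Sites failing selectively while ping works often points at DNS or MTU rather than a dead link, so the nslookup comparison is the most telling step.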

    Read the article

  • Dual Xeon Server voltages are low

    - by Mindflux
    I've got a whitebox server running CentOS 5.7. It's a dual Xeon 5620 with 24GB of RAM. The mainboard is a SuperMicro X8DT6-F and the chassis is a SC825TQ-R720LPB, with dual 720W power supplies. We had a big power outage a couple weeks back that took down everything. I don't have any pre-outage figures for this server, and the only reason I noticed these is because when I was bringing up the servers I was checking them out with more scrutiny than usual.

    http://i.imgur.com/rSjiw.png (Image of voltage readings)

    As you can see, CPU1 DIMM is low, +3.3V is high, 3.3VSB is high, +5V is high, +12V is REAL LOW (outside the normal plus/minus 5%)... and VBAT is off the charts. With my whitebox VAR we've tried the following:

    - Swap out the PSU with one from another server I have with the same PSUs
    - Try a different power cord
    - Update the BMC/IPMI firmware in case the readings were wrong (they aren't)
    - Update the BIOS
    - Try a different PDU
    - Try a different outlet and/or circuit
    - Replace the Voltage Regulator Unit

    At this point, the only thing we haven't done, seemingly, is replace the mainboard, which is what the next step will be unless something else shines some light on the situation. I should mention the system is rock solid otherwise, which is a surprise given the 12V rail is that far off.
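
    Since the readings come from the BMC, cross-checking them from the OS with ipmitool (and looking at the BMC's event log) helps decide whether the rails or the monitoring are at fault. A brief sketch, assuming the OpenIPMI drivers and lm_sensors are installed:

        # Dump all sensor readings with thresholds; watch the 12V and VBAT lines over time.
        ipmitool sensor | egrep -i '12V|VBAT|3.3V|5V'

        # Any voltage events logged by the BMC since the outage?
        ipmitool sel list

        # A second opinion from the kernel's hardware-monitoring drivers, if configured.
        sensors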

    Read the article

  • NetApp FAS270 head doesn't see disks

    - by wfaulk
    I have an FAS270C. For months, I've been running it in a split-head manner (that is, with each head serving data totally independently, and without any clustering even being enabled) in order to facilitate moving some data around. I finally got everything situated, moved all the data to one of the heads, and was trying to get clustering set back up. Now when I try to install OnTap onto the "new" head, it cannot see any of the disks in the head shelf. (That is, the shelf into which the heads are inserted.) I've booted into maintenance mode, and it shows me that the 0b adapter, which should be the adapter that that shelf and its disks should be presented on, is in "OFFLINE (physical)" state. If I try to enable it with either "storage enable adapter 0b" or "fcadmin online 0b", it waits for about 30 seconds and then says: [fci.initialization.failed:error]: Initialization failed on Fibre Channel adapter 0b. [fci.adapter.online.failed:error]: Fibre Channel adapter 0b failed to come online. There is currently nothing attached to its external 0b port. I've tried it with and without an SFP plugged into it, and with and without its internal termination switch on. The currently active head can see those disks, and can see that two of them are assigned to the other head. Before I started reconfiguring, the "new" head could see disks on that shelf. They may even be the same disks that OnTap was installed on previously. Does anyone have any idea how to proceed?
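
    One thing worth checking from that maintenance-mode prompt: on these heads the onboard FC ports can be personality-switched between target and initiator mode, and a port left in the wrong personality shows up as exactly this kind of "OFFLINE (physical)" state. If your Data ONTAP release supports it, fcadmin can show and change this - a sketch from memory, so verify the syntax against your release before trusting it:

        *> fcadmin config                  # list onboard adapters and whether each is target or initiator
        *> fcadmin config -t initiator 0b  # set 0b to initiator so it can see the head-shelf disks

    A reboot of the head is typically required before a personality change takes effect.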

    Read the article

  • Ensuring a repeatable directory ordering in linux

    - by Paul Biggar
    I run a hosted continuous integration company, and we run our customers' code on Linux. Each time we run the code, we run it in a separate virtual machine. A frequent problem is that a customer's tests will sometimes fail because of the directory ordering of their code checked out on the VM.

    Let me go into more detail. On OS X, the HFS+ file system ensures that directories are always traversed in the same order. Programmers who use OS X assume that if it works on their machine, it must work everywhere. But it often doesn't work on Linux, because Linux file systems do not offer ordering guarantees when traversing directories.

    As an example, consider two files, a.rb and b.rb. a.rb defines MyObject, and b.rb uses MyObject. If a.rb is loaded first, everything will work. If b.rb is loaded first, it will try to access an undefined constant MyObject, and fail. Worse than this, it doesn't always just fail. Because directory traversal on Linux is not ordered, the order will differ between machines. This is worse because sometimes the tests pass and sometimes they fail - the worst possible result.

    So my question is: is there a way to make file system ordering repeatable? Some flag to ext4, perhaps, that says it will always traverse directories in some order? Or maybe a different file system that has this guarantee?
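
    To see the difference on a Linux box, compare the raw readdir order with a sorted listing; the directory name below is arbitrary:

        mkdir -p /tmp/ordertest && cd /tmp/ordertest && touch b.rb a.rb c.rb

        ls -U .     # -U lists in raw directory (readdir) order - this is what varies between file systems
        ls .        # plain ls sorts its output, so this is stable everywhere

    The durable fix is usually in the code rather than the file system: sort the result of Dir.glob / Dir.entries (or the language's equivalent) before loading the files, so the load order no longer depends on readdir at all.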

    Read the article

  • Unknown Apache2 + PHP5 FastCGI 500 error .. caused by search engine bots?

    - by rdjurovich
    My Ubuntu server is configured with Apache 2.2.8 and PHP 5.2.4-2ubuntu5.18 in FastCGI mode. Everything works well, except I am seeing 500 errors that only seem to come from bots accessing the server.. for example (access.log):

    x.125.71.104 - - [16/Nov/2011:10:27:39 +1100] "GET / HTTP/1.1" 500 41377 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
    x.40.103.239 - - [16/Nov/2011:11:05:56 +1100] "GET / HTTP/1.0" 500 14717 "-" "Mozilla/5.0 (compatible; mon.itor.us - free monitoring service; http://mon.itor.us)"
    x.249.67.114 - - [14/Nov/2011:20:57:17 +1100] "GET / HTTP/1.1" 500 101 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
    x.55.39.85 - - [14/Nov/2011:19:31:06 +1100] "GET / HTTP/1.1" 500 7032 "-" "msnbot/2.0b (+http://search.msn.com/msnbot.htm)._"

    It is my understanding that a 500 error will be thrown when the PHP process fails to respond to Apache, which could be caused by a fatal PHP error or if PHP runs out of processes.. so my assumption is that either the bots are hitting the server too hard, killing the PHP processes, or something in the request header from bots is causing a fatal error in my PHP script? If anyone can offer advice on this it would be greatly appreciated! Ryan
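
    One way to narrow it down is to correlate each 500 in access.log with whatever the FastCGI side logged at the same second in error.log, which usually says whether the backend died, timed out, or hit a PHP fatal error. A quick sketch - the log paths are the Debian/Ubuntu defaults and may differ on your setup:

        # Pull the timestamps of the 500s, then see what Apache/PHP said at those moments.
        awk '$9 == 500 {print $4}' /var/log/apache2/access.log | tr -d '[' > /tmp/500-times.txt

        while read ts; do
            grep "$(echo "$ts" | cut -d: -f2-4)" /var/log/apache2/error.log
        done < /tmp/500-times.txt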

    Read the article

  • How to force certain traffic through GRE tunnel?

    - by wew
    Here's what I do.

    Server (public internet is 222.x.x.x):

    echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
    sysctl -p
    iptunnel add gre1 mode gre local 222.x.x.x remote 115.x.x.x ttl 255
    ip add add 192.168.168.1/30 dev gre1
    ip link set gre1 up
    iptables -t nat -A POSTROUTING -s 192.168.168.0/30 -j SNAT --to-source 222.x.x.x
    iptables -t nat -A PREROUTING -d 222.x.x.x -j DNAT --to-destination 192.168.168.2

    Client (public internet is 115.x.x.x):

    iptunnel add gre1 mode gre local 115.x.x.x remote 222.x.x.x ttl 255
    ip add add 192.168.168.2/30 dev gre1
    ip link set gre1 up
    echo '100 tunnel' >> /etc/iproute2/rt_tables
    ip rule add from 192.168.168.0/30 table tunnel
    ip route add default via 192.168.168.1 table tunnel

    Up to here, all seems to be going right. But then:

    1st question: how do I use the GRE tunnel as the default route? The client computer is still using the 115.x.x.x interface as its default.

    2nd question: how do I force only ICMP traffic to go through the tunnel, and everything else through the default interface? I tried doing this on the client computer:

    ip rule add fwmark 200 table tunnel
    iptables -t mangle -A OUTPUT -p udp -j MARK --set-mark 200

    But after doing this, my ping program times out (if I don't run the two commands above and use ping -I gre1 ip instead, it works). Later I also want to do something else, like sending only UDP port 53 through the tunnel, etc.

    3rd question: on the client computer, I force one mysql program to listen on the gre1 interface 192.168.168.2. On the client computer there's also one more public interface (IP 114.x.x.x)... How do I forward traffic properly using iptables and route so mysql also responds to a request coming from this 114.x.x.x public interface?
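
    For the ICMP question, a sketch of the usual pattern: mark ICMP (not UDP) in the mangle OUTPUT chain, route marked packets via the tunnel table, and SNAT them onto the tunnel address, because locally generated packets otherwise keep the source address of the default interface and the replies never come back through gre1. This is an outline of the common approach, not a verified fix for this exact setup:

        # Client side: mark only ICMP leaving this host.
        iptables -t mangle -A OUTPUT -p icmp -j MARK --set-mark 200

        # Route marked packets through the tunnel table (default via the far end of gre1).
        ip rule add fwmark 200 table tunnel
        ip route add default via 192.168.168.1 dev gre1 table tunnel

        # Rewrite the source to the tunnel address, otherwise replies bypass the tunnel.
        iptables -t nat -A POSTROUTING -o gre1 -j SNAT --to-source 192.168.168.2

        # Reverse-path filtering often drops the asymmetric replies; relax it where needed.
        sysctl -w net.ipv4.conf.gre1.rp_filter=0

    The same pattern extends to the DNS case by matching -p udp --dport 53 instead of -p icmp.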

    Read the article

  • Missing Home Folder XP Clients 2008R2 Domain

    - by minamhere
    We just completed a migration from Server 2003 to Server 2008 R2. Everything seems to have gone well except that many of our desktops have stopped mapping the Home Folder as set in Active Directory. Other mappings that are defined on individual clients are mapping just fine; these mappings are all on the same file server as the failing Home Folders. Half of the users are on one file server and half are on another, and users from both servers are having this problem.

    I have enabled the Group Policy setting to "Wait for network before logging in". I enabled the policy to "Run logon scripts synchronously". There are no errors on the Domain Controller or either File Server. When I enabled Group Policy Preferences as an attempted workaround, I get this error:

    The user 'V:' preference item in the '<Policy Name>' Group Policy object did not apply because it failed with error code '0x800708ca This network connection does not exist.' This error was suppressed.

    This seems to indicate that the network connection is not ready by the time Group Policy is processed. But isn't this the point of the "Wait for network before logging in" and "Run logon scripts synchronously" settings?

    Some other background facts: The new Server 2008 R2 installation is a virtual machine. It is on a new subnet in a different building from the old server. DNS and DHCP were also migrated from the old DC to this new DC. These Home Folders were all working properly before the migration.

    Are there new security restrictions/policies in Server 2008 R2 that might be causing this? Is there a way to check whether I have an underlying network connectivity issue? Maybe moving the server to the new building is causing a delay/timeout? Any thoughts or ideas on what could be causing this or how I can resolve this? Thanks.

    Read the article

  • Acer Aspire One -- strange battery problem, charges only up to ~90%

    - by houbysoft
    I have this strange problem on the acer aspire one d250. It happened already once before, stayed for about two weeks, and then "fixed itself". The problem is as follows: the battery can't seem to get fully charged; ie the indicator is stuck at about 90% (it's probably not a software problem -- I have ArchLinux and Windows 7 installed and both report exactly the same) and it never passes that value, but it still shows the status as "charging" (I tried everything I could think of -- leaving it charging for extremely long amounts of time, doing a few complete charge-recharge cycles, removing/reinserting the battery, cleaning the connectors, even updating the BIOS, etc., and nothing helped). Also, when it is getting charged, it charges pretty fast until about 70% and then progresses extremely slowly. The battery holds the charge that appears on the battery indicator normally. Just can't get the battery to charge fully -- I can't get it past the 90%. At first I thought this would be a simple battery failure (even if the computer is not that old, about 6-7 months), but as I mentioned it happened once before, and then one day it fixed itself. I tried contacting Acer about this, but the support was not helpful, completely stupid, it seemed like they used canned responses, the usual. Any thoughts on how to fix this?

    Read the article

  • Getting Server 2008 R2 to ignore all traffic from Internet-facing NIC, leaving it to a VM

    - by Wolvenmoon
    I got in to Server 2008 R2 via Dreamspark and would like to start learning on it. I don't have much option but to put it on a system sitting between the Internet and my home LAN due to electricity bills and the fact that 3 computers in an 11x11 space in 102 degree weather is pretty stygian. Currently I use a ClearOS gateway to manage everything, what I'd like to do is take my server 2008 R2 box, which has two NICs, and drop it at the head of my network. I'd want Server 2008 R2 to ignore all traffic on the external facing NIC and pass it to a virtual ClearOS gateway, and to put all its Internet traffic through its other NIC - which will face the rest of my network and be the default gateway for it. The theory is to keep the potentially vulnerable Server 2008 R2 install as tucked behind a Linux box as possible, without sacrificing too much performance. This is a home network that occasionally hosts dedicated game servers and voice chat servers, so most malicious activity is in the form of drive by non-targeted attacks, however, I don't trust Windows Server because I don't know the OS well enough, yet. So, three questions: How do I do this, am I going to be reasonably more secure doing this than if I just let the Server 2008 R2 rig handle all the network traffic and DHCP (not an option), and should I virtualize the Server 2008 R2 rig instead and if so in what? (Core 2 Duo e6600 w/ 5 gigs usable RAM)

    Read the article

  • Can IIS (Ideally Azure) do SSL Proxying?

    - by Acoustic
    My team has been asked to add a new feature to a project we're working on, and none of us can find authoritative details on whether it's possible with Windows/IIS. The short of it is that we're hoping to have customers update their DNS with a CNAME record to point their website to our server instead of theirs (the whys are trivial - it's what the app does on behalf of your site). We're using a reverse proxy with several custom modules to serve particular content from the original servers. So far everything works perfectly until we encounter SSL.

    Is there a way to have IIS serve up an SSL certificate from another server? In other words, is there a way to be a trusted man in the middle? I'm hoping that's possible so that we don't have to require all our clients to re-issue their SSL certs. Frankly, we don't want to have to manage hundreds of certs. I'd also like to avoid a UCC situation if there's a way to, because it seems to require re-creating the cert each time a client is added. So, any pointers on proxying/hosting SSL (or even dynamic SSL hosting like http://www.globalsign.com/cloud/) would be appreciated.

    Read the article

  • Apache directory structure with multiple hosted languages.

    - by anomareh
    I just got a new work machine up and running and I'm trying to decide on how to set everything up directory wise. I've done some digging around and really haven't been able to find anything conclusive. I know it's a question with a variety of answers but I'm hoping there's some sort of general guidelines or best practices to go by. With that said, here are a few things specific to my situation. I will be doing actual development and testing on the same machine as the server. It is a single user machine in the sense that I will be the only one working on the machine. There will be multiple hosted languages, specifically PHP and RoR while possibly expanding later. I'd like the setup to translate well to a production environment. With those 3 things in mind there are a couple of things I've had in the back of mind. Seeing as it's a single user machine I haven't been able to decide whether or not I should be working on things out of my home directory or if they should be located outside of it. I'm feeling that outside of a user directory would be better as it would translate better to a production environment, but I'm also not sure if that will come with any permission annoyances or concerns seeing as I'll be working on the same machine. Hosting multiple languages seems like it may be a bit quirky. With PHP I've found you're generally just dumping the project somewhere in the document root where as something like a Rails app you have the entire project and you only want the public directory in the document root. Thanks for any insight, opinion, or just personal preference from experience anyone can offer.
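
    For what it's worth, a common layout that keeps projects out of home directories and still works for both a PHP docroot and a Rails app's public/ directory looks something like the sketch below; the paths and group name are purely illustrative, not a recommendation from the discussion:

        # One directory per site under /srv/www, owned by a shared group the developer belongs to.
        sudo mkdir -p /srv/www/phpsite.local/public_html
        sudo mkdir -p /srv/www/railssite.local/app        # the vhost's DocumentRoot points at app/public
        sudo groupadd webdev && sudo usermod -aG webdev "$USER"
        sudo chown -R root:webdev /srv/www
        sudo chmod -R 2775 /srv/www                       # setgid keeps new files in the webdev group

    The same tree then maps cleanly onto a production server, with only the vhost definitions differing between the two.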

    Read the article

  • Exchange DEAD! Server recovered but no users can log in

    - by erotsppa
    Yesterday we had a hardware failure and brought our Exchange server down. The hardware was repaired and the server was brought back up. Windows Server 2008 did the disk check upon bootup and everything was recovered. However, no users can log into their Exchange account! This is true with IMAP, Exchange and OWA - all three of them refuse to accept any users. For example, when I try to access OWA, I get the following page http://pastie.org/584061

    We verified that all the services are up (IMAP, POP, SMTP, IIS etc). We were able to connect to all those services on their respective ports through telnet. What could be the problem? It looks like the database cannot be mounted; from the Exchange Management Console, when I try to mount the database it gives:

    Microsoft Exchange Error
    Failed to mount database 'Mailbox Database'.
    Mailbox Database
    Failed
    Error: Exchange is unable to mount the database that you specified. Specified database: SERVER\First Storage Group\Mailbox Database; Error code: MapiExceptionCallFailed: Unable to mount database. (hr=0x80004005, ec=-528)

    I read online that there is a repair utility, so I tried it. I navigated to my edb file and ran eseutil /p "Mailbox Database.edb". It printed the following output:

    Repair completed. Database corruption has been repaired! Note: It is recommended that you immediately perform a full backup of this database. If you restore a backup made before the repair, the database will be rolled back to the state it was in at the time of that backup. Operation completed successfully with 595 (JET_wrnDatabaseRepaired, Database corruption has been repaired) after 885.750 seconds.

    However I am still unable to mount!
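
    After an eseutil /p, the usual follow-up before trying to mount again is to check the database state, defragment it, and then fix logical inconsistencies. A sketch of that sequence for Exchange 2007, run from the database directory and assuming the default database name shown in the question:

        rem Confirm the repaired database reports State: Clean Shutdown.
        eseutil /mh "Mailbox Database.edb"

        rem Rebuild the physical database after the /p repair (needs free space roughly the size of the edb).
        eseutil /d "Mailbox Database.edb"

        rem Fix logical (mailbox-level) inconsistencies left behind by the repair.
        isinteg -s SERVERNAME -fix -test alltests

        rem Then mount, from the Exchange Management Shell:
        rem   Mount-Database -Identity "Mailbox Database"

    If the store still refuses to mount after that, moving the old transaction log and checkpoint files out of the log directory before mounting is a common extra step following a /p repair, since the repaired database can no longer replay them.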

    Read the article

  • Task Scheduler Crashing MMC

    - by Valrok
    I've been getting errors whenever I try to run the Task Scheduler for Windows 2008 R2. Each time that I've tried running it, the Task Scheduler will crash and report the following:

    Problem signature:
    Problem Event Name: CLR20r3
    Problem Signature 01: mmc.exe
    Problem Signature 02: 6.1.7600.16385
    Problem Signature 03: 4a5bc808
    Problem Signature 04: System.Windows.Forms
    Problem Signature 05: 2.0.0.0
    Problem Signature 06: 50c29e85
    Problem Signature 07: 151f
    Problem Signature 08: 18
    Problem Signature 09: Exception
    OS Version: 6.1.7601.2.1.0.16.7
    Locale ID: 1033

    I've been looking online, but so far I keep finding mixed results on what the fix might be, and I was wondering if anyone here has ever run into this issue before. I read that this issue could be caused by Security Update for Microsoft Windows (KB2449742) and that by uninstalling it I would be able to fix the problem; however, I was not able to locate it anywhere on the server. Here's the link if interested.

    Patch-wise, everything is up to date. I also tried running hotfix KB2688730 after doing some research online to see if that would work; however, the hotfix is not applicable to the computer. If anyone could provide some information on how to fix this and get the Task Scheduler running again, it would be extremely helpful!

    Read the article
