Search Results

Search found 6887 results on 276 pages for 'internal'.

Page 209/276 | < Previous Page | 205 206 207 208 209 210 211 212 213 214 215 216  | Next Page >

  • Why is XSP/mono giving me file/write errors with Lucene?

    - by acidzombie24
    I am using mono 2.6.7, xsp 2.8.? and I am not sure what else. I'll try downgrading xsp and whatever else. Today I noticed my server randomly gets HTTP Error 500, internal server error. I looked in my log files http://www.pastie.org/1426236 and it appears that the problem is Lucene's locking, which it does by creating a file called write.lock. This used to work with no problem but now it gives me pain. It will fix itself if I refresh the page several times, but it will break just as easily. My other site will not work every time I restart Apache, and I need to delete the file by hand (at least it doesn't get random 500 errors). However, now it just says Lock obtain timed out: NativeFSLock@/var/www/SITE_2/App_Data/LuceneIndex_a/write.lock no matter what. I even chmod -R 777 the directory and still no luck. write.lock doesn't even exist. I have no clue what's going on. Any ideas?
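
    A minimal workaround sketch, assuming a Debian/Ubuntu-style layout where Apache/mod_mono runs as www-data (the user name and init script are assumptions; adjust to your setup). It clears a stale lock and makes the index directory writable by the web user, but it does not address the underlying locking behaviour:

        # stop the site so no writer is active
        /etc/init.d/apache2 stop
        # remove a stale Lucene lock if one was left behind
        rm -f /var/www/SITE_2/App_Data/LuceneIndex_a/write.lock
        # make sure the web user (www-data here - an assumption) owns the index
        chown -R www-data:www-data /var/www/SITE_2/App_Data
        /etc/init.d/apache2 start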

    Read the article

  • Western Digital My Book not recognized by WD software

    - by Kari
    A few years ago I bought a WD My Book Pro 2. It worked fine for a while, then one of the drives failed and I sent it back to be replaced under warranty. I never got around to setting up the new one when I got it back. I finally ran out of room on my internal drive, so I tried to use the external - no go. Both drives spin up, but aren't recognized by either Disk Utility (Mac) or the WD Drive Manager. I tried on a PC as well with fresh software. Then I pulled the drives out of the enclosure (warranty is already expired) and plugged them straight into the PC. Both recognized and working 100% in RAID0. BIOS recognizes either disk as functional; Windows only sees them when both are connected due to the RAID which I can't change without the WD software. The drives that were returned to me are the "Green" drives which I've read are NOT recommended for RAID. Is it possible that this is interfering with them reading externally? Any other ideas? My main computer is a laptop so using them internally isn't an option :(

    Read the article

  • Scaling a LAMP website hosted on EC2

    - by Gublooo
    Hello, I'm very new to all this - I've recently managed to launch my website on EC2. As a next step, I want to learn how to scale the website. I have a general idea, but wanted some input from the experts about how to go about it. My website is based on LAMP but also has a Red5 server which allows users to record messages and is also used for playing them back. Currently this is the architecture I'm planning to set up for initial scaling. Deploy four small EC2 instances for the following purposes:
        Instance-1: runs the MySQL database.
        Instance-2: runs the Red5 server.
        Instance-3 & Instance-4: used to deploy the website, with Apache running on them. They will communicate with the MySQL server on Instance-1 and the Red5 server on Instance-2 using the internal IP address. As and when required, I will launch another instance of the same.
        EBS: I will have an EBS volume of say 50 GB where all the MySQL data will be stored. Red5 will also use this EBS volume to store the video messages.
        Load Balancer: use the load balancer provided by Amazon to load balance Instance-3 and Instance-4.
    This is what I have in mind. I could be way off, so please bear with me. Also, I have not taken into account the case of scaling the MySQL server, as I currently have no idea how that will be done and whether or not it is necessary initially. I am aware that Amazon provides auto scaling and MySQL scaling as well, but I don't want to get into that right now. Your feedback is appreciated. Thanks
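
    As a rough sketch of the EBS piece, assuming the 50 GB volume shows up on Instance-1 as /dev/xvdf (the device name, mount point, and service commands are assumptions; on Ubuntu, AppArmor's mysqld profile may also need the new datadir added):

        mkfs.ext4 /dev/xvdf                       # one-time format of the new EBS volume
        mkdir -p /data
        mount /dev/xvdf /data
        echo '/dev/xvdf /data ext4 defaults,nofail 0 2' >> /etc/fstab
        service mysql stop
        mv /var/lib/mysql /data/mysql             # move the datadir onto EBS
        ln -s /data/mysql /var/lib/mysql          # or set datadir=/data/mysql in my.cnf
        service mysql start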

    Read the article

  • Configuring nginx to check for files on disk in only a few directories

    - by Evan Carroll
    For a node.js project I'm doing, I have a tree like this:
        +-- public
        |   +-- components
        |   +-- css
        |   +-- img
        +-- routes
        +-- views
    Essentially, I have the root set to public. I want all requests destined for /components/, /css/, and /img/ to check whether their appropriate destinations exist on disk. However, I don't want requests to other directories to even run an IO operation - /foo/asdf, /bar, /baz/index.html - none of those should result in the disk being touched. I have a stanza that does the proxy to node.js:
        location @proxy {
            internal;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-NginX-Proxy true;
            proxy_pass http://localhost:3030;
            proxy_redirect off;
        }
    I just would like to know how to arrange this. My problem would be easily solved if try_files took a single argument, but it always wants a file first:
        location /components/ { try_files $uri, @proxy }
        location /css/ { try_files $uri, @proxy }
        location /img/ { try_files $uri, @proxy }
    However, there is nothing that I can find that will give me:
        location / { try_files @proxy }
    How do I get the effect I want?
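
    For what it's worth, try_files takes space-separated arguments (no commas), and one common arrangement is to keep @proxy for the three static prefixes while the catch-all location proxies directly, so nothing else ever touches the disk. A sketch only, reusing the proxy settings from the question:

        location /components/ { try_files $uri @proxy; }
        location /css/        { try_files $uri @proxy; }
        location /img/        { try_files $uri @proxy; }

        # everything else goes straight to node.js with no filesystem check
        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-NginX-Proxy true;
            proxy_pass http://localhost:3030;
            proxy_redirect off;
        }

    The duplicated proxy_* lines can be factored into a file pulled in with include if that feels cleaner.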

    Read the article

  • How to (properly) back up a live QEMU/KVM VM?

    - by Roman
    I'm currently engineering a backup solution for KVM VMs as an additional measure to traditional backups. Unfortunately, all currently (August 2013) existing solutions I came across so far either:
        - do not ensure a consistent backup of the VM (losing RAM state, creating a dirty image, or other things), or
        - require lengthy downtime (complete VM shutdown while backing up).
    I'm aware of QEMU/libvirt's functionality of taking snapshots; however, it's not yet usable since:
        - image-internal snapshots present you with an ever-changing image file, resulting in a likely dirty backup (assuming one uses qcow2 images at all), and
        - one cannot yet merge a currently active external snapshot into the original backing image ("blockcommit").
    For the above reasons, I'm now implementing a script that:
        1. Saves the VM's state and halts it
        2. Sets up devicemapper snapshot(s) where the VM's disk images and state reside
        3. Resumes the VM
        4. Mounts the snapshot(s) of step 2
        5. Backs up the VM's disk and state (configuration for convenience)
        6. Merges back the snapshot(s).
    If I got everything right, this will take consistent backups of VMs with only seconds (if at all, since steps 1-3 are fast, possibly sub-second) of downtime. Of course, when restoring, the VM will be way in the past, but at least it gives me the option of an orderly shutdown/reboot. Am I missing something with this solution? Or has someone indeed already implemented this?
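
    A rough shell sketch of that cycle using libvirt plus an LVM (devicemapper) snapshot; the domain name, volume group, sizes, and paths are placeholders:

        virsh save myvm /backup/myvm.state                                  # 1: save RAM state, guest stops
        lvcreate --snapshot --size 5G --name vmdata_snap /dev/vg0/vmdata    # 2: snapshot the LV holding the images
        virsh restore /backup/myvm.state                                    # 3: resume the guest
        mount -o ro /dev/vg0/vmdata_snap /mnt/snap                          # 4: mount the snapshot read-only
        rsync -a /mnt/snap/ /backup/myvm-disk/                              # 5: copy the disk image(s) and saved state out
        umount /mnt/snap                                                    # 6: drop the snapshot
        lvremove -f /dev/vg0/vmdata_snap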

    Read the article

  • Why is my new PC so slow at startup?

    - by rumtscho
    Bought a new PC this weekend, and it works really well. Only I have one big problem: startup time. Its BIOS needs 62 sec to load, then from Grub start to the password-entry screen it's another 26 sec. I think this is a lot, because my old PC needs 34 sec for BIOS and another 8 sec to the password screen. After I enter the password, the desktop is usable with practically no delay on both. The new PC is a Core i7-930, running Lucid Lynx 64-bit from an Intel Postville SSD (no internal HDs). The old PC is a Pentium 4 Celeron (forgot the clock speed) running Lucid Lynx 32-bit from an ATA 100 hard drive. Neither PC is overclocked. The new one has boot sequence 1. DVD ROM, 2. SSD (connected over SATA in AHCI mode), 3. removable drive. The old one boots from 1. DVD ROM, 2. HDD, 3. Floppy. Neither has a second OS installed. The new one has less software installed than the old one (I think), but the boot time difference was noticeable even before I made any installs. As far as I know, just the SSD should be enough to make a noticeable difference in boot time. I thought that having a good mainboard on the new PC as opposed to the basic office model on the old one would also mean a faster-loading BIOS. If these assumptions are right, I guess I must have misconfigured something in the BIOS of the new PC. How should I configure it for a fast boot? It has an ASUS P6X58D board with an AMI BIOS; if you need the BIOS revision number I could post that too.

    Read the article

  • Suspected Corrupted Windows 7 MBR?

    - by AridDecay
    So, this may not be the correct place to put up my question, but I'll give it a shot. I'm having an issue repairing this computer. It was brought to me with the described issue of 'not turning on'. Later, I found that it would come up with the error 'No boot sector found on internal hard drive.' I assumed it was an MBR issue due to a virus or a Windows update being cut short. I booted into my trusty recovery environment and ran bootrec.exe /FIXMBR and restarted -- no luck. I started to think (after multiple attempts to get the MBR sorted out, including creating a new boot sector) that the hard drive was possibly starting to cave in on itself, so I booted into a Linux bootable CD and went to check the SMART data. Odd, says it's inaccessible. That seems odd to me, considering it's a newer (two years old or so) Windows 7 computer. All new hard drives have SMART. So, I checked the BIOS. No mention of SMART anywhere. Greaaaat. As a last-ditch effort I switched the hard drive type in the BIOS from AHCI to ATA (God knows why, I was getting frustrated). VOILA! It actually attempts to boot, gets halfway through the little Windows animation, does an incredibly quick (half a second) BSOD, and shuts down. Does anyone have ideas on what's going on here? I'm at my wits' end.
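
    For reference, a few recovery-environment commands commonly tried from the command prompt in this situation (a checklist, not a guaranteed fix; note that an instant BSOD right after changing the AHCI/ATA setting often just means the controller mode no longer matches the one Windows was installed under):

        bootrec /fixmbr
        bootrec /fixboot
        bootrec /scanos
        bootrec /rebuildbcd
        chkdsk C: /r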

    Read the article

  • How to handle OpenVPN client as a service, when the laptop is physically on the network already?

    - by James
    The Setup: I've gotten OpenVPN working on our Windows XP laptops. Users are limited, so I went ahead and set the OpenVPN client to run as a service, which is great anyway because that means they are on the VPN before logging in, so login scripts work, plus we can do remote support even if the user can not log in (such as connecting via VNC or resetting passwords). It is also configured to send all traffic over the tunnel, so when, for example, they browse the internet it is just like browsing from our corporate network.
    The Question(s): So, I'm wondering how does the OpenVPN client act when the computer is already physically on the same network as the OpenVPN server? Right now, the client is configured to connect to the public DNS name, which will resolve to the public IP address, which will NOT get reflected back to the OpenVPN server, so it is effectively blocked from connecting to the OpenVPN server while on the network. Is that a good thing? Or will it constantly try to connect, using up system resources and network resources? We will likely have hundreds of laptops regularly on the physical network with this, so it could contribute to a lot of unnecessary network chatter.
    Alternatively: Would it be better to have the firewall reflect the port back to the OpenVPN server and let it connect? Or have our internal DNS resolve the name to the private IP and allow them to connect directly? Would traffic then go over the VPN connection (which I do not want when already on the physical network)? Or is it possible to tell it to ignore the connection when the client and server are already on the same network?
    TL;DR: What's a sane way of handling an OpenVPN client running as an always-on service when the client and server will often be on the same network?

    Read the article

  • Windows Home Server 2011, No disks "suitable for a backup destination"

    - by Scott Beeson
    I recently installed Windows Home Server 2011 and love it. However, when I try to set up server backups, it says no suitable disks are available. Initially, before I set up my RAID, it found one of my twin drives and said it would work. Once I set up the mirroring, that one is no longer available (obviously). However, I have an internal SATA 1TB drive and an external USB 2.0 1TB drive hooked up. Both are recognized by Disk Management. WHS 2011 still says nothing is suitable for backups. The two drives' details are as follows. (Edit to clarify: the system partition is on Disk 0, not listed below. The two below are the two that SHOULD be available for system backups.)
        Disk 1: Dynamic "Data" (D:) 931.51 GB NTFS, Healthy
        Disk 3: Basic 200 MB Healthy (EFI System Partition); "Backup" 930.66 GB NTFS, Healthy (Primary Partition)
    What's a bit odd is that in Disk Management the "Backup" volume does not show a drive letter, even though I assigned Z: (which is reflected in "My Computer"). I also cannot make this a dynamic disk, as it says it's unsupported by the device.

    Read the article

  • Router intermittently failing

    - by nomen
    My old Asus router died a few weeks ago, so I thought I'd set up my Debian box to deal with routing my home network. I have a few complications, but I adapted my configuration from a previously working configuration, and I don't see why I am having intermittent problems. But I am having them! Every so often, my SSH connections to the router (and to the Xen virtual machines hosted by the router) just drop. I am unable to use the router's dns server. I can't ping the router. Etc. All of these things work most of the time, but break down intermittently, for a few minutes at a time. (I can provide more details, but I'm not sure what will be helpful) /etc/network/interfaces:
        # The loopback network interface
        auto lo
        iface lo inet loopback

        # Gigabit ethernet, internal network
        auto eth0
        allow-hotplug eth0
        iface eth0 inet manual

        # USB ethernet, internet
        auto eth1
        allow-hotplug eth1
        iface eth1 inet dhcp

        # Xen Bridge
        auto xlan0
        iface xlan0 inet static
            bridge_ports eth0
            address 10.47.94.1
            netmask 255.255.255.0
    As I understand it, this is sufficient to create the network interfaces, and even do some switching between Xen hosts and my eth0 interface. I installed and configured Shorewall to manage routing between the bridge and my internet-facing interface. /etc/shorewall/zones:
        fw   firewall
        net  ipv4
        lan  ipv4
    /etc/shorewall/interfaces:
        net  eth1   detect  dhcp,tcpflags,nosmurfs,routefilter,logmartians
        lan  xlan0  detect  dhcp,tcpflags,nosmurfs,routefilter,logmartians,routeback,bridge
    /etc/shorewall/policy:
        net  all  DROP    info
        fw   net  ACCEPT  info
        all  all  REJECT  info
    /etc/shorewall/rules:
        DNS(ACCEPT)   fw   net
        DNS(ACCEPT)   lan  fw
        Ping(ACCEPT)  lan  fw
    ... and so on; these all work, when the router is accepting traffic at all. /etc/shorewall/masq:
        eth1  10.47.94.0/24
    Also, the router is currently "working", and I checked on a problematic client:
        arp infrastructure
        infrastructure.mydomain (10.47.94.1) at 0:23:54:bb:7d:ce on en0 ifscope [ethernet]
    I tried it when the router was down, and I (eventually) got the same response. It took about 30 seconds to return, though.
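
    A few diagnostics worth capturing during an outage (a sketch; 10.47.94.20 is a hypothetical client address on the xlan0 subnet):

        brctl show xlan0                    # is eth0 still a member of the bridge?
        ip -s link show eth0                # errors/drops on the internal NIC
        ip -s link show eth1                # errors/drops on the USB NIC (these reset surprisingly often)
        dmesg | tail -50                    # USB disconnects and bridge port state changes land here
        arping -I xlan0 -c 3 10.47.94.20    # does layer 2 still work when layer 3 doesn't?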

    Read the article

  • Limited bandwidth and transfer rates per user.

    - by Cx03
    I searched for a while but couldn't find anything concrete; hopefully someone can help me. I'm going to be running a Debian server on a gigabit port, and want to give each user his/her fair share of internet access. The first objective is easy - transfer rates (speed) per user. From what I've looked at, iptables/Shorewall could do the job easily. Is this easy to set up, or could one of you point me at a config? I was hoping to limit users to 300mbit or 650mbit each. The second objective gets complicated. Due to the usage of the boxes, most of the traffic will be internal network traffic that does NOT get counted toward the quota. However, I still need to limit the external traffic, and if they go over, cut off access (or throttle traffic to a very low speed (10mbit?)). Let's say the user has a 3TB external traffic limit. The IF part is: if the hostname they are exchanging the traffic with DOES NOT MATCH .ovh. or .kimsufi. (the company owns multiple TLDs), it counts toward the quota. Once said quota exceeds 3TB, choke them. Where could I find a system to count that for me? It would also need to reset, or be able to be manually reset, on a monthly basis. Thanks ahead of time!
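
    For the rate part, a sketch of one common approach - mark each local user's traffic with iptables' owner match and shape each mark with HTB (usernames, interface, and rates are placeholders):

        iptables -t mangle -A OUTPUT -m owner --uid-owner alice -j MARK --set-mark 10
        iptables -t mangle -A OUTPUT -m owner --uid-owner bob   -j MARK --set-mark 11

        tc qdisc add dev eth0 root handle 1: htb default 30
        tc class add dev eth0 parent 1: classid 1:10 htb rate 300mbit
        tc class add dev eth0 parent 1: classid 1:11 htb rate 650mbit
        tc filter add dev eth0 parent 1: protocol ip handle 10 fw flowid 1:10
        tc filter add dev eth0 parent 1: protocol ip handle 11 fw flowid 1:11

    For the quota part, iptables counts per-rule bytes (and has a quota match), but it matches addresses rather than hostnames, so the .ovh./.kimsufi. exclusion would have to be translated into the provider's IP ranges first.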

    Read the article

  • Mac OSX - looking for software for notes, snippets, ideas, etc.

    - by eatloaf
    I have the following requirements:
        - Mobile accessibility: either a companion iPhone app to sync with, or Dropbox or Google Docs syncing (or equivalent) so I can use other mobile note applications to edit notes remotely.
        - Minimally some form of markup, but ideally something I can drag and drop images into and do some formatting with. Rich text support is reasonable.
        - Hierarchical organization, AKA outlining.
        - Internal (note-to-note) linking. I like to cross-reference items and thoughts internally, and the relationships aren't always hierarchical.
    These were closest to what I was looking for but, as far as I can tell, suffer from the noted flaws:
        - Mori: no mobile solution.
        - EagleFiler: no item hierarchy.
        - MacJournal: no entry hierarchy; the iPhone app converts edited entries to plain text.
        - Evernote: no interior linking, no hierarchy.
    I think I've tried every serious contender and none of them have all four (seemingly simple) requirements. I'm hoping that I'm either missing an existing feature in an app I've tried, or that someone knows of something I haven't found yet.

    Read the article

  • pfSense routing between two routers with shared network

    - by JohnCC
    I have a network set up using two pfSense routers arranged like this:
        DMZ1    WAN1          WAN2    DMZ2
         |       |             |       |
         |       |             |       |
         \_____ PF1           PF2 _____/
                 |             |
                 |             |
                 \___TRUSTED___/
    Each pfSense router has its own separate WAN connection, and a separate DMZ network attached to it. They share a common TRUSTED LAN between them. The machines on the trusted network have PF1 as their default gateway. PF1 has a static route defined to DMZ2 via PF2, and PF2 has a static route to DMZ1 via PF1. There is NAT to the WAN, but internal networks (DMZ1/2 and TRUSTED) use different RFC1918 subnets. I inherited this arrangement, and it all used to work fine. I made a config change to PF1 (relating to multicast), and machines on DMZ2 suddenly could not talk to TRUSTED. I rolled the change back, but the problem persisted. What I guess you'd hope would happen is that TCP packets would go DMZ2 - PF2 - TRUSTED and on return TRUSTED - PF1 - PF2 - DMZ2. That's the only way I can see it would have worked. However, PF1 drops the returning packets. I've verified this using tcpdump. I've worked around this by adding static routes to DMZ2 via PF2 on the servers on TRUSTED, but some devices on there do not support static routes, so this is not ideal. Is there a way to make this arrangement work decently, or is the design inherently flawed? Thanks!

    Read the article

  • Hard Disk based storage library

    - by Ryan M.
    We have a Tandberg T24 tape device to handle all of our long-term backups right now. We decided that we're not backing up nearly everything that we would like to, and that we still have a lot of vulnerabilities. To get to where we want to be, we're going to have to back up a lot more servers than we're currently doing. All of our internal servers have some sort of directly attached drive (i.e. a LaCie RAID box or a simple portable hard drive) doing backups, but what we want to do is get those backups off-site. The current tape drive is directly attached via SCSI to a Windows Server 2008 File Server. So to back up anything to tape, it has to be funneled through the File Server. With the increase that we have planned, I don't think that funneling everything through the File Server is the right course of action, and I'm thinking that maybe a second backup device would be more appropriate. I would like your input on a couple of ideas. 1) Doing HDD instead of tape. Tape is hard to deal with. We have a regular rotation cycle, so they don't need years and years of shelf life, so I'm wondering if something HDD-based would be better. 2) Something accessible over the network. Instead of having the device directly attached to one specific machine, have it available to all the servers over the network. Our File Server is a 12-disk RAID 6 setup. I was thinking something like that, but with no RAID involved; all disks are standalone so they can be used/installed/removed on an individual basis. Does any such thing exist? Thanks for your ideas. I'm really interested to hear about some of the solutions you guys are using.

    Read the article

  • Limited connections to Ubuntu 12.04 server

    - by Luis M. Valenzuela
    I'm having a weird problem with my server. The server is inside my network, connected to a 3Com switch which is connected to the router that handles the internet connection. The main purpose of the server is to host a PHP application. What's happening is that users 1 to 15 on the private network have no problems connecting to the server; when user 16 tries to connect, a timeout occurs and they are unable to connect to the server. It's not just the PHP application, but any service from the server. When the 15 users are using the application, the server doesn't even answer to ping. I haven't set any special limit in Apache's ini file or MySQL, and the firewall is turned off because the server only provides service to the internal network. Is there a parameter in any of the network card's conf files that might be causing this? Or should I suspect the router's or switch's configuration? UPDATE: Tomorrow I'm gonna do some tests on the server, modifying two kernel params in /etc/sysctl.conf. The settings are net.core.somaxconn, which limits simultaneous network connections to the server, and kernel.shmmax, which controls the amount of memory the system can use for managing connections.
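
    Before changing sysctls, a few quick checks during the problem window may help narrow down where the ceiling of roughly 15 is coming from (a sketch for Ubuntu 12.04; some of these paths only exist if the relevant modules are loaded). The fact that even ping dies suggests the limit may sit in the router or switch rather than in Apache or MySQL:

        sysctl net.core.somaxconn                       # default is 128, so unlikely to cap at 15
        cat /proc/sys/net/netfilter/nf_conntrack_max    # only present if conntrack is loaded
        grep -ri maxclients /etc/apache2/               # Apache's own concurrency ceiling
        ss -s                                           # socket totals: anywhere near a limit?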

    Read the article

  • Google Chrome won't install via IE9/Windows7

    - by purir
    Using IE9, I've tried installing Google Chrome on Windows 7 from this url http://www.google.com/chrome/eula.html?hl=en-GB&platform=win But get the following error (apologies for uncouthly dumping this nonsense...) Any ideas dearly appreciated! PLATFORM VERSION INFO Windows : 6.1.7601.65536 (Win32NT) Common Language Runtime : 4.0.30319.235 System.Deployment.dll : 4.0.30319.1 (RTMRel.030319-0100) clr.dll : 4.0.30319.235 (RTMGDR.030319-2300) dfdll.dll : 4.0.30319.1 (RTMRel.030319-0100) dfshim.dll : 4.0.31106.0 (Main.031106-0000) SOURCES Deployment url : _http://dl.google.com/update2/1.3.21.57/GoogleInstaller_en-GB.application?appguid%3D%7B8A69D345-D564-463C-AFF1-A69D9E530F96%7D%26iid%3D%7B26C55C3A-B26A-0484-FEDD-78443D269DA1%7D%26lang%3Den-GB%26browser%3D2%26usagestats%3D0%26appname%3DGoogle%2520Chrome%26needsadmin%3Dfalse%26installdataindex%3Ddefaultbrowser ERROR SUMMARY Below is a summary of the errors, details of these errors are listed later in the log. * Activation of _http://dl.google.com/update2/1.3.21.57/GoogleInstaller_en-GB.application?appguid%3D%7B8A69D345-D564-463C-AFF1-A69D9E530F96%7D%26iid%3D%7B26C55C3A-B26A-0484-FEDD-78443D269DA1%7D%26lang%3Den-GB%26browser%3D2%26usagestats%3D0%26appname%3DGoogle%2520Chrome%26needsadmin%3Dfalse%26installdataindex%3Ddefaultbrowser resulted in exception. Following failure messages were detected: + The system cannot find the file specified. (Exception from HRESULT: 0x80070002) COMPONENT STORE TRANSACTION FAILURE SUMMARY No transaction error was detected. WARNINGS There were no warnings during this operation. OPERATION PROGRESS STATUS * [25/06/2011 11:41:04] : Activation of _http://dl.google.com/update2/1.3.21.57/GoogleInstaller_en-GB.application?appguid%3D%7B8A69D345-D564-463C-AFF1-A69D9E530F96%7D%26iid%3D%7B26C55C3A-B26A-0484-FEDD-78443D269DA1%7D%26lang%3Den-GB%26browser%3D2%26usagestats%3D0%26appname%3DGoogle%2520Chrome%26needsadmin%3Dfalse%26installdataindex%3Ddefaultbrowser has started. ERROR DETAILS Following errors were detected during this operation. * [25/06/2011 11:41:04] System.IO.FileNotFoundException - The system cannot find the file specified. (Exception from HRESULT: 0x80070002) - Source: System.Deployment - Stack trace: at System.Deployment.Internal.Isolation.IsolationInterop.GetUserStore(UInt32 Flags, IntPtr hToken, Guid& riid) at System.Deployment.Application.ComponentStore..ctor(ComponentStoreType storeType, SubscriptionStore subStore) at System.Deployment.Application.SubscriptionStore..ctor(String deployPath, String tempPath, ComponentStoreType storeType) at System.Deployment.Application.SubscriptionStore.get_CurrentUser() at System.Deployment.Application.ApplicationActivator.PerformDeploymentActivation(Uri activationUri, Boolean isShortcut, String textualSubId, String deploymentProviderUrlFromExtension, BrowserSettings browserSettings, String& errorPageUrl) at System.Deployment.Application.ApplicationActivator.ActivateDeploymentWorker(Object state) COMPONENT STORE TRANSACTION DETAILS No transaction information is available.

    Read the article

  • Find which files an apache process is writing to?

    - by Haluk
    We have this Apache process which becomes IO-bound from time to time. Using atop, we can see it is a write operation. Using lsof -p <PID> we can see a list of files open by the httpd process. First we thought "log" files must be the problem, so we turned them off just to test. However, write operations still continue. We will continue testing a few other things. For instance, we use PHP session variables a lot; maybe PHP session files are getting all the writing. But is there a way to quickly identify the files being written to by the httpd process? This way we can focus our efforts on those files. UPDATE: We used the strace command as suggested. Here are two lines from the output:
        write(23, "\27\0\0\0\3SET CHARACTER SET utf8", 27) = 27
        write(23, "\17\0\0\0\3SET NAMES utf8", 19) = 19
    We do not have a MySQL process on this server. So is strace also showing what is being written to an Ethernet port? UPDATE 2: During high IO load, the process which consumes most of the write resources gives the following output to strace -e trace=write -p <PID>:
        --- SIGCHLD (Child exited) @ 0 (0) ---
        write(9, "!", 1) = 1
        write(19, "OPTIONS * HTTP/1.0\r\nUser-Agent: Apache (internal dummy connection)\r\n\r\n", 70) = 70
    However, I cannot figure out where these are being written to.
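
    Since lsof already lists the descriptors, one quick way (a sketch, not the only one) is to resolve the exact descriptor numbers strace reports - fd 23 in the lines above is almost certainly a socket to a MySQL server elsewhere (those are MySQL protocol commands), not a file on disk:

        ls -l /proc/<PID>/fd/23                    # what does descriptor 23 actually point to?
        strace -f -y -e trace=write -p <PID>       # -y (in recent strace versions) prints the path/socket behind each fd
        lsof -p <PID> | awk '$4 ~ /[0-9]+[wu]/'    # only descriptors open for writing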

    Read the article

  • (Windows 7) Shared External Drive Permission Issues

    - by connec
    So, say I share my system (C:) drive through Windows (e.g. Properties -> Sharing -> Advanced Sharing -> Share this Folder). I can then access this drive at \\Comp\C on another networked computer - all is well. However, if I insert a removable (USB) disk, say "E", and proceed to share it the same way, when I attempt to access \\Comp\E (either directly or through browsing) I get an error:
        Windows cannot access \\Comp\E
        You do not have permission to access \\Comp\E. Contact your network administrator to request access.
    Now, the permissions (Advanced Sharing -> Permissions) are set with "Everyone" having read access (same as the internal drive), so this doesn't make a lot of sense. Also of note, I have an SSH server on my computer (through Cygwin) and even through SSH (logging in as an administrator user) I cannot access /cygdrive/e (although /cygdrive/c is accessible). As a final note, the drive is of course accessible on the host machine (E:\), and also at \\Comp\E on the host machine.
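
    Two things worth checking, since \\Comp\E is gated by both the share ACL and the filesystem ACL: removable disks are often FAT32/exFAT (no NTFS permissions at all) or carry NTFS ACLs granted to accounts from another machine. A hedged sketch from an elevated prompt (the share name and drive letter are assumed):

        :: share-level permissions
        net share E=E:\ /grant:Everyone,READ

        :: NTFS permissions on the volume itself (only meaningful if it is NTFS)
        icacls E:\ /grant Everyone:(OI)(CI)RX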

    Read the article

  • Grayed-Out Sleep and Hibernate Options on Windows 7 After Updating Graphics Driver

    - by Maxim Z.
    I have a Gateway M275 Tablet PC, on which I've installed Windows 7 Ultimate. The laptop is quite old, so there aren't any Win7 drivers for it, not to mention any Vista drivers. Win7 has been working for some time, but I noticed that my video output wasn't working. I went into Device Manager and found that I didn't have a driver for my video card: it just recognized it as the standard one. I searched online and found an XP driver for it, released by Gateway. Device Manager accepted this driver and prompted me to reboot. After that, I noticed that my Sleep and Hibernate options in the Shut Down menu have been grayed-out. I looked online and found that many people are attributing this to display drivers, as such an old driver would surely not be compatible with the standby procedures Windows 7 uses. To make it clear: I was able to Sleep and Hibernate before updating the drivers; now, I can't. Running powercfg /a gives me, "An internal system component has disabled this standby state," for each available standby mode. Is there some way that the driver can be modified to support hibernation? The new driver fixed my video output problem, but I guess hibernation is more important for me. If not, what steps should I take to remove the driver and just leave the standard Windows one, which previously supported hibernation and sleep on this computer? Thanks in advance.

    Read the article

  • Allowing directory view/traversal for a specific VirtualHost in Apache 2.2

    - by warren
    I have the following vhost configured:
        <VirtualHost *:80>
            DocumentRoot /var/www/myvhost
            ServerName myv.host.com
            ServerAlias myv.host.com
            ErrorLog logs/myvhost-error_log
            CustomLog logs/myvhost-access_log combined
            ServerAdmin [email protected]
            <Directory /var/www/myvhost>
                AllowOverride All
                Options +Indexes
            </Directory>
        </VirtualHost>
    The configuration appears to be correct from the apachectl tool's perspective. However, I cannot get a directory listing on that vhost:
        Forbidden
        You don't have permission to access / on this server.
        Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request.
    The error log shows the following:
        [Wed Mar 07 19:23:33 2012] [error] [client 66.6.145.214] Directory index forbidden by Options directive: /var/www/******
    update2: More recently, the following is now kicking into the error log:
        [Wed Mar 07 20:16:10 2012] [error] [client 192.152.243.233] Options FollowSymLinks or SymLinksIfOwnerMatch is off which implies that RewriteRule directive is forbidden: /var/www/error/noindex.html
    update3: Today, the following is getting kicked out:
        [Thu Mar 08 14:05:56 2012] [error] [client 66.6.145.214] Directory index forbidden by Options directive: /var/www/<mydir>
        [Thu Mar 08 14:05:56 2012] [error] [client 66.6.145.214] Options FollowSymLinks or SymLinksIfOwnerMatch is off which implies that RewriteRule directive is forbidden: /var/www/error/noindex.html
        [Thu Mar 08 14:05:57 2012] [error] [client 66.6.145.214] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.
    This is after modifying the vhosts.conf file thusly:
        <VirtualHost *:80>
            DocumentRoot /var/www/<mydir>
            ServerName myhost
            ServerAlias myhost
            ErrorLog logs/myhost-error_log
            CustomLog logs/myhost-access_log combined
            ServerAdmin admin@myhost
            <Directory "/var/www/<mydir>">
                Options All +Indexes +FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>
    What is missing?
    update 4: All subdirectories of the root directory do directory listings properly - it is only the root which cannot.
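
    One thing worth ruling out, assuming a RHEL/CentOS-style layout: the /error/noindex.html in those log lines is the stock "welcome" page, whose config forbids indexes on / only - which would match subdirectories listing fine while the root does not. A sketch:

        grep -ril noindex /etc/httpd/conf.d/               # is welcome.conf (or similar) still active?
        sed -i 's/^/#/' /etc/httpd/conf.d/welcome.conf     # comment it out if it is the culprit
        apachectl configtest && apachectl graceful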

    Read the article

  • WSUS registry file: NoAutoRebootWithLoggedOnUsers entry being ignored

    - by the_pete
    We are using a registry entry to connect our internal workstations to our WSUS server and everything seems to be working except the NoAutoRebootWithLoggedOnUsers entry. Without fail, over the last few weeks, our lab setup as well as our users have been prompted to restart their machines with a 15 minute time out and there's nothing they can do about it. They can't postpone or cancel the restart, all options in the prompt are greyed out. Below is the registry file we are using to connect our workstations to our WSUS server:
        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]
        "AcceptTrustedPublisherCerts"=dword:00000001
        "ElevateNonAdmins"=dword:00000000
        "WUServer"="http://xxx.xxx.xxx.xxx:8530"
        "WUStatusServer"="http://xxx.xxx.xxx.xxx:8530"

        [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]
        "AUOptions"=dword:00000004
        "AutoInstallMinorUpdates"=dword:00000001
        "DetectionFrequencyEnabled"=dword:00000001
        "DetectionFrequency"=dword:00000002
        "NoAutoUpdate"=dword:00000000
        "NoAutoRebootWithLoggedOnUsers"=dword:00000001
        "RebootRelaunchTimeout"=dword:00000030
        "RebootRelaunchTimeoutEnabled"=dword:00000001
        "RescheduleWaitTime"=dword:00000020
        "RescheduleWaitTimeEnabled"=dword:00000001
        "ScheduledInstallDay"=dword:00000000
        "ScheduledInstallTime"=dword:00000003
        "UseWUServer"=dword:00000001
    There is a bit of redundancy, if you want to call it that, having both the NoAutoRebootWithLoggedOnUsers entry as well as the entries for RebootRelaunchTimeout, but we wanted to see if we could either disable the restart, or give our users a larger window within which they could wrap up their work, etc. before restarting. Neither of these entries seems to work, but our priority is getting NoAutoRebootWithLoggedOnUsers working and any help with this would be greatly appreciated.

    Read the article

  • Active Directory server down, recovering without reinstalling

    - by whatever
    My Windows 2003 server suddenly ceased to function as a DC (this server is the only DC of the domain). All AD-related services are down. The only way I can log in to the AD is physically at the machine. Every time I access an AD-related service (e.g. "AD Users and Computers") I get the below error:
        Naming information cannot be located because: The specified directory service attribute or value does not exist. Contact your system administrator to verify that your domain is properly configured and is currently online.
    I found the below system event, which matches the time when the issue started; this recurs every time I reboot the server.
        NTDS General | Global Catalog | Active Directory was unable to establish a connection with the global catalog. Additional Data Error value: 1355 The specified domain either does not exist or could not be contacted. Internal ID: 3200d33
    I started the troubleshooting with DNS. Netdiag throws the below error, although I think this is simply a consequence of not being able to access the Global Catalog.
        The procedure entry point DnsGetPrimaryDomainName_UTF8 could not be located in the dynamic link library DNSAPI.dll.
    Anyway, DNS seems OK because I can ping the DC FQDN from the DC itself. I found the below solution, which is supposed to help by doing some cleanup of the metadata: http://support.microsoft.com/kb/216498 If I follow procedure 1, here is what I get at step 9:
        no current site
        Domain - DC=<mydomain>,DC=<com>
        no current server
        no current naming context
    I can continue the procedure until step 14. I haven't tested step 15, as my understanding is that I will have to reinstall the whole AD again. Is there any way I can recover my AD from here without having to reinstall the whole thing? Update: Yes, the server was powered off/on because a reboot would take forever (not because I thought power cycling the unit would fix it more than a reboot).

    Read the article

  • Proxmox: VMs and different public IPs

    - by Raj
    I have a server which has two NICs, both directly connected to the internet. I have five different public IP addresses available for the VMs. The host machine (Proxmox) doesn't need to use any (it'll use a private IP and that's all) but will have an internet connection. I've gone through the Proxmox documentation and I'm not able to understand the big picture well enough to set up the right network configuration for my needs. In short, what I have is:
        - One server (Proxmox, host machine)
        - On that server, 5 VMs are created
        - 5 public IP addresses available (one for each VM), let's say: 80.123.21.1, 80.123.21.2, 80.123.21.3, 80.123.21.4, 80.123.21.5
    What I have now for the host is the following:
        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet manual

        auto eth1
        iface eth1 inet manual

        auto vmbr0
        iface vmbr0 inet static
            address 192.168.1.101
            netmask 255.255.255.0
            bridge_ports eth0
            bridge_stp off
            bridge_fd 0

        auto vmbr1
        iface vmbr1 inet manual
    It can be reached from the internal network, so that's OK. It has an internet connection, which is also OK. vmbr1 is going to be used by the VMs. Each VM will have its own IP in its network interfaces configuration file. For some reason, the VMs will not have internet access and they won't be able to have a public IP address. If I use NAT, it will work correctly, but they will not use the public IP addresses allocated to them. Am I missing something?
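
    A sketch of the usual bridged arrangement, assuming the provider delivers the public IPs on eth1's segment: bridge vmbr1 onto eth1 and configure each public address, netmask, and gateway inside the guest's own interface settings. (If the provider instead routes the IPs to the host, a routed or NAT setup with 1:1 rules is needed rather than this.)

        auto vmbr1
        iface vmbr1 inet manual
            bridge_ports eth1
            bridge_stp off
            bridge_fd 0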

    Read the article

  • Easiest way to do host name resolution with IPA?

    - by Luke
    We are currently using static LAN IP addresses for our internal non-public-facing servers. We don't have DHCP configured. We're using Vyatta for our router and firewall. The firewall is configured to be zone-based. We want to set up IPA for centralized authentication (LDAP + Kerberos). IPA requires resolvable host names. I want to avoid having to enter DNS records by hand. What is the most painless way to make host names resolvable that works with IPA in a Linux-only environment? We aren't using anything to resolve host names now. Up until now we've been using static IP addresses and local users on each server. We've looked at BIND, DHCP (does that even solve the problem?), and multicast DNS. At this point we're not sure which solution would work best. Is there another option we haven't considered? Security is very important. We have multiple zones where each zone has very specific or no access to another zone. DNS for public domains is forwarded from Vyatta to our ISP's DNS server.
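
    For what it's worth, IPA can run its own integrated DNS, which removes most of the hand-maintenance of records; a sketch (option names as in recent FreeIPA releases; the forwarder address is a placeholder - point it at Vyatta or the ISP resolver):

        # on the IPA server
        ipa-server-install --setup-dns --forwarder=203.0.113.53

        # on each Linux host: enroll and let it keep its own A record current
        ipa-client-install --enable-dns-updates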

    Read the article

  • LAN Webserver not accessible through PPTP VPN

    - by Joe
    I have this LAN network with 10 clients and one server. The server has 4 virtual machines and a BIND DNS server. When the router assigns an IP through DHCP, it also gives the IP of the DNS server, to resolve internal domains. Everything apparently works fine, the clients being able to access the server's VMs' resources, but I also have to provide remote access. I installed the PPTP VPN on the server, and the VPN clients get an IP from the same address range the router's DHCP is assigning. Apparently everything is fine here also, except that when we connect through the VPN, we cannot access the webserver on port 80 (the webserver being one of the server's VMs). iptables on the webserver has been turned off for testing purposes, and the router's firewall is directing all the external traffic to the server. Can somebody suggest a solution to this? Extra details:
        VPN server: PPTP server on CentOS 6.3 x64
        VPN client: Windows 7 default PPTP VPN connection
    The client connects to the server successfully and everything works (FTP/MySQL/SSH/DNS), except that when I try to access the webserver IP in the browser, it won't work. Pinging it works perfectly.
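
    One frequent culprit when ping and small-packet protocols work over PPTP but HTTP hangs is MTU/MSS: large responses don't fit through the GRE tunnel. A sketch for the gateway/PPTP box (interface names are assumptions):

        # clamp TCP MSS so full-size HTTP responses survive the tunnel
        iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
        # make sure forwarding between the ppp interfaces and the LAN is enabled and allowed
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -A FORWARD -i ppp+ -o eth0 -j ACCEPT
        iptables -A FORWARD -i eth0 -o ppp+ -m state --state ESTABLISHED,RELATED -j ACCEPT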

    Read the article
