Search Results

Search found 1102 results on 45 pages for 'jonathan oliver'.

Page 12/45 | < Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >

  • nginx redirect to a subdomain even without trailing slashes

    - by Oliver A.
    Hi, I just installed and partially configured nginx on a dedicated server of mine, but I'm having some trouble understanding the regexp. I would like nginx to redirect both www.mydomain.com/forum/ AND www.mydomain.com/forum (note the missing trailing slash; case-insensitive; the same applies for "forums" instead of "forum") to forum.mydomain.com/. This is what I came up with:

        location ~* ^/(forum|forums) {
            rewrite ^/(.*)/(.*)$ http://forum.mydomain.com/$2? permanent;
        }

    ...but for some reason it only works with trailing slashes. Please help me! Thanks in advance!
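
    For what it's worth: the rewrite regex ^/(.*)/(.*)$ requires a second slash, which is why the bare /forum never matches. A minimal sketch (untested) that matches with or without the trailing slash:

        location ~* ^/forums?(/.*)?$ {
            return 301 http://forum.mydomain.com$1;
        }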

    Read the article

  • RDP Connection to XP from Windows 7 crashes part of XP

    - by Oliver Nelson
    I have some XP machines that my users RDP into. If I RDP into a machine from something other than a Windows 7 box, and then RDP into it from a Windows 7 box, the XP machine crashes (not a BSOD, though). I get an event ID 1003 with these details: Error code 1000008e, parameter1 c0000005, parameter2 bf85b6b7, parameter3 b6149a88, parameter4 00000000. This happens on quite a few machines, and I can reliably reproduce it, but I've found no way to fix it. If I only connect from Windows 7 machines, it never fails. Any suggestions? (I've checked color settings and basic RDP client settings, and they all seem to match between the Windows 7 clients and the XP clients.)

    Read the article

  • How to write re-usable puppet definitions?

    - by Oliver Probst
    I'd like to write a Puppet manifest to install and configure an application on target servers. Parts of this manifest shall be reusable, so I used define for my reusable functionality. Doing so, I always have the problem that there are parts of the definition which are not reusable. A simple example is a bunch of configuration files to be created. These files must be placed in the same directory, and this directory must be created only once. Example:

    nodes.pp:

        node 'myNode.in.a.domain' {
          mymodule::addconfig { 'configfile1.xml':
            param => 'somevalue',
          }
          mymodule::addconfig { 'configfile2.xml':
            param => 'someothervalue',
          }
        }

    mymodule.pp:

        define mymodule::addconfig ($param) {
          $config_dir = "/the/directory/"

          # ensure that the directory exists:
          file { $config_dir:
            ensure => directory,
          }

          # create the configuration file:
          file { $name:
            path    => "${config_dir}/${name}",
            content => template('a_template.erb'),
            require => File[$config_dir],
          }
        }

    This example will fail, because the resource file { $config_dir: is now declared twice. As far as I understood, it is required to extract these parts into a class. Then it looks like this:

    nodes.pp:

        node 'myNode.in.a.domain' {
          class { 'mymodule::createConfigurationDirectory': }

          mymodule::addconfig { 'configfile1.xml':
            param   => 'somevalue',
            require => Class['mymodule::createConfigurationDirectory'],
          }
          mymodule::addconfig { 'configfile2.xml':
            param   => 'someothervalue',
            require => Class['mymodule::createConfigurationDirectory'],
          }
        }

    But this makes my interface hard to use. Every user of my module has to know that there is a class which is additionally required. For this simple use case the additional class might be acceptable, but with growing module complexity (lots of definitions) I'm a bit afraid of confusing the module's users. So I'd like to know: is there a better way to handle these dependencies? Ideally, classes like createConfigurationDirectory are hidden from the user of the module's API. Or are there some other best practices/patterns for handling such dependencies?
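
    One pattern that avoids the duplicate resource without exposing an extra class at the node level: let the define itself include the directory class. Unlike defines and plain resources, a class can be declared any number of times via include. A sketch using the question's names (paths shortened):

        class mymodule::config_dir {
          file { '/the/directory':
            ensure => directory,
          }
        }

        define mymodule::addconfig ($param) {
          # safe to evaluate in every instance of the define:
          include mymodule::config_dir

          file { "/the/directory/${name}":
            content => template('a_template.erb'),
            require => Class['mymodule::config_dir'],
          }
        }

    Callers then only ever declare mymodule::addconfig; the directory dependency stays an implementation detail of the module.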

    Read the article

  • Why is Windows Update trying to install an update I don't need?

    - by Oliver Salzburg
    I have a Windows 7 system that currently has a single update pending:

        Windows Internet Explorer 9 for Windows 7 for x64-based Systems

    If I try to install the update, Windows Update will create a restore point and then fail with the error: Code 9C48 - Windows Update encountered an error. The event log for the event reads:

        Installation Failure: Windows failed to install the following update with error 0x80070643: Windows Internet Explorer 9 for Windows 7 for x64-based Systems.

    If you search the web for that error, there are many other people with the exact same issue. Sadly, I am unable to apply the proposed solutions to my case, because I just installed this system. There is nothing on it except Windows 7. I installed the system and ran through the updates. I also did the exact same process with this machine several times over the past few days due to a long-term test we just started. I didn't have any problems with any Windows update on the previous installation runs, and I know I didn't do anything different this time, because I followed the installation procedure's instructions which are to be used during the test. How did this happen, and how do I solve it?

    Further investigation

    So, as I always like to do, I ran the update again while running Process Monitor and dug up further details.

    WindowsUpdate.log

    First of all, there is a Windows Update log file located at C:\Windows\WindowsUpdate.log, which I didn't know about. But I fail to see any significant entry in it; maybe you're more lucky:

        2012-04-10 22:46:58:017 956 728 AU AU received approval from Ux for 1 updates
        2012-04-10 22:46:58:017 956 728 AU AU setting pending client directive to 'Progress Ux'
        2012-04-10 22:46:58:095 956 728 AU BeginInteractiveInstall invoked for Download
        2012-04-10 22:46:58:095 956 728 AU Auto-approving update for download, updateId = {B33ACEC1-3265-4D01-9C37-AC0892E95ED9}.100, ForUx=1, IsOwnerUx=1, HasDeadline=0, IsMinor=0
        2012-04-10 22:46:58:095 956 728 AU Auto-approved 1 update(s) for download (for Ux)
        2012-04-10 22:46:58:110 956 728 AU UpdateDownloadProperties: 0 download(s) are still in progress.
        2012-04-10 22:46:58:110 956 728 AU #############
        2012-04-10 22:46:58:110 956 728 AU ## START ## AU: Download updates
        2012-04-10 22:46:58:110 956 728 AU #########
        2012-04-10 22:46:58:110 956 728 AU # Approved updates = 1
        2012-04-10 22:46:58:110 956 728 AU AU initiated download, updateId = {B33ACEC1-3265-4D01-9C37-AC0892E95ED9}.100, callId = {35DF928B-B428-4BAC-8C63-55295967EFBB}
        2012-04-10 22:46:58:110 956 728 AU Setting AU scheduled install time to 2012-04-11 01:00:00
        2012-04-10 22:46:58:110 956 728 AU Successfully wrote event for AU health state:0
        2012-04-10 22:46:58:110 956 728 AU Currently showing Progress UX client - so not launching any other client
        2012-04-10 22:46:58:110 956 bb8 DnldMgr *************
        2012-04-10 22:46:58:110 956 bb8 DnldMgr ** START ** DnldMgr: Downloading updates [CallerId = AutomaticUpdatesWuApp]
        2012-04-10 22:46:58:110 956 bb8 DnldMgr *********
        2012-04-10 22:46:58:110 956 bb8 DnldMgr * Call ID = {35DF928B-B428-4BAC-8C63-55295967EFBB}
        2012-04-10 22:46:58:110 956 bb8 DnldMgr * Priority = 3, Interactive = 1, Owner is system = 0, Explicit proxy = 0, Proxy session id = 1, ServiceId = {9482F4B4-E343-43B6-B170-9A65BC822C77}
        2012-04-10 22:46:58:110 956 bb8 DnldMgr * Updates to download = 1
        2012-04-10 22:46:58:110 956 bb8 Agent * Title = Windows Internet Explorer 9 for Windows 7 for x64-based Systems
        2012-04-10 22:46:58:110 956 bb8 Agent * UpdateId = {B33ACEC1-3265-4D01-9C37-AC0892E95ED9}.100
        2012-04-10 22:46:58:110 956 bb8 Agent * Bundles 1 updates:
        2012-04-10 22:46:58:110 956 bb8 Agent * {6D9A90B7-FAF9-4A47-9EFE-A506264873B3}.100
        2012-04-10 22:46:58:110 956 bb8 DnldMgr *********** DnldMgr: New download job [UpdateId = {6D9A90B7-FAF9-4A47-9EFE-A506264873B3}.100] ***********
        2012-04-10 22:46:58:110 956 728 AU Successfully wrote event for AU health state:0
        2012-04-10 22:46:58:110 956 728 AU # Pending download calls = 1
        2012-04-10 22:46:58:110 956 728 AU ## RESUMED ## AU: Download update [UpdateId = {B33ACEC1-3265-4D01-9C37-AC0892E95ED9}, succeeded]
        2012-04-10 22:46:58:313 956 bb8 Agent ** END ** Agent: Downloading updates [CallerId = AutomaticUpdatesWuApp]
        2012-04-10 22:46:58:313 956 bb8 Agent *************
        2012-04-10 22:46:58:313 956 718 AU #########
        2012-04-10 22:46:58:313 956 718 AU ## END ## AU: Download updates
        2012-04-10 22:46:58:313 956 718 AU #############
        2012-04-10 22:46:58:313 956 718 AU Setting AU scheduled install time to 2012-04-11 01:00:00
        2012-04-10 22:46:58:313 956 718 AU Successfully wrote event for AU health state:0
        2012-04-10 22:46:58:313 956 718 AU Currently showing Progress UX client - so not launching any other client
        2012-04-10 22:46:58:313 956 718 AU Successfully wrote event for AU health state:0
        2012-04-10 22:46:58:313 956 aac AU Getting featured update notifications. fIncludeDismissed = true
        2012-04-10 22:46:58:313 956 aac AU No featured updates available.
        2012-04-10 22:47:00:107 956 aac AU BeginInteractiveInstall invoked for Install
        2012-04-10 22:47:00:107 956 aac AU Auto-approving update for install, updateId = {B33ACEC1-3265-4D01-9C37-AC0892E95ED9}.100, ForUx=1, IsOwnerUx=1, HasDeadline=0, IsMinor=0
        2012-04-10 22:47:00:107 956 aac AU Auto-approved 1 update(s) for install (for Ux), installType=1
        2012-04-10 22:47:00:107 956 aac AU #############
        2012-04-10 22:47:00:107 956 aac AU ## START ## AU: Install updates
        2012-04-10 22:47:00:107 956 aac AU #########
        2012-04-10 22:47:00:107 956 aac AU # Initiating manual install
        2012-04-10 22:47:00:107 956 aac AU # Approved updates = 1
        2012-04-10 22:47:00:107 956 aac AU ## RESUMED ## AU: Installing update [UpdateId = {B33ACEC1-3265-4D01-9C37-AC0892E95ED9}]
        2012-04-10 22:47:13:773 2232 9fc Handler : WARNING: Exit code = 0x8024200B
        2012-04-10 22:47:13:773 956 718 AU # WARNING: Install failed, error = 0x80070643 / 0x00009C48
        2012-04-10 22:47:13:773 2232 9fc Handler :::::::::
        2012-04-10 22:47:13:773 2232 9fc Handler :: END :: Handler: Command Line Install
        2012-04-10 22:47:13:773 2232 9fc Handler :::::::::::::
        2012-04-10 22:47:13:851 956 a7c Agent *********
        2012-04-10 22:47:13:851 956 a7c Agent ** END ** Agent: Installing updates [CallerId = AutomaticUpdates]
        2012-04-10 22:47:13:851 956 718 AU Install call completed.
        2012-04-10 22:47:13:851 956 a7c Agent *************
        2012-04-10 22:47:13:851 956 718 AU # WARNING: Install call completed, reboot required = No, error = 0x00000000
        2012-04-10 22:47:13:851 956 718 AU #########
        2012-04-10 22:47:13:851 956 718 AU ## END ## AU: Installing updates [CallId = {FCFF2A5C-25AB-4FB9-AB2B-35C65CCA6A9F}]
        2012-04-10 22:47:13:851 956 718 AU #############
        2012-04-10 22:47:13:851 956 718 AU Install complete for all calls, reboot NOT needed
        2012-04-10 22:47:13:851 956 718 AU Setting AU scheduled install time to 2012-04-11 01:00:00
        2012-04-10 22:47:13:851 956 718 AU Successfully wrote event for AU health state:0
        2012-04-10 22:47:13:851 956 498 AU Getting featured update notifications. fIncludeDismissed = true
        2012-04-10 22:47:13:851 956 498 AU No featured updates available.
        2012-04-10 22:47:14:366 956 168 AU No featured updates notifications to show
        2012-04-10 22:47:14:366 956 168 AU UpdateDownloadProperties: 0 download(s) are still in progress.
        2012-04-10 22:47:14:366 956 168 AU Triggering Offline detection (non-interactive)
        2012-04-10 22:47:14:366 956 168 AU AU setting pending client directive to 'Install Complete Ux'
        2012-04-10 22:47:14:366 956 168 AU Changing existing AU client directive from 'Progress Ux' to 'Install Complete Ux', session id = 0x1
        2012-04-10 22:47:14:366 956 168 AU Successfully wrote event for AU health state:0
        2012-04-10 22:47:14:366 956 b78 AU #############
        2012-04-10 22:47:14:366 956 b78 AU ## START ## AU: Search for updates
        2012-04-10 22:47:14:366 956 b78 AU #########
        2012-04-10 22:47:14:366 956 b78 AU ## RESUMED ## AU: Search for updates [CallId = {0198DD3A-D7B0-48F5-A77D-795F8A1BDCE8}]
        2012-04-10 22:47:16:097 956 718 AU # 1 updates detected
        2012-04-10 22:47:16:097 956 718 AU #########
        2012-04-10 22:47:16:097 956 718 AU ## END ## AU: Search for updates [CallId = {0198DD3A-D7B0-48F5-A77D-795F8A1BDCE8}]
        2012-04-10 22:47:16:097 956 718 AU #############
        2012-04-10 22:47:16:097 956 718 AU No featured updates notifications to show
        2012-04-10 22:47:16:097 956 718 AU Setting AU scheduled install time to 2012-04-11 01:00:00
        2012-04-10 22:47:16:097 956 718 AU Successfully wrote event for AU health state:0
        2012-04-10 22:47:16:097 956 718 AU Successfully wrote event for AU health state:0
        2012-04-10 22:47:16:113 956 55c AU Getting featured update notifications. fIncludeDismissed = true
        2012-04-10 22:47:16:113 956 55c AU No featured updates available.
        2012-04-10 22:47:18:780 956 bb8 Report REPORT EVENT: {27479C66-E930-4F9C-AFF2-27EDD76DED8F} 2012-04-10 22:47:13:773+0200 1 182 101 {B33ACEC1-3265-4D01-9C37-AC0892E95ED9} 100 80070643 AutomaticUpdates Failure Content Install Installation Failure: Windows failed to install the following update with error 0x80070643: Windows Internet Explorer 9 for Windows 7 for x64-based Systems.
        2012-04-10 22:47:18:780 956 bb8 Report CWERReporter::HandleEvents - WER report upload completed with status 0x8
        2012-04-10 22:47:18:780 956 bb8 Report WER Report sent: 7.5.7601.17514 0x80070643 B33ACEC1-3265-4D01-9C37-AC0892E95ED9 Install 101 Unmanaged
        2012-04-10 22:47:18:780 956 bb8 Report CWERReporter finishing event handling. (00000000)

    WU-IE9-Windows7-x64.exe

    The actual update that is executed is downloaded and stored at the following location:

        C:\Windows\SoftwareDistribution\Download\Install\WU-IE9-Windows7-x64.exe

    Executing that file manually results in an error message.

    IE9_main.log

    The IE9 installer/updater also creates its own log file, located at C:\Windows\IE9_main.log. For the update session in question, the installer logged:

        00:00.000: ====================================================================
        00:00.016: Started: 2012/04/10 (Y/M/D) 23:10:53.897 (local)
        00:00.032: Time Format in this log: MM:ss.mmm (minutes:seconds.milliseconds)
        00:00.063: Command line: "C:\Windows\SoftwareDistribution\Download\Install\WU-IE9-Windows7-x64.exe"
        00:00.078: INFO: Setup installer for Internet Explorer: 9.0.8112.16421
        00:00.094: INFO: Previous version of Internet Explorer: 9.0.8112.16443
        00:00.110: INFO: Checking if iexplore.exe's current version is between 9.0.6001.0...
        00:00.125: INFO: ...and 9.1.0.0...
        00:00.141: INFO: Maximum version on which to run IEAK branding is: 9.1.0.0...
        00:00.156: ERROR: A newer version of Internet Explorer is already installed on the system.
        00:00.188: ERROR: Internet Explorer version check failed.
        01:03.789: INFO: Setup exit code: 0x00009C48 (40008) - A more recent version of Internet Explorer is installed.
        01:03.820: INFO: Scheduling upload to IE SQM server: http://sqm.microsoft.com/sqm/ie/sqmserver.dll
        01:03.852: INFO: SQM Upload returned 403
        01:03.867: INFO: Cleaning up temporary files in: C:\Windows\TEMP\IE978E.tmp
        01:03.883: INFO: Unable to remove directory C:\Windows\TEMP\IE978E.tmp, marking for deletion on reboot.
        01:03.898: INFO: Released Internet Explorer Installer Mutex

    Which pretty much confirms what the error message says when executing the update manually: it's simply already installed, or even obsolete, because a newer version is installed. So why does it keep trying to install the update?

    Possible solutions?

    Uninstalling Windows Internet Explorer 9 and manually installing the cached C:\Windows\SoftwareDistribution\Download\Install\WU-IE9-Windows7-x64.exe will result in the same error after applying all pending updates.

    Applying the FixIt for the issue "You receive '0x80070643' or '0x643' error codes when you try to install .NET Framework updates through Windows Update or Microsoft Update" will not resolve the issue.

    Applying the suggested solution for the issue "Error message when you try to install updates by using the Windows Update or Microsoft Update Web site: '0x80070003'" will not resolve the issue.

    Running the FixIt "Automatically diagnose and fix common problems with Windows Update" does report having resolved issues with Windows Update, but didn't resolve the issue.

    Running the FixIt for the issue "How to troubleshoot Windows Update or Microsoft Update when you are repeatedly offered an update" does not resolve the issue, with neither normal nor aggressive settings.
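
    One avenue not on the list above (a sketch of the standard update-agent cache reset, hedged, since the detection state may simply be stale after IE9 arrived through another channel; run from an elevated prompt):

        net stop wuauserv
        net stop bits
        ren C:\Windows\SoftwareDistribution SoftwareDistribution.old
        net start bits
        net start wuauserv
        wuauclt /resetauthorization /detectnow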

    Read the article

  • Fatal error on "mode 120000" file during git -> svn migration

    - by Oliver
    Following the instructions from http://code.google.com/p/support/wiki/ImportingFromGit, I'm trying to migrate a git repository to svn, but during the "git rebase master tmp" step it fails with the following error after applying the first few patches:

        $ git rebase master tmp
        First, rewinding head to replay your work on top of it...
        Applying: Imported
        Applying: Cleaned up the readme file
        Applying: fix problem with versions
        fatal: unable to write file foobar mode 120000
        Patch failed at 0003 fix problem with versions

        When you have resolved this problem run "git rebase --continue".
        If you would prefer to skip this patch, instead run "git rebase --skip".
        To restore the original branch and stop rebasing run "git rebase --abort".

    I understand that 120000 may refer to a symlink, but Subversion has supported symlinks for a long time now. Subversion installed is 1.6.5, Git is 1.6.3.3, running on Ubuntu Linux. The system is not running out of disk space, and this operation is taking place within my home directory, so permissions should not be an issue.
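
    Mode 120000 is indeed how git records a symlink, so one way to narrow this down (a sketch) is to list every symlink in the branch being replayed and check whether foobar's target directory actually exists in the working tree at patch 0003:

        # list every symlink (mode 120000) in the tmp branch
        git ls-tree -r tmp | awk '$1 == "120000" { print $4 }'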

    Read the article

  • nginx regex locations w/ different roots not working as expected

    - by Wells Oliver
    I have the following two rules:

        location / {
            root /var/www/default;
        }

        location ~* /myapp(.*)$ {
            root /home/me/myapp/www;
            try_files $uri $uri/ /handle.php?url=$uri&$args;
        }

    When I browse to /myapp/foo it works - kind of. The error is logged as a 404:

        *3 open() "/var/www/default/handle.php" failed (2: No such file or directory)

    So it's handling the regex match, but just not using the right document root - why is this? For the record, I am trying to get /myapp/* requests handled by the second location, and everything else by the first.
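
    A likely explanation, judging by the logged path: the last try_files argument triggers an internal redirect, and /handle.php is re-matched against all locations, where it hits location / and therefore /var/www/default. A sketch of one way around it, assuming PHP-FPM on 127.0.0.1:9000 and handle.php at the top of /home/me/myapp/www (a named location is handled in place, with no re-match; @handler is just an illustrative name):

        location ~* /myapp(.*)$ {
            root /home/me/myapp/www;
            try_files $uri $uri/ @handler;
        }

        location @handler {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_param SCRIPT_FILENAME /home/me/myapp/www/handle.php;
            fastcgi_param QUERY_STRING    url=$document_uri&$args;
            fastcgi_param REQUEST_METHOD  $request_method;
            fastcgi_param CONTENT_TYPE    $content_type;
            fastcgi_param CONTENT_LENGTH  $content_length;
        }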

    Read the article

  • Routing a single request through multiple nginx backend apps

    - by Jonathan Oliver
    I wanted to get an idea of whether anything like the following scenario is possible:

    1. Nginx handles a request and routes it to some kind of authentication application, where cookies and/or other kinds of security identifiers are interpreted and verified. The app perhaps makes a few additions to the request (appending authenticated headers). Failing authentication returns an HTTP 401.
    2. Nginx then takes the request and routes it through an authorization application which determines, based upon identity, the HTTP verb (put, delete, get, etc.), and the URL in question, whether the actor/agent/user has permission to perform the intended action. Perhaps the authorization application modifies the request somewhat by appending another header, for example. Failing authorization returns 403.
    3. (Wash, rinse, repeat the proxy pattern for any number of services that want to participate in the request in some fashion.)
    4. Finally, Nginx routes the request into the actual application code, where the request is inspected and the requested operations are executed according to the URL in question, and where the identity of the user can be captured and understood by the application by looking at the altered HTTP request.

    Ideally, Nginx could do this natively or with a plugin. Any ideas? The alternative I've considered is having Nginx hand off the initial request to the authentication application and then have this application proxy the request back through to Nginx (whether on the same box or another box). I know there are a number of application frameworks (Django, RoR, etc.) that can do a lot of this stuff "in process", but I was trying to make things a little more generic and self-contained, where different applications could "hook" the HTTP pipeline of Nginx and then participate in, short-circuit, and even modify the request accordingly. If Nginx can't do this, is anyone aware of other web servers that will perform in the manner described above?
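
    Nginx can do at least the authentication leg natively via the optional ngx_http_auth_request_module: each request spawns a subrequest to the auth app, a 2xx response lets it pass, and 401/403 short-circuits. A sketch, assuming upstreams auth_backend and app_backend are defined elsewhere and a hypothetical X-Auth-User header set by the auth app:

        location / {
            auth_request /auth;
            # copy a header the auth app sets into the request we forward on:
            auth_request_set $auth_user $upstream_http_x_auth_user;
            proxy_set_header X-Auth-User $auth_user;
            proxy_pass http://app_backend;
        }

        location = /auth {
            internal;
            proxy_pass http://auth_backend;
            proxy_pass_request_body off;
            proxy_set_header Content-Length "";
            proxy_set_header X-Original-URI $request_uri;
        }

    Only one auth_request stage is allowed per location, so a separate authorization hop as described would still have to live inside the auth app or the backend itself.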

    Read the article

  • Extend RAID 1 (HP SmartArray P410i) running Linux

    - by Oliver
    I took over a fairly simple server setup with the following RAID 1 config, running Ubuntu 11.10 (kernel 3.0.0-12-server x86_64):

        => ctrl all show config

        Smart Array P410i in Slot 0 (Embedded)    (sn: removed)

           array A (SAS, Unused Space: 1335535 MB)
              logicaldrive 1 (279.4 GB, RAID 1, OK)
              physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 1 TB, OK)
              physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 1 TB, OK)

    Initially there were two 300 GB disks that got replaced by 1 TB disks, and I now have to extend the logical volume to use that extra space. However, when trying to do so, I get the following warning:

        => ctrl slot=0 ld 1 modify size=max

        Warning: Extension may not be supported on certain operating systems.
        Performing extension on these operating systems can cause data to
        become inaccessible. See ACU documentation for details. Continue? (y/n)

    Is it safe to say yes, or am I at risk of corrupting the file system / losing data? Rearranging and extending the file system afterwards shouldn't be an issue, as I can take the server offline and boot from a gparted live disk. Here's the config of the RAID controller in use:

        => ctrl all show detail

        Smart Array P410i in Slot 0 (Embedded)
           Bus Interface: PCI
           Slot: 0
           Serial Number: removed
           RAID 6 (ADG) Status: Disabled
           Controller Status: OK
           Hardware Revision: Rev C
           Firmware Version: 5.12
           Rebuild Priority: Medium
           Expand Priority: Medium
           Surface Scan Delay: 15 secs
           Surface Scan Mode: Idle
           Wait for Cache Room: Disabled
           Surface Analysis Inconsistency Notification: Disabled
           Post Prompt Timeout: 0 secs
           Cache Board Present: False
           Drive Write Cache: Disabled
           SATA NCQ Supported: True

    And the partition table:

        Number  Start   End    Size    Type      File system     Flags
         1      1049kB  274GB  274GB   primary   ext4            boot
         2      274GB   300GB  25.8GB  extended
         5      274GB   300GB  25.8GB  logical   linux-swap(v1)
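
    For reference, a sketch of the usual follow-up once the logical drive has been grown, assuming the array appears as /dev/sda and the layout shown above (run from a live environment, since partition 1 carries the root filesystem, and back up first):

        parted /dev/sda print    # confirm the kernel sees the new logical drive size
        # move or recreate the extended partition (2) and swap (5) at the end of
        # the disk, grow partition 1, then grow the filesystem:
        e2fsck -f /dev/sda1
        resize2fs /dev/sda1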

    Read the article

  • Why is Denic not accepting my nameservers?

    - by Oliver Salzburg
    I'm currently in the process of moving all of our domains to our own nameservers, which wasn't an issue until I hit our own .de domain. I (think I) understand the implications of having the NS inside its own domain, hence the need for glue records. Until yesterday I would have assumed I had a pretty good understanding of BIND and DNS zones, until I was presented with this error from the Denic nameserver predelegation check:

        Inconsistent set of nameserver IP addresses (NS, provided glues, determined glues)
        ns2.hartwig-at.de [88.198.242.190/88.198.242.190]
        Default resolver determined: [], other resolvers determined:
        {88.198.242.190/88.198.242.190=[/2a01:4f8:d13:3c85:0:0:0:2, /88.198.242.190]}

        Inconsistent set of nameserver IP addresses (NS, provided glues, determined glues)
        ns1.hartwig-at.de [cloud.hartwig-at.de/176.221.46.23]
        Default resolver determined: [], other resolvers determined:
        {cloud.hartwig-at.de/176.221.46.23=[/2a00:1158:3:0:0:0:0:b6, /176.221.46.23]}

    The support of my registrar is either far better educated than me or doesn't have a clue. Either way, they're avoiding my questions in regard to what this error means. They just tell me:

        Your nameserver has to return your own nameservers as the default resolver.

    But that doesn't make any sense to me, and they refuse to try to explain it any other way. This is the head of my current zone file:

        @ 86400 IN SOA ns1.hartwig-at.de. hostmaster.hartwig-at.de. (
            2012070505 ; serial
            1d         ; refresh
            3h         ; retry
            4w         ; expiry
            1h )       ; minimum

                  3600 IN NS ns1.hartwig-at.de.
                  3600 IN NS ns2.hartwig-at.de.
                  3600 IN MX 10 remote.hartwig-at.de.
                  3600 IN MX 20 mx1.hartwig-at.de.
                  3600 IN MX 30 mx2.hartwig-at.de.

        localhost 3600 IN A    127.0.0.1
        localhost 3600 IN AAAA ::1
        @         3600 IN A    176.221.46.23
                  3600 IN AAAA 2a00:1158:3::b6
        *         3600 IN A    176.221.46.23
                  3600 IN AAAA 2a00:1158:3::b6
        hetzner   3600 IN A    88.198.242.190
        hetzner   3600 IN AAAA 2a01:4f8:d13:3c85::2
        cloud     3600 IN A    176.221.46.23
        cloud     3600 IN AAAA 2a00:1158:3::b6

        ; List all NS as A/AAAA record
        ns        3600 IN A    176.221.46.23
        ns        3600 IN AAAA 2a00:1158:3::b6
        ns1       3600 IN A    176.221.46.23
        ns1       3600 IN AAAA 2a00:1158:3::b6
        ns2       3600 IN A    88.198.242.190
        ns2       3600 IN AAAA 2a01:4f8:d13:3c85::2

    So, what is the problem with my zone? And what is the "default resolver"?
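
    To see what the predelegation check sees, it may help to query each nameserver directly for its own glue names and compare the answers against the registered glue (a sketch using dig):

        dig @ns1.hartwig-at.de ns1.hartwig-at.de A +short
        dig @ns1.hartwig-at.de ns1.hartwig-at.de AAAA +short
        dig @ns2.hartwig-at.de ns2.hartwig-at.de A +short
        dig @ns2.hartwig-at.de hartwig-at.de NS +short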

    Read the article

  • How do I apply WinHTTP proxy settings domain-wide?

    - by Oliver Salzburg
    We're already configuring Internet Explorer proxy settings through Group Policy, and it works great. Sadly, I've recently run into multiple issues where those settings are ignored by certain services. I realized that these services have one thing in common: they use WinHTTP, which has its own proxy settings. Now I'm asking myself how to apply those across the whole domain. I realize that I could create a logon script and simply run netsh winhttp import proxy source=ie, but from experience I know that these settings require a reboot to take effect, so this wouldn't help me at all in a logon script. So, how can I do it?
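
    A sketch of one workaround: deploy the netsh call as a computer startup script (Computer Configuration > Policies > Windows Settings > Scripts) rather than a logon script. Startup scripts run as SYSTEM at boot, so the machine-wide WinHTTP setting is in place before services start. proxy.example.local is a placeholder:

        netsh winhttp set proxy proxy-server="proxy.example.local:8080" bypass-list="*.example.local;<local>"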

    Read the article

  • nginx: try_files not finding static files, falling back to PHP

    - by Wells Oliver
    Relevant configuration:

        location /myapp {
            root /home/me/myapp/www;
            try_files $uri $uri/ /myapp/index.php?url=$uri&$args;

            location ~ \.php {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include /etc/nginx/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root/index.php;
            }
        }

    I absolutely have a file foo.html in /home/me/myapp/www, but when I browse to /myapp/foo.html it is handled by PHP, the final fallback in the try_files list. Why is this happening?
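
    A plausible cause: with root, nginx appends the full URI to the root, so /myapp/foo.html is looked up at /home/me/myapp/www/myapp/foo.html, which doesn't exist, and try_files falls through to index.php. A sketch of the alias variant, which strips the location prefix instead (hedged: alias combined with try_files has had quirks in some nginx versions, so this needs testing, and the nested PHP location's SCRIPT_FILENAME would need the same treatment):

        location /myapp/ {
            alias /home/me/myapp/www/;
            try_files $uri $uri/ /myapp/index.php?url=$uri&$args;
        }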

    Read the article

  • allowing index access only with .htaccess

    - by Oliver Nourish
    Hello, I have this in my .htaccess file, in the site root:

        Options -Indexes

        <directory ../.*>
            Deny from all
        </directory>

        <Files .htaccess>
            order allow,deny
            deny from all
        </Files>

        <Files index.php>
            Order allow,deny
            allow from all
        </Files>

    What I'm trying to achieve is to block folder and file access to anything that isn't called index.php, regardless of which directory is accessed. I have the folder part working perfectly, and the "deny from all" rule is working as well - but my attempt to allow access to index.php is failing. Basically, could someone tell me how to get it working?
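
    As far as I know, <Directory> sections aren't honored inside .htaccess at all (they belong in the server config), which may be part of the odd behavior. A sketch of a root .htaccess in Apache 2.2 syntax that denies everything and then re-allows only index.php; since .htaccess applies recursively, this covers subdirectories as well:

        Options -Indexes

        Order deny,allow
        Deny from all

        <Files "index.php">
            Order allow,deny
            Allow from all
        </Files>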

    Read the article

  • Scheduling Automatic Backups for Virtual Private Web Server running CENTOS 6.3 and WHM

    - by Oliver Farrell
    I'm pretty new to administering my own VPS, but thus far am finding it quite a compelling experience. There's something quite refreshing about having complete control over everything it does. One thing that I would like to look at is a suitable backup solution (a few times a day). My current setup is as follows: I'm running a CentOS 6.3 VPS with a single 25GB hard drive, solely for the purpose of hosting websites, and I'm using WHM & cPanel for administering them. I now plan on adding an additional hard disk and hooking it up to my VPS. What I'm not sure about is how I get the two disks talking and get the backup process going. I'm not a seasoned SSH-er, so I don't really know where to start. I'm hosting with Serverlove (one of the best hosting providers I've used) and am provided with a number of unique identifiers for each hard disk, so I imagine these may play a part in linking them together. I appreciate that this is a little vague (I'm clutching at straws), but any assistance is very much appreciated.
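
    A sketch of the basic plumbing, assuming the new disk shows up as /dev/sdb (check with fdisk -l or dmesg first); cPanel's own backup feature can then be pointed at the mount, or plain rsync can run from cron:

        fdisk -l                                  # identify the new disk
        mkfs.ext4 /dev/sdb                        # one filesystem across the whole disk
        mkdir /backup
        echo '/dev/sdb /backup ext4 defaults 0 2' >> /etc/fstab
        mount /backup
        # example cron entry (crontab -e): sync /home every 6 hours
        # 0 */6 * * * rsync -a --delete /home/ /backup/home/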

    Read the article

  • Why does the EFI shell not detect my Windows DVD?

    - by Oliver Salzburg
    I'm currently looking into (U)EFI for the first time and am already really confused. I insert the Windows Server 2008 R2 Enterprise disc into the DVD-ROM and boot into the EFI shell. The shell automatically lists all detected devices, which are:

        blk0 :CDRom - Alias (null)
              Acpi(PNP0A03,0)/Pci(1F|2)/Ata(Primary,Master)/CDROM(Entry0)
        blk1 :BlockDevice - Alias (null)
              Acpi(PNP0A03,0)/Pci(1F|2)/Ata(Primary,Master)

    To my understanding, it should have already detected the filesystem on blk0 and should have mounted it as fs0. Why is that not happening? If I insert a USB drive, it gets mounted just fine. The board is an Intel S5520HC, in case that makes a difference.
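
    A sketch of shell commands worth trying (syntax differs between EFI shell builds): a blk device with no fsN mapping usually means no supported filesystem was recognized at scan time, and a rescan sometimes picks the disc up after it has spun up:

        map -r        # rescan all handles and rebuild the mapping table
        map           # list mappings; look for an fs* entry backed by blk0
        mount blk0    # on EFI 1.x shells, attempt an explicit mount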

    Read the article

  • Oracle 11g RDBMS Enterprise Edition download?

    - by Oliver Michels
    I am looking for a URL where I can download the Enterprise Edition of Oracle 11g. I know that I can download the Standard Edition of 11g from Oracle's technet, but as I would like to use some of those enterprise options, which are disabled in Standard, I would rather reinstall the Enterprise software on our development server.

    Read the article

  • How exactly does processing files work on AFP shares?

    - by Oliver Joseph Ash
    I recently made myself a NAS, and I've been wondering about how AFP shares work. If I have a ZIP on the AFP share and I use Finder to decompress the file, what will the process for decompressing this file be? Will it read the file into memory on my Mac, process it, and then write the results back to the AFP share? I've been wondering because when I log in via SSH to decompress, I seem to get faster results.

    Read the article

  • How to keep group-writeable shares on Samba with OSX clients?

    - by Oliver Salzburg
    I have a FreeNAS server on a network with OSX and Windows clients. When the OSX clients interact with SMB/CIFS shares on the server, they are causing permission problems for all other clients. Update: I can no longer verify any answers, because we abandoned the project, but feel free to post any help for future visitors. The details of this behavior also seem to be dependent on the version of OSX the client is running. For this question, let's assume a client running 10.8.2. When I mount the CIFS share on an OSX client and create a new directory on it, the directory will be created with drwxr-xr-x permissions. This is undesirable, because it will not allow anyone but me to write to the directory. There are other users in my group who should have write permissions as well. This behavior happens even though the following settings are present in smb.conf on the server:

        [global]
        create mask = 0666
        directory mask = 0777

        [share]
        force directory mode = 0775
        force create mode = 0660

    I was under the impression that these settings should make sure that directories are at least created with rwxrwxr-x permissions. But, I guess, that doesn't stop the client from changing the permissions after creating the directory. When I create a folder on the same share from a Windows client, the new folder will have the desired access permissions (rwxrwxrwx), so I'm currently assuming that the problem lies with the OSX client. I guess this wouldn't be such an issue if you could easily change the permissions of the directories you've created, but you can't. When opening the directory info in Finder, I get the old "You have custom access" notice, with no ability to make any changes. I'm assuming that this is caused by the fact that we're using Windows ACLs on the share, but that's just a wild guess. Changing the write permissions for the group through the terminal works fine, but this is impractical for the deployment and unreasonable to expect from anyone. This is the complete smb.conf:

        [global]
        encrypt passwords = yes
        dns proxy = no
        strict locking = no
        read raw = yes
        write raw = yes
        oplocks = yes
        max xmit = 65535
        deadtime = 15
        display charset = LOCALE
        max log size = 10
        syslog only = yes
        syslog = 1
        load printers = no
        printing = bsd
        printcap name = /dev/null
        disable spoolss = yes
        smb passwd file = /var/etc/private/smbpasswd
        private dir = /var/etc/private
        getwd cache = yes
        guest account = nobody
        map to guest = Bad Password
        obey pam restrictions = Yes
        # NOTE: read smb.conf.
        directory name cache size = 0
        max protocol = SMB2
        netbios name = freenas
        workgroup = COMPANY
        server string = FreeNAS Server
        store dos attributes = yes
        hostname lookups = yes
        security = user
        passdb backend = ldapsam:ldap://ldap.company.local
        ldap admin dn = cn=admin,dc=company,dc=local
        ldap suffix = dc=company,dc=local
        ldap user suffix = ou=Users
        ldap group suffix = ou=Groups
        ldap machine suffix = ou=Computers
        ldap ssl = off
        ldap replication sleep = 1000
        ldap passwd sync = yes
        #ldap debug level = 1
        #ldap debug threshold = 1
        ldapsam:trusted = yes
        idmap uid = 10000-39999
        idmap gid = 10000-39999
        create mask = 0666
        directory mask = 0777
        client ntlmv2 auth = yes
        dos charset = CP437
        unix charset = UTF-8
        log level = 1

        [share]
        path = /mnt/zfs0
        printable = no
        veto files = /.snap/.windows/.zfs/
        writeable = yes
        browseable = yes
        inherit owner = no
        inherit permissions = no
        vfs objects = zfsacl
        guest ok = no
        inherit acls = Yes
        map archive = No
        map readonly = no
        nfs4:mode = special
        nfs4:acedup = merge
        nfs4:chown = yes
        hide dot files
        force directory mode = 0775
        force create mode = 0660
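
    If the OSX client is issuing an explicit chmod/ACL call after the create (which force create mode alone does not prevent), the Samba 3.x-era masks that clamp such calls might help. A hedged sketch for the share section, untested against the zfsacl VFS, which may take precedence:

        [share]
        security mask = 0660
        force security mode = 0660
        directory security mask = 0775
        force directory security mode = 0775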

    Read the article

  • How can I open a console application with a given window size?

    - by Oliver Salzburg
    The application I want to start is MongoDB. Started normally, its output wraps in the small default console window. I don't like the amount of line breaks, and I have a lot of screen space, so I would like to utilize said space to get rid of the line breaks. I can change the size of the console window with MODE, so I wrote a batch file like this:

        @ECHO OFF
        MODE con:cols=140 lines=70
        %~dp0mongodb\bin\mongod --dbpath %~dp0data --rest

    So far, so good. When I start this batch file, I get a larger window, as desired. But when I now press Ctrl+C to exit MongoDB, I get the annoying prompt:

        Terminate batch job (Y/N)?

    Which is useless, because the command I just exited was the last command in the batch job anyway, and no matter what I answer, the result is the same. So, how can I get a larger console window for the application without getting that prompt when I hit Ctrl+C?
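
    A sketch of one workaround (untested; quoting may need tweaks for paths with spaces): have the batch file open a new, pre-sized console and run mongod in it directly. Ctrl+C then arrives in a console where no batch file is executing, so the Y/N prompt never appears:

        @ECHO OFF
        start "MongoDB" cmd /c "MODE con: cols=140 lines=70 & %~dp0mongodb\bin\mongod --dbpath %~dp0data --rest"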

    Read the article

  • HTML files browsable but PHP ones aren't

    - by Oliver Nourish
    Hello, I'm checking the FTP settings a client has sent me. I can create, edit, and upload/download .html files fine. However, I'm finding that .php files aren't browsable unless they contain no PHP tags. I know very little about the client's server at this point, but I have checked for a .htaccess file and not found one. What else can I do to determine if PHP is supported? This seems to be resolved.
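
    A quick probe to upload (a sketch, as a file named e.g. test.php): if PHP is wired up, the page renders the configuration report; if the raw source or a blank page appears, the server isn't handing .php files to PHP, which would be a question for the client's host:

        <?php
        // renders the PHP configuration page if PHP is active
        phpinfo();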

    Read the article

  • How do you get around security warnings when redirecting AppData?

    - by Oliver Salzburg
    I've recently set up folder redirection for my user profile in a domain. For now, I've redirected AppData, Desktop, Pictures, Documents and Favorites. So far, so good. But now I've noticed a quite disturbing side effect of the whole thing: whenever I click any of my pinned elements on the task bar, I get a security warning. The shortcuts get synced as well and are no longer trusted. They're located at:

        \\DFS\UserData\Profiles\OliverSalzburg\AppData\Roaming\Microsoft\Internet Explorer\Quick Launch\User Pinned\TaskBar

    That seems like it could be a problem when rolling it out to the whole company.
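
    A hedged sketch of the usual remedy: put the file server into the Local Intranet zone so file:// shortcuts from \\DFS are trusted. Via GPO this is the "Site to Zone Assignment List" policy; the raw per-user registry equivalent (zone 1 = Local Intranet) looks roughly like this:

        Windows Registry Editor Version 5.00

        ; map file://DFS into zone 1 (Local Intranet)
        [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\DFS]
        "file"=dword:00000001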

    Read the article

  • Routing all data through an VPN tunnel with ppp

    - by Oliver
    I'm trying to create a VPN tunnel that forwards all data from the local machine to the VPN server. I'm using ppp-2.4.5 for this, with the following configuration:

        pty "pptp <VPNServer> --nolaunchpppd"
        name <my login name>
        remotename PPTP
        usepeerdns
        require-mppe-128
        file /etc/ppp/options.pptp
        persist
        maxfail 0
        holdoff 5

    I have a script in if-up.d with the following content:

        route del default eth0
        route add default dev ppp0

    Before starting the VPN tunnel, my routing looks like:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         192.168.0.1     0.0.0.0         UG    2      0        0 eth0
        127.0.0.0       127.0.0.1       255.0.0.0       UG    0      0        0 lo
        192.168.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth0

    After starting the tunnel (via pon), it looks like:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         0.0.0.0         0.0.0.0         U     0      0        0 ppp0
        12.34.56.1      0.0.0.0         255.255.255.255 UH    0      0        0 ppp0
        127.0.0.0       127.0.0.1       255.0.0.0       UG    0      0        0 lo
        192.168.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth0

    Now the problem is that the VPN tunnel seems to be looped into itself. If I run ifconfig after a few seconds without any traffic:

        eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
              inet 192.168.0.10  netmask 255.255.0.0  broadcast 192.168.255.255
              ether 00:01:2e:2f:ff:35  txqueuelen 1000  (Ethernet)
              RX packets 39931  bytes 6784614 (6.4 MiB)
              RX errors 0  dropped 90  overruns 0  frame 0
              TX packets 34980  bytes 7633181 (7.2 MiB)
              TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
              device interrupt 20  memory 0xfbdc0000-fbde0000

        ppp0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1496
              inet 12.34.56.78  netmask 255.255.255.255  destination 12.34.56.1
              ppp  txqueuelen 3  (Point-to-Point Protocol)
              RX packets 7  bytes 94 (94.0 B)
              RX errors 0  dropped 0  overruns 0  frame 0
              TX packets 782863  bytes 349257986 (333.0 MiB)
              TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

    It states that over 300 MiB have already been sent, even though ppp0 has only been online for a few seconds and the connection isn't working anyway. Can someone please help me fix the routing table, so that the traffic from ppp0 is not sent through ppp0 again but instead goes to the remote server?
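
    The tables above suggest the classic tunnel loop: once the default route points at ppp0, the PPTP/GRE packets that carry the tunnel itself are also routed into ppp0 (hence 333 MiB of TX with almost nothing received). A sketch of an if-up script that first pins a host route to the VPN server via the old gateway (192.168.0.1 taken from the "before" table; replace <VPNServer> with the real address):

        #!/bin/sh
        # keep the tunnel's own transport traffic on eth0:
        route add -host <VPNServer> gw 192.168.0.1 dev eth0
        # only then move the default route into the tunnel:
        route del default eth0
        route add default dev ppp0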

    Read the article

  • Debian 6: setting up FTP just for website editing

    - by David Oliver
    I have a VPS using Debian 6.0. Currently, SSH is set to not accept password logins, and only key-based ones. A person who needs to work on one particular website (a vhost) wishes to use FTP. He doesn't need/want SSH. How can I set up FTP access for him, enabling him to have write permissions for all files in the relevant directory, and only the relevant directory? The directory is /srv/www/domainname.com/public_html Currently, all directories and files in that directory belong to www-data:www-data and are 644/755. I've installed vsftpd and have been reading some guides, but they all seem to deal with allowing multiple users to have their own user-named directories which isn't what I'm after. I can't seem to work out how to simply define one FTP user with a password that has access to one directory of my choosing. This is my first experience of setting up an FTP server. Thanks. Edit: have also found this - maybe I should be using ProFTPd, or can vsftpd also do what I want?
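
    A sketch for vsftpd, assuming a dedicated system user whose home is the vhost's public_html; since the files belong to www-data, write access would additionally need group membership and a matching umask, so treat this as a starting point:

        # /etc/vsftpd.conf (excerpt)
        listen=YES
        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        chroot_local_user=YES
        userlist_enable=YES
        userlist_file=/etc/vsftpd.userlist
        userlist_deny=NO
        local_umask=002

        # shell commands to create the one permitted user ("webclient" is a placeholder):
        # useradd -d /srv/www/domainname.com/public_html -G www-data -s /usr/sbin/nologin webclient
        # echo webclient >> /etc/vsftpd.userlist
        # passwd webclient
        # (add the shell to /etc/shells if PAM rejects the login)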

    Read the article

  • OpenLdap 2.4 on centos 6 doesn't listen on port 636

    - by Oliver Henriot
    I have an OpenLDAP 2.4 server on CentOS 6 whose config I copied from the OpenLDAP 2.3 servers I have running on CentOS 5 machines. On OpenLDAP 2.3, specifying TLSCACertificateFile, TLSCertificateFile and TLSCertificateKeyFile with correct values makes the server listen on port 636. This is not the case on the OpenLDAP 2.4 setup. I have configured it with loglevel -1, but I have not seen any clue as to what might be wrong, and reading the OpenLDAP 2.4 manual doesn't indicate that any of the other TLS-related parameters are now mandatory. I don't think they are, though, because if I run the service manually, using

        # /usr/sbin/slapd -u ldap -h "ldap:/// ldaps:/// ldapi:///"

    the server does listen on port 636, and I can query it using

        ldapsearch -H ldaps://myserver:636

    Is there something I am missing to get the server to listen on port 636 without having to always launch it manually? Is this linked to CentOS 6 or OpenLDAP 2.4? Thank you. Cheers,
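
    On CentOS 6 the listener URLs come from the init script, not from slapd.conf: /etc/rc.d/init.d/slapd builds the -h argument from /etc/sysconfig/ldap. A sketch of the relevant lines (followed by service slapd restart):

        # /etc/sysconfig/ldap
        SLAPD_LDAP=yes
        SLAPD_LDAPS=yes
        SLAPD_LDAPI=yes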

    Read the article

  • Encrypting peer-to-peer application with iptables and stunnel

    - by Jonathan Oliver
    I'm running legacy applications for which I do not have access to the source code. These components talk to each other using plaintext on a particular port. I would like to be able to secure the communications between the two or more nodes using something like stunnel, to facilitate peer-to-peer communication rather than using a more traditional (and centralized) VPN package like OpenVPN, etc. Ideally, the traffic flow would go like this:

    1. app@hostA:1234 tries to open a TCP connection to app@hostB:1234.
    2. iptables captures and redirects the traffic on port 1234 to stunnel running on hostA at port 5678.
    3. stunnel@hostA negotiates and establishes a connection with stunnel@hostB:4567.
    4. stunnel@hostB forwards any decrypted traffic to app@hostB:1234.

    In essence, I'm trying to set this up so that any outbound traffic (generated on the local machine) to port N forwards through stunnel to port N+1, and the receiving side receives on port N+1, decrypts, and forwards to the local application at port N. I'm not particularly concerned about losing the hostA origin IP address/machine identity when stunnel@hostB forwards to app@hostB, because the communications payload contains identifying information. The other trick in this is that normally with stunnel you have a client/server architecture, but this application is much more P2P, because nodes can come and go dynamically and hard-coding some kind of "connection = hostN:port" in the stunnel configuration won't work.
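
    For a fixed pair of hosts, the mechanics would look roughly like this (a sketch; the dynamic-peer requirement would still need one stunnel section per peer or a transparent-proxy arrangement, since REDIRECT discards the original destination address):

        # on hostA: capture the app's outbound plaintext (skip loopback so the
        # local delivery on the receiving side isn't captured too)
        iptables -t nat -A OUTPUT -p tcp --dport 1234 ! -o lo -j REDIRECT --to-ports 5678

        ; stunnel.conf on hostA (client side)
        [app-out]
        client = yes
        accept = 127.0.0.1:5678
        connect = hostB:4567

        ; stunnel.conf on hostB (server side)
        [app-in]
        client = no
        accept = 4567
        connect = 127.0.0.1:1234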

    Read the article

< Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >