Search Results

Search found 29603 results on 1185 pages for 'search path'.

  • How to get the path of a localized plist under Mac OS X

    - by user272312
    Hi, I have a plist placed inside en.lproj. I am trying to get its path this way: NSString *bundlePath = [[NSBundle mainBundle] bundlePath]; NSMutableString *localizedPath = [[NSMutableString alloc] initWithFormat:@"%@/%@.%@/%@",bundlePath,lang,@"lproj",@"1.plist"]; where 1.plist is at en.lproj/1.plist in Resources. Does [[NSBundle mainBundle] bundlePath] give the correct path to my resources? Can anybody give me a clue? I am using this for iPhone, and I am new to Mac OS X development; can it be used for Mac OS X development as well? -- Regards, U'suf
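
    For reference, NSBundle can resolve localized resources itself, on both iOS and Mac OS X, so the .lproj path does not need to be built by hand. A minimal sketch (the file name 1.plist comes from the question above; error handling omitted):

        // Looks in the .lproj folder matching the user's language, with fallbacks
        NSString *path = [[NSBundle mainBundle] pathForResource:@"1" ofType:@"plist"];
        NSDictionary *contents = [NSDictionary dictionaryWithContentsOfFile:path];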

    Read the article

  • Beginner servlet question: accessing files in a .war, which path?

    - by Navigateur
    When a third-party library I'm using tries to access a file, I'm getting "Error opening ... file ... (No such file or directory)" even though I KNOW the file is in the WAR. I've tried both packaged (.war) and "exploded" (directory) deployment, and the file is definitely there. I've tried setting full permissions on it too. It's on Unix (Ubuntu). The file is war/dict/index.sense and the error is "dict/index.sense (No such file or directory)". It works fine on my Windows computer when running in hosted mode as a GWT app from Eclipse, just not when I transfer it to the Unix machine for deployment. My question is: has anybody experienced this before, and/or are there differences in relative paths that I should consider, i.e. what is the root path for relative file access in a WAR?
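
    For reference, the servlet spec resolves war-relative paths through the ServletContext rather than the container's process working directory, which is usually the root cause of "No such file or directory" here. A minimal sketch (the class name is hypothetical; the dict/index.sense path is from the question):

        import java.io.InputStream;
        import javax.servlet.http.HttpServlet;

        public class DictLookupServlet extends HttpServlet {
            @Override
            public void init() {
                // war-relative lookup; works for packaged and exploded deployments
                InputStream in = getServletContext().getResourceAsStream("/dict/index.sense");
                // if the third-party library insists on a filesystem path
                // (may return null for a packaged .war):
                String realPath = getServletContext().getRealPath("/dict/index.sense");
            }
        }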

    Read the article

  • Access to the path C:\... is denied?

    - by user2969489
    I've created a simple download manager with two textboxes and two buttons: one to download and one to specify the path where I want to save the downloaded file (folderBrowserDialog1.SelectedPath). But when I specify the path where the file is going to be saved, it requires me to specify the file name and extension too, like C:\Users\Me\Desktop\photo.jpg. When I leave it without \photo.jpg, it says access to 'C:\Users\Me\Desktop' is denied. I want it to detect the file name and extension automatically, so I don't have to write \photo.jpg, .bmp, etc. every time. Thanks.
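
    The "access denied" usually means a bare directory path was handed to a file-writing API. One way out is to derive the file name, extension included, from the download URL; a C# sketch (the URL is a placeholder):

        using System;
        using System.IO;
        using System.Net;

        class DownloadSketch
        {
            static void Main()
            {
                string url = "http://example.com/images/photo.jpg";   // hypothetical source URL
                string folder = @"C:\Users\Me\Desktop";                // folderBrowserDialog1.SelectedPath
                // take the file name (and so the extension) from the URL itself
                string fileName = Path.GetFileName(new Uri(url).LocalPath);
                using (var client = new WebClient())
                {
                    client.DownloadFile(url, Path.Combine(folder, fileName));
                }
            }
        }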

    Read the article

  • Nginx http_mp4_module seems installed but doesn't work

    - by Tahola
    I'm trying to use the http_mp4_module on my Ubuntu server, but it doesn't seem to work at all. When I check nginx -V I get: nginx version: nginx/1.1.19 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --with-debug --with-http_addition_module --with-http_dav_module --with-http_flv_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_mp4_module --with-http_perl_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_stub_status_module --with-http_ssl_module --with-http_sub_module --with-http_xslt_module --with-ipv6 --with-sha1=/usr/include/openssl --with-md5=/usr/include/openssl --with-mail --with-mail_ssl_module --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-auth-pam --add-module=/build/buildd/nginx-1.1.19/debian/modules/chunkin-nginx-module --add-module=/build/buildd/nginx-1.1.19/debian/modules/headers-more-nginx-module --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-development-kit --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-echo --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-http-push --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-lua --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-upload-module --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-upload-progress --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-upstream-fair --add-module=/build/buildd/nginx-1.1.19/debian/modules/nginx-dav-ext-module --with-http_mp4_module and --with-http_flv_module are there. I also added this to sites-available/domaine.conf: location ~ .mp4$ { mp4; mp4_buffer_size 4M; mp4_max_buffer_size 10M; } location ~ .flv$ { flv; } and nginx restarted without error. Everything seems OK, but when I check my URLs, myvideo.mp4?start=60 returns a 404 error (which I think is normal) and video.mp4?starttime=60 returns the video, but whatever the starttime number is I get the full video from the beginning. Did I miss something?
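
    For reference, the stock http_mp4_module keys off a start argument given in seconds; starttime is not a parameter it recognizes, which would explain the full file coming back. A minimal location sketch with the dot escaped in the regex:

        location ~ \.mp4$ {
            mp4;                        # enables pseudo-streaming via ?start=<seconds>
            mp4_buffer_size     4m;
            mp4_max_buffer_size 10m;
        }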

    Read the article

  • SharePoint 2010 MSDN Labs

    - by Kelly Jones
    Eric Ligman, from Microsoft, posted a great blog post this week listing all of the SharePoint 2010 Virtual Labs that are available from Microsoft.  His blog entry is here: http://blogs.msdn.com/b/mssmallbiz/archive/2012/03/13/sharepoint-server-2010-msdn-virtual-labs-available-to-you-online-plus-more-sharepoint-2010-resources.aspx He also posted other resources as well. I’ve copied his Virtual Lab links here: SharePoint Server 2010 Virtual Labs MSDN Virtual Lab: SharePoint Server 2010: Introduction MSDN Virtual Lab: Getting Started with SharePoint 2010 MSDN Virtual Lab: SharePoint 2010 User Interface Advancements MSDN Virtual Lab: SharePoint Server 2010 Connectors & Using the Business Data Connectivity (BDC) Service MSDN Virtual Lab: SharePoint Server 2010: Advanced Search Security MSDN Virtual Lab: SharePoint Server 2010: Configuring Search UIs MSDN Virtual Lab: SharePoint Server 2010: Content Processing and Property Extraction MSDN Virtual Lab: SharePoint Server 2010: Developing a Custom Connector MSDN Virtual Lab: SharePoint Server 2010: Fast Search Web Crawler MSDN Virtual Lab: SharePoint Server 2010: Federated Search MSDN Virtual Lab: SharePoint Server 2010: Linguistics MSDN Virtual Lab: SharePoint Server 2010: People Search Administration and Management MSDN Virtual Lab: SharePoint Server 2010: Relevancy and Ranking MSDN Virtual Lab: Customizing MySites MSDN Virtual Lab: Designing Lists and Schemas MSDN Virtual Lab: Developing a BCS External Content Type with Visual Studio 2010 MSDN Virtual Lab: Developing a Sandboxed Solution with Web Parts MSDN Virtual Lab: Developing a Visual Web Part in Visual Studio 2010 MSDN Virtual Lab: Developing Business Intelligence Applications MSDN Virtual Lab: Enterprise Content Management MSDN Virtual Lab: LINQ to SharePoint 2010 MSDN Virtual Lab: Visual Studio SharePoint Tools MSDN Virtual Lab: Workflow In addition to the SharePoint Server 2010 Virtual Labs, here are a few other SharePoint 2010 resources that I thought you might also be interested in: Technical reference for Microsoft SharePoint Server 2010 SharePoint 2010: IT Pro Evaluation Guide Connecting SharePoint 2010 to Line-of-Business Systems to Deliver Business-Critical Solutions Configure SharePoint Server 2010 as a Single Server with Microsoft SQL Server: Test Lab Guide Microsoft SQL Server 2012 Reporting Services Add-in for Microsoft SharePoint Technologies 2010 Deploying FAST Search Server 2010 for SharePoint FAST Search Server 2010 for SharePoint Add or Remove an Index Column Upgrade worksheet for SharePoint Server 2010 Microsoft SharePoint Server 2010 Technical Library in Compiled Help format Microsoft SharePoint Foundation 2010 Technical Library in Compiled Help format Microsoft FAST Search Server 2010 for SharePoint Technical Library in Compiled Help format Microsoft Reseller partner Learning Path Microsoft solutions partners and ISVs Learning Path Microsoft partner Practice Accelerator for SharePoint Microsoft partner SharePoint 2010 Internal Use Licenses SharePoint Case Studies SharePoint MSDN Forums SharePoint TechNet Forums Microsoft SharePoint 2010 page on Microsoft Partner Network portal

    Read the article

  • Unzip individual files from multiple zip files and extract those individual files to home directory

    - by James P.
    I would like to unzip individual files. These files have a .txt extension. These files also live within multiple zipped files. Here is the command I'm trying to use: unzip -jn /path/to/zipped/files/zipArchiveFile2011\*.zip /path/to/specific/individual/files/myfiles2011*.txt -d /path/to/home/directory/for/extract/ From my understanding, the -j option excludes directories and will extract only the txt files, and the -n option will not overwrite a file if it has already been extracted. I've also learned that the backslash in /path/to/zipped/files/zipArchiveFile2011\*.zip is necessary to escape the wildcard (*) character. Here is a sample of the error messages I'm coming across: Archive: /path/to/zipped/files/zipArchiveFile20110808.zip caution: filename not matched: /path/to/specific/individual/files/myfiles20110807.txt caution: filename not matched: /path/to/specific/individual/files/myfiles20110808.txt Archive: /path/to/zipped/files/zipArchiveFile20110809.zip caution: filename not matched: /path/to/specific/individual/files/myfiles20110810.txt caution: filename not matched: /path/to/specific/individual/files/myfiles20110809.txt I feel that I'm missing something very simple. I've tried using single quotes (') and double quotes (") around the directory paths, but no luck.
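
    Two things are worth checking: unzip matches member patterns against the paths stored inside the archive (usually relative, not absolute), and the second wildcard must reach unzip unexpanded, so it needs quoting too. A sketch with the question's placeholder paths:

        # quote both patterns; match members by a path relative to the archive root
        unzip -jn '/path/to/zipped/files/zipArchiveFile2011*.zip' \
              'myfiles2011*.txt' -d /path/to/home/directory/for/extract/
        # if the .txt files sit in a subdirectory inside the zip, match any prefix:
        unzip -jn '/path/to/zipped/files/zipArchiveFile2011*.zip' \
              '*/myfiles2011*.txt' -d /path/to/home/directory/for/extract/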

    Read the article

  • How to use semantic markup and Google Places to assist in local search SEO?

    - by ElHaix
    In this article, adding additional localized markup is supposed to help your site's SEO, e.g.: <div itemscope itemtype="http://data-vocabulary.org/Organization"> <span itemprop="name">Search Engine People</span> <span itemprop="address" itemscope itemtype="http://data-vocabulary.org/Address"> <span itemprop="street-address">100 Westney Road South Unit 200, Building E</span> <span itemprop="locality">Ajax</span>, <span itemprop="region">ON</span> <span itemprop="country-name">Canada</span> <span itemprop="postal-code">L1S 7H3</span> </span> </div> What about a site that contains valid localized results where the actual business location is not relevant, for example a site with valid local results from San Francisco, CA and Phoenix, AZ? Should these tags be added to the localized results, and does anyone have experience with how much adding these tags improves results? Google Places, however, seems to ask for the business's actual physical location. Is there a way to use Google Places in the aforementioned example to assist in SEO?

    Read the article

  • Crashplan not starting. Fails to find swt-gtk

    - by Pibben
    I've installed Crashplan on my (K)Ubuntu computer. However, it fails to start and the ui_output.log says: [09.02.12 15:24:43.518 INFO main root ] ************************************************************* [09.02.12 15:24:43.519 INFO main root ] ************************************************************* [09.02.12 15:24:43.524 INFO main root ] Loading lib/swt-64.jar, exists=true [09.02.12 15:24:43.525 INFO main root ] [file:/usr/local/crashplan/lib/com.backup42.desktop.jar, file:/usr/local/crashplan/lang/, file:/usr/local/crashplan/skin/, file:/usr/local/crashplan/lib/swt-64.jar] [09.02.12 15:24:43.527 INFO main root ] STARTED CrashPlanDesktop [09.02.12 15:24:43.528 INFO main root ] CPVERSION = 3.2.1 - 1332824401321 (2012-03-27T05:00:01:321+0000) [09.02.12 15:24:43.529 INFO main root ] ARGS = [ ] [09.02.12 15:24:43.531 INFO main root ] LOCALE = English (United States) [09.02.12 15:24:43.570 ERROR main com.backup42.desktop.CPDesktop ] Failed to launch CPDesktop; java.lang.UnsatisfiedLinkError: no swt-gtk-3557 or swt-gtk in swt.library.path, java.library.path or the jar file, java.lang.UnsatisfiedLinkError: no swt-gtk-3557 or swt-gtk in swt.library.path, java.library.path or the jar file java.lang.UnsatisfiedLinkError: no swt-gtk-3557 or swt-gtk in swt.library.path, java.library.path or the jar file at org.eclipse.swt.internal.Library.loadLibrary(Unknown Source) at org.eclipse.swt.internal.Library.loadLibrary(Unknown Source) at org.eclipse.swt.internal.C.<clinit>(Unknown Source) at org.eclipse.swt.internal.Converter.wcsToMbcs(Unknown Source) at org.eclipse.swt.internal.Converter.wcsToMbcs(Unknown Source) at org.eclipse.swt.widgets.Display.<clinit>(Unknown Source) at com.backup42.desktop.CPDesktop.<init>(CPDesktop.java:231) at com.backup42.desktop.CPDesktop.main(CPDesktop.java:161) [09.02.12 15:24:43.570 ERROR main root ] Failed to launch CPDesktop; java.lang.UnsatisfiedLinkError: no swt-gtk-3557 or swt-gtk in swt.library.path, java.library.path or the jar file I've installed the relevant (I think) swt-gtk packages.
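
    A commonly suggested fix, offered here as an assumption rather than a verified recipe for this exact CrashPlan build, is to install the distribution's SWT GTK bindings and point the client JVM at the native library:

        sudo apt-get install libswt-gtk-3-java
        # the native libswt-gtk-*.so lands in /usr/lib/jni on Ubuntu; tell the
        # CrashPlan desktop JVM about it (variable name per a typical install's
        # /usr/local/crashplan/bin/run.conf):
        #   GUI_JAVA_OPTS="... -Djava.library.path=/usr/lib/jni"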

    Read the article

  • How to export computers from Active Directory to XML using Powershell?

    - by CoDeRs
    I am trying to create a powershell scripts for Remote Desktop Connection Manager using the active directory module. My first thought was get a list of computers in AD and parse them out into XML format similar to the OU structure that is in AD. I have no problem with that, the below code will work just but not how I wanted. EG # here is a the array $OUs Americas/Canada/Canada Computers/Desktops Americas/Canada/Canada Computers/Laptops Americas/Canada/Canada Computers/Virtual Computers Americas/USA/USA Computers/Laptops Computers Disabled Accounts Domain Controllers EMEA/UK/UK Computers/Desktops EMEA/UK/UK Computers/Laptops Outside Sales and Service/Laptops Servers I wanted to have the basic XML structured like this Americas Canada Canada Computers Desktops Laptops Virtual Computers USA USA Computers Laptops Computers Disabled Accounts Domain Controllers EMEA UK UK Computers Desktops Laptops Outside Sales and Service Laptops Servers However if you run the below it does not nest the next string in the array it only restarts the from the beginning and duplicating Americas Canada Canada Computers Desktops Americas Canada Canada Computers Laptops Americas Canada Canada Computers Virtual Computers Americas USA USA Computers Laptops RDCMGenerator.ps1 #Importing Microsoft`s PowerShell-module for administering ActiveDirectory Import-Module ActiveDirectory #Initial variables $OUs = @() $RDCMVer = "2.2" $userName = "domain\username" $password = "Hashed Password+" $Path = "$env:temp\test.xml" $allComputers = Get-ADComputer -LDAPFilter "(OperatingSystem=*)" -Properties Name,Description,CanonicalName | Sort-Object CanonicalName | select Name,Description,CanonicalName $allOUObjects = $allComputers | Foreach {"$($_.CanonicalName)"} Function Initialize-XML{ ##<RDCMan schemaVersion="1"> $xmlWriter.WriteStartElement('RDCMan') $XmlWriter.WriteAttributeString('schemaVersion', '1') $xmlWriter.WriteElementString('version',$RDCMVer) $xmlWriter.WriteStartElement('file') $xmlWriter.WriteStartElement('properties') $xmlWriter.WriteElementString('name',$env:userdomain) $xmlWriter.WriteElementString('expanded','true') $xmlWriter.WriteElementString('comment','') $xmlWriter.WriteStartElement('logonCredentials') $XmlWriter.WriteAttributeString('inherit', 'None') $xmlWriter.WriteElementString('userName',$userName) $xmlWriter.WriteElementString('domain',$env:userdomain) $xmlWriter.WriteStartElement('password') $XmlWriter.WriteAttributeString('storeAsClearText', 'false') $XmlWriter.WriteRaw($password) $xmlWriter.WriteEndElement() $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('connectionSettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('gatewaySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('remoteDesktop') $XmlWriter.WriteAttributeString('inherit', 'None') $xmlWriter.WriteElementString('size','1024 x 768') $xmlWriter.WriteElementString('sameSizeAsClientArea','True') $xmlWriter.WriteElementString('fullScreen','False') $xmlWriter.WriteElementString('colorDepth','32') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('localResources') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('securitySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('displaySettings') $XmlWriter.WriteAttributeString('inherit', 
'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteEndElement() } Function Create-Group ($groupName){ #Start Group $xmlWriter.WriteStartElement('properties') $xmlWriter.WriteElementString('name',$groupName) $xmlWriter.WriteElementString('expanded','true') $xmlWriter.WriteElementString('comment','') $xmlWriter.WriteStartElement('logonCredentials') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('connectionSettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('gatewaySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('remoteDesktop') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('localResources') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('securitySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('displaySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteEndElement() } Function Create-Server ($computerName, $computerDescription) { #Start Server $xmlWriter.WriteStartElement('server') $xmlWriter.WriteElementString('name',$computerName) $xmlWriter.WriteElementString('displayName',$computerDescription) $xmlWriter.WriteElementString('comment','') $xmlWriter.WriteStartElement('logonCredentials') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('connectionSettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('gatewaySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('remoteDesktop') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('localResources') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('securitySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('displaySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteEndElement() #Stop Server } Function Close-XML { $xmlWriter.WriteEndElement() $xmlWriter.WriteEndElement() # finalize the document: $xmlWriter.Flush() $xmlWriter.Close() notepad $path } #Strip out Domain and Computer Name from CanonicalName foreach($OU in $allOUObjects){ $newSplit = $OU.split("/") $rebildOU = "" for($i=1; $i -le ($newSplit.count - 2); $i++){ $rebildOU += $newSplit[$i] + "/" } $OUs += $rebildOU.substring(0,($rebildOU.length - 1)) } #Remove Duplicate OU's $OUs = $OUs | select -uniq #$OUs # get an XMLTextWriter to create the XML $XmlWriter = New-Object System.XMl.XmlTextWriter($Path,$UTF8) # choose a pretty formatting: $xmlWriter.Formatting = 'Indented' $xmlWriter.Indentation = 1 $XmlWriter.IndentChar = "`t" # write the header $xmlWriter.WriteStartDocument() # # 'encoding', 'utf-8' How? 
# # set XSL statements #Initialize Pre-Defined XML Initialize-XML ######################################################### # Start Loop for each OU-Path that has a computer in it ######################################################### foreach ($OU in $OUs){ $totalGroupName = "" #Create / Reset Total OU-Path Completed $OU.split("/") | foreach { #Split the OU-Path into individual OU's $groupName = "$_" #Current OU $totalGroupName += $groupName + "/" #Total OU-Path Completed $xmlWriter.WriteStartElement('group') #Start new XML Group Create-Group $groupName #Call function to create XML Group ################################################ # Start Loop for each Computer in $allComputers ################################################ foreach($computer in $allComputers){ $computerOU = $computer.CanonicalName #Set the computers OU-Path $OUSplit = $computerOU.split("/") #Create the Split for the OU-Path $rebiltOU = "" #Create / Reset the stripped OU-Path for($i=1; $i -le ($OUSplit.count - 2); $i++){ #Start Loop for OU-Path to strip out the Domain and Computer Name $rebiltOU += $OUSplit[$i] + "/" #Rebuild the stripped OU-Path } if ($rebiltOU -eq $totalGroupName){ #Compare the Current OU-Path with the computers stripped OU-Path $computerName = $computer.Name #Set the computer name $computerDescription = $computerName + " - " + $computer.Description #Set the computer Description Create-Server $computerName $computerDescription #Call function to create XML Server } } } ################################################### # Start Loop to close out XML Groups created above ################################################### $totalGroupName.split("/") | foreach { #Split the if ($_ -ne "" ){ $xmlWriter.WriteEndElement() #End Group } } } Close-XML
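
    The restart-from-root behavior happens because every OU path is written out from its first element. The usual fix is to compare each path with the previous one and close only the groups that changed; a sketch of that nesting logic, assuming $OUs is sorted and reusing the question's Create-Group, Create-Server and $xmlWriter:

        $prev = @()
        foreach ($ou in $OUs) {
            $parts = $ou -split '/'
            # count the leading groups this path shares with the previous path
            $common = 0
            while ($common -lt $prev.Count -and $common -lt $parts.Count -and
                   $prev[$common] -eq $parts[$common]) { $common++ }
            # close the groups we are leaving...
            for ($i = $prev.Count; $i -gt $common; $i--) { $xmlWriter.WriteEndElement() }
            # ...and open only the new ones
            for ($i = $common; $i -lt $parts.Count; $i++) {
                $xmlWriter.WriteStartElement('group')
                Create-Group $parts[$i]
            }
            # emit the Create-Server calls for this OU's computers here
            $prev = $parts
        }
        # close whatever is still open at the end
        for ($i = $prev.Count; $i -gt 0; $i--) { $xmlWriter.WriteEndElement() }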

    Read the article

  • Puppet master REST API returns 403 when running under Passenger, works when master runs from command line

    - by Anadi Misra
    I am using the standard auth.conf provided in puppet install for the puppet master which is running through passenger under Nginx. However for most of the catalog, files and certitifcate request I get a 403 response. ### Authenticated paths - these apply only when the client ### has a valid certificate and is thus authenticated # allow nodes to retrieve their own catalog path ~ ^/catalog/([^/]+)$ method find allow $1 # allow nodes to retrieve their own node definition path ~ ^/node/([^/]+)$ method find allow $1 # allow all nodes to access the certificates services path ~ ^/certificate_revocation_list/ca method find allow * # allow all nodes to store their reports path /report method save allow * # unconditionally allow access to all file services # which means in practice that fileserver.conf will # still be used path /file allow * ### Unauthenticated ACL, for clients for which the current master doesn't ### have a valid certificate; we allow authenticated users, too, because ### there isn't a great harm in letting that request through. # allow access to the master CA path /certificate/ca auth any method find allow * path /certificate/ auth any method find allow * path /certificate_request auth any method find, save allow * path /facts auth any method find, search allow * # this one is not stricly necessary, but it has the merit # of showing the default policy, which is deny everything else path / auth any Puppet master however does not seems to be following this as I get this error on client [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com [sudo] password for amisr1: Starting Puppet client version 3.0.1 Warning: Unable to fetch my node definition, but the agent run will continue: Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110 Info: Retrieving plugin Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110 Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110 Using cached catalog Error: Could not retrieve catalog; skipping run Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110 and the server logs show XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? 
HTTP/1.1" 403 93 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby" thefile server conf file is as follows (and goin by what they say on puppet site, It is better to regulate access in auth.conf for reaching file server and then allow file server to server all) [files] path /apps/puppet/files allow * [private] path /apps/puppet/private/%H allow * [modules] allow * I am using server and client version 3 Nginx has been compiled using the following options nginx version: nginx/1.3.9 built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) TLS SNI support enabled configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/ and the standard nginx puppet master conf server { ssl on; listen 8140 ssl; server_name _; passenger_enabled on; passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn; passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify; passenger_min_instances 5; access_log logs/puppet_access.log; error_log logs/puppet_error.log; root /apps/nginx/html/rack/public; ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem; ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem; ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem; ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem; ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA; ssl_prefer_server_ciphers on; ssl_verify_client optional; ssl_verify_depth 1; ssl_session_cache shared:SSL:128m; ssl_session_timeout 5m; } Puppet is picking up the correct settings from the files mentioned because config print command points to /etc/puppet [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf async_storeconfigs = false authconfig = /etc/puppet/namespaceauth.conf autosign = /etc/puppet/autosign.conf catalog_cache_terminus = store_configs confdir = /etc/puppet config = /etc/puppet/puppet.conf config_file_name = puppet.conf config_version = "" configprint = all configtimeout = 120 dblocation = /var/lib/puppet/state/clientconfigs.sqlite3 deviceconfig = /etc/puppet/device.conf fileserverconfig = /etc/puppet/fileserver.conf genconfig = false hiera_config = /etc/puppet/hiera.yaml localconfig = /var/lib/puppet/state/localconfig name = config rest_authconfig = /etc/puppet/auth.conf storeconfigs = true storeconfigs_backend = puppetdb tagmap = /etc/puppet/tagmail.conf thin_storeconfigs = false I checked the firewall rules on this VM; 80, 443, 8140, 3000 are allowed. Do I still have to tweak any specifics to auth.conf for getting this to work?

    Read the article

  • Loop through several servers, find specific dlls, get the dll version, internal filename and path?

    - by Graham
    I am a newbie to PowerShell, and using PS v2. I can see the massive potential it has, but I just can't get the following code to work fully. I am trying to end up with a CSV file that contains the wildcarded required DLLs in the GAC_MSIL or a sub-directory, with the DLL version, internal filename and path, and the server IP address. The code is below, and it is in single-line format because it appears easier to remote onto one of the servers in the server farm and run the single line from that console, due to security log-ins etc. The code has produced a set of results, but only for the last server; it probably does the first server, then overwrites it, but I am not sure about that. I have done a lot of reading about using arrays and custom objects, and had a go at doing that, but my scripting skills in PS are not yet up to it. Code: $out = "Ouput_dll_ver_results.csv";foreach ($server in '11.222.33.123', '11.222.33.124') {$VersionInfo = (Get-ChildItem \\$server\C$\windows\assembly\GAC_MSIL -recurse -Include abc*.dll,def*.dll,ghi*.dll,jkl*.dll | Where-Object { $_.FullName -notmatch "\\windows\\assembly\\temp\\" })}; $VersionInfo | %{Get-Command $_.FullName} | select -expand File* | Export-Csv $out Can you please advise if/how the above code can be corrected, and if not, what alternatives do I have to get the information I need? Many thanks in advance. Graham
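
    The single-server output happens because $VersionInfo is reassigned on every pass of the foreach, so only the last server's listing survives. A PS v2-compatible sketch that accumulates one row per DLL instead (the IPs and DLL patterns are the question's placeholders):

        $out = 'Output_dll_ver_results.csv'
        $results = foreach ($server in '11.222.33.123', '11.222.33.124') {
            Get-ChildItem "\\$server\C$\windows\assembly\GAC_MSIL" -Recurse `
                -Include abc*.dll, def*.dll, ghi*.dll, jkl*.dll |
              Where-Object { $_.FullName -notmatch '\\windows\\assembly\\temp\\' } |
              ForEach-Object {
                # VersionInfo already carries the version and internal name;
                # no Get-Command needed
                New-Object PSObject -Property @{
                    Server       = $server
                    Path         = $_.FullName
                    FileVersion  = $_.VersionInfo.FileVersion
                    InternalName = $_.VersionInfo.InternalName
                }
              }
        }
        $results | Export-Csv $out -NoTypeInformation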

    Read the article

  • Is it possible to trace the delegation path for a DNS lookup?

    - by Josh Glover
    I'm trying to determine why a Nagios host check is failing (hostnames and IPs have been changed to protect the guilty): : jmglov@laurana; host www.foo.com ;; connection timed out; no servers could be reached : jmglov@laurana; for ns in `grep -o '\([0-9]\+[.]\)\{3\}[0-9]\+$' /etc/resolv.conf`; do ping -qc 1 $ns; done PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data. --- 192.168.1.1 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 10.911/10.911/10.911/0.000 ms PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data. --- 192.168.1.2 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms So I know that my nameservers are reachable, meaning that some nameserver along the delegation path to the authoritative nameserver for my host is not responding. Is there an easy way to determine which nameserver this is (basically a traceroute for DNS)?
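
    There is effectively a traceroute for DNS: dig +trace starts at the root servers and follows each referral down, so the last servers printed before a timeout are the ones not answering. With the question's placeholder name:

        dig +trace www.foo.com
        # or, to query one suspect nameserver directly:
        dig @ns1.foo.com www.foo.com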

    Read the article

  • lighttpd: weird behavior on multiple rewrite rule matches

    - by netmikey
    I have a 20-rewrite.conf for my php application looking like this: $HTTP["host"] =~ "www.mydomain.com" { url.rewrite-once += ( "^/(img|css)/.*" => "$0", ".*" => "/my_app.php" ) } I want to be able to put the webserver in kind of a "maintenance" mode while I update my application from scm. To do this, my idea was to enable an additional rewrite configuration file before this one. The 16-rewrite-maintenance.conf file looks like this: url.rewrite-once += ( "^/(img|css)/.*" => "$0", ".*" => "/maintenance_app.php" ) Now, on the maintenance page, I have a logo that doesn't get loaded. I get a 404 error. Lighttpd debug says the following: 2012-12-13 20:28:06: (response.c.300) -- splitting Request-URI 2012-12-13 20:28:06: (response.c.301) Request-URI : /img/content/logo.png 2012-12-13 20:28:06: (response.c.302) URI-scheme : http 2012-12-13 20:28:06: (response.c.303) URI-authority: localhost 2012-12-13 20:28:06: (response.c.304) URI-path : /img/content/logo.png 2012-12-13 20:28:06: (response.c.305) URI-query : 2012-12-13 20:28:06: (response.c.300) -- splitting Request-URI 2012-12-13 20:28:06: (response.c.301) Request-URI : /img/content/logo.png, /img/content/logo.png 2012-12-13 20:28:06: (response.c.302) URI-scheme : http 2012-12-13 20:28:06: (response.c.303) URI-authority: localhost 2012-12-13 20:28:06: (response.c.304) URI-path : /img/content/logo.png, /img/content/logo.png 2012-12-13 20:28:06: (response.c.305) URI-query : 2012-12-13 20:28:06: (response.c.349) -- sanatising URI 2012-12-13 20:28:06: (response.c.350) URI-path : /img/content/logo.png, /img/content/logo.png 2012-12-13 20:28:06: (mod_access.c.135) -- mod_access_uri_handler called 2012-12-13 20:28:06: (response.c.470) -- before doc_root 2012-12-13 20:28:06: (response.c.471) Doc-Root : /www 2012-12-13 20:28:06: (response.c.472) Rel-Path : /img/content/logo.png, /img/content/logo.png 2012-12-13 20:28:06: (response.c.473) Path : 2012-12-13 20:28:06: (response.c.521) -- after doc_root 2012-12-13 20:28:06: (response.c.522) Doc-Root : /www 2012-12-13 20:28:06: (response.c.523) Rel-Path : /img/content/logo.png, /img/content/logo.png 2012-12-13 20:28:06: (response.c.524) Path : /www/img/content/logo.png, /img/content/logo.png 2012-12-13 20:28:06: (response.c.541) -- logical -> physical 2012-12-13 20:28:06: (response.c.542) Doc-Root : /www 2012-12-13 20:28:06: (response.c.543) Rel-Path : /img/content/logo.png, /img/content/logo.png 2012-12-13 20:28:06: (response.c.544) Path : /www/img/content/logo.png, /img/content/logo.png 2012-12-13 20:28:06: (response.c.561) -- handling physical path 2012-12-13 20:28:06: (response.c.562) Path : /www/img/content/logo.png, /img/content/logo.png 2012-12-13 20:28:06: (response.c.618) -- file not found 2012-12-13 20:28:06: (response.c.619) Path : /www/img/content/logo.png, /img/content/logo.png Any clue on why lighttpd matches both rules (from my application rewrite config and from my maintenance rewrite config) and concatenates them with a comma - that doesn't seem to make any sense?! Shouldn't it stop after the first match with rewrite-once?
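
    The comma-joined URI is consistent with the two url.rewrite-once lists being merged: each += adds an entry under the same key "^/(img|css)/.*", and lighttpd appears to concatenate the duplicate keys' targets rather than letting one win. A sketch of one way around it, assuming the maintenance include owns the rules outright while it is enabled:

        # 16-rewrite-maintenance.conf: assign instead of appending, and keep
        # 20-rewrite.conf out of the config while maintenance is on
        url.rewrite-once = (
            "^/(img|css)/.*" => "$0",
            ".*"             => "/maintenance_app.php"
        )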

    Read the article

  • HAProxy ACLs for balancing on URL request

    - by Elgreco08
    I'm using Ubuntu with HAProxy version 1.4.13. It's load balancing two subdomains: app1.domain.com app2.domain.com Now I want to use ACLs to send requests to the right backend based on the requested URL. For example: http://app1.domain.com/path/games/index.php should be sent to backend1, http://app1.domain.com/path/photos/index.php should be sent to backend2, http://app2.domain.com/path/mail/index.php should be sent to backend3, and http://app2.domain.com/path/wazap/index.php should be sent to backend4. I used the following ACL configuration: frontend http-farm bind 0.0.0.0:80 acl app1web hdr_beg(host) -i app1 # for http://app1.domain.com acl app2web hdr_beg(host) -i app2 # for http://app2.domain.com acl msg-url-1 url_reg ^\/path/games/.* acl msg-url-2 url_reg ^\/path/photos/.* acl msg-url-3 url_reg ^\/path/mail/.* acl msg-url-4 url_reg ^\/path/wazap/.* use_backend games if msg-url-1 app1web use_backend photos if msg-url-2 app2web use_backend mail if ..... backend games option httpchk GET /alive.php HTTP/1.1\r\nHost:\ app1.domain.com option forwardfor balance roundrobin server appsrv-1 192.168.1.10:80 check inter 2000 fall 3 server appsrv-2 192.168.1.11:80 check inter 2000 fall 3 backend photos option httpchk GET /alive.php HTTP/1.1\r\nHost:\ app2.domain.com option forwardfor balance roundrobin server appsrv-1 192.168.1.13:80 check inter 2000 fall 3 server appsrv-2 192.168.1.14:80 check inter 2000 fall 3 .... Since the paths (mail, photos, etc.) will be application pools on IIS, I want to monitor whether they are alive; if a pool does not respond, HAProxy should stop serving it. My problem is surely in the regular expression in the ACL: acl msg-url-4 url_reg ^\/path/wazap/.* What should I change in the ACL to make it work? Thanks for any hints
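
    For what it's worth, path_beg is simpler than url_reg for prefix matches, and several ACL names on one use_backend line are ANDed together, which is exactly what this routing needs. A sketch in 1.4 syntax, reusing the question's names and the host-to-backend mapping described above:

        frontend http-farm
            bind 0.0.0.0:80
            acl app1web hdr_beg(host) -i app1
            acl app2web hdr_beg(host) -i app2
            acl url-games  path_beg /path/games/
            acl url-photos path_beg /path/photos/
            acl url-mail   path_beg /path/mail/
            acl url-wazap  path_beg /path/wazap/
            use_backend games  if app1web url-games    # both conditions must hold
            use_backend photos if app1web url-photos
            use_backend mail   if app2web url-mail
            use_backend wazap  if app2web url-wazap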

    Read the article

  • Which upgrade path for disk IO bound postgres server?

    - by user41679
    Hi all, We currently have a Sun x4270 with 2 x quad-core Xeon Nehalem 2.93GHz CPUs (16 threads), 72GB of RAM and 16 x 10k SAS disks split between the OS RAID 1, a RAID 10 partition for the Write-Ahead Logs and a RAID 10 partition for the database tables and indexes, all xfs. I'm currently evaluating which path to go down in terms of upgrades. We'll be sharding the DB at some point soon, but for now I need to focus on hardware upgrades specifically. The machine is not CPU or memory bound at all at the moment; IOWait is becoming the issue. The machine sees mostly write access, as we have a heavy caching layer. We're seeing about 300 write IOPS average on both the database partitions. We don't have any additional storage infrastructure like a Fibre Channel or iSCSI network. Budget isn't too much of a concern, something in line with the size of this server (i.e. no $1m IBM machines). Space is OK on the DB side of things; we're running out obviously, but there's also some reduction we can do. Additional space would be good though. My current thoughts are: an iSCSI SAN, possibly with a 10Gbit network that has solid-state acceleration; a FusionIO card / Sun F20 card (will the FusionIO card work in the Sun box?); or a DAS shelf (something like this http://www.broadberry.co.uk/das-direct-attached-storage-servers/cyberstore-224s-das) with a combination of 15k SAS disks and some Intel X25-E drives for DB indexes etc. What would I need to put in the x4270 to add a DAS shelf? I think it's a SAS HBA card; do I have to use Sun's own card, or will any PCI Express card work? Anything else? What would you guys do from your experience? I appreciate it's a lot of questions, but I haven't expanded a DB machine for a number of years and the landscape has changed dramatically since then! Any advice or feedback would be very much appreciated. Let me know if there's anything else I can clarify. Thanks in advance!

    Read the article

  • How to write PowerShell code part 3 (calling external script)

    - by ybbest
    In this post, I'd like to show you how to call an external script from a PowerShell script. I'd like to use the site creation script as an example. You can download the script here. 1. To call the external script, you first need to grab the script path. You can do so by calling $scriptPath = Split-Path $myInvocation.MyCommand.Path to grab the current script path. You can then use this to build the path of your external script. $scriptPath = Split-Path $myInvocation.MyCommand.Path $ExternalScript=$scriptPath+"\CreateSiteCollection.ps1" $configurationXmlPath=$scriptPath+"\SiteCollection.xml" [xml] $configurationXml=Get-Content $configurationXmlPath & "$ExternalScript" $configurationXml Write-Host 2. If you'd like to pass in any parameters, you need to define your script parameters in param () at the top of the script, separating each parameter with a comma (,); when calling the script you do not need commas to separate the parameters. #Pass in the Parameters. param ([xml] $xmlinput)

    Read the article

  • How to stop Firefox on an SSD from freezing when using the search box or submitting a form?

    - by sblair
    Firefox usually freezes for about a second whenever I search for something from the toolbar search box, when submitting a form, or when clearing the search box history. I suspect it has something to do with the auto-complete feature. Using Windows 7's Resource Monitor, the problem seems to be from the file: C:\Users\<username>\AppData\Roaming\Mozilla\Firefox\Profiles\<profile>\formhistory.sqlite-journal I believe this is a temporary file which caches database writes. The following screenshot shows the very high response times from six different searches, and that the queue length on drive C shoots off the scale: My Firefox profile is on an Intel X25-M G2 SSD. The problem doesn't seem to occur if I create a new profile on a hard disk drive. However, I'd like to know why the problem exists on the SSD in the first place (because it's an annoying problem which contradicts the reason I bought an SSD, and it might happen with other applications too), and how to prevent it. It still occurs if Firefox is started in safe mode, and with the recent beta versions. Updates: VACUUMing the Firefox profile databases does not help with this problem. The SSD Optimizer in the Intel SSD Toolbox does not help either.

    Read the article

  • Insert Hyperlink via VBA

    - by Martin
    I have a Word VBA macro that loops through a directory and writes the file paths of files matching some criteria into a new Word document. It works well as plain text (as part of a loop): wdDocResults.Content.InsertAfter objFile.Path & Chr(13) However, I'd like them to be hyperlinks. The following works as a standalone macro, but when called from within another script it does nothing at all (no matter whether the path is provided as a variable or a string, or as H:... or \\MyServernameAsNetDrive...): ActiveDocument.Hyperlinks.Add Anchor:=Selection.Range, Address:= objFile.Path, _ SubAddress:="", ScreenTip:="", TextToDisplay:=objFile.Path If I try to select the current line first, to make sure something is selected at the right place, I get an "out of memory" error: wdDocResults.Content.InsertAfter objFile.Path Selection.Expand wdLine ActiveDocument.Hyperlinks.Add Anchor:=Selection.Range, Address:= objFile.Path, _ SubAddress:="", ScreenTip:="", TextToDisplay:=objFile.Path I also tried inserting a string resembling the hyperlink field code ({ HYPERLINK "..." }), which is of course not recognized... Any help is appreciated... Thanks in advance!
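
    A likely explanation: Selection always belongs to the active window, so when another script runs the macro against a non-active document there is nothing sensible selected. Anchoring the hyperlink on a Range of the results document avoids Selection entirely; a sketch using the question's wdDocResults and objFile:

        Dim rng As Range
        Set rng = wdDocResults.Content
        rng.Collapse Direction:=wdCollapseEnd        ' point at the end of the document
        wdDocResults.Hyperlinks.Add Anchor:=rng, Address:=objFile.Path, _
            SubAddress:="", ScreenTip:="", TextToDisplay:=objFile.Path
        wdDocResults.Content.InsertAfter Chr(13)     ' new line for the next entry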

    Read the article

  • My yum repository is able to search packages but not able to install them in RHEL?

    - by mandy
    I set up yum from the DVD. The following is the contents of my .repo file: [dvd] name=Red Hat Enterprise Linux Installation DVD baseurl=file:///media/dvd enabled=0 I'm able to search packages. However, on installation I get the error below: [root@localhost dvd]# yum install libstdc++.x86_64 Loaded plugins: rhnplugin, security This system is not registered with RHN. RHN support will be disabled. Setting up Install Process Nothing to do My yum search output: [root@localhost dvd]# yum search gcc Loaded plugins: rhnplugin, security This system is not registered with RHN. RHN support will be disabled. ============================================================================= Matched: gcc ============================================================================= compat-libgcc-296.i386 : Compatibility 2.96-RH libgcc library compat-libstdc++-296.i386 : Compatibility 2.96-RH standard C++ libraries compat-libstdc++-33.i386 : Compatibility standard C++ libraries compat-libstdc++-33.x86_64 : Compatibility standard C++ libraries cpp.x86_64 : The C Preprocessor. libgcc.i386 : GCC version 4.1 shared support library libgcc.x86_64 : GCC version 4.1 shared support library libgcj.i386 : Java runtime library for gcc libgcj.x86_64 : Java runtime library for gcc libstdc++.i386 : GNU Standard C++ Library libstdc++.x86_64 : GNU Standard C++ Library libtermcap.i386 : A basic system library for accessing the termcap database. libtermcap.x86_64 : A basic system library for accessing the termcap database. Please guide me on this; I want to install gcc on my RHEL.
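
    The .repo file shown explains the symptom in part: enabled=0 leaves the repository disabled, so install transactions find nothing to do. A sketch of the corrected file (gpgcheck=0 is an assumption that fits a local DVD repo; alternatively run yum with --enablerepo=dvd):

        [dvd]
        name=Red Hat Enterprise Linux Installation DVD
        baseurl=file:///media/dvd
        enabled=1
        gpgcheck=0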

    Read the article

  • Emacs 24.1: How do I restore i-search Ctrl-Y behavior from older versions?

    - by Eric
    In Emacs 24.1, when you do Ctrl-Y in an interactive search, it yanks the kill ring into the search string ("it pastes the clipboard contents", in any-other-app's language) and tries to match it. In the previous 20 versions or so, pressing Ctrl-Y matches the rest of the current line. I have two very common use cases: Match this line, revert the buffer, and search for the line (less often:) Where else is this text in the buffer? I tried modifying /lisp/isearch.el, switching the bindings for isearch-yank-line (which I want) and isearch-yank-kill (which I'm fine binding to the ridiculous \M-s\C-e key sequence). But I don't think this file even gets loaded. If I explicitly load it, I still get the 24.1 behavior. Here's my change: (add-hook 'isearch-mode-hook (lambda () (define-key isearch-mode-map "\C-y" 'isearch-yank-line) (define-key isearch-mode-map "\M-s\C-e" 'isearch-yank-kill) )) No change in the behavior. I even tried hacking isearch.el, still no change. This is on Windows, btw, but I suspect it doesn't matter. Could someone tell me how I can restore the old binding?
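
    For reference, isearch-mode-map is preloaded, so the rebinding can go straight into the init file; editing or hooking isearch.el should not be necessary. A minimal sketch:

        ;; in ~/.emacs: give C-y its pre-24 isearch meaning
        (define-key isearch-mode-map (kbd "C-y") 'isearch-yank-line)
        ;; keep the new kill-ring yank reachable elsewhere
        (define-key isearch-mode-map (kbd "M-y") 'isearch-yank-kill)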

    Read the article

  • Safe use of Update-FormatData?

    - by Steve B
    In a custom PowerShell module, I have at the top of my module definition this code: Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml") This is working fine, as all .ps1xml files are loaded. However, the module is sometimes loaded using Import-Module MyModule -Force (actually, this is in the install script of the module). In this case, the call to Update-FormatData fails with this error: Update-FormatData : There were errors in loading the format data file: Microsoft.PowerShell, c:\pathto\myfile.Types.ext.ps1xml : File skipped because it was already present from "Microsoft.PowerShell". At line:1 char:18 + Update-FormatData <<<< -AppendPath "c:\pathto\myfile.Types.ext.ps1xml" + CategoryInfo : InvalidOperation: (:) [Update-FormatData], RuntimeException + FullyQualifiedErrorId : FormatXmlUpateException,Microsoft.PowerShell.Commands.UpdateFormatDataCommand Is there a way to safely call this command? I know I can call Update-FormatData with no parameters, and it will update any known .ps1xml file, but this would work only if the file has already been loaded. Can I list somewhere the loaded format data files? Here is a bit of background: I'm building a custom module that is installed using a script. The install script looks like: [CmdletBinding(SupportsShouldProcess=$true,ConfirmImpact="High")] param() process { $target = Join-Path $PSHOME "Modules\MyModule" if ($pscmdlet.ShouldProcess("$target","Deploying MyModule module")) { if(!(Test-Path $target)) { new-Item -ItemType Directory -Path $target | Out-Null } get-ChildItem -Path (Split-Path ((Get-Variable MyInvocation -Scope 0).Value).MyCommand.Path) | copy-Item -Destination $target -Force Write-Host -ForegroundColor White @" The module has been installed. You can import it using: Import-Module MyModule Or you can add it to your profile ($profile) "@ Write-Warning "To refresh any open PowerShell session, you should run ""Import-Module MyModule -Force"" to reload the module" Import-Module MyModule -Force Write-Warning "This session has been refreshed." } } MyModule defines, as its first statement, this line: Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml") As I updated my $profile to always load this module, the Update-FormatData call had already run when I launched the install script. The install script then force-imports the module, which fires the Update-FormatData call again, and that second call fails.
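
    One pragmatic workaround, offered as a sketch rather than an official pattern: treat the "already present" failure as benign on re-import and keep loading:

        # at the top of the module definition
        try {
            Update-FormatData -AppendPath (Join-Path $PSScriptRoot '*.ps1xml') -ErrorAction Stop
        }
        catch {
            # on Import-Module -Force the .ps1xml files are already registered;
            # swallow only this duplicate-load error and continue importing
            Write-Verbose "Format data already loaded: $_"
        }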

    Read the article

  • JavaScript Event in innerHTML Resulting from PHP Server Script

    - by user144527
    I'm (very slowly) making a website, and I'm creating a search engine for the database, which is essential for organizing the dependencies during data entry. Anyway, what I would like is to type a few keywords into a box, have a menu pop up with various search results, and have the box fill with the ID number of the selected entry when it's clicked. Currently, I have a document called search.php which fills a div called search-output, using xmlhttp.open() and the innerHTML property. Everything is working perfectly except for filling the original search box with the ID number on click. My first attempt was to add an onclick event to each entry in the output from search.php. Unfortunately, I found that script inserted via innerHTML is not run, for security reasons. I've been Googling for hours but haven't been able to find a solution. How can I get the original search text box to fill with the correct ID when I click a result? Is what I'm doing a good setup for the results I desire, or is there a better way to integrate search features into data entry?
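
    A common way around this is event delegation: attach one click handler to the search-output div itself (plain markup set through innerHTML is fine; it is only injected script that will not run) and read an ID stamped on each entry. A sketch where the data-id attribute and the element IDs are assumptions about the markup:

        // search.php renders each hit as e.g. <div class="hit" data-id="42">...</div>
        var output = document.getElementById('search-output');
        var searchBox = document.getElementById('search-box');
        output.onclick = function (e) {
            e = e || window.event;                 // old-IE fallback
            var el = e.target || e.srcElement;
            // climb from the clicked node up to the entry wrapper
            while (el && el !== output && !el.getAttribute('data-id')) {
                el = el.parentNode;
            }
            if (el && el !== output) {
                searchBox.value = el.getAttribute('data-id');
            }
        };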

    Read the article

  • RedirectPermanent vs RewriteRule [R]

    - by notbrain
    I currently have a perm_redirects.conf file that gets included into my Apache config stack, with lines in the format RedirectPermanent /old/url/path /new/url/path It looks like I'm required to use an absolute URL for the new path, e.g. http://example.com/new/url/path. In the logs I'm getting "incomplete redirect target /new/url/path was corrected to http://example.com/new/url/path" (paraphrased). In the 2.2 docs for RewriteRule, at the bottom, they show the following as a valid redirect, with only url-paths instead of an absolute URL on the right-hand side of the redirect: RewriteRule ^/old/url/path(.*) /new/url/path$1 [R] But I can't seem to get that format to replicate the functionality of the RedirectPermanent version. Is this possible?
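
    Two details matter when replicating it: a bare [R] sends a temporary 302, while RedirectPermanent sends a 301, and in server or vhost context the pattern keeps its leading slash (in .htaccess it would be dropped). A sketch:

        # server/vhost context
        RewriteEngine On
        RewriteRule ^/old/url/path(.*)$ /new/url/path$1 [R=301,L]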

    Read the article
