Search Results

Search found 17460 results on 699 pages for 'validate request'.


  • Apache file negotiation failed

    - by lorenzo.marcon
    I'm having the following issue on a host using Apache 2.2.22 + PHP 5.4.0. I need Apache to serve the file /home/server1/htdocs/admin/contents.php when a user requests http://server1/admin/contents, but I get this message in the server error_log:

        Negotiation: discovered file(s) matching request: /home/server1/htdocs/admin/contents (None could be negotiated)

    Note that I have mod_negotiation enabled and MultiViews among the options for the related virtual host:

        <Directory "/home/server1/htdocs">
            Options Indexes Includes FollowSymLinks MultiViews
            Order allow,deny
            Allow from all
            AllowOverride All
        </Directory>

    I also use mod_rewrite, with the following .htaccess rules:

        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteBase /
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^([^\./]*)$ index.php?t=$1 [L]
        </IfModule>

    It seems very strange, but on the same box with PHP 5.3.6 it used to work correctly. I'm just trying to upgrade to PHP 5.4.0, and I cannot solve this negotiation issue. Any idea why Apache cannot match contents.php when contents is requested (which should be exactly what mod_negotiation is supposed to do)?
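
    One avenue worth checking (an assumption on my part, not something confirmed in the post): MultiViews by default only negotiates files whose media type mod_mime can determine, and PHP setups that register .php through a handler rather than a content type fall outside that set. mod_mime's MultiViewsMatch directive relaxes this; a minimal sketch:

        <Directory "/home/server1/htdocs">
            Options Indexes Includes FollowSymLinks MultiViews
            # Allow content negotiation to consider files served by a handler
            # (such as .php), not only files with a recognised content type.
            MultiViewsMatch Handlers
        </Directory>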

    Read the article

  • Salesforce deployment guideline using Sandbox

    - by ybbest
    Create a deployment connection. Enable the inbound change set setting on the destination environment you would like to deploy the solution to, and enable the outbound change set setting on the source environment where you package your application. The best practice is to package everything in the change set; Salesforce will deploy only the changes into your destination environment. If you package only the changes, you could miss some of them. You can clone the change set on the source environment; however, the initial packaging takes some time, as you need to go through everything and select the components manually. After the change set is packaged, you need to upload it so that the destination environment can see it in its incoming change set list. Click Validate on the change set before deployment. References: Development Lifecycle Guide; Change Sets Best Practices.

    Read the article

  • How to talk to a virtual host on a guest OS?

    - by Bernd
    Let's say there is a host OS (Mac OS X) and a virtual machine running Ubuntu as guest OS. The guest OS has the IP 192.168.56.101 and some virtual hosts, e.g. ubuntu.server. So, how do I really map a request to the virtual host ubuntu.server on the guest OS? I tried configuring the host OS in /etc/hosts to map ubuntu.server to 192.168.56.101. On the guest OS we have the trouble: it accepts the request for 192.168.56.101, which is not ubuntu.server, and therefore the ubuntu.server virtual host will never be requested, just the localhost on the guest OS. It would surely be possible to simply use 192.168.56.101, but this would only work for one host per guest OS. Any idea? Or is there a flaw in my train of thought?
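
    For what it's worth, with name-based virtual hosting the Host header sent by the browser (ubuntu.server) selects the virtual host, even though the TCP connection targets the IP, so multiple names can share the one guest address. A minimal sketch, assuming Apache 2.2 on the guest and the VirtualBox-style host-only address above:

        # host OS: /etc/hosts
        192.168.56.101  ubuntu.server

        # guest OS: Apache name-based virtual host
        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName ubuntu.server
            DocumentRoot /var/www/ubuntu.server
        </VirtualHost>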

    Read the article

  • How to set up a one-man research in the difference between BDD and Waterfall?

    - by Martijn van der Maas
    Earlier, I asked a question about how to measure the quality of a project. The outcome of that question was that the quality of a project can be divided into two parts: internal quality (code quality, measurable by code quality metrics) and external quality (acceptance testing: how well the software meets the requirements). Based on that, I want to set up some research and validate the outcome of the project. The problem is that I will conduct this research on my own, so it's not possible for me to run the project once in BDD style and once more in waterfall style. It's also not possible to compare BDD and waterfall projects on a larger scale, because BDD is young enough that there are not enough BDD projects to measure. So, my question is: has anybody faced this problem? How could I execute my experiment in such a way that it has scientific value?

    Read the article

  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply EM12c R4) is the latest update to the product. As with previous versions, this release provides tons of enhancements and bug fixes, contributing to improved stability and quality. One of the areas that is most exciting, and has seen tremendous growth in the last few years, is Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are:

        * Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard)
        * Additional Storage Options for Snap Clone (includes support for the database feature CloneDB)
        * Improved Rapid Start Kits
        * Extensible Metering and Chargeback
        * Miscellaneous Enhancements

    1. Comprehensive Database Service Catalog

    Before we get deep into the implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers I recommend reading are: Service Catalogs: Defining Standardized Database Service, and High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA]. EM12c has shipped with an out-of-the-box service catalog and self-service portal since Release 1. For customers, it provides the following benefits: presenting a collection of standardized database service definitions; defining standardized pools of hardware and software for provisioning; role-based access to cater to different classes of users; automated procedures to provision the predefined database definitions; setting up chargeback plans based on service tiers and database configuration sizes; and so on.

    Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability: Single Instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration:

        * Standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites)
        * The standby databases can be single instance, RAC, or RAC One Node databases
        * Multiple standby databases can be provisioned, where the maximum limit is determined by the version of the database software
        * The standby databases can be in either mount or read-only (requires the Active Data Guard option) mode
        * All database versions 10g to 12c are supported (as certified with EM12c)
        * All 3 protection modes can be used: Maximum Availability, Maximum Performance, and Maximum Protection
        * Log apply can be set to sync or async, along with the required apply lag

    The different service levels or service tiers are popularly represented using metals: Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combinations from the table below (EM 12cR4):

        Primary | Standby [1 or more]
        SI      | -
        SI      | SI
        RAC     | -
        RAC     | SI
        RAC     | RAC
        RON     | -
        RON     | RON

    (where RON = RAC One Node, supported via custom post-scripts in the service template) A sample service catalog would look like the image below.

    Here we have defined 4 service levels, deployed across 2 data centers, with 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer would come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to set up this sample service catalog in EM12c.

    2. Additional Storage Options for Snap Clone

    In my previous blog posts, I have described the Snap Clone feature in detail. Essentially, it provides a storage-agnostic, self-service, rapid, and space-efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%), all while cloning databases in a matter of minutes. Space and time: two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued a dual solution approach of hardware and software. In the hardware approach, we connect directly to your storage appliances and perform all the low-level actions required to rapidly clone your databases. In the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low-level actions. Thus we deliver the benefits of database thin cloning without requiring you to drastically change your infrastructure or IT's operating style.

    In Release 4, we expand the scope of options supported by Snap Clone with the addition of the database feature CloneDB. While CloneDB is not a new feature (it was first introduced in the 11.2.0.2 patch set), it has over the years become more stable and mature. CloneDB leverages a combination of the Direct NFS (dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the Snap Clone feature. For more information on CloneDB, I highly recommend reading the following sources: the blog by Tim Hall, Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2, and the Oracle OpenWorld presentation by CERN, Efficient Database Cloning using Direct NFS and CloneDB. The advantages of the new CloneDB integration with EM12c Snap Clone are:

        * Space and time savings
        * Ease of setup: no additional software is required other than the Oracle database binary
        * Works on all platforms
        * Reduced dependence on storage administrators
        * Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA testers via the self-service portal
        * Uses dNFS to deliver better performance, availability, and scalability than kernel NFS
        * Complete lifecycle of the clones managed by EM12c: performance, configuration, etc.

    3. Improved Rapid Start Kits

    DBaaS deployments tend to be complex, and their setup requires a series of steps. These steps are typically performed across different users and different UIs. The Rapid Start Kit provides a single-command solution to set up Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS).

    One command creates all the cloud artifacts like roles, administrators, credentials, database profiles, the PaaS Infrastructure Zone, database pools, and service templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self-service portal. The Rapid Start Kit can create complex topologies involving multiple zones, pools, and service templates. It also supports standby databases and the use of RMAN image backups. The Rapid Start Kit in reality is a simple emcli script which takes a bunch of XML files as input and executes the complete automation in a matter of seconds. On a full-rack Exadata, it took only 40 seconds to set up PDBaaS end to end. This kit works both for Oracle's engineered systems, like Exadata and SuperCluster, and on commodity hardware. One can draw a parallel to the Exadata One Command script, which again takes a bunch of inputs from the administrators and then runs a simple script that configures everything from the network to provisioning the DB software. Steps to use the kit:

        * The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup
        * It can be run from this default location or from any server which has the emcli client installed
        * For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py
        * For Exadata, special integration is provided to reduce the number of inputs even further; the script to use for this scenario is dbaas/setup/exadata_cloud_setup.py

    The database_cloud_setup.py script takes two inputs:

        * Cloud boundary XML: This file defines the cloud topology in terms of the zones and pools, along with the host names, Oracle home locations, or container database names that would be used as infrastructure for provisioning database services. This file is optional in the case of Exadata, as the boundary is well known via the Exadata system target available in EM.
        * Input XML: This file captures inputs for users, roles, profiles, service templates, etc. Essentially, all inputs required to define the DB services and other settings of the self-service portal.

    Once all the XML files have been prepared, invoke the script as follows for PDBaaS:

        emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml

    The script will prompt for passwords a few times for key users like sysman, the cloud admin, the SSA admin, etc. Once complete, you can simply log into EM as the self-service user and request databases from the portal. More information is available in the Rapid Start Kit chapter of the Cloud Administration Guide.

    4. Extensible Metering and Chargeback

    Last but not least, Metering and Chargeback in Release 4 has been made extensible in all possible regards. The new extensibility features allow customers, partners, system integrators, etc. to:

        * Extend chargeback to any target type managed in EM
        * Promote any metric in EM as a chargeback entity
        * Extend the list of charge items via metric or configuration extensions
        * Model abstract entities like the number of backup requests, job executions, support requests, etc.

    A slew of emcli verbs have also been added that allow administrators to create, edit, delete, and import/export charge plans, and assign cost centers, all via the command line. More information is available in the Chargeback API chapter of the Cloud Administration Guide.

    5. Miscellaneous Enhancements

    There are other miscellaneous, yet important, enhancements that are worth a mention. These have mostly been asked for by customers like you. They are:

        * Custom naming of DB services: Self-service users can provide custom names for the DB SID, DB service, schemas, and tablespaces. Every custom name is validated for uniqueness in EM.
        * 'Create like' of service templates: Creating variants of a service template is now only a click away. This is vital when you publish service templates to represent different database sizes or service levels.
        * Profile viewer: View the details of a profile (datafiles, control files, snapshot IDs, export/import files, etc.) prior to its selection in the service template.
        * Cleanup automation, for failed and successful requests: A single emcli command cleans up all remnant artifacts of a failed request. Cleanup can be performed on a per-request basis or for an entire pool. As an extension, you can also delete successful requests.
        * Improved delete-user workflow: Allows administrators to reassign cloud resources to another user or delete all of them.
        * Support for multiple tablespaces for Schema as a Service: In addition to multiple schemas, users can also specify multiple tablespaces per request.

    I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck!

    References: Cloud Management Page on OTN; Cloud Administration Guide [Documentation]

    -- Adeesh Fulay (@adeeshf)

    Read the article

  • apache2 mod_proxy without 301 moved permanently?

    - by Guy Sensei
    Is it possible to not send a 301 Moved Permanently response to the client when using mod_proxy? I would like the client to deal with the reverse proxy as opaquely as possible. The relevant snippet of my virtual host settings:

        ProxyPreserveHost On
        ProxyPass /GTM http://192.168.1.27/GTM
        ProxyPassReverse /GTM http://192.168.1.27/GTM

    wget localhost/GTM:

        --2011-09-27 21:54:22--  http://localhost/GTM
        Resolving localhost... ::1, 127.0.0.1
        Connecting to localhost|::1|:80... failed: Connection refused.
        Connecting to localhost|127.0.0.1|:80... connected.
        HTTP request sent, awaiting response... 301 Moved Permanently
        Location: http://localhost/GTM/ [following]
        --2011-09-27 21:54:22--  http://localhost/GTM/
        Reusing existing connection to localhost:80.
        HTTP request sent, awaiting response... 200 OK
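
    The 301 in the trace looks like the backend's trailing-slash (mod_dir DirectorySlash) redirect for the directory /GTM, relayed to the client; with ProxyPreserveHost On the backend even builds it against localhost. One hedged way around it, assuming /GTM is a directory on the backend, is to proxy the bare URL straight to the slashed form so the redirect never happens:

        ProxyPreserveHost On
        RewriteEngine On
        # Proxy /GTM directly to the backend's canonical /GTM/ URL, so the
        # client never receives the DirectorySlash 301.
        RewriteRule ^/GTM$ http://192.168.1.27/GTM/ [P]
        ProxyPass /GTM/ http://192.168.1.27/GTM/
        ProxyPassReverse /GTM/ http://192.168.1.27/GTM/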

    Read the article

  • Subscribers locking during snapshot replication

    - by remi_bourgarel
    Hi all :) Here is my architecture: I have a main server and 4 replicas (the servers are synchronized with snapshot replication). The replication is working fine, except for one thing: during the replication, a lot of SELECT requests on one of the replicas fail with a time-out. Here are my questions: Can I avoid these time-outs? If I can't: how can I detect the start and the end of the replication, so I can redirect all the requests for that replica to the main server? Thanks. Sorry if you have already answered that kind of question, but I couldn't find anything.

    Read the article

  • Using iptables to change a destination port but keep the IP the same

    - by Scott Chamberlain
    I am playing around with transparent proxies. The current way I am doing things is: the program makes a request to a computer on port 80, and I use

        iptables -t nat -A OUTPUT -p tcp --destination-port 80 -j REDIRECT --to-port 1234

    to redirect to the proxy that I am playing with. The proxy will send its outgoing request to port 81 (as all outbound port 80 traffic is being fed back into the proxy), so I want to do something like

        iptables -t nat -A OUTPUT -p tcp --destination-port 81 -j DNAT --to-destination xxxx:80

    The problem lies with the xxxx part. How do I change the destination port without changing the destination IP? Or am I doing this setup completely wrong? I am learning, after all, and constructive criticism is definitely appreciated.
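
    If I read the DNAT target's syntax correctly (worth verifying against the DNAT section of the iptables man page for your kernel), the address part of --to-destination may be omitted, in which case only the port is rewritten and the original destination IP is preserved:

        # rewrite only the destination port; the destination IP stays unchanged
        iptables -t nat -A OUTPUT -p tcp --destination-port 81 -j DNAT --to-destination :80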

    Read the article

  • Custom Session Management using HashTable

    - by kaleidoscope
    ASP.NET session state lets you associate a server-side string or object dictionary containing state data with a particular HTTP client session. A session is defined as a series of requests issued by the same client within a certain period of time, and is managed by associating a session ID with each unique client. The ID is supplied by the client on each request, either in a cookie or as a special fragment of the request URL. The session data is stored on the server side in one of the supported session state stores, which include in-process memory, a SQL Server™ database, and the ASP.NET State Server service. The latter two modes enable session state to be shared among multiple Web servers on a Web farm and do not require server affinity. To implement a custom session handler you need to follow this process: 1. Create a class library containing a class which inherits from the SessionStateStoreProviderBase abstract class. 2. Implement all abstract methods of the base class. 3. Change the session mode to "Custom" in the web.config file, and provide your namespace and class name as the provider:

        <sessionState mode="Custom" customProvider="MyProvider">
          <providers>
            <add name="MyProvider" type="Namespace.ClassName" />
          </providers>
        </sessionState>

    For more details please refer to the following links: http://msdn.microsoft.com/en-us/magazine/cc163730.aspx http://msdn.microsoft.com/en-us/library/system.web.sessionstate.sessionstatestoreproviderbase.aspx - Chandraprakash, S Technorati Tags: Chandraprakash, Session state management
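
    For illustration, a minimal sketch of such a provider backed by an in-memory Hashtable (matching the title). All names are invented; it compiles against System.Web but is not production-ready (no real exclusive locking, expiry, or web-farm support):

        using System;
        using System.Collections;
        using System.Web;
        using System.Web.SessionState;

        public class HashtableSessionStateProvider : SessionStateStoreProviderBase
        {
            // Synchronized wrapper gives coarse thread safety for this sketch.
            private static readonly Hashtable store = Hashtable.Synchronized(new Hashtable());

            public override SessionStateStoreData CreateNewStoreData(HttpContext context, int timeout)
            {
                return new SessionStateStoreData(new SessionStateItemCollection(),
                    SessionStateUtility.GetSessionStaticObjects(context), timeout);
            }

            public override void CreateUninitializedItem(HttpContext context, string id, int timeout)
            {
                store[id] = CreateNewStoreData(context, timeout);
            }

            public override SessionStateStoreData GetItem(HttpContext context, string id,
                out bool locked, out TimeSpan lockAge, out object lockId,
                out SessionStateActions actions)
            {
                locked = false; lockAge = TimeSpan.Zero; lockId = null;
                actions = SessionStateActions.None;
                return (SessionStateStoreData)store[id];
            }

            public override SessionStateStoreData GetItemExclusive(HttpContext context, string id,
                out bool locked, out TimeSpan lockAge, out object lockId,
                out SessionStateActions actions)
            {
                // A real provider must take an exclusive lock here.
                return GetItem(context, id, out locked, out lockAge, out lockId, out actions);
            }

            public override void SetAndReleaseItemExclusive(HttpContext context, string id,
                SessionStateStoreData item, object lockId, bool newItem)
            {
                store[id] = item;
            }

            public override void RemoveItem(HttpContext context, string id, object lockId,
                SessionStateStoreData item)
            {
                store.Remove(id);
            }

            public override void ReleaseItemExclusive(HttpContext context, string id, object lockId) { }
            public override void ResetItemTimeout(HttpContext context, string id) { }
            public override bool SetItemExpireCallback(SessionStateItemExpireCallback expireCallback)
            {
                return false; // expiry notifications are not supported in this sketch
            }
            public override void InitializeRequest(HttpContext context) { }
            public override void EndRequest(HttpContext context) { }
            public override void Dispose() { }
        }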

    Read the article

  • VS 11 Beta merge tool is awesome, except for resolving conflicts

    - by deadlydog
    If you've downloaded the new VS 11 Beta and done any merging, then you've probably seen the new diff and merge tools built into VS 11. They are awesome, and by far a vast improvement over the ones included in VS 2010. There is one problem with the merge tool though, and in my opinion it is huge.

    Basically the problem with the new VS 11 Beta merge tool is that when you are resolving conflicts after performing a merge, you cannot tell what changes were made in each file where the code is conflicting. Was the conflicting code added, deleted, or modified in the source and target branches? I don't know (without explicitly opening up the history of both the source and target files), and the merge tool doesn't tell me. In my opinion this is a huge fail on the part of the designers/developers of the merge tool, as it actually forces me to either spend an extra minute for every conflict to view the source and target file history, or to go back to use the merge tool in VS 2010 to properly assess which changes I should take.

    I submitted this as a bug to Microsoft, but they say that this is intentional by design. WHAT?! So they purposely crippled their tool in order to make it pretty and keep the look consistent with the new diff tool? That's like purposely putting a little hole in the bottom of your cup for design reasons to make it look cool. Sure, the cup looks cool, but I'm not going to use it if it leaks all over the place and doesn't do the job that it is intended for. Bah! But I digress.

    Because this bug is apparently a feature, they asked me to open up a "feature request" to have the problem fixed. Please go vote up both my bug submission and the feature request so that this tool will actually be useful by the time the final VS 11 product is released.

    Read the article

  • The JavaServer Faces 2.2 viewAction Component

    - by Janice J. Heiss
    Life just got easier for users of JavaServer Faces. In a new article, now up on otn/java, titled “New JavaServer Faces 2.2 Feature: The viewAction Component,” Tom McGinn, Oracle’s Principal Curriculum Developer for Oracle Server Technologies, explores the advantages offered by the JavaServer Faces 2.2 view action feature, which, according to McGinn, “simplifies the process for performing conditional checks on initial and postback requests, enables control over which phase of the lifecycle an action is performed in, and enables both implicit and declarative navigation.” As McGinn observes: “A view action operates like a button command (UICommand) component. By default, it is executed during the Invoke Application phase in response to an initial request. However, as you'll see, view actions can be invoked during any phase of the lifecycle and, optionally, during postback, making view actions well suited to performing preview checks.” McGinn explains that the JavaServer Faces 2.2 view action feature offers several advantages over the previous method of performing evaluations before a page is rendered:

        * View actions can be triggered early on, before a full component tree is built, resulting in a lighter-weight call.
        * View action timing can be controlled.
        * View actions can be used in the same context as the GET request.
        * View actions support both implicit and explicit navigation.
        * View actions support both non-faces (initial) and faces (postback) requests.

    Read the complete article here.
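
    As a quick illustration of the component the article describes (the bean and parameter names here are invented), a view action is declared in the view's metadata and can be restricted to initial requests:

        <f:metadata>
            <f:viewParam name="orderId" value="#{orderBean.orderId}" />
            <!-- runs before rendering; onPostback="false" limits it to initial requests -->
            <f:viewAction action="#{orderBean.checkAccess}" onPostback="false" />
        </f:metadata>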

    Read the article

  • How to remove this malware

    - by muratto12
    Some files in my site contain some extra lines. After I've deleted them manually, I find them corrupted again some time later. It is all coming from JS files on http://*.changeip.name/. How can I remove them?

        <!--pizda--><script type='text/javascript' src='http://m2.changeip.name/validate.js?ftpid=15035'></script><!--/pizda-->
        <iframe src=http://pizda.changeip.name/?f=1065433 frameborder=0 marginheight=0 marginwidth=0 scrolling=0 width=5 height=5 border=0>
        <iframe src=http://kuku.changeip.name/?f=1065433 frameborder=0 marginheight=0 marginwidth=0 scrolling=0 width=5 height=5 border=0>
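
    A basic sweep to list every file that still carries the injection (the path here is a placeholder for the actual web root):

        # list files under the web root that reference the injected domain
        grep -rl 'changeip.name' /var/www/mysite

    Worth noting as an assumption drawn from the ftpid= parameter in the injected script: reinfections of this kind usually ride on stolen FTP credentials, so changing those passwords and scanning the machines that stored them matters as much as cleaning the files.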

    Read the article

  • Accessing the JSESSIONID from JSF

    - by Frank Nimphius
    The following code attempts to access and print the user's session ID from ADF Faces, using both the session cookie that is automatically set by the server and the HttpSession object itself.

        FacesContext fctx = FacesContext.getCurrentInstance();
        ExternalContext ectx = fctx.getExternalContext();
        HttpSession session = (HttpSession) ectx.getSession(false);
        String sessionId = session.getId();
        System.out.println("Session Id = " + sessionId);
        Cookie[] cookies = ((HttpServletRequest) ectx.getRequest()).getCookies();
        //reset session string
        sessionId = null;
        if (cookies != null) {
            for (Cookie brezel : cookies) {
                if (brezel.getName().equalsIgnoreCase("JSESSIONID")) {
                    sessionId = brezel.getValue();
                    break;
                }
            }
        }
        System.out.println("JSESSIONID cookie = " + sessionId);

    Though apparently both approaches do the same thing, they differ in the value they return and in the conditions under which they work. The getId method, for example, returns a session value as shown below:

        grLFTNzJhhnQTqVwxHMGl0WDZPGhZFl2m0JS5SyYVmZqvrfghFxy!-1834097692!1322120041091

    Reading the cookie returns a value like this:

        grLFTNzJhhnQTqVwxHMGl0WDZPGhZFl2m0JS5SyYVmZqvrfghFxy!-1834097692

    Though both seem to be identical, the difference is the "!1322120041091" appended to the ID when reading it directly from the HttpSession object. Depending on the use case the session ID is looked up for, this difference may not be important. Another difference, however, is of importance: reading the cookie only works if the session ID is added as a cookie to the request, which is configurable for applications in the weblogic-application.xml file. If cookies are disabled, the server appends the session ID to the request URL (right after the view ID reference). In that case no cookie is set, so the lookup returns empty. In both cases, however, the getId variant works.

    Read the article

  • Apache in front of Tomcat on Railo, proxying with AJP

    - by user1468116
    I'm trying to set up Apache in front of the Tomcat embedded in Railo. I have these settings:

        <VirtualHost *:80>
            DocumentRoot "/var/www/myapp"
            ServerName www.myapp.test
            ServerAlias www.myapp.test
            ProxyRequests Off
            ProxyPass /app !
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPreserveHost On
            ProxyPassReverse / ajp://%{HTTP_HOST}:8009/
            RewriteEngine On
            # If it's a CFML (*.cfc or *.cfm) request, just proxy it to Tomcat:
            RewriteRule ^(.+\.cf[cm])(/.*)?$ ajp://%{HTTP_HOST}:8009/$1$2 [P]

    My server.xml:

        <Host name="www.myapp.test" appBase="webapps"
              unpackWARs="true" autoDeploy="true"
              xmlValidation="false" xmlNamespaceAware="false">
            <Context path="" docBase="/var/www/myapp" />
            <Alias>myapp.test</Alias>
        </Host>

    The index page loads, but if I try to load an internal page I get:

        The proxy server could not handle the request GET /report/myreportname.
        Reason: DNS lookup failure for: localhost:8009report

    Could you help me?
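
    One hedged suggestion (the malformed host "localhost:8009report" in the error hints that the ajp:// URL is being assembled without the slash before the path): instead of building the backend URL inside mod_rewrite with %{HTTP_HOST}, which ProxyPassReverse in particular will not expand, let mod_proxy_ajp do the matching against a fixed backend host:

        ProxyPassMatch ^/(.+\.cf[cm])(/.*)?$ ajp://localhost:8009/$1$2
        ProxyPassReverse / ajp://localhost:8009/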

    Read the article

  • How should programmers handle identity theft?

    - by Craige
    I recently signed up for an iTunes account and found that somebody had fraudulently used MY email address to register their iTunes account. Why Apple did not validate the email address, I will never know. Now I am told that I cannot use my email address to register a new iTunes account, as this email address is linked to an existing account. This got me thinking: as developers, database administrators, technical analysts, and everything in between, how should we handle reports of a fraudulent account? Experience teaches us never to reassign identifying credentials. Doing so can break things and/or cause mass confusion, especially in the realm of the web. That is, if we need to reassign an identifying user credential, we can very likely break a user's bookmarks by making a page render data that previously did not exist at that location. So, if we have been taught not to reassign details like these, how should we handle a case where an account is discovered to be fraudulent and the owner of the identity (email address or user name) wishes to claim this detail for their own account?

    Read the article

  • Windows Server 2008R2 IIS7.5, Requests getting 401 status

    - by TLBH
    We have a web site running on Windows Server 2008 R2 / IIS 7.5, and we are seeing several errors reported from our global error handler saying "Request Timed Out". I matched one up to the IIS log file and saw that the request took 135116 (presumably milliseconds) and had an sc-status of 401, an sc-substatus of 0, and an sc-win32-status of 64. Two requests failed this way, but lots of surrounding requests (1979 successful requests vs 2 failed) for the same user went through perfectly fine, with the same cs-username, which makes a 401 seem a little odd. The target of the requests is an ASP.NET web service's web method, called by the .NET client library; it's called a lot (3 times per second per user) to keep a page updated. We're getting some users reporting a freezing effect, and I think this may be the cause. Any ideas? Peter
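
    Two notes that may help the digging, offered as assumptions rather than a diagnosis: sc-win32-status 64 is ERROR_NETNAME_DELETED, which usually means the client side dropped the connection before the response was written; and Log Parser can pull the failures out of the IIS logs next to their neighbours. A sketch of such a query (the log file names depend on the site's logging configuration):

        LogParser.exe -i:IISW3C "SELECT date, time, cs-username, cs-uri-stem, sc-status, sc-substatus, sc-win32-status, time-taken FROM ex*.log WHERE sc-status = 401 ORDER BY time-taken DESC"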

    Read the article

  • Add static ARP entries when network is brought up

    - by jozzas
    I have some pretty dumb IP devices on a subnet with my Ubuntu server, and the server receives streaming data from each device. I have run into a problem: when an ARP request is issued to a device while it is streaming data to the server, the request is ignored, the cache entry times out, and the server stops receiving the stream. So, to prevent the server from sending ARP requests to these devices altogether, I would like to add a static ARP entry for each, something like:

        arp -i eth2 -s ip.of.the.device mac:of:the:device

    But these "static" ARP entries are lost if networking is disabled/enabled or if the server is rebooted. Where is the best place to automatically add these entries, preferably somewhere that will re-add them every time the interface eth2 is brought up? I really don't want to have to write a script that monitors the output of arp and re-adds the cache entries if they're missing. Edit, to add what my final script was: I created the file /etc/network/if-up.d/add-my-static-arp with the contents:

        #!/bin/sh
        arp -i eth0 -s 192.168.0.4 00:50:cc:44:55:55
        arp -i eth0 -s 192.168.0.5 00:50:cc:44:55:56
        arp -i eth0 -s 192.168.0.6 00:50:cc:44:55:57

    And then, obviously, added the permission to allow it to be executed:

        chmod +x /etc/network/if-up.d/add-my-static-arp

    Now these ARP entries will be added or re-added every time any network interface is brought up.

    Read the article

  • Windows redirect traffic to different DNS name not fixed IP address (hosts file equivalent)

    - by Arik Raffael Funke
    Using the Windows hosts file, one can redirect traffic for a domain to a specific IP address, e.g.:

        domainA.com -> 127.0.0.1

    I am looking for a SIMPLE way to do the same, but with a target domain name instead of a target IP address (as the target's IP is dynamic), i.e.:

        domainA.com -> domainB.com

    Addition: after getting some initial answers, I think I need to make my question more concrete. Situation: I have an application which looks up the IP of the target domain via DNS and then connects via HTTP to that IP address. I do not have control over any proxy settings. Option 1: Basically I am looking for a way to intercept DNS requests for domainA.com, launch a DNS request for domainB.com, and serve the IP of domainB.com in response to the request for domainA.com, without running an entire DNS server. Option 2: If a DNS server is the only way, in the alternative I would also be happy with a solution for defining a non-standard DNS server for a single application. Any ideas for wrapper applications, etc.?

    Read the article

  • IIS: redirect everything to another URL, except for one Directory

    - by DrStalker
    I have an IIS server (IIS 6, Windows Server 2003) that hosts the site http://www.foo.com. I want any request to http://foo.com (no matter what path/filename is used) to redirect to http://www.bar.org/AwesomePage.html, UNLESS the request is for http://www.foo.com/specialdir, in which case the HTML files in the local directory specialdir should be used. The problem I have is that once the redirect is set, it also affects /specialdir: even if I right-click on that directory and select "content should come from ... local directory", the change does not take effect, and the directory still shows as redirecting to http://www.bar.org/AwesomePage.html. The same thing happens if I try to set individual files to load from the local system instead of redirecting: IIS gives no error, but the change does not take effect and the files still show as being redirected. How can I set specialdir to override the redirection to the new URL?

    Read the article

  • Duplicity can't connect to CloudFiles "Network is unreachable"

    - by jwandborg
    Whenever I click "Backup now" in the Backup GUI, the smaller "Back Up" window opens, and after a while I get the following error message:

        Traceback (most recent call last):
          File "/usr/bin/duplicity", line 1359, in <module>
            with_tempdir(main)
          File "/usr/bin/duplicity", line 1342, in with_tempdir
            fn()
          File "/usr/bin/duplicity", line 1202, in main
            action = commandline.ProcessCommandLine(sys.argv[1:])
          File "/usr/lib/python2.7/dist-packages/duplicity/commandline.py", line 942, in ProcessCommandLine
            globals.backend = backend.get_backend(args[0])
          File "/usr/lib/python2.7/dist-packages/duplicity/backend.py", line 156, in get_backend
            return _backends[pu.scheme](pu)
          File "/usr/lib/python2.7/dist-packages/duplicity/backends/cloudfilesbackend.py", line 70, in __init__
            self.container = conn.create_container(container)
          File "/usr/lib/python2.7/dist-packages/cloudfiles/connection.py", line 250, in create_container
            response = self.make_request('PUT', [container_name])
          File "/usr/lib/python2.7/dist-packages/cloudfiles/connection.py", line 189, in make_request
            response = retry_request()
          File "/usr/lib/python2.7/dist-packages/cloudfiles/connection.py", line 182, in retry_request
            self.connection.request(method, path, data, headers)
          File "/usr/lib/python2.7/httplib.py", line 955, in request
            self._send_request(method, url, body, headers)
          File "/usr/lib/python2.7/httplib.py", line 989, in _send_request
            self.endheaders(body)
          File "/usr/lib/python2.7/httplib.py", line 951, in endheaders
            self._send_output(message_body)
          File "/usr/lib/python2.7/httplib.py", line 811, in _send_output
            self.send(msg)
          File "/usr/lib/python2.7/httplib.py", line 773, in send
            self.connect()
          File "/usr/lib/python2.7/httplib.py", line 1154, in connect
            self.timeout, self.source_address)
          File "/usr/lib/python2.7/socket.py", line 571, in create_connection
            raise err
        error: [Errno 101] Network is unreachable

    I use Rackspace CloudFiles as a storage backend; the last backup, 3 days ago, was successful. I have not changed any settings since then.

    Read the article

  • Problem connecting to SSH in office network

    - by Jeune
    I have trouble connecting via SSH to a server whenever I am in the office. I get as far as being prompted for my password, and then there's a long wait which always ends in:

        Write failed: Broken pipe

    This only happens when connecting via SSH. I use svn to commit files to a repository hosted on the same server and there are no hitches. Furthermore, this only happens in our office. When I go to the university, or when I am at home or at the coffee shop, I am able to connect seamlessly. There are no firewalls in our office; it's just a basic wireless router connected to a modem, the same setup I have at home and, I guess, the same setup in the coffee shop. What are the causes of a broken pipe, and why does this phenomenon only happen when I connect via SSH and not when I work with svn on the same server? Updated: some debug logs after authentication:

        debug3: packet_send2: adding 48 (len 64 padlen 16 extra_pad 64)
        debug2: we sent a password packet, wait for reply
        debug1: Authentication succeeded (password).
        debug1: channel 0: new [client-session]
        debug3: ssh_session2_open: channel_new: 0
        debug2: channel 0: send open
        debug1: Entering interactive session.
        debug2: callback start
        debug2: client_session2_setup: id 0
        debug2: channel 0: request pty-req confirm 1
        debug1: Sending environment.
        debug3: Ignored env ORBIT_SOCKETDIR
        debug3: Ignored env SSH_AGENT_PID
        debug3: Ignored env TERM
        debug3: Ignored env SHELL
        debug3: Ignored env XDG_SESSION_COOKIE
        debug3: Ignored env WINDOWID
        debug3: Ignored env GNOME_KEYRING_CONTROL
        debug3: Ignored env GTK_MODULES
        debug3: Ignored env USER
        debug3: Ignored env LS_COLORS
        debug3: Ignored env LIBGL_DRIVERS_PATH
        debug3: Ignored env SSH_AUTH_SOCK
        debug3: Ignored env DEFAULTS_PATH
        debug3: Ignored env SESSION_MANAGER
        debug3: Ignored env USERNAME
        debug3: Ignored env XDG_CONFIG_DIRS
        debug3: Ignored env DESKTOP_SESSION
        debug3: Ignored env LIBGL_ALWAYS_INDIRECT
        debug3: Ignored env PATH
        debug3: Ignored env PWD
        debug3: Ignored env GDM_KEYBOARD_LAYOUT
        debug1: Sending env LANG = en_PH.utf8
        debug2: channel 0: request env confirm 0
        debug3: Ignored env GNOME_KEYRING_PID
        debug3: Ignored env MANDATORY_PATH
        debug3: Ignored env GDM_LANG
        debug3: Ignored env GDMSESSION
        debug3: Ignored env SHLVL
        debug3: Ignored env HOME
        debug3: Ignored env GNOME_DESKTOP_SESSION_ID
        debug3: Ignored env LOGNAME
        debug3: Ignored env XDG_DATA_DIRS
        debug3: Ignored env DBUS_SESSION_BUS_ADDRESS
        debug3: Ignored env LESSOPEN
        debug3: Ignored env WINDOWPATH
        debug3: Ignored env DISPLAY
        debug3: Ignored env LESSCLOSE
        debug3: Ignored env XAUTHORITY
        debug3: Ignored env COLORTERM
        debug3: Ignored env OLDPWD
        debug3: Ignored env _
        debug2: channel 0: request shell confirm 1
        debug2: fd 3 setting TCP_NODELAY
        debug2: callback done
        debug2: channel 0: open confirm rwindow 0 rmax 32768

    UPDATE 2011-07-14: I am able to connect to the server via SSH now. I didn't do anything, but that's because there is no one in the office but me! Having said that, is it possible that this has something to do with the number of sessions an SSH server can handle? UPDATE 2011-07-14: I tried to log in via SSH through PuTTY on another machine running Windows, alongside my current SSH session in Ubuntu, and now it seems my SSH session in Ubuntu has been dropped. I can't type into the terminal. Is PuTTY the culprit now?
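
    One cheap experiment, assuming the office router is dropping idle or NAT-tracked connections (interactive SSH sits quiet in a way that short svn operations never do): enable client-side keepalives in ~/.ssh/config and see whether the drops stop.

        Host myserver
            # Send an application-level keepalive every 30 seconds; give up
            # after 4 unanswered probes.
            ServerAliveInterval 30
            ServerAliveCountMax 4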

    Read the article

  • Restoring Databases

    - by Grant Fritchey
    I like the way Mike Walsh phrased it: You're Only As Good as Your Ability To Restore. Ain't it the truth. You may be taking backups, incrementals, and log backups of your databases. You may have DBCC checks in place, and all that fun stuff. But if you haven't restored the database, what do you have? You don't know. The trick is, restoring databases takes up a heck of a lot of space on your servers. To test all your production backups, you'd need a system with as much space as production... unless... Ever heard of SQL Virtual Restore? Check it out. With this, you answer Mike's questions and validate your backups without having to have twice the amount of space. That's a win, and we all know winning is better than losing.
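
    For reference, the plain T-SQL shape of a full test restore looks something like this (the database, file, and path names are made up; the logical names come from the FILELISTONLY output):

        -- inspect the logical file names inside the backup first
        RESTORE FILELISTONLY FROM DISK = N'D:\Backups\MyDb.bak';

        -- restore under a new name, onto scratch storage
        RESTORE DATABASE MyDb_Verify
        FROM DISK = N'D:\Backups\MyDb.bak'
        WITH MOVE N'MyDb'     TO N'E:\Restores\MyDb_Verify.mdf',
             MOVE N'MyDb_log' TO N'E:\Restores\MyDb_Verify.ldf',
             RECOVERY;

        -- and check the restored copy for corruption
        DBCC CHECKDB (MyDb_Verify) WITH NO_INFOMSGS;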

    Read the article

  • How is this modsec rule getting triggered?

    - by BipedalShark
    I made a GET request to the URL http://domain.tld/test/docs/index.php?create_table=1&step=2 and got a 403 response code. It turns out this mod_security rule is getting triggered:

        Access denied with code 403 (phase 2). Pattern match "(?:ogg|gopher|zlib|(?:ht|f)tps?)\:/" at ARGS:gltr_redir.
        [file "/opt/mod_security/10_asl_rules.conf"] [line "827"] [id "340153"] [rev "22"]
        [msg "Generic PHP code injection protection via ARGS 3"] [severity "CRITICAL"]

    I would assume ARGS refers to GET/POST data, but there's no gltr_redir in the query string. And, being a GET request, there's obviously no POST data. So how is this rule being triggered?
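
    If the rule turns out to be a false positive for this application, ModSecurity 2.x allows switching it off for just the affected path rather than globally; a sketch, assuming Apache context:

        <LocationMatch ^/test/docs/>
            SecRuleRemoveById 340153
        </LocationMatch>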

    Read the article

  • Making WIF local STS to work with your ASP.NET application

    - by DigiMortal
    Making the Windows Identity Foundation (WIF) STS test application work with your solution is not as straightforward a process as you might gather from books and articles. There are some tricks and some configuration modifications you must make to get things working. Fortunately these steps are simple ones. 1. Move your application to IIS or IIS Express. If your application uses the development web server that ships with Visual Studio, then make it use IIS or IIS Express instead. You get IIS Express support in Visual Studio 2010 after installing Visual Studio 2010 SP1. You can read more in my blog posting Visual Studio 2010 SP1 Beta supports IIS Express. NB! You don't have to move your dummy STS project to IIS. 2. Change the request validation mode to ASP.NET 2.0. Next, you will get the following error when coming back from the dummy STS service: HttpRequestValidationException (0x80004005): A potentially dangerous Request.Form value was detected from the client. Open the web.config of your application and add the following line before </system.web>:

        <httpRuntime requestValidationMode="2.0" />

    Now you are done with configuring the web application to work with STS.

    Read the article

  • Apache Serving 403 Forbidden after OS X Snow Leopard Upgrade to Version 10.6.6

    - by Ian Oxley
    I've just upgraded my MacBook Pro to OS X Snow Leopard version 10.6.6 and now Apache is misbehaving:

        * requests to http://localhost/ generate a 403 Forbidden response -- FIXED
        * requests to any of my virtual hosts seem to generate a 200 OK response, but contain zero bytes

    Some further info that might be useful: I'm using the Apache that comes bundled with OS X, and PHP from http://www.entropy.ch/software/macosx/php/ (which is in /usr/local/bin). I've had a look at the Apache error log and the only error seems to be the following:

        [notice] child pid 744 exit signal Segmentation fault (11)

    I'm completely stumped by this. Any help would be much appreciated. UPDATE: I've managed to resolve the 403 Forbidden error thanks to http://techtrouts.com/mac-os-x-105-web-sharing-forbidden-403-on-httplocalhostusername/. I'm still having the second problem though, for any request; e.g. it now happens when I request http://localhost.

    Read the article
