Search Results

Search found 12964 results on 519 pages for 'pass summit 2013'.


  • How to pass a value to another view controller?

    - by ICoder
    I want to pass localStringtextnote to UploadViewController like this:

        UIViewController *controllerNew = [[UploadViewController alloc] initWithNibName:@"UploadView" bundle:nil owner:self];
        controllerNew.localStringtextnote = localStringtextnote;
        [self.navigationController pushViewController:controllerNew animated:YES];
        [controllerNew release];

    but I get the error "property localStringtextnote not found in object of type UIViewController". Alternatively, I want to present it with a modal transition style:

        UploadViewController *aSecondViewController = [[UploadViewController alloc] initWithNibName:@"UploadView" bundle:nil];
        aSecondViewController.modalTransitionStyle = UIModalTransitionStyleCoverVertical;
        [self presentModalViewController:aSecondViewController animated:YES];
        [UIView commitAnimations];

    How can I do this? Thanks in advance.
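
    The error suggests one likely fix: type the variable as the subclass that actually declares the property, rather than as plain UIViewController. A minimal sketch, assuming UploadViewController declares a localStringtextnote property:

        // Declare as UploadViewController, not UIViewController, so the
        // compiler can see the localStringtextnote property.
        UploadViewController *controllerNew = [[UploadViewController alloc]
            initWithNibName:@"UploadView" bundle:nil];
        controllerNew.localStringtextnote = localStringtextnote;
        [self.navigationController pushViewController:controllerNew animated:YES];
        [controllerNew release]; // pre-ARC, matching the question's code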

    Read the article

  • Corona SDK - Make a character pass through a platform

    - by Andy Res
    I'm building a game with a character that should jump up onto multiple platforms. The jumping functionality is done, but when the character is just below a platform (a static body) and I press the "jump" button, I would like the character to pass through that platform and then land on it. Right now it collides with the platform, and the character cannot jump onto it. Do you have any idea how this can be achieved? Currently the platforms are rectangles with a "static" body type:

        local platform = display.newRect( 50, 280, 150, 10 )
        platform:setFillColor( 55, 55, 55 )
        physics.addBody( platform, "static", { density=1.0, friction=1.0, bounce=0 } )

    I was thinking I could change or remove the platform's body type when the character collides with it, so he can pass through, but I don't know how to do this, or whether it would work at all... maybe there are built-in techniques for the effect I want?
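
    One-way platforms are commonly handled in Corona with a pre-collision listener that voids the contact while the character approaches from below. A sketch, assuming a Corona build recent enough to expose per-contact control (event.contact), and with "character" as an illustrative name for the player's dynamic body:

        -- Let the character pass through from below but land from above.
        -- Screen y grows downward, so character.y > self.y means "below".
        local function onPreCollision( self, event )
            if event.contact and character.y > self.y then
                event.contact.isEnabled = false  -- void this contact only
            end
        end

        platform.preCollision = onPreCollision
        platform:addEventListener( "preCollision" )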

    Read the article

  • C# 4.0 how to pass variables to threads?

    - by Aviatrix
    How would I pass some parameters to a new thread that runs a function from another class? What I'm trying to do is pass an array (or several variables) to a function that sits in another class and is called by a new thread. I have tried to do it like this:

        Functions functions = new Functions();
        string[] data;
        Thread th = new Thread(new ParameterizedThreadStart(functions.Post()));
        th.Start(data);

    but it shows the error "No overload for method 'Post' takes 0 arguments". Any ideas?
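
    The error comes from invoking Post() instead of passing the method itself; ParameterizedThreadStart also expects a method taking a single object. A sketch, assuming Post is (or can be) declared as void Post(object state):

        // using System.Threading;
        Functions functions = new Functions();
        string[] data = { "first", "second" };

        // Pass the method group, don't call it; the argument arrives in
        // Post as 'state' and must be cast back to string[].
        Thread th = new Thread(new ParameterizedThreadStart(functions.Post));
        th.Start(data);

        // Alternatively, a closure avoids the object cast entirely:
        Thread th2 = new Thread(() => functions.Post(data));
        th2.Start();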

    Read the article

  • Pass, edit and return variable in C

    - by Supertecnoboff
    I am new to C programming and recently I have been playing around with functions. I have a simple problem: I want to pass an integer to a function, have the function edit it, and then pass the integer back to the calling function. I am working on this but it isn't working. Here is my code:

        int update_SEG_values(int DIGIT_1, int DIGIT_2)
        {
            // How many tens in TEMP_COUNT.
            DIGIT_2 = TEMP_COUNT / 10;

            // How much is left for us to display.
            TEMP_COUNT = TEMP_COUNT - (DIGIT_2 * 10);

            // How many ones.
            DIGIT_1 = TEMP_COUNT / 1;

            return(DIGIT_1, DIGIT_2);
        }

    What am I doing wrong here?
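
    Two things go wrong here: C passes arguments by value, so assigning to DIGIT_1 and DIGIT_2 only changes local copies, and return(DIGIT_1, DIGIT_2) uses the comma operator, returning only the second value. The usual fix is to pass pointers. A minimal sketch:

        #include <stdio.h>

        /* Pass the digits by address so the function edits the caller's copies. */
        void update_SEG_values(int temp_count, int *digit_1, int *digit_2)
        {
            *digit_2 = temp_count / 10;   /* how many tens          */
            *digit_1 = temp_count % 10;   /* what is left: the ones */
        }

        int main(void)
        {
            int d1 = 0, d2 = 0;
            update_SEG_values(47, &d1, &d2);
            printf("tens=%d ones=%d\n", d2, d1);  /* prints tens=4 ones=7 */
            return 0;
        }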

    Read the article

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate's tools, covered the first two parts of a simple Database Continuous Delivery process: putting your database into a source control system, and running a continuous integration process each time changes are checked in. However, there is of course a lot more to Continuous Delivery than that. Specifically, in addition to the above:

      • Putting some actual integration tests into the CI process (otherwise, they don't really do much, do they!?),
      • Deploying the database changes with a managed, automated approach,
      • Monitoring what you've just put live, to make sure you haven't broken anything.

    This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There'll then be a third post on automated database deployment, followed by a final post dealing with the last item - monitoring changes on the live system.

    In the previous post, I used a mixture of Red Gate products and other 3rd party software - GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in a heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian's BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this post, I'll be mostly using Red Gate products only (other than tSQLt). I would do this, firstly, because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully - so I didn't have any choice!

    Background on Continuous Delivery

    For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery - Reliable Software Releases through Build, Test, and Deployment Automation. This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it's that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say, though, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references:

      • Refactoring Databases - Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage
      • Versioning Databases - Branching and Merging, by Scott Allen

    In particular, I don't deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: "Continuous Delivery is about keeping your application in a state where it is always able to deploy into production." I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure.
    That is possible (and is what Martin calls Continuous Deployment). However, again, that's more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!

    Integration Testing

    Back to something practical. The next stage, building on our set-up from the previous article, is to add some integration tests to the process. As I say, the CI process, though interesting, isn't enormously useful without some sort of test process running. For this we'll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate's SQL Test, found at http://www.red-gate.com/products/sql-development/sql-test/, or can be downloaded separately from www.tsqlt.org - though I'll provide a step-by-step guide below for setting this up.

    Getting tSQLt set up via SQL Test

      • Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed.
      • Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed.
      • Open SSMS. You should now see SQL Test under the Tools menu; clicking this link will give you the basic SQL Test dialogue.
      • Though we've installed the SQL Test product, we haven't yet installed the tSQLt test framework on any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the "+ Add Database to SQL Test…" link, selecting the RedGateApp database and clicking the Add Database link.
      • In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the "Add SQL Cop tests" option. SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database; however, we won't be using them in this particular simple example.
      • Once you've clicked on the OK button, the changes described in the dialogue will be made to your database.

    We've now installed the framework. However, we haven't actually created any tests, so this will be the next step. But before we proceed: we've made an update to our database, so we should again check this in to source control, adding comments as required. It's also worth a quick check that your build still runs with the new additions! (And a quick check of the RedGateAppCI database shows that the changes have been made.)

    Creating and Testing a Unit Test

    There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I'm going to use here is pretty Mickey Mouse - our database table is going to include some email addresses as reference data, and I want to check whether these are all in a correct email format. Nothing clever, but it illustrates the process and hopefully shows the method by which more interesting tests could be set up.

    Adding Reference Data to our Database

    To start, I want to add some reference data to my database, and have this source controlled (as well as the schema).
    First of all I need to add some data to my solitary table - this can be done a number of ways, but I'll do it in SSMS for simplicity. I then add some reference data to my table.

    Currently this reference data just exists in the database. For proper integration testing, it needs to form part of the source-controlled version of the database - and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right-click the table, select Design, then right-click on the first "id" row and click "Set Primary Key". NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right-click on the table (dbo.Email) and select the static-data linking option. In the next screen, link the data in the Email table by selecting it from the list and clicking "save and close".

    We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo. NB: From here on, I won't show screenshots for the GitHub side of things - it's the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo, is taking it from).

    An interesting point to note here: when these changes are committed in SQL Source Control (right-click the database and select "Commit Changes to Source Control…"), the display gives a warning about possibly needing a migration script for the "Add Primary Key" step of the changes. This isn't actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database.

    Creating and Running the Test

    As I mentioned, the test I'm going to use here is a very simple one: are the email addresses in my reference table valid? This isn't, of course, a full test of email validation (I expect the email addresses I've chosen here aren't really those of the Fab Four) - just a very basic check of the format used. I've taken the relevant SQL from this Stack Overflow article.

    In SSMS, select "SQL Test" from the Tools menu, then click on "+ New Test". In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct - RedGateApp in this example. Click "Create Test". After closing a couple of subsequent dialogues, you'll see a dummy script for the test that needs filling in.

    We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I'm going to use here is below.
    This needs to be copied and pasted into the query window, to replace the default given by tSQLt:

        -- Basic email check test
        ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
        AS
        BEGIN
            SET NOCOUNT ON

            DECLARE @Output VARCHAR(MAX)
            SET @Output = ''

            SELECT @Output = @Output + Email + CHAR(13) + CHAR(10)
            FROM dbo.Email
            WHERE Email NOT LIKE '%_@__%.__%'

            IF @Output > ''
            BEGIN
                SET @Output = CHAR(13) + CHAR(10) + @Output
                EXEC tSQLt.Fail @Output
            END
        END;

    Once this script is entered, hit execute to add the stored procedure to the database. Before committing the test to source control, it's worth just checking that it works! For a positive test, click on "SQL Test" from the Tools menu, then click Run Tests. You should see a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something, by showing a failed test. Edit one of the email addresses in your table to an incorrect format, re-run the same SQL Test as before, and this time the test fails. Great - we now know that our test is really doing something! You'll also see a useful error message at the bottom of SSMS. (Leave the email address as invalid for now, for the next steps.) The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client).

    Checking that the Tests are Running as Integration Tests

    After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the Stored Procedures for RedGateAppCI shows that the SPROC for the new test has been moved over to the database. However, this is not exactly what we were after. We didn't want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process. I.e. we're continuously checking any changes we make (in this case, to the reference data emails) to ensure we're not breaking a test that we've set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did in step 9 above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows - sadly - a successful build!

    To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder. Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client:

        <!-- tSQLt tests -->
        <!-- Optional -->
        <!-- To run tSQLt tests in source control for the database, enter true. -->
        <enableTsqlt>true</enableTsqlt>

    Now, if we re-run the build in Bamboo (NB: I've moved to a new server here, hence the different address and build number) - superb, a broken build!! The error message isn't great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log is towards the bottom. Pulling out this part:

        21-Jun-2013 11:35:19    Build FAILED.
        21-Jun-2013 11:35:19
        21-Jun-2013 11:35:19    "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) ->
        21-Jun-2013 11:35:19    (sqlCI target) ->
        21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]

    As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I'm going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build. This should have fixed the build - it worked!

    Summary

    This has been a very quick run through the implementation of CI for databases, including tSQLt tests to check whether your database updates are working. The next post in this series will focus on automated deployment - we've tested our database changes; how can we now deploy these to target sites?

    Read the article

  • Oracle's HR Summit featuring Joyce Westerdahl is next week in Chicago!

    - by Jay Richey, HCM Product Marketing
    This special full-day HR Summit will examine the future of work, and how shifting demographics, new talent pools, changing workforce practices, and evolving business models are impacting the HR landscape. Joyce Westerdahl, Oracle Senior VP for HR, will share her HR strategies and insight into how she created a flexible, global workforce that has supported Oracle's ongoing transformation into an integrated technology solutions provider. Marcie van Houton, Fusion HCM Product Strategy Director, will delve into the innovative technologies that Oracle has developed to support all this change. And Sheryl Johnson, Director, Oracle Fusion HCM, PwC, will examine how high-performing HR organizations are increasing their relevance and value to the business, using organizational best practices and transformational technologies to drive real business results.

    Wednesday, December 7, 2011, 11:00 a.m. - 4:15 p.m.
    JW Marriott Chicago, 151 West Adams Street, Chicago, Illinois 60603
    www.oracle.com/us/dm/h2fy11/17109-nafm11032950mpp025-se-518477.html

    Read the article

  • Pass --nogpgcheck to yum via puppet

    - by quickshiftin
    How would one get a --nogpgcheck option through to yum via Puppet? I've tried

        package { 'unsigned-package':
          ensure          => latest,
          install_options => ['--nogpgcheck'],
        }

    and

        package { 'unsigned-package':
          ensure          => latest,
          install_options => ['nogpgcheck'],
        }

    but, looking at the output from an agent run, yum isn't getting that option. As an aside (and maybe the reason it's not working for me), how do I verify that my version of Puppet supports the install_options feature? I'm running Puppet 3.3.0-rc2.
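
    For the aside: install_options is a provider feature, so it only takes effect if the package provider in use advertises it; `puppet describe package --providers` should list each provider and its supported features, which answers that question for your installed version. A hedged sketch of the resource with the provider pinned, so the option isn't silently dropped by a provider that lacks the feature:

        # A sketch: select the yum provider explicitly; install_options only
        # works if this provider version supports the feature.
        package { 'unsigned-package':
          ensure          => latest,
          provider        => yum,
          install_options => ['--nogpgcheck'],
        }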

    Read the article

  • Proxy pass for ActiveMQ

    - by user1172482
    I have an Apache server that I'm trying to use to proxy access to my ActiveMQ admin page. I am able to load the initial landing page properly, but I can't seem to load any of the sub-pages (Queues, Connections, etc.). My ProxyPass rules on the Apache server are the following:

        ProxyPass /foo http://10.5.124.108:8161/admin
        ProxyPassReverse /foo http://10.5.124.108:8161/admin

    The ActiveMQ installation included an activemq-httpd.conf file in /etc/httpd/conf.d/. Proxy connections there are enabled:

        ProxyRequests On
        ProxyVia On
        <Proxy *>
          Allow from all
          Order allow,deny
        </Proxy>
        ProxyPass /admin http://localhost:8161/admin
        ProxyPassReverse /admin http://localhost:8161/admin
        ProxyPass /message http://localhost:8161/admin/send
        ProxyPassReverse /message http://localhost:8161/admin/send

    From what I've read, the ProxyPass rules should be recursive (the rule for /foo should also work for /foo/bar). Is there something else I'm missing that's preventing me from accessing pages beyond the initial admin landing page?
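
    One likely culprit: the ActiveMQ web console generates absolute links back to /admin, so requests for the sub-pages bypass the /foo prefix entirely and never match the proxy rules. A sketch that keeps the external path identical to the backend path (host and port taken from the question):

        # Match the backend's own prefix so the console's absolute /admin
        # links still land on the proxy rather than on the Apache docroot.
        ProxyPass        /admin http://10.5.124.108:8161/admin
        ProxyPassReverse /admin http://10.5.124.108:8161/admin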

    Read the article

  • Modem doesn't seem to pass internet connection to router

    - by Vian Esterhuizen
    I was supplied a Cisco DPC3825 modem by my ISP and use a Cisco E4200 as my main router. Today the Internet just stopped working, out of the blue; I can't think of anything that would have triggered it. It had been working pretty well for the past month, except for a few random blips here and there. After some troubleshooting I realized that a direct wired connection to the modem gives me Internet access, but if I am wired to the router, as I was before, I have no connection. Assuming it was the router, I connected a TP-Link WR841N instead, but it had the exact same problem. Also, connecting to the router via Wi-Fi from multiple devices connects me to the router, but I still can't get access to the Internet. From these tests, it seems that the modem just won't pass the Internet connection through to the router, even though it clearly works when connected directly to my PC.

    What I've tried:

      • A full reset and factory reset on the E4200
      • A full reset on the modem (though I believe my ISP has remote access to the modem, because the passwords and so on are always set back)
      • Having my ISP remotely reconfigure the modem

    What else can I try? What should I do to narrow down the issue? Can you figure out what the problem might be based on this information?

    Read the article

  • Pass the output of ls to diff

    - by joachim
    I have one file that contains a list of files from a server, and a local folder that I compare to that manifest. Currently I run 'ls -1' on the local folder, save the output to a file (listing_local), and then diff that file with listing_server. But is it possible to pass the output of ls directly to the diff command?
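
    If the shell is bash, process substitution feeds the listing to diff without a temporary file. A sketch, with "local_folder" as an illustrative name for the directory being compared:

        # compare the manifest with a live listing of the local folder
        diff listing_server <(ls -1 local_folder)

        # or via a pipe, using '-' to mean "read one side from stdin"
        ls -1 local_folder | diff listing_server -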

    Read the article

  • Nginx rewrite rule with proxy pass

    - by Yoldar-Zi
    I'm trying to implement nginx rewrite rules for the following situation. The request

        http://192.168.64.76/Shep.ElicenseWeb/Public/OutputDocuments.ashx?uinz=12009718&iinbin=860610350635

    should be redirected to

        http://localhost:82/Public/OutputDocuments.ashx?uinz=12009718&iinbin=860610350635

    I tried this with no luck:

        location /Shep.ElicenseWeb/ {
            rewrite ^/Shep.ElicenseWeb/ /$1 last;
            proxy_pass http://localhost:82;
        }

    What is the correct way to perform such a rewrite in nginx?
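
    The posted rewrite references $1 but its pattern captures nothing, and "last" re-runs location matching instead of staying in this block. A sketch with an explicit capture and "break":

        location /Shep.ElicenseWeb/ {
            # capture everything after the prefix and re-issue it as the URI;
            # 'break' keeps processing inside this location, and the query
            # string is carried over automatically
            rewrite ^/Shep.ElicenseWeb/(.*)$ /$1 break;
            proxy_pass http://localhost:82;
        }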

    Read the article

  • HAProxy - HTTP and SSL pass-through config

    - by Bill
    I've currently got an HAProxy load-balancing solution in place and everything is working fine. However, we are having an issue with a very few clients who cannot get to our site via HTTPS (SSL): they can browse the site over HTTP, but as soon as they click on an absolute HTTPS link they are taken to our home page instead. I'm wondering if anyone can look at our config below and see if there's something awry. I believe we are on HAProxy 1.2.17.

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            #log loghost local0 info
            maxconn 6144
            #debug
            #quiet
            user haproxy
            group haproxy

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000
            stats auth # admin password
            stats uri /monitor

        listen webfarm
            # bind :80,:443
            bind :443
            mode tcp
            balance source
            #cookie SERVERID insert indirect
            #option httpclose
            #option forwardfor
            #option httpchk HEAD /check.cfm HTTP/1.0
            server webA 111.10.10.1
            #server webB 111.10.10.2
            server webB 111.10.10.3
            server webC 111.10.10.4

        listen webfarmhttp :80
            mode http
            balance source
            # option httpclose
            option forwardfor
            # option httpchk HEAD /check.cfm HTTP/1.0
            option httpchk /check.cfm
            server webA 111.10.10.1
            #server webB 111.10.10.2
            server webB 111.10.10.3
            server webC 111.10.10.4

        listen monitor :8443
            mode http
            balance roundrobin
            #cookie SERVERID insert indirect
            option httpclose
            option forwardfor
            #option httpchk HEAD /check.txt HTTP/1.0
            #option httpchk HEAD /check.cfm HTTP/1.0
            server webA 111.10.10.1
            server webB 111.10.10.2
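
    Worth noting as a possible cause: in mode tcp HAProxy never sees the HTTP layer, so the only persistence available on the HTTPS listener is balance source. Clients whose requests arrive from changing source IPs (large corporate or mobile proxies) can therefore hop servers mid-session over SSL, which an application will often answer with a redirect to the home page. A hedged sketch of the HTTPS listener with explicit ports and TCP health checks:

        listen webfarm
            bind :443
            mode tcp
            balance source
            # explicit ports and checks; names and IPs as in the question
            server webA 111.10.10.1:443 check
            server webB 111.10.10.3:443 check
            server webC 111.10.10.4:443 check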

    Read the article

  • Apache Proxy Pass and Web Sockets

    - by James
    I'm using Apache with the mod_proxy module to reverse proxy my Node.js application through to port 80, so that we can access it as an internal application. I have a file in sites-enabled which contains this:

        <VirtualHost *:80>
            DocumentRoot /var/www/internal/
            ServerName internal
            ServerAlias internal

            <Directory /var/www/internal/public/>
                Options All
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>

            ProxyRequests off

            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>

            ProxyPass / http://localhost:8080/ retry=0
            ProxyPassReverse / http://localhost:8080/
            ProxyPreserveHost on
            ProxyTimeout 1200

            LogLevel debug
            AllowEncodedSlashes on
        </VirtualHost>

    As I said, our application is written in Node.js, and we're using socket.io to make use of WebSockets, as our application also contains realtime elements. The problem is, mod_proxy doesn't seem to handle WebSockets, and we get errors when trying to use them:

        WebSocket connection to 'ws://bloot/socket.io/1/websocket/nHtTh6ZwQjSXlmI7UMua' failed: Unexpected response code: 502

    How can we fix this issue and keep sockets working, as the only way we can currently get it working is to access the site via ip:port, which we don't want to do? Also, as a side question, how can I get ErrorDocument to work properly? Our error files are stored in /var/www/internal/public/error/, but they seem to get put through the proxy too.
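
    Plain mod_proxy can't upgrade connections; Apache 2.4.5 added mod_proxy_wstunnel for exactly this. A sketch, assuming the socket.io 0.9-style paths visible in the error message (and, for the side question, ProxyErrorOverride is the directive that makes Apache serve its own ErrorDocument pages for proxied error responses):

        # requires: a2enmod proxy_wstunnel  (Apache >= 2.4.5)
        # the WebSocket path must be matched before the catch-all ProxyPass
        ProxyPass /socket.io/1/websocket/ ws://localhost:8080/socket.io/1/websocket/
        ProxyPass / http://localhost:8080/ retry=0
        ProxyPassReverse / http://localhost:8080/
        # side question: serve local ErrorDocument pages instead of passing
        # the backend's error responses straight through
        ProxyErrorOverride On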

    Read the article

  • USB key to pass password in Centos 6

    - by Andrew
    I had a roommate who put a live CD in my desktop and looked around on my machine. I caught him in the act and threw him out. I haven't had a roommate for a while now, and to avoid the live CD issue again I encrypted the hard drive; the machine is running CentOS 6.3. Is there any way I can avoid typing the password in each time, by having a USB key in the machine feed the password to the system? Additionally, is there anything else you can suggest to solve this problem? Thanks.
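
    LUKS supports multiple key slots, so a key file on a USB stick can unlock the disk while your passphrase stays as a fallback. A heavily hedged sketch (device names are illustrative, and on CentOS 6 the fiddly part is making the initramfs mount the stick at boot so the crypttab keyfile path is reachable):

        # create a random key on the stick and enrol it in a spare LUKS slot
        dd if=/dev/urandom of=/mnt/usbkey/luks.key bs=512 count=4
        chmod 0400 /mnt/usbkey/luks.key
        cryptsetup luksAddKey /dev/sda2 /mnt/usbkey/luks.key

        # /etc/crypttab then points the mapping at the key file, e.g.:
        #   luks-root  /dev/sda2  /mnt/usbkey/luks.key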

    Read the article

  • Cisco Pix does not let traffic pass from outside to inside even though ACL permits

    - by Rickard
    I have tried to make my PIX 515 allow traffic from the outside interface to the inside, but despite permitting ACLs it doesn't seem to let traffic through (it is letting traffic out as it should, though). I have tried both of the following:

        access-list acl_in extended permit tcp any host 10.131.73.2 eq www

    and

        access-list acl_in extended permit ip any any

    Neither helps, but I can access 10.131.73.2 from any host on the inside network. This is a single host on the inside that should, every now and then, run an HTTP server for development purposes, so it doesn't need to reside on the DMZ (and as far as I know I can't place it on the DMZ either, as it's in the same subnet as my other IPs). Could I have missed anything? I am using PIX Version 8.0(4). My current running config looks like this: http://pastebin.com/TvRFyDrF. Hope someone can help me get this working.
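
    Two things commonly missed on a PIX: a permit ACL does nothing until it is bound to the interface with access-group, and inbound connections also need a (static) translation. A sketch, assuming the outside interface is named "outside":

        ! translate the host 1:1 so outside clients can reach it (pre-8.3 syntax)
        static (inside,outside) 10.131.73.2 10.131.73.2 netmask 255.255.255.255
        ! then bind the ACL to the outside interface, inbound
        access-group acl_in in interface outside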

    Read the article

  • Apache2 webserver authentication - allow an IP range without a password prompt, ask others for credentials

    - by Mikee
    I have a (maybe silly) question regarding the Apache2 webserver and security. I am trying to achieve this: users connecting from 192.168.1.24 are not prompted for a password and are allowed in; others are asked for a username and password, and connect if these are correct. I am trying to do this for the whole directory /var/www. Whether I put the code into an .htaccess file or into httpd.conf, it doesn't work for me:

        Order deny,allow
        Deny from all
        AuthName "PassRequest"
        AuthType Basic
        AuthUserFile /var/.htpasswd
        Require valid-user
        Allow from 192.168.1.24
        Satisfy Any

    If I try to connect to the page, I am allowed from both the allowed IP and any other. If I remove the "Satisfy Any" line, I am prompted for a password; if I also remove the password and try to connect from a different IP, I am NOT refused... Is there some module that needs to be activated, or why is the IP directive being skipped? Does it need to be put in every folder, or is /var/www/.htaccess enough? Can I just put it in httpd.conf instead? I've spent the last 4 hours trying to google why it is acting like that. Any help will be highly appreciated :-)
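
    For Apache 2.2 the combination in the question is essentially the right one; the usual snags are directive order, the enclosing context, and AllowOverride. A sketch in httpd.conf form (if kept in .htaccess instead, the <Directory> for /var/www in the main config must grant at least "AllowOverride AuthConfig Limit" or the file is ignored):

        <Directory /var/www>
            AuthType Basic
            AuthName "PassRequest"
            AuthUserFile /var/.htpasswd
            Require valid-user

            Order deny,allow
            Deny from all
            Allow from 192.168.1.24
            # either a matching IP *or* valid credentials is enough
            Satisfy Any
        </Directory>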

    Read the article

  • Windows HTTP proxy client to pass service requests to VPN

    - by Chris
    I've got access to a network via a CheckPoint VPN (Windows client). The problem is, I have a Linux box that needs to talk to web services whose servers are inside the VPN. So far we have been unable to connect the Linux box to the VPN (and I'm not trying to solve that problem at the moment). I'm wondering if, temporarily, I can set up a proxy server on a Windows (XP) box to shuttle HTTP requests back and forth? If so, what would be a good application for this (hopefully free/open-source)? TIA

    Read the article

  • Change OpenVZ route to pass through ip failover

    - by Kevin Campion
    I have a dedicated server with its own IP, and another IP (failover) that points to the first. I would like to change the gateway of a Proxmox virtual machine (OpenVZ) running on this dedicated server, so that it goes out through the failover IP rather than through the IP of the main host. Once connected to the virtual machine, when I do a traceroute:

        VE# traceroute www.google.fr
        traceroute to www.google.fr (209.85.229.104), 30 hops max, 60 byte packets
         1  MY_SERVER_NAME.ovh.net (xxx.xxx.xxx.xxx FIRST_IP_MAIN_SERVER)  0.021 ms  0.010 ms  0.009 ms

    the first line shows the IP of the main host; I would like the traceroute to show the failover IP instead.

        VE# route
        Kernel IP routing table
        Destination   Gateway     Genmask          Flags Metric Ref  Use Iface
        192.0.2.1     *           255.255.255.255  UH    0      0    0   venet0
        default       192.0.2.1   0.0.0.0          UG    0      0    0   venet0

    On the host, iptables shows:

        HOST# iptables -t nat -L
        Chain POSTROUTING (policy ACCEPT)
        target      prot opt source        destination
        MASQUERADE  all  --  anywhere      anywhere
        MASQUERADE  all  --  anywhere      anywhere
        SNAT        tcp  --  anywhere      10.10.101.2  tcp dpt:www state NEW,RELATED,ESTABLISHED,UNTRACKED to:SECOND_IP_FAILOVER
        SNAT        all  --  10.10.101.2   anywhere     to:SECOND_IP_FAILOVER

    10.10.101.2 is the virtual machine's IP (interface venet0). Any ideas?
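
    One thing that stands out in the nat table: rules match in order, and the blanket MASQUERADE entries sit above the SNAT rule, so the container's packets are masqueraded to the main IP before the SNAT is ever consulted. A sketch of the reordering (SECOND_IP_FAILOVER as a placeholder, per the question):

        # move the container's SNAT to the top of POSTROUTING so it wins
        iptables -t nat -D POSTROUTING -s 10.10.101.2 -j SNAT --to-source SECOND_IP_FAILOVER
        iptables -t nat -I POSTROUTING 1 -s 10.10.101.2 -j SNAT --to-source SECOND_IP_FAILOVER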

    Read the article

  • PPTP VPN connection unable to pass through Linux iptables

    - by user221844
    I have set up a Windows VPN server behind a Linux (Ubuntu) box that works as firewall and proxy server. Now I want people from outside to be able to connect to the VPN server, but the connection is not being established and the client gets error 619. I have researched the problem and it seems to be a firewall issue. What should I do to let the connection through the firewall? Below is the information about my setup:

        Firewall-External-IF-IP: 172.16.1.100
        Firewall-LAN-IF-IP: 192.168.1.1
        VPN-Server-IP: 192.168.1.10

    And my iptables rules:

        # Generated by iptables-save v1.4.12 on Thu May 29 12:40:18 2014
        *filter
        :INPUT ACCEPT [162000:140437619]
        :FORWARD ACCEPT [23282:27196133]
        :OUTPUT ACCEPT [185778:143961739]
        :LOGGING - [0:0]
        -A INPUT -p gre -j ACCEPT
        -A INPUT -s 192.168.1.10/32 -p tcp -m tcp --sport 1723 -j ACCEPT
        -A INPUT -s 192.168.1.10/32 -p udp -m udp --sport 1723 -j ACCEPT
        -A FORWARD -s 192.168.1.0/24 -o EXT_IF -j ACCEPT
        -A FORWARD -s 192.168.1.0/24 -i EXT_IF -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A FORWARD -d 192.168.1.10/32 -i EXT_IF -o INT_IF -p tcp -m tcp --dport 1723 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
        -A FORWARD -s 192.168.1.10/32 -i INT_IF -o EXT_IF -p tcp -m tcp --sport 1723 -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A FORWARD -d 192.168.1.10/32 -i EXT_IF -o INT_IF -p gre -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
        -A FORWARD -s 192.168.1.10/32 -i INT_IF -o EXT_IF -p gre -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A OUTPUT -p gre -j ACCEPT
        -A OUTPUT -d 192.168.1.10/32 -p tcp -m tcp --dport 1723 -j ACCEPT
        -A OUTPUT -d 192.168.1.10/32 -p udp -m udp --dport 1723 -j ACCEPT
        COMMIT
        # Completed on Thu May 29 12:40:18 2014
        # Generated by iptables-save v1.4.12 on Thu May 29 12:40:18 2014
        *nat
        :PREROUTING ACCEPT [17865:1053739]
        :INPUT ACCEPT [5490:357281]
        :OUTPUT ACCEPT [3723:223677]
        :POSTROUTING ACCEPT [3726:223870]
        -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128
        -A PREROUTING -d 172.16.1.100/32 -i EXT_IF -p tcp -m tcp --dport 1723 -j DNAT --to-destination 192.168.1.10
        -A PREROUTING -d 172.16.1.100/32 -i EXT_IF -p gre -j DNAT --to-destination 192.168.1.10
        -A POSTROUTING -s 192.168.1.0/24 -o EXT_IF -j MASQUERADE
        COMMIT
        # Completed on Thu May 29 12:40:18 2014
        # Generated by iptables-save v1.4.12 on Thu May 29 12:40:18 2014
        *mangle
        :PREROUTING ACCEPT [22695965:17811993005]
        :INPUT ACCEPT [13818180:11522330171]
        :FORWARD ACCEPT [8527694:6271564562]
        :OUTPUT ACCEPT [14748508:11899678536]
        :POSTROUTING ACCEPT [23271280:18170828012]
        COMMIT
        # Completed on Thu May 29 12:40:18 2014

    Hope I find the solution here... :(
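
    Beyond the rules themselves, PPTP through a NATing Linux box generally needs the PPTP connection-tracking/NAT helpers loaded; without them GRE isn't tracked through the DNAT and clients typically fail with errors like 619. A sketch:

        # load the PPTP helpers (kernel module names on Ubuntu)
        modprobe nf_conntrack_pptp
        modprobe nf_nat_pptp

        # make them persistent across reboots
        echo nf_conntrack_pptp >> /etc/modules
        echo nf_nat_pptp >> /etc/modules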

    Read the article

  • Pass HAProxy healthcheck requests as User-agent "LB-Check" to the backend webservers(apache)

    - by Joseph
    I have HAProxy set up in front of webservers (Apache) for load balancing. Healthchecks for these webservers are also configured in HAProxy:

        option httpchk HEAD /healthcheck.txt HTTP/1.0

    Is it possible to send these healthcheck requests to the backend webservers with an "LB-Check" User-agent, or some other marker, so that I can distinguish them from other log entries? I don't want to go for the "dontlog" option, as I don't want to miss these entries.
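
    httpchk accepts a full request string, so extra headers can ride along after an escaped CRLF. A sketch that tags the checks with an LB-Check user agent:

        # spaces inside the header value must be escaped with a backslash
        option httpchk HEAD /healthcheck.txt HTTP/1.0\r\nUser-Agent:\ LB-Check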

    Read the article

  • How to get nginx to pass HTTP_AUTHORIZATION header to Apache

    - by codeinthehole
    I'm using nginx as a reverse proxy to an Apache server that uses HTTP auth. For some reason, I can't get the HTTP_AUTHORIZATION header through to Apache; it seems to get filtered out by nginx, and hence no requests can authenticate. Note that the Basic auth is dynamic, so I don't want to hard-code it in my nginx config. My nginx config is:

        server {
            listen 80;
            server_name example.co.uk;
            access_log /var/log/nginx/access.cdk-dev.tangentlabs.co.uk.log;
            gzip on;

            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_read_timeout 120;

            location / {
                proxy_pass http://localhost:81/;
            }

            location ~* \.(jpg|png|gif|jpeg|js|css|mp3|wav|swf|mov|doc|xls|ppt|docx|pptx|xlsx|swf)$ {
                if (!-f $request_filename) {
                    break;
                    proxy_pass http://localhost:81;
                }
                root /var/www/example;
            }
        }

    Anyone know why this is happening?

    Update - turns out the problem was something I had overlooked in my original question: mod_wsgi. The site in question is a Django site, and it turns out that Apache does get the auth variables passed through; however, mod_wsgi filters them out. The resolution is to use:

        WSGIPassAuthorization On

    See http://www.arnebrodowski.de/blog/508-Django,-mod_wsgi-and-HTTP-Authentication.html for more details.

    Read the article

  • Apache proxy pass in nginx

    - by summerbulb
    I have the following configuration in Apache:

        RewriteEngine On
        # APP
        ProxyPass /abc/ http://remote.com/abc/
        ProxyPassReverse /abc/ http://remote.com/abc/
        # APP2
        ProxyPass /efg/ http://remote.com/efg/
        ProxyPassReverse /efg/ http://remote.com/efg/

    I am trying to have the same configuration in nginx. After reading some links, this is what I have:

        server {
            listen 8081;
            server_name localhost;
            proxy_redirect http://localhost:8081/ http://remote.com/;

            location ^~ /abc/ {
                proxy_set_header X-Forwarded-Host $host;
                proxy_set_header X-Forwarded-Server $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://remote.com/abc/;
            }

            location ^~ /efg/ {
                proxy_set_header X-Forwarded-Host $host;
                proxy_set_header X-Forwarded-Server $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://remote.com/efg/;
            }
        }

    I already have the following configuration:

        server {
            listen 8080;
            server_name localhost;

            location / {
                root html;
                index index.html index.htm;
            }

            location ^~ /myAPP {
                alias path/to/app;
                index main.html;
            }

            location ^~ /myAPP/images {
                alias another/path/to/images;
                autoindex on;
            }
        }

    The idea here is to overcome a same-origin-policy problem. The main pages are on localhost:8080, but we need Ajax calls to go to http://remote.com/abc. Both domains are under my control. Using the above configuration, the Ajax calls either don't reach the remote server or get cut off because of the cross origin. The above solution worked in Apache and isn't working in nginx, so I am assuming it's a configuration problem. I think there is an implicit question here: should I have two server declarations, or should I somehow merge them into one?

    EDIT: Added some more information.

    EDIT2: I've moved all the proxy_pass configuration into the main server declaration and changed all the Ajax calls to go through port 8080. I am now getting a new error: 502 Connection reset by peer. Wireshark shows packets going out to http://remote.com with a bad IP header checksum.
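
    A detail worth checking in the proxying server block: proxy_redirect is nginx's analogue of ProxyPassReverse, rewriting Location headers coming back from the upstream, so its arguments run upstream-first - the posted directive has them reversed. A sketch of one location in the Apache-equivalent shape:

        location /abc/ {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://remote.com/abc/;
            # map upstream redirects back to this server (ProxyPassReverse):
            # first argument is what the upstream sends, second the replacement
            proxy_redirect http://remote.com/abc/ /abc/;
        }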

    Read the article

  • How to pass an argument to a Windows Scheduled Task with spaces in it

    - by Rodney
    I need to set up a Windows Scheduled Task. It runs a program that accepts one parameter/argument, which is a path and can contain spaces. My scheduled task does not work: it "breaks" the parameter up at the first space. If I run the command in the Command Prompt, I can just wrap the argument in double quotes and it works fine; however, this does not work in the Scheduled Task UI, e.g.:

        C:\Program Files\xyz\FTP File Transfer\FTPFileTransferTask.exe "C:\Program Files\xyz\The Interface\Folder Path"

    I have tried wrapping the argument with "", '', [] and (), and have tried filling in the spaces with %20, ~1, etc., with no luck. I know of one solution - make a bat file and use quotes around the argument - but I don't want to add more complexity. I tried it on Windows 7 and Windows Server 2008, and both failed. There seems to be no discussion of this anywhere?
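
    When the task is created from the command line with schtasks rather than through the UI, the inner quotes can be escaped with backslashes, which survives both a program path and an argument containing spaces. A sketch using the paths from the question (task name and schedule are illustrative):

        :: \" keeps the argument's quotes intact inside the /tr string
        schtasks /create /tn "FTP File Transfer" /sc daily /st 01:00 ^
          /tr "\"C:\Program Files\xyz\FTP File Transfer\FTPFileTransferTask.exe\" \"C:\Program Files\xyz\The Interface\Folder Path\""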

    Read the article
