Search Results

Search found 29656 results on 1187 pages for 'off side rule'.

Page 149/1187 | < Previous Page | 145 146 147 148 149 150 151 152 153 154 155 156  | Next Page >

  • Windows Azure Use Case: New Development

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx Description: Computing platforms evolve over time. Originally computers were directed by hardware wiring - that is, the “code” was the path of the wiring that directed an electrical signal from one component to another, or in some cases a physical switch controlled the path. From there software was developed, first in a very low-level machine language, then when compilers were created, computer languages could more closely mimic written statements. These language statements can be compiled into the lower-level machine language still used by computers today. Microprocessors replaced logic circuits, sometimes with fewer instructions (Reduced Instruction Set Computing, RISC) and sometimes with more instructions (Complex Instruction Set Computing, CISC). The reason this history is important is that along each technology advancement, computer code has adapted. Writing software for a RISC architecture is significantly different than developing for a CISC architecture. And moving to a Distributed Architecture like Windows Azure also has specific implementation details that our code must follow. But why make a change? As I’ve described, we need to make the change to our code to follow advances in technology. There’s no point in change for its own sake, but as a new paradigm offers benefits to our users, it’s important for us to leverage those benefits where it makes sense. That’s most often done in new development projects. It’s a far simpler task to take a new project and adapt it to Windows Azure than to try and retrofit older code designed in a previous computing environment. We can still use the same coding languages (.NET, Java, C++) to write code for Windows Azure, but we need to think about the architecture of that code on a new project so that it runs in the most efficient, cost-effective way in a Distributed Architecture. As we receive new requests from the organization for new projects, a distributed architecture paradigm belongs in the decision matrix for the platform target. Implementation: When you are designing new applications for Windows Azure (or any distributed architecture) there are many important details to consider. But at the risk of over-simplification, there are three main concepts to learn and architect within the new code: Stateless Programming - Stateless programming is a prime concept within distributed architectures. Rather than each server owning the complete processing cycle, the information from an operation that needs to be retained (the “state”) should be persisted to another location (like storage) common to all machines involved in the process.  An interesting learning process for Stateless Programming (although not unique to this language type) is to learn Functional Programming. Server-Side Processing - Along with developing using a Stateless Design, the closer you can locate the code processing to the data, the less expensive and faster the code will run. When you control the network layer, this is less important, since you can send vast amounts of data between the server and client, allowing the client to perform processing. In a distributed architecture, you don’t always own the network, so its performance is unpredictable.
Also, you may not be able to control the platform the user is on (such as a smartphone, PC or tablet), so it’s imperative to deliver only results and graphical elements where possible.  Token-Based Authentication - Also called “Claims-Based Authorization”, this code practice means instead of allowing a user to log on once and then running code in that context, a more granular level of security is used. A “token” or “claim”, often represented as a Certificate, is sent along for a series of requests, or even a single request. In other words, every call to the code is authenticated against the token, rather than allowing a user free rein within the code call. While this is more work initially, it can bring a greater level of security, and it is far more resilient to disconnections. Resources: See the references of “Nondistributed Deployment” and “Distributed Deployment” at the top of this article for more information with graphics:  http://msdn.microsoft.com/en-us/library/ee658120.aspx  Stack Overflow has a good thread on functional programming: http://stackoverflow.com/questions/844536/advantages-of-stateless-programming  Another good discussion on Stack Overflow on server-side processing is here: http://stackoverflow.com/questions/3064018/client-side-or-server-side-processing Claims Based Authorization is described here: http://msdn.microsoft.com/en-us/magazine/ee335707.aspx
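    A minimal sketch of the stateless idea described above, in Java: instead of keeping an operation's state in server memory, every request carries a token and the handler loads and saves its state in shared storage, so any machine in the pool can serve the next call. The StateStore interface and the names here are illustrative, not part of any Azure API.

        // Illustrative only - StateStore stands in for any shared store (table/blob storage, a cache, a database).
        interface StateStore {
            String load(String token);           // returns null when no state exists yet
            void save(String token, String state);
        }

        class StatelessOrderHandler {
            private final StateStore store;

            StatelessOrderHandler(StateStore store) { this.store = store; }

            // Each call locates its context from the token it carries;
            // nothing about the caller is remembered between calls on this machine.
            String handle(String token, String requestPayload) {
                String state = store.load(token);                  // fetch prior state, if any
                String newState = (state == null ? "" : state) + requestPayload;
                store.save(token, newState);                       // persist before replying
                return "processed:" + newState.length();
            }
        }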

    Read the article

  • Personal Technology – Excel Tip: Comparing Excel Files

    - by Pinal Dave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago. Working on various versions from SQL Server 7.0, Oracle 7.3 and other database technologies – he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod’s own voice. I have been writing about Excel Tips on my blog and thought it would be great to share one interesting tip here as a guest blog. Assume a situation where you want to compare multiple Excel files. Here is a typical scenario I have encountered as a common activity. Assume you are sending an Excel file with tons of data, formulae and multiple sheets. Now you are requesting your colleague to validate the file and if required change content for correctness. After receiving the file from your colleague, now you want to know what changes were made by this person to your document. Now here is a cool new addition to Excel 2013 that can help you achieve this task. To get to this option, click the INQUIRE Tab. In case you don’t have the INQUIRE Tab, check the Options using INQUIRE blog post. In that post, we discuss all the other options of the INQUIRE tab. Once you are on the INQUIRE Tab, select the “Compare Files” button as shown in the figure above. This brings a dialog as below. If you are on Windows 8 or Windows 7 OS, search for an application called “Spreadsheet Compare 2013”. Ultimately both the options lead us to the same application. If you are using the stand-alone app, once the App initializes, click the “Compare files” option from the toolbar. Make sure to give two different Excel files as shown in the figure above. After selecting the Excel Sheets, you can see the Compare tool has a number of other options to play with. We will talk about some of them later in this post. Just below our toolbar is a colorful side-by-side comparison of both our Excel sheets. We can also see the various Tabs from each file. There is a meaning for each of our color codings, which will be discussed next. As you saw above, the color coding has a meaning. For example, the bottom pane lists each of the color codings and, most importantly, each of the changes as compared side-by-side. The detailed information shown below can be exported using the “Export Results” option from the toolbar as a separate Excel Workbook or can be copied to the clipboard to be used later. The final piece of the puzzle is to show a graphical view of these difference results based on each category. We cannot drill down per se, but this is a great way to know that the maximum changes seem to be based on “Cell Formats” and then a few “Calculated Values” have changed. The INQUIRE option and Spreadsheet Compare 2013 tool are part of Excel 2013. So as you explore using the new version of Excel, there are many such hidden features that are worth exploring. Do let us know if you enjoyed learning a new feature today and I hope you will play around with this feature in your day-to-day challenges when working with Excel files. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: Excel, Personal Technology

    Read the article

  • Best way to Draw a cube for 3D Picking on a specific face

    - by Kenneth Bray
    Currently I am drawing a cube for a game that I am making and the cube draw method is below. My question is, what is the best way to draw a cube and to be able to easily find the face that the cursor is over? My draw method works just fine, but I am getting ready to start to add picking (this will be used to mold the cubes into other shaps), and would like to know the best way to find a face of the cube. public void Draw() { // center point posX, posY, posZ float radius = size / 2; //top glPushMatrix(); glBegin(GL_QUADS); { glColor3f(1.0f,0.0f,0.0f); // red glVertex3f(posX + radius, posY + radius, posZ - radius); glVertex3f(posX - radius, posY + radius, posZ - radius); glVertex3f(posX - radius, posY + radius, posZ + radius); glVertex3f(posX + radius, posY + radius, posZ + radius); } glEnd(); glPopMatrix(); //bottom glPushMatrix(); glBegin(GL_QUADS); { glColor3f(1.0f,1.0f,0.0f); // ?? color glVertex3f(posX + radius, posY - radius, posZ + radius); glVertex3f(posX - radius, posY - radius, posZ + radius); glVertex3f(posX - radius, posY - radius, posZ - radius); glVertex3f(posX + radius, posY - radius, posZ - radius); } glEnd(); glPopMatrix(); //right side glPushMatrix(); glBegin(GL_QUADS); { glColor3f(1.0f,0.0f,1.0f); // ?? color glVertex3f(posX + radius, posY + radius, posZ + radius); glVertex3f(posX + radius, posY - radius, posZ + radius); glVertex3f(posX + radius, posY - radius, posZ - radius); glVertex3f(posX + radius, posY + radius, posZ - radius); } glEnd(); glPopMatrix(); //left side glPushMatrix(); glBegin(GL_QUADS); { glColor3f(0.0f,1.0f,1.0f); // ?? color glVertex3f(posX - radius, posY + radius, posZ - radius); glVertex3f(posX - radius, posY - radius, posZ - radius); glVertex3f(posX - radius, posY - radius, posZ + radius); glVertex3f(posX - radius, posY + radius, posZ + radius); } glEnd(); glPopMatrix(); //front side glPushMatrix(); glBegin(GL_QUADS); { glColor3f(0.0f,0.0f,1.0f); // blue glVertex3f(posX + radius, posY + radius, posZ + radius); glVertex3f(posX - radius, posY + radius, posZ + radius); glVertex3f(posX - radius, posY - radius, posZ + radius); glVertex3f(posX + radius, posY - radius, posZ + radius); } glEnd(); glPopMatrix(); //back side glPushMatrix(); glBegin(GL_QUADS); { glColor3f(0.0f,1.0f,0.0f); // green glVertex3f(posX + radius, posY - radius, posZ - radius); glVertex3f(posX - radius, posY - radius, posZ - radius); glVertex3f(posX - radius, posY + radius, posZ - radius); glVertex3f(posX + radius, posY + radius, posZ - radius); } glEnd(); glPopMatrix(); }
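    One common way to answer the picking half of this question is to cast a ray from the cursor into the scene (gluUnProject gives you the two points needed to build it) and intersect that ray with the cube as an axis-aligned box; the slab the ray enters last is the face under the cursor. A sketch of that test - it assumes you already have the ray in world space, and the face encoding is only illustrative:

        // Returns the face hit first by the ray, encoded 0..5 as -X,+X,-Y,+Y,-Z,+Z, or -1 for a miss.
        static int pickFace(float ox, float oy, float oz,   // ray origin
                            float dx, float dy, float dz,   // ray direction (need not be normalized)
                            float cx, float cy, float cz,   // cube centre (posX, posY, posZ)
                            float radius) {
            float[] o = {ox, oy, oz}, d = {dx, dy, dz}, c = {cx, cy, cz};
            float tMin = Float.NEGATIVE_INFINITY, tMax = Float.POSITIVE_INFINITY;
            int hitAxis = -1, hitSign = 0;
            for (int axis = 0; axis < 3; axis++) {
                float lo = c[axis] - radius, hi = c[axis] + radius;
                if (Math.abs(d[axis]) < 1e-6f) {                  // ray parallel to this pair of faces
                    if (o[axis] < lo || o[axis] > hi) return -1;
                    continue;
                }
                float t1 = (lo - o[axis]) / d[axis];
                float t2 = (hi - o[axis]) / d[axis];
                int sign = (t1 < t2) ? -1 : +1;                   // which of the two faces the ray enters through
                float tNear = Math.min(t1, t2), tFar = Math.max(t1, t2);
                if (tNear > tMin) { tMin = tNear; hitAxis = axis; hitSign = sign; }
                tMax = Math.min(tMax, tFar);
                if (tMin > tMax) return -1;                       // slabs never overlap: no intersection
            }
            if (tMax < 0) return -1;                              // the cube is entirely behind the ray
            return hitAxis * 2 + (hitSign > 0 ? 1 : 0);
        }

    An alternative that avoids the math entirely is colour picking: render each face in a unique flat colour to the back buffer and read the pixel under the cursor with glReadPixels.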

    Read the article

  • 2D Tile based Game Collision problem

    - by iNbdy
    I've been trying to program a tile based game, and I'm stuck at the collision detection. Here is my code (not the best ^^): void checkTile(Character *c, int **map) { int x1,x2,y1,y2; /* Character position in the map */ c->upY = (c->y) / TILE_SIZE; // Top left corner c->downY = (c->y + c->h) / TILE_SIZE; // Bottom left corner c->leftX = (c->x) / TILE_SIZE; // Top right corner c->rightX = (c->x + c->w) / TILE_SIZE; // Bottom right corner x1 = (c->x + 10) / TILE_SIZE; // 10px from left side point x2 = (c->x + c->w - 10) / TILE_SIZE; // 10px from right side point y1 = (c->y + 10) / TILE_SIZE; // 10px from top side point y2 = (c->y + c->h - 10) / TILE_SIZE; // 10px from bottom side point /* Top */ if (map[c->upY][x1] > 2 || map[c->upY][x2] > 2) c->topCollision = 1; else c->topCollision = 0; /* Bottom */ if ((map[c->downY][x1] > 2 || map[c->downY][x2] > 2)) c->downCollision = 1; else c->downCollision = 0; /* Left */ if (map[y1][c->leftX] > 2 || map[y2][c->leftX] > 2) c->leftCollision = 1; else c->leftCollision = 0; /* Right */ if (map[y1][c->rightX] > 2 || map[y2][c->rightX] > 2) c->rightCollision = 1; else c->rightCollision = 0; } That calculates 8 collision points My moving function is like that: void movePlayer(Character *c, int **map) { if ((c->dirX == LEFT && !c->leftCollision) || (c->dirX == RIGHT && !c->rightCollision)) c->x += c->vx; if ((c->dirY == UP && !c->topCollision) || (c->dirY == DOWN && !c->downCollision)) c->y += c->vy; checkPosition(c, map); } and the checkPosition: void checkPosition(Character *c, int **map) { checkTile(c, map); if (c->downCollision) { if (c->state != JUMPING) { c->vy = 0; c->y = (c->downY * TILE_SIZE - c->h); } } if (c->leftCollision) { c->vx = 0; c->x = (c->leftX) * TILE_SIZE + TILE_SIZE; } if (c->rightCollision) { c->vx = 0; c->x = c->rightX * TILE_SIZE - c->w; } } This works, but sometimes, when the player is landing on ground, right and left collision points become equal to 1. So it's as if there were collision coming from left or right. Does anyone know why this is doing this?

    Read the article

  • Rotating a cube using jBullet collisions

    - by Kenneth Bray
    How would one go about rotating/flipping a cube with the physics of jBullet? Here is my Draw method for my cube object: public void Draw() { // center point posX, posY, posZ float radius = .25f;//size / 2; glPushMatrix(); glBegin(GL_QUADS); //top { glColor3f(5.0f,1.0f,5.0f); // white glVertex3f(posX + radius, posY + radius, posZ - radius); glVertex3f(posX - radius, posY + radius, posZ - radius); glVertex3f(posX - radius, posY + radius, posZ + radius); glVertex3f(posX + radius, posY + radius, posZ + radius); } //bottom { glColor3f(1.0f,1.0f,0.0f); // ?? color glVertex3f(posX + radius, posY - radius, posZ + radius); glVertex3f(posX - radius, posY - radius, posZ + radius); glVertex3f(posX - radius, posY - radius, posZ - radius); glVertex3f(posX + radius, posY - radius, posZ - radius); } //right side { glColor3f(1.0f,0.0f,1.0f); // ?? color glVertex3f(posX + radius, posY + radius, posZ + radius); glVertex3f(posX + radius, posY - radius, posZ + radius); glVertex3f(posX + radius, posY - radius, posZ - radius); glVertex3f(posX + radius, posY + radius, posZ - radius); } //left side { glColor3f(0.0f,1.0f,1.0f); // ?? color glVertex3f(posX - radius, posY + radius, posZ - radius); glVertex3f(posX - radius, posY - radius, posZ - radius); glVertex3f(posX - radius, posY - radius, posZ + radius); glVertex3f(posX - radius, posY + radius, posZ + radius); } //front side { glColor3f(0.0f,0.0f,1.0f); // blue glVertex3f(posX + radius, posY + radius, posZ + radius); glVertex3f(posX - radius, posY + radius, posZ + radius); glVertex3f(posX - radius, posY - radius, posZ + radius); glVertex3f(posX + radius, posY - radius, posZ + radius); } //back side { glColor3f(0.0f,1.0f,0.0f); // green glVertex3f(posX + radius, posY - radius, posZ - radius); glVertex3f(posX - radius, posY - radius, posZ - radius); glVertex3f(posX - radius, posY + radius, posZ - radius); glVertex3f(posX + radius, posY + radius, posZ - radius); } glEnd(); glPopMatrix(); Update(); } This is my update method for the cube position: public void Update() { Transform trans = new Transform(); cubeRigidBody.getMotionState().getWorldTransform(trans); posX = trans.origin.x; posY = trans.origin.y; posZ = trans.origin.z; Quat4f outRot = new Quat4f(); trans.getRotation(outRot); rotX = outRot.x; rotY = outRot.y; rotZ = outRot.z; rotW = outRot.w; } I am assuming I need to use glrotatef, but it does not seem to work at all when I try that.. this is how I have tried to rotate the cubes: GL11.glRotatef(rotW, rotX, 0.0f, 0.0f); GL11.glRotatef(rotW, 0.0f, rotY, 0.0f); GL11.glRotatef(rotW, 0.0f, 0.0f, rotZ);
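    Since Update() already extracts the orientation quaternion, one workable approach is to stop baking posX/posY/posZ into the vertices, draw the cube centred on the origin, and let OpenGL apply the translation and rotation. glRotatef wants an axis and an angle, so the quaternion has to be converted first - a rough sketch (the field names are the ones from the post, the rest is illustrative):

        // Inside Draw(), between glPushMatrix() and glBegin(GL_QUADS),
        // with every glVertex3f then using just ±radius instead of posX/posY/posZ ± radius:
        GL11.glTranslatef(posX, posY, posZ);               // move to the body's position first

        float s = (float) Math.sqrt(1.0f - rotW * rotW);   // quaternion -> axis-angle
        if (s > 1e-6f) {
            float angleDeg = (float) Math.toDegrees(2.0 * Math.acos(rotW));
            GL11.glRotatef(angleDeg, rotX / s, rotY / s, rotZ / s);
        }
        // ...emit the six quads around (0,0,0), then glPopMatrix() as before.

    The three separate glRotatef calls in the question don't work because a quaternion isn't three independent axis rotations; converting it once to axis-angle (or asking jBullet's Transform for the full matrix via getOpenGLMatrix and loading that with glMultMatrix) keeps the orientation intact.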

    Read the article

  • Introduction to WebCenter Personalization: “The Conductor”

    - by Steve Pepper
    There are some new faces in the town of WebCenter with the latest 11g PS3 release.  A new component has introduced itself as "Oracle WebCenter Personalization", a.k.a. WCP, to simplify delivery of a personalized experience and content to end users.  This posting reviews one of the primary components within WCP: "The Conductor". The Conductor: This ain't just an ordinary cloud... One of the founding principles behind WebCenter Personalization was to provide an open client-side API that remains independent of the technology invoking it, in addition to independence from the architecture running it.  The Conductor delivers this, and much, much more. The Conductor is the engine behind WebCenter Personalization that allows flow-based documents, called "Scenarios", to be managed and executed on the server side through a well-published RESTful API.  The Conductor also supports an extensible model for custom provider integration that can be easily invoked within a Scenario to promote seamless integration with existing business assets. Introducing the Scenario Conductor Scenarios are declarative offline-authored documents using the custom Personalization JDeveloper bundle included with WebCenter.  A Scenario contains one (or more) statements that can: Create variables that are scoped to the current execution context Iterate over collections, or loop until a specific condition is met Execute one or more statements when a condition is met Invoke other scenarios that exist within the same namespace Invoke a data provider that integrates with custom applications Once a variable is assigned within the Scenario's execution context, it can be referenced anywhere within the same Scenario using the common Expression Language syntax used in J2EE web containers. Scenarios are then published and tested to the Integrated WebLogic Server domain, or published remotely to other domains running WebCenter Personalization. Various Client-side Models The Conductor server API is built upon RESTful services that support a wide variety of clients able to communicate over HTTP.  The Conductor supports the following client-side models: REST:  Popular browser-based languages can be used to manage and execute Conductor Scenarios.  There are other public methods to retrieve configured provider metadata that can be used by custom applications. The Conductor currently supports XML and JSON for its API syntax. Java: WebCenter Personalization delivers a robust and lightweight Java client with the popular Jersey framework as its foundation.  It has never been easier to write a remote Java client to manage remote RESTful services. Expression Language (EL): Allows the results of Scenario execution to control your user interface or embed personalized content using the session-scoped managed bean.  The EL client can also be used in straight JSP pages with minimal configuration. Extensible Provider Framework The Conductor supports a pluggable provider framework for integrating custom code with Scenario execution.  There are two types of providers supported by the Conductor: Function Provider: Function Providers are simple annotated Java classes with static methods that are meant to serve as utilities.  Some common uses would include: object creation or instantiation, data transformation, and the like.  Function Providers can be invoked using the common EL syntax from variable assignments, conditions, and loops.
For example:  ${myUtilityClass:doStuff(arg1,arg2)} If you are familiar with EL Functions, Function Providers are based on the same concept. Data Provider: Like Function Providers, Data Providers are annotated Java classes, but they must adhere to a much stricter object model.  Data Providers have access to a wealth of Conductor services, such as: access to a namespace-scoped configuration API that can be managed by Oracle Enterprise Manager, the Scenario execution context for expression resolution, and more.  Oracle ships with three out-of-the-box data providers that support integration with: Standardized Content Servers (CMIS),  Federated Profile Properties through the Properties Service, and WebCenter Activity Graph. Useful References If you are looking to immediately get started writing your own application using WebCenter Personalization Services, you will find the following references helpful in getting you on your way: Personalizing WebCenter Applications Authoring Personalized Scenarios in JDeveloper Using Personalization APIs Externally Implementing and Calling Function Providers Implementing and Calling Data Providers

    Read the article

  • F# Objects – Integration with the other .Net Languages – Part 2

    - by MarkPearl
    So in part one of my posting I covered the real basics of object creation. Today I will hopefully dig a little deeper… My expert F# book brings up an interesting point – properties in F# are just syntactic sugar for method calls. This makes sense… for instance assume I had the following object with the property exposed called Firstname. type Person(Firstname : string, Lastname : string) = member v.Firstname = Firstname I could extend the Firstname property with the following code and everything would be hunky dory… type Person(Firstname : string, Lastname : string) = member v.Firstname = Console.WriteLine("Side Effect") Firstname   All that this would do is each time I use the property Firstname, I would see the side effect printed to the screen saying “Side Effect”. Member methods have a very similar look & feel to properties, in fact the only difference really is that you declare that parameters are being passed in. type Person(Firstname : string, Lastname : string) = member v.FullName(middleName) = Firstname + " " + middleName + " " + Lastname   In the code above, FullName requires the parameter middleName, and if viewed from another project in C# would show as a method and not a property. Precomputation Optimizations Okay, so something that is obvious once you think of it but that poses an interesting side effect of mutable value holders is pre-computation of results. All it is, is a slight difference in code but can result in quite a huge saving in performance. Basically pre-computation means you would not need to compute a value every time a method is called – but could perform the computation at the creation of the object (I hope I have got it right). In a way I battle to differentiate this from lazy evaluation but I will show an example to explain the principle. Let me try and show an example to illustrate the principle… assume the following F# module namespace myNamespace open System module myMod = let Add val1 val2 = Console.WriteLine("Compute") val1 + val2 type MathPrecompute(val1 : int, val2 : int) = let precomputedsum = Add val1 val2 member v.Sum = precomputedsum type MathNormalCompute(val1 : int, val2 : int) = member v.Sum = Add val1 val2 Now assume you have a C# console app that makes use of the objects with code similar to the following… using System; using myNamespace; namespace CSharpTest { class Program { static void Main(string[] args) { Console.WriteLine("Constructing Objects"); var myObj1 = new myMod.MathNormalCompute(10, 11); var myObj2 = new myMod.MathPrecompute(10, 11); Console.WriteLine(""); Console.WriteLine("Normal Compute Sum..."); Console.WriteLine(myObj1.Sum); Console.WriteLine(myObj1.Sum); Console.WriteLine(myObj1.Sum); Console.WriteLine(""); Console.WriteLine("Pre Compute Sum..."); Console.WriteLine(myObj2.Sum); Console.WriteLine(myObj2.Sum); Console.WriteLine(myObj2.Sum); Console.ReadKey(); } } } The output when running the console application would be as follows…. You will notice with the normal compute object that the system would call the Add function every time the method was called. With the Precompute object it only called the compute method when the object was created. Subtle, but something that could lead to major performance benefits. So… this post has gone off in a slight tangent but still related to F# objects.

    Read the article

  • Header set Access-Control-Allow-Origin not working with mod_rewrite + mod_jk

    - by tharant
    My first question here on SF so please forgive me if I manage to bork the post. :) Anyways, I'm using mod_rewrite on one of my machines with a simple rule that redirects to a webapp on another machine. I'm also setting the header 'Access-Control-Allow-Origin' on both machines. The problem is that when I hit the rewrite rule, I lose the 'Access-Control-Allow-Origin' header setting. Here's an example of the Apache config for the first machine: NameVirtualHost 10.0.0.2:80 <VirtualHost 10.0.0.2:80> DocumentRoot /var/www/host.example.com ServerName host.example.com JkMount /webapp/* jkworker Header set Access-Control-Allow-Origin "*" RewriteEngine on RewriteRule ^/otherhost http://otherhost.example.com/webapp [R,L] </VirtualHost> And here's an example of the Apache config for the second: NameVirtualHost 10.0.1.2:80 <VirtualHost 10.0.1.2:80> DocumentRoot /var/www/otherhost.example.com ServerName otherhost.example.com JkMount /webapp/* jkworker Header set Access-Control-Allow-Origin "*" </VirtualHost> When I hit host.example.com we see that the header is set: $ curl -i http://host.example.com/ HTTP/1.1 302 Moved Temporarily Server: Apache/2.2.11 (FreeBSD) mod_ssl/2.2.11 OpenSSL/0.9.7e-p1 DAV/2 mod_jk/1.2.26 Content-Length: 0 Access-Control-Allow-Origin: * Content-Type: text/html;charset=ISO-8859-1 And when I hit otherhost.example.com we see that it too is setting the header: $ curl -i http://otherhost.example.com HTTP/1.1 200 OK Server: Apache/2.0.46 (Red Hat) Location: http://otherhost.example.com/index.htm Content-Length: 0 Access-Control-Allow-Origin: * Content-Type: text/html;charset=UTF-8 But when I try to hit the rewrite rule at host.example.com/otherhost we get no love: $ curl -i http://host.example.com/otherhost/ HTTP/1.1 302 Found Server: Apache/2.2.11 (FreeBSD) mod_ssl/2.2.11 OpenSSL/0.9.7e-p1 DAV/2 mod_jk/1.2.26 Location: http://otherhost.example.com/ Content-Length: 0 Content-Type: text/html; charset=iso-8859-1 Can anybody point out what I'm doing wrong here? Could mod_jk be part of the problem?
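    One hedged suggestion rather than a confirmed fix: a bare Header set only operates on mod_headers' "onsuccess" table, and the 302 that the [R] rule generates internally is exactly the kind of response that can skip it, which matches the missing header in the last trace. Header always set writes to the table that is applied to every response:

        Header always set Access-Control-Allow-Origin "*"

    Even with the header present, browsers may refuse to follow a cross-origin redirect for an XHR request, so pointing the client at otherhost.example.com/webapp directly may still be necessary.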

    Read the article

  • Ionic Isapi Rewrite error on IIS6, Windows 2003 Server

    - by EsiX
    First of all, my setup is a VPS running Windows 2003 Server with multiple domains on it, IIS 6 and Plesk. IsapiRewrite4.ini: RewriteLogLevel 3 RewriteCond %{HTTP_HOST} ^mydomain.com$ RewriteRule ^/(.*)$ http://www.mydomain.com/$1 [R] This is one of their basic examples. Ionic is installed and set up properly, because if I use another rule (a simpler one ... like the one following) it works instantly: # IsapiRewrite4.ini # RewriteLogLevel 3 # # This ini file illustrates the use of a redirect rule. # Any incoming URL that starts with an uppercase W # will be redirected to the specified server. RewriteRule ^/(W.*)$ http://server.dyndns.org:7070/$1 [R] This one works in the TestDriver tool and neither of them gives any errors or warnings in the TestParse tool, but it doesn't do a thing on the webserver... The fact that one rule works means that the ISAPI module works. I am using the latest version. RedirectRule http://mydomain.com/someplace/somefile.html http://www.mydomain.com/howto/someplace/anotherfile.html [I,L] Both examples were taken from http://iirf.codeplex.com/Wiki/View.aspx?title=Redirection&referringTitle=Home So my IsapiRewrite4.ini needs to do these two tasks: auto transform and redirection for a number of URLs. Can you help out? I really don't know what I'm doing wrong.
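    A hedged suggestion, since nothing in the snippet looks structurally wrong: escape the dot in the host pattern so it can't match anything unexpected, and use the IIRF log to confirm that this particular build actually honours RewriteCond, since host-based conditions are the part that varies most between IIRF versions. The log path below is only a placeholder:

        RewriteLog  c:\temp\iirf_log     # hypothetical path; IIRF derives the actual log file name from this stub
        RewriteLogLevel 3
        RewriteCond %{HTTP_HOST} ^mydomain\.com$
        RewriteRule ^/(.*)$ http://www.mydomain.com/$1 [R]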

    Read the article

  • IPCop Packet Mangling

    - by Zenham
    I've found myself in a pickle replacing an old firewall for a client this afternoon. I'm configuring their new IPCop firewall (1.4.21), Zerina OpenVPN addon is installed. What I need to do: There are three network interfaces, currently set up as red (WAN), green (LAN, 192.168.20.0/24) and orange (remote network 10.1.20.0/24). The orange interface is a direct fiber link to another organization. Simple description: Traffic and networks appear to be properly configured at this point, but I have many (150+) specific IPs on the LAN which, when accessing the resources on the 10.1.20.x network, need to be mangled to appear to be coming from the 10.1.20.0/24 network (and return traffic properly delivered). The routing on the far side was configured earlier and should be fine, but I need to redirect any packets coming across destined for those IPs to end up at their proper destination. The addressing is fixed and predictable (ie. 192.168.20.125 - 10.1.20.125). I need to insert whatever rules I have into the IPCop ruleset through /etc/rc.local I know, I'm just not sure about how I should structure this. There's CUSTOMOUTPUT and CUSTOMINPUT targets, both which currently just consist of the single rule redirecting packets to the OVPNOUTPUT/OVPNINPUT targets, so I'm guessing I should insert a rule matching outbound packets destined for the 10.1.20.x network and redirecting to a new target (maybe called TO-ORANGE) and a rule at the top of CUSTOMINPUT which redirects to a FROM-ORANGE target. Under those targets, I would have rules which do the IP matching and mangling. Am I approaching this right? If so, I'm not very familiar with mangle, and would appreciate seeing examples of how to write that source-IP rewrite. If not, how would you suggest doing this? TIA! edit: I notice additionally that the nat table has CUSTOMPREROUTING and CUSTOMPOSTROUTING targets, I guess I could alternatively post the rules in there....
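    If the mapping really is a straight 1:1 translation of the host part (192.168.20.x to 10.1.20.x), iptables' NETMAP target does that for a whole prefix in one rule, and it lives in the nat table rather than mangle. A hedged sketch for /etc/rc.local, assuming the orange NIC is eth2 and that the IPCop kernel ships the NETMAP module:

        # rewrite LAN sources onto 10.1.20.0/24 on the way out of the orange interface (1:1, host part preserved)
        iptables -t nat -A CUSTOMPOSTROUTING -s 192.168.20.0/24 -o eth2 -j NETMAP --to 10.1.20.0/24

    Connection tracking reverses the translation for return traffic automatically, so no matching PREROUTING rule is needed for replies; if only the 150-odd specific hosts should be translated rather than the whole /24, per-host SNAT rules (e.g. -s 192.168.20.125 ... -j SNAT --to-source 10.1.20.125) do the same thing less compactly.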

    Read the article

  • Fortinet: Is there any equivalent of the ASA's packet-tracer command?

    - by Kedare
    I would like to know if there is, on Fortigates, an equivalent of the packet-tracer command that we can find on the ASA. Here is an example of execution for those who don't know it: NAT and pass: lev5505# packet-tracer input inside tcp 192.168.3.20 9876 8.8.8.8 80 Phase: 1 Type: ACCESS-LIST Subtype: Result: ALLOW Config: Implicit Rule Additional Information: MAC Access list Phase: 2 Type: ROUTE-LOOKUP Subtype: input Result: ALLOW Config: Additional Information: in 0.0.0.0 0.0.0.0 outside Phase: 3 Type: ACCESS-LIST Subtype: log Result: ALLOW Config: access-group inside-in in interface inside access-list inside-in extended permit tcp any any eq www access-list inside-in remark Allows DNS Additional Information: Phase: 4 Type: IP-OPTIONS Subtype: Result: ALLOW Config: Additional Information: Phase: 5 Type: VPN Subtype: ipsec-tunnel-flow Result: ALLOW Config: Additional Information: Phase: 6 Type: NAT Subtype: Result: ALLOW Config: object network inside-network nat (inside,outside) dynamic interface Additional Information: Dynamic translate 192.168.3.20/9876 to 81.56.15.183/9876 Phase: 7 Type: IP-OPTIONS Subtype: Result: ALLOW Config: Additional Information: Phase: 8 Type: FLOW-CREATION Subtype: Result: ALLOW Config: Additional Information: New flow created with id 94755, packet dispatched to next module Result: input-interface: inside input-status: up input-line-status: up output-interface: outside output-status: up output-line-status: up Action: allow Blocked by ACL: lev5505# packet-tracer input inside tcp 192.168.3.20 9876 8.8.8.8 81 Phase: 1 Type: ROUTE-LOOKUP Subtype: input Result: ALLOW Config: Additional Information: in 0.0.0.0 0.0.0.0 outside Phase: 2 Type: ACCESS-LIST Subtype: Result: DROP Config: Implicit Rule Additional Information: Result: input-interface: inside input-status: up input-line-status: up output-interface: outside output-status: up output-line-status: up Action: drop Drop-reason: (acl-drop) Flow is denied by configured rule Is there any equivalent on the Fortigates?
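    There is no single packet-tracer clone, but the closest FortiOS equivalent is the debug flow trace, which prints the route lookup, policy match, NAT decision and accept/drop verdict for traffic passing through the unit. A rough equivalent from the CLI - exact sub-commands vary a little between FortiOS versions, and unlike packet-tracer it traces real packets, so the test traffic has to actually be generated:

        diagnose debug enable
        diagnose debug flow filter addr 8.8.8.8
        diagnose debug flow filter port 80
        diagnose debug flow show console enable
        diagnose debug flow trace start 100
        (generate the traffic to be traced, then)
        diagnose debug flow trace stop
        diagnose debug disable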

    Read the article

  • D-LINK 2450U DSL router: Port forwarding forwards to the modem itself, not the specified IP

    - by axk
    I found a similar question but it has no satisfactory answers. I have a D-LINK 2540U DSL router. It has a basic port forwarding (under DNS - Virtual Servers) configuration in the administration panel where you specify: external port range, protocol, internal port range, server IP address, and it is supposed to forward that port to that IP address. When I first set it up for a RealVNC connection it worked fine, just as I expected. Then I added a DynDNS configuration entry in the router's 'Dynamic DNS' section and added an additional SSH (22) forwarding rule. The SSH forwarding also worked fine (now with the dynamic hostname, but I suppose it doesn't make any difference as far as SSH is concerned). Then I removed the SSH rule and after that the VNC forwarding stopped working, with the VNC client failing to connect (I have tried to connect with telnet and it also failed to connect, so it wasn't a VNC problem). After adding a rule for port 80 it turned out it would forward on port 80, though not to the specified server IP but to the modem itself. At least that is what it looks like, because it gives me the administration panel when I connect to my external IP (both using a browser and plain telnet, in which case I can see that it is mini_httpd sitting on the port, which is obviously the modem's administration panel). Has anybody encountered a similar problem with port forwarding? I have tried to do a reset through the administration panel and to restore a backup of the settings made before I started playing with port forwarding, but it didn't help. Should I do a 'hard' reset with the button on the modem? Is it any different from the administration panel's reset (Restore default)?

    Read the article

  • Blocking requests from specific IPs using IIS Rewrite module

    - by Thomas Levesque
    I'm trying to block a range of IP that is sending tons of spam to my blog. I can't use the solution described here because it's a shared hosting and I can't change anything to the server configuration. I only have access to a few options in Remote IIS. I see that the URL Rewrite module has an option to block requests, so I tried to use it. My rule is as follows in web.config: <rule name="BlockSpam" enabled="true" stopProcessing="true"> <match url=".*" /> <conditions logicalGrouping="MatchAll" trackAllCaptures="false"> <add input="{REMOTE_ADDR}" pattern="10\.0\.146\.23[0-9]" ignoreCase="false" /> </conditions> <action type="CustomResponse" statusCode="403" /> </rule> Unfortunately, if I put it at the end of the rewrite rules, it doesn't seem to block anything... and if I put it at the start of the list, it blocks everything! It looks like the condition isn't taken into account. In the UI, the stopProcessing option is not visible and is true by default. Changing it to false in web.config doesn't seem to have any effect. I'm not sure what to do now... any ideas?
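    One thing worth ruling out on shared hosting, offered as a guess rather than a diagnosis: if the host fronts sites with a proxy or load balancer, {REMOTE_ADDR} holds the proxy's address and the pattern never matches, so the rule never fires. A variant that also checks the X-Forwarded-For header (note the switch to MatchAny, since matching either input is enough to block):

        <rule name="BlockSpam" enabled="true" stopProcessing="true">
          <match url=".*" />
          <conditions logicalGrouping="MatchAny" trackAllCaptures="false">
            <add input="{REMOTE_ADDR}" pattern="10\.0\.146\.23[0-9]" ignoreCase="false" />
            <add input="{HTTP_X_FORWARDED_FOR}" pattern="10\.0\.146\.23[0-9]" ignoreCase="false" />
          </conditions>
          <action type="CustomResponse" statusCode="403" statusReason="Forbidden" statusDescription="Blocked" />
        </rule>

    Failed Request Tracing, where the host allows it, will show exactly which condition evaluated to false.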

    Read the article

  • IIS 7 rewriting subdomain to point at a specific port

    - by Tommy Jakobsen
    Having installed Team Foundation Server 2010 on Windows Server 2008, I need an easy URL for our developers to access their repositories. The default URL for the TFS repositories is http://localhost:8080/tfs Now I want the subdomain tfs.server.domain.com to point at http://localhost:8080/tfs. And when you access tfs.server.domain.com/repos_name it should redirect to http://localhost:8080/tfs/repos_name. How can I do this in IIS7? I already tried using the following rule, but it does not work. I get a 404. <rewrite> <globalRules> <rule name="TFS" stopProcessing="true"> <match url="^(?:tfs/)?(.*)" /> <conditions> <add input="{HTTP_HOST}" pattern="^tfs.server.domain.com$" /> </conditions> <action type="Rewrite" url="http://localhost:8080/tfs/{R:1}" /> </rule> </globalRules> </rewrite> EDIT I actually got this working by adding a binding for the site on port 80 with host name tfs.server.domain.com. But using tfs.server.domain.com, I can't authenticate using Windows Authentication. Is there something that I need to configure for Windows Authentication? You can see a trace here: http://pastebin.com/k0QrnL0m
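    For the Windows Authentication part: once the site is reached through a new DNS name, Kerberos needs a service principal name registered for that name against the account the TFS application pool runs as (otherwise authentication has to fall back to NTLM, which some clients won't do). A hedged sketch, with DOMAIN\tfsapppool standing in for whatever identity the application pool actually uses:

        setspn -S HTTP/tfs.server.domain.com DOMAIN\tfsapppool
        setspn -S HTTP/tfs DOMAIN\tfsapppool

    If the failure only shows up when testing from the server itself, the loopback check is another usual suspect; the BackConnectionHostNames registry value described in KB 896861 is the standard workaround for that.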

    Read the article

  • Add Route for machine in same DC

    - by gary
    My routing table on my machine with IP of 46.84.121.243 currently looks like this - Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 46.84.121.225 46.84.121.243 21 46.84.121.224 255.255.255.224 On-link 46.84.121.243 276 46.84.121.239 255.255.255.255 On-link 46.84.121.243 21 46.84.121.243 255.255.255.255 On-link 46.84.121.243 276 46.84.121.255 255.255.255.255 On-link 46.84.121.243 276 I'm trying to access 46.84.121.239, which is my other machine in the same DC but my guess is the first rule is blocking it as it is trying to go via the gateway and failing - Tracing route to [46.84.121.239] over a maximum of 30 hops: 1 OWNEROR-9O83HBL [46.84.121.243] reports: Destination host unreachable. Trace complete. I'm doing all this via RDP and already tried changing the metric on the persistent rule with devastating consequences! Here's the persistent rule (working) - Persistent Routes: Network Address Netmask Gateway Address Metric 0.0.0.0 0.0.0.0 46.84.121.225 1 Any help to be able to access the 46.84.121.243 would be very helpful thanks very much.
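    Since .239 and .243 both sit inside 46.84.121.224/27, the table above already treats .239 as on-link, so this looks less like a routing problem than a layer-2 one: many hosting providers isolate servers in the same subnet and expect traffic between them to go via the gateway. A hedged test is to pin a host route for the neighbour through the gateway and see whether it becomes reachable (add -p to keep it across reboots if it works):

        route add 46.84.121.239 mask 255.255.255.255 46.84.121.225 metric 5
        ping 46.84.121.239
        arp -a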

    Read the article

  • How do I troubleshoot a slow hard drive?

    - by Bruce Connor
    My computer is suffering from slow-downs and I'm not surprised (it's around 6 years old). Here's what I've verified: They are not very frequent (only a couple of times a day). When they happen a single application will hang for 10-60 seconds, while the rest don't hang but also get slow. Even as it is happening, the CPU usage stays low. It happens to applications (such as a text editor, Firefox, Skype). It never happens to some applications (such as games) which I use for hours under heavy CPU load. Also of note: The graphics card and PSU are new (around a year). Though I have a decent amount of software installed right now, this was happening even right after I reinstalled Windows. This HDD has been through many partitioning schemes, and a few heavy operations (such as moving around 200GB of data). Because of the above, I am already 70% sure the problem is with the hard drive. Before I replace it, however, I want to rule out other less likely possibilities (such as RAM, software, or PSU). I don't have the money to replace the entire box right now, but I can easily replace one of the components. I've read several questions (such as this one) which give general guidance on troubleshooting an unknown issue, but that is not what I'm looking for here. My main question is: What tests or benchmarks can I run to verify I have a problematic hard drive? I don't need to solve this problem; I am content with just making sure it's the hard drive. I could borrow a newer hard drive from a friend and see if it gets better. A positive result would rule out all other components, but it wouldn't rule out a software issue (since this new hard drive won't have any of the software I use daily). Running on Windows/Linux.
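    Since the box runs Windows/Linux, the quickest non-destructive check is the drive's own SMART data plus a simple sequential-read benchmark; growing reallocated or pending sector counts, or a read speed far below the drive's class, point strongly at the disk. A sketch using smartmontools and hdparm from a Linux session, with /dev/sda assumed to be the drive in question:

        smartctl -a /dev/sda       # look at Reallocated_Sector_Ct, Current_Pending_Sector, UDMA_CRC_Error_Count
        smartctl -t long /dev/sda  # start the long self-test; read the result later with: smartctl -l selftest /dev/sda
        hdparm -t /dev/sda         # rough sequential read benchmark

    On the Windows side, repeated disk or atapi timeout warnings in the System event log during one of the 10-60 second hangs would point the same way.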

    Read the article

  • iptables: allowing incoming for 192.168.1.0/24 allowed incoming for all?

    - by nortally
    The internal side of my ISP router has three devices: ISP router 128.128.43.1 Firewall router 128.128.43.2 Server 128.128.43.3 Behind the Firewall router is a NAT network using 192.168.100.n/24 This question is regarding iptables running on the Server. I wanted to allow access to port 8080 only from the NAT clients behind the Firewall router, so I used this rule -A Firewall-1-INPUT -s 192.168.100.0/24 -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT This worked, but UNEXPECTEDLY ALLOWED GLOBAL ACCESS, which resulted in our JBOSS server getting compromised. I now know that the correct rule is to use the Firewall router's address instead of the internal network, but can anyone explain why the first rule allowed global access? I would have expected it to just fail. Full config, mostly lifted from a RedHat server: *filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [0:0] :Firewall-1-INPUT - [0:0] -A INPUT -j Firewall-1-INPUT -A FORWARD -j Firewall-1-INPUT -A Firewall-1-INPUT -i lo -j ACCEPT -A Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT -A Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT -A Firewall-1-INPUT -m comment --comment "allow ssh from all" -A Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT -A Firewall-1-INPUT -m comment --comment "allow https from all" -A Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT -A Firewall-1-INPUT -m comment --comment "allow JBOSS from Firewall" ### THIS RESULTED IN GLOBAL ACCESS TO PORT 8080 ### -A Firewall-1-INPUT -s 192.168.100.0/24 -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT ### THIS WORKED -A Firewall-1-INPUT -s 128.128.43.2 -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPt ### -A Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited COMMIT

    Read the article

  • mod_rewrite REQUEST_FILENAME doesn't contain absolute path

    - by Paul Dixon
    I have a problem with a file test operation in a mod_rewrite RewriteCond entry which is testing whether %{REQUEST_FILENAME} exists. It seems that rather than %{REQUEST_FILENAME} being an absolute path, I'm getting a path which is rooted at the DocumentRoot instead. Configuration I have this inside a <VirtualHost> block in my apache 2.2.9 configuration: RewriteEngine on RewriteLog /tmp/rewrite.log RewriteLogLevel 5 #push virtually everything through our dispatcher script RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^/([^/]*)/?([^/]*) /dispatch.php?_c=$1&_m=$2 [qsa,L] Diagnostics attempted That rule is a common enough idiom for routing requests for non-existent files or directories through a script. Trouble is, it's firing even if a file does exist. If I remove the rule, I can request normal files just fine. But with the rule in place, these requests get directed to dispatch.php Rewrite log trace Here's what I see in the rewrite.log init rewrite engine with requested uri /test.txt applying pattern '^/([^/]*)/?([^/]*)' to uri '/test.txt' RewriteCond: input='/test.txt' pattern='!-f' => matched RewriteCond: input='/test.txt' pattern='!-d' => matched rewrite '/test.txt' -> '/dispatch.php?_c=test.txt&_m=' split uri=/dispatch.php?_c=test.txt&_m= -> uri=/dispatch.php, args=_c=test.txt&_m= local path result: /dispatch.php prefixed with document_root to /path/to/my/public_html/dispatch.php go-ahead with /path/to/my/public_html/dispatch.php [OK] So, it looks to me like the REQUEST_FILENAME is being presented as a path from the document root, rather than the file system root, which is presumably why the file test operator fails. Any pointers for resolving this gratefully received...
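    The usual fix for exactly this situation - rules living in a <VirtualHost> (per-server) context, where the URL has not been mapped to the filesystem yet, so REQUEST_FILENAME still equals the URI, as the log shows - is either to prepend %{DOCUMENT_ROOT} to the file tests or to move the rules into a <Directory>/.htaccess context where the mapping has already happened:

        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} !-f
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} !-d
        RewriteRule ^/([^/]*)/?([^/]*) /dispatch.php?_c=$1&_m=$2 [qsa,L]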

    Read the article

  • Iptables Forwarding problem

    - by ankit
    Hi all, I had initially asked a question about setting up my Linux box for NAT for my home network and was given suggestions in the thread here. I did not want to clutter the old question, so I am starting a new one here. Based on the earlier suggestions, I have come up with the following rules ... :PREROUTING ACCEPT [1:48] :OUTPUT ACCEPT [12:860] :POSTROUTING ACCEPT [3:228] -A POSTROUTING -o eth0 -j MASQUERADE COMMIT *filter :INPUT DROP [3:228] :FORWARD DROP [0:0] :OUTPUT DROP [0:0] -A INPUT -i lo -j ACCEPT -A INPUT -i eth0 -p icmp -j ACCEPT -A INPUT -i eth0 -p tcp -m tcp --dport 80 -j ACCEPT -A INPUT -i eth0 -p tcp -m tcp --dport 443 -j ACCEPT -A INPUT -i eth0 -p tcp -m tcp --dport 22 -j ACCEPT -A FORWARD -i eth1 -p icmp -j ACCEPT -A FORWARD -i eth1 -p tcp -m tcp --dport 80 -j ACCEPT -A FORWARD -i eth1 -p tcp -m tcp --dport 443 -j ACCEPT -A OUTPUT -p icmp -j ACCEPT -A OUTPUT -j ACCEPT COMMIT If you notice, I do have the proper MASQUERADE rule and the proper FORWARD filter rules as well. However, I am facing 2 problems: On the Linux box itself DNS resolving is not working. The LAN clients connected to the Linux box are still not able to get to the internet; when I ping something from them, I see the DROP count in the iptables INPUT rule increasing. Now my question is, when I am pinging something from the LAN client, how come it is being matched by the INPUT chain?! Should it be in the FORWARD chain? Chain INPUT (policy DROP 20 packets, 2314 bytes) pkts bytes target prot opt in out source destination 99 9891 ACCEPT all -- lo any anywhere anywhere 0 0 ACCEPT icmp -- eth0 any anywhere anywhere 0 0 ACCEPT tcp -- eth0 any anywhere anywhere tcp dpt:http 0 0 ACCEPT tcp -- eth0 any anywhere anywhere tcp dpt:https 122 9092 ACCEPT tcp -- eth0 any anywhere anywhere tcp dpt:ssh Thanks ankit
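    One gap in the ruleset that would explain both symptoms: there is no state rule and nothing accepts DNS. Reply packets for the box's own DNS lookups arrive on eth0 and die in INPUT, and the return half of every connection the LAN clients open dies in FORWARD, since only new icmp/80/443 traffic from eth1 is allowed there. As for the counter question: forwarded pings really do go through FORWARD; the packets hitting the INPUT policy are most likely the clients' DNS queries aimed at the box itself (if it is their configured resolver) or other traffic addressed to the box. A hedged addition to the *filter section, placed before COMMIT:

        # reply traffic for connections the box or the LAN clients started
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        # DNS lookups from the LAN, forwarded out or answered by this box
        -A FORWARD -i eth1 -p udp -m udp --dport 53 -j ACCEPT
        -A INPUT -i eth1 -p udp -m udp --dport 53 -j ACCEPT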

    Read the article

  • I have added a port to the public zone in firewalld but still can't access the port

    - by mikemaccana
    I've been using iptables for a long time, but have never used firewalld until recently. I have enabled port 3000 TCP via firewalld with the following command: # firewall-cmd --zone=public --add-port=3000/tcp --permanent However I can't access the server on port 3000. From an external box: telnet 178.62.16.244 3000 Trying 178.62.16.244... telnet: connect to address 178.62.16.244: Connection refused There are no routing issues: I have a separate rule for a port forward from port 80 to port 8000 which works fine externally. My app is definitely listening on the port too: Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN 99 36797 18662/node firewall-cmd doesn't seem to show the port either - see how ports is empty. You can see the forward rule I mentioned earlier. # firewall-cmd --list-all public (default, active) interfaces: eth0 sources: services: dhcpv6-client ssh ports: masquerade: no forward-ports: port=80:proto=tcp:toport=8000:toaddr= icmp-blocks: rich rules: However I can see the rule in the XML config file: # cat /etc/firewalld/zones/public.xml <?xml version="1.0" encoding="utf-8"?> <zone> <short>Public</short> <description>For use in public areas. You do not trust the other computers on networks to not harm your computer. Only selected incoming connections are accepted.</description> <service name="dhcpv6-client"/> <service name="ssh"/> <port protocol="tcp" port="3000"/> <forward-port to-port="8000" protocol="tcp" port="80"/> </zone> What else do I need to do to allow access to my app on port 3000? Also: is adding access via a port the correct thing to do? Or should I make a firewalld 'service' for my app instead?
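    The clue is that --list-all shows an empty ports: line even though the XML file has the entry: --permanent only edits the stored configuration, and the running firewall does not pick it up until it is reloaded. Assuming nothing upstream is filtering port 3000, this is usually all that is missing:

        firewall-cmd --reload                      # load the permanent configuration into the running firewall
        firewall-cmd --zone=public --list-ports    # should now show 3000/tcp

    Equivalently, run the --add-port command once without --permanent (for the running firewall) and once with it (for the saved configuration).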

    Read the article

  • Local DNS and Apache Server Configuration Interfering - example.com / www.example.com

    - by nicorellius
    I have a domain for my site: example.com I am also running local DNS with these lines: www IN CNAME server.<host_provider>.com. dev IN CNAME server.<host_provider>.com. So www.example.com and dev.example.com go to production and development sites, respectively, that are hosted by a host company. In my Apache configuration for the main site, I'm running a rewrite rule like this: RewriteEngine ON RewriteCond %{HTTP_HOST} ^example\.com$|!dev\.example\.com$ [NC] RewriteRule ^(.*)$ http://www\.%{HTTP_HOST}/$1 [R=302,L,NE] This rule seems to work, as when you are off the network and go to example.com in the browser, you get redirected to www.example.com. The problem is when I'm on the network, and I go to example.com I get an error page, saying the page can't be found. No server errors; just a page can't be found, as if the local DNS is causing it to stop looking at that point. I'm also using Nettica for DNS service and have this A record in place: example.com Host (A) Default xxx.xx.xxx.xx This handles the external DNS, but my problem seems to be related to my internal DNS. For example, inside my network, I can go to servers on the network with addresses like this: server.example.com server1.example.com server2.example.com These are configured in my local DNS. I'm just not sure how to get past the "empty" subdomain and go to example.com. Adding to this since it might not be clear: if I'm outside the example.com network, on another network, like example123.com, then when I go to example.com I'm redirected to www.example.com as expected, i.e., the Apache rewrite rule is working. Thanks in advance for any information.
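    A likely explanation given the records shown: inside the network the browser asks the local DNS for the bare name, and the internal zone only defines www and dev, so example.com itself never resolves and the Apache rewrite never even sees a request. A zone apex cannot be a CNAME, so the usual fix is an A record in the internal zone pointing at the same address as the public one (placeholders kept from the post):

        @    IN  A      xxx.xx.xxx.xx    ; same address as the external A record for example.com
        www  IN  CNAME  server.<host_provider>.com.
        dev  IN  CNAME  server.<host_provider>.com.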

    Read the article

  • IIS 7 rewriting subdomain to point at a specific port.

    - by Tommy Jakobsen
    Having installed Team Foundation Server 2010 on Windows Server 2008, I need an easy URL for our developers to access their repositories. The default URL for the TFS repositories is http://localhost:8080/tfs Now I want the subdomain tfs.server.domain.com to point at http://localhost:8080/tfs. And when you access tfs.server.domain.com/repos_name it should redirect to http://localhost:8080/tfs/repos_name. How can I do this in IIS 7? I already tried using the following rule, but it does not work. I get a 404. <rewrite> <globalRules> <rule name="TFS" stopProcessing="true"> <match url="^(?:tfs/)(.*)" /> <conditions> <add input="{HTTP_HOST}" pattern="^tfs.server.domain.com$" /> </conditions> <action type="Rewrite" url="http://localhost:8080/tfs/{R:1}" /> </rule> </globalRules> </rewrite>
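    One prerequisite that commonly produces exactly this 404: a Rewrite action pointing at an absolute http://host:port/ URL only gets proxied if Application Request Routing is installed and its proxy mode is switched on; without ARR the rewritten URL is treated as a local path and 404s. A hedged check/fix after installing ARR:

        %windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/proxy /enabled:"True" /commit:apphost

    Note also that, unlike the (?:tfs/)? variant of this rule, the pattern here makes the tfs/ prefix mandatory, so a request for tfs.server.domain.com/repos_name would not match it at all.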

    Read the article

  • Vyatta internet connection + hosted site on same IP

    - by boburob
    Having a small issue setting up a Vyatta. The company internet and two different websites are both on the same IP. Server 1 - has websites hosted on ports 1000 and 3000 and also has a proxy server installed to provide internet connection to the domain. Server 2 - has a website hosted on ports 80 and 443. The Vyatta is correctly NATting the appropriate traffic to each server, and allowing the proxy to get internet traffic; however, I have a problem getting to the websites hosted on these two servers from inside the domain. I believe the problem is that the HTTP request is being sent with an IP, e.g. 12.34.56.78. The request will reach the website and the server will attempt to send the response back to that IP, however this is the IP of the Vyatta, so it has nowhere else to go. I thought the solution would be something like this: rule 50 { destination { address 12.34.56.78 port 1000 } inbound-interface eth1 inside-address { address 10.19.2.3 } protocol tcp type destination } But this doesn't seem to do it! UPDATE I changed the rules to the following: rule 50 { destination { address 12.34.56.78 port 443 } outbound-interface eth1 protocol tcp source { address 10.19.2.3 } type masquerade } rule 51 { destination { address 12.34.56.78 port 443 } inbound-interface eth1 inside-address { address 10.19.2.2 } protocol tcp type destination } I am now seeing traffic going between the two with Wireshark, but the website will still fail to load.
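    What is described is hairpin NAT: a LAN host connects to the public address, the destination rule sends the packet to the web server, but the server replies straight back to the client on the LAN (its source address was never rewritten), so the client gets a reply from 10.19.2.2 when it expects one from 12.34.56.78 and drops it. The usual cure is a destination rule on the LAN interface plus a masquerade rule towards the server, so hairpinned connections appear to come from the Vyatta itself. A hedged sketch in set-command form, assuming eth0 is the LAN-side interface and repeating per forwarded port:

        set service nat rule 60 type destination
        set service nat rule 60 inbound-interface eth0
        set service nat rule 60 protocol tcp
        set service nat rule 60 destination address 12.34.56.78
        set service nat rule 60 destination port 443
        set service nat rule 60 inside-address address 10.19.2.2

        set service nat rule 61 type masquerade
        set service nat rule 61 outbound-interface eth0
        set service nat rule 61 protocol tcp
        set service nat rule 61 destination address 10.19.2.2
        set service nat rule 61 destination port 443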

    Read the article

  • port forwarding problem

    - by Claudiu
    I want to set up an svn server on my computer, so it's available from anywhere. I think I set up the repository correctly, using CollabSVN. If I go to Repo-Browser with TortoiseSVN and point it to svn://localhost:3690, it shows the proper repository. The problem now is that I'm behind a router. My local IP is 192.168.1.45 . Doing svn://192.168.1.45:3690 also works. My global IP is, say, x.x.x.x. Just doing svn://x.x.x.x:3690 doesn't work, which makes sense, since I have to set up port forwarding. I'm using a Verizon router. Using their web interface (on 192.168.1.1) I added the following port forwarding rule: IP Address forward to: 192.168.1.45 Source Ports: Any Dest Ports: 3690 Forward to: 3690 Protocol: TCP However, even after applying this rule, going to svn://x.x.x.x:3690 doesn't work. It takes a few seconds to fail, then says that the connection couldn't be established because the server connected to didn't respond properly after a period of time. What's interesting is that a random port, like svn://x.x.x.x:36904 fails immediately, saying that the target machine actively refused the connection. So I figure that the forwarding rule did something, but not fully what was necessary. Any ideas on how to get this working? The router model is MI424-WR and the firmware version is 4.0.16.1.56.0.10.12.3. UPDATE: I also tried setting destination port to 45000, and still forwarding to 3690, in case something was wrong w/ the lower-numbered ports, but to no avail. I also tried port 80 to port 3690, still all in vain.
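    Since svn://192.168.1.45:3690 already works, the daemon is listening fine; the most common remaining cause is testing the public address from inside the LAN, which only succeeds if the router supports NAT loopback, and many Verizon/Actiontec firmwares do not. Checking from a genuinely external connection is the first step. On the server itself, two hedged checks (the firewall rule name is arbitrary):

        netstat -an | find "3690"
        netsh advfirewall firewall add rule name="svnserve" dir=in action=allow protocol=TCP localport=3690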

    Read the article
