Search Results

Search found 14531 results on 582 pages for 'proxy pass'.


  • How can I set an image for background of GUI interface?

    - by enriched
    hey everyone, im having some troubles displaying the background image for a GUI interface in java. Here is what i have at the moment, and with current stage of code it shows default(gray) background. import javax.swing.*; import java.awt.event.*; import java.util.Scanner; import java.awt.*; import java.io.File; import javax.imageio.ImageIO; import java.awt.image.BufferedImage; import java.io.IOException; ////////////////////////////////// // 3nriched Games Presents: // // MIPS The Mouse!! // ////////////////////////////////// public class mipsMouseGUI extends JFrame implements ActionListener { private static String ThePDub = ("mouse"); //the password JPasswordField pass; JPanel panel; JButton btnEnter; JLabel lblpdub; public mipsMouseGUI() { BufferedImage image = null; try { //attempts to read picture from the folder image = ImageIO.read(getClass().getResource("/mousepics/mousepic.png")); } catch (IOException e) { //catches exceptions e.printStackTrace(); } ImagePanel panel = new ImagePanel(new ImageIcon("/mousepics/neonglowOnwill.png").getImage()); setIconImage(image); //sets icon picture setTitle("Mips The Mouse Login"); setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); pass = new JPasswordField(5); //sets password length to 5 pass.setEchoChar('@'); //hide characters as @ symbol pass.addActionListener(this); //adds action listener add(panel); //adds panel to frame btnEnter = new JButton("Enter"); //creates a button btnEnter.addActionListener(this);// Register the action listener. lblpdub = new JLabel(" Your Password: "); // label that says enter password panel.add(lblpdub, BorderLayout.CENTER);// adds label and inputbox panel.add(pass, BorderLayout.CENTER); // to panel and sets location panel.add(btnEnter, BorderLayout.CENTER); //adds button to panel pack(); // packs controls and setLocationRelativeTo(null); // Implicit "this" if inside JFrame constructor. setVisible(true);// makes them visible (duh) } public void actionPerformed(ActionEvent a) { Object source = a.getSource(); //char array that holds password char[] passy = pass.getPassword(); //characters array to string String p = new String(passy); //determines if user entered correct password if(p.equals(ThePDub)) { JOptionPane.showMessageDialog(null, "Welcome beta user: USERNAME."); } else JOptionPane.showMessageDialog(null, "You have enter an incorrect password. Please try again."); } public class ImagePanel extends JPanel { private BufferedImage img; public ImagePanel(String img) { this(new ImageIcon(img).getImage()); } public ImagePanel(Image img) { Dimension size = new Dimension(img.getWidth(null), img.getHeight(null)); } public void paintComponent(Graphics g) { g.drawImage(img, 0, 0, null); } } }
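
    For what it's worth, one likely reason the panel stays gray is that the ImagePanel(Image) constructor above computes a size but never assigns the img field, and paintComponent never calls super.paintComponent. A minimal sketch of a working background panel (names here are illustrative, not from the original post):

        import java.awt.Dimension;
        import java.awt.Graphics;
        import java.awt.Image;
        import javax.swing.JPanel;

        public class BackgroundPanel extends JPanel {
            private final Image img;

            public BackgroundPanel(Image img) {
                this.img = img; // keep the reference so paintComponent can draw it
                setPreferredSize(new Dimension(img.getWidth(null), img.getHeight(null)));
            }

            @Override
            protected void paintComponent(Graphics g) {
                super.paintComponent(g); // let Swing paint the panel itself first
                g.drawImage(img, 0, 0, this);
            }
        }

    The frame would then add the label, password field and button to this panel, as in the original code.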

    Read the article

  • Apache mod_wsgi elegant clustering method

    - by Dr I
    I'm currently trying to build a scalable infrastructure for my Python web servers. Actually, I'm trying to find the most elegant way to build a scalable cluster to host all my Python web services. My current setup looks like this: 1 x PuppetMaster to deploy my servers. 2 x Apache reverse proxy front-end servers. 1 x Apache HTTPd server which hosts the Python WSGI applications via mod_wsgi. 4 x clustered MongoDB servers. Everything is fine concerning the reverse proxies and the DB back end; I can easily add a new reverse proxy or a new DB node. My problem is the Python web server. I thought about just provisioning a new node with exactly the same configuration and an rsync replication between the two nodes, but that isn't really workable in terms of deployment for my developers, etc. So if you have a solution that is as efficient and elegant as a Tomcat cluster, I'll be really happy to hear it ;-)
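
    One option worth sketching (hostnames and paths below are placeholders, not a drop-in config) is to keep the WSGI nodes identical behind the existing reverse proxies and let mod_proxy_balancer spread the load; adding an application node then becomes one BalancerMember line, which Puppet can template:

        # Hypothetical front-end vhost snippet; needs mod_proxy, mod_proxy_http
        # and mod_proxy_balancer enabled on the reverse proxy servers.
        <Proxy balancer://wsgi-backends>
            BalancerMember http://wsgi-node1.internal:80
            BalancerMember http://wsgi-node2.internal:80
        </Proxy>

        ProxyPass        / balancer://wsgi-backends/
        ProxyPassReverse / balancer://wsgi-backends/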

    Read the article

  • Using Yahoo local search query for iPhone

    - by robbmcmahan
    I'm using yahoo local search in an iPhone app and trying to query everything in a certain location. According to the api docs you can pass "*" to query and it will return everything. I've tried passing it several different ways, including the way below but it does not work unless I actually pass it a real query. Does anyone know how or what I need to pass to make it query everything? [self setQuery:[@"*" stringByAddingPercentEscapesUsingEncoding:NSASCIIStringEncoding]]; Thanks

    Read the article

  • how to close layer that showed in other layer in cocos2d-iphone

    - by yegomo
    I have a layer called alayer that contains a button called abutton. When the button is clicked, another layer called blayer is shown inside alayer (not via replaceScene); please look at the following code from alayer.m: -(void)abuttonclicked:(id)sender { blayer *blayer = [blayer node]; blayer.position = ccp(1,1); [self addChild:blayer]; } blayer.m has a button called bbutton and a string value called bstring. When bbutton is clicked, I want to close blayer (remove it from alayer) and pass the string value bstring back to alayer; please look at the following code: -(void)bbuttonclicked:(id)sender { // what can I do here to close this layer (remove it from alayer) and pass bstring to alayer? } Thanks. PS: I could use NSUserDefaults to pass the string value, but I think that's a bad way to do it; is there another way to pass the value?
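
    A common alternative to NSUserDefaults is a small delegate callback: blayer hands the string back to alayer and then removes itself. A rough sketch with a made-up protocol and property name (alayer would implement the protocol and set itself as the delegate before calling addChild):

        // blayer.h (hypothetical protocol and property names)
        @protocol BLayerDelegate <NSObject>
        - (void)blayerDidFinishWithString:(NSString *)value;
        @end

        @interface blayer : CCLayer
        @property (nonatomic, assign) id<BLayerDelegate> delegate;
        @end

        // blayer.m
        - (void)bbuttonclicked:(id)sender {
            [self.delegate blayerDidFinishWithString:bstring]; // hand bstring back to alayer
            [self removeFromParentAndCleanup:YES];             // remove blayer from alayer
        }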

    Read the article

  • PHP problems when transfering code from Windows to OS X

    - by Makka95
    I have recently bought a new MacBook Pro. Before I had my MacBook Pro I was working on a website on my desktop computer. And now I want to transfer this code to my new MacBook Pro. The problem is that when I transfered the code (I put it on Dropbox and simply downloaded it on my MacBook Pro) I started to see lots of error messages in my PHP code. The error message I”m receiving is: Warning: Cannot modify header information - headers already sent by (output started at /some/file.php:1) in /some/file.php on line 23 I have done some research on this and it seems that this error is most frequently caused by a new line, simple whitespace or any output before the <?php sign. I have looked through all the places where I have cookies that are being sent in the HTTP request and also where I'm using the header() function. I haven’t detected any output or whitespace that possibly could interfere and cause this problem. Noteworthy is that the error always says that the output is started at line 1. Which got me thinking if there is some kind of coding differences in the way that the Mac OS X and Windows operating systems handle new lines or white spaces? Or could the Dropbox transfer messed something up? The code on one of the sites(login.php) which produces the error: <?php include "mysql_database.php"; login(); $id = $_SESSION['Loggedin']; setcookie("login", $id, (time()+60*60*24*30)); header('Location: ' . $_SERVER['HTTP_REFERER']); ?> login function: function login() { $connection = connecttodatabase(); $pass = ""; $user = ""; $query = ""; if (isset($_POST['user']) && $_POST['user'] != null) { $user = $_POST['user']; if (isset($_POST['pass']) && $_POST['pass'] != null) { $pass = md5($_POST['pass']); $query = "SELECT ID FROM Anvandare WHERE Nickname='$user' AND Password ='$pass'"; } } if ($query != "") { $id = $connection->query($query); $id = mysqli_fetch_assoc($id); $id = $id['ID']; $_SESSION['Loggedin'] = $id; } closeconnection($connection); } Complete error: Warning: Cannot modify header information - headers already sent by (output started at /Users/name/GitHub/website/login.php:1) in /Users/namn/GitHub/website/login.php on line 9
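
    When the warning points at line 1 of the same file, a very common cause is a UTF-8 byte-order mark (BOM) added by an editor or during the transfer: it sits in front of <?php, is invisible, and counts as output, which is exactly the kind of thing that can differ between how the files were saved on Windows and on OS X. A small diagnostic sketch (the glob path is a placeholder):

        <?php
        // List PHP files that begin with a UTF-8 BOM (bytes EF BB BF).
        foreach (glob('/path/to/site/*.php') as $file) {
            if (file_get_contents($file, false, null, 0, 3) === "\xEF\xBB\xBF") {
                echo "BOM found in $file\n";
            }
        }

    Re-saving the flagged files as UTF-8 without BOM (and checking for blank lines before <?php) normally clears the warning.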

    Read the article

  • if else-if making code look ugly any cleaner solution?

    - by Vishal
    I have around 20 functions (is_func1, is_func2, is_func3, ...) returning booleans. I assume only one of them returns true, and that is the one I want! I am doing: if is_func1(param1, param2): # I pass 1 to the following abc(1) some_list.append(1) elif is_func2(param1, param2): # I pass 2 to the following abc(2) some_list.append(2) ... elif is_func20(param1, param2): ... Please note: param1 and param2 are different for each function, and abc and some_list take parameters depending on the function. The code looks big and there is repetition in the calls to abc and some_list. I can pull this logic into a function, but is there any other, cleaner solution? I can think of putting the functions in a data structure and looping to call them.
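
    That data-structure idea is usually the cleanest fix: pair each predicate with its arguments and the value to report, then loop. A sketch, with placeholder argument tuples since each is_funcN takes its own parameters:

        # Each entry: (predicate, arguments for that predicate, value to report).
        checks = [
            (is_func1, (param1_a, param2_a), 1),
            (is_func2, (param1_b, param2_b), 2),
            # ... up to is_func20
        ]

        for predicate, args, value in checks:
            if predicate(*args):
                abc(value)
                some_list.append(value)
                break  # only one predicate is expected to be true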

    Read the article

  • restrict duplicate rows in specific columns in mysql

    - by JPro
    I have a query like this :

        select testset,
               count(distinct results.TestCase) as runs,
               Sum(case when Verdict = "PASS" then 1 else 0 end) as pass,
               Sum(case when Verdict <> "PASS" then 1 else 0 end) as fails,
               Sum(case when latest_issue <> "NULL" then 1 else 0 end) as issues,
               Sum(case when latest_issue <> "NULL" and issue_type = "TC" then 1 else 0 end) as TC_issues
        from results
        join testcases on results.TestCase = testcases.TestCase
        where platform = "T1_PLATFORM" AND testcases.CaseType = "M2" and testcases.dummy <> "flag_1"
        group by testset
        order by results.TestCase

    The result set I get is :

        testset  runs  pass  fails  issues  TC_issues
        T1         66   125     73      38         33
        T2         18    19     16      16         15
        T3         57    58     55      55         29
        T4         52    43     12       0          0
        T5        193   223    265     130         22
        T6         23    12     11       0          0

    My problem is, this is a result table which has testcases running multiple times. So, I am able to restrict the runs using the distinct TestCases but when I want the pass and fails, since I am using case I am unable to eliminate the duplicates. Is there any way to achieve what I want? any help please? thanks.
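
    One way to keep the CASE sums from counting a test case more than once is to collapse the duplicates in a derived table first and aggregate over that. A sketch only: it assumes one verdict per TestCase is what you want to keep (MAX() is an arbitrary tie-breaker, so substitute your own rule, e.g. the latest run), and it omits the join and WHERE filters of the original query for brevity:

        SELECT testset,
               COUNT(*)                                           AS runs,
               SUM(CASE WHEN Verdict = 'PASS' THEN 1 ELSE 0 END)  AS pass,
               SUM(CASE WHEN Verdict <> 'PASS' THEN 1 ELSE 0 END) AS fails
        FROM (
            SELECT testset, TestCase, MAX(Verdict) AS Verdict
            FROM results
            GROUP BY testset, TestCase
        ) AS one_row_per_case
        GROUP BY testset;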

    Read the article

  • Why can't I pass an object of type T to a method on an object of type <? extends T>?

    - by Matt
    In Java, assume I have the following class Container that contains a list of class Items: public class Container<T> { private List<Item<? extends T>> items; private T value; public Container(T value) { this.value = value; } public void addItem(Item item) { items.add(item); } public void doActions() { for (Item item : items) { item.doAction(value); } } } public abstract class Item<T> { public abstract void doAction(T item); } Eclipse gives the error: The method doAction(capture#1-of ? extends T) in the type Item is not applicable for the arguments (T) I've been reading generics examples and various postings around, but I still can't figure out why this isn't allowed. Eclipse also doesn't give any helpful tips in its proposed fix, either. The variable value is of type T, why wouldn't it be applicable for ? extends T?.
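
    The compiler rejects the call because an Item<? extends T> might really be, say, an Item<SomeSubtypeOfT>, and a plain T is not guaranteed to be that subtype. Since these items consume T values, the usual fix (PECS: producer-extends, consumer-super) is to declare them with ? super T, or simply as Item<T>. A sketch of the adjusted class:

        import java.util.ArrayList;
        import java.util.List;

        public class Container<T> {
            // consumers of T are declared with "? super T"
            private final List<Item<? super T>> items = new ArrayList<Item<? super T>>();
            private final T value;

            public Container(T value) { this.value = value; }

            public void addItem(Item<? super T> item) { items.add(item); }

            public void doActions() {
                for (Item<? super T> item : items) {
                    item.doAction(value); // allowed: every Item<? super T> accepts a T
                }
            }
        }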

    Read the article

  • PHP Session doesn't get read in next page after login validation, Why?

    - by NetStar
    I have a web site, and when my users log in it takes them to verify.php (which connects to the database and matches the email and password against the user input; if they match, it puts the client data into the session and takes the client to /memberarea/index.php, otherwise back to the login page with the message "Invalid Email or password!"): <?php ob_start(); session_start(); $email=$_POST['email']; $pass=md5($_POST['pass']); include("conn.php"); // connects to Database $sql="SELECT * FROM `user` WHERE email='$email' AND pass='$pass'"; $result=mysql_query($sql); $new=mysql_fetch_array($result); $_SESSION['fname']=$new['fname']; $_SESSION['lname']=$new['lname']; $_SESSION['email1']=$new['email1']; $_SESSION['passwrd']=$new['passwrd']; $no=mysql_num_rows($result); if ($no==1){ header('Location:memberarea/index.php'); }else { header("Location:login.php?m=$msg"); //msg="Invalid Login" } ?> After the email and password are verified it takes them to /memberarea/index.php (this is where the problem happens), where index.php checks whether a session has been created, in order to block outsiders from entering the member area, and sends them back to the login page otherwise: <? session_start(); isset($_SESSION['email']) && isset($_SESSION['passwrd']) The problem: the client gets verified in verify.php (the code is above), but it only moves on to /memberarea/index.php after I put ob_start(); on top of session_start();. If I remove ob_start(), it keeps the client on the verify.php page and displays a "headers already sent" error. After I add ob_start(), it goes to /memberarea/index.php but the session is blank, so it goes back to the login page and displays the error ($msg) "Invalid Login" which I programmed to display. Can anyone tell me why the session can't pass values from verify.php to /memberarea/index.php?
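
    Two details in the code as posted are worth checking before anything else: verify.php stores the address under $_SESSION['email1'] while index.php tests $_SESSION['email'], so that check can never pass, and the session variables are assigned before the row count is confirmed. A hedged sketch of the verify step (function names follow the post; the exact column names depend on your schema):

        <?php
        session_start();               // before any output, so no ob_start() band-aid is needed
        include("conn.php");
        // ... build and run the query exactly as in the post ...
        if (mysql_num_rows($result) == 1) {
            $new = mysql_fetch_array($result);
            $_SESSION['email']   = $new['email1'];   // use the SAME key that index.php checks
            $_SESSION['passwrd'] = $new['passwrd'];
            header('Location: memberarea/index.php');
            exit;                                     // stop after the redirect
        }
        header('Location: login.php?m=Invalid+Login');
        exit;

    If removing ob_start() brings back the "headers already sent ... output started at line 1" error, also check verify.php for a UTF-8 BOM or whitespace before <?php.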

    Read the article

  • Gui for viewing Apache headers

    - by user49249
    Is there any GUI for viewing the Apache headers being served by a chain of reverse proxy servers? I have a cloud which uses a few proxy servers between the client and the actual server that has to serve the original request. All servers are Unix servers. When there is a problem I can't figure out, posting it here is painful: downloading and FTPing the headers along with all the logs, logging in to each proxy server, opening a browser and exporting the X display to a remote server for each node in the chain, observing the HTTP responses, checking the request on each of those servers, and then posting the logs with configuration and response takes at least 2-3 hours just to write one email. Is there a shorter way to do this?

    Read the article

  • Virtualmin domain name registration php

    - by David Maitland
    In a PHP web page I need to run the following command to create a new domain: virtualmin create-domain --domain DOMAIN --pass PASS --plan 'Standard Package' --limits-from-plan --features-from-plan This is usually executed in a shell, but I don't know how to do it from a web page, and I also need to take the domain string and pass string from a web form. Can anyone help with the PHP code? My skills are basic and I have already tried a few things that just don't work. Thanks.
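
    A common pattern is to escape the form values and hand the command to the shell. A sketch only: it assumes form fields named domain and pass, and it assumes the web server user is actually permitted to run the Virtualmin CLI (normally root, so a sudo rule or Virtualmin's remote API may be needed in practice):

        <?php
        $domain = escapeshellarg($_POST['domain']);
        $pass   = escapeshellarg($_POST['pass']);

        $cmd = "virtualmin create-domain --domain $domain --pass $pass"
             . " --plan 'Standard Package' --limits-from-plan --features-from-plan";

        $output = shell_exec($cmd . ' 2>&1');   // capture errors as well
        echo nl2br(htmlspecialchars($output));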

    Read the article

  • Python - 2 Questions: Editing a variable in a function and changing the order of if else statements

    - by Eric
    First of all, I should explain what I'm trying to do first. I'm creating a dungeon crawler-like game, and I'm trying to program the movement of computer characters/monsters in the map. The map is basically a Cartesian coordinate grid. The locations of characters are represented by tuples of the x and y values, (x,y). The game works by turns, and in a turn a character can only move up, down, left or right 1 space. I'm creating a very simple movement system where the character will simply make decisions to move on a turn by turn basis. Essentially a 'forgetful' movement system. A basic flow chart of what I'm intending to do: Find direction towards destination Make a priority list of movements to be done using the direction eg.('r','u','d','l') means it would try to move right first, then up, then down, then left. Try each of the possibilities following the priority order. If the first movement fails (blocked by obstacle etc.), then it would successively try the movements until the first one that is successful, then it would stop. At step 3, the way I'm trying to do it is like this: def move(direction,location): try: -snip- # Tries to move, raises the exception Movementerror if cannot move in the direction return 1 # Indicates movement successful except Movementerror: return 0 # Indicates movement unsuccessful (thus character has not moved yet) prioritylist = ('r','u','d','l') if move('r',location): pass elif move('u',location): pass elif move('d',location): pass elif move('l',location): pass else: pass In the if/else block, the program would try the first movement on the priority on the priority list. At the move function, the character would try to move. If the character is not blocked and does move, it returns 1, leading to the pass where it would stop. If the character is blocked, it returns 0, then it tries the next movement. However, this results in 2 problems: How do I edit a variable passed into a function inside the function itself, while returning if the edit is successful? I have been told that you can't edit a variable inside a function as it won't really change the value of the variable, it just makes the variable inside the function refer to something else while the original variable remain unchanged. So, the solution is to return the value and then assign the variable to the returned value. However, I want it to return another value indicating if this edit is successful, so I want to edit this variable inside the function itself. How do I do so? How do I change the order of the if/else statements to follow the order of the priority list? It needs to be able to change during runtime as the priority list can change resulting in a different order of movement to try.
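
    Both sub-questions have the same idiomatic answer: since the location tuple cannot be changed in place, have move() return both the success flag and the (possibly new) location, and drive the attempts with a loop over the priority list instead of a hard-coded if/elif chain. A sketch; step() and MovementError stand in for the game's own movement logic and exception:

        def move(direction, location):
            """Try one step; return (moved, new_location) rather than mutating the argument."""
            try:
                new_location = step(direction, location)   # placeholder for the real move
                return True, new_location
            except MovementError:
                return False, location

        priority_list = ('r', 'u', 'd', 'l')

        for direction in priority_list:       # order changes whenever the priority list does
            moved, location = move(direction, location)
            if moved:
                break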

    Read the article

  • List in C# - passing multiple entries as a single object

    - by Karthick
    Hi, I have a method (C#): public void MethodName(List<Order> Order, int ID) I need to call this from a main page. I know how to pass an integer value to ID, but I am not able to pass multiple items in a single list. The list of orders should have two number entries (i.e. Order.number1 and Order.number2). How should I pass a single list containing multiple entries of number1 and number2 as a parameter to this method, so that I can loop through and find them? Thanks.
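
    A sketch of building the list at the call site and reading it back inside the method; the Number1/Number2 property names are guesses based on the question:

        // Assumes: using System; using System.Collections.Generic;
        // Caller: the whole list travels as one argument.
        var orders = new List<Order>
        {
            new Order { Number1 = 10, Number2 = 20 },
            new Order { Number1 = 30, Number2 = 40 }
        };
        MethodName(orders, 42);

        // Inside the method, loop over the entries.
        public void MethodName(List<Order> orders, int ID)
        {
            foreach (Order order in orders)
            {
                Console.WriteLine(order.Number1 + " / " + order.Number2);
            }
        }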

    Read the article

  • Accessing Ember component scope from block form?

    - by user3009816
    I want to let a user pass a custom text field, App.CustomTextField, in a Ember component using block form. However, that App.CustomTextField needs access to the component to manipulate its properties. How can I pass the component to the textfield using block form? I would like to pass the component as a property to App.CustomTextField, but how do I access the component's scope? {{#blog-post}} {{view App.CustomTextField component=?}} {{/blog-post}}

    Read the article

  • ASP.NET GZip Encoding Caveats

    - by Rick Strahl
    GZip encoding in ASP.NET is pretty easy to accomplish using the built-in GZipStream and DeflateStream classes and applying them to the Response.Filter property.  While applying GZip and Deflate behavior is pretty easy there are a few caveats that you have watch out for as I found out today for myself with an application that was throwing up some garbage data. But before looking at caveats let’s review GZip implementation for ASP.NET. ASP.NET GZip/Deflate Basics Response filters basically are applied to the Response.OutputStream and transform it as data is written to it through the ASP.NET Response object. So a Response.Write eventually gets written into the output stream which if a filter is also written through the filter stream’s interface. To perform the actual GZip (and Deflate) encoding typically used by Web pages .NET includes the GZipStream and DeflateStream stream classes which can be readily assigned to the Repsonse.OutputStream. With these two stream classes in place it’s almost trivially easy to create a couple of reusable methods that allow you to compress your HTTP output. In my standard WebUtils utility class (from the West Wind West Wind Web Toolkit) created two static utility methods – IsGZipSupported and GZipEncodePage – that check whether the client supports GZip encoding and then actually encodes the current output (note that although the method includes ‘Page’ in its name this code will work with any ASP.NET output). /// <summary> /// Determines if GZip is supported /// </summary> /// <returns></returns> public static bool IsGZipSupported() { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (!string.IsNullOrEmpty(AcceptEncoding) && (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate"))) return true; return false; } /// <summary> /// Sets up the current page or handler to use GZip through a Response.Filter /// IMPORTANT: /// You have to call this method before any output is generated! /// </summary> public static void GZipEncodePage() { HttpResponse Response = HttpContext.Current.Response; if (IsGZipSupported()) { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (AcceptEncoding.Contains("deflate")) { Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "deflate"); } else { Response.Filter = new System.IO.Compression.GZipStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "gzip"); } } } As you can see the actual assignment of the Filter is as simple as: Response.Filter = new DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); which applies the filter to the OutputStream. You also need to ensure that your response reflects the new GZip or Deflate encoding and ensure that any pages that are cached in Proxy servers can differentiate between pages that were encoded with the various different encodings (or no encoding). 
To use this utility function now is trivially easy: In any ASP.NET code that wants to compress its Response output you simply use: protected void Page_Load(object sender, EventArgs e) { WebUtils.GZipEncodePage(); Entry = WebLogFactory.GetEntry(); var entries = Entry.GetLastEntries(App.Configuration.ShowEntryCount, "pk,Title,SafeTitle,Body,Entered,Feedback,Location,ShowTopAd", "TEntries"); if (entries == null) throw new ApplicationException("Couldn't load WebLog Entries: " + Entry.ErrorMessage); this.repEntries.DataSource = entries; this.repEntries.DataBind(); } Here I use an ASP.NET page, but the above WebUtils.GZipEncode() method call will work in any ASP.NET application type including HTTP Handlers. The only requirement is that the filter needs to be applied before any other output is sent to the OutputStream. For example, in my CallbackHandler service implementation by default output over a certain size is GZip encoded. The output that is generated is JSON or XML and if the output is over 5k in size I apply WebUtils.GZipEncode(): if (sbOutput.Length > GZIP_ENCODE_TRESHOLD) WebUtils.GZipEncodePage(); Response.ContentType = ControlResources.STR_JsonContentType; HttpContext.Current.Response.Write(sbOutput.ToString()); Ok, so you probably get the idea: Encoding GZip/Deflate content is pretty easy. Hold on there Hoss –Watch your Caching Or is it? There are a few caveats that you need to watch out for when dealing with GZip content. The fist issue is that you need to deal with the fact that some clients don’t support GZip or Deflate content. Most modern browsers support it, but if you have a programmatic Http client accessing your content GZip/Deflate support is by no means guaranteed. For example, WinInet Http clients don’t support GZip out of the box – it has to be explicitly implemented. Other low level HTTP clients on other platforms too don’t support GZip out of the box. The problem is that your application, your Web Server and Proxy Servers on the Internet might be caching your generated content. If you return content with GZip once and then again without, either caching is not applied or worse the wrong type of content is returned back to the client from a cache or proxy. The result is an unreadable response for *some clients* which is also very hard to debug and fix once in production. You already saw the issue of Proxy servers addressed in the GZipEncodePage() function: // Allow proxy servers to cache encoded and unencoded versions separately Response.AppendHeader("Vary", "Content-Encoding"); This ensures that any Proxy servers also check for the Content-Encoding HTTP Header to cache their content – not just the URL. The same thing applies if you do OutputCaching in your own ASP.NET code. If you generate output for GZip on an OutputCached page the GZipped content will be cached (either by ASP.NET’s cache or in some cases by the IIS Kernel Cache). But what if the next client doesn’t support GZip? She’ll get served a cached GZip page that won’t decode and she’ll get a page full of garbage. Wholly undesirable. 
To fix this you need to add some custom OutputCache rules by way of the GetVaryByCustom() HttpApplication method in your global_ASAX file: public override string GetVaryByCustomString(HttpContext context, string custom) { // Override Caching for compression if (custom == "GZIP") { string acceptEncoding = HttpContext.Current.Response.Headers["Content-Encoding"]; if (string.IsNullOrEmpty(acceptEncoding)) return ""; else if (acceptEncoding.Contains("gzip")) return "GZIP"; else if (acceptEncoding.Contains("deflate")) return "DEFLATE"; return ""; } return base.GetVaryByCustomString(context, custom); } In a page that use Output caching you then specify: <%@ OutputCache Duration="180" VaryByParam="none" VaryByCustom="GZIP" %> To use that custom rule. It’s all Fun and Games until ASP.NET throws an Error Ok, so you’re up and running with GZip, you have your caching squared away and your pages that you are applying it to are jamming along. Then BOOM, something strange happens and you get a lovely garbled page that look like this: Lovely isn’t it? What’s happened here is that I have WebUtils.GZipEncode() applied to my page, but there’s an error in the page. The error falls back to the ASP.NET error handler and the error handler removes all existing output (good) and removes all the custom HTTP headers I’ve set manually (usually good, but very bad here). Since I applied the Response.Filter (via GZipEncode) the output is now GZip encoded, but ASP.NET has removed my Content-Encoding header, so the browser receives the GZip encoded content without a notification that it is encoded as GZip. The result is binary output. Here’s what Fiddler says about the raw HTTP header output when an error occurs when GZip encoding was applied: HTTP/1.1 500 Internal Server Error Cache-Control: private Content-Type: text/html; charset=utf-8 Date: Sat, 30 Apr 2011 22:21:08 GMT Content-Length: 2138 Connection: close ?`I?%&/m?{J?J??t??` … binary output striped here Notice: no Content-Encoding header and that’s why we’re seeing this garbage. ASP.NET has stripped the Content-Encoding header but left our filter intact. So how do we fix this? In my applications I typically have a global Application_Error handler set up and in this case I’ve been using that. One thing that you can do in the Application_Error handler is explicitly clear out the Response.Filter and set it to null at the top: protected void Application_Error(object sender, EventArgs e) { // Remove any special filtering especially GZip filtering Response.Filter = null; … } And voila I get my Yellow Screen of Death or my custom generated error output back via uncompressed content. BTW, the same is true for Page level errors handled in Page_Error or ASP.NET MVC Error handling methods in a controller. 
Another and possibly even better solution is to check whether a filter is attached just before the headers are sent to the client as pointed out by Adam Schroeder in the comments: protected void Application_PreSendRequestHeaders() { // ensure that if GZip/Deflate Encoding is applied that headers are set // also works when error occurs if filters are still active HttpResponse response = HttpContext.Current.Response; if (response.Filter is GZipStream && response.Headers["Content-encoding"] != "gzip") response.AppendHeader("Content-encoding", "gzip"); else if (response.Filter is DeflateStream && response.Headers["Content-encoding"] != "deflate") response.AppendHeader("Content-encoding", "deflate"); } This uses the Application_PreSendRequestHeaders() pipeline event to check for compression encoding in a filter and adjusts the content accordingly. This is actually a better solution since this is generic – it’ll work regardless of how the content is cleaned up. For example, an error Response.Redirect() or short error display might get changed and the filter not cleared and this code actually handles that. Sweet, thanks Adam. It’s unfortunate that ASP.NET doesn’t natively clear out Response.Filters when an error occurs just as it clears the Response and Headers. I can’t see where leaving a Filter in place in an error situation would make any sense, but hey - this is what it is and it’s easy enough to fix as long as you know where to look. Riiiight! IIS and GZip I should also mention that IIS 7 includes good support for compression natively. If you can defer encoding to let IIS perform it for you rather than doing it in your code by all means you should do it! Especially any static or semi-dynamic content that can be made static should be using IIS built-in compression. Dynamic caching is also supported but is a bit more tricky to judge in terms of performance and footprint. John Forsyth has a great article on the benefits and drawbacks of IIS 7 compression which gives some detailed performance comparisons and impact reviews. I’ll post another entry next with some more info on IIS compression since information on it seems to be a bit hard to come by. Related Content Built-in GZip/Deflate Compression in IIS 7.x HttpWebRequest and GZip Responses © Rick Strahl, West Wind Technologies, 2005-2011Posted in ASP.NET   IIS7  

    Read the article

  • SOA Suite Integration: Part 2: A basic BPEL process

    - by Anthony Shorten
    This is the next in the series about SOA Suite integration with Oracle Utilities Application Framework. One of the first scenarios I am going to illustrate in this series is building a basic BPEL process using Web Service calls to the Oracle Utilities Application Framework. The scenario is this. I will pass in the userid and the BPEL process will call our the AS-User Web Service we created in Part 1. This is just a basic test and illustrate how to import the Web Service into SOA Suite. To use this scenario, you will need access to Oracle SOA Suite, access to a copy of any Oracle Utilities Application Framework based product and Oracle JDeveloper (to build the process). First of all you need to start Oracle JDeveloper and create a new SOA Project to house the BPEL process in. For the purposes of this example I will call the project simpleBPEL and verify that SOA is part of the project. I will select "Composite with BPEL" to denote it as a BPEL process. I can also the same process to create a Mediator or OSB project (refer to the JDeveloper documentation on these technologies). For this example I will use BPEL 1.1 as my specification standard (BPEL 2.0 can also be used if desired). I give the individual BPEL process as simpleBPEL (you can use a different name but I wanted to keep the project and process the same for this example). I will also build a Synchronous BPEL Process as I want a response from the Web Service. I will leave the defaults to save time. I have no have a blank canvas to build my BPEL process against. Note: for simplicity I am going to use as much defaulting as possible. In fact I am not going to specify an input schema for the incoming call as I will use the basic single field used by BPEL as default. The first step is to import the AS-User Web Service into my BPEL project. To do this I use the standard Web Service BPEL component from the Component Palette to import the WSDL into the BPEL project. Now the tricky part (a joke), you drag and drop the component from the Palette onto the right side of the canvas in the Partner Links swim lane. This swim lane is reserved for Partner Links that have a Partner Role (i.e. being called rather than calling). When you drop the Web Service onto the canvas the Create Web Service wizard is invoked to ask for details of the Web Service. At this point you give the BPEL node a name. I have used the name RetrieveUser as a name. I placed the WSDL URL from the XAI Inbound Service screen in the WSDL URL. Once you specify the URL you can press the Find existing WSDL's button to load the information into BPEL from the call. You will notice the Port Type is prefilled with the port from the WSDL. I also suggest that you check copy wsdl and it's dependent artifacts into the project if you intending to work on the BPEL process offline. If you do not check this your target application must be accessible when you work on the BPEL process (that is not always convenient). Note: For the perceptive of you will notice that the URL specified in this example is different to the URL in the last post. The reason is for the demonstrations I shifted to a new server and did not redo all of the past screen captures. If you copy the WSDL into the project you will get an information screen about Localize Files. It is just a confirmation screen. The last confirmation screen is a summary of the partner link (the main tab is locked for editing at this stage). At this stage you have successfully imported the Web Service. 
To complete the setup of the Web Service you need to set the credentials for the Web Service to use. Refer to the past post on how to do that. Now to use the Web Service. To call the Web Service (as it is just imported not connected to the BPEL process yet), you must add an Invoke action to your BPEL Process. To do this, select Invoke action from the BPEL Constructs zone on the Component Palette and drop it on the edit nodes between the receiveInput and replyOutput nodes This will create an empty Invoke action. You will notice some connectors on the Invoke node. Grab the node closest to your Web Service and drag it to connect the Invoke to your Web Service. This instructs BPEL to use the Invoke to call the Web Service. Once the Invoke action is connected to the Web Service an Edit Invoke edit dialog is displayed. At this point I suggest you name the Invoke node. It is important to name the nodes straightaway and name them appropriately for you to trace the logic. I used InvokeUser as the name in this example. To complete the node configuration you must create Variables to hold the input and output for the call. To do this clock on Automatically Create Input Variable on the Edit Invoke dialog. You will be presented with a default variable name. It uses the node name (that is why it is important to name the node before hitting this button) as a prefix. You can name the variable anything but I usually take the default. Repeat the same for the output variable. You now have a completed node for invoking the service. You have a very basic BPEL process which contains an input, invoke and output node. It is not complete yet though. You need to tell the BPEL process how to pass data from the input to the invoke step and how to take the output from the service call and pass it back to the service. You need to now add an Assign node to assign the input to the Web Service. To do this select Assign activity from BPEL Constructs zone in the Component Palette. Drag and drop the Assign activity between the receiveInput and InvokeUser nodes as you want to pass data between these two nodes. You have now added a new Assign node to your BPEL process Double clicking the node allows you to specify the name of the node. I use AssignUser to describe that I am assigning user data. On the Copy Rules tab you can specify the mapping between the input variable InputVariable/payload/process/input string and the input variable for the Web Service call. We are passing data from the input to BPEL to the relevant input variable on the Web Service. This is simply drag and drop between the two data structures. In the example, I am using the input to pass to the user element in my Web Service as the user is the primary key for the object. The fields become linked (which means data from source will be copied to target). Almost there. You now need to process the output from the Web Service call to the outputVariable of the client call. I have decided to pass back one piece of data, the name associated with the user by concatenating the firstName and lastName elements from the Web Service call. To do this I will use a Transform as it is not just a matter of an Assign action. It is a concatenation operation. This also illustrates how you can use BPEL functionality to transform data from a Web Service call. As with the other components you drag and drop the Transform component to the appropriate place in the BPEL process. 
In this case we want to transform the output from the Web Service call so we want it after the InvokeUser action and the replyOutput action. The Transform component is actually part of the Oracle Extensions to the BPEL specification. Double clicking the Transform node will allow you to name the node.  In this example I used TransformName. To complete the transform I need to tell the product the source of the transformation and the target of the transform. In the example this is the InvokeUser output variable. I also named the mapper file to TransformName. By clicking the + or pencil icon next to the map I can create the map. The mapping screen is shows the source and target schemas for me to map across. As with the assign I can map the relevant elements. In my example, I first map the firstName from the Web Service to the result element. As I want to concatenate the names, I drop the concat function on the call line. I now attach the last name to the function to indicate the concatenation of the field. By default the names will be concatenated with no space. To make the name legible I add a space between the field by clicking the function and adding a space in the call. I now have a completed mapping. I can now save the whole project as my BPEL process is now complete. As you can see the following happens: We accept input from the client (the userid for the call) in the receiveInput step. We assign that value to the input parameters for the Web Service call in the AssignUser step. We invoke the Web Service call to retrieve the data from the product in the InvokeUser step. We take the output from the InvokeUser step and concatenate the names in the TransformName step. We pass back the data in the replyOutput step. At this point we can deploy the BPEL process to the SOA Suite server. I will not cover this aspect as it really all SOA Suite specific (it is all done via Oracle JDeveloper). Now we need to test the service in SOA Suite. We will use the Fusion Middleware Control test facility. I will assume that credentials have also been setup as per our previous post (else you will get a 401 error). You navigate to the deployed BPEL process within Fusion Middleware Control and select the Test Service option. Specify some test data on the payload at the bottom of the Test Service screen. In my case I am returning my own userid information. On the response tab you will see the result. It works. You can verify the steps using the Audit trace facility on individual calls. As you can see this is a basic BPEL but you get the idea of importing the Web Service is pretty straightforward. You can create more sophisticated BPEL processes using the full facilities in Oracle SOA Suite. I just showed you the basic principals.

    Read the article

  • Problem to install Apache 2.4.2 in Ubuntu 12.04

    - by Michael
    I followed these steps to install Apache 2.4.2 in Ubuntu 12.04, but it seems Apache is not installed, here's what I did (I followed the steps in this site http://www.discusswire.com/apache-2-4-installation-ubuntu/): sudo apt-get install build-essential sudo apt-get build-dep apache2 wget http://apache.mirrors.pair.com/httpd/httpd-2.4.2.tar.gz tar -xzvf httpd-2.4.2.tar.gz && cd httpd-2.4.2 sudo ./configure --prefix=/usr/local/apache2 --enable-mods-shared=all --enable-deflate --enable-proxy --enable-proxy-balancer --enable-proxy-http --with-mpm=prefork sudo make sudo make install when I tried to start by issuing sudo /usr/local/apache2/bin/apachectl start at terminal, I got the following warning: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message" and when I typed top at terminal, the apache is not there. I also tried to go to http://localhost/ or 127.0.0.1 or even 127.0.1.1 it showed "Can't establish connection to server ..." message. ps: I checked the error log and it showed "[Fri Jul 27 15:49:00.703901 2012] [proxy_balancer:emerg] [pid 20781] AH01177: Failed to lookup provider 'shm' for 'slotmem': is mod_slotmem_shm loaded?? [Fri Jul 27 15:49:00.704083 2012] [:emerg] [pid 20781] AH00020: Configuration Failed, exiting" What I'm missing? Thanks Michael
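
    The error log gives the real failure: in Apache 2.4, mod_proxy_balancer needs mod_slotmem_shm (and a load-balancing method module), which the source build compiled as shared modules but the generated httpd.conf does not load. A sketch of the httpd.conf lines to check, with paths relative to the --prefix used above; if they are present but commented out, uncomment them:

        # /usr/local/apache2/conf/httpd.conf
        LoadModule slotmem_shm_module modules/mod_slotmem_shm.so
        LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so

        # Silences the AH00558 "could not reliably determine the server's
        # fully qualified domain name" warning
        ServerName localhost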

    Read the article

  • Advanced reporting in Oracle Service Bus

    - by [email protected]
    Reporting in OSB is useful: it allows you to audit messages going through OSB. The service bus console allows you to view the content that you reported. To report data you simply use the Report action in your proxy. The action itself is rather straightforward. You specify the content to report ($body for example), an optional key for easier search (for example the id of the record) and that's it. Sometimes though, what you want to do is a bit more complicated. I recently had a case where the key was built from the message type (XML) and the id of the message. Seems quite simple, but the id could be any element anywhere in the message depending on its type. This could be handled by 'if' statements, but adding new cases would mean changing the proxy service, and if you have lots of message types this can get boring, so I wanted the solution to be as dynamic as possible (read "just change a configuration file and that's it"). The following entry details how you can make this dynamic in your proxy by using XQuery/XSLT. First step, the XQuery: We're going to use an XQuery to make the mapping between the XML message type and the location of the identifier in it. We assume here that the message type is the first node of the input XML and use a rather simple XPath to find the identifier. The XQuery looks like this for two messages : <reportmapping> <row> <logical>messageType1</logical> <type>MT1</type> <reportingreferencelocation>//customID</reportingreferencelocation> </row> <row> <logical>messageType2</logical> <type>MT2</type> <reportingreferencelocation>//theOtherIDLocation</reportingreferencelocation> </row> </reportmapping> Second step, the XSLT: To get the identifier value of the dynamic path, we're going to use an XSLT transformation. This XSLT takes an XML parameter as input which contains our XPath (coming from the previous XQuery). The XSLT looks like this : <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xalan="http://xml.apache.org/xalan"> <xsl:param name="PathToNode"/> <xsl:template match="/"> <IDVALUE> <xsl:value-of select="xalan:evaluate($PathToNode/reportingreferencelocation)"/> </IDVALUE> </xsl:template> </xsl:stylesheet> (note the use of a xalan function here.
Xalan is the XSLT processor used in weblogic server)   Last step, the proxy service We're now going to wire everything in the proxy service. First we assign the XQuery to a variable. We then get the entry in the XQuery corresponding to the record we're treating. We're then extracting the id of the message using the XSLT transformation Final assign is to built the final variable that will be used as the reporting key. The report action is then called with this variable. Everything is setup. We're now ready to test.   Testing the solution Using the test console, we're sending our first XML ... <messageType1>                 <sender>test console 1</sender>                 <customID>ID12345</customID >                 <content>                                 <field1>value of field 1</field1>                 </content> </messageType1>   ... and a second one of another supported type <messageType2>                 <header>                                 <theOtherIDLocation >ID67890</theOtherIDLocation >                 </header> <body>                                <data>Test data</data>                 </body> </messageType2>   Reporting result is :  Conclusion Report is done as expected. Now if a new message type must be supported we only have to modify the XQuery and nothing at the proxy service level.   Sample project attached to this entry.sbconfig-dynamicReport.jar  

    Read the article

  • array and array_view from amp.h

    - by Daniel Moth
    This is a very long post, but it also covers what are probably the classes (well, array_view at least) that you will use the most with C++ AMP, so I hope you enjoy it! Overview The concurrency::array and concurrency::array_view template classes represent multi-dimensional data of type T, of N dimensions, specified at compile time (and you can later access the number of dimensions via the rank property). If N is not specified, it is assumed that it is 1 (i.e. single-dimensional case). They are rectangular (not jagged). The difference between them is that array is a container of data, whereas array_view is a wrapper of a container of data. So in that respect, array behaves like an STL container, whereas the closest thing an array_view behaves like is an STL iterator (albeit with random access and allowing you to view more than one element at a time!). The data in the array (whether provided at creation time or added later) resides on an accelerator (which is specified at creation time either explicitly by the developer, or set to the default accelerator at creation time by the runtime) and is laid out contiguously in memory. The data provided to the array_view is not stored by/in the array_view, because the array_view is simply a view over the real source (which can reside on the CPU or other accelerator). The underlying data is copied on demand to wherever the array_view is accessed. Elements which differ by one in the least significant dimension of the array_view are adjacent in memory. array objects must be captured by reference into the lambda you pass to the parallel_for_each call, whereas array_view objects must be captured by value (into the lambda you pass to the parallel_for_each call). Creating array and array_view objects and relevant properties You can create array_view objects from other array_view objects of the same rank and element type (shallow copy, also possible via assignment operator) so they point to the same underlying data, and you can also create array_view objects over array objects of the same rank and element type e.g.   array_view<int,3> a(b); // b can be another array or array_view of ints with rank=3 Note: Unlike the constructors above which can be called anywhere, the ones in the rest of this section can only be called from CPU code. You can create array objects from other array objects of the same rank and element type (copy and move constructors) and from other array_view objects, e.g.   array<float,2> a(b); // b can be another array or array_view of floats with rank=2 To create an array from scratch, you need to at least specify an extent object, e.g. array<int,3> a(myExtent);. Note that instead of an explicit extent object, there are convenience overloads when N<=3 so you can specify 1-, 2-, 3- integers (dependent on the array's rank) and thus have the extent created for you under the covers. At any point, you can access the array's extent thought the extent property. The exact same thing applies to array_view (extent as constructor parameters, incl. convenience overloads, and property). While passing only an extent object to create an array is enough (it means that the array will be written to later), it is not enough for the array_view case which must always wrap over some other container (on which it relies for storage space and actual content). So in addition to the extent object (that describes the shape you'd like to be viewing/accessing that data through), to create an array_view from another container (e.g. 
std::vector) you must pass in the container itself (which must expose .data() and a .size() methods, e.g. like std::array does), e.g.   array_view<int,2> aaa(myExtent, myContainerOfInts); Similarly, you can create an array_view from a raw pointer of data plus an extent object. Back to the array case, to optionally initialize the array with data, you can pass an iterator pointing to the start (and optionally one pointing to the end of the source container) e.g.   array<double,1> a(5, myVector.begin(), myVector.end()); We saw that arrays are bound to an accelerator at creation time, so in case you don’t want the C++ AMP runtime to assign the array to the default accelerator, all array constructors have overloads that let you pass an accelerator_view object, which you can later access via the accelerator_view property. Note that at the point of initializing an array with data, a synchronous copy of the data takes place to the accelerator, and then to copy any data back we'll see that an explicit copy call is required. This does not happen with the array_view where copying is on demand... refresh and synchronize on array_view Note that in the previous section on constructors, unlike the array case, there was no overload that accepted an accelerator_view for array_view. That is because the array_view is simply a wrapper, so the allocation of the data has already taken place before you created the array_view. When you capture an array_view variable in your call to parallel_for_each, the copy of data between the non-CPU accelerator and the CPU takes place on demand (i.e. it is implicit, versus the explicit copy that has to happen with the array). There are some subtleties to the on-demand-copying that we cover next. The assumption when using an array_view is that you will continue to access the data through the array_view, and not through the original underlying source, e.g. the pointer to the data that you passed to the array_view's constructor. So if you modify the data through the array_view on the GPU, the original pointer on the CPU will not "know" that, unless one of two things happen: you access the data through the array_view on the CPU side, i.e. using indexing that we cover below you explicitly call the array_view's synchronize method on the CPU (this also gets called in the array_view's destructor for you) Conversely, if you make a change to the underlying data through the original source (e.g. the pointer), the array_view will not "know" about those changes, unless you call its refresh method. Finally, note that if you create an array_view of const T, then the data is copied to the accelerator on demand, but it does not get copied back, e.g.   array_view<const double, 5> myArrView(…); // myArrView will not get copied back from GPU There is also a similar mechanism to achieve the reverse, i.e. not to copy the data of an array_view to the GPU. copy_to, data, and global copy/copy_async functions Both array and array_view expose two copy_to overloads that allow copying them to another array, or to another array_view, and these operations can also be achieved with assignment (via the = operator overloads). Also both array and array_view expose a data method, to get a raw pointer to the underlying data of the array or array_view, e.g. float* f = myArr.data();. Note that for array_view, this only works when the rank is equal to 1, due to the data only being contiguous in one dimension as covered in the overview section. 
Finally, there are a bunch of global concurrency::copy functions returning void (and corresponding concurrency::copy_async functions returning a future) that allow copying between arrays and array_views and iterators etc. Just browse intellisense or amp.h directly for the full set. Note that for array, all copying described throughout this post is deep copying, as per other STL container expectations. You can never have two arrays point to the same data. indexing into array and array_view plus projection Reading or writing data elements of an array is only legal when the code executes on the same accelerator as where the array was bound to. In the array_view case, you can read/write on any accelerator, not just the one where the original data resides, and the data gets copied for you on demand. In both cases, the way you read and write individual elements is via indexing as described next. To access (or set the value of) an element, you can index into it by passing it an index object via the subscript operator. Furthermore, if the rank is 3 or less, you can use the function ( ) operator to pass integer values instead of having to use an index object. e.g. array<float,2> arr(someExtent, someIterator); //or array_view<float,2> arr(someExtent, someContainer); index<2> idx(5,4); float f1 = arr[idx]; float f2 = arr(5,4); //f2 ==f1 //and the reverse for assigning, e.g. arr(idx[0], 7) = 6.9; Note that for both array and array_view, regardless of rank, you can also pass a single integer to the subscript operator which results in a projection of the data, and (for both array and array_view) you get back an array_view of rank N-1 (or if the rank was 1, you get back just the element at that location). Not Covered In this already very long post, I am not going to cover three very cool methods (and related overloads) that both array and array_view expose: view_as, section, reinterpret_as. We'll revisit those at some point in the future, probably on the team blog. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Passing Strings by Ref

    - by SGWellens
    Humbled yet again…DOH! No matter how much experience you acquire, no matter how smart you may be, no matter how hard you study, it is impossible to keep fully up to date on all the nuances of the technology we are exposed to. There will always be gaps in our knowledge: Little 'dead zones' of uncertainty. For me, this time, it was about passing string parameters to functions. I thought I knew this stuff cold. First, a little review... Value Types and Ref Integers and structs are value types (as opposed to reference types). When declared locally, their memory storage is on the stack; not on the heap. When passed to a function, the function gets a copy of the data and works on the copy. If a function needs to change a value type, you need to use the ref keyword.  Here's an example:     // ---- declaration -----------------     public struct MyStruct    {        public string StrTag;    }     // ---- functions -----------------------     void SetMyStruct(MyStruct myStruct)     // pass by value    {        myStruct.StrTag = "BBB";    }     void SetMyStruct(ref MyStruct myStruct)  // pass by ref    {        myStruct.StrTag = "CCC";    }     // ---- Usage -----------------------     protected void Button1_Click(object sender, EventArgs e)    {        MyStruct Data;        Data.StrTag = "AAA";         SetMyStruct(Data);        // Data.StrTag is still "AAA"         SetMyStruct(ref Data);        // Data.StrTag is now "CCC"    } No surprises here. All value types like ints, floats, datetimes, enums, structs, etc. work the same way. And now on to... Class Types and Ref     // ---- Declaration -----------------------------     public class MyClass    {        public string StrTag;    }     // ---- Functions ----------------------------     void SetMyClass(MyClass myClass)  // pass by 'value'    {        myClass.StrTag = "BBB";    }     void SetMyClass(ref MyClass myClass)   // pass by ref    {        myClass.StrTag = "CCC";    }     // ---- Usage ---------------------------------------     protected void Button2_Click(object sender, EventArgs e)    {        MyClass Data = new MyClass();        Data.StrTag = "AAA";         SetMyClass(Data);          // Data.StrTag is now "BBB"         SetMyClass(ref Data);        // Data.StrTag is now "CCC"    }  No surprises here either. Since Classes are reference types, you do not need the ref keyword to modify an object. What may seem a little strange is that with or without the ref keyword, the results are the same: The compiler knows what to do. So, why would you need to use the ref keyword when passing an object to a function? Because then you can change the reference itself…ie you can make it refer to a completely different object. Inside the function you can do: myClass = new MyClass() and the old object will be garbage collected and the new object will be returned to the caller. That ends the review. Now let's look at passing strings as parameters. The String Type and Ref Strings are reference types. So when you pass a String to a function, you do not need the ref keyword to change the string. Right? Wrong. Wrong, wrong, wrong. When I saw this, I was so surprised that I fell out of my chair. Getting up, I bumped my head on my desk (which really hurt). My bumping the desk caused a large speaker to fall off of a bookshelf and land squarely on my big toe. I was screaming in pain and hopping on one foot when I lost my balance and fell. I struck my head on the side of the desk (once again) and knocked myself out cold. 
When I woke up, I was in the hospital where due to a database error (thanks Oracle) the doctors had put casts on both my hands. I'm typing this ever so slowly with just my ton..tong ..tongu…tongue. But I digress. Okay, the only true part of that story is that I was a bit surprised. Here is what happens passing a String to a function.     // ---- Functions ----------------------------     void SetMyString(String myString)   // pass by 'value'    {        myString = "BBB";    }     void SetMyString(ref String myString)  // pass by ref    {        myString = "CCC";    }     // ---- Usage ---------------------------------     protected void Button3_Click(object sender, EventArgs e)    {        String MyString = "AAA";         SetMyString(MyString);        // MyString is still "AAA"  What!!!!         SetMyString(ref MyString);        // MyString is now "CCC"    } What the heck. We should not have to use the ref keyword when passing a String because Strings are reference types. Why didn't the string change? What is going on?   I spent hours unssuccessfully researching this anomaly until finally, I had a Eureka moment: This code: String MyString = "AAA"; Is semantically equivalent to this code (note this code doesn't actually compile): String MyString = new String(); MyString = "AAA"; Key Point: In the function, the copy of the reference is pointed to a new object and THAT object is modified. The original reference and what it points to is unchanged. You can simulate this behavior by modifying the class example code to look like this:      void SetMyClass(MyClass myClass)  // call by 'value'    {        //myClass.StrTag = "BBB";        myClass = new MyClass();        myClass.StrTag = "BBB";    } Now when you call the SetMyClass function without using ref, the parameter is unchanged...just like the string example.  I hope someone finds this useful. Steve Wellens

    Read the article

  • Modular Web App Network Architecture

    - by nairware
    Assuming that I am dealing with dedicated physical servers or VPSs, is it conceivable and does it make sense to have distinct servers setup with the following roles to host a web application? Reverse Proxy Web server Application server Database server Specific points of interest: I am confused how to even separate the web and application servers. My understanding was that such 3-tier architectures were feasible. It is unclear to me if the app server would reside directly between the web and database server, or if the web server could directly interact with the database as well. The app server could either do the computational heavy-lifting on behalf of the app server or it could do heavy-lifting plus control all of the business logic (as implied in the diagram above, thus denying the web server of direct database access). I am also unsure what role the reverse proxy (ex. nginx) could and should fulfill as a web server, given the above mentioned setup. I know that nginx has web server features. But I do not know if it makes sense to have the reverse proxy be its own VPS, given that the web server–in theory–would be separate from the app server.

    Read the article
