Search Results

Search found 15021 results on 601 pages for 'location aware'.


  • Professionalism of online username / handle

    - by Thanatos
    I have in the past used, and currently continue to use, the handle "thanatos" on a lot of Internet sites, and if that isn't available (which happens ~50% of the time), "deathanatos". "Thanatos" is the name of the Greek god or personification of death (not to be confused with Hades, the Greek god of the underworld). "Deathanatos" is a natural play on words that makes the handle work in situations where the preferred handle has already been taken, without having to resort to numbers and while remaining pronounceable. I adopted the handle many years ago — at the time, I was reading Edith Hamilton's Mythology, and Piers Anthony's On a Pale Horse, both still favorites of mine, and the name was born out of that. When I created the handle, I was fairly young, and valued privacy while online, not giving out my name. As I've become a more competent programmer, I'm starting to want to release some of my private works under FOSS licenses and such, and sometimes under my own name. This has started to tie this handle to my real name. I've become increasingly aware of my "web image" in the last few years, as I've been job hunting. As a programmer, I have a larger-than-average web presence, and I've started to wonder: Is this handle professional? Does a handle matter in a professional sense? Should I "rebrand"? (While one obviously wants to avoid hateful or otherwise distasteful names, is a topic such as "death" (to which my name is tied) proper? What could be frowned upon?) To try to make this a bit more programmer specific: Programmers are online — a lot — and some of us (and some who are not us) tend to put emphasis on a "web presence". I would argue that a prudent programmer (or anyone in an occupation that interacts online a lot) would be aware of their web presence. While not strictly limited to just programmers, for better or worse, it is a part of our world.

    Read the article

  • Using HTML 5 SessionState to save rendered Page Content

    - by Rick Strahl
    HTML 5 SessionState and LocalStorage are very useful and super easy to use to manage client side state. For building rich client side or SPA style applications it's a vital feature to be able to cache user data as well as HTML content in order to swap pages in and out of the browser's DOM. What might not be so obvious is that you can also use the sessionState and localStorage objects even in classic server rendered HTML applications to provide caching features between pages. These APIs have been around for a long time and are supported by most relatively modern browsers and even all the way back to IE8, so you can use them safely in your Web applications. SessionState and LocalStorage are easy The APIs that make up sessionState and localStorage are very simple. Both objects feature the same API interface which is a simple, string based key value store that has getItem, setItem, removeItem, clear and key methods. The objects are also pseudo array objects and so can be iterated like an array, with a length property and array indexers to set and get values with. Basic usage for storing and retrieval looks like this (using sessionStorage, but the syntax is the same for localStorage - just switch the objects): // set var lastAccess = new Date().getTime(); if (sessionStorage) sessionStorage.setItem("myapp_time", lastAccess.toString()); // retrieve in another page or on a refresh var time = null; if (sessionStorage) time = sessionStorage.getItem("myapp_time"); if (time) time = new Date(time * 1); else time = new Date(); sessionState stores data that is browser session specific and that has a lifetime of the active browser session or window. Shut down the browser or tab and the storage goes away. localStorage uses the same API interface, but the data is stored permanently in the browser's storage area until deleted via code or by clearing out browser cookies (not the cache). Both sessionStorage and localStorage space is limited. The spec is ambiguous about this - supposedly sessionStorage should allow for unlimited size, but it appears that most WebKit browsers support only 2.5mb for either object. This means you have to be careful what you store, especially since other applications might be running on the same domain and also use the storage mechanisms. That said, 2.5mb worth of character data is quite a bit and would go a long way. The easiest way to get a feel for how sessionState and localStorage work is to look at a simple example. You can go check out the following example online in Plunker: http://plnkr.co/edit/0ICotzkoPjHaWa70GlRZ?p=preview which looks like this: Plunker is an online HTML/JavaScript editor that lets you write and run JavaScript code and is similar to JsFiddle, but a bit cleaner to work in IMHO (thanks to John Papa for turning me on to it). The sample has two text boxes with counts that update session/local storage every time you click the related button. The counts are 'cached' in Session and Local storage. The point of these examples is that both counters survive full page reloads, and the LocalStorage counter survives a complete browser shutdown and restart. Go ahead and try it out by clicking the Reload button after updating both counters and then shutting down the browser completely and going back to the same URL (with the same browser). What you should see is that reloads leave both counters intact at the counted values, while a browser restart will leave only the local storage counter intact.
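    As an aside (this helper is not part of Rick's sample; the names here are my own and it's only a minimal sketch), the string-only API lends itself to a tiny wrapper that handles the availability check and the JSON conversion in one place, so the rest of your code never deals with raw strings:

        // Minimal sketch of a storage helper; function and key names are hypothetical.
        // It only uses the standard Web Storage calls: getItem, setItem, removeItem.
        var appStorage = {
            // Feature check - merely touching sessionStorage can throw (e.g. in some private modes).
            available: function () {
                try {
                    sessionStorage.setItem("__probe", "1");
                    sessionStorage.removeItem("__probe");
                    return true;
                } catch (e) {
                    return false;
                }
            },
            // Store any JSON-serializable value under a key.
            set: function (key, value) {
                if (!this.available()) return false;
                sessionStorage.setItem(key, JSON.stringify(value));
                return true;
            },
            // Retrieve and deserialize a value, or null if missing/unavailable.
            get: function (key) {
                if (!this.available()) return null;
                var raw = sessionStorage.getItem(key);
                return raw ? JSON.parse(raw) : null;
            }
        };

        // Usage, mirroring the counter idea from the sample:
        appStorage.set("myapp_time", new Date().getTime());
        var lastTime = appStorage.get("myapp_time");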
    The code to deal with the SessionStorage (and LocalStorage not shown here) in the example is isolated into a couple of wrapper methods to simplify the code: function getSessionCount() { var count = 0; if (sessionStorage) { var count = sessionStorage.getItem("ss_count"); count = !count ? 0 : count * 1; } $("#txtSession").val(count); return count; } function setSessionCount(count) { if (sessionStorage) sessionStorage.setItem("ss_count", count.toString()); } These two functions essentially load and store a session counter value. The two key methods used here are: sessionStorage.getItem(key); sessionStorage.setItem(key,stringVal); Note that the value given to setItem and returned by getItem has to be a string. If you pass another type you get an error. Don't let that limit you though - you can easily enough store JSON data in a variable, so it's quite possible to pass complex objects and store them into a single sessionStorage value: var user = { name: "Rick", id: "ricks", level: 8 }; sessionStorage.setItem("app_user", JSON.stringify(user)); To retrieve it: var user = sessionStorage.getItem("app_user"); if (user) user = JSON.parse(user); Simple! If you're using the Chrome Developer Tools (F12) you can also check out the session and local storage state on the Resource tab: You can also use this tool to refresh or remove entries from storage. What we just looked at is a purely client side implementation where a couple of counters are stored. For rich client centric AJAX applications sessionStorage and localStorage provide a very nice and simple API to store application state while the application is running. But you can also use these storage mechanisms to manage server centric HTML applications when you combine server rendering with some JavaScript to perform client side data caching. You can either store some state information and data on the client (ie. store a JSON object and carry it forth between server rendered HTML requests) or you can use it for good old HTTP based caching where some rendered HTML is saved and then restored later. Let's look at the latter with a real life example. Why do I need Client-side Page Caching for Server Rendered HTML? I don't know about you, but in a lot of my existing server driven applications I have lists that display a fair amount of data. Typically these lists contain links to then drill down into more specific data either for viewing or editing. You can then click on a link and go off to a detail page that provides more concise content. So far so good. But now you're done with the detail page and need to get back to the list, so you click on a 'bread crumbs trail' or an application level 'back to list' button and… …you end up back at the top of the list - the scroll position, the current selection, in some cases even filter conditions - all gone with the wind. You've left behind the state of the list and are starting from scratch in your browsing of the list from the top. Not cool! Sound familiar? This is a pretty common scenario with server rendered HTML content where it's so common to display lists to drill into, only to lose state in the process of returning back to the original list. Look at just about any traditional forums application, or even StackOverFlow, to see what I mean here. Scroll down a bit to look at a post or entry, drill in, then use the bread crumbs or tab to go back… In some cases returning to the top of a list is not a big deal.
    On StackOverFlow that sort of works because content is turning around so quickly you probably want to actually look at the top posts. Not always though - if you're browsing through a list of search topics you're interested in and drill in, there's no way back to that position. Essentially anytime you're actively browsing the items in the list, that's when state becomes important, and if it's not handled the user experience can be really disruptive. Content Caching If you're building client centric SPA style applications this is a fairly easy problem to solve - you tend to render the list once and then update the page content to overlay the detail content, only hiding the list temporarily until it's used again later. It's relatively easy to accomplish this simply by hiding content on the page and later making it visible again. But if you use server rendered content, hanging on to all the detail like filters, selections and scroll position is not quite as easy. Or is it??? This is where sessionStorage comes in handy. What if we just save the rendered content of a previous page, and then restore it when we return to this page based on a special flag that tells us to use the cached version? Let's see how we can do this. A Real World Use Case Recently my local ISP asked me to help out with updating an ancient classifieds application. They had a very busy, local classifieds app that was originally an ASP classic application. The old app was - wait for it: frames based - and even though I lobbied against it, the decision was made to keep the frames based layout to allow rapid browsing of the hundreds of posts that are made on a daily basis. The primary reason they wanted this was precisely for the ability to quickly browse content item by item. While I personally hate working with Frames, I have to admit that the UI actually works well with the frames layout as long as you're running on a large desktop screen. You can check out the frames based desktop site here: http://classifieds.gorge.net/ However when I rebuilt the app I also added a secondary view that doesn't use frames. The main reason for this of course was for mobile displays, which work horribly with frames. So there's a somewhat mobile friendly version of the interface, which ditches the frames and uses some responsive design tweaking for mobile capable operation: http://classifeds.gorge.net/mobile (or browse the base url with your browser width under 800px) Here's what the mobile, non-frames view looks like: As you can see this means that the list of classifieds posts is now a plain list and there's a separate page for drilling down into the item. And of course… originally we ran into that usability issue I mentioned earlier where the browse, view detail, go back to the list cycle resulted in lost list state. Originally in mobile mode you scrolled through the list, found an item to look at and drilled in to display the item detail. Then you clicked back to the list and BAM - you've lost your place. Because there are so many items added on a daily basis the full list is never fully loaded, but rather there's a "Load Additional Listings" entry at the bottom. Not only did we originally lose our place when coming back to the list, but any 'additionally loaded' items were no longer there because the list was now rendering as if it was the first page hit. The additional listings, any filters, and the selection of an item were all lost. Major Suckage!
    Using Client SessionStorage to cache Server Rendered Content To work around this problem I decided to cache the rendered page content from the list in SessionStorage. Anytime the list renders or is updated with Load Additional Listings, the page HTML is cached and stored in Session Storage. Any back links from the detail page or the login or write entry forms then point back to the list page with a back=true query string parameter. If the server side sees this parameter it doesn't render the part of the page that is cached. Instead the client side code retrieves the data from the sessionState cache and simply inserts it into the page. It sounds pretty simple, and overall the process is really easy, but there are a few gotchas that I'll discuss in a minute. But first let's look at the implementation. Let's start with the server side here because that'll give a quick idea of the doc structure. As I mentioned, the server renders data from an ASP.NET MVC view. The URL when returning to the list page from the display page (or a host of other pages) looks like this: https://classifieds.gorge.net/list?back=True The query string value is a flag that indicates whether the server should render the HTML. Here's what the top level MVC Razor view for the list page looks like: @model MessageListViewModel @{ ViewBag.Title = "Classified Listing"; bool isBack = !string.IsNullOrEmpty(Request.QueryString["back"]); } <form method="post" action="@Url.Action("list")"> <div id="SizingContainer"> @if (!isBack) { @Html.Partial("List_CommandBar_Partial", Model) <div id="PostItemContainer" class="scrollbox" xstyle="-webkit-overflow-scrolling: touch;"> @Html.Partial("List_Items_Partial", Model) @if (Model.RequireLoadEntry) { <div class="postitem loadpostitems" style="padding: 15px;"> <div id="LoadProgress" class="smallprogressright"></div> <div class="control-progress"> Load additional listings... </div> </div> } </div> } </div> </form> As you can see, the query string flag triggers a conditional block that, when the flag is set, is simply not rendered. The content inside of #SizingContainer basically holds the entire page's HTML sans the headers and scripts, but including the filter options and menu at the top. In this case this makes good sense - in other situations the fact that the menu or filter options might be dynamically updated might make you only cache the list rather than essentially the entire page. In this particular instance all of the content works and produces the proper result as both the list and any filter conditions in the form inputs are restored. Ok, let's move on to the client. On the client there are two page level functions that deal with saving and restoring state. Like the counter example I showed earlier, I like to wrap the logic to save and restore values from sessionState into separate functions because they are almost always used in several places. page.saveData = function(id) { if (!sessionStorage) return; var data = { id: id, scroll: $("#PostItemContainer").scrollTop(), html: $("#SizingContainer").html() }; sessionStorage.setItem("list_html",JSON.stringify(data)); }; page.restoreData = function() { if (!sessionStorage) return; var data = sessionStorage.getItem("list_html"); if (!data) return null; return JSON.parse(data); }; The data that is saved is an object which contains the ID of the element the user clicked on and a scroll position. These two values are used to reset the selection and scroll position when the data is used from the cache.
    Finally the html from the #SizingContainer element is stored, which makes for the bulk of the document's HTML. In this application the HTML captured could be a substantial bit of data. If you recall, I mentioned that the server side code renders a small chunk of data initially and then gets more data if the user reads through the first 50 or so items. The rest of the items retrieved can be rather sizable. Other than the JSON deserialization overhead, that's OK. Since I'm using SessionStorage the storage space has no immediate limits. Next is the core logic to handle saving and restoring the page state. At first thought this would seem pretty simple, and in some cases it might be, but as the following code demonstrates there are a few gotchas to watch out for. Here's the relevant code I use to save and restore: $( function() { … var isBack = getUrlEncodedKey("back", location.href); if (isBack) { // remove the back key from URL setUrlEncodedKey("back", "", location.href); var data = page.restoreData(); // restore from sessionState if (!data) { // no data - force redisplay of the server side default list window.location = "list"; return; } $("#SizingContainer").html(data.html); var el = $(".postitem[data-id=" + data.id + "]"); $(".postitem").removeClass("highlight"); el.addClass("highlight"); $("#PostItemContainer").scrollTop(data.scroll); setTimeout(function() { el.removeClass("highlight"); }, 2500); } else if (window.noFrames) page.saveData(null); // save when page loads $("#SizingContainer").on("click", ".postitem", function() { var id = $(this).attr("data-id"); if (!id) return true; if (window.noFrames) page.saveData(id); var contentFrame = window.parent.frames["Content"]; if (contentFrame) contentFrame.location.href = "show/" + id; else window.location.href = "show/" + id; return false; }); … The code starts out by checking for the back query string flag which triggers restoring from the client cache. If it is cached, the cached data structure is read from sessionStorage. It's important here to check if data was returned. If the user had back=true on the querystring but there is no cached data, he likely bookmarked this page or otherwise shut down the browser and came back to this URL. In that case the server didn't render any detail and we have no cached data, so all we can do is redirect to the original default list view using window.location. If we continued, the page would render no data - so make sure to always check the cache retrieval result. Always! If there is data, it's loaded and the data.html content is restored back into the document by simply injecting the HTML back into the document's #SizingContainer element: $("#SizingContainer").html(data.html); It's that simple and it's quite quick even with a fully loaded list of additional items and on a phone. The actual HTML data is stored to the cache on every page load initially and then again when the user clicks on an element to navigate to a particular listing. The former ensures that the client cache always has something in it, and the latter updates with additional information for the selected element. For the click handling I use a data-id attribute on the list item (.postitem) in the list and retrieve the id from that. That id is then used to navigate to the actual entry as well as storing that Id value in the saved cached data. The id is used to reset the selection by searching for the data-id value in the restored elements.
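    One more note of my own here (not from the article): sessionStorage.setItem can throw when the storage quota is exhausted (recall the roughly 2.5mb WebKit limit mentioned earlier), and a cached page of HTML is exactly the kind of payload that can get there. A hedged variation of the saveData idea, guarding the write, might look like this (saveDataSafe is a made-up name; the selectors and key mirror the code above):

        // Sketch only: quota-tolerant variant of the page.saveData idea shown above.
        page.saveDataSafe = function (id) {
            if (!sessionStorage) return;
            var data = {
                id: id,
                scroll: $("#PostItemContainer").scrollTop(),
                html: $("#SizingContainer").html()
            };
            try {
                sessionStorage.setItem("list_html", JSON.stringify(data));
            } catch (e) {
                // setItem throws (e.g. QuotaExceededError) when the limit is hit;
                // drop our own key so a stale copy isn't restored later.
                sessionStorage.removeItem("list_html");
            }
        };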
    The overall save/restore process is pretty straightforward and it doesn't require a bunch of code, yet it yields a huge improvement in the usability of the site on mobile devices (or for anybody who uses the non-frames view). Some things to watch out for As easy as it conceptually seems to simply store and retrieve cached content, you have to be quite aware of what type of content you are caching. The code above is all that's specific to the cache/restore cycle and it works, but it took a few tweaks to the rest of the script code and server code to make it all work. There were a few gotchas that weren't immediately obvious. Here are a few things to pay attention to: event handling logic, timing of DOM manipulation, inline script code, and bookmarking the cache URL when no cache exists. Do you have inline script code in your HTML? That script code isn't going to run if you restore from cache and simply assign the HTML, or it may not run at the time you'd expect in the normal DOM rendering cycle. JavaScript Event Hookups The biggest issue I ran into with this approach almost immediately is that originally I had various static event handlers hooked up to various UI elements that are now cached. If you have an event handler like: $("#btnSearch").click( function() {…}); that works fine when the page loads with server rendered HTML, but that code breaks when you now load the HTML from cache. Why? Because the elements you're trying to hook those events to may not actually be there - yet. Luckily there's an easy workaround for this by using deferred events. With jQuery you can use the .on() event handler instead: $("#SelectionContainer").on("click","#btnSearch", function() {…}); which monitors a parent element for the events and checks for the inner selector elements to handle events on. This effectively defers to runtime event binding, so as more items are added to the document bindings still work. For any cached content use deferred events. Timing of manipulating DOM Elements Along the same lines, make sure that your DOM manipulation code follows the code that loads the cached content into the page so that you don't manipulate DOM elements that don't exist just yet. Ideally you'll want to check for the condition to restore cached content towards the top of your script code, but that can be tricky if you have components or other logic that might not all run in a straight line. Inline Script Code Here's another small problem I ran into: I use a DateTime Picker widget I built a while back that relies on the jQuery date time picker. I also created a helper function that allows keyboard date navigation into it using JavaScript logic. Because of MVC's limited 'object model', the only way to embed widget content into the page is through inline script. This code broke when I inserted the cached HTML into the page because the script code was not available when the component actually got injected into the page. As with the last bullet - it's a matter of timing. There's no good workaround for this - in my case I pulled out the jQuery date picker and relied on native <input type="date" /> logic instead - a better choice these days anyway, especially since this view is meant primarily to serve mobile devices, which actually support date input through the browser (unlike desktop browsers, of which only WebKit seems to support it). Bookmarking Cached Urls When you cache HTML content you have to make a decision whether you cache on the client and also not render that same content on the server.
    In the Classifieds app I didn't render server side content, so if the user comes to the page with back=True and there is no cached content I have to have a Plan B. Typically this happens when somebody ends up bookmarking the back URL. The easiest and safest solution for this scenario is to ALWAYS check the cache result to make sure it exists and if not have a safe URL to go back to - in this case to the plain uncached list URL, which amounts to effectively redirecting. This seems really obvious in hindsight, but it's easy to overlook and not see a problem until much later, when it's not obvious at all why the page is not rendering anything. Don't use <body> to replace Content Since we're practically replacing all the HTML in the page it may seem tempting to simply replace the HTML content of the <body> tag. Don't. The body tag usually contains key things that should stay in the page and be there when it loads. Specifically script tags and elements and possibly other embedded content. It's best to create a top level DOM element specifically as a placeholder container for your cached content and wrap it just around the actual content you want to replace. In the app above the #SizingContainer is that container. Other Approaches The approach I've used for this application is kind of specific to the existing server rendered application we're running and so it's just one approach you can take with caching. However for server rendered content caching this is a pattern I've used in a few apps to retrofit some client caching into list displays. In this application I took the path of least resistance to the existing server rendering logic. Here are a few other ways that come to mind: Using Partial HTML Rendering via AJAX: Instead of rendering the page initially on the server, the page would load empty and the client would render the UI by retrieving the respective HTML and embedding it into the page from a Partial View. This effectively makes the initial rendering and the cached rendering logic identical and removes the need for the server to decide whether this request needs to be rendered or not (ie. not checking for a back=true switch). All the logic related to caching lives on the client in this case. Using JSON Data and Client Rendering: The hardcore client option is to do the whole UI SPA style and pull data from the server, then render it using templates or client side databinding with knockout/angular et al. As with the Partial Rendering approach the advantage is that there's no difference in the logic between pulling the data from cache or rendering from scratch other than the initial check for the cache request. Of course if the app is a full on SPA app, then caching may not even be required - the list could just stay in memory and be hidden and reactivated. I'm sure there are a number of other ways this can be handled as well, especially using AJAX. AJAX rendering might simplify the logic, but it also complicates search engine optimization since there's no content loaded initially. So there are always tradeoffs and it's important to look at all angles before deciding on any sort of caching solution in general. State of the Session SessionState and LocalStorage are easy to use in client code and can be integrated even with server centric applications to provide nice caching features of content and data.
    In this post I've shown a very specific scenario of storing HTML content for the purpose of remembering list view data and state and making the browsing experience for lists a bit more friendly, especially if there's dynamically loaded content involved. If you haven't played with sessionStorage or localStorage I encourage you to give it a try. There's a lot of cool stuff that you can do with this beyond the specific scenario I've covered here… Resources: Overview of localStorage (also applies to sessionStorage), Web Storage Compatibility, Modernizr Test Suite. © Rick Strahl, West Wind Technologies, 2005-2013. Posted in JavaScript, HTML5, ASP.NET, MVC

    Read the article

  • How will technological singularity affect programmers?

    - by Amir Rezaei
    I'm one of the believers who think that we will hit the technological singularity sooner or later. The question then is whether any profession will be unaffected by the changes that will come. In the end it will be we programmers who will implement the first self-aware AI. How will the technological singularity affect us programmers? What is your professional opinion regarding the technological singularity? EDIT: By self-aware I refer to an entity that questions and seeks answers, and is able to analyze and solve problems. Artificial neural networks are a branch of mathematics/statistics with many widely used algorithms. The algorithms are applied where recognition of data is needed. For example, hidden Markov models are used for voice recognition. Another well-known area is business intelligence and data mining. Today algorithms are self-learning. That is a bit of AI that many never think of. Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make. Link to Ref.

    Read the article

  • SQL SERVER – Get Free Books While Learning SQL Server 2012 Error Handling

    - by pinaldave
    Fans of this blog are aware that I have recently released my new books SQL Server Functions and SQL Server 2012 Queries. The books are available in the market in a limited edition, but you can get them for free on Wednesday Nov 14, 2012. Not only are they free, but you can additionally learn SQL Server 2012 Error Handling as well. My book’s co-author Rick Morelan is presenting a webinar tomorrow on SQL Server 2012 Error Handling. Here is the brief abstract of the webinar: People are often shocked when they see the demo in this talk where the first statement fails and all other statements still commit. For example, did you know that BEGIN TRAN…COMMIT TRAN is not enough to make everything work together? These mistakes can still happen to you in SQL Server 2012 if you are not aware of the options. Rick Morelan, creator of Joes2Pros, will teach you how to predict the Error Action and control it with & without structured error handling. Register for the webinar now to learn: How to predict the Error Action and control it Nuances between successful and failing SQL statements Essential SQL Server 2012 configuration options Register for the Webinar and be present during the webinar. My co-author will announce a winner (may be more than 1 winner) during the session. If you are present during the session, you are eligible to win the book. The webinar is scheduled for 2 different times to accommodate various time zones: 1) 10am ET/7am PT 2) 1pm ET/11am PT. Each webinar will have its own winner. You can increase your chances by attending both webinars. Do not miss this opportunity and register for the webinar right now. The recordings of the webinar may not be available. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • EPM 11.1.2.2 Architecture: Essbase

    - by Marc Schumacher
    Since a lot of components exist to access or administer Essbase, there are also a couple of client tools available. End users typically use the Excel Add-In or SmartView nowadays. While the Excel Add-In talks to the Essbase server directly using various ports, SmartView connects to Essbase through Provider Services using the HTTP protocol. The ability to communicate using a single port is one of the major advantages of SmartView over the Excel Add-In. If you consider using the Excel Add-In going forward, please make sure you are aware of the Statement of Direction for this component. The Administration Services Console, Integration Services Console and Essbase Studio are clients which are mainly used by Essbase administrators or application designers. While Integration Services and Essbase Studio are used to set up Essbase applications by loading metadata or simply for data loads, Administration Services is utilized for all kinds of Essbase administration. All clients use only one or two ports to talk to their server counterparts, which makes them work through firewalls easily. Although the clients for Provider Services (SmartView) and Administration Services (Administration Services Console) use only a single port to communicate with their backend services, the backend services themselves need the configured Essbase port range to talk to the Essbase server. Any communication to repository databases is done using JDBC connections. Essbase Studio and Integration Services use different technologies to talk to the Essbase server: Integration Services uses CAPI, Essbase Studio uses JAPI. However, both use the configured port range on the Essbase server to talk to Essbase. Connections to data sources are either based on ODBC (Integration Services, Essbase) or JDBC (Essbase Studio). As for all other components discussed previously, when setting up firewall rules, be aware of the fact that all services may need to talk to the external authentication sources; this is not only needed for Shared Services.

    Read the article

  • RUEI 12.1.0.3.0 dependency requirement for php-soap-5.1.6

    - by sthieme
    Dear Readers, please be aware of the new php-soap-5.1.6 dependency in RUEI 12.1.0.3. For a swift upgrade to RUEI 12.1.0.3 you should be aware of this pre-requisite, as it can be a time-eater to obtain individual rpm packages inside of a datacenter for an old OS revision once you have started the upgrade process. You may use the following procedure to retrieve the required package via http://public-yum.oracle.com: Customers will have to check the /etc/issue, /etc/issue.net (or /etc/redhat-release for RHEL based OS) for their current release in order to obtain the fitting package version. Customers of OEL can download the packages from our public-yum.oracle.com server: http://public-yum.oracle.com/repo/, e.g. http://public-yum.oracle.com/repo/OracleLinux/OL5/8/base/x86_64/php-soap-5.1.6-32.el5.x86_64.rpm Earlier releases (up to 5.5) are located under the EnterpriseLinux instead of the OracleLinux path, e.g. http://public-yum.oracle.com/repo/EnterpriseLinux/EL5/5/base/x86_64/php-soap-5.1.6-27.el5.x86_64.rpm Note: you will have to obtain the relevant RedHat rpm packages via the login protected RHN URLs. Oracle can only provide support for Oracle Enterprise Linux, and RHEL packages are not available publicly via rpm-seek.com to my knowledge. Kind regards, Stefan

    Read the article

  • What should I expect from a system engineer university career

    - by Trufa
    I'm starting tomorrow a series of interviews to decide which university I should choose to get a degree in Systems Engineering. I know this is a serious university, but I would like to get some feedback about what I should expect or "demand" from the university. My experience in the technology field is (obviously) limited and I would like to know what to look for to tell whether the university might be good or not. Especially in the following fields: Infrastructure: what are the essentials? big pluses? Theoretical vs Practical: how practical should it be? what is a "good" mix? Programming languages, frameworks, etc: Which are ideal for learning? Most in demand? Latest technologies: What should they be teaching right now to "prove" they are up to date. Qualification system: What exam methods do you think are ideal for this kind of degree, good ol' Q&A, multiple choice, projects, a fair mix? What other points do you think I should care about? What isn't important? Thanks in advance. I realize this might be a very subjective topic so I tried to make it as specific and on topic as I could, but any recommendations are of course welcome. I also understand that none of these questions will guarantee this will be a good university, but it might give me another reference as to which I should choose when the moment comes.

    Read the article

  • I want to build a Virtual Machine, are there any good references?

    - by Michael Stum
    I'm looking to build a Virtual Machine as a platform independent way to run some game code (essentially scripting). The Virtual Machines that I'm aware of in games are rather old: Infocom's Z-Machine, LucasArts' SCUMM, id Software's Quake 3. As a .net Developer, I'm familiar with the CLR and looked into the CIL Instructions to get an overview of what you actually implement on a VM Level (vs. the language level). I've also dabbled a bit in 6502 Assembler during the last year. The thing is, now that I want¹ to implement one, I need to dig a bit deeper. I know that there are stack based and register based VMs, but I don't really know which one is better at what and whether there are other or hybrid approaches. I need to deal with memory management, decide which low level types are part of the VM and need to understand why stuff like ldstr works the way it does. My only reference book (apart from the Z-Machine stuff) is the CLI Annotated Standard, but I wonder if there is a better, more general/fundamental piece of literature on VMs? Basically something like the Dragon Book, but for VMs? I'm aware of Donald Knuth's Art of Computer Programming which uses a register-based VM, but I'm not sure how applicable that series still is, especially since it's still unfinished? Clarification: The goal is to build a specialized VM. For example, Infocom's Z-Machine contains OpCodes for setting the Background Color or playing a sound. So I need to figure out how much goes into the VM as OpCodes vs. the compiler that takes a script (language TBD) and generates the bytecode from it, but for that I need to understand what I'm really doing. ¹ I know, modern technology would allow me to just interpret a high level scripting language on the fly. But where is the fun in that? :) It's also a bit hard to google because "Virtual Machine" is nowadays often associated with VMWare-type OS Virtualization...
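    To make the stack-vs-register distinction a little more concrete (this is purely a toy illustration, not taken from the Z-Machine, SCUMM, or the CLR, and the opcodes are made up), a stack-based design dispatches opcodes against an operand stack, roughly like this:

        // Toy stack-based VM loop - illustrative only, hypothetical opcodes.
        // Program: push 2, push 3, add, print  =>  prints 5
        var OP = { PUSH: 0, ADD: 1, PRINT: 2 };
        var program = [OP.PUSH, 2, OP.PUSH, 3, OP.ADD, OP.PRINT];

        function run(code) {
            var stack = [];
            var pc = 0;                        // program counter
            while (pc < code.length) {
                var op = code[pc++];
                switch (op) {
                    case OP.PUSH:              // operand follows the opcode in the bytecode
                        stack.push(code[pc++]);
                        break;
                    case OP.ADD:               // operands come implicitly from the stack
                        stack.push(stack.pop() + stack.pop());
                        break;
                    case OP.PRINT:
                        console.log(stack.pop());
                        break;
                    default:
                        throw new Error("Unknown opcode " + op);
                }
            }
        }

        run(program);

    A register-based design would instead encode source and destination registers into each instruction, trading a wider instruction format for fewer pushes and pops.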

    Read the article

  • What are the best practices for phasing out obsolete code?

    - by P.Brian.Mackey
    I have the need to phase out an obsolete method. I am aware of the [Obsolete] attribute. Does Microsoft have a recommended best practice guide for doing this? Here's my current plan: A. I do not want to create a new assembly because developers would have to add a new reference to their projects and I expect to get a lot of grief from my boss and co-workers if they must do this. We also do not maintain multiple assembly versions. We only use the latest version. Changing this practice would require changing our deployment process, which is a big issue (we'd have to teach people how to do things with TFS instead of FinalBuilder and get them to give up FinalBuilder). B. Mark the old method obsolete. C. Because the implementation is changing (not the method signature), I need to rename the method rather than create an overload. So, to make users aware of the proper method I plan to add a message to the [Obsolete] attribute. This part bothers me, because the only change I'm making is decoupling the method from the connection string. But, because I'm not adding a new assembly, I see no way around this. Result: [Obsolete("Please don't use this anymore because it does not implement IMyDbProvider. Use XXX instead.")] /// <summary> /// /// </summary> /// <param name="settingName"></param> /// <returns></returns> public static Dictionary<string, Setting> ReadSettings(string settingName) { return ReadSettings(settingName, SomeGeneralClass.ConnectionString); } public Dictionary<string, Setting> ReadSettings2(string settingName) { return ReadSettings(settingName); // IMyDbProvider.ConnectionString private member added to class. Probably have to make this an instance method. }

    Read the article

  • Hardware compatibility on H97 chipset/hardware support

    - by user3238850
    I am aware that there is documentation about compatibility, but it is way outdated. I am also aware that there is a hardware compatibility page on the Ubuntu website, but that one is focused on the whole box rather than a single piece of hardware. I have some experience with Linux OS, and some experience playing with Ubuntu Server in a virtual machine, but I have never worked on a machine that lives on the real internet. I am building a home server with an Intel H97 chipset motherboard. I have looked at several models and none of them has Linux in the supported OS category. I have the experience of installing Ubuntu Desktop 14.04 on my 4-year-old laptop, and except for some system errors on start up, there is not too much I can complain about, so I guess I should be fine. However, this time I am going to install Ubuntu Server 14.04 on a relatively new piece of hardware (I went to http://linux-drivers.org/ but found nothing really helpful). For example the ASUS motherboard has an M.2 socket and an Intel LAN I218V chip, and the Gigabyte motherboard has two LAN chips (Intel LAN WGI217V and ATHEROS AR8161-BL3A-R). So I really want to make sure everything will work. Usually I would just trust Ubuntu and buy all the hardware I need, but based on my past experience with the Ubuntu Desktop version on my laptop, I am not so convinced. There is an easily noticeable difference: when the system is idle, the fan runs much more frequently and longer under Ubuntu. This leads to my suspicion that hardware will generally have worse support for Ubuntu, which is not surprising at all but enough for me to put this post here. And as far as I know, some Intel CPU features come with software that usually will not run under Linux. Any help, ideas or thoughts would be greatly appreciated!

    Read the article

  • Mono is frequently used to say "Yes, .NET is cross-platform". How valid is that claim?

    - by Thorbjørn Ravn Andersen
    In What would you choose for your project between .NET and Java at this point in time? I say that I would consider "Will you always deploy to Windows?" the single most important decision to make up front in a new web project, and if the answer is "no", I would recommend Java instead of .NET. A very common counter-argument is that "If we ever want to run on Linux/OS X/Whatever, we'll just run Mono", which is a very compelling argument on the surface, but I don't agree, for several reasons. OpenJDK and all the vendor supplied JVM's have passed the official Sun TCK, ensuring things work correctly. I am not aware of Mono passing a Microsoft TCK. Mono trails the .NET releases. What .NET-level is currently fully supported? Do all GUI elements (WinForms?) work correctly in Mono? Businesses may not want to depend on Open Source frameworks as the official plan B. I am aware that with the new governance of Java by Oracle, the future is uncertain, but e.g. IBM provides JDK's for many platforms, including Linux. They are just not open sourced. So, under which circumstances is Mono a valid business strategy for .NET applications?

    Read the article

  • How do I choose which way to enable/disable, start/stop, or check the status of a service?

    - by Glyph
    If I want to start a system installed service, I can do: # /etc/init.d/some-svc start # initctl start some-svc # service some-svc start # start some-svc If I want to disable a service from running at boot, I can do: # rm /etc/rc2.d/S99some-svc # update-rc.d some-svc disable # mv /etc/init/some-svc.conf /etc/init/some-svc.conf.disabled Then there are similarly various things I can do to enable services for starting at boot, and so on. I'm aware of the fact that upstart is a (relatively) new thing, and I know about how SysV init used to work, and I'm vaguely aware of a bunch of D-Bus nonsense, but what I don't know is how one is actually intended to interface with this stuff. For example, I don't know how to easily determine whether a service is an Upstart job or a legacy SysV thing, without actually reading through the source of its shell scripts extensively. So: if I want to start or stop a service, either at the moment or persistently, which of these tools should I use, and why? If the answer depends on some attribute (like "this service supports upstart") then how do I quickly and easily learn about that attribute of an installed package? Relatedly, are there any user interface tools which can safely and correctly interact with the modern service infrastructure (upstart, and/or whatever its sysv compatibility is)? For example, could I reliably use sysv-rc-conf to determine which services should start?

    Read the article

  • Mono is frequently used to say "Yes, .NET is cross-platform". How valid is that claim?

    - by Thorbjørn Ravn Andersen
    In What would you choose for your project between .NET and Java at this point in time? I say that I would consider "Will you always deploy to Windows?" the single most important (EDIT: technical) decision to make up front in a new web project, and if the answer is "no", I would recommend Java instead of .NET. A very common counter-argument is that "If we ever want to run on Linux/OS X/Whatever, we'll just run Mono", which is a very compelling argument on the surface, but I don't agree, for several reasons. OpenJDK and all the vendor supplied JVM's have passed the official Sun TCK, ensuring things work correctly. I am not aware of Mono passing a Microsoft TCK. Mono trails the .NET releases. What .NET-level is currently fully supported? Do all GUI elements (WinForms?) work correctly in Mono? Businesses may not want to depend on Open Source frameworks as the official plan B. I am aware that with the new governance of Java by Oracle, the future is uncertain, but e.g. IBM provides JDK's for many platforms, including Linux. They are just not open sourced. So, under which circumstances is Mono a valid business strategy for .NET applications? Edit: Mark H summarized it as: "If the claim is that "I have a windows application written in .NET, it should run on mono", then not, it's not a valid claim - but Mono has made efforts to make porting such applications simpler.".

    Read the article

  • Adventures in Lab Management Configuration: Part 3 of 3

    - by Enrique Lima
    This is long overdue. But here it is. In the previous two sections I have discussed how I got a CMMI v4.2 to take on the same fields as v5 and therefore be able to communicate with MTM and Lab Manager. And that was quite a success. Yet when I opened up Lab Management, while it was fully aware of the VMs being there, it refused to let me enroll them into an environment. It kept stating there was no suitable host to deploy the VM to, error TF259115. This was an indication that something was not matching the expected network configuration between TFS and Hyper-V/SCVMM. So, here are a couple of things that took place: I verified that the network segment specified for network isolation matched what was configured physically for either DHCP or manually assigned IP addressing for the guest VMs, and I made sure TFS was fully aware of the configuration settings for the network location name. For that I issued: tfsconfig lab /settings /networklocation:"<name of the network location configured in SCVMM>" That last item was the key to making sure Lab Management communicated with the VMs and allowed enrollment into the new Virtual Environment.

    Read the article

  • Messages do not always appear in [catalog].[event_messages] in the order that they occur [SSIS]

    - by jamiet
    This is a simple heads up for anyone doing SQL Server Integration Services (SSIS) development using SSIS 2012. Be aware that messages do not always appear in [catalog].[event_messages] in the order that they occur, observe… In the following query I am looking at a subset of messages in [catalog].[event_messages] and ordering them by [event_message_id]:
    SELECT [event_message_id], [event_name], [message_time], [message_source_name]
    FROM   [catalog].[event_messages] em
    WHERE  [event_message_id] BETWEEN 290972 AND 290982
    ORDER  BY [event_message_id] ASC
    --ORDER BY [message_time] ASC
    Take a look at the two rows that I have highlighted, note how the OnPostExecute event for “Utility GetTargetLoadDatesPerETLIfcName” appears after the OnPreExecute event for “FELC Loop over TargetLoadDates”, I happen to know that this is incorrect because “Utility GetTargetLoadDatesPerETLIfcName” is a package that gets executed by an Execute Package Task prior to the For Each Loop “FELC Loop over TargetLoadDates”: If we order instead by [message_time] then we see something that makes more sense:
    SELECT [event_message_id], [event_name], [message_time], [message_source_name]
    FROM   [catalog].[event_messages] em
    WHERE  [event_message_id] BETWEEN 290972 AND 290982
    --ORDER BY [event_message_id] ASC
    ORDER  BY [message_time] ASC
    We can see that the OnPostExecute for “Utility GetTargetLoadDatesPerETLIfcName” did indeed occur before the OnPreExecute event for “FELC Loop over TargetLoadDates”, they just did not get assigned an [event_message_id] in chronological order. We can speculate as to why that might be (I suspect the explanation is something to do with the two executables appearing in different packages) but the reason is not the important thing here, just be aware that you should be ordering by [message_time] rather than [event_message_id] if you want to get 100% accurate insights into your executions. @Jamiet

    Read the article

  • Conflict resolution for two-way sync

    - by K.Steff
    How do you manage two-way synchronization between a 'main' database server and many 'secondary' servers, in particular conflict resolution, assuming a connection is not always available? For example, I have a mobile app that uses CoreData as the 'database' on iOS and I'd like to allow users to edit the contents without an Internet connection. At the same time, this information is available on a website the devices will connect to. What do I do if/when the data on the two DB servers is in conflict? (I refer to CoreData as a DB server, though I am aware it is something slightly different.) Are there any general strategies for dealing with this sort of issue? These are the options I can think of: 1. Always use the client-side data as higher-priority 2. Same for server-side 3. Try to resolve conflicts by marking each field's edit timestamp and taking the latest edit Though I'm certain the 3rd option will leave room for some devastating data corruption. I'm aware that the CAP theorem concerns this, but I only want eventual consistency, so it doesn't rule it out completely, right? Related question: Best practice patterns for two-way data synchronization. The second answer to this question says it probably can't be done.
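    To make option 3 concrete (a sketch only; the record shape and field names below are hypothetical, not tied to CoreData or any particular sync framework), a field-level "latest edit wins" merge might look roughly like this:

        // Hypothetical field-level last-write-wins merge (option 3 above).
        // Each field stores its value plus the timestamp of its last edit, e.g.
        // { title: { value: "Milk", editedAt: 1700000000 }, qty: { value: 2, editedAt: 1700000500 } }
        function mergeRecords(clientRecord, serverRecord) {
            var merged = {};
            var fields = Object.keys(clientRecord).concat(Object.keys(serverRecord));
            fields.forEach(function (field) {
                var c = clientRecord[field];
                var s = serverRecord[field];
                if (c && s) {
                    // Conflict: keep whichever side was edited most recently.
                    merged[field] = (c.editedAt >= s.editedAt) ? c : s;
                } else {
                    merged[field] = c || s; // only one side knows about this field
                }
            });
            return merged;
        }

    Clock skew between devices and deleted records are exactly where a scheme like this gets dangerous, which matches the data-corruption worry above.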

    Read the article

  • Oracle Endeca "Getting Started" Partner Guide

    - by Grant Schofield
    For partners looking for a concise step by step guide to getting started with Oracle Endeca Information Discovery, here it is to help you get started as quickly as possible. Step 1: Join the Knowledge Zone as a company and an individual - this will give you a) the right to resell Oracle Endeca ID, and b) notice of any free / subsidised training events in your region Step 2: For a quick general overview & positioning see the following article, in particular the Agile BI Video series which are useful in sharing with prospective clients. Also find a link to the official OEID Data Sheet. Step 3: For a more detailed overview there is a live recorded OEID partner webcast with downloadable slides. In conjunction with this, your sales / presales team have free access to the official OEID Partner Playbook as well as the full Oracle price book. Step 4: Download the OEID software and install. Please be aware you will need a 64-bit machine & a 64-bit Operating System. A useful solution for partners that have a 32-bit Operating System is to use Oracle's free VirtualBox software to quickly and easily create a Linux image and install on that. Step 5: Attend a free / subsidised training event in your region. Please join the Knowledge Zone as an Individual (opt in) to be informed of these. We will also publish these via the blog Things are moving fast, so please be aware that the team are working hard to produce more and more material such as downloadable data sets (structured / unstructured), a downloadable image, access to demos, and over the next few weeks we will update this article as soon as new material becomes available!

    Read the article

  • Add copyright notice to a website

    - by PeeHaa
    Not really a programming question, but I find it related. If not (or you find this question too subjective), please tell me, yell at me, swear at me, kick me in the nuts, or just vote to close :) I've read some questions and answers here on SO about adding copyright notices, but not the specific ones I am looking for. I want to add a copyright notice to a website I created. Something like (c) Me 2010. All rights reserved. I am aware that everything written by someone is automatically copyrighted (if I'm not mistaken, and perhaps depending on country laws). I see some sites use the following format for this: (c) Me 2009-2010. However, to me it makes no sense to add an 'end-date' to the notice. I am aware I can write code to update the notice every year, but I just find it strange. Or is it just me? Another question is: I also use copyrighted code from others (they are all mentioned in the credits incl. links to their licenses ofc) on my site. Would it still be OK to add the copyright notice to the site with only Me in it? So to sum it up I have 2 questions: What is THE RIGHT WAY™ of adding a copyright notice on a website (or code or whatever)? If there is one. Is it allowed to copyright code with other copyrighted code within it?

    Read the article

  • Should I use OpenGL or DX11 for my game?

    - by Sundareswaran Senthilvel
    I'm planning to write a game from scratch (a BIG Game, for commercial purposes). I'm aware that certain compute libraries are available in the market, like OpenCL, the AMD APP SDK, as well as C++ AMP and DirectCompute - both from MS (NOT interested in CUDA). I'm planning to write the game from scratch, which includes the following engines... Physics Engine, AI Engine, Main Game Engine (... and if anything is missed). I'm aware that there are some free physics engine libraries in the market. Not sure about free AI engine libraries. I'm a bit confused in choosing between the OpenCL, AMD APP SDK, and C++ AMP libraries (as already mentioned, I'm NOT interested in CUDA). I want my game to be published on Windows/Android/Mac OSX. It means it should be a cross-platform game. I will be having "one source code" that I'll compile for various platforms like Windows/Android/Mac OSX, and any others if I missed them. Note: Since I'm NOT a Java guy, kindly do NOT suggest the Java language. For the graphics API, should I use OpenGL or DirectX 11? I have heard that OpenGL runs on a single core, and I'm not sure about DirectX 11. Between OpenGL and DirectX, which one should I follow? Or are there any other graphics APIs I need to start with? I want to make use of the parallelism in the GPU as well as the CPU.

    Read the article

  • Password not working for sudo ("Authentication failure")

    - by Souta
    Before I mention anything further, DO NOT give me a response saying that terminal won't show password input. I'm AWARE of that. I'm typing my user password in (not a capslock issue), and for some reason it still says 'Authentication Failure'. Is there some other password (one I'm not aware of) I'm supposed to be using other than my user password? I've had this ubuntu before, on another hard drive and I didn't have this problem. (And it was the same ubuntu, ubuntu 12.04 LTS) ai@AiNekoYokai:~$ groups ai adm cdrom sudo dip plugdev lpadmin sambashare ai@AiNekoYokai:~$ lsb_release -rd Description: Ubuntu 12.04 LTS Release: 12.04 ai@AiNekoYokai:~$ pkexec cat /etc/sudoers # # This file MUST be edited with the 'visudo' command as root. # # Please consider adding local content in /etc/sudoers.d/ instead of # directly modifying this file. # # See the man page for details on how to write a sudoers file. # Defaults env_reset Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" # Host alias specification # User alias specification # Cmnd alias specification # User privilege specification root ALL=(ALL:ALL) ALL # Members of the admin group may gain root privileges %admin ALL=(ALL) ALL # Allow members of group sudo to execute any command %sudo ALL=(ALL:ALL) ALL # See sudoers(5) for more information on "#include" directives: #includedir /etc/sudoers.d I can log in with my password, but it's not accepted as valid for authentication <-- That is pretty much my issue. (Although, I haven't gone into recovery mode.) I've ran: ai@AiNekoYokai:~$ ls /etc/sudoers.d README And also reinstalled sudo with: pkexec apt-get update pkexec apt-get --purge --reinstall install sudo pkexec usermod -a -G admin $USER <- Says admin does not exist su $USER <- worked for me, however, my password still does not do much (in sense of not working for other things) I changed my password with pkexec passwd $USER. I was able to change it no problem. gksudo xclock was something I was able to get into, no problem. (Clock showed) ai@AiNekoYokai:~$ gksudo xclock

    Read the article

  • How to learn to deliver quality software designs when working on a tight deadline?

    - by chester89
    I have read many books about how to design great software, but I struggle to come up with good design decisions when it comes to business apps, especially when the timeframe is tough. In the company I currently work for, the following situation happens all the time: my team lead tells me that there's a task to do, I call some guy or girl from the business side who tells me exactly what it is they want, and then I start coding. The task always fits into some existing application (we do only web apps or web services); usually its purpose is to pull data from one data source and put it into another, with some business logic attached in the process. I start coding and then, after spending some time on the problem, my code doesn't work as expected - either because of a technical mistake or my lack of knowledge of the domain. The business is ringing me 2-3 times a day to hurry me up. I ask my team lead for help; he comes over, sees my code, and goes, 'What's this?'. Then he throws away about half of my code, including all the design decisions I made, writes 2-3 methods that do the job (each of them usually 200-300 lines long or more, by the way), and the task is complete; the code works as it should have. The guy is smarter than me, obviously, and I'm aware of that. My goal is to be a better software developer, which means writing better code, not finishing the job quicker with some crappy code. And the thing is, when I have enough time to tackle a problem, I can come up with a design that is good (in my opinion, of course), but I fall short when I'm on a tight deadline. What should I do? I am fully aware that this is a rather vague explanation, but please bear with me.

    Read the article

  • Game Development

    - by Sundareswaran Senthilvel
    I'm planning to write a game from scratch (a BIG game, for commercial purposes). I'm aware that certain compute libraries are available on the market, such as OpenCL, AMD APP SDK, C++ AMP, and DirectCompute (the latter two from MS); I'm NOT interested in CUDA. I'm planning to write the game from scratch, which includes the following engines: 1. Physics Engine, 2. AI Engine, 3. Main Game Engine (and anything else I may have missed). I'm aware that there are some free physics engine libraries on the market; I'm not sure about free AI engine libraries. I'm a bit confused about choosing between the OpenCL, AMD APP SDK, and C++ AMP libraries (as already mentioned, I'm NOT interested in CUDA). I want my game to be published on Windows/Android/Mac OSX, so it should be a cross-platform game. I will have "one source code" that I'll compile for the various platforms like Windows/Android/Mac OSX, and any others I missed. Note: since I'm NOT a Java guy, kindly do NOT suggest the Java language. For graphics, should I use OpenGL or DirectX 11? I have heard that OpenGL runs on a single core, and I'm not sure about DirectX 11. Between OpenGL and DirectX, which one should I follow? Or are there any other graphics APIs I should start with? I want to make use of the parallelism in the GPU as well as the CPU.

    Read the article

  • GeoIP and Nginx

    - by JavierMartinez
    I have nginx with geoip, but it is not working correctly. The issue is this: nginx is getting geodata from $_SERVER['REMOTE_ADDR'] instead of $_SERVER['HTTP_X_HAPROXY_IP'], which has the real client IP. So the reported geodata belongs to my server IP instead of the client IP. Does anybody know where the error could be and how to fix it? Nginx version and compiled modules: nginx -V nginx version: nginx/1.2.3 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --with-pcre-jit --with-debug --with-file-aio --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_realip_module --with-http_secure_link_module --with-http_stub_status_module --with-http_ssl_module --with-http_sub_module --with-http_xslt_module --with-ipv6 --with-sha1=/usr/include/openssl --with-md5=/usr/include/openssl --with-mail --with-mail_ssl_module --add-module=/usr/src/nginx/source/nginx-1.2.3/debian/modules/nginx-auth-pam --add-module=/usr/src/nginx/source/nginx-1.2.3/debian/modules/nginx-echo --add-module=/usr/src/nginx/source/nginx-1.2.3/debian/modules/nginx-upstream-fair --add-module=/usr/src/nginx/source/nginx-1.2.3/debian/modules/nginx-dav-ext-module --add-module=/usr/src/nginx/source/nginx-1.2.3/debian/modules/nginx-syslog --add-module=/usr/src/nginx/source/nginx-1.2.3/debian/modules/nginx-cache-purge nginx site conf (frontend machine) server { root /var/www/storage; server_name ~^.*(\.)?mydomain.com$; if ($host ~ ^(.*)\.mydomain\.com$) { set $new_host $1.mydomain.com; } if ($host !~ ^(.*)\.mydomain\.com$) { set $new_host www.mydomain.com; } add_header Staging true; real_ip_header X-HAProxy-IP; set_real_ip_from 10.5.0.10/32; location /files { expires 30d; if ($uri !~ ^/files/([a-fA-F0-9]+)_(220|45)\.jpg$) { return 403; } rewrite ^/files/([a-fA-F0-9][a-fA-F0-9])([a-fA-F0-9][a-fA-F0-9])([a-fA-F0-9][a-fA-F0-9])([a-fA-F0-9][a-fA-F0-9])([a-fA-F0-9]+)_(220|45)\.jpg$ /files/$1/$2/$3/$4/$1$2$3$4$5_$6.jpg break; try_files $uri @to_backend; } location /assets { if ($uri ~ ^/assets/r([a-zA-Z0-9]+[^/])(/(css|js|fonts)/.*)) { rewrite ^/assets/r([a-zA-Z0-9]+[^/])/(css|js|fonts)/(.*)$ /assets/$2/$3 break; } try_files $uri @to_backend; } location / { proxy_set_header Host $new_host; proxy_set_header X-HAProxy-IP $remote_addr; proxy_pass http://10.5.0.10:8080; } location @to_backend { proxy_set_header Host $new_host; proxy_pass http://10.5.0.10:8080; } } nginx.conf (backend machine) http{ ... ## # GeoIP Config ## geoip_country /etc/nginx/geoip/GeoIP.dat; # the country IP database geoip_city /etc/nginx/geoip/GeoLiteCity.dat; # the city IP database ...
} fastcgi_params (backend machine) ### SET GEOIP Variables ### fastcgi_param GEOIP_COUNTRY_CODE $geoip_country_code; fastcgi_param GEOIP_COUNTRY_CODE3 $geoip_country_code3; fastcgi_param GEOIP_COUNTRY_NAME $geoip_country_name; fastcgi_param GEOIP_CITY_COUNTRY_CODE $geoip_city_country_code; fastcgi_param GEOIP_CITY_COUNTRY_CODE3 $geoip_city_country_code3; fastcgi_param GEOIP_CITY_COUNTRY_NAME $geoip_city_country_name; fastcgi_param GEOIP_REGION $geoip_region; fastcgi_param GEOIP_CITY $geoip_city; fastcgi_param GEOIP_POSTAL_CODE $geoip_postal_code; fastcgi_param GEOIP_CITY_CONTINENT_CODE $geoip_city_continent_code; fastcgi_param GEOIP_LATITUDE $geoip_latitude; fastcgi_param GEOIP_LONGITUDE $geoip_longitude; haproxy.conf (frontend machine) defaults log global option forwardfor option httpclose mode http retries 3 option redispatch maxconn 4096 contimeout 100000 clitimeout 100000 srvtimeout 100000 listen cluster_webs *:8080 mode http option tcpka option httpchk option httpclose option forwardfor balance roundrobin server backend-stage 10.5.0.11:80 weight 1 $_SERVER dump: http://paste.laravel.com/7dy Where 10.5.0.10 is frontend private ip and 10.5.0.11 backend private ip
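
    A minimal sketch of one commonly suggested direction, assuming the backend nginx was also built with --with-http_realip_module and that the frontend at 10.5.0.10 is the only proxy whose header should be trusted (both are assumptions, not confirmed above): the realip module can rewrite $remote_addr from X-HAProxy-IP before the geoip lookups run, so the backend resolves the real client IP instead of the frontend's.

      # Sketch (untested): backend nginx.conf, inside the http block
      http {
          # Trust the header only when the request comes from the frontend proxy
          set_real_ip_from 10.5.0.10;
          real_ip_header   X-HAProxy-IP;

          # The geoip lookups below now see the rewritten $remote_addr
          geoip_country /etc/nginx/geoip/GeoIP.dat;
          geoip_city    /etc/nginx/geoip/GeoLiteCity.dat;
      }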

    Read the article

  • Consistent PHP _SERVER variables between Apache and nginx?

    - by Alix Axel
    I'm not sure if this should be asked here or on ServerFault, but here it goes... I am trying to get started on nginx with PHP-FPM, but I noticed that the server block setup I currently have (gathered from several guides, including the nginx Pitfalls wiki page) produces $_SERVER variables that are different from what I'm used to seeing in Apache setups. After spending the last evening trying to "fix" this, I decided to install Apache on my local computer and gather the variables that I'm interested in under different conditions so that I could try and mimic them on nginx. The Apache setup I have on my computer has only one mod_rewrite rule: RewriteEngine On RewriteCond %{SCRIPT_FILENAME} !-f RewriteCond %{SCRIPT_FILENAME} !-d RewriteRule ^(.*)$ /index.php/$1 [L] And these are the values I get for different request URIs (left is Apache, right is nginx): localhost/ - http://www.mergely.com/GnzBHRV1/ localhost/foo/bar/baz/?foo=bar - http://www.mergely.com/VwsT8oTf/ localhost/index.php/foo/bar/baz/?foo=bar - http://www.mergely.com/VGEFehfT/ What configuration directives would allow me to get similar values on requests handled by nginx? My current configuration in nginx is: server { listen 80; listen 443 ssl; server_name default; ssl_certificate /etc/nginx/certificates/dummy.crt; ssl_certificate_key /etc/nginx/certificates/dummy.key; root /var/www/default/html; index index.php index.html; autoindex on; location / { try_files $uri $uri/ /index.php; } location ~ /(?:favicon[.]ico|robots[.]txt)$ { log_not_found off; } location ~* [.]php { #try_files $uri =404; include fastcgi_params; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; fastcgi_split_path_info ^(.+[.]php)(/.+)$; } location ~* [.]ht { deny all; } } And my fastcgi_params file looks like this: fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_param HTTPS $https; I know that the try_files $uri =404; directive is commented out and that this is a security vulnerability, but if I uncomment it, the third request (localhost/index.php/foo/bar/baz/?foo=bar) will return a 404. It's also worth noting that my PHP cgi.fix_pathinfo is On (contrary to what some of the guides recommend); if I try to set it to Off, I'm presented with an "Access denied." message on every PHP request. I'm running PHP 5.4.8 and nginx/1.1.19. I don't know what else to try... Help?
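
    A minimal sketch of the PHP location that is often recommended for Apache-like PATH_INFO handling with cgi.fix_pathinfo set to 0 (untested here; the socket path is taken from the question, the rest is an assumption, and PATH_INFO/SCRIPT_FILENAME would need to be removed from fastcgi_params to avoid sending them twice):

      # Sketch: capture PATH_INFO before try_files clears it, then pass
      # SCRIPT_FILENAME and PATH_INFO explicitly so cgi.fix_pathinfo can stay Off
      location ~ [.]php(/|$) {
          fastcgi_split_path_info ^(.+?[.]php)(/.*)$;
          set $path_info $fastcgi_path_info;   # saved, because try_files resets it
          try_files $fastcgi_script_name =404;

          include fastcgi_params;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          fastcgi_param PATH_INFO       $path_info;
          fastcgi_pass unix:/var/run/php5-fpm.sock;
          fastcgi_index index.php;
      }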

    Read the article

  • gallery2 and nginx with rewrite return "file not found" for file names with a space (or a + sign in the URL)

    - by Vangel
    I have set up nginx with gallery2 on an internal server. Everything works fine under apache2, which I checked first (it used to be on apache2). The problem is: gallery2 seems to generate URLs with a + sign in them for file names/images that had spaces in them, so a file like "may report.jpg" becomes "may+report.jpg". The URL rewrite works, but gallery2 throws an error for file not found. This does not happen under apache2. Here is my nginx rewrite rule: location / { index main.php index.html; default_type text/html; # If the file exists as a static file serve it # directly without running all # the other rewrite tests on it if (-f $request_filename) { break; } } location /v/ { # if ($request_uri !~ /main.php) # { rewrite ^/v/(.*)$ /main.php?g2_view=core.ShowItem&g2_path=$1 last; # } } location /d/ { if ($request_uri !~ /main.php) { rewrite ^/d/([0-9]+)-([0-9]+)/(.*)$ /main.php?g2_view=core.DownloadItem&g2_itemId=$1&g2_serialNumber=$2&g2_fileName=$3 last; } } location ~ \.php$ { fastcgi_pass 127.0.0.1:8889; fastcgi_index main.php; fastcgi_intercept_errors on; # to support 404s for PHP files not found fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; fastcgi_param SERVER_NAME $host; fastcgi_read_timeout 300; } The site on its own works fine; only the images with spaces in the file name do not display in album view, and clicking the image for full-page view throws this error: Error (ERROR_MISSING_OBJECT) : Parent 103759 path report+april+456.flv in modules/core/classes/helpers/GalleryFileSystemEntityHelper_simple.class at line 98 (GalleryCoreApi::error) in modules/core/classes/GalleryCoreApi.class at line 1853 (GalleryFileSystemEntityHelper_simple::fetchChildIdByPathComponent) in modules/core/classes/helpers/GalleryFileSystemEntityHelper_simple.class at line 53 (GalleryCoreApi::fetchChildIdByPathComponent) in modules/core/classes/GalleryCoreApi.class at line 1804 (GalleryFileSystemEntityHelper_simple::fetchItemIdByPath) in modules/rewrite/classes/RewriteSimpleHelper.class at line 45 (GalleryCoreApi::fetchItemIdByPath) in ??? at line 0 (RewriteSimpleHelper::loadItemIdFromPath) in modules/rewrite/classes/RewriteUrlGenerator.class at line 103 in modules/rewrite/classes/parsers/modrewrite/ModRewriteUrlGenerator.class at line 37 (RewriteUrlGenerator::_onLoad) in init.inc at line 147 (ModRewriteUrlGenerator::initNavigation) in main.php at line 180 in main.php at line 94 in main.php at line 83 System Information Gallery version 2.2.4 PHP version 5.3.6 fpm-fcgi Webserver nginx/0.8.55 Database mysqli 5.0.95 Toolkits ImageMagick, Thumbnail, Gd Operating system Linux CentOS-55-64-minimal 2.6.18-274.18.1.el5 #1 SMP Thu Feb 9 12:45:44 EST 2012 x86_64 Browser Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.52 Safari/536.5 In the report above there is usable system information, if that helps. I know this nginx is old, but it comes as the default in the CentOS repo, and I am not sure if upgrading will fix the problem or break something else. It seems gallery2 must map the + to a space internally, but why it's not doing so with nginx I can't tell. EDIT: I just verified that if I change the '+' sign to %20 then gallery2 works, but gallery2 is generating the URL with +. I found a (maybe) related problem here for IIS7 and Gallery2: http://forums.asp.net/t/1431951.aspx EDIT2: Accessing the URL without the rewrite and with the + sign works. Must be something to do with the rewrite.
Here is the relevant apache2 rule that might be of help RewriteCond %{THE_REQUEST} /d/([0-9]+)-([0-9]+)/([^/?]+)(\?.|\ .) RewriteCond %{REQUEST_URI} !/main\.php$ RewriteRule . /main.php?g2_view=core.DownloadItem&g2_itemId=%1&g2_serialNumber=%2&g2_fileName=%3 [QSA,L] RewriteCond %{THE_REQUEST} /v/([^?]+)(\?.|\ .) RewriteCond %{REQUEST_URI} !/main\.php$ RewriteRule . /main.php?g2_path=%1 [QSA,L]
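
    One workaround that is sometimes suggested for this class of problem (an untested sketch, and it is only an assumption that it applies to this setup) is to rewrite the literal '+' back to an encoded space before the gallery rewrites run, so PHP ends up decoding the file name the same way it does under Apache; the /d/ location would presumably need the same leading rule.

      # Untested sketch: turn '+' into '%20' in the path before rewriting to main.php.
      # With 'last', nginx restarts location matching, so a name with several '+'
      # signs is handled over repeated internal passes (nginx caps these at 10).
      location /v/ {
          rewrite ^(.*)\+(.*)$ $1%20$2 last;
          rewrite ^/v/(.*)$ /main.php?g2_view=core.ShowItem&g2_path=$1 last;
      }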

    Read the article
