Search Results

Search found 55168 results on 2207 pages for 'http header'.

  • Why is PHP discriminating between .php and .abc extensions for caching?

    - by Sam
    There seems to be a difference in how the PHP engine handles identical files that differ only in their file extension. Problem: "An If-Modified-Since conditional request returned the full content unchanged." I also measured that the .php extension loads much faster than its identical twin with the .xxx extension, even though the file contents are the same and only the extension differs. "HTTP allows clients to make conditional requests to see if a copy that they hold is still valid. Since this response has a Last-Modified header, clients should be able to use an If-Modified-Since request header for validation. RED has done this and found that the resource sends a full response even though it hadn't changed, indicating that it doesn't support Last-Modified validation." (Screenshots: the homepage ending in .php, and the exact same file ending in .ast.)

    Given: a home.php file is copied as home.xxx and that extension is added to .htaccess so it is recognised as a PHP file. The .php file follows php.ini, where freshness is set to 3 hrs; the non-.php files follow .htaccess, where freshness is set to 2 hrs, according to:

        AddType application/x-httpd-php .php .ast .abc .xxx .etc
        <IfModule mod_headers.c>
            ExpiresActive On
            ExpiresDefault M2419200
            Header unset ETag
            FileETag None
            Header unset Pragma
            Header set Cache-Control "max-age=2419200"
            ##### DYNAMIC PAGES
            <FilesMatch "\.(ast|php|abc|xxx)$">
                ExpiresDefault M7200
                Header set Cache-Control "public, max-age=7200"
            </FilesMatch>
        </IfModule>

    So far so good and everything loads, except that the non-PHP file doesn't cache properly - or rather, it caches but doesn't validate. See the images enclosed. Only the non-.php extension causes the error and loads slower: page.php loads quickly because all its elements come from cache, while page.abc has the full response returned even though it ought to be revalidated from cache, so the entire page is slower. Bottom line: what should be changed to stop the If-Modified-Since conditional request from returning the full content unchanged?
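    One way to take Last-Modified validation out of Apache's hands entirely, since both extensions are parsed as PHP anyway, is to answer the conditional request from the script itself. A minimal sketch (not from the original question; the max-age value and the use of the script's own mtime are assumptions to adapt):

        <?php
        // Sketch: reply 304 Not Modified to a matching conditional request.
        $file = __FILE__;                       // assumption: freshness driven by this script's mtime
        $lastModified = filemtime($file);
        $etag = '"' . md5($lastModified . $file) . '"';

        header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $lastModified) . ' GMT');
        header('ETag: ' . $etag);
        header('Cache-Control: public, max-age=7200');

        $ifModifiedSince = isset($_SERVER['HTTP_IF_MODIFIED_SINCE'])
            ? strtotime($_SERVER['HTTP_IF_MODIFIED_SINCE']) : false;
        $ifNoneMatch = isset($_SERVER['HTTP_IF_NONE_MATCH'])
            ? trim($_SERVER['HTTP_IF_NONE_MATCH']) : false;

        if (($ifModifiedSince !== false && $ifModifiedSince >= $lastModified)
            || ($ifNoneMatch !== false && $ifNoneMatch === $etag)) {
            header($_SERVER['SERVER_PROTOCOL'] . ' 304 Not Modified');
            exit;
        }

        // ...otherwise emit the full page as usual.

    With this in place, a request carrying a matching If-Modified-Since or If-None-Match header gets an empty 304 instead of the full body, regardless of the file extension.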

    Read the article

  • apache proxy module gives 403 forbidden error

    - by naiquevin
    I am trying to use Apache's proxy module for working with XMPP on Ubuntu desktop. For this I did the following things:

    1) Enabled mod_proxy by creating symlinks to proxy.conf, proxy.load and proxy_http.load from /etc/apache2/mods-available/ in the mods-enabled directory.

    2) Added the following lines to the vhost:

        <Proxy http://mydomain.com/httpbind>
            Order allow,deny
            Allow from all
        </Proxy>
        ProxyPass /httpbind http://mydomain.com:7070/http-bind/
        ProxyPassReverse /httpbind http://mydomain.com:7070/http-bind/

    I am new to using the proxy module, but what I make of the above lines is that requests to http://mydomain.com/httpbind will be forwarded to http://mydomain.com:7070/http-bind/. Kindly correct me if that is wrong.

    3) Added the rule "Allow from .mydomain.com" in /mods-available/proxy.conf.

    Now I try to access http://mydomain.com/httpbind and it shows a 403 Forbidden error. What am I missing here? Please help. Thanks.

    Edit: The problem got solved when I changed the following code in mods-available/proxy.conf

        <Proxy *>
            AddDefaultCharset off
            Order deny,allow
            Deny from all
            Allow from mydomain.com
        </Proxy>

    to

        <Proxy *>
            AddDefaultCharset off
            Order deny,allow
            #Deny from all
            Allow from all
        </Proxy>

    I didn't get what was wrong with the initial code, though.

    Read the article

  • How to get the value of h:inputText when bound in JSF

    - by Trần Minh Phương
    How can I get the values of the two h:inputText fields from this?

        <h:dataTable cellspacing="0" value="#{managerManagedBean.lstMatch}" var="m"
                     binding="#{managerManagedBean.datatableMatch}">
            <!-- cellspacing='0' is important, must stay -->
            <h:column>
                <f:facet name="header">Team One</f:facet>
                <h:outputText value="#{m.teamOneName}"></h:outputText>
            </h:column>
            <h:column>
                <f:facet name="header">Match Score</f:facet>
                <h:inputText value="#{m.teamOneResult}" style="width: 20px; text-align: center" binding="#{input}"></h:inputText>
                -
                <h:inputText value="#{m.teamTwoResult}" style="width: 20px; text-align: center"></h:inputText>
            </h:column>
            <h:column>
                <f:facet name="header">Half Time</f:facet>
                <h:outputText value="#{m.haveHalfTime}"></h:outputText>
            </h:column>
            <h:column>
                <f:facet name="header">Team Two</f:facet>
                <h:outputText value="#{m.teamTwoName}"></h:outputText>
            </h:column>
            <h:column>
                <f:facet name="header">Match Date</f:facet>
                <h:outputText value="#{m.matchDate}"></h:outputText>
            </h:column>
            <h:column>
                <f:facet name="header">Control</f:facet>
                <h:commandButton action="#{managerManagedBean.update(m, input.value)}" value="Update Match">
                </h:commandButton>
            </h:column>
        </h:dataTable>

    Read the article

  • C++ Namespaces & templates question

    - by Kotti
    Hi! I have some functions that can be grouped together but don't belong to any object / entity and therefore can't be treated as methods. So, basically in this situation I would create a new namespace, put the declarations in a header file and the implementation in a cpp file. Also (if needed) I would create an anonymous namespace in that cpp file and put there all additional functions that don't have to be exposed / included in my namespace's interface. See the code below (probably not the best example and could be done better with another program architecture, but I just can't think of a better sample...)

    Sample code (header):

        namespace algorithm {
            void HandleCollision(Object* object1, Object* object2);
        }

    Sample code (cpp):

        #include "header"

        // Anonymous namespace that wraps
        // routines that are used inside 'algorithm' methods
        // but don't have to be exposed
        namespace {
            void RefractObject(Object* object1) {
                // Do something with that object
                // (...)
            }
        }

        namespace algorithm {
            void HandleCollision(Object* object1, Object* object2) {
                if (...)
                    RefractObject(object1);
            }
        }

    So far so good. I guess this is a good way to manage my code, but I don't know what I should do if I have some template-based functions and want to do basically the same. If I'm using templates, I have to put all my code in the header file. Ok, but how should I conceal some implementation details then? For example, I want to hide the RefractObject function from my interface, but I can't simply remove its declaration (just because I have all my code in a header file)... The only approach I came up with was something like:

    Sample code (header):

        namespace algorithm {
            // Is still exposed as a part of the interface!
            namespace impl {
                template <typename T>
                void RefractObject(T* object1) {
                    // Do something with that object
                    // (...)
                }
            }

            template <typename T, typename Y>
            void HandleCollision(T* object1, Y* object2) {
                impl::RefractObject(object1);
                // Another stuff
            }
        }

    Any ideas how to make this better in terms of code design?

    Read the article

  • mysql to excel generation using php

    - by pmms
        <?php
        // DB Connection here
        mysql_connect("localhost", "root", "");
        mysql_select_db("hitnrunf_db");

        $select = "SELECT * FROM jos_users";
        $export = mysql_query($select) or die("Sql error : " . mysql_error());

        $fields = mysql_num_fields($export);
        for ($i = 0; $i < $fields; $i++) {
            $header .= mysql_field_name($export, $i) . "\t";
        }

        while ($row = mysql_fetch_row($export)) {
            $line = '';
            foreach ($row as $value) {
                if ((!isset($value)) || ($value == "")) {
                    $value = "\t";
                } else {
                    $value = str_replace('"', '""', $value);
                    $value = '"' . $value . '"' . "\t";
                }
                $line .= $value;
            }
            $data .= trim($line) . "\n";
        }

        $data = str_replace("\r", "", $data);
        if ($data == "") {
            $data = "\n(0) Records Found!\n";
        }

        header("Content-type: application/octet-stream");
        header("Content-Disposition: attachment; filename=your_desired_name.xls");
        header("Pragma: no-cache");
        header("Expires: 0");
        print "$header\n$data";
        ?>

    The above code is used for generating an Excel sheet from MySQL, but we are getting the following error: "The file you are trying to open, 'users.xls', is in a different format than specified by the file extension. Verify that the file is not corrupted and is from a trusted source before opening the file. Do you want to open the file now?"
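    The warning appears because the payload is tab-separated text while the .xls extension promises a real Excel workbook. A small sketch of one workaround, assuming the generation code above stays as it is: ship the same text under a name and MIME type that match the actual format (a genuine .xls without the warning needs a spreadsheet library rather than plain text):

        header("Content-Type: text/tab-separated-values; charset=utf-8");
        header("Content-Disposition: attachment; filename=users.txt");  // or users.csv if commas are emitted
        header("Pragma: no-cache");
        header("Expires: 0");
        print "$header\n$data";  // same tab-separated payload as above, now under a matching name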

    Read the article

  • CSS class not working as expected [closed]

    - by user1050619
    My HTML code does not pick up the CSS styling: the border declared in the CSS file is not being applied. I tried both Firefox and IE. Please provide your inputs. Please find the code below:

    HTML

        <html>
        <head>
        <link href="file://c:/jquery/chapter-1/begin/styles/my_style.css" rel="stylesheet">
        </head>
        <body>
        <div id="header" class="no_hover"><h1>Header</h1></div>
        <button type="button" id="btn1">Click to Add</button>
        <button type="button" id="btn2">Click to Remove</button>
        <script src="file://c:/jquery/chapter-1/begin/scripts/jquery.js" type="text/javascript"></script>
        <script src="file://c:/jquery/chapter-1/begin/scripts/test4.js" type="text/javascript"></script>
        </body>
        </html>

    JS FILE

        $(document).ready(function() {
            $("#btn1").click(function() {
                $("#header").addClass("hover");
                $("#header").removeClass("no_hover");
            });
            $("#btn2").click(function() {
                $("#header").removeClass("hover");
                $("#header").addClass("no_hover");
            });
        });

    CSS FILE

        .hover {
            border: solid #f00 3px;
        }

        .no_hover {
            border: solid #000 3px;
        }

    Read the article

  • load data from grid row into (pop up) form for editing

    - by user1495457
    I read in Ext JS in Action (by J. Garcia) that if we have an instance of Ext.data.Record, we can use the form's loadRecord method to set the form's values. However, he does not give a working example of this (in the example he uses, data is loaded into a form through a file called data.php). I have searched many forums and found the following entry helpful, as it gave me an idea of how to solve my problem by using the form's loadRecord method: load data from grid to form. Now the code for my store and grid is as follows:

        var userstore = Ext.create('Ext.data.Store', {
            storeId: 'viewUsersStore',
            model: 'Configs',
            autoLoad: true,
            proxy: {
                type: 'ajax',
                url: '/user/getuserviewdata.castle',
                reader: { type: 'json', root: 'users' },
                listeners: {
                    exception: function (proxy, response, operation, eOpts) {
                        Ext.MessageBox.alert("Error", "Session has timed-out. Please re-login and try again.");
                    }
                }
            }
        });

        var grid = Ext.create('Ext.grid.Panel', {
            id: 'viewUsersGrid',
            title: 'List of all users',
            store: Ext.data.StoreManager.lookup('viewUsersStore'),
            columns: [
                { header: 'Username', dataIndex: 'username' },
                { header: 'Full Name', dataIndex: 'fullName' },
                { header: 'Company', dataIndex: 'companyName' },
                { header: 'Latest Time Login', dataIndex: 'lastLogin' },
                { header: 'Current Status', dataIndex: 'status' },
                {
                    header: 'Edit',
                    menuDisabled: true,
                    sortable: false,
                    xtype: 'actioncolumn',
                    width: 50,
                    items: [{
                        icon: '../../../Content/ext/img/icons/fam/user_edit.png',
                        tooltip: 'Edit user',
                        handler: function (grid, rowIndex, colIndex) {
                            var rec = userstore.getAt(rowIndex);
                            alert("Edit " + rec.get('username') + "?");
                            EditUser(rec.get('id'));
                        }
                    }]
                },
            ]
        });

        function EditUser(id) {
            // I think I need to have this code here - I don't think it's complete/correct though
            var formpanel = Ext.getCmp('CreateUserForm');
            formpanel.getForm().loadRecord(rec);
        }

    'CreateUserForm' is the ID of a form that already exists and which should appear when the user clicks on the Edit icon. That pop-up form should then automatically be populated with the correct data from the grid row. However my code is not working. I get an error at the line 'formpanel.getForm().loadRecord(rec)' - it says 'Microsoft JScript runtime error: 'undefined' is null or not an object'. Any tips on how to solve this?

    Read the article

  • Margin-Top push outer div down

    - by Daniel Hertz
    Hello, I have a header div as the first element in my wrapper div, but when I add a top margin to an h1 inside the header div it pushes the entire header div down. I realize this happens whenever I apply a top margin to the first visible element on a page. Here is a sample of the CSS. Thanks!

        div#header {
            width: 100%;
            background-color: #000;
            position: relative;
        }

        div#header h1 {
            text-align: center;
            width: 375px;
            height: 50px;
            margin: 0 auto;
            font-size: 220%;
            text-indent: -5000px;
            background: url('../../images/name_logo.png') no-repeat;
        }

    HTML part:

        <div id="header">
            <h1><a href="/home.php">Title</a></h1>
            <ul id="navbar">

    Read the article

  • PHP download page: file_exists() returns true, but browser returns 'page not found'

    - by Chris
    I'm using PHP for a file download page. Here's a snippet of the problem code:

        if (file_exists($attachment_location)) {
            header($_SERVER["SERVER_PROTOCOL"] . " 200 OK");
            header("Cache-Control: public"); // needed for IE
            header("Content-Type: application/zip");
            header("Content-Transfer-Encoding: Binary");
            header("Content-Length:" . filesize($attachment_location));
            header("Content-Disposition: attachment; filename=file.zip");
            readfile($attachment_location);
            die("Hooray");
        } else {
            die("Error: File not found.");
        }

    This code works absolutely fine when testing locally, but after deploying to the live server the browser returns a 'Page not found' error. I thought it might be an .htaccess issue, but all .htaccess files on the live server are identical to their local counterparts. My next guess would be the live server's PHP configuration, but I have no idea what PHP setting might cause this behaviour. The file_exists() function always returns true - I've checked this on the live server and it's always correctly picking up the file, its size etc., so it does have a handle on the file. It just won't perform the download! The main site is a WordPress site, but this code isn't part of a WordPress page - it's in a standalone directory within the site root.

    UPDATE: is_file() and is_readable() are both returning true for the file, so that's not the problem. The specific line that is causing the problem is:

        readfile($attachment_location)

    Everything up until that point is super happy.
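    A debugging sketch, not taken from the thread: if readfile() is the line that kills the request only on the live host, output buffering or a memory/compression handler is a common suspect. Surfacing PHP's own error and streaming the file in chunks (reusing $attachment_location from the snippet above) usually narrows it down:

        <?php
        // Diagnostic sketch; $attachment_location is the same path checked above.
        ini_set('display_errors', '1');   // show the real error instead of the host's generic error page
        error_reporting(E_ALL);

        if (file_exists($attachment_location)) {
            while (ob_get_level()) {
                ob_end_clean();           // drop any output buffering / compression handlers
            }
            header("Content-Type: application/zip");
            header("Content-Length: " . filesize($attachment_location));
            header("Content-Disposition: attachment; filename=file.zip");

            // stream in fixed-size chunks so memory stays flat even for big archives
            $fp = fopen($attachment_location, 'rb');
            while (!feof($fp)) {
                echo fread($fp, 8192);
                flush();
            }
            fclose($fp);
            exit;
        }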

    Read the article

  • PHP Array to CSV

    - by JohnnyFaldo
    I'm trying to convert an array of products into a CSV file, but it doesn't seem to be going to plan. The CSV file is one long line. Here is my code:

        for ($i = 0; $i < count($prods); $i++) {
            $sql = "SELECT * FROM products WHERE id = '" . $prods[$i] . "'";
            $result = $mysqli->query($sql);
            $info = $result->fetch_array();
        }

        $header = '';
        for ($i = 0; $i < count($info); $i++) {
            $row = $info[$i];
            $line = '';
            for ($b = 0; $b < count($row); $b++) {
                $value = $row[$b];
                if ((!isset($value)) || ($value == "")) {
                    $value = "\t";
                } else {
                    $value = str_replace('"', '""', $value);
                    $value = '"' . $value . '"' . "\t";
                }
                $line .= $value;
            }
            $data .= trim($line) . "\n";
        }

        $data = str_replace("\r", "", $data);
        if ($data == "") {
            $data = "\n(0) Records Found!\n";
        }

        header("Content-type: application/octet-stream");
        header("Content-Disposition: attachment; filename=your_desired_name.xls");
        header("Pragma: no-cache");
        header("Expires: 0");
        print "$data";

    Also, the header doesn't force a download. I've been copying and pasting the output and saving it as .csv.
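    A sketch of one way to get one record per line and a forced download, assuming the same $mysqli connection and $prods array as above. fputcsv() handles the quoting and line breaks, and the header() calls have to run before any other output reaches the browser:

        <?php
        header("Content-Type: text/csv; charset=utf-8");
        header("Content-Disposition: attachment; filename=products.csv");
        header("Pragma: no-cache");
        header("Expires: 0");

        $out = fopen('php://output', 'w');
        $wroteHeader = false;

        foreach ($prods as $id) {
            $sql = "SELECT * FROM products WHERE id = '" . $mysqli->real_escape_string($id) . "'";
            $result = $mysqli->query($sql);
            while ($row = $result->fetch_assoc()) {
                if (!$wroteHeader) {
                    fputcsv($out, array_keys($row));  // column names as the first line
                    $wroteHeader = true;
                }
                fputcsv($out, $row);                  // one properly quoted CSV record per row
            }
        }
        fclose($out);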

    Read the article

  • Serving files (800MB) results in an empty file

    - by azz0r
    Hello, with the following code small files are served fine, however large ones (say, 800MB and above) result in empty files! Would I need to do something with Apache to solve this?

        <?php
        class Model_Download
        {
            function __construct($path, $file_name)
            {
                $this->full_path = $path . $file_name;
            }

            public function execute()
            {
                if ($fd = fopen($this->full_path, "r")) {
                    $fsize = filesize($this->full_path);
                    $path_parts = pathinfo($this->full_path);
                    $ext = strtolower($path_parts["extension"]);
                    switch ($ext) {
                        case "pdf":
                            header("Content-type: application/pdf");
                            // add here more headers for diff. extensions
                            header("Content-Disposition: attachment; filename=\"" . $path_parts["basename"] . "\""); // use 'attachment' to force a download
                            break;
                        default;
                            header("Content-type: application/octet-stream");
                            header("Content-Disposition: filename=\"" . $path_parts["basename"] . "\"");
                            break;
                    }
                    header("Content-length: $fsize");
                    header("Cache-control: private"); // use this to open files directly
                    while (!feof($fd)) {
                        $buffer = fread($fd, 2048);
                        echo $buffer;
                    }
                }
                fclose($fd);
                exit;
            }
        }
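    A guess rather than a confirmed fix: with an 800MB file, output buffering or on-the-fly compression between echo and the client can exhaust memory or the execution time limit, which tends to show up as an empty download. A sketch that rules those out, reusing $this->full_path from the class above (the apache_setenv() call only exists when PHP runs as an Apache module):

        <?php
        // Sketch: stream a large file without buffering the whole thing.
        set_time_limit(0);                      // an 800MB transfer can outlive max_execution_time
        while (ob_get_level()) {
            ob_end_clean();                     // drop output buffers so chunks go straight to the client
        }
        if (function_exists('apache_setenv')) {
            apache_setenv('no-gzip', '1');      // assumption: mod_deflate would otherwise buffer/compress
        }

        header("Content-Type: application/octet-stream");
        header("Content-Length: " . filesize($this->full_path));
        header("Content-Disposition: attachment; filename=\"" . basename($this->full_path) . "\"");

        $fd = fopen($this->full_path, 'rb');
        while (!feof($fd)) {
            echo fread($fd, 1048576);           // 1MB chunks
            flush();
        }
        fclose($fd);
        exit;

    If the host allows it, mod_xsendfile (header('X-Sendfile: /path/to/file')) hands the transfer to Apache entirely and keeps PHP out of the data path.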

    Read the article

  • West Wind WebSurge - an easy way to Load Test Web Applications

    - by Rick Strahl
    A few months ago on a project the subject of load testing came up. We were having some serious issues with a Web application that would start spewing SQL lock errors under somewhat heavy load. These sort of errors can be tough to catch, precisely because they only occur under load and not during typical development testing. To replicate this error more reliably we needed to put a load on the application and run it for a while before these SQL errors would flare up. It’s been a while since I’d looked at load testing tools, so I spent a bit of time looking at different tools and frankly didn’t really find anything that was a good fit. A lot of tools were either a pain to use, didn’t have the basic features I needed, or are extravagantly expensive. In  the end I got frustrated enough to build an initially small custom load test solution that then morphed into a more generic library, then gained a console front end and eventually turned into a full blown Web load testing tool that is now called West Wind WebSurge. I got seriously frustrated looking for tools every time I needed some quick and dirty load testing for an application. If my aim is to just put an application under heavy enough load to find a scalability problem in code, or to simply try and push an application to its limits on the hardware it’s running I shouldn’t have to have to struggle to set up tests. It should be easy enough to get going in a few minutes, so that the testing can be set up quickly so that it can be done on a regular basis without a lot of hassle. And that was the goal when I started to build out my initial custom load tester into a more widely usable tool. If you’re in a hurry and you want to check it out, you can find more information and download links here: West Wind WebSurge Product Page Walk through Video Download link (zip) Install from Chocolatey Source on GitHub For a more detailed discussion of the why’s and how’s and some background continue reading. How did I get here? When I started out on this path, I wasn’t planning on building a tool like this myself – but I got frustrated enough looking at what’s out there to think that I can do better than what’s available for the most common simple load testing scenarios. When we ran into the SQL lock problems I mentioned, I started looking around what’s available for Web load testing solutions that would work for our whole team which consisted of a few developers and a couple of IT guys both of which needed to be able to run the tests. It had been a while since I looked at tools and I figured that by now there should be some good solutions out there, but as it turns out I didn’t really find anything that fit our relatively simple needs without costing an arm and a leg… I spent the better part of a day installing and trying various load testing tools and to be frank most of them were either terrible at what they do, incredibly unfriendly to use, used some terminology I couldn’t even parse, or were extremely expensive (and I mean in the ‘sell your liver’ range of expensive). Pick your poison. There are also a number of online solutions for load testing and they actually looked more promising, but those wouldn’t work well for our scenario as the application is running inside of a private VPN with no outside access into the VPN. Most of those online solutions also ended up being very pricey as well – presumably because of the bandwidth required to test over the open Web can be enormous. 
When I asked around on Twitter what people were using– I got mostly… crickets. Several people mentioned Visual Studio Load Test, and most other suggestions pointed to online solutions. I did get a bunch of responses though with people asking to let them know what I found – apparently I’m not alone when it comes to finding load testing tools that are effective and easy to use. As to Visual Studio, the higher end skus of Visual Studio and the test edition include a Web load testing tool, which is quite powerful, but there are a number of issues with that: First it’s tied to Visual Studio so it’s not very portable – you need a VS install. I also find the test setup and terminology used by the VS test runner extremely confusing. Heck, it’s complicated enough that there’s even a Pluralsight course on using the Visual Studio Web test from Steve Smith. And of course you need to have one of the high end Visual Studio Skus, and those are mucho Dinero ($$$) – just for the load testing that’s rarely an option. Some of the tools are ultra extensive and let you run analysis tools on the target serves which is useful, but in most cases – just plain overkill and only distracts from what I tend to be ultimately interested in: Reproducing problems that occur at high load, and finding the upper limits and ‘what if’ scenarios as load is ramped up increasingly against a site. Yes it’s useful to have Web app instrumentation, but often that’s not what you’re interested in. I still fondly remember early days of Web testing when Microsoft had the WAST (Web Application Stress Tool) tool, which was rather simple – and also somewhat limited – but easily allowed you to create stress tests very quickly. It had some serious limitations (mainly that it didn’t work with SSL),  but the idea behind it was excellent: Create tests quickly and easily and provide a decent engine to run it locally with minimal setup. You could get set up and run tests within a few minutes. Unfortunately, that tool died a quiet death as so many of Microsoft’s tools that probably were built by an intern and then abandoned, even though there was a lot of potential and it was actually fairly widely used. Eventually the tools was no longer downloadable and now it simply doesn’t work anymore on higher end hardware. West Wind Web Surge – Making Load Testing Quick and Easy So I ended up creating West Wind WebSurge out of rebellious frustration… The goal of WebSurge is to make it drop dead simple to create load tests. It’s super easy to capture sessions either using the built in capture tool (big props to Eric Lawrence, Telerik and FiddlerCore which made that piece a snap), using the full version of Fiddler and exporting sessions, or by manually or programmatically creating text files based on plain HTTP headers to create requests. I’ve been using this tool for 4 months now on a regular basis on various projects as a reality check for performance and scalability and it’s worked extremely well for finding small performance issues. I also use it regularly as a simple URL tester, as it allows me to quickly enter a URL plus headers and content and test that URL and its results along with the ability to easily save one or more of those URLs. A few weeks back I made a walk through video that goes over most of the features of WebSurge in some detail: Note that the UI has slightly changed since then, so there are some UI improvements. 
Most notably the test results screen has been updated recently to a different layout and to provide more information about each URL in a session at a glance. The video and the main WebSurge site has a lot of info of basic operations. For the rest of this post I’ll talk about a few deeper aspects that may be of interest while also giving a glance at how WebSurge works. Session Capturing As you would expect, WebSurge works with Sessions of Urls that are played back under load. Here’s what the main Session View looks like: You can create session entries manually by individually adding URLs to test (on the Request tab on the right) and saving them, or you can capture output from Web Browsers, Windows Desktop applications that call services, your own applications using the built in Capture tool. With this tool you can capture anything HTTP -SSL requests and content from Web pages, AJAX calls, SOAP or REST services – again anything that uses Windows or .NET HTTP APIs. Behind the scenes the capture tool uses FiddlerCore so basically anything you can capture with Fiddler you can also capture with Web Surge Session capture tool. Alternately you can actually use Fiddler as well, and then export the captured Fiddler trace to a file, which can then be imported into WebSurge. This is a nice way to let somebody capture session without having to actually install WebSurge or for your customers to provide an exact playback scenario for a given set of URLs that cause a problem perhaps. Note that not all applications work with Fiddler’s proxy unless you configure a proxy. For example, .NET Web applications that make HTTP calls usually don’t show up in Fiddler by default. For those .NET applications you can explicitly override proxy settings to capture those requests to service calls. The capture tool also has handy optional filters that allow you to filter by domain, to help block out noise that you typically don’t want to include in your requests. For example, if your pages include links to CDNs, or Google Analytics or social links you typically don’t want to include those in your load test, so by capturing just from a specific domain you are guaranteed content from only that one domain. Additionally you can provide url filters in the configuration file – filters allow to provide filter strings that if contained in a url will cause requests to be ignored. Again this is useful if you don’t filter by domain but you want to filter out things like static image, css and script files etc. Often you’re not interested in the load characteristics of these static and usually cached resources as they just add noise to tests and often skew the overall url performance results. In my testing I tend to care only about my dynamic requests. SSL Captures require Fiddler Note, that in order to capture SSL requests you’ll have to install the Fiddler’s SSL certificate. The easiest way to do this is to install Fiddler and use its SSL configuration options to get the certificate into the local certificate store. There’s a document on the Telerik site that provides the exact steps to get SSL captures to work with Fiddler and therefore with WebSurge. Session Storage A group of URLs entered or captured make up a Session. Sessions can be saved and restored easily as they use a very simple text format that simply stored on disk. The format is slightly customized HTTP header traces separated by a separator line. 
The headers are standard HTTP headers except that the full URL instead of just the domain relative path is stored as part of the 1st HTTP header line for easier parsing. Because it’s just text and uses the same format that Fiddler uses for exports, it’s super easy to create Sessions by hand manually or under program control writing out to a simple text file. You can see what this format looks like in the Capture window figure above – the raw captured format is also what’s stored to disk and what WebSurge parses from. The only ‘custom’ part of these headers is that 1st line contains the full URL instead of the domain relative path and Host: header. The rest of each header are just plain standard HTTP headers with each individual URL isolated by a separator line. The format used here also uses what Fiddler produces for exports, so it’s easy to exchange or view data either in Fiddler or WebSurge. Urls can also be edited interactively so you can modify the headers easily as well: Again – it’s just plain HTTP headers so anything you can do with HTTP can be added here. Use it for single URL Testing Incidentally I’ve also found this form as an excellent way to test and replay individual URLs for simple non-load testing purposes. Because you can capture a single or many URLs and store them on disk, this also provides a nice HTTP playground where you can record URLs with their headers, and fire them one at a time or as a session and see results immediately. It’s actually an easy way for REST presentations and I find the simple UI flow actually easier than using Fiddler natively. Finally you can save one or more URLs as a session for later retrieval. I’m using this more and more for simple URL checks. Overriding Cookies and Domains Speaking of HTTP headers – you can also overwrite cookies used as part of the options. One thing that happens with modern Web applications is that you have session cookies in use for authorization. These cookies tend to expire at some point which would invalidate a test. Using the Options dialog you can actually override the cookie: which replaces the cookie for all requests with the cookie value specified here. You can capture a valid cookie from a manual HTTP request in your browser and then paste into the cookie field, to replace the existing Cookie with the new one that is now valid. Likewise you can easily replace the domain so if you captured urls on west-wind.com and now you want to test on localhost you can do that easily easily as well. You could even do something like capture on store.west-wind.com and then test on localhost/store which would also work. Running Load Tests Once you’ve created a Session you can specify the length of the test in seconds, and specify the number of simultaneous threads to run each session on. Sessions run through each of the URLs in the session sequentially by default. One option in the options list above is that you can also randomize the URLs so each thread runs requests in a different order. This avoids bunching up URLs initially when tests start as all threads run the same requests simultaneously which can sometimes skew the results of the first few minutes of a test. While sessions run some progress information is displayed: By default there’s a live view of requests displayed in a Console-like window. On the bottom of the window there’s a running total summary that displays where you’re at in the test, how many requests have been processed and what the requests per second count is currently for all requests. 
Note that for tests that run over a thousand requests a second it’s a good idea to turn off the console display. While the console display is nice to see that something is happening and also gives you slight idea what’s happening with actual requests, once a lot of requests are processed, this UI updating actually adds a lot of CPU overhead to the application which may cause the actual load generated to be reduced. If you are running a 1000 requests a second there’s not much to see anyway as requests roll by way too fast to see individual lines anyway. If you look on the options panel, there is a NoProgressEvents option that disables the console display. Note that the summary display is still updated approximately once a second so you can always tell that the test is still running. Test Results When the test is done you get a simple Results display: On the right you get an overall summary as well as breakdown by each URL in the session. Both success and failures are highlighted so it’s easy to see what’s breaking in your load test. The report can be printed or you can also open the HTML document in your default Web Browser for printing to PDF or saving the HTML document to disk. The list on the right shows you a partial list of the URLs that were fired so you can look in detail at the request and response data. The list can be filtered by success and failure requests. Each list is partial only (at the moment) and limited to a max of 1000 items in order to render reasonably quickly. Each item in the list can be clicked to see the full request and response data: This particularly useful for errors so you can quickly see and copy what request data was used and in the case of a GET request you can also just click the link to quickly jump to the page. For non-GET requests you can find the URL in the Session list, and use the context menu to Test the URL as configured including any HTTP content data to send. You get to see the full HTTP request and response as well as a link in the Request header to go visit the actual page. Not so useful for a POST as above, but definitely useful for GET requests. Finally you can also get a few charts. The most useful one is probably the Request per Second chart which can be accessed from the Charts menu or shortcut. Here’s what it looks like:   Results can also be exported to JSON, XML and HTML. Keep in mind that these files can get very large rather quickly though, so exports can end up taking a while to complete. Command Line Interface WebSurge runs with a small core load engine and this engine is plugged into the front end application I’ve shown so far. There’s also a command line interface available to run WebSurge from the Windows command prompt. Using the command line you can run tests for either an individual URL (similar to AB.exe for example) or a full Session file. By default when it runs WebSurgeCli shows progress every second showing total request count, failures and the requests per second for the entire test. A silent option can turn off this progress display and display only the results. The command line interface can be useful for build integration which allows checking for failures perhaps or hitting a specific requests per second count etc. It’s also nice to use this as quick and dirty URL test facility similar to the way you’d use Apache Bench (ab.exe). Unlike ab.exe though, WebSurgeCli supports SSL and makes it much easier to create multi-URL tests using either manual editing or the WebSurge UI. 
Current Status Currently West Wind WebSurge is still in Beta status. I’m still adding small new features and tweaking the UI in an attempt to make it as easy and self-explanatory as possible to run. Documentation for the UI and specialty features is also still a work in progress. I plan on open-sourcing this product, but it won’t be free. There’s a free version available that provides a limited number of threads and request URLs to run. A relatively low cost license  removes the thread and request limitations. Pricing info can be found on the Web site – there’s an introductory price which is $99 at the moment which I think is reasonable compared to most other for pay solutions out there that are exorbitant by comparison… The reason code is not available yet is – well, the UI portion of the app is a bit embarrassing in its current monolithic state. The UI started as a very simple interface originally that later got a lot more complex – yeah, that never happens, right? Unless there’s a lot of interest I don’t foresee re-writing the UI entirely (which would be ideal), but in the meantime at least some cleanup is required before I dare to publish it :-). The code will likely be released with version 1.0. I’m very interested in feedback. Do you think this could be useful to you and provide value over other tools you may or may not have used before? I hope so – it already has provided a ton of value for me and the work I do that made the development worthwhile at this point. You can leave a comment below, or for more extensive discussions you can post a message on the West Wind Message Board in the WebSurge section Microsoft MVPs and Insiders get a free License If you’re a Microsoft MVP or a Microsoft Insider you can get a full license for free. Send me a link to your current, official Microsoft profile and I’ll send you a not-for resale license. Send any messages to [email protected]. Resources For more info on WebSurge and to download it to try it out, use the following links. West Wind WebSurge Home Download West Wind WebSurge Getting Started with West Wind WebSurge Video© Rick Strahl, West Wind Technologies, 2005-2014Posted in ASP.NET   Tweet !function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0];if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src="//platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs"); (function() { var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = 'https://apis.google.com/js/plusone.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s); })();

    Read the article

  • Where to publish articles about open source?

    - by Lukas Eder
    I've been developing a free, open source Java database abstraction project (jOOQ) and have released stable versions from November 2010 onwards. Feedback has been quite good and constructive, and I am very motivated to continue my work. In the meantime, to get more attention and feedback, I have published articles on http://java.dzone.com/, http://www.theserverside.com/ and http://www.infoq.com/ (they didn't publish my article, though). These are some sample articles so you know the type of article I want to publish: http://java.dzone.com/announcements/simple-and-intuitive-approach and http://java.dzone.com/articles/2011-great-year-stored. What other resources would you recommend? Where else should I publish, knowing that I want to reach Java/SQL developers and architects / technology decision makers, I can publish in English, German or French, and I think my project is suitable for both beginners and pros (in Java and SQL, or programming in general)?

    Read the article

  • Commercial Software Development – my presentation for DDD Scotland now available for download

    - by Liam Westley
    Thanks to everyone who voted me onto the DDD Scotland agenda, and for the fantastic audience, some of whom you can see in Craig Murphy's photos of the event: http://www.flickr.com/photos/craigmurphy/4592461745/in/set-72157624025673156 and http://www.flickr.com/photos/craigmurphy/4592467645/in/set-72157624025673156. I hope those who came to the session had a good time. For them, or for those who were on one of the other tracks or who couldn't squeeze in, I've uploaded the presentation for you to download. I created a simpler, smaller PowerPoint without all the fancy animations and video clips, which is available as a compressed ZIP file: http://www.tigernews.co.uk/blog-twickers/dddscot/commercialsoftwaredev.zip. I also printed the presentation with speaker notes (which contain most of the information I was talking about) using PDFCreator, which is available as an Adobe Acrobat PDF here: http://www.tigernews.co.uk/blog-twickers/dddscot/commercialsoftwaredev.pdf. And if PowerPoint presentations don't do it for you, also thanks to Craig Murphy, you can watch a video of the presentation that I gave at DDD8 in Microsoft TVP, Reading: http://vimeo.com/9216563

    Read the article

  • Authorize.Net, Silent Posts, and URL Rewriting Don't Mix

    The too long, didn't read synopsis: If you use Authorize.Net and its silent post feature and it stops working, make sure that if your website uses URL rewriting to strip or add a www to the domain name that the URL you specify for the silent post matches the URL rewriting rule because Authorize.Net's silent post feature won't resubmit the post request to URL specified via the redirect response. I have a client that uses Authorize.Net to manage and bill customers. Like many payment gateways, Authorize.Net supports recurring payments. For example, a website may charge members a monthly fee to access their services. With Authorize.Net you can provide the billing amount and schedule and at each interval Authorize.Net will automatically charge the customer's credit card and deposit the funds to your account. You may want to do something whenever Authorize.Net performs a recurring payment. For instance, if the recurring payment charge was a success you would extend the customer's service; if the transaction was denied then you would cancel their service (or whatever). To accomodate this, Authorize.Net offers a silent post feature. Properly configured, Authorize.Net will send an HTTP request that contains details of the recurring payment transaction to a URL that you specify. This URL could be an ASP.NET page on your server that then parses the data from Authorize.Net and updates the specified customer's account accordingly. (Of course, you can always view the history of recurring payments through the reporting interface on Authorize.Net's website; the silent post feature gives you a way to programmatically respond to a recurring payment.) Recently, this client of mine that uses Authorize.Net informed me that several paying customers were telling him that their access to the site had been cut off even though their credit cards had been recently billed. Looking through our logs, I noticed that we had not shown any recurring payment log activity for over a month. I figured one of two things must be going on: either Authorize.Net wasn't sending us the silent post requests anymore or the page that was processing them wasn't doing so correctly. I started by verifying that our Authorize.Net account was properly setup to use the silent post feature and that it was pointing to the correct URL. Authorize.Net's site indicated the silent post was configured and that recurring payment transaction details were being sent to http://example.com/AuthorizeNetProcessingPage.aspx. Next, I wanted to determine what information was getting sent to that URL.The application was setup tolog the parsed results of the Authorize.Net request, such as what customer the recurring payment applied to; however,we were not logging the actual HTTP request coming from Authorize.Net. I contacted Authorize.Net's support to inquire if they logged the HTTP request send via the silent post feature and was told that they did not. I decided to add a bit of code to log the incoming HTTP request, which you can do by using the Request object's SaveAs method. This allowed me to saveevery incoming HTTP request to the silent post page to a text file on the server. Upon the next recurring payment, I was able to see the HTTP request being received by the page: GET /AuthorizeNetProcessingPage.aspx HTTP/1.1Connection: CloseAccept: */*Host: www.example.com That was it. 
Two things alarmed me: first, the request was obviously a GET and not a POST; second, there was no POST body (obviously), which is where Authorize.Net passes along thedetails of the recurring payment transaction.What stuck out was the Host header, which differed slightly from the silent post URL configured in Authorize.Net. Specifically, the Host header in the above logged request pointed to www.example.com, whereas the Authorize.Net configuration used example.com (no www). About a month ago - the same time these recurring payment transaction detailswere no longer being processed by our ASP.NET page - we had implemented IIS 7's URL rewriting feature to permanently redirect all traffic to example.com to www.example.com. Could that be the problem? I contacted Authorize.Net's support again and asked them if their silent post algorithmwould follow the301HTTP response and repost the recurring payment transaction details. They said, Yes, the silent post would follow redirects. Their reports didn't jive with my observations, so I went ahead and updated our Authorize.Net configuration to point to http://www.example.com/AuthorizeNetProcessingPage.aspx instead of http://example.com/AuthorizeNetProcessingPage.aspx. And, I'm happy to report, recurring payments and correctly being processed again! If you use Authorize.Net and the silent post feature, and you notice that your processing page is not longer working, make sure you are not using any URL rewriting rules that may conflict with the silent post URL configuration. Hope this saves someone the time it took me to get to the bottom of this. Happy Programming!Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.

    Read the article

  • Thanks to .Net Developers Network in Bristol - Hyper-V for Developers slides now available for download

    - by Liam Westley
    Thanks to the guys at .Net Developers Network (http://www.dotnetdevnet.com) for inviting me down to Bristol to present on Hyper-V for Developers. There were some great questions and genuine interest, which is especially surprising for a topic that often has a soporific effect on developers. You can download the original PowerPoint file or the PDF complete with speaker notes from here: http://www.tigernews.co.uk/blog-twickers/dotnetdevnet/HyperV4Devs-PPT.zip and http://www.tigernews.co.uk/blog-twickers/dotnetdevnet/HyperV4Devs-PDF.zip. I should be back for DDD SouthWest (http://www.dddsouthwest.com). Voting opens on Monday 29th March 2010, and for a change my proposed topic is not about virtualisation! Finally, apologies to Guy Smith-Ferrier for dragging him away from the Bristol Girl Geek Dinners (http://bristolgirlgeekdinners.com) crew so I could catch my train back to London.

    Read the article

  • Extending NerdDinner: Adding Geolocated Flair

    - by Jon Galloway
    NerdDinner is a website with the audacious goal of “Organizing the world’s nerds and helping them eat in packs.” Because nerds aren’t likely to socialize with others unless a website tells them to do it. Scott Hanselman showed off a lot of the cool features we’ve added to NerdDinner lately during his popular talk at MIX10, Beyond File | New Company: From Cheesy Sample to Social Platform. Did you miss it? Go ahead and watch it, I’ll wait. One of the features we wanted to add was flair. You know about flair, right? It’s a way to let folks who like your site show it off in their own site. For example, here’s my StackOverflow flair: Great! So how could we add some of this flair stuff to NerdDinner? What do we want to show? If we’re going to encourage our users to give up a bit of their beautiful website to show off a bit of ours, we need to think about what they’ll want to show. For instance, my StackOverflow flair is all about me, not StackOverflow. So how will this apply to NerdDinner? Since NerdDinner is all about organizing local dinners, in order for the flair to be useful it needs to make sense for the person viewing the web page. If someone visits from Egypt visits my blog, they should see information about NerdDinners in Egypt. That’s geolocation – localizing site content based on where the browser’s sitting, and it makes sense for flair as well as entire websites. So we’ll set up a simple little callout that prompts them to host a dinner in their area: Hopefully our flair works and there is a dinner near your viewers, so they’ll see another view which lists upcoming dinners near them: The Geolocation Part Generally website geolocation is done by mapping the requestor’s IP address to a geographic area. It’s not an exact science, but I’ve always found it to be pretty accurate. There are (at least) three ways to handle it: You pay somebody like MaxMind for a database (with regular updates) that sits on your server, and you use their API to do lookups. I used this on a pretty big project a few years ago and it worked well. You use HTML 5 Geolocation API or Google Gears or some other browser based solution. I think those are cool (I use Google Gears a lot), but they’re both in flux right now and I don’t think either has a wide enough of an install base yet to rely on them. You might want to, but I’ve heard you do all kinds of crazy stuff, and sometimes it gets you in trouble. I don’t mean talk out of line, but we all laugh behind your back a bit. But, hey, it’s up to you. It’s your flair or whatever. There are some free webservices out there that will take an IP address and give you location information. Easy, and works for everyone. That’s what we’re doing. I looked at a few different services and settled on IPInfoDB. It’s free, has a great API, and even returns JSON, which is handy for Javascript use. The IP query is pretty simple. 
We hit a URL like this: http://ipinfodb.com/ip_query.php?ip=74.125.45.100&timezone=false … and we get an XML response back like this… <?xml version="1.0" encoding="UTF-8"?> <Response> <Ip>74.125.45.100</Ip> <Status>OK</Status> <CountryCode>US</CountryCode> <CountryName>United States</CountryName> <RegionCode>06</RegionCode> <RegionName>California</RegionName> <City>Mountain View</City> <ZipPostalCode>94043</ZipPostalCode> <Latitude>37.4192</Latitude> <Longitude>-122.057</Longitude> </Response> So we’ll build some data transfer classes to hold the location information, like this: public class LocationInfo { public string Country { get; set; } public string RegionName { get; set; } public string City { get; set; } public string ZipPostalCode { get; set; } public LatLong Position { get; set; } } public class LatLong { public float Lat { get; set; } public float Long { get; set; } } And now hitting the service is pretty simple: public static LocationInfo HostIpToPlaceName(string ip) { string url = "http://ipinfodb.com/ip_query.php?ip={0}&timezone=false"; url = String.Format(url, ip); var result = XDocument.Load(url); var location = (from x in result.Descendants("Response") select new LocationInfo { City = (string)x.Element("City"), RegionName = (string)x.Element("RegionName"), Country = (string)x.Element("CountryName"), ZipPostalCode = (string)x.Element("CountryName"), Position = new LatLong { Lat = (float)x.Element("Latitude"), Long = (float)x.Element("Longitude") } }).First(); return location; } Getting The User’s IP Okay, but first we need the end user’s IP, and you’d think it would be as simple as reading the value from HttpContext: HttpContext.Current.Request.UserHostAddress But you’d be wrong. Sorry. UserHostAddress just wraps HttpContext.Current.Request.ServerVariables["REMOTE_ADDR"], but that doesn’t get you the IP for users behind a proxy. That’s in another header, “HTTP_X_FORWARDED_FOR". So you can either hit a wrapper and then check a header, or just check two headers. I went for uniformity: string SourceIP = string.IsNullOrEmpty(Request.ServerVariables["HTTP_X_FORWARDED_FOR"]) ? Request.ServerVariables["REMOTE_ADDR"] : Request.ServerVariables["HTTP_X_FORWARDED_FOR"]; We’re almost set to wrap this up, but first let’s talk about our views. Yes, views, because we’ll have two. Selecting the View We wanted to make it easy for people to include the flair in their sites, so we looked around at how other people were doing this. The StackOverflow folks have a pretty good flair system, which allows you to include the flair in your site as either an IFRAME reference or a Javascript include. We’ll do both. We have a ServicesController to handle use of the site information outside of NerdDinner.com, so this fits in pretty well there. We’ll be displaying the same information for both HTML and Javascript flair, so we can use one Flair controller action which will return a different view depending on the requested format. Here’s our general flow for our controller action: Get the user’s IP Translate it to a location Grab the top three upcoming dinners that are near that location Select the view based on the format (defaulted to “html”) Return a FlairViewModel which contains the list of dinners and the location information public ActionResult Flair(string format = "html") { string SourceIP = string.IsNullOrEmpty( Request.ServerVariables["HTTP_X_FORWARDED_FOR"]) ? 
Request.ServerVariables["REMOTE_ADDR"] : Request.ServerVariables["HTTP_X_FORWARDED_FOR"]; var location = GeolocationService.HostIpToPlaceName(SourceIP); var dinners = dinnerRepository. FindByLocation(location.Position.Lat, location.Position.Long). OrderByDescending(p => p.EventDate).Take(3); // Select the view we'll return. // Using a switch because we'll add in JSON and other formats later. string view; switch (format.ToLower()) { case "javascript": view = "JavascriptFlair"; break; default: view = "Flair"; break; } return View( view, new FlairViewModel { Dinners = dinners.ToList(), LocationName = string.IsNullOrEmpty(location.City) ? "you" : String.Format("{0}, {1}", location.City, location.RegionName) } ); } Note: I’m not in love with the logic here, but it seems like overkill to extract the switch statement away when we’ll probably just have two or three views. What do you think? The HTML View The HTML version of the view is pretty simple – the only thing of any real interest here is the use of an extension method to truncate strings that are would cause the titles to wrap. public static string Truncate(this string s, int maxLength) { if (string.IsNullOrEmpty(s) || maxLength <= 0) return string.Empty; else if (s.Length > maxLength) return s.Substring(0, maxLength) + "..."; else return s; }   So here’s how the HTML view ends up looking: <%@ Page Title="" Language="C#" Inherits="System.Web.Mvc.ViewPage<FlairViewModel>" %> <%@ Import Namespace="NerdDinner.Helpers" %> <%@ Import Namespace="NerdDinner.Models" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Nerd Dinner</title> <link href="/Content/Flair.css" rel="stylesheet" type="text/css" /> </head> <body> <div id="nd-wrapper"> <h2 id="nd-header">NerdDinner.com</h2> <div id="nd-outer"> <% if (Model.Dinners.Count == 0) { %> <div id="nd-bummer"> Looks like there's no Nerd Dinners near <%:Model.LocationName %> in the near future. Why not <a target="_blank" href="http://www.nerddinner.com/Dinners/Create">host one</a>?</div> <% } else { %> <h3> Dinners Near You</h3> <ul> <% foreach (var item in Model.Dinners) { %> <li> <%: Html.ActionLink(String.Format("{0} with {1} on {2}", item.Title.Truncate(20), item.HostedBy, item.EventDate.ToShortDateString()), "Details", "Dinners", new { id = item.DinnerID }, new { target = "_blank" })%></li> <% } %> </ul> <% } %> <div id="nd-footer"> More dinners and fun at <a target="_blank" href="http://nrddnr.com">http://nrddnr.com</a></div> </div> </div> </body> </html> You’d include this in a page using an IFRAME, like this: <IFRAME height=230 marginHeight=0 src="http://nerddinner.com/services/flair" frameBorder=0 width=160 marginWidth=0 scrolling=no></IFRAME> The Javascript view The Javascript flair is written so you can include it in a webpage with a simple script include, like this: <script type="text/javascript" src="http://nerddinner.com/services/flair?format=javascript"></script> The goal of this view is very similar to the HTML embed view, with a few exceptions: We’re creating a script element and adding it to the head of the document, which will then document.write out the content. Note that you have to consider if your users will actually have a <head> element in their documents, but for website flair use cases I think that’s a safe bet. Since the content is being added to the existing page rather than shown in an IFRAME, all links need to be absolute. 
  That means we can’t use Html.ActionLink, since it generates relative routes.
- We need to escape everything, since it’s being written out as strings.
- We need to set the content type to application/x-javascript. The easiest way to do that is to use the <%@ Page ContentType %> directive.

<%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<NerdDinner.Models.FlairViewModel>" ContentType="application/x-javascript" %>
<%@ Import Namespace="NerdDinner.Helpers" %>
<%@ Import Namespace="NerdDinner.Models" %>
document.write('<script>var link = document.createElement(\"link\");link.href = \"http://nerddinner.com/content/Flair.css\";link.rel = \"stylesheet\";link.type = \"text/css\";var head = document.getElementsByTagName(\"head\")[0];head.appendChild(link);</script>');
document.write('<div id=\"nd-wrapper\"><h2 id=\"nd-header\">NerdDinner.com</h2><div id=\"nd-outer\">');
<% if (Model.Dinners.Count == 0) { %>
document.write('<div id=\"nd-bummer\">Looks like there\'s no Nerd Dinners near <%:Model.LocationName %> in the near future. Why not <a target=\"_blank\" href=\"http://www.nerddinner.com/Dinners/Create\">host one</a>?</div>');
<% } else { %>
document.write('<h3> Dinners Near You</h3><ul>');
<% foreach (var item in Model.Dinners) { %>
document.write('<li><a target=\"_blank\" href=\"http://nrddnr.com/<%: item.DinnerID %>\"><%: item.Title.Truncate(20) %> with <%: item.HostedBy %> on <%: item.EventDate.ToShortDateString() %></a></li>');
<% } %>
document.write('</ul>');
<% } %>
document.write('<div id=\"nd-footer\"> More dinners and fun at <a target=\"_blank\" href=\"http://nrddnr.com\">http://nrddnr.com</a></div></div></div>');

Getting IPs for Testing

There are a variety of online services that will translate a location to an IP, which were handy for testing these out. I found http://www.itouchmap.com/latlong.html to be most useful, but I’m open to suggestions if you know of something better.

Next steps

I think the next step here is to minimize load – you know, in case people start actually using this flair. There are two places to think about – the NerdDinner.com servers, and the services we’re using for geolocation. I usually think about caching as a first attack on server load, but that’s less helpful here since every user will have a different IP. Instead, I’d look at taking advantage of Asynchronous Controller Actions, a cool new feature in ASP.NET MVC 2. Async Actions let you call a potentially long-running webservice without tying up a thread on the server while waiting for the response. There’s some good info on that in the MSDN documentation, and Dino Esposito wrote a great article on Asynchronous ASP.NET Pages in the April 2010 issue of MSDN Magazine.

But let’s think of the children, shall we? What about ipinfodb.com? Well, they don’t have specific daily limits, but they do throttle you if you put a lot of traffic on them. From their FAQ:

    We do not have a specific daily limit but queries that are at a rate faster than 2 per second will be put in "queue". If you stay below 2 queries/second everything will be normal. If you go over the limit, you will still get an answer for all queries but they will be slowed down to about 1 per second. This should not affect most users but for high volume websites, you can either use our IP database on your server or we can whitelist your IP for 5$/month (simply use the donate form and leave a comment with your server IP). Good programming practices such as not querying our API for all page views (you can store the data in a cookie or a database) will also help not reaching the limit.

So the first step there is to save the geolocation information in a time-limited cookie, which will allow us to look up the local dinners immediately without having to hit the geolocation service.
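
To sketch that last idea out (this is not the real NerdDinner code: the cookie name, the LocationInfo type and the serialization approach are all placeholders standing in for whatever HostIpToPlaceName actually returns), caching the lookup in a short-lived cookie could look something like this:

// Rough sketch only. Lives in the controller; needs using System.Web and
// using System.Web.Script.Serialization for JavaScriptSerializer.
private LocationInfo GetLocationWithCookieCache(string sourceIp)
{
    const string cookieName = "nd-geo"; // hypothetical cookie name
    var serializer = new JavaScriptSerializer();

    var cookie = Request.Cookies[cookieName];
    if (cookie != null && !string.IsNullOrEmpty(cookie.Value))
    {
        // The cookie already holds a recent lookup, so skip the geolocation service entirely.
        return serializer.Deserialize<LocationInfo>(HttpUtility.UrlDecode(cookie.Value));
    }

    // No valid cookie yet: call the service once, then cache the result for a few hours.
    var location = GeolocationService.HostIpToPlaceName(sourceIp);
    Response.Cookies.Add(
        new HttpCookie(cookieName, HttpUtility.UrlEncode(serializer.Serialize(location)))
        {
            Expires = DateTime.Now.AddHours(6) // time-limited; tune to taste
        });
    return location;
}

That keeps repeat visitors off ipinfodb.com entirely, and the async action then only matters for first-time visitors.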

    Read the article

  • Backing up Oracle Database 11gR2 with NetBackup also works in an Exadata V2 environment

    - by Fekete Zoltán
    Oracle 11gR2 databases can also be backed up with the Veritas NetBackup software on Oracle Enterprise Linux (using RMAN), in a 64-bit environment. In the documentation you have to look for the information that applies to Red Hat, because according to http://seer.entsupport.symantec.com/docs/337048.htm, "Oracle Enterprise Linux (OEL)" is "Supported based on NetBackup Red Hat Enterprise Linux 4.x/5.x Client, Server, and Oracle Agent support. BMR is not supported." NetBackup compatibility lists: http://seer.entsupport.symantec.com/docs/303344.htm
    - NetBackup 7 is compatible with Oracle Exadata V2: http://seer.entsupport.symantec.com/docs/340295.htm
    - For NetBackup 6.x versions the following patch has to be installed: NB_6.5.5_ET1940073_1_347227.zip is a NetBackup 6.5.5 EEB (Emergency Engineering Binary) for Oracle Clients. See http://seer.entsupport.symantec.com/docs/347227.htm and http://support.veritas.com/docs/279048.
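
    For reference, an RMAN backup routed through NetBackup's SBT interface looks roughly like the sketch below. The policy, schedule and client names are placeholders, and it assumes the NetBackup for Oracle agent (the SBT library) is already linked in on the database host:

    RUN {
      # Route the backup through the NetBackup media management (SBT) library
      ALLOCATE CHANNEL ch00 TYPE 'SBT_TAPE';
      # Placeholder NetBackup policy/schedule/client values; adjust to the real configuration
      SEND 'NB_ORA_POLICY=ora11g_rman, NB_ORA_SCHED=Default-Application-Backup, NB_ORA_CLIENT=exadata-db01';
      BACKUP DATABASE PLUS ARCHIVELOG;
      RELEASE CHANNEL ch00;
    }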

    Read the article

  • php fopen => 500 Internal Server Error

    - by Ahmed B
    I have a website hosted on a dedicated server, and I noticed that Google and other search engines can't access most of the URLs on my website. On my localhost I ran a small test:

    var_dump(fopen('http://www.aswat.ma', 'r'));

    And I got this error:

    Warning: fopen(http://www.aswat.ma) [function.fopen]: failed to open stream: HTTP request failed! HTTP/1.1 500 Internal Server Error in C:\xampp\htdocs\pnowate\public\index.php on line 4
    bool(false)

    If I change the URL "http://www.aswat.ma" to "www.google.co.ma" I get this instead:

    resource(3) of type (stream)

    Does anyone have any idea about this issue?
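
    One way to narrow this down (a diagnostic sketch only, not a fix; the User-Agent string is arbitrary) is to repeat the request with explicit headers and dump whatever status line and headers the server actually sends back:

    <?php
    // Send a browser-like User-Agent and keep the body even on 4xx/5xx responses,
    // so we can see what the 500 error page actually contains.
    $context = stream_context_create(array(
        'http' => array(
            'method'        => 'GET',
            'header'        => "User-Agent: Mozilla/5.0 (compatible; fopen-test)\r\n",
            'ignore_errors' => true,
        ),
    ));

    $body = file_get_contents('http://www.aswat.ma', false, $context);
    var_dump($http_response_header);  // status line plus response headers
    var_dump(substr($body, 0, 500));  // beginning of the error page, if any

    If the request succeeds with a User-Agent but fails without one, the server is probably rejecting "bot-like" requests, which would also explain the crawler problem.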

    Read the article

  • Packages are not available for installation

    - by Alex Farber
    While changing some Software Update settings I possibly corrupted something, and now I don't see many packages in the list. For example:

    alex@u120464:~$ sudo apt-get install codeblocks
    [sudo] password for alex:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    E: Unable to locate package codeblocks

    I checked all options in the Software Sources dialog, but the packages are still not available. How can I fix this? OS: Ubuntu 12.04, 64 bit.

    Additional information:

    alex@u120464:~$ sudo apt-get update
    [sudo] password for alex:
    Ign http://extras.ubuntu.com precise InRelease
    Ign http://security.ubuntu.com precise-security InRelease
    Ign http://archive.canonical.com precise InRelease
    Ign http://archive.ubuntu.com precise InRelease
    Ign http://archive.ubuntu.com precise-updates InRelease
    ...

    It looks like most Ubuntu repositories are not being searched; how can I restore the default update behaviour?
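
    One way to check whether the usual repository components are still enabled (a diagnostic sketch only; the mirror URLs on your machine may differ):

    # Show every repository line apt is currently configured to use
    grep -rh ^deb /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null

    # codeblocks comes from the "universe" component, so a line like this must be present:
    #   deb http://archive.ubuntu.com/ubuntu precise universe

    # After correcting the sources, refresh the index and see whether the package shows up
    sudo apt-get update
    apt-cache policy codeblocks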

    Read the article

  • Should OpenID clients accept adding WWW to the domain?

    - by Steve Clay
    For a long time I've used OpenID delegation on my site: http://example.org/ delegated to http://example.openid-provider.com/, so I logged into OpenID-consuming sites using the former as my ID. Recently I added www. to my site's canonical domain, so http://example.org/ now redirects to http://www.example.org/. Should I be able to continue logging into existing OpenID accounts using http://example.org/? StackExchange sites say "yes": I can use either URL. At least one other site doesn't recognize my existing account. Who's "right" (per spec), and is there anything I can fix on my end?
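
    For context, the delegation itself is just a few link tags in the page head, roughly like this sketch (the provider URLs are placeholders); the question is whether consumers should treat the www and non-www pages serving them as the same identifier:

    <head>
      <!-- OpenID 1.1 delegation -->
      <link rel="openid.server"   href="http://example.openid-provider.com/server" />
      <link rel="openid.delegate" href="http://example.openid-provider.com/" />
      <!-- OpenID 2.0 equivalents -->
      <link rel="openid2.provider" href="http://example.openid-provider.com/server" />
      <link rel="openid2.local_id" href="http://example.openid-provider.com/" />
    </head>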

    Read the article

  • Validating sitemap with Yahoo Explorer

    - by Joel
    I have a sitemap index on my website, which I successfully validated with Google Webmaster Tools. The declarations at the top are:

    <sitemapindex xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.sitemaps.org/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/siteindex.xsd" xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">

    This index lists 2 sitemaps. One of them contains "image" tags, added using the http://www.google.com/schemas/sitemap-image/1.1 schema declaration. When I submit this sitemap index to Yahoo Explorer, I get this error:

    ERROR: FF_30000 “Not an accepted feed format file. Please consult the documentaiton for supported file formats.”

    Any ideas? Joel
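
    For comparison, a bare-bones sitemap index per the sitemaps.org 0.9 schema looks like the sketch below (file names and dates are placeholders). Note that the image tags in the child sitemap come from a Google-specific extension namespace, which non-Google validators may not recognize:

    <?xml version="1.0" encoding="UTF-8"?>
    <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <sitemap>
        <loc>http://www.example.com/sitemap-pages.xml</loc>
        <lastmod>2010-06-01</lastmod>
      </sitemap>
      <sitemap>
        <loc>http://www.example.com/sitemap-images.xml</loc>
        <lastmod>2010-06-01</lastmod>
      </sitemap>
    </sitemapindex>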

    Read the article

  • Redirecting bad links to the correct links via htacess or 301 redirect plugin for WordPress

    - by janoulle
    I'm getting a lot of 404 errors because I recently switched content management systems (Habari to WordPress). I would like to use the 301 redirect plugin for WordPress to capture the offending links and helpfully redirect them to the correct URLs. Here's an example of the kind of errors I'm seeing and what they should be redirected to:

    http://janetalkstech.com/admin/publish?id=146 should redirect to http://janetalkstech.com/?p=146
    http://janetalkstech.com/admin/publish?slug=post-title should redirect to http://janetalkstech.com/post-title

    I would greatly appreciate any specific pointers on how to perform the redirects with either the 301 redirect plugin for WordPress or via the .htaccess file.

    Edit: The redirection plugin being used is the one by Urban Giraffe: http://urbangiraffe.com/plugins/redirection/
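
    For the .htaccess route, something along these lines should handle both URL shapes (a sketch assuming Apache mod_rewrite is available; the query string has to be matched with RewriteCond because RewriteRule alone never sees it):

    RewriteEngine On

    # /admin/publish?id=146  ->  /?p=146
    RewriteCond %{QUERY_STRING} ^id=(\d+)$
    RewriteRule ^admin/publish$ /?p=%1 [R=301,L]

    # /admin/publish?slug=post-title  ->  /post-title
    RewriteCond %{QUERY_STRING} ^slug=([^&]+)$
    RewriteRule ^admin/publish$ /%1? [R=301,L]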

    Read the article

  • Starting off with web dev with php

    - by pavan kumar
    I'm currently working with Java / C++. I'm interested in web development and am planning to switch streams. I've heard that PHP is a good platform to start with, and that it doesn't require as much knowledge of technologies like JSP / Servlets or frameworks like Spring / Struts / Hibernate. I have a basic grasp of HTML and JavaScript as well. I have gone through previous posts on SO and found these relevant resources:

    http://net.tutsplus.com/tutorials/php/the-best-way-to-learn-php/
    http://www.webhostingtalk.com/archive/index.php/t-1028265.html
    http://www.killerphp.com/
    http://phpforms.net/tutorial/tutorial.html
    http://www.php5-tutorial.com/
    etc.

    Now, my questions: I've heard of PHP frameworks like CodeIgniter, Zend Framework and Yii. Doesn't learning PHP & MySQL implicitly make us aware of these frameworks? Am I making a good choice in starting with PHP? Is it a good idea to switch streams?

    Read the article

  • Ajax site not being crawled - have escaped fragment, what's wrong? [closed]

    - by Harry
    My site is anonkun.com. You can see that it's "ajax" and doesn't load much HTML. Here are some example pages:

    http://anonkun.com
    http://anonkun.com/?_escaped_fragment_=
    http://anonkun.com/stories/Dev-kun---FAQ/6ef881f8-cf48-4f87-a688-c585f23809c5
    http://anonkun.com/stories/Dev-kun---FAQ/6ef881f8-cf48-4f87-a688-c585f23809c5?_escaped_fragment_=

    As you can see, the original page has the meta fragment tag and the escaped-fragment versions load static HTML. So why am I not getting crawled? http://cl.ly/image/2n30212q0K2W Webmaster Tools shows the pages as being seen as duplicates, and Fetch as Google shows me the AJAX version of the source, not the static escaped-fragment version. What's wrong and how do I make this work? Thanks.
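
    For reference, the AJAX crawling scheme has two halves: the meta fragment tag the page already declares, and something server-side that returns a pre-rendered snapshot when the crawler substitutes _escaped_fragment_ for the fragment. A rough sketch of the server half, assuming Apache and a snapshot directory that may not match this site's actual setup:

    # <meta name="fragment" content="!"> on the page tells crawlers an HTML snapshot exists.
    # When a crawler then requests ...?_escaped_fragment_=..., serve the snapshot instead:
    RewriteEngine On
    RewriteCond %{QUERY_STRING} ^_escaped_fragment_=(.*)$
    RewriteRule ^(.*)$ /snapshots/$1 [L]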

    Read the article
