Search Results

Search found 33151 results on 1327 pages for 'www browser'.


  • wget-ing protected content with exported cookies

    - by XXL
    I have exported a pair of cookies from Firefox that are valid for the URL in question and tried accessing/downloading the protected content off that address, but the end result is a return to the login page. I have tried doing the same thing for 3 other websites with a similar outcome. Any clues as to what I might be doing wrong? The syntax I'm using: wget --load--cookies=FILE URL

      -----------------------------------------------
      DEBUG output created by Wget 1.12 on linux-gnu.
      Stored cookie www.x.org -1 (ANY) / <permanent> <insecure> [expiry 1901-12-13 22:25:44] c_secure_login lz8xZQ%3D%3D
      Stored cookie www.x.org -1 (ANY) / <permanent> <insecure> [expiry 1901-12-13 22:25:44] c_secure_pass 2fd4e1c67a2d28fced849ee1bb76e74a
      Stored cookie www.x.org -1 (ANY) / <permanent> <insecure> [expiry 1901-12-13 22:25:44] c_secure_uid GZX4TDA%3D
      --2011-01-14 13:57:02-- www.x.org/download.php?id=397003
      Resolving www.x.org... 1.1.1.1
      Caching www.x.org => 1.1.1.1
      Connecting to www.x.org|1.1.1.1|:80... connected.
      Created socket 5.
      Releasing 0x0943ef20 (new refcount 1).
      ---request begin---
      GET /download.php?id=397003 HTTP/1.0
      User-Agent: Wget/1.12 (linux-gnu)
      Accept: */*
      Host: www.x.org
      Connection: Keep-Alive
      ---request end---
      HTTP request sent, awaiting response...
      ---response begin---
      HTTP/1.1 302 Found
      Date: Fri, 14 Jan 2011 11:26:19 GMT
      Server: Apache
      X-Powered-By: PHP/5.2.6-1+lenny8
      Set-Cookie: PHPSESSID=5f2fd97103f8988554394f23c5897765; path=/
      Expires: Thu, 19 Nov 1981 08:52:00 GMT
      Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
      Pragma: no-cache
      Location: www.x.org/login.php?returnto=download.php%3Fid%3D397003
      Vary: Accept-Encoding
      Content-Length: 0
      Keep-Alive: timeout=15, max=100
      Connection: Keep-Alive
      Content-Type: text/html
      ---response end---
      302 Found
      Stored cookie www.x.org -1 (ANY) / <session> <insecure> [expiry none] PHPSESSID 5f2fd97103f8988554394f23c5897765
      Registered socket 5 for persistent reuse.
      Location: www.x.org/login.php?returnto=download.php%3Fid%3D397003 [following]
      Skipping 0 bytes of body: [] done.
      --2011-01-14 13:57:02-- www.x.org/login.php?returnto=download.php%3Fid%3D397003
      Reusing existing connection to www.x.org:80.
      Reusing fd 5.
      ---request begin---
      GET /login.php?returnto=download.php%3Fid%3D397003 HTTP/1.0
      User-Agent: Wget/1.12 (linux-gnu)
      Accept: */*
      Host: www.x.org
      Connection: Keep-Alive
      Cookie: PHPSESSID=5f2fd97103f8988554394f23c5897765
      ---request end---
      HTTP request sent, awaiting response...
      ---response begin---
      HTTP/1.1 200 OK
      Date: Fri, 14 Jan 2011 11:26:20 GMT
      Server: Apache
      X-Powered-By: PHP/5.2.6-1+lenny8
      Expires: Thu, 19 Nov 1981 08:52:00 GMT
      Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
      Pragma: no-cache
      Vary: Accept-Encoding
      Content-Length: 2171
      Keep-Alive: timeout=15, max=99
      Connection: Keep-Alive
      Content-Type: text/html
      ---response end---
      200 OK
      Length: 2171 (2.1K) [text/html]
      Saving to: `x.out'
      0K .. 100% 18.7M=0s
      2011-01-14 13:57:02 (18.7 MB/s) - `x.out' saved [2171/2171]
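    A note on reading the trace above: the exported cookies are stored with an expiry of 1901-12-13 (i.e. already expired), and the first GET in the log carries no Cookie: header at all, so wget apparently never sent them. For comparison, a minimal sketch of the usual invocation (hypothetical file name; the option is spelled --load-cookies, and wget would normally reject the --load--cookies spelling quoted above, so that may just be a transcription slip):

      # cookies.txt must be in the Netscape cookie-file format that wget expects
      wget --load-cookies=cookies.txt "http://www.x.org/download.php?id=397003"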

    Read the article

  • wget recursive limited within subdomain

    - by Paul Seangwongree
    I want to download the following subdomain with the recursive option using wget: www.example.com/A/B So if that URL has links to www.example.com/A/B/C and www.example.com/A/B/D, these two should also be downloaded. But I don't want anything outside the www.example.com/A/B subdomain to be downloaded. For example, if www.example.com/A/B/C has a link back to www.example.com, the page www.example.com should not be downloaded. What wget command should I use?
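    A hedged sketch of one common answer (assuming GNU Wget; the URL is the asker's example): --no-parent forbids recursion from ever ascending above the starting directory, which is exactly the "stay inside /A/B" constraint.

      # -r: recursive; -np (--no-parent): never ascend above /A/B/
      # The trailing slash matters: without it wget treats B as a file, not a directory.
      wget -r -np http://www.example.com/A/B/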

    Read the article

  • XNA Notes 008

    - by George Clingerman
    This week has been a rough one. I've been sick and then in some kind of slump for my afternoon coding sessions. It could be from the cold, could be I'm still tired from writing that Windows Phone 7 game development book (which is out now!) or it could just be I'm tired of winter and want some sunshine. All I know is that even while I'm sick, the XNA world keeps going along at its whirlwind pace. Below are the things I caught in between my coughing fits.

    Time Critical XNA News:
    The 2011 MVP summit is almost here so pass along your feelings and thoughts so the MVPs can take them and share them with the team in person http://forums.create.msdn.com/forums/p/76317/464136.aspx#464136
    Dream Build Play - there's no new announcement yet, but you can't get much more to the end of February than this! http://www.dreambuildplay.com/Main/Home.aspx

    XNA Team:
    Dean Johnson from the XNA team shares an excellent way of handling Guide.IsTrialMode on WP7 http://blogs.msdn.com/b/dejohn/archive/2011/02/21/calling-guide-istrialmode-on-windows-phone-7.aspx
    Nick Gravelyn tries a new tactic in deciding if there's enough interest to develop a sequel or not. Don't YOU want Pixel Man 2 to come out? http://nickgravelyn.com/pixelman2/

    XNA MVPs:
    Andy "The ZMan" Dunn finally shares what he's been secretly working on these past 4 months http://twitter.com/#!/The_Zman/status/40590269392887808 http://www.youtube.com/watch?v=Rg8Z0ZdYbvg&feature=youtu.be
    Joel Martinez lets developers around NYC know they should sign up for Game Hack Day http://twitter.com/joelmartinez/statuses/41118590862102528 http://gamehackday.org/71fdk

    XNA Developers:
    Michael McLaughlin shares an XNA RenderTarget2D Sample http://geekswithblogs.net/mikebmcl/archive/2011/02/18/xna-rendertarget2d-sample.aspx
    Martin Caine starts a new series on Deferred Rendering in XNA 4.0 http://twitter.com/#!/MartinCaine/status/39735221339291648 http://martincaine.com/xna/deferred_rendering_in_xna_4_introduction
    ElemenyCy posts about his fun time with the IntermediateSerializer http://www.ubergamermonkey.com/xna/holy-bloated-xml-batman/
    Ben Kane releases a narrated dev diary video for Project Splice. Let him know if you'd like to see more! (I know I do!) http://twitter.com/#!/benkane/status/39846959498002432 http://www.youtube.com/watch?v=1EmziXZUo08&feature=youtu.be
    Jason Swearingen (of Novaleaf) posts his part 1 of Spatial Partitioning solutions http://altdevblogaday.org/2011/02/21/spatial-partitioning-part-1-survey-of-spatial-partitioning-solutions/
    Brian Lawson of Dark Flow Studios shares what he's been up to lately with lots of pretty screenshots and hints of announcements from Microsoft... http://www.darkflowstudios.com/entry/short-and-sweet-part-1
    Luke Avery starts a new blog where he plans on making XNA tutorials for beginners (and he's got a few started already!) http://programmingwithovery.wordpress.com/

    Xbox LIVE Indie Games (XBLIG):
    GameMarx Episode 10 http://www.gamemarx.com/video/the-show/24/ep-10-february-18-2010.aspx
    Minecraft clone FortressCraft coming to XBLIG http://www.eurogamer.net/articles/2011-02-23-minecraft-clone-fortresscraft-hits-xblig
    ezMuze+ starts an IndieGoGo fundraiser campaign to help fund their second game and get it onto even more devices! http://www.indiegogo.com/ezmuze
    Gamergeddon XBLIG round up http://www.gamergeddon.com/2011/02/20/xbox-indie-game-round-up-february-20th/?utm_campaign=twitter&utm_medium=twitter&utm_source=twitter
    JForce Games loses their Ego http://jforcegames.com/blog/index.php?itemid=121&catid=4

    XNA Game Development:
    @BallerIndustry reminds all XNA developers that the Maths are important ;) http://twitter.com/#!/BallerIndustry/status/39317618280243200 http://www.youtube.com/watch?v=MjV3XDFsjP4&feature=player_embedded#at=106
    @suhinini stumbles on an older but extremely useful post on XNA Content Pipeline debugging http://twitter.com/#!/suhinini/status/39270189476352000 http://badcorporatelogo.wordpress.com/2010/10/31/xna-content-pipeline-debugging-4-0/
    XNA Game Development Workshops at Singapore Universities http://innovativesingapore.com/2011/02/xna-game-development-workshops-at-singapore-universities/
    Indiefreaks announces that IGF v0.3 is out with Xbox 360 support, SunBurn 2.0.12, and it's now Open Source! http://twitter.com/#!/indiefreaks/status/39391953971982336
    @liortal53 announces a new series on properly designing a game http://twitter.com/#!/liortal53/status/39466905081217024 http://liortalblog.wordpress.com/2011/02/20/hello-cosmos/
    Indies and XNA at CodeStock 2011 http://www.gamemarx.com/news/2011/02/20/indies-and-xna-at-codestock-2011.aspx
    Train Frontier Express posts about XNA Content Hotloading http://trainfrontierexpress.blogspot.com/2011/02/xna-content-hotloading-overview.html
    Slyprid announces a new character editor in Transmute http://twitter.com/#!/slyprid/status/40146992818696192 http://www.youtube.com/watch?v=OKhFAc78LDs&feature=youtu.be
    The XNA 2D from the ground up tutorial series http://xna-uk.net/blogs/darkgenesis/archive/2011/02/23/recap-the-xna-2d-from-the-ground-up-tutorial-series.aspx
    Sgt.Conker posts a "Clingerman" (hey that's me!) to stay relevant http://www.sgtconker.com/2011/02/posting-a-clingerman-to-stay-relevant/

    Read the article

  • Url rewrite subfolder to root and forbid accessing subfolder

    - by Alessandro Pezzato
    I have drupal installed in a subfolder drupal, but I want to access pages as if the site were in the root folder: http://www.example.com instead of http://www.example.com/drupal I'm able to have this working, but it's also working with URLs containing the subfolder, so I have http://www.example.com and a clone site at http://www.example.com/drupal What is the rule to forbid access to the subfolder? I want all URLs starting with http://www.example.com/drupal to be forbidden. This is the .htaccess in the / directory:

      Options -Indexes
      Options +FollowSymLinks
      <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
        RewriteRule ^ http://%1%{REQUEST_URI} [L,R=301]
        RewriteRule ^(.*+)$ drupal/$1 [L,QSA]
      </IfModule>

    And this is the drupal .htaccess in the /drupal/ directory:

      Options -Indexes
      Options +FollowSymLinks
      ErrorDocument 404 index.php
      DirectoryIndex index.php index.html index.htm

      # Override PHP settings that cannot be changed at runtime. See
      # sites/default/default.settings.php and drupal_initialize_variables() in
      # includes/bootstrap.inc for settings that can be changed at runtime.

      # PHP 5, Apache 1 and 2.
      <IfModule mod_php5.c>
        php_flag magic_quotes_gpc off
        php_flag magic_quotes_sybase off
        php_flag register_globals off
        php_flag session.auto_start off
        php_value mbstring.http_input pass
        php_value mbstring.http_output pass
        php_flag mbstring.encoding_translation off
      </IfModule>

      # Requires mod_expires to be enabled.
      <IfModule mod_expires.c>
        # Enable expirations.
        ExpiresActive On
        # Cache all files for 2 weeks after access (A).
        ExpiresDefault A1209600
        <FilesMatch \.php$>
          # Do not allow PHP scripts to be cached unless they explicitly send cache
          # headers themselves. Otherwise all scripts would have to overwrite the
          # headers set by mod_expires if they want another caching behavior. This may
          # fail if an error occurs early in the bootstrap process, and it may cause
          # problems if a non-Drupal PHP file is installed in a subdirectory.
          ExpiresActive Off
        </FilesMatch>
      </IfModule>

      # Various rewrite rules.
      <IfModule mod_rewrite.c>
        RewriteEngine on

        # Block access to "hidden" directories whose names begin with a period. This
        # includes directories used by version control systems such as Subversion or
        # Git to store control files. Files whose names begin with a period, as well
        # as the control files used by CVS, are protected by the FilesMatch directive
        # above.
        RewriteRule "(^|/)\." - [F]

        # To redirect all users to access the site WITH the 'www.' prefix,
        # (http://example.com/... will be redirected to http://www.example.com/...)
        # uncomment the following:
        # RewriteCond %{HTTP_HOST} !^www\. [NC]
        # RewriteRule ^ http://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
        #
        # To redirect all users to access the site WITHOUT the 'www.' prefix,
        # (http://www.example.com/... will be redirected to http://example.com/...)
        # uncomment the following:
        RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
        RewriteRule ^ http://%1%{REQUEST_URI} [L,R=301]

        RewriteBase /drupal

        # Pass all requests not referring directly to files in the filesystem to
        # index.php. Clean URLs are handled in drupal_environment_initialize().
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !=/favicon.ico
        #RewriteRule ^ index.php [L]
        RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]

        # Rules to correctly serve gzip compressed CSS and JS files.
        # Requires both mod_rewrite and mod_headers to be enabled.
        <IfModule mod_headers.c>
          # Serve gzip compressed CSS files if they exist and the client accepts gzip.
          RewriteCond %{HTTP:Accept-encoding} gzip
          RewriteCond %{REQUEST_FILENAME}\.gz -s
          RewriteRule ^(.*)\.css $1\.css\.gz [QSA]

          # Serve gzip compressed JS files if they exist and the client accepts gzip.
          RewriteCond %{HTTP:Accept-encoding} gzip
          RewriteCond %{REQUEST_FILENAME}\.gz -s
          RewriteRule ^(.*)\.js $1\.js\.gz [QSA]

          # Serve correct content types, and prevent mod_deflate double gzip.
          RewriteRule \.css\.gz$ - [T=text/css,E=no-gzip:1]
          RewriteRule \.js\.gz$ - [T=text/javascript,E=no-gzip:1]

          <FilesMatch "(\.js\.gz|\.css\.gz)$">
            # Serve correct encoding type.
            Header append Content-Encoding gzip
            # Force proxies to cache gzipped & non-gzipped css/js files separately.
            Header append Vary Accept-Encoding
          </FilesMatch>
        </IfModule>
      </IfModule>
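    A hedged sketch of one rule that would answer the question (my assumption, not from the original post): test THE_REQUEST, which holds the raw request line the browser sent, so it matches direct hits on /drupal/... but not the internal rewrite from / into drupal/. It would go in the root .htaccess, before the rewrite into the subfolder:

      # Forbid direct client requests for /drupal/..., while still allowing
      # the internal "RewriteRule ^(.*+)$ drupal/$1" pass-through to work.
      RewriteCond %{THE_REQUEST} "^[A-Z]+ /drupal[/? ]" [NC]
      RewriteRule ^ - [F]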

    Read the article

  • Is there any way to get spotlight or media browser in OSX (Snow Leopard) to index and recognize metadata

    - by jaydles
    It seems silly to go to all the trouble to assign "Face" data to thousands of photos, but not make it possible to use that data to locate them outside of that application. I know that that metadata is stored in the "library" database for Aperture/iPhoto, rather than on the actual files (which is too bad). And I can even potentially see why it might create challenges for Spotlight to use it, since Spotlight is presumably a file index system, not a media organizer, but surely the media browser used across the other OSX apps is intended to use it? The media browser's whole purpose seems to be to let you easily locate and reference the items you organize in one of the iLife apps (iPhoto or Aperture, in this case) from the others (say, iMovie, or Mail). It's particularly vexing since the photo app on the iPhone sorts by faces by default. Additionally, the Mac-based media browser does access smart albums and folders, so you could establish a workaround by creating a smart album for each "face" or place, or tag, and access them that way, but it seems like there must be an easier way. Am I missing something?

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx

    Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob storage and mount them as hard drives in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock, and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is very powerful for uploading large files, as we can upload blocks in parallel and provide a pause-resume feature. There are many documents, articles and blog posts that describe how to upload a block blob. Most of them focus on the server side: when you have received a big file, stream or binaries, how to upload them into blob storage in blocks through the .NET SDK. But the problem is, how can we upload these large files from the client side, for example, a browser? This question came to me when I was working with a Chinese customer to help them build a network disk product on top of Azure. The end users upload their files from the web portal, and then the files are stored in blob storage from the Web Role. My goal is to find the best way to transfer the file from the client (the end user's machine) to the server (the Web Role) through the browser. In this post I will demonstrate and describe what I did to upload large files in chunks at high speed, and save them as blocks into Windows Azure Blob Storage.

    Traditional Upload, Works with Limitation

    The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button.

      @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" }))
      {
          <input type="file" name="file" />
          <input type="submit" value="upload" />
      }

    Then in the backend controller we retrieve the whole content of this file and upload it into blob storage through the .NET SDK. We can split the file in blocks, upload them in parallel and commit. The code has been well blogged in the community.

      [HttpPost]
      public ActionResult About(HttpPostedFileBase file)
      {
          var container = _client.GetContainerReference("test");
          container.CreateIfNotExists();
          var blob = container.GetBlockBlobReference(file.FileName);
          var blockDataList = new Dictionary<string, byte[]>();
          using (var stream = file.InputStream)
          {
              var blockSizeInKB = 1024;
              var offset = 0;
              var index = 0;
              while (offset < stream.Length)
              {
                  var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset);
                  var blockData = new byte[readLength];
                  offset += stream.Read(blockData, 0, readLength);
                  blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData);

                  index++;
              }
          }

          Parallel.ForEach(blockDataList, (bi) =>
          {
              blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null);
          });
          blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray());

          return RedirectToAction("About");
      }

    This works perfectly if we select an image, a piece of music or a small video to upload. But if I select a large file, say a 6GB HD movie, the upload is terminated after a few minutes. In ASP.NET there is a limitation on request length; the maximum request length is defined in the web.config file, and it's a value less than about 4GB. So if we want to upload a really big file, we cannot simply implement it this way. Also, in Windows Azure, the cloud service network load balancer will terminate a connection that exceeds its timeout period; from my tests the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitations mentioned above, a simple HTML file upload cannot provide a rich upload experience such as chunked upload, pause and pause-resume. So we need a better way to upload large files from the client to the server.

    Upload in Chunks through HTML5 and JavaScript

    In order to break the limitations mentioned above, we will upload the large file in chunks. This gives us some benefits:
    - No request size limitation: since we upload in chunks, we can define the request size for each chunk regardless of how big the entire file is.
    - No timeout problem: the size of the chunks is controlled by us, so we can make sure the request for each chunk upload does not exceed the timeout period of either ASP.NET or the Windows Azure load balancer.

    It was a big challenge to upload a big file in chunks until we had HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.

    In HTML5, the File interface has been improved with a new method called "slice". It can be used to read part of a file by specifying the start byte index and the end byte index. For example, if the entire file is 1024 bytes, file.slice(512, 768) will read the part of the file from the 512th byte to the 768th byte and return a new object of an interface called "Blob", which you can treat as an array of bytes. In fact, a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob are very useful for implementing the chunk upload.

    We will use the File interface to represent the file the user selected from the browser, and then use File.slice to read the file in chunks of the size we want. For example, if we wanted to upload a 10MB file with 512KB chunks, we could read it in 512KB blobs by using File.slice in a loop.

    Assume we have a web page as below. The user can select a file, specify the block size in KB in an input box, and press a button to start the upload.

      <div>
          <input type="file" id="upload_files" name="files[]" /><br />
          Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br />
          <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" />
      </div>

    Then we can have a JavaScript function upload the file in chunks when the user clicks the button.

      <script type="text/javascript">
          $(function () {
              $("#upload_button_blob").click(function () {
              });
          });
      </script>

    First we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke File, Blob and FormData from the "window" object. If any of them is "undefined", the condition result will be "false", which means the browser doesn't support these features and it's time to get it updated. FormData is another new feature we are going to use: it generates a temporary form for us, and we will use it to pass a chunk and its associated metadata when invoking the service through ajax.

      $("#upload_button_blob").click(function () {
          // assert the browser support html5
          if (window.File && window.Blob && window.FormData) {
              alert("Your brwoser is awesome, let's rock!");
          }
          else {
              alert("Oh man plz update to a modern browser before try is cool stuff out.");
              return;
          }
      });

    Each browser supports these interfaces in its own implementation; currently Blob, File and File.slice are supported by Chrome 21, Firefox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we work on the files the user selected, one by one, since in HTML5 the user can select multiple files in one file input box.

      var files = $("#upload_files")[0].files;
      for (var i = 0; i < files.length; i++) {
          var file = files[i];
          var fileSize = file.size;
          var fileName = file.name;
      }

    Next, we calculate the start and end index for each chunk based on the size the user specified in the browser. We put them into an array together with the file name and the index, which will be used when we upload the chunks into Windows Azure Blob Storage as blocks, since we need to specify the target blob name and the block index. At the same time we store the list of all indexes in another variable, which will be used to commit the blocks into a blob in Azure Storage once all chunks have been uploaded successfully.

      $("#upload_button_blob").click(function () {
          // assert the browser support html5
          ... ...

          // start to upload each file in chunks
          var files = $("#upload_files")[0].files;
          for (var i = 0; i < files.length; i++) {
              var file = files[i];
              var fileSize = file.size;
              var fileName = file.name;

              // calculate the start and end byte index for each block (chunk)
              // with the index, file name and index list for future use
              var blockSizeInKB = $("#block_size").val();
              var blockSize = blockSizeInKB * 1024;
              var blocks = [];
              var offset = 0;
              var index = 0;
              var list = "";
              while (offset < fileSize) {
                  var start = offset;
                  var end = Math.min(offset + blockSize, fileSize);

                  blocks.push({
                      name: fileName,
                      index: index,
                      start: start,
                      end: end
                  });
                  list += index + ",";

                  offset = end;
                  index++;
              }
          }
      });

    Now we have all the chunks' information ready. The next step is to upload them one by one to the server side; on the server side, when a chunk is received it will be uploaded as a block into Blob Storage, and finally the blocks are committed with the index list through BlockBlob.PutBlockList. But since all of these invocations are ajax calls, they are not synchronized, so we need a JavaScript library to help us coordinate the asynchronous operations, named "async.js". You can download this JavaScript library here, and you can find the documentation here. I will not explain this library too much in this post. We will put all the procedures we want to execute in a function array and pass it into the proper function defined in async.js to let it control the execution sequence, in series or in parallel. Hence we define an array and push the chunk upload functions into it.

      $("#upload_button_blob").click(function () {
          // assert the browser support html5
          ... ...

          // start to upload each file in chunks
          var files = $("#upload_files")[0].files;
          for (var i = 0; i < files.length; i++) {
              var file = files[i];
              var fileSize = file.size;
              var fileName = file.name;
              // calculate the start and end byte index for each block (chunk)
              // with the index, file name and index list for future use
              ... ...

              // define the function array and push all chunk upload operations into this array
              blocks.forEach(function (block) {
                  putBlocks.push(function (callback) {
                  });
              });
          }
      });

    As you can see below, I used the File.slice method to read each chunk based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then I posted this form to the backend server through jQuery.ajax. This is the key part of our solution.

      $("#upload_button_blob").click(function () {
          // assert the browser support html5
          ... ...
          // start to upload each file in chunks
          var files = $("#upload_files")[0].files;
          for (var i = 0; i < files.length; i++) {
              var file = files[i];
              var fileSize = file.size;
              var fileName = file.name;
              // calculate the start and end byte index for each block (chunk)
              // with the index, file name and index list for future use
              ... ...

              // define the function array and push all chunk upload operations into this array
              blocks.forEach(function (block) {
                  putBlocks.push(function (callback) {
                      // load blob based on the start and end index for each chunk
                      var blob = file.slice(block.start, block.end);
                      // put the file name, index and blob into a temporary form
                      var fd = new FormData();
                      fd.append("name", block.name);
                      fd.append("index", block.index);
                      fd.append("file", blob);
                      // post the form to the backend service (asp.net mvc controller action)
                      $.ajax({
                          url: "/Home/UploadInFormData",
                          data: fd,
                          processData: false,
                          contentType: "multipart/form-data",
                          type: "POST",
                          success: function (result) {
                              if (!result.success) {
                                  alert(result.error);
                              }
                              callback(null, block.index);
                          }
                      });
                  });
              });
          }
      });

    Then we invoke these functions one by one using async.js. Once all functions have executed successfully, I invoke another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage.

      $("#upload_button_blob").click(function () {
          // assert the browser support html5
          ... ...
          // start to upload each file in chunks
          var files = $("#upload_files")[0].files;
          for (var i = 0; i < files.length; i++) {
              var file = files[i];
              var fileSize = file.size;
              var fileName = file.name;
              // calculate the start and end byte index for each block (chunk)
              // with the index, file name and index list for future use
              ... ...
              // define the function array and push all chunk upload operations into this array
              ... ...

              // invoke the functions one by one
              // then invoke the commit ajax call to put blocks into blob in azure storage
              async.series(putBlocks, function (error, result) {
                  var data = {
                      name: fileName,
                      list: list
                  };
                  $.post("/Home/Commit", data, function (result) {
                      if (!result.success) {
                          alert(result.error);
                      }
                      else {
                          alert("done!");
                      }
                  });
              });
          }
      });

    That's all on the client side. The outline of our logic is:
    - Calculate the start and end byte index for each chunk based on the block size.
    - Define the functions that read a chunk from the file and upload its content to the backend service through ajax.
    - Execute the functions defined in the previous step with "async.js".
    - Finally, commit the chunks by invoking the backend service, which puts the blocks into the blob in Windows Azure Storage.

    Save Chunks as Blocks into Blob Storage

    Above we finished the client side JavaScript code. It uploads the file in chunks to the backend service, which we are going to implement in this step. We will use ASP.NET MVC as our backend service; it will receive the chunks, upload them into Windows Azure Blob Storage as blocks, then finally commit them as one blob. Since the client side uploads chunks by invoking an ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller which only accepts HTTP POST requests.

      [HttpPost]
      public JsonResult UploadInFormData()
      {
          var error = string.Empty;
          try
          {
          }
          catch (Exception e)
          {
              error = e.ToString();
          }

          return new JsonResult()
          {
              Data = new
              {
                  success = string.IsNullOrWhiteSpace(error),
                  error = error
              }
          };
      }

    Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. I used the Windows Azure SDK to create a blob container (in this case we will use the container named "test") and create a blob reference with the blob name (same as the file name), and then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with it so that we can finally put all blocks together as one blob by specifying their block ID list.

      [HttpPost]
      public JsonResult UploadInFormData()
      {
          var error = string.Empty;
          try
          {
              var name = Request.Form["name"];
              var index = int.Parse(Request.Form["index"]);
              var file = Request.Files[0];
              var id = Convert.ToBase64String(BitConverter.GetBytes(index));

              var container = _client.GetContainerReference("test");
              container.CreateIfNotExists();
              var blob = container.GetBlockBlobReference(name);
              blob.PutBlock(id, file.InputStream, null);
          }
          catch (Exception e)
          {
              error = e.ToString();
          }

          return new JsonResult()
          {
              Data = new
              {
                  success = string.IsNullOrWhiteSpace(error),
                  error = error
              }
          };
      }

    Next, I created another action to commit the blocks into the blob once all chunks have been uploaded. Similarly, I retrieved the blob name from the Request.Form, as well as the chunk ID list (the block ID list) in string format; I split it into a list, then invoked the BlockBlob.PutBlockList method. After that our blob is shown in the container and is ready to be downloaded.

      [HttpPost]
      public JsonResult Commit()
      {
          var error = string.Empty;
          try
          {
              var name = Request.Form["name"];
              var list = Request.Form["list"];
              var ids = list
                  .Split(',')
                  .Where(id => !string.IsNullOrWhiteSpace(id))
                  .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                  .ToArray();

              var container = _client.GetContainerReference("test");
              container.CreateIfNotExists();
              var blob = container.GetBlockBlobReference(name);
              blob.PutBlockList(ids);
          }
          catch (Exception e)
          {
              error = e.ToString();
          }

          return new JsonResult()
          {
              Data = new
              {
                  success = string.IsNullOrWhiteSpace(error),
                  error = error
              }
          };
      }

    Now we have finished all the code we need. Below is the full client side JavaScript code.

      <script type="text/javascript" src="~/Scripts/async.js"></script>
      <script type="text/javascript">
          $(function () {
              $("#upload_button_blob").click(function () {
                  // assert the browser support html5
                  if (window.File && window.Blob && window.FormData) {
                      alert("Your brwoser is awesome, let's rock!");
                  }
                  else {
                      alert("Oh man plz update to a modern browser before try is cool stuff out.");
                      return;
                  }

                  // start to upload each file in chunks
                  var files = $("#upload_files")[0].files;
                  for (var i = 0; i < files.length; i++) {
                      var file = files[i];
                      var fileSize = file.size;
                      var fileName = file.name;

                      // calculate the start and end byte index for each block (chunk)
                      // with the index, file name and index list for future use
                      var blockSizeInKB = $("#block_size").val();
                      var blockSize = blockSizeInKB * 1024;
                      var blocks = [];
                      var offset = 0;
                      var index = 0;
                      var list = "";
                      while (offset < fileSize) {
                          var start = offset;
                          var end = Math.min(offset + blockSize, fileSize);

                          blocks.push({
                              name: fileName,
                              index: index,
                              start: start,
                              end: end
                          });
                          list += index + ",";

                          offset = end;
                          index++;
                      }

                      // define the function array and push all chunk upload operations into this array
                      var putBlocks = [];
                      blocks.forEach(function (block) {
                          putBlocks.push(function (callback) {
                              // load blob based on the start and end index for each chunk
                              var blob = file.slice(block.start, block.end);
                              // put the file name, index and blob into a temporary form
                              var fd = new FormData();
                              fd.append("name", block.name);
                              fd.append("index", block.index);
                              fd.append("file", blob);
                              // post the form to the backend service (asp.net mvc controller action)
                              $.ajax({
                                  url: "/Home/UploadInFormData",
                                  data: fd,
                                  processData: false,
                                  contentType: "multipart/form-data",
                                  type: "POST",
                                  success: function (result) {
                                      if (!result.success) {
                                          alert(result.error);
                                      }
                                      callback(null, block.index);
                                  }
                              });
                          });
                      });

                      // invoke the functions one by one
                      // then invoke the commit ajax call to put blocks into blob in azure storage
                      async.series(putBlocks, function (error, result) {
                          var data = {
                              name: fileName,
                              list: list
                          };
                          $.post("/Home/Commit", data, function (result) {
                              if (!result.success) {
                                  alert(result.error);
                              }
                              else {
                                  alert("done!");
                              }
                          });
                      });
                  }
              });
          });
      </script>

    And below is the full ASP.NET MVC controller code.

      public class HomeController : Controller
      {
          private CloudStorageAccount _account;
          private CloudBlobClient _client;

          public HomeController()
              : base()
          {
              _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString"));
              _client = _account.CreateCloudBlobClient();
          }

          public ActionResult Index()
          {
              ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application.";

              return View();
          }

          [HttpPost]
          public JsonResult UploadInFormData()
          {
              var error = string.Empty;
              try
              {
                  var name = Request.Form["name"];
                  var index = int.Parse(Request.Form["index"]);
                  var file = Request.Files[0];
                  var id = Convert.ToBase64String(BitConverter.GetBytes(index));

                  var container = _client.GetContainerReference("test");
                  container.CreateIfNotExists();
                  var blob = container.GetBlockBlobReference(name);
                  blob.PutBlock(id, file.InputStream, null);
              }
              catch (Exception e)
              {
                  error = e.ToString();
              }

              return new JsonResult()
              {
                  Data = new
                  {
                      success = string.IsNullOrWhiteSpace(error),
                      error = error
                  }
              };
          }

          [HttpPost]
          public JsonResult Commit()
          {
              var error = string.Empty;
              try
              {
                  var name = Request.Form["name"];
                  var list = Request.Form["list"];
                  var ids = list
                      .Split(',')
                      .Where(id => !string.IsNullOrWhiteSpace(id))
                      .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                      .ToArray();

                  var container = _client.GetContainerReference("test");
                  container.CreateIfNotExists();
                  var blob = container.GetBlockBlobReference(name);
                  blob.PutBlockList(ids);
              }
              catch (Exception e)
              {
                  error = e.ToString();
              }

              return new JsonResult()
              {
                  Data = new
                  {
                      success = string.IsNullOrWhiteSpace(error),
                      error = error
                  }
              };
          }
      }

    If we select a file in the browser, our application uploads chunks of the size we specified to the server through ajax calls in the background, and then commits all chunks into one blob. We can then find the blob in our Windows Azure Blob Storage.

    Optimized by Parallel Upload

    In the previous example we just uploaded our file in chunks. This solved the ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout, but it might introduce a performance problem, since we uploaded the chunks in sequence. In order to improve upload performance we can modify the client side code a bit to make the upload operations run in parallel. The good news is that the "async.js" library provides a parallel execution function. If you remember, the code that invokes the service to upload chunks utilized "async.series", which means all functions are executed in sequence. Now we change this code to "async.parallel", which invokes all functions in parallel.

      $("#upload_button_blob").click(function () {
          // assert the browser support html5
          ... ...
          // start to upload each file in chunks
          var files = $("#upload_files")[0].files;
          for (var i = 0; i < files.length; i++) {
              var file = files[i];
              var fileSize = file.size;
              var fileName = file.name;
              // calculate the start and end byte index for each block (chunk)
              // with the index, file name and index list for future use
              ... ...
              // define the function array and push all chunk upload operations into this array
              ... ...

              // invoke the functions in parallel
              // then invoke the commit ajax call to put blocks into blob in azure storage
              async.parallel(putBlocks, function (error, result) {
                  var data = {
                      name: fileName,
                      list: list
                  };
                  $.post("/Home/Commit", data, function (result) {
                      if (!result.success) {
                          alert(result.error);
                      }
                      else {
                          alert("done!");
                      }
                  });
              });
          }
      });

    In this way all chunks are uploaded to the server side at the same time to maximize bandwidth usage. This should work if the file is not very large and the chunk size is not very small. But for a large file this might introduce another problem: too many ajax calls are sent to the server at the same time. So the best solution is to upload the chunks in parallel with a maximum concurrency limitation. The code below sets the concurrency limitation to 4, which means at most 4 ajax calls can be in flight at the same time.

      $("#upload_button_blob").click(function () {
          // assert the browser support html5
          ... ...
          // start to upload each file in chunks
          var files = $("#upload_files")[0].files;
          for (var i = 0; i < files.length; i++) {
              var file = files[i];
              var fileSize = file.size;
              var fileName = file.name;
              // calculate the start and end byte index for each block (chunk)
              // with the index, file name and index list for future use
              ... ...
              // define the function array and push all chunk upload operations into this array
              ... ...

              // invoke the functions in parallel with a concurrency limit of 4
              // then invoke the commit ajax call to put blocks into blob in azure storage
              async.parallelLimit(putBlocks, 4, function (error, result) {
                  var data = {
                      name: fileName,
                      list: list
                  };
                  $.post("/Home/Commit", data, function (result) {
                      if (!result.success) {
                          alert(result.error);
                      }
                      else {
                          alert("done!");
                      }
                  });
              });
          }
      });

    Summary

    In this post we discussed how to upload files in chunks to the backend service, which then uploads them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leveraged three new features introduced in HTML5:
    - File.slice: read part of the file by specifying the start and end byte index.
    - Blob: a file-like interface which contains part of the file content.
    - FormData: a temporary form element through which we can pass a chunk along with some metadata to the backend service.

    Then we discussed the performance considerations of chunk uploading. Sequential upload cannot provide maximum upload speed, but unlimited parallel upload might crash the browser and server if there are too many chunks. So we finally came up with the solution of uploading chunks in parallel with a concurrency limitation. We also demonstrated how to utilize the "async.js" JavaScript library to help us control the asynchronous calls and the parallel limit.

    Regarding the chunk size and the parallel limitation value, there is no "best" value. You need to test various combinations and find the best one for your particular scenario. It depends on the local bandwidth, the client machine cores, and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test results. The client machine was Windows 8, IE 10, with 4 cores. I was on the Microsoft corporate network. The web site was hosted on the Windows Azure China North data center (in Beijing) with one small web role (1.7GHz 1 core CPU, 1.75GB memory with 100Mbps bandwidth). The test cases were:
    - Chunk size: 512KB, 1MB, 2MB, 4MB.
    - Upload mode: sequence, parallel (unlimited), parallel with limit (4 threads, 8 threads).
    - Chunk format: base64 string, binaries.
    - Target file: 100MB.
    - Each case was tested 3 times.

    Below is the test result chart. Some thoughts, but not guidance or best practice:
    - Parallel gets better performance than series.
    - No significant performance improvement between parallel 4 threads and 8 threads.
    - Transferring binaries provides better performance than base64.
    - In all cases, a chunk size of 1MB - 2MB gets better performance.

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • What browser is sending user agent beginning mozilla/5.0+ and translates & into &amp;

    - by Patrick
    We've got a website which has been running for a few years now. One of our customers has just started having an intermittent problem. Looking at our IIS 6.0 logs, the service works correctly when they have a user agent beginning "mozilla/4.0+" but fails when the user agent begins "mozilla/5.0+". The particular customer only started having this problem on Wednesday. Does anyone know the browser/upgrade which changes the 4.0 to 5.0? The actual problem caused is that an "&" in a url parameter list is being encoded as "&amp;". Anyone seen anything similar? We have other users sending from browsers with the 5.0+ user agent without trouble. Sorry about the tags but I don't have the rep to create new ones. Thanks in advance, Patrick

    Edit: hi Viper_sb, it is most probably a custom script (I'm primarily a c++ developer so don't really understand). Our site services requests from other customer-developed sites; this one was done in JavaScript as far as I know. We're actually getting a variety of user agents (presumably depending on which of our customers' customers is accessing the service); here's a few:

      Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+fr;+rv:1.9.1.11)+Gecko/20100701+Firefox/3.5.11
      Mozilla/5.0+(Windows;+U;+Windows+NT+5.1;+en-US)+AppleWebKit/533.4+(KHTML,+like+Gecko)+Chrome/5.0.375.126+Safari/533.4 302 0 0
      Mozilla/5.0+(Macintosh;+U;+PPC+Mac+OS+X;+fr)+AppleWebKit/523.12+(KHTML,+like+Gecko)+Version/3.0.4+Safari/523.12
      Mozilla/5.0+(Windows;+U;+Windows+NT+5.1;+en-US;+rv:1.9.2.8)+Gecko/20100722+Firefox/3.6.8
      Mozilla/5.0+(Windows;+U;+Windows+NT+5.1;+fr;+rv:1.9.2.8)+Gecko/20100722+Firefox/3.6.8+(.NET+CLR+3.5.30729)
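    One hedged illustration of where "&amp;" can sneak into a URL on the client (an assumption about the cause, not a diagnosis from the logs above): HTML serialization re-encodes "&" inside attributes, so a script that builds a request URL from serialized markup rather than from getAttribute ends up sending the entity through.

      // Runs in any browser console. The attribute round-trips cleanly,
      // but the innerHTML serialization re-encodes & as &amp;.
      var link = document.createElement("a");
      link.setAttribute("href", "service.aspx?a=1&b=2");
      var wrapper = document.createElement("div");
      wrapper.appendChild(link);
      console.log(link.getAttribute("href")); // service.aspx?a=1&b=2
      console.log(wrapper.innerHTML);         // <a href="service.aspx?a=1&amp;b=2"></a>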

    Read the article

  • Why does my browser take me to Scour.com? (redirect virus)

    - by Paula DiTallo
    The "scour" or Rootkit.Win32.TDSS virus has a long history which can be found here: http://en.wikipedia.org/wiki/Scour Here is the primary symptom: after searching for something in your web browser using google, one of the results that you click on redirects you to scour.com. If you've executed ClamWin, Malwarebytes, McAfee, Norton, etc. to find and isolate the virus without any luck--this isn't really a surprise, since this virus attaches to existing system drivers. I only know of one reliable package that will remove this without ill effects--like adding new spyware. This package is called TDSSKiller. I have seen multiple websites that claim to have this software available, but the one that I know is reliable is located here: http://support.kaspersky.com/viruses/solutions?qid=208280684 Once you go to Kaspersky's tech support site, the TDSSKiller zip file is available for downloading. When you execute this software, you will be able to "cure" or repair the infected driver. Remember to jot down the name of the driver for future reference--should you need to reinstall the driver from a "same-as" working computer, or your install disk if the repair is ineffective. The driver that happened to get infected on my computer was the tcpip.sys driver. This caused my win sockets to loose their ip addresses. In most other instances, less critical drivers such as HDAudBus.sys are infected. In my case, I was not through correcting my computer problems until I corrected the broken WinSock issue and loaded an earlier version of the tcpip.sys driver from: C:\WINDOWS\ServicePackFiles\i386 which I placed in: C:\WINDOWS\system32\drivers Don't forget to reboot your computer after your repair! Once you download TDSSKiller and cure/repair your infected driver(s), the redirect on google searches should disappear .

    Read the article

  • Grep... What patterns to extract href attributes, etc. with PHP's preg_grep?

    - by inktri
    Hi, I'm having trouble with grep... Which four patterns should I use with PHP's preg_grep to extract all instances of the "____" parts in the strings below?

      1. <h2><a ....>_____</a></h2>
      2. <cite><a href="_____" .... >...</a></cite>
      3. <cite><a .... >________</a></cite>
      4. <span>_________</span>

    The dots denote some arbitrary characters while the underscores denote what I want. An example string is:

      </style></head> <body><div id="adBlock"><h2><a href="https://www.google.com/adsense/support/bin/request.py?contact=afs_violation&amp;hl=en" target="_blank">Ads by Google</a></h2>
      <div class="ad"><div><a href="http://www.google.com/aclk?sa=L&amp;ai=C4vfT4Sa3S97SLYO8NN6F-ckB5oq5sAGg6PKlDaT-kwUQASCF4p8UKARQtobS9AVgyZbRhsijoBnIAQGqBBxP0OSEnIsuRIv3ZERDm8GiSKZSnjrVf1kVq-_Y&amp;num=1&amp;sig=AGiWqtwG1qHnwpZ_5BNrjrzzXO5Or6EDMg&amp;q=http://www.crackle.com/c/Spider-Man_The_New_Animated_Series/%3Futm_source%3Dgoogle%26utm_medium%3Dcpc%26utm_campaign%3DGST_10016_CRKL_US_PRD_S_TeleV_SPID_Tele_Spider-Man%26utm_term%3Dspiderman%26utm_content%3Ds264Yjg9f_3472685742_487lrz1638" class="titleLink" target="_parent">Spider-<b>Man</b> Animated Serie</a></div> <span>See Your Favorite Spiderman <br> Episodes for Free. Only on Crackle.</span> <cite><a href="http://www.google.com/aclk?sa=L&amp;ai=C4vfT4Sa3S97SLYO8NN6F-ckB5oq5sAGg6PKlDaT-kwUQASCF4p8UKARQtobS9AVgyZbRhsijoBnIAQGqBBxP0OSEnIsuRIv3ZERDm8GiSKZSnjrVf1kVq-_Y&amp;num=1&amp;sig=AGiWqtwG1qHnwpZ_5BNrjrzzXO5Or6EDMg&amp;q=http://www.crackle.com/c/Spider-Man_The_New_Animated_Series/%3Futm_source%3Dgoogle%26utm_medium%3Dcpc%26utm_campaign%3DGST_10016_CRKL_US_PRD_S_TeleV_SPID_Tele_Spider-Man%26utm_term%3Dspiderman%26utm_content%3Ds264Yjg9f_3472685742_487lrz1638" class="domainLink" target="_parent">www.Crackle.com/Spiderman</a></cite></div>
      <div class="ad"><div><a href="http://www.google.com/aclk?sa=l&amp;ai=CnQFi4Sa3S97SLYO8NN6F-ckB3M7nQtyU2PQEq6bCBRACIIXinxQoBFCm15KB-f____8BYMmW0YbIo6AZoAHiq_X-A8gBAaoEIU_Q9JKLiy1MiwdnHpZoBnmpR1J8pP2jpTwMx2uj2nN4WA&amp;num=2&amp;sig=AGiWqtwDrI5pWBCncdDc80FKt32AJMAQ6A&amp;q=http://www.costumeexpress.com/browse/TV-Movies/_/N-1z141uu/Ntt-batman/results1.aspx%3FREF%3DKNC-CEgoogle" class="titleLink" target="_parent">Kids <b>Batman</b> Costumes</a></div> <span>Great Selection of <b>Batman</b> &amp; Batgirl <br> Costumes For Kids. Ships Same Day!</span> <cite><a href="http://www.google.com/aclk?sa=l&amp;ai=CnQFi4Sa3S97SLYO8NN6F-ckB3M7nQtyU2PQEq6bCBRACIIXinxQoBFCm15KB-f____8BYMmW0YbIo6AZoAHiq_X-A8gBAaoEIU_Q9JKLiy1MiwdnHpZoBnmpR1J8pP2jpTwMx2uj2nN4WA&amp;num=2&amp;sig=AGiWqtwDrI5pWBCncdDc80FKt32AJMAQ6A&amp;q=http://www.costumeexpress.com/browse/TV-Movies/_/N-1z141uu/Ntt-batman/results1.aspx%3FREF%3DKNC-CEgoogle" class="domainLink" target="_parent">www.CostumeExpress.com</a></cite></div>
      <div class="ad"><div><a href="http://www.google.com/aclk?sa=l&amp;ai=CAMYT4Sa3S97SLYO8NN6F-ckB3ZnWmgGdoNLrDaumwgUQAyCF4p8UKARQrqSVxwdgyZbRhsijoBmgAZH77uwDyAEBqgQYT9DU7oqLLEyLB2dHlxZFnQzyeg-yHt88&amp;num=3&amp;sig=AGiWqtzqAphZ9DLDiEFBJlb0Ou_1HyEyyA&amp;q=http://www.OfficialBatmanCostumes.com" class="titleLink" target="_parent"><b>Batman</b> Costume</a></div> <span>Official <b>Batman</b> Costumes.
<br> Huge Selection &amp; Same Day Shipping!</span> <cite><a href="http://www.google.com/aclk?sa=l&amp;ai=CAMYT4Sa3S97SLYO8NN6F-ckB3ZnWmgGdoNLrDaumwgUQAyCF4p8UKARQrqSVxwdgyZbRhsijoBmgAZH77uwDyAEBqgQYT9DU7oqLLEyLB2dHlxZFnQzyeg-yHt88&amp;num=3&amp;sig=AGiWqtzqAphZ9DLDiEFBJlb0Ou_1HyEyyA&amp;q=http://www.OfficialBatmanCostumes.com" class="domainLink" target="_parent">www.OfficialBatmanCostumes.com</a></cite></div> <div class="ad"><div><a href="http://www.google.com/aclk?sa=l&amp;ai=C767t4Sa3S97SLYO8NN6F-ckBkZfSfoOppaMHq6bCBRAEIIXinxQoBFDX2bw6YMmW0YbIo6AZoAHpprP8A8gBAaoEG0_QhJSMiytMiwdnHpZoF3g0Uj8_Vl2r4TpI_g&amp;num=4&amp;sig=AGiWqtyGO2DnFq_jMhP6ufj8pufT9sWQWA&amp;q=http://www.discountsuperherocostumes.com/batman-costumes.html" class="titleLink" target="_parent">Discount <b>Batman</b> Costumes</a></div> <span>Discount adult and kids <b>batman</b> <br> superhero costumes.</span> <cite><a href="http://www.google.com/aclk?sa=l&amp;ai=C767t4Sa3S97SLYO8NN6F-ckBkZfSfoOppaMHq6bCBRAEIIXinxQoBFDX2bw6YMmW0YbIo6AZoAHpprP8A8gBAaoEG0_QhJSMiytMiwdnHpZoF3g0Uj8_Vl2r4TpI_g&amp;num=4&amp;sig=AGiWqtyGO2DnFq_jMhP6ufj8pufT9sWQWA&amp;q=http://www.discountsuperherocostumes.com/batman-costumes.html" class="domainLink" target="_parent">www.discountsuperherocostumes.com</a></cite></div></div></body> <script type="text/javascript"> var relay = ""; </script> <script type="text/javascript" src="/uds/?file=ads&amp;v=1&amp;packages=searchiframe&amp;nodependencyload=true"></script></html> Thanks!
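    A hedged sketch of one way to pull these out (an assumption on my part: preg_match_all with capture groups fits this better than preg_grep, which only filters array elements and does not extract submatches):

      <?php
      // $html holds the markup; each pattern captures the "____" part.
      // Non-greedy .*? spans arbitrary attributes/text; # is used as the
      // delimiter so the slashes in the markup need no escaping.
      preg_match_all('#<h2><a[^>]*>(.*?)</a></h2>#s', $html, $m1);                  // 1. link text in <h2>
      preg_match_all('#<cite><a href="([^"]*)"[^>]*>.*?</a></cite>#s', $html, $m2); // 2. href in <cite>
      preg_match_all('#<cite><a[^>]*>(.*?)</a></cite>#s', $html, $m3);              // 3. link text in <cite>
      preg_match_all('#<span>(.*?)</span>#s', $html, $m4);                          // 4. text in <span>
      // $m1[1], $m2[1], $m3[1], $m4[1] hold the captured strings.
      ?>

    (For anything beyond a quick scrape, a real parser such as DOMDocument is a safer tool than regular expressions over HTML.)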

    Read the article

  • Browser language detection & content ranking for new language on the same site.

    - by Arnaud
    I've been reading a lot about it but it's still really hard to make up my mind. My understanding is that if your website provides a link to the other language, this should not be an issue for Google; as long as your links are clear and clean, Google will be able to make its way through them. The website was originally in French and I added the English version, and I'm just worried that English speakers will just leave if the site is not in the correct language. For the home page I just wanted to get the value from the browser and redirect it to /fr/ or /en/ for the first page (using PHP this will be very easy). Could you guys have a look at it and tell me what you think about it: http://tinyurl.com/bpc5bn9 I don't want to get it wrong and lose my ranking with Google. Also, the website has a good rank on the French side, while the English side has been online for 2 weeks and only gets a few visits a day. Is that because all the backlinks refer to /fr/, and Google is clever enough to decide that they are 2 different websites, so backlinks will have to point to /en/ to increase the ranking value? Or will it take a few more weeks for the website to grow? Thanks for your help
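    A minimal sketch of the browser-language redirect mentioned above (an assumption about the intended implementation; only the fr/en split comes from the post). It inspects the Accept-Language header and redirects to the matching section, defaulting to English:

      <?php
      // index.php at the site root (hypothetical file layout)
      $accept = isset($_SERVER['HTTP_ACCEPT_LANGUAGE']) ? $_SERVER['HTTP_ACCEPT_LANGUAGE'] : '';
      // First two letters of the first language tag, e.g. "fr" from "fr-FR,fr;q=0.9,en;q=0.8"
      $lang = strtolower(substr($accept, 0, 2));
      $target = ($lang === 'fr') ? '/fr/' : '/en/';
      header('Location: ' . $target, true, 302); // 302, not 301: the choice depends on the visitor
      exit;
      ?>

    A 302 keeps crawlers from treating the root as permanently moved to either language, and each version staying on its own stable /fr/ or /en/ URL is what lets links accumulate per language.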

    Read the article

  • How to install Tor (Web Browser) in Ubuntu 12.10?

    - by Zignd
    I would like to install Tor, but I'm having some problems. I know that someone will say "This question is an exact duplicate of How to install tor?", but it's not, because the other question cannot be applied to Ubuntu 12.10, as the deb command is not available anymore. I did some research and even at Tor's official website the available resources cannot be applied to Ubuntu 12.10. I tried to use the deb command (as the above question says: deb http://deb.torproject.org/torproject.org <DISTRIBUTION> main) and the Terminal says deb: command not found, and when I try to install it, it says E: Unable to locate package deb. I've also tried to use the ppa: ubun-tor, but it's not compatible with Quantal Quetzal, because it's too old. I've also tried sudo apt-get install tor, but the browser icon doesn't show up after installation, and if I run the command tor in the Terminal I get the following error message:

      Nov 26 10:59:25.731 [notice] Tor v0.2.3.22-rc (git-4a0c70a817797420) running on Linux.
      Nov 26 10:59:25.731 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
      Nov 26 10:59:25.731 [notice] Read configuration file "/etc/tor/torrc".
      Nov 26 10:59:25.737 [notice] Initialized libevent version 2.0.19-stable using method epoll (with changelist). Good.
      Nov 26 10:59:25.737 [notice] Opening Socks listener on 127.0.0.1:9050
      Nov 26 10:59:25.737 [warn] Could not bind to 127.0.0.1:9050: Address already in use. Is Tor already running?
      Nov 26 10:59:25.737 [warn] Failed to parse/validate config: Failed to bind one of the listener ports.
      Nov 26 10:59:25.737 [err] Reading config failed--see warnings above.

    Thanks in advance.
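    For context, a hedged sketch of what that "deb ..." line is actually for (an assumption that this is the missing step, since it is how APT sources normally work): it is not a shell command but a line for /etc/apt/sources.list, with <DISTRIBUTION> replaced by the release codename (quantal for 12.10):

      # /etc/apt/sources.list: add the repository line, then refresh and install
      deb http://deb.torproject.org/torproject.org quantal main

      # then, in a terminal:
      sudo apt-get update
      sudo apt-get install tor

    The "Address already in use" warnings in the log above also suggest the apt-get install actually worked: the tor daemon is already running as a service, so launching a second instance by hand fails to bind port 9050. Note that the tor package is only the daemon; the graphical Tor Browser Bundle ships separately from torproject.org, which would explain the missing browser icon.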

    Read the article

  • When writing tests for a WordPress plugin, should I run them inside WordPress or in a normal browser?

    - by Nicola Peluchetti
    I have started using BDD for a WordPress plugin I'm working on and I'm rewriting the JS codebase to do tests. I've encountered a few problems but I'm going steady now. I was wondering if I had the right approach, because I'm writing tests that should pass in a normal browser environment and not inside WordPress. I chose to do this because I want my plugin to be totally independent from the WordPress environment: I'm using RequireJS in a way that I don't expose any globals, and I'm loading my own version of jQuery that doesn't override the one that ships with WordPress. In this way my plugin would work the same on every WordPress version, and my code would not break if they change the jQuery version or someone uses my plugin on an old WordPress version. I wonder if this is the right approach or if I should always test inside the environment I'm working in. Since WordPress implies some globals, I had to write some functions purely for testing purposes, like "get_ajax_url":

      "get_ajax_url": function() {
          if (typeof window.ajaxurl === "undefined") {
              return "http://localhost/wordpress/wp-admin/admin-ajax.php";
          } else {
              return window.ajaxurl;
          }
      },

    but apart from that I got everything working right. What do you think?
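    For what it's worth, a minimal sketch of the "own jQuery without clobbering WordPress's" setup described above (an assumption about the wiring; module names are hypothetical): jQuery.noConflict(true) hands back the plugin's copy and restores whatever $ and jQuery pointed to before it loaded.

      // jquery-private.js: wrap the bundled jQuery so it never leaks into the page
      define(["jquery"], function () {
          // true restores the global `jQuery` as well as `$`,
          // leaving WordPress's copy untouched for other plugins/themes
          return jQuery.noConflict(true);
      });

      // any plugin module then depends on the private copy:
      define(["jquery-private"], function ($) {
          // $ here is the plugin's jQuery, in browser tests and inside WordPress alike
      });

    Because the modules only ever see the injected $, the same test suite can run in a bare browser page or inside wp-admin without caring which environment it is in.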

    Read the article

  • Disallow robots.txt from being accessed in a browser but still accessible by spiders?

    - by Michael Irigoyen
    We make use of the robots.txt file to prevent Google (and other search spiders) from crawling certain pages/directories in our domain. Some of these directories/files are secret, meaning they aren't linked (except perhaps on other pages encompassed by the robots.txt file). Some of these directories/files aren't secret, we just don't want them indexed. If somebody browses directly to www.mydomain.com/robots.txt, they can see the contents of the robots.txt file. From a security standpoint, this is not something we want publicly available to anybody. Any directories that contain secure information are set behind authentication, but we still don't want them to be discoverable unless the user specifically knows about them. Is there a way to provide a robots.txt file but to have its presence masked from John Doe accessing it from his browser? Perhaps by using PHP to generate the document based on certain criteria? Perhaps something I'm not thinking of? We'd prefer a way to do it centrally (meaning a <meta> tag solution is less than ideal).
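    One hedged sketch of the PHP idea floated above (an assumption; user-agent checks are trivially spoofable, so this hides the file from casual browsing only, not from anyone determined). Apache would first route the name to a script:

      # .htaccess: serve robots.txt from a script (requires mod_rewrite)
      RewriteEngine on
      RewriteRule ^robots\.txt$ robots.php [L]

    and the script would only reveal the interesting rules to self-identified crawlers:

      <?php
      // robots.php: hypothetical example with placeholder paths
      header('Content-Type: text/plain');
      $ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
      if (preg_match('/googlebot|bingbot|slurp/i', $ua)) {
          echo "User-agent: *\n";
          echo "Disallow: /private-dir/\n"; // real rules for crawlers
      } else {
          echo "User-agent: *\n";
          echo "Disallow:\n";               // bland file for browsers
      }
      ?>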

    Read the article

  • Apache - create multiple aliases

    - by mc3mcintyre
    I'm trying to set up two websites on my Apache server. One is www.domain.com and the other is test.domain.com. Currently, my 000-default.conf file reads as follows:

        <VirtualHost www:80>
            # The ServerName directive sets the request scheme, hostname and port that
            # the server uses to identify itself. This is used when creating
            # redirection URLs. In the context of virtual hosts, the ServerName
            # specifies what hostname must appear in the request's Host: header to
            # match this virtual host. For the default virtual host (this file) this
            # value is not decisive as it is used as a last resort host regardless.
            # However, you must set it for any further virtual host explicitly.
            #ServerName www.domain.com
            #ServerAlias www

            ServerAdmin [email protected]
            DocumentRoot /var/www/domain.com/

            # Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
            # error, crit, alert, emerg.
            # It is also possible to configure the loglevel for particular
            # modules, e.g.
            #LogLevel info ssl:warn

            ErrorLog ${APACHE_LOG_DIR}/domain.error.log
            CustomLog ${APACHE_LOG_DIR}/domain.access.log combined
            UseCanonicalName on
            allow from all
            Options +Indexes

            # For most configuration files from conf-available/, which are
            # enabled or disabled at a global level, it is possible to
            # include a line for only one particular virtual host. For example the
            # following line enables the CGI configuration for this host only
            # after it has been globally disabled with "a2disconf".
            #Include conf-available/serve-cgi-bin.conf
        </VirtualHost>

        <VirtualHost test:80>
            DocumentRoot "/var/www/domain.com/test/"
            ServerName test.domain.com
            ServerAdmin [email protected]
            ErrorLog ${APACHE_LOG_DIR}/test.domain.error.log
            CustomLog ${APACHE_LOG_DIR}/test.domain.access.log combined
            UseCanonicalName on
            allow from all
            Options +Indexes
        </VirtualHost>

        # vim: syntax=apache ts=4 sw=4 sts=4 sr noet

    As is, when I use a browser to go to the www location, it shows me a directory listing. However, if I remove the www:80 on Line 1 and replace it with *:80, it correctly displays the webpage. I don't understand why. Can anyone help me configure this 000-default.conf file so that www goes to "/var/www/domain.com" and test goes to "/var/www/domain.com/test"? Thank you.
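    For what it's worth, the address in <VirtualHost> is matched against the socket the request arrived on, not against the Host: header; name-based selection is done with ServerName/ServerAlias. A minimal sketch of the usual pattern, using the paths from the question:

        # Both vhosts bind to all interfaces on port 80; Apache then picks
        # the site by comparing the request's Host: header to ServerName.
        # (On Apache 2.2 you would also need "NameVirtualHost *:80".)
        <VirtualHost *:80>
            ServerName www.domain.com
            DocumentRoot /var/www/domain.com
        </VirtualHost>

        <VirtualHost *:80>
            ServerName test.domain.com
            DocumentRoot /var/www/domain.com/test
        </VirtualHost>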

    Read the article

  • Trying to update Debian not working

    - by Sean
    As root I type the command apt-get update and get these error messages:

        Err http://security.debian.org lenny/updates Release.gpg  Could not resolve 'security.debian.org'
        Err http://security.debian.org lenny/updates/main Translation-en_US  Could not resolve 'security.debian.org'
        Err http://security.debian.org lenny/updates/contrib Translation-en_US  Could not resolve 'security.debian.org'
        Err http://security.debian.org lenny/updates/non-free Translation-en_US  Could not resolve 'security.debian.org'
        Err http://www.backports.org lenny-backports Release.gpg  Could not resolve 'www.backports.org'
        Err http://www.backports.org lenny-backports/main Translation-en_US  Could not resolve 'www.backports.org'
        Err http://www.backports.org lenny-backports/contrib Translation-en_US  Could not resolve 'www.backports.org'
        Err http://www.backports.org lenny-backports/non-free Translation-en_US  Could not resolve 'www.backports.org'
        Err http://ftp.us.debian.org lenny Release.gpg  Could not resolve 'ftp.us.debian.org'
        Err http://ftp.us.debian.org lenny/main Translation-en_US  Could not resolve 'ftp.us.debian.org'
        Err http://ftp.us.debian.org lenny/contrib Translation-en_US  Could not resolve 'ftp.us.debian.org'
        Err http://ftp.us.debian.org lenny/non-free Translation-en_US  Could not resolve 'ftp.us.debian.org'
        Err http://http.us.debian.org stable Release.gpg  Could not resolve 'http.us.debian.org'
        Err http://http.us.debian.org stable/main Translation-en_US  Could not resolve 'http.us.debian.org'
        Err http://http.us.debian.org stable/contrib Translation-en_US  Could not resolve 'http.us.debian.org'
        Err http://http.us.debian.org stable/non-free Translation-en_US  Could not resolve 'http.us.debian.org'
        Reading package lists... Done
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/Release.gpg  Could not resolve 'ftp.us.debian.org'
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/main/i18n/Translation-en_US.gz  Could not resolve 'ftp.us.debian.org'
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/contrib/i18n/Translation-en_US.gz  Could not resolve 'ftp.us.debian.org'
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/non-free/i18n/Translation-en_US.gz  Could not resolve 'ftp.us.debian.org'
        W: Failed to fetch http://http.us.debian.org/debian/dists/stable/Release.gpg  Could not resolve 'http.us.debian.org'
        W: Failed to fetch http://http.us.debian.org/debian/dists/stable/main/i18n/Translation-en_US.gz  Could not resolve 'http.us.debian.org'
        W: Failed to fetch http://http.us.debian.org/debian/dists/stable/contrib/i18n/Translation-en_US.gz  Could not resolve 'http.us.debian.org'
        W: Failed to fetch http://http.us.debian.org/debian/dists/stable/non-free/i18n/Translation-en_US.gz  Could not resolve 'http.us.debian.org'
        W: Failed to fetch http://security.debian.org/dists/lenny/updates/Release.gpg  Could not resolve 'security.debian.org'
        W: Failed to fetch http://security.debian.org/dists/lenny/updates/main/i18n/Translation-en_US.gz  Could not resolve 'security.debian.org'
        W: Failed to fetch http://security.debian.org/dists/lenny/updates/contrib/i18n/Translation-en_US.gz  Could not resolve 'security.debian.org'
        W: Failed to fetch http://security.debian.org/dists/lenny/updates/non-free/i18n/Translation-en_US.gz  Could not resolve 'security.debian.org'
        W: Failed to fetch http://www.backports.org/debian/dists/lenny-backports/Release.gpg  Could not resolve 'www.backports.org'
        W: Failed to fetch http://www.backports.org/debian/dists/lenny-backports/main/i18n/Translation-en_US.gz  Could not resolve 'www.backports.org'
        W: Failed to fetch http://www.backports.org/debian/dists/lenny-backports/contrib/i18n/Translation-en_US.gz  Could not resolve 'www.backports.org'
        W: Failed to fetch http://www.backports.org/debian/dists/lenny-backports/non-free/i18n/Translation-en_US.gz  Could not resolve 'www.backports.org'
        W: Some index files failed to download, they have been ignored, or old ones used instead.
        W: You may want to run apt-get update to correct these problems

    This is on a DreamPlug Linux server, configured so that my network starts on 192.168.1.2 and my router port-forwards SSH to the server at 192.168.1.6.
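    All of these are DNS failures ("Could not resolve"), not repository problems, so a few hedged sanity checks are worth running before touching the sources (standard tools assumed):

        cat /etc/resolv.conf                  # which nameserver is apt using?
        ping -c 1 8.8.8.8                     # raw IP connectivity, bypassing DNS
        nslookup ftp.us.debian.org            # does name resolution work at all?
        nslookup ftp.us.debian.org 8.8.8.8    # ...and against a known-good resolver?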

    Read the article

  • Trouble with DNS and Debian update

    - by Sean
    I tried to update my Debian DreamPlug server with the command apt-get update (running as root) and received these errors:

        Err http://security.debian.org lenny/updates Release.gpg  Could not resolve 'security.debian.org'
        Err http://security.debian.org lenny/updates/main Translation-en_US  Could not resolve 'security.debian.org'
        Err http://security.debian.org lenny/updates/contrib Translation-en_US  Could not resolve 'security.debian.org'
        Err http://security.debian.org lenny/updates/non-free Translation-en_US  Could not resolve 'security.debian.org'
        Err http://www.backports.org lenny-backports Release.gpg  Could not resolve 'www.backports.org'
        Err http://www.backports.org lenny-backports/main Translation-en_US  Could not resolve 'www.backports.org'
        Err http://www.backports.org lenny-backports/contrib Translation-en_US  Could not resolve 'www.backports.org'
        Err http://www.backports.org lenny-backports/non-free Translation-en_US  Could not resolve 'www.backports.org'
        Err http://ftp.us.debian.org lenny Release.gpg  Could not resolve 'ftp.us.debian.org'
        Err http://ftp.us.debian.org lenny/main Translation-en_US  Could not resolve 'ftp.us.debian.org'
        Err http://ftp.us.debian.org lenny/contrib Translation-en_US  Could not resolve 'ftp.us.debian.org'
        Err http://ftp.us.debian.org lenny/non-free Translation-en_US  Could not resolve 'ftp.us.debian.org'
        Err http://http.us.debian.org stable Release.gpg  Could not resolve 'http.us.debian.org'
        Err http://http.us.debian.org stable/main Translation-en_US  Could not resolve 'http.us.debian.org'
        Err http://http.us.debian.org stable/contrib Translation-en_US  Could not resolve 'http.us.debian.org'
        Err http://http.us.debian.org stable/non-free Translation-en_US  Could not resolve 'http.us.debian.org'
        Reading package lists... Done
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/Release.gpg  Could not resolve 'ftp.us.debian.org'
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/main/i18n/Translation-en_US.gz  Could not resolve 'ftp.us.debian.org'
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/contrib/i18n/Translation-en_US.gz  Could not resolve 'ftp.us.debian.org'
        W: Failed to fetch http://ftp.us.debian.org/debian/dists/lenny/non-free/i18n/Translation-en_US.gz  Could not resolve 'ftp.us.debian.org'
        W: Failed to fetch http://http.us.debian.org/debian/dists/stable/Release.gpg  Could not resolve 'http.us.debian.org'
        W: Failed to fetch http://http.us.debian.org/debian/dists/stable/main/i18n/Translation-en_US.gz  Could not resolve 'http.us.debian.org'
        W: Failed to fetch http://http.us.debian.org/debian/dists/stable/contrib/i18n/Translation-en_US.gz  Could not resolve 'http.us.debian.org'
        W: Failed to fetch http://http.us.debian.org/debian/dists/stable/non-free/i18n/Translation-en_US.gz  Could not resolve 'http.us.debian.org'
        W: Failed to fetch http://security.debian.org/dists/lenny/updates/Release.gpg  Could not resolve 'security.debian.org'
        W: Failed to fetch http://security.debian.org/dists/lenny/updates/main/i18n/Translation-en_US.gz  Could not resolve 'security.debian.org'
        W: Failed to fetch http://security.debian.org/dists/lenny/updates/contrib/i18n/Translation-en_US.gz  Could not resolve 'security.debian.org'
        W: Failed to fetch http://security.debian.org/dists/lenny/updates/non-free/i18n/Translation-en_US.gz  Could not resolve 'security.debian.org'
        W: Failed to fetch http://www.backports.org/debian/dists/lenny-backports/Release.gpg  Could not resolve 'www.backports.org'
        W: Failed to fetch http://www.backports.org/debian/dists/lenny-backports/main/i18n/Translation-en_US.gz  Could not resolve 'www.backports.org'
        W: Failed to fetch http://www.backports.org/debian/dists/lenny-backports/contrib/i18n/Translation-en_US.gz  Could not resolve 'www.backports.org'
        W: Failed to fetch http://www.backports.org/debian/dists/lenny-backports/non-free/i18n/Translation-en_US.gz  Could not resolve 'www.backports.org'
        W: Some index files failed to download, they have been ignored, or old ones used instead.
        W: You may want to run apt-get update to correct these problems

    I am able to ping IP addresses but not hostnames, and I can't seem to figure out the problem. My /etc/resolv.conf file contains nameserver 192.168.1.2, which is my router.
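    Since IPs ping but names don't, the resolver at 192.168.1.2 is the prime suspect. A quick hedged test, assuming /etc/resolv.conf can be edited directly on this system (8.8.8.8 is Google's public resolver):

        # Temporarily add a public resolver and retry; if this works, the
        # router's DNS forwarding is the problem:
        echo "nameserver 8.8.8.8" >> /etc/resolv.conf
        nslookup ftp.us.debian.org
        apt-get update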

    Read the article

  • CSS Hidden DIV Scroll to view

    - by Dasa
    <DIV CLASS="fact" ID="xnews-4" > <DIV CLASS="storyheadline"> <DIV CLASS="vote up"><A ID="xvotes-4" HREF="javascript:vote(0,258160,4,'f3d3d9c1885fb8508bdbde825d0dfd6e',10)">1</A><A ID="xvote-4" HREF="javascript:vote(0,258160,4,'f3d3d9c1885fb8508bdbde825d0dfd6e',10)"><strong>Vote</strong></A> </DIV> <DIV CLASS="title" ID="title-4"><H2><A HREF="/story.php?title=this-article-is-about-the-song">This article is about the song.</A></H2> </DIV> </DIV> <DIV CLASS="storycontent subtext hidden" ID="plus4" style="display: none;"> <DIV>Posted by <A HREF="/user.php?login=fact-o-matic">fact-o-matic</A> 2 minutes ago | Source: Editorial<SPAN ID="ls_adminlinks-4" style="display:none"> </SPAN> </DIV> <DIV> <DIV CLASS="floatleft"> <A HREF="/story.php?title=this-article-is-about-the-song"> Read More</A>&nbsp;|<SPAN ID="ls_comments_url-4"> <SPAN CLASS="linksummaryDiscuss"><A HREF="/story.php?title=this-article-is-about-the-song#discuss" CLASS="comments">Discuss</A></SPAN></SPAN> <SPAN ID="xreport-4"><SPAN CLASS="linksummaryBury">| <A HREF="javascript:vote(0,258160,4,'f3d3d9c1885fb8508bdbde825d0dfd6e',-10)">Bury</A></SPAN></SPAN> |&nbsp; <span id="emailto-4" style="display:none"></span><span id="linksummaryAddLink"> <a href="javascript://" onclick="var replydisplay=document.getElementById('addto-4').style.display ? '' : 'none';document.getElementById('addto-4').style.display = replydisplay;"> Add To</a>&nbsp; </span> <span id="addto-4" style="display:none"> <div style="position:absolute;display:block;background:#fff;padding:10px;margin:10px 0 0 100px;font-size:12px;border:2px solid #000;"> &nbsp;&nbsp;<a title="submit 'This article is about the song.' to del.icio.us" href="http://del.icio.us/post" onclick="window.open('http://del.icio.us/post?v=4&amp;noui&amp;jump=close&amp;url=http%3A%2F%2Fwww.factmeme.com%2Fstory.php%3Ftitle%3Dthis-article-is-about-the-song&amp;title=This+article+is+about+the+song.', '','toolbar=no,width=700,height=400'); return false;"><img src="http://www.factmeme.com/modules/social_bookmark/images/delicious.png" border="0" alt="submit 'This article is about the song.' to del.icio.us" /></a> &nbsp;&nbsp;<a title="submit 'This article is about the song.' to digg" href="http://digg.com/submit?phase=2&amp;url=http%3A%2F%2Fwww.factmeme.com%2Fstory.php%3Ftitle%3Dthis-article-is-about-the-song&amp;title=This article is about the song.&amp;bodytext="><img src="http://www.factmeme.com/modules/social_bookmark/images/digg.png" border="0" alt="submit 'This article is about the song.' to digg" /></a> &nbsp;&nbsp;<a title="submit 'This article is about the song.' to reddit" href="http://reddit.com/submit?url=http%3A%2F%2Fwww.factmeme.com%2Fstory.php%3Ftitle%3Dthis-article-is-about-the-song&amp;title=This article is about the song."><img src="http://www.factmeme.com/modules/social_bookmark/images/reddit.gif" border="0" alt="submit 'This article is about the song.' to reddit" /></a> &nbsp;&nbsp;<a title="submit 'This article is about the song.' to facebook" href="http://www.facebook.com/sharer.php?u=http%3A%2F%2Fwww.factmeme.com%2Fstory.php%3Ftitle%3Dthis-article-is-about-the-song&t=This article is about the song."><img src="http://www.factmeme.com/modules/social_bookmark/images/facebook.gif" border="0" alt="submit 'This article is about the song.' to facebook" /></a> &nbsp;&nbsp;<a title="submit 'This article is about the song.' 
to technorati" href="http://www.technorati.com/faves?add=http%3A%2F%2Fwww.factmeme.com%2Fstory.php%3Ftitle%3Dthis-article-is-about-the-song"><img src="http://www.factmeme.com/modules/social_bookmark/images/technorati.gif" border="0" alt="submit 'This article is about the song.' to technorati" /></a> &nbsp;&nbsp;<a title="submit 'This article is about the song.' to slashdot" href="http://slashdot.org/bookmark.pl?url=http%3A%2F%2Fwww.factmeme.com%2Fstory.php%3Ftitle%3Dthis-article-is-about-the-song&title=This article is about the song."><img src="http://www.factmeme.com/modules/social_bookmark/images/slashdot.gif" border="0" alt="submit 'This article is about the song.' to slashdot" /></a> &nbsp;&nbsp;<a title="submit 'This article is about the song.' to Stumbleupon" href="http://www.stumbleupon.com/submit?url=http%3A%2F%2Fwww.factmeme.com%2Fstory.php%3Ftitle%3Dthis-article-is-about-the-song&amp;title=This article is about the song."><img src="http://www.factmeme.com/modules/social_bookmark/images/icon-stumbleupon.gif" border="0" alt="submit 'This article is about the song.' to Stumbleupon" /></a> &nbsp;&nbsp;<a title="submit 'This article is about the song.' to Windows Live" href="https://favorites.live.com/quickadd.aspx?url=http%3A%2F%2Fwww.factmeme.com%2Fstory.php%3Ftitle%3Dthis-article-is-about-the-song&title=This article is about the song."><img src="http://www.factmeme.com/modules/social_bookmark/images/windowslive.gif" border="0" alt="submit 'This article is about the song.' to Windows Live" /></a> &nbsp;&nbsp;<a title="submit 'This article is about the song.' to squidoo" href="http://www.squidoo.com/lensmaster/bookmark?http%3A%2F%2Fwww.factmeme.com%2Fstory.php%3Ftitle%3Dthis-article-is-about-the-song"><img src="http://www.factmeme.com/modules/social_bookmark/images/squidoo.gif" border="0" alt="submit 'This article is about the song.' to squidoo" /></a> &nbsp;&nbsp;<a title="submit 'This article is about the song.' to yahoo" href="http://myweb2.search.yahoo.com/myresults/bookmarklet?u=http%3A%2F%2Fwww.factmeme.com%2Fstory.php%3Ftitle%3Dthis-article-is-about-the-song&amp;title=This article is about the song."><img src="http://www.factmeme.com/modules/social_bookmark/images/yahoomyweb.png" border="0" alt="submit 'This article is about the song.' to yahoo" /></a> &nbsp;&nbsp;<a title="submit 'This article is about the song.' to google" href="http://www.google.com/bookmarks/mark?op=edit&bkmk=http%3A%2F%2Fwww.factmeme.com%2Fstory.php%3Ftitle%3Dthis-article-is-about-the-song&title=This article is about the song."><img src="http://www.factmeme.com/modules/social_bookmark/images/googlebookmarks.gif" border="0" alt="submit 'This article is about the song.' to google" /></a> &nbsp;&nbsp;<a title="submit 'This article is about the song.' to ask" href=" http://myjeeves.ask.com/mysearch/BookmarkIt?v=1.2&t=webpages&url=http%3A%2F%2Fwww.factmeme.com%2Fstory.php%3Ftitle%3Dthis-article-is-about-the-song&title=This article is about the song."><img src="http://www.factmeme.com/modules/social_bookmark/images/ask.gif" border="0" alt="submit 'This article is about the song.' 
to ask" /></a> <hr /> <p style="font-size:16px;font-weight:bold;margin:0px;">Story URL</p> <script type="text/javascript"> function select_all() { var text_val=eval("document.storyurl.thisurl"); text_val.focus(); text_val.select(); } </script> <form name="storyurl"><input type="text" name="thisurl" size="92" onClick="select_all();" value="http://www.factmeme.com/story.php?title=this-article-is-about-the-song"></form> </div> </span> </DIV> <DIV CLASS="floatright"> <A HREF="/index.php?category=culture">Culture</A> | <A HREF="/search.php?search=Olavi&amp;tag=true">Olavi</A> <A HREF="/search.php?search=Olavi&amp;tag=true">All</A> </DIV></DIV> <DIV CLASS="more show"></DIV> </DIV>

    When I click on "Add To", a box is supposed to pop up with options to add the story to other websites. The problem is that nothing appears to happen, even though something actually is happening: the only way to see the "Add To" box is to click "scroll into view" on the element in Firefox's inspector, and even then the box appears in the wrong place. I am almost certain that it is a positioning conflict, but I can't work it out. How can I fix it?
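    For what it's worth, a common cause of this symptom is that position:absolute places an element relative to its nearest positioned ancestor, and when no ancestor is positioned it falls back to the page itself, so the box ends up far from the link. A hedged CSS sketch, with selectors taken from the markup above and values that are purely illustrative:

        /* Give the toggled container a positioning context so the
           absolutely-positioned popup is placed relative to it: */
        #addto-4 { position: relative; display: inline-block; }
        #addto-4 > div { position: absolute; top: 100%; left: 0; }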

    Read the article

  • nginx - 403 Forbidden

    - by michell90
    I'm having trouble getting aliases to work correctly on nginx. When I try to access the aliases, /pma and /mda (see secure.example.com.conf), I get a 403 Forbidden, but the base URL works correctly. I read a lot of posts but nothing helped, so here I am. Nginx and php-fpm are running as www-data:www-data, and the permissions for the directories are set to:

        drwxrwsr-x+  5 www-data www-data 4.0K Dec  5 22:48 ./
        drwxr-xr-x.  3 root     root     4.0K Dec  4 22:50 ../
        drwxrwsr-x+  2 www-data www-data 4.0K Dec  5 13:10 mda.example.com/
        drwxrwsr-x+ 11 www-data www-data 4.0K Dec  5 10:34 pma.example.com/
        drwxrwsr-x+  3 www-data www-data 4.0K Dec  5 11:49 www.example.com/
        lrwxrwxrwx.  1 www-data www-data   18 Dec  5 09:56 secure.example.com -> www.example.com/

    I'm sorry for the bulk, but I thought better too much than too little. Here are the configuration files:

    /etc/nginx/nginx.conf

        user www-data www-data;
        worker_processes 1;
        error_log /var/log/nginx/error.log;
        #error_log /var/log/nginx/error.log notice;
        #error_log /var/log/nginx/error.log info;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            access_log /var/log/nginx/access.log main;
            sendfile on;
            keepalive_timeout 65;
            include /etc/nginx/sites-enabled/*;
        }

    /etc/nginx/sites-enabled/secure.example.com

        server {
            listen 80;
            server_name secure.example.com;
            return 301 https://$host$request_uri;
        }

        server {
            listen 443;
            server_name secure.example.com;
            access_log /var/log/nginx/secure.example.com.access.log;
            error_log /var/log/nginx/secure.example.com.error.log;
            root /srv/http/secure.example.com;
            include /etc/nginx/ssl/secure.example.com.conf;
            include /etc/nginx/conf.d/index.conf;
            include /etc/nginx/conf.d/php-ssl.conf;
            autoindex off;

            location /pma/ {
                alias /srv/http/pma.example.com;
            }
            location /mda/ {
                alias /srv/http/mda.example.com;
            }
        }

    /etc/nginx/ssl/secure.example.com.conf

        ssl on;
        ssl_certificate /etc/nginx/ssl/secure.example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/secure.example.com.key;
        ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;

    /etc/nginx/conf.d/index.conf

        index index.php index.html index.htm;

    /etc/nginx/conf.d/php-ssl.conf

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param HTTPS on;
            fastcgi_param SCRIPT_FILENAME $request_filename;
            include fastcgi_params;
        }

    /var/log/nginx/secure.example.com.error.log

        2013/12/05 22:49:04 [error] 29291#0: *2 directory index of "/srv/http/pma.example.com" is forbidden, client: 176.199.78.88, server: secure.example.com, request: "GET /pma/ HTTP/1.1", host: "secure.example.com"

    EDIT: forgot to mention, I'm running CentOS 6.4 x86_64 and nginx 1.0.15. Thanks in advance!
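    Two things stand out in the config and error message above: the alias targets lack a trailing slash (so /pma/foo maps to /srv/http/pma.example.comfoo), and the separate "location ~ \.php$" block takes precedence for /pma/index.php but resolves it against the server root rather than the alias. A hedged sketch of one usual fix, using a nested PHP location so $request_filename honors the alias:

        # With "location /pma/", a request for /pma/foo is mapped to
        # <alias> + "foo", so the alias path needs its trailing slash:
        location /pma/ {
            alias /srv/http/pma.example.com/;
            location ~ \.php$ {
                fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_param HTTPS on;
                fastcgi_param SCRIPT_FILENAME $request_filename;
                include fastcgi_params;
            }
        }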

    Read the article

  • java.lang.ClassNotFoundException: org.springframework.ui.ModelMap

    - by aelshereay
    I created a simple webapp using Tomcat 6, Spring 2.5.6 and Maven. The problem is that when I boot up Tomcat, I get the following errors:

        SEVERE: StandardWrapper.Throwable
        java.lang.NoClassDefFoundError: org/springframework/ui/ModelMap
        ...
        Caused by: java.lang.ClassNotFoundException: org.springframework.ui.ModelMap

    The ModelMap class does exist in spring-2.5.6.jar and spring-context-2.5.6.jar, and I also have some other Spring jars. All of them are being deployed to Tomcat correctly; when I check the application's WEB-INF (deployed to Tomcat) I find all those jars there! I have only one @Controller, which has a @RequestMapping("/home.htm") showForm(ModelMap model) method. My applicationContext is quite simple:

        <?xml version="1.0" encoding="utf-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:aop="http://www.springframework.org/schema/aop"
               xmlns:context="http://www.springframework.org/schema/context"
               xmlns:dwr="http://www.directwebremoting.org/schema/spring-dwr"
               xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
                   http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.5.xsd
                   http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-2.5.xsd
                   http://www.directwebremoting.org/schema/spring-dwr http://www.directwebremoting.org/schema/spring-dwr-3.0.xsd"
               default-autowire="byName">

            <context:component-scan base-package="org.myapp"/>

            <bean class="org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping"/>
            <bean class="org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter"/>

            <bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver">
                <property name="viewClass" value="org.springframework.web.servlet.view.JstlView"></property>
                <property name="prefix" value="/WEB-INF/view/"></property>
                <property name="suffix" value=".jsp"></property>
            </bean>
        </beans>
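    If the jars really are in WEB-INF/lib, one hedged guess is a duplicate-jar conflict: having both the monolithic spring-2.5.6.jar and the modular jars (spring-context, etc.) on the classpath can produce exactly this kind of NoClassDefFoundError at startup. A sketch of a pom.xml that depends only on the modular artifacts (default compile scope, so they get packaged into WEB-INF/lib):

        <!-- Sketch: modular Spring artifacts only; do not also depend on
             the all-in-one "spring" artifact. Versions from the question. -->
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
            <version>2.5.6</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-webmvc</artifactId>
            <version>2.5.6</version>
        </dependency>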

    Read the article

  • Apache server still running, but users cannot connect to the website; after "sudo apachectl restart" they can. What's wrong? [on hold]

    - by Tinyfool
    My website is http://ourcoders.com/. Recently, users have sometimes reported that they cannot connect to it, but when I ssh to the server I find Apache still running, like this:

        root@AY1401261057077842eaZ:~# ps aux|grep apache
        root      873 0.0 1.3 290496 13528 ?     Ss  Aug18 0:28 /usr/sbin/apache2 -k start
        www-data 3490 0.0 1.8 299004 18764 ?     S   Aug21 0:01 /usr/sbin/apache2 -k start
        www-data 3612 0.0 1.5 296008 15540 ?     S   Aug21 0:03 /usr/sbin/apache2 -k start
        www-data 3860 0.0 1.5 296636 16268 ?     S   Aug21 0:00 /usr/sbin/apache2 -k start
        www-data 3913 0.0 1.2 295468 13084 ?     S   Aug21 0:00 /usr/sbin/apache2 -k start
        www-data 3931 0.0 1.7 298488 18228 ?     S   16:02 0:01 /usr/sbin/apache2 -k start
        www-data 3938 0.0 1.9 299128 19724 ?     S   16:02 0:02 /usr/sbin/apache2 -k start
        www-data 4465 0.0 1.6 296688 16404 ?     S   Aug21 0:00 /usr/sbin/apache2 -k start
        www-data 5075 0.0 1.2 295468 13044 ?     S   16:16 0:00 /usr/sbin/apache2 -k start
        www-data 5153 0.0 1.5 295880 15612 ?     S   16:17 0:00 /usr/sbin/apache2 -k start
        www-data 5770 0.0 1.5 296608 16016 ?     S   16:30 0:00 /usr/sbin/apache2 -k start
        www-data 5773 0.0 1.6 296948 16640 ?     S   16:30 0:00 /usr/sbin/apache2 -k start
        www-data 5816 0.0 1.6 297216 16976 ?     S   16:31 0:01 /usr/sbin/apache2 -k start
        www-data 5918 0.0 1.7 298228 17820 ?     S   16:33 0:01 /usr/sbin/apache2 -k start
        www-data 6023 0.0 1.9 299864 19840 ?     S   16:35 0:13 /usr/sbin/apache2 -k start
        www-data 6073 0.0 1.7 298480 18120 ?     S   16:36 0:02 /usr/sbin/apache2 -k start
        www-data 6088 0.0 2.0 300488 21008 ?     S   16:36 0:12 /usr/sbin/apache2 -k start
        www-data 6114 0.0 1.7 298548 18268 ?     S   16:37 0:12 /usr/sbin/apache2 -k start
        www-data 6134 0.0 1.6 296688 16532 ?     S   16:37 0:04 /usr/sbin/apache2 -k start
        www-data 6193 0.0 1.7 297908 17420 ?     S   16:38 0:08 /usr/sbin/apache2 -k start
        www-data 6821 0.0 1.8 299556 19072 ?     S   16:43 0:11 /usr/sbin/apache2 -k start
        www-data 7058 0.0 1.7 298676 18204 ?     S   16:48 0:10 /usr/sbin/apache2 -k start
        www-data 7065 0.0 1.8 299028 18868 ?     S   16:48 0:11 /usr/sbin/apache2 -k start
        www-data 7084 0.0 1.8 299508 19020 ?     S   16:48 0:11 /usr/sbin/apache2 -k start
        www-data 7221 0.0 1.8 299160 18768 ?     S   16:51 0:09 /usr/sbin/apache2 -k start
        www-data 11453 0.0 1.7 298484 18256 ?    S   09:39 0:02 /usr/sbin/apache2 -k start
        root    26324 0.0 0.0 8084 920 pts/0     S+  22:52 0:00 grep --color=auto apache
        root    28517 0.0 0.0 4404 612 ?         S   Aug21 0:00 /bin/sh -c /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log
        root    28518 0.0 0.0 4404 616 ?         S   Aug21 0:00 /bin/sh -c /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log
        root    28519 0.0 0.0 4404 612 ?         S   Aug21 0:00 /bin/sh -c /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log
        root    28520 0.0 0.0 4404 616 ?         S   Aug21 0:00 /bin/sh -c /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log
        root    28521 0.0 0.0 4312 552 ?         S   Aug21 0:00 /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log
        root    28522 0.0 0.0 4308 548 ?         S   Aug21 0:07 /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log
        root    28523 0.0 0.0 4176 352 ?         S   Aug21 0:00 /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log
        root    28524 0.0 0.0 4180 356 ?         S   Aug21 0:00 /usr/sbin/cronolog /var/log/apache2/cocoa/%Y/%m/access-%Y-%m-%d.log

    Today's only error log entries are below:

        [Sat Aug 23 22:52:47 2014] [notice] SIGHUP received. Attempting to restart
        [Sat Aug 23 22:52:47 2014] [notice] Apache/2.2.22 (Ubuntu) PHP/5.3.10-1ubuntu3.13 with Suhosin-Patch configured -- resuming normal operations

    Traffic information (cat access-2014-08-23.log | cut -d " " -f4 | cut -d":" -f2 | sort | uniq -c | sort -nr):

        5692 14
        5291 15
        5083 16
        4723 23
        4463 12
        4057 17
        4011 11
        3926 13
        3852 10
        3187 05
        3176 09
        3055 06
        2790 07
        2672 00
        2608 02
        2591 01
        2577 04
        2514 03
        2497 08
         707 22
          88 18

    After I run "sudo apachectl restart", users can connect to my website again. So I want to know: what is the problem? And if "sudo apachectl restart" is needed, can I automate running this command? Today this kind of problem appeared again, and I ran netstat -a -n:

Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN tcp 0 0 115.28.146.116:80 125.39.208.120:50708 SYN_RECV tcp 0 0 115.28.146.116:80 125.39.208.158:50278 SYN_RECV tcp 0 0 115.28.146.116:80 220.173.142.152:23320 SYN_RECV tcp 0 0 115.28.146.116:80 60.173.247.132:52851 SYN_RECV tcp 0 0 115.28.146.116:80 125.39.208.158:39397 SYN_RECV tcp 0 0 115.28.146.116:80 125.39.208.158:56894 SYN_RECV tcp 0 0 115.28.146.116:80 183.129.174.2:21291 SYN_RECV tcp 0 0 115.28.146.116:80 125.39.208.120:44499 SYN_RECV tcp 0 0 115.28.146.116:80 125.39.208.120:34017 SYN_RECV tcp 0 0 115.28.146.116:80 124.65.50.210:3774 SYN_RECV tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN tcp 0 0 127.0.0.1:15770 0.0.0.0:* LISTEN tcp 1 0 115.28.146.116:80 14.127.65.219:61633 CLOSE_WAIT tcp 305 0 115.28.146.116:80 125.39.208.120:37593 ESTABLISHED tcp 0 0 10.144.142.201:52866 10.146.6.61:3306 TIME_WAIT tcp 0 0 10.144.142.201:52873 10.146.6.61:3306 TIME_WAIT tcp 0 0 10.144.142.201:52868 10.146.6.61:3306 TIME_WAIT tcp 343 0 115.28.146.116:80 182.118.20.215:50709 ESTABLISHED tcp 0 0 115.28.146.116:54784 173.194.127.243:80 ESTABLISHED tcp 1 0 115.28.146.116:80 116.192.2.185:41253 CLOSE_WAIT tcp 0 0 10.144.142.201:52876 10.146.6.61:3306 ESTABLISHED tcp 559 0 115.28.146.116:80 218.241.144.114:54501 ESTABLISHED tcp 376 0 115.28.146.116:80 116.213.196.119:50604 CLOSE_WAIT tcp 1 0 115.28.146.116:80 113.36.238.28:59339 CLOSE_WAIT tcp 214 0 115.28.146.116:80 142.4.215.40:34443 ESTABLISHED tcp 0 0 115.28.146.116:48635 115.28.146.116:80 ESTABLISHED tcp 187 0 115.28.146.116:80 115.28.146.116:48635 ESTABLISHED tcp 0 0 10.144.142.201:52853 10.146.6.61:3306 TIME_WAIT tcp 594 0 115.28.146.116:80 183.129.174.2:7090 CLOSE_WAIT tcp 0 0 10.144.142.201:52874 10.146.6.61:3306 TIME_WAIT tcp 0 0 115.28.146.116:80 182.118.20.166:44081 TIME_WAIT tcp 1 0 115.28.146.116:80 113.36.238.28:59028 CLOSE_WAIT tcp 1 0 115.28.146.116:80 14.127.65.219:61665 CLOSE_WAIT tcp 0 0 10.144.142.201:52860 10.146.6.61:3306 TIME_WAIT tcp 0 0 10.144.142.201:46983 10.146.6.61:3306 ESTABLISHED tcp 0 2290 115.28.146.116:80 14.154.179.243:41049 FIN_WAIT1 tcp 0 0 10.144.142.201:42900 10.146.6.61:3306 ESTABLISHED tcp 571 0 115.28.146.116:80 220.173.142.152:23295 CLOSE_WAIT tcp 1 0 115.28.146.116:80 113.36.238.28:59337 CLOSE_WAIT tcp 438 0 115.28.146.116:80 42.120.74.202:31567 CLOSE_WAIT tcp 0 0 115.28.146.116:80 113.36.238.28:59498 ESTABLISHED tcp 259 0 115.28.146.116:80 66.249.65.56:36739 ESTABLISHED tcp 0 0 115.28.146.116:80 113.36.238.28:59341 ESTABLISHED tcp 0 0 115.28.146.116:80 142.4.215.40:34267 FIN_WAIT2 tcp 799 0 115.28.146.116:80 180.173.88.1:52779 ESTABLISHED tcp 0 0 115.28.146.116:80 117.136.25.132:25207 FIN_WAIT2 tcp 0 0 115.28.146.116:80 220.181.108.186:42540 TIME_WAIT tcp 0 0 10.144.142.201:59902 10.242.174.13:80 TIME_WAIT tcp 0 1820
115.28.146.116:80 218.22.140.90:39266 LAST_ACK tcp 0 0 115.28.146.116:80 66.249.65.64:56977 TIME_WAIT tcp 669 0 115.28.146.116:80 83.251.90.61:49664 ESTABLISHED tcp 0 0 10.144.142.201:52872 10.146.6.61:3306 TIME_WAIT tcp 233 0 115.28.146.116:80 54.202.88.0:43398 CLOSE_WAIT tcp 479 0 115.28.146.116:80 65.49.44.149:25739 ESTABLISHED tcp 378 0 115.28.146.116:80 148.251.124.173:39313 CLOSE_WAIT tcp 1 0 115.28.146.116:80 14.127.65.219:61697 CLOSE_WAIT tcp 1 0 115.28.146.116:80 49.4.158.2:52986 CLOSE_WAIT tcp 769 0 115.28.146.116:80 14.127.65.219:61537 ESTABLISHED tcp 0 0 10.144.142.201:52859 10.146.6.61:3306 TIME_WAIT tcp 0 0 10.144.142.201:55734 10.164.2.163:9200 TIME_WAIT tcp 563 0 115.28.146.116:80 202.55.20.10:22577 CLOSE_WAIT tcp 194 0 115.28.146.116:80 37.58.100.165:50908 CLOSE_WAIT tcp 791 0 115.28.146.116:80 116.192.2.185:45628 ESTABLISHED tcp 709 0 115.28.146.116:80 113.116.61.178:65209 ESTABLISHED tcp 706 0 115.28.146.116:80 183.227.44.237:54519 ESTABLISHED tcp 301 0 115.28.146.116:80 118.198.243.127:31180 ESTABLISHED tcp 0 0 10.144.142.201:55721 10.164.2.163:9200 TIME_WAIT tcp 0 0 10.144.142.201:55726 10.164.2.163:9200 TIME_WAIT tcp 0 0 10.144.142.201:55723 10.164.2.163:9200 TIME_WAIT tcp 681 0 115.28.146.116:80 83.251.90.61:49662 ESTABLISHED tcp 0 0 115.28.146.116:80 83.251.90.61:65274 TIME_WAIT tcp 1 0 115.28.146.116:80 113.36.238.28:59022 CLOSE_WAIT tcp 1 0 115.28.146.116:80 180.173.88.1:52781 CLOSE_WAIT tcp 1 0 115.28.146.116:80 113.36.238.28:59037 CLOSE_WAIT tcp 0 0 10.144.142.201:55728 10.164.2.163:9200 TIME_WAIT tcp 231 0 115.28.146.116:37596 110.75.102.62:80 CLOSE_WAIT tcp 1 0 115.28.146.116:80 14.127.65.219:61569 CLOSE_WAIT tcp 0 0 10.144.142.201:51310 10.146.6.61:3306 ESTABLISHED tcp 299 0 115.28.146.116:80 123.125.71.16:36281 ESTABLISHED tcp 0 0 115.28.146.116:48620 115.28.146.116:80 ESTABLISHED tcp 1 0 115.28.146.116:80 183.227.44.237:54520 CLOSE_WAIT tcp 1 0 115.28.146.116:80 113.36.238.28:59026 CLOSE_WAIT tcp 479 0 115.28.146.116:80 65.49.44.149:5490 ESTABLISHED tcp 665 0 115.28.146.116:80 83.251.90.61:49663 ESTABLISHED tcp 0 0 115.28.146.116:53744 173.194.127.147:80 ESTABLISHED tcp 1 0 115.28.146.116:80 113.36.238.28:59023 CLOSE_WAIT tcp 0 0 115.28.146.116:22 116.192.2.185:34205 ESTABLISHED tcp 333 0 115.28.146.116:80 149.174.113.111:54338 CLOSE_WAIT tcp 0 0 10.144.142.201:52861 10.146.6.61:3306 TIME_WAIT tcp 0 0 10.144.142.201:52863 10.146.6.61:3306 TIME_WAIT tcp 1 0 115.28.146.116:80 116.192.2.185:43272 CLOSE_WAIT tcp 767 0 115.28.146.116:80 49.4.158.2:52947 CLOSE_WAIT tcp 668 0 115.28.146.116:80 83.251.90.61:49665 ESTABLISHED tcp 642 0 115.28.146.116:80 222.78.185.50:55788 ESTABLISHED tcp 710 0 115.28.146.116:80 113.116.61.178:65264 ESTABLISHED tcp 284 0 115.28.146.116:80 157.55.39.243:65185 ESTABLISHED tcp 450 0 115.28.146.116:80 65.49.44.149:55496 ESTABLISHED tcp 1 0 115.28.146.116:80 116.192.2.185:36629 CLOSE_WAIT tcp 233 0 115.28.146.116:80 54.202.88.0:42424 CLOSE_WAIT tcp 187 0 115.28.146.116:80 115.28.146.116:48620 ESTABLISHED tcp 1 0 115.28.146.116:80 14.127.65.219:61601 CLOSE_WAIT tcp 776 0 115.28.146.116:80 202.118.253.102:64883 CLOSE_WAIT tcp 841 0 115.28.146.116:80 37.228.105.28:49472 ESTABLISHED tcp 787 0 115.28.146.116:80 112.65.226.198:52192 ESTABLISHED tcp 0 0 10.144.142.201:55717 10.164.2.163:9200 TIME_WAIT tcp 233 0 115.28.146.116:80 54.202.88.0:42855 CLOSE_WAIT tcp 379 0 115.28.146.116:80 101.226.166.219:2322 ESTABLISHED tcp 0 0 115.28.146.116:80 183.60.212.152:43063 CLOSE_WAIT tcp 1 0 115.28.146.116:80 180.173.88.1:52780 CLOSE_WAIT tcp 784 0 
115.28.146.116:80 101.95.29.26:63094 ESTABLISHED tcp 463 0 115.28.146.116:80 65.49.44.149:53876 ESTABLISHED tcp 1 0 115.28.146.116:80 116.192.2.185:37946 CLOSE_WAIT tcp 479 0 115.28.146.116:80 65.49.44.149:41157 ESTABLISHED tcp 1 0 115.28.146.116:80 113.36.238.28:59036 CLOSE_WAIT tcp 1 0 115.28.146.116:80 49.4.158.2:52984 CLOSE_WAIT tcp 1 0 115.28.146.116:80 116.192.2.185:38100 CLOSE_WAIT tcp 0 0 10.144.142.201:52865 10.146.6.61:3306 TIME_WAIT tcp 1 0 115.28.146.116:80 113.36.238.28:59027 CLOSE_WAIT tcp 0 0 115.28.146.116:36508 173.194.127.81:80 ESTABLISHED tcp 210 0 115.28.146.116:80 188.143.232.123:47775 ESTABLISHED tcp 1 0 115.28.146.116:80 113.36.238.28:59025 CLOSE_WAIT tcp 0 0 10.144.142.201:52857 10.146.6.61:3306 TIME_WAIT tcp 654 0 115.28.146.116:80 49.4.158.2:52985 ESTABLISHED tcp 0 0 115.28.146.116:58627 110.75.102.62:80 ESTABLISHED tcp 782 0 115.28.146.116:80 180.153.219.13:40293 ESTABLISHED tcp 792 0 115.28.146.116:80 116.192.2.185:48187 CLOSE_WAIT tcp6 0 0 :::22 :::* LISTEN udp 0 0 115.28.146.116:123 0.0.0.0:* udp 0 0 10.144.142.201:123 0.0.0.0:* udp 0 0 127.0.0.1:123 0.0.0.0:* udp 0 0 0.0.0.0:123 0.0.0.0:* udp6 0 0 :::123 :::* Active UNIX domain sockets (servers and established) Proto RefCnt Flags Type State I-Node Path unix 2 [ ACC ] STREAM LISTENING 8447 /var/run/mysqld/mysqld.sock unix 2 [ ACC ] SEQPACKET LISTENING 6678 /run/udev/control unix 2 [ ACC ] STREAM LISTENING 6482 @/com/ubuntu/upstart unix 2 [ ACC ] STREAM LISTENING 7543 /var/run/dbus/system_bus_socket unix 7 [ ] DGRAM 7551 /dev/log unix 2 [ ACC ] STREAM LISTENING 7650 /var/run/nscd/socket unix 2 [ ] DGRAM 7156424 unix 3 [ ] STREAM CONNECTED 7156137 /var/run/dbus/system_bus_socket unix 3 [ ] STREAM CONNECTED 7156136 unix 2 [ ] DGRAM 7156135 unix 2 [ ] DGRAM 7155834 unix 2 [ ] DGRAM 9734 unix 3 [ ] STREAM CONNECTED 9151 /var/run/dbus/system_bus_socket unix 3 [ ] STREAM CONNECTED 9150 unix 3 [ ] STREAM CONNECTED 9136 /var/run/dbus/system_bus_socket unix 3 [ ] STREAM CONNECTED 9135 unix 3 [ ] STREAM CONNECTED 9106 /var/run/dbus/system_bus_socket unix 3 [ ] STREAM CONNECTED 9105 unix 2 [ ] DGRAM 9073 unix 3 [ ] STREAM CONNECTED 7575 /var/run/dbus/system_bus_socket unix 3 [ ] STREAM CONNECTED 7574 unix 3 [ ] STREAM CONNECTED 7565 unix 3 [ ] STREAM CONNECTED 7564 unix 3 [ ] STREAM CONNECTED 7332 @/com/ubuntu/upstart unix 3 [ ] STREAM CONNECTED 7330 unix 3 [ ] DGRAM 6712 unix 3 [ ] DGRAM 6711 unix 3 [ ] STREAM CONNECTED 6662 @/com/ubuntu/upstart unix 3 [ ] STREAM CONNECTED 6635
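    As for automating the restart: a cron entry can do it, though it is a stopgap, not a fix. The pile-up of SYN_RECV and CLOSE_WAIT connections above suggests the real problem is connection exhaustion (a slow backend, low MaxClients, or a SYN flood) rather than Apache dying. A hedged sketch for root's crontab (crontab -e; path as on Ubuntu):

        # Gracefully restart Apache every day at 04:00 as a stopgap:
        0 4 * * * /usr/sbin/apachectl graceful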

    Read the article

  • Apache2 worker mpm too many processes

    - by delerious010
    I've got Apache installed with the worker MPM, and it seems to have too many processes active in spite of the configuration in place. The relevant settings:

        StartServers      2
        MinSpareThreads  10
        MaxSpareThreads  25
        ThreadsPerChild  25
        MaxClients      150

    Based on these settings, we should be seeing a maximum of 1 Apache control process (uid: root) and 6 Apache client processes (uid: www), since MaxClients / ThreadsPerChild = 150 / 25 = 6. However, I'm seeing a total of 1 Apache control process and 9 Apache client processes:

        init
         \- apache2 (root)
             \- apache2 (www)
             \- apache2 (www) - 1 thread
             \- apache2 (www) - 26 threads
             \- apache2 (www) - 26 threads
        init
         \- apache2 (www) - 2 threads
         \- apache2 (www)
         \- apache2 (www)
         \- apache2 (www)

    We do not make a habit of restarting Apache or the server, but we do reload it 2-3 times a day at times so as to add new VHOSTs. Would anyone be able to enlighten me as to what might be causing this?
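    A quick hedged way to compare reality against the arithmetic (nlwp is the per-process thread count; with the settings above you would expect at most 6 children of about 26 threads each, i.e. 25 workers plus a listener thread):

        # List Apache worker processes with their thread counts:
        ps -C apache2 -o pid,user,nlwp,cmd

    Note also that a graceful reload (as when adding vhosts) can leave old children around in a closing state until their in-flight requests finish, which would temporarily push the count above the computed maximum.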

    Read the article

  • The broken Promise of the Mobile Web

    - by Rick Strahl
    High end mobile devices have been with us now for almost 7 years and they have utterly transformed the way we access information. Mobile phones and smartphones that have access to the Internet and host smart applications are in the hands of a large percentage of the population of the world. In many places, even very remote ones, cell phones and even smart phones are a common sight. I'll never forget when I was in India in 2011, up in the Southern Indian mountains riding an elephant out of a tiny local village, with an elephant herder riding atop the elephant in front of us. He was dressed in traditional garb with the loin wrap and head cloth/turban, as were quite a few of the locals in this small, out of the way and not so touristy village. So we're slowly trundling along in the forest and he's lazily using his stick to guide the elephant and… 10 minutes in he pulls out his cell phone from his sash and starts texting. In the middle of texting a huge pig jumps out from the side of the trail and he takes a picture of it running across our path in the jungle! So yeah, mobile technology is very pervasive and it has reached into even very buried and unexpected parts of this world.

    Apps are still King

    Apps currently rule the roost when it comes to mobile devices and the applications that run on them. If there's something that you need on your mobile device, your first step usually is to look for an app, not use your browser. But native app development remains a pain in the butt, with the requirement of supporting 2 or 3 completely separate platforms. There are solutions that try to bridge that gap. Xamarin is on a tear at the moment, providing their cross-device toolkit to build applications using C#. While Xamarin tools are impressive – and also *very* expensive – they only address part of the development madness that is app development. There are still specific device integration issues, dealing with the different developer programs, security and certificate setups and all that other noise that surrounds app development. There's also PhoneGap/Cordova, which provides a hybrid solution that involves creating local HTML/CSS/JavaScript based applications and then packaging them to run in a specialized App container that can run on most mobile device platforms using a WebView interface. This allows for the use of HTML technology, but it also still requires all the setup, configuration of APIs, security keys, and the certification, submission and deployment process just like native applications – you actually lose many of the benefits that Web based apps bring. The big selling point of Cordova is that you get to use HTML and have the ability to build your UI once for all platforms and run across all of them – but the rest of the app process remains in place. Apps can be a big pain to create and manage, especially when we are talking about specialized or vertical business applications that aren't geared at the mainstream market and that don't fit the 'store' model. If you're building a small intra department application you don't want to deal with multiple device platforms and certification etc. for various public or corporate app stores. That model is simply not a good fit, both from the development and the deployment perspective.
    Even for commercial, big ticket apps, HTML as a UI platform offers many advantages over native: from write-once run-anywhere, to remote maintenance and a single point of management and failure, to having full control over the application as opposed to having the app store overlords censor you. In a lot of ways Web based HTML/CSS/JavaScript applications have so much potential for building better solutions based on existing Web technologies, for the very same reasons a lot of content years ago moved off the desktop to the Web. To me the Web as a mobile platform makes perfect sense, but the reality of today's Mobile Web unfortunately looks a little different…

    Where's the Love for the Mobile Web?

    Yet here we are in the middle of 2014, nearly 7 years after the first iPhone was released and brought the promise of rich interactive information at your fingertips, and we still don't really have a solid mobile Web platform. I know what you're thinking: "But we have lots of HTML/JavaScript/CSS features that allow us to build nice mobile interfaces". I agree to a point – it's actually quite possible to build nice looking, rich and capable Web UI today. We have media queries to deal with varied display sizes, CSS transforms for smooth animations and transitions, tons of CSS improvements in CSS 3 that facilitate rich layout, a host of APIs geared towards mobile device features, and lately even a number of JavaScript framework choices that facilitate development of multi-screen apps in a consistent manner. Personally I've been working a lot with AngularJs and heavily modified Bootstrap themes to build mobile first UIs, and that's been working very well to provide highly usable and attractive UI for typical mobile business applications. From the pure UI perspective things actually look very good.

    Not just about the UI

    But it's not just about the UI - it's also about integration with the mobile device. When it comes to putting all those pieces together into what amounts to a consolidated platform to build mobile Web applications, I think we still have a ways to go… there are a lot of missing pieces to make it all work together and integrate with the device more smoothly, and more importantly to make it work uniformly across the majority of devices. I think there are a number of reasons for this.

    Slow Standards Adoption

    HTML standards implementation and ratification has been dreadfully slow, and browser vendors all seem to pick and choose different pieces of the technology they implement. The end result is that we have a capable UI platform that's missing some of the infrastructure pieces to make it whole on mobile devices. There's lots of potential, but it is lacking that final 10% needed to build truly compelling mobile applications that can compete favorably with native applications. Some of it is the fragmentation of browsers and the slow evolution of the mobile specific HTML APIs. A host of mobile standards exist, but many of them are in the early review stage and have been stuck there for long periods of time, moving at a glacial pace. Browser vendors seem even slower to implement them, and for good reason – non-ratified standards mean that implementations may change, and vendor implementations tend to be experimental and likely have to be changed later. Neither vendors nor developers are keen on changing standards. This is the typical chicken and egg scenario, but without some forward momentum from some party we end up stuck in the mud.
    It seems that either the standards bodies or the vendors need to carry the torch forward, and that doesn't seem to be happening quickly enough.

    Mobile Device Integration just isn't good enough

    Current standards are not far reaching enough to address a number of the use case scenarios necessary for many mobile applications. While not every application needs to have access to all mobile device features, almost every mobile application could benefit from some integration with other parts of the mobile device platform. Integration with GPS, phone, media, messaging, notifications, linking and the contacts system are benefits that are unique to mobile applications and could be widely used, but they are mostly (with the exception of GPS) inaccessible for Web based applications today. Unfortunately, trying to do most of this today with only a mobile Web browser is a losing battle. Aside from PhoneGap/Cordova's app centric model with its own custom API accessing mobile device features, and the token exception of the GeoLocation API, most device integration features are not widely supported by the current crop of mobile browsers. For example, there's no usable messaging API that allows access to SMS or contacts from HTML. Even obvious components like the Media Capture API are only partially implemented by mobile devices. There are alternatives and workarounds for some of these interfaces using browser specific code, but that's mighty ugly and something that I thought we were trying to leave behind with newer browser standards. But it's not quite working out that way. It's utterly perplexing to me that mobile standards like Media Capture and Streams, Media Gallery Access, Responsive Images, the Messaging API and the Contacts Manager API have only minimal or no traction at all today. Keep in mind we've had mobile browsers for nearly 7 years now, and yet we still have to think about how to get access to an image from the image gallery or the camera on some devices? Heck, Windows Phone IE Mobile just gained the ability to upload images recently in the Windows 8.1 Update – that's a feature that HTML has had for 20 years! These are simple concepts and common problems that should have been solved a long time ago. It's extremely frustrating to build 90% of a mobile Web app with relative ease and then hit a brick wall for the remaining 10%, which often can be a show stopper. That remaining 10% has to do with platform integration, browser differences and working around the limitations that browsers and 'pinned' applications impose on HTML applications. The maddening part is that these limitations seem arbitrary, as they could easily work on all mobile platforms. For example, SMS has a URL moniker interface that sort of works on Android, works badly with iOS (only if the address is already in the contact list) and not at all on Windows Phone. There's no reason this shouldn't work universally using the same interface – after all, all phones have supported SMS since before the year 2000!

    But, it doesn't have to be this way

    Change can happen very quickly. Take the GeoLocation API for example. Geolocation took off at the very beginning of the mobile device era, and today it works well, provides the necessary security (a big concern for many mobile APIs), and is supported by just about all major mobile and even desktop browsers. It handles security concerns via prompts to avoid unwanted access, a model that would work for most other device APIs in a similar fashion.
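    In code, that prompt-based model is about as simple as it gets; a minimal sketch of the standard API (the browser handles the permission prompt itself):

        // Ask for the current position; the browser shows the permission
        // prompt and remembers the user's choice:
        navigator.geolocation.getCurrentPosition(
            function (pos) {
                console.log("lat", pos.coords.latitude,
                            "lng", pos.coords.longitude);
            },
            function (err) {
                console.log("denied or unavailable: " + err.message);
            }
        );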
    One time approval, and occasional re-approval if code changes or caches expire. Simple and only slightly intrusive. It all works well, even though GeoLocation actually has some physical limitations, such as representing the current location when no GPS device is present. Yet this is a solved problem, where other APIs that are conceptually much simpler to implement have failed to gain any traction at all. Technically none of these APIs should be a problem to implement, but it appears that the momentum is just not there.

    Inadequate Web Application Linking and Activation

    Another important piece of the puzzle missing is the integration of HTML based Web applications. Today HTML based applications are not first class citizens on mobile operating systems. When talking about HTML based content there's a big difference between content and applications. Content is great for search engine discovery and plain browser usage; it is usually accessed intermittently, and permanent linking is not so critical for it. But applications have different needs. Applications need to start up quickly and must be easily switchable to support a multi-tasking user workflow. Therefore, it's pretty crucial that mobile Web apps are integrated into the underlying mobile OS and work with the standard task management features. Unfortunately this integration is not as smooth as it should be.

    It starts with actually trying to find mobile Web applications and 'installing' them onto a phone in an easily accessible manner in a prominent position. The experience of discovering a Mobile Web 'App' and making it sticky is by no means as easy or satisfying. Today the way you'd go about this is:

    1. Open the browser
    2. Search for a Web site in the browser with your search engine of choice
    3. Hope that you find the right site
    4. Hope that you actually find a site that works for your mobile device
    5. Click on the link and run the app in a fully chrome'd browser instance (read: tiny surface area)
    6. Pin the app to the home screen (with all the limitations outlined above)
    7. Hope you pointed at the right URL when you pinned

    Even for you and me as developers, there are a few steps in there that are painful and annoying; now think about the average user. First figuring out how to search for a specific site or URL? And then pinning the app, hopefully from the right location? You've probably lost more than half of your audience at that point. This experience sucks.

    For developers too this process is painful, since app developers can't control the shortcut creation directly. This problem often gets solved by crazy coding schemes, with annoying pop-ups that try to get people to create shortcuts via fancy animations, which are both annoying and add overhead to each and every application that implements this sort of thing differently. And that's not the end of it - getting the link onto the home screen with an application icon varies quite a bit between browsers. Apple's non-standard meta tags are prominent and they work with iOS and Android (only more recent versions), but not on Windows Phone. Windows Phone instead requires you to capture an actual screen, or rather a partial screen, for a shortcut in the tile manager. Who had that brilliant idea, I wonder? Surprisingly, Chrome on recent Android versions seems to actually get it right – icons use PNGs, pinning is easy, and pinned applications properly behave like standalone apps and retain the browser's active page state and content.
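    For reference, the Apple-originated tags in question look roughly like this (a hedged sketch; the sizes, paths and values are illustrative):

        <!-- De-facto home screen tags, honored by iOS and, partially,
             by Android browsers: -->
        <link rel="apple-touch-icon" sizes="152x152" href="/img/touch-icon.png">
        <meta name="apple-mobile-web-app-capable" content="yes">
        <meta name="apple-mobile-web-app-status-bar-style" content="black">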
    Each of the platforms has a different way to specify icons (WP doesn't allow you to use an icon image at all), and the most widely used interface today is that bunch of Apple specific meta tags that other browsers choose to support. The question is: why is there no standard implementation for installing shortcuts across mobile platforms using an official format rather than a proprietary one?

    Then there's iOS and the crazy way it treats home screen linked URLs, using a crazy hybrid format that is neither as capable as a Web app running in Safari nor a WebView hosted application. Moving off the Web 'app' link when switching to another app actually causes the browser to 'blank out' the Web application and its preview in the Task View (see screenshot on the right). Then, when the 'app' is reactivated, it ends up completely restarting the browser with the original link. This is crazy behavior that you can't easily work around. In some situations you might be able to store the application state and restore it using LocalStorage, but for many scenarios that involve complex data sources (like, say, Google Maps) that's not a possibility. The only reason for this screwed up behavior I can think of is that it is deliberate, to make Web apps a pain in the butt to use and force users through the App Store/PhoneGap/Cordova route. App linking and management is a very basic problem – something that we essentially have solved in every desktop browser – yet on mobile devices, where it arguably matters a lot more to have easy access to web content, we have to jump through hoops to have even a remotely decent linking/activation experience across browsers.

    Where's the Money?

    It's not surprising that device home screen integration and Mobile Web support in general are in such dismal shape – the mobile OS vendors benefit financially from App store sales and have little to gain from Web based applications that bypass the App store and the cash cow it presents. On top of that, platform specific vendor lock-in of both end users and developers who have invested in hardware, apps and consumables is something that mobile platform vendors actually aspire to. Web based interfaces that are cross-platform are the antithesis of that, so again it's no surprise that the mobile Web is on a struggling path. But – that may be changing. More and more we're seeing operations shift to services that are subscription based or otherwise collect money for usage, and that may drive more progress in the Web direction in the end. Nothing like the almighty dollar to drive innovation forward.

    Do we need a Mobile Web App Store?

    As much as I dislike moderated experiences in today's massive App Stores, they do at least provide one single place to look for apps for your device. I think we could really use some sort of registry that could provide something akin to an app store for mobile Web apps, to make it easier to actually find mobile applications. This could take the form of a specialized search engine, or maybe a more formal store/registry like structure. Something like apt-get/chocolatey for Web apps. It could be curated and provide at least some feedback and reviews that might help with the integrity of applications. Coupled to that could be a native application on each platform that would allow searching and browsing of the registry, and then also handle installation in the form of providing the home screen linking, plus maybe an initial security configuration that determines which features the app is allowed to access.
I’m not holding my breath. In order for this sort of thing to take off and gain widespread appeal, a lot of coordination would be required. And in order to get enough traction it would have to come from a well known entity – a mobile Web app store from a no name source is unlikely to gain high enough usage numbers to make a difference. In a way this would eliminate some of the freedom of the Web, but of course this would also be an optional search path in addition to the standard open Web search mechanisms to find and access content today. Security Security is a big deal, and one of the perceived reasons why so many IT professionals appear to be willing to go back to the walled garden of deployed apps is that Apps are perceived as safe due to the official review and curation of the App stores. Curated stores are supposed to protect you from malware, illegal and misleading content. It doesn’t always work out that way and all the major vendors have had issues with security and the review process at some time or another. Security is critical, but I also think that Web applications in general pose less of a security threat than native applications, by nature of the sandboxed browser and JavaScript environments. Web applications run externally completely and in the HTML and JavaScript sandboxes, with only a very few controlled APIs allowing access to device specific features. And as discussed earlier – security for any device interaction can be granted the same for mobile applications through a Web browser, as they can for native applications either via explicit policies loaded from the Web, or via prompting as GeoLocation does today. Security is important, but it’s certainly solvable problem for Web applications even those that need to access device hardware. Security shouldn’t be a reason for Web apps to be an equal player in mobile applications. Apps are winning, but haven’t we been here before? So now we’re finding ourselves back in an era of installed app, rather than Web based and managed apps. Only it’s even worse today than with Desktop applications, in that the apps are going through a gatekeeper that charges a toll and censors what you can and can’t do in your apps. Frankly it’s a mystery to me why anybody would buy into this model and why it’s lasted this long when we’ve already been through this process. It’s crazy… It’s really a shame that this regression is happening. We have the technology to make mobile Web apps much more prominent, but yet we’re basically held back by what seems little more than bureaucracy, partisan bickering and self interest of the major parties involved. Back in the day of the desktop it was Internet Explorer’s 98+%  market shareholding back the Web from improvements for many years – now it’s the combined mobile OS market in control of the mobile browsers. If mobile Web apps were allowed to be treated the same as native apps with simple ways to install and run them consistently and persistently, that would go a long way to making mobile applications much more usable and seriously viable alternatives to native apps. But as it is mobile apps have a severe disadvantage in placement and operation. There are a few bright spots in all of this. Mozilla’s FireFoxOs is embracing the Web for it’s mobile OS by essentially building every app out of HTML and JavaScript based content. 
The only reason for this screwed-up behavior I can think of is that it is deliberate: make Web apps a pain to use and force users through the App Store/PhoneGap/Cordova route. App linking and management is a very basic problem – something we essentially solved long ago in every desktop browser – yet on mobile devices, where easy access to Web content arguably matters a lot more, we have to jump through hoops to get even a remotely decent linking/activation experience across browsers.

Where’s the Money?
It’s not surprising that device home screen integration and mobile Web support in general are in such dismal shape – the mobile OS vendors benefit financially from App store sales and have little to gain from Web based applications that bypass the App store and the cash cow it presents. On top of that, platform-specific vendor lock-in of both end users and developers who have invested in hardware, apps and consumables is something that mobile platform vendors actively aspire to. Cross-platform Web based interfaces are the antithesis of that, so again it’s no surprise that the mobile Web is struggling. But that may be changing. More and more, operations are shifting to services that are subscription based or otherwise collect money for usage, and that may drive more progress in the Web’s direction in the end. Nothing like the almighty dollar to drive innovation forward.

Do we need a Mobile Web App Store?
As much as I dislike the moderated experiences of today’s massive App Stores, they do at least provide a single place to look for apps for your device. I think we could really use some sort of registry that provides something akin to an app store for mobile Web apps, to make it easier to actually find mobile applications. This could take the form of a specialized search engine, or maybe a more formal store/registry-like structure – something like apt-get/Chocolatey for Web apps. It could be curated and provide at least some feedback and reviews that might help with the integrity of applications. Coupled to that could be a native application on each platform that allows searching and browsing the registry, handles installation by creating the home screen link, and perhaps applies an initial security configuration that determines which device features the app is allowed to access.

I’m not holding my breath. In order for this sort of thing to take off and gain widespread appeal, a lot of coordination would be required. And in order to get enough traction it would have to come from a well-known entity – a mobile Web app store from a no-name source is unlikely to gain high enough usage numbers to make a difference. In a way this would eliminate some of the freedom of the Web, but of course it would also be an optional search path in addition to the standard open Web search mechanisms used to find and access content today.

Security
Security is a big deal, and one of the perceived reasons why so many IT professionals appear willing to go back to the walled garden of deployed apps is that apps are seen as safe thanks to the official review and curation of the App stores. Curated stores are supposed to protect you from malware and from illegal and misleading content. It doesn’t always work out that way, and all the major vendors have had issues with security and the review process at some time or another. Security is critical, but I also think that Web applications in general pose less of a security threat than native applications, by nature of the sandboxed browser and JavaScript environments. Web applications run entirely inside the HTML and JavaScript sandboxes, with only a very few controlled APIs allowing access to device-specific features. And as discussed earlier, permissions for device interaction can be granted to mobile Web applications the same way they are for native applications – either via explicit policies loaded from the Web, or via prompting, as GeoLocation does today. Security is important, but it’s certainly a solvable problem for Web applications, even those that need to access device hardware. Security shouldn’t be the obstacle that keeps Web apps from being an equal player in mobile applications.

Apps are winning, but haven’t we been here before?
So now we find ourselves back in an era of installed apps rather than Web based, centrally managed apps. Only it’s even worse today than it was with desktop applications, in that the apps go through a gatekeeper that charges a toll and censors what you can and can’t do in your apps. Frankly, it’s a mystery to me why anybody would buy into this model, and why it has lasted this long when we’ve already been through this process once. It’s crazy… It’s really a shame that this regression is happening. We have the technology to make mobile Web apps much more prominent, yet we’re basically held back by what seems little more than bureaucracy, partisan bickering and the self-interest of the major parties involved. Back in the day of the desktop it was Internet Explorer’s 98+% market share holding the Web back from improvements for many years – now it’s the combined mobile OS market in control of the mobile browsers. If mobile Web apps were treated the same as native apps, with simple ways to install and run them consistently and persistently, that would go a long way toward making them much more usable and a seriously viable alternative to native apps. But as it is, mobile Web apps have a severe disadvantage in placement and operation.

There are a few bright spots in all of this. Mozilla’s FirefoxOS is embracing the Web for its mobile OS by essentially building every app out of HTML and JavaScript based content. It supports both packaged and certified app modes (which can be put into the app store), and Open Web apps that are loaded and run completely off the Web and can also cache locally for offline operation using a manifest (a sketch of such a manifest appears at the end of this article). Open Web apps are treated as first-class citizens in FirefoxOS and run using the same mechanism as installed apps. Unfortunately FirefoxOS is off to a slow start, with minimal device support and a specific focus on the low-end market. We can hope that this approach catches on with other vendors, but that’s also an uphill battle given the conflict of interest with platform lock-in that it represents.

Recent versions of Android also seem to handle mobile application integration onto the desktop and activation reasonably well out of the box. Although Android still uses the Apple meta tags to find icons and behavior settings, everything at least works as you would expect – icons land on the desktop on pinning, activation is WebView based and full screen, and application persistence is reliable because the browser/app is treated like a real application. Hopefully iOS will at some point provide this same level of rudimentary Web app support.

What’s also interesting to me is that Microsoft hasn’t picked up on the obvious need for a solid Web app platform. Being a distant third in the mobile OS war, Microsoft certainly has nothing to lose and everything to gain by using fresh ideas and expanding into areas that the other major vendors are neglecting. But instead Microsoft is trying to beat the market leaders at their own game, fighting on its adversaries’ terms instead of taking a new tack. Providing a kick-ass mobile Web platform that takes the lead on some of the proposed mobile APIs would be something positive Microsoft could do to improve its miserable position in the mobile device market.

Where are we at with Mobile Web?
It sure sounds like I’m really down on the mobile Web, right? I’ve built a number of mobile apps in the last year, and while the overall result and response have been very positive in terms of what we were able to accomplish in the UI, getting the final 10% of device integration dialed in was an absolute nightmare on every single one of them. Big compromises had to be made, and some features were left out or had to be modified for some devices. In two cases we opted to go the Cordova route in order to get the integration we needed, along with the extra pain involved in that process. Unless you’re not integrating with device features and you don’t care deeply about smooth integration with the mobile desktop, mobile Web development is fraught with frustration. So, yes, I’m frustrated! But it’s not for lack of wanting the mobile Web to succeed. I am still a firm believer that we will eventually arrive at a much more functional mobile Web platform that allows access to the most common device features in a sensible way. It wouldn’t be difficult for device platform vendors to make Web based applications first-class citizens on mobile devices. But unfortunately it looks like it will still be some time before this happens.

So, what’s your experience building mobile Web apps? Are you finding similar issues? Just giving up on raw Web applications and building PhoneGap apps instead? Completely skipping the Web and going native? Leave a comment for discussion.
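As promised above, here is what an Open Web App manifest might look like – a hypothetical sketch from memory of Mozilla’s manifest.webapp format, so treat the field names and values as illustrative rather than authoritative:

    {
        "name": "My Mobile Web App",
        "description": "A plain Web app that installs on FirefoxOS",
        "launch_path": "/index.html",
        "icons": {
            "128": "/img/icon-128.png"
        },
        "developer": {
            "name": "Example Developer",
            "url": "http://example.com"
        }
    }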
Resources:
- Rick Strahl on DotNet Rocks talking about Mobile Web

© Rick Strahl, West Wind Technologies, 2005-2014. Posted in HTML5, Mobile

    Read the article

  • HTML5 and CSS3 Editing in Windows Live Writer

    - by Rick Strahl
    Windows Live Writer is a wonderful tool for editing blog posts and getting them posted to your blog. What makes it nice is that it has a small set of useful features, plus a simple plug-in model that has spawned many useful add-ins. A small tool with a reasonably decent plug-in model to extend equals a great solution to a simple problem. If you’re running Windows, have a blog and aren’t using Live Writer, you’re probably doing it wrong…

    One of Live Writer’s nice features is that it can download your blog’s CSS for its preview and edit displays. It lets you edit your content in the context of that CSS using the WYSIWYG editor, so your content looks very close to what you’ll see on your blog while you’re editing your post. Unfortunately, Live Writer renders the HTML content in the Web Browser control’s default IE 7 rendering mode. Yeah, you read that right: IE 7 is the default for the Web Browser control, and most applications that use it are stuck in this mode unless the application explicitly overrides the default. The Web Browser control does not use the version of Internet Explorer installed on the system (IE 10 on my Win8 machine) but uses IE 7 mode for ‘compatibility’ with old applications. If you are importing your blog’s CSS, that’s a problem when you’re using rich HTML 5 and CSS 3 formatting.

    Hack the Registry to get Live Writer to render using IE 9 or 10
    To get Live Writer (or any other application that uses the Web Browser control, for that matter) to render with a newer engine, you can apply a registry hack that overrides the Web Browser control engine for a specific application. I wrote about this in detail in a previous blog post a couple of years back. Here’s how you can set up Windows Live Writer to render your CSS 3 by making a change in your registry – in my case configuring Live Writer, which is a 32 bit application, to use IE 10 rendering on a 64 bit machine. The keys to set are as follows:

    32 bit application on a 64 bit machine:
    Key: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION
    Value name: WindowsLiveWriter.exe
    Value: 9000 or 10000 (IE 9 or 10 respectively, DWORD)

    On a 32 bit only machine:
    Key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION
    Value name: WindowsLiveWriter.exe
    Value: 9000 or 10000 (IE 9 or 10 respectively, DWORD)

    Use decimal values of 9000, 10000 or 11000 to specify specific versions of Internet Explorer.
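    If you’d rather not edit the registry by hand, the same setting can be captured in a .reg file that you import with a double-click – a minimal sketch for the 64 bit case, using decimal 10000 (hex 2710) for IE 10; adjust the value and path for your setup:

        Windows Registry Editor Version 5.00

        ; Force the Web Browser control to use IE 10 rendering for Live Writer
        [HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION]
        "WindowsLiveWriter.exe"=dword:00002710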
    This is a minor tweak, but it’s nice to actually see my blog posts with the proper CSS formatting intact: the rounded borders and shadows on the code blocks now render, as do the overflow-x scrollbars. In this particular case I can see what the code blocks actually look like at a specific resolution – much better than the old plain view, which just chopped things off at the edge of the window frame. A few other elements now show properly in the editor as well, including block quotes and note boxes that I occasionally use. It’s minor stuff, but it makes the editing experience better and closer to the final result, so there are fewer republish operations than I previously had. Sweet!

    Note that this approach of putting an IE version override into the registry works with most applications that use the Web Browser control. If you are using the Web Browser control in your own applications, it’s a good idea to switch the browser to a more recent version so you can take advantage of HTML 5 and CSS 3 in your browser-displayed content – set the flag automatically during setup, or in the application’s startup routine if no dedicated setup tool is used. At the very least you might set it to 9000 (IE 9), which supports most of the basic CSS 3 features and is a decent baseline that works for most Windows 7 and 8 machines. On machines running a pre-IE9 browser, the control falls back to IE 7 rendering and looks bad, but at least more recent browsers will see an improved experience.

    I’m surprised that there aren’t more vendors and third party apps using this feature – there are only a very few entries in the FEATURE_BROWSER_EMULATION key group on my machine, so any other apps that use the Web Browser control are stuck with IE 7. Go figure. Certainly Windows Live Writer should be writing this key into the registry automatically as part of installation to support this functionality out of the box, but alas, since it does not, this registry hack lets you get your way anyway…

    Resources:
    - .reg Files to register Live Writer Browser Emulation (set for IE9)
    - Specifying Internet Explorer Version for Applications
    - SnagIt LiveWriter Plug-in
    - Download Windows Live Writer
    - Download Windows Live Writer with Chocolatey

    © Rick Strahl, West Wind Technologies, 2005-2013. Posted in Live Writer, Windows

    Read the article

  • 301 Redirect Rule for F5 BigIP Load Balancer

    - by Kshah
    I have an F5 BigIP load balancer in front of my website. Currently I have a 302 redirect in place; however, I want to use a 301 instead but don't know how. For example: typing abc.com currently 302-redirects to abc.com/index, and typing www.abc.com 302-redirects to www.abc.com/index. I want rules that produce:

    abc.com        -> 301 redirect -> www.abc.com/index
    abc.com/index  -> 301 redirect -> www.abc.com/index
    www.abc.com    -> 301 redirect -> www.abc.com/index

    Below is the code that my tech person is trying:

    Redirect to WWW:
    when HTTP_REQUEST {
        if { [HTTP::host] equals "abc.com" or [HTTP::host] equals "abc.co.in" or [HTTP::host] equals "www.abc.co.in" } {
            if { !([HTTP::path] equals "/") } {
                HTTP::respond 301 Location "http://www.abc.com[HTTP::path]"
            }
        }
    }

    Redirect POST:
    when HTTP_REQUEST {
        if { [HTTP::method] equals "POST" } {
            persist source_addr
            pool shop_shop_vr4_http
        }
    }

    Redirect-VR4 HOMEPAGE:
    when HTTP_REQUEST {
        if { [HTTP::path] equals "/" or [HTTP::path] starts_with "/target/" or [HTTP::path] starts_with "/logs/" or [HTTP::path] starts_with "/config/" } {
            HTTP::redirect "http://[HTTP::host]/index.jsp.vr"
        }
    }
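    For what it's worth, a single iRule along these lines should produce the three requested 301s – a rough, untested sketch based only on the hostnames in the question, so verify it on a staging virtual server first:

    when HTTP_REQUEST {
        if { [HTTP::host] equals "abc.com" } {
            # Bare domain: permanent redirect to the canonical www URL
            HTTP::respond 301 Location "http://www.abc.com/index"
        } elseif { [HTTP::host] equals "www.abc.com" and [HTTP::path] equals "/" } {
            # www root only - leaving /index itself alone avoids a redirect loop
            HTTP::respond 301 Location "http://www.abc.com/index"
        }
    }

    The key difference from the existing rules is responding with an explicit 301 status via HTTP::respond rather than using HTTP::redirect, which sends a 302.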

    Read the article

  • Restrict SSL access for some paths on an apache2 server

    - by valmar
    I want to allow access to www.mydomain.com/login through SSL only. E.g. whenever someone accesses http://www.mydomain.com/login, he should be redirected to https://www.mydomain.com/login so it's impossible to reach that page without SSL. I accomplished this by adding the following lines to the virtual host for www.mydomain.com on port 80 in /etc/apache2/sites-available/default:

    RewriteEngine on
    RewriteCond %{SERVER_PORT} ^80$
    RewriteRule ^/login(.*)$ https://%{SERVER_NAME}/login$1 [L,R]
    RewriteLog "/var/log/apache2/rewrite.log"

    Now I want to restrict the use of SSL everywhere else on www.mydomain.com. That means whenever someone accesses https://www.mydomain.com, he should be redirected to http://www.mydomain.com (for performance reasons). I tried this by adding the following lines to the virtual host for www.mydomain.com on port 443 in /etc/apache2/sites-available/default-ssl:

    RewriteEngine on
    RewriteCond %{SERVER_PORT} ^443$
    RewriteRule ^/(.*)$ http://%{SERVER_NAME}/$1 [L,R]
    RewriteLog "/var/log/apache2/rewrite.log"

    But when I now try to access www.mydomain.com/login, I get an error message that the server has caused too many redirects. That does make sense: obviously the two RewriteRules are playing ping-pong against each other. How can I work around this?
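    One plausible fix, sketched against the config in the question: exclude the /login path from the downgrade rule in the port 443 virtual host, so only one of the two rules can ever match a given URL:

    RewriteEngine on
    RewriteCond %{SERVER_PORT} ^443$
    # Leave the SSL-only login area alone so the two rules stop ping-ponging
    RewriteCond %{REQUEST_URI} !^/login
    RewriteRule ^/(.*)$ http://%{SERVER_NAME}/$1 [L,R]

    Multiple RewriteCond lines are ANDed by default, so the rule now only fires for HTTPS requests outside /login, and https://www.mydomain.com/login is no longer bounced back to HTTP.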

    Read the article

< Previous Page | 122 123 124 125 126 127 128 129 130 131 132 133  | Next Page >