Search Results

Search found 19109 results on 765 pages for 'canonical url'.

  • PHP W3 Validator API, Is this good? [closed]

    - by Josh Purcell
    I was trying to find a way to check whether my site's code is valid without continuously going over to the W3 Validator, so I decided to make an "API", though it really isn't one. I just wanted to know if anybody can find a better solution than the one I have made. This is what I currently use, called with ?uri=http://www.mydomain.com :

        <?php
        if (!$_GET['uri']) {
            echo "No URI!";
        } else {
            $CheckURI = "http://validator.w3.org/check?uri=" . $_GET['uri'];
            $URL = file_get_contents($CheckURI);
            $Start = strpos($URL, "<title>") + 7;
            $End = strpos($URL, "</title>");
            $Title = substr($URL, $Start, $End - $Start);
            if (preg_match('[Invalid]', $Title)) {
                // Code is INVALID
                echo "<a href='$CheckURI' title='This is not good!' target='_BLANK'>INVALID Source</a>";
            } elseif (preg_match('[Valid]', $Title)) {
                // Code is VALID
                echo "<a href='$CheckURI' title='Check It Yourself!' target='_BLANK'>Valid Source</a>";
            } else {
                // It went WRONG
                echo "";
            }
        }
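
    A possible alternative to scraping the <title>: the W3C validator also reports its verdict in HTTP response headers (X-W3C-Validator-Status, with values such as Valid, Invalid, or Abort). A minimal sketch, assuming those headers are still emitted by validator.w3.org:

        <?php
        // Read the validator's verdict from its response headers
        $uri = $_GET['uri'];
        $headers = get_headers('http://validator.w3.org/check?uri=' . urlencode($uri), 1);
        $status = isset($headers['X-W3C-Validator-Status'])
            ? $headers['X-W3C-Validator-Status']
            : 'Unknown';
        echo $status; // "Valid", "Invalid" or "Abort"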

  • Limiting calls to WCF services from BizTalk

    - by IntegrationOverload
    ** WORK IN PROGRESS ** This is just a placeholder for the full article that is in progress.

    The problem: my BTS solution was receiving thousands of messages at once. After processing by BTS, I needed to send them on via one of several WCF services, depending on the message content. The problem is that, due to the asynchronous nature of BizTalk, the WCF services were getting hammered and could not cope with the load. Note: it is possible to limit the SOAP calls in the BtsNtSvc.exe.config file, but that does not have the desired results for Net-TCP WCF services.

    The solution: I created a new message type for the messages in question and posted them to the BTS message box. The schema included the URL they were being sent to as a promoted property. I then subscribed to the message type from a new orchestration (one that does just the WCF send), using the URL as a correlation ID. This created a singleton orchestration that was instantiated when the first message hit the message box. It then waits for further messages with the same correlation ID and type and processes them one at a time, using a loop shape with a timer (a pretty standard pattern for processing related messages). Image to go here. This limits the number of calls to each individual WCF service to 1, which is a good start, but the services can handle more than that and I didn't want to create a bottleneck. So I then constructed the correlation ID from the URL concatenated with a random number between 1 and 10. That makes 10 possible correlation IDs per URL, and so 10 instances of the singleton orchestration per WCF service. Just what I needed, and the upper bound of the random number is a configuration value in SSO, so I can change the maximum number of connections without touching the code.

  • Add a trailing slash mod_rewrite

    - by Conner Stephen McCabe
    Just wondering how I add a trailing slash at the end of my URLs using mod_rewrite? This is my .htaccess file currently:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^([^/]*)$ index.php?pageName=$1

    My URLs currently show like so: www.**.com/pageName. I want them to show like so: www.**.com/pageName/. The URL is holding a GET request internally, but I want it to look like a genuine directory.
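
    A common pattern for this (a sketch, assuming Apache with mod_rewrite; not tested against this exact setup) is to 301-redirect the slashless form first and then map the slashed form internally:

        RewriteEngine On

        # Externally redirect /pageName to /pageName/ (existing files are left alone)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^([^/]+)$ /$1/ [R=301,L]

        # Internally map /pageName/ to the real script
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^([^/]+)/$ index.php?pageName=$1 [L]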

  • Using cookies with lynx

    - by XXL
    lynx -cfg=cfg.file $URL

    This works with the following contents of the .cfg file:

        SET_COOKIES:TRUE
        ACCEPT_ALL_COOKIES:TRUE
        PERSISTENT_COOKIES:TRUE
        COOKIE_FILE:cookie.file

    However, this does not:

        lynx -cookies=1 -accept_all_cookies=1 -cookie_file=cookie.file $URL

    If it's going to be of any help, here's the trace:

        parse_arg(arg_name=-cookies=1, mask=1, count=2)
        parse_arg lookup(cookies=1)
        ...skip (mask 1/4)
        parse_arg(arg_name=-accept_all_cookies=1, mask=1, count=3)
        parse_arg lookup(accept_all_cookies=1)
        ...skip (mask 1/4)
        parse_arg(arg_name=-cookie_file=cookie.file, mask=1, count=4)
        parse_arg lookup(cookie_file=cookie.file)
        ...skip (mask 1/4)
        parse_arg(arg_name=$URL, mask=1, count=5)
        parse_arg startfile:$URL

    The obvious question: why? The actual difference, from what I see, is the inability to trigger PERSISTENT_COOKIES:TRUE via command-line options in lynx. Or maybe I have overlooked/misunderstood something?

  • Unable to run 'sudo apt-get dist-upgrade' due to authentication issues

    - by TobyG
    I've just attempted to run sudo apt-get dist-upgrade on my Ubuntu box, but am getting the following error:

        WARNING: The following packages cannot be authenticated!
        librdbmspp php5-ioncube-loader sw-libboost-date-time1.49.0
        sw-libboost-system1.49.0 sw-libboost-filesystem1.49.0
        sw-libboost-program-options1.49.0 sw-libboost-regex1.49.0
        sw-libboost-serialization1.49.0 sw-libpoco

    I've tried running:

        $ sudo apt-key update
        $ sudo apt-get update

    ...as found in this question, but I'm still getting the error. Can anyone help, please?

    Update on 5th June. Repos currently in /etc/apt/sources.list:

        deb http://gb.archive.ubuntu.com/ubuntu/ precise main restricted universe multiverse
        deb http://gb.archive.ubuntu.com/ubuntu/ precise-updates main restricted universe multiverse
        deb http://gb.archive.ubuntu.com/ubuntu precise-security main restricted universe multiverse
        deb http://archive.canonical.com/ubuntu precise partner
        deb-src http://archive.canonical.com/ubuntu precise partner
        deb http://security.ubuntu.com/ubuntu precise-security main restricted universe multiverse
        deb-src http://security.ubuntu.com/ubuntu precise-security main restricted universe multiverse
        deb http://autoinstall.plesk.com/ubuntu/PSA_11.5.30 precise all
        deb http://autoinstall.plesk.com/debian/SITEBUILDER_11.5.10 all all
        deb http://autoinstall.plesk.com/debian/BILLING_11.5.30 all all
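
    Worth noting when reading the list above: the unauthenticated packages (librdbmspp, sw-libboost-*, sw-libpoco) appear to come from the Plesk autoinstall repositories, which sign with their own vendor key rather than Ubuntu's. A generic sketch for importing a vendor key follows; the key URL is a placeholder, not a verified Plesk address:

        # Hypothetical: import the repository vendor's signing key, then refresh
        wget -qO - <vendor-key-url> | sudo apt-key add -
        sudo apt-get update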

  • How to take search query and append modifiers to the end of it

    - by Kimber
    This is a Greasemonkey question. What I'm trying to do is modify an old Google discussions script. What we're wanting to do is be able to take the Google search query and add modifiers to the end of it, like this:

        search query: "superuser"
        modifiers:    inurl:greasemonkey+question
        end result:   "superuser" inurl:greasemonkey+question

    The old script creates a new div within the "hdtb_more_mn" element, which is where you get the new discussions tab. However, since the "tbm=dsc" option to do a discussion search has died, this script no longer works; hence the need to add modifiers to your searches. I tried to edit the script, but it appends the modifiers to the end of the URL, which includes "&client=firefox-a&hs=8uS&rls=org.mozilla:en-US:official". This means you're also searching for the above as well as your query, which doesn't work. I would like to be able to append the modifiers at the end of the search query rather than the whole URL. I'm just not sure how to code it so that it adds the "&tbm=" stuff within "discussionDiv.innerHTML" to the end of the query. The Google search box id seems to be "gbqfq", but I'm not sure how to add this id. Here is the old script:

        // ==UserScript==
        // @name        Add Back Google Discussions
        // @version     1.4
        // @description Adds back the Discussion filters to Google Search
        // @include     *://*.google.tld/search*
        // ==/UserScript==

        var url = location.href;
        if (url.indexOf('tbm=dsc') < 0)
            addFilterType('dsc', 'Discussions');

        function addFilterType(val, name) {
            var searchType = document.getElementById('hdtb_more_mn');
            var discussionDiv = document.createElement('DIV');
            discussionDiv.className = 'hdtb_mitem';
            discussionDiv.innerHTML = '<a class="q qs" href="' +
                (url.replace(/&tbm=[^&]*/g, '') + '&tbm=' + val) + '">' + name + '</a>';
            searchType.innerHTML += discussionDiv.outerHTML;
        }

    Thanks for any help, or suggestions on who to ask. Google Chrome has an extension for discussion searches, but FF doesn't seem to have one as of yet, which is why I'm trying to modify the above.
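
    One way to sketch the change (the helper below is hypothetical, not part of the original script): rewrite only the q= parameter instead of concatenating onto the whole URL, so the browser-client parameters stay untouched:

        // Hypothetical helper: append search modifiers to the q= parameter only
        function addModifiersToQuery(href, modifiers) {
            // '$1' is the existing "q=..." pair; the modifiers are
            // URL-encoded and appended to it
            return href.replace(/([?&]q=[^&]*)/,
                '$1' + encodeURIComponent(' ' + modifiers));
        }
        // e.g. addModifiersToQuery(url, 'inurl:greasemonkey question')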

  • Redirect to new domain and preserve username

    - by David Brown
    I recently switched to a new domain for a version control server I run. The server is usually accessed with a username included in the URL, such as https://user@sub.olddomain.com/some/stuff. I want to redirect requests to the old domain to the new domain and preserve everything else in the URL, including the username. So the former URL would be redirected to https://user@sub.newdomain.net/some/stuff. Currently I have the following rewrite condition and rule:

        RewriteCond %{HTTP_HOST} sub\.olddomain\.com
        RewriteRule (.*) https://sub.newdomain.net$1 [R=301,L]

    This works, except that it drops the userinfo part of the URL. Is there a way I can preserve the user info?

  • A space-efficient guest filesystem for grow-as-needed virtual disks?

    - by Steve Schnepp
    A common practice is to use non-preallocated virtual disks. Since they only grow as needed, this makes them perfect for fast backup, overallocation, and creation speed. Since file systems are usually designed for physical disks, they have a tendency to use the whole available area (1) in order to increase speed (2) or reliability (3). I'm searching for a filesystem that does the exact opposite: one that tries to touch the minimum number of blocks needed, through aggressive block reuse. I would happily trade some performance for space usage. There is already a similar question, but it is rather general. I have a very specific goal: space-efficiency.

        1. Like page caching uses all the free physical memory
        2. Canonical example: online defragmentation
        3. Canonical example: snapshotting

  • Using mod_rewrite to hide tomcat port

    - by user123181
    I have apps on Tomcat that use URLs like this: http://xxx:8080/myapp. I don't want the users to see the port in the URL. I can do a rewrite rule like this:

        RewriteRule ^/myapp(.*) http://xxx:8080/myapp$1 [P,L]

    This way, if a user goes to the URL http://xxx/myapp he can enter the app fine, but the port will still show up in the browser. I want the URL that the user sees to always be http://xxx/myapp. How can I do this using mod_rewrite?
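
    A sketch of the usual alternative, assuming mod_proxy and mod_proxy_http are enabled: a plain reverse proxy, where ProxyPassReverse rewrites Location headers coming back from Tomcat so its redirects don't leak the port:

        # Reverse-proxy /myapp to Tomcat; ProxyPassReverse fixes backend redirects
        ProxyPass        /myapp http://xxx:8080/myapp
        ProxyPassReverse /myapp http://xxx:8080/myapp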

  • Apress Deal of the Day - 15/March/2011

    - by TATWORTH
    Today's $10 (about £6.50) offer from Apress at http://www.apress.com/info/dailydeal is:

    Pro SQL Server 2008 Analytics: Delivering Sales and Marketing Dashboards
    Pro SQL Server 2008 Analytics provides everything you need to know to develop sophisticated and visually appealing sales and marketing dashboards using SQL Server 2008 and to integrate those dashboards with SharePoint, PerformancePoint, and other key Microsoft technologies.
    $49.99 | Published May 2009 | Brian Paulen

  • Updating Google sitemap for mobile

    - by dimo414
    I have a series of utilities to generate Google sitemaps for my whole site. These files are massive and slow to build. We want to start telling Google these pages are mobile-crawlable too by adding them to mobile sitemaps, but the documentation is unclear on whether I need to create physically different files for my mobile URLs than for my normal ones. If this is my current sitemap:

        <?xml version="1.0" encoding="UTF-8" ?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
            <url>
                <loc>http://mobile.example.com/article100.html</loc>
            </url>
        </urlset>

    can I simply change it to:

        <?xml version="1.0" encoding="UTF-8" ?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
                xmlns:mobile="http://www.google.com/schemas/sitemap-mobile/1.0">
            <url>
                <loc>http://mobile.example.com/article100.html</loc>
                <mobile:mobile/>
            </url>
        </urlset>

    or do I need to create new files with the additional markup, alongside my existing files?

  • Want Google to index redirect URLs

    - by Dave Goten
    I'm having issues with users who think that Google Search is the address bar. Some of the sites that link to my site use user-friendly addresses with 301 redirects to pages that have less friendly URLs. So, for example, if I enter www.foo.com/bar it goes to www.bar.com/page.php?some-parameters-and-utm-codes-etc. Usually this is done with a 301 redirect in order to carry the SEO value of foo.com over to bar.com and so on, which I believe is standard practice. However, lately more and more people have been searching for www.foo.com/bar instead of going to www.foo.com/bar directly, and because the page /bar is nothing more than a redirect, it has no SEO presence that I know of. Things I've thought of but haven't been able to test, because Google takes forever to update :) (and I'm lazy like that), include:

    - Using Google sitemaps and entering the redirects there. (I could see this working if they were the top search entry all the time, and it might appear as a sitelink, but I don't know if that will make the URL itself show up in searches.)
    - Using canonical tags on my pages pointing to the redirects they set up. This is a nightmare in itself because of the nature of my pages: one week www.foo.com/bar might go to www.bar.com/pageA.php, the next it might go to www.bar.com/pageB.php, and having to remember to take the canonical tag off of pageA so that it doesn't get confused with pageB would be a pain.
    - Using 302 redirects -.-

    So I guess the question here is: does anyone have any experience or knowledge about this? What should I do to make www.foo.com/bar show up when someone 'searches' for this redirect URL?

  • Multilingual sites and Google search results, using sub-folders for language

    - by AWinter
    About three months ago we added an English version of our previously Japanese-only site under the subfolder /en/. We've tried to follow the sometimes incomplete best practices laid out by Google by adding alternate tags to all pages that are currently translated. The top page, for instance, has the following meta tags for language:

        <link rel="canonical" href="/">
        <link rel="alternate" hreflang="ja" href="/">
        <link rel="alternate" hreflang="en" href="/en/">

    while the English main page under /en/ has:

        <link rel="canonical" href="/en/">
        <link rel="alternate" hreflang="ja" href="/">
        <link rel="alternate" hreflang="en" href="/en/">

    Alternate languages are set up in the sitemap, as per Google's recommendations. It seems, however, that Google absolutely refuses to show the English top page in results when the user is using English at google.com: if you search you'll, as of this post, get the Japanese description and a title that Google has apparently invented, instead of the title and description in the meta tags for the /en/ index page. Does anyone have any experience with subfolders actually working to affect search results? What are the best practices for ensuring that the correct language version of my website is displayed through Google and other search engines? And how long will it take before the new language version becomes prominent in search engine results?
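
    One detail worth checking against the tags above: Google's hreflang documentation asks for fully-qualified URLs rather than relative paths, so a variant along these lines (example.com standing in for the real domain) may behave differently:

        <link rel="canonical" href="http://www.example.com/en/">
        <link rel="alternate" hreflang="ja" href="http://www.example.com/">
        <link rel="alternate" hreflang="en" href="http://www.example.com/en/">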

  • 301 redirect: Is this good or bad for 2 domains?

    - by Tim
    Since I couldn't find any appropriate answer to my specific question, I wanted to ask you. I've read a lot about the 301 redirect for moving pages and so on. A customer of mine booked a new domain last year for better search results (he included his main keyword in the domain; before, he had only a domain with his business name, which said nothing about what he does). I told him that he should do a 301 redirect so he doesn't lose his position in Google, and to redirect all new customers coming from the old domain to the new domain. After about one year, during which his site had a good amount of traffic, the Google search results for his keywords are getting worse. Since he didn't maintain his website (no new content, bad content on all pages, and so on), I assumed this was the problem. He gave his website to another company which also makes websites. They told him that this 301 redirection is very bad for his website. They removed it and also updated his content and the template, so now he has the same meta keywords on every page (instead of the specific ones I put there before). He also removed the canonical tag which I placed there to ensure no duplicate content. What I am now afraid of is that without this redirect Google will now find duplicate content and therefore kick him out of the index, which would be a nightmare, since most of his customers come via his website. I need verification of the fact that the 301 isn't bad but in fact the correct way of working with two domains, if possible with good sources I can point him to, since he doesn't want to hear anything about this. If someone also has a few words about the keywords and the canonical tag, I would really appreciate it! Thank you very much!
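
    For reference, the usual shape of such a redirect, as a minimal .htaccess sketch with placeholder domains (assuming Apache with mod_rewrite on the old domain):

        RewriteEngine On
        # Send every request on the old domain to the same path on the new one
        RewriteCond %{HTTP_HOST} ^(www\.)?old-domain\.example$ [NC]
        RewriteRule ^(.*)$ http://www.new-domain.example/$1 [R=301,L]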

  • htaccess for subdomain help

    - by Patrick
    Usually I just use the online tools for mod_rewrite rules, but this one just wouldn't work.

        Dynamic URL:   http://sub.domain.com/index.php?page=index&name=test
        Rewritten URL: http://sub.domain.com/test OR http://sub.domain.com/test/

    My .htaccess:

        RewriteRule ^([^/]+)/?$ index.php?page=index&name=$1 [L]

    Instead of passing "test" for the variable name, I always get the value "index.php". Do any gurus have an idea?
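
    A likely cause, sketched rather than verified against this setup: after the internal rewrite, Apache runs the rules again and ^([^/]+)/?$ now matches "index.php" itself. Excluding real files and directories usually stops the loop:

        RewriteEngine On
        # Skip files and directories that actually exist, so the internally
        # rewritten index.php is not matched a second time
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^([^/]+)/?$ index.php?page=index&name=$1 [L]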

  • Multilingual sites and Google search results, do subfolders really work?

    - by AWinter
    About three months ago we added an English version of our previously Japanese-only site http://www.clubberia.com under the subfolder http://www.clubberia.com/en/. We've tried to follow the sometimes incomplete best practices laid out by Google by adding alternate tags to all pages that are currently translated. The top page, for instance, has the following meta tags for language:

        <link rel="canonical" href="/">
        <link rel="alternate" hreflang="ja" href="/">
        <link rel="alternate" hreflang="en" href="/en/">

    while the English main page under /en/ has:

        <link rel="canonical" href="/en/">
        <link rel="alternate" hreflang="ja" href="/">
        <link rel="alternate" hreflang="en" href="/en/">

    We also have these alternate languages set up in the sitemap (as per Google's recommendations): http://www.clubberia.com/sitemap.xml. It seems, however, that Google absolutely refuses to show the English top page in results when the user is using English at google.com: if you search for "clubberia" you'll, as of this post, get the Japanese description and a title that Google has apparently invented, instead of the title and description in the meta tags for the /en/ index page. Does anyone have any experience with subfolders actually working to affect search results? Am I being too impatient, or possibly doing something incorrect? Should we just give up on subfolders and push to subdomains (not the prettiest option)?

  • How can we best petition to bring Adobe creative software to Ubuntu?

    - by Sixthlaw
    Now I know it's not as simple as asking Adobe to support their design software on Ubuntu, but is there a way for the community and Canonical to OFFICIALLY make known to Adobe the rapidly growing number of Linux users, and their desire for this great set of tools? I know that many of the answers I receive might be of the fashion "It's not going to happen", "Use the free tools provided", or "Who knows, it might happen in the near future", but this IS not what I am looking for. I have noticed, on sites like www.OmgUbuntu.com, there are links to pages where people can "like" the idea. Is there a way to try to get the whole community on board with this, even Canonical, and, as stated above, put this proposal forward to Adobe? The current requests for an Adobe CS for Linux are in dribs and drabs, scattered all over the internet. Now is the best time to come up with productive solutions for gathering statistics on the number of people willing to buy the Adobe CS. These are the words of an Adobe employee: "I have forwarded this feedback on to the appropriate team who will consider it for future releases of Adobe software." The larger the number of people we have unified behind the ONE community proposal, the greater chance we have of getting the software. How can we make this happen?

  • Nginx: Loopback connection via PHP's getimagesize() crashes server (Magento's CMS)

    - by Alex
    We were able to trace down a problem that is crashing our NGINX server running Magento to the following point. Background info: the Magento backend has a CMS function with a WYSIWYG editor. This editor loads some pictures via a controller in Magento (cms/directive). When we set the NGINX error_log level to info, we get the following lines (line breaks inserted for better readability):

        2012/10/22 18:05:40 [info] 14105#0: *1 client closed prematurely connection, so upstream
        connection is closed too while sending request to upstream, client: XXXXXXXXX,
        server: test.local, request: "GET index.php/admin/cms_wysiwyg/directive/___directive/
        BASEENCODEDIMAGEURL,,/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:9024", host: "test.local"

    When checking the code in the debugger, the following call never returns (in Varien_Image_Adapter_Abstract::getMimeType()):

        // $this->_fileName is
        //   http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif
        // $_SERVER['REQUEST_URI'] =
        //   http://test.local/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL
        list($this->_imageSrcWidth, $this->_imageSrcHeight, $this->_fileType, ) =
            getimagesize($this->_fileName);

    The requested filename is a URL back to the very server that is executing the script: a link to a static .gif that does not exist. Sample URL: http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif. When the above line is executed, any subsequent request to the NGINX server no longer gets a response. After waiting around 10 minutes, the NGINX server starts answering requests again. I tried to reproduce the error with a simple test script that only calls getimagesize() with the given URL, but this does not crash. It simply leads to an exception saying that the URL could not be loaded (which is fine, as the URL is wrong).
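
    A hedged guess at a mitigation, not a confirmed fix for this Magento code path: give the loopback fetch an explicit timeout so an exhausted FastCGI worker pool cannot hang the request indefinitely, then size the image from the fetched bytes (getimagesizefromstring requires PHP 5.4+):

        <?php
        // Fetch with a timeout instead of letting getimagesize() block on a URL
        $ctx = stream_context_create(array('http' => array('timeout' => 5)));
        $data = @file_get_contents($fileName, false, $ctx);
        if ($data !== false) {
            list($width, $height, $type) = getimagesizefromstring($data);
        } else {
            // Missing image or unreachable host: fail fast instead of hanging
        }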

  • Rich snippet for Google Custom Search - Schema.org

    - by Joesoc
    I am trying to extract the book URL from a link using microdata. The format is specified in schema.org. Here is my HTML:

        <div class="col-sm-4 col-md-3" itemscope itemtype="http://schema.org/Book">
          <div class="thumbnail">
            <img src="{{ book.thumbnailurl }}" itemprop="thumbnailUrl" style="width: 100px;height: 200px;">
            <div class="caption">
              <h4><span itemprop="name">{{ book.name }}</span> - <span itemprop="author">{{ book.author }}</span></h4>
              <p><span itemprop="about">{{ book.about }}</span></p>
              <p>
                <a href="{{ book.url }}" itemprop="url" onclick="trackOutboundLink('{{ book.name }}');">
                  <button type="button" class="btn btn-default btn-md">
                    <span class="glyphicon glyphicon-book"></span>Read
                  </button>
                </a>
              </p>
            </div>
          </div>
        </div>

    When I use the Google snippet testing tool, the JSON API returns book as an HTML link. However, when I make the call in JavaScript, the value of url is the text "Read". What am I missing?

  • How to install WordPress without a web browser

    - by bvandrunen
    What I am trying to do is automate WordPress website creation for the company I work for. We have lots of information in our database for our customers, and we want to create a WordPress website for each customer. The process works great and we have no trouble with the creation of websites, the transfer of data, or anything like that. The problem we do have is that when we buy a new domain (http://www.newdomain.com), our process breaks if the domain takes more than 15 minutes to resolve (we call a stored procedure which installs all the data after the URL is called to install WordPress). We have tried looping, where the process checks whether the domain resolves and keeps trying, but eventually it fails. So what we are looking for is a way to run the install without the domain having resolved yet. I have seen possibilities where you can change the wp-config file, but this doesn't work for us, since we have more than one domain and it changes the source URL for all the domains. What we really need is just a way to manually start the install script, either through a database call or some other route that doesn't check whether the domain is resolved or pointing at the server. Thanks for any suggestions. EDIT: All we do to install WordPress is call this URL: http://"newdomain".com/wp-admin/install.php?step=2. If you change settings in the backend, calling this URL will install WordPress without having to go through the wp-admin/install.php form.
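
    Two browserless routes that may fit, sketched with placeholder values (the IP and credentials below are placeholders, not real ones): bypass DNS entirely by connecting to the server's IP and naming the virtual host in a Host header, or use WP-CLI's installer if the machines are shell-accessible:

        # Bypass DNS: talk to the web server's IP directly, name the vhost explicitly
        curl -H "Host: newdomain.com" "http://203.0.113.10/wp-admin/install.php?step=2"

        # Or, assuming WP-CLI is available on the server:
        wp core install --url="http://newdomain.com" --title="Site" \
            --admin_user=admin --admin_password=secret --admin_email=admin@example.com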

  • Google Analytics ignoring "required step" in goals

    - by Matt Huggins
    I am A/B testing a landing page to see which version converts better. The funnels are set up exactly the same in terms of the goal completion URL and funnel steps, with one exception: the first step (which is a required step) has a different URL to represent each of the two landing pages. Unfortunately, Google is tracking a conversion for both of these goals regardless of which landing page a user reaches! It looks like the "required step" is broken; perhaps it is a deeper issue others haven't noticed, such as it only failing when the goal URL is the same between multiple goals. Here is an example of what I mean:

        Goal 1:
          Goal URL: /users/dashboard (head match)
          Funnel:
            1. /homepages/index1 (required step)
            2. /users/register
            3. /users/edit

        Goal 2:
          Goal URL: /users/dashboard (head match)
          Funnel:
            1. /homepages/index2 (required step)
            2. /users/register
            3. /users/edit

    As you can see, the only difference is step #1 of the funnel. Since I am A/B testing the landing page of the site, this should be the only difference. However, when I look at the goal page of Google Analytics, I see that the goal is being recorded for both of these regardless of the landing page being reached. The only tinkering I've done is wrapping each funnel step's goal in ^ and $ characters for an exact regular-expression match, but this didn't make a difference. Thoughts?

  • Nginx proxy SOAP request

    - by user2606078
    I'm looking for the right way to accomplish the following: there is an app that has URL (1) hardcoded, with no way/time to change it in the source:

        http://dev.server.com/example.com/admin/soap/action/index?pr=1

    and it should use (and get its response from) URL (2):

        http://example.com/admin/soap/action/index?pr=1

    What should I configure in the Nginx conf on dev.server.com (Apache is available as a fallback) so that the app, when it asks for URL (1), gets the answer from URL (2)? On dev.server.com, Apache has the virtual host dev.server.com enabled. I've also tried to proxy in Apache instead of Nginx by using ProxyPass:

        <Directory /var/www/dev>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride all
            Order allow,deny
            allow from all
        </Directory>

        <Location /example.com/admin/soap>
            ProxyPass http://example.com/admin/soap
        </Location>
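
    On the Nginx side, a minimal sketch (the directives are standard, but the Host handling should be verified against the target vhost):

        # Inside the dev.server.com server block: proxy the hardcoded prefix
        # to the real host, replacing the matched part of the URI
        location /example.com/admin/soap/ {
            proxy_pass http://example.com/admin/soap/;
            # The upstream is a name-based vhost, so pass the expected Host header
            proxy_set_header Host example.com;
        }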

  • Direct access to website's database with single click?

    - by Mick
    I have noticed that when selecting options (drop-down menus, radio buttons, etc.) on some websites, you see an ever more complex URL being created, and you can then use that URL to access the website at a later date and go straight to the page with your desired options. Unfortunately, on other websites the URL remains fixed and you appear to have no choice but to select the options all over again. I was wondering if there is some utility that would help automate this process.
