Search Results

Search found 11671 results on 467 pages for 'man pages'.

Page 61 of 467

  • Tackling thin content on an image gallery

    - by Ted Wilmont
    We run an image gallery as part of our site; however, we have over 8,000 images and every image has a separate HTML page of its own to display the image caption, related images and comments from users of the site. This seems to be a problem, especially with the Google Panda update, because these pages are technically "thin content". What would be the best way to tackle this? We'd love some feedback and advice regarding this scenario. We have a few options we thought of already but can't decide: We could noindex the separate image pages and lose any image search listings we have for the images, in favour of removing these thin pages from the index. We could 301 all of the individual image pages back to the image category listing and anchor each image (e.g. #img2122), and include all of the comments and description on the category listing page itself. If we were to simply list all of the images and content on the category pages themselves, what's the best method? We could add all of the content in the anchor tags and use jQuery to display it in a box when a user clicks on the image, or we could use Ajax to retrieve the information. However, what's the best Ajax method for SEO? Any ideas, suggestions, tips or advice are greatly appreciated, and thank you in advance.
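
    For reference, the first option usually comes down to a single tag in the head of each thin image page (a minimal sketch; the exact markup of your pages is assumed, not known):

        <!-- keeps the thin image page out of the index while still letting
             crawlers follow its links back to the category listing -->
        <meta name="robots" content="noindex, follow">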

    Read the article

  • Access Offline or Overloaded Webpages in Firefox

    - by Asian Angel
    What do you do when you really want to access a webpage only to find that it is either offline or overloaded from too much traffic? You can get access to the most recent cached version using the Resurrect Pages extension for Firefox. The Problem: If you have ever encountered a website that has become overloaded and unavailable due to sudden popularity (e.g. from Slashdot, Digg, etc.), then this is the result. No satisfaction to be had here… Resurrect Pages in Action: Once you have installed the extension, you can add the Toolbar Button if desired…it gives you the easiest access to Resurrect Pages. Or you can wait for a problem to occur when trying to access a particular website and have it appear as shown here. As you can see, there is a very nice selection of cache services to choose from, increasing your odds of accessing a copy of that webpage. If you would prefer to have the access attempt open in a new tab or window, then you should definitely use the Toolbar Button. Clicking on the Toolbar Button gives you access to the popup window shown here…otherwise the access attempt happens in the current tab. Here is the result for the website that we wanted to view using the Google listing, followed by the Google (text only) listing. The results with the different services will depend on how recently the webpage was published or set up. View Older Versions of Currently Accessible Websites: Just for fun we decided to try the extension out on the How-To Geek website to view an older version of the homepage. Using the Toolbar Button and clicking on The Internet Archive brought up the following page…we decided to try the Nov. 28, 2006 listing. As you can see, things have really changed between 2006 and now…Resurrect Pages can be very useful for anyone who is interested in how websites across the web have grown and changed over the years. Conclusion: If you encounter a webpage that is offline or overloaded by sudden popularity, then the Resurrect Pages extension can help you get access to the information you need using a cached version. Links: Download the Resurrect Pages extension (Mozilla Add-ons)

    Read the article

  • Help?! How does my tool work?

    - by DBA Community
    There is a whole range of Oracle tools for the Oracle Database that are operated through command line interfaces: from RMAN to ADRCI, from SQL*Loader to Export/Import, from SRVCTL to SQL*Plus. And, as befits proper tools, (almost) every single one of them comes with its own help facility. The emphasis is clearly on "own". Even an inexperienced user will quickly notice that Oracle apparently never really gave much thought to unifying these help functions, and that the help itself is only more or less helpful. Probably the most curious incarnation of such a help function is RMAN, whose extensive syntax help is hard to get at unless you deliberately mistype a command. As long as you do everything right (or wrong, but unfortunately with correct syntax), RMAN cannot be coaxed into revealing anything from its extensive syntax checker. How to get the helpful information out of the many different Oracle tools is what our tip is about. Continue to the tip
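
    As a quick reference (a hedged sketch, not taken from the linked tip; exact switches and output vary by tool and release), the usual ways to coax help out of these utilities look roughly like this:

        # Data Pump documents itself (the classic exp/imp utilities accept help=y too):
        expdp help=y
        impdp help=y

        # SQL*Loader prints its usage when started without arguments:
        sqlldr

        # SQL*Plus and SRVCTL have conventional help switches:
        sqlplus -help
        srvctl -help

        # ADRCI has an interactive HELP command:
        adrci
        adrci> help

        # RMAN has no HELP command; a deliberately wrong keyword makes its parser
        # respond with the list of keywords it was expecting instead:
        rman target /
        RMAN> backup nonsense;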

    Read the article

  • Who is a CMS really for?

    - by Eirc man
    I have lately started discovering content management systems, and I was wondering: who is a CMS really for? What I mean by that: is it only for companies, small businesses or individuals that pay a contractor to make a website whose users can then upload content through an easy interface? Or is it also used by programmers to build their own websites and projects? Would Facebook, Twitter or Stack Exchange ever have started by using a CMS, even a very powerful one? Would you as a programmer build your own "fancy" website on top of a CMS, for example Typo3, or would you build it from scratch? P.S. To be clearer, here is a summary: would I as a developer be stuck if I chose a CMS to develop a website that needs to scale to a big base of users? What if I build a website using a CMS and the website explodes in popularity, and then I want to add much more functionality than I had planned; is it possible that the CMS will limit the growth, because it might not have been built for that kind of scale?

    Read the article

  • SEO: disallowing Google from indexing forms in iframes or not?

    - by Marco Demaio
    I usually place forms in iframes (i.e. order forms, request-assistance forms, contact forms, etc.). Just the forms; I never place other content or pages in iframes. From an SEO point of view, would you exclude the forms from being indexed/crawled by Google or not? I mean, my forms hardly ever contain keywords/keyphrases, and I obviously place empty title/meta description tags in the pages shown in iframes to display the forms, because those titles are never displayed in the browser title bar. So I'm wondering, what's the point of letting Google index them? Moreover, I think these form pages might suck PageRank out of all the other pages that are more valuable for SEO. If your answer is "yes, I would exclude them from indexing", would you simply use robots.txt to exclude them? Thanks!
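
    If the form pages share a dedicated path, the robots.txt route is short (a sketch, assuming a hypothetical /forms/ directory; note that robots.txt only blocks crawling, while a noindex meta tag on each form page is what removes URLs that are already known):

        User-agent: *
        Disallow: /forms/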

    Read the article

  • How to encourage version control adoption

    - by Man Wa kileleshwa
    I have recently started working in a team where there is no version control. Most of the team members are not used to any kind of version control. I've been using Mercurial privately to track my work. I would like to encourage others to adopt it, and at the very least start to version their code as they develop changes. Can anyone give me advice on how I can encourage adoption of a distributed version control system such as Mercurial? Any advice on how to win people, including managers, over to DVCS would be much appreciated.
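
    One low-friction demonstration (a hedged sketch; the path and commit message are placeholders) is simply showing how little ceremony it takes to put an existing working copy under Mercurial:

        cd /path/to/project          # any existing working copy
        hg init                      # creates the repository in place; nothing else changes
        hg add                       # schedules the existing files for tracking
        hg commit -m "Initial import of the current code"
        hg status                    # from now on: see exactly what changed...
        hg log                       # ...and when, and by whom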

    Read the article

  • How was 20Q made?

    - by Dan the Man
    Ever since I was a kid, I've wondered how they made the 20Q electronic game. In this game, which is its own device, you think of an object, thing, or animal (e.g. a potato or a donkey). Once you mentally choose your thing, the device goes through a series of questions such as: Is it larger than a loaf of bread? Is it found outdoors? Is it used for recreation? For each of the questions you can answer yes, no, maybe, or unknown. The way I've always thought of it working is with immense, nested conditionals (if statements). But I don't think that is very likely, as it would be terribly difficult to understand while coding it. I'm not looking for a discussion, as SE doesn't allow it; I'm looking for concrete knowledge or solutions.
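
    Purely to make the "immense nested conditionals" idea from the question concrete (a hypothetical sketch, not a claim about how the real 20Q device works), the same branching can be stored as data and walked in a loop, so growing the game means growing a table rather than rewriting code:

        import java.util.Scanner;

        class Node {
            final String text;   // a question at an internal node, or a guess at a leaf
            final Node yes, no;  // answers lead to children; both null at a leaf
            Node(String text, Node yes, Node no) { this.text = text; this.yes = yes; this.no = no; }
        }

        public class TwentyQuestionsSketch {
            public static void main(String[] args) {
                // Tiny hand-built tree; a real game would load thousands of nodes from data.
                Node tree = new Node("Is it found outdoors?",
                        new Node("Is it larger than a loaf of bread?",
                                new Node("a donkey", null, null),
                                new Node("a potato", null, null)),
                        new Node("a toaster", null, null));

                Scanner in = new Scanner(System.in);
                Node current = tree;
                while (current.yes != null) {                       // internal node: ask
                    System.out.println(current.text + " (y/n)");
                    current = in.nextLine().trim().toLowerCase().startsWith("y") ? current.yes : current.no;
                }
                System.out.println("Is it " + current.text + "?");  // leaf: guess
            }
        }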

    Read the article

  • My wireless/WiFi connection does not work. What can I do?

    - by Wild Man
    Your situation: you have successfully installed Ubuntu, or you have just downloaded and booted Ubuntu live media (the latest LTS, see also HWE, or the latest non-LTS release is preferred; see the list of Ubuntu releases that are currently supported), or you upgraded your Ubuntu installation to the latest release that the software updater offered you and WiFi worked before but not on the new release, or you migrated your existing Ubuntu installation to new hardware. Your problem: the wireless of your laptop or desktop is not working. You tried switching the wireless switch off and on and you tried rebooting several times, but you don't see any WiFi access points; or you can see your wireless access point, but you cannot establish a connection. You want to analyze the problem, but you don't know where to start or what information you can provide. Related questions: I have a hardware detection problem, what logs do I need to look into?
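
    For the "where to start" part, a first pass of diagnostics usually looks something like this (a hedged sketch; exact command availability varies slightly by release):

        rfkill list                              # is the radio soft- or hard-blocked?
        sudo lshw -C network                     # wireless chipset and the driver in use
        lspci -nnk | grep -iA3 net               # PCI adapters and their kernel drivers
        lsusb                                    # USB wireless adapters
        nmcli device status                      # does NetworkManager see the interface?
        iwconfig                                 # legacy view of wireless interfaces
        dmesg | grep -iE 'firmware|wlan|wifi'    # missing firmware or driver errors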

    Read the article

  • Best Architecture for ASP.NET WebForms Application

    - by stack man
    I have written an ASP.NET WebForms portal for a client. The project has kind of evolved rather than being properly planned and structured from the beginning. Consequently, all the code is mashed together within the same project and without any layers. The client is now happy with the functionality, so I would like to refactor the code such that I will be confident about releasing the project. As there seem to be many differing ways to design the architecture, I would like some opinions about the best approach to take. FUNCTIONALITY: The portal allows administrators to configure HTML templates. Other associated "partners" can display these templates by adding IFrame code to their sites. Within these templates, customers can register and purchase products. An API has been implemented using WCF, allowing external companies to interface with the system as well. An Admin section allows administrators to configure various functionality and view reports for each partner. The system sends out invoices and email notifications to customers. CURRENT ARCHITECTURE: It currently uses EF4 to read from and write to the database. The EF objects are used directly within the aspx files. This has facilitated rapid development while I have been writing the site, but it is probably unacceptable to keep it like that, as it tightly couples the DB with the UI. Specific business logic has been added to partial classes of the EF objects. QUESTIONS: The goal of refactoring will be to make the site scalable, easily maintainable and secure. 1) What kind of architecture would be best for this? Please describe what should be in each layer, whether I should use DTOs / POCOs / the Active Record pattern, etc. 2) Is there a robust way to auto-generate DTOs / BOs so that any future enhancements will be simple to implement despite the extra layers? 3) Would it be beneficial to convert the project from WebForms to MVC?

    Read the article

  • Google Analytics HTTP vs HTTPS

    - by Pelangi
    I want to use Google Analytics on a website that uses both HTTP and HTTPS that works as explained below: Secure pages accessed through https://mydomain.com/secure/* are always on HTTPS. Any access to these pages through HTTP will be redirected to HTTPS. Any other pages will be accessible through both HTTP and HTTPS I have a Google Analytics profile with URL using HTTPS. Will I cover all traffic? Do I need to create another profile using HTTP and how should I apply the other profile?

    Read the article

  • Are DDD Aggregates really a good idea in a Web Application?

    - by Mystere Man
    I'm diving into Domain-Driven Design, and some of the concepts I'm coming across make a lot of sense on the surface, but when I think about them more I have to wonder if they're really a good idea. The concept of Aggregates, for instance, makes sense. You create small domains of ownership so that you don't have to deal with the entire domain model. However, when I think about this in the context of a web app, we're frequently hitting the database to pull back small subsets of data. For instance, a page may only list the orders, with links to click on to open an order and see its details. If I'm understanding Aggregates right, I would typically use the repository pattern to return an OrderAggregate that would contain the members GetAll, GetByID, Delete, and Save. OK, that sounds good. But... if I call GetAll to list all my orders, it would seem to me that this pattern would require the entire list of aggregate information to be returned: complete orders, order lines, etc., when I only need a small subset of that information (just header information). Am I missing something? Or is there some level of optimization you would use here? I can't imagine that anyone would advocate returning entire aggregates of information when you don't need it. Certainly, one could create methods on your repository like GetOrderHeaders, but that seems to defeat the purpose of using a pattern like repository in the first place. Can anyone clarify this for me?
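
    To make the trade-off concrete (a hedged sketch in Java rather than any particular .NET stack; all type and method names here are hypothetical), a common answer is to keep the aggregate repository for commands and add a thin, separate read model for listing screens, so GetAll-style grid queries never touch full aggregates:

        import java.util.List;

        // Aggregate root (stub): owns its order lines and guards their invariants.
        class Order { /* order lines, totals, state transitions, ... */ }

        // Command side: loads and stores whole Order aggregates.
        interface OrderRepository {
            Order getById(long orderId);
            void save(Order order);
            void delete(long orderId);
        }

        // Query side (Java 16+ for records): a lightweight projection used only to
        // render listing pages; it never loads order lines, so listing stays cheap.
        record OrderHeader(long orderId, String customerName, java.time.LocalDate placedOn) {}

        interface OrderHeaderQuery {
            List<OrderHeader> findPage(int page, int pageSize);
        }

    Splitting reads from the aggregate-guarded writes this way is the usual justification offered in DDD/CQRS material for why a GetOrderHeaders-style query does not defeat the repository pattern: the repository protects writes, while read models bypass it on purpose.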

    Read the article

  • 301 redirect to different directory on Yahoo Small Business Hosting without .htaccess

    - by Vinay
    I have a website hosted with Yahoo Small Business Hosting, and I don't have access to a .htaccess file. I have around 220 pages in a folder mysubfolder (http://example.com/myfolder/mysubfolder), and the website is around three years old. I am planning to move all 220 pages from mysubfolder to myfolder (one level up). All the pages in mysubfolder are indexed. What is the best way to do this so that it doesn't hurt my SEO?
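
    Without .htaccess, and assuming no server-side scripting is available on the plan (an assumption about the hosting, not something stated above), the closest per-page substitute is a canonical link plus an instant meta refresh in each old page; search engines generally treat a 0-second meta refresh much like a permanent redirect, though it is not a true 301:

        <!-- in the <head> of each old page under /myfolder/mysubfolder/ ;
             "page-name" is a placeholder for the matching new page -->
        <link rel="canonical" href="http://example.com/myfolder/page-name.html">
        <meta http-equiv="refresh" content="0; url=http://example.com/myfolder/page-name.html">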

    Read the article

  • Adding your website to free web directories as a link building strategy

    - by Man
    It's been two months since I launched a website. I recently ran into some websites which list directories of other websites; some examples are http://www.addgoogleurl.com/ and webdirectorieslist.com, etc. I was talking with my colleague and he says adding the URL of my website to these kinds of websites will negate the effect of other organic, real links. Does Google count these kinds of links from web directory websites as positive or negative? Do you have any source for your answer to refer to? I found this question asked before on webmasters.SE, but it is asking about many links from a single website.

    Read the article

  • Should I have link rel=next & prev on URLs which have query variables?

    - by user21100
    For example, I have link rel prev & next set up on these pages of products: site.com?page=2 site.com?page=3 (this is my preferred structure, by the way, and I'm trying to get all the ugly URLs which are littered with query variables deindexed, as they are causing duplicate content). So the above URLs are fine, but once a filter to narrow the product results is selected, like "price", the URL looks like this: site.com?price[1000-1499]=on site.com?page=2&price[1000-1499]=on As of right now, I am having the link rel prev & next dynamically added to the header of these pages, but since I am working on getting these query-variable URLs deindexed, I am wondering if I should get rid of it on these pages. Any thoughts?
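
    For reference, one common arrangement (a sketch; the URLs are the question's own examples, with a scheme added so the href values are valid) is to keep rel prev/next only on the clean paginated series and mark the filtered variants for deindexing instead of giving them their own pagination chain:

        <!-- on site.com?page=2: the clean series keeps its pagination tags -->
        <link rel="prev" href="http://site.com/?page=1">
        <link rel="next" href="http://site.com/?page=3">

        <!-- on site.com?page=2&price[1000-1499]=on: drop rel prev/next here
             and ask for deindexing instead -->
        <meta name="robots" content="noindex, follow">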

    Read the article

  • How can you plan long range resources and budgets when using Agile methodology?

    - by Mystere Man
    Agile does not encourage a lot of up-front design. This is good from a requirements-management and software-development standpoint, and allows the project to adapt to changing business needs. However, how does one do any long-range planning of resources if you don't really know what you're going to build when you start? Oh sure, you have a conceptual model of what you're going to build, but you don't have any measurable detail from which to gauge how many resources you will need to complete the project, or how much it will cost. Does anyone have any suggestions on how to go about long-range planning in an agile environment?

    Read the article

  • Is it possible to mod_rewrite BASED on the existence of a file/directory and uniqueID?

    - by JM4
    My site currently forces all non-www pages to use www. Ultimately, I am able to handle all unique subdomains and parse them correctly, but I am trying to achieve the following (ideally with mod_rewrite): when a consumer visits www.site.com/john4, the server processes that request as www.site.com/index.php?Agent=john4. Our requirements are: The URL should continue to show www.site.com/john4 even though it was rewritten to www.site.com/index.php?Agent=john4. If a file (of any extension) or a directory exists with that name, the entire process stops and it tries to pull that file instead: for example, www.site.com/file would pull up www.site.com/file.php if file.php existed on the server, and www.site.com/pages would go to www.site.com/pages/index.php if the pages directory exists. Thank you ahead of time. I am completely at a loss right now.
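
    A hedged sketch of the usual mod_rewrite approach (an .htaccess in the document root is assumed; the Agent parameter name comes from the question, and directory requests rely on index.php being listed in DirectoryIndex):

        RewriteEngine On

        # 1) Real files and directories win: stop rewriting and serve them.
        RewriteCond %{REQUEST_FILENAME} -f [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^ - [L]

        # 2) Try the same name with .php appended, so /file serves /file.php.
        RewriteCond %{REQUEST_FILENAME}.php -f
        RewriteRule ^(.+)$ $1.php [L]

        # 3) Otherwise treat the path as an agent name: /john4 -> /index.php?Agent=john4
        #    (an internal rewrite, so the browser keeps showing /john4).
        RewriteRule ^([A-Za-z0-9_-]+)/?$ index.php?Agent=$1 [L,QSA]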

    Read the article

  • Useful design patterns for working with FragmentManager on Android

    - by antman8969
    When working with fragments, I have been using a class composed of static methods that define actions on fragments. For any given project, I might have a class called FragmentActions, which contains methods similar to the following:

        public static void showDeviceFragment(FragmentManager man) {
            String tag = AllDevicesFragment.getFragmentTag();
            AllDevicesFragment fragment = (AllDevicesFragment) man.findFragmentByTag(tag);
            if (fragment == null) {
                fragment = new AllDevicesFragment();
            }
            FragmentTransaction t = man.beginTransaction();
            // replace() rather than add() so an already-attached fragment is not added twice
            t.replace(R.id.main_frame, fragment, tag);
            t.commit();
        }

    I'll usually have one method per application screen. I do something like this when I work with small local databases (usually SQLite), so I applied it to fragments, which seem to have a similar workflow; I'm not married to it, though. How have you organized your applications to interface with the Fragments API, and what (if any) design patterns do you think apply to this?

    Read the article

  • Disable mobile page redirection for SharePoint 2013

    - by Sahil Malik
    SharePoint 2013 (Foundation too) detects requests from mobile devices and automatically changes the URL of the requested non-mobile page to its mobile substitute. This logic is now built into SPRequestModule. The mobile view is pretty damned amazing. The set of pages for mobile access is completely different, and SharePoint has an entirely separate set of controls for the mobile pages. These live in the Microsoft.SharePoint.MobileControls namespace and inherit from the Microsoft ASP.NET controls in the System.Web.UI.MobileControls namespace. These mobile pages can even use mobile Web Part adapters to mimic the behavior of web parts on mobile web part pages. Read the full article ...

    Read the article

  • SEO - Coaching Newbies

    All search engines use algorithms, and each search engine has its own. An algorithm is the formula that a search engine uses to evaluate your web pages. The robots will crawl all pages on your site, but not all pages will be indexed.

    Read the article

  • Webmaster Tools is throwing out 404 errors on a link not on the page

    - by plantify
    Webmaster Tools is showing thousands of 404 errors where pages on the site are referring to another, incorrect URL. For example: URL not found www.plantify.co.uk/shop/=, linked from http://www.plantify.co.uk/shop/gift-voucher and http://www.plantify.co.uk/shop/special-plant-offers. I have obviously checked the source and cannot find any references to this link on any page. The only consistent pattern is that it only seems to report this error on pages with two path sections, i.e. www.plantify.co.uk/shop does not report any error, whilst pages of the form www.plantify.co.uk/shop/xxx (where xxx can be several different pages, such as gift-voucher) all report it. I cannot seem to duplicate this error. I have run a link checker (we use Screaming Frog) and it does not report this error. I have fetched these pages as a bot, and they do not report this error either. I am at a total loss. I cannot even duplicate the issue, but it is most definitely an issue, as Webmaster Tools is reporting new errors every day. Is this perhaps Googlebot doing its own thing?

    Read the article

  • Is multiple domain names and links from same IP causing poor search engine rankings?

    - by John
    I have an ecommerce website which is not doing so well in Google. I am trying to improve this, of course, and am looking at some possibilities for why it isn't doing well. The website has four domain names, all of which have been indexed by Google. A few months ago I applied 301 redirects to any requests for two of the domain names, so now it is down to two domain names (one is a .net, the other is a .com.au; the others were .net.au and .com). I prefer to use my main domain name (the .com.au), but one of the names has been around for a long time and has more inbound links. According to a PageRank tool, both are PR2. It is a Classic ASP site and up until recently had a lot of query-string parameters. In the last week or so I added URL rewriting, so there are now no parameters for most pages. I don't do 301 redirects from the old URLs, but instead I add the META canonical tag indicating the preferred new URL. At the same time I redesigned the site and improved title tags, META descriptions, and H tags, but it hasn't been long enough for Google to index many of these yet. I also looked at what pages Google has indexed, and oddly it has some strange pages in the index: there are a lot of pages which are actual keyword searches (more a bunch of random letters than an actual word). What I mean is that it is as if someone had typed something to search for into my search box; there are no links to pages like this, and the only way of getting them is to type something into the search box. So I added a META robots tag with noindex,nofollow any time I render pages like this. Years ago I set up a fake price-comparison site which lists all my products and links back to my site. It has a different keyword-rich domain name but is on the same server and same IP address. It has a completely different layout but does have the same product categories and product descriptions (although I have stripped the formatting out of them, so the text is the same even if the markup is not identical). I also have a few blog sites which again are on the same server/IP and all carry advertising for the website. My questions are: What should I do with the multiple domains; just use one, or continue with two or more? Should I add 301 redirects, not just the META canonical tag? Any idea about Google indexing my search results pages, and did I do the right thing with the META robots tag? Is the fake price-comparison site likely to be causing problems? Are all the links to the site from other domain names but the same IP address likely to be causing problems? Thanks for any help. Sorry for so many questions in one.
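
    For reference, the two head tags mentioned above look like this (a sketch; the URLs are placeholders, not the site's real addresses):

        <!-- on an old query-string URL, pointing at the preferred rewritten URL -->
        <link rel="canonical" href="http://www.example.com.au/widgets/blue-widget">

        <!-- on internal search-result pages that should stay out of the index -->
        <meta name="robots" content="noindex, nofollow">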

    Read the article

  • What are the starting games I need to make? [Best steps for a beginner game developer?] [closed]

    - by Man With Steel Nerves...
    Possible Duplicate: What are good games to “earn your wings” with? Hi, I'm new to creating games; previously I had done only porting. I need some suggestions for making a game. What is the basic game logic I need to start with? Should I write a Tic-Tac-Toe game? That actually seems very basic to me. I'm totally confused about where to start. I like to create big games, but after starting I feel the game is too heavy to handle. Can anyone list out the basic needs of a gameplay programmer? I don't mind using any platform (Flash, C++, Objective-C), but I need to know what game logic I need to understand before I start a big game.

    Read the article

  • What are some best practices for minimizing code?

    - by CrystalBlue
    While maintaining the sites our development team has created, we have come across include files and plugins that have proven to be very useful to more than one part of our applications. Most of these modules come with two different files, a normal source file and a min file. Seeing that the performance and speed of a page can be improved by minimizing the size of its files, we're looking into doing that for our pages as well. The problem we run into is that a lot of our normal pages (written in classic ASP) are a mix of HTML, ASP, JavaScript, CSS, and include files. We have some pages that have their JS both in include files and in the page, depending on whether the function is only really used on that page or is used on many other pages. For example, we have a common.js and an ajax.js file; both are used on a lot of pages, but not all of them. There are also some functions in a page that don't really make sense to put into one master file. What I have seen a few other people do online is use one master JS file, place all of their JavaScript into it, minify it, gzip it, and only use that on their production server. Again, this would be great, but I don't know if it fully works for our purposes. What I'm looking for is some direction to go in on this. I'm in favor of taking all of our JS, putting it in one include file, and just having it included on every page that is hit. However, not every page we have needs every bit of JS. So would it be worth combining and minifying the files into one master file and including it everywhere, or would it be better to minify all the other files and still include them on a need-to-use basis?
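
    A common middle ground (a hedged sketch; the file names are hypothetical) is one shared, minified, cacheable bundle for the scripts most pages need, plus a small page-specific file only where it is actually used:

        <!-- on every page: the shared bundle (e.g. common.js + ajax.js combined and minified) -->
        <script src="/js/site.min.js"></script>

        <!-- only on pages that actually need it: a small page-specific file -->
        <script src="/js/checkout.min.js"></script>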

    Read the article
