Search Results

Search found 37060 results on 1483 pages for 'page'.


  • How to Build Links & Improve PageRank

    Before we start talking about building links to improve Google PageRank, let's clear up any confusion - PageRank is a calculation of Google's estimate of the importance of a page, but it is not the same as where your page "ranks" in the organic results. It can be very confusing since the words are so similar. Try thinking of it this way: a page can rank in any of the search engines, but you can only have PageRank in Google.

    Read the article

  • Best way to trigger editing and dynamically show editing features? [on hold]

    - by Tim Marshall
    Page in question: http://rafflebananza.com/admin/StatisticalData/expenses/expenses.html

    Hiya everyone. On my page I have an 'Actions' drop-down at the top right-hand side of the page. I would like this drop-down to include an 'Enable Editing' action. Upon clicking Enable Editing, I would like a PHP variable to be modified from 'EnableEditing = false' to 'EnableEditing = true'. Why I would like to use PHP may be questionable; here is my understanding, to clarify: sections of my page are shown to different administrators depending on their level, and upon enabling editing mode certain contents will then be shown dynamically.

        <?php
        if ($_SESSION['user_level_status'] < 2) {
            if ($editing == 'enabled') {
                // show this
            } else {
                // show this
            }
        }
        ?>

    Something similar to this; I'm new at PHP, so this may look incorrect. The question really is: is PHP the correct language to use to trigger editing, and how can I do this, please? Best regards, Tim
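
    A minimal sketch of one way this could work, assuming the flag lives in the session and is toggled by a hypothetical query parameter (all names here are illustrative, not from the original post):

        <?php
        // Hypothetical sketch: toggle an edit-mode flag via the session.
        session_start();

        // The 'Enable Editing' action links to e.g. ?editing=1
        if (isset($_GET['editing'])) {
            $_SESSION['editing_enabled'] = ($_GET['editing'] === '1');
        }

        $canEdit = isset($_SESSION['user_level_status'])
            && $_SESSION['user_level_status'] < 2;

        if ($canEdit && !empty($_SESSION['editing_enabled'])) {
            echo 'editing controls shown here';
        } else {
            echo 'read-only view shown here';
        }

    PHP alone can only answer "show or hide on the next page load"; making controls appear without a reload would additionally need JavaScript on the client.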

    Read the article

  • How to throw a 404 error from htaccess?

    - by John Isaacks
    Everything I find seems to be about creating a custom 404 page. That is not what I am trying to do. If I want to block access to a page I can do this in htaccess:

        RewriteRule pattern - [F]

    However, "Forbidden" hints that the page does exist. I want the page to appear to not even exist, so I would like to give a 404 error instead of a 403, and then have it render whatever 404 page would render if the resource really wasn't there. How can I do that?
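
    For what it's worth, on Apache 2.2.16 and later mod_rewrite's R flag accepts status codes beyond the redirect range, so a rule can answer 404 directly (the pattern below is a placeholder):

        # .htaccess sketch: respond 404 instead of 403 for a hidden page
        RewriteEngine On
        RewriteRule ^hidden-page$ - [R=404,L]

    Apache then serves whatever its normal 404 handling would produce, as if the resource never existed.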

    Read the article

  • WordPress: get content for a main menu page

    - by eca_arpit
    I added a menu page named "Home" in wp-admin. It was added successfully, but when I click this Home menu it displays nothing. Now I want to display the content of a page, say page_id=15, on the right side (which is empty after clicking Home). Page 15 contains PHP code and also uses a template. Is it possible to display its contents? I wrote the following code in a plugin; if there is any confusion I can explain more, please help me out:

        add_action('admin_menu', 'Home');

        function Home() {
            add_menu_page('My Plugin Options', 'Home', 'manage_options', 'my-unique-identifier', 'content');
        }

        function content() {
            if (!current_user_can('manage_options')) {
                wp_die(__('You do not have sufficient permissions to access this page.'));
            }
            $page_id = 15;
            $header_content = get_page($page_id);
            echo apply_filters('the_content', $header_content->post_content);
        }

    Read the article

  • Most standard / Best way to keep the same top menu among different web pages?

    - by jsoldi
    What's the standard way to keep the same menu at the top of different web pages without having to duplicate it on each page? (I don't mean that it doesn't reload, like when using frames and only loading the bottom part; I want the menu to scroll with the page when scrolling down, like pretty much every single web page that exists.) I found this answer, but the guy can't use PHP and I can. Plus, I see several people giving different suggestions, but I assume there is a standard, since pretty much every single web page on the whole web has a menu at the top that stays the same across multiple pages. I'm just a newbie at web design (I can program PHP and HTML easily, but I have no idea about standards and stuff like that, since I'm a self-taught guy). What I would normally do is include the menu with PHP, but I'm not sure if this is the "standard".
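
    The PHP include mentioned at the end is indeed the common low-tech answer; a minimal sketch (file names are illustrative):

        <?php /* menu.php: the one shared copy of the menu markup */ ?>
        <nav>
          <a href="/index.php">Home</a>
          <a href="/about.php">About</a>
        </nav>

        <?php
        /* index.php (and every other page): pull the menu in */
        include 'menu.php';
        ?>
        <h1>Page content here</h1>

    Server-side includes and template engines achieve the same thing; the common property is that the menu exists in exactly one source file.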

    Read the article

  • What are solutions and tradeoffs to maintain search result consistency in a web application

    - by iammichael
    Consider a web application with a custom search function that must display the results in a paged manner (twenty per page, with up to hundreds of thousands of total results) and the ability to drill down to individual results that maintain next/previous links for navigating through the results. Re-executing the search on each page request to get the appropriate results for that page of data can be too expensive (up to 15 s per search). Also, since the underlying data can change frequently (e.g. the addition of new results), re-executing could cause the next/previous functionality to behave inconsistently (e.g. the same results reappearing on a later page after having been viewed on an earlier page).

    What options exist to ensure the search results can be viewed across multiple pages in a consistent manner, and what tradeoffs does each option have in terms of network, CPU, memory, and storage requirements?

    EDIT: I thought caching the query's search results was an obvious necessity. The question is really asking where to cache the result set and what tradeoffs might exist for each choice: for example, storing the IDs of the entities in the result set on the client, or storing them in the user's session on the web server, or in a temporary table in the database. I'm not looking specifically for a single solution, as different scenarios may result in different approaches (and such a question would be more suited to stackoverflow.com rather than here), but more for a design comparison between the possible approaches.
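
    As one illustration of the tradeoffs being asked about, a sketch of the session variant: run the expensive search once, keep only the ordered result IDs under a per-query key, and page through that frozen snapshot (run_expensive_search and the twenty-per-page figure stand in for the application's own pieces):

        <?php
        session_start();

        // Execute the costly search only once per query, caching ordered IDs.
        function get_result_ids($query) {
            $key = 'search_' . sha1($query);
            if (!isset($_SESSION[$key])) {
                $_SESSION[$key] = run_expensive_search($query); // hypothetical helper
            }
            return $_SESSION[$key];
        }

        // Any page of results is then a cheap slice plus a fetch-by-ID.
        $ids   = get_result_ids($_GET['q']);
        $page  = isset($_GET['page']) ? max(1, (int) $_GET['page']) : 1;
        $slice = array_slice($ids, ($page - 1) * 20, 20);

    The cost is web-server memory per active search, and the snapshot will not reflect newly added results until a fresh search; the client-side and temporary-table variants trade that memory for network payload or database storage respectively.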

    Read the article

  • How does 301 redirection work across the network, and should I use it if there is a chance we may need to change the resource back to the original URL?

    - by Faust
    I've built a CMS that makes it fairly easy for my client to relocate pages in their site hierarchy. This site has all human-readable and intuitive URLs, so moving a page necessarily means that its URL changes. I am storing records of each resource's past URLs in the data store so that requests for bygone URLs are re-routed to their appropriate successors.

    I'm warning my clients not to re-arrange the site willy-nilly (for numerous reasons), but nevertheless I suspect there's a chance page moves could get reversed from time to time. So I'm trying to figure out whether 301, 302, or 307 redirects should be used when serving up pages for requests to out-of-date URLs. I understand the value of using 301 for search engine optimization, but my concern is with this system possibly making some pages inadvertently unavailable to some users.

    Questions: if the clients move a page at location/URL A to a new location B, users get the redirect from A to B, and the clients then move the page back to A again, how long can I expect any of those users to keep having their requests for A redirected to B (in this case sending them to my friendly 404 page)? Is it until an item in their browser history is cleared? Is the redirect somehow cached in routers throughout the internet? How does this work? How long can I expect the 301 redirect to linger out there?
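
    For background on the lingering question: 301 responses are cached by browsers (often with no expiry) and by intermediate HTTP caches, not by routers, and an explicit Cache-Control header bounds how long a cached redirect can outlive a reversal. A sketch of issuing such a bounded 301 (the one-day figure is arbitrary):

        <?php
        // Permanent redirect, but one a cache may only reuse for 24 hours.
        header('Cache-Control: max-age=86400');
        header('Location: /new-location', true, 301);
        exit;

    A 302 or 307 avoids the caching problem entirely at the cost of the SEO benefit, which is exactly the tradeoff the question is weighing.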

    Read the article

  • Extension to add button "Report to Bugzilla"?

    - by Alois Mahdal
    We have: an internal MediaWiki installation for internal documents (we don't use it in a completely wiki-like style; only maintainers should normally make changes), and an internal Bugzilla installation for internal issues, including issues with these internal documents on the MediaWiki site. Now only the icing on the cake is missing: an automatic button that would appear on each page, able to open a Bugzilla page and pre-fill some fields with information about that page (basically, its name). What I imagine as the best solution would be a sibling to the ubiquitous "[edit]" button, probably sitting next to it, as in my mock-up.
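
    A minimal sketch of one way to get such a button without a full extension, assuming a reasonably recent MediaWiki and Bugzilla's standard enter_bug.cgi parameters (the Bugzilla URL and product name are placeholders); this would go in MediaWiki:Common.js:

        // Add a "Report to Bugzilla" tab, pre-filling the bug's URL and summary.
        mw.loader.using('mediawiki.util', function () {
            var href = 'https://bugzilla.example.com/enter_bug.cgi' +
                '?product=Documentation' +
                '&bug_file_loc=' + encodeURIComponent(location.href) +
                '&short_desc=' + encodeURIComponent('Issue on page: ' +
                    mw.config.get('wgPageName'));
            mw.util.addPortletLink('p-cactions', href, 'Report to Bugzilla');
        });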

    Read the article

  • Remove IIS 7 from Windows 7 Ultimate

    - by sonill
    Hi, I recently had trouble running WAMP properly on my Windows 7 Ultimate (it was running properly some days ago), so I tried to install IIS 7, but that didn't seem to work great either, so I uninstalled it. Now I have managed to install WAMP properly, but when I access localhost it opens the IIS 7 page. I can't figure out what's wrong; can anyone help? I want to remove that IIS 7 page from localhost and have the WAMP page there instead. And when I try to access another page by typing "http://localhost/info.php", it shows a page with the heading "Server Error in Application "DEFAULT WEB SITE"".
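
    A quick check worth trying (an assumption that IIS is still bound to port 80, not a confirmed diagnosis): see what owns the port, then stop and disable the IIS web service so Apache can claim it.

        REM Run in an elevated Command Prompt.
        REM Show which process (PID, last column) is listening on port 80:
        netstat -ano | findstr :80

        REM Stop IIS and keep it from starting at boot:
        net stop W3SVC
        sc config W3SVC start= disabled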

    Read the article

  • Creating a Login Overlay

    Many types of websites, from online retailers to social networking sites, allow visitors to create user accounts. Traditionally, websites that support user accounts have their visitors sign in by going to a dedicated login page where they enter their username and password. One nitpick I have with dedicated login pages is that signing in involves leaving the current page to visit the dedicated login page. This article shows how to implement a login overlay, which is an alternative user interface for signing into a website.

    Read the article

  • What is the/Is there a right way to tell management that our code sucks?

    - by Azkar
    Our code is bad. It might not have always been considered bad, but it is bad and is only going downhill. I started fresh out of college less than a year ago, and many of the things in our code puzzle me beyond belief. At first I figured that as the new guy I should keep my mouth shut until I learned a little more about our code base, but I've seen plenty to know that it's bad. Some of the highlights:

    - We still use frames (try getting something out of a querystring, almost impossible)
    - VBScript
    - Source Safe
    - We 'use' .NET - by that I mean we have .NET wrappers that call COM DLLs, making it almost impossible to debug easily
    - Everything is basically one giant function
    - Code is not maintainable

    Each page has multiple files that are created every time a new page is made. The main page basically does Response.Write() a bunch of times to render the HTML (runat="server"? no way). After that there can be a lot of logic on the client side (VBScript), and finally the page submits to itself (oftentimes storing many things in hidden fields), where it then posts to a processing page which can do things such as save the data to the database.

    The specifications we get are laughable. Often they call for things like "auto-populate field X with either field Y or field Z", with no indication of when to choose field Y or field Z.

    I'm sure some of this is a result of not being employed at a software company, but I feel as if people writing software should at least care about the quality of their code. I can't even imagine that if I were to bring something up anything would be done soon, as there is a large deadline looming, but we are continuing to write bad code and use bad practices. What can I do? How do I even bring these issues up? 75% of my team agree with me and have brought up these issues in the past, yet nothing gets changed.

    Read the article

  • RewriteRule for URLs with spaces

    - by Robert Cailliau
    My site's pages are in multiple languages, whereby each language version shares its media (images) with the other language versions. I place all versions and the media in a single directory with the same name, e.g. pages mypage-en.html, mypage-fr.html etc. will sit in directory mypage. The directory path suffices to reference a page: h t t p : //....../mypage/ is good enough; there is no need for h t t p : //....../mypage/mypage-en.html. A rewrite with

        RewriteRule ^(.*)/([a-zA-Z0-9]+)/?$ /$1/$2/$2-en.html

    lets me use the shorter form. But what if the name mypage contains spaces (which some do)? I want h t t p : //....../my page/ to lead to h t t p : //....../my page/my page.html. Using

        RewriteRule ^(.*)/([a-zA-Z0-9|\s]+)/?$ /$1/$2/$2-en.html

    did not work. Any hints welcome. (Please do not ask me why I want to do this, nor tell me I should not use spaces in file names.)
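
    One possibility, offered as an untested sketch rather than a confirmed fix: sidestep the character class entirely and match any final path segment that is not a slash, which covers spaces too.

        # .htaccess sketch: accept any directory name, spaces included
        RewriteEngine On
        # let requests for the real .html files pass through untouched
        RewriteCond %{REQUEST_URI} !\.html$
        RewriteRule ^(.*)/([^/]+)/?$ /$1/$2/$2-en.html [L]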

    Read the article

  • How to Use Heading Tags and Alt Attributes

    If you're working with HTML code for the first time, you may be wondering why heading tags and alt attributes are used as a guide to the data within your site's architecture. Search engine optimization consultants use HTML header tags to summarize the topic of the page they introduce. Google and other search engines analyze the text inside header tags in their algorithms to assign web page rankings. To rank for your target keywords and phrases, incorporate them into your HTML page headers.
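
    As a minimal illustration of the two elements being discussed (the wording is placeholder text):

        <!-- heading tag summarizing the topic of the page it introduces -->
        <h1>Handmade Leather Wallets</h1>

        <!-- alt attribute describing the image in indexable text -->
        <img src="wallet.jpg" alt="Brown handmade leather wallet">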

    Read the article

  • JavaScript: scroll position (Webkit engine) [migrated]

    - by Julien
    I'm currently trying to use JavaScript to find out how far down the page the user has scrolled; for Firefox 8.0, the keyword is pageYOffset. To say things mechanically: The page has a certain height. In Firefox, the useful object is document.documentElement.scrollHeight. The browser's visible area also has a certain height. In Firefox, the object is window.innerHeight; in IE8, document.documentElement.clientHeight. I need to know where the user is in the page vertically; in other words, how many pixels down the page the user has scrolled. Does Webkit have a DOM object that refers to the current scroll position? Thank you.
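
    For reference, window.pageYOffset is not Firefox-specific; WebKit supports it as well, and the classic cross-engine fallback reads scrollTop from the document instead (a sketch):

        // Pixels scrolled vertically, across WebKit, Gecko and older IE.
        var scrollY = (window.pageYOffset !== undefined)
            ? window.pageYOffset
            : (document.documentElement || document.body).scrollTop;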

    Read the article

  • Backlink Your Way to the Top of Google by Tapping Into Seven Easy Sources of Backlinks

    Because backlinks boost a web page's level of authority - and authority is a key search engine ranking factor - it is absolutely essential that any web page you are trying to promote has a lot of high-quality backlinks pointing to it in order to achieve high search engine rankings. While the best backlinks are those that are earned on the strength of great content, great content will not be seen unless the web page it occupies is highly visible in the search results.

    Read the article

  • Getting Your Site PR Increased With Back Link Swap Forums

    Site PR, or Page Rank, is useful in search engine optimisation (SEO) for increasing the visibility of your website in the Google search engine. Page rank is used by Google to organise websites by relevance and popularity; the page rank system scores websites on a scale of 0 to 10 (with ten being the most popular).

    Read the article

  • Facebook contest policy no-no?

    - by Fred
    I would like to post a link on a Facebook page where it will exit Facebook entirely and go to a client's website, where people will be on a page (the client's) where they can enter their email address to be entered into a temporary database file, with rules and disclosures etc., for a draw once the number of entries reaches 100, for instance. Once the number of entries reaches 100, a random winner is picked and notified via email. The functionality is as follows:

    - A link is placed on a Facebook page leading to an external page
    - The page is a form to merely enter an email address for a contest
    - The email is placed in a temporary file
    - An automatic email is sent to the address used, for confirmation, using a SHA-256 hash
    - The person receives the email, saying something to the effect of "Please confirm your email address etc. If you did not authorize this, simply ignore this message and no further action will be taken"
    - If the person clicks on the confirmation link, the email is then stored in the database and the person is again notified, saying "Thank you for signing up etc."
    - Once others complete the same process and the database reaches a certain number, the form is no longer accessible and a random email is automatically picked
    - Once picked, an email is automatically sent to the winner stating the instructions, also notifying me
    - Once that person clicks yet another confirmation link, the database is then automatically deleted

    I have built this myself and have no intention of breaking any rules, nor of jeopardizing the work/time/energy I have put into this project. Is this allowed?
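
    For context, the confirmation flow described is a standard double opt-in; a sketch of the token step, with hypothetical table and column names and an assumed PDO connection in $pdo:

        <?php
        // On sign-up: store an unguessable token alongside the email,
        // then mail a link like https://example.com/confirm.php?token=...
        $token = hash('sha256', uniqid(mt_rand(), true));

        // In confirm.php: mark the entry confirmed if the token matches.
        $stmt = $pdo->prepare('UPDATE entries SET confirmed = 1 WHERE token = ?');
        $stmt->execute([$_GET['token']]);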

    Read the article

  • Follow the Mac section's news on Facebook

    Hello. As you may have noticed, the Mac section now has a Facebook page, which you can see appear in the template at the top right, next to the syndication feeds. This page lets you follow the Mac section's news just as on the portal. It is one more tool we offer so you can be notified of the Mac section's new publications. We hope this Facebook page will be useful to some of you. The Mac team...

    Read the article

  • Meta Tags Keywords, Descriptions and Titles - Search Engine Optimization of Your Site Content

    Some web builders don't think that meta tag titles, descriptions, and keywords matter much for their site and page rankings anymore. It is true that search engine algorithms are constantly changing in how they determine where your pages rank. I am of the old school of thinking, and prefer to stay with my current method of search engine optimization and meta page data entry, at least for now.
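
    For reference, these are the tags in question (values are placeholders):

        <title>Handmade Leather Wallets | Example Shop</title>
        <meta name="description" content="Hand-stitched leather wallets made to order.">
        <meta name="keywords" content="leather wallets, handmade wallets">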

    Read the article

  • How to build a web service to detect content change(s) at an external website?

    - by Global nomad
    I'm researching ways to build a web service that periodically traverses a predetermined list of web pages (on another, external website) to detect whether a page's content has changed through editing, or whether the page has been deleted. The end goal is to have this web service post push-notification events to mobile devices. FYI, I've searched and read the "Questions with similar titles" here. Thank you for sharing your answers.
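
    The usual sketch of such a service hashes each fetched page and compares against the previous run (the storage and notification helpers here are hypothetical):

        <?php
        $urls = array('http://example.com/page1', 'http://example.com/page2');
        foreach ($urls as $url) {
            $body = @file_get_contents($url);
            if ($body === false) {
                notify_deleted($url);             // hypothetical: page gone or unreachable
                continue;
            }
            $hash = hash('sha256', $body);
            $previous = load_previous_hash($url); // hypothetical storage lookup
            if ($previous !== null && $previous !== $hash) {
                notify_changed($url);             // hypothetical push notification
            }
            store_hash($url, $hash);              // hypothetical storage write
        }

    In practice one would also normalize away dynamic fragments (timestamps, ads) before hashing, or the service will report a change on every fetch.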

    Read the article

  • Varnish cache and PHP session; setting header?

    - by StCee
    Varnish by default will not cache pages with cookies. I read in some posts that one workaround for PHP pages is to set header('Cache-Control: public, s-maxage=60'); in the PHP pages. But would that make Varnish cache the page along with the session cookie? A session is started on that page, and although there is nothing personal on it, I would still want the session to persist in case the user does something private later. So is there a way to cache the page without the session cookie, and still be able to pass the session between pages? I can imagine some sort of weird solution with a hidden form, but I would prefer it be done with VCL configuration or header settings. Thanks a lot!
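
    One common pattern, sketched under the assumption that anonymous visitors don't need a session yet: start the session lazily, so the cacheable public view carries no cookie while returning users keep theirs.

        <?php
        // Resume an existing session, but never create one for anonymous
        // visitors; Varnish then sees a cookie-less, cacheable response.
        if (isset($_COOKIE[session_name()])) {
            session_start();
        }
        header('Cache-Control: public, s-maxage=60');

    The page that actually begins the private interaction (login, add-to-cart, etc.) calls session_start() unconditionally, and from then on that user bypasses the cache.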

    Read the article

  • curl blocked at TMG firewall

    - by jemtube100
    I am using a TMG (Threat Management Gateway) firewall on my web server. When I try to use curl from outside, the firewall blocks the connection. What rule/setting do I need to create in TMG to allow it? The error states the following:

    Refresh page: Search for the page again by clicking the Refresh button. The timeout may have occurred due to Internet congestion.
    Check spelling: Check that you typed the Web page address correctly. The address may have been mistyped.
    Access from a link: If there is a link to the page you are looking for, try accessing the page from that link.

    Technical Information (for support personnel)
    Error Code: 403 Forbidden. The server denied the specified Uniform Resource Locator (URL). Contact the server administrator. (12202)
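
    One thing worth testing, as an assumption rather than a confirmed diagnosis: some TMG policies reject requests whose headers don't look like a browser's, so resending the request with a browser-like User-Agent shows whether that is the rule being tripped.

        # curl sends "curl/x.y.z" by default; try a browser-like agent
        curl -v -A "Mozilla/5.0 (Windows NT 6.1)" http://www.example.com/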

    Read the article

  • How to be Agile when new work keeps affecting completed work?

    - by jdln
    The project I'm working on is to re-skin an existing website. The functionality will stay the same; it's just the styles that are changing. The HTML is not changing; I'm only modifying the CSS files.

    The site is pretty complex. There are dozens of pages. Users can be logged in and have a number of different roles, and depending on their role the content of the page and which pages they are allowed to see vary. We're using Git and GitHub.

    I'm trying to write CSS that works as components, so when the same form elements, headings, etc. appear on multiple pages they are already styled and are consistent. Most of the time this is working well. Sadly, the format and class names in the HTML are at times messy and unpredictable, and when I fix something on one page it can break another. The job is also harder because no one knows exactly all the variations that are possible due to the user roles; as such, I'm continuously finding new variations as I go along.

    I'm making headway by putting a lot of comments in my CSS. If I need to remove a CSS rule, I'll comment it out so I can still see it with the Chrome dev tools, and I'll put a comment in the CSS saying why I removed it and for what page this was done. This means that if on another page I'm about to add the rule back to fix a different problem, there is more of a chance I will see how this would break the first page. This allows me to either find a different solution that will work for both pages, or make the override page-specific. This has been working quite well for me.

    If I had complete free rein and the only deadline was to finish the project by the end, this method would be fine. However, my manager is trying to mitigate risk by breaking the work into areas to be completed per sprint. This is counter to how I have been approaching things, as something like my typography styles will affect all other pages on the site.

    The other issue is that the different stakeholders want to sign off each section as I go along. However, once I've finished a section it may change if I change CSS that affects it and also affects a new section I'm working on. I've asked that the stakeholders do a quick unofficial sign-off in stages (e.g. per sprint) and the final official sign-off at the end of the project, but this is being met with resistance. I do understand why it would be higher risk to do this, but the only way to guarantee that a signed-off section will not change is to make ALL future changes page-specific.

    In addition to this, I'm being told that all work I push to the Git repo should be ready to go live, and as such should not contain any code comments. This is risky for me, as I won't know until I've finished the site whether I will ever benefit from those comments or not.

    Has anyone else been in a similar situation and managed to find a compromise that worked for my development approach and also the desire of management and stakeholders to have a more Agile approach? A more Agile workflow works great when you can break the work into components and know that once something is done it won't be affected by future work. However, the nature of this project makes this hard to achieve.
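
    For what it's worth, the commenting convention described above might look like this (selectors invented for illustration):

        /* Removed: this float broke the admin statistics page.
           Kept commented out so it is not blindly reintroduced. */
        /* .summary-table td { float: left; } */

        /* Page-specific override instead of changing the shared rule */
        .admin-stats .summary-table td { float: none; }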

    Read the article

  • DIY Search Engine Optimization

    Just because you build a beautiful web page doesn't mean they will come, and it doesn't mean you will rank well in Google. You have to help search engines know what to rank your web page for, so that the people who will appreciate what you have to offer can find it. This is called 'Search Engine Optimization', or 'SEO'.

    Read the article

  • Readying Yourself For SEO

    And all of this is what influences the page rankings of a website. Page rankings are a ranking system that determines how high up your website will appear in the search results. Most people who browse the internet only refer to the first and second page when searching for files of any sort.

    Read the article
