Search Results

Search found 28992 results on 1160 pages for 'content pages'.


  • Linking CSS Navbar With WordPress Pages

    - by JCHASE11
    I am using WordPress as a full-on CMS on a site I am building. One thing I can't seem to figure out is how to link my navigation bar to the pages I am creating in WordPress. I am using a sprite-image hover navbar that is defined in the header.php file. Does anyone have any idea how I can take a typical CSS sprite navbar and link it up to the pages I am creating within WordPress?

    Read the article

  • Dynamically resizing navigation div to main content

    - by theZod
    Greetings and hello. I am trying to put together a WordPress site. Because the content in the main div will be a different height on every page, I need the navigation sidebar to stretch to the same height. With a little JavaScript tom-foolery I can get the sidebar to be the same height with the following code:

        function adjust() {
            hgt = document.getElementById('content').offsetHeight;
            document.getElementById('sidebar').style.height = hgt + 'px';
        }
        window.onload = adjust;
        window.onresize = adjust;

    That is all good for a long page, but if the content is shorter than the sidebar, things get messy. So I have tried an if statement, like so:

        function adjust() {
            if (document.getElementById('content').style.height < document.getElementById('sidebar').style.height) {
                hgt = document.getElementById('content').offsetHeight;
                document.getElementById('sidebar').style.height = hgt + 'px';
            else
                hgt = document.getElementById('sidebar').offsetHeight;
                document.getElementById('content').style.height = hgt + 'px';
            }
        }
        window.onload = adjust;
        window.onresize = adjust;

    But that just doesn't do anything, so any ideas what's going on?
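
    A minimal sketch of one possible fix, keeping the question's 'content' and 'sidebar' ids: style.height is a string and is empty unless a height was set inline, so the comparison should use offsetHeight, and the else branch needs its own braces:

        function adjust() {
            var content = document.getElementById('content');
            var sidebar = document.getElementById('sidebar');
            // Clear any inline heights set on earlier calls, otherwise
            // they feed back into the measurements below.
            content.style.height = '';
            sidebar.style.height = '';
            // Compare the rendered pixel heights, not style.height strings.
            if (content.offsetHeight > sidebar.offsetHeight) {
                sidebar.style.height = content.offsetHeight + 'px';
            } else {
                content.style.height = sidebar.offsetHeight + 'px';
            }
        }
        window.onload = adjust;
        window.onresize = adjust;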

    Read the article

  • ASP.NET Master Page + pageLoad() = kills jQuery?

    - by Clay Angelly
    In my MasterPage, I have a ScriptManager with a ScriptReference to my jquery.js file. This has always worked with no problems: all content pages that use jQuery work fine. Recently, I added the following JavaScript block at the end of my MasterPage:

        function pageLoad(sender, args) {
        }

    Simply by adding the above pageLoad method, no jQuery code is executed from any of my content pages. Why would just having a pageLoad in the Master Page have this effect? Thanks in advance for any insight.
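
    A hedged note rather than a definitive diagnosis: pageLoad is a reserved name that the ASP.NET AJAX client runtime discovers and calls on every load, including UpdatePanel partial postbacks, so defining it hands part of the page initialization to the framework. Two alternatives that avoid the reserved global, as a sketch:

        // Register explicitly with the ASP.NET AJAX client runtime;
        // this handler runs after every full load and partial postback.
        Sys.Application.add_load(function (sender, args) {
            // re-initialization that must survive UpdatePanel postbacks
        });

        // For one-time setup, jQuery's DOM-ready event is enough.
        jQuery(function ($) {
            // one-time page initialization
        });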

    Read the article

  • How can I remove unwanted cropped pages from Acrobat

    - by Servant
    Executing the crop command in Acrobat on a 3000 pt × 2000 pt document, cropping it to 1500 pt × 1800 pt, only hides the document outside of the new boundaries; the original document remains unchanged. If anyone uses the touch-up tool and moves the content, all the "hidden" information outside the cropped page can reappear by dragging it into view. The page acts as a window (or a mask) onto the 3000 pt × 2000 pt document. Is there a solution to crop the document permanently, without reprinting it to a PDF file again? Please find pictures attached: http://i.stack.imgur.com/5JTPg.png http://i.stack.imgur.com/HPokv.png

    Read the article

  • How to retrieve content via .load() or $.get() with this line

    - by Sin
    Hello :) I posted a question a day or two ago about how to retrieve PHP via an AJAX method in a modal I was using. I sort of found the right way to go about it, but there's still something I'm not doing right (obviously, lol). Here's the section that's giving me issues:

        jQuery('div that holds content').fadeIn(200).css({ 'width': Number( popWidth ) });
        $('').load('/something/somewhere/this #content');

    I'm using Safari and a local server (MAMP). When I check activity in my browser, it shows that the content is loaded with every click, AND the pop-up pops up, but there is no content. When I simply retrieve the content via a hidden div, of course, I get it. That is what I'm trying to avoid: right now I have that div stashed as hidden in my footer, and I'd rather just make a call when it's needed, instead of loading it every single time a page is accessed. You can see the whole script I posted in my last question: How to use ajax to show php in a modal pop up. Anyone have any idea? I read that .load() has the ability to grab specific content from a request, but I'm not sure of the major difference between that and $.get(). I've tried both, and I get the same results. I'm using WordPress, and WordPress's own AJAX requests run smooth as ever, so I know it's not a local problem, it's my coding lol. Ok... I'm done typing :)
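
    One likely culprit in the snippet above: $('') matches no elements, so the loaded HTML has nowhere to land. As for the difference, a minimal sketch of the two approaches (the #popup-content and /wp-content/page.php names are hypothetical stand-ins for the elided selectors): .load() must be called on the element that should receive the HTML, and a selector after the URL keeps only that fragment, while $.get() hands back the raw response to filter and insert yourself:

        // .load(): called on the target element; the selector after the
        // URL keeps only the matching fragment of the response.
        jQuery('#popup-content').load('/wp-content/page.php #content');

        // $.get(): you receive the whole response and extract the fragment.
        jQuery.get('/wp-content/page.php', function (data) {
            var fragment = jQuery('<div>').html(data).find('#content');
            jQuery('#popup-content').html(fragment.html());
        });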

    Read the article

  • Drupal: tags order in back-end edit-content pages

    - by Patrick
    Hi, can I order the tags alphabetically on my edit-content pages? See a screenshot here: http://dl.dropbox.com/u/72686/tagsOrder.png Currently the order is given by the Taxonomy Manager module (which I installed in my Drupal). I would like to know if I should use jQuery to order the tags in my back-end pages. Thanks
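
    If jQuery does turn out to be the way to go, a minimal client-side sketch (the .tag-list selector is a hypothetical stand-in; inspect the Taxonomy Manager markup for the real wrapper):

        // Sort the immediate children of a container by their visible text.
        jQuery(function ($) {
            var list = $('.tag-list');
            var items = list.children().get();
            items.sort(function (a, b) {
                return $(a).text().localeCompare($(b).text());
            });
            $.each(items, function (i, el) {
                list.append(el); // re-appending moves each item into sorted order
            });
        });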

    Read the article

  • Aggregating and displaying content from hundreds of RSS feeds

    - by Andrew LeClair
    I'd like to build a website that aggregates and displays content from hundreds of RSS feeds. The feeds will be from different sites: Twitter, Flickr, Tumblr, etc., so the content will be very heterogeneous. In a perfect world (and this is more of a side issue) I would like to allow other people to help manage the list of feeds and assign tags to the content from each individual feed, so that you can filter the items that are displayed. What I've tried so far:

    Google Feeds API: I thought this would be the answer, but unless I'm missing something, the FeedController will only output the collected feed content as separate lists. Is there any way to ask the Google Feeds API to aggregate and sort the content from many RSS feeds before displaying it?

    Yahoo! Pipes: This also seemed like a good solution at first. I set up a Pipe that accesses a list of RSS feeds stored in a Google Docs spreadsheet and then aggregates the content. However, the output leaves a lot to be desired; Tumblr video posts, for example, only show a title and a permalink to the post, and the embedded YouTube video is lost.

    PHP: I've seen this question, which looks like a good approach. I'm less proficient in PHP, so although I'm willing to learn, I'd ideally like to find a different approach.

    Any thoughts? Thanks.
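
    On the Google Feeds API question: the API itself won't merge feeds for you, but each loaded entry carries a publishedDate, so one client-side sketch is to load the feeds individually and merge and sort them yourself before rendering (the feed URLs below are hypothetical placeholders for the managed list):

        var urls = ['http://example.com/a.rss', 'http://example.com/b.rss'];
        var all = [], pending = urls.length;

        google.load('feeds', '1');
        google.setOnLoadCallback(function () {
            for (var i = 0; i < urls.length; i++) {
                var feed = new google.feeds.Feed(urls[i]);
                feed.setNumEntries(20);
                feed.load(function (result) {
                    if (!result.error) { all = all.concat(result.feed.entries); }
                    if (--pending === 0) {
                        // All feeds have answered: sort newest-first, then render.
                        all.sort(function (a, b) {
                            return new Date(b.publishedDate) - new Date(a.publishedDate);
                        });
                        // render 'all' into the page here
                    }
                });
            }
        });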

    Read the article

  • Not crawling the same content twice

    - by sirrocco
    I'm building a small application that will crawl sites where the content is growing (like on Stack Overflow); the difference is that the content, once created, is rarely modified. Now, in the first pass I crawl all the pages on the site. But for the paged content of that site, I don't want to re-crawl all of it on the next pass, just the latest additions. So if the site has 500 pages, and on the second pass it has 501 pages, I would only crawl the first and second pages. Would this be a good way to handle the situation? In the end, the crawled content will end up in Lucene, creating a custom search engine. So I would like to avoid crawling the same content multiple times. Any better ideas? EDIT: Let's say the site has a page, Results, that is accessed like so: Results?page=1, Results?page=2, etc. I guess that keeping track of how many pages there were at the last crawl and just crawling the difference would be enough. (Maybe using a hash of each result on the page: if I start running into the same hashes, I should stop.)

    Read the article

  • reducing loading time of 100 pages of google

    - by Nikhil
    For my project I need to access entire pages (100 of them) of Google results at a time for a particular keyword. I used a 'for' loop in my C# code to access the pages by URL, but it is taking a long time, and sometimes it shows an HttpRequest error. Is there any way to increase the speed?

    Read the article

  • Prototype or jQuery for DOM manipulation (client-side dynamic content)

    - by luiggitama
    I need to know which of these two JavaScript frameworks is better for client-side dynamic content modification of known DOM elements (by id), in terms of performance, memory usage, etc.:

        Prototype's $('id').update(content)
        jQuery's    jQuery('#id').html(content)

    BTW, both libraries coexist with no conflict in my app, because I'm using RichFaces for JSF development; that's why I use "jQuery" instead of "$". I have at least 20 updatable areas on my page, and for each one I prepare content (tables, option lists, etc.) based on some user-defined client-side criteria filtering, some AJAX event, etc., like this:

        var html = [];
        var idx = 0;
        ...
        html[idx++] = '<tr><td class="cell"><span class="link" title="View" onclick="myFunction(';
        html[idx++] = param;
        html[idx++] = ')"></span>';
        html[idx++] = someText;
        html[idx++] = '</td></tr>';
        ...

    So here comes the question: which is better to use?

        // Prototype's
        $('myId').update(html.join(''));
        // or jQuery's
        jQuery('#myId').html(html.join(''));

    Other functions I need are hide() and show(), which are present in both frameworks. Which is better? I also need to enable/disable form controls and to read/set their values. Note that I know my updatable areas' ids (I don't need CSS selectors at this point). And I must mention that I'm saving these queried objects in a data structure for later use, so they are requested just once when the page is rendered, like this:

        MyData = {div1:jQuery('#id1'), div2:$('id2'), ...};
        ...
        div1.update('content 1');
        div2.html('content 2');

    So, which is the best practice?
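
    Since the question hinges on measured performance, one option is to benchmark both calls directly in the app; a rough sketch (console.time is a Firebug/WebKit console feature, and 'myId' is taken from the snippet above):

        var markup = html.join('');

        console.time('prototype-update');
        for (var i = 0; i < 1000; i++) { $('myId').update(markup); }
        console.timeEnd('prototype-update');

        console.time('jquery-html');
        for (var j = 0; j < 1000; j++) { jQuery('#myId').html(markup); }
        console.timeEnd('jquery-html');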

    Read the article

  • jQuery/AJAX: Loading external DIVs using dynamic content

    - by ticallian
    I need to create a page that will load divs from an external page using jQuery and AJAX. I have come across a few good tutorials, but they are all based on static content; my links and content are generated by PHP. The main tutorial I am basing my code on is from: http://yensdesign.com/2008/12/how-to-load-content-via-ajax-in-jquery/ The exact function I need is as follows: The main page contains a permanent div listing some links containing a parameter. On click, a link passes its parameter to the external page. The external page filters a recordset against the parameter and populates a div with the results. The new div contains a new set of links with new parameters. The external div is loaded underneath the main page's first div. The process can then be repeated, creating a chain of divs under each other. The last div in the chain will then direct to a new page collating all the previously used query strings. I can handle all of the PHP work of populating the divs on the main and external pages; it's the jQuery and AJAX part I'm struggling with.

        $(document).ready(function(){
            var sections = $('a[id^=link_]');     // links that pass a parameter to the external page
            var content = $('div[id^=content_]'); // where the external div is loaded to
            sections.click(function(){
                // load the selected section
                switch(this.id){
                    case "div01":
                        content.load("external.php?param=1 #section_div01");
                        break;
                    case "div02":
                        content.load("external.php?param=2 #section_div02");
                        break;
                }
            });
        });

    The problem I am having is getting jQuery to pass the dynamically generated parameters to the external page and then retrieve the new div. I can currently only do this with static links (as above).
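
    A sketch of one way to make the parameters dynamic (hedged: the section-link class, data-param attribute and #content_main id are hypothetical names, and the delegated .on() needs jQuery 1.7+; on older versions, .live('click', ...) plays the same role). Instead of a hard-coded switch, have PHP emit the parameter on each link, and delegate the click handler so that links inside freshly loaded divs work too:

        // PHP emits each link with its own parameter, e.g.:
        //   <a class="section-link" data-param="42">Item 42</a>
        $(document).on('click', 'a.section-link', function (e) {
            e.preventDefault();
            var param = $(this).attr('data-param'); // generated by PHP
            // Append a new div to the chain and load the filtered fragment into it.
            var target = $('<div class="loaded-section"></div>').appendTo('#content_main');
            target.load('external.php?param=' + encodeURIComponent(param) +
                        ' #section_' + param);
        });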

    Read the article

  • dynamic meta data and description for pages

    - by pradeep
    Hi, I am developing pages in PHP dynamically, i.e. the data gets filled in from a MySQL DB. How do I assign a proper meta description (and other meta data) to these dynamic pages so that Google recognises them properly? What needs to be output in a page so that Google picks up the description correctly? When I search for one of my pages in Google, it takes the body text of the page as the description, not the contents of the description tag.

    Read the article

  • How to make a piece of WPF content take up the entire application window

    - by Bojin Li
    I'm working on an application that contains a number of content areas. I want to implement a behavior such that, in response to user input, any of these content areas can be toggled to fill the entire application window, and optionally back to its original position again. I experimented with several approaches, and none of them seems optimal to me. Here's what I tried:

    Use the ClipToBounds property on the content I want to make "full screen": doesn't work, because only the Canvas panel seems to fully respect this property. The application needs to be localized, so I would really like to avoid the Canvas panel.

    Use a Grid and collapse the other content areas, so that only the one I want to see is visible, hence taking up the entire screen: this would probably work, but doesn't seem easy to implement or maintain. The "full screen" content area could be several levels deep, for example residing inside a TabControl, so I would have to hide the tab headers too, etc.

    Reconstruct the content area in a separate view and display it while hiding the rest: seems easy enough to do with DataTemplates and my ViewModel objects, but any GUI/view-only state is not preserved using this approach.

    Somehow "lift" the GUI/view I want to "full screen" into the separate view and display it while hiding the rest: I don't know how to do this, or even if it is possible.

    Anyway, if anyone knows a better approach, I would love to hear about it. Thanks a lot!

    Read the article

  • drupal jQuery 1.4 on specific pages

    - by Mark
    jQuery 1.4 breaks various modules and is not ready to replace 1.3.2 wholesale, but on various pages with complex JavaScript interactions I need 1.4. What's a good way to force Drupal to use 1.4 on specific pages? Thanks.
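
    One JavaScript-side sketch, hedged: this sidesteps Drupal's own library handling (which a module such as jQuery Update would normally manage), and it assumes jquery-1.4.js has been added after Drupal's bundled 1.3.2 on just the pages that need it, e.g. via drupal_add_js() in a page-specific hook:

        // Keep 1.4 under a new name; $ and jQuery revert to the bundled 1.3.2,
        // so existing Drupal modules are unaffected.
        var jq14 = jQuery.noConflict(true);
        jq14(function ($) {
            // Inside this wrapper, $ is jQuery 1.4 for the complex interactions.
        });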

    Read the article

  • Serve pages based on domain by using htaccess

    - by Safwan Erooth
    I have a site, mysite.com, with different sections, and several domains pointing to the same site. What I'm trying to achieve: say the site has the domains mysite.com, section1.com and section2.com. If the user comes from section1.com, the user should get mysite.com/section1/; it should not be redirected, it should be rewritten, like this:

        mysite.com/section1/something should become section1.com/something
        mysite.com/section1/another   should become section1.com/another

    In the same way, if the user is coming from section2.com, the content served should be mysite.com/section2/, with the URLs rewritten like:

        mysite.com/section2/something should become section2.com/something
        mysite.com/section2/another   should become section2.com/another

    Read the article

  • Redirect Google crawler to different robots.txt via .htaccess

    - by user3474818
    I have googled for the answer all day and still couldn't find one. I have a virtual subdomain, www.static.example.com, which is a mirror of www.example.com; that is, I have just one root folder for both the subdomain and the domain. I want to redirect crawlers to a different robots.txt file - robots_static.txt - when they see .static in the URL; in that file I will forbid indexing via a Disallow rule. I want to do this because I have duplicated content in Google search results: the subdomain shows exactly the same content as the main domain. Does anyone know how I could make crawlers see robots_static.txt instead of robots.txt? What I have managed to find so far is this:

        RewriteCond %{HTTP_HOST} ^www.static.*$ [NC]
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC]
        RewriteRule ^robots\.txt /robots_static.txt [NC,L]

    but when I check in Webmaster Tools, it still sees robots.txt as my robots file instead of robots_static.txt, so it crawls and indexes everything twice. What did I do wrong? Thanks

    EDIT: This is my .htaccess file:

        ##
        # @package    Joomla
        # @copyright  Copyright (C) 2005 - 2013 Open Source Matters. All rights reserved.
        # @license    GNU General Public License version 2 or later; see LICENSE.txt
        ##

        ##
        # READ THIS COMPLETELY IF YOU CHOOSE TO USE THIS FILE!
        #
        # The line just below this section: 'Options +FollowSymLinks' may cause problems
        # with some server configurations. It is required for use of mod_rewrite, but may already
        # be set by your server administrator in a way that dissallows changing it in
        # your .htaccess file. If using it causes your server to error out, comment it out (add # to
        # beginning of line), reload your site in your browser and test your sef url's. If they work,
        # it has been set by your server administrator and you do not need it set here.
        ##

        ## Can be commented out if causes errors, see notes above.
        Options +FollowSymLinks

        ## Mod_rewrite in use.
        RewriteEngine On
        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^www\.
        RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]
        RewriteCond %{HTTP_HOST} ^www.static.*$ [NC]
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC]
        RewriteRule ^robots\.txt /robots_static.txt [NC,L]

        ## Begin - Rewrite rules to block out some common exploits.
        # If you experience problems on your site block out the operations listed below
        # This attempts to block the most common type of exploit `attempts` to Joomla!
        #
        # Block out any script trying to base64_encode data within the URL.
        RewriteCond %{QUERY_STRING} base64_encode[^(]*\([^)]*\) [OR]
        # Block out any script that includes a <script> tag in URL.
        RewriteCond %{QUERY_STRING} (<|%3C)([^s]*s)+cript.*(>|%3E) [NC,OR]
        # Block out any script trying to set a PHP GLOBALS variable via URL.
        RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR]
        # Block out any script trying to modify a _REQUEST variable via URL.
        RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2})
        # Return 403 Forbidden header and show the content of the root homepage
        RewriteRule .* index.php [F]
        #
        ## End - Rewrite rules to block out some common exploits.

        ## Begin - Custom redirects
        #
        # If you need to redirect some pages, or set a canonical non-www to
        # www redirect (or vice versa), place that code here. Ensure those
        # redirects use the correct RewriteRule syntax and the [R=301,L] flags.
        #
        ## End - Custom redirects

        ##
        # Uncomment following line if your webserver's URL
        # is not directly related to physical file paths.
        # Update Your Joomla! Directory (just / for root).
        ##
        # RewriteBase /

        RewriteCond %{THE_REQUEST} ^GET.*index\.php [NC]
        RewriteCond %{THE_REQUEST} !/system/.*
        RewriteRule (.*?)index\.php/*(.*) /$1$2 [R=301,L]
        RewriteCond %{THE_REQUEST} ^GET

        ## Begin - Joomla! core SEF Section.
        #
        RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
        #
        # If the requested path and file is not /index.php and the request
        # has not already been internally rewritten to the index.php script
        RewriteCond %{REQUEST_URI} !^/index\.php
        # and the request is for something within the component folder,
        # or for the site root, or for an extensionless URL, or the
        # requested URL ends with one of the listed extensions
        RewriteCond %{REQUEST_URI} /component/|(/[^.]*|\.(php|html?|feed|pdf|vcf|raw))$ [NC]
        # and the requested path and file doesn't directly match a physical file
        RewriteCond %{REQUEST_FILENAME} !-f
        # and the requested path and file doesn't directly match a physical folder
        RewriteCond %{REQUEST_FILENAME} !-d
        # internally rewrite the request to the index.php script
        RewriteRule .* index.php [L]
        #
        ## End - Joomla! core SEF Section.

        <FilesMatch "\.(ico|pdf|flv|jpg|ttf|jpg|jpeg|png|gif|js|css|swf)$">
            Header set Expires "Wed, 15 Apr 2020 20:00:00 GMT"
            Header set Cache-Control "public"
        </FilesMatch>

        <ifModule mod_headers.c>
            Header set Connection keep-alive
        </ifModule>

        ########## Begin - Remove Etags
        #
        FileETag none
        #
        ########## End - Remove Etags

    Read the article

  • How to get html/css/jpg pages served by both Apache & Tomcat with mod_jk

    - by user53864
    I have Apache 2 and Tomcat 6 both running on port 80 with mod_jk set up, on Ubuntu servers. I had to set up an error document (ErrorDocument 503 /maintenance.html) in the Apache configuration, and I managed to get it to work: the error page is served by Apache when Tomcat is stopped. The developers created a good-looking error page (an HTML page which pulls in CSS and JPG files), and I have been asked to get this page served by Apache when Tomcat is down. When I tried JkUnMount /*.css in the virtual host, the actual Tomcat JSP pages didn't work properly (they lost their formatting), as the Tomcat applications use JSP, CSS, JS, JPG and so on. I'm trying to work out whether .css and .jpg can be served by both Apache and Tomcat, so that when Tomcat is down the CSS and JPG are served by Apache and the proper error document appears. Does anyone have a technique? Here is my Apache 2 configuration (/etc/apache2/apache2.conf):

        Alias / /var/www/
        ErrorDocument 503 /maintenance.html
        ErrorDocument 404 /maintenance.html

        JkMount / myworker
        JkMount /* myworker
        JkMount /*.jsp myworker
        JkUnMount /*.html myworker

        <VirtualHost *:80>
            ServerName station1.mydomain.com
            DocumentRoot /usr/share/tomcat/webapps/myapps1
            JkMount /* myworker
            JkUnMount /*.html myworker
        </VirtualHost>

        <VirtualHost *:80>
            ServerName station2.mydomain.com
            DocumentRoot /usr/share/tomcat/webapps/myapps2
            JkMount /* myworker
            JkMount /*.html myworker
        </VirtualHost>

    Read the article

  • IIS 7 Serving all pages with an injected iframe [closed]

    - by Andre Carlucci
    Possible Duplicate: My server's been hacked EMERGENCY My VPS just got hacked an all my pages are being served with an malicious iframe injected just before the html tag. The code is like this: <iframe src= http://117.21.247.171:700/1.htm width=0 height=0></iframe> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" dir="ltr" lang="pt-BR"> ... Firstly I thought it could be something related with wordpress, but my asp.net sites are also infected and even if I create a static html file with nothing inside, the iframe is injected. I'm using a Windows Server 2008 R2 Standard with IIS7.5 7600. Please, I'm trying to find the source of this for hours now, any help would be really appreciated. EDIT: Hey, why was this closed? I'm very interested to know how that be done in IIS instead of simply re-installing everything. Andre

    Read the article

  • Linking to network shares from Sharepoint pages

    - by Russell C
    So the place I work decided to set up a Microsoft SharePoint 2010 server for task management, and I (as the lowly entry-level intern) have been tasked with "figuring it out." One thing that the end users really, really, really want is the ability to link to network shares (that are readable by anyone who will be using SharePoint) from a SharePoint web page. In order to do this, I have edited the HTML manually with several lines that look like the following:

        <a href="file://server/share">Server Share</a>

    This works (sometimes), but the link reported by SharePoint is often wrong, and editing pages that contain these links will mangle the code, such that when I open the page again the code no longer looks like it did when I last hit save (breaking all those links). Obviously this is not sustainable. I've been told by coworkers that "it worked that way at the last place I worked," but I haven't found out how yet. Any ideas on how this would work, or am I barking up the wrong tree? None of the knowledge-base searches I've done shed any light on the situation. Thanks for any help! -Russell P.S. It should be noted that the file:// scheme in an href only works in IE (which is a real bummer, since we mostly use Firefox).

    Read the article

  • How to serve pages through multiple frameworks/template engines efficiently

    - by Leftium
    I would like to render a file that has both PHP tags and Web2Py tags mixed together. To do this, I would like the web server to pass the file through Web2Py, then PHP. I found a method to call PHP from Web2Py via Python (based on this method for running PHP on top of Django), but that method loses the benefits of any server optimizations from mod_php or FastCGI, like caching and multi-threaded operation: a new process is created for each PHP request, which is very slow. Is there a better way to efficiently render pages with both Web2Py (Python) and PHP tags in the same file? Note that I am not looking for methods of serving PHP-only and Web2Py-only files from the same server/domain. I prefer solutions for Apache 2 or Cherokee, but I'm open to using other web servers. Background info: I prefer to develop in Web2Py, but we have a pre-existing system written in PHP. I would like to augment the PHP system with some of Web2Py's features, like its auth authentication/user management and the T() internationalization object. It would also make it much easier to port the PHP project to Web2Py if it could be done piecemeal. Since the PHP project consists of many files, it would help greatly if they did not need modification.

    Read the article

  • Iframe pages on Facebook does not show in Internet Explorer 9 - Windows 7 64-bit

    - by Morten
    I have this very irritating problem with Internet Explorer 9 and Facebook. If I go to Facebook and view a page with iframes (like IFBML pages), it will not show up in Internet Explorer 9. It shows up in Firefox 4 and Chrome 10, but not in Internet Explorer 9. I run Windows 7 64-bit SP1 (Danish). The strange thing is that I own three different PCs, they all run Windows 7 64-bit SP1, and all of them have this issue. I can't figure out what causes it. I have tried the following:

    Uninstalled AVG antivirus and installed Microsoft antivirus - no change.
    Updated Windows with SP1 - no change.
    Updated from Internet Explorer 9 beta to Internet Explorer 9 final edition - no change.
    Emptied cache and temp files in Internet Explorer 9 - no change.
    Made www.facebook.com a trusted site in Internet Explorer 9 - no change.

    And a lot of other things I can no longer remember, I guess... but nothing seems to work. As I spend quite a lot of my working time developing Facebook fan pages, it is frustrating not to be able to test them in Internet Explorer 9. BTW, it is Internet Explorer 9 32-bit, not 64-bit. Any clues?

    Read the article

  • IIS serving pages extremely slowly

    - by mos
    TL;DR: IIS 7 on Windows Server 2008 R2 serves pages really slowly; everyone assumes it's because it's IIS and that we should have gone with an Apache solution on Linux. I have no idea where to start debugging the problem. I work in a nearly all-MS shop with a bunch of fellow programmers who think Linux is the One True Way. Management recently added a Windows machine with IIS to serve Target Process (a third-party agile system), but the site runs extremely slowly. Everyone, to a man, assumes it's because it's on IIS: if only management would grow a brain and get some Linux servers in here, we could really start cleaning things up! ...Right. Everyone "knows" IIS isn't fit to serve .txt files. ...Well, as the only non-Microsoft hater in the bunch, I am apparently the only one who thinks maybe the Linux guy who hated being told to set up the IIS server may have screwed things up. I'd like to go fix it, but I don't have any clue as to where to start, as I am not a sysadmin. Help?

    Read the article
