Search Results

Search found 21078 results for 'content negotiation'.


  • How can I exclude content in my notifications bar from being indexed?

    - by Liam E-p
    Of course I want my content to be indexed quickly by search engines, but not my notifications bar. My notifications bar contains the last 30 changes to content on the site, and I don't want it showing up in search-result snippets. All the notifications are generic, so they rarely provide any relevant information: if an article named "123" is created, it generates a notification that says "Article "123" was created by xxx at 12:00AM". I'm now wondering if this is a content design problem, since only about a third of that information (the title and what happened) is actually relevant to users. Basically, what I'm wondering is how I can optimise this so search engines won't show this generic text in their snippets.
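
    One option, if Google is the main concern, is the data-nosnippet attribute, which tells Google not to use the marked-up section in search snippets (other engines may ignore it). A minimal sketch, with the element name invented for illustration:

        <!-- data-nosnippet keeps this block out of Google's snippets; -->
        <!-- it is honored on span, div and section elements.          -->
        <div id="notifications-bar" data-nosnippet>
          <ul>
            <li>Article "123" was created by xxx at 12:00AM</li>
            <!-- ...last 30 changes... -->
          </ul>
        </div>

    Alternatives worth considering: load the bar with JavaScript from a URL that robots.txt disallows, or serve it in an iframe whose source page carries a noindex robots meta tag.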

    Read the article

  • Which is the appropriate content-type meta tag value?

    - by Argoron
    I have a question about the meta tag content-type. When starting to build my site (HTML+PHP+JS), I copied a lot of the meta tags over from elsewhere, and I have, amongst others, the following: <meta http-equiv="content-type" content="application/xhtml+xml; charset=iso-8859-1" /> Now, I've seen that tag used a lot with the value "text/html". I've been searching the web but could not find a comprehensive explanation of the difference between the two. "text/html" intuitively sounds more straightforward to me. Should I change my tag to that, or is "application/xhtml+xml" an equivalent solution? Alternatively, can anyone point me to a resource where the different values for this tag are listed and explained clearly? Thanks in advance.
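
    For an HTML site served as text/html (which is what virtually every server sends by default, and the only thing IE 8 and earlier can render), text/html is the safe choice; application/xhtml+xml is only appropriate for true XHTML served with that actual HTTP Content-Type header, and the meta tag should match what the server really sends. A sketch of the common declarations, keeping the iso-8859-1 encoding from the question:

        <!-- classic form, matching a text/html HTTP response -->
        <meta http-equiv="content-type" content="text/html; charset=iso-8859-1" />

        <!-- HTML5 shorthand, if your doctype allows it -->
        <meta charset="iso-8859-1" />

    The authoritative list of values is the IANA media types registry (http://www.iana.org/assignments/media-types/), and the W3C note "XHTML Media Types" covers the text/html vs. application/xhtml+xml question specifically.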

    Read the article

  • 404 code/header for search engines on removed user content?

    - by mowgli
    I just got an email from a former user of my website. He was complaining that Google still shows the contact page he created on my site, even though he deleted it a month ago. This is the first time in many years that anyone has requested this. I told him that it's almost entirely up to Google what content it wants to keep/show and for how long; if it's deleted on the site, I can't do much other than request a re-visit from the Googlebot. The user page already says something like "Not found. The user has removed the content". TL;DR: The question is: should I generally return a 404 header (or another status code) for dynamic user content that has been removed from the site? Or could this hurt the site (SEO)?
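
    Returning a real 404 (or better, 410 Gone) for removed user content will not hurt the site; what hurts is serving the "Not found" page with a 200 status, which search engines treat as a soft 404 and may keep indexed. A minimal sketch, assuming a PHP handler renders the user pages (the variable name is hypothetical):

        <?php
        if ($userContentDeleted) {
            // 410 signals permanent removal; Google tends to drop 410 URLs
            // slightly faster than 404s, though 404 also works.
            header('HTTP/1.1 410 Gone');
            echo 'Not found. The user has removed the content.';
            exit;
        }

    For an individual URL you can also speed things up with the URL removal tool in Google Webmaster Tools.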

    Read the article

  • Should I include everything in the sitemap or only new content?

    - by Mee
    For a website with dynamic content (new content is constantly being added), should I include only the newest content in the sitemap, or should I include everything (with a sitemap index)? What are the best practices for sitemaps, especially for large sites? Also, is there any way to make Google (and other search engines) crawl only the pages in the sitemap? Thanks. Update: Also, any idea how Stack Overflow handles this? I'd like to know, but unfortunately (also understandably) they have blocked access to their sitemap.
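
    The usual practice for large, growing sites is to include everything, split across dated or sectioned sitemap files (at most 50,000 URLs each) referenced from one sitemap index, so only the newest file changes as content is added. A sketch, with the example.com paths invented for illustration:

        <?xml version="1.0" encoding="UTF-8"?>
        <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <sitemap>
            <loc>http://example.com/sitemaps/sitemap-2010-03.xml.gz</loc>
            <lastmod>2010-03-31</lastmod>
          </sitemap>
          <sitemap>
            <loc>http://example.com/sitemaps/sitemap-2010-04.xml.gz</loc>
            <lastmod>2010-04-15</lastmod>
          </sitemap>
        </sitemapindex>

    As for restricting crawling to sitemap URLs only: you can't. A sitemap is a hint, not a crawl boundary; the only exclusion mechanisms engines honor are robots.txt and noindex meta tags.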

    Read the article

  • How can I prevent my site from being branded a "content farm"?

    - by Fredashay
    I'm building a small social Q&A site. Another Q&A site that I use was recently branded by Google as a content farm and removed from Google results. I know Wikipedia's definition of a content farm (low-quality paid articles and spammy text across the page to catch search engines), but that other site doesn't do those things, so there must be more to it than that. I want to make sure I don't do anything that causes Google to think my Q&A site is a content farm. What should I do, or avoid doing, in designing my site layout?

    Read the article

  • Content Management Systems - Why Should I Build My Website With One?

    Why would you want your website built using a Content Management System (CMS)? Well, there are quite a few compelling reasons. A CMS is driven by a database that stores all the content of the website and delivers pages only when the user's browser calls for them. The CMS has a "back end" where content is added to the database, and all you need to use it is your browser. This changes everything! It means that, for the first time, a site owner can make changes to their website when they want to and how they want to - read on for more information...

    Read the article

  • Is it possible to view the contents of an underlying NFS mount without unmounting the NFS content?

    - by Brent
    I have a shared directory on a server - let's call it /home/shared - which is mounted with content from another server via NFS. When it is unmounted, /home/shared is supposed to be empty; however, running du -x on the directory indicates that it is not. I cannot unmount the NFS content to inspect the mount point, since it is in use by others. Is there any way that I can view/edit the contents of the actual mount point (not the NFS content) while leaving the NFS content mounted for others to use?
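
    One common trick is to bind-mount the parent filesystem somewhere else; the bind view shows what is on disk underneath /home/shared rather than the NFS mount, without disturbing it. A sketch, assuming /home/shared lives on the root filesystem (bind-mount /home instead if /home is its own filesystem):

        mkdir /mnt/rootfs
        mount --bind / /mnt/rootfs
        ls -la /mnt/rootfs/home/shared   # the files hiding under the mount point
        umount /mnt/rootfs               # clean up when done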

    Read the article

  • MOSS Content Query Web part itemstyle.xsl

    - by nav
    Hi, I have a Content Query Web Part (CQWP) pulling the URL and title from a NewsLinks list. The CQWP uses the XSLT style LVIS.News.Links defined in ItemStyle.xsl. I need to sort on the @Title0 field, but the xsl:sort (commented out below) causes an error when enabled. Does anyone know what's causing this error? Many thanks. The XSLT code is below:

        <xsl:template name="LVIS.News.Links" match="Row[@Style='LVIS.News.Links']" mode="itemstyle">
          <xsl:param name="CurPos" />
          <xsl:param name="Last" />
          <xsl:variable name="SafeLinkUrl">
            <xsl:call-template name="OuterTemplate.GetSafeLink">
              <xsl:with-param name="UrlColumnName" select="'LinkUrl'"/>
            </xsl:call-template>
          </xsl:variable>
          <xsl:variable name="DisplayTitle">
            <xsl:call-template name="OuterTemplate.GetTitle">
              <xsl:with-param name="Title" select="@URL"/>
              <xsl:with-param name="UrlColumnName" select="'URL'"/>
            </xsl:call-template>
          </xsl:variable>
          <xsl:variable name="LinkTarget">
            <xsl:if test="@OpenInNewWindow = 'True'">_blank</xsl:if>
          </xsl:variable>
          <xsl:variable name="SafeImageUrl">
            <xsl:call-template name="OuterTemplate.GetSafeStaticUrl">
              <xsl:with-param name="UrlColumnName" select="'ImageUrl'"/>
            </xsl:call-template>
          </xsl:variable>
          <xsl:variable name="Header">
            <xsl:if test="$CurPos = 1">
              <![CDATA[<ul class="list_Links">]]>
            </xsl:if>
          </xsl:variable>
          <xsl:variable name="Footer">
            <xsl:if test="$Last = $CurPos">
              <![CDATA[</ul>]]>
            </xsl:if>
          </xsl:variable>
          <xsl:value-of select="$Header" disable-output-escaping="yes" />
          <li>
            <a>
              <xsl:attribute name="href">
                <xsl:value-of select="substring-before($DisplayTitle,', ')"></xsl:value-of>
              </xsl:attribute>
              <xsl:attribute name="title">
                <xsl:value-of select="@Description"/>
              </xsl:attribute>
              <!-- <xsl:sort select="@Title0"/> -->
              <xsl:value-of select="@Title0"></xsl:value-of>
            </a>
          </li>
          <xsl:value-of select="$Footer" disable-output-escaping="yes" />
        </xsl:template>
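
    The error is most likely the placement: xsl:sort is only valid as a direct child of xsl:apply-templates or xsl:for-each, so it cannot sit inside the <a> element of an item template. Sorting has to happen where the rows are selected, i.e. in the outer template (ContentQueryMain.xsl) or via the web part's own sort settings. A sketch of the apply-templates form (the select expression and mode are illustrative; match them to the call your ContentQueryMain.xsl already makes):

        <xsl:apply-templates select="$Rows" mode="itemstyle">
          <xsl:sort select="@Title0" />
        </xsl:apply-templates>

    If you would rather not touch ContentQueryMain.xsl, setting the sort field in the CQWP's Presentation settings achieves the same thing.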

    Read the article

  • Hard crash when drawing content for CALayer using quartz

    - by Lukasz
    I am trying to figure out why iOS crashes my application the hard way (no crash logs, immediate shutdown, a black screen of death with a spinner shown for a while). It happens when I render content for a CALayer using Quartz. I suspected a memory issue (it happens only when testing on the device), but the memory logs, as well as the Instruments allocation logs, look quite OK. Let me paste in the fatal function:

        - (void)renderTiles
        {
            if (rendering) {
                //NSLog(@"====== RENDERING TILES SKIP =======");
                return;
            }
            rendering = YES;
            CGRect b = tileLayer.bounds;
            CGSize s = b.size;
            CGFloat imageScale = [[UIScreen mainScreen] scale];
            s.height *= imageScale;
            s.width *= imageScale;
            dispatch_async(queue, ^{
                NSLog(@"");
                NSLog(@"====== RENDERING TILES START =======");
                NSLog(@"1. Before creating context");
                report_memory();
                CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
                NSLog(@"2. After creating color space");
                report_memory();
                NSLog(@"3. About to create context with size: %@", NSStringFromCGSize(s));
                CGContextRef ctx = CGBitmapContextCreate(NULL, s.width, s.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
                NSLog(@"4. After creating context");
                report_memory();
                CGAffineTransform flipTransform = CGAffineTransformMake(1.0, 0.0, 0.0, -1.0, 0.0, s.height);
                CGContextConcatCTM(ctx, flipTransform);
                CGRect tileRect = CGRectMake(0, 0, tileImageScaledSize.width, tileImageScaledSize.height);
                CGContextDrawTiledImage(ctx, tileRect, tileCGImageScaled);
                NSLog(@"5. Before creating cgimage from context");
                report_memory();
                CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
                NSLog(@"6. After creating cgimage from context");
                report_memory();
                dispatch_sync(dispatch_get_main_queue(), ^{
                    tileLayer.contents = (id)cgImage;
                });
                NSLog(@"7. After asgning tile layer contents = cgimage");
                report_memory();
                CGColorSpaceRelease(colorSpace);
                CGContextRelease(ctx);
                CGImageRelease(cgImage);
                NSLog(@"8. After releasing image and context context");
                report_memory();
                NSLog(@"====== RENDERING TILES END =======");
                NSLog(@"");
                rendering = NO;
            });
        }

    Here are the logs:

        ====== RENDERING TILES START =======
        1. Before creating context
        Memory in use (in bytes): 28340224 / 519442432 (5.5%)
        2. After creating color space
        Memory in use (in bytes): 28340224 / 519442432 (5.5%)
        3. About to create context with size: {6324, 5208}
        4. After creating context
        Memory in use (in bytes): 28344320 / 651268096 (4.4%)
        5. Before creating cgimage from context
        Memory in use (in bytes): 153649152 / 651333632 (23.6%)
        6. After creating cgimage from context
        Memory in use (in bytes): 153649152 / 783159296 (19.6%)
        7. After asgning tile layer contents = cgimage
        Memory in use (in bytes): 153653248 / 783253504 (19.6%)
        8. After releasing image and context context
        Memory in use (in bytes): 21688320 / 651288576 (3.3%)
        ====== RENDERING TILES END =======

    The application crashes in random places, sometimes when reaching the end of the function and sometimes at a random step. Which direction should I look in for a solution? Is it possible that GCD is causing the problem? Or maybe the context size, or some underlying Core Animation references?
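
    For what it's worth, the context size alone suggests the silent kill is iOS's out-of-memory watchdog rather than anything Instruments would flag: termination with no crash log, a spinner, and a black screen is exactly how the system kills a process that exhausts memory. A rough estimate (a sketch, not exact accounting):

        6324 x 5208 px x 4 bytes/px  =  about 132 MB for the bitmap context
        CGBitmapContextCreateImage() copies it  ->  roughly double that at peak

    Rendering at screen resolution instead of full layer-pixel resolution, or switching to a CATiledLayer so only the visible tiles are ever drawn, are the usual ways to bring that peak down.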

    Read the article

  • Window's content disappears when minimized

    - by Carmen Cojocaru
    I have a simple class that draws a line when mouse dragging or a dot when mouse pressing(releasing). When I minimize the application and then restore it, the content of the window disappears except the last dot (pixel). I understand that the method super.paint(g) repaints the background every time the window changes, but the result seems to be the same whether I use it or not. The difference between the two of them is that when I don't use it there's more than a pixel painted on the window, but not all my painting. How can I fix this? Here is the class. package painting; import java.awt.*; import java.awt.event.MouseAdapter; import java.awt.event.MouseEvent; import java.awt.event.MouseMotionAdapter; import javax.swing.JFrame; import javax.swing.JPanel; class CustomCanvas extends Canvas{ Point oldLocation= new Point(10, 10); Point location= new Point(10, 10); Dimension dimension = new Dimension(2, 2); CustomCanvas(Dimension dimension){ this.dimension = dimension; this.init(); addListeners(); } private void init(){ oldLocation= new Point(0, 0); location= new Point(0, 0); } public void paintLine(){ if ((location.x!=oldLocation.x) || (location.y!=oldLocation.y)) { repaint(location.x,location.y,1,1); } } private void addListeners(){ addMouseListener(new MouseAdapter(){ @Override public void mousePressed(MouseEvent me){ oldLocation = location; location = new Point(me.getX(), me.getY()); paintLine(); } @Override public void mouseReleased(MouseEvent me){ oldLocation = location; location = new Point(me.getX(), me.getY()); paintLine(); } }); addMouseMotionListener(new MouseMotionAdapter() { @Override public void mouseDragged(MouseEvent me){ oldLocation = location; location = new Point(me.getX(), me.getY()); paintLine(); } }); } @Override public void paint(Graphics g){ super.paint(g); g.setColor(Color.red); g.drawLine(location.x, location.y, oldLocation.x, oldLocation.y); } @Override public Dimension getMinimumSize() { return dimension; } @Override public Dimension getPreferredSize() { return dimension; } } class CustomFrame extends JPanel { JPanel displayPanel = new JPanel(new BorderLayout()); CustomCanvas canvas = new CustomCanvas(new Dimension(200, 200)); public CustomFrame(String titlu) { canvas.setBackground(Color.white); displayPanel.add(canvas, BorderLayout.CENTER); this.add(displayPanel); } } public class CustomCanvasFrame { public static void main(String args[]) { CustomFrame panel = new CustomFrame("Test Paint"); JFrame f = new JFrame(); f.add(panel); f.pack(); SwingConsole.run(f, 700, 700); } }
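
    Nothing painted incrementally survives a re-expose: AWT hands paint() a freshly cleared surface each time, so a component must be able to redraw its entire state there. The usual fix is to record the drawing (an offscreen image, or a list of segments) and blit it in paint(). A minimal sketch of the offscreen-buffer approach (class and method names are illustrative, not from the original code):

        import java.awt.*;
        import java.awt.image.BufferedImage;

        class DrawingBuffer {
            private final BufferedImage buffer;

            DrawingBuffer(int w, int h) {
                buffer = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
                Graphics2D g = buffer.createGraphics();
                g.setColor(Color.white);
                g.fillRect(0, 0, w, h);          // white background, painted once
                g.dispose();
            }

            // call from the mouse handlers instead of relying on repaint(x, y, 1, 1)
            void addSegment(Point from, Point to) {
                Graphics2D g = buffer.createGraphics();
                g.setColor(Color.red);
                g.drawLine(from.x, from.y, to.x, to.y);
                g.dispose();
            }

            // call from paint(Graphics g); restores everything after minimize/restore
            void paintOnto(Graphics g) {
                g.drawImage(buffer, 0, 0, null);
            }
        }

    The canvas's paint() then reduces to super.paint(g) followed by buffer.paintOnto(g), and the mouse handlers call addSegment(...) plus repaint().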

    Read the article

  • Content display problems when using Suckerfish menus with 960.gs and IE

    - by Cedar Jensen
    I'm using 960.gs layout and when I add the suckerfish menu as part of the content to one of the grids, the contents of adjacent siblings bleed through the menu in all versions of IE. In the listed html below, the text from 'belowFoldSection' will appear through the menu when it is visible and has enough items to make it span over 2nd section. However, the contents of 'introSummary' will be underneath the menu, as expected. I've set the z-index for #nav and #nav ul in my css and this of course makes it work in FF, Chrome and Safari, but not in IE (because IE incorrectly assigns child elements its own z-index). If I change the .grid_nn class 'position' attribute (set by default in the 960 template) from 'relative' to absolute, this fixes it in IE. However, it is my understanding that I don't want the child elements of the 'container_12' to be taken out of the flow of the document and want them positioned relative to the .container_12's starting point. (Changing the attribute to absolute causes other general layout problems) Can anyone suggest a work-around? My html: <div class="container_12"> <!--First section where menu lives--> <div class="grid_12" id="mainSection"> <div class="grid_4 alpha" id="intro"> <p>Start of menu here</p> <div id="subMenu"> <ul id="nav"> <li><a href="#">Item 1</a> <ul> <li><a href="#">Burrowing gobies</a></li> <li><a href="#">Dartfishes</a></li> <li><a href="#">Eellike gobies</a></li> <!--10 more for longer list --> </ul> </li> <li><a href="#">Item 2</a> <ul> <li><a href="#">Remoras</a></li> <li><a href="#">Tilefishes</a></li> <!--10 more for longer list --> </ul> </li> <li><a href="#">Item 3</a> <ul> <li><a href="#">Climbing perches</a></li> <li><a href="#">Labyrinthfishes</a></li> <li><a href="#">Kissing gouramis</a></li> <!--10 more for longer list --> </ul> </li> </ul> <div id="introSummary"> <h1>PERCIFORMES! (1)</h1> <p>Welcome to the world of Perciformes - perch-like fish including the world famous <strong>Suckerfish</strong></p> </div> </div> <!-- end of sub menu --> </div> <div class="grid_8 omega" id="summary"> <p>Some stuff goes here</p </div> </div> <!-- End of first section --> <div class="clear">&nbsp;</div> <div class="grid_12 spacer"> </div> <div class="grid_4" id="belowFoldSection"> <p>Here is some stuff I want to appear below the menu when the pop-up is visible</p> </div> </div> <!-- container_12 --> The suckerfish css file: #nav, #nav ul { /* all lists */ padding: 0; margin: 0; list-style: none; line-height: 1; z-index: 99; } #nav a { display: block; width: 10em; } #nav li { /* all list items */ float: left; width: 10em; } #nav li ul { /* second-level lists */ position: absolute; background: orange; width: 10em; left: -999em; } #nav li:hover ul, #nav li.sfhover ul { /* lists nested under hovered list items */ left: auto; } Default 960.gs css: .container_12, .container_16 { margin-left: auto; margin-right: auto; width: 960px; } .grid_1, .grid_2, .grid_3, .grid_4, .grid_5, .grid_6, .grid_7, .grid_8, .grid_9, .grid_10, .grid_11, .grid_12, .grid_13, .grid_14, .grid_15, .grid_16 { display: inline; float: left; position: relative; margin-left: 10px; margin-right: 10px; }
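
    A common workaround that avoids switching to absolute positioning is to keep the grids position:relative but give the grid containing the menu an explicit z-index higher than its siblings, so IE's per-element stacking contexts resolve in the menu's favor. A sketch, assuming the IDs from the markup above:

        /* IE6/7 start a new stacking context on every positioned element,
           so the menu's z-index only competes inside its own grid. Raise
           that grid above the sibling it must cover. */
        #mainSection { z-index: 20; }
        #belowFoldSection { z-index: 1; }

    Both elements already have position:relative from the .grid_* rules, so the z-index values take effect without pulling anything out of the document flow.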

    Read the article

  • Content Being Echoed Below Footer in Category Post Template

    - by poindexter
    I have created a category template in Wordpress for all posts that are in the 'blog' category. The file name is single-blog.php. There is some conditional code in single.php that checks whether the post is in the 'blog' category and if it is it redirects it to single-blog.php. That seems to be working fine. The problem is that on all the individual 'blog' categorized posts the post title and content are echoed below the footer of the page. I do not know why they are showing up and I haven't been able to stop it or hide it. The Loop is getting closed on the template page, but I'm wondering if the Loop from single.php is somehow also being sent over. You can view an example of the problem here: http://69.20.59.228/2010/03/test-blog-post/ Please let me know if you have any suggestions. I am posting two sections of code below. The first is the conditional call in single.php. The second is the code from the single-blog.php (the category post template). the conditional call in single.php. <?php $post = $wp_query->post; if (in_category('blog')) { include(TEMPLATEPATH.'/single-blog.php'); }?> code from the single-blog.php (the category post template) <?php get_header(); ?> <?php get_sidebar(); ?> <p><h2>The IQNavigator Blog</h2></p> <em><a href="/category/blog">Blog Home</a></em> | <em><a href="/category/blog/feed/">Subscribe via RSS</a></em><p><br></br></p> <?php if (have_posts()) : while (have_posts()) : the_post(); ?> <div <?php post_class() ?> id="post-<?php the_ID(); ?>"> <h1 class="pagetitle"><?php the_title(); ?></h1> <!-- <p class="details">Posted <?php the_time('l, F jS, Y') ?> at <?php the_time() ?></p> --> <div class="entry"> <?php the_content('<p class="serif">Read the rest of this entry &raquo;</p>'); ?> <?php wp_link_pages(array('before' => '<p><strong>Pages:</strong> ', 'after' => '</p>', 'next_or_number' => 'number')); ?> <?php the_tags( '<p>Tags: ', ', ', '</p>'); ?> <p class="postmetadata alt"> <small> -----<br> Posted <?php /* This is commented, because it requires a little adjusting sometimes. You'll need to download this plugin, and follow the instructions: http://binarybonsai.com/wordpress/time-since/ */ /* $entry_datetime = abs(strtotime($post->post_date) - (60*120)); echo time_since($entry_datetime); echo ' ago'; */ ?> on <?php the_time('l, F jS, Y') ?>, filed under <?php the_category(', ') ?>. Follow any responses to this entry through the <?php post_comments_feed_link('RSS'); ?> feed. <?php if ( comments_open() && pings_open() ) { // Both Comments and Pings are open ?> <a href="#respond">Leave your own comment</a>, or <a href="<?php trackback_url(); ?>" rel="trackback">trackback</a> from your own site. <?php } elseif ( !comments_open() && pings_open() ) { // Only Pings are Open ?> Responses are currently closed, but you can <a href="<?php trackback_url(); ?> " rel="trackback">trackback</a> from your own site. <?php } elseif ( comments_open() && !pings_open() ) { // Comments are open, Pings are not ?> You can skip to the end and leave a response. Pinging is currently not allowed. <?php } elseif ( !comments_open() && !pings_open() ) { // Neither Comments, nor Pings are open ?> Both comments and pings are currently closed. <?php } edit_post_link('Edit this entry','','.'); ?> </small> </p> <?php the_tags( '<p>Tagged: ', ', ', '</p>'); ?> </div> </div> <?php comments_template(); ?> <?php endwhile; else: ?> <p>Sorry, no posts matched your criteria.</p> <?php endif; ?> <?php get_footer(); ?>
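
    One thing worth checking: if single.php keeps executing after the include, its own Loop still runs and echoes the title and content again, after the footer that single-blog.php has already printed. A sketch of the usual fix, stopping single.php once the category template has taken over:

        <?php
        $post = $wp_query->post;
        if (in_category('blog')) {
            include(TEMPLATEPATH . '/single-blog.php');
            exit; // prevent the rest of single.php from rendering too
        }
        ?>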

    Read the article

  • CSS Position Help (horizontal sidebar showing up when animating content over)

    - by jstacks
    Let me try my best to explain what I'd like to have happen, show you the code I have an hopefully I can get some help. So, I'm trying to do a sliding navigation UI from the left side of the screen (like a lot of mobile apps). The main content slides over, displaying the navigation menu beneath. Right now the browser thinks the screen is getting wider and introduces a horizontal scroll bar. However, I don't want that to happen... How do I get the div to animate off screen but not enlarge the width of the screen (i.e. keep it partially off screen)? Anyway here is my fiddle: http://jsfiddle.net/2vP67/6/ And here is the code within the post: HTML <div id='wrapper'> <div id='navWide'> </div> <div id='containerWide'> </div> <div id='containerTall'> <div id='container'> <div id='nav'> <div id='navNavigate'> Open Menu </div> <div id='navNavigateHide'> Close Menu </div> </div> </div> </div> <div id='sideContainerTall'> <div id='sideContainer'> <div id='sideNav'>Side Navigation </div> </div> </div> </div> CSS #wrapper { width:100%; min-width:1000px; height:100%; min-height:100%; position:relative; top:0; left:0; z-index:0; } #navWide { color: #ffffff; background:#222222; width:100%; min-width:1000px; height:45px; position:fixed; top:0; left:0; z-index:100; } #containerWide { width:100%; min-width:1000px; min-height:100%; position:absolute; top:45px; z-index:100; } #containerTall { color: #000000; background:#dadada; width:960px; min-height:100%; margin-left:-480px; position:absolute; top:0; left:50%; z-index:1000; } /***** main container *****/ #container { width:960px; min-height:585px; } #nav { color: #ffffff; background:#222222; width:960px; height:45px; position:fixed; top:0; z-index:10000; } #navNavigate { background:yellow; font-size:10px; color:#888888; width:32px; height:32px; padding:7px 6px 6px 6px; float:left; cursor:pointer; } #navNavigateHide { background:yellow; font-size:10px; color:#888888; width:32px; height:32px; padding:7px 6px 6px 6px; float:left; cursor:pointer; display:none; } #sideContainerTall { background:#888888; width:264px; min-height:100%; margin-left:-480px; position:absolute; top:0; left:50%; z-index:500; } #sideContainer { width:264px; min-height:585px; display:none; } #sideContainerTall { background:#888888; width:264px; min-height:100%; margin-left:-480px; position:absolute; top:0; left:50%; z-index:500; } #sideContainer { width:264px; min-height:585px; display:none; } #sideNav { width:264px; height:648px; float:left; } Javascript $(document).ready(function() { $('div#navNavigate').click(function() { $('div#navNavigate').hide(); $('div#navNavigateHide').show(); $('div#sideContainer').show(); $('div#containerTall').animate({ 'left': '+=264px' }); }); $('div#navNavigateHide').click(function() { $('div#navNavigate').show(); $('div#navNavigateHide').hide(); $('div#containerTall').animate({ 'left': '-=264px' }, function() { $('div#sideContainer').hide(); }); }); });
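
    Since the content is deliberately wider than the viewport while it slides, the usual fix is to clip horizontal overflow at the page (or wrapper) level so the browser never grows the document width. A sketch against the markup above:

        /* stop the slide from widening the document and
           triggering a horizontal scrollbar */
        html, body { overflow-x: hidden; }

    If overflow-x on body misbehaves in an older browser, applying the same rule to the positioned #wrapper usually works as well.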

    Read the article

  • get iframe property and content

    - by zeroSeven
    is there a way to get the iframe properties and content and be able to display it? example: type it as Rich Text Editor on the iframe and it will be displayed as<b>Rich Text Editor</b> on some part of the page. Rich Text Editor == <b>Rich Text Editor</b> thank you in advance. <html> <head> <title>Rich Text Editor</title> </head> <script type="text/javascript"> function def() { document.getElementById("textEditor").contentWindow.document.designMode="on"; textEditor.document.open(); textEditor.document.write('<head><style type="text/css">body{ font-family:arial; font-size:13px;}</style></head>'); textEditor.document.close(); document.getElementById("fonts").selectedIndex=0; document.getElementById("size").selectedIndex=1; document.getElementById("color").selectedIndex=0; } function fontEdit(x,y) { document.getElementById("textEditor").contentWindow.document.execCommand(x,"",y); textEditor.focus(); } </script> <body onLoad="def()"> <center> <div style="width:500px; text-align:left; margin-bottom:10px "> <input type="button" id="bold" style="height:21px; width:21px; font-weight:bold;" value="B" onClick="fontEdit('bold')" /> <input type="button" id="italic" style="height:21px; width:21px; font-style:italic;" value="I" onClick="fontEdit('italic')" /> <input type="button" id="underline" style="height:21px; width:21px; text-decoration:underline;" value="U" onClick="fontEdit('underline')" /> | <input type="button" style="height:21px; width:21px;"value="L" onClick="fontEdit('justifyleft')" title="align left" /> <input type="button" style="height:21px; width:21px;"value="C" onClick="fontEdit('justifycenter')" title="center" /> <input type="button" style="height:21px; width:21px;"value="R" onClick="fontEdit('justifyright')" title="align right" /> | <select id="fonts" onChange="fontEdit('fontname',this[this.selectedIndex].value)"> <option value="Arial">Arial</option> <option value="Comic Sans MS">Comic Sans MS</option> <option value="Courier New">Courier New</option> <option value="Monotype Corsiva">Monotype</option> <option value="Tahoma">Tahoma</option> <option value="Times">Times</option> </select> <select id="size" onChange="fontEdit('fontsize',this[this.selectedIndex].value)"> <option value="1">1</option> <option value="2">2</option> <option value="3">3</option> <option value="4">4</option> <option value="5">5</option> </select> <select id="color" onChange="fontEdit('ForeColor',this[this.selectedIndex].value)"> <option value="black">-</option> <option style="color:red;" value="red">-</option> <option style="color:blue;" value="blue">-</option> <option style="color:green;" value="green">-</option> <option style="color:pink;" value="pink">-</option> </select> | <input type="button" style="height:21px; width:21px;"value="1" onClick="fontEdit('insertorderedlist')" title="Numbered List" /> <input type="button" style="height:21px; width:21px;"value="?" onClick="fontEdit('insertunorderedlist')" title="Bullets List" /> <input type="button" style="height:21px; width:21px;"value="?" onClick="fontEdit('outdent')" title="Outdent" /> <input type="button" style="height:21px; width:21px;"value="?" onClick="fontEdit('indent')" title="Indent" /> </div> <iframe id="textEditor" style="width:500px; height:170px;"> </iframe> </center> </body>
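
    Since the editor iframe is same-origin, its markup can be read through contentWindow.document, and assigning that string to a target element's textContent (innerText in older IE) displays it escaped, i.e. as literal <b>Rich Text Editor</b>. A sketch, where "preview" is a hypothetical element added to the page for the output:

        function showSource() {
            var doc = document.getElementById("textEditor").contentWindow.document;
            var html = doc.body.innerHTML;            // e.g. <b>Rich Text Editor</b>
            var target = document.getElementById("preview");
            if ("textContent" in target) {
                target.textContent = html;            // escapes the markup for display
            } else {
                target.innerText = html;              // older IE fallback
            }
        }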

    Read the article

  • Integrating HTML into Silverlight Applications

    - by dwahlin
    Looking for a way to display HTML content within a Silverlight application? If you haven’t tried doing that before it can be challenging at first until you know a few tricks of the trade.  Being able to display HTML is especially handy when you’re required to display RSS feeds (with embedded HTML), SQL Server Reporting Services reports, PDF files (not actually HTML – but the techniques discussed will work), or other HTML content.  In this post I'll discuss three options for displaying HTML content in Silverlight applications and describe how my company is using these techniques in client applications. Displaying HTML Overlays If you need to display HTML over a Silverlight application (such as an RSS feed containing HTML data in it) you’ll need to set the Silverlight control’s windowless parameter to true. This can be done using the object tag as shown next: <object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%"> <param name="source" value="ClientBin/HTMLAndSilverlight.xap"/> <param name="onError" value="onSilverlightError" /> <param name="background" value="white" /> <param name="minRuntimeVersion" value="4.0.50401.0" /> <param name="autoUpgrade" value="true" /> <param name="windowless" value="true" /> <a href="http://go.microsoft.com/fwlink/?LinkID=149156&v=4.0.50401.0" style="text-decoration:none"> <img src="http://go.microsoft.com/fwlink/?LinkId=161376" alt="Get Microsoft Silverlight" style="border-style:none"/> </a> </object> By setting the control to “windowless” you can overlay HTML objects by using absolute positioning and other CSS techniques. Keep in mind that on Windows machines the windowless setting can result in a performance hit when complex animations or HD video are running since the plug-in content is displayed directly by the browser window. It goes without saying that you should only set windowless to true when you really need the functionality it offers. For example, if I want to display my blog’s RSS content on top of a Silverlight application I could set windowless to true and create a user control that grabbed the content and output it using a DataList control: <style type="text/css"> a {text-decoration:none;font-weight:bold;font-size:14pt;} </style> <div style="margin-top:10px; margin-left:10px;margin-right:5px;"> <asp:DataList ID="RSSDataList" runat="server" DataSourceID="RSSDataSource"> <ItemTemplate> <a href='<%# XPath("link") %>'><%# XPath("title") %></a> <br /> <%# XPath("description") %> <br /> </ItemTemplate> </asp:DataList> <asp:XmlDataSource ID="RSSDataSource" DataFile="http://weblogs.asp.net/dwahlin/rss.aspx" XPath="rss/channel/item" CacheDuration="60" runat="server" /> </div> The user control can then be placed in the page hosting the Silverlight control as shown below. This example adds a Close button, additional content to display in the overlay window and the HTML generated from the user control. 
<div id="RSSDiv"> <div style="background-color:#484848;border:1px solid black;height:35px;width:100%;"> <img alt="Close Button" align="right" src="Images/Close.png" onclick="HideOverlay();" style="cursor:pointer;" /> </div> <div style="overflow:auto;width:800px;height:565px;"> <div style="float:left;width:100px;height:103px;margin-left:10px;margin-top:5px;"> <img src="http://weblogs.asp.net/blogs/dwahlin/dan2008.jpg" style="border:1px solid Gray" /> </div> <div style="float:left;width:300px;height:103px;margin-top:5px;"> <a href="http://weblogs.asp.net/dwahlin" style="margin-left:10px;font-size:20pt;">Dan Wahlin's Blog</a> </div> <br /><br /><br /> <div style="clear:both;margin-top:20px;"> <uc:BlogRoller ID="BlogRoller" runat="server" /> </div> </div> </div> Of course, we wouldn’t want the RSS HTML content to be shown until requested. Once it’s requested the absolute position of where it should show above the Silverlight control can be set using standard CSS styles. The following ID selector named #RSSDiv handles hiding the overlay div shown above and determines where it will be display on the screen. #RSSDiv { background-color:White; position:absolute; top:100px; left:300px; width:800px; height:600px; border:1px solid black; display:none; } Now that the HTML content to display above the Silverlight control is set, how can we show it as a user clicks a HyperlinkButton or other control in the application? Fortunately, Silverlight provides an excellent HTML bridge that allows direct access to content hosted within a page. The following code shows two JavaScript functions that can be called from Siverlight to handle showing or hiding HTML overlay content. The two functions rely on jQuery (http://www.jQuery.com) to make it easy to select HTML objects and manipulate their properties: function ShowOverlay() { rssDiv.css('display', 'block'); } function HideOverlay() { rssDiv.css('display', 'none'); } Calling the ShowOverlay function is as simple as adding the following code into the Silverlight application within a button’s Click event handler: private void OverlayHyperlinkButton_Click(object sender, RoutedEventArgs e) { HtmlPage.Window.Invoke("ShowOverlay"); } The result of setting the Silverlight control’s windowless parameter to true and showing the HTML overlay content is shown in the following screenshot:   Thinking Outside the Box to Show HTML Content Setting the windowless parameter to true may not be a viable option for some Silverlight applications or you may simply want to go about showing HTML content a different way. The next technique I’ll show takes advantage of simple HTML, CSS and JavaScript code to handle showing HTML content while a Silverlight application is running in the browser. Keep in mind that with Silverlight’s HTML bridge feature you can always pop-up HTML content in a new browser window using code similar to the following: System.Windows.Browser.HtmlPage.Window.Navigate( new Uri("http://silverlight.net"), "_blank"); For this example I’ll demonstrate how to hide the Silverlight application while maximizing a container div containing the HTML content to show. This allows HTML content to take up the full screen area of the browser without having to set windowless to true and when done right can make the user feel like they never left the Silverlight application. 
The following HTML shows several div elements that are used to display HTML within the same browser window as the Silverlight application: <div id="JobPlanDiv"> <div style="vertical-align:middle"> <img alt="Close Button" align="right" src="Images/Close.png" onclick="HideJobPlanIFrame();" style="cursor:pointer;" /> </div> <div id="JobPlan_IFrame_Container" style="height:95%;width:100%;margin-top:37px;"></div> </div> The JobPlanDiv element acts as a container for two other divs that handle showing a close button and hosting an iframe that will be added dynamically at runtime. JobPlanDiv isn’t visible when the Silverlight application loads due to the following ID selector added into the page: #JobPlanDiv { position:absolute; background-color:#484848; overflow:hidden; left:0; top:0; height:100%; width:100%; display:none; } When the HTML content needs to be shown or hidden the JavaScript functions shown next can be used: var jobPlanIFrameID = 'JobPlan_IFrame'; var slHost = null; var jobPlanContainer = null; var jobPlanIFrameContainer = null; var rssDiv = null; $(document).ready(function () { slHost = $('#silverlightControlHost'); jobPlanContainer = $('#JobPlanDiv'); jobPlanIFrameContainer = $('#JobPlan_IFrame_Container'); rssDiv = $('#RSSDiv'); }); function ShowJobPlanIFrame(url) { jobPlanContainer.css('display', 'block'); $('<iframe id="' + jobPlanIFrameID + '" src="' + url + '" style="height:100%;width:100%;" />') .appendTo(jobPlanIFrameContainer); slHost.css('width', '0%'); } function HideJobPlanIFrame() { jobPlanContainer.css('display', 'none'); $('#' + jobPlanIFrameID).remove(); slHost.css('width', '100%'); } ShowJobPlanIFrame() handles showing the JobPlanDiv div and adding an iframe into it dynamically. Once JobPlanDiv is shown, the Silverlight control host has its width set to a value of 0% to allow the control to stay alive while making it invisible to the user. I found that this technique works better across multiple browsers as opposed to manipulating the Silverlight control host div’s display or visibility properties. Now that you’ve seen the code to handle showing and hiding the HTML content area, let’s switch focus to the Silverlight application. As a user clicks on a link such as “View Report” the ShowJobPlanIFrame() JavaScript function needs to be called. The following code handles that task: private void ReportHyperlinkButton_Click(object sender, RoutedEventArgs e) { ShowBrowser(_BaseUrl + "/Report.aspx"); } public void ShowBrowser(string url) { HtmlPage.Window.Invoke("ShowJobPlanIFrame", url); } Any URL can be passed into the ShowBrowser() method which handles invoking the JavaScript function. This includes standard web pages or even PDF files. We’ve used this technique frequently with our SmartPrint control (http://www.smartwebcontrols.com) which converts Silverlight screens into PDF documents and displays them. Here’s an example of the content generated:   Silverlight 4’s WebBrowser Control Both techniques shown to this point work well when Silverlight is running in-browser but not so well when it’s running out-of-browser since there’s no host page that you can access using the HTML bridge. Fortunately, Silverlight 4 provides a WebBrowser control that can be used to perform the same functionality quite easily. We’re currently using it in client applications to display PDF documents, SSRS reports and standard HTML content. Using the WebBrowser control simplifies the application quite a bit since no JavaScript is required if the application only runs out-of-browser. 
Here’s a simple example of defining the WebBrowser control in XAML. I typically define it in MainPage.xaml when a Silverlight Navigation template is used to create the project so that I can re-use the functionality across multiple screens. <Grid x:Name="WebBrowserGrid" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" Visibility="Collapsed"> <StackPanel HorizontalAlignment="Stretch" VerticalAlignment="Stretch"> <Border Background="#484848" HorizontalAlignment="Stretch" Height="40"> <Image x:Name="WebBrowserImage" Width="100" Height="33" Cursor="Hand" HorizontalAlignment="Right" Source="/HTMLAndSilverlight;component/Assets/Images/Close.png" MouseLeftButtonDown="WebBrowserImage_MouseLeftButtonDown" /> </Border> <WebBrowser x:Name="JobPlanReportWebBrowser" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" /> </StackPanel> </Grid> Looking through the XAML you can see that a close image is defined along with the WebBrowser control. Because the URL that the WebBrowser should navigate to isn’t known at design time no value is assigned to the control’s Source property. If the XAML shown above is left “as is” you’ll find that any HTML content assigned to the WebBrowser doesn’t display properly. This is due to no height or width being set on the control. To handle this issue the following code is added into the XAML’s code-behind file to dynamically determine the height and width of the page and assign it to the WebBrowser. This is done by handling the SizeChanged event. void MainPage_SizeChanged(object sender, SizeChangedEventArgs e) { WebBrowserGrid.Height = JobPlanReportWebBrowser.Height = ActualHeight; WebBrowserGrid.Width = JobPlanReportWebBrowser.Width = ActualWidth; } When the user wants to view HTML content they click a button which executes the code shown in next: public void ShowBrowser(string url) { if (Application.Current.IsRunningOutOfBrowser) { JobPlanReportWebBrowser.NavigateToString("<html><body><iframe src='" + url + "' style='width:100%;height:97%;' /></body></html>"); WebBrowserGrid.Visibility = Visibility.Visible; } else { HtmlPage.Window.Invoke("ShowJobPlanIFrame", url); } } private void WebBrowserImage_MouseLeftButtonDown(object sender, MouseButtonEventArgs e) { WebBrowserGrid.Visibility = Visibility.Collapsed; }   Looking through the code you’ll see that it checks to see if the Silverlight application is running out-of-browser and then either displays the WebBrowser control or runs the JavaScript function discussed earlier. Although the WebBrowser control’s Source property could be assigned the URI of the page to navigate to, by assigning HTML content using the NavigateToString() method and adding an iframe, content can be shown from any site including cross-domain sites. This is especially handy when you need to grab a page from a reporting site that’s in a different domain than the Silverlight application. Here’s an example of viewing  PDF file inside of an out-of-browser application. The first image shows the application running out-of-browser before the user clicks a PDF HyperlinkButton.  The second image shows the PDF being displayed.   While there are certainly other techniques that can be used, the ones shown here have worked well for us in different applications and provide the ability to display HTML content in-browser or out-of-browser. Feel free to add a comment if you have another tip or trick you like to use when working with HTML content in Silverlight applications.   
Download Code Sample   For more information about onsite, online and video training, mentoring and consulting solutions for .NET, SharePoint or Silverlight please visit http://www.thewahlingroup.com.

    Read the article

  • Unlocking Productivity

    - by Michael Snow
    Unlocking Productivity in Life Sciences with Consolidated Content Management by Joe Golemba, Vice President, Product Management, Oracle WebCenter As life sciences organizations look to become more operationally efficient, the ability to effectively leverage information is a competitive advantage. Whether data mining at the drug discovery phase or prepping the sales team before a product launch, content management can play a key role in developing, organizing, and disseminating vital information. The goal of content management is relatively straightforward: put the information that people need where they can find it. A number of issues can complicate this; information sits in many different systems, each of those systems has its own security, and the information in those systems exists in many different formats. Identifying and extracting pertinent information from mountains of farflung data is no simple job, but the alternative—wasted effort or even regulatory compliance issues—is worse. An integrated information architecture can enable health sciences organizations to make better decisions, accelerate clinical operations, and be more competitive. Unstructured data matters Often when we think of drug development data, we think of structured data that fits neatly into one or more research databases. But structured data is often directly supported by unstructured data such as experimental protocols, reaction conditions, lot numbers, run times, analyses, and research notes. As life sciences companies seek integrated views of data, they are typically finding diverse islands of data that seemingly have no relationship to other data in the organization. Information like sales reports or call center reports can be locked into siloed systems, and unavailable to the discovery process. Additionally, in the increasingly networked clinical environment, Web pages, instant messages, videos, scientific imaging, sales and marketing data, collaborative workspaces, and predictive modeling data are likely to be present within an organization, and each source potentially possesses information that can help to better inform specific efforts. Historically, content management solutions that had 21CFR Part 11 capabilities—electronic records and signatures—were focused mainly on content-enabling manufacturing-related processes. Today, life sciences companies have many standalone repositories, requiring different skills, service level agreements, and vendor support costs to manage them. With the amount of content doubling every three to six months, companies have recognized the need to manage unstructured content from the beginning, in order to increase employee productivity and operational efficiency. Using scalable and secure enterprise content management (ECM) solutions, organizations can better manage their unstructured content. These solutions can also be integrated with enterprise resource planning (ERP) systems or research systems, making content available immediately, in the context of the application and within the flow of the employee’s typical business activity. Administrative safeguards—such as content de-duplication—can also be applied within ECM systems, so documents are never recreated, eliminating redundant efforts, ensuring one source of truth, and maintaining content standards in the organization. Putting it in context Consolidating structured and unstructured information in a single system can greatly simplify access to relevant information when it is needed through contextual search. 
    Using contextual filters, results can include therapeutic area, position in the value chain, semantic commonalities, technology-specific factors, specific researchers involved, or potential business impact. The use of taxonomies is essential to organizing information and enabling contextual searches. Taxonomy solutions are composed of a hierarchical tree that defines the relationship between different life science terms. When overlaid with additional indexing related to research and/or business processes, it becomes possible to effectively narrow down the amount of data that is returned during searches, as well as prioritize results based on specific criteria and/or prior search history. Thus, search results are more accurate and relevant to an employee's day-to-day work. For example, a search for the word "tissue" by a lab researcher would return significantly different results than a search for the same word performed by someone in procurement. Of course, diverse data repositories, combined with the immense amounts of data present in an organization, necessitate that the data elements be regularly indexed and cached beforehand to enable reasonable search response times. In its simplest form, indexing of a single, consolidated data warehouse can be expected to be a relatively straightforward effort. However, organizations require the ability to index multiple data repositories, enabling a single search to reference multiple data sources and provide an integrated results listing.

    Security and compliance

    Beyond yielding efficiencies and supporting new insight, an enterprise search environment can support important security considerations as well as compliance initiatives. For example, the systems enable organizations to retain the relevance and the security of the indexed systems, so users can only see the results to which they are granted access. This is especially important as life sciences companies are working in an increasingly networked environment and need to provide secure, role-based access to information across multiple partners. Although not officially required by the 21 CFR Part 11 regulation, the U.S. Food and Drug Administration has begun to extend the type of content considered when performing relevant audits and discoveries. Having an ECM infrastructure that provides centralized management of all content enterprise-wide, with the ability to consistently apply records and retention policies along with the appropriate controls, validations, audit trails, and electronic signatures, is becoming increasingly critical for life sciences companies.

    Making the move

    Creating an enterprise-wide ECM environment requires moving large amounts of content into a single enterprise repository, a daunting and risk-laden initiative. The first key is to focus on data taxonomy, allowing content to be mapped across systems. The second is to take advantage of new tools which can dramatically speed and reduce the cost of the data migration process through automation. Additional content need not be frozen while it is migrated, enabling productivity throughout the process. The ability to effectively leverage information into success has been gaining importance in the life sciences industry for years.
    The rapid adoption of enterprise content management, both in operational processes and in scientific management, is a clear indicator that companies are looking to use all available data to be better informed, improve decision making, minimize risk, and reduce time to market, in order to maintain profitability and be more competitive. As more and more varieties and sources of information are brought under the strategic management umbrella, the ability to divine knowledge from the vast pool of information becomes increasingly difficult. Simple search engines and basic content management are increasingly unable to effectively extract the right information from the mountains of data available. By bringing these tools into context and integrating them with business processes and applications, we can effectively focus on the right decisions that make our organizations more profitable.

    More Information

    Oracle will be exhibiting at DIA 2012 in Philadelphia on June 25-27. Stop by our booth (#2825) to learn more about the advantages of a centralized ECM strategy and see the Oracle WebCenter Content solution, our 21 CFR Part 11 compliant content management platform.

    Read the article

  • CSS drop-down menus pushing page content down

    - by Mason Jones
    This is probably (hopefully) a pretty simple question, but I can't seem to get it to work so I'll turn to the experts here. I'm using a pretty straightforward CSS drop-down menu, with just a little JQuery involved. The issue is that when I hover over the drop-down and it opens, it's pushing everything on the page down below it rather then opening over it. I've tried messing with the z-index but that doesn't seem to be the issue. Any tips would be fantastic, thanks in advance. Here's the HTML; sorry it's not super-pretty, I had to rip out a bunch of stuff to make it simple and generic. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <HTML style="zoom: 100%; "> <HEAD> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.0/jquery.min.js" type="text/javascript"></script> </HEAD> <BODY class="bodyclass" style="background:#BCE2F1; height: 100%;"> <DIV id="maincontainer" style="min-height: 100%;"> <STYLE type="text/css"> #cssdropdown, #cssdropdown ul { font-size: 9pt; background-color: black; list-style: none; } #cssdropdown, #cssdropdown * { padding: 0; margin: 0; } #cssdropdown li.headlink { width: 140px; float: left; margin-left: -1px; border: 1px black solid; background-color: white; text-align: center; } #cssdropdown li.headlink a { display: block; color: #339804; padding: 3px; text-decoration: none; } #cssdropdown li.headlink a:hover { background-color: #F8E0AC; font-weight: bold; } #cssdropdown li.headlink ul { display: none; border-top: 1px black solid; text-align: left; } #cssdropdown li.headlink:hover ul { display: block; text-decoration: none; } #cssdropdown li.headlink ul li a { padding: 5px; height: 15px; } #cssdropdown li.headlink ul li a:hover { background-color: #CCE9F5; text-decoration: none; font-weight: normal; } /* #cssdropdown a { color: #CCE9F5; } */ #cssdropdown ul li a:hover { text-decoration: none; } #cssdropdown li.headlink { background-color: white; } #cssdropdown li.headlink ul { background-color: white; background-position: bottom; padding-bottom: 2px; } </STYLE> <SCRIPT language="JavaScript"> $(document).ready(function(){ $('#cssdropdown li.headlink').hover( function() { $('ul', this).css('display', 'block'); }, function() { $('ul', this).css('display', 'none'); }); }); </SCRIPT> <DIV class="navigation_box" style="border: none;"> <DIV class="innercontent"> <DIV style="background: white; float: left; padding: 5px; border: solid 1px black;"> LOGO </DIV> <DIV class="navmenu" style="float: right; bottom: 0; font-size: 9pt; text-align: right;"> <SPAN>Logged in as [email protected]</SPAN><BR> <UL id="cssdropdown"> <LI class="headlink"> <A href="http://localhost:3000/one">One</A> <UL style="display: none; "> <LI><A href="http://localhost:3000/one">Option One</A></LI> <LI><A href="http://localhost:3000/one">Option Two</A></LI> <LI><A href="http://localhost:3000/one">Option Three</A></LI> <LI><A href="http://localhost:3000/one">Option Four</A></LI> </UL> </LI> <LI class="headlink"> <A href="http://localhost:3000/two">Two</A> <UL style="display: none; "> <LI><A href="http://localhost:3000/two">Option Two-One</A></LI> <LI><A href="http://localhost:3000/two">Option Two-Two</A></LI> <LI><A href="http://localhost:3000/two">Option Two-Three</A></LI> </UL> </LI> <LI class="headlink" style="width: 80px;"> <A href="http://localhost:3000/three">Three</A> </LI> <LI class="headlink" style="width: 300px; padding-top: 2px; height: 19px;"> <FORM action="http://localhost:3000/search" method="post"> <P> 
Search: <INPUT id="searchwords" name="searchwords" size="20" type="text" value=""> <INPUT name="commit" type="submit" value="Find"> </P> </FORM> </LI> <LI class="headlink" style="width: 60px;"> <A href="http://localhost:3000/four">Four</A> </LI> <LI class="headlink" style="width: 60px;"> <A href="http://localhost:3000/logout">Logout</A> </LI> </UL> </DIV> </DIV> </DIV> <DIV id="contentwrapper" style="clear:both"> <DIV class="innercontent" style="margin: 0px 20px 20px 20px;"> <H1>Some test content here to fill things out a little bit.</H1> </DIV> </DIV> </DIV> <DIV id="footer" style="clear: both; float: bottom;"> <DIV class="innercontent" style="font-size: 10px;"> Copyright 2008-2010 </DIV> </DIV> </BODY>
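
    The drop-downs push the page down because the nested ul, once set to display:block, still sits in the normal document flow inside its li. The usual fix is to position it absolutely against the parent item so it overlays the content instead. A sketch against the classes above:

        /* anchor each headlink, then float its menu over the page */
        #cssdropdown li.headlink { position: relative; }
        #cssdropdown li.headlink ul {
            position: absolute;
            top: 100%;        /* directly below the parent item */
            left: 0;
            width: 140px;     /* match the headlink width */
            z-index: 100;     /* above the page content */
        }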

    Read the article

  • Trying to randomise a jQuery content slider

    - by alecrust
    Hi everyone, I'm using a very nice jQuery content slider called Easy Slider on my site that I downloaded from Css Globe. The script is excellent and does just what I want, except that I can't make it randomise the list: it always scrolls through the slides in order, left to right or right to left. I'm far from good with JavaScript, so my attempts at solving this have been feeble, although I'm sure it must be an easy fix! If anyone wouldn't mind glancing over the script to see what I need to change to make it random, it would be greatly appreciated. I've tried contacting the original plugin developer but have had no response yet, and the comments on the Easy Slider page didn't bear much fruit either, unfortunately. I've pasted the script I'm using on my site below:

    /*
     * Easy Slider 1.7 - jQuery plugin
     * written by Alen Grakalic
     * http://cssglobe.com/post/4004/easy-slider-15-the-easiest-jquery-plugin-for-sliding
     *
     * Copyright (c) 2009 Alen Grakalic (http://cssglobe.com)
     * Dual licensed under the MIT (MIT-LICENSE.txt)
     * and GPL (GPL-LICENSE.txt) licenses.
     *
     * Built for jQuery library
     * http://jquery.com
     */
    (function($) {
        $.fn.easySlider = function(options){
            // default configuration properties
            var defaults = {
                prevId: 'prevBtn',
                prevText: 'Previous',
                nextId: 'nextBtn',
                nextText: 'Next',
                controlsShow: true,
                controlsBefore: '',
                controlsAfter: '',
                controlsFade: true,
                firstId: 'firstBtn',
                firstText: 'First',
                firstShow: false,
                lastId: 'lastBtn',
                lastText: 'Last',
                lastShow: false,
                vertical: false,
                speed: 800,
                auto: false,
                pause: 7000,
                continuous: false,
                numeric: false,
                numericId: 'controls'
            };
            var options = $.extend(defaults, options);

            this.each(function() {
                var obj = $(this);
                var s = $("li", obj).length;
                var w = $("li", obj).width();
                var h = $("li", obj).height();
                var clickable = true;
                obj.width(w);
                obj.height(h);
                obj.css("overflow", "hidden");
                var ts = s - 1;
                var t = 0;
                $("ul", obj).css('width', s * w);

                // in continuous mode, clone edge slides so the wrap-around looks seamless
                if(options.continuous){
                    $("ul", obj).prepend($("ul li:last-child", obj).clone().css("margin-left", "-" + w + "px"));
                    $("ul", obj).append($("ul li:nth-child(2)", obj).clone());
                    $("ul", obj).css('width', (s + 1) * w);
                }

                if(!options.vertical) $("li", obj).css('float', 'left');

                // build prev/next (or numeric) controls after the slider element
                if(options.controlsShow){
                    var html = options.controlsBefore;
                    if(options.numeric){
                        html += '<ol id="' + options.numericId + '"></ol>';
                    } else {
                        if(options.firstShow) html += '<span id="' + options.firstId + '"><a href="javascript:void(0);">' + options.firstText + '</a></span>';
                        html += ' <span id="' + options.prevId + '"><a href="javascript:void(0);">' + options.prevText + '</a></span>';
                        html += ' <span id="' + options.nextId + '"><a href="javascript:void(0);">' + options.nextText + '</a></span>';
                        if(options.lastShow) html += ' <span id="' + options.lastId + '"><a href="javascript:void(0);">' + options.lastText + '</a></span>';
                    }
                    html += options.controlsAfter;
                    $(obj).after(html);
                }

                // wire up the controls
                if(options.numeric){
                    for(var i = 0; i < s; i++){
                        $(document.createElement("li"))
                            .attr('id', options.numericId + (i + 1))
                            .html('<a rel=' + i + ' href="javascript:void(0);">' + (i + 1) + '</a>')
                            .appendTo($("#" + options.numericId))
                            .click(function(){ animate($("a", $(this)).attr('rel'), true); });
                    }
                } else {
                    $("a", "#" + options.nextId).click(function(){ animate("next", true); });
                    $("a", "#" + options.prevId).click(function(){ animate("prev", true); });
                    $("a", "#" + options.firstId).click(function(){ animate("first", true); });
                    $("a", "#" + options.lastId).click(function(){ animate("last", true); });
                }

                function setCurrent(i){
                    i = parseInt(i) + 1;
                    $("li", "#" + options.numericId).removeClass("current");
                    $("li#" + options.numericId + i).addClass("current");
                }

                // snap the margins and re-enable clicking after each animation
                function adjust(){
                    if(t > ts) t = 0;
                    if(t < 0) t = ts;
                    if(!options.vertical) {
                        $("ul", obj).css("margin-left", (t * w * -1));
                    } else {
                        // vertical mode must snap margin-top (the pasted copy
                        // set margin-left here, which breaks vertical sliding)
                        $("ul", obj).css("margin-top", (t * h * -1));
                    }
                    clickable = true;
                    if(options.numeric) setCurrent(t);
                }

                // core slide animation; dir is "next"/"prev"/"first"/"last" or a slide index
                function animate(dir, clicked){
                    if (clickable){
                        clickable = false;
                        var ot = t;
                        switch(dir){
                            case "next":
                                t = (ot >= ts) ? (options.continuous ? t + 1 : ts) : t + 1;
                                break;
                            case "prev":
                                t = (t <= 0) ? (options.continuous ? t - 1 : 0) : t - 1;
                                break;
                            case "first":
                                t = 0;
                                break;
                            case "last":
                                t = ts;
                                break;
                            default:
                                t = dir;
                                break;
                        }
                        var diff = Math.abs(ot - t);
                        var speed = diff * options.speed;
                        var p;
                        if(!options.vertical) {
                            p = (t * w * -1);
                            $("ul", obj).animate({ marginLeft: p }, { queue: false, duration: speed, complete: adjust });
                        } else {
                            p = (t * h * -1);
                            $("ul", obj).animate({ marginTop: p }, { queue: false, duration: speed, complete: adjust });
                        }
                        // hide/show controls at the ends when not in continuous mode
                        if(!options.continuous && options.controlsFade){
                            if(t == ts){
                                $("a", "#" + options.nextId).hide();
                                $("a", "#" + options.lastId).hide();
                            } else {
                                $("a", "#" + options.nextId).show();
                                $("a", "#" + options.lastId).show();
                            }
                            if(t == 0){
                                $("a", "#" + options.prevId).hide();
                                $("a", "#" + options.firstId).hide();
                            } else {
                                $("a", "#" + options.prevId).show();
                                $("a", "#" + options.firstId).show();
                            }
                        }
                        if(clicked) clearTimeout(timeout);
                        // auto-advance timer
                        if(options.auto && dir == "next" && !clicked){
                            timeout = setTimeout(function(){ animate("next", false); }, diff * options.speed + options.pause);
                        }
                    }
                }

                // init
                var timeout;
                if(options.auto){
                    timeout = setTimeout(function(){ animate("next", false); }, options.pause);
                }
                if(options.numeric) setCurrent(0);
                if(!options.continuous && options.controlsFade){
                    $("a", "#" + options.prevId).hide();
                    $("a", "#" + options.firstId).hide();
                }
            });
        };
    })(jQuery);

    Many thanks again! Alec
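    Since the plugin has no shuffle option of its own, one workable approach is to randomise the <li> order in the DOM before easySlider() runs, so the plugin simply slides whatever order it finds. A minimal sketch of that idea (the "#slider" id and the Fisher-Yates shuffle below are illustrative assumptions, not part of the plugin):

    $(function() {
        // Hypothetical container id - substitute whatever your markup uses.
        var $ul = $("#slider ul");
        var items = $ul.children("li").get();

        // Fisher-Yates shuffle of the slide elements
        for (var i = items.length - 1; i > 0; i--) {
            var j = Math.floor(Math.random() * (i + 1));
            var tmp = items[i];
            items[i] = items[j];
            items[j] = tmp;
        }

        $ul.append(items);          // re-attach the slides in the new order
        $("#slider").easySlider();  // initialise the plugin afterwards
    });

    The only requirement is that the shuffle happens before the plugin initialises, since easySlider measures and counts the list items at that point.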

    Read the article

  • How can I set the default value for a custom "Number" field in SharePoint?

    - by UnhipGlint
    I created a custom field for a content type I am creating, using the XML below.

    <field ID="{GUID}" Required="False" DisplayName="Likes" Name="Likes" Type="Number" SourceID="http://schemas.microsoft.com/sharepoint/v3"><default>0</default></field>

    The field is meant to be used as a counter of sorts and will be incremented programmatically. But I can't get the value to default to "0" when a new item is created. Oddly, when I create a new column manually on the Site Collection settings page and configure it to default to "0", it works as it should. So far, I've tried the following tactics: (1) I removed the "default" element from the field definition and set the "DefaultValue" attribute on the content type definition instead. (2) I exported a definition for the manually-created, working column (using an Imtech STSADM tool), added it to my field definitions XML, and modified the IDs so that I could add it to my content type. Even then it still didn't work, despite being exported from a working column! Any idea why this isn't working for me?

    Read the article

  • How do HTTP proxy caches decide between serving identity- vs. gzip-encoded resources?

    - by mrclay
    An HTTP server uses content negotiation to serve a single URL identity- or gzip-encoded, based on the client's Accept-Encoding header. Now say we have a proxy cache like Squid between clients and the httpd. If the proxy has cached both encodings of a URL, how does it determine which one to serve? The non-gzip instance (not originally served with Vary) can be served to any client, but the encoded instances (having Vary: Accept-Encoding) can only be sent to clients whose Accept-Encoding header value is identical to the one used in the original request. E.g. Opera sends "deflate, gzip, x-gzip, identity, *;q=0" but IE8 sends "gzip, deflate". According to the spec, then, caches shouldn't share content-encoded responses between the two browsers. Is this true?
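    For reference, the origin-server half of this negotiation can be sketched in a few lines. This is a standalone Node.js illustration of my own (not Squid's or any particular httpd's behaviour), showing why the encoded variants carry Vary: Accept-Encoding: the server emits it precisely so that a shared cache keys the stored response on that request header.

    // Sketch: negotiate Content-Encoding from Accept-Encoding and emit Vary
    // so that intermediary caches store one variant per Accept-Encoding value.
    const http = require('http');
    const zlib = require('zlib');

    http.createServer((req, res) => {
        const accept = req.headers['accept-encoding'] || '';
        const body = 'response body for /some/url';

        // The cache must consider Accept-Encoding when matching stored variants.
        res.setHeader('Vary', 'Accept-Encoding');
        res.setHeader('Content-Type', 'text/plain');

        if (/\bgzip\b/.test(accept)) {
            res.setHeader('Content-Encoding', 'gzip');
            res.end(zlib.gzipSync(body));
        } else {
            res.end(body); // identity fallback for clients that don't accept gzip
        }
    }).listen(8080);

    Whether a cache may then reuse a stored variant for a request whose Accept-Encoding differs only cosmetically is exactly the question: RFC 2616 asks for the selecting header values to match, though in practice some caches normalise the header before comparing.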

    Read the article

  • How to format dates in Jahia 6 CMS?

    - by dpb
    I am helping a friend of mine put up a site for his business. I’ve read different posts and sites trying to find the ideal CMS tool, but people have different views of which is best, so I finally just picked one at random. I went for an evaluation of Jahia 6.0-CE. As you’ve probably guessed by now, I don’t have much experience with CMS tools. I just want to set up the CMS, write the templates for the site and let my friend manage the content from there on. So I extracted the sources from SVN and went for a test drive. I managed to create some simple templates to get the hang of things, but now I have an issue with a date format. In my definitions.cnd I declared the field like so: date myDateField (datetimepicker[format='dd.MM.yyyy']) This is formatted in the page, and the picker also presents it in the dd.MM.yyyy format when inserting content. But what about sites in other countries, countries that represent the date as MM.dd.yyyy for example? If I hard-code the format in the CND, how can I change it later on so that it adapts based on the browser’s language? Do I extract the content from the repository and format it by hand in the JSP template based on a Locale, or is there a better way? Thank you.
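    The usual pattern, in Jahia or any CMS, is to keep the stored value locale-neutral and pick the display pattern at render time from the request's Locale (in a JSP template that would mean java.text.DateFormat or JSTL's fmt:formatDate), rather than hard-coding dd.MM.yyyy in the definition. As a language-neutral sketch of the idea (JavaScript here, purely illustrative; this is not Jahia's API):

    // The stored date is locale-neutral; the pattern comes from the viewer's
    // locale at render time instead of being fixed in the field definition.
    const published = new Date(2010, 4, 14); // 14 May 2010, local time

    for (const locale of ['de-DE', 'en-US']) {
        const formatted = new Intl.DateTimeFormat(locale, {
            day: '2-digit', month: '2-digit', year: 'numeric'
        }).format(published);
        console.log(locale, formatted); // de-DE -> 14.05.2010, en-US -> 05/14/2010
    }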

    Read the article

  • jquery tabs - load url in current tab?

    - by BigDogsBarking
    I'm trying to figure out how to load the url each tab links to inside the tab area on click, and have been trying to follow the docs at http://docs.jquery.com/UI/Tabs#...open_links_in_the_current_tab_instead_of_leaving_the_page, but am clearly not getting it.... This is the HTML markup:

    <div class="tabs">
        <ul class="tabNav">
            <li><a href="/1.html#tabone">Tab One</a></li>
            <li><a href="/2.html#tabtwo">Tab Two</a></li>
            <li><a href="/3.html#tabthree">Tab Three</a></li>
        </ul>
    </div>
    <div id="tabone">
        <!-- Trying to load content from 1.html in this div on click -->
    </div>
    <div id="tabtwo">
        <!-- Trying to load content from 2.html in this div on click -->
    </div>
    <div id="tabthree">
        <!-- Trying to load content from 3.html in this div on click -->
    </div>

    And this is the jQuery I'm trying to use:

    $(".tabs").tabs({
        load: function(event, ui) {
            $('a', ui.panel).click(function() {
                $(ui.panel).load(this.href);
                return false;
            });
        }
    });

    I know I've got some part of this wrong.... I've gone through several iterations (too many to post), and all I get is a blank div... I don't know... Feeling a bit confused here... Help?
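    For what it's worth, jQuery UI's tabs widget only Ajax-loads a tab when the anchor's href is a plain remote URL rather than an in-page fragment; it then generates the panel itself, so the hand-written panel divs aren't needed (the docs snippet being followed is for links *inside* an already-loaded panel, which is a different case). A minimal sketch under those assumptions, with the title attribute naming each generated panel:

    <!-- Hypothetical markup: remote hrefs, no pre-written panel divs -->
    <div class="tabs">
        <ul>
            <li><a href="/1.html" title="tabone">Tab One</a></li>
            <li><a href="/2.html" title="tabtwo">Tab Two</a></li>
            <li><a href="/3.html" title="tabthree">Tab Three</a></li>
        </ul>
    </div>

    <script>
        // Each tab's href is fetched via Ajax into its auto-generated panel
        // the first time that tab is selected.
        $(function() {
            $(".tabs").tabs();
        });
    </script>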

    Read the article

  • Capturing and Transforming ASP.NET Output with Response.Filter

    - by Rick Strahl
    During one of my Handlers and Modules sessions at DevConnections this week one of the attendees asked a question that I didn’t have an immediate answer for. Basically he wanted to capture response output completely and then apply some filtering to the output – effectively injecting some additional content into the page AFTER the page had completely rendered. Specifically, the output should be captured from anywhere – not just a page – and have this code injected into the page. Some time ago I posted some code that allows you to capture ASP.NET Page output by overriding the Render() method, capturing the HtmlTextWriter() and reading its content, modifying the rendered data as text and then writing it back out. I’ve actually used this approach on a few occasions and it works fine for ASP.NET pages. But this obviously won’t work outside of the Page class environment and it’s not really generic – you have to create a custom page class in order to handle the output capture. [updated 11/16/2009 – updated ResponseFilterStream implementation and a few additional notes based on comments]

    Enter Response.Filter

    However, ASP.NET includes a Response.Filter which can be used – well, to filter output. Basically Response.Filter is a stream through which the OutputStream is piped back to the Web Server (indirectly). As content is written into the Response object, the filter stream receives the appropriate Stream commands like Write, Flush and Close, as well as read operations, although for a Response.Filter those are uncommon to be hit. The Response.Filter can be programmatically replaced at runtime, which allows you to effectively intercept all output generation that runs through ASP.NET.

    A common Example: Dynamic GZip Encoding

    A rather common use of Response.Filter is hooking up code-based, dynamic GZip compression for requests, which is dead simple: apply a GZipStream (or DeflateStream) to Response.Filter. The following generic routines can be used very easily to detect the GZip capability of the client and compress response output with a single line of code:

    WebUtils.GZipEncodePage();

    which is handled with a few lines of reusable code and a couple of static helper methods:

    /// <summary>
    /// Sets up the current page or handler to use GZip through a Response.Filter
    /// IMPORTANT:
    /// You have to call this method before any output is generated!
    /// </summary>
    public static void GZipEncodePage()
    {
        HttpResponse Response = HttpContext.Current.Response;

        if (IsGZipSupported())
        {
            string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];

            if (AcceptEncoding.Contains("deflate"))
            {
                Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,
                                          System.IO.Compression.CompressionMode.Compress);
                Response.AppendHeader("Content-Encoding", "deflate");
            }
            else
            {
                Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,
                                          System.IO.Compression.CompressionMode.Compress);
                Response.AppendHeader("Content-Encoding", "gzip");
            }
        }

        // Allow proxy servers to cache encoded and unencoded versions separately
        // (the header to vary on is the request's Accept-Encoding)
        Response.AppendHeader("Vary", "Accept-Encoding");
    }

    /// <summary>
    /// Determines if GZip is supported
    /// </summary>
    public static bool IsGZipSupported()
    {
        string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
        if (!string.IsNullOrEmpty(AcceptEncoding) &&
            (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
            return true;
        return false;
    }

    GZipStream and DeflateStream are streams that are assigned to Response.Filter and by doing so apply the appropriate compression on the active Response.

    Response.Filter content is chunked

    So to implement a Response.Filter effectively requires only that you implement a custom stream and handle the Write() method to capture Response output as it’s written. At first blush this seems very simple – you capture the output in Write, transform it and write out the transformed content in one pass. And that indeed works for small amounts of content. But you see, the problem is that output is written in small buffer chunks (a little less than 16k it appears) rather than in a single Write() into the stream, which makes perfect sense: ASP.NET streams data back to IIS in smaller chunks to minimize memory usage en route. Unfortunately this also makes it more difficult to implement filtering routines, since you don’t directly get access to all of the response content, which is problematic especially if those filtering routines require you to look at the ENTIRE response in order to transform or capture the output, as is needed for the solution the gentleman in my session asked for.

    So in order to address this a slightly different approach is required, one that basically captures all the Write() buffers passed into a cached stream and then makes the stream available only when it’s complete and ready to be flushed. As I was thinking about the implementation I also started thinking about the few instances when I’ve used Response.Filter implementations. Each time I had to create a new Stream subclass and create my custom functionality, but in the end each implementation did the same thing – capturing output and transforming it. I thought there should be an easier way to do this by creating a reusable Stream class that can handle the stream transformations that are common to Response.Filter implementations.

    Creating a semi-generic Response Filter Stream Class

    What I ended up with is a ResponseFilterStream class that provides a handful of events that allow you to capture and/or transform Response content.
    The class implements a subclass of Stream and then overrides Write() and Flush() to handle capturing and transformation operations. By exposing events it’s easy to hook up capture or transformation operations via single focused methods. ResponseFilterStream exposes the following events:

    CaptureStream, CaptureString
    Capture the output only and provide either a MemoryStream or String with the final page output. Capture is hooked to the Flush() operation of the stream.

    TransformStream, TransformString
    Allow you to transform the complete response output with events that receive a MemoryStream or String respectively; you can modify the output and return it back as a return value. The transformed output is then written back out in a single chunk to the response output stream. These events capture all output internally first, then write the entire buffer into the response.

    TransformWrite, TransformWriteString
    Allow you to transform the Response data as it is written, in its original chunk size, in the Stream’s Write() method. Unlike TransformStream/TransformString, which operate on the complete output, these events only see the current chunk of data written. This is more efficient as there’s no caching involved, but it can cause problems due to searched content splitting over multiple chunks.

    Using this implementation, creating a custom Response.Filter transformation becomes as simple as the following code. To hook up the Response.Filter using the MemoryStream version event:

    ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
    filter.TransformStream += filter_TransformStream;
    Response.Filter = filter;

    and the event handler to do the transformation:

    MemoryStream filter_TransformStream(MemoryStream ms)
    {
        Encoding encoding = HttpContext.Current.Response.ContentEncoding;
        string output = encoding.GetString(ms.ToArray());

        output = FixPaths(output);

        ms = new MemoryStream(output.Length);
        byte[] buffer = encoding.GetBytes(output);
        ms.Write(buffer, 0, buffer.Length);

        return ms;
    }

    private string FixPaths(string output)
    {
        string path = HttpContext.Current.Request.ApplicationPath;

        // override root path wonkiness
        if (path == "/")
            path = "";

        output = output.Replace("\"~/", "\"" + path + "/").Replace("'~/", "'" + path + "/");
        return output;
    }

    The idea of the event handler is that you can do whatever you want to the stream and return back a stream – either the same one that’s been modified or a brand new one – which is then sent back as the final response.

    The above code can be simplified even more by using the string version events, which handle the stream-to-string conversions for you:

    ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
    filter.TransformString += filter_TransformString;
    Response.Filter = filter;

    and the event handler to do the transformation, calling the same FixPaths method shown above:

    string filter_TransformString(string output)
    {
        return FixPaths(output);
    }

    The events for capturing output, and for capturing and transforming chunks, work in a very similar way. By using events to handle the transformations, ResponseFilterStream becomes a reusable component and we don’t have to create a new stream class or subclass an existing Stream-based class.

    By the way, the example used here is kind of a cool trick: it transforms “~/” expressions inside the final generated HTML output – even in plain HTML, not just server controls – into the appropriate application-relative path, in the same way that ResolveUrl would do.
    So you can write plain old HTML like this:

    <a href="~/default.aspx">Home</a>

    and have it turned into:

    <a href="/myVirtual/default.aspx">Home</a>

    without having to use an ASP.NET control like Hyperlink or Image, or having to constantly use:

    <img src="<%= ResolveUrl("~/images/home.gif") %>" />

    in MVC applications (which frankly is one of the most annoying things about MVC, especially given the path hell that extension-less and endpoint-less URLs impose).

    I can’t take credit for this idea. While discussing the Response.Filter issues on Twitter, a hint came from Dylan Beattie, who pointed me at one of his examples which does something similar. I thought the idea was cool enough to use as an example for future demos of Response.Filter functionality in ASP.NET next time I do the Modules and Handlers talk (which was great fun BTW). How practical this is is debatable, however, since there’s definitely some overhead to using a Response.Filter in general, and especially to one that caches the output and then re-writes it later. Make sure to test for performance any time you hook up a Response.Filter, and make sure it doesn’t end up killing perf on you. You’ve been warned :-}.

    How does ResponseFilterStream work?

    The big win of this implementation IMHO is that it’s a reusable component – so for implementation there’s no new class, no subclassing – you simply attach to an event and implement an event handler method with a straightforward signature to retrieve the stream or string you’re interested in.

    The implementation is based on a subclass of Stream, as is required in order to meet the Response.Filter requirements. What’s different from other implementations I’ve seen in various places is that it supports capturing output as a whole, to allow retrieving the full response output for capture or modification. The exceptions are the TransformWrite and TransformWriteString events, which operate only on the active chunk of data written by the Response.

    For captured output, the Write() method captures output into an internal MemoryStream that is cached until writing is complete. So Write() is called when ASP.NET writes to the Response stream, but the filter doesn’t pass on the Write immediately to the underlying stream. The data is cached, and only when the Flush() method is called to finalize the Stream’s output do we actually send the cached stream off for transformation (if the events are hooked up) and THEN finally write out the returned content in one big chunk.

    Here’s the implementation of ResponseFilterStream:

    /// <summary>
    /// A semi-generic Stream implementation for Response.Filter with
    /// an event interface for handling Content transformations via
    /// Stream or String.
    /// <remarks>
    /// Use with care for large output as this implementation copies
    /// the output into a memory stream and so increases memory usage.
    /// </remarks>
    /// </summary>
    public class ResponseFilterStream : Stream
    {
        /// <summary>
        /// The original stream
        /// </summary>
        Stream _stream;

        /// <summary>
        /// Current position in the original stream
        /// </summary>
        long _position;

        /// <summary>
        /// Stream that original content is read into
        /// and then passed to the TransformStream function
        /// </summary>
        MemoryStream _cacheStream = new MemoryStream(5000);

        /// <summary>
        /// Internal pointer that keeps track of the size
        /// of the cacheStream
        /// </summary>
        int _cachePointer = 0;

        public ResponseFilterStream(Stream responseStream)
        {
            _stream = responseStream;
        }

        /// <summary>
        /// Determines whether the stream is captured
        /// </summary>
        private bool IsCaptured
        {
            get
            {
                if (CaptureStream != null || CaptureString != null ||
                    TransformStream != null || TransformString != null)
                    return true;
                return false;
            }
        }

        /// <summary>
        /// Determines whether the Write method is outputting data immediately
        /// or delaying output until Flush() is fired.
        /// </summary>
        private bool IsOutputDelayed
        {
            get
            {
                if (TransformStream != null || TransformString != null)
                    return true;
                return false;
            }
        }

        /// <summary>
        /// Event that captures Response output and makes it available
        /// as a MemoryStream instance. Output is captured but won't
        /// affect Response output.
        /// </summary>
        public event Action<MemoryStream> CaptureStream;

        /// <summary>
        /// Event that captures Response output and makes it available
        /// as a string. Output is captured but won't affect Response output.
        /// </summary>
        public event Action<string> CaptureString;

        /// <summary>
        /// Event that allows you to transform the stream as each chunk of
        /// the output is written in the Write() operation of the stream.
        /// This means that it's possible/likely that the input
        /// buffer will not contain the full response output but only
        /// one of potentially many chunks.
        ///
        /// This event is called as part of the filter stream's Write()
        /// operation.
        /// </summary>
        public event Func<byte[], byte[]> TransformWrite;

        /// <summary>
        /// Event that allows you to transform the response stream as
        /// each chunk of byte[] output is written during the stream's write
        /// operation. This means it's possible/likely that the string
        /// passed to the handler only contains a portion of the full
        /// output. Typical buffer chunks are around 16k a piece.
        ///
        /// This event is called as part of the stream's Write operation.
        /// </summary>
        public event Func<string, string> TransformWriteString;

        /// <summary>
        /// This event allows capturing and transformation of the entire
        /// output stream by caching all write operations and delaying final
        /// response output until Flush() is called on the stream.
        /// </summary>
        public event Func<MemoryStream, MemoryStream> TransformStream;

        /// <summary>
        /// Event that can be hooked up to handle Response.Filter
        /// Transformation. Passed a string that you can modify and
        /// return back as a return value. The modified content
        /// will become the final output.
        /// </summary>
        public event Func<string, string> TransformString;

        protected virtual void OnCaptureStream(MemoryStream ms)
        {
            if (CaptureStream != null)
                CaptureStream(ms);
        }

        private void OnCaptureStringInternal(MemoryStream ms)
        {
            if (CaptureString != null)
            {
                string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray());
                OnCaptureString(content);
            }
        }

        protected virtual void OnCaptureString(string output)
        {
            if (CaptureString != null)
                CaptureString(output);
        }

        protected virtual byte[] OnTransformWrite(byte[] buffer)
        {
            if (TransformWrite != null)
                return TransformWrite(buffer);
            return buffer;
        }

        private byte[] OnTransformWriteStringInternal(byte[] buffer)
        {
            Encoding encoding = HttpContext.Current.Response.ContentEncoding;
            string output = OnTransformWriteString(encoding.GetString(buffer));
            return encoding.GetBytes(output);
        }

        private string OnTransformWriteString(string value)
        {
            if (TransformWriteString != null)
                return TransformWriteString(value);
            return value;
        }

        protected virtual MemoryStream OnTransformCompleteStream(MemoryStream ms)
        {
            if (TransformStream != null)
                return TransformStream(ms);
            return ms;
        }

        /// <summary>
        /// Allows transforming of strings.
        ///
        /// Note this handler is internal and not meant to be overridden,
        /// as the TransformString event has to be hooked up in order
        /// for this handler to even fire, to avoid the overhead of string
        /// conversion on every pass through.
        /// </summary>
        private string OnTransformCompleteString(string responseText)
        {
            if (TransformString != null)
                return TransformString(responseText);
            return responseText;
        }

        /// <summary>
        /// Wrapper method for OnTransformString that handles
        /// stream to string and vice versa conversions
        /// </summary>
        internal MemoryStream OnTransformCompleteStringInternal(MemoryStream ms)
        {
            if (TransformString == null)
                return ms;

            string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray());
            content = TransformString(content);
            byte[] buffer = HttpContext.Current.Response.ContentEncoding.GetBytes(content);

            ms = new MemoryStream();
            ms.Write(buffer, 0, buffer.Length);
            return ms;
        }

        public override bool CanRead
        {
            get { return true; }
        }

        public override bool CanSeek
        {
            get { return true; }
        }

        public override bool CanWrite
        {
            get { return true; }
        }

        public override long Length
        {
            get { return 0; }
        }

        public override long Position
        {
            get { return _position; }
            set { _position = value; }
        }

        public override long Seek(long offset, System.IO.SeekOrigin direction)
        {
            return _stream.Seek(offset, direction);
        }

        public override void SetLength(long length)
        {
            _stream.SetLength(length);
        }

        public override void Close()
        {
            _stream.Close();
        }

        /// <summary>
        /// Override flush by writing out the cached stream data
        /// </summary>
        public override void Flush()
        {
            if (IsCaptured && _cacheStream.Length > 0)
            {
                // Check for transform implementations
                _cacheStream = OnTransformCompleteStream(_cacheStream);
                _cacheStream = OnTransformCompleteStringInternal(_cacheStream);

                OnCaptureStream(_cacheStream);
                OnCaptureStringInternal(_cacheStream);

                // write the stream back out if output was delayed
                if (IsOutputDelayed)
                    _stream.Write(_cacheStream.ToArray(), 0, (int)_cacheStream.Length);

                // Clear the cache once we've written it out
                _cacheStream.SetLength(0);
            }

            // default flush behavior
            _stream.Flush();
        }

        public override int Read(byte[] buffer, int offset, int count)
        {
            return _stream.Read(buffer, offset, count);
        }

        /// <summary>
        /// Overridden to capture output written by ASP.NET into a cached
        /// stream that is written out later when Flush() is called.
        /// </summary>
        public override void Write(byte[] buffer, int offset, int count)
        {
            if (IsCaptured)
            {
                // copy to holding buffer only - we'll write out later
                _cacheStream.Write(buffer, 0, count);
                _cachePointer += count;
            }

            // just transform this buffer
            if (TransformWrite != null)
                buffer = OnTransformWrite(buffer);
            if (TransformWriteString != null)
                buffer = OnTransformWriteStringInternal(buffer);

            if (!IsOutputDelayed)
                _stream.Write(buffer, offset, buffer.Length);
        }
    }

    The key features are the events and the corresponding OnXXX methods that handle the event hookups, and the Write() and Flush() methods of the stream implementation. All the rest of the members tend to be plain-jane pass-through stream implementation code without much consequence. I do love the way Action<T> and Func<T> make it so easy to create the event signatures for the various events – sweet.

    A few Things to consider

    Performance

    Response.Filter is not great for performance in general, as it adds another layer of indirection to the ASP.NET output pipeline, and this implementation in particular adds a memory hit since it basically duplicates the response output into the cached memory stream, which is necessary since you may have to look at the entire response. If you have large pages in particular, this can cause potentially serious memory pressure in your server application. So be careful of wholesale adoption of this (or other) Response.Filters. Make sure to do some performance testing to ensure it’s not killing your app’s performance.

    Response.Filter works everywhere

    A few questions came up in comments and discussion as to capturing ALL output hitting the site – and yes, you can definitely do that by assigning a Response.Filter inside of a module. If you do this, however, you’ll want to be very careful and decide which content you actually want to capture, especially in IIS 7, which passes ALL content – including static images/CSS etc. – through the ASP.NET pipeline. So it is important to filter only on what you’re looking for – like the page extension, or maybe more effectively the Response.ContentType.

    Response.Filter Chaining

    Originally I thought that filter chaining doesn’t work at all due to a bug in the stream implementation code. But it’s quite possible to assign multiple filters to the Response.Filter property.
    So the following actually works to both compress the output and apply the transformed content:

    WebUtils.GZipEncodePage();

    ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
    filter.TransformString += filter_TransformString;
    Response.Filter = filter;

    However, the following does not work, resulting in invalid content encoding errors:

    ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
    filter.TransformString += filter_TransformString;
    Response.Filter = filter;

    WebUtils.GZipEncodePage();

    In other words, multiple Response filters can work together, but it depends entirely on the implementation whether they can be chained, and in which order. In this case the GZip/Deflate stream filters apparently rely on the original content length of the output and choke when the content is modified. But attaching the compression first works fine, as unintuitive as that may seem.

    Resources

    Download example code
    Capture Output from ASP.NET Pages

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET

    Read the article

  • Slow transfer speed between two servers

    - by Linux Guy
    I have two servers; both network cards are 10Gbps. The inbound bandwidth between the two servers is 10Gbps; the outbound internet bandwidth is 500Mbps. Both servers use public IP addresses on the public and private networks. Both servers transfer and connect on the nginx port, and server B is used for streaming media, like YouTube-style video streams. I checked the transfer speed using the iperf utility, from server A to server B:

    # iperf -c 0.0.0.1 -p 8777
    ------------------------------------------------------------
    Client connecting to 0.0.0.1, TCP port 8777
    TCP window size: 85.3 KByte (default)
    ------------------------------------------------------------
    [  3] local 0.0.0.0 port 38895 connected with 0.0.0.1 port 8777
    [ ID] Interval       Transfer     Bandwidth
    [  3]  0.0-10.8 sec   528 KBytes   399 Kbits/sec

    My current connections on server B:

    # netstat -an|grep ":8777"|awk '/tcp/ {print $6}'|sort -nr| uniq -c
       2072 TIME_WAIT
         28 SYN_RECV
          1 LISTEN
        189 LAST_ACK
        139 FIN_WAIT2
        373 FIN_WAIT1
       3381 ESTABLISHED
         34 CLOSING

    Server A network card information:

    Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes: 100baseT/Full 1000baseT/Full 10000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Advertised link modes: 10000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Speed: 10000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: external
        Auto-negotiation: on
        MDI-X: Unknown
        Supports Wake-on: d
        Wake-on: d
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: yes

    Server B network card information:

    Settings for eth2:
        Supported ports: [ FIBRE ]
        Supported link modes: 10000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: No
        Advertised link modes: 10000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Speed: 10000Mb/s
        Duplex: Full
        Port: Direct Attach Copper
        PHYAD: 0
        Transceiver: external
        Auto-negotiation: off
        Supports Wake-on: d
        Wake-on: d
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: yes

    The problem: as the iperf output shows, the transfer speed from server A to server B is slow. When I restart the network service the connection is fine, but after about 2 minutes it becomes slow again. How can I troubleshoot the slow-speed issue and fix it on server B? Note: if there are any other commands I should run on the servers for more information that might help resolve the problem, let me know in the comments.

    Read the article

< Previous Page | 78 79 80 81 82 83 84 85 86 87 88 89  | Next Page >