Search Results

Search found 1172 results on 47 pages for 'measure'.

Page 13 of 47

  • SEO Checklist - Top 10 Most Common SEO Issues

    I read a lot of SEO blogs and forums, and it never fails to amaze me how the same fundamental, basic SEO questions crop up time and time again, and the same old advice is wheeled out, rinsed, and then wheeled out again. And when that is finished, it is wheeled out once more for good measure. So I thought I would compile the 10 most common SEO questions and issues we come up against when assessing clients' websites.

    Read the article

  • Ranking with PowerPivot – a different approach

    - by Marco Russo (SQLBI)
    Alberto Ferrari wrote an interesting post about a “different approach” to creating a ranking measure with PowerPivot. If you know DAX or have read our book, you will find that a DAX expression can solve the issue. However, such a formula is more complex than necessary. The next version of PowerPivot might have more built-in DAX functions and should solve the ranking need with a simpler formula. In the meantime, it is interesting to know a different approach that relies on Excel skills instead of...(read more)

    Read the article

  • Google Analytics: Custom variables issue difference in data

    - by Bart
    We’ve set up tracking through custom variables in Google Analytics to measure which offices are getting the most traffic. The custom variable consists of a key (office) and a value (the office name). In the Custom Variables tab under Audience we get no data (actually we got 1 hit, but we think the data is way off). When we set up advanced segments with filters on the key and value, we get the correct data. Now we are wondering why we aren't getting that data in the Custom Variables tab.

    Read the article

  • JavaScript: Prototype 1.7 library released as a Release Candidate, discover what's new!

    Version 1.7 of Prototype (in release candidate), in addition to various bug fixes and optimizations, brings new tools that will greatly simplify our lives. Let me walk you through them. 1. The measure method. It retrieves a computed CSS property value expressed in pixels and returns it as a number, which is very handy for calculations: no more stripping the "px", testing for isNaN and parsing; Prototype takes care of everything. Code:

    Read the article

  • What does "kTriangles/s" mean in hardware graphics benchmark reports?

    - by swquinn
    I've looked around and found several sites offering benchmarking statistics for mobile platforms and I've been seeing the unit of measure as "kTriangles/s". Originally I misread this, missing the 'k'; does this translate to "thousand(s) of triangles/s", e.g.: 8902 kTriangles/s = 8,902,000 triangles/s (I'm pretty sure that my interpretation is correct, but I hope someone can confirm this for me) Thanks!
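
    For reference, the "k" prefix is just the SI kilo factor of 1,000, so the arithmetic is trivial; a quick sanity check in Python (the benchmark figure is the one quoted in the question, not from any real device):

        # kTriangles/s means thousands of triangles per second.
        ktriangles_per_s = 8902
        triangles_per_s = ktriangles_per_s * 1000
        print(triangles_per_s)  # 8902000, i.e. roughly 8.9 million triangles per second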

    Read the article

  • A Checklist of Current SEO Techniques

    Search engine optimization entails fine-tuning a website's code alongside a thorough linking architecture, and, just like mechanics, SEOs use a checklist to track what has already been accomplished and to measure the effect of each step on all the other variables. The following is a basic overview of the steps you should take to ensure you leave nothing to chance that could have a positive effect on your website's ranking and thus improve your traffic.

    Read the article

  • The Truth About Google's PR

    How many times have you heard the term PageRank? Also known as PR, PageRank is an algorithm and a trademark used by the most popular search engine in the world: Google. The goal of PR is to measure a website's importance based on the links pointing to it.

    Read the article

  • The Importance of Tracking an SEO Campaign

    Before embarking on an SEO campaign, it is vitally important that you set up tools which will enable you to track the success of the overall campaign. There are multiple tools you can employ to achieve this; probably the most frequently used is Google Analytics, which is fast and easy to install and very user friendly. However, it will not measure everything you need.

    Read the article

  • 10 Things You Should Do After You Install WordPress

    It's a good idea to go to your server and locate the WordPress instruction and installation file that is automatically uploaded with the WordPress software. You need to rename this file to something random and hard to guess. This is a preventative measure to deter hackers from easily finding that your site runs WordPress and trying to exploit any vulnerabilities in the software.

    Read the article

  • Social Media Marketing Needed For Search Engine Optimization

    In his book "The Referral Engine," John Jantsch says of search engine optimization: "It has become extremely difficult to achieve any measure of success for important keyword phrases without the use of social media. (The flip side is that the organizations that take advantage of social media can dominate, particularly within industries slow to adapt.)"

    Read the article

  • Developing Your Visitor Web Experience

    It can be daunting for people who are new to internet marketing to see a way forward. Without help, it is easy to get lost and give up. The simple measure of putting up a website, though, is not enough in itself to make an income.

    Read the article

  • Getting Started With XML Indexes

    XML indexes make a huge difference to the speed of XML queries, as Seth Delconte explains and demonstrates by running queries against half a million XML employee records. The execution time of a query is reduced from two seconds to being too quick to measure, purely by creating the right type of secondary index for the query.

    Read the article

  • Throughput; capacity planning help for C10K like design

    - by z8000
    I am designing a network service in which clients connect and stay connected -- the model is not far off from IRC less the s2s connections. I could use some help understanding how to do capacity planning, in particular with the system resource costs associated with handling messages from/to clients. There's an article that tried to get 1 million clients connected to the same server [1]. Of course, most of these clients were completely idle in the test. If the clients sent a message every 5 seconds or so, the system would surely be brought to its knees. But... How do you do less hand-waving and, you know, actually measure such a breaking point? We're talking about messages being sent by a client over a TCP socket, into the kernel, and read by an application. The data is shuffled around in memory from one buffer to another. Do I need to consider memory throughput ("5 GT/s" [2], etc.)? I'm pretty sure I have the ability to measure the basic memory requirements due to TCP/IP buffers, expected bandwidth, and CPU resources required to process messages. I'm a little dim on what I'm calling "throughput". Help! Also, does anyone really do this? Or do most people sort of hand-wave, see what the real world offers, and then react appropriately?
    [1] http://www.metabrew.com/article/a-million-user-comet-application-with-mochiweb-part-3/
    [2] http://en.wikipedia.org/wiki/GT/s
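
    One way to replace the hand-waving with numbers, before running any load test, is a back-of-envelope model: pick an assumed message size, message rate, and per-connection footprint, and multiply them out. A minimal sketch in Python; every figure below is an assumption to be swapped for measured values:

        # Back-of-envelope capacity model for a persistent-connection (C10K-style) service.
        clients           = 1_000_000   # concurrently connected clients
        msg_bytes         = 512         # assumed average message size on the wire
        msgs_per_client_s = 1 / 5       # one message every 5 seconds per client
        sock_buf_bytes    = 8 * 1024    # assumed kernel send+recv buffers per socket
        app_state_bytes   = 4 * 1024    # assumed per-connection state in the application

        msgs_per_s   = clients * msgs_per_client_s
        ingress_mb_s = msgs_per_s * msg_bytes / 1e6
        conn_mem_gb  = clients * (sock_buf_bytes + app_state_bytes) / 1e9

        print(f"{msgs_per_s:,.0f} msgs/s, {ingress_mb_s:.1f} MB/s ingress, "
              f"{conn_mem_gb:.1f} GB of connection state")

    At these message rates the memory bus (the GT/s figure) is unlikely to be the first bottleneck; per-message CPU cost, syscalls and buffer copies usually give out earlier, which is what a targeted load test with a realistic message mix should measure.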

    Read the article

  • Looking for a recommendation on measuring a high availability app that is using a CDN.

    - by T Reddy
    I work for a Fortune 500 company that struggles with accurately measuring performance and availability for high availability applications (i.e., apps that are up 99.5% of the time with 5-second page-to-page navigation). We factor in both scheduled and unscheduled downtime to determine this availability number. However, we recently added a CDN into the mix, which complicates our metrics a bit. The CDN now handles about 75% of our traffic, with the remainder going to our own servers. We attempt to measure what we call a "true user experience" (i.e., our testing scripts emulate a typical user clicking through the application). These monitoring scripts sit outside of our network, which means we're hitting the CDN about 75% of the time. Management has decided that we take the worst-case scenario to measure availability. So if our origin servers are having problems but the CDN is serving content just fine, we still take a hit on availability. The same is true the other way around. My thought is that as long as the "user experience" is successful, we should not unnecessarily punish ourselves. After all, a CDN is there to improve performance and availability! I'm just wondering if anyone has any knowledge of how other Fortune 500 companies calculate their availability numbers? Take apple.com, for instance, a storefront that uses a CDN and never seems to be down (unless there is about to be a major product announcement). It would be great to have some hard, factual data, because I don't believe that we need to unnecessarily hurt ourselves on these metrics. We are making business decisions based on these numbers. I can say, however, that given these metrics are visible to management, issues get addressed and resolved pretty fast (read: we cut through the red tape pretty quickly). Unfortunately, as a developer, I don't want management to think that the application is up or down because some external factor (i.e., the CDN) is influencing the numbers. Thoughts? (I mistakenly posted this question on StackOverflow, sorry in advance for the cross-post.)
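
    If it helps the conversation with management, the gap between "worst case" and "traffic-weighted" availability is easy to put a number on. A small illustration in Python; the availability figures are invented purely to show the shape of the calculation:

        # Worst-case availability vs. availability weighted by traffic share.
        cdn_share,    cdn_avail    = 0.75, 0.999   # 75% of traffic served by the CDN
        origin_share, origin_avail = 0.25, 0.990   # 25% served directly by the origin

        worst_case = min(cdn_avail, origin_avail)
        weighted   = cdn_share * cdn_avail + origin_share * origin_avail

        print(f"worst case: {worst_case:.3%}   traffic-weighted: {weighted:.3%}")
        # worst case: 99.000%   traffic-weighted: 99.675%

    The weighted figure is closer to what the monitoring scripts (and real users) actually experience, which is the argument the post is making.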

    Read the article

  • Determining how all memory is used in Windows Server 2008

    - by Mojah
    I have a Windows Server 2008 system with 12 GB of RAM. If I list all processes in Task Manager and SUM() the memory of each process (Working Set, Memory (Private Working Set), Commit Size, ...), I never reach more than the 4-5 GB that should be "in use". However, Task Manager reports that this server has 11 GB in use via the "Performance" tab. I'm failing to determine where all that used RAM is going. It doesn't seem to be system cache, but I cannot be sure. It might be a memory leak in one of the applications, but I'm struggling to find out which one. The server's memory keeps filling up, and eventually forces us to reboot the server to clear it. I've been reading up on how RAM assignments work on Windows Server:
    - RAM, Virtual Memory, Pagefile and all that stuff: http://support.microsoft.com/kb/2267427
    - What's the best way to measure? http://www.zdnet.com/blog/bott/windows-7-memory-usage-whats-the-best-way-to-measure/1786
    - Configure the file system cache in Windows: http://smallvoid.com/article/winnt-system-cache.html
    But I fear I'm stuck without ideas at the moment.
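
    One quick way to see how big the unaccounted-for slice is: sum the per-process working sets yourself and subtract that from the machine-wide "in use" figure; whatever is left lives in kernel pools, driver allocations and the file system cache, none of which appear in the Task Manager process list. A rough cross-check using the third-party psutil package (illustrative only, not a diagnosis):

        # Sum per-process working sets and compare with the OS-wide "used" figure.
        import psutil

        process_total = 0
        for proc in psutil.process_iter():
            try:
                process_total += proc.memory_info().rss   # working set on Windows
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                pass

        vm = psutil.virtual_memory()
        gb = 1024 ** 3
        print(f"sum of process working sets: {process_total / gb:.1f} GB")
        print(f"OS reports in use:           {vm.used / gb:.1f} GB")
        print(f"difference (kernel, cache):  {(vm.used - process_total) / gb:.1f} GB")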

    Read the article

  • Odd MVC 4 Beta Razor Designer Issue

    - by Rick Strahl
    This post is a small cry for help, along with an explanation of a problem that is hard to describe on Twitter or even in a Connect bug, written in the hope that somebody has seen this before and has ideas on what might cause it. Lots of helpful people had comments on Twitter for me, but they all assumed that the code doesn't run, which is not the case - it's a designer issue. A few days ago I started getting some odd problems in my MVC 4 designer for an app I've been working on for the past 2 weeks. Basically the MVC 4 Razor designer keeps popping up errors about the call signature to various Html Helper methods being incorrect. It also complains that the ViewBag object doesn't support dynamic and asks me to load additional assemblies into the project. Here's what the designer errors look like (screenshot in the original post): you can see the red error underlines under the ViewBag and an Html Helper I plopped in at the top to demonstrate the behavior. Basically any HtmlHelper I'm accessing shows the same errors. Note that the code *runs just fine* - it's just the designer that is complaining with errors. What's odd is that *I think* this started only a few days ago, and nothing consequential that I can think of has happened to the project or my overall installation. These errors aren't critical since the code runs, but they are pretty annoying, especially if you're building and have .cshtml files open in Visual Studio, mixing these fake errors with real compiler errors.

    What I've checked: looking at the errors, it indeed looks like certain references are missing. I can't make sense of the Html Helpers error, but the ViewBag dynamic error certainly looks like the System.Core or Microsoft.CSharp assemblies are missing. Rest assured they are there and the code DOES run just fine at runtime. This is a designer issue only. I went ahead and checked the namespaces that MVC has access to in Razor, which live in the Views folder's web.config file (/Views/web.config):

        <system.web.webPages.razor>
          <host factoryType="System.Web.Mvc.MvcWebRazorHostFactory, System.Web.Mvc, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
          <pages pageBaseType="System.Web.Mvc.WebViewPage">
            <namespaces>
              <add namespace="System.Web.Mvc" />
              <add namespace="System.Web.Mvc.Ajax" />
              <add namespace="System.Web.Mvc.Html" />
              <add namespace="System.Web.Routing" />
              <add namespace="System.Linq" />
              <add namespace="System.Linq.Expressions" />
              <add namespace="ClassifiedsBusiness" />
              <add namespace="ClassifiedsWeb" />
              <add namespace="Westwind.Utilities" />
              <add namespace="Westwind.Web" />
              <add namespace="Westwind.Web.Mvc" />
            </namespaces>
          </pages>
        </system.web.webPages.razor>

    For good measure I added System.Linq and System.Linq.Expressions, on which some of the Html.xxxxFor() methods rely, but no luck. So, has anybody seen this before? Any ideas on what might be causing these issues only at design time, when the final compiled code runs just fine?

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Razor, MVC.

    Read the article

  • Proving What You are Worth

    - by Ted Henson
    Here is a challenge for everyone. Just about everyone has been asked to provide or calculate the Return on Investment (ROI), so I will assume everyone has a method they use. The problem with stopping once you have an ROI is that those in the C-Suite probably do not care about the ROI as much as Return on Equity (ROE). Shareholders are mostly concerned with the return on the money they invested. Warren Buffett looks at ROE when deciding whether to make a deal or not. This article will outline how you can add more meaning to your ROI and show how you can potentially enhance the ROE of the company.

    First I want to start with the base definitions I am using for ROI and ROE. Return on investment (ROI) and return on equity (ROE) are ways to measure management effectiveness, parts of a system of measures that also includes profit margins for profitability, price-to-earnings ratio for valuation, and various debt-to-equity ratios for financial strength. Without a set of evaluation metrics, a company's financial performance cannot be fully examined by investors. ROI and ROE calculate the rate of return on a specific investment and on the equity capital respectively, assessing how efficiently financial resources have been used. Typically, the best way to improve financial efficiency is to reduce production cost, so that will be the focus.

    Now that the challenge has been made and the terms have been defined, let's go deeper. Most research about implementation stops short at system start-up and seldom addresses post-implementation issues. However, we know implementation is a continuous improvement effort, and continued efforts after system start-up will influence the ultimate success of a system.

    Most UPK ROIs I have seen only include the cost savings in developing the training material. Some also include savings based on reduced Help Desk calls. Using just those values you get a good ROI. To get an ROE you need to go a little deeper. Typically, the best way to improve financial efficiency is to reduce production cost, which is the purpose of implementing or upgrading an enterprise application. Let's assume the new system is up and running and all users have been properly trained and are comfortable using the system. You provide senior management with an ROI that justifies the original cost. What you want to do now is develop a good baseline against which to measure current efficiency. Using usage tracking you can look for various patterns. For example, you may find that users who access UPK assistance process a procedure, such as entering an order, 5 minutes faster than those who don't. You do some research and discover that each minute saved in processing saves the company one dollar. That translates to the company saving five dollars on every transaction. Assuming 100,000 transactions are performed a year, and all users improve their performance, the company will be saving $500,000 a year. That $500,000 can be re-invested, used to reduce debt, or paid to the shareholders.

    With continued refinement during the life cycle, you should be able to find more ways to reduce cost. These are the kinds of numbers and productivity gains that senior management and shareholders want to see. Being able to quantify savings and increased productivity may also help when seeking a raise or promotion.
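
    The savings example above is simple enough to parameterize, so it can be recalculated as real usage-tracking numbers come in. A minimal sketch using the hypothetical figures from the article:

        # Annual savings from faster transaction processing (figures from the example above).
        minutes_saved_per_txn = 5        # UPK-assisted users vs. unassisted users
        dollars_per_minute    = 1.0      # value of one minute of processing time
        transactions_per_year = 100_000

        annual_savings = minutes_saved_per_txn * dollars_per_minute * transactions_per_year
        print(f"${annual_savings:,.0f} per year")   # $500,000 per year

    That recurring figure, rather than the one-off training-development savings, is what feeds an ROE-style argument: money that can be re-invested, used to reduce debt, or returned to shareholders.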

    Read the article

  • College Ratings via the Federal Government

    - by user9147039
    A few weeks back you might remember news about a higher education rating system proposal from the Obama administration. As I've discussed previously, political and stakeholder pressures to improve outcomes and increase transparency are stronger than ever before. The executive branch proposal is intended to make progress in this area. Quoting from the proposal itself, "The ratings will be based upon such measures as: Access, such as percentage of students receiving Pell grants; Affordability, such as average tuition, scholarships, and loan debt; and Outcomes, such as graduation and transfer rates, graduate earnings, and advanced degrees of college graduates."

    This is going to be quite complex, to say the least. Most notably, higher ed is not monolithic. From community and other 2-year colleges, to small private 4-year schools, to professional schools, to large public research institutions…the many walks of higher ed life are, well, many. Designing a ratings system that doesn't wind up with lots of unintended consequences and collateral damage will be difficult. At best you would end up potentially tarnishing the reputation of institutions that were actually performing well against the metrics and outcome measures that make sense in their "context" of education. At worst you could spend a lot of time and resources designing a system that would lose credibility with its "customers".

    A lot of institutions I work with already have systems like the one described above in place. They are tracking completion rates, completion timeframes, transfers to other institutions, job placement, and salary information. As I talk to these institutions, several constants are worth noting:
    • Deciding on which metrics to measure is complicated. While employment and salary data are relatively easy to track, qualitative measures are more difficult. How do you quantify the benefit to someone who studies in a field that may not compensate him or her as well as another, but that provides huge personal fulfillment and reward?
    • The data is available, but the systems to transform the data into actual information that can be used in meaningful ways are not. Too often in higher ed, information is siloed. As such, much of the data that needs to be part of a comprehensive system sits in multiple organizations, oftentimes outside the reach of core IT.
    • Politics and culture are big barriers. One of the areas that my team and I spend a lot of time talking about with higher ed institutions all over the world is the imperative to optimize for student success. This, like the tracking of students' achievement after graduation, requires a level of organizational capacity that does not currently exist. The primary barrier is the culture of "data islands" in higher ed, and the need for leadership to drive out the divisions between departments, schools, colleges, etc. and institute academy-wide analytics and data stewardship initiatives that will enable student success.
    • Data quality is a very big issue. So many disparate systems exist (some on premise, some "in the cloud") that keep data about "persons" using different means to identify them. Establishing a single source of truth about an individual and his or her data is difficult without some type of data quality policy and tools. Good tools actually exist but are seldom leveraged.

    Don't misunderstand: I think it's a great idea to drive additional transparency and accountability into the system of higher education, and not just at home, but globally. Students and parents need access to key data to make informed, responsible choices. The tools exist not only to enable this kind of information to be shared, but to capture the very metrics stakeholders care most about, in a way that makes sense in the context of a given institution's "place" in the overall higher ed panoply.

    Read the article

  • jQuery.width() & DOM refresh

    - by o_O Tync
    My script dynamically creates a <ul> with left-floating <li>s inside: it's a paginator. Afterwards, the script measures the width of all the <li>s and sums them up. The problem is that after the nodes are injected into the document, the browser refreshes the DOM and applies CSS styles, which takes a while. This has a negative effect on my script: when these operations are not complete before I measure the width, my script gets a wrong value. If I perform the measurement a second later, everything is ok. What I'm looking for is a way to detect the moment when the <ul> is fully drawn, styles are applied and the width has stabilized. Or at least a way to detect every dimension change. If there's a way to detect width stabilization, I would do the measuring right after it to get the correct values. P.S. Why I need this: my paginator's left-floating <li> items tend to move to the next line when the <ul> tries to become wider than the page itself. Even though most of the <li>s are invisible because of the parent <div>'s width restriction:

        div { width: 500px; overflow: hidden; }
        div ul { width: 100%; white-space: nowrap; }
        div ul li { display: block; float: left; }

    they still go down unless I specify the actual summed width of the <ul> with the script.

    Read the article

  • MDX equivalent to SQL subqueries with aggregation

    - by James Lampe
    I'm new to MDX and trying to solve the following problem. I've investigated calculated members, subselects, scope statements, etc. but can't quite get it to do what I want. Let's say I'm trying to come up with the MDX equivalent to the following SQL query:

        SELECT SUM(netMarketValue) net,
               SUM(CASE WHEN netMarketValue > 0 THEN netMarketValue ELSE 0 END) assets,
               SUM(CASE WHEN netMarketValue < 0 THEN netMarketValue ELSE 0 END) liabilities,
               SUM(ABS(netMarketValue)) gross,
               someEntity1
        FROM (
            SELECT SUM(marketValue) netMarketValue, someEntity1, someEntity2
            FROM <some set of tables>
            GROUP BY someEntity1, someEntity2) t
        GROUP BY someEntity1

    In other words, I have an account ledger where I hide internal offsetting transactions (within someEntity2), then calculate assets and liabilities after aggregating them by someEntity2. Then I want to see the grand total of those assets and liabilities aggregated by the bigger entity, someEntity1. In my MDX schema I'd presumably have a cube with dimensions for someEntity1 and someEntity2, and marketValue would be my fact table/measure. I suppose I could create another DSV that did what my subquery does (calculating net), and simply create a cube with that as my measure dimension, but I wonder if there is a better way. I'd rather not have 2 cubes (one for these net calculations and another to go to a lower level of granularity for other use cases), since it would mean a lot of duplicate info in my database. These will be very large cubes.

    Read the article

  • Adapting Machine Learning Algorithms to my Problem

    - by Berkay
    I'm working on a project and need your ideas and advice. First of all, let me describe my problem. A machine has a power button and some other keys, and only one user is authorized to use it. There are no other authentication methods; the machine is in a public area in a company. The machine is operated with a combination of the power button and some other keys. The order in which the keys are pressed is secret, but we don't trust it: anybody could learn the password and access the machine. I can capture the key hold times and also some other metrics measuring the time differences between keys, such as horizontal or vertical key press time differences. These all mean I have some inputs, and now I'm trying to build a user profile by analysing them. My idea is to have the authenticated user press the password n times and create a threshold or something similar. This method could also be called biometrics; anyone else who knows the button combination can try the password, but if they fall outside this range they cannot get access. How can I adapt this into my algorithms? Where should I start? I don't want to delve deep into machine learning, and I can see that in my first try the false positive and false negative rates will be really high, but I can manage that by changing my inputs. Thanks.
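
    Before reaching for full machine-learning models, a common starting point for this kind of keystroke-dynamics problem is a simple statistical profile: record the timing vector (hold times and inter-key intervals) for each of the n enrollment entries, compute a per-feature mean and standard deviation, and accept a later attempt only if its z-score distance from the profile stays under a threshold. A minimal sketch in Python; the timing data and the threshold value are made up for illustration:

        # Keystroke-dynamics baseline: mean/std-dev profile with a z-score distance threshold.
        import math

        def build_profile(samples):
            """Per-feature mean and standard deviation from the enrollment samples."""
            n = len(samples)
            columns = list(zip(*samples))
            means = [sum(col) / n for col in columns]
            stds  = [math.sqrt(sum((x - m) ** 2 for x in col) / n) or 1.0
                     for col, m in zip(columns, means)]
            return means, stds

        def distance(attempt, profile):
            """Mean absolute z-score of a login attempt against the profile."""
            means, stds = profile
            return sum(abs(x - m) / s for x, m, s in zip(attempt, means, stds)) / len(attempt)

        # Hypothetical enrollment data: the legitimate user enters the combination 4 times.
        # Each vector holds timing features in milliseconds (key hold times, inter-key gaps).
        enrollment = [
            [120,  95, 210, 180],
            [115, 100, 205, 170],
            [125,  90, 215, 185],
            [118,  97, 208, 175],
        ]
        profile = build_profile(enrollment)
        threshold = 2.0   # arbitrary; tune it to balance false accepts against false rejects

        attempt = [122, 96, 212, 178]   # timings captured during a new login attempt
        print("accept" if distance(attempt, profile) <= threshold else "reject")

    Collecting more enrollment samples and plotting false-accept against false-reject rates for different thresholds is the usual way to deal with both error rates being high at first.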

    Read the article

  • How can I make a WPF combo box have the width of its widest element in XAML ?

    - by csuporj
    I know how to do it in code, but can this be done in XAML?

    Window1.xaml:

        <Window x:Class="WpfApplication1.Window1"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="Window1" Height="300" Width="300">
            <Grid>
                <ComboBox Name="ComboBox1" HorizontalAlignment="Left" VerticalAlignment="Top">
                    <ComboBoxItem>ComboBoxItem1</ComboBoxItem>
                    <ComboBoxItem>ComboBoxItem2</ComboBoxItem>
                </ComboBox>
            </Grid>
        </Window>

    Window1.xaml.cs:

        using System.Windows;
        using System.Windows.Controls;

        namespace WpfApplication1
        {
            public partial class Window1 : Window
            {
                public Window1()
                {
                    InitializeComponent();

                    double width = 0;
                    foreach (ComboBoxItem item in ComboBox1.Items)
                    {
                        item.Measure(new Size(double.PositiveInfinity, double.PositiveInfinity));
                        if (item.DesiredSize.Width > width)
                            width = item.DesiredSize.Width;
                    }

                    ComboBox1.Measure(new Size(double.PositiveInfinity, double.PositiveInfinity));
                    ComboBox1.Width = ComboBox1.DesiredSize.Width + width;
                }
            }
        }

    Read the article

  • How can I get the dimensions of a drawn string without padding?

    - by Dan Herbert
    I'm trying to create an image with some text on it and I want the image's size to match the size of the rendered text. When I use System.Windows.Forms.TextRenderer.MeasureText(...) to measure the text, I get dimensions that include font padding. When the text is rendered, it seems to use the same padding. Is there any way to determine the size of a string of text and then render it without any padding? This is the code I've tried:

        // Measure the text dimensions
        var text = "Hello World";
        var fontFamily = new Font("Arial", 30, FontStyle.Regular);
        var textColor = new SolidBrush(Color.Black);
        Size textSize = TextRenderer.MeasureText(text, fontFamily, Size.Empty, TextFormatFlags.NoPadding);

        // Render the text with the given dimensions
        Bitmap bmp = new Bitmap(textSize.Width, textSize.Height);
        Graphics g = Graphics.FromImage(bmp);
        g.TextRenderingHint = TextRenderingHint.AntiAliasGridFit;
        g.DrawString(text, fontFamily, textColor, new PointF(0, 0));
        bmp.Save("output.png", ImageFormat.Png);

    The original post includes images of what currently gets rendered and of the desired result (both omitted here). I also looked into Graphics.MeasureString(...), but that can only be used on an existing Graphics object, and I want to know the size before creating the image. Also, calling Graphics.MeasureString(...) returns the same dimensions, so it doesn't help me any.

    Read the article

  • ContentControl + RenderTargetBitmap + empty image

    - by Kellls
    I'm trying to create some chart images without ever displaying those charts on the screen. I've been at this for quite a while and tried a lot of different things, but nothing seems to work. The code works perfectly if I display the chart in a window first, but if I don't display it in a window, the bitmap is just white with a black border (no idea why). I have tried adding the chart to a border before rendering and giving the border a green BorderBrush. In the bitmap, I see the green BorderBrush, then the black border and white background, but no chart. I don't know where the black border is coming from, as the chart is not contained in a black border. I have tried adding the chart to a window without calling window.Show(), and again I get just the black border and white background. However, if I call window.Show(), the bitmap contains the chart. I have tried using a DrawingVisual as explained here, with the same result. Here is the code (not including adding the element to a border or window):

        private static BitmapSource CreateElementScreenshot(FrameworkElement element, int dpi)
        {
            if (!element.IsMeasureValid)
            {
                Size size = new Size(element.Width, element.Height);
                element.Measure(size);
                element.Arrange(new Rect(size));
            }

            element.UpdateLayout();

            var scale = dpi / 96.0;
            var renderTargetBitmap = new RenderTargetBitmap(
                (int)(scale * element.RenderSize.Width),
                (int)(scale * element.RenderSize.Height),
                dpi, dpi, PixelFormats.Default);

            // This is waiting for the dispatcher to perform measure, arrange and render passes.
            element.Dispatcher.Invoke(((Action)(() => renderTargetBitmap.Render(element))), DispatcherPriority.Render);

            return renderTargetBitmap;
        }

    Note: the chart is a ContentControl. Is there any way I can get the chart to render without displaying it in a window first?

    Read the article
