Search Results

Search found 5003 results on 201 pages for 'excel charts'.

Page 159/201 | < Previous Page | 155 156 157 158 159 160 161 162 163 164 165 166  | Next Page >

  • Changing time intervals for vSphere performance monitoring, and is there a better way?

    - by user991710
    I have a set of experiments running on a cluster node which is running ESXi 5.1, and I want to monitor the resource consumption on the node itself. Specifically, I am currently running experiments on a subset of the VMs on the ESXi host and wish to monitor resource consumption on those specific VMs. Right now, since I'm using only a single ESXi host, I am using vSphere to access it and the performance reports. Ideally, I would like to get these reports for different time intervals. I can already get the charts for a time interval of 1h, but these are rather long-running experiments, and something like 2h, 3h,... would be preferable. However, I cannot seem to change the time interval in the Customize Performance Chart dialog (screenshot omitted). I am also running on a trial key at the moment. How can I change this interval? Do I need a standard license, or do I just need to turn off the VM (unlikely, but I haven't attempted it yet as these are long-running experiments)? Any help (or pointers to documentation which deals with the above -- I've already looked but did not find much) would be greatly appreciated.

    Read the article

  • How do I make a PPT file as small as possible?

    - by grunwald2.0
    Currently I am agonizing over several large presentation files, which I happened to reprint to PDFs... One thing I wondered: do PPT files (from Microsoft PowerPoint) always have to be that big? And what would be the strategies to make a PPT smaller? (If we say "ceteris paribus" at e.g. 25 slides, and assuming that one isn't allowed to use a cloud-based service like GDocs, rocketslide or Prezio.) Of course there are the obvious "bad guys": images and graphics. But how about roll-over animations and the like? Who knows how much space they take? How about SmartArt? Could one save file size by using OpenOffice or LibreOffice Impress? (I haven't tried it yet.) And "what if": what if we need to include e.g. five images (or charts that can't be remade in Excel in time), how would we best reduce the file size impact of those five images, if we needed to? I ask all this from an honest "business" perspective. I am no nerd or "Microsoft MVP" and I don't intend on delving into LaTeX or similar yet. But that doesn't mean that I am not curious and very willing to learn. I am basically interested in (proven) best practices. Yes, I know this question is lacking "initial research", but I think the perspective of my question is interesting and unique to a lot of people, and if we intend to make SE a "Q&A" / wiki kind of reference site, this question might be a good way to collect advice on a question that has a very defined goal: minimum file size.

    Read the article

  • Despeckle line art

    - by Dour High Arch
    We have a number of line-art charts unfortunately saved as JPEGs. They are now riddled with distracting compression artifacts, or "speckles". Is there any way of removing these? I do not have the original files, and it will be very difficult to recreate them. I am running Windows 7 and tried Paint.NET; none of the filters help. Posterize washes out all the colors and leaves the speckles. Blur makes text unreadable. Noise Reduction wrecks the antialiasing of curved lines and perversely enhances the speckles, making them look like checkerboards. Yes, I have Googled for software to do this; there are many programs that advertise despeckling, but after my experience with Paint.NET I do not want to experiment with applications that show no before-and-after images. The only example I have seen that does what I want is from a Photoshop tutorial. I have dozens of files, and the tutorial requires considerable manual fine-tuning; I would prefer to automate or batch-process this task. Commercial apps are fine, but I do not want to spend over $600 and learn a complex program for a single task.

    Read the article

  • SSH & SFTP: Should I assign one port to each user to facilitate bandwidth monitoring?

    - by BertS
    There is no easy way to track real-time per-user bandwidth usage for SSH and SFTP. I think assigning one port to each user may help.

    Idea of implementation / use case: Bob, with UID 1001, shall connect on port 31001. Alice, with UID 1002, shall connect on port 31002. John, with UID 1003, shall connect on port 31003. (I do not want to launch several sshd instances as proposed in question 247291.)

    1. Setup for SFTP. In /etc/ssh/sshd_config:

        Port 31001
        Port 31002
        Port 31003
        Subsystem sftp /usr/bin/sftp-wrapper.sh

    The file sftp-wrapper.sh starts the SFTP server only if the port is the correct one:

        #!/bin/sh
        # The expected port is "3" followed by the user's UID, e.g. 31001 for UID 1001.
        mandatory_port=3`id -u`
        current_port=`echo $SSH_CONNECTION | awk '{print $4}'`
        if [ $mandatory_port -eq $current_port ]
        then
            exec /usr/lib/openssh/sftp-server
        fi

    2. Additional setup for SSH. A few lines in /etc/profile prevent the user from connecting on the wrong port:

        if [ -n "$SSH_CONNECTION" ]
        then
            mandatory_port=3`id -u`
            current_port=`echo $SSH_CONNECTION | awk '{print $4}'`
            if [ $mandatory_port -ne $current_port ]
            then
                echo "Please connect on port $mandatory_port."
                exit 1
            fi
        fi

    Benefits: now it should be easy to monitor per-user bandwidth usage, and an RRDtool-based application could produce per-user traffic charts. I know this won't be a perfect calculation of the bandwidth usage: for example, if somebody launches a brute-force attack on port 31001, there will be a lot of traffic on this port although not from Bob. But this is not a problem to me: I do not need an exact computation of per-user bandwidth usage, but an indicator that is approximately correct in standard situations.

    Questions: Is the idea of assigning one port to each user a good one? Is the proposed setup a reliable one? If I have to open dozens of ports for many users, should I expect a performance drawback? Do you know of an RRDtool-based application which could produce such charts?
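    As an aside on the monitoring step itself (an illustrative addition, not part of the original question): once each user has a dedicated port, per-user byte counts can be collected with plain iptables accounting rules, whose counters an RRDtool feeder script could poll. A rough sketch:

        # Counting-only rules (no -j target): they just accumulate packet/byte counters.
        iptables -A INPUT  -p tcp --dport 31001 -m comment --comment "bob in"
        iptables -A OUTPUT -p tcp --sport 31001 -m comment --comment "bob out"
        iptables -A INPUT  -p tcp --dport 31002 -m comment --comment "alice in"
        iptables -A OUTPUT -p tcp --sport 31002 -m comment --comment "alice out"

        # Read the byte counters, e.g. from a cron job feeding RRDtool:
        iptables -L -v -n -x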

    Read the article

  • Improving the Industry’s Best Cloud Project Portfolio Management (PPM) Solution – New Release of Instantis EnterpriseTrack

    - by Melissa Centurio Lopes
    By Yasser Mahmud, Vice President of Product Strategy & Industry Marketing, Oracle Primavera

    We know that in today’s rapidly changing world, organizations and leaders must adapt to fierce competition, business climate change and customers consistently demanding more for less. And project portfolio management (PPM) initiatives are a key component to help organizations thrive and stand out among competitors. That’s why I’m excited to announce Instantis EnterpriseTrack 8.5. Since Oracle’s acquisition of Instantis late last year, we’ve been busy working to enhance the leading cloud PPM solution. Here’s what’s new:

    Perform more precise resource planning and management
    - Gain more precise capacity visibility for resource planning and project execution with resource calendars that capture vacation, LOA and part-time resource availability
    - Ensure compliance and governance processes with activity labor cost capitalization
    - Improve project labor cost estimation, tracking and administration with variable resource rates

    Optimize project demand management and execution
    - Enhance productivity and analysis with a flexible staffing plan for project requests and simplified finance estimation
    - Improve project status communication and execution with estimated time to complete (ETC) in timesheets and projects
    - Achieve audit compliance and governance with field change history for key project and project request fields
    - Enforce proper financial accounting processes with the new strict finance lock/close period option

    Improve reporting and the user experience
    - Enhance user productivity and analysis with improved listing pages
    - Improve program reporting with new program filters in listing pages and reports
    - Run large-data-volume user-defined Excel reports with MS Excel 2010 support
    - Accelerate user productivity and satisfaction with an improved user interface for project issues, risks, and scope changes
    - Enjoy faster system response and improved user experience with optimized listing pages, resource planning, and application cache
    - Deliver user self-service training on demand with UPK support

    And if that wasn’t enough, we’ve also made additional improvements to timesheets, field change history and finance lock/close period. Learn more about Instantis EnterpriseTrack 8.5.

    Read the article

  • Building KPIs to monitor your business: it's not really about the technology

    When I have discussions with people about Business Intelligence, one of the questions that inevitably comes up is about building KPIs and how to accomplish that. From a technical level the concept of a KPI is very simple, almost too simple, in that it is like the tip of an iceberg floating above the water. The key to that iceberg is not really the tip, but the mass hidden beneath the surface upon which the tip sits. The analogy of the iceberg is not meant to indicate that the foundation of the KPI is overly difficult or complex; the disparity in size is meant to indicate that the larger thing that needs to be defined is not the technical tip, but the underlying business definition of what the KPI means. From a technical perspective the KPI consists primarily of the following items:

    Actual Value: The actual value data point that is being measured. An example would be something like the amount of sales.

    Target Value: The target goal for the KPI. This is a number that can be measured against the Actual Value. An example would be $10,000 in monthly sales.

    Target Indicator Range: The definition of ranges that determine what type of indicator the user will see when comparing the Actual Value to the Target Value. Most often this is a stoplight, but it can be any indicator that shows a status to the user in a quick fashion. Typically this would be something like: Red Light = Actual Value more than 5% below target; Yellow Light = within 5% of target in either direction; Green Light = more than 5% above the Target Value.

    Status\Trend Indicator: An optional attribute of a KPI that is typically used to show some kind of trend. The vast majority of these indicators show some type of progress against a previous period. As an example, the status indicator might be used to show how the monthly sales compare to last month. With this type of indicator there needs to be not only a definition of the ranges for your status indicator, but also the value the number needs to be compared against.

    So now that we have an idea of what data points a KPI consists of from a technical perspective, let's talk a bit about tools. As you can see, technically there is not a whole lot to them, and the choice of technology is not as important as the definition of the KPIs, which we will get to in a minute. There are many different tools in the Microsoft BI stack that you can use to expose your KPIs to the business, including PerformancePoint, SharePoint, Excel, and SQL Reporting Services. There are pluses and minuses to each technology, and the right choice depends a lot on your goals and how you want to deliver the information to the users. Additionally, there are non-Microsoft tools that can be used to expose KPI indicators to your business users. Regardless of the technology used as your front end, the heavy lifting of a KPI is in the business definition of the values and benchmarks for that KPI.

    The discussion about KPIs is very dependent on the history of an organization and how much it has been exposed to the attributes of a KPI. Often, when discussing KPIs with a business contact who has not been exposed to them, the discussion tends to also be a session educating the business user about what a KPI is and what goes into its definition. The majority of the time the business user has an idea of what their actual values are, and they have been tracking those numbers for some time, generally in Excel and all manually.
    So they will know the amount of sales last month along with sales two years ago in the same month. Where the conversation tends to get stuck is when you start discussing what the target value should be. The actual value answers the "What?" and "How much?" questions. When you are talking about the target value, you are asking the question "Is this number good or bad?". Typically, the user will know whether or not the value is good or bad, but most of the time they are not able to quantify what is good or bad. Their response is usually something like "I just know". Because they have been watching the sales quantity for years now, they can tell you that a 5% decrease in sales this month might actually be a good thing, maybe because the salespeople are all waiting until next month when the new versions come out. It can sometimes be very hard to break business people of this habit. One of the fears, generally, is that the status indicator is not subjective. Thus, in the scenario above, the business user is going to be fearful that their boss, just looking at a negative red indicator, is going to haul them out to the woodshed for a bad month. But, on the flip side, if all you are displaying is the amount of sales, only a person with knowledge of last month's sales and the target amount for this month would have any idea whether $10,000 in sales is good or not. Here is where a key point about KPIs needs to be communicated to both the business user and any user who might be viewing the results of that KPI. The KPI is just one tool used to report on business performance. The KPI is meant as a quick indicator of one business statistic. It is not meant to tell the entire story. It does not answer the question "Why?". Its primary purpose is to objectively and quickly expose an area of the business that might warrant more review. There is always going to be the need to do further analysis on any potentially negative or neutral KPI. So, hopefully, once you have convinced your business user to come up with some target numbers and ranges for status indicators, you then need to take the next step and help them answer the "Why?" question. The main question to ask here is: "Okay, you see the indicator and you need to discover why the number is what it is. Where do you go?". The answer is usually a combination of sources. A sales manager might have some of the following items at their disposal: a marketing report showing a decrease in the promotional discounts for the month, a pricing report showing the reduction of prices of older models, an inventory report showing the discontinuation of a particular product line, or a memo showing the ending of a large affiliate partnership. The answers to the question "Why?" are never as simple as a single indicator value. Being able to quickly get to this information is all about designing how a user accesses the KPIs and then also how easily they can get to the additional information they need. This is where a dashboard mentality can come in handy. For example, the business user can have a dashboard that shows their KPIs but also has links to some of the common reports that they run regarding sales data. The user's boss may have the same KPIs on their dashboard, but instead of links to individual reports they are going to have a link to a status report, created by the user, that pulls together all the data about the KPI in a summary format the user's boss can review.
    So, some of the key things to think about when building or evaluating KPIs for your organization:

    - Technology should not be the driving factor.
    - KPIs are of little value without some indicator of whether a value is good, bad or neutral.
    - KPIs only answer the "Is this number good/bad?" question.
    - Make sure the ability to drill into the "Why" of a KPI is close at hand and relevant to the user who is viewing the KPI.

    The KPI, when defined properly, is a key business tool to help monitor business performance across the enterprise in an objective and consistent manner. While the process of defining the business aspects of a KPI can at times feel arduous, the payoff in the end can far outweigh the costs. Some of the benefits of going through this process are a better understanding of the key metrics for an organization and how those metrics are measured, and a consistent snapshot of business performance that can be utilized across the organization. And I think that these are benefits to any organization regardless of the technology or the implementation.
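    To ground the technical attributes listed at the top of this piece (Actual Value, Target Value, indicator ranges) in code, here is a minimal, hypothetical C# sketch; the type and member names are invented for illustration, and the thresholds mirror the 5% stoplight example given earlier:

        enum Stoplight { Red, Yellow, Green }

        class Kpi
        {
            public string Name;          // e.g. "Monthly Sales"
            public double ActualValue;   // e.g. 9800.00
            public double TargetValue;   // e.g. 10000.00

            // Stoplight logic matching the example ranges above:
            // Red = more than 5% below target, Green = more than 5% above,
            // Yellow = within 5% of target in either direction.
            public Stoplight Indicator()
            {
                double deviation = (ActualValue - TargetValue) / TargetValue;
                if (deviation < -0.05) return Stoplight.Red;
                if (deviation > 0.05) return Stoplight.Green;
                return Stoplight.Yellow;
            }
        }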

    Read the article

  • CodePlex Daily Summary for Monday, May 17, 2010

    CodePlex Daily Summary for Monday, May 17, 2010

    New Projects
    - .NET Essentials Course: .NET Essentials course @ Telerik Academy. Training project for the students.
    - AU/NZ Office 2010 Launch Demos: The AU/NZ Office 2010 Launch Demos are a collection of code samples that were used as part of the Office/SharePoint 2010 launch parties in Australi...
    - CybennyCMS: Very simple CMS system for building sites with ASP.NET, with templates for lay-out, content pages with only html content and a xml file for the site...
    - essionPIM: essionPIM
    - GIStance: A library for finding "nearest neighbor" among an in-memory set of positions, in C# and F#. A radius must be specified for making a meaningful s...
    - IP Informer: IP Informer is IP Informer.
    - Kurumsal Ofis Paketi: Kurumsal Ofis Paketi (KOP) is an add-in suite developed for Microsoft Office 2010 products. KOP extends the functions found in Word and Excel...
    - Mockup to XAML: Convert Balsamiq Mockups to XAML. This project supports BMML mockup control conversion using plugins. A standard set of controls are included wit...
    - Open XML Validator: This WPF app gives you a brief summary of the errors in your Open XML documents.
    - Paint.NET Bulk Image Processor: PDNBulkUpdater is a plug-in for Paint.NET that allows you to efficiently perform operations such as resizing and converting multiple images at the ...
    - PiPiBugNet: PiPiBugNet is a brand-new open-source bug management system.
    - Roleplay character generator: The roleplay character generator allows the creation of characters for different roleplaying games.
    - SharePoint User Search WebParts: This project contains SharePoint webparts which provide advanced search configuration and experience for SharePoint 2007. It will be upgrade in few...
    - Spodi: Spodi is created on 22-04-2010.
    - TfsPolicyPack: This project will provide a few checkin policies for VS 2010.
    - vccodesandobx: vccodesandobx
    - WhiteNile: test project using codeplex

    New Releases
    - AnimeStore.Net 1.0.3.0: Build 1.0.3.0. Changes: move some functionality to features (MEF); filter / search functionality; anime hard-copy records storage (e.g. Disk Storage ...
    - AU/NZ Office 2010 Launch Demos, Twitter map web part: This is the main twitter map web part download; see the Twitter Map web part page for all the information.
    - Blueset Studio Opensource Projects, 推来: Stable version.
    - BUtil 5.0 Alpha2: The initial implementation of multitasking (except ghost).
    - CassiniDev - Cassini 3.5/4.0 Developers Edition, CassiniDev 3.5.1 and 4.0.1 beta: Beta 2 is released here: http://cassinidev.codeplex.com/releases/view/45456. New in CassiniDev v3.5.1.0/v4.0.1.0: added .Net 4 / VS10 build. ...
    - CBM-Command 2010-05-16: Release notes - 2010-05-16. New features: new navigation options: Page Up, Page Down, Top of Directory, Bottom of Directory. See documentation (http:...
    - CCNet Conditional Plugin, CCNet Conditional for CCNet 1.5: A (quick) build of the plugin for CCNet 1.5 to fix the 17365 bug reported by Beakster. This also adds a new condition "timeCondition".
    - CybennyCMS, Cybenny CMS beta 1: The first beta. Includes a small demo site.
    - Data Extracting SDK v.1.1 RTM: RTM version of Data Extracting SDK.
    - Duckworth Lewis Professional Edition Calculator, DLcalc 2.0: This software can perform all D/L calculations 100% accurately. From version 2.0 onwards, tables for par scores can also be produced.
    - EPiServer CMS Page Type Builder, Page Type Builder 1.2: Release notes can be found in this blog post.
    - Floe IRC Client 2010-05 R5: Many new context menu options for @s; ability to select multiple users in the nick list for some operations (kick, ban); bunch of minor bug fix...
    - Graffiti CMS Events Plugin, Version 1.0.1: Minor update to the previous version to fix a bug where deleted posts were still showing in the calendar.
    - Microsoft Research Boogie 2010-05-16: Binary release of Boogie and Dafny. (Note: Chalice is not pre-built as part of this binary release. To obtain it, you need to build it yourself f...
    - MSBuild Launch Pad (mPad) 1.0 Beta 2: Basic support for sln, csproj, vbproj, vcxproj, shfbproj, ccproj, oxygene and proj files is added. Basic settings (Show Prompt, and Auto Hide) are...
    - Multi-Language Words Memorizer, Memorizer 1.1: Issues fix, XML db update with new words.
    - NShader (HLSL - GLSL - CG Shader Syntax Highlighter AddIn for Visual Studio), NShader 1.1: New release of NShader! New: a Visual Studio 2010 port can be installed through the new extension manager; you just have to download NShaderV...
    - PHPExcel 1.7.3 Production: Want to contribute? Please refer to the Contribute page. Donations: donate via PayPal. If you want to, we can also add your name / company on our Donati...
    - Rollback (a social backup tool), Rollback Setup 0.5.1.2 Build 48360: Bug fixes for backing up files which are hidden/system. Changes to make builds on 64-bit Windows 7 using VS 2010 Express edition.
    - Rollback (a social backup tool), Rollback Setup 0.5.1.3: Updated version number.
    - Shake - C# Make, Shake v0.1.20: New: simple console logger. Changes: command line params helper writes out syntax and samples (like msbuild). Fixes: assembly info, file task and r...
    - SharePoint User Search WebParts, v0.1 Friendly MOSS 2007 Search WebPart: Very first version of this webpart. A more stabilized version will follow in a few days.
    - Team Deploy, Team Deploy 2010 Beta 1: This is the initial release of Team Deploy 2010 for TFS Team Build 2010. All features from Team Build 2.x are functional in this version. Comp...
    - Team Foundation Server Administration Tool 2.0: TFS Administration Tool 2.0 is built on top of the Team Foundation Server 2008 object model, and in order to connect to...
    - The Ping Master v0.9.0.0: Installer for The Ping Master binaries.
    - Useful Office Macros, All Macro Downloads: Please find above the downloads related to this project. Each Excel workbook below works independently of the others, so you only need to download...
    - VCC, latest build v2.1.30516.0: Automatic drop of latest build.
    - Visual Studio DSite, Advanced Digital Board Game (Visual C++ 2008): An advanced digital board game made in Visual C++ 2008.
    - YUI Compressor Custom Tool for Visual Studio, Full Version: Version 1.0. The following changes have been made: merged classes to automatically sense if the target file is Javascript or CSS. Cleaned up setu...

    Most Popular Projects: Rawr; WBFS Manager; AJAX Control Toolkit; Microsoft SQL Server Product Samples: Database; Silverlight Toolkit; Windows Presentation Foundation (WPF); patterns & practices – Enterprise Library; Microsoft SQL Server Community & Samples; PHPExcel; ASP.NET

    Most Active Projects: patterns & practices – Enterprise Library; PHPExcel; BlogEngine.NET; Rawr; Microsoft Biology Foundation; Customer Portal Accelerator for Microsoft Dynamics CRM; Windows Azure Command-line Tools for PHP Developers; DotNetZip Library; Caliburn: An Application Framework for WPF and Silverlight; SQL Server PowerShell Extensions

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting.

    Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains: upwards of 10x-24x, and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and 1.86 minutes afterwards).

    Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting those claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results. (We have performed intra-cloud tests; the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes. This will be detailed separately.)

    We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here.

    The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps yielded a sufficiently balanced set of results.
    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see the linked source for specific implementation details in Program.cs, line 173 and following; less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). (Process diagram omitted.)

    We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set and doing a parameter sweep on the block size, including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn’t worth the trouble, and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here.

    Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size, you will end up with a “negative optimization” due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and as supported by the raw data provided in the linked worksheet), the charts and discussion below ignore source file sizes less than 1MB.

    The first chart (image not included here) illustrates some interesting points about the results:

    - When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size).
    - For some of the moderately-sized source files, small blocks (256KB) are best.
    - As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, the increased number of individual transfer requests, and reassembly/committal costs).
    - Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant.
    - The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.

    A second chart (also not included) presents another view of the same data with the axes changed (the x-axis represents file size, and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, but also highlights the benefits of some of the other block sizes at different source file sizes.
    A final chart (also not included) shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here, other than that this view of the data highlights the negative effects of poorly choosing a block size for smaller files.

    Summary: What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.

    Related Resources:
    - Source code for upload test application
    - Source code for random file generator
    - OData feed of raw data from non-optimized transfer tests (Experiment Metadata; Experiment Datasets; 2KB, 32KB, 64KB, 128KB, 256KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB and 1GB Uploads; Raw Data)
    - OData feeds of raw data from blocked/parallelized transfer tests (Experiment Metadata; Experiment Datasets; Raw Data; 256KB, 512KB, 1MB, 2MB and 4MB Blocks)
    - Excel worksheet showing summarizations and comparisons
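    For readers who want the shape of the approach without opening the linked source, here is a rough, hypothetical C# sketch of the blocked/parallel upload described above (this is not the author's actual harness; the names and container setup are assumptions, against the old 1.x StorageClient API referenced in the post):

        // Sketch only (assumed names): split a local file into fixed-size blocks,
        // upload the blocks in parallel, then commit the block list.
        using System;
        using System.IO;
        using System.Threading.Tasks;
        using Microsoft.WindowsAzure.StorageClient;

        static class BlockedUploader
        {
            public static void Upload(CloudBlobContainer container, string path, int blockSize)
            {
                CloudBlockBlob blob = container.GetBlockBlobReference(Path.GetFileName(path));
                long fileSize = new FileInfo(path).Length;
                int blockCount = (int)((fileSize + blockSize - 1) / blockSize);
                string[] blockIds = new string[blockCount];  // indexed by block number, preserving order

                Parallel.For(0, blockCount, i =>
                {
                    byte[] buffer = new byte[blockSize];
                    int read;
                    using (FileStream fs = File.OpenRead(path))  // one stream per block; FileStream is not thread-safe
                    {
                        fs.Seek((long)i * blockSize, SeekOrigin.Begin);
                        read = fs.Read(buffer, 0, blockSize);    // the last block may be short
                    }

                    // Block IDs must be base64 strings of equal length.
                    blockIds[i] = Convert.ToBase64String(BitConverter.GetBytes(i));

                    using (var ms = new MemoryStream(buffer, 0, read))
                    {
                        blob.PutBlock(blockIds[i], ms, null);    // the post's harness also sent a per-block MD5 here
                    }
                });

                blob.PutBlockList(blockIds);
            }
        }

    Note that PutBlockList() commits the blocks in source-file order regardless of the order in which the parallel uploads completed, which is what makes the unordered Parallel.For safe here.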

    Read the article

  • Five Key Strategies in Master Data Management

    - by david.butler(at)oracle.com
    Here is a very interesting Profit Magazine article on MDM: a recent customer survey reveals the deleterious effects of data fragmentation. By Trevor Naidoo, December 2010.

    Across industries and geographies, IT organizations have grown in complexity, whether due to mergers and acquisitions, or decentralized systems supporting functional or departmental requirements. With systems architected over time to support unique, one-off process needs, they are becoming costly to maintain, and the Internet has only further added to the complexity. Data fragmentation has become a key inhibitor in delivering flexible, user-friendly systems. The Oracle Insight team conducted a survey over the past two years assessing customers' master data management (MDM) capabilities to get a sense of where they stand. The responses, by 27 respondents from six different industries, reveal five key areas in which customers need to improve their data management in order to get better financial results. 1. Less than 15 percent of organizations surveyed understand the sources and quality of their master data, and have a roadmap to address missing data domains. Examples of the types of master data domains referred to are customer, supplier, product, financial and site. Many organizations have multiple sources of master data with varying degrees of data quality in each source -- customer data stored in the customer relationship management system is inconsistent with customer data stored in the order management system. Imagine not knowing how many places you stored your customer information, and whether a customer's address was the most up to date in each source. In fact, more than 55 percent of the respondents in the survey manage their data quality on an ad-hoc basis. It is important for organizations to document their inventory of data sources and then profile these data sources to ensure that there is a consistent definition of key data entities throughout the organization. Some questions to ask are: How do we define a customer? What is a product? How do we define a site? The goal is to strive for one common repository for master data that acts as a cross reference for all other sources and ensures consistent, high-quality master data throughout the organization. 2. Only 18 percent of respondents have an enterprise data management strategy to ensure that data is treated as an asset to the organization. Most respondents handle data at the department or functional level and do not have an enterprise view of their master data. The sales department may track all their interactions with customers as they move through the sales cycle, the service department is tracking their interactions with the same customers independently, and the finance department also has a different perspective on the same customer. The salesperson may not be aware that the customer she is trying to sell to is experiencing issues with existing products purchased, or that the customer is behind on previous invoices. The lack of a data strategy makes it difficult for business users to turn data into information via reports. Without the key building blocks in place, it is difficult to create key linkages between customer, product, site, supplier and financial data. These linkages make it possible to understand patterns. A well-defined data management strategy is aligned to the business strategy and helps create the governance needed to ensure that data stewardship is in place and data integrity is intact. 3.
    Almost 60 percent of respondents have no strategy to integrate data across operational applications. Many respondents have several disparate sources of data with no strategy to keep them in sync with each other. Even though there is no clear strategy to integrate the data (see #2 above), the data needs to be synced and cross-referenced to keep the business processes running. About 55 percent of respondents said they perform this integration on an ad hoc basis, and in many cases, it is done manually with the help of Microsoft Excel spreadsheets. For example, a salesperson needs a report on global sales for a specific product, but the product has different product numbers in different countries. Typically, an analyst will pull all the data into Excel, manually create a cross reference for that product, and then aggregate the sales. The exact same procedure has to be followed if the same report is needed the following month. A well-defined consolidation strategy will ensure that a central cross-reference is maintained, with updates in any one application being propagated to all the other systems, so that data is synchronized and up to date. This can be done in real time or in batch mode using integration technology. 4. Approximately 50 percent of respondents spend manual effort cleansing and normalizing data. Information stored in various systems usually follows different standards and formats, making it difficult to match the data. A customer's address can be stored in different ways using a variety of abbreviations -- for example, "av" or "ave" for avenue. Similarly, a product's attributes can be stored in a number of different ways; for example, a size attribute can be written out in inches or entered using the inch symbol ("). These types of variations make it difficult to match up data from different sources. Today, most customers rely on manual, heroic efforts to match, cleanse, and de-duplicate data -- clearly not a scalable, sustainable model. To solve this challenge, organizations need the ability to standardize data for customers, products, sites, suppliers and financial accounts; however, less than 10 percent of respondents have technology in place to automatically resolve duplicates. It is no wonder, therefore, that we get communications about products we don't own, at addresses where we don't reside, and through channels (like direct mail) we don't like. An all-too-common example of a potential challenge follows: customers end up receiving duplicate communications, which not only impacts customer satisfaction, but also incurs additional mailing costs. Cleansing, normalizing, and standardizing data will help address most of these issues. 5. Only 10 percent of respondents have the ability to share data that was mastered in a master data hub. Close to 60 percent of respondents have efforts in place that profile, standardize and cleanse data manually, and the output of these efforts is stored in spreadsheets in various parts of the organization. This valuable information is not easily shared with the rest of the organization and, more importantly, this enriched information cannot be sent back to the source systems so that the data is fixed at the source. A key benefit of a master data management strategy is not only to clean the data, but to also share the data back to the source systems as well as other systems that need the information. Aside from the source systems, another key beneficiary of this data is the business intelligence system.
    Having clean master data as input to business intelligence systems provides more accurate and enhanced reporting.

    Characteristics of Stellar MDM

    When deciding on the right master data management technology, organizations should look for solutions that have four main characteristics:

    - enterprise-grade MDM performance
    - complete technology that can be rapidly deployed and addresses multiple business issues
    - end-to-end MDM process management with data quality monitoring and assurance
    - pre-built, business-relevant MDM applications with data stores and workflows

    These master data management capabilities will aid in moving closer to a best-practice maturity level, delivering tremendous efficiencies and savings as well as revenue growth opportunities as a result of better understanding your customers. Trevor Naidoo is a senior director in Industry Strategy and Insight at Oracle.

    Read the article

  • The Application was unable to start correctly (0xc0000142)

    - by Guy Thomas
    System = Windows 7 64-bit. Various programs, notably Regedit, won't start. Instead I get: "The Application was unable to start correctly (0xc0000142)". Strangely, at least to my thinking, I can launch them via Task Manager. I am also grappling with AVG errors or over-activity, e.g. reports of "Broken digital signature". I am also having problems with Excel update KB978474. I mention these just in case anyone thinks there is a connection, rather than expecting people to solve 3 problems at once.

    Read the article

  • How to improve WinForms MSChart performance?

    - by Marcel
    Hi all, I have created some simple charts (of type FastLine) with MSChart and update them with live data. To do so, I bind an observable collection of a custom type to the chart like so:

        // set chart data source
        this._Chart.DataSource = value;  // value is of type ObservableCollection<SpectrumLevels>

        // define x and y value members for each series
        this._Chart.Series[0].XValueMember = "Index";
        this._Chart.Series[1].XValueMember = "Index";
        this._Chart.Series[0].YValueMembers = "Channel0Level";
        this._Chart.Series[1].YValueMembers = "Channel1Level";

        // bind data to chart
        this._Chart.DataBind();  // lasts 1.5 seconds for 8000 points per series

    At each refresh, the dataset completely changes; it is not a scrolling update! With a profiler I have found that the DataBind() call takes about 1.5 seconds. The other calls are negligible. How can I make this faster? Should I use another type than ObservableCollection? An array, probably? Should I use another form of data binding? Is there some tweak for MSChart that I may have missed? Should I use a sparse set of data, having one value per pixel only? Have I simply reached the performance limit of MSChart? Given the type of application, to keep it "fluent" we should have multiple refreshes per second. Thanks for any hints!
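    For what it's worth, one variation sometimes tried in this situation (a sketch under assumptions, not a confirmed fix; the member names mirror those in the question) is to skip DataSource/DataBind() entirely and rebind plain arrays per series with Points.DataBindXY, which avoids reflection over the bound collection:

        // Sketch: rebind each series from pre-built arrays instead of the bound collection.
        // "spectrum" stands in for the question's ObservableCollection<SpectrumLevels>.
        double[] index = new double[spectrum.Count];
        double[] ch0 = new double[spectrum.Count];
        double[] ch1 = new double[spectrum.Count];
        for (int i = 0; i < spectrum.Count; i++)
        {
            index[i] = spectrum[i].Index;
            ch0[i] = spectrum[i].Channel0Level;
            ch1[i] = spectrum[i].Channel1Level;
        }
        this._Chart.Series[0].Points.DataBindXY(index, ch0);
        this._Chart.Series[1].Points.DataBindXY(index, ch1);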

    Read the article

  • Microsoft Office documents collaboration - Open Source alternative

    - by Saggi Malachi
    I am looking for a good solution to collaborate on Microsoft Office documents. We currently just edit directly on a Samba share, but it's one big mess, because sometimes people leave the office with their laptops while docs are open, so swap files remain there and then nobody is sure what's going on. Is there any good and simple open source solution based on Linux? I've tried Alfresco, but it is much more than what I need; we have an internal wiki for most collaboration, and I just need some solution for the stuff we need to do in Microsoft Office (mostly Excel files, the rest is in the wiki). EDIT: Some more info as requested: we are a very small group, 4 full-time employees and a few freelancers. The best idea I've got so far is just managing it in a Subversion repository with a lock-modify-unlock policy, but I'd love to hear about better solutions. Thanks!

    Read the article

  • Windows Server 2012 IPAM feature - "Unblock IPAM Access" error/recomended action

    - by HopelessN00b
    So, I'm in the process of setting up an IP Address Management server, using the built-in IPAM feature in Server 2012, and have run into a problem that I'm hoping someone else has successfully solved. Following the technet guide here, I've installed and configured IPAM, and have provisioned it via GPO. After verifying that the PowerShell invoke-ipamgpoprovisioning command is successful, managing the desired servers in IPAM, running gpupdate /force on the servers and refreshing my view in IPAM, I'm still getting the less-than-useful recommended action of "Unblock IPAM Access" for all servers. (First done 3 hours ago, so it's not a give-it-time-to-propagate issue.) Can't, for the life of me, seem to figure out what's causing this, find anything useful in the logs, or find much about this on Google or in the help files, so I was wondering if anyone here had any ideas about how to fix this, or even where to start looking. I'd really like to get this working, because if not, I have to resume work on creating an Excel spreadsheet for IP address management.

    Read the article

  • Terminal Services printing

    - by Adam
    Hi. We are using a hosted terminal server to run some applications on. We have three users who connect to the server through RDP and try to print to a networked printer called HP Photosmart C7280. One of the users is using Windows XP Pro 32-bit on the host, and when they print through the terminal server it works fine. Another user is using Vista 32-bit on the host, and when they print through the terminal server it also works fine. The third user is using Windows 7 64-bit on the host, and when they print through the terminal server it only prints the first line of the page (a test page prints about 3/4 of the page, compared to the whole page from the other 2 machines). We are only printing from Word 2007 and Excel 2007 on all machines. The server is Windows 2003. No errors in the event log. Any ideas?

    Read the article

  • .NET Framework generates strange DCOM error

    - by Anders Oestergaard Jensen
    Hello, I am creating a simple application that enables merging of key-value pair fields in a Word and/or Excel document. Until this day, the application has worked out just fine. I am using the latest version of .NET Framework 4.0 (since it provides a nice wrapper API for Interop). My sample merging method looks like this:

        public byte[] ProcessWordDocument(string path, List<KeyValuePair<string, string>> kvs)
        {
            logger.InfoFormat("ProcessWordDocument: path = {0}", path);
            var localWordapp = new Word.Application();
            localWordapp.Visible = false;
            Word.Document doc = null;
            try
            {
                doc = localWordapp.Documents.Open(path, ReadOnly: false);
                logger.Debug("Executing Find->Replace...");
                foreach (Word.Range r in doc.StoryRanges)
                {
                    foreach (KeyValuePair<string, string> kv in kvs)
                    {
                        r.Find.Execute(Replace: Word.WdReplace.wdReplaceAll,
                                       FindText: kv.Key,
                                       ReplaceWith: kv.Value,
                                       Wrap: Word.WdFindWrap.wdFindContinue);
                    }
                }
                logger.Debug("Done! Saving document and cleaning up");
                doc.Save();
                doc.Close();
                System.Runtime.InteropServices.Marshal.ReleaseComObject(doc);
                localWordapp.Quit();
                System.Runtime.InteropServices.Marshal.ReleaseComObject(localWordapp);
                logger.Debug("Done.");
                return System.IO.File.ReadAllBytes(path);
            }
            catch (Exception ex)
            {
                // Logging...
                // doc.Close();
                if (doc != null)
                {
                    doc.Close();
                    System.Runtime.InteropServices.Marshal.ReleaseComObject(doc);
                }
                localWordapp.Quit();
                System.Runtime.InteropServices.Marshal.ReleaseComObject(localWordapp);
                throw;
            }
        }

    The above C# snippet has worked just fine (compiled and deployed onto a Windows Server 2008 x64) with the latest updates installed. But now, suddenly, I get the following strange error:

        System.Runtime.InteropServices.COMException (0x80080005): Retrieving the COM class factory for component with CLSID {000209FF-0000-0000-C000-000000000046} failed due to the following error: 80080005 Server execution failed (Exception from HRESULT: 0x80080005 (CO_E_SERVER_EXEC_FAILURE)).
           at System.RuntimeTypeHandle.CreateInstance(RuntimeType type, Boolean publicOnly, Boolean noCheck, Boolean& canBeCached, RuntimeMethodHandleInternal& ctor, Boolean& bNeedSecurityCheck)
           at System.RuntimeType.CreateInstanceSlow(Boolean publicOnly, Boolean skipCheckThis, Boolean fillCache)
           at System.RuntimeType.CreateInstanceDefaultCtor(Boolean publicOnly, Boolean skipVisibilityChecks, Boolean skipCheckThis, Boolean fillCache)
           at System.Activator.CreateInstance(Type type, Boolean nonPublic)
           at Meeho.Integration.OfficeHelper.ProcessWordDocument(String path, List`1 kvs) in C:\meeho\src\webservices\Meeho.Integration\OfficeHelper.cs:line 30
           at Meeho.IntegrationService.ConvertDocument(Byte[] template, String ext, String[] fields, String[] values) in C:\meeho\src\webservices\MeehoService\IntegrationService.asmx.cs:line 49

    I googled the COM error, but it returns nothing of particular value. I even gave the right permissions to the COM DLLs using mmc -32, where I located the Word and Excel COM applications and set the execution rights to the Administrator. I could not, however, locate the DLLs by the exact COM CLSID given above. Very frustrating. Please, please, please help me, as the application is currently pulled out of production.
    Anders

    EDIT: output from the Windows event log:

        Faulting application name: WINWORD.EXE, version: 12.0.6514.5000, time stamp: 0x4a89d533
        Faulting module name: unknown, version: 0.0.0.0, time stamp: 0x00000000
        Exception code: 0xc0000005
        Fault offset: 0x00000000
        Faulting process id: 0x720
        Faulting application start time: 0x01cac571c4f82a7b
        Faulting application path: C:\Program Files (x86)\Microsoft Office\Office12\WINWORD.EXE
        Faulting module path: unknown
        Report Id: 041dd5f9-3165-11df-b96a-0025643cefe6

    (The raw XML view of the event repeats the same values: event id 1000, level 2, task 100, keywords 0x80000000000000, record 2963, log Application, computer meeho3.)

    Read the article

  • kanban scrumish tool(s) to get started

    - by Davide
    After investigating scrum and kanban a little bit, I finally read this answer and decided to start using kanban, picking some elements from scrum (note that I'm working mostly by myself, and I have read this question and its answers). Now, my question is: which tool would be best to get started?

    - whiteboard and post-its
    - agilezen.com
    - JIRA with GreenHopper
    - a spreadsheet (possibly on Google Docs)
    - brightgreenprojects.com
    - Agilo
    - Target Process
    - something else (please specify)

    Notes about each:

    - I would lean towards the whiteboard, but there are several drawbacks (e.g. I cannot make automatic charts, time measurements, metrics; and sometimes I work from home, where I need it most, and it's not convenient to carry :-)
    - I don't want to remember another username/password (I promised myself to sign up only for OpenID-enabled services).
    - My employer has JIRA but my group doesn't use it. I might ask for an account (it shouldn't require another password) and maybe later involve the rest of the group. But I don't know if they are using GreenHopper and if installing it is a big deal.
    - I generally hate spreadsheets.
    - maybe overkill?
    - I'd be happy to have a localhost instance, but it could be problematic to give access to the whole group (per network/firewalls); not a deal-breaker but surely a concern.

    What I'd like to get from this:

    - being more productive
    - tracking how much time I spend on any given task, possibly discussing the issue with my supervisor
    - tracking what "blocks" me most often
    - immediately seeing where I am compared to my schedule
    - managing my long todo list in a better way (e.g. answering the "what should I do next?" question faster)

    Do you have any suggestions?

    Note on the scrumish tag: read Henrik Kniberg's PDF. He first introduced the definition of scrumish on page 9.

    Read the article

  • How to fade away the Dock icon when closing a program in Mac OS X

    - by Magic
    I'm using Mac OS X Mountain Lion, but some programs in the Dock are confusing me. When I close some programs (with the upper-left close button), the icon in the Dock fades away; that means the process has closed. But for some programs the icon doesn't fade away (meaning the process is still alive), even though I haven't selected the "Show in Dock" option, for example Microsoft Office (Word, Excel). It's too annoying. What I want is: when I click the upper-left close button, the icon fades away and the program closes.

    Read the article

  • How to cluster two IIS servers for failover?

    - by Ram Gopal
    We have IIS servers running on 2 machines, hosting a few web services which provide integration services to an old document management system, Word/Excel-related services, etc. We need to cluster/load-balance these 2 IIS servers in order to achieve failover, i.e. if one of the IIS servers is down, the other should be able to handle the requests. The reverse proxy used in the DMZ is also IIS 7.5. Our overall business application is in fact a J2EE one, and we have successfully deployed it on a WebLogic cluster installed on the same two machines, load-balanced from the same IIS reverse proxy at the DMZ mentioned above. But we do not know how to achieve this in the case of IIS.

    Read the article

  • "The image <name> cannot be displayed because it contains errors" when using pchart Render method

    - by christophe-milard
    Hi, I am trying to use the pChart package (over PHP) to build (and directly display) graphs/charts. At this time, I am just trying to run their provided example (Example1.php), where I have replaced the final $Test->Render("example1.png"); with $Test->Stroke();. But when I do this, I get "The image cannot be displayed because it contains errors" in the browser. If I leave the original $Test->Render(...), the generated image is OK (but not sent). I have read that there is (was?) an issue with Mozilla/Firefox browsers regarding images being requested twice and the REFERER URL, but when I browse the pChart home page, I can use their "sandboxes" and get the result of my tests directly displayed in my browser (http://pchart.sourceforge.net/demo.php)... so there must be a way (or a nice workaround) to send the generated graphs directly to the browser successfully. If your answer is to generate the image (i.e. use Render) and then send it afterwards, please be accurate about how to do this (how do I destroy the generated files automatically, permissions...). I am new to this; sorry in advance if it's obvious... ;-)

    Read the article

  • Saving a modified InfoPath form to its form library

    - by Nathan Lykken
    Within our corporate SharePoint 2007 site, there is a particular form library that contains 10 separate files: 9 of these are Excel, Word, or PowerPoint files, and one is an InfoPath 2007 form that serves as a report. After noticing an error within this InfoPath form, I saved it to my local directory and then modified it in InfoPath's design mode. What is the proper way to save this modified form back to its form library? Everything I have tried results in nobody except myself having access to the modified form. I can open it without error, but when my coworkers try to open it on their machines, they receive this error: "The form cannot be opened because it requires the domain permission level and it currently has restricted permission. To fix this problem, open the form from the location it was published to."

    Read the article

  • SSRS Subscription Fails

    - by Chad
    Our SSRS server is not executing a subscription correctly (the only subscription we have, btw). We created a subscription to export a report as an Excel file to the file system. I tried running the job that gets generated, and this error happens:

        'EXECUTE AS LOGIN' failed for the requested login 'NT AUTHORITY\NETWORK SERVICE'. The step failed.

    It's not the most helpful in tracking down what exactly it was trying to do.

    EDIT: Digging further into the logs, I also get these errors:

        w3wp!extensionfactory!f!7/30/2010-14:29:26:: w WARN: The extension Report Server FileShare does not have a LocalizedNameAttribute.
        w3wp!extensionfactory!11!7/30/2010-14:34:48:: w WARN: The extension Report Server Email does not have a LocalizedNameAttribute.

    Read the article

  • How to create a text file from a column and FTP that text file to a server

    - by addi
    I have a workbook with 2 sheets (Sheet1 and Sheet2). On Sheet1 the user will enter the data, which will populate column B; column C on Sheet2 will then hold the values from columns A and B. I need to create a text file from the values in column C on Sheet2 at the click of a button, and then upload (FTP) that file to a server. So Sheet1 will have 2 buttons. Button 1 will save the Excel file and create the text file in the Windows temp directory, e.g. text.xls and text.prop (a text file which has all the values in column C on Sheet2). Button 2 will upload (FTP) the text file (.prop) to a server. Can anyone please send me the steps and VB code to achieve the above tasks? Thanks in advance, Addi

    Read the article

  • In need of help with setting up the open source library JFreeChart

    - by ssbellows
    I am having trouble with setting up the open source library JFreeChart for creating charts using Java. This is the process I have followed so far in trying to set it up: I downloaded the latest version from their download page http://sourceforge.net/projects/jfreechart/files/. I then unpacked the jfreechart-1.0.13.zip in the directory C:\JFreeChart\jfreechart-1.0.13\ on my system drive. In the unpacked directory there is a folder entitled "lib" which contains the packaged .jar files specified as necessary to use JFreeChart. I added the following directory to my classpath: C:\JFreeChart\jfreechart-1.0.13\lib\ I then created a simple program and added the line "import org.jfree.chart.*;" to see if it would compile with a package imported from JFreeChart. I navigated to the folder in which my sample program was contained and compiled with the following command: "javac -classpath C:\ Program.java" I was given the following error: "package org.jfree.chart does not exist" Could someone please give me some input as to what I have done incorrectly in this setup process? This is the first time I've tried using an open source library, so I don't have any prior experience to go on myself. Thank you very much in advance.
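    For what it's worth, a common gotcha with this kind of setup (stated as an assumption about the cause, since only the error message is given above): a plain directory on the classpath only picks up .class files, not the .jar files inside it, so each jar has to be listed explicitly, including the JCommon jar that JFreeChart depends on. With the jar names as shipped in the 1.0.13 distribution, the compile command would look something like:

        javac -classpath "C:\JFreeChart\jfreechart-1.0.13\lib\jfreechart-1.0.13.jar;C:\JFreeChart\jfreechart-1.0.13\lib\jcommon-1.0.16.jar" Program.java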

    Read the article

  • LibreOffice Calc/Writer: How to get the dates for specific weekdays per month?

    - by Phin
    I'm stuck on a problem in LibreOffice Calc or Writer: I'm trying to get a table of dates for every Monday, Wednesday and Friday of a month, something that I can load every new month which sets the dates automatically, so I just have to print the page on paper. :) In Writer the field command for dates obviously can't do the job (it can only be set to a fixed date, today's date, or +offset, as far as I can see). In Calc I tried it with autofill, but that obviously works for the first 3 days only. I might add that I'm lost with Excel/Calc formulas... Any help?

    Read the article

  • Difference between dynamic and static 2D arrays in C++

    - by snorlaks
    Hello, I'm using an open-source library called wxFreeChart to draw some XY charts. In the example there is code which uses a static array as a series:

        double data1[][2] = {
            { 10, 20, },
            { 13, 16, },
            { 7, 30, },
            { 15, 34, },
            { 25, 4, },
        };
        dataset->AddSerie((double *) data1, WXSIZEOF(data1));

    The WXSIZEOF macro is defined as: sizeof(array)/sizeof(array[0])

    In this case everything works great, but in my program I'm using dynamic arrays (sized according to the user's input). I made a test and wrote code like below:

        double **dynamicArray = NULL;
        dynamicArray = new double *[5];
        for (int i = 0; i < 5; i++)
            dynamicArray[i] = new double[2];

        dynamicArray[0][0] = 10; dynamicArray[0][1] = 20;
        dynamicArray[1][0] = 13; dynamicArray[1][1] = 16;
        dynamicArray[2][0] = 7;  dynamicArray[2][1] = 30;
        dynamicArray[3][0] = 15; dynamicArray[3][1] = 34;
        dynamicArray[4][0] = 25; dynamicArray[4][1] = 4;

        dataset->AddSerie((double *) *dynamicArray, WXSIZEOF(dynamicArray));

    But it doesn't work correctly; I mean the points aren't drawn. I wonder if there is any possibility that I can "cheat" that method and give it a dynamic array in a way it understands, so that it will read the data from the correct place. Thanks for any help.

    Read the article

< Previous Page | 155 156 157 158 159 160 161 162 163 164 165 166  | Next Page >