Search Results

Search found 891 results on 36 pages for 'scaling out'.


  • CodePlex Daily Summary for Saturday, October 06, 2012

    CodePlex Daily Summary for Saturday, October 06, 2012Popular ReleasesVidCoder: 1.4.2 Beta: Added Modulus dropdown to Loose anamorphic choice. Fixed a problem where the incorrect scaling would be chosen and pick the wrong aspect ratio. Fixed issue where old window objects would stick around and continue to respond to property change events We now clear locked width/height values when switching to loose or strict anamorphic. Fixed problems with Custom Anamorphic and display width specification. Fixed text in number box incorrectly being shown in gray in some circumstances.RiP-Ripper & PG-Ripper: PG-Ripper 1.4.02: changes NEW: Added Support Big Naturals Only forum NEW: Added Setting to enable/disable "Show last download image"patterns & practices: Prism: Prism for .NET 4.5: This is a release does not include any functionality changes over Prism 4.1 Desktop. These assemblies target .NET 4.5. These assemblies also were compiled against updated dependencies: Unity 3.0 and Common Service Locator (Portable Class Library).Configuration Manager 2012 Automation: Beta Code (v0.1): Beta codeWinRT XAML Toolkit: WinRT XAML Toolkit - 1.3.2: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features AsyncUI extensions Controls and control extensions Converters Debugging helpers Imaging IO helpers VisualTree helpers Samples Recent changes NOTE: Namespace changes DebugConsol...Snoop, the WPF Spy Utility: Snoop 2.8.0: Snoop 2.8.0Announcing Snoop 2.8.0! It's been exactly six months since the last release, and this one has a bunch of goodies in it. In particular, there is now a PowerShell scripting tab, compliments of Bailey Ling. With this tab, the possibilities are limitless. It basically lets you automate/script the application that you are Snooping. Bailey has a couple blog posts (one and two) on his tab already, and I am sure more is to come. Please note that if you do not have PowerShell installed, y....NET Micro Framework: .NET MF 4.3 (Beta) -- warning for SDK below: WARNING!!! There is a known issue with the SDK installer that may prevent you from installing. We are working on the issue and will update the SDK as soon as we have a fix. Thank you. This is the 4.3 Beta version of the .NET Micro Framework. Feature List for v4.3 Support for Visual Studio 2012 (including the Windows Desktop Express version) All v4.2 QFEs features and bug fixes (PWM enhancements, lwIP and network driver reliability improvements, Analog Output, WinUSB and latest GCC suppo...MCEBuddy 2.x: MCEBuddy 2.3.1: 2.3.1All new Remote Client Server architecture. Reccomended Download. The Remote Client Installation is OPTIONAL, you can extract the files from the zip archive into a local folder and run MCEBuddy.GUI directly. 2.2.15 was the last standalone release. Changelog for 2.3.1 (32bit and 64bit) 1. All remote MCEBuddy Client Server architecture (GUI runs remotely/independently from engine now) 2. Fixed bug in Audio Offset 3. Added support for remote MediaInfo (right click on file in queue to get ...D3 Loot Tracker: 1.5: Support for session upload to website. Support for theme change through general settings. Time played counter will now also display a count for days. Tome of secrets are no longer logged as items.NTCPMSG: V1.2.0.0: Allocate an identify cableid for each single connection cable. 
* Server can asend to specified cableid directly.Team Foundation Server Word Add-in: Version 1.0.12.0622: Welcome to the Visual Studio Team Foundation Server Word Add-in Supported Environments Microsoft Office Word 2007 and 2010 X86 (32-bit) Team Foundation Server 2010 Object Model TFS 2010, 2012 and TFS Service supported, using TFS OM / Explorer 2010. Quality-Bar Details Tool has been reviewed by Visual Studio ALM Rangers Tool has been through an independent technical and quality review All critical bugs have been resolved Known Issues / Bugs WI#43553 - The Acceptance criteria is not pu...Viva Music Player: Viva Music Player v0.6.7: Initial release.Korean String Extension for .NET: ?? ??? ??? ????(v0.2.3.0): ? ?? ?? ?? ???? - string.KExtract() ?? ???? - string.AppendJosa(...) AppendJosa(...)? ?? ???? KAppendJosa(...)? ??? ?????UMD????? - PC?: UMDEditor?????V2.7: ??:http://jianyun.org/archives/948.html =============================================================================== UMD??? ???? =============================================================================== 2.7.0 (2012-10-3) ???????“UMD???.exe”??“UMDEditor.exe” ?????????;????????,??????。??????,????! ??64????,??????????????bug ?????????????,???? ???????????????? ???????????????,??????????bug ------------------------------------------------------- ?? reg.bat ????????????。 ????,??????txt/u...Untangler: Untangler 1.0.0: Add a missing file from first releaseDirectX Tool Kit: October 2012: October 2, 2012 Added ScreenGrab module Added CreateGeoSphere for drawing a geodesic sphere Put DDSTextureLoader and WICTextureLoader into the DirectX C++ namespace Renamed project files for better naming consistency Updated WICTextureLoader for Windows 8 96bpp floating-point formats Win32 desktop projects updated to use Windows Vista (0x0600) rather than Windows 7 (0x0601) APIs Tweaked SpriteBatch.cpp to workaround ARM NEON compiler codegen bugCRM 2011 Visual Ribbon Editor: Visual Ribbon Editor (1.3.1002.3): Visual Ribbon Editor 1.3.1002.3 What's New: Multi-language support for Labels/Tooltips for custom buttons and groups Support for base language other than English (1033) Connect dialog will not require organization name for ADFS / IFD connections Automatic creation of missing labels for all provisioned languages Minor connection issues fixed Notes: Before saving the ribbon to CRM server, editor will check Ribbon XML for any missing <Title> elements inside existing <LocLabel> elements...SubExtractor: Release 1029: Feature: Added option to make i and ¡ characters movie-specific for improved OCR on Spanish subs (Special Characters tab in Options) Feature: Allow switch to Word Spacing dialog directly from Spell Check dialog Fix: Added more default word spacings for accented characters Fix: Changed Word Spacing dialog to show all OCR'd characters in current sub Fix: Removed application focus grab during OCR Fix: Tightened HD subs fuzzy logic to reduce false matches in small characters Fix: Improved Arrow k...WallSwitch: WallSwitch 1.0.6: Version 1.0.6 Changes: Added hotkeys to perform a variety of operations (next/previous image, pause, clear history, etc.) Added color effects for grayscale, sepia and intense color. Various fixes.Readable Passphrase Generator: KeePass Plugin 0.7.1: See the KeePass Plugin Step By Step Guide for instructions on how to install the plugin. Changes Built against KeePass 2.20New ProjectsBackup Mirth To TFS: You're a developer managing a handful of Mirth Connect HL7 integration servers. 
You want to ensure that your servers are under change control.Capability Mapping Tool: Capability Mapping Tool provides an intuitive interface to input and prepare reports about the capabilities in university programs and their development also prCRM 2011 Global Quick Search: This CRM 2011 Silverlight WebResource will facilitate User to do Quick Search on multiple CRM Entities and results will be shown on single pageDatabaseUtil: Useful database utilities. Currently only for Entity Framework 4.DevTxt Blog Engine: The blog engine I use.Distrib(uted) Processing Grid: Distrib is a simple yet powerful distributed processing system.Download Organizer: Download Organizer is a Windows service developed in C# on .NET 4 to monitor your downloads folder and move inbound files to various locations on your PC.Example for Tutorial: Lorem IpsumExternal scripts plugin for nopCommerce: This plugin allows you to add any script to any page of your nopCommerce websiteGLMET Next Generation: A user of GLMET/MLT and want to use again? This is right for YOU! A great Folder Locker just for only you!Håvard Fjær: Code I make that might be useful to others. Mostly C#, .NET, NETMF and Gadgeteer stuff. IdentifyUI - An automation tool to identify objects: IdentifyUI - An automation tool to identify objects It works only on windows operating system. It has been tested on Windows XP. iFinity Google Analytics for DotNetNuke: The iFinity Google Analytics module is a simple way to implement Google Analytics tracking for your DotNetNuke website, but also contains advanced features.Labmodel: Modelling of flow around islandMachineSLATStatusCheck: This helps to check the SLAT capability of a machine, so that it can run hyper-v client or not.OneNote HTML Export: The OneNote HTML Export tool allows HTML export of an entire OneNote notebookPreactor Power Tools: The Preactor Power Tools are a collection of tools to enhance the functionality of Preactor.qlevel: sadasdasdroucheng: C# Hello worldSBB - Serial Browser Bridge: Stelle eingehende Daten von einer seriellen Schnittstellen in einem Browser zur Verfügung.Sendine Net: - Sendine.NET ??????????? Socket ???? - ???????????????? - ????(Router)???? - ??????????(IProtocolParser) - ????(Multi-Core)?? - ????????? - ???????Service Sheet for SharePoint: Creates ServiceSheet for each employee and customer that contains the data from Microsoft Dynamics CRM 2011 in relation to the done Services by each Consultant.SharePoint Bulk Uploader: This is client SharePoint tool that can upload a bulk of files to SharePoint document library using SharePoint Client Object Model. sharepoint foundation 2013 persian language pack: SharePoint 2013 Persian Language Pack . this project started for create a language pack for SharePoint 2013 for supporting Persian Language Pack , this project SharePoint Managed Metadata Navigator: Use SharePoint Managed Metadata Navigator to browse, explore, create, update, delete, export and import MMD Groups, Termsets, and Terms for SharePoint 2010.SharePoint Site template for PRINCE2: PRINCE2 is a Project Management Guidance from UK OGC. This project aims to provide a SharePoint site template for SharePoint 2010 and SharePoint 2013Simple Code Gen: This project will generate c# generated files from SQL server databaseTiwanaku Book Builder: Web application for the development and construction of publication formats, including ePub, docBook, etc.Tris: The all new Tris 2.0 appWalkingGraph: Test

    Read the article

  • Project Management Helps AmeriCares Deliver International Aid

    - by Sylvie MacKenzie, PMP
    Excerpt from PROFIT - ORACLE - by Alison Weiss Handle with Care Sound project management helps AmeriCares bring international aid to those in need. The stakes are always high for AmeriCares. On a mission to restore health and save lives during times of disaster, the nonprofit international relief and humanitarian aid organization delivers donated medicines, medical supplies, and humanitarian aid to people in the U.S. and around the globe. Founded in 1982 with the express mission of responding as quickly and efficiently as possible to help people in need, the Stamford, Connecticut-based AmeriCares has delivered more than US$10.5 billion in aid to 147 countries over the past three decades. Launch the Slideshow “It’s critically important to us that we steward all the donations and that the medical supplies and medicines get to people as quickly as possible with no loss,” says Kate Sears, senior vice president for finance and technology at AmeriCares. “Whether we’re shipping IV solutions to victims of cholera in Haiti or antibiotics to Somali famine victims, we need to get the medicines there sooner because it means more people will be helped and lives improved or even saved.” Ten years ago, the tracking systems used by AmeriCares associates were paper-based. In recent years, staff started using spreadsheets, but the tracking processes were not standardized between teams. “Every team was tracking completely different information,” says Megan McDermott, senior associate, Sub-Saharan Africa partnerships, at AmeriCares. “It was just a few key things. For example, we tracked the date a shipment was supposed to arrive and the date we got reports from our partner that a hospital received aid on their end.” While the data was accurate, much detail was being lost in the process. AmeriCares management knew it could do a better job of tracking this enterprise data and in 2011 took a significant step by implementing Oracle’s Primavera P6 Professional Project Management. “It’s a comprehensive solution that has helped us improve the monitoring and controlling processes. It has allowed us to do our distribution better,” says Sears. In addition, the implementation effort has been a change agent, helping AmeriCares leadership rethink project management across the entire organization. Initially, much of the focus was on standardizing processes, but staff members also learned the importance of thinking proactively to prevent possible problems and evaluating results to determine if goals and objectives are truly being met. Such data about process efficiency and overall results is critical not only to AmeriCares staff but also to the donors supporting the organization’s life-saving missions. Efficiency Saves Lives One of AmeriCares’ core operations is to gather product donations from the private sector, establish where the most-urgent needs are, and solicit monetary support to send the aid via ocean cargo or airlift to welfare- and health-oriented nongovernmental organizations, hospitals, health networks, and government ministries based in areas in need. In 2011 alone, AmeriCares sent more than 3,500 shipments to 95 countries in response to both ongoing humanitarian needs and more than two dozen emergencies, including deadly tornadoes and storms in the U.S. and the devastating tsunami in Japan. When it comes to nonprofits in general, donors want to know that the charitable organizations they support are using funds wisely. 
Typically, nonprofits are evaluated by donors in terms of efficiency, an area where AmeriCares has an excellent reputation: 98 percent of expenses go directly to supporting programs and less than 2 percent represent administrative and fundraising costs. Donors, however, should look at more than simple efficiency, says Peter York, senior partner and chief research and learning officer at TCC Group, a nonprofit consultancy headquartered in New York, New York. They should also look at whether organizations have the systems in place to sustain their missions and continue to thrive. An expert on nonprofit organizational management, York has spent years studying sustainable charitable organizations. He defines them as nonprofits that are able to achieve the ongoing financial support to stay relevant and continue doing core mission work. In his analysis of well over 2,500 larger nonprofits, York has found that many are not sustaining, and are actually scaling back in size. “One of the biggest challenges of nonprofit sustainability is the general public’s perception that every dollar donated has to go only to the delivery of service,” says York. “What our data shows is that there are some fundamental capacities that have to be there in order for organizations to sustain and grow.” York’s research highlights the importance of data-driven leadership at successful nonprofits. “You’ve got to have the tools, the systems, and the technologies to get objective information on what you do, the people you serve, and the results you’re achieving,” says York. “If leaders don’t have the knowledge and the data, they can’t make the strategic decisions about programs to take organizations to the next level.” Historically, AmeriCares associates have used time-tested and cost-effective strategies to ship and then track supplies from donation to delivery to their destinations in designated time frames. When disaster strikes, AmeriCares ships by air and generally pulls out all the stops to deliver the most urgently needed aid within the first few days and weeks. Then, as situations stabilize, AmeriCares turns to delivering sea containers for the postemergency and ongoing aid so often needed over the long term. According to McDermott, getting a shipment out the door is fairly complicated, requiring as many as five different AmeriCares teams collaborating together. The entire process can take months—from when products are received in the warehouse and deciding which recipients to allocate supplies to, to getting customs and governmental approvals in place, actually shipping products, and finally ensuring that the products are received in-country. Delivering that aid is no small affair. “Our volume exceeds half a billion dollars a year worth of donated medicines and medical supplies, so it’s a sizable logistical operation to bring these products in and get them out to the right place quickly to have the most impact,” says Sears. “We really pride ourselves on our controls and efficiencies.” Adding to that complexity is the fact that the longer it takes to deliver aid, the more dire the human need can be. Any time AmeriCares associates can shave off the complicated aid delivery process can translate into lives saved. “It’s really being able to track information consistently that will help us to see where are the bottlenecks and where can we work on improving our processes,” says McDermott. 
Setting a Standard Productivity and information management improvements were key objectives for AmeriCares when staff began the process of implementing Oracle’s Primavera solution. But before configuring the software, the staff needed to take the time to analyze the systems already in place. According to Greg Loop, manager of database systems at AmeriCares, the organization received guidance from several consultants, including Rich D’Addario, consulting project manager in the Primavera Global Business Unit at Oracle, who was instrumental in shepherding the critical requirements-gathering phase. D’Addario encouraged staff to begin documenting shipping processes by considering the order in which activities occur and which ones are dependent on others to get accomplished. This exercise helped everyone realize that to be more efficient, they needed to keep track of shipments in a more standard way. “The staff didn’t recognize formal project management methodology,” says D’Addario. “But they did understand what the most important things are and that if they go wrong, an entire project can go off course.” Before, if a boatload of supplies was being sent to Haiti and there was a problem somewhere, a lot of time was taken up finding out where the problem was—because staff was not tracking things in a standard way. As a result, even more time was needed to find possible solutions to the problem and alert recipients that the aid might be delayed. “For everyone to put on the project manager hat and standardize the way every single thing is done means that now the whole organization is on the same page as to what needs to occur from the time a hurricane hits Haiti and when a boat pulls in to unload supplies,” says D’Addario. With so much care taken to put a process foundation firmly in place, configuring the Primavera solution was actually quite simple. Specific templates were set up for different types of shipments, and dashboards were implemented to provide executives with clear overviews of every project in the system. AmeriCares’ Loop reports that system planning, refining, and testing, followed by writing up documentation and training, took approximately four months. The system went live in spring 2011 at AmeriCares’ Connecticut headquarters. While the nonprofit has an international presence, with warehouses in Europe and offices in Haiti, India, Japan, and Sri Lanka, most donated medicines come from U.S. entities and are shipped from the U.S. out to the rest of the world. In addition, all shipments are tracked from the U.S. office. AmeriCares doesn’t expect the Primavera system to take months off the shipping time, especially for sea containers. However, any time saved is still important because it will allow aid to be delivered to people more quickly at a lower overall cost. “If we can trim a day or two here or there, that can translate into lives that we’re saving, especially in emergency situations,” says Sears. A Cultural Change Beyond the measurable benefits that come with IT-driven process improvement, AmeriCares management is seeing a change in culture as a result of the Primavera project. One change has been treating every shipment of aid as a project, and everyone involved with facilitating shipments as a project manager. “This is a revolutionary concept for us,” says McDermott. 
“Before, we were used to thinking we were doing logistics—getting a container from point A to point B without looking at it as one project and really understanding what it meant to manage it.” AmeriCares staff is also happy to report that collaboration within the organization is much more efficient. When someone creates a shipment in the Primavera system, the same shared template is used, which means anyone can log in to the system to see the status of a shipment. Knowledgeable staff can access a shipment project to help troubleshoot a problem. Management can easily check the status of projects across the organization. “Dashboards are really useful,” says McDermott. “Instead of going into the details of each project, you can just see the high-level real-time information at a glance.” The new system is helping team members focus on proactively managing shipments rather than simply reacting when problems occur. For example, when a container is shipped, documents must be included for customs clearance. Now, the shipping template has built-in reminders to prompt team members to ask for copies of these documents from freight forwarders and to follow up with partners to discover if a shipment is on time. In the past, staff may not have worked on securing these documents until they’d been notified a shipment had arrived in-country. Another benefit of capturing and adopting best practices within the Primavera system is that staff training is easier. “Capturing the processes in documented steps and milestones allows us to teach new staff members how to do their jobs faster,” says Sears. “It provides them with the knowledge of their predecessors so they don’t have to keep reinventing the wheel.” With the Primavera system already generating positive results, management is eager to take advantage of advanced capabilities. Loop is working on integrating the company’s proprietary inventory management system with the Primavera system so that when logistics or warehousing operators input data, the information will automatically go into the Primavera system. In the past, this information had to be manually keyed into spreadsheets, often leading to errors. Mining Historical Data Another feature on the horizon for AmeriCares is utilizing Primavera P6 Professional Project Management reporting capabilities. As the system begins to include more historical data, management soon will be able to draw on this information to conduct analysis that has not been possible before and create customized reports. For example, at the beginning of the shipment process, staff will be able to use historical data to more accurately estimate how long the approval process should take for a particular country. This could help ensure that food and medicine with limited shelf lives do not get stuck in customs or used beyond their expiration dates. The historical data in the Primavera system will also help AmeriCares with better planning year to year. The nonprofit’s staff has always put together a plan at the beginning of the year, but this has been very challenging simply because it is impossible to predict disasters. Now, management will be able to look at historical data and see trends and statistics as they set current objectives and prepare for future need. In addition, this historical data will provide AmeriCares management with the ability to review year-end data and compare actual project results with goals set at the beginning of the year—to see if desired outcomes were achieved and if there are areas that need improvement. 
It’s this type of information that is so valuable to donors. And, according to York, project management software can play a critical role in generating the data to help nonprofits sustain and grow. “It is important to invest in systems to help replicate, expand, and deliver services,” says York. “Project management software can help because it encourages nonprofits to examine program or service changes and how to manage moving forward.” Sears believes that AmeriCares donors will support the return on investment the organization will achieve with the Primavera solution. “It won’t be financial returns, but rather how many more people we can help for a given dollar or how much more quickly we can respond to a need,” says Sears. “I think donors are receptive to such arguments.” And for AmeriCares, it is all about the future and increasing results. The project management environment currently may be quite simple, but IT staff plans to expand the complexity and functionality as the organization grows in its knowledge of project management and the goals it wants to achieve. “As we use the system over time, we’ll continue to refine our best practices and accumulate more data,” says Sears. “It will advance our ability to make better data-driven decisions.”

    Read the article

  • Jquery Live Function

    - by marharépa
    Hi! I want to make this script to work as LIVE() function. Please help me! $(".img img").each(function() { $(this).cjObjectScaler({ destElem: $(this).parent(), method: "fit" }); }); the cjObjectScaler script (called in the html header) is this: (thanks for Doug Jones) (function ($) { jQuery.fn.imagesLoaded = function (callback) { var elems = this.filter('img'), len = elems.length; elems.bind('load', function () { if (--len <= 0) { callback.call(elems, this); } }).each(function () { // cached images don't fire load sometimes, so we reset src. if (this.complete || this.complete === undefined) { var src = this.src; // webkit hack from http://groups.google.com/group/jquery-dev/browse_thread/thread/eee6ab7b2da50e1f this.src = '#'; this.src = src; } }); }; })(jQuery); /* CJ Object Scaler */ (function ($) { jQuery.fn.cjObjectScaler = function (options) { /* user variables (settings) ***************************************/ var settings = { // must be a jQuery object method: "fill", // the parent object to scale our object into destElem: null, // fit|fill fade: 0 // if positive value, do hide/fadeIn }; /* system variables ***************************************/ var sys = { // function parameters version: '2.1.1', elem: null }; /* scale the image ***************************************/ function scaleObj(obj) { // declare some local variables var destW = jQuery(settings.destElem).width(), destH = jQuery(settings.destElem).height(), ratioX, ratioY, scale, newWidth, newHeight, borderW = parseInt(jQuery(obj).css("borderLeftWidth"), 10) + parseInt(jQuery(obj).css("borderRightWidth"), 10), borderH = parseInt(jQuery(obj).css("borderTopWidth"), 10) + parseInt(jQuery(obj).css("borderBottomWidth"), 10), objW = jQuery(obj).width(), objH = jQuery(obj).height(); // check for valid border values. IE takes in account border size when calculating width/height so just set to 0 borderW = isNaN(borderW) ? 0 : borderW; borderH = isNaN(borderH) ? 0 : borderH; // calculate scale ratios ratioX = destW / jQuery(obj).width(); ratioY = destH / jQuery(obj).height(); // Determine which algorithm to use if (!jQuery(obj).hasClass("cf_image_scaler_fill") && (jQuery(obj).hasClass("cf_image_scaler_fit") || settings.method === "fit")) { scale = ratioX < ratioY ? ratioX : ratioY; } else if (!jQuery(obj).hasClass("cf_image_scaler_fit") && (jQuery(obj).hasClass("cf_image_scaler_fill") || settings.method === "fill")) { scale = ratioX < ratioY ? ratioX : ratioY; } // calculate our new image dimensions newWidth = parseInt(jQuery(obj).width() * scale, 10) - borderW; newHeight = parseInt(jQuery(obj).height() * scale, 10) - borderH; // Set new dimensions & offset jQuery(obj).css({ "width": newWidth + "px", "height": newHeight + "px"//, // "position": "absolute", // "top": (parseInt((destH - newHeight) / 2, 10) - parseInt(borderH / 2, 10)) + "px", // "left": (parseInt((destW - newWidth) / 2, 10) - parseInt(borderW / 2, 10)) + "px" }).attr({ "width": newWidth, "height": newHeight }); // do our fancy fade in, if user supplied a fade amount if (settings.fade > 0) { jQuery(obj).fadeIn(settings.fade); } } /* set up any user passed variables ***************************************/ if (options) { jQuery.extend(settings, options); } /* main ***************************************/ return this.each(function () { sys.elem = this; // if they don't provide a destObject, use parent if (settings.destElem === null) { settings.destElem = jQuery(sys.elem).parent(); } // need to make sure the user set the parent's position. 
Things go bonker's if not set. // valid values: absolute|relative|fixed if (jQuery(settings.destElem).css("position") === "static") { jQuery(settings.destElem).css({ "position": "relative" }); } // if our object to scale is an image, we need to make sure it's loaded before we continue. if (typeof sys.elem === "object" && typeof settings.destElem === "object" && typeof settings.method === "string") { // if the user supplied a fade amount, hide our image if (settings.fade > 0) { jQuery(sys.elem).hide(); } if (sys.elem.nodeName === "IMG") { // to fix the weird width/height caching issue we set the image dimensions to be auto; jQuery(sys.elem).width("auto"); jQuery(sys.elem).height("auto"); // wait until the image is loaded before scaling jQuery(sys.elem).imagesLoaded(function () { scaleObj(this); }); } else { scaleObj(jQuery(sys.elem)); } } else { console.debug("CJ Object Scaler could not initialize."); return; } }); }; })(jQuery);
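A note on the .live() request (an addition, not part of the original question): jQuery's .live() only delegates events to elements added later, so it cannot keep re-applying a plugin call such as cjObjectScaler to future images. A common workaround is to wrap the call in a helper and invoke it again whenever new ".img img" elements are inserted. The sketch below assumes that pattern; the helper name and the "#gallery" container are illustrative only, and cjObjectScaler already waits internally for IMG elements to load, so no extra load handling is needed here.

// minimal sketch, assuming jQuery 1.4+ and the cjObjectScaler plugin above
function scaleImages(context) {
    jQuery(".img img", context).each(function () {
        var img = jQuery(this);
        img.cjObjectScaler({
            destElem: img.parent(), // scale each image into its parent container
            method: "fit"
        });
    });
}

// run once on page load
jQuery(function () {
    scaleImages(document);
});

// and run again from whatever code inserts new images, e.g. an AJAX callback:
// jQuery("#gallery").append(newMarkup);
// scaleImages("#gallery");

For what it is worth, .live() was deprecated in jQuery 1.7 in favour of .on(), and neither helps here, because applying a plugin is not an event that can be delegated.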

    Read the article

  • Graphics.MeasureCharacterRanges giving wrong size calculations in C#.Net?

    - by Owen Blacker
    I'm trying to render some text into a specific part of an image in a Web Forms app. The text will be user entered, so I want to vary the font size to make sure it fits within the bounding box. I have code that was doing this fine on my proof-of-concept implementation, but I'm now trying it against the assets from the designer, which are larger, and I'm getting some odd results. I'm running the size calculation as follows: StringFormat fmt = new StringFormat(); fmt.Alignment = StringAlignment.Center; fmt.LineAlignment = StringAlignment.Near; fmt.FormatFlags = StringFormatFlags.NoClip; fmt.Trimming = StringTrimming.None; int size = __startingSize; Font font = __fonts.GetFontBySize(size); while (GetStringBounds(text, font, fmt).IsLargerThan(__textBoundingBox)) { context.Trace.Write("MyHandler.ProcessRequest", "Decrementing font size to " + size + ", as size is " + GetStringBounds(text, font, fmt).Size() + " and limit is " + __textBoundingBox.Size()); size--; if (size < __minimumSize) { break; } font = __fonts.GetFontBySize(size); } context.Trace.Write("MyHandler.ProcessRequest", "Writing " + text + " in " + font.FontFamily.Name + " at " + font.SizeInPoints + "pt, size is " + GetStringBounds(text, font, fmt).Size() + " and limit is " + __textBoundingBox.Size()); I then use the following line to render the text onto an image I'm pulling from the filesystem: g.DrawString(text, font, __brush, __textBoundingBox, fmt); where: __fonts is a PrivateFontCollection, PrivateFontCollection.GetFontBySize is an extension method that returns a FontFamily RectangleF __textBoundingBox = new RectangleF(150, 110, 212, 64); int __minimumSize = 8; int __startingSize = 48; Brush __brush = Brushes.White; int size starts out at 48 and decrements within that loop Graphics g has SmoothingMode.AntiAlias and TextRenderingHint.AntiAlias set context is a System.Web.HttpContext (this is an excerpt from the ProcessRequest method of an IHttpHandler) The other methods are: private static RectangleF GetStringBounds(string text, Font font, StringFormat fmt) { CharacterRange[] range = { new CharacterRange(0, text.Length) }; StringFormat myFormat = fmt.Clone() as StringFormat; myFormat.SetMeasurableCharacterRanges(range); using (Graphics g = Graphics.FromImage(new Bitmap( (int) __textBoundingBox.Width - 1, (int) __textBoundingBox.Height - 1))) { g.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias; g.TextRenderingHint = System.Drawing.Text.TextRenderingHint.AntiAlias; Region[] regions = g.MeasureCharacterRanges(text, font, __textBoundingBox, myFormat); return regions[0].GetBounds(g); } } public static string Size(this RectangleF rect) { return rect.Width + "×" + rect.Height; } public static bool IsLargerThan(this RectangleF a, RectangleF b) { return (a.Width > b.Width) || (a.Height > b.Height); } Now I have two problems. Firstly, the text sometimes insists on wrapping by inserting a line-break within a word, when it should just fail to fit and cause the while loop to decrement again. I can't see why it is that Graphics.MeasureCharacterRanges thinks that this fits within the box when it shouldn't be word-wrapping within a word. This behaviour is exhibited irrespective of the character set used (I get it in Latin alphabet words, as well as other parts of the Unicode range, like Cyrillic, Greek, Georgian and Armenian). Is there some setting I should be using to force Graphics.MeasureCharacterRanges only to be word-wrapping at whitespace characters (or hyphens)? This first problem is the same as post 2499067. 
Secondly, in scaling up to the new image and font size, Graphics.MeasureCharacterRanges is giving me heights that are wildly off. The RectangleF I am drawing within corresponds to a visually apparent area of the image, so I can easily see when the text is being decremented more than is necessary. Yet when I pass it some text, the GetBounds call is giving me a height that is almost double what it's actually taking. Using trial and error to set the __minimumSize to force an exit from the while loop, I can see that 24pt text fits within the bounding box, yet Graphics.MeasureCharacterRanges is reporting that the height of that text, once rendered to the image, is 122px (when the bounding box is 64px tall and it fits within that box). Indeed, without forcing the matter, the while loop iterates to 18pt, at which point Graphics.MeasureCharacterRanges returns a value that fits. The trace log excerpt is as follows:
Decrementing font size to 24, as size is 193×122 and limit is 212×64
Decrementing font size to 23, as size is 191×117 and limit is 212×64
Decrementing font size to 22, as size is 200×75 and limit is 212×64
Decrementing font size to 21, as size is 192×71 and limit is 212×64
Decrementing font size to 20, as size is 198×68 and limit is 212×64
Decrementing font size to 19, as size is 185×65 and limit is 212×64
Writing VENNEGOOR of HESSELINK in DIN-Black at 18pt, size is 178×61 and limit is 212×64
So why is Graphics.MeasureCharacterRanges giving me a wrong result? I could understand it being, say, the line height of the font if the loop stopped around 21pt (which would visually fit, if I screenshot the results and measure it in Paint.Net), but it's going far further than it should be doing because, frankly, it's returning the wrong damn results. Any and all help gratefully received. Thanks!
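One diagnostic worth trying (an addition, not from the original question, and not guaranteed to be the fix): measure with StringFormatFlags.NoWrap set, so that a string which is too wide fails the width comparison instead of being wrapped, possibly mid-word, onto extra lines that inflate the reported height. A hedged sketch, reusing the question's __textBoundingBox field:

private static RectangleF GetStringBoundsNoWrap(string text, Font font, StringFormat fmt)
{
    CharacterRange[] range = { new CharacterRange(0, text.Length) };
    StringFormat myFormat = (StringFormat)fmt.Clone();
    myFormat.FormatFlags |= StringFormatFlags.NoWrap; // suppress wrapping during measurement
    myFormat.SetMeasurableCharacterRanges(range);

    using (Bitmap bmp = new Bitmap(1, 1))
    using (Graphics g = Graphics.FromImage(bmp))
    {
        g.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias;
        g.TextRenderingHint = System.Drawing.Text.TextRenderingHint.AntiAlias;
        Region[] regions = g.MeasureCharacterRanges(text, font, __textBoundingBox, myFormat);
        return regions[0].GetBounds(g);
    }
}

If the NoWrap measurement comes back at a sensible height while the original call does not, the doubled heights are coming from unwanted line breaks rather than from the font metrics, which narrows the problem considerably.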

    Read the article

  • HTML5 CSS3 layout not working

    - by John.Weland
    I have been asked by a local MMA (Mixed Martial Arts) School to help them develop a website. For the life of me I CANNOT get the layout to work correctly. When I get one section set where it should be another moves out of place! here is a pic of the layout: here The header should be a set height as should the footer the entire site at its widest point should be 1250px with the header/content area/footer and the like being 1240px the black in the picture is a scaling background to expand wider as larger resolution systems are viewing them. The full site should be a minimum-height of 100% but scale virtually as content in the target area deems necessary. My biggest issue currently is that my "sticky" footer doesn't stick once the content has stretched the content target area virtually. the Code is not pretty but here it is: HTML5 <!doctype html> <html> <head> <link rel="stylesheet" href="menu.css" type="text/css" media="screen"> <link rel="stylesheet" href="master.css" type="text/css" media="screen"> <meta charset="utf-8"> <title>Untitled Document</title> </head> <body bottommargin="0" leftmargin="0" rightmargin="0" topmargin="0"> <div id="wrap" class="wrap"><div id="logo" class="logo"><img src="images/comalogo.png" width="100" height="150"></div> <div id="header" class="header">College of Martial Arts</div> <div id="nav" class="nav"> <ul id="menu"><b> <li><a href="#">News</a></li> <li>·</li> <li><a href="#">About Us</a> <ul> <li><a href="#">The Instructors</a></li> <li><a href="#">Our Arts</a></li> </li> </ul> <li>·</li> <li><a href="#">Location</a></li> <li>·</li> <li><a href="#">Gallery</a></li> <li>·</li> <li><a href="#">MMA.tv</a></li> <li>·</li> <li><a href="#">Schedule</a></li> <li>·</li> <li><a href="#">Fight Gear</a></li></b> </div> <div id="social" class="social"> <a href="http://www.facebook.com/pages/Canyon-Lake-College-of-Martial-Arts/189432551104674"><img src="images/soc/facebook.png"></a> <a href="https://twitter.com/#!/CanyonLakeMMA"><img src="images/soc/twitter.png"></a> <a href="https://plus.google.com/108252414577423199314/"><img src="images/soc/google+.png"></a> <a href="http://youtube.com/user/clmmatv"><img src="images/soc/youtube.png"></a></div> <div id="mid" class="mid">test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br></div> <div id="footer" class="footer"> <div id="contact" style="left:0px;">tel: (830) 214-4591<br /> e: [email protected]<br /> add: 1273 FM 2673, Sattler, TX 78133<br /> </div> <div id="affiliates" style="right:0px;">Hwa Rang World Tang soo Do</div> <div id="copyright">Copyright © College of Martial Arts</div> </div> </body> </html> CSS3 -Dropdown Menu- @charset "utf-8"; /* CSS Document */ /* Main */ #menu { width: 100%; margin: 0; padding: 10px 0 0 0; list-style: none; background: #444; background: -moz-linear-gradient(#000, #333); background: -webkit-gradient(linear,left bottom,left top,color-stop(0, #444),color-stop(1, #000)); background: -webkit-linear-gradient(#000, #333); background: -o-linear-gradient(#000, #333); background: -ms-linear-gradient(#000, #333); background: linear-gradient(#000, #333); -moz-border-radius: 5px; border-radius: 5px; -moz-box-shadow: 0 2px 1px #9c9c9c; -webkit-box-shadow: 0 2px 1px #9c9c9c; box-shadow: 0 8px 8px #9c9c9c; /* outline:#000 solid thin; */ } #menu li { left:150px; float: left; padding: 0 0 10px 0; position:relative; color: #FC0; 
font-size:15px; font-family:'freshman' cursive; line-height:15px; } #menu a { float: left; height: 15px; line-height:15px; padding: 0 10px; color: #FC0; font-size:15px; text-decoration: none; text-shadow: 1 1px 0 #000; text-align:center; } #menu li:hover > a { color: #fafafa; } *html #menu li a:hover /* IE6 */ { color: #fafafa; } #menu li:hover > ul { display: block; } /* Sub-menu */ #menu ul { list-style: none; margin: 0; padding: 0; display: none; position: absolute; top: 25px; left: 0; z-index: 99999; background: #444; background: -moz-linear-gradient(#000, #333); background: -webkit-gradient(linear,left bottom,left top,color-stop(0, #111),color-stop(1, #444)); background: -webkit-linear-gradient(#000, #333); background: -o-linear-gradient(#000, #333); background: -ms-linear-gradient(#000, #333); background: linear-gradient(#000, #333); -moz-border-radius: 5px; border-radius: 5px; /* outline:#000 solid thin; */ } #menu ul li { left:0; -moz-box-shadow: none; -webkit-box-shadow: none; box-shadow: none; } #menu ul a { padding: 10px; height: auto; line-height: 1; display: block; white-space: nowrap; float: none; text-transform: none; } *html #menu ul a /* IE6 */ { height: 10px; width: 200px; } *:first-child+html #menu ul a /* IE7 */ { height: 10px; width: 200px; } /*#menu ul a:hover { background: #000; background: -moz-linear-gradient(#000, #333); background: -webkit-gradient(linear, left top, left bottom, from(#04acec), to(#0186ba)); background: -webkit-linear-gradient(#000, #333); background: -o-linear-gradient(#000, #333); background: -ms-linear-gradient(#000, #333); background: linear-gradient(#000, #333); }*/ /* Clear floated elements */ #menu:after { visibility: hidden; display: block; font-size: 0; content: " "; clear: both; height: 0; } * html #menu { zoom: 1; } /* IE6 */ *:first-child+html #menu { zoom: 1; } /* IE7 */ CSS3 -Master Style Sheet- @charset "utf-8"; /* CSS Document */ a:link {color:#FC0; text-decoration:none;} /* unvisited link */ a:visited {color:#FC0; text-decoration:none;} /* visited link */ a:hover {color:#FFF; text-decoration:none;} /* mouse over link */ a:active {color:#FC0; text-decoration:none;} /* selected link */ ul.a {list-style-type:none;} ul.b {list-style-type:inherit} html { } body { /*background-image:url(images/cagebg.jpg);*/ background-repeat:repeat; background-position:top; } div.wrap { margin: 0 auto; min-height: 100%; position: relative; width: 1250px; } div.logo{ top:25px; left:20px; position:absolute; float:top; height:150px; } /*Freshman FONT is on my computer needs to be uploaded to the webhost and rendered host side like a webfont*/ div.header{ background-color:#999; color:#FC0; margin-left:5px; height:80px; width:1240px; line-height:70px; font-family:'freshman' cursive; font-size:50px; text-shadow:8px 8px #9c9c9c; text-outline:1px 1px #000; text-align:center; background-color:#999; clear: both; } div.social{ height:50px; margin-left:5px; width:1240px; font-family:'freshman' cursive; font-size:50px; text-align:right; color:#000; background-color:#999; line-height:30px; box-sizing: border-box; ms-box-sizing: border-box; webkit-box-sizing: border-box; moz-box-sizing: border-box; padding-right:5px; } div.mid{ position:absolute; min-height:100%; margin-left:5px; width:1240px; font-family:'freshman' cursive; font-size:50px; text-align:center; color:#000; background-color:#999; } /*SIDE left and right should be 40px wide and a minimum height (100% the area from nav-footer) to fill between the NAV and the footer yet stretch as displayed content 
streatches the page longer (scrollable)*/ div #side.sright{ top:96px; right:0; position:absolute; float:right; height:100%; min-height:100%; width:40px; background-image:url(images/border.png); } /*Container should vary in height in acordance to content displayed*/ div #content.container{ } /*Footer should stick at ABSOLUTE BOTTOM of the page*/ div #footer{ font-family:'freshman' cursive; position:fixed; bottom:0; background-color:#000000; margin-left:5px; width:1240px; color:#FC0; clear: both; /*this clear property forces the .container to understand where the columns end and contain them*/ } /*HTML 5 support - Sets new HTML 5 tags to display:block so browsers know how to render the tags properly.*/ header, section, footer, aside, nav, article, figure { display: block; } Eventually once the layout is correct I have to use PHP to make calls for where data should be displayed from what database. If anyone can help me to fix this layout and clean up the crap code, I'd be much appreciated.. I've spent weeks trying to figure this out.
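One thing that may explain the footer behaviour (an addition, not part of the original post): position:fixed pins #footer to the viewport, so on pages taller than the screen it floats over the content instead of sitting at the bottom of the page. The classic sticky-footer pattern keeps the footer absolutely positioned inside a full-height wrapper instead. A minimal sketch reusing the question's selectors follows; the 80px reserved for the footer is an assumption, and note that the stylesheet above writes "div #footer" (a descendant selector) where "div#footer" is probably intended.

html, body {
    height: 100%;
    margin: 0;
}
div.wrap {
    position: relative;      /* containing block for the absolutely positioned footer */
    min-height: 100%;
    width: 1250px;
    margin: 0 auto;
    padding-bottom: 80px;    /* keeps content from running underneath the footer */
    box-sizing: border-box;  /* so min-height:100% includes the padding */
}
div#footer {
    position: absolute;      /* bottom of .wrap, not of the viewport */
    bottom: 0;
    width: 1240px;
    margin-left: 5px;
    background-color: #000;
    color: #FC0;
}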

    Read the article

  • Json / Jsonp not connecting to php (Phonegap + jquerymobile)

    - by Madhulika Mukherjee
    I am trying to make - an android WEB application with phonegap layout with JqueryMobile What Im doing - An html form that takes ID, name, and address as input 'Serialize's this data using ajax makes a json object out of it Should send it to a file called 'connection.php' Where, this data is put into a database (MySql) Other details - My server is localhost, Im using xampp I have already created a database and table using phpmyadmin The problem - My html file, where my json object is created, does not connect to the php file which is hosted by my localhost Here is my COMPLETE html file: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html> <head> <!-- Change this if you want to allow scaling --> <meta name="viewport" content="width=default-width; user-scalable=no" /> <meta http-equiv="Content-type" content="text/html;charset=utf-8"> <title>Trial app</title> <link rel="stylesheet" href="thestylesheet.css" type="text/css"> <script type="text/javascript" charset="utf-8" src="javascript1.js"></script> <script type="text/javascript" charset="utf-8" src="javascript2.js"></script> <script type="text/javascript" charset="utf-8" src="cordova-1.8.0.js"></script> <script> $(document).ready(function () { $("#btn").click( function() { alert('hello hello'); $.ajax({ url: "connection.php", type: "POST", data: { id: $('#id').val(), name: $('#name').val(), Address: $('#Address').val() }, datatype: "json", success: function (status) { if (status.success == false) { alert("Failure!"); } else { alert("Success!"); } } }); }); }); </script> </head> <body> <div data-role="header"> <h1>Heading of the app</h1> </div><!-- /header --> <div data-role="content"> <form id="target" method="post"> <label for="id"> <input type="text" id="id" placeholder="ID"> </label> <label for="name"> <input type="text" id="name" placeholder="Name"> </label> <label for="Address"> <input type="text" id="Address" placeholder="Address"> </label> <div id="btn" data-role="button" data-icon="star" data-theme="e">Add record</div> <!--<input type="submit" value="Add record" data-icon="star" data-theme="e"> --> </form> </div> </body> </html> And here is my 'connection.php' hosted by my localhost <?php header('Content-type: application/json'); $server = "localhost"; $username = "root"; $password = ""; $database = "jqueryex"; $con = mysql_connect($server, $username, $password); if($con) { echo "Connected to database!"; } else { echo "Could not connect!"; } //or die ("Could not connect: " . mysql_error()); mysql_select_db($database, $con); /* CREATE TABLE `sample` ( `id` int(11) unsigned NOT NULL AUTO_INCREMENT, `name` varchar(45) DEFAULT NULL, `Address` varchar(45) DEFAULT NULL, PRIMARY KEY (`id`) ) */ $id= json_decode($_POST['id']); $name = json_decode($_POST['name']); $Address = json_decode($_POST['Address']); $sql = "INSERT INTO sample (id, name, Address) "; $sql .= "VALUES ($id, '$name', '$Address')"; if (!mysql_query($sql, $con)) { die('Error: ' . mysql_error()); } else { echo "Comment added"; } mysql_close($con); ?> My doubts: No entry is made in my table 'sample' when i view it in phpmyadmin So obviously, i see no success messages either I dont get any errors, not from ajax and neither from the php file. Stuff Im suspecting: Should i be using jsonp instead of json? Im new to this. Is there a problem with my php file? Perhaps I need to include some more javascript files in my html file? I assume this is a very simple problem so please help me out! 
I think there is just some conceptual error, as I have only just started with jQuery, AJAX, and JSON. Thank you.
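Two things commonly break this exact setup (an addition, not part of the original question). First, a PhoneGap page is loaded from file://, so a relative url such as "connection.php" never reaches the XAMPP server; the request needs an absolute URL (10.0.2.2 is the Android emulator's alias for the host machine's localhost, a physical device needs the PC's LAN IP, and depending on the PhoneGap version the host may also need to be whitelisted in the app's configuration). Second, connection.php prints plain text ("Connected to database!", "Comment added") rather than a single JSON object, so status.success never exists in the success handler. A hedged sketch of the client side, with the host and path as assumptions:

$("#btn").click(function () {
    $.ajax({
        url: "http://10.0.2.2/connection.php", // absolute URL; adjust host/path to your setup
        type: "POST",
        dataType: "json",                      // note: the option is dataType, not datatype
        data: {
            id: $('#id').val(),
            name: $('#name').val(),
            Address: $('#Address').val()
        },
        success: function (status) {
            alert(status.success ? "Success!" : "Failure!");
        },
        error: function (xhr, textStatus) {
            alert("Request failed: " + textStatus); // surfaces connection errors instead of failing silently
        }
    });
});

On the server side the script would then emit nothing but a single object, for example echo json_encode(array("success" => true)); and since the fields above arrive as ordinary form values rather than JSON strings, the json_decode() calls on $_POST are unnecessary. The values should also be escaped, or the query parameterised, before being interpolated into the INSERT statement.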

    Read the article

  • Accessing local variable doesn't improve performance

    - by NicMagnier
    The short version Why is this code: var index = (Math.floor(y / scale) * img.width + Math.floor(x / scale)) * 4; More performant than this one? var index = Math.floor(ref_index) * 4; The long version This week, the author of Impact js published an article about some rendering issue: http://www.phoboslab.org/log/2012/09/drawing-pixels-is-hard In the article there was the source of a function to scale an image by accessing pixels in the canvas. I wanted to suggest some traditional ways to optimize this kind of code so that the scaling would be shorter at loading time. But after testing it my result was most of the time worst that the original function. Guessing this was the JavaScript engine that was doing some smart optimization I tried to understand a bit more what was going on so I did a bunch of test. But my results are quite confusing and I would need some help to understand what's going on. I have a test page here: http://www.mx981.com/stuff/resize_bench/test.html jsPerf: http://jsperf.com/local-variable-due-to-the-scope-lookup To start the test, click the picture and the results will appear in the console. There are three different versions: The original code: for( var y = 0; y < heightScaled; y++ ) { for( var x = 0; x < widthScaled; x++ ) { var index = (Math.floor(y / scale) * img.width + Math.floor(x / scale)) * 4; var indexScaled = (y * widthScaled + x) * 4; scaledPixels.data[ indexScaled ] = origPixels.data[ index ]; scaledPixels.data[ indexScaled+1 ] = origPixels.data[ index+1 ]; scaledPixels.data[ indexScaled+2 ] = origPixels.data[ index+2 ]; scaledPixels.data[ indexScaled+3 ] = origPixels.data[ index+3 ]; } } jsPerf: http://jsperf.com/so-accessing-local-variable-doesn-t-improve-performance One of my attempt to optimize it: var ref_index = 0; var ref_indexScaled = 0 var ref_step = 1 / scale; for( var y = 0; y < heightScaled; y++ ) { for( var x = 0; x < widthScaled; x++ ) { var index = Math.floor(ref_index) * 4; scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index ]; scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+1 ]; scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+2 ]; scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+3 ]; ref_index+= ref_step; } } jsPerf: http://jsperf.com/so-accessing-local-variable-doesn-t-improve-performance The same optimized code but with recalculating the index variable each time (Hybrid) var ref_index = 0; var ref_indexScaled = 0 var ref_step = 1 / scale; for( var y = 0; y < heightScaled; y++ ) { for( var x = 0; x < widthScaled; x++ ) { var index = (Math.floor(y / scale) * img.width + Math.floor(x / scale)) * 4; scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index ]; scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+1 ]; scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+2 ]; scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+3 ]; ref_index+= ref_step; } } jsPerf: http://jsperf.com/so-accessing-local-variable-doesn-t-improve-performance The only difference in the two last one is the calculation of the 'index' variable. And to my surprise the optimized version is slower in most browsers (except opera). 
Results of personal testing (not the jsPerf tests):
Opera: Original 8668ms, Optimized 932ms, Hybrid 8696ms
Chrome: Original 139ms, Optimized 145ms, Hybrid 136ms
Safari: Original 433ms, Optimized 853ms, Hybrid 451ms
Firefox: Original 343ms, Optimized 422ms, Hybrid 350ms
After digging around, the usual advice seems to be to access mainly local variables because of scope lookup. Since the Optimized version reads only one local variable, it should be faster than the Hybrid code, which reads several variables and objects on top of the other operations involved. So why is the "optimized" version slower? I thought it might be because some JavaScript engines don't optimize the Optimized version because it is not hot enough, but after using --trace-opt in Chrome it seems all versions are properly compiled by V8. At this point I am a bit clueless and wonder if somebody knows what is going on. I also put some more test cases on this page: http://www.mx981.com/stuff/resize_bench/index.html
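A further variant that may be worth adding to the benchmark (an addition, not an explanation of the results above): for non-negative indices that fit in 32 bits, a bitwise OR with zero truncates the same way Math.floor does, and the row component of the source index only depends on y, so it can be hoisted out of the inner loop:

// sketch of an extra test case, same inputs as the original loops
var ref_indexScaled = 0;
for (var y = 0; y < heightScaled; y++) {
    var rowIndex = ((y / scale) | 0) * img.width;   // hoisted: only changes per row
    for (var x = 0; x < widthScaled; x++) {
        var index = (rowIndex + ((x / scale) | 0)) * 4;
        scaledPixels.data[ref_indexScaled++] = origPixels.data[index];
        scaledPixels.data[ref_indexScaled++] = origPixels.data[index + 1];
        scaledPixels.data[ref_indexScaled++] = origPixels.data[index + 2];
        scaledPixels.data[ref_indexScaled++] = origPixels.data[index + 3];
    }
}

Whether this beats the JIT's handling of the original is exactly the kind of thing the jsPerf pages above would show.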

    Read the article

  • ActiveMQ - "Cannot send, channel has already failed" every 2 seconds?

    - by quanta
    ActiveMQ 5.7.0 In the activemq.log, I'm seeing this exception every 2 seconds: 2013-11-05 13:00:52,374 | DEBUG | Transport Connection to: tcp://127.0.0.1:37501 failed: org.apache.activemq.transport.InactivityIOException: Cannot send, channel has already failed: tcp://127.0.0.1:37501 | org.apache.activemq.broker.TransportConnection.Transport | Async Exception Handler org.apache.activemq.transport.InactivityIOException: Cannot send, channel has already failed: tcp://127.0.0.1:37501 at org.apache.activemq.transport.AbstractInactivityMonitor.doOnewaySend(AbstractInactivityMonitor.java:282) at org.apache.activemq.transport.AbstractInactivityMonitor.oneway(AbstractInactivityMonitor.java:271) at org.apache.activemq.transport.TransportFilter.oneway(TransportFilter.java:85) at org.apache.activemq.transport.WireFormatNegotiator.oneway(WireFormatNegotiator.java:104) at org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68) at org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1312) at org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:838) at org.apache.activemq.broker.TransportConnection.iterate(TransportConnection.java:873) at org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:129) at org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:47) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) Due to this keyword InactivityIOException, the first thing comes to my mind is InactivityMonitor, but the strange thing is MaxInactivityDuration=30000: 2013-11-05 13:11:02,672 | DEBUG | Sending: WireFormatInfo { version=9, properties={MaxFrameSize=9223372036854775807, CacheSize=1024, CacheEnabled=true, SizePrefixDisabled=false, MaxInactivityDurationInitalDelay=10000, TcpNoDelayEnabled=true, MaxInactivityDuration=30000, TightEncodingEnabled=true, StackTraceEnabled=true}, magic=[A,c,t,i,v,e,M,Q]} | org.apache.activemq.transport.WireFormatNegotiator | ActiveMQ BrokerService[localhost] Task-2 Moreover, I also didn't see something like this: No message received since last read check for ... or: Channel was inactive for too (30000) long Do a netstat, I see these connections in TIME_WAIT state: tcp 0 0 127.0.0.1:38545 127.0.0.1:61616 TIME_WAIT - tcp 0 0 127.0.0.1:38544 127.0.0.1:61616 TIME_WAIT - tcp 0 0 127.0.0.1:38522 127.0.0.1:61616 TIME_WAIT - Here're the output when running tcpdump: Internet Protocol Version 4, Src: 127.0.0.1 (127.0.0.1), Dst: 127.0.0.1 (127.0.0.1) Version: 4 Header length: 20 bytes Differentiated Services Field: 0x00 (DSCP 0x00: Default; ECN: 0x00: Not-ECT (Not ECN-Capable Transport)) 0000 00.. = Differentiated Services Codepoint: Default (0x00) .... ..00 = Explicit Congestion Notification: Not-ECT (Not ECN-Capable Transport) (0x00) Total Length: 296 Identification: 0x7b6a (31594) Flags: 0x02 (Don't Fragment) 0... .... = Reserved bit: Not set .1.. .... = Don't fragment: Set ..0. .... 
= More fragments: Not set Fragment offset: 0 Time to live: 64 Protocol: TCP (6) Header checksum: 0xc063 [correct] [Good: True] [Bad: False] Source: 127.0.0.1 (127.0.0.1) Destination: 127.0.0.1 (127.0.0.1) Transmission Control Protocol, Src Port: 61616 (61616), Dst Port: 54669 (54669), Seq: 1, Ack: 2, Len: 244 Source port: 61616 (61616) Destination port: 54669 (54669) [Stream index: 11] Sequence number: 1 (relative sequence number) [Next sequence number: 245 (relative sequence number)] Acknowledgement number: 2 (relative ack number) Header length: 32 bytes Flags: 0x018 (PSH, ACK) 000. .... .... = Reserved: Not set ...0 .... .... = Nonce: Not set .... 0... .... = Congestion Window Reduced (CWR): Not set .... .0.. .... = ECN-Echo: Not set .... ..0. .... = Urgent: Not set .... ...1 .... = Acknowledgement: Set .... .... 1... = Push: Set .... .... .0.. = Reset: Not set .... .... ..0. = Syn: Not set .... .... ...0 = Fin: Not set Window size value: 256 [Calculated window size: 32768] [Window size scaling factor: 128] Checksum: 0xff1c [validation disabled] [Good Checksum: False] [Bad Checksum: False] Options: (12 bytes) No-Operation (NOP) No-Operation (NOP) Timestamps: TSval 2304161892, TSecr 2304161891 Kind: Timestamp (8) Length: 10 Timestamp value: 2304161892 Timestamp echo reply: 2304161891 [SEQ/ACK analysis] [Bytes in flight: 244] Constrained Application Protocol, TID: 240, Length: 244 00.. .... = Version: 0 ..00 .... = Type: Confirmable (0) .... 0000 = Option Count: 0 Code: Unknown (0) Transaction ID: 240 Payload Content-Type: text/plain (default), Length: 240, offset: 4 Line-based text data: text/plain [truncated] \001ActiveMQ\000\000\000\t\001\000\000\000<DE>\000\000\000\t\000\fMaxFrameSize\006\177<FF><FF><FF><FF> <FF><FF><FF>\000\tCacheSize\005\000\000\004\000\000\fCacheEnabled\001\001\000\022SizePrefixDisabled\001\000\000 MaxInactivityDurationInitalDelay\006\ It is very likely a tcp port check. 
This is what I see when trying telnet from another host: 2013-11-05 16:12:41,071 | DEBUG | Transport Connection to: tcp://10.8.20.9:46775 failed: java.io.EOFException | org.apache.activemq.broker.TransportConnection.Transport | ActiveMQ Transport: tcp:///10.8.20.9:46775@61616 java.io.EOFException at java.io.DataInputStream.readInt(DataInputStream.java:375) at org.apache.activemq.openwire.OpenWireFormat.unmarshal(OpenWireFormat.java:275) at org.apache.activemq.transport.tcp.TcpTransport.readCommand(TcpTransport.java:229) at org.apache.activemq.transport.tcp.TcpTransport.doRun(TcpTransport.java:221) at org.apache.activemq.transport.tcp.TcpTransport.run(TcpTransport.java:204) at java.lang.Thread.run(Thread.java:662) 2013-11-05 16:12:41,071 | DEBUG | Transport Connection to: tcp://10.8.20.9:46775 failed: org.apache.activemq.transport.InactivityIOException: Cannot send, channel has already failed: tcp://10.8.20.9:46775 | org.apache.activemq.broker.TransportConnection.Transport | Async Exception Handler org.apache.activemq.transport.InactivityIOException: Cannot send, channel has already failed: tcp://10.8.20.9:46775 at org.apache.activemq.transport.AbstractInactivityMonitor.doOnewaySend(AbstractInactivityMonitor.java:282) at org.apache.activemq.transport.AbstractInactivityMonitor.oneway(AbstractInactivityMonitor.java:271) at org.apache.activemq.transport.TransportFilter.oneway(TransportFilter.java:85) at org.apache.activemq.transport.WireFormatNegotiator.oneway(WireFormatNegotiator.java:104) at org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68) at org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1312) at org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:838) at org.apache.activemq.broker.TransportConnection.iterate(TransportConnection.java:873) at org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:129) at org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:47) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) 2013-11-05 16:12:41,071 | DEBUG | Unregistering MBean org.apache.activemq:BrokerName=localhost,Type=Connection,ConnectorName=ope nwire,ViewType=address,Name=tcp_//10.8.20.9_46775 | org.apache.activemq.broker.jmx.ManagementContext | ActiveMQ Transport: tcp:/ //10.8.20.9:46775@61616 2013-11-05 16:12:41,073 | DEBUG | Stopping connection: tcp://10.8.20.9:46775 | org.apache.activemq.broker.TransportConnection | ActiveMQ BrokerService[localhost] Task-5 2013-11-05 16:12:41,073 | DEBUG | Stopping transport tcp:///10.8.20.9:46775@61616 | org.apache.activemq.transport.tcp.TcpTranspo rt | ActiveMQ BrokerService[localhost] Task-5 2013-11-05 16:12:41,073 | DEBUG | Initialized TaskRunnerFactory[ActiveMQ Task] using ExecutorService: java.util.concurrent.Threa dPoolExecutor@23cc2a28 | org.apache.activemq.thread.TaskRunnerFactory | ActiveMQ BrokerService[localhost] Task-5 2013-11-05 16:12:41,074 | DEBUG | Closed socket Socket[addr=/10.8.20.9,port=46775,localport=61616] | org.apache.activemq.transpo rt.tcp.TcpTransport | ActiveMQ Task-1 2013-11-05 16:12:41,074 | DEBUG | Forcing shutdown of ExecutorService: java.util.concurrent.ThreadPoolExecutor@23cc2a28 | org.apache.activemq.util.ThreadPoolUtils | ActiveMQ BrokerService[localhost] Task-5 2013-11-05 16:12:41,074 | DEBUG | Stopped transport: tcp://10.8.20.9:46775 | 
org.apache.activemq.broker.TransportConnection | ActiveMQ BrokerService[localhost] Task-5 2013-11-05 16:12:41,074 | DEBUG | Connection Stopped: tcp://10.8.20.9:46775 | org.apache.activemq.broker.TransportConnection | ActiveMQ BrokerService[localhost] Task-5 2013-11-05 16:12:41,902 | DEBUG | Sending: WireFormatInfo { version=9, properties={MaxFrameSize=9223372036854775807, CacheSize=1024, CacheEnabled=true, SizePrefixDisabled=false, MaxInactivityDurationInitalDelay=10000, TcpNoDelayEnabled=true, MaxInactivityDuration=30000, TightEncodingEnabled=true, StackTraceEnabled=true}, magic=[A,c,t,i,v,e,M,Q]} | org.apache.activemq.transport.WireFormatNegotiator | ActiveMQ BrokerService[localhost] Task-5 So the question is: how can I find out the process that is trying to connect to my ActiveMQ (from localhost) every 2 seconds?

    Read the article

  • Node.js Adventure - Host Node.js on Windows Azure Worker Role

    - by Shaun
    In my previous post I demonstrated about how to develop and deploy a Node.js application on Windows Azure Web Site (a.k.a. WAWS). WAWS is a new feature in Windows Azure platform. Since it’s low-cost, and it provides IIS and IISNode components so that we can host our Node.js application though Git, FTP and WebMatrix without any configuration and component installation. But sometimes we need to use the Windows Azure Cloud Service (a.k.a. WACS) and host our Node.js on worker role. Below are some benefits of using worker role. - WAWS leverages IIS and IISNode to host Node.js application, which runs in x86 WOW mode. It reduces the performance comparing with x64 in some cases. - WACS worker role does not need IIS, hence there’s no restriction of IIS, such as 8000 concurrent requests limitation. - WACS provides more flexibility and controls to the developers. For example, we can RDP to the virtual machines of our worker role instances. - WACS provides the service configuration features which can be changed when the role is running. - WACS provides more scaling capability than WAWS. In WAWS we can have at most 3 reserved instances per web site while in WACS we can have up to 20 instances in a subscription. - Since when using WACS worker role we starts the node by ourselves in a process, we can control the input, output and error stream. We can also control the version of Node.js.   Run Node.js in Worker Role Node.js can be started by just having its execution file. This means in Windows Azure, we can have a worker role with the “node.exe” and the Node.js source files, then start it in Run method of the worker role entry class. Let’s create a new windows azure project in Visual Studio and add a new worker role. Since we need our worker role execute the “node.exe” with our application code we need to add the “node.exe” into our project. Right click on the worker role project and add an existing item. By default the Node.js will be installed in the “Program Files\nodejs” folder so we can navigate there and add the “node.exe”. Then we need to create the entry code of Node.js. In WAWS the entry file must be named “server.js”, which is because it’s hosted by IIS and IISNode and IISNode only accept “server.js”. But here as we control everything we can choose any files as the entry code. For example, I created a new JavaScript file named “index.js” in project root. Since we created a C# Windows Azure project we cannot create a JavaScript file from the context menu “Add new item”. We have to create a text file, and then rename it to JavaScript extension. After we added these two files we should set their “Copy to Output Directory” property to “Copy Always”, or “Copy if Newer”. Otherwise they will not be involved in the package when deployed. Let’s paste a very simple Node.js code in the “index.js” as below. As you can see I created a web server listening at port 12345. 1: var http = require("http"); 2: var port = 12345; 3:  4: http.createServer(function (req, res) { 5: res.writeHead(200, { "Content-Type": "text/plain" }); 6: res.end("Hello World\n"); 7: }).listen(port); 8:  9: console.log("Server running at port %d", port); Then we need to start “node.exe” with this file when our worker role was started. This can be done in its Run method. I found the Node.js and entry JavaScript file name, and then create a new process to run it. Our worker role will wait for the process to be exited. 
If everything is OK once our web server was opened the process will be there listening for incoming requests, and should not be terminated. The code in worker role would be like this. 1: public override void Run() 2: { 3: // This is a sample worker implementation. Replace with your logic. 4: Trace.WriteLine("NodejsHost entry point called", "Information"); 5:  6: // retrieve the node.exe and entry node.js source code file name. 7: var node = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot\node.exe"); 8: var js = "index.js"; 9:  10: // prepare the process starting of node.exe 11: var info = new ProcessStartInfo(node, js) 12: { 13: CreateNoWindow = false, 14: ErrorDialog = true, 15: WindowStyle = ProcessWindowStyle.Normal, 16: UseShellExecute = false, 17: WorkingDirectory = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot") 18: }; 19: Trace.WriteLine(string.Format("{0} {1}", node, js), "Information"); 20:  21: // start the node.exe with entry code and wait for exit 22: var process = Process.Start(info); 23: process.WaitForExit(); 24: } Then we can run it locally. In the computer emulator UI the worker role started and it executed the Node.js, then Node.js windows appeared. Open the browser to verify the website hosted by our worker role. Next let’s deploy it to azure. But we need some additional steps. First, we need to create an input endpoint. By default there’s no endpoint defined in a worker role. So we will open the role property window in Visual Studio, create a new input TCP endpoint to the port we want our website to use. In this case I will use 80. Even though we created a web server we should add a TCP endpoint of the worker role, since Node.js always listen on TCP instead of HTTP. And then changed the “index.js”, let our web server listen on 80. 1: var http = require("http"); 2: var port = 80; 3:  4: http.createServer(function (req, res) { 5: res.writeHead(200, { "Content-Type": "text/plain" }); 6: res.end("Hello World\n"); 7: }).listen(port); 8:  9: console.log("Server running at port %d", port); Then publish it to Windows Azure. And then in browser we can see our Node.js website was running on WACS worker role. We may encounter an error if we tried to run our Node.js website on 80 port at local emulator. This is because the compute emulator registered 80 and map the 80 endpoint to 81. But our Node.js cannot detect this operation. So when it tried to listen on 80 it will failed since 80 have been used.   Use NPM Modules When we are using WAWS to host Node.js, we can simply install modules we need, and then just publish or upload all files to WAWS. But if we are using WACS worker role, we have to do some extra steps to make the modules work. Assuming that we plan to use “express” in our application. Firstly of all we should download and install this module through NPM command. But after the install finished, they are just in the disk but not included in the worker role project. If we deploy the worker role right now the module will not be packaged and uploaded to azure. Hence we need to add them to the project. On solution explorer window click the “Show all files” button, select the “node_modules” folder and in the context menu select “Include In Project”. But that not enough. We also need to make all files in this module to “Copy always” or “Copy if newer”, so that they can be uploaded to azure with the “node.exe” and “index.js”. This is painful step since there might be many files in a module. 
So I created a small tool which can update a C# project file, make its all items as “Copy always”. The code is very simple. 1: static void Main(string[] args) 2: { 3: if (args.Length < 1) 4: { 5: Console.WriteLine("Usage: copyallalways [project file]"); 6: return; 7: } 8:  9: var proj = args[0]; 10: File.Copy(proj, string.Format("{0}.bak", proj)); 11:  12: var xml = new XmlDocument(); 13: xml.Load(proj); 14: var nsManager = new XmlNamespaceManager(xml.NameTable); 15: nsManager.AddNamespace("pf", "http://schemas.microsoft.com/developer/msbuild/2003"); 16:  17: // add the output setting to copy always 18: var contentNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:Content", nsManager); 19: UpdateNodes(contentNodes, xml, nsManager); 20: var noneNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:None", nsManager); 21: UpdateNodes(noneNodes, xml, nsManager); 22: xml.Save(proj); 23:  24: // remove the namespace attributes 25: var content = xml.InnerXml.Replace("<CopyToOutputDirectory xmlns=\"\">", "<CopyToOutputDirectory>"); 26: xml.LoadXml(content); 27: xml.Save(proj); 28: } 29:  30: static void UpdateNodes(XmlNodeList nodes, XmlDocument xml, XmlNamespaceManager nsManager) 31: { 32: foreach (XmlNode node in nodes) 33: { 34: var copyToOutputDirectoryNode = node.SelectSingleNode("pf:CopyToOutputDirectory", nsManager); 35: if (copyToOutputDirectoryNode == null) 36: { 37: var n = xml.CreateNode(XmlNodeType.Element, "CopyToOutputDirectory", null); 38: n.InnerText = "Always"; 39: node.AppendChild(n); 40: } 41: else 42: { 43: if (string.Compare(copyToOutputDirectoryNode.InnerText, "Always", true) != 0) 44: { 45: copyToOutputDirectoryNode.InnerText = "Always"; 46: } 47: } 48: } 49: } Please be careful when use this tool. I created only for demo so do not use it directly in a production environment. Unload the worker role project, execute this tool with the worker role project file name as the command line argument, it will set all items as “Copy always”. Then reload this worker role project. Now let’s change the “index.js” to use express. 1: var express = require("express"); 2: var app = express(); 3:  4: var port = 80; 5:  6: app.configure(function () { 7: }); 8:  9: app.get("/", function (req, res) { 10: res.send("Hello Node.js!"); 11: }); 12:  13: app.get("/User/:id", function (req, res) { 14: var id = req.params.id; 15: res.json({ 16: "id": id, 17: "name": "user " + id, 18: "company": "IGT" 19: }); 20: }); 21:  22: app.listen(port); Finally let’s publish it and have a look in browser.   Use Windows Azure SQL Database We can use Windows Azure SQL Database (a.k.a. WACD) from Node.js as well on worker role hosting. Since we can control the version of Node.js, here we can use x64 version of “node-sqlserver” now. This is better than if we host Node.js on WAWS since it only support x86. Just install the “node-sqlserver” module from NPM, copy the “sqlserver.node” from “Build\Release” folder to “Lib” folder. Include them in worker role project and run my tool to make them to “Copy always”. Finally update the “index.js” to use WASD. 
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:{SERVER NAME}.database.windows.net,1433;Database={DATABASE NAME};Uid={LOGIN}@{SERVER NAME};Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Publish to azure and now we can see our Node.js is working with WASD through x64 version “node-sqlserver”.   Summary In this post I demonstrated how to host our Node.js in Windows Azure Cloud Service worker role. By using worker role we can control the version of Node.js, as well as the entry code. And it’s possible to do some pre jobs before the Node.js application started. It also removed the IIS and IISNode limitation. I personally recommended to use worker role as our Node.js hosting. But there are some problem if you use the approach I mentioned here. The first one is, we need to set all JavaScript files and module files as “Copy always” or “Copy if newer” manually. The second one is, in this way we cannot retrieve the cloud service configuration information. 
For example, we defined the endpoint in the worker role properties, but we also hardcoded the listening port in Node.js. Ideally Node.js should retrieve the endpoint itself, but I can tell you that won't work with the approach shown here (a rough sketch of one possible workaround follows this post). In the next post I will describe another way to execute "node.exe" and the Node.js application, so that we can get the cloud service configuration in Node.js. I will also demonstrate how to use Windows Azure Storage from Node.js by using the Windows Azure Node.js SDK.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
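A rough sketch of the JavaScript side of such a workaround, assuming the worker role were changed to pass the endpoint's port to "node.exe" as a command line argument (for example by appending it to the ProcessStartInfo arguments): the script reads the port from process.argv and falls back to 80 when no argument is supplied. This is only an illustration, not the approach used in this series.

// index.js - hypothetical variant that takes the port as a command line argument,
// e.g. "node.exe index.js 8080". The argument position is an assumption made for
// illustration; it is not part of the original post.
var http = require("http");

// process.argv[0] is the node executable and process.argv[1] is the script path,
// so the first real argument is at index 2.
var port = parseInt(process.argv[2], 10) || 80;

http.createServer(function (req, res) {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("Hello World\n");
}).listen(port);

console.log("Server running at port %d", port);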

    Read the article

  • iPhone SDK vs. Windows Phone 7 Series SDK Challenge, Part 2: MoveMe

    In this series, I will be taking sample applications from the iPhone SDK and implementing them on Windows Phone 7 Series.  My goal is to do as much of an apples-to-apples comparison as I can.  This series will be written to not only compare and contrast how easy or difficult it is to complete tasks on either platform, how many lines of code, etc., but Id also like it to be a way for iPhone developers to either get started on Windows Phone 7 Series development, or for developers in general to learn the platform. Heres my methodology: Run the iPhone SDK app in the iPhone Simulator to get a feel for what it does and how it works, without looking at the implementation Implement the equivalent functionality on Windows Phone 7 Series using Silverlight. Compare the two implementations based on complexity, functionality, lines of code, number of files, etc. Add some functionality to the Windows Phone 7 Series app that shows off a way to make the scenario more interesting or leverages an aspect of the platform, or uses a better design pattern to implement the functionality. You can download Microsoft Visual Studio 2010 Express for Windows Phone CTP here, and the Expression Blend 4 Beta here. If youre seeing this series for the first time, check out Part 1: Hello World. A note on methodologyin the prior post there was some feedback about lines of code not being a very good metric for this exercise.  I dont really disagree, theres a lot more to this than lines of code but I believe that is a relevant metric, even if its not the ultimate one.  And theres no perfect answer here.  So I am going to continue to report the number of lines of code that I, as a developer would need to write in these apps as a data point, and Ill leave it up to the reader to determine how that fits in with overall complexity, etc.  The first example was so basic that I think it was difficult to talk about in real terms.  I think that as these apps get more complex, the subjective differences in concept count and will be more important.  MoveMe The MoveMe app is the main end-to-end app writing example in the iPhone SDK, called Creating an iPhone Application.  This application demonstrates a few concepts, including handling touch input, how to do animations, and how to do some basic transforms. The behavior of the application is pretty simple.  User touches the button: The button does a throb type animation where it scales up and then back down briefly. User drags the button: After a touch begins, moving the touch point will drag the button around with the touch. User lets go of the button: The button animates back to its original position, but does a few small bounces as it reaches its original point, which makes the app fun and gives it an extra bit of interactivity. Now, how would I write an app that meets this spec for Windows Phone 7 Series, and how hard would it be?  Lets find out!     Implementing the UI Okay, lets build the UI for this application.  In the HelloWorld example, we did all the UI design in Visual Studio and/or by hand in XAML.  In this example, were going to use the Expression Blend 4 Beta. You might be wondering when to use Visual Studio, when to use Blend, and when to do XAML by hand.  Different people will have different takes on this, but heres mine: XAML by hand simple UI that doesnt contain animations, gradients, etc., and or UI that I want to really optimize and craft when I know exactly what I want to do. Visual Studio Basic UI layout, property setting, data binding, etc. 
Blend Any serious design work needs to be done in Blend, including animations, handling states and transitions, styling and templating, editing resources. As in Part 1, go ahead and fire up Visual Studio 2010 Express for Windows Phone (yes, soon it will take longer to say the name of our products than to start them up!), and create a new Windows Phone Application.  As in Part 1, clear out the XAML from the designer.  An easy way to do this is to just: Click on the design surface Hit Control+A Hit Delete Theres a little bit left over (the Grid.RowDefinitions element), just go ahead and delete that element so were starting with a clean state of only one outer Grid element. To use Blend, we need to save this project.  See, when you create a project with Visual Studio Express, it doesnt commit it to the disk (well, in a place where you can find it, at least) until you actually save the project.  This is handy if youre doing some fooling around, because it doesnt clutter your disk with WindowsPhoneApplication23-like directories.  But its also kind of dangerous, since when you close VS, if you dont save the projectits all gone.  Yes, this has bitten me since I was saving files and didnt remember that, so be careful to save the project/solution via Save All, at least once. So, save and note the location on disk.  Start Expression Blend 4 Beta, and chose File > Open Project/Solution, and load your project.  You should see just about the same thing you saw over in VS: a blank, black designer surface. Now, thinking about this application, we dont really need a button, even though it looks like one.  We never click it.  So were just going to create a visual and use that.  This is also true in the iPhone example above, where the visual is actually not a button either but a jpg image with a nice gradient and round edges.  Well do something simple here that looks pretty good. In Blend, look in the tool pane on the left for the icon that looks like the below (the highlighted one on the left), and hold it down to get the popout menu, and choose Border:    Okay, now draw out a box in the middle of the design surface of about 300x100.  The Properties Pane to the left should show the properties for this item. First, lets make it more visible by giving it a border brush.  Set the BorderBrush to white by clicking BorderBrush and dragging the color selector all the way to the upper right in the palette.  Then, down a bit farther, make the BorderThickness 4 all the way around, and the CornerRadius set to 6. In the Layout section, do the following to Width, Height, Horizontal and Vertical Alignment, and Margin (all 4 margin values): Youll see the outline now is in the middle of the design surface.  Now lets give it a background color.  Above BorderBrush select Background, and click the third tab over: Gradient Brush.  Youll see a gradient slider at the bottom, and if you click the markers, you can edit the gradient stops individually (or add more).  In this case, you can select something you like, but wheres what I chose: Left stop: #BFACCFE2 (I just picked a spot on the palette and set opacity to 75%, no magic here, feel free to fiddle these or just enter these numbers into the hex area and be done with it) Right stop: #FF3E738F Okay, looks pretty good.  Finally set the name of the element in the Name field at the top of the Properties pane to welcome. Now lets add some text.  
Just hit T and itll select the TextBlock tool automatically: Now draw out some are inside our welcome visual and type Welcome!, then click on the design surface (to exit text entry mode) and hit V to go back into selection mode (or the top item in the tool pane that looks like a mouse pointer).  Click on the text again to select it in the tool pane.  Just like the border, we want to center this.  So set HorizontalAlignment and VerticalAlignment to Center, and clear the Margins: Thats it for the UI.  Heres how it looks, on the design surface: Not bad!  Okay, now the fun part Adding Animations Using Blend to build animations is a lot of fun, and its easy.  In XAML, I can not only declare elements and visuals, but also I can declare animations that will affect those visuals.  These are called Storyboards. To recap, well be doing two animations: The throb animation when the element is touched The center animation when the element is released after being dragged. The throb animation is just a scale transform, so well do that first.  In the Objects and Timeline Pane (left side, bottom half), click the little + icon to add a new Storyboard called touchStoryboard: The timeline view will appear.  In there, click a bit to the right of 0 to create a keyframe at .2 seconds: Now, click on our welcome element (the Border, not the TextBlock in it), and scroll to the bottom of the Properties Pane.  Open up Transform, click the third tab ("Scale), and set X and Y to 1.2: This all of this says that, at .2 seconds, I want the X and Y size of this element to scale to 1.2. In fact you can see this happen.  Push the Play arrow in the timeline view, and youll see the animation run! Lets make two tweaks.  First, we want the animation to automatically reverse so it scales up then back down nicely. Click in the dropdown that says touchStoryboard in Objects and Timeline, then in the Properties pane check Auto Reverse: Now run it again, and youll see it go both ways. Lets even make it nicer by adding an easing function. First, click on the Render Transform item in the Objects tree, then, in the Property Pane, youll see a bunch of easing functions to choose from.  Feel free to play with this, then seeing how each runs.  I chose Circle In, but some other ones are fun.  Try them out!  Elastic In is kind of fun, but well stick with Circle In.  Thats it for that animation. Now, we also want an animation to move the Border back to its original position when the user ends the touch gesture.  This is exactly the same process as above, but just targeting a different transform property. Create a new animation called releaseStoryboard Select a timeline point at 1.2 seconds. Click on the welcome Border element again Scroll to the Transforms panel at the bottom of the Properties Pane Choose the first tab (Translate), which may already be selected Set both X and Y values to 0.0 (we do this just to make the values stick, because the value is already 0 and we need Blend to know we want to save that value) Click on RenderTransform in the Objects tree In the properties pane, choose Bounce Out Set Bounces to 6, and Bounciness to 4 (feel free to play with these as well) Okay, were done. Note, if you want to test this Storyboard, you have to do something a little tricky because the final value is the same as the initial value, so playing it does nothing.  
If you want to play with it, do the following: Next to the selection dropdown, hit the little "x" (Close Storyboard). Go to the Translate Transform value for welcome. Set X,Y to 50, 200, respectively (or whatever). Select releaseStoryboard again from the dropdown. Hit play, see it run. Go into the object tree and select RenderTransform to change the easing function. When you're done, hit the Close Storyboard "x" again and set the values in Transform/Translate back to 0.   Wiring Up the Animations Okay, now go back to Visual Studio. You'll get a prompt due to the modification of MainPage.xaml. Hit Yes. In the designer, click on the welcome Border element. In the Property Browser, hit the Events button, then double click each of ManipulationStarted, ManipulationDelta, ManipulationCompleted. You'll need to flip back to the designer from code after each double click. It's code time. Here we go. Here, three event handlers have been created for us: welcome_ManipulationStarted: This will execute when a manipulation begins. Think of it as MouseDown. welcome_ManipulationDelta: This executes each time a manipulation changes. Think MouseMove. welcome_ManipulationCompleted: This will execute when the manipulation ends. Think MouseUp. Now, in ManipulationStarted, we want to kick off the throb animation that we called touchStoryboard. That's easy: 1: private void welcome_ManipulationStarted(object sender, ManipulationStartedEventArgs e) 2: { 3: touchStoryboard.Begin(); 4: } Likewise, when the manipulation completes, we want to re-center the welcome visual with our bounce animation: 1: private void welcome_ManipulationCompleted(object sender, ManipulationCompletedEventArgs e) 2: { 3: releaseStoryboard.Begin(); 4: } Note there is actually a way to kick off these animations from Blend directly via something called Triggers, but I think it's clearer to show what's going on like this. A Trigger basically allows you to say "When this event fires, trigger this Storyboard", so it's the exact same logical process as above, but without the code. But how do we get the object to move? Well, for that we really don't want an animation, because we want it to respond immediately to user input.
We do this by directly modifying the transform to match the offset for the manipulation, and then we'll let the animation bring it back to zero when the manipulation completes. The manipulation events do a great job of keeping track of all the stuff that you usually had to do yourself when doing drags: where you started from, how far you've moved, etc. So we can easily modify the position as below: 1: private void welcome_ManipulationDelta(object sender, ManipulationDeltaEventArgs e) 2: { 3: CompositeTransform transform = (CompositeTransform)welcome.RenderTransform; 4:   5: transform.TranslateX = e.CumulativeManipulation.Translation.X; 6: transform.TranslateY = e.CumulativeManipulation.Translation.Y; 7: } That's it! Go ahead and run the app in the emulator. I suggest running without the debugger, it's a little faster (CTRL+F5). If you've got a machine that supports DirectX 10, you'll see nice smooth GPU accelerated graphics, which is also what it looks like on the phone, running at about 60 frames per second. If your machine does not support DX10 (like the laptop I'm writing this on!), it won't be quite as smooth, so you'll have to take my word for it! Comparing Against the iPhone This is an example where the flexibility and power of XAML meets the tooling of Visual Studio and Blend, and the whole experience really shines. So, for several things that are declarative and 100% toolable with the Windows Phone 7 Series, this example does them with code on the iPhone. In parentheses are the lines of code that I count to do these operations. PlacardView.m: 19 total LOC Creating the view that hosts the button-like image and the text Drawing the image that is the background of the button Drawing the Welcome text over the image (I think you could technically do this step and/or the prior one using Interface Builder) MoveMeView.m: 63 total LOC Constructing and running the scale (throb) animation (25) Constructing the path describing the animation back to center plus bounce effect (38) Beyond the code count, my experience with doing this kind of thing in code is that it's VERY time intensive. When I was a developer back on Windows Forms, doing GDI+ drawing, we did this stuff a lot, and it took forever! You write some code, and even once you get it basically working, you see it's not quite right, you go back, tweak the interval or the math a bit, run it again, etc. You can take a look at the iPhone code here to judge for yourself. Scroll down to animatePlacardViewToCenter toward the bottom. I don't think this code is terribly complicated, but it's not what I'd call simple, and it's not at all simple to get right. And then there are a few other lines of code running around for setting up the ViewController and the Views, about 15 lines between MoveMeAppDelegate, PlacardView, and MoveMeView, plus the assorted decls in the .h files.
Adding those up, I conservatively get something like 100 lines of code (19+63+15+decls) on iPhone that I have to write, by hand, to make this project work. The lines of code that I wrote in the examples above total 5 lines of code on Windows Phone 7 Series. In terms of incremental concept counts beyond the HelloWorld app, here's a shot at that: iPhone: Drawing Images Drawing Text Handling touch events Creating animations Scaling animations Building a path and animating along that Windows Phone 7 Series: Laying out UI in Blend Creating & testing basic animations in Blend Handling touch events Invoking animations from code This was actually the first example I tried converting, even before I did the HelloWorld, and I was pretty surprised. Some of this is luck that this app happens to match up with the Windows Phone 7 Series platform just perfectly. In terms of time, I wrote the above application, from scratch, in about 10 minutes. I don't know how long it would take a very skilled iPhone developer to write MoveMe on the iPhone from scratch, but if I was to write it in Silverlight in the same way (e.g. all via code), I think it would likely take me at least an hour or two to get it all working right, maybe more if I ended up picking the wrong strategy or couldn't get the math right, etc. Making Some Tweaks Silverlight contains a feature called Projections to do a variety of 3D-like effects with a 2D surface. So let's play with that a bit. Go back to Blend and select the welcome Border in the object tree. In its properties, scroll down to the bottom, open Transform, and see Projection at the bottom. Set X,Y,Z to 90. You'll see the element kind of disappear, replaced by a thin blue line. Now create a new animation called startupStoryboard. Set its key time to .5 seconds in the timeline view. Set the projection values above to 0 for X, Y, and Z. Save. Go back to Visual Studio, and in the constructor, add the following bold code (lines 7-9) to the constructor: 1: public MainPage() 2: { 3: InitializeComponent(); 4:   5: SupportedOrientations = SupportedPageOrientation.Portrait; 6:   7: this.Loaded += (s, e) => 8: { 9: startupStoryboard.Begin(); 10: }; 11: } If the code above looks funny, it's using something called a lambda in C#, which is an inline anonymous method. It's just a handy shorthand for creating a handler like the manipulation ones above. So with this you'll get a nice 3D-looking fly-in effect when the app starts up. Here it is, in flight: Pretty cool!

    Read the article

  • FreeBSD performance tuning. Sysctls, loader.conf, kernel.

    - by SaveTheRbtz
    I wanted to share knowledge of tuning FreeBSD via sysctls, so i'm posting them with comments. Based on Igor Sysoev (author of nginx) presentation about FreeBSD tuning up to 100,000-200,000 active connections. Sysctls are for 7.x FreeBSD. Since 7.2 amd64 some of them are tuned well by default. Prior 7.0 some of them are boot only (set via /boot/loader.conf) or does not exist at all. Highload web server sysctls: # Max. backlog size kern.ipc.somaxconn=4096 # Shared memory // 7.2+ can use shared memory > 2Gb kern.ipc.shmmax=2147483648 # Sockets kern.ipc.maxsockets=204800 # Do not use lager sockbufs on 8.0 # ( http://old.nabble.com/Significant-performance-regression-for-increased-maxsockbuf-on-8.0-RELEASE-tt26745981.html#a26745981 ) kern.ipc.maxsockbuf=262144 # Recive clusters (on amd64 7.2+ 65k is default) # For such high value vm.kmem_size must be increased to 3G #kern.ipc.nmbclusters=229376 # Jumbo pagesize(4k/8k) clusters # Used as general packet storage for jumbo frames # can be monitored via `netstat -m` #kern.ipc.nmbjumbop=192000 # Jumbo 9k/16k clusters # If you are using them #kern.ipc.nmbjumbo9=24000 #kern.ipc.nmbjumbo16=10240 # Every socket is a file, so increase them kern.maxfiles=204800 kern.maxfilesperproc=200000 kern.maxvnodes=200000 # Turn off receive autotuning #net.inet.tcp.recvbuf_auto=0 # Small receive space, only usable on http-server, on file server this # should be increased to 65535 or even more #net.inet.tcp.recvspace=8192 # Small send space is useful for http servers that serve small files # Autotuned since 7.x net.inet.tcp.sendspace=16384 # This should be enabled if you going to use big spaces (>64k) #net.inet.tcp.rfc1323=1 # Turn this off on highspeed, lossless connections (LAN 1Gbit+) #net.inet.tcp.delayed_ack=0 # This feature is useful if you are serving data over modems, Gigabit Ethernet, # or even high speed WAN links (or any other link with a high bandwidth delay product), # especially if you are also using window scaling or have configured a large send window. # You can try setting it to 0 on fileserver with 1GBit+ interfaces # Automatically disables on small RTT ( http://www.freebsd.org/cgi/cvsweb.cgi/src/sys/netinet/tcp_subr.c?#rev1.237 ) #net.inet.tcp.inflight.enable=0 # Disable randomizing of ports to avoid false RST # Before usage check SA here www.bsdcan.org/2006/papers/ImprovingTCPIP.pdf # (it's also says that port randomization auto-disables at some conn.rates, but I didn't tested it thou) #net.inet.ip.portrange.randomized=0 # Increase portrange # For outgoing connections only. Good for seed-boxes and ftp servers. net.inet.ip.portrange.first=1024 net.inet.ip.portrange.last=65535 # Security net.inet.ip.redirect=0 net.inet.ip.sourceroute=0 net.inet.ip.accept_sourceroute=0 net.inet.icmp.maskrepl=0 net.inet.icmp.log_redirect=0 net.inet.icmp.drop_redirect=1 net.inet.tcp.drop_synfin=1 # Security net.inet.udp.blackhole=1 net.inet.tcp.blackhole=2 # Increases default TTL, sometimes useful # Default is 64 net.inet.ip.ttl=128 # Lessen max segment life to conserve resources # ACK waiting time in miliseconds (default: 30000 from RFC) net.inet.tcp.msl=5000 # Max bumber of timewait sockets net.inet.tcp.maxtcptw=40960 # Don't use tw on local connections # As of 15 Apr 2009. Igor Sysoev says that nolocaltimewait has some buggy realization. 
# So disable it or now till get fixed #net.inet.tcp.nolocaltimewait=1 # FIN_WAIT_2 state fast recycle net.inet.tcp.fast_finwait2_recycle=1 # Time before tcp keepalive probe is sent # default is 2 hours (7200000) #net.inet.tcp.keepidle=60000 # Should be increased until net.inet.ip.intr_queue_drops is zero net.inet.ip.intr_queue_maxlen=4096 # Interrupt handling via multiple CPU, but with context switch. # You can play with it. Default is 1; #net.isr.direct=0 # This is for routers only #net.inet.ip.forwarding=1 #net.inet.ip.fastforwarding=1 # This speed ups dummynet when channel isn't saturated net.inet.ip.dummynet.io_fast=1 # Increase dummynet(4) hash #net.inet.ip.dummynet.hash_size=2048 #net.inet.ip.dummynet.max_chain_len # Should be increased when you have A LOT of files on server # (Increase until vfs.ufs.dirhash_mem becames lower) vfs.ufs.dirhash_maxmem=67108864 # Explicit Congestion Notification (see http://en.wikipedia.org/wiki/Explicit_Congestion_Notification) net.inet.tcp.ecn.enable=1 # Flowtable - flow caching mechanism # Useful for routers #net.inet.flowtable.enable=1 #net.inet.flowtable.nmbflows=65535 # Extreme polling tuning #kern.polling.burst_max=1000 #kern.polling.each_burst=1000 #kern.polling.reg_frac=100 #kern.polling.user_frac=1 #kern.polling.idle_poll=0 # IPFW dynamic rules and timeouts tuning # Increase dyn_buckets till net.inet.ip.fw.curr_dyn_buckets is lower net.inet.ip.fw.dyn_buckets=65536 net.inet.ip.fw.dyn_max=65536 net.inet.ip.fw.dyn_ack_lifetime=120 net.inet.ip.fw.dyn_syn_lifetime=10 net.inet.ip.fw.dyn_fin_lifetime=2 net.inet.ip.fw.dyn_short_lifetime=10 # Make packets pass firewall only once when using dummynet # i.e. packets going thru pipe are passing out from firewall with accept #net.inet.ip.fw.one_pass=1 # shm_use_phys Wires all shared pages, making them unswappable # Use this to lessen Virtual Memory Manager's work when using Shared Mem. # Useful for databases #kern.ipc.shm_use_phys=1 /boot/loader.conf: # Accept filters for data, http and DNS requests # Usefull when your software uses select() instead of kevent/kqueue or when you under DDoS # DNS accf available on 8.0+ accf_data_load="YES" accf_http_load="YES" accf_dns_load="YES" # Async IO system calls aio_load="YES" # Adds NCQ support in FreeBSD # WARNING! all ad[0-9]+ devices will be renamed to ada[0-9]+ # 8.0+ only #ahci_load= #siis_load= # Increase kernel memory size to 3G. # # Use ONLY if you have KVA_PAGES in kernel configuration, and you have more than 3G RAM # Otherwise panic will happen on next reboot! # # It's required for high buffer sizes: kern.ipc.nmbjumbop, kern.ipc.nmbclusters, etc # Useful on highload stateful firewalls, proxies or ZFS fileservers # (FreeBSD 7.2+ amd64 users: Check that current value is lower!) #vm.kmem_size="3G" # Older versions of FreeBSD can't tune maxfiles on the fly #kern.maxfiles="200000" # Useful for databases # Sets maximum data size to 1G # (FreeBSD 7.2+ amd64 users: Check that current value is lower!) 
#kern.maxdsiz="1G" # Maximum buffer size(vfs.maxbufspace) # You can check current one via vfs.bufspace # Should be lowered/upped depending on server's load-type # Usually decreased to preserve kmem # (default is 200M) #kern.maxbcache="512M" # Sendfile buffers # For i386 only #kern.ipc.nsfbufs=10240 # syncache Hash table tuning net.inet.tcp.syncache.hashsize=1024 net.inet.tcp.syncache.bucketlimit=100 # Incresed hostcache net.inet.tcp.hostcache.hashsize="16384" net.inet.tcp.hostcache.bucketlimit="100" # TCP control-block Hash table tuning net.inet.tcp.tcbhashsize=4096 # Enable superpages, for 7.2+ only # Also read http://lists.freebsd.org/pipermail/freebsd-hackers/2009-November/030094.html vm.pmap.pg_ps_enabled=1 # Usefull if you are using Intel-Gigabit NIC #hw.em.rxd=4096 #hw.em.txd=4096 #hw.em.rx_process_limit="-1" # Also if you have ALOT interrupts on NIC - play with following parameters # NOTE: You should set them for every NIC #dev.em.0.rx_int_delay: 250 #dev.em.0.tx_int_delay: 250 #dev.em.0.rx_abs_int_delay: 250 #dev.em.0.tx_abs_int_delay: 250 # There is also multithreaded version of em drivers can be found here: # http://people.yandex-team.ru/~wawa/ # # for additional em monitoring and statistics use # `sysctl dev.em.0.stats=1 ; dmesg` # #Same tunings for igb #hw.igb.rxd=4096 #hw.igb.txd=4096 #hw.igb.rx_process_limit=100 # Some useful netisr tunables. See sysctl net.isr #net.isr.defaultqlimit=4096 #net.isr.maxqlimit: 10240 # Bind netisr threads to CPUs #net.isr.bindthreads=1 # Nicer boot logo =) loader_logo="beastie" And finally here is my additions to GENERIC kernel # Just some of them, see also # cat /sys/{i386,amd64,}/conf/NOTES # This one useful only on i386 #options KVA_PAGES=512 # You can play with HZ in environments with high interrupt rate (default is 1000) # 100 is for my notebook to prolong it's battery life #options HZ=100 # Polling is goot on network loads with high packet rates and low-end NICs # NB! Do not enable it if you want more than one netisr thread #options DEVICE_POLLING # Eliminate datacopy on socket read-write # To take advantage with zero copy sockets you should have an MTU of 8K(amd64) # (4k for i386). This req. is only for receiving data. # Read more in man zero_copy_sockets #options ZERO_COPY_SOCKETS # Support TCP sign. Used for IPSec options TCP_SIGNATURE options IPSEC # This ones can be loaded as modules. They described in loader.conf section #options ACCEPT_FILTER_DATA #options ACCEPT_FILTER_HTTP # Adding ipfw, also can be loaded as modules options IPFIREWALL options IPFIREWALL_VERBOSE options IPFIREWALL_VERBOSE_LIMIT=10 options IPFIREWALL_DEFAULT_TO_ACCEPT options IPFIREWALL_FORWARD # Adding kernel NAT options IPFIREWALL_NAT options LIBALIAS # Traffic shaping options DUMMYNET # Divert, i.e. 
for userspace NAT options IPDIVERT # This is for OpenBSD's pf firewall device pf device pflog # pf's QoS - ALTQ options ALTQ options ALTQ_CBQ # Class Bases Queuing (CBQ) options ALTQ_RED # Random Early Detection (RED) options ALTQ_RIO # RED In/Out options ALTQ_HFSC # Hierarchical Packet Scheduler (HFSC) options ALTQ_PRIQ # Priority Queuing (PRIQ) options ALTQ_NOPCC # Required for SMP build # Pretty console # Manual can be found here http://forums.freebsd.org/showthread.php?t=6134 #options VESA #options SC_PIXEL_MODE # Disable reboot on Ctrl Alt Del #options SC_DISABLE_REBOOT # Change normal|kernel messages color options SC_NORM_ATTR=(FG_GREEN|BG_BLACK) options SC_KERNEL_CONS_ATTR=(FG_YELLOW|BG_BLACK) # More scroll space options SC_HISTORY_SIZE=8192 # Adding hardware crypto device device crypto device cryptodev # Useful network interfaces device vlan device tap #Virtual Ethernet driver device gre #IP over IP tunneling device if_bridge #Bridge interface device pfsync #synchronization interface for PF device carp #Common Address Redundancy Protocol device enc #IPsec interface device lagg #Link aggregation interface device stf #IPv4-IPv6 port # Also for my notebook, but may be used with Opteron #device amdtemp # Support for ECMP. More than one route for destination # Works even with default route so one can use it as LB for two ISP # For now code is unstable and panics (panic: rtfree 2) on route deletions. #options RADIX_MPATH # Multicast routing #options MROUTING #options PIM # DTrace options KDTRACE_HOOKS # all architectures - enable general DTrace hooks options DDB_CTF # all architectures - kernel ELF linker loads CTF data #options KDTRACE_FRAME # amd64-only # Adaptive spining in lockmgr (8.x+) # See http://www.mail-archive.com/[email protected]/msg10782.html options ADAPTIVE_LOCKMGRS # UTF-8 in console (9.x+) #options TEKEN_UTF8 #options TEKEN_XTERM # NCQ support # WARNING! all ad[0-9]+ devices will be renamed to ada[0-9]+ #options ATA_CAM # FreeBSD 9+ # Deadlock resolver thread # For additional information see http://www.mail-archive.com/[email protected]/msg18124.html #options DEADLKRES PS. Also most of FreeBSD's limits can be monitored by # vmstat -z and # limits PPS. variety of network counters can be monitored via # netstat -s In FreeBSD-9 netstat's -Q option appeared, try following command to display netisr stats # netstat -Q PPPS. also see # man 7 tuning PPPPS. I wanted to thank FreeBSD community, especially author of nginx - Igor Sysoev, nginx-ru@ and FreeBSD-performance@ mailing lists for providing useful information about FreeBSD tuning. So here is the question: What tunings are you using on yours FreeBSD servers? You can also post your /etc/sysctl.conf, /boot/loader.conf, kernel options, etc with description of its' meaning (do not copy-paste from sysctl -d). Don't forget to specify server type (web, smb, gateway, etc) Let's share experience!

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services from Node.js is available through the Windows Azure Node.js SDK, which is a module available in NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.   Consume Windows Azure Storage Let's firstly have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features. Hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code can also be used for consuming WAS when hosted on WAWS. But since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used in a WAWS node application. We can use the solution that I created in my last post. Alternatively we can create a new Windows Azure project in Visual Studio with a worker role, add the "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and make all files "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as a module named "azure", which can be installed through NPM. Once we have downloaded and installed it, we need to include the files in our worker role project and mark them as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here. The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them but will only demonstrate how to use tables and service runtime information in this post. You can find the full document of this SDK here. Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this. 
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Now let’s create a new function, copy the records from WASD to table service. 1. Delete the table named “resource”. 2. Create a new table named “resource”. These 2 steps ensures that we have an empty table. 3. Load all records from the “resource” table in WASD. 4. For each records loaded from WASD, insert them into the table one by one. 5. Prompt to user when finished. In order to use table service we need the storage account and key, which can be found from the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variants in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. Also I created another variant stored the table name. In order to work with table service I need to create the storage client for table service. 
This is very similar as the Windows Azure SDK for .NET. As the code below I created a new variant named “client” and use “createTableService”, specified my storage account name and key. 1: var azure = require("azure"); 2: var storageAccountName = "synctile"; 3: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 4: var tableName = "resource"; 5: var client = azure.createTableService(storageAccountName, storageAccountKey); Now create a new function for URL “/was/init” so that we can trigger it through browser. Then in this function we will firstly load all records from WASD. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: } 18: } 19: }); 20: } 21: }); 22: }); When we succeed loaded all records we can start to transform them into table service. First I need to recreate the table in table service. This can be done by deleting and creating the table through table client I had just created previously. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: } 27: }); 28: }); 29: } 30: } 31: }); 32: } 33: }); 34: }); As you can see, the azure SDK provide its methods in callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked “deleteTable” method, provided the name of the table and a callback function which will be performed when the table had been deleted or failed. Underlying, the azure module will perform the table deletion operation in POSIX async threads pool asynchronously. And once it’s done the callback function will be performed. This is the reason we need to nest the table creation code inside the deletion function. If we perform the table creation code after the deletion code then they will be invoked in parallel. Next, for each records in WASD I created an entity and then insert into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post. 
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
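To make the ordering problem easier to see, here is a minimal illustrative sketch of the pattern above (this is not code from the post; “entityFor” is a hypothetical helper that builds the PartitionKey/RowKey/Value object from a row, and “client”, “tableName”, “results” and “res” are the variables used earlier):

for (var i = 0; i < results.rows.length; i++) {
    client.insertEntity(tableName, entityFor(results.rows[i]), function (error) {
        // runs later, once each asynchronous request to the table service completes
        console.log("entity inserted");
    });
}
// runs immediately: the loop above has only scheduled the inserts,
// so the response goes out before any of the callbacks have fired
res.send(200, "All done!");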
The correct logic should be, when all entities had been copied to table service with no error, then I will send response to the browser, otherwise I should send error message to the browser. To do so I need to import another module named “async”, which helps us to coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its “forEach” method for the asynchronous code of inserting table entities. The first argument of “forEach” is the array that will be performed. The second argument is the operation for each items in the array. And the third argument will be invoked then all items had been performed or any errors occurred. Here we can send our response to browser. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: async.forEach(results.rows, 26: // transform the records 27: function (row, callback) { 28: var entity = { 29: "PartitionKey": row[1], 30: "RowKey": row[0], 31: "Value": row[2] 32: }; 33: client.insertEntity(tableName, entity, function (error) { 34: if (error) { 35: callback(error); 36: } 37: else { 38: console.log("entity inserted."); 39: callback(null); 40: } 41: }); 42: }, 43: // send reponse 44: function (error) { 45: if (error) { 46: error["target"] = "insertEntity"; 47: res.send(500, error); 48: } 49: else { 50: console.log("all done"); 51: res.send(200, "All done!"); 52: } 53: } 54: ); 55: } 56: }); 57: }); 58: } 59: } 60: }); 61: } 62: }); 63: }); Run it locally and now we can find the response was sent after all entities had been inserted. Query entities against table service is simple as well. Just use the “queryEntity” method from the table service client and providing the partition key and row key. We can also provide a complex query criteria as well, for example the code here. In the code below I queried an entity by the partition key and row key, and return the proper localization value in response. 1: app.get("/was/:key/:culture", function (req, res) { 2: var key = req.params.key; 3: var culture = req.params.culture; 4: client.queryEntity(tableName, culture, key, function (error, entity) { 5: if (error) { 6: res.send(500, error); 7: } 8: else { 9: res.json(entity); 10: } 11: }); 12: }); And then tested it on local emulator. Finally if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume blob and queue service, as well as the service bus please refer to the MSDN page.   Consume Service Runtime As I mentioned above, before we published our application to the cloud we need to change the connection string and account information in our code. 
But if you have played with WACS you know that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. This means we can define these values in the CSCFG and CSDEF files and let the runtime retrieve the proper values for us. For example, we can add role settings through the property window of the role and specify the connection string and storage account for both the cloud and local environments. We can also use an endpoint defined in the role environment in our Node.js application. In the Node.js SDK we can get an object from “azure.RoleEnvironment”, which provides the functionality to retrieve the configuration settings, endpoints, etc. In the code below I defined the connection string variables and then used the SDK to retrieve them and initialize the table client. 1: var connectionString = ""; 2: var storageAccountName = ""; 3: var storageAccountKey = ""; 4: var tableName = ""; 5: var client; 6:  7: azure.RoleEnvironment.getConfigurationSettings(function (error, settings) { 8: if (error) { 9: console.log("ERROR: getConfigurationSettings"); 10: console.log(JSON.stringify(error)); 11: } 12: else { 13: console.log(JSON.stringify(settings)); 14: connectionString = settings["SqlConnectionString"]; 15: storageAccountName = settings["StorageAccountName"]; 16: storageAccountKey = settings["StorageAccountKey"]; 17: tableName = settings["TableName"]; 18:  19: console.log("connectionString = %s", connectionString); 20: console.log("storageAccountName = %s", storageAccountName); 21: console.log("storageAccountKey = %s", storageAccountKey); 22: console.log("tableName = %s", tableName); 23:  24: client = azure.createTableService(storageAccountName, storageAccountKey); 25: } 26: }); In this way we don’t need to amend the code for different configurations between the local and cloud environments, since the service runtime takes care of it. At the end of the code we also listen on the port retrieved from the SDK. 1: azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) { 2: if (error) { 3: console.log("ERROR: getCurrentRoleInstance"); 4: console.log(JSON.stringify(error)); 5: } 6: else { 7: console.log(JSON.stringify(instance)); 8: if (instance["endpoints"] && instance["endpoints"]["nodejs"]) { 9: var endpoint = instance["endpoints"]["nodejs"]; 10: app.listen(endpoint["port"]); 11: } 12: else { 13: app.listen(8080); 14: } 15: } 16: }); But if we test the application right now we will find that it cannot retrieve any values from the service runtime. This is because, by default, the entry point of this role is the worker role class. In the Windows Azure environment the service runtime opens a named pipe to the entry point instance so that the instance can connect to the runtime and retrieve values. But in this case, since the entry point is the worker role class and Node.js is launched inside the role, the named pipe is established between our worker role class and the service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the Azure project, add a new element named Runtime, and then add an element named EntryPoint which specifies the Node.js command line. With that in place the Node.js application has its own connection to the service runtime and is able to read the configurations. Starting Node.js in the local emulator, we can see that it retrieves the connection string and storage account for the local environment. 
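The CSDEF change described above is not shown in the post. As a rough sketch only (the role name here is made up, and the exact element names should be verified against the version of the Windows Azure SDK you are using), the worker role section of ServiceDefinition.csdef would gain a Runtime/EntryPoint block along these lines:

<WorkerRole name="NodeWorkerRole" vmsize="Small">
  <Runtime>
    <EntryPoint>
      <!-- launch node.exe (copied into the role project) as the role entry point, so the
           named pipe to the service runtime is opened for the Node.js process itself -->
      <ProgramEntryPoint commandLine="node.exe index.js" setReadyOnProcessStart="true" />
    </EntryPoint>
  </Runtime>
  <!-- the existing ConfigurationSettings, Endpoints and other elements stay as they are -->
</WorkerRole>

With an entry point like this the Node.js process, rather than the default worker role class, owns the connection to the service runtime, which is why azure.RoleEnvironment can then read the settings.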
And if we publish our application to Azure, it works with WASD and the storage service through the cloud configurations.   Summary In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime to retrieve configuration settings and endpoint information. In order to make the service runtime available to my Node.js application, I had to create an entry point element in the CSDEF file and set “node.exe” as the entry point. I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform and how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host a Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is still a young network application platform. But since it is simple, easy to learn and deploy, and uses a single-threaded non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it is particularly useful on a cloud computing platform. Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: “node-sqlserver” doesn’t support SQL parameters yet, while “azure” does support creating the storage client from a storage connection string. Microsoft is working on making them easier to use and on adding more features and functionality.   PS, you can download the source code here. You can download the source code of my “Copy all always” tool here.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • FreeBSD performance tuning. Sysctls, loader.conf, kernel

    - by SaveTheRbtz
    I wanted to share knowledge of tuning FreeBSD via sysctl.conf/loader.conf/KENCONF. It was initially based on Igor Sysoev's (author of nginx) presentation about FreeBSD tuning up to 100,000-200,000 active connections. Tunings are for FreeBSD-CURRENT. Since 7.2 amd64 some of them are tuned well by default. Prior 7.0 some of them are boot only (set via /boot/loader.conf) or does not exist at all. sysctl.conf: # No zero mapping feature # May break wine # (There are also reports about broken samba3) #security.bsd.map_at_zero=0 # If you have really busy webserver with apache13 you may run out of processes #kern.maxproc=10000 # Same for servers with apache2 / Pound #kern.threads.max_threads_per_proc=4096 # Max. backlog size kern.ipc.somaxconn=4096 # Shared memory // 7.2+ can use shared memory > 2Gb kern.ipc.shmmax=2147483648 # Sockets kern.ipc.maxsockets=204800 # Can cause this on older kernels: # http://old.nabble.com/Significant-performance-regression-for-increased-maxsockbuf-on-8.0-RELEASE-tt26745981.html#a26745981 ) kern.ipc.maxsockbuf=10485760 # Mbuf 2k clusters (on amd64 7.2+ 25600 is default) # For such high value vm.kmem_size must be increased to 3G kern.ipc.nmbclusters=262144 # Jumbo pagesize(_SC_PAGESIZE) clusters # Used as general packet storage for jumbo frames # can be monitored via `netstat -m` #kern.ipc.nmbjumbop=262144 # Jumbo 9k/16k clusters # If you are using them #kern.ipc.nmbjumbo9=65536 #kern.ipc.nmbjumbo16=32768 # For lower latency you can decrease scheduler's maximum time slice # default: stathz/10 (~ 13) #kern.sched.slice=1 # Increase max command-line length showed in `ps` (e.g for Tomcat/Java) # Default is PAGE_SIZE / 16 or 256 on x86 # This avoids commands to be presented as [executable] in `ps` # For more info see: http://www.freebsd.org/cgi/query-pr.cgi?pr=120749 kern.ps_arg_cache_limit=4096 # Every socket is a file, so increase them kern.maxfiles=204800 kern.maxfilesperproc=200000 kern.maxvnodes=200000 # On some systems HPET is almost 2 times faster than default ACPI-fast # Useful on systems with lots of clock_gettime / gettimeofday calls # See http://old.nabble.com/ACPI-fast-default-timecounter,-but-HPET-83--faster-td23248172.html # After revision 222222 HPET became default: http://svnweb.freebsd.org/base?view=revision&revision=222222 kern.timecounter.hardware=HPET # Small receive space, only usable on http-server, on file server this # should be increased to 65535 or even more #net.inet.tcp.recvspace=8192 # This is useful on Fat-Long-Pipes #net.inet.tcp.recvbuf_max=10485760 #net.inet.tcp.recvbuf_inc=65535 # Small send space is useful for http servers that serve small files # Autotuned since 7.x net.inet.tcp.sendspace=16384 # This is useful on Fat-Long-Pipes #net.inet.tcp.sendbuf_max=10485760 #net.inet.tcp.sendbuf_inc=65535 # Turn off receive autotuning # You can play with it. #net.inet.tcp.recvbuf_auto=0 #net.inet.tcp.sendbuf_auto=0 # This should be enabled if you going to use big spaces (>64k) # Also timestamp field is useful when using syncookies net.inet.tcp.rfc1323=1 # Turn this off on high-speed, lossless connections (LAN 1Gbit+) # If you set it there is no need in TCP_NODELAY sockopt (see man tcp) net.inet.tcp.delayed_ack=0 # This feature is useful if you are serving data over modems, Gigabit Ethernet, # or even high speed WAN links (or any other link with a high bandwidth delay product), # especially if you are also using window scaling or have configured a large send window. 
# Automatically disables on small RTT ( http://www.freebsd.org/cgi/cvsweb.cgi/src/sys/netinet/tcp_subr.c?#rev1.237 ) # This sysctl was removed in 10-CURRENT: # See: http://www.mail-archive.com/[email protected]/msg06178.html #net.inet.tcp.inflight.enable=0 # TCP slowstart algorithm tunings # We assuming we have very fast clients #net.inet.tcp.slowstart_flightsize=100 #net.inet.tcp.local_slowstart_flightsize=100 # Disable randomizing of ports to avoid false RST # Before usage check SA here www.bsdcan.org/2006/papers/ImprovingTCPIP.pdf # (it's also says that port randomization auto-disables at some conn.rates, but I didn't checked it thou) #net.inet.ip.portrange.randomized=0 # Increase portrange # For outgoing connections only. Good for seed-boxes and ftp servers. net.inet.ip.portrange.first=1024 net.inet.ip.portrange.last=65535 # # stops route cache degregation during a high-bandwidth flood # http://www.freebsd.org/doc/en/books/handbook/securing-freebsd.html #net.inet.ip.rtexpire=2 net.inet.ip.rtminexpire=2 net.inet.ip.rtmaxcache=1024 # Security net.inet.ip.redirect=0 net.inet.ip.sourceroute=0 net.inet.ip.accept_sourceroute=0 net.inet.icmp.maskrepl=0 net.inet.icmp.log_redirect=0 net.inet.icmp.drop_redirect=1 net.inet.tcp.drop_synfin=1 # # There is also good example of sysctl.conf with comments: # http://www.thern.org/projects/sysctl.conf # # icmp may NOT rst, helpful for those pesky spoofed # icmp/udp floods that end up taking up your outgoing # bandwidth/ifqueue due to all that outgoing RST traffic. # #net.inet.tcp.icmp_may_rst=0 # Security net.inet.udp.blackhole=1 net.inet.tcp.blackhole=2 # IPv6 Security # For more info see http://www.fosslc.org/drupal/content/security-implications-ipv6 # Disable Node info replies # To see this vulnerability in action run `ping6 -a sglAac ::1` or `ping6 -w ::1` on unprotected node net.inet6.icmp6.nodeinfo=0 # Turn on IPv6 privacy extensions # For more info see proposal http://unix.derkeiler.com/Mailing-Lists/FreeBSD/net/2008-06/msg00103.html net.inet6.ip6.use_tempaddr=1 net.inet6.ip6.prefer_tempaddr=1 # Disable ICMP redirect net.inet6.icmp6.rediraccept=0 # Disable acceptation of RA and auto linklocal generation if you don't use them #net.inet6.ip6.accept_rtadv=0 #net.inet6.ip6.auto_linklocal=0 # Increases default TTL, sometimes useful # Default is 64 net.inet.ip.ttl=128 # Lessen max segment life to conserve resources # ACK waiting time in miliseconds # (default: 30000. RFC from 1979 recommends 120000) net.inet.tcp.msl=5000 # Max bumber of timewait sockets net.inet.tcp.maxtcptw=200000 # Don't use tw on local connections # As of 15 Apr 2009. Igor Sysoev says that nolocaltimewait has some buggy realization. # So disable it or now till get fixed #net.inet.tcp.nolocaltimewait=1 # FIN_WAIT_2 state fast recycle net.inet.tcp.fast_finwait2_recycle=1 # Time before tcp keepalive probe is sent # default is 2 hours (7200000) #net.inet.tcp.keepidle=60000 # Should be increased until net.inet.ip.intr_queue_drops is zero net.inet.ip.intr_queue_maxlen=4096 # Interrupt handling via multiple CPU, but with context switch. # You can play with it. 
Default is 1; #net.isr.direct=0 # This is for routers only #net.inet.ip.forwarding=1 #net.inet.ip.fastforwarding=1 # This speed ups dummynet when channel isn't saturated net.inet.ip.dummynet.io_fast=1 # Increase dummynet(4) hash #net.inet.ip.dummynet.hash_size=2048 #net.inet.ip.dummynet.max_chain_len # Should be increased when you have A LOT of files on server # (Increase until vfs.ufs.dirhash_mem becomes lower) vfs.ufs.dirhash_maxmem=67108864 # Note from commit http://svn.freebsd.org/base/head@211031 : # For systems with RAID volumes and/or virtualization envirnments, where # read performance is very important, increasing this sysctl tunable to 32 # or even more will demonstratively yield additional performance benefits. vfs.read_max=32 # Explicit Congestion Notification (see http://en.wikipedia.org/wiki/Explicit_Congestion_Notification) net.inet.tcp.ecn.enable=1 # Flowtable - flow caching mechanism # Useful for routers #net.inet.flowtable.enable=1 #net.inet.flowtable.nmbflows=65535 # Extreme polling tuning #kern.polling.burst_max=1000 #kern.polling.each_burst=1000 #kern.polling.reg_frac=100 #kern.polling.user_frac=1 #kern.polling.idle_poll=0 # IPFW dynamic rules and timeouts tuning # Increase dyn_buckets till net.inet.ip.fw.curr_dyn_buckets is lower net.inet.ip.fw.dyn_buckets=65536 net.inet.ip.fw.dyn_max=65536 net.inet.ip.fw.dyn_ack_lifetime=120 net.inet.ip.fw.dyn_syn_lifetime=10 net.inet.ip.fw.dyn_fin_lifetime=2 net.inet.ip.fw.dyn_short_lifetime=10 # Make packets pass firewall only once when using dummynet # i.e. packets going thru pipe are passing out from firewall with accept #net.inet.ip.fw.one_pass=1 # shm_use_phys Wires all shared pages, making them unswappable # Use this to lessen Virtual Memory Manager's work when using Shared Mem. # Useful for databases #kern.ipc.shm_use_phys=1 # ZFS # Enable prefetch. Useful for sequential load type i.e fileserver. # FreeBSD sets vfs.zfs.prefetch_disable to 1 on any i386 systems and # on any amd64 systems with less than 4GB of avaiable memory # For additional info check this nabble thread http://old.nabble.com/Samba-read-speed-performance-tuning-td27964534.html #vfs.zfs.prefetch_disable=0 # On highload servers you may notice following message in dmesg: # "Approaching the limit on PV entries, consider increasing either the # vm.pmap.shpgperproc or the vm.pmap.pv_entry_max tunable" vm.pmap.shpgperproc=2048 loader.conf: # Accept filters for data, http and DNS requests # Useful when your software uses select() instead of kevent/kqueue or when you under DDoS # DNS accf available on 8.0+ accf_data_load="YES" accf_http_load="YES" accf_dns_load="YES" # Async IO system calls aio_load="YES" # Linux specific devices in /dev # As for 8.1 it only /dev/full #lindev_load="YES" # Adds NCQ support in FreeBSD # WARNING! all ad[0-9]+ devices will be renamed to ada[0-9]+ # 8.0+ only #ahci_load="YES" #siis_load="YES" # FreeBSD 8.2+ # New Congestion Control for FreeBSD # http://caia.swin.edu.au/urp/newtcp/tools/cc_chd-readme-0.1.txt # http://www.ietf.org/proceedings/78/slides/iccrg-5.pdf # Initial merge commit message http://www.mail-archive.com/[email protected]/msg31410.html #cc_chd_load="YES" # Increase kernel memory size to 3G. # # Use ONLY if you have KVA_PAGES in kernel configuration, and you have more than 3G RAM # Otherwise panic will happen on next reboot! 
# # It's required for high buffer sizes: kern.ipc.nmbjumbop, kern.ipc.nmbclusters, etc # Useful on highload stateful firewalls, proxies or ZFS fileservers # (FreeBSD 7.2+ amd64 users: Check that current value is lower!) #vm.kmem_size="3G" # If your server has lots of swap (>4Gb) you should increase following value # according to http://lists.freebsd.org/pipermail/freebsd-hackers/2009-October/029616.html # Otherwise you'll be getting errors # "kernel: swap zone exhausted, increase kern.maxswzone" # kern.maxswzone="256M" # Older versions of FreeBSD can't tune maxfiles on the fly #kern.maxfiles="200000" # Useful for databases # Sets maximum data size to 1G # (FreeBSD 7.2+ amd64 users: Check that current value is lower!) #kern.maxdsiz="1G" # Maximum buffer size(vfs.maxbufspace) # You can check current one via vfs.bufspace # Should be lowered/upped depending on server's load-type # Usually decreased to preserve kmem # (default is 10% of mem) #kern.maxbcache="512M" # Sendfile buffers # For i386 only #kern.ipc.nsfbufs=10240 # FreeBSD 9+ # HPET "legacy route" support. It should allow HPET to work per-CPU # See http://www.mail-archive.com/[email protected]/msg03603.html #hint.atrtc.0.clock=0 #hint.attimer.0.clock=0 #hint.hpet.0.legacy_route=1 # syncache Hash table tuning net.inet.tcp.syncache.hashsize=1024 net.inet.tcp.syncache.bucketlimit=512 net.inet.tcp.syncache.cachelimit=65536 # Increased hostcache # Later host cache can be viewed via net.inet.tcp.hostcache.list hidden sysctl # Very useful for it's RTT RTTVAR # Must be power of two net.inet.tcp.hostcache.hashsize=65536 # hashsize * bucketlimit (which is 30 by default) # It allocates 255Mb (1966080*136) of RAM net.inet.tcp.hostcache.cachelimit=1966080 # TCP control-block Hash table tuning net.inet.tcp.tcbhashsize=4096 # Disable ipfw deny all # Should be uncommented when there is a chance that # kernel and ipfw binary may be out-of sync on next reboot #net.inet.ip.fw.default_to_accept=1 # # SIFTR (Statistical Information For TCP Research) is a kernel module that # logs a range of statistics on active TCP connections to a log file. # See prerelease notes http://groups.google.com/group/mailing.freebsd.current/browse_thread/thread/b4c18be6cdce76e4 # and man 4 sitfr #siftr_load="YES" # Enable superpages, for 7.2+ only # Also read http://lists.freebsd.org/pipermail/freebsd-hackers/2009-November/030094.html vm.pmap.pg_ps_enabled=1 # Usefull if you are using Intel-Gigabit NIC #hw.em.rxd=4096 #hw.em.txd=4096 #hw.em.rx_process_limit="-1" # Also if you have ALOT interrupts on NIC - play with following parameters # NOTE: You should set them for every NIC #dev.em.0.rx_int_delay: 250 #dev.em.0.tx_int_delay: 250 #dev.em.0.rx_abs_int_delay: 250 #dev.em.0.tx_abs_int_delay: 250 # There is also multithreaded version of em/igb drivers can be found here: # http://people.yandex-team.ru/~wawa/ # # for additional em monitoring and statistics use # sysctl dev.em.0.stats=1 ; dmesg # sysctl dev.em.0.debug=1 ; dmesg # Also after r209242 (-CURRENT) there is a separate sysctl for each stat variable; # Same tunings for igb #hw.igb.rxd=4096 #hw.igb.txd=4096 #hw.igb.rx_process_limit=100 # Some useful netisr tunables. 
See sysctl net.isr #net.isr.maxthreads=4 #net.isr.defaultqlimit=4096 #net.isr.maxqlimit: 10240 # Bind netisr threads to CPUs #net.isr.bindthreads=1 # # FreeBSD 9.x+ # Increase interface send queue length # See commit message http://svn.freebsd.org/viewvc/base?view=revision&revision=207554 #net.link.ifqmaxlen=1024 # Nicer boot logo =) loader_logo="beastie" And finally here is KERNCONF: # Just some of them, see also # cat /sys/{i386,amd64,}/conf/NOTES # This one useful only on i386 #options KVA_PAGES=512 # You can play with HZ in environments with high interrupt rate (default is 1000) # 100 is for my notebook to prolong it's battery life #options HZ=100 # Polling is goot on network loads with high packet rates and low-end NICs # NB! Do not enable it if you want more than one netisr thread #options DEVICE_POLLING # Eliminate datacopy on socket read-write # To take advantage with zero copy sockets you should have an MTU >= 4k # This req. is only for receiving data. # Read more in man zero_copy_sockets # Also this epic thread on kernel trap: # http://kerneltrap.org/node/6506 # Here Linus says that "anybody that does it that way (FreeBSD) is totally incompetent" #options ZERO_COPY_SOCKETS # Support TCP sign. Used for IPSec options TCP_SIGNATURE # There was stackoverflow found in KAME IPSec stack: # See http://secunia.com/advisories/43995/ # For quick workaround you can use `ipfw add deny proto ipcomp` options IPSEC # This ones can be loaded as modules. They described in loader.conf section #options ACCEPT_FILTER_DATA #options ACCEPT_FILTER_HTTP # Adding ipfw, also can be loaded as modules options IPFIREWALL # On 8.1+ you can disable verbose to see blocked packets on ipfw0 interface. # Also there is no point in compiling verbose into the kernel, because # now there is net.inet.ip.fw.verbose tunable. #options IPFIREWALL_VERBOSE #options IPFIREWALL_VERBOSE_LIMIT=10 options IPFIREWALL_FORWARD # Adding kernel NAT options IPFIREWALL_NAT options LIBALIAS # Traffic shaping options DUMMYNET # Divert, i.e. for userspace NAT options IPDIVERT # This is for OpenBSD's pf firewall device pf device pflog # pf's QoS - ALTQ options ALTQ options ALTQ_CBQ # Class Bases Queuing (CBQ) options ALTQ_RED # Random Early Detection (RED) options ALTQ_RIO # RED In/Out options ALTQ_HFSC # Hierarchical Packet Scheduler (HFSC) options ALTQ_PRIQ # Priority Queuing (PRIQ) options ALTQ_NOPCC # Required for SMP build # Pretty console # Manual can be found here http://forums.freebsd.org/showthread.php?t=6134 #options VESA #options SC_PIXEL_MODE # Disable reboot on Ctrl Alt Del #options SC_DISABLE_REBOOT # Change normal|kernel messages color options SC_NORM_ATTR=(FG_GREEN|BG_BLACK) options SC_KERNEL_CONS_ATTR=(FG_YELLOW|BG_BLACK) # More scroll space options SC_HISTORY_SIZE=8192 # Adding hardware crypto device device crypto device cryptodev # Useful network interfaces device vlan device tap #Virtual Ethernet driver device gre #IP over IP tunneling device if_bridge #Bridge interface device pfsync #synchronization interface for PF device carp #Common Address Redundancy Protocol device enc #IPsec interface device lagg #Link aggregation interface device stf #IPv4-IPv6 port # Also for my notebook, but may be used with Opteron device amdtemp # Same for Intel processors device coretemp # man 4 cpuctl device cpuctl # CPU control pseudo-device # Support for ECMP. More than one route for destination # Works even with default route so one can use it as LB for two ISP # For now code is unstable and panics (panic: rtfree 2) on route deletions. 
#options RADIX_MPATH # Multicast routing #options MROUTING #options PIM # Debug & DTrace options KDB # Kernel debugger related code options KDB_TRACE # Print a stack trace for a panic options KDTRACE_FRAME # amd64-only(?) options KDTRACE_HOOKS # all architectures - enable general DTrace hooks #options DDB #options DDB_CTF # all architectures - kernel ELF linker loads CTF data # Adaptive spining in lockmgr (8.x+) # See http://www.mail-archive.com/[email protected]/msg10782.html options ADAPTIVE_LOCKMGRS # UTF-8 in console (8.x+) #options TEKEN_UTF8 # FreeBSD 8.1+ # Deadlock resolver thread # For additional information see http://www.mail-archive.com/[email protected]/msg18124.html # (FYI: "resolution" is panic so use with caution) #options DEADLKRES # Increase maximum size of Raw I/O and sendfile(2) readahead #options MAXPHYS=(1024*1024) #options MAXBSIZE=(1024*1024) # For scheduler debug enable following option. # Debug will be available via `kern.sched.stats` sysctl # For more information see http://svnweb.freebsd.org/base/head/sys/conf/NOTES?view=markup #options SCHED_STATS If you are tuning network for maximum performance you may wish to play with ifconfig options like: # You can list all capabilities via `ifconfig -m` ifconfig [-]rxcsum [-]txcsum [-]tso [-]lro mtu In case you've enabled DDB in kernel config, you should edit your /etc/ddb.conf and add something like this to enable automatic reboot (and textdump as bonus): script kdb.enter.panic=textdump set; capture on; show pcpu; bt; ps; alltrace; capture off; call doadump; reset script kdb.enter.default=textdump set; capture on; bt; ps; capture off; call doadump; reset And do not forget to add ddb_enable="YES" to /etc/rc.conf Since FreeBSD 9 you can select to enable/disable flowcontrol on your NIC: # See http://en.wikipedia.org/wiki/Ethernet_flow_control and # http://www.mail-archive.com/[email protected]/msg07927.html for additional info ifconfig bge0 media auto mediaopt flowcontrol PS. Also most of FreeBSD's limits can be monitored by # vmstat -z and # limits PPS. variety of network counters can be monitored via # netstat -s In FreeBSD-9 netstat's -Q option appeared, try following command to display netisr stats # netstat -Q PPPS. also see # man 7 tuning PPPPS. I wanted to thank FreeBSD community, especially author of nginx - Igor Sysoev, nginx-ru@ and FreeBSD-performance@ mailing lists for providing useful information about FreeBSD tuning. FreeBSD WIP * Whats cooking for FreeBSD 7? * Whats cooking for FreeBSD 8? * Whats cooking for FreeBSD 9? So here is the question: What tunings are you using on yours FreeBSD servers? You can also post your /etc/sysctl.conf, /boot/loader.conf, kernel options, etc with description of its' meaning (do not copy-paste from sysctl -d). Don't forget to specify server type (web, smb, gateway, etc) Let's share experience!

    Read the article

  • Valgrind says "stack allocation," I say "heap allocation"

    - by Joel J. Adamson
    Dear Friends, I am trying to trace a segfault with valgrind. I get the following message from valgrind: ==3683== Conditional jump or move depends on uninitialised value(s) ==3683== at 0x4C277C5: sparse_mat_mat_kron (sparse.c:165) ==3683== by 0x4C2706E: rec_mating (rec.c:176) ==3683== by 0x401C1C: age_dep_iterate (age_dep.c:287) ==3683== by 0x4014CB: main (age_dep.c:92) ==3683== Uninitialised value was created by a stack allocation ==3683== at 0x401848: age_dep_init_params (age_dep.c:131) ==3683== ==3683== Conditional jump or move depends on uninitialised value(s) ==3683== at 0x4C277C7: sparse_mat_mat_kron (sparse.c:165) ==3683== by 0x4C2706E: rec_mating (rec.c:176) ==3683== by 0x401C1C: age_dep_iterate (age_dep.c:287) ==3683== by 0x4014CB: main (age_dep.c:92) ==3683== Uninitialised value was created by a stack allocation ==3683== at 0x401848: age_dep_init_params (age_dep.c:131) However, here's the offending line: /* allocate mating table */ age_dep_data->mtable = malloc (age_dep_data->geno * sizeof (double *)); if (age_dep_data->mtable == NULL) error (ENOMEM, ENOMEM, nullmsg, __LINE__); for (int j = 0; j < age_dep_data->geno; j++) { 131=> age_dep_data->mtable[j] = calloc (age_dep_data->geno, sizeof (double)); if (age_dep_data->mtable[j] == NULL) error (ENOMEM, ENOMEM, nullmsg, __LINE__); } What gives? I thought any call to malloc or calloc allocated heap space; there is no other variable allocated here, right? Is it possible there's another allocation going on (the offending stack allocation) that I'm not seeing? You asked to see the code, here goes: /* Copyright 2010 Joel J. Adamson <[email protected]> $Id: age_dep.c 1010 2010-04-21 19:19:16Z joel $ age_dep.c:main file Joel J. Adamson -- http://www.unc.edu/~adamsonj Servedio Lab University of North Carolina at Chapel Hill CB #3280, Coker Hall Chapel Hill, NC 27599-3280 This file is part of an investigation of age-dependent sexual selection. This code is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with haploid. If not, see <http://www.gnu.org/licenses/>. 
*/ #include "age_dep.h" /* global variables */ extern struct argp age_dep_argp; /* global error message variables */ char * nullmsg = "Null pointer: %i"; /* error message for conversions: */ char * errmsg = "Representation error: %s"; /* precision for formatted output: */ const char prec[] = "%-#9.8f "; const size_t age_max = AGEMAX; /* maximum age of males */ static int keep_going_p = 1; int main (int argc, char ** argv) { /* often used counters: */ int i, j; /* read the command line */ struct age_dep_args age_dep_args = { NULL, NULL, NULL }; argp_parse (&age_dep_argp, argc, argv, 0, 0, &age_dep_args); /* set the parameters here: */ /* initialize an age_dep_params structure, set the members */ age_dep_params_t * params = malloc (sizeof (age_dep_params_t)); if (params == NULL) error (ENOMEM, ENOMEM, nullmsg, __LINE__); age_dep_init_params (params, &age_dep_args); /* initialize frequencies: this initializes a list of pointers to initial frqeuencies, terminated by a NULL pointer*/ params->freqs = age_dep_init (&age_dep_args); params->by = 0.0; /* what range of parameters do we want, and with what stepsize? */ /* we should go from 0 to half-of-theta with a step size of about 0.01 */ double from = 0.0; double to = params->theta / 2.0; double stepsz = 0.01; /* did you think I would spell the whole word? */ unsigned int numparts = floor(to / stepsz); do { #pragma omp parallel for private(i) firstprivate(params) \ shared(stepsz, numparts) for (i = 0; i < numparts; i++) { params->by = i * stepsz; int tries = 0; while (keep_going_p) { /* each time through, modify mfreqs and mating table, then go again */ keep_going_p = age_dep_iterate (params, ++tries); if (keep_going_p == ERANGE) error (ERANGE, ERANGE, "Failure to converge\n"); } fprintf (stdout, "%i iterations\n", tries); } /* for i < numparts */ params->freqs = params->freqs->next; } while (params->freqs->next != NULL); return 0; } inline double age_dep_pmate (double age_dep_t, unsigned int genot, double bp, double ba) { /* the probability of mating between these phenotypes */ /* the female preference depends on whether the female has the preference allele, the strength of preference (parameter bp) and the male phenotype (age_dep_t); if the female lacks the preference allele, then this will return 0, which is not quite accurate; it should return 1 */ return bits_isset (genot, CLOCI)? 1.0 - exp (-bp * age_dep_t) + ba: 1.0; } inline double age_dep_trait (int age, unsigned int genot, double by) { /* return the male trait, a function of the trait locus, age, the age-dependent scaling parameter (bx) and the males condition genotype */ double C; double T; /* get the male's condition genotype */ C = (double) bits_popcount (bits_extract (0, CLOCI, genot)); /* get his trait genotype */ T = bits_isset (genot, CLOCI + 1)? 1.0: 0.0; /* return the trait value */ return T * by * exp (age * C); } int age_dep_iterate (age_dep_params_t * data, unsigned int tries) { /* main driver routine */ /* number of bytes for female frequencies */ size_t geno = data->age_dep_data->geno; size_t genosize = geno * sizeof (double); /* female frequencies are equal to male frequencies at birth (before selection) */ double ffreqs[geno]; if (ffreqs == NULL) error (ENOMEM, ENOMEM, nullmsg, __LINE__); /* do not set! 
Use memcpy (we need to alter male frequencies (selection) without altering female frequencies) */ memmove (ffreqs, data->freqs->freqs[0], genosize); /* for (int i = 0; i < geno; i++) */ /* ffreqs[i] = data->freqs->freqs[0][i]; */ #ifdef PRMTABLE age_dep_pr_mfreqs (data); #endif /* PRMTABLE */ /* natural selection: */ age_dep_ns (data); /* normalized mating table with new frequencies */ age_dep_norm_mtable (ffreqs, data); #ifdef PRMTABLE age_dep_pr_mtable (data); #endif /* PRMTABLE */ double * newfreqs; /* mutate here */ /* i.e. get the new frequency of 0-year-olds using recombination; */ newfreqs = rec_mating (data->age_dep_data); /* return block */ { if (sim_stop_ck (data->freqs->freqs[0], newfreqs, GENO, TOL) == 0) { /* if we have converged, stop the iterations and handle the data */ age_dep_sim_out (data, stdout); return 0; } else if (tries > MAXTRIES) return ERANGE; else { /* advance generations */ for (int j = age_max - 1; j < 0; j--) memmove (data->freqs->freqs[j], data->freqs->freqs[j-1], genosize); /* advance the first age-class */ memmove (data->freqs->freqs[0], newfreqs, genosize); return 1; } } } void age_dep_ns (age_dep_params_t * data) { /* calculate the new frequency of genotypes given additive fitness and selection coefficient s */ size_t geno = data->age_dep_data->geno; double w[geno]; double wbar, dtheta, ttheta, dcond, tcond; double t, cond; /* fitness parameters */ double mu, nu; mu = data->wparams[0]; nu = data->wparams[1]; /* calculate fitness */ for (int j = 0; j < age_max; j++) { int i; for (i = 0; i < geno; i++) { /* calculate male trait: */ t = age_dep_trait(j, i, data->by); /* calculate condition: */ cond = (double) bits_popcount (bits_extract(0, CLOCI, i)); /* trait-based fitness term */ dtheta = data->theta - t; ttheta = (dtheta * dtheta) / (2.0 * nu * nu); /* condition-based fitness term */ dcond = CLOCI - cond; tcond = (dcond * dcond) / (2.0 * mu * mu); /* calculate male fitness */ w[i] = 1 + exp(-tcond) - exp(-ttheta); } /* calculate mean fitness */ /* as long as we calculate wbar before altering any values of freqs[], we're safe */ wbar = gen_mean (data->freqs->freqs[j], w, geno); for (i = 0; i < geno; i++) data->freqs->freqs[j][i] = (data->freqs->freqs[j][i] * w[i]) / wbar; } } void age_dep_norm_mtable (double * ffreqs, age_dep_params_t * params) { /* this function produces a single mating table that forms the input for recombination () */ /* i is female genotype; j is male genotype; k is male age */ int i,j,k; double norm_denom; double trait; size_t geno = params->age_dep_data->geno; for (i = 0; i < geno; i++) { double norm_mtable[geno]; /* initialize the denominator: */ norm_denom = 0.0; /* find the probability of mating and add it to the denominator */ for (j = 0; j < geno; j++) { /* initialize entry: */ norm_mtable[j] = 0.0; for (k = 0; k < age_max; k++) { trait = age_dep_trait (k, j, params->by); norm_mtable[j] += age_dep_pmate (trait, i, params->bp, params->ba) * (params->freqs->freqs)[k][j]; } norm_denom += norm_mtable[j]; } /* now calculate entry (i,j) */ for (j = 0; j < geno; j++) params->age_dep_data->mtable[i][j] = (ffreqs[i] * norm_mtable[j]) / norm_denom; } } My current suspicion is the array newfreqs: I can't memmove, memcpy or assign a stack variable then hope it will persist, can I? rec_mating() returns double *.

    Read the article

  • fatal error C1014: too many include files : depth = 1024

    - by numerical25
    I have no idea what this means. But here is the code that it supposely is happening in. //======================================================================================= // d3dApp.cpp by Frank Luna (C) 2008 All Rights Reserved. //======================================================================================= #include "d3dApp.h" #include <stream> LRESULT CALLBACK MainWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam) { static D3DApp* app = 0; switch( msg ) { case WM_CREATE: { // Get the 'this' pointer we passed to CreateWindow via the lpParam parameter. CREATESTRUCT* cs = (CREATESTRUCT*)lParam; app = (D3DApp*)cs->lpCreateParams; return 0; } } // Don't start processing messages until after WM_CREATE. if( app ) return app->msgProc(msg, wParam, lParam); else return DefWindowProc(hwnd, msg, wParam, lParam); } D3DApp::D3DApp(HINSTANCE hInstance) { mhAppInst = hInstance; mhMainWnd = 0; mAppPaused = false; mMinimized = false; mMaximized = false; mResizing = false; mFrameStats = L""; md3dDevice = 0; mSwapChain = 0; mDepthStencilBuffer = 0; mRenderTargetView = 0; mDepthStencilView = 0; mFont = 0; mMainWndCaption = L"D3D10 Application"; md3dDriverType = D3D10_DRIVER_TYPE_HARDWARE; mClearColor = D3DXCOLOR(0.0f, 0.0f, 1.0f, 1.0f); mClientWidth = 800; mClientHeight = 600; } D3DApp::~D3DApp() { ReleaseCOM(mRenderTargetView); ReleaseCOM(mDepthStencilView); ReleaseCOM(mSwapChain); ReleaseCOM(mDepthStencilBuffer); ReleaseCOM(md3dDevice); ReleaseCOM(mFont); } HINSTANCE D3DApp::getAppInst() { return mhAppInst; } HWND D3DApp::getMainWnd() { return mhMainWnd; } int D3DApp::run() { MSG msg = {0}; mTimer.reset(); while(msg.message != WM_QUIT) { // If there are Window messages then process them. if(PeekMessage( &msg, 0, 0, 0, PM_REMOVE )) { TranslateMessage( &msg ); DispatchMessage( &msg ); } // Otherwise, do animation/game stuff. else { mTimer.tick(); if( !mAppPaused ) updateScene(mTimer.getDeltaTime()); else Sleep(50); drawScene(); } } return (int)msg.wParam; } void D3DApp::initApp() { initMainWindow(); initDirect3D(); D3DX10_FONT_DESC fontDesc; fontDesc.Height = 24; fontDesc.Width = 0; fontDesc.Weight = 0; fontDesc.MipLevels = 1; fontDesc.Italic = false; fontDesc.CharSet = DEFAULT_CHARSET; fontDesc.OutputPrecision = OUT_DEFAULT_PRECIS; fontDesc.Quality = DEFAULT_QUALITY; fontDesc.PitchAndFamily = DEFAULT_PITCH | FF_DONTCARE; wcscpy(fontDesc.FaceName, L"Times New Roman"); D3DX10CreateFontIndirect(md3dDevice, &fontDesc, &mFont); } void D3DApp::onResize() { // Release the old views, as they hold references to the buffers we // will be destroying. Also release the old depth/stencil buffer. ReleaseCOM(mRenderTargetView); ReleaseCOM(mDepthStencilView); ReleaseCOM(mDepthStencilBuffer); // Resize the swap chain and recreate the render target view. HR(mSwapChain->ResizeBuffers(1, mClientWidth, mClientHeight, DXGI_FORMAT_R8G8B8A8_UNORM, 0)); ID3D10Texture2D* backBuffer; HR(mSwapChain->GetBuffer(0, __uuidof(ID3D10Texture2D), reinterpret_cast<void**>(&backBuffer))); HR(md3dDevice->CreateRenderTargetView(backBuffer, 0, &mRenderTargetView)); ReleaseCOM(backBuffer); // Create the depth/stencil buffer and view. D3D10_TEXTURE2D_DESC depthStencilDesc; depthStencilDesc.Width = mClientWidth; depthStencilDesc.Height = mClientHeight; depthStencilDesc.MipLevels = 1; depthStencilDesc.ArraySize = 1; depthStencilDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT; depthStencilDesc.SampleDesc.Count = 1; // multisampling must match depthStencilDesc.SampleDesc.Quality = 0; // swap chain values. 
depthStencilDesc.Usage = D3D10_USAGE_DEFAULT; depthStencilDesc.BindFlags = D3D10_BIND_DEPTH_STENCIL; depthStencilDesc.CPUAccessFlags = 0; depthStencilDesc.MiscFlags = 0; HR(md3dDevice->CreateTexture2D(&depthStencilDesc, 0, &mDepthStencilBuffer)); HR(md3dDevice->CreateDepthStencilView(mDepthStencilBuffer, 0, &mDepthStencilView)); // Bind the render target view and depth/stencil view to the pipeline. md3dDevice->OMSetRenderTargets(1, &mRenderTargetView, mDepthStencilView); // Set the viewport transform. D3D10_VIEWPORT vp; vp.TopLeftX = 0; vp.TopLeftY = 0; vp.Width = mClientWidth; vp.Height = mClientHeight; vp.MinDepth = 0.0f; vp.MaxDepth = 1.0f; md3dDevice->RSSetViewports(1, &vp); } void D3DApp::updateScene(float dt) { // Code computes the average frames per second, and also the // average time it takes to render one frame. static int frameCnt = 0; static float t_base = 0.0f; frameCnt++; // Compute averages over one second period. if( (mTimer.getGameTime() - t_base) >= 1.0f ) { float fps = (float)frameCnt; // fps = frameCnt / 1 float mspf = 1000.0f / fps; std::wostringstream outs; outs.precision(6); outs << L"FPS: " << fps << L"\n" << "Milliseconds: Per Frame: " << mspf; mFrameStats = outs.str(); // Reset for next average. frameCnt = 0; t_base += 1.0f; } } void D3DApp::drawScene() { md3dDevice->ClearRenderTargetView(mRenderTargetView, mClearColor); md3dDevice->ClearDepthStencilView(mDepthStencilView, D3D10_CLEAR_DEPTH|D3D10_CLEAR_STENCIL, 1.0f, 0); } LRESULT D3DApp::msgProc(UINT msg, WPARAM wParam, LPARAM lParam) { switch( msg ) { // WM_ACTIVATE is sent when the window is activated or deactivated. // We pause the game when the window is deactivated and unpause it // when it becomes active. case WM_ACTIVATE: if( LOWORD(wParam) == WA_INACTIVE ) { mAppPaused = true; mTimer.stop(); } else { mAppPaused = false; mTimer.start(); } return 0; // WM_SIZE is sent when the user resizes the window. case WM_SIZE: // Save the new client area dimensions. mClientWidth = LOWORD(lParam); mClientHeight = HIWORD(lParam); if( md3dDevice ) { if( wParam == SIZE_MINIMIZED ) { mAppPaused = true; mMinimized = true; mMaximized = false; } else if( wParam == SIZE_MAXIMIZED ) { mAppPaused = false; mMinimized = false; mMaximized = true; onResize(); } else if( wParam == SIZE_RESTORED ) { // Restoring from minimized state? if( mMinimized ) { mAppPaused = false; mMinimized = false; onResize(); } // Restoring from maximized state? else if( mMaximized ) { mAppPaused = false; mMaximized = false; onResize(); } else if( mResizing ) { // If user is dragging the resize bars, we do not resize // the buffers here because as the user continuously // drags the resize bars, a stream of WM_SIZE messages are // sent to the window, and it would be pointless (and slow) // to resize for each WM_SIZE message received from dragging // the resize bars. So instead, we reset after the user is // done resizing the window and releases the resize bars, which // sends a WM_EXITSIZEMOVE message. } else // API call such as SetWindowPos or mSwapChain->SetFullscreenState. { onResize(); } } } return 0; // WM_EXITSIZEMOVE is sent when the user grabs the resize bars. case WM_ENTERSIZEMOVE: mAppPaused = true; mResizing = true; mTimer.stop(); return 0; // WM_EXITSIZEMOVE is sent when the user releases the resize bars. // Here we reset everything based on the new window dimensions. case WM_EXITSIZEMOVE: mAppPaused = false; mResizing = false; mTimer.start(); onResize(); return 0; // WM_DESTROY is sent when the window is being destroyed. 
case WM_DESTROY: PostQuitMessage(0); return 0; // The WM_MENUCHAR message is sent when a menu is active and the user presses // a key that does not correspond to any mnemonic or accelerator key. case WM_MENUCHAR: // Don't beep when we alt-enter. return MAKELRESULT(0, MNC_CLOSE); // Catch this message so to prevent the window from becoming too small. case WM_GETMINMAXINFO: ((MINMAXINFO*)lParam)->ptMinTrackSize.x = 200; ((MINMAXINFO*)lParam)->ptMinTrackSize.y = 200; return 0; } return DefWindowProc(mhMainWnd, msg, wParam, lParam); } void D3DApp::initMainWindow() { WNDCLASS wc; wc.style = CS_HREDRAW | CS_VREDRAW; wc.lpfnWndProc = MainWndProc; wc.cbClsExtra = 0; wc.cbWndExtra = 0; wc.hInstance = mhAppInst; wc.hIcon = LoadIcon(0, IDI_APPLICATION); wc.hCursor = LoadCursor(0, IDC_ARROW); wc.hbrBackground = (HBRUSH)GetStockObject(NULL_BRUSH); wc.lpszMenuName = 0; wc.lpszClassName = L"D3DWndClassName"; if( !RegisterClass(&wc) ) { MessageBox(0, L"RegisterClass FAILED", 0, 0); PostQuitMessage(0); } // Compute window rectangle dimensions based on requested client area dimensions. RECT R = { 0, 0, mClientWidth, mClientHeight }; AdjustWindowRect(&R, WS_OVERLAPPEDWINDOW, false); int width = R.right - R.left; int height = R.bottom - R.top; mhMainWnd = CreateWindow(L"D3DWndClassName", mMainWndCaption.c_str(), WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT, width, height, 0, 0, mhAppInst, this); if( !mhMainWnd ) { MessageBox(0, L"CreateWindow FAILED", 0, 0); PostQuitMessage(0); } ShowWindow(mhMainWnd, SW_SHOW); UpdateWindow(mhMainWnd); } void D3DApp::initDirect3D() { // Fill out a DXGI_SWAP_CHAIN_DESC to describe our swap chain. DXGI_SWAP_CHAIN_DESC sd; sd.BufferDesc.Width = mClientWidth; sd.BufferDesc.Height = mClientHeight; sd.BufferDesc.RefreshRate.Numerator = 60; sd.BufferDesc.RefreshRate.Denominator = 1; sd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM; sd.BufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED; sd.BufferDesc.Scaling = DXGI_MODE_SCALING_UNSPECIFIED; // No multisampling. sd.SampleDesc.Count = 1; sd.SampleDesc.Quality = 0; sd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT; sd.BufferCount = 1; sd.OutputWindow = mhMainWnd; sd.Windowed = true; sd.SwapEffect = DXGI_SWAP_EFFECT_DISCARD; sd.Flags = 0; // Create the device. UINT createDeviceFlags = 0; #if defined(DEBUG) || defined(_DEBUG) createDeviceFlags |= D3D10_CREATE_DEVICE_DEBUG; #endif HR( D3D10CreateDeviceAndSwapChain( 0, //default adapter md3dDriverType, 0, // no software device createDeviceFlags, D3D10_SDK_VERSION, &sd, &mSwapChain, &md3dDevice) ); // The remaining steps that need to be carried out for d3d creation // also need to be executed every time the window is resized. So // just call the onResize method here to avoid code duplication. onResize(); }

    Read the article

  • How to use a 3D map ActionScript class in an MXML file to display a map

    - by nemade-vipin
    hello friends, I have created the application in which I have to use 3D map Action Script class in mxml file to display a map in form. that is in tab navigator last tab. My ActionScript 3D map class is(FlyingDirections):- package src.SBTSCoreObject { import src.SBTSCoreObject.JSONDecoder; import com.google.maps.InfoWindowOptions; import com.google.maps.LatLng; import com.google.maps.LatLngBounds; import com.google.maps.Map3D; import com.google.maps.MapEvent; import com.google.maps.MapOptions; import com.google.maps.MapType; import com.google.maps.MapUtil; import com.google.maps.View; import com.google.maps.controls.NavigationControl; import com.google.maps.geom.Attitude; import com.google.maps.interfaces.IPolyline; import com.google.maps.overlays.Marker; import com.google.maps.overlays.MarkerOptions; import com.google.maps.services.Directions; import com.google.maps.services.DirectionsEvent; import com.google.maps.services.Route; import flash.display.Bitmap; import flash.display.DisplayObject; import flash.display.DisplayObjectContainer; import flash.display.Loader; import flash.display.LoaderInfo; import flash.display.Sprite; import flash.events.Event; import flash.events.IOErrorEvent; import flash.events.MouseEvent; import flash.events.TimerEvent; import flash.filters.DropShadowFilter; import flash.geom.Point; import flash.net.URLLoader; import flash.net.URLRequest; import flash.net.navigateToURL; import flash.text.TextField; import flash.text.TextFieldAutoSize; import flash.text.TextFormat; import flash.utils.Timer; import flash.utils.getTimer; public class FlyingDirections extends Map3D { /** * Panoramio home page. */ private static const PANORAMIO_HOME:String = "http://www.panoramio.com/"; /** * The icon for the car. */ [Embed("assets/car-icon-24px.png")] private static const Car:Class; /** * The Panoramio icon. */ [Embed("assets/iw_panoramio.png")] private static const PanoramioIcon:Class; /** * We animate a zoom in to the start the route before the car starts * to move. This constant sets the time in seconds over which this * zoom occurs. */ private static const LEAD_IN_DURATION:Number = 3; /** * Duration of the trip in seconds. */ private static const TRIP_DURATION:Number = 40; /** * Constants that define the geometry of the Panoramio image markers. */ private static const BORDER_T:Number = 3; private static const BORDER_L:Number = 10; private static const BORDER_R:Number = 10; private static const BORDER_B:Number = 3; private static const GAP_T:Number = 2; private static const GAP_B:Number = 1; private static const IMAGE_SCALE:Number = 1; /** * Trajectory that the camera follows over time. Each element is an object * containing properties used to generate parameter values for flyTo(..). * fraction = 0 corresponds to the start of the trip; fraction = 1 * correspondsto the end of the trip. */ private var FLY_TRAJECTORY:Array = [ { fraction: 0, zoom: 6, attitude: new Attitude(0, 0, 0) }, { fraction: 0.2, zoom: 8.5, attitude: new Attitude(30, 30, 0) }, { fraction: 0.5, zoom: 9, attitude: new Attitude(30, 40, 0) }, { fraction: 1, zoom: 8, attitude: new Attitude(50, 50, 0) }, { fraction: 1.1, zoom: 8, attitude: new Attitude(130, 50, 0) }, { fraction: 1.2, zoom: 8, attitude: new Attitude(220, 50, 0) }, ]; /** * Number of panaramio photos for which we load data. We&apos;ll select a * subset of these approximately evenly spaced along the route. */ private static const NUM_GEOTAGGED_PHOTOS:int = 50; /** * Number of panaramio photos that we actually show. 
     */
    private static const NUM_SHOWN_PHOTOS:int = 7;

    /** Scaling between real trip time and animation time. */
    private static const SCALE_TIME:Number = 0.001;

    /**
     * getTimer() value at the instant that we start the trip. If this is 0
     * then we have not yet started the car moving.
     */
    private var startTimer:int = 0;

    /** The current route. */
    private var route:Route;

    /** The polyline for the route. */
    private var polyline:IPolyline;

    /** The car marker. */
    private var marker:Marker;

    /**
     * The cumulative duration in seconds over each step in the route.
     * cumulativeStepDuration[0] is 0; cumulativeStepDuration[1] adds the
     * duration of step 0; cumulativeStepDuration[2] adds the duration
     * of step 1; etc.
     */
    private var cumulativeStepDuration:/*Number*/Array = [];

    /**
     * The cumulative distance in metres over each vertex in the route polyline.
     * cumulativeVertexDistance[0] is 0; cumulativeVertexDistance[1] adds the
     * distance to vertex 1; cumulativeVertexDistance[2] adds the distance to
     * vertex 2; etc.
     */
    private var cumulativeVertexDistance:Array;

    /**
     * Array of photos loaded from Panoramio. This array has the same format as
     * the 'photos' property within the JSON returned by the Panoramio API
     * (see http://www.panoramio.com/api/), with additional properties added to
     * individual photo elements to hold the loader structures that fetch
     * the actual images.
     */
    private var photos:Array = [];

    /**
     * Array of polyline vertices, where each element is in world coordinates.
     * Several computations can be faster if we can use world coordinates
     * instead of LatLng coordinates.
     */
    private var worldPoly:/*Point*/Array;

    /** Whether the start button has been pressed. */
    private var startButtonPressed:Boolean = false;

    /** Saved event from the onDirectionsSuccess call. */
    private var directionsSuccessEvent:DirectionsEvent = null;

    /** Start button. */
    private var startButton:Sprite;

    /** Alpha value used for the Panoramio image markers. */
    private var markerAlpha:Number = 0;

    /**
     * Index of the current driving direction step. Used to update the
     * info window content each time we progress to a new step.
     */
    private var currentStepIndex:int = -1;

    /**
     * The fly directions map constructor.
     * @constructor
     */
    public function FlyingDirections() {
      key = "ABQIAAAA7QUChpcnvnmXxsjC7s1fCxQGj0PqsCtxKvarsoS-iqLdqZSKfxTd7Xf-2rEc_PC9o8IsJde80Wnj4g";
      super();
      addEventListener(MapEvent.MAP_PREINITIALIZE, onMapPreinitialize);
      addEventListener(MapEvent.MAP_READY, onMapReady);
    }

    /**
     * Handles map preinitialize. Initializes the map center and zoom level.
     * @param event The map event.
     */
    private function onMapPreinitialize(event:MapEvent):void {
      setInitOptions(new MapOptions({
        center: new LatLng(-26.1, 135.1),
        zoom: 4,
        viewMode: View.VIEWMODE_PERSPECTIVE,
        mapType: MapType.PHYSICAL_MAP_TYPE
      }));
    }

    /**
     * Handles map ready and looks up directions.
     * @param event The map event.
     */
    private function onMapReady(event:MapEvent):void {
      enableScrollWheelZoom();
      enableContinuousZoom();
      addControl(new NavigationControl());
      // The driving animation will be updated on every frame.
      addEventListener(Event.ENTER_FRAME, enterFrame);
      addStartButton();
      // We start the directions loading now, so that we're ready to go when
      // the user hits the start button.
      var directions:Directions = new Directions();
      directions.addEventListener(
          DirectionsEvent.DIRECTIONS_SUCCESS, onDirectionsSuccess);
      directions.addEventListener(
          DirectionsEvent.DIRECTIONS_FAILURE, onDirectionsFailure);
      directions.load("48 Pirrama Rd, Pyrmont, NSW to Byron Bay, NSW");
    }

    /** Adds a big blue start button. */
    private function addStartButton():void {
      startButton = new Sprite();
      startButton.buttonMode = true;
      startButton.addEventListener(MouseEvent.CLICK, onStartClick);
      startButton.graphics.beginFill(0x1871ce);
      startButton.graphics.drawRoundRect(0, 0, 150, 100, 10, 10);
      startButton.graphics.endFill();

      var startField:TextField = new TextField();
      startField.autoSize = TextFieldAutoSize.LEFT;
      startField.defaultTextFormat = new TextFormat("_sans", 20, 0xffffff, true);
      startField.text = "Start!";
      startButton.addChild(startField);
      startField.x = 0.5 * (startButton.width - startField.width);
      startField.y = 0.5 * (startButton.height - startField.height);
      startButton.filters = [ new DropShadowFilter() ];

      var container:DisplayObjectContainer =
          getDisplayObject() as DisplayObjectContainer;
      container.addChild(startButton);
      startButton.x = 0.5 * (container.width - startButton.width);
      startButton.y = 0.5 * (container.height - startButton.height);

      var panoField:TextField = new TextField();
      panoField.autoSize = TextFieldAutoSize.LEFT;
      panoField.defaultTextFormat = new TextFormat("_sans", 11, 0x000000, true);
      panoField.text =
          "Photos provided by Panoramio are under the copyright of their owners.";
      container.addChild(panoField);
      panoField.x = container.width - panoField.width - 5;
      panoField.y = 5;
    }

    /**
     * Handles directions success. Starts flying the route if everything
     * is ready.
     * @param event The directions event.
     */
    private function onDirectionsSuccess(event:DirectionsEvent):void {
      directionsSuccessEvent = event;
      flyRouteIfReady();
    }

    /**
     * Handles click on the start button. Starts flying the route if everything
     * is ready.
     */
    private function onStartClick(event:MouseEvent):void {
      startButton.removeEventListener(MouseEvent.CLICK, onStartClick);
      var container:DisplayObjectContainer =
          getDisplayObject() as DisplayObjectContainer;
      container.removeChild(startButton);
      startButtonPressed = true;
      flyRouteIfReady();
    }

    /**
     * If we have loaded the directions and the start button has been pressed,
     * start flying the directions route.
     */
    private function flyRouteIfReady():void {
      if (!directionsSuccessEvent || !startButtonPressed) {
        return;
      }
      var directions:Directions = directionsSuccessEvent.directions;

      // Extract the route.
      route = directions.getRoute(0);

      // Draw the polyline showing the route.
      polyline = directions.createPolyline();
      addOverlay(polyline);

      // Create a car marker that is moved along the route.
      var car:DisplayObject = new Car();
      marker = new Marker(route.startGeocode.point, new MarkerOptions({
        icon: car,
        iconOffset: new Point(-car.width / 2, -car.height)
      }));
      addOverlay(marker);

      transformPolyToWorld();
      createCumulativeArrays();

      // Load Panoramio data for the region covered by the route.
      loadPanoramioData(directions.bounds);

      var duration:Number = route.duration;

      // Start a timer that will trigger the car moving after the lead-in time.
      var leadInTimer:Timer = new Timer(LEAD_IN_DURATION * 1000, 1);
      leadInTimer.addEventListener(TimerEvent.TIMER, onLeadInDone);
      leadInTimer.start();

      var flyTime:Number = -LEAD_IN_DURATION;
      // Set up the camera flight trajectory.
      for each (var flyStep:Object in FLY_TRAJECTORY) {
        var time:Number = flyStep.fraction * duration;
        var center:LatLng = latLngAt(time);
        var scaledTime:Number = time * SCALE_TIME;
        var zoom:Number = flyStep.zoom;
        var attitude:Attitude = flyStep.attitude;
        var elapsed:Number = scaledTime - flyTime;
        flyTime = scaledTime;
        flyTo(center, zoom, attitude, elapsed);
      }
    }

    /**
     * Loads Panoramio data for the route bounds. We load data about more photos
     * than we need, then select a subset lying along the route.
     * @param bounds Bounds within which to fetch images.
     */
    private function loadPanoramioData(bounds:LatLngBounds):void {
      var params:Object = {
        order: "popularity",
        set: "full",
        from: "0",
        to: NUM_GEOTAGGED_PHOTOS.toString(10),
        size: "small",
        minx: bounds.getWest(),
        miny: bounds.getSouth(),
        maxx: bounds.getEast(),
        maxy: bounds.getNorth()
      };
      var loader:URLLoader = new URLLoader();
      var request:URLRequest = new URLRequest(
          "http://www.panoramio.com/map/get_panoramas.php?" +
          paramsToString(params));
      loader.addEventListener(Event.COMPLETE, onPanoramioDataLoaded);
      loader.addEventListener(IOErrorEvent.IO_ERROR, onPanoramioDataFailed);
      loader.load(request);
    }

    /** Transforms the route polyline to world coordinates. */
    private function transformPolyToWorld():void {
      var numVertices:int = polyline.getVertexCount();
      worldPoly = new Array(numVertices);
      for (var i:int = 0; i < numVertices; ++i) {
        var vertex:LatLng = polyline.getVertex(i);
        worldPoly[i] = fromLatLngToPoint(vertex, 0);
      }
    }

    /**
     * Returns the time at which the route approaches closest to the
     * given point.
     * @param world Point in world coordinates.
     * @return Route time at which the closest approach occurs.
     */
    private function getTimeOfClosestApproach(world:Point):Number {
      var minDistSqr:Number = Number.MAX_VALUE;
      var numVertices:int = worldPoly.length;
      var x:Number = world.x;
      var y:Number = world.y;
      var minVertex:int = 0;
      for (var i:int = 0; i < numVertices; ++i) {
        var dx:Number = worldPoly[i].x - x;
        var dy:Number = worldPoly[i].y - y;
        var distSqr:Number = dx * dx + dy * dy;
        if (distSqr < minDistSqr) {
          minDistSqr = distSqr;
          minVertex = i;
        }
      }
      return cumulativeVertexDistance[minVertex];
    }

    /**
     * Returns the array index of the first element that compares greater than
     * the given value.
     * @param ordered Ordered array of elements.
     * @param value Value to use for comparison.
     * @return Array index of the first element that compares greater than
     *     the given value.
     */
    private function upperBound(ordered:Array, value:Number,
                                first:int = 0, last:int = -1):int {
      if (last < 0) {
        last = ordered.length;
      }
      var count:int = last - first;
      var index:int;
      while (count > 0) {
        var step:int = count >> 1;
        index = first + step;
        if (value >= ordered[index]) {
          first = index + 1;
          count -= step + 1;
        } else {
          count = step;
        }
      }
      return first;
    }

    /**
     * Selects up to a given number of photos approximately evenly spaced along
     * the route.
     * @param ordered Array of photos, each of which is an object with
     *     a property 'closestTime'.
     * @param number Number of photos to select.
     */
    private function selectEvenlySpacedPhotos(ordered:Array, number:int):Array {
      var start:Number = cumulativeVertexDistance[0];
      var end:Number =
          cumulativeVertexDistance[cumulativeVertexDistance.length - 2];
      var closestTimes:Array = [];
      for each (var photo:Object in ordered) {
        closestTimes.push(photo.closestTime);
      }
      var selectedPhotos:Array = [];
      for (var i:int = 0; i < number; ++i) {
        var idealTime:Number = start + ((end - start) * (i + 0.5) / number);
        var index:int = upperBound(closestTimes, idealTime);
        if (index < 1) {
          index = 0;
        } else if (index >= ordered.length) {
          index = ordered.length - 1;
        } else {
          var errorToPrev:Number = Math.abs(idealTime - closestTimes[index - 1]);
          var errorToNext:Number = Math.abs(idealTime - closestTimes[index]);
          if (errorToPrev < errorToNext) {
            --index;
          }
        }
        selectedPhotos.push(ordered[index]);
      }
      return selectedPhotos;
    }

    /**
     * Handles completion of loading the Panoramio index data. Selects from the
     * returned photo indices a subset of those that lie along the route and
     * initiates a load of each of these.
     * @param event Load completion event.
     */
    private function onPanoramioDataLoaded(event:Event):void {
      var loader:URLLoader = event.target as URLLoader;
      var decoder:JSONDecoder = new JSONDecoder(loader.data as String);
      var allPhotos:Array = decoder.getValue().photos;
      for each (var photo:Object in allPhotos) {
        var latLng:LatLng = new LatLng(photo.latitude, photo.longitude);
        photo.closestTime =
            getTimeOfClosestApproach(fromLatLngToPoint(latLng, 0));
      }
      allPhotos.sortOn("closestTime", Array.NUMERIC);
      photos = selectEvenlySpacedPhotos(allPhotos, NUM_SHOWN_PHOTOS);
      for each (photo in photos) {
        var photoLoader:Loader = new Loader();
        // The images aren't on panoramio.com: we can't acquire pixel access
        // using "new LoaderContext(true)".
        photoLoader.load(new URLRequest(photo.photo_file_url));
        photo.loader = photoLoader;
        // Save the loader info: we use this to find the original element when
        // the load completes.
        photo.loaderInfo = photoLoader.contentLoaderInfo;
        photoLoader.contentLoaderInfo.addEventListener(
            Event.COMPLETE, onPhotoLoaded);
      }
    }

    /**
     * Creates a MouseEvent listener function that will navigate to the given
     * URL in a new window.
     * @param url URL to which to navigate.
     */
    private function createOnClickUrlOpener(url:String):Function {
      return function(event:MouseEvent):void {
        navigateToURL(new URLRequest(url));
      };
    }

    /**
     * Handles completion of loading an individual Panoramio image.
     * Adds a custom marker that displays the image. Initially this is made
     * invisible so that it can be faded in as needed.
     * @param event Load completion event.
     */
    private function onPhotoLoaded(event:Event):void {
      var loaderInfo:LoaderInfo = event.target as LoaderInfo;
      // We need to find which photo element this image corresponds to.
      for each (var photo:Object in photos) {
        if (loaderInfo == photo.loaderInfo) {
          var imageMarker:Sprite =
              createImageMarker(photo.loader, photo.owner_name, photo.owner_url);
          var options:MarkerOptions = new MarkerOptions({
            icon: imageMarker,
            hasShadow: true,
            iconAlignment: MarkerOptions.ALIGN_BOTTOM | MarkerOptions.ALIGN_LEFT
          });
          var latLng:LatLng = new LatLng(photo.latitude, photo.longitude);
          var marker:Marker = new Marker(latLng, options);
          photo.marker = marker;
          addOverlay(marker);
          // A hack: we add the actual image after the overlay has been added,
          // which creates the shadow, so that the shadow is valid even if we
          // don't have security privileges to generate the shadow from the
          // image.
          marker.foreground.visible = false;
          marker.shadow.alpha = 0;
          var imageHolder:Sprite = new Sprite();
          imageHolder.addChild(photo.loader);
          imageHolder.buttonMode = true;
          imageHolder.addEventListener(
              MouseEvent.CLICK, createOnClickUrlOpener(photo.photo_url));
          imageMarker.addChild(imageHolder);
          return;
        }
      }
      trace("An image was loaded which could not be found in the photo array.");
    }

    /** Creates a custom marker showing an image. */
    private function createImageMarker(child:DisplayObject, ownerName:String,
                                       ownerUrl:String):Sprite {
      var content:Sprite = new Sprite();
      var panoramioIcon:Bitmap = new PanoramioIcon();
      var iconHolder:Sprite = new Sprite();
      iconHolder.addChild(panoramioIcon);
      iconHolder.buttonMode = true;
      iconHolder.addEventListener(MouseEvent.CLICK, onPanoramioIconClick);
      panoramioIcon.x = BORDER_L;
      panoramioIcon.y = BORDER_T;
      content.addChild(iconHolder);

      // NOTE: we add the image as a child only after we've added the marker
      // to the map. Currently the API requires this if it's to generate the
      // shadow for unprivileged content.
      // Shrink the image, so that it doesn't obscure too much screen space.
      // Ideally, we'd subsample, but we don't have pixel level access.
      child.scaleX = IMAGE_SCALE;
      child.scaleY = IMAGE_SCALE;
      var imageW:Number = child.width;
      var imageH:Number = child.height;
      child.x = BORDER_L + 30;
      child.y = BORDER_T + iconHolder.height + GAP_T;

      var authorField:TextField = new TextField();
      authorField.autoSize = TextFieldAutoSize.LEFT;
      authorField.defaultTextFormat = new TextFormat("_sans", 12);
      authorField.text = "author:";
      content.addChild(authorField);
      authorField.x = BORDER_L;
      authorField.y = BORDER_T + iconHolder.height + GAP_T + imageH + GAP_B;

      var ownerField:TextField = new TextField();
      ownerField.autoSize = TextFieldAutoSize.LEFT;
      var textFormat:TextFormat = new TextFormat("_sans", 14, 0x0e5f9a);
      ownerField.defaultTextFormat = textFormat;
      ownerField.htmlText =
          "<a href=\"" + ownerUrl + "\" target=\"_blank\">" + ownerName + "</a>";
      content.addChild(ownerField);
      ownerField.x = BORDER_L + authorField.width;
      ownerField.y = BORDER_T + iconHolder.height + GAP_T + imageH + GAP_B;

      var totalW:Number = BORDER_L +
          Math.max(imageW, ownerField.width + authorField.width) + BORDER_R;
      var totalH:Number = BORDER_T + iconHolder.height + GAP_T + imageH +
          GAP_B + ownerField.height + BORDER_B;
      content.graphics.beginFill(0xffffff);
      content.graphics.drawRoundRect(0, 0, totalW, totalH, 10, 10);
      content.graphics.endFill();

      var marker:Sprite = new Sprite();
      marker.addChild(content);
      content.x = 30;
      content.y = 0;
      marker.graphics.lineStyle();
      marker.graphics.beginFill(0xff0000);
      marker.graphics.drawCircle(0, totalH + 30, 3);
      marker.graphics.endFill();
      marker.graphics.lineStyle(2, 0xffffff);
      marker.graphics.moveTo(30 + 10, totalH - 10);
      marker.graphics.lineTo(0, totalH + 30);
      return marker;
    }

    /** Handles click on the Panoramio icon. */
    private function onPanoramioIconClick(event:MouseEvent):void {
      navigateToURL(new URLRequest(PANORAMIO_HOME));
    }

    /** Handles failure of a Panoramio image load. */
    private function onPanoramioDataFailed(event:IOErrorEvent):void {
      trace("Load of image failed: " + event);
    }

    /**
     * Returns a string containing CGI query parameters.
     * @param params Associative array mapping query parameter key to value.
     * @return String containing CGI query parameters.
     */
    private static function paramsToString(params:Object):String {
      var result:String = "";
      var separator:String = "";
      for (var key:String in params) {
        result += separator + encodeURIComponent(key) + "=" +
            encodeURIComponent(params[key]);
        separator = "&";
      }
      return result;
    }

    /**
     * Called once the lead-in flight is done. Starts the car driving along
     * the route and starts a timer to begin fade-in of the Panoramio
     * images in 1.5 seconds.
     */
    private function onLeadInDone(event:Event):void {
      // Set startTimer non-zero so that the car starts to move.
      startTimer = getTimer();
      // Start a timer that will fade in the Panoramio images.
      var fadeInTimer:Timer = new Timer(1500, 1);
      fadeInTimer.addEventListener(TimerEvent.TIMER, onFadeInTimer);
      fadeInTimer.start();
    }

    /**
     * Handles the fade-in timer's TIMER event. Sets markerAlpha above zero,
     * which causes the frame enter handler to fade in the markers.
     */
    private function onFadeInTimer(event:Event):void {
      markerAlpha = 0.01;
    }

    /** The end time of the flight. */
    private function get endTime():Number {
      if (!cumulativeStepDuration || cumulativeStepDuration.length == 0) {
        return startTimer;
      }
      return startTimer +
          cumulativeStepDuration[cumulativeStepDuration.length - 1];
    }

    /**
     * Creates the cumulative arrays, cumulativeStepDuration and
     * cumulativeVertexDistance.
     */
    private function createCumulativeArrays():void {
      cumulativeStepDuration = new Array(route.numSteps + 1);
      cumulativeVertexDistance = new Array(polyline.getVertexCount() + 1);
      var polylineTotal:Number = 0;
      var total:Number = 0;
      var numVertices:int = polyline.getVertexCount();
      for (var stepIndex:int = 0; stepIndex < route.numSteps; ++stepIndex) {
        cumulativeStepDuration[stepIndex] = total;
        total += route.getStep(stepIndex).duration;
        var startVertex:int =
            stepIndex >= 0 ? route.getStep(stepIndex).polylineIndex : 0;
        var endVertex:int = stepIndex < (route.numSteps - 1) ?
            route.getStep(stepIndex + 1).polylineIndex : numVertices;
        var duration:Number = route.getStep(stepIndex).duration;
        var stepVertices:int = endVertex - startVertex;
        var latLng:LatLng = polyline.getVertex(startVertex);
        for (var vertex:int = startVertex; vertex < endVertex; ++vertex) {
          cumulativeVertexDistance[vertex] = polylineTotal;
          if (vertex < numVertices - 1) {
            var nextLatLng:LatLng = polyline.getVertex(vertex + 1);
            polylineTotal += nextLatLng.distanceFrom(latLng);
          }
          latLng = nextLatLng;
        }
      }
      cumulativeStepDuration[stepIndex] = total;
    }

    /**
     * Opens the info window above the car icon that details the given
     * step of the driving directions.
     * @param stepIndex Index of the current step.
     */
    private function openInfoForStep(stepIndex:int):void {
      // Set the content of the info window.
      var content:String;
      if (stepIndex >= route.numSteps) {
        content = "<b>" + route.endGeocode.address + "</b>" +
            "<br /><br />" + route.summaryHtml;
      } else {
        content = "<b>" + stepIndex + ".</b> " +
            route.getStep(stepIndex).descriptionHtml;
      }
      marker.openInfoWindow(new InfoWindowOptions({ contentHTML: content }));
    }

    /**
     * Displays the driving directions step appropriate for the given time.
     * Opens the info window showing the step instructions each time we
     * progress to a new step.
     * @param time Time for which to display the step.
     */
    private function displayStepAt(time:Number):void {
      var stepIndex:int = upperBound(cumulativeStepDuration, time) - 1;
      var minStepIndex:int = 0;
      var maxStepIndex:int = route.numSteps - 1;
      if (stepIndex >= 0 && stepIndex <= maxStepIndex &&
          currentStepIndex != stepIndex) {
        openInfoForStep(stepIndex);
        currentStepIndex = stepIndex;
      }
    }

    /**
     * Returns the LatLng at which the car should be positioned at the given
     * time.
     * @param time Time for which the LatLng should be found.
     * @return LatLng.
     */
    private function latLngAt(time:Number):LatLng {
      var stepIndex:int = upperBound(cumulativeStepDuration, time) - 1;
      var minStepIndex:int = 0;
      var maxStepIndex:int = route.numSteps - 1;
      if (stepIndex < minStepIndex) {
        return route.startGeocode.point;
      } else if (stepIndex > maxStepIndex) {
        return route.endGeocode.point;
      }
      var stepStart:Number = cumulativeStepDuration[stepIndex];
      var stepEnd:Number = cumulativeStepDuration[stepIndex + 1];
      var stepFraction:Number = (time - stepStart) / (stepEnd - stepStart);
      var startVertex:int = route.getStep(stepIndex).polylineIndex;
      var endVertex:int = (stepIndex + 1) < route.numSteps ?
          route.getStep(stepIndex + 1).polylineIndex : polyline.getVertexCount();
      var stepVertices:int = endVertex - startVertex;
      var stepLeng
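    A minimal MXML usage sketch for the question above follows; it is a sketch, not a definitive implementation. It assumes the Flex build of the Google Maps API for Flash SWC is on the library path, so that a Map3D subclass is a UIComponent and can be declared as an MXML tag; the namespace prefix, tab labels, and id below are illustrative only. With the pure-Flash (Sprite-based) build of the SWC, FlyingDirections could not be declared in MXML directly and would instead have to be added with addChild() to an mx:UIComponent wrapper from a creationComplete handler.

    <?xml version="1.0" encoding="utf-8"?>
    <!-- Sketch: hosts the 3D map class in the last tab of a TabNavigator. -->
    <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"
                    xmlns:sbts="src.SBTSCoreObject.*"
                    layout="vertical">

      <mx:TabNavigator width="100%" height="100%">
        <!-- earlier form tabs go here -->
        <mx:VBox label="Details" width="100%" height="100%"/>

        <!-- last tab: the flying-directions 3D map -->
        <mx:VBox label="Map" width="100%" height="100%">
          <sbts:FlyingDirections id="flyMap" width="100%" height="100%"/>
        </mx:VBox>
      </mx:TabNavigator>

    </mx:Application>

    Under that assumption no further wiring is needed: the class sets its own API key and registers its MAP_PREINITIALIZE and MAP_READY handlers in its constructor, so the map comes up when the tab is first created (note that TabNavigator defers creating a tab's children until the tab is first shown).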

    Read the article
