
  • Summit Time!

    - by Ajarn Mark Caldwell
Boy, how time flies! I can hardly believe that the 2011 PASS Summit is just one week away. Maybe it snuck up on me because it's a few weeks earlier than last year. Whatever the cause, I am really looking forward to next week. The PASS Summit is the largest SQL Server conference in the world, with a fantastic networking opportunity thrown in for no additional charge. Here are a few thoughts to help you maximize the week.

Networking

As Karen Lopez (blog | @DataChick) mentioned in her presentation for the Professional Development Virtual Chapter just a couple of weeks ago, "Don't wait until you need a new job to start networking." You should always be working on your professional network. Some people, especially technical-minded people, get confused by the term networking. The first image that used to pop into my head was of some guy standing awkwardly off to the side of a cocktail party, trying to schmooze those around him. That's not what I'm talking about. If you're good at that sort of thing, and you can strike up a conversation with a stranger, learn all about them in 5 minutes, and walk away with your next business deal all but approved by the lawyers, then congratulations. But if you're not, and most of us are not, I have two suggestions for you. First, register for Don Gabor's 2-hour session on Tuesday at the Summit called Networking to Build Business Contacts. Don is a master at small talk, and at teaching others, and in just those two short hours he will give you important tips about breaking the ice, remembering names, and making smooth transitions into and out of conversations. Then go put that great training to work right away at the Tuesday night Welcome Reception and meet some new people, which is really my second suggestion…just meet a few new people. You see, "networking" is about meeting new people and being friendly, without trying to "work it" to get something out of the relationship at this point. In fact, Don will tell you that a better way to build the connection with someone is to look for some way that you can help them, not how they can help you.

There are a ton of opportunities as long as you follow this one key point: Don't stay in your hotel! At the least, get out and go to the free events such as the Tuesday night Welcome Reception, the Wednesday night Exhibitor Reception, and the Thursday night Community Appreciation Party. All three of these are perfect opportunities to meet other professionals with a similar job or interests to yours, and you never know how that may help you out in the future. Maybe you just meet someone to say hi to at breakfast the next day instead of eating alone. Or maybe you cross paths several times throughout the Summit and compare notes on different sessions you attended. And you just might make new friends that you look forward to seeing year after year at the Summit. Who knows, it might even turn out that you have some specific experience that will help out that other person a few months from now when they run into the same challenge that you just overcame, or vice versa. But the point is, if you don't get out and meet people, you'll never have the chance for anything else to happen in the future.
One more tip for shy attendees of the Summit: if you can't bring yourself to strike up conversations with strangers at these events, then at the least, after you sit through a good session that helps you out, go up to the speaker, introduce yourself, and thank them for taking the time and effort to put together their presentation. Ideally, when you do this, tell them WHY it was beneficial to you (e.g. "Now I have a new idea of how to tackle a problem back at the office."). I know you think the speakers are all full of confidence and are always receiving a ton of accolades and applause, but you're wrong. Most of them will be very happy to hear first-hand that all the work they put into getting ready for their presentation is paying off for somebody.

Training

With over 170 technical sessions at the Summit, training is what it's all about, and the training is fantastic! Of course there are the big-name trainers like Paul Randal, Kimberly Tripp, Kalen Delaney, Itzik Ben-Gan and several others, but I am always impressed by the quality of the training put on by so many other "regular" members of the SQL Server community. It is amazing how you don't have to be a published author or otherwise recognized as an "expert" in an area in order to make a big impact on others just by sharing your personal experience and lessons learned. I would rather hear the story of, and lessons learned from, "some guy or gal" who has actually been through an issue and come out the other side, than from a trained professor who is speaking just from theory or an intellectual understanding of a topic.

In addition to the three full days of regular sessions, there are also two days of pre-conference intensive training available. There is an extra cost to this, but it is a fantastic opportunity. Think about it: you're already coming to this area for training, so why not extend your stay a little bit and get some in-depth training on a particular topic or two? I did this for the first time last year. I attended one day of extra training and it was well worth the time and money. One of the best reasons for it is that I am so busy at home with my regular job and family that it is hard to carve out the time to learn about a topic on my own. It worked out so well last year that I am doubling up and doing two days of "pre-cons" this year.

And then there are the DVDs. I think these are another great option. I used the online schedule builder to get ready and have an idea of which sessions I want to attend and when they are (much better than trying to figure this out at the last minute every day). But the problem that I have run into (it seems this happens every year) is that nearly every session block has two different sessions that I would like to attend. And some of them have three! ACK! That won't work! What is a guy supposed to do? Well, one option is to purchase the DVDs, which are recordings of the audio and projected images from each session, so you can continue to attend sessions long after the Summit is officially over. Yes, many (possibly all) of these also get posted online and attendees can access those for no extra charge, but those are not necessarily available as quickly as the DVD recordings are, and the DVDs are often more convenient than downloading, especially if you want to share the training with someone who was not able to attend in person. Remember, I don't make any money or get any other benefit if you buy the DVDs or from anything else that I have recommended here.
These are just my own thoughts, trying to help out based on my experiences from the 8 or so Summits I have attended.  There is nothing like the Summit.  It is an awesome experience, fantastic training, and a whole lot of fun which is just compounded if you’ll take advantage of the first part of this article and make some new friends along the way.


  • Rapid Evolution of Society & Technology

    - by Michael Snow
We caught up with Brian Solis on the phone the other day, and Christie Flanagan had a chance to chat with him and learn a bit more about him and some of the concepts he'll be addressing in our Social Business Thought Leaders Webcast on Thursday 12/13/12. Be sure to register for this week's webcast.

Guest post by Brian Solis. Reposted (borrowed) from his posting of May 24, 2012.

Dear [insert business name], what's your promise? - Brian Solis

You say you want to get closer to customers, but your actions are different from your words. You say you want to "surprise and delight" customers, but your product development teams are too busy building against a roadmap without consideration of the 5th P of marketing…people. Your employees are your number one asset, but the infrastructure of the organization has turned once optimistic and ambitious intrapreneurs into complacent cogs or, worse, your greatest detractors. You question the adoption of disruptive technology by your internal champions, yet you've not tried to find the value for yourself. You're a change agent and you truly wish to bring about change, but you've not invested time or resources to answer "why" in your endeavors to become a connected or social business.

If we are to truly change, we must find purpose. We must uncover the essence of our business and the value it delivers to traditional and connected consumers. We must rethink the spirit of today's embrace and clearly articulate how transformation is going to improve customer and employee experiences and relationships now and over time. Without doing so, any attempts at evolution will be thwarted by reality.

In an era of Digital Darwinism, no business is too big to fail or too small to succeed. These are undisciplined times which require alternative approaches to recognize and pursue new opportunities. But everything begins with acknowledging that the 360 view of the world that you see today is actually a filtered view of managed and efficient convenience. Today, many organizations that were once inspired by innovation and engagement have fallen into a process of marketing, operationalizing, managing, and optimizing. That might have worked for the better part of the last century, but for the next 10 years and beyond, new vision, leadership and supporting business models will be written to move businesses from rigid frameworks to adaptive and agile entities.

I believe that today's executives will undergo a great test; a test of character, vision, intention, and universal leadership. It starts with a simple, but essential question…what is your promise? Notice, I didn't ask about your brand promise. Nor did I ask for you to cite your mission and vision statements.
This is much more than value propositions or manufactured marketing language designed to hook audiences and stakeholders. I asked for your promise to me as your consumer, stakeholder, and partner. This isn't about B2B or B2C, but instead people to people, person to person. It is this promise that will breathe new life into an organization that, on the outside, could be misdiagnosed as catatonic by those who are disrupting your markets.

A promise, for example, is meant to inspire. It creates alignment. It serves as the foundation for your vision, mission, and all business strategies, and it must come from the top to mean anything. For without it, we cannot genuinely voice what it is we stand for or stand behind. Think for a moment about the definition of community. It's easy to confuse a workplace or a market where everyone simply shares common characteristics. However, a community in this day and age is much more than belonging to something; it's about doing something together that makes belonging matter.

The next few years will force a divide where companies are separated by intention as measured by actions and words. But becoming a social business is not enough. Becoming more authentic and transparent doesn't serve as a mantra for a renaissance. A promise is the ink that inscribes the spirit of the relationship between you and me. A promise serves as the words that influence change from within and change beyond the halls of our business. It is the foundation for a renewed embrace, one that must then find its way to every aspect of the organization. It's the difference between a social business and an adaptive business. While an adaptive business can also be social, it is the culture of the organization that strives not just to use technology to extend current philosophies or processes into new domains, but instead to give rise to a new culture where striving for relevance is among its goals. The tools and networks simply become enablers of a greater mission.

You are reading this because you believe in something more than what you're doing today. While you fight for change within your organization, remember to aim for a higher purpose. Organizations that strive for innovation, imagination, and relevance will outperform those that do not. Part of your job is to lead a missionary push that unites the groundswell with a top-down cascade. Change will only happen because you and other internal champions see what others can't and will do what others won't. It takes resolve. It takes the ability to translate new opportunities into business value. And, it takes courage.

"This is a very noisy world, so we have to be very clear what we want them to know about us" - Steve Jobs

So -- where do you begin to evaluate the kind of experience you are delivering for your customers, partners, and employees? Take a look at this white paper: Creating a Successful and Meaningful Customer Experience on the Web, and then have a cup of coffee while you listen to the sage advice of Guy Kawasaki in the short video below.

An interview with Guy Kawasaki on Maximizing Social Media Channels


  • Retrieving a list of eBay categories using the .NET SDK and GetCategoriesCall

    - by Bill Osuch
eBay offers a .NET SDK for its Trading API - this post will show you the basics of making an API call and retrieving a list of current categories. You'll need the category ID(s) for any apps that post or search eBay.

To start, download the latest SDK from https://www.x.com/developers/ebay/documentation-tools/sdks/dotnet and create a new console app project. Add a reference to the eBay.Service DLL, and a few using statements:

    using eBay.Service.Call;
    using eBay.Service.Core.Sdk;
    using eBay.Service.Core.Soap;

I'm assuming at this point you've already joined the eBay Developer Network and gotten your app IDs and user tokens. If not:

    Join the developer program
    Generate tokens

Next, add an app.config file that looks like this:

    <?xml version="1.0"?>
    <configuration>
      <appSettings>
        <add key="Environment.ApiServerUrl" value="https://api.ebay.com/wsapi"/>
        <add key="UserAccount.ApiToken" value="YourBigLongToken"/>
      </appSettings>
    </configuration>

And then add the code to get the xml list of categories:

    ApiContext apiContext = GetApiContext();
    GetCategoriesCall apiCall = new GetCategoriesCall(apiContext);
    apiCall.CategorySiteID = "0";

    //Leave this commented out to retrieve all category levels (all the way down):
    //apiCall.LevelLimit = 4;

    //Uncomment this to begin at a specific parent category:
    //StringCollection parentCategories = new StringCollection();
    //parentCategories.Add("63");
    //apiCall.CategoryParent = parentCategories;

    apiCall.DetailLevelList.Add(DetailLevelCodeType.ReturnAll);
    CategoryTypeCollection cats = apiCall.GetCategories();

    using (StreamWriter outfile = new StreamWriter(@"C:\Temp\EbayCategories.xml"))
    {
        outfile.Write(apiCall.SoapResponse);
    }

GetApiContext() (provided in the sample apps in the SDK) is required for any call. Note that it relies on a static ApiContext field and on System.Configuration for ConfigurationManager:

    static ApiContext apiContext; //singleton instance, shared by all calls

    static ApiContext GetApiContext()
    {
        //apiContext is a singleton,
        //to avoid duplicate configuration reading
        if (apiContext != null)
        {
            return apiContext;
        }
        else
        {
            apiContext = new ApiContext();

            //set Api Server Url
            apiContext.SoapApiServerUrl =
                ConfigurationManager.AppSettings["Environment.ApiServerUrl"];

            //set Api Token to access eBay Api Server
            ApiCredential apiCredential = new ApiCredential();
            apiCredential.eBayToken =
                ConfigurationManager.AppSettings["UserAccount.ApiToken"];
            apiContext.ApiCredential = apiCredential;

            //set eBay Site target to US
            apiContext.Site = SiteCodeType.US;

            return apiContext;
        }
    }

Running this will give you a large (4 or 5 MB) XML file that looks something like this:

    <soapenv:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
      <soapenv:Body>
        <GetCategoriesResponse>
          <Timestamp>2012-06-06T16:03:46.158Z</Timestamp>
          <Ack>Success</Ack>
          <CorrelationID>d02dd9e3-295a-4268-9ea5-554eeb2e0e18</CorrelationID>
          <Version>775</Version>
          <Build>E775_CORE_BUNDLED_14891042_R1</Build>
          <CategoryArray>
            <Category>
              <BestOfferEnabled>true</BestOfferEnabled>
              <AutoPayEnabled>true</AutoPayEnabled>
              <CategoryID>20081</CategoryID>
              <CategoryLevel>1</CategoryLevel>
              <CategoryName>Antiques</CategoryName>
              <CategoryParentID>20081</CategoryParentID>
            </Category>
            <Category>
              <BestOfferEnabled>true</BestOfferEnabled>
              <AutoPayEnabled>true</AutoPayEnabled>
              <CategoryID>37903</CategoryID>
              <CategoryLevel>2</CategoryLevel>
              <CategoryName>Antiquities</CategoryName>
              <CategoryParentID>20081</CategoryParentID>
            </Category>
            (etc.)

You could work with this, but I wanted a nicely nested view, like this:

    <CategoryArray>
      <Category Name='Antiques' ID='20081' Level='1'>
        <Category Name='Antiquities' ID='37903' Level='2'/>
      </Category>
    </CategoryArray>

...so I transformed the xml:

    private void TransformXML(CategoryTypeCollection cats)
    {
        XmlElement topLevelElement = null;
        XmlElement childLevelElement = null;
        XmlNode parentNode = null;
        string categoryString = "";

        XmlDocument returnDoc = new XmlDocument();
        XmlElement root = returnDoc.CreateElement("CategoryArray");
        returnDoc.AppendChild(root);
        XmlNode rootNode = returnDoc.SelectSingleNode("/CategoryArray");

        //Loop through CategoryTypeCollection
        foreach (CategoryType category in cats)
        {
            if (category.CategoryLevel == 1)
            {
                //Top-level category, so we know we can just add it
                topLevelElement = returnDoc.CreateElement("Category");
                topLevelElement.SetAttribute("Name", category.CategoryName);
                topLevelElement.SetAttribute("ID", category.CategoryID);
                //also write the Level attribute shown in the sample output
                topLevelElement.SetAttribute("Level", category.CategoryLevel.ToString());
                rootNode.AppendChild(topLevelElement);
            }
            else
            {
                //Level number determines how many Category nodes deep we are
                categoryString = "";
                for (int x = 1; x < category.CategoryLevel; x++)
                {
                    categoryString += "/Category";
                }

                parentNode = returnDoc.SelectSingleNode("/CategoryArray" + categoryString +
                    "[@ID='" + category.CategoryParentID[0] + "']");
                childLevelElement = returnDoc.CreateElement("Category");
                childLevelElement.SetAttribute("Name", category.CategoryName);
                childLevelElement.SetAttribute("ID", category.CategoryID);
                childLevelElement.SetAttribute("Level", category.CategoryLevel.ToString());
                parentNode.AppendChild(childLevelElement);
            }
        }

        returnDoc.Save(@"C:\Temp\EbayCategories-Modified.xml");
    }

Yes, there are probably much cleaner ways of dealing with it, but I'm not an xml expert… Keep in mind, eBay categories do not change on a regular basis, so you should be able to cache this data (either in a file or database) for some time. The xml returns a CategoryVersion node that you can use to determine if the category list has changed.

Technorati Tags: Csharp, eBay
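As a rough illustration of that version check, here is a minimal caching sketch. It assumes the call object surfaces the response's CategoryVersion through a CategoryVersion property and that a shallow LevelLimit call is enough to read it; both are assumptions to verify against your SDK version (the version string is also present in the raw SoapResponse if the property is named differently):

    // Hypothetical caching sketch: skip the big download when the category
    // version has not changed since the last run.
    string versionFile = @"C:\Temp\EbayCategoryVersion.txt";
    string cachedVersion = File.Exists(versionFile) ? File.ReadAllText(versionFile) : "";

    GetCategoriesCall versionCall = new GetCategoriesCall(GetApiContext());
    versionCall.CategorySiteID = "0";
    versionCall.LevelLimit = 1; // shallow call: we only need the version, not the tree
    versionCall.DetailLevelList.Add(DetailLevelCodeType.ReturnAll);
    versionCall.GetCategories();

    // Assumption: CategoryVersion exposes the response's version string
    if (versionCall.CategoryVersion != cachedVersion)
    {
        // The tree changed: re-run the full download and transform from above,
        // then remember the new version for next time.
        File.WriteAllText(versionFile, versionCall.CategoryVersion);
    }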


  • CodePlex Daily Summary for Monday, July 15, 2013

CodePlex Daily Summary for Monday, July 15, 2013

Popular Releases

- MVC Forum: MVC Forum v1.0: Finally version 1.0 is here! We have been fixing a few bugs, and making sure the release is as stable as possible. We have also changed the way configuration of the application works, mostly how to add your own code or replace some of the standard code with your own. If you download and use our software, please give us some sort of feedback, good or bad!
- SharePoint 2013 TypeScript Definitions: Release 1.1: TypeScript 0.9 support; the SharePoint TypeScript definitions are now compliant with the new version of TypeScript. TypeScript generics are now used for defining collection classes (derivatives of the SP.ClientCollection object). Improved coverage: added mQuery (m$) definitions, added SPClientAutoFill definitions, the SP.Utilities namespace is now fully covered, SP.UI modal dialog definitions improved, CSR definitions improved with some missing methods and context properties added, mostly related to list ...
- GoAgent GUI: GoAgent GUI 1.0.0: requires .Net Framework 4.0; Windows 8 x64/x86 and Windows 7 x64/x86; Windows 8.1 Preview (x86/x64); Windows XP. (The Chinese portions of this entry are garbled in the source.)
- PiGraph: PiGraph 2.0.8.13: (translated from Vietnamese) Fixed: negative numbers could not be entered; a tabindex bug in the interface; functions added. Not yet fixed: the note blinking-color bug; the note frame running past the border when saving a file. Note: if the program will not start, update your graphics card driver to the latest version (AMD Graphics Drivers, NVIDIA Driver); see the system requirements.
- D3D9Client: D3D9Client R12 for Orbiter Beta: D3D9Client release for Orbiter Beta.
- VidCoder: 1.4.23: New in 1.4.23: added French translation; fixed non-x264 video encoders not sticking in the video tab. New in 1.4: updated HandBrake core to 0.9.9; Blu-ray subtitle (PGS) support; additional framerates (30, 50, 59.94, 60); additional sample rates (8, 11.025, 12 and 16 kHz); additional higher bitrates for audio; Same as Source constant framerate; 24-bit FLAC encoding; added Windows Phone 8 and Apple TV 3 presets; introduced process isolation for encodes - now if HandBrake crashes, VidCoder will ...
- Project Server 2013 Event Handler Admin Tool: PSI Event Admin Tool: Download & extract the file. Use LoggerAdmin to upload the event handlers in Project Server 2013. PSIEventLogger\LoggerAdmin\bin\Debug
- Gherkin editor: Gherkin Editor Beta 2: Fix issue #7 and add some refactoring and code cleanup.
- New-NuGetPackage PowerShell Script: New-NuGetPackage.ps1 PowerShell Script v1.2: Show the NuGet gallery to push to when prompting the user about pushing their package.
- Site Templates By Steve: SharePoint 2010 CORE Site Theme By Steve WSP: A great site theme to start with from Steve. See the project home page for install instructions. This is a nice centered, mega-menu, fixed-width masterpage with styles. Remember to update the mega-menu lists.
- SharePoint Solution Installer: SharePoint Solution Installer V1.2.8: setup2013.exe now supports CompatibilityLevel to target a specific hive. Use setup.exe for SP2007 & SP2010; use setup2013.exe for SP2013.
- TBox - tool to make developer's life easier: TBox 1.021: 1) Added a console unit-test runner to run unit tests in parallel from the console. This new sub-tool can also save a valid NUnit XML report, which can help you in continuous integration. 2) Fixed build scripts.
- LifeInSharepoint Modern UI Update: Version 2: Some minor improvements, including Audience Targeting support for quick-launch links. Also removes all NextDocs references.
- Virtual Photonics: VTS MATLAB Package 1.0.13 Beta: This is the beta release of the MATLAB package with examples of using the VTS libraries within MATLAB. Changes for this release: added two new examples to vts_solver_demo.m that 1) generate and plot R(lambda) at a given rho and chromophore concentrations, assuming a power law for scattering, and 2) solve the inverse problem for R(lambda) at a given rho. This example solves for concentrations of HbO2, Hb and H2O given simulated measurements created using Nurbs-scaled Monte Carlo and inverted u...
- Advanced Resource Tab for Blend: Advanced Resource Tab: This is the first alpha release of the advanced resource tab for Blend for Visual Studio 2012.
- Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.96: Fix for issue #19957: the EXE should output the name of the file(s) being minified. Discussion #449181: throw a Sev-2 warning when trailing commas are detected on an array literal; perfectly legal to do so, but the behavior ends up working differently on different browsers, so throw a cross-browser warning. Added a few more known global names for improved ES6 compatibility. Updated the NuGet package to version 2.5, which automatically adds AjaxMin.targets to your project when you update the package...
- Outlook 2013 Add-In: Categories and Colors: This new version has a major change in the drawing of the list items: owner-drawn code formats the appointments using GDI (some flickering may occur, but it looks a little bit better IMHO, with separate sections). Added category color support (if more than one category, only one color will be shown); the colors Outlook uses are slightly different from the ones available in System.Drawing, so I did a first-approach matching. ;-) Added appointment status support (to show fr...
- Columbus Remote Desktop: 2.0 Sapphire: Added configuration settings; added update notifications; added the ability to disable GPU acceleration; fixed connection bugs.
- LINQ to Twitter: LINQ to Twitter v2.1.07: Supports .NET 3.5, .NET 4.0, .NET 4.5, Silverlight 4.0, Windows Phone 7.1, Windows Phone 8, Client Profile, Windows 8, and Windows Azure. 100% Twitter API coverage. Also supports Twitter API v1.1! Also on NuGet.
- DotNetNuke® Community Edition CMS: 06.02.08: Major highlights: fixed an issue where the application throws an unhandled error and an HTTP response code of 200 when the connection to the database is lost. Security fixes: none. Updated modules/providers: none.

New Projects

- [.Net Intl] harroc_c;mallar_a;olouso_f: The goal of this project is to create a web crawler and a web front end that allows you to search in your index. You will create a mini (or large!) search engine bas...
- Butterfly Storage: Butterfly Storage is a data access technology based on an object-oriented database model for Windows Store applications.
- KaveCompany: KaveComp
- leave that girl alone: a team project!
- MyClrProfiler: This project helps you learn about and develop your own CLR profiler.
- NETDeob: Deobfuscate obfuscated .NET files easily.
- Program stomatologie: Summary
- Simple Graph Library: A simple portable class library for graph data structures. .NET, Silverlight 4/5, Windows Phone, Windows RT, Xbox 360.
- T6502 Emulator: T6502 is a 6502 emulator written in TypeScript using AngularJS. The goal is well-organized, readable code over performance.
- WP8 File Access Webserver: C# HTTP server and web application on Windows Phone 8. Implements file access, browsing and downloading.
- wpadk: A WP7/SDK-related project. (The Chinese description is garbled in the source.)
- xlmUnit: xlmUnit, unit testing.


  • Behavior Driven Development (BDD) and DevExpress XAF

    - by Patrick Liekhus
So in my previous posts I showed you how I used EDMX to quickly build my business objects within XPO and XAF. But how do you test whether your business objects are actually doing what you want and verify that your business logic is correct? Well, I was reading my monthly MSDN Magazine late last year and came across an article about using SpecFlow and WatiN to build BDD tests. So why not use these same techniques to write SpecFlow-style scripts and have them generate EasyTest scripts for use with XAF? Let me outline and show a few things below. I plan on releasing this code in a short while; I just wanted to preview what I was thinking.

Before we begin…

First, if you have not read the article in MSDN, here is the link to the article where I found my inspiration. It covers the overview of BDD vs. TDD, how to write some of the SpecFlow syntax, and how to use the "Steps" logic to create your own tests.

Second, if you have not heard of EasyTest from DevExpress, I strongly recommend you review it here. It basically takes the power of XAF and the beauty of your application and allows you to create text-based files to execute automated commands within your application.

Why would we do this? Because as you will see below, the Cucumber syntax is easier for business analysts to interpret and digest the business rules from. You can find most of the information you will need on Cucumber syntax within The Secret Ninja Cucumber Scrolls, located here. The basics of the syntax are: Given X, When Y, Then Z. For example: Given I am at the login screen, When I enter my login credentials, Then I expect to see the home screen. Pretty easy syntax to follow.

Finally, we will need to download and install SpecFlow. You can find it on their website here. Once you have this installed, let's write our first test.

Let's get started…

So where to start. Create a new testing project within your solution. I typically name this with a naming convention similar to the one used by XAF: <my project name>.FunctionalTests (e.g. AlbumManager.FunctionalTests). Remove the basic test that is created for you. We will not use the default test but rather create our own SpecFlow "Feature" files. Add a new item to your project and select the SpecFlow Feature file under C#. Name your feature files, as you do your class files, after the tests they perform.

Now you can crack open your new feature file and write the actual test. Make sure to have your Ninja Scrolls from above handy, as they provide valuable resources on how to write your test syntax. In the test below you can see how I defined the documentation in the Feature section. This is strictly for our purposes of readability and does not affect the test. The next section is the Scenario Outline, which is considered a test template. You can see the brackets <> around the fields that will be filled in for each test. So in the example below: Given I am starting a new test and the application is open means I want a new EasyTest file and the windows application generated by XAF is open. Next, When I am at the Albums screen tells XAF to navigate to the Albums list view. And I click the New:Album button tells XAF to click the new button on the list grid. And I enter the following information tells XAF which fields to complete with the mapped values. And I click the Save and Close button causes the record to be saved and the detail form to be closed.
Then I verify results tests the input data against what is visible in the grid, to ensure that your record was created. The Scenarios section gives each test a unique name and then fills in the values for each test. This way you can use the same test to make multiple passes with different data.

Almost there. Now we must save the feature file, and the BDD tests will be written using standard unit test syntax. This is all handled for you by SpecFlow, so just save the file. What you will see in your Test List Editor is a unit test for each of the scenarios you just built. You can now use standard unit testing frameworks to execute the test as you desire. As you would expect, these BDD SpecFlow tests can then be automated into your build process to ensure that your business requirements are satisfied each and every time.

How does it work?

What we have done is to intercept the testing logic at runtime and interpret the SpecFlow syntax into EasyTest syntax. This is done in the basic StepDefinitions that we are working on now. We expect to put these on CodePlex within the next few days. You can always override them and make your own rules as you see fit for your project. Follow the MSDN article above to start your own. You can see part of our implementation below (a sketch of both pieces also follows this post).

As you can gather from the MSDN article and the code sample below, we have created our own common rules to build the above syntax. The code implementation for these rules basically saves your information from the feature file into an EasyTest file format. It then executes the EasyTest file and parses the XML results of the test. If the test succeeds, the test is passed. If the test fails, the EasyTest failure message is logged and the screen shot (as captured by EasyTest) is saved for your review.

Again, we are working on getting this code ready for mass consumption, but at this time it is not ready. We will post another message when it is ready, with all the details about usage and setup. Thanks
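Since the original feature file and step definitions were shown only as screenshots, here is a minimal sketch of what the pieces described above might look like. The album fields and data values are made up for illustration; only the Gherkin keywords and the step phrases come from the post itself:

    Feature: Album Management
        In order to keep my catalog current
        As a user
        I want to create new albums

    Scenario Outline: Create a new Album
        Given I am starting a new test and the application is open
        When I am at the Albums screen
        And I click the New:Album button
        And I enter the following information
            | Field | Value  |
            | Name  | <Name> |
        And I click the Save and Close button
        Then I verify results

        Examples:
            | Name       |
            | Abbey Road |
            | Thriller   |

And a sketch of the matching SpecFlow bindings. The [Binding]/[Given]/[When]/[Then] attributes and the Table parameter are the real TechTalk.SpecFlow API, but the EasyTest command strings and the .ets file layout below are placeholders, not the actual DevExpress syntax:

    using System.Collections.Generic;
    using System.IO;
    using TechTalk.SpecFlow;

    // Each step appends lines to an in-memory EasyTest script, which is
    // written out and executed when the scenario reaches its Then step.
    [Binding]
    public class EasyTestSteps
    {
        private readonly List<string> script = new List<string>();

        [Given(@"I am starting a new test and the application is open")]
        public void GivenNewTestWithApplicationOpen()
        {
            script.Clear();
            script.Add("#Application AlbumManager.Win"); // hypothetical header line
        }

        [When(@"I am at the (.*) screen")]
        public void WhenIAmAtScreen(string screenName)
        {
            script.Add("*Action Navigate(" + screenName + ")"); // placeholder command
        }

        [When(@"I click the (.*) button")]
        public void WhenIClickButton(string buttonName)
        {
            script.Add("*Action " + buttonName); // placeholder command
        }

        [When(@"I enter the following information")]
        public void WhenIEnterInformation(Table table)
        {
            // Each table row maps a field name to the value to type into it
            foreach (TableRow row in table.Rows)
            {
                script.Add("*FillForm " + row["Field"] + " = " + row["Value"]);
            }
        }

        [Then(@"I verify results")]
        public void ThenIVerifyResults()
        {
            File.WriteAllLines(@"C:\Temp\Scenario.ets", script);
            // ...here the real implementation launches the EasyTest runner
            // against the script and parses its XML results to pass or fail...
        }
    }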


  • Disaster, or Migration?

    - by Rob Farley
This post is in two parts – technical and personal. And I should point out that it's prompted in part by this month's T-SQL Tuesday, hosted by Allen Kinsel.

First, the technical:

I've had a few conversations with people recently about migration – moving a SQL Server database from one box to another (sometimes, but not primarily, involving an upgrade). One question that tends to come up is that of downtime. Obviously there will be some period of time between the old server being available and the new one. The way that most people seem to think of migration is this:

1. Build a new server.
2. Stop people from using the old server.
3. Take a backup of the old server.
4. Restore it on the new server.
5. Reconfigure the client applications (or alternatively, configure the new server to use the same address as the old).
6. Bring the new server online.

There are other things involved, such as testing, of course. But this is essentially the process that people tell me they're planning to follow. The bit that I want to look at today (as you've probably guessed from my title) is the "backup and restore" section.

If a SQL database is using the Simple Recovery Model, then the only restore option is the last database backup. This backup could be full or differential. The transaction log never gets backed up in the Simple Recovery Model. Instead, it truncates regularly to stay small. One that's using the Full Recovery Model (or Bulk-Logged) won't truncate its log – the log must be backed up regularly. This provides the benefit of having a lot more options available for restores. It's a requirement for most High Availability systems, because if you're making sure that a spare box is up and running, ready to take over, then you have to be interested in the logs that are happening on the current box, rather than truncating them all the time.

A High Availability system such as Mirroring, Replication or Log Shipping will initialise the spare machine by restoring a full database backup (and maybe a differential backup if available), and then any subsequent log backups. Once the secondary copy is close, transactions can be applied to keep the two in sync. The main aspect of any High Availability system is to have a redundant system that is ready to take over.

So the similarity for migration should be obvious. If you need to move a database from one box to another, then introducing a High Availability mechanism can help. By turning on the Full Recovery Model and then taking a backup (so that the now-interesting logs have some context), logs start being kept, and are therefore available for getting the new box ready (even if it's an upgraded version). When the migration is ready to occur, a failover can be done, letting the new server take over the responsibility of the old, just as if a disaster had happened. Except that this is a planned failover, not a disaster at all.

There's a fine line between a disaster and a migration. Failovers can be useful in patching, upgrading, maintenance, and more. Hopefully, even an unexpected disaster can be seen as just another failover, and there can be an opportunity there – perhaps to get some work done on the principal server to increase robustness. And if I've just set up a High Availability system for even the simplest of databases, it's not necessarily a bad thing. :)
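To make the log-backup approach concrete, here is a minimal T-SQL sketch of that migration sequence. The database name and share path are illustrative only; the statements themselves are standard backup/restore commands:

    -- On the old server: switch to full recovery, then seed the new box
    ALTER DATABASE MyDB SET RECOVERY FULL;
    BACKUP DATABASE MyDB TO DISK = N'\\share\MyDB_full.bak';

    -- On the new server: restore the seed, staying ready to accept more logs
    RESTORE DATABASE MyDB FROM DISK = N'\\share\MyDB_full.bak' WITH NORECOVERY;

    -- Repeat while users keep working on the old server:
    BACKUP LOG MyDB TO DISK = N'\\share\MyDB_log1.trn';                      -- old server
    RESTORE LOG MyDB FROM DISK = N'\\share\MyDB_log1.trn' WITH NORECOVERY;   -- new server

    -- At cutover: back up the tail of the log, which leaves the old
    -- database in a restoring state so no further changes can sneak in
    BACKUP LOG MyDB TO DISK = N'\\share\MyDB_tail.trn' WITH NORECOVERY;      -- old server

    -- Apply the tail and bring the new server online
    RESTORE LOG MyDB FROM DISK = N'\\share\MyDB_tail.trn' WITH NORECOVERY;   -- new server
    RESTORE DATABASE MyDB WITH RECOVERY;                                     -- new server

The planned "failover" is just the last three statements: only the final log backup needs the old server to be offline, so the downtime window shrinks to however long the tail of the log takes to copy and apply.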
So now the personal:

It's been an interesting time recently... June has been somewhat odd. A court case with which I was involved got resolved (through mediation). I can't go into details, but my lawyers tell me that I'm allowed to say how I feel about it. The answer is 'lousy'. I don't regret pursuing it as long as I did – but in the end I had to make a decision regarding the commerciality of letting it continue, and I'm going to look forward to the days when the kind of money I spent on my lawyers is small change. Mind you, if I had a similar situation with an employer, I'd do the same again, but that doesn't really stop me feeling frustrated about it.

The following day I had to fly to country Victoria to see my grandmother, who wasn't expected to last the weekend. She's still around a week later as I write this, but her 92-year-old body has basically given up on her. She's been a Christian all her life, and is looking forward to eternity. We'll all miss her though, and it's hard to see my family grieving.

Then on Tuesday, I was driving back to the airport with my family to come home, when something really bizarre happened. We were travelling down the freeway, just pulled out to go past a truck (farm-truck sized, not a semi-trailer), when a car-sized mass of metal fell off it. It was something like an industrial air-conditioner, but from where I was sitting, it was just a mass of spinning metal, like something out of a movie (one friend described it as "holidays by Michael Bay"). Somehow, and I really don't know how, the part of it nearest us bounced high enough to clear the car, and there wasn't even a scratch. We pulled over to check, and I was just thanking God that we'd changed lanes when we had, and that we remained unharmed. I had all kinds of thoughts about what could've happened if we'd had something that size land on the windscreen...

All this has drilled home that while I feel that I haven't provided as well for the family as I could've done (like by pursuing an expensive legal case), I shouldn't even consider that I have proper control over things. I get to live life, and make decisions based on what I feel is right at the time. But I'm not going to get everything right, and there will be things that feel like disasters, some which could've been in my control and some which are very much beyond my control. The case feels like something I could've pursued differently, a disaster that could've been avoided in some way. Gran dying is lousy of course. An accident on the freeway would have been awful. I need to recognise that the worst disasters are ones that I can't affect, and that I need to look at things in context – perhaps seeing everything that happens as a migration instead.

Life is never the same from one day to the next. Every event has a before and an after – sometimes it's clearly positive, sometimes it's not. I remember good events in my life (such as my wedding), and bad (such as the loss of my father when I was ten, or the back injury I had eight years ago). I'm not suggesting that I know how to view everything from the "God works all things for good" perspective, but I am trying to look at last week as a migration of sorts. Those things are behind me now, and the future is in God's hands. Hopefully I've learned things, and will be able to live accordingly. I've come through this time now, and even though I'll miss Gran, I'll see her again one day, and the future is bright.


  • Delivering the Integrated Portal Experience!

    - by Michael Snow
Guest post by Richard Maldonado, Principal Product Manager, Oracle WebCenter Portal

Organizations are still struggling to standardize on a user interaction platform which can meet the needs of all their target audiences. This has not only resulted in inefficient and inconsistent experiences for their users, but it also creates inefficiencies (productivity and costs) for the departments that manage the applications and information systems. Portals have historically been the unifying platform that provides IT with a common interface which can securely surface the most relevant interactions for a given user and/or group of users. However, organizations have found that the technologies available have either not provided the flexibility necessary to address all of their use cases, or they rely too much on IT resources to manage, maintain, and evolve.

Empowering the Business Groups

The core issue that IT departments face with delivering portal experiences is having enough resources to respond to and address the influx of requirements which come in from the business. Commonly, when a business group wants a new portal site established for their group, they submit a request to the IT department, and the IT department then assigns the work to an administrator and/or developer to build. Unfortunately, this approach is not scalable; it can be a time-consuming activity which requires significant interaction between the business owner and the IT resource. A modern user interaction platform should empower the business groups by providing them tools which they can use to build and manage the portal experiences without the need for IT's involvement. And because business groups rarely have technical resources (developers) on staff, the tools must be easy enough that virtually any business user could use them. In addition, the tools must be powerful enough to allow users to build the experience that they need: creating a whole new portal, adding and managing pages and page hierarchy, managing user/group access, adding and modifying components within a page, and so on. This balance between ease of use and flexibility is key to the successful adoption of tools which will ultimately reduce the burden on IT, respond to the needs of the business, and deliver high-value experiences for the users.

Ready or Not, Here They Come: Smartphones and Tablets

Recently, several studies have highlighted that smartphone and tablet-style devices have overtaken PCs in both sales and usage. This shift is further driving organizations to re-evaluate how they're delivering data, information, and applications to their users. Users are expecting to get the same level of access and interaction, but in ways which are optimized for the capabilities of the device that they are using.

Expect More

With the ever-growing number of new IT projects and flat or shrinking budgets, organizations are looking for comprehensive solutions which can deliver integrated web experiences that are tailored for the users and optimized for mobile devices. Piecing together a number of point solutions is no longer an option. A modern portal technology should not only address the traditional needs of integrating and surfacing back-end applications and information, it should enable the business through easy-to-use tools and accelerate the delivery of mobile-optimized experiences.
WebCenter in Action Series: Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter, featuring Qualcomm & Keste


  • Blogging locally and globally–my experience

    - by DigiMortal
At the Baltic MVP Summit 2011 there was a discussion about having two blogs - one for a local and another for a global audience – and how to publish once-written information in both blogs. There are many ways to optimize your blogging activities if you have more than one audience, and here you can find my experiences, best practices and advice on this topic.

My two blogs

I have two working blogs:

- this one here
- a technology and programming blog for the local market

My local blog is almost five years old, which makes it one of the oldest company blogs in Estonia. It is still active and I write there as much as I have time for. This blog here has been active since September 2007, so it is about 3.5 years old right now. Both of these blogs are my major hits in my MVP career, and they have very good web statistics too.

My local blog

My local blog is about programming, web and technology. It has a much wider target audience than this blog here has. For example, in my local blog I also blog about local events, cool new concept phones, different webs providing some interesting services, etc. But local guys can also find there my postings about how to solve one or another programming problem, and postings about Microsoft technologies I am playing with. So far my local blog has a lot of readers for such a small country as Estonia. This blog has brought me a lot of cool contacts, and I have had there a lot of interesting discussions about different technical topics.

Why did I start this blog? Living in a small country is different from living in a big country. In a small country you have fewer people and therefore a smaller audience, so you have to target more than one technical topic to find enough readers. At the same time you are still interested in your main topics, and you want to reach more people who share the same interests with you. Practically, one day you will grow out of the local market and go global. This is how this blog was born. Was it worth creating, promoting and messing with? Every second I have put into this blog has been worth it. Thanks to this blog I have found new good friends, and without them I think it would be more boring to work on different problems and solutions.

Defining target audiences

One thing you should always do when having more than one blog is define target audiences. If you are just a technomaniac interested in sharing your stuff, making some new friends and having something to write on your MVP nomination form, then you don't have to go through a complex targeting process. You can do it in a simple and equally effective way. Here is how I defined the target audiences for my blogs:

- local blog – the reader of my local blog is an IT professional, software developer, technology innovator or just some guy who is interested in technology,
- this blog – the reader of this blog is an experienced professional software developer who works on Microsoft technologies, or a software developer who is open-minded and open to new technologies and interesting solutions to development problems.

You can see how the local blog – due to a small market with fewer people – has a wider definition of its audience, while this blog is heavily targeted at Microsoft technologies and specifically at software development. On the practical side these decisions were also made well, I think, because it is very hard to build up a popular general IT blog. At the global level it is better to target a specific niche and find readers who are professionals in your favorite topics.
Thanks to this blog I have found new friends who are professional developers, and I am very happy about all the discussions I have had with them.

Publishing content to different blogs

My local blog and this blog have some overlapping topics like .NET, databases and SEO. Due to this overlap there is a question: when I write a posting for my local blog, should I publish the same thing on my global blog? And if I write something for my global blog, should I publish the same thing also on my local blog? Well, it really depends on the definition of your target audiences. If they match, then of course it is a good idea to translate your post and publish it on the other blog too. But if you have different audiences, then you may need to modify your posting before publishing it. The questions you have to answer are:

- is the target audience interested in this topic?
- is the target audience expecting a more specific and deeper handling of this topic, or are they expecting a more general handling of it?
- is the problem you are discussing relevant to the target audience or not?

You have to answer these questions and then make your decision. If you need to modify your original posting, take some time and do it. Provide quality to all your readers, because they will respect you if you respect them.

Cross-posting and referencing

It is tempting to save the time that preparing a blog post takes, and when you are done with a posting in one blog it may seem like a good idea to make a short posting in the other blog with a reference to the first one, where the topic is discussed at more length. Well, don't do it – all your readers expect good quality content from you, and jumping from one blog post to another is disturbing for them. Of course, there is the problem of differences between target audiences. You may have a wider target audience, and some people may be interested in a more specific handling of a topic. In this case feel free to refer to the blog you are writing in English. This does not work very well in the opposite direction, because almost all my global blog readers understand English but not Estonian. For example, Estonian is a complex language, and online translation tools make very poor translations from Estonian. This is why I don't even plan to publish postings here that refer to my local blog for more information. I am keeping these two blogs as two different worlds, and if there is a posting that fits well in both blogs, I will write my posting in one blog and then answer the previous three questions before posting the same thing to the other blog.

Conclusion

Growing out of your local market is not anything mysterious if you are living in a small country. As it is harder to find people there who are interested in the same topics as you, sooner or later you will start finding these new contacts in the global audience. The global audience is bigger, and to be visible there you must provide high-quality content to your audience. It is something you will learn over time, and you will learn something new every day when you are posting to your global blog. You may ask: if a global blog is a much more complex thing to do, then is it worth doing at all? My answer is: yes, do it for sure. It is not an easy thing to do when you start, but if you work on your global blog and improve it over time, you will get over all obstacles pretty soon. Just don't forget one thing – content is king, and your readers expect high quality from you.


  • Finding nuggets in ARC discussions

    - by alanc
A bit over twenty years ago, Sun formed an Architecture Review Committee (ARC) that evaluates proposals to change interfaces between components in Sun software products. During the OpenSolaris days, we opened many of these discussions to the community. While they're back behind closed doors, and at a different company now, we still continue to hold these reviews for the software from what's now the Sun Systems Group division of Oracle.

Recently one of these reviews was held (via e-mail discussion) to review a proposal to update our GNU findutils package to the latest upstream release. One of the upstream changes discussed was the addition of an "oldfind" program. In findutils 4.3, find was modified to use the fts() function to walk the directory tree, and oldfind was created to provide the old mechanism in case there were bugs in the new implementation that users needed to work around.

In Solaris 11 though, we still ship the find descended from SVR4 as /usr/bin/find, and the GNU find is available as either /usr/bin/gfind or /usr/gnu/bin/find. This raised the discussion of whether we should add oldfind, and if so, what we should call it. Normally our policy is to only add the g* names for GNU commands that conflict with an existing Solaris command – for instance, we ship /usr/bin/emacs, not /usr/bin/gemacs. In this case however, it seemed like it would be more confusing to have /usr/bin/oldfind be the older version of /usr/bin/gfind rather than of /usr/bin/find. Thus if we shipped it, it would make more sense to call it /usr/bin/goldfind, which several ARC members noted read more naturally as "gold find" than as "g old find".

One of the concerns we often discuss in ARC is whether a change is likely to be understood by users or if it will result in more calls to support. As we hit this part of the discussion on a Friday at the end of a long week, I couldn't resist putting forth a hypothetical support call for this command:

"Hello, Oracle Solaris Support, how may I help you?"
"My admin is out sick, but he sent an email that he put the findutils package on our server, and I can run goldfind now. I tried it, but goldfind didn't find gold."
"Did he get the binutils package too?"
"No he just said findutils, do we need binutils?"
"Well, gold comes in the binutils package, so goldfind would be able to find gold if you got that package."
"How much does Oracle charge for that package?"
"It's free for Solaris users."
"You mean Oracle ships packages of gold to customers for free?"
"Yes, if you get the binutils package, it includes GNU gold."
"New gold? Is that some sort of alchemy, turning stuff into gold?"
"Not new gold, gold from the GNU project."
"Oracle's taking gold from the GNU project and shipping it to me?"
"Yes, if you get binutils, that package includes gold along with the other tools from the GNU project."
"And GNU doesn't mind Oracle taking their gold and giving it to customers?"
"No, GNU is a non-profit whose goal is to share their software."
"Sharing software sure, but gold? Where does a non-profit like GNU get gold anyway?"
"Oh, Google donated it to them."
"Ah! So Oracle will give me the gold that GNU got from Google!"
"Yes, if you get the package from us."
"How do I get the package with the gold?"
"Just run pkg install binutils and it will put it on your disk."
"We've got multiple disks here - which one will it put it on?"
"The one with the system image - do you know which one that is?"
“Well the note from the admin says the system is on the first disk and the users are on the second disk.” “Okay, so it should go on the first disk then.” “And where will I find the gold?” “It will be in the /usr/bin directory.” “In the user’s bin? So that’s on the second disk?” “No, it would be on the system disk, with the other development tools, like make, as, and what.” “So what’s on the first disk?” “Well if the system image is there the commands should all be there.” “All the commands? Not just what?” “Right, all the commands that come with the OS, like the shell, ps, and who.” “So who’s on the first disk too?” “Yes. Did your admin say when he’d be back?” “No, just that he had a massive headache and was going home after I tried to get him to explain this stuff to me.” “I can’t imagine why.” “Oh, is why a command too?” “No, _why was a Ruby programmer.” “Ruby? Do you give those away with the gold too?” “Yes, but it comes in the ruby package, not binutils.” “Oh, I’ll have to have my admin get that package too! Thanks!”

Needless to say, we decided this might not be the best idea. Since findutils hasn’t needed a serious bug fix for the new find in the past few years, the new GNU find seems pretty stable, and we always have the SVR4 find to use as a fallback in Solaris, so adding oldfind didn’t seem really necessary, and we passed on including it when we updated to the new findutils release.

[Apologies to Abbott, Costello, their fans, and everyone who read this far. The Gold (linker) page on Wikipedia may explain some of the above, but can’t explain why goldfind is the old GNU find, but gold is the new GNU ld.]

    Read the article

  • MySQL Utilities Users' Console Overview

    - by rudrap
MySQL Utilities Users' Console (mysqluc): The MySQL Utilities Users' Console is designed to make using the utilities easier via a dedicated console. It lets us use the utilities without worrying about the Python and utility paths.

Why do we need a special console?

- It provides a dedicated shell environment with command completion, help for each utility, user-defined variables, and type completion for options.
- You no longer have to type out the entire name of a utility.
- You don't need to remember the name of the database utility you want to use.
- You can define variables and reuse them in your utility commands.
- You can run a single utility command through mysqluc and return to your shell when it completes.

Console commands:

mysqluc> help

Command                  Description
----------------------   ---------------------------------------------------
help utilities           Display list of all utilities supported.
help <utility>           Display help for a specific utility.
help or help commands    Show this list.
exit or quit             Exit the console.
set <variable>=<value>   Store a variable for recall in commands.
show options             Display list of options specified by the user on launch.
show variables           Display list of variables.
<ENTER>                  Press ENTER to execute a command.
<ESCAPE>                 Press ESCAPE to clear the command entry.
<UP>                     Press UP to retrieve the previous command in history.
<DOWN>                   Press DOWN to retrieve the next command in history.
<TAB>                    Press TAB for type completion of utility, option, or variable names.
<TAB><TAB>               Press TAB twice for a list of matching type completions (context sensitive).

How do I use it?

Pre-requisites:
- Download the latest version of MySQL Workbench.
- MySQL servers are running.
- Your PYTHONPATH is set (e.g. export PYTHONPATH=/...../mysql-utilities/).

Check the version of the mysqluc utility:

/usr/bin/python mysqluc.py --version

It should display something like this:

MySQL Utilities mysqluc.py version 1.1.0 - MySQL Workbench Distribution 5.2.44
Copyright (c) 2010, 2012 Oracle and/or its affiliates. All rights reserved. This program is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, to the extent permitted by law.

Use TAB to list the available utilities:

mysqluc> mysqldb<TAB><TAB>

Utility          Description
-------------    ------------------------------------------------------------
mysqldbcopy      copy databases from one server to another
mysqldbexport    export metadata and data from databases
mysqldbimport    import metadata and data from files

mysqluc> mysqldbcopy --source=$se<TAB>

Variable   Value
--------   ----------------------------------------------------------------------
server1    root@localhost:3306
server2    root@localhost:3307

You can see the variables starting with "se" and then decide which one to use.

Run a utility via the console:

/usr/bin/python mysqluc.py -e "mysqldbcopy --source=root@localhost:3306 --destination=root@localhost:3307 dbname"

Get help for utilities in the console:

mysqluc> help utilities
Displays help for each utility.

mysqluc> help mysqldbcopy
Displays details about mysqldbcopy and its options.

Set variables and use them in commands:

mysqluc> set server1 = root@localhost:3306
mysqluc> show variables

Variable   Value
--------   ----------------------------------------------------------------------
server1    root@localhost:3306
server2    root@localhost:3307

mysqluc> mysqldbcopy --source=$server1 --destination=$server2 dbname <ENTER>

The mysqldbcopy utility output will be displayed.

mysqluc> show options
Displays the list of options specified by the user on launch.

mysqluc SERVER=root@host123 VAR_A=57 -e "show variables"

Variable   Value
--------   -----------------------------------------------------------------
SERVER     root@host123
VAR_A      57

Finding option names for a utility:

mysqluc> mysqlserverclone --n<TAB>

Option                Description
-------------------   ---------------------------------------------------------
--new-data=NEW_DATA   the full path to the location of the data directory for the new instance
--new-port=NEW_PORT   the new port for the new instance - default=3307
--new-id=NEW_ID       the server_id for the new instance - default=2

Limitations:
- User-defined variables last only for the lifetime of the console session.
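The -e option shown above also makes mysqluc easy to drive from other programs, not just from a shell. As a purely illustrative sketch (the class name, the assumption that python is on the PATH, and the location of mysqluc.py are mine, not from the post), here is how you might run a single utility command from Java:

import java.io.BufferedReader;
import java.io.InputStreamReader;

// Hypothetical helper that runs one utility command through "mysqluc -e"
// and prints its output; paths and names are assumptions for illustration.
public class MysqlucRunner {
    public static void main(String[] args) throws Exception {
        ProcessBuilder pb = new ProcessBuilder(
                "python", "mysqluc.py", "-e",
                "mysqldbcopy --source=root@localhost:3306 "
                + "--destination=root@localhost:3307 dbname");
        pb.redirectErrorStream(true); // merge stderr into stdout
        Process p = pb.start();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                System.out.println(line);
            }
        }
        System.out.println("exit code: " + p.waitFor());
    }
}

Functionally this is no different from running the same mysqluc -e command in a shell script; the -e invocation exits the console as soon as the command completes, as described above.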

    Read the article

  • LINQ – SequenceEqual() method

    - by nmarun
    I have been looking at LINQ extension methods and have blogged about what I learned from them in my blog space. Next in line is the SequenceEqual() method. Here’s the description about this method: “Determines whether two sequences are equal by comparing the elements by using the default equality comparer for their type.” Let’s play with some code: 1: int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 }; 2: // int[] numbersCopy = numbers; 3: int[] numbersCopy = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 }; 4:  5: Console.WriteLine(numbers.SequenceEqual(numbersCopy)); This gives an output of ‘True’ – basically compares each of the elements in the two arrays and returns true in this case. The result is same even if you uncomment line 2 and comment line 3 (I didn’t need to say that now did I?). So then what happens for custom types? For this, I created a Product class with the following definition: 1: class Product 2: { 3: public int ProductId { get; set; } 4: public string Name { get; set; } 5: public string Category { get; set; } 6: public DateTime MfgDate { get; set; } 7: public Status Status { get; set; } 8: } 9:  10: public enum Status 11: { 12: Active = 1, 13: InActive = 2, 14: OffShelf = 3, 15: } In my calling code, I’m just adding a few product items: 1: private static List<Product> GetProducts() 2: { 3: return new List<Product> 4: { 5: new Product 6: { 7: ProductId = 1, 8: Name = "Laptop", 9: Category = "Computer", 10: MfgDate = new DateTime(2003, 4, 3), 11: Status = Status.Active, 12: }, 13: new Product 14: { 15: ProductId = 2, 16: Name = "Compact Disc", 17: Category = "Water Sport", 18: MfgDate = new DateTime(2009, 12, 3), 19: Status = Status.InActive, 20: }, 21: new Product 22: { 23: ProductId = 3, 24: Name = "Floppy", 25: Category = "Computer", 26: MfgDate = new DateTime(1993, 3, 7), 27: Status = Status.OffShelf, 28: }, 29: }; 30: } Now for the actual check: 1: List<Product> products1 = GetProducts(); 2: List<Product> products2 = GetProducts(); 3:  4: Console.WriteLine(products1.SequenceEqual(products2)); This one returns ‘False’ and the reason is simple – this one checks for reference equality and the products in the both the lists get different ‘memory addresses’ (sounds like I’m talking in ‘C’). In order to modify this behavior and return a ‘True’ result, we need to modify the Product class as follows: 1: class Product : IEquatable<Product> 2: { 3: public int ProductId { get; set; } 4: public string Name { get; set; } 5: public string Category { get; set; } 6: public DateTime MfgDate { get; set; } 7: public Status Status { get; set; } 8:  9: public override bool Equals(object obj) 10: { 11: return Equals(obj as Product); 12: } 13:  14: public bool Equals(Product other) 15: { 16: //Check whether the compared object is null. 17: if (ReferenceEquals(other, null)) return false; 18:  19: //Check whether the compared object references the same data. 20: if (ReferenceEquals(this, other)) return true; 21:  22: //Check whether the products' properties are equal. 23: return ProductId.Equals(other.ProductId) 24: && Name.Equals(other.Name) 25: && Category.Equals(other.Category) 26: && MfgDate.Equals(other.MfgDate) 27: && Status.Equals(other.Status); 28: } 29:  30: // If Equals() returns true for a pair of objects 31: // then GetHashCode() must return the same value for these objects. 
32: // read why in the following articles: 33: // http://geekswithblogs.net/akraus1/archive/2010/02/28/138234.aspx 34: // http://stackoverflow.com/questions/371328/why-is-it-important-to-override-gethashcode-when-equals-method-is-overriden-in-c 35: public override int GetHashCode() 36: { 37: //Get hash code for the ProductId field. 38: int hashProductId = ProductId.GetHashCode(); 39:  40: //Get hash code for the Name field if it is not null. 41: int hashName = Name == null ? 0 : Name.GetHashCode(); 42:  43: //Get hash code for the Category field. 44: int hashCategory = Category.GetHashCode(); 45:  46: //Get hash code for the MfgDate field. 47: int hashMfgDate = MfgDate.GetHashCode(); 48:  49: //Get hash code for the Status field. 50: int hashStatus = Status.GetHashCode(); 51: //Calculate the hash code for the product. 52: return hashProductId ^ hashName ^ hashCategory ^ hashMfgDate ^ hashStatus; 53: } 54:  55: public static bool operator ==(Product a, Product b) 56: { 57: // Enable a == b for null references to return the right value 58: if (ReferenceEquals(a, b)) 59: { 60: return true; 61: } 62: // If one is null and the other not. Remember a==null will lead to a StackOverflowException! 63: if (ReferenceEquals(a, null)) 64: { 65: return false; 66: } 67: return a.Equals((object)b); 68: } 69:  70: public static bool operator !=(Product a, Product b) 71: { 72: return !(a == b); 73: } 74: } Now THAT kinda looks overwhelming. But let’s take one simple step at a time. OK, the first thing you’ve noticed is that the class implements the IEquatable<Product> interface – the key step towards achieving our goal. This interface provides us with an ‘Equals’ method to perform the test for equality with another Product object, in this case. This method is called in the following situations: when you do a ProductInstance.Equals(AnotherProductInstance) and when you perform actions like Contains<T>, IndexOf() or Remove() on your collection. Coming to the Equals method defined from line 14 onwards: the two ‘if’ blocks check for null and referential equality using the ReferenceEquals() method defined in the Object class. Line 23 is where I’m doing the actual check on the properties of the Product instances. This is what returns the ‘True’ for us when we run the application. I have also overridden the Object.Equals() method, which calls the Equals() method of the interface. One thing to remember is that anytime you override the Equals() method, it’s a good practice to override the GetHashCode() method and overload the ‘==’ and the ‘!=’ operators. For detailed information on this, please read this and this. Since we’ve overloaded the operators as well, we get ‘True’ when we do actions like: 1: Console.WriteLine(products1.Contains(products2[0])); 2: Console.WriteLine(products1[0] == products2[0]); This completes the full circle on the SequenceEqual() method. See the code used in the article here.
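As an aside for readers coming from Java (this comparison is mine, not from the post): the same reference-versus-value equality pitfall appears with List.equals(), which compares elements using equals(), so value semantics likewise require overriding equals() and hashCode(). A minimal sketch with illustrative names:

import java.util.Arrays;
import java.util.List;
import java.util.Objects;

// Illustrative Java analogue: List.equals() compares elements with equals(),
// so two lists of distinct-but-identical objects only compare equal once
// equals()/hashCode() are overridden, mirroring IEquatable<T> above.
public class SequenceEqualDemo {
    static final class Product {
        final int productId;
        final String name;

        Product(int productId, String name) {
            this.productId = productId;
            this.name = name;
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Product)) return false;
            Product other = (Product) o;
            return productId == other.productId && Objects.equals(name, other.name);
        }

        @Override
        public int hashCode() {
            return Objects.hash(productId, name);
        }
    }

    public static void main(String[] args) {
        List<Product> a = Arrays.asList(new Product(1, "Laptop"));
        List<Product> b = Arrays.asList(new Product(1, "Laptop"));
        System.out.println(a.equals(b)); // true only because equals() is overridden
    }
}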

    Read the article

  • JPA 2, EJB 3.1, and JSF 2 working together! Developing Java EE 6 applications with WebLogic Server 12c | WebLogic Channel | Oracle

    - by ???02
WebLogic Server 12c, released in early 2012, brings full support for Java EE 6. This article uses Oracle Enterprise Pack for Eclipse 12c (OEPE) and WebLogic Server 12c to build a simple application with three of the major Java EE 6 technologies: an entity mapped with JPA 2.0, business logic in an EJB 3.1 session bean, and (in the next part) screens built with JSF 2.0.

O/R mapping with JPA 2.0

The sample is a standard three-tier web application. JPA (Java Persistence API) maps Java objects to relational tables, an EJB session bean carries the business logic, and JSF provides the presentation layer. Here, JPA 2.0 is used to map the EMPLOYEES table of an Oracle Database to an entity class. With JPA 2.0 in Java EE 6 the mapping is expressed entirely with annotations, so no external XML mapping files are required and the configuration stays close to the code.

First, create the project. In OEPE, select File > New > Other, choose Web > Dynamic Web Project, and click Next. In the Dynamic Web Project dialog, enter "OOW" as the Project name and click New Runtime under Target Runtime. In the New Server Runtime Environment dialog, select Oracle > Oracle WebLogic Server 12c (12.1.1) and click Next, set WebLogic home to C:\Oracle\Middleware\wlserver_12.1, confirm the Java home that is filled in automatically, and click Finish. Back in the Dynamic Web Project dialog, click Finish.

Next, enable the JPA 2.0 facet. In the Project Explorer, right-click the OOW project and select Properties. Open Project Facets, check JPA (confirm in the Details pane that the version is JPA 2.0), and click the "Further configuration available..." link. In the Modify Faceted Project dialog, click Add Connection under Connection. In the New Connection Profile dialog, select Oracle Database Connection as the Connection Profile Type and click Next. On the Specify a Driver and Connection Details page, select Oracle Database 10g Driver Default under Drivers and enter the connection properties:

SID: xe
Host: localhost
Port number: 1521
User name: HR
Password: hr

Click Test Connection and confirm that "Ping Succeeded!" is displayed, then click Finish, close the Modify Faceted Project dialog with OK, and close the Properties for OOW dialog with OK.

Now generate the entity. Right-click the OOW project and select JPA Tools > Generate Entities from Tables.... In the Generate Custom Entities dialog, select the HR schema and the EMPLOYEES table and click Next. Click Next again, and in Customize Default Entity Generation set the Package to "model" and click Finish.

Defining a query with JPQL

The wizard generates the entity class model.Employee.java for the EMPLOYEES table; open it under OOW > Java Resources > src > model.

Employee entity (Employee.java):

package model;

import java.io.Serializable;
import java.math.BigDecimal;
import java.util.Date;
import java.util.Set;
import javax.persistence.Column;
<... remaining imports omitted ...>

/**
 * The persistent class for the EMPLOYEES database table.
 */
@Entity
@Table(name="EMPLOYEES")
public class Employee implements Serializable {
    private static final long serialVersionUID = 1L;

    @Id
    @Column(name="EMPLOYEE_ID")
    private long employeeId;

    @Column(name="COMMISSION_PCT")
    private BigDecimal commissionPct;

    @Column(name="DEPARTMENT_ID")
    private BigDecimal departmentId;

    private String email;

    @Column(name="FIRST_NAME")
    private String firstName;

    @Temporal(TemporalType.DATE)
    @Column(name="HIRE_DATE")
    private Date hireDate;

    @Column(name="JOB_ID")
    private String jobId;

    @Column(name="LAST_NAME")
    private String lastName;

    @Column(name="PHONE_NUMBER")
    private String phoneNumber;

    private BigDecimal salary;

    //bi-directional many-to-one association to Employee
    <... omitted ...>
}

@Entity marks the class as a JPA entity and @Table(name="...") binds it to the EMPLOYEES table; individual columns are mapped with @Column, the primary key with @Id, and date columns with @Temporal. Fields whose names match their columns, such as email and salary, need no @Column annotation at all.

Queries in JPA are written in JPQL (Java Persistence Query Language). JPQL looks like SQL but is defined over entities rather than tables. Here a named query, Employee.selectByName, is added to the entity: it returns the employees whose firstName matches a parameter, ordered by employeeId.

@Entity
@Table(name="EMPLOYEES")
@NamedQueries({
    @NamedQuery(name="Employee.selectByName",
        query="select e from Employee e where e.firstName like :name order by e.employeeId")
})
<... omitted ...>

Finally, open OOW > JPA Content > persistence.xml, and on the Connection tab set the JTA data source under Database to "jdbc/test". That completes the O/R mapping with JPA 2.0: no SQL and no XML mapping files were needed, only this one data-source setting.

Implementing business logic with an EJB 3.1 session bean

Next, implement the business logic as an EJB 3.1 stateless session bean that uses the JPA entity. As with JPA, EJB 3.1 requires no deployment descriptor; annotations are sufficient. The bean exposes a single method:

public List<Employee> getEmp(String keyword) - returns the list of Employee entities whose firstName matches the keyword.

To create the bean, right-click the OOW project, select New > Other, choose EJB > Session Bean (EJB 3.x), and click Next. In the Create EJB 3.x Session Bean dialog, enter "ejb" for the Java Package and "EmpLogic" for the class name, select Stateless as the State Type, check No-interface, and click Finish.

The wizard generates the stateless session bean skeleton EmpLogic.java. The @Stateless annotation is what marks it as a stateless session bean:

EmpLogic skeleton (EmpLogic.java):

package ejb;

import javax.ejb.LocalBean;
import javax.ejb.Stateless;
<... omitted ...>
import model.Employee;

@Stateless
@LocalBean
public class EmpLogic {
    public EmpLogic() {
    }
}

To use JPA from the bean, inject an EntityManager. The EntityManager is the object through which all CRUD operations on entities are performed. @PersistenceContext(unitName = "OOW") injects the persistence context whose unit name matches the persistence-unit name defined in persistence.xml:

package ejb;

import javax.ejb.LocalBean;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
<... omitted ...>
import model.Employee;

@Stateless
@LocalBean
public class EmpLogic {
    @PersistenceContext(unitName = "OOW")
    private EntityManager em;

    public EmpLogic() {
    }
}

Finally, implement the getEmp() method, which runs the named query defined earlier:

package ejb;

import java.util.List;
import javax.ejb.LocalBean;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
<... omitted ...>
import model.Employee;

@Stateless
@LocalBean
public class EmpLogic {
    @PersistenceContext(unitName = "OOW")
    private EntityManager em;

    public EmpLogic() {
    }

    @SuppressWarnings("unchecked")
    public List<Employee> getEmp(String keyword) {
        StringBuilder param = new StringBuilder();
        param.append("%");
        param.append(keyword);
        param.append("%");
        return em.createNamedQuery("Employee.selectByName")
                .setParameter("name", param.toString()).getResultList();
    }
}

This completes the EJB 3.1 stateless session bean. The next part builds the screens with JSF 2.0 and also covers RESTful web services with JAX-RS.
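The article stops before the JSF 2.0 part, but to show where EmpLogic is headed, here is a minimal, hypothetical sketch of a JSF 2.0 backing bean that injects the session bean with @EJB; the class, property, and method names are illustrative assumptions, not from the original:

package web;

import java.util.List;
import javax.ejb.EJB;
import javax.faces.bean.ManagedBean;
import javax.faces.bean.RequestScoped;
import ejb.EmpLogic;
import model.Employee;

// Hypothetical JSF 2.0 backing bean; names are illustrative only.
@ManagedBean
@RequestScoped
public class EmpSearchBean {

    @EJB  // the container injects the no-interface EmpLogic session bean
    private EmpLogic empLogic;

    private String keyword;          // bound to an <h:inputText>
    private List<Employee> results;  // bound to an <h:dataTable>

    public String search() {
        // delegate to the EJB, which runs the Employee.selectByName named query
        results = empLogic.getEmp(keyword);
        return null; // stay on the same page
    }

    public String getKeyword() { return keyword; }
    public void setKeyword(String keyword) { this.keyword = keyword; }
    public List<Employee> getResults() { return results; }
}

A Facelets page would then bind #{empSearchBean.keyword} to an input field and iterate over #{empSearchBean.results} in a data table.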

    Read the article

  • Problem displaying a 3D Google map.

    - by nemade-vipin
    hello friends, I have implemented one Flex application in which I want to display 3D google map.In which I am getting the error the NUll object reference during map display. Here is my code:- import com.google.maps.controls.MapTypeControl; import adobe.utils.XMLUI; import mx.rpc.events.FaultEvent; import mx.controls.Alert; import generated.webservices.*; import mx.collections.ArrayCollection; import mx.controls.*; import generated.webservices.*; import com.google.maps.LatLng; import com.google.maps.Map3D; import com.google.maps.MapEvent; import com.google.maps.MapMouseEvent; import com.google.maps.services.GeocodingEvent; import com.google.maps.services.ClientGeocoder; import com.google.maps.overlays.Marker; import com.google.maps.InfoWindowOptions; import com.google.maps.MapOptions; import com.google.maps.MapType; import com.google.maps.View; import com.google.maps.controls.NavigationControl; import com.google.maps.geom.Attitude; import mx.controls.Alert; [Bindable] private var childName:ArrayCollection; [Bindable] private var childId:ArrayCollection; private var photoFeed:ArrayCollection; private var trackinginfochild:TrackingInfo; private var arrayOfchild:Array; private var newEntry:GetSBTSMobileAuthentication; private var trackinginfo:GetSBTSTrackingInfo; private var childObj:Child; private var UserId:int; private var lat:int; private var long:int; private var latlong:LatLng; public var user:SBTSWebService; public function authentication():void { // Instantiate a new Entry object. user = new SBTSWebService(); if(user!=null) { user.addSBTSWebServiceFaultEventListener(handleFaults); user.addgetSBTSMobileAuthenticationEventListener(authenticationResult); newEntry = new GetSBTSMobileAuthentication(); if(newEntry!=null) { newEntry.mobile=mobileno.text; newEntry.password=password.text; user.getSBTSMobileAuthentication(newEntry); } } } public function handleFaults(event:FaultEvent):void { Alert.show("A fault occured contacting the server. 
Fault message is: " + event.fault.faultString); } public function authenticationResult(event:GetSBTSMobileAuthenticationResultEvent):void { if(event.result != null && event.result._return>0) { if(event.result._return > 0) { UserId = event.result._return; loginform.enabled = false; getChildList(UserId); viewstack2.selectedIndex=1; } else { Alert.show("Authentication fail"); } } } public function getChildList(userId:int):void { var childEntry:GetSBTSMobileChildrenInfo = new GetSBTSMobileChildrenInfo(); childEntry.UserId = userId; user.addgetSBTSMobileChildrenInfoEventListener(sbtsChildrenInfoResult); user.getSBTSMobileChildrenInfo(childEntry); } public function sbtsChildrenInfoResult(event:GetSBTSMobileChildrenInfoResultEvent):void { if( event.result._return!=null) { arrayOfchild = event.result._return as Array; photoFeed = new ArrayCollection(arrayOfchild); childName = new ArrayCollection(); for( var count:int=0;count<photoFeed.length;count++) { childObj = photoFeed.getItemAt(count,0) as Child; childName.addItem(childObj.strName); } } } private function trackingInfo():void { for( var count:int=0;count user.getSBTSTrackingInfo(trackinginfo); } } } } private function getTrackingInfo(event:GetSBTSTrackingInfoResultEvent):void { if(event.result._return != null) { trackinginfochild = event.result._return as TrackingInfo; lat = trackinginfochild.dblLatitude; long = trackinginfochild.dblLongitude; latlong = new LatLng(lat,long); } } private function onMapPreinitialize(event:MapEvent):void { var myMapOptions:MapOptions = new MapOptions(); myMapOptions.zoom = 12; myMapOptions.center = latlong; myMapOptions.mapType = MapType.NORMAL_MAP_TYPE; myMapOptions.viewMode = View.VIEWMODE_PERSPECTIVE; myMapOptions.attitude = new Attitude(20,30,0); buslocation.setInitOptions(myMapOptions); buspath.setInitOptions(myMapOptions); } private function onMapReady(event:MapEvent):void { this.buslocation.addControl(new NavigationControl()); this.buslocation.addControl( new MapTypeControl()); } ]]> <mx:Move id="hideEffect" yTo="-500" /> <mx:Move id="showEffect" xFrom="500"/> <mx:Panel width="100%" height="100%" headerColors="[#000000,#FFFFFF]" hideEffect="{hideEffect}" showEffect="{showEffect}"> <mx:TabNavigator id="viewstack2" selectedIndex="0" creationPolicy="all" width="100%" height="100%"> <mx:Form label="Login Form" id="loginform" hideEffect="{hideEffect}" showEffect="{showEffect}"> <mx:FormItem label="Mobile NO:" creationPolicy="all"> <mx:TextInput id="mobileno"/> </mx:FormItem> <mx:FormItem label="Password:" creationPolicy="all"> <mx:TextInput displayAsPassword="true" id="password" /> </mx:FormItem> <mx:FormItem> <mx:Button label="Login" click="authentication()"/> </mx:FormItem> </mx:Form> <mx:Form label="Child List" id="childForm" hideEffect="{hideEffect}" showEffect="{showEffect}"> <mx:Label width="100%" color="blue" text="Select Child."/> <mx:RadioButtonGroup id="radioGroup" itemClick="trackingInfo()"/> <mx:Repeater id="fieldRepeater" dataProvider="{childName}"> <mx:RadioButton groupName="radioGroup" label="{fieldRepeater.currentItem}" value="{fieldRepeater.currentItem}"/> </mx:Repeater> </mx:Form> <mx:Form label="Child Information" hideEffect="{hideEffect}" showEffect="{showEffect}"> <mx:FormItem> <mx:DataGrid id="childinfo"> <mx:columns> <mx:DataGridColumn headerText="Child Name" dataField="strName"/> <mx:DataGridColumn headerText="School Name" dataField="strSchoolName"/> <mx:DataGridColumn headerText="Pick Up Point" dataField="strPickUpPointName"/> <mx:DataGridColumn headerText="Drop Down Point" 
dataField="strDropDownPointName"/> </mx:columns> </mx:DataGrid> </mx:FormItem> <mx:FormItem> <mx:Button label="click here to see bus location"/> </mx:FormItem> </mx:Form> <mx:Form label="Bus Location" hideEffect="{hideEffect}" showEffect="{showEffect}"> <maps:Map3D xmlns:maps="com.google.maps.*" mapevent_mappreinitialize="onMapPreinitialize(event)" id="buslocation" width="100%" height="100%" url="http://code.google.com/apis/maps/" key="ABQIAAAAXuX6aG-r_N0-tQNxUEV-vRSE8al1BQssMxLXJiP75kIjR3ssLxT3D52_u94hI-dMIkD72FmnK-P4og"/> </mx:Form> <mx:Form label="Bus path" hideEffect="{hideEffect}" showEffect="{showEffect}"> <maps:Map3D xmlns:maps="com.google.maps.*" mapevent_mappreinitialize="onMapPreinitialize(event)" id="buspath" width="100%" height="100%" url="http://code.google.com/apis/maps/" key="ABQIAAAAXuX6aG-r_N0-tQNxUEV-vRSE8al1BQssMxLXJiP75kIjR3ssLxT3D52_u94hI-dMIkD72FmnK-P4og"/> </mx:Form> </mx:TabNavigator> </mx:Panel> I am getting this error :- TypeError: Error #1009: Cannot access a property or method of a null object reference. at com.google.maps.geom::Geometry$/computeSeparatingAxes() at com.google.maps.geom::TileEnumerator/enumerateTiles() at com.google.maps.core::PerspectiveTilePane/computeOptimalSet() at com.google.maps.core::PerspectiveTilePane/render() at com.google.maps.core::PerspectiveTilePane/configure() at com.google.maps.managers::PerspectiveTileManager/updateView() at com.google.maps.managers::PerspectiveTileManager/initializePanes() at com.google.maps.managers::PerspectiveTileManager/onInitialize() at MethodInfo-190() at flash.events::EventDispatcher/dispatchEventFunction() at flash.events::EventDispatcher/dispatchEvent() at mx.core::UIComponent/dispatchEvent()[C:\autobuild\3.2.0\frameworks\projects\framework\src\mx\core\UIComponent.as:9298] at com.google.maps.wrappers::BaseEventDispatcher/dispatchEvent() at com.google.maps.wrappers::EventDispatcherWrapper/dispatchEvent() at com.google.maps.core::MapImpl/size() at com.google.maps.core::MapImpl/setSize() at com.google.maps.wrappers::IMapWrapper/setSize() at com.google.maps::Map/internalSetSize() at com.google.maps::Map/onUIComponentResized() at flash.events::EventDispatcher/dispatchEventFunction() at flash.events::EventDispatcher/dispatchEvent() at mx.core::UIComponent/dispatchEvent()[C:\autobuild\3.2.0\frameworks\projects\framework\src\mx\core\UIComponent.as:9298] at mx.core::UIComponent/dispatchResizeEvent()[C:\autobuild\3.2.0\frameworks\projects\framework\src\mx\core\UIComponent.as:7077] at mx.core::UIComponent/setActualSize()[C:\autobuild\3.2.0\frameworks\projects\framework\src\mx\core\UIComponent.as:6706] at mx.containers.utilityClasses::BoxLayout/updateDisplayList()[C:\autobuild\3.2.0\frameworks\projects\framework\src\mx\containers\utilityClasses\BoxLayout.as:219] at mx.containers::Form/updateDisplayList()[C:\autobuild\3.2.0\frameworks\projects\framework\src\mx\containers\Form.as:375] at mx.core::UIComponent/validateDisplayList()[C:\autobuild\3.2.0\frameworks\projects\framework\src\mx\core\UIComponent.as:6351] at mx.core::Container/validateDisplayList()[C:\autobuild\3.2.0\frameworks\projects\framework\src\mx\core\Container.as:2677] at mx.managers::LayoutManager/validateDisplayList()[C:\autobuild\3.2.0\frameworks\projects\framework\src\mx\managers\LayoutManager.as:622] at mx.managers::LayoutManager/doPhasedInstantiation()[C:\autobuild\3.2.0\frameworks\projects\framework\src\mx\managers\LayoutManager.as:695] at Function/http://adobe.com/AS3/2006/builtin::apply() at 
mx.core::UIComponent/callLaterDispatcher2()[C:\autobuild\3.2.0\frameworks\projects\framework\src\mx\core\UIComponent.as:8628] at mx.core::UIComponent/callLaterDispatcher()[C:\autobuild\3.2.0\frameworks\projects\framework\src\mx\core\UIComponent.as:8568] Please resolve my issue.

    Read the article

  • How to use onSensorChanged sensor data in combination with OpenGL

    - by Sponge
    I have written a TestSuite to find out how to calculate the rotation angles from the data you get in SensorEventListener.onSensorChanged(). I really hope you can complete my solution to help people who will have the same problems like me. Here is the code, i think you will understand it after reading it. Feel free to change it, the main idea was to implement several methods to send the orientation angles to the opengl view or any other target which would need it. method 1 to 4 are working, they are directly sending the rotationMatrix to the OpenGl view. all other methods are not working or buggy and i hope someone knows to get them working. i think the best method would be method 5 if it would work, because it would be the easiest to understand but i'm not sure how efficient it is. the complete code isn't optimized so i recommend to not use it as it is in your project. here it is: import java.nio.ByteBuffer; import java.nio.ByteOrder; import java.nio.FloatBuffer; import javax.microedition.khronos.egl.EGL10; import javax.microedition.khronos.egl.EGLConfig; import javax.microedition.khronos.opengles.GL10; import static javax.microedition.khronos.opengles.GL10.*; import android.app.Activity; import android.content.Context; import android.content.pm.ActivityInfo; import android.hardware.Sensor; import android.hardware.SensorEvent; import android.hardware.SensorEventListener; import android.hardware.SensorManager; import android.opengl.GLSurfaceView; import android.opengl.GLSurfaceView.Renderer; import android.os.Bundle; import android.util.Log; import android.view.WindowManager; /** * This class provides a basic demonstration of how to use the * {@link android.hardware.SensorManager SensorManager} API to draw a 3D * compass. */ public class SensorToOpenGlTests extends Activity implements Renderer, SensorEventListener { private static final boolean TRY_TRANSPOSED_VERSION = false; /* * MODUS overview: * * 1 - unbufferd data directly transfaired from the rotation matrix to the * modelview matrix * * 2 - buffered version of 1 where both acceleration and magnetometer are * buffered * * 3 - buffered version of 1 where only magnetometer is buffered * * 4 - buffered version of 1 where only acceleration is buffered * * 5 - uses the orientation sensor and sets the angles how to rotate the * camera with glrotate() * * 6 - uses the rotation matrix to calculate the angles * * 7 to 12 - every possibility how the rotationMatrix could be constructed * in SensorManager.getRotationMatrix (see * http://www.songho.ca/opengl/gl_anglestoaxes.html#anglestoaxes for all * possibilities) */ private static int MODUS = 2; private GLSurfaceView openglView; private FloatBuffer vertexBuffer; private ByteBuffer indexBuffer; private FloatBuffer colorBuffer; private SensorManager mSensorManager; private float[] rotationMatrix = new float[16]; private float[] accelGData = new float[3]; private float[] bufferedAccelGData = new float[3]; private float[] magnetData = new float[3]; private float[] bufferedMagnetData = new float[3]; private float[] orientationData = new float[3]; // private float[] mI = new float[16]; private float[] resultingAngles = new float[3]; private int mCount; final static float rad2deg = (float) (180.0f / Math.PI); private boolean mirrorOnBlueAxis = false; private boolean landscape; public SensorToOpenGlTests() { } /** Called with the activity is first created. 
*/ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); mSensorManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE); openglView = new GLSurfaceView(this); openglView.setRenderer(this); setContentView(openglView); } @Override protected void onResume() { // Ideally a game should implement onResume() and onPause() // to take appropriate action when the activity looses focus super.onResume(); openglView.onResume(); if (((WindowManager) getSystemService(WINDOW_SERVICE)) .getDefaultDisplay().getOrientation() == 1) { landscape = true; } else { landscape = false; } mSensorManager.registerListener(this, mSensorManager .getDefaultSensor(Sensor.TYPE_ACCELEROMETER), SensorManager.SENSOR_DELAY_GAME); mSensorManager.registerListener(this, mSensorManager .getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD), SensorManager.SENSOR_DELAY_GAME); mSensorManager.registerListener(this, mSensorManager .getDefaultSensor(Sensor.TYPE_ORIENTATION), SensorManager.SENSOR_DELAY_GAME); } @Override protected void onPause() { // Ideally a game should implement onResume() and onPause() // to take appropriate action when the activity looses focus super.onPause(); openglView.onPause(); mSensorManager.unregisterListener(this); } public int[] getConfigSpec() { // We want a depth buffer, don't care about the // details of the color buffer. int[] configSpec = { EGL10.EGL_DEPTH_SIZE, 16, EGL10.EGL_NONE }; return configSpec; } public void onDrawFrame(GL10 gl) { // clear screen and color buffer: gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT); // set target matrix to modelview matrix: gl.glMatrixMode(GL10.GL_MODELVIEW); // init modelview matrix: gl.glLoadIdentity(); // move camera away a little bit: if ((MODUS == 1) || (MODUS == 2) || (MODUS == 3) || (MODUS == 4)) { if (landscape) { // in landscape mode first remap the rotationMatrix before using // it with glMultMatrixf: float[] result = new float[16]; SensorManager.remapCoordinateSystem(rotationMatrix, SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X, result); gl.glMultMatrixf(result, 0); } else { gl.glMultMatrixf(rotationMatrix, 0); } } else { //in all other modes do the rotation by hand: gl.glRotatef(resultingAngles[1], 1, 0, 0); gl.glRotatef(resultingAngles[2], 0, 1, 0); gl.glRotatef(resultingAngles[0], 0, 0, 1); if (mirrorOnBlueAxis) { //this is needed for mode 6 to work gl.glScalef(1, 1, -1); } } //move the axis to simulate augmented behaviour: gl.glTranslatef(0, 2, 0); // draw the 3 axis on the screen: gl.glVertexPointer(3, GL_FLOAT, 0, vertexBuffer); gl.glColorPointer(4, GL_FLOAT, 0, colorBuffer); gl.glDrawElements(GL_LINES, 6, GL_UNSIGNED_BYTE, indexBuffer); } public void onSurfaceChanged(GL10 gl, int width, int height) { gl.glViewport(0, 0, width, height); float r = (float) width / height; gl.glMatrixMode(GL10.GL_PROJECTION); gl.glLoadIdentity(); gl.glFrustumf(-r, r, -1, 1, 1, 10); } public void onSurfaceCreated(GL10 gl, EGLConfig config) { gl.glDisable(GL10.GL_DITHER); gl.glClearColor(1, 1, 1, 1); gl.glEnable(GL10.GL_CULL_FACE); gl.glShadeModel(GL10.GL_SMOOTH); gl.glEnable(GL10.GL_DEPTH_TEST); gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glEnableClientState(GL10.GL_COLOR_ARRAY); // load the 3 axis and there colors: float vertices[] = { 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1 }; float colors[] = { 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1 }; byte indices[] = { 0, 1, 0, 2, 0, 3 }; ByteBuffer vbb; vbb = ByteBuffer.allocateDirect(vertices.length * 4); vbb.order(ByteOrder.nativeOrder()); vertexBuffer = 
vbb.asFloatBuffer(); vertexBuffer.put(vertices); vertexBuffer.position(0); vbb = ByteBuffer.allocateDirect(colors.length * 4); vbb.order(ByteOrder.nativeOrder()); colorBuffer = vbb.asFloatBuffer(); colorBuffer.put(colors); colorBuffer.position(0); indexBuffer = ByteBuffer.allocateDirect(indices.length); indexBuffer.put(indices); indexBuffer.position(0); } public void onAccuracyChanged(Sensor sensor, int accuracy) { } public void onSensorChanged(SensorEvent event) { // load the new values: loadNewSensorData(event); if (MODUS == 1) { SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData); } if (MODUS == 2) { rootMeanSquareBuffer(bufferedAccelGData, accelGData); rootMeanSquareBuffer(bufferedMagnetData, magnetData); SensorManager.getRotationMatrix(rotationMatrix, null, bufferedAccelGData, bufferedMagnetData); } if (MODUS == 3) { rootMeanSquareBuffer(bufferedMagnetData, magnetData); SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, bufferedMagnetData); } if (MODUS == 4) { rootMeanSquareBuffer(bufferedAccelGData, accelGData); SensorManager.getRotationMatrix(rotationMatrix, null, bufferedAccelGData, magnetData); } if (MODUS == 5) { // this mode uses the sensor data recieved from the orientation // sensor resultingAngles = orientationData.clone(); if ((-90 > resultingAngles[1]) || (resultingAngles[1] > 90)) { resultingAngles[1] = orientationData[0]; resultingAngles[2] = orientationData[1]; resultingAngles[0] = orientationData[2]; } } if (MODUS == 6) { SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData); final float[] anglesInRadians = new float[3]; SensorManager.getOrientation(rotationMatrix, anglesInRadians); if ((-90 < anglesInRadians[2] * rad2deg) && (anglesInRadians[2] * rad2deg < 90)) { // device camera is looking on the floor // this hemisphere is working fine mirrorOnBlueAxis = false; resultingAngles[0] = anglesInRadians[0] * rad2deg; resultingAngles[1] = anglesInRadians[1] * rad2deg; resultingAngles[2] = anglesInRadians[2] * -rad2deg; } else { mirrorOnBlueAxis = true; // device camera is looking in the sky // this hemisphere is mirrored at the blue axis resultingAngles[0] = (anglesInRadians[0] * rad2deg); resultingAngles[1] = (anglesInRadians[1] * rad2deg); resultingAngles[2] = (anglesInRadians[2] * rad2deg); } } if (MODUS == 7) { SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData); rotationMatrix = transpose(rotationMatrix); /* * this assumes that the rotation matrices are multiplied in x y z * order Rx*Ry*Rz */ resultingAngles[2] = (float) (Math.asin(rotationMatrix[2])); final float cosB = (float) Math.cos(resultingAngles[2]); resultingAngles[2] = resultingAngles[2] * rad2deg; resultingAngles[0] = -(float) (Math.acos(rotationMatrix[0] / cosB)) * rad2deg; resultingAngles[1] = (float) (Math.acos(rotationMatrix[10] / cosB)) * rad2deg; } if (MODUS == 8) { SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData); rotationMatrix = transpose(rotationMatrix); /* * this assumes that the rotation matrices are multiplied in z y x */ resultingAngles[2] = (float) (Math.asin(-rotationMatrix[8])); final float cosB = (float) Math.cos(resultingAngles[2]); resultingAngles[2] = resultingAngles[2] * rad2deg; resultingAngles[1] = (float) (Math.acos(rotationMatrix[9] / cosB)) * rad2deg; resultingAngles[0] = (float) (Math.asin(rotationMatrix[4] / cosB)) * rad2deg; } if (MODUS == 9) { SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData); rotationMatrix = 
transpose(rotationMatrix); /* * this assumes that the rotation matrices are multiplied in z x y * * note z axis looks good at this one */ resultingAngles[1] = (float) (Math.asin(rotationMatrix[9])); final float minusCosA = -(float) Math.cos(resultingAngles[1]); resultingAngles[1] = resultingAngles[1] * rad2deg; resultingAngles[2] = (float) (Math.asin(rotationMatrix[8] / minusCosA)) * rad2deg; resultingAngles[0] = (float) (Math.asin(rotationMatrix[1] / minusCosA)) * rad2deg; } if (MODUS == 10) { SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData); rotationMatrix = transpose(rotationMatrix); /* * this assumes that the rotation matrices are multiplied in y x z */ resultingAngles[1] = (float) (Math.asin(-rotationMatrix[6])); final float cosA = (float) Math.cos(resultingAngles[1]); resultingAngles[1] = resultingAngles[1] * rad2deg; resultingAngles[2] = (float) (Math.asin(rotationMatrix[2] / cosA)) * rad2deg; resultingAngles[0] = (float) (Math.acos(rotationMatrix[5] / cosA)) * rad2deg; } if (MODUS == 11) { SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData); rotationMatrix = transpose(rotationMatrix); /* * this assumes that the rotation matrices are multiplied in y z x */ resultingAngles[0] = (float) (Math.asin(rotationMatrix[4])); final float cosC = (float) Math.cos(resultingAngles[0]); resultingAngles[0] = resultingAngles[0] * rad2deg; resultingAngles[2] = (float) (Math.acos(rotationMatrix[0] / cosC)) * rad2deg; resultingAngles[1] = (float) (Math.acos(rotationMatrix[5] / cosC)) * rad2deg; } if (MODUS == 12) { SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData); rotationMatrix = transpose(rotationMatrix); /* * this assumes that the rotation matrices are multiplied in x z y */ resultingAngles[0] = (float) (Math.asin(-rotationMatrix[1])); final float cosC = (float) Math.cos(resultingAngles[0]); resultingAngles[0] = resultingAngles[0] * rad2deg; resultingAngles[2] = (float) (Math.acos(rotationMatrix[0] / cosC)) * rad2deg; resultingAngles[1] = (float) (Math.acos(rotationMatrix[5] / cosC)) * rad2deg; } logOutput(); } /** * transposes the matrix because it was transposted (inverted, but here its * the same, because its a rotation matrix) to be used for opengl * * @param source * @return */ private float[] transpose(float[] source) { final float[] result = source.clone(); if (TRY_TRANSPOSED_VERSION) { result[1] = source[4]; result[2] = source[8]; result[4] = source[1]; result[6] = source[9]; result[8] = source[2]; result[9] = source[6]; } // the other values in the matrix are not relevant for rotations return result; } private void rootMeanSquareBuffer(float[] target, float[] values) { final float amplification = 200.0f; float buffer = 20.0f; target[0] += amplification; target[1] += amplification; target[2] += amplification; values[0] += amplification; values[1] += amplification; values[2] += amplification; target[0] = (float) (Math .sqrt((target[0] * target[0] * buffer + values[0] * values[0]) / (1 + buffer))); target[1] = (float) (Math .sqrt((target[1] * target[1] * buffer + values[1] * values[1]) / (1 + buffer))); target[2] = (float) (Math .sqrt((target[2] * target[2] * buffer + values[2] * values[2]) / (1 + buffer))); target[0] -= amplification; target[1] -= amplification; target[2] -= amplification; values[0] -= amplification; values[1] -= amplification; values[2] -= amplification; } private void loadNewSensorData(SensorEvent event) { final int type = event.sensor.getType(); if (type == Sensor.TYPE_ACCELEROMETER) { 
accelGData = event.values.clone(); } if (type == Sensor.TYPE_MAGNETIC_FIELD) { magnetData = event.values.clone(); } if (type == Sensor.TYPE_ORIENTATION) { orientationData = event.values.clone(); } } private void logOutput() { if (mCount++ > 30) { mCount = 0; Log.d("Compass", "yaw0: " + (int) (resultingAngles[0]) + " pitch1: " + (int) (resultingAngles[1]) + " roll2: " + (int) (resultingAngles[2])); } } }
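Stripped of the twelve experimental modes, the working core of modes 1 to 4 is the standard getRotationMatrix()/getOrientation() pairing from the Android SDK. Here is a minimal sketch of that pattern (the helper class and method names are mine); note that getOrientation() returns radians, which is why the test suite above multiplies by rad2deg:

import android.hardware.SensorManager;

// Minimal sketch of the standard angle computation: feed the latest
// accelerometer and magnetometer readings to getRotationMatrix() and
// derive azimuth/pitch/roll with getOrientation().
public final class OrientationHelper {
    private final float[] rotation = new float[9];
    private final float[] angles = new float[3];

    /** Returns {azimuth, pitch, roll} in degrees, or null if the matrix
     *  could not be computed (e.g. the device is in free fall). */
    public float[] compute(float[] accelGData, float[] magnetData) {
        if (!SensorManager.getRotationMatrix(rotation, null, accelGData, magnetData)) {
            return null;
        }
        SensorManager.getOrientation(rotation, angles);
        return new float[] {
            (float) Math.toDegrees(angles[0]),  // azimuth
            (float) Math.toDegrees(angles[1]),  // pitch
            (float) Math.toDegrees(angles[2])   // roll
        };
    }
}

The smoothing question is separate: the rootMeanSquareBuffer() above is one way to damp sensor jitter before these calls, and any low-pass filter on accelGData/magnetData would serve the same purpose.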

    Read the article

  • Blackberry Player, custom data source

    - by Alex
    Hello I must create a custom media player within the application with support for mp3 and wav files. I read in the documentation i cant seek or get the media file duration without a custom datasoruce. I checked the demo in the JDE 4.6 but i have still problems... I cant get the duration, it return much more then the expected so i`m sure i screwed up something while i modified the code to read the mp3 file locally from the filesystem. Somebody can help me what i did wrong ? (I can hear the mp3, so the player plays it correctly from start to end) I must support OSs = 4.6. Thank You Here is my modified datasource LimitedRateStreaminSource.java * Copyright © 1998-2009 Research In Motion Ltd. Note: For the sake of simplicity, this sample application may not leverage resource bundles and resource strings. However, it is STRONGLY recommended that application developers make use of the localization features available within the BlackBerry development platform to ensure a seamless application experience across a variety of languages and geographies. For more information on localizing your application, please refer to the BlackBerry Java Development Environment Development Guide associated with this release. */ package com.halcyon.tawkwidget.model; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; import javax.microedition.io.Connector; import javax.microedition.io.file.FileConnection; import javax.microedition.media.Control; import javax.microedition.media.protocol.ContentDescriptor; import javax.microedition.media.protocol.DataSource; import javax.microedition.media.protocol.SourceStream; import net.rim.device.api.io.SharedInputStream; /** * The data source used by the BufferedPlayback's media player. / public final class LimitedRateStreamingSource extends DataSource { /* The max size to be read from the stream at one time. */ private static final int READ_CHUNK = 512; // bytes /** A reference to the field which displays the load status. */ //private TextField _loadStatusField; /** A reference to the field which displays the player status. */ //private TextField _playStatusField; /** * The minimum number of bytes that must be buffered before the media file * will begin playing. */ private int _startBuffer = 200000; /** The maximum size (in bytes) of a single read. */ private int _readLimit = 32000; /** * The minimum forward byte buffer which must be maintained in order for * the video to keep playing. If the forward buffer falls below this * number, the playback will pause until the buffer increases. */ private int _pauseBytes = 64000; /** * The minimum forward byte buffer required to resume * playback after a pause. */ private int _resumeBytes = 128000; /** The stream connection over which media content is passed. */ //private ContentConnection _contentConnection; private FileConnection _fileConnection; /** An input stream shared between several readers. */ private SharedInputStream _readAhead; /** A stream to the buffered resource. */ private LimitedRateSourceStream _feedToPlayer; /** The MIME type of the remote media file. */ private String _forcedContentType; /** A counter for the total number of buffered bytes */ private volatile int _totalRead; /** A flag used to tell the connection thread to stop */ private volatile boolean _stop; /** * A flag used to indicate that the initial buffering is complete. In * other words, that the current buffer is larger than the defined start * buffer size. 
*/ private volatile boolean _bufferingComplete; /** A flag used to indicate that the remote file download is complete. */ private volatile boolean _downloadComplete; /** The thread which retrieves the remote media file. */ private ConnectionThread _loaderThread; /** The local save file into which the remote file is written. */ private FileConnection _saveFile; /** A stream for the local save file. */ private OutputStream _saveStream; /** * Constructor. * @param locator The locator that describes the DataSource. */ public LimitedRateStreamingSource(String locator) { super(locator); } /** * Open a connection to the locator. * @throws IOException */ public void connect() throws IOException { //Open the connection to the remote file. _fileConnection = (FileConnection)Connector.open(getLocator(), Connector.READ); //Cache a reference to the locator. String locator = getLocator(); //Report status. System.out.println("Loading: " + locator); //System.out.println("Size: " + _contentConnection.getLength()); System.out.println("Size: " + _fileConnection.totalSize()); //The name of the remote file begins after the last forward slash. int filenameStart = locator.lastIndexOf('/'); //The file name ends at the first instance of a semicolon. int paramStart = locator.indexOf(';'); //If there is no semicolon, the file name ends at the end of the line. if (paramStart < 0) { paramStart = locator.length(); } //Extract the file name. String filename = locator.substring(filenameStart, paramStart); System.out.println("Filename: " + filename); //Open a local save file with the same name as the remote file. _saveFile = (FileConnection) Connector.open("file:///SDCard/blackberry/music" + filename, Connector.READ_WRITE); //If the file doesn't already exist, create it. if (!_saveFile.exists()) { _saveFile.create(); } System.out.println("---------- 1"); //Open the file for writing. _saveFile.setReadable(true); //Open a shared input stream to the local save file to //allow many simultaneous readers. SharedInputStream fileStream = SharedInputStream.getSharedInputStream(_saveFile.openInputStream()); //Begin reading at the beginning of the file. fileStream.setCurrentPosition(0); System.out.println("---------- 2"); //If the local file is smaller than the remote file... if (_saveFile.fileSize() < _fileConnection.totalSize()) { System.out.println("---------- 3"); //Did not get the entire file, set the system to try again. _saveFile.setWritable(true); System.out.println("---------- 4"); //A non-null save stream is used as a flag later to indicate that //the file download was incomplete. _saveStream = _saveFile.openOutputStream(); System.out.println("---------- 5"); //Use a new shared input stream for buffered reading. _readAhead = SharedInputStream.getSharedInputStream(_fileConnection.openInputStream()); System.out.println("---------- 6"); } else { //The download is complete. System.out.println("---------- 7"); _downloadComplete = true; //We can use the initial input stream to read the buffered media. _readAhead = fileStream; System.out.println("---------- 8"); //We can close the remote connection. _fileConnection.close(); System.out.println("---------- 9"); } if (_forcedContentType != null) { //Use the user-defined content type if it is set. System.out.println("---------- 10"); _feedToPlayer = new LimitedRateSourceStream(_readAhead, _forcedContentType); System.out.println("---------- 11"); } else { System.out.println("---------- 12"); //Otherwise, use the MIME types of the remote file. 
// _feedToPlayer = new LimitedRateSourceStream(_readAhead, _fileConnection)); } System.out.println("---------- 13"); } /** * Destroy and close all existing connections. */ public void disconnect() { try { if (_saveStream != null) { //Destroy the stream to the local save file. _saveStream.close(); _saveStream = null; } //Close the local save file. _saveFile.close(); if (_readAhead != null) { //Close the reader stream. _readAhead.close(); _readAhead = null; } //Close the remote file connection. _fileConnection.close(); //Close the stream to the player. _feedToPlayer.close(); } catch (Exception e) { System.err.println(e.getMessage()); } } /** * Returns the content type of the remote file. * @return The content type of the remote file. */ public String getContentType() { return _feedToPlayer.getContentDescriptor().getContentType(); } /** * Returns a stream to the buffered resource. * @return A stream to the buffered resource. */ public SourceStream[] getStreams() { return new SourceStream[] { _feedToPlayer }; } /** * Starts the connection thread used to download the remote file. */ public void start() throws IOException { //If the save stream is null, we have already completely downloaded //the file. if (_saveStream != null) { //Open the connection thread to finish downloading the file. _loaderThread = new ConnectionThread(); _loaderThread.start(); } } /** * Stop the connection thread. */ public void stop() throws IOException { //Set the boolean flag to stop the thread. _stop = true; } /** * @see javax.microedition.media.Controllable#getControl(String) */ public Control getControl(String controlType) { // No implemented Controls. return null; } /** * @see javax.microedition.media.Controllable#getControls() */ public Control[] getControls() { // No implemented Controls. return null; } /** * Force the lower level stream to a given content type. Must be called * before the connect function in order to work. * @param contentType The content type to use. */ public void setContentType(String contentType) { _forcedContentType = contentType; } /** * A stream to the buffered media resource. */ private final class LimitedRateSourceStream implements SourceStream { /** A stream to the local copy of the remote resource. */ private SharedInputStream _baseSharedStream; /** Describes the content type of the media file. */ private ContentDescriptor _contentDescriptor; /** * Constructor. Creates a LimitedRateSourceStream from * the given InputStream. * @param inputStream The input stream used to create a new reader. * @param contentType The content type of the remote file. */ LimitedRateSourceStream(InputStream inputStream, String contentType) { System.out.println("[LimitedRateSoruceStream]---------- 1"); _baseSharedStream = SharedInputStream.getSharedInputStream(inputStream); System.out.println("[LimitedRateSoruceStream]---------- 2"); _contentDescriptor = new ContentDescriptor(contentType); System.out.println("[LimitedRateSoruceStream]---------- 3"); } /** * Returns the content descriptor for this stream. * @return The content descriptor for this stream. */ public ContentDescriptor getContentDescriptor() { return _contentDescriptor; } /** * Returns the length provided by the connection. * @return long The length provided by the connection. */ public long getContentLength() { return _fileConnection.totalSize(); } /** * Returns the seek type of the stream. */ public int getSeekType() { return RANDOM_ACCESSIBLE; //return SEEKABLE_TO_START; } /** * Returns the maximum size (in bytes) of a single read. 
*/ public int getTransferSize() { return _readLimit; } /** * Writes bytes from the buffer into a byte array for playback. * @param bytes The buffer into which the data is read. * @param off The start offset in array b at which the data is written. * @param len The maximum number of bytes to read. * @return the total number of bytes read into the buffer, or -1 if * there is no more data because the end of the stream has been reached. * @throws IOException */ public int read(byte[] bytes, int off, int len) throws IOException { System.out.println("[LimitedRateSoruceStream]---------- 5"); System.out.println("Read Request for: " + len + " bytes"); //Limit bytes read to our readLimit. int readLength = len; System.out.println("[LimitedRateSoruceStream]---------- 6"); if (readLength > getReadLimit()) { readLength = getReadLimit(); } //The number of available byes in the buffer. int available; //A boolean flag indicating that the thread should pause //until the buffer has increased sufficiently. boolean paused = false; System.out.println("[LimitedRateSoruceStream]---------- 7"); for (;;) { available = _baseSharedStream.available(); System.out.println("[LimitedRateSoruceStream]---------- 8"); if (_downloadComplete) { //Ignore all restrictions if downloading is complete. System.out.println("Complete, Reading: " + len + " - Available: " + available); return _baseSharedStream.read(bytes, off, len); } else if(_bufferingComplete) { if (paused && available > getResumeBytes()) { //If the video is paused due to buffering, but the //number of available byes is sufficiently high, //resume playback of the media. System.out.println("Resuming - Available: " + available); paused = false; return _baseSharedStream.read(bytes, off, readLength); } else if(!paused && (available > getPauseBytes() || available > readLength)) { //We have enough information for this media playback. if (available < getPauseBytes()) { //If the buffer is now insufficient, set the //pause flag. paused = true; } System.out.println("Reading: " + readLength + " - Available: " + available); return _baseSharedStream.read(bytes, off, readLength); } else if(!paused) { //Set pause until loaded enough to resume. paused = true; } } else { //We are not ready to start yet, try sleeping to allow the //buffer to increase. try { Thread.sleep(500); } catch (Exception e) { System.err.println(e.getMessage()); } } } } /** * @see javax.microedition.media.protocol.SourceStream#seek(long) */ public long seek(long where) throws IOException { _baseSharedStream.setCurrentPosition((int) where); return _baseSharedStream.getCurrentPosition(); } /** * @see javax.microedition.media.protocol.SourceStream#tell() */ public long tell() { return _baseSharedStream.getCurrentPosition(); } /** * Close the stream. * @throws IOException */ void close() throws IOException { _baseSharedStream.close(); } /** * @see javax.microedition.media.Controllable#getControl(String) */ public Control getControl(String controlType) { // No implemented controls. return null; } /** * @see javax.microedition.media.Controllable#getControls() */ public Control[] getControls() { // No implemented controls. return null; } } /** * A thread which downloads the remote file and writes it to the local file. */ private final class ConnectionThread extends Thread { /** * Download the remote media file, then write it to the local * file. * @see java.lang.Thread#run() */ public void run() { try { byte[] data = new byte[READ_CHUNK]; int len = 0; //Until we reach the end of the file. 
while (-1 != (len = _readAhead.read(data))) { _totalRead += len; if (!_bufferingComplete && _totalRead > getStartBuffer()) { //We have enough of a buffer to begin playback. _bufferingComplete = true; System.out.println("Initial Buffering Complete"); } if (_stop) { //Stop reading. return; } } System.out.println("Downloading Complete"); System.out.println("Total Read: " + _totalRead); //If the downloaded data is not the same size //as the remote file, something is wrong. if (_totalRead != _fileConnection.totalSize()) { System.err.println("* Unable to Download entire file *"); } _downloadComplete = true; _readAhead.setCurrentPosition(0); //Write downloaded data to the local file. Write only the bytes //actually read on each pass; writing the whole buffer would append //stale bytes from any earlier, shorter read. while (-1 != (len = _readAhead.read(data))) { _saveStream.write(data, 0, len); } } catch (Exception e) { System.err.println(e.toString()); } } } /** * Gets the minimum forward byte buffer which must be maintained in * order for the video to keep playing. * @return The pause byte buffer. */ int getPauseBytes() { return _pauseBytes; } /** * Sets the minimum forward buffer which must be maintained in order * for the video to keep playing. * @param pauseBytes The new pause byte buffer. */ void setPauseBytes(int pauseBytes) { _pauseBytes = pauseBytes; } /** * Gets the maximum size (in bytes) of a single read. * @return The maximum size (in bytes) of a single read. */ int getReadLimit() { return _readLimit; } /** * Sets the maximum size (in bytes) of a single read. * @param readLimit The new maximum size (in bytes) of a single read. */ void setReadLimit(int readLimit) { _readLimit = readLimit; } /** * Gets the minimum forward byte buffer required to resume * playback after a pause. * @return The resume byte buffer. */ int getResumeBytes() { return _resumeBytes; } /** * Sets the minimum forward byte buffer required to resume * playback after a pause. * @param resumeBytes The new resume byte buffer. */ void setResumeBytes(int resumeBytes) { _resumeBytes = resumeBytes; } /** * Gets the minimum number of bytes that must be buffered before the * media file will begin playing. * @return The start byte buffer. */ int getStartBuffer() { return _startBuffer; } /** * Sets the minimum number of bytes that must be buffered before the * media file will begin playing. * @param startBuffer The new start byte buffer. */ void setStartBuffer(int startBuffer) { _startBuffer = startBuffer; } } And this is how I use it: LimitedRateStreamingSource source = new LimitedRateStreamingSource("file:///SDCard/music3.mp3"); source.setContentType("audio/mpeg"); mediaPlayer = javax.microedition.media.Manager.createPlayer(source); mediaPlayer.addPlayerListener(this); mediaPlayer.realize(); mediaPlayer.prefetch(); After starting playback I call mediaPlayer.getDuration() and it returns roughly 24:22 (the built-in BlackBerry media player says the file length is 4:05). I also tried reading the duration in the listener, where it unfortunately came back as around 64 minutes, so I am sure something is wrong inside the DataSource...

    Read the article

  • How to improve this piece of code

    - by user303518
    Can anyone help me on this. It may be very frustrating for you all. But I want you guys to take a moment to look at the code below and please tell me all the things that are wrong in the below piece of code. You can copy it into your visual studio to analyze this better. You don’t have to make this code compile. My task is to get the following things: Most obvious mistakes with this code All the things that are wrong/bad practices with the code below Modify/Write an optimized version of this code. Keep in mind, the code DOES NOT need to compile. Reduce the number of lines of code You should NEVER try to implement something like below: public List<ValidationErrorDto> ProcessEQuote(string eQuoteXml, long programUniversalID) { // Get Program Info. DataTable programs = GetAllPrograms(); DataRow[] programRows = programs.Select(string.Format("ProgramUniversalID = {0}", programUniversalID)); long programID = (long)programRows[0]["ProgramID"]; string programName = (string)programRows[0]["Description"]; // Get Config file values. string fromEmail = ConfigurationManager.AppSettings["eQuotesFromEmail"]; string technicalSupportPhone = ConfigurationManager.AppSettings["TechnicalSupportPhone"]; string fromEmailDisplayName = string.IsNullOrEmpty(ConfigurationManager.AppSettings["eQuotesFromDisplayName"]) ? null : string.Format(ConfigurationManager.AppSettings["eQuotesFromDisplayName"], programName); string itEmail = !string.IsNullOrEmpty(ConfigurationManager.AppSettings["ITEmail"]) ? ConfigurationManager.AppSettings["ITEmail"] : string.Empty; string itName = !string.IsNullOrEmpty(ConfigurationManager.AppSettings["ITName"]) ? ConfigurationManager.AppSettings["ITName"] : "IT"; try { List<ValidationErrorDto> allValidationErrors = new List<ValidationErrorDto>(); List<ValidationErrorDto> validationErrors = new List<ValidationErrorDto>(); if (validationErrors.Count == 0) { validationErrors.AddRange(ValidateEQuoteXmlAgainstSchema(eQuoteXml)); if (validationErrors.Count == 0) { XmlDocument eQuoteXmlDoc = new XmlDocument(); eQuoteXmlDoc.LoadXml(eQuoteXml); XmlElement rootElement = eQuoteXmlDoc.DocumentElement; XmlNodeList quotesList = rootElement.SelectNodes("Quote"); foreach (XmlNode node in quotesList) { // Each node should be a quote node but to be safe, check if (node.Name == "Quote") { string groupName = node.SelectSingleNode("Group/GroupName").InnerText; string groupCity = node.SelectSingleNode("Group/GroupCity").InnerText; string groupPostalCode = node.SelectSingleNode("Group/GroupZipCode").InnerText; string groupSicCode = node.SelectSingleNode("Group/GroupSIC").InnerText; string generalAgencyLicenseNumber = node.SelectSingleNode("Group/GALicenseNbr").InnerText; string brokerName = node.SelectSingleNode("Group/BrokerName").InnerText; string deliverToEmailAddress = node.SelectSingleNode("Group/ReturnEmailAddress").InnerText; string brokerEmail = node.SelectSingleNode("Group/BrokerEmail").InnerText; string groupEligibleEmployeeCountString = node.SelectSingleNode("Group/GroupNbrEmployees").InnerText; string quoteEffectiveDateString = node.SelectSingleNode("Group/QuoteEffectiveDate").InnerText; string salesRepName = node.SelectSingleNode("Group/SalesRepName").InnerText; string salesRepPhone = node.SelectSingleNode("Group/SalesRepPhone").InnerText; string salesRepEmail = node.SelectSingleNode("Group/SalesRepEmail").InnerText; string brokerLicenseNumber = node.SelectSingleNode("Group/BrokerLicenseNbr").InnerText; DateTime? 
quoteEffectiveDate = null; int eligibleEmployeeCount = int.Parse(groupEligibleEmployeeCountString); DateTime quoteEffectiveDateOut; if (!DateTime.TryParse(quoteEffectiveDateString, out quoteEffectiveDateOut)) validationErrors.Add(ValidationHelper.CreateValidationError((long)QuoteField.EffectiveDate, "Quote Effective Date", ValidationErrorDto.ValueOutOfRange, false, ValidationHelper.CreateValidationContext(Entity.QuoteDetail, "Quote"))); else quoteEffectiveDate = quoteEffectiveDateOut; Dictionary<string, string> replacementCodeValues = new Dictionary<string, string>(); if (string.IsNullOrEmpty(Resources.ParameterMessageKeys.ResourceManager.GetString("GroupName"))) throw new InvalidOperationException("GroupName key is not configured"); replacementCodeValues.Add(Resources.ParameterMessageKeys.GroupName, groupName); replacementCodeValues.Add(Resources.ParameterMessageKeys.ProgramName, programName); replacementCodeValues.Add(Resources.ParameterMessageKeys.SalesRepName, salesRepName); replacementCodeValues.Add(Resources.ParameterMessageKeys.SalesRepPhone, salesRepPhone); replacementCodeValues.Add(Resources.ParameterMessageKeys.SalesRepEmail, salesRepEmail); replacementCodeValues.Add(Resources.ParameterMessageKeys.TechnicalSupportPhone, technicalSupportPhone); replacementCodeValues.Add(Resources.ParameterMessageKeys.EligibleEmployeCount, eligibleEmployeeCount.ToString()); // Retrieve the CityID and StateID long? cityID = null; long? stateID = null; DataSet citiesAndStates = Addresses.GetCitiesAndStatesFromPostalCode(groupPostalCode); DataTable cities = citiesAndStates.Tables[0]; DataTable states = citiesAndStates.Tables[1]; DataRow[] cityRows = cities.Select(string.Format("Name = '{0}'", groupCity)); if (cityRows.Length > 0) { cityID = (long)cityRows[0]["CityID"]; DataRow[] stateRows = states.Select(string.Format("CityID = {0}", cityID)); if (stateRows.Length > 0) stateID = (long)stateRows[0]["StateID"]; } // If the StateID does not exist, then we cannot get the GeneralAgency, so set a validation error and do not contine. // Else, Continue and look for an General Agency. If a GA was found, look for or create a Broker. Then look for or create a Prospect Group // Then using the info, create a quote. if (!stateID.HasValue) validationErrors.Add(ValidationHelper.CreateValidationError((long)ProspectGroupField.State, "Prospect Group State", ValidationErrorDto.RequiredFieldMissing, false, ValidationHelper.CreateValidationContext(Entity.ProspectGroup, "Prospect Group"))); bool brokerValidationError = false; SalesRepDto salesRep = GetSalesRepByEmail(salesRepEmail, ref validationErrors); if (salesRep == null) { string exceptionMessage = "Sales Rep Not found in Opportunity System. Please make sure Sales Rep is present in the system by adding the sales rep in AWP SR Add Screen." + Environment.NewLine; exceptionMessage = exceptionMessage + " Sales Rep Name: " + salesRepName + Environment.NewLine; exceptionMessage = exceptionMessage + " Sales Rep Email: " + salesRepEmail + Environment.NewLine; exceptionMessage = exceptionMessage + " Module : E-Quote Service" + Environment.NewLine; throw new Exception(exceptionMessage); } if (validationErrors.Count == 0) { // Note that StateID and EffectiveDate should be valid at this point. If it weren't there would be validation errors. // Find General Agency long? generalAgencyID; validationErrors.AddRange(GetEQuoteGeneralAgency(generalAgencyLicenseNumber, stateID.Value, out generalAgencyID)); // If GA was found, check for Broker. 
if (validationErrors.Count == 0 && generalAgencyID.HasValue) { Dictionary<string, string> brokerNameParts = ContactHelper.GetNamePartsFromFullName(brokerName); long? brokerID; validationErrors.AddRange(CreateEQuoteBroker(brokerNameParts["FirstName"], brokerNameParts["LastName"], brokerEmail, brokerLicenseNumber, stateID.Value, generalAgencyID.Value, salesRep, programID, out brokerID)); // If Broker was found but had validation errors if (validationErrors.Count > 0) { DeliverEmailMessage(programID, quoteEffectiveDate.Value, fromEmail, fromEmailDisplayName, itEmail, DocumentType.EQuoteBrokerValidationFailureMessageEmail, replacementCodeValues); brokerValidationError = true; } // If Broker was found, check for Prospect Group if (validationErrors.Count == 0 && brokerID.HasValue) { long? prospectGroupID; validationErrors.AddRange(CreateEQuoteProspectGroup(groupName, cityID, stateID, groupPostalCode, groupSicCode, brokerID.Value, out prospectGroupID)); if (validationErrors.Count == 0 && prospectGroupID.HasValue) { if (validationErrors.Count == 0) { long? quoteID; validationErrors.AddRange(CreateEQuote(programID, prospectGroupID.Value, generalAgencyID.Value, quoteEffectiveDate.Value, eligibleEmployeeCount, deliverToEmailAddress, node.SelectNodes("Employees/Employee"), deliverToEmailAddress, out quoteID)); if (validationErrors.Count == 0 && quoteID.HasValue) { QuoteFromServiceDto quoteFromService = GetQuoteByQuoteID(quoteID.Value); // Generate Pre-Message replacementCodeValues.Add(Resources.ParameterMessageKeys.QuoteNumber, string.Format("{0}.{1}", quoteFromService.QuoteNumber, quoteFromService.QuoteVersion)); replacementCodeValues.Add(Resources.ParameterMessageKeys.Name, brokerName); replacementCodeValues.Add(Resources.ParameterMessageKeys.LicenseNumbers, brokerLicenseNumber); DeliverEmailMessage(programID, quoteEffectiveDate.Value, fromEmail, fromEmailDisplayName, deliverToEmailAddress, DocumentType.EQuotePreMessageEmail, replacementCodeValues); bool quoteGenerated = false; try { quoteGenerated = GenerateAndDeliverInitialQuote(quoteID.Value); } catch (Exception exception) { TraceLogger.LogException(exception, LoggingCategory); if (replacementCodeValues.ContainsKey(Resources.ParameterMessageKeys.Name)) replacementCodeValues[Resources.ParameterMessageKeys.Name] = itName; else replacementCodeValues.Add(Resources.ParameterMessageKeys.Name, itName); if (replacementCodeValues.ContainsKey(Resources.ParameterMessageKeys.Errors)) replacementCodeValues[Resources.ParameterMessageKeys.Errors] = string.Format("Errors:\r\n:{0}", exception); else replacementCodeValues.Add(Resources.ParameterMessageKeys.Errors, string.Format("Errors:\r\n:{0}", exception)); DeliverEmailMessage(programID, quoteEffectiveDate.Value, fromEmail, fromEmailDisplayName, itEmail, DocumentType.EQuoteSystemFailureMessageEmail, replacementCodeValues); } if (!quoteGenerated) { // Generate System Failure Message if (replacementCodeValues.ContainsKey(Resources.ParameterMessageKeys.Name)) replacementCodeValues[Resources.ParameterMessageKeys.Name] = brokerName; else replacementCodeValues.Add(Resources.ParameterMessageKeys.Name, brokerName); if (replacementCodeValues.ContainsKey(Resources.ParameterMessageKeys.Errors)) replacementCodeValues[Resources.ParameterMessageKeys.Errors] = string.Empty; else replacementCodeValues.Add(Resources.ParameterMessageKeys.Errors, string.Empty); DeliverEmailMessage(programID, quoteEffectiveDate.Value, fromEmail, fromEmailDisplayName, itEmail, DocumentType.EQuoteSystemFailureMessageEmail, replacementCodeValues); } } 
} } } } } //if (validationErrors.Count > 0) // Per spec, if Broker Validation returned an error we already sent an email, don't send another generic one if (validationErrors.Count > 0 && (!brokerValidationError)) { StringBuilder errorString = new StringBuilder(); foreach (ValidationErrorDto validationError in validationErrors) errorString = errorString.AppendLine(string.Format(" - {0}", ValidationHelper.GetValidationErrorReason(validationError, true))); replacementCodeValues.Add(Resources.ParameterMessageKeys.Errors, errorString.ToString()); if (replacementCodeValues.ContainsKey(Resources.ParameterMessageKeys.Name)) replacementCodeValues[Resources.ParameterMessageKeys.Name] = brokerName; else replacementCodeValues.Add(Resources.ParameterMessageKeys.Name, brokerName); // HACK: If there is no effective date, then use Today's date. Do we care about the effecitve dat on validation message? if (quoteEffectiveDate.HasValue) DeliverEmailMessage(programID, quoteEffectiveDate.Value, fromEmail, fromEmailDisplayName, itEmail, DocumentType.EQuoteValidationFailureMessageEmail, replacementCodeValues); else DeliverEmailMessage(programID, DateTime.Now, fromEmail, fromEmailDisplayName, itEmail, DocumentType.EQuoteValidationFailureMessageEmail, replacementCodeValues); } allValidationErrors.AddRange(validationErrors); validationErrors.Clear(); } } } else { // Use todays date as the effective date. Dictionary<string, string> replacementCodeValues = new Dictionary<string, string>(); StringBuilder errorString = new StringBuilder(); foreach (ValidationErrorDto validationError in validationErrors) errorString = errorString.AppendLine(string.Format(" - {0}", ValidationHelper.GetValidationErrorReason(validationError, true))); replacementCodeValues.Add(Resources.ParameterMessageKeys.Errors, string.Format("The following validation errors occurred: \r\n{0}", errorString)); replacementCodeValues.Add(Resources.ParameterMessageKeys.ProgramName, programName); replacementCodeValues.Add(Resources.ParameterMessageKeys.GroupName, "Group"); replacementCodeValues.Add(Resources.ParameterMessageKeys.Name, itName); DeliverEmailMessage(programID, DateTime.Now, fromEmail, null, itEmail, DocumentType.EQuoteSystemFailureMessageEmail, replacementCodeValues); allValidationErrors.AddRange(validationErrors); validationErrors.Clear(); } } return allValidationErrors; } catch (Exception exception) { TraceLogger.LogException(exception, LoggingCategory); // Use todays date as the effective date. Dictionary<string, string> replacementCodeValues = new Dictionary<string, string>(); replacementCodeValues.Add(Resources.ParameterMessageKeys.ProgramName, programName); replacementCodeValues.Add(Resources.ParameterMessageKeys.GroupName, "Group"); replacementCodeValues.Add(Resources.ParameterMessageKeys.Name, itName); replacementCodeValues.Add(Resources.ParameterMessageKeys.Errors, string.Format("Errors:\r\n:{0}", exception)); DeliverEmailMessage(programID, DateTime.Now, fromEmail, null, itEmail, DocumentType.EQuoteSystemFailureMessageEmail, replacementCodeValues); throw new FaultException(exception.ToString()); } }
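    Two of the more obvious smells in the method above are the repeated null/empty checks on ConfigurationManager settings and the long chains of node.SelectSingleNode("...").InnerText, which throw a NullReferenceException the moment an element is missing. As a minimal sketch of one refactoring direction (the EQuoteXmlHelper class and its method names are hypothetical, not part of the original code):

    internal static class EQuoteXmlHelper
    {
        // Returns the InnerText of a child node, or null when the node
        // is missing, instead of throwing a NullReferenceException.
        public static string GetText(System.Xml.XmlNode parent, string xpath)
        {
            System.Xml.XmlNode child = parent.SelectSingleNode(xpath);
            return child == null ? null : child.InnerText;
        }

        // Reads an app setting once, with a fallback, replacing the
        // IsNullOrEmpty dance repeated at every call site.
        public static string GetSetting(string key, string fallback)
        {
            string value = System.Configuration.ConfigurationManager.AppSettings[key];
            return string.IsNullOrEmpty(value) ? fallback : value;
        }
    }

    Each field read then collapses to a single call such as EQuoteXmlHelper.GetText(node, "Group/GroupName"), and grouping those reads into a small DTO would let the quote-creation and email logic be extracted from the 200-line method into separately testable pieces.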

    Read the article

  • Steganography Experiment - Trouble hiding message bits in DCT coefficients

    - by JohnHankinson
    I have an application requiring me to be able to embed loss-less data into an image. As such I've been experimenting with steganography, specifically via modification of DCT coefficients as the method I select, apart from being loss-less must also be relatively resilient against format conversion, scaling/DSP etc. From the research I've done thus far this method seems to be the best candidate. I've seen a number of papers on the subject which all seem to neglect specific details (some neglect to mention modification of 0 coefficients, or modification of AC coefficient etc). After combining the findings and making a few modifications of my own which include: 1) Using a more quantized version of the DCT matrix to ensure we only modify coefficients that would still be present should the image be JPEG'ed further or processed (I'm using this in place of simply following a zig-zag pattern). 2) I'm modifying bit 4 instead of the LSB and then based on what the original bit value was adjusting the lower bits to minimize the difference. 3) I'm only modifying the blue channel as it should be the least visible. This process must modify the actual image and not the DCT values stored in file (like jsteg) as there is no guarantee the file will be a JPEG, it may also be opened and re-saved at a later stage in a different format. For added robustness I've included the message multiple times and use the bits that occur most often, I had considered using a QR code as the message data or simply applying the reed-solomon error correction, but for this simple application and given that the "message" in question is usually going to be between 10-32 bytes I have plenty of room to repeat it which should provide sufficient redundancy to recover the true bits. No matter what I do I don't seem to be able to recover the bits at the decode stage. I've tried including / excluding various checks (even if it degrades image quality for the time being). I've tried using fixed point vs. double arithmetic, moving the bit to encode, I suspect that the message bits are being lost during the IDCT back to image. Any thoughts or suggestions on how to get this working would be hugely appreciated. (PS I am aware that the actual DCT/IDCT could be optimized from it's naive On4 operation using row column algorithm, or an FDCT like AAN, but for now it just needs to work :) ) Reference Papers: http://www.lokminglui.com/dct.pdf http://arxiv.org/ftp/arxiv/papers/1006/1006.1186.pdf Code for the Encode/Decode process in C# below: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Drawing.Imaging; using System.Drawing; namespace ImageKey { public class Encoder { public const int HIDE_BIT_POS = 3; // use bit position 4 (1 << 3). public const int HIDE_COUNT = 16; // Number of times to repeat the message to avoid error. // JPEG Standard Quantization Matrix. // (to get higher quality multiply by (100-quality)/50 .. // for lower than 50 multiply by 50/quality. Then round to integers and clip to ensure only positive integers. public static double[] Q = {16,11,10,16,24,40,51,61, 12,12,14,19,26,58,60,55, 14,13,16,24,40,57,69,56, 14,17,22,29,51,87,80,62, 18,22,37,56,68,109,103,77, 24,35,55,64,81,104,113,92, 49,64,78,87,103,121,120,101, 72,92,95,98,112,100,103,99}; // Maximum qauality quantization matrix (if all 1's doesn't modify coefficients at all). 
public static double[] Q2 = {1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1}; public static Bitmap Encode(Bitmap b, string key) { Bitmap response = new Bitmap(b.Width, b.Height, PixelFormat.Format32bppArgb); uint imgWidth = ((uint)b.Width) & ~((uint)7); // Maximum usable X resolution (divisible by 8). uint imgHeight = ((uint)b.Height) & ~((uint)7); // Maximum usable Y resolution (divisible by 8). // Start be transferring the unmodified image portions. // As we'll be using slightly less width/height for the encoding process we'll need the edges to be populated. for (int y = 0; y < b.Height; y++) for (int x = 0; x < b.Width; x++) { if( (x >= imgWidth && x < b.Width) || (y>=imgHeight && y < b.Height)) response.SetPixel(x, y, b.GetPixel(x, y)); } // Setup the counters and byte data for the message to encode. StringBuilder sb = new StringBuilder(); for(int i=0;i<HIDE_COUNT;i++) sb.Append(key); byte[] codeBytes = System.Text.Encoding.ASCII.GetBytes(sb.ToString()); int bitofs = 0; // Current bit position we've encoded too. int totalBits = (codeBytes.Length * 8); // Total number of bits to encode. for (int y = 0; y < imgHeight; y += 8) { for (int x = 0; x < imgWidth; x += 8) { int[] redData = GetRedChannelData(b, x, y); int[] greenData = GetGreenChannelData(b, x, y); int[] blueData = GetBlueChannelData(b, x, y); int[] newRedData; int[] newGreenData; int[] newBlueData; if (bitofs < totalBits) { double[] redDCT = DCT(ref redData); double[] greenDCT = DCT(ref greenData); double[] blueDCT = DCT(ref blueData); int[] redDCTI = Quantize(ref redDCT, ref Q2); int[] greenDCTI = Quantize(ref greenDCT, ref Q2); int[] blueDCTI = Quantize(ref blueDCT, ref Q2); int[] blueDCTC = Quantize(ref blueDCT, ref Q); HideBits(ref blueDCTI, ref blueDCTC, ref bitofs, ref totalBits, ref codeBytes); double[] redDCT2 = DeQuantize(ref redDCTI, ref Q2); double[] greenDCT2 = DeQuantize(ref greenDCTI, ref Q2); double[] blueDCT2 = DeQuantize(ref blueDCTI, ref Q2); newRedData = IDCT(ref redDCT2); newGreenData = IDCT(ref greenDCT2); newBlueData = IDCT(ref blueDCT2); } else { newRedData = redData; newGreenData = greenData; newBlueData = blueData; } MapToRGBRange(ref newRedData); MapToRGBRange(ref newGreenData); MapToRGBRange(ref newBlueData); for(int dy=0;dy<8;dy++) { for(int dx=0;dx<8;dx++) { int col = (0xff<<24) + (newRedData[dx+(dy*8)]<<16) + (newGreenData[dx+(dy*8)]<<8) + (newBlueData[dx+(dy*8)]); response.SetPixel(x+dx,y+dy,Color.FromArgb(col)); } } } } if (bitofs < totalBits) throw new Exception("Failed to encode data - insufficient cover image coefficients"); return (response); } public static void HideBits(ref int[] DCTMatrix, ref int[] CMatrix, ref int bitofs, ref int totalBits, ref byte[] codeBytes) { int tempValue = 0; for (int u = 0; u < 8; u++) { for (int v = 0; v < 8; v++) { if ( (u != 0 || v != 0) && CMatrix[v+(u*8)] != 0 && DCTMatrix[v+(u*8)] != 0) { if (bitofs < totalBits) { tempValue = DCTMatrix[v + (u * 8)]; int bytePos = (bitofs) >> 3; int bitPos = (bitofs) % 8; byte mask = (byte)(1 << bitPos); byte value = (byte)((codeBytes[bytePos] & mask) >> bitPos); // 0 or 1. 
if (value == 0) { int a = DCTMatrix[v + (u * 8)] & (1 << HIDE_BIT_POS); if (a != 0) DCTMatrix[v + (u * 8)] |= (1 << HIDE_BIT_POS) - 1; DCTMatrix[v + (u * 8)] &= ~(1 << HIDE_BIT_POS); } else if (value == 1) { int a = DCTMatrix[v + (u * 8)] & (1 << HIDE_BIT_POS); if (a == 0) DCTMatrix[v + (u * 8)] &= ~((1 << HIDE_BIT_POS) - 1); DCTMatrix[v + (u * 8)] |= (1 << HIDE_BIT_POS); } if (DCTMatrix[v + (u * 8)] != 0) bitofs++; else DCTMatrix[v + (u * 8)] = tempValue; } } } } } public static void MapToRGBRange(ref int[] data) { for(int i=0;i<data.Length;i++) { data[i] += 128; if(data[i] < 0) data[i] = 0; else if(data[i] > 255) data[i] = 255; } } public static int[] GetRedChannelData(Bitmap b, int sx, int sy) { int[] data = new int[8 * 8]; for (int y = sy; y < (sy + 8); y++) { for (int x = sx; x < (sx + 8); x++) { uint col = (uint)b.GetPixel(x,y).ToArgb(); data[(x - sx) + ((y - sy) * 8)] = (int)((col >> 16) & 0xff) - 128; } } return (data); } public static int[] GetGreenChannelData(Bitmap b, int sx, int sy) { int[] data = new int[8 * 8]; for (int y = sy; y < (sy + 8); y++) { for (int x = sx; x < (sx + 8); x++) { uint col = (uint)b.GetPixel(x, y).ToArgb(); data[(x - sx) + ((y - sy) * 8)] = (int)((col >> 8) & 0xff) - 128; } } return (data); } public static int[] GetBlueChannelData(Bitmap b, int sx, int sy) { int[] data = new int[8 * 8]; for (int y = sy; y < (sy + 8); y++) { for (int x = sx; x < (sx + 8); x++) { uint col = (uint)b.GetPixel(x, y).ToArgb(); data[(x - sx) + ((y - sy) * 8)] = (int)((col >> 0) & 0xff) - 128; } } return (data); } public static int[] Quantize(ref double[] DCTMatrix, ref double[] Q) { int[] DCTMatrixOut = new int[8*8]; for (int u = 0; u < 8; u++) { for (int v = 0; v < 8; v++) { DCTMatrixOut[v + (u * 8)] = (int)Math.Round(DCTMatrix[v + (u * 8)] / Q[v + (u * 8)]); } } return(DCTMatrixOut); } public static double[] DeQuantize(ref int[] DCTMatrix, ref double[] Q) { double[] DCTMatrixOut = new double[8*8]; for (int u = 0; u < 8; u++) { for (int v = 0; v < 8; v++) { DCTMatrixOut[v + (u * 8)] = (double)DCTMatrix[v + (u * 8)] * Q[v + (u * 8)]; } } return(DCTMatrixOut); } public static double[] DCT(ref int[] data) { double[] DCTMatrix = new double[8 * 8]; for (int v = 0; v < 8; v++) { for (int u = 0; u < 8; u++) { double cu = 1; if (u == 0) cu = (1.0 / Math.Sqrt(2.0)); double cv = 1; if (v == 0) cv = (1.0 / Math.Sqrt(2.0)); double sum = 0.0; for (int y = 0; y < 8; y++) { for (int x = 0; x < 8; x++) { double s = data[x + (y * 8)]; double dctVal = Math.Cos((2 * y + 1) * v * Math.PI / 16) * Math.Cos((2 * x + 1) * u * Math.PI / 16); sum += s * dctVal; } } DCTMatrix[u + (v * 8)] = (0.25 * cu * cv * sum); } } return (DCTMatrix); } public static int[] IDCT(ref double[] DCTMatrix) { int[] Matrix = new int[8 * 8]; for (int y = 0; y < 8; y++) { for (int x = 0; x < 8; x++) { double sum = 0; for (int v = 0; v < 8; v++) { for (int u = 0; u < 8; u++) { double cu = 1; if (u == 0) cu = (1.0 / Math.Sqrt(2.0)); double cv = 1; if (v == 0) cv = (1.0 / Math.Sqrt(2.0)); double idctVal = (cu * cv) / 4.0 * Math.Cos((2 * y + 1) * v * Math.PI / 16) * Math.Cos((2 * x + 1) * u * Math.PI / 16); sum += (DCTMatrix[u + (v * 8)] * idctVal); } } Matrix[x + (y * 8)] = (int)Math.Round(sum); } } return (Matrix); } } public class Decoder { public static string Decode(Bitmap b, int expectedLength) { expectedLength *= Encoder.HIDE_COUNT; uint imgWidth = ((uint)b.Width) & ~((uint)7); // Maximum usable X resolution (divisible by 8). uint imgHeight = ((uint)b.Height) & ~((uint)7); // Maximum usable Y resolution (divisible by 8). 
// Setup the counters and byte data for the message to decode. byte[] codeBytes = new byte[expectedLength]; byte[] outBytes = new byte[expectedLength / Encoder.HIDE_COUNT]; int bitofs = 0; // Current bit position we've decoded too. int totalBits = (codeBytes.Length * 8); // Total number of bits to decode. for (int y = 0; y < imgHeight; y += 8) { for (int x = 0; x < imgWidth; x += 8) { int[] blueData = ImageKey.Encoder.GetBlueChannelData(b, x, y); double[] blueDCT = ImageKey.Encoder.DCT(ref blueData); int[] blueDCTI = ImageKey.Encoder.Quantize(ref blueDCT, ref Encoder.Q2); int[] blueDCTC = ImageKey.Encoder.Quantize(ref blueDCT, ref Encoder.Q); if (bitofs < totalBits) GetBits(ref blueDCTI, ref blueDCTC, ref bitofs, ref totalBits, ref codeBytes); } } bitofs = 0; for (int i = 0; i < (expectedLength / Encoder.HIDE_COUNT) * 8; i++) { int bytePos = (bitofs) >> 3; int bitPos = (bitofs) % 8; byte mask = (byte)(1 << bitPos); List<int> values = new List<int>(); int zeroCount = 0; int oneCount = 0; for (int j = 0; j < Encoder.HIDE_COUNT; j++) { int val = (codeBytes[bytePos + ((expectedLength / Encoder.HIDE_COUNT) * j)] & mask) >> bitPos; values.Add(val); if (val == 0) zeroCount++; else oneCount++; } if (oneCount >= zeroCount) outBytes[bytePos] |= mask; bitofs++; values.Clear(); } return (System.Text.Encoding.ASCII.GetString(outBytes)); } public static void GetBits(ref int[] DCTMatrix, ref int[] CMatrix, ref int bitofs, ref int totalBits, ref byte[] codeBytes) { for (int u = 0; u < 8; u++) { for (int v = 0; v < 8; v++) { if ((u != 0 || v != 0) && CMatrix[v + (u * 8)] != 0 && DCTMatrix[v + (u * 8)] != 0) { if (bitofs < totalBits) { int bytePos = (bitofs) >> 3; int bitPos = (bitofs) % 8; byte mask = (byte)(1 << bitPos); int value = DCTMatrix[v + (u * 8)] & (1 << Encoder.HIDE_BIT_POS); if (value != 0) codeBytes[bytePos] |= mask; bitofs++; } } } } } } } UPDATE: By switching to using a QR Code as the source message and swapping a pair of coefficients in each block instead of bit manipulation I've been able to get the message to survive the transform. However to get the message to come through without corruption I have to adjust both coefficients as well as swap them. For example swapping (3,4) and (4,3) in the DCT matrix and then respectively adding 8 and subtracting 8 as an arbitrary constant seems to work. This survives a re-JPEG'ing of 96 but any form of scaling/cropping destroys the message again. I was hoping that by operating on mid to low frequency values that the message would be preserved even under some light image manipulation.
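    For reference, here is a minimal sketch of the coefficient-pair embedding the update describes, assuming an 8x8 DCT block stored row-major in a 64-element int array; the (3,4)/(4,3) pair and the ±8 margin come from the update, while the PairSwapStego/EmbedBit/ExtractBit names are invented for this illustration:

    static class PairSwapStego
    {
        public static void EmbedBit(int[] dctBlock, int bit)
        {
            int a = dctBlock[3 * 8 + 4]; // coefficient (3,4)
            int b = dctBlock[4 * 8 + 3]; // coefficient (4,3)

            // Encode 1 as "a > b" and 0 as "a <= b", swapping the pair if needed.
            if ((bit == 1 && a <= b) || (bit == 0 && a > b))
            {
                int tmp = a; a = b; b = tmp;
            }

            // Push the pair apart by a margin so quantization and IDCT
            // rounding are less likely to flip their relative order.
            if (bit == 1) { a += 8; b -= 8; } else { a -= 8; b += 8; }

            dctBlock[3 * 8 + 4] = a;
            dctBlock[4 * 8 + 3] = b;
        }

        public static int ExtractBit(int[] dctBlock)
        {
            // The relative order of the pair carries the bit.
            return dctBlock[3 * 8 + 4] > dctBlock[4 * 8 + 3] ? 1 : 0;
        }
    }

    Because only the relative order of two mid-frequency coefficients carries the bit, this tends to survive uniform requantization better than absolute bit-plane tweaks, which matches the re-JPEG result reported above; scaling and cropping shift the 8x8 block grid itself, so no purely per-block scheme survives them without some form of resynchronization.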

    Read the article

  • OpenGL 3.x Assimp trouble implementing phong shading (normals?)

    - by Defcronyke
    I'm having trouble getting phong shading to look right. I'm pretty sure there's something wrong with either my OpenGL calls, or the way I'm loading my normals, but I guess it could be something else since 3D graphics and Assimp are both still very new to me. When trying to load .obj/.mtl files, the problems I'm seeing are: The models seem to be lit too intensely (less phong-style and more completely washed out, too bright). Faces that are lit seem to be lit equally all over (with the exception of a specular highlight showing only when the light source position is moved to be practically right on top of the model) Because of problems 1 and 2, spheres look very wrong: picture of sphere And things with larger faces look (less-noticeably) wrong too: picture of cube I could be wrong, but to me this doesn't look like proper phong shading. Here's the code that I think might be relevant (I can post more if necessary): file: assimpRenderer.cpp #include "assimpRenderer.hpp" namespace def { assimpRenderer::assimpRenderer(std::string modelFilename, float modelScale) { initSFML(); initOpenGL(); if (assImport(modelFilename)) // if modelFile loaded successfully { initScene(); mainLoop(modelScale); shutdownScene(); } shutdownOpenGL(); shutdownSFML(); } assimpRenderer::~assimpRenderer() { } void assimpRenderer::initSFML() { windowWidth = 800; windowHeight = 600; settings.majorVersion = 3; settings.minorVersion = 3; app = NULL; shader = NULL; app = new sf::Window(sf::VideoMode(windowWidth,windowHeight,32), "OpenGL 3.x Window", sf::Style::Default, settings); app->setFramerateLimit(240); app->setActive(); return; } void assimpRenderer::shutdownSFML() { delete app; return; } void assimpRenderer::initOpenGL() { GLenum err = glewInit(); if (GLEW_OK != err) { /* Problem: glewInit failed, something is seriously wrong. */ std::cerr << "Error: " << glewGetErrorString(err) << std::endl; } // check the OpenGL context version that's currently in use int glVersion[2] = {-1, -1}; glGetIntegerv(GL_MAJOR_VERSION, &glVersion[0]); // get the OpenGL Major version glGetIntegerv(GL_MINOR_VERSION, &glVersion[1]); // get the OpenGL Minor version std::cout << "Using OpenGL Version: " << glVersion[0] << "." 
<< glVersion[1] << std::endl; return; } void assimpRenderer::shutdownOpenGL() { return; } void assimpRenderer::initScene() { // allocate heap space for VAOs, VBOs, and IBOs vaoID = new GLuint[scene->mNumMeshes]; vboID = new GLuint[scene->mNumMeshes*2]; iboID = new GLuint[scene->mNumMeshes]; glClearColor(0.4f, 0.6f, 0.9f, 0.0f); glEnable(GL_DEPTH_TEST); glDepthFunc(GL_LEQUAL); glEnable(GL_CULL_FACE); shader = new Shader("shader.vert", "shader.frag"); projectionMatrix = glm::perspective(60.0f, (float)windowWidth / (float)windowHeight, 0.1f, 100.0f); rot = 0.0f; rotSpeed = 50.0f; faceIndex = 0; colorArrayA = NULL; colorArrayD = NULL; colorArrayS = NULL; normalArray = NULL; genVAOs(); return; } void assimpRenderer::shutdownScene() { delete [] iboID; delete [] vboID; delete [] vaoID; delete shader; } void assimpRenderer::renderScene(float modelScale) { sf::Time elapsedTime = clock.getElapsedTime(); clock.restart(); if (rot > 360.0f) rot = 0.0f; rot += rotSpeed * elapsedTime.asSeconds(); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT); viewMatrix = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, -3.0f, -10.0f)); // move back a bit modelMatrix = glm::scale(glm::mat4(1.0f), glm::vec3(modelScale)); // scale model modelMatrix = glm::rotate(modelMatrix, rot, glm::vec3(0, 1, 0)); //modelMatrix = glm::rotate(modelMatrix, 25.0f, glm::vec3(0, 1, 0)); glm::vec3 lightPosition( 0.0f, -100.0f, 0.0f ); float lightPositionArray[3]; lightPositionArray[0] = lightPosition[0]; lightPositionArray[1] = lightPosition[1]; lightPositionArray[2] = lightPosition[2]; shader->bind(); int projectionMatrixLocation = glGetUniformLocation(shader->id(), "projectionMatrix"); int viewMatrixLocation = glGetUniformLocation(shader->id(), "viewMatrix"); int modelMatrixLocation = glGetUniformLocation(shader->id(), "modelMatrix"); int ambientLocation = glGetUniformLocation(shader->id(), "ambientColor"); int diffuseLocation = glGetUniformLocation(shader->id(), "diffuseColor"); int specularLocation = glGetUniformLocation(shader->id(), "specularColor"); int lightPositionLocation = glGetUniformLocation(shader->id(), "lightPosition"); int normalMatrixLocation = glGetUniformLocation(shader->id(), "normalMatrix"); glUniformMatrix4fv(projectionMatrixLocation, 1, GL_FALSE, &projectionMatrix[0][0]); glUniformMatrix4fv(viewMatrixLocation, 1, GL_FALSE, &viewMatrix[0][0]); glUniformMatrix4fv(modelMatrixLocation, 1, GL_FALSE, &modelMatrix[0][0]); glUniform3fv(lightPositionLocation, 1, lightPositionArray); for (unsigned int i = 0; i < scene->mNumMeshes; i++) { colorArrayA = new float[3]; colorArrayD = new float[3]; colorArrayS = new float[3]; material = scene->mMaterials[scene->mNumMaterials-1]; normalArray = new float[scene->mMeshes[i]->mNumVertices * 3]; unsigned int normalIndex = 0; for (unsigned int j = 0; j < scene->mMeshes[i]->mNumVertices * 3; j+=3, normalIndex++) { normalArray[j] = scene->mMeshes[i]->mNormals[normalIndex].x; // x normalArray[j+1] = scene->mMeshes[i]->mNormals[normalIndex].y; // y normalArray[j+2] = scene->mMeshes[i]->mNormals[normalIndex].z; // z } normalIndex = 0; glUniformMatrix3fv(normalMatrixLocation, 1, GL_FALSE, normalArray); aiColor3D ambient(0.0f, 0.0f, 0.0f); material->Get(AI_MATKEY_COLOR_AMBIENT, ambient); aiColor3D diffuse(0.0f, 0.0f, 0.0f); material->Get(AI_MATKEY_COLOR_DIFFUSE, diffuse); aiColor3D specular(0.0f, 0.0f, 0.0f); material->Get(AI_MATKEY_COLOR_SPECULAR, specular); colorArrayA[0] = ambient.r; colorArrayA[1] = ambient.g; colorArrayA[2] = ambient.b; colorArrayD[0] = diffuse.r; 
colorArrayD[1] = diffuse.g; colorArrayD[2] = diffuse.b; colorArrayS[0] = specular.r; colorArrayS[1] = specular.g; colorArrayS[2] = specular.b; // bind color for each mesh glUniform3fv(ambientLocation, 1, colorArrayA); glUniform3fv(diffuseLocation, 1, colorArrayD); glUniform3fv(specularLocation, 1, colorArrayS); // render all meshes glBindVertexArray(vaoID[i]); // bind our VAO glDrawElements(GL_TRIANGLES, scene->mMeshes[i]->mNumFaces*3, GL_UNSIGNED_INT, 0); glBindVertexArray(0); // unbind our VAO delete [] normalArray; delete [] colorArrayA; delete [] colorArrayD; delete [] colorArrayS; } shader->unbind(); app->display(); return; } void assimpRenderer::handleEvents() { sf::Event event; while (app->pollEvent(event)) { if (event.type == sf::Event::Closed) { app->close(); } if ((event.type == sf::Event::KeyPressed) && (event.key.code == sf::Keyboard::Escape)) { app->close(); } if (event.type == sf::Event::Resized) { glViewport(0, 0, event.size.width, event.size.height); } } return; } void assimpRenderer::mainLoop(float modelScale) { while (app->isOpen()) { renderScene(modelScale); handleEvents(); } } bool assimpRenderer::assImport(const std::string& pFile) { // read the file with some example postprocessing scene = importer.ReadFile(pFile, aiProcess_CalcTangentSpace | aiProcess_Triangulate | aiProcess_JoinIdenticalVertices | aiProcess_SortByPType); // if the import failed, report it if (!scene) { std::cerr << "Error: " << importer.GetErrorString() << std::endl; return false; } return true; } void assimpRenderer::genVAOs() { int vboIndex = 0; for (unsigned int i = 0; i < scene->mNumMeshes; i++, vboIndex+=2) { mesh = scene->mMeshes[i]; indexArray = new unsigned int[mesh->mNumFaces * sizeof(unsigned int) * 3]; // convert assimp faces format to array faceIndex = 0; for (unsigned int t = 0; t < mesh->mNumFaces; ++t) { const struct aiFace* face = &mesh->mFaces[t]; std::memcpy(&indexArray[faceIndex], face->mIndices, sizeof(float) * 3); faceIndex += 3; } // generate VAO glGenVertexArrays(1, &vaoID[i]); glBindVertexArray(vaoID[i]); // generate IBO for faces glGenBuffers(1, &iboID[i]); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, iboID[i]); glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLuint) * mesh->mNumFaces * 3, indexArray, GL_STATIC_DRAW); // generate VBO for vertices if (mesh->HasPositions()) { glGenBuffers(1, &vboID[vboIndex]); glBindBuffer(GL_ARRAY_BUFFER, vboID[vboIndex]); glBufferData(GL_ARRAY_BUFFER, mesh->mNumVertices * sizeof(GLfloat) * 3, mesh->mVertices, GL_STATIC_DRAW); glEnableVertexAttribArray((GLuint)0); glVertexAttribPointer((GLuint)0, 3, GL_FLOAT, GL_FALSE, 0, 0); } // generate VBO for normals if (mesh->HasNormals()) { normalArray = new float[scene->mMeshes[i]->mNumVertices * 3]; unsigned int normalIndex = 0; for (unsigned int j = 0; j < scene->mMeshes[i]->mNumVertices * 3; j+=3, normalIndex++) { normalArray[j] = scene->mMeshes[i]->mNormals[normalIndex].x; // x normalArray[j+1] = scene->mMeshes[i]->mNormals[normalIndex].y; // y normalArray[j+2] = scene->mMeshes[i]->mNormals[normalIndex].z; // z } normalIndex = 0; glGenBuffers(1, &vboID[vboIndex+1]); glBindBuffer(GL_ARRAY_BUFFER, vboID[vboIndex+1]); glBufferData(GL_ARRAY_BUFFER, mesh->mNumVertices * sizeof(GLfloat) * 3, normalArray, GL_STATIC_DRAW); glEnableVertexAttribArray((GLuint)1); glVertexAttribPointer((GLuint)1, 3, GL_FLOAT, GL_FALSE, 0, 0); delete [] normalArray; } // tex coord stuff goes here // unbind buffers glBindVertexArray(0); glBindBuffer(GL_ARRAY_BUFFER, 0); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0); delete [] indexArray; } 
vboIndex = 0; return; } } file: shader.vert #version 150 core in vec3 in_Position; in vec3 in_Normal; uniform mat4 projectionMatrix; uniform mat4 viewMatrix; uniform mat4 modelMatrix; uniform vec3 lightPosition; uniform mat3 normalMatrix; smooth out vec3 vVaryingNormal; smooth out vec3 vVaryingLightDir; void main() { // derive MVP and MV matrices mat4 modelViewProjectionMatrix = projectionMatrix * viewMatrix * modelMatrix; mat4 modelViewMatrix = viewMatrix * modelMatrix; // get surface normal in eye coordinates vVaryingNormal = normalMatrix * in_Normal; // get vertex position in eye coordinates vec4 vPosition4 = modelViewMatrix * vec4(in_Position, 1.0); vec3 vPosition3 = vPosition4.xyz / vPosition4.w; // get vector to light source vVaryingLightDir = normalize(lightPosition - vPosition3); // Set the position of the current vertex gl_Position = modelViewProjectionMatrix * vec4(in_Position, 1.0); } file: shader.frag #version 150 core out vec4 out_Color; uniform vec3 ambientColor; uniform vec3 diffuseColor; uniform vec3 specularColor; smooth in vec3 vVaryingNormal; smooth in vec3 vVaryingLightDir; void main() { // dot product gives us diffuse intensity float diff = max(0.0, dot(normalize(vVaryingNormal), normalize(vVaryingLightDir))); // multiply intensity by diffuse color, force alpha to 1.0 out_Color = vec4(diff * diffuseColor, 1.0); // add in ambient light out_Color += vec4(ambientColor, 1.0); // specular light vec3 vReflection = normalize(reflect(-normalize(vVaryingLightDir), normalize(vVaryingNormal))); float spec = max(0.0, dot(normalize(vVaryingNormal), vReflection)); if (diff != 0) { float fSpec = pow(spec, 128.0); // Set the output color of our current pixel out_Color.rgb += vec3(fSpec, fSpec, fSpec); } } I know it's a lot to look through, but I'm putting most of the code up so as not to assume where the problem is. Thanks in advance to anyone who has some time to help me pinpoint the problem(s)! I've been trying to sort it out for two days now and I'm not getting anywhere on my own.
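    One detail worth checking against these symptoms: renderScene uploads the per-vertex normalArray as the normalMatrix uniform, while shader.vert multiplies in_Normal by normalMatrix as a 3x3 transform. The conventional normal matrix is the inverse-transpose of the upper-left 3x3 of the model-view matrix,

    $N = \left((V M)_{3\times 3}\right)^{-T}$

    where $V$ and $M$ are the view and model matrices, computed once per draw and uploaded as a single mat3 (glm provides glm::inverseTranspose for this). Transforming the normals by arbitrary per-vertex data instead would be consistent with the washed-out, evenly lit faces described above.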

    Read the article

  • Can someone help me fix my code?

    - by user267490
    Hi, I have this code I been working on but I'm having a hard time for it to work. I did one but it only works in php 5.3 and I realized my host only supports php 5.0! do I was trying to see if I could get it to work on my sever correctly, I'm just lost and tired lol <?php //Temporarily turn on error reporting @ini_set('display_errors', 1); error_reporting(E_ALL); // Set default timezone (New PHP versions complain without this!) date_default_timezone_set("GMT"); // Common set_time_limit(0); require_once('dbc.php'); require_once('sessions.php'); page_protect(); // Image settings define('IMG_FIELD_NAME', 'cons_image'); // Max upload size in bytes (for form) define ('MAX_SIZE_IN_BYTES', '512000'); // Width and height for the thumbnail define ('THUMB_WIDTH', '150'); define ('THUMB_HEIGHT', '150'); ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en"> <head> <title>whatever</title> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" /> <style type="text\css"> .validationerrorText { color:red; font-size:85%; font-weight:bold; } </style> </head> <body> <h1>Change image</h1> <?php $errors = array(); // Process form if (isset($_POST['submit'])) { // Get filename $filename = stripslashes($_FILES['cons_image']['name']); // Validation of image file upload $allowedFileTypes = array('image/gif', 'image/jpg', 'image/jpeg', 'image/png'); if ($_FILES[IMG_FIELD_NAME]['error'] == UPLOAD_ERR_NO_FILE) { $errors['img_empty'] = true; } elseif (($_FILES[IMG_FIELD_NAME]['type'] != '') && (!in_array($_FILES[IMG_FIELD_NAME]['type'], $allowedFileTypes))) { $errors['img_type'] = true; } elseif (($_FILES[IMG_FIELD_NAME]['error'] == UPLOAD_ERR_INI_SIZE) || ($_FILES[IMG_FIELD_NAME]['error'] == UPLOAD_ERR_FORM_SIZE) || ($_FILES[IMG_FIELD_NAME]['size'] > MAX_SIZE_IN_BYTES)) { $errors['img_size'] = true; } elseif ($_FILES[IMG_FIELD_NAME]['error'] != UPLOAD_ERR_OK) { $errors['img_error'] = true; } elseif (strlen($_FILES[IMG_FIELD_NAME]['name']) > 200) { $errors['img_nametoolong'] = true; } elseif ( (file_exists("\\uploads\\{$username}\\images\\banner\\{$filename}")) || (file_exists("\\uploads\\{$username}\\images\\banner\\thumbs\\{$filename}")) ) { $errors['img_fileexists'] = true; } if (! empty($errors)) { unlink($_FILES[IMG_FIELD_NAME]['tmp_name']); //cleanup: delete temp file } // Create thumbnail if (empty($errors)) { // Make directory if it doesn't exist if (!is_dir("\\uploads\\{$username}\\images\\banner\\thumbs\\")) { // Take directory and break it down into folders $dir = "uploads\\{$username}\\images\\banner\\thumbs"; $folders = explode("\\", $dir); // Create directory, adding folders as necessary as we go (ignore mkdir() errors, we'll check existance of full dir in a sec) $dirTmp = ''; foreach ($folders as $fldr) { if ($dirTmp != '') { $dirTmp .= "\\"; } $dirTmp .= $fldr; mkdir("\\".$dirTmp); //ignoring errors deliberately! } // Check again whether it exists if (!is_dir("\\uploads\\$username\\images\\banner\\thumbs\\")) { $errors['move_source'] = true; unlink($_FILES[IMG_FIELD_NAME]['tmp_name']); //cleanup: delete temp file } } if (empty($errors)) { // Move uploaded file to final destination if (! move_uploaded_file($_FILES[IMG_FIELD_NAME]['tmp_name'], "/uploads/$username/images/banner/$filename")) { $errors['move_source'] = true; unlink($_FILES[IMG_FIELD_NAME]['tmp_name']); //cleanup: delete temp file } else { // Create thumbnail in new dir if (! 
make_thumb("/uploads/$username/images/banner/$filename", "/uploads/$username/images/banner/thumbs/$filename")) { $errors['thumb'] = true; unlink("/uploads/$username/images/banner/$filename"); //cleanup: delete source file } } } } // Record in database if (empty($errors)) { // Find existing record and delete existing images $sql = "SELECT `bannerORIGINAL`, `bannerTHUMB` FROM `agent_settings` WHERE (`agent_id`={$user_id}) LIMIT 1"; $result = mysql_query($sql); if (!$result) { unlink("/uploads/$username/images/banner/$filename"); //cleanup: delete source file unlink("/uploads/$username/images/banner/thumbs/$filename"); //cleanup: delete thumbnail file die("<div><b>Error: Problem occurred with Database Query!</b><br /><br /><b>File:</b> " . __FILE__ . "<br /><b>Line:</b> " . __LINE__ . "<br /><b>MySQL Error Num:</b> " . mysql_errno() . "<br /><b>MySQL Error:</b> " . mysql_error() . "</div>"); } $numResults = mysql_num_rows($result); if ($numResults == 1) { $row = mysql_fetch_assoc($result); // Delete old files unlink("/uploads/$username/images/banner/" . $row['bannerORIGINAL']); //delete OLD source file unlink("/uploads/$username/images/banner/thumbs/" . $row['bannerTHUMB']); //delete OLD thumbnail file } // Update/create record with new images if ($numResults == 1) { $sql = "INSERT INTO `agent_settings` (`agent_id`, `bannerORIGINAL`, `bannerTHUMB`) VALUES ({$user_id}, '/uploads/$username/images/banner/$filename', '/uploads/$username/images/banner/thumbs/$filename')"; } else { $sql = "UPDATE `agent_settings` SET `bannerORIGINAL`='/uploads/$username/images/banner/$filename', `bannerTHUMB`='/uploads/$username/images/banner/thumbs/$filename' WHERE (`agent_id`={$user_id})"; } $result = mysql_query($sql); if (!$result) { unlink("/uploads/$username/images/banner/$filename"); //cleanup: delete source file unlink("/uploads/$username/images/banner/thumbs/$filename"); //cleanup: delete thumbnail file die("<div><b>Error: Problem occurred with Database Query!</b><br /><br /><b>File:</b> " . __FILE__ . "<br /><b>Line:</b> " . __LINE__ . "<br /><b>MySQL Error Num:</b> " . mysql_errno() . "<br /><b>MySQL Error:</b> " . mysql_error() . "</div>"); } } // Print success message and how the thumbnail image created if (empty($errors)) { echo "<p>Thumbnail created Successfully!</p>\n"; echo "<img src=\"/uploads/$username/images/banner/thumbs/$filename\" alt=\"New image thumbnail\" />\n"; echo "<br />\n"; } } if (isset($errors['move_source'])) { echo "\t\t<div>Error: Failure occurred moving uploaded source image!</div>\n"; } if (isset($errors['thumb'])) { echo "\t\t<div>Error: Failure occurred creating thumbnail!</div>\n"; } ?> <form action="" enctype="multipart/form-data" method="post"> <input type="hidden" name="MAX_FILE_SIZE" value="<?php echo MAX_SIZE_IN_BYTES; ?>" /> <label for="<?php echo IMG_FIELD_NAME; ?>">Image:</label> <input type="file" name="<?php echo IMG_FIELD_NAME; ?>" id="<?php echo IMG_FIELD_NAME; ?>" /> <?php if (isset($errors['img_empty'])) { echo "\t\t<div class=\"validationerrorText\">Required!</div>\n"; } if (isset($errors['img_type'])) { echo "\t\t<div class=\"validationerrorText\">File type not allowed! GIF/JPEG/PNG only!</div>\n"; } if (isset($errors['img_size'])) { echo "\t\t<div class=\"validationerrorText\">File size too large! Maximum size should be " . MAX_SIZE_IN_BYTES . "bytes!</div>\n"; } if (isset($errors['img_error'])) { echo "\t\t<div class=\"validationerrorText\">File upload error occured! 
Error code: {$_FILES[IMG_FIELD_NAME]['error']}</div>\n"; } if (isset($errors['img_nametoolong'])) { echo "\t\t<div class=\"validationerrorText\">Filename too long! 200 Chars max!</div>\n"; } if (isset($errors['img_fileexists'])) { echo "\t\t<div class=\"validationerrorText\">An image file already exists with that name!</div>\n"; } ?> <br /><input type="submit" name="submit" id="image1" value="Upload image" /> </form> </body> </html> <?php ################################# # # F U N C T I O N S # ################################# /* * Function: make_thumb * * Creates the thumbnail image from the uploaded image * the resize will be done considering the width and * height defined, but without deforming the image * * @param $sourceFile Path anf filename of source image * @param $destFile Path and filename to save thumbnail as * @param $new_w the new width to use * @param $new_h the new height to use */ function make_thumb($sourceFile, $destFile, $new_w=false, $new_h=false) { if ($new_w === false) { $new_w = THUMB_WIDTH; } if ($new_h === false) { $new_h = THUMB_HEIGHT; } // Get image extension $ext = strtolower(getExtension($sourceFile)); // Copy source switch($ext) { case 'jpg': case 'jpeg': $src_img = imagecreatefromjpeg($sourceFile); break; case 'png': $src_img = imagecreatefrompng($sourceFile); break; case 'gif': $src_img = imagecreatefromgif($sourceFile); break; default: return false; } if (!$src_img) { return false; } // Get dimmensions of the source image $old_x = imageSX($src_img); $old_y = imageSY($src_img); // Calculate the new dimmensions for the thumbnail image // 1. calculate the ratio by dividing the old dimmensions with the new ones // 2. if the ratio for the width is higher, the width will remain the one define in WIDTH variable // and the height will be calculated so the image ratio will not change // 3. otherwise we will use the height ratio for the image // as a result, only one of the dimmensions will be from the fixed ones $ratio1 = $old_x / $new_w; $ratio2 = $old_y / $new_h; if ($ratio1 > $ratio2) { $thumb_w = $new_w; $thumb_h = $old_y / $ratio1; } else { $thumb_h = $new_h; $thumb_w = $old_x / $ratio2; } // Create a new image with the new dimmensions $dst_img = ImageCreateTrueColor($thumb_w, $thumb_h); // Resize the big image to the new created one imagecopyresampled($dst_img, $src_img, 0, 0, 0, 0, $thumb_w, $thumb_h, $old_x, $old_y); // Output the created image to the file. Now we will have the thumbnail into the file named by $filename switch($ext) { case 'jpg': case 'jpeg': $result = imagepng($dst_img, $destFile); break; case 'png': $result = imagegif($dst_img, $destFile); break; case 'gif': $result = imagejpeg($dst_img, $destFile); break; default: //should never occur! } if (!$result) { return false; } // Destroy source and destination images imagedestroy($dst_img); imagedestroy($src_img); return true; } /* * Function: getExtension * * Returns the file extension from a given filename/path * * @param $str the filename to get the extension from */ function getExtension($str) { return pathinfo($str, PATHINFO_EXTENSION); } ?>

    Read the article

  • How do I return integers from a string?

    - by kannan.ambadi
    Suppose you are passing a string (e.g. "My name has 1 K, 2 A and 3 N") which may contain integers, letters, or special characters, and you want to retrieve only the numbers from the input string. We can implement this in many ways, such as splitting the string into an array or using the TryParse method. I would like to share another idea: using regular expressions. All you have to do is create an instance of Regex with a pattern that matches the non-digit runs. The Regex class defines a method called Split, which splits the specified input string based on the pattern provided during object initialization. We can write the code as given below:

    public static int[] SplitIdSeqenceValues(object combinedArgs)
    {
        var _argsSeperator = new Regex(@"\D+", RegexOptions.Compiled);

        string[] splitedIntegers = _argsSeperator.Split(combinedArgs.ToString());

        var args = new int[splitedIntegers.Length];

        for (int i = 0; i < splitedIntegers.Length; i++)
            args[i] = MakeSafe.ToSafeInt32(splitedIntegers[i]);

        return args;
    }

    It is better to set RegexOptions.Compiled so that the regular expression gets a performance boost from up-front compilation. Happy Programming :))
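    A caveat with the Split approach above: when the input starts or ends with non-digits (as the example string does), Regex.Split returns empty edge elements that then get parsed as zeros. A minimal alternative sketch that matches the digit runs directly (ExtractIntegers is a hypothetical name):

    using System;
    using System.Text.RegularExpressions;

    class Program
    {
        static readonly Regex Digits = new Regex(@"\d+", RegexOptions.Compiled);

        static int[] ExtractIntegers(string input)
        {
            // Match the digit runs themselves, so there are no empty
            // edge elements to filter out afterwards.
            MatchCollection matches = Digits.Matches(input);
            int[] result = new int[matches.Count];
            for (int i = 0; i < matches.Count; i++)
                result[i] = int.Parse(matches[i].Value);
            return result;
        }

        static void Main()
        {
            // Prints 1, 2 and 3 for the post's example string.
            foreach (int n in ExtractIntegers("My name has 1 K, 2 A and 3 N"))
                Console.WriteLine(n);
        }
    }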

    Read the article

  • WCF REST on .Net 4.0

    - by AngelEyes
    A simple and straight forward article taken from: http://christopherdeweese.com/blog2/post/drop-the-soap-wcf-rest-and-pretty-uris-in-net-4 Drop the Soap: WCF, REST, and Pretty URIs in .NET 4 Years ago I was working in libraries when the Web 2.0 revolution began.  One of the things that caught my attention about early start-ups using the AJAX/REST/Web 2.0 model was how nice the URIs were for their applications.  Those were my first impressions of REST; pretty URIs.  Turns out there is a little more to it than that. REST is an architectural style that focuses on resources and structured ways to access those resources via the web.  REST evolved as an “anti-SOAP” movement, driven by developers who did not want to deal with all the complexity SOAP introduces (which is al lot when you don’t have frameworks hiding it all).  One of the biggest benefits to REST is that browsers can talk to rest services directly because REST works using URIs, QueryStrings, Cookies, SSL, and all those HTTP verbs that we don’t have to think about anymore. If you are familiar with ASP.NET MVC then you have been exposed to rest at some level.  MVC is relies heavily on routing to generate consistent and clean URIs.  REST for WCF gives you the same type of feel for your services.  Let’s dive in. WCF REST in .NET 3.5 SP1 and .NET 4 This post will cover WCF REST in .NET 4 which drew heavily from the REST Starter Kit and community feedback.  There is basic REST support in .NET 3.5 SP1 and you can also grab the REST Starter Kit to enable some of the features you’ll find in .NET 4. This post will cover REST in .NET 4 and Visual Studio 2010. Getting Started To get started we’ll create a basic WCF Rest Service Application using the new on-line templates option in VS 2010: When you first install a template you are prompted with this dialog: Dude Where’s my .Svc File? The WCF REST template shows us the new way we can simply build services.  Before we talk about what’s there, let’s look at what is not there: The .Svc File An Interface Contract Dozens of lines of configuration that you have to change to make your service work REST in .NET 4 is greatly simplified and leverages the Web Routing capabilities used in ASP.NET MVC and other parts of the web frameworks.  With REST in .NET 4 you use a global.asax to set the route to your service using the new ServiceRoute class.  From there, the WCF runtime handles dispatching service calls to the methods based on the Uri Templates. global.asax using System; using System.ServiceModel.Activation; using System.Web; using System.Web.Routing; namespace Blog.WcfRest.TimeService {     public class Global : HttpApplication     {         void Application_Start(object sender, EventArgs e)         {             RegisterRoutes();         }         private static void RegisterRoutes()         {             RouteTable.Routes.Add(new ServiceRoute("TimeService",                 new WebServiceHostFactory(), typeof(TimeService)));         }     } } The web.config contains some new structures to support a configuration free deployment.  Note that this is the default config generated with the template.  I did not make any changes to web.config. 
web.config:

<?xml version="1.0"?>
<configuration>
  <system.web>
    <compilation debug="true" targetFramework="4.0" />
  </system.web>
  <system.webServer>
    <modules runAllManagedModulesForAllRequests="true">
      <add name="UrlRoutingModule" type="System.Web.Routing.UrlRoutingModule,
           System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
    </modules>
  </system.webServer>
  <system.serviceModel>
    <serviceHostingEnvironment aspNetCompatibilityEnabled="true"/>
    <standardEndpoints>
      <webHttpEndpoint>
        <!--
            Configure the WCF REST service base address via the global.asax.cs file and the default endpoint
            via the attributes on the <standardEndpoint> element below
        -->
        <standardEndpoint name="" helpEnabled="true" automaticFormatSelectionEnabled="true"/>
      </webHttpEndpoint>
    </standardEndpoints>
  </system.serviceModel>
</configuration>

Building the Time Service

We'll create a simple "TimeService" that will return the current time. Let's start with the following code:

using System;
using System.ServiceModel;
using System.ServiceModel.Activation;
using System.ServiceModel.Web;

namespace Blog.WcfRest.TimeService
{
    [ServiceContract]
    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
    [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
    public class TimeService
    {
        [WebGet(UriTemplate = "CurrentTime")]
        public string CurrentTime()
        {
            return DateTime.Now.ToString();
        }
    }
}

The endpoint for this service will be http://[machinename]:[port]/TimeService. To get the current time, http://[machinename]:[port]/TimeService/CurrentTime will do the trick.

The Results Are In

Remember That Route In global.asax?

Turns out it is pretty important. When you set the route name, that defines the resource name starting after the host portion of the URI.

Help Pages in WCF 4

Another feature that came from the starter kit is the help pages. To access the help pages, simply append Help to the end of the service's base URI.

Dropping the Soap

Having dabbled with REST in the past and after using SOAP for the last few years, the WCF 4 REST support is certainly refreshing. I'm currently working on some REST implementations in .NET 3.5 and VS 2008 and am looking forward to working on REST in .NET 4 and VS 2010.
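Since the service is plain HTTP, it is easy to smoke-test from a console app. The snippet below is my addition, not part of the original article; the host and port are placeholders for wherever the service is running, and note that a GET with no Accept header returns the result XML-wrapped (e.g. <string>…</string>) by default:

using System;
using System.Net;

class TimeServiceClient
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // Placeholder address; substitute your machine name and port
            string body = client.DownloadString("http://localhost:8080/TimeService/CurrentTime");
            Console.WriteLine(body);
        }
    }
}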

    Read the article

  • Rendering ASP.NET MVC Views to String

    - by Rick Strahl
It's not uncommon in my applications that I require longish text output that does not have to be rendered into the HTTP output stream. The most common scenario I have for 'template driven' non-Web text is for emails of all sorts: logon confirmations and verifications, email confirmations for things like orders, status updates or scheduler notifications - all of which require merged text output both within and sometimes outside of Web applications. On other occasions I also need to capture the output from certain views for logging purposes.

Rather than creating text output in code, it's much nicer to use the rendering mechanism that ASP.NET MVC already provides by way of its ViewEngines - using Razor or WebForms views - to render output to a string. This is nice because it uses the same familiar rendering mechanism that I already use for my HTTP output, and it also solves the problem of where to store the templates for this content: nothing more than perhaps a separate view folder is needed. The good news is that ASP.NET MVC's rendering engine is much more modular than the full ASP.NET runtime engine, which was a real pain in the butt to coerce into rendering output to string. With MVC the rendering engine has been separated out from the core ASP.NET runtime, so it's actually a lot easier to get View output into a string.

Getting View Output from within an MVC Application

If you need to generate string output from an MVC view and pass some model data to it, the process to capture this output is fairly straightforward and involves only a handful of lines of code. The catch is that this particular approach requires an active ControllerContext that can be passed to the view, which means the following approach is limited to access from within Controller methods. Here's a class that wraps the process and provides both instance and static methods to handle the rendering:

/// <summary>
/// Class that renders MVC views to a string using the
/// standard MVC View Engine to render the view.
///
/// Note: This class can only be used within MVC
/// applications that have an active ControllerContext.
/// </summary>
public class ViewRenderer
{
    /// <summary>
    /// Required Controller Context
    /// </summary>
    protected ControllerContext Context { get; set; }

    public ViewRenderer(ControllerContext controllerContext)
    {
        Context = controllerContext;
    }

    /// <summary>
    /// Renders a full MVC view to a string. Will render with the full MVC
    /// View engine including running _ViewStart and merging into _Layout
    /// </summary>
    /// <param name="viewPath">
    /// The path to the view to render. Either in same controller, shared by
    /// name or as fully qualified ~/ path including extension
    /// </param>
    /// <param name="model">The model to render the view with</param>
    /// <returns>String of the rendered view or null on error</returns>
    public string RenderView(string viewPath, object model)
    {
        return RenderViewToStringInternal(viewPath, model, false);
    }

    /// <summary>
    /// Renders a partial MVC view to string. Use this method to render
    /// a partial view that doesn't merge with _Layout and doesn't fire
    /// _ViewStart.
    /// </summary>
    /// <param name="viewPath">
    /// The path to the view to render. Either in same controller, shared by
    /// name or as fully qualified ~/ path including extension
    /// </param>
    /// <param name="model">The model to pass to the viewRenderer</param>
    /// <returns>String of the rendered view or null on error</returns>
    public string RenderPartialView(string viewPath, object model)
    {
        return RenderViewToStringInternal(viewPath, model, true);
    }

    public static string RenderView(string viewPath, object model,
                                    ControllerContext controllerContext)
    {
        ViewRenderer renderer = new ViewRenderer(controllerContext);
        return renderer.RenderView(viewPath, model);
    }

    public static string RenderPartialView(string viewPath, object model,
                                           ControllerContext controllerContext)
    {
        ViewRenderer renderer = new ViewRenderer(controllerContext);
        return renderer.RenderPartialView(viewPath, model);
    }

    protected string RenderViewToStringInternal(string viewPath, object model, bool partial = false)
    {
        // first find the ViewEngine for this view
        ViewEngineResult viewEngineResult = null;
        if (partial)
            viewEngineResult = ViewEngines.Engines.FindPartialView(Context, viewPath);
        else
            viewEngineResult = ViewEngines.Engines.FindView(Context, viewPath, null);

        if (viewEngineResult == null)
            throw new FileNotFoundException(Properties.Resources.ViewCouldNotBeFound);

        // get the view and attach the model to view data
        var view = viewEngineResult.View;
        Context.Controller.ViewData.Model = model;

        string result = null;
        using (var sw = new StringWriter())
        {
            var ctx = new ViewContext(Context, view,
                                      Context.Controller.ViewData,
                                      Context.Controller.TempData,
                                      sw);
            view.Render(ctx, sw);
            result = sw.ToString();
        }
        return result;
    }
}

The key is the RenderViewToStringInternal method. The method first tries to find the view to render based on its path, which can either be in the current controller's view path or the shared view path using its simple name (PasswordRecovery), or alternately by its full virtual path (~/Views/Templates/PasswordRecovery.cshtml). This code should work both for Razor and WebForms views, although I've only tried it with Razor views. Note that WebForms views might actually be better for plain text, as Razor adds all sorts of white space into its output when there are code blocks in the template; the WebForms engine provides more accurate rendering for raw text scenarios.

Once a view engine is found, the view to render can be retrieved. Views in MVC render based on data that comes off the controller, like the ViewData that contains the model along with the ViewBag. From the view and some of the context data a ViewContext is created, which is then used to render the view. The view picks up the model and other data from the ViewContext internally and processes the view the same way it would be processed if it were to send its output into the HTTP output stream. The difference is that we override the ViewContext's output stream with one we provide and capture into a StringWriter. After rendering completes, the result holds the output string.

If an error occurs, the error behavior is similar to what you see with regular MVC errors - you get a full yellow screen of death including the view error information with the offending line highlighted. It's your responsibility to handle the error - or let it bubble up to your regular controller error filter if you have one.

To use this simple class you only need a single line of code if you call the static methods.
Here's an example of some controller code that is used to send a user notification to a customer via email in one of my applications:

[HttpPost]
public ActionResult ContactSeller(ContactSellerViewModel model)
{
    InitializeViewModel(model);

    var entryBus = new busEntry();
    var entry = entryBus.LoadByDisplayId(model.EntryId);

    if (string.IsNullOrEmpty(model.Email))
        entryBus.ValidationErrors.Add("Email address can't be empty.", "Email");
    if (string.IsNullOrEmpty(model.Message))
        entryBus.ValidationErrors.Add("Message can't be empty.", "Message");

    model.EntryId = entry.DisplayId;
    model.EntryTitle = entry.Title;

    if (entryBus.ValidationErrors.Count > 0)
    {
        ErrorDisplay.AddMessages(entryBus.ValidationErrors);
        ErrorDisplay.ShowError("Please correct the following:");
    }
    else
    {
        string message = ViewRenderer.RenderView("~/views/template/ContactSellerEmail.cshtml",
                                                 model, ControllerContext);
        string title = entry.Title + " (" + entry.DisplayId + ") - " + App.Configuration.ApplicationName;
        AppUtils.SendEmail(title, message, model.Email, entry.User.Email, false, false);
    }
    return View(model);
}

Simple! The view in this case is just a plain MVC view - a very simple plain-text email message (edited for brevity here) that is created and sent off:

@model ContactSellerViewModel
@{
    Layout = null;
}re: @Model.EntryTitle

@Model.ListingUrl

@Model.Message

** SECURITY ADVISORY - AVOID SCAMS **
** Avoid: wiring money, cross-border deals, work-at-home
** Beware: cashier checks, money orders, escrow, shipping
** More Info: @(App.Configuration.ApplicationBaseUrl)scams.html

Obviously this is a very simple view (I edited out more from this page to keep it brief) - but other template views are much more complex HTML documents or long messages that are occasionally updated, and they are a perfect fit for Razor rendering. It even works with nested partial views and _layout pages.

Partial Rendering

Notice that I'm rendering a full view here. In the view I explicitly set Layout = null to avoid pulling in _Layout.cshtml for this view. This can also be controlled externally by calling the RenderPartialView method instead:

string message = ViewRenderer.RenderPartialView("~/views/template/ContactSellerEmail.cshtml",
                                                model, ControllerContext);

With this line of code no layout page (or _ViewStart) will be loaded, so the output generated is just what's in the view. I find myself using partials most of the time when rendering templates, since the target of templates usually tends to be emails or other HTML-fragment-like output, so the RenderPartialView() method is definitely useful to me.

Rendering without a ControllerContext

The preceding class is great when you need template rendering from within MVC controller actions, or anywhere you have access to the request's controller. But you may not have a controller context handy - maybe inside a static utility function, a non-Web application, or an operation that runs asynchronously in ASP.NET - which makes using the above code impossible. I haven't found a way to manually create a ControllerContext to provide the ViewContext what it needs from outside of the MVC infrastructure. There are ways to accomplish this, but they are a bit more complex: it's possible to host the RazorEngine on your own, which side-steps all of the MVC framework and HTTP and just deals with the raw rendering engine. I wrote about this process in Hosting the Razor Engine in Non-Web Applications a long while back.
It's quite a process to create a custom Razor engine and runtime, but it allows for all sorts of flexibility. There's also a RazorEngine CodePlex project that does something similar; I've been meaning to check it out but haven't gotten around to it since I have my own code to do this. The trick to hosting the RazorEngine is to have it behave properly inside of an ASP.NET application and properly cache content so templates aren't constantly rebuilt and reparsed.

Anyway, in the same app as above I have one scenario where no ControllerContext is available: a background scheduler running inside of the app that fires on timed intervals. This process could be external, but because it's lightweight we decided to fire it right inside of the ASP.NET app on a separate thread. In my app the code that renders these templates does something like this:

var model = new SearchNotificationViewModel()
{
    Entries = entries,
    Notification = notification,
    User = user
};

// TODO: Need logging for errors sending
string razorError = null;
var result = AppUtils.RenderRazorTemplate("~/views/template/SearchNotificationTemplate.cshtml",
                                          model, razorError);

which references a couple of helper functions that set up my RazorFolderHostContainer class:

public static string RenderRazorTemplate(string virtualPath, object model, string errorMessage = null)
{
    var razor = AppUtils.CreateRazorHost();
    var path = virtualPath.Replace("~/", "").Replace("~", "").Replace("/", "\\");
    var merged = razor.RenderTemplateToString(path, model);
    if (merged == null)
        errorMessage = razor.ErrorMessage;
    return merged;
}

/// <summary>
/// Creates a RazorFolderHostContainer and starts it.
/// Call .Stop() when you're done with it.
///
/// This is a static instance
/// </summary>
/// <param name="binBasePath"></param>
/// <param name="forceLoad"></param>
/// <returns></returns>
public static RazorFolderHostContainer CreateRazorHost(string binBasePath = null, bool forceLoad = false)
{
    if (binBasePath == null)
    {
        if (HttpContext.Current != null)
            binBasePath = HttpContext.Current.Server.MapPath("~/");
        else
            binBasePath = AppDomain.CurrentDomain.BaseDirectory;
    }
    if (_RazorHost == null || forceLoad)
    {
        if (!binBasePath.EndsWith("\\"))
            binBasePath += "\\";

        //var razor = new RazorStringHostContainer();
        var razor = new RazorFolderHostContainer();
        razor.TemplatePath = binBasePath;
        binBasePath += "bin\\";
        razor.BaseBinaryFolder = binBasePath;
        razor.UseAppDomain = false;
        razor.ReferencedAssemblies.Add(binBasePath + "ClassifiedsBusiness.dll");
        razor.ReferencedAssemblies.Add(binBasePath + "ClassifiedsWeb.dll");
        razor.ReferencedAssemblies.Add(binBasePath + "Westwind.Utilities.dll");
        razor.ReferencedAssemblies.Add(binBasePath + "Westwind.Web.dll");
        razor.ReferencedAssemblies.Add(binBasePath + "Westwind.Web.Mvc.dll");
        razor.ReferencedAssemblies.Add("System.Web.dll");

        razor.ReferencedNamespaces.Add("System.Web");
        razor.ReferencedNamespaces.Add("ClassifiedsBusiness");
        razor.ReferencedNamespaces.Add("ClassifiedsWeb");
        razor.ReferencedNamespaces.Add("Westwind.Web");
        razor.ReferencedNamespaces.Add("Westwind.Utilities");

        _RazorHost = razor;
        _RazorHost.Start();
        //_RazorHost.Engine.Configuration.CompileToMemory = false;
    }
    return _RazorHost;
}

The RazorFolderHostContainer is essentially a full runtime that mimics a folder structure the way a typical Web app does, including caching semantics and compiling code only if it changes on disk. It maps a folder hierarchy to views using the ~/ path syntax.
The host is then configured to add assemblies and namespaces. Unfortunately the engine is not exactly like MVC's Razor - the expression expansion and code execution are the same, but some of the support methods like sections, helpers, etc. are not all there, so templates have to be a bit simpler. There are other host containers provided as well to execute templates directly from strings (using RazorStringHostContainer). The following is an example of an HTML email template:

@inherits RazorHosting.RazorTemplateFolderHost<ClassifiedsWeb.SearchNotificationViewModel>
<html>
<head>
    <title>Search Notifications</title>
    <style>
        body { margin: 5px; font-family: Verdana, Arial; font-size: 10pt; }
        h3 { color: SteelBlue; }
        .entry-item { border-bottom: 1px solid grey; padding: 8px; margin-bottom: 5px; }
    </style>
</head>
<body>
    Hello @Model.User.Name,<br />
    <p>Below are your Search Results for the search phrase:</p>
    <h3>@Model.Notification.SearchPhrase</h3>
    <small>since @TimeUtils.ShortDateString(Model.Notification.LastSearch)</small>
    <hr />

You can see that the syntax is a little different: instead of the familiar @model header, the raw Razor @inherits tag is used to specify the template base class (which you can extend). I took a quick look through the feature set of RazorEngine on CodePlex (now GitHub, I guess) and the template implementation they use is closer to MVC's Razor, but there are other differences. In the end, don't expect behavior exactly like MVC templates if you use an external Razor rendering engine.

This is not what I would consider an ideal solution, but it works well enough for this project. My biggest concern is the overhead of hosting a second Razor engine in a Web app, and the fact that the differences in template rendering between 'real' MVC Razor views and another RazorEngine really are noticeable.

You win some, you lose some

It's extremely nice to see that if you have a ControllerContext handy (which probably addresses 99% of Web app scenarios), rendering a view to string using the native MVC Razor engine is pretty simple. Kudos on making that happen, as it solves a problem I see in just about every Web application I work on. But it is a bummer that a ControllerContext is required to make this simple code work. It'd be really sweet if there was a way to render views without being so closely coupled to the ASP.NET or MVC infrastructure that requires a ControllerContext. Alternately, it'd be nice to have a way for an MVC-based application to create a minimal ControllerContext from scratch - maybe somebody's been down that path. I tried for a few hours to come up with a way to make that work but gave up in the soup of nested contexts (MVC/Controller/View/Http). I suspect going down this path would be similar to hosting the ASP.NET runtime, which requires a WorkerRequest. Brrr….

The sad part is that it seems to me that a view should really not require much 'context' of any kind to render output to string. Yes, there are a few things that clearly are required, like the virtual path and possibly the disk path to the root of the app, but beyond that view rendering should not require much. But, no such luck.
For now, custom RazorHosting seems to be the only way to make Razor rendering go outside of the MVC context…

Resources:
Full ViewRenderer.cs source code from the Westwind.Web.Mvc library
Hosting the Razor Engine for Non-Web Applications
RazorEngine on GitHub

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, MVC.

    Read the article

  • The Challenge with HTML5 – In Pictures

    - by dwahlin
I love working with Web technologies and am looking forward to the new functionality that HTML5 will ultimately bring to the table (some of which can be used today). Having been through the div-versus-layer battle back in the IE4 and Netscape 4 days, I think we're headed down that road again as a result of browsers implementing features differently. I've been spending a lot of time researching and playing around with HTML5 samples and features (mainly because we're already seeing demand for training on HTML5) and there's a lot of great stuff there that will truly revolutionize web applications as we know them. However, browsers just aren't there yet, and many people outside of the development world don't really feel a need to upgrade their browser if it's working reasonably well (Mom and Dad come to mind), so it's going to be a while. There's a nice test site at http://www.HTML5Test.com that runs through different HTML5 features and scores how well they're supported. They don't test for everything and are very clear about that on the site: "The HTML5 test score is only an indication of how well your browser supports the upcoming HTML5 standard and related specifications. It does not try to test all of the new features offered by HTML5, nor does it try to test the functionality of each feature it does detect. Despite these shortcomings we hope that by quantifying the level of support users and web developers will get an idea of how hard the browser manufacturers work on improving their browsers and the web as a development platform. The score is calculated by testing for the many new features of HTML5. Each feature is worth one or more points. Apart from the main HTML5 specification and other specifications created the W3C HTML Working Group, this test also awards points for supporting related drafts and specifications. Some of these specifications were initially part of HTML5, but are now further developed by other W3C working groups. WebGL is also part of this test despite not being developed by the W3C, because it extends the HTML5 canvas element with a 3d context. The test also awards bonus points for supporting audio and video codecs and supporting SVG or MathML embedding in a plain HTML document. These test do not count towards the total score because HTML5 does not specify any required audio or video codec. Also SVG and MathML are not required by HTML5, the specification only specifies rules for how such content should be embedded inside a plain HTML file. Please be aware that the specifications that are being tested are still in development and could change before receiving an official status. In the future new tests will be added for the pieces of the specification that are currently still missing. The maximum number of points that can be scored is 300 at this moment, but this is a moving goalpost." It looks like their tests haven't been updated since June, but the numbers are pretty scary as a developer, because they mean I'm going to have to do a lot of browser sniffing before assuming a particular feature is available to use. Not that much different from what we do today as far as browser sniffing, you say? I'd have to disagree, since HTML5 takes it to a whole new level. In today's world we have script libraries such as jQuery (my personal favorite), Prototype, script.aculo.us, YUI Library, MooTools, etc. that handle the heavy lifting for us. Until those libraries handle all of the key HTML5 features available, it's going to be a challenge.
Certain features, such as canvas, are supported fairly well across most of the major browsers, while other features, such as audio and video, are hit or miss depending upon what codec you want to use. Run the tests yourself to see what passes and what fails for different browsers. You can also view the HTML5 Test Suite Conformance Results at http://test.w3.org/html/tests/reporting/report.htm (a work in progress). The table below lists the scores that the HTML5Test site returned for the different browsers I have installed on my desktop PC and laptop; a specific list of tests run and features supported is given when you go to the site. Note that I went ahead and tested the IE9 beta, and it didn't do nearly as well as I expected it would - but it's not officially out yet, so I expect that number will change a lot. Am I opposed to HTML5 as a result of these tests? Of course not - I'm actually really excited about what it offers. However, I'm trying to be realistic, and having been through something like this many years ago, I feel it'll definitely add a new level of headache to the Web application development process. On the flip side, developers that are able to target a specific browser (typically intranet apps) or master the cross-browser issues are going to release some pretty sweet applications. Check out http://html5gallery.com/ for a look at some of the more cutting-edge sites out there that use HTML5. Also check out the http://www.beautyoftheweb.com site that Microsoft put together to showcase IE9. [Browsers compared in the original post's score table: Chrome 8, Safari 5 for Windows, Opera 10, Firefox 3.6, Internet Explorer 9 Beta (still a beta at the time), and Internet Explorer 8; the individual scores appear as images in the original.]

    Read the article

  • SQL SERVER – Database Dynamic Caching by Automatic SQL Server Performance Acceleration

    - by pinaldave
My second look at SafePeak's new version (2.1) revealed a few additional interesting features. For those of you who haven't read my previous reviews of SafePeak and aren't familiar with it, here is a quick brief: SafePeak is in the business of accelerating the performance and scalability of SQL Server applications, without making code changes to the applications or to the databases. SafePeak performs dynamic database caching, caching in memory the result sets of queries and stored procedures while keeping the cache correct and up to date. Cached queries are retrieved from SafePeak's RAM at microsecond speed rather than being sent to the SQL Server. The application gets much faster results (100-500 microseconds), the load on the SQL Server is reduced (less CPU and IO), and the application or the infrastructure gets better scalability. The SafePeak solution is hosted within your cloud, hosted, or enterprise servers, as part of the application architecture. The application is connected via a change of connection strings or by adding a reroute line to the c:\windows\system32\drivers\etc\hosts file on all application servers. For those who would like to learn more about SafePeak's architecture and how it works, I suggest reading the vendor's webpage: SafePeak Architecture.

More interesting new features in SafePeak 2.1

In my previous review I covered the first four things I noticed in the new SafePeak (check out my article "SQLAuthority News – SafePeak Releases a Major Update: SafePeak version 2.1 for SQL Server Performance Acceleration"): cache setup and fine-tuning (a critical part of getting good caching results), database templates, choosing which database to cache, and the monitoring and analysis options in SafePeak. Since then I have had a chance to play with SafePeak some more, and here is what I found.

5. Analysis of SQL performance (present and history): In SafePeak v2.1 the tools for understanding performance became more comprehensive. Every 15 minutes SafePeak creates and updates various performance statistics. Each query (or procedure execution) that arrives at SafePeak gets a SQL pattern, and once it is used again there are statistics for that pattern. An important part of this product is that it understands the dependencies of every pattern (the list of tables, views, user-defined functions and procs). From this understanding SafePeak creates important analysis information on the performance of every object: response time from the database, response time from the SafePeak cache, average response time, percent of traffic, and a breakdown of behavior. One of the interesting things this behavior column shows is how often the object is actually updated. The breakdown analysis provides the above information at the level of queries and procedures, tables, views, databases, and even the instance. The data is now shown for all arriving queries: both read queries (which can be cached) and all types of updates like DML, DDL, DCL, and even session-settings queries. The stats are updated every 15 minutes, and the SafePeak dashboard allows going back in time and investigating what happened within any time frame.
6. Logon trigger, for making sure nothing corrupts SafePeak's cache data: If you have an application with many parts, many servers, or many possible locations that can update the database, or if the SQL Server is accessible to many DBAs or software engineers, any of them can access the database directly and make changes without going through SafePeak - a potential corruption of the data stored in SafePeak's cache. To keep its cache correct, SafePeak needs all updates to arrive through it; if a DBA accesses the database directly and makes changes, SafePeak will simply not know about it and will not clean its cache. The new version brings a feature called "Logon Trigger" to solve this challenge. With the click of a button, SafePeak can deploy a special server logon trigger (with a CLR object) on your SQL Server that monitors all connections and informs SafePeak of any connection that does not come from SafePeak. The SafePeak dashboard has an interface that lets you control which logins can be ignored, based on login names and IPs; the rest will invoke a SafePeak cache cleanup and actually lock the SafePeak cache until that connection is closed. Important to note: this does not interrupt any logins, it only informs SafePeak of such connections. On the Dashboard screen in SafePeak you will be able to see those connections and then decide what to do with them. Configuration of this feature is done in the SafePeak dashboard under Settings -> SQL instances management -> click on instance -> Logon Trigger tab.

Other features:

7. User management: the ability to grant someone permissions without changing your configuration, so they can use SafePeak purely as a performance analysis tool.

8. Better reports for performance analysis, using 15-minute-resolution charts.

9. Caching of client cursors.

10. Support for IPv6.

Summary

SafePeak is a great SQL Server performance acceleration solution for users who want immediate results for sites with performance, scalability, and peak-spike challenges - especially if your apps are packaged or third-party, since no code changes are made. SafePeak can significantly improve response times by reducing the network roundtrip to the database, decreasing CPU resource usage, and eliminating I/O and storage access. The SafePeak team provides a free, fully functional trial at www.safepeak.com/download and actually provides one-on-one assistance during the trial.

Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology
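As a footnote to the connection setup described at the top of this review, here is a purely illustrative sketch of the two rerouting approaches (my addition, not SafePeak documentation - the server names, IP, and port are invented placeholders):

using System.Data.SqlClient;

class ConnectionRerouteSketch
{
    static void Main()
    {
        // Before: the application connects to the SQL Server directly
        var direct = new SqlConnection(
            "Data Source=SQLPROD01;Initial Catalog=Sales;Integrated Security=True");

        // Option 1 - change the connection string: point the application at the
        // SafePeak host, which serves cached results and forwards everything
        // else to SQLPROD01. The host name and port here are placeholders.
        var viaSafePeak = new SqlConnection(
            "Data Source=SAFEPEAK01,2014;Initial Catalog=Sales;Integrated Security=True");

        // Option 2 - leave connection strings untouched and add a reroute line
        // to c:\windows\system32\drivers\etc\hosts on each application server,
        // mapping the SQL Server's name to the SafePeak machine's IP, e.g.:
        //   192.168.1.50    SQLPROD01
    }
}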

    Read the article

  • MVC 2 Editor Template for Radio Buttons

    - by Steve Michelotti
A while back I blogged about how to create an HTML Helper to produce a radio button list. In that post, my HTML helper was "wrapping" the FluentHtml library from MvcContrib to produce the following HTML output (given an IEnumerable list containing the items "Foo" and "Bar"):

1: <div>
2:     <input id="Name_Foo" name="Name" type="radio" value="Foo" /><label for="Name_Foo" id="Name_Foo_Label">Foo</label>
3:     <input id="Name_Bar" name="Name" type="radio" value="Bar" /><label for="Name_Bar" id="Name_Bar_Label">Bar</label>
4: </div>

With the release of MVC 2, we now have editor templates that rely on metadata to let us customize our views appropriately. For the radio buttons above, we want the "id" attribute to be differentiated and unique, and we want the "name" attribute to be the same across radio buttons so the buttons will be grouped together and model binding will work appropriately. We also want the "for" attribute in the <label> element set correctly to point to the id of the corresponding radio button. The default behavior of the RadioButtonFor() method that comes OOTB with MVC produces the same value for the "id" and "name" attributes, so it isn't exactly what I want out of the box if I'm trying to produce the HTML markup above.

If we use an editor template, the first gotcha we run into is that, by default, the templates just work on your view model's property. But in this case, we *also* want the list of items to populate all the radio buttons. It turns out that the EditorFor() methods do give you a way to pass in additional data: there is an overload of EditorFor() where the last parameter allows you to pass an anonymous object for "extra" data that you can use in your view - it gets put on the view data dictionary:

1: <%: Html.EditorFor(m => m.Name, "RadioButtonList", new { selectList = new SelectList(new[] { "Foo", "Bar" }) })%>

Now we can create a file called RadioButtonList.ascx that looks like this:

1: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
2: <%
3:     var list = this.ViewData["selectList"] as SelectList;
4: %>
5: <div>
6: <% foreach (var item in list) {
7:        var radioId = ViewData.TemplateInfo.GetFullHtmlFieldId(item.Value);
8:        var checkedAttr = item.Selected ? "checked=\"checked\"" : string.Empty;
9: %>
10:     <input type="radio" id="<%: radioId %>" name="<%: ViewData.TemplateInfo.HtmlFieldPrefix %>" value="<%: item.Value %>" <%: checkedAttr %>/>
11:     <label for="<%: radioId %>"><%: item.Text %></label>
12: <% } %>
13: </div>

There are several things to note about the code above. First, you can see on line #3 that it's getting the SelectList out of the view data dictionary. Then on line #7 it uses the GetFullHtmlFieldId() method from the TemplateInfo class to ensure we get unique IDs. We pass the Value to this method so that it will produce IDs like "Name_Foo" and "Name_Bar" rather than just "Name", which is our property name. However, for the "name" attribute (on line #10) we can just use the normal HtmlFieldPrefix property, ensuring all radio buttons have the same name, which corresponds to the view model's property name. We also get to leverage the fact that a SelectListItem has a Boolean Selected property, so we can set the checkedAttr variable on line #8 and use it on line #10. Finally, it's trivial to set the correct "for" attribute for the <label> on line #11, since we already produced that value.
Because the TemplateInfo class provides all the metadata for our view, we're able to produce a view that is widely reusable across our application. In fact, we can create a couple of HTML helpers to better encapsulate this call and make it more user friendly:

1: public static MvcHtmlString RadioButtonList<TModel, TProperty>(this HtmlHelper<TModel> htmlHelper, Expression<Func<TModel, TProperty>> expression, params string[] items)
2: {
3:     return htmlHelper.RadioButtonList(expression, new SelectList(items));
4: }
5:
6: public static MvcHtmlString RadioButtonList<TModel, TProperty>(this HtmlHelper<TModel> htmlHelper, Expression<Func<TModel, TProperty>> expression, IEnumerable<SelectListItem> items)
7: {
8:     var func = expression.Compile();
9:     var result = func(htmlHelper.ViewData.Model);
10:     var list = new SelectList(items, "Value", "Text", result);
11:     return htmlHelper.EditorFor(expression, "RadioButtonList", new { selectList = list });
12: }

This allows us to simplify the call like this:

1: <%: Html.RadioButtonList(m => m.Name, "Foo", "Bar" ) %>

In that example, the values for the radio buttons are hard-coded and passed in directly. But if you had a view model that contained a property for the collection of items, you could call the second overload like this:

1: <%: Html.RadioButtonList(m => m.Name, Model.FooBarList ) %>

The editor templates introduced in MVC 2 definitely allow for much more flexible views/editors than previously available. By knowing about the features available to you through the TemplateInfo class, you can take these concepts and customize your editors with extreme flexibility and reusability.
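To round the example out, here is a minimal sketch of a view model that could feed the second overload (my addition - the class and property names are invented for illustration; all the helpers above require is a string property and a SelectListItem collection):

using System.Collections.Generic;
using System.Web.Mvc;

// Hypothetical view model: Name receives the selected value on post-back,
// FooBarList supplies the items the radio buttons are generated from.
public class SampleViewModel
{
    public string Name { get; set; }

    public IEnumerable<SelectListItem> FooBarList
    {
        get
        {
            return new[]
            {
                new SelectListItem { Text = "Foo", Value = "Foo" },
                new SelectListItem { Text = "Bar", Value = "Bar" }
            };
        }
    }
}

With this model, <%: Html.RadioButtonList(m => m.Name, Model.FooBarList) %> renders the same two grouped radio buttons shown at the top of the post.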

    Read the article
