Search Results

Search found 4220 results on 169 pages for 'generating passwords'.

  • Evaluating Solutions to Manage Product Compliance? Don’t Wait Much Longer

    - by Evelyn Neumayr
    By Kerrie Foy, Director PLM Product Marketing, Oracle Depending on severity, product compliance issues can cause various problems from run-away budgets to business closures. But effective policies and safeguards can create a strong foundation for innovation, productivity, market penetration and competitive advantage. If you’ve been putting off a systematic approach to product compliance, it is time to reconsider that decision. Why now?  No matter what industry, companies face a litany of worldwide and regional regulations that require proof of product compliance and environmental friendliness for market access.  For example, Restriction of Hazardous Substances (RoHS), a regulation that restricts the use of six dangerous materials used in the manufacture of electronic and electrical equipment, was originally adopted by the European Union in 2003 for implementation in 2006 and has evolved over time through various regional versions for North America, China, Japan, Korea, Norway and Turkey. In addition, the RoHS directive allowed for material exemptions used in Medical Devices, but that exemption ends in 2014. Additional regulations worth watching are the Battery Directive, Waste Electrical and Electronic Equipment (WEEE), and Registration, Evaluation, Authorization and Restriction of Chemicals (REACH) directives. Additional regulations are expected from organizations such as the Food and Drug Administration in the US and similar organizations elsewhere. Meeting compliance requirements and also successfully investing in eco-friendly designs can be a major challenge. It may involve transforming business models, go-to-market strategies, supply networks, quality assurance policies and compliance processes.  Without a single source of truth for product data and without proper processes in place, ensuring product compliance burgeons into a crushing task that is cost-prohibitive and overwhelming.  However, the risk to consumer goodwill and satisfaction, revenue, business continuity, and market potential is too great not to solve the compliance challenge. Companies are beginning to adapt and thrive by implementing systematic approaches to product compliance that are more than functional bandages, they are revenue-generating engines. Consider working with Oracle to help you address your compliance needs. Many of the world’s most innovative leaders and pioneers are leveraging Oracle’s Agile Product Lifecycle Management (PLM) portfolio of enterprise applications to manage the product value chain, centralize product data, automate processes, and launch more eco-friendly products to market faster.   Particularly, the Agile Product Governance & Compliance (PG&C) solution provides out-of-the-box functionality to integrate actionable regulatory information into the enterprise product record from the ideation to the disposal/recycling phase.  Agile PG&C is a comprehensive solution that makes product compliance per corporate initiatives and regulations more reliable and efficient. Throughout product lifecycles, use the solution to support full material disclosures, gain rapid visibility into non-compliance issues, efficiently manage declarations with your suppliers, feed compliance data into a corrective action if a product must be changed, and swiftly satisfy audits by showing all due diligence tracked in one solution. Given the compounding regulation and consumer focus on urgent environmental issues, now is the time to act. 
Implementing an enterprise-wide systematic approach to product compliance is a competitive investment. From the start, Agile PG&C enables companies to confidently design for compliance and sustainability, reduce the cost of compliance, minimize the risk of business interruption, deliver responsible products, and inspire new innovation.  Don’t wait any longer! To find out more about Agile Product Governance & Compliance download the data sheet, contact your sales representative, or call Oracle at 1-800-633-0738.

    Read the article

  • BizTalk 2009 - Installing BizTalk Server 2009 on XP for Development

    - by StuartBrierley
    At my previous employer, when developing for BizTalk Server 2004 using Visual Studio 2003, we made use of separate development and deployment environments, developing in Visual Studio on our client PCs and then deploying to a separate shared BizTalk 2004 server from there. This server was part of a multi-server Standard BizTalk environment comprising separate BizTalk Server 2004 and SQL Server 2000 servers. This environment was implemented a number of years ago by an outside consulting company, and while it worked it did occasionally cause contention issues with three developers deploying to the same server to carry out unit testing! Now that I am making the design and implementation decisions about the environment that BizTalk will be developed in and deployed to, I have chosen to create a single "server" installation on my development PC, installing SQL Server 2008, Visual Studio 2008 and BizTalk Server 2009 on a single system. The client PC in use is actually a MacBook Pro running Windows XP; not the most powerful of systems for high-volume processing, but it should be powerful enough to allow development and initial unit testing to take place. I did not need to, and so chose not to, install all of the components detailed in the Microsoft guide for installing BizTalk 2009 on Windows XP, but I did follow the basics of the procedures detailed within. Outlined below are the highlights of this process and details of the choices I made.
    Install IIS
    I had previously installed Windows XP, including all current service packs and critical updates. At the time of installation this included Service Pack 3, the .NET Framework 3.5 and MS Windows Installer 3.1. With a running XP system, my first step was to install IIS - this is quite straightforward and posed no difficulties.
    Install Visual Studio 2008
    The next step for me was to install Visual Studio 2008. Selecting a custom installation is crucial at this point, as you need to deselect SQL Server 2005 Express Edition because it can cause the BizTalk installation to fail. The installation guide suggests that you only select Visual C# when selecting features to install, but I decided that, due to some legacy systems I have code for, I would also select the VB and ASP options.
    Visual Studio 2008 Service Pack 1
    Following the completion of the installation of Visual Studio itself, you should then install Visual Studio 2008 Service Pack 1.
    SQL Server 2008 Standard Edition
    The next step before installing BizTalk Server 2009 itself is to install SQL Server 2008 Standard Edition. On the feature selection screen make sure that you select the following options:
    - Database Engine Services
    - SQL Server Replication
    - Full-Text Search
    - Analysis Services
    - Reporting Services
    - Business Intelligence Development Studio
    - Client Tools Connectivity
    - Integration Services
    - Management Tools - Basic and Complete
    Use the default instance and the same accounts for all SQL Server instances - in my case I used the Network Service and Local Service accounts for the two sets of accounts. On the database engine configuration screen I selected Windows authentication and added the current user, adding the same user again on the Analysis Services configuration screen. All other screens were left on the default settings. The SQL Server 2008 installation also included the installation of hotfix KB942288-v3 for XP, the Windows Installer 4.5 Redistributable.
    System Configuration
    At this stage I took a moment to disable the SQL Server shared memory protocol and enable the Named Pipes and TCP/IP protocols. These can be found in SQL Server Configuration Manager > SQL Server Network Configuration > Protocols for MSSQLServer. I also made sure that the DTC settings were configured correctly.
    BizTalk Server 2009
    The penultimate step is to install BizTalk Server 2009 Standard Edition. I had previously downloaded the redistributable prerequisites as a CAB file, so I was able to make use of this when carrying out the installation. When selecting which components to install I selected:
    - Server Runtime
    - BizTalk EDI/AS2 Runtime
    - WCF Adapter Runtime
    - Portal Components
    - Administrative Tools
    - WCF Administration Tools
    - Developer Tools and SDK
    - Enterprise SSO Administration Module
    - Enterprise SSO Master Secret Server
    - Business Rules Components
    - BAM Alert Provider
    - BAM Client
    - BAM Eventing
    Once installation has completed, clear the Launch BizTalk Server Configuration check box and select Finish.
    Verify the Installation
    Before configuring BizTalk Server it is a good idea to check that BizTalk Server 2009 is installed and that SQL Server 2008 has started correctly. The easiest way to verify the BizTalk installation is to check Programs and Features in Control Panel. Check that SQL Server has started by looking in SQL Server Configuration Manager.
    Configure BizTalk Server 2009
    Finally we are ready to configure BizTalk Server 2009. To start this I opted for a custom configuration that allowed me to choose the settings to be used in more detail. For all databases I selected the local server and default database names. For all accounts I used a local account that had been created specifically for the BizTalk services. For all Windows groups I allowed the configuration wizard to create the default local groups. The configuration wizard then ran. Upon completion you will be presented with a screen detailing the success or failure of the configuration. If your configuration failed you will need to sort out the issues and try again (it is possible to save the configuration settings for later use if you want to - except passwords, of course!). If you see lots of nice green ticks - congratulations, BizTalk Server 2009 on XP is now installed and configured ready for development.

    Read the article

  • T-SQL Tuesday #005 : SSRS Parameters and MDX Data Sets

    - by blakmk
    Well, this week's T-SQL Tuesday #005 topic seems quite fitting. Having spent the past few weeks creating reports and dashboards in SSRS and SSAS 2008, I was frustrated by how difficult it is to use custom datasets to generate parameter drill-downs. It also seems Reporting Services can be quite unforgiving when it comes to renaming things like datasets, so I want to share a couple of techniques that I found useful. One of the things I regularly do is to add parameters to the queries. However, doing this causes Reporting Services to generate a hidden dataset and parameter name for you. One of the things I like to do is tweak these hidden datasets, removing the 'ALL' level, which is a tip I picked up from Devin Knight in his blog. There are some rules I've developed for myself since working with SSRS and MDX; they may not be the best or only way, but they work for me.
    Rule 1 - Never trust the automatically generated hidden datasets
    Or even ANY automatically generated MDX queries, for that matter - I've previously blogged about this here. If you examine the MDX generated in the hidden dataset you will see that it generates the MDX in the context of the originating query by building a subcube; this means it may NOT be appropriate to use it in a subsequent query which has a different context. Make sure you always understand what is going on. Often when I'm developing a dashboard or a report there are several parameter-oriented datasets that I like to create manually. It can be that I have different datasets using the same dimension but in a different context. One example of this is that I often use a dataset for the last month and a dataset for the last 6 months. Both use the same date hierarchy. However, Reporting Services seems not to be too smart when it comes to generating unique datasets when working with and renaming parameters and datasets. Very often I have come across this error when refactoring parameter names and default datasets: "an item with the same key has already been added". The only way I've found to reliably avoid this is to obey rule 2.
    Rule 2 - Follow this sequence when it comes to working with Parameters and DataSets:
    1. Create lookup and default datasets in advance.
    2. Create parameters (set the datasets for available and default values).
    3. Go into the query and tick the parameter check box.
    4. On the dataset properties screen, select the parameter defined earlier for the parameter value defined earlier.
    Rule 3 - Don't tear your hair out when you have just renamed objects and your report doesn't build
    Just use XML Notepad on the original report file. I found I gained a good understanding of the structure of the underlying XML document just by using XML Notepad. From this you can do a search and find references to the missing object. You can also just do a wholesale search and replace (after taking a backup copy, of course ;-). So I hope the above helps to save the sanity of anyone who regularly works with SSRS and MDX. @Blakmk

    Read the article

  • Wednesday at OpenWorld: Identity Management

    - by Tanu Sood
    Divide and conquer! Yes, divide and conquer today at Oracle OpenWorld with your colleagues to make the most of all things Identity Management, since there's a lot going on. Here's the line-up for today:
    Wednesday, October 3, 2012
    CON9458: End End-User-Managed Passwords and Increase Security with Oracle Enterprise Single Sign-On Plus
    10:15 a.m. – 11:15 a.m., Moscone West 3008
    Most customers have a broad variety of applications (internal, external, web, client-server, host, etc.) and single sign-on systems that extend to some, but not all, systems. This session will focus on how enterprise single sign-on can help customers extend single sign-on to virtually any application, without costly application modification, while laying a foundation that will enable integration with a broader identity management platform.
    CON9494: Sun2Oracle: Identity Management Platform Transformation
    11:45 a.m. – 12:45 p.m., Moscone West 3008
    Sun customers are actively defining strategies for how they will modernize their identity deployments. Learn how customers like Avea and SuperValu are leveraging their Sun investment, evaluating areas of expansion/improvement and building momentum.
    CON9631: Entitlement-centric Access to SOA and Cloud Services
    11:45 a.m. – 12:45 p.m., Marriott Marquis, Salon 7
    How do you enforce that a junior trader can submit 10 trades/day, with a total value of $5M, if market volatility is low? How can you hide sensitive patient information from clerical workers but make it visible to specialists as long as consent has been given or there is an emergency? In this session, Uberether and HerbaLife take the stage with Oracle to demonstrate how you can enforce such entitlements on a service not just within your intranet but also right at the perimeter.
    CON3957: Delivering Secure Wi-Fi on the Tube as an Olympics Legacy from London 2012
    11:45 a.m. – 12:45 p.m., Moscone West 3003
    In this session, Virgin Media, the U.K.'s first combined provider of broadband, TV, mobile, and home phone services, shares how it is providing free secure Wi-Fi services to the London Underground, using Oracle Virtual Directory and Oracle Entitlements Server and leveraging back-end legacy systems that were never designed to be externalized. As an Olympics 2012 legacy, the Oracle architecture will form a platform to be consumed by other Virgin Media services such as video on demand.
    CON9493: Identity Management and the Cloud
    1:15 p.m. – 2:15 p.m., Moscone West 3008
    Security is the number one barrier to cloud service adoption. Not so for industry-leading companies like SaskTel, ConAgra Foods and UPMC. This session will explore how these organizations are using Oracle Identity with cloud services and how some are offering identity management as a cloud service.
    CON9624: Real-Time External Authorization for Middleware, Applications, and Databases
    3:30 p.m. – 4:30 p.m., Moscone West 3008
    As organizations seek to grant access to broader and more diverse user populations, the importance of centrally defined and applied authorization policies becomes critical, both to identify who has access to what and to improve the end-user experience. This session will explore how customers are using attribute- and role-based access to achieve these goals.
    CON9625: Taking Control of WebCenter Security
    5:00 p.m. – 6:00 p.m., Moscone West 3008
    Many organizations are extending WebCenter in a business-to-business scenario requiring secure identification and authorization of business partners and their users.
    Leveraging LADWP's use case, this session will focus on how customers are leveraging, securing and providing access control to Oracle WebCenter portal and mobile solutions.
    EVENTS:
    Identity Management Customer Advisory Board
    2:30 p.m. – 3:30 p.m., Four Seasons – Yerba Buena Room
    This invitation-only event is designed exclusively for Customer Advisory Board (CAB) members to provide product strategy and roadmap updates.
    Identity Management Meet & Greet Networking Event
    3:30 p.m. – 4:30 p.m., Meeting Session; 4:30 p.m. – 5:30 p.m., Cocktail Reception
    Yerba Buena Room, Four Seasons Hotel, 757 Market Street, San Francisco
    The CAB meeting will be immediately followed by an open Meet & Greet event hosted by Oracle Identity Management executives and the product management team. Do take this opportunity to network with your peers and connect with other Identity Management customers. For a complete listing, refer to the Focus on Identity Management document. And as always, you can find us at @oracleidm on Twitter and Facebook. Use #oow and #idm to join in the conversation.

    Read the article

  • Serving up a RSS feed in MVC using WCF Syndication

    - by brian_ritchie
    With .NET 3.5, Microsoft added the SyndicationFeed class to WCF for generating ATOM 1.0 & RSS 2.0 feeds. In .NET 3.5 it lives in System.ServiceModel.Web, but it was moved into System.ServiceModel in .NET 4.0. Here's some sample code on constructing a feed:
    SyndicationFeed feed = new SyndicationFeed(title, description, new Uri(link));
    feed.Categories.Add(new SyndicationCategory(category));
    feed.Copyright = new TextSyndicationContent(copyright);
    feed.Language = "en-us";
    feed.Copyright = new TextSyndicationContent(DateTime.Now.Year + " " + ownerName);
    feed.ImageUrl = new Uri(imageUrl);
    feed.LastUpdatedTime = DateTime.Now;
    feed.Authors.Add(new SyndicationPerson() { Name = ownerName, Email = ownerEmail });

    var feedItems = new List<SyndicationItem>();
    foreach (var item in Items)
    {
        var sItem = new SyndicationItem(item.title, null, new Uri(link));
        sItem.Summary = new TextSyndicationContent(item.summary);
        sItem.Id = item.id;
        if (item.publishedDate != null)
            sItem.PublishDate = (DateTimeOffset)item.publishedDate;
        sItem.Links.Add(new SyndicationLink() { Title = item.title, Uri = new Uri(link), Length = item.size, MediaType = item.mediaType });
        feedItems.Add(sItem);
    }
    feed.Items = feedItems;
    Then, we create a custom ContentResult to serialize the feed & stream it to the client:
    public class SyndicationFeedResult : ContentResult
    {
        public SyndicationFeedResult(SyndicationFeed feed)
            : base()
        {
            using (var memstream = new MemoryStream())
            using (var writer = new XmlTextWriter(memstream, System.Text.UTF8Encoding.UTF8))
            {
                feed.SaveAsRss20(writer);
                writer.Flush();
                memstream.Position = 0;
                Content = new StreamReader(memstream).ReadToEnd();
                ContentType = "application/rss+xml";
            }
        }
    }
    Finally, we wire it up through the controller:
    public class RssController : Controller
    {
        public SyndicationFeedResult Feed()
        {
            var feed = new SyndicationFeed();
            // populate feed...
            return new SyndicationFeedResult(feed);
        }
    }
    In the next post, I'll discuss how to add iTunes markup to the feed to publish it on iTunes as a Podcast.
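    As a usage note added here for illustration (it is not part of the original post), the RssController above still needs a route before a feed URL will resolve. A minimal ASP.NET MVC sketch, assuming a standard Global.asax setup; the route name and URL pattern are arbitrary choices:
    using System.Web.Mvc;
    using System.Web.Routing;

    public class MvcApplication : System.Web.HttpApplication
    {
        protected void Application_Start()
        {
            // Map e.g. http://mysite/rss to RssController.Feed()
            RouteTable.Routes.MapRoute(
                "RssFeed",   // route name (illustrative)
                "rss",       // URL pattern (illustrative)
                new { controller = "Rss", action = "Feed" });
        }
    }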

    Read the article

  • How to develop RPG Damage Formulas?

    - by user127817
    I'm developing a classical 2D RPG (in a similar vein to Final Fantasy) and I was wondering if anyone had some advice on how to do damage formulas, or links to resources/examples? I'll explain my current setup. Hopefully I'm not overdoing it with this question, and I apologize if my question is too large/broad.
    My characters' stats are composed of the following:
    enum Stat
    {
        HP = 0,
        MP = 1,
        SP = 2,
        Strength = 3,
        Vitality = 4,
        Magic = 5,
        Spirit = 6,
        Skill = 7,
        Speed = 8,    // Speed/Agility are the same thing
        Agility = 8,
        Evasion = 9,
        MgEvasion = 10,
        Accuracy = 11,
        Luck = 12,
    };
    Vitality is basically defense to physical attacks and Spirit is defense to magic attacks. All stats have fixed maximums (9999 for HP, 999 for MP/SP and 255 for the rest). With abilities, the maximums can be increased (99999 for HP, 9999 for MP/SP, 999 for the rest), with typical values (at level 100) before/after abilities+equipment+etc being 8000/20,000 for HP, 800/2,000 for SP/MP and 180/350 for the other stats. Late-game enemy HP will typically be in the lower millions (with a super boss having the maximum of ~12 million).
    I was wondering how people actually develop proper damage formulas that scale correctly? For instance, based on this data, using the damage formulas for Final Fantasy X as a base looked very promising. A full reference is here: http://www.gamefaqs.com/ps2/197344-final-fantasy-x/faqs/31381 but as a quick example:
    Str = 127, 'Attack' command used, enemy Def = 34.
    1. Physical Damage Calculation:
    Step 1:  [{(Stat^3 ÷ 32) + 32} x DmCon ÷ 16]
    Step 2:  [{(127^3 ÷ 32) + 32} x 16 ÷ 16]
    Step 3:  [{(2048383 ÷ 32) + 32} x 16 ÷ 16]
    Step 4:  [{(64011) + 32} x 1]
    Step 5:  [{(64043 x 1)}]
    Step 6:  Base Damage = 64043
    Step 7:  [{(Def - 280.4)^2} ÷ 110] + 16
    Step 8:  [{(34 - 280.4)^2} ÷ 110] + 16
    Step 9:  [(-246)^2 ÷ 110] + 16
    Step 10: [60516 ÷ 110] + 16
    Step 11: [550] + 16
    Step 12: DefNum = 566
    Step 13: [BaseDmg * DefNum ÷ 730]
    Step 14: [64043 * 566 ÷ 730]
    Step 15: [36248338 ÷ 730]
    Step 16: Base Damage 2 = 49655
    Step 17: Base Damage 2 * {730 - (Def * 51 - Def^2 ÷ 11) ÷ 10} ÷ 730
    Step 18: 49655 * {730 - (34 * 51 - 34^2 ÷ 11) ÷ 10} ÷ 730
    Step 19: 49655 * {730 - (1734 - 1156 ÷ 11) ÷ 10} ÷ 730
    Step 20: 49655 * {730 - (1734 - 105) ÷ 10} ÷ 730
    Step 21: 49655 * {730 - (1629) ÷ 10} ÷ 730
    Step 22: 49655 * {730 - 162} ÷ 730
    Step 23: 49655 * 568 ÷ 730
    Step 24: Final Damage = 38635
    I simply modified the dividers to include the attack rating of weapons and the armor rating of armor.
    Magic Damage is calculated as follows: Mag = 255, Ultima is used, enemy MDef = 1.
    Step 1:  [DmCon * ([Stat^2 ÷ 6] + DmCon) ÷ 4]
    Step 2:  [70 * ([255^2 ÷ 6] + 70) ÷ 4]
    Step 3:  [70 * ([65025 ÷ 6] + 70) ÷ 4]
    Step 4:  [70 * (10837 + 70) ÷ 4]
    Step 5:  [70 * (10907) ÷ 4]
    Step 6:  Base Damage = 190872 [cut to 99999]
    Step 7:  [{(MDef - 280.4)^2} ÷ 110] + 16
    Step 8:  [{(1 - 280.4)^2} ÷ 110] + 16
    Step 9:  [{(-279.4)^2} ÷ 110] + 16
    Step 10: [(78064) ÷ 110] + 16
    Step 11: [709] + 16
    Step 12: MDefNum = 725
    Step 13: [BaseDmg * MDefNum ÷ 730]
    Step 14: [99999 * 725 ÷ 730]
    Step 15: Base Damage 2 = 99314
    Step 16: Base Damage 2 * {730 - (MDef * 51 - MDef^2 ÷ 11) ÷ 10} ÷ 730
    Step 17: 99314 * {730 - (1 * 51 - 1^2 ÷ 11) ÷ 10} ÷ 730
    Step 18: 99314 * {730 - (51 - 1 ÷ 11) ÷ 10} ÷ 730
    Step 19: 99314 * {730 - (49) ÷ 10} ÷ 730
    Step 20: 99314 * 725 ÷ 730
    Step 21: Final Damage = 98633
    The problem is that the formulas completely fall apart once stats start going above 255. In particular, Defense values over 300 or so start generating really strange behavior; high Strength + Defense stats lead to massive negative values, for instance. While I might be able to modify the formulas to work correctly for my use case, it would probably be easier just to use a completely new formula. How do people actually develop damage formulas? I was considering opening Excel and trying to build the formula that way (mapping attack stats vs. defense stats, for instance), but I was wondering if there's an easier way? While I can't convey the full game mechanics of my game here, might someone be able to suggest a good starting place for building a damage formula? Thanks
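    Purely as an illustration (this code is not from the original question or the referenced FAQ), the physical walkthrough above can be wrapped in a function so the scaling behavior is easy to chart; a minimal C# sketch in which the constants come from the quoted steps and the integer-truncation choices are my own assumption, tuned to reproduce the worked example:
    using System;

    static class DamageFormula
    {
        // Sketch of the FFX-style physical damage steps quoted above.
        // The constants (32, 16, 280.4, 110, 730, 51, 11, 10) come from the walkthrough;
        // where to truncate to integers is an assumption made to match the worked example.
        public static long PhysicalDamage(long strength, long defense, long damageConstant = 16)
        {
            // Steps 1-6: base damage from the attacker's Strength.
            long baseDamage = (strength * strength * strength / 32 + 32) * damageConstant / 16;

            // Steps 7-12: defense factor (the difference is truncated toward zero, as in step 9).
            long diff = (long)(defense - 280.4);
            long defNum = diff * diff / 110 + 16;

            // Steps 13-16: scale base damage by the defense factor.
            long baseDamage2 = baseDamage * defNum / 730;

            // Steps 17-24: final reduction driven by defense.
            return baseDamage2 * (730 - (defense * 51 - defense * defense / 11) / 10) / 730;
        }
    }

    // PhysicalDamage(127, 34) returns 38635, matching the worked example above.
    Plotting this function for Strength and Defense values above 255 makes the breakdown described in the question easy to see, which is the point of putting the steps into code before designing a replacement formula.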

    Read the article

  • High Jinks, Hi Jacks, Exceptional DBA Awards and PASS

    - by Rodney
    The countdown to PASS has counted down. The day after tomorrow I will board a plane, like many others, on my way for the 4th year in a row to the SQL PASS Summit. The anticipation has been excruciating, but luckily I have this little thing called a day job as a DBA that has kept me busy and not thinking too much about the event. Well, that is not exactly true, since my beautiful wife works for PASS, so we get to talk about SQL from the time we wake up until late in the evening. I would not have it any other way, and I feel very fortunate to be a part of this great event and to have been chosen as the Exceptional DBA Award judge, also for the 4th year in a row. This year I have again been tasked with presenting the award to the winner, Mr. Jeff Moden, and it will be a true honor to meet him in person, as I have read many of his articles on SSC and have attended his session at PASS previously. The speech is all ready, but one item remains, which will be a surprise to all who attend the party on Tuesday night in Seattle (see links below). Let's face it, Exceptional DBAs everywhere work very hard protecting our data stores, tuning queries, mentoring, saving money, installing clusters, etc., and once in a while there is time to be exceptionally non-professional and have a bit of fun. One incident that happened this year that falls under the High Jinks category was when my network admin asked if I could Telnet into a SQL instance and see if I could make the connection through the firewall that he had just configured. I was able to establish a connection on port 1433, and it occurred to me that it would be very interesting if I could actually run T-SQL queries via a Telnet session, much like you might do with an SMTP server. With that thought, I proceeded to demonstrate this could be possible by convincing my senior DBA, Shawn McGehee, that I was able to do so. At first he did not believe me. It shook his world view. It was inconceivable. What I had done, behind the scenes of course, was to copy and rename SQLCMD.exe to Telnet.exe and use it to connect and run a simple "Select * from sys.databases" on the SQL instance. I think if it had been anyone other than Shawn I could have extended this ruse indefinitely, but he caught on within 30 seconds. It was a fun thirty seconds though. On the High Jacks side of the house, which really merges into SQL HACKS, I finally - after several years of struggling with how to connect from SSMS to an untrusted domain, such as one in a DMZ, with a Windows account - stumbled upon a solution that does away with the requirement to use SQL Authentication. While "Runas" is a great command for running an application with a higher-privileged account, I had not previously been able to figure out how to connect to the remote domain with SSMS and "Runas". It never connected, and it caused a login failure every time for the remote Windows domain account. Then I ran across an option for "Runas", "/netonly". This option postpones the login until a connection is made and only then passes the remote login you supply when you first launch SSMS with the "Runas" command.
    So a typical shortcut would look like:
    C:\Windows\System32\runas.exe /netonly /user:remotedomain.com\rodlandrum "C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Ssms.exe"
    You will want to make sure the passwords are synced between the two domains (your local domain and the remote domain), otherwise you may have account lockout issues, but I have found in weeks of testing that this is a stable solution. Now it is time to get ready to head for Seattle. Please, if you see me (@SQLBeat) or my wife (@Karlakay22), run up and high-five me (wait... High Jinks. High Jacks. High Fives. Need to change the title) or give me a big bear hug if you are strong enough to lift me off the ground. And if you do actually do that, I will think you are awesome and will not embarrass you by crying out for help or complaining of a broken back or sciatic nerve damage. And now the links to others who have all of the details. First, for MVP Deep Dives 2, in which, like John, I was lucky enough to participate this year:
    http://www.simple-talk.com/community/blogs/johnm/archive/2011/09/29/103577.aspx
    And the details of the SSC party where the Exceptional DBA of 2011, Jeff Moden, will be awarded:
    http://www.simple-talk.com/community/blogs/rebecca_amos/archive/2011/10/05/103661.aspx
    Cheers! Rodney

    Read the article

  • Copying A Slide From One Presentation To Another

    - by Tim Murphy
    There are many ways to generate a PowerPoint presentation using Open XML. The first way is to build it by hand strictly using the SDK. Alternately, you can modify a copy of a base presentation in place. The third approach is to build a new presentation from the parts of an existing presentation by copying slides as needed. This post will focus on the third option. In order to make this solution a little more elegant I am going to create a VSTO add-in as I did in my previous post. This one is going to insert Tags to identify slides, instead of the NonVisualDrawingProperties which I used to identify charts, tables and images. The code itself is fairly short.
    SlideNameForm dialog = new SlideNameForm();
    Selection selection = Globals.ThisAddIn.Application.ActiveWindow.Selection;

    if (dialog.ShowDialog() == DialogResult.OK)
    {
        selection.SlideRange.Tags.Add(dialog.slideName, dialog.slideName);
    }
    Zeyad Rajabi has a good post here on combining slides from two presentations. The example he gives is great if you are doing a straight merge. But what if you want to use your source file as almost a supermarket, where you pick and choose slides and may even insert them repeatedly? The following code uses the tags we created in the previous step to pick a particular slide and copy it to a destination file.
    using (PresentationDocument newDocument = PresentationDocument.Open(OutputFileText.Text, true))
    {
        PresentationDocument templateDocument = PresentationDocument.Open(FileNameText.Text, false);

        uniqueId = GetMaxIdFromChild(newDocument.PresentationPart.Presentation.SlideMasterIdList);
        uint maxId = GetMaxIdFromChild(newDocument.PresentationPart.Presentation.SlideIdList);

        SlidePart oldPart = GetSlidePartByTagName(templateDocument, SlideToCopyText.Text);

        SlidePart newPart = newDocument.PresentationPart.AddPart<SlidePart>(oldPart, "sourceId1");

        SlideMasterPart newMasterPart = newDocument.PresentationPart.AddPart(newPart.SlideLayoutPart.SlideMasterPart);

        SlideIdList idList = newDocument.PresentationPart.Presentation.SlideIdList;

        // create new slide ID
        maxId++;
        SlideId newId = new SlideId();
        newId.Id = maxId;
        newId.RelationshipId = "sourceId1";
        idList.Append(newId);

        // Create new master slide ID
        uniqueId++;
        SlideMasterId newMasterId = new SlideMasterId();
        newMasterId.Id = uniqueId;
        newMasterId.RelationshipId = newDocument.PresentationPart.GetIdOfPart(newMasterPart);
        newDocument.PresentationPart.Presentation.SlideMasterIdList.Append(newMasterId);

        // change slide layout ID
        FixSlideLayoutIds(newDocument.PresentationPart);

        //newPart.Slide.Save();
        newDocument.PresentationPart.Presentation.Save();
    }
    The GetMaxIdFromChild and FixSlideLayoutIds methods are borrowed from Zeyad's article. The GetSlidePartByTagName method is listed below. It is really one LINQ query that finds SlideParts with child Tags that have the requested Name.
    private SlidePart GetSlidePartByTagName(PresentationDocument templateDocument, string tagName)
    {
        return (from p in templateDocument.PresentationPart.SlideParts
                where p.UserDefinedTagsParts.First().TagList.Descendants<DocumentFormat.OpenXml.Presentation.Tag>().First().Name == tagName.ToUpper()
                select p).First();
    }
    This is what really makes the difference from what Zeyad posted. The most powerful thing you can have when generating documents from templates is a consistent way of naming items to be manipulated. I will show more approaches like this in upcoming posts. del.icio.us Tags: Office Open XML,Presentation,PowerPoint,VSTO,TagList
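    For readers who do not want to chase down the referenced article, a helper along these lines is what the snippet above expects from GetMaxIdFromChild. This is a hedged reconstruction, not the original code (the real implementation may differ), and it assumes the same DocumentFormat.OpenXml and DocumentFormat.OpenXml.Presentation usings as the calling code:
    private uint GetMaxIdFromChild(OpenXmlCompositeElement idList)
    {
        // Walk the child SlideId / SlideMasterId elements and return the largest Id
        // currently in use; the caller increments past it when appending new entries.
        uint max = 0;
        foreach (OpenXmlElement child in idList.ChildElements)
        {
            uint id = 0;
            SlideId slideId = child as SlideId;
            if (slideId != null && slideId.Id != null)
                id = slideId.Id;
            SlideMasterId masterId = child as SlideMasterId;
            if (masterId != null && masterId.Id != null)
                id = masterId.Id;
            if (id > max)
                max = id;
        }
        return max;
    }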

    Read the article

  • First-Time GLSL Shadow Mapping Problems

    - by Locke
    I'm working on building out a 2.5D engine and having massive problems getting my shadows working. I'm at a point where I'm VERY close. So, let's see a picture to see what I have: As you can see above, the image has lighting -- but the shadow map is displaying incorrectly. The shadow map is shown in the bottom left hand side of the screen as a normal 2D texture, so we can see what it looks like at any given time. If you notice, it appears that the shadows are generating backwards in the wrong direction -- I think. But the problem is a little more deep -- I'm just plotting the shadow onto the screen, which I know is wrong -- I'm ignoring the actual test to see if we NEED to show a shadow. The incoming parameters all appear to be correct -- so there has to be something wrong with my shader code somewhere. Here's what my code looks like: VERTEX: uniform mat4 LightModelViewProjectionMatrix; varying vec3 Normal; // The eye-space normal of the current vertex. varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex. varying vec3 LightDirection; // The eye-space direction of the light. void main() { Normal = normalize(gl_NormalMatrix * gl_Normal); LightDirection = normalize(gl_NormalMatrix * gl_LightSource[0].position.xyz); LightCoordinate = LightModelViewProjectionMatrix * gl_Vertex; LightCoordinate.xy = ( LightCoordinate.xy * 0.5 ) + 0.5; gl_Position = ftransform(); gl_TexCoord[0] = gl_MultiTexCoord0; } FRAGMENT: uniform sampler2D DiffuseMap; uniform sampler2D ShadowMap; varying vec3 Normal; // The eye-space normal of the current vertex. varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex. varying vec3 LightDirection; // The eye-space direction of the light. void main() { vec4 Texel = texture2D(DiffuseMap, vec2(gl_TexCoord[0])); // Directional lighting //Build ambient lighting vec4 AmbientElement = gl_LightSource[0].ambient; //Build diffuse lighting float Lambert = max(dot(Normal, LightDirection), 0.0); //max(abs(dot(Normal, LightDirection)), 0.0); vec4 DiffuseElement = ( gl_LightSource[0].diffuse * Lambert ); vec4 LightingColor = ( DiffuseElement + AmbientElement ); LightingColor.r = min(LightingColor.r, 1.0); LightingColor.g = min(LightingColor.g, 1.0); LightingColor.b = min(LightingColor.b, 1.0); LightingColor.a = min(LightingColor.a, 1.0); LightingColor *= Texel; //Everything up to this point is PERFECT // Shadow mapping // ------------------------------ vec4 ShadowCoordinate = LightCoordinate / LightCoordinate.w; float DistanceFromLight = texture2D( ShadowMap, ShadowCoordinate.st ).z; float DepthBias = 0.001; float ShadowFactor = 1.0; if( LightCoordinate.w > 0.0 ) { ShadowFactor = DistanceFromLight < ( ShadowCoordinate.z + DepthBias ) ? 0.5 : 1.0; } LightingColor.rgb *= ShadowFactor; //gl_FragColor = LightingColor; //Yes, I know this is wrong, but the line above (gl_FragColor = LightingColor;) produces the wrong effect gl_FragColor = LightingColor * texture2D( ShadowMap, ShadowCoordinate.st ); } I wanted to make sure the coordinates were correct for the shadow map -- so that's why you see it applied to the image as it is below. But the depth for each point seems to be wrong -- the shadows SHOULD be opposite (look at how the image is -- the shaded areas from normal lighting are facing the opposite direction of the shadows). Maybe my matrices are bad or something going in? They're isolated and appear to be correct -- nothing else is going in unusual. 
    When I render from the light's point of view and get the MVP matrices for it, they're correct. EDIT: Added an image so you can see what happens when I use the correct final line in the GLSL. That's the image when the last line is just gl_FragColor = LightingColor; Maybe someone has some idea of what I screwed up?

    Read the article

  • Framework 4 Features: Login Id Support

    - by Anthony Shorten
    Given that Oracle Utilities Application Framework 4 is available as part of Mobile Work Force Management and, progressively, other products, I am preparing a number of short but sweet blog entries highlighting some of the new functionality that has been implemented. This is the first entry and it is on a new security feature called Login Id. In past releases of the Oracle Utilities Application Framework, the userid used for authentication and authorization was limited to eight (8) characters in length. This mirrored what the market required in the past, with LAN userids and even legacy userids being that length. The technology market has since progressed to longer userid lengths. It is very common to hear that email addresses are being used as credentials for production systems. To achieve this in past versions of the Oracle Utilities Application Framework, sites had to introduce a short userid (8 characters in length) as an alias in their preferred security store. You then configured your J2EE Web Application Server to use the alias as credentials. This was sometimes a standard feature of the security store and/or the J2EE Web Application Server, if you were lucky. If not, some Java code had to be written to implement the solution. In Oracle Utilities Application Framework 4 we introduced a new attribute on the user object called Login Id. The Login Id can be up to 256 characters in length and is an alternative to the existing userid stored on the user object. This means the Oracle Utilities Application Framework can support both long and short userids. For backward compatibility we use the Login Id for authentication but the short userid for authorization and auditing. The user object within the Oracle Utilities Application Framework holds the translation. Backward compatibility is always a consideration in any of our designs for future or changed functionality. You will see reference to this fact in the blog entries I will be composing over the next few months. We have also thought about flexibility in implementing this feature. The Login Id can be the same value as the Userid (the default, for backward compatibility) or it can be different. Both the Login Id and the Userid have to be unique. This avoids sharing of credentials and is also backward compatible. You can manually enter the Login Id or provision it from Oracle Identity Manager (or another tool). If you use the Login Id only, then we will not generate a short userid automatically, as the rules for this can vary from site to site. You have a number of options there. Most identity provisioning tools can generate a short userid at user creation time and this can be used. If you do not use provisioning tools, then you can write a class extension using the SDK to autogenerate the userid based upon your site's preference. When we designed the feature there were lots of styles of generating userids (random, initial and surname, numbers, etc.). We could not really see a clear winner in that respect, so we just allowed the extension to be inserted if necessary. Most customers indicated to us that identity provisioning was the preferred way. This is why we released an Oracle Identity Manager integration with the framework. The Login Id is also case sensitive, which was not supported for the userid. The introduction of the Login Id allows the product to offer flexible options when configuring security whilst maintaining backward compatibility.
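    To make the "autogenerate the userid" idea concrete, here is a purely illustrative sketch of one derivation rule a site might choose; it is written in C# for readability and is not the framework's SDK extension API (which is not shown in the post), so every name here is hypothetical:
    // Hypothetical example only - it simply illustrates deriving an 8-character
    // userid from an email-style Login Id; a real extension would also have to
    // guarantee uniqueness (for example by appending a sequence digit).
    static string DeriveShortUserId(string loginId)
    {
        // Take the part before the '@', keep only letters and digits,
        // upper-case it, and truncate to the historical 8-character limit.
        string local = loginId.Split('@')[0];
        var sb = new System.Text.StringBuilder();
        foreach (char c in local)
        {
            if (char.IsLetterOrDigit(c))
                sb.Append(char.ToUpperInvariant(c));
        }
        string candidate = sb.ToString();
        return candidate.Length <= 8 ? candidate : candidate.Substring(0, 8);
    }

    // e.g. DeriveShortUserId("jane.doe@example.com") == "JANEDOE"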

    Read the article

  • Tuning Red Gate: #1 of Many

    - by Grant Fritchey
    Everyone runs into performance issues at some point. The same thing goes for Red Gate software. Some of our internal systems were running into some serious bottlenecks. It just so happens that we have this nice little SQL Server monitoring tool. What if I were to, oh, I don't know, use the monitoring tool to identify the bottlenecks, figure out the causes, apply a fix (where possible) and then start the whole thing all over again? Just a crazy thought. OK, I was asked to. This is my first time looking through these servers, so here's how I'd go about using SQL Monitor to get a quick health check, sort of like checking the vitals on a patient. First time opening up our internal SQL Monitor instance and I was greeted with this: Oh my. Maybe I need to get our internal guys to read my blog. Anyway, I know that there are two servers where most of the load is. I'll drill down on the first. I'm selecting the server, not the instance, by clicking on the server name. That opens up the Global Overview page for the server. The information here is much more applicable to the "oh my gosh, I have a problem now" type of monitoring. But, looking at this, I am seeing something immediately. There are four (4) drives on the system. The C:\ has an average read time of 16.9ms, more than double the others. Is that a problem? Not sure, but it's something I'll look at. Its write time is higher too. I'll keep drilling down, first, to the unclosed alerts on the server. Now things get interesting. SQL Monitor has a number of different types of alerts, some related to error states, others to service status, and then some related to performance. Guess what I'm seeing a bunch of right here: long-running queries and long job durations. If you check the dates, they're all recent, within the last 24 hours. If they had just been old, uncleared alerts, I wouldn't be that concerned. But with all these, all performance related, and all in the last 24 hours, yeah, I'm concerned. At this point, I could just start responding to the alerts. If I click on one of the long-running query alerts, I'll get all kinds of cool data that can help me determine why the query ran long. But I'm not in a reactive mode here yet. I'm still gathering data, trying to understand how the server works. I have the information that we're generating a lot of performance alerts; let's sock that away for the moment. Instead, I'm going to back up and look at the Global Overview for the SQL instance. It shows all the databases on the server and their status. Then it shows a number of basic metrics about the SQL Server instance, again for that "what's happening now" view of things. Then, down at the bottom, there is the Top 10 expensive queries list: This is great stuff. And no, not because I can see the top queries for the last 5 minutes, but because I can adjust that out to 3 days. Now I can see where some serious pain is occurring over the last few days. Databases have been blocked out to protect the guilty. That's it for the moment. I have enough knowledge of what's going on in the system that I can start to try to figure out why the system is running slowly. But I want to look a little more at some historical data, to understand better how this server is behaving. More next time.

    Read the article

  • forEach and Facelets - a bugfarm just waiting for harvest

    - by Duncan Mills
    An issue that I've encountered before and saw again today seems worthy of a little write-up. It's all to do with a subtle yet highly important difference in behaviour between JSF 2 running with JSP and running on Facelets (.jsf pages). The incident I saw today can be seen as a report on the ADF EMG bugzilla (Issue 53) and in a blog posting by Ulrich Gerkmann-Bartels, who reported the issue to the EMG. Ulrich's issue nicely shows how tricky this particular gotcha can be. On the surface, the problem is squarely the fault of MDS, but underneath, MDS is in fact innocent. To summarize the problem in a simpler testcase than Ulrich's example, here's a simple fragment of code:
    <af:forEach var="item" items="#{itemList.items}">
      <af:commandLink id="cl1" text="#{item.label}" action="#{item.doAction}" partialSubmit="true"/>
    </af:forEach>
    Looks innocent enough, right? We see a bunch of links printed out, great. The issue here, though, is the id attribute. Logically you can kind of see the problem. The forEach loop is creating (presumably) multiple instances of the commandLink, but only one id is specified - cl1. We know that IDs have to be unique within a JSF component tree, so that must be a bad thing? The problem is that JSF under JSP implements some hacks when the component tree is generated to transparently fix this problem for you. Behind the scenes it ensures that each instance really does have a unique id. Really nice of it to do so, thank you very much. However (you could see this coming), the same is not true when running with Facelets (this is under 11.1.2.n): in that case, what you put for the id is what you get, and JSF does not mess around in the background for you. So you end up with a component tree that contains duplicate ids which are only created at runtime. Subtle chaos can ensue. The symptoms are wide and varied, from something pretty obscure such as the combination Ulrich uncovered, to something as frustrating as your ActionListener just not being triggered. And yes, I've wasted hours on just such an issue.
    The Solution
    Once you're aware of this one it's really simple to fix; there are two options:
    1. Remove the id attribute altogether on components that will cause some kind of submission within the forEach loop and let JSF do the right thing in generating them. Then you'll be assured of uniqueness.
    2. Use the var attribute of the loop to generate a unique id for each child instance. For example, in the above case: <af:commandLink id="cl1_#{item.index}" ... />.
    So, one to watch out for in your upgrades to JSF 2, and one, perhaps, for your coding standards today to prepare you for it. For completeness, here's the reference to the underlying JSF issue that's at the heart of this: JAVASERVERFACES-1527

    Read the article

  • How to Build Services from Legacy Applications

    - by Chris Falter
    The SOA consultants invaded the executive suite at your company or agency, preached the true religion, and converted the unbelievers. Now by divine imperative you must convert your legacy applications into a suite of reusable services.  But as usual, you lack the time and resources that you need in order to develop the services properly.  So you googled or bing’ed, found this blog post, and began crying in gratitude.  Yes, as the title implies, I am going to reveal my easy, 3-step, works-every-time process for converting silos of legacy applications into the inventory of services your CIO has been dreaming about.  So just close your eyes and count to 3 … now open them … and here it is…. Not. While wishful thinking is too often the coin of the IT realm, even the most naive practitioner knows that converting legacy applications into reusable services requires more than a magic wand.  The reason is simple: if your starting point is your legacy applications, then you will simply be bolting a web service technology layer on top of your legacy API.  And that legacy API is built in the image of the silo applications.  Enter the wide gate of the legacy API, follow the broad path of generating service interfaces from existing code, and you will arrive at the siloed enterprise destruction that you thought you were escaping. The Straight and Narrow Path This past week I had the opportunity to learn how the FBI Criminal Justice Information Systems department has been transitioning from silo applications to a service inventory.  Lafe Hutcheson, IT Specialist in the architecture group and fellow attendee at an SOA Architect Certification Workshop, was my guide.  Lafe has survived the chaos of an SOA initiative, so it is not surprising that he was able to return from a US Army deployment to Kabul, Afghanistan with nary a scratch.  According to Lafe, building their service inventory is a three-phase process: Model a business process.  This requires intense collaboration between the IT and business wings of the organization, of course.  The FBI uses IBM Websphere tools to model the process with BPMN. Identify candidate services to facilitate the business process. Convert the BPMN to an executable BPEL orchestration, model and develop the services, and use a BPEL engine to run the process.  The FBI uses ActiveVOS for orchestration services. The 12 Step Program to End Your Legacy API Addiction Thomas Erl has documented a process for building a web service inventory that is quite similar to the FBI process. Erl’s process adds a technology architecture definition phase, which allows for the technology environment to influence the inventory blueprint.  For example, if you are using an enterprise service bus, you will probably not need to build your own utility services for logging or intermediate routing.  Erl also lists a service-oriented analysis phase that highlights the 12-step process of applying the principles of service orientation to modeling your services.  Erl depicts the modeling of a service inventory as an iterative process: model a business process, define the relevant technology architecture, define the service inventory blueprint, analyze the services, then model another business process, rinse and repeat.  (Astute readers will note that Erl’s diagram, restricted to analysis and modeling process, does not include the implementation phase that concludes the FBI service development methodology.) 
The service-oriented analysis phase is where you find the 12 steps that will free you from your legacy API addiction. In a nutshell, you identify the steps in the process that need services; identify the different types of services (agnostic entity services, service compositions, and utility services) that are required; apply service-orientation principles; and normalize the inventory into cohesive service models. Rather than discuss each of the 12 steps individually, I will close by simply referring my readers to Erl’s explanation.

    Read the article

  • Declarative Architectures in Infrastructure as a Service (IaaS)

    - by BuckWoody
    I deal with computing architectures by first laying out requirements, and then laying in any constraints for it's success. Only then do I bring in computing elements to apply to the system. As an example, a requirement might be "world-side availability" and a constraint might be "with less than 80ms response time and full HA" or something similar. Then I can choose from the best fit of technologies which range from full-up on-premises computing to IaaS, PaaS or SaaS. I also deal in abstraction layers - on-premises systems are fully under your control, in IaaS the hardware is abstracted (but not the OS, scale, runtimes and so on), in PaaS the hardware and the OS is abstracted and you focus on code and data only, and in SaaS everything is abstracted - you merely purchase the function you want (like an e-mail server or some such) and simply use it. When you think about solutions this way, the architecture moves to the primary factor in your decision. It's problem-first architecting, and then laying in whatever technology or vendor best fixes the problem. To that end, most architects design a solution using a graphical tool (I use Visio) and then creating documents that  let the rest of the team (and business) know what is required. It's the template, or recipe, for the solution. This is extremely easy to do for SaaS - you merely point out what the needs are, research the vendor and present the findings (and bill) to the business. IT might not even be involved there. In PaaS it's not much more complicated - you use the same Application Lifecycle Management and design tools you always have for code, such as Visual Studio or some other process and toolset, and you can "stamp out" the application in multiple locations, update it and so on. IaaS is another story. Here you have multiple machines, operating systems, patches, virus scanning, run-times, scale-patterns and tools and much more that you have to deal with, since essentially it's just an in-house system being hosted by someone else. You can certainly automate builds of servers - we do this as technical professionals every day. From Windows to Linux, it's simple enough to create a "build script" that makes a system just like the one we made yesterday. What is more problematic is being able to tie those systems together in a coherent way (as a solution) and then stamp that out repeatedly, especially when you might want to deploy that solution on-premises, or in one cloud vendor or another. Lately I've been working with a company called RightScale that does exactly this. I'll point you to their site for more info, but the general idea is that you document out your intent for a set of servers, and it will deploy them to on-premises clouds, Windows Azure, and other cloud providers all from the same script. In other words, it doesn't contain the images or anything like that - it contains the scripts to build them on-premises or on a cloud vendor like Microsoft. Using a tool like this, you combine the steps of designing a system (all the way down to passwords and accounts if you wish) and then the document drives the distribution and implementation of that intent. As time goes on and more and more companies implement solutions on various providers (perhaps for HA and DR) then this becomes a compelling investigation. The RightScale information is here, if you want to investigate it further. Yes, there are other methods I've found, but most are tied to a single kind of cloud, and I'm not into vendor lock-in. 
Poppa Bear Level - Hands-on: Evaluate RightScale at no cost. Just bring your Windows Azure credentials and follow these tutorials:
- Sign Up for Windows Azure
- Add Windows Azure to a RightScale Account
- Windows Azure Virtual Machines 3-tier Deployment
Momma Bear Level - Just the right level... ;0)
- Windows Azure Evaluation Guide - if you are new to Windows Azure Virtual Machines and new to RightScale, we recommend that you read the entire evaluation guide to gain a more complete understanding of the Windows Azure + RightScale solution.
- Windows Azure Support Page @ support.rightscale.com - FAQs, tutorials, etc. for Windows Azure Virtual Machines (Work in Progress)
Baby Bear Level - Marketing
- Windows Azure Page @ www.rightscale.com - find overview information including solution briefs and presentation & demonstration videos
- Scale and Automate Applications on Windows Azure Solution Brief - how RightScale makes Windows Azure Virtual Machines even better
- SQL Server on Windows Azure Solution Brief - Run Highly Available SQL Server on Windows Azure Virtual Machines

    Read the article

  • DataContractSerializer: type is not serializable because it is not public?

    - by Michael B. McLaughlin
    I recently ran into an odd and annoying error when working with the DataContractSerializer class for a WP7 project. I thought I'd share it to save others who might encounter it the same annoyance I had. I had an instance of ObservableCollection<T> that I was trying to serialize (with T being a class I wrote for the project), and whenever it hit the code to save it, it would give me: The data contract type 'ProjectName.MyMagicItemsClass' is not serializable because it is not public. Making the type public will fix this error. Alternatively, you can make it internal, and use the InternalsVisibleToAttribute attribute on your assembly in order to enable serialization of internal members - see documentation for more details. Be aware that doing so has certain security implications. This, of course, was malarkey. I was trying to write an instance of MyAwesomeClass that looked like this:

[DataContract]
public class MyAwesomeClass
{
    [DataMember]
    public ObservableCollection<MyMagicItemsClass> GreatItems { get; set; }

    [DataMember]
    public ObservableCollection<MyMagicItemsClass> SuperbItems { get; set; }

    public MyAwesomeClass()
    {
        GreatItems = new ObservableCollection<MyMagicItemsClass>();
        SuperbItems = new ObservableCollection<MyMagicItemsClass>();
    }
}

That's all well and fine. And MyMagicItemsClass was also public with a parameterless public constructor. It too had DataContractAttribute applied to it, and DataMemberAttribute applied to all the properties and fields I wanted to serialize. Everything should be cool, but it's not, because I kept getting that "not public" exception. I could tell you about all the things I tried (generating a List<T> on the fly to make sure it wasn't ObservableCollection<T>, trying to serialize the Collections directly, moving it all to a separate library project, etc.), but I want to keep this short. In the end, I remembered the "Debug->Exceptions…" VS menu option that brings up the list of exception-related circumstances under which the Visual Studio debugger will break. I checked the "Thrown" checkbox for "Common Language Runtime Exceptions", started the project under the debugger, and voilà: the true problem revealed itself. Some of my properties had fairly elaborate setters whose logic I wanted the serializer to ignore. So for some of them, I applied an IgnoreDataMember attribute and applied the DataMember attribute to the underlying fields instead. All of which, in line with good programming practices, were private. Well, it just so happens that WP7 apps run in a "partial trust" environment, and outside of "full trust"-land, DataContractSerializer refuses to serialize or deserialize non-public members. Of course that exception was swallowed up internally by .NET, so all I ever saw was that bizarre message about things that I knew for certain were public being "not public". I changed all the private fields I was serializing to public and everything worked just fine. In hindsight it all makes perfect sense. The serializer uses reflection to build up its graph of the object in order to write it out. In partial trust, you don't want people using reflection to get at non-public members of an object, since there are potential security problems with allowing that (you could break out of the sandbox pretty quickly by reflecting and calling the appropriate methods, and cause some havoc by reflecting and setting the appropriate fields in certain circumstances).
The fact that you cannot reflect your own assembly seems a bit heavy-handed, but then again I’m not a compiler writer or a framework designer and I have no idea what sorts of difficulties would go into allowing that from a compilation standpoint or what sorts of security problems allowing that could present (if any). So, lesson learned. If you get an incomprehensible exception message, turn on break on all thrown exceptions and try running it again (it might take a couple of tries, depending) and see what pops out. Chances are you’ll find the buried exception that actually explains what was going on. And if you’re getting a weird exception when trying to use DataContractSerializer complaining about public types not being public, chances are you’re trying to serialize a private or protected field/property.
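To make the failure mode concrete, here is a minimal sketch (hypothetical names, not the original project's code) of the pattern that triggers the misleading exception under partial trust, and the change that resolves it:

using System.Runtime.Serialization;

[DataContract]
public class MyMagicItemsClass
{
    // Serializing a private backing field works in full trust, but under WP7's
    // partial trust DataContractSerializer cannot touch non-public members and
    // surfaces the confusing "type ... is not public" message instead.
    [DataMember]
    private string _label;

    // The elaborate setter we wanted the serializer to skip.
    [IgnoreDataMember]
    public string Label
    {
        get { return _label; }
        set { _label = value; /* validation and change-tracking logic here */ }
    }
}

// Fix for partial trust: make the serialized member public, e.g.
// [DataMember] public string Label; (or serialize the property itself).

The same class serializes without complaint in a full-trust desktop application, which is why the error only shows up on the phone.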

    Read the article

  • SQL Constraints &ndash; CHECK and NOCHECK

    - by David Turner
    One performance issue I faced on a recent project was with the way that our constraints were being managed. We were using Subsonic as our ORM, and it has a useful tool for generating your ORM code called SubStage - once configured, you can regenerate your DAL code easily based on your database schema, and it can even be integrated into your build as a pre-build event if you want to do this. SubStage also offers the useful feature of being able to generate DDL scripts for your entire database, and can script your data for you too. The problem came when we decided to use the generate scripts feature to migrate the database onto a test database instance - it turns out that the DDL scripts that it generates include the WITH NOCHECK option, so when we executed them on the test instance and performed some testing, we found that performance wasn't as expected.
A constraint can be disabled, enabled but not trusted, or enabled and trusted. When it is disabled, data can be inserted that violates the constraint because it is not being enforced; this is useful for bulk load scenarios where performance is important. So what does it mean to say that a constraint is trusted or not trusted? This refers to the SQL Server Query Optimizer, and whether it trusts that the constraint is valid. If it trusts the constraint then it doesn't need to verify it when executing a query, so the query can be executed much faster.
Here is an example based on this article on TechNet. We create two tables with a Foreign Key constraint between them, and add a single row to each. We then query the tables:

DROP TABLE t2
DROP TABLE t1
GO

CREATE TABLE t1(col1 int NOT NULL PRIMARY KEY)
CREATE TABLE t2(col1 int NOT NULL)

ALTER TABLE t2 WITH CHECK ADD CONSTRAINT fk_t2_t1 FOREIGN KEY(col1)
REFERENCES t1(col1)

INSERT INTO t1 VALUES(1)
INSERT INTO t2 VALUES(1)
GO

SELECT COUNT(*) FROM t2
WHERE EXISTS
  (SELECT *
   FROM t1
   WHERE t1.col1 = t2.col1)

This all works fine, and in this scenario the constraint is enabled and trusted. We can verify this by querying the 'is_disabled' and 'is_not_trusted' properties:

select name, is_disabled, is_not_trusted from sys.foreign_keys

Both properties are 0 for fk_t2_t1. We can disable the constraint using this SQL:

alter table t2 NOCHECK CONSTRAINT fk_t2_t1

When we query the constraint again, we see that it is disabled and not trusted. The constraint won't be enforced, so we could insert data into table t2 that doesn't match the data in t1 - but we don't want to do this, so we enable the constraint again using this SQL:

alter table t2 CHECK CONSTRAINT fk_t2_t1

But when we query the constraint again, we see that it is enabled, yet still not trusted. This means that the optimizer cannot rely on the constraint when it builds a query plan and has to verify the relationship at query time, which will impact the performance of the query. This is definitely not what we want, so we need to make the constraint trusted by the optimizer again.
First, we should check that our constraints haven't been violated, which we can do by running DBCC:

DBCC CHECKCONSTRAINTS (t2)

If DBCC completes without reporting any violations of the constraint, we have verified that the constraint was not violated while it was disabled, and we can simply execute the following SQL:

alter table t2 WITH CHECK CHECK CONSTRAINT fk_t2_t1

At first glance this looks like it must be a typo to have the keyword CHECK repeated twice in succession, but it is the correct syntax, and when we query the constraint's properties we find that it is now trusted again. To fix our specific problem, we created a script that re-checked all constraints on our tables, using the following syntax:

ALTER TABLE t2 WITH CHECK CHECK CONSTRAINT ALL
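As a rough sketch of what such a script can look like (an illustration of the approach, not the project's actual script), the untrusted constraints can be found, and the corresponding re-check statements generated, from the same sys.foreign_keys view used above:

-- Generate a WITH CHECK CHECK statement for every enabled but untrusted
-- foreign key, so each one can be re-validated and trusted again.
SELECT 'ALTER TABLE [' + s.name + '].[' + t.name + '] WITH CHECK CHECK CONSTRAINT [' + fk.name + '];'
FROM sys.foreign_keys AS fk
JOIN sys.tables AS t ON fk.parent_object_id = t.object_id
JOIN sys.schemas AS s ON t.schema_id = s.schema_id
WHERE fk.is_not_trusted = 1
  AND fk.is_disabled = 0;

The generated statements can then be reviewed and executed; the same idea extends to CHECK constraints via the sys.check_constraints view.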

    Read the article

  • Systems of Engagement

    - by Michael Snow
    Engagement Week - This week we'll be looking at the ever-evolving topic of systems of engagement. This topic continues generating widespread discussion around how we connect with businesses, employers, governments, and extended social communities across multiple channels spanning web, mobile and human face-to-face contact. Earlier in our Social Business Thought Leader Webcast Series, we had AIIM President John Mancini presenting "Moving from Records to Engagement to Insight", discussing the factors that are driving organizations to think more strategically about the intersection of content management, social technologies, and business processes. John spoke about how Content Management and Enterprise IT are being changed by social technologies, how new technologies are being used to drive innovation and transform processes, and what the implications of this transformation are for information professionals. He used two slides to illustrate the evolution from Systems of Record to Systems of Engagement. The AIIM White Paper is available for download from the AIIM website. Later this week (09/20), we'll have another session in our Social Business Thought Leader Webcast Series featuring R "Ray" Wang (@rwang0), Principal Analyst & CEO at Constellation Research, presenting "Engaging Customers in the Era of Overexposure". More info to come tomorrow on the upcoming webcast. ~~~~~~ In the spirit of spreading good karma - one of the first things that came to mind as I was thinking about "Engagement" was the evolution of the Marriage Proposal. Someone sent me a link a couple of months ago that raises the bar on all proposals. I hope you'll enjoy!

    Read the article

  • Oracle GoldenGate 12c - Leading Enterprise Replication

    - by Doug Reid
    Oracle GoldenGate 12c was released on October 17th and includes several new cutting-edge features that firmly establish GoldenGate's leadership position in the data replication space. In fact, this release more than doubles the performance of data delivery, supports Oracle's new multitenant database feature, is more secure, has more options for high availability, and has made great strides in simplifying the configuration and deployment of the product. Read through the press release if you haven't already, and do not miss the quote from CERN's Eva Dafonte Perez regarding Oracle GoldenGate 12c: "….performs five times faster compared to previous GoldenGate versions and simplifies the management of a multi-tier environment". There are a variety of new and improved features in Oracle GoldenGate 12c. Here are the highlights:
Optimized for Oracle Database 12c - GoldenGate 12c is custom tailored to the unique capabilities of Oracle Database 12c, and out of the box GoldenGate 12c supports multitenant (pluggable database (PDB)) and non-consolidated deployments of Oracle Database 12c. The naming convention used by Database 12c is now in three parts (PDB name, schema name, and object name). We have made changes to the GoldenGate capture process to support the new naming convention and streamlined the whole process, so a single GoldenGate capture process is used at the container level rather than at each individual PDB. By having the capture process at the container level, resource usage and the number of processes are reduced. To view a conceptual architecture diagram click here.
Integrated Delivery for the Oracle Database - Leveraging a lightweight streaming API built exclusively for Oracle GoldenGate 12c, this process distributes load, auto-tunes the degree of parallelism, scales better, and delivers blinding rates of changed-data delivery to the Oracle database. One of the goals for Oracle GoldenGate 12c was to reduce IT costs by simplifying the configuration and reducing the time needed to manage complex infrastructures. In previous versions of Oracle GoldenGate, customers would split transaction loads by grouping tables into multiple different delivery processes (click here to view the previous method). Each delivery process executed independently and without any interaction with, or knowledge of, other delivery processes. This setup was complicated to configure and time consuming, as the developer needed in-depth knowledge of the source and target schemas and the transaction profile. With GoldenGate 12c and Integrated Delivery we have made it easier to configure and faster to deploy. To view a conceptual architecture diagram of Integrated Delivery click here.
Coordinated Delivery for Non-Oracle Databases - Coordinated Delivery orchestrates high-speed apply processes and simplifies the configuration of GoldenGate for non-Oracle targets. In Oracle GoldenGate 12c a single delivery process is used with multiple threads (click here), and key events, such as primary key updates, event markers, DDL, etc., are coordinated between the various threads to ensure that the transactions are applied in the same sequence as they were captured, all while delivering improved performance.
Replication Between On-Premises and Cloud-Based Systems - The trend for businesses to utilize both on-premises and cloud-based systems is rising, and businesses need to replicate data back and forth.
GoldenGate 12c can be configured in a variety of ways to provide real-time replication when unrestricted or restricted (limited ports or HTTP tunneling) networks sit between on-premises and cloud-based systems.
Expanded Heterogeneity - It wouldn't be a GoldenGate release without new and improved platform support. Release 1 includes support for MySQL 5.6 and Sybase 15.7. In the next GoldenGate release, support will be expanded for MS SQL Server, DB2, and Teradata.
Tighter Security - Oracle GoldenGate 12c is integrated with the Oracle Wallet to shield usernames and passwords using strong encryption and aliases. Customers accustomed to using the Oracle Wallet with other Oracle products will instantly be familiar with how to use this great new feature.
Expanded Oracle Application and Technology Support - GoldenGate can be used along with Oracle Coherence to enable real-time changed data feeds to the Coherence cache using TopLink and the Oracle GoldenGate JMS adapter. Plus, Oracle Advanced Customer Services (ACS) now offers low-downtime E-Business Suite platform and database migrations using GoldenGate as the enabling technology.
Stay tuned for more blogs on the new features and the upcoming launch webcast, where we will go into these new features in more detail. In the meantime, make sure to read through our white paper "Oracle GoldenGate 12c Release 1 New Features Overview".

    Read the article

  • PhP Login/Register system [migrated]

    - by Marian
    I found this good tutorial on creating a login/register system using PHP and MySQL. The forum thread is around 5 years old (edited last year) but it can still be useful: Beginner Simple Register-Login system. There seems to be an issue with both the login and register pages.

<?php
function register_form(){
    $date = date('D, M, Y');
    echo "<form action='?act=register' method='post'>"
        ."Username: <input type='text' name='username' size='30'><br>"
        ."Password: <input type='password' name='password' size='30'><br>"
        ."Confirm your password: <input type='password' name='password_conf' size='30'><br>"
        ."Email: <input type='text' name='email' size='30'><br>"
        ."<input type='hidden' name='date' value='$date'>"
        ."<input type='submit' value='Register'>"
        ."</form>";
}

function register(){
    $connect = mysql_connect("host", "username", "password");
    if(!$connect){
        die(mysql_error());
    }
    $select_db = mysql_select_db("database", $connect);
    if(!$select_db){
        die(mysql_error());
    }
    $username = $_REQUEST['username'];
    $password = $_REQUEST['password'];
    $pass_conf = $_REQUEST['password_conf'];
    $email = $_REQUEST['email'];
    $date = $_REQUEST['date'];
    if(empty($username)){
        die("Please enter your username!<br>");
    }
    if(empty($password)){
        die("Please enter your password!<br>");
    }
    if(empty($pass_conf)){
        die("Please confirm your password!<br>");
    }
    if(empty($email)){
        die("Please enter your email!");
    }
    $user_check = mysql_query("SELECT username FROM users WHERE username='$username'");
    $do_user_check = mysql_num_rows($user_check);
    $email_check = mysql_query("SELECT email FROM users WHERE email='$email'");
    $do_email_check = mysql_num_rows($email_check);
    if($do_user_check > 0){
        die("Username is already in use!<br>");
    }
    if($do_email_check > 0){
        die("Email is already in use!");
    }
    if($password != $pass_conf){
        die("Passwords don't match!");
    }
    $insert = mysql_query("INSERT INTO users (username, password, email) VALUES ('$username', '$password', '$email')");
    if(!$insert){
        die("There's little problem: ".mysql_error());
    }
    echo $username.", you are now registered. Thank you!<br><a href=login.php>Login</a> | <a href=index.php>Index</a>";
}

switch($act){
    default;
        register_form();
        break;
    case "register";
        register();
        break;
}
?>

Once the register button is pressed, the page does nothing: the fields are cleared, no data is added to the database, and no error is given. I thought that the problem might be the switch($act){ part, so I removed it and changed the page to use a require: require('connect.php'); where connect.php is

<?php
mysql_connect("localhost","host","password");
mysql_select_db("database");
?>

I also removed the function register_form(){ wrapper and the echo, turning it into plain HTML:

<form action='register' method='post'>
Username: <input type='text' name='username' size='30'><br>
Password: <input type='password' name='password' size='30'><br>
Confirm your password: <input type='password' name='password_conf' size='30'><br>
Email: <input type='text' name='email' size='30'><br>
<input type='hidden' name='date' value='$date'>
<input type='submit' name="register" value='Register'>
</form>

And instead of having a function register(){ I replaced it with if($register){ so that when the Register button is pressed it runs the PHP code, but this edit doesn't seem to work either. So what can the problem be? If needed I can re-add this code on my domain. The login page has the same issue: nothing happens when the button is pressed, besides emptying the fields.
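One likely culprit, offered only as a guess based on the code as posted: $act is never assigned anywhere, so with register_globals disabled the switch always falls through to the default branch and register() never runs. A minimal sketch of that change (the rest of the original code left untouched):

<?php
// Sketch: read the 'act' parameter from the request explicitly, so that
// submitting the form to '?act=register' actually reaches the register() branch.
$act = isset($_REQUEST['act']) ? $_REQUEST['act'] : '';

switch($act){
    case "register":
        register();
        break;
    default:
        register_form();
        break;
}
?>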

    Read the article

  • Algorithm for tracking progress of controller method running in background

    - by SilentAssassin
    I am using the CodeIgniter framework for PHP on the Windows platform. My problem is that I am trying to track the progress of a controller method running in the background. The controller extracts data from the database (MySQL), then does some processing, and then stores the results back in the database. The complete process described above can be considered a single task. A new task can be assigned while another task is running; the newly assigned task will be added to a queue. So if I can track the progress of the controller, I can show a status for each of these tasks: "Pending" for tasks in the queue, "In Progress" for tasks that are running, and "Done" for tasks that are completed.
Main issue: the first thing I need is an algorithm to track how much of the controller method's execution has completed. For instance, this PHP script tracks the progress of an array being counted. There the current state and the state after total execution are both known, so it is possible to track progress. But I am not able to devise anything analogous for my case. Maybe what I am trying to achieve is not programmatically possible. If it's not possible, then suggest a workaround or a completely new approach. If some details are missing, please ask for them. Sorry for my ignorance, this is my first post here. I welcome you to point out my mistakes.
EDIT:
Database outline: The URL(s) and keyword(s) are first entered by the user and stored in database tables called link_master and keyword_master respectively. Then keywords are extracted from all the links present in this table and compared with the keywords entered by the user, and their frequency is calculated, which is the final result. The results are stored in another table called link_result. Sub-links are then extracted from the domain links and stored in a table called sub_link_master. The keywords are again extracted from these sub-links and the corresponding results are stored in a table called sub_link_result. The number of records cannot be defined beforehand, as the number of links on any web page can be different. Only the cardinality of the link_result table can be known in advance, which will be equal to the number of keyword(s) multiplied by the number of URL(s). I insert multiple records at a time using this resource.
Controller outline: The controller extracts keywords from a web page and also extracts keywords from all the links present on that page. There is a method called crawlLink. I used Rolling Curl to extract keywords and web page content; it has a callback function which I used for extracting keywords along with generating results and extracting valid sub-links. There is an insertResult method which stores results for links and sub-links in the respective tables. Yes, the processing depends on the number of records; the more records there are, the more time it takes to execute. Consider this scenario:
Number of Domain Links = 1
Number of Keywords = 3
Number of Domain Link Results generated = 3 (3 x 1 as described in the question)
Number of Sub Links generated = 41
Number of Sub Link Results = 117 (41 x 3 = 123, but some links are not valid or searchable)
Approximate time taken for the above process to complete = 55 seconds
The above result is for a single link. I want to track the progress of the above results getting stored in the database. When all results are stored, the task is complete. If results are still being stored, the task is In Progress.
What I am not clear about is how I can track this progress.
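One possible workaround, sketched here under assumptions (a hypothetical task_id column tying rows in link_master, keyword_master and link_result to a task, and plain PDO rather than CodeIgniter's query builder), is to derive progress from row counts instead of from inside the running method, since the expected number of link_result rows is known in advance (URLs x keywords):

<?php
// Sketch only: hypothetical schema additions, not the tables described above.
// Progress = stored results / expected results for the given task.
function getTaskProgress(PDO $db, $taskId)
{
    $taskId = (int) $taskId;
    $urls = (int) $db->query("SELECT COUNT(*) FROM link_master WHERE task_id = $taskId")->fetchColumn();
    $keywords = (int) $db->query("SELECT COUNT(*) FROM keyword_master WHERE task_id = $taskId")->fetchColumn();
    $expected = $urls * $keywords;   // known cardinality of link_result

    $done = (int) $db->query("SELECT COUNT(*) FROM link_result WHERE task_id = $taskId")->fetchColumn();

    if ($expected === 0) {
        return array('status' => 'Pending', 'percent' => 0);
    }
    if ($done >= $expected) {
        return array('status' => 'Done', 'percent' => 100);
    }
    return array('status' => 'In Progress', 'percent' => (int) round(100 * $done / $expected));
}
?>

The sub-link results cannot be predicted the same way, but the same counter idea still works if the controller records how many sub-links it found for the task before it starts storing their results.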

    Read the article

  • NetworkManager Applet shows no networks

    - by Kkelk
    I am "the friend" referred to in the questions here and here. I decided to come and ask a question myself, as I can still not connect to the wireless network. I downloaded Keryx, as suggested here, and managed to download the necessary package and its dependencies. When I attempted to install the packages on Ubuntu using Keryx, Keryx just closed. Following this, I installed the packages manually using dpkg, and as far as I can tell, this was successful: kieran@ubuntu:~$ cd /host/wifi/Keryx/keryx/projects/Kieran/packages kieran@ubuntu:/host/wifi/Keryx/keryx/projects/Kieran/packages$ sudo dpkg -i *.deb [sudo] password for kieran: Selecting previously deselected package bcmwl-kernel-source. (Reading database ... 118296 files and directories currently installed.) Unpacking bcmwl-kernel-source (from bcmwl-kernel-source_5.60.48.36+bdcom-0ubuntu5_i386.deb) ... Selecting previously deselected package dkms. Unpacking dkms (from dkms_2.1.1.2-3ubuntu1.1_all.deb) ... Selecting previously deselected package fakeroot. Unpacking fakeroot (from fakeroot_1.14.4-1ubuntu1_i386.deb) ... Selecting previously deselected package linux-image. Unpacking linux-image (from linux-image_2.6.35.22.23_i386.deb) ... Selecting previously deselected package menu. Unpacking menu (from menu_2.1.44ubuntu1_i386.deb) ... Selecting previously deselected package patch. Unpacking patch (from patch_2.6-2ubuntu1_i386.deb) ... Setting up fakeroot (1.14.4-1ubuntu1) ... update-alternatives: using /usr/bin/fakeroot-sysv to provide /usr/bin/fakeroot (fakeroot) in auto mode. Setting up linux-image (2.6.35.22.23) ... Setting up menu (2.1.44ubuntu1) ... Setting up patch (2.6-2ubuntu1) ... Processing triggers for man-db ... Setting up dkms (2.1.1.2-3ubuntu1.1) ... Setting up bcmwl-kernel-source (5.60.48.36+bdcom-0ubuntu5) ... Loading new bcmwl-5.60.48.36+bdcom DKMS files... First Installation: checking all kernels... Building only for 2.6.35-22-generic Building for architecture i686 Building initial module for 2.6.35-22-generic Done. wl.ko: Running module version sanity check. - Original module - No original module exists within this kernel - Installation - Installing to /lib/modules/2.6.35-22-generic/updates/dkms/ depmod..... DKMS: install Completed. update-initramfs: deferring update (trigger activated) Processing triggers for install-info ... Processing triggers for doc-base ... Processing 31 changed 1 added doc-base file(s)... Registering documents with scrollkeeper... Processing triggers for menu ... Processing triggers for initramfs-tools ... update-initramfs: Generating /boot/initrd.img-2.6.35-22-generic Warning: No support for locale: en_GB.utf8 After rebooting, however, there were still no wireless networks in the NetworkManager Applet list. I opened the file /var/lib/NetworkManager/NetworkManager.state, and both NetworkEnabled and WirelessEnabled were set to True. While i'm very concious I may be asking a stupid question here, both my friend and I have nothing left to suggest, and as such - I would be very grateful for any answers as to how to get wireless working.

    Read the article

  • The inevitable Hello World post!

    - by brendonpage
    Greetings to anyone reading this! This is my first of hopefully many posts. I would like to use this post to introduce myself and to let you know what to expect from this blog in future.
Okay, so a bit about myself. In case you missed the name of this blog, my name is Brendon Page! I am a Software Developer from South Africa and work for a small company whose main focus is producing software for the kitchen cupboard industry, although from time to time we do produce custom solutions for other industries. I work in a small team of 3, including myself, and am fortunate enough to work from home! I have been involved in IT since 1996, which is when I got my first PC, and started working as a junior programmer in 2003. Outside of work I enjoy playing squash, PC games and of course LANing with my friends. If I get any free time between all of that, I will usually dedicate some of it to a personal project; these are mainly prototypes for an idea I have had or for something that could be useful at work. I was in two minds about whether to include a photo of myself. The reason for this was that while I was looking for a suitable photo to use, it dawned on me how much time I dedicate to pulling funny faces in photos! I also realized how little I shave, which I blame completely on working from home. So after much debate, here I am, funny face, beard and all!
Now that you know a bit about me, let's move on to what to expect from this blog. I work predominantly with Microsoft technologies, so most if not all of my posts will be related to something Microsoft. Since most of my job entails software development, you can expect a lot of posts which will deal with the .NET Framework. I am currently working on a large Silverlight project, so my first few posts will be targeted in that direction. I will be striving to make the content of my posts as useful as possible from both an explanation and a code perspective, and I aim to include a working solution for every post, which I will put up on my SkyDrive for download. Here is what I have planned for my next few posts:
Where did my session variables go? - Here I will take you through the lessons I learnt the hard way about the ASP.NET session. I am not going to go into too much depth in this post, as there is already a lot of information available on it. I mainly want to cover it in an effort to keep the scope creep of my posts to a minimum; some of the solutions I upload will use it, and I would like to have a post that I can reference to explain why I am doing something a certain way.
Uploading files through Silverlight - Again, there is a lot of existing information on this topic, so I won't be going into too much depth, but I will be using the solution from this as a base for my next post.
Generating and displaying DeepZoom images dynamically in Silverlight - Well, the title pretty much speaks for itself on this one. As I mentioned, I will be building off the solution that I create in my 'Uploading files through Silverlight' post.
Securing DeepZoom images using a custom implementation of the MultiScaleTileSource - In this post I will look at the privacy issue surrounding the default usage of DeepZoom images in Silverlight and how to overcome it. This makes the use of DeepZoom in privacy-conscious applications more viable.
Thanks to anyone who actually read this post! I look forward to producing more which will hopefully be helpful to you.

    Read the article

  • Bitmap font rendering, UV generation and vertex placement

    - by jack
    I am generating a bitmap font, however I am not sure how to generate the UVs or how to place the glyphs. I had a thread like this once before, but it was too loosely worded as to what I was looking to do. What I am doing right now is creating a large 1024x1024 image with characters evenly placed every 64 pixels. Here is an example of what I mean. I then save the bitmap X/Y information to a file (all of the values are multiples of 64). However, I am not sure how to properly use this information and the bitmap to render. This falls into two different categories: UV generation and kerning. I believe I know how to do both of these, however when I attempt to couple them together I get horrendous results. For example, I am trying to render two different text arrays, "123" and "njfb". Ignoring the texture quality (I will be increasing the texture size to provide more detail once I fix this issue), here is what it looks like when I try to render them: http://img64.imageshack.us/img64/599/badfontrendering.png
Now for the algorithm. I am doing my letter placement with both GetABCWidth and GetKerningPairs. I am using GetABCWidth for the width of the characters, then I am getting the kerning information to adjust the characters. Does anyone have any suggestions on how I can implement my own bitmap font renderer? I am trying to do this without using external libraries such as the Angel bitmap tool or FreeType. I also want to stick to the way the bitmap font sheet is generated so I can do extra effects in the future.
Rendering algorithm:

for(U32 c = 0, vertexID = 0, i = 0; c < numberOfCharacters; ++c, vertexID += 4, i += 6)
{
    ObtainCharInformation(fontName, m_Text[c]);
    letterWidth = (charInfo.A + charInfo.B + charInfo.C) * scale;
    if(c != 0)
    {
        DWORD BytesReq = GetGlyphOutlineW(dc, m_Text[c], GGO_GRAY8_BITMAP, &gm, 0, 0, &mat);
        U8 * glyphImg = new U8[BytesReq];
        DWORD r = GetGlyphOutlineW(dc, m_Text[c], GGO_GRAY8_BITMAP, &gm, BytesReq, glyphImg, &mat);
        for (int k = 0; k < nKerningPairs; k++)
        {
            if ((kerningpairs[k].wFirst == previousCharIndex) && (kerningpairs[k].wSecond == m_Text[c]))
            {
                letterBottomLeftX += (kerningpairs[k].iKernAmount * scale);
                break;
            }
        }
        letterBottomLeftX -= (gm.gmCellIncX * scale);
    }
    SetVertex(letterBottomLeftX, 0.0f, zFight, vertexID);
    SetVertex(letterBottomLeftX, letterHeight, zFight, vertexID + 1);
    SetVertex(letterBottomLeftX + letterWidth, letterHeight, zFight, vertexID + 2);
    SetVertex(letterBottomLeftX + letterWidth, 0.0f, zFight, vertexID + 3);
    zFight -= 0.001f;
    float BottomLeftX = (F32)(charInfo.bitmapXOrigin) / (float)m_BitmapWidth;
    float BottomLeftY = (F32)(charInfo.bitmapYOrigin + charInfo.charBitmapHeight) / (float)m_BitmapWidth;
    float TopLeftX = BottomLeftX;
    float TopLeftY = (F32)(charInfo.bitmapYOrigin) / (float)m_BitmapWidth;
    float TopRightX = (F32)(charInfo.bitmapXOrigin + charInfo.B - charInfo.C) / (float)m_BitmapWidth;
    float TopRightY = TopLeftY;
    float BottomRightX = TopRightX;
    float BottomRightY = BottomLeftY;
    SetTextureCoordinate(TopLeftX, TopLeftY, vertexID + 1);
    SetTextureCoordinate(BottomLeftX, BottomLeftY, vertexID + 0);
    SetTextureCoordinate(BottomRightX, BottomRightY, vertexID + 3);
    SetTextureCoordinate(TopRightX, TopRightY, vertexID + 2);
    /// index setting
    letterBottomLeftX += letterWidth;
    previousCharIndex = m_Text[c];
}
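For the UV half of the problem in isolation, here is a small sketch (a hypothetical helper, not part of the code above) of what the coordinates reduce to with the layout described: a 1024x1024 atlas with one glyph per 64-pixel cell, where the saved X/Y origin and the glyph's bitmap size map directly to normalized texture coordinates.

// Sketch: normalized UVs for one glyph in a 1024x1024 atlas laid out in 64x64 cells.
// cellX/cellY are the saved origins (multiples of 64); glyphWidth/glyphHeight are
// the pixel size of the rendered character inside its cell.
struct GlyphUV { float u0, v0, u1, v1; };

GlyphUV ComputeGlyphUV(int cellX, int cellY, int glyphWidth, int glyphHeight)
{
    const float atlasSize = 1024.0f;
    GlyphUV uv;
    uv.u0 = cellX / atlasSize;                   // left
    uv.v0 = cellY / atlasSize;                   // top
    uv.u1 = (cellX + glyphWidth) / atlasSize;    // right
    uv.v1 = (cellY + glyphHeight) / atlasSize;   // bottom
    return uv;
}

If the quad width and the UV width are derived from different quantities (for example A + B + C for one and B - C for the other), the glyph will appear stretched, so it is worth keeping the two consistent.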

    Read the article

  • Join us on our Journey to be #1 in SaaS!

    - by jessica.ebbelaar(at)oracle.com
    WHY ORACLE? Oracle is a robust organization that has proven to maintain growth and innovation at all levels with a constantly evolving attitude. The main ingredient of Oracle's success is the 105,000 talented employees who constantly amaze each other in building a better and more innovative organization. Oracle is a company where YOU can make a difference.
What is OD? Oracle Direct is a state-of-the-art, multi-channel EMEA sales operation bringing to life the benefits of Oracle's complete technology stack. It offers you the unique opportunity to work with the most talented and like-minded sales professionals in the industry. You will have access to world-class training and structured career development programmes, allowing you to accelerate your Solution Sales career across a multitude of product lines and a choice of attractive locations.
What positions is OD hiring? Oracle is on a journey to be the #1 SaaS vendor in EMEA. Due to recent expansion and acquisitions within our Cloud Business, we are now growing our EMEA Cloud Applications Sales Group in Dublin. We have many exciting NEW opportunities across our CRM and HCM SaaS Sales teams. As a SaaS Sales Account Manager, you will proactively manage an assigned territory / vertical with responsibility for the full sales cycle. This role requires strong business development, solution selling, account management and closing skills.
What is the Business Development Group (BDG)? The Business Development Group is the key entry point in Oracle for the future Sales and Management talent of the organisation. We are the Demand Generation engine for Oracle in EMEA. We provide revenue-generating, quality sales pipeline to our Inside and Field Sales professionals as well as to our Channel Partners. Our current focus is to provide an agile and flexible service offering to our customers and stakeholders to meet ever-changing business needs, whilst constantly striving to improve the customer experience, quality of our pipeline, market coverage and penetration. As a SaaS Business Development Consultant (BDC) you will be the first touch point with new customers.
Your goal is to proactively identify and qualify business opportunities leading to revenue for Oracle. You will work closely with your Inside Sales colleagues, who will progress your qualified pipeline and opportunities.
Work for us:
- Work for the only multi-pillar SaaS vendor in the market
- Be part of a FUN, fast-paced and truly international sales team
- Develop your solution sales EXPERTISE
- Drive your CAREER development within a structured and supportive environment
The Profile:
- You have a passion for selling cutting-edge technology
- You thrive in a fast-paced and dynamic work environment where being the best is paramount
- Your priority is always the customer
- You live for a challenge and you love to win
Join us on our Journey to be #1 in SaaS and be part of our Cloud Success Story! You will find more information about open roles here.

    Read the article
