Search Results

Search found 4187 results on 168 pages for 'secure erase'.

  • Silverlight 5 – What’s New? (Including Screenshots & Code Snippets)

    - by mbcrump
    Silverlight 5 is coming next year (2011) and this blog post will tell you what you need to know before the beta ships. First, let me address people saying that it is dead after PDC 2010. I believe that it’s best to see what the market is doing, not the vendor. Below is a list of companies that are developing Silverlight 4 applications shown during the Silverlight Firestarter. Some of the companies have shipped and some haven’t. It’s just great to see the actual company names that are working on Silverlight instead of “people are developing for Silverlight”. The next thing that I wanted to point out was that HTML5, WPF and Silverlight can co-exist. In case you missed Scott Gutherie’s keynote, they actually had a slide with all three stacked together. This shows Microsoft will be heavily investing in each technology.  Even I, a Silverlight developer, am reading Pro HTML5. Microsoft said that according to the Silverlight Feature Voting site, 21k votes were entered. Microsoft has implemented about 70% of these votes in Silverlight 5. That is an amazing number, and I am crossing my fingers that Microsoft bundles Silverlight with Windows 8. Let’s get started… what’s new in Silverlight 5? I am going to show you some great application and actual code shown during the Firestarter event. Media Hardware Video Decode – Instead of using CPU to decode, we will offload it to GPU. This will allow netbooks, etc to play videos. Trickplay – Variable Speed Playback – Pitch Correction (If you speed up someone talking they won’t sound like a chipmunk). Power Management – Less battery when playing video. Screensavers will no longer kick in if watching a video. If you pause a video then screensaver will kick in. Remote Control Support – This will allow users to control playback functions like Pause, Rewind and Fastforward. IIS Media Services 4 has shipped and now supports Azure. Data Binding Layout Transitions – Just with a few lines of XAML you can create a really rich experience that is not using Storyboards or animations. RelativeSource FindAncestor – Ancestor RelativeSource bindings make it much easier for a DataTemplate to bind to a property on a container control. Custom Markup Extensions – Markup extensions allow code to be run at XAML parse time for both properties and event handlers. This is great for MVVM support. Changing Styles during Runtime By Binding in Style Setters – Changing Styles at runtime used to be a real pain in Silverlight 4, now it’s much easier. Binding in style setters allows bindings to reference other properties. XAML Debugging – Below you can see that we set a breakpoint in XAML. This shows us exactly what is going on with our binding.  WCF & RIA Services WS-Trust Support – Taken from Wikipedia: WS-Trust is a WS-* specification and OASIS standard that provides extensions to WS-Security, specifically dealing with the issuing, renewing, and validating of security tokens, as well as with ways to establish, assess the presence of, and broker trust relationships between participants in a secure message exchange. You can reduce network latency by using a background thread for networking. Supports Azure now.  Text and Printing Improved text clarity that enables better text rendering. Multi-column text flow, Character tracking and leading support, and full OpenType font support.  Includes a new Postscript Vector Printing API that provides control over what you print . Pivot functionality baked into Silverlight 5 SDK. 
Graphics Immediate mode graphics support that will enable you to use the GPU and 3D graphics supports. Take a look at what was shown in the demos below. 1) 3D view of the Earth – not really a real-world application though. A doctor’s portal. This demo really stood out for me as it shows what we can do with the 3D / GPU support. Out of Browser OOB applications can now create and manage childwindows as shown in the screenshot below.  Trusted OOB applications can use P/Invoke to call Win32 APIs and unmanaged libraries.  Enterprise Group Policy Support allow enterprises to lock down or up the sandbox capabilities of Silverlight 5 applications. In this demo, he tore the “notes” off of the application and it appeared in a new window. See the black arrow below. In this demo, he connected a USB Device which fired off a local Win32 application that provided the data off the USB stick to Silverlight. Another demo of a Silverlight 5 application exporting data right into Excel running inside of browser. Testing They demoed Coded UI, which is available now in the Visual Studio Feature Pack 2. This will allow you to create automated testing without writing any code manually. Performance: Microsoft has worked to improve the Silverlight startup time. Silverlight 5 provides 64-bit browser support.  Silverlight 5 also provides IE9 Hardware acceleration.   I am looking forward to Silverlight 5 and I hope you are too. Thanks for reading and I hope you visit again soon.  Subscribe to my feed CodeProject

    Read the article

  • Big Data – Role of Cloud Computing in Big Data – Day 11 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of NewSQL. In this article we will understand the role of the cloud in the Big Data story. What is Cloud? The cloud has been the biggest buzzword around for the last few years. Everyone knows about the cloud and it is extremely well defined online. In this article we will discuss the cloud in the context of Big Data. Cloud computing is a method of providing shared computing resources to applications that require dynamic resources. These resources include applications, computing, storage, networking, development and various deployment platforms. The fundamental idea of cloud computing is that it shares pretty much all of the resources and delivers them to end users as a service.  Examples of Cloud Computing and Big Data working together are Google and Amazon.com. Both have fantastic Big Data offerings with the help of the cloud. We will discuss this later in this blog post. There are two different Cloud Deployment Models: 1) The Public Cloud and 2) The Private Cloud. Public Cloud A public cloud is cloud infrastructure built by commercial providers (Amazon, Rackspace, etc.) who create a highly scalable data center that hides the complex infrastructure from the consumer and provides various services. Private Cloud A private cloud is cloud infrastructure built by a single organization that manages a highly scalable data center internally. Here is a quick comparison between the Public Cloud and the Private Cloud from Wikipedia: Initial cost – typically zero (public) vs. typically high (private); Running cost – unpredictable for both; Customization – impossible (public) vs. possible (private); Privacy – no for public (the host has access to the data) vs. yes for private; Single sign-on – impossible (public) vs. possible (private); Scaling up – easy while within defined limits (public) vs. laborious but with no limits (private). Hybrid Cloud A hybrid cloud is cloud infrastructure built from a composition of two or more clouds, such as a public and a private cloud. A hybrid cloud gives the best of both worlds as it combines multiple cloud deployment models together. Cloud and Big Data – Common Characteristics There are many characteristics of cloud architecture and cloud computing which are also essentially important for Big Data. They highly overlap, and in many places it just makes sense to use the power of both architectures and build a highly scalable framework. Here is the list of the characteristics of cloud computing that are important in Big Data: Scalability, Elasticity, Ad-hoc Resource Pooling, Low Cost to Setup Infrastructure, Pay on Use or Pay as you Go, Highly Available. Leading Big Data Cloud Providers There are many players in the Big Data cloud, but we will list a few of the known players here. Amazon Amazon is arguably the most popular Infrastructure as a Service (IaaS) provider. The history of how Amazon started in this business is very interesting. They started out with a massive infrastructure to support their own business. Gradually they figured out that their own resources were underutilized most of the time. They decided to get the maximum out of the resources they had, and hence they launched their Amazon Elastic Compute Cloud (Amazon EC2) service in 2006. Their products have evolved a lot recently and it is now one of their primary businesses besides retail. Amazon also offers Big Data services under Amazon Web Services. 
    Here is the list of the included services: Amazon Elastic MapReduce – It processes very high volumes of data. Amazon DynamoDB – It is a fully managed NoSQL (Not Only SQL) database service. Amazon Simple Storage Service (S3) – A web-scale service designed to store and accommodate any amount of data. Amazon High Performance Computing – It provides low-tenancy, tuned high performance computing clusters. Amazon Redshift – It is a petabyte-scale data warehousing service. Google Though Google is known for its search engine, we all know that it is much more than that. Google Compute Engine – It offers secure, flexible computing from energy efficient data centers. Google BigQuery – It allows SQL-like queries to run against large datasets. Google Prediction API – It is a cloud-based machine learning tool. Other Players Besides Amazon and Google, there are other players in the Big Data market as well. Microsoft is also taking on Big Data in the cloud with Microsoft Azure. Additionally, Rackspace and NASA together have initiated OpenStack. The goal of OpenStack is to provide a massively scaled, multitenant cloud that can run on any hardware. Things to Watch Cloud-based solutions integrate well with the Big Data story and are also very economical to implement. However, there are a few things one should be careful about when deploying Big Data on cloud solutions. Here is a list of a few things to watch: Data Integrity, Initial Cost, Recurring Cost, Performance, Data Access, Security, Location, Compliance. Every company has a different approach to Big Data and different rules and regulations. Based on various factors, one can implement their own custom Big Data solution on a cloud. Tomorrow In tomorrow’s blog post we will discuss various Operational Databases supporting Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Identity Management Monday at Oracle OpenWorld

    - by Tanu Sood
    What a great start to Oracle OpenWorld! Did you catch Larry Ellison’s keynote last evening? As expected, it was a packed house and the keynote received a tremendous response both from the live audience as well as the online community as evidenced by the frequent spontaneous applause in house and the twitter buzz. Here’s but a sampling of some of the tweets that flowed in: @paulvallee: I freaking love that #oracle has been born again in it's interest in core tech #oow (so good for #pythian) @rwang0: MyPOV: #oracle just leapfrogged the competition on the tech front across the board. All they need is the content delivery network #oow12 @roh1: LJE more astute & engaging this year. Nice announcements this year with 12c the MTDB sounding real good. #oow12 @brooke: Cool to see @larryellison interrupted multiple times by applause from the audience. Great speaker. #OOW And there’s lot more to come this week. Identity Management sessions kick-off today. Here’s a quick preview of what’s in store for you today for Identity Management: CON9405: Trends in Identity Management 10:45 a.m. – 11:45 a.m., Moscone West 3003 Hear directly from subject matter experts from Kaiser Permanente and SuperValu who would share the stage with Amit Jasuja, Senior Vice President, Oracle Identity Management and Security, to discuss how the latest advances in Identity Management that made it in Oracle Identity Management 11g Release 2 are helping customers address emerging requirements for securely enabling cloud, social and mobile environments. CON9492: Simplifying your Identity Management Implementation 3:15 p.m. – 4:15 p.m., Moscone West 3008 Implementation experts from British Telecom, Kaiser Permanente and UPMC participate in a panel to discuss best practices, key strategies and lessons learned based on their own experiences. Attendees will hear first-hand what they can do to streamline and simplify their identity management implementation framework for a quick return-on-investment and maximum efficiency. This session will also explore the architectural simplifications of Oracle Identity Governance 11gR2, focusing on how these enhancements simply deployments. CON9444: Modernized and Complete Access Management 4:45 p.m. – 5:45 p.m., Moscone West 3008 We have come a long way from the days of web single sign-on addressing the core business requirements. Today, as technology and business evolves, organizations are seeking new capabilities like federation, token services, fine grained authorizations, web fraud prevention and strong authentication. This session will explore the emerging requirements for access management, what a complete solution is like, complemented with real-world customer case studies from ETS, Kaiser Permanente and TURKCELL and product demonstrations. HOL10478: Complete Access Management Monday, October 1, 1:45 p.m. – 2:45 p.m., Marriott Marquis - Salon 1/2 And, get your hands on technology today. Register and attend the Hands-On-Lab session that demonstrates Oracle’s complete and scalable access management solution, which includes single sign-on, authorization, federation, and integration with social identity providers. Further, the session shows how to securely extend identity services to mobile applications and devices—all while leveraging a common set of policies and a single instance. Product Demonstrations The latest technology in Identity Management is also being showcased in the Exhibition Hall so do find some time to visit our product demonstrations there. 
Experts will be at hand to answer any questions. DEMOS LOCATION EXHIBITION HALL HOURS Access Management: Complete and Scalable Access Management Moscone South, Right - S-218 Monday, October 1 9:30 a.m.–6:00 p.m. 9:30 a.m.–10:45 a.m. (Dedicated Hours) Tuesday, October 2 9:45 a.m.–6:00 p.m. 2:15 p.m.–2:45 p.m. (Dedicated Hours) Wednesday, October 3 9:45 a.m.–4:00 p.m. 2:15 p.m.–3:30 p.m. (Dedicated Hours) Access Management: Federating and Leveraging Social Identities Moscone South, Right - S-220 Access Management: Mobile Access Management Moscone South, Right - S-219 Access Management: Real-Time Authorizations Moscone South, Right - S-217 Access Management: Secure SOA and Web Services Security Moscone South, Right - S-223 Identity Governance: Modern Administration and Tooling Moscone South, Right - S-210 Identity Management Monitoring with Oracle Enterprise Manager Moscone South, Right - S-212 Oracle Directory Services Plus: Performant, Cloud-Ready Moscone South, Right - S-222 Oracle Identity Management: Closed-Loop Access Certification Moscone South, Right - S-221 We recommend you keep the Focus on Identity Management document handy. And don’t forget, if you are not on site, you can catch all the keynotes LIVE from the comfort of your desk on YouTube.com/Oracle. Keep the conversation going on @oracleidm. Use #OOW and #IDM and get engaged today. Photo Courtesy: @OracleOpenWorld

    Read the article

  • Anti-Forgery Request in ASP.NET MVC and AJAX

    - by Dixin
    Background To secure websites from cross-site request forgery (CSRF, or XSRF) attack, ASP.NET MVC provides an excellent mechanism: The server prints tokens to cookie and inside the form; When the form is submitted to server, token in cookie and token inside the form are sent by the HTTP request; Server validates the tokens. To print tokens to browser, just invoke HtmlHelper.AntiForgeryToken():<% using (Html.BeginForm()) { %> <%: this.Html.AntiForgeryToken(Constants.AntiForgeryTokenSalt)%> <%-- Other fields. --%> <input type="submit" value="Submit" /> <% } %> which writes to token to the form:<form action="..." method="post"> <input name="__RequestVerificationToken" type="hidden" value="J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP" /> <!-- Other fields. --> <input type="submit" value="Submit" /> </form> and the cookie: __RequestVerificationToken_Lw__=J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP When the above form is submitted, they are both sent to server. [ValidateAntiForgeryToken] attribute is used to specify the controllers or actions to validate them:[HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult Action(/* ... */) { // ... } This is very productive for form scenarios. But recently, when resolving security vulnerabilities for Web products, I encountered 2 problems: It is expected to add [ValidateAntiForgeryToken] to each controller, but actually I have to add it for each POST actions, which is a little crazy; After anti-forgery validation is turned on for server side, AJAX POST requests will consistently fail. Specify validation on controller (not on each action) Problem For the first problem, usually a controller contains actions for both HTTP GET and HTTP POST requests, and usually validations are expected for HTTP POST requests. So, if the [ValidateAntiForgeryToken] is declared on the controller, the HTTP GET requests become always invalid:[ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public class SomeController : Controller { [HttpGet] public ActionResult Index() // Index page cannot work at all. { // ... } [HttpPost] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] public ActionResult PostAction2(/* ... */) { // ... } // ... } If user sends a HTTP GET request from a link: http://Site/Some/Index, validation definitely fails, because no token is provided. So the result is, [ValidateAntiForgeryToken] attribute must be distributed to each HTTP POST action in the application:public class SomeController : Controller { [HttpGet] public ActionResult Index() // Works. { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction2(/* ... */) { // ... } // ... 
} Solution To avoid a large number of [ValidateAntiForgeryToken] attributes (one attribute for one HTTP POST action), I created a wrapper class of ValidateAntiForgeryTokenAttribute, where HTTP verbs can be specified:[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false, Inherited = true)] public class ValidateAntiForgeryTokenWrapperAttribute : FilterAttribute, IAuthorizationFilter { private readonly ValidateAntiForgeryTokenAttribute _validator; private readonly AcceptVerbsAttribute _verbs; public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs) : this(verbs, null) { } public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs, string salt) { this._verbs = new AcceptVerbsAttribute(verbs); this._validator = new ValidateAntiForgeryTokenAttribute() { Salt = salt }; } public void OnAuthorization(AuthorizationContext filterContext) { string httpMethodOverride = filterContext.HttpContext.Request.GetHttpMethodOverride(); if (this._verbs.Verbs.Contains(httpMethodOverride, StringComparer.OrdinalIgnoreCase)) { this._validator.OnAuthorization(filterContext); } } } When this attribute is declared on controller, only HTTP requests with the specified verbs are validated:[ValidateAntiForgeryTokenWrapper(HttpVerbs.Post, Constants.AntiForgeryTokenSalt)] public class SomeController : Controller { // Actions for HTTP GET requests are not affected. // Only HTTP POST requests are validated. } Now one single attribute on controller turns on validation for all HTTP POST actions. Submit token via AJAX Problem For AJAX scenarios, when request is sent by JavaScript instead of form:$.post(url, { productName: "Tofu", categoryId: 1 // Token is not posted. }, callback); This kind of AJAX POST requests will always be invalid, because server side code cannot see the token in the posted data. Solution The token must be printed to browser then submitted back to server. So first of all, HtmlHelper.AntiForgeryToken() must be called in the page where the AJAX POST will be sent. Then jQuery must find the printed token in the page, and post it:$.post(url, { productName: "Tofu", categoryId: 1, __RequestVerificationToken: getToken() // Token is posted. }, callback); To be reusable, this can be encapsulated in a tiny jQuery plugin:(function ($) { $.getAntiForgeryToken = function () { // HtmlHelper.AntiForgeryToken() must be invoked to print the token. return $("input[type='hidden'][name='__RequestVerificationToken']").val(); }; var addToken = function (data) { // Converts data if not already a string. if (data && typeof data !== "string") { data = $.param(data); } data = data ? data + "&" : ""; return data + "__RequestVerificationToken=" + encodeURIComponent($.getAntiForgeryToken()); }; $.postAntiForgery = function (url, data, callback, type) { return $.post(url, addToken(data), callback, type); }; $.ajaxAntiForgery = function (settings) { settings.data = addToken(settings.data); return $.ajax(settings); }; })(jQuery); Then in the application just replace $.post() invocation with $.postAntiForgery(), and replace $.ajax() instead of $.ajaxAntiForgery():$.postAntiForgery(url, { productName: "Tofu", categoryId: 1 }, callback); // Token is posted. This solution looks hard coded and stupid. If you have more elegant solution, please do tell me.

    Read the article

  • Help Prevent Carpal Tunnel Problems with Workrave

    - by Matthew Guay
    Whether for work or leisure, many of us spend entirely too much time on the computer everyday.  This puts us at risk of having or aggravating Carpal Tunnel problems, but thanks to Workrave you can help to divert these problems. Workrave helps Carpal Tunnel problems by reminding you to get away from your computer periodically.  Breaking up your computer time with movement can help alleviate many computer and office related health problems.  Workrave helps by reminding you to take short pauses after several minutes of computer use, and longer breaks after continued use.  You can also use it to keep from using the computer for too much You time in a day.  Since you can change the settings to suit you, this can be a great way to make sure you’re getting the breaks you need. Install Workrave on Windows If you’re using Workrave on Windows, download (link below) and install it with the default settings. One installation setting you may wish to change is the startup.  By default Workrave will run automatically when you start your computer; if you don’t want this, you can simply uncheck the box and proceed with the installation. Once setup is finished, you can run Workrave directly from the installer. Or you can open it from your start menu by entering “workrave” in the search box. Install Workrave in Ubuntu If you wish to use it in Ubuntu, you can install it directly from the Ubuntu Software Center.  Click the Applications menu, and select Ubuntu Software Center. Enter “workrave” into the search box in the top right corner of the Software Center, and it will automatically find it.  Click the arrow to proceed to Workrave’s page. This will give you information about Workrave; simply click Install to install Workrave on your system. Enter your password when prompted. Workrave will automatically download and install.   When finished, you can find Workrave in your Applications menu under Universal Access. Using Workrave Workrave by default shows a small counter on your desktop, showing the length of time until your next Micro break (30 second break), Rest break (10 minute break), and max amount of computer usage for the day. When it’s time for a micro break, Workrave will popup a reminder on your desktop. If you continue working, it will disappear at the end of the timer.  If you stop, it will start a micro-break which will freeze most on-screen activities until the timer is over.  You can click Skip or Postpone if you do not want to take a break right then. After an hour of work, Workrave will give you a 10 minute rest break.  During this it will show you some exercises that can help eliminate eyestrain, muscle tension, and other problems from prolonged computer usage.  You can click through the exercises, or can skip or postpone the break if you wish.   Preferences You can change your Workrave preferences by right-clicking on its icon in your system tray and selecting Preferences. Here you can customize the time between your breaks, and the length of your breaks.  You can also change your daily computer usage limit, and can even turn off the postpone and skip buttons on notifications if you want to make sure you follow Workrave and take your rests! From the context menu, you can also choose Statistics.  This gives you an overview of how many breaks, prompts, and more were shown on a given day.  It also shows a total Overdue time, which is the total length of the breaks you skipped or postponed.  You can view your Workrave history as well by simply selecting a date on the calendar.   
    Additionally, the Activity tab in the Statistics pane shows more info about your computer usage, including total mouse movement, mouse button clicks, and keystrokes. Conclusion Whether you’re suffering from Carpal Tunnel or trying to prevent it, Workrave is a great solution to help remind you to get away from your computer periodically and rest.  Of course, since you can simply postpone or skip the prompts, you’ve still got to make an effort to help your own health.  But it does give you a great way to remind yourself to get away from the computer, and especially for geeks, this may be something that we really need! Download Workrave

    Read the article

  • Polynomial division overloading operator (solved)

    - by Vlad
    Ok. here's the operations i successfully code so far thank's to your help: Adittion: polinom operator+(const polinom& P) const { polinom Result; constIter i = poly.begin(), j = P.poly.begin(); while (i != poly.end() && j != P.poly.end()) { //logic while both iterators are valid if (i->pow > j->pow) { //if the current term's degree of the first polynomial is bigger Result.insert(i->coef, i->pow); i++; } else if (j->pow > i->pow) { // if the other polynomial's term degree is bigger Result.insert(j->coef, j->pow); j++; } else { // if both are equal Result.insert(i->coef + j->coef, i->pow); i++; j++; } } //handle the remaining items in each list //note: at least one will be equal to end(), but that loop will simply be skipped while (i != poly.end()) { Result.insert(i->coef, i->pow); ++i; } while (j != P.poly.end()) { Result.insert(j->coef, j->pow); ++j; } return Result; } Subtraction: polinom operator-(const polinom& P) const //fixed prototype re. const-correctness { polinom Result; constIter i = poly.begin(), j = P.poly.begin(); while (i != poly.end() && j != P.poly.end()) { //logic while both iterators are valid if (i->pow > j->pow) { //if the current term's degree of the first polynomial is bigger Result.insert(-(i->coef), i->pow); i++; } else if (j->pow > i->pow) { // if the other polynomial's term degree is bigger Result.insert(-(j->coef), j->pow); j++; } else { // if both are equal Result.insert(i->coef - j->coef, i->pow); i++; j++; } } //handle the remaining items in each list //note: at least one will be equal to end(), but that loop will simply be skipped while (i != poly.end()) { Result.insert(i->coef, i->pow); ++i; } while (j != P.poly.end()) { Result.insert(j->coef, j->pow); ++j; } return Result; } Multiplication: polinom operator*(const polinom& P) const { polinom Result; constIter i, j, lastItem = Result.poly.end(); Iter it1, it2, first, last; int nr_matches; for (i = poly.begin() ; i != poly.end(); i++) { for (j = P.poly.begin(); j != P.poly.end(); j++) Result.insert(i->coef * j->coef, i->pow + j->pow); } Result.poly.sort(SortDescending()); lastItem--; while (true) { nr_matches = 0; for (it1 = Result.poly.begin(); it1 != lastItem; it1++) { first = it1; last = it1; first++; for (it2 = first; it2 != Result.poly.end(); it2++) { if (it2->pow == it1->pow) { it1->coef += it2->coef; nr_matches++; } } nr_matches++; do { last++; nr_matches--; } while (nr_matches != 0); Result.poly.erase(first, last); } if (nr_matches == 0) break; } return Result; } Division(Edited): polinom operator/(const polinom& P) const { polinom Result, temp2; polinom temp = *this; Iter i = temp.poly.begin(); constIter j = P.poly.begin(); int resultSize = 0; if (temp.poly.size() < 2) { if (i->pow >= j->pow) { Result.insert(i->coef / j->coef, i->pow - j->pow); temp = temp - Result * P; } else { Result.insert(0, 0); } } else { while (true) { if (i->pow >= j->pow) { Result.insert(i->coef / j->coef, i->pow - j->pow); if (Result.poly.size() < 2) temp2 = Result; else { temp2 = Result; resultSize = Result.poly.size(); for (int k = 1 ; k != resultSize; k++) temp2.poly.pop_front(); } temp = temp - temp2 * P; } else break; } } return Result; } }; The first three are working correctly but division doesn't as it seems the program is in a infinite loop. Final Update After listening to Dave, I finally made it by overloading both / and & to return the quotient and the remainder so thanks a lot everyone for your help and especially you Dave for your great idea! P.S. 
    If anyone wants me to post these 2 overloaded operators, please ask by commenting on my post (and maybe give a vote up for everyone involved).

    Read the article

  • Five Reasons to Attend PLM Summit 2013: The Conference Formerly Known as AGILITY

    - by Terri Hiskey
    As we approach the end of 2012, we are also closing in on the last couple of weeks that Agile customers and prospects can register for the upcoming PLM Summit 2013 for the bargain early bird rate of $195. Register now to secure your spot! The Conference Formerly Known as AGILITY... Long-time Agile customers may remember AGILITY, which was Agile's PLM customer conference that was held on an annual basis prior to Oracle's acquisiton of Agile in 2007. In February 2012, due to feedback we received from our Agile PLM community, we successfully resurrected the AGILITY conference and renamed it the PLM Summit. The PLM Summit was so well received and well-attended, that we are doing it again in 2013. This upcoming PLM Summit is being co-located in San Francisco under the overarching banner of the Oracle Value Chain Summit, and will be held alongside several other Oracle customer conferences that cover a range of value chain solutions, including Value Chain Planning, Value Chain Execution, Procurement, Maintenance and Manufacturing. This setup offers PLM attendees the best of all worlds--the opportunity to participate and learn about PLM in smaller, focused sessions by product and by industry, while also giving attendees the chance to see how PLM works together with other critical enterprise applications that address other important aspects of the value chain. Top Five Reasons to Attend the PLM Summit 2013 In the spirit of all of the end-of-the-year lists that are currently popping up, here is a list of the top five reasons to attend the PLM Summit for anyone out there needs a little extra encouragement to register: 1. The Best Opportunities for Customer Networking   The PLM Summit offers attendees numerous opportunities to learn and network with fellow Agile users. Customer stories are featured in keynote and breakout presentations and the schedule allows for plenty of networking time during breakfasts, lunches, breaks and dinners. Customer networking is the number one reason that Agile users attend the PLM Summit. Read what attendees thought of the most recent PLM Summit: "Hearing about the implementation of Agile products from a customers’ perspective is invaluable." - Director of Quality Assurance & Regulatory Affairs, leading medical device manufacturer "Understanding the scope of other companies’ projects and the lessons learned made attending this event well worth my time." - Director of Test Engineering, global industrial manufacturer "The most beneficial thing about attending this event is the opportunity to network with other customers with similar experiences." - Director of Business Process Improvement, leading high technology company Come to the PLM Summit and play an active role within the PLM community: swap war stories and business cards, connect on LinkedIn and Facebook, share your stories and discuss the sessions from each day. Register now! 2. It's Educational! The PLM Summit is the premier educational event for anyone in the Agile PLM community. There are nearly 40 PLM-focused in-depth educational sessions led by Agile PLM experts, customers and partners that will cover a range of specific product and industry-focused topics. Keynotes will give attendees a broad overview of the entire Agile PLM footprint, while sessions will delve deeply into specific product functionality and customer case studies. There is truly something for everyone. Check out the latest agenda for view of all the sessions. 3. 
    Visit with the PLM Partner Community Our partners play a significant and important role within the Agile PLM community. At the PLM Summit, attendees will be able to meet and mingle with several of the top Oracle Agile PLM partners including: Deloitte, Domain, GoEngineer, Hitachi Consulting, IBM, Kalypso, KPIT Cummins (CPG Solutions), Perception Software, Verdant, Xavor and ZeroWaitState. Go here for a complete list of all the Value Chain Summit sponsors. 4. See Agile PLM in Action at our Dedicated PLM Demo Pods At the PLM Summit, attendees will have the chance to see Agile PLM in action at dedicated PLM demo pods, manned by expert members of our Agile PLM team. If you would like to see up close specific Agile PLM functionality, or if you have a question on how to extend the scope of your current implementation or if you want a better understanding of how to leverage Agile PLM to address specific use-cases, stop by one of the Agile PLM demo pods and engage the Agile PLM experts on hand at the PLM Summit. 5. Spend Some Time in Lovely San Francisco Still on the fence about the upcoming PLM Summit? Remember that it is being held in San Francisco, which is a fantastic city for a getaway. After spending time learning and networking about PLM, take an extra day or two to escape the dreary winter and enjoy the beautiful scenery and the unique activities offered only by the City by the Bay. You will walk away from the conference not only with renewed excitement about Agile PLM, but feeling rejuvenated in general.

    Read the article

  • Polynomial division overloading operator

    - by Vlad
    Ok. here's the operations i successfully code so far thank's to your help: Adittion: polinom operator+(const polinom& P) const { polinom Result; constIter i = poly.begin(), j = P.poly.begin(); while (i != poly.end() && j != P.poly.end()) { //logic while both iterators are valid if (i->pow > j->pow) { //if the current term's degree of the first polynomial is bigger Result.insert(i->coef, i->pow); i++; } else if (j->pow > i->pow) { // if the other polynomial's term degree is bigger Result.insert(j->coef, j->pow); j++; } else { // if both are equal Result.insert(i->coef + j->coef, i->pow); i++; j++; } } //handle the remaining items in each list //note: at least one will be equal to end(), but that loop will simply be skipped while (i != poly.end()) { Result.insert(i->coef, i->pow); ++i; } while (j != P.poly.end()) { Result.insert(j->coef, j->pow); ++j; } return Result; } Subtraction: polinom operator-(const polinom& P) const //fixed prototype re. const-correctness { polinom Result; constIter i = poly.begin(), j = P.poly.begin(); while (i != poly.end() && j != P.poly.end()) { //logic while both iterators are valid if (i->pow > j->pow) { //if the current term's degree of the first polynomial is bigger Result.insert(-(i->coef), i->pow); i++; } else if (j->pow > i->pow) { // if the other polynomial's term degree is bigger Result.insert(-(j->coef), j->pow); j++; } else { // if both are equal Result.insert(i->coef - j->coef, i->pow); i++; j++; } } //handle the remaining items in each list //note: at least one will be equal to end(), but that loop will simply be skipped while (i != poly.end()) { Result.insert(i->coef, i->pow); ++i; } while (j != P.poly.end()) { Result.insert(j->coef, j->pow); ++j; } return Result; } Multiplication: polinom operator*(const polinom& P) const { polinom Result; constIter i, j, lastItem = Result.poly.end(); Iter it1, it2, first, last; int nr_matches; for (i = poly.begin() ; i != poly.end(); i++) { for (j = P.poly.begin(); j != P.poly.end(); j++) Result.insert(i->coef * j->coef, i->pow + j->pow); } Result.poly.sort(SortDescending()); lastItem--; while (true) { nr_matches = 0; for (it1 = Result.poly.begin(); it1 != lastItem; it1++) { first = it1; last = it1; first++; for (it2 = first; it2 != Result.poly.end(); it2++) { if (it2->pow == it1->pow) { it1->coef += it2->coef; nr_matches++; } } nr_matches++; do { last++; nr_matches--; } while (nr_matches != 0); Result.poly.erase(first, last); } if (nr_matches == 0) break; } return Result; } Division(Edited): polinom operator/(const polinom& P) { polinom Result, temp; Iter i = poly.begin(); constIter j = P.poly.begin(); if (poly.size() < 2) { if (i->pow >= j->pow) { Result.insert(i->coef, i->pow - j->pow); *this = *this - Result; } } else { while (true) { if (i->pow >= j->pow) { Result.insert(i->coef, i->pow - j->pow); temp = Result * P; *this = *this - temp; } else break; } } return Result; } The first three are working correctly but division doesn't as it seems the program is in a infinite loop. Update Because no one seems to understand how i thought the algorithm, i'll explain: If the dividend contains only one term, we simply insert the quotient in Result, then we multiply it with the divisor ans subtract it from the first polynomial which stores the remainder. If the polynomial we do this until the second polynomial( P in this case) becomes bigger. I think this algorithm is called long division, isn't it? So based on these, can anyone help me with overloading the / operator correctly for my class? Thanks!
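    For readers hitting the same wall, here is a minimal, self-contained sketch of the long division described in the update above: produce one quotient term from the ratio of the leading coefficients, subtract that term times the divisor, and repeat until the remainder's degree drops below the divisor's. It deliberately uses a simplified exponent-to-coefficient std::map instead of the poster's polinom class, so it illustrates the algorithm (and why the loop terminates) rather than being a drop-in operator/; it also exposes the quotient and the remainder as a pair, as the related solved question above describes.

        #include <cmath>
        #include <iostream>
        #include <map>

        // Simplified polynomial representation: exponent -> coefficient.
        using Poly = std::map<int, double>;

        // Degree of the highest non-negligible term, or -1 for the zero polynomial.
        static int degree(const Poly& p) {
            for (auto it = p.rbegin(); it != p.rend(); ++it)
                if (std::fabs(it->second) > 1e-12) return it->first;
            return -1;
        }

        // Classic long division; fills both the quotient and the remainder in one pass.
        static void divmod(Poly dividend, const Poly& divisor, Poly& quotient, Poly& remainder) {
            quotient.clear();
            const int ddeg = degree(divisor);
            while (ddeg >= 0 && degree(dividend) >= ddeg) {
                const int lead = degree(dividend);
                const double factor = dividend[lead] / divisor.at(ddeg); // ratio of leading coefficients
                quotient[lead - ddeg] += factor;
                for (const auto& term : divisor)                         // subtract factor * x^(lead-ddeg) * divisor
                    dividend[term.first + lead - ddeg] -= factor * term.second;
                dividend.erase(lead);                                    // leading term cancels, so the degree strictly drops
            }
            remainder = dividend;                                        // whatever is left once degree < divisor's degree
        }

        // operator/ returns the quotient, operator% the remainder; both wrap the same helper.
        Poly operator/(const Poly& a, const Poly& b) { Poly q, r; divmod(a, b, q, r); return q; }
        Poly operator%(const Poly& a, const Poly& b) { Poly q, r; divmod(a, b, q, r); return r; }

        int main() {
            Poly a{{2, 1.0}, {1, -3.0}, {0, 2.0}};   // x^2 - 3x + 2
            Poly b{{1, 1.0}, {0, -1.0}};             // x - 1
            Poly q = a / b;                          // x - 2
            Poly r = a % b;                          // zero polynomial
            std::cout << "deg(q) = " << degree(q) << ", deg(r) = " << degree(r) << '\n'; // deg(q) = 1, deg(r) = -1
        }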

    Read the article

  • ACORD LOMA Session Highlights Policy Administration Trends

    - by [email protected]
    Helen Pitts, senior product marketing manager for Oracle Insurance, attended and is blogging from the ACORD LOMA Insurance Forum this week. Above: Paul Vancheri, Chief Information Officer, Fidelity Investments Life Insurance Company. Vancheri gave a presentation during the ACORD LOMA Insurance Systems Forum about the key elements of modern policy administration systems and how insurers can mitigate risk during legacy system migrations to safely introduce new technologies. When I had a few particularly challenging honors courses in college my father, a long-time technology industry veteran, used to say, "If you don't know how to do something go ask the experts. Find someone who has been there and done that, don't be afraid to ask the tough questions, and apply and build upon what you learn." (Actually he still offers this same advice today.) That's probably why my favorite sessions at industry events, like the ACORD LOMA Insurance Forum this week, are those that include insight on industry trends and case studies from carriers who share their experiences and offer best practices based upon their own lessons learned. I had the opportunity to attend a particularly insightful session Wednesday as Craig Weber, senior vice president of Celent's Insurance practice, and Paul Vancheri, CIO of Fidelity Life Investments, presented, "Managing the Dynamic Insurance Landscape: Enabling Growth and Profitability with a Modern Policy Administration System." Policy Administration Trends Growing the business is the top issue when it comes to IT among both life and annuity and property and casualty carriers according to Weber. To drive growth and capture market share from competitors, carriers are looking to modernize their core insurance systems, with 65 percent of those CIOs participating in recent Celent research citing plans to replace their policy administration systems. Weber noted that there has been continued focus and investment, particularly in the last three years, by software and technology vendors to offer modern, rules-based, configurable policy administration solutions. He added that these solutions are continuing to evolve with the ongoing aim of helping carriers rapidly meet shifting business needs--whether it is to launch new products to market faster than the competition, adapt existing products to meet shifting consumer and /or regulatory demands, or to exit unprofitable markets. He closed by noting the top four trends for policy administration either in the process of being adopted today or on the not-so-distant horizon for the future: Underwriting and service desktops New business automation Convergence of ultra-configurable and domain content-rich systems Better usability and screen design Mitigating the Risk When Making the Decision to Modernize Third-party analyst research from advisory firms like Celent was a key part of the due diligence process for Fidelity as it sought a replacement for its legacy policy administration system back in 2005, according to Vancheri. The company's business opportunities were outrunning system capability. Its legacy system had not been upgraded in several years and was deficient from a functionality and currency standpoint. This was constraining the carrier's ability to rapidly configure and bring new and complex products to market. The company sought a new, modern policy administration system, one that would enable it to keep pace with rapid and often unexpected industry changes and ahead of the competition. 
A cross-functional team that included representatives from finance, actuarial, operations, client services and IT conducted an extensive selection process. This process included deep documentation review, pilot evaluations, demonstrations of required functionality and complex problem-solving, infrastructure integration capability, and the ability to meet the company's desired cost model. The company ultimately selected an adaptive policy administration system that met its requirements to: Deliver ease of use - eliminating paper and rework, while easing the burden on representatives to sell and service annuities Provide customer parity - offering Web-based capabilities in alignment with the company's focus on delivering a consistent customer experience across its business Deliver scalability, efficiency - enabling automation, while simplifying and standardizing systems across its technology stack Offer desired functionality - supporting Fidelity's product configuration / rules management philosophy, focus on customer service and technology upgrade requirements Meet cost requirements - including implementation, professional services and licenses fees and ongoing maintenance Deliver upon business requirements - enabling the ability to drive time to market for new products and flexibility to make changes Best Practices for Addressing Implementation Challenges Based upon lessons learned during the company's implementation, Vancheri advised carriers to evaluate staffing capabilities and cultural impacts, review business requirements to avoid rebuilding legacy processes, factor in dependent systems, and review policies and practices to secure customer data. His formula for success: upfront planning + clear requirements = precision execution. Achieving a Return on Investment Vancheri said the decision to replace their legacy policy administration system and deploy a modern, rules-based system--before the economic downturn occurred--has been integral in helping the company adapt to shifting market conditions, while enabling growth in its direct channel sales of variable annuities. Since deploying its new policy admin system, the company has reduced its average time to market for new products from 12-15 months to 4.5 months. The company has since migrated its other products to the new system and retired its legacy system, significantly decreasing its overall product development cycle. From a processing standpoint Vancheri noted the company has achieved gains in automation, information, and ease of use, resulting in improved real-time data edits, controls for better quality, and tax handling capability. Plus, with by having only one platform to manage, the company has simplified its IT environment and is well positioned to deliver system enhancements for greater efficiencies. Commitment to Continuing the Investment In the short and longer term future Vancheri said the company plans to enhance business functionality to support money movement, wire automation, divorce processing on payout contracts and cost-based tracking improvements. It also plans to continue system upgrades to remain current as well as focus on further reducing cycle time, driving down maintenance costs, and integrating with other products. Helen Pitts is senior product marketing manager for Oracle Insurance focused on life/annuities and enterprise document automation.

    Read the article

  • Use WLST to Delete All JMS Messages From a Destination

    - by james.bayer
    I got a question today about whether WebLogic Server has any tools to delete all messages from a JMS Queue.  It just so happens that the WLS Console has this capability already.  It’s available on the screen after the “Show Messages” button is clicked on a destination’s Monitoring tab as seen in the screen shot below. The console is great for something ad-hoc, but what if I want to automate this?  Well it just so happens that the console is just a weblogic application layered on top of the JMX Management interface.  If you look at the MBean Reference, you’ll find a JMSDestinationRuntimeMBean that includes the operation deleteMessages that takes a JMS Message Selector as an argument.  If you pass an empty string, that is essentially a wild card that matches all messages. Coding a stand-alone JMX client for this is kind of lame, so let’s do something more suitable to scripting.  In addition to the console, WebLogic Scripting Tool (WLST) based on Jython is another way to browse and invoke MBeans, so an equivalent interactive shell session to delete messages from a destination would looks like this: D:\Oracle\fmw11gr1ps3\user_projects\domains\hotspot_domain\bin>setDomainEnv.cmd D:\Oracle\fmw11gr1ps3\user_projects\domains\hotspot_domain>java weblogic.WLST   Initializing WebLogic Scripting Tool (WLST) ...   Welcome to WebLogic Server Administration Scripting Shell   Type help() for help on available commands   wls:/offline> connect('weblogic','welcome1','t3://localhost:7001') Connecting to t3://localhost:7001 with userid weblogic ... Successfully connected to Admin Server 'AdminServer' that belongs to domain 'hotspot_domain'.   Warning: An insecure protocol was used to connect to the server. To ensure on-the-wire security, the SSL port or Admin port should be used instead.   wls:/hotspot_domain/serverConfig> serverRuntime() Location changed to serverRuntime tree. This is a read-only tree with ServerRuntimeMBean as the root. 
For more help, use help(serverRuntime)   wls:/hotspot_domain/serverRuntime> cd('JMSRuntime/AdminServer.jms/JMSServers/JMSServer-0/Destinations/SystemModule-0!Queue-0') wls:/hotspot_domain/serverRuntime/JMSRuntime/AdminServer.jms/JMSServers/JMSServer-0/Destinations/SystemModule-0!Queue-0> ls() dr-- DurableSubscribers   -r-- BytesCurrentCount 0 -r-- BytesHighCount 174620 -r-- BytesPendingCount 0 -r-- BytesReceivedCount 253548 -r-- BytesThresholdTime 0 -r-- ConsumersCurrentCount 0 -r-- ConsumersHighCount 0 -r-- ConsumersTotalCount 0 -r-- ConsumptionPaused false -r-- ConsumptionPausedState Consumption-Enabled -r-- DestinationInfo javax.management.openmbean.CompositeDataSupport(compositeType=javax.management.openmbean.CompositeType(name=DestinationInfo,items=((itemName=ApplicationName,itemType=javax.management.openmbean.SimpleType(name=java.lang.String)),(itemName=ModuleName,itemType=javax.management.openmbean.SimpleType(name=java.lang.String)),(itemName openmbean.SimpleType(name=java.lang.Boolean)),(itemName=SerializedDestination,itemType=javax.management.openmbean.SimpleType(name=java.lang.String)),(itemName=ServerName,itemType=javax.management.openmbean.SimpleType(name=java.lang.String)),(itemName=Topic,itemType=javax.management.openmbean.SimpleType(name=java.lang.Boolean)),(itemName=VersionNumber,itemType=javax.management.op ule-0!Queue-0, Queue=true, SerializedDestination=rO0ABXNyACN3ZWJsb2dpYy5qbXMuY29tbW9uLkRlc3RpbmF0aW9uSW1wbFSmyJ1qZfv8DAAAeHB3kLZBABZTeXN0ZW1Nb2R1bGUtMCFRdWV1ZS0wAAtKTVNTZXJ2ZXItMAAOU3lzdGVtTW9kdWxlLTABAANBbGwCAlb6IS6T5qL/AAAACgEAC0FkbWluU2VydmVyAC2EGgJW+iEuk+ai/wAAAAsBAAtBZG1pblNlcnZlcgAthBoAAQAQX1dMU19BZG1pblNlcnZlcng=, ServerName=JMSServer-0, Topic=false, VersionNumber=1}) -r-- DestinationType Queue -r-- DurableSubscribers null -r-- InsertionPaused false -r-- InsertionPausedState Insertion-Enabled -r-- MessagesCurrentCount 0 -r-- MessagesDeletedCurrentCount 3 -r-- MessagesHighCount 2 -r-- MessagesMovedCurrentCount 0 -r-- MessagesPendingCount 0 -r-- MessagesReceivedCount 3 -r-- MessagesThresholdTime 0 -r-- Name SystemModule-0!Queue-0 -r-- Paused false -r-- ProductionPaused false -r-- ProductionPausedState Production-Enabled -r-- State advertised_in_cluster_jndi -r-- Type JMSDestinationRuntime   -r-x closeCursor Void : String(cursorHandle) -r-x deleteMessages Integer : String(selector) -r-x getCursorEndPosition Long : String(cursorHandle) -r-x getCursorSize Long : String(cursorHandle) -r-x getCursorStartPosition Long : String(cursorHandle) -r-x getItems javax.management.openmbean.CompositeData[] : String(cursorHandle),Long(start),Integer(count) -r-x getMessage javax.management.openmbean.CompositeData : String(cursorHandle),Long(messageHandle) -r-x getMessage javax.management.openmbean.CompositeData : String(cursorHandle),String(messageID) -r-x getMessage javax.management.openmbean.CompositeData : String(messageID) -r-x getMessages String : String(selector),Integer(timeout) -r-x getMessages String : String(selector),Integer(timeout),Integer(state) -r-x getNext javax.management.openmbean.CompositeData[] : String(cursorHandle),Integer(count) -r-x getPrevious javax.management.openmbean.CompositeData[] : String(cursorHandle),Integer(count) -r-x importMessages Void : javax.management.openmbean.CompositeData[],Boolean(replaceOnly) -r-x moveMessages Integer : String(java.lang.String),javax.management.openmbean.CompositeData,Integer(java.lang.Integer) -r-x moveMessages Integer : String(selector),javax.management.openmbean.CompositeData -r-x pause Void : -r-x pauseConsumption 
Void : -r-x pauseInsertion Void : -r-x pauseProduction Void : -r-x preDeregister Void : -r-x resume Void : -r-x resumeConsumption Void : -r-x resumeInsertion Void : -r-x resumeProduction Void : -r-x sort Long : String(cursorHandle),Long(start),String[](fields),Boolean[](ascending)   wls:/hotspot_domain/serverRuntime/JMSRuntime/AdminServer.jms/JMSServers/JMSServer-0/Destinations/SystemModule-0!Queue-0> cmo.deleteMessages('') 2 where the domain name is “hotspot_domain”, the JMS Server name is “JMSServer-0”, the Queue name is “Queue-0” and the System Module is named “SystemModule-0”.  To invoke the operation, I use the “cmo” object, which is the “Current Management Object” that represents the currently navigated to MBean.  The 2 indicates that two messages were deleted.  Combining this WLST code with a recent post by my colleague Steve that shows you how to use an encrypted file to store the authentication credentials, you could easily turn this into a secure automated script.  If you need help with that step, a long while back I blogged about some WLST basics.  Happy scripting.

    Read the article

  • Sorting a file with 55K rows and varying Columns

    - by Prasad
Hi, I want to find a programmatic solution using C++. I have 900 files, each about 27 MB in size (just to give a sense of the scale). Each file has 55K rows and a varying number of columns, but the header indicates the columns. I want to sort the rows with respect to one column's value. I wrote a sorting algorithm for this (definitely a newbie attempt, you may say). The algorithm works for small row counts but fails for larger ones. Here is the code; first the basic functions I defined to use inside the main code: int getNumberOfColumns(const string& aline) { int ncols=0; istringstream ss(aline); string s1; while(ss>>s1) ncols++; return ncols; } vector<string> getWordsFromSentence(const string& aline) { vector<string>words; istringstream ss(aline); string tstr; while(ss>>tstr) words.push_back(tstr); return words; } bool findColumnName(vector<string> vs, const string& colName) { vector<string>::iterator it = find(vs.begin(), vs.end(), colName); if ( it != vs.end()) return true; else return false; } int getIndexForColumnName(vector<string> vs, const string& colName) { if ( !findColumnName(vs,colName) ) return -1; else { vector<string>::iterator it = find(vs.begin(), vs.end(), colName); return it - vs.begin(); } } ////////// I like recursive functions - I tried to create a recursive function here. This worked for small values, say 20 rows, but for 55K rows it core dumps. void sort2D(vector<string>vn, vector<string> &srt, int columnIndex) { vector<double> pVals; for ( int i = 0; i < vn.size(); i++) { vector<string>meancols = getWordsFromSentence(vn[i]); pVals.push_back(stringToDouble(meancols[columnIndex])); } srt.push_back(vn[max_element(pVals.begin(), pVals.end())-pVals.begin()]); if (vn.size() > 1 ) { vn.erase(vn.begin()+(max_element(pVals.begin(), pVals.end())-pVals.begin()) ); vector<string> vn2 = vn; //cout<<srt[srt.size() -1 ]<<endl; sort2D(vn2 , srt, columnIndex); } } Now the main code: for ( int i = 0; i < TissueNames.size() -1; i++) { for ( int j = i+1; j < TissueNames.size(); j++) { //string fname = path+"/gse7307_Female_rma"+TissueNames[i]+"_"+TissueNames[j]+".txt"; //string fname2 = sortpath2+"/gse7307_Female_rma"+TissueNames[i]+"_"+TissueNames[j]+"Sorted.txt"; string fname = path+"/gse7307_Male_rma"+TissueNames[i]+"_"+TissueNames[j]+".txt"; string fname2 = sortpath2+"/gse7307_Male_rma"+TissueNames[i]+"_"+TissueNames[j]+"4Columns.txt"; //vector<string>AllLinesInFile; BioInputStream fin(fname); string aline; getline(fin,aline); replace (aline.begin(), aline.end(), '"',' '); string headerline = aline; vector<string> header = getWordsFromSentence(aline); int pindex = getIndexForColumnName(header,"p-raw"); int xcindex = getIndexForColumnName(header,"xC"); int xeindex = getIndexForColumnName(header,"xE"); int prbindex = getIndexForColumnName(header,"X"); string newheaderline = "X\txC\txE\tp-raw"; BioOutputStream fsrt(fname2); fsrt<<newheaderline<<endl; int newpindex=3; while ( getline(fin, aline) ){ replace (aline.begin(), aline.end(), '"',' '); istringstream ss2(aline); string tstr; ss2>>tstr; tstr = ss2.str().substr(tstr.length()+1); vector<string> words = getWordsFromSentence(tstr); string values = words[prbindex]+"\t"+words[xcindex]+"\t"+words[xeindex]+"\t"+words[pindex]; AllLinesInFile.push_back(values); } vector<string>SortedLines; sort2D(AllLinesInFile, SortedLines,newpindex); for ( int si = 0; si < SortedLines.size(); si++) fsrt<<SortedLines[si]<<endl; cout<<"["<<i<<","<<j<<"] = "<<SortedLines.size()<<endl; } } Can someone suggest a better way of doing this?
Why is it failing for larger values? The primary function of interest for this question is the sort2D function. Thanks for your time and patience. Prasad.
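A likely reason for the crash: sort2D recurses once for every remaining row and copies the whole vector on each call, so with 55K rows the recursion depth alone can overflow the stack, and the repeated max_element/erase passes make it O(n^2). A minimal sketch of an alternative, assuming the same whitespace-separated four-column lines built in the main loop (the names keyValue and sortByColumn are only illustrative and not from the original code):

#include <algorithm>
#include <cstdlib>
#include <functional>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

// Parse the numeric value of the key column from one whitespace-separated line.
static double keyValue(const std::string& line, int columnIndex) {
    std::istringstream ss(line);
    std::string field;
    for (int i = 0; ss >> field; ++i)
        if (i == columnIndex) return std::atof(field.c_str());
    return 0.0; // fallback if the column is missing
}

// Iterative replacement for sort2D: O(n log n), no recursion, no per-step vector copies.
void sortByColumn(std::vector<std::string>& lines, int columnIndex) {
    // Pair each line with its key so the key column is parsed only once per line.
    std::vector<std::pair<double, std::string> > keyed;
    keyed.reserve(lines.size());
    for (size_t i = 0; i < lines.size(); ++i)
        keyed.push_back(std::make_pair(keyValue(lines[i], columnIndex), lines[i]));

    // Descending order, matching sort2D's take-the-max-first behaviour.
    std::sort(keyed.begin(), keyed.end(),
              std::greater<std::pair<double, std::string> >());

    for (size_t i = 0; i < keyed.size(); ++i)
        lines[i] = keyed[i].second;
}

Calling sortByColumn(AllLinesInFile, newpindex) in place of sort2D(AllLinesInFile, SortedLines, newpindex) and then writing AllLinesInFile out should handle 55K rows comfortably in memory; only if a whole file no longer fits in RAM would an external merge sort be needed.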

    Read the article

  • Sea Monkey Sales & Marketing, and what does that have to do with ERP?

    - by user709270
Tier One Defined By Lyle Ekdahl, Oracle JD Edwards Group Vice President and General Manager. I recently became aware of the latest Sea Monkey Sales & Marketing tactic. Wait now, what is Sea Monkey Sales & Marketing, and what does that have to do with ERP? Well, if you grew up in the USA during the '50s, '60s and maybe a bit in the early '70s, there was a unifying medium of culture known as the comic book. I was a big Iron Man fan. I always liked the troubled-hero aspect of Tony Stark, and hey, he was a technologist. This is going somewhere, just hold on. Of course comic books, like most media, contained advertisements. The ninety-pound weakling transformed by Charles Atlas in just 15 minutes per day. Baby Ruth, Juicy Fruit Gum and all assortments of Hostess goodies were on display. The best ad was for the “Amazing Live Sea-Monkeys – The real live fun-pets you grow yourself!” These ads set the standard for exaggeration and half-truth; “…they love attention…so eager to please, they can even be trained…” The cartoon picture on the ad is of a family of royal-looking sea creatures – daddy, mommy, son and little sis – sea monkey? There was a disclaimer at the bottom in fine print: “Caricatures shown not intended to depict Artemia.” OK, what ten-year-old knows what the heck Artemia is? Well, you grow up fast once you’ve been separated from your buck twenty-five plus postage just to discover that it is brine shrimp. Really dumb brine shrimp that don’t take commands or do tricks. Unfortunately the technology industry is full of sea monkey sales and marketing. Yes, believe it or not, in some cases subterfuge and obfuscation are used to secure contracts. Hey, I get it; the picture on the box might not be the actual size. Make up what you want about your product, but here is what I don’t like: could you leave out the obvious falsity when it comes to my product, especially the negative stuff? So here is the latest one – “Oracle’s JD Edwards is NOT tier one”. Really? Definition, please! Well, a whole host of googleable and reputable sources confirm that a tier one vendor is large, well known, and enjoys national and international recognition. Let me see: large, so thousands of customers? Oh, and part of the world’s largest business software and hardware corporation? Check and check; JD Edwards has that and that. Well known, enjoying national and international recognition? Oracle’s JD Edwards EnterpriseOne is available in 21 languages and is directly localized in 33 countries, supporting some of the world’s largest multinationals and many midsized domestic market companies. Something on the order of half the JD Edwards customer base is outside North America. My passport is on its third insert after two years, and not from vacations. So if you don’t mind, I am going to mark national and international recognition in the “got it” column. So what else is there? Well, let me offer a few criteria. Longevity – The JD Edwards products benefit from 35+ years of intellectual property development; through booms, busts, mergers and acquisitions, we are still here. Vision & innovation – JD Edwards is the first full-suite ERP to run on the iPad, as just one example. Proven track record of execution – Since becoming part of Oracle, JD Edwards has released to the market over 20 deliverables including major releases, point releases, new apps modules, tool releases, integrations…. 
Solid, focused functionality with a flexible, interoperable, extensible underlying architecture – JD Edwards offers solid core ERP with specialty modules for verticals all delivered on a well defined independent tools layer that helps enable you to scale your business without an ERP reimplementation A continuation plan – Oracle’s JD Edwards offers our customers a 6 year roadmap as well as interoperability with Oracle’s next generation of applications Oh I almost forgot that the expert sources agree on one additional thing, tier one may be a preferred vendor that offers product and services to you with appealing value. You should check out the TCO studies of JD Edwards. I think you will see what the thousands of customers that rely on these products to run their businesses enjoy – that is the tier one solution with the lowest TCO. Oh and if you get an offer to buy an ERP for no license charge, remember the picture on the box might not be the actual size. 

    Read the article

  • Remote Debug Windows Azure Cloud Service

    - by Shaun
Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/11/02/remote-debug-windows-azure-cloud-service.aspx. On the 22nd of October Microsoft announced the new Windows Azure SDK 2.2. It introduced a lot of cool features, but the one that stood out most is remote debug support for Windows Azure Cloud Services (a.k.a. WACS).   Live Debugging is a Nightmare for Cloud Applications When we develop against the public cloud, debugging might be the most difficult task, especially after the application has been deployed. In order to minimize the debugging effort, Microsoft provided a local emulator for cloud services and storage when the Windows Azure platform was announced. By using the local emulator, developers can run their applications on a local machine with almost the same behavior as on Windows Azure, and debug them easily and quickly. But once we deploy our application to Azure, we have to debug using logs and the diagnostic monitor, which is very inefficient. Visual Studio 2012 introduced a new feature named "anonymous remote debug" which allows any workstation, under any user, to attach to the remote process. This is less secure compared with authenticated remote debug, but much easier and simpler to use. Now, with Windows Azure SDK 2.2, we can attach to our application running on Windows Azure from our local machine, and it's very easy.   How to Use the Remote Debugger First, let's create a new Windows Azure Cloud project in Visual Studio and select ASP.NET Web Role. Then create an ASP.NET WebForm application. Next, right-click on the cloud project and select "Publish". In the publish dialog we need to make sure the application will be built in debug mode, since a .NET assembly cannot be debugged in release mode. I enabled Remote Desktop as I will log into the virtual machine later in this post; it's NOT necessary for remote debug. Then select the "advanced settings" tab and make sure "Enable Remote Debugger for all roles" is checked. In WACS, a cloud service can have one or more roles, and each role can have one or more instances. If we check this box, the remote debugger will be enabled for all roles and all instances; currently there's no way to specify which role(s) and which instance(s) to enable. Finally, click the "Publish" button. In the Windows Azure activity window in Visual Studio we can find some information about the remote debugger. Attaching to the remote process is easy. Open the "Server Explorer" window in Visual Studio and expand the "Cloud Services" node, find the cloud service, role and instance we just published and want to debug, right-click on the instance and select "Attach Debugger". After a while (depending on how fast our Internet connection to the Windows Azure data center is), Visual Studio switches to debug mode. Let's add a breakpoint in the default web page's form load function and refresh the page in the browser to see what happens. We can see that our application stopped at the breakpoint. The call stack and watch features are all available to use. Now let's hit F5 to continue, then back in the browser we will find the page was rendered successfully.   What's Under the Hood The remote debugger is a WACS plugin. When we check "enable remote debugger" in the publish dialog, Visual Studio adds two cloud configuration settings to the CSCFG file. Since they are appended at deployment time, we cannot find them in our project's CSCFG file, but if we open the publish package we can find them, as below. 
At the same time, Visual Studio generates a certificate and includes it in the package for the remote debugger. If we go to the Azure management portal we will find a certificate under our application which was created and uploaded by the remote debugger plugin. Since I enabled Remote Desktop there are two certificates in the screenshot below; the other one is for the remote debugger. When our application is deployed, the Windows Azure system opens the related ports for the remote debugger. As shown below, there are two new ports opened on my application. Finally, in our WACS virtual machine, the Windows Azure system copies the remote debug component that matches the version of Visual Studio we are using and starts it. Our application can then be debugged remotely through the Visual Studio remote debugger. Below is the task manager on the virtual machine of my WACS application.   Summary In this post I demonstrated one of the features introduced in Windows Azure SDK 2.2: the remote debugger. It allows us to attach to our application from a local machine to a Windows Azure virtual machine once it has been deployed. The remote debugger is powerful and easy to use, but it brings more security risk. And since it's only available for debug builds, performance will be worse than a release build. Hence we should only use this feature for staging tests and bug fixes (publishing our beta version to the Azure staging slot), rather than for production.   Hope this helps, Shaun All documents, related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Identity R2 - Experts Podcast Series

    - by Tanu Sood
    To follow up on the Identity Management R2 launch, a series of podcasts were recorded with subject matter experts from customer organizations, our partners and Oracle’s PM team to discuss key trends, R2 capabilities, implementation best practices and more. Below is a roll-up of the podcast series that is available on Fusion Middleware radio. R2 Podcasts:   ·         Designing the Next-Generation Identity Platform Vadim Lander, Oracle Highlights: Common architecture model, integration, interoperability and the driving factors behind R2 innovation IT Departments are shifting their Identity Management strategy to be able to support mobile, cloud and social applications. Oracle has anticipated this shift and has built a product roadmap to take advantage of this focus. Join Vadim as he discusses the design strategy behind the latest 11gR2 release and talks about how IDM services have to evolve to meet this new challenge.   ·         BETA Customer Perspective on R2 Ravi Meduri, Kaiser Permanente Highlights: R2 scalability and high availability In this podcast Ravi discusses the new features in 11gR2 that he is most interested in, including High Availability options for Access Management, multi-datacenter architecture, and what it was like working with the Oracle product team during the BETA program.   ·         Partner Perspective on R2 Rex Thexton, PricewaterhouseCoopers Highlights: Usability Enhancements for Users and Administrators A lot of new usability features went into the 11gR2 release making this the most business friendly IDM release to date. In this podcast Rex Thexton, Managing Director from PwC, talks about some of the new UI changes for both end users and administrators, and also about the new connector creation framework.   Access Request Updates in R2 Marc Boroditsky, Oracle Highlights: Access request User Interface innovations A lot of changes have been made to the Access Request user interface in the latest version of Oracle Identity Manager 11gR2. A real focus has been put on making the request process more business user friendly, and a lot of new customization capability has been added for the IT administrators. Hear Marc discuss the updated UI, and explain how administrators will be able to customize OIM to meet their company's requirements   ·         Oracle Optimized System for Oracle Unified Directory (OOS4OUD) Nick Kloski, Oracle Highlights: New Optimized System configuration for Unified Directory One of the new features in 11gR2 is the availability of an Optimized System configuration for Oracle Unified Directory. Oracle engineers installed the OUD software onto off the shelf hardware and then created a performance tuned configuration. Join us as we talk to Nick Kloski, Infrastructure Solutions Manager, all about the testing process and the resulting performance metrics.   Privileged Account Management Mark Wilcox, Oracle Highlights: Oracle Privileged Account Manager key capabilities, use cases The new release of Oracle Identity Management 11g R2 includes the capability to manage privileged accounts. Privileged accounts, if compromised, create a risk for fraud in the enterprise and as a result controlling access to privileged accounts is critical. Hear what Mark Wilcox, Principal Product Manager of Oracle Privileged Account Manager has to say about the capabilities of the offering in this podcast.   
·         Browser-based User Interface (UI) Customization Clayton Donley, Oracle Highlights: Benefits of Durable UI Configuration framework Business users need user interfaces that are not only friendly but also easily customizable. However the downside of any customization project is the cost and complexity involved in developing, testing, deploying and managing custom code. In this podcast, we examine how a new capability in Oracle Identity Management around browser based UI customization can reduce costs and complexity of customization while simplifying self service integration with corporate portal strategies.   ·         Simplifying Mobile and Social Sign-On Dan Killmer, Oracle Highlights: Secure mobile sign-on and consumption of social identities with Oracle Access Management The proliferation of mobile devices has spurred a new trend where employees tend to bring their own mobile devices to work and access corporate applications the same way they would access from a desktop or laptop. In this podcast, we examine how Oracle's latest innovation in Identity Management around Mobile and Social Sign On can simplify security and access management challenges posed by the widespread adoption of mobile devices in the enterprise. ·         Enabling Your Business with IDM R2 Scott Bonnell, Oracle Highlights: Self service, mobile access, personalization Gone are the days when Identity Management was just about stopping unauthorized users in their tracks. Identity Management if done right, can also enable your business. Join Scott Bonnell as he discusses how the IDM 11gR2 release enables the enterprise by providing self service, personalization and mobile access to corporate resources.

    Read the article

  • .NET Security Part 3

    - by Simon Cooper
You write a security-related application that allows addins to be used. These addins (as dlls) can be downloaded from anywhere, and, if allowed to run full-trust, could open a security hole in your application. So you want to restrict what the addin dlls can do, using a sandboxed appdomain, as explained in my previous posts. But there needs to be an interaction between the code running in the sandbox and the code that created the sandbox, so the sandboxed code can control or react to things that happen in the controlling application. Sandboxed code needs to be able to call code outside the sandbox. Now, there are various methods of allowing cross-appdomain calls, the two main ones being .NET Remoting with MarshalByRefObject, and WCF named pipes. I’m not going to cover the details of setting up such mechanisms here, or which you should choose for your specific situation; there are plenty of blogs and tutorials covering such issues elsewhere. What I’m going to concentrate on here is the more general problem of running fully-trusted code within a sandbox, which is required in most methods of app-domain communication and control. Defining assemblies as fully-trusted In my last post, I mentioned that when you create a sandboxed appdomain, you can pass in a list of assembly strongnames that run as full-trust within the appdomain: // get the Assembly object for the assembly Assembly assemblyWithApi = ... // get the StrongName from the assembly's collection of evidence StrongName apiStrongName = assemblyWithApi.Evidence.GetHostEvidence<StrongName>(); // create the sandbox AppDomain sandbox = AppDomain.CreateDomain( "Sandbox", null, appDomainSetup, restrictedPerms, apiStrongName); Any assembly that is loaded into the sandbox with a strong name the same as one in the list of full-trust strong names is unconditionally given full-trust permissions within the sandbox, regardless of permissions and sandbox setup. This is very powerful! You should only use this for assemblies that you trust as much as the code creating the sandbox. So now you have a class that you want the sandboxed code to call: // within assemblyWithApi public class MyApi { public static void MethodToDoThings() { ... } } // within the sandboxed dll public class UntrustedSandboxedClass { public void DodgyMethod() { ... MyApi.MethodToDoThings(); ... } } However, if you try to do this, you get quite an ugly exception: MethodAccessException: Attempt by security transparent method ‘UntrustedSandboxedClass.DodgyMethod()’ to access security critical method ‘MyApi.MethodToDoThings()’ failed. Security transparency, which I covered in my first post in the series, has entered the picture. Partially-trusted code runs at the Transparent security level, fully-trusted code runs at the Critical security level, and Transparent code cannot under any circumstances call Critical code. Security transparency and AllowPartiallyTrustedCallersAttribute So the solution is easy, right? Make MethodToDoThings SafeCritical, then the transparent code running in the sandbox can call the api: [SecuritySafeCritical] public static void MethodToDoThings() { ... } However, this doesn’t solve the problem. When you try again, exactly the same exception is thrown; MethodToDoThings is still running as Critical code. What’s going on? By default, a fully-trusted assembly always runs Critical code, regardless of any security attributes on its types and methods. 
This is because it may not have been designed in a secure way when called from transparent code – as we’ll see in the next post, it is easy to open a security hole despite all the security protections .NET 4 offers. When exposing an assembly to be called from partially-trusted code, the entire assembly needs a security audit to decide what should be transparent, safe critical, or critical, and to close any potential security holes. This is where AllowPartiallyTrustedCallersAttribute (APTCA) comes in. Without this attribute, fully-trusted assemblies run Critical code, and partially-trusted assemblies run Transparent code. When this attribute is applied to an assembly, it confirms that the assembly has had a full security audit, and it is safe to be called from untrusted code. All code in that assembly runs as Transparent, but SecurityCriticalAttribute and SecuritySafeCriticalAttribute can be applied to individual types and methods to make those run at the Critical or SafeCritical levels, with all the restrictions that entails. So, to allow the sandboxed assembly to call the full-trust API assembly, simply add APTCA to the API assembly: [assembly: AllowPartiallyTrustedCallers] and everything works as you expect. The sandboxed dll can call your API dll, and from there communicate with the rest of the application. Conclusion That’s the basics of running a full-trust assembly in a sandboxed appdomain, and allowing a sandboxed assembly to access it. The key is AllowPartiallyTrustedCallersAttribute, which is what lets partially-trusted code call a fully-trusted assembly. However, applying APTCA to an assembly is an assertion that you have run a full security audit of every type and member in the assembly. If you don’t, then you could inadvertently open a security hole. I’ll be looking at ways this can happen in my next post.

    Read the article

  • Backup Your Windows Home Server Off-Site with Asus Webstorage

    - by Mysticgeek
Windows Home Server lets you back up machines on your network easily. But what about backing up the server data? Today we take a look at ASUS WebStorage for Windows Home Server, which provides you with secure off-site backup for WHS. To use the ASUS WebStorage service you’ll need to sign up for a free account. It offers 1 GB of free storage; beyond that you can purchase an unlimited backup package for $39.99 for a one-year subscription. Note: They also offer online storage for individual PCs as well. Install ASUS WebStorage for WHS Browse to your shared folders on the server, open the Add-Ins folder and copy over the WHSConnectorSetup2.2.4.088.msi file (link below), then close out of the folder. Now launch the Windows Home Server Console from one of the computers on your network, click Settings, then Add-ins. Under Available Add-ins click the Available tab and you’ll see the ASUS WebStorage installer file we just copied over. Click the Install button. Installation kicks off, and when it’s complete you’ll need to close out of the console and reconnect. Using the ASUS WebStorage WHS Connector  When you reconnect to the WHS Console, scroll over to the ASUS WebStorage icon and click on Settings. Now log into your ASUS account… Next select the folders you want to back up to the WebStorage service. Select the radio button next to Enable to initialize the backup process… The backup process begins. You can change which folders are backed up simply by disabling the backup process, unchecking the folder(s), then enabling the backup again. ASUS WebStorage Site After you have files backed up to the ASUS site, log into your account, and you're presented with an overview of the amount of storage you’re using. It also shows what types of files are taking up certain amounts of space.   You can browse through your backed-up files and folders. It allows you to share and sync backed-up data as well. Navigate to the file you want and you can easily download it by clicking on it, or share it out by clicking the share link below it. If you choose to share it, you’re provided with a link to the file to send out to other users.   Conclusion Users of Windows Home Server have been looking for an inexpensive cloud backup solution for quite some time. There are services such as JungleDisk, KeepVault, Wuala, etc. These services probably do a better job, but can start getting expensive once you start uploading GBs of data. Another disappointment of ASUS WebStorage is that you can only back up your WHS shares (from what we’ve been able to determine); it’s an “all or nothing” type of thing. You cannot go in and select individual files and folders. The initial upload speeds can be a bit slow as well, although that might have something to do with the limited upload speed of the DSL connection we used to test it. Retrieving your data from the ASUS site is a breeze though, and all the data files are organized quite well. The WHS add-in is very easy to install and use. If you’re looking for an off-site solution for backing up your WHS data, you can test out ASUS WebStorage for free with a 1GB limit. This is good for testing the service and it might be exactly what you’re looking for. Other users may want a more advanced solution like KeepVault or CloudBerry…which is a front end for Amazon S3 storage. 
Download ASUS WebStorage WHS Addin. Other WHS off-site backup solutions: CloudBerry, JungleDisk, KeepVault, Wuala.

    Read the article

  • Android app crashes on emulator - logCat shows no errors

    - by David Miler
    I have just added the SherlockActionBar library to my android project. After some small changes (FragmentActivity - SherlockFragmentActivity, getActionBar() - getSupportActionBar(), imports) it all compiled nicely. After I run the app, however, the debugger stops, as though it had encountered an exception. However, there are no errors shown in the LogCat output. I just can't wrap my head around what's going on. Here is the logCat output after I terminate the app. 10-02 14:11:19.227: I/SystemUpdateService(174): UpdateTask at time 1349187079227 10-02 14:11:19.237: I/ActivityThread(328): Pub com.android.email.attachmentprovider: com.android.email.provider.AttachmentProvider 10-02 14:11:19.687: I/dalvikvm(81): Jit: resizing JitTable from 512 to 1024 10-02 14:11:19.809: D/MediaScannerService(150): start scanning volume internal: [/system/media] 10-02 14:11:20.047: V/AlarmClock(239): AlarmInitReceiver finished 10-02 14:11:20.087: I/ActivityManager(81): Start proc com.android.quicksearchbox for broadcast com.android.quicksearchbox/.SearchWidgetProvider: pid=346 uid=10012 gids={3003} 10-02 14:11:20.127: D/ExchangeService(320): !!! EAS ExchangeService, onStartCommand, startingUp = false, running = false 10-02 14:11:20.427: I/ActivityThread(346): Pub com.android.quicksearchbox.google: com.android.quicksearchbox.google.GoogleSuggestionProvider 10-02 14:11:20.497: I/ActivityThread(346): Pub com.android.quicksearchbox.shortcuts: com.android.quicksearchbox.ShortcutsProvider 10-02 14:11:20.657: I/ActivityManager(81): Start proc com.android.music for broadcast com.android.music/.MediaAppWidgetProvider: pid=358 uid=10028 gids={3003, 1015} 10-02 14:11:20.927: D/ExchangeService(320): !!! EAS ExchangeService, onCreate 10-02 14:11:20.967: D/dalvikvm(260): GC_CONCURRENT freed 213K, 6% free 6409K/6791K, paused 5ms+101ms 10-02 14:11:21.077: D/ExchangeService(320): !!! EAS ExchangeService, onStartCommand, startingUp = true, running = false 10-02 14:11:21.567: D/GTalkService(174): [ReonnectMgr] ### report Inet condition: status=false, networkType=0 10-02 14:11:21.587: D/ConnectivityService(81): reportNetworkCondition(0, 0) 10-02 14:11:21.597: D/ConnectivityService(81): Inet connectivity change, net=0, condition=0,mActiveDefaultNetwork=0 10-02 14:11:21.597: D/ConnectivityService(81): starting a change hold 10-02 14:11:21.697: D/GTalkService(174): [RawStanzaProvidersMgr] ##### searchProvidersFromIntent 10-02 14:11:21.697: D/GTalkService(174): [RawStanzaProvidersMgr] no intent receivers found 10-02 14:11:21.847: I/SystemUpdateService(174): cancelUpdate (empty URL) 10-02 14:11:21.847: E/TelephonyManager(174): Hidden constructor called more than once per process! 
10-02 14:11:21.867: D/dalvikvm(174): GC_CONCURRENT freed 337K, 7% free 6561K/7047K, paused 5ms+4ms 10-02 14:11:21.917: D/GTalkService(174): [ReonnectMgr] ### report Inet condition: status=false, networkType=0 10-02 14:11:21.917: D/ConnectivityService(81): reportNetworkCondition(0, 0) 10-02 14:11:21.917: D/ConnectivityService(81): Inet connectivity change, net=0, condition=0,mActiveDefaultNetwork=0 10-02 14:11:21.917: D/ConnectivityService(81): currently in hold - not setting new end evt 10-02 14:11:21.990: E/TelephonyManager(174): Original: com.google.android.location, new: com.google.android.gsf 10-02 14:11:22.027: I/SystemUpdateService(174): removeAllDownloads (cancelUpdate) 10-02 14:11:22.127: D/dalvikvm(328): GC_CONCURRENT freed 205K, 6% free 6506K/6855K, paused 660ms+3ms 10-02 14:11:22.197: D/Eas Debug(320): Logging: 10-02 14:11:22.319: D/dalvikvm(81): GREF has increased to 401 10-02 14:11:22.947: D/ExchangeService(320): !!! EAS ExchangeService, onStartCommand, startingUp = true, running = false 10-02 14:11:23.130: D/Eas Debug(320): Logging: 10-02 14:11:23.307: I//system/bin/fsck_msdos(29): Attempting to allocate 2044 KB for FAT 10-02 14:11:23.560: I/ActivityManager(81): Starting: Intent { flg=0x10000000 cmp=com.google.android.gsf/.update.SystemUpdateInstallDialog } from pid 174 10-02 14:11:23.587: I/ActivityManager(81): Starting: Intent { flg=0x10000000 cmp=com.google.android.gsf/.update.SystemUpdateDownloadDialog } from pid 174 10-02 14:11:24.087: W/ActivityManager(81): Activity pause timeout for ActivityRecord{407c7320 com.android.launcher/com.android.launcher2.Launcher} 10-02 14:11:24.237: E/TelephonyManager(174): Hidden constructor called more than once per process! 10-02 14:11:24.237: E/TelephonyManager(174): Original: com.google.android.location, new: com.google.android.gsf 10-02 14:11:24.507: D/dalvikvm(174): GC_EXPLICIT freed 231K, 7% free 6596K/7047K, paused 4ms+6ms 10-02 14:11:24.607: D/ConnectivityService(81): Inet hold end, net=0, condition =0, published condition =0 10-02 14:11:24.607: D/ConnectivityService(81): no change in condition - aborting 10-02 14:11:24.707: D/dalvikvm(174): GC_EXPLICIT freed 17K, 7% free 6579K/7047K, paused 4ms+4ms 10-02 14:11:24.947: I//system/bin/fsck_msdos(29): ** Phase 2 - Check Cluster Chains 10-02 14:11:25.117: I//system/bin/fsck_msdos(29): ** Phase 3 - Checking Directories 10-02 14:11:25.128: I//system/bin/fsck_msdos(29): ** Phase 4 - Checking for Lost Files 10-02 14:11:25.167: I//system/bin/fsck_msdos(29): 12 files, 1044448 free (522224 clusters) 10-02 14:11:25.227: I/Vold(29): Filesystem check completed OK 10-02 14:11:25.227: I/Vold(29): Device /dev/block/vold/179:0, target /mnt/sdcard mounted @ /mnt/secure/staging 10-02 14:11:25.237: D/Vold(29): Volume sdcard state changing 3 (Checking) -> 4 (Mounted) 10-02 14:11:25.257: I/PackageManager(81): Updating external media status from unmounted to mounted 10-02 14:11:25.457: D/dalvikvm(303): GC_EXPLICIT freed 35K, 6% free 6242K/6595K, paused 3ms+312ms 10-02 14:11:25.987: D/ExchangeService(320): !!! 
EAS ExchangeService, onStartCommand, startingUp = true, running = false 10-02 14:11:26.157: D/MediaScanner(150): prescan time: 2905ms 10-02 14:11:26.167: D/MediaScanner(150): scan time: 148ms 10-02 14:11:26.167: D/MediaScanner(150): postscan time: 2ms 10-02 14:11:26.167: D/MediaScanner(150): total time: 3055ms 10-02 14:11:26.197: D/MediaScannerService(150): done scanning volume internal 10-02 14:11:26.237: D/MediaScannerService(150): start scanning volume external: [/mnt/sdcard] 10-02 14:11:26.497: D/dalvikvm(143): GC_EXPLICIT freed 234K, 8% free 7735K/8327K, paused 3ms+5ms 10-02 14:11:27.180: D/dalvikvm(143): GC_CONCURRENT freed 150K, 4% free 8004K/8327K, paused 7ms+3ms 10-02 14:11:27.397: D/dalvikvm(143): GC_FOR_ALLOC freed 96K, 6% free 8310K/8775K, paused 76ms 10-02 14:11:27.580: D/dalvikvm(143): GC_FOR_ALLOC freed 515K, 11% free 8135K/9095K, paused 79ms 10-02 14:11:27.829: D/dalvikvm(143): GC_CONCURRENT freed 3K, 5% free 8694K/9095K, paused 7ms+6ms 10-02 14:11:28.137: V/TLINE(143): new: android.text.TextLine@4065b280 10-02 14:11:28.527: D/dalvikvm(143): GC_CONCURRENT freed 729K, 10% free 8764K/9671K, paused 5ms+13ms 10-02 14:11:28.677: D/dalvikvm(143): GC_FOR_ALLOC freed 152K, 11% free 8683K/9671K, paused 99ms 10-02 14:11:28.717: I/dalvikvm-heap(143): Grow heap (frag case) to 11.434MB for 2975968-byte allocation 10-02 14:11:28.807: D/dalvikvm(143): GC_FOR_ALLOC freed 0K, 9% free 11589K/12615K, paused 84ms 10-02 14:11:29.159: D/dalvikvm(143): GC_CONCURRENT freed 197K, 7% free 12195K/12999K, paused 8ms+6ms 10-02 14:11:29.647: D/dalvikvm(143): GC_EXPLICIT freed 351K, 6% free 12790K/13511K, paused 8ms+17ms 10-02 14:11:29.717: I/SurfaceFlinger(32): Boot is finished (70768 ms) 10-02 14:11:29.877: I/ARMAssembler(32): generated scanline__00000177:03010104_00000002_00000000 [ 44 ipp] (66 ins) at [0x407c7290:0x407c7398] in 990662 ns 10-02 14:11:29.907: I/ARMAssembler(32): generated scanline__00000177:03515104_00000001_00000000 [ 73 ipp] (95 ins) at [0x407c73a0:0x407c751c] in 989381 ns 10-02 14:11:30.287: D/dalvikvm(174): GC_EXPLICIT freed 25K, 8% free 6554K/7047K, paused 4ms+32ms 10-02 14:11:30.380: D/dalvikvm(143): GC_EXPLICIT freed 349K, 6% free 13124K/13895K, paused 5ms+25ms 10-02 14:11:30.957: D/dalvikvm(143): GC_FOR_ALLOC freed 1069K, 10% free 13860K/15239K, paused 81ms 10-02 14:11:32.177: D/dalvikvm(150): GC_CONCURRENT freed 183K, 6% free 6438K/6791K, paused 5ms+4ms 10-02 14:11:32.187: W/ActivityManager(81): No content provider found for: 10-02 14:11:32.607: V/MediaScanner(150): pruneDeadThumbnailFiles... android.database.sqlite.SQLiteCursor@406724a8 10-02 14:11:32.617: V/MediaScanner(150): /pruneDeadThumbnailFiles... 
android.database.sqlite.SQLiteCursor@406724a8 10-02 14:11:32.640: W/ActivityManager(81): No content provider found for: 10-02 14:11:32.640: D/VoldCmdListener(29): asec list 10-02 14:11:32.647: I/PackageManager(81): No secure containers on sdcard 10-02 14:11:32.667: D/MediaScanner(150): prescan time: 107ms 10-02 14:11:32.667: D/MediaScanner(150): scan time: 89ms 10-02 14:11:32.667: D/MediaScanner(150): postscan time: 61ms 10-02 14:11:32.667: D/MediaScanner(150): total time: 257ms 10-02 14:11:32.697: W/PackageManager(81): Unknown permission android.permission.ADD_SYSTEM_SERVICE in package com.android.phone 10-02 14:11:32.707: W/PackageManager(81): Unknown permission com.android.smspush.WAPPUSH_MANAGER_BIND in package com.android.phone 10-02 14:11:32.737: W/PackageManager(81): Not granting permission android.permission.SEND_DOWNLOAD_COMPLETED_INTENTS to package com.android.browser (protectionLevel=2 flags=0x9be45) 10-02 14:11:32.737: W/PackageManager(81): Not granting permission android.permission.BIND_APPWIDGET to package com.android.widgetpreview (protectionLevel=3 flags=0x28be44) 10-02 14:11:32.767: W/PackageManager(81): Unknown permission android.permission.READ_OWNER_DATA in package com.android.exchange 10-02 14:11:32.778: W/PackageManager(81): Unknown permission android.permission.READ_OWNER_DATA in package com.android.email 10-02 14:11:32.788: W/PackageManager(81): Unknown permission com.android.providers.im.permission.READ_ONLY in package com.google.android.apps.maps 10-02 14:11:32.797: W/PackageManager(81): Not granting permission android.permission.DEVICE_POWER to package com.android.deskclock (protectionLevel=2 flags=0x8be45) 10-02 14:11:33.137: D/MediaScannerService(150): done scanning volume external 10-02 14:11:33.197: D/PackageParser(81): Scanning package: /data/app/vmdl257911298.tmp 10-02 14:11:33.837: I/InputReader(81): Device reconfigured: id=0, name='qwerty2', surface size is now 1024x800 10-02 14:11:34.097: D/dalvikvm(81): GC_CONCURRENT freed 12185K, 47% free 13966K/26311K, paused 8ms+23ms 10-02 14:11:36.798: I/TabletStatusBar(124): DISABLE_CLOCK: no 10-02 14:11:36.798: I/TabletStatusBar(124): DISABLE_NAVIGATION: no 10-02 14:11:37.348: I/ARMAssembler(32): generated scanline__00000177:03515104_00001001_00000000 [ 91 ipp] (114 ins) at [0x407c7520:0x407c76e8] in 919320 ns 10-02 14:11:37.598: I/TabletStatusBar(124): DISABLE_BACK: no 10-02 14:11:37.710: I/ActivityManager(81): Displayed com.android.launcher/com.android.launcher2.Launcher: +46s212ms 10-02 14:11:38.817: D/dalvikvm(143): GC_CONCURRENT freed 969K, 8% free 14867K/16007K, paused 4ms+10ms 10-02 14:11:39.437: I/dalvikvm(81): Jit: resizing JitTable from 1024 to 2048 10-02 14:11:40.267: D/dalvikvm(143): GC_FOR_ALLOC freed 2357K, 16% free 14395K/17031K, paused 80ms 10-02 14:11:40.717: D/dalvikvm(143): GC_EXPLICIT freed 742K, 16% free 14358K/17031K, paused 8ms+4ms 10-02 14:11:41.617: D/dalvikvm(81): GC_CONCURRENT freed 1955K, 48% free 13869K/26311K, paused 9ms+10ms 10-02 14:11:42.559: D/dalvikvm(81): GC_CONCURRENT freed 1830K, 48% free 13881K/26311K, paused 9ms+9ms 10-02 14:11:42.758: I/PackageManager(81): Removing non-system package:cz.trilimi.sfaui 10-02 14:11:42.758: I/ActivityManager(81): Force stopping package cz.trilimi.sfaui uid=10036 10-02 14:11:42.967: D/PackageManager(81): Scanning package cz.trilimi.sfaui 10-02 14:11:42.967: I/PackageManager(81): Package cz.trilimi.sfaui codePath changed from /data/app/cz.trilimi.sfaui-1.apk to /data/app/cz.trilimi.sfaui-2.apk; Retaining data and using new 10-02 14:11:42.967: 
I/PackageManager(81): Unpacking native libraries for /data/app/cz.trilimi.sfaui-2.apk 10-02 14:11:43.097: D/installd(35): DexInv: --- BEGIN '/data/app/cz.trilimi.sfaui-2.apk' --- 10-02 14:11:45.317: D/dalvikvm(391): DexOpt: load 434ms, verify+opt 1260ms 10-02 14:11:45.407: D/installd(35): DexInv: --- END '/data/app/cz.trilimi.sfaui-2.apk' (success) --- 10-02 14:11:45.407: W/PackageManager(81): Code path for pkg : cz.trilimi.sfaui changing from /data/app/cz.trilimi.sfaui-1.apk to /data/app/cz.trilimi.sfaui-2.apk 10-02 14:11:45.407: W/PackageManager(81): Resource path for pkg : cz.trilimi.sfaui changing from /data/app/cz.trilimi.sfaui-1.apk to /data/app/cz.trilimi.sfaui-2.apk 10-02 14:11:45.407: D/PackageManager(81): Activities: cz.trilimi.sfaui.ItemListActivity cz.trilimi.sfaui.ItemDetailActivity 10-02 14:11:45.427: I/ActivityManager(81): Force stopping package cz.trilimi.sfaui uid=10036 10-02 14:11:45.657: I/installd(35): move /data/dalvik-cache/data@[email protected]@classes.dex -> /data/dalvik-cache/data@[email protected]@classes.dex 10-02 14:11:45.657: D/PackageManager(81): New package installed in /data/app/cz.trilimi.sfaui-2.apk 10-02 14:11:45.997: I/ActivityManager(81): Force stopping package cz.trilimi.sfaui uid=10036 10-02 14:11:46.147: D/dalvikvm(143): GC_EXPLICIT freed 3K, 16% free 14356K/17031K, paused 10ms+9ms 10-02 14:11:46.237: D/PackageManager(81): generateServicesMap(android.accounts.AccountAuthenticator): 3 services unchanged 10-02 14:11:46.277: D/PackageManager(81): generateServicesMap(android.content.SyncAdapter): 5 services unchanged 10-02 14:11:46.337: D/PackageManager(81): generateServicesMap(android.accounts.AccountAuthenticator): 3 services unchanged 10-02 14:11:46.347: D/PackageManager(81): generateServicesMap(android.content.SyncAdapter): 5 services unchanged 10-02 14:11:46.437: D/dalvikvm(208): GC_EXPLICIT freed 258K, 7% free 6488K/6919K, paused 3ms+5ms 10-02 14:11:46.477: W/RecognitionManagerService(81): no available voice recognition services found 10-02 14:11:46.897: I/ActivityManager(81): Start proc com.svox.pico for broadcast com.svox.pico/.VoiceDataInstallerReceiver: pid=398 uid=10006 gids={} 10-02 14:11:47.087: I/ActivityThread(398): Pub com.svox.pico.providers.SettingsProvider: com.svox.pico.providers.SettingsProvider 10-02 14:11:47.138: D/GTalkService(174): [GTalkService.1] handlePackageInstalled: re-initialize providers 10-02 14:11:47.147: D/GTalkService(174): [RawStanzaProvidersMgr] ##### searchProvidersFromIntent 10-02 14:11:47.147: D/GTalkService(174): [RawStanzaProvidersMgr] no intent receivers found 10-02 14:11:47.718: I/AccountTypeManager(208): Loaded meta-data for 1 account types, 0 accounts in 186ms 10-02 14:11:48.377: D/dalvikvm(143): GC_CONCURRENT freed 1865K, 15% free 14513K/17031K, paused 7ms+4ms 10-02 14:11:48.917: D/dalvikvm(208): GC_CONCURRENT freed 219K, 6% free 6788K/7175K, paused 7ms+73ms 10-02 14:11:49.207: D/dalvikvm(143): GC_FOR_ALLOC freed 4558K, 31% free 11866K/17031K, paused 89ms 10-02 14:11:49.587: D/dalvikvm(143): GC_CONCURRENT freed 713K, 24% free 13010K/17031K, paused 5ms+4ms 10-02 14:11:49.967: D/dalvikvm(143): GC_CONCURRENT freed 1046K, 19% free 13922K/17031K, paused 5ms+4ms 10-02 14:11:50.437: D/dalvikvm(81): GC_EXPLICIT freed 898K, 47% free 13955K/26311K, paused 6ms+39ms 10-02 14:11:50.467: I/installd(35): unlink /data/dalvik-cache/data@[email protected]@classes.dex 10-02 14:11:50.477: D/AndroidRuntime(227): Shutting down VM 10-02 14:11:50.507: D/dalvikvm(227): GC_CONCURRENT freed 97K, 84% free 331K/2048K, paused 1ms+2ms 
10-02 14:11:50.507: I/AndroidRuntime(227): NOTE: attach of thread 'Binder Thread #3' failed 10-02 14:11:50.517: D/jdwp(227): adbd disconnected 10-02 14:11:51.177: D/AndroidRuntime(410): >>>>>> AndroidRuntime START com.android.internal.os.RuntimeInit <<<<<< 10-02 14:11:51.177: D/AndroidRuntime(410): CheckJNI is ON 10-02 14:11:51.897: D/AndroidRuntime(410): Calling main entry com.android.commands.am.Am 10-02 14:11:51.937: I/ActivityManager(81): Force stopping package cz.trilimi.sfaui uid=10036 10-02 14:11:51.937: I/ActivityManager(81): Starting: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10000000 cmp=cz.trilimi.sfaui/.ItemListActivity } from pid 410 10-02 14:11:51.968: W/WindowManager(81): Failure taking screenshot for (230x179) to layer 21005 10-02 14:11:51.997: I/ActivityManager(81): Start proc cz.trilimi.sfaui for activity cz.trilimi.sfaui/.ItemListActivity: pid=418 uid=10036 gids={} 10-02 14:11:52.007: D/AndroidRuntime(410): Shutting down VM 10-02 14:11:52.057: I/AndroidRuntime(410): NOTE: attach of thread 'Binder Thread #3' failed 10-02 14:11:52.097: D/dalvikvm(410): GC_CONCURRENT freed 98K, 83% free 360K/2048K, paused 1ms+0ms 10-02 14:11:52.097: D/jdwp(410): adbd disconnected 10-02 14:11:53.147: W/ActivityThread(418): Application cz.trilimi.sfaui is waiting for the debugger on port 8100... 10-02 14:11:53.207: I/System.out(418): Sending WAIT chunk 10-02 14:11:53.217: I/dalvikvm(418): Debugger is active 10-02 14:11:53.447: I/System.out(418): Debugger has connected 10-02 14:11:53.457: I/System.out(418): waiting for debugger to settle... 10-02 14:11:53.637: I/ARMAssembler(32): generated scanline__00000177:03515104_00001002_00000000 [ 87 ipp] (110 ins) at [0x407c76f0:0x407c78a8] in 598498 ns 10-02 14:11:53.660: I/System.out(418): waiting for debugger to settle... 10-02 14:11:53.857: I/System.out(418): waiting for debugger to settle... 10-02 14:11:54.057: I/System.out(418): waiting for debugger to settle... 10-02 14:11:54.257: I/System.out(418): waiting for debugger to settle... 10-02 14:11:54.317: V/TLINE(81): new: android.text.TextLine@4155dde8 10-02 14:11:54.467: I/System.out(418): waiting for debugger to settle... 10-02 14:11:54.667: I/System.out(418): waiting for debugger to settle... 10-02 14:11:54.870: I/System.out(418): waiting for debugger to settle... 10-02 14:11:55.027: D/dalvikvm(143): GC_EXPLICIT freed 900K, 16% free 14420K/17031K, paused 7ms+4ms 10-02 14:11:55.067: I/System.out(418): waiting for debugger to settle... 10-02 14:11:55.292: I/System.out(418): debugger has settled (1315) 10-02 14:12:02.008: W/ActivityManager(81): Launch timeout has expired, giving up wake lock! 10-02 14:12:02.971: W/ActivityManager(81): Activity idle timeout for ActivityRecord{4078c6b0 cz.trilimi.sfaui/.ItemListActivity} 10-02 14:12:08.359: D/ExchangeService(320): Received deviceId from Email app: androidc259148960 10-02 14:12:08.507: D/ExchangeService(320): Reconciling accounts... 10-02 14:16:11.437: D/SntpClient(81): request time failed: java.net.SocketException: Address family not supported by protocol 10-02 14:17:21.573: W/jdwp(418): Debugger is telling the VM to exit with code=1 10-02 14:17:21.573: I/dalvikvm(418): GC lifetime allocation: 8642 bytes 10-02 14:17:21.637: D/Zygote(33): Process 418 exited cleanly (1) 10-02 14:17:21.651: I/ActivityManager(81): Process cz.trilimi.sfaui (pid 418) has died. 
10-02 14:17:21.847: D/dalvikvm(143): GC_EXPLICIT freed <1K, 16% free 14420K/17031K, paused 7ms+7ms 10-02 14:17:21.917: W/InputManagerService(81): Window already focused, ignoring focus gain of: com.android.internal.view.IInputMethodClient$Stub$Proxy@40bfbf28

    Read the article

  • 5 minutes WIF: Make your ASP.NET application use test-STS

    - by DigiMortal
Windows Identity Foundation (WIF) provides us with a simple dummy STS application we can use to develop our system with no actual STS in place. In this posting I will show you how to add STS support to your existing application and how to generate a dummy application that plays the role of your real STS. A word of caution! Although it is relatively easy to build your own STS using WIF tools, I don’t recommend you build one. Identity providers must be highly secure and stable in every way, and this makes developing your own STS a very complex task. If possible, use a known STS solution. I assume you have WIF and the WIF SDK installed on your development machine. If you don’t, here are the links to the download pages: Windows Identity Foundation Windows Identity Foundation SDK Adding STS support to your web application Suppose you have a web application and you want to externalize authentication so your application is able to detect users, send unauthenticated users to a login page and, in other respects, work exactly as it did before. WIF tools provide you with all you need. 1. Right-click your web application project and select “Add STS reference…” from the context menu to start adding or updating STS settings for the web application. 2. Insert your application URI in the application settings window. Note that the web.config file is already selected for you. I inserted a URI that corresponds to my web application address under IIS Express. This URI must exist (later), because otherwise you cannot use the dummy STS service. 3. Select “Create a new STS project in the current solution” and click the Next button. 4. The summary screen gives you information about how your site will use STS. You can run this wizard again whenever you need to modify STS parameters. Click Finish. If everything goes as expected, a new web site will be added to your solution, named YourWebAppName_STS. Dummy STS application The image on the right shows the dummy STS web site. Yes, it is created as a web site project, not as a web application. But it still works nicely and you don’t have to make any modifications there. It just works, but it is a dummy. Why a dummy STS? Some points about the dummy STS web site: The dummy STS is not a template for your own custom STS identity provider. The dummy STS is a very good and simple replacement for a real STS, so you have a more flexible development environment and you don’t have to authenticate yourself against a real service. Of course, you can modify the dummy STS web site to mimic some behavior of your real STS. Pages in the dummy STS The dummy STS has two pages – Login.aspx and Default.aspx. Default.aspx is the page that handles requests to the STS service. Login.aspx is the page where authentication takes place. The dummy STS authenticates users using FBA. You can insert whatever username you like and the dummy STS still works. You can take a look at the code behind these pages to get some idea of how this dummy service is built. But again – this service is there to simplify your life as a developer. Authenticating users using the dummy STS If you are using the development web server that ships with Visual Studio 2010, I suggest you switch over to IIS or IIS Express and make some more configuration changes as described in my previous posting, Making WIF local STS to work with your ASP.NET application. When you are done with these little modifications you are ready to run your application and see how authentication works. If everything is okay, you are redirected to the dummy STS login page when running your web application. 
Adam Carter is provided as the username by default. If you click the submit button you are authenticated and redirected to the application page. In my case it looks like this. Conclusion As you saw, it is very easy to set up your own dummy STS web site for testing purposes. You coded nothing. You just ran a wizard, inserted some data, modified the configuration a little bit and you were done. Later, when your application goes to production, you can run this STS configuration utility again and it generates the correct settings for your real STS service automatically.

    Read the article

  • Architect Day: Boston - Agenda Update

    - by Bob Rhubart
    Here's the latest information on the session schedule and content for Oracle Technology Network Architect Day in Boston, MA on September 12, 2012. Registration is open, but seating is limited. When: September 12, 2012 8:30am – 5:00pm Where: Boston Marriott Burlington One Burlington Mall Road Burlington, MA 01803 Register now Agenda Time Session Title Room 8:30 am - 9:00 am Registration and Continental Breakfast Salon E Foyer 9:00 am - 9:15 am Welcome and Opening Comments | Bob Rhubart Salon E 9:15 am - 10:00 am Engineered Systems: Oracle's Vision for the Future | Ralf Dossmann Oracle's Exadata and Exalogic are impressive products in their own right. But working in combination they deliver unparalleled transaction processing performance with up to a 30x increase over existing legacy systems, with the lowest cost of ownership over a 3 or 5 year basis than any other hardware. In this session you'll learn how to leverage Oracle's Engineered Systems within your enterprise to deliver record-breaking performance at the lowest TCO. Salon E 10:00 am - 10:30 am Securing Public and Private Clouds | Anton Nielsen Long before the term "Cloud Computing" existed, Oracle technologies supported and promoted the concept. Centralized data with remote users has been at the core of these technologies for decades. The public cloud, and extending private clouds to the internet, though, has added security challenges never imagined decades ago. This presentation will examine a real life security breach and introduce architecture, technologies and policies to secure public and private clouds.  Salon E 10:30 am - 10:45 am Break 10:45 am - 11:30 am Breakout Sessions (pick one) Cloud Computing - Making IT Simple | Scott Mattoon The road to Cloud Computing is not without a few bumps. This session will help to smooth out your journey by tackling some of the potential complications. We'll examine whether standardization is a prerequisite for the Cloud. We'll look at why refactoring isn't just for application code. We'll check out deployable entities and their simplification via higher levels of abstraction. And we'll close out the session with a look at engineered systems and modular clouds. Salon E Innovations in Grid Computing with Oracle Coherence | Rob Misek Learn how Coherence can increase the availability, scalability and performance of your existing applications with its advanced low-latency data-grid technologies. Also hear some interesting industry-specific use cases that customers had implemented and how Oracle is integrating Coherence into its Enterprise Java stack. Salon C 11:30 am - 12:15 pm Breakout Sessions (pick one) Enterprise Strategy for Cloud Security | Dave Chappelle Security is high on the list of concerns for many organizations as they evaluate their cloud computing options. This session will examine security in the context of the various forms of cloud computing. We'll consider technical and non-technical aspects of security, and discuss several strategies for cloud computing, from both the consumer and producer perspectives. Salon E Oracle Enterprise Manager | Avi Huber Much more than a DB management tool, Oracle Enterprise Manager provides management and monitoring coverage for the entire Oracle stack, and beyond. This session will concentrate on the middleware management functionality in OEM, starting with Real User Experience monitoring, through AppServer management, and into deep-dive Java diagnostics. 
We’ll discuss Business Driven Application Management (BDAM) and the benefits of top-down monitoring. Lastly, we’ll demonstrate how to trace a specific user experience problem, through a multitier SOA application, to its root cause, deep in the JVM. Salon C 12:15 pm - 1:15 pm Lunch Salon E Foyer 1:15 pm - 2:00 pm Panel Discussion - Q&A with session speakers Salon E 2:00 pm - 2:45 pm Breakout Sessions (pick one) Oracle Cloud Reference Architecture | Anbu Krishnaswamy Cloud initiatives are beginning to dominate enterprise IT roadmaps. Successful adoption of Cloud and the subsequent governance challenges warrant a Cloud reference architecture that is applied consistently across the enterprise. This presentation will answer the important questions: What exactly is a Cloud, why you need it, what changes it will bring to the enterprise, and what are the key capabilities of a Cloud infrastructure are - using Oracle's Cloud Reference Architecture, which is part of the IT Strategies from Oracle (ITSO) Cloud Enterprise Technology Strategy (ETS). Salon E 21st Century SOA | Peter Belknap Service Oriented Architecture has evolved from concept to reality in the last decade. The right methodology coupled with mature SOA technologies has helped customers demonstrate success in both innovation and ROI. In this session you will learn how Oracle SOA Suite's orchestration, virtualization, and governance capabilities provide the infrastructure to run mission critical business and system applications. And we'll take a special look at the convergence of SOA & BPM using Oracle's Unified technology stack. Salon C 2:45 pm - 3:00 pm Break 3:00 pm - 4:00 pm Roundtable Discussion Salon E 4:00 pm - 4:15 pm Closing Comments & Readouts from Roundtables Salon E 4:15 pm - 5:00 pm Networking / Reception Salon E Foyer Note: Session schedule and content subject to change.

    Read the article

  • MDM for Tax Authorities

    - by david.butler(at)oracle.com
    In last week’s MDM blog, we discussed MDM in the Public Sector. I want to continue that thread. After all, no industry faces tougher data quality problems than governmental organizations, and few industries suffer more significant downside consequences from poor operations than local, state and federal governments. One key challenge area is taxation.
    Tax Authorities face a multitude of IT challenges. Firstly, the data used in tax calculations is increasing in volume and complexity. They must improve service by introducing multi-channel contact centers and self-service capabilities. Security concerns necessitate increasingly sophisticated data protection procedures. And cost constraints are driving Tax Authorities to rely on off-the-shelf software for many of their functional areas. Compounding these issues is the fact that the IT architectures in operation at most revenue and collections agencies are very complex. They typically include multiple, disparate operational and analytical systems across which the sum total of data about individual constituents is fragmented. To make matters more complicated, taxation is not carried out by a single jurisdiction, and income from employers, investments and other taxable sources, along with deductions, must also be tracked and shared among tax authorities. Collectively, these systems are involved in tax assessment and collections, risk analysis, scoring, tracking, auditing and investigation case management.
    The Problem of Constituent Data Management
    The infrastructure described above makes it very difficult to create a consolidated representation of a given party. Differing formats and data models mean that a constituent may be represented in one way in one system and in a different way in another. Individual records are frequently inaccurate, incomplete, out of date and/or inconsistent with other records relating to the same constituent. When constituent data must be aggregated and scored, information within each system must be rationalized and normalized so the agency can produce a constituent information file (CIF) that provides a single source of truth about that party. If information about that constituent changes, each system in turn must be updated. There have been many attempts to solve this problem with technology: from consolidating transactional systems to conducting manual systems integration projects and superimposing layers of business intelligence and analytics. All these approaches can be successful in solving a portion of the problem at a specific point in time, but without an enterprise perspective, anything gained is quickly lost again.
    Oracle Constituent Data Mastering for Tax Authorities: A Single View of the Constituent
    Oracle has a flexible and long-term solution to the problem of securely integrating and managing constituent data. The Oracle solution for mastering constituent data for Tax Authorities is based on two core product offerings: Oracle Customer Hub and – optionally – Oracle Application Integration Architecture (AIA). Customer Hub is a master data management (MDM) product that centralizes, de-duplicates, and enriches constituent data. It unifies fragmented information without disrupting existing business processes or IT investments. Role-based data access and privacy rules guarantee maximum security and privacy. Data is continuously and automatically synchronized with all source systems. With the Oracle Customer Hub managing the master constituent identity, every department can capture transaction activity against the same record, improving reporting accuracy, employee productivity, reliability of constituent analytics, and day-to-day constituent relationships.
    Oracle Application Integration Architecture provides a collection of core pre-built processes to support out-of-the-box Master Data Governance across Oracle Customer Hub, Siebel CRM, and Oracle E-Business Suite. It also provides a framework to enable MDM integrations with other Oracle and non-Oracle applications. Oracle AIA removes some of the key inhibitors to implementing a service-oriented architecture (SOA) by providing a pre-built SOA-based middleware foundation as well as industry-optimized service-oriented applications, all built around a SOA governance model that encourages effective design and reuse.
    I encourage you to read Oracle Solution for Mastering Constituents Data for Public Sector – Tax Authorities by Roberto Negro. It is an outstanding whitepaper that describes how the Oracle MDM solution allows you to create a unified, reconciled source of high-quality constituent data and gain an accurate single view of each constituent. This foundation enables you to lower the costs associated with data quality and integration and create a tax organization that is efficient, secure and constituent-centric.
    Also, don’t forget the upcoming webcast on Thursday, February 10th: Deliver Improved Services to Citizens at Lower Cost to your Organization. Our guest speaker is Ruben Spekle, from Capgemini. He will provide insight into Public Sector Master Data Management and Case Management implementations, including one that was executed for a Dutch Government Agency. If you are interested in how governmental organizations from around the world are using MDM to advance their cause, click here to register for the webcast.

    Read the article

  • Whoosh: PASS Board Year 1, Q4

    - by Denise McInerney
    "Whoosh". That's the sound the last quarter of 2012 made as it rushed by. My first year on the PASS Board is complete, and the last three months of it were probably the busiest. PASS Summit 2012 Much of October was devoted to preparing for Summit. Every Board  member, HQ staffer and dozens of volunteers were busy in the run-up to our flagship event. It takes a lot of work to put on the Summit. The community meetings,  first-timers program, keynotes, sessions and that fabulous Community Appreciation party are the result of many hours of preparation. Virtual Chapters at the Summit With a lot of help from Karla Landrum, Michelle Nalliah, Lana Montgomery and others at HQ the VCs had a good presence at Summit. We started the week with a VC leaders meeting. I shared some information about the activities and growth during the first part of the year.   From January - September 2012: The number of VCs increased from 14 to 20 VC membership  grew from 55,200 to 80,100 Total attendance at VC meetings increased from 1,480 to 2,198 Been part of PASS Global Growth with language-based VC- including Chinese, Spanish and Portuguese. We also heard from some VC leaders and volunteers. Ryan Adams (Performance VC) shared his tips for successful marketing of VC events. Amy Lewis (Business Intelligence VC) described how the BI chapter has expanded to support PASS' global growth by finding volunteers to organize events at times that are convenient for people in Europe and Australia. Felipe Ferreira (Portuguese language VC) described the experience of building a user group first in Brazil, then expanding to work with Portuguese-speaking data professionals around the world. Virtual Chapter leaders and volunteers were in evidence throughout Summit, beginning with the Welcome Reception. For the past several years VCs have had an organized presence at this event, signing up new members and advertising their meetings. Many VC leaders also spent time at the Community Zone. This new addition to the Summit proved to be a vibrant spot were new members and volunteers could network with others and find out how to start a chapter or host a SQL Saturday. Women In Technology 2012 was the 10th WIT Luncheon to be held at Summit. I was honored to be asked to be on the panel to discuss the topic "Where Have We Been and Where are We Going?" The PASS community has come a long way in our understanding of issues facing women in tech and our support of women in the organization. It was great to hear from panelists Stefanie Higgins and Kevin Kline who were there at the beginning as well as Kendra Little and Jen Stirrup who are part of the progress being made by women in our community today. Bylaw Changes The Board spent a good deal of time in 2012 discussing how to move our global growth initiatives forward. An important component of this is a proposed change to how the Board is elected with some seats representing geographic regions. At the end of December we voted on these proposed bylaw changes which have been published for review. The member review and feedback is open until February 8. I encourage all members to review these changes and send any feedback to [email protected]  In addition to reading the bylaws, I recommend reading Bill Graziano's blog post on the subject. Business Analytics Conference At Summit we announced a new event: the PASS Business Analytics Conference. The inaugural event will be April 10-12, 2013 in Chicago. The world of data is changing rapidly. 
More and more businesses want to extract value and insight from their data. Data professionals who provide these insights or enable others to do so are in demand. The BA Conference offers expert content on predictive analytics, data exploration and visualization, content delivery strategies and more. By holding this new event PASS is participating in important discussions happening in our industry, offering our members more educational value and reaching out to data professionals who are not currently part of our organization. New Year, New Portfolio In addition to my work with the Virtual Chapters I am also now responsible for the 24 Hours of PASS portfolio. Since the first 24HOP of 2013 is scheduled for January 30 we started the transition of the portfolio work from Rob Farley to me right after Summit. Work immediately started to secure speakers for the January event. We have also been evaluating webinar platforms that can be used for 24HOP as well as the Virtual Chapters. Next Up 24 Hours of PASS: Business Analytics Edition will be held on January 30. I'll be there and will moderate one or two sessions. The 24HOP topics are a sneak peek into the type of content that will be offered at the Business Analytics Conference. I hope to see some of you there. The Virtual Chapters have hit the ground running in 2013; many of them have events scheduled. The Application Development VC is getting restarted  and a new Business Analytics VC will be starting soon. Check out the lineup and join the VCs that interest you. And watch the Events page and Connector for announcements of upcoming meetings. At the end of January I will be attending a Board meeting in Seattle, and February 23 I will be at SQL Saturday #177 in Silicon Valley.

    Read the article

  • Getting a handle on mobile data

    - by Eric Jensen
    written by Ashok Joshi
    The proliferation of mobile devices in the corporate world is both a blessing and a challenge. Mobile devices improve productivity and the velocity of business for end users; on the other hand, IT departments need to manage the corporate data and applications that run on these devices. Oracle Database Mobile Server (DMS for short) provides a simple and effective way to deal with the management challenge. DMS supports data synchronization between a central Oracle database server and data on mobile devices. It also provides authentication, encryption, and application and device management. Finally, DMS is a highly scalable solution that can be used to manage hundreds of thousands of devices.
    Here’s a simplified outline of how such a solution might work. Each device runs local sync and management agents that handle bidirectional data flow with an Oracle enterprise backend, run remote commands, and provide status to the management console. For example, mobile admins could monitor multiple networks of mobile devices, upgrade their software remotely, and even destroy the local database on a compromised device. DMS supports either Oracle Berkeley DB or SQLite for device-local storage, and runs on a wide variety of mobile platforms. The schema for the device-local database is pretty simple – it contains the name of the application that’s installed on the device as well as details such as product name, version number, time of last access, etc.
    Each mobile user has an account on the monitoring system. DMS supports authentication via the Oracle database authentication mechanisms or, alternatively, via an external authentication server such as Oracle Identity Management. DMS also provides the option of encrypting the data on disk as well as while it is being synchronized.
    Whenever a device connects with DMS, it sends the list of all local application changes to the server; the server updates the central repository with this information. Synchronization can be triggered on demand, whenever there’s a change on the device (e.g. a new application installed or an existing application removed), or via a rule-based schedule (e.g. every Saturday). Synchronization is very fast and efficient, since only the changes are propagated. This includes resume capability: should synchronization be interrupted for any reason, the next synchronization will resume where the previous one was interrupted. If the device should be lost or stolen, DMS has the capability to remove the applications and/or data from the device. This ability to control access to sensitive data and applications is critical in the corporate environment.
    The central repository also allows the IT manager to track the kinds of applications that mobile users use and recommend patches and upgrades, while still allowing the mobile user full control over what applications s/he downloads and uses on the device. This is useful since most devices are used for corporate as well as personal information. In certain restricted-use scenarios, the IT manager can also control whether a certain application can be installed on a mobile device. Should an unapproved application be installed, it can easily be removed the next time the device connects with the central server. Oracle Database Mobile Server provides a simple, effective, and highly secure and scalable solution for managing the data and applications for the mobile workforce.
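    The sync behavior described above - only the changes since the last acknowledged point are sent, and an interrupted run simply resumes - is easy to picture in code. The sketch below is purely illustrative and is not the Oracle Database Mobile Server API; the class and method names are invented for the example.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical illustration of change-based, resumable sync (NOT the DMS API).
    // Each local change gets a sequence number; only changes the server has not yet
    // acknowledged are sent, so a dropped connection just resumes where it stopped.
    class LocalChange
    {
        public long Sequence;     // monotonically increasing on the device
        public string Payload;    // e.g. "application installed: Expenses 2.1"
    }

    class SyncAgent
    {
        private readonly List<LocalChange> _changeLog = new List<LocalChange>();
        private long _lastAcknowledged;   // persisted on the device between sessions

        public void RecordChange(string payload)
        {
            _changeLog.Add(new LocalChange { Sequence = _changeLog.Count + 1, Payload = payload });
        }

        // sendToServer returns false if the connection drops; the next call to
        // Synchronize resumes from _lastAcknowledged instead of starting over.
        public void Synchronize(Func<LocalChange, bool> sendToServer)
        {
            foreach (var change in _changeLog.Where(c => c.Sequence > _lastAcknowledged)
                                             .OrderBy(c => c.Sequence))
            {
                if (!sendToServer(change))
                    return;                          // interrupted - resume here next time
                _lastAcknowledged = change.Sequence; // server acknowledged this change
            }
        }
    }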

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 2: Anonymous full-trust .NET consumer

    - by Elton Stoneman
    This is the second in the IPASBR series, see also: Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service
    Part 2 is nice and easy. From Part 1 we exposed our service over the Azure Service Bus Relay using the netTcpRelayBinding and verified we could set up our network to listen for relayed messages. Assuming we want to consume that service in .NET from an environment which is fairly unrestricted for us, but quite restricted for attackers, we can use netTcpRelay and shared secret authentication.
    Pattern applicability
    This is a good fit for scenarios where:
    - the consumer can run .NET in full trust
    - the environment does not restrict use of external DLLs
    - the runtime environment is secure enough to keep shared secrets
    - the service does not need to know who is consuming it
    - the service does not need to know who the end-user is
    So for example, the consumer is an ASP.NET website sitting in a cloud VM or Azure worker role, where we can keep the shared secret in web.config and we don't need to flow any identity through to the on-premise service. The service doesn't care who the consumer or end-user is - say it's a reference data service that provides a list of vehicle manufacturers. Provided you can authenticate with ACS and have access to the Service Bus endpoint, you can use the service and it doesn't care who you are. In this post, we’ll consume the service from Part 1 in ASP.NET using netTcpRelay. The code for Part 2 (+ Part 1) is on GitHub here: IPASBR Part 2
    Authenticating and authorizing with ACS
    In this scenario the consumer is a server in a controlled environment, so we can use a shared secret to authenticate with ACS, assuming that there is governance around the environment and the codebase which will prevent the identity being compromised. From the provider's side, we will create a dedicated service identity for this consumer, so we can lock down their permissions. The provider controls the identity, so the consumer's rights can be revoked. We'll add a new service identity for the namespace in ACS, just as we did for the serviceProvider identity in Part 1. I've named the identity fullTrustConsumer. We then need to add a rule to map the incoming identity claim to an outgoing authorization claim that allows the identity to send messages to Service Bus (see Part 1 for a walkthrough of creating Service Identities):
    Issuer: Access Control Service
    Input claim type: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier
    Input claim value: fullTrustConsumer
    Output claim type: net.windows.servicebus.action
    Output claim value: Send
    This sets up a service identity which can send messages into Service Bus, but cannot register itself as a listener, or manage the namespace.
    Adding a Service Reference
    The Part 2 sample client code is ready to go, but if you want to replicate the steps, you’re going to add a WSDL reference, add a reference to Microsoft.ServiceBus and sort out the ServiceModel config. In Part 1 we exposed metadata for our service, so we can browse to the WSDL locally at: http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc?wsdl If you add a Service Reference to that in a new project you'll get a confused config section with a customBinding, and a set of unrecognized policy assertions in the namespace http://schemas.microsoft.com/netservices/2009/05/servicebus/connect. If you NuGet the ASB package (“windowsazure.servicebus”) first and add the service reference - you'll get the same messy config.
Either way, the WSDL should have downloaded and you should have the proxy code generated. You can delete the customBinding entries and copy your config from the service's web.config (this is already done in the sample project in Sixeyed.Ipasbr.NetTcpClient), specifying details for the client:     <client>       <endpoint address="sb://sixeyed-ipasbr.servicebus.windows.net/net"                 behaviorConfiguration="SharedSecret"                 binding="netTcpRelayBinding"                 contract="FormatService.IFormatService" />     </client>     <behaviors>       <endpointBehaviors>         <behavior name="SharedSecret">           <transportClientEndpointBehavior credentialType="SharedSecret">             <clientCredentials>               <sharedSecret issuerName="fullTrustConsumer"                             issuerSecret="E3feJSMuyGGXksJi2g2bRY5/Bpd2ll5Eb+1FgQrXIqo="/>             </clientCredentials>           </transportClientEndpointBehavior>         </behavior>       </endpointBehaviors>     </behaviors>   The proxy is straight WCF territory, and the same client can run against Azure Service Bus through any relay binding, or directly to the local network service using any WCF binding - the contract is exactly the same. The code is simple, standard WCF stuff: using (var client = new FormatService.FormatServiceClient()) { outputString = client.ReverseString(inputString); } Running the sample First, update Solution Items\AzureConnectionDetails.xml with your service bus namespace, and your service identity credentials for the netTcpClient and the provider:   <!-- ACS credentials for the full trust consumer (Part2): -->   <netTcpClient identityName="fullTrustConsumer"                 symmetricKey="E3feJSMuyGGXksJi2g2bRY5/Bpd2ll5Eb+1FgQrXIqo="/> Then rebuild the solution and verify the unit tests work. If they’re green, your service is listening through Azure. Check out the client by navigating to http://localhost:53835/Sixeyed.Ipasbr.NetTcpClient. Enter a string and hit Go! - your string will be reversed by your on-premise service, routed through Azure: Using shared secret client credentials in this way means ACS is the identity provider for your service, and the claim which allows Send access to Service Bus is consumed by Service Bus. None of the authentication details make it through to your service, so your service is not aware who the consumer is (MSDN calls this "anonymous authentication").
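    As a side note, the same shared-secret credentials can also be supplied in code rather than in config. The snippet below is a minimal sketch rather than part of the sample solution: it assumes the Microsoft.ServiceBus assembly referenced above and uses a placeholder for the issuer secret; the generated FormatService proxy contract is the same one used by the config-based client.
    using System.ServiceModel;
    using Microsoft.ServiceBus;

    // Minimal sketch (not from the sample): configure the netTcpRelayBinding client
    // entirely in code instead of web.config.
    static class CodeOnlyClient
    {
        public static string Reverse(string inputString)
        {
            var address = ServiceBusEnvironment.CreateServiceUri("sb", "sixeyed-ipasbr", "net");
            var credentials = new TransportClientEndpointBehavior
            {
                TokenProvider = TokenProvider.CreateSharedSecretTokenProvider(
                    "fullTrustConsumer",   // ACS service identity created above
                    "<issuer-secret>")     // placeholder - use your own symmetric key
            };

            var factory = new ChannelFactory<FormatService.IFormatService>(
                new NetTcpRelayBinding(), new EndpointAddress(address));
            factory.Endpoint.Behaviors.Add(credentials);

            var channel = factory.CreateChannel();
            return channel.ReverseString(inputString); // relayed through Azure to the on-premise service
        }
    }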

    Read the article

  • Windows Phone 7 Review &ndash; Part 1: LG Quantum

    - by Nikita Polyakov
    Like many of my fellow geeks, I ran out and got a retail Windows Phone 7 on the first day. Just had to have it :) I’d had the developer prototypes in my hands on and off for the previous 3 months, so I finally wanted one I could call my own.
    I rushed the launch
    I checked out both the AT&T and T-Mobile offerings on day 1 and decided on a Samsung Focus. Great screen, super light and thin. If you don’t believe me that this phone can compete with the best of the non-Phone 7 offerings - get it in your hand and compare for yourself. I have to say that even though the on-screen keyboard on Windows Phone 7 is one of the best, the amount of text I write on my phone and my expectations of how long a short reply should take are very high. Also, the phone being so slick and sexy, it did not feel solid or confident in my hand or pocket.
    As the dust settled
    Enter the LG Quantum – now on AT&T and worldwide. First impression: softer plastic, and the back battery cover is solid metal - the entire phone feels solid and indestructible! The phone fits just right in my hand; it’s almost too good. It does not feel like it will crack in your jeans. I feel safe holding it, and I don’t feel like it would fly out of my hand if someone bumped into me while walking. I dropped (and even threw) the Focus a few times by accident, as its weight is negligible. I won’t even dream of lying: the first day adjusting to a 3.5" LCD screen from the Samsung’s blisteringly bright and poppy 4" AMOLED was hard. But the colors and sharpness are still very good. I actually find it almost easier on the eyes for day-to-day use. I had a chance to lay the phone down in a line-up with the prototypes and final versions of other phones that had LCD screens – LG makes HTC look like a budget LCD next to a high-end LCD in the home theatre department. I am consistently complimented by friends who have the HD7 or Surround on how much better my screen looks. The screen just looks like the most color-correct phone of the line-up. Next to it, even the Samsung looks oversaturated, though it can’t match the Samsung’s true blacks, compensating instead with true whites.
    Day to day usability
    What I also noticed as a huge difference is how much less I accidentally hit the soft keys at the bottom. That was a real pain on the Focus, since holding it in an average-size hand would already touch the controls at the bottom by accident. The QWERTY keyboard on this phone is great. It’s like LG’s mission was “make it solid!”. The keyboard has a very durable feel.
    LG’s secret wild card, though, is the DLNA support. If you haven’t seen an ad for it, you should. Imagine this – playing a song from your phone straight to your network-connected A/V receiver. Done. Pictures to the TV. Done. Video. Done. DLNA works with components that advertise support for it, as well as Windows 7, Xbox 360 and other consoles. I will write an extensive review of that experience in the near future. LG exclusive apps – from a panorama photo taker to a voice-to-text translator and even a look-n-type app that works like a backup inverse camera, there is quite a bit here that won’t be found on the other phones. I’ll review those in more detail in another segment.
    Conclusion
    So for a quick comparison: if you want a phone that is super thin, light and is the core reference for a Windows Phone 7 – the Samsung Focus it is. If you want a great phone with a solid, secure feel, a real keyboard and media features - the hands-down winner is the LG Quantum. You can pick up the LG Quantum at AT&T in the US, and worldwide as the LG Optimus 7Q.
    Final thought: I have not had a smartphone that I felt was a reliable, trusty primary communication device since the Samsung BlackJack II; this time the LG got the crown. [ Disclosure: the phone was provided to me free of charge. That has been the case for all of my phones for years, nothing new - I get them all. ]

    Read the article

  • Stack overflow error after creating a instance using 'new'

    - by Justin
    EDIT - The code looks strange here, so I suggest viewing the files directly in the link given. While working on my engine, I came across an issue that I'm unable to resolve. Hoping to fix this without any heavy modification, the code is below.
    void Block::DoCollision(GameObject* obj){
        obj->DoCollision(this);
    }
    That is where the stack overflow occurs. This application works perfectly fine until I create two instances of the class using the new keyword. If I only had 1 instance of the class, it worked fine.
    Block* a = new Block(0, 0, 0, 5);
    AddGameObject(a);
    a = new Block(30, 0, 0, 5);
    AddGameObject(a);
    Those parameters are just x, y, z and size. The code is checked beforehand. Only an object with a matching collision flag and collision type will trigger the DoCollision() function.
    ((*list1)->m_collisionFlag & (*list2)->m_type)
    Maybe my check is messed up though. I attached the files concerned here http://celestialcoding.com/index.php?topic=1465.msg9913;topicseen#new. You can download them without having to sign up. The main suspects, which I also pasted the code for below, are from GameManager.cpp:
    void GameManager::Update(float dt){
        GameList::iterator list1;
        for(list1=m_gameObjectList.begin(); list1 != m_gameObjectList.end(); ++list1){
            GameObject* temp = *list1;
            // Update logic and positions
            if((*list1)->m_active){
                (*list1)->Update(dt);
                // Clip((*list1)->m_position); // Modify for bounce effect
            }
            else continue;
            // Check for collisions
            if((*list1)->m_collisionFlag != GameObject::TYPE_NONE){
                GameList::iterator list2;
                for(list2=m_gameObjectList.begin(); list2 != m_gameObjectList.end(); ++list2){
                    if(!(*list2)->m_active) continue;
                    if(list1 == list2) continue;
                    if( (*list2)->m_active &&
                        ((*list1)->m_collisionFlag & (*list2)->m_type) &&
                        (*list1)->IsColliding(*list2)){
                        (*list1)->DoCollision((*list2));
                    }
                }
            }
            if(list1==m_gameObjectList.end()) break;
        }
        GameList::iterator end    = m_gameObjectList.end();
        GameList::iterator newEnd = remove_if(m_gameObjectList.begin(),m_gameObjectList.end(),RemoveNotActive);
        if(newEnd != end)
            m_gameObjectList.erase(newEnd,end);
    }

    void GameManager::LoadAllFiles(){
        LoadSkin(m_gameTextureList, "Models/Skybox/Images/Top.bmp", GetNextFreeID());
        LoadSkin(m_gameTextureList, "Models/Skybox/Images/Right.bmp", GetNextFreeID());
        LoadSkin(m_gameTextureList, "Models/Skybox/Images/Back.bmp", GetNextFreeID());
        LoadSkin(m_gameTextureList, "Models/Skybox/Images/Left.bmp", GetNextFreeID());
        LoadSkin(m_gameTextureList, "Models/Skybox/Images/Front.bmp", GetNextFreeID());
        LoadSkin(m_gameTextureList, "Models/Skybox/Images/Bottom.bmp", GetNextFreeID());
        LoadSkin(m_gameTextureList, "Terrain/Textures/Terrain1.bmp", GetNextFreeID());
        LoadSkin(m_gameTextureList, "Terrain/Textures/Terrain2.bmp", GetNextFreeID());
        LoadSkin(m_gameTextureList, "Terrain/Details/TerrainDetails.bmp", GetNextFreeID());
        LoadSkin(m_gameTextureList, "Terrain/Textures/Water1.bmp", GetNextFreeID());

        Block* a = new Block(0, 0, 0, 5);
        AddGameObject(a);
        a = new Block(30, 0, 0, 5);
        AddGameObject(a);
        Player* d = new Player(0, 100, 0);
        AddGameObject(d);
    }

    void Block::Draw(){
        glPushMatrix();
        glTranslatef(m_position.x(), m_position.y(), m_position.z());
        glRotatef(m_facingAngle, 0, 1, 0);
        glScalef(m_size, m_size, m_size);
        glBegin(GL_LINES);
        glColor3f(255, 255, 255);
        glVertex3f(m_boundingRect.left, m_boundingRect.top, m_position.z());
        glVertex3f(m_boundingRect.right, m_boundingRect.top, m_position.z());
        glVertex3f(m_boundingRect.left, m_boundingRect.bottom, m_position.z());
        glVertex3f(m_boundingRect.right, m_boundingRect.bottom, m_position.z());
        glVertex3f(m_boundingRect.left, m_boundingRect.top, m_position.z());
        glVertex3f(m_boundingRect.left, m_boundingRect.bottom, m_position.z());
        glVertex3f(m_boundingRect.right, m_boundingRect.top, m_position.z());
        glVertex3f(m_boundingRect.right, m_boundingRect.bottom, m_position.z());
        glEnd();
        // DrawBox(m_position.x(), m_position.y(), m_position.z(), m_size, m_size, m_size, 8);
        glPopMatrix();
    }

    void Block::DoCollision(GameObject* obj){
        GameObject* t = this;
        // I modified this to see for sure that it was causing the mistake.
        // obj->DoCollision(NULL);
        // Just revert it back to
        /*
        void Block::DoCollision(GameObject* obj){
            obj->DoCollision(this);
        }
        */
    }

    Read the article
