Search Results

Search found 3489 results on 140 pages for 'summary'.


  • Lessons Building KeyRef (a .NET developer learning Rails)

    - by Liam McLennan
    Just because I like to build things, and I like to learn, I have been working on a keyboard shortcut reference site. I am using this as an opportunity to improve my ruby and rails skills. The first few days were frustrating. Perhaps the learning curve of all the fun new toys was a bit excessive. Finally tonight things have really started to come together. I still don’t understand the rails built-in testing support but I will get there.

    Interesting Things I Learned Tonight

    RubyMine IDE
    Tonight I switched to RubyMine instead of my usual Notepad++. I suspect RubyMine is a powerful tool if you know how to use it – but I don’t. At the moment it gives me errors about some gems not being activated. This is another one of those things that I will get to. I have also noticed that the editor functions significantly differently to the editors I am used to. For example, in visual studio and notepad++ if you place the cursor at the start of a line and press left arrow the cursor is sent to the end of the previous line. In RubyMine nothing happens.

    Haml
    Haml is my favourite view engine. For my .NET work I have been using its non-union Mexican CLR equivalent – nHaml.

    Multiple CSS Classes
    To define a div with more than one css class haml lets you chain them together with a '.', such as:

        .span-6.search_result
          contents of the div go here

    Indent Consistency
    I also learnt tonight that both haml and nhaml complain if you are not consistent about indenting. As a consequence of the move from notepad++ to RubyMine my haml views ended up with some tab indenting and some space indenting. For the view to render all of the indents within a view must be consistent.

    Sorting Arrays
    I guessed that ruby would be able to sort an array alphabetically by a property of the elements so my first attempt was:

        Application.all.sort {|app| app.name}

    which does not work. You have to supply a comparer (much like .NET). The correct sort is:

        Application.all.sort {|a,b| a.name.downcase <=> b.name.downcase}

    MongoMapper Find by Id
    Since document databases are just fancy key-value stores it is essential to be able to easily search for a document by its id. This functionality is so intrinsic that it seems that the MongoMapper author did not bother to document it. To search by id simply pass the id to the find method:

        Application.find('4c19e8facfbfb01794000002')

    Rails And CoffeeScript
    I am a big fan of CoffeeScript so integrating it into this application is high on my priorities. My first thought was to copy Dr Nic’s strategy. Unfortunately, I did not get past step 1: install Node.js. I am doing my development on Windows and node is unix only. I looked around for a solution but eventually had to concede defeat… for now.

    Quicksearch
    The front page of the application I am building displays a list of applications. When the user types in the search box I want to reduce the list of applications to match their search. A quick googlebing turned up quicksearch, a jquery plugin. You simply tell quicksearch where to get its input (the search textbox) and the list of items to filter (the divs containing the names of applications) and it just works. Here is the code:

        $('#app_search').quicksearch('.search_result');

    Summary
    I have had a productive evening. The app now displays a list of applications, allows them to be sorted and links through to an application page when an application is selected. Next on the list is to display the set of keyboard shortcuts for an application.

    Read the article

  • Browser Specific Extensions of HttpClient

    - by imran_ku07
    Introduction:

    REpresentational State Transfer (REST) has had a great impact on service/API development because it offers a way to access a service without requiring any specific library, by embracing HTTP and its features. ASP.NET Web API makes it very easy to quickly build RESTful HTTP services. These HTTP services can be consumed by a variety of clients including browsers, devices, machines, etc. With .NET Framework 4.5, we can use the HttpClient class to consume/send/receive RESTful HTTP services (for .NET Framework 4.0, the HttpClient class is shipped as part of ASP.NET Web API). The HttpClient class provides a bunch of helper methods (for example, DeleteAsync, PostAsync, GetStringAsync, etc.) to consume an HTTP service very easily. ASP.NET Web API adds some more extension methods (for example, PutAsJsonAsync, PutAsXmlAsync, etc.) to the HttpClient class to further simplify its usage. In addition, HttpClient is also an ideal choice for writing integration tests for a RESTful HTTP service. Since a browser is a main client of any RESTful API, it is also important to test the HTTP service on a variety of browsers. A RESTful service embraces HTTP headers, and different browsers send different HTTP headers. So, I have created a package that adds overloads (with an additional Browser parameter) for almost all the helper methods of the HttpClient class. In this article, I will show you how to use this package.

    Description:

    Create/open your test project and install the ImranB.SystemNetHttp.HttpClientExtensions NuGet package. Then, add this using statement to your class:

        using ImranB.SystemNetHttp;

    Then, you can start using any HttpClient helper method which includes the additional Browser parameter. For example,

        var client = new HttpClient(myserver);
        var task = client.GetAsync("http://domain/myapi", Browser.Chrome);
        task.Wait();
        var response = task.Result;

    Here is the definition of Browser:

        public enum Browser
        {
            Firefox = 0,
            Chrome = 1,
            IE10 = 2,
            IE9 = 3,
            IE8 = 4,
            IE7 = 5,
            IE6 = 6,
            Safari = 7,
            Opera = 8,
            Maxthon = 9,
        }

    These extension methods make it very easy to write browser-specific integration tests. They also help an HTTP service consumer mimic the request-sending behavior of a browser. This package source is available on GitHub, so you can grab the source and add some additional behavior on top of these extensions.

    Summary:

    Testing a REST API is an important aspect of service development and today, testing with a browser is crucial. In this article, I showed how to write integration tests that mimic the browser's request-sending behavior. I also showed an example. Hopefully you will enjoy this article too.

    Read the article

  • SOA Community Newsletter May 2014

    - by JuergenKress
    Registration for the Fusion Middleware Summer Camps 2014 is open – register asap for one of our bootcamps August 4th – 8th 2014 in Lisbon. Please read the details and prerequisites carefully before you register. We expect that, like in the past, the conference will be booked out soon! If you can’t make it to Lisbon, attend our free on-demand SOA Suite 11c Bootcamp or the Managing the Complexity of IoT online trainings.

    With more than 5000 customers, SOA Suite Achieves Significant Customer Adoption and Industry Recognition. Thanks to all our SOA Specialized partners for making our joint SOA customers successful! As a summary of the Industrial SOA series we published the Podcast Show Notes: SOA and Cloud - Where's This Relationship Going?

    Make sure you use the Oracle Demo Systems for your customer presentations. The demo systems are hosted by Oracle and include complete scenarios based on the latest Middleware version, like the new B2B SOA Suite Demo System! For local presentations without fast internet use the SOA/BPM 11.1.1.7.1 Virtual Machine and Case Management Sample. At our SOA Community Workspace (SOA Community membership required) you can get new IoT presentations for Location Based Offers for Banking & Whitepaper and online Webcast & Utility presentation.

    In this newsletter you will find many articles about OSB: OSB 11g – A Hands-on Tutorial & Using Split-Joins in OSB Services for parallel processing of messages & OSB, Service Callouts and OQL & Working with Oracle Security Token Service. Thanks for sharing all the additional SOA articles within the community: How to configure Oracle SOA/BPM task auto release & Controlling BPEL process flow at runtime & Upgrading to Oracle SOA Suite 11g PS6 (11.1.1.7)? Do this. & BPEL and BPM's performance monitoring using DMS & SOA 11g - Create RESTful Service In Oracle SOA & Wrong timezone causes TopLink warning in SOA suite.

    Highlight of the BPM and ACM section is the IDC BPM vendor report. The new bundle patch including the ACM UI is now available. If you want to learn more about ACM, get the ACM training material at our SOA Community Workspace (SOA Community membership required). A great demo for your next BPM presentation is the BPM iPad app. It’s simple: Mobile BPM is Not An Option. It’s a Necessity. Thanks for sharing all the additional BPM articles within the community: BPM update adds Case Management Web Interface and REST APIs & Implementing deadline functionality with Oracle Adaptive Case Management & BPM 11g Timeout Heuristics & Humantask Assignment: Names and Expressions Assignment via Rules.

    In our last section, Architecture, it is all about design. Usability is a key factor for customer satisfaction, so it is worth spending some time reading the Simplified User Experience Design Patterns eBook. A great blueprint for your project! See you in Lisbon!

    To read the newsletter please visit www.tinyurl.com/soaNewsMay2014 (OPN account required)
    To become a member of the SOA Partner Community please register at http://www.oracle.com/goto/emea/soa (OPN account required)
    If you need support with your account please contact the Oracle Partner Business Center.

    Blog | Twitter | LinkedIn | Facebook | Wiki | Mix | Forum

    Technorati Tags: newsletter, SOA Community newsletter, SOA Community, Oracle, OPN, Jürgen Kress

    Read the article

  • 13 MORE Things from the Oracle Social Summit You Should Know

    - by Mike Stiles
    In our previous blog, we started giving those of you who couldn’t make it just a sampling of the valuable takeaways from the first annual Oracle Social Summit, held Nov 14 and 15 in Las Vegas. And while yes, 13 items is a pretty healthy sampling, we wanted to go the extra mile and give you 13 more, an indication of just how much great information came out of it. Follow the arrow, and come on in as if you were there with us.

    1. Weber Shandwick takes a 70/20/10 approach when advising clients how to allocate resources to paid social opportunities. 70% of spend should go toward paid opportunities the agency and client both know work, 20% should go toward paid social the agency knows works, and 10% should go toward experimentation. (Matt Dickman – Weber Shandwick)
    2. By 2017, the technically competent CMO will spend more on IT than the CIO. (Gartner Study)
    3. CIOs are focused on infrastructure. As the roles of the CMO and CIO continue coming together, those CIOs have to make a very conscious decision to get CMOs what they need.
    4. It’s now harder for brands to differentiate based on product. The advantage will go to the brands that are successful in garnering customer trust.
    5. More and more, enterprise software is going to start looking like the software consumers are used to seeing and using.
    6. You will see brands prioritizing mobile and dropping investments in www, HTML, POS systems, etc.
    7. The social graph has to be added to brands’ customer data for a more holistic view. Customers will give you the information you need if the reward is appropriate.
    8. Viacom did a study that showed viewers are most honest on social. Not so much on surveys or other feedback vehicles.
    9. How are you determining your influencers? Influence isn’t about reach. It’s about getting people to change behavior.
    10. A mix of skills is becoming critically important in a social staff. It should be a mixture of several disciplines, not just a bunch of “social experts.”
    11. If senior management isn’t engaged, the social team is forced into guessing what might be considered a “success” by the C-suite.
    12. Mobile customization will be getting big investments from brands in 2013. Brands need to provide shoppers utility, not just information. 75% will use mobile this holiday season to avoid in-store madness.
    13. Data becomes information, information becomes insight, and insight becomes actionable.

    The Oracle Social Summit brought together brands, agencies, Oracle social experts and industry thought leaders to take a serious look at where social stands today, and where it’s headed in the near future. Given the speed of social’s evolution, attending such events (or at least reading nifty summary blogs) is a good investment in making sure your enterprise isn’t falling gradually behind.

    Read the article

  • Session and Pop Up Window

    - by imran_ku07
    Introduction:

    Session is secure state management. It allows the user to store their information on one page and access it on another page, and it is powerful enough to store any type of object. Every user's session is identified by a cookie, which the client presents to the server. Unfortunately, when you open a new pop-up window this cookie is not posted to the server with the request, so the server is unable to identify the session data for the current user. In this article I will show you how to handle this situation.

    Description:

    While working in an application, I was getting an exception saying that Session is null when a pop-up window opens. After looking at the problem more closely I found that the ASP.NET_SessionId cookie for the parent page is not posted in the cookie header of the child (popup) window. Therefore, to make the session available in both the parent and the child (popup) window, you have to present the same cookie. For cookie sharing I passed the parent SessionID in the query string,

        window.open('http://abc.com/s.aspx?SASID=" & Session.SessionID & "','V');

    and in the Application_PostMapRequestHandler application event, check if the current request has no ASP.NET_SessionId cookie and the SASID query string is not null; if so, add this cookie to the Request before Session is acquired, so that the Session data remains the same for both the parent and the popup window.

        Private Sub Application_PostMapRequestHandler(ByVal sender As Object, ByVal e As EventArgs)
            If (Request.Cookies("ASP.NET_SessionId") Is Nothing) AndAlso (Request.QueryString("SASID") IsNot Nothing) Then
                Request.Cookies.Add(New HttpCookie("ASP.NET_SessionId", Request.QueryString("SASID")))
            End If
        End Sub

    Now you can access Session in your parent and child window without any problem.

    How this works:

    ASP.NET (both Web Forms and MVC) uses a cookie (ASP.NET_SessionId) to identify the user who is requesting. Cookies may be persistent (saved permanently in the user's cookies) or non-persistent (saved temporarily in browser memory). The ASP.NET_SessionId cookie is saved as non-persistent. This means that if the user closes the browser, the cookie is immediately removed, which is a sensible step that ensures security. That's why ASP.NET is unable to identify that the request is coming from the same user, and every browser instance gets its own ASP.NET_SessionId. To resolve this you need to present the same parent ASP.NET_SessionId cookie to the server when you open a popup window. You can confirm this situation by using tools like Firebug or Fiddler.

    Summary:

    Hopefully you enjoyed this article and saw how to work around the problem of sharing Session between different browser instances by sharing their Session identifier cookie.

    Read the article

  • Visual Studio 2012 and .NET 4.5 now Live!

    - by Tarun Arora
    Today was the formal launch event for Visual Studio 2012 and .NET 4.5, a state-of-the-art development solution for building modern applications that span connected devices and continuous services, from the client to the cloud. The event was streamed live from http://visualstudiolaunch.com. S. Somasegar, corporate vice president of the Developer Division, opened the keynote, Jason Zander dived deeper into how to leverage Visual Studio 2012 and .NET 4.5 to build modern applications, and Brian Harry covered all the awesome features in Visual Studio 2012 that improve application lifecycle management.

    I. Summary of the announcements made today

    1. Visual Studio Updates coming this fall – VS Update will better support agile teams, enable continuous quality, elevate SharePoint development with application lifecycle management (ALM) tools, and expand Visual Studio 2012 Windows development capabilities. It will be available as a community technology preview (CTP) later this month and in final release later this calendar year. A comprehensive list of what will be on offer can be found here.

    2. Visual Studio Express 2012 for Windows Desktop – Visual Studio Express 2012 for Windows Desktop brings the newest desktop development capabilities in Visual Studio 2012 to Express users, too. You would be excited to know that the Express SKU will support integration with TFS; among the other cool features I would like to mention are Unit Testing, Code Analysis, and dependency management with NuGet. A full list and download links can be found here.

    3. F# tools for Visual Studio Express 2012 for Web – This F# Tools release adds F# 3.0 components, such as the F# 3.0 compiler, F# Interactive, IDE support, and new F# features such as type providers and query expressions, to your Visual Studio 2012 Express for Web. More details and download links can be found here.

    4. Visual Studio TFS 2012 Power Tools – The TFS 2012 Power Tools bring the goodness of the Best Practice Analyzer, Process Template Editor, Storyboard Shapes, Team Explorer enhancements, TFPT command line, TFS Server Backups, etc. to your TFS 2012 installation. They can be downloaded right away from here.

    II. Road shows

    There will be many more community road shows this month, packed with hours of demos and discussions. The Visual Studio UK Team has just announced that there will be four UK launch events, face-to-face sessions including a product group speaker and partner sessions:

    Edinburgh, 1st October
    Manchester, 3rd October
    London, 4th October
    Reading, 5th October

    III. Get Started

    Download Visual Studio 2012 and the additional supporting software from here. The Visual Studio development team has put together over 60 videos to help you learn about the new Visual Studio 2012 capabilities in more detail, and all of these will be available for watching here.

    IV. What’s Next

    A lot more exciting stuff lined up…

    Windows 8 – Anticipated release: Oct. 26 (UPDATED 9/12)
    Windows Server 2012 – Released (UPDATED 9/4)
    System Center 2012 – Released (UPDATED 9/11)
    SQL Server 2012 – Released (UPDATED 4/2)
    Internet Explorer 10 – Anticipated release: Between Q3 2012 and early 2013 (UPDATED 5/3)
    Office 2013 – Anticipated release: Q4 2012 or Q1 2013 (UPDATED 9/12)
    Exchange 2013 – Anticipated release: Q4 2012 (UPDATED 7/26)
    Visual Studio 2012 – Released (UPDATED 9/12)
    Kinect for Windows – Released (UPDATED 9/4)
    Windows Phone "Tango" and 8 – "Tango": Released; Anticipated "Windows Phone 8" release: Q4 2012 (UPDATED 9/5)
    Dynamics ERP Online – Anticipated release: September or October 2012 (UPDATED 7/20)
    Office 365 – Anticipated update schedule: "Almost weekly" (UPDATED 9/12)
    Windows Azure – Rumored CTP release: Spring 2012 (UPDATED 9/7)
    SharePoint 2013 – Anticipated release: Q4 2012 (UPDATED 8/21)

    Enjoy

    Read the article

  • Can't boot in Ubuntu after windows upgrade

    - by VanceAnce
    After my latest update for Ubuntu and Windows XP, I got a Grub error on booting the next day. ls lists the following (without ()): sd0 sd1, msdos sd2 sd5 sd6. When I try to get into one with (sd0,xy)/ it doesn't detect a system, or gives an unknown file system error. I tried to boot to a live session with a Knoppix live CD and found out that all data exists. I also tried to recover with TestDisk and it finds all systems. Here is the TestDisk result:

        Start End Size in sectors
        1 * HPFS - NTFS 0 1 1 7079 254 63 113740137
        2 E extended LBA 7080 0 1 12161 254 63 81642330
        5 L HPFS - NTFS 7080 1 1 10266 254 63 51199092 [Schule]
        X extended 12031 30 1 12161 254 63 2102625
        6 L Linux Swap 12031 31 33 12161 254 63 2102530

    I have 1 WinXP Home, 1 Ubuntu (ext3 + swap) and 1 WinXP Professional, and then I wrote to the MBR with TestDisk, but I always get the same errors with Grub. What should I do? I need both XP and Ubuntu. Help me please.

    More info in the answers below – sorry for the confusing style, but I'm working on different live systems and browsers and have to reboot constantly. The boot info script output is also down below; maybe an advanced user can correct my failed posting. After I can solve my issue I will register here. Thanks, and please help me with these weird issues! As I still can't comment on my own answer or those on top, I again have to put it here as a separate answer (or even edit – maybe a browser failure using the live CDs, because this post I can edit). Here is the boot info script output – the result is the same as with TestDisk, but it looks worse, because it also doesn't detect my old Ubuntu, even though there wasn't an erase or overwrite process visible ending the last working session. Output:

        Boot Info Script 0.61 [1 April 2012]

        ============================= Boot Info Summary: ===============================

        => Syslinux MBR (4.04 and higher) is installed in the MBR of /dev/sda.

        sda1: __________________________________________
        File system: ntfs
        Boot sector type: Windows XP: NTFS
        Boot sector info: No errors found in the Boot Parameter Block.
        Operating System: Windows XP
        Boot files: /boot.ini /ntldr /NTDETECT.COM

        sda2: __________________________________________
        File system: Extended Partition
        Boot sector type: -
        Boot sector info:

        sda5: __________________________________________
        File system: ntfs
        Boot sector type: Windows XP: NTFS
        Boot sector info: According to the info in the boot sector, sda5 starts at sector 63.
        Operating System: Windows XP
        Boot files:

        sda6: __________________________________________
        File system: swap
        Boot sector type: -
        Boot sector info:

        ============================ Drive/Partition Info: =============================

        Drive: sda _______________________________________
        Disk /dev/sda: 100.0 GB, 100030242816 bytes
        255 heads, 63 sectors/track, 12161 cylinders, total 195371568 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes

        Partition Boot Start Sector End Sector # of Sectors Id System
        /dev/sda1 * 63 113,740,199 113,740,137 7 NTFS / exFAT / HPFS
        /dev/sda2 113,740,200 195,382,529 81,642,330 f W95 Extended (LBA)
        /dev/sda5 113,740,263 164,939,354 51,199,092 7 NTFS / exFAT / HPFS
        /dev/sda6 193,280,000 195,382,529 2,102,530 82 Linux swap / Solaris

        /dev/sda2 ends after the last sector of /dev/sda
        /dev/sda6 ends after the last sector of /dev/sda

        "blkid" output: ____________________________________
        Device UUID TYPE LABEL
        /dev/loop0 squashfs
        /dev/sda1 6596D86768011128 ntfs
        /dev/sda5 1300D3B7744EC141 ntfs Schule
        /dev/sda6 5b95f2a1-4145-43a5-ac51-41d7dd32b213 swap

        ================================ Mount points: =================================

        Device Mount_Point Type Options
        /dev/loop0 /rofs squashfs (ro,noatime)
        /dev/sr0 /cdrom iso9660 (ro,noatime)

        ================================ sda1/boot.ini: ================================

        [boot loader]
        timeout=30
        default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
        [operating systems]
        multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Home Edition" /fastdetect /NoExecute=OptOut
        multi(0)disk(0)rdisk(0)partition(2)\WINDOWS="Microsoft Windows XP Professional" /fastdetect
        [spybotsd]
        timeout.old=30

    The last part shows that I now use the Windows boot loader so that I can access at least one OS, but shouldn't I also get access to my Ubuntu partitions with live Linux CDs? Or do I have to boot with Grub to get to those files?

    Read the article

  • Columnstore Case Study #1: MSIT SONAR Aggregations

    - by aspiringgeek
    Preamble

    This is the first in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in this deck along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative.

    Why Columnstore?

    If we’re looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we’re asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc.—SQL Server has never had a good mechanism—until columnstore. Columnstore indexes were introduced in SQL Server 2012. However, they're still largely unknown. Some adoption blockers existed; yet columnstore was nonetheless a game changer for many apps. In SQL Server 2014, potential blockers have been largely removed & they're going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore & to document why columnstore is a compelling reason to upgrade to SQL Server 2014.

    App: MSIT SONAR Aggregations

    At MSIT, performance & configuration data is captured by SCOM. We archive much of the data in a partitioned data warehouse table in SQL Server 2012 for reporting via an application called SONAR. By definition, this is a primary use case for columnstore—report queries requiring aggregation over large numbers of rows. New data is refreshed each night by an automated table partitioning mechanism—a best practices scenario for columnstore.

    The Win

    Compared to performance using classic indexing, which resulted in the expected query plan selection including partition elimination, query performance with a SQL Server 2012 nonclustered columnstore index increased significantly. Logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more. Other than creating the columnstore index, no special modifications or tweaks to the app or database schema were necessary to achieve the performance improvements. Existing nonclustered indexes were rendered superfluous & were deleted, thus mitigating maintenance challenges such as defragging as well as conserving disk capacity.

    Details

    The table provides the raw data & summarizes the performance deltas.

                                        Logical Reads (8K pages)    CPU (ms)    Durn (ms)
        Columnstore                     160,323                     20,360      9,786
        Conventional Table & Indexes    9,053,423                   549,608     193,903
        Δ (ratio)                       x56                         x27         x20

    The charts provide additional perspective on this data. "Conventional vs. Columnstore Metrics" documents the raw data. Note on this linear display the magnitude of the conventional index performance vs. columnstore. The “Metrics (Δ)” chart expresses these values as a ratio.

    Summary

    For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the first of a series of reports on columnstore implementations, results from an initial implementation at MSIT in which logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more. Subsequent features in this series document performance enhancements that are even more significant.

    Read the article

  • Managing Operational Risk of Financial Services Processes – part 1/2

    - by Sanjeevio
    Financial institutions view compliance as a regulatory burden that incurs a high initial capital outlay and recurring costs. By its very nature, regulation takes a prescriptive, common-for-all approach to managing financial and non-financial risk. Needless to say, no longer will mere compliance with regulation lead to sustainable differentiation. Genuine competitive advantage will stem from being able to cope with the innovation demands of the present economic environment while meeting compliance goals with regulatory mandates in a faster and more cost-efficient manner.

    Let’s first take a look at the key factors that are limiting the pursuit of the above goal. Regulatory requirements are growing, driven in part by revisions to existing mandates in line with the cross-border, pan-geographic nature of financial value chains today, and more so by frequent systemic failures that have destabilized the financial markets and the global economy over the last decade. In addition to the increase in regulation, financial institutions are faced with pressures of regulatory overlap and regulatory conflict. Regulatory overlap arises primarily from two things: firstly, the blurring of boundaries between lines of business with complex organizational structures, and secondly, varying requirements of jurisdictional directives across geographic boundaries; e.g. a securities firm with operations in the US and EU would be subject to different “Know-Your-Customer” (KYC) requirements under the PATRIOT Act in the US and MiFID in the EU. Another consequence and concomitant of regulatory change is regulatory conflict, which again arises primarily from two things: firstly, diametrically opposite priorities of lines of business, and secondly, the tension that regulatory requirements create between shareholders' interest in tighter due diligence and customers' concerns about privacy. For instance, Customer Due Diligence (CDD) as per KYC requires eliciting detailed information from customers to prevent illegal activities such as money laundering, terrorist financing or identity theft. While new customers are still more likely to comply with such stringent background checks at the time of account opening, existing customers baulk at such practices as a breach of trust and privacy.

    As mentioned earlier, regulatory compliance addresses both financial and non-financial risks. Operational risk is a non-financial risk that stems from business execution and spans people, processes, systems and information. Operational risk arising from financial processes in particular transcends other sources of such risk. Let’s look at the factors underpinning the operational risk of financial processes. The rapid pace of innovation and geographic expansion of financial institutions has resulted in the proliferation and ad-hoc evolution of back-office, mid-office and front-office processes. This has had two serious implications for the operational risk of financial processes:

    · Inconsistency of processes across lines of business, customer channels and product/service offerings. This makes it harder for the risk function to enforce a standardized risk methodology, and in turn makes breaches harder to detect.
    · The proliferation of processes coupled with increasingly frequent change cycles has resulted in accidental breaches and increased vulnerability to regulatory inadequacies.

    In summary, regulatory growth (including overlap and conflict) coupled with process proliferation and inconsistency is driving process compliance complexity. In my next post I will address the implications of this process complexity on financial institutions and outline the role of BPM in lowering specific aspects of operational risk of financial processes.

    Read the article

  • jtreg update, December 2012

    - by jjg
    There is a new version of jtreg available. The primary new feature is support for tests that have been written for use with TestNG, the popular open source testing framework. TestNG is supported by a variety of tools and plugins, which means that it is now possible to develop tests for OpenJDK using those tools, while still retaining the ability to have the tests be part of the OpenJDK test suite, and run with a single test harness, jtreg. jtreg can be downloaded from the OpenJDK jtreg page: http://openjdk.java.net/jtreg.

    TestNG support

    jtreg supports both single TestNG tests, which can be freely intermixed with other types of jtreg tests, and groups of TestNG tests.

    A single TestNG test class can be compiled and run by providing a test description using the new action tag:

        @run testng classname

    The test will be executed by using org.testng.TestNG. No main method is required.

    A group of TestNG tests organized in a standard package hierarchy can also be compiled and run by jtreg. Any such group must be identified by specifying the root directory of the package hierarchy. You can either do this in the top level TEST.ROOT file, or in a TEST.properties file in any subdirectory enclosing the group of tests. In either case, add a line to the file of the form:

        TestNG.dirs = dir ...

    Directories beginning with '/' are evaluated relative to the root directory of the test suite; otherwise they are evaluated relative to the directory containing the declaring file. In particular, note that you can simply use "TestNG.dirs = ." in a TEST.properties file in the root directory of the test group's package hierarchy. No additional test descriptions are necessary, but test descriptions containing information tags, such as @bug, @summary, etc., are permitted. All the Java source files in the group will be compiled if necessary, before any of the tests in the group are run. The selected tests within the group will be run, one at a time, using org.testng.TestNG.

    Library classes

    The specification for the @library tag has been extended so that any paths beginning with '/' will be evaluated relative to the root directory of the test suite. In addition, some bugs have been fixed that prevented sharing the compiled versions of library classes between tests in different directories. Note: This has uncovered some issues in tests that use a combination of @build and @library tags, such that some tests may fail unexpectedly with ClassNotFoundException. The workaround for now is to ensure that library classes are listed before the test classes in any @build tags.

    To specify one or more library directories for a group of TestNG tests, add a line of the following form to the TEST.properties file in the root directory of the group's package hierarchy:

        lib.dirs = dir ...

    As before, directories beginning with '/' are evaluated relative to the root directory of the test suite; otherwise they are evaluated relative to the directory containing the declaring file. The libraries will be available to all classes in the group; you cannot specify different libraries for different tests within the group.

    Coming soon ...

    From this point on, jtreg development will be using the new jtreg repository in the OpenJDK code-tools project. There is a new email alias jtreg-dev at openjdk.java.net for discussions about jtreg development. The existing alias jtreg-use at openjdk.java.net will continue to be available for questions about using jtreg.

    For more information ...

    An updated version of the jtreg Tag Language Specification is being prepared, and will be made available when it is ready. In the meantime, you can find more information about the support for TestNG by executing the following command:

        $ jtreg -onlinehelp TestNG

    For more information on TestNG itself, visit testng.org.

    Read the article

  • C++ property system interface for game editors (reflection system)

    - by Cristopher Ismael Sosa Abarca
    I have designed a reusable game engine for a project, and its functionality is like this: it is a completely scripted game engine, but instead of the usual scripting languages such as Lua or Python it uses Runtime-Compiled C++, together with a modified version of Cistron (a component-based programming framework) adapted to be compatible with Runtime-Compiled C++, and so on. It uses the typical GameObject and Component classes of the component-based design pattern, and is serializable via JSON, BSON or binary, which is useful for selecting which objects will be loaded the next time.

    The main problem: we want to use our custom GameObjects and their components' properties in our level editor. Before, we used hardcoded functions to access GameObject base class virtual functions from the derived ones; if you want to modify a property specific to a derived class, you have to go into the code. The same situation happens with the derived classes of the Component class. In little projects there's no problem, but for larger projects this becomes tedious, lengthy and error-prone.

    I've researched a lot to find a solution, without luck. I tried the Ogitor property system (since our engine is Ogre-based), but we found it inappropriate for the component-based design; it is limited to the Ogre classes and can lead to performance overhead. We also tried some code we found on the Internet; we tested it and it worked a little, but we considered the macro and lambda abuse too horrible. Take a look (some code omitted):

        IWE_IMPLEMENT_PROP_BEGIN(CBaseEntity)
            IWE_PROP_LEVEL_BEGIN("Editor");
                IWE_PROP_INT_S("Id", "Internal id", m_nEntID, [](int n) {}, true);
            IWE_PROP_LEVEL_END();
            IWE_PROP_LEVEL_BEGIN("Entity");
                IWE_PROP_STRING_S("Mesh", "Mesh used for this entity", m_pModelName, [pInst](const std::string& sModelName) {
                    pInst->m_stackMemUndoType.push(ENT_MEM_MESH);
                    pInst->m_stackMemUndoStr.push(pInst->getModelName());
                    pInst->setModel(sModelName, false);
                    pInst->saveState();
                }, false);
                IWE_PROP_VECTOR3_S("Position", m_vecPosition, [pInst](float fX, float fY, float fZ) {
                    pInst->m_stackMemUndoType.push(ENT_MEM_POSITION);
                    pInst->m_stackMemUndoVec3.push(pInst->getPosition());
                    pInst->saveState();
                    pInst->m_vecPosition.Get()[0] = fX;
                    pInst->m_vecPosition.Get()[1] = fY;
                    pInst->m_vecPosition.Get()[2] = fZ;
                    pInst->setPosition(pInst->m_vecPosition);
                }, false);
                IWE_PROP_QUATERNION_S("Orientation (Quat)", m_quatOrientation, [pInst](float fW, float fX, float fY, float fZ) {
                    pInst->m_stackMemUndoType.push(ENT_MEM_ROTATE);
                    pInst->m_stackMemUndoQuat.push(pInst->getOrientation());
                    pInst->saveState();
                    pInst->m_quatOrientation.Get()[0] = fW;
                    pInst->m_quatOrientation.Get()[1] = fX;
                    pInst->m_quatOrientation.Get()[2] = fY;
                    pInst->m_quatOrientation.Get()[3] = fZ;
                    pInst->setOrientation(pInst->m_quatOrientation);
                }, false);
            IWE_PROP_LEVEL_END();
        IWE_IMPLEMENT_PROP_END()

    We are looking for a simpler way to do this without confusing the programmers (it will be released to the public). I have found ways to achieve this, but they are only available for common scripting languages such as Lua, or for editors using C#. It should also be portable: we can write "wrappers" for different GUI toolkits such as Qt or GTK. I'm also thinking of using Boost.Wave to get additional macro functionality without creating my own compiler. The properties designed for use in the editor are removed in the game, since the save file contains their data and loads it using a simple 'load' function, to reduce unnecessary code bloat; it may also be useful if some GameObject property needs to be hidden.

    In summary: is there a way to implement a reflection (property) system for a level editor based on properties from derived classes? We can use C++11 and Boost (restricted to Wave and PropertyTree only).
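    For illustration only, here is a minimal sketch of one direction such a system could take without macros: each component registers named getter/setter closures into a shared property bag that the editor enumerates generically. The names PropertyBag, Property and TransformComponent are hypothetical and are not part of Cistron, Ogitor or the engine described above; values travel as strings purely to keep the sketch short (a real editor would use a variant type).

        #include <functional>
        #include <map>
        #include <string>

        // A named pair of type-erased getter/setter closures.
        struct Property {
            std::function<std::string()> get;
            std::function<void(const std::string&)> set;
        };

        // Components register their editable properties here; the level editor
        // only ever sees this map, so no per-class editor code is required.
        class PropertyBag {
        public:
            void expose(const std::string& name, Property p) { props_[name] = std::move(p); }
            const std::map<std::string, Property>& properties() const { return props_; }
        private:
            std::map<std::string, Property> props_;
        };

        // Example component: one float exposed without macros or editor knowledge.
        class TransformComponent {
        public:
            void registerProperties(PropertyBag& bag) {
                bag.expose("PositionX", {
                    [this] { return std::to_string(x_); },          // getter
                    [this](const std::string& v) { x_ = std::stof(v); }  // setter
                });
            }
        private:
            float x_ = 0.0f;
        };

    The editor iterates properties() to build its UI, and game builds can simply skip the registration call, matching the requirement above that editor-only properties disappear at runtime.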

    Read the article

  • Package Version Numbers, why are they so important

    - by Chris W Beal
    One of the design goals of IPS has been to allow people to easily move forward to a supported "surface" of components. That is to say, when you # pkg update your system, you get the latest set of components which all work together, based on the packages you already have installed. During development, this has simply meant that you update to the latest "build" of the components. (During development, we build everything and publish everything every two weeks.)

    Now that we've released Solaris 11 using the IPS technologies, things are a bit more complicated. We need to be able to reflect all the types of Solaris release we are doing, for example Solaris development builds, Solaris update builds and "Support Repository Updates" (the replacement for patches), in the version scheme. So simply saying "151" as the build number isn't sufficient to articulate what you are running, or indeed what is available to update to.

    In my previous blog post I talked about creating your own package, and gave an example FMRI of

        pkg://tools/[email protected],0.5.11-0.0.0

    But it's probably more instructive to look at the FMRI of a Solaris package. The package "core-os" contains all the common utilities and daemons you need to use Solaris.

        $ pkg info core-os
        Name: system/core-os
        Summary: Core Solaris
        Description: Operating system core utilities, daemons, and configuration files.
        Category: System/Core
        State: Installed
        Publisher: solaris
        Version: 0.5.11
        Build Release: 5.11
        Branch: 0.175.0.0.0.2.1
        Packaging Date: Wed Oct 19 07:04:57 2011
        Size: 25.14 MB
        FMRI: pkg://solaris/system/core-os@0.5.11,5.11-0.175.0.0.0.2.1:20111019T070457Z

    The FMRI is what we will concentrate on here. In this package "solaris" is the publisher. You can use the pkg publisher command to see where the solaris publisher gets its bits from:

        $ pkg publisher
        PUBLISHER TYPE STATUS URI
        solaris origin online http://pkg.oracle.com/solaris/release/

    So we can see we get solaris packages from pkg.oracle.com. The package name is system/core-os. Names can be arbitrary length, just to allow you to group similar packages together. Now on to the interesting bit, the versions: everything after the @ is part of the version, and IPS will only upgrade to a "higher" version.

        core-os@0.5.11,5.11-0.175.0.0.0.2.1:20111019T070457Z

        core-os = Package Name
        0.5.11 = Component - in this case we're saying it's a SunOS 5.11 package
        , = separator
        5.11 = Built on version - to indicate what OS version you built the package on
        - = another separator
        0.175.0.0.0.2.1 = Branch Version
        : = yet another separator
        20111019T070457Z = Time stamp when the package was published

    So from that we can see the Branch Version seems rather complex. It is necessarily so, to allow us to describe the hierarchy of releases we do. In this example we see the following:

        0.175: known as the trunkid, incremented for each build of a new release of Solaris. During Solaris 11 this should not change.
        0: the Update release for Solaris. 0 for FCS, 1 for update 1, etc.
        0: the SRU for Solaris. 0 for FCS, 1 for SRU 1, etc.
        0: reserved for future use
        2: Build number of the SRU
        1: Nightly ID - only important for Solaris developers

    Take a hypothetical example:

        core-os@0.5.11,5.11-0.175.1.5.0.4.1:<something>

    This would be build 4 of SRU 5 of Update 1 of Solaris 11. This is actually documented in MOS article 1378134.1, which you can read if you have a support contract.

    Read the article

  • Best Practices - Core allocation

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (also called Logical Domains).

    Introduction

    SPARC T-series servers currently have up to 4 CPU sockets, each of which has up to 8 or (on SPARC T3) 16 CPU cores, while each CPU core has 8 threads, for a maximum of 512 dispatchable CPUs. The defining feature of Oracle VM Server for SPARC is that each domain is assigned CPU threads or cores for its exclusive use. This avoids the overhead of software-based time-slicing and emulation (or binary rewriting) of system state-changing privileged instructions used in traditional hypervisors. To create a domain, administrators specify either the number of CPU threads or cores that the domain will own, as well as its memory and I/O resources. When CPU resources are assigned at the individual thread level, the logical domains constraint manager attempts to assign threads from the same cores to a domain, and avoid "split core" situations where the same CPU core is used by multiple domains. Sometimes this is unavoidable, especially when domains are allocated and deallocated CPUs in small increments.

    Why split cores can matter

    Split core allocations can silently reduce performance because multiple domains with different address spaces and memory contents are sharing the core's Level 1 cache (L1$). This is called false cache sharing since even identical memory addresses from different domains must point to different locations in RAM. The effect of this is increased contention for the cache, and higher memory latency for each domain using that core. The degree of performance impact can be widely variable. For applications with very small memory working sets, and with I/O bound or low-CPU utilization workloads, it may not matter at all: all machines wait for work at the same speed. If the domains have substantial workloads, or are critical to performance, then this can have an important impact. This blog entry was inspired by a customer issue in which one CPU core was split among 3 domains, one of which was the control and service domain. The reported problem was increased I/O latency in guest domains, but the root cause might be higher latency servicing the I/O requests due to the control domain being slowed down.

    What to do about it

    Split core situations are easily avoided. In most cases the logical domain constraint manager will avoid them without any administrative action, but they can be entirely prevented by doing one of the following:

    Assign virtual CPUs in multiples of 8 - the number of threads per core. For example: ldm set-vcpu 8 mydomain or ldm add-vcpu 24 mydomain. Each domain will then be allocated on a core boundary.

    Use the whole core constraint when assigning CPU resources. This allocates CPUs in increments of entire cores instead of virtual CPU threads. The equivalent of the above commands would be ldm set-core 1 mydomain or ldm add-core 3 mydomain. Older syntax does the same thing by adding the -c flag to the add-vcpu, rm-vcpu and set-vcpu commands, but the new syntax is recommended. When whole core allocation is used, an attempt to add cores to a domain fails if there aren't enough completely empty cores to satisfy the request. See https://blogs.oracle.com/sharakan/entry/oracle_vm_server_for_sparc4 for an excellent article on this topic by Eric Sharakan.

    Don't obsess: if the workloads have minimal CPU requirements and don't need anywhere near a full CPU core, then don't worry about it. If you have low utilization workloads being consolidated from older machines onto a current T-series, then there's no need to worry about this or to assign an entire core to domains that will never use that much capacity.

    In any case, make sure the most important domains have their own CPU cores, in particular the control domain and any I/O or service domain, and of course any important guests.

    Summary

    Split core CPU allocation to domains can potentially have an impact on performance, but the logical domains manager tends to prevent this situation, and it can be completely and simply avoided by allocating virtual CPUs on core boundaries.

    Read the article

  • Detect Unicode Usage in SQL Column

    One optimization you can make to a SQL table that is overly large is to change from nvarchar (or nchar) to varchar (or char). Doing so will cut the size used by the data in half, from 2 bytes per character (+ 2 bytes of overhead for varchar) to only 1 byte per character. However, you will lose the ability to store Unicode characters, such as those used by many non-English alphabets. If the tables are storing user input, and your application is or might one day be used internationally, it's likely that using Unicode for your characters is a good thing. However, if instead the data is being generated by your application itself or your development team (such as lookup data), and you can be certain that Unicode character sets are not required, then switching such columns to varchar/char can be an easy improvement to make.

    Avoid Premature Optimization

    If you are working with a lookup table that has a small number of rows, and is only ever referenced in the application by its numeric ID column, then you won't see any benefit to using varchar vs. nvarchar. More generally, for small tables, you won't see any significant benefit. Thus, if you have a general policy in place to use nvarchar/nchar because it offers more flexibility, do not take this post as a recommendation to go against this policy anywhere you can. You really only want to act on measurable evidence that suggests that using Unicode is resulting in a problem, and that you won't lose anything by switching to varchar/char.

    Obviously the main reason to make this change is to reduce the amount of space required by each row. This in turn affects how many rows SQL Server can page through at a time, and can also impact index size and how much disk I/O is required to respond to queries, etc. If for example you have a table with 100 million records in it and this table has a column of type nchar(5), this column will use 5 * 2 = 10 bytes per row, and with 100M rows that works out to 10 bytes * 100 million = 1000 MBytes or 1GB. If it turns out that this column only ever stores ASCII characters, then changing it to char(5) would reduce this to 5 * 1 = 5 bytes per row, and only 500MB. Of course, if it turns out that it only ever stores the values true and false then you could go further and replace it with a bit data type which uses only 1 byte per row (100MB total).

    Detecting Whether Unicode Is In Use

    So by now you think that you have a problem and that it might be alleviated by switching some columns from nvarchar/nchar to varchar/char, but you're not sure whether you're currently using Unicode in these columns. By definition, you should only be thinking about this for a column that has a lot of rows in it, since the benefits just aren't there for a small table, so you can't just eyeball it and look for any non-ASCII characters. Instead, you need a query. It's actually very simple:

        SELECT DISTINCT(CategoryName)
        FROM Categories
        WHERE CategoryName <> CONVERT(varchar, CategoryName)

    Summary

    Thanks to Gregg Stark for the tip.

    Read the article

  • World Record Siebel PSPP Benchmark on SPARC T4 Servers

    - by Brian
    Oracle's SPARC T4 servers set a new World Record for Oracle's Siebel Platform Sizing and Performance Program (PSPP) benchmark suite. The result used Oracle's Siebel Customer Relationship Management (CRM) Industry Applications Release 8.1.1.4 and Oracle Database 11g Release 2 running Oracle Solaris on three SPARC T4-2 and two SPARC T4-1 servers.

    The SPARC T4 servers running the Siebel PSPP 8.1.1.4 workload, which includes Siebel Call Center and Order Management System, demonstrate impressive throughput performance of the SPARC T4 processor by achieving 29,000 users. This is the first Siebel PSPP 8.1.1.4 benchmark supporting 29,000 concurrent users with a rate of 239,748 Business Transactions/hour. The benchmark demonstrates vertical and horizontal scalability of Siebel CRM Release 8.1.1.4 on SPARC T4 servers.

    Performance Landscape

        Systems:
            1 x SPARC T4-1 (1 x SPARC T4 2.85 GHz) – Web
            3 x SPARC T4-2 (2 x SPARC T4 2.85 GHz) – App/Gateway
            1 x SPARC T4-1 (1 x SPARC T4 2.85 GHz) – DB
        Txn/hr: 239,748
        Users: 29,000
        Response Times (sec): Call Center 0.165, Order Management 0.925
        Call Center + Order Management Transactions: 197,128 + 42,620
        Call Center + Order Management Users: 20,300 + 8,700

    Configuration Summary

    Web Server Configuration:
        1 x SPARC T4-1 server, 1 x SPARC T4 processor, 2.85 GHz
        128 GB memory
        Oracle Solaris 10 8/11
        iPlanet Web Server 7

    Application Server Configuration:
        3 x SPARC T4-2 servers, each with 2 x SPARC T4 processor, 2.85 GHz
        256 GB memory
        3 x 300 GB SAS internal disks
        Oracle Solaris 10 8/11
        Siebel CRM 8.1.1.5 SIA

    Database Server Configuration:
        1 x SPARC T4-1 server, 1 x SPARC T4 processor, 2.85 GHz
        128 GB memory
        Oracle Solaris 11 11/11
        Oracle Database 11g Release 2 (11.2.0.2)

    Storage Configuration:
        1 x Sun Storage F5100 Flash Array
        80 x 24 GB flash modules

    Benchmark Description

    The Siebel 8.1 PSPP benchmark includes Call Center and Order Management:

    Siebel Financial Services Call Center – Provides the most complete solution for sales and service, allowing customer service and telesales representatives to provide superior customer support, improve customer loyalty, and increase revenues through cross-selling and up-selling. High-level description of the use cases tested: Incoming Call Creates Opportunity, Quote and Order, and Incoming Call Creates Service Request. Three complex business transactions are executed simultaneously for a specific number of concurrent users. The ratios of these 3 scenarios were 30%, 40% and 30% respectively, together totaling 70% of all transactions simulated in this benchmark. Between each user operation and the next one, the think time averaged approximately 10, 13, and 35 seconds respectively.

    Siebel Order Management – Oracle's Siebel Order Management allows employees such as salespeople and call center agents to create and manage quotes and orders through their entire life cycle. Siebel Order Management can be tightly integrated with back-office applications allowing users to perform tasks such as checking credit, confirming availability, and monitoring the fulfillment process. High-level description of the use cases tested: Order & Order Items Creation and Order Updates. Two complex Order Management transactions were executed simultaneously for a specific number of concurrent users, concurrently with the three Call Center scenarios above. The ratio of these 2 scenarios was 50% each, together totaling 30% of all transactions simulated in this benchmark. Between each user operation and the next one, the think time averaged approximately 20 and 67 seconds respectively.

    Key Points and Best Practices

    No processor cores or cache were activated or deactivated on the SPARC T-Series systems to achieve special benchmark effects.

    See Also

    Siebel White Papers
    SPARC T4-1 Server oracle.com OTN
    SPARC T4-2 Server oracle.com OTN
    Siebel CRM oracle.com OTN
    Oracle Solaris oracle.com OTN
    Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN

    Disclosure Statement

    Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 30 September 2012.

    Read the article

  • Backup SQL Database Federation

    - by Herve Roggero
    One of the amazing features of Windows Azure SQL Database is the ability to create federations in order to scale your cloud databases. However, until now there were very few options available to back up federated databases. In this post I will show you how Enzo Cloud Backup can help you backup and restore your federated database easily. You can restore federated databases in SQL Database, or even on SQL Server (as regular databases).

    Generally speaking, you will need to perform the following steps to backup and restore the federations of a SQL Database:

    Backup the federation root
    Backup the federation members
    Restore the federation root
    Restore the federation members

    These actions can be automated using the built-in scheduler of Enzo Cloud Backup, the command-line utilities, or the .NET Cloud Backup API provided, giving you complete control over how you want to perform your backup and restore operations.

    Backing up federations

    Let’s look at the tool to backup federations. You can explore your existing federations by using the Enzo Cloud Backup application as shown below. As you can see, the federation root and the various federations available are shown in separate tabs for convenience. You would first need to backup the federation root (unless you intend to restore the federation member on a local SQL Server database and you don’t need what’s in the federation root). The steps are similar to those for backing up a federation member, so let’s proceed to backing up a federation member.

    You can click on a specific federation member to view the database details by clicking on the tab that contains your federation member. You can see the size currently consumed and a summary of its content at the bottom of the screen. If you right-click on a specific range, you can choose to backup the federation member. This brings up a window with the details of the federation member already filled out for you, including the value of the member that is used to select the federation member. Notice that the list of Federations includes “Federation Root”, which is what you need to select to backup the federation root (you can also do that directly from the root database tab). Once you provide at least one backup destination, you can begin the backup operation. From this window, you can also schedule this operation as a job and perform this operation entirely in the cloud. You can also “filter” the connection, so that only the specific member value is backed up (this will backup all the global tables, and only the records for which the distribution value is the one specified). You can repeat this operation for every federation member in your federation.

    Restoring Federations

    Once backed up, you can restore your federations easily. Select the backup device using the tool, then select Restore. The following window will appear. From here you can create a new root database. You can also view the backup properties, showing you exactly which federations will be created. Under the Federations tab, you can select how the federations will be created. I chose to recreate the federations and let the tool perform all the SPLIT operations necessary to recreate the same number of federation members. Other options include creating the first federation member only, or not creating the federation members at all. Once the root database has been restored and the federation members have been created, you can restore the federation members you previously backed up. The screen below shows you how to restore a backup of a federation member into a specific federation member (the details of the federation member are provided to make it easier to identify).

    Conclusion

    This post gave you an overview of how to backup and restore federation roots and federation members. The backup operations can be set up once, then scheduled daily.

    Read the article

  • Give a session on C++ AMP – here is how

    - by Daniel Moth
    Ever since presenting on C++ AMP at the AMD Fusion conference in June, then the Gamefest conference in August, and the BUILD conference in September, I've had numerous requests about my material from folks that want to re-deliver the same session. The C++ AMP session I put together has evolved over the 3 presentations to its final form that I used at BUILD, so that is the one I recommend you base yours on. Please get the slides and the recording from channel9 (I'll refer to slide numbers below). This is how I've been presenting the C++ AMP session:
    Context
    (slide 3, 04:18-08:18) Start with a demo, on my dual-GPU machine. I've been using the N-Body sample (for VS 11 Developer Preview). (slide 4) Use an nvidia slide that has additional examples of performance improvements that customers enjoy with heterogeneous computing. (slide 5) Talk a bit about the differences today between CPU and GPU hardware, leading to the fact that these will continue to co-exist and that GPUs are great for data parallel algorithms, but not much else today. One is a jack of all trades and the other is a number cruncher. (slide 6) Use the APU example from amd, as one indication that the hardware space is still in motion, emphasizing that the C++ AMP solution is a data parallel API, not a GPU API. It has a future-proof design for hardware we have yet to see. (slide 7) Provide more meta-data, as blogged about when I first introduced C++ AMP.
    Code
    (slides 9-11) Introduce C++ AMP coding with a simplistic array-addition algorithm – the slides speak for themselves. (slides 12-13) index<N>, extent<N>, and grid<N>. (slides 14-16) array<T,N>, array_view<T,N> and a comparison between them. (slide 17) parallel_for_each. (slides 18, 21) restrict. (slides 19-20) actual restrictions of restrict(direct3d) – the slides speak for themselves. (slide 22) Bring it all together with a matrix multiplication example. (slides 23-24) accelerator and accelerator_view. (slides 26-29) Introduce tiling, including tiled matrix multiplication [tiling probably deserves a whole session instead of 6 minutes!].
    IDE
    (slides 34, 37) Briefly touch on the Concurrency Visualizer. It supports GPU profiling, but enhancements specific to C++ AMP we hope will come in the Beta timeframe, which is when I'll be spending more time talking about it. (slides 35-36, 51:54-59:16) Demonstrate the GPU debugging experience in VS 11.
    Summary
    (slide 39) Re-iterate some of the points of slide 7, and add the point that the C++ AMP spec will be open for other compiler vendors to implement, even on other platforms (in fact, Microsoft is actively working on that). (slide 40) Links to content – see slide – including where all your questions should go: http://social.msdn.microsoft.com/Forums/en/parallelcppnative/threads.
    "But I don't have time for a full blown session, I only need 2 (or just 1, or 3) C++ AMP slides to use in my session on related topic X."
    If all you want is a small number of slides, you can take some from the session above and customize them. But because I am so nice, I have created some slides for you, including talking points in the notes section. Download them here.
    Comments about this post by Daniel Moth welcome at the original blog.
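    If you want one concrete code slide to pair with the array-addition walkthrough of slides 9-11, here is a minimal sketch of that kind of example. It is an illustrative sketch, not the actual slide content, and it uses the restrict(amp) spelling from the released compiler; the VS 11 Developer Preview builds referred to above spelled it restrict(direct3d).

    #include <amp.h>
    #include <iostream>
    #include <vector>
    using namespace concurrency;

    int main()
    {
        const int n = 1024;
        std::vector<int> va(n), vb(n), vc(n);
        for (int i = 0; i < n; ++i) { va[i] = i; vb[i] = i * 10; }

        // array_view wraps host memory; the runtime copies data to and from
        // the accelerator as needed.
        array_view<const int, 1> a(n, va);
        array_view<const int, 1> b(n, vb);
        array_view<int, 1> c(n, vc);
        c.discard_data();  // the initial contents of vc are not needed on the accelerator

        // restrict(amp) limits the lambda to the subset of C++ that can run on the GPU.
        parallel_for_each(c.extent, [=](index<1> idx) restrict(amp)
        {
            c[idx] = a[idx] + b[idx];
        });

        c.synchronize();   // copy the results back into vc
        std::cout << vc[0] << " " << vc[n - 1] << std::endl;  // prints 0 and 11253
    }

    The matrix multiplication (slide 22) and tiling (slides 26-29) material builds on the same parallel_for_each pattern, so a snippet like this also works as a stepping stone if you only have a slide or two to spare.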

    Read the article

  • UPOS RFIDScanner data format

    - by Robert Snyder
    A lot of the work that I do currently is based in the OPOS/UPOS world. My company has a device that can read 13.56 MHz tags (RFID), smart cards, and mag stripe cards. Until somewhat recently I had only been working with RFID for one very specific scenario: reading UltraLight C and DESFire cards. These cards were all set up very specifically so that I could take the data read from them and force it into an MSR track 2 format.
    The past couple of weeks, however, I have been working on reading RFID credit cards (since I have a Visa card I've been using mine) and smart card credit cards. (The Visa card I have has both.) In learning how to communicate with smart cards and reading the ISO 7816 and EMVCo documents I became a little more familiar with how the info is stored. But now I have a question regarding UPOS.
    The RFID data on my Visa is stored (and read) very similarly to how the data is stored and read from the smart card on my Visa. Cool. Well, in the UPOS spec for SmartCardRW, the ReadData method returns a byte array. That's fine; I can just return all that data and then parse it as my heart desires. The RFID, though, has a LinkedList of Tags. That makes sense in terms of my Visa card (it reminds me of a question I have regarding SmartCard, but that is for another question), but what about ULC and DESFire, or for that matter any Mifare card? Pages, files, and purses don't exactly fit the Tag profile.
    For instance, let's say I read pages 4-12 on my ULC card. Each page I read is 4 bytes long. Does this mean I have 9 tags in my LinkedList? Is my Tag id the page number? And how does that translate to DESFire? If I open application 123456 and read file 1 and file 2, do I have 2 tags? And if so, what is my tag id? At least with my Visa I think I have to use the Tag id (e.g. 5F24 for my expiration date) with a value of {0x15, 0x10, 0x31}. Part of me says yes... that makes sense. Another part of me says, "well, if that is the case then why doesn't SmartCardRW have Tags?"
    So that is my question: how do I format the data from those different types of media? Or is that the job of my Control Object (the application)? If so, how does it know? The only protocols I have are:
    // Summary:
    //     Enumerates the available predefined RFID tag protocols the device supports.
    [Flags]
    public enum RFIDProtocols
    {
        EpcClass0 = 1,
        RFIDSdt0Plus = 2,
        EpcClass1 = 4,
        EpcClass1Gen2 = 8,
        EpcClass2 = 16,
        Iso14443A = 4096,
        Iso14443B = 8192,
        Iso15693 = 12288,
        Iso180006B = 16384,
        Other = 16777216,
        All = 1073741824,
    }
    If I use that, well, all of the cards that I have are Iso14443A. I use the ATQA and the SAK to know what type of card I really have. There is no RFID property that lets me specify that. So I'm lost.
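    For what it's worth, the Visa data I'm describing is BER-TLV encoded (ISO/IEC 7816-4, EMV Book 3), so one way to think about the Tag id / value question is to look at how the raw bytes are laid out. The sketch below is only an illustration of that layout, written in C++ for brevity; ParseTlv and TlvElement are made-up names, and none of this is part of the UPOS RFIDScanner or SmartCardRW contract.

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // One decoded BER-TLV element: the tag (e.g. 0x5F24) and its value bytes.
    struct TlvElement {
        uint32_t tag;
        std::vector<uint8_t> value;
    };

    // Minimal flat BER-TLV walk (does not recurse into constructed tags).
    // Returns false if the buffer is truncated or malformed.
    bool ParseTlv(const uint8_t* data, size_t len, std::vector<TlvElement>& out)
    {
        size_t i = 0;
        while (i < len) {
            // Tag: one byte, plus continuation bytes when the low 5 bits are all 1.
            uint32_t tag = data[i++];
            if ((tag & 0x1F) == 0x1F) {
                do {
                    if (i >= len) return false;
                    tag = (tag << 8) | data[i];
                } while (data[i++] & 0x80);
            }

            // Length: short form (< 0x80) or long form (0x81/0x82 plus length bytes).
            if (i >= len) return false;
            size_t valueLen = data[i++];
            if (valueLen & 0x80) {
                size_t numBytes = valueLen & 0x7F;
                if (numBytes > 2 || i + numBytes > len) return false;
                valueLen = 0;
                for (size_t k = 0; k < numBytes; ++k)
                    valueLen = (valueLen << 8) | data[i++];
            }

            // Value: the next valueLen bytes.
            if (i + valueLen > len) return false;
            out.push_back({ tag, std::vector<uint8_t>(data + i, data + i + valueLen) });
            i += valueLen;
        }
        return true;
    }

    int main()
    {
        // Tag 5F24 (application expiration date) with the value 15 10 31.
        const uint8_t sample[] = { 0x5F, 0x24, 0x03, 0x15, 0x10, 0x31 };
        std::vector<TlvElement> elements;
        if (ParseTlv(sample, sizeof(sample), elements))
            std::printf("tag %04X, %zu value bytes\n", (unsigned)elements[0].tag, elements[0].value.size());
        return 0;
    }

    Pages on a ULC card and files on a DESFire card carry no such self-describing structure, which is part of why forcing them into Tag id / value pairs feels arbitrary.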

    Read the article

  • Where Twitter Stands Heading Into 2013

    - by Mike Stiles
    As Twitter continued throughout 2012 to be a stage on which global politics and culture played themselves out, the company itself underwent some adjustments that give us a good indication of what users and brands can expect from the platform in 2013.
    The power of the network did anything but fade. Celebrities continued to use it to connect one-on-one. Even the Pope signed on this year. It continued to fuel revolutions. It played an exponentially large factor in this US Presidential election. And around the world, the freedom to speak was challenged as users were fired, sued, sometimes even jailed for their tweets. Expect more of the same in 2013, as Twitter has entrenched itself, for individuals, causes and brands, as the fastest, easiest, most efficient way to message the masses so some measure of impact can come from it. It's changed everything, and it's not finished.
    These fun facts reveal the position of strength with which Twitter enters 2013: it now generates a billion tweets every 2.5 days; it has 500 million+ users; the average Twitter user has tweeted 307 times; 32% of everyone using the Internet uses Twitter; it's expected to bring in $540 million in ad revenue by 2014; and 11 new accounts are created every second. High-level Executive Summary: people love it, people use it, and they're going to keep loving and using it.
    Whether or not outside developers love it is a different matter. 2012 marked a shift from welcoming the third-party support that played at least some role in Twitter being so warmly embraced, to discouraging anything that replicates what Twitter can do itself…or plans to do itself. It's not the open playground it once was. Now Twitter must spend 2013 proving it can innovate in-house and keep us just as entranced.
    Likewise, Twitter is distancing itself from Facebook. Images from the #1 platform's Instagram don't work on Twitter anymore, and Twitter is rolling out its own photo filter product. Where the two have lived in a "plenty of room for everybody" symbiosis up to now, 2013 could see the giants ramping up a full-on rivalry.
    Twitter is exhibiting a deliberate strategy. Updates have centered on more visually appealing search results, and on making finding and sharing content easier. Deals have been cut with some media entities so their content stands out. CEO Dick Costolo has said tweets aren't the attraction, they're what leads you to content. Twitter aims to be a key distributor of media and info. Add the addition of former News Corp. President Peter Chernin to the board, and their hashtag landing page experience for events, and their media behemoth ambitions get pretty clear. There are challenges ahead and Costolo has also laid those out: entry into China, figuring out how to have Twitter deliver both comprehensive and relevant, targeted experiences, and the visualization of big data.
    What does this mean for corporations? They can expect a more media-rich evolution and a growing emphasis on imagery. They can expect more opportunities to create great media content and leverage Twitter for its distribution. And they can expect new ways to surface in searches.
    Are brands diving in? 56% of customer tweets to companies get completely and totally ignored. Ugh. A study Twitter recently conducted with Compete shows people who see tweets from retailers are more likely to buy a product. And the more retailer tweets they see, the more likely they are to purchase on the retail site. As more of those tweets point to engaging media content from the brand, the results should get even better.
    Twitter appears ready for 2013. Enterprise brands have some work to do.
    @mikestiles Photo: Stuart Miles, freedigitalphotos.net

    Read the article

  • Fresh Ubuntu Install - Grub not loading

    - by Ryan Sharp
    System:
    Ubuntu 12.04 64-bit
    Windows 7 SP1
    Samsung 64GB SSD - OS'
    Samsung 1TB HDD - Games, /Home, Swap
    WD 300'ishGB HDD - Backup
    Okay, so I'm very frustrated, so please excuse me if I miss anything out, as my head is clouded by anger and impatience, etc. I'll try my best, though. First of all, I'll explain how I got to my predicament. I finally got my new SSD. I first installed Windows, which completed without a hitch. Afterwards, I tried to install Ubuntu, which failed several times due to problems irrelevant to this question, but I mention this to explain my frustrations, sorry. Anyway, I finally installed Ubuntu. However, I chose the bootloader to be installed on the same partition where I was installing the Ubuntu root partition, as that was what I believed to be the best choice. My thinking was that it was supposed to go on the same partition and on the SSD, which is my OS drive, though judging by my problem, that was apparently wrong. So I tried to fix it by checking guides and following their directions, but I seem to have messed it up even more. Here is what I get after I use the fdisk -l command (I also added explanations for what I used each partition for):

    Disk /dev/sda: 64.0 GB, 64023257088 bytes
    255 heads, 63 sectors/track, 7783 cylinders, total 125045424 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x324971d1

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *        2048      206847      102400    7  HPFS/NTFS/exFAT
    /dev/sda2          208896    48957439    24374272    7  HPFS/NTFS/exFAT
    /dev/sda3        48959486   125044735    38042625    5  Extended
    /dev/sda5        48959488   125044735    38042624   83  Linux

    sda1 --/ Windows Recovery
    sda2 --/ Windows 7
    sda3/5 --/ Ubuntu root [ / ]

    Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xc0ee6a69

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1      1024208894  1953523711   464657409    5  Extended
    /dev/sdb3   *        2048  1024206847   512102400    7  HPFS/NTFS/exFAT
    /dev/sdb5      1024208896  1939851263   457821184   83  Linux
    /dev/sdb6      1939853312  1953523711     6835200   82  Linux swap / Solaris

    sdb3 --/ Partition for Steam games, etc.
    sdb5 --/ Ubuntu Home [ /home ]
    sdb6 --/ Ubuntu Swap

    Partition table entries are not in disk order

    Disk /dev/sdc: 320.1 GB, 320072933376 bytes
    255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x292eee23

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdc1            2048   625141759   312569856    7  HPFS/NTFS/exFAT

    sdc1 --/ Generic backup

    I also used a boot info script that other users suggested, so that I can give more details on my partitions and also on where Grub is located:

    ============================= Boot Info Summary: ===============================

    => Grub2 (v1.99) is installed in the MBR of /dev/sda and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks for (,msdos5)/boot/grub on this drive.
    => Grub2 (v1.99) is installed in the MBR of /dev/sdb and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks for (,msdos5)/boot/grub on this drive.
    => Windows is installed in the MBR of /dev/sdc.

    Now that is weird... Why would Grub2 be installed on both my SSD and HDD? Even weirder, why is Windows on the MBR of my backup hard drive? Nothing I did should have done that... Anyway, here is the entire output from that script: PASTEBIN
    So, to summarize what I need:
    How can I fix my setup so grub loads on startup?
    How can I clean my partitions to remove unnecessary grubs?
    What did I do wrong, so that I don't do something so daft again?
    Thank you so much for reading, and I hope you can help me. I've been trying to have a successful setup since Friday, and I'm almost at the point that I'm really tempted to throw my computer out the window due to my frustration.

    Read the article

  • Migrating a WebLogic Domain from a 32 to a 64 bit JVM/Architecture

    - by adejuanc
    Currently, a 32-bit OS presents a physical constraint when trying to allocate memory. By design, a 32-bit OS will not be able to allocate more than about 3 GB per process on a Linux environment, or 2 GB on a Windows environment (not considering Physical Address Extension (PAE) implementations, which can increase the maximum allocatable memory). When this limit is reached, a migration to a 64-bit architecture is recommended. Below are the steps on how to install a 64-bit WebLogic Server version, along with a 64-bit JVM, and then how to migrate the domain to this new environment.
    WebLogic Server Version: 10.3.4.0
    OS: Linux adejuan-cl.cl.oracle.com 2.6.18-194.el5 #1 SMP Mon Jan 01 22:10:29 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
    JVM: java version "1.6.0_22" Java(TM) SE Runtime Environment (build 1.6.0_22-b04) Oracle JRockit(R) (build R28.1.1-13-139783-1.6.0_22-20101206-0136-linux-x86_64, compiled mode)
    To create the migration template:
    1. Execute $<WLS_HOME>/wlserver_10.3/common/bin/config_builder.sh
    2. Select Create Domain Template.
    3. Select the domain to migrate.
    4. Enter the name of the template and other info. Click next.
    5. Using the Add button, select the libraries you want to add under the lib folder. Copy the jdbc data sources you would like to have under the config/jdbc folder. If you want to have a log4j configuration, copy log4j.xml under the domain_root folder.
    6. Select the database for the domain and click next.
    7. Enter the Admin Server name and port numbers. Click next.
    8. Enter username and password. Click next. Select No if you don't want to add users/groups/roles.
    9. Click next until you reach the Create button. This will create a jar file that is needed for the next action plan.
    To install and migrate the domain to a 64-bit architecture:
    1. Install the JRockit R28.1.1.13 JVM on the environment (uncompress the zip folder in a known location).
    2. Go to JRockit_Home/bin and execute $ java -d64 -jar wls1034_generic.jar
    3. When prompted, select the JRockit JVM in the browse menu, and wait for the installation to finish.
    4. Go to <WLS_HOME>/wlserver_10.3/common/bin and execute $ ./config.sh
    5. Select Create a new WebLogic domain.
    6. Select Base this domain on an existing template and select browse.
    7. Select the jar file created in the previous action plan.
    8. Select the name of the domain. Maintain in most cases.
    9. Select user name and password. Maintain in most cases.
    10. Select JRockit SDK 1.6.0_22 and click next.
    11. Confirm all JMS/JDBC/security configuration.
    12. On Select Optional Configuration, reconfigure if necessary.
    13. Check the Configuration Summary for all domain configuration.
    14. Click Create to finish the domain import.
    Note: When installing a 64-bit WLS, it is always necessary to install the 64-bit JVM first and then run the generic installer with the -d64 option. If this is not done, the installation will default to the 32-bit version. For more information, please refer to: http://download.oracle.com/docs/cd/E13188_01/jrockit/docs50/dev_faq.html#pae

    Read the article

  • Part 6: Extensions vs. Modifications

    - by volker.eckardt(at)oracle.com
    Customizations = Extensions + Modifications
    In EBS terminology, a customization can be an extension or a modification. Extension means that you mainly create your own code from scratch. You may utilize existing views, packages and java classes, but your code is unique. Modifications are quite different, because here you take existing code and change or enhance certain areas to achieve a slightly different behavior. The important point is that it doesn't matter whether you place your code in the same place or in another place – it is a modification. It is also not relevant whether you leave the original code enabled or not! Why? Here is the answer: if the original code piece you have taken as your base gets patched, you need to copy the source again and apply all your changes once more. If you don't do that, you may get different results or write different data compared to the standard – this causes a high risk!
    Here are some guidelines on how to reduce the risk:
    Invest a bit longer when searching for objects to select data from. Choose a view rather than a table. In case Oracle development changes the underlying tables, the view will be more stable and is therefore a better choice.
    Prefer public APIs over internal APIs. Same background as before: although the internal structure might change, the public API is more stable.
    Use personalization and substitution rather than modification. Spend more time to check whether the requirement can be covered with such techniques.
    Build a project code library, and avoid colleagues creating similar functionality multiple times. Otherwise you have to review lots of similar code to determine the need for correction.
    Use the technique of "flagged files". Flagged files are a way to mark a standard deployment file. If you run the patch analysis (within Application Manager), the analysis result will list flagged standard files in case they will be patched. If you maintain a cross reference to your own CEMLIs, you can easily determine which CEMLIs have to be reviewed.
    Implement a code review process. This can be done by utilizing team internal or external persons. If you implement such a team internal process, your team members will come up with suggestions on how to improve the code quality by themselves.
    Review heavy customizations regularly, to identify options to reduce complexity; say, perform this every six months. You may not spend days on such a review, but a high level cross check of whether the customization can be reduced is suggested.
    De-install customizations which are no longer required. Define a process for this. Add a section to the technical documentation on how to uninstall and what the possible implications are.
    Maintain a cross reference between CEMLIs, and between CEMLIs, EBS modules and business processes. Keep this list up to date! Share this list!
    By following these guidelines, you are able to improve product stability. Although we might not be able to avoid modifications completely, we can give much better advice to developers and to our test team.
    Summary: Extensions and Modifications have to be handled differently during their lifecycle. Modifications carry a much higher risk and should therefore be reviewed more frequently. Good cross references allow you to give clear advice for the testing activities.

    Read the article

  • Controlling soft errors and false alarms in SSIS

    - by Jim Giercyk
    If you are like me, you dread the 3AM wake-up call. I would say that the majority of the pages I get are false alarms. The alerts that require action often require me to open an SSIS package, see where the trouble is and try to identify the offending data. That can be very time-consuming and can take quite a chunk out of my beauty sleep. For those reasons, I have developed a simple error handling scenario for SSIS which allows me to rest a little easier. Let me first say, this is a high level discussion; getting into the nuts and bolts of creating each shape is outside the scope of this document, but if you have an average understanding of SSIS, you should have no problem following along.
    In the Data Flow above you can see that there is a caution triangle. For the purpose of this discussion I am creating a truncation error to demonstrate the process, so this is to be expected. The first thing we need to do is to redirect the error output. Double-clicking on the Query shape presents us with the properties window for the input. Simply set the columns that you want to redirect to Redirect Row in the dropdown box and hit Apply. Without going into a dissertation on error handling, I will just note that you can decide which errors you want to redirect on Error and on Truncation. Therefore, to override this process for a column or condition, simply do not redirect that column or condition.
    The next thing we want to do is to add some information about the error; specifically, the name of the package which encountered the error and which step in the package wrote the record to the error table. REMEMBER: If you redirect the error output, your package will not fail, so you will not know where the error record was created without some additional information.
    I added 3 columns to my error record: Severity, Package Name and Step Name. Severity is just a free-form column that you can use to note whether an error is fatal, whether the package is part of a test job and should be ignored, etc. Package Name and Step Name are system variables.
    In my package I have created a truncation situation, where the firstname column is 50 characters in the input, but only 4 characters in the output. Some records will pass without truncation, others will be sent to the error output. However, the package will not fail. We can see that of the 14 input rows, 8 were redirected to the error table.
    This information can be used by another step or another scheduled process or triggered to determine whether an error should be sent. It can also be used as a historical record of the errors that are encountered over time. There are other system variables that might make more sense in your infrastructure, so try different things. Date and time seem like something you would want in your output, for example.
    In summary, we have redirected the error output from an input, added derived columns with information about the errors, and inserted the information and the offending data into an error table. The error table information can be used by another step or process to determine, based on the error information, what level alert must be sent. This will eliminate false alarms, and give you a head start when a genuine error occurs.

    Read the article

  • OpenWorld: Our (Road) Maps are Looking Good!

    - by Tony Berk
    Wow, only one (or two) days down at Oracle OpenWorld! Are you on overload yet? I'm still trying to figure out how to be in 3 sessions at the same time... I guess everyone needs to prioritize! There was a lot to see in Monday's sessions, especially some great forward-looking roadmap sessions. In case you aren't here or you decided to go to other sessions, this is my quick summary of what I could capture from a couple of the roadmaps:
    In the Fusion CRM Strategy and Roadmap session, Anthony Lye provided an overview of the Fusion CRM strategy including the key design principles of 3 E's: Easy, Effective and Efficient. After an overview of how Oracle has deployed Fusion CRM internally to 25,000 users worldwide, Anthony discussed the features coming in the next release, the releases in the next 12 months and beyond. I can't detail too much since you haven't read Oracle's Safe Harbor statement, but check out Fusion Tap and look for new features and added functionality for sales prediction, marketing, social and integration with a number of the key Customer Experience products.
    In the Oracle RightNow CX Cloud Service Vision and Roadmap session, Chris Hamilton presented the focus areas for the RightNow product. As a result of the large increase in development resources after the acquisition, the RightNow CX team is planning a lot of enhancements to the functionality, infrastructure and integrations. As a key piece of the Oracle Customer Experience (CX) strategy, RightNow will be integrated with Oracle Social Network, Oracle Commerce (ATG and Endeca), Oracle Knowledge, Oracle Policy Automation and, of course, further integration with Fusion Sales and Marketing. Look forward to seeing more on the Virtual Assistant, Smart Interaction Hub and Mobility.
    In addition to the roadmaps, I was looking forward to hearing from Oracle CRM customers. So, I sat in on two great Siebel customer panels:
    The Maximizing User Adoption Rates for Siebel Sales and Siebel Partner Relationship Management panel consisted of speakers from CSL Behring, McKesson and Intuit. It was great to get an overview of implementations for both B2B and B2C companies. It was great hearing that all of these companies have more than 1,000 sales users (Intuit has 4,000) and how the 360 degree view of the customer in Siebel is helping these customers improve their customers' experience (CX). They are all great examples of centralized implementations which have standardized processes across the globe and across business units.
    Waste Management, Farmers Insurance and the US Citizenship & Immigration Services presented in the Driving Great Customer Experiences with Siebel Service Applications session. Talk about serving large customer bases! Is it possible that Farmers with only 10 million households is the smallest of these 3? All of them provided great examples of how they are improving the customer experience (CX), including 60-70% improvements in efficiency or reducing the number of applications the customer service reps (CSRs) need to use from 10 to 1 (Waste Management) and context-aware call transfers to avoid the caller explaining their issue 3 times (USCIS).
    So that's my wrap up of only 4 sessions from Monday. In between sessions, I stopped by the Oracle DEMOgrounds and CRM Pavilion to visit with a group of great partners and see the products and partner integrations in action. Don't miss a recap of Mark Hurd's Keynote. I can't believe there were another 40+ sessions covering CRM, Fusion, Cloud, etc. that I missed today!
Anyone else see any great sessions?

    Read the article

  • Columnstore Case Study #1: MSIT SONAR Aggregations

    - by aspiringgeek
    Preamble
    This is the first in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in this deck along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative.
    Why Columnstore?
    If we're looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we're asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc.—SQL Server has never had a good mechanism—until columnstore. Columnstore indexes were introduced in SQL Server 2012. However, they're still largely unknown. Some adoption blockers existed; yet columnstore was nonetheless a game changer for many apps. In SQL Server 2014, potential blockers have been largely removed & they're going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore & documenting columnstore is a compelling reason to upgrade to SQL Server 2014.
    App: MSIT SONAR Aggregations
    At MSIT, performance & configuration data is captured by SCOM. We archive much of the data in a partitioned data warehouse table in SQL Server 2012 for reporting via an application called SONAR. By definition, this is a primary use case for columnstore—report queries requiring aggregation over large numbers of rows. New data is refreshed each night by an automated table partitioning mechanism—a best practices scenario for columnstore.
    The Win
    Compared to performance using classic indexing, which resulted in the expected query plan selection including partition elimination, query performance with a SQL Server 2012 nonclustered columnstore index increased significantly. Logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more. Other than creating the columnstore index, no special modifications or tweaks to the app or database schema were necessary to achieve the performance improvements. Existing nonclustered indexes were rendered superfluous & were deleted, thus mitigating maintenance challenges such as defragging as well as conserving disk capacity.
    Details
    The table provides the raw data & summarizes the performance deltas.

                                    Logical Reads (8K pages)    CPU (ms)    Durn (ms)
    Columnstore                                      160,323      20,360        9,786
    Conventional Table & Indexes                   9,053,423     549,608      193,903
    Δ                                                    x56         x27          x20

    The charts provide additional perspective on this data. "Conventional vs. Columnstore Metrics" documents the raw data. Note on this linear display the magnitude of the conventional index performance vs. columnstore. The "Metrics (Δ)" chart expresses these values as a ratio.
    Summary
    For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the first of a series of reports on columnstore implementations, results from an initial implementation at MSIT in which logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more. Subsequent posts in this series document performance enhancements that are even more significant.

    Read the article

< Previous Page | 100 101 102 103 104 105 106 107 108 109 110 111  | Next Page >