Search Results

Search found 7019 results on 281 pages for 'adaptive systems'.


  • BI&EPM in Focus June 2014

    - by Mike.Hallett(at)Oracle-BI&EPM
    Applications Webcast Centre – A Library of Discussion and Research for Best Practice:
    - Achieving Reliable Planning, Budgeting and Forecasting
    - Talent Analytics and Big Data – Is HR Ready for the Challenge?
    - Enterprise Data – The Cost of Non-Quality

    Customers:
    - Josephine Niemiec from ADP talks about Oracle Hyperion Workforce Planning at Collaborate 2014 (link) Video
    - Chris Nelms from Ameren talks about Oracle BI Spend and Procurement Analytics at Collaborate 2014 (link) Video
    - Leggett & Platt Leverages Oracle Hyperion EPM and Demantra (link) Video
    - Pella Corporation Accelerates Close Cycle by Cutting Time for Financial Consolidation from Three Days to Less Than One Day (link)
    - Secretaría General de Administración de Justicia en España Enhances Citizen Services with Near-Real-Time Business Intelligence Gleaned from 500 Databases (link)
    - Bellco Credit Union Speeds Budget Development by 30%, Gains Insight into Specific Branch and Financial Product Profitability (link) Video
    - QDQ media Speeds up Financial Reporting by 24x, Gains Business Agility, and Integrates Seamlessly into Corporate Accounting System (link)
    - Westfield Group Maximizes Shopping Mall Revenue, Shortens Year-End Financial Consolidation by 75% (link)
    - IL&FS Transportation Networks Shortens Financial Consolidation and Reporting Cycle by Eight Days, Gains In-Depth Insight into Business Performance (link)
    - Angel Trains Optimizes Rail Operations for Purchasing, Sourcing, and Project Management to Meet Challenges of Evolving Rail Industry (link)

    Enterprise Performance Management:
    - June 11, Oracle Utrecht, NL: Morning session: Explore Planning and Budgeting in the Cloud (link)
    - June 12, London: PureApps Presents: Best Practice Financial Consolidation and Reporting Workshop (link)
    - July 3, Cologne: Oracle Hyperion Business Analytics Roundtable (link)
    - Blog: What's Your Tax Strategy? Automate the Operational Transfer Pricing Process (link)
    - YouTube Video: Automate Tax Reporting with Oracle Hyperion Tax Provision (link)
    - YouTube Video: Introducing Oracle Hyperion Planning's Tablet Optimized Interface (link)
    - OracleEPMWebcasts @ YouTube (link)

    Partner webcasts:
    - Wednesday, 4 June, 5.00 GMT – Case Study: Lessons Learned from Edgewater Ranzal's Internal Implementation of Oracle Planning & Budgeting Cloud Service (PBCS) – Learn more and register here!
    - Thursday, 5 June, 4.00 GMT – Achieving Accountable Care Using Oracle Technology – Learn more and register here!
    - Tuesday, 17 June, 4.00 GMT – Optimizing Performance for Oracle EPM Systems – Learn more and register here!

    Oracle University:
    - Blog: The Coolest Features Available with Oracle Hyperion 11.1.2.3 – Training from OU to help you to best use them (link)

    Support:
    - Proactive Support: EPM Hyperion Planning 11.1.2.3.500 Using RMI Service [Blog]
    - Proactive Support: Planning and Budgeting Cloud Service Videos (link)
    - Planning and Budgeting Cloud Service (PBCS) 11.1.2.3.410 Patch Bundle [Doc ID 1670981.1]
    - Hyperion Analytic Provider Services 11.1.2.2.106 Patch Set Update [Doc ID 1667350.1]
    - Hyperion Essbase 11.1.2.2.106 Patch Set Update [Doc ID 1667346.1]
    - Hyperion Essbase Administration Services 11.1.2.2.106 Patch Set Update [Doc ID 1667348.1]
    - Hyperion Essbase Studio 11.1.2.2.106 Patch Set Update [Doc ID 1667329.1]
    - Hyperion Smart View 11.1.2.5.210 Patch Set Update [Doc ID 1669427.1]
    - Using HPCM, HSF or DRM Communities (link)

    Business Intelligence:
    - June 12, Birmingham, UK: Oracle Big Data at Work – Use Cases and Architecture (link)
    - June 17, London: Oracle at Cloud & Big Data World Forums (link)
    - June 17, Partner Webcast: Transform your Planning Capabilities with Peloton's CloudAccelerator for Oracle PBCS (link)
    - June 19, London: Oracle at the Whitehall Media Big Data Analytics Conference and Exhibition (link)
    - June 19, London: Partner Event – Agile BI Conference by Peak Indicators (link)
    - June 25, Munich: Oracle Special Day at the TDWI 2014 Conference (link)
    - July 15, London: Oracle Endeca Information Discovery Workshop (link)
    - July 16, London: BI Applications Workshop – Financial Analytics & Procurement Analytics (link)
    - July 17, London: BI Applications Workshop – HR Analytics (link)
    - Milan, Italy: L'Osservatorio Big Data Analytics & Business Intelligence with Politecnico di Milano (link)
    - OBIA 11.1.1.8.1 – Now Available [Blog]
    - What's New in OBIA 11.1.1.8.1 [Blog]
    - BI Blog: A closer look at Oracle BI Applications 11.1.1.8.1 release (link)
    - Press Release: BI Applications Deliver Greater Insight into Talent and Procurement (link)
    - Support Blog: OBIA 11.1.1.8.1 Upgrade Guide & Documentation (link)
    - YouTube Video: Glenn Hoormann of Ludus talks to us about Oracle Business Intelligence and ERP at Collaborate 2014 (link)
    - YouTube Video: Performance Architects talks about key BI and Mobile trends, including Endeca at Collaborate 2014 (link)
    - Big Data Blog: 3 Keys for Using Big Data Effectively for Enhanced Customer Experience (link)
    - Big Data Lite Demo VM 3.0 Now Available on OTN
    - BI Blog: Data Relationship Governance – Workflow in a Bottle (link)
    - MDM Blog: Register for Product Data Management Weekly Cloudcasts (link)
    - MDM Blog: Improve your Customer Experience with High Quality Information (link)
    - MDM Blog: Big Data Challenges & Considerations (link)
    - Oracle University: Oracle BI Applications 11g: Implementation using ODI (link)
    - Proactive Support: Monthly Index [Blog]
    - My Oracle Support: Partner Accreditation for Business Analytics Support [Blog]
    - OBIEE 11g Test-to-Production (T2P) / Clone Procedures Guide [Blog]

    Read the article

  • How to Increase the VMWare Boot Screen Delay

    - by Trevor Bekolay
    If you’ve wanted to try out a bootable CD or USB flash drive in a virtual machine environment, you’ve probably noticed that VMWare’s offerings make it difficult to change the boot device. We’ll show you how to change these options, either for one boot or permanently for a particular virtual machine.

    Even experienced users of VMWare Player or Workstation may not recognize the screen above – it’s the virtual machine’s BIOS, which in most cases flashes by in the blink of an eye. If you want to boot up the virtual machine with a CD or USB key instead of the hard drive, then you’ll need more than an eye’s blink to press Escape and bring up the Boot Menu. Fortunately, there is a way to introduce a boot delay that isn’t exposed in VMWare’s graphical interface – you have to edit the virtual machine’s settings file (a .vmx file) manually.

    Editing the Virtual Machine’s .vmx

    Find the .vmx file that contains the settings for your virtual machine. You chose a location for this when you created the virtual machine – in Windows, the default location is a folder called My Virtual Machines in your My Documents folder. In VMWare Workstation, the location of the .vmx file is listed on the virtual machine’s tab. If in doubt, search your hard drive for .vmx files. If you don’t want to use Windows’ default search, an awesome utility that locates files instantly is Everything.

    Open the .vmx file with any text editor. Somewhere in this file, enter the following line, save the file, then close the text editor:

        bios.bootdelay = 20000

    This will introduce a 20 second delay when the virtual machine loads up, giving you plenty of time to press the Escape button and access the boot menu. The number in this line is just a value in milliseconds, so for a five second boot delay, enter 5000, and so on.

    Change Boot Options Temporarily

    Now, when you boot up your virtual machine, you’ll have plenty of time to enter one of the keystrokes listed at the bottom of the BIOS screen on boot-up. Press Escape to bring up the Boot Menu. This allows you to select a different device to boot from – like a CD drive. Your selection will be forgotten the next time you boot up this virtual machine.

    Change Boot Options Permanently

    When the BIOS screen comes up, press F2 to enter the BIOS Setup menu. Switch to the Boot tab, and change the ordering of the items by pressing the “+” key to move items up on the list, and the “-” key to move items down the list. We’ve switched the order so that the CD-ROM Drive boots first. Once you make this change permanent, you may want to re-edit the .vmx file to remove the boot delay.

    Boot from a USB Flash Drive

    One thing that is noticeably missing from the list of boot options is a USB device. VMWare’s BIOS just does not allow this, but we can get around that limitation using the PLoP Boot Manager that we’ve previously written about. And as a bonus, since everything is virtual anyway, there’s no need to actually burn PLoP to a CD.

    Open the settings for the virtual machine you want to boot with a USB drive. Click on Add… at the bottom of the settings screen, and select CD/DVD Drive. Click Next. Click the Use ISO Image radio button, and click Next. Browse to find plpbt.iso or plpbtnoemul.iso from the PLoP zip file. Ensure that Connect at power on is checked, and then click Finish. Click OK on the main Virtual Machine Settings page. Now, if you use the steps above to boot using that CD/DVD drive, PLoP will load, allowing you to boot from a USB drive!
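    If you find yourself toggling this delay often, the edit is easy to script. Here is a minimal sketch in Python; the .vmx path is hypothetical, so point it at the settings file you located in the steps above:

        from pathlib import Path

        # Hypothetical path -- substitute the .vmx file found in the steps above.
        vmx_file = Path.home() / "Documents" / "My Virtual Machines" / "MyVM" / "MyVM.vmx"

        delay_ms = 20000  # milliseconds: 20000 = 20 seconds, 5000 = 5 seconds
        with vmx_file.open("a") as f:
            f.write(f"\nbios.bootdelay = {delay_ms}\n")

    Note that this simply appends the line; if the setting already exists in the file, edit the existing entry instead.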
    Conclusion

    We’re big fans of VMWare Player and Workstation, as they let us try out a ton of geeky things without worrying about harming our systems. By introducing a boot delay, we can add bootable CDs and USB drives to the list of geeky things we can try out.

    Download PLoP Boot Manager

    Read the article

  • Independence Day for Software Components – Loosening Coupling by Reducing Connascence

    - by Brian Schroer
    Today is Independence Day in the USA, which got me thinking about loosely-coupled “independent” software components. I was reminded of a video I bookmarked quite a while ago of Jim Weirich’s “Grand Unified Theory of Software Design” talk at MountainWest RubyConf 2009. I finally watched that video this morning. I highly recommend it.

    In the video, Jim talks about software connascence. The dictionary definition of connascence (con-NAY-sense) is: 1. The common birth of two or more at the same time. 2. That which is born or produced with another. 3. The act of growing together.

    The brief Wikipedia page about Connascent Software Components says that: Two software components are connascent if a change in one would require the other to be modified in order to maintain the overall correctness of the system. Connascence is a way to characterize and reason about certain types of complexity in software systems.

    The term was introduced to the software world in Meilir Page-Jones’ 1996 book “What Every Programmer Should Know About Object-Oriented Design”. The middle third of that book is the author’s proposed graphical notation for describing OO designs. UML became the standard about a year later, so a revised version of the book was published in 1999 as “Fundamentals of Object-Oriented Design in UML”. Weirich says that the third part of the book, in which Page-Jones introduces the concept of connascence, “is worth the price of the entire book”. (The price of the entire book, by the way, is not much – I just bought a used copy on Amazon for $1.36, so that was a pretty low-risk investment. I’m looking forward to getting the book and learning about connascence from the original source.)

    Meanwhile, here’s my summary of Weirich’s summary of Page-Jones’ writings about connascence: the stronger the form of connascence, the more difficult and costly it is to change the elements in the relationship. Some of the connascence types, ordered from weak to strong, are:

    Connascence of Name

    Connascence of name is when multiple components must agree on the name of an entity. If you change the name of a method or property, then you need to change all references to that method or property. Duh. Connascence of name is unavoidable, assuming your objects are actually used. My main takeaway about connascence of name is that it emphasizes the importance of giving things good names so you don’t need to go changing them later.

    Connascence of Type

    Connascence of type is when multiple components must agree on the type of an entity. I assume this is more of a problem for languages without compilers (especially when used in apps without tests). I know it’s an issue with evil JavaScript type coercion.

    Connascence of Meaning

    Connascence of meaning is when multiple components must agree on the meaning of particular values, e.g. that “1” means normal customer and “2” means preferred customer. The solution to this is to use constants or enums instead of “magic” strings or numbers, which reduces the coupling by changing the connascence form from “meaning” to “name”.

    Connascence of Position

    Connascence of position is when multiple components must agree on the order of values. This refers to methods with multiple parameters, e.g.:

        eMailer.Send("[email protected]", "[email protected]", "Your order is complete", "Order completion notification");

    The more parameters there are, the stronger the connascence of position is between the component and its callers.
    In the example above, it’s not immediately clear when reading the code which email addresses are sender and receiver, and which of the final two strings are subject vs. body. Connascence of position could be improved to connascence of type by replacing the parameter list with a struct or class. This “introduce parameter object” refactoring (sketched in code at the end of this post) might be overkill for a method with 2 parameters, but would definitely be an improvement for a method with 10 parameters.

    This points out two “rules” of connascence:

    The Rule of Degree: The acceptability of connascence is related to the degree of its occurrence.

    The Rule of Locality: Stronger forms of connascence are more acceptable if the elements involved are closely related. For example, positional arguments in private methods are less problematic than in public methods.

    Connascence of Algorithm

    Connascence of algorithm is when multiple components must agree on a particular algorithm. Be DRY – Don’t Repeat Yourself. If you have “cloned” code in multiple locations, refactor it into a common function.

    Those are the “static” forms of connascence. There are also “dynamic” forms, including…

    Connascence of Execution

    Connascence of execution is when the order of execution of multiple components is important. Consumers of your class shouldn’t have to know that they have to call an .Initialize method before it’s safe to call a .DoSomething method.

    Connascence of Timing

    Connascence of timing is when the timing of the execution of multiple components is important. I’ll have to read up on this one when I get the book, but I assume it’s largely about threading.

    Connascence of Identity

    Connascence of identity is when multiple components must reference the same entity. The example Weirich gives is when you have two instances of the “Bob” Employee class: if you call the .RaiseSalary method on one and then the .Pay method on the other, does the payment use the updated salary?

    Again, this is my summary of a summary, so please be forgiving if I misunderstood anything. Once I get/read the book, I’ll make corrections if necessary and share any other useful information I might learn.

    See Also: Gregory Brown: Ruby Best Practices Issue #24: Connascence as a Software Design Metric (That link is failing at the time I write this, so I had to go to the Google cache of the page.)
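    To make the “introduce parameter object” refactoring concrete, here is a minimal sketch in Python (the EmailMessage class and its field names are invented for illustration; they are not from Page-Jones or Weirich):

        from dataclasses import dataclass

        @dataclass
        class EmailMessage:
            # A hypothetical parameter object replacing four positional strings.
            sender: str
            recipient: str
            body: str
            subject: str

        def send(message: EmailMessage) -> None:
            # Callers now agree on a type and its field names instead of
            # argument order -- a weaker form of coupling than position.
            print(f"To: {message.recipient} | Subject: {message.subject}")

        send(EmailMessage(
            sender="sales@example.com",
            recipient="customer@example.com",
            body="Your order is complete",
            subject="Order completion notification",
        ))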

    Read the article

  • Big Data – What is Big Data – 3 Vs of Big Data – Volume, Velocity and Variety – Day 2 of 21

    - by Pinal Dave
    Data is forever. Think about it – it is indeed true. Are you using any application, unchanged, that was built 10 years ago? Are you using any piece of hardware that was built 10 years ago? The answer is almost certainly no. However, if I ask whether you are using any data that was captured 50 years ago, the answer is almost certainly yes. For example, look at the history of a nation. I am from India and we have documented history that goes back thousands of years. Just look at our birth records – we are using them to this day. Data never gets old; it is going to stay there forever. The applications that interpret and analyze data change, but in most cases the data remains in its purest format.

    As organizations have grown, the data associated with them has also grown exponentially, and today there is a lot of complexity to that data. Most big organizations have data in multiple applications and in different formats. The data is also spread out so much that it is hard to categorize with a single algorithm or logic. The mobile revolution we are experiencing right now has completely changed how we capture data and build intelligent systems. Big organizations are indeed facing challenges keeping all their data on a platform that gives them a single, consistent view of it. This unique challenge – making sense of all the data coming in from different sources and deriving useful, actionable information out of it – is the revolution the Big Data world is facing.

    Defining Big Data

    The 3Vs that define Big Data are Variety, Velocity and Volume.

    Volume

    We currently see exponential growth in data storage, as data is now more than text. We can find data in the form of videos, music and large images on our social media channels. Terabytes and petabytes of storage are now very common for enterprises. As the database grows, the applications and architecture built to support the data need to be re-evaluated quite often. Sometimes the same data is re-evaluated from multiple angles, and even though the original data is the same, the newfound intelligence creates an explosion of data. This big volume indeed represents Big Data.

    Velocity

    The data growth and social media explosion have changed how we look at data. There was a time when we used to believe that yesterday's data is recent. As a matter of fact, newspapers still follow that logic. However, news channels and radio have changed how fast we receive the news. Today, people rely on social media to update them with the latest happenings. On social media, a message that is even a few seconds old (a tweet, a status update, etc.) sometimes no longer interests users. They often discard old messages and pay attention to recent updates. Data movement is now almost real time, and the update window has been reduced to fractions of a second. This high-velocity data represents Big Data.

    Variety

    Data can be stored in multiple formats – database, Excel, CSV, Access or, for that matter, a simple text file. Sometimes the data is not even in the traditional format we assume; it may be in the form of video, SMS, PDF or something we might not have thought about. It is the organization's job to arrange it and make it meaningful. It would be easy to do so if we had all the data in the same format, but that is not the case most of the time. The real world has data in many different formats, and that is the challenge we need to overcome with Big Data. This variety of data represents Big Data.

    Big Data in Simple Words

    Big Data is not just about lots of data; it is actually a concept providing an opportunity to find new insight into your existing data, as well as guidelines for capturing and analyzing your future data. It makes any business more agile and robust, so it can adapt and overcome business challenges.

    Tomorrow

    In tomorrow's blog post we will discuss the Evolution of Big Data.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Unit testing is… well, flawed.

    - by Dewald Galjaard
    Hey, someone had to say it. I clearly recall my first IT job. I was appointed Systems Co-ordinator for a leading South African retailer at store level. Don’t get me wrong, there is absolutely nothing wrong with an honest day’s labor and in fact I highly recommend it; however, I’m obliged to refer to the designation cautiously – in reality all I had to do was monitor in-store prices and two UNIX front line controllers. If anything went wrong, I only had to phone it in…

    Luckily that wasn’t all I did. My duties extended to another interesting annual occurrence – stock take. Despite being a somewhat more curious affair, it was still a tedious process that took weeks of preparation and several nights to complete. I also remember that no matter how elaborate our planning was, the entire exercise would be rendered useless if we couldn’t get the basics right – that being the act of counting. Sounds simple, right? Well, with a store that could potentially carry tens of thousands of different items… let’s just say I believe that’s when I first became a coffee addict.

    In those days the act of counting stock was a very humble process, nothing like we have today. A staff member would be assigned a bin or shelf filled with items he or she had to sort, then count. Thereafter they had to record their findings on a complementary piece of paper. Every night I would manage several teams. Each team was divided into two groups – counters and auditors. Both groups had the same task, only the auditors followed shortly on the heels of the counters, recounting stock levels and making sure the original count corresponded to their findings. It was a simple yet hugely responsible orchestration of people, and thankfully there was one fundamental and golden rule I could always abide by to ensure things ran smoothly – no-one was allowed to audit their own work. Nope, not even on nights when I didn’t have enough staff available. This meant I too at times had to get up there and get counting, or have the audit stand over until the next evening. The reason for this was obvious: late at night and with so much to do, we were prone to make some mistakes; then on the recount, without a fresh set of eyes, you were likely to repeat the offence.

    Now, years later, this rule or guideline still holds true as we develop software (as far removed from counting stock as software development may be). For some reason it is a fundamental guideline we’re simply ignorant of. We write our code, we write our tests, and thus commit the same horrendous offence. Yes, the procedure of writing unit tests as practiced in most development houses today is flawed.

    Most if not all of the tests we write today exercise application logic – our logic. They are based on the way we believe an application or method should/may/will behave or function. As we write our tests, our unit tests mirror our best understanding of the inner workings of our application code. Unfortunately these tests will therefore also include (or be unaware of) any imperfections and errors on our part. If your logic is flawed as you write your initial code, chances are, without a fresh set of eyes, you will commit the same error the second time around too (a toy illustration follows at the end of this post). Not even experience seems to be a suitable solution. It certainly helps to have deeper insight, but is that really the answer we should be looking for? Is that really failsafe? What about code review? Code review is certainly an answer. You could have one developer coding away and another (or a team) making sure the logic is sound.

    The practice however has its obvious drawbacks. Firstly and mainly, it is resource intensive, and from what I’ve seen in most development houses, given heavy deadlines, this guideline is seldom adhered to. Hardly ever do we have the resources, money or time readily available. So what other options are out there?

    A quest to find some solution revealed a project by Microsoft Research called PEX. PEX is a framework which automatically creates several test scenarios for each method or class you write. Think of it as your own personal auditor. Within a few clicks the framework will auto-generate several unit tests for a given class or method and save them to a single project. PEX helps to audit your work. It lends a fresh set of eyes to any project you’re working on and, best of all, it is cost effective and fast. Check them out at http://research.microsoft.com/en-us/projects/pex/

    In upcoming posts we’ll dive deeper into how it works and how it can help you. Certainly there are more similar frameworks out there and I would love to hear from you. Please share your experiences and insights.
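    As a toy illustration of the problem (my own example, not from the post): when the developer who wrote the logic also writes the test, both can encode the same misunderstanding, and the test passes anyway.

        def order_total(prices, discount):
            # Flawed belief baked into the code: the discount applies per item.
            # The actual requirement: one flat discount per order.
            return sum(prices) - discount * len(prices)

        def test_order_total():
            # Same author, same flawed belief -- the expected value is computed
            # the same wrong way, so the test happily passes.
            assert order_total([100.0, 50.0], 10.0) == 130.0  # requirement says 140.0

        test_order_total()
        print("Test passed; the defect ships anyway.")

    A tool like PEX attacks this from a different angle: it generates test inputs mechanically rather than from the author's mental model of the code.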

    Read the article

  • HTG Explains: Are You Using IPv6 Yet? Should You Even Care?

    - by Chris Hoffman
    IPv6 is extremely important for the long-term health of the Internet. But is your Internet service provider providing IPv6 connectivity yet? Does your home network support it? Should you even care if you’re using IPv6 yet?

    Switching from IPv4 to IPv6 will give the Internet a much larger pool of IP addresses. It should also allow every device to have its own public IP address, rather than be hidden behind a NAT router.

    IPv6 is Important Long-Term

    IPv6 is very important for the long-term health of the Internet. There are only about 3.7 billion public IPv4 addresses. This may sound like a lot, but it isn’t even one IP address for each person on the planet. Considering people have more and more Internet-connected devices — everything from light bulbs to thermostats is starting to become network-connected — the lack of IP addresses is already proving to be a serious problem. This may not affect those of us in well-off developed countries just yet, but developing countries are already running out of IPv4 addresses.

    So, if you work at an Internet service provider, manage Internet-connected servers, or develop software or hardware — yes, you should care about IPv6! You should be deploying it and ensuring your software and hardware work properly with it. It’s important to prepare for the future before the current IPv4 situation becomes completely unworkable. But, if you’re just a typical user, or even a typical geek with a home Internet connection and a home network, should you really care about your home network just yet? Probably not.

    What You Need to Use IPv6

    To use IPv6, you’ll need three things:

    - An IPv6-Compatible Operating System: Your operating system’s software must be capable of using IPv6. All modern desktop operating systems should be compatible — Windows Vista and newer versions of Windows, as well as modern versions of Mac OS X and Linux. Windows XP doesn’t have IPv6 support installed by default, but you shouldn’t be using Windows XP anymore, anyway.

    - A Router With IPv6 Support: Many — maybe even most — consumer routers in the wild don’t support IPv6. Check your router’s specifications to see if it supports IPv6 if you’re curious. If you’re going to buy a new router, you’ll probably want to get one with IPv6 support to future-proof yourself. If you don’t have an IPv6-enabled router yet, you don’t need to buy a new one just to get it.

    - An ISP With IPv6 Enabled: Your Internet service provider must also have IPv6 set up on their end. Even if you have modern software and hardware on your end, your ISP has to provide an IPv6 connection for you to use it. IPv6 is rolling out steadily, but slowly — there’s a good chance your ISP hasn’t enabled it for you yet.

    How to Tell If You’re Using IPv6

    The easiest way to tell if you have IPv6 connectivity is to visit a website like testmyipv6.com. This website allows you to connect to it in different ways — click the links near the top to see if you can connect to the website via different types of connections. If you can’t connect via IPv6, it’s either because your operating system is too old (unlikely), your router doesn’t support IPv6 (very possible), or because your ISP hasn’t enabled it for you yet (very likely).

    Now What?

    If you can connect to the test website above via IPv6, congratulations! Everything is working as it should. Your ISP is doing a good job of rolling out IPv6 rather than dragging its feet. There’s a good chance you won’t have IPv6 working properly, however.
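    If you prefer to check from code, here is a minimal sketch using Python’s standard socket module. It asks for an IPv6 (AAAA) address for the test site mentioned above and tries to open a connection; reaching the site on port 80 is my assumption:

        import socket

        def ipv6_reachable(host="testmyipv6.com", port=80):
            # Failure can mean an old OS, a router without IPv6 support,
            # or an ISP that has not enabled IPv6 -- the same three
            # culprits described above.
            try:
                family, socktype, proto, _, addr = socket.getaddrinfo(
                    host, port, socket.AF_INET6, socket.SOCK_STREAM)[0]
                with socket.socket(family, socktype, proto) as s:
                    s.settimeout(5)
                    s.connect(addr)
                return True
            except OSError:
                return False

        print("IPv6 connectivity:", ipv6_reachable())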
    So what should you do about this — should you head to Amazon and buy a new IPv6-enabled router, or switch to an ISP that offers IPv6? Should you use a “tunnel broker,” as the test site recommends, to tunnel into IPv6 via your IPv4 connection? Well, probably not. Typical users shouldn’t have to worry about this yet. Connecting to the Internet via IPv6 shouldn’t be perceptibly faster, for example. It’s important for operating system vendors, hardware companies, and Internet service providers to prepare for the future and get IPv6 working, but you don’t need to worry about this on your home network.

    IPv6 is all about future-proofing. You shouldn’t be racing to implement this at home yet or worrying about it too much — but, when you need to buy a new router, try to buy one that supports IPv6.

    Image Credit: Adobe of Chaos on Flickr, hisperati on Flickr, Vox Efx on Flickr

    Read the article

  • CQRS - Benefits

    - by Dylan Smith
    Thanks to all the comments and feedback from the last post, I think I have a better understanding now of the benefits of CQRS (separate from the benefits of Event Sourcing). I’m going to try and sum it up here, and point out some areas where I could still use some advice.

    CQRS Benefits

    It sounds like the primary benefit of CQRS as an architecture is that it allows you to create a simpler domain model by sucking out everything related to queries. I can definitely see the benefit of this. In general, the domain logic related to commands is the high-value behavior in the software, but the logic required to service the queries would add a lot of low-value “noise” to the domain model that would dilute the high-value (command) behavior – sorting, paging, filtering, pre-fetch paths, etc. Also, the most appropriate domain structure for implementing commands might not be the most optimal for implementing queries. To paraphrase Greg, this usually results in a domain model that is mediocre at both, piss-poor at one, or more likely piss-poor at both commands and queries.

    Not only will you be able to simplify your domain model by pulling out all the query logic, but at least a handful of commands in most systems will probably be “pass-through” type commands with little to no logic that just generate events. If these can be implemented directly in the command handler and never touch the domain model, this allows you to slim down the domain model even more. Also, if you were to do event sourcing without CQRS, you would no longer have a database containing the current state (only the domain model would), which makes it difficult (or impossible) to support the ad-hoc querying and/or reporting that is common in most business software.

    Of course CQRS provides some great scalability benefits; not only scalability, but I have to assume it also provides extremely low latency for most operations, especially if you have an asynchronous event bus. I know Greg says that you get a 3x scaling (Commands, Queries, Client) of your ability to perform parallel development, but IMHO it seems like it only provides 1.5x scaling, since even without CQRS you’re going to have your client loosely coupled to your domain – which is still a great benefit to be able to realize.

    Questions / Concerns

    If all the queries against an aggregate get pulled out to the Query layer, what if the only commands for that aggregate can be handled in a “pass-through” manner, with the command handler directly generating events? Is it possible to have an aggregate that isn’t modeled in the domain model? Are there any issues or downsides to this?

    I know in the feedback from my previous posts it was suggested that having one domain model handling both commands and queries requires implementing a lot of traversals between objects that wouldn’t be necessary if it was only servicing commands. My question is: do you include traversals in your domain model based on the needs of the code, or based on the conceptual domain model? If none of my commands require a Customer.Orders traversal, but the conceptual domain includes the concept of a set of orders belonging to a customer – should I model that in my domain model or not?

    I like the idea of using the Query side of the architecture as a place to put junior devs, where the risk of them screwing something up has minimal impact. But I’m not sold on the idea that you can actually outsource it.
    Like I said in one of my comments on my previous post, the code to handle a query and generate DTOs is going to be dead simple, but the code to process events and apply them to the tables on the query side is going to require a significant amount of domain knowledge to know which events to listen for to update each of the de-normalized tables (and what changes need to be made when each event is processed). I don’t know about everybody else, but having outsourced developers do anything that requires significant domain knowledge has never been successful in my experience. And if you need to spec out for each new query which events to listen to and what to do with each one, well, that’s probably going to be just as much work to document as it would be to just implement it.

    Greg made the point in a comment that doing an aggregate query like “Total Sales By Customer” is going to be inefficient if you use event sourcing but not CQRS. I don’t understand why that would be the case. I imagine in that case you’d simply have a method/property on the Customer object that calculated total sales for that customer by enumerating over the Orders collection. Then the application services layer would generate DTOs off of the Customers collection that included, say, the CustomerID, CustomerName and TotalSales, or whatever the case may be. As long as you use a snapshotting implementation, I don’t see why that would be any more inefficient in a DDD + Event Sourcing implementation than in a typical DDD implementation.

    Like I mentioned in my last post, I still have some questions about query logic that haven’t been answered yet, but before I start asking those I want to make sure I have a strong grasp on what benefits CQRS provides. My main concern with the query logic was that I know I could just toss it all into the query side, but I was concerned that I would be losing the benefits of using CQRS in the first place if I did that. I want to elaborate more on this with some example situations in an upcoming post.
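    For what it’s worth, here is a minimal sketch (in Python, with invented names) of two of the ideas above: a “pass-through” command handler that only emits an event, and a read-side projection that keeps a denormalized “total sales by customer” view current so queries never touch the domain model:

        from collections import defaultdict

        # Command side: a pass-through handler that never touches a domain model.
        def handle_place_order(customer_id, amount, event_log):
            # Validation elided; the handler just records what happened.
            event_log.append({"type": "OrderPlaced",
                              "customer": customer_id,
                              "amount": amount})

        # Query side: a projection listens for events and maintains a
        # denormalized table that queries read directly.
        total_sales_by_customer = defaultdict(float)

        def project(event):
            if event["type"] == "OrderPlaced":
                total_sales_by_customer[event["customer"]] += event["amount"]

        events = []
        handle_place_order("bob", 100.0, events)
        handle_place_order("bob", 50.0, events)
        for event in events:
            project(event)

        print(total_sales_by_customer["bob"])  # 150.0, straight from the read model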

    Read the article

  • Using real fonts in HTML 5 & CSS 3 pages

    - by nikolaosk
    This is going to be the fifth post in a series of posts regarding HTML 5. You can find the other posts here, here, here and here. In this post I will provide a hands-on example of how to use real fonts in HTML 5 pages with the use of CSS 3.

    Font issues have appeared on all websites and caused all sorts of problems for web designers. The real problem with fonts for web developers until now was that they were forced to use only a handful of fonts – the web-safe fonts in wide use in most users' operating systems. CSS 3 allows web designers to go beyond those web-safe fonts. Some designers (when they wanted to make their site stand out) resorted to various techniques like using images instead of fonts. That solution is not very accessibility-friendly and is definitely less SEO-friendly.

    CSS 3 (through its Fonts module) allows web developers to embed fonts directly on a web page. First we need to define the font and then attach the font to elements. Obviously we have various formats for fonts. Some are supported by all modern browsers and some are not. The most common formats are Embedded OpenType (EOT), TrueType (TTF) and OpenType (OTF). I will use the @font-face declaration to define the font used in this page.

    Before you download fonts (in any format) make sure you have understood all the licensing issues. Please note that all these real fonts will be downloaded to the client's computer. A great resource on the web (maybe the best) is http://www.typekit.com/. They have an abundance of web fonts for use. Please note that they sell those fonts. Another free (the best things in life are free, aren't they?) resource is the http://www.google.com/webfonts website. I have visited the website and downloaded the Aladin webfont. When you download any font you like, make sure you read the license first. The Aladin webfont is released under the Open Font License (OFL).

    Before I go on with the actual demo I will use http://www.caniuse.com to check the support for web fonts in the latest versions of modern browsers. All the latest versions of modern browsers support this feature.

    In order to be absolutely clear, this is not (and could not be) a detailed tutorial on HTML 5. There are other great resources for that. Navigate to the excellent interactive tutorials of W3School. Another excellent resource is HTML 5 Doctor. Two very nice sites that show you which features and specifications are implemented by various browsers and their versions are http://caniuse.com/ and http://html5test.com/. At this time Chrome seems to support most of the HTML 5 specifications. Another excellent way to find out if the browser supports HTML 5 and CSS 3 features is to use the lightweight JavaScript library Modernizr.

    In this hands-on example I will be using Expression Web 4.0. This application is not free; you can use any HTML editor you like, such as Visual Studio 2012 Express edition, which you can download here.

    I create a simple HTML 5 page. The markup follows, and it is very easy to understand:

        <!DOCTYPE html>
        <html lang="en">
          <head>
            <title>HTML 5, CSS3 and JQuery</title>
            <meta http-equiv="Content-Type" content="text/html;charset=utf-8" >
            <link rel="stylesheet" type="text/css" href="style.css">
          </head>
          <body>
            <div id="header">
              <h1>Learn cutting edge technologies</h1>
              <p>HTML 5, JQuery, CSS3</p>
            </div>
            <div id="main">
              <h2>HTML 5</h2>
              <p>
                HTML5 is the latest version of HTML and XHTML. The HTML standard defines a single language that can be written in HTML and XML. It attempts to solve issues found in previous iterations of HTML and addresses the needs of Web Applications, an area previously not adequately covered by HTML.
              </p>
            </div>
          </body>
        </html>

    Then I create the style.css file:

        @font-face {
            font-family: Aladin;
            src: url('Aladin-Regular.ttf');
        }

        h1 {
            font-family: Aladin, Georgia, serif;
        }

    As you can see, we want to style the h1 tag in our HTML 5 markup. I just use the @font-face rule, specifying the font-family and the source of the web font. Then I use that name in the font-family property to style the h1 tag. Have a look at my page in IE10, and make sure you open this page in all the browsers installed on your machine (download the latest versions first). Now we can make our site stand out with web fonts and give it a really unique look and feel. Hope it helps!!!

    Read the article

  • SQL SERVER – Fundamentals of Columnstore Index

    - by pinaldave
    There are two kinds of storage in a database: row store and column store. Row store does exactly as the name suggests – it stores rows of data on a page – and column store stores all the data in a column on the same page. Columns stored this way are much easier to search: instead of a query searching all the data in an entire row, whether the data is relevant or not, column store queries need to search only a much smaller number of columns. This means major increases in search speed and much more efficient hard drive use. Additionally, column store indexes are heavily compressed, which translates to even greater memory efficiency and faster searches.

    This may look very exciting, but it does not mean you should convert every single index from row store to column store. One has to understand the proper places to use row store or column store indexes. Let us understand in this article how the columnstore type of index differs.

    Column store indexes are powered by Microsoft's VertiPaq technology. However, all you really need to know is that this method of storing data as columns on a single page is much faster and more efficient. Creating a column store index is very easy, and you don't have to learn new syntax to create one. You just need to specify the keyword "COLUMNSTORE" and enter the data as you normally would. Keep in mind that once you add a column store index to a table, though, you cannot delete, insert or update the data – it is READ ONLY. However, since column store will mainly be used for data warehousing, this should not be a big problem. You can always use partitioning to avoid rebuilding the index.

    A columnstore index stores each column in a separate set of disk pages, rather than storing multiple rows per page as data traditionally has been stored. The difference between the two approaches: with row store indexes, pages contain entire rows, with the columns of each row spanning pages together; with column store indexes, each page contains values from a single column only. As a result, only the columns needed to solve a query are fetched from disk. Additionally, there is a good chance that a single column will contain redundant data, which further helps compress the data; this has a positive effect on the buffer hit rate, as more of the data will fit in memory and will not need to be retrieved repeatedly.

    Let us see a small example of how a columnstore index improves the performance of a query on a large table. As a first step, let us create a dataset large enough to show the performance impact of the columnstore index. The time taken to create the sample database may vary from computer to computer based on its resources.
    USE AdventureWorks
    GO

    -- Create New Table
    CREATE TABLE [dbo].[MySalesOrderDetail](
        [SalesOrderID] [int] NOT NULL,
        [SalesOrderDetailID] [int] NOT NULL,
        [CarrierTrackingNumber] [nvarchar](25) NULL,
        [OrderQty] [smallint] NOT NULL,
        [ProductID] [int] NOT NULL,
        [SpecialOfferID] [int] NOT NULL,
        [UnitPrice] [money] NOT NULL,
        [UnitPriceDiscount] [money] NOT NULL,
        [LineTotal] [numeric](38, 6) NOT NULL,
        [rowguid] [uniqueidentifier] NOT NULL,
        [ModifiedDate] [datetime] NOT NULL
    ) ON [PRIMARY]
    GO

    -- Create clustered index
    CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail]
        ([SalesOrderDetailID])
    GO

    -- Create sample data
    -- WARNING: This query may run up to 2-10 minutes based on your system's resources
    INSERT INTO [dbo].[MySalesOrderDetail]
    SELECT S1.*
    FROM Sales.SalesOrderDetail S1
    GO 100

    Now let us do a quick performance test. I have kept STATISTICS IO ON to measure how much IO the following queries take. First I will run a query that uses the regular index and note its IO usage. After that we will create a columnstore index and measure the IO of the same query.

    -- Performance Test
    -- Comparing Regular Index with ColumnStore Index
    USE AdventureWorks
    GO
    SET STATISTICS IO ON
    GO

    -- Select table with regular index
    SELECT ProductID,
           SUM(UnitPrice) SumUnitPrice,
           AVG(UnitPrice) AvgUnitPrice,
           SUM(OrderQty) SumOrderQty,
           AVG(OrderQty) AvgOrderQty
    FROM [dbo].[MySalesOrderDetail]
    GROUP BY ProductID
    ORDER BY ProductID
    GO
    -- Table 'MySalesOrderDetail'. Scan count 1, logical reads 342261, physical reads 0, read-ahead reads 0.

    -- Create ColumnStore Index
    CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore]
    ON [MySalesOrderDetail] (UnitPrice, OrderQty, ProductID)
    GO

    -- Select table with columnstore index
    SELECT ProductID,
           SUM(UnitPrice) SumUnitPrice,
           AVG(UnitPrice) AvgUnitPrice,
           SUM(OrderQty) SumOrderQty,
           AVG(OrderQty) AvgOrderQty
    FROM [dbo].[MySalesOrderDetail]
    GROUP BY ProductID
    ORDER BY ProductID
    GO

    It is very clear from the results that the query performs extremely fast after creating the columnstore index. The number of pages it has to read to run the query is drastically reduced, as the columns needed by the query are stored on the same pages and the query does not have to go through every single page to read them. If we enable the execution plan and compare, we can see that the columnstore index performs far better than the regular index in this case. Let us clean up the database.

    -- Cleanup
    DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail]
    GO
    TRUNCATE TABLE dbo.MySalesOrderDetail
    GO
    DROP TABLE dbo.MySalesOrderDetail
    GO

    In future posts we will see cases where a columnstore index is not the appropriate solution, as well as a few other columnstore tricks and tips. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – CXPACKET – Parallelism – Usual Solution – Wait Type – Day 6 of 28

    - by pinaldave
    CXPACKET has to be the most popular of all wait stats. I have commonly seen this wait stat as one of the top 5 wait stats in most systems with more than one CPU.

    Books On-Line: Occurs when trying to synchronize the query processor exchange iterator. You may consider lowering the degree of parallelism if contention on this wait type becomes a problem.

    CXPACKET Explanation: When a parallel operation is created for a SQL query, there are multiple threads for a single query. Each thread deals with a different set of the data (or rows). For various reasons, one or more of the threads lag behind, creating the CXPACKET wait stat. There is an organizer/coordinator thread (thread 0), which waits for all the threads to complete and gathers the results together to present on the client's side. The organizer thread has to wait for all the threads to finish before it can move ahead. The wait by this organizer thread for slow threads to complete is called the CXPACKET wait.

    Note that not all CXPACKET waits are bad. You might experience a case when it totally makes sense. There might also be cases when this is unavoidable. If you remove this particular wait type for any query, then that query may run slower because the parallel operations are disabled for the query.

    Reducing CXPACKET wait: We cannot discuss reducing the CXPACKET wait without talking about the server's workload type.

    OLTP: On a pure OLTP system, where the transactions are smaller and the queries are usually very quick rather than long-running, set the "Maximum Degree of Parallelism" to 1 (one). This makes sure that a query never goes parallel and does not incur extra engine overhead.

        EXEC sys.sp_configure N'max degree of parallelism', N'1'
        GO
        RECONFIGURE WITH OVERRIDE
        GO

    Data warehousing / reporting server: As queries will be running for a long time, it is advised to set the "Maximum Degree of Parallelism" to 0 (zero). This way most of the queries will utilize the parallel processors, and long-running queries get a boost in their performance due to multiple processors.

        EXEC sys.sp_configure N'max degree of parallelism', N'0'
        GO
        RECONFIGURE WITH OVERRIDE
        GO

    Mixed system (OLTP & OLAP): Here is the challenge. The right balance has to be found. I have taken a very simple approach: I set the "Maximum Degree of Parallelism" to 2, which means a query still uses parallelism but only on 2 CPUs, and I keep the "Cost Threshold for Parallelism" very high. This way, not all queries will qualify for parallelism; only a query with a higher cost will go parallel. I have found this to work best for a system that has OLTP queries and also a reporting server set up. Here, I am setting 'Cost Threshold for Parallelism' to 25 (which is just for illustration); you can choose any value, and you can find it only by experimenting with the system. In the following script, a query with a cost higher than 25 will qualify to run in parallel on 2 CPUs. Regardless of the total number of CPUs, such a query will select any two CPUs to execute itself.

        EXEC sys.sp_configure N'cost threshold for parallelism', N'25'
        GO
        EXEC sys.sp_configure N'max degree of parallelism', N'2'
        GO
        RECONFIGURE WITH OVERRIDE
        GO

    Read all the posts in the Wait Types and Queues series. Additionally, the comment by Jonathan Kehayias is a must-read.
    Note: The information presented here is from my experience, and I in no way claim it to be accurate. I suggest you read Books On-Line for further clarification. All the discussion of wait stats here is generic, and behavior varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: DMV, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • Oracle Fusion Middleware Innovation Award Winners 2012: ADF & Fusion Development

    - by Dana Singleterry
    Oracle Fusion Middleware Innovation Awards honor customers for their cutting-edge solutions using Oracle Fusion Middleware. Winners are selected based on the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the originality of architecture. The awards were presented during Oracle OpenWorld 2012, and the following winners are in the category of ADF & Fusion Development.

    Micros – an OPN Platinum partner – has been working closely with Oracle product management teams in applying industry best practices in the development of their solutions. Their current application suite for the hospitality industry was built on Oracle Forms and the Oracle database running on MS Windows. The next generation of this suite is being developed and released in modules that are now based on Oracle FMW (including ADF) 11g technologies and Oracle Database 11g, all running on Oracle Linux. The primary driver was modernization, and hence Oracle ADF was selected to provide a rich UI for business processes that could be served up through traditional methods or through mobile devices globally. SOA Suite & ADF allowed for loosely-coupled services that could evolve with the needs of the business. Micros's application innovations include the use of business application portlets that have been published from ADF Faces Task Flows, generated using WebCenter portlet libraries & Oracle Metadata Services (MDS), with multi-layered customizations using Oracle WebCenter Composer.

    PCS (Marfin Egnatia Bank of Greece) – PCS Wealth Management is a WM software solution which captures and automates the WM business processes, allowing service providers to allocate enough time and effort to customer service and investment strategies, under advisory or execution-only services. The product is built upon the latest web technologies and ensures best practices covering all functional expectations, meeting local regulatory requirements and discovering successful opportunities for the WM customers' portfolios. The new unified Wealth Management system offers an unparalleled user interface, taking full advantage of the user-friendly ADF Faces components to a great extent, all serving private banking purposes. The application offers a true account officer cockpit with shallow navigation, one-click access to informed decisions and perfect customer service. ADF Grids and Pivots, the Data Visualization Components, as well as the Calendar and Map Components, are cleverly used to help the user eliminate the usage of Excel, Outlook and other systems.
    PCS's application is unique in the way it leverages the ADF Faces data visualization components to create a truly attractive and insightful dashboard for their application. PCS Wealth Management Demo

    Qualcomm – Qualcomm, a $17B per year company, designs and sells semiconductor products for wireless telecommunications, mobile and computing markets. In addition, Qualcomm companies provide various hardware and software products to facilitate the design, development and deployment of phones and the applications that run on them. Qualcomm's challenge has been to not only develop and deploy new business system functions to keep pace with customer demand, but also to provide a customer collaboration capability that is sufficiently robust, easy to use, and flexible to meet emerging and future needs. Qualcomm has taken successful steps in building and deploying a customer engagement platform leveraging various Oracle technologies, including Fusion Middleware (ADF, SOA, OBIEE) and their proven ERP foundation of EBS and 11g databases. The new platform delivers a more unified and "seamless" business solution with a consistent, modern "look and feel," all based on standard business processes which facilitate efficient collaboration between Qualcomm and its customers. The look and feel leverages ADF in innovative ways and includes hover-over navigation, custom pagination components, and skinning. Qualcomm has exposed a services layer that provides significant functionality, including order-to-ship, quote-to-order, customer on-boarding and contract validation. Qualcomm's creative designs leverage Oracle's SOA Suite to integrate Oracle EBS and disparate applications, providing a rich user interface through the use of Oracle ADF Faces Rich Client Components and a self-service solution for their customers.

    Read the article

  • Spotlight on Claims: Serving Customers Under Extreme Conditions

    - by [email protected]
    Oracle Insurance's director of marketing for EMEA, John Sinclair, recently attended the CII Spotlight on Claims event in London. Bad weather and its implications for the insurance industry have become very topical as the frequency and diversity of natural disasters – including rain, wind and snow – have surged across Europe this winter. On England's wettest day on record, the county of Cumbria was flooded with 12 inches of rain within 24 hours. Freezing temperatures wreaked havoc on European travel, causing high-speed TGV trains to break down and stranding hundreds of passengers in a tunnel under the English Channel all night long without heat or electricity. A storm named Xynthia thrashed France and surrounding countries with hurricane force, flooding ports and killing 51 people. After the spring equinox, insurers may have thought the worst had passed. Then came Eyjafjallajökull, spewing out vast quantities of volcanic ash in what is turning out to be one of the most costly natural disasters in history.

    Such extreme events challenge insurance companies' ability to serve their customers just when customers need their help most. When you add economic downturn and competitive pressures to the mix, insurers are further stretched and required to continually learn and innovate to meet high customer expectations with reduced budgets. These and other issues were hot topics of discussion at the recent "Spotlight on Claims" seminar in London, focused on how weather is affecting claims and the insurance industry. The event was organized by the CII (Chartered Insurance Institute), a group with 90,000 members. CII has been at the forefront of setting professional standards for the insurance industry for over a century.

    Insurers came to the conference to hear how they could better serve their customers under extreme weather conditions, learn from the experience of their peers, and hear about technological breakthroughs in climate modeling, geographic intelligence and IT. Customer case studies at the conference highlighted the importance of effective and constant communication in handling the overflow of catastrophe-related claims. First and foremost is the need to rapidly establish initial communication with claimants to build their confidence in a positive outcome. Ongoing communication then needs to continue throughout the claims cycle to manage expectations and maintain ownership of the process from start to finish. Strong internal communication to support frontline staff was also deemed critical to successful crisis management, as was communication with the broader insurance ecosystem to tap into extended resources and business intelligence.

    Advances in technology – such as web-based systems to access policies and enter first notice of loss in the field, as well as customer-focused self-service portals and multichannel alerts – are instrumental in improving customer satisfaction and helping insurers to deal with the claims surge, which can often reach four or more times normal workloads. Dynamic models of the global climate system can now be used to better understand weather-related risks, and as these models mature it is hoped that they will soon become more accurate in predicting the timing of catastrophic events. Geographic intelligence is also being used within a claims environment to better assess loss reserves and detect fraud.
Despite these advances in dealing with catastrophes and predicting their occurrence, there will never be a substitute for qualified front-line staff to deal with customers. In light of pressures to streamline efficiency, there was debate as to whether outsourcing was the solution, or whether it was better to build on the people you have. In the final analysis, nearly everybody agreed that in the future insurance companies would have to work better and smarter to keep on top. An appeal was also made for greater collaboration amongst industry participants in dealing with the extreme conditions and systematic stress brought on by natural disasters. It was pointed out that the public oftentimes judges the industry as a whole rather than the individual carriers when it comes to freakish events, and that at such times everybody - especially the end customer - would benefit from the pooling of limited resources and professional skills rather than from competing in silos for competitive advantage. One case study that stood out was on how The Motorists Insurance Group was able to power through one of the most devastating catastrophes in recent years - Hurricane Ike. The keys to Motorists' success were superior people, processes and technology. They did a lot of upfront planning and invested in their people, creating a healthy team environment that delivered "max service" even when they were experiencing the same level of devastation as the rest of the population. Processes were rapidly adapted to meet the challenge of the catastrophe and continually adapted to Ike's specific conditions as they evolved. Technology was fundamental to the execution of their strategy, enabling anywhere access, on-the-fly reassignment of resources, and rapid training to augment the workforce. You can learn more about the Motorists experience by watching this video. John Sinclair is marketing director for Oracle Insurance in EMEA. He has more than 20 years of experience in insurance and financial services.

    Read the article

  • Find only physical network adapters with WMI Win32_NetworkAdapter class

    - by Mladen Prajdic
    WMI is the Windows Management Instrumentation infrastructure for managing data and machines. We can access it by using WQL (the WMI query language, or SQL for WMI). One thing to remember from the WQL link is that it doesn't support ORDER BY. This means that when you do SELECT * FROM wmiObject, the returned order of the objects is not guaranteed. It can return adapters in a different order based on the logged-in user, the permissions of that user, etc. This is not documented anywhere that I've looked and is derived just from my observations. To get network adapters we have to query the Win32_NetworkAdapter class. This returns all network adapters that Windows detects, real and virtual ones; however, it only supplies IPv4 data. I've tried various methods of combining properties that are common on all systems since Windows XP. The first thing to do is to remove all the virtual adapters (like tunneling, WAN miniports, etc.) created by Microsoft. We do this by adding WHERE Manufacturer != 'Microsoft' to our WMI query. This greatly narrows the number of adapters we have to work with. Just on my machine it went from 20 adapters to 5. What was left was one real physical Realtek LAN adapter, 2 virtual adapters installed by VMware and 2 virtual adapters installed by VirtualBox. If you read the Win32_NetworkAdapter help page you'll notice that there's an AdapterType property that enumerates various adapter types like LAN or Wireless, and an AdapterTypeID property that gives you the same information as AdapterType, only in integer form. The dirty little secret is that these 2 properties don't work. They are both hardcoded, AdapterTypeID to "0" and AdapterType to "Ethernet 802.3". The only exceptions I've seen so far are adapters that have no values at all for the two properties, the "RAS Async Adapter" that has values of AdapterType = "Wide Area Network" and AdapterTypeID = "3", and various tunneling adapters that have values of AdapterType = "Tunnel" and AdapterTypeID = "15". In the help docs there isn't even a value for 15. So this property was of no help. The next property to give hope is NetConnectionID. This is the name of the network connection as it appears in Control Panel -> Network Connections. The problem is that this value is localized into various languages and can have different names for different connections, so, like the previous one, this property was of no help either - and we haven't even started talking about eliminating virtual adapters. The next two properties I checked were ConfigManagerErrorCode and NetConnectionStatus, in hopes of finding disabled and disconnected adapters. If an adapter is enabled but disconnected, ConfigManagerErrorCode = 0 with a different NetConnectionStatus. If the adapter is disabled, it reports ConfigManagerErrorCode = 22. This looked like a win by using (ConfigManagerErrorCode=0 or ConfigManagerErrorCode=22) in our condition. This way we get enabled adapters (both connected and disconnected). The problem with all of the above properties is that none of them filter out the virtual adapters installed by virtualization software like VMware and VirtualBox. The last property to give hope is PNPDeviceID. There's an interesting observation about physical and virtual adapters with this property: every virtual adapter's PNPDeviceID starts with "ROOT\". Even the VMware and VirtualBox ones. There were some really, really old physical adapters that had a PNPDeviceID starting with "ROOT\", but those were from the pre-Windows XP era, AFAIK.
Since my minimum system to check was Windows XP SP2, I didn't have to worry about those. The only virtual adapter I've seen that does not have a PNPDeviceID starting with "ROOT\" is the RAS Async Adapter for Wide Area Network. But because it is made by Microsoft, we've already eliminated it with the first condition on the manufacturer. Using the PNPDeviceID has so far proven to be really effective: I've tested it on over 20 different computers of various configurations, from Windows XP laptops with wireless and Bluetooth cards to virtualized Windows 2008 R2 servers. So far it has always worked as expected. I would appreciate you letting me know if you find a configuration where it doesn't work. Let's see some C# code that does this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Management; // requires a reference to System.Management.dll

ManagementObjectSearcher mos = null;

// WHERE Manufacturer != 'Microsoft' removes all of the Microsoft-provided
// virtual adapters like tunneling, miniports, and Wide Area Network adapters.
mos = new ManagementObjectSearcher(@"SELECT * FROM Win32_NetworkAdapter WHERE Manufacturer != 'Microsoft'");

// Trying the ConfigManagerErrorCode and NetConnectionStatus variations proved
// to still not be enough: it returns adapters installed by virtualization
// software like VMware and VirtualBox.
// ConfigManagerErrorCode = 0 -> device is working properly; this covers enabled and/or disconnected devices.
// ConfigManagerErrorCode = 22 AND NetConnectionStatus = 0 -> device is disabled and disconnected.
// Some virtual devices report ConfigManagerErrorCode = 22 (disabled) with some NetConnectionStatus other than 0.
mos = new ManagementObjectSearcher(@"SELECT * FROM Win32_NetworkAdapter WHERE Manufacturer != 'Microsoft' AND (ConfigManagerErrorCode = 0 OR (ConfigManagerErrorCode = 22 AND NetConnectionStatus = 0))");

// Final solution: filter on the Manufacturer and on PNPDeviceID not starting with "ROOT\".
// Physical devices have a PNPDeviceID starting with "PCI\" or something else besides "ROOT\".
mos = new ManagementObjectSearcher(@"SELECT * FROM Win32_NetworkAdapter WHERE Manufacturer != 'Microsoft' AND NOT PNPDeviceID LIKE 'ROOT\\%'");

// Get the physical adapters and sort them by their index.
// This is needed because they're not sorted by default.
IList<ManagementObject> managementObjectList = mos.Get()
    .Cast<ManagementObject>()
    .OrderBy(p => Convert.ToUInt32(p.Properties["Index"].Value))
    .ToList();

// Let's just show all the properties for all physical adapters.
foreach (ManagementObject mo in managementObjectList)
{
    foreach (PropertyData pd in mo.Properties)
        Console.WriteLine(pd.Name + ": " + (pd.Value ?? "N/A"));
}

That's it. Hope this helps you in some way.

    Read the article

  • SSH error: Permission denied, please try again

    - by Kamal
    I am new to Ubuntu, so please forgive me if the question is too simple. I have an Ubuntu server set up using an Amazon EC2 instance. I need to connect my desktop (which is also an Ubuntu machine) to the Ubuntu server using SSH. I have installed OpenSSH on the Ubuntu server. I need all systems on my network to connect to the Ubuntu server using SSH (no need to connect through .pem or public keys), so I opened SSH port 22 for my static IP in the security groups (AWS). My sshd_config file is:

# Package generated configuration file
# See the sshd_config(5) manpage for details

# What ports, IPs and protocols we listen for
Port 22
# Use these options to restrict which interfaces/protocols sshd will bind to
#ListenAddress ::
#ListenAddress 0.0.0.0
Protocol 2
# HostKeys for protocol version 2
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_dsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
#Privilege Separation is turned on for security
UsePrivilegeSeparation yes
# Lifetime and size of ephemeral version 1 server key
KeyRegenerationInterval 3600
ServerKeyBits 768
# Logging
SyslogFacility AUTH
LogLevel INFO
# Authentication:
LoginGraceTime 120
PermitRootLogin yes
StrictModes yes
RSAAuthentication yes
PubkeyAuthentication yes
#AuthorizedKeysFile %h/.ssh/authorized_keys
# Don't read the user's ~/.rhosts and ~/.shosts files
IgnoreRhosts yes
# For this to work you will also need host keys in /etc/ssh_known_hosts
RhostsRSAAuthentication no
# similar for protocol version 2
HostbasedAuthentication no
# Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
#IgnoreUserKnownHosts yes
# To enable empty passwords, change to yes (NOT RECOMMENDED)
PermitEmptyPasswords no
# Change to yes to enable challenge-response passwords (beware issues with
# some PAM modules and threads)
ChallengeResponseAuthentication no
# Change to no to disable tunnelled clear text passwords
#PasswordAuthentication yes
# Kerberos options
#KerberosAuthentication no
#KerberosGetAFSToken no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes
# GSSAPI options
#GSSAPIAuthentication no
#GSSAPICleanupCredentials yes
X11Forwarding yes
X11DisplayOffset 10
PrintMotd no
PrintLastLog yes
TCPKeepAlive yes
#UseLogin no
#MaxStartups 10:30:60
#Banner /etc/issue.net
# Allow client to pass locale environment variables
AcceptEnv LANG LC_*
Subsystem sftp /usr/lib/openssh/sftp-server
# Set this to 'yes' to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the ChallengeResponseAuthentication and
# PasswordAuthentication. Depending on your PAM configuration,
# PAM authentication via ChallengeResponseAuthentication may bypass
# the setting of "PermitRootLogin without-password".
# If you just want the PAM account and session checks to run without
# PAM authentication, then enable this but set PasswordAuthentication
# and ChallengeResponseAuthentication to 'no'.
UsePAM yes

Through Webmin (command shell), I created a new user named 'senthil' and added this new user to the 'sudo' group:

sudo adduser -y senthil
sudo adduser senthil sudo

I tried to log in using this new user 'senthil' in Webmin and was able to log in successfully. When I tried to connect to the Ubuntu server from my terminal through SSH,

ssh senthil@SERVER_IP

it asked me to enter a password. After the password entry, it displayed: Permission denied, please try again. After some research I realized that I need to monitor my server's auth log for this.
I got the following error in my auth log (/var/log/auth.log):

Jul 2 09:38:07 ip-192-xx-xx-xxx sshd[3037]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=MY_CLIENT_IP user=senthil
Jul 2 09:38:09 ip-192-xx-xx-xxx sshd[3037]: Failed password for senthil from MY_CLIENT_IP port 39116 ssh2

When I tried to debug using:

ssh -v senthil@SERVER_IP
OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to SERVER_IP [SERVER_IP] port 22.
debug1: Connection established.
debug1: identity file {MY-WORKSPACE}/.ssh/id_rsa type 1
debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048
debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048
debug1: identity file {MY-WORKSPACE}/.ssh/id_rsa-cert type -1
debug1: identity file {MY-WORKSPACE}/.ssh/id_dsa type -1
debug1: identity file {MY-WORKSPACE}/.ssh/id_dsa-cert type -1
debug1: identity file {MY-WORKSPACE}/.ssh/id_ecdsa type -1
debug1: identity file {MY-WORKSPACE}/.ssh/id_ecdsa-cert type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.8p1 Debian-7ubuntu1
debug1: match: OpenSSH_5.8p1 Debian-7ubuntu1 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-md5 none
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: sending SSH2_MSG_KEX_ECDH_INIT
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ECDSA {SERVER_HOST_KEY}
debug1: Host 'SERVER_IP' is known and matches the ECDSA host key.
debug1: Found key in {MY-WORKSPACE}/.ssh/known_hosts:1
debug1: ssh_ecdsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: Roaming not allowed by server
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: password
debug1: Next authentication method: password
senthil@SERVER_IP's password:
debug1: Authentications that can continue: password
Permission denied, please try again.
senthil@SERVER_IP's password:

For the password, I entered the same value I normally use for the 'ubuntu' user. Can anyone please guide me to where the issue is and suggest a solution?

    Read the article

  • International Radio Operators Alphabet in F# & Silverlight – Part 1

    - by MarkPearl
    So I have been delving into F# more and more and thought the best way to learn the language is to write something useful. I have been meaning to get some more Silverlight knowledge (up to now I have mainly been doing WPF), so I came up with a really simple project that I can actually use at work. Simply put – I often get support calls from clients wanting new activation codes. One of our main apps was written in VB6 and has its own "security" that requires about a 45-character sequence for it to be activated. The catch is that each time you reopen the program it requires a different character sequence, which means that when we activate clients' systems we have to do it live! This involves us either referring them to a website, or reading the characters to them over the phone, and since nobody in the office knows the IROA off by heart we would come up with some interesting words to represent characters… 9 times out of 10 the client would type in the wrong character and we would have to start all over again… With this app I am hoping to reduce the errors of reading characters over the phone by treating it like a ham radio. My "Silverlight" application will allow the user to input a series of characters and the system will then generate the equivalent IROA words… very basic stuff, e.g.

Character Input – abc
Words Generated – Alpha Bravo Charlie

On Dot Net Rocks Show 541, Anders Hejlsberg mentioned that he felt many applications could make use of F#, but on an almost silo basis – meaning that you would write modules that lent themselves to functional programming in F# and then incorporate them into a solution where the front end may be in C# or where you would have some other sort of glue. I buy into this kind of approach, so in this project I will use F# to do my very intensive "business logic" and will use Silverlight/C# to do the front end.

F# Business Layer

I am no expert at this, so I am sure to get some feedback on ways I could improve my algorithm. My approach was really simple: I would need a function to convert a single character to a string – i.e. 'A' –> "Alpha" – and then I would need a function that would take a string of characters, convert it into a sequence of characters, and then apply my converter to return a sequence of words… make sense? Let's start with the CharToString function:

let CharToString (element:char) =
    match element.ToString().ToLower() with
    | "1" -> "1"
    | "5" -> "5"
    | "9" -> "9"
    | "2" -> "2"
    | "6" -> "6"
    | "0" -> "0"
    | "3" -> "3"
    | "7" -> "7"
    | "4" -> "4"
    | "8" -> "8"
    | "a" -> "Alpha"
    | "b" -> "Bravo"
    | "c" -> "Charlie"
    | "d" -> "Delta"
    | "e" -> "Echo"
    | "f" -> "Foxtrot"
    | "g" -> "Golf"
    | "h" -> "Hotel"
    | "i" -> "India"
    | "j" -> "Juliet"
    | "k" -> "Kilo"
    | "l" -> "Lima"
    | "m" -> "Mike"
    | "n" -> "November"
    | "o" -> "Oscar"
    | "p" -> "Papa"
    | "q" -> "Quebec"
    | "r" -> "Romeo"
    | "s" -> "Sierra"
    | "t" -> "Tango"
    | "u" -> "Uniform"
    | "v" -> "Victor"
    | "w" -> "Whiskey"
    | "x" -> "XRay"
    | "y" -> "Yankee"
    | "z" -> "Zulu"
    | element -> "Unknown"

Quite simple: an element is passed in, this element is then converted to a lowercase single-character string and then matched up with the equivalent word.
If by some chance a character is not recognized, "Unknown" will be returned… I now need a function that can parse each character of the string and generate a new sequence with the converted words:

let ConvertCharsToStrings (s:string) =
    s
    |> Seq.toArray
    |> Seq.map(fun elem -> CharToString(elem))

Here, Seq.toArray converts the string to a sequence of characters. I then searched for some way to parse through every element in the sequence. Originally I tried Seq.iter, but I think my understanding of what iter does was incorrect. Eventually I found Seq.map, which applies a function to every element in a sequence and then creates a new collection with the processed elements. It turned out to be exactly what I needed… To test that everything worked I created one more function that parses through every element in a sequence and prints it. At this point I realized that Seq.iter would be ideal for this… So my testing code is below:

let PrintStrings items =
    items |> Seq.iter(fun x -> Console.Write(x.ToString() + " "))

let newSeq = ConvertCharsToStrings("acdefg123")
PrintStrings newSeq
Console.ReadLine()

Pretty basic stuff I guess… I hope my approach was right? In Part 2 I will look into doing a simple Silverlight frontend, referencing the projects together and deploying…

    Read the article

  • Oracle Linux and Oracle VM pricing guide

    - by wcoekaer
    A few days ago someone showed me a pricing guide from a Linux vendor and I was a bit surprised at the complexity of it, especially when you look at larger servers (4 or 8 sockets) and when adding virtual machine use into the mix. I think we have a very compelling and simple pricing model for both Oracle Linux and Oracle VM. Let me see if I can explain it in 1 page, not 10 pages. This pricing information is publicly available on the Oracle store; I am using the current public list prices. Also keep in mind that this is for customers using non-Oracle x86 servers. When a customer purchases an Oracle x86 server, the annual systems support includes full use (all you can eat) of Oracle Linux, Oracle VM and Oracle Solaris (no matter how many VMs you run on that server, in case you deploy guests on a hypervisor). This support level is the equivalent of Premier support in the list below.

Let's start with Oracle VM (x86). Oracle VM support subscriptions are per physical server on which you deploy the Oracle VM Server product.

(1) Oracle VM Premier Limited - 1- or 2-socket server: $599 per server per year
(2) Oracle VM Premier - more than 2-socket server (4, 8 or whatever more): $1199 per server per year

The above includes the use of Oracle VM Manager and Oracle Enterprise Manager Cloud Control's Virtualization management pack (including the self-service cloud portal, etc.), 24x7 support, and access to bug fixes, updates and new releases. It also includes all options: live migrate, dynamic resource scheduling, high availability, dynamic power management, etc. If you want to play with the product, or even use the product without access to support services, the product is freely downloadable from edelivery.

Next, Oracle Linux. Oracle Linux support subscriptions are per physical server. If you plan to run Oracle Linux as a guest on Oracle VM, VMware or Hyper-V, you only have to pay for a single subscription per system; we do not charge per guest or per number of guests. In other words, you can run any number of Oracle Linux guests per physical server and count it as just a single subscription.

(1) Oracle Linux Network Support - any number of sockets per server: $119 per server per year
Network support does not offer support services. It provides access to the Unbreakable Linux Network and also offers full indemnification for Oracle Linux.

(2) Oracle Linux Basic Limited Support - 1- or 2-socket servers: $499 per server per year
This subscription provides 24x7 support services, access to the Unbreakable Linux Network and the Oracle Support portal, indemnification, use of Oracle Clusterware for Linux HA and use of Oracle Enterprise Manager Cloud Control for Linux OS management. It includes ocfs2 as a clustered filesystem.

(3) Oracle Linux Basic Support - more than 2-socket server (4, 8 or more): $1199 per server per year
This subscription provides 24x7 support services, access to the Unbreakable Linux Network and the Oracle Support portal, indemnification, use of Oracle Clusterware for Linux HA and use of Oracle Enterprise Manager Cloud Control for Linux OS management. It includes ocfs2 as a clustered filesystem.

(4) Oracle Linux Premier Limited Support - 1- or 2-socket servers: $1399 per server per year
This subscription provides 24x7 support services, access to the Unbreakable Linux Network and the Oracle Support portal, indemnification, use of Oracle Clusterware for Linux HA, use of Oracle Enterprise Manager Cloud Control for Linux OS management, and XFS filesystem support. It also offers Oracle Lifetime Support, backporting of patches for critical customers into previous versions of packages, and ksplice zero-downtime updates.

(5) Oracle Linux Premier Support - more than 2-socket servers: $2299 per server per year
This subscription provides 24x7 support services, access to the Unbreakable Linux Network and the Oracle Support portal, indemnification, use of Oracle Clusterware for Linux HA, use of Oracle Enterprise Manager Cloud Control for Linux OS management, and XFS filesystem support. It also offers Oracle Lifetime Support, backporting of patches for critical customers into previous versions of packages, and ksplice zero-downtime updates.

(6) Freely available Oracle Linux - any number of sockets
You can freely download Oracle Linux, install it on any number of servers and use it for any reason, without support, without the right to use extra features like Oracle Clusterware or ksplice, and without indemnification. However, you do have full access to all errata as well. Need support? Then use options (1)-(5).

So that's it. Count the number of 2-socket boxes and the number of boxes with more than 2 sockets, decide on a Basic or Premier support level, and you are done. You don't have to worry about different levels based on how many virtual instances you deploy or want to deploy. A very simple menu of choices. We offer, inclusive: Linux OS clusterware, Linux OS management (provisioning and monitoring), a cluster filesystem (ocfs2), a high-performance filesystem (XFS), dtrace, ksplice, and OFED (the InfiniBand stack for high-performance networking). No separate add-on menus.

NOTE: a socket/CPU can have any number of cores. So whether you have a 4-, 6-, 8-, 10- or 12-core CPU doesn't matter; we count the number of physical CPUs.
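To make the arithmetic concrete, here is a small illustrative helper - entirely hypothetical, the names and structure are mine and not any Oracle API - that maps a server's socket count and support level to the list prices quoted above:

using System;

// Hypothetical sketch only: encodes the per-server, per-year list prices quoted in this post.
// A "Limited" subscription applies to 1- or 2-socket servers; larger servers use the full price.
static class ListPrices
{
    public static int OracleVm(int sockets)
    {
        return sockets <= 2 ? 599 : 1199;
    }

    public static int OracleLinux(int sockets, string level)
    {
        bool small = sockets <= 2;
        switch (level)
        {
            case "Network": return 119;                 // any number of sockets
            case "Basic":   return small ? 499 : 1199;
            case "Premier": return small ? 1399 : 2299;
            default: throw new ArgumentException("Unknown support level");
        }
    }
}

For example, OracleLinux(8, "Premier") returns 2299: the socket count only matters insofar as it crosses the two-socket boundary, and the number of cores per socket never enters the calculation.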

    Read the article

  • Slow boot on Ubuntu 12.04

    - by Hailwood
    My Ubuntu is booting really slowly (Windows boots faster...). I am running Ubuntu 12.04 with gnome-shell 3.4.1 on a Dell Inspiron 1545: Pentium(R) Dual-Core CPU T4300 @ 2.10GHz, 4GB RAM, 500GB HDD. After running dmesg, the culprit seems to be this section, in particular the last three lines:

[26.557659] ADDRCONF(NETDEV_UP): eth0: link is not ready
[26.565414] ADDRCONF(NETDEV_UP): eth0: link is not ready
[27.355355] Console: switching to colour frame buffer device 170x48
[27.362346] fb0: radeondrmfb frame buffer device
[27.362347] drm: registered panic notifier
[27.362357] [drm] Initialized radeon 2.12.0 20080528 for 0000:01:00.0 on minor 0
[27.617435] init: udev-fallback-graphics main process (1049) terminated with status 1
[30.064481] init: plymouth-stop pre-start process (1500) terminated with status 1
[51.708241] CE: hpet increased min_delta_ns to 20113 nsec
[59.448029] eth2: no IPv6 routers present

But I have no idea how to start debugging this.

$ sudo lshw -C video
*-display
   description: VGA compatible controller
   product: RV710 [Mobility Radeon HD 4300 Series]
   vendor: Hynix Semiconductor (Hyundai Electronics)
   physical id: 0
   bus info: pci@0000:01:00.0
   version: 00
   width: 32 bits
   clock: 33MHz
   capabilities: pm pciexpress msi vga_controller bus_master cap_list rom
   configuration: driver=fglrx_pci latency=0
   resources: irq:48 memory:e0000000-efffffff ioport:de00(size=256) memory:f6df0000-f6dfffff memory:f6d00000-f6d1ffff

After loading the proprietary driver my new dmesg log is below (starting from the first major time gap): [2.983741] EXT4-fs (sda6): mounted filesystem with ordered data mode. Opts: (null) [25.094327] ADDRCONF(NETDEV_UP): eth0: link is not ready [25.119737] udevd[520]: starting version 175 [25.167086] lp: driver loaded but no devices found [25.215341] fglrx: module license 'Proprietary. (C) 2002 - ATI Technologies, Starnberg, GERMANY' taints kernel. [25.215345] Disabling lock debugging due to kernel taint [25.231924] wmi: Mapper loaded [25.318414] lib80211: common routines for IEEE802.11 drivers [25.318418] lib80211_crypt: registered algorithm 'NULL' [25.331631] [fglrx] Maximum main memory to use for locked dma buffers: 3789 MBytes. [25.332095] [fglrx] vendor: 1002 device: 9552 count: 1 [25.334206] [fglrx] ioport: bar 1, base 0xde00, size: 0x100 [25.334229] pci 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [25.334235] pci 0000:01:00.0: setting latency timer to 64 [25.337109] [fglrx] Kernel PAT support is enabled [25.337140] [fglrx] module loaded - fglrx 8.96.4 [Mar 12 2012] with 1 minors [25.342803] Adding 4189180k swap on /dev/sda7.
Priority:-1 extents:1 across:4189180k [25.364031] type=1400 audit(1338241723.027:2): apparmor="STATUS" operation="profile_load" name="/sbin/dhclient" pid=606 comm="apparmor_parser" [25.364491] type=1400 audit(1338241723.031:3): apparmor="STATUS" operation="profile_load" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=606 comm="apparmor_parser" [25.364760] type=1400 audit(1338241723.031:4): apparmor="STATUS" operation="profile_load" name="/usr/lib/connman/scripts/dhclient-script" pid=606 comm="apparmor_parser" [25.394328] wl 0000:0c:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17 [25.394343] wl 0000:0c:00.0: setting latency timer to 64 [25.415531] acpi device:36: registered as cooling_device2 [25.416688] input: Video Bus as /devices/LNXSYSTM:00/device:00/PNP0A03:00/device:34/LNXVIDEO:00/input/input6 [25.416795] ACPI: Video Device [VID] (multi-head: yes rom: no post: no) [25.416865] [Firmware Bug]: Duplicate ACPI video bus devices for the same VGA controller, please try module parameter "video.allow_duplicates=1"if the current driver doesn't work. [25.425133] lib80211_crypt: registered algorithm 'TKIP' [25.448058] snd_hda_intel 0000:00:1b.0: PCI INT A -> GSI 21 (level, low) -> IRQ 21 [25.448321] snd_hda_intel 0000:00:1b.0: irq 47 for MSI/MSI-X [25.448353] snd_hda_intel 0000:00:1b.0: setting latency timer to 64 [25.738867] eth1: Broadcom BCM4315 802.11 Hybrid Wireless Controller 5.100.82.38 [25.761213] input: HDA Intel Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input7 [25.761406] input: HDA Intel Headphone as /devices/pci0000:00/0000:00:1b.0/sound/card0/input8 [25.783432] dcdbas dcdbas: Dell Systems Management Base Driver (version 5.6.0-3.2) [25.908318] EXT4-fs (sda6): re-mounted. Opts: errors=remount-ro [25.928155] input: Dell WMI hotkeys as /devices/virtual/input/input9 [25.960561] udevd[543]: renamed network interface eth1 to eth2 [26.285688] init: failsafe main process (835) killed by TERM signal [26.396426] input: PS/2 Mouse as /devices/platform/i8042/serio2/input/input10 [26.423108] input: AlpsPS/2 ALPS GlidePoint as /devices/platform/i8042/serio2/input/input11 [26.511297] Bluetooth: Core ver 2.16 [26.511383] NET: Registered protocol family 31 [26.511385] Bluetooth: HCI device and connection manager initialized [26.511388] Bluetooth: HCI socket layer initialized [26.511391] Bluetooth: L2CAP socket layer initialized [26.512079] Bluetooth: SCO socket layer initialized [26.530164] Bluetooth: BNEP (Ethernet Emulation) ver 1.3 [26.530168] Bluetooth: BNEP filters: protocol multicast [26.553893] type=1400 audit(1338241724.219:5): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient" pid=928 comm="apparmor_parser" [26.554860] Bluetooth: RFCOMM TTY layer initialized [26.554866] Bluetooth: RFCOMM socket layer initialized [26.554868] Bluetooth: RFCOMM ver 1.11 [26.557910] type=1400 audit(1338241724.223:6): apparmor="STATUS" operation="profile_load" name="/usr/lib/lightdm/lightdm/lightdm-guest-session-wrapper" pid=927 comm="apparmor_parser" [26.559166] type=1400 audit(1338241724.223:7): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=928 comm="apparmor_parser" [26.559574] type=1400 audit(1338241724.223:8): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=928 comm="apparmor_parser" [26.575519] type=1400 audit(1338241724.239:9): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/mission-control-5" pid=931 comm="apparmor_parser" [26.581100] type=1400 
audit(1338241724.247:10): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/telepathy-*" pid=931 comm="apparmor_parser" [26.582794] type=1400 audit(1338241724.247:11): apparmor="STATUS" operation="profile_load" name="/usr/bin/evince" pid=929 comm="apparmor_parser" [26.605672] ppdev: user-space parallel port driver [27.592475] sky2 0000:09:00.0: eth0: enabling interface [27.604329] ADDRCONF(NETDEV_UP): eth0: link is not ready [27.606962] ADDRCONF(NETDEV_UP): eth0: link is not ready [27.852509] vesafb: mode is 1024x768x32, linelength=4096, pages=0 [27.852513] vesafb: scrolling: redraw [27.852515] vesafb: Truecolor: size=0:8:8:8, shift=0:16:8:0 [27.852523] mtrr: type mismatch for e0000000,400000 old: write-back new: write-combining [27.852527] mtrr: type mismatch for e0000000,200000 old: write-back new: write-combining [27.852531] mtrr: type mismatch for e0000000,100000 old: write-back new: write-combining [27.852534] mtrr: type mismatch for e0000000,80000 old: write-back new: write-combining [27.852538] mtrr: type mismatch for e0000000,40000 old: write-back new: write-combining [27.852541] mtrr: type mismatch for e0000000,20000 old: write-back new: write-combining [27.852544] mtrr: type mismatch for e0000000,10000 old: write-back new: write-combining [27.852548] mtrr: type mismatch for e0000000,8000 old: write-back new: write-combining [27.852551] mtrr: type mismatch for e0000000,4000 old: write-back new: write-combining [27.852554] mtrr: type mismatch for e0000000,2000 old: write-back new: write-combining [27.852558] mtrr: type mismatch for e0000000,1000 old: write-back new: write-combining [27.853154] vesafb: framebuffer at 0xe0000000, mapped to 0xffffc90005580000, using 3072k, total 3072k [27.853405] Console: switching to colour frame buffer device 128x48 [27.853426] fb0: VESA VGA frame buffer device [28.539800] fglrx_pci 0000:01:00.0: irq 48 for MSI/MSI-X [28.540552] [fglrx] Firegl kernel thread PID: 1168 [28.540679] [fglrx] Firegl kernel thread PID: 1169 [28.540789] [fglrx] Firegl kernel thread PID: 1170 [28.540932] [fglrx] IRQ 48 Enabled [29.845620] [fglrx] Gart USWC size:1236 M. [29.845624] [fglrx] Gart cacheable size:489 M. [29.845629] [fglrx] Reserved FB block: Shared offset:0, size:1000000 [29.845632] [fglrx] Reserved FB block: Unshared offset:fc21000, size:3df000 [29.845635] [fglrx] Reserved FB block: Unshared offset:1fffb000, size:5000 [59.700023] eth2: no IPv6 routers present

    Read the article

  • Do Not Optimize Without Measuring

    - by Alois Kraus
    Recently I had to do some performance work which included reading a lot of code. It is fascinating what ideas people come up with to solve a problem. Especially when there is no problem. When you look at other people's code you will not be able to tell if it is well performing or not by reading it. You need to execute it with some sort of tracing or, even better, under a profiler. The first rule of the performance club is not to think and then to optimize, but to measure, think and then optimize. The second rule is to do this in a loop, to prevent bad things from slipping into your code base for too long. If you skip the measure step for some reason and optimize directly, it is like changing the wave function in quantum mechanics. This has no observable effect in our world, since it represents only a probability distribution of all possible values. In quantum mechanics you need to let the wave function collapse to a single value. A collapsed wave function has therefore not many but one distinct value. This is what we physicists call a measurement. If you optimize your application without measuring it, you are just changing the probability distribution of your potential performance values. Which performance your application actually has is still unknown. You only know that it will be within a specific range with a certain probability. As usual there are unlikely values within your distribution, like a startup time of 20 minutes which should only happen once in 100,000 years. 100,000 years are a very short time when the first customer tries your heavily distributed networking application over a slow WiFi network… What is the point of this? Every programmer/architect has a mental performance model in his head. A model always has a set of explicit preconditions and a lot more implicit assumptions baked into it. When the model is good it will help you to think of good designs, but it can also be the source of problems. In real-world systems, not all assumptions of your performance model (implicit or explicit) hold true any longer. The only way to connect your performance model and the real world is to measure it. In the WiFi example the model assumed a low-latency, high-bandwidth LAN connection. If this assumption becomes wrong, the system has a drastic change in startup time. Let's look at an example. Let's assume we want to cache some expensive UI resource like font objects. For this undertaking we create a Cache class with the UI themes we want to support. Since fonts are expensive objects, we create them on demand the first time a theme is requested. A simple example of a theme cache might look like this:

using System;
using System.Collections.Generic;
using System.Drawing;

struct Theme
{
    public Color Color;
    public Font Font;
}

static class ThemeCache
{
    static Dictionary<string, Theme> _Cache = new Dictionary<string, Theme>
    {
        { "Default", new Theme { Color = Color.AliceBlue }},
        { "Theme12", new Theme { Color = Color.Aqua }},
    };

    public static Theme Get(string theme)
    {
        Theme cached = _Cache[theme];
        if (cached.Font == null)
        {
            Console.WriteLine("Creating new font");
            cached.Font = new Font("Arial", 8);
        }
        return cached;
    }
}

class Program
{
    static void Main(string[] args)
    {
        Theme item = ThemeCache.Get("Theme12");
        item = ThemeCache.Get("Theme12");
    }
}

This cache creates font objects only once, since on the first retrieval of the Theme object the font is added to the Theme object. When we let the application run it should print "Creating new font" only once. Right? Wrong!
The vigilant readers have spotted the issue already. The creator of this cache class wanted to get maximum performance, so he decided that the Theme object should be a value type (struct) to avoid putting too much pressure on the garbage collector. The code

Theme cached = _Cache[theme];
if (cached.Font == null)
{
    Console.WriteLine("Creating new font");
    cached.Font = new Font("Arial", 8);
}

works with a copy of the value stored in the dictionary. This means we mutate a copy of the Theme object and return it to our caller, but the original Theme object in the dictionary will always have null for the Font field! The solution is to change the declaration of struct Theme to class Theme, or to update the theme object in the dictionary. Our cache, as it currently stands, is actually a non-caching cache. The funny thing is that I found this out with a profiler, by looking at which objects were finalized; I found way too many font objects being finalized. After a bit of debugging I found that the allocation source for the Font objects was this cache. Since this cache was there for years, it means that: the cache was never needed, since I found no perf issue due to the creation of font objects; the cache was never profiled to see if it brought any performance gain; and to make the cache beneficial it would need to be accessed much more often. That was the story of the non-caching cache. Next time I will write something about measuring.
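As an aside, a minimal sketch of the second fix mentioned above - keeping Theme a struct but writing the mutated copy back into the dictionary - could look like this (my variation, not code from the original post):

public static Theme Get(string theme)
{
    Theme cached = _Cache[theme];
    if (cached.Font == null)
    {
        Console.WriteLine("Creating new font");
        cached.Font = new Font("Arial", 8);
        _Cache[theme] = cached; // store the mutated copy back, so the font is created only once
    }
    return cached;
}

With the write-back in place the cache really does create the font only once; changing struct Theme to class Theme achieves the same thing by making the dictionary and the caller share one object.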

    Read the article

  • Super Joybox 5 HID 0925:8884 not recognized as joystick in Ubuntu 12.04 LTS

    - by Tim Evans
    Problem: When using the "Super JoyBox 5" 4-port PlayStation 2 to USB adapter, the device is not recognized as a joystick. There is no js0 created; instead, another input eventX and mouseX are created in /dev/input. When using the directional buttons (up, down, left, right) on a PlayStation 1 controller attached to the device, the mouse cursor moves to the top, bottom, left, and right edges of the screen respectively. Buttons are unresponsive. The joypads attached to the device cannot be used in any games or other programs. Attempted remedies: Creating a symlink from the eventX to js0 does not solve the problem. Additional info: joydev is loaded and running properly according to lsmod. evtest can be run on the created eventX (sudo evtest /dev/input/event14 in my case) and the buttons and axes all register inputs. Here is a paste of evtest's diagnostics and the first couple of button events:

[code]
sudo evtest /dev/input/event14
Input driver version is 1.0.1
Input device ID: bus 0x3 vendor 0x925 product 0x8884 version 0x100
Input device name: "HID 0925:8884"
Supported events:
  Event type 0 (EV_SYN)
  Event type 1 (EV_KEY)
    Event code 288 (BTN_TRIGGER)
    Event code 289 (BTN_THUMB)
    Event code 290 (BTN_THUMB2)
    Event code 291 (BTN_TOP)
    Event code 292 (BTN_TOP2)
    Event code 293 (BTN_PINKIE)
    Event code 294 (BTN_BASE)
    Event code 295 (BTN_BASE2)
    Event code 296 (BTN_BASE3)
    Event code 297 (BTN_BASE4)
    Event code 298 (BTN_BASE5)
    Event code 299 (BTN_BASE6)
    Event code 300 (?)
    Event code 301 (?)
    Event code 302 (?)
    Event code 303 (BTN_DEAD)
    Event code 304 (BTN_A)
    Event code 305 (BTN_B)
    Event code 306 (BTN_C)
    Event code 307 (BTN_X)
    Event code 308 (BTN_Y)
    Event code 309 (BTN_Z)
    Event code 310 (BTN_TL)
    Event code 311 (BTN_TR)
    Event code 312 (BTN_TL2)
    Event code 313 (BTN_TR2)
    Event code 314 (BTN_SELECT)
    Event code 315 (BTN_START)
    Event code 316 (BTN_MODE)
    Event code 317 (BTN_THUMBL)
    Event code 318 (BTN_THUMBR)
    Event code 319 (?)
    Event code 320 (BTN_TOOL_PEN)
    Event code 321 (BTN_TOOL_RUBBER)
    Event code 322 (BTN_TOOL_BRUSH)
    Event code 323 (BTN_TOOL_PENCIL)
    Event code 324 (BTN_TOOL_AIRBRUSH)
    Event code 325 (BTN_TOOL_FINGER)
    Event code 326 (BTN_TOOL_MOUSE)
    Event code 327 (BTN_TOOL_LENS)
    Event code 328 (?)
    Event code 329 (?)
    Event code 330 (BTN_TOUCH)
    Event code 331 (BTN_STYLUS)
    Event code 332 (BTN_STYLUS2)
    Event code 333 (BTN_TOOL_DOUBLETAP)
    Event code 334 (BTN_TOOL_TRIPLETAP)
    Event code 335 (BTN_TOOL_QUADTAP)
  Event type 3 (EV_ABS)
    Event code 0 (ABS_X) Value 127 Min 0 Max 255 Flat 15
    Event code 1 (ABS_Y) Value 127 Min 0 Max 255 Flat 15
    Event code 2 (ABS_Z) Value 127 Min 0 Max 255 Flat 15
    Event code 3 (ABS_RX) Value 127 Min 0 Max 255 Flat 15
    Event code 4 (ABS_RY) Value 127 Min 0 Max 255 Flat 15
    Event code 5 (ABS_RZ) Value 127 Min 0 Max 255 Flat 15
    Event code 6 (ABS_THROTTLE) Value 127 Min 0 Max 255 Flat 15
    Event code 7 (ABS_RUDDER) Value 127 Min 0 Max 255 Flat 15
    Event code 8 (ABS_WHEEL) Value 127 Min 0 Max 255 Flat 15
    Event code 9 (ABS_GAS) Value 127 Min 0 Max 255 Flat 15
    Event code 10 (ABS_BRAKE) Value 127 Min 0 Max 255 Flat 15
    Event code 11 (?) Value 127 Min 0 Max 255 Flat 15
    Event code 12 (?) Value 127 Min 0 Max 255 Flat 15
    Event code 13 (?) Value 127 Min 0 Max 255 Flat 15
    Event code 14 (?) Value 127 Min 0 Max 255 Flat 15
    Event code 15 (?) Value 127 Min 0 Max 255 Flat 15
    Event code 16 (ABS_HAT0X) Value 0 Min -1 Max 1
    Event code 17 (ABS_HAT0Y) Value 0 Min -1 Max 1
    Event code 18 (ABS_HAT1X) Value 0 Min -1 Max 1
    Event code 19 (ABS_HAT1Y) Value 0 Min -1 Max 1
    Event code 20 (ABS_HAT2X) Value 0 Min -1 Max 1
    Event code 21 (ABS_HAT2Y) Value 0 Min -1 Max 1
    Event code 22 (ABS_HAT3X) Value 0 Min -1 Max 1
    Event code 23 (ABS_HAT3Y) Value 0 Min -1 Max 1
  Event type 4 (EV_MSC)
    Event code 4 (MSC_SCAN)
Testing ... (interrupt to exit)
Event: time 1351223176.126127, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90001
Event: time 1351223176.126130, type 1 (EV_KEY), code 288 (BTN_TRIGGER), value 1
Event: time 1351223176.126166, -------------- SYN_REPORT ------------
Event: time 1351223178.238127, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90001
Event: time 1351223178.238130, type 1 (EV_KEY), code 288 (BTN_TRIGGER), value 0
Event: time 1351223178.238167, -------------- SYN_REPORT ------------
Event: time 1351223180.422127, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90002
Event: time 1351223180.422129, type 1 (EV_KEY), code 289 (BTN_THUMB), value 1
Event: time 1351223180.422163, -------------- SYN_REPORT ------------
Event: time 1351223181.558099, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90002
Event: time 1351223181.558102, type 1 (EV_KEY), code 289 (BTN_THUMB), value 0
Event: time 1351223181.558137, -------------- SYN_REPORT ------------
Event: time 1351223182.486137, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90003
Event: time 1351223182.486140, type 1 (EV_KEY), code 290 (BTN_THUMB2), value 1
Event: time 1351223182.486172, -------------- SYN_REPORT ------------
Event: time 1351223183.302130, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90003
Event: time 1351223183.302132, type 1 (EV_KEY), code 290 (BTN_THUMB2), value 0
Event: time 1351223183.302165, -------------- SYN_REPORT ------------
Event: time 1351223184.030133, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90004
Event: time 1351223184.030136, type 1 (EV_KEY), code 291 (BTN_TOP), value 1
Event: time 1351223184.030166, -------------- SYN_REPORT ------------
Event: time 1351223184.558135, type 4 (EV_MSC), code 4 (MSC_SCAN), value 90004
Event: time 1351223184.558138, type 1 (EV_KEY), code 291 (BTN_TOP), value 0
Event: time 1351223184.558168, -------------- SYN_REPORT ------------
[/code]

The directional buttons on the pad are being identified as HAT0Y and HAT0X axes (that's zero, not the letter O). Apparently, this device used to work flawlessly on kernel 2.4.x systems, and even as late as Ubuntu 10.04. Perhaps the joydev rules for identifying joypads have changed? Currently, this kind of bug is affecting a few different types of controller adapters, but since this is the one that I PERSONALLY have (and it has been driving me my own special brand of crazy), it's the one I'm documenting. What I think should be happening instead: The device should register js0 through js3, one for each port, or a single js0 that handles all of the connected devices, with different numbered axes for each connected joypad. Either way, it should work as a joystick and stop controlling the mouse cursor. Please help!

    Read the article

  • Use Drive Mirroring for Instant Backup in Windows 7

    - by Trevor Bekolay
    Even with the best backup solution, a hard drive crash means you'll lose a few hours of work. By enabling drive mirroring in Windows 7, you'll always have an up-to-date copy of your data. Windows 7's mirroring - which is only available in the Professional, Enterprise, and Ultimate editions - is a software implementation of RAID 1, which means that two or more disks hold the exact same data. The files are constantly kept in sync, so that if one of the disks fails, you won't lose any data. Note that mirroring is not technically a backup solution, because if you accidentally delete a file, it's gone from both hard disks (though you may be able to recover the file). As an additional caveat, having mirrored disks requires changing them to "dynamic disks," which can only be read within modern versions of Windows (you may have problems working with a dynamic disk in other operating systems or in older versions of Windows). See this Wikipedia page for more information. You will need at least one empty disk to set up disk mirroring. We'll show you how to mirror an existing disk (of equal or lesser size) without losing any data on the mirrored drive, and how to set up two empty disks as mirrored copies from the get-go.

Mirroring an Existing Drive

Click on the start button and type partitions in the search box. Click on the Create and format hard disk partitions entry that shows up. Alternatively, if you've disabled the search box, press Win+R to open the Run window and type in: diskmgmt.msc. The Disk Management window will appear. We've got a small disk, labeled OldData, that we want to mirror to a second disk of the same size. Note: the disk that you will use to mirror the existing disk must be unallocated. If it is not, then right-click on it and select Delete Volume… to mark it as unallocated. This will destroy any data on that drive. Right-click on the existing disk that you want to mirror. Select Add Mirror…. Select the disk that you want to use to mirror the existing disk's data and press Add Mirror. You will be warned that this process will change the existing disk from basic to dynamic. Note that this process will not delete any data on the disk! The new disk will be marked as a mirror, and it will start copying data from the existing drive to the new one. Eventually the drives will be synced up (it can take a while), and any data added to the E: drive will exist on both physical hard drives.

Setting Up Two New Drives as Mirrored

If you have two new equal-sized drives, you can format them to be mirrored copies of each other from the get-go. Open the Disk Management window as described above. Make sure that the drives are unallocated. If they're not, and you don't need the data on either of them, right-click and select Delete volume…. Right-click on one of the unallocated drives and select New Mirrored Volume…. A wizard will pop up. Click Next. Click on the drives you want to hold the mirrored data and click Add. Note that you can add any number of drives. Click Next. Assign it a drive letter that makes sense, and then click Next. You're limited to using the NTFS file system for mirrored drives, so enter a volume label, enable compression if you want, and then click Next. Click Finish to start formatting the drives. You will be warned that the new drives will be converted to dynamic disks. And that's it! You now have two mirrored drives. Any files added to E: will reside on both physical disks, in case something happens to one of them.
Conclusion

While the switch from basic to dynamic disks can be a problem for people who dual-boot into another operating system, setting up drive mirroring is an easy way to make sure that your data can be recovered in case of a hard drive crash. Of course, even with drive mirroring, we advocate regular backups to external drives or online backup services.

    Read the article

  • Why lock-free data structures just aren't lock-free enough

    - by Alex.Davies
    Today's post will explore why the current ways to communicate between threads don't scale, and show you a possible way to build scalable parallel programming on top of shared memory.

The problem with shared memory

Soon, we will have dozens, hundreds and then millions of cores in our computers. It's inevitable, because individual cores just can't get much faster. At some point, that's going to mean that we have to rethink our architecture entirely, as millions of cores can't all access a shared memory space efficiently. But millions of cores are still a long way off, and in the meantime we'll see machines with dozens of cores, struggling with shared memory. Alex's tip: The best way for an application to make use of that increasing parallel power is to use a concurrency model like actors, which deals with synchronisation issues for you. Then, the maintainer of the actors framework can find the most efficient way to coordinate access to shared memory to allow your actors to pass messages to each other efficiently. At the moment, NAct uses the .NET thread pool and a few locks to marshal messages. It works well on dual and quad core machines, but it won't scale to more cores. Every time we use a lock, our core performs an atomic memory operation (e.g. CAS) on a cell of memory representing the lock, so it's sure that no other core can possibly have that lock. This is very fast when the lock isn't contended, but we need to notify all the other cores, in case they held the cell of memory in a cache. As the number of cores increases, the total cost of a lock increases linearly. A lot of work has been done on "lock-free" data structures, which avoid locks by using atomic memory operations directly. These give fairly dramatic performance improvements, particularly on systems with a few (2 to 4) cores. The .NET 4 concurrent collections in System.Collections.Concurrent are mostly lock-free. However, lock-free data structures still don't scale indefinitely, because any use of an atomic memory operation still involves every core in the system.

A sync-free data structure

Some concurrent data structures can be written in a completely synchronization-free way, without using any atomic memory operations. One useful example is a single producer, single consumer (SPSC) queue. It's easy to write a sync-free fixed-size SPSC queue using a circular buffer*. Slightly trickier is a queue that grows as needed. You can use a linked list to represent the queue, but if you leave the nodes to be garbage collected once you're done with them, the GC will need to involve all the cores in collecting the finished nodes. Instead, I've implemented a proof of concept inspired by this Intel article which reuses the nodes by putting them in a second queue to send back to the producer. * In all these cases, you need to use memory barriers correctly, but these are local to a core, so they don't have the same scalability problems as atomic memory operations.

Performance tests

I tried benchmarking my SPSC queue against the .NET ConcurrentQueue, and against a standard Queue protected by locks. In some ways, this isn't a fair comparison, because both of these support multiple producers and multiple consumers, but I'll come to that later. I started on my dual-core laptop, running a simple test that had one thread producing 64-bit integers, and another consuming them, to measure the pure overhead of the queue. So, nothing very interesting here. Both concurrent collections perform better than the lock-based one, as expected, but there's not a lot to choose between the ConcurrentQueue and my SPSC queue. I was a little disappointed, but then, the .NET Framework team spent a lot longer optimising it than I did. So I dug out a more powerful machine that Red Gate's DBA tools team had been using for testing. It is a 6-core Intel i7 machine with hyperthreading, adding up to 12 logical cores. Now the results get more interesting. As I increased the number of producer-consumer pairs to 6 (to saturate all 12 logical cores), the locking approach was slow, and got even slower, as you'd expect. What I didn't expect to be so clear was the drop-off in performance of the lock-free ConcurrentQueue. I could see the machine only using about 20% of available CPU cycles when it should have been saturated. My interpretation is that as all the cores used atomic memory operations to safely access the queue, they ended up spending most of the time notifying each other about cache lines that needed invalidating. The sync-free approach scaled perfectly, despite still working via shared memory, which, after all, should still be a bottleneck. I can't quite believe that the results are so clear, so if you can think of any other effects that might cause them, please comment! Obviously, this benchmark isn't realistic because we're only measuring the overhead of the queue. Any real workload, even on a machine with 12 cores, would dwarf the overhead, and there'd be no point worrying about this effect. But would that be true on a machine with 100 cores?

Still to be solved

The trouble is, you can't build many concurrent algorithms using only an SPSC queue to communicate. In particular, I can't see a way to build something as general-purpose as actors on top of just SPSC queues. Fundamentally, an actor needs to be able to receive messages from multiple other actors, which seems to need an MPSC queue. I've been thinking about ways to build a sync-free MPSC queue out of multiple SPSC queues and some kind of sign-up mechanism. Hopefully I'll have something to tell you about soon, but leave a comment if you have any ideas.
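For readers who want to see the shape of such a queue, here is a minimal sketch of a fixed-size, sync-free SPSC ring buffer in C# (my own illustration, not the NAct or benchmark code; it assumes exactly one producer thread and exactly one consumer thread, and uses volatile reads/writes as the core-local memory barriers mentioned in the footnote):

using System.Threading;

// Sketch of a fixed-size single-producer, single-consumer queue.
// Only one thread may call TryEnqueue and only one may call TryDequeue.
// System.Threading.Volatile is .NET 4.5+; on .NET 4, Thread.VolatileRead/Write
// would play the same role.
public sealed class SpscQueue<T>
{
    private readonly T[] _buffer;
    private int _head; // next slot to read  (written only by the consumer)
    private int _tail; // next slot to write (written only by the producer)

    public SpscQueue(int capacity) { _buffer = new T[capacity + 1]; }

    public bool TryEnqueue(T item)
    {
        int tail = _tail;
        int next = (tail + 1) % _buffer.Length;
        if (next == Volatile.Read(ref _head)) return false; // queue is full
        _buffer[tail] = item;
        Volatile.Write(ref _tail, next); // publish the slot after filling it
        return true;
    }

    public bool TryDequeue(out T item)
    {
        int head = _head;
        if (head == Volatile.Read(ref _tail)) { item = default(T); return false; } // empty
        item = _buffer[head];
        Volatile.Write(ref _head, (head + 1) % _buffer.Length);
        return true;
    }
}

Note that neither method performs an atomic read-modify-write operation: each index has a single writer, so no core ever has to win a contended CAS to make progress, which is exactly the property the benchmark above exploits.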

    Read the article

  • Ranking - an Introduction

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved

Ranking

Ranking is quite common on the internet. Readers are asked to rank their latest reading by clicking on one of 5 (sometimes 10) stars. The number of stars is then converted to a number, and the average number of stars as selected by all the readers is proudly (or shamefully) displayed for future readers. SharePoint 2007 lacked this feature altogether. SharePoint 2010 allows users to rank items in a list or documents in a library (the two are actually the same, because a library is actually a list). But in SP2010 the computation of the average is done later on a timer, rather than on the spot as it should be. I suspect that the reason for this shortcoming is that they did not involve a mathematician! Let me explain. Ranking is kept in a related list. When a user rates a document, an item is added to the rank-list with the item id, the user name, and his number of stars. The fact that a user has already ranked an item prevents him from ranking it again. This prevents the creator of the item from asking his mother to rank it a 5 and do it 753 times, thus stuffing the ballot box. Some systems will allow a user to change his rating, and this will be done by updating the rank-list item. Now, when the timer kicks off, the list is scanned and, for each item, the rank-list items containing its id are summed up and divided by the number of votes, thus yielding the new average. This is obviously very time consuming and very server intensive. In the 18th century an early actuary named James Dodson used what the great Augustus De Morgan (of De Morgan's law) later named commutation tables. The labor involved in computing a life insurance premium was staggering and also very error prone. Clerks with pencil and paper would multiply and add mountains of numbers to do the task. The more steps, the greater the probability of error and the more expensive the process. Commutation tables created a "summary" of many steps and reduced the work 100-fold. So had Microsoft taken a lesson from the history of computation, they would have developed a much faster way of rating that can be done in real time and is also 100 times faster and less CPU intensive. How do we do this? We use a form of commutation. We always keep the number of votes and the total of stars. One simple division gives us the average. So we write an event receiver. When a vote is added, we just add the stars to the total-stars and 1 to the number of votes. We then recompute the average. When a vote is updated, we reduce the total by the old vote, increase it by the new vote and leave the number of votes the same. Again we do the division to get the new average. When a vote is deleted (highly unlikely and maybe even prohibited), we reduce the total by that vote and reduce the number of votes by 1… Gone are the days of scanning lists, counting items, and tallying votes, and we have no need for a timer process to run it all. This is the first of a few treatises on ranking. Even though I discussed the math and the history thereof, here I am only going to solve the presentation issue. I wanted to create the CSS and JScript needed to display the stars, create the various effects like hovering and clicking (onmouseover, onmouseout, onclick, etc.), and I wanted to create a general solution with any number of stars. When I had it all done, I created the ranking game so that I could test it. The game is interesting in and of itself, so here it is (or go to the games page and select "rank the stars").
BTW, when you play it, look at the source code and see how it was all done. Next: how the 5 stars are displayed in the New and Update forms. When the whole set of articles is done, you'll be able to create the complete solution. That's all folks!
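As a rough illustration of the commutation idea described above, here is a minimal sketch of the running-total bookkeeping (written in C++ for brevity; an actual SharePoint solution would implement this inside a C# event receiver, and all names below are hypothetical):

    #include <iostream>

    // Running totals for one rated item: the "commutation" summary.
    struct RatingTally {
        long totalStars = 0;  // sum of all stars ever given
        long voteCount  = 0;  // number of votes cast

        // A new vote arrives (ItemAdded).
        void addVote(int stars) { totalStars += stars; ++voteCount; }

        // A user changes an existing vote (ItemUpdated); vote count is unchanged.
        void updateVote(int oldStars, int newStars) { totalStars += newStars - oldStars; }

        // A vote is withdrawn (ItemDeleted).
        void deleteVote(int stars) { totalStars -= stars; --voteCount; }

        // One division yields the average; no list scan, no timer job.
        double average() const { return voteCount ? double(totalStars) / voteCount : 0.0; }
    };

    int main() {
        RatingTally tally;
        tally.addVote(5);
        tally.addVote(3);
        tally.updateVote(3, 4);               // the second voter reconsiders
        std::cout << tally.average() << '\n'; // prints 4.5
    }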

    Read the article

  • Calling All Agile Customers-Share Your Stories at the Upcoming PLM Summit

    - by Terri Hiskey
    Now that we've closed the door on another Oracle OpenWorld, planning is in full swing for the next PLM Summit, taking place February 4-6, 2013 in San Francisco, in conjunction with the Oracle Value Chain Summit. This event is a must-attend for all Agile PLM customers. We will be holding five tracks with over forty Agile PLM-focused sessions covering a range of topics and industries. If you'd like to be notified once registration is live for this event, be sure to sign up at www.oracle.com/goto/vcs.

CALL FOR PRESENTATIONS: We are looking for some fresh, new customer stories to share with attendees. Read below for descriptions of the five tracks and the suggested topics that we'd like to hear from customers. If you are interested in presenting at the PLM Summit (and getting a FREE pass to attend if your presentation is accepted!), send me an email at terri.hiskey-AT-oracle.com with:

- Your proposed session title and the track your session fits into
- 3-5 bullets of takeaways that attendees will get from your presentation
- Your complete contact information, including name, title, company, telephone number and email

The deadline for this call for presentations is Thursday, November 15, so get your submission in soon!

PLM Track #1: Product Insights and Best Practices

This track will provide executive attendees and line-of-business managers with an overview of how Agile PLM has been deployed and used at customers to enable and manage critical product-related business processes, including enterprise quality and supplier management, compliance, product cost management, portfolio management, commercialization and software lifecycle management. These sessions will also provide details on how to manage the development and rollout of the solutions and how to achieve and track value. Possible session topics:

- Software Lifecycle Management
- Enterprise Quality Management
- New Product Development
- Integrated Business Planning
- ECO Effectivity Planning
- Rapid Commercialization
- Manage the Design to Release Process for Complex Configured Products
- PLM for Life Sciences Companies I (Compliant Data Set)
- PLM for Life Sciences Companies II (eMDR, UDI)
- Discrete CPG – Private Label Mgmt
- Cost Management and Strategic Sourcing
- IP Mgmt in the Semiconductor Industry
- Implementing the Enterprise Training Record using Agile PLM

PLM Track #2: Product Deep Dives & Demos

This track is aimed at line-of-business and IT managers who would like to understand the benefits of expanding their PLM footprint.
The sessions in this track will provide attendees with an up-close and in-depth look at Agile PLM's newer and most exciting applications, including analytics and innovation management, and will detail features and functionality available in the latest version of Agile PLM. Possible session topics:

- Oracle Product Lifecycle Analytics
- Integrating PLM with Engineering and Supply Chain Systems
- Streamline PLM Design to Manufacturing Processes with AutoVue Visualization Solutions
- Achieve Environmental Compliance (REACH and RoHS) with Agile Product Governance & Compliance
- PIM Deep Dive
- Achieving Integrated Change Control with Agile PLM and E-Business Suite
- Deploying PLM at Small and Midsize Enterprises
- Enhancing Oracle PQM w/APQP and 8D Functionality
- Advanced Roles and Privileges – Enabling ITAR
- Model Unit Effectivity
- Implementing REACH with 9.3.2
- Deploying Job Functions and Functional Teams in 9.3.2 to Improve Your Approval Matrix

PLM Track #3: Administration & Integrations

This track will provide sessions for Agile administrators, managers and daily Agile PLM users who are preparing to upgrade or looking to extend the use of their current PLM implementation through AIA and process extensions. It will include deeper conversation about Agile PLM features and best practices for managing an Agile PLM infrastructure. Possible session topics:

- Expand the Value of Your Agile Investment with Innovative Process Extension Ideas
- Ensuring Implementation & Upgrade Success
- Ensure the Integrity and Accuracy of Product Data Across the Enterprise
- Maximize the Benefits of an Integrated Architecture with AIA
- Integrating Your PLM Implementation with ERP
- Infrastructure Optimization
- Expanding Your PLM Implementation
- PLM Administrator Open Forum Q&A/Discussion
- FDA Validation Best Practices
- Best Practices for Managing a Large Agile Deployment: Clustering, Load Balancing and Firewalls

PLM Track #4: Agile PLM for Process

This track is aimed at attendees interested in or currently using Agile PLM for Process. The sessions in this track will go over new features and functionality available in the newest version of PLM for Process and will give attendees an overview of how PLM for Process is being used to manage critical business processes such as formulation, recipe and specification management. Possible session topics:

- PLM for Process Strategy, Roadmap and Update
- New Product Development and Introduction
- Effective Product Supplier Collaboration
- Leverage Agile Formulation and Compliance to Manage Cost, Compliance, Quality, Labeling and Nutrition
- Menu Management
- Innovation Data Management
- Food Safety / Introduction of P4P
- Quality Mgmt

PLM Track #5: Agile PLM and Innovation Management

This track consists of five sessions and is for attendees interested in learning more about Oracle's Agile Innovation Management, an exciting new addition to the Agile PLM application family that redefines the industry's scope of product lifecycle management. Oracle's innovation solutions enable companies to collaborate in a focused way among various functional groups (marketing, sales, operations, engineering/R&D and sourcing), combining insights on customer needs/requirements, competition, available technologies, alternate design scenarios and portfolio constraints to deliver what customers truly value. The results are better products, higher margins, greater efficiencies, more satisfied customers and the increased ability to continuously innovate.
Possible session topics:

- Product Innovation Management Solution Overview
- Product Requirements & Ideation Management
- Concept Design Management
- Product Lifecycle Portfolio Management
- Innovation as a Competitive Differentiator

    Read the article

  • concurrency::accelerator

    - by Daniel Moth
    Overview

An accelerator represents a "target" on which C++ AMP code can execute and where data can reside. Typically (but not necessarily) an accelerator is a GPU device. Accelerators are represented in C++ AMP as objects of the accelerator class. For many scenarios you do not need to obtain an accelerator object, since the runtime has a notion of a default accelerator, which is what it thinks is the best one in the system. Examples where you do need to deal with accelerator objects are when you need to pick your own accelerator (based on your specific criteria), or when you need to use more than one accelerator from your app.

Construction and operator usage

You can query and obtain a std::vector of all the accelerators on your system, which the runtime discovers on startup. Beyond enumerating accelerators, you can also create one directly by passing to the constructor a system-wide unique path to a device, if you know it (i.e. the "Device Instance Path" property for the device in Device Manager), e.g.

    accelerator acc(L"PCI\\VEN_1002&DEV_6898&SUBSYS_0B001002etc");

There are some predefined strings (for predefined accelerators) that you can pass to the accelerator constructor, and there are corresponding constants for those on the accelerator class itself, so you don't have to hardcode them every time. Examples are the following:

- accelerator::default_accelerator represents the default accelerator that the C++ AMP runtime picks for you if you don't pick one (the heuristics of how it picks one will be covered in a future post). Example: accelerator acc;
- accelerator::direct3d_ref represents the reference rasterizer emulator that simulates a direct3d device on the CPU (in a very slow manner). This emulator is available on systems with Visual Studio installed and is useful for debugging. More on debugging in general in future posts. Example: accelerator acc(accelerator::direct3d_ref);
- accelerator::direct3d_warp represents a target that I will cover in future blog posts. Example: accelerator acc(accelerator::direct3d_warp);
- accelerator::cpu_accelerator represents the CPU. In this first release the only use of this accelerator is for the staging arrays technique that I'll cover separately. Example: accelerator acc(accelerator::cpu_accelerator);

You can also create an accelerator by shallow copying another accelerator instance (via the corresponding constructor) or simply by assigning it to another accelerator instance (via the overloaded = operator). Speaking of operator overloading, you can also compare two accelerator objects (for equality and inequality) to determine whether they refer to the same underlying device.

Querying accelerator characteristics

Given an accelerator object, you can access its description, version, device path, size of dedicated memory in KB, whether it is some kind of emulator, whether it has a display attached, whether it supports double precision, and whether it was created with the debugging layer enabled for extensive error reporting.
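Putting the enumeration and the property queries together, here is a hedged sketch of picking an accelerator by your own criteria (it assumes the runtime's accelerator::get_all() enumeration described above; the selection criteria themselves are purely illustrative):

    #include <amp.h>
    #include <vector>
    using namespace concurrency;

    // Sketch: enumerate the accelerators the runtime discovered and pick
    // the first real (non-emulated) device that supports double precision.
    accelerator pick_accelerator() {
        std::vector<accelerator> all = accelerator::get_all();
        for (const accelerator& acc : all) {
            if (!acc.is_emulated && acc.supports_double_precision)
                return acc; // found a suitable hardware device
        }
        return accelerator(); // otherwise fall back to the default accelerator
    }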
Below is example code that accesses some of the properties; in your real code you'd probably be checking one or more of them in order to pick an accelerator (or check that the default one is good enough for your specific workload):

    void inspect_accelerator(concurrency::accelerator acc) {
        std::wcout << "New accelerator: " << acc.description << std::endl;
        std::wcout << "is_debug = " << acc.is_debug << std::endl;
        std::wcout << "is_emulated = " << acc.is_emulated << std::endl;
        std::wcout << "dedicated_memory = " << acc.dedicated_memory << std::endl;
        std::wcout << "device_path = " << acc.device_path << std::endl;
        std::wcout << "has_display = " << acc.has_display << std::endl;
        std::wcout << "version = " << (acc.version >> 16) << '.' << (acc.version & 0xFFFF) << std::endl;
    }

accelerator_view

In my next blog post I'll cover a related class: accelerator_view. Suffice it to say here that each accelerator may have from 1..n related accelerator_view objects. You can get the accelerator_view from an accelerator via the default_view property, or create new ones by invoking the create_view method, which creates an accelerator_view object for you (and also accepts a queuing_mode enum value of deferred or immediate that we'll explore in the next blog post). Comments about this post by Daniel Moth are welcome at the original blog.

    Read the article

  • Big Data – Various Learning Resources – How to Start with Big Data? – Day 20 of 21

    - by Pinal Dave
    In yesterday's blog post we learned how to become a Data Scientist for Big Data. In this article we will go over various learning resources related to Big Data.

In this series we have covered many of the most essential details about Big Data. At the beginning of this series, I encouraged readers to send me questions. One of the most popular questions is: "I want to learn more about Big Data. Where can I learn it?" This is indeed a great question, as there are plenty of resources out there for learning about Big Data, and it is difficult to settle on just one. Hence I decided to list here a few of the most important resources related to Big Data.

Learn from Pluralsight

Pluralsight is a global leader in high-quality online training for hardcore developers. It has fantastic Big Data courses, and I started to learn about Big Data with the help of Pluralsight. Here are a few of the courses which are directly related to Big Data:

- Big Data: The Big Picture
- Big Data Analytics with Tableau
- NoSQL: The Big Picture
- Understanding NoSQL
- Data Analysis Fundamentals with Tableau

I encourage all of you to start with these video courses, as they cover fantastic fundamentals for learning Big Data.

Learn from Apache

The resources at Apache are the single most authentic learning resources. If you want to learn the fundamentals and go deep into every aspect of Big Data, I believe you must understand the various concepts in Apache's library. I am pretty impressed with the documentation, and I personally reference it every single day when I work with Big Data. I strongly encourage all of you to bookmark the following links for authentic Big Data learning:

- Hadoop: The Apache Hadoop® project develops open-source software for reliable, scalable, distributed computing.
- Ambari: A web-based tool for provisioning, managing, and monitoring Apache Hadoop clusters, with support for Hadoop HDFS, Hadoop MapReduce, Hive, HCatalog, HBase, ZooKeeper, Oozie, Pig and Sqoop. Ambari also provides a dashboard for viewing cluster health, such as heat maps, and the ability to view MapReduce, Pig and Hive applications visually, along with features to diagnose their performance characteristics in a user-friendly manner.
- Avro: A data serialization system.
- Cassandra: A scalable multi-master database with no single points of failure.
- Chukwa: A data collection system for managing large distributed systems.
- HBase: A scalable, distributed database that supports structured data storage for large tables.
- Hive: A data warehouse infrastructure that provides data summarization and ad hoc querying.
- Mahout: A scalable machine learning and data mining library.
- Pig: A high-level data-flow language and execution framework for parallel computation.
- ZooKeeper: A high-performance coordination service for distributed applications.

Learn from Vendors

One of the biggest issues with learning Big Data is setting up the environment. Every Big Data vendor has different environment requirements, and setting up a Big Data framework involves a lot of work. Many users do not start with Big Data because they are afraid of the resources and time commitment required to set up the framework. Here Hortonworks has created a fantastic learning environment: they have built a Sandbox with everything a person needs to learn Big Data, and have provided excellent tutorials along with it.
The Sandbox comes with a dozen hands-on tutorials that will guide you through the basics of Hadoop, and it contains the Hortonworks Data Platform. I think Hortonworks did a fantastic job building this Sandbox and tutorial. Though there are plenty of different Big Data vendors, I have decided to list only Hortonworks due to their unique setup. Please leave a comment if there is any other such platform for learning Big Data; I will include it here as well.

Learn from Books

There are indeed a few good books out there which one can refer to in order to learn Big Data. Here are a few good books which I have read. I will update the list as I learn more:

- Ethics of Big Data: Balancing Risk and Innovation
- Big Data for Dummies
- Head First Data Analysis: A Learner's Guide to Big Numbers, Statistics, and Good Decisions

If you search on Amazon there are millions of books, but I think the above three books are a great set, and they will give you great ideas about Big Data. Once you go through the above books, you will have a clear idea about the next step you should follow in this series. You will be capable enough to make the right decision for yourself.

Tomorrow

In tomorrow's blog post we will wrap up this series on Big Data.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article
