Search Results

Search found 522 results on 21 pages for 'relate'.

Page 19/21

  • How to get physical partition name from iSCSI details on Windows?

    - by Barry Kelly
    I've got a piece of software that needs the name of a partition in \Device\Harddisk2\Partition1 style, as shown e.g. in WinObj. I want to get this partition name from details of the iSCSI connection that underlies the partition. The trouble is that disk order is not fixed - depending on what devices are connected and initialized in what order, it can move around. So suppose I have the portal name (DNS of the iSCSI target), target IQN, etc. I'd like to somehow discover which volumes in the system relate to it, in an automated fashion. I can write some PowerShell WMI queries that get somewhat close to the desired info: PS> get-wmiobject -class Win32_DiskPartition NumberOfBlocks : 204800 BootPartition : True Name : Disk #0, Partition #0 PrimaryPartition : True Size : 104857600 Index : 0 ... From the Name here, I think I can fabricate the corresponding name by adding 1 to the partition number: \Device\Harddisk0\Partition1 - Partition0 appears to be a fake partition mapping to the whole disk. But the above doesn't have enough information to map to the underlying physical device, unless I take a guess based on exact size matching. I can get some info on SCSI devices, but it's not helpful in joining things up (iSCSI target is Nexenta/Solaris COMSTAR): PS> get-wmiobject -class Win32_SCSIControllerDevice __GENUS : 2 __CLASS : Win32_SCSIControllerDevice ... Antecedent : \\COBRA\root\cimv2:Win32_SCSIController.DeviceID="ROOT\\ISCSIPRT\\0000" Dependent : \\COBRA\root\cimv2:Win32_PnPEntity.DeviceID="SCSI\\DISK&VEN_NEXENTA&PROD_COMSTAR... Similarly, I can run queries like these: PS> get-wmiobject -namespace ROOT\WMI -class MSiSCSIInitiator_TargetClass PS> get-wmiobject -namespace ROOT\WMI -class MSiSCSIInitiator_PersistentDevices These guys return information relating to my iSCSI target name and the GUID volume name respectively (a volume name like \\?\Volume{guid-goes-here}), but the GUID volume name is no good to me, and there doesn't appear to be a reliable correspondence between the target name and the volume that I can join on. I simply can't find an easy way of getting from an IQN (e.g. iqn.1992-01.com.example:storage:diskarrays-sn-a8675309) to physical partitions mapped from that target. The way I do it by hand? I start Disk Management, and look for a partition of the correct size, verify that its driver says NEXENTA COMSTAR, and look at the disk number. But even this is unreliable if I have multiple iSCSI volumes of the exact same size. Any suggestions?
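    A starting point for the automated part (not a full answer, since tying an IQN to a specific disk still needs the MSiSCSIInitiator_* classes mentioned above): Win32_DiskPartition already exposes DiskIndex and Index, which is enough to fabricate the \Device\HarddiskN\PartitionM names, and Win32_DiskDrive's Model string lets you at least spot which disks are the NEXENTA COMSTAR ones. A rough C# sketch, with class and property names worth double-checking on your own system:

```csharp
// A rough sketch: list every partition as a \Device\HarddiskN\PartitionM name,
// together with the model of the disk it sits on, via WMI (System.Management).
// This does NOT by itself tie a partition to an iSCSI IQN - that still needs the
// MSiSCSIInitiator_* classes mentioned in the question - but it automates the
// "which disk is the NEXENTA COMSTAR one" part of the manual procedure.
using System;
using System.Collections.Generic;
using System.Management;

class PartitionNames
{
    static void Main()
    {
        // Disk index -> model string, e.g. 2 -> "NEXENTA COMSTAR SCSI Disk Device"
        var diskModels = new Dictionary<uint, string>();
        foreach (ManagementObject disk in
                 new ManagementObjectSearcher("SELECT Index, Model FROM Win32_DiskDrive").Get())
        {
            diskModels[(uint)disk["Index"]] = (string)disk["Model"];
        }

        foreach (ManagementObject part in
                 new ManagementObjectSearcher("SELECT DiskIndex, Index, Size FROM Win32_DiskPartition").Get())
        {
            uint diskIndex = (uint)part["DiskIndex"];
            uint partIndex = (uint)part["Index"];

            // Partition0 is the whole-disk pseudo-partition, so real partitions start at 1.
            string deviceName = string.Format(@"\Device\Harddisk{0}\Partition{1}", diskIndex, partIndex + 1);

            string model;
            diskModels.TryGetValue(diskIndex, out model);
            Console.WriteLine("{0}  ({1} bytes, on '{2}')", deviceName, part["Size"], model ?? "unknown");
        }
    }
}
```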

    Read the article

  • How can the Private Bytes of a process be significantly less than its effect on the system commit charge?

    - by bacar
    On a 64-bit Windows Server 2003, I can see using taskmgr or process explorer that the total commit charge is around 3.5GB, yet when I sum the Private Bytes consumed by each process (by running pslist -m and adding all values under the Priv column) the total comes in at 1.6GB. I know which process seems to be causing this (sqlservr.exe) as when I kill the process, the commit charge drops dramatically. However the process in question is consuming only ~220MB of Private Bytes yet killing the process drops the commit charge by ~1.6GB. How is this possible? How can the commit charge be so significantly greater than Private Bytes, which should represent the amount of committed memory? If some other factor contributes to the commit charge, what is that factor and how can I view its impact in process explorer? Note: I claim that I understand the difference between reserved and committed memory already: my investigations above relate specifically to Private Bytes which includes only committed memory and excludes reserved memory. the Virtual Size of the process in this case is over 4GB, but this should be irrelevant - Virtual Size in procexp represents reserved, not committed memory, and should not contribute to the commit charge. I'm particularly interested in generalised answers to this question: I'm assuming that if sqlservr.exe can behave in this way, that any process potentially could. Further Investigations I notice that pointing Sysinternals VMMap at this process reports a committed "Private Data" of 1.6GB despite Procexp's reported a Private Bytes of 220MB. This is particularly strange given that the documentation for this field in the "Windows® Sysinternals Administrator's Reference" states that: Private Data memory is memory that is allocated by VirtualAlloc and that is not further handled by the Heap Manager or the .NET runtime, or assigned to the Stack category... VMMap’s definition of “Private Data” is more granular than that of Process Explorer’s “private bytes.” Procexp’s “private bytes” includes all private committed memory belonging to the process. i.e. that VMMap's committed "Private Data" should be smaller than procexp's "Private Bytes". Also, after reading the 'Process committed memory' section of Mark Russinovich's excellent Pushing the Limits of Windows: Virtual Memory, he highlights two cases which won't show up in Private Bytes: File mapping views with copy-on-write semantics (however, according to VMMap there is no significant space allocated to Mapped Files). pagefile-backed virtual memory (however, I tried testlimit with the -l flag as suggested, and no significant memory is consumed by pagefile-backed sections)
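    One concrete way to see how commit charge can grow while Private Bytes barely moves is pagefile-backed shared memory, the second case Russinovich lists. The sketch below is my own illustration, not a claim about what sqlservr.exe is actually doing: it creates a 1 GB pagefile-backed section with CreateFileMapping, and on my understanding the commit charge for the section is taken when the section is created, yet it never shows up in the creating process's Private Bytes.

```csharp
// A minimal sketch of one mechanism that adds to the system commit charge without
// showing up in a process's Private Bytes: a pagefile-backed section (shared
// memory) created with CreateFileMapping. Compare the commit charge and this
// process's Private Bytes before and after the section is created. Illustrative
// only; it says nothing about what sqlservr.exe itself is doing.
using System;
using System.Runtime.InteropServices;

class PagefileBackedSection
{
    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern IntPtr CreateFileMapping(IntPtr hFile, IntPtr lpAttributes,
        uint flProtect, uint dwMaximumSizeHigh, uint dwMaximumSizeLow, string lpName);

    const uint PAGE_READWRITE = 0x04;
    static readonly IntPtr INVALID_HANDLE_VALUE = new IntPtr(-1);

    static void Main()
    {
        Console.WriteLine("Note commit charge and Private Bytes now, then press Enter...");
        Console.ReadLine();

        // 1 GB pagefile-backed section: the commit charge is (as I understand it) taken
        // when the section is created, but it is not private memory of this process.
        IntPtr section = CreateFileMapping(INVALID_HANDLE_VALUE, IntPtr.Zero,
            PAGE_READWRITE, 0, 1024u * 1024u * 1024u, null);
        if (section == IntPtr.Zero)
            throw new System.ComponentModel.Win32Exception(Marshal.GetLastWin32Error());

        Console.WriteLine("Section created - compare commit charge vs Private Bytes, then press Enter to exit.");
        Console.ReadLine();
    }
}
```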

    Read the article

  • Ask How-To Geek: Dropbox in the Start Menu, Understanding Symlinks, and Ripping TV Series DVDs

    - by Jason Fitzpatrick
    This week we take a look at how to incorporate Dropbox into your Windows Start Menu, understanding and using symbolic links, and how to rip your TV series DVDs right to unique and high-quality episode files. Once a week we dip into our reader mailbag and help readers solve their problems, sharing the useful solutions with you in the process. Read on to see our fixes for this week’s reader dilemmas.

    Add Dropbox to Your Start Menu

    Dear How-To Geek, I use Dropbox all the time and would like to add it right onto my start menu alongside the other major shortcuts like Documents, Pictures, etc. It seems like adding Dropbox into the menu should be part of the Dropbox installation package! Sincerely, Dropboxing in Des Moines

    Dear Dropboxing, We agree, it would be a nice installation option. As it stands you’re going to have to do a little simple hacking to get Dropbox nestled neatly into your start menu. The hack isn’t super elegant but when you’re done you’ll have the link you want and it’ll look like it was there all along. Check out this step-by-step guide here in order to take an existing Library shortcut and rework it to be a Dropbox link.

    Understanding and Using Symbolic Links

    Dear How-To Geek, I was talking to a coworker the other day about an issue I’d been having with a media center application I’m running. He suggested using symbolic links to better organize my media and make it easier for the application to access my collection. I had no idea what he was talking about and never got a chance to bug him about it later. Can you clear up this whole symbolic links business for me? I’ve been using computers for years and I’ve never even heard of it! Sincerely, Symbolic Who?

    Dear Symbolic, Symbolic links aren’t commonly used by many Windows users, which is why you likely haven’t run into the concept. Symbolic links are essentially supercharged shortcuts—the newly introduced Windows library system is really just a type of symbolic link system. You can use symbolic links to do all sorts of neat stuff like link folders to your Dropbox folder, organize media, and more. The concept of symbolic links is pretty simple but the execution can be really tricky. We’d suggest reading over our guide to creating symbolic links in Windows 7, Windows XP, and Ubuntu to get a clearer idea of what you’re getting into.

    Rip Your TV DVDs into Handy Episode Files

    Dear How-To Geek, My wife got me an iPod for Christmas and I still haven’t got around to filling it up. I have tons of entire TV show seasons on DVD and would like to get them on the iPod but I have absolutely no idea where to start. How do I get the shows off the discs? I thought it would be as easy to import the TV shows into iTunes as it is to import tracks off a CD but I was totally wrong. I tried downloading some applications to rip them but those didn’t work at all. Very frustrating! Surely there is an easy and/or automated way to do this, right? Sincerely, Free My DVDs

    Dear DVDs, Oh man is this a frustration we can relate to. It’s inordinately difficult to get movies and TV shows off physical media and into digital (and portable media player-friendly) formats. There are a multitude of ways to rip DVDs and quite a few applications out there (some good, some mediocre, and some outright malware). We’d recommend a two-part punch to solve your ripping woes. You’ll need a copy of DVDFab to strip away the protections on the discs and rip the disc, and Handbrake to load the disc image and convert the files.
    It’s not quite as smooth as the CD-to-iTunes workflow but it’s still pretty easy. Check out all the steps and settings you’ll want to toggle here. Have a question you want to put before the How-To Geek staff? Shoot us an email at [email protected] and then keep an eye out for a solution in the Ask How-To Geek column.
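    Going back to the symbolic-links answer above: for readers comfortable with a little code, the same directory link that mklink /D creates from an elevated command prompt can also be made programmatically through the Win32 CreateSymbolicLink API. The sketch below is only an illustration; the folder paths are made-up examples, and on Windows Vista/7 it needs to run elevated.

```csharp
// A small sketch of creating a directory symbolic link with the Win32
// CreateSymbolicLink API (the programmatic equivalent of "mklink /D").
// The paths are hypothetical examples; run elevated on Windows Vista/7.
using System;
using System.ComponentModel;
using System.Runtime.InteropServices;

class MakeLink
{
    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern bool CreateSymbolicLink(string lpSymlinkFileName, string lpTargetFileName, int dwFlags);

    const int SYMBOLIC_LINK_FLAG_DIRECTORY = 0x1;

    static void Main()
    {
        string link = @"C:\Users\Me\Dropbox\Music";   // where the link will appear (hypothetical)
        string target = @"D:\Media\Music";            // the real folder it points at (hypothetical)

        if (!CreateSymbolicLink(link, target, SYMBOLIC_LINK_FLAG_DIRECTORY))
            throw new Win32Exception(Marshal.GetLastWin32Error());

        Console.WriteLine("Created symbolic link {0} -> {1}", link, target);
    }
}
```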

    Read the article

  • New .NET Library for Accessing the Survey Monkey API

    - by Ben Emmett
    I’ve used Survey Monkey’s API for a while, and though it’s pretty powerful, there’s a lot of boilerplate each time it’s used in a new project, and the json it returns needs a bunch of processing to be able to use the raw information. So I’ve finally got around to releasing a .NET library you can use to consume the API more easily. The main advantages are:

    - Only ever deal with strongly-typed .NET objects, making everything much more robust and a lot faster to get going
    - Automatically handles things like rate-limiting and paging through results
    - Uses combinations of endpoints to get all relevant data for you, and processes raw response data to map responses to questions

    To start, either install it using NuGet with PM> Install-Package SurveyMonkeyApi (easier option), or grab the source from https://github.com/bcemmett/SurveyMonkeyApi if you prefer to build it yourself. You’ll also need to have signed up for a developer account with Survey Monkey, and have both your API key and an OAuth token. A simple usage would be something like:

    string apiKey = "KEY";
    string token = "TOKEN";
    var sm = new SurveyMonkeyApi(apiKey, token);
    List<Survey> surveys = sm.GetSurveyList();

    The surveys object is now a list of surveys with all the information available from the /surveys/get_survey_list API endpoint, including the title, id, date it was created and last modified, language, number of questions / responses, and relevant urls. If there are more than 1000 surveys in your account, the library pages through the results for you, making multiple requests to get a complete list of surveys. All the filtering available in the API can be controlled using .NET objects. For example you might only want surveys created in the last year and containing “pineapple” in the title:

    var settings = new GetSurveyListSettings { Title = "pineapple", StartDate = DateTime.Now.AddYears(-1) };
    List<Survey> surveys = sm.GetSurveyList(settings);

    By default, whenever optional fields can be requested with a response, they will all be fetched for you. You can change this behaviour if for some reason you explicitly don’t want the information, using

    var settings = new GetSurveyListSettings { OptionalData = new GetSurveyListSettingsOptionalData { DateCreated = false, AnalysisUrl = false } };

    Survey Monkey’s 7 read-only endpoints are supported, and the other 4 which make modifications to data might be supported in the future. The endpoints are:

    /surveys/get_survey_list -> GetSurveyList(), returns List<Survey>
    /surveys/get_survey_details -> GetSurveyDetails(), returns Survey
    /surveys/get_collector_list -> GetCollectorList(), returns List<Collector>
    /surveys/get_respondent_list -> GetRespondentList(), returns List<Respondent>
    /surveys/get_responses -> GetResponses(), returns List<Response>
    /surveys/get_response_counts -> GetResponseCounts(), returns Collector
    /user/get_user_details -> GetUserDetails(), returns UserDetails
    /batch/create_flow -> not supported
    /batch/send_flow -> not supported
    /templates/get_template_list -> not supported
    /collectors/create_collector -> not supported

    The hierarchy of objects the library can return is: Survey, List<Page>, List<Question>, QuestionType, List<Answer>, List<Item>, List<Collector>, List<Response>, Respondent, List<ResponseQuestion>, List<ResponseAnswer>. Each of these classes has properties which map directly to the names of properties returned by the API itself (though using PascalCasing which is more natural for .NET, rather than the snake_casing used by SurveyMonkey).
For most users, Survey Monkey imposes a rate limit of 2 requests per second, so by default the library leaves at least 500ms between requests. You can request higher limits from them, so if you want to change the delay between requests just use a different constructor: var sm = new SurveyMonkeyApi(apiKey, token, 200); //200ms delay = 5 reqs per sec There’s a separate cap of 1000 requests per day for each API key, which the library doesn’t currently enforce, so if you think you’ll be in danger of exceeding that you’ll need to handle it yourself for now.  To help, you can see how many requests the current instance of the SurveyMonkeyApi object has made by reading its RequestsMade property. If the library encounters any errors, including communicating with the API, it will throw a SurveyMonkeyException, so be sure to handle that sensibly any time you use it to make calls. Finally, if you have a survey (or list of surveys) obtained using GetSurveyList(), the library can automatically fill in all available information using sm.FillMissingSurveyInformation(surveys); For each survey in the list, it uses the other endpoints to fill in the missing information about the survey’s question structure, respondents, and responses. This results in at least 5 API calls being made per survey, so be careful before passing it a large list. It also joins up the raw response information to the survey’s question structure, so that for each question in a respondent’s set of replies, you can access a ProcessedAnswer object. For example, a response to a dropdown question (from the /surveys/get_responses endpoint) might be represented in json as { "answers": [ { "row": "9384627365", } ], "question_id": "615487516" } Separately, the question’s structure (from the /surveys/get_survey_details endpoint) might have several possible answers, one of which might look like { "text": "Fourth item in dropdown list", "visible": true, "position": 4, "type": "row", "answer_id": "9384627365" } The library understands how this mapping works, and uses that to give you the following ProcessedAnswer object, which first describes the family and type of question, and secondly gives you the respondent’s answers as they relate to the question. Survey Monkey has many different question types, with 11 distinct data structures, each of which are supported by the library. If you have suggestions or spot any bugs, let me know in the comments, or even better submit a pull request .
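    Pulling the pieces above together, a minimal end-to-end use might look like the sketch below. It deliberately sticks to the members described in this post (the two-argument constructor, GetSurveyListSettings, GetSurveyList, FillMissingSurveyInformation, RequestsMade, and SurveyMonkeyException); treat the namespace and exact signatures as assumptions to verify against the version you install from NuGet or GitHub.

```csharp
// A sketch of typical usage based only on members described in the post above;
// the namespace and exact signatures are assumptions to check against the
// version of the library you actually install.
using System;
using System.Collections.Generic;
using SurveyMonkeyApi;   // assumed namespace of the NuGet package

class Program
{
    static void Main()
    {
        var sm = new SurveyMonkeyApi("KEY", "TOKEN");
        try
        {
            // Only surveys created in the last year.
            var settings = new GetSurveyListSettings { StartDate = DateTime.Now.AddYears(-1) };
            List<Survey> surveys = sm.GetSurveyList(settings);

            // Makes at least 5 further API calls per survey, so keep the list small.
            sm.FillMissingSurveyInformation(surveys);

            Console.WriteLine("Fetched {0} surveys using {1} API requests.",
                surveys.Count, sm.RequestsMade);
        }
        catch (SurveyMonkeyException ex)
        {
            // Thrown for any error the library hits, including API communication problems.
            Console.WriteLine("Survey Monkey API error: " + ex.Message);
        }
    }
}
```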

    Read the article

  • CQRS &ndash; Questions and Concerns

    - by Dylan Smith
    I’ve been doing a lot of learning on CQRS and Event Sourcing over the last little while and I have a number of questions that I haven’t been able to answer. 1. What is the benefit of CQRS when compared to a typical DDD architecture that uses Event Sourcing and properly captures intent and behavior via verb-based commands? (other than Scalability) 2. When using CQRS what do you do with complex query-based logic? I’m going to elaborate on #1 in this blog post and I’ll do a follow-up post on #2. I watched through Greg Young’s video on the business benefits of CQRS + Event Sourcing and first let me say that I thought it was an excellent presentation that really drives home a lot of the benefits to this approach to architecture (I watched it twice in a row I enjoyed it so much!). But it didn’t answer some of my questions fully (I wish I had been there to ask these of Greg in person!). So let me pick apart some of the points he makes and how they relate to my first question above. I’m completely sold on the idea of event sourcing and have a clear understanding of the benefits that it brings to the table, so I’m not going to question that. But you can use event sourcing without going to a CQRS architecture, so my main question is around the benefits of CQRS + Event Sourcing vs Event Sourcing + Typical DDD architecture Architecture with Event Sourcing + Commands on Left, CQRS on Right Greg talks about how the stereotypical architecture doesn’t support DDD, but is that only because his diagram shows DTO’s coming up from the client. If we use the same diagram but allow the client to send commands doesn’t that remove a lot of the arguments that Greg makes against the stereotypical architecture? We can now introduce verbs into the system. We can capture intent now (storing it still requires event sourcing, but you can implement event sourcing without doing CQRS) We can create a rich domain model (as opposed to an anemic domain model) Scalability is obviously a benefit that CQRS brings to the table, but like Greg says, very few of the systems we create truly need significant scalability Greg talks about the ability to scale your development efforts. He says CQRS allows you to split the system into 3 parts (Client, Domain/Commands, Reads) and assign 3 teams of developers to work on them in parallel; letting you scale your development efforts by 3x with nearly linear gains. But in the stereotypical architecture don’t you already have 2 separate modules that you can split your dev efforts between: The client that sends commands/queries and receives DTO’s, and the Domain which accepts commands/queries, and generates events/DTO’s. If this is true it’s not really a 3x scaling you achieve with CQRS but merely a 1.5x scaling which while great doesn’t sound nearly as dramatic (“I can do it with 10 devs in 12 months – let me hire 5 more and we can have it done in 8 months”). Making the Query side “stupid simple” such that you can assign junior developers (or even outsource it) sounds like a valid benefit, but I have some concerns over what you do with complex query-based logic/behavior. I’m going to go into more detail on this in a follow-up blog post shortly. He also seemed to focus on how “stupid-simple” it is doing queries against the de-normalized data store, but I imagine there is still significant complexity in the event handlers that interpret the events and apply them to the de-normalized tables. 
It sounds like Greg suggests that because we’re doing CQRS that allows us to apply Event Sourcing when we otherwise wouldn’t be able to (~33:30 in the video). I don’t believe this is true. I don’t see why you wouldn’t be able to apply Event Sourcing without separating out the Commands and Queries. The queries would just operate against the domain model instead of the database. But you’d still get the benefits of Event Sourcing. Without CQRS the queries would only be able to operate against the current state rather than the event history, but even in CQRS the domain behaviors can only operate against the current state and I don’t see that being a big limiting factor. If some query needs to operate against something that is not captured by the current state you would just have to update the domain model to capture that information (no different than if that statement were made about a Command under CQRS). Some of the benefits I do see being applicable are that your domain model might end up being simpler/smaller since it only needs to represent the state needed to process commands and not worry about the reads (like the Deactivate Inventory Item and associated comment example that Greg provides). And also commands that can be handled in a Transaction Script style manner by the command handler simply generating events and not touching the domain model. It also makes it easier for your senior developers to focus on the command behavior and ignore the queries, which is usually going to be a better use of their time. And of course scalability. If anybody out there has any thoughts on this and can help educate me further, please either leave a comment or feel free to get in touch with me via email:
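    To make the comparison concrete, here is a deliberately tiny, framework-free sketch of the shape being debated: a command captures intent, the aggregate validates it against state folded from past events and emits new events, and a separate read-side projection turns the same events into a denormalized view. All names are invented for illustration; this is not Greg's reference implementation or any particular framework's API.

```csharp
// A minimal, framework-free illustration of the command/event/read-model shape
// discussed above. All type names are invented for the example.
using System;
using System.Collections.Generic;

// Commands capture intent (verbs); events record what happened.
record DeactivateInventoryItem(Guid Id, string Comment);
record InventoryItemDeactivated(Guid Id, string Comment);

class InventoryItem
{
    public bool Active { get; private set; } = true;

    // Current state is just a fold over the event history (event sourcing).
    public static InventoryItem FromHistory(IEnumerable<object> history)
    {
        var item = new InventoryItem();
        foreach (var e in history) item.Apply(e);
        return item;
    }

    // Write side: validate the command against current state, emit new events.
    public IEnumerable<object> Handle(DeactivateInventoryItem cmd)
    {
        if (Active) yield return new InventoryItemDeactivated(cmd.Id, cmd.Comment);
    }

    void Apply(object e)
    {
        if (e is InventoryItemDeactivated) Active = false;
    }
}

// Read side: a "stupid simple" denormalized view kept up to date from the same events.
class ActiveItemsProjection
{
    public readonly HashSet<Guid> ActiveItems = new HashSet<Guid>();
    public void When(InventoryItemDeactivated e) => ActiveItems.Remove(e.Id);
}
```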

    Read the article

  • Off The Beaten Path—Three Things Growing Midsize Companies are Thankful For

    - by Christine Randle
    By: Jim Lein, Senior Director, Oracle Accelerate Last Sunday I went on a walkabout.  That’s when I just step out the door of my Colorado home and hike through the mountains for hours with no predetermined destination. I favor “social trails”, the unmapped routes pioneered by both animal and human explorers.  These tracks  are usually more challenging than established, marked routes and you can’t be 100% sure of where you’re going to end up. But I’ve found the rewards to be much greater. For awhile, I pondered on how—depending upon your perspective—the current economic situation worldwide could be viewed as either a classic “the glass is half empty” or a “the glass is half full” scenario. Midsize companies buy Oracle to grow and so I’m continually amazed and fascinated by the success stories our customers relate to me.  Oracle’s successful midsize companies are growing via innovation, agility, and opportunity. For them, the glass isn’t half full—it’s overflowing. Growing Midsize Companies are Thankful for: Innovation The sun angling through the pine trees reminded me of a conversation with a European customer a year ago May.  You might not recognize the name but, chances are, your local evening weather report relies on this company’s weather observation, monitoring and measurement products.  For decades, the company was recognized in its industry for product innovation, but its recent rapid growth comes from tailoring end to end product and service solutions based on the needs of distinctly different customer groups across industrial, public sector, and defense sectors.  Hours after that phone call I was walking my dog in a local park and came upon a small white plastic box sprouting short antennas and dangling by a nylon cord from a tree branch.  I cut it down. The name of that customer’s company was stamped on the housing. “It’s a radiosonde from a high altitude weather balloon,” he told me the next day. “Keep it as a souvenir.”  It sits on my fireplace mantle and elicits many questions from guests. Growing Midsize Companies are Thankful for: Agility In July, I had another interesting discussion with the CFO of an Asia-Pacific company which owns and operates a large portfolio of leisure assets. They are best known for their epic outdoor theme parks. However, their primary growth today is coming from a chain of indoor amusement centers in the USA where billiards, bowling, and laser tag take the place of roller coasters, kiddy rides, and wave pools. With mountains and rivers right out my front door, I’m not much for theme parks, but I’ll take a spirited game of laser tag any day.  This company has grown dramatically since first implementing Oracle ERP more than a decade ago. Their profitable expansion into a completely foreign market is derived from the ability to replicate proven and efficient best business practices across diverse operating environments.  They recently went live on Oracle’s Fusion HCM and Taleo. Their CFO explained to me how, with thousands of employees in three countries, Fusion HCM and Taleo would enable them to remain incredibly agile by acting on trends linking individual employee performance to their management, establishing and maintaining those best practices. Growing Midsize Companies are Thankful for: Opportunity I have three GPS apps on my iPhone. I use them mainly to keep track of my stats—distance, time, and vertical gain. 
However, every once in awhile I need to find the most efficient route back home before dark from my current location (notice I didn’t use the word “lost”). In August I listened in on an interview with the CFO of another European company that designs and delivers telematics solutions—the integrated use of telecommunications and informatics—for managing the mobile workforce. These solutions enable customers to achieve evolutionary step-changes in their performance and service delivery. Forgive the overused metaphor, but this is route optimization on steroids.  The company’s executive team saw an opportunity in this emerging market and went “all in”. Consequently, they are being rewarded with tremendous growth results and market domination by providing the ability for their clients to collect and analyze performance information related to fuel consumption, service workforce safety, and asset productivity. This Thanksgiving, I’m thankful for health, family, friends, and a career with an innovative company that helps companies leverage top tier software to drive and manage growth. And I’m thankful to have learned the lesson that good things happen when you get off the beaten path—both when hiking and when forging new routes through a complex world economy. Halfway through my walkabout on Sunday, after scrambling up a long stretch of scree-covered hill, I crested a ridge with an obstructed view of 14,265 ft Mt Evans just a few miles to the west.  There, nowhere near a house or a trail, someone had placed a wooden lounge chair. Its wood was worn and faded but it was sturdy. I had lunch and a cold drink in my pack. Opportunity knocked and I seized it. Happy Thanksgiving.  

    Read the article

  • CLR via C# 3rd Edition is out

    - by Abhijeet Patel
    Time for a book news update. CLR via C#, 3rd Edition seems to have been out for a little while now. The book was released in early Feb this year, and needless to say my copy is on its way. I can barely wait to dig in and chew on the goodies that one of the best technical authors and software professionals I respect has in store. The 2nd edition of the book was an absolute treat and this edition promises to be no less. Here is a brief description of what’s new and updated from the 2nd edition.

    Part I – CLR Basics
    Chapter 1 – The CLR’s Execution Model: Added discussion about C#’s /optimize and /debug switches and how they relate to each other.
    Chapter 2 – Building, Packaging, Deploying, and Administering Applications and Types: Improved discussion about Win32 manifest information and version resource information.
    Chapter 3 – Shared Assemblies and Strongly Named Assemblies: Added discussion of TypeForwardedToAttribute and TypeForwardedFromAttribute.

    Part II – Designing Types
    Chapter 4 – Type Fundamentals: No new topics.
    Chapter 5 – Primitive, Reference, and Value Types: Enhanced discussion of checked and unchecked code and added discussion of the new BigInteger type. Also added discussion of C# 4.0’s dynamic primitive type.
    Chapter 6 – Type and Member Basics: No new topics.
    Chapter 7 – Constants and Fields: No new topics.
    Chapter 8 – Methods: Added discussion of extension methods and partial methods.
    Chapter 9 – Parameters: Added discussion of optional/named parameters and implicitly-typed local variables.
    Chapter 10 – Properties: Added discussion of automatically-implemented properties, properties and the Visual Studio debugger, object and collection initializers, anonymous types, the System.Tuple type and the ExpandoObject type.
    Chapter 11 – Events: Added discussion of events and thread-safety as well as showing a cool extension method to simplify the raising of an event (a common form of such a helper is sketched after this list).
    Chapter 12 – Generics: Added discussion of delegate and interface generic type argument variance.
    Chapter 13 – Interfaces: No new topics.

    Part III – Essential Types
    Chapter 14 – Chars, Strings, and Working with Text: No new topics.
    Chapter 15 – Enums: Added coverage of new Enum and Type methods to access enumerated type instances.
    Chapter 16 – Arrays: Added new section on initializing array elements.
    Chapter 17 – Delegates: Added discussion of using generic delegates to avoid defining new delegate types. Also added discussion of lambda expressions.
    Chapter 18 – Attributes: No new topics.
    Chapter 19 – Nullable Value Types: Added discussion on performance.

    Part IV – CLR Facilities
    Chapter 20 – Exception Handling and State Management: This chapter has been completely rewritten. It is now about exception handling and state management. It includes discussions of code contracts and constrained execution regions (CERs). It also includes a new section on trade-offs between writing productive code and reliable code.
    Chapter 21 – Automatic Memory Management: Added discussion of C#’s fixed statement and how it works to pin objects in the heap. Rewrote the code for weak delegates so you can use them with any class that exposes an event (the class doesn’t have to support weak delegates itself). Added discussion of the new ConditionalWeakTable class, GC collection modes, full GC notifications, garbage collection modes and latency modes. I also include a new sample showing how your application can receive notifications whenever Generation 0 or 2 collections occur.
    Chapter 22 – CLR Hosting and AppDomains: Added discussion of side-by-side support allowing multiple CLRs to be loaded in a single process. Added section on the performance of using MarshalByRefObject-derived types. Substantially rewrote the section on cross-AppDomain communication. Added section on AppDomain monitoring and first-chance exception notifications. Updated the section on the AppDomainManager class.
    Chapter 23 – Assembly Loading and Reflection: Added section on how to deploy a single file with dependent assemblies embedded inside it. Added section comparing reflection invoke vs bind/invoke vs bind/create delegate/invoke vs C#’s dynamic type.
    Chapter 24 – Runtime Serialization: This is a whole new chapter that was not in the 2nd Edition.

    Part V – Threading
    Chapter 25 – Threading Basics: Whole new chapter motivating why Windows supports threads, thread overhead, CPU trends, NUMA architectures, the relationship between CLR threads and Windows threads, the Thread class, reasons to use threads, thread scheduling and priorities, and foreground threads vs background threads.
    Chapter 26 – Performing Compute-Bound Asynchronous Operations: Whole new chapter explaining the CLR’s thread pool. This chapter covers all the new .NET 4.0 constructs including cooperative cancellation, Tasks, the Parallel class, Parallel Language Integrated Query, timers, how the thread pool manages its threads, cache lines and false sharing.
    Chapter 27 – Performing I/O-Bound Asynchronous Operations: Whole new chapter explaining how Windows performs synchronous and asynchronous I/O operations. Then, I go into the CLR’s Asynchronous Programming Model, my AsyncEnumerator class, the APM and exceptions, applications and their threading models, implementing a service asynchronously, the APM and compute-bound operations, APM considerations, I/O request priorities, converting the APM to a Task, the event-based Asynchronous Pattern, and programming model soup.
    Chapter 28 – Primitive Thread Synchronization Constructs: Whole new chapter discussing class libraries and thread safety, primitive user-mode and kernel-mode constructs, and data alignment.
    Chapter 29 – Hybrid Thread Synchronization Constructs: Whole new chapter discussing various hybrid constructs such as ManualResetEventSlim, SemaphoreSlim, CountdownEvent, Barrier, ReaderWriterLock(Slim), OneManyResourceLock, Monitor, 3 ways to solve the double-check locking technique, .NET 4.0’s Lazy and LazyInitializer classes, the condition variable pattern, .NET 4.0’s concurrent collection classes, and the ReaderWriterGate and SyncGate classes.
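    As referenced under Chapter 11 above, the event-raising helper is a well-known pattern. The book's exact code may differ, but a common form of such an extension method looks roughly like this:

```csharp
// A common form of the "raise an event safely" extension method pattern that
// Chapter 11 alludes to; the book's own version may differ in detail.
using System;

static class EventExtensions
{
    public static void Raise<TEventArgs>(this EventHandler<TEventArgs> handler,
        object sender, TEventArgs e) where TEventArgs : EventArgs
    {
        // Copy the delegate reference so a subscriber removed on another thread
        // between the null check and the call can't cause a NullReferenceException.
        EventHandler<TEventArgs> temp = handler;
        if (temp != null) temp(sender, e);
    }
}

class Stock
{
    public event EventHandler<EventArgs> PriceChanged;

    public void ChangePrice()
    {
        // One line at the call site instead of the copy/null-check/invoke dance.
        PriceChanged.Raise(this, EventArgs.Empty);
    }
}
```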

    Read the article

  • User Experience Fundamentals

    - by ultan o'broin
    Understanding what user experience means in the modern work environment is central to building great-looking usable applications on the desktop or mobile devices. What better place to start a series of blog posts on such Applications User Experience team enablement for customers and partners than by sharing what the term really means, writes team member Karen Scipi. Applications UX have gained valuable insights into developing a user experience that reflects the experience of today’s worker. We have observed real workers performing real tasks in real work environments, and we have developed a set of new standards of application design that have been scientifically proven to be beneficial to enable today’s workers. We share such expertise to enable our customers and partners to benefit from our insights and to further their return on investment when building Oracle applications.

    So, What is User Experience?

    The user interface (UI) is about the on-screen user context provided by the layout of widgets (such as icons, fields, and buttons and more) and the visual impact of colors, typographic choices, and so on. The UI comprises the “look and feel” of the application that users interact with, and reflects, in essence, the most immediate aspects of usability we can now all relate to. User experience, on the other hand, is about understanding the whole context of the world of work, how workers go about completing tasks, crossing all sorts of boundaries along the way. It is a study of how business processes and workers’ goals coincide, how users work with technology or other tools to get their jobs done, their interactions with other users, and their response to the technical, physical, and cultural environment around them. User experience is all about how users work—their work environments, office layouts, desk tools, types of devices, their working day, and more. Even their job aids, such as sticky notes, offer insight for UX innovation. User experience matters because businesses need to be efficient, work must be productive, and users now demand to be satisfied by the applications they work with. In simple terms, tasks finished quickly and accurately for a business evoke organization and worker satisfaction, which in turn makes workers feel good and more than willing to use the application again tomorrow.

    Design Principles for the Enterprise Worker

    The consumerization of information technology has raised the bar for enterprise applications. Applications must be consistent, simple, intuitive, but above all contextual, reflecting how and when workers work, in the office or on the go. For example, the Google search experience with its type-ahead keyword-prompting feature is how workers expect to be able to discover enterprise information, too.

    (Image: Type-ahead in PeopleSoft 9.1)

    To build software that enables workers to be productive, our design principles meet modern work requirements about consistency, with well-organized, context-driven information, geared for a working world of discovery and collaboration. Our applications must also behave in a simple, web-like way just like Amazon, Google, and Apple products that workers use at home or on the go. Our user experience must also reflect workers’ needs for flexibility and well-loved enterprise practices such as using popular desktop tools like Microsoft Excel or Outlook as required.

    Building User Experience Productively

    The building blocks of Oracle Fusion Applications are the user experience design patterns.
Based on the Oracle Fusion Middleware technology used to build Oracle Fusion Applications, the patterns are reusable solutions to common usability challenges that ADF developers typically face as they build applications, extensions, and integrations. Developers use the patterns as part of their Oracle toolkits to realize great usability consistently and in a productive way. Our design pattern creation process is informed by user experience research and science, an understanding of our technology’s capabilities, the demands for simplification and intuitiveness from users, and the best of Oracle’s acquisitions strategy (an injection of smart people and smart innovation). The patterns are supported by usage guidelines and are tested in our labs and assembled into a library of proven resources we used to build own Oracle Fusion Applications and other Oracle applications user experiences. The design patterns library is now available to the ADF community and to our partners and customers, for free. Developers with ADF skills and other technology skills can now offer more than just coding and functionality and still use the best in enterprise methodologies to ensure that a great user experience is easily applied, scaled, and maintained, whether it be for SaaS or on-premise deployments for Oracle Fusion Applications, for applications coexistence, or for partner integrations scenarios.  Oracle partners and customers already using our design patterns to build solutions and win business in smart and productive ways are now sharing their experiences and insights on pattern use to benefit your entire business. Applications UX is going global with the message and the means. Our hands-on user experience enablement through ADF  is expanding. So, stay tuned to Misha Vaughan's Voice of User Experience (VOX) blog and follow along on Twitter at @usableapps for news of outreach events and other learning opportunities. Interested in Learning More? Oracle Fusion Applications User Experience Patterns and Guidelines Library Shout-outs for Oracle UX Design Patterns Oracle Fusion Applications User Experience Design Patterns: Productivity Realized

    Read the article

  • MySQL Connect 9 Days Away – Optimizer Sessions

    - by Bertrand Matthelié
    Following my previous blog post focusing on InnoDB talks at MySQL Connect, let us review today the sessions focusing on the MySQL Optimizer: Saturday, 11.30 am, Room Golden Gate 6: MySQL Optimizer Overview—Olav Sanstå, Oracle The goal of MySQL optimizer is to take a SQL query as input and produce an optimal execution plan for the query. This session presents an overview of the main phases of the MySQL optimizer and the primary optimizations done to the query. These optimizations are based on a combination of logical transformations and cost-based decisions. Examples of optimization strategies the presentation covers are the main query transformations, the join optimizer, the data access selection strategies, and the range optimizer. For the cost-based optimizations, an overview of the cost model and the data used for doing the cost estimations is included. Saturday, 1.00 pm, Room Golden Gate 6: Overview of New Optimizer Features in MySQL 5.6—Manyi Lu, Oracle Many optimizer features have been added into MySQL 5.6. This session provides an introduction to these great features. Multirange read, index condition pushdown, and batched key access will yield huge performance improvements on large data volumes. Structured explain, explain for update/delete/insert, and optimizer tracing will help users analyze and speed up queries. And last but not least, the session covers subquery optimizations in Release 5.6. Saturday, 7.00 pm, Room Golden Gate 4: BoF: Query Optimizations: What Is New and What Is Coming? This BoF presents common techniques for query optimization, covers what is new in MySQL 5.6, and provides a discussion forum in which attendees can tell the MySQL optimizer team which optimizations they would like to see in the future. Sunday, 1.15 pm, Room Golden Gate 8: Query Performance Comparison of MySQL 5.5 and MySQL 5.6—Øystein Grøvlen, Oracle MySQL Release 5.6 contains several improvements in the query optimizer that create improved performance for complex queries. This presentation looks at how MySQL 5.6 improves the performance of many of the queries in the DBT-3 benchmark. Based on the observed improvements, the presentation discusses what makes the specific queries perform better in Release 5.6. It describes the relevant new optimization techniques and gives examples of the types of queries that will benefit from these techniques. Sunday, 4.15 pm, Room Golden Gate 4: Powerful EXPLAIN in MySQL 5.6—Evgeny Potemkin, Oracle The EXPLAIN command of MySQL has long been a very useful tool for understanding how MySQL will execute a query. Release 5.6 of the MySQL database offers several new additions that give more-detailed information about the query plan and make it easier to understand at the same time.
This presentation gives an overview of new EXPLAIN features: structured EXPLAIN in JSON format, EXPLAIN for INSERT/UPDATE/DELETE, and optimizer tracing. Examples in the session give insights into how you can take advantage of the new features. They show how these features supplement and relate to each other and to classical EXPLAIN and how and why the MySQL server chooses a particular query plan. You can check out the full program here as well as in the September edition of the MySQL newsletter. Not registered yet? You can still save US$ 300 over the on-site fee – Register Now!

    Read the article

  • The clock hands of the buffer cache

    - by Tony Davis
    Over a leisurely beer at our local pub, the Waggon and Horses, Phil Factor was holding forth on the esoteric, but strangely poetic, language of SQL Server internals, riddled as it is with 'sleeping threads', 'stolen pages', and 'memory sweeps'. Generally, I remain immune to any twinge of interest in the bowels of SQL Server, reasoning that there are certain things that I don't and shouldn't need to know about SQL Server in order to use it successfully. Suddenly, however, my attention was grabbed by his mention of the 'clock hands of the buffer cache'. Back at the office, I succumbed to a moment of weakness and opened up Google. He wasn't lying. SQL Server maintains various memory buffers, or caches. For example, the plan cache stores recently-used execution plans. The data cache in the buffer pool stores frequently-used pages, ensuring that they may be read from memory rather than via expensive physical disk reads. These memory stores are classic LRU (Least Recently Used) buffers, meaning that, for example, the least recently used pages in the data cache become candidates for eviction (after first writing the page to disk if it has changed since being read into the cache). SQL Server clearly needs some mechanism to track which pages are candidates for being cleared out of a given cache, when it is getting too large, and it is this mechanism that is somewhat more labyrinthine than I previously imagined. Each page that is loaded into the cache has a counter, a miniature "wristwatch", which records how recently it was last used. This wristwatch gets reset to "present time" each time a page gets updated, and then as the page 'ages' it clicks down towards zero, at which point the page can be removed from the cache. But what if SQL Server is suffering memory pressure and urgently needs to free up more space than is represented by zero-counter pages (or plans etc.)? This is where our 'clock hands' come in. Each cache has associated with it a "memory clock". Like most conventional clocks, it has two hands; one "external" clock hand, and one "internal". Slava Oks is very particular in stressing that these names have "nothing to do with the equivalent types of memory pressure". He's right, but the names do, in that peculiar Microsoft tradition, seem designed to confuse. The hands do relate to memory pressure; the cache "eviction policy" is determined by both global and local memory pressures on SQL Server. The "external" clock hand responds to global memory pressure, in other words pressure on SQL Server to reduce the size of its memory caches as a whole. Global memory pressure – which just to confuse things further seems sometimes to be referred to as physical memory pressure – can be either external (from the OS) or internal (from the process itself, e.g. due to limited virtual address space). The internal clock hand responds to local memory pressure, in other words the need to reduce the size of a single, specific cache. So, for example, if a particular cache, such as the plan cache, reaches a defined "pressure limit" the internal clock hand will start to turn and a memory sweep will be performed on that cache in order to remove plans from the memory store. During each sweep of the hands, the usage counter on the cache entry is reduced in value, effectively moving its "last used" time to further in the past (in effect, setting back the wrist watch on the page a couple of hours) and increasing the likelihood that it can be aged out of the cache.
There is even a special Dynamic Management View, sys.dm_os_memory_cache_clock_hands, which allows you to interrogate the passage of the clock hands. Frequently turning hands equates to excessive memory pressure, which will lead to performance problems. Two hours later, I emerged from this rather frightening journey into the heart of SQL Server memory management, fascinated but still unsure if I'd learned anything that I could put to any practical use. However, I certainly began to agree that there is something almost Tolkienian in the language of the deep recesses of SQL Server. Cheers, Tony.

    Read the article

  • Stumbling Through: Making a case for the K2 Case Management Framework

    I have recently attended a three-day training session on K2's Case Management Framework (CMF), a free framework built on top of K2's blackpearl workflow product, and I have come away with several different impressions of some of the different aspects of the framework. Before we get into the details, what is the Case Management Framework? It is essentially a suite of tools that, when used together, solve many common workflow scenarios. The tool has been developed over time by K2 consultants who have realized they tend to solve the same problems over and over for various clients, so they attempted to package all of those common solutions into one framework. Most of these common problems involve workflow processes that aren't necessarily direct and would tend to be difficult to model. Such solutions could be achieved in blackpearl alone, but the workflows would be complex and difficult to follow and maintain over time. CMF attempts to simplify such scenarios not so much by black-boxing the workflow processes, but by providing different points of entry to the processes, allowing them to be simpler and moving the complexity to a middle layer. It is not a solution in and of itself; development is still required to tie the pieces together.

    CMF is under continuous development, both a plus and a minus in that bugs are fixed quickly and features added regularly, but it may be difficult to know which versions are the most stable. CMF is not an officially supported K2 product, which means you will not get technical support but you will get access to the source code.

    The example given of a business process that would fit well into CMF is that of a file cabinet, where each folder in said file cabinet is a case that contains all of the data associated with one complaint/customer/incident/etc., and various users can access that case at any time and take one of a set of pre-determined actions on it. When I was given that example, my first thought was that any workflow I have ever developed in the past could be made to fit this model; there must be more than just this model to help decide if CMF is the right solution. As the training went on, we learned that one of the key features of CMF is SharePoint integration, as each case gets a SharePoint site created for it, and there are a number of excellent web parts that can be used to design a portal for users to get at all the information on their cases. While CMF does not require SharePoint, without it you will be missing out on a huge portion of the functionality that CMF offers. My opinion is that without SharePoint integration, you may as well write your workflows and other components the old-fashioned way.

    When I heard that each case gets its own SharePoint site created for it, warning bells immediately went off in my head, as I felt that depending on the data load, a CMF-enabled solution could quickly overwhelm SharePoint with thousands of sites, so we have yet another deciding factor for CMF: just how many cases will your solution be creating? While it is not necessary to use the site-per-case model, it is one of the more useful parts of the framework. Without it, you are losing a big chunk of what CMF has to offer.

    When it comes to developing on top of the Case Management Framework, it becomes a matter of configuring what makes up a case, what can be done to a case, where each action on a case should take the user, and then tying actions to case statuses.
    This last step is one that I immediately warmed up to, as just about every workflow I've designed in the past needed some sort of mapping table to set the status of a work item based on the action being taken; definitely one of those common solutions that it is good to see rolled up into a re-useable entity (and it gets a nice configuration UI to boot!). This concept is a little different than traditional workflow design, in that you don't have to think of an end-to-end process around passing a case along a path; rather, you must envision the case as a central object with workflow threads branching off of it and doing their own thing with the case data. Certainly there can be certain workflow threads that get rather complex, but the idea is that they RELATE to the case, they don't BECOME the case (though it is still possible with action->status mappings to prevent certain actions in certain cases, so it isn't always a wide-open free-for-all of actions on a case).

    I realize that this description of the Case Management Framework merely scratches the surface of what the product actually can do, and I don't think I've conclusively defined for what sort of business scenario you can make a case for the Case Management Framework. What I do hope to have accomplished with this post is to raise awareness of CMF: there is a (free!) product out there that could potentially simplify a tangled workflow process and give (for free!) a very useful set of SharePoint web parts and a nice set of (free!) reports. The best way to see if it will truly fit your needs is to give it a try; did I mention it is FREE? Er, ok, so it is free, but only obtainable at this time for K2 partners.
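    For readers who haven't seen the action-to-status idea before, here is a plain C# illustration of the concept (not CMF's actual API): a configurable lookup from the case's current status and the action taken to its next status, which is the kind of mapping table the framework gives you a configuration UI for.

```csharp
// Not CMF's actual API - just a plain C# illustration of the action-to-status
// mapping idea described above: a configurable lookup from (current status,
// action taken) to the case's next status, instead of hard-coding transitions
// into every workflow thread.
using System.Collections.Generic;

class CaseStatusMap
{
    readonly Dictionary<(string Status, string Action), string> _transitions =
        new Dictionary<(string Status, string Action), string>
        {
            { ("Open",      "Escalate"), "Escalated" },
            { ("Open",      "Resolve"),  "Resolved"  },
            { ("Escalated", "Resolve"),  "Resolved"  },
            { ("Resolved",  "Reopen"),   "Open"      },
        };

    // Returns the new status, or null if the action isn't allowed from this status.
    public string Apply(string currentStatus, string action) =>
        _transitions.TryGetValue((currentStatus, action), out var next) ? next : null;
}
```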

    Read the article

  • User-Defined Customer Events & their impact (FA Type Profile)

    - by Rajesh Sharma
    CC&B automatically creates field activities when a specific Customer Event takes place. This depends on the way you have setup your Field Activity Type Profiles, the templates within, and associated SP Condition(s) on the template. CC&B uses the service point type, its state and referenced customer event to determine which field activity type to generate.   Customer events available in the base product include: Cut for Non-payment (CNP) Disconnect Warning (DIWA) Reconnect for Payment (REPY) Reread (RERD) Stop Service (STOP) Start Service (STRT) Start/Stop (STSP)   Note the Field values/codes defined for each event.   CC&B comes with a flexibility to define new set of customer events. These can be defined in the Look Up - CUST_EVT_FLG. Values from the Look Up are used on the Field Activity Type Profile Template page.     So what's the use of having user-defined Customer Events? And how will the system detect such events in order to create field activity(s)?   Well, system can only detect such events when you reference a user-defined customer event on a Severance Event Type for an event type Create Field Activities.     This way you can create additional field activities of a specific field activity type for user-defined customer events.   One of our customers adopted this feature and created a user-defined customer event CNPW - Cut for Non-payment for Water Services. This event was then linked on a Field Activity Type Profile and referenced on a Severance Event - CUT FOR NON PAY-W. The associated Severance Process was configured to trigger a reconnection process if it was cancelled (done by defining a Post Cancel Algorithm). Whenever this Severance Event was executed, a specific type of Field Activity was generated for disconnection purposes. The Field Activity type was determined by the system from the Field Activity Type Profile referenced for the SP Type, SP's state and the referenced user-defined customer event. All was working well until the time when they realized that in spite of the Severance Process getting cancelled (when a payment was made); the Post Cancel Algorithm was not executed to start a Reconnection Severance Process for the purpose of generating a reconnection field activity and reconnecting the service.   Basically, the Post Cancel algorithm (if specified on a Severance Process Template) is triggered when a Severance Process gets cancelled because a credit transaction has affected/relieved a Service Agreement's debt.   So what exactly was happening? Now we come to actual question as to what is the impact in having a user-defined customer event.   System defined/base customer events are hard-coded across the entire system. There is an impact even if you remove any customer event entry from the Look Up. User-defined customer events are not recognized by the system anywhere else except in the severance process, as described above.   There are few programs which have routines to first validate the completion of disconnection field activities, which were raised as a result of customer event CNP - Cut for Non-payment in order to perform other associated actions. One such program is the Post Cancel Algorithm, referenced on a Severance Process Template, generally used to reconnect services which were disconnected from other Severance Event, specifically CNP - Cut for Non-Payment. 
Post cancel algorithm provided by the product - SEV POST CAN does the following (below is the algorithm's description):   This algorithm is called after a severance process has been cancelled (typically because the debt was paid and the SA is no longer eligible to be on the severance process). It checks to see if the process has a completed 'disconnect' event and, if so, starts a reconnect process using the Reconnect Severance Process Template defined in the parameter.    Notice the underlined text. This algorithm implicitly checks for Field Activities having completed status, which were generated from Severance Events as a result of CNP - Cut for Non-payment customer event.   Now if we look back to the customer's issue, we can relate that the Post Cancel algorithm was triggered, but was not able to find any 'Completed' CNP - Cut for Non-payment related field activity. And hence was not able to start a reconnection severance process. This was because a field activity was generated and completed for a customer event CNPW - Cut for Non-payment of Water Services instead.   To conclude, if you introduce new customer events that extend or simulate base customer events, the ones that are included in the base product, ensure that there is no other impact either direct or indirect to other business functions that the application has to offer.  

    Read the article

  • Blank screen after installing nvidia restricted driver

    - by LaMinifalda
    I have a new machine with a MSI N560GTX Ti Twin Frozr II/OC graphic card and MSI PH67A-C43 (B3) main board. If i install the current nvidia restricted driver and reboot the machine on Natty (64-bit), then i only get a black screen after reboot and my system does not respond. I can´t see the login screen. On nvidia web page i saw that the current driver is 270.41.06. Is that driver used as current driver? Btw, i am an ubuntu/linux beginner and therefore not very familiar with ubuntu. What can i do to solve the black screen problem? EDIT: Setting the nomodeset parameter does not solve the problem. After ubuntu start, first i see the ubuntu logo, then strange pixels and at the end the black screen. HELP! EDIT2: Thank you, but setting the "video=vesa:off gfxpayload=text" parameters do no solve the problem too. Same result as in last edit. HELP. I would like to see Unity. This is my grub: GRUB_DEFAULT=0 GRUB_HIDDEN_TIMEOUT=0 GRUB_HIDDEN_TIMEOUT_QUIET=true GRUB_TIMEOUT=10 GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian` GRUB_CMDLINE_LINUX_DEFAULT="video=vesa:off gfxpayload=text nomodeset quiet splash" GRUB_CMDLINE_LINUX=" vga=794" EDIT3: I dont know if it is important. If this edit is unnecessary and helpless I will delete it. There are some log files (Xorg.0.log - Xorg.4.log). I dont know how these log files relate to each other. Please, check the errors listed below. In Xorg.1.log I see the following error: [ 20.603] (EE) Failed to initialize GLX extension (ComIatible NVIDIA X driver not found) In Xorg.2.log I see the following error: [ 25.971] (II) Loading /usr/lib/xorg/modules/libfb.so [ 25.971] (**) NVIDIA(0): Depth 24, (--) framebuffer bpp 32 [ 25.971] (==) NVIDIA(0): RGB weight 888 [ 25.971] (==) NVIDIA(0): Default visual is TrueColor [ 25.971] (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0) [ 26.077] (EE) NVIDIA(0): Failed to initialize the NVIDIA GPU at PCI:1:0:0. Please [ 26.078] (EE) NVIDIA(0): check your system's kernel log for additional error [ 26.078] (EE) NVIDIA(0): messages and refer to Chapter 8: Common Problems in the [ 26.078] (EE) NVIDIA(0): README for additional information. [ 26.078] (EE) NVIDIA(0): Failed to initialize the NVIDIA graphics device! [ 26.078] (II) UnloadModule: "nvidia" [ 26.078] (II) Unloading nvidia [ 26.078] (II) UnloadModule: "wfb" [ 26.078] (II) Unloading wfb [ 26.078] (II) UnloadModule: "fb" [ 26.078] (II) Unloading fb [ 26.078] (EE) Screen(s) found, but none have a usable configuration. [ 26.078] Fatal server error: [ 26.078] no screens found [ 26.078] Please consult the The X.Org Found [...] In Xorg.4.log I see the following errors: [ 15.437] (**) NVIDIA(0): Depth 24, (--) framebuffer bpp 32 [ 15.437] (==) NVIDIA(0): RGB weight 888 [ 15.437] (==) NVIDIA(0): Default visual is TrueColor [ 15.437] (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0) [ 15.703] (II) NVIDIA(0): NVIDIA GPU GeForce GTX 560 Ti (GF114) at PCI:1:0:0 (GPU-0) [ 15.703] (--) NVIDIA(0): Memory: 1048576 kBytes [ 15.703] (--) NVIDIA(0): VideoBIOS: 70.24.11.00.00 [ 15.703] (II) NVIDIA(0): Detected PCI Express Link width: 16X [ 15.703] (--) NVIDIA(0): Interlaced video modes are supported on this GPU [ 15.703] (--) NVIDIA(0): Connected display device(s) on GeForce GTX 560 Ti at [ 15.703] (--) NVIDIA(0): PCI:1:0:0 [ 15.703] (--) NVIDIA(0): none [ 15.706] (EE) NVIDIA(0): No display devices found for this X screen. 
[ 15.943] (II) UnloadModule: "nvidia" [ 15.943] (II) Unloading nvidia [ 15.943] (II) UnloadModule: "wfb" [ 15.943] (II) Unloading wfb [ 15.943] (II) UnloadModule: "fb" [ 15.943] (II) Unloading fb [ 15.943] (EE) Screen(s) found, but none have a usable configuration. [ 15.943] Fatal server error: [ 15.943] no screens found EDIT4 There was a file /etc/X11/xorg.conf. As fossfreedom suggested I executed sudo mv /etc/X11/xorg.conf /etc/X11/xorg.conf.backup However, there is still the black screen after reboot. EDIT5 Neutro's advice (reinstalling the headers) did not solve the problem, too. :-( Any further help is appreciated! EDIT6 I just installed driver 173.xxx. After reboot the system shows me only "Checking battery state". Just for information. I will google the problem, but help is also appreciated! ;-) EDIT7 When using the free driver (Ubuntu says that the free driver is in use and activated), Xorg.0.log shows the following errors: [ 9.267] (II) LoadModule: "nouveau" [ 9.267] (II) Loading /usr/lib/xorg/modules/drivers/nouveau_drv.so [ 9.267] (II) Module nouveau: vendor="X.Org Foundation" [ 9.267] compiled for 1.10.0, module version = 0.0.16 [ 9.267] Module class: X.Org Video Driver [ 9.267] ABI class: X.Org Video Driver, version 10.0 [ 9.267] (II) LoadModule: "nv" [ 9.267] (WW) Warning, couldn't open module nv [ 9.267] (II) UnloadModule: "nv" [ 9.267] (II) Unloading nv [ 9.267] (EE) Failed to load module "nv" (module does not exist, 0) [ 9.267] (II) LoadModule: "vesa" [...] [ 9.399] drmOpenDevice: node name is /dev/dri/card14 [ 9.402] drmOpenDevice: node name is /dev/dri/card15 [ 9.406] (EE) [drm] failed to open device [ 9.406] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so [ 9.406] (WW) Falling back to old probe method for fbdev [ 9.406] (II) Loading sub module "fbdevhw" [ 9.406] (II) LoadModule: "fbdevhw" EDIT8 In the meanwhile i tried to install WIN7 64 bit on my machine. As a result i got a BSOD after installing the nvidia driver. :-) For this reason i sent my new machine back to the hardware reseller. I will inform you as soon as i have a new system. Thank you all for the great help and support. EDIT9 In the meanwhile I have a complete new system with "only" a MSI N460GTX Hawk, but more RAM. The system works perfect. :-) The original N560GTX had a hardware defect. Is is possible to close this question? THX!

    Read the article

  • Limitations of User-Defined Customer Events (FA Type Profile)

    - by Rajesh Sharma
    CC&B automatically creates field activities when a specific Customer Event takes place. This depends on the way you have set up your Field Activity Type Profiles, the templates within them, and the associated SP Condition(s) on each template. CC&B uses the service point type, its state and the referenced customer event to determine which field activity type to generate.   Customer events available in the base product include: Cut for Non-payment (CNP), Disconnect Warning (DIWA), Reconnect for Payment (REPY), Reread (RERD), Stop Service (STOP), Start Service (STRT), Start/Stop (STSP).   Note the Field values/codes defined for each event.   CC&B also provides the flexibility to define a new set of customer events. These can be defined in the Look Up CUST_EVT_FLG; values from the Look Up are used on the Field Activity Type Profile Template page.     So what is the use of having user-defined Customer Events? And how will the system detect such events in order to create field activities?   Well, the system can only detect such events when you reference a user-defined customer event on a Severance Event Type for an event type of Create Field Activities.     This way you can create additional field activities of a specific field activity type for user-defined customer events.   One of our customers adopted this feature and created a user-defined customer event, CNPW - Cut for Non-payment for Water Services. This event was then linked on a Field Activity Type Profile and referenced on a Severance Event - CUT FOR NON PAY-W. The associated Severance Process was configured to trigger a reconnection process if it was cancelled (done by defining a Post Cancel Algorithm). Whenever this Severance Event was executed, a specific type of Field Activity was generated for disconnection purposes. The Field Activity type was determined by the system from the Field Activity Type Profile referenced for the SP Type, the SP's state and the referenced user-defined customer event. All was working well until they realized that, in spite of the Severance Process getting cancelled (when a payment was made), the Post Cancel Algorithm was not executed to start a Reconnection Severance Process for the purpose of generating a reconnection field activity and reconnecting the service.   Basically, the Post Cancel algorithm (if specified on a Severance Process Template) is triggered when a Severance Process gets cancelled because a credit transaction has affected/relieved a Service Agreement's debt.   So what exactly was happening? Now we come to the actual question: what are the limitations of having a user-defined customer event?   System-defined/base customer events are hard-coded across the entire system; there is an impact even if you remove any customer event entry from the Look Up. User-defined customer events, by contrast, are not recognized by the system anywhere else except in the severance process, as described above.   There are a few programs with routines that first validate the completion of disconnection field activities raised as a result of the customer event CNP - Cut for Non-payment before performing other associated actions. One such program is the Post Cancel Algorithm, referenced on a Severance Process Template and generally used to reconnect services that were disconnected by another Severance Event, specifically CNP - Cut for Non-payment.
The post cancel algorithm provided by the product, SEV POST CAN, does the following (below is the algorithm's description):   This algorithm is called after a severance process has been cancelled (typically because the debt was paid and the SA is no longer eligible to be on the severance process). It checks to see if the process has a completed 'disconnect' event and, if so, starts a reconnect process using the Reconnect Severance Process Template defined in the parameter.    Note the wording about the completed 'disconnect' event. This algorithm implicitly checks for Field Activities in completed status that were generated from Severance Events as a result of the CNP - Cut for Non-payment customer event.   Now if we look back at the customer's issue, we can see that the Post Cancel algorithm was triggered but was not able to find any 'Completed' CNP - Cut for Non-payment related field activity, and hence could not start a reconnection severance process. This was because a field activity had instead been generated and completed for the customer event CNPW - Cut for Non-payment for Water Services.   To conclude, if you introduce new customer events, be careful not to extend or simulate base customer events (the ones included in the base product), as those are used elsewhere in the system to provide and validate additional business functions.
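
For readers who find it easier to reason about this behaviour in code, below is a minimal Python sketch of the decision described above. It is not CC&B code, and every name in it (FieldActivity, SeveranceProcess, post_cancel, the status strings) is invented for the illustration; the only point it demonstrates is that the check is keyed to a completed disconnect raised for the base CNP event, so an activity completed for a user-defined event such as CNPW never satisfies it.

# Illustrative pseudocode only -- not CC&B's implementation. All class and
# attribute names here are invented for the example.
from dataclasses import dataclass, field
from typing import List

BASE_CUT_EVENT = "CNP"  # the base customer event the algorithm is keyed to

@dataclass
class FieldActivity:
    customer_event: str   # e.g. "CNP", or a user-defined event such as "CNPW"
    status: str           # e.g. "COMPLETED", "PENDING"

@dataclass
class SeveranceProcess:
    service_agreement: str
    field_activities: List[FieldActivity] = field(default_factory=list)

def post_cancel(process: SeveranceProcess, reconnect_template: str) -> bool:
    """Simplified view of the post-cancel decision after a process is cancelled."""
    has_completed_cnp_disconnect = any(
        fa.status == "COMPLETED" and fa.customer_event == BASE_CUT_EVENT
        for fa in process.field_activities
    )
    if has_completed_cnp_disconnect:
        print(f"Starting reconnect process '{reconnect_template}' "
              f"for SA {process.service_agreement}")
        return True
    # A disconnect completed for a user-defined event (e.g. "CNPW") does not
    # satisfy the check, so no reconnection process is started.
    return False

# The scenario from the article: the only completed disconnect is for CNPW.
proc = SeveranceProcess("SA-001", [FieldActivity("CNPW", "COMPLETED")])
post_cancel(proc, "RECONNECT-W")   # returns False -- nothing is reconnected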

    Read the article

  • Reference Data Management and Master Data: How Are They Related?

    - by Mala Narasimharajan
    Submitted By:  Rahul Kamath  Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise Master Data Management (MDM) solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering chart of accounts to enable financial transformation, or revamping organization structures to drive business transformation and operational efficiencies, or restructuring sales territories to enable equitable distribution of leads to sales teams following the acquisition of new products, or adding additional cost centers to enable fine grain control over expenses. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation. What is reference data? How does it relate to Master Data? Reference data is a close cousin of master data. While master data is challenged with problems of unique identification, may be more rapidly changing, requires consensus building across stakeholders and lends structure to business transactions, reference data is simpler, more slowly changing, but has semantic content that is used to categorize or group other information assets – including master data – and gives them contextual value. In fact, the creation of a new master data element may require new reference data to be created. For example, when a European company acquires a US business, chances are that they will now need to adapt their product line taxonomy to include a new category to describe the newly acquired US product line. Further, the cross-border transaction will also result in a revised geo hierarchy. The addition of new products represents changes to master data while changes to product categories and geo hierarchy are examples of reference data changes. [1]
The following is an illustrative list of examples of reference data by type. Reference data types may include types and codes, business taxonomies, complex relationships & cross-domain mappings, or standards.
Types & Codes: Transaction Codes; Lookup Tables (e.g., Gender, Marital Status, etc.); Status Codes; Role Codes; Domain Values.
Taxonomies: Industry Classification Categories and Codes, e.g., North America Industry Classification System (NAICS); Product Categories; Sales Territories (e.g., Geo, Industry Verticals, Named Accounts, Federal/State/Local/Defense); Market Segments; Universal Standard Products and Services Classification (UNSPSC), eCl@ss.
Relationships / Mappings: Product / Segment; Product / Geo; City -> State -> Postal Codes; Customer / Market Segment; Business Unit / Channel; Country Codes / Currency Codes / Financial Accounts; International Classification of Diseases (ICD), e.g., ICD-9 -> ICD-10 mappings.
Standards: Calendars (e.g., Gregorian, Fiscal, Manufacturing, Retail, ISO 8601); Currency Codes (e.g., ISO); Country Codes (e.g., ISO 3166, UN); Date/Time, Time Zones (e.g., ISO 8601); Tax Rates.
Why manage reference data? Reference data carries contextual value and meaning and therefore its use can drive business logic that helps execute a business process, create a desired application behavior or provide meaningful segmentation to analyze transaction data. Further, mapping reference data often requires human judgment.
Sample Use Cases of Reference Data Management
Healthcare: Diagnostic Codes
The reference data challenges in the healthcare industry offer a case in point.
Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since both code sets ICD-9 and ICD-10 offer diagnosis codes of very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders much in the same way as does master data management. Moreover, to build reports to understand utilization, frequency and quality of diagnoses, medical practitioners may need to “cross-walk” mappings -- either forward to ICD-10 or backwards to ICD-9 depending upon the reporting time horizon. Spend Management: Product, Service & Supplier Codes Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager’s time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card) as well as ACH and bank systems can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value added tasks. Change Management: Point of Sales Transaction Codes and Product Codes In the specialty finance industry, enterprises are confronted with usury laws – governed at the state and local level – that regulate financial product innovation as it relates to consumer loans, check cashing and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are being released at a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes with the appropriate product and GL codes to comply with the changing regulations. Multi-National Companies: Industry Classification Schemes As companies grow and expand across geographies, a typical challenge they encounter with reference data represents reconciling various versions of industry classification schemes in use across nations. While the United States, Mexico and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries choose different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and assess which applications were impacted by a given change. References 1 Master Data versus Reference Data, Malcolm Chisholm, April 1, 2006.
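
As a toy illustration of the "cross-walk" idea described above, the sketch below keeps a forward ICD-9 to ICD-10 mapping and derives the backward view from it. The codes shown are placeholders rather than a real mapping, and in practice such cross-walks would be maintained in a managed reference data tool such as DRM, not in a script.

# Toy cross-walk sketch; the codes below are placeholders, not real ICD mappings.
from collections import defaultdict

# Forward map: one ICD-9 code may fan out to several finer-grained ICD-10 codes.
ICD9_TO_ICD10 = {
    "123.4": ["A12.30", "A12.31"],
    "567.8": ["B56.7"],
}

# Backward map (the cross-walk in the other direction), derived from the forward map.
ICD10_TO_ICD9 = defaultdict(list)
for icd9, icd10_codes in ICD9_TO_ICD10.items():
    for icd10 in icd10_codes:
        ICD10_TO_ICD9[icd10].append(icd9)

def forward(icd9_code):
    """Map a legacy ICD-9 code to its ICD-10 candidates (current reporting)."""
    return ICD9_TO_ICD10.get(icd9_code, [])

def backward(icd10_code):
    """Map an ICD-10 code back to ICD-9 (historical reporting horizons)."""
    return ICD10_TO_ICD9.get(icd10_code, [])

print(forward("123.4"))    # ['A12.30', 'A12.31']
print(backward("B56.7"))   # ['567.8']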

    Read the article

  • Professional Development – Difference Between Bio, CV and Resume

    - by Pinal Dave
    Applying for work can be very stressful – you want to put your best foot forward, and it can be very hard to sell yourself to a potential employer while highlighting your best characteristics and answering questions.  On top of that, some jobs require different application materials – a biography (or bio), a curriculum vitae (or CV), or a resume.  These things seem so interchangeable, so what is the difference? Let’s start with the one most of us have heard of – the resume.  A resume is a summary of your job and education history.  If you have ever applied for a job, you will have used a resume.  The ability to write a good resume that highlights your best characteristics and emphasizes your qualifications for a specific job is a skill that will take you a long way in the world.  For such an essential skill, unfortunately it is one that many people struggle with. RESUME So let’s discuss what makes a great resume.  First, make sure that your name and contact information are at the top, in large print (slightly larger font than the rest of the text, size 14 or 16 if the rest is size 12, for example).  You need to make sure that if you catch the recruiter’s attention and they know how to get a hold of you. As for qualifications, be quick and to the point.  Make your job title and the company the headline, and include your skills, accomplishments, and qualifications as bullet points.  Use good action verbs, like “finished,” “arranged,” “solved,” and “completed.”  Include hard numbers – don’t just say you “changed the filing system,” say that you “revolutionized the storage of over 250 files in less than five days.”  Doesn’t that sentence sound much more powerful? Curriculum Vitae (CV) Now let’s talk about curriculum vitae, or “CVs”.  A CV is more like an expanded resume.  The same rules are still true: put your name front and center, keep your contact info up to date, and summarize your skills with bullet points.  However, CVs are often required in more technical fields – like science, engineering, and computer science.  This means that you need to really highlight your education and technical skills. Difference between Resume and CV Resumes are expected to be one or two pages long – CVs can be as many pages as necessary.  If you are one of those people lucky enough to feel limited by the size constraint of resumes, a CV is for you!  On a CV you can expand on your projects, highlight really exciting accomplishments, and include more educational experience – including GPA and test scores from the GRE or MCAT (as applicable).  You can also include awards, associations, teaching and research experience, and certifications.  A CV is a place to really expand on all your experience and how great you will be in this particular position. Biography (Bio) Chances are, you already know what a bio is, and you have even read a few of them.  Think about the one or two paragraphs that every author includes in the back flap of a book.  Think about the sentences under a blogger’s photo on every “About Me” page.  That is a bio.  It is a way to quickly highlight your life experiences.  It is essentially the way you would introduce yourself at a party. Where a bio is required for a job, chances are they won’t want to know about where you were born and how many pets you have, though.  This is a way to summarize your entire job history in quick-to-read format – and sometimes during a job hunt, being able to get to the point and grab the recruiter’s interest is the best way to get your foot in the door.  
Think of a bio as your entire resume put into words. Most bios have a standard format.  In paragraph one, talk about your most recent position and accomplishments there, specifically how they relate to the job you are applying for.  If you have teaching or research experience, training experience, certifications, or management experience, talk about them in paragraph two.  Paragraph three and four are for highlighting publications, education, certifications, associations, etc.  To wrap up your bio, provide your contact info and availability (dates and times). Where to use What? For most positions, you will know exactly what kind of application to use, because the job announcement will state what materials are needed – resume, CV, bio, cover letter, skill set, etc.  If there is any confusion, choose whatever the industry standard is (CV for technical fields, resume for everything else) or choose which of your documents is the strongest. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, PostADay, Professional Development, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Reflections from the Young Prisms

    - by Oracle OpenWorld Blog Team
     By Karen Shamban The Young Prisms began their musical journey in San Francisco, and it's here they return to bring their unique sound to the Oracle OpenWorld Music Festival.  We asked them to tell us their thoughts on music, performing, and what they like in an audience.  Here's what they had to say: Q. What do you like best about performing in front of a live audience?A. There are a lot of things to love about playing in front of people. The best part is definitely the nights when the energy the audience brings shows through. Although it always differs from city to city and person to person, when you play to a full house and people are really getting into it, it's like no other feeling.Q. How do you use technology in creating and delivering your music?A. Well, we actually use a lot more electronic components than people realize. Pretty much every string instrument played either live or on recording has been filtered through numerous electronic effects. Matt uses somewhere around 12 or 14 every time we play live. Giovanni has six. Most of our writing and demoing is done with drum sequencers and samplers too, so it's safe to say we use technology to our advantage in the writing process. Live is a bit different, since we keep it to the basics with guitars and acoustic drums. We also tend to use projections when we play live, so technology helps us do that fairly easily as well.Q. Do you prefer smaller, intimate venues or larger, louder ones?  Why?A. Couldn’t say we have a real preference in venue size. I mean, its always great to get to play through a massive killer sound system, but small venues when packed full are equally as special if not more so, because of the intimacy of it. Some of my favorite shows I've seen as an audience member/ fan have been at the smaller venues in San Francisco.Q. What about your fans surprises you?A. Sometimes the older guys are a surprise. We've played shows where there are more older guys in their 40s and 50s, who come and stare and take notes at our effects pedals. Then there are kids our age or in their 20s. Sometimes it's surprising to think that the older guys relate to what we're doing more than our peers and friends in our age group.Q. What about your live act surprises your fans?A. I think people are often surprised by how shy we can be. It feels like people expect us to be really rowdy and throw things and make really loud noises and get really aggressive on stage because some of the sounds we use can have an abrasive element to them. People expect Matt to have some kind of Kurt Cobain attitude, which he doesn’t at all. So it seems it surprises people to see musicians playing loud and noisy songs in their early and mid 20s being fairly tame and calm on stage.Q. There are going to be a lot of technical people (you could call them geeks) in the Oracle crowd -- what are they going to love about your performance?A. Hopefully most of them are pedal nerds like we are and like the previously mentioned “older dude crowd.” Besides that I hope they’d be into the projections and group of songs we're going to play for them.Q. What's new and different in the music you're making today, versus a year or two ago?A. I'd say there is more focus on the songwriting now and less of the noise today than last year. I think it's pretty evident on the new record compared to the last two. On the first two records we made as YP, we had another guitar player and songwriter who no longer plays with us. So the process in which we develop songs is different as well.Q. 
Have you been on tour recently? If so, what do you like about touring, and what do you dislike?A. Touring is amazing. Some people might tell you different if they've been doing it for what they'd call too long, but for us it's really a great chance to play for people who care about the music we're making and also to see and explore the world. Getting to visit so many different cities and explore so many different cultures is amazing. Of course we love getting into cultural foods too. Stefanie is a fashion geek so getting to go to New York as often as we do as well as getting to play in London and Paris is always especially fun for her.Q. Ever think about playing another kind of music? If so, what, and why?A. Never really thought about wanting to do anything drastically different. I think the style of music we play has a lot to do with the stuff we have been listening to both growing up and now. It wasn’t really a conscious decision to make sure it was a certain sound, so I'm not sure we've ever thought about doing a way different genre or whatever like electronic music or country. Although there have been times we've had conversations where we discuss possibly doing quiet sets or using the string synth sounds.Q. What are the top three things people should know about your music?A1. We like noise.A2. We use ambience and atmosphere as much as as we can.A3. Yes, the vocals are supposed to be mixed in with the guitars. Get more info: Oracle OpenWorld Music Festival Young Prisms

    Read the article

  • Knowing your user is key--Part 1: Motivation

    - by erikanollwebb
    I was thinking where the best place to start in this blog would be and finally came back to a theme that I think is pretty critical--successful gamification in the enterprise comes down to knowing your user.  Lots of folks will say that gamification is about understanding that everyone is a gamer.  But at least in my org, that argument won't play for a lot of people.  Pun intentional.  It's not that I don't see the attraction to the idea--really, very few people play no games at all.  If they don't play video games, they might play solitaire on their computer.  They may play card games, or some type of sport.  Mario Herger has some great facts on how much game playing there is going on at his Enterprise-Gamification.com website. But at the end of the day, I can't sell that into my organization well.  We are Oracle.  We make big, serious software designed run your whole business.  We don't make Angry Birds out of your financial reporting tools.  So I stick with the argument that works better.  Gamification techniques are really just good principals of user experience packaged a little differently.  Feedback?  We already know feedback is important when using software.  Progress indicators?  Got that too.  Game mechanics may package things in a more explicit way but it's not really "new".  To know how to use game mechanics, and what a user experience team is important for, is totally understanding who our users are and what they are motivated by. For several years, I taught college psychology courses, including Motivation.  Motivation is generally broken down into intrinsic and extrinsic motivation.  There's intrinsic, which comes from within the individual.  And there's extrinsic, which comes from outside the individual.  Intrinsic motivation is that motivation that comes from just a general sense of pleasure in the doing of something.  For example, I like to cook.  I like to cook a lot.  The kind of cooking I think is just fun makes other people--people who don't like to cook--cringe.  Like the cake I made this week--the star-spangled rhapsody from The Cake Bible: two layers of meringue, two layers of genoise flavored with a raspberry eau de vie syrup, whipped cream with berries and a mousseline buttercream, also flavored with raspberry liqueur and topped with fresh raspberries and blueberries. I love cooking--I ask for cooking tools for my birthday and Christmas, I take classes like sushi making and knife skills for fun.  I like reading about you can make an emulsion of egg yolks, melted butter and lemon, cook slowly and transform them into a sauce hollandaise (my use of all the egg yolks that didn't go into the aforementioned cake).  And while it's nice when people like what I cook, I don't do it for that.  I do it because I think it's fun.  My former boss, Ultan Ó Broin, loves to fish in the sea off the coast of Ireland.  Not because he gets prizes for it, or awards, but because it's fun.  To quote a note he sent me today when I asked if having been recently ill kept him from the beginning of mackerel season, he told me he had already been out and said "I can fish when on a deathbed" (read more of Ultan's work, see his blogs on User Assistance and Translation.). That's not the kind of intensity you get about something you don't like to do.  I'm sure you can think of something you do just because you like it. So how does that relate to gamification?  Gamification in the enterprise space is about uncovering the game within work.  
Gamification is about tapping into things people already find motivating.  But to do that, you need to know what that user is motivated by. Customer Relationship Management (CRM) is one of those areas where over-the-top gamification seems to work (not to plug a competitor in this space, but you can search on what Bunchball* has done with a company just a little north of us on 101 for the CRM crowd).  Sales people are naturally competitive and thrive on that plus recognition of their sales work.  You can use lots of game mechanics like leaderboards and challenges and scorecards with this type of user and they love it.  Show my whole org I'm leading in sales for the quarter?  Bring it on!  However, take the average accountant and show how much general ledger activity they have done in the last week and expose it to their whole org on a leaderboard and I think you'd see a lot of people looking for a new job.  Why?  Because in general, accountants aren't extraverts who thrive on competition in their work.  That doesn't mean there aren't game mechanics that would work for them, but they won't be the same game mechanics that work for sales people.  It's a different type of user and they are motivated by different things. To break this up, I'll stop here and post now.  I'll pick this thread up in the next post. Thoughts? Questions? *Disclosure: To my knowledge, Oracle has no relationship with Bunchball at this point in time.

    Read the article

  • How can I determine which GPU card is running at PCI Express 2.0 x16 & which is using x8?

    - by M. Tibbits
    Is there a way to determine the speed of the PCI Express connection to a specific card? I have three cards plugged in: two Nvidia GTX 480's (one at x16 & and one at x8) one Nvidia GTX 460 running at x8 Is there some way, either by a function call in C or an option to lspci that I can determine the bus speed of the graphics cards? When I only use one of the cards for my CUDA program, I'd like to use the one which is running at x16. Thanks! Note: lspci -vvv dumps out For the two GTX 480s. I don't see any differences that pertain to bus speed. 03:00.0 VGA compatible controller: nVidia Corporation Device 06c0 (rev a3) Subsystem: eVga.com. Corp. Device 1480 Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 16 Region 0: Memory at d4000000 (32-bit, non-prefetchable) [size=32M] Region 1: Memory at b0000000 (64-bit, prefetchable) [size=128M] Region 3: Memory at bc000000 (64-bit, prefetchable) [size=64M] Region 5: I/O ports at df00 [disabled] [size=128] [virtual] Expansion ROM at b8000000 [disabled] [size=512K] Capabilities: <access denied> Kernel driver in use: nvidia Kernel modules: nvidia, nvidiafb, nouveau 03:00.1 Audio device: nVidia Corporation Device 0be5 (rev a1) Subsystem: eVga.com. Corp. Device 1480 Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Interrupt: pin B routed to IRQ 5 Region 0: [virtual] Memory at d7ffc000 (32-bit, non-prefetchable) [disabled] [size=16K] Capabilities: <access denied> 04:00.0 VGA compatible controller: nVidia Corporation Device 06c0 (rev a3) Subsystem: eVga.com. Corp. Device 1480 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 16 Region 0: Memory at dc000000 (32-bit, non-prefetchable) [size=32M] Region 1: Memory at c0000000 (64-bit, prefetchable) [size=128M] Region 3: Memory at cc000000 (64-bit, prefetchable) [size=64M] Region 5: I/O ports at cf00 [size=128] [virtual] Expansion ROM at c8000000 [disabled] [size=512K] Capabilities: <access denied> Kernel driver in use: nvidia Kernel modules: nvidia, nvidiafb, nouveau 04:00.1 Audio device: nVidia Corporation Device 0be5 (rev a1) Subsystem: eVga.com. Corp. 
Device 1480 Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 64 bytes Interrupt: pin B routed to IRQ 5 Region 0: Memory at dfffc000 (32-bit, non-prefetchable) [size=16K] Capabilities: <access denied> And the only differences I see relate specifically to the memory mapping: myComputer:~> diff card1 card2 3c3 < Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- --- > Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- 7,11c7,11 < Region 0: Memory at d4000000 (32-bit, non-prefetchable) [size=32M] < Region 1: Memory at b0000000 (64-bit, prefetchable) [size=128M] < Region 3: Memory at bc000000 (64-bit, prefetchable) [size=64M] < Region 5: I/O ports at df00 [disabled] [size=128] < [virtual] Expansion ROM at b8000000 [disabled] [size=512K] --- > Region 0: Memory at dc000000 (32-bit, non-prefetchable) [size=32M] > Region 1: Memory at c0000000 (64-bit, prefetchable) [size=128M] > Region 3: Memory at cc000000 (64-bit, prefetchable) [size=64M] > Region 5: I/O ports at cf00 [size=128] > [virtual] Expansion ROM at c8000000 [disabled] [size=512K] 18c18 < Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- --- > Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- 19a20 > Latency: 0, Cache Line Size: 64 bytes 21c22 < Region 0: [virtual] Memory at d7ffc000 (32-bit, non-prefetchable) [disabled] [size=16K] --- > Region 0: Memory at dfffc000 (32-bit, non-prefetchable) [size=16K]
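
Not part of the original question, but as a sketch of one way to read the negotiated link width: the PCI Express capability block that plain lspci reports as "Capabilities: <access denied>" becomes readable when the command runs as root, and its LnkSta line shows the current width per device; NVIDIA's nvidia-smi tool, where installed, exposes similar per-GPU figures. Exact field names vary by lspci and driver version, so treat these commands as a starting point rather than a definitive recipe.

# Run lspci as root so the PCI Express capability block is readable;
# LnkCap shows the maximum link width, LnkSta the currently negotiated one.
# The bus addresses 03:00.0 and 04:00.0 are taken from the output above.
sudo lspci -vv -s 03:00.0 | grep -E 'LnkCap|LnkSta'
sudo lspci -vv -s 04:00.0 | grep -E 'LnkCap|LnkSta'

# With the NVIDIA driver installed, nvidia-smi can report per-GPU link width
# (output layout differs between driver versions):
nvidia-smi -q | grep -i -A 3 'Link Width'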

    Read the article

  • From the Classroom to the Boardroom

    - by Maria Sandu
    Pens and Paper...these are the only things being a student and being a graduate / professional have in common. Walking into the offices of Oracle South Africa as a graduate, the first thing you notice is how polished and sleek all the people who work here look. 80% of the ladies wear sky-scraper heels and walk with the greatest grace. This was the first of many rude awakenings to remind me that I am no longer a student but a graduate. My first struggle was having to wake up at wee hours of the morning to prepare for work. As a student, going to class was almost optional; if you missed a morning class you could always attend an evening class to make up for it or simply attend with another group. But in the workplace, you HAVE to show up every single morning at the same time, with no option of coming in when it suits you and there is definitely no coming in with the evening class/shift. As a student, the earliest hour I ever woke up was 7:00am; anything earlier than that was considered inhumane torture. My reason for waking up every morning as a student was "you have a degree to go get", but as a graduate having to go to work I have to say to myself "here's to a new day of learning and growing". My second struggle has come in having to change my beloved wardrobe. Everyone who knows me knows how passionate I am about fashion and shopping. For me Shopping is a BASIC HUMAN RIGHT that should not be messed with. Therefore it was with great sadness that I swopped my rippled skinny jeans for pin-striped formal pants, my long chandelier earrings for simple studs, my flat shoes for heels, my sheer blouses for crisp white shirts; even my beloved wild hair had to make way for a simple ponytail. Our looks as ladies also came under great scrutiny; we had to acquaint ourselves with some serious grooming tools: the mascara, blush, lip-gloss, a touch of lipstick and a manicure set. Language was a struggle of its own as well. Being a student you learn to relate to your peers in an informal way. In the workplace you have to address everyone with the same respect, including your peers. Words like "Hey buddy" had to make way for "good morning friend". The month-long winter school holiday was one of the things I looked forward to as a student. This was a time where we got to be at home and avoid the coldest month of the year, July. It was the most amazing thing ever, just sleeping and snuggling up to all sorts of warm things, but sadly it is now a thing of the past. It is currently winter in South Africa and going to work has become the most unfashionable thing with all the jackets, boots, scarves and gloves. But summer is coming and I will miss those holidays too. As a student the school holidays were like a gift for us to catch a break and not think for a while, which was why it was unimaginable how someone could go on for the entire year without a break, with only the promise of a mere 21 days annual leave!! Right now I am sure we are all looking forward to taking that annual leave when the time is right. The worst rude awakening, I must say, has to be presenting in front of clients and managers. As a student you have the same classmates for almost four years, therefore presenting in front of them becomes the norm over the years and your lecturer will always go gently on you.
What they don't tell you at University is that in the real world, time is money and clients pay money to see you present, so there is no room for error. Clients are not there to give you a score and boost your ego; they expect nothing less than 100% and they will let you know without a second thought. For a graduate this can feel like being fed to the sharks: you either get eaten or you swim for your life. At the end of the day, it is all an experience that is meant to groom us into better professionals and make us a part of the Red Team. All the sacrifices are worth it and they lead us to being better and more polished professionals. So if you are interested in joining the ECEMEA Sales and Presales Internship Programme, please have a look at http://campus.oracle.com for more information and for our latest vacancies and internships.

    Read the article

  • What are some concise and comprehensive introductory guides to unit testing for a self-taught programmer? [closed]

    - by Superbest
    I don't have much formal training in programming and I have learned most things by looking up solutions on the internet to practical problems I have. There are some areas which I think would be valuable to learn, but which ended up both being difficult to learn and easy to avoid learning for a self-taught programmer. Unit testing is one of them. Specifically, I am interested in tests in and for C#/.NET applications using Microsoft.VisualStudio.TestTools in Visual Studio 2010 and/or 2012, but I really want a good introduction to the principles so language and IDE shouldn't matter much. At this time I'm interested in relatively trivial tests for small or medium sized programs (development time of weeks or months and mostly just myself developing). I don't necessarily intend to do test-driven development (I am aware that some say unit testing alone is supposed to be for developing features in TDD, and not an assurance that there are no bugs in the software, but unit testing is often the only kind of testing for which I have resources). I have found this tutorial which I feel gave me a decent idea of what unit tests and TDD looks like, but in trying to apply these ideas to my own projects, I often get confused by questions I can't answer and don't know how to answer, such as: What parts of my application and what sorts of things aren't necessarily worth testing? How fine grained should my tests be? Should they test every method and property separately, or work with a larger scope? What is a good naming convention for test methods? (since apparently the name of the method is the only way I will be able to tell from a glance at the test results table what works in my program and what doesn't) Is it bad to have many asserts in one test method? Since apparently VS2012 reports only that "an Assert.IsTrue failed within method MyTestMethod", and if MyTestMethod has 10 Assert.IsTrue statements, it will be irritating to figure out why a test is failing. If a lot of the functionality deals with writing and reading data to/from the disk in a not-exactly trivial fashion, how do I test that? If I provide a bunch of files as input by placing them in the program's directory, do I have to copy those files to the test project's bin/Debug folder now? If my program works with a large body of data and execution takes minutes or more, should my tests have it do the whole use all of the real data, a subset of it, or simulated data? If latter, how do I decide on the subset or how to simulate? Closely related to the previous point, if a class is such that its main operation happens in a state that is arrived to by the program after some involved operations (say, a class makes calculations on data derived from a few thousands of lines of code analyzing some raw data) how do I test just that class without inevitably ending up testing that class and all the other code that brings it to that state along with it? In general, what kind of approach should I use for test initialization? (hopefully that is the correct term, I mean preparing classes for testing by filling them in with appropriate data) How do I deal with private members? Do I just suck it up and assume that "not public = shouldn't be tested"? I have seen people suggest using private accessors and reflection, but these feel like clumsy and unsuited for regular use. Are these even good ideas? Is there anything like design patterns concerning testing specifically? 
I guess the main themes in what I'd like to learn more about are, (1) what are the overarching principles that should be followed (or at least considered) in every testing effort and (2) what are popular rules of thumb for writing tests. For example, at one point I recall hearing from someone that if a method is longer than 200 lines, it should be refactored - not a universally correct rule, but it has been quite helpful since I'd otherwise happily put hundreds of lines in single methods and then wonder why my code is so hard to read. Similarly I've found ReSharpers suggestions on member naming style and other things to be quite helpful in keeping my codebases sane. I see many resources both online and in print that talk about testing in the context of large applications (years of work, 10s of people or more). However, because I've never worked on such large projects, this context is very unfamiliar to me and makes the material difficult to follow and relate to my real world problems. Speaking of software development in general, advice given with the assumptions of large projects isn't always straightforward to apply to my own, smaller endeavors. Summary So my question is: What are some resources to learn about unit testing, for a hobbyist, self-taught programmer without much formal training? Ideally, I'm looking for a short and simple "bible of unit testing" which I can commit to memory, and then apply systematically by repeatedly asking myself "is this test following the bible of testing closely enough?" and then amending discrepancies if it doesn't.
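
Not an attempt to answer every bullet above, but as one concrete reference point for the naming and granularity questions, here is a minimal MSTest example in the MethodUnderTest_Scenario_ExpectedBehaviour naming style, with one logical assertion per test. The Calculator class is invented purely for the illustration; only the Microsoft.VisualStudio.TestTools.UnitTesting attributes and Assert calls are the real framework API.

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical class under test, invented for the example.
public class Calculator
{
    public int Divide(int a, int b) { return a / b; }
}

[TestClass]
public class CalculatorTests
{
    [TestMethod]
    public void Divide_TwoPositiveIntegers_ReturnsQuotient()
    {
        var calculator = new Calculator();

        int result = calculator.Divide(10, 2);

        // One logical assertion per test keeps a failing test easy to diagnose
        // from the test results grid alone.
        Assert.AreEqual(5, result, "10 / 2 should be 5");
    }

    [TestMethod]
    [ExpectedException(typeof(DivideByZeroException))]
    public void Divide_ByZero_ThrowsDivideByZeroException()
    {
        new Calculator().Divide(1, 0);
    }
}

The naming pattern means the results table reads as a sentence even when only the method name is shown, which goes some way towards the "why is this failing?" concern raised above.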

    Read the article

  • Approaches to create a nested tree structure of NSDictionaries?

    - by d11wtq
    I'm parsing some input which produces a tree structure containing NSDictionary instances on the branches and NSString instance at the nodes. After parsing, the whole structure should be immutable. I feel like I'm jumping through hoops to create the structure and then make sure it's immutable when it's returned from my method. We can probably all relate to the input I'm parsing, since it's a query string from a URL. In a string like this: a=foo&b=bar&a=zip We expect a structure like this: NSDictionary { "a" => NSDictionary { 0 => "foo", 1 => "zip" }, "b" => "bar" } I'm keeping it just two-dimensional in this example for brevity, though in the real-world we sometimes see var[key1][key2]=value&var[key1][key3]=value2 type structures. The code hasn't evolved that far just yet. Currently I do this: - (NSDictionary *)parseQuery:(NSString *)queryString { NSMutableDictionary *params = [NSMutableDictionary dictionary]; NSArray *pairs = [queryString componentsSeparatedByString:@"&"]; for (NSString *pair in pairs) { NSRange eqRange = [pair rangeOfString:@"="]; NSString *key; id value; // If the parameter is a key without a specified value if (eqRange.location == NSNotFound) { key = [pair stringByReplacingPercentEscapesUsingEncoding:NSASCIIStringEncoding]; value = @""; } else { // Else determine both key and value key = [[pair substringToIndex:eqRange.location] stringByReplacingPercentEscapesUsingEncoding:NSASCIIStringEncoding]; if ([pair length] > eqRange.location + 1) { value = [[pair substringFromIndex:eqRange.location + 1] stringByReplacingPercentEscapesUsingEncoding:NSASCIIStringEncoding]; } else { value = @""; } } // Parameter already exists, it must be a dictionary if (nil != [params objectForKey:key]) { id existingValue = [params objectForKey:key]; if (![existingValue isKindOfClass:[NSDictionary class]]) { value = [NSDictionary dictionaryWithObjectsAndKeys:existingValue, [NSNumber numberWithInt:0], value, [NSNumber numberWithInt:1], nil]; } else { // FIXME: There must be a more elegant way to build a nested dictionary where the end result is immutable? NSMutableDictionary *newValue = [NSMutableDictionary dictionaryWithDictionary:existingValue]; [newValue setObject:value forKey:[NSNumber numberWithInt:[newValue count]]]; value = [NSDictionary dictionaryWithDictionary:newValue]; } } [params setObject:value forKey:key]; } return [NSDictionary dictionaryWithDictionary:params]; } If you look at the bit where I've added FIXME it feels awfully clumsy, pulling out the existing dictionary, creating an immutable version of it, adding the new value, then creating an immutable dictionary from that to set back in place. Expensive and unnecessary? I'm not sure if there are any Cocoa-specific design patterns I can follow here?

    Read the article

  • Linq to XML Read and output XML generated from lookup list

    - by Greg S
    I am trying to use XML created from a lookup list in SharePoint as a datasource for a treeview. It is in the form of : <NewDataSet> <test_data> <ID>1</ID> <Title>MenuItem_1</Title> <child_of /> </test_data> <test_data> <ID>2</ID> <Title>Subitem_1</Title> <Action>http://www.google.com</Action> <child_of>MenuItem_1</child_of> </test_data> <test_data> <ID>3</ID> <Title>Subitem_2</Title> <Action>http://www.google.com</Action> <child_of>MenuItem_1</child_of> </test_data> <test_data> <ID>4</ID> <Title>MenuItem_2</Title> <child_of /> </test_data> <test_data> <ID>5</ID> <Title>Subitem_2_1</Title> <Action>http://www.google.com</Action> <child_of>MenuItem_2</child_of> </test_data> <test_data> <ID>6</ID> <Title>Subitem_2_2</Title> <Action>http://www.google.com</Action> <child_of>MenuItem_2</child_of> </test_data> <test_data> <ID>7</ID> <Title>Subitem_2_2_1</Title> <Action>http://www.google.com</Action> <child_of>Subitem_2_2</child_of> </test_data> </NewDataSet> There may be N tiers, but the items relate to the parent via the <child_of> element. I can't seem to figure out how to write the LINQ in C# to nest the menu items properly. A friend recommended I post here. Any help is greatly appreciated.
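
Not a full answer, but a hedged sketch of one way to group the rows by their <child_of> value with LINQ to XML and walk them recursively; the helper names are mine, and hooking the result into the actual TreeView control is only indicated in the comment. It assumes the XML has the shape shown above, and the embedded sample is a trimmed subset of it.

using System;
using System.Linq;
using System.Xml.Linq;

class MenuTreeSketch
{
    static void Main()
    {
        // Trimmed subset of the lookup-list XML shown in the question.
        string xml =
            "<NewDataSet>" +
            "  <test_data><ID>4</ID><Title>MenuItem_2</Title><child_of /></test_data>" +
            "  <test_data><ID>6</ID><Title>Subitem_2_2</Title><child_of>MenuItem_2</child_of></test_data>" +
            "  <test_data><ID>7</ID><Title>Subitem_2_2_1</Title><child_of>Subitem_2_2</child_of></test_data>" +
            "</NewDataSet>";

        XDocument doc = XDocument.Parse(xml);

        // Group every row by the value of its <child_of> element;
        // rows with an empty <child_of> are the root menu items.
        ILookup<string, XElement> byParent = doc
            .Descendants("test_data")
            .ToLookup(x => ((string)x.Element("child_of") ?? "").Trim());

        Walk(byParent, "", 0);
    }

    // Recursively visits the children of 'parentTitle'; in the real page you
    // would create a TreeNode per row here and add it to its parent's nodes.
    static void Walk(ILookup<string, XElement> byParent, string parentTitle, int depth)
    {
        foreach (XElement row in byParent[parentTitle])
        {
            string title = (string)row.Element("Title");
            Console.WriteLine(new string(' ', depth * 2) + title);
            Walk(byParent, title, depth + 1);
        }
    }
}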

    Read the article

  • LINQ to XML, a problem conceptualizing how the tree should look.

    - by snark
    Have you ever had one of those days when you've dug a hole but you're so deep in now that the only option is to keep on digging and see where you come out? I thought, just for giggles, that in the latest component I'm writing I would use LINQ to XML, because it was about time I saw what all the hoopla was about. The problem: I am making a graph component that contains series of data that get graphed, and then you can apply a formula to that series and graph another series, then apply a formula to that series, and so on. So I figured that I would do so in two steps: create (and manage) an XML representation of the series and how they relate to each other, then pass this XML to a draw engine which draws. Conceptually it's a tree, with the exception of the root, all child series being based upon a parent (one parent can have many children). So I should always be adding child nodes to their parent, and if I delete a node (series) then I can simply delete the series and its descendants (then draw), and voila, all the messy iterating through each node finding parents and children is unnecessary. Trouble is, I don't know how to represent this tree in XML, i.e. the structure. My first attempt saw me programmatically adding each series as siblings, which worked like a treat because I ended up with an ordered list and thus my order of rendering was maintained. I had this <Chart> <Series id="1">seriesText1</Series> <Series id="2">seriesText2</Series> <Series id="3">seriesText3</Series> <Series id="4">seriesText4</Series> </Chart> I'm in a muddle now ... how can I represent a series and a series that has child series? If someone can give me a hint as to how my tree should look (perhaps with a snippet on how to programmatically add nodes to their parents), that would help. All the examples I have read usually have some container elements such as <ContactS> or <BookS>, but my head says I have <Series> elements, some of them parents and some of them children. Would appreciate a nudge in the right direction.
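
Only as a sketch of one possible shape, using the ids and labels from the flat example above but with an invented parent/child arrangement (the flat version does not say which series derive from which): letting each <Series> element contain its child <Series> elements directly keeps document order for rendering and lets a node be deleted together with all of its descendants in one operation. Moving the label into an attribute avoids mixed text-and-element content.

<Chart>
  <Series id="1" text="seriesText1">
    <Series id="2" text="seriesText2" />
    <Series id="3" text="seriesText3">
      <Series id="4" text="seriesText4" />
    </Series>
  </Series>
</Chart>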

    Read the article

  • Need help with joins in sqlalchemy

    - by Steve
    I'm new to Python, as well as SQL Alchemy, but not the underlying development and database concepts. I know what I want to do and how I'd do it manually, but I'm trying to learn how an ORM works. I have two tables, Images and Keywords. The Images table contains an id column that is its primary key, as well as some other metadata. The Keywords table contains only an id column (foreign key to Images) and a keyword column. I'm trying to properly declare this relationship using the declarative syntax, which I think I've done correctly. Base = declarative_base() class Keyword(Base): __tablename__ = 'Keywords' __table_args__ = {'mysql_engine' : 'InnoDB'} id = Column(Integer, ForeignKey('Images.id', ondelete='CASCADE'), primary_key=True) keyword = Column(String(32), primary_key=True) class Image(Base): __tablename__ = 'Images' __table_args__ = {'mysql_engine' : 'InnoDB'} id = Column(Integer, primary_key=True, autoincrement=True) name = Column(String(256), nullable=False) keywords = relationship(Keyword, backref='image') This represents a many-to-many relationship. One image can have many keywords, and one keyword can relate back to many images. I want to do a keyword search of my images. I've tried the following with no luck. Conceptually this would've been nice, but I understand why it doesn't work. image = session.query(Image).filter(Image.keywords.contains('boy')) I keep getting errors about no foreign key relationship, which seems clearly defined to me. I saw something about making sure I get the right 'join', and I'm using 'from sqlalchemy.orm import join', but still no luck. image = session.query(Image).select_from(join(Image, Keyword)).\ filter(Keyword.keyword == 'boy') I added the specific join clause to the query to help it along, though as I understand it, I shouldn't have to do this. image = session.query(Image).select_from(join(Image, Keyword, Image.id==Keyword.id)).filter(Keyword.keyword == 'boy') So finally I switched tactics and tried querying the keywords and then using the backreference. However, when I try to use the '.images' iterating over the result, I get an error that the 'image' property doesn't exist, even though I did declare it as a backref. result = session.query(Keyword).filter(Keyword.keyword == 'boy').all() I want to be able to query a unique set of image matches on a set of keywords. I just can't guess my way to the syntax, and I've spent days reading the SQL Alchemy documentation trying to piece this out myself. I would very much appreciate anyone who can point out what I'm missing.
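
Not from the original post, but a hedged sketch of the query shapes that normally work with the models exactly as declared above (note that the backref was named 'image', singular, which is why iterating over '.images' fails); the in-memory engine is only there to make the snippet self-contained.

# Assumes the Base, Image and Keyword classes declared above are in scope.
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine('sqlite://')        # throwaway in-memory database
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# 1) Explicit join along the declared relationship:
images = (session.query(Image)
                 .join(Image.keywords)
                 .filter(Keyword.keyword == 'boy')
                 .all())

# 2) EXISTS-style filter, no explicit join required:
images = (session.query(Image)
                 .filter(Image.keywords.any(keyword='boy'))
                 .all())

# Going the other way, each Keyword row exposes the backref declared as 'image':
for kw in session.query(Keyword).filter(Keyword.keyword == 'boy'):
    print(kw.image.name)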

    Read the article

< Previous Page | 15 16 17 18 19 20 21  | Next Page >