Search Results

Search found 21814 results on 873 pages for 'morgan may'.


  • Announcing Spacewalk Support for Oracle Linux Basic and Premier Customers

    - by Michele Casey
    Over the years, customers migrating to Oracle Linux have asked for options to provide a transitional solution for their existing system management tools (such as Red Hat Satellite Server) while evaluating and planning migrations to Oracle's Enterprise Manager, which is offered at no additional charge with Oracle Linux Support Subscriptions. Based on this request, we are pleased to announce support for the open-source community project, Spacewalk, which is the basis for both Red Hat Satellite Server and SUSE Manager. Effective today, customers with Oracle Linux Basic and Premier Support subscriptions have access to a fully supported Spacewalk build which can be set up to easily manage Oracle Linux systems. Spacewalk support for Oracle Linux requires Oracle Linux 6, x86_64 for the server and provides support for Oracle Linux 5 and Oracle Linux 6 (x86, x86_64) clients. This solution requires Oracle Database 11g Release 2 as the supported database repository for Spacewalk with Oracle Linux. Within the next several weeks, a limited-use license for the Oracle Database will be included with this offer. Until this is complete, customers may use an existing Oracle Database license or begin by downloading a 30-day trial license from eDelivery. Customers with Oracle Linux Basic and Premier subscriptions will automatically have access to the channel hosting the supported build. Please review the release notes for further instructions. Oracle Enterprise Manager is still the recommended enterprise solution for managing Oracle Linux systems, and we want to provide the easiest transition path for our customers. We are excited to offer this solution to our Oracle Linux customers while they plan and implement their migration to Oracle Enterprise Manager.

    Read the article

  • Handling UTF-8 with BOM in HTTP

    - by Alois Mahdal
    Say I have a script which at some point serves a plain text file as content (right after "\n\n"). These files are provided by users, but I can expect them to be UTF-8, so I hard-wire Content-Type: text/plain; charset=UTF-8. But while I can teach users to save everything in UTF-8, I can't be very sure that the files will be without a BOM ("\xEF\xBB\xBF"), since at least on Windows this is not clearly distinguished in common plain text editors, and not every one of them uses the same default. So what about files created on Windows, which may or may not start with a BOM? Should/will the server or UA get rid of this debris for me? Or is it my task to prepare clean UTF-8, i.e. open each file and check whether a BOM needs to be removed?
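    A hedged sketch of the strip-it-yourself option (Python here is purely illustrative, since the question doesn't say what language the script is in, and the filename is made up):
        UTF8_BOM = b"\xEF\xBB\xBF"

        def strip_bom(data: bytes) -> bytes:
            # Drop a leading UTF-8 BOM, if the user's editor added one.
            return data[len(UTF8_BOM):] if data.startswith(UTF8_BOM) else data

        with open("user_file.txt", "rb") as f:   # hypothetical user-provided file
            body = strip_bom(f.read())
        # Equivalent shortcut: open("user_file.txt", encoding="utf-8-sig")
        # decodes UTF-8 and silently discards a leading BOM.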

    Read the article

  • How to build the mainline kernel source package?

    - by Maxime R.
    The Ubuntu kernel PPA only provides linux-headers*.deb and linux-image*.deb packages. How can I build the corresponding linux-source*.deb package? Context: I'm currently running Ubuntu 11.10 with the mainline kernel (3.2-rc6 at the moment) to get better support for my Sandy Bridge IGP (Dell E6420 laptop with an Intel i5-2520M CPU). I'd like to install this touchpad driver, ALPS touchpads being badly supported (see the bug report in the previous link), while waiting for upstream support in kernel version 3.3. The problem is, DKMS keeps complaining about not finding the full kernel source: "Module build for the currently running kernel was skipped since the kernel source for this kernel does not seem to be installed." It appears I may not need the full source, but I'd still like to try having it installed to see if it solves my problem. What I tried:
    - Uncompressing the kernel.org source archive in /usr/src/. DKMS still complained.
    - Manually updating the kernel source package with uupdate and the mainline source package as explained here. Did not succeed.
    - Manually building the linux-source package following @roadmr's and @elmicha's instructions. I eventually succeeded in building it, but DKMS still complained about the missing source.
    At last I noticed an error I had not caught at first while reinstalling the kernel headers: the .deb I got may have been corrupted, and downloading it again did the trick :) Alas, while DKMS then agreed to compile the module, I ran into the following error, which appears to have already been reported. That issue isn't solved yet, but I won't pursue it, because in the end I decided to test the precise kernel version 3.2-rc6 through the xorg-edgers PPA, which appears to be correctly patched: it works. Nevertheless, it might still be of some interest to know how to build the mainline linux-source package, as the Ubuntu Kernel Team doesn't provide it. Not to mention that I learned a lot in the process ^^

    Read the article

  • Reinventing the Wheel, why should I?

    - by Mercfh
    So I have this problem. It may be my OCD (I have OCD; it's not severe, but it makes me very, let's say, specific about certain things, programming being one of them) or it may be the fact that I graduated college and still feel "meh" at programming. Reading this made me think "Oh, that's me!" but that's not really my main problem. My big problem is: any time I'm using a high-level language/API/etc., I always think to myself that I'm not really "programming". I know, I know... it sounds stupid. But I feel like if I can't figure out how to do it at the lowest level then I'm not really "understanding" it. I do this for just about every new technology I learn: I look at the lowest level and try to understand it. Sometimes I do... most of the time I don't. I mean, I've only really been programming for 4 years (at college, if you even call it programming... our university's program was "meh"). For instance, I do a little bit of embedded programming (with the Atmel AVR 8-bit/Arduino stuff), and I can't bring myself to use the C compiler, even though it's 8 million times easier than using assembly... it's stupid, I know. Has anyone else ever felt like this? I think it's just my OCD that makes me feel this way, but has anyone else felt like they need to go down to the lowest level of a language to even be satisfied with using it? I apologize for the very odd question, but I think it really hinders me in getting deeply invested in a programming language and making a real application of my own. (It's silly, I know.)

    Read the article

  • Does Agile force developers to work more?

    - by Shooshpanchick
    Looking at common Agile practices, it seems to me that they (intentionally or unintentionally?) force developers to spend more time actually working, as opposed to reading blogs/articles, chatting, taking coffee breaks and just plain procrastinating. In particular:
    1) Pair programming - the biggest work-forcer, just because it is inconvenient to do all that procrastinating when there are two of you sitting together.
    2) Short stories - when you have a HUGE chunk of work that must be done in e.g. a month, it is pretty common to slack off in the first three weeks and switch to OMG DEADLINE mode for the last one. With little chunks (that must be done in a day or less) it is the exact opposite - you feel that time is tight, there is no room for maneuvering, and you will be held accountable for the task pretty soon, so you start working immediately.
    3) Team communication and cohesion - when you underperform in a slow, distanced and silent environment it may feel OK, but when everyone boasts at the end-of-day Scrum meeting about what they have accomplished and you have nothing to say, you may actually feel ashamed.
    4) Testing and feedback - again, it prevents you from keeping tasks "99% ready" (when it's actually around 20%) until the deadline suddenly arrives.
    Do you feel that under Agile you work more than under "conventional" methodologies? Is this pressure compensated for by the more comfortable environment and by the feeling of actually getting the right things done quickly?

    Read the article

  • Unable to install OpenLDAP on Debian-based elementary OS

    - by Waqas Khan
    Hi, while installing OpenLDAP, when I enter the command apt-get install slapd I get the following output:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:
        The following packages have unmet dependencies:
         slapd : Depends: libldap-2.4-2 (= 2.4.28-1.1ubuntu4.1) but 2.4.28-1.1ubuntu4.2 is to be installed
        E: Unable to correct problems, you have held broken packages.
    Please help.

    Read the article

  • Strange and erratic transformations when using OpenGL VBOs to render scene

    - by janoside
    I have an existing iOS game with fairly simple scenes (all textured quads), and I'm using Apple's "Texture2D" class. I'm trying to convert this class to use VBOs, since the vertices of my objects basically never change, so I may as well not re-create them for every object every frame. I have the scene rendering using VBOs, but the sizes and orientations of all rendered objects are strange and erratic - though locations seem generally correct. I've been toying with this code for a few days now, and I've found something odd: if I re-create all of my VBOs each frame, everything looks correct, even though I'm almost certain my vertices are not changing. Other notes:
    - I'm basing my work on this tutorial, and therefore am also using "IBOs"
    - I create my buffers before rendering begins
    - My buffers include vertex and texture data
    - I'm using OpenGL ES 1.1
    - Fearing some strange effect of the current matrix GL state at the time of buffer creation, I've also tried wrapping my buffer-setup code in a "pushMatrix-loadIdentity-popMatrix" block, which (as expected) had no effect
    I'm aware that various articles have been published demonstrating that VBOs may not help performance, but I want to understand this problem and at least have the option to use them. I realize this is a shot in the dark, but has anyone else experienced this type of strange behavior? What might I be doing to cause it? It's rather difficult for me to isolate the problem since I'm working in an existing, moderately complex project, so suggestions about how to approach the problem are also quite welcome.

    Read the article

  • Windows 7 setup hangs after "Starting Windows..."-screen

    - by Eirik Lillebo
    Hi! I'm having some trouble installing Windows 7. I need to install the OS from boot in order to split my C: into two different partitions, as this is not allowed when installing from inside Vista. When I boot up from the install disc, I get the usual "Windows is copying files..." screen, shortly followed by the "Starting Windows..." screen with the animated window logo. Then it looks as if the installation is about to begin, with a blue screen and a cursor I can move around, but here it all stops. Nothing more happens, and the setup seems to hang. Not a single key on my keyboard has any effect, and all I am left to do is abort and reboot. I've tried to install using two different DVDs (not clones), and the same thing happens every time. What may be causing this, and how may I fix it? Thanks in advance :)

    Read the article

  • Tools for game script / storyboard

    - by Pietro Polsinelli
    I am searching for a tool that will help in writing a game script. By "script" I mean the text core of a storyboard - without the drawing drafts, which may or may not be there (yet). What I'm thinking of will let me write a piece of text of the script, define a simplified workflow from that step, and then define the text of the next steps, and so on. Searching online, I found Inform http://inform7.com/ ("A Design System for Interactive Fiction Based on Natural Language"), which in theory is exactly what I am searching for, but when I tried to use it, it has this model of a space (a dungeon, a library) where you are picking up objects and exploring them. In my case I am designing more of a Sims-like game, and the flow is entirely different. As for non-specific software, mind mapping tools miss the linearity of the process. What I am writing is a directed graph - simply a work-flow, but the way I want to design it is more text-based than work-flow-based. So what I'm doing now is using a text editor, whose content I'll transform directly into code. Any suggestions?
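    A hedged sketch of the kind of text-first directed graph described above (in Python; every name and the toy script are invented for illustration): nodes hold script text plus labelled transitions, and a traversal prints the text linearly for proofreading.
        from dataclasses import dataclass, field

        @dataclass
        class Node:
            text: str                                    # script text for this step
            choices: dict = field(default_factory=dict)  # choice label -> next node id

        SCRIPT = {
            "intro":   Node("You wake up in your apartment.",
                            {"make coffee": "kitchen", "sleep in": "bed"}),
            "kitchen": Node("The coffee machine hisses.", {"drink": "day"}),
            "bed":     Node("You oversleep and miss work.", {}),
            "day":     Node("A new day starts.", {}),
        }

        def dump(start: str) -> None:
            # Walk the directed graph once, printing each step's text.
            seen, stack = set(), [start]
            while stack:
                node_id = stack.pop()
                if node_id in seen:
                    continue
                seen.add(node_id)
                print(f"[{node_id}] {SCRIPT[node_id].text}")
                stack.extend(SCRIPT[node_id].choices.values())

        dump("intro")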

    Read the article

  • Installing libreoffice in Ubuntu 12.04 is impossible

    - by user1587239
    What is wrong with Ubuntu repositories?
        sudo apt-get install libreoffice
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:
        The following packages have unmet dependencies:
         libreoffice : Depends: libreoffice-core (= 1:3.5.4-0ubuntu1.1) but it is not going to be installed
                       Depends: libreoffice-writer but it is not going to be installed
                       Depends: libreoffice-calc but it is not going to be installed
                       Depends: libreoffice-impress but it is not going to be installed
                       Depends: libreoffice-draw but it is not going to be installed
                       Depends: libreoffice-math but it is not going to be installed
                       Depends: libreoffice-base but it is not going to be installed
                       Depends: libreoffice-filter-mobiledev but it is not going to be installed
                       Depends: libreoffice-java-common (>= 1:3.5.4~) but it is not going to be installed
                       Recommends: libreoffice-gnome but it is not going to be installed or
                                   libreoffice-kde but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    Read the article

  • Finding header files

    - by rwallace
    A C or C++ compiler looks for header files using a strict set of rules: relative to the directory of the including file (if "" was used), then along the specified and default include paths, failing if still not found. An ancillary tool such as a code analyzer (which I'm currently working on) has different requirements: it may, for a number of reasons, not have the benefit of the setup performed by a complex build process, and have to make the best of what it is given. In other words, it may encounter a header file not present in the include paths it knows, and have to take its best shot at finding the file itself. I'm currently thinking of using the following algorithm:
    1. Start in the directory of the including file.
    2. Is the header file found in the current directory or any subdirectory thereof? If so, done.
    3. If we are at the root directory, the file doesn't seem to be present on this machine, so skip it.
    4. Otherwise move to the parent of the current directory and go to step 2.
    Is this the best algorithm to use? In particular, does anyone know of any case where a different algorithm would work better?
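    For concreteness, here is the algorithm above transcribed directly into Python (the function name is mine; it assumes ordinary filesystem paths):
        import os

        def find_header(including_file: str, header_name: str):
            # Step 1: start in the directory of the including file.
            directory = os.path.dirname(os.path.abspath(including_file))
            while True:
                # Step 2: search this directory and all of its subdirectories.
                for root, _dirs, files in os.walk(directory):
                    if header_name in files:
                        return os.path.join(root, header_name)
                # Step 3: at the filesystem root, give up.
                parent = os.path.dirname(directory)
                if parent == directory:
                    return None
                # Step 4: otherwise move up one level and repeat.
                directory = parent
    One consequence worth noting: each pass upward re-scans subtrees it already visited, so a variant that prunes the child it just came from would avoid redundant work on deep trees.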

    Read the article

  • Should I cache the data or hit the database?

    - by JD01
    I have not worked with any caching mechanisms and was wondering what my options are in the .NET world for the following scenario. We basically have a REST service where the user passes the ID of a category (think folder), and this category may have lots of sub-categories, each of which could have thousands of media containers (think file-reference objects) which hold information about a file that may be on a NAS or SAN server (the files are videos in this case). The relationships between these categories are stored in a database, together with some permission rules and metadata about the sub-categories. So from a UI perspective we have a lazily loaded tree control which is driven by the user clicking on each sub-folder (think of Windows Explorer). Once they come to the URL of the video file, they can then watch the video. The number of users could grow into the thousands, and the sub-categories and videos could be in the tens of thousands as the system grows. The question is: should we carry on the way it currently works, where each request hits the database, or should we think about caching the data? We are using IIS 6/7 and ASP.NET.
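    As a point of reference, the usual middle ground is cache-aside with a short expiry: hot categories are served from memory, cold ones still hit the database. A hedged sketch follows (in Python for neutrality; in ASP.NET an in-memory cache such as System.Runtime.Caching.MemoryCache plays the same role, and query_subcategories below is a stand-in for the real data access):
        import time

        TTL_SECONDS = 300   # assumed expiry; tune to how often the tree changes
        _cache = {}         # category_id -> (timestamp, rows)

        def query_subcategories(category_id):
            return []       # stand-in for the real database query

        def get_subcategories(category_id):
            entry = _cache.get(category_id)
            if entry is not None and time.time() - entry[0] < TTL_SECONDS:
                return entry[1]                        # cache hit
            rows = query_subcategories(category_id)    # cache miss: hit the DB
            _cache[category_id] = (time.time(), rows)
            return rows
    One caution given the scenario above: since permission rules live alongside the category data, either cache the raw tree and apply per-user permissions after the lookup, or make the user's role part of the cache key.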

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-28

    - by Bob Rhubart
    Follow the action: OTN's YouTube Channel - Check out what's happening at Oracle OpenWorld and JavaOne with video coverage by the OTN crew. New interviews and more are posted daily on the OTN YouTube channel.
    Whiteboards, not red carpets: OTN Architect Day Los Angeles, Oct 25 - a free event. Yes, it's Tinsel Town, but the stars at this event are experts in the use of Oracle technologies in today's architectures. This free event includes a full slate of technical sessions and peer interaction covering cloud computing, SOA, and engineered systems - and lunch is on us. Register now. Thursday, October 25, 2012, 8:00 a.m. - 5:00 p.m., Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048.
    Overview about the 5th SOA, Cloud and Service Technology Symposium | Jan van Zoggel - Middleware consultant and author Jan van Zoggel shares an overview of three of the sessions he attended at this week's SOA, Cloud and Service Technology Symposium in the UK.
    OOW 2012: Questions to get answered during this conference | Lucas Jellema - Oracle ACE Director Lucas Jellema shares "a quick list of some of the questions that are on the top of my head to get answered during this year's conference." The list may be quick, but it is quite detailed, and well worth a look.
    Front-ending a SAML Service Provider with OHS | Andre Correa - Oracle Fusion Middleware A-Team member Andre Correa shares a follow-up to a previous post covering Integrating OBIEE 11g into WebLogic's SAML SSO.
    Thought for the Day: "Simplicity is prerequisite for reliability." - Edsger W. Dijkstra (May 11, 1930 - August 6, 2002). Source: SoftwareQuotes.com

    Read the article

  • Customer retention - why most companies have it wrong

    - by Michel Adar
    At least in the US market it is quite common for service companies to offer an initially discounted price to new customers. While this may attract new customers and poach customers from competitors, it is my argument that it is a bad strategy for the company. This strategy gives an incentive to change companies and a disincentive to stay with the company. From the point of view of the customer, after 6 months of being a customer, the company rewards that loyalty by raising the price. A better strategy would be to reward customers for staying with the company - for example, by lowering the cost by 5% every year (a compound discount, so it never gets to zero). This is a very rational thing for the company to do. Acquiring new customers and setting up their service is expensive, and new customers also tend to use more of the common resources like customer service channels. It is probably true for most companies that the cost of providing service to a customer of 10 years is lower than providing the same service in the first year of a customer's tenure. It is only logical to pass these savings to the customer. From the customer's point of view, the competition would have to offer something very attractive, whether in terms of price or service, in order for the customer to switch. Such a policy would give an advantage to the first mover, but would probably force the competitors to follow suit. Overall, I would expect that this would reduce mobility in the market, increase loyalty, increase the investment of companies in loyal customers and, ultimately, increase competition for providing a better service. Competitors may even try to break the scheme by offering customers the porting of their tenure, but that would not work very well, because it would disenchant existing customers and would be costly, assuming that it is costlier to serve a customer through installation and the first year. What do you think? Is this better than using "save offers" to retain flip-floppers?
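    The compounding claim is easy to check: 5% off every year decays geometrically and never reaches zero. A quick illustrative calculation:
        price = 100.0                     # illustrative starting price
        for year in range(1, 6):
            price *= 0.95                 # 5% loyalty discount, compounded
            print(f"year {year}: {price:.2f}")
        # -> 95.00, 90.25, 85.74, 81.45, 77.38; after n years the price is
        #    100 * 0.95**n, which stays positive for every n.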

    Read the article

  • Storing data for use on Android and Windows Applications

    - by Andy Mepham
    I posted this last night on StackOverflow and was advised to move it over to StackExchange; thank you for taking a moment to look at my question. I'm developing a project proposal for my final-year project at university, and as I aim to use programming languages I am currently not too familiar with, I'm looking for some guidance - I can't include details of my project, but hopefully you will understand what I'm after. I'm going to be creating an Android application (in Java) and a Windows application (in C#) that will ideally access, query and update a remotely hosted database or set of XML files (this would most likely be over the Internet). I've done some looking around the internet, and SQLite seems like a safe bet for cross-platform manipulation of the database; however, I would like to keep the system as lightweight as possible, and I'm wondering whether XML files may provide a better alternative. Is there anyone out there with experience using SQLite and/or remotely hosted XML for the purposes of Android and/or C# development who could point me in the right direction? If there is an alternative solution other than those I have mentioned, I would be interested to hear about it too. Thank you for taking the time to read my question. Edit: The purpose of this application is for a small-scale business; the data source would not need to be updated by more than one source, but may be viewed from multiple sources (i.e. through multiple phones and a desktop PC). The database wouldn't be updating masses of data at a time (most likely single rows of a few tables at the most).

    Read the article

  • How do I force apt-get to install a package that will not install due to a bug introduced by Ubuntu 11.10?

    - by Hemm
    Ubuntu 11.10 separated python-profiler out from the Python standard library due to licensing philosophies. (According to what I could Google; correct me if I'm wrong.) This has been an active bug for 11.10 since October. I have Python 2.7.2 installed, so the dependency errors are wrong. 'apt-get check' does not resolve the problem. What is the best way to resolve this? Thank you.
        sudo apt-get install python-profiler
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:
        The following packages have unmet dependencies:
         python-profiler : Depends: python (>= 2.5) but it is not going to be installed
                           Depends: python (< 2.8) but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    Read the article

  • Installing mod_xsendfile on MAMP

    - by mail4alberto
    Hello, I'm having trouble installing mod_xsendfile on MAMP. I've used some sources to try to help me install it: http://iprog.com/posting/2008/04/compiling_mod_xsendfile_for_mac_os_x http://groups.google.com/group/phusion-passenger/browse_thread/thread/e6dac9d5ea0de9c1 I ended up installing apache20 via MacPorts, used the apxs command to create the module, and then copied it into MAMP's modules folder. I seem to be able to load the module, at least, but then I get this error in my Apache logs:
        [Thu May 27 19:08:28 2010] [notice] child pid 68606 exit signal Bus error (10)
        [Thu May 27 19:08:41 2010] [notice] child pid 68607 exit signal Bus error (10)
    Can anyone help me out? :S

    Read the article

  • Wine on Ubuntu 12.04 64bit. wine : Depends: wine1.4 but it is not going to be installed

    - by Nikola Borisov
    I'm running Ubuntu 12.04 64-bit and I want to install wine:
        nikola@carbon:~$ sudo apt-get install wine
        [sudo] password for nikola:
        Sorry, try again.
        [sudo] password for nikola:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:
        The following packages have unmet dependencies:
         wine : Depends: wine1.4 but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.
        nikola@carbon:~$
    I have spent 4 hours and have not made any progress. I don't get it. Ubuntu is a popular distro, and wine is a very common thing for people to want to run. Using a 64-bit system is what everyone should be doing (I don't even get why there are 32-bit versions of Ubuntu). Here is what the dependency chain looks like: wine -> wine1.4 -> wine1.4-amd64 -> wine1.4-common -> wine1.4. I see a problem here... :( Please help me.

    Read the article

  • Rule of thumb for cost vs. savings for code re-use

    - by Styler
    Is it a good rule of thumb to always write code with the intent of re-using it somewhere down the road? Or, depending on the size of the component you are writing, is it better practice to design it for re-use only when it makes sense with regard to the time spent on it? What is a good rule of thumb for spending extra time on analysis and design of project components that have "some probability" of being needed later down the road for other things that may or may not need that part? For example, say I need project X to do things A and B. A definitely needs to be written for re-use, because it just makes sense to do so. B is very project-specific at the moment, and I can hack it all together in a couple of days to finish the project on time and give everyone kudos for being a great team, etc. Or we could spend a whole friggin' two weeks figuring out what project Y/Z might need this thing for, and spend a load of extra time on part B because someday we might need to use it on project Y/Z (where the savings would be realized). I'd imagine a perfect-world situation would be a nicely crafted combination of project-specific vs. re-usable architected components, given the project. However, some code shops might feel it's a great idea to write everything with the intention of using it at some point down the road.

    Read the article

  • Reference Data Management

    - by rahulkamath
    Reference Data Management
    Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise MDM solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering the chart of accounts to enable financial transformation, or revamping organization structures to drive business transformation and operational efficiencies, or mastering sales territories in light of rapid-fire acquisitions that require frequent sales territory refinement, equitable distribution of leads and accounts to salespersons, and alignment of budget/forecast with results to optimize sales coverage. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation.
    What is reference data? Reference data is a close cousin of master data. While master data may be more rapidly changing, requires consensus building across stakeholders and lends structure to business transactions, reference data is simpler, more slowly changing, but has semantic content that is used to categorize or group other information assets - including master data - and give them contextual value. Reference data types may include types and codes, business taxonomies, complex relationships and cross-domain mappings, or standards. The following is an illustrative list of examples of reference data by type:
    - Types & Codes: Transaction Codes; Lookup Tables (e.g., Gender, Marital Status, etc.); Status Codes; Role Codes; Domain Values
    - Taxonomies: Industry Classification Categories and Codes, e.g., North America Industry Classification System (NAICS); Product Categories; Sales Territories (e.g., Geo, Industry Verticals, Named Accounts, Federal/State/Local/Defense); Market Segments; Universal Standard Products and Services Classification (UNSPSC), eCl@ss
    - Relationships / Mappings: Product / Segment; Product / Geo; City -> State -> Postal Codes; Customer / Market Segment; Business Unit / Channel; Country Codes / Currency Codes / Financial Accounts; International Classification of Diseases (ICD) mappings, e.g., ICD-9 -> ICD-10
    - Standards: Calendars (e.g., Gregorian, Fiscal, Manufacturing, Retail, ISO 8601); Currency Codes (e.g., ISO); Country Codes (e.g., ISO 3166, UN); Date/Time, Time Zones (e.g., ISO 8601); Tax Rates
    Why manage reference data? Reference data carries contextual value and meaning, and therefore its use can drive business logic that helps execute a business process, create a desired application behavior or provide meaningful segmentation to analyze transaction data. Further, mapping reference data often requires human judgment.
    Sample Use Cases of Reference Data Management
    Healthcare: Diagnostic Codes. The reference data challenges in the healthcare industry offer a case in point. Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since the two code sets, ICD-9 and ICD-10, offer diagnosis codes of very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders, much in the same way as does master data management. Moreover, to build reports to understand utilization, frequency and quality of diagnoses, medical practitioners may need to "cross-walk" mappings - either forward to ICD-10 or backwards to ICD-9, depending upon the reporting time horizon.
    Spend Management: Product, Service & Supplier Codes. Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes, requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager's time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card) as well as ACH and bank systems can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value-added tasks.
    Specialty Finance: Point-of-Sale Transaction Codes and Product Codes. In the specialty finance industry, enterprises are confronted with usury laws - governed at the state and local level - that regulate financial product innovation as it relates to consumer loans, check cashing and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are being released in a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes to the appropriate product and GL codes to comply with the changing regulations.
    Multi-National Companies: Industry Classification Schemes. As companies grow and expand across geographies, a typical challenge they encounter with reference data is reconciling the various versions of industry classification schemes in use across nations. While the United States, Mexico and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries use different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and to assess which applications are impacted by a given change.
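    Returning to the healthcare "cross-walk" example above: a code-set mapping is ultimately a curated, versioned lookup in which one source code may fan out to several candidates, which is exactly where the human judgment comes in. A toy Python sketch (the codes shown are illustrative only, not an authoritative mapping such as the CMS General Equivalence Mappings):
        # Illustrative ICD-9 -> ICD-10 forward cross-walk (toy data only).
        ICD9_TO_ICD10 = {
            "250.00": ["E11.9"],                      # one-to-one: a rule can auto-map
            "786.50": ["R07.1", "R07.89", "R07.9"],   # one-to-many: needs human review
        }

        def forward_map(icd9_code):
            # More than one candidate target means a stakeholder must choose -
            # the consensus-building step described above.
            return ICD9_TO_ICD10.get(icd9_code, [])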

    Read the article

  • Event ID 17890 (A significant part... paged out.) with SQL Server 2008

    - by Godeke
    I have a machine with SQL Server 2008 Standard installed. Periodically (about once an hour) I get Event ID 17890 several times in a row. An example:
        6:28:54 "A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 0 seconds. Working set (KB): 10652, committed (KB): 628428, memory utilization: 1%%."
        6:34:27 "A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 332 seconds. Working set (KB): 169780, committed (KB): 546124, memory utilization: 31%%."
        6:38:55 "A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 600 seconds. Working set (KB): 245068, committed (KB): 546124, memory utilization: 44%%."
    This pattern repeated at 7:26-7:37, 8:26-8:36, 9:24-9:35 and so on, with the same increasing working set and memory utilization pattern. I don't have any (known) background tasks running at this time; backups run at 2:00. It subsided from 11:00 at night until it resumed at 4:00 in the morning, and the intermittent 10-minute glitch periods have continued. As this server has plenty of RAM (the commit charge has peaked at 2,871,564 of 4,194,012 physical), I disabled the paging files after reading several items I dug up searching Google, none of which changed the situation. The pattern I have documented here is from after removing the paging files, so I'm not even sure where the SQL process's memory could be paging to. I also configured the SQL process to use a minimum of 500 MB and a maximum of 2 GB of RAM (as this is a light-duty database server serving only a small workgroup). Has anyone encountered this? Prior to disabling the page files, this error would cause 5 minutes of disk thrashing that blocked access to the databases, files, IIS webs and so on. Since disabling the page files it just logs strange things, but at least I'm not seeing a performance drop. Any suggestions would be welcome.

    Read the article

  • Optimal sprite size for rotations

    - by Panda Pajama
    I am making a sprite-based game, and I have a bunch of images that I get in a ridiculously large resolution. I scale them to the desired sprite size (for example 64x64 pixels) before converting them to a game resource, so when I draw a sprite inside the game, I don't have to scale it. However, if I rotate this small sprite inside the game (engine-agnostically), some destination pixels will get interpolated and the sprite will look smudged. This is of course dependent on the rotation angle as well as the interpolation algorithm, but regardless, there is not enough data to correctly sample a specific destination pixel. So there are two solutions I can think of. The first is to use the original huge image, rotate it to the desired angles, and then downscale all the resulting variations and put them in an atlas. This has the advantage of being quite simple to implement, but naively consumes twice as much sprite space for each rotation (each rotation must be inscribed in a circle whose diameter is the diagonal of the original sprite's rectangle, and that circle's bounding square has twice the area of the original rectangle, supposing square sprites). It also has the disadvantage of only having a predefined set of rotations available, which may or may not be okay depending on the game. So the other choice would be to store a larger image, and rotate and downscale while rendering, which leads to my question. What is the optimal size for this sprite? Optimal meaning that a larger image would have no effect on the resulting image. This is definitely dependent on the image size and the number of desired rotations without data loss, down to 1/256, which is the minimum representable color difference. I am looking for a theoretical general answer to this problem, because trying a bunch of sizes may be okay, but is far from optimal.
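    The factor-of-two claim checks out with a little geometry; a worked sketch in LaTeX, using the 64x64 example above:
        % Any rotation of an s-by-s sprite fits in a circle whose diameter is
        % the sprite's diagonal:
        d = s\sqrt{2}, \qquad A_{\mathrm{cell}} = d^2 = 2s^2
        % For s = 64: d = 64\sqrt{2} \approx 90.5, so each pre-rotated frame
        % needs a 91 x 91 atlas cell (8281 px) versus 64 x 64 = 4096 px.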

    Read the article

  • Do you know how to move the Team Foundation Server cache

    - by Martin Hinshelwood
    There are a number of reasons why you may want to change the folder where you store the TFS cache. It can take up "some" amount of room, so moving it to another drive can be beneficial. This is the source control cache that TFS uses to cache data from the database. Moving the cache is pretty easy and should allow you to organise your server space a little more efficiently. You may also get a (small) performance improvement by putting it on another drive.
    1. Create a new directory to store the cache, e.g. "d:\TfsCache\". (Figure: Create a new folder)
    2. Give the local TFS WPG group full control of the directory. (Figure: You need to use the App Tier Service WPG)
    3. In the application tier web.config (~\Application Tier\Web Services\web.config) add the following setting to the appSettings section. (Figure: The web.config for TFS is stored in the application folder)
        <appSettings>
        ...
        <add value="D:\" key="dataDirectory" />
        ...
        </appSettings>
    (Figure: Adding this to the web.config will trigger a restart of the app pool. Figure: Your web.config should look something like this)
    The app pool will automatically recycle and Team Web Access will start using the new location. If you then download a file (not via a proxy), a folder with a GUID should be created immediately in the folder from step 1. If the folder doesn't appear, then you probably don't have permissions set up properly.

    Read the article

  • How to solve a partition misalignment?

    - by learner
    I have a new Dell XPS laptop which came with Windows 7 installed. It also had a default extra partition for the "Dell Utility". I installed Ubuntu on an extended partition alongside Windows and specified the logical partitions myself (for /, /home and swap). Now when I open Disk Utility, it shows a "Partition misaligned by 512 bytes" error for the Dell Utility partition and "Partition misaligned by 1024 bytes" for the entire extended partition where Ubuntu is installed. Deleting the extended partition and re-installing Ubuntu may solve the misalignment in the extended partition, but what about the Dell Utility partition? If I re-install Windows 7, the Dell Utility partition wouldn't be part of the re-install, so that may not solve it either. How do I fix this? Note: the extended partition I made contains an NTFS logical partition for holding data accessible by both OSes (basically a personal data partition). EDIT: I deleted all my Ubuntu partitions and re-installed Ubuntu as before, this time making the partitions with GParted via the live CD. Now the only problem is the misalignment in the Dell Utility partition; the other misalignment got fixed. How do I get rid of that remaining issue?

    Read the article
