Search Results

Search found 29811 results on 1193 pages for 'table of contents'.


  • Count Items in Access 2003

    - by Anna
    I have a table which contains a column with different items which I would like to count by their type. For example, the table looks like the following:

        Id  Type
        1   Table
        2   Table
        3   TV
        4   TV
        5   Table
        6   TV
        7   TV

    The result should look like:

        Type   NumOfItems
        Table  3
        TV     4

    I use the following code, which doesn't work for my Access 2003: SELECT Table1.Type, Count(Table1.Type) AS NumOfItems FROM Table1
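    A likely fix (a minimal sketch, reusing the Table1 names quoted above) is to add a GROUP BY clause so Access aggregates the count per type:

        SELECT Table1.Type, Count(Table1.Type) AS NumOfItems
        FROM Table1
        GROUP BY Table1.Type;

    Without the GROUP BY, Access/Jet SQL cannot mix an aggregate such as Count with a non-aggregated column, which is why the query above fails.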

    Read the article

  • Developers are strange

    - by DavidWimbush
    Why do developers always use the GUI tools in SQL Server? I've always found this irritating and just vaguely assumed it's because they aren't familiar with SQL syntax. But when you think about it, it's a genuine puzzle. Developers type code all day - really heavy code too like generics, lambda functions and extension methods. They (thankfully) scorn the Visual Studio stuff where you drag a table onto the class and it pastes in lots of code to query the table into a DataSet or something. But when they want to add a column to a table, without fail they dive into the graphical table designer. And half the time the script it generates does horrible things like copy the table to another one with the new column, delete the old table, and rename the new table. Which is fine if your users don't care about uptime. Is ALTER TABLE ADD <column definition> really that hard? I just don't get it.
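    For comparison, a minimal sketch of the ALTER TABLE approach the post is talking about (the table and column names here are made up for illustration):

        -- adds the column in place, without copying, dropping or renaming the table
        ALTER TABLE dbo.Customers
            ADD MiddleName nvarchar(50) NULL;

    Adding a NULLable column with no default like this is a metadata-only change in SQL Server, so the existing rows are not rewritten, unlike the copy-and-rename script the designer sometimes generates.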

    Read the article

  • Is there any way to make Mac OS X Spotlight only index the file names and not the contents?

    - by aalaap
    I do understand that the point of Spotlight is to look inside files, but it also returns file name matches, and that's what I need most of the time. Besides, Spotlight is running so absurdly slowly on my system (Snow Leopard on the iMac '08), it's just unusable. I downloaded Canary and Spotlight wasn't able to find the app file for 15 minutes. It was already in the download stack, but as far as Spotlight goes, the file doesn't exist. Hence, I would like to know of a way to make Spotlight only index the file names, which would perhaps make it a bit faster. I'm looking at mimicking the behaviour of Windows applications such as AvaFind or Search Everything. Edit: Let me highlight the fact that I am looking for an AvaFind or Search Everything replacement for Mac OS X. Go try one of these on a Windows machine and you'll understand my disappointment with Spotlight or any other popular search tool in OS X.

    Read the article

  • nf_conntrack complaints in dmesg

    - by Alexander Gladysh
    While investigating complaints about bad HTTP server performance, I've discovered these lines in the dmesg of my Xen XCP host that contains a guest OS with said server:

        [11458852.811070] net_ratelimit: 321 callbacks suppressed
        [11458852.811075] nf_conntrack: table full, dropping packet.
        [11458852.819957] nf_conntrack: table full, dropping packet.
        [11458852.821083] nf_conntrack: table full, dropping packet.
        [11458852.822195] nf_conntrack: table full, dropping packet.
        [11458852.824987] nf_conntrack: table full, dropping packet.
        [11458852.825298] nf_conntrack: table full, dropping packet.
        [11458852.825891] nf_conntrack: table full, dropping packet.
        [11458852.826225] nf_conntrack: table full, dropping packet.
        [11458852.826234] nf_conntrack: table full, dropping packet.
        [11458852.826814] nf_conntrack: table full, dropping packet.

    The complaints are repeated every five seconds (the number of suppressed callbacks is different each time). What can these symptoms mean? Is that bad? Any hints? (Note that the question is narrower than "how to solve a specific case of bad HTTP server performance", so I do not give more details on that.) Additional info:

        $ uname -a
        Linux MYHOST 3.2.0-24-generic #37-Ubuntu SMP Wed Apr 25 08:43:22 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
        $ lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 12.04 LTS
        Release:        12.04
        Codename:       precise
        $ cat /proc/sys/net/netfilter/nf_conntrack_max
        1548576

    The server is under about 10M hits / day load.

    Read the article

  • When importing an Access table into Excel, a look-up column is showing all values as numbers

    - by user3651997
    I have a basic Access to Excel question that has me frustrated. I have two Access 2010 data tables. One is a list of managers. The primary key is a manager ID (which is an autonumber because managers can have the same name), and each row also has manager name, manager email, etc. The second data table is a list of departments. The primary key for each row is a unique department code, and the foreign key is a manager ID (autonumber). I used the Look-up Wizard to create this connection. However, Access does not show the manager ID in the foreign key location. It shows Manager Name like I requested when I used the Look-up Wizard. Now I am trying to import the second table (departments) into Excel 2010. I clicked import from Access, chose the Department table, and everything popped into Excel. BUT, the Manager Name column is showing Manager ID instead. So I have a list of numbers instead of names. How can I make Excel show what I see in Access? Thanks!
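    One way to get names rather than key numbers into Excel (a sketch, using hypothetical table names Departments and Managers for the two tables described above) is to import a saved Access query that resolves the look-up, instead of importing the raw department table:

        SELECT d.DepartmentCode, m.ManagerName, m.ManagerEmail
        FROM Departments AS d
        INNER JOIN Managers AS m ON d.ManagerID = m.ManagerID;

    Excel then receives the joined text values, whereas the look-up column in the table itself only ever stores the underlying Manager ID numbers.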

    Read the article

  • Spooling in SQL execution plans

    - by Rob Farley
    Sewing has never been my thing. I barely even know the terminology, and when discussing this with American friends, I even found out that half the words that Americans use are different to the words that English and Australian people use. That said – let’s talk about spools! In particular, the Spool operators that you find in some SQL execution plans. This post is for T-SQL Tuesday, hosted this month by me! I’ve chosen to write about spools because they seem to get a bad rap (even in my song I used the line “There’s spooling from a CTE, they’ve got recursion needlessly”). I figured it was worth covering some of what spools are about, and hopefully explain why they are remarkably necessary, and generally very useful. If you have a look at the Books Online page about Plan Operators, at http://msdn.microsoft.com/en-us/library/ms191158.aspx, and do a search for the word ‘spool’, you’ll notice it says there are 46 matches. 46! Yeah, that’s what I thought too... Spooling is mentioned in several operators: Eager Spool, Lazy Spool, Index Spool (sometimes called a Nonclustered Index Spool), Row Count Spool, Spool, Table Spool, and Window Spool (oh, and Cache, which is a special kind of spool for a single row, but as it isn’t used in SQL 2012, I won’t describe it any further here). Spool, Table Spool, Index Spool, Window Spool and Row Count Spool are all physical operators, whereas Eager Spool and Lazy Spool are logical operators, describing the way that the other spools work. For example, you might see a Table Spool which is either Eager or Lazy. A Window Spool can actually act as both, as I’ll mention in a moment. In sewing, cotton is put onto a spool to make it more useful. You might buy it in bulk on a cone, but if you’re going to be using a sewing machine, then you quite probably want to have it on a spool or bobbin, which allows it to be used in a more effective way. This is the picture that I want you to think about in relation to your data. I’m sure you use spools every time you use your sewing machine. I know I do. I can’t think of a time when I’ve got out my sewing machine to do some sewing and haven’t used a spool. However, I often run SQL queries that don’t use spools. You see, the data that is consumed by my query is typically in a useful state without a spool. It’s like I can just sew with my cotton despite it not being on a spool! Many of my favourite features in T-SQL do like to use spools though. This looks like a very similar query to before, but includes an OVER clause to return a column telling me the number of rows in my data set. I’ll describe what’s going on in a few paragraphs’ time. So what does a Spool operator actually do? The spool operator consumes a set of data, and stores it in a temporary structure, in the tempdb database. This structure is typically either a Table (ie, a heap), or an Index (ie, a b-tree). If no data is actually needed from it, then it could also be a Row Count spool, which only stores the number of rows that the spool operator consumes. A Window Spool is another option if the data being consumed is tightly linked to windows of data, such as when the ROWS/RANGE clause of the OVER clause is being used. You could maybe think about the type of spool being like whether the cotton is going onto a small bobbin to fit in the base of the sewing machine, or whether it’s a larger spool for the top. A Table or Index Spool is either Eager or Lazy in nature. 
Eager and Lazy are Logical operators, which talk more about the behaviour, rather than the physical operation. If I’m sewing, I can either be all enthusiastic and get all my cotton onto the spool before I start, or I can do it as I need it. “Lazy” might not be the best word to describe a person – in the SQL world it describes the idea of either fetching all the rows to build up the whole spool when the operator is called (Eager), or populating the spool only as it’s needed (Lazy). Window Spools are both physical and logical. They’re eager on a per-window basis, but lazy between windows.

And when is it needed? The way I see it, spools are needed for two reasons. 1 – When data is going to be needed AGAIN. 2 – When data needs to be kept away from the original source. If you’re someone that writes long stored procedures, you are probably quite aware of the second scenario. I see plenty of stored procedures being written this way – where the query writer populates a temporary table, so that they can make updates to it without risking the original table. SQL does this too. Imagine I’m updating my contact list, and some of my changes move data to later in the book. If I’m not careful, I might update the same row a second time (or even enter an infinite loop, updating it over and over). A spool can make sure that I don’t, by using a copy of the data. This problem is known as the Halloween Effect (not because it’s spooky, but because it was discovered in late October one year). As I’m sure you can imagine, the kind of spool you’d need to protect against the Halloween Effect would be eager, because if you’re only handling one row at a time, then you’re not providing the protection... An eager spool will block the flow of data, waiting until it has fetched all the data before serving it up to the operator that called it. In the query below I’m forcing the Query Optimizer to use an index which would be upset if the Name column values got changed, and we see that before any data is fetched, a spool is created to load the data into. This doesn’t stop the index being maintained, but it does mean that the index is protected from the changes that are being done. There are plenty of times, though, when you need data repeatedly. Consider the query I put above. A simple join, but then counting the number of rows that came through. The way that this has executed (be it ideal or not), is to ask that a Table Spool be populated. That’s the Table Spool operator on the top row. That spool can produce the same set of rows repeatedly. This is the behaviour that we see in the bottom half of the plan. In the bottom half of the plan, we see that a join is being done between the rows that are being sourced from the spool – one being aggregated and one not – producing the columns that we need for the query.

Table v Index When considering whether to use a Table Spool or an Index Spool, the question that the Query Optimizer needs to answer is whether there is sufficient benefit to storing the data in a b-tree. The idea of having data in indexes is great, but of course there is a cost to maintaining them. Here we’re creating a temporary structure for data, and there is a cost associated with populating each row into its correct position according to a b-tree, as opposed to simply adding it to the end of the list of rows in a heap. Using a b-tree could even result in page-splits as the b-tree is populated, so there had better be a reason to use that kind of structure.
That all depends on how the data is going to be used in other parts of the plan. If you’ve ever thought that you could use a temporary index for a particular query, well this is it – and the Query Optimizer can do that if it thinks it’s worthwhile. It’s worth noting that just because a Spool is populated using an Index Spool, it can still be fetched using a Table Spool. Whether a Spool used as a source shows as a Table Spool or an Index Spool is more about whether a Seek predicate is used, rather than about the underlying structure.

Recursive CTE I’ve already shown you an example of spooling when the OVER clause is used. You might see them being used whenever you have data that is needed multiple times, and CTEs are quite common here. With the definition of a set of data described in a CTE, if the query writer is leveraging this by referring to the CTE multiple times, and there’s no simplification to be leveraged, a spool could theoretically be used to avoid reapplying the CTE’s logic. Annoyingly, this doesn’t happen. Consider this query, which really looks like it’s using the same data twice. I’m creating a set of data (which is completely deterministic, by the way), and then joining it back to itself. There seems to be no reason why it shouldn’t use a spool for the set described by the CTE, but it doesn’t. On the other hand, if we don’t pull as many columns back, we might see a very different plan. You see, CTEs, like all sub-queries, are simplified out to figure out the best way of executing the whole query. My example is somewhat contrived, and although there are plenty of cases when it’s nice to give the Query Optimizer hints about how to execute queries, it usually doesn’t do a bad job, even without spooling (and you can always use a temporary table). When recursion is used, though, spooling should be expected. Consider what we’re asking for in a recursive CTE. We’re telling the system to construct a set of data using an initial query, and then use that set as a source for another query, piping this back into the same set and back around. It’s very much a spool. The analogy of cotton is long gone here, as the idea of having a continual loop of cotton feeding onto a spool and off again doesn’t quite fit, but that’s what we have here. Data is being fed onto the spool, and getting pulled out a second time when the spool is used as a source. (This query is running on AdventureWorks, which has a ManagerID column in HumanResources.Employee, not AdventureWorks2012) The Index Spool operator is sucking rows into it – lazily. It has to be lazy, because at the start, there’s only one row to be had. However, as rows get populated onto the spool, the Table Spool operator on the right can return rows when asked, ending up with more rows (potentially) getting back onto the spool, ready for the next round. (The Assert operator is merely checking to see if we’ve reached the MAXRECURSION point – it vanishes if you use OPTION (MAXRECURSION 0), which you can try yourself if you like).

Spools are useful. Don’t lose sight of that. Every time you use temporary tables or table variables in a stored procedure, you’re essentially doing the same – don’t get upset at the Query Optimizer for doing so, even if you think the spool looks like an expensive part of the query. I hope you’re enjoying this T-SQL Tuesday. Why not head over to my post that is hosting it this month to read about some other plan operators? 
At some point I’ll write a summary post – once I have, you should find a comment below pointing at it. @rob_farley
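    A minimal sketch of the kind of recursive CTE described above (assuming the pre-2012 AdventureWorks schema, where HumanResources.Employee still has a ManagerID column):

        WITH EmployeeLevels AS
        (
            -- anchor member: the top of the hierarchy, the first rows fed onto the spool
            SELECT EmployeeID, ManagerID, 0 AS Lvl
            FROM HumanResources.Employee
            WHERE ManagerID IS NULL
            UNION ALL
            -- recursive member: reads rows back off the spool and feeds new ones onto it
            SELECT e.EmployeeID, e.ManagerID, el.Lvl + 1
            FROM HumanResources.Employee AS e
            INNER JOIN EmployeeLevels AS el ON e.ManagerID = el.EmployeeID
        )
        SELECT EmployeeID, Lvl
        FROM EmployeeLevels
        OPTION (MAXRECURSION 0);  -- removes the Assert operator mentioned in the post

    The plan for a query shaped like this typically shows the lazy Index Spool / Table Spool pair that the post describes.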

    Read the article

  • How to filter a mysql database with user input on a website and then spit the filtered table back to the website? [migrated]

    - by Luke
    I've been researching this on google for literally 3 weeks, racking my brain and still not quite finding anything. I can't believe this is so elusive. (I'm a complete beginner so if my terminology sounds stupid then that's why.) I have a database in mysql/phpmyadmin on my web host. I'm trying to create a front end that will allow a user to specify criteria for querying the database in a way that they don't have to know sql, basically just combo boxes and checkboxes on a form. Then have this form 'submit' a query to the database, and show the filtered tables. This is how the SQL looks in Microsoft Access:

        PARAMETERS TEXTINPUT1 Text ( 255 ), NUMBERINPUT1 IEEEDouble;
        // pops up a list of parameters for the user to input
        SELECT DISTINCT Table1.Column1, Table1.Column2, Table1.Column3,*
        // selects only the unique rows in these three columns
        FROM Table1
        // the table where this query is happening
        WHERE (((Table1.Column1) Like [TEXTINPUT1]) AND ((Table1.Column2)<=[NUMBERINPUT1]) AND ((Table1.Column3)>=[NUMBERINPUT1]));
        // the criteria for the filter, it's comparing the user input parameters to the data in the rows and only selecting matches according to the equal sign, or greater than + equal sign, or less than + equal sign

    What I don't get: WHAT IN THE WORLD AM I SUPPOSED TO USE (that isn't totally hard)!? I've tried google fusion tables - doesn't filter right with numerical data or empty cells in rows, can't relate tables. I've tried DataTables.net - can't filter right with numerical data and can't use SQL without a bunch of in-depth knowledge, not even sure it can if you have that. I've looked into using jQuery with google spreadsheets - doesn't work at all either. I have no idea how I'm supposed to build a front end with my database. Every place that looks promising (like zohocreator) is asking for money, and is far too simplified to be able to do the LIKE criteria or SELECT DISTINCT stuff.
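    For the MySQL side, a rough equivalent of that Access query (a sketch, reusing the placeholder Table1/Column names from the post) can be written as a parameterised statement, with the form's values bound to the placeholders by whatever server-side language drives the page:

        SELECT DISTINCT Column1, Column2, Column3
        FROM Table1
        WHERE Column1 LIKE ?    -- text criterion from the form, e.g. 'foo%'
          AND Column2 <= ?      -- upper bound from the form
          AND Column3 >= ?;     -- lower bound from the form

    Binding the three values as parameters, rather than concatenating them into the SQL string, also keeps the query safe from SQL injection.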

    Read the article

  • Auto-symlink contents of directory in my home directory?

    - by Nathaniel
    So I'm a dual-booter. I'm looking for an easy way to keep up-to-date symlinks in my Linux home folder pointing to every file and folder in the root of Windows personal directory. So, say I have foo.txt and bar.txt in C:\Windows\Documents and Settings\Nathaniel. I want symlinks of those files to automatically be made in /home/nathaniel/ (while I'm running Linux, of course).

    Read the article

  • Organising data access for dependency injection

    - by IanAWP
    In our company we have a relatively long history of database backed applications, but have only just begun experimenting with dependency injection. I am looking for advice about how to convert our existing data access pattern into one more suited for dependency injection. Some specific questions: Do you create one access object per table (Given that a table represents an entity collection)? One interface per table? All of these would need the low level Data Access object to be injected, right? What about if there are dozens of tables, wouldn't that make the composition root into a nightmare? Would you instead have a single interface that defines things like GetCustomer(), GetOrder(), etc? If I took the example of EntityFramework, then I would have one Container that exposes an object for each table, but that container doesn't conform to any interface itself, so doesn't seem like it's compatible with DI. What we do now, in case it helps: The way we normally manage data access is through a generic data layer which exposes CRUD/Transaction capabilities and has provider specific subclasses which handle the creation of IDbConnection, IDbCommand, etc. Actual table access uses Table classes that perform the CRUD operations associated with a particular table and accept/return domain objects that the rest of the system deals with. These table classes expose only static methods, and utilise a static DataAccess singleton instantiated from a config file.

    Read the article

  • Allow user to execute a shell script without seeing its contents?

    - by Autopulated
    I'd like to have an hg hook that sends email using a gmail account. Obviously I don't want anyone to be able to read the email-sending script except me or root, since it has a password in it, so here's what I've tried:

        -rwsr-xr-x 1 james james  58 Feb 18 12:05 incoming.email.sh
        -rwx--x--x 1 james james 262 Feb 18 12:04 send-incoming-email.sh

    where incoming.email.sh is the file executed as the hook:

        #! /bin/bash
        /path/to/send-incoming-email.sh

    However, when I try to run as another user I get the error: /bin/bash: /path/to/send-incoming-email.sh: Permission denied. The send-incoming-email.sh file works fine when I run as myself. Is what I'm trying to do possible, or will setuid not propagate to commands executed from a shell script? System is Ubuntu 10.04.2 LTS.

    Read the article

  • Is there a macro to split the contents of an Excel spreadsheet into separate spreadsheets?

    - by Sean Chadwick
    I know there are similar questions out there but I don't think they are quite the same. I have an Excel spreadsheet with the following headings: First name -- Surname -- Host Trust -- Contact details -- etc -- etc. It is a large spreadsheet. I have to send an email every week to host trusts to inform them of who will be working with them, and it is a nightmare dividing this up manually. Is it possible to create a macro which will split this spreadsheet into several spreadsheets, using the data from the Host Trust column as the title of each spreadsheet?

    Read the article

  • Delete the pendrive contents and also trash in Mac?

    - by Warrior
    I am using a Mac Pro. I copied some data from my pen drive to my Mac and I deleted the content by moving it to trash. After that, when I see the info of the pen drive it gives a larger value than the original. Only if I empty the trash am I able to see the correct value of the pen drive and able to copy data. Has the Mac been designed like that, or is there any other way to delete other than using the "move to trash" option? Thanks.

    Read the article

  • How to link a C++ object to a local variable in Lua

    - by MahanGM
    I'm completing my scripting interface with Lua, but recently I've got stuck at some point. I have several functions for my Entity events like Update(). I have a function called create_entity() which instantiates a new entity from a given entity index:

        function Update()
          local bullet = create_entity(0, 0, "obj_bullet")
        end

    create_entity returns a table which is the properties of the created entity. Now how can I make a connection between the bullet variable and my newly created object? Right now, for objects previously added to the scene, I simply set a global table for each of them and then, after every call to Update(), I go through the registered names to find the object tables and perform the new changes. Like the one below:

        function Update()
          if keyboard_key_press(vk_right) then obj_player.x += 3 end

    I can get the obj_player table because I know its name from C++; plus I can get it as a global table and simply reach for the first instance named obj_player. Is there any solution for me to make the bullet variable act like this? I was thinking of getting all local variables in the Update() function and checking each one to see if it is a table and has a unique field attached to it, like id; this way I can determine that it is an object table and do the rest of the process. By the way, is this interface going to be easier to work with if I implement it with luaBind? Bottom line: How can I make a local variable in Lua that receives a table from the create_entity function, and track that local variable to capture it from C++? e.g.

        function Update()
          local bullet = create_entity(0, 0, "obj_bullet")
          bullet.x = 10 <== Commit a change in table
        end

    Now I want to get variable bullet from C++. And it's not just this variable, there might be a ton of these local variables with different names.

    Read the article

  • Given a database table where multiple rows have the same values and only the most recent record is to be returned

    - by Jim Lahman
    I have a table where there are multiple records with the same value but varying creation dates. A sample of the database columns is shown here:

        select lot_num, to_char(creation_dts,'DD-MON-YYYY HH24:MI:SS') as creation_date
        from coil_setup
        order by lot_num

        LOT_NUM                        CREATION_DATE
        ------------------------------ --------------------
        1435718.002                    24-NOV-2010 11:45:54
        1440026.002                    17-NOV-2010 06:50:16
        1440026.002                    08-NOV-2010 23:28:24
        1526564.002                    01-DEC-2010 13:14:04
        1526564.002                    08-NOV-2010 22:39:01
        1526564.002                    01-NOV-2010 17:04:30
        1605920.003                    29-DEC-2010 10:01:24
        1945352.003                    14-DEC-2010 01:50:37
        1945352.003                    09-DEC-2010 04:44:22
        1952718.002                    25-OCT-2010 09:33:19
        1953866.002                    20-OCT-2010 18:38:31
        1953866.002                    18-OCT-2010 16:15:25

    Notice that there are multiple instances of the same lot number (for example 1440026.002, 1526564.002, 1945352.003 and 1953866.002 in the sample above). To only return the most recent instance, issue this SQL statement:

        select lot_num, to_char(creation_date,'DD-MON-YYYY HH24:MI:SS') as creation_date
        from
        (
          select rownum r, lot_num, max(creation_dts) as creation_date
          from coil_setup group by rownum, lot_num
          order by lot_num
        )
        where r < 100

        LOT_NUM                        CREATION_DATE
        ------------------------------ --------------------
        2019416.002                    01-JUL-2010 00:01:24
        2022336.003                    06-OCT-2010 15:25:01
        2067230.002                    01-JUL-2010 00:36:48
        2093114.003                    02-JUL-2010 20:10:51
        2093982.002                    02-JUL-2010 14:46:11
        2093984.002                    02-JUL-2010 14:43:18
        2094466.003                    02-JUL-2010 20:04:48
        2101074.003                    11-JUL-2010 09:02:16
        2103746.002                    02-JUL-2010 15:07:48
        2103758.003                    11-JUL-2010 09:02:13
        2104636.002                    02-JUL-2010 15:11:25
        2106688.003                    02-JUL-2010 13:55:27
        2106882.003                    02-JUL-2010 13:48:47
        2107258.002                    02-JUL-2010 12:59:48
        2109372.003                    02-JUL-2010 20:49:12
        2110182.003                    02-JUL-2010 19:59:19
        2110184.003                    02-JUL-2010 20:01:03
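    As an aside, a simpler sketch that returns one row per lot number with its latest creation date (not the author's query, just a common way to express the stated goal in Oracle, using the same coil_setup columns):

        select lot_num, to_char(max(creation_dts),'DD-MON-YYYY HH24:MI:SS') as creation_date
        from coil_setup
        group by lot_num
        order by lot_num;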

    Read the article

  • How to copy with cp to include hidden files and hidden directories and their contents?

    - by eleven81
    How can I make cp -r copy absolutely all of the files and directories in a directory? Requirements: include hidden files and hidden directories; be one single command with a flag to include the above; not need to rely on pattern matching at all. My ugly, but working, hack is:

        cp -r /etc/skel/* /home/user
        cp -r /etc/skel/.[^.]* /home/user

    How can I do this all in one command without the pattern matching? What flag do I need to use?

    Read the article

  • Delete the pendrive contents and also trash in Mac OS X?

    - by Warrior
    I am using a MacBook Pro. I copied some data from my pen drive to my Mac and deleted the content by moving it to trash. After that, when I see the info of the pen drive it gives a larger value than the original. Only if I empty the trash am I able to see the correct value of the pen drive and able to copy data. Has the Mac been designed like that, or is there some other way to delete other than using the "move to trash" option? Thanks.

    Read the article

  • Reference Data Management

    - by rahulkamath
    Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise MDM solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering chart of accounts to enable financial transformation, or revamping organization structures to drive business transformation and operational efficiencies, or mastering sales territories in light of rapid-fire acquisitions that require frequent sales territory refinement, equitable distribution of leads and accounts to salespersons, and alignment of budget/forecast with results to optimize sales coverage. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation.

    What is reference data?
    Reference data is a close cousin of master data. While master data may be more rapidly changing, requires consensus building across stakeholders and lends structure to business transactions, reference data is simpler, more slowly changing, but has semantic content that is used to categorize or group other information assets – including master data – and give them contextual value. The following list contains illustrative examples of reference data by type. Reference data types may include types and codes, business taxonomies, complex relationships & cross-domain mappings, or standards.

    Types & Codes: Transaction Codes; Lookup Tables (e.g., Gender, Marital Status, etc.); Status Codes; Role Codes; Domain Values
    Taxonomies: Industry Classification Categories and Codes, e.g., North American Industry Classification System (NAICS); Product Categories; Sales Territories (e.g., Geo, Industry Verticals, Named Accounts, Federal/State/Local/Defense); Market Segments; Universal Standard Products and Services Classification (UNSPSC), eCl@ss
    Relationships / Mappings: Product / Segment; Product / Geo; City → State → Postal Codes; Customer / Market Segment; Business Unit / Channel; Country Codes / Currency Codes / Financial Accounts; International Classification of Diseases (ICD), e.g., ICD-9 → ICD-10 mappings
    Standards: Calendars (e.g., Gregorian, Fiscal, Manufacturing, Retail, ISO 8601); Currency Codes (e.g., ISO); Country Codes (e.g., ISO 3166, UN); Date/Time, Time Zones (e.g., ISO 8601); Tax Rates

    Why manage reference data?
    Reference data carries contextual value and meaning and therefore its use can drive business logic that helps execute a business process, create a desired application behavior or provide meaningful segmentation to analyze transaction data. Further, mapping reference data often requires human judgment.

    Sample Use Cases of Reference Data Management

    Healthcare: Diagnostic Codes
    The reference data challenges in the healthcare industry offer a case in point. Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since both code sets ICD-9 and ICD-10 offer diagnosis codes of very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders much in the same way as does master data management. Moreover, to build reports to understand utilization, frequency and quality of diagnoses, medical practitioners may need to "cross-walk" mappings – either forward to ICD-10 or backwards to ICD-9 – depending upon the reporting time horizon.

    Spend Management: Product, Service & Supplier Codes
    Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes, requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager's time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card) as well as ACH and bank systems can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value-added tasks.

    Specialty Finance: Point of Sale Transaction Codes and Product Codes
    In the specialty finance industry, enterprises are confronted with usury laws – governed at the state and local level – that regulate financial product innovation as it relates to consumer loans, check cashing and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are being released in a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes with the appropriate product and GL codes to comply with the changing regulations.

    Multi-National Companies: Industry Classification Schemes
    As companies grow and expand across geographies, a typical challenge they encounter with reference data is reconciling the various versions of industry classification schemes in use across nations. While the United States, Mexico and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries choose different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and assess which applications were impacted by a given change.

    Read the article

  • Why do Windows 7 & 8 have different default behaviour when trying to modify the contents of a protected folder?

    - by Ben
    Here's the situation: I have a Windows 7 PC and a Windows 8 PC and I'm logged in as the same domain user on both machines. My domain user is in the local Administrator group on both. When I run cmd.exe on each machine and then attempt to do this (also on both machines) mkdir "c:\Program Files\cheese" the Windows 8 PC gives an "Access Denied" error, while it works fine on the Windows 7 PC. I understand that C:\Program Files is a protected folder and I'm not interested in a debate on the morals of writing to such a folder directly. But I am interested in understanding what exactly has changed in Windows 8 to cause this. I don't seem to be able to find anything that acknowledges or explains this change in behaviour in Windows 8.

    Read the article

  • Copy contents of a folder, but none of its subfolders?

    - by wizlog
    I'm using the explorer bar (the bar to the left of the main folder/files section in an explorer window) to show folders. I don't have access to open any of the folders themselves (at least none on C:\, the C drive). For some reason, I do however have copy rights (I can copy the folders). Because I can only see the folders, is there a way for me to copy the main folder, but not any of the subfolders? A localized example could be how to copy all files in C:\documents...[user]\My Documents\ but none of the folders (ex. C:\documents...[user]\My Documents\My Pictures...). Is there a way for me only to copy a folder, but not its subfolders?

    Read the article

  • How to interpret the contents of /proc/bus/pci/devices ?

    - by vivekian2
    The first few fields of 'cat /proc/bus/pci/devices' are understandable:

    Field 1 - BusDevFunc
    Field 2 - Vendor Id + Device Id
    Field 3 - Interrupt Line
    Field 4 - BAR 0, and the rest of the BAR registers (0 - 5) after that.

    After the BAR registers are printed out, what are the other fields? Specifically, what PCI configuration space registers (offsets) are printed out?

    Read the article

  • What is the effect on LVM snapshot size when a file block is rewritten with its original contents?

    - by NevilleDNZ
    I'm exploring using LVM snapshots to off-site incremental archives from a snapshot "master" file system. In essence: simply copy across only the files on the "master" that have changed since the last incremental copy to the "archive". Then snapshot the "archive" to retain the incremental. I am a bit puzzled as to the block usage behaviour of the archive's own incremental snapshot. I'm expecting that LVM is not smart enough to know that the "file block" is actually unchanged, and that a new copy will be allocated and written for the fresh "archive" file system. Can anyone confirm this, or point me to a document/page that gives some hints? BTW: the OS hard disk cache, hard disk physical cache and hard disk itself also don't need to do any actual "disk writes", as the "disk block" write is likewise unnecessary. Any pointers to discussion of this style of optimisation would also be interesting.

    Read the article

  • How to write to a file and, while the file is still being written, read and parse its contents using

    - by Isabelle
    Hello. I'm actually trying to write a shell script that logs the output of a command to a file but, since the command takes a long time to complete (about 15 minutes), I would like to start parsing the output of the command (the content of the file) before the command is completed, so I can send messages to the standard output (the user), like "10% complete", "45% complete" and so on. Program steps: 1) redirect the command to a file: $(command) > $FILE; 2) start reading and parsing the output ($FILE) before the command is finished. I thought of using parallel programming, but I haven't got the hang of it. Any help would be appreciated. Best regards.

    Read the article
