Search Results

Search found 9929 results on 398 pages for 'azure tables'.


  • Coping with infrastructure upgrades

    - by Fatherjack
    A common topic for questions on SQL Server forums is how to plan and implement upgrades to SQL Server: moving from old to new hardware, or from one version of SQL Server to another. There are other circumstances where upgrades of other systems affect SQL Server DBAs. For example, where I work at the moment there is a Microsoft Exchange (email) server upgrade in progress. It is being handled by a different team so I'm not wholly sure of the details, but we are in a situation where there are currently 2 Exchange email servers – the old one and the new one. Users' mailboxes are being transferred in a planned process, but as we approach the old server being turned off we also have to make sure that our SQL Servers get updated to use the new SMTP server for all of the SQL Agent notifications, SSIS packages and so on.
    My servers have a number of profiles so that various jobs can send emails on behalf of various departments and different systems. This means there are lots of places where the old server name needs to be replaced by the new one. Anyone who has set up DBMail and enjoyed the click-tastic odyssey of screens to create Profiles and Accounts and so on and so forth ought to seek some professional help, in my opinion. It's a nightmare of back-and-forth settings changes and it stinks. I wasn't looking forward to heading into this mess of a UI and changing the old Exchange server name for the new one on all my SQL instances for all of the accounts I have set up. So I did what any Englishman with a shed would do: I decided to take it apart and see if I could fix it another way.
    I took a guess that we would be working in MSDB, and Books Online was remarkably helpful and, amongst a lot of information, told me about a couple of procedures that can be used to interrogate DBMail settings.

        USE [msdb] -- It's where all the good stuff is kept
        GO
        EXEC dbo.sysmail_help_profile_sp;
        EXEC dbo.sysmail_help_account_sp;

    Both of these procedures take optional parameters with the same names – ID and Name. If you provide an ID or a name then the results you get back are for that specific Profile or Account; otherwise you get details of all Profiles and Accounts on the server you are connected to. As the output shows, the Account has the SMTP server information in the servername column. We want to change that value to NewSMTP.Contoso.com. It appears that the procedure we are looking at gets its data from the sysmail_account and sysmail_server tables; you can get the results the stored procedure provides if you run the code below.

        SELECT [account_id], [name], [description], [email_address], [display_name],
               [replyto_address], [last_mod_datetime], [last_mod_user]
        FROM dbo.sysmail_account AS sa;

        SELECT [account_id], [servertype], [servername], [port], [username], [credential_id],
               [use_default_credentials], [enable_ssl], [flags], [last_mod_datetime], [last_mod_user], [timeout]
        FROM dbo.sysmail_server AS sms;

    Now, we have no real idea how these tables are linked, or whether making an update directly to one or other of them is going to do what we want or entirely cripple our ability to send email from SQL Server, so we won't touch those tables with any UPDATE TSQL. Back to Books Online, then, and we find sysmail_update_account_sp. It's exactly what we need. The examples in BOL take the form (as below) of having every parameter explicitly defined.
    Not wanting to totally obliterate the existing values by failing to pass values in all of the parameters, I set to writing some code to gather the existing data from the tables, rewrite the SMTP server name and then execute the resulting TSQL.

        IF OBJECT_ID('tempdb..#sysmailprofiles') IS NOT NULL
            DROP TABLE #sysmailprofiles
        GO
        CREATE TABLE #sysmailprofiles
            (
              account_id INT
            , [name] VARCHAR(50)
            , [description] VARCHAR(500)
            , email_address VARCHAR(500)
            , display_name VARCHAR(500)
            , replyto_address VARCHAR(500)
            , servertype VARCHAR(10)
            , servername VARCHAR(100)
            , port INT
            , username VARCHAR(100)
            , use_default_credentials VARCHAR(1)
            , ENABLE_ssl VARCHAR(1)
            )

        INSERT  [#sysmailprofiles]
                ( [account_id], [name], [description], [email_address], [display_name], [replyto_address],
                  [servertype], [servername], [port], [username], [use_default_credentials], [ENABLE_ssl] )
                EXEC [dbo].[sysmail_help_account_sp]

        DECLARE @TSQL NVARCHAR(1000)

        SELECT TOP 1
                @TSQL = 'EXEC [dbo].[sysmail_update_account_sp] @account_id = '
                + CAST([s].[account_id] AS VARCHAR(20))
                + ', @account_name = ''' + [s].[name] + ''''
                + ', @email_address = N''' + [s].[email_address] + ''''
                + ', @display_name = N''' + [s].[display_name] + ''''
                + ', @replyto_address = N''' + s.replyto_address + ''''
                + ', @description = N''' + [s].[description] + ''''
                + ', @mailserver_name = ''NEWSMTP.contoso.com'''
                + ', @mailserver_type = ' + [s].[servertype]
                + ', @port = ' + CAST([s].[port] AS VARCHAR(20))
                + ', @username = ' + COALESCE([s].[username], '''''')
                + ', @use_default_credentials = ' + CAST(s.[use_default_credentials] AS VARCHAR(1))
                + ', @enable_ssl = ' + [s].[ENABLE_ssl]
        FROM    [#sysmailprofiles] AS s
        WHERE   [s].[servername] = 'SMTP.Contoso.com'

        SELECT  @TSQL

        EXEC [sys].[sp_executesql] @TSQL

    This worked well for me, and testing the email function with EXEC dbo.sp_send_dbmail afterwards showed that the settings were indeed using our new Exchange server. It was only later, in writing this blog, that I tried running the sysmail_update_account_sp procedure with only the SMTP server name parameter value specified. Despite what Books Online might intimate, you can do this, and only the values for the parameters you specify get changed; if a parameter is not specified in the execution of the procedure then its value remains unchanged. This renders most of the above script unnecessary, as I could have simply specified the account_id that I want to amend and the new value for the parameter I want to update.

        EXEC sysmail_update_account_sp @account_id = 1, @mailserver_name = 'NEWSMTP.Contoso.com'

    This wasn't meant to be the main point of this post, which was to describe how to capture values from a stored procedure and use them in dynamic TSQL, but instead we are here, (re)learning the fact that Books Online is a little flawed in places. It is a fantastic resource for anyone working with SQL Server, but the reader must adopt an enquiring frame of mind and use a little curiosity to try simple variations on examples to fully understand the code they are working with. I think the author(s) of this part of Books Online missed an opportunity to include a third example, with fewer than all parameters specified, to give a lead to this method existing.
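    If you want a read-only check of which accounts still point at the old server before (and after) making the change, the two tables that the help procedure reads can simply be joined on account_id. A small sketch, using the same example server name as above:

        -- List every DBMail account together with the SMTP server it currently points at.
        -- sysmail_account and sysmail_server both carry account_id, so they join on it.
        SELECT  sa.account_id ,
                sa.name AS account_name ,
                sms.servername ,
                sms.port
        FROM    msdb.dbo.sysmail_account AS sa
                JOIN msdb.dbo.sysmail_server AS sms ON sms.account_id = sa.account_id
        WHERE   sms.servername = 'SMTP.Contoso.com';   -- accounts still on the old server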

    Read the article

  • Oracle Sequences

    - by jkrebsbach
    Reminder to myself - SQL Server has identity columns directly tied to their tables. Oracle has sequences that are islands unto themselves.

        select seq_name.currval from dual;
        select seq_name.nextval from dual;

    currval - returns the current number at the top of the sequence
    nextval - increments the sequence by 1 and returns the new number

    Therefore, to create functionality in Oracle similar to an identity column:

    OPTION A) Create an insert trigger:

        CREATE OR REPLACE TRIGGER dept_bir
        BEFORE INSERT ON departments
        FOR EACH ROW
        WHEN (new.id IS NULL)
        BEGIN
          SELECT dept_seq.NEXTVAL
          INTO :new.id
          FROM dual;
        END;

    This will handle creating a unique identity, but will not necessarily inform the process flow of that identity without additional logic.

    OPTION B) Select the identity into a temp variable, then insert the whole item into the table.

    **** When attempting to query currval, the below error was being thrown:

        SELECT seq_name.currval from dual;
        ERROR : TABLE OR VIEW DOES NOT EXIST

    *** Although the Oracle sys tables may have access to the sequences, that isn't to say the Oracle user has access to those sequences - verify permissions when the system can't see objects that are being reported in the object explorer.
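    For reference, a minimal sketch of creating a sequence, granting another schema access to it (the permission issue noted above), and using it directly in an insert. The object and user names are just placeholders:

        -- Hypothetical names: dept_seq, departments and app_user stand in for your own objects.
        CREATE SEQUENCE dept_seq START WITH 1 INCREMENT BY 1;

        -- Sequences are owned objects; another schema needs an explicit grant (and usually a synonym)
        -- before seq_name.NEXTVAL / CURRVAL will resolve for it.
        GRANT SELECT ON dept_seq TO app_user;

        -- Using the sequence directly in the insert, so the calling code knows which key it just used:
        INSERT INTO departments (id, name) VALUES (dept_seq.NEXTVAL, 'Accounting');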

    Read the article

  • GDD-BR 2010 [0E] Google Geo: Exciting New Features and Tools

    GDD-BR 2010 [0E] Google Geo: Exciting New Features and Tools. Speaker: Ossama Alami | Track: Google APIs | Time: E [14:40 - 15:25] | Room: 0 | Level: 151. Did you know we have an elevation web service? That you can completely restyle the look of the map tiles? How to use Fusion Tables to host and visualize geo data? A session covering new launches across Google's Geo products and some APIs you might not be aware of. Covering web services, the Earth API, new KML extensions, Maps styling, and Fusion Tables. From: GoogleDevelopers | Views: 0 | 0 ratings | Time: 44:16 | More in Science & Technology

    Read the article

  • Is reliance on parametrized queries the only way to protect against SQL injection?

    - by Chris Walton
    All I have seen on SQL injection attacks seems to suggest that parametrized queries, particularly ones in stored procedures, are the only way to protect against such attacks. While I was working (back in the Dark Ages) stored procedures were viewed as poor practice, mainly because they were seen as less maintainable, less testable, highly coupled, and locking a system into one vendor (this question covers some other reasons). Although when I was working, projects were virtually unaware of the possibility of such attacks, various rules were adopted to secure the database against corruption of various sorts. These rules can be summarised as:
    - No client/application had direct access to the database tables.
    - All accesses to all tables were through views (and all updates to the base tables were done through triggers).
    - All data items had a domain specified.
    - No data item was permitted to be nullable - this had implications that had the DBAs grinding their teeth on occasion, but it was enforced.
    - Roles and permissions were set up appropriately - for instance, a restricted role that gave only views the right to change the data.
    So is a set of (enforced) rules such as this (though not necessarily this particular set) an appropriate alternative to parametrized queries in preventing SQL injection attacks? If not, why not? Can a database be secured against such attacks by database (only) specific measures?
    EDIT: Emphasis of the question changed slightly, in the light of the initial responses received. Base question unchanged.
    EDIT 2: The approach of relying on parametrized queries seems to be only a peripheral step in defense against attacks on systems. It seems to me that more fundamental defenses are both desirable and may render reliance on such queries unnecessary, or less critical, even to defend specifically against injection attacks. The approach implicit in my question was based on "armouring" the database, and I had no idea whether that was a viable option. Further research has suggested that there are such approaches. I have found the following sources that provide some pointers to this type of approach:
    http://database-programmer.blogspot.com
    http://thehelsinkideclaration.blogspot.com
    The principal features I have taken from these sources are:
    - An extensive data dictionary, combined with an extensive security data dictionary
    - Generation of triggers, queries and constraints from the data dictionary
    - Minimize code and maximize data
    While the answers I have had so far are very useful and point out difficulties arising from disregarding parametrized queries, ultimately they do not answer my original question(s) (now emphasised in bold).
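    For contrast with the database-armouring approach described above, this is roughly what a parametrized query means at the database layer. A hedged T-SQL sketch with a hypothetical Users table:

        -- Vulnerable: the search term is concatenated straight into the statement text.
        DECLARE @term NVARCHAR(100) = N'O''Brien'' OR 1=1 --';
        DECLARE @bad  NVARCHAR(400) = N'SELECT * FROM dbo.Users WHERE surname = ''' + @term + N'''';
        -- EXEC (@bad);   -- the injected text becomes part of the query itself

        -- Parametrized: the value travels as data and is never parsed as SQL text.
        EXEC sys.sp_executesql
             N'SELECT * FROM dbo.Users WHERE surname = @surname',
             N'@surname NVARCHAR(100)',
             @surname = @term;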

    Read the article

  • Fiscal Cliff

    - by Carolyn Cozart
    As December 31, 2012 quickly approaches, so does the deadline to extend the Bush Era Tax Cuts which were extended under the Budget Control Act of 2011.  PeopleSoft realizes that there may be some customer anxiety over the delayed withholding tables. PeopleSoft is unable to move forward with delivering the tax changes until we get the official ruling and withholding tables from the IRS.  Please be assured that our legislative analysts are in the loop and are monitoring this situation daily.  Any changes will be included in a special posting. We have created a Knowledge Document in My Oracle Support, Document ID 1332295.1 to keep you up to date on the pending changes.

    Read the article

  • SQL installation scripts for WebCenter Content 11g

    - by KJH
    As part of the installation of WebCenter Content 11g (UCM or URM), one of the main steps is to run the Repository Creation Utility (RCU) to establish the database schema and tables. This is pretty helpful because it runs all the scripts you need without you having to set anything up manually in the database. In UCM 10g and earlier, the installation itself would establish the database tables if you wanted it to; otherwise, the SQL scripts were available to be run independently ahead of time. For DBAs who wanted to understand what was being done to the database for the application, this was helpful. But in 11g that is all masked by RCU: you don't get to see the scripts at all as part of its establishing the tables. If you comb through the directories for RCU, however, you can track them down. They are in the /rcuHome/rcu/integration/contentserver11/sql/ directories, and to understand the order in which they are run you can open the /rcuHome/rcu/integration/contentserver11/contentserver11.xml file. The order in which they are run is:
    contentserverrole.sql
    contentserveruser.sql
    intradoc.sql
    workflow.sql
    formats.sql
    users.sql
    default.sql
    contentprocedures.sql
    If you are installing WebCenter Records (URM), it will run some additional scripts between formats.sql and users.sql:
    MetadataSet.sql
    UIEnhancements.sql
    RecordsManagement.sql
    RecordsManagement_default.sql
    ClassifiedEnhancements.sql
    ClassifiedEnhancements_default.sql
    In addition to being available within the RCU install directories, the scripts are also available from within the Content Server UI. If you go to Administration -> DataStoreDesign SQL Generation, that page allows you to download these various SQL scripts. From here, you can select your particular database type and which components to include. Several components make changes dynamically to the database when they are enabled, so these scripts give you a way to inspect what is run during that startup time. Once selected, click Generate and you can either view or download the scripts from the Actions menu.
    DISCLAIMER: Installations are ONLY supported when done with the Repository Creation Utility. These scripts are for reference only and are not supported to be run manually.

    Read the article

  • Function Folding in #PowerQuery

    - by Darren Gosbell
    Originally posted on: http://geekswithblogs.net/darrengosbell/archive/2014/05/16/function-folding-in-powerquery.aspx
    Looking at a typical Power Query query you will notice that it's made up of a number of small steps. As an example, take a look at the query I did in my previous post about joining a fact table to a slowly changing dimension. It was roughly built up of the following steps:
    1. Get all records from the fact table
    2. Get all records from the dimension table
    3. Do an outer join between these two tables on the business key (resulting in an increase in the row count, as there are multiple records in the dimension table for each business key)
    4. Filter out the excess rows introduced in step 3
    5. Remove extra columns that are not required in the final result set.
    If Power Query were to execute a query like this literally, following the same steps in the same order, it would not be overly efficient, particularly if your two source tables were quite large. However Power Query has a feature called function folding where it can take a number of these small steps and push them down to the data source. The degree of function folding that can be performed depends on the data source. As you might expect, relational data sources like SQL Server, Oracle and Teradata support folding, but so do some of the other sources like OData, Exchange and Active Directory.
    To explore how this works I took the data from my previous post and loaded it into a SQL database, then converted my Power Query expression to source its data from that database. Below is the resulting Power Query, which I edited by hand so that the whole thing can be shown as a single expression:

        let
            SqlSource = Sql.Database("localhost", "PowerQueryTest"),
            BU = SqlSource{[Schema="dbo",Item="BU"]}[Data],
            Fact = SqlSource{[Schema="dbo",Item="fact"]}[Data],
            Source = Table.NestedJoin(Fact,{"BU_Code"},BU,{"BU_Code"},"NewColumn"),
            LeftJoin = Table.ExpandTableColumn(Source, "NewColumn",
                                               {"BU_Key", "StartDate", "EndDate"},
                                               {"BU_Key", "StartDate", "EndDate"}),
            BetweenFilter = Table.SelectRows(LeftJoin, each (([Date] >= [StartDate]) and ([Date] <= [EndDate])) ),
            RemovedColumns = Table.RemoveColumns(BetweenFilter,{"StartDate", "EndDate"})
        in
            RemovedColumns

    If the above query were run step by step in a literal fashion, you would expect it to run two queries against the SQL database, doing "SELECT * …" from both tables. However a profiler trace shows just the following single SQL query:

        select [_].[BU_Code],
               [_].[Date],
               [_].[Amount],
               [_].[BU_Key]
        from
        (
            select [$Outer].[BU_Code],
                   [$Outer].[Date],
                   [$Outer].[Amount],
                   [$Inner].[BU_Key],
                   [$Inner].[StartDate],
                   [$Inner].[EndDate]
            from [dbo].[fact] as [$Outer]
            left outer join
            (
                select [_].[BU_Key] as [BU_Key],
                       [_].[BU_Code] as [BU_Code2],
                       [_].[BU_Name] as [BU_Name],
                       [_].[StartDate] as [StartDate],
                       [_].[EndDate] as [EndDate]
                from [dbo].[BU] as [_]
            ) as [$Inner]
                on ([$Outer].[BU_Code] = [$Inner].[BU_Code2]
                    or [$Outer].[BU_Code] is null and [$Inner].[BU_Code2] is null)
        ) as [_]
        where [_].[Date] >= [_].[StartDate] and [_].[Date] <= [_].[EndDate]

    The resulting query is a little strange; you can probably tell that it was generated programmatically. But if you look closely you'll notice that every single part of the Power Query formula has been pushed down to SQL Server.
    Power Query itself ends up just constructing the query and passing the results back to Excel; it does not do any of the data transformation steps itself. So now you can feel a bit more comfortable showing Power Query to your less technical colleagues, knowing that the tool will do its best to fold all the small steps in Power Query down to the most efficient query that it can run against the source systems.

    Read the article

  • How to make Unit Tests to make sure stored procedure is deleting row from the database?

    - by aspdotnetuser
    I'm new to unit testing and I need some help with the following. I have created a small project to help me learn how to make Unit Tests. The functionality for one of the forms in my application deletes a user from the User table (and other rows in mapping tables). Currently, the unit test I have created to test this sets up the required objects and then calls the business rules method (passing in the user id) which calls the data access method to execute the stored procedure that deletes the rows in the tables. Is this the correct method to test whether something is being deleted successfully? Should the unit test / setup method first insert some test data which the unit test then deletes?
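    If the check is made at the database level, one common pattern is to wrap the arrange/act/assert steps in a transaction that is rolled back at the end, so no test data persists. A rough T-SQL sketch in which the table, mapping table and procedure names are purely hypothetical:

        BEGIN TRANSACTION;

        -- Arrange: insert a throw-away user and a mapping row (hypothetical schema).
        INSERT INTO dbo.[User] (UserId, UserName) VALUES (9999, 'test-user');
        INSERT INTO dbo.UserRoleMap (UserId, RoleId) VALUES (9999, 1);

        -- Act: call the procedure under test.
        EXEC dbo.usp_DeleteUser @UserId = 9999;

        -- Assert: both tables should now be empty for that key.
        IF EXISTS (SELECT 1 FROM dbo.[User] WHERE UserId = 9999)
            OR EXISTS (SELECT 1 FROM dbo.UserRoleMap WHERE UserId = 9999)
            RAISERROR ('usp_DeleteUser left rows behind', 16, 1);

        -- Clean up: nothing the test did survives.
        ROLLBACK TRANSACTION;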

    Read the article

  • Powershell – script all objects on all databases to files

    - by Nigel Rivett
    <#
    This simple PowerShell routine scripts out all the user-defined functions, stored procedures, tables and views in all the databases on the server that you specify, to the path that you specify.
    SMO must be installed on the machine (it is there if SSMS is installed).
    To run: set the server name and path, open a command window, run powershell, copy the below into the window and press enter - it should run.
    It will create the subfolders for the databases and objects if necessary.
    #>
    $path = "C:\Test\Script\"
    $ServerName = "MyServerNameOrIpAddress"
    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO')
    $serverInstance = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $ServerName
    $IncludeTypes = @("tables","StoredProcedures","Views","UserDefinedFunctions")
    $ExcludeSchemas = @("sys","Information_Schema")
    $so = New-Object ('Microsoft.SqlServer.Management.Smo.ScriptingOptions')
    $so.IncludeIfNotExists = 0
    $so.SchemaQualify = 1
    $so.AllowSystemObjects = 0
    $so.ScriptDrops = 0    #Script Drop Objects
    $dbs = $serverInstance.Databases
    foreach ($db in $dbs)
    {
        $dbname = "$db".replace("[","").replace("]","")
        $dbpath = "$path" + "$dbname" + "\"
        if ( !(Test-Path $dbpath)) {$null = new-item -type directory -name "$dbname" -path "$path"}
        foreach ($Type in $IncludeTypes)
        {
            $objpath = "$dbpath" + "$Type" + "\"
            if ( !(Test-Path $objpath)) {$null = new-item -type directory -name "$Type" -path "$dbpath"}
            foreach ($objs in $db.$Type)
            {
                If ($ExcludeSchemas -notcontains $objs.Schema )
                {
                    $ObjName = "$objs".replace("[","").replace("]","")
                    $OutFile = "$objpath" + "$ObjName" + ".sql"
                    $objs.Script($so) + "GO" | out-File $OutFile #-Append
                }
            }
        }
    }

    Read the article

  • SQL Server - Rebuilding Indexes

    - by Renso
    Goal: Rebuild indexes in SQL Server. This can be done one at a time or, with the example script below, for all indexes on a specified table or on all tables in a given database.
    Why? The data in indexes gets fragmented over time. That means that as the index grows, the newly added rows are physically stored in other sections of the allocated database storage space. Kind of like when you load your Christmas shopping into the trunk of your car: once it is full you continue to load some on the back seat. In the same way, some storage buffer is created for your index, but once that runs out the data is stored in other storage space and the data in your index is no longer held in contiguous physical pages. To access the index, the database manager has to "string together" disparate fragments to create the full index as one contiguous set of pages. Defragmentation fixes that.
    What does the fragmentation affect? Depending, of course, on how large the table is and how fragmented the data is, it can cause SQL Server to perform unnecessary data reads, slowing down SQL Server's performance.
    Which index to rebuild? As a rule, consider that when you rebuild a table's clustered index, all other non-clustered indexes on that same table will automatically be rebuilt. A table can only have one clustered index.
    How to rebuild all the indexes for one table: the DBCC DBREINDEX command will not automatically rebuild all of the indexes on a given table in a database.
    How to rebuild all indexes for all tables in a given database:

        USE [myDB]    -- enter your database name here
        DECLARE @tableName varchar(255)
        DECLARE TableCursor CURSOR FOR
            SELECT table_name FROM information_schema.tables
            WHERE table_type = 'base table'
        OPEN TableCursor
        FETCH NEXT FROM TableCursor INTO @tableName
        WHILE @@FETCH_STATUS = 0
        BEGIN
            DBCC DBREINDEX(@tableName,' ',90)     -- a fill factor of 90%
            FETCH NEXT FROM TableCursor INTO @tableName
        END
        CLOSE TableCursor
        DEALLOCATE TableCursor

    What does this script do? It reindexes all indexes in all tables of the given database. Each index is filled with a fill factor of 90%. While the DBCC DBREINDEX command runs and rebuilds the indexes, the table becomes temporarily unavailable for use by your users until the rebuild has completed, so don't do this during production hours: it will create a shared lock on the tables, although it will allow read-only uncommitted data reads, i.e. SELECT.
    What is the fill factor? It is the percentage of space on each index page used for storing data when the index is created or rebuilt. It replaces the fill factor set when the index was created, becoming the new default for the index and for any other nonclustered indexes rebuilt because a clustered index was rebuilt. When fillfactor is 0, DBCC DBREINDEX uses the fill factor value last specified for the index. This value is stored in the sys.indexes catalog view. If fillfactor is specified, table_name and index_name must be specified. If fillfactor is not specified, the default fill factor, 100, is used.
    How do I determine the level of fragmentation? Run the DBCC SHOWCONTIG command. However, this requires you to specify the ID of both the table and the index. To make it a lot easier, requiring only the table name and/or index name, you can run this script:

        DECLARE @ID int,
                @IndexID int,
                @IndexName varchar(128)
        -- Specify the table and index names
        SELECT @IndexName = 'index_name'    -- name of the index
        SET @ID = OBJECT_ID('table_name')   -- name of the table
        SELECT @IndexID = IndID
        FROM sysindexes
        WHERE id = @ID AND name = @IndexName
        -- Show the level of fragmentation
        DBCC SHOWCONTIG (@ID, @IndexID)

    Here is an example:

        DBCC SHOWCONTIG scanning 'Tickets' table...
        Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
        TABLE level scan performed.
        - Pages Scanned................................: 915
        - Extents Scanned..............................: 119
        - Extent Switches..............................: 281
        - Avg. Pages per Extent........................: 7.7
        - Scan Density [Best Count:Actual Count].......: 40.78% [115:282]
        - Logical Scan Fragmentation ..................: 16.28%
        - Extent Scan Fragmentation ...................: 99.16%
        - Avg. Bytes Free per Page.....................: 2457.0
        - Avg. Page Density (full).....................: 69.64%
        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    What's important here? The Scan Density; ideally it should be 100%. As time goes by it drops as fragmentation occurs. When the level drops below 75%, you should consider re-indexing.
    Here are the results for the same table and clustered index after running the rebuild script:

        DBCC SHOWCONTIG scanning 'Tickets' table...
        Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
        TABLE level scan performed.
        - Pages Scanned................................: 692
        - Extents Scanned..............................: 87
        - Extent Switches..............................: 86
        - Avg. Pages per Extent........................: 8.0
        - Scan Density [Best Count:Actual Count].......: 100.00% [87:87]
        - Logical Scan Fragmentation ..................: 0.00%
        - Extent Scan Fragmentation ...................: 22.99%
        - Avg. Bytes Free per Page.....................: 639.8
        - Avg. Page Density (full).....................: 92.10%
        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    What's different? The Scan Density has increased from 40.78% to 100%: no fragmentation on the clustered index. Note that since we rebuilt the clustered index, all other indexes were also rebuilt.
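    As an aside, DBCC SHOWCONTIG has since been deprecated; on SQL Server 2005 and later the same fragmentation figures can be read from the sys.dm_db_index_physical_stats dynamic management function, without having to look up the IDs first. A small sketch, reusing the Tickets table from the example above:

        SELECT  i.name AS index_name ,
                ps.index_type_desc ,
                ps.avg_fragmentation_in_percent ,
                ps.page_count
        FROM    sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Tickets'), NULL, NULL, 'LIMITED') AS ps
                JOIN sys.indexes AS i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
        ORDER BY ps.avg_fragmentation_in_percent DESC;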

    Read the article

  • JCP activities at Devoxx 2013!

    - by Heather VanCura
    Devoxx 2013 has officially started! Looking forward to catching up with Java community member friends, old and new, this week. Tuesday (today) the Hackergarten has returned to Devoxx! There are Java EE 7 tables and Java SE 8 Lambda tables. Kudos to Andres Almiray for organizing the event and to Arun Gupta and Stuart Marks for leading the activities -- awesome Adopt-a-JSR participation in action! Wednesday there is a JCP 'quickie' session, How to Participate in the Future of Java, at 13:35-13:50. We will also have a chat with the OTN team afterward! Wednesday evening at 21:00, join us for our BOF session with Martijn Verburg and Johan Vos: JCP & Adopt-a-JSR Workshop BOF.

    Read the article

  • Google I/O Sandbox Case Study: The Bay Citizen

    Google I/O Sandbox Case Study: The Bay Citizen. We interviewed The Bay Citizen at the Google I/O Sandbox on May 11, 2011. They explained to us the benefits of using Fusion Tables on Google Maps to build infographics for their online newspaper. The Bay Citizen built the Bike Tracker infographic to display the prevalence of bike accidents at points across San Francisco. View the bike tracker here: www.baycitizen.org For more information about developing with Google Maps and Fusion Tables, visit: code.google.com For more information on The Bay Citizen, visit: www.baycitizen.org From: GoogleDevelopers | Views: 21 | 0 ratings | Time: 02:21 | More in Science & Technology

    Read the article

  • Announcing Key Functional White Papers for SIM and ReIM

    - by Oracle Retail Documentation Team
    Oracle Retail has published two new documents on My Oracle Support (https://support.oracle.com) that provide partners and retailers with deeper functional information about two products: Oracle Retail Store Inventory Management (SIM) and Oracle Retail Invoice Matching.
    Oracle Retail Store Inventory Management Item Configuration White Paper (Doc ID 1507221.1)
    There is functionality within the Store Inventory Management system related to item configuration that spans multiple concepts applying to the application as a whole rather than to a specific area. This white paper covers numerous topics around item configuration, including: Item Transaction Levels; Item Long Description; Pack Size; Standard Unit of Measure; Standard Unit of Measure Conversion; Pack Items; Simple Pack Conversion Items (Notional Packs); Ranging Items; Item Status; Non-Sellable Items; Type-2 Item Recognition; UPC-E Barcodes; Non-Inventory Items; Consignment and Concession Items; Quick Response Codes.
    Oracle Retail Invoice Matching Financial Transactions (Doc ID 1500209.1)
    This document explains the financial transactions that are posted by Oracle Retail Invoice Matching (ReIM). The scope of the document is limited to ReIM transactions only; it does not explain Retail Merchandising System (RMS), Finance, or Accounts Receivable transactions. ReIM follows the double-entry accounting standard, which works by recording the debit and credit of each financial transaction for each party involved. Each transaction means a profit to one account (debit) and a loss to another account (credit). Full invoice match processing is completed in ReIM, with payment recommendations communicated to Oracle Accounts Payable. ReIM matches merchandise orders and receipts against merchandise invoices, performing automated and manual matching as well as discrepancy-resolution processing. Matched invoices are posted to interface staging tables specifying the amount and date to pay, vendor, site ID, General Ledger Chart of Accounts (GL CoA) information, and payment terms. Other payables documents, including debit memos, credit memos and credit notes, are also interfaced to Accounts Payable through the ReIM staging tables (IM_AP_STAGE_HEAD and IM_AP_STAGE_DETAIL). For information about how ReIM engages in this processing, see the latest Oracle Retail Invoice Matching Operations Guide. Certain ReIM transactions are not interfaced to Oracle Payables, but are instead interfaced to Oracle General Ledger through the IM_FINANCIAL_STAGE table. When analyzing transactions posted through the staging tables, retailers should note the transaction type, Standard/Credit, as well as the sign in the amount field. Technically, a negative sign on a credit transaction changes the transaction to a debit entry, and vice versa. This document is concerned with the financial meaning of the transactions, and will avoid a discussion of negative numbers in T-charts.

    Read the article

  • Implementing a JS templating engine with current PHP project

    - by SeanWM
    I'm currently working on a PHP project and quickly realizing how much a templating engine would help me. I have a few tables whose rows are looped out via a PHP loop. Is it possible to use just a JS templating engine (like Handlebars.js) to also work with these tables? For example:

        $arr = array('red', 'green', 'blue');
        echo '<table>';
        foreach($arr as $value) {
            echo '<tr><td>' . $value . '</td></tr>';
        }
        echo '</table>';

    Now I want to add a column via an ajax call using a JS templating engine. Is this possible? Or do I have to use a templating engine for both the server side and the client side?

    Read the article

  • My coworker created a 96 columns SQL table

    - by Eric
    Here we are in 2010, software engineers with 4 or 5 years of experience, still designing tables with 96 fracking columns. I told him it's gonna be a nightmare. I showed him that we have to use ordinals to interface MySQL with C#. I explained that tables with more columns than rows are a huge smell. Still, I get the "It's going to be simpler this way". What should I do? EDIT: This table contains data from sensors. We have sensor 1 with Dynamic_D1X, Dynamic_D1Y [...] Dynamic_D6X, Dynamic_D6Y [...]
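    The usual counter-proposal to a 96-column sensor table is to pivot it into a long, narrow readings table keyed by sensor and axis. A hedged sketch, in MySQL syntax since that is the engine mentioned, with all names hypothetical:

        -- One row per sensor reading instead of 96 sensor columns on one row.
        CREATE TABLE sensor_reading (
            reading_id     BIGINT    NOT NULL AUTO_INCREMENT,
            recorded_at    DATETIME  NOT NULL,
            sensor_no      TINYINT   NOT NULL,   -- 1..6
            axis           CHAR(1)   NOT NULL,   -- 'X' or 'Y'
            reading_value  DOUBLE    NOT NULL,
            PRIMARY KEY (reading_id),
            KEY ix_sensor_axis_time (sensor_no, axis, recorded_at)
        );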

    Read the article

  • Database Partitioning and Multiple Data Source Considerations

    - by Jeffrey McDaniel
    With the release of P6 Reporting Database 3.0, partitioning was added as a feature to help with performance and data management. Careful investigation of requirements should be conducted prior to installation to help improve overall performance throughout the lifecycle of the data warehouse, preventing future maintenance that would result in data loss. Before installation, try to determine how many data sources and partitions will be required, along with the ranges. In P6 Reporting Database 3.0 any adjustments outside of defaults must be made in the scripts, and changes will require new ETL runs for each data source.
    Considerations:
    1. Standard Edition or Enterprise Edition of Oracle Database. If you aren't using Oracle Enterprise Edition Database, the partitioning feature is not available. Multiple data sources are only supported on Enterprise Edition of Oracle Database.
    2. Number of data source IDs for partitioning during configuration. This setting specifies how many partitions will be allocated for tables containing data source information. It requires some evaluation prior to installation, as there are repercussions if you don't estimate correctly. For example, suppose you configured the software for only 2 data sources and the partition setting was set to 2, but along came a 3rd data source. The necessary steps to accommodate this change are as follows:
    a) By default, 3 partitions are configured in the Reporting Database scripts. Edit the create_star_tables_part.sql script located in <installation directory>\star\scripts and search for partition. You'll see P1, P2, P3. Add additional partitions and sub-partitions for P4 and so on. These will appear in several areas. (See the P6 Reporting Database 3.0 Installation and Configuration guide for more information on this and on how to adjust partition ranges.)
    b) Run starETL -r. This will recreate each table with the new partition key. The effect of this step is that all table data will be lost except for history-related tables.
    c) Run starETL for each of the 3 data sources (with the data source # (starETL.bat "-s2" - as defined in the P6 Reporting Database 3.0 Installation and Configuration guide).
    The best strategy for this setting is to overestimate based on possible growth. If during implementation it is deemed that there are at least 2 data sources with possibility for growth, it is a better idea to set this setting to 4 or 5, allowing room for the future and preventing a 'start over' scenario.
    3. The number of partitions and the number of months per partition are not specific to multi-data source. These settings work in accordance with a sub-partition of larger tables with regard to time-related data, and are dataset specific for optimization. The number of months per partition is self-explanatory: optimally, the smaller the partition, the better the query performance, so if the dataset has an extremely large number of spread/history records, a lower number of months is optimal. Working in accordance with this setting is the number of partitions, which determines how many "buckets" will be created per the number-of-months setting.
    For example, if you kept the default of 3 for the number of partitions and selected 2 months for each partition, you would end up with:
    - 1st partition, 2 months
    - 2nd partition, 2 months
    - 3rd partition, all the remaining records
    Therefore, with regard to this setting, it is important to analyze your source database spread ranges and history settings when determining the proper number of months per partition and number of partitions to optimize performance. Also be aware that the DBA will need to monitor when these partition ranges fill up and when additional partitions need to be added. If you get to the final range partition and there are no additional range partitions, all data will be included in the last partition.
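    The product-specific DDL lives in create_star_tables_part.sql, but the mechanism underneath is ordinary Oracle range partitioning. A generic, purely illustrative sketch of the idea (not the actual P6 Reporting Database script):

        -- Hypothetical illustration only.
        CREATE TABLE w_fact_example (
            datasource_id  NUMBER  NOT NULL,
            activity_date  DATE    NOT NULL,
            amount         NUMBER
        )
        PARTITION BY RANGE (datasource_id)
        (
            PARTITION p1 VALUES LESS THAN (2),        -- data source 1
            PARTITION p2 VALUES LESS THAN (3),        -- data source 2
            PARTITION p3 VALUES LESS THAN (MAXVALUE)  -- any later data sources land here
        );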

    Read the article

  • Construct sentences from tabular data

    - by Sumeet
    I have a huge set of HTML files and have to retrieve the meaningful information from them. Most of the task is accomplished; now the problem is with HTML tables. I have some literature on how to extract meaningful tables from HTML, but my problem is with creating meaningful sentences from tabular data (or attribute-value pairs extracted from a table). Are there any NLP/machine learning techniques to do this? Here is what I expect. Suppose below is a sample table:
    col_Name: Sumeet
    col_year: 2011
    col_winner: quiz
    Can this be made into something meaningful like "Sumeet won quiz in 2011"?
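    Before reaching for NLP or machine learning, attribute-value pairs like the sample above can often be verbalised with one fixed template per table schema. A hedged sketch in T-SQL, assuming the extracted values have been loaded into a hypothetical staging table:

        -- Hypothetical staging table holding the extracted attribute values, one row per source table.
        CREATE TABLE extracted_row (col_Name VARCHAR(100), col_winner VARCHAR(100), col_year INT);
        INSERT INTO extracted_row VALUES ('Sumeet', 'quiz', 2011);

        -- One fixed template per schema: "<name> won <winner> in <year>".
        SELECT col_Name + ' won ' + col_winner + ' in ' + CAST(col_year AS VARCHAR(4)) AS sentence
        FROM   extracted_row;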

    Read the article

  • Storing a looong lookup table

    - by inquisitive
    Background: The product I am working on has a very long lookup table. The table contains static data and cannot be auto-generated. There are about 500 rows and 10 columns; the columns hold mostly integers and strings. To complicate matters, there are actually two such tables, and every row in table-1 maps to zero or more rows in table-2. We use an SQLite database with the two tables, and the product installer places the SQLite file in the installation directory. The application is written in dot-net and we use ADO to load the data once on startup. Now, the lookup table grows: in each monthly release we add about 10 new entries and adjust existing entries; every release we fine-tune existing entries.
    The problem: A team of (10) developers works on the lookup table. Code goes in SVN, but the little devil the SQLite file does not. This prevents multiple developers from working on it. We do take regular backups of the file, but proper versioning is not possible. We never know who made the breaking change; worse, we don't know if there is any change at all. Diffing databases is tedious, if not impossible. The tables are expected to grow quite large in years to come and we would need developers to work on them in parallel. The data is business critical and we need to be able to audit changes made to it.
    Question: What would be a solution for the problems outlined above? One idea was to transform the whole thing to XML and treat it like just another source file; that way SVN can do the versioning and we can work in parallel. But the data shows relational behavior: with XML we lose the unique and foreign-key constraints, and we can't query it with SQL-like ease. Any help here will be appreciated.
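    One middle ground, sketched below with hypothetical names, is to keep the lookup data itself as a plain .sql seed script in SVN and have the build (or installer) regenerate the SQLite file from it: the text diffs and merges line by line, yet the unique and foreign-key constraints stay relational.

        -- lookup_seed.sql : lives in SVN next to the code; the build runs it to produce the SQLite file.
        CREATE TABLE lookup1 (
            id    INTEGER PRIMARY KEY,
            code  TEXT NOT NULL UNIQUE,
            value INTEGER NOT NULL
        );
        CREATE TABLE lookup2 (
            id         INTEGER PRIMARY KEY,
            lookup1_id INTEGER NOT NULL REFERENCES lookup1(id),   -- the 1 : 0..n mapping (enforced with PRAGMA foreign_keys = ON)
            detail     TEXT NOT NULL
        );
        -- One INSERT per row, one row per line, so SVN blame/diff shows exactly who changed what.
        INSERT INTO lookup1 (id, code, value) VALUES (1, 'ALPHA', 10);
        INSERT INTO lookup2 (id, lookup1_id, detail) VALUES (1, 1, 'first detail row');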

    Read the article

  • C# ComboBox SelectedItem [closed]

    - by Diane
    I have a combobox with a list of values. I would like the selected index to be a value from the database. I first create the combobox and fill it with values from a dataset like this:

        ComboBox cmb_Relay = new ComboBox();
        cmb_Relay.DataSource = ds2.Tables[0];
        cmb_Relay.DisplayMember = "Relay";
        cmb_Relay.ValueMember = "Relay";

    Next, I set the SelectedIndex to the value of a specific field:

        cmb_Relay.SelectedItem = ds2.Tables[0].Rows[j][2];

    I get the following error:

        InvalidArgument=Value of '3' is not valid for 'SelectedIndex'. Parameter name: SelectedIndex

    Read the article

  • Can it be a good idea to create a new table for each client of a webapp?

    - by Will
    This is semi-hypothetical, and as I've no experience in dealing with massive database tables, I have no idea if this is horrible for some reason. On to the situation: imagine a web-based application - let's say accounting software - which has 20,000 clients, and each client has 1000+ entries in a table. That's 20 million rows, which I know can certainly slow down complex queries. In a case like this, does it make more sense to create a new table in the database for each client? How do databases react to having 20k (or more!) tables?
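    For comparison, the single shared table that this is usually weighed against simply keys and indexes every row by client. A hedged T-SQL sketch with hypothetical names:

        -- Every query filters on client_id first, so the composite index keeps each
        -- client's ~1000 rows together even though the table holds 20 million rows.
        CREATE TABLE dbo.LedgerEntry (
            entry_id    BIGINT IDENTITY(1,1) NOT NULL PRIMARY KEY,
            client_id   INT            NOT NULL,
            entry_date  DATE           NOT NULL,
            amount      DECIMAL(18, 2) NOT NULL
        );

        CREATE INDEX IX_LedgerEntry_Client ON dbo.LedgerEntry (client_id, entry_date);

        -- A typical per-client query only ever touches that client's slice of the data:
        -- SELECT entry_date, amount FROM dbo.LedgerEntry WHERE client_id = 42 AND entry_date >= '2014-01-01';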

    Read the article

  • Is there a definitive reference on Pinball playfield design?

    - by World Engineer
    I'm looking at designing tables for Future Pinball but I'm not sure where to start as I've little background in game design per se. I've played scores of pinball tables over the years so I've a fairly good idea of what is "fun" in those terms. However, I'd like to know if there is a definitive "bible" of pinball design as far as layout and scoring/mode design goes. I've looked but there doesn't seem to be anything really coherent that I could find. Is it simply a lost art or am I missing some buried gem?

    Read the article
