Search Results

Search found 8943 results on 358 pages for 'audit tables'.

Page 86/358 | < Previous Page | 82 83 84 85 86 87 88 89 90 91 92 93  | Next Page >

  • Selecting data in clustered index order without ORDER BY

    - by kcrumley
    I know there is no guarantee without an ORDER BY clause, but are there any techniques to tune SQL Server tables so they're more likely to return rows in clustered index order, without having to specify ORDER BY every single time I want to run a super-quick ad hoc query? For example, would rebuilding my clustered index or updating statistics help? I'm aware that I can't count on a query like: select * from AuditLog where UserId = 992 to return records in the order of the clustered index, so I would never build code into an application based on this assumption. But for simple ad hoc queries, on almost all of my tables, the data consistently comes out in clustered index order, and I've gotten used to being able to expect the most recent results to be at the bottom. Out of all the many tables we use, I've only noticed two ever giving me results in an unpredicted order. This is really just an annoyance, but it would be nice to be able to minimize it. In case this is relevant because of page boundary issues or something like that, I should mention that one of the tables that has inconsistent ordering, the AuditLog table, is the longest table we have that has a clustered index on an identity column. Also, this database has recently been moved from SQL 2005 to SQL 2008, and we've seen no noticeable change in this behavior.
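    A minimal T-SQL sketch of the one guaranteed approach: ask for the order explicitly. When the ORDER BY matches the clustering key, the optimizer can usually satisfy it from the ordered index without an extra sort. The clustered key is assumed here to be an identity column named AuditLogId (a hypothetical name, not from the original post):

        SELECT *
        FROM AuditLog
        WHERE UserId = 992
        ORDER BY AuditLogId;  -- matches the clustered key, so typically no separate sort step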

    Read the article

  • Altering an embedded truetype font so it will be useable by Windows GDI

    - by Ritsaert Hornstra
    I am trying to render PDF content to a GDI device context (a 24-bit bitmap, to be exact). Parsing the PDF stream into PDF objects and rendering the PDF commands from the content dictionary works well, including font rendering. Embedded fonts are decompressed from their FontFile streams and "loaded" using AddFontMemResourceEx. Now some embedded fonts omit TrueType tables that are needed by GDI, like the NAME table. Because of this, I tried to modify the font by parsing the TrueType subset font into its tables and modifying those tables that have data missing; missing tables are regenerated with information that is as correct as possible. I use the Microsoft Font Validator tool to see how "correct" the generated font is. I still get a few errors, like the max values in the maxp table being too large (it is a subset) or "The xAvgCharWidth field does not equal the calculated value" for the OS/2 table, but errors like these do not stop other embedded fonts from being usable. The fonts embedded using PDFCreator are the ones that are problematic. Questions: How can I determine what I need to change in the font file in order for GDI to be able to use it? Are there any other font validation tools that might give me insight into what is still wrong with the font file? If needed, I can make an original font file and an altered font file available for download somewhere.

    Read the article

  • I want to run two or more procedures in parallel

    - by binod gyawali
    I have a list of procedures. The procedures are not dependent on each other, so what I need to do is run the independent procedures in parallel. I have 4 procedures that are to be run in parallel; when they complete successfully, I need to move on to the next task. These procedures create about 10 tables. The next task is to execute a second set of procedures. I have made a table that describes the dependency of these procedures on the tables created above. After any one of the first 4 procedures completes, I should come to this second set of procedures and find those whose dependency tables have already been created. If a procedure's dependent tables are all created, I need to execute that procedure. Running the 4 procedures in parallel is done by DTS, but the difficulty for me is handing off from the first 4 procedures to the second set of procedures. Please help me complete my task. Thanks in advance.
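    A hedged sketch of one way the hand-off check could be expressed, assuming a made-up dependency table proc_dependency (proc_name, required_table) and a completion log completed_table (table_name) that the first four procedures write to as they finish; these names are illustrative, not the poster's schema:

        -- Procedures whose required tables have all been created and are therefore ready to run.
        SELECT d.proc_name
        FROM proc_dependency d
        LEFT JOIN completed_table c ON c.table_name = d.required_table
        GROUP BY d.proc_name
        HAVING COUNT(*) = COUNT(c.table_name);  -- no required table is missing from the completion log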

    Read the article

  • How do you make life easier for yourself when developing a really large database

    - by Hannes de Jager
    I am busy developing 2 web-based systems with MySQL databases, and the number of tables/views/stored routines is growing large; it is more and more challenging to handle the complexity. In programming languages we have namespacing, e.g. Java packages and C++ namespaces, to partition the software and group it together to make things more understandable. Databases, on the other hand, have more of a flat structure (MySQL at least), e.g. tables and stored procedures are all on the same level. So one has to be more creative: creating naming conventions, perhaps using more than one database, or using tools to visualize things. What methods do you use to ease the pain, to be effective while developing your databases, and to not get lost in a sea of tables, fields and stored procs? Feel free to mention tools you use as well, but try to restrict it to open-source and preferably Linux solutions if that's OK. By the way, how many tables would a database have to have to be considered large in terms of design?
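    A hedged sketch of one of the options mentioned above (using more than one database): in MySQL a database and a schema are the same thing, so separate databases on one server can act as namespaces and queries can join across them with qualified names. All names below are made up:

        CREATE DATABASE shop_billing;
        CREATE DATABASE shop_catalog;

        SELECT o.order_id, p.name
        FROM shop_billing.orders AS o
        JOIN shop_catalog.product AS p ON p.product_id = o.product_id;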

    Read the article

  • Need to exclude results in a MySQL query where two table fields are not of certain values (brain far

    - by DondeEstaMiCulo
    I don't know if I'm just burnt out and can't think, or what... But I can't seem to make this work right... (We're using MySQL 5.1.) I have two tables which have some transactional stuff stored in them. There will be many records per user_id in each table. Table1 and Table2 have a one-to-one relationship with each other. I want to pull records from both tables, but I want to exclude records which have certain values in both tables. I don't care if neither has these values, or if just one does, but both tables should not have both values. (Does this make any sense? lol) For example:

        SELECT t1.id, t1.type, t2.name
        FROM table1 t1
        INNER JOIN table2 t2 ON t1.xid = t2.id
        WHERE t1.user_id = 100
          AND (t1.type != 'FOO' AND t2.name != 'BAR')

    So t1.type is an ENUM with about 10 different options, and t2.name is also an ENUM with 2 options. My expected results would look a little like:

        1, FOO, YUM
        2, BOO, BAR
        3, BOO, YUM

    But instead, all I'm getting is:

        3, BOO, YUM

    because it's filtering out all records which have 'FOO' as the type and all which have 'BAR' as the name. I keep waiting for that D'oh! moment where it hits me and I feel like an idiot for not realizing what I'm doing wrong. But it hasn't come. And I still feel like an idiot, lol. I appreciate any light any of you can shed on this! Many thanks in advance for the help!
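    If the intent is to drop only the rows where both values match at once, the usual fix is to negate the combined condition rather than each column separately. A hedged sketch that keeps the join columns from the posted query:

        SELECT t1.id, t1.type, t2.name
        FROM table1 t1
        INNER JOIN table2 t2 ON t1.xid = t2.id
        WHERE t1.user_id = 100
          AND NOT (t1.type = 'FOO' AND t2.name = 'BAR');
        -- equivalent by De Morgan's law: AND (t1.type != 'FOO' OR t2.name != 'BAR')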

    Read the article

  • Using NULLs in matchup table

    - by TomWilsonFL
    I am working on the accounting portion of a reservation system (think limo company). In the system there are multiple objects that can either be paid or submit a payment. I am tracking all of these "transactions" in three tables called tx, tx_cc, and tx_ch. tx generates a new tx_id (for transaction ID) and keeps the information about amount, validity, etc. tx_cc and tx_ch keep the information about the credit card or check used, respectively, which link to other tables (credit_card and bank_account among others). This seems fairly normalized to me, no? Now here is my problem: the payment transaction can take place for a myriad of reasons. Either a reservation is being paid for, a travel agent that booked a reservation is being paid, a driver is being paid, etc. This results in multiple tables, one for each of the entities: agent_tx, driver_tx, reservation_tx, etc. They look like this:

        CREATE TABLE IF NOT EXISTS `driver_tx` (
          `tx_id` int(10) unsigned zerofill NOT NULL,
          `driver_id` int(11) NOT NULL,
          `reservation_id` int(11) default NULL,
          `reservation_item_id` int(11) default NULL,
          PRIMARY KEY (`tx_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    Now this transaction is for a driver, but could be applied to an individual item on the reservation or the entire reservation overall. Therefore I demand that either reservation_id OR reservation_item_id be null. In the future there may be other things which a driver is paid for, which I would also add to this table, defaulting to null. What is the rule on this? Opinion? Obviously I could break this out into MANY three-column tables, but the amount of OUTER JOINing needed seems outrageous. Your input is appreciated. Peace, Tom
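    A hedged sketch of one common way to state the "at least one of these stays null" rule in the table itself is a CHECK constraint; note that MySQL only enforces CHECK constraints from 8.0.16 onward, so on the 5.x-era engine shown above the rule would have to live in a trigger or the application layer instead:

        ALTER TABLE `driver_tx`
          ADD CONSTRAINT chk_driver_tx_target
          CHECK (`reservation_id` IS NULL OR `reservation_item_id` IS NULL);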

    Read the article

  • Oracle manually add an FK constraint

    - by Oxymoron
    Alright, since a client wants to automate a certain process, which includes creating a new key structure in a LIVE database, I need to create relations between tables and columns. Now I've found the views ALL_CONS_COLUMNS and USER_CONSTRAINTS, which hold information about constraints. If I were to manually create constraints by inserting into these, I should be able to recreate the original constraints. My question: are there any more tables I should look into? Do you have any alternate suggestions, as this sounds VERY dirty and error-prone to begin with? Current modus operandi: create a new column in each table for the PK; generate a GUID for this PK; create a new column in each table for the FKs; fetch the GUID associated with the FK; ....... done so far ......; add a new constraint based on the old one; remove the old constraint; rename the new columns. This is kind of dodgy and I'd rather change my method, so any ideas would be helpful. To put it differently: the client wants to change the key structure from int to GUID on a live database. What's the best way to approach this?
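    The dictionary views mentioned above (ALL_CONS_COLUMNS, USER_CONSTRAINTS) are read-only projections of Oracle's catalog, so constraints are recreated with DDL rather than inserts. A hedged sketch with made-up table and column names:

        ALTER TABLE child_table
          ADD CONSTRAINT fk_child_parent
          FOREIGN KEY (parent_guid)
          REFERENCES parent_table (guid_pk)
          ENABLE NOVALIDATE;  -- optionally defer checking existing rows while migrating a live system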

    Read the article

  • Visual Studio + Database Edition + CDC = Deploy Fail

    - by Ben
    Hi All, I've got a database using change data capture (CDC) that is created from a Visual Studio database project (GDR2). My problem is that I have a stored procedure that is analyzing the CDC information and then returning data. How is that a problem, you ask? Well, the order of operations during deployment is as follows:

        1. Pre-deployment script
        2. Tables
        3. Indexes, keys, etc.
        4. Procedures
        5. Post-deployment script

    Inside the post-deployment script is where I enable CDC, and herein lies the problem: the procedure that is acting on the CDC tables is bombing because they don't exist yet! I've tried to put the call to sys.sp_cdc_enable_table in the script that creates the table, but it doesn't like that:

        Error 102 TSD03070: This statement is not recognized in this context.
        C:...\Schema Objects\Schemas\dbo\Tables\Foo.table.sql 20 1 Foo

    Is there a better/built-in way to enable CDC such that its references are available when the stored procedures are created? Is there a way to run a script after tables are created but before other objects are created? How about a way to create the procedure, dependencies be damned? Or maybe I'm just doing things that shouldn't be done?!?! Now, I have a workaround: 1) comment out the sproc body, 2) deploy (CDC is created), 3) uncomment the sproc, 4) deploy again. Everything is great until the next time I update a CDC-tracked table; then I need to comment out the 'offending' procedure again. Thanks for reading my question and thanks for your help!
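    For context, CDC is switched on per database and per table with system procedures, which is why it can only run after the tables exist (typically in a post-deployment step, as described above). A hedged sketch with illustrative parameter values:

        EXEC sys.sp_cdc_enable_db;

        EXEC sys.sp_cdc_enable_table
             @source_schema = N'dbo',
             @source_name   = N'Foo',
             @role_name     = NULL;  -- NULL means no gating role is required to read the change data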

    Read the article

  • MySQL Database Design with Internationalization

    - by Some name
    Hello, I'm going to start work on a medium-sized application, and I'm planning its DB design. One thing that I'm not sure about is this: I will have many tables which will need internationalization, such as membership_options, gender_options, language_options, etc. Each of these tables will share common i18n fields, like title, alternative_title, short_description, description. In your opinion, which is the best way to do it? Have an i18n table with the same fields for each of the tables that will need them? Or do something like:

        Membership table            Gender table
        ----------------            --------------
        id | created_at             id | created_at
        1  | 22.03.2001             1  | 14.08.2002
        2  | 22.03.2001             2  | 14.08.2002

        General translation table
        -------------------------
        record_id | table_name | string_name | alternative_title | .... | id_language
        1         | membership | regular     | null              |      | 1 (english)
        1         | membership | normale     | null              |      | 2 (italian)
        1         | gender     | man         | null              |      | 1 (english)
        1         | gender     | uomo        | null              |      | 2 (italian)

    This would avoid me repeating something like:

        membership_translation table
        -----------------------------
        membership_id | name    | alternative_title | id_lang
        1             | regular | null              | 1
        1             | normale | null              | 2

        gender_translation table
        -----------------------------
        gender_id | name    | alternative_title | id_lang
        1         | man     | null              | 1
        1         | uomo    | null              | 2

    and so on. So I would probably reduce the number of DB tables, but I'm not sure about performance. I'm not much of a DB designer, so please let me know.
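    A hedged sketch of the single shared translation table the post proposes, with column names taken from the post and sizes/keys invented for the example:

        CREATE TABLE general_translation (
            record_id         INT          NOT NULL,
            table_name        VARCHAR(64)  NOT NULL,
            string_name       VARCHAR(255) NOT NULL,
            alternative_title VARCHAR(255) NULL,
            id_language       INT          NOT NULL,
            PRIMARY KEY (table_name, record_id, id_language)
        );

        -- Typical lookup: the Italian (id_language = 2) labels for the membership options.
        SELECT t.record_id, t.string_name
        FROM general_translation t
        WHERE t.table_name = 'membership' AND t.id_language = 2;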

    Read the article

  • notify url is not called

    - by Jahangeer Ahmed
    Dim redirecturl As String = ""
    redirecturl = ConfigurationManager.AppSettings("papalUrl").ToString() & "us/cgi-bin/webscr?cmd=_cart&upload=1&business=" & ConfigurationManager.AppSettings("paypalemail").ToString()
    Dim j As Integer = 0
    Dim dr1 As DataRow
    If ds.Tables("ReviewOrder").Rows.Count > 0 Then   ' the comparison operator was stripped in the original post; ">" assumed
        Dim requestsFile As String = Server.MapPath("~/App_Data/PaymentRequests.xml")
        ' ds.Tables("ReviewOrder").WriteXml(requestsFile)
        For j = 0 To ds.Tables("ReviewOrder").Rows.Count - 1
            dr1 = ds.Tables("ReviewOrder").Rows(j)
            redirecturl += "&item_name_" & j + 1 & "=" & dr1("varTitle")
            redirecturl += "&amount_" & j + 1 & "=" & dr1("flRate")
            redirecturl += "&image_url_" & j + 1 & "=" & ConfigurationManager.AppSettings("RSSurl").ToString() & dr1("imgImage")
            redirecturl += "&quantity_" & j + 1 & "=" & Convert.ToInt64(dr1("flQuantity"))
            ''redirecturl += "&item_name_2=Sample_testing2&amount_2=9.50"
            ''redirecturl += "&quantity_2=2"
            ''redirecturl += "&item_name_3=Sample_testing3"
            ''redirecturl += "&amount_3=8.50"
            ''redirecturl += "&quantity_3=3"
            redirecturl += "&custom_" & j + 1 & "=" & dr1("BasketID")
        Next
    End If
    redirecturl += "&currency=" & ConfigurationManager.AppSettings("CurrencyCode").ToString()
    redirecturl += "&first_name=" & firstName
    redirecturl += "&last_name=" & lastName
    redirecturl += "&city=" & city
    redirecturl += "&state=" & state
    redirecturl += "&zip=" & zip
    redirecturl += "&address1=" & address1
    redirecturl += "&address2=" & address2
    redirecturl += "&notify_url=" & Server.UrlEncode(ConfigurationManager.AppSettings("NotifyUrl").ToString() & "&rm=2")
    redirecturl += "&return=" & ConfigurationManager.AppSettings("SuccessURL").ToString()
    'Failed return page url
    redirecturl += "&cancel_return=" & ConfigurationManager.AppSettings("FailedURL").ToString()
    Page.ClientScript.RegisterClientScriptBlock(Me.GetType(), "Redirect", "window.parent.location='" & redirecturl & "';", True)

    Read the article

  • How to add a WHERE clause on the second table of a 1-to-1 join in Fluent NHibernate?

    - by daddywoodland
    I'm using a legacy database that was 'future-proofed' to keep track of historical changes. It turns out this feature is never used, so I want to map the tables into a single entity. My tables are:

        CodesHistory (CodesHistoryID (pk), CodeID (fk), Text)
        Codes (CodeID (pk), CodeName)

    To add an additional level of complexity, these tables hold the content for the drop-down lists throughout the application. So, I'm trying to map a Title entity (Mr, Mrs etc.) as follows:

        ' Title ClassMap
        Public Sub New()
            Table("CodesHistory")
            Id(Function(x) x.TitleID, "CodesHistoryID")
            Map(Function(x) x.Text)
            'Call into the other half of the 1-2-1 join in order to merge them in
            'this single domain object
            Join("Codes", AddressOf AddTitleDetailData)
            Where("CodeName like 'C.Title.%'")
        End Sub

        ' Method to merge two tables with a 1-2-1 join into a single entity in VB.Net
        Public Sub AddTitleDetailData(ByVal m As JoinPart(Of Title))
            m.KeyColumn("CodeID")
            m.Map(Function(x) x.CodeName)
        End Sub

    From the above, you can see that my 'CodeName' field represents the select list in question (C.Title, C.Age etc.). The problem is that the WHERE clause only applies to the 'CodesHistory' table, but the 'CodeName' field is in the 'Codes' table. As I'm sure you can guess, there's no scope to change the database. Is it possible to apply the WHERE clause to the Codes table?

    Read the article

  • How to optimize a PostgreSQL server for a "write once, read many"-type infrastructure?

    - by mhu
    Greetings, I am working on a piece of software that logs entries (and related tagging) in a PostgreSQL database for storage and retrieval. We never update any data once it has been inserted; we might remove it when the entry gets too old, but this is done at most once a day. Stored entries can be retrieved by users. The insertion of new entries can happen rather fast and regularly, so the database will commonly hold several million elements. The tables used are pretty simple: one table for ids, raw content and insertion date, and one table storing tags and their values associated with an id. User searches mostly concern tag values, so SELECTs usually consist of JOIN queries on ids across the two tables. To sum it up:

        - 2 tables
        - lots of INSERTs
        - no UPDATEs
        - some DELETEs, once a day at most
        - some user-generated SELECTs with JOINs
        - huge data set

    What would an optimal server configuration (software and hardware; I assume, for example, that RAID10 could help) be for my PostgreSQL server, given these requirements? By optimal, I mean one that allows SELECT queries to take a reasonably small amount of time. I can provide more information about the current setup (like tables, indexes ...) if needed.
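    A hedged sketch of the usual starting point for this workload: with no UPDATEs to pay index-maintenance costs on, cover the join column and the tag name/value predicate with indexes. Table and column names below are made up to match the description:

        CREATE INDEX idx_entry_tag_name_value ON entry_tag (tag_name, tag_value);
        CREATE INDEX idx_entry_tag_entry_id   ON entry_tag (entry_id);

        -- The search shape described in the post: find entries by a tag value.
        SELECT e.id, e.raw_content
        FROM entry e
        JOIN entry_tag t ON t.entry_id = e.id
        WHERE t.tag_name = 'severity' AND t.tag_value = 'high';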

    Read the article

  • SQL Server: Can you help me with this query?

    - by rlb.usa
    I want to run a diagnostic report on our MS SQL 2008 database server. I am looping through all of the databases, and then for each database I want to look at each table. But when I go to look at each table (with tbl_cursor), it always picks up the tables in the database 'master'. I think it's because of my tbl_cursor selection:

        SELECT table_name FROM information_schema.tables WHERE table_type = 'base table'

    How do I fix this? Here's the entire code:

        SET NOCOUNT ON
        DECLARE @table_count INT
        DECLARE @db_cursor VARCHAR(100)
        DECLARE database_cursor CURSOR FOR
            SELECT name FROM sys.databases where name<>N'master'
        OPEN database_cursor
        FETCH NEXT FROM database_cursor INTO @db_cursor
        WHILE @@Fetch_status = 0
        BEGIN
            PRINT @db_cursor
            SET @table_count = 0

            DECLARE @table_cursor VARCHAR(100)
            DECLARE tbl_cursor CURSOR FOR
                SELECT table_name FROM information_schema.tables WHERE table_type = 'base table'
            OPEN tbl_cursor
            FETCH NEXT FROM tbl_cursor INTO @table_cursor
            WHILE @@Fetch_status = 0
            BEGIN
                DECLARE @table_cmd NVARCHAR(255)
                SET @table_cmd = N'IF NOT EXISTS( SELECT TOP(1) * FROM ' + @table_cursor + ') PRINT N'' Table ''''' + @table_cursor + ''''' is empty'' '
                --PRINT @table_cmd --debug
                EXEC sp_executesql @table_cmd
                SET @table_count = @table_count + 1
                FETCH NEXT FROM tbl_cursor INTO @table_cursor
            END
            CLOSE tbl_cursor
            DEALLOCATE tbl_cursor

            PRINT @db_cursor + N' Total Tables : ' + CAST( @table_count as varchar(2) )
            PRINT N'' -- print another blank line
            SET @table_count = 0
            FETCH NEXT FROM database_cursor INTO @db_cursor
        END
        CLOSE database_cursor
        DEALLOCATE database_cursor
        SET NOCOUNT OFF
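    A hedged sketch of the usual fix: information_schema is always read in the current database, so the inner query has to be qualified with the database being looped over (or run after a USE inside the dynamic SQL). For example:

        DECLARE @tbl_cmd NVARCHAR(400)
        SET @tbl_cmd = N'SELECT table_name FROM ' + QUOTENAME(@db_cursor)
                     + N'.information_schema.tables WHERE table_type = ''BASE TABLE'''
        EXEC sp_executesql @tbl_cmd
        -- the returned rows would then be fed into a temp table, or the cursor re-declared via dynamic SQL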

    Read the article

  • Application that depends heavily on stored procedures

    - by PieterG
    We currently have an application that depends largely on stored procedures. There is heavy use of temp tables. It's an extremely large application. Facing this situation, I would like to use Entity Framework or Linq2Sql for a rewrite. I might consider using Fluent NHibernate or Subsonic, as I've used them quite extensively in the past. I've had problems with Linq2Sql generating the return types for the stored procedures because of the usage of the temp tables, and I think it's cumbersome to go and change all the stored procedures from temp tables to in-memory tables. Considering the 2 choices I'm weighing, which one is the best route to take, and why? If my choices are extremely idiotic, please provide alternatives. Edit: The reason for the question and the change is that the data access layer is non-existent and was built 10 years ago. We currently still run into a lot of issues with it. I don't want to divulge too much, but if you saw it, your eyes would start bleeding :)
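    A hedged note on the return-type problem: the Linq2Sql/EF designers typically infer a procedure's result shape with SET FMTONLY, which cannot see a #temp table created inside the procedure, so declaring a table variable up front is one common, relatively small workaround. The sketch below uses made-up names, not the original system's procedures:

        ALTER PROCEDURE dbo.GetOrderSummary
        AS
        BEGIN
            DECLARE @work TABLE (OrderId INT, Total MONEY);  -- instead of CREATE TABLE #work (...)

            INSERT INTO @work (OrderId, Total)
            SELECT OrderId, SUM(Amount) FROM dbo.OrderLine GROUP BY OrderId;

            SELECT OrderId, Total FROM @work;
        END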

    Read the article

  • Make datalist into 3 x 3

    - by unknown
    This is my code-behind for the products page. The problem is that my DataList will fill in new items whenever I add a new object in the website. So may I know how to add the code so that I can make the DataList into 3x3? Thanks.

    Code behind:

        Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
            Dim Product As New Product
            Dim DataSet As New DataSet
            Dim pds As New PagedDataSource
            DataSet = Product.GetProduct
            Session("Dataset") = DataSet
            pds.PageSize = 4
            pds.AllowPaging = True
            pds.CurrentPageIndex = 1
            Session("Page") = pds
            If Not Page.IsPostBack Then
                UpdateDatabind()
            End If
        End Sub

        Sub UpdateDatabind()
            Dim DataSet As DataSet = Session("DataSet")
            If DataSet.Tables(0).Rows.Count > 0 Then
                pds.DataSource = DataSet.Tables(0).DefaultView
                Session("Page") = pds
                dlProducts.DataSource = DataSet.Tables(0).DefaultView
                dlProducts.DataBind()
                lblCount.Text = DataSet.Tables(0).Rows.Count
            End If
        End Sub

        Private Sub dlProducts_UpdateCommand(source As Object, e As System.Web.UI.WebControls.DataListCommandEventArgs) Handles dlProducts.UpdateCommand
            dlProducts.DataBind()
        End Sub

        Public Sub PrevNext_Command(source As Object, e As CommandEventArgs)
            Dim pds As PagedDataSource = Session("Page")
            Dim CurrentPage = pds.CurrentPageIndex
            If e.CommandName = "Previous" Then
                If CurrentPage < 1 Or CurrentPage = pds.IsFirstPage Then
                    CurrentPage = 1
                Else
                    CurrentPage -= 1
                End If
                UpdateDatabind()
            ElseIf e.CommandName = "Next" Then
                If CurrentPage > pds.PageCount Or CurrentPage = pds.IsLastPage Then
                    CurrentPage = CurrentPage
                Else
                    CurrentPage += 1
                End If
                UpdateDatabind()
            End If
            Response.Redirect("Products.aspx?PageIndex=" & CurrentPage)
        End Sub

    This is my code for Product.vb:

        Public Function GetProduct() As DataSet
            Dim strConn As String
            strConn = ConfigurationManager.ConnectionStrings("HomeFurnitureConnectionString").ToString
            Dim conn As New SqlConnection(strConn)
            Dim strSql As String
            'strSql = "SELECT * FROM Product p INNER JOIN ProductDetail pd ON p.ProdID = pd.ProdID " & _
            '         "WHERE pd.PdColor = (SELECT min(PdColor) FROM ProductDetail as pd1 WHERE pd1.ProdID = p.ProdID)"
            Dim cmd As New SqlCommand(strSql, conn)
            Dim ds As New DataSet
            Dim da As New SqlDataAdapter(cmd)
            conn.Open()
            da.Fill(ds)
            conn.Close()
            Return ds
        End Function

    Read the article

  • What is the return type of my linq query?

    - by Ulhas Tuscano
    I have two tables, A & B. I can fire Linq queries and get the required data for the individual tables, as I know what each of the tables will return, as shown in the example. But when I join both tables I am not aware of the return type of the Linq query. This problem can be solved by creating a class which holds ID, Name and Address properties, but every time before writing a join query I would have to create a class that matches the return type, which is not a convenient way. Is there any other method available to achieve this?

        private IList<A> GetA()
        {
            var query = from a in objA select a;
            return query.ToList();
        }

        private IList<B> GetB()
        {
            var query = from b in objB select b;
            return query.ToList();
        }

        private IList<**returnType**?> GetJoinAAndB()
        {
            var query = from a in objA
                        join b in objB on a.ID equals b.AID
                        select new { a.ID, a.Name, b.Address };
            return query.ToList();
        }

    Read the article

  • Inner or Outer left Join

    - by user1557856
    I'm having difficulty modifying a script for this situation and am wondering if someone may be able to help. I have an address table and a phone table, both sharing the same column called id_number, so id_number = 2 in both tables refers to the same entity. Address and phone information used to be stored in one table (the address table), but it is now split into address and phone tables since we moved to Oracle 11g. There is a 3rd table called both_ids. This table also has an id_number column, in addition to an other_ids column storing SSN and some other ids. Before the table was split into address and phone tables, I had this script (written in Sybase):

        INSERT INTO sometable_3 (
        SELECT a.id_number, a.other_id,
               NVL(a1.addr_type_code,0) home_addr_type_code,
               NVL(a1.addr_status_code,0) home_addr_status_code,
               NVL(a1.addr_pref_ind,0) home_addr_pref_ind,
               NVL(a1.street1,0) home_street1,
               NVL(a1.street2,0) home_street2,
               NVL(a1.street3,0) home_street3,
               NVL(a1.city,0) home_city,
               NVL(a1.state_code,0) home_state_code,
               NVL(a1.zipcode,0) home_zipcode,
               NVL(a1.zip_suffix,0) home_zip_suffix,
               NVL(a1.telephone_status_code,0) home_phone_status,
               NVL(a1.area_code,0) home_area_code,
               NVL(a1.telephone_number,0) home_phone_number,
               NVL(a1.extension,0) home_phone_extension,
               NVL(a1.date_modified,'') home_date_modified
        FROM both_ids a, address a1
        WHERE a.id_number = a1.id_number(+)
          AND a1.addr_type_code = 'H');

    Now that we have moved to Oracle 11g, the address and phone information are split. How can I modify the above script to generate the same result in Oracle 11g? Do I have to first do an INNER JOIN between the address and phone tables and then do a LEFT OUTER JOIN to both_ids? I tried the following and it did not work:

        Insert Into.. select ...
        FROM a1. address
        INNER JOIN t.Phone ON a1.id_number = t.id_number
        LEFT OUTER JOIN both_ids a ON a.id_number = a1.id_number
        WHERE a1.adrr_type_code = 'H'
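    A hedged sketch of one ANSI-join shape for the new layout (column list shortened for space; the phone table and its column names are assumed, not confirmed by the post): keep both_ids as the driving table and outer-join the home address and its phone row to it. The addr_type_code filter moves into the ON clause so it does not turn the outer join back into an inner one:

        SELECT a.id_number,
               a.other_id,
               NVL(a1.street1, 0)         home_street1,
               NVL(p.area_code, 0)        home_area_code,
               NVL(p.telephone_number, 0) home_phone_number
        FROM both_ids a
        LEFT OUTER JOIN address a1
               ON a1.id_number = a.id_number
              AND a1.addr_type_code = 'H'
        LEFT OUTER JOIN phone p
               ON p.id_number = a1.id_number;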

    Read the article

  • Processing a resultset to look up foreign keys (and populate a new table!)

    - by Gilly
    Hi, I've been handed a dataset that has some fairly basic table structures with no keys at all, e.g.:

        {myRubishTable} - Area (varchar), AuthorityName (varchar), StartYear (varchar), StartMonth (varchar), EndYear (varchar), EndMonth (varchar), Amount (money)

    There are other tables that use the Area and AuthorityName columns, as well as a general use of months and years, so I figured a good first step was to pull Area and Authority into their own tables. I now want to process the data in the original table and look up the key value to put into my new table with foreign keys, which looks like this:

        (lookup tables)
        {Area}          - id (int, PK), name (varchar(50))
        {AuthorityName} - id (int, PK), name (varchar(50))

        (target table)
        {myBetterTable} - id (int, PK), area_id (int, FK-Area), authority_name_id (int, FK-AuthorityName), StartYear (varchar), StartMonth (varchar), EndYear (varchar), EndMonth (varchar), Amount (money)

    So if row one in the old table read MYAREA, MYAUTHORITY, 2009, Jan, 2010, Feb, 10000, I want to populate the new table with 1, 1, 1, 2009, Jan, 2010, Feb, 10000, where the first '1' is the primary key and the second two '1's are the ids in the lookup tables. Can anyone point me to the most efficient way of achieving this using just SQL? Thanks in advance.

    Footnote: I've achieved what I needed with some pretty simple WHERE clauses (I had left a rogue table name in the FROM, which was throwing me :o( ) but would be interested to know if this is the most efficient, i.e.:

        SELECT [area].[area_id], [authority].[authority_name_id],
               [myRubishTable].[StartYear], [myRubishTable].[StartMonth],
               [myRubishTable].[EndYear], [myRubishTable].[EndMonth],
               [myRubishTable].[Amount]
        FROM [myRubishTable],[Area],[AuthorityName]
        WHERE [myRubishTable].[Area]=[Area].[name]
          AND [myRubishTable].[Authority Name]=[dim_AuthorityName].[name]

    TIA
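    A hedged sketch of the same lookup written as an INSERT ... SELECT with explicit joins, assuming myBetterTable.id is auto-generated; spelling the joins out makes a stray table in the FROM list, like the one mentioned in the footnote, much easier to spot:

        INSERT INTO myBetterTable
               (area_id, authority_name_id, StartYear, StartMonth, EndYear, EndMonth, Amount)
        SELECT ar.id, au.id,
               r.StartYear, r.StartMonth, r.EndYear, r.EndMonth, r.Amount
        FROM myRubishTable r
        JOIN Area ar          ON ar.name = r.Area
        JOIN AuthorityName au ON au.name = r.AuthorityName;  -- the post also writes this column as [Authority Name]; use whichever matches the real table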

    Read the article

  • OWB 11gR2 – Flexible and extensible

    - by David Allan
    The Oracle data integration extensibility capabilities are something I love; nothing is more frustrating than a tool or platform that is very constraining. I think extensibility and flexibility are invaluable capabilities in the data integration arena. I liked Uli Bethke's posting on some extensibility capabilities with ODI (see Nesting ODI Substitution Method Calls here); he has some useful guidance on making customizations to existing KMs, and it is nice to learn by example. I thought I'd illustrate the same capabilities with ODI's partner OWB for the OWB community. There is a whole new world of potential. The LKM/IKM/CKM/JKMs are the primary templates that are supported (plus the Oracle Target code template), so there is a lot of potential for customizing and extending the product in this release. Enough waffle... Diving in at the deep end from Uli's post: in OWB 11gR2 the table operator has a number of additional properties that let you annotate the column usage with ODI-like properties, such as the slowly changing usage, or for your own user-defined purpose as in Uli's post. Below you see that for the target table SALES_TARGET we can use the UD5 property; when the mapping is assigned the code template (knowledge module) that has been modified with Uli's change, we can do custom things such as creating indices. The code template used by the mapping has the additional step, which is basically the code illustrated in Uli's posting used directly; the ODI 10g substitution references are also supported from within OWB's runtime. Now, to see whether this does what we expect before we execute it, we can check out the generated code, similar to how the traditional mapping generation and preview works; you do this by clicking on the 'Inspect Code' button on the execution unit's code template assignment. This then creates another tab with prefix 'Code - <mapping name>' where the generated code is put; scrolling down we find the last step with the indices being created. Looks good, so we are ready to deploy and execute. After executing the mapping we can then use the 'Audit Information' panel (select the mapping in the designer tree and click on View/Audit Information); this gives us a view of the execution where we can drill into the tasks that were executed and inspect both the template and the generated code that was executed, plus any potential errors. Reflecting back on earlier versions of OWB, these were the kinds of features that were always highly desirable: getting under the hood of the code generation and tweaking bits and pieces - fun and powerful stuff! We can step it up a bit here and explore some further ideas. The example below is a daisy-chained set of execution units where the intermediate table is a target of one unit and the source for another. We want that table to be a global temporary table, so we can tweak the templates. Back in the copy of SQL Control Append (for demo purposes) we modify the create-target-table step to make the table a global temporary table, with the option of on commit preserve rows. You can get a feel for some of the customizations and changes possible, providing great flexibility and extensibility for the data integration tools.

    Read the article

  • LLBLGen Pro feature highlights: grouping model elements

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system.) When working with an entity model which has more than a few entities, it's often convenient to be able to group entities together if they belong to a semantic sub-model. For example, if your entity model has several entities which are about 'security', it would be practical to group them together under the 'security' moniker. This way, you can easily find them again, yet they can be left inside the complete entity model altogether so their relationships with entities outside the group are kept. In other situations your domain consists of semi-separate entity models which all target tables/views located in the same database. It then might be convenient to have a single project to manage the complete target database, yet have the entity models separate from each other and have them result in separate code bases. LLBLGen Pro can do both for you. This blog post will illustrate both situations. The feature is called group usage and is controllable through the project settings. This setting is supported on all supported O/R mapper frameworks.

    Situation one: grouping entities in a single model.

    This situation is common for entity models which are dense, so many relationships exist between all sub-models: you can't split them up easily into separate models (nor do you likely want to), however it's convenient to have them grouped together into groups inside the entity model at the project level. A typical example for this is the AdventureWorks example database for SQL Server. This database, which is a single catalog, has a schema for each sub-group, however most of these schemas are tightly connected with each other: adding all schemas together will give a model with entities which indirectly are related to all other entities. LLBLGen Pro's default setting for group usage is AsVisualGroupingMechanism, which is what this situation is all about: we group the elements for visual purposes; it has no real meaning for the model nor the code generated. Let's reverse engineer AdventureWorks to an entity model. By default, LLBLGen Pro uses the target schema that the element being reverse engineered is in as the group it will be placed in. This is convenient if you already have categorized tables/views in schemas, as is the case in AdventureWorks. Of course this can be switched off, or corrected on the fly. When reverse engineering, we'll walk through a wizard which will guide us through the selection of the elements for which relational model data should be retrieved, which we can later on use to reverse engineer to an entity model. The first step after specifying which database server to connect to is to select these elements. Below we can see the AdventureWorks catalog as well as the different schemas it contains. We'll include all of them. After the wizard completes, we have all relational model data nicely in our catalog data, with schemas. So let's reverse engineer entities from the tables in these schemas. We select in the catalog explorer the schemas 'HumanResources', 'Person', 'Production', 'Purchasing' and 'Sales', then right-click one of them and from the context menu, we select Reverse engineer Tables to Entity Definitions.... This will bring up the dialog below. We check all checkboxes in one go by checking the checkbox at the top to mark them all to be added to the project.
    As you can see, LLBLGen Pro has already filled in the group name based on the schema name, as this is the default and we didn't change the setting. If you want, you can select multiple rows at once and set the group name to something else using the controls on the dialog. We're fine with the group names chosen so we'll simply click Add to Project. This gives the following result (I collapsed the other groups to keep the picture small ;)). As you can see, the entities are now grouped. Just to see how dense this model is, I've expanded the relationships of Employee: as you can see, it has relationships with entities from three other groups than HumanResources. It's not doable to cut up this project into sub-models without duplicating the Employee entity in all those groups, so this model is better suited to be used as a single model resulting in a single code base; however, it benefits greatly from having its entities grouped into separate groups at the project level, to make work done on the model easier. Now let's look at another situation, namely where we work with a single database while we want to have multiple models and for each model a separate code base.

    Situation two: grouping entities in separate models within the same project.

    To get rid of the entities to see the second situation in action, simply undo the reverse engineering action in the project. We still have the AdventureWorks relational model data in the catalog. To switch LLBLGen Pro to see each group in the project as a separate project, open the Project Settings, navigate to General and set Group usage to AsSeparateProjects. In the catalog explorer, select Person and Production, right-click them and select again Reverse engineer Tables to Entities.... Again check the checkbox at the top to mark all entities to be added and click Add to Project. We get two groups, as expected, however this time the groups are seen as separate projects. This means that the validation logic inside LLBLGen Pro will see it as an error if there's e.g. a relationship or an inheritance edge linking two groups together, as that would lead to a cyclic reference in the code bases. To see this variant of the grouping feature, seeing the groups as separate projects, in action, we'll generate code from the project with the two groups we just created: select from the main menu: Project -> Generate Source-code... (or press F7 ;)). In the dialog popping up, select the target .NET framework you want to use, the template preset, fill in a destination folder and click Start Generator (normal). This will start the code generator process. As expected, the code generator has simply generated two code bases, one for Person and one for Production. The group name is used inside the namespace for the different elements. This allows you to add both code bases to a single solution and use them together in a different project without problems. Below is a snippet from the code file of a generated entity class.

        //...
        using System.Xml.Serialization;
        using AdventureWorks.Person;
        using AdventureWorks.Person.HelperClasses;
        using AdventureWorks.Person.FactoryClasses;
        using AdventureWorks.Person.RelationClasses;
        using SD.LLBLGen.Pro.ORMSupportClasses;

        namespace AdventureWorks.Person.EntityClasses
        {
            //...
            /// <summary>Entity class which represents the entity 'Address'.<br/><br/></summary>
            [Serializable]
            public partial class AddressEntity : CommonEntityBase
            //...
    The advantage of this is that you can have two code bases and work with them separately, yet have a single target database and maintain everything in a single location. If you decide to move to a single code base, you can do so with a change of one setting. It's also useful if you want to keep the groups as separate models (and code bases) yet want to add relationships to elements from another group using a copy of the entity: you can simply reverse engineer the target table to a new entity into a different group, effectively making a copy of the entity. As there's a single target database, changes made to that database are reflected in both models, which makes maintenance easier than when you'd have a separate project for each group, with its own relational model data.

    Conclusion

    LLBLGen Pro offers a flexible way to work with entities in sub-models and control how the sub-models end up in the generated code.

    Read the article

  • A Digital Forensics Student's Linux Workspace

    Tech Source: "Our next entry for the "The $100.00 (USD) Coolest Linux Workspace Contest" was sent all the way from the Netherlands by a digital forensics student named Huseyin. He is also working as an intern at an IT-audit company and described Linux as the best OS to do research on."

    Read the article

  • Skechers Leverages Oracle Applications, Business Intelligence and On Demand Offerings to Drive Long-Term Growth

    - by user801960
    This month Oracle Retail in the USA announced that Skechers - a world leading lifestyle footwear retailer - would be adopting several Oracle Retail products as part of their global growth strategy and to maximise business efficiency.  While based primarily in the USA, Skechers is a respected retailer across the world and has been an Oracle customer since 1997.  The key information about the announcement is below.  To find out more about Skechers visit their website: http://www.skechers.com/  Skechers U.S.A. Inc., an award-winning global leader in the lifestyle footwear industry, has upgraded and expanded its Oracle® Applications investment, implemented Oracle Database and moved to Oracle On Demand, Oracle’s premier cloud service to support rapid growth across its retail and wholesale channels. The new business information systems are part of a larger initiative for the billion-dollar-plus footwear company to fuel growth, reduce total cost of ownership and enable the business to respond faster to market opportunities. With more than 3,000 styles of shoes to design, develop and market, Skechers upgraded to Oracle’s PeopleSoft Enterprise Financial Management and PeopleSoft Supply Chain Management to increase operational efficiencies and improve controls by establishing an integrated, industry-specific platform. An Oracle customer since 1997, Skechers implemented PeopleSoft Enterprise Real Estate Management to meet the rapid growth of its retail stores worldwide. The company is the first customer to go live on the Real Estate Management module and worked closely with Oracle to provide development insight. Skechers also implemented Oracle Fusion Governance, Risk, and Compliance applications. This deployment enabled the company to leverage its existing corporate governance and compliance efforts throughout the global enterprise and more effectively manage the audit processes across multiple business units, processes and systems while reducing audit costs. Next, Skechers leveraged Oracle Financial Analytics, a pre-built Oracle Business Intelligence Application and PeopleSoft Enterprise Project Costing and PeopleSoft Enterprise Contracts to develop a custom Royalty Management dashboard, providing managers with better financial visibility to the company’s licensing contracts. The company switched to Oracle Database and moved database hosting and management to Oracle On Demand to reduce maintenance, implementation and system administration costs. As a result, Skechers is also achieving a better response time and is delivering a higher level of 24x7 support. OSI Consulting, a Platinum partner in Oracle PartnerNetwork (OPN), provided implementation and integration services to Skechers.   To view the full announcement please click here

    Read the article

  • SQL SERVER – A Quick Look at Logging and Ideas around Logging

    - by pinaldave
    This blog post is written in response to the T-SQL Tuesday post on Logging. When someone talks about logging, I personally get lots of ideas about it. I have seen logging used as a very generic term. Let me ask you this question first before I continue writing about logging: what is the first thing that comes to your mind when you hear the word “Logging”? Now ask the same question to the guy standing next to you. I am pretty confident that you will get a different answer from different people. I decided to do this activity and asked 5 SQL Server people the same question. Question: What is the first thing that comes to your mind when you hear the word “Logging”? Strangely enough, I got a different answer every single time. Let me just list the answers I got from my friends and go over them one by one.

    Output Clause

    The very first person replied "output clause". A pretty interesting answer to start with, and I see exactly what he was thinking. SQL Server 2005 introduced the new OUTPUT clause. The OUTPUT clause has access to the inserted and deleted tables (virtual tables), just like triggers. The OUTPUT clause can be used to return values to the client, and it can be used with INSERT, UPDATE, or DELETE to identify the actual rows affected by these statements. Here are some references for the OUTPUT clause:

        OUTPUT Clause Example and Explanation with INSERT, UPDATE, DELETE
        Reasons for Using Output Clause – Quiz
        Tips from the SQL Joes 2 Pros Development Series – Output Clause in Simple Examples

    Error Logs

    I was expecting someone to mention error logs when the subject is logging. The error log is the first place to look when there is any error, either with the application or with the operating system. I keep a policy of checking my server's error log every day. The reason is simple – enough times in my career I have found that when I am looking at error logs I find something I was not expecting. There have been cases when I noticed errors in the error log and fixed them before the end user noticed. Another common practice I always tell my DBA friends about is that when any error happens, they should find the relevant entries in the error logs and document them. It is quite possible that they will see the same error in the error log again and be able to fix it based on the knowledge base they have created. There are many different kinds of error log files in SQL Server as well – 1) SQL Server Error Logs, 2) Windows Event Log, 3) SQL Server Agent Log, 4) SQL Server Profiler Log, 5) SQL Server Setup Log, etc. Here are some references for error logs:

        Recycle Error Log – Create New Log file without Server Restart
        SQL Error Messages

    Change Data Capture

    I was surprised by this answer – more than the answer itself, I was surprised by the person who gave it. I always thought he was an expert in HTML and JavaScript, but I guess one should never make assumptions about others. Indeed, one of the cool logging features is Change Data Capture. Change Data Capture records INSERTs, UPDATEs, and DELETEs applied to SQL Server tables, and makes a record available of what changed, where, and when, in simple relational ‘change tables’ rather than in an esoteric chopped salad of XML. These change tables contain columns that reflect the column structure of the source table you have chosen to track, along with the metadata needed to understand the changes that have been made.
    Here are some references for Change Data Capture:

        Introduction to Change Data Capture (CDC) in SQL Server 2008
        Tuning the Performance of Change Data Capture in SQL Server 2008
        Download Script of Change Data Capture (CDC)
        CDC and TRUNCATE – Cannot truncate table because it is published for replication or enabled for Change Data Capture

    Dynamic Management View (DMV)

    I like this answer. If asked, I would not have come up with DMVs right away, but in the spirit of the original question, I think DMVs do log data. A DMV logs, stores, or records various data and activity on the SQL Server. Dynamic management views return server state information that can be used to monitor the health of a server instance, diagnose problems, and tune performance. One can get a plethora of information from DMVs – high availability status, query execution details, SQL Server resource status, etc. Here are some references for dynamic management views (DMVs):

        SQL SERVER – Denali – DMV Enhancement – sys.dm_exec_query_stats – New Columns
        DMV – sys.dm_os_windows_info – Information about Operating System
        DMV – sys.dm_os_wait_stats Explanation – Wait Type – Day 3 of 28
        DMV sys.dm_exec_describe_first_result_set_for_object – Describes the First Result Metadata for the Module
        Transaction Log Impact Detection Using DMV – dm_tran_database_transactions

    Log Files

    I almost flipped at this final answer from my friend. This should probably be the first answer. Yes indeed, the log file logs SQL Server activity. One could write endlessly about log files. SQL Server uses the log file, with the extension .ldf, to manage transactions and maintain database integrity. The log file ensures that valid data is written out to the database and that the system is in a consistent state. Log files are extremely useful in case of database failures, as with the help of a full backup file the database can be brought to the desired state (point-in-time recovery is also possible). A SQL Server database has three recovery models – 1) Simple, 2) Full and 3) Bulk Logged. Each of these models uses the .ldf file for performing various activities. It is very important to take backups of the log files (along with full backups), as one never knows when a log backup will come into action and save the day!

        How to Stop Growing Log File Too Big
        Reduce the Virtual Log Files (VLFs) from LDF file
        Log File Growing for Model Database – model Database Log File Grew Too Big
        master Database Log File Grew Too Big
        SHRINKFILE and TRUNCATE Log File in SQL Server 2008

    Can I just say I loved this month’s T-SQL Tuesday question? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Optimization, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
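    To tie the first answer above back to the logging theme, a hedged T-SQL sketch with made-up table and column names: the OUTPUT clause can capture exactly what a DML statement changed into an audit table in the same statement.

        UPDATE dbo.Account
        SET Balance = Balance - 100
        OUTPUT inserted.AccountId,
               deleted.Balance,   -- value before the update
               inserted.Balance,  -- value after the update
               GETDATE()
        INTO dbo.AccountAudit (AccountId, OldBalance, NewBalance, ChangedAt)
        WHERE AccountId = 42;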

    Read the article

  • Learn Who Started that Trace with the Default Trace

    - by Jonathan Kehayias
    This is not Extended Events related, but it came from a question on Twitter about how to tell who created a server-side trace, and from what machine, and there is no way to explain this in 140 characters, so here’s a blog post. This information is tracked in the Default Trace and can be found by querying for EventClass 175, which is the trace_event_id of the Audit Server Alter Trace Event in sys.trace_events.

        select trace_event_id, name
        from sys.trace_events
        where name like '%trace%'

    To query...(read more)
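    A hedged sketch of the kind of follow-up query the post leads into: read the default trace file and pull the login and machine that issued the ALTER TRACE.

        DECLARE @path NVARCHAR(260);
        SELECT @path = path FROM sys.traces WHERE is_default = 1;

        SELECT t.StartTime, t.LoginName, t.HostName, t.ApplicationName
        FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
        WHERE t.EventClass = 175;  -- Audit Server Alter Trace Event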

    Read the article

  • Documentation in Oracle Retail Merchandising System (RMS) and Oracle Retail Fiscal Management System (ORFM), Release 13.2.4

    - by Oracle Retail Documentation Team
    The Patch Release 13.2.4 of the Oracle Retail Merchandising System (RMS) and its module, Oracle Retail Fiscal Management (ORFM), is now available from My Oracle Support.

    End User Documentation Enhancements

    The following summarizes the highlights of changes made to the documentation in conjunction with the new Brazil-related functionality:

    Foundation chapter in the Oracle Retail Merchandising System (RMS)/Sales Audit (ReSA) Brazil Localization User Guide. This chapter was updated with a non-base Localization Flexible Attribute Solution (LFAS) section that addresses the addition of several new custom attributes to Items and Suppliers through non-base LFAS for Brazil; it also addresses the extension of the Retail Tax Integration Layer (RTIL) through the Oracle Retail Merchandising System (RMS) and Oracle Retail Fiscal Management System (ORFM).

    ORFM User Guide. The Purchase Order chapter was updated to include schedule-related updates for a Nota Fiscal. The Fiscal Documents chapter was updated to include information on creating a new NF and searching for details using Vendor Product Number.

    Oracle Retail Fiscal Management/RMS Brazil Localization Implementation Guide. The Implementation Checklist chapter was updated with a note on multi-currency functionality. The Batch Processes chapter was updated with information on the NF EDI batch.

    The following summarizes the highlights of changes made to the documentation in conjunction with the new technical certifications (see the RMS 13.2.4 Release Notes for more information):

    Installation Guides for RMS and for ORFM/RMS Brazil. These installation guides were updated extensively to account for the multiple technical certification enhancements in 13.2.4.

    White Paper: How to Upgrade from WebLogic11g 10.3.3 to WebLogic11g 10.3.4 (Doc ID: 1432575.1). See the previous blog entry regarding this new White Paper.

    New Documents on My Oracle Support for Brazil Localization

    Overview and Interfaces: Tax Vendor Integration (Doc ID: 1424048.1). Oracle chooses to integrate with a third-party tax expert to deliver the Brazilian solution. Oracle has built the Retail Tax Integration Layer (RTIL) as the key integration component to support the integration of the Oracle suite of products with external tax vendors. This paper addresses the RTIL integration interfaces with TaxWeb, providing guidance on the typical integration interfaces and operations that must be supported by other tax solutions in the Brazilian market.

    Oracle Retail Fiscal Management/RMS Brazil Localization: Localization Flexible Attribute Solution (LFAS) (Doc ID: 1418509.1). The white paper covers the definition of custom attributes in the Localization Flexible Attribute Solution (LFAS) and enables retailers to perform data conversion changes. Retailers can add several new custom attributes to Items and Suppliers through non-base LFAS for Brazil and extend the Retail Tax Integration Layer (RTIL) through the Oracle Retail Merchandising System (RMS) and Oracle Retail Fiscal Management System (RFM).
    Documents Published in RMS and ORFM Release 13.2.4

        Oracle Retail Merchandising System Release Notes
        Oracle Retail Merchandising System Installation Guide
        Oracle Retail Merchandising System User Guide and Online Help
        Oracle Retail Sales Audit (ReSA) User Guide and Online Help
        Oracle Retail Merchandising System Operations Guide
        Oracle Retail Merchandising System Data Model
        Oracle Retail Merchandising Batch Schedule
        Oracle Retail Merchandising Implementation Guide
        Oracle Retail POS Suite 13.4.1 / Merchandising Operations Management 13.2.4 Implementation Guide
        Oracle Retail Fiscal Management Data Model
        Oracle Retail Fiscal Management/RMS Brazil Localization Installation Guide
        Oracle Retail Fiscal Management/RMS Brazil Localization Implementation Guide
        Oracle Retail Fiscal Management User Guide and Online Help

    Read the article
